Dataset columns: id (string, 10-10 chars) · title (string, 5-246) · abstract (string, 42-3.32k) · authors (string, 5-21.5k) · published_date (timestamp[s]) · link (string, 33-34) · markdown (string, 140-1.08M) · abstract_ja (string, 0-1.35k)
2309.08157
RVAE-EM: Generative speech dereverberation based on recurrent variational auto-encoder and convolutive transfer function
In indoor scenes, reverberation is a crucial factor in degrading the perceived quality and intelligibility of speech. In this work, we propose a generative dereverberation method. Our approach is based on a probabilistic model utilizing a recurrent variational auto-encoder (RVAE) network and the convolutive transfer function (CTF) approximation. Unlike most previous approaches, the output of our RVAE serves as the prior of the clean speech, and our target is the maximum a posteriori (MAP) estimate of the clean speech, obtained iteratively through the expectation-maximization (EM) algorithm. The proposed method integrates the capabilities of network-based speech prior modelling and CTF-based observation modelling. Experiments on single-channel speech dereverberation show that the proposed generative method noticeably outperforms advanced discriminative networks.
Pengyu Wang, Xiaofei Li
2023-09-15T04:48:10
http://arxiv.org/abs/2309.08157v2
RVAE-EM: Generative Speech Dereverberation based on Recurrent Variational Auto-Encoder and Convolutive Transfer Function

###### Abstract

In indoor scenes, reverberation is a crucial factor in degrading the perceived quality and intelligibility of speech. In this work, we propose a generative dereverberation method. Our approach is based on a probabilistic model utilizing a recurrent variational auto-encoder (RVAE) network and the convolutive transfer function (CTF) approximation. Unlike most previous approaches, the output of our RVAE serves as the prior of the clean speech, and our target is the maximum a posteriori (MAP) estimate of the clean speech, obtained iteratively through the expectation-maximization (EM) algorithm. The proposed method integrates the capabilities of network-based speech prior modelling and CTF-based observation modelling. Experiments on single-channel speech dereverberation show that the proposed generative method noticeably outperforms advanced discriminative networks.

Pengyu Wang\({}^{1,2}\), Xiaofei Li\({}^{2,*}\)

\({}^{1}\)Zhejiang University, Hangzhou, China; \({}^{2}\)Westlake University & Westlake Institute for Advanced Study, Hangzhou, China

**Keywords:** Speech enhancement, speech dereverberation, variational auto-encoder, convolutive transfer function, unsupervised learning.

Footnote †: E-mails: wangengyu@westlake.edu.cn, lixiaofei@westlake.edu.cn

## 1 Introduction

Speech enhancement aims to extract clean speech from degraded recordings and is widely adopted in applications such as hearing aids, online communication devices, and intelligent robots. In indoor scenarios, the reverberation caused by reflection, refraction, and diffusion during sound-wave propagation significantly degrades speech quality and intelligibility [1]. Traditional dereverberation approaches usually build models of the speech and the recordings, and complete the task by deconvolution. With the advancement of deep neural networks (DNNs) and the availability of large speech datasets, DNN-based approaches have been proposed that surpass the performance of traditional ones. DNN-based approaches can be roughly categorized into two groups based on their paradigms.

One paradigm is to use discriminative networks to build a mapping from reverberant speech to clean speech. The mapping can be implemented in terms of waveforms [2] or spectrograms [3, 4]. For example, TCN-SA [3] and FullSubNet [5, 6] map degraded spectrograms to clean ones and achieve state-of-the-art (SOTA) dereverberation performance. Discriminative networks are usually trained in a supervised manner with paired clean speech and corresponding reverberant recordings. One limitation of discriminative networks is the difficulty of covering all possible acoustic conditions in the training dataset; these methods may not perform as well in unseen acoustic environments [7].

A different paradigm is to regard speech dereverberation as a generative task and to use generative networks such as variational auto-encoders (VAEs) [8], generative adversarial networks (GANs) [9], and diffusion models (DMs) [10]. For example, in [11], the authors combined a VAE network with non-negative matrix factorization (NMF) to realize unsupervised speech dereverberation, which resulted in better performance than traditional NMF-based approaches. In [12], the authors combined an end-to-end WaveNet with adversarial training to tackle noise, reverberation, and distortion.
In [7], the authors developed a DM-based iterative speech enhancement approach, yielding remarkable performance in both denoising and dereverberation.

In this work, we propose a generative dereverberation method named RVAE-EM. Instead of studying the direct mapping from reverberant speech to clean speech, our objective is the maximum a posteriori (MAP) estimation of the clean speech. To achieve this, we build a probabilistic model which describes the process of generating the observed recordings. In the short-time Fourier transform (STFT) domain, we train a recurrent variational auto-encoder (RVAE) [13] to model the generative process of clean speech, and use the convolutive transfer function (CTF) approximation [14] to model the observation process. The expectation-maximization (EM) algorithm [15] is used to iteratively estimate the clean spectrogram and the acoustic parameters of the observation process. Our method integrates the capabilities of DNN-based speech prior modelling and CTF-based observation modelling. The proposed RVAE network can be trained in both unsupervised (U) and supervised (S) manners, resulting in two versions: RVAE-EM-U and RVAE-EM-S. Experiments on single-channel dereverberation show that RVAE-EM-U achieves SOTA performance among unsupervised methods, while RVAE-EM-S noticeably outperforms the advanced discriminative networks [3, 6].

## 2 Models

Our approach is based on a probabilistic model which describes the process of generating the observed recordings from latent variables. The probabilistic model consists of two parts: a generative model of clean speech and an observation model of reverberant recordings.

### Clean speech generative model based on RVAE network

The RVAE is a dynamic VAE network that consists of an encoder (parameterized by \(\mathbf{\phi}\)) and a decoder (parameterized by \(\mathbf{\theta}\)). Following the usage of RVAE for speech denoising in [16], we train an RVAE to obtain the prior of clean speech in the STFT domain. Let \(S_{f}(n)\in\mathbb{C}\) denote the clean (dry) speech spectrogram in a specific time-frequency (T-F) bin, where \(n\in[1,N]\) is the frame index and \(f\in[1,F]\) is the frequency index. We suppose that the clean spectrogram \(\mathbf{S}\in\mathbb{C}^{F\times N}\) follows a zero-mean complex Gaussian distribution, and that this distribution is generated from standard-Gaussian latent variables \(\mathbf{Z}\in\mathbb{R}^{D\times N}\) with \(D\ll F\). The prior probability of the latent variables is frame-independent:

\[p\left(\mathbf{Z}\right)=\prod_{n=1}^{N}p\left(\mathbf{Z}(n)\right)=\prod_{n=1}^{N}\mathcal{N}\left(\mathbf{Z}(n);\mathbf{0},\mathbf{I}_{D}\right), \tag{1}\]

where \(\mathbf{Z}(n)\in\mathbb{R}^{D\times 1}\) is the \(n\)th column of \(\mathbf{Z}\), denoting the latent variables at the \(n\)th frame.
The conditional generative process of the clean spectrogram is modelled by the RVAE decoder as

\[p_{\boldsymbol{\theta}}\left(\mathbf{S}|\mathbf{Z}\right)=\prod_{n=1}^{N}p_{\boldsymbol{\theta}}\left(\mathbf{S}(n)|\mathbf{Z}\right)=\prod_{n=1}^{N}\mathcal{CN}\left(\mathbf{S}(n);\mathbf{0},\mathbf{\Sigma}_{\mathrm{spch}}(n)\right), \tag{2}\]

where \(\mathbf{S}(n)\in\mathbb{C}^{F\times 1}\) is the \(n\)th column of \(\mathbf{S}\), denoting the clean spectrogram at the \(n\)th frame, and \(\mathbf{\Sigma}_{\mathrm{spch}}(n)=\mathrm{diag}([\sigma_{\mathrm{spch},1}^{2}(n),\sigma_{\mathrm{spch},2}^{2}(n),\cdots,\sigma_{\mathrm{spch},F}^{2}(n)])\) is a diagonal matrix whose diagonal entries are the output of the RVAE decoder at frame \(n\). At each frame, the computation of \(\mathbf{\Sigma}_{\mathrm{spch}}(n)\) takes into account the latent variables of all frames. The generative model of the clean spectrogram is

\[p_{\boldsymbol{\theta}}\left(\mathbf{S},\mathbf{Z}\right)=p_{\boldsymbol{\theta}}\left(\mathbf{S}|\mathbf{Z}\right)p\left(\mathbf{Z}\right). \tag{3}\]

To train the RVAE decoder, the encoder must be introduced. The encoder provides a parameterized Gaussian posterior

\[q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{S}\right)=\prod_{n=1}^{N}q_{\boldsymbol{\phi}}\left(\mathbf{Z}(n)|\mathbf{Z}(1:n-1),\mathbf{S}\right), \tag{4}\]

which is an approximation of the intractable posterior \(p_{\boldsymbol{\theta}}\left(\mathbf{Z}|\mathbf{S}\right)\). At each frame, the posterior of \(\mathbf{Z}(n)\) is obtained from both the latent variables of all previous frames and the clean spectrogram of all frames. The word 'recurrent' in RVAE describes this time dependency between the clean spectrogram \(\mathbf{S}\) and the latent variables \(\mathbf{Z}\). The RVAE is trained by jointly optimizing the encoder and decoder; the details of our RVAE are given in Section 3.3.

### Observation model based on CTF approximation

The room impulse response (RIR) is the propagation filter in an enclosure. The CTF approximation is commonly utilized to model the reverberation effect in the STFT domain, since the RIR is long relative to the STFT window. In this work, we follow the CTF formulation described in [17]. Considering the case of a single speaker and a single microphone, and assuming the CTF length is \(P+1\) with \(P\ll N\), the observed recording can be modelled as

\[X_{f}(n)\approx\sum_{p=0}^{P}S_{f}(n-p)H_{f}(p)+W_{f}(n), \tag{5}\]

where \(X_{f}(n)\in\mathbb{C}\) and \(W_{f}(n)\in\mathbb{C}\) are the STFT coefficients of the reverberant speech and the additive noise, respectively, and \(H_{f}(p)\in\mathbb{C}\) is the CTF filter coefficient. We can rewrite this in matrix form as

\[\mathbf{X}_{f}\approx\widetilde{\mathbf{H}}_{f}\mathbf{S}_{f}+\mathbf{W}_{f}, \tag{6}\]

where \(\mathbf{X}_{f}=\left[X_{f}\left(1\right),X_{f}\left(2\right),\cdots,X_{f}\left(N\right)\right]^{T}\in\mathbb{C}^{N\times 1}\), \(\mathbf{S}_{f}=\left[S_{f}\left(1\right),S_{f}\left(2\right),\cdots,S_{f}\left(N\right)\right]^{T}\in\mathbb{C}^{N\times 1}\), \(\mathbf{W}_{f}=\left[W_{f}\left(1\right),W_{f}\left(2\right),\cdots,W_{f}\left(N\right)\right]^{T}\in\mathbb{C}^{N\times 1}\), and

\[\widetilde{\mathbf{H}}_{f}=\left[\begin{array}{ccccc}H_{f}(0)&0&\cdots&\cdots&0\\ \vdots&\ddots&0&\ddots&\vdots\\ H_{f}(P)&\ddots&H_{f}(0)&\ddots&\vdots\\ 0&\ddots&\vdots&\ddots&\vdots\\ \vdots&\ddots&H_{f}(P)&\ddots&0\\ 0&\cdots&0&\cdots&H_{f}(0)\end{array}\right]\in\mathbb{C}^{N\times N} \tag{7}\]

is a banded lower-triangular Toeplitz matrix built from the CTF filter.
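To make the CTF observation model concrete, the following minimal NumPy sketch instantiates Eqs. (5)-(7) for a single frequency band; the dimensions, the random decaying filter, and the noise level are illustrative stand-ins rather than the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 320, 30                        # frames and CTF order, illustrative

S_f = rng.standard_normal(N) + 1j * rng.standard_normal(N)           # clean frames
H_f = 0.5 ** np.arange(P + 1) * (rng.standard_normal(P + 1)
                                 + 1j * rng.standard_normal(P + 1))  # decaying CTF
W_f = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # noise

# Banded lower-triangular Toeplitz convolution matrix of Eq. (7).
H_tilde = np.zeros((N, N), dtype=complex)
for p in range(P + 1):
    H_tilde += H_f[p] * np.eye(N, k=-p)

X_f = H_tilde @ S_f + W_f                  # matrix form, Eq. (6)
X_alt = np.convolve(S_f, H_f)[:N] + W_f    # per-bin convolution, Eq. (5)
assert np.allclose(X_f, X_alt)             # both formulations agree
```

The matrix form is what makes the Gaussian posterior of the E-step below tractable in closed form.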
Supposing that the noise is isotropic stationary complex Gaussian, we have

\[p_{\boldsymbol{\psi}}\left(\mathbf{X}_{f}|\mathbf{S}_{f}\right)=\mathcal{CN}\left(\mathbf{X}_{f};\widetilde{\mathbf{H}}_{f}\mathbf{S}_{f},\mathbf{\Sigma}_{\mathrm{noi},f}\right), \tag{8}\]

where \(\mathbf{\Sigma}_{\mathrm{noi},f}=\sigma_{\mathrm{noi},f}^{2}\mathbf{I}_{N}\), with \(\sigma_{\mathrm{noi},f}^{2}\) the noise power in the \(f\)th band and \(\mathbf{I}_{N}\) the \(N\times N\) identity matrix. \(\boldsymbol{\psi}\) is the set of acoustic parameters, consisting of the CTF filter coefficients and the noise powers. Defining the full-band observed recording as \(\mathbf{X}\in\mathbb{C}^{F\times N}\), we have the observation model

\[p_{\boldsymbol{\psi}}\left(\mathbf{X}|\mathbf{S}\right)=\prod_{f=1}^{F}p_{\boldsymbol{\psi}}\left(\mathbf{X}_{f}|\mathbf{S}_{f}\right). \tag{9}\]

## 3 Proposed method

In our method, we use the EM algorithm to iteratively estimate the clean spectrogram and the acoustic parameters; each iteration has two stages, the E-step and the M-step.

### E step: Estimation of clean spectrogram

The E-step provides a MAP estimate of the clean spectrogram given the corresponding reverberant input. The objective is expressed as

\[\widehat{\mathbf{S}}=\mathrm{E}_{p\left(\mathbf{S}|\mathbf{X}\right)}\left[\mathbf{S}\right]=\mathrm{E}_{p\left(\mathbf{Z}|\mathbf{X}\right)}\left[\mathrm{E}_{p\left(\mathbf{S}|\mathbf{Z},\mathbf{X}\right)}\left[\mathbf{S}\right]\right]. \tag{10}\]

For the inner expectation of (10), according to Bayes' rule and our probabilistic assumptions, we have

\[p\left(\mathbf{S}|\mathbf{Z},\mathbf{X}\right)=\frac{p\left(\mathbf{X}|\mathbf{S},\mathbf{Z}\right)p\left(\mathbf{S}|\mathbf{Z}\right)}{p\left(\mathbf{X}|\mathbf{Z}\right)}\propto p_{\boldsymbol{\psi}}\left(\mathbf{X}|\mathbf{S}\right)p_{\boldsymbol{\theta}}\left(\mathbf{S}|\mathbf{Z}\right). \tag{11}\]

It follows that \(p\left(\mathbf{S}|\mathbf{Z},\mathbf{X}\right)\) is also Gaussian:

\[p\left(\mathbf{S}|\mathbf{Z},\mathbf{X}\right)=\prod_{f=1}^{F}\mathcal{CN}\left(\mathbf{S}_{f};\boldsymbol{\mu}_{f},\mathbf{\Sigma}_{f}\right), \tag{12}\]

where

\[\mathbf{\Sigma}_{f}=\left(\widetilde{\mathbf{H}}_{f}^{H}\mathbf{\Sigma}_{\mathrm{noi},f}^{-1}\widetilde{\mathbf{H}}_{f}+\mathbf{\Sigma}_{\mathrm{spch},f}^{-1}\right)^{-1}, \tag{13}\]

\[\boldsymbol{\mu}_{f}=\mathbf{\Sigma}_{f}\widetilde{\mathbf{H}}_{f}^{H}\mathbf{\Sigma}_{\mathrm{noi},f}^{-1}\mathbf{X}_{f}, \tag{14}\]

and \(\mathbf{\Sigma}_{\mathrm{spch},f}=\mathrm{diag}([\sigma_{\mathrm{spch},f}^{2}(1),\sigma_{\mathrm{spch},f}^{2}(2),\cdots,\sigma_{\mathrm{spch},f}^{2}(N)])\) is a diagonal matrix whose diagonal entries are the output of the RVAE decoder in the \(f\)th band. However, there are still two issues: both the posterior \(p\left(\mathbf{Z}|\mathbf{X}\right)\) and the outer expectation of (10) are intractable. We approximate \(p\left(\mathbf{Z}|\mathbf{X}\right)\) with \(q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{X}\right)\) and sample the latent variables \(\mathbf{Z}_{\mathrm{sample}}\). Using the sampled latent variables to approximate the expectation, the clean spectrogram is estimated as

\[\widehat{\mathbf{S}}\approx\mathrm{E}_{p\left(\mathbf{S}|\mathbf{Z}_{\mathrm{sample}},\mathbf{X}\right)}\left[\mathbf{S}\right]. \tag{15}\]
In practice, this means that the RVAE inference is performed once to obtain the prior variance of clean speech, i.e. \(\mathbf{\Sigma}_{\mathrm{spch},f}\), which is then used for all E-steps (and all M-steps), and the clean speech estimate is set to \(\widehat{\mathbf{S}}_{f}\approx\boldsymbol{\mu}_{f}\) for each \(f\).

### M step: Estimation of acoustic parameters

After the E-step, the acoustic parameters of the observation model are updated by maximizing the expected log-likelihood. For the CTF parameters \(\mathbf{H}_{f}=[H_{f}(0),H_{f}(1),\cdots,H_{f}(P)]\in\mathbb{C}^{1\times(P+1)}\) in each band, we have

\[\begin{split}\widehat{\mathbf{H}}_{f}&=\operatorname*{arg\,max}_{\mathbf{H}_{f}}\operatorname{E}_{p(\mathbf{S},\mathbf{Z}|\mathbf{X})}\left[\ln p\left(\mathbf{X},\mathbf{S},\mathbf{Z}\right)\right]\\ &=\left(\sum_{n=1}^{N}X_{f}\left(n\right)\boldsymbol{\mu}_{f}^{H}\left(n:n-P\right)\right)\left(\sum_{n=1}^{N}\widetilde{\mathbf{\Sigma}}_{f}\left(n\right)\right)^{-1},\end{split} \tag{16}\]

where

\[\begin{split}\widetilde{\mathbf{\Sigma}}_{f}\left(n\right)&=\boldsymbol{\mu}_{f}\left(n:n-P\right)\boldsymbol{\mu}_{f}^{H}\left(n:n-P\right)\\ &\quad+\mathbf{\Sigma}_{f}\left(n:n-P,n:n-P\right).\end{split} \tag{17}\]

Similarly, the noise power in each band is updated as

\[\begin{split}\widehat{\sigma}_{\mathrm{noi},f}^{2}&=\operatorname*{arg\,max}_{\sigma_{\mathrm{noi},f}^{2}}\operatorname{E}_{p(\mathbf{S},\mathbf{Z}|\mathbf{X})}\left[\ln p\left(\mathbf{X},\mathbf{S},\mathbf{Z}\right)\right]\\ &=\frac{\|\mathbf{X}_{f}-\widetilde{\mathbf{H}}_{f}\boldsymbol{\mu}_{f}\|_{F}^{2}+\operatorname{tr}\left(\widetilde{\mathbf{H}}_{f}\mathbf{\Sigma}_{f}\widetilde{\mathbf{H}}_{f}^{H}\right)}{N},\end{split} \tag{18}\]

where \(\|\cdot\|_{F}\) is the Frobenius norm and \(\operatorname{tr}\left(\cdot\right)\) is the trace of a matrix. After the M-step, we can return to the E-step to further update the clean spectrogram.

An overview of our method is shown in Fig. 1(a). The RVAE network provides the prior of clean speech, and the EM algorithm achieves the MAP estimation of the clean spectrogram according to both the prior knowledge and the observed recordings.
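As a concrete illustration, the following NumPy sketch runs one EM iteration in a single frequency band, transcribing Eqs. (13)-(18) directly; the dense \(O(N^{3})\) linear algebra and the zero-padding of frames before \(n=1\) are our simplifications for readability, not the paper's optimized implementation.

```python
import numpy as np

def em_step_band(X_f, var_prior, H_f, var_noise):
    """One EM iteration in a single frequency band (Eqs. 13-18).

    X_f: (N,) observed STFT frames; var_prior: (N,) RVAE prior variances
    sigma^2_spch,f(n); H_f: (P+1,) CTF filter; var_noise: scalar noise power.
    """
    N, P = X_f.shape[0], H_f.shape[0] - 1
    H_tilde = sum(H_f[p] * np.eye(N, k=-p) for p in range(P + 1))

    # E-step, Eqs. (13)-(14): Gaussian posterior of the clean frames.
    Sigma = np.linalg.inv(H_tilde.conj().T @ H_tilde / var_noise
                          + np.diag(1.0 / var_prior))
    mu = Sigma @ (H_tilde.conj().T @ X_f) / var_noise

    # Zero-pad so frames before n=1 contribute nothing (our edge handling).
    mu_pad = np.concatenate([np.zeros(P, complex), mu])
    Sig_pad = np.zeros((N + P, N + P), complex)
    Sig_pad[P:, P:] = Sigma

    # M-step for the CTF filter, Eqs. (16)-(17).
    num = np.zeros(P + 1, complex)
    den = np.zeros((P + 1, P + 1), complex)
    for n in range(N):
        seg = mu_pad[n:n + P + 1][::-1]                    # [mu(n), ..., mu(n-P)]
        cov = Sig_pad[n:n + P + 1, n:n + P + 1][::-1, ::-1]
        num += X_f[n] * seg.conj()
        den += np.outer(seg, seg.conj()) + cov
    H_new = num @ np.linalg.inv(den)

    # M-step for the noise power, Eq. (18), with the updated filter.
    H_tilde = sum(H_new[p] * np.eye(N, k=-p) for p in range(P + 1))
    resid = X_f - H_tilde @ mu
    var_new = (np.linalg.norm(resid) ** 2
               + np.trace(H_tilde @ Sigma @ H_tilde.conj().T).real) / N
    return mu, H_new, var_new
```

Here `mu` is the per-band clean speech estimate \(\widehat{\mathbf{S}}_{f}\approx\boldsymbol{\mu}_{f}\); the function is applied independently for each \(f\) and iterated.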
### Details of RVAE

We propose a new network architecture for the RVAE, which is more powerful for speech representation learning than previous RVAE networks [16]. We set the feature dimension of the latent variables to \(D=32\). The specific architecture of the proposed RVAE is shown in Fig. 1(b). _Conv_s are convolutional layers. _ResBlock_s consist of eight modules in series; each module consists of the LeakyReLU activation function, two \(3\times 3\) _Conv_s, and skip connections. The numbers of in-channels and out-channels of _ResBlock_ are both 64. _GRU_ and _biGRU_ are single-layer forward and bidirectional gated recurrent units with 256 and 512 hidden units, respectively. _MLP_s consist of fully-connected layers with 256, 256, and 32 units. The Tanh activation is used after all layers of the _MLP_s except the last one. Dropout of \(20\%\) is applied in the _ResBlock_s and the first two layers of the _MLP_s. To transform the feature scales, the operation \(\log(\cdot)\) is applied before the encoder, and the operation \([\exp(\cdot)]^{2}\) is applied after the decoder to obtain the prior variance of the clean spectrogram.

In our network, the _Conv_s in the encoder are used for spectral feature extraction. The extracted features are then sent to the _biGRU_ for temporal feature extraction; the hidden state of the _biGRU_ contains information about the clean spectrogram of all frames. At each frame, the input and hidden state of the _GRU_ keep the information of all previous latent variables. Therefore, the outputs of the _biGRU_ and the _GRU_ contain the information of \(\mathbf{S}\) and \(\mathbf{Z}(1:n-1)\), respectively. These outputs are sent to two _MLP_s to obtain the mean and variance of \(q_{\boldsymbol{\phi}}\left(\mathbf{Z}(n)|\mathbf{Z}(1:n-1),\mathbf{S}\right)\). Between the encoder and decoder, the reparameterization trick [8] is utilized to sample the latent variables from the given distribution. Similarly, the output of the _biGRU_ in the decoder contains the information of \(\mathbf{Z}\) and is further sent to _Conv_s to obtain the distribution \(p_{\boldsymbol{\theta}}\left(\mathbf{S}(n)|\mathbf{Z}\right)\). Since this distribution is zero-mean complex Gaussian, the final output at each frame is its variance \(\mathbf{\Sigma}_{\mathrm{spch}}(n)\).

The RVAE is trained by maximizing the evidence lower bound. For simplicity, given an \(N\)-frame input, we define \(\mathbf{\Sigma}_{\mathrm{spch}}\in\mathbb{R}^{F\times N}\) as the output of the RVAE, composed of \(\sigma_{\mathrm{spch},f}^{2}(n)\) for each \(f\in[1,F]\) and \(n\in[1,N]\). The training loss is

\[\begin{split}\mathcal{L}&=-\mathrm{E}_{q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{S}\right)}\left[\ln p_{\boldsymbol{\theta}}\left(\mathbf{S}|\mathbf{Z}\right)\right]+\mathrm{D}_{KL}\left(q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{S}\right)||p\left(\mathbf{Z}\right)\right)\\ &=\mathrm{D}_{IS}\left(|\mathbf{S}|^{2},\mathbf{\Sigma}_{\mathrm{spch}}\right)+\mathrm{D}_{KL}\left(q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{S}\right)||p\left(\mathbf{Z}\right)\right),\end{split} \tag{19}\]

where \(\mathrm{D}_{IS}\left(\cdot\right)\) denotes the Itakura-Saito (IS) divergence [18] and \(\mathrm{D}_{KL}\left(\cdot\right)\) denotes the Kullback-Leibler (KL) divergence [19]. Because only clean speech is utilized during training, the whole method is unsupervised; we denote this version as RVAE-EM-U. After the unsupervised training, we can also fine-tune the RVAE with reverberant-clean paired training data to obtain a more accurate posterior distribution of the latent variables. The fine-tuning loss is

\[\begin{split}\mathcal{L}_{\mathrm{ft}}&=-\mathrm{E}_{q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{X}\right)}\left[\ln p_{\boldsymbol{\theta}}\left(\mathbf{S}|\mathbf{Z}\right)\right]+\mathrm{D}_{KL}\left(q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{X}\right)||p\left(\mathbf{Z}\right)\right)\\ &=\mathrm{D}_{IS}\left(|\mathbf{S}|^{2},\mathbf{\Sigma}_{\mathrm{spch}}\right)+\mathrm{D}_{KL}\left(q_{\boldsymbol{\phi}}\left(\mathbf{Z}|\mathbf{X}\right)||p\left(\mathbf{Z}\right)\right).\end{split} \tag{20}\]

This supervised version is denoted as RVAE-EM-S.
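For reference, here is a compact PyTorch sketch of the training loss in Eq. (19); the fine-tuning loss of Eq. (20) only changes the encoder input from \(\mathbf{S}\) to \(\mathbf{X}\). The tensor shapes and the \(\epsilon\) guard are our assumptions, not details given in the paper.

```python
import torch

def rvae_loss(S_abs2, var_dec, mu_q, logvar_q, eps=1e-8):
    """Eq. (19): Itakura-Saito reconstruction term plus KL(q(Z|S) || N(0,I)).

    S_abs2, var_dec: (B, F, N) power spectrogram and decoder variances;
    mu_q, logvar_q: (B, D, N) posterior Gaussian parameters.
    """
    ratio = S_abs2 / (var_dec + eps)
    # D_IS(a, b) = sum(a/b - log(a/b) - 1), summed over T-F bins.
    is_div = (ratio - torch.log(ratio + eps) - 1.0).sum(dim=(1, 2))
    # Closed-form KL between N(mu, diag(sigma^2)) and the standard normal.
    kl = 0.5 * (logvar_q.exp() + mu_q.pow(2) - logvar_q - 1.0).sum(dim=(1, 2))
    return (is_div + kl).mean()
```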
## 4 Experiments

### Settings

**Datasets** Parts of the speaker-independent clean speech sampled at 16 kHz from the Wall Street Journal (WSJ0) corpus [20] are used for training, validation, and testing. The training and test datasets consist of 24.92 and 1.48 hours of recordings, respectively. We simulate reverberant-dry RIR pairs with the gpuRIR toolbox [21] for fine-tuning and testing. The length, width, and height of the simulated rooms are randomly selected in the ranges \([5\mathrm{m},15\mathrm{m}]\), \([5\mathrm{m},15\mathrm{m}]\), and \([2\mathrm{m},6\mathrm{m}]\), respectively. The minimum distance from the speaker/microphone to the wall is \(1\mathrm{m}\). Similar to the datasets created in [7], for reverberant RIRs the reverberation times (RT60s) are uniformly chosen in the range \([0.4\mathrm{s},1\mathrm{s}]\); correspondingly, dry RIRs are generated using the same geometric parameters but an absorption coefficient of 0.99. The input and target speech signals are generated by convolving the WSJ0 utterances with the reverberant and dry RIRs, respectively. The training and test datasets consist of 5000 and 300 RIRs, respectively.

Figure 1: Overview of our method and the proposed RVAE architecture.

**Other settings** The input and output of our method are complex-valued one-sided spectrograms computed with a Hanning window of 1024 samples and a hop length of 256 samples. Additionally, we set the direct-current component of the spectrograms to zero and discard it, so the input and output have \(F=512\) frequency bands. The utterances are divided into segments of 5.104 seconds (320 frames) for training and testing. We set the CTF length in the observation model to \(P=30\). The initialization of the EM algorithm is always important; we set the number of iteration steps to 100 and initialize \(\sigma_{\mathrm{noi},f}^{2}=10^{3}\times|\mathbf{X}_{f}|^{2}/N\), \(H_{f}\left(0\right)=1\), and \(H_{f}\left(p\neq 0\right)=0\) for each \(f\in[1,F]\). During training, the batch size is 64. We use the AdamW optimizer [22] with a maximum learning rate of 0.0001. In particular, we warm up the KL loss periodically to mitigate KL vanishing in the RVAE [23]. Codes and speech examples are available at [https://github.com/Audio-WestlakeU/RVAE-EM](https://github.com/Audio-WestlakeU/RVAE-EM).
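A small sketch of this EM initialisation (the helper name is ours, and we read \(|\mathbf{X}_{f}|^{2}\) as the squared norm of the band, i.e. \(10^{3}\) times the mean observed power per frame):

```python
import numpy as np

def init_em(X, P=30):
    """EM initialisation per the settings above: identity CTF filter and a
    deliberately large per-band noise power. X is the (F, N) spectrogram."""
    F, N = X.shape
    H = np.zeros((F, P + 1), dtype=complex)
    H[:, 0] = 1.0                                          # H_f(0)=1, H_f(p!=0)=0
    var_noise = 1e3 * np.sum(np.abs(X) ** 2, axis=1) / N   # sigma^2_noi,f
    return H, var_noise
```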
**Comparison methods** The comparison methods include VAE-NMF [11], TCN-SA [3], FullSubNet [5], and SGMSE+ [7]. These approaches cover generative (G) and discriminative (D) models, as well as supervised (S) and unsupervised (U) training manners. To demonstrate the role of the EM algorithm, we also show the results of our fine-tuned RVAE without any EM step. Notice that in TCN-SA we use the reverberant phase without enhancement. For FullSubNet, we adopt the modified version described in [6], which slightly changes the network architecture and training target.

**Evaluation metrics** The evaluation metrics include SISDR [24], WBPESQ [25], NBPESQ [25], STOI [26], ESTOI [27], SRMR [28], and MOS-SIG, MOS-BAK, MOS-OVRL from DNSMOS [29]. Higher metrics denote better speech quality and intelligibility.

### Results and discussion

Table 1 shows the results of single-channel dereverberation on the simulated datasets. In the unsupervised manner, our RVAE-EM-U surpasses the previous SOTA approach, VAE-NMF, mainly because the proposed RVAE architecture is much more powerful than the MLP-based VAE used in VAE-NMF; as a result, our network can provide a more credible prior of clean speech for the subsequent EM algorithm. Because we train the RVAE with dry input but test it with reverberant input, the prior provided by the RVAE may not be ideal. However, we can still estimate the clean speech to an extent, since the MAP estimation depends not only on the prior knowledge but also on the observed data.

All supervised approaches exhibit performance superior to the unsupervised ones, owing to the paired training data. Our RVAE-EM-S outperforms the two SOTA discriminative networks, TCN-SA and FullSubNet, which shows the capability and potential of the proposed (first-of-its-kind) generative-observation model. The diffusion-model-based SGMSE+ achieves better performance than RVAE-EM-S. However, the proposed RVAE network has far fewer parameters. In addition, although RVAE-EM-S and SGMSE+ are both iterative during inference, SGMSE+ performs one network inference per iteration, while the RVAE network in our approach does not participate in the EM iterations, which further saves time and memory. The performance of RVAE (w/o EM) is not as good as that of RVAE-EM-S, as it solely utilizes the RVAE-based generative model and disregards the CTF-based observation model. In fact, the EM algorithm can reconstruct the phase and revise the magnitude of the spectrogram estimated by the RVAE.

In Fig. 2, we plot the average WBPESQ, ESTOI, and MOS-OVRL of RVAE-EM-S as the number of EM steps increases. The performance measures first decrease at the second step, possibly because the initialized CTF coefficients are overly simplified, so a degraded clean speech is estimated in the first E-step; this is corrected after the CTF coefficients are updated in the first M-step. As the number of iteration steps increases, the performance of RVAE-EM-S rapidly improves over the first 10 steps and then slowly converges. This demonstrates that, by maximizing the likelihood of the observations, the EM iterations consistently improve the estimates of the clean speech and the acoustic parameters.

## 5 Conclusions

In this work, we propose a speech dereverberation method named RVAE-EM. Our work is built upon a probabilistic model that comprises a generative model of clean speech, based on the RVAE network, and an observation model, based on the CTF approximation. The output of our method is a MAP estimate of the clean spectrogram, which integrates the information from both the clean speech generative model and the observation model. Our method has two versions, RVAE-EM-U and RVAE-EM-S, which are trained in the unsupervised and supervised manner, respectively. Experiments on single-channel dereverberation demonstrate that RVAE-EM-U achieves SOTA performance among unsupervised methods, while RVAE-EM-S significantly surpasses the majority of existing supervised approaches.

\begin{table}
\begin{tabular}{c c c c c c c c c c c c}
\hline \hline
Method & Type & Params & SISDR\(\uparrow\) & WBPESQ\(\uparrow\) & NBPESQ\(\uparrow\) & STOI\(\uparrow\) & ESTOI\(\uparrow\) & SRMR\(\uparrow\) & MOS-SIG\(\uparrow\) & MOS-BAK\(\uparrow\) & MOS-OVRL\(\uparrow\) \\
\hline
Unprocessed & - & - & -7.33 \(\pm\) 5.44 & 1.25 \(\pm\) 0.16 & 1.59 \(\pm\) 0.23 & 0.69 \(\pm\) 0.10 & 0.45 \(\pm\) 0.14 & 3.38 \(\pm\) 1.41 & 2.20 \(\pm\) 0.69 & 2.11 \(\pm\) 0.66 & 1.69 \(\pm\) 0.46 \\
\hline
VAE-NMF [11] & G\&U & 7.5M & -6.45 \(\pm\) 5.73 & 1.36 \(\pm\) 0.25 & 1.74 \(\pm\) 0.31 & 0.74 \(\pm\) 0.09 & 0.52 \(\pm\) 0.13 & 3.44 \(\pm\) 1.68 & 2.66 \(\pm\) 0.54 & 2.65 \(\pm\) 0.63 & 2.05 \(\pm\) 0.48 \\
**RVAE-EM-U** (prop.) & G\&U & 7.0M & -4.96 \(\pm\) 6.16 & 1.62 \(\pm\) 0.38 & 2.07 \(\pm\) 0.42 & 0.81 \(\pm\) 0.08 & 0.64 \(\pm\) 0.13 & 6.37 \(\pm\) 2.75 & 3.06 \(\pm\) 0.38 & 2.96 \(\pm\) 0.63 & 2.39 \(\pm\) 0.47 \\
\hline
TCN-SA [3] & D\&S & 4.7M & -4.10 \(\pm\) 5.20 & 2.27 \(\pm\) 0.36 & 2.68 \(\pm\) 0.34 & 0.92 \(\pm\) 0.03 & 0.81 \(\pm\) 0.05 & 7.50 \(\pm\) 2.73 & 3.14 \(\pm\) 0.23 & 3.78 \(\pm\) 0.22 & 2.80 \(\pm\) 0.26 \\
FullSubNet [5] & D\&S & 14.5M & -3.99 \(\pm\) 5.22 & 2.39 \(\pm\) 0.38 & 2.78 \(\pm\) 0.37 & 0.92 \(\pm\) 0.03 & 0.81 \(\pm\) 0.05 & 6.69 \(\pm\) 2.14 & 3.00 \(\pm\) 0.28 & 3.67 \(\pm\) 0.28 & 2.64 \(\pm\) 0.31 \\
SGMSE+ [7] & G\&S & 65.6M & 7.40 \(\pm\) 4.50 & 2.61 \(\pm\) 0.46 & 3.94 \(\pm\) 0.40 & 0.96 \(\pm\) 0.02 & 0.88 \(\pm\) 0.05 & 7.99 \(\pm\) 4.32 & 3.56 \(\pm\) 0.10 & 4.02 \(\pm\) 0.14 & 3.26 \(\pm\) 0.15 \\
RVAE (w/o EM) (prop.) & G\&S & 7.0M & -4.21 \(\pm\) 4.95 & 1.97 \(\pm\) 0.28 & 2.39 \(\pm\) 0.27 & 0.89 \(\pm\) 0.03 & 0.75 \(\pm\) 0.06 & 6.43 \(\pm\) 2.35 & 3.09 \(\pm\) 0.26 & 3.49 \(\pm\) 0.39 & 2.64 \(\pm\) 0.33 \\
**RVAE-EM-S** (prop.) & G\&S & 7.0M & -1.55 \(\pm\) 5.40 & 2.49 \(\pm\) 0.34 & 2.95 \(\pm\) 0.32 & 0.93 \(\pm\) 0.03 & 0.84 \(\pm\) 0.05 & 8.92 \(\pm\) 3.78 & 3.29 \(\pm\) 0.21 & 3.75 \(\pm\) 0.35 & 2.92 \(\pm\) 0.30 \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Results of dereverberation on simulated datasets.

Figure 2: Average metrics of RVAE-EM-S versus EM steps.
In indoor scenes, reverberation is a crucial factor in degrading the perceived quality and intelligibility of speech. In this work, we propose a generative dereverberation method. Our approach is based on a probabilistic model using a recurrent variational auto-encoder (RVAE) network and the convolutive transfer function (CTF) approximation. Unlike many other methods, the output of the RVAE is used as the prior of the clean speech, and our target is the maximum a posteriori (MAP) estimate of the clean speech, which is achieved iteratively via the expectation-maximization (EM) algorithm. The proposed method integrates network-based speech prior modelling and CTF-based observation modelling. Experiments on single-channel speech dereverberation show that the proposed generative method noticeably outperforms advanced discriminative networks.
2309.17077
ComPACT: ACT+Planck galaxy cluster catalogue
Galaxy clusters are the most massive gravitationally bound systems, consisting of dark matter, hot baryonic gas, and stars. They play an important role in observational cosmology and galaxy evolution studies. We develop a deep learning model for segmentation of the Sunyaev-Zeldovich (SZ) signal on ACT+Planck intensity maps and construct a pipeline for microwave cluster detection in the ACT footprint. The proposed model allows us to identify previously unknown galaxy clusters, i.e. it is capable of detecting SZ sources below the detection threshold adopted in the published galaxy cluster catalogues (such as ACT DR5 and PSZ2). In this paper, we use the derived SZ signal map to considerably improve the cluster purity of the extended catalogue of Sunyaev-Zeldovich objects from Planck data (SZcat) in the ACT footprint. From SZcat, we create a new microwave galaxy cluster catalogue (ComPACT), which includes 2,962 SZ objects with a cluster purity conservatively estimated as $\gtrsim74-84$\%. We divide the objects in the catalogue into 3 categories based on their cluster reliability. Within the ComPACT catalogue, there are $\gtrsim{}977$ new clusters with respect to the ACT DR5 and PSZ2 catalogues.
S. Voskresenskaia, A. Meshcheryakov, N. Lyskova
2023-09-29T09:14:25
http://arxiv.org/abs/2309.17077v3
# ComPACT: combined ACT+Planck galaxy cluster catalogue

###### Abstract

Galaxy clusters are the most massive gravitationally bound systems, consisting of dark matter, hot baryonic gas, and stars. They play an important role in observational cosmology and galaxy evolution studies. We have developed a deep learning model for segmentation of the SZ signal on ACT+Planck intensity maps and present here a new galaxy cluster catalogue in the ACT footprint. In order to increase the purity of the cluster catalogue, we limit ourselves to publishing here only the part of the full sample with the most probable galaxy clusters lying in the directions of the candidates of the extended Planck cluster catalogue (SZcat). The ComPACT catalogue contains 2,934 galaxy clusters (with \(Purity\gtrsim 88\%\)); \(\gtrsim 1436\) clusters are new with respect to the existing ACT DR5 and PSZ2 cluster samples.

Galaxy clusters - catalogues - Deep learning

Footnote †: E-mail: mesch@cosmos.ru

## 1 Introduction

Galaxy clusters are the most massive gravitationally bound systems in the Universe (see Allen et al., 2011; Kravtsov & Borgani, 2012, for reviews). These objects are located at the nodes of the cosmic web and serve as reliable tracers of the underlying dark matter distribution. The cluster mass function, which describes the number density of galaxy clusters as a function of their mass, and its evolution with redshift provide a critical test of structure formation models and allow one to constrain cosmological parameters (e.g. Vikhlinin et al., 2009; Pratt et al., 2019; Burenin & Vikhlinin, 2012; Mantz et al., 2015, among others).

Galaxy clusters are composed of dark matter, which makes up around 85% of their total mass, the intracluster medium (ICM; \(\sim 12\%\) of the total mass), which is an ionised hot gas, and stars (\(\sim 3\%\)). Consequently, they can be detected in several wavelength bands. In the optical range, galaxy clusters appear as concentrations of elliptical galaxies. Clusters are also bright in X-rays due to the bremsstrahlung emission produced by the ionised ICM (Sarazin, 1988). X-ray catalogues may provide a wealth of information about galaxy clusters, such as their temperature, luminosity, and gas mass, which are essential for understanding the formation and evolution of these structures. The same hot ICM (namely, its high-energy electrons) also interacts with low-energy cosmic microwave background (CMB) radiation through inverse Compton scattering, creating a distortion of the black-body spectrum (the thermal Sunyaev-Zeldovich effect; Sunyaev & Zeldovich 1970, 1972). The ICM boosts CMB photons to higher frequencies, resulting in a characteristic increase or decrease of the CMB intensity, depending on the microwave frequency at which a galaxy cluster is observed.

Studying galaxy clusters in the (sub)millimetre range via the SZ effect has several advantages over other wavelength ranges. First, the magnitude of the SZ effect does not depend on redshift, providing a powerful tool for detecting clusters at high redshifts. Second, the SZ signal is proportional to the integrated gas pressure along the line of sight through the cluster, which is directly linked to the total mass of the cluster. The relationship between the integrated Compton parameter \(Y_{SZ}\) (the total SZ flux) and the cluster mass \(M\) has already proved to be particularly tight (see Kravtsov et al., 2006; Planck Collaboration et al., 2013, among many others).
A number of SZ catalogues have been released in recent years, including those from the Planck survey (Planck Collaboration 2014, 2016), the South Pole Telescope (SPT; Bleem et al. 2015), and the Atacama Cosmology Telescope (ACT; Hilton et al. 2021). With ongoing and upcoming CMB surveys, more than \(10^{4}\) galaxy clusters are expected to be detected (Raghunathan et al., 2022). Together with reliable mass estimates, such cluster samples may have a ground-breaking impact on our understanding of structure growth in the Universe, the dark energy equation of state, and modified gravity theories (e.g. Bocquet et al., 2015; Planck Collaboration et al., 2016; Burenin & Vikhlinin, 2012; Mantz et al., 2015; Burenin, 2018, 2013).

Recently, Meshcheryakov et al. (2022) presented SZcat, a catalogue of Planck extended SZ sources with a low level of significance (so that nearly all possible Planck cluster candidates are included, along with a large number of spurious detections). Here, we aim to develop a method for automated cluster detection on the publicly available combined ACT and Planck maps (Naess et al., 2020) and to apply it to the fields of SZcat sources. As a result, we obtain a new catalogue of galaxy clusters, ComPACT, characterised by high purity.

The paper is organised as follows. In the next subsection we review deep learning models for SZ cluster detection proposed in the literature. Section 2 describes the data used in this work. In Section 3, we describe the deep learning model for cluster segmentation and the procedure for object detection. Section 4 describes the ComPACT catalogue, and Section 5 presents our conclusions.

### Deep learning models for SZ cluster detection

In the last few years, deep learning approaches have been successfully applied to various object detection and segmentation problems in observational astronomy, in different spectral domains: e.g. to imaging data in the radio (Hartley et al., 2023), microwave (Meshcheryakov et al., 2022; Lin et al., 2021; Bonjean, 2020), and optical (Burke et al., 2019) ranges. The key advantages of deep learning models over classical astronomical object detection methods (such as SExtractor (Bertin and Arnouts, 1996) in the optical or the Matched Multi-filter approach (Melin et al., 2006) in the microwave) are: (i) the unification of the object detection model architecture across many spectral domains, and (ii) the ability of a DL method to improve as the available training sample grows. As was shown in the SKA Science Data Challenge 2 (Hartley et al., 2023), deep learning models trained on a large and sufficiently representative knowledge base outperform classical approaches in astronomical object detection tasks.

As mentioned above, DL approaches have been successfully applied to microwave imaging data. Bonjean (2020) used a DL model with the U-Net architecture to make a segmentation map of the SZ signal from raw Planck HFI imaging data and detected SZ sources on it. Verkhodanov et al. (2021) trained a CNN classification model to recognise SZ sources in Planck intensity maps. Lin et al. (2021) proposed a hybrid model (DeepSZ, based on a combination of a CNN and the classical MMF approach) and demonstrated its power in detecting SZ sources on simulated CMB maps (with characteristics similar to the SPT-3G survey). In none of the papers mentioned above was a catalogue of SZ sources presented. Recently, Meshcheryakov et al.
(2022) trained a U-Net model on Planck HFI data and presented SZcat, a full catalogue of cluster candidates. SZcat contains potential SZ sources over the whole extragalactic sky, obtained by two methods: (a) applying the DL model described in Meshcheryakov et al. (2022) to Planck HFI intensity maps; and (b) applying a classical detection approach (Burenin, 2017) to the Compton parameter maps published in the Planck 2015 data release (Planck Collaboration et al., 2016). SZcat contains SZ sources detected down to low significance values. It is supposed to have the highest completeness level for clusters seen by Planck, but a low purity of candidates in the full catalogue.

In order to increase the purity of SZcat (while staying at a high completeness level), one needs to combine SZcat with other cluster catalogues, with data from different instruments (e.g. SPT, ACT) in the same microwave spectral domain, or with cluster data obtained in a different spectral range (optical-IR, X-ray). Using a combination of data from different telescopes makes it possible to lower the threshold for cluster detection, so that new objects can be found with high precision. Melin et al. (2021) combined Planck and SPT microwave data; Tarrio et al. (2019) combined Planck data with the ROSAT X-ray catalogue. In both cases, new cluster catalogues were published, with higher completeness/purity than the cluster catalogues obtained from the Planck/SPT/ROSAT data individually.

## 2 Data

We use the publicly available composite ACT+Planck intensity maps1 from Naess et al. (2020) at 98, 150, and 220 GHz, with a resolution of 0.5 arcmin per pixel, covering around 18,000 square degrees of the sky (hereafter the ACT footprint).

Footnote 1: [https://lambda.gsfc.nasa.gov/product/act/actpol_dr5_coadd_maps_get.html](https://lambda.gsfc.nasa.gov/product/act/actpol_dr5_coadd_maps_get.html)

In order to train and test the SZ cluster segmentation model (see Section 3), we use the following catalogues of galaxy clusters and radio sources found in ACT data:

* the ACT DR5 cluster catalogue (Hilton et al., 2021), which contains 4,195 optically confirmed galaxy clusters, selected with the multi-frequency matched filter method (MMF; Melin et al. (2006); Williamson et al. (2011)) from ACT multi-frequency data collected from 2008 to 2018;
* a catalogue of point sources (predominantly active galactic nuclei) from Datta et al. (2019), which covers 680 square degrees of the full ACT footprint;
* the equatorial catalogue of extragalactic sources (Gralla et al., 2020), which contains 287 dusty star-forming galaxies and 510 radio-loud active galactic nuclei (AGN).

We also use external X-ray and SZ cluster catalogues: MCXC, 4XMM DR12, SPT-SZ, ComPRASS, and PSZSPT (see Appendix B).

## 3 Method

Fig. 1 illustrates the sequence of pipeline steps in constructing the catalogue of potential SZ sources. We convert the full ACT+Planck intensity map into an SZ signal segmentation map using a deep learning classification model (see Section 3.1). The detection procedure described in Section 3.2 is then applied to the SZ signal segmentation map, resulting in a catalogue of SZ source candidates. Below, we describe the SZ signal segmentation and source detection procedures in detail.

### Deep learning SZ signal segmentation model

To recognise the SZ signal on ACT+Planck intensity maps, one could follow the approach already presented in Meshcheryakov et al.
(2022); Bonjean (2020), i.e. train a classical U-Net segmentation architecture on the microwave data. To do so, one needs to define a ground-truth segmentation mask. For example, Meshcheryakov et al. (2022) and Bonjean (2020) drew, for each source in the training sample, a circle at the cluster position with the average Planck PSF FWHM size. This approach approximates a galaxy cluster as a circle with the Planck PSF radius, so some information about the real shapes and sizes of the galaxy clusters in the training set is lost.

In this work, we use a different approach to SZ signal deep learning segmentation. We train a DL classification model on snapshot images centred on ACT DR5 galaxy clusters (the positive class), snapshot images centred on radio sources (mainly dusty star-forming galaxies and radio-loud AGNs), and snapshot images taken in random directions on the sky (the negative class). We then apply our classifier to each pixel of the ACT+Planck intensity maps and thus obtain the SZ signal segmentation map in the ACT footprint. This approach has the potential benefit of retaining information about cluster shapes in the deep learning model. Below we explain the construction of the SZ signal classification model.

#### 3.1.1 Classification model architecture

We aim to construct a classification algorithm that analyses a segment of the microwave ACT+Planck map and yields the probability of a cluster being located at the centre of the segment. To do so, we use a convenient CNN+MLP architecture, an adapted version of VGG (Simonyan and Zisserman, 2015). Table 1 summarises the architecture of the network. The CNN (convolutional neural network) part contains 4 identical blocks; each convolutional block has two convolution kernels of \(3\times 3\) pixels (padding = 1, stride = 1), the rectified linear unit activation function (\(\mathrm{ReLU}(x)=\max(0,x)\); Maas et al. 2013), and a MaxPooling sub-sampling unit. The second part is a Multi-Layer Perceptron (MLP) with three linear layers, each followed by a ReLU activation function. We use batch normalisation (Ioffe and Szegedy, 2015) in the CNN and MLP layers and DropOut with \(p=0.5\) (Hinton et al., 2012) in each MLP layer for network regularisation. A final linear layer with the standard logistic sigmoid function (see e.g. Dubey et al. (2021)) outputs the probability of SZ signal. The total number of trainable parameters is 462,865.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
 & Layer & Output map size \\
\hline \hline
1 & Input & \(32\times 32\times 3\) \\
2 & \(3\times 3\) convolution + ReLU & \(32\times 32\times 16\) \\
3 & \(3\times 3\) convolution + ReLU & \(32\times 32\times 16\) \\
4 & \(2\times 2\) MaxPooling & \(16\times 16\times 16\) \\
\hline
5 & \(3\times 3\) convolution + ReLU & \(16\times 16\times 32\) \\
6 & \(3\times 3\) convolution + ReLU & \(16\times 16\times 32\) \\
7 & \(2\times 2\) MaxPooling & \(8\times 8\times 32\) \\
\hline
8 & \(3\times 3\) convolution + ReLU & \(8\times 8\times 64\) \\
9 & \(3\times 3\) convolution + ReLU & \(8\times 8\times 64\) \\
10 & \(2\times 2\) MaxPooling & \(4\times 4\times 64\) \\
\hline
11 & \(3\times 3\) convolution + ReLU & \(4\times 4\times 128\) \\
12 & \(3\times 3\) convolution + ReLU & \(4\times 4\times 128\) \\
13 & \(2\times 2\) MaxPooling & \(2\times 2\times 128\) \\
\hline
14 & Flatten & 512 \\
15 & Linear + ReLU & 256 \\
16 & Linear + ReLU & 128 \\
17 & Linear + ReLU & 32 \\
18 & Linear + Sigmoid & 1 \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Our model architecture. All layers except the last one use rectified linear units, and all the convolutional and linear layers use batch normalisation.
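A PyTorch sketch of this architecture is given below; the class name is ours. With batch normalisation in every convolutional and linear block as described, this sketch has 463,697 trainable parameters, slightly above the quoted 462,865, so the exact placement of the normalisation layers in the original may differ.

```python
import torch
import torch.nn as nn

class SZClassifier(nn.Module):
    """VGG-like classifier of Table 1: input (B, 3, 32, 32) map cut-outs,
    output the probability of SZ signal at the cut-out centre."""

    def __init__(self):
        super().__init__()

        def conv_block(c_in, c_out):
            # Two 3x3 convolutions (padding=1, stride=1) + ReLU, then 2x2 MaxPool.
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
                nn.MaxPool2d(2))

        def linear_block(f_in, f_out):
            # Linear + BN + ReLU with DropOut p=0.5 for regularisation.
            return nn.Sequential(nn.Linear(f_in, f_out), nn.BatchNorm1d(f_out),
                                 nn.ReLU(), nn.Dropout(0.5))

        self.cnn = nn.Sequential(conv_block(3, 16), conv_block(16, 32),
                                 conv_block(32, 64), conv_block(64, 128))
        self.mlp = nn.Sequential(nn.Flatten(),            # (B, 2*2*128) = (B, 512)
                                 linear_block(512, 256), linear_block(256, 128),
                                 linear_block(128, 32),
                                 nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.mlp(self.cnn(x))
```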
#### 3.1.2 Model training and validation

To create the model input, the ACT+Planck map is divided into two distinct regions: training and test sets. The training set is used to learn the optimal model parameters, while the test set is used to evaluate the final model. We also include in the knowledge base (train and test samples) fields in random directions on the sky which overlap neither with clusters from ACT DR5 nor with radio sources (within a 10-arcmin radius). For the training set, we select the western region of the sky in equatorial coordinates (\(\alpha>180^{\circ}\)). This region encompasses a total of 2,449 galaxy clusters from ACT DR5 (\(\simeq 35.5\%\) of the training set) and 1,226 non-cluster radio sources (\(\simeq 17.8\%\)); random fields comprise approximately \(46.7\%\) of the training sample. To prepare our dataset, we cut the ACT+Planck maps into 16 arcmin \(\times\) 16 arcmin squares (\(32\times 32\) pixels). To improve the model accuracy and to avoid overfitting, we apply image preprocessing and augmentation techniques to the input images. In particular, we normalise all the images as

\[x=\frac{x}{\max(\|x\|_{2},0)}, \tag{1}\]

where \(\|x\|_{2}\) is the Euclidean norm over rows. Further image transformations include standard augmentation techniques such as flipping and random perspective.

To quantify the model performance, we use the binary cross-entropy loss function

\[\ell(y,f_{\theta})=-y\log(f_{\theta})-(1-y)\log(1-f_{\theta}), \tag{2}\]

where \(y\) is the actual class (\(y=0\) for non-clusters and \(y=1\) for clusters) and \(f_{\theta}\in[0,1]\) is the predicted label, which can be interpreted as the probability of detecting a cluster in a given field. The loss function compares the predicted probabilities to the actual class labels and assigns a penalty based on the distance from the expected value; it is a standard way to measure the accuracy of a deep learning classifier and is used to optimise the model. To iteratively update the network parameters \(\theta\), we use the Adam optimisation method (Kingma and Ba, 2017), with default parameters except weight_decay = 0.001. All hyper-parameters, such as kernel size, padding, stride, etc., are selected manually by training different models to achieve the best performance.

Figure 1: Scheme showing the main steps of building the catalogue of potential SZ sources.
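A minimal sketch of one optimisation step under these settings follows; the random batch is a placeholder for real cut-outs, and the small \(\epsilon\) guard stands in for the \(\max(\|x\|_{2},0)\) of Eq. (1), which as printed never takes effect since a norm is non-negative.

```python
import torch

model = SZClassifier()                       # from the sketch above
opt = torch.optim.Adam(model.parameters(), weight_decay=0.001)
bce = torch.nn.BCELoss()                     # Eq. (2)

x = torch.randn(8, 3, 32, 32)                # stand-in batch of map cut-outs
y = torch.randint(0, 2, (8,)).float()        # 1 = cluster, 0 = non-cluster

x = x / x.norm(dim=-1, keepdim=True).clamp_min(1e-8)  # row-wise norm, Eq. (1)
loss = bce(model(x).squeeze(1), y)
opt.zero_grad(); loss.backward(); opt.step()
```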
#### 3.1.3 Classification model test

The test data set includes 1,745 galaxy clusters (\(\simeq 30.7\%\) of the test data set), 519 point sources (\(\simeq 9.1\%\)), and 3,422 random fields (\(\simeq 60.2\%\)) in the eastern part of the sky (\(\alpha\leq 180^{\circ}\)). To demonstrate the model behaviour, Figure 2 shows the distribution of predicted probabilities of detecting a cluster in a given patch of the sky for the test sample (we consider only the area with galactic latitude \(|b|>15^{\circ}\) to avoid the Galactic plane). The test dataset includes clusters, point sources, and random fields (see Section 3.1.2). From Figure 2 we see that for most of the clusters (in grey) the model correctly assigns high probabilities (note that the vertical axis is in log scale), while for most of the point sources (in red) and random fields (in blue) the predicted probabilities are low. Approximately 0.06% of point sources are misclassified as galaxy clusters; we think this might be due to the presence of active galactic nuclei within some clusters. The vertical line marks the probability threshold which we later use to form a criterion for distinguishing cluster candidates from non-clusters. Most of the objects with probabilities \(<0.3\) are non-clusters, although \(\sim\)2% of genuine clusters are not recognised as clusters by our algorithm (i.e. the model predicts \(p<0.3\)).

#### 3.1.4 Segmentation map

We apply our classifier to each pixel of the ACT+Planck intensity maps and obtain a segmentation map. Each pixel in the segmentation map denotes the probability of the presence of SZ signal in the direction of the pixel centre. An example of a patch of the segmentation map containing a massive galaxy cluster is shown in Fig. 3. We choose the direction of SZ G098.31-41.19 (marked by the red cross) from the SZcat catalogue. The top row and the bottom left panel show three fragments of the ACT+Planck input maps at 98 GHz, 150 GHz, and 220 GHz; the probability map for this area is shown at the bottom right. It contains only one connected group within the chosen window, and this group can be associated with the galaxy cluster PSZ2 G098.30-41.15 from the PSZ2 catalogue (marked with a magenta star) and ACT-CL J2334.3+1759 from ACT DR5 (its centre is shown with a brown circle). It is a massive cluster with \(M_{500}\sim(7-8)\times 10^{14}\,M_{\sun}\) (Hilton et al., 2021) located at \(z=0.436\).

### Detection model

Since galaxy clusters are extended objects, we look in the segmentation map for connected groups of pixels with probabilities above a certain threshold. To do so, we use the scikit-image (van der Walt et al., 2014) library developed for scientific image analysis in Python. We manually analysed a sample of probability maps for known clusters (i.e. patches of the segmentation map which contain galaxy clusters) and random fields on the sky (with no clusters), and derived the optimal segmentation threshold \(p_{seg}=0.3\) (see Fig. 2). For comparison, Bonjean (2020) used a threshold \(p_{seg}=0.1\) to select sources in the SZ signal segmentation map created from Planck HFI imaging data. Additionally, we remove the complex Galactic plane from consideration and keep only objects with galactic latitude \(|b|>15^{\circ}\). The resulting catalogue contains more than 1 million potential SZ source candidates with \(p>0.3\) in the ACT footprint.

## 4 The ComPACT catalogue

Here, we aim to make a clean catalogue of cluster candidates using the SZcat catalogue (Meshcheryakov et al., 2022) as a basis. In the ACT field, the total number of SZcat sources is 14,850. For each direction from the SZcat catalogue we consider all detected connected groups of pixels in a window of \(R_{match}=5\) arcmin. For each connected group we calculate (i) the group area \(S\) (the total number of pixels in the connected group), (ii) the maximum probability \(p_{max}\) in the group, and (iii) the distance \(R\) from the group centre (the pixel with \(p_{max}\)) to the input direction, as sketched below.
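The following scikit-image sketch illustrates this detection step; the function name and the 8-connectivity choice are our assumptions.

```python
import numpy as np
from skimage import measure

def detect_groups(prob_map, p_seg=0.3):
    """Connected groups of pixels with probability above p_seg, each
    characterised by its area S, peak probability p_max, and the pixel
    position of the peak (used as the group centre)."""
    labels = measure.label(prob_map > p_seg, connectivity=2)
    groups = []
    for region in measure.regionprops(labels):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        probs = prob_map[rows, cols]
        k = np.argmax(probs)
        groups.append({"S": region.area,
                       "p_max": probs[k],
                       "centre": (rows[k], cols[k])})
    return groups

# Criterion of Section 4: keep groups with S > 20 pixels and p_max > 0.8.
# R is then the separation between `centre` and the SZcat input direction.
demo = detect_groups(np.random.rand(64, 64))
candidates = [g for g in demo if g["S"] > 20 and g["p_max"] > 0.8]
```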
We analyse different samples to form a criterion for distinguishing clusters from non-clusters among the connected groups (Figures 4 and 5). We arrive at the following rule: a connected group is assigned to a cluster if its area \(S>20\) pixels and its \(p_{max}\) exceeds 0.8. As our tests show, with this criterion we reach a trade-off between the purity and the completeness of the obtained catalogue of cluster candidates.

Let us now illustrate how the distribution of detected sources looks on the \(R\)-\(S\) and \(p_{max}\)-\(S\) planes for random directions on the sky (see the left panels of Fig. 4 and Fig. 5). Random fields are unlikely to contain galaxy clusters. We see that non-clusters tend to have small areas and lower values of \(p_{max}\) compared to clusters, and are offset from the input direction. In the central panels of Fig. 4 and Fig. 5 we show the distributions of clusters detected by our model along the input directions from the ACT DR5 test subsample (see Section 2) and from PSZ2z (the PSZ2 subsample of clusters with spectroscopic redshifts). All these objects are optically confirmed clusters. We see that real clusters tend to have large areas and large values of \(p_{max}\). For ACT DR5 sources, \(R\) is small (\(\sim 1\) arcmin for the majority of clusters) since ACT has good spatial resolution; PSZ2z clusters are more widely distributed in the \(R\)-\(S\) plane because of the poorer Planck resolution (compared to ACT). The right panels of Fig. 4 and Fig. 5 present the distributions of connected groups of pixels detected by our model in the 5 arcmin window along the SZcat directions. Red lines in Figures 4 and 5 show the thresholds in area (\(S=20\)) and in maximum probability within a group (\(p_{max}=0.8\)) which we use to distinguish clusters from non-clusters. The low-area, low-\(p_{max}\) region is densely populated; most likely, these objects are non-clusters. At the same time, we detect a large number of sources with large areas and high values of \(p_{max}\), which are likely to be genuine clusters.

Figure 2: Distribution of probabilities predicted by our model for the test dataset: galaxy clusters are shown in grey, point sources in red, and random fields with the purple curve. Each histogram is normalised by the total number of objects of a given type. The green vertical line marks the probability threshold below which most objects are actually non-clusters; we use this threshold in Section 4 to distinguish between non-clusters and cluster candidates.

In Fig. 6 we show the distribution of ComPACT clusters on the sky (in galactic coordinates).

### Purity and completeness of the ComPACT catalogue

The key characteristics of a cluster catalogue are its purity and completeness. We estimate the ComPACT completeness with respect to the PSZ2z (the subsample of PSZ2 objects with optical identifications and spectroscopic redshift measurements) and ACT DR5 cluster catalogues. We associate clusters from the PSZ2z or ACT samples in the test area with the nearest object from our full catalogue of SZ candidates (see Section 3.2). All cluster associations are shown in the central panels of Figures 4 and 5. Then, we estimate the ComPACT catalogue completeness as

\[C=N_{cl}^{\star}/N_{cl}\,, \tag{3}\]

where \(N_{cl}^{\star}\) is the number of clusters with \(p_{max}>0.8\) and \(S>20\) (green circles in the central panels of Fig. 4) and \(N_{cl}\) is the total number of clusters in the considered sample.
For the ACT DR5 and PSZ2z cluster samples, we obtain \(\mathrm{C}_{ACT}\approx 0.88\) and \(\mathrm{C}_{PSZ2z}\approx 0.79\). In both cases, \(\sim 7\%\) of true clusters are not assigned to clusters by our detection model due to their small area, i.e. these objects have \(S<20\). The decrease in the completeness of Planck cluster detection in ComPACT seems to be due to: (i) the way ACT and Planck data are combined in the joint intensity maps (Naess et al., 2020), and (ii) the wrong association between a connected group and SZcat for some ComPACT clusters.

Figure 3: An example of a cluster detection in the direction of SZ G098.31-41.19 (marked by the red cross) from the SZcat catalogue. 5-arcmin fragments of the ACT+Planck maps at different frequencies are shown in the upper row and in the bottom left panel; the pixel size is 0.5 arcmin. The probability map is presented in the bottom right corner. For comparison, we also show the positions of clusters from the PSZ2 (magenta star) and ACT DR5 (brown circle) catalogues which lie within the cross-match radius of 5 arcmin. The pixel with maximum probability is marked by a green triangle.

Figure 4: Distribution of connected groups of pixels according to their distance \(R\) from the input direction and their area \(S\). The left panel shows results for random directions, which are unlikely to contain galaxy clusters; density contours are shown as black dashed curves. The two central panels illustrate detected groups of pixels corresponding to clusters from the ACT DR5 test subsample and spectroscopically confirmed clusters from PSZ2. The right panel shows the distribution of detected groups of pixels for SZcat directions; the latter combines characteristics of random fields and PSZ2 clusters, since SZcat contains quite a number of false detections. Vertical red lines mark the threshold in area: we assume that all detected groups of pixels with \(S<20\) are non-clusters. In green, we show groups that satisfy our criteria for clusters: \(S>20\) and \(p_{max}>0.8\).

We define the purity of the ComPACT catalogue as

\[Purity=1-\frac{N_{F}}{N_{ComPACT}}\,, \tag{4}\]

where \(N_{F}\) and \(N_{ComPACT}\) are the number of false detections and the total number of cluster candidates in the ComPACT catalogue, respectively. The number of false detections can be estimated as

\[N_{F}\lesssim(1-P)\cdot\hat{N}_{field}\cdot\left(\frac{N_{SZcat}}{N_{field}}\right)\,, \tag{5}\]

where \(P=0.8\) is our threshold in the maximum probability (\(p_{max}\)); \(\hat{N}_{field}\) and \(N_{field}\) are the number of detections (connected groups of pixels) with \(p_{max}>0.8\) and \(S>20\), and the total number of objects in the random-fields sample, respectively; and \(N_{SZcat}\) is the total number of detections in the SZcat sample. The idea behind formula (5) is simple: the subsample of detections in random fields with \(p_{max}>0.8\) and \(S>20\) contains more than \(P\times 100\%=80\%\) genuine galaxy clusters, so the number of false detections in this subsample can be estimated as \(<(1-P)\cdot\hat{N}_{field}\). We then rescale this estimate to the total number of detections \(N_{SZcat}\) in the fields of SZcat sources. It is worth noting that equality (in terms of expected value) in formula (5) is achieved only if the SZcat objects lie in random sky directions (in random fields) and all SZ detections have \(p_{max}=0.8\). Based on expressions (4) and (5), one can obtain

\[Purity\gtrsim 1-(1-P)\cdot\left(\frac{\hat{N}_{field}}{N_{field}}\right)\cdot\left(\frac{N_{SZcat}}{N_{ComPACT}}\right). \tag{6}\]

Substituting the corresponding values of \(P\), \(\hat{N}_{field}\), \(N_{field}\), \(N_{SZcat}\), and \(N_{ComPACT}\) into formula (6) gives the estimate of the ComPACT purity \(\gtrsim 0.88\).
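These counting formulas reduce to a few lines of arithmetic; a sketch with the counts as arguments (the names are ours, and the paper quotes only the resulting \(\mathrm{C}_{ACT}\approx 0.88\), \(\mathrm{C}_{PSZ2z}\approx 0.79\), and \(Purity\gtrsim 0.88\)):

```python
def completeness(n_recovered, n_total):
    """Eq. (3): fraction of external-catalogue clusters recovered
    with p_max > 0.8 and S > 20."""
    return n_recovered / n_total

def purity_lower_bound(p_thresh, n_hat_field, n_field, n_szcat, n_compact):
    """Eq. (6): conservative lower bound on the catalogue purity,
    with p_thresh = 0.8 in the paper."""
    return 1 - (1 - p_thresh) * (n_hat_field / n_field) * (n_szcat / n_compact)
```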
Based on expressions (4) and (5), one can obtain: \[Purity\gtrsim 1-(1-P)\cdot\left(\frac{\hat{N}_{field}}{N_{field}}\right)\cdot \left(\frac{N_{SZcat}}{N_{ComPACT}}\right). \tag{6}\] Substituting the corresponding values of \(P\), \(\hat{N}_{field}\), \(N_{field}\), \(N_{SZcat}\), and \(N_{ComPACT}\) into formula (6) gives us an estimate of the ComPACT purity of \(\gtrsim 0.88\).

### Cross-correlation of ComPACT with external cluster catalogues

We cross-correlate the derived ComPACT sample with the following X-ray and SZ external cluster catalogues: MCXC, 4XMM-DR12, ACT DR5, PSZ2, SPT-SZ, ComPRASS, and PSZSPT. The cross-match radius is set to 5 arcmin. This choice is motivated by the analysis of average number density radial profiles (for different cluster catalogues) in the neighbourhood of ComPACT objects (see Figure 10). Table 2 gives summary information about the external cluster catalogues in comparison with ComPACT. From ACT DR5, we identify 1,033 matches, and cross-correlation with the PSZ2 catalogue gives 482 clusters. We also find matches of ComPACT objects in cluster catalogues other than ACT DR5 and PSZ2. We identify 2 clusters in SPT-SZ, 3 clusters in PSZSPT, 67 objects in ComPRASS, 47 objects as extended X-ray sources in 4XMM-DR12, and 16 objects in MCXC. All ComPACT objects identified as X-ray galaxy clusters with luminosity (in the energy band 0.5-2 keV) exceeding \(7.5\cdot 10^{44}\ erg/s\) have \(p_{max}\sim 1\). In Fig. 7 (left panel), we show the numbers of ComPACT clusters associated with different external cluster catalogues. ComPACT contains \(2,934\) cluster candidates, among which \(\gtrsim 2,581\) objects are expected to be real galaxy clusters (see Section 4.1 for the ComPACT catalogue purity estimate). A total of 1,259 objects are associated with sources from external cluster catalogues, among which \(1,146\) clusters are found in the Planck and/or ACT cluster catalogues. 1,788 ComPACT objects are new with respect to the existing ACT DR5 or PSZ2 cluster samples; among them we expect \(\gtrsim 1,436\) real clusters. In Fig. 7 (right panel), we show the numbers of ComPACT clusters not found in the PSZ2/ACT cluster catalogues and associated with objects from other cluster samples.

Figure 5: Distributions of group area \(S\) versus maximum probability within a group, \(p_{max}\). From left to right: random fields (in the ACT footprint), groups associated with optically confirmed galaxy clusters from the test ACT DR5 data set and from PSZ2z (in the ACT footprint), and SZcat directions (in the ACT footprint). Red lines indicate the thresholds. In green we show the region where detected groups of pixels are classified as clusters: \(S>20\) and \(p_{max}>0.8\). We see that with this simple criterion almost all detected sources in random fields are classified as non-clusters, with only \(\sim 10\%\) of groups falling into the 'cluster region'. For ACT DR5 and PSZ2 optically confirmed clusters we correctly classify \(\sim 80\%\) of clusters.

Figure 6: Distribution of ComPACT objects on the sky (in galactic coordinates).

One can see that among the expected \(\gtrsim 1,436\) real clusters (not found in PSZ2/ACT DR5), 113 objects are found in the MCXC, 4XMM-DR12, SPT-SZ, ComPRASS, and PSZSPT catalogues. Thus, we expect \(\gtrsim 1,366\) totally new clusters in ComPACT. In Figure 8, we illustrate the mass-redshift dependence for clusters in the ACT DR5 and PSZ2z catalogues. The blue and orange contours represent the distributions of the full ACT DR5 and PSZ2z catalogues, respectively.
For ACT DR5, we 'recover' 1,033 clusters (shown as blue squares in the upper left panel), and 95 ACT clusters3 are missing in ComPACT (shown as green stars in the bottom left panel). We miss low-mass cluster candidates at intermediate and high redshifts due to (i) detection limitations of the SZcat catalogue, and (ii) our area criterion (see the second panel of Figure 4).

Footnote 3: with an SZcat candidate within \(R_{match}=5\) arcmin

The mass-redshift distribution for PSZ2 matches4 in ComPACT is plotted as red squares in the upper right panel of Fig. 8. PSZ2z clusters that are not present in ComPACT are shown as purple stars (291 objects) in the bottom right panel of Fig. 8.

Footnote 4: for candidates with measured masses and redshifts, 418 objects

Let us now discuss cluster candidates that do not have any matches in the ACT DR5 or PSZ2 catalogues. In Figure 9, we plot matches (with masses and redshifts available in the literature) with SPT-SZ (red dots), ComPRASS (black stars), and MCXC (magenta crosses). Most of them lie at \(z<0.8\). Compared to the ACT DR5 catalogue, ComPACT is more complete at lower redshifts. We expect to find objects with low mass at low redshifts and clusters on the border of the PSZ2 density distribution.

## 5 Conclusions

Galaxy clusters are the most massive gravitationally bound systems, consisting of dark matter, hot baryonic gas and stars. They play an important role in observational cosmology and galaxy evolution studies. We have created a deep learning model for segmentation of the SZ signal in ACT+Planck intensity maps (Naess et al., 2020) and present here a new galaxy cluster catalogue in the ACT footprint. In order to increase the purity of the cluster catalogue, we limit ourselves to publishing here only a part of the full sample with the most probable galaxy clusters lying in the directions to the candidates of the extended Planck cluster catalogue (SZcat, Meshcheryakov et al. (2022)). The ComPACT catalogue contains 2,934 galaxy clusters with \(Purity\geq 88\%\).

\begin{table} \begin{tabular}{c c c c c} \hline Cluster catalogue & Number of objects & Sky fraction & Instrument & Ref \\ \hline \hline SZcat & 14,850 & 0.88 & Planck & Meshcheryakov et al. (2022) \\ PSZ2 & 1,653 & 0.84 & Planck & Planck Collaboration (2016) \\ ACT DR5 & 4,195 & 0.32 & ACT & Hilton, M. et al. (2021) \\ SPT-SZ & 677 & 0.061 & SPT & Bocquet et al. (2019) \\ \hline ComPRASS & 2,323 & 0.81 & Planck, ROSAT & Tarrio, P. et al. (2019) \\ PSZSPT & 419 & 0.061 & SPT, Planck & Melin, J.-B. et al. (2021) \\ \hline MCXC & 1,743 & 0.38 & ROSAT, EXOSAT & Piffaretti et al. (2011) \\ 4XMM-DR12 & 88,169 & 0.028 & XMM-Newton & Webb et al. (2020) \\ \hline ComPACT & 2,934 & 0.37 & ACT+Planck & this paper \\ \hline \end{tabular} \end{table} Table 2: External galaxy cluster samples in comparison with the new ComPACT cluster catalogue presented in this paper.

Figure 7: Number of cross-matches of ComPACT cluster candidates with other catalogues. The left panel shows all candidates; the right panel shows only ComPACT candidates that have no matches with the ACT DR5 or PSZ2 catalogues. In each panel, the first line shows the total number of ComPACT objects in the considered sample/subsample. In blue, the expected number of genuine clusters (number of candidates multiplied by purity) is shown. The second line illustrates the total number of matches across the explored catalogues. The remaining lines represent the number of matches with a given catalogue.
Note that \(\gtrsim 1,436\) SZ clusters in the presented ComPACT catalogue are new with respect to the existing ACT DR5 and PSZ2 cluster samples. In future work, we plan to identify these objects in optical surveys and estimate their total masses and redshifts.

## Acknowledgements

We acknowledge the publicly available software packages that were used throughout this work: numpy (Oliphant, 2006; van der Walt et al., 2011; Harris et al., 2020), pandas (pandas development team, 2023; Wes McKinney, 2010), matplotlib (Hunter, 2007), pixell1 and pytorch (Paszke et al., 2019).

Footnote 1: [https://github.com/simonsobs/pixell](https://github.com/simonsobs/pixell)

We also acknowledge the use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA).

## Data Availability

The ComPACT cluster catalogue is publicly available: [https://github.com/astromining/ComPACT](https://github.com/astromining/ComPACT)

Figure 8: ComPACT cluster candidates cross-matched with ACT DR5 (1,033 objects; blue dots in the upper left panel) and PSZ2 (418 objects; orange dots in the upper right panel). In the bottom panels we present clusters from ACT DR5 (green stars) and PSZ2 (purple stars) that have SZcat candidates within a 5 arcmin radius but are not present in ComPACT. Blue contours show the density of SZ sources in ACT DR5 on the mass-redshift plane, while orange contours represent the density of PSZ2 sources. From comparing ComPACT candidates matched with ACT DR5 and the density contours of ACT DR5 sources, it appears that we miss low-mass clusters especially at intermediate and high redshifts. This might be due to our cluster detection criteria and because ComPACT is based on SZcat, which in turn is constructed from Planck maps.

Figure 9: Mass-redshift dependence for ComPACT cluster candidates which do not have matches with ACT DR5 or PSZ2 but do have matches with MCXC (magenta crosses), ComPRASS (black stars), and SPT-SZ (red circles). The orange density contours show the density of PSZ2 sources, the blue ones that of ACT DR5 clusters.
Galaxy clusters are the most massive gravitationally bound systems, consisting of dark matter, hot baryonic gas and stars. They play an important role in observational cosmology and galaxy evolution studies. We have developed a deep learning model for segmentation of the Sunyaev-Zeldovich (SZ) signal in ACT+Planck intensity maps and built a pipeline for microwave galaxy cluster detection in the ACT footprint. The proposed model allows us to identify previously unknown galaxy clusters, i.e. to detect SZ sources below the detection thresholds of published galaxy cluster catalogues (such as ACT DR5 and PSZ2). In this paper, we use the obtained SZ signal maps to substantially improve the cluster purity of Sunyaev-Zeldovich objects in the ACT footprint. From SZcat, we construct a new microwave galaxy cluster catalogue (ComPACT).
2309.16369
Bringing the Discussion of Minima Sharpness to the Audio Domain: a Filter-Normalised Evaluation for Acoustic Scene Classification
The correlation between the sharpness of loss minima and generalisation in the context of deep neural networks has been subject to discussion for a long time. Whilst mostly investigated in the context of selected benchmark data sets in the area of computer vision, we explore this aspect for the acoustic scene classification task of the DCASE2020 challenge data. Our analysis is based on two-dimensional filter-normalised visualisations and a derived sharpness measure. Our exploratory analysis shows that sharper minima tend to show better generalisation than flat minima -even more so for out-of-domain data, recorded from previously unseen devices-, thus adding to the dispute about better generalisation capabilities of flat minima. We further find that, in particular, the choice of optimisers is a main driver of the sharpness of minima and we discuss resulting limitations with respect to comparability. Our code, trained model states and loss landscape visualisations are publicly available.
Manuel Milling, Andreas Triantafyllopoulos, Iosif Tsangko, Simon David Noel Rampp, Björn Wolfgang Schuller
2023-09-28T12:13:23
http://arxiv.org/abs/2309.16369v2
# Bringing the Discussion of Minima Sharpness to the Audio Domain: a Filter-Normalised Evaluation for Acoustic Scene Classification

###### Abstract The correlation between the sharpness of loss minima and generalisation in the context of deep neural networks has been subject to discussion for a long time. Whilst mostly investigated in the context of selected benchmark data sets in the area of computer vision, we explore this aspect for the acoustic scene classification task of the DCASE2020 challenge data. Our analysis is based on two-dimensional filter-normalised visualisations and a derived sharpness measure. Our exploratory analysis shows that sharper minima tend to show better generalisation than flat minima -even more so for out-of-domain data, recorded from previously unseen devices-, thus adding to the dispute about better generalisation capabilities of flat minima. We further find that, in particular, the choice of optimisers is a main driver of the sharpness of minima and we discuss resulting limitations with respect to comparability. Our code, trained model states and loss landscape visualisations are publicly available.

Manuel Milling\({}^{1}\), Andreas Triantafyllopoulos\({}^{1}\), Iosif Tsangko\({}^{1}\), Simon David Noel Rampp\({}^{1}\), Björn Wolfgang Schuller\({}^{1,2}\)\({}^{1}\)Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany \({}^{2}\)GLAM - Group on Language, Audio, & Music, Imperial College London, UK

acoustic scene classification, sharp minima, loss landscape, generalisation, deep neural networks

## 1 Introduction

When training _artificial neural networks_ (ANNs) on a specific task, one of the key challenges lies in the network's ability to generalise to unseen data. As can be interpreted from the universal approximation theorem [1], ANNs are well capable of representing the underlying data distribution of any task. In practice -especially given a network with enough depth- good fits of the training data with converging loss values and perfect evaluation metrics are often easy to find. However, this does not translate to unseen data, as the generalisation error can vary hugely for an almost perfect training loss and can be influenced by the amount of training data, the choice of network architecture, optimiser or batch size [2], among other things. Models with a high generalisation gap are considered to be overfitted and often perform even worse if the unseen data is _out-of-domain_ (OOD). This can, for instance, be observed in the yearly DCASE _acoustic scene classification_ (ASC) challenge, in which the organisers added new recording conditions, such as different recording devices or cities, only to the test data. Critically, model selection, in the form of choosing hyperparameters or 'early stopping', is predominantly performed based on validation performance, which on its own can bring quite some limitations as, for instance, reported for OOD performance [3]. An alternative perspective on model states can be gained by examining the behaviour of loss functions. Specifically, some characteristics of a model state's minimum have been pointed out to show an important connection to the generalisation error. _Flatness_ and _sharpness_ play a particular role here, with flatter minima often believed to have better generalisation [4], at least since the work of Hochreiter and Schmidhuber [5].
Intuitively, these terms are related to the Hessian matrix, which contains all second-order derivatives of a function at a given point, for all directions, and can thus represent the local curvature behaviour of the function. Yet, an undisputed definition of flatness and sharpness in the high-dimensional parameter space of ANNs is still lacking. Nevertheless, several approaches to quantify flatness and sharpness have been developed over the years, but they have failed to paint a complete picture of the generalisation capabilities based on geometry, as a universal correlation between flatness and generalisation has been disputed [6, 7]. In particular, the authors of [8] claim that the conclusion that flat minima should generalise better than sharp ones cannot be applied as is without further context. Likewise, Andriushchenko et al. [9] recently observed in multiple cases that sharper minima can generalise better in some modern experimental settings. Arguably the most impactful sharpness measure, the \(\epsilon\)-sharpness, was introduced by Keskar et al. [2]. It captures the information from the eigenvalues of the Hessian matrix, while at the same time avoiding the computation-heavy calculation of the Hessian matrix itself. Li et al. [10], however, show that a problem in the interpretability of sharpness measures, such as the \(\epsilon\)-sharpness, may lie in the scaling of the weights. An obvious example is optimisers with weight penalties, which enforce smaller parameters that are thus more susceptible to perturbations, leading to sharp minima with good generalisation. In order to overcome this limitation, they suggest using filter-normalisation for the visualisation of loss landscapes and argue that flatter minima in low-dimensional visualisations with filter-normalised directions go hand-in-hand with better generalisation capabilities, even when compared across different ANN architectures. Even though this relationship is made evident in several instances on a qualitative level, a quantitative measure of sharpness in the context of filter-normalisation and a corresponding analysis are not provided. Beyond that, a core weakness with respect to the universal validity of the results in most previously mentioned contributions is that experiments are limited to established benchmark data sets for image classification, such as CIFAR-10 [11] or ImageNet [12], and should thus be further verified in different research areas and contexts. In this work, we focus on exploring the ASC task of the DCASE2020 challenge, which belongs to the same category of tasks as CIFAR-10 (a 10-class classification problem), but comprises a different modality (audio instead of images) and more of the challenges of real-world data. The DCASE ASC challenge has had tremendous influence on the computer audition community [13]. The yearly updated data sets have been the basis for ASC studies ranging from the development of new model architectures [14] and the evaluation of model robustness [15, 16], to investigations of fairness in performance amongst different recording devices and locations [3]. In this contribution, we suggest a new approach to quantitatively measure the sharpness of a local minimum -or at least of the neighbourhood of a 'well-trained' model state- and find correlations to the generalisability of ASC models.
We design our experiments considering different architectures, training parameters, and optimisation algorithms in order to address the following research questions:

* Is the sharpness derived from a two-dimensional filter-normalised visualisation stable across random directions?
* How does the sharpness of ASC models correlate with the generalisation error for _in-domain_ (ID) and OOD data?
* Which hyperparameters of model training are drivers for sharp minima?

These investigations might give insights relevant to the selection of models that generalise better to OOD data, as well as drive the understanding of different factors affecting this generalisation for computer audition, both of which are important open questions for ASC.

## 2 Methodology

### Filter-Normalisation

The basis for our characterisation of minima is low-dimensional filter-normalised visualisations of the loss minima, as introduced in [10]. The prerequisite for such a visualisation is an ANN with parameters \(\theta\), which was trained to a model state \(\theta^{\star}\) close to a local minimum of the loss function, given a training set \(X\). The precise minimum, however, will most likely not be reached in practice, given finite training time, finite numerical precision and, in particular, techniques such as early stopping. The loss function around the trained model state will nevertheless in most cases increase when varying any of the parameters \(\theta_{i}\) of the network. With common ANNs having millions or even billions of parameters, this leads to very high-dimensional loss landscapes. The immediate surroundings of the minimum can best be described with the Hessian matrix. The high dimensionality, however, makes the calculation of the Hessian matrix very computation-heavy and thus impractical [17], although notable attempts have been made in this direction [18]. Instead, a common approach to look at the loss landscape is through low-dimensional visualisations. In two dimensions, this can be realised through the choice of random Gaussian vectors \(\delta\) and \(\eta\), both of the same dimension as \(\theta\), which are in the following used to project the loss function as \[f(\alpha,\beta)=L(\theta^{\star}+\alpha\delta+\beta\eta). \tag{1}\] By varying the scalar variables \(\alpha\) and \(\beta\), we can depict a two-dimensional projection of the loss landscape. However, Li et al. point out some weaknesses of this visualisation, as different models -and even different model states of the same architecture- can have differently scaled parameters, thus making them more or less vulnerable to perturbations of the same magnitude [10]. Therefore, they suggest adjusting the perturbations relative to the magnitude of the weights, thus rescaling the random Gaussian directions \(\delta\) and \(\eta\) with a filter-level normalisation. This can be formulated as \[\delta_{i,j}\leftarrow\frac{\delta_{i,j}}{||\delta_{i,j}||}||\theta_{i,j}||, \tag{2}\] where the indices of \(\delta_{i,j}\) and \(\theta_{i,j}\) refer to the components of \(\delta\) corresponding to the \(j\)th filter of the \(i\)th layer in a convolutional neural network. Figure 1 shows two examples of filter-normalised loss landscapes in 2D around a minimum with \(\alpha\) and \(\beta\) ranging from -1 to 1, thus varying the filters of the network by around \(\pm 100\%\). We will use plots of this kind for the following analyses with the adapted code provided by the authors [10].
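To make the two ingredients concrete, the following is a minimal PyTorch sketch of Eqs. (1) and (2): drawing a filter-normalised random direction and evaluating the projected loss on a grid. It is our own condensation rather than the authors' reference implementation; in particular, zeroing the direction for one-dimensional parameters (biases, batch-norm weights) is one common convention, and `loss_fn` is assumed to return the training loss of the current model state:

```python
import torch

def filter_normalised_direction(model, ignore_bias_bn=True):
    """Random Gaussian direction, rescaled filter-wise as in Eq. (2)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() <= 1:
            if ignore_bias_bn:
                d.zero_()  # leave biases/batch-norm parameters unperturbed
        else:
            d_flat, p_flat = d.view(d.shape[0], -1), p.view(p.shape[0], -1)
            scale = p_flat.norm(dim=1, keepdim=True) / (d_flat.norm(dim=1, keepdim=True) + 1e-10)
            d_flat.mul_(scale)  # in-place on a view, so `d` itself is rescaled
        direction.append(d)
    return direction

@torch.no_grad()
def loss_surface(model, loss_fn, delta, eta, alphas, betas):
    """Evaluate f(alpha, beta) from Eq. (1) around the trained state theta*."""
    theta_star = [p.detach().clone() for p in model.parameters()]
    surface = torch.empty(len(alphas), len(betas))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            for p, p0, d, e in zip(model.parameters(), theta_star, delta, eta):
                p.copy_(p0 + a * d + b * e)
            surface[i, j] = loss_fn(model)  # loss of the perturbed model state
    for p, p0 in zip(model.parameters(), theta_star):
        p.copy_(p0)  # restore theta*
    return surface
```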
As the filter-normalised plots solve the problem of differently scaled filters, the authors claim that flatter minima in this representation, despite the heavy reduction in dimensionality, indicate better generalisation, which they underline with a qualitative analysis of several model states trained on the CIFAR-10 dataset.

### Sharpness

In order to quantitatively evaluate these claims for our ASC problem, we base our analysis on the \(\epsilon\)-sharpness, which is prominently used in the literature. This measure focuses on a small neighbourhood of a minimum and computes the largest value attained by the loss function within it; it is considered a good approximation of the curvature of the minimum and thus of the sharpness or flatness of the minimum. Formally, it is defined as \[s_{\epsilon}=\frac{\max_{\theta\in B(\epsilon,\theta^{\star})}(L(\theta)-L( \theta^{\star}))}{1+L(\theta^{\star})}\times 100, \tag{3}\] where \(\epsilon\) determines the radius of the ball \(B(\epsilon,\theta^{\star})\) around \(\theta^{\star}\). Alternative measures of sharpness include the consideration of the local entropy around a minimum [19] or of the size of the connected region around the minimum where the loss is relatively similar [5]. Inspired by the \(\epsilon\)-sharpness, we calculate the sharpness for our two-dimensional visualisation based on the largest value of \(L(\theta)\) within a maximum distance of \(\epsilon\) from the minimum of the visualisation. We will utilise this sharpness measure in the following to analyse the influence certain experimental settings have on the sharpness of minima and, further, what sharpness can tell us about the generalisation of an ASC model on unseen data.

## 3 Experiments and Discussion

### Dataset

As our dataset, we use the development partition of the DCASE 2020 Acoustic Scene Classification dataset [20] and evaluate the experiments based on the standard metric accuracy, which is defined as the ratio of correctly classified samples over all samples. The dataset includes 64 hours of audio segments from 10 different acoustic scenes, recorded in 10 European cities with 3 real devices (denoted as A, B, C), as well as data from 6 simulated devices (denoted as S1-S6). We use the official training/evaluation splits, with devices S4-S6 only appearing in the test set (OOD). The data is evenly distributed across cities, whereas device A (Soundman OKM II Klassik/studio A3) dominates over B, C, and the simulated devices. We extract 64-bin log-Mel spectrograms with a hop size of 10 ms and a window size of 32 ms, additionally resampling the 10 s long audio segments to 16 kHz (a minimal extraction sketch is given below).

Figure 1: Visualisation of the two-dimensional filter-normalised loss landscape for two different model states with different architectures and training paradigms.

### Model training

Our initial experiments involved two _convolutional neural network_ (CNN)-based architectures, the _pre-trained audio neural networks_ (PANNs) CNN10 and CNN14 [21] with random initialisation and around 5.2 million and 80.8 million parameters, respectively, which have frequently been applied to computer audition tasks, including the DCASE ASC task [3, 22, 23]. Their convolutional nature is well in line with the CNNs for which the filter-normalisation was developed.
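The input features described in Section 3.1 can be reproduced along the following lines. The window and hop sizes follow the text; the FFT size and the log offset are our own assumptions:

```python
import torch
import torchaudio

# 64-bin log-Mel features: 32 ms window, 10 ms hop at 16 kHz
melspec = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000,
    n_fft=512,          # 32 ms at 16 kHz; n_fft = win_length is our assumption
    win_length=512,
    hop_length=160,     # 10 ms at 16 kHz
    n_mels=64,
)

waveform = torch.randn(1, 16000 * 10)         # stand-in for a 10 s audio segment
log_mel = torch.log(melspec(waveform) + 1e-10)
print(log_mel.shape)                          # torch.Size([1, 64, 1001])
```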
We explored widely-used optimisers, such as _Adaptive Moment Estimation_ (Adam) and _stochastic gradient descent_ (SGD) with momentum, as well as less common optimisation algorithms, such as the second-order _Kronecker-factored approximate curvature_ (KFAC) and _gradient descent: the ultimate optimiser_ (GDTUO). KFAC utilises approximations to the Hessian matrix to improve convergence speed, while GDTUO automatically adjusts hyperparameters using a stack of multiple optimisers, which in this case involves two stacked Adam optimisers, called hyperoptimisers. However, both KFAC and GDTUO resulted in higher computational costs in terms of runtime and memory requirements per optimisation step. We ran a grid search over hyperparameters, as listed in Table 1. We additionally applied a learning rate of 1e-5 for the KFAC optimiser and excluded the learning rate 1e-4 for CNN14 with the SGD optimiser in order to prevent suboptimal convergence. Given some hardware limitations for the experiments, we only utilised the second-order optimisers for the CNN10 architecture, leading to 38 trained model states overall. Besides the learning rate, we used default parameters for the optimisers, with SGD using a momentum of 0.9. In all cases, the training was stopped after 50 epochs and the model state of the epoch with the highest accuracy on the development set was used for testing. The training is implemented in PyTorch 1.13.1+cu117 and models were trained on an NVIDIA GeForce GTX TITAN X and an NVIDIA TITAN X (Pascal), both with 12GB RAM. The training time per epoch mostly varied depending on the chosen optimiser, ranging from approximately four minutes for the SGD and Adam optimisers to slightly over six minutes for KFAC, and up to around 18 minutes for GDTUO. Our code and trained model states are publicly available.1

Footnote 1: [https://github.com/EIHW/ASC_Sharpness](https://github.com/EIHW/ASC_Sharpness)

### On the robustness towards random directions

Even though not emphasised by the authors of the filter-normalisation method, the choice of the random Gaussian directions should have some impact on the measured or perceived sharpness of a given minimum. To mitigate this impact, the authors of [24] use more directions in the parameter space, while [25] suggests analysing projections along Hessian directions as an alternative. Nevertheless, most interpretations of the sharpness of minima are limited to (statistics of) a low-dimensional analysis and often show consistent trends across different random directions [26, 27, 28]. We tested the robustness of our sharpness measure by calculating it based on three plots with different random directions. In order to stay in line with the visual argumentation of the plots, as well as the characteristics of the filter-normalisation, we chose a neighbourhood of radius \(0.25\) to calculate the sharpness. Due to the high computational costs of such visualisations, the resolution was set to 0.025 in each direction, leading to 121 loss values per visualisation. The time required to compute one sharpness value in this scenario is around 45 minutes on a single NVIDIA A40 GPU with 16GB RAM. Figure 2 shows the mean sharpness and standard deviation for each trained model based on three different plots per model. Most model states show a relatively low standard deviation compared to the mean sharpness, allowing us to further interpret the sharpness in different settings.
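One possible reading of this grid-based sharpness measure in code is given below; the exact grid layout is our assumption based on the stated radius and resolution, and in practice the surface would come from the `loss_surface` helper sketched earlier:

```python
import torch

def grid_sharpness(surface, alphas, betas, eps=0.25):
    """Sharpness of a 2-D filter-normalised visualisation: largest loss within
    distance eps of the grid minimum, relative to that minimum (cf. Eq. (3))."""
    i0, j0 = divmod(int(surface.argmin()), surface.shape[1])
    dist2 = (alphas[:, None] - alphas[i0]) ** 2 + (betas[None, :] - betas[j0]) ** 2
    neighbourhood = surface[dist2 <= eps ** 2]
    l_min = surface[i0, j0]
    return float((neighbourhood.max() - l_min) / (1 + l_min) * 100)

# Synthetic demonstration on a quadratic bowl; in practice `surface` comes from
# `loss_surface`, recomputed for three independent pairs of random directions
# to obtain the mean and standard deviation shown in Figure 2.
alphas = betas = torch.arange(-0.25, 0.2501, 0.025)
bowl = 4.0 * (alphas[:, None] ** 2 + betas[None, :] ** 2)
print(grid_sharpness(bowl, alphas, betas))  # sharper bowls give larger values
```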
A few exceptions with high standard deviations indicate some limitations of this approach, which might, however, be mitigated by sampling more sharpness measures per model. Similar analyses of the stability of sharpness measures with respect to different random directions have previously been reported [27].

### On the impact of sharpness on generalisation

In order to gain insights into the generalisation capabilities of flat and sharp minima in ASC, we plot the test accuracies of the trained model states against their mean sharpness value in Figure 3, considering the accuracy for ID and OOD data separately. To that end, we define OOD performance as the accuracy evaluated on the devices not represented in the training data, namely S4, S5, and S6, whilst ID performance is evaluated on the devices A, B, C, S1, S2 and S3, which are known at training time. Note that all discussed model states show a nearly 100% accuracy on the training data, such that one minus the test accuracy can be interpreted as the generalisation gap. Firstly, we note a tendency that, in our experiments, sharper minima show a better generalisation than flat minima. This is a rather surprising finding, as most of the existing literature reports preferable characteristics of flat minima in the computer vision domain, e.g., [5, 19, 2, 29, 30, 31, 32], whilst only a few studies report on good generalisation in the context of sharp minima [33, 9].

\begin{table} \begin{tabular}{l c} Parameter & Values \\ Network & CNN10, CNN14 \\ Optimiser & SGD, Adam, GDTUO2, KFAC \\ Learning Rate & \(1\mathrm{e}-3\), \(1\mathrm{e}-4\), \(1\mathrm{e}-5\) \\ Batch Size & \(16\), \(32\) \\ Random Seeds & \(42\), \(43\) \\ \end{tabular} \end{table} Table 1: Overview of the grid search parameters for model training.

Figure 2: Distribution of sharpness measures. Each bar indicates the mean sharpness value with the standard deviation of a trained model state in three two-dimensional plots with different random directions.

Further investigations are necessary to unravel whether our results are an indication of a general disparity in the impact of sharpness on generalisation between acoustic scene classification and image classification. Critical differences in the learning of computer audition models compared to computer vision models have been reported in our previous work: when fine-tuning a CNN for a computer audition task, the first layers were subject to more changes than the later layers [34]. This finding contradicts the common understanding, resulting from computer vision analyses, of earlier filters being trained to recognise low-complexity objects, such as edges, and thus being transferable without major changes amongst different tasks. Moreover, this effect seems to be considerably higher for OOD accuracy compared to ID accuracy, as we observe a correlation of \(.49\) in the former and a correlation of \(.28\) in the latter case. Based on our exploratory analysis, we hypothesise that flatter minima are over-optimised for the ID devices -in particular, device A, which dominates the training set- and thus fail to generalise well to unseen devices. Nevertheless, the reasons for positive correlations between sharpness and generalisation are not obvious at this moment and should be further looked into.

### On the impact of hyperparameters on sharpness

As a final aspect, we analyse the impact of the choice of different hyperparameters or experimental settings on the sharpness and compare these to the corresponding impact on test accuracy (a sketch of this disaggregation follows below).
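The disaggregated comparison amounts to a simple group-wise average over the trained model states. The frame below uses toy values purely for illustration; they are not the measured results:

```python
import pandas as pd

# Toy stand-in for the 38 trained model states; all values are illustrative.
runs = pd.DataFrame(
    [("CNN10", "Adam", 1e-4, 16, 3.1, 0.64),
     ("CNN10", "SGD",  1e-3, 32, 1.2, 0.59),
     ("CNN14", "Adam", 1e-4, 16, 2.7, 0.63),
     ("CNN14", "SGD",  1e-3, 32, 0.9, 0.57)],
    columns=["network", "optimiser", "lr", "batch", "sharpness", "test_acc"],
)

# Overall sharpness-accuracy correlation (cf. the ID/OOD correlations above)
print(runs["sharpness"].corr(runs["test_acc"]))

# Average sharpness and accuracy per hyperparameter value (cf. Figure 4)
for factor in ["network", "optimiser", "lr", "batch"]:
    print(runs.groupby(factor)[["sharpness", "test_acc"]].mean(), "\n")
```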
Figure 4 suggests that both sharpness and accuracy are similarly affected by the training parameters. Certain hyperparameters lead to a higher value in both subplots compared to the other hyperparameters in the group, except for the batch size. This result is in line with our previous findings of sharper minima tending to have better generalisation. However, upon closer examination, it becomes apparent that the amount by which both subplots are affected by a certain group can vary considerably, as the selection of optimisers seems to have the highest impact on sharpness, which is not the case for the test accuracy. This provides us with some insights about when a deduction of generalisation from sharpness might be more reasonable, as, for instance, different optimisers seem to bring different tendencies in sharpness, which might not fully translate to generalisation. A remarkable similarity between average mean sharpness and average test accuracy can, however, be observed for the two model architectures, whose sharpness derives from a different(-dimensional) loss landscape. Note that the choices of learning rates and optimisers were not independent of each other, which limits their separate expressiveness.

### Limitations

One of the limitations of our approach lies in the robustness of the sharpness measure, which might, however, be overcome by more efficient implementations, allowing for the consideration of additional random directions. Beyond that, a more thorough analysis of the convergence status of models and its impact on the sharpness measure and generalisation seems desirable. Especially considering that not all experimental details could be investigated in depth, this contribution can only be a piece in the debate about flat versus sharp minima in ASC in particular and computer audition in general. Furthermore, the reasons for the good generalisation capabilities of sharp minima in our exploratory study need to be further investigated, as the impact of individual hyperparameters on the training needs to be better understood.

## 4 Conclusions

In this contribution, we explored the sharpness of minima in the loss function for acoustic scene classification models and its impact on the generalisation capabilities in different, practice-relevant, experimental settings. We found that for our trained models, sharper minima generalised better to unseen (in particular to OOD) data, which has rarely been observed in the computer vision domain. Our approach shows some limitations, as, for instance, the choice of optimisers has a higher impact on the sharpness of minima than on the generalisation. In future work, we plan to focus on more efficient and interpretable implementations of sharpness measures and to better understand the individual effects of hyperparameters before our findings can be put into practice.

## 5 Acknowledgements

This work was partially funded by the DFG's Reinhart Koselleck project No. 442218748 (AUDI0NOMOUS).

Figure 4: Disaggregated distribution of mean sharpness and accuracy across hyperparameters. Each bar averages the mean sharpness or accuracy of all trained model states, grouped by the different types of hyperparameters.

Figure 3: Correlation plot between the sharpness of minima (the higher, the sharper) and test accuracy for all trained models, showing the best-fit line and 95% confidence intervals for the different models.
The correlation between the sharpness of loss minima and generalisation in deep neural networks has been the subject of discussion for a long time. Whilst mostly investigated on selected benchmark data sets in the area of computer vision, we explore this aspect here for the acoustic scene classification task of the DCASE2020 challenge. Our analysis is based on two-dimensional filter-normalised visualisations and a derived sharpness measure. Our exploratory analysis shows that sharper minima tend to generalise better than flat minima, even more so for out-of-domain data recorded from previously unseen devices, thus adding to the dispute about the better generalisation capabilities of flat minima. We further find that, in particular, the choice of optimisers is a main driver of the sharpness of minima, and we discuss the resulting limitations with respect to comparability. Our code, trained model states, and loss landscape visualisations are publicly available.
2309.09249
LiteTrack: Layer Pruning with Asynchronous Feature Extraction for Lightweight and Efficient Visual Tracking
The recent advancements in transformer-based visual trackers have led to significant progress, attributed to their strong modeling capabilities. However, as performance improves, running latency correspondingly increases, presenting a challenge for real-time robotics applications, especially on edge devices with computational constraints. In response to this, we introduce LiteTrack, an efficient transformer-based tracking model optimized for high-speed operations across various devices. It achieves a more favorable trade-off between accuracy and efficiency than the other lightweight trackers. The main innovations of LiteTrack encompass: 1) asynchronous feature extraction and interaction between the template and search region for better feature fusion and cutting redundant computation, and 2) pruning encoder layers from a heavy tracker to refine the balance between performance and speed. As an example, our fastest variant, LiteTrack-B4, achieves 65.2% AO on the GOT-10k benchmark, surpassing all preceding efficient trackers, while running over 100 fps with ONNX on the Jetson Orin NX edge device. Moreover, our LiteTrack-B9 reaches competitive 72.2% AO on GOT-10k and 82.4% AUC on TrackingNet, and operates at 171 fps on an NVIDIA 2080Ti GPU. The code and demo materials will be available at https://github.com/TsingWei/LiteTrack.
Qingmao Wei, Bi Zeng, Jianqi Liu, Li He, Guotian Zeng
2023-09-17T12:01:03
http://arxiv.org/abs/2309.09249v1
# LiteTrack: Layer Pruning with Asynchronous Feature Extraction for Lightweight and Efficient Visual Tracking

###### Abstract The recent advancements in transformer-based visual trackers have led to significant progress, attributed to their strong modeling capabilities. However, as performance improves, running latency correspondingly increases, presenting a challenge for real-time robotics applications, especially on edge devices with computational constraints. In response to this, we introduce LiteTrack, an efficient transformer-based tracking model optimized for high-speed operations across various devices. It achieves a more favorable trade-off between accuracy and efficiency than the other lightweight trackers. The main innovations of LiteTrack encompass: 1) asynchronous feature extraction and interaction between the template and search region for better feature fusion and cutting redundant computation, and 2) pruning encoder layers from a heavy tracker to refine the balance between performance and speed. As an example, our fastest variant, LiteTrack-B4, achieves 65.2% AO on the GOT-10k benchmark, surpassing all preceding efficient trackers, while running over 100 _fps_ with ONNX on the Jetson Orin NX edge device. Moreover, our LiteTrack-B9 reaches competitive 72.2% AO on GOT-10k and 82.4% AUC on TrackingNet, and operates at 171 _fps_ on an NVIDIA 2080Ti GPU. The code and demo materials will be available at [https://github.com/TsingWei/LiteTrack](https://github.com/TsingWei/LiteTrack).

## I Introduction

Visual object tracking is a fundamental task in computer vision, which aims to track an arbitrary object given its initial state in a video sequence. In recent years, with the development of deep neural networks [1, 2, 3, 4], tracking has made significant progress. In particular, the utilization of transformers [4] has played a pivotal role in the development of several high-performance trackers [5, 6, 7, 8, 9, 10, 11]. Unfortunately, a majority of recent research efforts [5, 12, 13] have concentrated solely on achieving high performance without considering tracking speed. While these state-of-the-art trackers might deliver real-time performance on powerful GPUs, their efficiency diminishes on devices with limited computational resources. For instance, ARTrack [14], considered a top-tier tracker, reaches a tracking speed of 37 frames per second (_fps_) on the NVIDIA RTX 2080Ti GPU but drops to 5 _fps_ on the Nvidia Jetson Orin NX, a common edge device. This underscores the pressing need for trackers that effectively strike a balance between performance and speed. The one-stage structure has gained popularity in tracking applications [10, 16, 9, 17]. This structure combines feature extraction and fusion as a joint process, as pictured in Fig. 2 (a), leveraging the capabilities of the transformer network, especially the ViT [18] pre-trained by mask-image-modeling (MIM) [19, 20]. Conversely, two-stage trackers [5, 6, 21], operating by sequentially extracting features and then fusing them, benefit from caching the template features during the testing phase, as shown in Fig. 2 (b), whereas one-stage trackers cannot. Even though most one-stage trackers run faster than two-stage ones, we can further accelerate the former with a similar caching technique.
Inspired by ViTDet [22], we find that the last-layer template features alone are sufficient, and even better, for fusion with the search features from various earlier layers, and that they can be cached during testing as in two-stage trackers. This naturally determines our overall design: the feature extraction of the template is performed first and individually; the extracted last-layer template features then interact with the feature extraction of the search region, as shown in Fig. 2(c). Traditional efficient trackers have primarily sought to achieve faster runtimes by directly incorporating an initially lightweight-designed network as their backbone. These lightweight networks are designed for efficiency, which results in relatively mediocre performance in their upstream tasks like image classification. Consequently, when such networks are utilized in visual tracking, their performance leaves much to be desired.

Fig. 1: Performance comparison of LiteTrack against state-of-the-art trackers on GOT-10k in terms of Average Overlap and RTX 2080Ti speed. Marker styles distinguish non-real-time and real-time trackers, based on Nvidia Jetson Orin NX speed (see Tab. II). Our LiteTrack family offers comparable accuracy to all other trackers, significantly outpacing them in inference speed. Notably, LiteTrack-B4 achieves over 300 _fps_ on the 2080Ti and 100 _fps_ (ONNX) on the edge device. Notice that our LiteTrack delivers the best real-time accuracy while being trained without extra data, unlike the other efficient trackers.

In contrast, our approach derives an efficient model by scaling down a high-performing heavy tracker instead of starting from a lightweight architecture. This strategy is inspired by our observation, as depicted in Fig. 3, that early layers pay sufficient attention to the target. By pruning network layers and integrating our novel asynchronous feature extraction technique, we ensure only a marginal drop in performance even when multiple layers are excised. Consequently, LiteTrack not only rivals the performance of its heavyweight peers but also competes in runtime with lightweight models, presenting an optimal trade-off. Fig. 1 reinforces this assertion, showcasing LiteTrack's commendable performance on the challenging GOT-10k [23] benchmark, standing shoulder-to-shoulder with state-of-the-art (SOTA) trackers. Our contributions are summarized as follows:

* An efficient tracking architecture in which the feature extraction of the template and the search region is asynchronous, proposed to reduce redundant computation.
* A novel scaling principle for tracking models, introduced by adjusting encoder layers to trade off accuracy and speed.
* Comprehensive evaluations on authoritative generic visual tracking benchmarks validating the excellent performance of LiteTrack compared with other SOTA trackers. Edge-device deployment is tested with promising performance, demonstrating the applicability of LiteTrack to robotics.

## II Related Works

### _Visual Tracking with Transformers._

Visual tracking has seen the rise of Siamese-based methods [24, 25, 26, 27, 28, 29, 30, 12] that typically employ dual-backbone networks with shared parameters. They have been instrumental in the field due to their efficiency in feature extraction of the template and search region images. Further advancements introduced transformers [4] into the tracking community [31, 32, 33, 34, 5, 7], leveraging them for feature interaction trained from scratch.
The emergence of the one-stream framework [16, 17, 35, 9] showcased improved performance by integrating feature extraction and fusion within the backbone network, enjoying the powerful mask-image-modeling (MIM) pretraining method [19, 20, 36]. Despite their effectiveness, these methods, tailored for powerful GPUs, often falter in speed on edge devices. In response, our research incorporates the last-layer features of the template directly into the search region's feature extraction, providing a cache-in-testing ability similar to two-stage trackers while also enjoying the powerful pretraining.

### _Lightweight Trackers._

Efficiency in tracking is crucial for practical robotics applications, especially on edge devices. Early methods such as ECO [37] and ATOM [38] focused on real-time operation but did not achieve the accuracy levels of newer trackers. Recent advancements [39, 40, 41] have employed lightweight-designed backbones for efficient real-time tracking. However, these solutions still show a performance gap when compared to SOTA heavyweight trackers [10, 6, 9]. There have been efforts to refine these advanced trackers: OSTrack [10] considered pruning non-essential features unrelated to the foreground, while SimTrack [16] suggested removing the last four layers to cut computational costs. Despite these modifications, there remains a lack of deep exploration into truly real-time lightweight tracking architectures. Our proposed LiteTrack fills this gap, combining efficiency and performance for effective tracking on edge devices.

## III Proposed Method

This section presents the LiteTrack method in detail. First, we briefly overview our LiteTrack framework. Then, we describe the asynchronous feature extraction process, the layer pruning of our model, and the head network together with the training objective.

Fig. 3: Visualization of the attention map (average attention value over all template features attending to the search features) of the 2nd, 6th and 8th layer in the twelve-layer encoder of JNTrack [15]. The model focuses nearly precisely on the target even in the early stages of the encoder.

Fig. 2: Comparison of the popular architectures for visual tracking. Our method (c) is able to cache the template features during testing like the two-stage method (b) and also enjoys the powerful pretraining techniques like the one-stage method (a).

### _Overview_

As shown in Fig. 4, LiteTrack is a combination of the one-stage and two-stage tracking frameworks, consisting of two components: the lightweight transformer encoder and the head network. The template of the target to be tracked is first fed individually into the lightweight transformer encoder for feature extraction. Then the image of the search region is fed into the first \(n\) layers of the same encoder. We call these first \(n\) layers the _Feature Extraction Stage_ (FE). Next, the extracted last-layer template features, together with the intermediate search features from the feature extraction stage, are fed as a concatenated sequence into the remaining encoder layers. We call these final layers the _Asynchronous Interaction Stage_ (AI). Finally, only the search part of the final sequence is selected, reshaped into a 2D feature map, and fed into the head network for the tracking result.

### _Asynchronous Feature Extraction_

_Asynchronous_ here means that we extract the template features first and the search features afterwards. The feature extraction is done by the transformer encoder, in which each layer mainly consists of multi-head attention.
Specifically, for a template image \(\mathbf{Z}\in\mathbb{R}^{3\times H_{z}\times W_{z}}\), the patch embedding layer transforms the image into a sequence of tokens \(\mathbf{Z}_{p}\in\mathbb{R}^{C\times\frac{H_{z}}{16}\times\frac{W_{z}}{16}}\). In each layer of the transformer encoder, the main operation is multi-head self-attention: \[\mathrm{Attn}_{z}=\mathrm{softmax}\left(\frac{\mathbf{Q}_{z}\mathbf{K}_{z}^{\top}}{\sqrt{d_{k}}}\right)\mathbf{V}_{z}. \tag{1}\] The feature extraction of the template is performed by serially stacked encoder layers. For a search image \(\mathbf{X}\in\mathbb{R}^{3\times H_{x}\times W_{x}}\), the same patch embedding layer also transforms the image into a sequence of tokens \(\mathbf{X}_{p}\in\mathbb{R}^{C\times\frac{H_{x}}{16}\times\frac{W_{x}}{16}}\). In the _feature extraction stage_, the search tokens only attend to themselves in the multi-head attention operation: \[\mathrm{Attn}_{x}=\mathrm{softmax}\left(\frac{\mathbf{Q}_{x}\mathbf{K}_{x}^{\top}}{\sqrt{d_{k}}}\right)\mathbf{V}_{x}. \tag{2}\] In the _asynchronous interaction stage_, the extracted last-layer template tokens and the intermediate search tokens from the _feature extraction stage_ are concatenated as the input of the encoder layer. Inspired by MixFormer [9], the attention in the layers within the interaction stage is a little different from the standard self-attention: we generate queries \(\mathbf{Q}\) only from the search features. Thus the attention process can be written as: \[\mathrm{Attn}_{xz}=\mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_{k}}}\right)\mathbf{V}=\mathrm{softmax}\left(\frac{\mathbf{Q}_{x}[\mathbf{K}_{x};\mathbf{K}_{z}]^{\top}}{\sqrt{d_{k}}}\right)[\mathbf{V}_{x};\mathbf{V}_{z}]. \tag{3}\] Though the attention computations differ between the two stages, the network parameters have the same structure. Therefore, the template branch and the search branch of the network can share weights.

_Analysis:_ Our method depends on one critical development of recent trackers: the application of backbones with a homogeneous structure, such as the ViT [18] encoder. The channel dimension of the features, or the token dimension in the context of the transformer, remains unchanged throughout the encoder layers; therefore, the template features can interact with the search features from any intermediate encoder layer. This introduces our first model-scaling principle: adjusting the layers of the feature extraction stage and the asynchronous interaction stage for the accuracy-speed trade-off, as discussed in Sec. IV-C.

Fig. 4: Overview of the proposed LiteTrack-B6 tracker, consisting of 3 layers in the feature extraction (FE) stage and 3 layers in the asynchronous interaction (AI) stage. For simplicity, we omit the position encoding, skip connections and MLPs in the figure. The two network branches for the template and the search region share the same weights.

During testing, synchronous and symmetric feature extraction methods, such as OSTrack [10] shown in Fig. 5(a), often result in redundant computations for the template. Given that the template, typically the initial frame of a video sequence, remains unchanged, its features also remain constant. By caching these template features, we eliminate unnecessary computations during testing. In contrast to MixFormer, which caches every layer of template features as depicted in Fig. 5(b), our method conserves memory by only storing the last layer of template features, as shown in Fig. 5(c).
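The asymmetric attention of Eq. (3) is straightforward to express in code. The following single-head PyTorch sketch omits the multi-head split, MLP, skip connections and position encoding; the shapes and projection setup are illustrative rather than the exact LiteTrack configuration:

```python
import torch
import torch.nn.functional as F

def asymmetric_attention(x_tokens, z_tokens, w_q, w_k, w_v, d_k):
    """Eq. (3): queries come from the search tokens only, while keys and
    values come from the concatenated [search; template] sequence.

    x_tokens: (B, N_x, C) intermediate search features
    z_tokens: (B, N_z, C) cached last-layer template features
    """
    q = x_tokens @ w_q                              # (B, N_x, d_k)
    kv_in = torch.cat([x_tokens, z_tokens], dim=1)  # (B, N_x + N_z, C)
    k, v = kv_in @ w_k, kv_in @ w_v                 # (B, N_x + N_z, d_k)
    attn = F.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return attn @ v                                 # (B, N_x, d_k)

# Toy shapes: 8x8 template tokens, 16x16 search tokens, C = d_k = 768
B, C = 1, 768
x, z = torch.randn(B, 256, C), torch.randn(B, 64, C)
w_q, w_k, w_v = (torch.randn(C, C) * C ** -0.5 for _ in range(3))
print(asymmetric_attention(x, z, w_q, w_k, w_v, d_k=C).shape)  # (1, 256, 768)
```

Because the template tokens never issue queries, the cached last-layer template features can be concatenated into every interaction-stage layer at test time without being recomputed.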
### _Layer Pruning_

In the pursuit of enhancing object tracking performance, deep neural networks have grown increasingly complex, often at the expense of computational efficiency. Layer pruning offers an avenue to mitigate this by systematically reducing the number of layers in the network. Starting with a 12-layer ViT encoder, we adopted a top-down pruning strategy, progressively eliminating layers and assessing performance against a baseline. As illustrated in Fig. 6, the performance dropped as layers were pruned, while the speed rose significantly. However, when paired with asynchronous feature extraction, the performance drop caused by layer pruning is moderated. For example, our 9-layer variant combined with asynchronous feature extraction outperforms its 12-layer counterpart, as shown in Fig. 6.

### _Head and Training Objective_

We employ the center head [10] for prediction, which consists of three convolutional branches for center classification, offset regression and size regression, respectively. The center classification branch outputs a centerness score map, where each score represents the confidence of the target center being located at the corresponding position. The offset regression branch predicts the discretization error of the center. The size regression branch predicts the height and width of the target. The position with the highest confidence in the center score map is selected as the target position, and the corresponding regressed coordinates are used to compute a bounding box as the final prediction. We apply the weighted focal loss [42] for classification. For localization, we combine the \(\ell_{1}\) loss and the generalized IoU (GIoU) loss [43] as the training objective. The overall loss function can be formulated as \[\mathcal{L}=\mathcal{L}_{\mathrm{focal}}+\lambda_{G}\mathcal{L}_{\mathrm{GIoU}}+\lambda_{l}\mathcal{L}_{l}, \tag{4}\] where \(\lambda_{G}=2\) and \(\lambda_{l}=5\) are trade-off weights following [10] to balance optimization.

## IV Experiments

_Training._ Training takes approximately 9 hours for GOT-10k and 24 hours for the other benchmarks on one RTX 3090 GPU, and the lighter models train faster.

_Testing._ During inference, the template is initialized in the first frame of a video sequence. For each subsequent frame, the search region is cropped based on the target's bounding box of the previous frame. We adopt a Hanning window penalty to utilize positional priors such as scale change and motion smoothness in tracking, following common practice [10, 33]. The output scores are simply element-wise multiplied by the Hanning window of the same size, and we choose the box with the highest multiplied score as the target box (a minimal sketch of this step is given below).

### _State-of-the-art Comparisons_

LiteTrack is benchmarked against state-of-the-art trackers, both real-time and non-real-time, across six tracking datasets. We evaluated the speed of these trackers on two distinct platforms: an Nvidia GeForce RTX 2080Ti GPU (with an Intel i5-11400F CPU) and an Nvidia Jetson Orin NX 16GB edge device. For these tests, we utilized PyTorch 1.12.0 on the former and PyTorch 2.0.0 @ JetPack 5.1 on the latter. Trackers are categorized into real-time and non-real-time based on their PyTorch speed on the Orin NX device, following the 20 _fps_ real-time setting of VOT [56]. Detailed comparative results are showcased in Tables II and III. We also report our tracker's speed accelerated with ONNX fp16 in Tab. I.
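Before turning to the benchmark comparisons, the Hanning-window penalty described under _Testing_ can be sketched as follows. The score-map layout is a simplifying assumption, and no extra weighting between the raw score and the window is applied here:

```python
import torch

def apply_hanning_penalty(score_map):
    """Multiply the center-head score map by a same-sized Hanning window,
    favouring locations near the previous target position.

    score_map: (H, W) centerness scores from the head network
    """
    h, w = score_map.shape
    window = torch.outer(torch.hann_window(h, periodic=False),
                         torch.hann_window(w, periodic=False))
    penalised = score_map * window
    idx = int(penalised.argmax())      # most confident location after penalty
    return divmod(idx, w), penalised

# Toy 16x16 score map
(cy, cx), _ = apply_hanning_penalty(torch.rand(16, 16))
print(cy, cx)
```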
_GOT-10k._ GOT-10k [23] is a large-scale and challenging dataset that contains 10k training sequences and 180 test sequences whose object classes do not overlap with the training set. The official one-shot protocol requires the evaluated tracker to be trained without extra data, which encourages models designed for challenging scenarios such as unseen objects. We report the Average Overlap (AO), the Success Rate at an overlap rate of 50% (SR\({}_{0.5}\)) and at 75% (SR\({}_{0.75}\)), obtained by submitting the results to the official evaluation server. As shown in Table II, LiteTrack-B9 achieves the best real-time result of 72.2% AO, which is also competitive with the best non-real-time tracker ARTrack-256 [14] (73.5% AO). Our LiteTrack-B4 surpasses all the real-time trackers with an AO score of 65.2%, even though our trackers are trained without extra data.

_TrackingNet._ TrackingNet [44] is a large-scale dataset containing a variety of situations in natural scenes and multiple categories, and its test set includes 511 video sequences. We report the Area Under Curve (AUC), Normalized Precision (P\({}_{Norm}\)) and Precision (P) obtained by submitting the tracking results to the official evaluation server. As reported in Table II, the LiteTrack series achieves competitive results compared with the previous real-time trackers.

\begin{table} \begin{tabular}{l l|c c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Source} & \multicolumn{3}{c|}{TrackingNet [44]} & \multicolumn{3}{c|}{LaSOT [45]} & \multicolumn{3}{c|}{GOT-10k* [23]} & \multicolumn{2}{c}{Speed (_fps_)} \\ & & AUC & P\({}_{Norm}\) & P & AUC & P\({}_{Norm}\) & P & AO & SR\({}_{0.5}\) & SR\({}_{0.75}\) & 2080Ti & OrinNX \\ \hline \multicolumn{13}{l}{_Real-time trackers:_} \\ LiteTrack-B9 & Ours & **82.4** & **87.3** & **80.4** & **67.0** & **77.0** & **72.7** & **72.2** & **82.3** & **69.3** & 171 & 21 \\ LiteTrack-B8 & Ours & 81.4 & 86.4 & 79.4 & 66.4 & 76.4 & 71.4 & 70.4 & 80.1 & 66.4 & 190 & 25 \\ LiteTrack-B6 & Ours & 80.8 & 85.7 & 78.2 & 64.6 & 73.9 & 68.9 & 68.7 & 78.2 & 64.2 & 237 & 31 \\ LiteTrack-B4 & Ours & 79.9 & 84.9 & 76.6 & 62.5 & 72.1 & 65.7 & 65.2 & 74.7 & 57.7 & 315 & 44 \\ HiT-Base [41] & ICCV'23 & 80.0 & 84.4 & 77.3 & 64.6 & 73.3 & 68.1 & 64.0 & 72.1 & 58.1 & 175 & - \\ E.T.Track [46] & WACV'23 & 75.0 & 80.3 & 70.6 & 59.1 & - & - & - & - & - & 67 & 21 \\ FEAR-XS [49] & ECCV'22 & - & - & - & 53.5 & - & 54.5 & 61.9 & 72.2 & - & 182 & 50 \\ HCAT [47] & ECCV'22 & 76.6 & 82.6 & 72.9 & 59.3 & 68.7 & 61.0 & 65.1 & 76.5 & 56.7 & 235 & 34 \\ LightTrack [39] & CVPR'21 & 72.5 & 77.8 & 69.5 & 53.8 & - & 53.7 & 61.1 & 71.0 & - & 107 & 38 \\ HiFT [48] & ICCV'21 & 66.7 & 73.8 & 60.9 & 45.1 & 52.7 & 42.1 & - & - & - & 230 & 50 \\ ECO [37] & CVPR'17 & 55.4 & 61.8 & 49.2 & 32.4 & 33.8 & 30.1 & 39.5 & 40.7 & 17.0 & 113 & 22 \\ \hline \multicolumn{13}{l}{_Non-real-time trackers:_} \\ ARTrack-256 [14] & CVPR'23 & _84.2_ & _88.7_ & 83.5 & _70.4_ & _79.5_ & _76.6_ & _73.5_ & 82.2 & _70.9_ & 37 & 6 \\ GRM-256 [35] & CVPR'23 & 84.0 & 88.7 & 83.3 & 69.9 & 79.3 & 75.8 & 73.4 & 82.9 & 70.4 & 79 & 10 \\ OSTrack-256 [10] & ECCV'22 & 83.1 & 87.8 & 82.0 & 69.1 & 78.7 & 75.2 & 71.0 & 80.4 & 68.2 & 140 & 18 \\ MixFormer [9] & CVPR'22 & 83.1 & 88.1 & 81.6 & 69.2 & 78.7 & 74.7 & 70.7 & 80.0 & 67.8 & 48 & 12 \\ SimTrack-B/16 [16] & ECCV'22 & 82.3 & 86.5 & - & 69.3 & 78.5 & - & 68.6 & 78.9 & 62.4 & 131 & 15 \\ STARK-ST50 [6] & ICCV'21 & 81.3 & 86.1 & - & 66.6 & - & - & 68.0 & 77.7 & 62.3 & 61 & 13 \\ TransT [5] & CVPR'21 & 81.4 & 86.7 & 80.3 & 64.9 & 73.8 & 69.0 & 67.1 & 76.8 & 60.9 & 84 & 13 \\ TrDiMP [7] & CVPR'21 & 78.4 & 83.3 & 73.1 & 63.9 & - & 61.4 & 67.1 & 77.7 & 58.3 & 36 & 6 \\ PrDiMP [50] & CVPR'20 & 75.8 & 81.6 & 70.4 & 59.8 & 68.8 & 60.8 & 63.4 & 73.8 & 54.3 & 47 & 12 \\ DiMP [13] & ICCV'19 & 74.0 & 80.1 & 68.7 & 56.9 & 65.0 & 56.7 & 61.1 & 71.7 & 49.2 & 100 & 16 \\ SiamRPN++ [26] & CVPR'19 & 73.3 & 80.0 & 69.4 & 49.6 & 56.9 & 49.1 & 51.7 & 61.6 & 32.5 & 83 & 16 \\ ATOM [38] & CVPR'19 & 70.3 & 77.1 & 64.8 & 51.5 & 57.6 & 50.5 & 55.6 & 63.4 & 40.2 & 175 & 15 \\ \hline \hline \end{tabular} \end{table} TABLE II: State-of-the-art comparison on the TrackingNet [44], LaSOT [45], and GOT-10k [23] benchmarks. The best three real-time results are shown in **red**, blue and green fonts, and the best non-real-time results are shown in **underline** font. * denotes results on GOT-10k obtained following the official one-shot protocol, with gray font indicating training using extra data.
\begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & NFS [51] & UAV123 [52] & VOT'21 [53] \\ & AUC & AUC & EAO \\ \hline LiteTrack-B9 & **65.4** & **67.7** & **0.269** \\ LiteTrack-B8 & 64.6 & 67.1 & 0.261 \\ LiteTrack-B6 & 64.4 & 66.2 & 0.254 \\ LiteTrack-B4 & 63.4 & 66.4 & 0.251 \\ HiT-Base [41] & 63.6 & 65.6 & 0.252 \\ HCAT [47] & 63.5 & 62.7 & - \\ FEAR-XS [49] & - & - & - \\ \hline \hline \end{tabular} \end{table} TABLE III: Comparison with efficient trackers on the NFS [51], UAV123 [52] and VOT2021 [53] benchmarks.

LiteTrack-B9 gets the best AUC of 82.4%, surpassing the previous best real-time tracker HiT-Base [41] by 2.4%. Compared to the non-real-time tracker ARTrack [14], LiteTrack-B9 achieves comparable performance in AUC (82.4 _vs._ 84.2) while being \(4.5\times\) faster on the GPU and \(6\times\) faster on the Jetson edge platform.

_LaSOT._ LaSOT [45] is a large-scale, long-term dataset containing 1400 video sequences, with 1120 training videos and 280 test videos. We report the same metrics as for TrackingNet, evaluated with the PyTracking2 tools. The results on LaSOT are shown in Table II. LiteTrack-B9 achieves the best real-time results of 67.0%, 77.0%, and 72.7% in AUC, P\({}_{Norm}\), and P, respectively. LiteTrack-B8 and LiteTrack-B6 achieve the second- and third-best AUC scores. Compared with the recent efficient tracker HiT-Base [41], LiteTrack-B9 outperforms it by 2.4% in AUC.

Footnote 2: [https://github.com/visionml/pytracking](https://github.com/visionml/pytracking)

_NFS, UAV123 and VOT2021._
On the NFS dataset [51], which spans 100 video sequences and is known for its fast-moving objects, our LiteTrack variants B9, B8, and B6 emerge as the top three real-time trackers, as highlighted in Table III. Meanwhile, on the UAV123 dataset [52], which features 123 video clips captured from low-altitude UAVs, even our fastest LiteTrack-B4 leads the real-time trackers with an AUC score of 66.4%, surpassing competitors such as HiT [41] and HCAT [47] by margins of 0.8% and 3.7%, respectively. Similarly, in the VOT2021 real-time experiments [53], LiteTrack-B9 achieves the highest EAO score (0.269) among real-time trackers, as tabulated in Table III.

### _Ablation Study and Visualization_

_Component-wise Analysis._ We underscore the significance of our proposed methods through a comparative study built upon OSTrack [10]. To set a solid baseline, we enhance OSTrack by substituting its MAE [19] pretrained weights with those of CAE [20]; the outcome is listed in Table IV, Row 2. Direct layer pruning, as seen in Row 3, leads to a marked decline in performance. However, when integrated with our asynchronous feature extraction (Row 4), not only is the deficit recovered, but the model also achieves superior accuracy and efficiency, surpassing even the strong baseline.

_Layer Configuration Analysis._ We explore various configurations of the ratio of feature extraction (FE) layers to asynchronous interaction (AI) layers, as depicted in Table V. For configurations with 8 total layers, peak performance is achieved when a majority of the layers are dedicated to FE. The 6-layer configurations show comparable results, especially with an even FE-to-AI ratio. Notably, in the 4-layer configurations, a balanced 2:2 FE-to-AI setup still produces respectable results. The data highlight the model's adaptability across different layer configurations and offer insights into achieving an optimal balance between FE and AI layers.

_Qualitative Results._ To better present the superiority of LiteTrack, we highlight representative scenes in Fig. 7. In a challenging UAV tracking scenario with a noisy and jittery camera feed, LiteTrack consistently maintains its track, outperforming other trackers. Similarly, when tracking a moving car from a UAV's perspective, LiteTrack demonstrates pinpoint precision, ensuring more accurate alignment with the ground truth than competing methods. These real-world tests underscore LiteTrack's proficiency in handling diverse tracking challenges.

## V Conclusions

In this work, we have presented LiteTrack, a pioneering approach to object tracking tailored for robotics applications and edge devices. By combining layer pruning with asynchronous feature extraction, we achieve significant improvements in both accuracy and execution speed across diverse datasets. Our results underscore LiteTrack's potential: it not only outperforms leading real-time trackers but also addresses the computational constraints often found in robotics and edge deployments. With its efficient design, LiteTrack promises to be a valuable baseline for real-time robotics applications.
\begin{table}
\begin{tabular}{c|c c|c c c}
\hline \hline
\# Total Layers & \# FE Layers & \# AI Layers & AO & SR\({}_{0.5}\) & _fps_ \\
\hline
\multirow{3}{*}{8} & 6 & 2 & 70.3 & **80.4** & 190 \\
 & 5 & 3 & **70.4** & 80.1 & 185 \\
 & 0 & 8 & 68.3 & 77.9 & 173 \\
\hline
\multirow{2}{*}{6} & 4 & 2 & 68.0 & 77.5 & 241 \\
 & 3 & 3 & **68.7** & **78.2** & 237 \\
\hline
\multirow{2}{*}{4} & 3 & 1 & 64.6 & 73.7 & 318 \\
 & 2 & 2 & **65.2** & **75.5** & 315 \\
\hline \hline
\end{tabular}
\end{table} TABLE V: Performance comparison based on varying ratios of feature extraction (FE) layers to asynchronous interaction (AI) layers. We use gray color to denote our final configuration.

Fig. 7: Prediction comparison from UAV123 [52]. We use green lines to indicate the ground-truth bounding box of the target. Blue boxes represent our LiteTrack’s predictions, while yellow and red boxes denote the predictions of the trackers HCAT [47] and E.T.Track [46], respectively.
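For readers less familiar with the evaluation protocols, the metrics reported in Tables II, III, and V (AO, SR\({}_{t}\), AUC) are all derived from per-frame bounding-box overlaps. Below is a minimal Python sketch of these computations under a generic IoU-based protocol; it is an illustration only, not the official GOT-10k or PyTracking evaluation code, whose threshold counts and averaging details may differ.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x, y, w, h) format."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def got10k_metrics(pred_boxes, gt_boxes):
    """AO is the mean per-frame overlap; SR_t is the fraction of frames
    whose overlap exceeds threshold t."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    ao = overlaps.mean()
    sr_050 = (overlaps > 0.50).mean()
    sr_075 = (overlaps > 0.75).mean()
    return ao, sr_050, sr_075

def success_auc(overlaps, num_thresholds=21):
    """AUC of the success plot: mean success rate over IoU thresholds in [0, 1]."""
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    success = [(np.asarray(overlaps) > t).mean() for t in thresholds]
    return float(np.mean(success))
```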
Recent advances in transformer-based visual tracking have enabled great progress thanks to their strong modeling capability. However, as performance improves, execution latency grows correspondingly, which poses a challenge for real-time robotics applications, especially on edge devices with constrained computing power. In response, we introduce LiteTrack, an efficient transformer-based tracking model optimized for high-speed operation. LiteTrack achieves a better trade-off between accuracy and efficiency than other lightweight trackers. The main innovations of LiteTrack are: 1) asynchronous feature extraction and interaction between the template and search region for better feature fusion and reduced redundant computation, and 2) pruning encoder layers from a heavy tracker to tune the balance between performance and speed. As an example of LiteTrack, we develop
2309.11360
The norm of the backward shift on $H^1$ is $\frac{2}{\sqrt{3}}$
We show that the norm of the backward shift operator on $H^1$ is $2/\sqrt{3}$, and we identify the functions for which the norm is attained.
Ole Fredrik Brevig, Kristian Seip
2023-09-20T14:43:27
http://arxiv.org/abs/2309.11360v2
# The norm of the backward shift on \(H^{1}\) is \(\frac{2}{\sqrt{3}}\) ###### Abstract. We show that the norm of the backward shift operator on \(H^{1}\) is \(2/\sqrt{3}\), and we identify the functions for which the norm is attained. 2020 Mathematics Subject Classification: Primary 30H10. Secondary 47B38 Research supported in part by Grant 275113 of the Research Council of Norway. For general \(1\leq p\leq\infty\), the estimate \(|f(0)|\leq\|f\|_{p}\) and the triangle inequality provide the upper bound \(\|B\|_{p}\leq 2\). A consequence [1, Theorem 7.7] is that \(\|B\|_{p}>1\) for any \(p\neq 2\). An elementary direct proof of the same assertion follows from the example in [2, Lemma 2.3]. The previous best result on \(\|B\|_{1}\) is due to Ferguson [5, Theorem 2.4], who established that \(\|B\|_{1}\leq 1.7047\). Ferguson [5, Theorem 2.5] also proved that \(\|B\|_{\infty}=2\), by observing that if \(f\) is the conformal automorphism of \(\mathbb{D}\) interchanging the origin and the point \(w\), then \(\|Bf\|_{\infty}=1+|w|\). In addition, Ferguson improved the upper bound to \(\|B\|_{p}\leq 2^{|1-2/p|}\) using Riesz-Thorin interpolation in \(L^{p}(\mathbb{T})\). The problem of determining \(\|B\|_{p}\) has been raised in connection with Toeplitz operators (see [8, Section 5] and [7, Open Problem 5.4]). We hope this note may inspire further work for \(p\neq 1,2,\infty\), including the case \(0<p<1\). ## 2. Proof We define square-roots of \(H^{1}\) functions as follows. We set \(f^{1/2}(z)\coloneqq 0\) if \(z\) is a point at which \(f(z)=0\), and otherwise we set \[f^{1/2}(z)\coloneqq\sqrt{|f(z)|}e^{i\frac{\operatorname{Arg}f(z)}{2}};\] here and elsewhere, \(\sqrt{a}\) signifies the nonnegative square-root of a nonnegative number \(a\), and \(\operatorname{Arg}w\) is the principal value of the argument of the complex number \(w\). We see that \(f^{1/2}\) is defined at every point in \(\mathbb{D}\) and almost everywhere on \(\mathbb{T}\). We will need the following result about such square-roots. **Lemma**.: _If \(f\) is a function in \(H^{1}\) with \(f(0)\geq 0\), then_ \[\sqrt{f(0)}\leq\operatorname{Re}\int_{0}^{2\pi}f^{1/2}(e^{i\theta})\,\frac{d \theta}{2\pi}. \tag{1}\] _Equality in (1) is attained if and only if \(f^{1/2}\) is analytic in \(\mathbb{D}\)._ Proof.: We notice that the function \[u(z)\coloneqq\operatorname{Re}f^{1/2}(z)\] is continuous and in fact subharmonic in \(\mathbb{D}\). Indeed, \(u\) clearly satisfies the submean value property at points \(z\) where \(u(z)=0\), and at all other points \(u\) is locally harmonic. Hence \[\sqrt{f(0)}=u(0)\leq\int_{0}^{2\pi}u(re^{i\theta})\,\frac{d\theta}{2\pi} \tag{2}\] for \(0<r<1\). Using that \(|\operatorname{Re}w_{1}-\operatorname{Re}w_{2}|\leq\sqrt{|w_{1}^{2}-w_{2}^{2}|}\) for complex numbers \(w_{1}\) and \(w_{2}\) with nonnegative real part in combination with the Cauchy-Schwarz inequality, we find that \[\int_{0}^{2\pi}\left|u(re^{i\theta})-\operatorname{Re}f^{1/2}(e^{i\theta}) \right|\frac{d\theta}{2\pi}\leq\sqrt{\int_{0}^{2\pi}|f(re^{i\theta})-f(e^{i \theta})|\ \frac{d\theta}{2\pi}}.\] Since \(f\) is in \(H^{1}\), the right-hand side goes to \(0\) as \(r\to 1^{-}\). The asserted inequality follows from this and (2). If equality is attained in (1), then the subharmonicity of \(u\) means that equality is attained in (2) for every \(0<r<1\). This implies that \(u\) is harmonic in \(\mathbb{D}\). If \(f\) is nontrivial, then \(u\) is strictly positive in \(\mathbb{D}\). 
Consequently, \(f^{1/2}\) is analytic in \(\mathbb{D}\).

Let \(f\) be a function in \(H^{1}\). Then \(f=IF\), where \(I\) is an inner function and \[F(z)=\exp\left(\int_{0}^{2\pi}\frac{e^{i\theta}+z}{e^{i\theta}-z}\,\log|f(e^{i\theta})|\,\frac{d\theta}{2\pi}\right). \tag{3}\] We say that \(f\) is outer if \(f=F\).

Proof of the theorem.: Let \(f\) be a function in \(H^{1}\). We may assume without loss of generality that \(f(0)\geq 0\). We set \(a\coloneqq\sqrt{|f(0)|}\) and note that \[f(e^{i\theta})-f(0)=\big{(}f^{1/2}(e^{i\theta})-a\big{)}\big{(}f^{1/2}(e^{i\theta})+a\big{)}\] for almost every \(e^{i\theta}\) on \(\mathbb{T}\). Setting next \[b\coloneqq\int_{0}^{2\pi}f^{1/2}(e^{i\theta})\,\frac{d\theta}{2\pi},\] we get \[\|Bf\|_{1}\leq\sqrt{\|f^{1/2}\|_{2}^{2}-|b|^{2}+|a-b|^{2}}\,\sqrt{\|f^{1/2}\|_{2}^{2}-|b|^{2}+|a+b|^{2}}=\sqrt{\big{(}\|f\|_{1}+|a|^{2}\big{)}^{2}-\big{(}2a\operatorname{Re}b\big{)}^{2}}\] by the Cauchy–Schwarz inequality and orthogonality. Using that \(a\geq 0\) and that \(\operatorname{Re}b\geq a\) from the lemma, we obtain that \[\|Bf\|_{1}\leq\|f\|_{1}\sqrt{(1+x)^{2}-4x^{2}}\] for \(x=f(0)/\|f\|_{1}\leq 1\). The maximum of the right-hand side is attained for \(x=1/3\), which completes the proof of the asserted inequality.

Suppose next that \[\|Bf\|_{1}=\frac{2}{\sqrt{3}}\|f\|_{1} \tag{4}\] for a nontrivial function \(f\) in \(H^{1}\). Since plainly \(f(0)\neq 0\), we may assume without loss of generality that \(f(0)>0\). Inspecting the argument above, we see that (4) can only hold if \(\operatorname{Re}b=a\), which by the lemma means that \(f^{1/2}\) is analytic. Moreover, we must have attained equality in our application of the Cauchy–Schwarz inequality. This is only possible if there is a constant \(\lambda\geq 0\) such that \[\big{|}f^{1/2}(e^{i\theta})-a\big{|}=\lambda\big{|}f^{1/2}(e^{i\theta})+a\big{|} \tag{5}\] for almost every \(e^{i\theta}\) on \(\mathbb{T}\). Since \(\operatorname{Re}f^{1/2}\geq 0\) by definition and since \(f(0)>0\) by assumption, we find that \(\operatorname{Re}f^{1/2}(z)+a\geq a>0\) for every \(z\) in \(\mathbb{D}\). This means that \(f^{1/2}+a\) is an outer function, so the combination of (3) and (5) yields \[f^{1/2}(z)-a=\lambda I(z)\big{(}f^{1/2}(z)+a\big{)}\] for some inner function \(I\). Since the left-hand side vanishes at \(z=0\), it is clear that \(I(0)=0\). Moreover, since \(|f^{1/2}(e^{i\theta})-a|<|f^{1/2}(e^{i\theta})+a|\) for almost every \(e^{i\theta}\) on \(\mathbb{T}\), we must have \(0<\lambda<1\). Consequently, \[f(z)=f(0)\left(\frac{1+\lambda I(z)}{1-\lambda I(z)}\right)^{2}.\] A direct computation using that \((I^{k})_{k\geq 0}\) is an orthonormal set in \(H^{2}\) shows that \[\frac{f(0)}{\|f\|_{1}}=\frac{1-\lambda^{2}}{1+3\lambda^{2}}.\] Since (4) is attained, the left-hand side equals \(1/3\), which means that \(\lambda=1/\sqrt{3}\).
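Two computations are invoked without detail in the proof above: the maximization over \(x\) and the "direct computation" for the extremal functions. The following LaTeX fragment spells them out; it restates steps already implicit in the argument.

```latex
% (i) Maximizing g(x) = (1+x)^2 - 4x^2 over 0 <= x <= 1:
\[
\frac{d}{dx}\Big[(1+x)^{2}-4x^{2}\Big]=2-6x=0
\;\Longrightarrow\; x=\tfrac{1}{3},
\qquad
\big(1+\tfrac{1}{3}\big)^{2}-\tfrac{4}{9}=\tfrac{4}{3},
\qquad
\sqrt{\tfrac{4}{3}}=\tfrac{2}{\sqrt{3}}.
\]
% (ii) The norm ratio for the extremal functions: since I is inner with
% I(0) = 0, the powers I^k are orthonormal, and the geometric series gives
\[
\frac{1+\lambda I}{1-\lambda I}=1+2\sum_{k\geq 1}\lambda^{k}I^{k},
\qquad
\frac{\|f\|_{1}}{f(0)}
  =\Big\|\frac{1+\lambda I}{1-\lambda I}\Big\|_{2}^{2}
  =1+4\sum_{k\geq 1}\lambda^{2k}
  =\frac{1+3\lambda^{2}}{1-\lambda^{2}}.
\]
% Setting (1-\lambda^2)/(1+3\lambda^2) = 1/3 indeed yields \lambda^2 = 1/3.
```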
We show that the norm of the backward shift operator on $H^1$ is $2/\sqrt{3}$, and we identify the functions for which the norm is attained.
2308.00144
Logical Synchrony and the bittide Mechanism
We introduce logical synchrony, a framework that allows distributed computing to be coordinated as tightly as in synchronous systems without the distribution of a global clock or any reference to universal time. We develop a model of events called a logical synchrony network, in which nodes correspond to processors and every node has an associated local clock which generates the events. We construct a measure of logical latency and develop its properties. A further model, called a multiclock network, is then analyzed and shown to be a refinement of the logical synchrony network. We present the bittide mechanism as an instantiation of multiclock networks, and discuss the clock control mechanism that ensures that buffers do not overflow or underflow. Finally we give conditions under which a logical synchrony network has an equivalent synchronous realization.
Sanjay Lall, Calin Cascaval, Martin Izzard, Tammo Spalink
2023-07-31T20:25:30
http://arxiv.org/abs/2308.00144v3
# Logical Synchrony and the bittide Mechanism

###### Abstract

We introduce logical synchrony, a framework that allows distributed computing to be coordinated as tightly as in synchronous systems without the distribution of a global clock or any reference to universal time. We develop a model of events called a logical synchrony network, in which nodes correspond to processors and every node has an associated local clock which generates the events. We construct a measure of logical latency and develop its properties. A further model, called a multiclock network, is then analyzed and shown to be a refinement of the logical synchrony network. We present the bittide mechanism as an instantiation of multiclock networks, and discuss the clock control mechanism that ensures that buffers do not overflow or underflow. Finally we give conditions under which a logical synchrony network has an equivalent synchronous realization.

## 1 Introduction

In this paper we introduce _logical synchrony_, a property where machines share a common notion of time sufficient to reason about causality but without the need to share a system-wide clock. We discuss what this notion of time is and how it corresponds to existing models. We also discuss the relationship between logical synchrony and the more constrained, purer form that has been the focus of much prior work. Finally, we present the bittide mechanism, which allows efficient implementation of logical synchrony on modern networks and thereby allows for cycle-accurate coordination across nodes.

Synchronous execution models have been used successfully in real-time systems [1, 2, 3] to reason about correctness, in particular about meeting deadlines. Often, synchronous abstractions are decoupled from implementation and are used to validate system functional behavior. When mapping synchronous abstractions to asynchronous non-deterministic hardware, work has been done to automate code generation that matches the functional semantics, hiding the non-deterministic behavior of the hardware with explicit synchronization, for example [4]. Logical Execution Time (LET) was introduced by Henzinger and Kirsch [5] to support the design of reactive, cyber-physical systems. More recently, Lingua Franca [6, 7] supports concurrent and distributed programming using time-stamped messages. Lingua Franca exposes to programmers the notion of _reactors_ that are triggered in logical time, allowing deterministic reasoning about four common design patterns in distributed systems: alignment, precedence, simultaneity, and consistency. We argue that the causality reasoning in the logical synchrony framework subsumes such design patterns: all of them effectively enable reasoning about the ordering of events in a system that exchanges messages, and as we will show in the paper, this is exactly the class of applications for which logical synchrony determines precisely the causality relationships.

Alternatively, synchronous execution can be implemented using a single global clock. For small real-time systems, cyber-physical systems, and control systems, a global clock can be distributed from a single oscillator. Scaling such systems is difficult because large clock distribution networks introduce delays which must be corrected. For the majority of systems using wall-clock time as their global clock, synchronization implies exchanging timestamps [8, 9].
Techniques such as TrueTime [10] and Sundial [11] attempt to reduce the latency uncertainty, and thus the time-uncertainty bounds, from milliseconds in TrueTime to nanoseconds in Sundial. To achieve desired levels of performance using existing network protocols requires expensive time references such as dedicated atomic clocks and networking hardware enhancements to reduce protocol overhead. Time uncertainty is exposed to programmers through an uncertainty interval which guarantees that current time is within interval bounds for all nodes in the system, such that every node is guaranteed to have passed current time when the bound elapses. To provide an example use case, this method guarantees concurrency control correctness in (lock-free) database transactions by ensuring that all distributed system nodes observe the same order of events.

Logical synchrony, formalized in Section 2, abstracts the notion of shared time and allows us to avoid a global reference clock or wall-clock. Time is defined only by local clocks decoupled from physical time. The idea is that events at the same node are ordered by local time, and events at different nodes are ordered by causality. As we will show, logical synchrony requires no system-wide global clock and no explicit synchronization (timestamp exchanges or similar), which thereby allows for potentially infinitely scalable systems. Reasoning about ordering of events in logically synchronous systems follows the partial order semantics of Lamport [12] and thus provides equivalence with any synchronous execution that generates identical event graphs. To establish how logical synchrony can be realized in practice, we first define what logical synchrony means within an abstract model of distributed systems with multiple clocks. We then explain how bittide [13, 14, 15] is a mechanism to efficiently implement logical synchrony with real hardware and thereby bring desirable synchronous execution properties to distributed applications efficiently at scale.

### Mathematical preliminaries and notation

An _undirected graph_\(\mathcal{G}\) is a pair \((\mathcal{V},\mathcal{E})\) where \(\mathcal{V}\) is a set and \(\mathcal{E}\) is a subset of the set of 2-element subsets of \(\mathcal{V}\). A _directed graph_\(\mathcal{G}\) is a pair \((\mathcal{V},\mathcal{E})\) where \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) and \((v,v)\not\in\mathcal{E}\) for all \(v\in\mathcal{V}\). An edge \(e\in\mathcal{E}\) in a directed graph may be denoted \((u,v)\) or \(u\to v\). A directed graph may contain a 2-cycle, that is, a pair of edges \(u\to v\) and \(v\to u\). An _oriented graph_ is a directed graph in which there are no 2-cycles. Suppose \(G=(\mathcal{V},\mathcal{E})\) is a directed graph, and number the vertices and edges so that \(\mathcal{V}=\{1,\ldots,n\}\) and \(\mathcal{E}=\{1,\ldots,m\}\). Then the _incidence matrix_\(B\in\mathbb{R}^{n\times m}\) is \[B_{ij}=\begin{cases}1&\text{if edge $j$ starts at node $i$}\\ -1&\text{if edge $j$ ends at node $i$}\\ 0&\text{otherwise}\end{cases}\] for \(i=1,\ldots,n\) and \(j=1,\ldots,m\). A _walk_ in a directed graph \(G\) is a non-empty alternating sequence \(v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) in which \(v_{i}\in\mathcal{V}\), \(s_{i}\in\mathcal{E}\), and either \(s_{i}=v_{i}\to v_{i+1}\) or \(s_{i}=v_{i+1}\to v_{i}\). In the former case we say \(s_{i}\) has _forward_ or \(+1\) orientation, otherwise we say it has _backward_ or \(-1\) orientation. A _path_ is a walk in which all vertices are distinct.
A _cycle_ is a walk in which vertices \(v_{0},\ldots,v_{k-1}\) are distinct, all edges are distinct, and \(v_{0}=v_{k}\). Walks, paths, and cycles are called _directed_ if all edges are in the forward orientation. In a directed graph \(G\), given a walk \[W=(v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k})\] the corresponding _incidence vector_\(x\in\mathbb{R}^{m}\) is such that \(x_{i}=1\) if there exists \(j\) such that \(i=s_{j}\) and \(s_{j}\) has forward orientation, and \(x_{i}=-1\) if there exists \(j\) such that \(i=s_{j}\) and \(s_{j}\) has reverse orientation, and \(x_{i}=0\) otherwise. If a directed graph contains a 2-cycle, that is, a pair of edges \(u\to v\) and \(v\to u\), we assign one of the two directions as primary and the other as secondary. This is simply a choice of sign convention. From a directed graph we construct an associated oriented graph by discarding all secondary edges. From an oriented graph we construct an associated undirected graph by discarding all orientations. The concepts of spanning tree and connectedness when applied to a directed graph always refer to the associated undirected graph. The following two results are well-known.

**Theorem 1**.: _Suppose \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is a directed graph with incidence matrix \(B\), and suppose edges \(1,\ldots,n-1\) form a spanning tree. Partition \(B\) according to_ \[B=\begin{bmatrix}B_{11}&B_{12}\\ -\mathbf{1}^{\mathsf{T}}B_{11}&-\mathbf{1}^{\mathsf{T}}B_{12}\end{bmatrix}\] _then \(B_{11}\) is unimodular. Further_ \[B=\begin{bmatrix}B_{11}&0\\ -\mathbf{1}^{\mathsf{T}}B_{11}&1\end{bmatrix}\begin{bmatrix}I&0\\ 0&0\end{bmatrix}\begin{bmatrix}I&N\\ 0&I\end{bmatrix}\] _where \(N=B_{11}^{-1}B_{12}\)._

Proof.: _See for example Theorem 2.10 of [16]._

For convenience, denote by \(Z\) the \(m\times(m-n+1)\) matrix \[Z=\begin{bmatrix}-N\\ I\end{bmatrix}\] Then we have the following important property.

**Theorem 2**.: _Every column of \(Z\) is the incidence vector of a cycle in \(\mathcal{G}\)._

Proof.: _See, for example, Chapter 5 of [16]._

Theorem 1 implies that the columns of \(Z\) are a basis for the null space of \(B\), since \(BZ=0\) and \(\operatorname{null}(Z)=\{0\}\). The columns of \(Z\) are called the _fundamental cycles_ of the graph. Note that each of the fundamental cycles is associated with exactly one of the non-tree edges of the graph.

## 2 Logical synchrony networks

We start with a formal definition of a logical synchrony network as a directed graph with edge weights, as follows.

**Definition 1**.: _A **logical synchrony network** is a directed graph \((\mathcal{V},\mathcal{E})\) together with a set of edge weights \(\lambda:\mathcal{E}\rightarrow\mathbb{R}\)._

In this model, each node corresponds to a processor, and an edge between nodes \(i\to j\) indicates that node \(i\) can send data along a physical link to node \(j\). Sent data is divided into tokens which we refer to as _frames_. Local clocks.Every node has an infinite sequence of _events_ associated with it, which can be thought of as compute steps. The events at node \(i\) are denoted \((i,\tau)\), where \(\tau\) is referred to as a _localtick_ and thereby implicitly defines a local clock. We define the set of all events \[\mathcal{V}_{\text{ext}}=\{(i,\tau)\mid i\in\mathcal{V},\tau\in\mathbb{Z}\}\] Events at one node are aligned to events at other nodes by the transmission of frames.
At localtick \(\tau\) and node \(i\), a frame is sent from node \(i\) to node \(j\), and it arrives at node \(j\) at localtick \(\tau+\lambda_{i\cdot j}\). The constant \(\lambda_{i\cdot j}\) is called the _logical latency_. We define the following binary relation. **Definition 2**.: _Event \((i,\tau)\) is said to **directly send to** the event \((j,\rho)\) if \((i,j)\in\mathcal{E}\) and \(\rho=\tau+\lambda_{i\cdot j}\), or \(i=j\) and \(\rho=\tau+1\). We use the notation_ \[(i,\tau)\rightarrow(j,\rho)\] _to mean \((i,\tau)\) directly sends to \((j,\rho)\), and define the set_ \[\mathcal{E}_{\text{ext}}=\{\left((i,\tau),(j,\rho)\right)\mid(i,\tau) \rightarrow(j,\rho)\}\] _The graph \(\mathcal{G}_{\text{ext}}=(\mathcal{V}_{\text{ext}},\mathcal{E}_{\text{ext}})\) is called the **extended graph** of the logical synchrony network._ This relation may be viewed as an infinite directed graph with vertex set \(\mathcal{V}_{\text{ext}}\) and directed edges \((i,\tau)\rightarrow(j,\rho)\). In this graph, those edges \((i,\tau)\rightarrow(j,\rho)\) for which \(i=j\) are called _computational edges_. An edge that is not a computational edge is called a _communication edge_. Figure 1 illustrates a logical synchrony network and its corresponding extended graph. The localticks define a separate and ideal notion of local duration at each node by counting events (_i.e._, frame transmissions or receptions.) We can speak of the event \((i,\tau)\) as occurring at time \(\tau\) localticks on node \(i\). We say that event \((i,\tau+a)\) happens \(a\) localticks after event \((i,\tau)\), for any \(a\in\mathbb{Z}\). We cannot in general compare clock values at two different nodes. Execution.This model captures the local evolution of time at each node \(i\in\mathcal{V}\), and the transmission of frames between them. Although we do not investigate execution models in this paper, it is possible to define many different execution semantics. One simple choice is the functional model, where frames carry data, and associated with each event \((i,\tau)\in\mathcal{V}_{\text{ext}}\) in the extended graph we have a function, which maps data from incoming edges to data on outgoing edges. Another possibility is to have a more procedural model, where events in \(\mathcal{V}_{\text{ext}}\) correspond to the clock ticks of a processor in the corresponding \(\mathcal{V}\). For the purposes of this paper it is not necessary to specify how many bits each frame contains but we assume all frames on a given link are equally sized. The abstract models considered in this paper consist of sequences of events which extend infinitely far into both the future and the past. It is possible to extend this model to include system startup, for example by introducing a minimum node within the extended graph, or by modifying the execution model. We do not address startup within this paper. Frames and logical latency.If \(A\) denotes a particular frame sent \(i\to j\), then we will make use of the notation receive(\(A\)) to refer to the localtick at node \(j\) when \(A\) arrives at \(j\). Similarly send(\(A\)) refers to the localtick at node \(i\) when \(A\) was sent. This notation leaves implicit the source and destination of frame \(A\), in that \(i,j\) are not included as arguments of the send and receive functions. We do not as yet assume any particular mechanism for transmission of frames, but we assume that frames are received in the order that they are sent, without any loss. 
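To make the definitions concrete, here is a minimal Python sketch that enumerates a finite window of the (infinite) extended graph of a small logical synchrony network and encodes the directly-sends-to relation of Definition 2. The three-node graph and its weights are invented for illustration and are not taken from Figure 1.

```python
# Minimal sketch of a logical synchrony network (Definition 1) and its
# extended graph (Definition 2). The graph and weights are illustrative.
from itertools import product

nodes = [1, 2, 3]
lam = {(1, 2): 2, (2, 3): 3, (3, 1): -1}   # logical latencies (may be negative)

def directly_sends_to(ev_a, ev_b):
    """(i, tau) -> (j, rho) iff rho = tau + lam[i, j], or i == j and rho = tau + 1."""
    (i, tau), (j, rho) = ev_a, ev_b
    if i == j:
        return rho == tau + 1                        # computational edge
    return (i, j) in lam and rho == tau + lam[i, j]  # communication edge

# Enumerate a finite window of the infinite extended graph.
window = range(-5, 6)
events = [(i, t) for i, t in product(nodes, window)]
ext_edges = [(a, b) for a in events for b in events if directly_sends_to(a, b)]

# Logical latencies add along paths: the path 1 -> 2 -> 3 has latency 2 + 3,
# and the directed cycle 1 -> 2 -> 3 -> 1 has round-trip time 2 + 3 - 1 = 4.
assert lam[1, 2] + lam[2, 3] == 5
print(f"{len(ext_edges)} extended-graph edges in the window")
```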
Note that the logical latency has no connection to _physical latency_. If we were to measure the send and receive times with respect to a global notion of time, we would know that, for example, the receive time must be greater than the send time. In the framework presented here, that is not the case; the localticks are strictly local, and as a result there is no such requirement on their numerical value; the logical latency \(\lambda_{i\cdot j}\) may be negative. This is, of course, a statement about the clocks, not about causality. In words, the logical latency is the time of arrival _in the receiver's clock_ minus the time of departure _in the sender's clock_. There are several observations worth making about logical latency.

* Logical latency is _constant_. For any two nodes \(i,j\), every frame sent \(i\to j\) has the same logical latency. It is a property of the edge \(i\to j\) in \(\mathcal{E}\).
* Despite the name, logical latency is not a measure of length of time or duration. It is not the case that if \(\lambda_{i\cdot j}\) is greater than \(\lambda_{p\cdot q}\) then it takes longer for frames to move from \(i\) to \(j\) than it does for frames to move from \(p\) to \(q\). (In fact, we do not have a way within this framework to compare two such quantities.)
* The logical latency can be negative.

Figure 1: A logical synchrony network (edges labeled with \(\lambda\)) and corresponding extended graph.

Logical latencies and paths.Logical latencies add along a path. Suppose node \(i\) sends a frame \(B\) along edge \(i\to j\) to node \(j\), and then node \(j\) forwards it \(j\to k\). Then we have \[\operatorname{receive}(B)=\operatorname{send}(B)+\lambda_{i\text{-}j}+\lambda_{j\text{-}k}\] This means that we can speak of the logical latency of the path \(i\to j\to k\) as being \(\lambda_{i\text{-}j}+\lambda_{j\text{-}k}\), and more generally we can define the logical latency of a directed path \(\mathcal{P}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) from node \(v_{0}\) to node \(v_{k}\) in \(\mathcal{G}\). The logical latency is path dependent; two paths with the same endpoints may have different logical latencies. We have \[\lambda_{\mathcal{P}}=\sum_{i=0}^{k-1}\lambda_{s_{i}}\] This makes sense, which is potentially surprising because we are measuring arrival and departure times with different clocks. Since frames are being relayed, there may be additional delay at intermediate nodes (_i.e._, additional compute steps) which would need to be included when determining the destination event. Logical latencies are defined such that they do not include this additional delay.

### Ordering of events

A fundamental question regarding causality arises in the study of distributed systems. Given two events, we would like to determine which happened first. In a nonrelativistic physical setting, such a question is well-defined. In a relativistic setting, there are events which are separated in space for which the relative order is undetermined -- the order depends on the observer. Something similar happens in distributed systems, as was pointed out by Lamport [12]. Given two events, instead of asking which event happened first, a more useful question is to ask which event, if any, _must have_ happened first. The framework for distributed clocks developed by Lamport [12] established that there is a partial ordering on events determined by one event's ability to influence another by the sending of messages.
In that paper the author defines a global notion of time consistent with said partial order. Subsequent work [17, 18] defines _vector clocks_ which assign a vector-valued time to events for which the partial ordering is equivalent to that defined by message-passing. We would like to construct the corresponding notion of causality in a logical synchrony network. We define below the \(\sqsubset\) relation, which can be used to define a partial order on \(\mathcal{G}_{\text{ext}}\) provided we can ensure that it is acyclic. To do this, we consider round-trip times.

Round trip times.Logical latencies are not physical latencies, despite the additive property. However, there is one special case where logical latency is readily interpreted in such physical terms, specifically the time for a frame \(A\) to traverse a cycle in the graph, the cycle round-trip time. Suppose \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) is a directed cycle, then \[\lambda_{\mathcal{C}}=\operatorname{receive}(A)-\operatorname{send}(A)\] is the round-trip time measured in localticks. Two different cycles from a single node \(i\) may have different round-trip times, and these are comparable durations since they are both measured in localticks at that node. We have \[\lambda_{\mathcal{C}}=\sum_{i=0}^{k-1}\lambda_{s_{i}}\] We make the following definition.

**Definition 3**.: _A logical synchrony network is said to have **positive round-trip times** if, for every directed cycle \(\mathcal{C}\) in the graph \(\mathcal{G}\) we have \(\lambda_{\mathcal{C}}>0\)._

We then have the following result, which says that if the round-trip times around every directed cycle in the logical synchrony network are positive, then the extended graph is acyclic.

**Theorem 3**.: _If a logical synchrony network has positive round-trip times then its extended graph is acyclic._

Proof.: _Suppose for a contradiction that the extended graph is cyclic. Then there exists a directed cycle \(\mathcal{C}_{1}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) where each \(v_{j}\in\mathcal{V}_{\text{ext}}\) is a pair \(v_{j}=(i_{j},\tau_{j})\). Since the start and end node is the same, we have_ \[\begin{split} 0&=\sum_{j=0}^{k-1}(\tau_{j+1}-\tau_{j})\\ &=\sum_{j\in C_{\text{comp}}}(\tau_{j+1}-\tau_{j})+\sum_{j\notin C_{\text{comp}}}(\tau_{j+1}-\tau_{j})\end{split} \tag{1}\] _where \(C_{\text{comp}}\) is the set of indices \(j\) such that \((v_{j},v_{j+1})\) is a computational edge. Each of the computational edges has \(\tau_{j+1}-\tau_{j}=1\). If all of the edges in the cycle are computational then the right-hand side is positive. If there are some communication edges, then the second of the two terms on the right-hand side is positive due to the assumption that the logical synchrony graph has positive round-trip times, and again the right-hand side is positive. This contradicts the claim that the sum is zero._

This acyclic property is necessary for an execution model based on function composition to be well-defined. It also allows us to define a temporal partial ordering between events in \(\mathcal{G}_{\text{ext}}\). Since a logical synchrony network with positive round-trip times has an extended graph which is acyclic, the reachability relation on the extended graph defines a partial order. Specifically, we write \[(i,\tau)\sqsubset(j,\rho)\] if there is a directed path from \((i,\tau)\) to \((j,\rho)\) in the extended graph. Here, the notation is meant to be similar to \(<\), indicating _comes before_.
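The positive round-trip-time condition of Definition 3 can be checked mechanically on small networks by enumerating directed cycles. A sketch follows, assuming the networkx package is available; the example network is again invented for illustration.

```python
# Check Definition 3 (positive round-trip times) by enumerating directed
# cycles. Cycle enumeration is exponential in general, so this is suitable
# only for small graphs.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([(1, 2, 2), (2, 3, 3), (3, 1, -1), (2, 1, 1)],
                          weight="lam")

def has_positive_round_trips(graph):
    for cycle in nx.simple_cycles(graph):
        closed = cycle + [cycle[0]]
        rtt = sum(graph[u][v]["lam"] for u, v in zip(closed, closed[1:]))
        if rtt <= 0:
            return False, cycle, rtt
    return True, None, None

ok, bad_cycle, rtt = has_positive_round_trips(G)
print("positive round trips:", ok)
# By Theorem 3, when this holds the extended graph is acyclic, so
# reachability there defines the partial order (i, tau) before (j, rho).
```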
Under these conditions, a logical synchrony network is a distributed system in the sense of Lamport [12], with logical latencies providing strict inter-event timings at any node \(i\in\mathcal{V}\). The partial ordering on the induced logical synchrony network has exactly the property that, if \(u\sqsubset v\), then \(u\) must have happened before \(v\).

## III Equivalence of LSNs

Two logical synchrony networks may have different logical latencies, but be nonetheless equivalent for the purpose of executing processes. An example is given by the graphs in Figure 2. This arises because we can relabel the events. Specifically, given a logical synchrony network with events \(\mathcal{V}_{\mathrm{ext}}\), we define a new logical synchrony network. Given \(c_{1},\ldots,c_{n}\in\mathbb{Z}\), we relabel event \((i,\tau)\) as \((i,\tau+c_{i})\). This is a relabeling of the vertices of the graph \(\mathcal{G}_{\mathrm{ext}}\). In \(\mathcal{G}_{\mathrm{ext}}\) we have edges \[(i,\tau)\rightarrow(j,\tau+\lambda_{i\text{-}j})\] for every edge \(i\to j\in\mathcal{E}\) and \(\tau\in\mathbb{Z}\). Under the relabeling, these are mapped to \[(i,\tau+c_{i})\rightarrow(j,\tau+\lambda_{i\text{-}j}+c_{j})\] and since there is such an edge for all \(\tau\in\mathbb{Z}\) the edge set of the relabeled extended graph is \[\hat{\mathcal{E}}_{\mathrm{ext}}=\left\{\left((i,\tau),(j,\tau+\lambda_{i\text{-}j}+c_{j}-c_{i})\right)\mid(i,j)\in\mathcal{E},\tau\in\mathbb{Z}\right\}\] This is the extended graph for a logical synchrony network with logical latencies \[\hat{\lambda}_{i\text{-}j}=\lambda_{i\text{-}j}+c_{j}-c_{i}\] This leads us to the following definition of equivalence.

**Definition 4**.: _Suppose we have two logical synchrony networks on a directed graph \((\mathcal{V},\mathcal{E})\), with edge weights \(\lambda\) and \(\hat{\lambda}\). We say these LSNs are **equivalent** if there exists \(c_{1},\ldots,c_{n}\in\mathbb{Z}\) such that, for all edges \(i\to j\in\mathcal{E}\),_ \[\hat{\lambda}_{i\text{-}j}=\lambda_{i\text{-}j}+c_{j}-c_{i} \tag{2}\]

We can write this equation as \[\lambda-\hat{\lambda}=B^{\mathsf{T}}c\] where \(B\) is the incidence matrix of \(\mathcal{G}\). Relabeling the clocks results in a relabeling of the corresponding extended graph. Since this only changes the labels of the nodes, not how the nodes are interconnected, any code which is executable on one graph may also be executed on the other (but any references to particular localticks will need to be changed). Physically measurable properties such as round-trip times cannot change under such a simple relabeling. We have

**Proposition 1**.: _If two LSNs are equivalent, they will have the same round trip times on every directed cycle._

Proof.: _The round-trip times for a directed cycle \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) in \(\mathcal{G}\) satisfy_ \[\sum_{j=0}^{k-1}\lambda_{s_{j}}=\sum_{j=0}^{k-1}\hat{\lambda}_{s_{j}}\] _which follows from equation (2)._

The converse is not generally true, as the following example shows.

**Example 1**.: _Consider the logical synchrony networks shown in Figure 3. Both networks have the same underlying graph, which has no directed cycles, and so the round trip times on every directed cycle are trivially equal on both networks. If we order the edges \(((1\to 2),(2\to 3),(1\to 3))\) then we have incidence matrix_ \[B=\begin{bmatrix}1&0&1\\ -1&1&0\\ 0&-1&-1\end{bmatrix}\] _which has \(\mathrm{rank}(B)=2\)._
_In the left-hand network of Figure 3 the logical latencies are \(\lambda_{1}=2\), \(\lambda_{2}=3\) and \(\lambda_{3}=4\), and in the right-hand network they are \(\hat{\lambda}_{1}=2\), \(\hat{\lambda}_{2}=3\) and \(\hat{\lambda}_{3}=3\). Therefore_ \[\lambda-\hat{\lambda}=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix} \tag{3}\] _and there is no vector \(c\) such that \(\lambda-\hat{\lambda}=B^{\mathsf{T}}c\)._

Figure 3: Two non-equivalent logical synchrony graphs with no directed cycles (edges labeled with \(\lambda\))

Figure 2: Two equivalent logical synchrony graphs (edges labeled with \(\lambda\)). Relabeling the clocks using \(c=(1,2,3)\) maps the left-hand graph to the right-hand one.

If the round trip times are equal around every cycle, accounting for signs and orientations, then the two logical synchrony networks are equivalent. To show this, we need a preliminary result.

**Lemma 1**.: _Let the graph be connected. Suppose \(y\in\mathbb{Z}^{m}\), and for every cycle \(\mathcal{C}\) we have \(y^{\mathsf{T}}x=0\) for the corresponding incidence vector \(x\). Then \(y=B^{\mathsf{T}}c\) for some \(c\in\mathbb{Z}^{n}\)._

Proof.: _Pick a spanning tree, and partition \(B\) according to the spanning tree. Let \(N=B_{11}^{-1}B_{12}\). Partition \(y\) according to_ \[y=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}\] _where \(y_{1}\in\mathbb{Z}^{n-1}\). We choose_ \[c=\begin{bmatrix}B_{11}^{-\mathsf{T}}y_{1}\\ 0\end{bmatrix}\] _and note that since \(B_{11}\) is unimodular \(c\) must be integral. Then Theorem 1 implies_ \[B^{\mathsf{T}}c =\begin{bmatrix}I&0\\ N^{\mathsf{T}}&I\end{bmatrix}\begin{bmatrix}I&0\\ 0&0\end{bmatrix}\begin{bmatrix}B_{11}^{\mathsf{T}}&-B_{11}^{\mathsf{T}}\mathbf{1}\\ 0&1\end{bmatrix}\begin{bmatrix}B_{11}^{-\mathsf{T}}y_{1}\\ 0\end{bmatrix}\] \[=\begin{bmatrix}I&0\\ N^{\mathsf{T}}&I\end{bmatrix}\begin{bmatrix}y_{1}\\ 0\end{bmatrix}\] \[=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}\] _as desired, where in the last line we use Theorem 2 to show that_ \[y^{\mathsf{T}}\begin{bmatrix}-N\\ I\end{bmatrix}=0\] _since \(y\) is orthogonal to the incidence vectors of the fundamental cycles._

We now state and prove a variant of Proposition 1 which is both necessary and sufficient.

**Theorem 4**.: _Suppose we have two logical synchrony networks on a connected directed graph \((\mathcal{V},\mathcal{E})\), with edge weights \(\lambda\) and \(\hat{\lambda}\). These networks are equivalent if and only if they have the same signed round trip times on every cycle in \(\mathcal{G}\). That is, for every cycle \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) we have_ \[\sum_{j=0}^{k-1}\lambda_{s_{j}}o_{j}=\sum_{j=0}^{k-1}\hat{\lambda}_{s_{j}}o_{j} \tag{4}\] _where \(o_{j}\) is the orientation of edge \(s_{j}\) on the cycle \(\mathcal{C}\)._

Proof.: _Equation (4) means that for every cycle \(C\) with incidence vector \(x\) we have_ \[(\lambda-\hat{\lambda})^{\mathsf{T}}x=0\] _Then Lemma 1 implies that \(\lambda-\hat{\lambda}=B^{\mathsf{T}}c\) for some integer vector \(c\), and hence \(\lambda\) and \(\hat{\lambda}\) are equivalent._

What this means, in particular, is that in Example 1 the graph does not have a directed cycle but it does have a cycle, where edges \(1\to 2\) and \(2\to 3\) are oriented in the forward direction, and edge \(1\to 3\) is oriented in the backward direction.
Then \(\lambda\) and \(\hat{\lambda}\) are equivalent if and only if \[\lambda_{1}+\lambda_{2}-\lambda_{3}=\hat{\lambda}_{1}+\hat{\lambda}_{2}-\hat{\lambda}_{3}\] Since this does not hold for \(\lambda\) and \(\hat{\lambda}\) in that example, those two networks are not equivalent. One cannot verify equivalence by checking pairs of nodes. That is, it is not sufficient to simply check the length-2 round trip times, as the following example shows.

**Example 2**.: _Suppose \(\mathcal{G}\) is the complete graph with 3 nodes. For the two logical synchrony networks, shown in Figure 4, the length-2 round trip times are_ \[\lambda_{1\text{-}2\text{-}1}=5\] \[\lambda_{2\text{-}3\text{-}2}=4\] \[\lambda_{1\text{-}3\text{-}1}=2\] _and they are the same for \(\hat{\lambda}\). However, these networks are not equivalent. There is no way to relabel so that the logical latencies are the same. This is because the length-3 round trip times are \(\lambda_{1\text{-}2\text{-}3\text{-}1}=6\) and \(\hat{\lambda}_{1\text{-}2\text{-}3\text{-}1}=4\)._

Figure 4: Logical synchrony networks for Example 2

Invariants.As shown by the above results, round-trip times around directed cycles are invariant under relabeling. Cycles which are not directed also result in invariants which may be physically measured and interpreted. We give some examples below.

**Example 3**.: _Figure 5 shows a triangle graph in which node 1 sends frame \(A\) to node 3, and simultaneously sends frame \(B\) to node 3 via node 2. Then \(\operatorname{receive}(B)-\operatorname{receive}(A)\) is measured in localticks at node 3, and it is invariant under relabeling._

**Example 4**.: _Figure 6 shows a square graph. Here node 1 sends frame \(A\) to node 2 and simultaneously sends frame \(B\) to node 4. Node 3 sends frame \(C\) to node 2 and simultaneously sends frame \(D\) to node 4. Note that the transmissions of node 1 and node 3 are not synchronized with each other. Then the quantity_ \[(\mathrm{receive}(A)-\mathrm{receive}(C))-(\mathrm{receive}(B)-\mathrm{receive}(D))\] _is invariant under clock relabelings._

Equivalent networks can have different logical latencies, but must have the same round-trip times. The question of how much freedom this leaves is interesting, and has an important consequence which we discuss below. We first show that one can set the logical latencies arbitrarily on any spanning tree.

**Theorem 5**.: _Suppose \(\mathcal{G},\lambda\) is a logical synchrony network, where \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Suppose \(\mathcal{T}\subset\mathcal{E}\) is a spanning tree. Then for any \(\gamma:\mathcal{T}\rightarrow\mathbb{Z}\) there exists \(c\in\mathbb{Z}^{n}\) such that_ \[\gamma_{i\cdot j}=\lambda_{i\cdot j}+c_{j}-c_{i}\text{ for all }i\to j\in\mathcal{T}\]

Proof.: _We would like to show that there exists \(c\in\mathbb{Z}^{n}\) such that_ \[\begin{bmatrix}I&0\end{bmatrix}(\lambda-\gamma)=\begin{bmatrix}I&0\end{bmatrix}B^{\mathsf{T}}c\] _Let \(y_{1}\) be the left-hand side, then using Theorem 1, this is equivalent to_ \[y_{1}=\begin{bmatrix}B_{11}^{\mathsf{T}}&-B_{11}^{\mathsf{T}}\mathbf{1}\end{bmatrix}c\] _and hence we may choose_ \[c=\begin{bmatrix}B_{11}^{-\mathsf{T}}y_{1}\\ 0\end{bmatrix}\] _which is integral since \(B_{11}\) is unimodular._

We can use this result in the following way. There is no requirement within this framework that logical latencies be nonnegative. However, it turns out that any logical synchrony network which has nonnegative round-trip times is equivalent to one with nonnegative logical latencies.
We state and prove this result below. This result will be useful when we discuss multiclock networks in the subsequent section.

**Theorem 6**.: _Suppose \(\mathcal{G},\lambda\) is a logical synchrony network with \(\mathcal{G}\) strongly connected, and for every directed cycle \(\mathcal{C}\) the round-trip logical latency \(\lambda_{\mathcal{C}}\) is nonnegative. Then there exists an equivalent LSN with edge weights \(\hat{\lambda}\) which are nonnegative._

Proof.: _Pick a node \(r\). Since the graph has no negative cycles, there exists a spanning tree \(\mathcal{T}\), rooted at \(r\), with edges directed away from the root, each of whose paths is a shortest path [19]. Use Theorem 5 to construct \(c\) such that_ \[\lambda_{i\cdot j}+c_{j}-c_{i}=0\text{ for all }i\to j\in\mathcal{T}\] _As a result, we have \(\lambda_{i\cdot j}=c_{i}-c_{j}\) for all edges \(i\to j\) in the tree \(\mathcal{T}\). Denote by \(t_{i\cdot k}\) the length of the path from \(i\) to \(k\) in the tree. Then we have \(t_{i\cdot k}=c_{i}-c_{k}\)._

_Since this is a shortest path tree, we have for any edge \(i\to j\)_ \[t_{r\cdot i}+\lambda_{i\cdot j}\geq t_{r\cdot j}\] _because the path in the tree from \(r\) to \(j\) must be no longer than the path via node \(i\). Therefore_ \[c_{r}-c_{i}+\lambda_{i\cdot j}\geq c_{r}-c_{j}\] _Setting \(\hat{\lambda}_{i\cdot j}=\lambda_{i\cdot j}+c_{j}-c_{i}\) for all edges we find \(\hat{\lambda}_{i\cdot j}\geq 0\) as desired._

This result says that, if we have a shortest path tree, we can relabel the clocks so that the logical latency is zero on all edges of that tree, and with that new labeling the logical latency will be nonnegative on every edge of the graph. An example is given in Figure 7. Note also that an edge having zero logical latency does not imply that communication between the endpoints is instantaneous; only that the numerical value of the time at which the frame is received is equal to the numerical value of the time at which it was sent.

## IV Multiclock networks

In this section we formulate the relationship between events on a network in terms of physical clocks, leading to a mathematical definition called the _multiclock network_. We show that multiclock networks are special types of logical synchrony networks. We will use \(t\) to denote an idealized notion of time, called _wall-clock time_, or _ideal time_ [20]. Time on the network is _multiform_ [1], in the sense that the nodes on the network each maintain their own sense of time. At each node, there is a real-valued clock, denoted by \(\theta_{i}\). Its units are the _localticks_. We refer to the value \(\theta_{i}\) as the _local time_ or _phase_ at node \(i\). Local time has no quantitative relationship to physical or wall-clock time. In particular, we do not view \(\theta_{i}\) as an approximation to wall-clock time and consequently clocks at two distinct nodes are inherently unrelated. At a node \(i\), a processor can read the value \(\theta_{i}\), its own clock, but cannot access the value \(\theta_{j}\) at any other node \(j\neq i\). We mathematically model \(\theta_{i}\) as a function of physical time \(t\), so that \(\theta_{i}:\mathbb{R}\rightarrow\mathbb{R}\), without implying anything about its construction; it simply means that if at physical time \(t\) a hypothetical outside observer were to read clock \(i\), it would read value \(\theta_{i}(t)\).

Figure 5: Triangle invariant

Figure 6: Diamond invariant
What is required is that \(\theta_{i}\) is continuous and increasing, so that \(\theta_{i}(s)<\theta_{i}(t)\) if \(s<t\). We emphasize again that this does not imply that any processes running on the system can access wall-clock time \(t\). The quantity \(\theta_{i}\) is not related to physical time. At times \(t\) where \(\theta_{i}\) is differentiable, we define the frequency \(\omega_{i}\) of the clock \(\theta_{i}\) by \[\omega_{i}(t)=\frac{d\theta_{i}(t)}{dt}\] At a node \(i\), a clock generates an infinite sequence of events, also referred to as _localticks_, which happen whenever \(\theta_{i}\) is an integer. Clocks are not required to be periodic, and this definition of frequency is applicable in the general aperiodic case. Clocks at different nodes may have very different frequencies. If the frequency at node \(i\) is large, then events at that node occur more often. We model the process of frame transmission from node \(i\) to node \(j\) as a FIFO, but real-world implementations are likely to consist of uninterrupted physical communication streams feeding into memory buffers. Every node can access the output (or head) of the FIFO corresponding to each of its incoming links, and the input (or tail) of the FIFO corresponding to each of its outbound links. We will discuss below the requirement that FIFOs neither overflow nor underflow. Logical synchrony in multiclock networks.With every localtick, node \(i\) inserts a frame at the tail of each of its outgoing link FIFOs and removes a frame from the head of each of its incoming link FIFOs. This lock-step alignment of input and output is the fundamental synchronization mechanism that imposes logical synchrony upon the network. At each node, with every localtick, one frame is removed from each incoming FIFO and one frame is sent on each outgoing FIFO. Formal definition of multiclock network.We now turn to a mathematical model that will enable us to analyze the behavior of this system. **Definition 5**: _A **multiclock network** is a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) together with continuous increasing functions \(\theta_{i}:\mathbb{R}\rightarrow\mathbb{R}\) for each \(i\in\mathcal{V}\), and edge weights \(\lambda:\mathcal{E}\rightarrow\mathbb{Z}\)._ This definition contains the entire evolution of the clock phases \(\theta_{i}\), and the link properties \(\lambda_{i\cdot j}\). We will discuss the physical meaning of \(\lambda_{i\cdot j}\) below. Unlike the logical synchrony network, where events are abstract and have no physical time associated with them, in a multiclock network the global timing of all events is defined by the clocks \(\theta\). We will show that a multiclock network is a special case of a logical synchrony network, and the constants \(\lambda\) are the associated logical latencies. To do this, we model the behavior of the FIFOs connecting the nodes. Fifo model.If \(i\to j\) in the graph \(\mathcal{G}\), then there is a FIFO connecting node \(i\) to node \(j\). With every localtick at node \(i\), a frame is added to this FIFO, and with every localtick at node \(j\), a frame is removed from the FIFO. We number the frames in each FIFO by \(k\in\mathbb{Z}\), according to the localtick at the sender, and the frames in the FIFO are those with \(k\) satisfying \[\alpha_{i\cdot j}(t)\leq k\leq\beta_{i\cdot j}(t)\] where \(\alpha\) and \(\beta\) specify which frames are currently in the FIFO at time \(t\). The FIFO model is as follows. 
\[\beta_{i\cdot j}(t) =\left\lfloor\theta_{i}(t)\right\rfloor \tag{5}\] \[\alpha_{i\cdot j}(t) =\left\lfloor\theta_{j}(t)\right\rfloor-\lambda_{i\cdot j}+1 \tag{6}\]

Figure 7: Relabeling so that logical latencies are nonnegative. The upper graph shows edges labeled with \(\lambda\). The root node is in the lower left, and the shortest-path spanning tree is shown in red. The lower graph shows an equivalent LSN, with nodes \(i\) labeled with \(c_{i}\), and the corresponding logical latencies \(\hat{\lambda}_{i\cdot j}=\lambda_{i\cdot j}+c_{j}-c_{i}\). All logical latencies in this graph are nonnegative.

Equation (5) means that frames are added with each localtick at the sender, and numbered according to the sender's clock. Equation (6) means that frames are removed with each localtick at the receiver. The constant \(\lambda\) is to account for the offset between the frame numbers in the FIFO and the clock labels at the receiver. (We add 1 for convenience.) This offset must be constant, since one frame is removed for each receiver localtick. This constant is specified by the multiclock network model in Definition 5. This model precisely specifies the location of every frame on the network at all times \(t\). In particular, this determines the FIFO occupancy at startup. For any time \(t_{0}\), the specification of \(\lambda\) is equivalent to specifying the occupancy of the FIFOs at time \(t_{0}\). This allows us to have a well-defined FIFO occupancy without requiring an explicit model of startup.

Logical latency.Logical latency is the fundamental quantity which characterizes the discrete behavior of a network, and allows us to ignore the details of the clocks \(\theta_{i}\). The idea is that we can understand the logical structure of the network, such as the events, the execution model, and causality, without needing to know specific wall-clock times at which these things occur. We now show that the quantity \(\lambda_{i\text{-}j}\) corresponds to the logical latency. Suppose a frame is sent from node \(i\) at localtick \(k\in\mathbb{Z}\), and wall-clock time \(t^{k}_{\text{send}}\). Then \(\theta_{i}(t^{k}_{\text{send}})=k\). Let the time which it is received at node \(j\) be denoted by \(t^{k}_{\text{rec}}\). Both \(t^{k}_{\text{send}}\) and \(t^{k}_{\text{rec}}\) are wall-clock times, and apart from the causality constraint that the frame must be received after it is sent, there is no constraint on the difference between these times; that is, the _physical latency_ \(t^{k}_{\text{rec}}-t^{k}_{\text{send}}\) may be large or small. In general, physical latency will be affected by both the number of frames in the FIFO \(i\to j\) as well as the time required for a frame to be physically transmitted. We do not presuppose requirements on the physical latency.

**Lemma 2**.: _Suppose frame \(k\) is sent from node \(i\) to node \(j\). Then \(t^{k}_{\text{send}}\) and \(t^{k}_{\text{rec}}\) satisfy_ \[\theta_{i}(t^{k}_{\text{send}}) =k \tag{7}\] \[\theta_{j}(t^{k}_{\text{rec}}) =k+\lambda_{i\text{-}j} \tag{8}\] _and hence the logical latency is given by_ \[\lambda_{i\text{-}j}=\theta_{j}(t^{k}_{\text{rec}})-\theta_{i}(t^{k}_{\text{send}}) \tag{9}\]

Proof.: _Since frames in the FIFO \(i\to j\) are numbered according to the sender's clock, we have_ \[t^{k}_{\text{send}}=\inf\{t\mid\beta_{i\text{-}j}(t)=k\}\] _that is, \(t^{k}_{\text{send}}\) is the earliest time at which frame \(k\) is in the FIFO from \(i\) to \(j\). Since the floor function is right continuous, this gives equation (7)._
_Similarly, we have_ \[t^{k}_{\text{rec}}=\inf\{t\mid\alpha_{i\text{-}j}(t)=k+1\}\] _and this implies equation (8), and the logical latency follows._

Unlike the physical latency \(t_{\text{rec}}-t_{\text{send}}\), the logical latency \(\theta_{j}(t^{k}_{\text{rec}})-\theta_{i}(t^{k}_{\text{send}})\) does not change over time. Note also that the logical latency is an integer. Since the logical latency is constant, we can conclude that every multiclock network is a logical synchrony network; more precisely, the logical latencies defined by the multiclock network satisfy the same properties as those of a logical synchrony network.

### Realizability

We now turn to an analysis of the occupancy of the FIFOs in more detail. A frame is considered _in-transit_ from \(i\to j\) at time \(t\) if it has been sent by node \(i\) but not yet received by node \(j\); that is, if it is in the FIFO from \(i\) to \(j\). Define \(\nu_{i\text{-}j}(t)\) to be the number of frames in transit \(i\to j\). Then we have \[\nu_{i\text{-}j}(t) =\beta_{i\text{-}j}(t)-\alpha_{i\text{-}j}(t)+1 =\lfloor\theta_{i}(t)\rfloor-\lfloor\theta_{j}(t)\rfloor+\lambda_{i\text{-}j} \tag{10}\] and this holds for all \(t\). Here we can see that the constant \(\lambda_{i\text{-}j}\) is a property of the link \(i\to j\), which determines the relationship between the clock phases at each end of the link and the number of frames in transit. So far in this model, there is nothing that prevents the FIFO occupancy on an edge \(i\to j\) from becoming negative. If the clock \(\theta_{j}\) at node \(j\) has a higher frequency than the clock \(\theta_{i}\) at node \(i\), and if that frequency difference is maintained for long enough, then the FIFO \(i\to j\) will be rapidly emptied. In this case, \(\theta_{j}\) will become much larger than \(\theta_{i}\), and from (10) we have that \(\nu_{i\text{-}j}\) will become negative. Similarly, the FIFO will overflow if the frequencies become imbalanced in the other direction. In [15] a technique using a dynamically switching control algorithm is presented that allows prevention of such behaviors. We make the following definition.

**Definition 6**.: _A multiclock network is called **realizable** if there exists \(\nu_{max}\in\mathbb{R}\) such that for all edges \(i\to j\)_ \[0\leq\nu_{i\text{-}j}(t)\leq\nu_{max}\quad\text{for all $t\in\mathbb{R}$} \tag{11}\]

Note that this requirement must hold for all positive and negative time \(t\). The terminology here is chosen to be suggestive, in that we would like a condition which implies that we can physically implement a multiclock network. A physically necessary condition is that the FIFO occupancies are bounded and cannot be negative.

Cycles and conservation of frames.Cycles within a multiclock network have several important properties. The first is _conservation of frames_, as follows.

Theorem 7.: _Suppose \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) is a directed cycle in a multiclock network. Then_ \[\sum_{i=0}^{k-1}\nu_{s_{i}}(t)=\lambda_{\mathcal{C}}\] _In particular, the number of frames in transit around the cycle is constant, and is the sum of the logical latencies on the cycle._

Proof.: _The proof follows immediately from (10)._

An immediate corollary of this is that, in a physical network, if every edge of \(\mathcal{G}\) is on a cycle, then the number of frames in the network is finite and the upper bound condition for realizability is satisfied. This is the case, for example, in a strongly connected graph.
Note that this holds because, in a physical network, the FIFO occupancy cannot be negative. It is not the case that the FIFO model used here implies that \(\nu\) is upper bounded, since in the model some FIFO lengths may become large and negative while others become large and positive. This theorem is particularly evocative in the simple and common case where we have two nodes \(i\), \(j\) connected by links in both directions. In this case, whenever \(i\) receives a frame, it removes it from its incoming FIFO from \(j\), and adds a new frame to the outgoing FIFO to \(j\). Thus the sum of the occupancies of the two FIFOs is constant. The following result relates round-trip times to realizability. **Theorem 8**.: _Suppose \(\mathcal{C}\) is a cycle in a realizable multiclock network. Then \(\lambda_{\mathcal{C}}\geq 0\)._ Proof.: _This follows immediately from Theorem 7 and Definition 6._ That is, a realizable multiclock network has the important physical property that all round-trip times are nonnegative. The monotonic property of \(\theta\) implies that this holds in both localticks and wall-clock time. No matter what path a frame takes around the network, it cannot arrive back at its starting point before it was sent. However, it is possible, within the class of realizable networks defined so far, for this sum to be equal to zero. In this case one would have a frame arrive at the time it is sent. This would require some pathological conditions on the clocks. ### Equivalent synchronous systems We now consider the class of perfectly synchronous systems, where all of the nodes of the graph share a single clock. The links between the nodes are FIFOs as before, and as a result of the synchronous assumption their occupancies are constant. This is a particular instance of the multiclock network where all clocks \(\theta_{i}\) are equal. Such a system has an extended graph, and it has logical latencies which do not change with time, and are equal to the occupancies of the FIFOs, according to (10). Because the system is synchronous, the FIFOs behave like a chain of delay buffers. The corresponding execution model, defined by the extended graph, is identical to that of a logical synchrony network with the same logical latencies. Said another way, a logical synchrony network is equivalent to a perfectly synchronous network of processors connected by delay buffers with occupancies given by the logical latencies. This suggests the following question: what happens if we have a logical synchrony network where one or more of the edges has a negative logical latency? Using Theorem 6, we know that if a network has nonnegative round-trip times, one can relabel the clocks so that all logical latencies are nonnegative. Hence any physically constructible multiclock network is equivalent to a perfectly synchronous network. ## V The bittide mechanism We now turn to the physical implementation of logically synchronous systems. A hardware implementation can be found at [21]. In Section IV we have already discussed one of the key components of this, specifically that with each localtick, a node removes one frame from the head of every incoming FIFO, and sends one frame on every outgoing FIFO. However, this is not enough for implementation, since we must ensure that the occupancies of the FIFOs neither underflow nor overflow. In the bittide model, the FIFO connecting node \(i\) to node \(j\) is composed of two parts, connected sequentially.
The first part is a communication link, which has a latency \(l_{i\text{-}j}\), the number of _wall-clock_ seconds it takes to send a frame across the link. The second part is called the _elastic buffer_. It is a FIFO which is located at the destination node \(j\). Node \(i\) sends frames, via the communication link, to node \(j\), where they are inserted at the tail end of the elastic buffer. We assume that the communication link cannot reorder frames, and so together the communication link and the elastic buffer behave as a single FIFO. Each node has an elastic buffer for each of its incoming links. With each clock localtick, it does two things: first, it removes a frame from the head of each of the elastic buffers and passes that frame to the processor core; second, the core sends one frame on each outgoing communication link. The purpose of this structure is as follows. At each node, the system can observe the occupancy of all of the elastic buffers. These occupancies provide information regarding the relative clock frequencies of the node compared to its incoming neighbors. Specifically, if we have an edge \(i\to j\), and node \(i\) has a lower clock frequency than node \(j\), then the corresponding elastic buffer at node \(j\) will start to drain. Conversely, if node \(i\) has a higher clock frequency, the elastic buffer will start to fill. Node \(j\) can therefore use the occupancy of the elastic buffers to adjust its own clock frequency. If, on average, its buffers are falling below half-full, the node can reduce its clock frequency, and conversely. This mechanism was originally proposed in [22]. The exact details of how it is implemented (such as how much to increase or decrease the frequency) were further developed in [13, 14, 15]. These papers show that, provided the frequency corrections are chosen appropriately, this mechanism will ensure that elastic buffers never underflow or overflow. A functional simulation of bittide is available at [23], and a simulation of the clock synchronization dynamics is at [24]. ## VI Related work The seminal work of Lamport [12] presents a formal framework for clocks in distributed systems, which in particular defined an ordering on a directed graph corresponding to temporal relationships between events, and a global scalar clock which was consistent with that ordering. Subsequent work [17, 18] developed the notion of vector clocks, where each node in a network maintains a vector notion of time which captures exactly the ordering defined by the graph. The synchronization mechanism of bittide was first proposed in [22]. Subsequent works include [13], which developed a mathematical model of the synchronization layer, and [14], which analyzed its performance properties. Ever since the first distributed systems, synchronous execution has been a gold standard for formal reasoning, provable correctness properties, and the ability to express efficient algorithms [25, 26, 27, 28]. As a consequence, the domain of synchronous execution has been studied extensively, in particular in the context of cyber-physical systems. Cyber-physical systems interact with physical processes, and Lee [29] argues that integrating the notion of time into system architecture, programming languages and software components leads to the development of predictable and repeatable systems. Reasoning about distributed systems has led to the definition of both execution models and parallel programming models.
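To make the elastic-buffer mechanism of Section V concrete, here is a toy sketch of the frequency-adjustment idea: each node nudges its clock frequency using the occupancy of its incoming elastic buffers, pulling them back toward half-full. All values and gains below are illustrative assumptions; the controllers actually used by bittide are those developed in [13, 14, 15].

```python
# Toy two-node model: occupancy of each buffer changes at the rate of the
# frequency mismatch between sender and receiver (cf. equation (10)), and
# each node applies buffer-occupancy feedback plus a weak pull to nominal.
NOMINAL, HALF_FULL, GAIN, LEAK, DT = 1.0, 16.0, 1e-3, 0.05, 1.0

freq = {"i": 1.000, "j": 1.004}              # mismatched free-running oscillators
buf = {("i", "j"): HALF_FULL, ("j", "i"): HALF_FULL}

for _ in range(20000):
    for src, dst in buf:
        buf[(src, dst)] += (freq[src] - freq[dst]) * DT
    for node in freq:
        err = sum(buf[e] for e in buf if e[1] == node) - HALF_FULL
        freq[node] += (GAIN * err - LEAK * (freq[node] - NOMINAL)) * DT

print({e: round(v, 3) for e, v in buf.items()})   # -> both near 16 (half-full)
print({n: round(v, 6) for n, v in freq.items()})  # -> both near 1.0
```

The weak pull toward the nominal frequency damps the oscillation that pure proportional feedback on an integrating buffer would otherwise produce; with the settings above, both occupancies settle near half-full and both clocks converge.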
Kahn Process Networks [30] is one of the most general; while it does not involve time or synchronization explicitly, processes in a Kahn process network communicate through blocking FIFOs, and thus synchronize implicitly through the communication queues. An important distinction between bittide and Kahn Process Networks is that the former does not make use of blocking. Synchrony, and its most common representation as a global time reference, led to the definition of multiple models of computation. For example, Synchronous Dataflow [31] enables static scheduling of tasks to resources; Timed Communicating Sequential Processes (Timed CSP) [32] develops a model of real-time execution in concurrent systems; Globally Asynchronous, Locally Synchronous (GALS) communication models [33] address the issue of mapping a synchronous specification to existing systems which are asynchronous. Henzinger et al. [34] introduce the concept of _logical execution time_, and Kopetz et al. [35] introduce Time-Triggered Architectures (TTAs) as a system architecture where time is a first-order quantity; they take advantage of the global time reference to exploit some of the desirable properties of synchronous execution: precisely defined interfaces, simpler communication and agreement protocols, and timeliness guarantees. Synchronous programming models led to synchronous programming languages, e.g., Esterel [36], Lustre [37], Signal [38], and the development of tools to formally analyze their execution correctness as well as compilers to generate correct synchronizing code for embedded [2] or multicore platforms [4]. This created a virtuous cycle: as researchers came to understand these properties better and embedded them into languages and tools, they drove the adoption of synchronous execution and formal tools for a number of industrial control applications, avionics, and critical system components. ## VII Conclusions This paper has presented logical synchrony, a model where processes on distributed network cores behave as if they were synchronized, even if the clocks on the individual cores are imperfectly synchronized. A logical synchrony network is an abstraction which characterizes the causality relationship between events, and the logical latencies of the network have the striking property that they specify the causality relationships exactly. When we consider implementations of a logical synchrony network, that leads to defining local clocks in a multiclock network. In this setting, the logical latency combines the FIFO occupancies with the offsets between neighboring clocks, and this combination is enough to determine the causality relationships. This offers a model where the logical latencies are sufficient to allow static scheduling of both communications and computation. The bittide mechanism gives a simple method for implementing this scheme. The result is a mechanism for distributed computation in which scheduling requires knowledge only of the graph topology and the logical latencies, and which has very low overhead. The main advantage of the bittide approach is that it enables _synchrony_, and not wall-clock time, as the first-order abstraction. The logical synchrony framework presented in this paper and the bittide mechanism bring the guarantees available in synchronous execution to distributed systems without the need for a global time reference. This model has a natural utility for those applications with analyzable and predictable behavior.
We expect that future work on abstractions and programming models that utilize logical synchrony will enable larger classes of applications. Examples may include probabilistically statically scheduled applications, where the application's behavior is predictable with high probability, or slowly changing applications, where the behavior evolves from state to state, each state predictable, but with enough latency that the system can adapt and reconfigure. ## VIII Acknowledgments The ideas for this paper came about through much collaboration. In particular, we would like to thank Nathan Allen, Pouya Dormiani, Chase Hensel, Logan Kenwright, Robert O'Callahan, Chris Pearce, Dumitru Potop-Butucaru, and Partha Roop for many stimulating discussions about this work. Robert had the idea for the proof of Theorem 6.
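As a closing illustration, the clock relabeling used in Figure 7 and Theorem 6 can be sketched in a few lines. The graph below is illustrative, not one from the paper; taking the offsets as negated shortest-path distances from a root makes every relabeled latency nonnegative whenever all cycle sums are nonnegative, which is exactly the round-trip condition of Theorem 8.

```python
# Relabeling per Figure 7: lambda_hat_{i-j} = lambda_{i-j} + c_j - c_i,
# with c_v = -dist(root, v) computed by Bellman-Ford (latencies may be
# negative, but cycle sums are assumed nonnegative).
edges = {("a", "b"): 2, ("b", "c"): -1, ("c", "a"): 1, ("a", "c"): 3}

nodes = {v for edge in edges for v in edge}
dist = {v: float("inf") for v in nodes}
dist["a"] = 0.0                               # root node
for _ in range(len(nodes) - 1):               # Bellman-Ford relaxation
    for (i, j), lam in edges.items():
        dist[j] = min(dist[j], dist[i] + lam)

c = {v: -dist[v] for v in nodes}
relabeled = {(i, j): lam + c[j] - c[i] for (i, j), lam in edges.items()}
print(relabeled)
assert all(v >= 0 for v in relabeled.values())
```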
We introduce logical synchrony, a framework that allows distributed computing systems to operate in concert as if they were synchronized, without requiring a global clock or any reference to universal time. We construct a model of events called a logical synchrony network, with nodes corresponding to processors, in which each node has a local clock that generates its events. We measure logical latencies and examine their properties. We then analyze multiclock networks, a refinement of logical synchrony networks, and show that they implement logical synchrony networks. The bittide mechanism is an instance of a multiclock network; we discuss its clock control mechanism, which ensures that buffers neither overflow nor underflow. Finally, we give conditions under which a logical synchrony network admits an equivalent synchronous realization.
2301.13355
Bond-Selective Full-Field Optical Coherence Tomography
Optical coherence tomography (OCT) is a label-free, non-invasive 3D imaging tool widely used in both biological research and clinical diagnosis. Current OCT modalities can only visualize specimen tomography without chemical information. Here, we report a bond-selective full-field OCT (BS-FF-OCT), in which a pulsed mid-infrared laser is used to modulate the OCT signal through the photothermal effect, achieving label-free bond-selective 3D sectioned imaging of highly scattering samples. We first demonstrate BS-FF-OCT imaging of 1 {\mu}m PMMA beads embedded in agarose gel. Next, we show 3D hyperspectral imaging of polypropylene fiber mattress from a standard surgical mask. We then demonstrate BS-FF-OCT imaging on biological samples, including cancer cell spheroids and C. elegans. Using an alternative pulse timing configuration, we finally demonstrate the capability of BS-FF-OCT on a bulky and highly scattering 150 {\mu}m thick mouse brain slice.
Haonan Zong, Celalettin Yurdakul, Jian Zhao, Zian Wang, Fukai Chen, M. Selim Ünlü, Ji-Xin Cheng
2023-01-31T01:15:32
http://arxiv.org/abs/2301.13355v1
# Bond-Selective Full-Field Optical Coherence Tomography ###### Abstract Optical coherence tomography (OCT) is a label-free, non-invasive 3D imaging tool widely used in both biological research and clinical diagnosis. Current OCT modalities can only visualize specimen tomography without chemical information. Here, we report a bond-selective full-field OCT (BS-FF-OCT), in which a pulsed mid-infrared laser is used to modulate the OCT signal through the photothermal effect, achieving label-free bond-selective 3D sectioned imaging of highly scattering samples. We first demonstrate BS-FF-OCT imaging of 1 \(\upmu\)m PMMA beads embedded in agarose gel. Next, we show 3D hyperspectral imaging of polypropylene fiber mattress from a standard surgical mask. We then demonstrate BS-FF-OCT imaging on biological samples, including cancer cell spheroids and _C. elegans_. Using an alternative pulse timing configuration, we finally demonstrate the capability of BS-FF-OCT on a bulky and highly scattering 150 \(\upmu\)m thick mouse brain slice. Since the first report by Huang _et al._ in 1991, optical coherence tomography (OCT) has experienced many advanced technical developments and demonstrated significant applications in the past decades. [1] OCT has evolved from time-domain OCT (TD-OCT) [2], which mechanically scans the optical phase of the reference arm to obtain the signal from different depths, to spectral-domain/Fourier-domain OCT (SD/FD-OCT) [3-5], which spectrally resolves the detected interferometric signal from different depths without mechanically scanning. SD/FD-OCT has dramatically improved the sensitivity and imaging speed of OCT and achieved in vivo retinal imaging [4] at video rate [5]. However, neither TD-OCT nor SD/FD-OCT is suitable for obtaining high-resolution en-face images of samples because these modalities acquire the signal from different depths at a fixed lateral location first and then scan the sample laterally. To enable high-resolution en-face OCT imaging, time-domain full-field OCT (FF-OCT) was developed. [6, 7] FF-OCT adopts widefield illumination and a multi-pixel detector (a CCD or CMOS camera) to obtain en-face images at a given depth without scanning across the sample. FF-OCT was applied to in vivo human corneal [8] and retinal imaging [9] for ophthalmic diagnosis. FF-OCT was also used for histological imaging of different types of tissues, such as human skin tissue [10], breast tissue [11], and brain tissue [12], for cancer diagnosis. However, those conventional FF-OCT modalities can only provide tomography images without any molecular information, which limits their potential applications to samples that have different chemical compositions but similar morphology. Vibrational microscopy has been a widely used tool for label-free molecular imaging without sample perturbation. [13] In these techniques, Raman scattering, or linear infrared absorption, is measured to provide the contrast. More recently, the relatively weak signal and low acquisition speed of the spontaneous Raman scattering [14] have been boosted by coherent Raman scattering microscopy [15, 16]. Compared to Raman scattering, which has an extremely small cross-section (\(\sim\)10\({}^{-30}\) to 10\({}^{-28}\) cm\({}^{2}\)), linear infrared (IR) absorption has ten orders of magnitude larger cross-section (\(\sim\)10\({}^{-18}\) cm\({}^{2}\)).
Despite the large cross-section, conventional IR imaging technique such as Fourier transform infrared (FTIR) [17, 18] has poor spatial resolution due to the long illumination wavelength. To break this limitation, mid-infrared photothermal (MIP) microscopy, which indirectly measures the IR absorption by using the photothermal effect, was developed recently. [19, 20] Since then, MIP microscopy has evolved from point-scan [19-26] to widefield configurations [27-37]. As reviewed recently [38, 39], MIP microscopy offers a few advantages. First, sub-micron spatial resolution is achieved through the visible probe beam. Second, widefield MIP microscopy enables high-throughput chemical imaging by exploiting the advantage that linear IR absorption doesn't require a tight focus. By using widefield illumination and detection configuration, the imaging speed could reach half of the camera frame rate. Third, volumetric chemical imaging is possible through mid-infrared photothermal phase tomography [32, 33, 37]. Despite these advances, phase tomography, including optical diffraction tomography [32] and intensity diffraction tomography [37], is limited to weakly scattering samples and can't be applied to highly scattering specimens such as tissues. Using OCT as the probe of the mid-infrared (MIR) photothermal effect can potentially enable bond-selective 3D imaging for highly scattering samples. Notably, both "photothermal" and "MIR" processes have been applied to OCT separately. On the one hand, photothermal OCT has been a powerful functional extension of OCT since its first demonstration by Fujimoto et al. in 2008. [40] Photothermal OCT is realized by adding another modulated heating beam to OCT and measuring the modulation of the OCT signal induced by the heating beam. Since the heating beam is also in the visible or near-infrared region, it provides limited molecular specificity to OCT by detecting signals from specific absorbers at the heating wavelength, which can be endogenous pigments [41-43] in the sample or exogenous contrast agents [44-49] that are imported into the sample. Although the exogenous contrast agents can improve the molecular specificity, perturbations may be introduced to the sample during the labeling process. On the other hand, OCT in the "MIR" domain has been reported, including conventional OCT modalities using MIR light sources to improve penetration depth, [50] or a time-gated method to detect the reflection of MIR light from different depths, [51] while these techniques still suffer from the intrinsic MIR resolution limitation, which is the same as FTIR. Despite these efforts, bond-selective OCT that harnesses the MIR photothermal effect has not been reported. In this work, we report bond-selective full-field optical coherence tomography (BS-FF-OCT), in which a pulsed MIR laser modulates the full-field OCT signal through the photothermal effect. Our technique enables label-free bond-selective 3D sectioning imaging of highly scattering thick samples. To achieve this, we integrate a modulated MIR heating beam into a time-domain FF-OCT. We use a broadband light-emitting-diode (LED) as the probe light source and a virtual lock-in camera as the detector [27]. Our system can measure the change in the OCT signal as a result of thermal expansion and refractive index change induced by MIR heating. First, we demonstrate 3D bond-selective imaging of 1 \(\upmu\)m PMMA beads embedded in agarose gel, which confirms the isotropic 1-micron resolution of BS-FF-OCT. 
Second, we show 3D hyperspectral imaging of a polypropylene fiber mattress from a standard surgical mask and the comparison between BS-FF-OCT and FTIR to confirm the spectrum fidelity. Then, we demonstrate bond-selective volumetric imaging on biological samples, including cancer cell spheroids and _C. elegans_. Finally, we demonstrate the capability of the BS-FF-OCT setup on a very bulky and highly scattering biological sample, i.e., a 150-\(\upmu\)m thick mouse brain tissue slice, using an alternative pulse timing configuration. ## 2 Results and discussion ### BS-FF-OCT principles, instrumentation, and image reconstruction Figure 1: BS-FF-OCT setup, synchronization, and image processing. (a) BS-FF-OCT setup configuration. BS: Beam-splitter. L1-2: Lens. LED: Light-emitting-diode. OPM: Off-axis parabolic mirror (90 degrees). (b) Synchronization and image acquisition at a single depth. RM: reference mirror. Camera captures ”hot” and ”cold” frames, where the MIR beam is respectively on and off in a sequence. MIR and probe pulses are synchronized, and the time delay (t\({}_{\text{d}}\)) between them is optimized to detect the maximum photothermal signal. Reference mirror is shifted a certain distance (\(\delta\)) 4 times to create 4 pairs (hot and cold) interference raw images, and then a pair of FF-OCT images at a specific depth is obtained by image processing. **(c) Workflow of 3D image reconstruction.** By combining FF-OCT images at multiple depths, 3D reconstruction images can be obtained. Finally, 3D bond-selective image can be obtained by subtracting the hot and cold 3D images. BS-FF-OCT relies on the modulation of the OCT signal by the photothermal effect induced by the MIR beam. The setup shown in **Fig. 1a** is compartmentalized into two sub-systems: (1) FF-OCT and (2) MIR modulation. For the FF-OCT part, the light source is a broadband light-emitting-diode (LED, central wavelength: 545 nm, FWHM: 100 nm). The reference mirror (reflectivity: 4%) is placed on a piezo scanner to create phase shifting between the reference and sample arms. Both the sample and reference mirrors are installed on motorized stages to scan different depths of the sample. For the MIR modulation part, a tunable MIR laser from 1320 cm-1 to 1775 cm-1 (linewidth: 10 cm-1), covering the fingerprint region is used. The MIR and probe beams illuminate the sample from the same side. The setup captures the depth-resolved photothermal FF-OCT images at a specific depth of the sample using a virtual lock-in technique [27], as shown in **Fig. 1b**. The top panel of **Fig. 1b** shows the timing configuration of the probe, MIR pulses, and camera exposure. The MIR pulse has a 20 kHz repetition rate and is modulated to "on" and "off" duty cycles by an optical chopper at 50 Hz. The probe pulse repetition rate is also set to 20 kHz which is synchronized with the MIR pulse with a specific delay time to optimize the photothermal signal. The camera frame rate is 100 Hz and is synchronized with the modulated "on" and "off" duty cycles of the MIR pulse. The camera-captured frames that correspond to the "on" and "off" duty cycles are called "hot" and "cold" frames, respectively. The middle panel of **Fig. 1b** shows that at each phase position of the reference mirror, a set of "hot" and "cold" raw frames are captured (to be averaged to 1 "hot" frame and 1 "cold" frame), and there are in total 4 phase positions. The bottom panel of **Fig. 
1b** shows that 1 "hot" or "cold" FF-OCT image is obtained from the 4 "hot" or "cold" averaged raw frames, using the 4-frame phase-shifting algorithm [7]. Then, the depth-resolved photothermal FF-OCT image at this specific depth can be obtained by subtracting the "hot" and "cold" FF-OCT images. Furthermore, to obtain 3D reconstructed images for both hot and cold states, as shown in **Fig. 1c**, the sample is scanned at different depths with automatic coherence plane correction within the imaging volume (see details in the methods section). A 3D bond-selective OCT map can be obtained by subtracting the hot and cold 3D reconstructed images. To characterize the BS-FF-OCT setup, we first demonstrate 3D bond-selective imaging of 1 \(\mu\)m Poly(methyl methacrylate) (PMMA) beads embedded in agarose gel. **Fig. 2** shows that BS-FF-OCT achieves label-free volumetric vibrational spectroscopic imaging at isotropic 1-micron resolution. Specifically, **Fig. 2a-c** shows the cold FF-OCT, on-resonance, and off-resonance BS-FF-OCT images captured at three different depths with 0.5 \(\mu\)m step size. First, the cold FF-OCT images in **Fig. 2a** distinguish beads suspended at different depths (i.e., 1 \(\mu\)m apart), showing the depth-resolving capability of the BS-FF-OCT setup. Second, to demonstrate the bond-selective capability, the MIR beam is set to an on-resonance absorption peak of PMMA at 1730 cm-1. The BS-FF-OCT images show features consistent with the cold FF-OCT images (see **Fig. 2a-b**). Yet, the off-resonance BS-FF-OCT images at 1770 cm-1 display no beads, as shown in **Fig. 2c**. **Fig. 2d-e** are the zoom-in views of a selected 3D imaging volume from three different directions. **Fig. 2d\({}_{1}\)** and **Fig. 2e\({}_{1}\)** are the corresponding areas indicated by the dashed squares in **Fig. 2a\({}_{2}\)** and **Fig. 2b\({}_{2}\)**, respectively. It can be seen from **Fig. 2d-e** that the beads have a slightly longer dimension along the optical axis. To characterize the axial and lateral resolution quantitatively, the 1D line profiles across the selected bead are plotted in **Fig. 2f**. The full-width at half-maximum (FWHM) values of these line profiles are as follows: 942.5 nm (green in (f\({}_{1}\))), 824.4 nm (black in (f\({}_{1}\))), 787.3 nm (blue in (f\({}_{2}\))), 772.7 nm (black in (f\({}_{2}\))), 870.3 nm (purple in (f\({}_{3}\))) and 1156.7 nm (black in (f\({}_{3}\))). This result demonstrates the isotropic 1-\(\mu\)m resolution of the BS-FF-OCT setup. As a pump-probe technique, the resolution of the BS-FF-OCT setup is determined by the wavelength and optics of the probe beam [19]. For FF-OCT, the axial resolution (\(\Delta z\)) [7] can be calculated as \(\Delta z=\left(\frac{1}{\Delta z_{s}{}^{2}}+\frac{1}{\Delta z_{NA}{}^{2}} \right)^{-\frac{1}{2}}\), where \(\Delta z_{s}=\frac{2\ln(2)}{n\pi}\cdot\frac{\lambda_{0}^{2}}{\Delta\lambda}\) and \(\Delta z_{NA}=\frac{n\lambda_{0}}{NA^{2}}\). The lateral resolution (\(\Delta\tau\)) can be calculated as \(\Delta\tau=\frac{\lambda_{0}}{2\cdot NA}\). Substituting \(\lambda_{0}=545\ nm\), \(\Delta\lambda=100\ nm\), \(n=1\), \(NA=0.35\), the theoretical axial resolution \(\Delta z\) can be calculated to be 1257.3 nm, and the theoretical lateral resolution \(\Delta\tau\) can be calculated to be 778.6 nm. The theoretical axial and lateral resolution values are roughly consistent with the experimental FWHM values shown in **Fig. 2f**.
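The quoted values are easy to reproduce; a minimal sketch evaluating the two formulas above with the stated parameters:

```python
# Sanity check of the theoretical resolution figures quoted in the text.
import math

lam0, dlam, n, NA = 545e-9, 100e-9, 1.0, 0.35

dz_s = (2 * math.log(2) / (n * math.pi)) * lam0**2 / dlam   # source-limited axial term
dz_na = n * lam0 / NA**2                                    # NA-limited axial term
dz = (1 / dz_s**2 + 1 / dz_na**2) ** -0.5                   # combined axial resolution
dr = lam0 / (2 * NA)                                        # lateral resolution

print(f"axial   {dz * 1e9:.1f} nm")   # -> 1257.3 nm
print(f"lateral {dr * 1e9:.1f} nm")   # -> 778.6 nm
```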
To demonstrate the 3D spectroscopic imaging capability of BS-FF-OCT, we use polypropylene fiber mattress from a standard surgical mask in air as a testbed **(Fig. 3**). To emphasize the depth-resolving capability of BS-FF-OCT, the cold, on-resonance, and off-resonance MIP images at different depths are captured as shown in **Fig. 3a-c**. Those widefield images are obtained under the same experimental condition and acquisition parameters except that the reference arm is blocked, which makes a fair comparison to those of BS-FF-OCT. As shown in **Fig. 3a-c**, the depth-resolving capability of conventional widefield MIP imaging is very limited, where the fiber features are indistinguishable. In contrast, BS-FF-OCT images in **Fig. 3d-f** clearly resolve features at different depths. Both widefield MIP images and BS-FF-OCT images demonstrate bond-selective capability, i.e., at the C-H asymmetric deformation vibration bond at around 1450 cm-1. While **Fig. 3b** and **Fig. 3e** both show bright contrast, no contrast was found at the 1600 cm-1 off-resonance wavenumber images (see **Fig. 3c** and **Fig. 3f**). To further show the 3D imaging capability of BS-FF-OCT, we perform 3D reconstruction of the polypropylene fiber mattress for a total depth range of 75 \(\upmu\)m (see **Fig. 3g**). We notice that each fiber strip in **Fig. 3g** shows "double strips" which can be seen more clearly in the **Mov. S1a and Mov. S1b**. Since FF-OCT measures back reflections from the sample, the air-polyppropylene top and polypropylene-air bottom interfaces of each fiber strip create two distinguishable strips. Also, the diameter of each fiber strip is larger than the axial resolution of the setup thus we can see the two reflection interfaces. **Fig. 3h** shows the BS-FF-OCT spectrum extracted from the position indicated by the green arrow in **Fig. 3e\({}_{2}\)** and comparison with the FTIR spectrum. Both Figure 3: **BS-FF-OCT imaging of polypropylene fiber mattress. (a) cold widefield images at different depths. (b-c) widefield MIP images at 1450 cm-1 and 1600 cm-1. 1450 cm-1 is the C-H asymmetric deformation vibration bond in polypropylene, and 1600 cm-1 is at off-resonance. (d) cold FF-OCT images at different depths. (e-f) BS-FF-OCT images at 1450 cm-1 and 1600 cm-1. (g) 3D reconstruction of cold FF-OCT and BS-FF-OCT images. (h) comparison of BS-FF-OCT and FTIR spectrum. The BS-FF-OCT spectrum is extracted from the position in (e\({}_{2}\)) indicated by the green arrow. FTIR spectrum is acquired by a commercial FTIR spectroscopy from a bulky measurement of the polypropylene fiber sample. BS-FF-OCT images and spectrum are normalized by MIR powers. All images are denoised by BM4D algorithm. BS-FF-OCT and FTIR spectrum is smoothed by Gaussian-weighted moving average filter.** BS-FF-OCT and FTIR spectra show peaks for the C-H symmetric deformation vibration bond at around 1370 cm-1 and the C-H asymmetric deformation vibration bond at around 1450 cm-1. These results further verify the bond-selective capability and demonstrate good spectral fidelity. ### BS-FF-OCT imaging of human bladder cancer cell spheroids and C. elegans To demonstrate the broad application potential of our technique on biological samples, we used human bladder cancer cell spheroids and _C. elegans_ as testbeds. **Fig. 4** shows the BS-FF-OCT images of human bladder cell spheroids. The high-density areas (cytoplasm) and low-density areas (nucleus) inside the cell spheroids volume can be seen clearly (see **Fig. 4b**). 
Features from different depths can be distinguished compared to the cold widefield images in **Fig. 4a**. **Fig. 4c** and **Fig. 4d** confirm the bond-selective capability, i.e., at 1650 cm-1 (see **Fig. 4c**), in resonance with the amide I band of proteins, there is a stronger photothermal contrast than at the off-resonance 1775 cm-1 (see **Fig. 4d**). Moreover, the cutting-through sectioning images along the axial direction of the dashed lines in **Fig. 4c2** show the cytoplasm and nucleus areas from the side views (see **Fig. 4e**). Figure 4: **BS-FF-OCT imaging of cancer cell spheroids.****(a) Cold widefield images at different depths.****(b) cold FF-OCT images at different depths.****(c-d) BS-FF-OCT images at 1650 cm-1, and 1775 cm-1.** 1650 cm-1 is the amide I band in protein, and 1775 cm-1 is at off-resonance.****(e) Cross-sectional images along the dashed lines in (c2).** BS-FF-OCT images are normalized by MIR powers. **Fig. 5** shows the BS-FF-OCT images of _C. elegans_. The cold FF-OCT images in **Fig. 5b** show features inside the _C. elegans_ worm at various depths. In contrast, scatterers from different planes hinder these features in the cold widefield images due to the lack of optical-sectioning capability (see **Fig. 5a**). The BS-FF-OCT images in **Fig. 5c** show strong photothermal contrast at 1650 cm-1, the amide I band, whereas the photothermal contrast at the 1770 cm-1 off-resonance wavenumber in **Fig. 5d** is weak. This confirms the chemically selective capability, since _C. elegans_ is rich in protein. To further demonstrate the 3D sectioning capability of the BS-FF-OCT setup, the cutting-through sectioning images along the axial direction and dashed lines shown in **Fig. 5b\({}_{1}\)-d\({}_{1}\)** are plotted in **Fig. 5e-g**. In these side views, the different structures inside the worm are shown more clearly. We chose a 150-\(\upmu\)m thick mouse brain slice (**Fig. 6**) as the testbed to demonstrate the application potential of our setup for imaging thick and highly scattering biological samples. Martin Schnell _et al._ previously demonstrated infrared spectroscopic imaging of biological tissues through a Mirau interference objective. [31] The tissues used in that work were 5-\(\upmu\)m-thin paraffin-embedded-sliced tissues prepared through a complex protocol. In this work, we demonstrate BS-FF-OCT imaging of fresh 150-\(\upmu\)m thick brain slices sectioned by a simple procedure. The BS-FF-OCT setup can image thicker tissues owing to its particular design. The reference arm is fixed in the study by Martin Schnell _et al._[31], since a Mirau objective is adopted to generate the interference signal. In contrast, BS-FF-OCT is based on a time-domain FF-OCT with a separated and tunable reference arm. Thus, in our BS-FF-OCT setup, the coherence plane can be tuned to the deeper layers of the samples. **Fig. 6a** shows the cold widefield reflection images focused at different depths. Due to limited depth-resolving capability, **Fig. 6a** looks similar at all depths. The photothermal widefield reflection images focused at different depths shown in **Fig. S1** also look similar. In comparison, the cold FF-OCT brain tissue images can distinguish myelinated axon structures from different depths (see **Fig. 6b**). Second, an alternative MIR and probe pulse timing configuration is adopted to maximize the detected photothermal signal.
Figure 6: **BS-FF-OCT imaging of myelinated axons in mouse brain tissue.****(a) cold widefield images at different depths.****(b) cold FF-OCT images at different depths.****(c-e) BS-FF-OCT images at 1650 cm\({}^{\text{-1}}\), 1740 cm\({}^{\text{-1}}\), and 1775 cm\({}^{\text{-1}}\).** 1650 cm\({}^{\text{-1}}\) is the amide I band in protein, 1740 cm\({}^{\text{-1}}\) is the C=O band in lipids, and 1775 cm\({}^{\text{-1}}\) is at off-resonance.****(f) 3D reconstruction of cold FF-OCT and BS-FF-OCT images.****(g) BS-FF-OCT spectrum.** The BS-FF-OCT spectrum is extracted from the area in (c\({}_{2}\)) indicated by the green rectangle. BS-FF-OCT images and spectrum are normalized by MIR powers. All images are denoised by the BM4D algorithm. The BS-FF-OCT spectrum is smoothed by a Gaussian-weighted moving average filter. Simulations by Zong _et al._ demonstrate that photothermal cooling time increases with the sample size [36]. The maximum photothermal signal can be obtained when the temperature difference between the "hot" and "cold" states is largest. Therefore, there should be enough time between the probe pulses to differentiate the "hot" and "cold" states. In the pulse timing configuration shown in **Fig. 1b**, the time window between the first probe pulse for the "cold" state and the last probe pulse for the "hot" state is only 50 \(\upmu\)s, which is not enough for the cooling of the 150-\(\upmu\)m-thick brain tissue. Thus, a new timing configuration is added to the setup, as shown in **Fig. S3**. The maximum cooling time in this alternative timing configuration is limited to 10 ms by the camera period time. A MIR-probe delay scan is also performed, as shown in **Fig. S4** and **Fig. S5**. The cooling time constant of the 150-\(\upmu\)m-thick brain tissue is found to be about 1.21 ms, showing that this thick tissue sample indeed requires an alternative timing configuration. Using the optimized MIR-probe delay value shown in **Fig. S5a**, BS-FF-OCT imaging results of myelinated axons at different depths are shown in **Fig. 6c**-**e**. At the 1650 cm-1 amide I and 1740 cm-1 C=O bands, the BS-FF-OCT contrast is strong, whereas the images at the 1775 cm-1 off-resonance wavenumber have very weak contrast. This result reflects the major chemical content of myelinated axons, i.e., protein and lipids. The 3D reconstruction results of the cold FF-OCT and BS-FF-OCT images at 1650 cm-1 and 1775 cm-1 are shown in **Fig. 6f**. To demonstrate the chemical selectivity, hyperspectral BS-FF-OCT imaging was performed. **Fig. 6g** shows the BS-FF-OCT spectrum extracted from the green area shown in **Fig. 6c2**. The spectrum shown in **Fig. 6g** is smoothed to reduce the noise level. The raw spectrum is shown in the supporting information **Fig. S2**. The peak positions (1550 cm-1, 1640 cm-1, 1730 cm-1) shown in the spectrum are consistent with the peak positions for amide II (1550 cm-1), amide I (1650 cm-1), and the C=O band (1740 cm-1) in protein and lipids, respectively. The other peak shown at 1460 cm-1 is altered from the amide II band with the deuterium-oxide-based environment [52] (i.e., the water-based environment is not used due to the MIR absorption of water). The spectrum is consistent with the result in the literature [53] except for the peak at 1460 cm-1. ## 3 Conclusion We present a 3D chemical imaging technology termed bond-selective full-field optical coherence tomography (BS-FF-OCT).
The capability of BS-FF-OCT is demonstrated on polymer samples, including 1-micron PMMA beads and polypropylene fibers, and biological samples, including mouse brain tissue, _C. elegans_, and human bladder cancer cell spheroids. Our BS-FF-OCT setup has demonstrated the ability to image bulky samples as thick as 150 \(\upmu\)m. Furthermore, our setup is capable of imaging highly scattering samples, which is beyond the reach of phase tomography. With BS-FF-OCT, the high-density areas (cytoplasm) and the low-density areas (nucleus) inside a cell spheroid can be resolved. While the current implementation of BS-FF-OCT lacks the capability of resolving sub-cellular details, the resolution can potentially be improved by adopting dynamic FF-OCT [54], which could also reveal more details in the cytoplasm. In summary, we demonstrate a bond-selective OCT technique that enables label-free volumetric spectroscopic imaging at isotropic 1-micron resolution, with potential broad applications in biological imaging. ## 4 Methods ### BS-FF-OCT Setup A schematic of the BS-FF-OCT setup is shown in Fig. 1a. The full-field optical coherence tomography (FF-OCT) subsystem is based on a Michelson interferometer. A broadband light-emitting diode (LED, UHP-T-545-SR, Prizmatix) provides Köhler illumination in both the sample and reference arms. Air objectives (SLMPLN50X, Olympus) are used in both arms. A CMOS camera (BFS-U3-17S7, FLIR) captures the widefield interferometric image. The MIR beam comes from a mid-infrared optical parametric oscillator (Firefly-LW, M Squared Lasers), tunable from 1320 cm-1 to 1775 cm-1. The laser outputs a 20 kHz MIR pulse train. Then, the 20 kHz MIR pulse train is modulated at 50 Hz by an optical chopper system (MC2000B, Thorlabs). The modulated MIR beam is focused by an off-axis parabolic mirror (MPD019-M03, Thorlabs) at the same side of the sample as the LED illuminates. The MIR pulse, LED probe pulse, optical chopper, and camera are synchronized by a pulse generator (9254-TZ50-US, Quantum composers), similar to the widefield MIP microscopy [27]. The reference mirror is installed on a piezo stage (MIPOS 100 SG RMS, Piezosystem Jena) to shift the phase difference between the two arms. Both the reference mirror (with the piezo stage) and the sample are installed on motorized stages (Z825B, Thorlabs) to achieve automated and synchronized coherence and focal plane matching for volumetric image acquisition. ### Automatic multi-depth scanning Coherence plane shifting in FF-OCT is critical to match the objective focal and coherence planes. [7, 55] The coherence plane shift and its correction are shown in **Fig. S6**. When the system is imaging a specific depth of a sample, the coherence plane has to overlap with the focal plane (**Fig. S6a**). Then motor 1 scans the sample to the next depth. The coherence plane shifts and no longer overlaps with the new focal plane (**Fig. S6b**). Then, motor 2 has to move the reference mirror a certain distance to make the coherence plane overlap with the new focal plane (**Fig. S6c**). Software is developed to achieve automatic volumetric data acquisition in BS-FF-OCT.
The software can automatically correct the coherence plane position by linearly shifting the reference mirror position at each depth during the multi-depth scanning, i.e., shifting \(\Delta z/n\) at each depth in an (n+1)-depths multi-depth acquisition, where \(\Delta z\) is the reference mirror shifting distance between the initial depth and the final depth. Manual correction is needed only at the initial depth and the final depth. The coherence plane can be corrected by linearly shifting the reference mirror position because the correction distance of the coherence plane has a linear relation with the sample shifting distance, as shown in the following equation, [7] \[\Delta z_{coherence\;plane}=\;\Delta z_{sample}\;\cdot\;\frac{n_{sample}^{2}- n_{immersion}^{2}}{n_{sample}\cdot n_{immersion}} \tag{1}\] where the \(n_{sample}\) is the refractive index of the sample, and \(n_{immersion}\) is the refractive index of the immersion medium. \(n_{immersion}\) is a constant and \(n_{sample}\) can be treated approximately as a constant for a common sample that usually does not contain large refractive index changes within the data acquisition depth range. ### Theory and image reconstruction The theory of the image reconstruction process at a specific depth of the sample is summarized below. First, the phase difference change induced by the piezo stage is discussed in the "cold" state. Assuming the phase difference between the sample arm and the reference arm when the piezo stage is at its first position is \(\varphi_{cold}\), when the piezo stage position changes \(\Delta z\), the corresponding phase difference change is as follows, \[\Delta\varphi=\frac{2\pi}{\lambda}\cdot\Delta z \tag{2}\] where \(\lambda\) is the illumination wavelength. Since the experimental setup uses broadband LED as the light source, the effective value of \(\lambda\) in equation (2) can't be simply determined. The setting value of \(\Delta z\) is experimentally calibrated to make \(\Delta\varphi=\frac{\pi}{2}\). 
With this phase difference change equal to \(\frac{\pi}{2}\), assuming that \(E_{sample}^{cold}\) is the reflected field magnitude from a specific depth of the sample, that \(I_{incoherent}^{cold}\) is the reflection intensity from the sample depths that are not coherent with the reference mirror, that \(E_{reference}\) is the reflected light field from the reference mirror, and that \(I_{1}^{cold}\) to \(I_{4}^{cold}\) are the intensities of the four raw images captured by the camera with different phase differences, \(I_{1}^{cold}\) to \(I_{4}^{cold}\) can be expressed as follows, \[I_{1}^{cold}=I_{incoherent}^{cold}+E_{sample}^{cold}\cdot E_{reference}\cdot\cos(0+\varphi_{cold}) \tag{3}\] \[I_{2}^{cold}=I_{incoherent}^{cold}+E_{sample}^{cold}\cdot E_{reference}\cdot\cos(\tfrac{\pi}{2}+\varphi_{cold}) \tag{4}\] \[I_{3}^{cold}=I_{incoherent}^{cold}+E_{sample}^{cold}\cdot E_{reference}\cdot\cos(\pi+\varphi_{cold}) \tag{5}\] \[I_{4}^{cold}=I_{incoherent}^{cold}+E_{sample}^{cold}\cdot E_{reference}\cdot\cos(\tfrac{3\pi}{2}+\varphi_{cold}) \tag{6}\] To retrieve \(E_{sample}^{cold}\), we subtract equation (5) from (3) and subtract equation (6) from (4), \[I_{1}^{cold}-I_{3}^{cold}=E_{sample}^{cold}\cdot E_{reference}\cdot 2\cdot\cos(\varphi_{cold}) \tag{7}\] \[I_{2}^{cold}-I_{4}^{cold}=E_{sample}^{cold}\cdot E_{reference}\cdot 2\cdot\left[-\sin(\varphi_{cold})\right] \tag{8}\] In equations (7) and (8), the incoherent intensity term from other depths is canceled, but the phase term still exists. To cancel the phase term, the squares of equations (7) and (8) are summed as follows: \[\left(I_{1}^{cold}-I_{3}^{cold}\right)^{2}+\left(I_{2}^{cold}-I_{4}^{cold}\right)^{2}=\left(E_{sample}^{cold}\right)^{2}\cdot\left(E_{reference}\right)^{2}\cdot 4 \tag{9}\] Since the reflected field from the reference mirror is uniform and can be treated as constant, \(E_{sample}^{cold}\) can be obtained from equation (9). It is noteworthy that only the reflected field magnitude from a specific depth, \(E_{sample}^{cold}\), is used to yield the final photothermal signal, and the phase term, \(\varphi_{cold}\), is canceled in equation (9). Thus, the photothermal effect from other depths, which makes the accumulated phase in the "hot" state become \(\varphi_{hot}\) instead of \(\varphi_{cold}\), does not contribute to the photothermal OCT signal. Similarly, in the "hot" state, the reflected field magnitude \(E_{sample}^{hot}\) can be obtained from the following equation, \[\left(I_{1}^{hot}-I_{3}^{hot}\right)^{2}+\left(I_{2}^{hot}-I_{4}^{hot}\right)^{2}=\left(E_{sample}^{hot}\right)^{2}\cdot\left(E_{reference}\right)^{2}\cdot 4 \tag{10}\] Subtracting equation (10) from (9), the depth-resolved photothermal image at a specific depth of the sample can be obtained, \[\left[\left(E_{sample}^{cold}\right)^{2}-\left(E_{sample}^{hot}\right)^{2}\right]\cdot\left(E_{reference}\right)^{2}\cdot 4=\left(I_{1}^{cold}-I_{3}^{cold}\right)^{2}+\left(I_{2}^{cold}-I_{4}^{cold}\right)^{2}-\left(I_{1}^{hot}-I_{3}^{hot}\right)^{2}-\left(I_{2}^{hot}-I_{4}^{hot}\right)^{2} \tag{11}\] Equation (11) describes how a photothermal image of a specific depth of the sample can be obtained by using 4 cold raw images and 4 hot raw images. ### Sample preparation The preparation process for the poly(methyl methacrylate) (PMMA) beads embedded in agarose gel is as follows. 1 mg agarose powder (Ultrapure Agarose, 16500-500) is measured and blended with 800 \(\mu\)L DI water and 200 \(\mu\)L 1 \(\mu\)m PMMA bead suspension (Phosphorex, MMA1000).
Then the suspension is heated on a 95 \({}^{\circ}\)C hot plate until the agarose powder is melted. One 50 \(\mu\)m thick spacer is put on top of a CaF\({}_{2}\) substrate. Then the CaF\({}_{2}\) substrate with the space and a CaF2 coverslip are preheated to 95 \({}^{\circ}\)C to avoid instant solidification when the hot agar gel suspension contacts with the cold CaF\({}_{2}\) substrate or coverslip. The temperature of the sample suspension and the CaF\({}_{2}\) substrate has to be below 100 \({}^{\circ}\)C to avoid water boiling during sample preparation. 50 \(\upmu\)L hot sample suspension is dropped on the CaF\({}_{2}\) substrate, and then the CaF\({}_{2}\) coverslip is put on top of the CaF\({}_{2}\) substrate to sandwich the sample suspension. Finally, the sample cools down at room temperature and solidifies. The polypropylene fiber mattress sample is made by peeling off the melt-blown fabric layer from a regular surgical mask. Then the polypropylene fiber layer is fixed on a silicon substrate by double-sided tape. The mouse brain tissue, _C. elegans_, and T24 human bladder cancer cell spheroids sample are prepared as follows. First, the fresh mouse brain (Charles River Labs Inc, BIOSPECIMEN - BRAIN - MOUSE) is fixed in 10% formalin and sliced into 150-\(\upmu\)m-thick slices. The wild type _C. elegans_ adults and T24 human bladder cancer cell spheroids are fixed in 10% formalin. Then the samples are washed in D\({}_{2}\)O-based phosphate-buffered saline (PBS) buffer three times. Then, the washed samples are sandwiched between the CaF\({}_{2}\) substrate and the CaF\({}_{2}\) coverslip. Finally, the gap between the substrate and the coverslip is sealed with nail polish. ### Images denoising The BM4D denoising method is by an open-source demo software for BM4D volumetric data denoising (release ver. 3.2, 30 March 2015). [56] The parameter values used are as follows. Noise standard deviation given as the percentage of the maximum intensity of the signal, 11%; noise distribution is Gaussian; BM4D parameter profile, modified profile; enable Wiener filtering; verbose mode; enable sigma estimation. ### FTIR measurement The FTIR spectrum is measured by a commercial FTIR spectroscopy (Nicolet FT-IR with ATR), which is a high-end optical benchtop system with 0.09 cm-1 resolution and continuous dynamic alignment. This unit allows AutoTune and automated continuously variable aperture adjustment. A horizontal attenuated total reflectance (HATR) accessory is also available. ### Spectrum smoothing The Gaussian-weighted moving average filter used in this work is realized by the "smoothdata" function in MATLAB R2021b. "Gaussian" window is chosen. ## 5 Back matter ### Funding This work is supported by R35GM136223 and R33CA261726 to JXC. ### Competing interests The authors declare no competing interests. ### Author contributions C.Y., M.S.U., and J.X.C. proposed the idea of BS-FF-OCT. M.S.U. and J.X.C. supervised the research team. J.X.C. and M.S.U. revised the final version of the manuscript. C.Y. designed and built the visible probing part of the experiment setup, wrote the single-depth cold FF-OCT image acquisition function of the data acquisition software, and performed initial cold FF-OCT imaging experiments. H.Z. designed and built the MIR part of the experiment setup, wrote the photothermal image acquisition, multi-depth scanning, and hyperspectral scanning functions of the data acquisition software, and performed BS-FF-OCT imaging experiment on polypropylene fiber. H.Z. and J.Z. 
optimized the MIR beam optical path and performed BS-FF-OCT imaging experiments on 1 \(\mathrm{\SIUnitSymbolMicro m}\) PMMA beads, cell spheroids, and _C. elegans_. H.Z. processed the data, plotted figures, and wrote the first version of the manuscript. F.K.C. cultured the cell spheroids used in this manuscript. Z.A.W. cultured the _C. elegans_ used in this manuscript. All authors contributed to the final creation of the manuscript.
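As a closing numerical check of the reconstruction algebra in Section 4.3 (equations (3)-(11)), the following minimal sketch uses synthetic values (not data from the paper) to confirm that the incoherent background and the interference phase both drop out, leaving the coherent field magnitudes for the "hot" and "cold" states.

```python
import math

E_ref = 0.2  # reference-arm field magnitude (synthetic value)

def frames(E_s, I_inc, phi):
    # Four phase-shifted raw intensities, per equations (3)-(6)
    return [I_inc + E_s * E_ref * math.cos(k * math.pi / 2 + phi) for k in range(4)]

def magnitude(I):
    # Equation (9): (I1-I3)^2 + (I2-I4)^2 = 4 * E_s^2 * E_ref^2
    S = (I[0] - I[2]) ** 2 + (I[1] - I[3]) ** 2
    return math.sqrt(S) / (2 * E_ref)

I_cold = frames(E_s=0.050, I_inc=1.3, phi=0.7)   # arbitrary background and phase
I_hot = frames(E_s=0.048, I_inc=1.3, phi=0.9)    # MIR heating changes E_s and phi

print(magnitude(I_cold))                     # -> 0.050, background/phase removed
print(magnitude(I_hot))                      # -> 0.048
print(magnitude(I_cold) - magnitude(I_hot))  # photothermal signal, ~0.002
```

Note that the "hot" phase differs from the "cold" phase in this example, and the recovered magnitudes are unaffected, matching the argument that phase accumulated at other depths does not contribute to the photothermal OCT signal.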
Optical coherence tomography (OCT) is a label-free, non-invasive 3D imaging tool widely used in both biological research and clinical diagnosis. Current OCT modalities can only visualize specimen tomography, without chemical information. Here, we report bond-selective full-field OCT (BS-FF-OCT), in which a pulsed mid-infrared laser modulates the OCT signal through the photothermal effect, enabling label-free bond-selective 3D sectioned imaging. BS-FF-OCT is also applicable to highly scattering samples. We first imaged 1 μm PMMA beads embedded in agarose gel. Next, we show 3D hyperspectral imaging of the polypropylene fiber mattress from a standard surgical mask.
2309.08631
Large Language Models Can Infer Psychological Dispositions of Social Media Users
Large Language Models (LLMs) demonstrate increasingly human-like abilities across a wide variety of tasks. In this paper, we investigate whether LLMs like ChatGPT can accurately infer the psychological dispositions of social media users and whether their ability to do so varies across socio-demographic groups. Specifically, we test whether GPT-3.5 and GPT-4 can derive the Big Five personality traits from users' Facebook status updates in a zero-shot learning scenario. Our results show an average correlation of r = .29 (range = [.22, .33]) between LLM-inferred and self-reported trait scores - a level of accuracy that is similar to that of supervised machine learning models specifically trained to infer personality. Our findings also highlight heterogeneity in the accuracy of personality inferences across different age groups and gender categories: predictions were found to be more accurate for women and younger individuals on several traits, suggesting a potential bias stemming from the underlying training data or differences in online self-expression. The ability of LLMs to infer psychological dispositions from user-generated text has the potential to democratize access to cheap and scalable psychometric assessments for both researchers and practitioners. On the one hand, this democratization might facilitate large-scale research of high ecological validity and spark innovation in personalized services. On the other hand, it also raises ethical concerns regarding user privacy and self-determination, highlighting the need for stringent ethical frameworks and regulation.
Heinrich Peters, Sandra Matz
2023-09-13T01:27:48
http://arxiv.org/abs/2309.08631v2
# Large Language Models Can Infer Psychological Dispositions of Social Media Users ###### Abstract As Large Language Models (LLMs) demonstrate increasingly human-like abilities in various natural language processing (NLP) tasks that are bound to become integral to personalized technologies, understanding their capabilities and inherent biases is crucial. Our study investigates the potential of LLMs like ChatGPT to infer psychological dispositions of individuals from their digital footprints. Specifically, we assess the ability of GPT-3.5 and GPT-4 to derive the Big Five personality traits from users' Facebook status updates in a zero-shot learning scenario. Our results show an average correlation of r =.29 (range = [.22,.33]) between LLM-inferred and self-reported trait scores. Furthermore, our findings suggest biases in personality inferences with regard to gender and age: inferred scores demonstrated smaller errors for women and younger individuals on several traits, suggesting a potential systematic bias stemming from the underlying training data or differences in online self-expression. Large language models ChatGPT GPT-4 Personality Big Five ## 1 Introduction Large language models (LLMs) and other transformer-based neural networks have revolutionized text analysis in research and practice. Models such as OpenAI's GPT-4 [1] or Anthropic's Claude [2], for example, have shown a remarkable ability to represent, comprehend, and generate human-like text. Compared to prior NLP approaches, one of the most striking advances of LLMs is their ability to generalize their "knowledge" to novel scenarios, contexts, and tasks [3, 4]. While LLMs were not explicitly designed to capture or mimic elements of human cognition and psychology, recent research suggests that - given their training on extensive corpora of human-generated language - they might have spontaneously developed the capacity to do so. For example, LLMs display properties that are similar to the cognitive abilities and processes observed in humans, including theory of mind (i.e., the ability to understand the mental states of other agents [5]), cognitive biases in decision-making [6] and semantic priming [7]. Similarly, LLMs are able to effectively generate persuasive messages tailored to specific psychological dispositions (e.g., personality traits, moral values [8]). Here, we examine whether LLMs possess another quality that is fundamentally human: The ability to "read" people and form first impressions about their psychological dispositions in the absence of direct or prior interaction. As research under the umbrella of zero-acquaintance studies shows, people are remarkably accurate at judging the psychological traits of strangers simply by observing traces of their behavior [9]. For example, people can accurately predict a stranger's personality traits by snooping through their offices or bedrooms [10], examining their music preferences [11], or scrolling through their social media profiles [12]. Existing research in computational social science shows that supervised machine learning models are able to make similar predictions. That is, given a large enough dataset including both self-reported personality traits and people's digital footprints - such as Facebook Likes, music playlists, or browsing histories - machine learning models are able to statistically relate both inputs in a way that allows them to predict personality traits after observing a person's digital footprints [13; 14]. 
This is also true for various forms of text data, including social media posts [15; 16], personal blogs [17], or short text responses collected in the context of job applications [18]. In this paper, we test whether LLMs have the ability to make similar psychological inferences without having been explicitly trained to do so (known as zero-shot learning [3]). Specifically, we use Open AI's ChatGPT (GPT-3.5 and GPT-4 [1]) to explore whether LLMs can accurately infer the Big Five personality traits Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism [19] of social media users from the content of their Facebook status updates in a zero-shot scenario. In addition, we test for biases in ChatGPT's judgments that might arise from its foundation in equally biased human-generated data. Building on previous work highlighting inherent stereotypes in pre-trained NLP models [20; 21], we explore the extent to which the personality inferences made by ChatGPT are indicative of gender and age-related biases (e.g., potential biases in how the personality of men and women or older and younger people is judged). ## 2 Method ### Data and Sampling Our analyses are based on text data obtained from MyPersonality [22], a Facebook application that allowed users to take real psychometric tests - including a validated measure of the Big Five personality traits (IPIP [23]) - and receive immediate feedback on their responses. Users also had the opportunity to donate their Facebook profile information - including their public profiles, Facebook Likes, and status updates - to research. For the purpose of this study, we randomly subsampled 1,000 adult users (24.2 \(\pm\) 8.8 years old, 63.1% female) who completed the full 100-item IPIP personality questionnaire and had at least 200 Facebook status updates (if they had more, we used the most recent 200). The study received IRB approval from Columbia University's ethics review board (Protocol #AAAU8559). ### Measures MyPersonality measured users' personality traits using the International Personality Item Pool (IPIP [23]), a widely established self-report questionnaire that captures the Big Five personality traits of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism [19]. We only included users who had completed the full questionnaire with all 100 items. To obtain inferred personality traits from ChatGPT, we used the last 200 Facebook status updates generated by each user without additional preprocessing. The average length of status updates in our sample was 17.10 words (SD=15.03). Status updates were scored using the ChatGPT API with GPT-3.5 (version gpt-3.5-turbo-0301) and GPT-4 (version gpt-4-0314) [1] as underlying models. For this purpose, the status updates were first concatenated into chunks and then fed into the GPT model, using a set of simple prompts to guide the behavior of the model. The system prompt was the default for GPT-3.5 and GPT-4, respectively: "You are a helpful assistant". Additionally, we prompted the model to infer Big Five traits using the inference prompt: "Rate the text on the Big Five personality dimensions. Pay attention to how people's personalities might be reflected in the content they post online. Provide your response on a scale from 1 to 5 for the traits Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Provide only the numbers." We then used a simple text-parsing script to transform the responses into numerical scores. 
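For concreteness, here is a minimal sketch of this scoring call. It is not the authors' actual script: it uses the legacy pre-1.0 `openai` Python client, the function name and parsing are simplified stand-ins, and only the model names and prompt text are taken from the description above. The chunking and averaging described next would wrap this function.

```python
import re
import openai

INFERENCE_PROMPT = (
    "Rate the text on the Big Five personality dimensions. Pay attention to "
    "how people's personalities might be reflected in the content they post "
    "online. Provide your response on a scale from 1 to 5 for the traits "
    "Openness, Conscientiousness, Extraversion, Agreeableness, and "
    "Neuroticism. Provide only the numbers."
)

def score_chunk(status_updates, model="gpt-3.5-turbo-0301"):
    """Return five 1-5 trait scores (O, C, E, A, N) for one chunk of updates."""
    text = "\n".join(status_updates)
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": f"{INFERENCE_PROMPT}\n\n{text}"},
        ],
    )
    reply = response["choices"][0]["message"]["content"]
    numbers = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", reply)]
    return numbers[:5]
```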
In order to avoid exceeding the GPT token limit, status update histories were processed in chunks of 20 messages, and the inferred personality scores were then averaged to derive overall scores. To boost the reliability of the inferred personality estimates, we queried ChatGPT three times for each inference. Agreement across rating rounds was high for all traits (Openness: r\({}_{GPT3.5}\)=.88, r\({}_{GPT4}\)=.73; Conscientiousness: r\({}_{GPT3.5}\)=.88, r\({}_{GPT4}\)=.91; Extraversion: r\({}_{GPT3.5}\)=.92, r\({}_{GPT4}\)=.87; Agreeableness: r\({}_{GPT3.5}\)=.96, r\({}_{GPT4}\)=.94; Neuroticism: r\({}_{GPT3.5}\)=.91, r\({}_{GPT4}\)=.93), and all p-values were smaller than .001 with Bonferroni correction for multiple comparisons. Given the high level of agreement, we computed aggregate inferred scores by averaging scores across the three rounds of rating. We used the aggregate scores for all further analyses. ## 3 Results ### Can LLMs Infer Personality Traits From Social Media Posts? In order to assess the capacity of LLMs to infer psychological traits from social media data, we compared the inferred Big Five personality scores with self-reported scores. A comparison of the distributions suggests that both versions of ChatGPT tended to underestimate Conscientiousness and Agreeableness while overestimating Neuroticism. For Openness and Extraversion, the deviations were inconsistent across ChatGPT versions: While GPT-3.5 tended to underestimate Openness and Extraversion, GPT-4 tended to overestimate Extraversion. Overall, the distributions of inferred scores were more closely aligned with self-reported scores for GPT-4 compared to GPT-3.5, suggesting a potential improvement across versions (see Figure 1). Detailed descriptive statistics can be found in SI A. Importantly, the mere comparison of distributions does not provide insights into the strength and directionality of the relationships between inferred and self-reported scores. For this purpose, we conducted correlation analyses. The average Pearson correlation coefficient of inferred and self-reported scores across all personality traits was \(\text{r}_{GPT3.5}\)=.27 and \(\text{r}_{GPT4}\)=.31. The correlations were highest for the traits of Openness (\(\text{r}_{GPT3.5}\)=.28; \(\text{r}_{GPT4}\)=.33), Extraversion (\(\text{r}_{GPT3.5}\)=.29; \(\text{r}_{GPT4}\)=.32) and Agreeableness (\(\text{r}_{GPT3.5}\)=.30, \(\text{r}_{GPT4}\)=.32), and were slightly lower for Conscientiousness (\(\text{r}_{GPT3.5}\)=.22; \(\text{r}_{GPT4}\)=.26) and Neuroticism (\(\text{r}_{GPT3.5}\)=.26; \(\text{r}_{GPT4}\)=.29). All correlation coefficients were significantly different from 0 at p <.001 with Bonferroni correction for multiple comparisons. Similar to the comparison of distributions, GPT-4 showed higher levels of accuracy across all five personality traits, although none of the individual comparisons reached statistical significance (see Figure 2). Detailed results, including confidence intervals and significance levels, can be found in SI B. In addition to exploring the capacity of ChatGPT to infer personality traits from social media user data, we also tested the extent to which this capacity is sensitive to changes in the amount of data that was available for inference. Specifically, we computed correlations between self-reported and inferred personality scores based on different numbers of status messages. 
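The accuracy analyses above and below reduce to per-trait Pearson correlations with Bonferroni-corrected significance tests. A rough sketch of that computation, assuming `numpy` and `scipy` (the function name and the synthetic data are illustrative, not the study's code):

```python
import numpy as np
from scipy import stats

def trait_correlations(self_reported, inferred, traits, n_tests=5):
    """Pearson r between self-reported and inferred scores per trait.

    self_reported, inferred: arrays of shape (n_users, 5), one column per trait.
    p-values are Bonferroni-corrected by multiplying by the number of tests.
    """
    results = {}
    for t, trait in enumerate(traits):
        r, p = stats.pearsonr(self_reported[:, t], inferred[:, t])
        results[trait] = (r, min(p * n_tests, 1.0))
    return results

# Example with synthetic data (1,000 users, 5 traits, true r around .29):
rng = np.random.default_rng(0)
truth = rng.normal(3.0, 0.7, size=(1000, 5))
inferred = 0.3 * truth + rng.normal(0.0, 0.7, size=(1000, 5))
print(trait_correlations(truth, inferred, ["O", "C", "E", "A", "N"]))
```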
In particular, we computed correlations obtained from inferences for a single chunk of status messages (20 status messages) all the way up to ten chunks (200 status messages). As expected, having access to more status messages resulted in more accurate inferences. Notably, however, most correlations were close to their maximum level after observing far fewer than the full 200 status messages. In addition, the inference of certain traits seems to be particularly susceptible to the volume of input data. For example, the models' accuracy kept increasing with higher levels of input volume for Openness, Extraversion, Agreeableness, and Neuroticism, while the benefits of additional status messages leveled off earlier for Conscientiousness. See Figure 2 for a graphical representation and SI C for detailed statistics. ### Does the Quality of LLM Inferences Vary Across Demographic Groups? In order to uncover potential gender and age-related biases, we analyzed group differences in inferred Big Five scores, as well as their residuals with respect to self-reported scores. Notably, such gender and age differences might not only emerge in inferred personality scores but are also known to exist in self-reports [24; 25]. Consequently, we test for both overall group differences and differences in the residuals between the self-reported and inferred personality scores of each individual. Figure 1: Distributions of self-reported and inferred personality scores for GPT-3.5 and GPT-4. Histograms show absolute frequencies for an overall sample size of n=1000. #### 3.2.1 Gender Differences We first explored the extent to which any observed group differences in inferred personality traits across men and women aligned with those observed in self-reports. As Figure 3 shows, women tend to score significantly higher in Agreeableness (t=2.31; p=.021) and Neuroticism (t=6.53; p<.001) when these traits are measured using questionnaires. In contrast, women scored significantly higher in Openness (t\({}_{GPT3.5}\)= 3.42, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= 2.72, p\({}_{GPT4}\)=.007), Conscientiousness (t\({}_{GPT3.5}\)= 5.28, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= 5.73, p\({}_{GPT4}\)<.001), Extraversion (t\({}_{GPT3.5}\)= 5.21, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= 7.25, p\({}_{GPT4}\)<.001), and Agreeableness (t\({}_{GPT3.5}\)= 13.53, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= 13.63, p\({}_{GPT4}\)<.001) when these traits were inferred by ChatGPT models, with no significant differences found for Neuroticism. This finding offers initial evidence for potential gender biases in the personality inferences made by LLMs (see Figure 3). To further explore these potential biases, we analyzed the residuals between inferred scores and self-reported scores as an indication of how well GPT is able to represent the personality traits of male and female users (see Table 1). The findings suggest that GPT's personality inferences are less accurate for men than women. First, we observed larger absolute residuals for male users in Conscientiousness (t\({}_{GPT3.5}\)= -3.53, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= -4.48, p\({}_{GPT4}\)<.001), Agreeableness (t\({}_{GPT3.5}\)= -9.22, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= -5.22, p\({}_{GPT4}\)<.001), and Neuroticism (t\({}_{GPT3.5}\)= -4.55, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= -2.39, p\({}_{GPT4}\)=.017) across both GPT models, indicating lower accuracy on these traits for men. 
Additionally, we found larger residuals for male users for GPT-3.5 in Openness (t\({}_{GPT3.5}\)= -3.84, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= -0.92, p\({}_{GPT4}\)=.357) and larger residuals for female users in Extraversion for GPT-4 (t\({}_{GPT3.5}\)= -1.36, p\({}_{GPT3.5}\)=.173; t\({}_{GPT4}\)= 3.12, p\({}_{GPT4}\)=.002). For a visual representation, please refer to Figure 4. Taken together, the findings suggest that GPT's personality inferences are less accurate for men than women. Notably, however, these biases seem to be limited to the absolute measures of accuracy and do not necessarily translate to GPT's ability to make inferences about men's relative personality levels. That is, when computing Pearson correlations within gender groups, we did not observe any significant difference in the magnitude of these correlations. Similarly, controlling for gender in the overall correlations between self-reported and inferred personality scores by z-standardizing inferred scores within each gender group did not yield correlations significantly different from those obtained before. #### 3.2.2 Age Differences As for gender, we first explored the extent to which any observed group differences in inferred personality traits across younger and older adults (classified using a median split) were aligned with those observed in self-reports. As Figure 3 shows, older users displayed significantly higher self-reported scores in Openness (t=2.96; p=.003) and Conscientiousness (t=7.27; p<.001) and significantly lower self-reported scores in Neuroticism (t=-3.28; p=.001) compared to younger users. Partially mimicking these differences in self-reported personality traits, inferred scores were significantly higher in Conscientiousness (t\({}_{GPT3.5}\)= 9.23, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= 10.41, p\({}_{GPT4}\)<.001) and Agreeableness (t\({}_{GPT3.5}\)= 4.87, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= 4.39, p\({}_{GPT4}\)<.001), and lower in Neuroticism (t\({}_{GPT3.5}\)= -3.43, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= -4.37, p\({}_{GPT4}\)<.001) for older compared to younger users. For Openness (t\({}_{GPT3.5}\)= -2.86, p\({}_{GPT3.5}\)=.004; t\({}_{GPT4}\)= -0.72, p\({}_{GPT4}\)=.472) and Extraversion (t\({}_{GPT3.5}\)= -3.55, p\({}_{GPT3.5}\)<.001; t\({}_{GPT4}\)= 0.36, p\({}_{GPT4}\)=.717), older individuals scored significantly lower on inferred scores for GPT-3.5 but not GPT-4 (see Figure 3). Figure 2: Pearson’s correlation coefficients between inferred and self-reported scores with 95% confidence intervals (left), and Pearson’s correlation coefficients for GPT-3.5 (mid) and GPT-4 (right) as a function of message volume. O: Openness; C: Conscientiousness; E: Extraversion; A: Agreeableness; N: Neuroticism. As before, we further explore these differences by analyzing age differences in the residuals between self-reported and inferred scores. Unlike in the analyses of gender, we found substantial inconsistency in the group differences between GPT-3.5 and GPT-4. While the inferences made by GPT-3.5 showed significantly larger absolute residuals for older users in Openness (t\({}_{GPT3.5}\)= 4.78, p\({}_{GPT3.5}\)<.001), Conscientiousness (t\({}_{GPT3.5}\)= 2.64, p\({}_{GPT3.5}\)=.008) and smaller residuals for Agreeableness (t\({}_{GPT3.5}\)= -2.64, p\({}_{GPT3.5}\)=.009), no differences in absolute residuals were found for GPT-4. For a visual representation, please refer to Figure 4. Taken together, the findings suggest that ChatGPT's personality inferences might be less accurate for older adults. 
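The group comparisons in this section boil down to independent-samples t-tests on absolute residuals between inferred and self-reported scores. A minimal sketch, assuming `numpy` and `scipy` (`residual_bias_test` is an illustrative name, not the study's code):

```python
import numpy as np
from scipy import stats

def residual_bias_test(self_reported, inferred, group_mask):
    """Compare absolute residuals |inferred - self_reported| between two groups.

    group_mask: boolean array of length n_users, True for one group
    (e.g., female users). Returns one (t, p) pair per trait column.
    """
    abs_resid = np.abs(inferred - self_reported)
    results = []
    for t in range(abs_resid.shape[1]):
        t_stat, p = stats.ttest_ind(abs_resid[group_mask, t],
                                    abs_resid[~group_mask, t])
        results.append((t_stat, p))
    return results
```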
However, as before, these biases did not translate to ChatGPT's ability to make inferences about people's relative personality levels. We did not find significant differences between within-group correlation coefficients, and z-standardizing personality scores within age groups did not yield correlation coefficients significantly different from those reported before. ## 4 Discussion Our findings suggest that LLMs, such as ChatGPT, can infer psychological dispositions from people's social media posts without having been explicitly trained to do so. They also offer preliminary evidence for the fact that LLMs might generate more accurate inferences for women and younger individuals (compared to men and older adults). Notably, the overall accuracy of the observed inferences (Pearson correlations between self-reported and inferred personality traits ranging between r =.22 and r =.33, average r =.29) is slightly lower than that accomplished by supervised models which have been trained or fine-tuned specifically for this purpose and with the same textual data source as used in testing (e.g., Park et al. [16], who reported correlations between r =.26 and r =.41, average r =.37). Yet, the ability of LLMs to produce inferences of reasonably high accuracy in zero-shot learning scenarios has both important theoretical and practical implications. ### Implications for Theory and Practice On the one hand, our findings contribute to a growing body of research comparing the abilities of LLMs to those observed in humans [5; 7; 8]. As our findings suggest, LLMs might have the human-like ability to "profile" people based on their behavioral traces, without ever having had direct interactions with them. Although most social media posts do not contain explicit references to a person's character, ChatGPT - just like human judges - is able to translate people's accounts of their daily activities and preferences into a holistic picture of their psychological dispositions. Notably, the specific pathways by which LLMs such as ChatGPT arrive at their judgments, and the reasons for why certain biases are introduced into the predictions (e.g., systematic gender and age differences) remain unknown. That is, we cannot speak to the question of whether LLMs use the same behavioral cues as humans or supervised machine learning models when translating behavioral residues into psychological profiles, or offer an in-depth explanation for the observed differences in accuracies across different age and gender categories. Figure 3: Mean differences in personality scores between gender groups (left) and age groups (right) for self-reported scores as well as inferences by GPT-3.5 and GPT-4. Positive values indicate higher scores for female users compared to male users and older users compared to younger users. O: Openness; C: Conscientiousness; E: Extraversion; A: Agreeableness; N: Neuroticism. ***p<.001; **p<.01; *p<.05. For example, the fact that ChatGPT shows systematic biases in its estimation of certain personality traits and is more accurate for women and younger adults could be either indicative of a bias introduced in the training of the models and/or the corpora of text data the models have been trained on, or be reflective of differences in people's general self-expression on social media (e.g., women might be generally more revealing of their identities online). On the other hand, our findings also have important practical implications for the application of automated psychological profiling in research and industry. 
Specifically, the ability of LLMs to accurately infer psychological traits from social media data could foreshadow a remarkable shift in the accessibility - and therefore potential use - of scalable psychometric assessments. For decades, the assessment of psychological traits relied on the use of self-report questionnaires which are known to be prone to self-report biases and difficult to scale due to their costly and time-consuming nature [26]. With the introduction of automated psychological assessments driven by supervised machine learning models [13, 18], scientists and practitioners were afforded an alternative approach that promised to expand the study and application of individual differences to research questions and domains that were previously impractical if not impossible (e.g., the use of personality traits in targeted advertising [27]; or the investigation of individual differences in large scale, ecologically valid observational studies [28]). However, the widespread application of such automated personality predictions from digital footprints among scientists and practitioners was hindered by the need to collect large amounts of self-report surveys in combination with textual data (see e.g., the myPersonality dataset [22]) to train and validate the predictive models. With the ability to make similar inferences with models that are available to the broader public, LLMs could democratize access to cheap and scalable psychometric assessments. While this democratization holds remarkable opportunities for scientific discovery and personalized services, it also introduces considerable ethical challenges. For example, the ability to predict people's intimate psychological needs and preferences without their knowledge or consent poses a considerable threat to people's privacy and self-determination [29]. As the case of Cambridge Analytica [30] alongside a growing body of research on personalized persuasion and psychological targeting [27, 31, 32] has highlighted, insights into people's psychological dispositions can easily be weaponized to sway opinions and change behavior. Consequently, it might be necessary to introduce guardrails into systems like LLMs that prevent actors from obtaining psychological profiles of thousands or millions of users. ### Limitations and Future Research Our findings have several limitations that should be addressed by future research. First, the text data used in our analysis was obtained from the MyPersonality Facebook application [22], which was active between 2007 and 2012. Linguistic conventions from this period might differ from contemporary online language, potentially limiting the zero-shot performance of LLMs, which have been trained on newer data. That is, we would expect the personality inferences of LLMs to be even more accurate when applied to more contemporary data. Second, our sample was sourced from Facebook users who interacted with the MyPersonality application. As such, our sample might not be representative of the broader population of social media users (or people more generally), which could limit the external validity of our findings. For example, the general underestimation of personality traits such as Openness might be due to the fact that myPersonality users were particularly curious and open-minded. Figure 4: Mean differences in absolute residuals between gender groups (left) and age groups (right) for inferences by GPT-3.5 and GPT-4. Positive values indicate higher residuals for female users compared to male users and older users compared to younger users. O: Openness; C: Conscientiousness; E: Extraversion; A: Agreeableness; N: Neuroticism. ***p<.001; **p<.01; *p<.05. Third, while our study probed the sensitivity of ChatGPT's predictions to the volume of text input, we limited our data to the 200 most recent status updates. In practice, predictive performance might vary for users with fewer or more status updates. Relatedly, due to the inherent token limit in models like ChatGPT, all input data was processed in chunks. It is possible that the accuracy of future models with the ability to process larger amounts of input data at once might be higher. Finally, our study did not encompass the dynamics of live interactions between LLMs and users. Real-time interactions might yield different insights and highlight additional complexities not captured in our static data set. Relatedly, while our research underscores the potential for LLMs in personalizing interactions and enhancing social computing, it does not delve into the specifics of how these personalizations can be effectively implemented. ### Conclusion Taken together, our research demonstrates the capacity of LLMs to derive psychological profiles from social media data, even without specific training. This zero-shot capability underscores the remarkable advancement LLMs represent in the domain of text analysis. While this "intuitive" understanding mirrors distinctly human abilities, the mechanisms and inherent biases associated with LLM-based personality judgments remain elusive and warrant further research. From a practical perspective, the potential of LLMs to effectively infer psychological traits from digital footprints presents a shift in psychometric evaluations, paving the way for large-scale AI-driven assessments. The prospect of democratized, scalable psychometric tools will enable breakthroughs in personalized services and large-scale research. Nevertheless, these advancements bring forth ethical challenges. The potential for non-consensual psychological predictions and other misuses highlights the necessity for stringent ethical frameworks. ## Acknowledgments We thank the Digital Future Initiative and Columbia Business School for their generous support. We thank Michal Kosinski for fruitful conversations and advice. ## Author Contributions H.P.: Conceptualization, Methodology, Software, Formal analysis, Investigation, Writing - Original Draft, Visualization; S.C.M.: Conceptualization, Methodology, Writing - Original Draft, Visualization.
Large language models (LLMs) exhibit human-like abilities across a wide range of tasks. In this paper, we test whether LLMs such as ChatGPT can accurately infer the psychological dispositions of social media users, and whether this ability differs across socio-demographic groups. Specifically, we test whether GPT-3.5 and GPT-4 can derive users' Big Five personality traits from their Facebook posts. The results show an average correlation of r = .29 (range = [.22, .33]) between LLM-inferred and self-reported personality scores, comparable to the accuracy of machine learning models trained specifically to predict personality. The results also suggest that the accuracy of LLM personality inferences differs across age groups and gender categories: for several traits, inferred scores showed smaller errors for women and younger individuals.
2309.03764
$L_{2,1}$-Norm Regularized Quaternion Matrix Completion Using Sparse Representation and Quaternion QR Decomposition
Color image completion is a challenging problem in computer vision, but recent research has shown that quaternion representations of color images perform well in many areas. These representations consider the entire color image and effectively utilize coupling information between the three color channels. Consequently, low-rank quaternion matrix completion (LRQMC) algorithms have gained significant attention. We propose a method based on quaternion Qatar Riyal decomposition (QQR) and quaternion $L_{2,1}$-norm called QLNM-QQR. This new approach reduces computational complexity by avoiding the need to calculate the QSVD of large quaternion matrices. We also present two improvements to the QLNM-QQR method: an enhanced version called IRQLNM-QQR that uses iteratively reweighted quaternion $L_{2,1}$-norm minimization and a method called QLNM-QQR-SR that integrates sparse regularization. Our experiments on natural color images and color medical images show that IRQLNM-QQR outperforms QLNM-QQR and that the proposed QLNM-QQR-SR method is superior to several state-of-the-art methods.
Juan Han, Kit Ian Kou, Jifei Miao, Lizhi Liu, Haojiang Li
2023-09-07T15:08:12
http://arxiv.org/abs/2309.03764v1
\(L_{2,1}\)-Norm Regularized Quaternion Matrix Completion Using Sparse Representation and Quaternion QR Decomposition ###### Abstract Color image completion is a challenging problem in computer vision, but recent research has shown that quaternion representations of color images perform well in many areas. These representations consider the entire color image and effectively utilize coupling information between the three color channels. Consequently, low-rank quaternion matrix completion (LRQMC) algorithms have gained significant attention. We propose a method based on quaternion Qatar Riyal decomposition (QQR) and quaternion \(L_{2,1}\)-norm called QLNM-QQR. This new approach reduces computational complexity by avoiding the need to calculate the QSVD of large quaternion matrices. We also present two improvements to the QLNM-QQR method: an enhanced version called IRQLNM-QQR that uses iteratively reweighted quaternion \(L_{2,1}\)-norm minimization and a method called QLNM-QQR-SR that integrates sparse regularization. Our experiments on natural color images and color medical images show that IRQLNM-QQR outperforms QLNM-QQR and that the proposed QLNM-QQR-SR method is superior to several state-of-the-art methods. keywords: Quaternion matrix completion, iteratively reweighted quaternion \(L_{2,1}\)-norm, quaternion Qatar Riyal (QR) decomposition, low rank, sparse regularization ## 1 Introduction Image completion, which endeavors to restore missing pixel values in an image from limited available data, has a broad spectrum of applications in computer vision and has garnered considerable research attention in recent years [1; 2; 3; 4]. Of all the techniques for completing images, those based on low-rank matrix completion (LRMC) have attained significant success. The majority of LRMC-based models fall into the category of matrix rank minimization: they exploit the low-rank property of matrices to formulate a rank minimization problem, which is usually converted into a constrained rank optimization problem via the rank function. Despite its effectiveness, the rank function poses a challenge due to its discontinuity and non-convexity, rendering the problem NP-hard. However, the nuclear norm has been demonstrated to be the tightest convex relaxation of the rank minimization problem [5]. Consequently, the nuclear norm, serving as a convex surrogate of the rank function, has been extensively employed to tackle image completion predicaments. Even so, the prevailing nuclear norm (NN) minimization methods, as noted by the authors in [6], tend to minimize all singular values simultaneously, leading to inadequate rank approximations. To remedy this, they proposed the truncated nuclear norm (TNN) regularization, which furnishes a more precise approximation of the rank function than the nuclear norm. Other comparable alternatives to TNN include the weighted nuclear norm (WNNM) [7; 2], the weighted Schatten \(p\)-norm [8], and the log-determinant penalty [9]. These methods also focus on optimizing the rank approximation, which entails processing the complete singular value decomposition (SVD). However, this can be computationally expensive and presents limitations for high-dimensional or big data applications. 
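To make the computational bottleneck concrete: the standard proximal step behind NN minimization is singular value thresholding (SVT), which requires a full SVD of the matrix at every iteration. A minimal sketch, assuming `numpy` (the function name is ours):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*.

    Requires a full SVD, costing roughly O(M*N^2) for an M x N matrix --
    exactly the expense that motivates the QR-based methods discussed below.
    """
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh
```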
When it comes to matrix factorization, the general approach is to break down the original, larger matrix into at least two smaller matrices, often resulting in quicker numerical optimization [10; 11]. However, this factorization method can become enmeshed in local minima. Besides, in the processing of color images using the aforementioned LRMC-based models, the three RGB color channels are typically processed separately and then combined to obtain the final restoration outcome. However, this approach can result in a loss of coupling information between the three channels due to the disregard of the inter-channel relationships. Models that can more effectively use the connection between the three channels are thus worth researching. In recent years, the use of quaternion representation for color images has received significant attention from researchers. Various studies have demonstrated that quaternions can effectively describe color images, and approaches built upon quaternion representation have proven competitive in addressing a range of image processing issues, such as foreground/background separation of color videos [12], color image denoising [13], edge detection [14], face recognition [15], and completion [4]. Specifically, the three color channels of a color image precisely correspond to the three imaginary parts of a quaternion matrix. We can express a pixel of a color image as a pure quaternion in the following manner: \[\dot{p}=0+p_{R}\,i+p_{G}\,j+p_{B}\,k,\] where \(p_{R}\), \(p_{G}\), and \(p_{B}\) correspond to the pixel values of the three channels (RGB) of a color pixel, respectively, and \(i\), \(j\), and \(k\) stand for the three imaginary units of a quaternion. Utilizing the pure quaternion matrix to represent the three color channels of a color image can better exploit the relationship between the channels in image processing. More recently, approaches that utilize quaternion-based LRMC for color image completion have been proposed. A general approach is proposed in [16] for low-rank quaternion matrix completion (LRQMC) that employs the quaternion nuclear norm (QNN) and three nonconvex rank surrogates. These surrogates rely on the Laplace function, Geman function, and weighted Schatten norm. These methods have demonstrated superior performance compared to some popular LRMC-based methods. However, they still require solving the singular values of a large quaternion matrix, which can significantly increase the algorithm's computational complexity. Additionally, a logarithmic norm-based quaternion completion algorithm is proposed [17]. [18] proposed three minimization models based on quaternion-based bilinear decomposition. These approaches only require dealing with smaller-sized two-factor quaternion matrices, which reduces the computational complexity of solving quaternion singular value decomposition (QSVD). However, this factorization could result in getting trapped in a local minimum. Indeed, the singular values and vectors can be obtained using the Qatar Riyal (QR) decomposition, which has lower algorithmic complexity than SVD [19]. Similarly, a method based on the quaternion QR (QQR) decomposition to approximate the QSVD of quaternion matrices (CQSVD-QQR) is also proposed [20]. Additionally, the \(L_{2,1}\)-norm has recently shown success in various applications such as feature selection [21] and low-rank representation [22; 23; 19]. The study in [24] shows that the \(L_{2,1}\)-norm provides better robustness against outliers. 
In low-rank representation, the \(L_{2,1}\)-norm can be used to remove outliers by solving a minimization problem, which does not require the use of SVD to obtain the optimal solution [19]. However, currently, there is no quaternion \(L_{2,1}\)-norm-based quaternion matrix completion method available. This inspired us to establish a quaternion-based color image completion model by introducing the quaternion \(L_{2,1}\)-norm based on the quaternion Tri-Factorization method (CQSVD-QQR), so that the calculation of QSVD is not required during the model solving process. In order to improve image completion results, it is essential to consider more information beyond just the low-rank qualities of images [25; 19]. One crucial factor is the sparsity of images in a particular domain, such as transform domains, where many signals have a naturally sparse structure. To address this, researchers have proposed combining low-rank and sparse priors. In [26], to formulate the sparse prior, the authors use the \(l_{1}\)-norm regularizer, while the truncated nuclear norm is selected as the surrogate for the rank function. To extend this method to the quaternion system, the authors in [27] adapted it to work with quaternion matrices. Although combining low-rank and sparse priors has proven effective for improving image completion, these methods still depend on the SVD/QSVD calculation of large matrices, making them computationally expensive. We were inspired to improve the quaternion Tri-Factorization method (CQSVD-QQR)-based quaternion \(L_{2,1}\)-norm minimization model by introducing sparse regularization, thereby enhancing the precision of the image completion results. The following points summarize the key contributions of this paper: * A novel approach called QLNM-QQR, based on the quaternion \(L_{2,1}\)-norm and CQSVD-QQR, is introduced for completing quaternion data. The computational complexity of QLNM-QQR is reduced compared to methods that use QSVD by replacing QSVD with QQR and the quaternion nuclear norm with the quaternion \(L_{2,1}\)-norm. * An improved method for quaternion matrix completion called IRQLNM-QQR is proposed by introducing iteratively reweighted quaternion \(L_{2,1}\)-norm minimization. The proposed method enhances the accuracy of QLNM-QQR. Theoretical analysis demonstrates that IRQLNM-QQR achieves the same optimal solution as an LRQA-based method using the weighted Schatten function as the nonconvex rank surrogate, which outperforms the traditional QQR decomposition-based methods in terms of accuracy. * To enhance the precision of color image recovery, we have integrated sparsity into the QLNM-QQR method, resulting in an improved method called QLNM-QQR-SR. * We have proved that the quaternion \(L_{2,1}\)-norm of a quaternion matrix serves as an upper bound for its quaternion nuclear norm. As a result, the methods proposed in this study can be extended to enhance the performance of low-rank representation and quaternion matrix completion methods based on the quaternion nuclear norm. The structure of this paper is outlined as follows: Section 2 reviews the related models for completing matrices/quaternion matrices based on the low-rank property of data. Section 3 presents commonly used mathematical notations and provides a brief introduction to quaternion algebra. Section 4 gives a brief overview of the CQSVD-QQR technique, followed by the introduction of three proposed quaternion matrix-based completion algorithms. 
The computational complexities of the proposed models are also discussed. In Section 5, we provide experimental results and compare our proposed methods with several state-of-the-art approaches. Finally, Section 6 presents our conclusions. ## 2 Related work A natural image has recurring patterns that may be utilized to estimate the missing values, which is the foundation for almost all of the existing image completion techniques [28; 29]. Consider an incomplete matrix \(\mathbf{M}\in\mathbb{R}^{M\times N},\ M\geq N>0\). The following is the formulation of the conventional LRMC-based technique: \[\min_{\mathbf{X}}\text{rank}(\mathbf{X}),\ \text{s.t.,}\ P_{\Omega}(\mathbf{X}-\mathbf{M})=\mathbf{0}, \tag{1}\] where \(\text{rank}(\cdot)\) signifies the rank function, \(\mathbf{X}\in\mathbb{R}^{M\times N}\) represents a restored matrix, and \(\Omega\) represents the set of observed entries. The definition of \(P_{\Omega}(\mathbf{X})\) is given by \[(P_{\Omega}(\mathbf{X}))_{mn}=\begin{cases}\mathbf{X}_{mn},\ (m,n)\in\Omega,\\ 0,\ \ \ \ \ \text{otherwise}.\end{cases}\] An effective method for solving the combinatorial optimization problem (1) is to optimize a convex substitute for the rank function. However, it should be noted that problem (1)-centric optimization algorithms mainly deal directly with two-dimensional data sets, specifically grayscale images. The processing of color images generally requires the RGB channels to be separated before the model in problem (1) can operate on them, while the LRQMC-based models can directly work with the assembled RGB channels. Analogous to problem (1) mentioned above, the prevalent low-rank quaternion matrix-based completion algorithm (LRQMC) can be expressed as follows: \[\min_{\dot{\mathbf{X}}}\text{rank}(\dot{\mathbf{X}}),\ \text{s.t.},\ P_{\Omega}(\dot{\mathbf{X}}-\dot{\mathbf{M}})=\mathbf{0}, \tag{2}\] where \(\dot{\mathbf{X}}\in\mathbb{H}^{M\times N}\) denotes the resulting quaternion matrix that has been completed, \(\dot{\mathbf{M}}\in\mathbb{H}^{M\times N}\) is used to signify the observed quaternion matrix. The definition of \(P_{\Omega}(\dot{\mathbf{X}})\) is given by \[(P_{\Omega}(\dot{\mathbf{X}}))_{mn}=\begin{cases}\dot{\mathbf{X}}_{mn},\ (m,n)\in\Omega,\\ 0,\ \ \ \ \ \text{otherwise}.\end{cases}\] A commonly used technique for resolving the above minimization problem (2) is to adopt the QNN as a convex surrogate for the rank function. Subsequently, a minimization method can be formulated for this surrogate function as \[\min_{\dot{\mathbf{X}}}\|\dot{\mathbf{X}}\|_{*},\ \text{s.t.},\ P_{\Omega}(\dot{\mathbf{X}}-\dot{\mathbf{M}})=\mathbf{0}, \tag{3}\] where \(\|\dot{\mathbf{X}}\|_{*}\) is the QNN of \(\dot{\mathbf{X}}\). Inspired by the favorable outcomes of nonconvex surrogates in LRMC, Chen _et al._[16] present a universal LRQMC model as follows: \[\min_{\dot{\mathbf{X}}}\sum_{l}\phi(\sigma_{l}(\dot{\mathbf{X}}),\gamma),\ \text{s.t.},\ P_{\Omega}(\dot{\mathbf{X}}-\dot{\mathbf{M}})=\mathbf{0}, \tag{4}\] where the nonnegative \(\gamma\) is associated with the particular function \(\phi(\cdot)\). Furthermore, their proposed approaches are based on three nonconvex rank surrogates that utilize the Laplace, Geman, and Weighted Schatten-\(\gamma\) functions, respectively. 
And \[\|\dot{\mathbf{X}}\|_{L,\gamma}=\sum_{l}\left(1-e^{-\sigma_{l}(\dot{\mathbf{X}})/\gamma}\right); \tag{5}\] \[\|\dot{\mathbf{X}}\|_{G,\gamma}=\sum_{l}\frac{(1+\gamma)\,\sigma_{l}(\dot{\mathbf{X}})}{\gamma+\sigma_{l}(\dot{\mathbf{X}})}; \tag{6}\] \[\|\dot{\mathbf{X}}\|_{W,\gamma}=\sum_{l}\omega_{l}\sigma_{l}^{\gamma}(\dot{\mathbf{X}}). \tag{7}\] As for \(\|\dot{\mathbf{X}}\|_{W,\gamma}\), the contribution of the \(l\)-th singular value of \(\dot{\mathbf{X}}\) (\(\sigma_{l}(\dot{\mathbf{X}})\)) to the quaternion rank is balanced using a non-negative weight scalar \(\omega_{l}\). LRQA-N, LRQA-L, LRQA-G, and LRQA-W are the four low-rank quaternion approximation (LRQA) methods proposed in [16], employing the nuclear norm, Laplace function, Geman function, and weighted Schatten norm, respectively. To gain a deeper insight, please consult [16]. ## 3 Notations and preliminaries The primary mathematical notations used in this paper are initially introduced in this part, and then some basic concepts and theorems in the quaternion system are provided. For a more in-depth understanding of quaternion algebra, we recommend reading [30]. ### Notations \(\mathbb{R}\), \(\mathbb{C}\), and \(\mathbb{H}\) denote the real space, complex space, and quaternion space, respectively. Lowercase letters, boldface lowercase letters, and boldface capital letters signify scalars, vectors, and matrices, respectively. A quaternion scalar, vector, and matrix are denoted by \(\dot{a}\), \(\dot{\mathbf{a}}\), \(\dot{\mathbf{A}}\). \((\cdot)^{T}\), \((\cdot)^{*}\), and \((\cdot)^{H}\), respectively, signify the transpose, conjugation, and conjugate transpose. \(|\cdot|\), \(\|\cdot\|_{1}\), \(\|\cdot\|_{F}\), and \(\|\cdot\|_{*}\) correspond to the absolute value or modulus, the \(l_{1}\)-norm, the Frobenius norm, and the nuclear norm, respectively. The trace and rank operators are denoted by \(\text{tr}\{\cdot\}\) and rank(\(\cdot\)), respectively. \(\mathbf{I}_{m}\in\mathbb{R}^{m\times m}\) is the identity matrix. \(\Re(\cdot)\) signifies its real part for a quaternion scalar, vector, or matrix. ### The fundamentals of quaternion algebra Quaternions were invented by Hamilton in 1843 [31]. The quaternion algebra is defined as \[\mathbb{H}=\{q_{0}+q_{1}i+q_{2}j+q_{3}k|q_{0},q_{1},q_{2},q_{3}\in\mathbb{R}\} \tag{8}\] and \(\mathbb{H}\) is an algebra that fulfills the associative law but not the commutative law. \(i\), \(j\), \(k\) in (8) represent imaginary units and satisfy \(i^{2}=j^{2}=k^{2}=ijk=-1\), and thus \(ij=-ji=k\), \(jk=-kj=i,ki=-ik=j\). Regarding any quaternion \[\dot{q}=q_{0}+q_{1}i+q_{2}j+q_{3}k=\Re(\dot{q})+\mathfrak{J}(\dot{q}),\] \(\Re(\dot{q})=q_{0}\in\mathbb{R}\) is its real part and \(\mathfrak{J}(\dot{q})=q_{1}i+q_{2}j+q_{3}k\) is its vector part. Additionally, \(\dot{q}\) is referred to be a pure quaternion if \(\Re(\dot{q})=0\). Obviously, in general, \(\dot{p}\dot{q}\neq\dot{q}\dot{p}\) since the commutative property of multiplication does not apply to the quaternion algebra. Given \(\dot{q}\), its conjugate is defined as \(\dot{q}^{*}=q_{0}-q_{1}i-q_{2}j-q_{3}k\), and its norm is calculated by \(|\dot{q}|=\sqrt{\dot{q}^{*}\dot{q}}=\sqrt{\dot{q}\dot{q}^{*}}=\sqrt{q_{0}^{2}+q_{1}^{2}+q_{2}^{2}+q_{3}^{2}}\). For any quaternion matrix \(\dot{\mathbf{Q}}=(\dot{q}_{mn})\in\mathbb{H}^{M\times N}\), it can be expressed as \(\dot{\mathbf{Q}}=\mathbf{Q}_{0}+\mathbf{Q}_{1}i+\mathbf{Q}_{2}j+\mathbf{Q}_{3}k\), where \(\mathbf{Q}_{s}\in\mathbb{R}^{M\times N}(s=0,\,1,\,2,\,3)\). 
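Before continuing with quaternion matrices, the scalar operations just defined can be made concrete with a short sketch, assuming `numpy` (the helper names are ours), for quaternions stored as length-4 real arrays \([q_{0},q_{1},q_{2},q_{3}]\):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product, encoding i^2 = j^2 = k^2 = ijk = -1.

    Note qmul(p, q) != qmul(q, p) in general (non-commutativity).
    """
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([p0*q0 - p1*q1 - p2*q2 - p3*q3,
                     p0*q1 + p1*q0 + p2*q3 - p3*q2,
                     p0*q2 - p1*q3 + p2*q0 + p3*q1,
                     p0*q3 + p1*q2 - p2*q1 + p3*q0])

def qconj(q):
    """Conjugate q* = q0 - q1 i - q2 j - q3 k."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qmod(q):
    """Modulus |q| = sqrt(q0^2 + q1^2 + q2^2 + q3^2)."""
    return float(np.sqrt(np.sum(np.asarray(q, dtype=float) ** 2)))

# ij = k but ji = -k, illustrating non-commutativity:
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
assert np.allclose(qmul(i, j), [0, 0, 0, 1])
assert np.allclose(qmul(j, i), [0, 0, 0, -1])

# A color pixel as a pure quaternion: 0 + R*i + G*j + B*k
pixel = np.array([0.0, 0.8, 0.2, 0.1])
assert np.isclose(qmod(pixel), qmod(qconj(pixel)))
```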
If \(\Re(\dot{\mathbf{Q}})=\mathbf{Q}_{0}=\mathbf{0}\), a quaternion matrix is referred to as a pure quaternion matrix. The following is the definition of a quaternion matrix \(\dot{\mathbf{Q}}\)'s Frobenius norm: \[\left\|\dot{\mathbf{Q}}\right\|_{F}=\sqrt{\sum_{m=1}^{M}\sum_{n=1}^{N}|\dot{q}_{mn}|^{2}}=\sqrt{\text{tr}(\dot{\mathbf{Q}}^{H}\dot{\mathbf{Q}})}.\] \(\|\dot{\mathbf{Q}}\|_{*}\) denotes the nuclear norm of \(\dot{\mathbf{Q}}\), and \(\|\dot{\mathbf{Q}}\|_{*}=\sum_{s}\sigma_{s}(\dot{\mathbf{Q}})\), where \(\sigma_{s}(\dot{\mathbf{Q}})\) is the \(s\)-th nonzero singular value of \(\dot{\mathbf{Q}}\). The \(l_{1}\)-norm of \(\dot{\mathbf{Q}}\) is defined as \(\|\dot{\mathbf{Q}}\|_{1}=\sum_{m=1}^{M}\sum_{n=1}^{N}|\dot{q}_{mn}|\). **Definition 1** (Cayley-Dickson form [32] and Equivalent complex matrix of a quaternion matrix [33]).: For any quaternion matrix \(\dot{\mathbf{Q}}=\mathbf{Q}_{0}+\mathbf{Q}_{1}i+\mathbf{Q}_{2}j+\mathbf{Q}_{3}k\in\mathbb{H}^{M\times N}\), its Cayley-Dickson form is given by \(\dot{\mathbf{Q}}=\mathbf{Q}_{a}+\mathbf{Q}_{b}j\), where \(\mathbf{Q}_{a}=\mathbf{Q}_{0}+\mathbf{Q}_{1}i\), \(\mathbf{Q}_{b}=\mathbf{Q}_{2}+\mathbf{Q}_{3}i\in\mathbb{C}^{M\times N}\). And the following definition applies to its equivalent complex matrix \(\chi_{\dot{\mathbf{Q}}}\in\mathbb{C}^{2M\times 2N}\): \[\chi_{\dot{\mathbf{Q}}}=\begin{bmatrix}\mathbf{Q}_{\mathbf{a}}&\mathbf{Q}_{\mathbf{b}}\\ -\mathbf{Q}_{\mathbf{b}}{}^{*}&\mathbf{Q}_{\mathbf{a}}{}^{*}\end{bmatrix}. \tag{9}\] There are many similarities between the quaternion matrix and its equivalent complex matrix. To learn more, we advise reading [33]. **Theorem 1** (Quaternion singular value decomposition (QSVD) [33]).: For every quaternion matrix \(\dot{\mathbf{A}}\in\mathbb{H}^{M\times N}\) of rank \(r\), two unitary quaternion matrices \(\dot{\mathbf{U}}\in\mathbb{H}^{M\times M}\) and \(\dot{\mathbf{V}}\in\mathbb{H}^{N\times N}\) exist, such that \[\dot{\mathbf{A}}=\dot{\mathbf{U}}\begin{bmatrix}\mathbf{\Sigma}_{r}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}\dot{\mathbf{V}}^{H}=\dot{\mathbf{U}}\mathbf{\Lambda}\dot{\mathbf{V}}^{H}, \tag{10}\] where \(\mathbf{\Sigma}_{r}\) is a real diagonal matrix with \(r\) positive singular values of \(\dot{\mathbf{A}}\) on its diagonal. **Theorem 2** (The quaternion Qatar Riyal decomposition (QQR) [34]).: Considering an arbitrary quaternion matrix \(\dot{\mathbf{A}}\in\mathbb{H}^{M\times N}\) with a rank of \(r\), one can find a unitary quaternion matrix \(\dot{\mathbf{Q}}\in\mathbb{H}^{M\times M}\) and a weakly upper triangular quaternion matrix \(\dot{\mathbf{R}}\in\mathbb{H}^{M\times N}\), such that \[\dot{\mathbf{A}}=\dot{\mathbf{Q}}\dot{\mathbf{R}}. \tag{11}\] In other words, there exists a permutation matrix \(\mathbf{P}\in\mathbb{R}^{N\times N}\) such that the product \(\dot{\mathbf{R}}\mathbf{P}\) forms an upper triangular quaternion matrix. **Lemma 1** ([16]).: For any \(\mu>0\), \(\dot{\mathbf{Y}}\in\mathbb{H}^{M\times N}\) is a given quaternion matrix and the QSVD of \(\dot{\mathbf{Y}}\) is given by \(\dot{\mathbf{Y}}=\dot{\mathbf{U}}\Sigma\dot{\mathbf{V}}^{H}\). 
For the following quaternion nuclear norm minimization problem (QNNM) \[\min_{\dot{\mathbf{X}}}\mu\|\dot{\mathbf{X}}\|_{*}+\frac{1}{2}\|\dot{\mathbf{Y}}-\dot{\mathbf{X}}\|_{F}^{2}, \tag{12}\] the closed-form solution \(\hat{\dot{\mathbf{X}}}\) is given by \[\hat{\dot{\mathbf{X}}}=\dot{\mathbf{U}}S_{\mu}(\Sigma)\dot{\mathbf{V}}^{H}, \tag{13}\] where \(S_{\mu}(\Sigma)=\text{diag}(\max\{\sigma_{s}(\dot{\mathbf{Y}})-\mu,0\})\) is the soft thresholding operator. **Lemma 2** ([16]).: Given a positive value of \(\mu\), \(\dot{\mathbf{Y}}\in\mathbb{H}^{M\times N}\) is a known quaternion matrix, and its QSVD is expressed as \(\dot{\mathbf{Y}}=\dot{\mathbf{U}}\Sigma\dot{\mathbf{V}}^{H}\). The optimal solution for \(\hat{\dot{\mathbf{X}}}\) in the following problem \[\min_{\dot{\mathbf{X}}}\mu\sum_{l}\omega_{l}\sigma_{l}(\dot{\mathbf{X}})+\frac{1}{2}\|\dot{\mathbf{Y}}-\dot{\mathbf{X}}\|_{F}^{2} \tag{14}\] is given by \[\hat{\dot{\mathbf{X}}}=\dot{\mathbf{U}}\Sigma_{\hat{\dot{\mathbf{X}}}}\dot{\mathbf{V}}^{H}, \tag{15}\] where \(\Sigma_{\hat{\dot{\mathbf{X}}}}=\text{diag}(\sigma^{*})\) and \(\sigma^{*}\) is given by \[\sigma^{*}=\operatorname*{arg\,min}_{\sigma\geq 0}\ \mu\sum_{l}\omega_{l}\sigma_{l}+\frac{1}{2}\|\sigma-\sigma_{\dot{\mathbf{Y}}}\|_{F}^{2}. \tag{16}\] Presented below is a succinct overview of the Quaternion Discrete Cosine Transform. **Definition 2** (Forward quaternion discrete cosine transform (FQDCT) [35]).: Given a quaternion matrix \(\dot{\mathbf{A}}\in\mathbb{H}^{M\times N}\), as quaternions are non-commutative, its quaternion discrete cosine transform (QDCT) exists in two distinct types, namely, the left-handed and right-handed forms as follows: \[FQDCT^{L}(s,t)=\psi(s)\psi(t)\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\dot{q}\,\dot{\bf A}\,(m,n)\,Q\,(s,t,m,n), \tag{17}\] \[FQDCT^{R}(s,t)=\psi(s)\psi(t)\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\dot{\bf A}\,(m,n)\,Q\,(s,t,m,n)\,\,\dot{q}, \tag{18}\] where the pure quaternion \(\dot{q}\), which fulfills the condition \(\dot{q}^{2}=-1\), is called the quaternionization factor. The values of \(\psi(s)\), \(\psi(t)\), and \(Q\,(s,t,m,n)\) in the QDCT are similar to those in the discrete cosine transform (DCT) in the real domain, which are given as follows: \[\psi(s)=\begin{cases}\sqrt{\frac{1}{M}}&\text{for}\quad s=0\\ \sqrt{\frac{2}{M}}&\text{for}\quad s\neq 0\end{cases},\quad\psi(t)=\begin{cases}\sqrt{\frac{1}{N}}&\text{for}\quad t=0\\ \sqrt{\frac{2}{N}}&\text{for}\quad t\neq 0\end{cases}, \tag{19}\] \[Q\,(s,t,m,n)=\cos\left[\frac{\pi(2m+1)s}{2M}\right]\cos\left[\frac{\pi(2n+1)t}{2N}\right]. \tag{20}\] The two types of inverse QDCT (IQDCT), following the two forms of the FQDCT, are given below: \[IQDCT^{L}(m,n)=\sum_{s=0}^{M-1}\sum_{t=0}^{N-1}\psi(s)\psi(t)\,\dot{q}\,{\bf B}\,(s,t)\,Q\,(s,t,m,n), \tag{21}\] \[IQDCT^{R}(m,n)=\sum_{s=0}^{M-1}\sum_{t=0}^{N-1}\psi(s)\psi(t)\dot{\bf B}\,(s,t)\,Q\,(s,t,m,n)\,\dot{q}, \tag{22}\] where \(\dot{\bf B}\in\mathbb{H}^{M\times N}\). **Theorem 3** (The relationship between FQDCT and IQDCT [35]).: \[\dot{\bf A}(m,n) =IQDCT^{L}\left[FQDCT^{L}(\dot{\bf A}(m,n))\right]\] (23) \[=IQDCT^{R}\left[FQDCT^{R}(\dot{\bf A}(m,n))\right].\] Since the construction of our proposed model necessitates the use of FQDCT\({}^{L}\), we outline the computational steps of FQDCT\({}^{L}\) in the following. 
1. Express the quaternion matrix \(\dot{\bf A}(m,n)\in\mathbb{H}^{M\times N}\) in its Cayley-Dickson form, that is, \(\dot{\bf A}(m,n)={\bf A}_{a}(m,n)+{\bf A}_{b}(m,n)j\), where \({\bf A}_{a}(m,n)\) and \({\bf A}_{b}(m,n)\in\mathbb{C}^{M\times N}\). 2. Compute the DCT of \({\bf A}_{a}(m,n)\) and \({\bf A}_{b}(m,n)\), denoting the resulting matrices as \(DCT({\bf A}_{a}(m,n))\), and \(DCT({\bf A}_{b}(m,n))\) respectively. 3. Using \(DCT(\mathbf{A}_{a}(m,n))\) and \(DCT(\mathbf{A}_{b}(m,n))\), construct a quaternion matrix \(\hat{\mathbf{A}}(m,n)\) as follows: \(\hat{\mathbf{A}}(m,n)=DCT(\mathbf{A}_{a}(m,n))+DCT(\mathbf{A}_{b}(m,n))j\). 4. Multiply \(\hat{\mathbf{A}}(m,n)\) by the quaternionization factor \(\dot{q}\) to obtain the final result: \[FQDCT^{L}\left[\dot{\mathbf{A}}(m,n)\right]=\dot{q}\ \hat{\mathbf{A}}(m,n).\] ## 4 Our method of quaternion completion After providing a brief overview of the CQSVD-QQR technique, this section introduces three novel algorithms for quaternion matrix-based completion. Furthermore, the computational complexities of these models are discussed. ### A method for quaternion matrix decomposition In the event that \(\dot{\mathbf{X}}\in\mathbb{H}^{M\times N}\) is a quaternion matrix, the following quaternion Tri-Factorization is used to compute \(\dot{\mathbf{X}}\)'s approximate QSVD based on QQR decomposition (CQSVD-QQR) [20]: \[\dot{\mathbf{X}}=\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}, \tag{24}\] where the quaternion matrices \(\dot{\mathbf{L}}\in\mathbb{H}^{M\times r}\) and \(\dot{\mathbf{R}}\in\mathbb{H}^{r\times N}\) satisfy \(\dot{\mathbf{L}}^{H}\dot{\mathbf{L}}=\mathbf{I}_{r}\), \(\dot{\mathbf{R}}\dot{\mathbf{R}}^{H}=\mathbf{I}_{r}\), the quaternion matrix \(\dot{\mathbf{D}}\in\mathbb{H}^{r\times r}\) is lower triangular, and \(|\dot{\mathbf{D}}_{ss}|=\sigma_{s}(\dot{\mathbf{X}})\). The CQSVD-QQR procedure is briefly described below. For more information, please see [20]. First, suppose that \(\dot{\mathbf{L}}^{1}=\mathrm{eye}(M,r)\), \(\dot{\mathbf{D}}^{1}=\mathrm{eye}(r,r)\), and \(\dot{\mathbf{R}}^{1}=\mathrm{eye}(r,N)\). Next, in the \(\tau+1\)-th iteration, \(\dot{\mathbf{L}}^{\tau+1}\) is determined by \[\left[\dot{\mathbf{Q}},\dot{\mathbf{G}}\right]=\mathrm{qqr}(\dot{\mathbf{X}}(\dot{\mathbf{R}}^{\tau})^{H}), \tag{25}\] \[\dot{\mathbf{L}}^{\tau+1}=\dot{\mathbf{Q}}(:,1:r), \tag{26}\] where the operator \(\mathrm{qqr}(\cdot)\) calculates the quaternion QR decomposition of a given quaternion matrix, and \(\dot{\mathbf{Q}}(:,1:r)\) means to extract the first \(r\) columns of \(\dot{\mathbf{Q}}\). \(\dot{\mathbf{R}}^{\tau+1}\) is provided by \[\left[\dot{\mathbf{T}},\dot{\mathbf{S}}\right]=\mathrm{qqr}(\dot{\mathbf{X}}^{H}\dot{\mathbf{L}}^{\tau+1}), \tag{27}\] \[\dot{\mathbf{R}}^{\tau+1}=(\dot{\mathbf{T}}(:,1:r))^{H}. \tag{28}\] The last update to \(\dot{\mathbf{D}}^{\tau+1}\) is made by \[\dot{\mathbf{D}}^{\tau+1}=(\dot{\mathbf{S}}(1:r,1:r))^{H}. \tag{29}\] ### Proposed three quaternion \(L_{2,1}\)-norm-based methods for color image completion The formulation of the proposed model is presented in this section. The definition of quaternion \(L_{2,1}\)-norm of \(\dot{\mathbf{X}}\in\mathbb{H}^{M\times N}\) is given by: \[\|\dot{\mathbf{X}}\|_{2,1}=\sum_{n=1}^{N}\sqrt{\sum_{m=1}^{M}|\dot{\mathbf{X}}_{mn}|^{2}}. \tag{30}\] We have verified that this is a valid norm, satisfying all three norm axioms, including the triangle inequality. For the quaternion \(L_{2,1}\)-norm, we can obtain the following theorem. 
**Theorem 4**.: Using the method CQSVD-QQR, a quaternion matrix \(\dot{\mathbf{X}}\in\mathbb{H}^{M\times N}\) can be decomposed into \(\dot{\mathbf{X}}=\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\). And for \(\|\dot{\mathbf{D}}\|_{*}\) and \(\|\dot{\mathbf{D}}\|_{2,1}\), it can be found that \(\|\dot{\mathbf{D}}\|_{*}\leq\|\dot{\mathbf{D}}\|_{2,1}\). Proof.: The following is a decomposition of the quaternion matrix \(\dot{\mathbf{D}}\in\mathbb{H}^{r\times r}\): \[\dot{\mathbf{D}}=\sum_{l=1}^{r}\dot{\mathbf{D}}^{l}, \tag{31}\] \[\dot{\mathbf{D}}^{l}_{ts}=\begin{cases}\dot{\mathbf{D}}_{tl},\ (s=l),\\ 0,\ \ \ (s\neq l),\end{cases} \tag{32}\] where \(\dot{\mathbf{D}}^{l}\in\mathbb{H}^{r\times r}\), and \(t\), \(s=1,\ldots,r\). Due to the convex nature of the quaternion nuclear norm, we can obtain that \[\|\dot{\mathbf{D}}\|_{*}\leq\ \sum_{l=1}^{r}\|\dot{\mathbf{D}}^{l}\|_{*}. \tag{33}\] By calculating the singular values of the equivalent complex matrix of \(\dot{\mathbf{D}}^{l}\) and according to the definition of the QNN, we can get \[\|\dot{\mathbf{D}}^{l}\|_{*}=\sqrt{\sum_{t=1}^{r}|\dot{\mathbf{D}}_{tl}|^{2}}. \tag{34}\] According to (34), we have \[\sum_{l=1}^{r}\|\dot{\mathbf{D}}^{l}\|_{*}=\|\dot{\mathbf{D}}\|_{2,1}. \tag{35}\] This proves the above conclusion. _1) The proposed CQSVD-QQR-based quaternion \(L_{2,1}\)-norm minimization approach_: Theorem 4 shows that the quaternion \(L_{2,1}\)-norm of a quaternion matrix is an upper bound of its QNN. This result inspires us to replace the QNN in the minimization problem of (3) with the quaternion \(L_{2,1}\)-norm for quaternion matrix completion in the manner described below: \[\min_{\dot{\mathbf{D}}}\|\dot{\mathbf{D}}\|_{2,1},\ \text{s.t.},\begin{cases}\dot{\mathbf{L}}^{H}\dot{\mathbf{L}}=\mathbf{I}_{r},\ \dot{\mathbf{R}}\dot{\mathbf{R}}^{H}=\mathbf{I}_{r},\\ \dot{\mathbf{X}}=\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}},\ P_{\Omega}(\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}-\dot{\mathbf{M}})=\mathbf{0}.\end{cases} \tag{36}\] Since the quaternion \(L_{2,1}\)-norm objective in (36) is convex, the problem can be solved using the alternating direction method of multipliers (ADMM). The augmented Lagrangian function of (36) is \[\text{Lag}=\|\dot{\mathbf{D}}\|_{2,1}+\Re(\langle\dot{\mathbf{E}},\ \dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\rangle)+\frac{\mu}{2}\|\dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\|_{F}^{2}, \tag{37}\] where \(\mu>0\), and the Lagrange multiplier \(\dot{\mathbf{E}}\in\mathbb{H}^{M\times N}\). Essentially, there are two steps in the process of solving the entire problem. The first step is to update the variables \(\dot{\mathbf{L}}\) and \(\dot{\mathbf{R}}\) by solving the corresponding optimization problem based on the CQSVD-QQR method. The second step involves updating the variables \(\dot{\mathbf{D}}\) and \(\dot{\mathbf{X}}\). The following minimization problem is solved in step 1 to update \(\dot{\mathbf{L}}^{\tau+1}\) and \(\dot{\mathbf{R}}^{\tau+1}\): \[\min_{\dot{\mathbf{L}},\dot{\mathbf{R}}}\left\|(\dot{\mathbf{X}}^{\tau}+\dot{\mathbf{E}}^{\tau}/\mu^{\tau})-\dot{\mathbf{L}}\dot{\mathbf{D}}^{\tau}\dot{\mathbf{R}}\right\|_{F}^{2}. 
\tag{38}\] We can update \(\dot{\mathbf{L}}^{\tau+1}\) and \(\dot{\mathbf{R}}^{\tau+1}\) by using CQSVD-QQR, in accordance with the analysis of the CQSVD-QQR method, i.e., \[\begin{cases}\left[\dot{\mathbf{Q}},\sim\right]=\text{qqr}(\dot{\mathbf{X}}_{b}(\dot{\mathbf{R}}^{\tau})^{H}),\\ \dot{\mathbf{L}}^{\tau+1}=\dot{\mathbf{Q}}(:,1:r),\end{cases} \tag{39}\] and \[\begin{cases}\left[\dot{\mathbf{T}},\dot{\mathbf{S}}\right]=\text{qqr}(\dot{\mathbf{X}}_{b}^{H}\dot{\mathbf{L}}^{\tau+1}),\\ \dot{\mathbf{R}}^{\tau+1}=(\dot{\mathbf{T}}(:,1:r))^{H},\end{cases} \tag{40}\] where \(\dot{\mathbf{X}}_{b}=\dot{\mathbf{X}}^{\tau}+\dot{\mathbf{E}}^{\tau}/\mu^{\tau}\). If \(\dot{\mathbf{L}}\) and \(\dot{\mathbf{R}}\) are initialized as \(\dot{\mathbf{L}}^{\tau}\) and \(\dot{\mathbf{R}}^{\tau}\), the CQSVD-QQR method will converge within a limited number of iterations since the quaternion matrices \(\dot{\mathbf{L}}\) and \(\dot{\mathbf{R}}\) do not change significantly during the course of two iterations [19]. Therefore, this warm-start initialization improves both the quality of the results and the convergence speed of our algorithm. Step 2 involves updating \(\dot{\mathbf{D}}^{\tau+1}\) and \(\dot{\mathbf{X}}^{\tau+1}\). The following quaternion \(L_{2,1}\)-norm minimization problem must be solved in order to determine \(\dot{\mathbf{D}}^{\tau+1}\): \[\dot{\mathbf{D}}^{\tau+1}=\underset{\dot{\mathbf{D}}}{\arg\min}\ \|\dot{\mathbf{D}}\|_{2,1}+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{D}}-(\dot{\mathbf{L}}^{\tau+1})^{H}\dot{\mathbf{X}}_{b}(\dot{\mathbf{R}}^{\tau+1})^{H}\|_{F}^{2}. \tag{41}\] The following theorem is established in order to solve the above problem (41). **Theorem 5**.: For the following minimization problem \[\underset{\dot{\mathbf{X}}}{\min}\mathcal{J}(\dot{\mathbf{X}})=\underset{\dot{\mathbf{X}}}{\min}\beta\|\dot{\mathbf{X}}\|_{2,1}+\frac{1}{2}\|\dot{\mathbf{X}}-\dot{\mathbf{Y}}\|_{F}^{2}, \tag{42}\] where \(\beta>0\), \(\dot{\mathbf{X}}\), and \(\dot{\mathbf{Y}}\in\mathbb{H}^{M\times N}\), the \(n\)-th column of the optimal \(\dot{\widetilde{\mathbf{X}}}\), i.e., \(\dot{\widetilde{\mathbf{X}}}_{n}\), of (42) is given by \[\dot{\widetilde{\mathbf{X}}}_{n}=\frac{(\|\dot{\mathbf{Y}}_{n}\|_{2}-\beta)_{+}}{\|\dot{\mathbf{Y}}_{n}\|_{2}}\dot{\mathbf{Y}}_{n}, \tag{43}\] where \(\dot{\widetilde{\mathbf{X}}}_{n}=\left[\dot{\widetilde{\mathbf{X}}}_{1n},\dot{\widetilde{\mathbf{X}}}_{2n},\ldots,\dot{\widetilde{\mathbf{X}}}_{Mn}\right]^{T}\), \(\|\dot{\mathbf{Y}}_{n}\|_{2}=\sqrt{\sum_{m=1}^{M}|\dot{\mathbf{Y}}_{mn}|^{2}}\), and \((\|\dot{\mathbf{Y}}(:,n)\|_{2}-\beta)_{+}=\max\{\|\dot{\mathbf{Y}}(:,n)\|_{2}-\beta,0\}\). The proof of Theorem 5 can be found in Appendix A. Therefore, we can update \(\dot{\mathbf{D}}^{\tau+1}\) in the problem (41) by using Theorem 5 as follows: \[\dot{\mathbf{D}}^{\tau+1}=\hat{\dot{\mathbf{D}}}\mathbf{C}, \tag{44}\] where \(\hat{\dot{\mathbf{D}}}=(\dot{\mathbf{L}}^{\tau+1})^{H}\dot{\mathbf{X}}_{b}(\dot{\mathbf{R}}^{\tau+1})^{H}\), and \(\mathbf{C}=\text{diag}(c_{1},\ldots,c_{r})\) is a diagonal matrix, where \(c_{s}\) (\(s=1,\ldots,r\)) is given by \[c_{s}=\frac{(\|\hat{\dot{\mathbf{D}}}_{*s}\|_{2}-\frac{1}{\mu^{\tau}})_{+}}{\|\hat{\dot{\mathbf{D}}}_{*s}\|_{2}}. \tag{45}\] Next, we can update the variable \(\dot{\mathbf{X}}^{\tau+1}\) by fixing the other variables as follows: \[\dot{\mathbf{X}}^{\tau+1}=\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}-P_{\Omega}(\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1})+P_{\Omega}(\dot{\mathbf{M}}). 
\tag{46}\] Finally, we can update \(\dot{\mathbf{E}}^{\tau+1}\) and \(\mu\) by fixing the variables \(\dot{\mathbf{L}}^{\tau+1},\dot{\mathbf{D}}^{\tau+1},\dot{\mathbf{R}}^{\tau+1}\), and \(\dot{\mathbf{X}}^{\tau+1}\) as follows: \[\dot{\mathbf{E}}^{\tau+1}=\dot{\mathbf{E}}^{\tau}+\mu^{\tau}(\dot{\mathbf{X}}^{\tau+1}-\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}), \tag{47}\] \[\mu^{\tau+1}=\rho\mu^{\tau}, \tag{48}\] where \(\rho\geq 1\). We call the proposed CQSVD-QQR-based quaternion \(L_{2,1}\)-norm minimization approach QLNM-QQR. Algorithm 1 provides a summary of the whole proposed approach's process. QLNM-QQR can reach its optimal solution because the ADMM, a gradient-search-based approach, minimizes the convex optimization function of (36) and thus allows for convergence. Assume that QLNM-QQR converges after \(t\) iterations. If the updating processes of the QLNM-QQR approach are carried out further, \(\dot{\mathbf{X}}^{\tau}\) (\(\tau\geq t\)) will equal \(\dot{\mathbf{X}}^{t}\). QLNM-QQR can revert to CQSVD-QQR since \(\dot{\mathbf{L}}\) and \(\dot{\mathbf{R}}\) from the preceding iteration served as initialization for CQSVD-QQR. Consequently, the series of \(\{\dot{\mathbf{D}}^{\tau}\}\) generated by QLNM-QQR will converge to a diagonal matrix \(\dot{\mathbf{D}}\) and \[|\dot{\mathbf{D}}_{ss}|=\sigma_{s}(\dot{\mathbf{X}}^{t}). \tag{49}\] Because of this, the QLNM-QQR algorithm's quaternion \(L_{2,1}\)-norm minimization function can reach the QNN of \(\dot{\mathbf{D}}\). This inspires us to extend QLNM-QQR by introducing a quaternion \(L_{2,1}\)-norm minimization approach with iterative reweighting. _2) Extending the approach QLNM-QQR with iterative reweighting_: Based on (35), the weighted quaternion \(L_{2,1}\)-norm of \(\dot{\mathbf{X}}\in\mathbb{H}^{M\times N}\) is given by \[\|\dot{\mathbf{X}}\|_{\omega(2,1)}=\sum_{l=1}^{N}\omega_{l}\|\dot{\mathbf{X}}^{l}\|_{*}, \tag{50}\] where \(\omega_{l}\) is a positive number, and \(\dot{\mathbf{X}}^{l}\) is defined in the same way as \(\dot{\mathbf{D}}^{l}\) in (31). The following modification can be made to the QLNM-QQR minimization problem in (36): \[\min_{\dot{\mathbf{D}}}\sum_{l=1}^{r}\partial g(\|\dot{\mathbf{D}}^{l}\|_{*})\|\dot{\mathbf{D}}^{l}\|_{*},\ \text{s.t.},\begin{cases}\dot{\mathbf{L}}^{H}\dot{\mathbf{L}}=\mathbf{I}_{r},\ \dot{\mathbf{R}}\dot{\mathbf{R}}^{H}=\mathbf{I}_{r},\\ \dot{\mathbf{X}}=\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}},\ P_{\Omega}(\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}-\dot{\mathbf{M}})=\mathbf{0}.\end{cases} \tag{51}\] \(\partial g(\sigma_{s})\) is the gradient of \(g(\cdot)\) at \(\sigma_{s}\), and \(g(\cdot)\) is continuous, concave, smooth, differentiable, and monotonically increasing on \([0,\,+\infty)\)[19; 16]. The ADMM can resolve the problem in (51). We give its augmented Lagrangian function as below: \[\begin{split}&\mathcal{L}(\dot{\mathbf{X}},\dot{\mathbf{L}},\dot{\mathbf{D}},\dot{\mathbf{R}},\dot{\mathbf{E}})\\ &=\sum_{l=1}^{r}\partial g(||\dot{\mathbf{D}}^{l}||_{*})||\dot{\mathbf{D}}^{l}||_{*}+\Re(\langle\dot{\mathbf{E}},\;\dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\rangle)+\frac{\mu}{2}||\dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}||_{F}^{2}.\end{split} \tag{52}\] In the \(\tau\)-th iteration, the variables \(\dot{\mathbf{X}}^{\tau+1}\), \(\dot{\mathbf{L}}^{\tau+1}\), \(\dot{\mathbf{R}}^{\tau+1}\), \(\dot{\mathbf{E}}^{\tau+1}\), and \(\mu^{\tau+1}\) are updated in the same way as for problem (36). 
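Theorem 5, used above for QLNM-QQR and, in weighted form, for the reweighted variant, amounts to a column-wise soft-thresholding rule that can be sketched in a few lines, assuming `numpy` (the function name is ours; for quaternion matrices the column norm would sum \(|\dot{q}_{mn}|^{2}\) over all four quaternion components):

```python
import numpy as np

def prox_l21(Y, beta):
    """Column-wise shrinkage solving min_X beta*||X||_{2,1} + 0.5*||X - Y||_F^2.

    Per Theorem 5: each column y_n is scaled by (||y_n||_2 - beta)_+ / ||y_n||_2,
    so small columns are zeroed out and large ones are shrunk toward zero.
    """
    norms = np.linalg.norm(Y, axis=0)
    scale = np.maximum(norms - beta, 0.0) / np.maximum(norms, np.finfo(float).eps)
    return Y * scale

rng = np.random.default_rng(0)
Y = rng.normal(size=(5, 4))
X = prox_l21(Y, beta=0.5)  # note: no SVD is required, unlike the QNN prox
```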
_2) Extending the approach QLNM-QQR with iterative reweighting_: Based on (35), the weighted quaternion \(L_{2,1}\)-norm of \(\dot{\mathbf{X}}\in\mathbb{H}^{M\times N}\) is given by \[\|\dot{\mathbf{X}}\|_{\omega(2,1)}=\sum_{l=1}^{N}\omega_{l}\|\dot{\mathbf{X}}^{l}\|_{*}, \tag{50}\] where \(\omega_{l}\) is a positive weight and \(\dot{\mathbf{X}}^{l}\) is defined in the same way as \(\dot{\mathbf{D}}^{l}\) in (31). The QLNM-QQR minimization problem in (36) can then be modified as follows: \[\min_{\dot{\mathbf{D}}}\sum_{l=1}^{r}\partial g(\|\dot{\mathbf{D}}^{l}\|_{*})\|\dot{\mathbf{D}}^{l}\|_{*},\ \text{s.t.},\begin{cases}\dot{\mathbf{L}}^{H}\dot{\mathbf{L}}=\mathbf{I}_{r},\ \dot{\mathbf{R}}\dot{\mathbf{R}}^{H}=\mathbf{I}_{r},\\ \dot{\mathbf{X}}=\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}},\ P_{\Omega}(\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}-\dot{\mathbf{M}})=\mathbf{0},\end{cases} \tag{51}\] where \(\partial g(\sigma_{s})\) is the gradient of \(g(\cdot)\) at \(\sigma_{s}\), and \(g(\cdot)\) is continuous, concave, smooth, differentiable, and monotonically increasing on \([0,+\infty)\) [19, 16]. The ADMM can solve the problem in (51); its augmented Lagrangian function is \[\mathcal{L}(\dot{\mathbf{X}},\dot{\mathbf{L}},\dot{\mathbf{D}},\dot{\mathbf{R}},\dot{\mathbf{E}})=\sum_{l=1}^{r}\partial g(\|\dot{\mathbf{D}}^{l}\|_{*})\|\dot{\mathbf{D}}^{l}\|_{*}+\Re(\langle\dot{\mathbf{E}},\,\dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\rangle)+\frac{\mu}{2}\|\dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\|_{F}^{2}. \tag{52}\] In the \(\tau\)-th iteration, the variables \(\dot{\mathbf{X}}^{\tau+1}\), \(\dot{\mathbf{L}}^{\tau+1}\), \(\dot{\mathbf{R}}^{\tau+1}\), \(\dot{\mathbf{E}}^{\tau+1}\), and \(\mu^{\tau+1}\) are updated in the same way as for the previous problem (36).

To update \(\dot{\mathbf{D}}^{\tau+1}\), we need to solve the minimization problem \[\min_{\dot{\mathbf{D}}}\sum_{l=1}^{r}\partial g(\|\dot{\mathbf{D}}^{l}\|_{*})\|\dot{\mathbf{D}}^{l}\|_{*}+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{D}}-\hat{\mathbf{D}}\|_{F}^{2}, \tag{53}\] where \(\hat{\mathbf{D}}=(\dot{\mathbf{L}}^{\tau+1})^{H}\dot{\mathbf{X}}_{b}(\dot{\mathbf{R}}^{\tau+1})^{H}\). As in [19], in the \(\tau\)-th iteration we also let \[\partial g(\|\dot{\mathbf{D}}^{l}\|_{*})=\mu^{\tau}(1-\hat{a}_{l})\|\dot{\mathbf{D}}^{l}\|_{*}\quad(l=1,2,\ldots,r), \tag{54}\] where \(1\geq\hat{a}_{1}\geq\hat{a}_{2}\geq\cdots\geq\hat{a}_{r}>0\).

**Theorem 6**.: Assume that \(\dot{\mathbf{Y}}\in\mathbb{H}^{M\times M}\) and \(\mu>0\). For the minimization problem \[\min_{\dot{\mathbf{X}}\in\mathbb{H}^{M\times M}}\frac{1}{\mu}\|\dot{\mathbf{X}}\|_{\omega(2,1)}+\frac{1}{2}\|\dot{\mathbf{X}}-\dot{\mathbf{Y}}\|_{F}^{2}, \tag{55}\] the optimal solution is given by \[\dot{\mathbf{X}}_{\mathrm{opt}}=\dot{\mathbf{Y}}\mathbf{A}, \tag{56}\] where \(\mathbf{A}=\mathrm{diag}(a_{1},\ldots,a_{M})\) and \[a_{m}=\frac{(\sigma_{m}-\frac{\omega_{m}}{\mu})_{+}}{\sigma_{m}},\quad(m=1,\ldots,M) \tag{57}\] where \(\sigma_{m}\) is the singular value of \(\dot{\mathbf{Y}}^{m}\), and \(\dot{\mathbf{Y}}^{m}\) is defined in the same way as \(\dot{\mathbf{D}}^{l}\) in (31).

The proof of Theorem 6 can be found in Appendix B. Theorem 6 and (54) allow us to update \(\dot{\mathbf{D}}^{\tau+1}\) as follows: \[\dot{\mathbf{D}}^{\tau+1}=\hat{\mathbf{D}}\mathbf{A}, \tag{58}\] where \(\mathbf{A}=\mathrm{diag}(\hat{a}_{1},\ldots,\hat{a}_{r})\). We abbreviate the proposed CQSVD-QQR-based iteratively reweighted quaternion \(L_{2,1}\)-norm minimization model for matrix completion as IRQLNM-QQR.

**Theorem 7**.: By using (54) to specify the weights \(\partial g(\|\dot{\mathbf{D}}^{l}\|_{*})\) (\(l=1,\ldots,r\)) in (52), IRQLNM-QQR can converge to the optimal solution of an LRQA-W minimization model with \(\gamma=1\).

The proof of Theorem 7 can be found in Appendix C. For the experiments, we specify the values of \(\hat{a}_{l}\) (\(l=1,2,\ldots,r\)) as follows: \[\omega_{l}=\begin{cases}1,&1\leqslant l\leqslant V,\ 1<V<r,\\ \frac{\varsigma-1}{r-V}+\omega_{l-1},&V<l\leqslant r,\end{cases} \tag{59}\] \[\hat{a}_{l}=\frac{1}{\omega_{l}}, \tag{60}\] where \(\varsigma>1\) and \(r\) is the number of rows of \(\mathbf{A}\) in (58).
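The reweighting scheme (59)-(60) is simple to realize; below is a minimal NumPy sketch of ours (the function name is not from the paper) that produces the weights \(\omega_{l}\) and the scalars \(\hat{a}_{l}\) forming the diagonal of \(\mathbf{A}\) in (58).

```python
import numpy as np

def reweighting_scalars(r, V, varsigma):
    """Weights of (59)-(60): omega_1..omega_V equal 1, then each step adds
    (varsigma - 1)/(r - V); a_hat_l = 1/omega_l. Requires varsigma > 1 and
    1 < V < r. Indices are 1-based in the paper, 0-based here."""
    omega = np.ones(r)
    for l in range(V, r):                       # paper's l = V+1, ..., r
        omega[l] = omega[l - 1] + (varsigma - 1.0) / (r - V)
    a_hat = 1.0 / omega                         # 1 >= a_hat_1 >= ... >= a_hat_r > 0
    return omega, a_hat

# With the experimental setting varsigma = 10 and V = 3 used later in the paper,
# the scaling matrix of (58) would be A = np.diag(reweighting_scalars(r, 3, 10.0)[1]).
```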
#### IV-B3 QLNM-QQR with sparsity

As we previously indicated, the performance of color image completion models can be enhanced by combining the low-rank property of color images with sparsity. We therefore introduce a sparse regularization term into the minimization problem (36) and obtain the following optimization problem: \[\min_{\dot{\mathbf{D}}}\|\dot{\mathbf{D}}\|_{2,1}+\beta\|\dot{\mathbf{C}}\|_{1},\ \text{s.t.,}\begin{cases}\dot{\mathbf{L}}^{H}\dot{\mathbf{L}}=\mathbf{I}_{r},\ \dot{\mathbf{R}}\dot{\mathbf{R}}^{H}=\mathbf{I}_{r},\\ \dot{\mathbf{X}}=\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}},\ P_{\Omega}(\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}})=P_{\Omega}(\dot{\mathbf{M}}),\ \mathcal{T}(\dot{\mathbf{X}})=\dot{\mathbf{C}},\end{cases} \tag{61}\] where \(\beta>0\), \(\dot{\mathbf{C}}=\mathcal{T}(\dot{\mathbf{X}})\) stands for the quaternion matrix after transformation, and \(\mathcal{T}(\cdot)\) denotes a transform operator. In this section, the FQDCT\({}^{L}\) is employed to formulate our proposed approach; that is, \(\mathcal{T}(\dot{\mathbf{X}})\) computes the FQDCT\({}^{L}\) of \(\dot{\mathbf{X}}\). We again use the ADMM framework in the quaternion system to solve problem (61); the corresponding augmented Lagrangian function is \[\text{Lag}=\|\dot{\mathbf{D}}\|_{2,1}+\beta\|\dot{\mathbf{C}}\|_{1}+\Re(\langle\dot{\mathbf{E}},\,\dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\rangle)+\frac{\mu}{2}\|\dot{\mathbf{X}}-\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}}\|_{F}^{2}+\Re(\langle\dot{\mathbf{F}},\,\dot{\mathbf{C}}-\mathcal{T}(\dot{\mathbf{X}})\rangle)+\frac{\mu}{2}\|\dot{\mathbf{C}}-\mathcal{T}(\dot{\mathbf{X}})\|_{F}^{2}, \tag{62}\] where the penalty parameter \(\mu\) is a positive number, and \(\dot{\mathbf{E}}\) and \(\dot{\mathbf{F}}\) are Lagrange multipliers. The ADMM framework allows each variable in the optimization problem (61) to be updated alternately while the other variables are fixed. Specifically, in the \(\tau\)-th iteration, the variables are updated as follows: \[\begin{cases}\dot{\mathbf{L}}^{\tau+1}=\underset{\dot{\mathbf{L}}}{\arg\min}\,\mathrm{Lag}(\dot{\mathbf{X}}^{\tau},\dot{\mathbf{L}},\dot{\mathbf{D}}^{\tau},\dot{\mathbf{R}}^{\tau},\dot{\mathbf{C}}^{\tau},\dot{\mathbf{E}}^{\tau},\dot{\mathbf{F}}^{\tau}),\\ \dot{\mathbf{R}}^{\tau+1}=\underset{\dot{\mathbf{R}}}{\arg\min}\,\mathrm{Lag}(\dot{\mathbf{X}}^{\tau},\dot{\mathbf{L}}^{\tau+1},\dot{\mathbf{D}}^{\tau},\dot{\mathbf{R}},\dot{\mathbf{C}}^{\tau},\dot{\mathbf{E}}^{\tau},\dot{\mathbf{F}}^{\tau}),\\ \dot{\mathbf{D}}^{\tau+1}=\underset{\dot{\mathbf{D}}}{\arg\min}\,\mathrm{Lag}(\dot{\mathbf{X}}^{\tau},\dot{\mathbf{L}}^{\tau+1},\dot{\mathbf{D}},\dot{\mathbf{R}}^{\tau+1},\dot{\mathbf{C}}^{\tau},\dot{\mathbf{E}}^{\tau},\dot{\mathbf{F}}^{\tau}),\\ \dot{\mathbf{X}}^{\tau+1}=\underset{\dot{\mathbf{X}}}{\arg\min}\,\mathrm{Lag}(\dot{\mathbf{X}},\dot{\mathbf{L}}^{\tau+1},\dot{\mathbf{D}}^{\tau+1},\dot{\mathbf{R}}^{\tau+1},\dot{\mathbf{C}}^{\tau},\dot{\mathbf{E}}^{\tau},\dot{\mathbf{F}}^{\tau}),\\ \dot{\mathbf{C}}^{\tau+1}=\underset{\dot{\mathbf{C}}}{\arg\min}\,\mathrm{Lag}(\dot{\mathbf{X}}^{\tau+1},\dot{\mathbf{L}}^{\tau+1},\dot{\mathbf{D}}^{\tau+1},\dot{\mathbf{R}}^{\tau+1},\dot{\mathbf{C}},\dot{\mathbf{E}}^{\tau},\dot{\mathbf{F}}^{\tau}),\\ \dot{\mathbf{E}}^{\tau+1}=\dot{\mathbf{E}}^{\tau}+\mu^{\tau}(\dot{\mathbf{X}}^{\tau+1}-\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}),\\ \dot{\mathbf{F}}^{\tau+1}=\dot{\mathbf{F}}^{\tau}+\mu^{\tau}(\dot{\mathbf{C}}^{\tau+1}-\mathcal{T}(\dot{\mathbf{X}}^{\tau+1})).\end{cases} \tag{63}\]

**Updating \(\dot{\mathbf{L}}\), \(\dot{\mathbf{D}}\), and \(\dot{\mathbf{R}}\)**: To update \(\dot{\mathbf{L}}^{\tau+1}\) and \(\dot{\mathbf{R}}^{\tau+1}\) in the \((\tau+1)\)-th iteration, the following minimization problem needs to be solved: \[\underset{\dot{\mathbf{L}},\dot{\mathbf{R}}}{\min}\,\left\|(\dot{\mathbf{X}}^{\tau}+\frac{\dot{\mathbf{E}}^{\tau}}{\mu^{\tau}})-\dot{\mathbf{L}}\dot{\mathbf{D}}^{\tau}\dot{\mathbf{R}}\right\|_{F}^{2}. \tag{64}\] The updates of \(\dot{\mathbf{L}}^{\tau+1}\) and \(\dot{\mathbf{R}}^{\tau+1}\) here are the same as in the QLNM-QQR method, i.e., \(\dot{\mathbf{L}}^{\tau+1}\) and \(\dot{\mathbf{R}}^{\tau+1}\) are updated according to (39) and (40), respectively. Also, since the variable \(\dot{\mathbf{D}}\) does not directly depend on the variable \(\dot{\mathbf{C}}\), \(\dot{\mathbf{D}}^{\tau+1}\) is updated by solving the same problem as in the previous subsection, i.e., problem (41).
Thus, we update \(\dot{\mathbf{D}}^{\tau+1}\) by using (44).

**Updating \(\dot{\mathbf{X}}\), \(\dot{\mathbf{C}}\), \(\dot{\mathbf{E}}\), \(\dot{\mathbf{F}}\), and \(\mu\)**: After updating the variables \(\dot{\mathbf{L}}^{\tau+1}\), \(\dot{\mathbf{D}}^{\tau+1}\), and \(\dot{\mathbf{R}}^{\tau+1}\) and fixing the remaining variables, \(\dot{\mathbf{X}}^{\tau+1}\) can be updated in the \(\tau\)-th iteration by solving the following problem: \[\begin{split}\dot{\mathbf{X}}^{\tau+1}&=\underset{\dot{\mathbf{X}}}{\arg\min}\,\Re(\langle\dot{\mathbf{E}}^{\tau},\dot{\mathbf{X}}-\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}\rangle)+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{X}}-\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}\|_{F}^{2}\\&\quad+\Re(\langle\dot{\mathbf{F}}^{\tau},\dot{\mathbf{C}}^{\tau}-\mathcal{T}(\dot{\mathbf{X}})\rangle)+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{C}}^{\tau}-\mathcal{T}(\dot{\mathbf{X}})\|_{F}^{2}\\&=\underset{\dot{\mathbf{X}}}{\arg\min}\,\frac{\mu^{\tau}}{2}\|\dot{\mathbf{X}}+\frac{\dot{\mathbf{E}}^{\tau}}{\mu^{\tau}}-\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}\|_{F}^{2}+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{C}}^{\tau}+\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}}-\mathcal{T}(\dot{\mathbf{X}})\|_{F}^{2}.\end{split} \tag{65}\] Since the term \(\mathcal{T}(\dot{\mathbf{X}})\) is contained in problem (65), we cannot directly separate the variable \(\dot{\mathbf{X}}\) from the other variables. We can, however, reformulate the problem according to the Parseval theorem in the quaternion system and then isolate \(\dot{\mathbf{X}}\) from \(\mathcal{T}(\cdot)\). The quaternion counterpart of the Parseval theorem in the real domain states that the total energy of a signal stays constant under a unitary transformation such as the quaternion discrete Fourier transform (QDFT) or the quaternion discrete cosine transform (QDCT) [36]. As a result, by applying the corresponding inverse transform to the last term in (65), we get \[\|\dot{\mathbf{C}}^{\tau}+\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}}-\mathcal{T}(\dot{\mathbf{X}})\|_{F}^{2}=\|\mathcal{I}(\dot{\mathbf{C}}^{\tau}+\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}})-\dot{\mathbf{X}}\|_{F}^{2}, \tag{66}\] where \(\mathcal{I}(\cdot)\) stands for the inverse transform of \(\mathcal{T}(\cdot)\). According to (65) and (66), the optimization problem used to update \(\dot{\mathbf{X}}\) is reformulated as \[\dot{\mathbf{X}}^{\tau+1}=\underset{\dot{\mathbf{X}}}{\arg\min}\,\Big\|\frac{1}{2}\Big(\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}+\mathcal{I}(\dot{\mathbf{C}}^{\tau}+\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}})-\frac{\dot{\mathbf{E}}^{\tau}}{\mu^{\tau}}\Big)-\dot{\mathbf{X}}\Big\|_{F}^{2}. \tag{67}\] The closed-form solution to problem (67) is given by \[\dot{\mathbf{X}}^{\tau+1}=\frac{1}{2}\Big(\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}+\mathcal{I}(\dot{\mathbf{C}}^{\tau}+\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}})-\frac{\dot{\mathbf{E}}^{\tau}}{\mu^{\tau}}\Big). \tag{68}\] Considering the restriction \(P_{\Omega}(\dot{\mathbf{L}}\dot{\mathbf{D}}\dot{\mathbf{R}})=P_{\Omega}(\dot{\mathbf{M}})\), we get \[\dot{\mathbf{X}}^{\tau+1}=P_{\Omega}(\dot{\mathbf{M}})+P_{\Omega^{c}}(\dot{\mathbf{X}}^{\tau+1}), \tag{69}\] where \(\Omega^{c}\) stands for the index set of the missing entries.
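The X-update (68)-(69) is purely element-wise once the product \(\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}\) is available. Below is a minimal NumPy/SciPy sketch of ours: quaternion matrices are stored as real \((M,N,4)\) arrays, the argument `LDR` is assumed to be precomputed by a quaternion matrix-multiplication routine (not shown), and a channel-wise real 2-D inverse DCT is used only as a stand-in for the paper's FQDCT\({}^{L}\) pair \(\mathcal{T}(\cdot)/\mathcal{I}(\cdot)\).

```python
import numpy as np
from scipy.fft import idctn

def update_X(LDR, C, F, E, mu, mask, M_obs):
    """X-update of (68)-(69).
    LDR, C, F, E, M_obs: real (M, N, 4) arrays (one channel per quaternion part);
    mask: boolean (M, N) array marking the observed entries (the set Omega)."""
    # stand-in for I(C + F/mu): inverse DCT applied channel-wise over the spatial axes
    inv = idctn(C + F / mu, axes=(0, 1), norm="ortho")
    X = 0.5 * (LDR + inv - E / mu)      # closed form (68)
    X[mask] = M_obs[mask]               # re-impose the observed entries, i.e. (69)
    return X
```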
The next step is to update the variable \(\dot{\mathbf{C}}^{\tau+1}\) by solving the following problem: \[\begin{split}\dot{\mathbf{C}}^{\tau+1}&=\underset{\dot{\mathbf{C}}}{\arg\min}\,\beta\|\dot{\mathbf{C}}\|_{1}+\Re(\langle\dot{\mathbf{F}}^{\tau},\ \dot{\mathbf{C}}-\mathcal{T}(\dot{\mathbf{X}}^{\tau+1})\rangle)+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{C}}-\mathcal{T}(\dot{\mathbf{X}}^{\tau+1})\|_{F}^{2}\\&=\underset{\dot{\mathbf{C}}}{\arg\min}\,\beta\|\dot{\mathbf{C}}\|_{1}+\frac{\mu^{\tau}}{2}\|\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}}+\dot{\mathbf{C}}-\mathcal{T}(\dot{\mathbf{X}}^{\tau+1})\|_{F}^{2}.\end{split} \tag{70}\] The closed-form solution to problem (70) is given by \[\dot{\mathbf{C}}^{\tau+1}=\mathcal{S}_{\frac{\beta}{\mu^{\tau}}}(\mathcal{T}(\dot{\mathbf{X}}^{\tau+1})-\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}}), \tag{71}\] where \(\mathcal{S}_{t}(\dot{\mathbf{x}})=\frac{\dot{\mathbf{x}}}{|\dot{\mathbf{x}}|}\max\{|\dot{\mathbf{x}}|-t,0\}\) stands for the element-wise soft-thresholding operator [27]. Finally, the penalty parameter \(\mu^{\tau+1}\) is updated as \[\mu^{\tau+1}=\rho\mu^{\tau}. \tag{72}\] We call this approach QLNM-QQR-SR because, in contrast to QLNM-QQR, it also takes a sparse regularization term into account. Algorithm 2 summarizes all of the steps of the proposed model.

```
0: Input: the observed data \(\dot{\mathbf{M}}\in\mathbb{H}^{M\times N}\) (\(P_{\Omega^{c}}(\dot{\mathbf{M}})=\mathbf{0}\)); \(\rho\); \(\mu_{\max}\); \(\beta\); \(r\).
1: Initialize \(\tau=0\); \(\varepsilon>0\); \(\mu^{0}\); \(\mathrm{It}_{\max}>0\); \(\dot{\mathbf{L}}^{0}=\mathrm{eye}(M,r)\); \(\dot{\mathbf{R}}^{0}=\mathrm{eye}(r,N)\); \(\dot{\mathbf{D}}^{0}=\mathrm{eye}(r,r)\); \(\dot{\mathbf{X}}^{0}=\dot{\mathbf{M}}\); \(\dot{\mathbf{C}}^{0}=\mathbf{0}\).
2: Repeat
3: Step 1. Update \(\dot{\mathbf{L}}^{\tau+1}\) and \(\dot{\mathbf{R}}^{\tau+1}\) by (39) and (40), respectively.
4: Step 2. Update \(\dot{\mathbf{D}}^{\tau+1}\) by (44) and (45).
5: Step 3. \(\dot{\mathbf{X}}^{\tau+1}=P_{\Omega}(\dot{\mathbf{M}})+P_{\Omega^{c}}(\frac{1}{2}(\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1}-\frac{\dot{\mathbf{E}}^{\tau}}{\mu^{\tau}}+\mathcal{I}(\dot{\mathbf{C}}^{\tau}+\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}})))\).
6: Step 4. \(\dot{\mathbf{C}}^{\tau+1}=\mathcal{S}_{\frac{\beta}{\mu^{\tau}}}(\mathcal{T}(\dot{\mathbf{X}}^{\tau+1})-\frac{\dot{\mathbf{F}}^{\tau}}{\mu^{\tau}})\).
7: \(\dot{\mathbf{E}}^{\tau+1}=\dot{\mathbf{E}}^{\tau}+\mu^{\tau}(\dot{\mathbf{X}}^{\tau+1}-\dot{\mathbf{L}}^{\tau+1}\dot{\mathbf{D}}^{\tau+1}\dot{\mathbf{R}}^{\tau+1})\).
8: \(\dot{\mathbf{F}}^{\tau+1}=\dot{\mathbf{F}}^{\tau}+\mu^{\tau}(\dot{\mathbf{C}}^{\tau+1}-\mathcal{T}(\dot{\mathbf{X}}^{\tau+1}))\).
9: \(\mu^{\tau+1}=\min(\rho\mu^{\tau},\mu_{\max})\).
10: Until convergence
11: Output: \(\dot{\mathbf{L}}^{\tau+1}\), \(\dot{\mathbf{D}}^{\tau+1}\), \(\dot{\mathbf{R}}^{\tau+1}\), \(\dot{\mathbf{X}}^{\tau+1}\), and \(\dot{\mathbf{C}}^{\tau+1}\).
```
**Algorithm 2** The proposed QLNM-QQR-SR method for color image completion.
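Step 4 of Algorithm 2 applies the operator \(\mathcal{S}_{t}\) of (71) element-wise over quaternion entries. A minimal NumPy sketch of ours, using the same \((M,N,4)\) quaternion storage as above:

```python
import numpy as np

def quaternion_soft_threshold(X, t):
    """Element-wise soft-thresholding S_t of (71): each quaternion entry x is
    scaled by max(|x| - t, 0)/|x|, where |x| is the quaternion modulus.
    X: real (M, N, 4) array; entries with |x| <= t are set to zero."""
    mod = np.sqrt((X ** 2).sum(axis=-1, keepdims=True))          # |x| per entry
    scale = np.maximum(mod - t, 0.0) / np.maximum(mod, np.finfo(float).eps)
    return X * scale
```

With `T_of_X` denoting the transform of \(\dot{\mathbf{X}}^{\tau+1}\), the update (71) would read `C = quaternion_soft_threshold(T_of_X - F/mu, beta/mu)`.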
### Complexity analysis

The computational complexities of our three proposed approaches are investigated in this subsection. According to Algorithm 1, the computation of the QQR decompositions of two quaternion matrices accounts for most of the computational cost of each iteration of QLNM-QQR. These two quaternion matrices have sizes \(M\times r\) and \(N\times r\), respectively, where \(r<\min\{M,N\}\). As a result, the complexity is approximately \(\mathcal{O}(r^{2}(M+N)-r^{3})\), and the complexity of IRQLNM-QQR is likewise around \(\mathcal{O}(r^{2}(M+N)-r^{3})\). Regarding the algorithmic complexity of the QLNM-QQR-SR technique, the transformation operator \(\mathcal{T}(\cdot)\) adds significant cost on top of the QQR decompositions: \(\mathcal{T}(\cdot)\) has a complexity of approximately \(\mathcal{O}(M^{2}N^{2}+MN)\), so the overall computational complexity of the proposed QLNM-QQR-SR approach is around \(\mathcal{O}(M^{2}N^{2}+MN)\) per iteration. By contrast, methods based directly on the QNN, such as LRQA-G [16], need to calculate the QSVD of an \(M\times N\) quaternion matrix, whose complexity is about \(\mathcal{O}(\min(MN^{2},M^{2}N))\); clearly, the cost of the QQR decompositions of two quaternion matrices of sizes \(M\times r\) and \(N\times r\) is smaller than that of the QSVD of an \(M\times N\) quaternion matrix. Additionally, the per-iteration computational complexity of the LRQMC method is estimated as \(\mathcal{O}(\widetilde{r}^{2}+(M+N)\widetilde{r}+MN)\), where \(\widetilde{r}\) denotes the estimated rank of the complex representation matrix of \(\dot{\mathbf{X}}\), which has size \(2M\times 2N\). The computational cost of IRLNM-QR is around \(\mathcal{O}(r^{2}(M+N))\). Hence both of our proposed approaches QLNM-QQR and IRQLNM-QQR have computational complexities comparable to IRLNM-QR and lower than those of LRQA and LRQMC.

## 5 Simulation results and discussion

To show the effectiveness of the three proposed quaternion-based completion techniques (i.e., QLNM-QQR, IRQLNM-QQR, and QLNM-QQR-SR), we perform numerical experiments on natural color images and color medical images in this section, together with the corresponding analysis of the numerical results. All tests are run on a MATLAB 2019b platform with an i7-9700 CPU and 16 GB memory.

**Evaluation metrics:** Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), two extensively used metrics, are adopted to assess the effectiveness of the proposed QLNM-QQR, IRQLNM-QQR, and QLNM-QQR-SR; higher PSNR and SSIM indicate better recovery performance. We conduct numerical experiments on matrices with randomly missing entries, including quaternion matrices, and consider missing ratios (MR) of entries ranging from 50% to 85%.
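For reference, PSNR has the standard closed form \(10\log_{10}(\mathrm{peak}^{2}/\mathrm{MSE})\); a minimal NumPy sketch of ours is below (SSIM is more involved and is available in standard image-processing libraries).

```python
import numpy as np

def psnr(x_true, x_rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak is the maximum possible pixel
    value (255 for 8-bit images). Higher values indicate better recovery."""
    mse = np.mean((np.asarray(x_true, float) - np.asarray(x_rec, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```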
**Method comparison:** Several popular completion techniques, namely WNNM [2], MC-NC [37], IRLNM-QR [19], TNN-SR [26], QLNF [17], TQLNA [17], LRQA-G [16], and LRQMC [4], are compared with the proposed QLNM-QQR, IRQLNM-QQR, and QLNM-QQR-SR approaches. WNNM, MC-NC, IRLNM-QR, and TNN-SR are LRMC-based completion algorithms, whereas QLNF, TQLNA, LRQA-G, and LRQMC are LRQMC-based completion methods. Like the processing in [26], the approaches based on real matrix completion handle the three color channels individually and then combine their outputs to produce the final result.

**Experimental results on the eight natural color images:** In our tests, eight frequently used \(256\times 256\) natural color images, shown in Fig. 1, are chosen as the test images (four from the McMaster Dataset [38]). Our experiments use the following parameters for QLNM-QQR: \(\mu^{0}=0.003\), \(\rho=1.05\); the value of \(r\) is set to 65, 90, 105, and 125, corresponding to MR values of 85%, 75%, 65%, and 50%, respectively. For IRQLNM-QQR, we set \(\mu^{0}=0.003\) and \(\rho=1\); the value of \(r\) is set to 115, 125, 155, and 170, corresponding to MR values of 85%, 75%, 65%, and 50%, respectively, and the parameters \(\varsigma\) and \(V\) in (59) are set to 10 and 3, respectively. As for the QLNM-QQR-SR method, \(\mu^{0}\), \(\beta\), and \(\rho\) are set to 0.5, 0.5, and 1.05, respectively, and the parameter \(r\) is assigned the values 60, 85, 100, and 120, corresponding to MR values of 85%, 75%, 65%, and 50%, respectively. As the MR value increases, the number of missing pixels increases and the corresponding rank decreases.

The recovery results of the different methods for MR\(=75\%\) are compared in Fig. 2, and the quantitative evaluation of the different methods under different missing ratios is presented in Table 1. To illustrate the superior performance of our proposed approaches, Fig. 3 presents the visual results for Image(2) and Image(5) recovered by our approaches and by several state-of-the-art approaches, all at a missing ratio of 85%. The results in Table 1 and Fig. 2 indicate that both QLNM-QQR and IRQLNM-QQR exhibit better completion performance than WNNM. IRQLNM-QQR achieves higher PSNR and SSIM values than IRLNM-QR, and QLNM-QQR generally outperforms IRLNM-QR as well; moreover, when the MR is large, IRQLNM-QQR outperforms MC-NC. These results demonstrate the effectiveness of quaternion representations in solving color image completion problems. Based on Theorem 7, the IRQLNM-QQR method approaches the optimal solution of an LRQA-W minimization model. Additionally, as discussed in [16], LRQA-based models using non-convex functions show similar performance on the color image completion problem, and among them LRQA-G has been found to achieve the best completion results, better than LRQA-N and LRQA-L. Table 1 indicates that IRQLNM-QQR achieves numerical results comparable to LRQA-G and, when the missing ratio is high, generally outperforms it. The experimental results also show that IRQLNM-QQR exhibits superior precision to QLNM-QQR. According to the results in Fig. 2 and Table 1, the QLNM-QQR-SR method proposed in this paper outperforms the other methods in both visual and quantitative assessments. The results indicate that incorporating sparse prior information is crucial for achieving better completion results, as both TNN-SR and QLNM-QQR-SR outperform the other methods; QLNM-QQR-SR in turn performs better than TNN-SR, which can be attributed to the superior ability of quaternions to characterize color images. Based on the visual results in Fig. 3, our proposed method recovers more details from the observed images than the other state-of-the-art methods.

**Experimental results on the color medical images:** The continuous development of medical imaging equipment has revolutionized healthcare by providing accurate and detailed visual information about the human body. Magnetic resonance imaging (MRI) and positron emission tomography (PET) are widely used imaging techniques for capturing the structural and functional characteristics of organs, respectively. Fusing these two types of data can greatly enhance the interpretation of tissue and organ behavior, thereby improving the accuracy of diagnoses; therefore, the analysis of their overlay is of great importance in medical diagnosis.
Fig. 1: Ground truth: Image(1)-Image(8) are eight color images, each with dimensions of \(256\times 256\times 3\).

However, medical images may be incomplete for various reasons, such as equipment limitations or patient movements during examinations, which can negatively impact the accuracy of disease diagnosis. Quaternion completion-based models can be employed to address this issue. Eight color medical images of size \(256\times 256\), obtained from "The Whole Brain Atlas"1 medical image database provided by Harvard Medical School, were used in the experiments. To leverage the information in the color medical images more effectively, we preprocessed the eight medical images by extracting sub-images of size \(141\times 141\) for the subsequent experimental analysis. Fig. 4 shows the eight original medical images along with their corresponding sub-images obtained after preprocessing.

Footnote 1: [http://www.med.harvard.edu/AANLIB/home.html](http://www.med.harvard.edu/AANLIB/home.html)

For the experiment conducted at MR\(=85\%\), we used the same parameter settings for \(\mu^{0}\), \(\rho\), and \(\beta\) in the QLNM-QQR, IRQLNM-QQR, and QLNM-QQR-SR methods as in the previous experiment. The \(r\) values for QLNM-QQR, IRQLNM-QQR, and QLNM-QQR-SR were set to 55, 80, and 45, respectively, and IRQLNM-QQR was again implemented with the values of \(\varsigma\) and \(V\) in (59) set to 10 and 3, respectively. Fig. 5 displays the recovered color medical images using the different methods for visual comparison, and visually demonstrates the superiority of our proposed QLNM-QQR-SR approach over all the other methods.

Fig. 2: (a) Ground truth. From top to bottom: Image(1)-Image(8). (b) Observation (MR=75%). (c)-(m) are the restored results of WNNM, MC-NC, IRLNM-QR, TNN-SR, QLNF, TQLNA, LRQA-G, LRQMC, QLNM-QQR, IRQLNM-QQR, and QLNM-QQR-SR, respectively.

Figure 3: The image recovery outcomes at MR = 85% on Image(2) and Image(5). (a) Ground truth. (b) Observation. (c)-(m) are the recovery outcomes of WNNM, MC-NC, IRLNM-QR, TNN-SR, QLNF, TQLNA, LRQA-G, LRQMC, QLNM-QQR, IRQLNM-QQR, and QLNM-QQR-SR, respectively.

Figure 4: (a)-(h) are the eight original color medical images with size \(256\times 256\times 3\). (i)-(p) Ground truth: Image(9)_Sub-Image(16)_Sub are the corresponding sub-images with size \(141\times 141\times 3\).

The numerical comparison of the different methods in terms of PSNR and SSIM values for the recovered medical images at MR=85% is shown in Table 2. The results show that the IRQLNM-QQR method is superior to the QLNM-QQR and LRQA-G methods in both numerical and visual terms. It is challenging to improve the quality of recovered images at this level of MR; nevertheless, our proposed QLNM-QQR-SR method outperforms the other methods in terms of PSNR and SSIM values, as demonstrated by Table 2.

## 6 Conclusions

In this study, we developed a novel method called QLNM-QQR for completing color images using the quaternion representation of color images. The method is based on the quaternion \(L_{2,1}\)-norm and a Tri-Factorization of a quaternion matrix called CQSVD-QQR. The coupling between color channels is naturally handled by this approach, and representing color pixels as vector units rather than scalars in the quaternion representation results in better retention of color information.
The method avoids the need to calculate the QSVD of large quaternion matrices, which reduces the computational complexity compared to traditional LRQMC methods. According to theoretical analysis, the quaternion \(L_{2,1}\)-norm of a submatrix in QLNM-QQR is capable of converging to its QNN. To enhance its performance, we introduce an improved version called IRQLNM-QQR that uses iteratively reweighted quaternion \(L_{2,1}\)-norm minimization. According to theoretical analysis, IRQLNM-QQR is equally precise as an LRQA-W minimization method. Additionally, we incorporate sparse regularization into the QLNM-QQR method to develop QLNM-QQR-SR. The experimental results obtained from both natural color images and color medical images indicate that IRQLNM-QQR achieves accuracy almost comparable to the LRQA-G method and outperforms QLNM-QQR in precision. The experimental results also demonstrate that the proposed QLNM-QQR-SR method displays better performance in both numerical accuracy and visual quality than several state-of-the-art techniques. Besides, we have proven that the quaternion \(L_{2,1}\)-norm of a quaternion matrix is an upper bound for its quaternion nuclear norm. As a result, the proposed methods have broad applicability and can enhance the performance of a variety of techniques, including multiview data analysis, quaternion matrix/tensor completion, and low-rank representation based on the quaternion nuclear norm.

## Acknowledgments

This work was supported by University of Macau (MYRG2019-00039-FST), Science and Technology Development Fund, Macao S.A.R (FDCT/0036/2021/AGJ), and Science and Technology Planning Project of Guangzhou City, China (Grant No. 201907010043).

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Methods: & WNNM & MC-NC & IRLNM-QR & TNN-SR & QLNF & TQLNA & LRQA-G & LRQMC & QLNM-QQR & IRQLNM-QQR & QLNM-QQR-SR \\ \hline \hline Images: & \multicolumn{11}{c|}{MR = 85\%} \\ Image(9)\_Sub & 14.499/0.452 & 16.210/0.481 & 17.768/0.588 & 21.499/0.774 & 17.240/0.538 & 18.169/0.508 & 18.020/0.598 & 18.418/0.501 & 17.211/0.530 & 18.188/0.611 & **21.279/0.781** \\ Image(10)\_Sub & 13.239/0.407 & 14.673/0.501 & 1.628/0.627 & 18.210/0.838 & 18.470/0.600 & 16.390/0.605 & 16.637/0.624 & 16.948/0.590 & 15.712/0.505 & 16.838/0.611 & **21.249/0.544** \\ Image(11)\_Sub & 14.549/0.480 & 16.422/0.543 & 17.675/0.637 & 22.728/0.860 & 17.476/0.605 & 18.330/0.661 & 18.116/0.645 & 17.639/0.621 & 17.366/0.598 & 18.191/0.667 & **23.031/0.870** \\ Image(12)\_Sub & 12.798/0.436 & 14.124/0.465 & 18.555/0.612 & 18.960/0.783 & 16.635/0.631 & 15.949/0.641 & 15.880/0.614 & 16.191/0.611 & 15.099/0.657 & 16.090/0.628 & **19.037/0.787** \\ Image(13)\_Sub & 13.454/0.415 & 15.159/0.495 & 11.671/0.556 & 21.103/0.586 & 16.530/0.595 & 17.166/0.623 & 16.838/0.605 & 15.451/0.590 & 16.144/0.555 & 17.039/0.627 & **14.127/0.843** \\ Image(14)\_Sub & 16.079/0.258 & 12.762/0.321 & 13.536/0.407 & 16.660/0.997 & 14.028/0.399 & 13.577/0.991 & 13.999/0.466 & 14.100/0.801 & 13.827/0.374 & 13.997/0.431 & **16.844/0.611** \\ Image(15)\_Sub & 12.684/0.488 & 14.190/0.571 & 15.809/0.640 & 20.420/0.581 & 15.531/0.416 & 15.813/0.612 & 15.858/0.615 & 15.541/0.614 & 15.260/0.585 & 15.986/0.656 & **20.726/0.733** \\ Image(16)\_Sub & 2.481/0.420 & 13.977/0.666 & 1.430/0.576 & 20.481/0.829 & 15.772/0.591 & 16.133/0.620 & 15.999/0.604 & 16.329/0.622 & 15.285/0.546 & 16.123/0.625 & **20.933/0.843** \\ \hline Aver. & 14.949 & 16.422 & 17.768 & 22.736 & 17.476 & 18.336 & 18.116 & 18.418 & 17.366 & 18.191 & **23.031** \\ \hline \end{tabular}
\end{table} Table 2: A comparison of quantitative assessment indices (PSNR/SSIM) across different methods on the set of eight color medical images.

## Appendix A Proof of Theorem 5

Proof.: The quaternion \(L_{2,1}\)-norm of \(\dot{\mathbf{X}}\) can be rewritten as follows: \[\|\dot{\mathbf{X}}\|_{2,1}=\sum_{n=1}^{N}\sqrt{\sum_{m=1}^{M}|\dot{\mathbf{X}}_{mn}|^{2}}=\sum_{n=1}^{N}\|\dot{\mathbf{X}}_{n}\|_{2}, \tag{A.1}\] where \(\|\dot{\mathbf{X}}_{n}\|_{2}=\sqrt{\sum_{m=1}^{M}|\dot{\mathbf{X}}_{mn}|^{2}}\). Since both terms in (42) are convex, there is only one optimal solution.
With the use of the corresponding theory of quaternion matrix derivatives in [39], we get \[\begin{split}\frac{\partial\mathcal{J}(\dot{\mathbf{X}}_{n})}{\partial\dot{\mathbf{X}}_{n}}&=\beta\frac{\partial\|\dot{\mathbf{X}}_{n}\|_{2}}{\partial\dot{\mathbf{X}}_{n}}+\frac{1}{2}\frac{\partial\,\mathrm{Tr}\big[(\dot{\mathbf{X}}_{n}-\dot{\mathbf{Y}}_{n})^{H}(\dot{\mathbf{X}}_{n}-\dot{\mathbf{Y}}_{n})\big]}{\partial\dot{\mathbf{X}}_{n}}\\&=\beta\frac{\dot{\mathbf{X}}_{n}}{\|\dot{\mathbf{X}}_{n}\|_{2}}+\frac{1}{4}\big(\dot{\mathbf{X}}_{n}-\dot{\mathbf{Y}}_{n}\big)\\&=\beta\frac{\dot{\mathbf{Y}}_{n}}{\|\dot{\mathbf{Y}}_{n}\|_{2}}+\frac{1}{4}\big(\dot{\mathbf{X}}_{n}-\dot{\mathbf{Y}}_{n}\big)\quad\big(\|\dot{\mathbf{Y}}_{n}\|_{2}>4\beta\big).\end{split} \tag{A.2}\] We can find the unique solution to problem (42) by setting (A.2) to zero: \[\begin{split}\dot{\widetilde{\mathbf{X}}}_{n}&=\dot{\mathbf{Y}}_{n}-4\beta\frac{\dot{\mathbf{Y}}_{n}}{\|\dot{\mathbf{Y}}_{n}\|_{2}}\quad\big(\|\dot{\mathbf{Y}}_{n}\|_{2}>4\beta\big)\\&=\frac{\dot{\mathbf{Y}}_{n}}{\|\dot{\mathbf{Y}}_{n}\|_{2}}\big(\|\dot{\mathbf{Y}}_{n}\|_{2}-4\beta\big)\quad\big(\|\dot{\mathbf{Y}}_{n}\|_{2}>4\beta\big)\\&=\frac{(\|\dot{\mathbf{Y}}_{n}\|_{2}-4\beta)_{+}}{\|\dot{\mathbf{Y}}_{n}\|_{2}}\dot{\mathbf{Y}}_{n},\end{split} \tag{A.3}\] where \((y)_{+}=\max\{y,0\}\).

## Appendix B Proof of Theorem 6

Proof.: In line with (50), the optimal solution \(\dot{\mathbf{X}}_{\mathrm{opt}}\) of the problem in (55) is given by \[\dot{\mathbf{X}}_{\mathrm{opt}}^{m}=\underset{\dot{\mathbf{X}}^{m}}{\arg\min}\,\frac{\omega_{m}}{\mu}\|\dot{\mathbf{X}}^{m}\|_{*}+\frac{1}{2}\|\dot{\mathbf{X}}^{m}-\dot{\mathbf{Y}}^{m}\|_{F}^{2},\quad(m=1,\ldots,M) \tag{B.1}\] where \(\dot{\mathbf{X}}_{\mathrm{opt}}=\sum_{m=1}^{M}\dot{\mathbf{X}}_{\mathrm{opt}}^{m}\). Assume that \(\dot{\mathbf{Y}}^{m}=\dot{\mathbf{U}}\mathbf{\Sigma}\dot{\mathbf{V}}^{H}\) is the QSVD of \(\dot{\mathbf{Y}}^{m}\). We represent \(\dot{\mathbf{U}}\) and \(\dot{\mathbf{V}}\) as two partitioned matrices: \(\dot{\mathbf{U}}=[\dot{\mathbf{u}}_{1},\ldots,\dot{\mathbf{u}}_{M}]\) and \(\dot{\mathbf{V}}=[\dot{\mathbf{v}}_{1},\ldots,\dot{\mathbf{v}}_{M}]\), where \(\dot{\mathbf{u}}_{m}\) and \(\dot{\mathbf{v}}_{m}\in\mathbb{H}^{M}\) (\(m=1,\ldots,M\)). Based on Lemma 1, the closed-form solution of (B.1) is \[\begin{split}\dot{\mathbf{X}}_{\mathrm{opt}}^{m}&=\dot{\mathbf{U}}\mathcal{S}_{\frac{\omega_{m}}{\mu}}(\mathbf{\Sigma})\dot{\mathbf{V}}^{H}\\&=\big(\|\dot{\mathbf{Y}}^{m}\|_{F}-\frac{\omega_{m}}{\mu}\big)_{+}\dot{\mathbf{u}}_{1}\dot{\mathbf{v}}_{1}^{H}\\&=\frac{(\|\dot{\mathbf{Y}}^{m}\|_{F}-\frac{\omega_{m}}{\mu})_{+}}{\|\dot{\mathbf{Y}}^{m}\|_{F}}\dot{\mathbf{Y}}^{m},\end{split} \tag{B.2}\] where \(\|\dot{\mathbf{Y}}^{m}\|_{F}=\sigma_{m}\) is the only nonzero singular value of \(\dot{\mathbf{Y}}^{m}\). Based on the above discussion, the optimal solution \(\dot{\mathbf{X}}_{\mathrm{opt}}\) is given by \[\dot{\mathbf{X}}_{\mathrm{opt}}=\dot{\mathbf{Y}}\mathbf{A}, \tag{B.3}\] where \(\mathbf{A}=\mathrm{diag}(a_{1},\ldots,a_{M})\) and \[a_{m}=\frac{(\sigma_{m}-\frac{\omega_{m}}{\mu})_{+}}{\sigma_{m}},\quad(m=1,\ldots,M). \tag{B.4}\]

## Appendix C Proof of Theorem 7

Proof.: The IRQLNM-QQR algorithm produces a sequence of quaternion matrices \(\{\dot{\mathbf{D}}^{\tau}\}\) that converges to a diagonal matrix \(\dot{\mathbf{D}}\) satisfying (49). As a result, the matrix \(\hat{\mathbf{D}}\) in (53) also converges to a diagonal matrix \(\dot{\mathbf{T}}\).
Therefore, we can reformulate the problem in (53) as follows: \[\min_{\dot{\mathbf{D}}}\sum_{l=1}^{r}\partial g(\|\dot{\mathbf{D}}^{l}\|_{*})\|\dot{\mathbf{D}}^{l}\|_{*}+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{D}}-\dot{\mathbf{T}}\|_{F}^{2}. \tag{C.1}\] Because \(\|\dot{\mathbf{D}}^{l}\|_{*}=\sigma_{l}(\dot{\mathbf{D}})\), we can rewrite the above problem as \[\min_{\dot{\mathbf{D}}}\sum_{l=1}^{r}\partial g(\sigma_{l}(\dot{\mathbf{D}}))\,\sigma_{l}(\dot{\mathbf{D}})+\frac{\mu^{\tau}}{2}\|\dot{\mathbf{D}}-\dot{\mathbf{T}}\|_{F}^{2}. \tag{C.2}\] Therefore, problem (C.2) can be solved based on Lemma 2, which completes the proof.
Color image completion is a challenging problem in computer vision, but in recent years quaternion representations of color images have shown good performance in many fields. These representations treat the color image as a whole and make effective use of the coupling information among the three color channels; for this reason, low-rank quaternion matrix completion (LRQMC) algorithms have attracted considerable attention. We propose a method named QLNM-QQR, which combines the quaternion Qatar Riyal decomposition (QQR) with the quaternion $L_{2,1}$-norm. This new approach reduces the computational complexity by avoiding the computation of the QSVD of large quaternion matrices. We introduce two improvements to the QLNM-QQR method: an enhanced version called IRQLNM-QQR that uses iteratively reweighted quaternion $L_{2,1}$-norm minimization, and a method called QLNM-QQR-SR that incorporates sparse regularization.
2309.14218
Cellular pavings of fibers of convolution morphisms
This article proves, in the case of split groups over arbitrary fields, that all fibers of convolution morphisms attached to parahoric affine flag varieties are paved by products of affine lines and affine lines minus a point. This applies in particular to the affine Grassmannian and to the convolution morphisms in the context of the geometric Satake correspondence. The second part of the article extends these results over $\mathbb Z$. Those in turn relate to the recent work of Cass-van den Hove-Scholbach on the geometric Satake equivalence for integral motives, and provide some alternative proofs for some of their results.
Thomas J. Haines
2023-09-25T15:23:11
http://arxiv.org/abs/2309.14218v3
# Cellular Pavings of Fibers of Convolution Morphisms ###### Abstract. This article proves, in the case of split groups over arbitrary fields, that all fibers of convolution morphisms attached to parahoric affine flag varieties are paved by products of affine lines and affine lines minus a point. This applies in particular to the affine Grassmannian and to the convolution morphisms in the context of the geometric Satake correspondence. The second part of the article extends these results over \(\mathbb{Z}\). Those in turn relate to the recent work of Cass-van den Hove-Scholbach on the geometric Satake equivalence for integral motives, and provide some alternative proofs for some of their results. Research partially supported by NSF DMS-2200873 ###### Contents * 1 Introduction and Main Results * 2 Notation * 3 Review of convolution morphisms * 4 Consequences of a factorization of the pro-unipotent Iwahori subgroup * 5 Stratified triviality of convolution morphisms * 6 Paving results for the case \(\mathcal{P}=\mathcal{B}\) * 7 Proof of Theorem 1.1 * 8 Proof of Corollary 1.2 * 9 Application to structure constants for parahoric Hecke Algebras * 10 Cellular paving of certain subvarieties in the affine Grassmannian * 11 Paving results over \(\mathbb{Z}\) * 12 Errata for [4] ## 1. Introduction and Main Results Let \(G\) be a split connected reductive group over any field \(k\). Let \(W\) be the Iwahori-Weyl group of \(LG(k)=G(k(\!(t)\!))\), and for each \(r\)-tuple \(w_{\bullet}=(w_{1},\ldots,w_{r})\in W^{r}\) and choice of standard parahoric subgroup \(\mathcal{P}\subset LG(k)\) consider the convolution morphism \[m_{w_{\bullet},\mathcal{P}}:X_{\mathcal{P}}(w_{\bullet}):=X_{\mathcal{P}}(w_{1})\widetilde{\times}\cdots\widetilde{\times}X_{\mathcal{P}}(w_{r})\to X_{\mathcal{P}}(w_{*})\] defined on the twisted product of Schubert varieties \(X_{\mathcal{P}}(w_{i})\subset\mathrm{Fl}_{\mathcal{P}}\) (see §2 and §3). Such morphisms have long played an important role in the geometric Langlands program and in the study of the geometry of Schubert varieties. For example, if \(w_{\bullet}=(s_{1},\ldots,s_{r})\) is a sequence of simple affine reflections, \(w=s_{1}\cdots s_{r}\) is a reduced word, and \(\mathcal{P}\) is the standard Iwahori subgroup \(\mathcal{B}\), then \(X_{\mathcal{B}}(s_{\bullet})\to X_{\mathcal{B}}(w)\) is the Demazure resolution (of singularities) of \(X_{\mathcal{B}}(w)\). If \(\mathcal{P}=L^{+}G\) is the positive loop group and \(w_{\bullet}=\mu_{\bullet}=(\mu_{1},\ldots,\mu_{r})\) is a tuple of cocharacters in \(G\), the corresponding convolution morphism is used to define the convolution of \(L^{+}G\)-equivariant perverse sheaves on the affine Grassmannian \(\mathrm{Gr}_{G}=LG/L^{+}G\), and hence it plays a key role in the geometric Satake correspondence. Numerous applications stem from the study of the fibers of convolution morphisms, their dimensions and irreducible components, and possible pavings of them by affine spaces or related spaces. This article will focus on pavings of fibers by affine spaces, or by closely related spaces. We recall that a variety \(X\) is _paved by varieties in a class \(\mathcal{C}\)_ provided that there exists a finite exhaustion by closed subvarieties \(\emptyset=X_{0}\subset X_{1}\subset\cdots\subset X_{l}=X\) such that each locally closed difference \(X_{i}-X_{i-1}\) for \(1\leq i\leq l\) is isomorphic to a member of the class \(\mathcal{C}\).
The fact that the fibers of Demazure resolutions admit pavings by affine spaces was for a long time a folklore result, until a proof appeared in [10] and later more generally in [11] (see [11, Thm. 2.5.2]). This has been used in proving various parity vanishing and purity results in Kazhdan-Lusztig theory [10, 11] and in the geometric Satake correspondence [1, 14, 15]. The paving of certain fibers related to the affine Grassmannian for \(\mathrm{GL}_{n}\) gives a different approach to paving by affines of some Springer-Spaltenstein varieties, which are certain partial Springer resolutions of the nilpotent cone for \(\mathrm{GL}_{n}\); see [1, Prop. 8.2, ff.], and [21]. One could conjecture that all fibers of general convolution morphisms \(X_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{\ast})\) are paved by affine spaces. In the special case of a sequence of minuscule cocharacters \(w_{\bullet}=\mu_{\bullet}\) and the associated convolution morphism \(X_{L^{+}G}(\mu_{\bullet})\to X_{L^{+}G}(|\mu_{\bullet}|)\) for the affine Grassmannian \(\mathrm{Gr}_{G}=LG/L^{+}G\), this was proved in [14, Cor. 1.2]. In general for the affine Grassmannian, it is not known which fibers are paved by affine spaces (see [14, Question 3.9]). The existence of an affine space paving of fibers of \(m_{w_{\bullet},\mathcal{P}}\) in the general case seems to be an interesting open question, and the author is not aware of any counterexamples. One can consider the analogous question of when fibers of _uncompactified_ convolution morphisms \(Y_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{\ast})\) are paved by affine spaces. This turns out to usually fail (for examples, see Remarks 6.6 and 6.7 below). However, a weaker result does always hold. **Theorem 1.1**.: _Every fiber of a convolution morphism \(Y_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{\ast})\) is paved by finite products of copies of \(\mathbb{A}^{1}\) and \(\mathbb{A}^{1}-\mathbb{A}^{0}\)._ As a corollary we obtain the following result on fibers of the usual convolution morphisms. **Corollary 1.2**.: _Every fiber of any convolution morphism \(X_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{\ast})\) is paved by finite products of copies of \(\mathbb{A}^{1}\) and \(\mathbb{A}^{1}-\mathbb{A}^{0}\)._ The previous two results show that the fibers in question are _cellular \(k\)-schemes_, in the sense of [16, Def. 3.1.5]. We adopt a similar terminology and declare that they admit _cellular pavings_. A weaker version of Corollary 1.2 was stated without proof in [11, Rem. 2.5.4]. The proof is given here in §6, §7, and §8. One situation where paving by affine spaces is known is given by the following result. **Theorem 1.3**.: _Suppose \(w_{\bullet}=s_{\bullet}=(s_{1},s_{2},\ldots,s_{r})\) is a sequence of simple reflections with Demazure product \(s_{\ast}=s_{1}\ast s_{2}\ast\cdots\ast s_{r}\). Then the fibers of \(X_{\mathcal{B}}(s_{\bullet})\to X_{\mathcal{B}}(s_{\ast})\) are paved by affine spaces._ This theorem was proved in [11, Thm. 2.5.2]. However, here we give a different proof, which has the advantage that it can be easily adapted to prove the special case of Theorem 1.1 where \(\mathcal{P}=\mathcal{B}\) and every \(w_{i}\) is a simple reflection. This in turn is used to prove the general case of Theorem 1.1. The results above should all have analogues at least for connected reductive groups \(G\) which are defined and tamely ramified over a field \(k(t)\) with \(k\) perfect (see Remark 4.2).
The proofs will necessarily be more involved and technical, and the author expects them to appear in a separate work. In section 11, we extend all the preceding results over \(\mathbb{Z}\). We prove in that section the following result (Theorem 11.17), the second part of which recovers [11, Thm. 1.2]. **Theorem 1.4**.: _Assume \(G_{\mathbb{Z}}\supset B_{\mathbb{Z}}\supset T_{\mathbb{Z}}\) is a connected reductive group over \(\mathbb{Z}\) with Borel pair defined over \(\mathbb{Z}\). Consider a parahoric subgroup \(\mathcal{P}_{\mathbb{Z}}\) and the associated Schubert schemes \(X_{\mathcal{P},\mathbb{Z}}(w)\subset\mathrm{Fl}_{\mathcal{P},\mathbb{Z}}\). The convolution morphisms attached to \(w_{\bullet}=(w_{1},\ldots,w_{r})\in W^{r}\) may be constructed over \(\mathbb{Z}\)_ \[m_{w_{\bullet},\mathcal{P}_{\mathbb{Z}}}:X_{\mathcal{P},\mathbb{Z}}(w_{\bullet})\to X_{\mathcal{P},\mathbb{Z}}(w_{\ast}),\] _and for any \(v\leq w_{\ast}\), the fiber \(m_{w_{\bullet},\mathcal{P}_{\mathbb{Z}}}^{-1}(v\,e_{\mathcal{P}_{\mathbb{Z}}})\) has a cellular paving over \(\mathbb{Z}\). Furthermore, for any standard parabolic subgroup \(P_{\mathbb{Z}}=M_{\mathbb{Z}}N_{\mathbb{Z}}\) and any pair \((\mu,\lambda)\in X_{\ast}(T)^{+}\times X_{\ast}(T)^{+_{M}}\), the subscheme \(L^{+}M_{\mathbb{Z}}LN_{\mathbb{Z}}x_{\lambda}\cap L^{+}G_{\mathbb{Z}}x_{\mu}\) of the affine Grassmannian \(\mathrm{Gr}_{G,\mathbb{Z}}\) has a cellular paving over \(\mathbb{Z}\)._ **Leitfaden:** Here is an outline of the contents of this article. In §2 and §3 we give our notation and recall the basic definitions related to convolution morphisms. The main idea of the proof of Theorem 1.1 is to prove it by induction on \(r\): one projects from the fiber onto the \((r-1)\)-st term in the twisted product; then one needs to show that the image is paved by locally closed subvarieties, each of which has a \(\mathcal{C}\)-paving, and over which the aforementioned projection morphism is trivial. The strategy of proof is given in more detail in §6.2. The required triviality statements are proved in §4 and §5. The core of the article is found in §6-§8. First, Theorem 1.3 is proved in §6.2, and this proof is then adapted to prove the special case of Theorem 1.1 for \(\mathcal{P}=\mathcal{B}\) and all \(w_{i}\) simple reflections, in §6.3. This is used to deduce the special case of Theorem 1.1 with \(\mathcal{P}=\mathcal{B}\) in §7.2. Finally, the general case of Theorem 1.1 is proved in §7.3, using the previous special cases as stepping-stones. In §8 we quickly deduce Corollary 1.2 from Theorem 1.1. In §9 we give an application to structure constants for parahoric Hecke algebras. In §11 we develop all the needed machinery to extend the above results over \(\mathbb{Z}\). The paper ends with Errata for [1] in §12. **Acknowledgments:** I express my thanks to Thibaud van den Hove, whose questions about [1, Rem. 2.5.4] prompted me to write up these results. I also thank him for helpful comments on an early version of this paper, and for giving me access to an advance copy of the revised version of [22]. ## 2. Notation Generally speaking, we follow the same notation and conventions as [1]. Let \(G\) be a split connected reductive group over a field \(k\) with algebraic closure \(\bar{k}\) and separable closure \(k^{\mathrm{sep}}\). Fix a Borel pair \(G\supset B\supset T\), also split and defined over \(k\).
This gives rise to the based absolute root system \((X^{*}(T)\supset\Phi,X_{*}(T)\supset\Phi^{\vee},\Delta)\), the real vector space \(V=X_{*}(T)\otimes\mathbb{R}\), and the canonical perfect pairing \(\langle\cdot,\cdot\rangle:X^{*}(T)\times X_{*}(T)\to\mathbb{Z}\). The affine roots \(\Phi_{\mathrm{aff}}=\{a=\alpha+n\,|\,\alpha\in\Phi,\ n\in\mathbb{Z}\}\) are affine-linear functionals on \(V\). We denote the origin by \(\mathbf{0}\in V\) and the \(B\)-dominant Weyl chamber \(\mathfrak{C}=\{v\in V\,|\,\langle\alpha,v\rangle>0,\,\forall\alpha\in\Delta\}\) with apex at \(\mathbf{0}\). We denote the set of dominant cocharacters by \(X_{*}(T)^{+}:=X_{*}(T)\cap\mathfrak{C}\). We also fix the base alcove \(\mathbf{a}\subset\mathfrak{C}\) whose closure contains \(\mathbf{0}\). The positive simple affine roots \(\Delta_{\mathrm{aff}}\) are the minimal affine roots \(a=\alpha+n\) taking positive values on \(\mathbf{a}\). We use the convention that \(\lambda\in X_{*}(T)\) acts on \(V\) by translation by \(-\lambda\). The finite Weyl group is the Coxeter group \((W_{0},S)\) generated by the simple reflections \(s_{\alpha}\in S\) on \(V\), for \(\alpha\in\Delta\); the group \(W_{0}\) fixes the origin \(\mathbf{0}\). The extended affine Weyl group \(W=X_{*}(T)\rtimes W_{0}\) acts on \(V\) and hence on the set \(\Phi_{\mathrm{aff}}\) by precomposition. Let \((W_{\mathrm{aff}},S_{\mathrm{aff}})\) denote the Coxeter group generated by \(S_{\mathrm{aff}}\), the simple affine reflections \(s_{a}\) for \(a\in\Delta_{\mathrm{aff}}\). It has a Bruhat order \(\leq\) and a length function \(\ell:W_{\mathrm{aff}}\to\mathbb{Z}_{\geq 0}\). Let \(\Omega\subset W\) be the subgroup stabilizing \(\mathbf{a}\subset V\). The group decomposition \(W=W_{\mathrm{aff}}\rtimes\Omega\) allows us to extend \(\leq\) and \(\ell\) from \(W_{\mathrm{aff}}\) to \(W\), by declaring \(\Omega\) to be the set of length zero elements in \(W\). Fix the field \(F=k(\!(t)\!)\) and ring of integers \(\mathcal{O}=k[\![t]\!]\). The Iwahori-Weyl group \(N_{G}(T)(F)/T(\mathcal{O})\) may be naturally identified with \(W\). We choose once and for all lifts of \(w\in W_{0}\) in \(N_{G}(T)(\mathcal{O})\) and we lift \(\lambda\in X_{*}(T)\) to the element \(t^{\lambda}:=\lambda(t)\in T(F)\). Altering these lifts by any elements in \(T(\mathcal{O})\) does not affect anything in what follows. We define the loop group \(LG\) (resp. positive loop group \(L^{+}G\)) to be the group ind-scheme (resp., group scheme) over \(k\) representing the group functor on \(k\)-algebras \(LG(R)=G(R(\!(t)\!))\) (resp., \(L^{+}G(R)=G(R[\![t]\!])\)). For a facet \(\mathbf{f}\) contained in the closure of \(\mathbf{a}\), we obtain the "standard" parahoric group scheme \(P_{\mathbf{f}}\) (see [1, 1]). We often write \(\mathcal{P}:=L^{+}P_{\mathbf{f}}\), and regard this as a (standard) parahoric group in \(LG\). Note that \(L^{+}G=L^{+}P_{\mathbf{0}}\). The (standard) Iwahori subgroup will be denoted \(\mathcal{B}:=L^{+}P_{\mathbf{a}}\). Let \(W_{\mathbf{f}}=W_{\mathcal{P}}\subset W_{\mathrm{aff}}\) be the subgroup which fixes \(\mathbf{f}\) pointwise; it is a Coxeter group generated by the simple affine reflections which fix \(\mathbf{f}\). The Bruhat order \(\leq\) on \(W\) descends to a Bruhat order \(\leq\) on coset spaces such as \(W_{\mathcal{P}}\backslash W/W_{\mathcal{P}}\) and \(W/W_{\mathcal{P}}\). Let \({}^{\mathbf{f}}W^{\mathbf{f}}\) denote the set of elements \(w\in W\) which are the unique \(\leq\)-maximal elements in their double cosets \(W_{\mathcal{P}}wW_{\mathcal{P}}\).
The partial affine flag variety is by definition the etale sheafification of the presheaf on the category \(\mathrm{Aff}_{k}\) of affine schemes \(\mathrm{Spec}(R)\) over \(k\) given by \(R\mapsto LG(R)/L^{+}P_{\mathbf{f}}(R)\). It is represented by an ind-projective ind-scheme denoted simply by \(\mathrm{Fl}_{\mathcal{P}}=LG/L^{+}P_{\mathbf{f}}\), and it carries a left action by \(\mathcal{P}=L^{+}P_{\mathbf{f}}\). Denote by \(e_{\mathcal{P}}\) its natural base point. It is well-known (see e.g. [1]) that for any two standard parahoric subgroups \(\mathcal{Q}\) and \(\mathcal{P}\), we have a natural bijection on the level of \(k\)-points and \(k^{\mathrm{sep}}\)-points \[\mathcal{Q}(k)\backslash\mathrm{Fl}_{\mathcal{P}}(k)=W_{\mathcal{Q}} \backslash W/W_{\mathcal{P}}=\mathcal{Q}(k^{\mathrm{sep}})\backslash\mathrm{Fl}_{ \mathcal{P}}(k^{\mathrm{sep}}). \tag{2.1}\] The elements of Bruhat-Tits theory used in [11, Prop. 8, Rem. 9] work for split groups without any assumption that the residue field \(k\) is perfect (cf. also Remark 4.2). Alternatively, for split \(G\), (2.1) can be proved directly for any residue field \(k\) (including \(\bar{k}\)), using BN-pair relations. For \(w\in W\), let \(Y_{\mathcal{P}}(w)\) (resp. \(Y_{\mathcal{BP}}(w)\)) denote the \(\mathcal{P}\)-orbit (resp. \(\mathcal{B}\)-orbit) of \(we_{\mathcal{P}}\) in \(\mathrm{Fl}_{\mathcal{P}}\). When \(\mathcal{P}=\mathcal{B}\) we will often omit the subscripts. Define the _Schubert variety_\(X_{\mathcal{P}}(w)\) to be the Zariski closure of \(Y_{\mathcal{P}}(w)\subset\mathrm{Fl}_{\mathcal{P}}\), endowed with reduced structure. Similarly, define \(X(w)=X_{\mathcal{B}}(w)\) and \(X_{\mathcal{BP}}(w)\). In the part of this paper where we work over a field \(k\), the schemes which arise are finite-type separated schemes over \(k\) (not necessarily irreducible). We will always give them reduced structure, and we will call them "varieties". The morphisms of varieties we consider will always be defined over \(k\), and will usually be described on the level of points in an unspecified algebraic closure of \(k\). ## 3. Review of convolution morphisms For \(w\in W\), define \(\overline{\mathcal{P}w\mathcal{P}}=\prod_{v\leq w}\mathcal{P}v\mathcal{P}\), where \(v\) ranges over elements \(v\in W_{\mathcal{P}}\backslash W/W_{\mathcal{P}}\). For any \(r\)-tuple \(w_{\bullet}=(w_{1},\ldots,w_{r})\in W^{r}\), we define \(X_{\mathcal{P}}(w_{\bullet})\) to be the quotient of \(\mathcal{P}^{r}=(L^{+}P_{\mathbf{f}})^{r}\) acting on \[\overline{\mathcal{P}w_{1}\mathcal{P}}\times\overline{\mathcal{P}w_{2} \mathcal{P}}\times\ \cdots\ \times\overline{\mathcal{P}w_{r}\mathcal{P}}\] by the right action \[(g_{1},g_{2},\ldots,g_{r})\cdot(p_{1},p_{2},\ldots,p_{r}):=(g_{1}p_{1},p_{1}^{ -1}g_{2}p_{2},\ldots,p_{r-1}^{-1}g_{r}p_{r}). \tag{3.1}\] We define \(Y_{\mathcal{P}}(w_{\bullet})\) similarly, with each \(\overline{\mathcal{P}w_{i}\mathcal{P}}\) replaced by \(\mathcal{P}w_{i}\mathcal{P}\). The quotients should be understood as etale sheafifications of presheaf quotients on the category \(\mathrm{Aff}_{k}\). It is well-known that \(X_{\mathcal{P}}(w_{\bullet})\)\((\mathrm{resp.},Y_{\mathcal{P}}(w_{\bullet}))\) is represented by an irreducible projective (resp., quasi-projective) \(k\)-variety. 
We regard the above objects as "twisted products": \(X_{\mathcal{P}}(w_{\bullet})=X_{\mathcal{P}}(w_{1})\widetilde{\times}X_{\mathcal{P}}(w_{2})\widetilde{\times}\cdots\widetilde{\times}X_{\mathcal{P}}(w_{r})\) (resp. \(Y_{\mathcal{P}}(w_{\bullet})=Y_{\mathcal{P}}(w_{1})\widetilde{\times}Y_{\mathcal{P}}(w_{2})\widetilde{\times}\cdots\widetilde{\times}Y_{\mathcal{P}}(w_{r})\)), consisting of tuples \((g_{1}\mathcal{P},g_{2}\mathcal{P},\ldots,g_{r}\mathcal{P})\) such that \(g_{i-1}^{-1}g_{i}\in\overline{\mathcal{P}w_{i}\mathcal{P}}\) (resp. \(\mathcal{P}w_{i}\mathcal{P}\)) for all \(1\leq i\leq r\) (here \(g_{0}=1\) by convention). Recall that the Demazure product \(W^{r}\to W\), \((w_{1},w_{2},\ldots,w_{r})\mapsto w_{1}*w_{2}*\cdots*w_{r}\) is an associative operation. It induces an associative product \((\,{}^{\mathbf{f}}W^{\mathbf{f}})^{r}\to\,{}^{\mathbf{f}}W^{\mathbf{f}}\), see e.g. [1, §4]. Given any \(w\in W\), let \({}^{\mathbf{f}}w^{\mathbf{f}}\) denote the unique \(\leq\)-maximal element in \(W_{\mathcal{P}}wW_{\mathcal{P}}\). Given \(w_{\bullet}=(w_{1},w_{2},\ldots,w_{r})\in W^{r}\), define \[w_{*}:={}^{\mathbf{f}}w_{1}^{\mathbf{f}}*{}^{\mathbf{f}}w_{2}^{\mathbf{f}}*\cdots*{}^{\mathbf{f}}w_{r}^{\mathbf{f}}.\] Then the multiplication map \((LG)^{r}\to LG\) descends to the quotient and defines the _convolution morphisms_ \[m_{w_{\bullet}} =m_{w_{\bullet},\mathcal{P}}\ :\ X_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{*})\] \[p_{w_{\bullet}} =p_{w_{\bullet},\mathcal{P}}\ :\ Y_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{*}),\] see [1, §4]. We might describe those of the second kind as _uncompactified convolution morphisms_.
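To make the Demazure product recalled above concrete, here is a small self-contained sketch of ours (not from the paper) that computes it in a finite symmetric group, a toy stand-in for the Iwahori-Weyl group \(W\), via the standard folding rule \(w*s=ws\) if \(\ell(ws)>\ell(w)\) and \(w*s=w\) otherwise.

```python
def length(w):
    """Coxeter length of a permutation w (one-line notation) = number of inversions."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def times_s(w, i):
    """Right-multiply w by the simple reflection s_i (swap positions i, i+1; 0-indexed)."""
    w = list(w)
    w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def demazure_product(word, n):
    """Demazure product s_{i_1} * ... * s_{i_r} in S_n, folding one letter at a time."""
    w = tuple(range(n))                 # identity permutation
    for i in word:
        ws = times_s(w, i)
        if length(ws) > length(w):      # w * s_i = w s_i when the length goes up,
            w = ws                      # and w otherwise
    return w

# s * s = s, whereas the ordinary product would give the identity:
assert demazure_product([0, 0], 2) == times_s((0, 1), 0)
```

In particular, the word need not be reduced, and the result is always the \(\leq\)-maximal element reachable by subwords, in accordance with the associativity used above.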
## 4. Consequences of a factorization of the pro-unipotent Iwahori subgroup

Recall that \(\mathbf{f}\) is a facet in the closure of \(\mathbf{a}\) and the facets \(\mathbf{f}\) and \(\mathbf{a}\) give rise to the Iwahori and parahoric subgroups \(\mathcal{B}=L^{+}P_{\mathbf{a}}\) and \(\mathcal{P}=L^{+}P_{\mathbf{f}}\). The group \(\mathcal{U}\) is the pro-unipotent radical of the Iwahori subgroup \(\mathcal{B}\); it is the preimage of the unipotent radical \(U\subset B\) under the natural homomorphism \(\mathcal{B}\to B\) induced by \(t\mapsto 0\); see [1, §3.7]. Recall also that \(\overline{\mathcal{U}}_{\mathcal{P}}=L^{--}P_{\mathbf{f}}\) is the ind-affine group ind-scheme defined in [1, Def. 3.6.1], called the negative parahoric loop group. The definition is given over \(\mathbb{Z}\) in (11.1). For an affine root \(a\), the notation \(a\stackrel{{\mathbf{f}}}{{>}}0\) means that \(a\) takes positive values on the facet \(\mathbf{f}\). When \(\mathbf{f}=\mathbf{a}\), we usually simply write \(a>0\). The notations \(a\stackrel{{\mathbf{f}}}{{\geq}}0\), \(a\stackrel{{\mathbf{f}}}{{<}}0\), etc., have the obvious meaning.

**Proposition 4.1**.: _Let \(\mathcal{P}\supset\mathcal{B}\) be any fixed parahoric subgroup as above, and let \(v\in W\) be an arbitrary element. Let \(\mathbf{f}\) be the facet in the closure of the base alcove \(\mathbf{a}\) which corresponds to \(\mathcal{P}\)._

(a) _We have a factorization of group functors_ \[\mathcal{U}=(\mathcal{U}\cap\,^{v}\overline{\mathcal{U}}_{\mathcal{P}})\,\cdot\,(\mathcal{U}\cap\,^{v}\mathcal{P}).\]

(b) _There is an isomorphism of schemes_ \(\mathcal{U}\cap\,^{v}\overline{\mathcal{U}}_{\mathcal{P}}\cong\prod_{a}U_{a}\), _where_ \(U_{a}\) _ranges over the affine root groups corresponding to affine roots with_ \(a>0\) _and_ \(v^{-1}a\overset{\mathbf{f}}{<}0\)_, and the product is taken in any order._

Proof.: This is [1, Prop. 3.7.4]. The proof over \(\mathbb{Z}\) given in Proposition 11.6 works here as well.

**Remark 4.2**.: Usually the hypothesis that \(k\) is perfect is implicit in Bruhat-Tits theory and the theory of parahoric subgroups: all residue fields of the complete discretely-valued fields \(F\) one works over should be assumed to be perfect, so that Steinberg's theorem applies to show that every reductive group over the completion \(\breve{F}\) of a maximal unramified extension of \(F\) is quasi-split. (This assumption on residue fields is missing from [10], and should be added. I am grateful to Gopal Prasad for pointing out this oversight.) Since we are assuming our group \(G\) is already split over \(k\), it is automatically quasi-split over \(\breve{F}=k^{\mathrm{sep}}(\!(t)\!)\). Therefore we do not need to assume \(k\) is perfect when invoking Bruhat-Tits theory for \(G\). Note that the hypothesis that \(k\) is perfect appears to be used in the proof of [1, Prop. 3.7.4], since that proof relies on [1, Rem. 3.1.1]. However, the latter actually holds for all \(k\): we see the key point that \(B\) is _\(k\)-triangularizable_ in the sense of [Spr, §14.1] by invoking [Spr, 16.1.1, 14.1.2] applied to \(B\).

**Lemma 4.3**.: _Assume that \(v\) is right-\(\mathbf{f}\)-minimal, i.e., it is the unique minimal element in its coset \(vW_{\langle\mathbf{f}\rangle}\), where \(W_{\langle\mathbf{f}\rangle}\) is the Coxeter subgroup of \(W_{\mathrm{aff}}\) which fixes \(\mathbf{f}\) pointwise. Then for any positive affine root \(a>0\), we have_ \[v^{-1}a\overset{\mathbf{f}}{<}0\,\Leftrightarrow\,v^{-1}a<0.\]

Proof.: The implication (\(\Rightarrow\)) uses only that \(\mathbf{f}\) belongs to the closure of \(\mathbf{a}\), and holds for any \(v\in W\). Next we prove (\(\Leftarrow\)): Assuming \(v^{-1}a<0\), we wish to prove \(v^{-1}a\overset{\mathbf{f}}{<}0\). Suppose on the contrary that \(v^{-1}a\overset{\mathbf{f}}{\geq}0\). Combined with \(v^{-1}a<0\), we deduce \(v^{-1}a\overset{\mathbf{f}}{=}0\), that is, \(v^{-1}s_{a}v\in W_{\langle\mathbf{f}\rangle}\). Since \(v\) is right-\(\mathbf{f}\)-minimal and \(s_{a}v\in vW_{\langle\mathbf{f}\rangle}\), we deduce that \(s_{a}v>v\). On the other hand, since \(a\) is positive on \(\mathbf{a}\) and \(a\) is negative on \(v\mathbf{a}\), we see that \(v\mathbf{a}\) and \(\mathbf{a}\) are on opposite sides of the affine root hyperplane \(H_{a}\), which means \(s_{a}v<v\), a contradiction.

**Proposition 4.4**.: _If \(v\in W\) is right-\(\mathbf{f}\)-minimal, then we have isomorphisms_ \[Y_{\mathcal{B}\mathcal{P}}(v)\ \cong\ \mathcal{U}\cap\,^{v}\overline{\mathcal{U}}_{\mathcal{P}}\ \cong\ \mathcal{U}\cap\,^{v}\overline{\mathcal{U}}\ \cong\ Y_{\mathcal{B}\mathcal{B}}(v).\]

Proof.: Since \(v\) normalizes \(T(\mathcal{O})\), we have \(Y_{\mathcal{B}\mathcal{P}}(v)=\mathcal{U}v\mathcal{P}/\mathcal{P}\), which identifies with \(\mathcal{U}\cap\,^{v}\overline{\mathcal{U}}_{\mathcal{P}}\) by Proposition 4.1(a) and by the fact that \(\overline{\mathcal{U}}_{\mathcal{P}}\to\mathrm{Fl}_{\mathcal{P}}\) is an (open) immersion, see e.g. [1, Thm. 2.3.1]. This is identified with \(\mathcal{U}\cap\,^{v}\overline{\mathcal{U}}\) by Proposition 4.1(b) and Lemma 4.3.
**Remark 4.5**.: The proof of Proposition 4.4 given here ultimately relies on the negative parahoric loop group introduced in [1]. Another proof, which is more general and which avoids this reliance, is given in [1, Lem. 3.3]. We give the proof above because it is an almost immediate consequence of Proposition 4.1, which we need anyway to establish Proposition 5.1 below.

## 5. Stratified triviality of convolution morphisms

**Proposition 5.1**.: _The morphism \(m_{w_{\bullet}}:X_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{*})\) is trivial over every \(\mathcal{B}\)-orbit in its image._

Proof.: Writing \(m:=m_{w_{\bullet}}\), we prove the triviality of the map \(m\) over \(\mathcal{B}\)-orbits contained in its image. Assume \(Y_{\mathcal{B}\mathcal{P}}(v)\subset X_{\mathcal{P}}(w_{*})\). By Proposition 4.4, an element \(\mathcal{P}^{\prime}\in Y_{\mathcal{B}\mathcal{P}}(v)\) can be written in the form

\[\mathcal{P}^{\prime}=uv\mathcal{P}\]

for a unique element \(u\in\mathcal{U}\cap\,^{v}\overline{\mathcal{U}}_{\mathcal{P}}\). We can then define an isomorphism

\[m^{-1}(Y_{\mathcal{B}\mathcal{P}}(v))\ \overset{\sim}{\longrightarrow}\ m^{-1}(v\mathcal{P})\times Y_{\mathcal{B}\mathcal{P}}(v)\]

by sending \((\mathcal{P}_{1},\dots,\mathcal{P}_{r-1},uv\mathcal{P})\) to \((u^{-1}\mathcal{P}_{1},\cdots,u^{-1}\mathcal{P}_{r-1},v\mathcal{P})\times uv\mathcal{P}\). Obviously the first factor belongs to \(m^{-1}(v\mathcal{P})\).

## 6. Paving results for the case \(\mathcal{P}=\mathcal{B}\)

### BN-pair relations and lemmas on retractions

The following statements can be interpreted at the level of \(k\)- or \(\bar{k}\)-points, but we will suppress this from the notation. Recall that given \(\mathcal{B}_{1}=g_{1}\mathcal{B}\), \(\mathcal{B}_{2}=g_{2}\mathcal{B}\) and \(w\in W\), we say the pair \((\mathcal{B}_{1},\mathcal{B}_{2})\) is in relative position \(w\) (and we write \(\mathcal{B}_{1}\overset{w}{\longrightarrow}\mathcal{B}_{2}\)) if and only if \(g_{1}^{-1}g_{2}\in\mathcal{B}w\mathcal{B}\). We write

\[\mathcal{B}_{1}\overset{\leq w}{\longrightarrow}\mathcal{B}_{2}\quad\text{if and only if}\quad\mathcal{B}_{1}\overset{v}{\longrightarrow}\mathcal{B}_{2}\ \text{for some}\ v\leq w.\]

We have

\[Y_{\mathcal{B}}(w)=\{\mathcal{B}^{\prime}\ |\ \mathcal{B}\overset{w}{\longrightarrow}\mathcal{B}^{\prime}\}\qquad\text{and}\qquad X_{\mathcal{B}}(w)=\{\mathcal{B}^{\prime}\ |\ \mathcal{B}\overset{\leq w}{\longrightarrow}\mathcal{B}^{\prime}\}.\]

The BN-pair relations hold for \(v\in W\) and \(s\in S_{\mathrm{aff}}\):

\[\mathcal{B}v\mathcal{B}\cdot\mathcal{B}s\mathcal{B}=\begin{cases}\mathcal{B}vs\mathcal{B},&\text{if }v<vs\\ \mathcal{B}v\mathcal{B}\cup\mathcal{B}vs\mathcal{B},&\text{if }vs<v.\end{cases}\tag{6.1}\]

Note that for every \(v\in W\) and \(s\in S_{\mathrm{aff}}\), there is an isomorphism \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{\leq s}{\longrightarrow}\mathcal{B}^{\prime}\}\cong\mathbb{P}^{1}\) and an inclusion \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{\leq s}{\longrightarrow}\mathcal{B}^{\prime}\}\subset Y_{\mathcal{B}}(v)\cup Y_{\mathcal{B}}(vs)\).
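For comparison with the retraction lemmas below, recall the Hecke-algebra shadow of the relations (6.1) (a standard fact, recorded here only as a guide; it is not needed in the proofs): in the Iwahori-Hecke algebra over \(k=\mathbb{F}_{q}\), with basis \(T_{v}=1_{\mathcal{B}v\mathcal{B}}\),

\[T_{v}\,T_{s}=\begin{cases}T_{vs},&\text{if }v<vs,\\ q\,T_{vs}+(q-1)\,T_{v},&\text{if }vs<v.\end{cases}\]

The coefficients \(q\) and \(q-1\) are the \(\mathbb{F}_{q}\)-point counts of the cells \(\mathbb{A}^{1}\) and \(\mathbb{A}^{1}-\mathbb{A}^{0}\) which appear in Lemmas 6.1 and 6.2.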
**Lemma 6.1**.: _Suppose \(s\in S_{\mathrm{aff}}\) and \(v\in W\)._

* _If_ \(v<vs\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{\leq s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(v)=\{v\mathcal{B}\}\cong\mathbb{A}^{0}\)_._
* _If_ \(v<vs\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{\leq s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(vs)\cong\mathbb{A}^{1}\)_._
* _If_ \(vs<v\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{\leq s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(v)\cong\mathbb{A}^{1}\)_._
* _If_ \(vs<v\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{\leq s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(vs)=\{vs\mathcal{B}\}\cong\mathbb{A}^{0}\)_._

Proof.: This is obvious from properties of the retraction map from the building associated to \(G\) onto the apartment corresponding to \(T\), with respect to an alcove in that apartment. A reference for how such retractions "work" is [HKM, §6]. 

In a similar way, we get an analogous lemma.

**Lemma 6.2**.: _Suppose \(s\in S_{\mathrm{aff}}\) and \(v\in W\)._

* _If_ \(v<vs\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(v)=\emptyset\)_._
* _If_ \(v<vs\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(vs)\cong\mathbb{A}^{1}\)_._
* _If_ \(vs<v\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(v)\cong\mathbb{A}^{1}-\mathbb{A}^{0}\)_._
* _If_ \(vs<v\)_, then_ \(\{\mathcal{B}^{\prime}\ |\ v\mathcal{B}\overset{s}{\longrightarrow}\mathcal{B}^{\prime}\}\cap Y_{\mathcal{B}}(vs)=\{vs\mathcal{B}\}\cong\mathbb{A}^{0}\)_._

### Proof of Theorem 1.3

Since \(\mathcal{B}\) is understood, we will write \(Y(w_{\bullet})\) for \(Y_{\mathcal{B}}(w_{\bullet})\), and \(X(w_{\bullet})\) for \(X_{\mathcal{B}}(w_{\bullet})\) in what follows. Let \(s_{\bullet}=(s_{1},\ldots,s_{r})\in S_{\mathrm{aff}}^{r}\). There is no requirement here that \(s_{1}\cdots s_{r}\) be reduced. Recall the subvariety \(X(s_{\bullet})\subset\mathrm{Fl}_{\mathcal{B}}^{r}\) which consists of the \(r\)-tuples \((\mathcal{B}_{1},\ldots,\mathcal{B}_{r})\) such that \(\mathcal{B}_{i-1}\overset{\leq s_{i}}{\longrightarrow}\mathcal{B}_{i}\) for all \(i=1,\ldots,r\) (with the convention that \(\mathcal{B}_{0}=\mathcal{B}\)). We are going to prove the paving by affine spaces of the fibers of the morphism

\[m\ :\ X(s_{\bullet})\longrightarrow X(s_{*})\subset\mathrm{Fl}_{\mathcal{B}},\qquad(\mathcal{B}_{1},\ldots,\mathcal{B}_{r})\longmapsto\mathcal{B}_{r}.\]

We proceed by induction on \(r\). The case \(r=1\) is trivial, so we assume \(r>1\) and that the theorem holds for \(r-1\). Let \(s_{*}^{\prime}:=s_{1}*\cdots*s_{r-1}\). Let \(s_{\bullet}^{\prime}=(s_{1},\ldots,s_{r-1})\). By our induction hypothesis, the theorem holds for

\[m^{\prime}:X(s_{\bullet}^{\prime})\longrightarrow X(s_{*}^{\prime}),\qquad(\mathcal{B}_{1},\ldots,\mathcal{B}_{r-1})\longmapsto\mathcal{B}_{r-1}.\]

Now suppose \(v\leq s_{*}\), so that \(v\mathcal{B}\in\mathrm{Im}(m)\). For an element \((\mathcal{B}_{1},\ldots,\mathcal{B}_{r-1},v\mathcal{B})\in m^{-1}(v\mathcal{B})\), we have

\[\mathcal{B}\overset{v}{\longrightarrow}v\mathcal{B}\overset{\leq s_{r}}{\longrightarrow}\mathcal{B}_{r-1}.\]

It follows from the BN-pair relations that \(\mathcal{B}_{r-1}\in Y(v)\cup Y(vs_{r})\).
We consider the map

\[\xi:m^{-1}(v\mathcal{B})\longrightarrow Y(v)\cup Y(vs_{r})\]
\[(\mathcal{B}_{1},\ldots,\mathcal{B}_{r-1},v\mathcal{B})\longmapsto\mathcal{B}_{r-1}.\]

We will examine the subsets \(\operatorname{Im}(\xi)\cap Y(v)\) and \(\operatorname{Im}(\xi)\cap Y(vs_{r})\). We will show that

(i) these subsets are affine spaces (either empty, a point, or \(\mathbb{A}^{1}\)); one of them, denoted \(\mathbb{A}_{1}\), is closed in \(\operatorname{Im}(\xi)\), and the other, denoted \(\mathbb{A}_{2}\), is nonempty, open, and dense in \(\operatorname{Im}(\xi)\);

(ii) if \(\mathbb{A}_{i}\neq\emptyset\), then \(\mathbb{A}_{i}\) belongs to \(\operatorname{Im}(m^{\prime})\); furthermore \(\xi^{-1}(\mathbb{A}_{i})\cong m^{\prime-1}(\mathbb{A}_{i})\) under the obvious identification, and \(\xi:\xi^{-1}(\mathbb{A}_{i})\to\mathbb{A}_{i}\) corresponds to the morphism \(m^{\prime}:m^{\prime-1}(\mathbb{A}_{i})\to\mathbb{A}_{i}\).

These facts are enough to prove Theorem 1.3. Indeed, applying \(\xi^{-1}\) to the decomposition

\[\operatorname{Im}(\xi)=\mathbb{A}_{1}\cup\mathbb{A}_{2}\]

and using (ii) gives us a decomposition

\[m^{-1}(v\mathcal{B})=\xi^{-1}(\mathbb{A}_{1})\cup\xi^{-1}(\mathbb{A}_{2})=m^{\prime-1}(\mathbb{A}_{1})\cup m^{\prime-1}(\mathbb{A}_{2})\]

where the first is closed and the second is nonempty and open. By the induction hypothesis, the fibers of \(m^{\prime}\) are paved by affine spaces. Since \(\mathbb{A}_{i}\) is contained in a \(\mathcal{B}\)-orbit, we see \(m^{\prime}\) is trivial over each \(\mathbb{A}_{i}\) by Proposition 5.1, and hence each \(m^{\prime-1}(\mathbb{A}_{i})\) is paved by affine spaces. Thus \(m^{-1}(v\mathcal{B})\) is paved by affine spaces.

To verify the properties (i,ii), we need to consider various cases. We start with two cases which arise from the following standard lemma about the Bruhat order (see e.g. [12, Prop. 5.9]).

**Lemma 6.3**.: _Let \((W,S)\) be a Coxeter group and \(x,y\in W\) and \(s\in S\). Then \(x\leq y\) implies \(x\leq ys\) or \(xs\leq ys\) (or both)._

Recall \(v\leq s_{*}\) by assumption. The two cases we need to consider are:

Case I: \(s^{\prime}_{*}<s^{\prime}_{*}s_{r}\), so that \(s_{*}=s^{\prime}_{*}s_{r}\). Thus by Lemma 6.3, \(v\leq s^{\prime}_{*}\) or \(vs_{r}\leq s^{\prime}_{*}\).

Case II: \(s^{\prime}_{*}s_{r}<s^{\prime}_{*}\), so that \(s_{*}=s^{\prime}_{*}\). Thus \(v\leq s^{\prime}_{*}\).

We will break each of these into subcases, depending on whether \(v<vs_{r}\) or \(vs_{r}<v\). We then consider further subcases depending on which of \(v\) or \(vs_{r}\) precedes \(s^{\prime}_{*}\) in the Bruhat order.

Case I.1: \(v<vs_{r}\). So \(v\leq s^{\prime}_{*}\) is automatic. There are two subcases: I.1a: \(v<vs_{r}\leq s^{\prime}_{*}\); I.1b: \(v\leq s^{\prime}_{*}\) but \(vs_{r}\nleq s^{\prime}_{*}\).

Case I.2: \(vs_{r}<v\). So \(vs_{r}\leq s^{\prime}_{*}\) is automatic. There are two subcases: I.2a: \(vs_{r}<v\leq s^{\prime}_{*}\); I.2b: \(vs_{r}\leq s^{\prime}_{*}\) but \(v\nleq s^{\prime}_{*}\).

Case II.1: \(v<vs_{r}\). As \(v\leq s^{\prime}_{*}\) is automatic, there are two subcases: II.1a: \(v<vs_{r}\leq s^{\prime}_{*}\); II.1b: \(v\leq s^{\prime}_{*}\) but \(vs_{r}\nleq s^{\prime}_{*}\).

Case II.2: \(vs_{r}<v\). Here \(vs_{r}\leq s^{\prime}_{*}\) and \(v\leq s^{\prime}_{*}\), so there are no further subcases.

Consider any element \((\mathcal{B}_{1},\dots,\mathcal{B}_{r-1},v\mathcal{B})\) in \(m^{-1}(v\mathcal{B})\).
As noted already above, we have \(\mathcal{B}\stackrel{{v}}{{\longrightarrow}}v\mathcal{B}\stackrel{{\leq s_{r}}}{{\longrightarrow}}\mathcal{B}_{r-1}\). Then Lemma 6.1 tells us the shape of \(\operatorname{Im}(\xi)\cap Y(v)\) and \(\operatorname{Im}(\xi)\cap Y(vs_{r})\) in all the cases enumerated above. We record the results in the following table.

\begin{tabular}{|c|c|c|} \hline Case & \(\operatorname{Im}(\xi)\cap Y(v)\) & \(\operatorname{Im}(\xi)\cap Y(vs_{r})\) \\ \hline \hline I.1a & \(\mathbb{A}^{0}\) & \(\mathbb{A}^{1}\) \\ \hline I.1b & \(\mathbb{A}^{0}\) & \(\emptyset\) \\ \hline I.2a & \(\mathbb{A}^{1}\) & \(\mathbb{A}^{0}\) \\ \hline I.2b & \(\emptyset\) & \(\mathbb{A}^{0}\) \\ \hline II.1a & \(\mathbb{A}^{0}\) & \(\mathbb{A}^{1}\) \\ \hline II.1b & \(\mathbb{A}^{0}\) & \(\emptyset\) \\ \hline II.2 & \(\mathbb{A}^{1}\) & \(\mathbb{A}^{0}\) \\ \hline \end{tabular}

In each case it is clear which piece should be labelled \(\mathbb{A}_{1}\) or \(\mathbb{A}_{2}\). This proves the main part of (i,ii); the other assertions are clear. This completes the proof of Theorem 1.3. 

**Remark 6.4**.: Each \(\mathbb{A}^{1}\) appearing in the table may be identified with a suitable affine root group \(U_{\alpha+n}\), the \(k\)-group with \(k\)-points

\[U_{\alpha+n}(k)=\{u_{\alpha}(xt^{n})\,|\,x\in k\},\]

where \(u_{\alpha}:\mathbb{G}_{a}\to G\) is the root homomorphism corresponding to the root \(\alpha\). For example, consider Case I.1a. Then \(\operatorname{Im}(\xi)\cap Y(vs_{r})\) is \(\{\mathcal{B}_{r-1}\,|\,v\mathcal{B}\overset{s_{r}}{\longrightarrow}\mathcal{B}_{r-1}\}\). Each such \(\mathcal{B}_{r-1}\) can be expressed as \(\mathcal{B}_{r-1}=vus_{r}\mathcal{B}\) for a unique \(u\in\mathcal{U}\cap\,^{s_{r}}\overline{\mathcal{U}}_{\mathcal{B}}\). Now use Proposition 4.1(b).

### Proof of Theorem 1.1 in a special case

We will now prove Theorem 1.1 in the case where \(\mathcal{P}=\mathcal{B}\) and \(w_{i}=s_{i}\) is a simple reflection for all \(1\leq i\leq r\). The argument is by induction on \(r\), as in the previous subsection. We consider the analogues \(p\) and \(p^{\prime}\) of the morphisms \(m\) and \(m^{\prime}\)

\[p:Y(s_{\bullet})\to X(s_{*}),\qquad(\mathcal{B}_{1},\dots,\mathcal{B}_{r-1},\mathcal{B}_{r})\mapsto\mathcal{B}_{r}\]
\[p^{\prime}:Y(s^{\prime}_{\bullet})\to X(s^{\prime}_{*}),\qquad(\mathcal{B}_{1},\dots,\mathcal{B}_{r-2},\mathcal{B}_{r-1})\mapsto\mathcal{B}_{r-1}\]

and for \(v\mathcal{B}\) in the image of \(p\), we consider the map

\[\xi^{\circ}:p^{-1}(v\mathcal{B})\longrightarrow Y(v)\cup Y(vs_{r})\]
\[(\mathcal{B}_{1},\dots,\mathcal{B}_{r-1},v\mathcal{B})\longmapsto\mathcal{B}_{r-1}.\]

The local triviality of \(p\) over \(\mathcal{B}\)-orbits in its image still holds, and similarly for \(p^{\prime}\) (see the proof of Proposition 5.1), and it suffices to establish the analogues of (i, ii) above. We consider the same cases as above, and we list the possibilities for \(\operatorname{Im}(\xi^{\circ})\cap Y(v)\) and \(\operatorname{Im}(\xi^{\circ})\cap Y(vs_{r})\) in the table below, determined in each case with the help of Lemma 6.2.
\begin{tabular}{|c|c|c|} \hline Case & \(\operatorname{Im}(\xi^{\circ})\cap Y(v)\) & \(\operatorname{Im}(\xi^{\circ})\cap Y(vs_{r})\) \\ \hline \hline I.1a & \(\emptyset\) & \(\mathbb{A}^{1}\) or \(\emptyset\) \\ \hline I.1b & \(\emptyset\) & \(\emptyset\) \\ \hline I.2a & \(\mathbb{A}^{1}-\mathbb{A}^{0}\) or \(\emptyset\) & \(\mathbb{A}^{0}\) or \(\emptyset\) \\ \hline I.2b & \(\emptyset\) & \(\mathbb{A}^{0}\) or \(\emptyset\) \\ \hline II.1a & \(\emptyset\) & \(\mathbb{A}^{1}\) or \(\emptyset\) \\ \hline II.1b & \(\emptyset\) & \(\emptyset\) \\ \hline II.2 & \(\mathbb{A}^{1}-\mathbb{A}^{0}\) or \(\emptyset\) & \(\mathbb{A}^{0}\) or \(\emptyset\) \\ \hline \end{tabular}

Let us explain the meaning of entries such as "\(\mathbb{A}^{1}-\mathbb{A}^{0}\) or \(\emptyset\)", for example the entry in case I.2a for \(\operatorname{Im}(\xi^{\circ})\cap Y(v)\). Note that \(v\leq s^{\prime}_{*}\) implies that \(Y(v)\subset\operatorname{Im}(m^{\prime})\), but \(Y(v)\subset\operatorname{Im}(p^{\prime})\) is not automatic. However, since \(p^{\prime}\) is \(\mathcal{B}\)-equivariant, we either have \(Y(v)\cap\operatorname{Im}(p^{\prime})=\emptyset\), or \(Y(v)\subset\operatorname{Im}(p^{\prime})\). If \(Y(v)\cap\operatorname{Im}(p^{\prime})=\emptyset\), the table entry is \(\emptyset\). If \(Y(v)\subset\operatorname{Im}(p^{\prime})\), the intersection \(\operatorname{Im}(\xi^{\circ})\cap Y(v)\) is precisely the part of \(Y(v)\) which is exactly of relative position \(s_{r}\) from \(v\mathcal{B}\), and this identifies with \(\mathbb{A}^{1}-\mathbb{A}^{0}\) in the case where \(vs_{r}<v\).

The analogues of (i,ii) above hold, except that here both \(\mathbb{A}_{1}\) and \(\mathbb{A}_{2}\) can be empty, and when nonempty the larger subset can be either \(\mathbb{A}^{0}\), \(\mathbb{A}^{1}-\mathbb{A}^{0}\), or \(\mathbb{A}^{1}\). The morphism \(p^{\prime}\) is trivial over every \(\mathcal{B}\)-orbit in its image (comp. Proposition 5.1), and by induction the nonempty fibers of \(p^{\prime}\) are paved by finite products of copies of \(\mathbb{A}^{1}\) and \(\mathbb{A}^{1}-\mathbb{A}^{0}\). Therefore the fibers of \(p\) also have the desired property. This proves Theorem 1.1 in the special case where \(\mathcal{P}=\mathcal{B}\) and each \(w_{i}\) is a simple reflection \(s_{i}\).

**Remark 6.5**.: As in Remark 6.4, each \(\mathbb{A}^{1}\) in the table may be identified with an affine root group \(U_{\alpha+n}\), and each \(\mathbb{A}^{1}-\mathbb{A}^{0}\) may be identified with a suitable variety of non-identity elements \(U_{\alpha+n}^{*}\). For example, consider Case I.2a. Then \(\operatorname{Im}(\xi^{\circ})\cap Y(v)\) is \(\{\mathcal{B}_{r-1}\,|\,v\mathcal{B}\overset{s_{r}}{\longrightarrow}\mathcal{B}_{r-1}\}\cap Y(v)\). We may write such \(\mathcal{B}_{r-1}\) as

\[\mathcal{B}_{r-1}=vus_{r}\mathcal{B}\]

for a unique \(u\in\mathcal{U}\cap\,^{s_{r}}\overline{\mathcal{U}}_{\mathcal{B}}\) such that \(u\neq e\). Now use Proposition 4.1(b).

**Remark 6.6**.: In the cases I.2a and II.2, the \(\mathbb{A}^{0}\) piece is in the closure of the \(\mathbb{A}^{1}-\mathbb{A}^{0}\) piece, and it is tempting to consider the union of these as \(\mathbb{A}^{1}\).
Indeed, if one ignores the possibility of \(\emptyset\) in cases I.2a and II.2, the table seems to show that in every case \(\operatorname{Im}(\xi^{\circ})\) is an affine space (\(\emptyset\), \(\mathbb{A}^{0}\), or \(\mathbb{A}^{1}\)), and one could ask whether the argument does not in fact prove (by induction again) that every fiber of \(p\) is paved by affine spaces. However, one cannot ignore the empty set, and in fact in Case II.2 it is possible to have \(\operatorname{Im}(\xi^{\circ})\cap Y(v)=\mathbb{A}^{1}-\mathbb{A}^{0}\), while \(\operatorname{Im}(\xi^{\circ})\cap Y(vs_{r})=\emptyset\). Letting \(s\in S_{\text{aff}}\), this happens for \(s_{\bullet}=(s_{1},s_{2})=(s,s)\) and \(v=s_{2}=s\). This situation is reflected by the quadratic relation in the Iwahori-Hecke algebra \(T_{s}*T_{s}=(q-1)T_{s}+qT_{1}\). In addition, even in a special situation where \(\operatorname{Im}(\xi^{\circ})\) is always an affine space, the affine space paving would remain elusive, as it is not clear that \(p^{\prime}\) would be trivial over all of \(\operatorname{Im}(\xi^{\circ})\) whenever it is not contained in a single \(\mathcal{B}\)-orbit.

**Remark 6.7**.: The above remark "explains" why we cannot hope to improve Theorem 1.1 to assert that all fibers of \(Y_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{*})\) are paved by _affine spaces_. For a concrete example related to the affine Grassmannian \(\operatorname{Gr}_{G}=LG/L^{+}P_{\mathbf{0}}\) over a finite field \(k=\mathbb{F}_{q}\), take \(G=\operatorname{SO}(5)\), and let

\[\mu_{1}=\mu_{2}=\mu_{3}=\alpha_{1}^{\vee}+\alpha_{2}^{\vee}=(1,1),\]

where \(\alpha_{i}^{\vee}\) are the two simple coroots of \(G\). Here we use notation following the conventions of [Bou]. In [KLM, §8.5] it is shown that the Hecke algebra structure constant \(c_{\mu_{\bullet}}^{0}(q)\) (the coefficient of the unit element in the product \(1_{K\mu_{1}K}*1_{K\mu_{2}K}*1_{K\mu_{3}K}\) for \(K=L^{+}P_{\mathbf{0}}(\mathbb{F}_{q})\)) satisfies \(c_{\mu_{\bullet}}^{0}(q)=q^{5}-q\). This shows that the fiber over the base point \(e_{0}\) of \(Y_{L^{+}P_{\mathbf{0}}}(\mu_{\bullet})\to X_{L^{+}P_{\mathbf{0}}}(|\mu_{\bullet}|)\) cannot be paved by affine spaces over \(\mathbb{F}_{q}\).

## 7. Proof of Theorem 1.1

### Schubert cells in \(\operatorname{Fl}_{\mathcal{B}}\) as convolution spaces

If \(\tau s_{1}\cdots s_{r}=w\) is a reduced expression, we sometimes write \(Y(\tau s_{1}\cdots s_{r})\) for \(Y(w)\). This Schubert cell has the following well-known moduli description of its \(k\)-points.

**Lemma 7.1**.: _Fix the reduced expression \(w=\tau s_{1}\cdots s_{r}\) as above._

1. _Giving a_ \(k\)_-point of_ \(Y(w)\) _is equivalent to giving a_ \(k\)_-point of_ \(\tau^{-1}Y(w)\)_, which is equivalent to giving a sequence of Iwahori subgroups_ \((\mathcal{B}_{0},\mathcal{B}_{1},\dots,\mathcal{B}_{r})\) _such that_ \[\mathcal{B}=:\mathcal{B}_{0}\smash{\mathop{\longrightarrow}\limits^{s_{1}}}\mathcal{B}_{1}\smash{\mathop{\longrightarrow}\limits^{s_{2}}}\mathcal{B}_{2}\smash{\mathop{\longrightarrow}\limits^{s_{3}}}\cdots\smash{\mathop{\longrightarrow}\limits^{s_{r}}}\mathcal{B}_{r}.\]
2.
_For any element_ \(y\in LG(k)\)_, giving a_ \(k\)_-point of_ \(y^{-1}Y(w)\) _is equivalent to giving a sequence of Iwahori subgroups_ \((\mathcal{B}_{0},\mathcal{B}_{1},\cdots,\mathcal{B}_{r})\) _such that_ \[y^{-1}\mathcal{B}=:\mathcal{B}_{0}\smash{\mathop{\longrightarrow}\limits^{s_{1}}}\mathcal{B}_{1}\smash{\mathop{\longrightarrow}\limits^{s_{2}}}\mathcal{B}_{2}\smash{\mathop{\longrightarrow}\limits^{s_{3}}}\cdots\smash{\mathop{\longrightarrow}\limits^{s_{r}}}\mathcal{B}_{r}.\]

Proof.: In both cases, note that \(\tau\) normalizes the Iwahori \(\mathcal{B}\). 

### Proof of Theorem 1.1 for \(\mathcal{P}=\mathcal{B}\)

Consider the morphism \(p_{w_{\bullet},\mathcal{B}}:Y_{\mathcal{B}}(w_{\bullet})\to X_{\mathcal{B}}(w_{*})\). For each \(1\leq i\leq r\), we choose a reduced expression

\[w_{i}=\tau_{i}s_{i1}\cdots s_{in_{i}}\]

for \(s_{ij}\in S_{\mathrm{aff}}\) and \(\tau_{i}\in\Omega\). Since conjugation by \(\tau_{i}\) normalizes \(\mathcal{B}\), permutes \(S_{\mathrm{aff}}\), and preserves the Demazure product, we may reduce the study of fibers to the case where each \(\tau_{i}=1\). Then we have

\[w_{*}=s_{11}*\cdots*s_{1n_{1}}*s_{21}*\cdots*s_{2n_{2}}*\cdots*s_{r1}*\cdots*s_{rn_{r}}=:s_{**}.\]

By Lemma 7.1, the morphism \(p_{w_{\bullet},\mathcal{B}}\) is identified with the morphism \(p_{s_{\bullet\bullet},\mathcal{B}}:Y(s_{\bullet\bullet})\to X(s_{**})\), where \(s_{\bullet\bullet}\) denotes the concatenated tuple \((s_{11},\ldots,s_{rn_{r}})\). By §6.3, its fibers possess the required pavings.

### Proof of Theorem 1.1 in general

Let \(\mathcal{C}\) be the class of varieties which are finite products of copies of \(\mathbb{A}^{1}\) and \(\mathbb{A}^{1}-\mathbb{A}^{0}\). We consider the morphism \(p=p_{w_{\bullet},\mathcal{P}}:Y_{\mathcal{P}}(w_{\bullet})\to X_{\mathcal{P}}(w_{*})\), and suppose \(v\mathcal{P}\) lies in the image. We prove that the fiber \(p^{-1}(v\mathcal{P})\) has a \(\mathcal{C}\)-paving by induction on \(r\). As before, consider the morphism \(p^{\prime}:Y_{\mathcal{P}}(w_{1},\dots,w_{r-1})\to\operatorname{Fl}_{\mathcal{P}}\) given by \((\mathcal{P}_{1},\mathcal{P}_{2},\dots,\mathcal{P}_{r-1})\mapsto\mathcal{P}_{r-1}\), and, by a slight abuse, its restriction \(\xi^{\circ}=p^{\prime}|_{p^{-1}(v\mathcal{P})}:p^{-1}(v\mathcal{P})\to\operatorname{Fl}_{\mathcal{P}}\), defined by \((\mathcal{P}_{1},\dots,\mathcal{P}_{r-1},v\mathcal{P})\mapsto\mathcal{P}_{r-1}\). We have

\[\operatorname{Im}(\xi^{\circ})=\operatorname{Im}(p^{\prime})\cap vY_{\mathcal{P}}(w_{r}^{-1}).\]

We claim that for any \(y\in W\) with corresponding \(\mathcal{B}\)-orbit \(Y_{\mathcal{B}\mathcal{P}}(y)\), the intersection \(\operatorname{Im}(\xi^{\circ})\cap Y_{\mathcal{B}\mathcal{P}}(y)\) is either empty, or has a \(\mathcal{C}\)-paving. Then since such locally closed subsets cover \(\operatorname{Im}(\xi^{\circ})\) and since \(p^{\prime}\) is trivial over each such subset, the \(\mathcal{C}\)-paving of \(p^{-1}(v\mathcal{P})\) will follow by our induction hypothesis applied to \(p^{\prime}\). Note that if \(\operatorname{Im}(p^{\prime})\cap Y_{\mathcal{B}\mathcal{P}}(y)\) is nonempty, then \(Y_{\mathcal{B}\mathcal{P}}(y)\subset\operatorname{Im}(p^{\prime})\), and we are trying to produce a \(\mathcal{C}\)-paving of

\[Y_{\mathcal{B}\mathcal{P}}(y)\cap vY_{\mathcal{P}}(w_{r}^{-1}).\]

We can pass to \(\mathcal{B}\)-orbits by writing \(W_{\mathcal{P}}w_{r}^{-1}W_{\mathcal{P}}=\coprod_{\eta_{m}}\eta_{m}W_{\mathcal{P}}\), for \(\eta_{m}\in W\) a finite collection of right-\(\mathbf{f}\)-minimal elements.
We then have a locally closed decomposition \(Y_{\mathcal{P}}(w_{r}^{-1})=\coprod_{\eta_{m}}Y_{\mathcal{BP}}(\eta_{m})\). Thus we need to show that each

\[Y_{\mathcal{BP}}(y)\cap vY_{\mathcal{BP}}(\eta_{m})\]

has a \(\mathcal{C}\)-paving. We may assume \(y\) is also right-\(\mathbf{f}\)-minimal. Then by Proposition 4.4, this is isomorphic to

\[Y_{\mathcal{B}}(y)\cap vY_{\mathcal{B}}(\eta_{m}).\]

This in turn is equal to the fiber over \(v\mathcal{B}\) of the morphism \(Y_{\mathcal{B}}(y)\widetilde{\times}Y_{\mathcal{B}}(\eta_{m}^{-1})\to X_{\mathcal{B}}(y*\eta_{m}^{-1})\). But each fiber of this morphism has a \(\mathcal{C}\)-paving by §7.2.

## 8. Proof of Corollary 1.2

This follows immediately from Theorem 1.1, as we have a decomposition into locally closed subvarieties

\[X_{\mathcal{P}}(w_{\bullet})=\coprod_{v_{\bullet}}Y_{\mathcal{P}}(v_{\bullet})\tag{8.1}\]

where \(v_{\bullet}\) ranges over all tuples \((v_{1},v_{2},\dots,v_{r})\in(W_{\mathcal{P}}\backslash W/W_{\mathcal{P}})^{r}\) such that \(v_{i}\leq w_{i}\) in the Bruhat order on \(W_{\mathcal{P}}\backslash W/W_{\mathcal{P}}\) for all \(i\). Thus the fiber has a corresponding decomposition, and the result follows from Theorem 1.1.

## 9. Application to structure constants for parahoric Hecke algebras

Fix a nonarchimedean local field \(F\) with ring of integers \(\mathcal{O}_{F}\) and residue field \(k_{F}=\mathbb{F}_{q}\). Let us suppose \(G\) is a split group over \(\mathbb{Z}\), and fix a Borel pair \(B\supset T\) in \(G\), also split and defined over \(\mathbb{Z}\). This gives rise to the extended affine Weyl group \(W\) defined using \(G\supset B\supset T\) (it agrees with the extended affine Weyl group attached to \(G_{F}\supset B_{F}\supset T_{F}\)). For any parahoric subgroup \(\mathcal{P}\subset G(F)\), consider the parahoric Hecke algebra \(\mathcal{H}(G(F)/\mathcal{P})=C_{c}(\mathcal{P}\backslash G(F)/\mathcal{P},\mathbb{C})\), given the structure of a unital associative \(\mathbb{C}\)-algebra with convolution \(*\) defined using the Haar measure on \(G(F)\) giving \(\mathcal{P}\) volume 1. Consider the \(\mathbb{C}\)-basis of characteristic functions \(f_{w}:=1_{\mathcal{P}w\mathcal{P}}\) indexed by elements \(w\in W_{\mathcal{P}}\backslash W/W_{\mathcal{P}}\). We can represent such cosets by maximal length elements \(w\in\,^{\mathbf{f}}W^{\mathbf{f}}\).

**Proposition 9.1**.: _For any \(w_{1},w_{2}\in\,^{\mathbf{f}}W^{\mathbf{f}}\), we have_

\[f_{w_{1}}*f_{w_{2}}=\sum_{v\in\,^{\mathbf{f}}W^{\mathbf{f}}}c_{w_{1},w_{2}}^{v}(q)\,f_{v}\]

_where the structure constant is a non-negative integer of the form_

\[c_{w_{1},w_{2}}^{v}(q)=\sum_{a,b\in\mathbb{Z}_{\geq 0}}m_{a,b}\ q^{a}(q-1)^{b}\]

_for certain non-negative integers \(m_{a,b}\) which vanish for all but finitely many pairs \((a,b)\)._

Proof.: The combinatorics of parahoric Hecke algebras over characteristic zero local fields \(F\) are the same as those for \(F=\mathbb{F}_{q}(\!(t)\!)\) (the parahoric subgroups in each setting chosen to correspond to each other in the obvious way, suitably identifying apartments for \(G_{F}\supset T_{F}\) and \(G_{\mathbb{F}_{q}(\!(t)\!)}\supset T_{\mathbb{F}_{q}(\!(t)\!)}\) and facets therein - for a much more general statement, see [10, 4.1.2]). Therefore we can assume \(F\) is of the latter form.
Then note that \(c_{w_{1},w_{2}}^{v}(q)\) is the number of \(\mathbb{F}_{q}\)-rational points in the fiber over \(v\mathcal{P}\) of the corresponding convolution morphism \(Y_{\mathcal{P}}(w_{1})\widetilde{\times}Y_{\mathcal{P}}(w_{2})\to X_{\mathcal{P}}(w_{*})\). Thus the result follows from Theorem 1.1. 

This gives rise to general parahoric variants (in the equal parameter case) of combinatorial results on structure constants for spherical affine Hecke algebras due to Parkinson [11, Thm. 7.2] and Schwer [12]. By virtue of the Macdonald formula (see e.g. [10, Thm. 5.6.1]), the function \(P_{\lambda}\) considered (albeit with differing normalizations) by Parkinson and Schwer agrees up to an explicit normalizing factor with the Satake transform \(f_{\lambda}^{\vee}\) of the basis elements \(f_{\lambda}=1_{G(\mathbb{F}_{q}[\![t]\!])\,t^{\lambda}\,G(\mathbb{F}_{q}[\![t]\!])}\) above, for any dominant \(\lambda\in X_{*}(T)\). In particular, Proposition 9.1 shows that suitably renormalized versions of the functions \(C_{\lambda\mu}^{v}\) appearing in [12, Thm. 1.3] lie in \(\mathbb{Z}_{\geq 0}[q-1]\).

## 10. Cellular paving of certain subvarieties in the affine Grassmannian

In this section we will restrict our attention to certain generalizations of the intersections containing the Mirkovic-Vilonen cycles in the affine Grassmannian. Let \(\mathcal{P}=\mathcal{P}_{\boldsymbol{0}}=L^{+}G\), and consider the affine Grassmannian \(\operatorname{Gr}_{G}=\operatorname{Fl}_{\mathcal{P}}\). We fix any standard parabolic subgroup \(P\supset B\) with Levi factorization \(P=MN\), for a Levi subgroup \(M\supset T\) and unipotent radical \(N\subset U\). Here \(B=TU\) is the Levi decomposition of the fixed Borel subgroup \(B\). We abbreviate \(K=L^{+}G\) and note that the intersection \(K_{M}:=K\cap M\) in \(LG\) can be identified with \(L^{+}M\). We define \(K_{P}:=K_{M}\cdot LN\). This is a semidirect product group ind-scheme over \(k\), since \(K_{M}\) normalizes \(LN\). For \(\lambda\in X_{*}(T)\), denote the corresponding point by \(x_{\lambda}:=\lambda(t)e_{P}\in\operatorname{Gr}_{G}(k)\). Fix \(\mu\in X_{*}(T)^{+}\). Recall [13, Def. 3.1], in which we declare that \(\nu\in X_{*}(T)\) satisfies \(\nu\geq^{P}\mu\) provided that

* \(\langle\alpha,\nu\rangle=0\) for all \(T\)-roots \(\alpha\) appearing in \(\operatorname{Lie}(M)\);
* \(\langle\alpha,\nu+\lambda\rangle>0\) for all \(T\)-roots \(\alpha\) appearing in \(\operatorname{Lie}(N)\) and for all \(\lambda\in\Omega(\mu)\).

Here \(\Omega(\mu)=\{\lambda\in X_{*}(T)\,|\,\mu-w\lambda\text{ is a sum of positive coroots, for all }w\in W_{0}\}\). Also, let \(X_{*}(T)^{+_{M}}\) be the cocharacters which are dominant for the roots appearing in \(\operatorname{Lie}(B\cap M)\).

**Proposition 10.1**.: _If \(\nu\geq^{P}\mu\) for \(\mu\in X_{*}(T)^{+}\), and if \(\lambda\in\Omega(\mu)\cap X_{*}(T)^{+_{M}}\), then there is an equality of \(k\)-subvarieties in \(\operatorname{Gr}_{G}\)_

\[(t^{-\nu}Kt^{\nu})x_{\lambda}\,\cap\,Kx_{\mu}=K_{P}x_{\lambda}\,\cap\,Kx_{\mu}.\tag{10.1}\]

Proof.: The equality \((t^{-\nu}Kt^{\nu})x_{\lambda}\,\cap\,\overline{Kx_{\mu}}=K_{P}x_{\lambda}\,\cap\,\overline{Kx_{\mu}}\) follows on combining [13, Prop. 7.1] and [13, Lem. 7.3]. The desired equality without the closures follows formally from this one. 

The left hand side of (10.1) admits a cellular paving by Theorem 1.1. Indeed, we have

\[(t^{-\nu}Kt^{\nu})x_{\lambda}\,\cap\,Kx_{\mu}=p_{w_{\bullet},L^{+}G}^{-1}(t^{-\nu}e_{L^{+}G}),\]

for \(w_{\bullet}=(t_{\mu},t_{-\nu-\lambda})\).
Hence we deduce the following result.

**Corollary 10.2**.: _For \(\mu,\lambda\) as above, the variety \(L^{+}M\,LN\,x_{\lambda}\cap L^{+}G\,x_{\mu}\) in \(\operatorname{Gr}_{G}\) admits a cellular paving. In particular, for \(P=B\), the Mirkovic-Vilonen variety \(LUx_{\lambda}\cap L^{+}Gx_{\mu}\) admits a cellular paving._

Note that this applies to all pairs \((\mu,\lambda)\in X_{*}(T)^{+}\times X_{*}(T)^{+_{M}}\): if the intersection is non-empty, then \(\lambda\in\Omega(\mu)\) is automatic, by [13, Lem. 7.2(b)].

## 11. Paving results over \(\mathbb{Z}\)

The goal of what follows is to extend the constructions and results above to work over \(\mathbb{Z}\). Because there is no building attached to a group over \(\mathbb{Z}[\![t]\!]\), the main challenge is to give purely group-theoretic arguments for certain results which are usually proved with the aid of buildings.

### Basic constructions over \(\mathbb{Z}\)

We shall recall the basic notions attached to groups over \(\mathbb{Z}\). One useful reference is [16, §4], but in places we have chosen a slightly different way to justify the foundational results (for example, we do not assume the existence of the Demazure resolutions over \(\mathbb{Z}\) - a result stated without proof in [14] - and instead we construct them as a special case of the convolution morphisms over \(\mathbb{Z}\)).

We assume \(G\) is a reductive group over \(\mathbb{Z}\), more precisely, a smooth affine group scheme over \(\mathbb{Z}\) whose geometric fibers are connected reductive groups, and which admits a maximal torus \(T\) over \(\mathbb{Z}\), which is automatically split (see [12, Sec. 5.1.4]). We fix a Borel pair over \(\mathbb{Z}\), given by \(G\supset B\supset T\) (Borel subgroups \(B\supset T\) exist, by e.g. [12, proof of Thm. 5.1.13]). Following [16, §4], we have the usual objects: the standard apartment endowed with its Coxeter complex structure given by the affine roots, the base alcove \(\mathbf{a}\) and other facets \(\mathbf{f}\) therein, the Weyl group \(W_{0}\), the Iwahori-Weyl group \(W\), the affine Weyl group \(W_{\text{aff}}\), and the stabilizer subgroups \(W_{\mathbf{f}}\subset W_{\text{aff}}\). The Iwahori-Weyl group \(W:=N_{G}(T)(\mathbb{Z}(\!(t)\!))/T(\mathbb{Z}[\![t]\!])\) can be identified with the extended affine Weyl group \(X_{*}(T)\rtimes W_{0}\), where using [12, Prop. 5.1.6] we may identify \(W_{0}=N_{G}(T)(\mathbb{Z}[\![t]\!])/T(\mathbb{Z}[\![t]\!])\). As \(X_{*}(T)\rtimes W_{0}\) remains unchanged upon base changing along \(\mathbb{Z}\to k\) for any field \(k\), it inherits a Bruhat order \(\leq\) as in the classical theory over a field. Similarly, the apartment is canonically identified with the apartments attached to \((G_{\mathbb{Q}(\!(t)\!)},T_{\mathbb{Q}(\!(t)\!)})\) or \((G_{\mathbb{F}_{p}(\!(t)\!)},T_{\mathbb{F}_{p}(\!(t)\!)})\) for any prime number \(p\).

We define in the obvious way the positive loop group \(L^{+}G_{\mathbb{Z}}\) (a pro-smooth affine group scheme over \(\mathbb{Z}\)) and the loop group \(LG_{\mathbb{Z}}\) (an ind-affine group ind-scheme over \(\mathbb{Z}\)). For representability, see e.g. [10, Lem. 3.2]. The following result is essentially due to Pappas and Zhu, and this precise form was checked jointly with Timo Richarz.
**Lemma 11.1**.: _Let \(\mathbf{f}\) be any facet of the apartment corresponding to \(T\) in the Bruhat-Tits building of \(G(\mathbb{Q}(\!(t)\!))\), and let \(\mathcal{G}_{\mathbf{f},\mathbb{Q}}\) be the associated parahoric \(\mathbb{Q}[\![t]\!]\)-group scheme with connected fibers and with generic fiber \(G\otimes_{\mathbb{Z}}\mathbb{Q}(\!(t)\!)\). Then there exists a unique smooth affine fiberwise connected \(\mathbb{Z}[\![t]\!]\)-group scheme \(\mathcal{G}_{\mathbf{f}}\) of finite type extending \(\mathcal{G}_{\mathbf{f},\mathbb{Q}}\) with the following properties: \(i)\) There is an identification of \(\mathbb{Z}(\!(t)\!)\)-groups \(\mathcal{G}_{\mathbf{f}}\otimes_{\mathbb{Z}[\![t]\!]}\mathbb{Z}(\!(t)\!)=G\otimes_{\mathbb{Z}}\mathbb{Z}(\!(t)\!)\). \(ii)\) For every prime number \(p\), the group scheme \(\mathcal{G}_{\mathbf{f}}\otimes_{\mathbb{Z}[\![t]\!]}\mathbb{F}_{p}[\![t]\!]\) is the Bruhat-Tits group scheme with connected fibers for \(G\otimes_{\mathbb{Z}}\mathbb{F}_{p}(\!(t)\!)\) associated with \(\mathbf{f}\)._

Proof.: This is proven in [11, 4.2.2]. Note that the base ring in _loc. cit._ is the polynomial ring \(\mathcal{O}[t]\) where \(\mathcal{O}\) is discretely valued. The same proof remains valid over the base ring \(\mathbb{Z}[\![t]\!]\) using [10, 3.9.4]. 

For each \(\mathbf{f}\), we define the "parahoric" subgroup \(L^{+}\mathcal{G}_{\mathbf{f}}\subset LG_{\mathbb{Z}}\), and we often abbreviate by writing \(\mathcal{P}_{\mathbb{Z}}:=L^{+}\mathcal{G}_{\mathbf{f}}\). This has the property that for each homomorphism \(\mathbb{Z}\to k\) for \(k\) a field, we have \(\mathcal{P}_{\mathbb{Z}}\otimes_{\mathbb{Z}}k\cong\mathcal{P}_{k}\), where the latter is the object defined earlier when working over the field \(k\). We define the (partial) affine flag variety

\[\mathrm{Fl}_{\mathcal{P},\mathbb{Z}}=(LG_{\mathbb{Z}}/\mathcal{P}_{\mathbb{Z}})^{\text{ét}},\]

the etale sheafification of the quotient presheaf on \(\mathrm{Aff}_{\mathbb{Z}}\). This is represented by an ind-projective ind-scheme over \(\mathbb{Z}\); see [10, Cor. 3.11], where the proof is given for objects defined over \(\mathcal{O}[t]\) for any Noetherian ring \(\mathcal{O}\) - a similar proof works in our setting over \(\mathbb{Z}[\![t]\!]\). We denote the base point in \(\mathrm{Fl}_{\mathcal{P},\mathbb{Z}}\) by \(e_{\mathcal{P},\mathbb{Z}}\).

We have a notion of a negative parahoric loop group and a corresponding open cell in \(\mathrm{Fl}_{\mathcal{P},\mathbb{Z}}\). We define \(L^{--}G_{\mathbb{Z}}:=\ker(L^{-}G_{\mathbb{Z}}\to G_{\mathbb{Z}})\) (induced by \(t^{-1}\mapsto 0\)), where \(L^{-}G_{\mathbb{Z}}(R)=G(R[t^{-1}])\). Following [1], we define \(L^{--}\mathcal{G}_{\mathbf{a},\mathbb{Z}}=L^{--}G_{\mathbb{Z}}\rtimes\overline{U}_{\mathbb{Z}}\). Then for any facet \(\mathbf{f}\) in the closure of \(\mathbf{a}\), we define the negative parahoric loop group

\[L^{--}\mathcal{G}_{\mathbf{f},\mathbb{Z}}:=\bigcap_{w\in W_{\mathbf{f}}}\,^{w}(L^{--}\mathcal{G}_{\mathbf{a},\mathbb{Z}}),\tag{11.1}\]

the intersection being taken in \(LG_{\mathbb{Z}}\).

**Lemma 11.2**.: _The multiplication map \(L^{--}\mathcal{G}_{\mathbf{f},\mathbb{Z}}\times L^{+}\mathcal{G}_{\mathbf{f},\mathbb{Z}}\ \to\ LG_{\mathbb{Z}}\) is representable by a quasi-compact open immersion._

Proof.: This is proved in the same way as [10, Lem.
3.6], which proves the analogous result when the base ring is a ring of Witt vectors \(\mathbb{W}\) instead of \(\mathbb{Z}\); the same argument works for our group schemes \(\mathcal{G}_{\mathbf{f},\mathbb{Z}}\) over \(D_{\mathbb{Z}}:=\mathbb{Z}[\![t]\!]\). We omit the details. 

From now on, we often write \(\mathcal{G}\) for \(\mathcal{G}_{\mathbf{f}}\) and \(\mathcal{P}_{\mathbb{Z}}\) for \(L^{+}\mathcal{G}_{\mathbf{f}}\). We recall the interpretation of partial affine flag varieties in terms of suitable spaces of torsors. For any ring \(R\), denote \(D_{R}=\mathrm{Spec}(R[\![t]\!])\) and \(D_{R}^{*}=\mathrm{Spec}(R(\!(t)\!))\). Recall that we define the sheaf \(\mathrm{Gr}_{\mathcal{G}}\) on \(\mathrm{Aff}_{\mathbb{Z}}\) to be the functor sending \(R\) to the set \(\mathrm{Gr}_{\mathcal{G}}(R)\) of isomorphism classes of pairs \((\mathcal{E},\alpha)\), where \(\mathcal{E}\) is a right etale torsor for \(\mathcal{G}_{D_{R}}=\mathcal{G}\times_{D_{\mathbb{Z}}}D_{R}\) over \(D_{R}\), and where \(\alpha\in\mathcal{E}(D_{R}^{*})\), that is, an isomorphism of \(\mathcal{G}_{D_{R}^{*}}\)-torsors \(\mathcal{E}_{0}|_{D_{R}^{*}}\xrightarrow{\sim}\mathcal{E}|_{D_{R}^{*}}\), where \(\mathcal{E}_{0}\) is the trivial \(\mathcal{G}_{D_{R}}\)-torsor. The left action of \(g\in LG(R)\) on \(\mathrm{Gr}_{\mathcal{G}}(R)\) sends \((\mathcal{E},\alpha)\) to \((\mathcal{E},\alpha\circ g^{-1})\). Then \(\mathrm{Gr}_{\mathcal{G}}(R)\cong\mathrm{Fl}_{\mathcal{P},\mathbb{Z}}(R)\), functorially in \(R\) (see e.g. [10, Lem. 3.4]).

**Remark 11.3**.: For the groups \(G\) over \(\mathbb{Z}\) we consider, one can show using negative parahoric loop groups that the morphism \(LG_{\mathbb{Z}}\to\mathrm{Fl}_{\mathcal{P},\mathbb{Z}}\) has sections locally in the Zariski topology, and hence for any semi-local ring \(R\) we have \(\mathrm{Fl}_{\mathcal{P},\mathbb{Z}}(R)=LG_{\mathbb{Z}}(R)/\mathcal{P}_{\mathbb{Z}}(R)\). This can be seen by reducing to the case of fields, as in [10, §4.3]. One can also deduce it from a recent result of Cesnavicius [11, Thm. 1.7] that the affine Grassmannian \(\operatorname{Gr}_{\mathbb{Z}}\) agrees with the Zariski sheafification of the presheaf quotient \(LG_{\mathbb{Z}}/L^{+}G_{\mathbb{Z}}\). To use this to prove the corresponding result for a general parahoric \(\mathcal{P}_{\mathbb{Z}}\), one first deduces the result for \(\mathcal{P}_{\mathbb{Z}}=\mathcal{B}_{\mathbb{Z}}\), using the lifting for \(\mathcal{P}_{\mathbb{Z}}=L^{+}G_{\mathbb{Z}}\) and the fact that the fiber of \(\operatorname{Fl}_{\mathcal{B},\mathbb{Z}}\to\operatorname{Gr}_{G,\mathbb{Z}}\) over the base point is \((G/B)_{\mathbb{Z}}\) and \(G\to(G/B)_{\mathbb{Z}}\) is Zariski locally trivial. Then finally one uses the topological surjectivity of \(\operatorname{Fl}_{\mathcal{B},\mathbb{Z}}\to\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\) to prove that a cover given by translates of the big cell in the source maps to a cover by translates of the big cell in the target. In fact one can use translates \(wL^{--}\mathcal{G}_{\mathbf{a},\mathbb{Z}}e_{\mathcal{B},\mathbb{Z}}\) for \(w\in W\) to cover \(\operatorname{Fl}_{\mathcal{B},\mathbb{Z}}\), thanks to the Birkhoff decomposition of \(LG\) over fields (see [12, Lem. 4]). I am grateful to Thibaud van den Hove for a clarifying discussion about this remark, which we shall not need in the rest of this article.

**Lemma 11.4**.: _Fix a ring \(R\) and \((\mathcal{E},\alpha)\in\operatorname{Gr}_{\mathcal{G}}(R)\)._
Then the presheaf \(\operatorname{Gr}_{\mathcal{E},\alpha}\) sending \(\operatorname{Spec}(R^{\prime})\to\operatorname{Spec}(R)\) to the set of isomorphism classes of pairs \((\mathcal{E}^{\prime},\alpha^{\prime})\) consisting of a \(\mathcal{G}_{D_{R^{\prime}}}\)-torsor \(\mathcal{E}^{\prime}\to D_{R^{\prime}}\) and an isomorphism of \(\mathcal{G}_{D^{*}_{R^{\prime}}}\)-torsors \(\alpha^{\prime}:\mathcal{E}_{D^{*}_{R^{\prime}}}\overset{\sim}{\to}\mathcal{E }^{\prime}_{D^{*}_{R^{\prime}}}\) is representable by an ind-projective ind-flat ind-scheme over \(R\)._ Proof.: If we fix a representative \((\mathcal{E},\alpha)\) within its isomorphism class, then the map \[(\mathcal{E}^{\prime},\alpha^{\prime})\mapsto(\mathcal{E}^{\prime},\alpha^{ \prime}\circ\alpha)\] is a well-defined isomorphism of presheaves \(\operatorname{Gr}_{\mathcal{E},\alpha}\overset{\sim}{\to}\operatorname{Gr}_{ \mathcal{G}}\times\operatorname{Spec}(R)\). Now recall that \(\operatorname{Gr}_{\mathcal{G}}\) is ind-flat over \(\mathbb{Z}\) by adapting the proof of [13, Prop. 8.9], or by reducing to the case \(\mathcal{P}_{\mathbb{Z}}=L^{+}G_{\mathbb{Z}}\) and then invoking [13, Prop. 8.8]. ### Ingredients needed for paving over \(\mathbb{Z}\) #### 11.2.1. Iwahori decompositions of \(\mathcal{B}_{\mathbb{Z}}\) and \(\mathcal{U}_{\mathbb{Z}}\) Our choice of base Iwahori subgroup \(\mathcal{B}_{\mathbb{Z}}\) is compatible with our choice of Borel subgroup \(B=TU\) over \(\mathbb{Z}\) in the following sense: for any algebra \(R\), we have \[\mathcal{B}_{\mathbb{Z}}(R)=\{g\in L^{+}G_{\mathbb{Z}}(R)\,|\,\bar{g}\in B(R)\}\] where \(\bar{g}\) is the image of \(g\) under the canonical projection \(L^{+}G_{\mathbb{Z}}(R)\to G(R)\). We define the pro-unipotent radical \(\mathcal{U}_{\mathbb{Z}}\subset\mathcal{B}_{\mathbb{Z}}\) by requiring \(\mathcal{U}_{\mathbb{Z}}(R)\) to be the preimage of \(U(R)\) under the projection \(g\mapsto\bar{g}\). Let \(\mathcal{T}_{\mathbb{Z}}\) denote the group scheme \(\mathcal{T}_{\mathbb{Z}}=L^{+}T_{\mathbb{Z}}\). Let \(\overline{B}=T\overline{U}\) be the Borel subgroup such that \(B\cap\overline{B}=T\). For any integer \(m\geq 1\), let \(L^{(m)}G_{\mathbb{Z}}(R)\) denote the kernel of the natural homomorphism \(L^{+}G_{\mathbb{Z}}(R)\to G(R/t^{m}R)\). Write \(\mathcal{T}_{\mathbb{Z}}^{(1)}:=L^{(1)}T_{\mathbb{Z}}\). **Proposition 11.5**.: _The group schemes \(\mathcal{B}_{\mathbb{Z}}\) and \(\mathcal{U}_{\mathbb{Z}}\) possess Iwahori decompositions with respect to \(B=TU\), that is, there are unique factorizations of functors_ \[\mathcal{B}_{\mathbb{Z}} =(\mathcal{B}_{\mathbb{Z}}\cap L\overline{U}_{\mathbb{Z}})\cdot \mathcal{T}_{\mathbb{Z}}\cdot(\mathcal{B}_{\mathbb{Z}}\cap LU_{\mathbb{Z}}) \tag{11.3}\] \[\mathcal{U}_{\mathbb{Z}} =(\mathcal{U}_{\mathbb{Z}}\cap L\overline{U}_{\mathbb{Z}})\cdot \mathcal{T}_{\mathbb{Z}}^{(1)}\cdot(\mathcal{B}_{\mathbb{Z}}\cap LU_{\mathbb{Z}}). \tag{11.2}\] Proof.: First we note that the uniqueness in the decomposition follows from the uniqueness of the decomposition in the big cell in \(\overline{U}\cdot T\cdot U\) in \(G\). We shall prove only the first decomposition (the second is completely similar). Consider \(g\in\mathcal{B}_{\mathbb{Z}}(R)\), with reduction modulo \(t\) given by \(\bar{g}=\bar{b}\) for some \(b\in B(R)\subset L^{+}B_{\mathbb{Z}}(R)\). 
Then \(g^{(1)}:=gb^{-1}\in L^{(1)}G_{\mathbb{Z}}(R)\), and it suffices to show this element lies in \[(\mathcal{B}_{\mathbb{Z}}\cap L^{(1)}\overline{U}_{\mathbb{Z}})\cdot\mathcal{T} _{\mathbb{Z}}^{(1)}\cdot(\mathcal{B}_{\mathbb{Z}}\cap L^{(1)}U_{\mathbb{Z}}). \tag{11.4}\] The filtration \(\cdots\subset L^{(m+1)}G_{\mathbb{Z}}\subset L^{(m)}G_{\mathbb{Z}}\subset\cdots \subset L^{+}G_{\mathbb{Z}}\) has abelian quotients isomorphic to \(\operatorname{Lie}(G)_{\mathbb{Z}}=\operatorname{Lie}(\overline{U})_{\mathbb{Z} }\oplus\operatorname{Lie}(T)_{\mathbb{Z}}\oplus\operatorname{Lie}(U)_{\mathbb{Z}}\). We claim that we can write \[g^{(1)}=\lim_{m\to\infty}\bar{u}_{m}\cdot t_{m}\cdot u_{m}\] with \(\bar{u}_{m},t_{m},u_{m}\) lying in the \(R\)-points of the appropriate factors of (11.4), and such that the limit converges in the \(t\)-adic topology. Indeed, decomposing the image modulo \(t^{2}\) of \(g^{(1)}\) in terms of the Lie algebra and lifting, we can write \[g^{(1)}\ =\ \bar{u}^{(1,2)}\cdot g^{(2)}\cdot t^{(1,2)}\cdot u^{(1,2)}\] where \(\bar{u}^{(1,2)}\in L^{(1)}\overline{U}_{\mathbb{Z}}\), \(t^{(1,2)}\in L^{(1)}T_{\mathbb{Z}}\), and \(u^{(1,2)}\in L^{(1)}U_{\mathbb{Z}}\) and where \(g^{(2)}\in L^{(2)}G_{\mathbb{Z}}\). Here we have used that \(\bar{u}^{(1,2)}\) normalizes \(L^{(2)}G_{\mathbb{Z}}\). We then repeat this process with \(g^{(2)}\), and get an expression \[g^{(1)}=(\bar{u}^{(1,2)}\bar{u}^{(2,3)})\cdot g^{(3)}\cdot(t^{(1,2)}t^{(2,3)}) \cdot(u^{(2,3)}u^{(1,2)})\] where \(g^{(3)}\in L^{(3)}G_{\mathbb{Z}}\) and where \(?^{(2,3)}\) refers to a component of \(g^{(2)}\) lying in an appropriate \(L^{(2)}\)? group, with \(g^{(3)}\) viewed as an "error term". Here we have used again the normality of \(L^{(m)}G_{\mathbb{Z}}\) in \(L^{+}G_{\mathbb{Z}}\), the commutativity of \(L^{+}T_{\mathbb{Z}}\), and the fact that \(L^{+}T_{\mathbb{Z}}\) normalizes each \(L^{(m)}U_{\mathbb{Z}}\). Continuing this, we define the sequences \(\bar{u}^{(m-1,m)}\), \(t^{(m-1,m)}\), \(u^{(m-1,m)}\), and \(g^{(m)}\) and then set \[\bar{u}_{m} =\bar{u}^{(1,2)}\cdots\bar{u}^{(m-1,m)}\] \[t_{m} =t^{(1,2)}\cdots t^{(m-1,m)}\] \[u_{m} =u^{(m-1,m)}\cdots u^{(1,2)}.\] In the \(t\)-adic topology, these products converge and the terms \(g^{(m)}\) approach the identity element \(e\in G\). Hence this proves the claim, and thus the Proposition. #### 11.2.2. Proposition 4.1 over \(\mathbb{Z}\) **Proposition 11.6**.: _The analogue over \(\mathbb{Z}\) of Proposition 4.1 holds._ Proof.: There are only finitely many affine roots \(a\) such that \(U_{a,\mathbb{Z}}\) is contained in \(\mathcal{U}_{\mathbb{Z}}\cap\,^{v}\overline{\mathcal{U}}_{\mathcal{P},\mathbb{ Z}}\), namely the finitely many \(a\) with \(a>0\) and \(v^{-1}a\stackrel{{\mathbf{f}}}{{<}}0\). By the Iwahori decomposition Proposition 11.5 and the root group filtrations in \(U_{\mathbb{Z}}\) and \(\overline{U}_{\mathbb{Z}}\), we easily see that there exist finitely many positive affine roots \(a_{1},\dots,a_{N}\) such that \[\mathcal{U}_{\mathbb{Z}}=U_{a_{1},\mathbb{Z}}\cdots U_{a_{N},\mathbb{Z}}\,\,( \mathcal{U}_{\mathbb{Z}}\cap\,^{v}\mathcal{P}_{\mathbb{Z}}). \tag{11.5}\] In what follows, we suppress the subscript \(\mathbb{Z}\). Give a total order \(\preceq\) to the set of positive affine roots \(a_{i}\) in this list, by letting \(a\prec b\) if and only if \(a(x_{0})<b(x_{0})\) for a suitably general point \(x_{0}\in\mathbf{a}\). 
Let \(r_{1}\prec r_{2}\prec\cdots\prec r_{M}\) be the totally ordered subset of the \(a_{i}\)'s with the property that \(v^{-1}r_{i}\stackrel{\mathbf{f}}{<}0\). The root group \(U_{r_{1}}\) appears finitely many times in (11.5). Starting from the left, we commute the first \(U_{r_{1}}\) to the left past any preceding \(U_{b}\)'s. By the commutator relations (e.g. [1, (3.6)]), in moving all the \(U_{r_{1}}\) groups all the way to the left, we introduce finitely many additional affine root groups \(U_{c}\) with \(r_{1}\prec c\). Then we consider the part of the product which now involves only root groups of the form \(U_{r_{2}},\dots,U_{r_{M}}\) and certain \(U_{c}\) with \(v^{-1}c\stackrel{\mathbf{f}}{\geq}0\). Then we repeat the above process with \(r_{2}\) in place of \(r_{1}\). Continuing, we eventually move all the \(U_{r}\) factors with \(v^{-1}r\stackrel{\mathbf{f}}{<}0\) all the way to the left. We have proved that

\[\mathcal{U}=\prod_{r}U_{r}\,(\mathcal{U}\cap\,^{v}\mathcal{P})\tag{11.6}\]

where \(r\) ranges over the affine roots with \(r>0\) and \(v^{-1}r\stackrel{\mathbf{f}}{<}0\). We claim that the obvious inclusion \(\prod_{r}U_{r}\subset\mathcal{U}\cap\,^{v}\overline{\mathcal{U}}_{\mathcal{P}}\) is an equality, and the resulting product is a decomposition. Both statements follow easily using the theory of the big cell (Lemma 11.2). 

**Corollary 11.7**.: _The analogues over \(\mathbb{Z}\) of Propositions 4.4 and 5.1 hold._

#### 11.2.3. Schubert cells and Schubert schemes over \(\mathbb{Z}\)

Fix \(w\in W\) and fix a lift \(\dot{w}\in N_{G}(T)(\mathbb{Z}[\![t]\!])\) of \(w\). We usually suppress the dot from now on, since no construction depends on this choice. The group \(\mathcal{P}_{\mathbb{Z}}\) acts on the left on \(\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\), and we define the _Schubert scheme_ \(X_{\mathcal{P},\mathbb{Z}}(w)\subset\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\) to be the scheme-theoretic image of the morphism

\[\mathcal{P}_{\mathbb{Z}}\to\operatorname{Fl}_{\mathcal{P},\mathbb{Z}},\qquad p\mapsto p\dot{w}e_{\mathcal{P},\mathbb{Z}}.\]

Similarly we can define \(X_{\mathcal{Q}\mathcal{P},\mathbb{Z}}(w)\) for any parahoric subgroup \(\mathcal{Q}_{\mathbb{Z}}\); in particular we have \(X_{\mathcal{B}\mathcal{P},\mathbb{Z}}(w)\). We define \(Y_{\mathcal{P},\mathbb{Z}}(w)\subset\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\) to be the etale sheaf-theoretic image of the morphism of sheaves \(\mathcal{P}_{\mathbb{Z}}\to\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\), \(p\mapsto p\dot{w}e_{\mathcal{P},\mathbb{Z}}\), and as before we define similarly \(Y_{\mathcal{Q}\mathcal{P},\mathbb{Z}}(w)\) for any parahoric subgroup \(\mathcal{Q}_{\mathbb{Z}}\subset LG_{\mathbb{Z}}\).

**Lemma 11.8**.: _Let \(\mathcal{P}_{\mathbb{Z}}\subset LG_{\mathbb{Z}}\) be the parahoric subgroup fixed above (similar statements apply to any \(\mathcal{Q}_{\mathbb{Z}}\)-orbits in \(\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\))._

(a) _The scheme_ \(X_{\mathcal{P},\mathbb{Z}}(w)\) _is an integral scheme which is projective and faithfully flat over_ \(\operatorname{Spec}(\mathbb{Z})\)_, and_ \(X_{\mathcal{P},\mathbb{Z}}(w)\otimes\mathbb{Q}=X_{\mathcal{P},\mathbb{Q}}(w)\)_._

(b) _The morphism_ \(Y_{\mathcal{P},\mathbb{Z}}(w)\to\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\) _of etale sheaves factors canonically as_ \[Y_{\mathcal{P},\mathbb{Z}}(w)\to X_{\mathcal{P},\mathbb{Z}}(w)\to\operatorname{Fl}_{\mathcal{P},\mathbb{Z}},\] _and the first morphism is represented by a quasi-compact open immersion of schemes._

(c)
_The scheme_ \(Y_{\mathcal{P},\mathbb{Z}}(w)\) _is smooth over_ \(\operatorname{Spec}(\mathbb{Z})\)_, and its formation commutes with base change along an arbitrary homomorphism_ \(\mathbb{Z}\to R\)_._

Proof.: The projectivity in (a) is proved in [10, Def. 4.3.4 ff.]. Part (b) can be proved by adapting the argument of [10, Cor. 3.14]. Part (c) holds since \(Y_{\mathcal{P},\mathbb{Z}}(w)\) is the orbit under a smooth group scheme over \(\mathbb{Z}\). Hence \(Y_{\mathcal{P},\mathbb{Z}}(w)\otimes\mathbb{Q}=Y_{\mathcal{P},\mathbb{Q}}(w)\). The formation of the scheme-theoretic image of a quasi-compact morphism commutes with flat base change (see [10, Lem. 29.25.16]). So the generic fiber of \(X_{\mathcal{P},\mathbb{Z}}(w)\) is the schematic closure of \(Y_{\mathcal{P},\mathbb{Q}}(w)\) in \(\operatorname{Fl}_{\mathcal{P},\mathbb{Q}}\), that is, \(X_{\mathcal{P},\mathbb{Z}}(w)\otimes\mathbb{Q}=X_{\mathcal{P},\mathbb{Q}}(w)\). Now the flat closure of the latter in \(\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\) contains the scheme-theoretic closure of \(Y_{\mathcal{P},\mathbb{Z}}(w)\), which is all of \(X_{\mathcal{P},\mathbb{Z}}(w)\). This shows that the latter is faithfully flat over \(\mathbb{Z}\). Clearly \(X_{\mathcal{P},\mathbb{Z}}(w)\) is irreducible, since it is the scheme-theoretic image of a morphism with irreducible source. Moreover \(X_{\mathcal{P},\mathbb{Z}}(w)\) is reduced, since it is the flat closure of a \(\mathbb{Q}\)-variety. 

#### 11.2.4. Reduction to the neutral element of \(\Omega\)

In the theory over fields \(k\), it is easy to see that any \(\tau\in\Omega\) normalizes the standard Iwahori subgroup \(\mathcal{B}_{k}\subset LG_{k}\) corresponding to the base alcove \(\mathbf{a}\). We need to know that this remains true over \(\mathbb{Z}\).

**Lemma 11.9**.: _If \(\tau\in\Omega\), then \({}^{\tau}\mathcal{B}_{\mathbb{Z}}=\mathcal{B}_{\mathbb{Z}}\) as subgroups of \(LG_{\mathbb{Z}}\)._

Proof.: The identification follows by the uniqueness characterization of the group scheme \(\mathcal{G}_{\mathbf{a},\mathbb{Z}}\) in Lemma 11.1 and the fact that it holds after base change to every field \(k\). 

#### 11.2.5. Twisted products over \(\mathbb{Z}\)

As above we fix \(\mathcal{P}_{\mathbb{Z}}=L^{+}\mathcal{G}_{\mathbf{f},\mathbb{Z}}\). Again abbreviate \(\mathcal{G}:=\mathcal{G}_{\mathbf{f},\mathbb{Z}}\). Fix \(r\in\mathbb{N}\) and consider the right action of \(\mathcal{P}_{\mathbb{Z}}^{r}\) on \(LG_{\mathbb{Z}}^{r}\) given by the same formula as (3.1).

**Definition 11.10**.: We define the \(r\)-fold twisted product

\[\widetilde{\operatorname{Gr}}_{\mathcal{G}}:=LG_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}LG_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}\dots\times^{\mathcal{P}_{\mathbb{Z}}}LG_{\mathbb{Z}}/\mathcal{P}_{\mathbb{Z}}=:\operatorname{Gr}_{\mathcal{G}}\tilde{\times}\dots\tilde{\times}\operatorname{Gr}_{\mathcal{G}}\]

to be the etale quotient sheaf for the presheaf \((LG_{\mathbb{Z}})^{r}/(\mathcal{P}_{\mathbb{Z}})^{r}\) defined above.
Since every \(\mathcal{G}\)-bundle over \(D_{R}\) is trivializable over \(D_{R^{\prime}}\) for some etale ring extension \(R\to R^{\prime}\), we can identify \(\widetilde{\operatorname{Gr}}_{\mathcal{G}}(R)\) with the set of equivalence classes of tuples

\[(\mathcal{E}_{\bullet},\alpha_{\bullet})=(\mathcal{E}_{1},\dots,\mathcal{E}_{r};\alpha_{1},\dots,\alpha_{r})\]

such that each \(\mathcal{E}_{i}\) is a \(\mathcal{G}_{D_{R}}\)-torsor over \(D_{R}\), and the \(\alpha_{i}:\mathcal{E}_{i-1}|_{D_{R}^{*}}\stackrel{\sim}{\rightarrow}\mathcal{E}_{i}|_{D_{R}^{*}}\) are isomorphisms of \(\mathcal{G}_{D_{R}^{*}}\)-torsors over \(D_{R}^{*}\) for all \(i=1,\dots,r\) (with the convention that \(\mathcal{E}_{0}\) is the trivial torsor).

**Lemma 11.11**.: _The sheaf \(LG_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}LG_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}\dots\times^{\mathcal{P}_{\mathbb{Z}}}LG_{\mathbb{Z}}/\mathcal{P}_{\mathbb{Z}}\) is represented by an ind-proper ind-scheme which is faithfully flat over \(\mathbb{Z}\)._

Proof.: We proceed by induction on \(r\). The case \(r=1\) is clear: the ind-flatness of \(\operatorname{Gr}_{\mathcal{G}}\to\operatorname{Spec}(\mathbb{Z})\) is proved by an easy reduction to the case \(\mathcal{P}_{\mathbb{Z}}=L^{+}G_{\mathbb{Z}}\), which is then handled by [10, Prop. 8.8]. Now assume \(r>1\) and that the result holds for \((r-1)\)-fold quotients. The projection onto the first factor gives a morphism

\[p:LG_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}LG_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}\dots\times^{\mathcal{P}_{\mathbb{Z}}}LG_{\mathbb{Z}}/\mathcal{P}_{\mathbb{Z}}\to LG_{\mathbb{Z}}/\mathcal{P}_{\mathbb{Z}}.\]

Now the induction hypothesis and the proof of Lemma 11.4 show that this morphism is representable by an ind-proper ind-flat ind-scheme, and hence the total space is represented by an ind-scheme. Locally in the etale topology on the target, \(p\) is locally trivial with flat fiber, hence is flat. It follows that the source of \(p\) is flat over \(\mathbb{Z}\). It remains to prove the source of \(p\) is proper over \(\mathbb{Z}\). We know that ind-locally in the etale topology on the target, \(LG_{\mathbb{Z}}\to\operatorname{Gr}_{\mathcal{G}}\) has sections, and hence after passing to an etale cover \(p\) becomes Zariski-locally trivial with fibers that are ind-proper over \(\mathbb{Z}\), by using translates of the big cell (see Lemma 11.2). Since properness descends along etale covers, we conclude that \(p\) is ind-proper, as desired. Note that this argument becomes even simpler if we use the Zariski local triviality result of Cesnavicius (see Remark 11.3), but we do not need this more sophisticated result. 

Let \(w\in W\). Denoting the quotient morphism by \(q:LG_{\mathbb{Z}}\to\operatorname{Fl}_{\mathcal{P},\mathbb{Z}}\), note that \(\mathcal{P}_{\mathbb{Z}}w\mathcal{P}_{\mathbb{Z}}=q^{-1}(Y_{\mathcal{P},\mathbb{Z}}(w))\) (an equality of etale subsheaves of \(LG_{\mathbb{Z}}\)), where by definition \(\mathcal{P}_{\mathbb{Z}}w\mathcal{P}_{\mathbb{Z}}\) denotes the etale sheaf quotient \(\mathcal{P}_{\mathbb{Z}}\times^{w,\mathcal{P}_{\mathbb{Z}}}\mathcal{P}_{\mathbb{Z}}\) of \(\mathcal{P}_{\mathbb{Z}}\times\mathcal{P}_{\mathbb{Z}}\) by the right action of \(\mathcal{P}_{\mathbb{Z}}\cap w\mathcal{P}_{\mathbb{Z}}w^{-1}\) given by \((p,p^{\prime})\cdot\delta=(p\delta,w^{-1}\delta^{-1}wp^{\prime})\). In this vein, we _define_ \(\widetilde{\mathcal{P}_{\mathbb{Z}}w\mathcal{P}_{\mathbb{Z}}}:=q^{-1}(X_{\mathcal{P},\mathbb{Z}}(w))\).
**Definition 11.12**.: Let \(w_{\bullet}=(w_{1},w_{2},\ldots,w_{r})\in W^{r}\). We define

\[Y_{\mathcal{P},\mathbb{Z}}(w_{\bullet}):=\mathcal{P}_{\mathbb{Z}}w_{1}\mathcal{P}_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}\mathcal{P}_{\mathbb{Z}}w_{2}\mathcal{P}_{\mathbb{Z}}\times^{\mathcal{P}_{\mathbb{Z}}}\cdots\times^{\mathcal{P}_{\mathbb{Z}}}\mathcal{P}_{\mathbb{Z}}w_{r}\mathcal{P}_{\mathbb{Z}}/\mathcal{P}_{\mathbb{Z}}=Y_{\mathcal{P},\mathbb{Z}}(w_{1})\tilde{\times}Y_{\mathcal{P},\mathbb{Z}}(w_{2})\tilde{\times}\cdots\tilde{\times}Y_{\mathcal{P},\mathbb{Z}}(w_{r})\]

to be the etale quotient sheaf as in Definition 11.10, and similarly \(X_{\mathcal{P},\mathbb{Z}}(w_{\bullet}):=\widetilde{\mathcal{P}_{\mathbb{Z}}w_{1}\mathcal{P}_{\mathbb{Z}}}\times^{\mathcal{P}_{\mathbb{Z}}}\cdots\times^{\mathcal{P}_{\mathbb{Z}}}\widetilde{\mathcal{P}_{\mathbb{Z}}w_{r}\mathcal{P}_{\mathbb{Z}}}/\mathcal{P}_{\mathbb{Z}}=X_{\mathcal{P},\mathbb{Z}}(w_{1})\tilde{\times}\cdots\tilde{\times}X_{\mathcal{P},\mathbb{Z}}(w_{r})\), using the subsheaves \(\widetilde{\mathcal{P}_{\mathbb{Z}}w_{i}\mathcal{P}_{\mathbb{Z}}}\) defined above.

**Lemma 11.13**.: _The sheaves \(X_{\mathcal{P},\mathbb{Z}}(w_{\bullet})\) and \(Y_{\mathcal{P},\mathbb{Z}}(w_{\bullet})\) are represented by integral schemes which are of finite type and flat over \(\mathbb{Z}\). Moreover, \(X_{\mathcal{P},\mathbb{Z}}(w_{\bullet})\) is proper over \(\mathbb{Z}\)._

Proof.: The proof goes by induction on \(r\), in the same manner as Lemma 11.11. 

#### 11.2.6. Demazure morphisms and closure relations over \(\mathbb{Z}\)

We need to construct the Demazure resolutions over \(\mathbb{Z}\). This is stated without proof in [11] and is implicit in some literature (e.g. [10, 11]), but we think some extra discussion is needed. For \(s\in S_{\operatorname{aff}}\), let \(\mathcal{G}_{s,\mathbb{Z}}:=\mathcal{G}_{\mathbf{f},\mathbb{Z}}\), where \(\mathbf{f}\) is the facet fixed by \(s\). Let \(\mathcal{P}_{s,\mathbb{Z}}=L^{+}\mathcal{G}_{s,\mathbb{Z}}\). We have \(\mathcal{P}_{s,\mathbb{Z}}=\mathcal{B}_{\mathbb{Z}}\cup\mathcal{B}_{\mathbb{Z}}s\mathcal{B}_{\mathbb{Z}}\) as schemes (to show this we use Lemma 11.8(c), and the fact that the inclusion \(\mathcal{B}_{\mathbb{Z}}(R)\cup\mathcal{B}_{\mathbb{Z}}s\mathcal{B}_{\mathbb{Z}}(R)\hookrightarrow\mathcal{P}_{s,\mathbb{Z}}(R)\) is surjective when \(R\) is any field, but we warn that this equality fails for general \(R\), in particular for \(R=\mathbb{Z}\)). We have an identification \(\mathbb{P}_{\mathbb{Z}}^{1}=\mathcal{P}_{s,\mathbb{Z}}/\mathcal{B}_{\mathbb{Z}}\). Furthermore, the foregoing shows we have an open immersion \(\mathbb{A}_{\mathbb{Z}}^{1}=\mathcal{B}_{\mathbb{Z}}s\mathcal{B}_{\mathbb{Z}}/\mathcal{B}_{\mathbb{Z}}\hookrightarrow\mathbb{P}_{\mathbb{Z}}^{1}\) with closed complement \(\mathbb{A}_{\mathbb{Z}}^{0}=\mathcal{B}_{\mathbb{Z}}/\mathcal{B}_{\mathbb{Z}}\hookrightarrow\mathbb{P}_{\mathbb{Z}}^{1}\). The BN-pair relations hold:

**Lemma 11.14**.: _For any \(w\in W\) and \(s\in S_{\mathrm{aff}}\), we have equalities of sub-ind-schemes in \(LG_{\mathbb{Z}}\)_

\[\mathcal{B}_{\mathbb{Z}}w\mathcal{B}_{\mathbb{Z}}s\mathcal{B}_{\mathbb{Z}}=\begin{cases}\mathcal{B}_{\mathbb{Z}}ws\mathcal{B}_{\mathbb{Z}},&\text{if }w<ws\\ \mathcal{B}_{\mathbb{Z}}w\mathcal{B}_{\mathbb{Z}}\cup\mathcal{B}_{\mathbb{Z}}ws\mathcal{B}_{\mathbb{Z}},&\text{if }ws<w.\end{cases}\]

Proof.: Both cases are proved by induction on \(\ell(w)\). The first case follows from the case of fields and Lemma 11.8(c). For the second case, it is enough to prove the result for \(w=s\). But \(\mathcal{B}_{\mathbb{Z}}s\mathcal{B}_{\mathbb{Z}}s\mathcal{B}_{\mathbb{Z}}=\mathcal{B}_{\mathbb{Z}}s\mathcal{B}_{\mathbb{Z}}\cup\mathcal{B}_{\mathbb{Z}}=\mathcal{P}_{s,\mathbb{Z}}\) follows because \(\mathcal{P}_{s,\mathbb{Z}}\) is a group subscheme of \(LG_{\mathbb{Z}}\) and \(s\mathcal{B}_{\mathbb{Z}}s\not\subset\mathcal{B}_{\mathbb{Z}}\). 

Let \(w=s_{1}\cdots s_{r}\) be a reduced word in \(W\).
Consider the Demazure morphism given by projecting to the final coordinate \[m_{s_{\bullet},\mathbb{Z}}:D(s_{\bullet})_{\mathbb{Z}}:=\mathcal{P}_{s_{1},\mathbb{Z}}\times^{\mathcal{B}_{\mathbb{Z}}}\mathcal{P}_{s_{2},\mathbb{Z}}\times^{\mathcal{B}_{\mathbb{Z}}}\ldots\times^{\mathcal{B}_{\mathbb{Z}}}\mathcal{P}_{s_{r},\mathbb{Z}}/\mathcal{B}_{\mathbb{Z}}\to X_{\mathcal{B},\mathbb{Z}}(w).\] The image lies in \(X_{\mathcal{B},\mathbb{Z}}(w)\) by flatness and properness, and by the fact that this holds over \(\mathbb{Q}\). By the BN-pair relations, it gives an isomorphism over \(Y_{\mathcal{B},\mathbb{Z}}(w)\). Furthermore, it implies the closure relations \[X_{\mathcal{P},\mathbb{Z}}(w)=\coprod_{v}Y_{\mathcal{P},\mathbb{Z}}(v) \tag{11.7}\] where \(v\in W_{\mathcal{P}}\backslash W/W_{\mathcal{P}}\) is such that \(v\leq w\) in the Bruhat order on \(W_{\mathcal{P}}\backslash W/W_{\mathcal{P}}\). In particular, we see \(\widetilde{\mathcal{P}_{\mathbb{Z}}w\mathcal{P}_{\mathbb{Z}}}=\coprod_{v}\mathcal{P}_{\mathbb{Z}}v\mathcal{P}_{\mathbb{Z}}\). Here and in (11.7) the union indicates a union of locally closed subschemes, and every subscheme appearing is reduced by construction. With the existence of Demazure resolutions over \(\mathbb{Z}\) in hand, one can prove the following result by copying the argument of [11, Prop. 3.4] (Demazure resolutions over \(\mathbb{Z}\) are used to prove that Schubert varieties attached to simply-connected groups over a field \(k\) are normal, following the argument in [10, SS9]). **Corollary 11.15**.: _For any field \(k\), \((X_{\mathcal{P},\mathbb{Z}}(w)\otimes_{\mathbb{Z}}k)_{\mathrm{red}}=X_{\mathcal{P},k}(w)\). Further, \(X_{\mathcal{P},\mathbb{Z}}(w)\otimes_{\mathbb{Z}}k\) is reduced if and only if \(X_{\mathcal{P},k}(w)\) is normal._ #### 11.2.7. Convolution morphisms over \(\mathbb{Z}\) Given the BN-pair relations involving subschemes of \(LG_{\mathbb{Z}}\), we have the following. **Lemma 11.16**.: _For any \(w_{\bullet}=(w_{1},w_{2},\ldots,w_{r})\in W^{r}\) with Demazure product \(w_{*}:=w_{1}*w_{2}*\cdots*w_{r}\), the convolution morphism \(X_{\mathcal{B},\mathbb{Z}}(w_{\bullet})\to\operatorname{Fl}_{\mathcal{B},\mathbb{Z}}\) given by multiplying the coordinates has image \(X_{\mathcal{B},\mathbb{Z}}(w_{*})\)._ 
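The Demazure product \(w_{*}\) in Lemma 11.16 is computed letter by letter from reduced words by the rule \(w*s=ws\) if \(\ell(ws)>\ell(w)\) and \(w*s=w\) otherwise. A minimal Python sketch of this rule, using the finite symmetric group as a stand-in for the (affine) Weyl group \(W\); all function names here are illustrative, not part of any library.

```python
def length(w):
    """Coxeter length of a permutation w (one-line notation) = number of inversions."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def right_mult_s(w, i):
    """Right multiplication of w by the simple transposition s_i = (i, i+1)."""
    w = list(w)
    w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def demazure_product(words, n):
    """Demazure product of elements of S_n given by reduced words in s_0..s_{n-2}.

    Applies w * s = ws if l(ws) > l(w), and w * s = w otherwise, letter by
    letter over the concatenation of the reduced words.
    """
    w = tuple(range(n))  # identity permutation
    for word in words:
        for i in word:
            ws = right_mult_s(w, i)
            if length(ws) > length(w):
                w = ws
    return w

# In S_3: s_0 * s_0 = s_0 (not the identity), while the reduced product s_0 s_1 survives.
print(demazure_product([[0], [0]], 3))  # (1, 0, 2), i.e. s_0
print(demazure_product([[0], [1]], 3))  # (1, 2, 0), i.e. s_0 s_1
```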
This paper proves that, in the case of split groups over arbitrary fields, the fibers of convolution morphisms are paved by products of affine lines. The result applies in particular to the affine Grassmannian and to the convolution morphisms appearing in the context of the geometric Satake correspondence. The second half of the paper extends these results over \(\mathbb{Z}\); they are related to the recent work of Cass-van den Hove-Scholbach on the geometric Satake equivalence for integral motives, and they provide alternative proofs of some of their results.
2309.15106
Improved constraints for axion-like particles from 3-photon events at $e^+e^-$ colliders
Axions and axion-like particles (ALPs) are one of the most widely discussed extensions of the Standard Model when it comes to the strong CP problem and dark matter candidates. Current experiments are focused on the indirect searches of invisible pseudoscalars in a wide parameter range. In this paper we investigate limits on the ALP mass and its couplings to photons and leptons from 3-photon annihilation at $e^+e^-$ colliders. We provide detailed calculations and apply them to the particular kinematics of the Belle II experiment, covering the ALP mass range from a few hundred MeV to around 10 GeV. Our results, which improve upon previous analyses by also including the ALP coupling to electrons, show that such future analyses will allow one to significantly extend the ALP search range and impose much more stringent restrictions on their couplings.
Aleksandr Pustyntsev, Marc Vanderhaeghen
2023-09-26T17:54:38
http://arxiv.org/abs/2309.15106v2
# Improved constraints for axion-like particles from 3-photon events at \(e^{+}e^{-}\) colliders ###### Abstract Axions and axion-like particles (ALPs) are one of the most widely discussed extensions of the Standard Model when it comes to the strong CP problem and dark matter candidates. Current experiments are focused on the indirect searches of invisible pseudoscalars in a wide parameter range. In this paper we investigate limits on the ALP mass and its couplings to photons and leptons from 3-photon annihilation at \(e^{+}e^{-}\) colliders. We provide detailed calculations and apply them to the particular kinematics of the Belle II experiment, covering the ALP mass range from a few hundred MeV to around 10 GeV. Our results, which improve upon previous analyses by also including the ALP coupling to electrons, show that such future analyses will allow one to significantly extend the ALP search range and impose much more stringent restrictions on their couplings. ## I Introduction Initially proposed in 1977, the Peccei-Quinn theory is so far considered to be the most compelling resolution of the strong CP problem [1; 2]. In this model a CP-violating phase is dynamically driven to zero, giving rise to a new pseudoscalar particle called the axion [3; 4]. During the last four decades numerous attempts have been made to find a signal of this particle, including both lab searches and astronomical observations [5; 6]. Current constraints show that the QCD axion (in case it exists) must be very weakly interacting and thus is called "invisible", which forces one to concentrate on the possible indirect detection of this particle [7; 8]. A key property of the QCD axion is the linear proportionality between its couplings to the Standard Model particles and the axion mass [9]. The exact relation is model-dependent and usually refers to the KSVZ [10; 11] or DFSZ [12; 13] mechanisms. With the current limits taken into account, both scenarios result in a very small axion mass, \(m_{a}\lesssim 10^{-3}\) eV [14]. However, recent studies report the possibility of an MeV mass range for the QCD axion that is not yet excluded by experiments [15]. Given the significance of the problem it is important to ensure that there are no loopholes left at this scale and to reinvestigate the parameter space in the MeV to GeV range. In addition to the QCD axion mechanism, various Standard Model extensions with axion-like particles have been proposed [16; 17; 18]. The main difference to the original model is that ALPs are not restricted to a linear mass-coupling relation. During the past few years there has been increasing interest in the MeV to GeV range [19; 20; 21; 22]. Furthermore, ALPs are considered to be promising dark matter candidates, as they are both very long-lived and weakly interacting, with a mass unconstrained by their interactions with other particles [23; 24]. In this work we investigate the mass and coupling constraints of ALPs in the MeV to GeV range from 3-photon events in \(e^{+}e^{-}\) annihilation. We focus on the kinematical setting of the Belle II experiment. Section 2 provides a general overview of the given formalism with the discussion of couplings, matrix elements and cross sections. Section 3 illustrates the main results and provides predictions for Belle II kinematics and constraints which follow from the calculated processes. Section 4 summarizes our work. ## II ALP formalism In this work we assume that ALPs in the MeV to GeV mass range couple predominantly to photons and electrons, i.e. decay only to visible states. 
The following section provides a short review of the relevant ALP interactions. The parameter space includes three variables: the ALP mass \(m_{a}\) and its couplings to photons and electrons, which are denoted by \(g_{a\gamma\gamma}\) and \(g_{aee}\), respectively. We detail the calculations of ALP contributions to 2- and 3-photon annihilation of \(e^{+}e^{-}\) pairs. ### Interaction with photons In this section we analyze the interaction of ALPs with the electromagnetic field. The corresponding effective Lagrangian has the form \[\mathcal{L}_{a\gamma\gamma}=-\frac{g_{a\gamma\gamma}}{4}aF^{\mu\nu}\tilde{F}_{\mu\nu}, \tag{1}\] where \(a\) stands for the pseudoscalar ALP field, \(F^{\mu\nu}\) is the electromagnetic field tensor with the corresponding dual pseudotensor \(\tilde{F}_{\mu\nu}=\frac{1}{2}\varepsilon_{\mu\nu\lambda\sigma}F^{\lambda\sigma}\), and \(g_{a\gamma\gamma}\) is the coupling constant of dimension \(\text{GeV}^{-1}\). The matrix element for the \(a\to 2\gamma\) decay shown in Fig. 1 is given by \[\begin{split} M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right)&=-ig_{a\gamma\gamma}k_{1,\kappa}k_{2,\beta}\varepsilon^{\kappa\beta\mu\nu}\\ &\times\epsilon_{\mu}^{\ast}\left(k_{1},\lambda_{1}\right)\epsilon_{\nu}^{\ast}\left(k_{2},\lambda_{2}\right),\end{split} \tag{2}\] where \(\epsilon_{\mu}\left(k_{1},\lambda_{1}\right)\) and \(\epsilon_{\mu}\left(k_{2},\lambda_{2}\right)\) are the polarization vectors of the photons with 4-momenta \(k_{1}\), \(k_{2}\) and helicities \(\lambda_{1}\), \(\lambda_{2}\), respectively. Summing over the final helicities, we obtain \[\sum_{f}\left|M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right)\right|^{2}=2g_{a\gamma\gamma}^{2}\left(k_{1}k_{2}\right)^{2}. \tag{3}\] The corresponding decay width is then obtained as \[\Gamma_{a\gamma\gamma}=\frac{g_{a\gamma\gamma}^{2}m_{a}^{3}}{64\pi}. \tag{4}\] ### Interaction with leptons We next discuss the ALP-fermion coupling and the corresponding decay rate. The generic interaction of ALPs with fermions is of the form \[\mathcal{L}_{aff}=-\frac{g_{aff}}{2m_{f}}\partial_{\mu}a\bar{f}\gamma^{5}\gamma^{\mu}f, \tag{5}\] where \(f\) stands for the fermion field, \(m_{f}\) denotes its mass and \(g_{aff}\) is the dimensionless coupling constant. From \(\mathcal{L}_{aff}\) it is clear that lepton universality requires a large enhancement of the ALP coupling to the muon, namely \[g_{a\mu\mu}\approx\frac{m_{\mu}}{m_{e}}g_{aee}. \tag{6}\] In this paper we follow the work of Alves and Wiener [15] and consider ALPs coupled only to electrons in order to avoid effects induced by this enhanced coupling, such as corrections to the muon anomalous magnetic moment \(\left(g-2\right)_{\mu}\). At tree level \(\mathcal{L}_{aff}\) can be equivalently reduced to a pseudoscalar coupling \[\mathcal{L}_{aff}=-ig_{aff}\,a\bar{f}\gamma^{5}f. \tag{7}\] We are interested in the \(a\to e^{+}e^{-}\) decay shown in Fig. 2, which has the amplitude \[M_{a\rightarrow e^{+}e^{-}}=g_{aee}\bar{u}\left(p_{-},s_{-}\right)\gamma^{5}v\left(p_{+},s_{+}\right), \tag{8}\] where \(u\left(p_{-},s_{-}\right)\) and \(v\left(p_{+},s_{+}\right)\) are the bispinors describing the electron and positron with momenta \(p_{\pm}\) and helicities \(s_{\pm}\), respectively. In the domain of interest we can assume \(m_{e}\ll m_{a}\). After summing over the final helicities, the squared amplitude for this process is given by \[\sum_{f}\left|M_{a\to e^{+}e^{-}}\right|^{2}=2g_{aee}^{2}m_{a}^{2}. \tag{9}\] 
The corresponding decay width then has the form \[\Gamma_{aee}=\frac{g_{aee}^{2}m_{a}}{8\pi}. \tag{10}\] In the absence of interaction with other fields, the total ALP decay width is assumed to consist of two contributions \[\Gamma_{a}=\Gamma_{aee}+\Gamma_{a\gamma\gamma}. \tag{11}\] ### ALP production at \(e^{+}e^{-}\) colliders An ALP contributes to the 2-photon annihilation of \(e^{+}e^{-}\) through the diagram shown in Fig. 3. The matrix element is \[\begin{split} M_{e^{+}e^{-}\rightarrow\gamma\gamma}&=ig_{aee}\frac{\bar{v}\left(p_{+},s_{+}\right)\gamma^{5}u\left(p_{-},s_{-}\right)}{s-m_{a}^{2}+im_{a}\Gamma_{a}}\\ &\times M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right),\end{split} \tag{12}\] with \(M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right)\) given in Eq. (2). As a function of \(m_{a}\), this cross section is significantly different from zero only in a small region around \(m_{a}^{2}=s=4E^{2}\), where \(E\) denotes the initial electron (positron) energy in the center-of-momentum frame. Thus, for a fixed collider energy, the \(e^{+}e^{-}\rightarrow\gamma\gamma\) process does not provide constraints on the ALP parameters in a broad \((m_{a},g_{a\gamma\gamma},g_{aee})\) parameter space. Therefore, in the following we investigate 3-photon final states. Figure 1: ALP decay to two photons in the lowest order. Figure 2: ALP decay to the lepton-antilepton pair in the lowest order. Fig. 4 shows the contribution of the ALP-photon coupling resulting in 3-photon events. The corresponding amplitudes are given by \[M_{e^{+}e^{-}\rightarrow\gamma\gamma\gamma}(ALP_{1})=i\frac{H_{e^{+}e^{-}\rightarrow\gamma^{*}\to a\gamma}\left(k_{1}\right)}{K_{23}^{2}-m_{a}^{2}+im_{a}\Gamma_{a}} \tag{13}\] \[\times M_{a\rightarrow\gamma\gamma}\left(k_{2},k_{3}\right)+\text{crossed terms},\] where \(H_{e^{+}e^{-}\rightarrow\gamma^{*}\to a\gamma}\) stands for the \(e^{+}e^{-}\to a\gamma_{i}\) amplitude \[H_{e^{+}e^{-}\rightarrow\gamma^{*}\to a\gamma}\left(k_{i}\right)=-ieg_{a\gamma\gamma}\,\varepsilon_{\alpha\beta\mu\gamma}q^{\alpha}k_{i}^{\beta}\epsilon^{\gamma}\left(k_{i},\lambda_{i}\right) \tag{14}\] \[\times\frac{\bar{v}\left(p_{+},s_{+}\right)\gamma^{\mu}u\left(p_{-},s_{-}\right)}{s},\] with \(e\) being the positron charge and the internal photon 4-momentum \(q=p_{+}+p_{-}\). We denote the ALP 4-momentum as \(K_{23}=k_{2}+k_{3}\). It is generally assumed that ALPs are long-lived particles, i.e. their decay width \(\Gamma_{a}\) is a small quantity, typically much smaller than the experimental resolution of the invariant mass of the two-photon system in which the ALP decays. Thus the integration over the phase space gives the main contribution only in a very small range of variables, where the squared invariant mass of the photon pair produced by the ALP is close to \(m_{a}^{2}\). In such kinematics the interference terms become unobservable and can be omitted. After the integration over the phase space, the total cross section can be represented as the cross section obtained from only the Feynman diagrams shown in Fig. 4, multiplied by a factor of three to account for the 3 channels. Thus we obtain \[\begin{split}&\sum_{i}\sum_{f}\left|H_{e^{+}e^{-}\rightarrow\gamma^{*}\to a\gamma}\left(k_{1}\right)\right|^{2}\\ &=\frac{2e^{2}g_{a\gamma\gamma}^{2}}{s^{2}}\left[\left(k_{1}p_{+}\right)^{2}+\left(k_{1}p_{-}\right)^{2}\right]\left(p_{-}p_{+}\right),\end{split} \tag{15}\] where \(\sum_{i}\sum_{f}\) denotes the average over initial helicity states and the sum over final ones. 
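The decay widths of Eqs. (4) and (10) and the total width of Eq. (11) are simple enough to check numerically. A minimal Python sketch (illustrative, with the couplings in the units stated in the text) reproduces the two-photon branching fractions quoted later in Sec. III:

```python
import math

def width_agg(g_agg, m_a):
    """Two-photon width of Eq. (4): g^2 m^3 / (64 pi), g in GeV^-1, m in GeV."""
    return g_agg**2 * m_a**3 / (64.0 * math.pi)

def width_aee(g_aee, m_a):
    """Electron-positron width of Eq. (10), valid for m_e << m_a."""
    return g_aee**2 * m_a / (8.0 * math.pi)

def branching_gg(g_agg, g_aee, m_a):
    """Two-photon branching fraction, using the total width of Eq. (11)."""
    w_gg = width_agg(g_agg, m_a)
    return w_gg / (w_gg + width_aee(g_aee, m_a))

# Reproduces the benchmark values quoted in Sec. III for
# g_agg = 1e-4 GeV^-1 and g_aee = 1e-4:
print(round(branching_gg(1e-4, 1e-4, 0.3), 2))  # 0.01
print(round(branching_gg(1e-4, 1e-4, 3.0), 2))  # 0.53
```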
We next discuss the 3-photon production in \(e^{+}e^{-}\) annihilation which results from the ALP-electron coupling contribution, shown in Fig. 5. The corresponding amplitudes can be expressed as \[\begin{split}& M_{e^{+}e^{-}\rightarrow\gamma\gamma\gamma}\left(ALP_{2}\right)\\ &=i\frac{H_{e^{+}e^{-}\to a\gamma,1}\left(k_{1}\right)+H_{e^{+}e^{-}\to a\gamma,2}\left(k_{1}\right)}{K_{23}^{2}-m_{a}^{2}+im_{a}\Gamma_{a}}\\ &\times M_{a\rightarrow\gamma\gamma}\left(k_{2},k_{3}\right)+\text{crossed terms},\end{split} \tag{16}\] where \(H_{e^{+}e^{-}\to a\gamma,i}\) denote the amplitudes for the corresponding \(2\to 2\) process \[H_{e^{+}e^{-}\to a\gamma,1}\left(k_{i}\right)=eg_{aee}\,\epsilon_{\eta}^{*}\left(k_{i},\lambda_{i}\right)\bar{v}\left(p_{+},s_{+}\right)\gamma^{\eta}\frac{\not{l}_{i}}{l_{i}^{2}}\gamma^{5}u\left(p_{-},s_{-}\right), \tag{17}\] \[H_{e^{+}e^{-}\to a\gamma,2}\left(k_{i}\right)=eg_{aee}\,\epsilon_{\lambda}^{*}\left(k_{i},\lambda_{i}\right)\bar{v}\left(p_{+},s_{+}\right)\gamma^{5}\frac{\not{f}_{i}}{f_{i}^{2}}\gamma^{\lambda}u\left(p_{-},s_{-}\right), \tag{18}\] with the internal electron momenta \[l_{i}=k_{i}-p_{+},\quad f_{i}=p_{-}-k_{i}. \tag{19}\] It is worth noticing that there is no interference between this set of diagrams and the diagrams shown in Fig. 4. Using the same arguments as before, we conclude that for the cross section calculation we only need to evaluate the two topologies shown in Fig. 5, as \[\begin{split}&\sum_{i}\sum_{f}\left|H_{e^{+}e^{-}\to a\gamma,1}\left(k_{1}\right)+H_{e^{+}e^{-}\to a\gamma,2}\left(k_{1}\right)\right|^{2}\\ &=e^{2}g_{aee}^{2}\left(\frac{p_{-}k_{1}}{p_{+}k_{1}}+\frac{p_{+}k_{1}}{p_{-}k_{1}}+2\frac{\left(p_{+}K_{23}\right)\left(p_{-}K_{23}\right)}{\left(p_{-}k_{1}\right)\left(p_{+}k_{1}\right)}\right).\end{split} \tag{20}\] Figure 3: \(e^{+}e^{-}\) annihilation into two photons through an intermediate ALP. Figure 4: \(e^{+}e^{-}\) annihilation into three photons involving the \(g_{a\gamma\gamma}\) coupling. Graphs obtained from these by crossing are not shown, but are evaluated too. ### Cross section and observables The cross section of the \(e^{+}e^{-}\rightarrow\gamma\gamma\gamma\) process is given by the expression \[\begin{split}\sigma&=\frac{1}{3!}\int d_{LIPS}\left(2\pi\right)^{4}\delta^{\left(4\right)}\left(p_{-}+p_{+}-k_{1}-k_{2}-k_{3}\right)\\ &\times\frac{1}{2s}\sum_{i}\sum_{f}\left|M_{e^{+}e^{-}\to\gamma\gamma\gamma}\right|^{2},\end{split} \tag{21}\] where \(d_{LIPS}\) stands for the Lorentz-invariant phase space of the three final photons \[d_{LIPS}=\frac{d^{3}k_{1}}{2\omega_{1}\left(2\pi\right)^{3}}\frac{d^{3}k_{2}}{2\omega_{2}\left(2\pi\right)^{3}}\frac{d^{3}k_{3}}{2\omega_{3}\left(2\pi\right)^{3}}. \tag{22}\] 
After the integration with the delta function, the phase space can be expressed as \[\begin{split}& d_{LIPS}\left(2\pi\right)^{4}\delta^{\left(4\right)}\left(p_{-}+p_{+}-k_{1}-k_{2}-k_{3}\right)\\ &=\frac{1}{2^{8}\pi^{5}}\frac{\omega_{1}\omega_{2}}{2E+\omega_{1}\left(\cos\theta_{12}-1\right)}d\omega_{1}d\Omega_{1}d\Omega_{2},\end{split} \tag{23}\] with \(\theta_{12}\) denoting the angle between the \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\) momenta. The remaining phase space is parameterized as \[d\Omega_{1}d\Omega_{2}=2\pi d\phi\,d\cos\theta_{1-}d\cos\theta_{2-}, \tag{24}\] where \(\theta_{i-}\) is the angle between \(\mathbf{p}_{-}\) and \(\mathbf{k}_{i}\), which leads to \[\cos\theta_{12}=\sin\theta_{1-}\sin\theta_{2-}\cos\phi+\cos\theta_{1-}\cos\theta_{2-}. \tag{25}\] Furthermore, in the center-of-momentum frame it holds \[\begin{cases}\omega_{1}+\omega_{2}+\omega_{3}=2E,\\ \mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}=0,\end{cases} \tag{26}\] allowing to express \(\omega_{2}\) as \[\omega_{2}=\frac{2E\left(E-\omega_{1}\right)}{2E+\omega_{1}\left(\cos\theta_{12}-1\right)}. \tag{27}\] For the ALP-associated process, the photon which is opposite to the ALP in the center-of-momentum frame is denoted by \(k_{1}\). In this case, we can remove the integration over \(d\omega_{1}\) using the narrow-width approximation \[\begin{split}&\frac{1}{\left(K_{23}^{2}-m_{a}^{2}\right)^{2}+\left(m_{a}\Gamma_{a}\right)^{2}}\to\frac{\pi}{m_{a}\Gamma_{a}}\delta\left(K_{23}^{2}-m_{a}^{2}\right)\\ &=\frac{\pi}{m_{a}\Gamma_{a}}\frac{1}{4E}\,\delta\left(\omega_{1}-\frac{4E^{2}-m_{a}^{2}}{4E}\right).\end{split} \tag{28}\] Due to the resonant behavior of the amplitude, one photon is always emitted with a fixed energy \[\omega=\frac{4E^{2}-m_{a}^{2}}{4E}. \tag{29}\] We note that, since the branching fraction \(\Gamma_{a\gamma\gamma}/\Gamma_{a}\) is always less than 1, the total cross section of the process under investigation with an intermediate ALP may actually become smaller for a non-zero value of \(g_{aee}\), compared to when this quantity is equal to zero. After the integration over the full phase space, the cross section of the \(2\to 3\) process with the intermediate ALP can be written in the compact form \[\sigma_{e^{+}e^{-}\to\gamma\gamma\gamma}=\sigma_{e^{+}e^{-}\to a\gamma}\times\frac{\Gamma_{a\gamma\gamma}}{\Gamma_{a}}. \tag{30}\] If \(g_{aee}=0\), the ALP decays directly to photons and this formula can be simplified further (notably, it is independent of \(s\) if \(m_{a}^{2}\ll s\)) as \[\sigma_{e^{+}e^{-}\to a\gamma}=\frac{\alpha g_{a\gamma\gamma}^{2}}{24}\left(1-\frac{m_{a}^{2}}{s}\right)^{3}, \tag{31}\] where \(\alpha\equiv e^{2}/4\pi\). For a realistic detector, one has to integrate the cross section formula over the phase space, restricted by the experimental setup, as discussed below. Figure 5: \(e^{+}e^{-}\) annihilation into three photons involving the \(g_{aee}\) coupling. Graphs which are obtained by crossing are not shown, but are evaluated too. ## III Results and discussion The ALP signal detection strategy can be based on searches for a narrow peak in the invariant mass distribution \(m_{\gamma\gamma}\) of photon pairs, or a narrow peak in the photon energy distributions, due to the fact that the photon which accompanies the ALP is always monoenergetic in this process. If no significant ALP signal is observed, it is possible to constrain the ALP parameters in the corresponding mass range. 
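As a numerical illustration of Eqs. (30)-(31) over the full phase space, before any detector cuts, the following Python sketch evaluates the signal cross section at the Belle II center-of-momentum energy; the only ingredient not in the text is the standard conversion factor \((\hbar c)^{2}=0.3894\ \text{GeV}^{2}\,\text{mbarn}\).

```python
import math

GEV2_TO_PB = 3.894e8   # (hbar c)^2 = 0.3894 GeV^2 mbarn = 3.894e8 GeV^2 pb
ALPHA = 1.0 / 137.036  # fine-structure constant (approximate)

def branching_gg(g_agg, g_aee, m_a):
    """BR(a -> gamma gamma) from Eqs. (4), (10) and (11)."""
    w_gg = g_agg**2 * m_a**3 / (64.0 * math.pi)
    w_ee = g_aee**2 * m_a / (8.0 * math.pi)
    return w_gg / (w_gg + w_ee)

def sigma_signal_pb(g_agg, g_aee, m_a, s):
    """Eq. (30) with the production cross section of Eq. (31), in picobarns."""
    prod = ALPHA * g_agg**2 / 24.0 * (1.0 - m_a**2 / s) ** 3  # GeV^-2
    return prod * branching_gg(g_agg, g_aee, m_a) * GEV2_TO_PB

s = (2.0 * 5.29) ** 2  # Belle II: each beam carries E = 5.29 GeV in the CM frame
print(sigma_signal_pb(1e-4, 0.0, 1.0, s))   # g_aee = 0, so BR = 1
print(sigma_signal_pb(1e-4, 1e-4, 1.0, s))  # a non-zero g_aee reduces the signal
```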
In this section we illustrate our results with exclusion plots for the kinematics of an \(e^{+}e^{-}\) collider. For this purpose, we first split the total \(e^{+}e^{-}\to\gamma\gamma\gamma\) cross section into three terms \[\sigma_{ALP+B}=\sigma_{ALP}+\sigma_{B}+\sigma_{int}, \tag{32}\] with \(\sigma_{ALP}\) referring to the ALP-associated process (shown in Figs. 4 and 5), while \(\sigma_{B}\) is the background. The interference term \(\sigma_{int}\) does not contribute, since the ALP decay width \(\Gamma_{a}\) is assumed to be much smaller than the experimental resolution of the invariant mass of the final photon pair. The dominant part of the background originates from QED 3-photon annihilation [19], i.e. \(\sigma_{B}=\sigma_{QED}\). The target sensitivity is then expressed by the formula \[\frac{\sigma_{ALP}}{\sigma_{QED}}=\frac{N}{\sqrt{L\cdot\sigma_{QED}}}, \tag{33}\] where \(L\) denotes the integrated luminosity and \(N\) is the number of standard deviations that determines whether or not a fluctuation is considered as a signal. We conventionally set \(N=2\), which corresponds to a 95% confidence level. In our study we neglect the potential hadronic background from \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) mesons. In a complete analysis, however, those contributions must be included. Therefore, the parameter space for \(m_{a}\) in the vicinity of the \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) masses can be expected to be modified. The \(e^{+}e^{-}\to a\gamma\to\gamma\gamma\gamma\) cross section is a function of three variables. For purposes of illustration, we use the available independent constraints for \(g_{aee}\) to show two-dimensional projections of \(g_{a\gamma\gamma}\) as a function of \(m_{a}\). Experimental searches in the MeV to GeV region are mostly focused on the ALP-muon interaction [22] and are therefore not able to constrain \(g_{aee}\). However, it is possible to convert constraints on visibly decaying dark photons to limits on the ALP-electron mixing [15]. Indeed, the processes \(X\to e^{+}e^{-}\) and \(a\to e^{+}e^{-}\) achieve comparable signal strengths in the case \(g_{Xee}\sim g_{aee}\). This relation, of course, is only approximate, since the two processes have different angular distributions, but using it one can estimate \(g_{aee}\lesssim 10^{-4}\) [22]. In the following, we discuss the reach on \(m_{a}\), \(g_{aee}\) and \(g_{a\gamma\gamma}\) which can be obtained from \(e^{+}e^{-}\to\gamma\gamma\gamma\) data that are already available from the Belle II experiment or are expected from future running. ### Belle II kinematics To obtain the exclusion plots for Belle II kinematics, we start by discussing the detector acceptance. Belle II is an asymmetric collider, for which the electron and positron have energies of \(7\,\mathrm{GeV}\) and \(4\,\mathrm{GeV}\), respectively. This requires a boost with a relative velocity \(\beta\approx 0.27\) to the center-of-momentum frame, where the particles have energies of \(E=5.29\,\mathrm{GeV}\). The angular coverage of the electromagnetic calorimeter in the lab frame is \(12.4^{\circ}<\theta<155.1^{\circ}\). The angular region \(37.3^{\circ}<\theta<123.7^{\circ}\) provides the best energy resolution, avoiding regions close to detector gaps, and offers the lowest beam background levels [25]. Following the work of [19], we set a photon energy selection threshold of \(0.25\,\mathrm{GeV}\) in the center-of-momentum frame. 
Our analysis requires all three photons to be in this acceptance range and, unless otherwise specified, these experimental cuts are used for all the plots shown below. The angular distributions for the ALP process are presented in Fig. 6 for two different values of \(m_{a}\) and two different values of \(g_{aee}\). For a given \(g_{aee}\), there is more than an order of magnitude difference between the \(m_{a}=0.3\,\mathrm{GeV}\) and \(m_{a}=3\,\mathrm{GeV}\) curves, due to the fact that for a relatively light ALP the decay width is dominated by the \(a\to e^{+}e^{-}\) channel, see Eqs. (4) and (10). For the particular case of \(g_{a\gamma\gamma}=10^{-4}\,\mathrm{GeV}^{-1}\) and \(g_{aee}=10^{-4}\), one obtains \[\frac{\Gamma_{a\gamma\gamma}}{\Gamma_{a}} \approx 0.01,\quad\text{for}\quad m_{a}=0.3\,\mathrm{GeV},\] \[\frac{\Gamma_{a\gamma\gamma}}{\Gamma_{a}} \approx 0.53,\quad\text{for}\quad m_{a}=3\,\mathrm{GeV}.\] ### QED background We next discuss the QED background process. The cross section of leading order QED \(e^{+}e^{-}\) annihilation into 3 photons in the massless electron limit is given by [26] \[\begin{split}&\sum_{i}\sum_{f}\left|M_{e^{+}e^{-}\to\gamma\gamma\gamma\,(QED)}\right|^{2}=s\left(4\pi\alpha\right)^{3}\\ &\quad\times\frac{\sum_{i=1}^{3}\left(p_{+}k_{i}\right)\left(p_{-}k_{i}\right)\left[\left(p_{+}k_{i}\right)^{2}+\left(p_{-}k_{i}\right)^{2}\right]}{\prod_{i=1}^{3}\left(p_{+}k_{i}\right)\left(p_{-}k_{i}\right)}.\end{split} \tag{34}\] For the total cross section an additional factor \(1/3!\) must be added due to the 3 identical bosons in the final state. Fig. 7 shows the corresponding QED background angular and energy distributions. In contrast to the ALP-related process (see Fig. 6), which exhibits a rather uniform angular distribution, the QED three-photon annihilation is characterized by an enhanced angular distribution in both the forward and backward directions. The presence of a distinct peak in the photon energy distribution would serve as an indication of ALP creation. Figure 7: QED background distributions for the softest, middle and hardest photons in the \(e^{+}e^{-}\to\gamma\gamma\gamma\) process in Belle II kinematics. ### Belle II results from 2018 data set In the 2018 data run Belle II achieved an integrated luminosity of \(445\,\mathrm{pb}^{-1}\) [25], which was used for the ALP searches in a simplified way by converting the cross section limit to the coupling using Eq. (31). The latter formula does not take into consideration the fact that all three photons in the ALP-associated process must be detected in the acceptance range of the electromagnetic calorimeter. We require three resolved photons with energies higher than \(0.65\,\mathrm{GeV}\) in the center-of-momentum frame as a criterion for this event selection. These requirements are slightly different from those used in the Belle II report [25], where the selection of photons with energies above \(0.65\,\mathrm{GeV}\) (for \(m_{a}>4\,\mathrm{GeV}\)) and \(1\,\mathrm{GeV}\) (for \(m_{a}\leq 4\,\mathrm{GeV}\)) in the lab frame was performed. The difference is negligible since \(g_{a\gamma\gamma}\) is sensitive to \(\sigma_{QED}^{-1/4}\). Our result based on Eq. (31) is shown in Fig. 8 (left panel) by the black curve. It shows a good agreement with the analysis of [25] in the higher ALP mass region. In the lower mass region some deviations are seen. This is expected because in the case of a light ALP the invariant mass of a photon pair also becomes low, i.e. 
two photons travel in a very narrow cone with each other, opposite to the third photon. This produces very asymmetric kinematics, and the QED background becomes suppressed. In our analysis we do not take this into consideration, but a more detailed investigation can be performed in future work. ### Belle II projection from upcoming data collection Belle II is expected to reach an integrated luminosity of \(50\,\mathrm{ab}^{-1}\) after around 10 years of running. The resulting constraints which such future data would yield were investigated in [19] for the case where ALPs are only coupled to photons (i.e. for \(g_{aee}=0\)). The right panel of Fig. 8 shows the projected sensitivity in two scenarios: ALPs coupled only to photons, and ALPs coupled to photons and electrons with different \(g_{aee}\) coupling strengths. Our results for the scenario \(g_{aee}=0\) are in reasonably good agreement with the exclusion limits deduced in [19] in the high \(m_{a}\) region. For lower values of \(m_{a}\) one expects a similar deviation as for the 2018 Belle II data discussed above. Fig. 8 also shows that the inclusion of a non-zero interaction of ALPs with electrons significantly affects the final result, especially in the ALP mass range \(m_{a}\lesssim 2\,\mathrm{GeV}\). The assumption \(g_{aee}=0\) generally leads to an overestimated \(g_{a\gamma\gamma}\) limit, which may be incorrect if the ALP has other decay channels besides the 2-photon mode. In such a case, more detailed models with additional parameters are required to constrain invisible particles in a more rigorous way. Figure 8: Left panel: Belle II constraints for \(g_{a\gamma\gamma}\) based on the 2018 data set, with the analytical result shown by the black dashed curve. Right panel: projected results on the \((m_{a},g_{a\gamma\gamma})\) reach for the future data collection at Belle II corresponding to \(50\,\mathrm{ab}^{-1}\) of integrated luminosity. ## IV Conclusion In this paper we discussed ALPs coupled to electrons and photons in a minimal way. The contributions of ALP states to 2- and 3-photon \(e^{+}e^{-}\) annihilation events were calculated. In this way, we obtained new constraints for possible ALPs in the MeV to GeV mass range, which can be tested at \(e^{+}e^{-}\) colliders. Results were shown for Belle II kinematics, both from existing data and from forthcoming data with a projected integrated luminosity of \(50\,\mathrm{ab}^{-1}\). Our results indicate that the \(g_{a\gamma\gamma}\) limits can be vastly affected in the presence of an additional decay mode, especially in the lower \(m_{a}\) region. Using current best limits for \(g_{aee}\), it is possible to improve the \(g_{a\gamma\gamma}\) limits by at least an order of magnitude, which allows one to significantly narrow down the search area for potential ALPs and to test the possible solution of the strong CP problem in the MeV to GeV mass range. This result can be improved further if a better way to constrain \(g_{aee}\) independently is available. There are many possible ways to further extend this work. First of all, a more precise background modeling and experimental analysis must be performed in order to refine the exclusion plots, especially around the \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) masses. The comparison with the Belle II 2018 data analysis shows that with a more detailed background analysis it is possible to even further improve the \(g_{a\gamma\gamma}\) constraints in the \(m_{a}^{2}\ll s\) case. 
Furthermore, the ALP coupling to photons can be replaced with a more general model of ALPs interacting with the electroweak sector of the Standard Model, as discussed e.g. in [19]. This implies the interaction of ALPs with Z-bosons, which was not taken into consideration here. In the presence of such a coupling, the exclusion plots will be modified accordingly. Additionally, in this paper we assumed that ALPs interact only with electrons and photons. Hidden decay channels were not considered. However, the contribution of light dark matter particles of sub-GeV masses may change the obtained constraints if the pair production threshold is surpassed. At the same time, the inclusion of such particles makes it more complicated to set any constraints, as new free parameters appear. Finally, we restricted ourselves to ALPs which are not coupled to muons (and taus). However, lepton universality leads to an increase in the coupling constant \(g_{a\mu\mu}\) by around two orders of magnitude compared to \(g_{aee}\). This may notably affect the results if \(m_{a}\) is larger than \(2m_{\mu}\). The parameter space in such a case will get additional restrictions from the requirement of compatibility with current \((g-2)_{\mu}\) data [27; 28]. ###### Acknowledgements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), in part through the Research Unit [Photon-photon interactions in the Standard Model and beyond, Projektnummer 458854507 - FOR 5327], and in part through the Cluster of Excellence [Precision Physics, Fundamental Interactions, and Structure of Matter] (PRISMA\({}^{+}\) EXC 2118/1) within the German Excellence Strategy (Project ID 39083149).
Axions and axion-like particles (ALPs) are among the most widely discussed extensions of the Standard Model in connection with the strong CP problem and dark matter candidates. Current experiments focus on indirect searches for invisible pseudoscalars over a wide parameter range. In this paper, we investigate limits on the ALP mass and its couplings to photons and leptons from 3-photon annihilation at $e^+e^-$ colliders. We provide detailed calculations and apply them to the particular kinematics of the Belle II experiment, covering the ALP mass range from a few hundred MeV to around 10 GeV. Our results, which improve upon previous analyses by also including the ALP coupling to electrons, show that such future analyses will allow the ALP search range to be significantly extended and much more stringent restrictions to be imposed on the couplings.
2309.12669
HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering
Answering numerical questions over hybrid contents from the given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community. With the emergence of large language models, In-Context Learning and Chain-of-Thought prompting have become two particularly popular research topics in this field. In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt the model to develop the ability of retrieval thinking when dealing with hybrid data. Our method achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.
Tongxu Luo, Fangyu Lei, Jiahe Lei, Weihao Liu, Shihu He, Jun Zhao, Kang Liu
2023-09-22T07:26:17
http://arxiv.org/abs/2309.12669v1
# HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering ###### Abstract Answering numerical questions over hybrid contents from the given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community. With the emergence of large language models, In-Context Learning and Chain-of-Thought prompting have become two particularly popular research topics in this field. In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt the model to develop the ability of retrieval thinking when dealing with hybrid data. Our method achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the **few-shot** setting. Keywords: HybridQA, Chain-of-Thought, Language Models, In-Context-Learning. ## 1 Introduction Question-answering (QA) systems aim to answer various questions using evidence located in structured knowledge bases, such as tables [13][19], or unstructured texts [15]. In real-world scenarios, QA systems often face the challenge of integrating various data resources of diverse types to answer complex questions, including numerical reasoning problems in financial statements. Therefore, the TextTableQA system, a hybrid of question answering over tables and texts [1][2][3], has garnered increasing attention. Recently, Large Language Models (LLMs) leverage Chain-of-Thought (CoT) [17] prompts to break down complex problems into intermediate steps. Currently, there are three paradigms for mainstream CoT prompts. The first involves adding a single CoT trigger as a prompt for a single question, such as "Let's think step by step." This paradigm is called zero-shot, and on some simple datasets, LLMs perform well with this method. The second paradigm is manually constructing demonstrations, each consisting of a question, an inference process containing a CoT trigger, and a prompt to trigger the answer. The third paradigm is automatic demonstration selection and inference chain construction [21]. With zero-shot, LLMs generate inference chains for demonstrations one-by-one, then cluster and select typical demonstrations for few-shot. We evaluated CoT on the MultiHiertt dataset [22], which contains long textual and multi-hierarchical tabular data in finance. The evaluation results showed that, while CoT has achieved State-Of-The-Art (SOTA) results on many datasets, it may not be effective for handling long and complex hybrid data that contains tables and text, especially when there is a lot of irrelevant information (as shown in Fig 1). The problem that leads to poor performance of CoT is that it often relies on irrelevant information for reasoning, leading to incorrect reasoning chains and ultimately incorrect results. Furthermore, as shown in Fig 3, tables in real scenes are typically hierarchical and complex, making it challenging to extract useful insights directly from the data. Therefore, it is necessary to address the problem of CoT being unable to retrieve correct evidence and to explore effective modeling methods for real-scene tables. To address the aforementioned problems, we propose a novel method **HRoT**, which consists of two parts. Firstly, we introduce retrieval thinking by artificially constructing some arguments and guiding the model to learn this way of thinking, which prevents the LLM from relying on irrelevant information during reasoning. 
We illustrate the difference between CoT and our proposed method, **Retrieval of Thought** (RoT), in Fig 1. Figure 1: Comparison result between CoT and HRoT with an example input. CoT uses text and table descriptions, which loses the **hierarchical information** of multi-hierarchical tables, while our method **reconstructs the tables** and constructs prompts to introduce **retrieval thinking** to prevent the LLM from using irrelevant information for reasoning. We provided several examples of retrieval-based thinking in the prompt. For example, "We need to find... located in...". Secondly, we propose a **Hybrid prompt strategy** that enhances the reasoning process by reconstructing the retrieved table based on the question type and considering the inherent hierarchical structure of the table, as described in Section 3.3. The overall framework of the model is illustrated in Fig 1(b). We compare CoT and HRoT in both zero-shot and few-shot settings, and our HRoT achieves better results in both settings, surpassing the fully supervised performance in the 4-shot setting and achieving State-Of-The-Art performance4. In summary, our contributions are as follows: Footnote 4: [https://codalab.lisn.upsaclay.fr/competitions/6738](https://codalab.lisn.upsaclay.fr/competitions/6738) (1) We propose a novel method to enhance the reasoning capability of models by introducing retrieval thinking. (2) We propose a Type-Aware Table Reconstruction algorithm to reconstruct multi-hierarchical tables based on the retrieved evidence. (3) We propose a more powerful baseline retriever by introducing DeBERTa [5]. ## 2 Related Work ### In-Context Learning Large language models such as GPT-3 exhibit impressive few-shot learning ability [10][4], requiring only a few questions and answers as prompts in the context, without the need for finetuning on a dataset of training examples. However, this approach struggles with tasks requiring complex reasoning [14], leading researchers to explore prompting strategies. CoT [17] is a chained reasoning approach that inserts a multi-step reasoning path before generating the final answer. Wang et al. [16] proposed a Self-Consistency decoding strategy to vote on the reasoning path, and Kojima et al. [6] demonstrated that LLMs could act as zero-shot reasoners through the use of "Let's think step-by-step". These methods focus on constructing inference chains, but do not transfer well to the field of HybridQA: CoT often relies on irrelevant information for reasoning, leading to incorrect reasoning chains and ultimately incorrect results. Figure 2: The comparison between the pipelines of CoT and HRoT in TextTableQA. To overcome these challenges, our approach reconstructs a sub-table containing all evidence based on the retrieved table evidence, thus preserving the hierarchical information of multi-hierarchical tables. Moreover, we introduce retrieval thinking through prompts to prevent irrelevant information from being used in reasoning. ### TextTableQA In recent years, the hybrid form of question answering over tables and texts (TextTableQA) has attracted more and more attention. There are two major question types for TextTableQA. The first is the fact reasoning question, whose answer is usually a span from the table or linked paragraphs, such as the contents in Wikipedia [1][2]. The second is the numerical reasoning question, which usually aims to use the contents of tables and texts for numerical calculation [24][3]. Our work focuses on numerical reasoning. 
TAT-QA [24], FinQA [3] and MultiHiertt [22] are numerical reasoning hybrid datasets which come from the financial field. TAGOP [24] uses a sequence tagging method to extract facts, and performs a single arithmetic operation based on predefined operators. FinQANet [3] and MT2Net [22] can perform multi-step reasoning; both of them use an LSTM decoder to autoregressively generate the program. UniRPG [23] generates numerical reasoning programs over tables and text, and can also use text spans as computation values. KIQA [12] uses a knowledge injection approach to help the model learn additional symbolic knowledge. RegHNT [7][18] focuses on designing a relation graph over the input. Different from the above methods, our method employs LLMs as the reasoning module and introduces retrieval thinking through prompts, significantly improving the effectiveness of the retrieval process. S3HQA [8] and MMHQA [11] also use LLMs, but they focus on multi-hop TextTableQA tasks. ## 3 Method ### Overview Our method is divided into three stages. The first stage is retrieval, which classifies the questions and retrieves text and tables, selecting the top n as evidence. The second stage is reconstruction. In this stage, we first reconstruct the tables of questions classified as arithmetic, and then feed the text and reconstructed tables as hybrid prompts to the LLMs. The third stage is reasoning. We introduce retrieval thinking to guide the LLMs to retrieve the evidence required for reasoning from the text and tables. The entire pipeline is shown in Figure 1(b). ### Retriever Similar to the baseline, we use Pretrained Language Models (PLMs) to classify the questions into two types, arithmetic and span selection, and convert multi-hierarchical tables into table descriptions. However, unlike the baseline, we train separate models for text and table descriptions. In the training phase, for the \(k\)-th question \(Q_{k}\), we have texts \(P_{k}=\{p_{1},p_{2},\cdots,p_{N_{k}}\}\) and \(M_{k}\) table descriptions \(T_{k}=\{t_{1},t_{2},\cdots,t_{M_{k}}\}\). We concatenate \(Q_{k}\) with each \(p_{i}\) and \(t_{i}\) (e.g., [CLS] In what year is Home equity greater than 13000? [SEP] Annuities The following table presents the results of... [SEP]), and use DeBERTa [5] as the encoder to predict the correlation between the question and each text or table description pair. For DeBERTa's output: \[H=[h_{1};h_{2};\cdots;h_{l}]=DeBERTa(X) \tag{1}\] where \(X\) is the concatenation of \(Q_{k}\) with \(p_{i}\) or of \(Q_{k}\) with \(t_{i}\). Within \(H\), the classification information is carried by \(h_{1}\), and we apply an FFN classifier to it to predict the question type or the question-evidence relevance. During training, only a small portion of the given text contains the evidence required to answer a question. To address the problem of imbalanced positive and negative samples, we use resampling to increase the probability of sampling positive samples in a batch. Additionally, our loss function is defined as: \[Loss=CrossEntropy(y,\hat{y})+\lambda\cdot DSCLoss(y,\hat{y}) \tag{2}\] where \(\lambda\) is a hyperparameter and \(DSCLoss\) [9] is used to optimize the F1 score. During the inference phase, we follow the same data processing steps as in the training phase. However, after predicting the relevance between the question and each text or table description pair, we sort \(P_{k}\) and \(T_{k}\) based on their relevance scores, and select the top \(n\) texts and top \(m\) table descriptions, respectively, as the retrieved candidate evidence. 
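As a concrete reading of Eq. (2), here is a minimal PyTorch sketch of the retriever loss, assuming binary relevance labels; the plain soft-dice form used for DSCLoss below is an illustrative stand-in for the self-adjusting dice loss of [9], whose exact form differs.

```python
import torch
import torch.nn.functional as F

def dsc_loss(probs, targets, gamma=1.0):
    # Soft dice loss on the positive-class probabilities; the reference [9]
    # uses a self-adjusting variant, so this plain form is illustrative only.
    num = 2.0 * (probs * targets).sum() + gamma
    den = (probs ** 2).sum() + (targets ** 2).sum() + gamma
    return 1.0 - num / den

def retriever_loss(logits, labels, lam=0.5):
    # Eq. (2): cross-entropy plus lambda * DSCLoss, with lambda = 0.5
    # as in the implementation details.
    ce = F.cross_entropy(logits, labels)
    probs = logits.softmax(dim=-1)[:, 1]  # P(relevant)
    return ce + lam * dsc_loss(probs, labels.float())

# Toy batch: 4 question-evidence pairs with binary relevance labels.
logits = torch.randn(4, 2)
labels = torch.tensor([1, 0, 0, 1])
print(retriever_loss(logits, labels))
```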
### Hybrid Prompt Strategy Performing arithmetic operations on multi-hierarchical tables can present challenges when certain spatial information is not explicitly included in table descriptions. To address this issue, we propose a hybrid prompt strategy that utilizes hybrid data to prompt large language models (LLMs). Specifically, we introduce a type-aware table reconstruction algorithm to reconstruct large and complex tables into sub-tables. For example, consider the MultiHiertt dataset in Fig 3, where each table contains hierarchical column and row headers. Ignoring the hierarchical structure of the headers may result in incorrect reasoning outcomes. We first classify the questions into two types: arithmetic and span selection. Then, for an arithmetic question \(q\), we take its retrieved tables \(T_{q}\). For each table \(t\) in \(T_{q}\), we partition the table and obtain a span list \(L\). For example, for the table in Fig 3, we obtain \(L=[[0,1],[2,3],[4,7]]\), where \([0,1]\) represents the row span of the table header, and \([2,3]\) and \([4,7]\) represent the spans of sub-tables. Then, for each piece of evidence, we determine which row span it belongs to and retain the sub-header of that span. Therefore, the set \(R\) of rows to be retained consists of three parts, \(R=\{h_{r},h_{sub},r_{e}\}\): the header rows \(h_{r}\), the sub-headers \(h_{sub}\) of the spans containing evidence, and the evidence rows \(r_{e}\). The set \(C\) of columns to be retained consists of two parts, \(C=\{h_{c},c_{e}\}\): the header columns \(h_{c}\) and the evidence columns \(c_{e}\). With the rows and columns to be retained identified, we can easily reconstruct the table. Figure 3: An example of a multi-hierarchical table. ### Hybrid Retrieval of Thought To address the problem of LLMs selecting incorrect evidence for reasoning, we introduce HRoT, which adopts a retrieval-based approach to gradually retrieve the evidence required for the question and generate the answer based on this evidence. #### HRoT Prompting for Zero-Shot For zero-shot reasoning, we use the prompt "Let's retrieve above text and table step by step and then think step by step to answer the question. First, based on the question, we need to find" to guide the LLM to conduct retrieval before answering the question. Finally, we use "Therefore, the answer to the question is" as the answer trigger to extract the answer. The specific process is shown in Fig 4(a). #### HRoT Prompting for Few-Shot For few-shot, we adopted a similar approach to AutoCoT [21] and used clustering to select representative examples for demonstration, with the difference that we clustered the two types of questions separately, namely arithmetic and span selection. We first clustered the training set, then applied zero-shot on these examples. However, the correctness of the retrieved chains cannot be guaranteed, so we manually corrected any errors in the output. For few-shot, we directly used "Let's retrieve above text and table step by step" as the prompt. For arithmetic questions, we followed the requirements of the dataset and used the "Program" format as the answer. The specific process is shown in Fig 4(b). ## 4 Experiments ### Datasets We conducted our experiments on the MultiHiertt dataset [22]. Compared with existing datasets, each document in MultiHiertt contains multiple hierarchical tables and longer unstructured text. A more complex reasoning process across multiple tables and paragraphs is required to correctly answer the question [22]. 
The dataset consists of 10,440 questions over 2,513 financial documents, and is split into three parts: training (75%), development (10%), and test (15%). Figure 4: An example of HRoT. The text in green is the retrieval result from the retriever, and the tables in green are the reconstructed tables. ### Implementation Details During the retrieval phase, we tested several pre-trained language models (PLMs), including BERT, RoBERTa, and DeBERTa. For each PLM, we trained separate models for text and table descriptions. To address the problem of imbalanced positive and negative samples, we utilized resampling to increase the probability of sampling positive samples in a batch. Furthermore, to directly optimize the F1 score, \(\lambda\) in Eq. (2) is set to \(0.5\). During the reasoning phase, we used the OpenAI GPT-3.5 (text-davinci-003) API with the setting \(temperature=0\). We conducted experiments on CoT, HRoT without reconstructed tables, and HRoT with reconstructed tables under 0-4 shot settings. ### Main Results Table 1 presents a comparison between our proposed method and several typical methods on the test set. As can be seen, our method significantly outperforms the previous baselines in terms of both EM and F1 scores. These results demonstrate the effectiveness of our approach in addressing complex, hierarchical table-based question answering tasks. \begin{table} \begin{tabular}{l c c} \hline \hline & EM & F1 \\ \hline TAGOP [24] & 17.81 & 19.35 \\ \hline FinQANet [3] & 31.72 & 33.60 \\ \hline MT2Net [22] & 36.22 & 38.43 \\ \hline NAPG [20] & 44.19 & 44.81 \\ \hline HRoT-fewshot & **46.17** & **46.91** \\ \hline \hline \end{tabular} \end{table} Table 1: Main results on the test set. ### Ablation Study We conducted ablation experiments on the development set to validate the effectiveness of the retriever, and on the test set to validate the effectiveness of Table Reconstruction and HRoT Prompting. Additionally, we compared the performance of HRoT under 0-4 shot settings. Table Reconstruction involves reconstructing a sub-table based on the retrieved results. HRoT Prompting is used to guide the LLM to retrieve the correct evidence from both text and table. **Effect of proposed retriever.** As shown in Table 2, we validate the BERT, RoBERTa, DeBERTa and DeBERTa(+DSCLoss) retrievers. Comparing DeBERTa(+DSCLoss) with BERT, DeBERTa achieves an improvement of 10.34% in Text Recall and 19.8% in Table Recall. When DSCLoss is removed, Text Recall decreases by approximately 0.86% and Table Recall decreases by approximately 0.81%. \begin{table} \begin{tabular}{l c c} \hline \hline & Text Recall & Table Recall \\ \hline BERT & 84.14 & 71.47 \\ \hline RoBERTa & 91.98 & 86.11 \\ \hline DeBERTa & 93.62 & 90.46 \\ \hline DeBERTa(+DSCLoss) & **94.48** & **91.27** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on the BERT, RoBERTa, DeBERTa and DeBERTa(+DSCLoss) retrievers, using the top 5 texts and top 10 table descriptions. **Effect of HRoT.** As shown in Table 3, when the reconstructed table is replaced with table descriptions under the same settings, the EM score decreases by approximately 1.82% and the F1 score decreases by approximately 1.95%. Comparing HRoT with CoT under the same settings, HRoT with reconstructed tables achieves an improvement of 6.13% in EM and 6.14% in F1, while HRoT without reconstructed tables achieves an improvement of 4.31% in EM and 4.13% in F1. These results demonstrate the effectiveness of our proposed improvements. 
**Different shots on HRoT.** As shown in Table 4, we used a 0-4 shot setting, and it can be seen that in the few-shot regime both EM and F1 scores are positively correlated with the number of demonstrations. In the zero-shot setting, due to the inability of the LLM to generate the required "Program" format for the dataset, there are more computational errors. \begin{table} \begin{tabular}{l|c c|c c} \hline & \multicolumn{2}{c|}{**HRoT**} & \multicolumn{2}{c}{**CoT**} \\ & **EM** & **F1** & **EM** & **F1** \\ \hline 0-shot & 22.67 & 23.72 & 20.45 & 21.67 \\ 1-shot & 39.55 & 40.62 & 29.14 & 30.15 \\ 2-shot & 41.53 & 42.67 & 34.76 & 35.49 \\ 3-shot & 43.38 & 44.43 & 36.03 & 36.96 \\ 4-shot & **46.17** & **46.91** & 39.03 & 39.77 \\ \hline \end{tabular} \end{table} Table 4: Experiments on different numbers of examples for HRoT and CoT. \begin{table} \begin{tabular}{l c c} \hline & EM & F1 \\ \hline HRoT-fewshot & **46.17** & **46.91** \\ \hline w/o Deberta (w Roberta) & 45.89 & 46.57 \\ \hline w/o Hybrid Prompt Strategy & 44.35 & 44.96 \\ \hline CoT-fewshot & 40.04 & 40.77 \\ \hline \end{tabular} \end{table} Table 3: Ablation study on Table Reconstruction and HRoT Prompting. ## 5 Conclusion In this study, we investigate the construction of appropriate demonstrations and prompts for hybrid data of text and tables and propose HRoT, short for Hybrid prompt strategy and Retrieval of Thought for TextTableQA. By significantly reducing the retrieval errors on evidence in hybrid data for LLMs, our method achieves SOTA performance on the MultiHiertt dataset.
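To make the type-aware table reconstruction of Section 3.3 concrete, here is a minimal Python sketch of the row/column selection step; the span-list notation follows the paper, but the function itself (including the convention that column 0 holds the row headers) is an illustrative reconstruction, not the authors' implementation.

```python
def reconstruct_table(table, spans, evidence_cells):
    """Select the rows and columns to retain, following Section 3.3.

    table: list of rows (each a list of cells).
    spans: list of [start, end] row spans, the first being the header span,
           the rest sub-table spans (e.g. L = [[0, 1], [2, 3], [4, 7]]).
    evidence_cells: (row, col) positions of the retrieved evidence.
    Assumes column 0 holds the row headers (an illustrative convention).
    """
    keep_rows = set(range(spans[0][0], spans[0][1] + 1))  # h_r: header rows
    keep_cols = {0}                                       # h_c: row-header column
    for r, c in evidence_cells:
        keep_rows.add(r)                                  # r_e: evidence rows
        keep_cols.add(c)                                  # c_e: evidence columns
        for start, end in spans[1:]:
            if start <= r <= end:
                keep_rows.add(start)                      # h_sub: sub-header of the span
    cols = sorted(keep_cols)
    return [[row[c] for c in cols] for i, row in enumerate(table) if i in keep_rows]
```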
Answering numerical questions over hybrid contents from given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community. With the emergence of large language models, In-Context Learning and Chain-of-Thought prompting have become two particularly popular research topics in this field. In this paper, we introduce a new prompting strategy called Hybrid prompt strategy and Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt the model to develop the ability of retrieval thinking when dealing with hybrid data. Our method achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.
2309.05315
Computing Wasserstein Barycenter via operator splitting: the method of averaged marginals
The Wasserstein barycenter (WB) is an important tool for summarizing sets of probabilities. It finds applications in applied probability, clustering, image processing, etc. When the probability supports are finite and fixed, the problem of computing a WB is formulated as a linear optimization problem whose dimensions generally exceed standard solvers' capabilities. For this reason, the WB problem is often replaced with a simpler nonlinear optimization model constructed via an entropic regularization function so that specialized algorithms can be employed to compute an approximate WB efficiently. Contrary to such a widespread inexact scheme, we propose an exact approach based on the Douglas-Rachford splitting method applied directly to the WB linear optimization problem for applications requiring accurate WB. Our algorithm, which has the interesting interpretation of being built upon averaging marginals, operates a series of simple (and exact) projections that can be parallelized and even randomized, making it suitable for large-scale datasets. As a result, our method achieves good performance in terms of speed while still attaining accuracy. Furthermore, the same algorithm can be applied to compute generalized barycenters of sets of measures with different total masses by allowing for mass creation and destruction upon setting an additional parameter. Our contribution to the field lies in the development of an exact and efficient algorithm for computing barycenters, enabling its wider use in practical applications. The approach's mathematical properties are examined, and the method is benchmarked against the state-of-the-art methods on several data sets from the literature.
D. Mimouni, P. Malisani, J. Zhu, W. de Oliveira
2023-09-11T09:02:07
http://arxiv.org/abs/2309.05315v1
# Computing Wasserstein Barycenter via operator splitting: the method of averaged marginals ###### Abstract The Wasserstein barycenter (WB) is an important tool for summarizing sets of probabilities. It finds applications in applied probability, clustering, image processing, etc. When the probability supports are finite and fixed, the problem of computing a WB is formulated as a linear optimization problem whose dimensions generally exceed standard solvers' capabilities. For this reason, the WB problem is often replaced with a simpler nonlinear optimization model constructed via an entropic regularization function so that specialized algorithms can be employed to compute an approximate WB efficiently. Contrary to such a widespread inexact scheme, we propose an exact approach based on the Douglas-Rachford splitting method applied directly to the WB linear optimization problem for applications requiring accurate WB. Our algorithm, which has the interesting interpretation of being built upon averaging marginals, operates a series of simple (and exact) projections that can be parallelized and even randomized, making it suitable for large-scale datasets. As a result, our method achieves good performance in terms of speed while still attaining accuracy. Furthermore, the same algorithm can be applied to compute generalized barycenters of sets of measures with different total masses by allowing for mass creation and destruction upon setting an additional parameter. Our contribution to the field lies in the development of an exact and efficient algorithm for computing barycenters, enabling its wider use in practical applications. The approach's mathematical properties are examined, and the method is benchmarked against the state-of-the-art methods on several data sets from the literature. ## 1 Introduction In applied probability, stochastic optimization, and data science, a crucial aspect is the ability to compare, summarize, and reduce the dimensionality of empirical measures. Since these tasks rely heavily on pairwise comparisons of measures, it is essential to use an appropriate metric for accurate data analysis. Different metrics define different barycenters of a set of measures: a barycenter is a mean element that minimizes the (weighted) sum of all its distances to the set of target measures. When the chosen metric is the optimal transport one, and there is mass equality between the measures, the underlying barycenter is known as the Wasserstein barycenter (WB). The optimal transport metric defines the so-called Wasserstein distance (also known as Mallows or Earth Mover's distance), a popular choice in statistics, machine learning, and stochastic optimization [13, 22, 23]. The Wasserstein distance has several valuable theoretical and practical properties [25, 30] that are transferred to (Wasserstein) barycenters [1, 22, 24]. Indeed, thanks to the Wasserstein distance, one key advantage of WBs is their ability to preserve the underlying geometry of the data, even in high-dimensional spaces. This fact makes WBs particularly useful in image processing, where datasets often contain many pixels and complex features that must be accurately represented and analyzed [18, 27]. Being defined by the Wasserstein distance, WBs are challenging to compute. The Wasserstein distance is computationally expensive because, to compute an optimal transport plan, one needs to cope with a large linear program (LP) that has no analytical solution and cubic worst-case complexity1 [35]. 
The situation becomes even worse for computing a WB because its definition involves several optimal transport plans. In the simpler case of fixed support, which is the focus of this work (see section 2 below), computing a WB amounts to solving an LP whose dimensions generally exceed standard solvers' capabilities [1]. Several numerical methods have been proposed in the literature to address the challenge. They invariably fall into one of the following categories:

* Inexact methods, based on reformulations via an entropic regularization [10, 11, 12, 17, 22, 35, 4];
* Exact decomposition methods, consisting in solving a sequence of smaller and simpler subproblems [11, 34].

Footnote 1: More precisely, \(O(S^{3}\log(S))\), with \(S\) the size of the input data.

While our proposal falls into the second category, the vast majority of methods found in the literature are inexact ones, employing or not decomposition techniques. Indeed, the work [11] proposes to inexactly compute a WB by solving an approximating model obtained by regularizing the WB problem with an entropy-like function. The technique allows one to employ the celebrated Sinkhorn algorithm [9, 28], which has a simple closed-form expression and can be implemented efficiently using only matrix operations. When combined with a projected subgradient method, this regularizing approach fits into category i) above. However, if instead the underlying transport subproblems are solved exactly without the regularization technique, then Algorithm 1 from [11] falls into category ii). The regularization technique of [9] opened the way to the _Iterative Bregman Projection_ (IBP) method proposed in [4]. IBP is highly memory efficient for distributions with a shared support set and is considered to be one of the most effective methods to tackle WB problems. However, as IBP works with an approximating model, the computed point is not a solution to the WB problem, and thus IBP is an inexact method. Another approach fitting into the category of inexact methods has been recently proposed in [35], which uses the same type of regularization as IBP but decomposes the problem into a sequence of smaller subproblems with straightforward solutions. More specifically, the approach in [35] is a modification (tailored to the WB problem) of the _Bregman Alternating Direction Method of Multipliers_ (B-ADMM) of [31]. The modified B-ADMM has been shown to compute promising results for sparse support measures and is therefore well suited to some clustering applications. However, the theoretical convergence properties of the modified B-ADMM algorithm are not well understood, and the approach should be considered a heuristic. An inexact method that disregards entropic regularizations is presented in [24] and denoted by _Iterative Swapping Algorithm_ (ISA). The approach is a non-parametric algorithm that provides a sharp image of the support of the barycenter and has a quadratic time complexity. Essentially, ISA is designed upon approximating the linear program by a multi-index assignment problem, which is solved in an iterative manner. Another approach based on successive approximations of the WB (linear programming) problem is proposed in [6]. Concerning exact methods, the work [7] proposes a simpler linear programming reformulation of the WB problem that leads to an LP that scales linearly with the number of measures. Although the resulting LP is smaller than the WB problem, it still suffers from heavy computation time and memory consumption [24].
In contrast, [34] proposes to address the WB problem via the standard ADMM algorithm, which decomposes the problem into smaller and simpler subproblems. As mentioned by the authors in their subsequent paper [35], the numerical efficiency of the standard ADMM is still inadequate for large datasets. All the methods mentioned in the above references deal exclusively with sets of probability measures because WBs are limited to measures with equal total masses. A tentative way to circumvent this limitation is to normalize general positive measures to compute a standard WB. However, such a naive strategy is generally unsatisfactory and limits the use of WBs in many real-life applications such as logistics, medical imaging, and others coming from the field of biology [19, 26]. Consequently, the concept of WB has been generalized to summarize such more general measures. Different generalizations of the WB exist in the literature, and they are based on variants of _unbalanced optimal transport problems_ that define a distance between general non-negative, finitely supported measures by allowing for mass creation and destruction [19]. Essentially, such generalizations, known as unbalanced Wasserstein barycenters (UWBs), depend on how one chooses to relax the marginal constraints. In the review paper [26] and references therein, marginal constraints are moved to the objective function with the help of divergence functions. Differently, in [19] the authors replace the marginal constraints with sub-couplings and penalize their discrepancies. It is worth mentioning that UWB is about more than simply coping with global variations in the measures' total masses. Generalized barycenters tend to be more robust to local mass variations, which include outliers and missing parts [26]. For the sake of a unified algorithmic proposal for both balanced and unbalanced WBs, in this work we consider a different formulation for dealing with sets of measures with different total masses. While our approach can be seen as an abridged alternative to the thorough methodologies of [19] and [26], its favorable structure for efficient splitting techniques, combined with the good quality of the issued UWBs, confirms the formulation's practical interest. To cope with the challenge of computing (balanced and unbalanced) WBs, we propose a new algorithm based on the celebrated Douglas-Rachford splitting operator method (DR) [14, 15, 16]. Our proposal, which falls into the category of exact decomposition methods, is denoted by _Method of Averaged Marginals_ (MAM). The name reflects the fact that, at every iteration, the algorithm computes a barycenter approximation by averaging marginals issued by transportation plans that are updated independently, in parallel, and even randomly if necessary. Accordingly, the algorithm operates a series of simple and exact projections that can be carried out in parallel and even randomly. Thanks to our unified analysis, MAM can be applied to both balanced and unbalanced WB problems without any change: all that is needed is to set up a parameter. To the best of our knowledge, MAM is the first approach capable of handling balanced and unbalanced WB problems in a single algorithm, which can further be run in a deterministic or randomized fashion. In addition to its versatility, MAM copes with the scalability issues arising from barycenter problems, is memory efficient, and has convergence guarantees to an exact barycenter.
Although MAM's convergence speed is not as exceptional as IBP's, it is observed in practice that after a few tens of iterations, the average of marginals computed by MAM is a better approximation of a WB than the solution provided by IBP, no matter how many iterations the latter performs2. As further contributions, we conduct experiments on several data sets from the literature to demonstrate the computational efficiency and accuracy of the new algorithm and make our Python codes publicly available at the link ([https://ifpen-gitlab.appcollaboratif.fr/detocs/mam_wb](https://ifpen-gitlab.appcollaboratif.fr/detocs/mam_wb)).

Footnote 2: The reason is that IBP converges fast but to the solution of an approximate model, not to an exact WB.

The remainder of this work is organized as follows. Section 2 introduces the notation and recalls the balanced WB problem's formulation. The proposed formulation for unbalanced WBs is presented in section 3. In section 4 the WB problems are reformulated in a suitable way so that the Douglas-Rachford splitting (DR) method can be applied. The same section briefly recalls the DR algorithm and its convergence properties in both the deterministic and randomized settings. The main contribution of this work, the Method of Averaged Marginals, is presented in section 5. A convergence analysis is given in the same section by relying on the DR algorithm's properties. Section 6 illustrates the numerical performance of the deterministic and randomized variants of MAM on several data sets from the literature. Numerical comparisons with IBP and B-ADMM are presented for the balanced case. Then some applications of the UWB are considered.

## 2 Background on optimal transport and Wasserstein barycenter

Let \((\Omega,\mathsf{d})\) be a metric space and \(P(\Omega)\) the set of Borel probability measures on \(\Omega\). Furthermore, let \(\xi\) and \(\zeta\) be two random vectors having probability measures \(\mu\) and \(\nu\) in \(P(\Omega)\), that is, \(\xi\sim\mu\) and \(\zeta\sim\nu\).

**Definition 1** (Wasserstein Distance).: _For \(\iota\in[1,\infty)\) and probability measures \(\mu\) and \(\nu\) in \(P(\Omega)\), their \(\iota\)-Wasserstein distance \(W_{\iota}\) is:_

\[W_{\iota}(\mu,\nu):=\left(\inf_{\pi\in U(\mu,\nu)}\int_{\Omega\times\Omega}\mathsf{d}(\xi,\zeta)^{\iota}d\pi(\xi,\zeta)\right)^{1/\iota},\] (WD)

_where \(U(\mu,\nu)\) is the set of all probability measures on \(\Omega\times\Omega\) having marginals \(\mu\) and \(\nu\). We denote by \(W_{\iota}^{\iota}(\mu,\nu)\) the distance \(W_{\iota}\) raised to the power \(\iota\), i.e., \(W_{\iota}^{\iota}(\mu,\nu):=(W_{\iota}(\mu,\nu))^{\iota}\)._

Throughout this work, for a given scalar \(\tau\geq 0\), the notation \(\Delta_{n}(\tau)\) denotes the set of non-negative vectors in \(\mathds{R}^{n}\) adding up to \(\tau\), that is,

\[\Delta_{n}(\tau):=\left\{u\in\mathds{R}^{n}_{+}:\;\sum_{i=1}^{n}u_{i}=\tau\right\}. \tag{1}\]

If \(\tau=1\), then \(\Delta_{n}(\tau)\), denoted simply by \(\Delta_{n}\), is the \(n+1\) simplex.

**Definition 2** (Wasserstein Barycenter).: _Given \(M\) measures \(\{\nu^{(1)},\ldots,\nu^{(M)}\}\) in \(P(\Omega)\) and \(\alpha\in\Delta_{M}\), an \(\iota\)-Wasserstein barycenter with weights \(\alpha\) is a solution to the following optimization problem_

\[\min_{\mu\in P(\Omega)}\;\sum_{m=1}^{M}\alpha_{m}W_{\iota}^{\iota}(\mu,\nu^{(m)})\,. \tag{2}\]

A WB \(\mu\) exists in general and, if one of the \(\nu^{(m)}\) vanishes on all Borel subsets of Hausdorff dimension \(\dim(\Omega)-1\), then it is also unique [1].
If the measures are discrete, then uniqueness is no longer ensured in general.

### Discrete Wasserstein Barycenter

This work focuses on empirical measures based on finitely many scenarios: \(R\) scenarios \(\Xi:=\{\xi_{1},\ldots,\xi_{R}\}\) for \(\xi\) and \(S^{(m)}\) scenarios \(Z^{(m)}:=\{\zeta_{1}^{(m)},\ldots,\zeta_{S^{(m)}}^{(m)}\}\) for \(\zeta^{(m)}\), \(m=1,\ldots,M\), i.e., measures of the form

\[\mu=\sum_{r=1}^{R}p_{r}\delta_{\xi_{r}}\quad\text{ and }\quad\nu^{(m)}=\sum_{s=1}^{S^{(m)}}q_{s}^{(m)}\delta_{\zeta_{s}^{(m)}},\quad m=1,\ldots,M, \tag{3}\]

with \(\delta_{u}\) the Dirac unit mass on \(u\in\Omega\), \(p\in\Delta_{R}\), and \(q^{(m)}\in\Delta_{S^{(m)}}\), \(m=1,\ldots,M\). In this setting, when the support \(\Xi\) is fixed, the \(\iota\)-Wasserstein distance \(W_{\iota}(\mu,\nu^{(m)})\) of two empirical measures \(\mu\) and \(\nu^{(m)}\) is the \(\iota^{th}\) root of the optimal value of the following LP, known as the _optimal transportation_ (OT) problem (writing \(S\) for \(S^{(m)}\) and \(q\) for \(q^{(m)}\)):

\[\mathtt{OT}_{\Xi}(p,q):=\left\{\begin{array}{ll}\min_{\pi\geq 0}&\sum_{r=1}^{R}\sum_{s=1}^{S}\mathsf{d}(\xi_{r},\zeta_{s})^{\iota}\pi_{rs}\\ \text{s.t.}&\sum_{r=1}^{R}\pi_{rs}=q_{s},\quad s=1,\ldots,S\\ &\sum_{s=1}^{S}\pi_{rs}=p_{r},\quad r=1,\ldots,R.\end{array}\right. \tag{4}\]

The feasible set above is referred to as the _transportation polytope_, issued by the so-called _marginal constraints_. An optimal solution of this problem is known as an optimal transportation plan. Observe that a transportation plan can be represented as a matrix whose entries are non-negative, whose columns sum to the marginal \(q\), and whose rows sum to \(p\).

**Definition 3** (Discrete Wasserstein Barycenter - WB).: _A Wasserstein barycenter of a set of \(M\) empirical probability measures \(\nu^{(m)}\) having support \(Z^{(m)}\), \(m=1,\ldots,M\), is a solution to the following optimization problem_

\[\min_{\Xi,p\in\Delta_{R}}\ \sum_{m=1}^{M}\alpha_{m}\mathtt{OT}_{\Xi}(p,q^{(m)}). \tag{5}\]

The above is a challenging nonconvex optimization problem that is in general dealt with via block-coordinate optimization: at iteration \(k\), the support is fixed at \(\Xi^{k}\), and the convex optimization problem \(\min_{p\in\Delta_{R}}\ \sum_{m=1}^{M}\alpha_{m}\mathtt{OT}_{\Xi^{k}}(p,q^{(m)})\) is solved to define a vector \(p^{k}\), which is in turn fixed to solve \(\min_{\Xi}\ \sum_{m=1}^{M}\alpha_{m}\mathtt{OT}_{\Xi}(p^{k},q^{(m)})\), which updates the support \(\Xi^{k}\) to \(\Xi^{k+1}\). When the metric \(\mathsf{d}(\cdot,\cdot)\) is the Euclidean distance and \(\iota=2\), the last problem has a straightforward solution (see for instance [10, Alg. 2] and [35, § II]). For this reason, in the remainder of this work we focus on the more challenging problem of minimizing w.r.t. the vector \(p\).

**Definition 4** (Discrete Wasserstein Barycenter with Fixed Support).: _A fixed-support Wasserstein barycenter of a set of \(M\) empirical probability measures \(\nu^{(m)}\) having support \(Z^{(m)}\), \(m=1,\ldots,M\), is a solution to the following optimization problem_

\[\min_{p\in\Delta_{R}}\ \sum_{m=1}^{M}\alpha_{m}\mathtt{OT}_{\Xi}(p,q^{(m)}). \tag{6}\]

In the above definition, the support \(\Xi\) is fixed and the optimization is performed with respect to the vector \(p\). Our approach to solving eq. (6) can be better motivated after writing down the extensive formulation of the problem.
Indeed, looking for a barycenter \(\mu\) with fixed atoms \(\xi\) and probability \(p\) is equivalent to solving the following huge-scale LP, where we denote \(d_{rs}^{(m)}:=\alpha_{m}\mathsf{d}(\xi_{r},\zeta_{s}^{(m)})^{\iota}\) (\(r=1,\ldots,R\), \(s=1,\ldots,S^{(m)}\), \(m=1,\ldots,M\)):

\[\left\{\begin{array}{lll}\min_{p,\,\pi\geq 0}&\sum_{r=1}^{R}\sum_{s=1}^{S^{(1)}}d_{rs}^{(1)}\pi_{rs}^{(1)}+\cdots+\sum_{r=1}^{R}\sum_{s=1}^{S^{(M)}}d_{rs}^{(M)}\pi_{rs}^{(M)}\\ \text{s.t.}&\sum_{r=1}^{R}\pi_{rs}^{(m)}=q_{s}^{(m)},&s=1,\ldots,S^{(m)},\;m=1,\ldots,M\\ &\sum_{s=1}^{S^{(m)}}\pi_{rs}^{(m)}=p_{r},&r=1,\ldots,R,\;m=1,\ldots,M\\ &p\in\Delta_{R}.\end{array}\right. \tag{7}\]

The constraint \(p\in\Delta_{R}\) above is redundant and can be removed: we always have, for \(m=1,\ldots,M\), \(\sum_{r=1}^{R}p_{r}=\sum_{r=1}^{R}\left(\sum_{s=1}^{S^{(m)}}\pi_{rs}^{(m)}\right)=\sum_{s=1}^{S^{(m)}}\left(\sum_{r=1}^{R}\pi_{rs}^{(m)}\right)=\sum_{s=1}^{S^{(m)}}q_{s}^{(m)}=1\). The dimension of the above LP is \(R(1+\sum_{m=1}^{M}S^{(m)})\) and grows rapidly with the number of measures \(M\). For instance, for moderate values such as \(R=S^{(m)}=1600\), \(m=1,\ldots,M\) (corresponding to figures with \(40\times 40\) pixels) and \(M=100\), the above LP has more than 256 million variables3.

Footnote 3: More precisely, \(256\,001\,600\) variables.

Although the variable \(p\) is the one of interest, we can remove \(p\) from the above formulation and recover it easily by working with the following linear subspace.

**Definition 5** (Balanced subspace).: _We denote by balanced subspace the following linear subspace of balanced transportation plans:_

\[\mathcal{B}:=\left\{\pi=(\pi^{(1)},\ldots,\pi^{(M)})\;\left|\;\sum_{s=1}^{S^{(m)}}\pi_{rs}^{(m)}=\sum_{s=1}^{S^{(m+1)}}\pi_{rs}^{(m+1)},\quad r=1,\ldots,R,\;m=1,\ldots,M-1\right.\right\}. \tag{8}\]

The term _balanced_ is due to the fact that, given \(\nu^{(m)}\) in eq. (3), \(\sum_{j=1}^{S^{(m)}}q_{j}^{(m)}=\sum_{j=1}^{S^{(m^{\prime})}}q_{j}^{(m^{\prime})}\), \(m,m^{\prime}=1,\ldots,M\). Observe that problem eq. (7) is equivalent, in terms of optimal value and optimal transportation plans, to the following LP

\[\left\{\begin{array}{ll}\min_{\pi\in\mathcal{B}}&\sum_{r=1}^{R}\sum_{s=1}^{S^{(1)}}d_{rs}^{(1)}\pi_{rs}^{(1)}+\cdots+\sum_{r=1}^{R}\sum_{s=1}^{S^{(M)}}d_{rs}^{(M)}\pi_{rs}^{(M)}\\ \text{s.t.}&\sum_{r=1}^{R}\pi_{rs}^{(m)}=q_{s}^{(m)},\quad s=1,\ldots,S^{(m)},\;m=1,\ldots,M\\ &\pi^{(m)}\geq 0,\quad m=1,\ldots,M,\end{array}\right. \tag{9}\]

having \(R\sum_{m=1}^{M}S^{(m)}\) variables. Note furthermore that an optimal vector \(p\) can be easily recovered from any given optimal transportation plan \(\pi^{(m)}\) by simply setting \(p_{r}=\sum_{s=1}^{S^{(m)}}\pi_{rs}^{(m)}\), \(r=1,\ldots,R\).
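Before moving to the unbalanced setting, it may help to see eq. (7) in executable form. The sketch below is our own illustration (function name and dense constraint assembly are ours, not part of the released codes): it builds the LP and hands it to scipy's HiGHS backend. It is meant only for sanity checks on toy data, since its memory footprint grows exactly as discussed above.

```python
import numpy as np
from scipy.optimize import linprog

def wb_lp(d, q, alpha):
    """Solve the fixed-support WB LP eq. (7) on a toy instance.

    d[m]  : (R, S_m) ground-cost matrix dist(xi_r, zeta_s)^iota for measure m
    q[m]  : (S_m,)   weights of measure m
    alpha : (M,)     barycenter weights
    Returns the barycenter weights p (size R).
    """
    M, R = len(d), d[0].shape[0]
    sizes = [di.shape[1] for di in d]
    n_pi = sum(R * S for S in sizes)                  # all plan variables
    c = np.concatenate([alpha[m] * d[m].ravel() for m in range(M)]
                       + [np.zeros(R)])               # p enters with zero cost
    A_eq, b_eq = [], []
    off = 0
    for m, S in enumerate(sizes):
        for s in range(S):                            # right marginals = q^(m)
            row = np.zeros(n_pi + R)
            row[off + s: off + R * S: S] = 1.0
            A_eq.append(row); b_eq.append(q[m][s])
        for r in range(R):                            # left marginals = p
            row = np.zeros(n_pi + R)
            row[off + r * S: off + (r + 1) * S] = 1.0
            row[n_pi + r] = -1.0
            A_eq.append(row); b_eq.append(0.0)
        off += R * S
    # the constraint p in Delta_R is redundant, as argued above
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (n_pi + R), method="highs")
    return res.x[n_pi:]
```

On a few dozen atoms this runs instantly; at the \(40\times 40\)-pixel scale quoted above, the dense constraint matrix alone would be prohibitively large, which motivates the splitting approach developed in the next sections.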
## 3 Discrete unbalanced Wasserstein Barycenter

A well-known drawback of the above OT-based concepts is their limitation to measures with equal total mass. To overcome this limitation, a simple idea is to relax the marginal constraints in eq. (4), giving rise to an extension of the OT problem often referred to as the _unbalanced optimal transportation_ (UOT) problem [26] because of its ability to cope with "unbalanced" measures, i.e., with different masses. Different manners of relaxing the marginal constraints yield various UOT problems that can replace OT problems in the barycenter definition eq. (6) to yield different generalizations of the Wasserstein barycenters. As a result, the unbalanced Wasserstein barycenter (UWB) deals with the fact that in some applications the non-negative vectors \(q^{(m)}\), \(m=1,\ldots,M\), are not necessarily probability related: they do not live in any common simplex, i.e., \(\sum_{j=1}^{S^{(m)}}q_{j}^{(m)}\neq\sum_{j=1}^{S^{(m^{\prime})}}q_{j}^{(m^{\prime})}\) for at least one pair \((m,m^{\prime})\) s.t. \(m\neq m^{\prime}\). In this case, the WB problem eq. (7) is infeasible and a (balanced) WB is not defined. As an example, think of \(p\) as the quantity of goods to be produced, and \(q^{(m)}\) as an event of the random demand for these goods. Since the demand events are different, we cannot decide on a production \(p\) that satisfies all the \(M\) future demands exactly: the production of some goods might be overestimated, while for others, underestimated. Hence, it makes sense to choose a production \(p\) that minimizes not only transportation costs but also the (expectation w.r.t. \(\alpha\) of the) mismatches between production \(p\) and demand \(q^{(m)}\). Such an intention can be mathematically formulated in several ways. In this work, we propose a simple one by using a metric that measures the distance of a multi-plan \(\pi=(\pi^{(1)},\ldots,\pi^{(M)})\) to the balanced subspace \(\mathcal{B}\) defined in eq. (8). We take such a metric to be the Euclidean distance \(\mathtt{dist}_{\mathcal{B}}(\pi):=\min_{\theta\in\mathcal{B}}\|\theta-\pi\|=\|\mathtt{Proj}_{\mathcal{B}}(\pi)-\pi\|\), and define the following non-linear optimization problem, with \(\gamma>0\) a penalty parameter:

\[\left\{\begin{array}{ll}\min_{\pi}&\sum_{r=1}^{R}\sum_{s=1}^{S^{(1)}}d^{(1)}_{rs}\pi^{(1)}_{rs}+\cdots+\sum_{r=1}^{R}\sum_{s=1}^{S^{(M)}}d^{(M)}_{rs}\pi^{(M)}_{rs}+\gamma\,\mathtt{dist}_{\mathcal{B}}(\pi)\\ \text{s.t.}&\sum_{r=1}^{R}\pi^{(m)}_{rs}=q^{(m)}_{s},\quad s=1,\ldots,S^{(m)},\;m=1,\ldots,M\\ &\pi^{(m)}\geq 0,\quad m=1,\ldots,M.\end{array}\right. \tag{10}\]

This problem always has a solution because the objective function is continuous and the non-empty feasible set is compact. Note that in the balanced case, problem eq. (10) is a relaxation of eq. (9). In the unbalanced setting, any feasible point of eq. (10) yields \(\mathtt{dist}_{\mathcal{B}}(\pi)>0\). As this distance function is strictly convex outside \(\mathcal{B}\), the above problem has a unique solution.

**Definition 6** (Discrete Unbalanced Wasserstein Barycenter - UWB).: _Given a set \(\{\nu^{(1)},\ldots,\nu^{(M)}\}\) of unbalanced non-negative vectors, let \(\bar{\pi}\geq 0\) be the unique solution to problem eq. (10), and \(\tilde{\pi}\) the projection of \(\bar{\pi}\) onto the balanced subspace \(\mathcal{B}\), that is, \(\tilde{\pi}:=\mathtt{Proj}_{\mathcal{B}}(\bar{\pi})\) (\(\geq 0\)).
_The vector \(\bar{p}_{r}:=\sum_{s=1}^{S^{(m)}}\tilde{\pi}^{(m)}_{rs}\), \(r=1,\ldots,R\) (regardless of \(m=1,\ldots,M\)), is defined as the \(\gamma\)-unbalanced Wasserstein barycenter of \(\{\nu^{(1)},\ldots,\nu^{(M)}\}\)._

The above definition differs from the ones found in the literature, which also relax the constraints \(\sum_{r=1}^{R}\pi^{(m)}_{rs}=q^{(m)}_{s}\); see for instance [19, 26]. Although the above definition is not as general as the ones of the latter references, it is important to highlight that our UWB definition provides meaningful results (see section 6.4 below), uniqueness of the barycenter (if unbalanced), and is indeed an extension of definition 4.

**Proposition 1**.: _Suppose that \(\{\nu^{(1)},\ldots,\nu^{(M)}\}\) are probability measures and let \(\gamma>\|\mathtt{vec}(d)\|\) in problem eq. (10), with \(\mathtt{vec}(d)\) the vectorization of the matrix \(d\in\mathds{R}^{R\times\sum_{m=1}^{M}S^{(m)}}\). Then any UWB according to definition 6 is also a (balanced) WB._

Proof.: Observe that the linear function \(\sum_{m=1}^{M}\sum_{r=1}^{R}\sum_{s=1}^{S^{(m)}}d^{(m)}_{rs}\pi^{(m)}_{rs}\) is obviously Lipschitz continuous with constant \(\|\mathtt{vec}(d)\|\). Thus, the standard theory of exact penalty methods in optimization (see for instance [5, Prop. 1.5.2]) ensures that, when \(\gamma>\|\mathtt{vec}(d)\|\), \(\bar{\pi}\) solves4 problem eq. (10) if and only if \(\bar{\pi}\) solves eq. (9). As a result, \(\bar{\pi}=\mathtt{Proj}_{\mathcal{B}}(\bar{\pi})\) and definition 6 boils down to definition 4.

Footnote 4: Note that in the balanced case, the objective function of problem eq. (10) is no longer strictly convex on the feasible set, and thus multiple solutions may exist.

Another advantage of our definition is that the problem yielding the proposed UWB enjoys a favorable structure that can be efficiently exploited by splitting methods. At first glance, computing a UWB seems much more challenging than computing a WB: the former is obtained by solving a nonlinear optimization problem followed by a projection onto the balanced subspace, while the latter is a solution of an LP. However, in practice, the LP eq. (9) is already too large to be solved directly by off-the-shelf solvers, and thus decomposition techniques need to come into play. In the next section we show that the computational burden of solving either the LP eq. (9) or the nonlinear problem eq. (10) by the Douglas-Rachford splitting method is the same. Indeed, it turns out that both problems can be efficiently solved by the algorithm presented in section 5.3.

## 4 Problem reformulation and the DR algorithm

In this section, we reformulate problems eq. (9) and eq. (10) in a suitable way so that the Douglas-Rachford splitting operator method can be easily deployed to compute a barycenter in the balanced and unbalanced settings, under the following assumptions: (i) each of the \(M\) measures \(\nu^{(m)}\) is an empirical one, described by a list of atoms whose weights are \(q^{(m)}\in\mathds{R}_{+}^{S^{(m)}}\); (ii) the search for a barycenter is considered on a fixed finite support of \(R\) atoms with weights \(p\in\mathds{R}_{+}^{R}\). To this end, we start by defining the following convex and compact sets

\[\Pi^{(m)}:=\left\{\pi^{(m)}\geq 0:\,\sum_{r=1}^{R}\pi^{(m)}_{rs}=q^{(m)}_{s},\;s=1,\ldots,S^{(m)}\right\},\;m=1,\ldots,M. \tag{11}\]

These are the sets of transportation plans \(\pi^{(m)}\) with right marginals \(q^{(m)}\).
The set with left marginals has already been characterized by the linear subspace \(\mathcal{B}\) of balanced plans eq. (8). With the help of the indicator function \(\mathbf{i}_{C}\) of a convex set \(C\), that is, \(\mathbf{i}_{C}(x)=0\) if \(x\in C\) and \(\mathbf{i}_{C}(x)=\infty\) otherwise, we can define the convex functions

\[f^{(m)}(\pi^{(m)}):=\sum_{r=1}^{R}\sum_{s=1}^{S^{(m)}}d^{(m)}_{rs}\pi^{(m)}_{rs}+\mathbf{i}_{\Pi^{(m)}}(\pi^{(m)}),\quad m=1,\ldots,M, \tag{12}\]

and recast problems eq. (9) and eq. (10) in the following more general setting

\[\min_{\pi}\;f(\pi)+g(\pi)\,,\;\text{with}: \tag{13a}\]
\[f(\pi):=\sum_{m=1}^{M}f^{(m)}(\pi^{(m)})\quad\text{and}\quad g(\pi):=\left\{\begin{array}{ll}\mathbf{i}_{\mathcal{B}}(\pi)&\text{if balanced}\\ \gamma\,\mathtt{dist}_{\mathcal{B}}(\pi)&\text{if unbalanced}.\end{array}\right. \tag{13b}\]

Since \(f\) is polyhedral and eq. (13) is solvable, computing one of its solutions is equivalent to

\[\text{find}\quad\pi\quad\text{such that}\quad 0\in\partial f(\pi)+\partial g(\pi). \tag{14}\]

Recall that the subdifferential of a lower semicontinuous convex function is a maximal monotone operator. Thus, the above generalized equation is nothing but the problem of finding a zero of the sum of two maximal monotone operators, a well-understood problem for which several methods exist (see, for instance, Chapters 25 and 27 of the textbook [3]). Among the existing algorithms, the Douglas-Rachford operator splitting method [14] (see also [3, § 25.2 and § 27.2]) is the most popular one. When applied to problem eq. (14), the DR algorithm asymptotically computes a solution by repeating the following steps, with \(k=0,1,\ldots\), given an initial point \(\theta^{0}=(\theta^{(1),0},\ldots,\theta^{(M),0})\) and a prox-parameter \(\rho>0\):

\[\left\{\begin{array}{lll}\pi^{k+1}&=&\arg\min_{\pi}\ g(\pi)+\frac{\rho}{2}\|\pi-\theta^{k}\|^{2}\\ \hat{\pi}^{k+1}&=&\arg\min_{\pi}\ f(\pi)+\frac{\rho}{2}\|\pi-(2\pi^{k+1}-\theta^{k})\|^{2}\\ \theta^{k+1}&=&\theta^{k}+\hat{\pi}^{k+1}-\pi^{k+1}.\end{array}\right. \tag{15}\]

By noting that \(f\) and \(g\) in eq. (13b) are lower semicontinuous convex functions and that problem eq. (13) is solvable, the following is a direct consequence of Theorem 25.6 and Corollary 27.4 of [3].

**Theorem 1**.: _The sequence \(\{\theta^{k}\}\) produced by the DR algorithm eq. (15) converges to a point \(\bar{\theta}\), and the following holds:_

* \(\bar{\pi}:=\arg\min_{\pi}\ g(\pi)+\frac{\rho}{2}\|\pi-\bar{\theta}\|^{2}\) _solves eq. (13);_
* \(\{\pi^{k}\}\) _and_ \(\{\hat{\pi}^{k}\}\) _converge to_ \(\bar{\pi}\)_._

The DR algorithm is attractive when the first two steps in eq. (15) are convenient to execute, which is the case in our setting. As we will shortly see, the iterate \(\pi^{k+1}\) above has an explicit formula in both the balanced and unbalanced cases, and computing \(\hat{\pi}^{k+1}\) amounts to executing a series of independent projections onto the simplex. This task can be accomplished exactly and efficiently by specialized algorithms. Since \(f\) in eq. (13b) has an additive structure, the computation of \(\hat{\pi}^{k+1}\) in eq. (15) breaks down into a series of smaller and simpler subproblems, as just mentioned.
Hence, we may exploit such a structure by combining recent developments in the DR literature to produce the following randomized version of the DR algorithm eq. (15), with \(\alpha\) the vector of weights in eq. (2):

\[\left\{\begin{array}{lll}\pi^{k+1}&=&\arg\min_{\pi}\ g(\pi)+\frac{\rho}{2}\|\pi-\theta^{k}\|^{2}\\ &&\text{Draw randomly }m\in\{1,2,\ldots,M\}\text{ with probability }\alpha_{m}>0\\ \hat{\pi}^{(m),k+1}&=&\arg\min_{\pi^{(m)}}\ f^{(m)}(\pi^{(m)})+\frac{\rho}{2}\|\pi^{(m)}-(2\pi^{(m),k+1}-\theta^{(m),k})\|^{2}\\ \theta^{(m^{\prime}),k+1}&=&\left\{\begin{array}{ll}\theta^{(m),k}+\hat{\pi}^{(m),k+1}-\pi^{(m),k+1}&\text{if }m^{\prime}=m\\ \theta^{(m^{\prime}),k}&\text{if }m^{\prime}\neq m.\end{array}\right.\end{array}\right. \tag{16}\]

The randomized DR algorithm eq. (16) aims at reducing the computational burden and accelerating the optimization process. Such goals can be attained in some situations, depending on the underlying problem and the available computational resources. The particular choice of \(\alpha_{m}>0\) as the probability of picking the \(m^{th}\) subproblem is not necessary for convergence: the only requirement is that every subproblem is picked up with a fixed and positive probability. The intuition behind our choice is that measures that play a more significant role in the objective function of eq. (6) (i.e., higher \(\alpha_{m}\)) should have a higher chance of being picked by the randomized DR algorithm. Furthermore, the presentation above, where only one measure (subproblem) in eq. (16) is drawn, is made for the sake of simplicity. One can perfectly well split the set of measures into \(\mathtt{nb}<M\) bundles, each containing a subset of measures, and select bundles randomly instead of individual measures. Such an approach proves advantageous in a parallel computing environment with \(\mathtt{nb}\) available machines/processors (see section 6.2.2 in the numerical section). The almost sure (i.e., with probability one) convergence of the randomized DR algorithm depicted in eq. (16) can be summarized as follows. We refer the interested reader to Theorem 2 in [20] for the proof (see also the additional comments in the Appendix of [2]).

**Theorem 2**.: _The sequence \(\{\pi^{k}\}\) produced by the randomized DR algorithm eq. (16) converges almost surely to a solution \(\tilde{\pi}\) of problem eq. (13)._

In the next section we further exploit the structure of the functions \(f\) and \(g\) in eq. (13) and rearrange terms in the schemes eq. (15) and eq. (16) to provide an easy-to-implement and memory-efficient algorithm for computing balanced and unbalanced WBs.

## 5 The Method of Averaged Marginals

Both the deterministic and randomized DR algorithms above require evaluating the proximal mapping of the function \(g\) given in eq. (13b). In the balanced WB setting, \(g\) is the indicator function of the balanced subspace \(\mathcal{B}\) given in eq. (8). Therefore, the solution \(\pi^{k+1}\) above is nothing but the projection of \(\theta^{k}\) onto \(\mathcal{B}\): \(\pi^{k+1}=\mathtt{Proj}_{\mathcal{B}}(\theta^{k})\). On the other hand, in the unbalanced WB case, \(g(\cdot)\) is the penalized distance function \(\gamma\,\mathtt{dist}_{\mathcal{B}}(\cdot)\). Computing \(\pi^{k+1}\) then amounts to evaluating the proximal mapping of the distance function: \(\min\limits_{\pi}\,\mathtt{dist}_{\mathcal{B}}(\pi)+\dfrac{\rho}{2\gamma}\|\pi-\theta^{k}\|^{2}\).
The unique solution to this problem is well known to be given by

\[\pi^{k+1}=\left\{\begin{array}{ll}\mathtt{Proj}_{\mathcal{B}}(\theta^{k})&\text{ if }\,\,\rho\,\mathtt{dist}_{\mathcal{B}}(\theta^{k})\leq\gamma\\ \theta^{k}+\frac{\gamma}{\rho\,\mathtt{dist}_{\mathcal{B}}(\theta^{k})}(\mathtt{Proj}_{\mathcal{B}}(\theta^{k})-\theta^{k})&\text{ otherwise.}\end{array}\right. \tag{17}\]

Hence, computing \(\pi^{k+1}\) in both the balanced and unbalanced settings boils down to projecting onto the balanced subspace (recall that \(\mathtt{dist}_{\mathcal{B}}(\theta)=\|\mathtt{Proj}_{\mathcal{B}}(\theta)-\theta\|\)). This fact allows us to provide a unified algorithm for WB and UWB problems.

### Projecting onto the subspace of balanced plans

In what follows we exploit the particular geometry of \(\mathcal{B}\) to provide an explicit formula for projecting onto this set.

**Proposition 2**.: _With the notation of section 2, let \(\theta\in\mathds{R}^{R\times\sum_{m=1}^{M}S^{(m)}}\),_

\[a_{m}:=\frac{\frac{1}{S^{(m)}}}{\sum_{j=1}^{M}\frac{1}{S^{(j)}}},\quad p^{(m)}_{r}:=\sum_{s=1}^{S^{(m)}}\theta_{rs}^{(m)},\quad\text{and}\quad p:=\sum_{m=1}^{M}a_{m}p^{(m)}. \tag{18a}\]

_The (matrix) projection \(\pi=\mathtt{Proj}_{\mathcal{B}}(\theta)\) has the explicit form:_

\[\pi_{rs}^{(m)}:=\theta_{rs}^{(m)}+\frac{p_{r}-p_{r}^{(m)}}{S^{(m)}},\quad s=1,\dots,S^{(m)},\;r=1,\dots,R,\;m=1,\dots,M. \tag{18b}\]

Proof.: First, observe that \(\pi=\mathtt{Proj}_{\mathcal{B}}(\theta)\) solves the QP problem

\[\left\{\begin{array}{ll}\min\limits_{y^{(1)},\dots,y^{(M)}}&\dfrac{1}{2}\sum_{m=1}^{M}\|y^{(m)}-\theta^{(m)}\|^{2}\\ \text{s.t.}&\sum_{s=1}^{S^{(m)}}y_{rs}^{(m)}=\sum_{s=1}^{S^{(m+1)}}y_{rs}^{(m+1)},\quad r=1,\dots,R,\;m=1,\dots,M-1,\end{array}\right. \tag{19}\]

whose constraints couple only entries sharing the same row index \(r\): there is no constraint linking \(\pi_{rs}^{(m)}\) with \(\pi_{r^{\prime}s^{\prime}}^{(m^{\prime})}\) for \(r\neq r^{\prime}\), with \(m\) and \(m^{\prime}\) arbitrary. Therefore, we can decompose it by rows: for \(r=1,\ldots,R\), the \(r^{th}\) row \((\pi_{r1}^{(1)},\ldots,\pi_{rS^{(1)}}^{(1)},\ldots,\pi_{r1}^{(M)},\ldots,\pi_{rS^{(M)}}^{(M)})\) of \(\pi\) is the unique solution to the problem

\[\left\{\begin{array}{ll}\min_{w}&\frac{1}{2}\sum_{m=1}^{M}\sum_{s=1}^{S^{(m)}}\Big{(}w_{s}^{(m)}-\theta_{rs}^{(m)}\Big{)}^{2}\\ \text{s.t.}&\sum_{s=1}^{S^{(m)}}w_{s}^{(m)}=\sum_{s=1}^{S^{(m+1)}}w_{s}^{(m+1)},\quad m=1,\dots,M-1.\end{array}\right. \tag{20}\]

The Lagrangian function of this problem is, for a dual variable \(u\), given by

\[L_{r}(w,u)=\frac{1}{2}\sum_{m=1}^{M}\sum_{s=1}^{S^{(m)}}\Big{(}w_{s}^{(m)}-\theta_{rs}^{(m)}\Big{)}^{2}+\sum_{m=1}^{M-1}u^{(m)}\Big{(}\sum_{s=1}^{S^{(m)}}w_{s}^{(m)}-\sum_{s=1}^{S^{(m+1)}}w_{s}^{(m+1)}\Big{)}. \tag{21}\]
A primal-dual solution \((w,u)\) to problem eq. (20) must satisfy the Lagrange system, in particular \(\nabla_{w}L_{r}(w,u)=0\) with \(w\) the \(r^{th}\) row of \(\pi=\mathtt{Proj}_{\mathcal{B}}(\theta)\), that is,

\[\left\{\begin{array}{llll}\pi_{rs}^{(1)}-\theta_{rs}^{(1)}&+&u^{(1)}&=0\quad s=1,\ldots,S^{(1)}\\ \pi_{rs}^{(2)}-\theta_{rs}^{(2)}&+&u^{(2)}-u^{(1)}&=0\quad s=1,\ldots,S^{(2)}\\ &\vdots&\\ \pi_{rs}^{(M-1)}-\theta_{rs}^{(M-1)}&+&u^{(M-1)}-u^{(M-2)}&=0\quad s=1,\ldots,S^{(M-1)}\\ \pi_{rs}^{(M)}-\theta_{rs}^{(M)}&-&u^{(M-1)}&=0\quad s=1,\ldots,S^{(M)}.\end{array}\right. \tag{22}\]

Let us denote \(p_{r}=\sum_{s=1}^{S^{(m)}}\pi_{rs}^{(m)}\) (no matter which \(m\in\{1,\ldots,M\}\), because \(\pi\in\mathcal{B}\)) and \(p_{r}^{(m)}=\sum_{s=1}^{S^{(m)}}\theta_{rs}^{(m)}\) (the \(r^{th}\) component of \(p^{(m)}\) as defined in eq. (18a)), and sum the first row of system eq. (22) over \(s\) to get

\[p_{r}-p_{r}^{(1)}+u^{(1)}S^{(1)}=0\quad\Rightarrow\quad u^{(1)}=\frac{p_{r}^{(1)}-p_{r}}{S^{(1)}}. \tag{23}\]

Now, by summing the second row in eq. (22) over \(s\) we get

\[p_{r}-p_{r}^{(2)}+u^{(2)}S^{(2)}-u^{(1)}S^{(2)}=0\quad\Rightarrow\quad u^{(2)}=u^{(1)}+\frac{p_{r}^{(2)}-p_{r}}{S^{(2)}}. \tag{24}\]

By proceeding in this way and setting \(u^{(0)}:=0\) we obtain

\[u^{(m)}=u^{(m-1)}+\frac{p_{r}^{(m)}-p_{r}}{S^{(m)}},\quad m=1,\ldots,M-1. \tag{25}\]

Furthermore, for \(m=M-1\) we get the alternative formula \(u^{(M-1)}=-\frac{p_{r}^{(M)}-p_{r}}{S^{(M)}}\). Given these dual values, we can use eq. (22) to conclude that the \(r^{th}\) row of \(\pi=\mathtt{Proj}_{\mathcal{B}}(\theta)\) is given as in eq. (18b). It remains to show that \(p_{r}=\sum_{s=1}^{S^{(m)}}\pi_{rs}^{(m)}\), as defined above, is alternatively given by eq. (18a). To this end, observe that \(u^{(M-1)}=u^{(M-1)}-u^{(0)}=\sum_{m=1}^{M-1}(u^{(m)}-u^{(m-1)})\), so:

\[u^{(M-1)}=\sum_{m=1}^{M-1}\Big{(}\frac{p_{r}^{(m)}-p_{r}}{S^{(m)}}\Big{)}=\sum_{m=1}^{M-1}\frac{p_{r}^{(m)}}{S^{(m)}}-p_{r}\sum_{m=1}^{M-1}\frac{1}{S^{(m)}}. \tag{26}\]

Recall that \(u^{(M-1)}=\frac{p_{r}-p_{r}^{(M)}}{S^{(M)}}\), i.e., \(p_{r}=p_{r}^{(M)}+u^{(M-1)}S^{(M)}\). Replacing \(u^{(M-1)}\) with the expression in eq. (26) yields

\[p_{r}=S^{(M)}\Big{[}\frac{p_{r}^{(M)}}{S^{(M)}}+u^{(M-1)}\Big{]}=S^{(M)}\Big{[}\frac{p_{r}^{(M)}}{S^{(M)}}+\sum_{m=1}^{M-1}\frac{p_{r}^{(m)}}{S^{(m)}}-p_{r}\sum_{m=1}^{M-1}\frac{1}{S^{(m)}}\Big{]}, \tag{27}\]

which implies \(p_{r}\sum_{m=1}^{M}\frac{1}{S^{(m)}}=\sum_{m=1}^{M}\Big{(}\frac{p_{r}^{(m)}}{S^{(m)}}\Big{)}\). Hence, \(p\) is as given in eq. (18a), and the proof is thus complete.

Note that the projection can be computed in parallel over the rows; the average \(p\) of the marginals \(p^{(m)}\) is the gathering step between parallel processors.
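Proposition 2 translates directly into a few lines of code. The sketch below is our own illustration (assuming the multi-plan is stored as a list of dense numpy arrays) of the projection eq. (18) and, built on top of it, of the proximal step eq. (17) used in the unbalanced case.

```python
import numpy as np

def proj_balanced(theta):
    """Projection onto the balanced subspace B, following eq. (18).

    theta: list of M arrays, theta[m] of shape (R, S_m).
    Returns (pi, p): the projected multi-plan and the averaged marginal p.
    """
    S = np.array([t.shape[1] for t in theta], dtype=float)
    a = (1.0 / S) / np.sum(1.0 / S)                     # weights a_m, eq. (18a)
    marg = [t.sum(axis=1) for t in theta]               # left marginals p^(m)
    p = sum(a[m] * marg[m] for m in range(len(theta)))  # averaged marginal
    pi = [t + (p - pm)[:, None] / s
          for t, pm, s in zip(theta, marg, S)]          # eq. (18b), row-parallel
    return pi, p

def prox_dist_balanced(theta, rho, gamma):
    """Proximal step of gamma * dist_B at theta, following eq. (17)."""
    pi, _ = proj_balanced(theta)
    dist = np.sqrt(sum(np.sum((u - t) ** 2) for u, t in zip(pi, theta)))
    if rho * dist <= gamma:
        return pi                                       # full projection
    step = gamma / (rho * dist)                         # partial move towards B
    return [t + step * (u - t) for u, t in zip(pi, theta)]
```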
### Evaluating the Proximal Mapping of Transportation Costs

In this subsection we turn our attention to the DR algorithm's second step, which requires solving a convex optimization problem of the form \(\min_{\pi}\ f(\pi)+\frac{\rho}{2}\|\pi-y\|^{2}\) (see eq. (15)). Given the additive structure of \(f\) in eq. (13b), the above problem can be decomposed into \(M\) smaller ones:

\[\min_{\pi^{(m)}}\ f^{(m)}(\pi^{(m)})+\frac{\rho}{2}\|\pi^{(m)}-y^{(m)}\|^{2},\quad m=1,\ldots,M. \tag{28}\]

Looking closely at every subproblem above, we can see that we can decompose it even more: the columns of the transportation plan \(\pi^{(m)}\) are independent in the minimization. Besides, as the following result shows, every column optimization is simply the projection of an \(R\)-dimensional vector onto the simplex \(\Delta_{R}\).

**Proposition 3**.: _Let \(\Delta_{R}(\tau)\) be as in eq. (1). The proximal mapping \(\hat{\pi}:=\arg\min_{\pi}\ f(\pi)+\frac{\rho}{2}\|\pi-y\|^{2}\) can be computed exactly, in parallel along the columns of each transport plan \(y^{(m)}\), as follows: for all \(m\in\{1,\ldots,M\}\),_

\[\begin{pmatrix}\hat{\pi}_{1s}^{(m)}\\ \vdots\\ \hat{\pi}_{Rs}^{(m)}\end{pmatrix}=\mathtt{Proj}_{\Delta_{R}(q_{s}^{(m)})}\begin{pmatrix}y_{1s}^{(m)}-\frac{1}{\rho}d_{1s}^{(m)}\\ \vdots\\ y_{Rs}^{(m)}-\frac{1}{\rho}d_{Rs}^{(m)}\end{pmatrix},\quad s=1,\ldots,S^{(m)}. \tag{29}\]

Proof.: It has already been argued that evaluating this proximal mapping decomposes into the \(M\) smaller subproblems eq. (28), each of which is a quadratic program due to the definition of \(f^{(m)}\) in eq. (12):

\[\left\{\begin{array}{ll}\min_{\pi^{(m)}\geq 0}&\sum_{r=1}^{R}\sum_{s=1}^{S^{(m)}}\Big{[}d_{rs}^{(m)}\pi_{rs}^{(m)}+\frac{\rho}{2}\big{(}\pi_{rs}^{(m)}-y_{rs}^{(m)}\big{)}^{2}\Big{]}\\ \mathrm{s.t.}&\sum_{r=1}^{R}\pi_{rs}^{(m)}=q_{s}^{(m)},\ s=1,\ldots,S^{(m)}.\end{array}\right. \tag{30}\]

By taking a close look at the above problem, we can see that the objective function is decomposable and that the constraints couple the entries of \(\pi^{(m)}\) only within each column. Therefore, we can go further and decompose the above problem per column: for \(s=1,\ldots,S^{(m)}\), the \(s^{th}\) column of \(\hat{\pi}^{(m)}\) is the unique solution to the \(R\)-dimensional problem

\[\left\{\begin{array}{ll}\min_{w\geq 0}&\sum_{r=1}^{R}\left[d_{rs}^{(m)}w_{r}+\frac{\rho}{2}(w_{r}-y_{rs}^{(m)})^{2}\right]\\ \text{s.t.}&\sum_{r=1}^{R}w_{r}=q_{s}^{(m)},\end{array}\right. \tag{31}\]

which is nothing but eq. (29). Such a projection can be performed exactly [8].

**Remark 1**.: _If \(\tau=0\), then \(\Delta_{R}(\tau)=\{0\}\) and the projection onto this set is trivial. Otherwise, \(\tau>0\) and computing \(\mathtt{Proj}_{\Delta_{R}(\tau)}(w)\) amounts to projecting onto the \(R+1\) simplex \(\Delta_{R}\): \(\mathtt{Proj}_{\Delta_{R}(\tau)}(w):=\tau\,\mathtt{Proj}_{\Delta_{R}}(w/\tau)\). The latter task can be performed exactly by using efficient methods [8]. Hence, evaluating the proximal mapping in proposition 3 decomposes into \(\sum_{m=1}^{M}S^{(m)}\) independent projections onto \(\Delta_{R}\)._
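One standard way of carrying out these projections is the sort-based routine sketched below; it is a generic textbook procedure written in our own notation, not necessarily the specific method of [8].

```python
import numpy as np

def proj_simplex(w, tau=1.0):
    """Euclidean projection of w onto Delta_R(tau) = {u >= 0 : sum(u) = tau}."""
    if tau == 0.0:
        return np.zeros_like(w)             # Delta_R(0) = {0}, cf. remark 1
    v = w / tau                             # reduce to the unit simplex
    u = np.sort(v)[::-1]                    # sort in decreasing order
    css = np.cumsum(u) - 1.0
    idx = np.nonzero(u * np.arange(1, v.size + 1) > css)[0][-1]
    lam = css[idx] / (idx + 1.0)            # threshold shifting the entries
    return tau * np.maximum(v - lam, 0.0)
```

In the notation of proposition 3, the \(s^{th}\) column of the \(m^{th}\) plan is then updated as `proj_simplex(y[:, s] - d[m][:, s] / rho, q[m][s])`.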
### The Method of Averaged Marginals (MAM)

Putting propositions 2 and 3 together with the general lines of the DR algorithm eq. (15) and rearranging terms, we provide below an easy-to-implement and memory-efficient algorithm for computing barycenters. The pseudo code for this algorithm is presented in algorithm 1. The algorithm gathers the DR's three main steps and integrates an option in case the problem is unbalanced, since treating the Wasserstein barycenter problem the way we did enables an easy switch from the balanced to the unbalanced case. Note that part of the first DR step has been placed at the end of the _while-loop_ iteration for storage-optimization purposes, as discussed in the following paragraphs. In the following algorithm, the vector \(\alpha\in\Delta_{M}\) of weights is included in the distance matrix definition, as done in eq. (12). Some comments on algorithm 1 are in order.

**MAM's interpretation.** A simple interpretation of the _Method of Averaged Marginals_ is as follows: at every iteration, the barycenter approximation \(p^{k}\) is a weighted average of the \(M\) marginals \(p^{(m)}\) of the plans \(\theta^{(m),k}\), \(m=1,\ldots,M\). As we will shortly see, the whole sequence \(\{p^{k}\}\) converges (almost surely or deterministically) to an exact barycenter under specific assumptions on the choice of the index set at line 6 of algorithm 1.

**Initialization.** At initialization, the choices of \(\theta^{0}\in\mathds{R}^{R\times\sum_{m=1}^{M}S^{(m)}}\) and \(\rho>0\) are arbitrary. The prox-parameter \(\rho>0\) is borrowed from the DR algorithm and is known to have an impact on the practical convergence speed. Therefore, \(\rho\) should be tuned for the set of distributions at stake. Some heuristics for tuning this parameter exist for other methods derived from the DR algorithm [32, 33] and can be adapted to the setting of algorithm 1.

**Stopping criteria.** A possible stopping test for the algorithm, with mathematical justification, is to terminate the iterative process as soon as \(\|\theta^{k+1}-\theta^{k}\|_{\infty}\leq\mathtt{Tol}\), where \(\mathtt{Tol}>0\) is a given tolerance. In practical terms, this test boils down to checking whether \(|\theta_{rs}^{(m),k}+t^{k}(p_{r}^{k}-p_{r}^{(m)})/S^{(m)}-\hat{\pi}_{rs}^{(m)}|\leq\mathtt{Tol}\) for all \(r=1,\ldots,R\), \(s=1,\ldots,S^{(m)}\), and all \(m=1,\ldots,M\). Alternatively, we may stop the algorithm when \(\|p^{k+1}-p^{k}\|\) is small enough. The latter should be understood as a heuristic criterion.

**Deterministic and randomized variants of MAM.** The most computationally expensive step of MAM is Step 2, which requires a series of independent projections onto the \(R+1\) simplex (see remark 1). Our approach underlines that this step can be conducted in parallel over \(s\) or, if preferable, over the measures \(m\). As a result, it is natural to derive a randomized variant of the algorithm. This is the reason for allowing the choice of an index set \(\mathcal{M}^{k}\subsetneq\{1,\ldots,M\}\) at line 6 of algorithm 1. For example, we may employ an economical rule and choose \(\mathcal{M}^{k}=\{m\}\) randomly (with a fixed and positive probability, e.g. \(\alpha_{m}\)) at every iteration, or the costly one \(\mathcal{M}^{k}=\{1,\ldots,M\}\) for all \(k\). The latter yields the deterministic method of averaged marginals, while the former gives rise to a randomized variant of MAM. Depending on the computational resources, intermediate choices between these two extremes can perform better in practice. A sketch of the resulting iteration is given after remark 2 below.

**Remark 2**.: _Suppose that \(1<\mathtt{nb}<M\) processors are available. We may then create a partition \(A_{1},\ldots,A_{\mathtt{nb}}\) of the set \(\{1,\ldots,M\}\) (\(=\cup_{i=1}^{\mathtt{nb}}A_{i}\)) and define weights \(\beta_{i}:=\sum_{m\in A_{i}}\alpha_{m}>0\). Then, at every iteration \(k\), we may draw the subset \(A_{i}\) of measures with probability \(\beta_{i}\) and set \(\mathcal{M}^{k}=A_{i}\)._

This randomized variant enables the algorithm to compute more iterations per time unit, but with less precision per iteration (since not all the marginals \(p^{(m)}\) are updated). Such a randomized variant of MAM is benchmarked against its deterministic counterpart in section 6.2.2, where we demonstrate empirically that with certain configurations (depending on the number \(M\) of probability distributions and the number of processors) this randomized algorithm can be effective.
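To fix ideas, one pass of the while-loop can be sketched as follows, reusing `proj_simplex` from the previous snippet. This is an illustrative, unoptimized rendering under our own naming, not the publicly released implementation; the deterministic variant updates every measure, while a randomized variant would simply restrict the loop over \(m\) to the sampled index set \(\mathcal{M}^{k}\).

```python
import numpy as np

def mam_iteration(theta, d, q, rho, gamma=np.inf):
    """One deterministic MAM pass; gamma = inf encodes the balanced setting."""
    M = len(theta)
    S = np.array([t.shape[1] for t in theta], dtype=float)
    a = (1.0 / S) / np.sum(1.0 / S)
    marg = [t.sum(axis=1) for t in theta]             # marginals p^(m)
    p = sum(a[m] * marg[m] for m in range(M))         # Step 1: average them
    if np.isinf(gamma):
        t_k = 1.0                                     # balanced problem
    else:                                             # unbalanced, cf. eq. (17)
        dist = np.sqrt(sum(np.sum((p - marg[m]) ** 2) / S[m] for m in range(M)))
        t_k = 1.0 if rho * dist <= gamma else gamma / (rho * dist)
    for m in range(M):                                # parallelizable over m
        shift = t_k * (p - marg[m])[:, None] / S[m]
        y = theta[m] + 2.0 * shift                    # y = 2 pi^{k+1} - theta^k
        for s in range(theta[m].shape[1]):            # Step 2: column projections
            theta[m][:, s] = proj_simplex(y[:, s] - d[m][:, s] / rho, q[m][s])
        theta[m] -= shift                             # Step 3: theta update
    return p, theta
```

Iterating this map and monitoring \(\|p^{k+1}-p^{k}\|\) reproduces, in spirit, the stopping heuristic discussed above.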
We highlight that choices for \(\mathcal{M}^{k}\) other than the randomized ones or the deterministic rule \(\mathcal{M}^{k}=\{1,\ldots,M\}\) should be understood as heuristics. Within such a framework, one may choose \(\mathcal{M}^{k}\subsetneq\{1,\ldots,M\}\) deterministically, for instance cyclically, or by the discrepancy of the marginal \(p^{(m)}\) with respect to the average \(p^{k}\).

**Storage complexity.** Note that the operation at line 10 is trivial if \(q^{(m)}_{s}=0\). This motivates us to remove all the zero components of \(q^{(m)}\) from the problem's data and, consequently, all the columns \(s\) of the distance matrix \(d^{(m)}\) and of the variables \(\theta,\hat{\pi}\) corresponding to \(q^{(m)}_{s}=0\), \(m=1,\ldots,M\). In some applications (e.g. generally sparse problems), this strategy significantly reduces the WB problem's size and thus its memory allocation, since the removed columns are neither stored nor treated in the _for_ loops. This remark raises the question of how sparse data impacts the practical performance of MAM; section 6.1 conducts an empirical analysis on this matter. In nominal use, the algorithm needs to store the decision variables \(\theta^{(m)}\in\mathds{R}^{R\times S^{(m)}}\) for all \(m=1,\ldots,M\) (transport plans for every measure), along with the \(M\) distance matrices \(d^{(m)}\in\mathds{R}^{R\times S^{(m)}}\), one barycenter approximation \(p^{k}\in\mathds{R}^{R}\), \(M\) approximated marginals \(p^{(m)}\in\mathds{R}^{R}\), and \(M\) marginals \(q^{(m)}\in\mathds{R}^{S^{(m)}}\). Note that, in practical terms, the auxiliary variables \(w\) and \(\hat{\pi}\) in algorithm 1 can easily be removed from the algorithm's implementation by merging lines 9-11 into a single one. Hence, for \(T:=\sum_{m=1}^{M}S^{(m)}\), the method's memory allocation is \(2R\,T+T+M(R+1)\) floating-point numbers. This number can be reduced if the measures share the same distance matrix, i.e., \(d^{(m)}=d^{(m^{\prime})}\) for all \(m,m^{\prime}=1,\ldots,M\). In this case, \(S^{(m)}=S\) for all \(m\), \(T=M\,S\), and the method's memory allocation drops to \(R\,T+R\,S+T+M(R+1)\) floating-point numbers. In light of the previous remark, this memory complexity should be treated as an upper bound: the sparser the data, the less memory is needed.

**Balanced and unbalanced settings.** As already mentioned, our approach can handle both balanced and unbalanced WB problems. All that is necessary is to choose a finite (positive) value for the parameter \(\gamma\) in the unbalanced case. Such a parameter is only used to define \(t^{k}\in(0,1]\) at every iteration. Indeed, algorithm 1 defines \(t^{k}=1\) for all iterations if the WB problem is balanced (because \(\gamma=\infty\) in this case)5, and \(t^{k}=\gamma/\left(\rho\,\sqrt{\sum_{m=1}^{M}\frac{\|p^{k}-p^{(m)}\|^{2}}{S^{(m)}}}\right)\) otherwise. This rule for setting up \(t^{k}\) is a mere artifice to model eq. (17). Indeed, \(\mathtt{dist}_{\mathcal{B}}(\theta^{k})=\|\mathtt{Proj}_{\mathcal{B}}(\theta^{k})-\theta^{k}\|\) reduces to \(\sqrt{\sum_{m=1}^{M}\frac{\|p^{k}-p^{(m)}\|^{2}}{S^{(m)}}}\) thanks to proposition 2.

Footnote 5: Observe that line 5 can be entirely disregarded in this case, by setting \(t^{k}=t=1\) fixed at initialization.

**Convergence analysis.** The convergence analysis of algorithm 1 can be summarized as follows.
**Theorem 3** (MAM's convergence analysis).:

* _(Deterministic MAM.) Consider algorithm 1 with the choice \(\mathcal{M}^{k}=\{1,\ldots,M\}\) for all \(k\). Then the sequence of points \(\{p^{k}\}\) generated by the algorithm converges to a point \(\bar{p}\). If the measures are balanced, then \(\bar{p}\) is a balanced WB; otherwise, \(\bar{p}\) is a \(\gamma\)-unbalanced WB._
* _(Randomized MAM.) Consider algorithm 1 with the choice \(\mathcal{M}^{k}\subset\{1,\ldots,M\}\) as in remark 2. Then the sequence of points \(\{p^{k}\}\) generated by the algorithm converges almost surely to a point \(\bar{p}\). If the measures are balanced, then \(\bar{p}\) is almost surely a balanced WB; otherwise, \(\bar{p}\) is almost surely a \(\gamma\)-unbalanced WB._

Proof.: It suffices to show that algorithm 1 is an implementation of the (randomized) DR algorithm and to invoke theorem 1 for item a) and theorem 2 for item b). To this end, we first rely on proposition 2 to get that the projection of \(\theta^{k}\) onto the balanced subspace \(\mathcal{B}\) is given by \(\theta^{(m),k}_{rs}+\frac{p^{k}_{r}-p^{(m)}_{r}}{S^{(m)}}\), \(s=1,\ldots,S^{(m)}\), \(r=1,\ldots,R\), \(m=1,\ldots,M\), where \(p^{k}\) is computed at Step 1 of the algorithm, and the marginals \(p^{(m)}\) of \(\theta^{k}\) are computed at Step 0 if \(k=0\), or at Step 3 otherwise. Therefore, \(\mathtt{dist}_{\mathcal{B}}(\theta^{k})=\|\mathtt{Proj}_{\mathcal{B}}(\theta^{k})-\theta^{k}\|=\sqrt{\sum_{m=1}^{M}\frac{\|p^{k}-p^{(m)}\|^{2}}{S^{(m)}}}\). Now, given the rule for updating \(t^{k}\) in algorithm 1, we can define the auxiliary variable \(\pi^{k+1}\) as \(\pi^{k+1}=\theta^{k}+t^{k}(\mathtt{Proj}_{\mathcal{B}}(\theta^{k})-\theta^{k})\), or alternatively,

\[\pi_{rs}^{(m),k+1}=\theta_{rs}^{(m),k}+t^{k}\frac{(p_{r}^{k}-p_{r}^{(m)})}{S^{(m)}},\quad s=1,\ldots,S^{(m)},\ r=1,\ldots,R,\ m=1,\ldots,M. \tag{32}\]

In the balanced case, \(t^{k}=1\) for all \(k\) (because \(\gamma=\infty\)) and thus \(\pi^{k+1}\) is as in eq. (18b). Otherwise, \(\pi^{k+1}\) is as in eq. (17) (see the comments after algorithm 1). In both cases, \(\pi^{k+1}\) coincides with the auxiliary variable at the first step of the DR scheme eq. (15) (see the developments at the beginning of this section). Next, observe that to perform the second step of eq. (15) we need to evaluate \(y=2\pi^{k+1}-\theta^{k}\), which, thanks to the above formula for \(\pi^{k+1}\), is given by \(y_{rs}^{(m)}=\theta_{rs}^{(m),k}+2\,t^{k}\,\frac{p_{r}^{k}-p_{r}^{(m)}}{S^{(m)}}\), \(s=1,\ldots,S^{(m)}\), \(r=1,\ldots,R\), \(m=1,\ldots,M\). As a result, for the choice \(\mathcal{M}^{k}=\{1,\ldots,M\}\) for all \(k\), Step 2 of algorithm 1 yields, thanks to proposition 3, \(\hat{\pi}^{k+1}\) as at the second step of eq. (15). Furthermore, the update of \(\theta^{k+1}\) in the latter coincides with the rule in algorithm 1: for \(s=1,\ldots,S^{(m)}\), \(r=1,\ldots,R\), and \(m=1,\ldots,M\),

\[\theta_{rs}^{(m),k+1}=\theta_{rs}^{(m),k}+\hat{\pi}_{rs}^{(m),k+1}-\pi_{rs}^{(m),k+1}=\theta_{rs}^{(m),k}+\hat{\pi}_{rs}^{(m),k+1}-\left(\theta_{rs}^{(m),k}+t^{k}\frac{(p_{r}^{k}-p_{r}^{(m)})}{S^{(m)}}\right)=\hat{\pi}_{rs}^{(m),k+1}-t^{k}\frac{(p_{r}^{k}-p_{r}^{(m)})}{S^{(m)}}.\]

Hence, for the choice \(\mathcal{M}^{k}=\{1,\ldots,M\}\) for all \(k\), algorithm 1 is the DR algorithm eq. (15) applied to the WB problem eq. (13). Theorem 1 thus ensures that the sequence \(\{\pi^{k}\}\) as defined above converges to some \(\bar{\pi}\) solving eq. (13).
To show that \(\{p^{k}\}\) converges to a barycenter, let us first use the property that \(\mathcal{B}\) is a linear subspace to obtain the decomposition \(\theta=\mathtt{Proj}_{\mathcal{B}}(\theta)+\mathtt{Proj}_{\mathcal{B}^{\perp}}(\theta)\), which allows us to rewrite the auxiliary variable \(\pi^{k+1}\) differently: \(\pi^{k+1}=\theta^{k}+t^{k}(\mathtt{Proj}_{\mathcal{B}}(\theta^{k})-\theta^{k})=\theta^{k}-t^{k}\mathtt{Proj}_{\mathcal{B}^{\perp}}(\theta^{k})\). Let us denote \(\tilde{\pi}^{k+1}:=\mathtt{Proj}_{\mathcal{B}}(\pi^{k+1})\). Then \(\tilde{\pi}^{k+1}=\mathtt{Proj}_{\mathcal{B}}(\theta^{k}-t^{k}\mathtt{Proj}_{\mathcal{B}^{\perp}}(\theta^{k}))=\mathtt{Proj}_{\mathcal{B}}(\theta^{k})\), and thus proposition 2 yields \(\tilde{\pi}_{rs}^{(m),k+1}=\theta_{rs}^{(m),k}+\frac{p_{r}^{k}-p_{r}^{(m)}}{S^{(m)}}\), \(s=1,\ldots,S^{(m)}\), \(r=1,\ldots,R\), \(m=1,\ldots,M\), which in turn gives (by recalling that \(\sum_{s=1}^{S^{(m)}}\theta_{rs}^{(m),k}=p_{r}^{(m)}\)): \(\sum_{s=1}^{S^{(m)}}\tilde{\pi}_{rs}^{(m),k+1}=p_{r}^{k}\), \(r=1,\ldots,R\), \(m=1,\ldots,M\). As \(\lim_{k\to\infty}\pi^{k}=\bar{\pi}\), \(\lim_{k\to\infty}\tilde{\pi}^{k}=\lim_{k\to\infty}\mathtt{Proj}_{\mathcal{B}}(\pi^{k})=\mathtt{Proj}_{\mathcal{B}}(\bar{\pi})=:\tilde{\pi}\). Therefore, for all \(r=1,\ldots,R\), \(m=1,\ldots,M\), the following limits are well defined:

\[\bar{p}_{r}:=\sum_{s=1}^{S^{(m)}}\tilde{\pi}_{rs}^{(m)}=\lim_{k\to\infty}\sum_{s=1}^{S^{(m)}}\tilde{\pi}_{rs}^{(m),k+1}=\lim_{k\to\infty}p_{r}^{k}. \tag{33}\]

We have shown that the whole sequence \(\{p^{k}\}\) converges to \(\bar{p}\). By recalling that \(\bar{\pi}\) solves eq. (13), we conclude that in the balanced setting \(\tilde{\pi}=\bar{\pi}\), and thus \(\bar{p}\) is a WB according to definition 4. On the other hand, in the unbalanced setting, \(\bar{p}\) above is a \(\gamma\)-unbalanced WB according to definition 6. The proof of item b) is a verbatim copy of the above: the sole difference, given the assumptions on the choice of \(\mathcal{M}^{k}\), is that we need to rely on theorem 2 (and not on theorem 1 as previously done) to conclude that \(\{\pi^{k}\}\) converges almost surely to some \(\bar{\pi}\) solving eq. (13). Thanks to the continuity of the orthogonal projection onto the subspace \(\mathcal{B}\), the limits above yield almost sure convergence of \(\{p^{k}\}\) to a barycenter \(\bar{p}\).

## 6 Numerical Experiments

This section illustrates MAM's practical performance on some well-known datasets. The impact of different data structures is studied before the algorithm is compared to state-of-the-art methods. The section closes with an illustrative example of MAM computing UWBs. Numerical experiments were conducted using 20 cores (_Intel(R) Xeon(R) Gold 5120 CPU_) and _Python 3.9_. The test problems and solvers' codes are available for download at the link [https://ifpen-gitlab.appcollaboratif.fr/detocs/mam_wb](https://ifpen-gitlab.appcollaboratif.fr/detocs/mam_wb).

### Study on data structure influence

We start by evaluating the impact of conditions that influence the storage complexity and the algorithm's performance. The main conditions are the _sparsity_ of the data and the _number of distributions_ \(M\). Indeed, on the one hand, the denser the distributions, the more RAM is needed to store the data per transport plan (see the management of _storage complexity_ in section 5.3). On the other hand, the more distributions are treated, the more transport plans must be stored.
In both of these configurations, the time per iteration is expected to grow, either because a processor needs to project more columns onto the respective simplex within _Step 2_, or because _Step 2_ is repeated as many times as the number of distributions \(M\) (see algorithm 1). The dataset at hand, inspired from [4, 11], has been built so as to control the sparsity (or, respectively, the density) of the distributions (see figure 1 and table 1). Note that each image is normalized, making it a representation of a probability distribution. The density of a dataset is controlled by the number of nested ellipses: as exemplified in figure 1, measures with only a single ellipse are very sparse, while a dataset with 5 nested ellipses is denser. In this first experiment we analyze the impact on MAM of the sparsity and the number of measures. We have set \(\rho=100\), without proper tuning, for every dataset. The study has been carried out with one processor to avoid CPU communication management.

Figure 1: Sample of the artificial nested ellipses datasets. The first column is taken from the first dataset with 1 ellipse, the second column from the second dataset with 2 nested ellipses, and so on until the sixth column with 6 nested ellipses.

Figure 2 shows that, as expected, the execution time of an iteration increases with increasing density and number of measures. It can also be seen that the number of measures has a greater influence on the method's speed than the density (such a phenomenon may be due to the _numpy_ matrix management). This means that the quantity of information in each measure does not seem to make the algorithm less efficient in terms of speed. Such a result should be put in regard with algorithms such as B-ADMM [35], which are particularly shaped for sparse datasets but less efficient on denser ones. This is a significant point that will be further developed in section 6.2.3.

### Comparison with IBP

The Iterative Bregman Projection (IBP) [4] algorithm is a state-of-the-art method for computing Wasserstein barycenters. As mentioned in the introduction, IBP employs a regularizing function parametrized by \(\lambda>0\). The greater the \(\lambda\), the better the approximation; but in practice, \(\lambda\) has to be kept at a moderate magnitude to avoid numerical errors (double-precision overflow). IBP is very sensitive to \(\lambda\), whose admissible range strongly depends on the dataset at stake. Thus IBP is an inexact method, whereas MAM is exact. Although the study below shows certain advantages of MAM, we make it clear that the aim is not to demonstrate which algorithm is better in general, but instead to highlight the differences between the two methods and their respective advantages depending on the use. Note that the code for IBP is inspired by the original one [21].

#### 6.2.1 Qualitative comparison

Here we use 100 images per digit of the MNIST database [29], where each digit has been randomly translated and rotated. Each image has \(40\times 40\) pixels and can be treated as a probability distribution thanks to a normalization, where the pixel locations are the _support_ and the pixel intensities the _probabilities_. In figure 3, we display intermediate barycenter solutions for digits \(3,4,5\) at different time steps, both for MAM and IBP. For the two methods the hyperparameters have been tuned: for instance, \(\lambda=1700\) is the greatest lambda that enables IBP to compute the barycenter of the 3's dataset without a double-precision overflow error.
Regarding MAM, a range of values of \(\rho>0\) has been tested for 100 seconds of execution, to identify which one provides good performance (for example, \(\rho=50\) for the dataset of 3's). As illustrated in Figure 3, for each dataset IBP quickly reaches a stable approximation of a barycenter. MAM reaches a comparable point shortly after (within 5 to 10 seconds) but then keeps converging toward a sharper solution (closer to the exact solution, as quantified in section 6.2.2). It is clear that the more CPUs MAM uses, the better. We have limited the study to a dozen CPUs so that the reader can reproduce the experiments. While IBP is not well suited to CPU parallelization [4, 21, 35], MAM offers a clear advantage depending on the hardware at hand.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(n_{ellipses}\) & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline Density (\%) & 29.0 & 51.4 & 64.3 & 70.9 & 73.5 & 75.0 \\ \hline \end{tabular} \end{table} Table 1: Mean density as a function of the number of nested ellipses. The density has been calculated by averaging the ratio of non-null pixels per image over the 100 generated pictures of each dataset sharing the same number \(n_{ellipses}\) of nested ellipses.

Figure 2: Evolution of the number of iterations per minute depending on the density or the number of distributions.

#### 6.2.2 Quantitative comparison

Next we benchmark MAM, randomized MAM, and IBP on a dataset with 60 images per digit of the MNIST database [29], where every digit is a normalized 40 \(\times\) 40 pixel image. First, all three methods have their hyperparameters tuned thanks to a sensitivity study, as explained in section 6.2.1. Then, at every time step, an approximation of the computed barycenter is stored in order to compute the error \(\bar{W}_{2}^{2}(p^{k})-\bar{W}_{2}^{2}(p_{exact}):=\sum_{m=1}^{M}\frac{1}{M}\mathtt{OT}(p^{k},q^{(m)})-\sum_{m=1}^{M}\frac{1}{M}\mathtt{OT}(p_{exact},q^{(m)})\), where \(M\) is the number of digit images in the dataset. The results are shown in Figure 4.

Figure 3: (top) For each digit, 36 out of the 100 scaled, translated and rotated images considered for each barycenter. (bottom) Barycenters after \(t=10,50,500,1000,2000\) seconds, where the left-hand side is the evolution of IBP's barycenter approximation, the middle panel is MAM's evolution using 10 CPUs, and the right-hand side is the exact solution computed by applying _Gurobi_ to the LP eq. (7).

All methods were implemented in _Python_ using an _MPI_-based parallelization. Note that IBP is adapted from the code of G. Peyre [21], MAM follows algorithm 1, and randomized MAM (remark 2) has only one distribution treated per processor. Figure 4 displays the evolution over time of the error measure \(\bar{W}_{2}^{2}(p^{k})-\bar{W}_{2}^{2}(p_{exact})\), with \(p_{exact}\) an exact barycenter obtained by solving the LP eq. (7) directly. IBP is almost 10 times faster per iteration. However, IBP computes an exact solution of an approximated problem that is tuned through the hyperparameter \(\lambda\) (see [4]). It is therefore natural to witness IBP converging to a solution of the approximated problem, but not to an exact WB, while MAM does converge to an exact solution. Hence there is a threshold beyond which the accuracy of MAM exceeds that of IBP: in our case, around 200 s for the computation with the greatest number of processors (see Figure 4). Such a threshold always exists; its location depends on the computational means (hardware). A small sketch of this error computation is given below.
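The error measure above only requires exact OT evaluations between the candidate barycenter and each data measure. Below is a minimal sketch using the POT library, whose `ot.emd2` solves the OT linear program exactly; the helper name and data layout are illustrative, and the paper's own MPI-based code at the link given earlier is the reference implementation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def barycentric_cost(p, support_p, measures):
    """(1/M) * sum_m OT(p, q^(m)): the quantity written \bar{W}_2^2(p) above.
    measures: list of (q_m, x_m) pairs, q_m the weights, x_m the support points."""
    costs = []
    for q_m, x_m in measures:
        C = ot.dist(support_p, x_m, metric="sqeuclidean")  # ground cost matrix
        costs.append(ot.emd2(p, q_m, C))                   # exact OT cost (LP)
    return float(np.mean(costs))

# Error plotted in Figure 4 (p_exact obtained from the LP eq. (7)):
# err = barycentric_cost(p_k, X, data) - barycentric_cost(p_exact, X, data)
```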
This quantitative study explains what was exemplified with the images of section 6.2.1: the accuracy of IBP is bounded by the choice of \(\lambda\), itself bounded by an overflow error, while MAM's hyperparameters only impact the convergence speed and the algorithm keeps improving towards an exact solution. For this dataset, the WB computed by IBP is within 2% accuracy and thus reasonably good. However, as shown in Table 1 of [35], one can choose other datasets where IBP's accuracy might be unsatisfactory. Furthermore, Figure 4 exemplifies an interesting asset of the randomized variants of MAM: in some configurations randomized MAM is more efficient than (deterministic) MAM, while in others the latter seems more effective. Note that the curve _MAM 1-random, 1 processor_ does not appear on the figure: it lies above the y-axis range due to its poor performance. Indeed, there is a trade-off between the time spent per iteration and the precision gained per iteration. For example, with 10 processors, each processor treats 6 measures in deterministic MAM but only one in randomized MAM. The time spent per iteration is therefore roughly six times shorter in the latter, which counterbalances the loss of accuracy per iteration. On the other hand, when using 20 processors, only 3 measures are treated by each processor and the trade-off is no longer worthwhile: the gain in time does not compensate for the loss in accuracy per iteration. One should apply the algorithm with care, since this trade-off conclusion is only heuristic and strongly depends on the dataset and hardware in use.

Figure 4: Evolution with respect to time of the difference between the Wasserstein barycentric distance of an approximation, \(\bar{W}_{2}^{2}(p^{k})\), and the Wasserstein barycentric distance of the exact solution, \(\bar{W}_{2}^{2}(p_{exact})\), given by the LP. The time step between two points is 30 seconds.

A sensitivity analysis is always advisable for choosing the most effective number of measures handled per processor when using randomized MAM instead of deterministic MAM.

#### 6.2.3 Influence of the support

This section echoes section 6.1 and studies the influence of the support size. To do so, two datasets have been tested with MAM and IBP. The first dataset was already used in section 6.2.2: 60 pictures of 3's taken from the classic MNIST database [29]. The second dataset is composed of the same 60 images, but each digit has been randomly translated and rotated as in section 6.2.1. Therefore, the union of the supports of the second dataset is larger than that of the first one, as illustrated in Figure 5. Figure 6 presents two graphs obtained just as in section 6.2.2, but displaying the evolution w.r.t. time of the relative error in percentage: \(\Delta W_{\%}:=\frac{\bar{W}_{2}^{2}(p^{k})-\bar{W}_{2}^{2}(p_{exact})}{\bar{W}_{2}^{2}(p_{exact})}\times 100\). Once more, the hyperparameters have been fully tuned. The tuned hyperparameter of the IBP method is smaller for the second dataset. Indeed, as stated in [35], the greater the support, the stronger the restrictions on \(\lambda\); and since the smaller the \(\lambda\), the further the approximated problem is from the exact one, growing differences between the two methods are to be expected in the following graphs. Being an exact method, MAM is insensitive to the support size.
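To make the overflow limitation of IBP concrete, here is a minimal numpy sketch, on an illustrative random cost matrix, of how the Gibbs kernel \(\exp(-\lambda C)\) used by Sinkhorn-type iterations degenerates as \(\lambda\) grows; the precise breaking point depends on the dataset, which is why \(\lambda=1700\) was the usable maximum for the 3's dataset above.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.random((100, 100))                  # illustrative cost matrix in [0, 1]

for lam in (10, 100, 1000, 2000):
    K = np.exp(-lam * C)                    # Gibbs kernel of entropic OT
    frac_zero = (K == 0.0).mean()           # entries underflowing to exactly 0.0
    print(f"lambda={lam:5d}  underflowed entries: {frac_zero:.1%}")

# Once entire rows of K underflow, the Sinkhorn scaling u = a / (K @ v)
# divides by zero: the "double-precision overflow" error mentioned above.
```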
The density of the dataset has little impact on the convergence time, as explained in section 6.1 and exemplified in Figure 6. Such visual results concerning IBP's initialization and parametrization have already been discussed in section 6.2.1; other qualitative results can be found in [24], where the authors show that properties of the distributions can be lost due to the entropy penalization in IBP.

### Comparison with B-ADMM

This subsection compares MAM with the B-ADMM algorithm of [31], using the dataset and Matlab implementation provided by the authors at [https://github.com/bobye/d2_kmeans](https://github.com/bobye/d2_kmeans). We omit IBP from this analysis because it has already been shown in [31, Table I] that IBP is outperformed by B-ADMM on this dataset. As in [31, Section IV], we consider \(M=1000\) discrete measures, each with a sparse finite support set obtained by clustering the pixel colors of images. The average number of support points is around 6, and the barycenter's number of (fixed) support points is \(R=60\). An exact WB can be computed by applying an LP solver to the extensive formulation eq. (7); its optimal value is \(712.7\), computed in \(10.6\) seconds by the Gurobi LP solver. We have coded MAM in Matlab to allow a fair comparison with the Matlab B-ADMM implementation provided at the above link. Since MAM and B-ADMM use different stopping tests, we set their stopping tolerances to zero and let the solvers stop at a maximum number of iterations.

Figure 5: Images on a \(40\times 40\) pixel grid, where red represents the pixels belonging to the union of the supports of the 60-image dataset: (left) for the standard MNIST, (right) for the randomly translated and rotated MNIST.

Table 2 below reports CPU times in seconds and the objective values \(\bar{W}_{2}^{2}(\tilde{p})\) attained by the (approximate) barycenters \(\tilde{p}\) computed by both solvers. The results show that, on this dataset, MAM and B-ADMM are comparable in CPU time, with MAM providing more precise results. In contrast with MAM, B-ADMM does not have (at this time) a convergence analysis.

### Unbalanced Wasserstein Barycenter

This section treats a particular example illustrating the interest of UWBs. The artificial dataset is composed of 50 images with resolution \(80\times 80\). Each image is divided into four squares: the top-left, bottom-left and bottom-right squares are randomly filled with double nested ellipses, while the top-right square is always empty, as exemplified in Figure 7.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Iterations} & \multicolumn{2}{c|}{Objective value} & \multicolumn{2}{c|}{Seconds} \\ \cline{2-5} & B-ADMM & MAM & B-ADMM & MAM \\ \hline 100 & 742.8 & 716.7 & 1.1 & 1.1 \\ \hline 200 & 725.9 & 714.1 & 2.4 & 2.2 \\ \hline 500 & 716.5 & 713.3 & 5.6 & 5.4 \\ \hline 1000 & 714.1 & 712.9 & 11.8 & 10.8 \\ \hline 1500 & 713.5 & 712.8 & 18.9 & 16.2 \\ \hline 2000 & 713.3 & 712.8 & 25.1 & 21.6 \\ \hline 2500 & 713.2 & 712.8 & 31.0 & 27.1 \\ \hline 3000 & 713.1 & 712.7 & 39.8 & 32.4 \\ \hline \end{tabular} \end{table} Table 2: MAM vs B-ADMM. The considered implementation of B-ADMM is the one provided by its designers, with unchanged parameters (except the stopping tolerances, set to zero, and the maximum number of iterations). Both algorithms use the same initial point. The dataset is the one considered in [31, Section IV]. The optimal value of the WB for this dataset is 712.7, computed by Gurobi in 10.6 seconds.
Figure 6: Evolution of the percentage distance between the exact solution of the barycenter problem and the solutions computed by the IBP and MAM methods with 20 processors: (left) for the standard MNIST, (right) for the randomly translated and rotated MNIST.

In this example, every image is normalized so as to depict a probability measure, which lets us compare WB and UWB. With respect to eq. (10), one set of constraints is relaxed and the influence of the hyperparameter \(\gamma\) is studied. If \(\gamma\) is large enough (i.e., greater than \(\|\texttt{vec}(d)\|\approx 1000\), see proposition 1), the problem boils down to the standard WB problem, since the example deals with probability measures: the resulting UWB is indeed a WB. When \(\gamma\) decreases, the transportation costs take more importance than the distance to \(\mathcal{B}\), which is more and more relaxed. Therefore, as illustrated in Figure 8, the resulting UWB splits the image into four parts, giving visual meaning to the barycenter. In the same vein, Figure 9 provides an illustrative application of MAM for computing a UWB on another dataset.

Figure 8: UWB computed with MAM for different values of \(\gamma\).

Figure 7: Dataset composed of 50 pictures with nested ellipses randomly positioned in the top-left, bottom-left and bottom-right corners.

Figure 9: (left) Dataset of letters M-A-M built with the same logic as Figure 7, with 50 images; (right) the resulting UWB with \(\gamma=0.01\), computed in 200 seconds using 10 processors.

## References

* [1] Martial Agueh and Guillaume Carlier. Barycenters in the Wasserstein space. _SIAM Journal on Mathematical Analysis_, 43(2):904-924, 2011.
* [2] Gilles Bareilles, Yassine Laguel, Dmitry Grishchenko, Franck Iutzeler, and Jerome Malick. Randomized progressive hedging methods for multi-stage stochastic programming. _Annals of Operations Research_, 295(2):535-560, Sep 2020.
* [3] Heinz H. Bauschke and Patrick L. Combettes. _Convex Analysis and Monotone Operator Theory in Hilbert Spaces_. Springer International Publishing, 2nd edition, 2017.
* [4] Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyre. Iterative Bregman projections for regularized transportation problems. _SIAM Journal on Scientific Computing_, 37(2):1111-1138, 2015.
* [5] Dimitri P. Bertsekas. _Convex Optimization Algorithms_. Athena Scientific, 1st edition, 2015.
* [6] Steffen Borgwardt. An LP-based, strongly-polynomial 2-approximation algorithm for sparse Wasserstein barycenters. _Operational Research_, 22(2):1511-1551, Apr 2022.
* [7] Guillaume Carlier, Adam Oberman, and Edouard Oudet. Numerical methods for matching for teams and Wasserstein barycenters. _ESAIM: Mathematical Modelling and Numerical Analysis_, 49(6):1621-1642, Nov 2015.
* [8] Laurent Condat. Fast projection onto the simplex and the \(l_{1}\) ball. _Mathematical Programming_, 158(1):575-585, Jul 2016.
* [9] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, _Advances in Neural Information Processing Systems_, volume 26. Curran Associates, Inc., 2013.
* [10] Marco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. In Eric P. Xing and Tony Jebara, editors, _Proceedings of the 31st International Conference on Machine Learning_, volume 32 of _Proceedings of Machine Learning Research_, pages 685-693, Bejing, China, 22-24 Jun 2014. PMLR.
* [11] Marco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. _International Conference on Machine Learning_, 32(2):685-693, 2014.
* [12] Marco Cuturi and Gabriel Peyre. A smoothed dual approach for variational Wasserstein problems. _SIAM Journal on Imaging Sciences_, 9(1):320-343, 2016.
* [13] Welington de Oliveira, Claudia Sagastizabal, Debora Dias Jardim Penna, Maria Elvira Pineiro Maceira, and Jorge Machado Damazio. Optimal scenario tree reduction for stochastic streamflows in power generation planning problems. _Optimization Methods and Software_, 25(6):917-936, 2010.
* [14] Jim Douglas and H. H. Rachford. On the numerical solution of heat conduction problems in two and three space variables. _Transactions of the American Mathematical Society_, 82(2):421-439, 1956.
* [15] Jonathan Eckstein and Dimitri P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. _Mathematical Programming_, 55(1-3):293-318, Apr 1992.
* [16] Anqi Fu, Junzi Zhang, and Stephen Boyd. Anderson accelerated Douglas-Rachford splitting. _SIAM Journal on Scientific Computing_, 42(6):A3560-A3583, Jan 2020.
* [17] A. Gramfort, G. Peyre, and M. Cuturi. Fast optimal transport averaging of neuroimaging data. In Sebastien Ourselin, Daniel C. Alexander, Carl-Fredrik Westin, and M. Jorge Cardoso, editors, _Information Processing in Medical Imaging_, pages 261-272, Cham, 2015. Springer International Publishing.
* [18] Guillaume Tartavel, Gabriel Peyre, and Yann Gousseau. Wasserstein loss for image synthesis and restoration. _SIAM Journal on Imaging Sciences_, 9(4):1726-1755, 2016.
* [19] Florian Heinemann, Marcel Klatt, and Axel Munk. Kantorovich-Rubinstein distance and barycenter for finitely supported measures: Foundations and algorithms. _Applied Mathematics & Optimization_, 87(1):4, Nov 2022.
* [20] Franck Iutzeler, Pascal Bianchi, Philippe Ciblat, and Walid Hachem. Asynchronous distributed optimization using a randomized alternating direction method of multipliers. In _52nd IEEE Conference on Decision and Control_. IEEE, Dec 2013.
* [21] Gabriel Peyre. Bregmanot, 2014.
* [22] Gabriel Peyre and Marco Cuturi. Computational optimal transport: With applications to data science. _Foundations and Trends in Machine Learning_, 11(5-6):355-607, 2019.
* [23] Georg Ch. Pflug and Alois Pichler. _Multistage Stochastic Optimization_. Springer International Publishing, 2014.
* [24] Giovanni Puccetti, Ludger Ruschendorf, and Steven Vanduffel. On the computation of Wasserstein barycenters. _Journal of Multivariate Analysis_, 176(104581), 2020.
* [25] Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. The earth mover's distance as a metric for image retrieval. _International Journal of Computer Vision_, 40(2):99-121, Nov 2000.
* [26] Thibault Sejourne, Gabriel Peyre, and Francois-Xavier Vialard. Unbalanced optimal transport, from theory to numerics. _Handbook of Numerical Analysis_, 24:407-471, 2023.
* [27] Dror Simon and Aviad Aberdam. Barycenters of natural images - constrained Wasserstein barycenters for image morphing. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 7910-7919, 2020.
* [28] Richard Sinkhorn. Diagonal equivalence to matrices with prescribed row and column sums. II. _Proceedings of the American Mathematical Society_, 45(2):195-198, 1974.
* [29] Tijmen Tieleman. affNIST, 2013.
* [30] Cedric Villani. _Optimal Transport: Old and New_, volume 338. Springer Verlag, 2009.
* [31] Huahua Wang and Arindam Banerjee. Bregman alternating direction method of multipliers. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, _Advances in Neural Information Processing Systems_, volume 27. Curran Associates, Inc., 2014.
* [32] Jean-Paul Watson and David L. Woodruff. Progressive hedging innovations for a class of stochastic mixed-integer resource allocation problems. _Computational Management Science_, 8(4):355-370, Jul 2010.
* [33] Zheng Xu, Mario Figueiredo, and Tom Goldstein. Adaptive ADMM with spectral penalty parameter selection. In Aarti Singh and Jerry Zhu, editors, _Proceedings of the 20th International Conference on Artificial Intelligence and Statistics_, volume 54 of _Proceedings of Machine Learning Research_, pages 718-727. PMLR, 20-22 Apr 2017.
* [34] Jianbo Ye and Jia Li. Scaling up discrete distribution clustering using ADMM. In _2014 IEEE International Conference on Image Processing (ICIP)_, pages 5267-5271, 2014.
* [35] Jianbo Ye, Panruo Wu, James Z. Wang, and Jia Li. Fast discrete distribution clustering using Wasserstein barycenter with sparse support. _IEEE Transactions on Signal Processing_, 65:2317-2332, May 2017.
Wasserstein barycenters (WB) are an important tool for summarizing sets of probability measures, used in fields such as applied probability, clustering, and image processing. When the probability supports are finite and fixed, computing a WB amounts to a linear optimization problem. The dimension of this problem's solutions often exceeds the capabilities of standard solvers, so the WB problem is frequently replaced by a simpler nonlinear optimization model built on an entropic regularization function, which allows approximate WBs to be computed efficiently with specialized algorithms. In contrast to such widespread inexact schemes, we propose an exact approach that applies the Douglas-Rachford splitting method directly to the WB linear optimization problem. The algorithm has an interesting interpretation based on averaging marginals: it processes a series of simple (exact) projections
2309.13563
Multivariate Prototype Representation for Domain-Generalized Incremental Learning
Deep learning models suffer from catastrophic forgetting when fine-tuned with samples of new classes. This issue becomes even more pronounced when there is a domain shift between training and testing data. In this paper, we study the critical and less explored Domain-Generalized Class-Incremental Learning (DGCIL) setting. We design a DGCIL approach that remembers old classes, adapts to new classes, and can reliably classify objects from unseen domains. Specifically, our loss formulation maintains classification boundaries while suppressing the domain-specific information of each class. With no old exemplars stored, we use knowledge distillation and estimate the drift of old-class prototypes as incremental training advances. Our prototype representations are based on multivariate Normal distributions whose means and covariances are constantly adapted to the changing model features, so that old classes remain well represented despite the feature-space drift. For old classes, we sample pseudo-features from the adapted Normal distributions with the help of the Cholesky decomposition. In contrast to previous pseudo-feature sampling strategies that rely solely on average mean prototypes, our method excels at capturing varying semantic information. Experiments on several benchmarks validate our claims.
Can Peng, Piotr Koniusz, Kaiyu Guo, Brian C. Lovell, Peyman Moghadam
2023-09-24T06:42:04
http://arxiv.org/abs/2309.13563v1
# Multivariate Prototype Representation for Domain-Generalized Incremental Learning

###### Abstract

Deep learning models suffer from catastrophic forgetting when fine-tuned with samples of new classes. This issue becomes even more pronounced when there is a domain shift between training and testing data. In this paper, we study the critical and less explored Domain-Generalized Class-Incremental Learning (DGCIL) setting. We design a DGCIL approach that remembers old classes, adapts to new classes, and can reliably classify objects from unseen domains. Specifically, our loss formulation maintains classification boundaries while suppressing the domain-specific information of each class. With no old exemplars stored, we use knowledge distillation and estimate the drift of old-class prototypes as incremental training advances. Our prototype representations are based on multivariate Normal distributions whose means and covariances are constantly adapted to the changing model features, so that old classes remain well represented despite the feature-space drift. For old classes, we sample pseudo-features from the adapted Normal distributions with the help of the Cholesky decomposition. In contrast to previous pseudo-feature sampling strategies that rely solely on average mean prototypes, our method excels at capturing varying semantic information. Experiments on several benchmarks validate our claims.

## 1 Introduction

Despite the progress in deep learning, many algorithms fail when the data distribution changes. Objects of interest from new classes can continually appear while, simultaneously, the old class data might be inaccessible due to data storage limitations, privacy, or licensing issues. Under such conditions, directly fine-tuning a deep learning model with the new class data makes it lose performance on previously-learnt tasks: the catastrophic forgetting problem. Also, when trained models are deployed in new data domains stemming from diverse data sources, they often fail to generalize to out-of-distribution cases. Incremental Learning (IL) addresses the problem of catastrophic forgetting posed by incoming data whose distribution changes over time. Domain Generalization (DG) tackles the out-of-distribution problem caused by the distribution shift between training and testing data. Unfortunately, IL and DG are usually considered separate problems. Yet, for autonomous vehicles or robots, new classes of objects such as new road signs, new means of transport, and various surrounding buildings continually appear and need to be learnt without forgetting the already-learnt visual concepts. The system also has to perform well under conditions that differ from the training domain, including variations in lighting, weather, infrastructure designs, _etc._ Domain-Generalized Class-Incremental Learning (DGCIL) addresses these two scenarios, which require simultaneous handling of catastrophic forgetting and out-of-distribution recognition. DGCIL approaches have to achieve good performance under novel test domains and a significant intake of new visual categories. Figure 1 illustrates the DGCIL paradigm. During training, for each incremental step, with only new class data available, the model needs to learn representations of new classes as well as maintain representations of old classes. At the inference stage, the model is required to perform well on all the classes learnt so far in novel domains (not seen in training).
Compared with IL and DG, DGCIL is more challenging and less explored.

Figure 1: Domain-Generalized Class-Incremental Learning. Samples of new classes are provided from a fixed set of training domains for each incremental learning step \(t\). During testing, the model has to perform well on all the categories learnt so far under an unseen testing domain. Each domain becomes a testing domain once, _i.e._, four separate runs are needed, over which the average accuracy is calculated.

While some efforts have been made to explore continual domain adaptation, which evaluates the model on continually arriving classes and domains (Buzzega et al., 2020; Knights et al., 2022; Volpi et al., 2021; Xie et al., 2022; Liu et al., 2023), DGCIL simultaneously targets out-of-domain generalization. In contrast, the problem of continual domain adaptation focuses on adaptation under training-domain shift. To the best of our knowledge, MSL_MOV (Simon et al., 2022) is the only work that targets a similar task. However, in their setting, old class exemplars from each training domain need to be stored, which might cause data privacy or storage issues, _i.e._, storing exemplars may simply not be an option. In contrast, in this work, we design an exemplar-free DGCIL model. As we are interested in an exemplar-free setting, we remove the exemplars from the experimental setting. For a baseline, we modify the only existing DGCIL model, MSL_MOV, to work with prototypes instead of exemplars. Moreover, we notice that the evaluation protocol introduced by MSL_MOV is tied to their specific pipeline design and may be biased towards old classes. Specifically, within each incremental step, MSL_MOV performs three-stage training (training on each domain progressively) and reports the average results over the three stages. The evaluation results of such a training procedure may not be universal, as they are biased towards first- and early-stage results: _e.g._, the first stage of a new incremental step may be biased towards good performance on old classes, whereas the last stage may be biased towards new classes. Also, the model after the first stage may be more aligned with the test domain than the model after all three stages. Thus, averaging the results of the three stages may make the results on old classes look better than a one-off evaluation at the end of the incremental step. We therefore propose a more realistic evaluation protocol: during each incremental step, we perform a single evaluation once the given incremental step has learnt from all training domains (all stages are completed). We build our framework and method on the publicly available domain generalization benchmark DomainBed (Gulrajani and Lopez-Paz, 2021), and we reproduce MSL_MOV in our evaluation setting for fair comparisons with our approach. Figure 4 illustrates the difference between the two evaluation settings. As we adopt the challenging exemplar-free DGCIL setting, we propose _TRIplet loss with Pseudo old-class feature Sampling_ (TRIPS). Figure 2 illustrates our model. During training, we adopt a novel variant of the triplet loss that focuses on distinguishing semantic information while suppressing domain information; the goal of this loss is to improve the generalization of the model towards unseen test distributions. In addition, we apply knowledge distillation to maintain previously-learnt knowledge and prevent catastrophic forgetting.
Moreover, old classes are described by class prototype representations based on multivariate Normal distributions. As incremental sessions progress, the drift of each prototype's mean and covariance matrix is estimated, and the prototype representations are updated accordingly before we sample old-class pseudo-features via the Cholesky decomposition. Such pseudo-features help preserve the knowledge of old classes from past sessions and maintain the class separation enforced by TRIPS, facilitating balanced performance on old and new class concepts. Our main contributions are summarised as follows: 1. With data storage, privacy, and licensing issues in mind, we propose an exemplar-free approach for DGCIL based on _TRIplet loss with Pseudo old-class feature Sampling_. The goal of TRIPS is to keep features of different classes apart while suppressing domain-related feature information, which facilitates domain generalization. 2. As TRIPS has to handle old and new classes incrementally, we introduce class-wise prototype representations based on multivariate Normal distributions. We estimate their drift (of each mean/covariance) as the backbone is updated along incremental sessions, and we then sample pseudo-features for old classes from these distributions. 3. Details of model selection (validation procedure), data splitting, and evaluation strategies are important for reproducibility. As exemplar-free DGCIL models do not exist, we provide a comprehensive task setting for exemplar-free DGCIL, including (i) a validation procedure, (ii) an evaluation setting reflecting the performance of each incremental step, and (iii) overall and balanced performance measures based on the domain-wise class-wise average accuracy and the harmonic accuracy, respectively. The code with the validation/evaluation protocols will be released.

## 2 Related Works

In this section, we discuss related works in Incremental Learning (IL), Domain Generalization (DG), and Domain-Generalized Class-IL (DGCIL).

Fig. 2: The TRIPS pipeline. During incremental step \(t\), we pass new class samples through the old (frozen) and the current models. A classifier loss \(L_{t}^{\text{class}}\) and a distillation loss \(L_{t}^{\text{dist}}\) are applied. The mean and covariance matrix of each class prototype are updated according to the feature drift between the two models. Subsequently, pseudo-samples are drawn from the updated distributions and passed to the pseudo-classifier loss \(L_{t}^{\text{pseudo}}\). Finally, our TRIPS loss pushes feature vectors of different classes apart (black dashed arrows) while suppressing domain-specific information (grey arrows facing inwards).

**Incremental Learning Settings.** IL requires the model to continually learn new tasks from a sequential data stream without forgetting the past. IL can be categorized into three subtasks: Task-IL, Class-IL, and General IL. The main difference between them is whether task boundaries are accessible during training and testing. Task-IL, the simplest IL scenario, allows access to task boundaries during both training and testing (Li and Hoiem, 2017). Different from Task-IL, the Class-IL scenario only provides task boundaries during training (Kirkpatrick et al., 2017; Chaudhry et al., 2018; Aljundi et al., 2018; Rebuffi et al., 2017; Hou et al., 2019); during testing, a Class-IL model is required to have a unified classifier that can classify all classes learnt so far. General IL, a newly emerged IL scenario, is a more practical setting than Task-IL and Class-IL.
In this setting, the task boundaries are not accessible during either training or testing (Xie et al., 2022; Ji et al., 2022; Li et al., 2022). However, General IL does not model out-of-distribution scenarios such as novel test distributions. Some General-IL methods consider domain adaptation (Buzzega et al., 2020; Xie et al., 2022) by incrementally adding new classes/domains and testing on the classes/domains learnt so far; they do not consider unseen domains during testing. In contrast, we study Domain-Generalized IL to preserve old class knowledge and continually learn new class concepts, while generalizing well to unseen test distributions.

**Solutions to Incremental Learning.** Current IL methods are often based on regularization, dynamic architectures, or old exemplar replay. Advanced regularization terms impose constraints on the network update and mitigate catastrophic forgetting. Some models regularize parameter updates according to the parameters' importance w.r.t. previously-learnt tasks (Kirkpatrick et al., 2017; Chaudhry et al., 2018; Aljundi et al., 2018). Other models maintain the past feature distribution by knowledge distillation on logits (Li and Hoiem, 2017; Rebuffi et al., 2017; Hou et al., 2019) or on intermediate features (Simon et al., 2021; Roy et al., 2023). Dynamic architectures (Yoon et al., 2018; Douillard et al., 2022) assign each new task a task-specific subnetwork to balance the stability-plasticity trade-off. Old exemplar replay retains some old class samples for experience replay during new task training, to refresh the model's memory (Rebuffi et al., 2017; Hou et al., 2019; Xie et al., 2022; Li et al., 2022; Ji et al., 2022).

**Domain Generalization.** DG approaches are required to make good predictions on novel test domains. Many DG models reduce domain gaps in the latent space and obtain easy-to-transfer model parameters by meta-learning (Dou et al., 2019), data augmentation (Zhang et al., 2018; Zhou et al., 2021), or capturing causal relations (Arjovsky et al., 2019; Krueger et al., 2021). Although a myriad of domain generalization algorithms have been proposed, they use various experimental settings, such as different datasets and model selection criteria (Gulrajani and Lopez-Paz, 2021). Gulrajani and Lopez-Paz (2021) propose a benchmark testbed called DomainBed and argue that, when equipped with modern neural network architectures and data augmentation techniques, empirical risk minimization (ERM) achieves state-of-the-art performance.

**Domain-Generalized Class-IL (DGCIL).** In this paper, we study Domain-Generalized Class-Incremental Learning, bridging the gap between the Class-IL (CIL) and DG problems. DGCIL has to solve both catastrophic forgetting due to the semantic shift and domain generalization beyond the training domains. Compared with CIL and DG, DGCIL is more challenging and less explored. MSL_MOV (Simon et al., 2022) is the only related work that explores a similar problem, using Mahalanobis metrics for classification in unseen domains. MSL_MOV also uses old class exemplars and exponential-moving-average knowledge distillation to overcome catastrophic forgetting. However, in many situations old class exemplars should not be used (privacy, storage, _etc._), and the number of parameters of MSL_MOV grows linearly. In this paper, we provide a comprehensive task setting for DGCIL and propose an exemplar-free DGCIL method.

## 3 What is the DGCIL About?
A DGCIL model (similar to CIL) learns new tasks continually in several steps but the model is required to perform well on unseen test domains (domain-invariant model). The DGCIL algorithms have two goals: (i) learning new class concepts while maintaining knowledge of old class concepts, and (ii) learning semantic invariance from training data with domain-specific information. The latter property helps the model generalize to out-of-distribution cases during testing. Figure 1 describes the training and testing of DGCIL. Specifically, a DGCIL task has \(T+1\) steps each associated with a sub-task \(\mathcal{T}_{t}\) where \(t\in\{0,1,\ldots,T\}\). Let \(\mathcal{T}_{0}\) denote the base task and \(\{\mathcal{T}_{t}\colon t\geq 1\}\) be incremental tasks. The \(t\)-th task has to learn \(|\mathcal{C}_{t}|\) new class concepts with training samples of categories from set \(\mathcal{C}_{t}\). Considering the possible memory limitation in realistic conditions, we opt for an exemplar-free setting. Training data from different tasks has no overlap in class labels or samples, thus \(\mathcal{C}_{t}\cap\mathcal{C}_{t^{\prime}}=\varnothing\) if \(t\neq t^{\prime}\). During each incremental learning task, training samples from \(Z\) different domain distributions are provided by sets \(\mathcal{D}_{z}\) where \(z\in\{1,\ldots,Z\}\). The total training data is represented as \(\mathcal{D}_{\text{train}}=\mathcal{D}_{1}\cup\mathcal{D}_{2}\ldots\cup \mathcal{D}_{Z}\). During testing, the model will be evaluated on all classes learnt up to \(t\) in an unseen domain \(\mathcal{D}_{\text{test}}=\mathcal{D}_{Z+1}\). Thus, \(\mathcal{C}_{\text{train}}=\mathcal{C}_{t}\) and \(\mathcal{C}_{\text{test}}=\mathcal{C}_{0}\cup\mathcal{C}_{1}\ldots\cup \mathcal{C}_{t}\), while domains \(\mathcal{Z}_{\text{train}}\cap\mathcal{Z}_{\text{test}}=\varnothing\). ## 4 Proposed Approach In this section, we outline our algorithm, which consists of: (i) triplet loss which teaches the model to distinguish semantic information across different classes while suppressing domain-specific information, (ii) cross-entropy knowledge distillation which prevents catastrophic forgetting of past knowledge, (iii) prototype representations based on the multivariate Normal distribution with a drift mechanism that adapts prototypes from previous sessions to the feature space of the current session, and (iv) sampling mechanism via Cholesky decomposition. ### Learning New Classes Our approach consists of three components: (i) learning new knowledge, (ii) preserving the former knowledge, and (iii) handling out-of-distribution cases. For the first component, we adopt the commonly used strategy based on the cross-entropy loss using the ground truth labels and softmax-normalized model output over all classes learnt so far. Let a set of sample-label pairs \((\mathbf{x},y)\) for the current session \(t\) be stored in batch \(\mathcal{B}\). Let the set of classes observed so far for the session/step \(t\) where \(t\leq T\) be denoted as \(\mathcal{C}^{\prime}=\mathcal{C}_{0}\cup\mathcal{C}_{1}\ldots\cup\mathcal{C}_{t}\). Then we have the new knowledge learning for the step \(t\): \[L_{t}^{\text{class}}=-\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x},y)\in\mathcal{ B}}\!\!\log\!\left(\!\frac{\exp\left(\theta_{\gamma\gamma}^{\top}f_{t}( \mathbf{x})\right)}{\sum\limits_{c\in\mathcal{C}^{\prime}}\exp\left(\theta_{ \kappa c}^{\top}f_{t}(\mathbf{x})\right)}\right). 
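To make the task protocol concrete, the following minimal Python sketch (names and split sizes are illustrative, not from the authors' code) builds disjoint class sets for the base and incremental tasks and enumerates the leave-one-domain-out runs just described; the PACS numbers match the 2-step setting used later.

```python
import random

def dgcil_splits(classes, domains, num_steps, base_size, inc_size, seed=0):
    """Partition `classes` into a base task and `num_steps` incremental tasks
    (disjoint class sets), and enumerate leave-one-domain-out runs."""
    rng = random.Random(seed)
    classes = classes[:]
    rng.shuffle(classes)
    tasks = [classes[:base_size]]
    for t in range(num_steps):
        start = base_size + t * inc_size
        tasks.append(classes[start:start + inc_size])
    # One run per held-out (unseen) test domain.
    runs = [{"train_domains": [d for d in domains if d != test_d],
             "test_domain": test_d} for test_d in domains]
    return tasks, runs

# PACS: 7 classes, 2-step incremental with +2 classes per step (base task of 3).
tasks, runs = dgcil_splits(list(range(7)), ["Art", "Cartoon", "Photo", "Sketch"],
                           num_steps=2, base_size=3, inc_size=2)
```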
## 4 Proposed Approach

In this section, we outline our algorithm, which consists of: (i) a triplet loss that teaches the model to distinguish semantic information across different classes while suppressing domain-specific information, (ii) cross-entropy knowledge distillation that prevents catastrophic forgetting of past knowledge, (iii) prototype representations based on the multivariate Normal distribution, with a drift mechanism that adapts prototypes from previous sessions to the feature space of the current session, and (iv) a sampling mechanism via the Cholesky decomposition.

### Learning New Classes

Our approach consists of three components: (i) learning new knowledge, (ii) preserving the former knowledge, and (iii) handling out-of-distribution cases. For the first component, we adopt the commonly used strategy based on the cross-entropy loss between the ground-truth labels and the softmax-normalized model output over all classes learnt so far. Let the sample-label pairs \((\mathbf{x},y)\) of the current session \(t\) be stored in a batch \(\mathcal{B}\), and let the set of classes observed up to session/step \(t\leq T\) be denoted \(\mathcal{C}^{\prime}=\mathcal{C}_{0}\cup\mathcal{C}_{1}\cup\ldots\cup\mathcal{C}_{t}\). The new-knowledge learning loss for step \(t\) is then:

\[L_{t}^{\text{class}}=-\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x},y)\in\mathcal{B}}\log\left(\frac{\exp\left(\theta_{t,y}^{\top}f_{t}(\mathbf{x})\right)}{\sum_{c\in\mathcal{C}^{\prime}}\exp\left(\theta_{t,c}^{\top}f_{t}(\mathbf{x})\right)}\right). \tag{1}\]

Notice that while \(\mathcal{B}\) only contains samples of classes in set \(\mathcal{C}_{t}\), the denominator of Eq. (1) runs over all classes observed in sessions/steps \(0,\ldots,t\). Function \(f_{t}(\cdot)\) denotes our feature extractor trained in session \(t\); its parameters are initialized by copying the parameters of \(f_{t-1}(\cdot)\). Moreover, \(\theta_{t,y}\) denotes the linear projection of session \(t\) for class \(y\in\mathcal{C}^{\prime}\) (essentially an FC layer). The linear projection parameters \(\theta_{t,y^{\prime}}\) for \(y^{\prime}\in\mathcal{C}^{\prime\prime}\), where \(\mathcal{C}^{\prime\prime}=\mathcal{C}^{\prime}\setminus\mathcal{C}_{t}\), are initialized by copying \(\theta_{t^{\prime},y^{\prime}}\) where \(t^{\prime}=t-1\). In practice, we also use bias terms \(b_{t,y}\), _i.e._, the linear projection is \(\theta_{t,y}^{\top}f_{t}(\mathbf{x})+b_{t,y}\), but we skip \(b_{t,y}\) for brevity.

### Knowledge Distillation

As DGCIL approaches have to retain the knowledge of previous sessions and overcome catastrophic forgetting (Hinton et al., 2015), our model adopts simple knowledge distillation1 to transfer the responses of feature extractor \(f_{t-1}(\cdot)\) to \(f_{t}(\cdot)\) and of linear projection \(\theta_{t-1}\) to \(\theta_{t}\) by cross-entropy. The samples of new classes are passed through the frozen \(f_{t-1}(\cdot)\) and \(\theta_{t-1}\) and distilled, through the lens of the old classes in \(\mathcal{C}^{\prime\prime}=\mathcal{C}_{0}\cup\mathcal{C}_{1}\cup\ldots\cup\mathcal{C}_{t-1}\) representing the past sessions \(0,\ldots,t-1\), into \(f_{t}(\cdot)\) and \(\theta_{t}\). We define the distillation loss for step \(t\) as:

Footnote 1: Kindly note we do not claim the methodology of Sections 4.1 and 4.2 _per se_ as contributions. These are prerequisites required by our model.

\[L_{t}^{\text{dist}}=-\frac{1}{|\mathcal{B}|}\sum_{\mathbf{x}\in\mathcal{B}}\ \sum_{c\in\mathcal{C}^{\prime\prime}}\pi_{c}^{t-1}(\mathbf{x})\log\left(\pi_{c}^{t}(\mathbf{x})\right) \tag{2}\]

\[\text{where}\quad\pi_{c}^{t}(\mathbf{x})=\frac{\exp\left(\theta_{t,c}^{\top}f_{t}(\mathbf{x})\right)}{\sum_{c^{\prime}\in\mathcal{C}^{\prime\prime}}\exp\left(\theta_{t,c^{\prime}}^{\top}f_{t}(\mathbf{x})\right)}. \tag{3}\]

In Eqs. (2) and (3), \(\boldsymbol{\pi}^{t}\) and \(\boldsymbol{\pi}^{t-1}\) are the probability outputs of the current model (\(t\)) and the previous model (\(t-1\)) over the classes of the past sessions (set \(\mathcal{C}^{\prime\prime}\)) when passing samples \((\mathbf{x},y)\in\mathcal{B}\) of the current session \(t\). Although batch \(\mathcal{B}\) contains pairs \((\mathbf{x},y)\), we simply enumerate over data samples and thus slightly abuse our notation by writing \(\mathbf{x}\in\mathcal{B}\). We also use the bias terms but skip them in the equations for clarity.
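A minimal PyTorch sketch of the distillation term of Eqs. (2)-(3); it assumes, purely for illustration, that the old-class logits occupy the first \(|\mathcal{C}^{\prime\prime}|\) columns of the classifier output.

```python
import torch
import torch.nn.functional as F

def distillation_loss(feat_new, feat_old, head_new, head_old, n_old):
    """Cross-entropy distillation: match the current model's distribution over
    the old classes to that of the frozen previous model (Eqs. (2)-(3)).
    head_new / head_old: linear classifiers; only the first n_old logits are used."""
    with torch.no_grad():
        p_old = F.softmax(head_old(feat_old)[:, :n_old], dim=1)      # pi^{t-1}
    log_p_new = F.log_softmax(head_new(feat_new)[:, :n_old], dim=1)  # log pi^t
    return -(p_old * log_p_new).sum(dim=1).mean()
```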
### The TRIPS Loss

As DGCIL approaches have to perform well during testing on an unseen domain (out-of-distribution conditions), during training we want our model to capture the semantic information while suppressing the domain-specific information. To this end, we adapt our TRIPS formulation from the triplet loss2.

Footnote 2: Kindly note we do not claim the standard triplet loss as our contribution. Our contribution is the bespoke design that separates classes while discarding domain-specific information.

Specifically, we regard encoded feature vectors \(f_{t}(\mathbf{x})\) and \(f_{t}(\mathbf{x}^{\prime})\) of samples \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) with the same class labels \(y(\mathbf{x})=y(\mathbf{x}^{\prime})\) but different domain labels \(z(\mathbf{x})\neq z(\mathbf{x}^{\prime})\) as positive pairs, indicated by \(\gamma_{\mathbf{x},\mathbf{x}^{\prime}}^{+}\). Moreover, we regard feature vectors \(f_{t}(\mathbf{x})\) and \(f_{t}(\mathbf{x}^{\prime\prime})\) with different class labels \(y(\mathbf{x})\neq y(\mathbf{x}^{\prime\prime})\) but the same domain labels \(z(\mathbf{x})=z(\mathbf{x}^{\prime\prime})\) as negative pairs, indicated by \(\gamma_{\mathbf{x},\mathbf{x}^{\prime\prime}}^{-}\). To help the model learn to classify class concepts instead of domain concepts, the distance between positive pairs should be smaller than the distance between negative pairs. We employ the squared Euclidean distance \(d(\mathbf{x},\mathbf{x}^{\prime})=\|f_{t}(\mathbf{x})-f_{t}(\mathbf{x}^{\prime})\|_{2}^{2}\) as the pairwise distance. Formally, the positive- and negative-pair indicators are defined as:

\[\gamma_{\mathbf{x},\mathbf{x}^{\prime}}^{+}=\begin{cases}1&\text{if }y(\mathbf{x})=y(\mathbf{x}^{\prime})\ \wedge\ z(\mathbf{x})\neq z(\mathbf{x}^{\prime}),\\ 0&\text{otherwise,}\end{cases} \tag{4}\]

\[\gamma_{\mathbf{x},\mathbf{x}^{\prime}}^{-}=\begin{cases}1&\text{if }y(\mathbf{x})\neq y(\mathbf{x}^{\prime})\ \wedge\ z(\mathbf{x})=z(\mathbf{x}^{\prime}),\\ 0&\text{otherwise.}\end{cases} \tag{5}\]

Subsequently, we define our TRIPS loss as follows:

\[L_{t}^{\text{trips,base}}=\frac{1}{|\mathcal{B}|}\sum_{\mathbf{x}\in\mathcal{B}}\Big{[}\max_{\mathbf{x}^{\prime}\in\mathcal{B}\setminus\{\mathbf{x}\}}\gamma_{\mathbf{x},\mathbf{x}^{\prime}}^{+}\cdot d(\mathbf{x},\mathbf{x}^{\prime})-\min_{\mathbf{x}^{\prime\prime}\in\mathcal{B}\setminus\{\mathbf{x}\}}\gamma_{\mathbf{x},\mathbf{x}^{\prime\prime}}^{-}\cdot d(\mathbf{x},\mathbf{x}^{\prime\prime})+m\Big{]}_{+}, \tag{6}\]

where \([\phi]_{+}\equiv\max(\phi,0)\) is a ReLU realizing the triplet loss with margin \(m\) (set to \(0\) in our experiments). Although batch \(\mathcal{B}\) contains pairs \((\mathbf{x},y)\), we enumerate only over the data samples and thus slightly abuse our notation by writing \(\mathbf{x}\in\mathcal{B}\).
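A minimal PyTorch sketch of Eqs. (4)-(6). For simplicity it averages over anchors that possess both a positive and a negative pair, rather than over the full batch; this and the names are our illustrative choices.

```python
import torch
import torch.nn.functional as F

def trips_base_loss(feats, labels, domains, margin=0.0):
    """Hardest positive = same class, different domain (gamma^+);
    hardest negative = different class, same domain (gamma^-);
    squared Euclidean pairwise distances."""
    d = torch.cdist(feats, feats, p=2).pow(2)              # d(x, x') for all pairs
    same_y = labels[:, None] == labels[None, :]
    same_z = domains[:, None] == domains[None, :]
    eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    pos = same_y & ~same_z & ~eye                          # gamma^+ pairs
    neg = ~same_y & same_z                                 # gamma^- pairs
    d_pos = d.masked_fill(~pos, float("-inf")).amax(dim=1) # hardest positive
    d_neg = d.masked_fill(~neg, float("inf")).amin(dim=1)  # hardest negative
    valid = pos.any(dim=1) & neg.any(dim=1)                # anchors with both pairs
    return F.relu(d_pos[valid] - d_neg[valid] + margin).mean()
```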
### Modeling Prototype Drift

With no access to old data, we utilize old-class prototype representations to prevent catastrophic forgetting. Related works on incremental learning commonly use per-class mean-based prototypes (Yu et al., 2020; Zhu et al., 2021). However, such prototypes are insufficient for capturing the intra-class variance resulting from samples of multiple training domains, as they only capture first-order statistics of the class-wise feature distribution. In our approach, we utilize both the mean vector and the covariance matrix of each class to enhance the representation of each old class's distribution. By incorporating the covariance around the mean, the multivariate Normal distribution offers a more accurate model of class-specific boundaries. Furthermore, compared to isotropic/univariate Normal distributions or mean vectors (first-order statistics), considering second-order statistics through multivariate Normal distributions provides a more realistic distribution model. Figure 3 illustrates the latent feature space with class-wise distributions. Given that our prototype representations are modeled as one multivariate Normal distribution per class, we can draw pseudo-samples from these distributions for the old classes. However, as the feature extractor changes over time, we need to account for the prototype drift in the evolving feature space. As the new classes are being learnt, the model is updated and the feature distribution changes; consequently, we cannot simply reuse the previously stored prototype representations of old classes without accounting for this drift. With no access to old class exemplars, we capture the drift of features through the lens of new-class samples passed through the current and the past feature extractors, \(f_{t}(\cdot)\) and \(f_{t-1}(\cdot)\): essentially, \(f_{t}(\mathbf{x})-f_{t-1}(\mathbf{x})\) captures the change. This modeling approach assumes that the semantics of old and new tasks overlap to some extent, _i.e._, the past and current feature spaces are not completely disjoint semantically. We define:

\[\Delta\boldsymbol{\phi}(\mathbf{x})=f_{t}(\mathbf{x})-f_{t-1}(\mathbf{x}), \tag{7}\]

\[w(\mathbf{x},\boldsymbol{\mu})=\exp\Big{(}-\frac{1}{2\sigma^{2}}\|f_{t-1}(\mathbf{x})-\boldsymbol{\mu}\|_{2}^{2}\Big{)}, \tag{8}\]

where \(\Delta\boldsymbol{\phi}(\mathbf{x})\) is the drift of an individual sample \(\mathbf{x}\), and \(w(\mathbf{x},\boldsymbol{\mu}_{c}^{t-1})\) captures the similarity of \(\mathbf{x}\) w.r.t. the prototype mean \(\boldsymbol{\mu}_{c}^{t-1}\) of step \(t-1\), given a bandwidth hyper-parameter \(\sigma>0\) (set to 0.5 in all our experiments). The drift of the prototype mean is defined as:

\[\Delta\boldsymbol{\mu}_{c}=\frac{\sum_{\mathbf{x}\in\mathcal{B}}w(\mathbf{x},\boldsymbol{\mu}_{c}^{t-1})\cdot\Delta\boldsymbol{\phi}(\mathbf{x})}{\sum_{\mathbf{x}^{\prime}\in\mathcal{B}}w(\mathbf{x}^{\prime},\boldsymbol{\mu}_{c}^{t-1})}, \tag{9}\]

\[\Delta\boldsymbol{\mu}_{c}^{b}=\eta\Delta\boldsymbol{\mu}_{c}^{b-1}+(1-\eta)\Delta\boldsymbol{\mu}_{c}. \tag{10}\]

We go over the batches \(\mathcal{B}_{b}\), \(b=1,\ldots,B\), but drop the subscript \(b\) from \(\mathcal{B}_{b}\) for brevity. The final expression after \(B\) batches is:

\[\boldsymbol{\mu}_{c}^{t}=\boldsymbol{\mu}_{c}^{t-1}+\Delta\boldsymbol{\mu}_{c}^{B}. \tag{11}\]

Hyper-parameter \(\eta\) (set to 0.1 in all our experiments) controls the exponential moving average. Next, we estimate the covariance \(\boldsymbol{\Sigma}_{c}^{\prime}\) over the batches \(\mathcal{B}_{b}\), \(b=1,\ldots,B\) (again dropping the subscript \(b\) for brevity). As \(\boldsymbol{\Sigma}_{c}^{\prime}\) is estimated from a limited number of samples, we apply the shrinkage operator (Ledoit and Wolf, 2004), known to estimate covariance matrices reliably in such circumstances, together with the exponential moving average, to obtain \(\boldsymbol{\Sigma}_{c}^{\prime\,b}\):

\[\boldsymbol{\Sigma}_{c}^{\prime}=\frac{\sum_{\mathbf{x}\in\mathcal{B}}w(\mathbf{x},\boldsymbol{\mu}_{c}^{t-1})\cdot\big{(}f_{t}(\mathbf{x})-\boldsymbol{\mu}_{c}^{t}\big{)}\big{(}f_{t}(\mathbf{x})-\boldsymbol{\mu}_{c}^{t}\big{)}^{\top}}{\sum_{\mathbf{x}^{\prime}\in\mathcal{B}}w(\mathbf{x}^{\prime},\boldsymbol{\mu}_{c}^{t-1})}, \tag{12}\]

\[\boldsymbol{\Sigma}_{c}^{\prime\,b}=\eta\boldsymbol{\Sigma}_{c}^{\prime\,b-1}+(1-\eta)\big{(}(1-\alpha)\boldsymbol{\Sigma}_{c}^{\prime}+\alpha\mathbf{I}\big{)}, \tag{13}\]

which leads to the final expression \(\boldsymbol{\Sigma}_{c}^{t}=\boldsymbol{\Sigma}_{c}^{\prime\,B}\) after \(B\) batches. Hyper-parameter \(\eta\) again controls the exponential moving average (set to 0.1 in all our experiments), and \(0\leq\alpha\leq 1\) decides by how much the covariance is shrunk towards the isotropic Normal distribution (set to 0.05 in all our experiments).
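A single-batch PyTorch sketch of the drift estimate of Eqs. (7)-(9) and the shrunk covariance of Eqs. (12)-(13); the EMA accumulation over batches (Eqs. (10) and (13)) and the final updates of Eq. (11) are left to the training loop, and the names are illustrative.

```python
import torch

def mean_drift(mu_prev, feats_new, feats_old, sigma=0.5):
    """Eqs. (7)-(9): weighted drift of one old-class mean for one batch.
    feats_new / feats_old: f_t(x) and f_{t-1}(x) for the current batch."""
    delta_phi = feats_new - feats_old                                    # Eq. (7)
    w = torch.exp(-(feats_old - mu_prev).pow(2).sum(1) / (2 * sigma**2)) # Eq. (8)
    return (w[:, None] * delta_phi).sum(0) / w.sum().clamp_min(1e-12)    # Eq. (9)

def batch_covariance(mu_t, mu_prev, feats_new, feats_old, sigma=0.5, alpha=0.05):
    """Eq. (12), followed by the shrinkage term inside Eq. (13)."""
    w = torch.exp(-(feats_old - mu_prev).pow(2).sum(1) / (2 * sigma**2))
    centered = feats_new - mu_t
    cov = (w[:, None, None] * centered[:, :, None] * centered[:, None, :]).sum(0)
    cov = cov / w.sum().clamp_min(1e-12)
    eye = torch.eye(cov.shape[0], device=cov.device)
    return (1 - alpha) * cov + alpha * eye     # shrink toward the isotropic I
```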
We then sample pseudo-features of past classes from our prototype representations while processing each mini-batch \(\mathcal{B}\):

\[\boldsymbol{\phi}_{c}^{t}\sim\mathcal{N}(\boldsymbol{\mu}_{c}^{t},\boldsymbol{\Sigma}_{c}^{t})\quad\text{where}\quad c\sim\mathcal{U}(1,|\mathcal{C}^{\prime\prime}|), \tag{14}\]

where \(\mathcal{N}(\boldsymbol{\mu}_{c}^{t},\boldsymbol{\Sigma}_{c}^{t})\) denotes the multivariate Normal distribution with mean \(\boldsymbol{\mu}_{c}^{t}\) and covariance \(\boldsymbol{\Sigma}_{c}^{t}\), \(\mathcal{U}(1,|\mathcal{C}^{\prime\prime}|)\) denotes the uniform distribution with probability \(1/|\mathcal{C}^{\prime\prime}|\) on the support \([1,|\mathcal{C}^{\prime\prime}|]\) (and zero elsewhere), and \(\mathcal{C}^{\prime\prime}\), defined in Section 4.2, is the set of classes of sessions \(0,\ldots,t-1\). Finally, we note that:

\[\boldsymbol{\phi}_{c}^{t}\sim\mathcal{N}(\boldsymbol{\mu}_{c}^{t},\boldsymbol{\Sigma}_{c}^{t})\quad\iff\quad\boldsymbol{\phi}_{c}^{t}=\boldsymbol{\mu}_{c}^{t}+\text{Chol}(\boldsymbol{\Sigma}_{c}^{t})\mathbf{v}\quad\text{where}\quad\mathbf{v}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{15}\]

where \(\text{Chol}(\cdot)\) is the Cholesky decomposition3 of a symmetric positive definite matrix.

Footnote 3: The Cholesky decomposition has a fast implementation in PyTorch and numerically stable results compared to the SVD-based matrix square root.

### Integration of Class Learning and the TRIPS Loss

To further preserve old class information and counter the extreme data imbalance of our exemplar-free approach, during the new sessions we want the classifier and the TRIPS loss to have access to pseudo-samples obtained by Eq. (15), in order to update the classifier parameters and keep features of different classes separated (not only new-class features but also old-class features, despite the lack of access to old class samples), while actively suppressing the domain-wise information. Thus, the classification loss in Eq. (1) is merged at the final stage with the classification loss on pseudo-features in Eq. (16). Let \((\boldsymbol{\phi},y)\in\mathcal{S}\) be the pseudo-samples (with labels \(y\in\mathcal{C}^{\prime\prime}\)) obtained by Eq. (15) for session \(t\) and batch \(\mathcal{B}\). We define:

\[L_{t}^{\text{pseudo}}=-\frac{1}{|\mathcal{S}|}\sum_{(\boldsymbol{\phi},y)\in\mathcal{S}}\log\left(\frac{\exp\left(\theta_{t,y}^{\top}\boldsymbol{\phi}\right)}{\sum_{c\in\mathcal{C}^{\prime}}\exp\left(\theta_{t,c}^{\top}\boldsymbol{\phi}\right)}\right). \tag{16}\]

Fig. 3: An example of the latent space and class-wise feature distributions. Due to the intra-class variance (multiple samples from several training domains), a single per-class feature mean (a first-order statistic) is not sufficient to represent the class distribution well. The old class distribution is better captured by the mean and covariance matrix (the multivariate Normal distribution). Multivariate Normal distributions (second-order statistics) capture class boundaries better than isotropic/univariate Normal distributions and first-order statistics such as mean vectors.
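The sampling of Eqs. (14)-(15) is a direct application of reparameterization through the Cholesky factor; here is a minimal PyTorch sketch (the jitter term is our addition, to keep the matrix numerically positive definite).

```python
import torch

def sample_pseudo_features(mu, cov, n, jitter=1e-5):
    """Draw n pseudo-features for one old class via Eqs. (14)-(15):
    phi = mu + Chol(Sigma) v, with v ~ N(0, I)."""
    eye = torch.eye(cov.shape[0], device=cov.device)
    L = torch.linalg.cholesky(cov + jitter * eye)   # Chol(Sigma), Eq. (15)
    v = torch.randn(n, mu.shape[0], device=mu.device)
    return mu + v @ L.T                             # each row equals mu + L v
```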
Subsequently, the TRIPS loss of Eq. (6) is redefined as Eq. (17):

\[L_{t}^{\text{trips,incr}}=\frac{1}{|\mathcal{B}|}\sum_{\mathbf{x}\in\mathcal{B}}\bigg{[}\max_{\mathbf{x}^{\prime}\in\mathcal{B}\setminus\{\mathbf{x}\}}\gamma_{\mathbf{x},\mathbf{x}^{\prime}}^{+}\cdot d(\mathbf{x},\mathbf{x}^{\prime})-\min\Big{(}\min_{\mathbf{x}^{\prime\prime}\in\mathcal{B}\setminus\{\mathbf{x}\}}\gamma_{\mathbf{x},\mathbf{x}^{\prime\prime}}^{-}\cdot d(\mathbf{x},\mathbf{x}^{\prime\prime}),\ \min_{\boldsymbol{\phi}\in\mathcal{S}}\|f_{t}(\mathbf{x})-\boldsymbol{\phi}\|_{2}^{2}\Big{)}+m\bigg{]}_{+}. \tag{17}\]

Notice that the negative-pair indicator \(\gamma^{-}\) between \(\mathbf{x}\in\mathcal{B}\) and pseudo-samples \(\boldsymbol{\phi}\in\mathcal{S}\) always equals 1: pseudo-samples are domain-agnostic by design, and the class sets of the pseudo-sample set \(\mathcal{S}\) and of the batch \(\mathcal{B}\) are disjoint, \(\mathcal{C}^{\prime\prime}\cap\mathcal{C}_{t}=\varnothing\). Also, to balance old- and new-class performance, we sample as many pseudo-features as there are new-class features, \(|\mathcal{S}|=|\mathcal{B}|\).

**The full loss.** \(L_{t}\) for step \(t\geq 0\) and hyper-parameters \(\lambda\) and \(\lambda^{\prime}\) is:

\[L_{t}=\begin{cases}L_{0}^{\text{class}}+\lambda L_{0}^{\text{trips,base}},&\text{if }t=0,\\ L_{t}^{\text{class}}+L_{t}^{\text{pseudo}}+\lambda L_{t}^{\text{trips,incr}}+\lambda^{\prime}L_{t}^{\text{dist}},&\text{if }t\geq 1,\end{cases} \tag{18}\]

where \(\lambda\) and \(\lambda^{\prime}\) balance the loss terms. By inspecting the value of each loss term, we set \(\lambda=1\) and \(\lambda^{\prime}=30\) in all our experiments, so that all terms contribute a similar level of penalty.

## 5 Experiments

In this section, we compare our proposed method with existing approaches in both incremental learning and domain generalization. We start by introducing the experimental details.

**Datasets.** Three domain generalization benchmark datasets, PACS (Li et al., 2017), OfficeHome (Venkateswara et al., 2017), and DomainNet (Saito et al., 2019), are used in our experiments. PACS contains 7 classes from 4 domains: Art, Cartoon, Photo, and Sketch; it provides challenging recognition scenarios with significant shifts between domains. OfficeHome is a large-scale dataset containing 65 classes from 4 domains: Real, Clipart, Art, and Product. DomainNet is also a large-scale dataset, containing 126 classes from 4 domains: Real, Clipart, Painting, and Sketch. To comprehensively evaluate our method, we run various multi-step incremental learning scenarios on these datasets. Specifically, according to the number of classes of each dataset, we perform a 2-step incremental learning task on PACS and both 5-step and 10-step incremental tasks on OfficeHome and DomainNet, similar to Simon et al. (2022).

**Evaluation Metric.** Domain-wise average accuracy is normally used in domain generalization to evaluate the generalization capability of a model (Gulrajani and Lopez-Paz, 2021). Under DGCIL conditions, we also need to evaluate the memorizing capability of the model and consider the performance on old and new classes separately. Thus, during each learning session, we first regard one domain as the unseen domain and train the model on the remaining domains. The domain-wise class-wise average accuracy is then reported as the overall performance of the model. We also use the domain-wise harmonic accuracy between old and new classes to detect possible prediction bias (a small sketch of these two metrics is given below).
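A minimal Python sketch of the two metrics (per-class accuracy averaging and the harmonic mean of old- and new-class accuracy); the exact bookkeeping of the paper's protocol may differ.

```python
def average_accuracy(correct_per_class, total_per_class):
    """Class-wise average accuracy over all classes learnt so far."""
    accs = [c / t for c, t in zip(correct_per_class, total_per_class) if t > 0]
    return sum(accs) / len(accs)

def harmonic_accuracy(acc_old, acc_new):
    """Harmonic mean of old- and new-class accuracy; it stays low whenever
    either term is low, exposing prediction bias toward old or new classes."""
    if acc_old + acc_new == 0:
        return 0.0
    return 2 * acc_old * acc_new / (acc_old + acc_new)
```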
Each domain becomes the test domain once; _e.g._, given four domains, we run four separate sets of experiments and average over them. Moreover, our evaluation protocol is explained in Figure 4 (left). We argue that the protocol of MSL_MOV (right) may be biased to overestimate the performance on old classes. This may occur because accuracy is averaged over sub-tasks, and it may even overestimate the overall accuracy if early sub-tasks align more closely with the test domain than the whole task. To address this concern, we evaluate all models at the end of each incremental session.

**Model Selection.** As model selection is non-trivial for domain generalization, following the domain generalization benchmark DomainBed (Gulrajani and Lopez-Paz, 2021), we use the training-domain validation set strategy to select the final model in all experiments. Specifically, we split each training domain into 80% and 20% subsets for training and validation, respectively, and combine the validation subsets of all training domains into the overall validation set. The model with the maximal domain-wise class-wise accuracy on this overall validation set is chosen as the final model (a code sketch of this split is given below).

Figure 4: Evaluation protocols for DGCIL. (_right_) The only existing protocol, implemented by MSL_MOV, evaluates each incremental step by averaging performance over individual sub-tasks. As the first sub-task may be biased towards old classes from the previous incremental step, we argue that a better evaluation protocol should avoid sub-tasks. Also, the first sub-task may be more aligned with the test domain than the whole task, so evaluation should instead take place at the end of the task. (_left_) TRIPS performs the incremental training step without splitting the problem into sub-tasks. We evaluate on samples from the test domain once the incremental step finishes. We adopt this evaluation protocol for both TRIPS and MSL_MOV.

Figure 5: Domain-average performance of each incremental step under _the 2-step incremental setting_ on the PACS dataset.

**Implementation Details.** We build our code on top of the domain generalization benchmark DomainBed (Gulrajani and Lopez-Paz, 2021). For our experiments, we use ResNet-34 (He et al., 2016) as the backbone network. The initial learning rate is set to 5e-5 for all datasets. The maximum number of iterations is set to 5000, and we utilize the training-domain validation set strategy (Gulrajani and Lopez-Paz, 2021) to select the final model. We use a batch size of 32 per training domain; the final batch for each training step is the concatenation of the data from all training domains. To capture possible performance variations due to the data splits and report accurate numbers, we run each experiment with three seeds and report the average result.
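The training-domain validation strategy described under Model Selection can be sketched as follows (a minimal Python sketch; names are illustrative):

```python
import random

def training_domain_validation(domain_data, val_frac=0.2, seed=0):
    """DomainBed-style training-domain validation split: each training domain
    is split 80/20, and the 20% parts are pooled into one validation set
    used for model selection."""
    rng = random.Random(seed)
    train, val = [], []
    for samples in domain_data:          # one list of samples per training domain
        samples = samples[:]
        rng.shuffle(samples)
        cut = int(len(samples) * (1 - val_frac))
        train.append(samples[:cut])
        val.extend(samples[cut:])
    return train, val
```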
Thus, we report the average performance over all incremental steps for each domain to evaluate out-of-distribution performance. We also report the average performance over all domains for each incremental step to evaluate the preservation of past knowledge.

Figure 5 and Table 1 show the experimental results on the PACS dataset. During the experiments, we find that applying batch normalization to the features from the feature extractor is important to boost performance. We conjecture this is because covariate shifts (Ioffe and Szegedy, 2015) occur during training, and the shift is more severe when the training data comes from various domains. To make fair comparisons, we apply the same batch normalization to all distillation-based methods. In addition, for training on small datasets, data augmentation is important to guarantee that sufficient data is available. Thus, we rotate training images by 90 degrees and regard the rotated images as additional classes, increasing both the number of samples and the number of class concepts. In each incremental step, we apply this data and class augmentation strategy in our method when the number of new classes is below 5. For fair comparisons, we apply the same augmentation strategy to our method and the other methods. We also note that MSL_MOV is an exemplar-based method by design. However, we study the exemplar-free setting in this paper. Thus, for a fair comparison, we equip MSL_MOV with mean-based prototypes and completely remove exemplars from MSL_MOV.

Footnote 4: Prototype-based IL methods usually perform worse than exemplar-based IL methods. Thus, the performance of MSL_MOV reported by Simon et al. (2022) differs from the performance reported in our paper due to (i) the prototype-based "memory" and (ii) the more realistic evaluation setting of Figure 4 (left).

According to Table 1, TRIPS always achieves the best performance on the different unseen test domains. We gain an average of 8.69% in average accuracy and 12.94% in harmonic accuracy over the second-best method. When the hardest _sketch_ domain is the unseen test domain, our method outperforms the SOTA by 14.07% and 28.36% in average and harmonic accuracy, respectively. This shows that TRIPS copes well with such an out-of-distribution scenario. Figure 5 shows that TRIPS also achieves both the highest average accuracy and the highest harmonic accuracy at every incremental step.

Figure 6 and Table 2 show the experimental results on the OfficeHome and DomainNet datasets. Under both the five-step and ten-step incremental settings, TRIPS shows significant improvements over the baselines. According to Table 2, under four different task settings, we outperform the second-best baseline by an average of 13.78%/15.86% in average/harmonic accuracy over all domains, which shows our method copes well with out-of-distribution test data. According to Figure 6, under four different task settings, we also outperform the second-best baseline in almost every step on both average and harmonic accuracy, which shows our method is more robust to catastrophic forgetting.
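Since the ablation below switches the individual loss terms on and off, it is convenient to restate Eqs. (17)-(18) in code. The following PyTorch-style sketch is illustrative only: the tensor names, the margin value, and the hard-mining details are our own assumptions, not the authors' released implementation.

```python
import torch

def inter_task_triplet_loss(feats, labels, pseudo, margin=0.1):
    """Eq. (17): for each anchor, the hardest (farthest) positive minus the
    hardest (closest) negative, where negatives include both in-batch samples
    of other classes and old-class pseudo-samples from S (gamma^- = 1 there)."""
    d = torch.cdist(feats, feats).pow(2)            # squared L2 distances
    same = labels[:, None] == labels[None, :]
    eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    pos = d.masked_fill(~(same & ~eye), float('-inf')).amax(dim=1)
    neg_in_batch = d.masked_fill(same, float('inf')).amin(dim=1)
    neg_pseudo = torch.cdist(feats, pseudo).pow(2).amin(dim=1)
    neg = torch.minimum(neg_in_batch, neg_pseudo)
    return torch.clamp(pos - neg + margin, min=0).mean()   # [.]_+ hinge

def full_loss(l_class, l_pseudo, l_triplet, l_dist, t, lam=1.0, lam_prime=30.0):
    """Eq. (18) with the weights reported in the paper; at t = 0 the triplet
    term is the base-step variant and the pseudo/distillation terms vanish."""
    if t == 0:
        return l_class + lam * l_triplet
    return l_class + l_pseudo + lam * l_triplet + lam_prime * l_dist
```

Here the squared Euclidean distance plays the role of \(d(\cdot,\cdot)\) and of \(\lVert f_{t}(\mathbf{x})-\mathbf{\phi}\rVert_{2}^{2}\) in Eq. (17); anchors without any in-batch positive contribute zero after the hinge.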
### Ablation Study

To validate the effectiveness of each part of our method, Tables 3 and A.5 show the ablation study on the OfficeHome dataset (Venkateswara et al., 2017).

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c}{PACS} \\ & \multicolumn{4}{c}{2-Step Inc (+ 2 cls/step)} \\ \cline{2-5} & Art & Cartoon & Photo & Sketch \\ \hline \hline \multicolumn{5}{c}{average accuracy (\% \(\uparrow\))} \\ \hline ERM & 51.01 & 50.22 & 54.35 & 48.85 \\ MixStyle (Zhou et al., 2021) & 44.68 & 47.05 & 49.15 & 47.06 \\ CORAL (Sun and Saenko, 2016) & 50.32 & 51.26 & 54.50 & 50.20 \\ EWC (Kirkpatrick et al., 2017) & 52.01 & 50.03 & 54.57 & 50.25 \\ MAS (Aljundi et al., 2018) & 54.52 & 53.00 & 58.48 & 51.35 \\ MSL\_MOV (Simon et al., 2022) & 56.76 & 52.55 & 56.79 & 48.96 \\ MSL\_MOV + Norm & 54.40 & 53.45 & 60.76 & 51.92 \\ MSL\_MOV + Data Aug & 57.56 & 55.90 & 70.22 & 53.99 \\ MSL\_MOV + Norm + Data Aug & 64.11 & 49.59 & 70.84 & 53.20 \\ MSL\_MOV + Old Prototype & 57.18 & 53.24 & 57.81 & 49.01 \\ LwF (Li and Hoiem, 2017) & 51.71 & 50.11 & 54.43 & 47.52 \\ LwF + Norm & 63.91 & 64.79 & 68.67 & 59.80 \\ LwF + Norm + Data Aug & 69.43 & 72.62 & 76.63 & 55.60 \\ TRIPS (ours) + Data Aug & **80.79** & **74.80** & **83.77** & **77.27** \\ \hline \hline \multicolumn{5}{c}{harmonic accuracy (\% \(\uparrow\))} \\ \hline ERM & 0.0 & 0.0 & 0.0 & 0.0 \\ MixStyle (Zhou et al., 2021) & 0.0 & 0.0 & 0.0 & 0.0 \\ CORAL (Sun and Saenko, 2016) & 0.0 & 0.0 & 0.0 & 0.0 \\ EWC (Kirkpatrick et al., 2017) & 0.51 & 0.12 & 1.10 & 0.0 \\ MAS (Aljundi et al., 2018) & 11.71 & 10.81 & 15.84 & 5.13 \\ MSL\_MOV (Simon et al., 2022) & 19.90 & 8.76 & 10.29 & 0.55 \\ MSL\_MOV + Norm & 32.71 & 40.28 & 42.11 & 38.61 \\ MSL\_MOV + Data Aug & 22.11 & 26.90 & 49.21 & 12.39 \\ MSL\_MOV + Norm + Data Aug & 41.19 & 40.30 & 30.19 & 40.96 \\ MSL\_MOV + Old Prototype & 19.82 & 9.12 & 11.07 & 1.01 \\ LwF (Li and Hoiem, 2017) & 0.0 & 0.11 & 0.05 & 0.0 \\ LwF + Norm & 47.33 & 50.60 & 52.06 & 42.11 \\ LwF + Norm + Data Aug & 59.02 & 66.85 & 66.52 & 24.35 \\ TRIPS (ours) + Data Aug & **74.16** & **68.04** & **73.58** & **70.47** \\ \hline \hline \end{tabular} \end{table} Table 1: Domain-wise average performance for each unseen domain over all incremental steps on the PACS dataset under _the 2-step incremental setting_.

Firstly, when the triplet loss is used (which improves the generalization of the model), the average accuracy of the model improves by 1.60% in the base step. During incremental learning, the triplet loss improves the average/harmonic accuracy in each incremental step for every test domain by an average of 12.06%/11.34%. Thus, the benefit of the triplet loss is clear. Next, we update and inject old-class prototypes into both the cross-entropy and the triplet loss. We find that directly injecting the updated old-class prototypes does not help improve the performance. We conjecture this is because of the extreme data imbalance: we focus on the exemplar-free setting, and each old class only has one mean-based prototype preserved. Thus, to overcome the data imbalance and obtain balanced old- and new-class performance, sufficient pseudo-features of old classes are sampled from the updated class prototype representations. With the help of sampling, the performance is boosted by an average of 1.57%/1.47% in average/harmonic accuracy over all incremental steps.
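The pseudo-feature sampling step referred to above can be sketched as follows. This is a minimal illustration, assuming the prototype updates (shifting, moving average, shrinkage) have already been applied to each class's stored mean and covariance; all names are ours.

```python
import torch

def sample_pseudo_features(prototypes, counts, jitter=1e-4):
    """prototypes: dict old_class -> (mu: (D,), cov: (D, D)), the stored
    multivariate-Normal prototype representation of each old class.
    counts: dict old_class -> number of pseudo-features to draw, chosen so
    that the total matches the number of new-class features (|S| = |B|)."""
    feats, labels = [], []
    for c, (mu, cov) in prototypes.items():
        # Small jitter keeps the covariance positive definite numerically.
        mvn = torch.distributions.MultivariateNormal(
            mu, covariance_matrix=cov + jitter * torch.eye(mu.numel()))
        feats.append(mvn.sample((counts[c],)))
        labels.append(torch.full((counts[c],), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```

These pseudo-features then enter both \(L_{t}^{\text{pseudo}}\) and the triplet term, standing in for the missing old-class exemplars.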
### Comparison of Different Prototype Representations To validate the efficiency of our proposed prototype representation based on the multivariate Normal distribution (mean and covariance matrix per class) and our pseudo-sampling strategy, we compare our method with related prototype representation methods (Yu et al., 2020; Zhu et al., 2021). SDC (Yu et al., 2020) utilizes the single mean-based prototype per class and a non-parametric cosine classifier. SDC updates old-class means by shifting them through the lens of new-class samples and uses \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c}{OfficeHome} & \multicolumn{6}{c}{DomainNet} \\ \cline{2-13} & \multicolumn{3}{c}{5-Step Inc (+ 10 cls/step)} & \multicolumn{3}{c}{10-Step Inc (+ 5 cls/step)} & \multicolumn{3}{c}{5-Step Inc (+ 20 cls/step)} & \multicolumn{3}{c}{10-Step Inc (+ 10 cls/step)} \\ & Art & Clipart & Product & Real & Art & Clipart & Product & Real & Clipart & Painting & Real & Sketch & Clipart & Painting & Real & Sketch \\ \hline \multicolumn{13}{c}{average accuracy (\% \(\uparrow\))} \\ \hline ERM & 26.55 & 26.85 & 31.95 & 32.41 & 16.19 & 16.19 & 18.81 & 19.49 & 30.60 & 27.69 & 30.64 & 30.09 & 18.11 & 16.59 & 18.04 & 17.79 \\ Misyple (Zhou et al., 2021) & 15.16 & 18.96 & 22.33 & 21.87 & 10.15 & 12.27 & 14.04 & 14.32 & 21.12 & 28.65 & 17.98 & 20.64 & 13.21 & 10.44 & 11.79 & 13.27 \\ CORAL (Sun and Saenko, 2016) & 27.39 & 26.91 & 31.52 & 32.34 & 16.12 & 15.79 & 18.31 & 19.11 & 30.32 & 28.45 & 31.05 & 30.06 & 18.12 & 16.77 & 18.15 & 17.87 \\ EWC (Kirkpatrick et al., 2017) & 26.85 & 27.00 & 32.60 & 31.71 & 16.07 & 16.94 & 18.53 & 19.34 & 30.45 & 27.95 & 30.45 & 30.01 & 18.11 & 16.57 & 18.24 & 17.94 \\ MAS (Aljundi et al., 2018) & 29.95 & 30.56 & 41.83 & 36.51 & 19.32 & 21.28 & 22.51 & 24.01 & 29.86 & 28.26 & 31.86 & 30.22 & 18.14 & 16.37 & 19.74 & 17.99 \\ MSL\_MOV (Simon et al., 2020) & 30.43 & 30.65 & 36.62 & 39.29 & 19.68 & 20.81 & 23.81 & 27.05 & 33.52 & 31.69 & 37.87 & 33.54 & 20.66 & 19.59 & 23.00 & 21.08 \\ LwF (Li and Hoiem, 2017) & 26.82 & 26.95 & 32.15 & 33.02 & 16.07 & 16.81 & 19.21 & 20.59 & 30.60 & 28.05 & 29.89 & 30.08 & 17.91 & 15.88 & 18.64 & 17.82 \\ LwF + Norm + Norm & 38.06 & 41.17 & 47.77 & 51.46 & 25.83 & 28.37 & 34.45 & 39.21 & 36.25 & 34.62 & 40.41 & 39.32 & 29.10 & 26.81 & 30.84 & 29.06 \\ LwF+ Norm+Data Aug & 40.03 & 37.92 & 46.98 & 49.02 & 30.85 & 27.66 & 33.59 & 34.98 & 35.92 & 31.59 & 35.20 & 35.60 & 31.62 & 26.02 & 31.10 & 29.78 \\ TRIPS (ours) & **46.04** & **48.99** & **62.24** & **67.88** & **33.87** & **38.64** & **49.57** & **54.39** & **54.20** & **47.36** & **55.35** & **55.22** & **47.80** & **42.41** & **50.37** & **49.41** \\ \hline \multicolumn{13}{c}{harmonic accuracy (\% \(\uparrow\))} \\ \hline ERM & 0.63 & 0.84 & 0.14 & 0.0 & 0.10 & 0.33 & 0.48 & 1.25 & 0.15 & 0.08 & 0.03 & 0.08 & 0.09 & 0.10 & 0.01 & 0.04 \\ MixStyle (Zhou et al., 2021) & 0.0 & 0.01 & 0.0 & 0.0 & 0.0 & 0.0 & 0.04 & 0.0 & 0.01 & 0.0 & 0.02 & 0.0 & 0.0 & 0.0 & 0.02 & 0.0 \\ CORAL (Sun and Saenko, 2016) & 0.0 & 0.0 & 0.02 & 0.01 & 0.0 & 0.0 & 0.1 & 0.2 & 0.0 & 0.0 & 0.01 & 0.0 & 0.0 & 0.05 & 0.01 & 0.01 \\ EWC (Kirkpatrick et al., 2017) & 0.24 & 0.13 & 2.32 & 0.03 & 0.09 & 0.54 & 0.0 & 0.02 & 0.20 & 0.20 & 0.20 & 0.30 & 0.06 & 0.11 & 0.12 & 0.11 \\ MAS (Aljundi et al., 2018) & 5.60 & 16.85 & 20.08 & 10.65 & 7.03 & 12.17 & 7.68 & 7.26 & 4.59 & 4.20 & 5.15 & 2.89 & 3.32 & 2.15 & 4.69 & 3.10 \\ MSL\_MOV (Simon et al., 2022) & 10.69 & 11.41 & 14.75 & 
18.23 & 7.47 & 9.70 & 11.40 & 15.28 & 13.31 & 12.58 & 18.20 & 12.68 & 8.42 & 8.13 & 11.20 & 8.85 \\ LwF (Li and Hoiem, 2017) & 0.23 & 0.46 & 0.0 & 0.0 & 0.10 & 0.08 & 0.16 & 0.25 & 0.15 & 0.10 & 0.25 & 0.12 & 0.09 & 0.12 & 0.03 & 0.01 \\ LwF + Norm & 31.14 & 37.02 & 40.49 & 44.36 & 20.84 & 26.88 & 32.35 & 37.04 & 22.71 & 22.30 & 23.80 & 23.58 & 25.62 & 24.20 & 29.36 & 25.83 \\ LwF+ Norm+Data Aug & 34.00 & 31.87 & 39.05 & 41.63 & 29.72 & 26.19 & 31.77 & 33.75 & 22.68 & 19.71 & 22.88 & 22.77 & 30.05 & 24.20 & 29.98 & 28.19 \\ TRIPS (ours) & **39.56** & **44.89** & **57.46** & **63.71** & **32.40** & **39.07** & **49.68** & **55.10** & **42.88** & **36.97** & **44.50** & **43.28** & **49.93** & **44.49** & **52.45** & **50.59** \\ \hline \hline \end{tabular} \end{table} Table 2: Domain-wise average performance over all incremental steps on OfficeHome and DomainNet under _the 5-step and 10-step incremental settings._ Figure updated class-wise means for classification. As in SDC, the final classification result is determined by the maximum cosine similarity over the feature vector of an input image and the updated class-wise mean-based prototypes. SDC does not sample pseudo-features of old classes as it is impossible to perform sampling from first-order statistics such as the mean. PASS (Zhu et al., 2021a) uses the mean-based prototype with a radius (hyper-parameter) to form an isotropic Normal distribution from which pseudo-features of old classes are sampled. The formula \(\mathbf{\phi}^{\prime}_{c}=\mathbf{\mu}^{\prime}_{c}+\gamma\mathbf{\mathrm{v}}\) describes the sampling step, where \(\mathbf{\phi}^{\prime}_{c}\) are the sampled pseudo-features of old class \(c\) at session \(t\) and \(\mathbf{\mu}^{\prime}_{c}\) is the mean-based prototype for old class \(c\) at session \(t\). Moreover, \(\mathbf{\mathrm{v}}\sim\mathcal{N}(\mathbf{0},\mathds{I})\) is a vector sampled from the isotropic Normal distribution, whereas \(\gamma\geq 0\) is the radius that controls the deviation of pseudo-features from the mean. We implement the SDC and PASS methods in our setting using their official code available online. As MSL_MOV is an exemplar-based method which does not utilize old-class prototypes, for a fair comparison, we implement it with SDC (updated old prototypes). Table 4 shows that modeling prototype representations as full multivariate Normal distributions per class outperforms the naive use of isotropic Normal distributions with distinct means per class. ### Hyper-parameter Analysis To find the most suitable hyper-parameter value, we perform the hyper-parameter grid analysis on the PACS dataset under the two-step incremental protocol. Figure 8 shows the analysis for different loss terms. To better explore the effect of each loss term, these experiments are trained without data augmentation and pseudo-sampling. Five old exemplars for each class and each domain are stored and utilized. Firstly, we set the distillation loss term hyper-parameter (\(\lambda^{\prime}\)) to 1 and vary the value for the triplet loss term hyper-parameter (\(\lambda\)). We find that when \(\lambda\) is set to 1, the best average and harmonic accuracy are acquired. Then we set \(\lambda\) to 1 and vary the value for \(\lambda^{\prime}\). We find that when \(\lambda^{\prime}\) is set to 30, the best average and harmonic accuracy are acquired. Thus, for all our experiments, we set \(\lambda\) to 1 and \(\lambda^{\prime}\) to 30. We do a similar loss term hyper-parameter grid search for other methods. 
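The coordinate-wise search described above amounts to the following loop; `train_and_validate` is a stand-in for a full training run scored on the training-domain validation set, and the grid values are illustrative assumptions, not the paper's exact grid.

```python
def coordinate_grid_search(train_and_validate,
                           lam_grid=(0.1, 0.5, 1.0, 5.0),
                           lam_prime_grid=(1.0, 10.0, 30.0, 100.0)):
    """Fix lambda' = 1 and sweep lambda, then fix the best lambda and sweep
    lambda'; returns the pair with the best validation accuracy."""
    lam = max(lam_grid, key=lambda l: train_and_validate(l, 1.0))
    lam_prime = max(lam_prime_grid,
                    key=lambda lp: train_and_validate(lam, lp))
    return lam, lam_prime   # the paper settles on (1, 30)
```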
For all distillation-based methods (Li and Hoiem, 2017; Simon et al., 2022), we use the hyper-parameter value 30 for the distillation loss. For importance-matrix-based methods (Kirkpatrick et al., 2017; Aljundi et al., 2018), we use the hyper-parameter value 1000 for the regularization loss.

We also explore the effect of the hyper-parameters related to prototype shifting and pseudo-sampling. Figure 7 shows the values we explored for the shifting factor (\(\sigma\)), the moving-average factor (\(\eta\)), and the shrinkage factor (\(\alpha\)). Firstly, we set \(\eta=0.1\) and \(\alpha=0.1\), and vary the value of \(\sigma\); the best performance is obtained when \(\sigma\) is set to 0.5. Then, we set \(\sigma=0.5\) and \(\alpha=0.1\), and vary the value of \(\eta\); the best performance is obtained when \(\eta\) is set to 0.1. After that, we set \(\sigma=0.5\) and \(\eta=0.1\), and vary the value of \(\alpha\); the best performance is obtained when \(\alpha\) is set to 0.05. These parameters were selected on the PACS dataset and then applied without any tuning on both the OfficeHome and DomainNet datasets. We also observed that the same hyper-parameters were optimal when we used the validation set of PACS, comprising the three training domains of PACS with 20% of the training data withheld for validation.

Figures 9 and 10 show the comparison with other methods on the OfficeHome (Venkateswara et al., 2017) and DomainNet (Saito et al., 2019) datasets using ResNet-18 as the backbone network. In both settings, our TRIPS method always achieves the best performance.

### Training Time Analysis

We run all our experiments on a single NVIDIA V100 GPU with 16 GB of memory. We use a batch size of 32 for each domain set, and three training domains are utilized in each batch, so the total batch size is 96. When data augmentation is utilized, due to memory limitations, we reduce the batch size from 32 to 24 per domain set, so the total batch size is reduced from 96 to 72. For each incremental session, the training time is between 5 and 7 hours. When data augmentation is utilized, the training data within each batch is enlarged from 72 to 288, which increases the training time to between 10 and 15 hours per incremental session. Within each batch, an equal number of pseudo-samples of old classes is generated from the multivariate Normal distributions (this step does not rely on the feature extractor, minimizing its impact on runtime). These old-class samples are used alongside the samples from the new classes.

## 6 Conclusions

In this paper, we have explored the challenging DGCIL task and proposed our TRIPS method. To improve the generalization of the model, the triplet loss helps capture semantic information while eliminating domain information. With no old exemplars preserved, we sample pseudo-features of old classes from the updated prototype representations (multivariate Normal distributions). These pseudo-features not only help preserve old-class knowledge but also support the TRIPS loss in maintaining inter-class boundaries, thus boosting the performance. Importantly, these prototype representations are domain-agnostic, as our loss enforces domain confusion. Substantial improvements on the benchmarks demonstrate the effectiveness of our method.

## Acknowledgements

This work was funded by CSIRO's Reinvent Science and CSIRO's Data61 Science Digital.

Figure 8: Ablations of loss hyper-parameters \(\lambda\) and \(\lambda^{\prime}\) on the PACS dataset.
Figure 10: Performance under _the five-step incremental settings_ (20 new classes added at a time) on the DomainNet dataset based on ResNet-18. Figure 7: Ablations of prototype shifting and pseudo-sampling hyper-parameters \(\sigma\), \(\eta\), and \(\alpha\) on the PACS dataset. Figure 9: Performance under _the five-step incremental settings_ (10 new classes added at a time) on the OfficeHome dataset based on ResNet-18.
2303.17986
A new boson expansion theory utilizing a norm operator
We propose a new boson expansion method using a norm operator. The small parameter expansion, in which the boson approximation becomes the zeroth-order approximation, requires the double commutation relations between phonon operators that are not closed between the phonon excitation modes adopted as boson excitations. This results in an infinite expansion regardless of whether the type of the boson expansion is Hermitian or non-Hermitian. The small parameter expansion does not hold when the commutation relations are closed. The norm operator is expressed as a function of the number operator in the physical subspace, which enables us to obtain substantially a finite boson expansion regardless of the Hermitian or non-Hermitian type. We also point out the problems of the conventional boson expansion methods. The normal-ordered linked-cluster expansion theory has failed to refute Marshalek's claim that KT-1 and KT-2 are of chimerical boson expansion. The Dyson boson expansion theory does not have exceptional superiority over other types. Previous studies using the boson expansion methods should be re-examined.
Kimikazu Taniguchi
2023-03-31T11:49:26
http://arxiv.org/abs/2303.17986v2
# A new boson expansion theory utilizing a norm operator

###### Abstract

We propose a new boson expansion method using a norm operator. The small parameter expansion, in which the boson approximation becomes the zeroth-order approximation, requires the double commutation relations between phonon operators that are not closed between the phonon excitation modes adopted as boson excitations. This results in an infinite expansion regardless of whether the type of the boson expansion is Hermitian or non-Hermitian. The small parameter expansion does not hold when the commutation relations are closed. The norm operator is expressed as a function of the number operator in the physical subspace, which enables us to obtain substantially a finite boson expansion regardless of the Hermitian or non-Hermitian type. We also point out the problems of the conventional boson expansion methods. The normal-ordered linked-cluster expansion theory has failed to refute Marshalek's claim that KT-1 and KT-2 are of chimerical boson expansion. The Dyson boson expansion theory does not have exceptional superiority over other types.

## 1 Introduction

Microscopic elucidation of the large-amplitude collective motion of atomic nuclei remains an important and challenging task, and its achievement requires developing methods that overcome small-amplitude-oscillation approximations such as the Tamm-Dancoff approximation and the random phase approximation. The boson expansion theory is one of the methods going beyond the small-amplitude-oscillation approximations [1]. The boson expansion theory was initially formulated by replacing fermion quasi-particle pair operators with boson polynomials that reproduce their commutation relations [2]. Later, referring to the preceding work [3], the boson expansion theory was given as a mapping theory by utilizing a one-to-one correspondence between the basis vectors in fermion space and the completely antisymmetric state vectors in boson space [4]. The Holstein-Primakoff and Dyson boson expansions have also been formulated in the same way [5]. These formulations target all pair-operator excitations in fermion space. Practical use, however, has adopted not all excitation modes but only the collective and, when needed, some non-collective excitation modes of Tamm-Dancoff type phonons. Initially, there were two methods: one constructs the mapping operator from the phonons with only the crucial excitation modes [6; 7], and the other picks up only these phonons and seeks boson expansions that reproduce their commutation relations [8]. The boson expansion methods thus formulated were used to elucidate large-amplitude collective motions such as the shape transitions of nuclei in the transitional region [8; 9; 10]. The boson expansion method called KT-2 [8], formulated in the latter way, was, however, claimed to produce incorrect boson expansions [11; 12] and was reformulated into the so-called KT-3 [13] according to the former method. The Dyson boson expansion theory (DBET), a finite expansion of the non-Hermitian type, has also been formulated by the former method [14]. Although the formulation seems established and certain results have been achieved, problems remain with the boson expansion methods. One concerns the approximate treatment of the algebra of the Tamm-Dancoff type phonons. The double commutators among the Tamm-Dancoff type phonons generally do not close within partially selected excitation modes.
Until now, without exception, boson expansion methods that restrict the phonon excitation modes have used approximations neglecting the modes not selected as boson excitation modes. The normal-ordered linked-cluster expansion theory (NOLCEXPT) [13; 15] neglects these modes in the inverse of the norm matrices of the multi-phonon state vectors to obtain its boson expansions and finally abandons all the still-remaining modes. DBET truncates the unselected phonon operators by adopting the phonon-truncation approximation [9], which is also called the _closed-algebra approximation_ [14]. Each of the approximations above is essential: NOLCEXPT adopts it to obtain the same expansions as KT-2, and DBET to obtain finite expansions. All these approximations make the double commutators among the selected phonon operators closed. It is claimed that the validity of this approximation has been verified for specific nuclei, and it is also shown that the norm of the multi-phonon state vector obtained under this approximation rapidly approaches 0 as the number of phonon excitations increases, which brings about rapid convergence of the boson expansions [7; 16]. Such behavior of the norm is due to the effect of the Pauli exclusion principle [16]. Its rapid decrease means that the effect is strong. On the other hand, NOLCEXPT claims that its boson expansion is a small parameter expansion with good convergence. Therefore, in the fermion subspace spanned by the multi-phonon state vectors with selected excitation modes, the effect of the Pauli exclusion principle should be weak. If this is correct, then the norms of the multi-phonon state vectors would not approach zero rapidly as the number of phonon excitations increases. We should investigate the cause of these contradictory conclusions.

The other problem is about the phonon excitation number. Until now, for the multi-phonon state vectors used as the basis vectors of the fermion subspace to be mapped, the sorts of the excitation modes have been limited, while the number of phonon excitations has not [13; 14; 15]. Without restricting the phonon excitation number, the eigenvalues of the norm matrices of the multi-phonon state vectors become zero once the number of excitations becomes large enough, even when the sorts of excitation modes are restricted. Nevertheless, NOLCEXPT is formulated assuming that zero eigenvalues do not appear regardless of the number of phonon excitations [13; 15]. There is, however, no clear explanation for the validity of this assumption.

We have proposed a boson-fermion expansion theory (BFEXP) [17; 18] as an alternative to the boson expansion theory. The boson expansion theory treats all the adopted phonon excitation modes as bosons, while BFEXP, in the zeroth-order approximation, represents only the phonons with collective excitation modes as bosons and leaves those with non-collective modes as original phonons. We can derive boson expansions from this method by extending the boson part to the necessary non-collective modes and suppressing the fermion excitations. Since the formulation of BFEXP does not use the approximation for the commutation relations among the phonon operators, it is worthwhile to formulate a new boson expansion method without that approximation and to compare its boson expansions with those derived from BFEXP.
In this article, we propose a new boson expansion theory, which we name the norm operator method (NOM). It enables us to handle both the Hermitian and non-Hermitian types, the cases with and without limits on the phonon excitation modes and the number of excitations, and the contribution of the phonon excitation modes that are neglected in the conventional boson expansion methods. In section 2, we deal with the Tamm-Dancoff type phonons, the multi-phonon state vectors, and the ideal boson state vectors. In section 3, we give a mapping utilizing a norm operator. As specific examples, we deal with the case of mapping all modes of phonon excitations, with and without restriction of the phonon excitation number, and the restricted case where the maximum number of phonon excitations is one. Section 4 deals with the boson expansions. First, we confirm the conditions for using the ideal boson state vectors and then give the formulae used in the boson expansions. Next, we provide the conditions under which the boson expansions become a small parameter expansion, offer an order-estimation method for the expansion terms, perform the boson expansions, show that all types of mapping of the small parameter expansion give infinite boson expansions, and provide the boson expansions of the phonon operators and the scattering operators up to terms that have not been obtained so far. We also deal with non-small-parameter boson expansions, where we obtain DBET and boson expansions that are finite and Hermitian. Finally, we point out and stress the essential role of the norm operator in the boson expansion method. In section 5, we take up the conventional methods and point out their problems. Section 6 is a summary.

## 2 Fermion space and boson space

### Tamm-Dancoff type phonon operators, scattering operators, and their commutation relations

We introduce pair operators, \[X_{\mu}^{\dagger}=\sum_{\alpha<\beta}\psi_{\mu}(\alpha\beta)a_{\alpha}^{\dagger}a_{\beta}^{\dagger}, \tag{1a}\] \[X_{\mu}=\sum_{\alpha<\beta}\psi_{\mu}(\alpha\beta)a_{\beta}a_{\alpha},\] (1b) \[B_{q}=\sum_{\alpha\beta}\varphi_{q}(\alpha\beta)a_{\beta}^{\dagger}a_{\alpha},\] (2a) \[B_{\bar{q}}=B_{q}^{\dagger}. \tag{2b}\] Here, \(a_{\alpha}^{\dagger}\) and \(a_{\alpha}\) are the quasi-particle creation and annihilation operators of a single-particle state \(\alpha\). The coefficients satisfy the following relations: \[\psi_{\mu}(\beta\alpha)=-\psi_{\mu}(\alpha\beta), \tag{3a}\] \[\sum_{\alpha<\beta}\psi_{\mu}(\alpha\beta)\psi_{\mu^{\prime}}(\alpha\beta)=\delta_{\mu,\mu^{\prime}},\] (3b) \[\sum_{\mu}\psi_{\mu}(\alpha\beta)\psi_{\mu}(\alpha^{\prime}\beta^{\prime})=\delta_{\alpha,\alpha^{\prime}}\delta_{\beta,\beta^{\prime}}-\delta_{\alpha,\beta^{\prime}}\delta_{\beta,\alpha^{\prime}}, \tag{3c}\] \[\varphi_{\bar{q}}(\alpha\beta)=\varphi_{q}(\beta\alpha), \tag{4a}\] \[\sum_{\alpha\beta}\varphi_{q}(\alpha\beta)\varphi_{q^{\prime}}(\alpha\beta)=\delta_{q,q^{\prime}},\] (4b) \[\sum_{q}\varphi_{q}(\alpha\beta)\varphi_{q}(\alpha^{\prime}\beta^{\prime})=\delta_{\alpha,\alpha^{\prime}}\delta_{\beta,\beta^{\prime}}. \tag{4c}\] These are the most common orthogonal transformations of the quasi-particle pairs \(a_{\alpha}^{\dagger}a_{\beta}^{\dagger}\), \(a_{\beta}a_{\alpha}\), and \(a_{\beta}^{\dagger}a_{\alpha}\). They are used to couple the angular momenta of quasi-particles to those of the quasi-particle pairs.
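As a sanity check on Eqs. (3), note that any orthogonal mixing of the elementary pair basis yields valid coefficients \(\psi_{\mu}(\alpha\beta)\). The following toy numpy script (dimensions chosen arbitrarily; not from the paper) verifies the orthonormality (3b) and completeness (3c) relations numerically.

```python
import numpy as np
from itertools import combinations

n = 5                                   # toy number of single-particle states
pairs = list(combinations(range(n), 2))
P = len(pairs)                          # number of independent pairs
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((P, P)))   # random orthogonal mixing

# psi[mu, alpha, beta], extended antisymmetrically as in Eq. (3a)
psi = np.zeros((P, n, n))
for i, (al, be) in enumerate(pairs):
    psi[:, al, be] = Q[:, i]
    psi[:, be, al] = -Q[:, i]

# (3b): sum_{alpha<beta} psi_mu psi_mu' = delta; the factor 1/2 converts the
# unrestricted (alpha, beta) sum back to the restricted alpha < beta sum.
ortho = 0.5 * np.einsum('mab,nab->mn', psi, psi)
assert np.allclose(ortho, np.eye(P))

# (3c): sum_mu psi_mu(ab) psi_mu(a'b') = delta delta - delta delta
lhs = np.einsum('mab,mcd->abcd', psi, psi)
rhs = (np.einsum('ac,bd->abcd', np.eye(n), np.eye(n))
       - np.einsum('ad,bc->abcd', np.eye(n), np.eye(n)))
assert np.allclose(lhs, rhs)
print("Eqs. (3b) and (3c) verified for a random orthogonal psi.")
```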
Some of the \(X_{\mu}\) and \(X_{\mu}^{\dagger}\) are composed by further superposing such pair operators so as to reflect the dynamics in the selected phonons. The Tamm-Dancoff approximation or a similar approximation is usually applied to them to identify the collective and non-collective excitation modes. Hereafter, \(X_{\mu}^{\dagger}\) and \(X_{\mu}\) are called phonon creation and annihilation operators, and \(B_{q}\) is called a scattering operator. The phonon and scattering operators satisfy the following commutation relations: \[[X_{\mu},X_{\mu^{\prime}}^{\dagger}]=\delta_{\mu,\mu^{\prime}}-\sum_{q}\Gamma_{q}^{\mu\mu^{\prime}}B_{q}, \tag{5a}\] \[[B_{q},X_{\mu}^{\dagger}]=\sum_{\mu^{\prime}}\Gamma_{q}^{\mu\mu^{\prime}}X_{\mu^{\prime}}^{\dagger},\] (5b) \[[X_{\mu},B_{q}]=\sum_{\mu^{\prime}}\Gamma_{q}^{\mu^{\prime}\mu}X_{\mu^{\prime}}, \tag{5c}\] where the definition of \(\Gamma_{q}^{\mu\mu^{\prime}}\) is as follows: \[\Gamma_{q}^{\mu\mu^{\prime}}=\sum_{\alpha\beta}\varphi_{q}(\alpha\beta)\Gamma_{\alpha\beta}^{\mu\mu^{\prime}},\quad\Gamma_{\alpha\beta}^{\mu\mu^{\prime}}=\sum_{\gamma}\psi_{\mu}(\alpha\gamma)\psi_{\mu^{\prime}}(\beta\gamma). \tag{6}\] The following relation holds: \[\Gamma_{\bar{q}}^{\mu_{1}\mu_{2}}=\Gamma_{q}^{\mu_{2}\mu_{1}}. \tag{7}\] From Eqs. (5a) and (5b), we obtain \[[[X_{\mu_{1}},X^{\dagger}_{\mu_{2}}],X^{\dagger}_{\mu_{3}}]=-\sum_{\mu^{\prime}}Y(\mu_{1},\mu_{2},\mu_{3},\mu^{\prime})X^{\dagger}_{\mu^{\prime}}, \tag{8}\] where the definition of \(Y(\mu_{1}\mu_{2}\mu_{3}\mu_{4})\) is \[Y(\mu_{1}\mu_{2}\mu_{3}\mu_{4})=\sum_{q}\Gamma^{\mu_{1}\mu_{2}}_{q}\Gamma^{\mu_{3}\mu_{4}}_{q}=\sum_{\alpha\beta}\Gamma^{\mu_{1}\mu_{2}}_{\alpha\beta}\Gamma^{\mu_{3}\mu_{4}}_{\alpha\beta}. \tag{9}\] The following relation holds: \[Y(\mu_{1}\mu^{\prime}_{1}\mu^{\prime}_{2}\mu_{2})=Y(\mu_{2}\mu^{\prime}_{1}\mu^{\prime}_{2}\mu_{1})=Y(\mu_{1}\mu^{\prime}_{2}\mu^{\prime}_{1}\mu_{2})=Y(\mu^{\prime}_{1}\mu_{1}\mu_{2}\mu^{\prime}_{2}). \tag{10}\]

### Multi-phonon and multi-boson state vectors

We divide the phonon excitation modes \(\{\mu\}\) into two groups, \(\{t\}\) and \(\{\overline{t}\}\), and prepare the multi-phonon state vectors, \[|N;t\rangle\rangle=|t_{1},t_{2},\cdots,t_{N}\rangle\rangle=X^{\dagger}_{t_{1}}X^{\dagger}_{t_{2}}\cdots X^{\dagger}_{t_{N}}|0\rangle\quad(0\leq N\leq N_{max}). \tag{11}\] \(\{t\}\) usually consists of the collective modes and, if necessary, some non-collective modes, selected by the small amplitude approximation. We treat not only these cases but also the case where all modes are adopted, that is, \(\{t\}=\{\mu\}\). Next we introduce boson creation and annihilation operators, \(b^{\dagger}_{t}\) and \(b_{t^{\prime}}\), having the same indices as those of the phonons, \(X^{\dagger}_{t}\) and \(X_{t^{\prime}}\): \[[b_{t},b^{\dagger}_{t^{\prime}}]=\delta_{t,t^{\prime}}. \tag{12}\] The multi-boson states, \[|N;t))=|t_{1},t_{2},\cdots,t_{N}))=b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}\cdots b^{\dagger}_{t_{N}}|0), \tag{13}\] are orthogonal to one another and are normalized by their norms, \[{\cal N}_{B}(N;t)=((N;t|N;t)), \tag{14}\] as \[|N;t)=|t_{1},t_{2},\cdots,t_{N})={\cal N}_{B}(N;t)^{-1/2}|N;t)). \tag{15}\] These are the so-called ideal boson state vectors.

## 3 Boson mapping

This section deals with boson mapping.
We introduce a norm operator and construct a mapping operator that can handle both Hermitian and non-Hermitian types, both with and without limiting the types and number of phonon excitation modes. The norm operator is defined as \[\hat{Z}=\sum_{N=0}^{N_{max}}\hat{Z}(N), \tag{16a}\] \[\hat{Z}(N) = \sum_{tt^{\prime}}|N,t\rangle\langle N;t|N;t^{\prime}\rangle(N;t^ {\prime}|\] \[= \sum_{t_{1}\leq\cdots\leq t_{N}}\sum_{t^{\prime}_{1}\leq\cdots \leq t^{\prime}_{N}}|t_{1}\cdots t_{N}\rangle\langle t_{1}\cdots t_{N}|t^{ \prime}_{1}\cdots t_{N}\rangle(t^{\prime}_{1}\cdots t^{\prime}_{N}|, \tag{16b}\] where \[|N;t\rangle=\mathcal{N}_{B}(N;t)^{-1/2}|N;t\rangle\rangle. \tag{17}\] This norm operator is a modified one of the previously introduced [13] by adding the restriction \(N_{max}\), which allows us to constrain the number of phonon excitations and corresponding boson excitations. \(\hat{Z}(N)\) satisfies the eigenequation, \[\hat{Z}(N)|N;a)=z_{a}(N)|N;a), \tag{18}\] where \(|N;a)\) is a normalized eigenvector and \(z_{a}(N)\) is an eigenvalue. The eigenvalues \(z_{a}(N)\) become positive or zero and \(a_{0}\) represents \(z_{a_{0}}(N)=0\). Using these, we obtain the spectral decomposition of \(\hat{Z}(N)\) as \[\hat{Z}(N)=\sum_{a\neq a_{0}}|N;a)z_{a}(N)(N;a|. \tag{19}\] Functions of \(\hat{Z}(N)\) are defined by \[f(\hat{Z}(N))=\sum_{a\neq a_{0}}|N;a)f(z_{a}(N))(N;a|, \tag{20}\] and we obtain \[f(\hat{Z})=\sum_{N=0}^{N_{max}}f(\hat{Z}(N)). \tag{21}\] Introducing \({u_{a}^{t}(N)=(N;t|N;a)}\), then Eq. (18) becomes the eigenequation of the multi-phonon norm matrix, \[\sum_{t^{\prime}}\langle N;t|N;t^{\prime}\rangle u_{a}^{t^{\prime}}(N)=z_{a}(N )u_{a}^{t}(N), \tag{22}\] The eigenvectors are orthonormalized as \[\sum_{t}u^{t}_{a}(N)u^{t}_{a^{\prime}}(N)=\delta_{a,a^{\prime}},\] (23a) and satisfy the completeness relations \[\sum_{a}u^{t}_{a}(N)u^{t^{\prime}}_{a}(N)=\delta_{t,t^{\prime}}. \tag{23b}\] Using the norm operator \(\hat{Z}\), we introduce the mapping operator \(U_{\xi}\) as \[U_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}\widetilde{U}, \tag{24}\] where \(\widetilde{U}\) is a mapping operator whose definition is as follows: \[\widetilde{U}=\sum_{N=0}^{N_{max}}\widetilde{U}(N), \tag{25a}\] \[\widetilde{U}(N) = \sum_{t}|N;t\rangle\langle N;t|\] (25b) \[= \sum_{t_{1}\leq t_{2}\leq\cdots\leq t_{N}}|t_{1}t_{2}\cdots t_{N} \rangle\langle t_{1}t_{2}\cdots t_{N}|,\] which satisfies the following relations, \[\widetilde{U}\widetilde{U}^{\dagger}=\hat{Z}, \tag{26a}\] \[\widetilde{U}(N)\widetilde{U}(N)^{\dagger}=\hat{Z}(N). \tag{26b}\] \(\widetilde{U}(N)\) is also expressed as \[\widetilde{U}(N)=\sum_{a\neq a_{0}}z_{a}^{\frac{1}{2}}|N;a\rangle\langle N;a|, \tag{27}\] where \[|N;a\rangle=z_{a}^{-\frac{1}{2}}(N)\sum_{t}u^{t}_{a}(N)|N;t\rangle\qquad(a\neq a _{0}). \tag{28}\] \(|N;a\rangle\) become orthonormalized basis vectors of the fermion subspace spanned by \(|N;t\rangle\). Using \(|N:a\rangle\) and \(|N:a\rangle\), the mapping operator is expressed as \[U_{\xi}=\sum_{N=0}^{N_{max}}U_{\xi}(N);\quad U_{\xi}(N)=\sum_{a\neq a_{0}}z_{ a}(N)^{\xi}|N;a\rangle\langle N;a|. \tag{29}\] The following relations are satisfied: \[U^{\dagger}_{-\xi}U_{\xi}=\hat{T}_{F},\qquad U_{\xi}U^{\dagger}_{-\xi}=\hat{T }_{B}, \tag{30}\] where \[\hat{T}_{F}=\sum_{N=0}^{N_{max}}\hat{T}_{F}(N);\qquad\hat{T}_{F}(N)=\sum_{a\neq a _{0}}|N;a\rangle\langle N;a|, \tag{31}\] \[\hat{T}_{B}=\sum_{N=0}^{N_{max}}\hat{T}_{B}(N);\quad\hat{T}_{B}(N)=\sum_{a\neq a _{0}}|N;a)(N;a|. 
\tag{32}\] In addition, we define the following operators, \[\breve{1}_{B}=\sum_{N=0}^{N_{max}}\hat{1}_{B}(N);\qquad\hat{1}_{B}(N)=\sum_{t }|N;t)(N;t|. \tag{33}\] If \(\hat{Z}(N)\) has even one zero eigenvalue, then \(\hat{T}_{B}(N)\neq\hat{1}_{B}(N)\) and hence \(\hat{T}_{B}\neq\breve{1}_{B}\). Otherwise, they match one another. The state vectors and operators of fermion space are mapped onto those of boson subspace as \[|\psi^{\prime})_{\xi}=U_{\xi}|\psi^{\prime}),\qquad_{-\xi}(\psi|=\langle\psi| U_{-\xi}^{\dagger}, \tag{34a}\] \[(O_{F})_{\xi}=U_{\xi}O_{F}U_{-\xi}^{\dagger}. \tag{34b}\] These satisfy the following relations: \[|\psi^{\prime})_{\xi}=\left\{{}_{\xi}(\psi^{\prime}|\right\}^{\dagger},\qquad _{-\xi}(\psi|=\left\{{}|\psi{}\rangle_{-\xi}\right\}^{\dagger}, \tag{35a}\] \[(O_{F})_{-\xi}=\left\{(O_{F}^{\dagger})_{\xi}\right\}^{\dagger}. \tag{35b}\] The mapping is of the Hermitian type when \(\xi=0\) and, in other cases, of the non-Hermitian type. A one-to-one correspondence exists between the fermion subspace projected by \(\hat{T}_{F}\) and the boson subspace by \(\hat{T}_{B}\). For the state vectors, \(|\psi\rangle\) and \(|\psi^{\prime}\rangle\), which belong to the fermion subspace projected by \(\hat{T}_{F}\), \[\langle\psi|O_{F}|\psi^{\prime}\rangle = \langle\psi|\hat{T}_{F}O_{F}\hat{T}_{F}|\psi^{\prime}\rangle \tag{36}\] \[= \langle\psi|U_{-\xi}^{\dagger}U_{\xi}\hat{O}_{F}U_{-\xi}^{\dagger }U|\psi^{\prime}\rangle\] \[= {}_{-\xi}(\psi|(O_{F})_{\xi}|\psi^{\prime})_{\xi},\] that is, the matrix element of the fermion subspace becomes equal to that of the corresponding boson subspace. The boson subspace corresponding to the fermion subspace projected by \(\hat{T}_{F}\) is called the physical subspace, and the boson state vectors belonging to that space are called the physical state vector. The projection operator of the physical subspace is \(\hat{T}_{B}\) The relation \[{}_{\xi}(\psi|(O_{F})_{-\xi}|\psi^{\prime})_{-\xi}={}_{-\xi}(\psi|(O_{F})_{\xi}| \psi^{\prime})_{\xi} \tag{37}\] holds, therefore it is sufficient to treat the case \(\xi\geq 0\). The mapping of the product of the fermion operators does not generally result in the product of the mapped fermion operators. That is \[(O_{F}O^{\prime}_{F})_{\xi}\neq(O_{F})_{\xi}(O^{\prime}_{F})_{\xi}, \tag{38}\] and therefore, the commutation relations of the fermion operators are mapped as \[([O_{F},O^{\prime}_{F}])_{\xi}=(O_{F}O^{\prime}_{F})_{\xi}-(O^{\prime}_{F}O_{F })_{\xi}\neq[(O_{F})_{\xi},(O^{\prime}_{F})_{\xi}], \tag{39}\] while under the approximation \(O_{F}O^{\prime}_{F}\approx O_{F}\hat{T}_{F}O^{\prime}_{F}\), \[(O_{F}O^{\prime}_{F})_{\xi}\approx(O_{F})_{\xi}(O^{\prime}_{F})_{\xi}, \tag{40}\] and \[([O_{F},O^{\prime}_{F}])_{\xi}\approx[(O_{F})_{\xi},(O^{\prime}_{F})_{\xi}] \tag{41}\] hold. The conventional practical boson expansion methods use this approximation. If this approximation holds, it is sufficient to map the phonon and scattering operators, otherwise, it becomes necessary to obtain the mapping of the product of these fermion operators. We denote the mapping of \(\widetilde{U}\) as \[\widetilde{|\psi\rangle}=\widetilde{U}|\psi\rangle,\qquad\widetilde{(\psi|}= \langle\psi|\widetilde{U}^{\dagger}, \tag{42a}\] \[\widetilde{O_{F}}=\widetilde{U}O_{F}\widetilde{U}^{\dagger}. \tag{42b}\] The mapping of Eqs. 
(34) is expressed as \[|\psi^{\prime}\rangle_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}\widetilde{|\psi^{\prime }\rangle},\qquad{}_{-\xi}(\psi|=\widetilde{(\psi|}\hat{Z}^{-\xi-\frac{1}{2}}, \tag{43a}\] \[(O_{F})_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}\widetilde{O_{F}}\hat{Z}^{-\xi-\frac{1 }{2}}, \tag{43b}\] which makes it clear that the different treatment of the norm operator in the mapping operator produces another type of mapping. The mapping of Eqs. (34) is also expressed as \[|\psi^{\prime}\rangle_{\xi}=\hat{Z}^{\xi}|\psi^{\prime}\rangle_{0},\qquad{}_{ -\xi}(\psi|={}_{0}(\psi|\hat{Z}^{-\xi}, \tag{44a}\] \[(O_{F})_{\xi}=\hat{Z}^{\xi}(O_{F})_{0}\hat{Z}^{-\xi}, \tag{44b}\] The mapping of \(\xi=0\) being of the Hermitian type and that of \(\xi\neq 0\) being of the non-Hermitian type transform one another by the similarity transformation operator that becomes the power of the norm operator \(\hat{Z}\). ### The case where all the phonon excitation modes are adopted as the boson excitation modes Hereafter, we attach \((A)\) such as \(\hat{Z}^{(A)}\) in the case that we introduce boson operators corresponding to all phonon excitation modes for no confusion. We start with the following, \[\sum_{\mu_{1}\leq\cdots\leq\mu_{N}}|\mu_{1},\cdots,\mu_{N}\rangle \langle\mu_{1},\cdots,\mu_{N}|=\frac{1}{N!}\sum_{\mu_{1}\cdots,\mu_{N}}|\mu_{1},\cdots,\mu_{N}\rangle\rangle\langle\langle\mu_{1},\cdots,\mu_{N}| \tag{45}\] \[=\frac{1}{2^{N}N!}\sum_{\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_ {N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle\alpha_{1}\beta _{1}\cdots\alpha_{N}\beta_{N}|\] \[=\frac{(2N)!}{2^{N}N!}\sum_{\alpha_{1}\beta_{1}\leq\cdots\leq \alpha_{N}\beta_{N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle \alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}|,\] where \[|\alpha_{1}\beta_{1}\cdot\alpha_{N}\beta_{N}\rangle=a^{\dagger}_{\alpha_{1}}a^ {\dagger}_{\beta_{1}}\cdots a^{\dagger}_{\alpha_{N}}a^{\dagger}_{\beta_{N}}|0\rangle, \tag{46}\] and we use that the function \(f(t_{1},\cdots,t_{N}),\) which is completely symmetric for the argument, satisfies the following [19]: \[\sum_{t_{1}\leq\cdots\leq t_{N}}f(t_{1},\cdots,t_{N})=\sum_{t_{1},\cdots,t_{N }}\frac{{\cal N}_{B}(t_{1},\cdots,t_{N})}{N!}f(t_{1},\cdots,t_{N}). \tag{47}\] From above, we obtain the following relation, \[\sum_{\mu_{1}\leq\cdots\leq\mu_{N}}|\mu_{1},\cdots,\mu_{N}\rangle\langle\mu_{1 },\cdots,\mu_{N}|=(2N-1)!!\hat{1}^{(A)}_{F}(N), \tag{48}\] where \[\hat{1}^{(A)}_{F}(N)=\sum_{\alpha_{1}\beta_{1}\leq\cdots\leq\alpha_{N}\beta_ {N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle\alpha_{1} \beta_{1}\cdots\alpha_{N}\beta_{N}|;\quad(2N-1)!!=\frac{(2N)!}{2^{N}N!}. \tag{49}\] Let \({\bf Z}^{(A)}(N)\) be a matrix composed of the matrix element \(\langle\mu^{\prime}_{1},\cdots\mu^{\prime}_{N}|\mu_{1}\cdots\mu_{N}\rangle,\) we obtain, from this relation, \[{\bf Z}^{(A)}(N)^{2}=(2N-1)!!{\bf Z}^{(A)}(N), \tag{50}\] which indicates that the eigenvalues of this matrix are \((2N-1)!!\) or \(0.\) Zero eigenvalues appear even at \(N=2\)[13], and so do in the case \(N\geq 2.\) From this relation we obtain \[\hat{Z}^{(A)}(N)^{2}=(2N-1)!!\hat{Z}^{(A)}(N), \tag{51}\] and \[\left(\hat{Z}^{(A)}\right)^{2}=(2\hat{N}_{B}^{(A)}-1)!!Z^{(A)};\quad\hat{N}_{B}^{ (A)}=\sum_{\mu}b_{\mu}^{\dagger}b_{\mu}. \tag{52}\] The case N=2 in Eq. 
(50) is equivalent to the following relations [8], \[\sum_{\mu\mu^{\prime}}Y(\mu_{1}^{\prime}\mu\mu^{\prime}\mu_{2}^{\prime})Y(\mu_ {1}\mu\mu^{\prime}\mu_{2})=4((\mu_{1}^{\prime}\mu_{2}^{\prime}|\mu_{1}\mu_{2}) )-2Y(\mu_{1}^{\prime}\mu_{1}\mu_{2}\mu_{2}^{\prime}). \tag{53}\] We introduce the following operators, \[b_{\alpha\beta}=\sum_{\mu}\psi_{\mu}(\alpha\beta)b_{\mu},\quad b_{\alpha\beta }^{\dagger}=\sum_{\mu}\psi_{\mu}(\alpha\beta)b_{\mu}^{\dagger}, \tag{54}\] which satisfies the commutation relations, \[[b_{\alpha^{\prime}\beta^{\prime}},b_{\alpha\beta}^{\dagger}]=\delta_{\alpha^ {\prime}\alpha}\delta_{\beta^{\prime}\beta}-\delta_{\alpha^{\prime}\beta} \delta_{\beta^{\prime}\alpha}. \tag{55}\] Using these operators, \(\widetilde{U}^{(A)}(N)=\sum_{\mu}|N;\mu\rangle\langle N;\mu|\) are rewritten as \[\widetilde{U}^{(A)}(N)=\sqrt{(2N-1)!!}\sum_{\alpha_{1}<\beta_{1}<\cdots< \alpha_{N}<\beta_{N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}\ \langle\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}|, \tag{56}\] where \[|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}=\frac{1}{\sqrt{(2N-1)!!}}{ \sum_{P}}^{\prime}(-)^{P}b_{\alpha_{1}\beta_{1}}^{\dagger}\cdots b_{\alpha_{N }\beta_{N}}^{\dagger}|0), \tag{57}\] and \({\sum_{P}}^{\prime}\) means the summation so that the states on the left side become totally antisymmetric [4]. From these, we obtain \[\hat{Z}^{(A)}(N) = \widetilde{U}^{(A)(N)}\widetilde{U}^{(A)}(N)^{\dagger} \tag{58}\] \[= (2N-1)!!\sum_{\alpha_{1}<\beta_{1}<\cdots<\alpha_{N}\beta_{N}}| \alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{MM}(\alpha_{1}\beta_{1}\cdots \alpha_{N}\beta_{N}|,\] which is the spectral decomposition of \(\hat{Z}^{(A)}(N)\) and indicates that the eigenvectors of the eigenvalue \((2N-1)!!\) are \(|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}\). We also obtain \[\hat{T}_{F}^{(A)} = \sum_{N=0}^{N_{max}}\hat{T}_{F}^{(A)}(N),\] \[\hat{T}_{F}^{(A)}(N) = \hat{1}_{F}(N), \tag{59}\] \[\hat{1}_{F}(N) = \sum_{\alpha_{1}<\beta_{1}<\cdots\alpha_{N}\beta_{N}}|\alpha_{1} \beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle\alpha_{1}\beta_{1}\cdots \alpha_{N}\beta_{N}|,\] \[\hat{T}_{B}^{(A)} = \sum_{N=0}^{N_{max}}\hat{T}_{B}(N), \tag{60}\] \[\hat{T}_{B}^{(A)}(N) = \sum_{\alpha_{1}<\beta_{1}<\cdots\alpha_{N}\beta_{N}}|\alpha_{1} \beta_{1}\cdots\alpha_{N}\beta_{N})_{MM}(\alpha_{1}\beta_{1}\cdots\alpha_{N} \beta_{N}|.\] \(\hat{Z}^{(A)}\) is written as \[\hat{Z}^{(A)}=(2\hat{N}_{B}^{(A)}-1)!!\hat{T}_{B}^{(A)}. \tag{61}\] The mapping operator is given as \[U_{\xi}^{(A)} = \sum_{N=0}^{N_{max}}U_{\xi}(N), \tag{62}\] \[U_{\xi}^{(A)}(N) = \{(2N-1)!!\}^{\xi}\sum_{\alpha_{1}<\beta_{1}<\cdots<\alpha_{N}< \beta_{N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}\ \langle\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}|.\] If we set \(\xi=0\) and \(N_{max}\rightarrow\infty\), this mapping becomes the MYT mapping [4], from which we obtain the boson expansions of Holstein and Primakoff, and if we take \(\xi=\pm 1\), they become mapping operators for the Dyson boson expansions [5]. Taking \(N_{max}\rightarrow\infty\), \(U_{\xi}^{(A)}\) maps the whole fermion space that consists of even numbers of quasi-particles to the boson subspace. ### The case where the maximum phonon excitation number is 1 In the case where the maximum phonon excitation number is 1, \(|0\rangle\) and \(|\mu\rangle\) become orthonormal bases of the fermion space of even-quasi-particle excitations up to the two-quasi-particle excitations, which correspond to \(|0\rangle\) and \(|\mu\rangle\), respectively. 
The mapping operator becomes as \[U_{\xi}=\widetilde{U}=|0\rangle\langle 0|+\sum_{\mu}|\mu\rangle\langle\mu|, \tag{63}\] and \(\hat{Z}=\breve{1}_{B}\). As a result, the mapping operator has no dependence on \(\xi\), and the mapping becomes of the Hermitian type. The projection operator onto the fermion subspace to be mapped is given by \[\hat{T}_{F}=|0\rangle\langle 0|+\sum_{\mu}|\mu\rangle\langle\mu|.\] (64a) The projection operator onto the physical subspace, which has a one-to-one correspondence to the fermion subspace, becomes as \[\hat{T}_{B}=\breve{1}_{B}. \tag{64b}\] This indicates that the ideal boson states are the physical state vectors, with one-to-one correspondences to the fermion state vectors. The following relations, \[X_{\mu}U^{\dagger}_{-\xi}=|0\rangle(\mu|=U^{\dagger}_{-\xi}b_{t}\,\breve{1}_{B}, \tag{65a}\] \[U_{\xi}X^{\dagger}_{\mu}=|\mu\rangle\langle 0|=\breve{1}_{B}b^{ \dagger}_{\mu}U_{\xi},\] (65b) \[B_{q}U^{\dagger}_{-\xi}=\sum_{\mu^{\prime}}\sum_{\mu}\Gamma^{\mu^{\prime}\mu}_ {q}|\mu\rangle(\mu^{\prime}|, \tag{65c}\] hold, and we obtain \[(X_{\mu})_{\xi}=|0\rangle(\mu|=\breve{1}_{B}(X_{\mu})_{B}\breve{1}_{B}=(X_{\mu })_{B}\breve{1}_{B},\quad(X_{\mu})_{B}=b_{\mu}, \tag{66a}\] \[(X^{\dagger}_{\mu})_{\xi}=|\mu\rangle(0|=\breve{1}_{B}(X^{\dagger}_{\mu})_{B }\breve{1}_{B}=\breve{1}_{B}(X^{\dagger}_{\mu})_{B},\quad(X^{\dagger}_{\mu})_ {B}=b^{\dagger}_{\mu},\] (66b) \[(B_{q})_{\xi}=\sum_{\mu^{\prime}}\sum_{\mu}\Gamma^{\mu^{\prime}\mu}_{q}|\mu \rangle(\mu^{\prime}|=\breve{1}_{B}(B_{q})_{B}\breve{1}_{B}=\breve{1}_{B}(B_{ q})_{B}=(B_{q})_{B}\breve{1}_{B},\] (66c) \[(B_{q})_{B}=\sum_{\mu\mu^{\prime}}\Gamma^{\mu^{\prime}\mu}_{q}b^{ \dagger}_{\mu}b_{\mu^{\prime}}.\] The product of the operators becomes as follows: \[(O_{F}X_{\mu})_{\xi}=(O_{F})_{\xi}(X_{\mu})_{\xi}=\breve{1}_{B}(O_{F})_{B} \breve{1}_{B}(X_{\mu})_{B}\breve{1}_{B}=\breve{1}_{B}(O_{F})_{B}(X_{\mu})_{B} \breve{1}_{B}, \tag{67a}\] \[(X^{\dagger}_{\mu}O_{F})_{\xi}=(X^{\dagger}_{\mu})_{\xi}(O_{F})_{\xi}=\breve{1} _{B}(X^{\dagger}_{\mu})_{B}\breve{1}_{B}(O_{F})_{B}\breve{1}_{B}=\breve{1}_{ B}(X^{\dagger}_{\mu})_{B}(O_{F})_{B}\breve{1}_{B}, \tag{67b}\] therefore we can obtain the mapping of the product of \(X^{\dagger}_{\mu}\),\(X_{\mu}\), and \(B_{q}\) by arranging them in normal order. The commutation relations of \((X^{\dagger}_{\mu})_{B}\), \((X_{\mu})_{B}\), and \((B_{q})_{B}\) become as follows: \[[(X_{\mu})_{B},(X^{\dagger}_{\mu^{\prime}})_{B}]=\delta_{\mu,\mu^{\prime}} \tag{68a}\] \[[(B_{q})_{B},(X^{\dagger}_{\mu})_{B}]=\sum_{\mu^{\prime}}\Gamma^{\mu\mu^{ \prime}}_{q}(X^{\dagger}_{\mu^{\prime}})_{B}.\] (68b) \[[(X_{\mu})_{B},(B_{q})_{B}]=\sum_{\mu^{\prime}}\Gamma^{\mu^{ \prime}\mu}_{q}(X_{\mu^{\prime}})_{B}, \tag{68c}\] which are equal to the results of the boson approximation. From the above, when the maximum number of phonons is 1, by arranging the phonon creation and annihilation operators and the scattering operators in normal order and replacing them with \((X^{\dagger}_{\mu})_{B}\), \((X_{\mu})_{B}\), and \((B_{q})_{B}\), respectively, then the fermion subspace is completely mapped onto the boson subspace projected by \(\breve{1}_{B}\). In this way, NOM establishes the boson approximation as the boson mapping whose maximum phonon excitation number is 1. ## 4 Boson expansions ### Formulae for the boson expansions We give here the formulae used to obtain the boson expansions of the mapped fermion operators. 
We utilize \[\begin{array}{rcl}\widetilde{U}(N)&=&\sum_{t_{1}\leq t_{2}\leq\cdots\leq t_{N} }|t_{1}t_{2}\cdots t_{N}\rangle\langle t_{1}t_{2}\cdots t_{N}|\\ &=&\sum_{t_{1}t_{2}\cdots t_{N}}\frac{\mathcal{N}_{B}(t_{1}t_{2} \cdots t_{N})}{N!}|t_{1}t_{2}\cdots t_{N}\rangle\langle t_{1}t_{2}\cdots t_{N}| \\ &=&\sum_{t_{1}t_{2}\cdots t_{N}}\frac{1}{N!}|t_{1}t_{2} \cdots t_{N}\rangle)\langle\langle t_{1}t_{2}\cdots t_{N}|\end{array} \tag{69}\] and obtain the following series of formulae: \[\widetilde{U}(N)X_{t^{\prime}}=(X_{t^{\prime}})_{D}\widetilde{U}(N+1)\quad(N \geq 0), \tag{70a}\] \[\begin{array}{rcl}\widetilde{U}(1)X_{t}^{\dagger}&=&(X_{t}^{\dagger})_{D} \widetilde{U}(0),\\ \widetilde{U}(N+1)X_{t}^{\dagger}&=&(X_{t}^{\dagger})_{D} \widetilde{U}(N)\\ &&-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y( tt_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}\widetilde{U}(N -1)X_{\vec{t}_{1}^{\prime}}\quad(N\geq 1),\end{array}\] (70b) \[\begin{array}{rcl}\widetilde{U}(0)B_{q}&=&0,\\ \widetilde{U}(N)B_{q}&=&(B_{q})_{D}\widetilde{U}(N)+\sum_{t} \sum_{\vec{t}^{\prime}}\Gamma_{q}^{\vec{t}^{\prime}t}b_{\vec{t}}^{\dagger} \widetilde{U}(N-1)X_{\vec{t}^{\prime}}\quad(N\geq 1),\end{array}\] (70c) \[\begin{array}{rcl}\widetilde{U}(1)X_{\bar{t}}^{\dagger}&=&0,\\ \widetilde{U}(N+1)X_{\bar{t}}^{\dagger}&=&-\frac{1}{2}\sum_{t_{1}t_{2}} \sum_{t_{1}^{\prime}}Y(\bar{t}t_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}\widetilde{U}(N)\\ &&-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y( \bar{t}t_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger} \widetilde{U}(N-1)X_{\vec{t}_{1}^{\prime}}\quad(N\geq 1),\end{array} \tag{70d}\] where \[(X_{t^{\prime}})_{D}=b_{t^{\prime}}, \tag{71a}\] \[(X_{t}^{\dagger})_{D}=b_{t}^{\dagger}-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{t_{1} ^{\prime}}Y(tt_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}b_{t_{1}^{\prime}},\] (71b) \[(B_{q})_{D}=\sum_{t}\sum_{t^{\prime}}\Gamma_{q}^{t^{\prime}t}b_{ \vec{t}}^{\dagger}b_{t^{\prime}}. \tag{71c}\] Eqs. (71) are the same as the boson expansions derived by DBET. \(\left(B_{\bar{q}}\right)_{D}^{\dagger}=(B_{q})_{D}\) holds. From these formulae, we obtain \[\widetilde{X_{t^{\prime}}}(N)=(X_{t^{\prime}})_{D}\hat{Z}(N+1)\qquad(N\geq 0), \tag{72a}\] \[\widetilde{X_{t}^{\dagger}}(0) = (X_{t}^{\dagger})_{D}\hat{Z}(0)\] \[\widetilde{X_{t}^{\dagger}}(N) = (X_{t}^{\dagger})_{D}\hat{Z}(N)-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_ {\vec{t}_{1}^{\prime}}Y(tt_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_ {t_{2}}^{\dagger}\widetilde{X_{t_{1}^{\prime}}}(N-1)\qquad(N\geq 1),\] (72b) \[\widetilde{B_{q}}(0) = 0,\] \[\widetilde{B_{q}}(N) = (B_{q})_{D}\hat{Z}(N)+\sum_{t}\sum_{\vec{t}^{\prime}}\Gamma_{q}^ {\vec{t}^{\prime}t}b_{t}^{\dagger}\widetilde{X}_{\vec{t}^{\prime}}(N-1)\qquad (N\geq 1),\] (72c) \[\widetilde{X_{\vec{t}}^{\dagger}}(0) = 0,\] \[\widetilde{X_{\vec{t}}^{\dagger}}(N) = -\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y(\vec{t }t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{ \prime}}\hat{Z}(N)\] \[- \frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y(\vec{t }t_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger} \widetilde{X_{t_{1}^{\prime}}}(N-1)\qquad(N\geq 1),\] where we use the following diffinitions: \(\widetilde{X_{\mu}}(N)=\widetilde{U}(N)X_{\mu}\widetilde{U}(N+1)^{\dagger}, \widetilde{B_{q}}(N)=\widetilde{U}(N)B_{q}\widetilde{U}(N)^{\dagger}\). 
\(\widetilde{X_{\mu}^{\dagger}}(N)=\left(\widetilde{X_{\mu}}(N)\right)^{\dagger}\). \(\widetilde{B_{q}^{\dagger}}(N)=\left(\widetilde{B}_{q}(N)\right)^{\dagger}\) holds. We can obtain the boson expansion of \(\hat{Z}(N)\) by using \[\hat{Z}(N)=\frac{1}{N}\sum_{t}(X_{t}^{\dagger})_{D}\hat{Z}(N-1)b_{t}-\frac{1} {2N}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y(t_{1}^{\prime}t_{1}t_{2}\vec {t}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}\widetilde{X_{\vec{t}^{ \prime}}}(N-2)b_{t_{1}^{\prime}}\quad(N\geq 2), \tag{73}\] which is derived by applying \[\widetilde{U}^{\dagger}(N)=\frac{1}{N}\sum_{t}X_{t}^{\dagger}\widetilde{U}^{ \dagger}(N-1)b_{t}\qquad(N\geq 1) \tag{74}\] obtained from Eq. (69) to Eq. (26a), expressing \(\hat{Z}(N)\) as \[\hat{Z}(N)=\frac{1}{N}\sum_{t}\widetilde{X_{t}^{\dagger}}(N-1)b_{t}\quad(N \geq 1), \tag{75}\] and substituting Eqs. (72b) into this. \(\hat{Z}(N)\) up to \(N=2\) are as follows, \[\hat{Z}(0)=\hat{1}_{B}(0), \tag{76a}\] \[\hat{Z}(1)=\hat{1}_{B}(1), \tag{76b}\] \[\hat{Z}(2)=\hat{1}_{B}(2)\left(\hat{1}_{B}-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1} ^{\prime}t_{2}^{\prime}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}\right)\hat{1}_{ B}(2). \tag{76c}\] We use the following equation for the case \(N=2\), \[b_{t}\hat{1}_{B}(N)=\hat{1}_{B}(N-1)b_{t}. \tag{77}\] Once \(\widetilde{X_{\tilde{l}^{\prime}}}(N)\), \(\widetilde{X_{\tilde{l}^{\prime}}}^{\dagger}(N)\), and \(\hat{Z}(N)\) are obtained from Eq. (72d), the Hermitian conjugate of Eq. (72d), and Eq. (73), \(\widetilde{X_{t^{\prime}}}(N)\), \(\widetilde{X_{t}}^{\dagger}(N)\), and \(\widetilde{B_{q}}(N)\) are given by substituting these into Eqs. (72). ### On the use of ideal boson state vectors The effect of the Pauli exclusion principle is reflected generally in the boson operators and the boson state vectors by the mapping. While, if we restrict the types of phonon excitation modes and the number of phonon excitations so that zero eigenvalues do not appear in the norm matrices of the multiphonon state vectors, then \[\hat{T}_{B}=\breve{1}_{B}, \tag{78}\] holds. In this case, the ideal boson state vectors \(|N;t)\), which do not bear the effect of the Pauli exclusion principle, become the physical state vectors. As a result, all effects of the Pauli exclusion principle are fully reflected in the mapped operators. In order that the boson expansion method is practical, the phonon excitation modes and the maximum number of excitations should be chosen so that the ideal boson state vectors become the physical state vectors [13]. ### Boson expansions as a small parameter expansion In this subsection, we obtain the norm operator and the other mapped operators in the boson expansion being a small parameter expansion, where \(\Gamma_{q}^{\mu\mu^{\prime}}\) are regarded as of the order of magnitude \(O(\Gamma)\). 3.1 On the conditions for being a small parameter expansion and the evaluation of the order of magnitude of each term of expansions For realizing a small parameter expansion where the boson approximation becomes the zeroth order approximation, \(\hat{Z}\approx\breve{1}_{B}\) must hold as the zeroth order approximation. For that purpose, it is necessary to limit the type of mode and the number of phonon excitations in the mapping operator so that zero eigenvalues do not appear in the norm matrices of the multiphonon state vectors. This is the same condition for the ideal boson state vectors to become physical. 
This condition is necessary but not sufficient, however. Denoting the matrix each element of which is \(\langle t_{1}^{\prime}t_{2}^{\prime}|t_{1}t_{2}\rangle\) as \({\bf Z}(N)\), \({\bf Z}^{(A)}(N)\) is expressed as \[{\bf Z}^{(A)}(N)=\left(\begin{array}{cc}{\bf Z}(N)&{\bf W}(N)\\ {\bf W}(N)^{T}&{\bf Z}^{\prime}(N).\end{array}\right), \tag{79}\] As shown in the appendix, if \({\bf W}(2)={\bf 0}(2)\), i.e. \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\), then \({\bf W}(N)={\bf 0}(N)\) for \(N\geq 3\). Hence, in this case, we obtain \[{\bf Z}^{(A)}(N)=\left(\begin{array}{cc}{\bf Z}(N)&{\bf 0}(N)\\ {\bf 0}(N)^{T}&{\bf Z}^{\prime}(N)\end{array}\right). \tag{80}\] Substituting this into Eq.(50), \[{\bf Z}(N)^{2}=(2N-1)!!{\bf Z}(N), \tag{81a}\] \[{\bf Z}^{\prime}(N)^{2}=(2N-1)!!{\bf Z}^{\prime}(N), \tag{81b}\] are obtained. From Eq. (81a), we obtain \[\hat{Z}(N)^{2}=(2N-1)!!\hat{Z}(N), \tag{82}\] from which we find \[\hat{Z}(N)=(2N-1)!!\hat{T}_{B}(N)=(2\hat{N}_{B}-1)!!\hat{T}_{B}(N),\quad\hat{N }_{B}=\sum_{t}b_{t}^{\dagger}b_{t}, \tag{83a}\] \[\hat{Z}=(2\hat{N}_{B}-1)!!\hat{T}_{B}. \tag{83b}\] \(\{t\}\) and \(N_{max}\) are set so that no zero eigenvalue appears in \({\bf Z}(N)\). It is \({\bf Z}^{\prime}(N)\) that has zero eigenvalues. Therefore the eigenvalues of \({\bf Z}(N)\) are only \((2N-1)!!\). \(\hat{T}_{B}(N)=\hat{1}_{B}(N)\), and then \(\hat{T}_{B}=\hat{1}_{B}\) hold. Even so, \(\hat{Z}\approx\hat{1}_{B}\) does not hold as the zeroth order approximation, and the boson expansions can not be obtained as the small parameter expansion. \({\bf W}(2)\) must not be a zero matrix if the small parameter expansion holds. We investigate the case of N=2 to establish the small parameter expansion and an order evaluation of the terms in them. Substituting Eq. (79) into Eq. (50) and taking \(N=2\), we obtain \[{\bf Z}(2)^{2}+{\bf W}(2){\bf W}(2)^{T}=3{\bf Z}(2), \tag{84a}\] \[{\bf Z}(2){\bf W}(2)+{\bf W}(2){\bf Z}^{\prime}(2)=3{\bf W}(2),\] (84b) \[{\bf W}(2)^{T}{\bf W}(2)+{\bf Z}^{\prime}(2)^{2}=3{\bf Z}^{\prime}(2), \tag{84c}\] from which we derive \[\sum_{\mu\mu^{\prime}}Y(t^{\prime}_{1}\mu\mu^{\prime}t^{\prime}_{2})Y(t_{1}\mu \mu^{\prime}t_{2})=4((t^{\prime}_{1}t^{\prime}_{2}|t_{1}t_{2}))-2Y(t^{\prime}_ {1}t_{1}t_{2}t^{\prime}_{2}), \tag{85a}\] \[\sum_{\mu\mu^{\prime}}Y(t^{\prime}_{1}\mu\mu^{\prime}t^{\prime}_{2})Y(\bar{t}_ {1}\mu\mu^{\prime}\mu_{1})+2Y(t^{\prime}_{1}\bar{t}_{1}\mu_{1}t^{\prime}_{2}) =0,\] (85b) \[\sum_{\mu\mu^{\prime}}Y(\bar{t}^{\prime}_{1}\mu\mu^{\prime}\mu^{\prime}_{1})Y( \bar{t}_{1}\mu\mu^{\prime}\mu_{1})=4((\bar{t}^{\prime}_{1}\mu^{\prime}_{1}| \bar{t}_{1}\mu_{1}))-2Y(\bar{t}^{\prime}_{1}\bar{t}_{1}\mu_{1}\mu^{\prime}_{1}), \tag{85c}\] Since \({\bf Z}^{(A)}(2)\) has zero eigenvalues [13], these relations include some parts where the small parameter expansion breaks down. \(Y(\mu_{1}\mu_{2}\mu_{3}\mu_{4})\sim O(\Gamma^{2})\) should hold. Therefore if \(\mu\)-sums do not affect the evaluation of the order of magnitude, these equations have discrepancies in the order of magnitude of each term. The naive evaluation does not hold, and we must correctly evaluate the case where we take \(\mu\)-sum. We choose \(\{t\}\) so that the small parameter expansion holds in any situation. \(\sum_{tt^{\prime}}Y(t_{1}tt^{\prime}t_{2})Y(t^{\prime}_{1}tt^{\prime}t^{ \prime}_{2})\) should, then, be estimated as \(O(\Gamma^{4})\). To find out more about \(\bar{t}\)-sum, we take up \(\sum_{\mu}Y(t_{1}t_{2}t_{3}\mu)\Gamma^{\mu t_{4}}_{q}\). 
Because we choose \(\{t\}\) so that \(\sum_{t}Y(t_{1}t_{2}t_{3}t)\Gamma^{tt_{4}}_{q}\sim O(\Gamma^{3})\) hold, then we obtain \[\sum_{\mu}Y(t_{1}t_{2}t_{3}\mu)\Gamma^{\mu t_{4}}_{q}=\sum_{\bar{t}}Y(t_{1}t_ {2}t_{3}\bar{t})\Gamma^{\bar{t}t_{4}}_{q}+O(\Gamma^{3}). \tag{86}\] While \[\sum_{\mu}Y(t_{1}t_{2}t_{3}\mu)\Gamma^{\mu t_{4}}_{q}=\sum_{q^{\prime}q^{ \prime\prime}}\sum_{\alpha\beta\gamma}\varphi_{q}(\alpha\beta)\varphi_{q^{ \prime}}(\gamma\alpha)\varphi_{q^{\prime\prime}}(\gamma\alpha)(\Gamma^{t_{1}t _{2}}_{q^{\prime}}\Gamma^{t_{3}t_{4}}_{q^{\prime\prime}}+\Gamma^{t_{1}t_{3}} _{q^{\prime}}\Gamma^{t_{2}t_{4}}_{q^{\prime\prime}}), \tag{87}\] holds, which indicates that the order of the right-hand side is \(O(\Gamma^{2})\). Therefore the estimation of \(\bar{t}\)-sum should become as \[\sum_{\bar{t}}Y(t_{1}t_{2}t_{3}\bar{t})\Gamma^{\bar{t}t_{4}}_{q}\sim O(\Gamma^ {2}). \tag{88}\] This indicates that if we take a single \(\bar{t}\)-sum, we should estimate its magnitude by one order lower. Based on this evaluation, we evaluate \[\sum_{t\bar{t}}Y(t_{1}t\bar{t}t_{2})Y(t^{\prime}_{1}t\bar{t}t^{\prime}_{2}) \sim O(\Gamma^{3}), \tag{89}\] By applying these order evaluations to Eqs. (85), we obtain \[\sum_{\vec{t}\vec{t}^{\prime}}Y(\mu_{1}^{\prime}\vec{t}\vec{t}^{\prime}\mu_{2}^{ \prime})Y(\mu_{1}\vec{t}\vec{t}^{\prime}\mu_{2})=4((\mu_{1}^{\prime}\mu_{2}^{ \prime}|\mu_{1}\mu_{2}))-2Y(\mu_{1}^{\prime}\mu_{1}\mu_{2}\mu_{2}^{\prime})+O( \Gamma^{3}), \tag{90a}\] \[\sum_{\vec{t}\vec{t}^{\prime}}Y(t_{1}^{\prime}\vec{t}\vec{t}^{\prime}t_{2}^{ \prime})Y(\bar{t}_{1}\bar{t}\vec{t}^{\prime}\mu_{1})+2Y(t_{1}^{\prime}\bar{t}_ {1}\mu_{1}t_{2}^{\prime})=O(\Gamma^{3}), \tag{90b}\] \[\sum_{\vec{t}\vec{t}^{\prime}}Y(\vec{t}_{1}^{\prime}\vec{t}\vec{t}^{\prime}\mu _{1}^{\prime})Y(\bar{t}_{1}\bar{t}\vec{t}^{\prime}\mu_{1})=4((\vec{t}_{1}^{ \prime}\mu_{1}^{\prime}|\bar{t}_{1}\mu_{1}))-2Y(\vec{t}_{1}^{\prime}\bar{t}_{ 1}\mu_{1}\mu_{1}^{\prime})+O(\Gamma^{3}). \tag{90c}\] We can identify that the parts where the double \(\bar{t}\)-sums are performed across two coefficients are responsible for the failure of the small parameter expansion. Eqs. (90) become conditions for the small parameter expansion to hold. #### 4.3.2 Boson expansions of mapped operators as the small parameter expansion Here we perform the boson expansions of the mapped operators as the small parameter expansion. Eq. (43b) indicates that we can derive the boson expansions of \((O_{F})_{\xi}\) from those of the norm operator \(\hat{Z}\) and \(\widetilde{O_{F}}\). We give the terms of the boson expansions up to the order of magnitude \(O(\Gamma^{4})\). From Eq. (72d), its Hermitian conjugate, and Eq. (73), we find the recurrence formulae for obtaining the boson expansions of \(\hat{Z}(N)\), \(\widetilde{X_{\vec{t}^{\prime}}}(N)\), and \(\widetilde{X_{\vec{t}}^{\dagger}}(N)\) up to the desired order of magnitude. These recurrence formulae generate no parts where double \(\bar{t}\)-sums are performed across two coefficients in the expansions, which makes it possible to avoid convergence difficulty caused by them. 
The recurrence formulae of \(\hat{Z}(N)\) are as follows: \[\hat{Z}(N)=\sum_{k=1}^{4}\hat{Z}^{(k)}(N)+O(\Gamma^{5});\quad\hat{Z}^{(k)}(N )\sim O(\Gamma^{k}), \tag{91a}\] \[\hat{Z}^{(0)}(N)=\frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(0)}(N-1)b_{t}, \tag{91b}\] \[\hat{Z}^{(1)}(N)=0, \tag{91c}\] \[\hat{Z}^{(2)}(N) = \frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(2)}(N-1)b_{t}\] \[-\frac{1}{2N}\sum_{t_{1}t_{2}}\sum_{\begin{subarray}{c}t_{1}^{ \prime}t_{2}^{\prime}\\ \end{subarray}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t _{2}}^{\dagger}b_{t_{1}^{\prime}}\hat{Z}^{(0)}(N-1)b_{t_{2}^{\prime}},\] \[\hat{Z}^{(3)}(N) = \frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(3)}(N-1)b_{t}\] \[+\frac{1}{4N}\sum_{t_{1}t_{2}t_{3}}\sum_{t_{1}^{\prime}t_{2}^{ \prime}t_{3}^{\prime}}\sum_{\bar{t}}Y(t_{3}^{\prime}t_{1}t_{2}\bar{t})Y(\bar{t} t_{1}^{\prime}t_{2}^{\prime}t_{3})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}\hat{Z}^{( 0)}(N-1)b_{t_{3}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{ \prime}},\] \[\hat{Z}^{(4)}(N)=\frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(4)}(N-1)b_{t}- \frac{1}{2N}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime}}Y(t_{2}^{\prime }t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{ \prime}}\hat{Z}^{(2)}(N-1)b_{t_{2}^{\prime}}\] \[-\frac{1}{8N}\sum_{t_{1}t_{2}t_{3}t_{4}}\sum_{t_{1}^{\prime}t_{2}^{\prime}t_{3 }^{\prime}t_{4}^{\prime}}\sum_{\bar{t}^{\prime}}Y(t_{4}^{\prime}t_{1}t_{2}\bar {t})Y(\bar{t}t_{2}^{\prime}t_{3}^{\prime}\bar{t}^{\prime})Y(\bar{t}^{\prime}t _{3}t_{4}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{3}}^{ \dagger}b_{t_{4}}^{\dagger}b_{t_{1}^{\prime}}\hat{Z}^{(0)}(N-1)b_{t_{2}^{ \prime}}b_{t_{3}^{\prime}}b_{t_{4}^{\prime}}. \tag{91f}\] The solution of Eq,(91b) is easily obtained as \[\hat{Z}^{(0)}(N)=\frac{1}{N!}\sum_{t_{1}t_{2}\cdots t_{N}}b_{t_{1}}^{\dagger} b_{t_{2}}^{\dagger}\cdots b_{t_{N}}^{\dagger}\hat{Z}(0)b_{t_{1}}b_{t_{2}} \cdots b_{t_{N}}=\hat{1}_{B}(N). \tag{92}\] Substituting it into Eq. (91d) and using Eq. (77), we obtain \[\hat{Z}^{(2)}(N) = \frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(2)}(N-1)b_{t}\] \[-\frac{1}{2N}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime} }Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}\hat{1}_{B}(N-2)b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}.\] For finding the solution, assuming it as \[\hat{Z}^{(2)}(N)=y^{(2)}(N)\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime }}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}\hat{1}_{B}(N-2)b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}, \tag{93b}\] substituting this into the recurrence formula, and using \[\sum_{t}b_{t}^{\dagger}\hat{1}_{B}(N)b_{t}=(N+1)\hat{1}_{B}(N+1), \tag{93c}\] we find \[y^{(2)}(N)=\frac{N-2}{N}y^{(2)}(N-1)-\frac{1}{2N}. \tag{93d}\] \(y^{(2)}(N)=-1/4\) is the solution, and we obtain \[\hat{Z}^{(2)}(N)=-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{ \prime}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}\hat{1}_{B}(N-2)b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}. \tag{93e}\] Following the same procedure in order, we can obtain the solution of the recurrence formulae for each order of magnitude. Organizing the solutions obtained in this way using Eq. 
(77), we finally obtain \[\hat{Z}(N)=\hat{\mathcal{Z}}\hat{1}_{B}(N)=\hat{1}_{B}(N)\hat{\mathcal{Z}}=\hat{1} _{B}(N)\hat{\mathcal{Z}}\hat{1}_{B}(N), \tag{94a}\] \[\hat{\mathcal{Z}}=\hat{\mathcal{Z}}^{(0)}+\hat{\mathcal{Z}}^{(2)}+ \hat{\mathcal{Z}}^{(3)}+\hat{\mathcal{Z}}^{(4)}+O(\Gamma^{5}),\] (94b) \[\hat{\mathcal{Z}}^{(0)}=\hat{1}_{B},\] (94c) \[\hat{\mathcal{Z}}^{(2)}=-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2 }^{\prime}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2 }}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}},\] (94d) \[\hat{\mathcal{Z}}^{(3)}=\frac{1}{12}\sum_{t_{1}t_{2}t_{3}}\sum_{t_{1}^{ \prime}t_{2}^{\prime}t_{3}^{\prime}}\sum_{\bar{t}}Y(t_{3}^{\prime}t_{1}t_{2} \bar{t})Y(\bar{t}t_{1}^{\prime}t_{2}^{\prime}t_{3})b_{t_{1}}^{\dagger}b_{t_{2 }}^{\dagger}b_{t_{3}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{ \prime}},\] (94e) \[\hat{\mathcal{Z}}^{(4)}= \hat{\mathcal{Z}}^{(4)}_{in}+\hat{\mathcal{Z}}^{(4)}_{out}\] (94f) \[\hat{\mathcal{Z}}^{(4)}_{out}=-\frac{1}{32}\sum_{t_{1}t_{2}t_{3}t_ {4}}\sum_{t_{1}^{\prime}t_{2}^{\prime}t_{3}^{\prime}}\sum_{\bar{t}\bar{t}^{ \prime}}Y(t_{4}^{\prime}t_{1}t_{2}\bar{t})Y(\bar{t}t_{2}^{\prime}t_{3}^{ \prime}\bar{t}^{\prime})Y(\bar{t}^{\prime}t_{3}t_{4}t_{1}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{3}}^{\dagger}b_{t_{4}^{\prime}}b_{t_{1}^{ \prime}}b_{t_{2}^{\prime}}b_{t_{3}^{\prime}}b_{t_{4}^{\prime}}.\] From these results, we can easily find the norm operator as \[\hat{Z} = \hat{\mathcal{Z}}\hat{1}_{B}=\hat{1}_{B}\hat{\mathcal{Z}}=\hat{1 }_{B}\hat{\mathcal{Z}}\hat{1}_{B}, \tag{95}\] \[\hat{\mathcal{Z}} =\hat{1}_{B}+\hat{\mathcal{Y}},\] \[\hat{\mathcal{Y}}=\hat{\mathcal{Y}}_{in}+\hat{\mathcal{Y}}_{out},\] \[\hat{\mathcal{Y}}_{in}=\hat{\mathcal{Z}}^{(2)}+\hat{\mathcal{Z}}^{( 4)}_{in}+O(\Gamma^{5}),\] \[\hat{\mathcal{Y}}_{out}=\hat{\mathcal{Z}}^{(3)}+\hat{\mathcal{Z}} ^{(4)}_{out}+O(\Gamma^{5}).\] The \(\xi\)-th power of \(\hat{Z}\) becomes \[\hat{Z}^{\xi}=\hat{\mathcal{Z}}^{\xi}\hat{1}_{B}=\check{1}_{B} \hat{\mathcal{Z}}^{\xi}, \tag{96}\] \[\hat{\mathcal{Z}}^{\xi}=\hat{1}_{B}+\xi\hat{\mathcal{Y}}+\frac{1} {2}\xi(\xi-1)\hat{\mathcal{Y}}^{2}+O(\Gamma^{6}).\] Once \(\hat{Z}(N)\) is known, we can obtain \(\widetilde{X_{\bar{t}}}(N)\) from the following recurrence formula derived from Eqs. 
(72d), \[\widetilde{X_{\bar{t}^{\prime}}}(N) = -\frac{1}{2}\sum_{t_{1}}\sum_{t_{1}^{\prime}t_{2}^{\prime}}Y(\bar {t}^{\prime}t_{1}^{\prime}t_{2}^{\prime}t_{1})\hat{Z}(N)b_{t_{1}}^{\dagger}b_{ t_{1}^{\prime}}b_{t_{2}^{\prime}}\] \[+\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime}t_ {3}^{\prime}}\sum_{\bar{t}}Y(\bar{t}^{\prime}t_{2}^{\prime}t_{3}^{\prime}\bar{t })Y(\bar{t}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{ 2}^{\prime}}\hat{Z}(N-1)b_{t_{2}^{\prime}}b_{t_{3}^{\prime}}\] \[+\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime}} \sum_{\bar{t}}\sum_{\bar{t}_{1}^{\prime}}Y(\bar{t}^{\prime}t_{1}^{\prime}t_{2}^ {\prime}\bar{t})Y(\bar{t}t_{1}t_{2}\bar{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t _{2}}^{\dagger}\widetilde{X_{\bar{t}_{1}^{\prime}}}(N-2)b_{t_{1}^{\prime}}b_{t _{2}^{\prime}},\] and find the solutions as \[\widetilde{X_{\vec{t}^{\prime}}}(N)=(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z} \hat{1}_{B}(N+1)=\hat{1}_{B}(N)(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z}, \tag{98a}\] \[(X_{\vec{t}^{\prime}})_{L}=(X_{\vec{t}^{\prime}})_{L}^{(2)}+(X_{\vec{t}^{\prime}} )_{L}^{(3)}+(X_{\vec{t}^{\prime}})_{L}^{(4)}+O(\Gamma^{5}),\] (98b) \[(X_{\vec{t}^{\prime}})_{L}^{(2)} = -\frac{1}{2}\sum_{t_{1}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2}^{ \prime}}Y(\vec{t}^{\prime}t_{1}^{\prime}t_{2}^{\prime}t_{1})b_{t_{1}}^{\dagger} b_{t_{1}^{\prime}}b_{t_{2}^{\prime}},\] \[(X_{\vec{t}^{\prime}})_{L}^{(3)} = \frac{1}{4}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2}^ {\prime}\vec{t}_{3}^{\prime}}\sum_{\vec{t}}Y(\vec{t}^{\prime}t_{2}^{\prime}t_{ 3}^{\prime}\hat{\vec{t}})Y(\vec{t}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger} b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{\prime}},\] (98c) \[(X_{\vec{t}^{\prime}})_{L}^{(4)} = [\hat{\cal Z}^{(2)},(X_{\vec{t}^{\prime}})_{L}^{(2)}]\] (98d) \[-\frac{1}{8}\sum_{t_{1}t_{2}t_{3}}\sum_{\vec{t}_{1}^{\prime}t_{2}^ {\prime}t_{3}^{\prime}}\sum_{\vec{t}\vec{t}^{\prime\prime}}Y(\vec{t}^{\prime}t _{1}^{\prime}t_{2}^{\prime}\hat{\vec{t}})Y(\vec{t}t_{1}t_{2}\vec{t}^{\prime \prime})Y(\vec{t}^{\prime\prime}t_{3}^{\prime}t_{4}^{\prime}t_{3})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{3}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{ \prime}}b_{t_{3}^{\prime}}^{\prime}b_{t_{4}^{\prime}},\] \[[\hat{\cal Z}^{(2)},(X_{\vec{t}^{\prime}})_{L}^{(2)}]=-\frac{1}{4 }\sum_{t_{1}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2}^{\prime}}Y(\vec{t}^{ \prime}tt^{\prime}t_{1})Y(t_{1}^{\prime}tt^{\prime}t_{2}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{1}^{\prime}}b_{t_{2}}\] \[-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2 }^{\prime}t_{3}^{\prime}}\sum_{\vec{t}}\left\{2Y(t_{1}t_{1}^{\prime}t_{2}^{ \prime}t)Y(t\vec{t}^{\prime}t_{2}t_{3}^{\prime})-Y(\vec{t}^{\prime}t_{1}^{ \prime}t_{2}^{\prime}t)Y(tt_{1}t_{2}t_{3}^{\prime})\right\}b_{t_{1}}^{\dagger} b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{\prime}}^{ \prime}.\] From these, we obtain \[\widetilde{X_{\vec{t}^{\prime}}}=\sum_{N=0}^{N_{max}-1}\widetilde{X_{\vec{t}^{ \prime}}}(N)=(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z}\hat{1}_{B}=\hat{1}_{B}^{(-1) }(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z}=\hat{1}_{B}(X_{\vec{t}^{\prime}})_{L} \hat{\cal Z}\hat{1}_{B}, \tag{99}\] where \[\hat{1}_{B}^{(\Delta N)}=\sum_{N=0}^{N_{max}+\Delta N}\hat{1}_{B}(N). \tag{100}\] \(\hat{1}_{B}^{(0)}=\hat{1}_{B}\) and \(\hat{1}_{B}\hat{1}_{B}^{(-1)}=\hat{1}_{B}^{(-1)}\hat{1}_{B}=\hat{1}_{B}^{(-1)}\) hold. Organizing the Hermitian conjugate of Eq. 
(98a), we find \[\begin{array}{rcl}\widetilde{X_{\vec{t}}^{\dagger}}(N)&=&((X_{\vec{t}})_{L} \hat{\cal Z}\hat{1}_{B}(N+1))^{\dagger}=\hat{1}_{B}(N+1)\hat{\cal Z}(X_{\vec{t }})_{L}^{\dagger}\\ &=&\hat{\cal Z}(X_{\vec{t}})_{L}^{\dagger}\hat{1}_{B}(N)=\left\{\hat{\cal Z}(X_ {\vec{t}})_{L}^{\dagger}\hat{\cal Z}^{-1}\right\}\hat{\cal Z}\hat{1}_{B}(N)\\ &=&(X_{\vec{t}}^{\dagger})_{L}\hat{\cal Z}\hat{1}_{B}(N),\end{array}\] (101a) where \[(X_{\vec{t}}^{\dagger})_{L}=\hat{\cal Z}(X_{\vec{t}})_{L}^{\dagger}\hat{\cal Z }^{-1}, \tag{101b}\] \[(X_{\vec{t}}^{\dagger})_{L}=(X_{\vec{t}}^{\dagger})_{L}^{(2)}+(X_{\vec{t }}^{\dagger})_{L}^{(3)}+(X_{\vec{t}}^{\dagger})_{L}^{(4)}+O(\Gamma^{5}),\] (101c) \[(X_{\vec{t}}^{\dagger})_{L}^{(2)} = ((X_{\vec{t}})_{L}^{(2)})^{\dagger},\quad(X_{\vec{t}}^{\dagger})_{L} ^{(3)}=((X_{\vec{t}})_{L}^{(3)})^{\dagger},\] \[(X_{\vec{t}}^{\dagger})_{L}^{(4)} = ((X_{\vec{t}})_{L}^{(4)})^{\dagger}+[\hat{\cal Z}^{(2)},((X_{\vec{t }})_{L}^{(2)})^{\dagger}]\] \[= -\frac{1}{8}\sum_{t_{1}t_{2}t_{3}t_{4}}\sum_{\vec{t}_{1}^{\prime}t _{2}^{\prime}t_{3}^{\prime}}\sum_{\vec{t}\vec{t}^{\prime\prime}}Y(\vec{t}t_{1}t_{2 }\vec{t}^{\prime})Y(\vec{t}^{\prime}t_{1}^{\prime}t_{2}^{\prime}\vec{t}^{\prime \prime})Y(\vec{t}^{\prime\prime}t_{3}t_{4}t_{3}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}b_{t_{3}}^{\dagger}b_{t_{4}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}}^{ \prime}b_{t_{3}}^{\dagger}b_{t_{4}}^{\prime}b_{t_{1}^{\prime}}^{\prime}b_{t_{2}}^{ \prime}b_{t_{3}^{\prime}}^{\prime},\] and obtain \[\widetilde{X_{\tilde{t}}^{\dagger}}=\sum_{N=0}^{N_{max}-1}\widetilde{X_{\tilde{t} }^{\dagger}}(N)=(X_{\tilde{t}}^{\dagger})_{L}\hat{\mathcal{Z}}\dot{\mathrm{I}}_ {B}^{(-1)}=\breve{\mathrm{I}}_{B}(X_{\tilde{t}}^{\dagger})_{L}\hat{\mathcal{Z }}=\breve{\mathrm{I}}_{B}(X_{\tilde{t}}^{\dagger})_{L}\hat{\mathcal{Z}}\dot{ \mathrm{I}}_{B}. \tag{102}\] In this way, we can obtain \((X_{\tilde{t}^{\prime}})_{L}\) and \((X_{\tilde{t}}^{\dagger})_{L}\) as infinite expansions. Dealing with the terms up to \(O(\Gamma^{4})\), we have found that \(\hat{Z}(N)=\hat{\mathcal{Z}}\hat{1}_{B}(N)=\hat{1}_{B}(N)\hat{\mathcal{Z}}\), \(\widetilde{X_{\tilde{t}^{\prime}}}(N)=\widetilde{X_{\tilde{t}^{\prime}}}\hat{ 1}_{B}(N+1)=\hat{1}_{B}(N)\widetilde{X_{\tilde{t}^{\prime}}}\), and \(\widetilde{X_{\tilde{t}}^{\dagger}}(N)=\hat{1}_{B}(N+1)\widetilde{X_{\tilde{t} }^{\dagger}}=\widetilde{X_{\tilde{t}}^{\dagger}}\hat{1}_{B}(N)\). Oppositely, assuming that these hold for any \(N\), and substituting them into Eq. (72d) and Eq. (73), we can find the relational expressions for \(\hat{\mathcal{Z}}\), \(\widetilde{X_{\tilde{t}^{\prime}}}\), and \(\widetilde{X_{\tilde{t}}^{\dagger}}\), and solve these for each order of magnitude, then we obtain the same results. This result suggests that the \(N\) dependency of these operators found up to \(O(\Gamma^{4})\) generally holds. Applying the above results to Eqs. 
(72) and summing up \(N\), we obtain \[\begin{array}{lcl}\widetilde{X_{t^{\prime}}}&=&(X_{t^{\prime}})_{L}\hat{ \mathcal{Z}}\dot{\mathrm{I}}_{B}=\breve{\mathrm{I}}_{B}^{(-1)}(X_{t^{\prime}} )_{L}\hat{\mathcal{Z}}=\breve{\mathrm{I}}_{B}(X_{t^{\prime}})_{L}\hat{ \mathcal{Z}}\breve{\mathrm{I}}_{B},\\ (X_{t^{\prime}})_{L}&=&(X_{t^{\prime}}^{\dagger})_{D},\end{array} \tag{103a}\] \[\begin{array}{lcl}\widetilde{X_{t}^{\dagger}}&=&\breve{\mathrm{I}}_{B}(X_{t }^{\dagger})_{L}\hat{\mathcal{Z}}=(X_{t}^{\dagger})_{L}\hat{\mathcal{Z}} \breve{\mathrm{I}}_{B}^{(-1)}=\breve{\mathrm{I}}_{B}(X_{t}^{\dagger})_{L} \hat{\mathcal{Z}}\breve{\mathrm{I}}_{B},\\ (X_{t}^{\dagger})_{L}&=&(X_{t}^{\dagger})_{D}+(X_{t}^{\dagger})_{out}\\ (X_{t}^{\dagger})_{out}&=&-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{ \tilde{t}^{\prime}_{1}}Y(tt_{1}t_{2}\tilde{t}^{\prime}_{1})b_{t_{1}}^{\dagger }b_{t_{2}}^{\dagger}(X_{\tilde{t}^{\prime}_{1}})_{L},\end{array}\] (103b) \[\begin{array}{lcl}\widetilde{B_{q}}&=&(B_{q})_{L}\hat{\mathcal{Z}} \breve{\mathrm{I}}_{B}=\breve{\mathrm{I}}_{B}(B_{q})_{L}\hat{\mathcal{Z}}= \breve{\mathrm{I}}_{B}(B_{q})_{L}\hat{\mathcal{Z}}\breve{\mathrm{I}}_{B},\\ (B_{q})_{L}&=&(B_{q})_{D}+(B_{q})_{out},\\ (B_{q})_{out}&=&\sum_{t}\sum_{\tilde{t}^{\prime}}\Gamma_{q}^{\tilde{t}^{ \prime}t}b_{t}^{\dagger}(X_{\tilde{t}^{\prime}})_{L}.\end{array} \tag{103c}\] While, from \(B_{q}=B_{\tilde{q}}^{\dagger}\), \(\widetilde{B_{q}}=\widetilde{B_{q}}^{\dagger}\) and we find another expression for \(\widetilde{B_{q}}\) as \[\begin{array}{lcl}\widetilde{B_{q}}&=&\breve{\mathrm{I}}_{B}\hat{\mathcal{Z }}(B_{\tilde{q}})_{L}^{\dagger}=\hat{\mathcal{Z}}(B_{\tilde{q}})_{L}^{\dagger }\breve{\mathrm{I}}_{B}=\breve{\mathrm{I}}_{B}\hat{\mathcal{Z}}(B_{\tilde{q}} )_{L}^{\dagger}\breve{\mathrm{I}}_{B},\\ (B_{\tilde{q}})_{L}^{\dagger}&=&(B_{q})_{D}+(B_{\tilde{q}})_{out}^{\dagger},\\ (B_{\tilde{q}})_{out}^{\dagger}&=&\sum_{t}\sum_{\tilde{t}}\Gamma_{q}^{\prime \tilde{t}^{\prime}\tilde{t}}(X_{\tilde{t}})_{L}^{\dagger}b_{t^{\prime}},\end{array} \tag{104}\] where we use \((B_{q})_{D}=(B_{\tilde{q}})_{D}^{\dagger}\). Using two types of expressions for \(\widetilde{B_{q}}\), we obtain \[\breve{\mathrm{I}}_{B}[(B_{q})_{D},\hat{\mathcal{Z}}]\breve{\mathrm{I}}_{B}= \breve{\mathrm{I}}_{B}\{\hat{\mathcal{Z}}(B_{\tilde{q}})_{out}^{\dagger}-(B_{ q})_{out}\hat{\mathcal{Z}}\}\breve{\mathrm{I}}_{B}. \tag{105}\] From Eq. (24) and Eq. (96), we can express the mapping operator as \[U_{\xi}=\hat{\mathcal{Z}}^{\xi-\frac{1}{2}}\widetilde{U}, \tag{106}\] and Eq. (43) becomes as follows: \[|\psi^{\prime}\rangle_{\xi}=\hat{\cal Z}^{\xi-\frac{1}{2}}\widetilde{|\psi^{ \prime}\rangle},\qquad_{-\xi}(\psi|=\widetilde{(\psi|}\hat{\cal Z}^{-\xi-\frac{ 1}{2}}, \tag{107a}\] \[(O_{F})_{\xi}=\hat{\cal Z}^{\xi-\frac{1}{2}}\widetilde{O_{F}}\hat{\cal Z}^{-\xi- \frac{1}{2}}, \tag{107b}\] If \(O_{F}\) is a phonon creation, a phonon annihilation, or a scattering operator, \(\widetilde{O_{F}}\!=\!\breve{1}_{B}(O_{F})_{L}\hat{\cal Z}^{\breve{\imath}} \!_{B}\) holds. Therefore the mapped \(O_{F}\) can be expressed as \[(O_{F})_{\xi}=\breve{1}_{B}(O_{F})_{B(\xi)}\breve{1}_{B};\quad(O_{F})_{B(\xi) }=\hat{\cal Z}^{\xi-\frac{1}{2}}(O_{F})_{L}\hat{\cal Z}^{-\xi+\frac{1}{2}}, \tag{108}\] and \[_{-\xi}(\psi|(O_{F})_{\xi}|\psi^{\prime})_{\xi}=_{-\xi}(\psi|(O_{F})_{B(\xi) }|\psi^{\prime})_{\xi} \tag{109}\] holds. Therefore we can regard \((O_{F})_{\xi}\) as \((O_{F})_{B(\xi)}\) in the physical subspace. 
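As a side check on the \(\xi\)-th power formula quoted in Eq. (96): since \(\hat{\mathcal{Z}}=\hat{1}_{B}+\hat{\mathcal{Y}}\) commutes with its own powers, the expansion of \(\hat{\mathcal{Z}}^{\xi}\) is the ordinary binomial series in \(\hat{\mathcal{Y}}\). The following snippet is our own verification of the quoted coefficients, not part of the original derivation:

```python
# Sanity check (ours) of the binomial coefficients in Eq. (96):
# Z^xi = 1 + xi*Y + (1/2)*xi*(xi-1)*Y^2 + ...; valid because Z = 1 + Y
# commutes with itself, so the scalar series applies.
import sympy as sp

y, xi = sp.symbols('y xi')
series = sp.series((1 + y)**xi, y, 0, 3).removeO()
expected = 1 + xi*y + sp.Rational(1, 2)*xi*(xi - 1)*y**2
assert sp.simplify(series - expected) == 0
```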
The boson expansions of \((O_{F})_{B(\xi)}\) become infinite expansions for an arbitrary \(\xi\) because those of \((O_{F})_{L}\) become infinite expansions. For \(\xi\neq 0\), the boson expansions become of the non-Hermitian type. In the case of \(\xi=\frac{1}{2}\), \((O_{F})_{\xi(\frac{1}{2})}=(O_{F})_{L}\) holds. For \(\xi=0\), the boson expansions become the Hermitian type and can be derived using \[\begin{array}{rcl}(O_{F})_{B(0)}&=&\hat{\cal Z}^{-\frac{1}{2}}(O_{F})_{L} \hat{\cal Z}^{\frac{1}{2}}\\ &=&(O_{F})_{L}+\frac{1}{2}[(O_{F})_{L},\hat{\cal Y}]-\frac{3}{8}\hat{\cal Y}[( O_{F})_{L},\hat{\cal Y}]-\frac{1}{8}[(O_{F})_{L},\hat{\cal Y}]\hat{\cal Y}+O( \Gamma^{6}).\end{array} \tag{110}\] The boson expansions of the phonon creation and annihilation operators and the scattering operators are as follows: \[(X_{t^{\prime}})_{B(0)}=(X_{t^{\prime}})_{B(0)in}+(X_{t^{\prime}})_{B(0)out}, \tag{111a}\] \[(X_{t^{\prime}})_{B(0)in}=b_{t^{\prime}}+(X_{t^{\prime}})_{B(0)in}^{(2)}+(X_{ t^{\prime}})_{B(0)in}^{(4)}+O(\Gamma^{5}),\] (111b) \[(X_{t^{\prime}})_{B(0)in}^{(2)}=-\frac{1}{4}\sum_{t_{1}}\sum_{t^{\prime}_{1}t ^{\prime}_{2}}Y(t^{\prime}t^{\prime}_{1}t^{\prime}_{2}t_{1})b^{\dagger}_{t_{ 1}}b_{t^{\prime}_{1}}b_{t^{\prime}_{2}},\] (111c) \[(X_{t^{\prime}})_{B(0)in}^{(4)}=-\frac{1}{32}\sum_{t_{1}}\sum_{t^{\prime}_{1} t^{\prime}_{2}}\sum_{tt^{\prime\prime}}Y(t^{\prime}tt^{\prime\prime}t^{\prime}_{1})Y(t^{ \prime}_{2}tt^{\prime\prime}t^{\prime}_{1})b^{\dagger}_{t_{1}}b_{t^{\prime}_ {1}}b_{t^{\prime}_{2}}\] \[+\frac{1}{96}\sum_{t_{1}t_{2}}\sum_{t^{\prime}_{1}t^{\prime}_{2} t^{\prime}_{3}}\sum_{t}\{2Y(t^{\prime}_{1}t_{1}t^{\prime}t)Y(tt^{\prime}_{2}t^{ \prime}_{3}t_{2})\] (111d) \[-5Y(t^{\prime}t^{\prime}_{1}t^{\prime}_{2}t)Y(tt_{1}t_{2}t^{ \prime}_{3})\}b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b_{t^{\prime}_{1}}b_{t^{ \prime}_{2}}b_{t^{\prime}_{3}},\] \[(X_{t^{\prime}})_{B(0)out}=(X_{t^{\prime}})_{B(0)out}^{(3)}+(X_{t^{ \prime}})_{B(0)out}^{(4)}+O(\Gamma^{5}) \tag{111e}\] \[\begin{split}(X_{t^{\prime}})^{(3)}_{B(0)out}=\frac{1}{24}\sum_{t_{1}t_{2 }}\sum_{t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3}}\sum_{\bar{t}}\{2Y(t^{\prime }_{1}t_{1}t^{\prime}\bar{t})Y(\bar{t}t^{\prime}_{2}t^{\prime}_{3}t_{2})\\ +Y(t^{\prime}_{1}t_{1}t_{2}\bar{t})Y(\bar{t}t^{\prime}_{2}t^{ \prime}_{3}t^{\prime})\}b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b^{\prime}_{t_{ 1}}b^{\prime}_{t^{\prime}_{2}}b^{\prime}_{t^{\prime}_{3}},\end{split} \tag{111f}\] \[\begin{split}(X_{t^{\prime}})^{(4)}_{B(0)out}=-\frac{1}{16}\sum_{t_{ 1}t_{2}t_{3}}\sum_{t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3}t^{\prime}_{4}} \sum_{\bar{t}\bar{t}^{\prime}}Y(t^{\prime}_{1}t^{\prime}t_{1}\bar{t})Y(\bar{t} t^{\prime}_{2}t^{\prime}_{3}\bar{t}^{\prime})Y(\bar{t}^{\prime}t_{2}t_{3}t^{ \prime}_{4})\\ b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b^{\dagger}_{t_{3}}b^{ \dagger}_{t^{\prime}_{1}}b^{\prime}_{t^{\prime}_{2}}b^{\prime}_{t^{\prime}_{3} }b^{\prime}_{t^{\prime}_{4}}.\end{split}\] (111g) \[\begin{split}(X_{\bar{t}^{\prime}})_{B(0)}=(X_{\bar{t}^{\prime}})_{B( 0)}^{(2)}+(X_{\bar{t}^{\prime}})_{B(0)}^{(3)}+(X_{\bar{t}^{\prime}})_{B(0)}^{( 4)}+O(\Gamma^{5}),\end{split}\] (112a) \[\begin{split}(X_{\bar{t}^{\prime}})_{B(0)}^{(2)}=(X_{\bar{t}^{ \prime}})_{L}^{(2)},\quad(X_{\bar{t}^{\prime}})_{B(0)}^{(3)}=(X_{\bar{t}^{ \prime}})_{L}^{(3)},\end{split}\] (112b) \[\begin{split}(X_{\bar{t}^{\prime}})_{B(0)}^{(4)}=(X_{\bar{t}^{ \prime}})_{L}^{(4)}-\frac{1}{2}[\hat{\mathcal{Z}}^{(2),}(X_{\bar{t}^{\prime}} )_{L}^{(2)}].\end{split}\] (112c) 
\[(B_{q})_{B(0)}=(B_{q})_{L}+\frac{1}{2}\hat{\mathcal{Z}}\{(B_{\bar{q}})_{out}{}^{\dagger}-(B_{q})_{out}\}+O(\Gamma^{5}), \tag{113a}\] \[(B_{q})_{B(0)out}^{(k)}=\frac{1}{2}\{(B_{q})_{out}^{(k)}+(B_{\bar{q}})_{out}^{(k)}{}^{\dagger}\}\quad(k=2,3), \tag{113b}\] \[(B_{q})_{B(0)out}^{(4)}=\frac{1}{2}\{(B_{q})_{out}^{(4)}+(B_{\bar{q}})_{out}^{(4)}{}^{\dagger}\}+\frac{1}{2}\hat{\mathcal{Z}}^{(2)}\{(B_{\bar{q}})_{out}^{(2)}{}^{\dagger}-(B_{q})_{out}^{(2)}\}, \tag{113c}\] \[\begin{split}\frac{1}{2}\hat{\mathcal{Z}}^{(2)}\{(B_{\bar{q}})_{out}^{(2)}{}^{\dagger}-(B_{q})_{out}^{(2)}\}=\frac{1}{8}\sum_{t_{1}t_{2}}\sum_{t^{\prime}_{1}t^{\prime}_{2}}\sum_{tt^{\prime}}\sum_{\bar{t}}\{\Gamma_{q}^{t^{\prime}_{1}\bar{t}}Y(\bar{t}\bar{t}t^{\prime}_{2})-\Gamma^{\bar{t}t}_{q}Y(\bar{t}t^{\prime}_{1}t^{\prime}_{2}t^{\prime})\}Y(tt_{1}t_{2}t^{\prime})\\ b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b_{t^{\prime}_{1}}b_{t^{\prime}_{2}}\\ +\frac{1}{8}\sum_{t_{1}t_{2}t_{3}}\sum_{t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3}}\sum_{\bar{t}}\{\Gamma_{q}^{t^{\prime}_{1}\bar{t}}Y(\bar{t}t_{1}t^{\prime}_{2})-\Gamma^{\bar{t}t}_{q}Y(\bar{t}t^{\prime}_{1}t^{\prime}_{2}t_{1})-\Gamma^{\bar{t}t}_{q}Y(\bar{t}t^{\prime}_{1}t^{\prime}_{2}t)\}\\ +\frac{1}{16}\sum_{t_{1}t_{2}t_{3}t_{4}}\sum_{t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3}t^{\prime}_{4}}\sum_{\bar{t}}\{\Gamma_{q}^{t^{\prime}_{1}\bar{t}}Y(\bar{t}t_{1}t_{2}t^{\prime}_{2})-\Gamma_{q}^{\bar{t}t_{1}}Y(\bar{t}t^{\prime}_{1}t^{\prime}_{2}t_{2})\}Y(t^{\prime}_{4}t_{3}t_{4}t^{\prime}_{3})\\ b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b^{\dagger}_{t_{3}}b^{\dagger}_{t_{4}}b_{t^{\prime}_{1}}b_{t^{\prime}_{2}}b_{t^{\prime}_{3}}b_{t^{\prime}_{4}}.\end{split} \tag{113d}\] Here, we use Eq. (105) to find \((B_{q})_{B(0)}\). From Eq. (103c), \((B_{q})_{out}^{(k)}=\sum_{t}\sum_{\bar{t}^{\prime}}\Gamma_{q}^{\bar{t}^{\prime}t}b^{\dagger}_{t}(X_{\bar{t}^{\prime}})_{L}^{(k)}\). Finally, we deal with the product of operators. Let \(O_{F}\) and \(O^{\prime}_{F}\) each be a phonon creation operator, a phonon annihilation operator, or a scattering operator; then we can derive the boson expansions of their product as \[(O_{F}O^{\prime}_{F})_{B(\xi)}=\hat{\cal Z}^{\xi-\frac{1}{2}}\widetilde{O_{F}O^{\prime}_{F}}\hat{\cal Z}^{-\xi-\frac{1}{2}}. \tag{114}\] If \(\widetilde{O_{F}O^{\prime}_{F}}=\breve{1}_{B}(O_{F})_{L}\breve{1}_{B}(O^{\prime}_{F})_{L}\hat{\cal Z}\breve{1}_{B}\) holds, we obtain \[(O_{F}O^{\prime}_{F})_{B(\xi)}=(O_{F})_{B(\xi)}(O^{\prime}_{F})_{B(\xi)}, \tag{115}\] and if Eq. (40) holds, \[(O_{F}O^{\prime}_{F})_{B(\xi)}\approx(O_{F})_{B(\xi)}(O^{\prime}_{F})_{B(\xi)}. \tag{116}\] In the case that Eq. (115) or Eq. (116) holds, it is sufficient to obtain only the boson expansions of the basic fermion pair operators. Conventional practical boson expansion methods have used, as a matter of course, the approximation of Eq. (116). Eq. (114) makes it possible to judge whether this approximation is good or bad. We present \(\widetilde{O_{F}O^{\prime}_{F}}\) in the appendix.
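The Hermitian-type rule of Eq. (110) is a similarity transformation \(\hat{\mathcal{Z}}^{-\frac{1}{2}}(O_{F})_{L}\hat{\mathcal{Z}}^{\frac{1}{2}}\) expanded in the noncommuting \(\hat{\mathcal{Y}}\sim O(\Gamma^{2})\). The following symbolic check of the commutator coefficients is ours, written under that assumption; it is not the authors' code:

```python
# Symbolic check (ours) of the expansion in Eq. (110): with Z = 1 + Y,
# Y and O noncommuting and Y ~ O(Gamma^2), expand Z^{-1/2} O Z^{1/2}
# to second order in Y and compare with the commutator form.
import sympy as sp

e = sp.symbols('e')                          # bookkeeping parameter, e ~ O(Gamma^2)
O, Y = sp.symbols('O Y', commutative=False)
# truncated operator series for (1+eY)^{1/2} and (1+eY)^{-1/2}
sqrt_z = 1 + e*Y/2 - e**2*Y**2/8
inv_sqrt_z = 1 - e*Y/2 + 3*e**2*Y**2/8
lhs = sp.expand(inv_sqrt_z * O * sqrt_z)
C = O*Y - Y*O                                # the commutator [O, Y]
rhs = sp.expand(O + e*C/2 - sp.Rational(3, 8)*e**2*Y*C - sp.Rational(1, 8)*e**2*C*Y)
diff = sp.expand(lhs - rhs)
# orders e^0, e^1, e^2 all cancel; e^3 and higher lie beyond the truncation
assert all(sp.expand(diff.coeff(e, k)) == 0 for k in range(3))
```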
We finally point out that the \(\bar{t}\)-sum does not need to run over all \(\bar{t}\) in the following case. If \[[[X_{\tau_{1}},X^{\dagger}_{\tau_{2}}],X^{\dagger}_{\tau_{3}}]\approx-\sum_{\tau^{\prime}}Y(\tau_{1},\tau_{2},\tau_{3},\tau^{\prime})X^{\dagger}_{\tau^{\prime}} \tag{117}\] is satisfied for the phonon excitation modes \(\{\tau\}\), where \(\{\tau\}\) contains \(\{t\}\) and is set up such that the small parameter expansion holds, then the \(\bar{t}\) not contained in \(\{\tau\}\) can be neglected in the \(\bar{t}\)-sum. An example is a case where \(\{\tau\}\) contains a sufficient variety of phonon excitation modes and \[\sum_{\tau}\psi_{\tau}(\alpha\beta)\psi_{\tau}(\alpha^{\prime}\beta^{\prime})\approx\delta_{\alpha\alpha^{\prime}}\delta_{\beta\beta^{\prime}}-\delta_{\alpha\beta^{\prime}}\delta_{\beta\alpha^{\prime}} \tag{118}\] is satisfied. In this case, \[a^{\dagger}_{\alpha}a^{\dagger}_{\beta}\approx\sum_{\tau}\psi_{\tau}(\alpha\beta)X^{\dagger}_{\tau} \tag{119}\] is satisfied, and therefore \[X^{\dagger}_{\bar{\tau}}\approx 0 \tag{120}\] holds for the modes \(\bar{\tau}\) not contained in \(\{\tau\}\), from which Eq. (117) is derived. In this case, however, \(\{\tau\}\) cannot be regarded as \(\{t\}\), because \(\{\tau\}\) contains such a sufficient variety of phonon excitation modes that \(\hat{Z}\) has zero eigenvalues. ### Boson expansions in the case where the double commutators of the phonon operators are closed In this subsection, we treat the boson expansions in the case where the double commutators of Eq. (8) are closed in \(\{t\}\). If \(Y(t_{1}^{\prime}t\bar{t}t_{2}^{\prime})=0\), then the double commutators of Eq. (8) are closed in \(\{t\}\). For further analysis, we denote more concretely \({\bf W}(2)\) and \({\bf Z}^{\prime}(2)\) in Eq. (79) as follows: \[{\bf W}(2)=\left({\bf W}^{(1)}(2)\ {\bf W}^{(2)}(2)\right), \tag{121a}\] \[{\bf Z}^{\prime}(2)=\left(\begin{array}{cc}{{\bf Z}^{\prime}}^{(1)}(2)&{{\bf Z}^{\prime}}^{(3)}(2)\\ {{\bf Z}^{\prime}}^{(3)}(2)^{T}&{{\bf Z}^{\prime}}^{(2)}(2)\end{array}\right), \tag{121b}\] where \({\bf W}^{(1)}(2)\) is what becomes a zero matrix when \(Y(t_{1}^{\prime}t\bar{t}t_{2}^{\prime})=0\). Substituting this into Eq. (84b), \({\bf W}^{(2)}(2){{\bf Z}^{\prime}}^{(3)}(2)^{T}\) becomes a zero matrix, and we obtain \[\sum_{\bar{t}\bar{t}^{\prime}}Y(t_{1}^{\prime}\bar{t}t_{2}^{\prime})Y(t_{1}\bar{t}t^{\prime}\bar{t}_{1})=0, \tag{122}\] which indicates that Eqs. (90) do not hold. It also indicates that if \(Y(t_{1}^{\prime}t\bar{t}t_{2}^{\prime})=0\), then \(Y(t_{1}^{\prime}\bar{t}_{1}\bar{t}_{2}t_{2}^{\prime})=0\) should be satisfied. \({\bf W}(2)={\bf 0}(2)\) then holds, and \(|t_{1}t_{2}\rangle\) and \(|\bar{t}\mu\rangle\) become orthogonal. Therefore, if the double commutators of Eq. (8) are closed in \(\{t\}\), the boson expansions are not obtained as the small parameter expansion. Starting with Eqs. (72) and applying \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\) to them, we derive \(\widetilde{X_{\bar{t}}^{\dagger}}(N)=0\), from which we obtain \(\widetilde{X_{\bar{t}}}=0\) and \(\widetilde{X_{\bar{t}}^{\dagger}}=0\). Therefore, \((X_{\bar{t}^{\prime}})_{\xi}=0\) and \((X_{\bar{t}}^{\dagger})_{\xi}=0\) hold.
Conversely, if \((X_{\bar{t}^{\prime}})_{\xi}=0\) and \((X_{\bar{t}}^{\dagger})_{\xi}=0\) hold, then \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\) should hold, that is, \(|t_{1}^{\prime}t_{2}^{\prime}\rangle\) and \(|\bar{t}\mu\rangle\) should be orthogonal, because \[Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=((t_{1}^{\prime}t_{2}^{\prime}|\bar{t}\mu))-\langle\langle t_{1}^{\prime}t_{2}^{\prime}|\bar{t}\mu\rangle\rangle \tag{123}\] and \[\langle\langle t_{1}^{\prime}t_{2}^{\prime}|\bar{t}\mu\rangle\rangle=\langle 0|X_{t_{1}^{\prime}}X_{t_{2}^{\prime}}X_{\bar{t}}^{\dagger}X_{\mu}^{\dagger}|0\rangle=(0|(X_{t_{1}^{\prime}})_{\xi}(X_{t_{2}^{\prime}})_{\xi}(X_{\bar{t}}^{\dagger})_{\xi}(X_{\mu}^{\dagger})_{\xi}|0) \tag{124}\] hold. It is a necessary and sufficient condition for \((X_{\bar{t}^{\prime}})_{\xi}=0\) and \((X_{\bar{t}}^{\dagger})_{\xi}=0\) that \(|t_{1}^{\prime}t_{2}^{\prime}\rangle\) and \(|\bar{t}\mu\rangle\) are orthogonal. We also obtain \(\widetilde{X}_{t^{\prime}}=(X_{t^{\prime}})_{D}\hat{Z}\), \(\widetilde{X}_{t}^{\dagger}=(X_{t}^{\dagger})_{D}\hat{Z}\), and \(\widetilde{B}_{q}=(B_{q})_{D}\hat{Z}\). Therefore, \[(O_{F})_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}(O_{F})_{D}\hat{Z}^{-\xi+\frac{1}{2}}, \tag{125}\] where \(O_{F}\) is \(X_{t^{\prime}}\), \(X_{t}^{\dagger}\), or \(B_{q}\). From Eq. (2b) and Eq. (72c), \([(B_{q})_{D},\hat{Z}(N)]=0\) holds, and then \[[(B_{q})_{D},\hat{Z}]=0. \tag{126}\] Hence \[(B_{q})_{\xi}=(B_{q})_{D}\hat{T}_{B}=\hat{T}_{B}(B_{q})_{D}=\hat{T}_{B}(B_{q})_{D}\hat{T}_{B} \tag{127}\] holds for any \(\xi\). For \(O_{F}\) and \(O^{\prime}_{F}\) being the phonon operators or the scattering operators, respectively, we obtain \[(O_{F}O^{\prime}_{F})_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}(O_{F})_{D}(O^{\prime}_{F})_{D}\hat{Z}^{-\xi+\frac{1}{2}}. \tag{128}\] In addition, the following, \[(O_{F}O^{\prime}_{F})_{\xi}=(O_{F})_{\xi}(O^{\prime}_{F})_{\xi}, \tag{129}\] is satisfied if \(\hat{T}_{B}\) becomes \(\breve{1}_{B}\) and \(O_{F}O^{\prime}_{F}\) is normal ordered, because \(\hat{Z}^{\xi-\frac{1}{2}}\hat{Z}^{-\xi+\frac{1}{2}}=\breve{1}_{B}\) and \(\breve{1}_{B}(X^{\dagger}_{t})_{D}=\breve{1}_{B}(X^{\dagger}_{t})_{D}\breve{1}_{B}\), \((X_{t^{\prime}})_{D}\breve{1}_{B}=\breve{1}_{B}(X_{t^{\prime}})_{D}\breve{1}_{B}\), and Eq. (126) are satisfied. Eq. (73) becomes \[\hat{Z}(N)=\frac{1}{N}\sum_{t}(X^{\dagger}_{t})_{D}\hat{Z}(N-1)b_{t}\quad(N\geq 2). \tag{130}\] The solution of Eq. (130) should be given by Eq. (83a). On the other hand, Eq. (130) can be solved directly in the case that \({\bf Z}(2)\) has no zero eigenvalues. In this case, \(Y(t^{\prime}_{1}t_{1}t_{2}t^{\prime}_{2})=-2((t^{\prime}_{1}t^{\prime}_{2}|t_{1}t_{2}))\) holds. Therefore, \[(X^{\dagger}_{t})_{D}=b^{\dagger}_{t}(2\hat{N}_{B}+1), \tag{131}\] from which we obtain \[\hat{Z}(N)=\frac{(2N-1)}{N}\sum_{t}b^{\dagger}_{t}\hat{Z}(N-1)b_{t}. \tag{132}\] \(\hat{Z}(2)=3\hat{1}_{B}(2)\), and if \(\hat{Z}(N-1)=(2N-3)!!\hat{1}_{B}(N-1)\), then \(\hat{Z}(N)=(2N-1)!!\hat{1}_{B}(N)\). These match Eq. (83a) in the case that \(\hat{T}_{B}(N)=\hat{1}_{B}(N)\) holds. Eq. (130) can also be solved formally as \[\hat{Z}(N)=\frac{1}{N!}\sum_{t_{1}\cdots t_{N}}(X_{t_{1}}^{\dagger})_{D}\cdots(X_{t_{N}}^{\dagger})_{D}|0)(0|b_{t_{1}}\cdots b_{t_{N}}. \tag{133}\] From Eq. (133), we find the relation \[\hat{Z}(N)b_{t}^{\dagger}=(X_{t}^{\dagger})_{D}\hat{Z}(N-1). \tag{134}\] On the other hand, from Eq. (83a) and Eq. (130), we obtain \[(2N-1)\hat{T}_{B}(N)b_{t}^{\dagger}=(X_{t}^{\dagger})_{D}\hat{T}_{B}(N-1).
\tag{135}\] The mapped operators are given as follows: \[(O_{F})_{\xi}=\hat{T}_{B}(O_{F})_{B(\xi)}\hat{T}_{B}, \tag{136a}\] \[(O_{F})_{B(\xi)}=\left\{(2\hat{N}_{B}-1)!!\right\}^{\xi-\frac{1}{2}}(O_{F})_{D}\left\{(2\hat{N}_{B}-1)!!\right\}^{-\xi+\frac{1}{2}}. \tag{136b}\] From Eq. (128), we obtain \[(O_{F}O_{F}^{\prime})_{B(\xi)}=\left\{(2\hat{N}_{B}-1)!!\right\}^{\xi-\frac{1}{2}}(O_{F})_{D}(O_{F}^{\prime})_{D}\left\{(2\hat{N}_{B}-1)!!\right\}^{-\xi+\frac{1}{2}}. \tag{137}\] The difference due to \(\xi\) is renormalized into the boson excitation number, and the remaining boson expansions are the same as those of the DBET, which are finite. Therefore, we can substantially treat all types of boson expansions as finite expansions when the states concerned are the physical states that are eigenstates of the boson number operator. If \(\xi=0\), we obtain finite boson expansions of the Hermitian type. In the case that \(O_{F}\) preserves the number of quasi-particles, \((O_{F})_{D}\) preserves the number of bosons, and the norm operator parts cancel out completely. As a result, we obtain finite boson expansions for any \(\xi\) such as \[(O_{F})_{B(\xi)}=(O_{F})_{D}, \tag{138}\] from which we also derive \[(O_{F})_{B(0)}=(O_{F})_{B(\xi)}. \tag{139}\] Even if \(O_{F}^{(1)}\) and \(O_{F}^{(2)}\) do not necessarily preserve the quasi-particle number, respectively, if \(O_{F}=O_{F}^{(1)}O_{F}^{(2)}\) preserves the quasi-particle number, Eq. (137) enables us to derive \[(O_{F}^{(1)}O_{F}^{(2)})_{B(0)}=(O_{F}^{(1)}O_{F}^{(2)})_{B(\xi)}=(O_{F}^{(1)})_{D}(O_{F}^{(2)})_{D}. \tag{140}\] Hence \((O_{F})_{D}\) and \((O_{F}^{(1)})_{D}(O_{F}^{(2)})_{D}\) are regarded as a finite boson expansion of the Hermitian type. For \(\xi=\frac{1}{2}\), the norm operator does not appear in the mapped fermion operators, and we can obtain the boson expansions as follows: \[\left(O_{F}\right)_{B(\frac{1}{2})}=(O_{F})_{D}, \tag{141}\] \[\left(O_{F}O_{F}^{\prime}\right)_{B(\frac{1}{2})}=\left(O_{F}\right)_{B(\frac{1}{2})}(O_{F}^{\prime})_{B(\frac{1}{2})}=(O_{F})_{D}(O_{F}^{\prime})_{D}. \tag{142}\] We obtain the finite expansions of DBET. From Eqs. (44), it is straightforward to prove that the Hermitian treatment [14] holds exactly for the eigenvectors of \(\hat{Z}\), \(|N;a)\). Therefore, when \(\hat{T}_{B}=\breve{1}_{B}\) holds, it applies exactly to the ideal boson state vectors \(|N;t)\). For \(\xi=0\), the boson mapping becomes of the Hermitian type. We can obtain the mapped operators as follows: \[(X_{t}^{\dagger})_{B(0)}=\left\{(2\hat{N}_{B}-1)!!\right\}^{-\frac{1}{2}}(X_{t}^{\dagger})_{D}\left\{(2\hat{N}_{B}-1)!!\right\}^{\frac{1}{2}}=(X_{t}^{\dagger})_{D}\frac{1}{\sqrt{1+2\hat{N}_{B}}}=b_{t}^{\dagger}\sqrt{1+2\hat{N}_{B}}, \tag{143a}\] \[(B_{q})_{B(0)}=\sum_{tt^{\prime}}\Gamma_{q}^{t^{\prime}t}b_{t}^{\dagger}b_{t^{\prime}}. \tag{143b}\] Here we use the relation \[\hat{T}_{B}b_{t}^{\dagger}(2\hat{N}_{B}+1)=(X_{t}^{\dagger})_{D}\hat{T}_{B}, \tag{144}\] obtained from Eq. (135), for the derivation of \((X_{t}^{\dagger})_{B(0)}\). The scattering operators are expressed as finite expansions in the physical subspace. The phonon operators, on the other hand, do not take the form of a small parameter expansion whose zeroth-order approximation is the boson approximation. The boson approximation holds only when the phonon excitation number does not exceed one.
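The structure just derived is easy to exhibit numerically. The sketch below is our own check, not part of the paper; for a single boson mode with truncated matrices it verifies both the recurrence (132), which gives \(\hat{Z}(N)=(2N-1)!!\hat{1}_{B}(N)\), and the conjugation identity behind Eq. (143a):

```python
# Numerical check (ours) for one boson mode, truncated at n_max quanta:
# (i) the recurrence (132) yields Z(N) = (2N-1)!! 1_B(N);
# (ii) {(2N-1)!!}^{-1/2} b^dag (2N+1) {(2N-1)!!}^{1/2} = b^dag sqrt(1+2N),
#      i.e. Eq. (143a) with (X_t^dag)_D = b^dag (2N+1).
import numpy as np

n_max = 8
b = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)   # annihilation operator
bd = b.T                                               # creation operator
n = np.arange(n_max + 1)
P = [np.diag((n == k).astype(float)) for k in range(n_max + 1)]  # projectors 1_B(N)

Z, dfact = P[0], 1.0
for N in range(1, n_max):                              # stay below the truncation edge
    Z = (2 * N - 1) / N * (bd @ Z @ b)                 # Eq. (132)
    dfact *= 2 * N - 1                                 # (2N-1)!!
    assert np.allclose(Z, dfact * P[N])

df = np.cumprod(np.where(n == 0, 1.0, 2 * n - 1))      # (2n-1)!! with (2*0-1)!! = 1
lhs = np.diag(df**-0.5) @ bd @ np.diag(2 * n + 1) @ np.diag(df**0.5)
rhs = bd @ np.diag(np.sqrt(1 + 2 * n))
assert np.allclose(lhs, rhs)
```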
### On the role of the norm operator In this subsection, based on the results obtained so far, we summarize the role the norm operator plays in the boson expansion method. What is important is the relation between the norm operator consisting of all kinds of phonons and the norm operator constituting the boson mapping operator. What kinds of boson expansions are derived is determined by the structure of the norm operator when all modes are adopted, \(\hat{Z}^{(A)}\). The structure is determined by the introduced single-particle states, \(\{\alpha\}\), and the amplitudes of the Tamm-Dancoff phonons, \(\psi_{\mu}(\alpha\beta)\). \(\hat{Z}^{(A)}\) is composed of the norm operator \(\hat{Z}\), which is used for mapping, and other operators as \[\hat{Z}^{(A)}=\hat{Z}+\hat{W}+\hat{W}^{\dagger}+\hat{Z}^{\prime}, \tag{145a}\] where \[\hat{Z}=\breve{1}_{B}\hat{Z}^{(A)}\breve{1}_{B},\quad\hat{W}=\breve{1}_{B}\hat{Z}^{(A)}(\breve{1}_{B}^{(A)}-\breve{1}_{B}),\quad\hat{Z}^{\prime}=(\breve{1}_{B}^{(A)}-\breve{1}_{B})\hat{Z}^{(A)}(\breve{1}_{B}^{(A)}-\breve{1}_{B}). \tag{145b}\] The condition Eq. (52) is imposed on \(\hat{Z}^{(A)}\), regardless of how \(\psi_{\mu}(\alpha\beta)\) is taken. \(\hat{Z}\), \(\hat{W}\), and \(\hat{Z}^{\prime}\) are determined so as to satisfy the condition Eq. (52). The double commutation relations of the phonon operators that constitute \(\hat{Z}\) are generally not closed among them. \(\hat{Z}\) must have eigenvalues close to 1 for the small parameter expansion, which also allows the use of the ideal boson state vectors as physical ones. It is possible to check directly whether this condition is satisfied, because \(\hat{Z}\) is specifically obtained by the boson expansion assuming the small parameter expansion. This condition alone is not, however, sufficient for realizing the small parameter expansion. Including not only \(\hat{Z}\) but also \(\hat{W}\) and \(\hat{Z}^{\prime}\) gives the necessary and sufficient condition. In this case, it is not allowed to treat \(\hat{W}\) as a zero operator, that is, to assume that the double commutation relations of the phonon operators constituting \(\hat{Z}\) are closed, because then the small parameter expansion does not hold. In the case that \(\hat{W}\) can be regarded as zero, the boson expansions can be substantially treated as finite expansions. The realization of this type of practical boson expansion is more difficult because, under the above condition, \(\{t\}\) should be selected so that the ideal boson state vectors become physical and the dynamics are sufficiently reflected, as with the small parameter expansion. ## 5 Comments on the conventional methods Conventional practical boson expansion methods, without exception, discard the phonon excitation modes that are not adopted as the boson excitation modes. We call this procedure the non-adopted modes discarding (NAMD). Since NAMD closes the double commutators of the phonon operators within the adopted modes, as for those of the bosons, it is incompatible with the small parameter expansion. The incompatibility between NAMD and the small parameter expansion has not been considered in formulating the conventional practical boson expansion methods. In the case that NAMD is precisely applicable, DBET is formulated exactly, which does not mean that DBET necessarily has exceptional superiority, because we can substantially obtain finite expansions of the Hermitian type by treating the boson number operator parts appropriately.
In the case that the small parameter expansion is applicable, all the boson expansions become infinite ones and include the terms neglected by NAMD. Applying NAMD to Eq. (111), Eq. (112), and Eq. (113), it is found that the remaining terms up to \(O(\Gamma^{2})\) coincide with those obtained by NOLCEXPT. The order of magnitude of the neglected terms is, however, also \(O(\Gamma^{2})\). That is, NOLCEXPT obtains, with NAMD, the terms only up to \(O(\Gamma^{2})\). On the other hand, the finite boson expansions of DBET are obtained from \(\left(O_{F}\right)_{B(\frac{1}{2})}\) by applying NAMD. The order of magnitude of the terms neglected by NAMD is also \(O(\Gamma^{2})\), the same order as the smallest of the retained terms. In both NOLCEXPT and DBET, NAMD neglects terms of an order of magnitude that should be adopted, which indicates that NAMD cannot be used as a proper approximation under the small parameter expansion. The investigation so far makes it clear that the comment of NOLCEXPT [13, 15] on NAMD is incorrect. NOLCEXPT claims that the scattering operators are expressed as finite expansions. This is realized only when NAMD is applied exactly or to a good approximation, and not when the small parameter expansion is realized. Eqs. (143) indicate that it is impossible to express the phonon operators as infinite normal-ordered small parameter expansions although the scattering operators become finite. NOLCEXPT has failed to refute Marshalek's claim [11, 12] that KT-1 [24] and KT-2 [8] are chimerical boson expansions. As already mentioned, the Hermitian treatment becomes exact when NAMD becomes exact. On the other hand, in the case that the small parameter expansion is applicable, the Hermitian treatment becomes an approximation, and it can generally be evaluated using the norm operator by following the method of [22]. It is concluded that the Hermitian treatment holds as far as it is possible to neglect terms of \(O(\Gamma^{4})\). Next, we comment on the problems related to a modified Marumori boson expansion method [7, 16]. The modified Marumori boson expansion method concludes, from the norm of a multi-phonon state vector, that NAMD is good despite the small parameter expansion being available [7]. The reason why this conclusion is derived incorrectly is as follows: The norm of the multi-phonon state vector is obtained from \(\hat{Z}(N)\). Since the terms neglected by NAMD do not appear up to \(O(\Gamma^{3})\) in \(\hat{Z}(N)\), it is impossible to evaluate whether NAMD is a good approximation by investigating \(\hat{Z}(N)\) only up to \(O(\Gamma^{2})\) with the small parameter expansion. For explanation, we adopt a case where \(\{t\}\) consists of only one type of excitation mode \(c\). We define the multi-phonon state \(|N\rangle\) and the ideal boson state \(|N)\) as \[|N\rangle=|\,\overbrace{c\cdots c}^{N}\,\rangle,\quad|N)=|\,\overbrace{c\cdots c}^{N}\,). \tag{146}\] They satisfy \(\langle N|N\rangle=(N|\hat{Z}(N)|N)\). Assuming the small parameter expansion, and setting \(\langle 2|2\rangle=1-\varepsilon\), then \(\varepsilon=\frac{1}{2}Y(cccc)\sim O(\Gamma^{2})\). Expressing \(\langle N|N\rangle\) derived by the small parameter expansion up to \(O(\Gamma^{2})\) as \(\mathcal{N}^{(2)}(N)\), we obtain \(\mathcal{N}^{(2)}(N)=1-N(N-1)\frac{\varepsilon}{2}+O(\Gamma^{3})\). On the other hand, Eq. (130) enables us to obtain \(\hat{Z}(N)\) as the sum of all terms except those neglected by NAMD. Expressing \(\langle N|N\rangle\) thus obtained as \(\mathcal{N}^{(all)}(N)\), \[\mathcal{N}^{(all)}(N)=\mathcal{N}^{(all)}(N-1)\left(1+(\langle 2|2\rangle-1)(N-1)\right) \tag{147}\] holds, and we obtain \(\mathcal{N}^{(all)}(N)=(1-(N-1)\varepsilon)(1-(N-2)\varepsilon)\cdots(1-\varepsilon)\). As \(N\) becomes large, the difference between the two becomes prominent; for \(N=3\), however, it is only \(2\varepsilon^{2}\sim O(\Gamma^{4})\).
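The two norms are easily compared numerically; the following snippet is ours, evaluating them for \(\varepsilon=0.1\):

```python
# Numerical comparison (ours) of the truncated norm N^(2)(N) and the
# all-order norm N^(all)(N) = prod_{k=1}^{N-1} (1 - k*eps), for eps = 0.1.
import numpy as np

eps = 0.1
for N in range(2, 7):
    n_all = np.prod([1.0 - k * eps for k in range(1, N)])
    n_2 = 1.0 - N * (N - 1) * eps / 2.0
    print(N, round(n_all, 4), round(n_2, 4), round(abs(n_2 - n_all), 4))
# For N = 3 the two agree up to 2*eps**2 = 0.02 ~ O(Gamma^4), so this
# comparison cannot tell whether NAMD is a good approximation.
```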
It indicates that the small parameter expansion is well applied in this case and that both coincide well, up to \(O(\Gamma^{2})\), with the exact one. Therefore it is impossible to judge whether NAMD is good or not by the comparison of \(\mathcal{N}^{(all)}(3)\) with the exact one. It is a wrong conclusion that NAMD holds well for \(\varepsilon\approx 0.1\) [7], where the small parameter expansion becomes possible. \(\varepsilon\) should become approximately \(-2\) for NAMD to become good. In addition, it is not the strong but the weak effect of the Pauli exclusion principle that makes the small parameter expansion possible. The comment on the convergence of the modified Marumori boson expansions is therefore mistaken. Conventional practical boson expansion methods restrict, for mapping, only the sorts of phonon excitation modes and not their number. \(\hat{Z}(N)\) necessarily has zero eigenvalues for large phonon excitation numbers even if the sorts of modes are restricted, which makes the ideal boson state vectors unphysical and the small parameter expansion impossible. We should restrict the phonon excitation number beforehand. Nevertheless, NOLCEXPT does not restrict the phonon excitation number. Instead, it treats, without a clear basis, all the norm matrices of the multi-phonon states as having no zero eigenvalues. It nevertheless gives the correct expansion terms up to \(O(\Gamma^{2})\) under NAMD. The restriction of the phonon excitation number beforehand gives a clear reason for this, because we obtain \[\lim_{N_{max}\rightarrow\infty}(O_{F})_{\xi}=(O_{F})_{B(\xi)} \tag{148}\] from Eq. (108). It indicates that we can obtain the correct results of the small parameter expansion without limiting the phonon excitation number beforehand; afterward, we should limit the boson excitation number. As for BREXP, by replacing the collective modes \(\{c\}\) and the non-collective modes \(\{n\}\) with \(\{t\}\) and \(\{\bar{t}\}\), respectively, and suppressing the fermion excitations, we can obtain the boson expansions from BREXP [17, 18]. The Hermitian-type boson expansions obtained thus agree with those obtained from BREXP by adopting the proper transformation [25]. Further comparison requires the derivation of higher-order terms in BREXP. ## 6 Summary We have proposed a new boson expansion theory, the norm operator method, where the norm operator plays a crucial role. The different treatment of the norm operator determines the type of the boson expansions as Hermitian or non-Hermitian. The mapping operator limits, beforehand, the number of phonon excitations in addition to the phonon excitation modes, so as to use the ideal boson state vectors as physical ones and to avoid the breakdown of the small parameter expansion, whose zeroth-order approximation is the boson approximation. In the case that the closed algebraic approximation or the phonon truncation approximation holds, that is, the double commutation relations between the phonons whose excitation modes are adopted as boson excitations are closed, the small parameter expansion is not available.
In this case, the norm operator is expressed as a function of the boson number operator, which makes it possible to treat all types of boson expansions substantially as finite expansions. The small parameter expansion is not compatible with the closed-algebra approximation or the phonon truncation approximation. The contribution of the phonon excitation modes neglected by the approximation makes the boson expansion an infinite expansion regardless of whether it is of the Hermitian type or not. We have obtained the higher-order terms of the boson expansion not expanded so far, in addition to the terms neglected by the approximation. Conventional practical boson expansion methods have used the closed-algebra approximation or the phonon truncation approximation without recognizing the role it plays as mentioned above, and the claims derived from this approximation have no validity: The normal-ordered linked-cluster expansion theory has failed to refute Marshalek's claim that KT-1 and KT-2 are chimerical boson expansions. The Dyson boson expansion theory does not have exceptional superiority over the other types of boson expansions. The boson-fermion expansion theory derives the same boson expansions as the Hermitian-type boson expansions obtained here up to the next-to-leading order. The boson-fermion expansion theory should derive higher-order expansion terms for further comparison. ## References * [1] A. Klein and E. R. Marshalek, Rev. Mod. Phys. **63**, 375 (1991). * [2] S. T. Beliaev and V. G. Zelevinsky, Nucl. Phys. **39**, 582 (1962). * [3] T. Usui, Prog. Theor. Phys. **23**, 787 (1960). * [4] T. Marumori, M. Yamamura, and A. Tokunaga, Prog. Theor. Phys. **31**, 1009 (1964). * [5] D. Janssen, F. Donau, and S. Frauendorf, Nucl. Phys. **A172**, 145 (1971). * [6] S. G. Lie and G. Holzwarth, Phys. Rev. **C12**, 1035 (1975). * [7] G. Holzwarth, D. Janssen, and R. V. Jolos, Nucl. Phys. **A261**, 1 (1976). * [8] T. Kishimoto and T. Tamura, Nucl. Phys. **A270**, 317 (1976). * [9] H. Tsukuma, H. Thorn, and K. Takada, Nucl. Phys. **A466**, 70 (1987). * [10] H. Sakamoto and T. Kishimoto, Nucl. Phys. **A528**, 73 (1991). * [11] E. R. Marshalek, Nucl. Phys. **A347**, 253 (1980). * [12] E. R. Marshalek, Phys. Lett. **95B**, 337 (1980). * [13] T. Kishimoto and T. Tamura, Phys. Rev. **C27**, 341 (1983). * [14] K. Takada, Prog. Theor. Phys. Suppl. **141**, 179 (2001). * [15] H. Sakamoto and T. Kishimoto, Nucl. Phys. **A486**, 1 (1988). * [16] T. Marumori, K. Takada, and F. Sakata, Suppl. Prog. Theor. Phys. **71**, 1 (1981). * [17] K. Taniguchi and Y. Miyanishi, Prog. Theor. Phys. **84**, 568 (1990). * [18] K. Taniguchi and Y. Miyanishi, Prog. Theor. Phys. **86**, 151 (1991). * [19] T. Kishimoto, T. Kammuri, and H. Sakamoto, Prog. Theor. Phys. **85**, 1057 (1991). * [20] K. Takada, Phys. Rev. **C34**, 750 (1986). * [21] K. Takada, Phys. Rev. **C38**, 2450 (1988). * [22] A. Kajiyama, K. Taniguchi, and Y. Miyanishi, Prog. Theor. Phys. **101**, 579 (1999). * [23] M. Sato, Y. R. Shimizu, and K. Takada, Prog. Theor. Phys. **102**, 287 (1999). * [24] T. Kishimoto and T. Tamura, Nucl. Phys. **A192**, 246 (1972). * [25] K. Taniguchi, A. Kajiyama, and Y. Miyanishi, Prog. Theor. Phys. **92**, 975 (1994). ## Appendix A Formulae of the product of the pair operators We denote \(B_{q}\), \(X_{t^{\prime\prime}}\), or \(X_{\bar{t}^{\prime}}\) as \(O_{F}\).
The following equations hold: \[\widetilde{X_{t^{\prime}}O_{F}}=\breve{1}_{B}(X_{t^{\prime}})_{L}\breve{1}_{B}(O_{F})_{L}\hat{\mathcal{Z}}\breve{1}_{B}, \tag{A1a}\] \[\widetilde{O_{F}^{\dagger}X_{t}^{\dagger}}=\breve{1}_{B}(O_{F}^{\dagger})_{L}(X_{t}^{\dagger})_{L}\hat{\mathcal{Z}}\breve{1}_{B}. \tag{A1b}\] \[\widetilde{X_{t}^{\dagger}X_{t^{\prime}}}=\breve{1}_{B}\left\{(X_{t}^{\dagger})_{D}(X_{t^{\prime}})_{D}-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\bar{t}^{\prime}_{1}}Y(tt_{1}t_{2}\bar{t}^{\prime}_{1})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}(X_{t^{\prime}})_{L}(X_{\bar{t}^{\prime}_{1}})_{L}\right\}\hat{\mathcal{Z}}\breve{1}_{B}. \tag{A2}\] \[\widetilde{X_{\bar{t}}^{\dagger}X_{\bar{t}^{\prime}}^{\dagger}}=\breve{1}_{B}(X_{\bar{t}}^{\dagger})_{L}(X_{\bar{t}^{\prime}}^{\dagger})_{L}\hat{\mathcal{Z}}\breve{1}_{B}+O(\Gamma^{5}), \tag{A3a}\] \[\widetilde{X_{\bar{t}}X_{\bar{t}^{\prime}}}=\breve{1}_{B}(X_{\bar{t}})_{L}(X_{\bar{t}^{\prime}})_{L}\hat{\mathcal{Z}}\breve{1}_{B}+O(\Gamma^{5}). \tag{A3b}\] \[\widetilde{X_{\bar{t}}^{\dagger}X_{\bar{t}^{\prime}}}=\breve{1}_{B}(X_{\bar{t}}^{\dagger})_{L}(X_{\bar{t}^{\prime}})_{L}\hat{\mathcal{Z}}\breve{1}_{B}+O(\Gamma^{5}). \tag{A4}\] \[\widetilde{B_{q}X_{\mu^{\prime}}}=(B_{q})_{D}\widetilde{X_{\mu^{\prime}}}+\sum_{t}\sum_{\bar{t}^{\prime}}\Gamma_{q}^{\bar{t}^{\prime}t}b_{t}^{\dagger}\widetilde{X_{\bar{t}^{\prime}}X_{\mu^{\prime}}}, \tag{A5a}\] \[\widetilde{X_{\mu}^{\dagger}B_{q}}=\widetilde{X_{\mu}^{\dagger}}(B_{q})_{D}+\sum_{t^{\prime}}\sum_{\bar{t}}\Gamma_{q}^{t^{\prime}\bar{t}}\widetilde{X_{\mu}^{\dagger}X_{\bar{t}}^{\dagger}}b_{t^{\prime}}. \tag{A5b}\] ## Appendix B Proof of \({\bf W}(N)={\bf 0}(N)\) for \(N\geq 3\) \(\hat{Z}(N)\) is related to \(\hat{Z}^{(A)}(N)\) as \[\hat{Z}(N)=\breve{1}_{B}\hat{Z}^{(A)}(N)\breve{1}_{B}=\hat{1}_{B}(N)\hat{Z}^{(A)}(N)\hat{1}_{B}(N). \tag{B1}\] Introducing \[\begin{array}{rcl}\hat{Z}^{\prime}(N)&=&(\breve{1}_{B}^{(A)}-\breve{1}_{B})\hat{Z}^{(A)}(N)(\breve{1}_{B}^{(A)}-\breve{1}_{B})\\ &=&(\hat{1}_{B}^{(A)}(N)-\hat{1}_{B}(N))\hat{Z}^{(A)}(N)(\hat{1}_{B}^{(A)}(N)-\hat{1}_{B}(N))\end{array} \tag{B2}\] and \[\hat{W}(N)=\breve{1}_{B}\hat{Z}^{(A)}(N)(\breve{1}_{B}^{(A)}-\breve{1}_{B})=\hat{1}_{B}(N)\hat{Z}^{(A)}(N)(\hat{1}_{B}^{(A)}(N)-\hat{1}_{B}(N)), \tag{B3}\] we obtain \[\hat{Z}^{(A)}(N)=\hat{Z}(N)+\hat{W}(N)+\hat{W}(N)^{\dagger}+\hat{Z}^{\prime}(N). \tag{B4}\] Eq. (79) expresses this relation in terms of matrices, where \({\bf Z}^{(A)}(N)\), \({\bf Z}(N)\), \({\bf Z}^{\prime}(N)\), and \({\bf W}(N)\) are the matrices representing \(\hat{Z}^{(A)}(N)\), \(\hat{Z}(N)\), \(\hat{Z}^{\prime}(N)\), and \(\hat{W}(N)\), respectively. If \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\), then \(\hat{W}(2)=0\). On the other hand, from Eq. (73), we obtain \[\hat{Z}^{(A)}(N)=\frac{1}{N}\sum_{\mu}(X_{\mu}^{\dagger})_{D}\hat{Z}^{(A)}(N-1)b_{\mu}. \tag{B5}\] If \(\hat{W}(N-1)=0\), then \[\hat{Z}^{(A)}(N)=\frac{1}{N}\sum_{\mu}(X_{\mu}^{\dagger})_{D}\hat{Z}(N-1)b_{\mu}+\frac{1}{N}\sum_{\mu}(X_{\mu}^{\dagger})_{D}\hat{Z}^{\prime}(N-1)b_{\mu}. \tag{B6}\] On the other hand, \(\hat{1}_{B}(N-1)b_{\mu}(\hat{1}_{B}^{(A)}(N)-\hat{1}_{B}(N))=0\) and \(\hat{1}_{B}(N)(X_{\mu}^{\dagger})_{D}(\hat{1}_{B}^{(A)}(N-1)-\hat{1}_{B}(N-1))=0\) hold. Therefore, if \(\hat{W}(N-1)=0\), then \(\hat{W}(N)=0\). That is, \(\hat{W}(N)=0\) for \(N\geq 3\), and hence \({\bf W}(N)={\bf 0}(N)\) for \(N\geq 3\).
We propose a new boson expansion method using a norm operator. In the small parameter expansion, where the boson approximation is the zeroth-order approximation, the double commutation relations between the phonons whose excitation modes are adopted as boson excitations do not close, which gives rise to infinite expansions; the expansions become infinite regardless of whether the boson expansion is of the Hermitian or the non-Hermitian type. The small parameter expansion does not hold when these commutation relations are closed. The norm operator is expressed using the boson number operator in the physical subspace, which makes it possible to obtain substantially finite boson expansions regardless of the Hermitian or non-Hermitian type. We also point out problems of the conventional boson expansion methods: the normal-ordered linked-cluster expansion theory has failed to refute Marshalek's claim that KT-1 and KT-2 are chimerical boson expansions.
2309.03770
Neural lasso: a unifying approach of lasso and neural networks
In recent years, there is a growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. In this article, the statistical technique lasso for variable selection is represented through a neural network. It is observed that, although both the statistical approach and its neural version have the same objective function, they differ due to their optimization. In particular, the neural version is usually optimized in one-step using a single validation set, while the statistical counterpart uses a two-step optimization based on cross-validation. The more elaborated optimization of the statistical method results in more accurate parameter estimation, especially when the training set is small. For this reason, a modification of the standard approach for training neural networks, that mimics the statistical framework, is proposed. During the development of the above modification, a new optimization algorithm for identifying the significant variables emerged. Experimental results, using synthetic and real data sets, show that this new optimization algorithm achieves better performance than any of the three previous optimization approaches.
David Delgado, Ernesto Curbelo, Danae Carreras
2023-09-07T15:17:10
http://arxiv.org/abs/2309.03770v1
# Neural lasso: a unifying approach of lasso and neural networks ###### Abstract In recent years, there is a growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. In this article, the statistical technique lasso for variable selection is represented through a neural network. It is observed that, although both the statistical approach and its neural version have the same objective function, they differ due to their optimization. In particular, the neural version is usually optimized in one-step using a single validation set, while the statistical counterpart uses a two-step optimization based on cross-validation. The more elaborated optimization of the statistical method results in more accurate parameter estimation, especially when the training set is small. For this reason, a modification of the standard approach for training neural networks, that mimics the statistical framework, is proposed. During the development of the above modification, a new optimization algorithm for identifying the significant variables emerged. Experimental results, using synthetic and real data sets, show that this new optimization algorithm achieves better performance than any of the three previous optimization approaches. neural networks, lasso, cross-validation, feature selection ## 1 Introduction Nowadays, there is a growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. An example of the above can be found in the area of statistical item response theory, and specifically in the development of computerized adaptive tests [1; 2]. Yan, Lewis, and Stocking and, later, Ueno and Songmuang proposed the use of decision trees as an alternative to the computerized adaptive tests [3; 4]. Later, Delgado-Gomez et al. established mathematically an equivalence between these two techniques that allows the administration of computerized adaptive tests in real-time using item selection criteria that are computationally very intensive [5]. Recently, several works using neural networks have been published in this field [6; 7]. Regarding these last works, it is interesting to note the synergies that are being generated between the areas of Statistics and Neural Networks [8; 9]. Representing statistical models using neural networks provides them with the flexibility and optimization methods of the latter. In a previous pilot study, Laria et al. indicated how the least absolute shrinkage and selection operator (lasso) algorithm can be represented as a neural network [10]. Conversely, linking neural networks to statistical models allows to improve the interpretability of the former [11]. These synergies have occurred in several domains of Statistics such as regression, dimensional reduction, time series, or quality control [12]. In this article, the widely used lasso algorithm is developed from the perspective of neural networks. To this end, in Section 2, the most relevant features of the lasso algorithm are presented in order to understand the elaboration of its neural version. After that, in Section 3, the entire mathematical formulation proposed by Laria et al. is extended, and the optimization is redefined [10]. Both linear and logistic regressions are considered. In Section 4, several experiments are carried out to evaluate the performance of the neural version and compare it with their statistical counterpart. 
These experiments are performed on both real and simulated data. Finally, the article concludes in Section 5 with a discussion of the obtained results and future research lines.

## 2 The lasso

In the following, the lasso algorithm is briefly presented, highlighting the elements most relevant to our proposal. Hereafter, the lasso algorithm will be referred to as _statistical lasso_ to differentiate it from its neural version throughout the article.

### Formulation

Let \((\mathbf{x}_{i},y_{i})\), \(i=1,\ldots,N\), be a set containing \(N\) observations, where \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) represents the predictors and \(y_{i}\in\mathbb{R}\) are the associated responses. It is assumed that the predictors are standardized and the responses are centered, i.e.,

\[\sum_{i=1}^{N}x_{ij}=0,\qquad\sum_{i=1}^{N}x_{ij}^{2}=1,\qquad\sum_{i=1}^{N}y_{i}=0,\qquad\text{for }j=1,2,\ldots,p \tag{1}\]

The lasso technique was introduced for generalized linear models in the supervised context by Tibshirani [13]. It is formulated as the following optimization problem

\[\underset{\boldsymbol{\beta}}{argmin}\,\mathcal{R}(\mathbf{y},\mathbf{X}\boldsymbol{\beta})+\lambda\lVert\boldsymbol{\beta}\rVert_{1} \tag{2}\]

where \(\mathbf{X}\) is the (standardized) matrix that contains the observations as rows, \(\mathbf{y}\) is the vector with the corresponding labels, \(\boldsymbol{\beta}\) is the vector containing the weights of the regression, and \(\lambda\lVert\boldsymbol{\beta}\rVert_{1}\) is a penalization term. \(\mathcal{R}(\mathbf{y},\mathbf{X}\boldsymbol{\beta})\) represents the error term. In this work, we focus on linear and logistic regression. For linear regression, the error term is given by

\[\mathcal{R}_{Lin}(\mathbf{y},\mathbf{X}\boldsymbol{\beta})=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\mathbf{x}_{i}^{t}\boldsymbol{\beta})^{2} \tag{3}\]

while the error term for logistic regression is given by

\[\mathcal{R}_{Log}(\mathbf{y},\mathbf{X}\boldsymbol{\beta})=\frac{1}{N}\sum_{i=1}^{N}\left[\log(1+e^{\mathbf{x}_{i}^{t}\boldsymbol{\beta}})-y_{i}\mathbf{x}_{i}^{t}\boldsymbol{\beta}\right] \tag{4}\]

### Optimization

Given a fixed \(\lambda\), the values of \(\boldsymbol{\beta}\) are estimated using coordinate descent. As an example, the coordinate descent update for the \(j^{th}\) coefficient in the linear regression case is given by

\[\hat{\beta}_{j}=\mathcal{S}_{\lambda}\Bigl(\frac{1}{N}\langle\mathbf{X}_{j},\mathbf{r}_{j}\rangle\Bigr) \tag{5}\]

where \(\mathbf{X}_{j}\) is the \(j^{th}\) column of matrix \(\mathbf{X}\), the \(i^{th}\) component of \(\mathbf{r}_{j}\) is obtained by

\[\mathbf{r}_{j}(i)=y_{i}-\sum_{k\neq j}x_{ik}\hat{\beta}_{k} \tag{6}\]

and \(\mathcal{S}_{\lambda}\) is the soft-thresholding operator defined by

\[\mathcal{S}_{\lambda}(x)=\text{sign}(x)(|x|-\lambda)_{+} \tag{7}\]

The optimal value of \(\lambda\) is obtained through k-fold cross-validation. A more detailed discussion of the lasso optimization can be found in the book by Hastie, Tibshirani and Wainwright [14]. A schematic representation of the lasso optimization algorithm is shown in the upper panel of Figure 3.
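For concreteness, the update of equations (5)-(7) can be written as a short NumPy sketch of one lasso fit for a fixed \(\lambda\) via cyclic coordinate descent. This is an illustration only, assuming data standardized as in equation (1); the experiments in Section 4 rely on the glmnet package instead.

```python
import numpy as np

def soft_threshold(z, lam):
    # S_lambda(z) = sign(z) * (|z| - lambda)_+ , equation (7)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_coordinate_descent(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for a fixed lambda (equations (5)-(6))."""
    N, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual of equation (6): leave out the j-th predictor
            r_j = y - X @ beta + X[:, j] * beta[j]
            # Coordinate update of equation (5)
            beta[j] = soft_threshold(X[:, j] @ r_j / N, lam)
    return beta
```

The optimal \(\lambda\) would then be chosen by running this fit on each fold of a k-fold cross-validation over a grid of \(\lambda\) values.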
## 3 The neural lasso

As in the previous section, the formulation and optimization of the neural lasso are presented.

### Formulation

The neural representation of the lasso is presented next. It begins with the mathematical formulation for linear regression and, afterward, is extended to logistic regression.

#### Linear regression

When the error term is given by the mean squared error (MSE), lasso can be characterized as the neural network shown in Figure 1. In this case, the loss function is given by

\[\begin{split}\mathcal{L}(\mathbf{w})&=\frac{1}{N}\sum_{i=1}^{N}\Biggl(y_{i}-\gamma\sum_{j=1}^{p}x_{ij}w_{j}\Biggr)^{2}+\ell_{1}\sum_{j=1}^{p}|w_{j}|\\ &=\frac{1}{N}\|\mathbf{y}-\gamma\mathbf{X}\mathbf{w}\|_{2}^{2}+\ell_{1}\|\mathbf{w}\|_{1}\end{split} \tag{8}\]

where \((\mathbf{w},\gamma)\) are the parameters of the network, and \(\ell_{1}\) is a regularization hyper-parameter. Notice that, by setting \(\boldsymbol{\beta}=\gamma\mathbf{w}\) and \(\lambda=\frac{\ell_{1}}{\gamma}\), equation (8) is equivalent to equation (2) using the MSE as error term.

Figure 1: Neural representation of lasso for linear regression

An important aspect to keep in mind is that, unlike the statistical lasso, the neural network optimization does not set the weights exactly to zero. Therefore, it is necessary to establish a condition that determines which weights are zero after each training epoch, and sets them to this value. To do this, we calculate the derivative of the loss function defined in equation (8) with respect to \(w_{j}\)

\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\sum_{i=1}^{N}\Biggl(y_{i}-\gamma\sum_{k=1}^{p}x_{ik}w_{k}\Biggr)x_{ij}+\ell_{1}s_{j} \tag{9}\]

where the term \(s_{j}\) is the subgradient defined by

\[s_{j}=\left\{\begin{array}{cc}1&w_{j}>0\\ -1&w_{j}<0\\ \,[-1,1]&w_{j}=0\end{array}\right. \tag{10}\]

Equation (9) can be rewritten as

\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\Biggl(\sum_{i=1}^{N}y_{i}x_{ij}-\gamma\sum_{i=1}^{N}x_{ij}\sum_{k\neq j}x_{ik}w_{k}-\gamma w_{j}\sum_{i=1}^{N}x_{ij}^{2}\Biggr)+\ell_{1}s_{j} \tag{11}\]

and, equivalently, in vector form

\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\Bigl(\mathbf{X}_{j}^{t}\mathbf{y}-\gamma\mathbf{X}_{j}^{t}\mathbf{X}\mathbf{w}_{j}^{*}-\gamma w_{j}\Bigr)+\ell_{1}s_{j} \tag{12}\]

where \(\mathbf{X}_{j}^{t}\) is the transpose of the \(j^{th}\) column of matrix \(\mathbf{X}\) (containing the observations as rows) and \(\mathbf{w}_{j}^{*}\) is the vector \(\mathbf{w}\) with the \(j^{th}\) component equal to 0. To obtain the above expression, it has been taken into account that \(\sum_{i=1}^{N}x_{ij}^{2}=1\) since the data are standardized. Equating the derivative to 0 leads to

\[w_{j}=\frac{\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\bigl(\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\bigr)-\ell_{1}s_{j}}{\frac{2}{N}\gamma^{2}} \tag{13}\]

from which it follows that

\[w_{j}^{op}=\left\{\begin{array}{ll}\dfrac{\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\bigl(\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\bigr)-\ell_{1}}{\frac{2}{N}\gamma^{2}}&\text{if }\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\bigl(\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\bigr)>\ell_{1}\\[2ex] \dfrac{\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\bigl(\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\bigr)+\ell_{1}}{\frac{2}{N}\gamma^{2}}&\text{if }\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\bigl(\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\bigr)<-\ell_{1}\\[2ex] 0&\text{if }\Bigl|\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\bigl(\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\bigr)\Bigr|\leq\ell_{1}\end{array}\right. \tag{14}\]

Note that, unlike the statistical lasso, which needs the three updates of equation (14), neural lasso only uses the last condition to set weights to zero. This is because the update of the weights is performed implicitly during the training of the network. Concisely, after each training epoch, the network determines whether any of the weights can be replaced by 0 by checking if the last condition of equation (14) is satisfied using the current estimates. This difference will be relevant later in the logistic regression.
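As a minimal PyTorch sketch of this scheme (the experiments use PyTorch, but the class and function names below are illustrative, not the authors' code), one full-batch training step followed by the zeroing check could look as follows:

```python
import torch

class NeuralLasso(torch.nn.Module):
    """Linear neural lasso of Figure 1: y_hat = gamma * (X @ w)."""
    def __init__(self, p):
        super().__init__()
        self.w = torch.nn.Parameter(torch.zeros(p))      # illustrative init
        self.gamma = torch.nn.Parameter(torch.ones(1))   # gamma initialized to 1

    def forward(self, X):
        return self.gamma * (X @ self.w)

def train_step_and_zero(model, X, y, ell1, opt):
    # Loss of equation (8): MSE plus the l1 penalty on w
    opt.zero_grad()
    loss = torch.mean((y - model(X)) ** 2) + ell1 * model.w.abs().sum()
    loss.backward()
    opt.step()
    # Post-epoch zeroing: last condition of equation (14)
    with torch.no_grad():
        N = X.shape[0]
        g = model.gamma
        for j in range(X.shape[1]):
            w_star = model.w.clone()
            w_star[j] = 0.0
            stat = (2.0 / N) * g * (X[:, j] @ (y - g * (X @ w_star)))
            if stat.abs() <= ell1:
                model.w[j] = 0.0
```

In the standard neural lasso, this step would be driven by the Adam optimizer on the training split, with a separate validation split used to select the final weights.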
#### Logistic regression

For the logistic case, the optimization problem is formulated as

\[\underset{\boldsymbol{\beta}}{argmin}\,\frac{1}{N}\sum_{i=1}^{N}\Bigl[\log(1+e^{\mathbf{x}_{i}^{t}\boldsymbol{\beta}+\beta_{0}})-y_{i}\left(\mathbf{x}_{i}^{t}\boldsymbol{\beta}+\beta_{0}\right)\Bigr]+\lambda\lVert\boldsymbol{\beta}\rVert_{1} \tag{15}\]

This problem can be characterized by the neural network shown in Figure 2.

Figure 2: Neural representation of lasso for logistic regression

Note that the linear activation of the output layer has been replaced by a sigmoid. In addition, the MSE has been replaced by the binary cross-entropy function, whose formula is given by

\[-\frac{1}{N}\sum_{i=1}^{N}\bigl[y_{i}\log\hat{y}_{i}+(1-y_{i})\log(1-\hat{y}_{i})\bigr] \tag{16}\]

Therefore, the loss function of the network is given by

\[\mathcal{L}(\mathbf{w})=-\frac{1}{N}\sum_{i=1}^{N}\Biggl(y_{i}\log\left(\frac{1}{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}}\right)+(1-y_{i})\log\left(1-\frac{1}{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}}\right)\Biggr)+\ell_{1}\|\mathbf{w}\|_{1} \tag{17}\]

Equation (17) is equivalent to equation (15), as the following derivation shows. Focusing on the error term of equation (17):

\[\mathcal{R}(\mathbf{y},\mathbf{X}\mathbf{w}) = -\frac{1}{N}\sum_{i=1}^{N}\Biggl(y_{i}\log\left(\frac{1}{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}}\right)+(1-y_{i})\log\left(\frac{1}{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}\right)\Biggr)\]
\[= -\frac{1}{N}\sum_{i=1}^{N}\left(-y_{i}\log(1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}})-(1-y_{i})\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\]
\[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log(1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}})+\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})-y_{i}\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\]
\[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log\left(\frac{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}}{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}\right)+\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\]
\[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log\left(e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}\right)+\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\]
\[= \frac{1}{N}\sum_{i=1}^{N}\left(\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})-y_{i}(\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0})\right)\]

Therefore, (17) becomes

\[\mathcal{L}(\mathbf{w})=\frac{1}{N}\sum_{i=1}^{N}\left(\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})-y_{i}(\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0})\right)+\ell_{1}\|\mathbf{w}\|_{1} \tag{18}\]

Defining, as above, \(\boldsymbol{\beta}=\gamma\mathbf{w}\) and \(\lambda=\ell_{1}/\gamma\), formulation (17) is equivalent to formulation (15). Similar to the linear case, it is necessary to establish a mechanism that sets the weights associated with the non-significant variables to 0.
Taking the derivative of the loss function in equation (18)

\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\gamma x_{ij}e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}-y_{i}\gamma x_{ij}\right)+\ell_{1}s_{j} \tag{19}\]

Unfortunately, unlike the linear case, it is not possible to isolate the vector \(\mathbf{w}\). The problem is, therefore, approached from a different perspective. Rearranging and equating the above equation to zero

\[\frac{\gamma}{N}\sum_{i=1}^{N}\left(\frac{e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}-y_{i}\right)x_{ij}+\ell_{1}s_{j}=0 \tag{20}\]

which is equivalent to

\[\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}}\right)x_{ij}=s_{j} \tag{21}\]

Following Simon et al. [15], this is satisfied for \(w_{j}=0\) if

\[\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}=s_{j} \tag{22}\]

where \(\mathbf{w}_{j}^{*}\) is the vector \(\mathbf{w}\) with the \(j^{th}\) component equal to \(0\). Therefore,

\[\left|\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}\right|=|s_{j}|\leq 1 \tag{23}\]

Rearranging gives

\[\left|\frac{\gamma}{N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x}_{i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}\right|\leq\ell_{1} \tag{24}\]

which can be written in vector form as

\[\left|\frac{\gamma}{N}\mathbf{X}_{j}^{t}\Bigl(\mathbf{y}-\sigma\left(\gamma\mathbf{X}\mathbf{w}_{j}^{*}+\mathbf{b}\right)\Bigr)\right|\leq\ell_{1} \tag{25}\]

where \(\sigma(x)=1/(1+e^{-x})\) is the sigmoid activation function and \(\mathbf{b}\) is the \(N\)-dimensional vector whose components are all equal to \(b_{0}\).

It is important to note that the way in which neural lasso obtains the condition that determines whether a weight is zero differs from that of the statistical lasso. The latter uses a quadratic approximation of the error term since it also needs an explicit expression for the update of the non-zero weights. Neural lasso only needs to know which weights are zero, since the update of the non-zero weights is implicitly performed during the training of the network.
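Equation (25) admits a direct vectorized check. The following sketch (illustrative names, with \(b_{0}\) the bias of the logistic network) returns the indices of the weights that should be set to zero after an epoch:

```python
import torch

def zero_weight_candidates(w, gamma, b0, X, y, ell1):
    """Indices j satisfying the zeroing condition of equation (25)."""
    N, p = X.shape
    zero_idx = []
    for j in range(p):
        w_star = w.clone()
        w_star[j] = 0.0                      # w with its j-th component zeroed
        logits = gamma * (X @ w_star) + b0   # gamma * X w_j* + b
        stat = (gamma / N) * (X[:, j] @ (y - torch.sigmoid(logits)))
        if stat.abs() <= ell1:
            zero_idx.append(j)
    return zero_idx
```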
### Optimization

An important aspect to discuss is how to estimate the neural lasso weights. In this section, three optimization algorithms are proposed, which are shown schematically in the three lower panels of Figure 3.

Figure 3: Statistical lasso and neural lasso algorithms.

Normally, when working with neural networks, the layout is determined by cross-validation and the weights are estimated by simple validation. That is, once the network layout has been determined, the available data are divided into a training set and a validation set. The training set is used to estimate the network parameters, while the validation set is used to evaluate the performance of the network on an independent set. The resulting network is the one whose weights minimize the validation error. As the network layout is predefined in neural lasso, it is only necessary to estimate its weights using simple validation. This way of training the network will be called _standard neural lasso_.

However, the standard neural lasso may present a disadvantage with respect to the statistical lasso because of how the weights are estimated. The fact that the statistical lasso employs cross-validation allows it to use all available observations to obtain an estimate of the error, whereas the standard neural lasso obtains this estimate using only a subset of the observations because it relies on simple validation.

For this reason, a second algorithm called _restricted neural lasso_ has been developed to train the neural network by mimicking the statistical lasso. Restricted neural lasso sets the value of \(\gamma\) equal to 1 and establishes it as a non-trainable parameter. Once the \(\gamma\) value has been fixed, it also sets the value of the hyper-parameter \(\ell_{1}\) to one of the \(\lambda\) values that the statistical lasso considers during its optimization. Having fixed the value of these two parameters, it is possible to perform the cross-validation, and the algorithm selects the value of \(\ell_{1}\) that minimizes the cross-validation error. In a second step, the algorithm estimates the weights using the optimal value of \(\ell_{1}\) and setting \(\gamma\) equal to 1. Assuming that the network layout is correct, the performance of this second optimization method should be practically identical to that obtained by the statistical lasso.

Finally, during the development of this work, a third optimization approach emerged. This new optimization algorithm, called _voting neural lasso_, combines all the optimization approaches discussed above. Specifically, it uses the cross-validation design employed by the restricted neural lasso and by the statistical lasso. However, it does not search for the value of the hyper-parameter \(\lambda\) that minimizes the average validation error across the K configurations. Instead, for each of the K settings, it selects the value of \(\lambda\) with which the smallest validation error is obtained, in a similar way to the standard neural lasso. A variable is considered significant when it has been selected in most of the K settings. In a second phase, the weights of only these significant variables are estimated without taking into account the penalty term. It is important to note that this approach is not a relaxed lasso [16]; a code sketch is given at the end of this section.

To summarize the above, three optimization algorithms with three different purposes will be considered. Standard neural lasso obtains the estimation of the weights using the usual procedure for training neural networks. Restricted neural lasso mimics the statistical lasso method; if these two methods obtain very similar results, a bridge between Statistics and Machine Learning would be built. Finally, voting neural lasso proposes a new way of estimating the weights that can be used for both the statistical and the neural versions.

For the standard neural lasso and the voting neural lasso, the network is initialized with \(\gamma=1\) and \(\ell_{1}=\max_{j}\left|\frac{2}{N}\mathbf{X}_{j}^{t}\mathbf{y}\right|\) for the linear case and \(\ell_{1}=\max_{j}\left|\frac{1}{N}\mathbf{X}_{j}^{t}(\mathbf{y}-\sigma(0))\right|\) for the logistic case. In addition, in this article, the Adam optimization algorithm is used to adjust the weights [17].
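Schematically, the voting neural lasso can be sketched as follows. The helper `fit_standard_neural_lasso`, which trains the network on one fold and returns the weights of the \(\lambda\) with the smallest validation error, is a placeholder for the per-fold training described above; the unpenalized refit is shown for the linear case.

```python
import numpy as np
from sklearn.model_selection import KFold

def voting_neural_lasso(X, y, lambdas, K=5):
    """Per-fold selection, majority vote, then an unpenalized refit."""
    p = X.shape[1]
    votes = np.zeros(p)
    for tr, val in KFold(n_splits=K, shuffle=True).split(X):
        # Placeholder: standard-neural-lasso training on this fold
        w = fit_standard_neural_lasso(X[tr], y[tr], X[val], y[val], lambdas)
        votes += (w != 0)                    # variable selected in this fold?
    significant = votes > K / 2              # selected in most of the K folds
    # Second phase: refit only the significant variables without the penalty
    # (linear case; the logistic case would refit an unpenalized logistic model)
    w_final = np.zeros(p)
    w_final[significant] = np.linalg.lstsq(X[:, significant], y, rcond=None)[0]
    return w_final, significant
```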
## 4 Experimental Results

In order to evaluate the performance of the proposed method, three experiments were conducted. The first two focus on the linear case: the first uses simulated data and the second several real data sets. These two experiments are complemented by a third one aiming to evaluate the proposed method in the logistic case using real data.

### Experiment 1: Linear case, Simulated data

In the first study, the data were simulated according to the model \(y=\mathbf{X}\boldsymbol{\beta}+\epsilon\), where \(\mathbf{X}\) is the matrix containing the observations as rows, \(\epsilon_{i}\sim N(0,1)\) and

\[\beta=[1\,2\,3\,4\,\underbrace{0\,\ldots\,0}_{p-4}]\]

Moreover, the predictors were simulated from a centered normal distribution with \(\rho_{ij}=0.5^{|i-j|}\) for \(1\leq i<j\leq p\). In addition, the columns containing the predictors were randomly rearranged to avoid possible positional effects.

In order to test the performance of the different algorithms, training sets for \(p\in\{20,100,200\}\) with sample size \(N\) equal to 50 were generated. For each of the three scenarios, a repeated validation was performed with 100 runs. In all repetitions, a test set of 1000 observations was generated. As performance measures, we calculated the MSE on the test set, the precision (percentage of non-significant variables correctly identified), and the recall (percentage of significant variables correctly identified). The number of folds K was set to five for the statistical lasso, restricted neural lasso, and voting neural lasso algorithms. Standard neural lasso used 20% of the training data as validation set. Note that the analyses using the non-neural versions were performed using the glmnet R package [18], while the neural versions were implemented in PyTorch [19].
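The data-generating process above translates directly into a NumPy sketch (standardization of the columns, as required by equation (1), is omitted for brevity):

```python
import numpy as np

def simulate(N=50, p=20, seed=0):
    """y = X beta + eps with eps ~ N(0, 1) and rho_ij = 0.5**|i - j|."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    cov = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), cov, size=N)
    beta = np.zeros(p)
    beta[:4] = [1.0, 2.0, 3.0, 4.0]
    perm = rng.permutation(p)        # randomly rearrange predictor columns
    X, beta = X[:, perm], beta[perm]
    y = X @ beta + rng.standard_normal(N)
    return X, y, beta
```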
The obtained results are shown in Table 1. This table shows that the standard neural lasso performs significantly worse than the non-neural version. As noted above, this is because the standard neural lasso only obtains knowledge of its performance during training from the small validation subset. It is also observed that the performance of the statistical lasso and the restricted neural lasso is almost identical, which confirms that the network design is correct. Finally, the best results were obtained by the voting neural lasso algorithm, which significantly improves on those of the three previous approaches.

\begin{table} \begin{tabular}{c l c c c} \hline \hline & Method & MSE & Precision & Recall \\ \hline \hline \multirow{4}{*}{p=20} & Statistical lasso & 1.294 (0.188) & 0.671 (0.207) & 1 (0) \\ & Standard neural lasso & 1.465\({}^{**}\) (0.341) & 0.644 (0.249) & 1 (0) \\ & Restricted neural lasso & 1.298 (0.188) & 0.668 (0.210) & 1 (0) \\ & Voting neural lasso & 1.188\({}^{**}\) (0.144) & 0.934\({}^{**}\) (0.072) & 1 (0) \\ \hline \multirow{4}{*}{p=100} & Statistical lasso & 1.680 (0.419) & 0.848 (0.087) & 0.998 (0.025) \\ & Standard neural lasso & 2.129\({}^{**}\) (0.789) & 0.808\({}^{**}\) (0.136) & 0.998 (0.025) \\ & Restricted neural lasso & 1.695 (0.447) & 0.853 (0.096) & 0.998 (0.025) \\ & Voting neural lasso & 1.419\({}^{**}\) (0.360) & 0.976\({}^{**}\) (0.017) & 0.998 (0.025) \\ \hline \multirow{4}{*}{p=200} & Statistical lasso & 1.806 (0.383) & 0.910 (0.053) & 1 (0) \\ & Standard neural lasso & 2.338\({}^{**}\) (0.717) & 0.827\({}^{**}\) (0.166) & 0.995 (0.035) \\ & Restricted neural lasso & 1.821 (0.395) & 0.910 (0.065) & 1 (0) \\ & Voting neural lasso & 1.403\({}^{**}\) (0.425) & 0.992\({}^{**}\) (0.007) & 0.990 (0.049) \\ \hline \hline \end{tabular} \end{table} Table 1: Results obtained for the linear scenario with synthetic data. For each of the three statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively.

### Experiment 2: Linear case, Real data

The proposed technique was further evaluated using five different real data sets: three obtained from the University of California, Irvine (UCI) repository, and two of our own. The datasets used are the following:

* UCI White wine quality [20]. This database, containing 4898 observations, was built to predict the quality of Portuguese "Vinho Verde" from 11 predictors. In each of the repetitions, the training set consisted of 4000 training observations, and the test set was made up of 898 observations.
* UCI Boston housing [21]. This dataset consists of 506 observations with 12 attributes each. These attributes correspond to the dependent variable, which indicates the median value of owner-occupied homes, and the 11 predictors used to estimate it. In each of the repetitions, the training set consisted of 400 training observations, and the test set was made up of 106.
* UCI Abalone [22]. This dataset was collected to predict the age of the abalone from physical measurements. It contains 4177 observations with nine attributes each. In each of the repetitions, the training set consisted of 3342 training observations, and the test set was made up of 1935.
* Suicide attempt severity. This database contains information on the severity of 349 suicide attempts as measured by the Beck suicide intent scale [23]. The predictors are 30 items of the Barratt impulsivity scale [24]. In each repetition, the training set consisted of 200 training observations, and the test set was made up of 149.
* Attention Deficit Hyperactivity Disorder (ADHD). It contains the responses provided by 59 mothers of children with ADHD to the Behavior Rating Inventory of Executive Function-2, containing 63 items [25]. This dataset has two possible dependent variables, measuring the degree of inattention and the degree of hyperactivity of the children as measured by the ADHD rating scale [26]. The training set for each repetition consists of 47 observations and the validation set consists of 12 observations.

As in the previous experiment, 100 repeated validations are performed, the number of folds K is set to five, and the validation set contains 20% of the training data.

The obtained results, shown in Table 2, strengthen the conclusions obtained with the synthetic data. In particular, the voting neural lasso obtains an MSE similar to that of the statistical lasso, but with the advantage of using a significantly smaller number of predictors. It is also observed that the worst performance is obtained with the standard neural lasso. In addition, it can be seen that the statistical lasso and the restricted neural lasso obtain practically identical results.
\begin{table} \begin{tabular}{l l c c} \hline \hline Dataset & Method & MSE & Selected Var. (\%) \\ \hline \hline \multirow{4}{*}{White wine quality} & Statistical lasso & 0.567 (0.027) & 0.899 (0.087) \\ & Standard neural lasso & 0.566 (0.027) & 0.960\({}^{**}\) (0.073) \\ & Restricted neural lasso & 0.567 (0.027) & 0.898 (0.084) \\ & Voting neural lasso & 0.566 (0.028) & 0.905 (0.070) \\ \hline \multirow{4}{*}{Boston housing} & Statistical lasso & 25.530 (5.603) & 0.864 (0.093) \\ & Standard neural lasso & 25.865 (5.844) & 0.910\({}^{*}\) (0.082) \\ & Restricted neural lasso & 25.529 (5.600) & 0.865 (0.093) \\ & Voting neural lasso & 25.611 (5.625) & 0.764\({}^{*}\) (0.098) \\ \hline \multirow{4}{*}{Abalone} & Statistical lasso & 5.063 (0.420) & 0.981 (0.048) \\ & Standard neural lasso & 5.334\({}^{**}\) (0.458) & 0.571\({}^{**}\) (0) \\ & Restricted neural lasso & 5.061 (0.420) & 0.981 (0.048) \\ & Voting neural lasso & 5.060 (0.418) & 0.964\({}^{*}\) (0.062) \\ \hline \multirow{4}{*}{Suicide attempt} & Statistical lasso & 31.126 (2.380) & 0.095 (0.123) \\ & Standard neural lasso & 31.915\({}^{*}\) (2.276) & 0.683\({}^{**}\) (0.282) \\ & Restricted neural lasso & 31.127 (2.382) & 0.078 (0.133) \\ & Voting neural lasso & 31.025 (2.424) & 0.002\({}^{**}\) (0.008) \\ \hline \multirow{4}{*}{ADHD} & Statistical lasso & 3.616 (1.389) & 0.257 (0.065) \\ & Standard neural lasso & 3.680 (1.433) & 0.334\({}^{**}\) (0.229) \\ & Restricted neural lasso & 3.614 (1.388) & 0.252 (0.064) \\ & Voting neural lasso & 3.787 (1.230) & 0.145\({}^{**}\) (0.034) \\ \hline \multirow{4}{*}{ADHD} & Statistical lasso & 3.465 (1.251) & 0.312 (0.153) \\ & Standard neural lasso & 3.883\({}^{*}\) (1.686) & 0.346 (0.205) \\ & Restricted neural lasso & 3.465 (1.259) & 0.315 (0.159) \\ & Voting neural lasso & 3.637 (1.198) & 0.093\({}^{**}\) (0.029) \\ \hline \end{tabular} \end{table} Table 2: Results obtained for the linear scenario with real data. For each of the two statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively.

### Experiment 3: Logistic case, Real data

This last experiment is intended to test the performance of the neural lasso in the logistic scenario. For this purpose, three databases obtained from the UCI repository and one of our own are used. A brief description of these databases is given below.

* UCI Wisconsin Breast cancer [27]. This dataset is composed of 569 observations. Each observation has 30 predictors and a dependent variable indicating whether the predictors were obtained from a malignant tumor. The training set was made up of 445 observations, while the test set consisted of 124.
* UCI Spam [28]. This dataset is made up of 4601 instances. Each of them contains 57 predictors and one dependent variable indicating whether the email was spam. The training set consisted of 3975 observations, while the test set comprised 626.
* UCI Ionosphere [29]. This database is composed of 351 instances with 34 predictors and a dependent variable indicating whether the radar signal passed through the ionosphere or not. The training set was made up of 299 observations, while the test set consisted of 52.
* Suicidal Behaviour [30]. This database consists of 700 observations. Each observation contains 106 predictors, consisting of responses to items of various scales, and a dependent variable indicating whether the respondent had recently made an attempt.
The set-up used was similar to that of the two previous sections (K equal to five, 100 repetitions, and the validation set composed of 20% of the training data). The results obtained are shown in Table 3.

The results for the logistic case are similar to those obtained in the linear scenario presented in the previous two sections. It is observed that the best results are achieved by the voting neural lasso in three of the four settings; a significantly lower accuracy than the statistical lasso is obtained only on the spam data set. It is also observed that the restricted neural lasso and the statistical lasso obtain equivalent results, which again shows the convergence of the neural technique with the statistical one. A small difference with respect to the previous results is that the standard neural lasso obtains better results than the statistical lasso in two settings (Cancer and Ionosphere).

\begin{table} \begin{tabular}{l l c c} \hline \hline Dataset & Method & ACC & Selected Var. (\%) \\ \hline \hline \multirow{4}{*}{Cancer} & Statistical lasso & 0.963 (0.016) & 0.359 (0.092) \\ & Standard neural lasso & 0.964 (0.018) & 0.160\({}^{**}\) (0.039) \\ & Restricted neural lasso & 0.964 (0.016) & 0.360 (0.096) \\ & Voting neural lasso & 0.969\({}^{**}\) (0.015) & 0.111\({}^{**}\) (0.018) \\ \hline \multirow{4}{*}{Spam} & Statistical lasso & 0.923 (0.011) & 0.926 (0.024) \\ & Standard neural lasso & 0.904\({}^{**}\) (0.014) & 0.528\({}^{**}\) (0.056) \\ & Restricted neural lasso & 0.924 (0.011) & 0.927 (0.024) \\ & Voting neural lasso & 0.915\({}^{**}\) (0.010) & 0.462\({}^{**}\) (0.025) \\ \hline \multirow{4}{*}{Ionosphere} & Statistical lasso & 0.828 (0.048) & 0.448 (0.079) \\ & Standard neural lasso & 0.823 (0.051) & 0.388\({}^{**}\) (0.071) \\ & Restricted neural lasso & 0.827 (0.047) & 0.447 (0.080) \\ & Voting neural lasso & 0.829 (0.048) & 0.245\({}^{**}\) (0.040) \\ \hline \multirow{4}{*}{Suicide} & Statistical lasso & 0.650 (0.030) & 0.093 (0.057) \\ & Standard neural lasso & 0.627\({}^{**}\) (0.048) & 0.166\({}^{**}\) (0.253) \\ & Restricted neural lasso & 0.651 (0.029) & 0.088 (0.061) \\ & Voting neural lasso & 0.652 (0.031) & 0.031\({}^{**}\) (0.010) \\ \hline \end{tabular} \end{table} Table 3: Results obtained for the logistic scenario with real data. For each of the two statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively.

## 5 Conclusions

In this work, the lasso algorithm has been implemented by means of neural networks. Specifically, the network layout has been defined and three possible optimization algorithms for estimating its weights have been compared. It has been observed that estimating the weights in the way a neural network is usually trained results in poor performance. It has also been shown that it is possible to mimic the optimization of the statistical lasso algorithm with a neural network, obtaining almost identical results; the only difference is that the former uses coordinate descent while the latter uses gradient descent. This result brings the fields of Statistics and Machine Learning closer.

Finally, an algorithm using a majority vote has been proposed, which takes into account in how many of the cross-validation scenarios a variable is considered significant. This third algorithm has shown substantially better performance than the widely used statistical lasso. In particular, it has been shown that the voting neural lasso either obtains a lower error or achieves a better variable selection in both the linear and logistic cases. Moreover, these results have been obtained using training sets that present great diversity: they contain a number of observations ranging from only 47 to 4000 and a number of predictors varying from 9 to 200. These results open up new lines of research, such as developing neural versions of other shrinkage techniques like the elastic net, or extending these algorithms to non-linear versions using the flexibility of neural networks. It is also important to note that the development of the voting neural lasso has been limited to simple cross-validation, which is the information available to the other techniques. However, the use of repeated validations or repeated cross-validations, and obtaining confidence intervals on them, might result in a more robust algorithm.
## Funding

This research was partially funded by: Ministerio de Ciencia e Innovación, Proyectos de Transición Ecológica y Transición Digital TED2021-130980B-I00, and Instituto de Salud Carlos III, grant number DTS21/00091.

## Data availability

The real data used in this study for the linear regression problem can be obtained from the UCI repository ([https://archive.ics.uci.edu/datasets](https://archive.ics.uci.edu/datasets)). The real data used for the logistic regression experiment are available from the corresponding author upon request.

## Declarations

**Conflict of interest.** The authors have no relevant financial or non-financial interests to disclose.
2309.15112
InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition
We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition. The innovative nature of our model is highlighted by three appealing properties: 1) Interleaved Text-Image Composition: InternLM-XComposer can effortlessly generate coherent and contextual articles that seamlessly integrate images, providing a more engaging and immersive reading experience. Simply provide a writing instruction, and our system will generate the corresponding manuscript. It can intelligently identify the areas in the text where images would enhance the content and automatically insert the most appropriate visual candidates. 2) Comprehension with Rich Multilingual Knowledge: The text-image comprehension is empowered by training on an extensive multi-modal multilingual database with carefully crafted strategies, resulting in a deep understanding of visual content. 3) State-of-the-art Performance: Our model consistently achieves state-of-the-art results across various mainstream benchmarks for vision-language foundational models, including MME Benchmark, MMBench, MMBench-CN, Seed-Bench, CCBench (Chinese Cultural Benchmark), QBench and Tiny LVLM. Owing to the absence of established metrics for quantitatively assessing text-image composition, we have devised a robust evaluation procedure that comprises both human and GPT4-Vision (GPT4-V) to ensure reliability. Notably, our InternLM-XComposer achieves competitive text-image composition scores compared to public solutions, including GPT4-V and GPT3.5. Collectively, InternLM-XComposer seamlessly blends advanced text-image comprehension and composition, revolutionizing vision-language interaction and offering new insights and opportunities. The InternLM-XComposer model series are publicly available at https://github.com/InternLM/InternLM-XComposer.
Pan Zhang, Xiaoyi Dong, Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Haodong Duan, Songyang Zhang, Shuangrui Ding, Wenwei Zhang, Hang Yan, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen, Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, Jiaqi Wang
2023-09-26T17:58:20
http://arxiv.org/abs/2309.15112v5
# InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition

###### Abstract

We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition. The innovative nature of our model is highlighted by three appealing properties: 1) **Interleaved Text-Image Composition**: InternLM-XComposer can effortlessly generate coherent and contextual articles that seamlessly integrate images, providing a more engaging and immersive reading experience. Simply provide a title, and our system will generate the corresponding manuscript. It can intelligently identify the areas in the text where images would enhance the content and automatically insert the most appropriate visual candidates. 2) **Comprehension with Rich Multilingual Knowledge**: The text-image comprehension is empowered by training on extensive multi-modal multilingual concepts with carefully crafted strategies, resulting in a deep understanding of visual content. 3) **State-of-the-art Performance**: Our model consistently achieves state-of-the-art results across various mainstream benchmarks for vision-language foundational models, including the MME Benchmark, MMBench, MMBench-CN, Seed-Bench, and CCBench (Chinese Cultural Benchmark). Collectively, InternLM-XComposer seamlessly blends advanced text-image comprehension and composition, revolutionizing vision-language interaction and offering new insights and opportunities. The InternLM-XComposer model series with 7B parameters are publicly available at [https://github.com/InternLM/InternLM](https://github.com/InternLM/InternLM)

## 1 Introduction

Over the past year, impressive progress has been made in developing large language models (LLMs) [4, 5, 9, 10, 16, 46, 47, 51, 62, 63, 64]. These state-of-the-art models, including ChatGPT [46], GPT4 [47], and PaLM 2 [10], have shown an unprecedented ability to follow human instructions and solve open-ended tasks. Inspired by the success of PaLM-E [15] and BLIP2 [31], a promising approach is to extend language models to vision-language tasks by leveraging vision features as extra inputs to LLMs. The community has developed several vision-language large models (VLLMs), such as MiniGPT-4 [79], LLaVA [38], and InstructBLIP [11], based on open-source LLMs like LLaMA [63], GLM [16], and InternLM [62]. However, these VLLMs focus on pure text outputs, missing the opportunity to equip the generated text with richer information through auxiliary multimedia content such as images.

In this work, we propose InternLM-XComposer, a vision-language large model with advanced text-image comprehension and composition abilities.

1) **Interleaved Text-Image Composition**. InternLM-XComposer excels in generating long-form content that is interleaved with contextually relevant images, thereby elevating the experience of vision-language interaction. In its operational flow, the framework first crafts text based on human-provided instructions. Subsequently, it autonomously pinpoints optimal locations within the text for image placement and furnishes corresponding, suitable image descriptions. In accordance with the generated descriptions, instead of relying on a text-image generation model for assistance, we opt to source aligned images from a large-scale web-crawled image database for realistic quality and contextual alignment. Moreover, the framework provides flexibility by allowing users to customize the image repository.
Compared to a baseline approach that relies solely on CLIP [52, 68] for image retrieval, InternLM-XComposer offers a more reliable solution for choosing the most appropriate image: we initially select potential image candidates from our database using CLIP, and InternLM-XComposer then leverages its comprehension capabilities to identify the image that optimally complements the content.

2) **Comprehension with Rich Multilingual Knowledge**. LLMs demonstrate remarkable generalizability in handling open-world tasks, a capability attributed to their extensive training data, _e.g_., the 2T tokens used in LLaMA2 [64]. This vast dataset inherently encapsulates a broad spectrum of semantic knowledge across diverse domains. In contrast, existing vision-language datasets are relatively constrained in both volume and diversity. To tackle these limitations, we employ two practical solutions. First, an interleaved multilingual vision-language dataset with over 11 million semantic concepts is collected from public websites. Second, we carefully craft the pre-training and fine-tuning strategies in our training pipeline, where we adopt mixed training data of pure text and image-text pairs, primarily in English and Chinese. Consequently, InternLM-XComposer demonstrates a remarkable proficiency in comprehending a wide array of image content and responding with an extensive repository of multilingual knowledge.

The proposed InternLM-XComposer exhibits superior capabilities in both text-image comprehension and composition, as evidenced by its strong performance in quantitative benchmarks and compelling qualitative demonstrations. It consistently achieves **state-of-the-art** performance across various leading benchmarks for vision-language large models, encompassing the MME Benchmark [71, 18], MMBench [72], and Seed-Bench [29] in English, and MMBench-CN [72] and CCBench (Chinese Cultural Benchmark) [72] for evaluations in Chinese. Notably, our method significantly outperforms existing frameworks on benchmarks in the Chinese language, _i.e_., MMBench-CN [72] and CCBench [72], demonstrating unparalleled multilingual knowledgeability.

We release the InternLM-XComposer series in two versions:

* InternLM-XComposer-VL: The pre-trained and multi-task trained VLLM with InternLM [62] as the initial LLM.
* InternLM-XComposer: The further instruction-tuned VLLM, based on InternLM-XComposer-VL, for interleaved text-image composition and LLM-based AI assistance.

## 2 Related Works

**Large Language Models.** In recent years, the development of large language models has accelerated. Initially, encoder-decoder models such as BERT [14] and T5 [54], as well as decoder-only models like GPT [53], leveraged the Transformer architecture [65] to achieve remarkable results across various NLP tasks. GPT3 [5], employing prompt and in-context learning strategies along with larger models and more data, has shown significant performance in few-shot and zero-shot downstream tasks. As a result, using decoder-only structures based on autoregressive decoding for output prediction has gained popularity among researchers. Google's PaLM [10] further expanded the model parameter size and data volume, setting the performance benchmark of the time. To enhance the practical conversational experience, models like InstructGPT [49] and ChatGPT [46] integrate fine-tuning and reinforcement learning strategies guided by human feedback to steer models toward human-like conversation.
The open-sourcing of the LLaMA [63] model has invigorated research on large language models, leading to the successive open-sourcing of a series of notable large language models such as Vicuna [9], Qwen [51], LLaMA2 [64], Baichuan2 [4], and InternLM [62].

**Multimodal Large Language Models.** Like large language models, vision-language learning has emerged as a research hotspot in computer vision. Initially, these models utilized the Transformer architecture to align image-text pairs from unsupervised samples, enabling strong performance in zero-shot learning tasks like image captioning and image-text retrieval. CLIP [52] aligns image and text features through contrastive learning objectives on large-scale image-text pairs, outperforming supervised learning on ImageNet [13] and exhibiting strong generalization capabilities in various downstream tasks. BLIP [32] devises data selection and generation strategies using cleaner and more diversified data, outperforming CLIP. Information mined from image-text pairs can also effectively provide labels for basic visual tasks [34, 39, 74]. However, these models show limited capabilities for tasks requiring higher-level understanding, such as visual question answering. Benefiting from existing large language models [9] and visual encoders [17], MiniGPT-4 [79] trains multimodal large language models (MLLMs) through pre-training feature alignment and instruction fine-tuning, exhibiting excellent image-text dialogue capabilities. For the instruction fine-tuning stage of MLLMs, a series of studies [38, 66, 78] have explored the impact of the quality, diversity, and specificity of the fine-tuning data on the performance of these models, further enhancing their performance. MMICL [76] can manage multiple image inputs, enabling multimodal models to understand more complex prompts. Otter [30], built on the OpenFlamingo [2] structure and multimodal in-context instruction fine-tuning data, can better follow new instructions. InstructBLIP [11], based on various image-text datasets, fine-tunes MLLMs by constructing prompt templates and introduces the Q-Former [32] to associate more relevant image features in advance. With a larger dataset, this model shows better generalization capabilities across multiple tasks. mPLUG-Owl [70] introduces additional text-only instruction data during the second phase of instruction fine-tuning, improving its capabilities. Shikra [7] and KOSMOS-2 [50] incorporate grounding data during the training phase, enabling the model to develop grounding capabilities, reduce hallucinations, and enhance performance. Qwen-VL [3] uses more pre-training data and higher-resolution inputs to deepen the model's understanding of image-text details.

**Multimodal Retrieval Models.** Image-text retrieval, a pivotal area in multimodal modeling, has seen substantial advancements recently. CLIP [52], utilizing contrastive learning on a large corpus of unsupervised image-text pairs, excels in image-text matching, enabling efficient retrieval in both image-to-text and text-to-image modalities. Expanding on CLIP's foundation, BLIP [32] filters out irrelevant image-text pairs, generating a high-quality subset for retraining, thereby enhancing image-text matching performance. ALIGN [27] extends beyond singular image-text matching by simultaneously accommodating image-text combinations as inputs, retrieving results that meet the given image-text criteria, thus providing a more comprehensive retrieval system.
Despite these models' impressive strides in image-text retrieval, they still fall short in deep image-text understanding. REVEAL [25] addresses this by introducing an end-to-end retrieval-enhanced visual language model that leverages various knowledge source modalities and operates in tandem with a generator, excelling in knowledge-intensive visual question-answering tasks. RA-CM3 [69] further enhances this process by retrieving external relevant knowledge as a supplement, facilitating superior generation of image and text results, and demonstrating remarkable performance in knowledge-intensive image generation and multimodal in-context learning tasks. FROMAGe [28] integrates a large language model [75] with the visual encoder of CLIP ViT/14 [52]; trained with image-text pairs, it utilizes multimodal information from multi-turn dialogues for image retrieval and conversation, offering a more dynamic and interactive approach to retrieval. However, the capabilities of current models are primarily confined to the matching and generation of image-text pairs, revealing a significant gap in their ability to generate comprehensive interleaved image-text articles, which signals a promising direction for future research.

## 3 Method

### Model Architecture

In the proposed InternLM-XComposer framework, as depicted in Figure 2, there are three integral components: a visual encoder, a perceive sampler, and a large language model.

Figure 2: **The architecture of the InternLM-XComposer.** The proposed model comprises three essential components: a visual encoder, a perceive sampler, and a large language model. The training regimen is divided into two distinct phases, namely Stage A and Stage B. In Stage A, which serves as the pre-training phase, both the perceive sampler and the large language model are subjected to optimization procedures. Stage B focuses on supervised fine-tuning, during which the perceive sampler and LoRA [23] are specifically trained.

**Visual Encoder.** The visual encoder in InternLM-XComposer employs EVA-CLIP [17], a refined variant of the standard CLIP [68] enhanced with masked image modeling capabilities, to proficiently capture the visual nuances of the input image. Within this module, images are resized to a consistent dimension of \(224\times 224\) and subsequently dissected into patches with a stride of 14. These patches serve as input tokens for the self-attention mechanisms within the transformer blocks, facilitating the extraction of detailed image embeddings.

**Perceive Sampler.** The perceive sampler within InternLM-XComposer operates as an attentive pooling mechanism designed to condense the initial set of 257 image embeddings down to 64 refined embeddings. These optimized embeddings are subsequently aligned to be compatible with the knowledge structures understood by the large language model. Following BLIP2 [31], we leverage BERT\({}_{base}\) [14] equipped with cross-attention layers, serving as the perceive sampler in our framework.

**Large Language Model.** InternLM-XComposer is anchored on InternLM [62] as its foundational large language model. Notably, InternLM stands as a potent language model equipped with multilingual capabilities, proficient in both English and Chinese. In our framework, we employ the publicly available InternLM-Chat-7B as the large language model. For comprehensive details about InternLM, we refer readers to its official code repository1.

Footnote 1: [https://github.com/InternLM/InternLM](https://github.com/InternLM/InternLM)
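The shape bookkeeping of this pipeline is easy to verify: a \(224\times 224\) image with stride-14 patches yields \((224/14)^{2}=256\) patch tokens plus one class token, i.e., the 257 embeddings mentioned above, which the perceive sampler condenses to 64. The sketch below illustrates this flow with a single cross-attention layer standing in for the BERT-based sampler; the embedding widths (768 for BERT\({}_{base}\), 4096 for a 7B LLM) are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class PerceiveSampler(nn.Module):
    """Attentive pooling: 64 learned queries cross-attend to 257 image tokens."""
    def __init__(self, dim=768, llm_dim=4096, n_queries=64):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(dim, llm_dim)   # project into the LLM token space

    def forward(self, img_tokens):            # (B, 257, dim) from the ViT
        q = self.queries.expand(img_tokens.shape[0], -1, -1)
        out, _ = self.attn(q, img_tokens, img_tokens)
        return self.proj(out)                 # (B, 64, llm_dim)

# 224x224 image, stride-14 patches -> 16*16 = 256 patches + [CLS] = 257 tokens
tokens = torch.randn(2, 257, 768)
print(PerceiveSampler()(tokens).shape)        # torch.Size([2, 64, 4096])
```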
### Training

As shown in Figure 2, the training process of InternLM-XComposer is split into Stage A and Stage B. Stage A serves as the pre-training phase, utilizing vast amounts of data for foundational model training. In contrast, Stage B is the supervised fine-tuning phase, involving a multi-task training step and a following instruction tuning step. The model is named InternLM-XComposer-VL after multi-task training and InternLM-XComposer after instruction tuning.

**Pre-training.** The pre-training phase incorporates large-scale, web-crawled image-text pairs along with interleaved image-text data to pre-train the foundational vision-language model. This data comprises multimodal content in both English and Chinese. To preserve the linguistic capabilities of the large language model, part of the textual data utilized for InternLM's pre-training is also employed in the pre-training phase of InternLM-XComposer. As indicated in Table 1, the multimodal pre-training process employs 1.1 billion images alongside 67.7 billion text tokens, including both public datasets and in-house data collected from public websites covering over 11 million semantic concepts. This corpus includes 50.6 billion English text tokens and 17.1 billion Chinese text tokens. Furthermore, approximately 10 billion text tokens, sampled from the InternLM pre-training dataset, are incorporated to maintain the model's linguistic proficiency. Prior to the training process, all pre-training data underwent a thorough cleaning procedure to ensure its quality and reliability.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Language & Type & Dataset & Selected Images & Selected Text \\ \hline \multirow{6}{*}{English} & Image-text paired & SBU-Caption [48] & 1M & 18M \\ & Image-text paired & Conceptual Captions 3M [58] & 3M & 37M \\ & Image-text paired & Conceptual 12M [6] & 9M & 250M \\ & Image-text paired & LAION400M [56] & 509M & 10B \\ & Image-text paired & In-house Data & 2M & 321M \\ & Interleaved image-text & Multimodal C4 [80] & 332M & 40B \\ \hline \multirow{5}{*}{Chinese} & Image-text paired & WuKong [21] & 31M & 545M \\ & Image-text paired & TaiSu [40] & 44M & 865M \\ & Image-text paired & LAION-CN [55] & 80M & 2B \\ & Image-text paired & In-house Data & 9M & 704M \\ & Interleaved image-text & In-house Data & 85M & 13B \\ \hline \hline \end{tabular} \end{table} Table 1: Details of InternLM-XComposer pre-training multimodal data. LAION-CN represents the Chinese-language subset extracted from the larger LAION-5B corpus, further cleaned utilizing the Chinese CLIP [68]. The volume of text data is counted in terms of the number of tokens. The in-house data are collected from public websites and possess over 11 million semantic concepts. A subset of the in-house data has been made publicly available by WanJuan [22].

During the pre-training phase, the visual encoder is held constant, allowing the optimization to be concentrated on the perceive sampler and the large language model. Initial weights for the perceive sampler and the large language model are sourced from BLIP2 [31] and InternLM [62], respectively. Given that the large language model lacks a native understanding of image embeddings, its optimization within the framework of multimodal pre-training serves to enhance its capability to interpret such embeddings effectively. The training objective centers on next-token prediction, utilizing the cross-entropy loss. The optimization algorithm employed is AdamW, with hyperparameter settings as follows: \(\beta_{1}=0.9\), \(\beta_{2}=0.95\), \(\epsilon=\)1e-8. The maximum learning rates for the perceive sampler and the large language model are configured at 2e-4 and 4e-5, respectively, following a cosine learning rate schedule with a minimum learning rate of 1e-5. Additionally, a linear warm-up is applied over the initial 200 steps. The training procedure employs a batch size of approximately 15.7 million tokens and spans 8,000 iterations. Utilizing such a large batch size in conjunction with a limited number of iterations contributes to stable training dynamics while also aiding the preservation of the inherent capabilities of InternLM.
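These hyperparameters map directly onto a standard PyTorch setup. The sketch below is illustrative; `params_sampler` and `params_llm` stand for the trainable parameter groups of the perceive sampler and the language model.

```python
import math
import torch

def make_optimizer(params_sampler, params_llm,
                   warmup=200, total_steps=8000, min_lr=1e-5):
    # AdamW with beta1=0.9, beta2=0.95, eps=1e-8 and two peak learning rates
    opt = torch.optim.AdamW(
        [{"params": params_sampler, "lr": 2e-4},
         {"params": params_llm, "lr": 4e-5}],
        betas=(0.9, 0.95), eps=1e-8)

    def cosine(step, peak):
        if step < warmup:                     # linear warm-up over 200 steps
            return step / warmup
        t = (step - warmup) / (total_steps - warmup)
        lr = min_lr + 0.5 * (peak - min_lr) * (1 + math.cos(math.pi * t))
        return lr / peak                      # LambdaLR expects a multiplier

    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, [lambda s: cosine(s, 2e-4), lambda s: cosine(s, 4e-5)])
    return opt, sched
```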
**Supervised Fine-Tuning.** In the pre-training phase, image embeddings are aligned with language representations, equipping the large language model with a rudimentary understanding of image content. However, the model still lacks proficiency in utilizing this image information optimally. To address this limitation, we introduce a variety of vision-language tasks that the model undertakes during the subsequent Supervised Fine-Tuning (SFT) stage, which consists of two consecutive steps, _i.e_., _multi-task training_ and _instruction tuning_.

_Multi-task training_. As illustrated in Table 2, the SFT data is constructed from multiple sources and comprises two splits, _i.e_., multi-task training and instruction tuning. The multi-task split endows the model with a diverse range of capabilities, including scene understanding (_e.g_., COCO Caption [8], SBU [48]), location understanding (_e.g_., the Visual Spatial Reasoning dataset [35]), Optical Character Recognition (OCR) (_e.g_., OCR-VQA [45]), and open-ended answering (_e.g_., VQAv2 [1], GQA [26]), among others. Each of these tasks is formulated as a conversational interaction, adhering to the following format:

\[<|User|>:\textit{Instruction}\ <\texttt{eou}>\]
\[<|Bot|>:\textit{Answer}\ <\texttt{eob}>\]

where \(<\texttt{eou}>\) and \(<\texttt{eob}>\) represent the _end-of-user_ and _end-of-bot_ tokens, respectively. For VQA datasets featuring multiple questions per image, we structure them as multi-round conversations with randomly ordered questions, thereby substantially enhancing the efficiency of the SFT process. During this stage, all questions are introduced through manually crafted prompts to augment task diversity.

In order to achieve stable and efficient fine-tuning, we retain the weights of the pre-existing large language model in a frozen state and augment the architecture with Low-Rank Adaptation (LoRA) [23] for the fine-tuning process. The perceive sampler is concurrently trained, albeit with a distinct learning rate. Specifically, LoRA is applied to the _query_, _key_, and _value_ projections of the attention layers as well as the feed-forward network. We find that a high LoRA rank is conducive to imbuing the model with new capabilities; consequently, we set the LoRA rank and alpha parameter both to 256. The model is trained using a global batch size of 256 over 10,000 iterations. The learning rates are set to \(5e^{-5}\) for the LoRA layers and \(2e^{-5}\) for the perceive sampler.
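With the Hugging Face PEFT library, an equivalent adapter configuration could be sketched as follows. The target module names are placeholders that depend on the underlying LLM implementation; only the rank, alpha, and frozen-base setup follow the text above.

```python
from peft import LoraConfig, get_peft_model

# Rank and alpha both 256, applied to the attention q/k/v projections and the
# feed-forward network (module names below are illustrative placeholders)
lora_cfg = LoraConfig(
    r=256,
    lora_alpha=256,
    target_modules=["q_proj", "k_proj", "v_proj", "ffn_up", "ffn_down"],
    task_type="CAUSAL_LM",
)
# llm.requires_grad_(False)            # base LLM weights stay frozen
# llm = get_peft_model(llm, lora_cfg)  # only LoRA (and the sampler) train
```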
\begin{table} \begin{tabular}{l l} \hline \hline Task & Dataset \\ \hline _Multi-task training_ & \\ Caption & COCO [8], SBU [48], TextCaps [59] \\ VQA & VQAv2 [1], GQA [26], OK-VQA [44], VSR [35], IconQA [42] \\ & Text-VQA [60], SOA [41], OCR-VQA [45], In-house data \\ IQG & VQAv2 [1], OK-VQA [44], A-OKVQA [57] \\ Conversation & Visual Dialog [12], LLaVA-150k [38] \\ \hline _Instruction tuning_ & \\ Image-Text Composition & In-house data (Refer to Sec. 3.3) \\ Conversation & LLaVA-150k [38], Alpaca-en\&zh [61], ShareGPT-en\&zh, Oasst-en\&zh, LRV [36] \\ \hline \hline \end{tabular} \end{table} Table 2: Datasets used for Supervised Fine-Tuning.

_Instruction tuning_. To further empower the model's instruction-following and interleaved image-text composition capabilities, as shown in Table 2, we utilize data from pure-text conversation corpora and LLaVA-150k for instruction-based tuning, and leverage the LRV dataset to mitigate hallucinations. The interleaved image-text composition dataset is constructed following the methodology delineated in Section 3.3. We maintain a batch size of 256 and execute the tuning over 1,000 iterations with a small learning rate of \(1e^{-5}\).

### Interleaved Image-Text Composition

To achieve the objective of crafting interleaved image-text compositions, the initial step involves the generation of a text-centric article. Following this, pertinent images are incorporated at well-suited positions within the textual content, thereby enriching the overall narrative and augmenting reader engagement.

**Text Generation.** To facilitate the generation of extended text-based articles, we curate a dataset comprising interleaved image-text compositions. It is noteworthy that the acquired dataset contains noise, particularly in the form of marketing and advertising content. To address this, we employ GPT-4 to assess the noise level of each individual paragraph. Any paragraphs identified as noisy, along with articles where over 30% of the content is classified as such, are filtered out of the dataset. To enable the model to generate text-based articles with respect to specific titles, we formulate the training data in the following manner:

\[<|User|>:\text{\emph{Write an illustrated article based on the given title: }}\{\textit{Title}\}\ <\texttt{eou}>\]
\[<|Bot|>:[para_{1}]\dots[para_{N}]\ <\texttt{eob}>\]

Here, \(\{Title\}\) serves as a placeholder for the article title, while \([para_{1}]\) and \([para_{N}]\) denote the first and last paragraphs, respectively.

To enhance the visual appeal and engagement level of the generated text-centric articles, the incorporation of contextually appropriate images is essential.

Figure 3: **The pipeline of interleaved image-text composition.** (a) Given an input title, the model initially generates a corresponding text-based article.
(b) Subsequent to the article generation, the model is trained to identify suitable image locations and generate corresponding captions for the ensuing steps. (c) A text-image retrieval algorithm is initially employed to restrict the pool of candidate images. Following this, our vision-language model is fine-tuned to make the final image selection, ensuring thematic and visual coherence by considering both the preceding textual content and images within the article. In line with this aim, we establish an exhaustive database that functions as a candidate pool for the selection of images. The overall procedure is divided into two main components: image spotting, which identifies opportune locations within the text for image integration, and image selection, aimed at choosing the most contextually suitable images. A basic strategy for image selection involves summarizing the preceding textual content and retrieving the most closely related image from the available image pool. However, this approach is insufficient for maintaining a coherent thematic flow of images across the article. To remedy this limitation, we employ our vision-language foundation model, which is designed to select a portfolio of images that are not only contextually relevant but also maintain thematic consistency throughout the article. To enhance computational efficiency, we initially employ a retrieval mechanism to reduce the size of the candidate image pool; subsequent to this, our vision-language model is deployed to perform the final image selection from this narrowed set of candidates. Consequently, the overarching task is decomposed into image spotting and captioning, along with image retrieval and selection. **Image Spotting and Captioning.** Leveraging the acquired interleaved image-text compositions, pinpointing image locations becomes a straightforward task. For subsequent image retrieval, it is imperative to generate an appropriate caption, enabling the application of various text-image retrieval algorithms. A straightforward approach involves employing a large language model to summarize the preceding content as a caption. Nonetheless, due to limitations in the model's capacity (e.g., 7B), captions generated by the pre-trained language model often miss the central theme or concept of the article. To mitigate this challenge, we adopt a supervised fine-tuning approach, utilizing caption data generated via GPT-4. For the creation of this data, GPT-4 is provided with the text-based article and an image location, and is instructed to generate a caption that remains coherent with the article's overarching theme and concept, specifically for image retrieval purposes. Upon data generation, the training data is structured as follows: \[<|User|>:[seg_{1}][para_{1}]\dots[seg_{N}][para_{N}]\ \textit{Based}\dots\] **Image Retrieval and Selection.** Having obtained the captions, a variety of text-image retrieval methods become available for use. In this work, we opt for the CLIP model, capitalizing on its proven efficacy in zero-shot classification tasks. We compute the similarity scores between the generated caption and each image in the candidate pool.
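A minimal sketch of this retrieval step is given below, assuming an off-the-shelf CLIP checkpoint accessed through the HuggingFace `transformers` library; the checkpoint name, the `top_m_candidates` helper, and the cutoff \(m\) are illustrative choices, not details from the paper:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def top_m_candidates(caption, image_paths, m=4):
    """Rank candidate images by CLIP text-image similarity; keep the top m."""
    images = [Image.open(p) for p in image_paths]
    inputs = processor(text=[caption], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    scores = out.logits_per_text.squeeze(0)  # one similarity score per image
    order = scores.argsort(descending=True)[:m]
    return [image_paths[i] for i in order]
```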
The top \(m\) images, based on these similarity scores, are then selected to constitute the reduced candidate pool for further processing. To guarantee thematic or conceptual coherence in the images dispersed throughout the article, we deploy our vision-language model to execute the final image selection. When selecting an image to accompany the \(j^{th}\) paragraph, the training data is structured in the following manner: \[<|User|>:[para_{1}]\dots[para_{i}][img_{i}][para_{i+1}]\dots[para_{j}]\ \textit{Based on the given context and candidate images, select the appropriate image. Candidate images include: }[img^{1}_{j}]\dots[img^{m}_{j}]\ <\texttt{eou}>\] \[<|Bot|>:\textit{The }\{selected\ index\}\textit{ image.}\ <\texttt{eob}>\] In this configuration, \([img_{i}]\) denotes the image associated with the \(i^{th}\) paragraph (preceding the \(j^{th}\) paragraph). The terms \([img^{1}_{j}],\dots,[img^{m}_{j}]\) represent the images present in the reduced candidate pool, while \(\{selected\ index\}\) acts as a placeholder indicating the index of the finally selected image. The vision-language model selects images by considering both the preceding text and the prior images within the article. This mechanism enables the model to acquire an understanding of thematic and visual coherence, an expertise derived from the curated dataset of interleaved image-text compositions. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline Method & Language Model & Visual Model & Overall & LR & AR & RR & FP-S & FP-C & CP \\ \hline MMGPT & LLaMA-7B & CLIP ViT-L/14 & 16.0 & 1.1 & 23.8 & 20.7 & 18.3 & 5.2 & 18.3 \\ MiniGPT-4 & Vicuna-7B & EVA-G & 23.0 & 13.6 & 32.9 & 8.9 & 28.8 & 11.2 & 28.3 \\ PandaGPT & Vicuna-13B & ImageBind ViT-H/14 & 30.6 & 15.3 & 41.5 & 22.0 & 20.3 & 20.4 & 47.9 \\ VisualGLM & ChatGLM-6B & EVA-CLIP & 33.5 & 11.4 & 48.8 & 27.7 & 35.8 & 17.6 & 41.5 \\ InstructBLIP & Vicuna-7B & EVA-G & 33.9 & 21.6 & 47.4 & 22.5 & 33.0 & 24.4 & 41.1 \\ LLaVA & LLaMA-7B & CLIP ViT-L/14 & 36.2 & 15.9 & 53.6 & 28.6 & 41.8 & 20.0 & 40.4 \\ LLaMA-Adapter-v2 & LLaMA-7B & CLIP ViT-L/14 & 38.9 & 7.4 & 45.3 & 19.2 & 45.0 & 32.0 & 54.0 \\ G2PT & LLaMA-7B & ViT-G & 39.8 & 14.8 & 46.7 & 31.5 & 41.8 & 34.4 & 49.8 \\ Otter-I & LLaMA-7B & CLIP ViT-L/14 & 48.3 & 22.2 & 63.3 & 39.4 & 46.8 & 36.4 & 60.6 \\ IDEFICS-80B & LLaMA-65B & CLIP ViT-H/14 & 54.6 & 29.0 & 67.8 & 46.5 & 56.0 & 48.0 & 61.9 \\ Shikra & Vicuna-7B & CLIP ViT-L/14 & 60.2 & 33.5 & 69.6 & 53.1 & 61.8 & 50.4 & 71.7 \\ Qwen-VL-Chat & Qwen-7B & ViT-G/16 & 61.2 & 38.6 & 70.9 & 46.9 & 67.7 & 47.6 & 71.9 \\ LMEye & Flan-XL & CLIP ViT-L/14 & 61.3 & 36.9 & 73.0 & 55.4 & 60.0 & 58.0 & 68.9 \\ MMICL & FLAN-T5-XXL & EVA-G & 65.2 & 44.3 & 77.9 & 64.8 & 66.5 & 53.6 & 70.6 \\ mPLUG-Owl & LLaMA2-7B & CLIP ViT-L/14 & 68.5 & **56.8** & 77.9 & 62.0 & 72.0 & 58.4 & 72.6 \\ \hline InternLM-XComposer-VL & InternLM & EVA-G & **74.4** & 50.6 & **82.0** & **76.1** & **79.3** & **59.2** & **81.7** \\ \hline \hline \end{tabular} \end{table} Table 4: **Evaluation of the MMBench test set**. We report the results on the six L-2 abilities, namely Logical Reasoning (LR), Attribute Reasoning (AR), Relation Reasoning (RR), Fine-grained Perception (Single Instance) (FP-S), Fine-grained Perception (Cross Instance) (FP-C), and Coarse Perception (CP). \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline Method & Language Model & Overall & T-Avg. & Sense.U & Inst.Id & Inst.At & Inst.Lo & Inst.Co & Spat.R & Inst.It & Vis.R & Text.R \\ \hline OpenFlamingo & MPT-7B & 42.7 & 39.4 & 53.2 & 45.3 & 40 & 31.2 & 39.3 & 32.6 & 36.1 & 51.4 & 25.9 \\ Otter & MPT-7B & 42.9 & 40.08 & 51.3 & 43.5 & 42.3 & 34.2 & 38.4 & 30.9 & 40.2 & 55.3 & 24.7 \\ IDEFICS-9b-instruct & LLaMA-7B & 44.5 & 43.01 & 55.8 & 45.3 & 42.3 & 40.2 & 36.8 & 34.9 & 37.1 & 55.9 & 38.8 \\ MiniGPT-4 & Vicuna-7B & 47.4 & 42.6 & 56.3 & 49.2 & 45.8 & 37.9 & 45.3 & 32.6 & 47.4 & 57.1 & 11.8 \\ BLIP-2 & Flan-T5-XL & 49.7 & 45.7 & 59.1 & 53.9 & 49.2 & 42.3 & 43.2 & 36.7 & 55.7 & 45.6 & 25.9 \\ IDEFICS-80b-instruct & LLaMA-65B & 53.2 & 54.4 & 64 & 52.6 & 50.8 & 48.3 & 46.1 & 45.5 & 62.9 & 68 & 51.8 \\ Kosmos-2 & Kosmos-1.3B & 54.4 & 49.4 & 63.4 & 57.1 & 58.5 & 44 & 41.4 & 37.9 & 55.7 & 60.7 & 25.9 \\ InstructBLIP & Flan-T5-XL & 57.8 & 49.3 & 60.3 & 58.5 & 63.4 & 40.6 & **58.4** & 38.7 & 51.6 & 45.9 & 25.9 \\ InstructBLIP-Vicuna & Vicuna-7B & 58.8 & 52.2 & 60.2 & 58.9 & 65.6 & 43.6 & 57.2 & 40.3 & 52.6 & 47.7 & 43.5 \\ Qwen-VL & Qwen-7B & 62.3 & 59.6 & 71.2 & 66.4 & 67.7 & 53.5 & 44.8 & 43.8 & 62.9 & 74.9 & 51.2 \\ Qwen-VL-Chat & Qwen-7B & 65.4 & 61.9 & 73.3 & 67.3 & **69.6** & 57.7 & 52.9 & 48.2 & 59.8 & 74.6 & **53.5** \\ \hline InternLM-XComposer-VL & InternLM-7B & **66.9** & **65.2** & **75.0** & **71.7** & 67.6 & **60.8** & 56.2 & **55.3** & **74.4** & **77.0** & 48.5 \\ \hline \hline \end{tabular} \end{table} Table 5: **Evaluation of the Seed-Bench test set**. We report the results on the image-based sub-tasks, including Scene Understanding (Sense.U), Instance Identity (Inst.Id), Instance Attributes (Inst.At), Instance Localization (Inst.Lo), Instance Counting (Inst.Co), Spatial Relation (Spat.R), Instance Interaction (Inst.It), Visual Reasoning (Vis.R), and Text Recognition (Text.R), together with the overall accuracy (Overall) and the task-level average accuracy (T-Avg.). \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline Method & Language Model & Vision Model & Overall & LR & AR & RR & FP-S & FP-C & CP \\ \hline OpenFlamingo & LLaMA 7B & CLIP ViT-L/14 & 1.7 & 1.7 & 4.5 & 0 & 1.5 & 0.8 & 1.3 \\ MiniGPT-4 & Vicuna 7B & EVA-G & 11.9 & 11.6 & 19.4 & 5.7 & 14.6 & 6.5 & 10.9 \\ InstructBLIP & Vicuna 7B & EVA-G & 23.9 & 9.2 & 38.5 & 16.6 & 20.9 & 15 & 30.8 \\ mPLUG-Owl & LLaMA 7B & CLIP ViT-L/14 & 24.9 & 6.9 & 34 & 17.5 & 33.4 & 8.5 & 30.6 \\ VisualGLM & ChatGLM 6B & EVA-CLIP & 25.6 & 5.2 & 42 & 18 & 24.1 & 13 & 34.5 \\ LLaVA & LLaMA 7B & CLIP ViT-L/14 & 36.6 & 15 & 52.4 & 17.1 & 34.4 & 27.5 & 50.3 \\ Qwen-VL-Chat & Qwen-7B & ViT-G/16 & 56.3 & 35.3 & 63.5 & 46 & 63.6 & 43.7 & 64.7 \\ \hline InternLM-XComposer-VL & InternLM & EVA-G & 72.4 & 44.5 & 79.5 & 83.4 & 71.6 & 56.3 & 82.4 \\ \hline \hline \end{tabular} \end{table} Table 6: **Evaluation of the MMBench-CN test set**. We report the results on the six L-2 abilities evaluated in Chinese, namely Logical Reasoning (LR), Attribute Reasoning (AR), Relation Reasoning (RR), Fine-grained Perception (Single Instance) (FP-S), Fine-grained Perception (Cross Instance) (FP-C), and Coarse Perception (CP). ## 4 Experiments ### English-Based Benchmark results In this section, we validate the performance of our InternLM-XComposer-VL on several benchmarks. **MME Benchmark** measures the perception and cognition capability of multi-modality LLMs with carefully crafted questions within 14 sub-tasks.
As shown in Table 3, our InternLM-XComposer-VL reaches a new state-of-the-art result of \(137.11\%\), outperforming the previous best method Qwen-VL-Chat by more than \(5.0\%\). We also underline the top-3 models within each sub-task and observe that our model ranks among the top 3 on 12 of the 14 sub-tasks, demonstrating its outstanding generalization ability. **MMBench** is a challenging hand-crafted benchmark that evaluates vision-related reasoning and perception capabilities with multiple-choice questions. MMBench provides both a dev set and a test set; here we report the test-set performance of our model. As shown in Table 4, our method achieves \(74.4\%\) accuracy and outperforms previous methods by a large margin. Further, our InternLM-XComposer-VL reaches the best performance in 5 of the 6 dimensions. This shows that our model understands image information well and can handle diverse vision-related tasks. **Seed-Bench** is a large-scale multi-modality benchmark built with the help of GPT-4, containing nearly 19K multiple-choice questions covering both images and videos. We report the image-based results in Table 5. It can be observed that our InternLM-XComposer-VL achieves the best overall performance and the highest score on 6 of the 9 sub-tasks. We also note that the number of questions per sub-task is imbalanced; for example, the _Instance Attributes_ task has 4,649 questions, while the _Text Recognition_ task has only 84, so the overall metric is biased toward the tasks with more questions. To better evaluate the general capability of the models across different tasks, we also report the task-level average, similar to the MME benchmark. It can be observed that our model reaches the state-of-the-art average accuracy and outperforms the previous best method by \(3.3\%\). This further proves the general capability of our model. ### Chinese-Based Benchmark results As introduced in Sec. 1, our model is pretrained with rich multilingual knowledge. To demonstrate the effectiveness of this pretraining, we further evaluate its performance on two Chinese-based benchmarks. **MMBench-CN** is the Chinese-translated version of the original MMBench, which measures vision-related understanding and reasoning capability in Chinese. We report the test-set performance in Table 6. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Method & Language Model & Vision Model & Overall & CP & CR & F\&C & HF & S\&B & SR & TS \\ \hline OpenFlamingo & LLaMA 7B & CLIP ViT-L/14 & 0.7 & 1.8 & 0 & 0.8 & 0 & 0 & 2.2 & 1.5 \\ MiniGPT-4 & Vicuna 7B & EVA-G & 1.7 & 7 & 4 & 0 & 0 & 1 & 0 & 0 \\ LLaVA & LLaMA 7B & CLIP ViT-L/14 & 8.3 & 10.5 & 8.1 & 7.6 & 1.7 & 8 & 11.1 & 10.6 \\ VisualGLM & ChatGLM 6B & EVA-CLIP & 9.2 & 14 & 11.1 & 8.4 & 0 & 14 & 4.4 & 7.6 \\ InstructBLIP & Vicuna 7B & EVA-G & 12.1 & 8.8 & 9.1 & 21 & 0 & 12 & 6.7 & 18.2 \\ mPLUG-Owl & LLaMA 7B & CLIP ViT-L/14 & 12.9 & 22.8 & 17.2 & 6.7 & 0 & 25 & 4.4 & 7.6 \\ Qwen-VL-Chat & Qwen-7B & ViT-G/16 & 39.3 & 40.4 & 33.3 & 31.9 & 3.4 & 67 & 51.1 & 42.4 \\ \hline InternLM-XComposer-VL & InternLM-7B & EVA-G & 47.6 & 50.9 & 53.5 & 42 & 10.3 & 55 & 73.3 & 50 \\ \hline \hline \end{tabular} \end{table} Table 7: **Evaluation of the CCBench test set**. We report all the sub-tasks, including Calligraphy Painting (CP), Cultural Relic (CR), Food & Clothes (F&C), Historical Figures (HF), Scenery & Building (S&B), Sketch Reasoning (SR), and Traditional Show (TS).
It can be observed that our method outperforms previous methods by a large margin. Comparing with the English-version performance in Table 4, Qwen-VL-Chat and VisualGLM suffer \(4.9\%\) and \(7.9\%\) performance degradation, respectively, while the gap for our model between the two languages is only \(2.0\%\). This demonstrates the strong multilingual capability of our model. **CCBench** is a Chinese knowledge-related benchmark that challenges models with questions about traditional Chinese culture, including art, food, clothes, landmarks, _etc_. We report the performance in Table 7. It can be observed that the benchmark is quite challenging: most LLaMA-based models fail to answer these questions due to a lack of the corresponding knowledge. Compared with LLaMA-based methods, the Qwen-based model Qwen-VL-Chat shows a much better performance of \(39.3\%\), yet it still trails our InternLM-based model InternLM-XComposer-VL, which reaches a new state-of-the-art performance of \(47.6\%\). This demonstrates the rich Chinese knowledge of InternLM and the strong alignment between vision and language knowledge achieved by our large-scale pretraining. ### Interleaved Image-Text Composition **Qualitative results**. We direct readers to the supplementary materials for detailed qualitative results of the interleaved image-text compositions generated by the InternLM-XComposer and the demonstration of multimodal conversations. ## 5 Conclusion and Outlook In this paper, we present InternLM-XComposer, a vision-language large model with superb multi-modality understanding and composition capability. Benefiting from the rich multilingual and multimodal knowledge of our carefully designed pretraining, our model can, on one hand, generate coherent and contextual articles from a simple title input and integrate images with appropriate content at suitable locations; on the other hand, it shows state-of-the-art performance across various mainstream vision-language benchmarks. We hope InternLM-XComposer can provide new insights for the future exploration of advanced vision-language interaction.
We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition. The innovation of our model is highlighted by three appealing properties: 1) **Interleaved Text-Image Composition:** InternLM-XComposer can effortlessly generate coherent and compelling articles that seamlessly integrate images and text, providing a more immersive reading experience. Given a simple written instruction, our system generates the corresponding body text; it can intelligently identify the regions within the text where images would enhance the content and automatically insert suitable visual candidates. 2) **Comprehension with Rich Multilingual Knowledge:** The understanding of documents and images is trained on multilingual multimodal databases with carefully crafted strategies, fostering a deep understanding of visual content. 3)
2309.05301
Classification of normal phylogenetic varieties for tripods
We provide a complete classification of normal phylogenetic varieties coming from tripods, and more generally, from trivalent trees. Let $G$ be an abelian group. We prove that the group-based phylogenetic variety $X_{G,\mathcal{T}}$, for any trivalent tree $\mathcal{T}$, is projectively normal if and only if $G\in \{\mathbb{Z}_2, \mathbb{Z}_3, \mathbb{Z}_2\times\mathbb{Z}_2, \mathbb{Z}_4, \mathbb{Z}_5, \mathbb{Z}_7\}$.
Rodica Andreea Dinu, Martin Vodička
2023-09-11T08:35:37
http://arxiv.org/abs/2309.05301v1
# Classification of normal phylogenetic varieties for tripods ###### Abstract. We provide a complete classification of normal phylogenetic varieties coming from tripods, and more generally, from trivalent trees. Let \(G\) be an abelian group. We prove that the group-based phylogenetic variety \(X_{G,\mathcal{T}}\), for any trivalent tree \(\mathcal{T}\), is projectively normal if and only if \(G\in\{\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{2}\times\mathbb{Z}_{2}, \mathbb{Z}_{4},\mathbb{Z}_{5},\mathbb{Z}_{7}\}\). Key words and phrases: group-based model, projective variety, polytope, normal ## 1. Introduction Phylogenetics aims to investigate the evolution of species over time and to determine the genetic relationships between species based on their DNA sequences [10]. A correspondence is established to highlight the differences between the DNA sequences, and this is useful for reconstructing the relationships between species [16]. The ancestral relationships can be encoded by the structure of a tree, which is called a _phylogenetic tree_. Phylogenetics reveals connections with several parts of mathematics such as algebraic geometry [2, 8], combinatorics [19, 5] and representation theory [17]. We consider the algebraic variety associated to a phylogenetic tree, called a _phylogenetic variety_. The construction of this variety will be presented in this article and may also be consulted in [8] for more details. For group-based models, this variety is toric [14, 9, 7]. Algebraic and geometric properties of these varieties are presented in [4, 5, 18, 20, 21, 23, 25]. Normality is a very important property, as most of the results in toric geometry work only for normal varieties. The reader may consult [1, 3, 11, 15, 22] and the references therein. A polytope \(P\) whose vertices generate the lattice \(L\) is _normal_ if every point in \(kP\cap L\) can be expressed as a sum of \(k\) points from \(P\cap L\). The normality of a polytope implies the projective normality of the associated projective toric variety. We are interested in understanding the normality property of group-based phylogenetic varieties; hence, the normality of their associated lattice polytopes. A _tripod_ is a tree that has exactly one inner node and \(3\) leaves, and a _trivalent tree_ is a tree for which each node has vertex degree \(\leq 3\). By Theorem 2.2 and [20, Lemma 5.1], it follows that, for a given group, in order to check the normality for any trivalent tree, it is enough to check the normality for the chosen group and the tripod. More generally, if one wants to check the normality of the algebraic variety for this group and any tree, it is enough to verify the normality for claw trees. In addition, by [4, Remark 2.2], non-normality for tripods gives non-normality for any non-trivial tree (i.e. a tree that is not a path). Hence, it is important to understand the normality when the phylogenetic trees are tripods. Actually, the polytope \(P_{G,3}\) associated to the tripod encodes the group multiplication, which makes it an interesting object to study even without a connection to phylogenetics. Buczynska and Wisniewski [2] proved that the toric variety associated to any trivalent tree and the group \(\mathbb{Z}_{2}\) is projectively normal. The same result actually holds for any tree by [25], where normality was proved for the \(3\)-Kimura model (i.e. when \(G=\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)).
Also, computations from [20] show that \(X_{\mathbb{Z}_{4},\mathcal{T}}\) is normal for any trivalent tree \(\mathcal{T}\). In [4, Proposition 2.1], it was shown that, if \(G\) is an abelian group of even cardinality greater than or equal to \(6\), then the polytope \(P_{G,3}\) is not normal. As a consequence, in this case, the polytope \(P_{G,n}\) for any \(n\)-claw tree is not normal, and hence, the algebraic variety \(X_{G,n}\) representing this model is not projectively normal. Hence, not every group-based phylogenetic variety is normal. Thus, there is a need for a classification. We present now the structure of the article. In Section 2, we introduce the algebraic variety associated to a group-based model and its associated polytope. We also provide some results known in the literature, such as the vertex description of \(P_{G,3}\) and how a trivalent tree can be obtained by gluing tripods, and more generally, how an arbitrary tree can be obtained by gluing \(n\)-claw trees (i.e. trees that have one inner node and \(n\) leaves). Section 3 is devoted to proving that the algebraic variety \(X_{G,3}\) is not projectively normal for any abelian group of odd order greater than or equal to \(11\). In order to show the non-normality of a polytope, it is enough to find a lattice point that belongs to the \((mk)\)-th dilation of the polytope in the corresponding lattice, but not to the \(k\)-th dilation, for some integers \(k,m\). Our strategy is to provide such examples for all abelian groups of odd order \(\geq 11\). For this, we consider a cubic graph \(\Gamma=(V,E)\) whose edges are colored blue, yellow, and red such that no two adjacent edges have the same color. We call a function \(f\colon\ E\to G\) _good_ if, for any vertex, the values of \(f\) on the adjacent blue, yellow, and red edges sum to \(0\). In Lemma 3.2, and later in Lemma 3.3, which is the key to our proofs in this section, we give sufficient conditions to be satisfied by a \(3\)-edge-colorable graph that is not bipartite and a good function in order to obtain a lattice point with the required property, which destroys the normality of the polytope. In Theorem 3.4, we provide a suitable graph and show the existence of a good function, proving that if \(G\) is an abelian group of odd order greater than \(43\), then \(P_{G,3}\) is not normal. If the order of \(G\) is an odd number between \(12\) and \(43\), we prove in Theorem 3.5 that, again, the polytope \(P_{G,3}\) is not normal, this time by providing concrete examples of good functions. In Theorem 3.6, we show that \(P_{\mathbb{Z}_{11},3}\) is not normal; here we find a larger graph with a good function that satisfies the properties from Lemma 3.3. In Section 4, we present some computational results and the code we used to obtain those results. Computation 4.1 shows that \(P_{\mathbb{Z}_{5},3}\) and \(P_{\mathbb{Z}_{7},3}\) are normal, while Computation 4.2 shows that \(P_{\mathbb{Z}_{9},3}\) and \(P_{\mathbb{Z}_{3}\times\mathbb{Z}_{3},3}\) are not normal. Section 5 is devoted to the main result of this article, Theorem 5.1, which unifies all the results obtained before and provides a complete classification of the group-based phylogenetic varieties for tripods: **Theorem.** Let \(G\) be an abelian group.
Then the polytope \(P_{G,3}\) associated to a tripod and the group \(G\) is normal if and only if \(G\in\{\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{2}\times\mathbb{Z}_{2}, \mathbb{Z}_{4},\mathbb{Z}_{5},\mathbb{Z}_{7}\}\). Moreover, this result holds for the polytope \(P_{G,\mathcal{T}}\) associated to any trivalent tree. As a consequence, we obtain that: **Corollary.** Let \(G\) be an abelian group. Then the group-based phylogenetic variety \(X_{G,3}\) associated to a tripod and the group \(G\) is projectively normal if and only if \(G\in\{\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{2}\times\mathbb{Z}_{2}, \mathbb{Z}_{4},\mathbb{Z}_{5},\mathbb{Z}_{7}\}\). Moreover, this result holds for the group-based phylogenetic variety \(X_{G,\mathcal{T}}\) associated to any trivalent tree. ## 2. Preliminaries In this section, we introduce the notation and preliminaries that will be used in the article. The reader may also consult [23, 18, 4]. ### The algebraic variety associated to a group-based model We present here the construction of the algebraic variety associated to a model. A _phylogenetic tree_ is a simple, connected, acyclic graph that comes together with some statistical information. We denote its vertices by \(V\) and its edges by \(E\). A vertex \(v\) is called a _leaf_ if it has valency \(1\), and all the vertices that are not leaves will be called _nodes_. The set of all leaves will be denoted by \(\mathcal{L}\). The edges of \(T\) are labeled by transition probability matrices \(\mathcal{M}\), which give the probabilities of changes of the states from one node to another. A _representation_ of a model on a phylogenetic tree \(T\) is an association \(E\to\mathcal{M}\). We denote the set of all representations by \(\mathcal{R}(T)\). Each node of \(T\) is a random variable with \(k\) possible states chosen from the state space \(S\). To each vertex \(v\) of \(T\) we associate an \(|S|\)-dimensional vector space \(V_{v}\) with basis \((v_{s})_{s\in S}\). An element of \(\mathcal{M}\) associated to \(e:=(v_{1},v_{2})\in E\) may be viewed as an element of the tensor product \(V_{v_{1}}\otimes V_{v_{2}}\). We fix a representation \(M\in\mathcal{R}(T)\) and an association \(a\colon\ \mathcal{L}\to S\). Then the probability of \(a\) may be computed as follows: \[P(M,a)=\sum_{\sigma}\prod_{(v_{1},v_{2})\in E}(M(v_{1},v_{2}))_{(\sigma(v_{1} ),\sigma(v_{2}))},\] where the sum is taken over all associations \(\sigma\colon\ V\to S\) whose restrictions to \(\mathcal{L}\) coincide with \(a\). By identifying the association \(a\) with a basis element \(\otimes_{l\in\mathcal{L}}l_{a(l)}\in\bigotimes_{l\in\mathcal{L}}V_{l}\), we get the map: \[\Theta\colon\ M\mapsto\sum_{a}P(M,a)\cdot\bigotimes_{l\in\mathcal{L}}l_{a(l)}\in\bigotimes_{l\in\mathcal{L}}V_{l}.\] The Zariski closure of the image of this map is an algebraic variety that represents the model and we call it a _phylogenetic variety_. For group-based models, we denote this variety by \(X_{G,T}\), where \(G\) is the group representing the model and \(T\) is the tree as above. We call it a _group-based phylogenetic variety_. ### The polytope associated to a group-based phylogenetic variety For special classes of phylogenetic varieties, Hendy and Penny [14] and, later, Erdos, Steel, and Szekely [7], used the Discrete Fourier Transform in order to turn the map \(\Theta\) into a monomial map. In particular, it is known that the group-based phylogenetic variety \(X_{G,T}\) is a _toric variety_.
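As a quick illustration of this parametrization, here is a minimal Python sketch for the tripod, where the only freedom in the association \(\sigma\) is the state of the single inner node; the function name and the example matrix are our own choices, not the paper's:

```python
import itertools
import numpy as np

def tripod_leaf_distribution(edge_matrices):
    """P(M, a) on a tripod: one inner node joined to three leaves.

    edge_matrices: three k x k transition matrices M_1, M_2, M_3, one per
    edge.  Following the formula above, summing over the inner node's
    state s gives P(M, a) = sum_s M_1[s, a_1] * M_2[s, a_2] * M_3[s, a_3].
    """
    k = edge_matrices[0].shape[0]
    probs = {}
    for a in itertools.product(range(k), repeat=3):
        probs[a] = sum(
            edge_matrices[0][s, a[0]] * edge_matrices[1][s, a[1]]
            * edge_matrices[2][s, a[2]]
            for s in range(k)
        )
    return probs

# Example: three copies of a 2-state symmetric transition matrix.
M = np.array([[0.9, 0.1], [0.1, 0.9]])
print(tripod_leaf_distribution([M, M, M])[(0, 0, 0)])
```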
One can therefore use toric methods when working with \(X_{G,T}\), because the geometry of a toric variety is completely determined by the combinatorics of its associated lattice polytope. We denote the polytope associated to the projective toric variety \(X_{G,T}\) by \(P_{G,T}\). When \(T\) is the \(n\)-claw tree, we denote the corresponding polytope by \(P_{G,n}\). In this paper, we will always work with \(3\)-claw trees, which are also called _tripods_, and we will denote the corresponding polytopes by \(P_{G,3}\). ### Vertex description of \(P_{G,3}\) We introduce some notation that will be used when working with the polytopes \(P_{G,3}\subset\mathbb{R}^{3|G|}\). We label the coordinates of a point \(x\in\mathbb{R}^{3|G|}\) by \(x_{g}^{j}\), where \(1\leq j\leq 3\) corresponds to an edge of the tree, and \(g\in G\) corresponds to a group element. For any point \(x\in\mathbb{Z}_{\geq 0}^{3|G|}\), we define its \(G\)-presentation as a \(3\)-tuple \((G_{1},G_{2},G_{3})\) of multisets of elements of \(G\). Every element \(g\in G\) appears exactly \(x_{g}^{j}\) times in the multiset \(G_{j}\). We denote by \(x(G_{1},G_{2},G_{3})\) the point with the corresponding \(G\)-presentation. The vertex description of the polytope \(P_{G,3}\) (and, more generally, of \(P_{G,n}\)) is known and may be consulted in [2, 20, 23]. We recall this description in terms of \(G\)-presentations. **Theorem 2.1**.: _The vertices of the polytope \(P_{G,3}\) associated to a finite abelian group \(G\) and a tripod are exactly the points \(x(G_{1},G_{2},G_{3})\) with \(G_{1}+G_{2}+G_{3}=0\)._ _Let \(L_{G,3}\) be the lattice generated by the vertices of \(P_{G,3}\). Then_ \[L_{G,3}=\{x\in\mathbb{Z}^{3|G|}:\sum_{g,j}x_{g}^{j}\cdot g=0,\ \forall\ 1\leq j,j^{\prime}\leq 3,\ \sum_{g}x_{g}^{j}=\sum_{g}x_{g}^{j^{\prime}}\},\] _where the first sum is taken in the group \(G\)._ In addition, there is an implemented algorithm that can be used to obtain the vertices of \(P_{G,3}\) for several abelian groups \(G\); it can be found in [6]. ### Reduction to simpler trees The toric fiber product of two homogeneous ideals belonging to two multigraded polynomial rings having the same multigrading is a construction due to Sullivant [24], which has interesting applications when working with the polytopes \(P_{G,3}\) (and, even more generally, \(P_{G,n}\)). More details may be consulted in [24, 4]. We will use the following result due to Sullivant: **Theorem 2.2**.: _([24, Theorem 3.10]) The polytope \(P_{G,\mathcal{T}}\) associated to any trivalent tree \(\mathcal{T}\) can be expressed as a fiber product of polytopes \(P_{G,3}\) associated to the tripod. More generally, the polytope \(P_{G,T}\) associated to any tree \(T\) can be expressed as a fiber product of polytopes \(P_{G,n}\) associated to \(n\)-claw trees._ ## 3. Non-normal tripods To show that a polytope \(P\) is not normal, it is sufficient to provide an example of a lattice point \(x\) such that \(x\in mk(P\cap L)\) and \(x\not\in k(P\cap L)\) for some integers \(k,m\). We will provide such an example for all abelian groups \(G\) whose order is an odd number greater than \(12\). Moreover, we will always have \(m=2\). Let \(\Gamma=(V,E)\) be a cubic graph whose edges are colored blue, yellow, and red, such that no two adjacent edges have the same color. For any vertex \(v\) of \(\Gamma\), let us denote by \(e_{B}(v)\) (resp. \(e_{Y}(v)\), \(e_{R}(v)\)) the blue (resp. yellow, red) edge adjacent to the vertex \(v\).
Let us call a function \(f:E\to G\) _good_ if for any vertex \(v\) of \(\Gamma\) we have \[f(e_{B}(v))+f(e_{Y}(v))+f(e_{R}(v))=0.\] To any good function \(f\) we can associate a point \(x(f)\in|V|(P_{G,3})\) in the following way. First, consider any vertex \(v\) of \(\Gamma\). To this vertex we associate the point \(x(v,f):=x(f(e_{B}(v)),f(e_{Y}(v)),f(e_{R}(v)))\), which is a vertex of \(P_{G,3}\) since \(f\) is good. Then we simply define \[x(f):=\sum_{v\in V}x(v,f).\] Note that the point \(x(f)/2\) is still a lattice point. Indeed, all coordinates of \(x(f)\) are even since every edge is counted twice, once for each of its endpoints. Then the conclusion follows from the following auxiliary and more general result: **Lemma 3.1**.: _Let \(x\in L_{G,n}\) be a point such that all coordinates of \(x\) are even. Then \(x/2\in L_{G,n}\)._ Proof.: Since \(x\in L_{G,n}\) we have \[0=\sum_{i=1}^{n}\sum_{g\in G\setminus\{0\}}x_{g}^{i}\cdot g=2\cdot\sum_{i=1}^ {n}\sum_{g\in G\setminus\{0\}}(x_{g}^{i}/2)\cdot g.\] Since \(G\) has odd order, the only solution to the equation \(g+g=0\) in \(G\) is \(g=0\). It follows that \[\sum_{i=1}^{n}\sum_{g\in G\setminus\{0\}}(x_{g}^{i}/2)\cdot g=0\] and, hence, \(x/2\in L_{G,n}\). If the point \(x(f)/2\) cannot be written as a sum of \(|V|/2\) vertices of \(P_{G,3}\), it is the point that proves the non-normality of \(P_{G,3}\). Thus, it remains to show that there exist a graph \(\Gamma\) and a good function \(f:E\to G\) such that the point \(x(f)/2\) has this property. **Lemma 3.2**.: _Let \(\Gamma\) be a cubic 3-colorable graph which is not bipartite and \(f:E\to G\) be a good function with the following property: if \(f(e_{1})+f(e_{2})+f(e_{3})=0\) for a blue edge \(e_{1}\), yellow edge \(e_{2}\) and red edge \(e_{3}\), then \(e_{1}=e_{B}(v),e_{2}=e_{Y}(v),e_{3}=e_{R}(v)\) for some vertex \(v\) of \(\Gamma\). Then \(x(f)/2\) can not be written as a sum of \(|V|/2\) vertices of \(P_{G,3}\)._ Proof.: To begin, let \(m=|V|/2\). We claim that \(f(e)\neq f(e^{\prime})\) for any pair of distinct edges of the same color. Assume for contradiction that \(f(e)=f(e^{\prime})\) for some such pair. Let \(e_{1},e_{2}\) be the edges that share one vertex with the edge \(e\). Then \(0=f(e)+f(e_{1})+f(e_{2})=f(e^{\prime})+f(e_{1})+f(e_{2})\). It follows that the edges \(e^{\prime},e_{1},e_{2}\) also all share one vertex, and therefore \(e=e^{\prime}\). This means that in the \(G\)-presentation of \(x(f)/2\) all multisets consist of \(m\) different elements. Assume that \(x(f)/2=p_{1}+p_{2}+\cdots+p_{m}\), where the \(p_{i}\) are vertices of \(P_{G,3}\). Then \(p_{i}=x(f(e_{1}^{i}),f(e_{2}^{i}),f(e_{3}^{i}))\) for suitable edges \(e_{j}^{i}\) of the graph \(\Gamma\). From the condition in the lemma statement it follows that the edges \(e_{1}^{i},e_{2}^{i},e_{3}^{i}\) are adjacent to one vertex \(v_{i}\) of \(\Gamma\), i.e. \(p_{i}=x(v_{i},f)\). Since \(x(f)/2=p_{1}+p_{2}+\cdots+p_{m}=x(v_{1},f)+\cdots+x(v_{m},f)\), it must be true that every edge is adjacent to exactly one of the vertices \(v_{1},\ldots,v_{m}\). However, this implies that the graph \(\Gamma\) is bipartite, with one part being \(\{v_{1},\ldots,v_{m}\}\), a contradiction. Now it is clear how to find a good function \(f\) for which \(x(f)/2\notin|V|/2\,(P_{G,3}\cap L_{G,3})\). We just need to find a graph \(\Gamma\) and a function \(f\) which satisfy the property from Lemma 3.2. If \(G\) is large enough, one can hope that there must be a suitable choice of a good function \(f\).
However, this would lead to a large bound on the order of \(G\). Thus, we will try to weaken the condition from Lemma 3.2: **Lemma 3.3**.: _Let \(\Gamma\) be a cubic 3-colorable graph and let \(T=\{t_{B},t_{Y},t_{R}\}\) be blue, yellow, and red edges in \(\Gamma\) that form a triangle. Let \(f:E\to G\) be a good function with the following properties:_ _(i)_ \(f(t_{B})+f(t_{Y})+f(t_{R})\neq 0\)_,_ _(ii)_ \(f(t)\neq f(e)\) _for any distinct edges_ \(t\in T,e\in E\) _of the same color,_ _(iii)_ \(f(t)+f(e_{1})+f(e_{2})\neq 0\) _for any edges_ \(t\in T\)_,_ \(e_{1},e_{2}\in E\setminus T\) _of pairwise different colors._ _Then \(x(f)/2\) can not be written as a sum of \(|V|/2\) vertices of \(P_{G,3}\)._ Proof.: Analogously to the proof of Lemma 3.2, we denote \(m=|V|/2\) and assume that \(x(f)/2=p_{1}+\cdots+p_{m}\), where \(p_{i}=x(f(e_{1}^{i}),f(e_{2}^{i}),f(e_{3}^{i}))\) for suitable edges of the graph \(\Gamma\). Note that the second condition implies \[x(f)_{f(t_{B})}^{1}+x(f)_{f(t_{Y})}^{2}+x(f)_{f(t_{R})}^{3}=2+2+2=6,\] which means that for the point \(y:=x(f)/2\) we obtain \[y_{f(t_{B})}^{1}+y_{f(t_{Y})}^{2}+y_{f(t_{R})}^{3}=1+1+1=3.\] For any point \(p=x(f(e_{1}),f(e_{2}),f(e_{3}))\), we have that \[p_{f(t_{B})}^{1}+p_{f(t_{Y})}^{2}+p_{f(t_{R})}^{3}\not\in\{1,3\}.\] This is a consequence of the first and the third conditions. This implies that for all \(p_{i}\), the sum \(p_{f(t_{B})}^{1}+p_{f(t_{Y})}^{2}+p_{f(t_{R})}^{3}\) is even. However, this is a contradiction with \(p_{1}+\cdots+p_{m}=y\), since a sum of even numbers can never be an odd number. Now we provide an example of a graph \(\Gamma\) and good functions \(f\). Let \(\Gamma\) be the following cubic graph on 12 vertices. Figure 1. First, we note that a good function is uniquely determined by the values of the six edges that do not lie in any triangle. We denote the values of these edges by \(h_{1},\dots,h_{6}\in G\), as in Figure 2. Figure 2. Then the values of the other edges are determined by expressions as in the figure. Note that, since \(|G|\) is odd, the function \(g\mapsto\frac{1}{2}\cdot g\) is well-defined for any element \(g\). To see that the values of the other edges are, in fact, determined by \(h_{1},\dots,h_{6}\), let us consider the edges \(e_{B},e_{Y},e_{R}\) which form the upper left triangle. Since \(f\) is good, we must have \[f(e_{B})+f(e_{Y})+h_{3}=f(e_{B})+h_{2}+f(e_{R})=h_{1}+f(e_{Y})+f(e_{R})=0.\] By summing up the first two equations and subtracting the last one, we obtain \[2f(e_{B})+h_{3}+h_{2}-h_{1}=0\Leftrightarrow f(e_{B})=\frac{1}{2}\cdot(h_{1}-h_{2}-h_{3}).\] Analogously, we can determine the values of the other edges. It is easy to check that for any choice of \(h_{1},\dots,h_{6}\) this defines a good function \(f\). **Theorem 3.4**.: _Let \(G\) be an abelian group of odd order which is greater than 43. Then \(P_{G,3}\) is not normal._ Proof.: Clearly, it is sufficient to provide an example of a good function \(f\) that satisfies the conditions from Lemma 3.3. For this, we need to provide the corresponding elements \(h_{1},\ldots,h_{6}\in G\). Note that every condition from Lemma 3.3 requires that a certain linear form in \(h_{1},\ldots,h_{6}\) must be different from \(0\). We want to show that the number of \(6\)-tuples \((h_{1},\ldots,h_{6})\in G^{6}\) that satisfy all conditions is positive. Clearly, there are \(|G|^{6}\) \(6\)-tuples in \(G^{6}\). Let \(L(h_{1},\ldots,h_{6})=l_{1}h_{1}+\cdots+l_{6}h_{6}\) be a linear form such that \(l_{1},\ldots,l_{6}\in\mathbb{Z}\) and \(G\) is \(l_{i}\)-divisible for at least one index \(i\).
Then the number of \(6\)-tuples for which \(L(h_{1},\ldots,h_{6})=0\) is \(|G|^{5}\). To see this, simply pick an index \(i\) such that \(G\) is \(l_{i}\)-divisible. Then the element \(h_{i}\) is uniquely determined by the choice of the other group elements. Now we simply count the conditions in Lemma 3.3. There is one linear form for \((i)\), and there are \(5\cdot 3=15\) conditions of type \((ii)\). We note that some of the conditions of type \((iii)\) are redundant. Let \(T\) be a triangle as in the statement of Lemma 3.3. Consider the condition \(f(t)+f(e_{1})+f(e_{2})\neq 0\), where the edges \(e_{1}\) and \(t\) are adjacent. Let \(t^{\prime}\) be the third edge adjacent to their common vertex. Clearly, \(t^{\prime}\in T\) and it is of the same color as the edge \(e_{2}\). Since \(f(t)+f(e_{1})+f(t^{\prime})=0\), the condition \(f(t)+f(e_{1})+f(e_{2})\neq 0\) is equivalent to \(f(e_{2})\neq f(t^{\prime})\), which is a condition from \((ii)\). Analogously, the condition \(f(t)+f(e_{1})+f(e_{2})\neq 0\) is redundant also in the case where \(e_{2}\) and \(t\) are adjacent. Similarly, consider the condition \(f(t)+f(e_{1})+f(e_{2})\neq 0\) for adjacent edges \(e_{1}\) and \(e_{2}\). Let \(e_{3}\) be the third edge adjacent to their common vertex. As in the previous case, we have that \(f(t)+f(e_{1})+f(e_{2})\neq 0\) is equivalent to \(f(t)\neq f(e_{3})\), which is, again, a condition of type \((ii)\). Thus, it is sufficient to consider only those triples \(t,e_{1},e_{2}\) such that no two of them are adjacent. If \(t=t_{B}\), one can easily count that there are \(9\) pairs of edges \(e_{1},e_{2}\) which are yellow and red, such that no two of the edges \(t,e_{1},e_{2}\) are adjacent. The same is true for \(t=t_{Y}\) and \(t=t_{R}\). Thus there are only \(9+9+9=27\) (non-redundant) conditions of type \((iii)\). One can easily check that each of these conditions indeed corresponds to a linear form \(L(h_{1},\ldots,h_{6})\) such that \(G\) is \(l_{i}\)-divisible for at least one of its coefficients. Therefore, there are at least \(|G|^{6}-(1+15+27)|G|^{5}=|G|^{5}(|G|-43)>0\) \(6\)-tuples of elements \((h_{1},\ldots,h_{6})\) such that the corresponding good function \(f\) satisfies all conditions from Lemma 3.3, which proves the desired result. In the previous result, we just showed the existence of a good function \(f\) without actually providing a concrete construction. An alternative approach is to find a good function \(f\) simply by trying some \(6\)-tuples \(h_{1},\ldots,h_{6}\). This way, we are able to prove non-normality also for some smaller groups: **Theorem 3.5**.: _Let \(G\) be an abelian group of odd order which is greater than 11. Then \(P_{G,3}\) is not normal._ Proof.: If \(|G|>43\), then \(P_{G,3}\) is not normal, due to Theorem 3.4. Here we provide examples of good functions \(f\) for the smaller groups \(G\). Let \(h_{1}=7\), \(h_{2}=7\), \(h_{3}=-4\), \(h_{4}=-6\), \(h_{5}=0\), \(h_{6}=7\). This uniquely determines a good function \(f:E\to\mathbb{Z}\) as in Figure 3. Figure 3. One can easily check that all sums \(f(e_{1})+f(e_{2})+f(e_{3})\) for edges of different colors are in the interval \([-26,21]\). Moreover, with the help of a computer, it is possible to check that none of these sums are equal to \(23\) or \(25\) and that the only sums which are equal to \(0\) are those for which \(e_{1},e_{2},e_{3}\) have a common vertex.
This means that the function \(f_{n}:E\to\mathbb{Z}_{n}\), the composition of \(f\) with the natural map \(i:\mathbb{Z}\to\mathbb{Z}_{n}\), satisfies the condition from Lemma 3.2 for all odd \(n>21\). By Lemma 3.2, the polytope \(P_{G,3}\) is not normal when \(G=\mathbb{Z}_{n}\) for any odd natural number \(n>21\). We are left with just a few abelian groups, namely \[G\in\{\mathbb{Z}_{13},\mathbb{Z}_{15},\mathbb{Z}_{17},\mathbb{Z}_{19},\mathbb{Z}_{21},\mathbb{Z}_{5}^{2},\mathbb{Z}_{3}^{3},\mathbb{Z}_{9}\times\mathbb{Z}_{3}\}.\] For each of them, we will provide a separate example of a good function \(f\) that satisfies the conditions from Lemma 3.3. The examples provided here are not unique, and one can check by computer that they satisfy the required conditions. For each example we provide just the corresponding elements \(h_{1},\ldots,h_{6}\), which uniquely determine the good function \(f\):

* \(G=\mathbb{Z}_{19}\): \(h_{1}=1\), \(h_{2}=1\), \(h_{3}=7\), \(h_{4}=15\), \(h_{5}=0\), \(h_{6}=1\).
* \(G=\mathbb{Z}_{21}\): \(h_{1}=3\), \(h_{2}=3\), \(h_{3}=1\), \(h_{4}=9\), \(h_{5}=0\), \(h_{6}=3\).
* \(G=\mathbb{Z}_{5}^{2}\): \(h_{1}=(1,2)\), \(h_{2}=(0,1)\), \(h_{3}=(1,1)\), \(h_{4}=(2,2)\), \(h_{5}=(0,0)\), \(h_{6}=(1,4)\).
* \(G=\mathbb{Z}_{9}\times\mathbb{Z}_{3}\): \(h_{1}=(5,2)\), \(h_{2}=(2,2)\), \(h_{3}=(3,1)\), \(h_{4}=(1,0)\), \(h_{5}=(0,0)\), \(h_{6}=(5,2)\).

These four examples satisfy the (stronger) condition from Lemma 3.2.

* \(G=\mathbb{Z}_{3}^{3}\): \(h_{1}=(1,1,0)\), \(h_{2}=(0,0,1)\), \(h_{3}=(0,2,2)\), \(h_{4}=(2,2,0)\), \(h_{5}=(0,0,0)\), \(h_{6}=(1,1,0)\). In this case, there is one additional triple of edges of different colors (beyond those that share a vertex) whose sum is equal to \(0\), namely, \(h_{4}+h_{5}+h_{6}=0\). Still, this satisfies the conditions from Lemma 3.3 since none of these edges is contained in the upper left triangle \(T\).
* \(G=\mathbb{Z}_{17}\): \(h_{1}=3\), \(h_{2}=2\), \(h_{3}=2\), \(h_{4}=6\), \(h_{5}=0\), \(h_{6}=3\). In this case, there is one additional triple of edges of different colors whose sum is equal to \(0\), namely, \((h_{1}-h_{5}-h_{6})/2+h_{2}+(-h_{4}-h_{5}+h_{3})/2=0\). Still, this satisfies the conditions from Lemma 3.3 since none of these edges is contained in the upper left triangle \(T\).
* \(G=\mathbb{Z}_{15}\): \(h_{1}=3\), \(h_{2}=10\), \(h_{3}=1\), \(h_{4}=1\), \(h_{5}=0\), \(h_{6}=8\). In this case, there are two additional triples of edges of different colors whose sum is equal to \(0\), namely, \((h_{1}-h_{5}-h_{6})/2+h_{2}+(-h_{4}-h_{5}+h_{3})/2=(h_{4}-h_{2}-h_{6})/2+h_{5}+h_{3}=0\). Still, this satisfies the conditions from Lemma 3.3 since none of these edges is contained in the upper left triangle \(T\).
* \(G=\mathbb{Z}_{13}\): \(h_{1}=1\), \(h_{2}=1\), \(h_{3}=1\), \(h_{4}=9\), \(h_{5}=0\), \(h_{6}=1\). In this case, \(h_{3}=h_{6}\) creates \(4\) additional triples of edges of different colors whose sum is equal to \(0\). Still, each of them contains either \(0\) or two edges from the upper left triangle \(T\), so this also satisfies the conditions from Lemma 3.3.

We consider now the abelian group \(G=\mathbb{Z}_{11}\). For this group, we have not found a good function that satisfies the conditions of Lemma 3.3 on the graph \(\Gamma\) used in the previous examples. We have not checked all good functions, so it is possible that such a function exists, even though we strongly suspect that it does not.
However, we managed to find a larger graph \(\Gamma_{2}\) with a good function that satisfies the properties from Lemma 3.3. **Theorem 3.6**.: _The polytope \(P_{\mathbb{Z}_{11},3}\) is not normal._ Proof.: Consider the graph \(\Gamma_{2}=(V_{2},E_{2})\) and the good function \(f:E_{2}\to\mathbb{Z}_{11}\) displayed in Figure 4. Figure 4. The triangle \(T\) is the gray triangle from the picture, i.e. the edges with values \(2,3,8\). Again, one can check by computer, or even by hand, that this satisfies the conditions of Lemma 3.3 and, therefore, \(P_{\mathbb{Z}_{11},3}\) is not normal. **Remark 3.7**.: _The non-decomposable point from the last proof is the point_ \[x(\{0,0,1,2,4,4,5,10\},\{1,1,1,3,5,5,6,8\},\{0,2,5,6,6,6,8,10\}).\] _It is not difficult to find all triples of elements \((h_{1},h_{2},h_{3})\), where \(h_{i}\) is from the \(i\)-th multiset, that satisfy_ \[h_{1}+h_{2}+h_{3}=0\wedge(h_{1}=2\lor h_{2}=3\lor h_{3}=8).\] _Indeed, the only such triples are \((2,3,6),(2,1,8),(0,3,8)\), which immediately shows that this point is non-decomposable. Despite the fact that we derive this example from the graph, this demonstrates what is happening purely in terms of elements of \(\mathbb{Z}_{11}\)._ ## 4. Computational results In this section, we present a computational way to check whether the polytopes associated to a tripod and the groups \(\mathbb{Z}_{5}\) and \(\mathbb{Z}_{7}\) are normal. In the next computational result, we give a positive answer to this question: **Computation 4.1**.: _The polytope \(P_{G,3}\) associated to the tripod and any of the groups \(G\in\{\mathbb{Z}_{5},\mathbb{Z}_{7}\}\) is normal. Hence, the algebraic varieties representing these models are projectively normal._ Here we present the computational method we used. For obtaining the vertices of the polytopes \(P_{G,3}\) where \(G\in\{\mathbb{Z}_{5},\mathbb{Z}_{7}\}\), we use the following code in Macaulay2 [13], making use of the package "PhylogeneticTrees":

```
loadPackage "PhylogeneticTrees"
n=7;
g=1_(ZZ/n);
G=for i from 0 to n-1 list i*g;
B=for i from 0 to n-1 list {G#i};
M=model(G,B,{});
T=leafTree(3,{});
A=submatrix'(phyloToricAMatrix(T,M),{0,n,2*n},);
I=id_(QQ^(n-1));
PP=for i from 1 to n-1 list n-i;
TM=inverse((I|(-1)*I|0*I)||(I|0*I|I)||((transpose(matrix{PP})+submatrix(I,,{0}))|submatrix'(I,,{0})|0*I|0*I));
LL=for i from 0 to n^2-1 list 1;
AA=(transpose(matrix{LL}))|transpose(A)*TM;
entries(AA)
```

Note that we want to check the normality of the polytope in the lattice spanned by its vertices and not in the lattice \(\mathbb{Z}^{3(|G|-1)}\). Thus, we write the coordinates of the vertices of \(P_{G,3}\) in a basis of \(L_{G,3}\), which is what the last part of the presented code does. More precisely, we use the following basis for the lattice \(L_{\mathbb{Z}_{n},3}\):

* \(x(\{g\},\emptyset,\emptyset)-x(\emptyset,\{g\},\emptyset)\) for all \(g\in\mathbb{Z}_{n}\)
* \(x(\{g\},\emptyset,\emptyset)-x(\emptyset,\emptyset,\{g\})\) for all \(g\in\mathbb{Z}_{n}\)
* \(n\cdot x(\{1\},\emptyset,\emptyset)\)
* \(x(\{\underbrace{1,1,\ldots,1}_{(n-i)\text{-times}},i\},\emptyset,\emptyset)\) for all \(2\leq i\leq n-1\).

After obtaining the vertices of the polytope, we use Polymake [12] in order to check its normality. We also use a similar computational approach to check the normality for the groups \(\mathbb{Z}_{9}\) and \(\mathbb{Z}_{3}^{2}\).
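Independently of Macaulay2, the vertex set itself is easy to enumerate directly from the \(G\)-presentation description in Theorem 2.1. The following short Python sketch is our own illustration (with \(\mathbb{Z}_{n}\) represented as integers mod \(n\)) and lists the vertices of \(P_{\mathbb{Z}_{n},3}\) as 0/1 vectors in \(\mathbb{R}^{3n}\):

```python
from itertools import product

def tripod_vertices(n):
    """Vertices of P_{Z_n,3}: indicator vectors of the G-presentations
    ({g1}, {g2}, {g3}) with g1 + g2 + g3 = 0 in Z_n (Theorem 2.1)."""
    vertices = []
    for g1, g2 in product(range(n), repeat=2):
        g3 = (-g1 - g2) % n
        v = [0] * (3 * n)  # one block of n coordinates per edge of the tripod
        v[g1], v[n + g2], v[2 * n + g3] = 1, 1, 1
        vertices.append(tuple(v))
    return vertices

# P_{Z_5,3} has |G|^2 = 25 vertices, one for each choice of (g1, g2).
assert len(tripod_vertices(5)) == 25
```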
For the groups \(\mathbb{Z}_{9}\) and \(\mathbb{Z}_{3}^{2}\), however, the computation resulted in a negative answer: **Computation 4.2**.: _The polytope \(P_{G,3}\) associated to the tripod and any of the groups \(G\in\{\mathbb{Z}_{9},\mathbb{Z}_{3}^{2}\}\) is not normal. Hence, the algebraic varieties representing these models are not projectively normal._ For the group \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\) we use the following code in Macaulay2 for obtaining the vertices, and, again, for checking normality we use Polymake:

```
loadPackage "PhylogeneticTrees"
g1={1_(ZZ/3),0_(ZZ/3)};
g2={0_(ZZ/3),1_(ZZ/3)};
G={0*g1,g1,2*g1,g2,g1+g2,2*g1+g2,2*g2,g1+2*g2,2*g1+2*g2};
B=for i from 0 to 8 list {G#i};
M=model(G,B,{});
T=leafTree(3,{});
A=phyloToricAMatrix(T,M);
AAA=submatrix'(A,{0,8,16},);
I=id_(QQ^8);
PP=matrix{{3,0,0,0,0,0,0},{0,3,0,0,0,0,0},{1,0,0,0,0,0,0},
  {1,0,1,-1,0,0,0,0},{2,0,1,0,-1,0,0,0},{0,0,1,0,0,1,0,0},
  {1,0,2,0,0,0,-1,0},{2,0,2,0,0,0,0,-1}};
TM=inverse((I|(-1)*I|0*I)||(I|0*I|I)||(PP|0*I|0*I));
ATM=transpose(AAA)*TM;
LL=for i from 0 to 80 list 1;
AA=(transpose(matrix{LL}))|ATM;
L=entries(AA)
```

For the group \(\mathbb{Z}_{9}\) one can use the program [6] for getting the vertices of the polytope \(P_{\mathbb{Z}_{9},3}\), and then we proceed as for the previous groups. ## 5. Classification In this section, we present the main result. Namely, we provide a complete classification of normal group-based phylogenetic varieties for tripods, and more generally, for trivalent trees. **Theorem 5.1**.: _Let \(G\) be an abelian group. Then the polytope \(P_{G,3}\) associated to a tripod and the group \(G\) is normal if and only if \(G\in\{\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{2}\times\mathbb{Z}_{2},\mathbb{Z}_{4},\mathbb{Z}_{5},\mathbb{Z}_{7}\}\)._ Proof.: By Theorem 3.4, \(P_{G,3}\) is not normal for any abelian group \(G\) of odd cardinality greater than \(43\). By Theorem 3.5, the same is true for any abelian group \(G\) of odd cardinality between \(12\) and \(43\), and, by Theorem 3.6, for \(|G|=11\). Now, by [4, Proposition 2.1], \(P_{G,3}\) is not normal for any abelian group of even cardinality greater than or equal to \(6\). The polytopes \(P_{G,3}\) are normal when \(G\in\{\mathbb{Z}_{5},\mathbb{Z}_{7}\}\), by Computation 4.1, and non-normal when \(G\in\{\mathbb{Z}_{9},\mathbb{Z}_{3}\times\mathbb{Z}_{3}\}\), by Computation 4.2. If \(G=\mathbb{Z}_{4}\), the polytope \(P_{G,3}\) is normal by the computations shown in [20]. When \(G=\mathbb{Z}_{2}\times\mathbb{Z}_{2}\), the polytope corresponding to the \(3\)-Kimura model is normal by [25], even for any tree. If \(G=\mathbb{Z}_{3}\), the corresponding polytope is normal by [4, Theorem 2.3], for any tree. When \(G=\mathbb{Z}_{2}\), the polytope corresponding to the Cavender-Farris-Neyman model is normal, by [2] and [25], for any tree. As a consequence of Theorem 5.1 and Sullivant's result (Theorem 2.2), we obtain: **Corollary 5.2**.: _Let \(G\) be an abelian group. Then the polytope \(P_{G,\mathcal{T}}\) associated to any trivalent tree and the group \(G\) is normal if and only if \(G\in\{\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{2}\times\mathbb{Z}_{2}, \mathbb{Z}_{4},\mathbb{Z}_{5},\mathbb{Z}_{7}\}\)._ Therefore, in terms of the associated toric varieties, we get: **Corollary 5.3**.: _Let \(G\) be an abelian group.
Then the group-based phylogenetic variety \(X_{G,3}\) associated to a tripod and the group \(G\) is projectively normal if and only if \(G\in\{\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{2}\times\mathbb{Z}_{2}, \mathbb{Z}_{4},\mathbb{Z}_{5},\mathbb{Z}_{7}\}\). Moreover, this result holds for the group-based phylogenetic variety \(X_{G,\mathcal{T}}\) associated to any trivalent tree._ **Remark 5.4**.: _Let \(G\) be an abelian group. As the non-normality of polytopes associated to tripods implies the non-normality of polytopes associated to any tree, the groups from Theorem 5.1 are the only candidates to give rise to projectively normal toric varieties associated to an arbitrary tree. In addition, for \(\mathbb{Z}_{2}\), \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\), the corresponding phylogenetic varieties are known to be normal for any tree; hence, it remains to understand the normality for arbitrary trees only for the groups \(\mathbb{Z}_{4},\mathbb{Z}_{5}\) and \(\mathbb{Z}_{7}\)._ We also used the computer to check the normality for the \(4\)-claw tree and the above groups, and it turns out that the polytopes are normal. We suspect that these groups give rise to projectively normal phylogenetic varieties for any tree and we propose the following: **Conjecture 5.5**.: _Let \(G\) be an abelian group and \(T\) a tree. Then the group-based phylogenetic variety \(X_{G,T}\) is projectively normal if and only if \(G\in\{\mathbb{Z}_{2},\mathbb{Z}_{3},\mathbb{Z}_{2}\times\mathbb{Z}_{2}, \mathbb{Z}_{4},\mathbb{Z}_{5},\mathbb{Z}_{7}\}\)._ ## Acknowledgement RD was supported by the Alexander von Humboldt Foundation and by a grant of the Ministry of Research, Innovation and Digitization, CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2021-1633, within PNCDI III. MV was supported by Slovak VEGA grant 1/0152/22.
We provide a complete classification of normal phylogenetic varieties coming from tripods, and more generally, from trivalent trees. Let $G$ be an abelian group. We prove that the group-based phylogenetic variety $X_{G,\mathcal{T}}$, for any trivalent tree $\mathcal{T}$, is projectively normal if and only if $G\in \{\mathbb{Z}_2, \mathbb{Z}_3, \mathbb{Z}_2\times\mathbb{Z}_2, \mathbb{Z}_4, \mathbb{Z}_5, \mathbb{Z}_7\}$.
2309.06608
Pump, Dump, and then What? The Long-Term Impact of Cryptocurrency Pump-and-Dump Schemes
The pump and dump scheme is a form of market manipulation attack in which coordinated actors drive up the price of an asset in order to sell at a higher price. Due in part to a lack of enforcement, these schemes are widespread within the cryptocurrency marketplace, but the negative impact of these events on the coins they target is not yet fully understood. Drawing upon a novel dataset of pump events extracted from Telegram channels, an order of magnitude larger than the nearest comparable dataset in the literature, we explore the differing tactics of pumping channels and the long-term impact of pump and dump schemes across 765 coins. We find that, despite a short-term positive impact in some cases, the long-term impact of pump and dump schemes on the targeted assets is negative, amounting to an average 30% relative drop in price a year after the pump event.
Joshua Clough, Matthew Edwards
2023-09-12T21:23:50
http://arxiv.org/abs/2309.06608v1
# Pump, Dump, and then What? The Long-Term Impact of Cryptocurrency Pump-and-Dump Schemes ###### Abstract The pump and dump scheme is a form of market manipulation attack in which coordinated actors drive up the price of an asset in order to sell at a higher price. Due in part to a lack of enforcement, these schemes are widespread within the cryptocurrency marketplace, but the negative impact of these events on the coins they target is not yet fully understood. Drawing upon a novel dataset of pump events extracted from Telegram channels, an order of magnitude larger than the nearest comparable dataset in the literature, we explore the differing tactics of pumping channels and the long-term impact of pump and dump schemes across 765 coins. We find that, despite a short-term positive impact in some cases, the long-term impact of pump and dump schemes on the targeted assets is negative, amounting to an average 30% relative drop in price a year after the pump event. market manipulation, cryptocurrency, telegram, exchanges, fraud ## I Introduction Pump and dump schemes are a type of investment fraud where asset prices are artificially inflated by a group of market participants in order for them to sell the assets at a higher price. Once the instigators sell off the asset and stop promoting it, the price falls significantly and any remaining investors in that particular asset end up losing money [29]. Figure 1 shows an example of a cryptocurrency pump and dump event where the price rapidly rises as participants buy into the coin, peaks for around two minutes, then rapidly falls as some participants sell their holdings in the coin. Fig. 1: Example of a pump and dump event on the cryptocurrency ARKER. This type of scheme is not in itself a new phenomenon, with examples littering stock market history back to the South Sea Bubble of 1720. However, the unregulated and decentralised nature of cryptocurrencies and the widespread adoption of encrypted messaging applications such as Telegram have made executing such schemes possible on a scale never seen before. The near-industrial scale of such schemes means that they have had an increasingly significant impact on the cryptocurrency market as a whole, as such events can be organised on a daily basis and across multiple different exchanges. The current state of regulation with respect to cryptocurrencies is sparse or non-existent [8], and in effect this allows operators to organise such events with no legal consequences or accountability, despite similar schemes in traditional stock markets being illegal, with organisers actively prosecuted by the SEC [30]. The aims and contributions of this paper are: * **New and enlarged cryptocurrency pump event dataset.** This paper introduces and analyses a new, enlarged dataset of pump events, which includes pump events collected directly from Telegram to build upon an existing dataset [23]. Our new dataset expands this existing dataset from \(1,111\) events to \(10,687\). The dataset is made publicly available for future research1, along with the code used to collect it, enabling future updates and expansions. * **Pump strategy analysis.** A full breakdown and analysis of the dataset was performed with respect to the organising channel, exchange and market capitalisation, revealing two strategies taken by pump organisers, referred to as the _quantity vs quality tradeoff_. An analysis of the success of the pumps in the dataset further exhibits differences between these two groups.
* **Long-term impact.** Whilst it has been suggested by Li et al. [24] that a higher concentration of pump events occurring on a given exchange is detrimental to the price of cryptocurrencies in the long term, no study has focused on concretely quantifying and analysing the pricing of pumped coins in the long-term fallout from such events. In our analysis of the pricing impact after the pumps within our dataset, it is revealed that prices of pumped coins fell by 30% after 365 days relative to the wider market, indicating that such schemes have a strongly negative impact on cryptocurrency value. ## II Related Work Huang and Cheng [19] investigate the impact of pump and dump events in the Taiwanese stock market, a regulated market, by analysing market data on manipulations prosecuted by authorities from 1990 to 2010. They examine the cumulative abnormal returns (CAR) from day \(-100\) to \(+100\) relative to events and find that the peak CAR is \(28\%\) but by day \(+100\) the CAR returns to \(0\%\), suggesting that such events increase the volatility of returns during both pump and post-pump periods. Analysing the effects on market prices, they find that prices temporarily increase by over \(24\%\) but that longer-term price impacts are negligible. They further show that there is also a large temporary price impact associated with such events and suggest that this means events have a damaging effect on price accuracy and market efficiency. In one of the first papers describing pump and dumps in the cryptocurrency sphere, Hamrick et al. [14] provide a description of pump and dump schemes and identify factors that affect the success of a pump. They collected data over a 7 month period from Telegram and Discord and broadly categorised the channels into 3 groups: **obvious pumps** that clearly promote pump and dump schemes and provide countdown signals hours and days before the occurrence of the pumps; **target pumps** that avoid directly marketing themselves as a pump and dump channel but instead post coin names without any prior announcement; and **copied pumps** that copy other channels' posts, typically several hours after the original post. They found that coins with lower trading volume (and therefore lower market capitalisation and liquidity) were more likely to produce a successful pump, and also established that the number of exchanges a coin is listed on correlates negatively with the success of a pump. Kamps and Kleinberg [21] propose an unsupervised anomaly detection algorithm to help identify pump and dump events. They found that certain exchanges, specifically Binance and Bittrex, accounted for more pumps than the relative percentage of symbols explored on each, whereas for Kraken, Kucoin and LBank the converse was true. On a coin-pair level, they found that most were targeted \(0-3\) times but there were coins that were targeted up to 13 times, implying that pump and dump groups target specific coins multiple times. Xu and Livshits [34] provide an in-depth analysis of the anatomy of pump and dump schemes and a real-world case study of such an event. They note that participation levels in a group are at a fraction of the total membership. They go on to identify 412 pump events in 358 channels over an 8 month period on 4 exchanges. After these pumps were matched to OHLCV market data using CryptoCompare, they develop a random forest (RF) model and investment strategy for predicting pump events. 
They estimate that they only obtain half of the gain in value caused by the pump and find that, even with this caution, returns of \(60\%\) can be achieved. Li et al. [24] investigated pump events using hand-collected data from Telegram across both CEXs (Binance, Bittrex and Yobit) and DEXs (PancakeSwap)2, from an economic perspective. They explain that these CEXs are not randomly chosen, and display common features such as having little if any "know your customer" requirements3 and a large number of listed cryptocurrencies to use as targets. After analysing market data around the collected events, they found that cryptocurrencies targeted are far more likely to have been pumped before and that effects on traded volumes disappear in one to two days, when viewed in the context of a week-long window after events. Further analysis of cryptocurrencies with a relatively high market capitalisation pumped on CEXs reveals that the effects of pump events are not restricted to only small coins and that on average most pump events increase trading volumes and prices. They conclude by analysing the effects of two opposing policy changes with respect to pump and dumps by Bittrex and Yobit. In the case of Bittrex, which started banning accounts suspected of market manipulation in November 2017, the number of pump events sharply decreased. Yobit, however, announced in October 2018 that it would randomly pump listed cryptocurrencies on its exchange, which generated a negative reaction from investors and reduced the overall prices and volumes of cryptocurrencies listed. They suggest that these opposing effects mean that pump and dump events are damaging to the price and liquidity of cryptocurrencies. Footnote 2: Not used in this project due to the difficulty in obtaining historical pricing data for DEXs. Footnote 3: Binance and Bittrex have since introduced such requirements. Morgia et al. [23] collected a publicly available dataset of pump events4 and proposed a real-time detection model that represented a significant improvement on existing models with respect to speed and accuracy. They also investigated "crowd pumps": pumps that result from actions by a group of market participants that are not directly organised. They present pumps of GameStop and Dogecoin as examples of such events, and compare "crowd pumps" to traditional pump and dump events, explaining that there are three key differences between the two: (1) crowd pumps aim to inflate prices and keep them high, whereas traditional schemes quickly sell at inflated prices for a profit; (2) the target of crowd pumps is known well in advance so any uptick in its price can trigger the start of a large price increase; and (3) crowd pumps can last extended periods of time whereas traditional schemes typically last minutes. In other work, Victor and Hagemann [32] analyse pumps on Binance over the period of a year and find that, on average, a pumped coin performs around \(10\%\) better in the 100 days after a pump event compared to its peers. They also use an XGBoost classifier, which uses tree boosting, to detect pump events and find \(612\) pump-like events across \(172\) coins. Corbet et al. [8] find that cryptocurrencies do not fit into existing regulations. This makes applying any existing market manipulation regulations to cryptocurrencies nigh on impossible. Dhawan and Putnins [11] postulate that no rational market participant would knowingly take part in a pump and dump scheme, as they show them to be a negative-sum game. 
They further explain that their evidence suggests participants treat such events as a game where the goal is to outsell others. There are numerous other papers focussed on building machine learning models to detect pump and dump events. Nilsen [26] used an LSTM network to create a real-time pump event detector with over \(97\%\) accuracy; Tsuchiya [31] used Bayesian linear regression to classify pumps before they occur with a \(75\%\) accuracy rate; and Hu et al. [18] created a sequence-based neural network to identify pumps within Telegram channels. Whilst this is clearly an active field, much previous work has focused on data collected during a specific period and for a specific exchange. By collecting data going back multiple years and leveraging existing datasets, we produce a more comprehensive and extensive dataset that allows us to develop a novel long-term analysis, and which can be updated and adapted for future use. ## III Data Collection and Preparation Our methodology is split into three distinct phases. * **Pump Event Collection (Section III-A).** Identifying Telegram channels organising pump and dump events and collecting information about such events from them. * **Market Data Collection (Section III-B).** Aggregating market OHLCV data for identified pump events. * **Price and Data Analysis (Section III-C).** Using collected market data to analyse the impact of pump events. ### _Pump Event Collection_ There are few large-scale existing datasets on pump events, the lone exception being the one produced by Morgia et al. [23], which spans multiple years (2017 to 2021), but covers only 1,111 pump events. We take this dataset as a basis, and update and extend it through our own process described below. #### Iii-A1 Identifying Telegram Channels We used PumpOlymp5 to retrieve a list of \(800\) Telegram channels for investigation (as used by Xu and Livshits [34]). Further investigation of a subset of this list found that many of these channels were inactive or provided "signals" rather than organising pump and dump events. We filtered the list to find channel names that contained the word "pump", which yielded around \(130\) channels for investigation. Footnote 5: [https://pumpolymp.com](https://pumpolymp.com). #### Iii-A2 Finding and Collecting Pump Events Identifying pump events from channels was automated using the Telethon Python library. The main challenge was the different formats used to announce coins being pumped. As shown in Figure 2, the announcements generally took one of two forms: text containing keywords (e.g., "coin" and "pumping today", as in Figure 2(b)), or text embedded in an image (as in Figure 2(a)). We extracted text from images using pytesseract6 and checked the extracted text against predefined regex patterns (e.g., built around the "#" prefix of a coin ticker) that match identifiable patterns of announcement. These extracted coins were combined with the date and time of the message, also extracted via Telethon, to give a list of per-channel pump events. Footnote 6: [https://pypi.org/project/pytesseract](https://pypi.org/project/pytesseract), powered by Google’s Tesseract-OCR engine, [https://github.com/tesseract-ocr/tesseract](https://github.com/tesseract-ocr/tesseract). Out of the \(130\) channels investigated, \(34\) of them had pump events that could be systematically identified using the methods described above. 
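To make this collection step concrete, the following is a minimal sketch of the Telethon-plus-pytesseract pipeline described above. The API credentials, session name, channel name and ticker regex are illustrative assumptions, not the exact values used in this project.

```python
import asyncio
import re

import pytesseract
from PIL import Image
from telethon import TelegramClient

API_ID, API_HASH = 12345, "your-api-hash"         # placeholder credentials
COIN_PATTERN = re.compile(r"#([A-Z0-9]{2,6})\b")  # e.g. matches "#ARKER"

async def collect_events(channel: str):
    """Return (timestamp, coin) pairs for announcements found in a channel."""
    events = []
    async with TelegramClient("session", API_ID, API_HASH) as client:
        async for msg in client.iter_messages(channel):
            text = msg.message or ""
            if msg.photo:
                # Announcement embedded in an image: OCR it with Tesseract.
                path = await msg.download_media(file="announcement.jpg")
                text += " " + pytesseract.image_to_string(Image.open(path))
            match = COIN_PATTERN.search(text)
            if match:
                events.append((msg.date, match.group(1)))
    return events

# Usage (requires valid Telegram API credentials):
# events = asyncio.run(collect_events("ExamplePumpChannel"))
```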
There were multiple reasons why the remaining channels were unusable, chief among them being channels that either no longer existed or had broken links. Issues also included channels organising pumps on DEXs, such as PancakeSwap, which do not have APIs giving access to historical OHLCV information. Some channels no longer organised pumps, instead providing general cryptocurrency investment "advice" in vague terms. #### Iii-A3 Aggregating and Cleaning Pump Events The final stage of pump event collection was to clean and aggregate pump events collected from Telegram to ensure that individual pump events were attributable to their original channel and that market data for the coins used was retrievable. This was split into three stages, detailed below.
Fig. 2: Different styles of pump announcements.
_Identifying Channels._ Identifying the individual channels and their respective pumps was done using two- to four-character codes generated from the respective channels' names. For example, the channel Hit Pump Angels is represented by the code HPA. The full reference for this can be found in Table XIV, in Appendix B. This was done to allow unique identification for pump events that were broadcast across multiple channels and to ensure compatibility with the system used by Morgia et al.'s dataset7. Footnote 7: See [https://github.com/SystemsLab-Sapienza/pump-and-dump-dataset/blob/master/groups_csv](https://github.com/SystemsLab-Sapienza/pump-and-dump-dataset/blob/master/groups_csv). _Checking Out the Coins._ Once events were uniquely identified, they were merged into a main table containing \(10,687\) pump events. As there is a large amount of listing and delisting of cryptocurrencies on a weekly basis, the main table was then filtered to remove coins that were no longer listed on exchanges' APIs. For example, in the week ending Sunday 19th March 2023, there were 27 new listings across Binance, Bitmart, Hotbit and Kucoin alone [10]. This high turnover of listed coins made these checks essential. Coin checks were done by comparing pumped coins and their respective exchanges against lists of cryptocurrencies available via CCXT, and by extension, the exchanges' APIs. These checks revealed \(459\) pump events with coins that were no longer listed, which were subsequently removed from the dataset. _The Curious Case of Yobit._ Yobit is a popular exchange for executing pump events due to its lack of "know your customer" requirements8 [35]. It has also previously performed its own random pumps on coins listed on its exchange [24], which implies that the exchange wants to encourage pump and dump organisers to use their platform. Unfortunately, Yobit's API only provides OHLCV data for the past 7 days, which makes analysing events from several years ago impossible. As such, the \(364\) pumps identified on Yobit were filtered out and excluded from the analysis. Footnote 8: These are used by exchanges to verify the identity of customers. Applying the above aggregation and cleaning steps gave \(9,191\) pump events that were used for analysis in Section IV. ### _Market Data Collection_ The next phase of the project was to retrieve market data for the pumps via CCXT. This section explains the processes used to do this and the reasoning behind their choice. #### Iii-B1 Data Granularity Deciding on the amount of data to collect and its granularity was the key decision for this stage. 
CCXT, and the exchanges that it interfaces with, allow queries to OHLCV data at varying granularities, ranging from 1 minute intervals to 1 day and 1 month intervals9. In an ideal world, all data collected would be at 1 minute granularity to allow for the highest level of detail in the analysis. Footnote 9: It is worth noting that many exchanges’ APIs purport to only have 1 minute data for a specific period of time (typically 90 days) but no such restriction was found when collecting via CCXT. There are some drawbacks to the approach outlined above, chief among them being the sheer amount of data required to capture multiple years' worth of OHLCV history. Furthermore, analysing volumes over periods longer than a minute (i.e., total volume over an entire day) requires additional calculation and overheads. The solution to this was to retrieve different granularities at different time periods, with higher granularities over time periods closer to pump events. Data was collected at 1 minute (1m) intervals for 1 day either side of a pump event, at 1 hour (1h) intervals for 1 week either side of the event, and at 1 day (1d) intervals from a coin's listing to the current day10. This gives both the benefits of being able to analyse trends over a long time period whilst also being able to see the immediate impacts of events at a high level of granularity. Footnote 10: For the purpose of this project this is 7th March 2023. #### Iii-B2 Back to the Start Collecting data at 1d intervals from a coin's listing date required this date to be found in an efficient way. A basic solution to this would be to send a query spanning a time period from a year before cryptocurrencies became widely traded, such as 2005, and taking the first date data is available for as the listing date. Due to limits on the number of values returned at once by CCXT (in order to satisfy limits imposed by the underlying exchanges' APIs) it would be impossible to implement this effectively without a significant query overhead. To reduce the number of queries, we use a binary search11, which repeatedly halves the search interval by comparing a midpoint against the search value [2]. By retrieving single OHLCV queries at midpoints, it is possible to tell whether a coin was listed before or after a given date: if the query returns values, the coin was listed before the date used in the query; if it returns no values, the coin was listed after it. This method was applied to every entry in the pump dataset, giving every pumped coin a start date to collect data from, as sketched below. Footnote 11: Adapted from [https://gist.github.com/mr-easy/1185b1dcdcd5f9908ff196446f092e9b](https://gist.github.com/mr-easy/1185b1dcdcd5f9908ff196446f092e9b).
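The following is a minimal sketch of this listing-date search, assuming CCXT-style millisecond timestamps and daily candles; the exchange and symbol are illustrative. It also guards against exchanges that, rather than returning nothing, skip ahead to the first available candle when queried before a coin's listing.

```python
import ccxt

DAY_MS = 86_400_000  # one day in milliseconds

def find_listing_date(exchange: ccxt.Exchange, symbol: str,
                      lo: int, hi: int) -> int:
    """Binary-search the earliest timestamp with OHLCV data for `symbol`."""
    while hi - lo > DAY_MS:
        mid = (lo + hi) // 2
        # A single daily-candle query at the midpoint.
        candles = exchange.fetch_ohlcv(symbol, "1d", since=mid, limit=1)
        if not candles:
            lo = mid  # no data at mid: the coin was listed after this date
        elif candles[0][0] > mid + DAY_MS:
            return candles[0][0]  # exchange skipped ahead to the first candle
        else:
            hi = mid  # data exists at mid: the coin was listed earlier
    return hi

binance = ccxt.binance()
# Search between 1 Jan 2005 (before cryptocurrencies were widely traded,
# as in the text) and the present day.
start = binance.parse8601("2005-01-01T00:00:00Z")
listing = find_listing_date(binance, "BTC/USDT", start, binance.milliseconds())
print(binance.iso8601(listing))
```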
#### Iii-B3 Turning the Tap On Once the listing dates of coins had been established, collecting the data at the granularities discussed in Section III-B1 was a relatively straightforward process. Queries for the respective granularities were paginated to ensure that API request limits were not exceeded, and multiple passes of the dataset were performed to ensure that no data was missed due to API rate limits. Data was collected for each unique coin in the dataset of pumped coins, a total of \(765\) coins, suggesting that coins were pumped on average around \(12\) times across the dataset (further explored in Section IV-A3). #### Iii-B4 A Cheeky Bit of BTC One issue with the price data was the variation in base cryptocurrencies for the pairs retrieved from the exchange. For example, all the coins in the dataset pumped on Binance are paired with a BTC base whereas all the coins pumped on Hotbit are paired with USDT12. The problem with this is that \(1\) BTC is worth around \(26,900\) USDT13, hence making any coin with a BTC pairing seem roughly \(26,000\)x lower in price when compared to a similarly valued coin with a USDT pairing. A simple fix to this would be to multiply all the BTC paired prices by \(26,000\) in order to achieve parity with USDT paired coins. Unfortunately, due to the massive fluctuations in the price between USDT and BTC, this would introduce a large margin of error, particularly as the price of BTC was as high as \(60,000\) USDT in 2021. The solution was to collect OHLCV data for the BTC/USDT pairing for the same timeframes and periods discussed in Section III-B1 for all coins with BTC as their base pairing. These prices can then be combined with the original BTC base paired data to produce equivalent data in USDT. Combining the two sets of OHLCV was performed as follows: * **Open** \(-\) open price of the original BTC based pair multiplied by the open price of BTC/USDT for the relevant date, timeframe and exchange. * **High** \(-\) high price of the original BTC based pair multiplied by the typical price of BTC/USDT. * **Low** \(-\) low price of the original BTC based pair multiplied by the typical price of BTC/USDT. * **Close** \(-\) close price of the original BTC based pair multiplied by the close price of BTC/USDT for the relevant date, timeframe and exchange. * **Volume** \(-\) volume figures of the original BTC based pair, as this is quoted in the coin being pumped and is independent of any BTC/USDT conversion. Typical price is calculated as follows \[\text{Typical Price}=\frac{\text{High}+\text{Low}+\text{Close}}{3}\] which provides an average price across the respective timeframe [13]. This is used to reflect the fact that the high and low prices of the original BTC based pairing and the BTC/USDT pairing are unlikely to ever line up perfectly (i.e. occur at exactly the same time), especially for longer timeframes. Therefore using an average of the BTC/USDT conversion price somewhat mitigates against this whilst still capturing differences between the high and low prices in the original BTC pairing data. 
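A minimal sketch of this conversion for one aligned pair of candles is given below, in CCXT's `[timestamp, open, high, low, close, volume]` ordering. The close rule (close multiplied by the BTC/USDT close, mirroring the open rule) is an assumption, and all example numbers are illustrative.

```python
def typical_price(candle):
    """(high + low + close) / 3, an average price across the timeframe."""
    _, _, high, low, close, _ = candle
    return (high + low + close) / 3

def to_usdt(coin_btc, btc_usdt):
    """Convert one [ts, o, h, l, c, v] candle quoted in BTC into USDT."""
    ts, o, h, l, c, v = coin_btc
    tp = typical_price(btc_usdt)  # smooths intrabar BTC/USDT movement
    return [
        ts,
        o * btc_usdt[1],  # open  x BTC/USDT open
        h * tp,           # high  x BTC/USDT typical price
        l * tp,           # low   x BTC/USDT typical price
        c * btc_usdt[4],  # close x BTC/USDT close (assumed symmetric rule)
        v,                # volume is quoted in the pumped coin: unchanged
    ]

# Illustrative example: a coin at 1.0e-6 BTC while BTC trades near
# 26,900 USDT yields a price of roughly 0.0269 USDT.
coin = [1_672_531_200_000, 1.0e-6, 1.2e-6, 0.9e-6, 1.1e-6, 5_000.0]
btc = [1_672_531_200_000, 26_900.0, 27_100.0, 26_700.0, 26_950.0, 1_000.0]
print(to_usdt(coin, btc))
```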
### _Price and Data Analysis_ The final phase of the project was to analyse the collected market data for impacts and trends. Whilst the results of this analysis are contained in Section IV, we here explain certain measures used and their justification. #### Iii-C1 Market Capitalisation Data Market capitalisation is a measurement of an asset's value, calculated as the multiplication of the current asset price by the amount of that asset in circulation. In the cryptocurrency sphere, market capitalisation can be used as a relative measure of size and perceived levels of risk with respect to a cryptocurrency [3]. Cryptocurrencies with a high market capitalisation are perceived as less risky as they tend to have a history of growth and tend to be more liquid. Market capitalisation also plays a role in the success of pump and dump events, with pumps targeting coins with a lower market capitalisation more likely to be successful [14]. As such, our analysis used market capitalisation as a way to compare coins targeted across exchanges. However, each exchange has different prices and volumes for coins, hence only capturing a snapshot of the asset's true market capitalisation. These price differentials are driven by the differences in trading volumes (and therefore liquidity), the fact that moving money across exchanges is inefficient, and the lack of an accepted method for pricing such assets [28]. The solution was to use data from CoinMarketCap, a website that provides aggregated price and volume data for assets traded across multiple exchanges. Prices for a given asset are calculated from a volume-weighted average of all the traded market pairs (e.g. BTC/USDT) for that asset, which in turn are calculated by converting the price of a pair into USD using reference prices [4]. Similarly, volumes are calculated as the sum of an asset's volume across all trading pairs, where the volume of each trading pair is converted to USD using reference prices14. This price and volume data, along with aggregated market capitalisation data, was retrieved via CoinMarketCap's API15 on the 27th March 2023. Footnote 14: For more detail on the methodology used to calculate these, see [https://support.coinmarketcap.com/hc/en-us/articles/360043395912-Volume-Market-Pair-Cryptoauset-Exchange-Aggregate-](https://support.coinmarketcap.com/hc/en-us/articles/360043395912-Volume-Market-Pair-Cryptoauset-Exchange-Aggregate-). Footnote 15: [https://coinmarketcap.com/api](https://coinmarketcap.com/api). #### Iii-C2 Calculating Pre-Pump Data Values for pre-pump prices were calculated using the average closing price of the coin for the 7 days prior to the pump event. The reason for this was to mitigate against the effects of any insiders, such as channel admins or VIP members, gaining knowledge of the coin about to be pumped before the specified start time and in effect pre-pumping the coin [14, 19]. Figure 3 shows a pump group advertising the VIP benefits of early access to coins being pumped, emphasising that prices of coins immediately prior to pumps have already been affected by such events. 7 days was chosen as the period to average over as it is long enough to smooth intraday market movements but short enough to not run into previous pump events. Pre-pump volume data was similarly calculated as an average over the 7 days prior to a pump for consistency.
Fig. 3: Example of VIP early access to pumps.
#### Iii-C3 Measuring Maximum Price Increase The maximum percentage price increase \(\Delta P\) used in our analysis is defined as \[\Delta P=\frac{P_{\max}-P_{\text{before}}}{P_{\text{before}}}\times 100\] where * \(P_{\max}\) is the maximum price in the 5 minutes after a pump announcement. * \(P_{\text{before}}\) is the pre-pump price defined in Section III-C2. We only use a 5 minute window to calculate this maximum increase because price peaks of pump events are typically found within the first minutes of pump event announcements [32, 34]. #### Iii-C4 Measuring Volume The volume within the pump window \(\Delta V\) is calculated symmetrically to \(\Delta P\). Whilst the pre-pump and during-pump volumes represent significantly different timescales, the relative nature of this metric means that pumps across coins with different liquidity characteristics can be directly compared. The volume moved during a pump event is taken as the number of units of the coin being pumped traded within the 5 minute window immediately after the announcement of a pump event. As discussed above, price peaks, and therefore the highest levels of activity, are found within the first few minutes of such announcements [32, 34], meaning that a 5 minute window is a reasonable period to use for this calculation. 
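The following is a minimal sketch of these pre-pump and pump-window metrics, assuming the inputs are a list of daily closes ending the day before the pump and lists of 1-minute highs and volumes starting at the announcement; the \(\Delta V\) formula mirrors \(\Delta P\), which is one reading of "calculated symmetrically".

```python
import numpy as np

def pre_pump_price(daily_closes):
    """Average closing price over the 7 days before the pump (Sec. III-C2)."""
    return float(np.mean(daily_closes[-7:]))

def max_price_increase(minute_highs, p_before):
    """Delta-P: peak price in the 5 minutes after the announcement, as a
    percentage change over the pre-pump price (Sec. III-C3)."""
    p_max = float(np.max(minute_highs[:5]))
    return (p_max - p_before) / p_before * 100

def volume_increase(minute_volumes, v_before):
    """Delta-V, assumed symmetric to Delta-P: units traded in the 5 minute
    window as a percentage change over the pre-pump volume (Sec. III-C4)."""
    v_pump = float(np.sum(minute_volumes[:5]))
    return (v_pump - v_before) / v_before * 100

closes = [1.00, 1.02, 0.99, 1.01, 1.00, 0.98, 1.00]  # last 7 daily closes
highs = [1.05, 1.60, 1.40, 1.10, 1.05]               # first 5 post-pump highs
p0 = pre_pump_price(closes)
print(f"pre-pump price {p0:.4f}, "
      f"max increase {max_price_increase(highs, p0):.1f}%")
```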
#### Iii-C5 Long-Term Timeframes One of our key goals was to study the impact of pump events over the long term by analysing price and volume data over multiple time periods after a pump event. The timeframes used for analysis were 7, 14, 30, 60, 90, 180, 270 and 365 days, to give a wide range of analysis that complements and extends existing research. An important caveat to note is that there are pumps that occurred less than a year before our market data was collected (e.g., pump events from January 2023 happened three months before our collection, hence there is only 3 months of data to analyse rather than a year). Another cause of censoring is that there are coins for which subsequent pumps occur, at which point, for our analyses, future data is considered to not be available for the original pump event, as effects would otherwise be confounded by the second pump. #### Iii-C6 Relative Price Impacts In order to equally compare the price impacts across coins with massively different prices, a form of price indexing was used. This set the pre-pump price of a pump event to 100, with subsequent relative prices calculated as follows \[R_{n}=\frac{P_{n}}{P_{\text{pre}}}\times 100\] where * \(R_{n}\) is the relative price at day \(n\). * \(P_{n}\) is the absolute closing price at day \(n\). * \(P_{\text{pre}}\) is the absolute pre-pump price for the pump, as calculated in Section III-C2. This means that subsequent absolute price rises above the pre-pump price cause the relative price to rise above 100 [12]. #### Iii-C7 Real World Adjustments We further compare the relative price indices of the pumps to the equivalent relative indices of the top 10 cryptocurrencies by market capitalisation16. These coins account for a large proportion of the total market capitalisation of all cryptocurrencies, meaning that their price movements represent the general sentiment of the market. The OHLCV data was retrieved at a daily granularity across the three most common exchanges in the dataset: Binance, Kucoin and Hotbit. The closing price was averaged to give a combined price across all the exchanges for a given day. Footnote 16: As of 22nd April 2023. Excludes stablecoins that are meant to be pegged to a traditional currency. These individual average prices were then converted to give relative prices, for each of the top 10 coins, for the period after each pump event. Again, the closing price for the day of the pump was set to 100, with subsequent prices calculated as detailed above. The prices were combined through the use of a weighted average, using the volumes of each coin as weights. This gave average relative market prices for the window after each pump, allowing a pumped coin's price changes to be compared relative to the wider market. #### Iii-C8 Stitching These Impacts Together Once every pump event had a relative price index for both the coin being pumped and the top 10 cryptocurrencies, the next stage was to combine these to allow the impacts to be measured. This was calculated as follows \[I_{n}=(R_{n}-M_{n})+100\] where * \(I_{n}\) is the resulting relative price adjusted for market price movements at day \(n\). * \(R_{n}\) is the relative price of the pumped coin at day \(n\). * \(M_{n}\) is the relative market price at day \(n\). * \(100\) is a normalising factor that moves the resulting price difference back to a baseline of 100. The result of this was relative prices adjusted for market movements for each pumped coin, allowing for a comparison of the price impacts independent of general market movements. For example, if a pumped coin is not affected at all and follows general market movements, its adjusted price would be close to or equal to 100 for the entire period. 
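A minimal sketch of this adjustment pipeline (Sections III-C6 to III-C8) for a single pump is shown below; the per-coin volume weights are simplified to scalars, and all input numbers are illustrative.

```python
import numpy as np

def relative_prices(closes, p_pre):
    """R_n = P_n / P_pre x 100 (Section III-C6)."""
    return np.asarray(closes, dtype=float) / p_pre * 100

def market_index(top10_relative, volumes):
    """Volume-weighted average of the top-10 coins' relative prices
    (Section III-C7); per-coin volumes simplified to scalar weights."""
    return np.average(np.asarray(top10_relative, dtype=float),
                      axis=0, weights=volumes)

def adjusted_prices(r, m):
    """I_n = (R_n - M_n) + 100: a coin that exactly tracks the market
    stays at 100 (Section III-C8)."""
    return np.asarray(r) - np.asarray(m) + 100

coin = relative_prices([1.03, 1.10, 0.95, 0.80], p_pre=1.00)
top10 = [[100, 104, 98, 90],   # only two of the ten coins shown
         [100, 102, 97, 88]]
m = market_index(top10, volumes=[3.0, 1.0])
print(adjusted_prices(coin, m))  # [103.  106.5  97.25  90.5]
```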
Adjusted relative prices were averaged across the entire dataset to give an average price change relative to the pre-pump price and general market movements. To do this, three different averages were used in order to capture the effect of outliers and view the overall data from different angles: the mean adjusted relative price for a given day, the median, and the mean of all values within the interquartile range (the IQR mean). #### Iii-C9 Quantifying the Effect of Subsequent Pumps The final part of investigating the long-term impacts was exploring the price differentials between coins with varying numbers of pumps, which was achieved via an extension of the method discussed above. The first step required to achieve this was to put the pumps into a number of predetermined bins, where each bin contains a similar number of pumped coins. These are based on the number of times coins are pumped in the dataset. Since we found two groups in the dataset with significant differences in re-pumping behaviour, there are two separate sets of bins. These two groups are distinguished by the channels organising them and are labelled as CPI organised and non-CPI organised pumps. A full breakdown of the differences between these groups can be found in Section IV-A2. For CPI organised pumps, four bins were chosen: 1–10 pumps, 11–18 pumps, 19–30 pumps and 31+ pumps, with each bin containing a similar number of coins. For non-CPI organised pumps, for which re-pumping was rarer, two bins were chosen: 1 pump and 2+ pumps. Following binning, an average adjusted relative price was calculated for the 365 days after pump events, similar to Section III-C8. One difference, however, was that data for the entire 365 day time period was used for every pump, instead of stopping if there was a subsequent pump. As above, the mean, median and IQR mean were used as comparative averages. We also compared the performance of a coin's first pump to subsequent ones. Again, this analysis was split for CPI and non-CPI pump events in order to better differentiate between the two different organiser behaviours. For both groups, we compare performance ranging from the 1st to the 4th pump of a coin. ## IV Analysis ### _Dataset Breakdown_ The first section of analysis breaks down the pump event dataset and highlights some initial features and trends contained within it, including a natural split in the data which is used as a segregating feature in future sections. #### Iv-A1 Distribution of Pumps Figure 4 visualises the spread of pumps across the time period covered by the dataset17. The date of FTX's collapse is shown by the blue vertical line18. The plot indicates that there has been a shift in the exchange of choice for pump organisers from Binance to Hotbit and Kucoin, potentially caused by the introduction of mandatory KYC checks by Binance in 2021 [1], marked by the orange line. This change in policy meant that every user that wanted to trade or deposit funds had to pass some form of KYC, a feature that is not attractive to pump and dump groups [24]. On the other hand, Kucoin allows users to withdraw up to 1 BTC19 a day and perform unlimited amounts of trades without any KYC [22], and Hotbit only requires KYC if a user triggers a "higher risk control system" [17]. This lack of KYC requirements makes it easier for participants to take part in pump events on the respective exchanges, making them more attractive to groups organising pump events. Footnote 17: Inspired by Morgia et al. [23], see page 7, Figure 3. 
Footnote 18: The date FTX filed for bankruptcy: the 11th of November, 2022. #### Iv-A2 Quality over Quantity? There is an outlier to this behaviour, a channel named Crypto Pump Island (CPI)20, which has continued to organise pumps on Binance at a very high intensity. Table I shows that CPI is responsible for over \(86\%\) of the pumps in the dataset and has the highest number of pumps per coin by a factor of 10, meaning that each unique coin has been pumped by that channel on average \(22\) times across the time period of the dataset. This suggests that CPI is recycling coins when organising pumps, often multiple times a day, and is trying to achieve a high quantity of pumps rather than performance quality. Footnote 20: A full reference for channels and their respective channel codes can be found in Table XIV, in Appendix B. #### Iv-A3 Pumps per Coin There are only \(765\) unique coins in the dataset of over \(9,000\) events, meaning coins were pumped multiple times. Table II shows the distribution of the number of pumps per coin across the dataset. Whilst most coins were targeted fewer than 10 times, there is a somewhat significant number of coins targeted over 30 times, likely in pumps organised by CPI. The most targeted coin has \(98\) pumps across a 4 year period. This means that there are coins that are frequently targeted by operators of pump and dump schemes, suggesting they have a track record of success and features that make them attractive to such schemes. #### Iv-A4 Market Capitalisation One such attractive feature could be a low market capitalisation, which has been linked to a pump's success [14]. Figure 5 shows the number of pumps per coin plotted against the coins' respective market capitalisations, further segregated by the exchange on which the highest number of pumps took place. It also highlights whether the coin was pumped predominantly by CPI (x markers) or other channels (o markers). Coins that are pumped more often generally have a higher market capitalisation, are pumped on Binance, and are typically targeted by CPI. As discussed in Section IV-A2, CPI runs multiple pumps a day, which implies a preference for more liquid coins, as it is easier for participants to execute trades in the market [5], meaning buying into a pump is easier. Coins with higher liquidity tend to have a higher market capitalisation than less liquid ones, which explains the behaviour in Figure 5. However, this higher liquidity and market capitalisation comes at the cost of lower returns on pumps [14], again highlighting the quality versus quantity tradeoff. ### _Pump Performance_ The second section of analysis investigates the relationships between pre-pump characteristics and the immediate performance of pumps. \begin{table} \begin{tabular}{l r r} \hline **Number of Pumps** & **Coins** & **Percentage** \\ \hline 1–10 & 480 & 64.26 \\ 11–20 & 104 & 13.92 \\ 21–30 & 60 & 8.03 \\ 31–40 & 42 & 5.62 \\ 41–50 & 21 & 2.81 \\ 51+ & 40 & 5.35 \\ \hline \end{tabular} \end{table} TABLE II: Number of pumps per coin in the dataset. 
\begin{table} \begin{tabular}{l l r r r} \hline **Code** & **Channel Name** & **Pumps** & **\%** & **per Coin** \\ \hline CPI & Crypto Pump Island & 7920 & 86.17 & 22.06 \\ HPA & Hit Pump Angels & 145 & 1.58 & 2.20 \\ BPF & Binance Pump Family & 139 & 1.51 & 2.21 \\ SP & Softze Pump & 71 & 0.77 & 1.06 \\ CPC & Crypto Pump Club & 70 & 0.76 & 1.15 \\ HTP & Hotbit Trading Pump & 70 & 0.76 & 1.06 \\ Others & & 776 & 8.45 & 1.16 \\ \hline **Total** & & **9191** & & \\ \hline \end{tabular} \end{table} TABLE I: Number of pumps per channel in the dataset. It further highlights the quality vs quantity distinction defined in Section IV-A and explores the total amount of value moved. #### Iv-B1 Pre-Pump Prices and Volumes Figure 6 plots the pre-pump price against the average maximum price increase of pumps for each unique coin. Once again there is clear separation between pumps organised by CPI and those organised by others. The maximum price increase, on average, is much lower (\(15.02\%\) vs \(790.54\%\)) for pumps organised by CPI and therefore for those taking place on Binance. Conversely, the pre-pump price, on average, is higher (\(92.19\) USDT vs \(41.01\) USDT) for pumps organised by CPI. The negative trade-off between pre-price and increase is in part explained by the fact that a higher pre-price means a coin's price has to increase more in absolute terms in order to achieve the same percentage increase as a coin with a lower pre-price. Also, the effects of choosing coins with higher market capitalisation, and therefore higher liquidity, mean the proportion of market participants attempting to push the price up is lower. This means that it is harder to generate the volume required to massively increase prices. #### Iv-B2 Volume Moved Figure 7 shows the average volume moved in the first 5 minutes after a pump event announcement, compared to the proportion of the pre-pump volume this represents. The amount of volume moved during a pump appears relatively similar for CPI and non-CPI pumps, although there is more variation and a higher standard deviation for non-CPI pumps (\(2.33\times 10^{10}\) vs \(6.38\times 10^{7}\)). It also shows that the average proportion of the pre-pump volume this represents is significantly higher for non-CPI organised pumps (\(38,703\%\) vs \(17.38\%\)), although again the non-CPI pumps have a much higher standard deviation for this metric (\(25,010\) vs \(27.67\)). Looking at the median values (\(1,613\%\) for non-CPI, \(8.25\%\) for CPI) it is evident that pumps organised by CPI have a significantly lower impact on the increase in volumes. The choice of coins with higher trading pre-pump volumes and prices by CPI, as discussed earlier, means that more volume needs to be moved in order to reach the same values achieved by the coins with very low pre-pump prices and volumes favoured by non-CPI organised pumps.
Fig. 4: Distribution of pumps categorised by channel and exchange.
Fig. 5: Number of pumps against the coins' respective market capitalisation.
Fig. 6: Pre-price of coins against the average percentage price increase caused by pump events.
#### Iv-B3 Total Value Up to this point this section has been exploring the relative impacts of pump events. Whilst this allows pump events to be compared on a like-for-like basis, it does not capture the total changes in value caused by pump events. For example, the total value of trades for 10 units of a coin worth 1 USDT is 10 USDT, but for a coin worth 10 USDT the total trading value is 100 USDT for the same 10 units traded. 
To estimate the total value of a pump, we use the average maximum price achieved in the 5 minutes after a pump announcement. Whilst this is obviously not the price across the entire 5 minute period, it produces the best possible price, and therefore the best possible total value of trades across the 5 minute period, which can be compared across all pump events. The pre-pump total trading value is calculated as the pre-pump price multiplied by the pre-pump volume. Figure 8 shows the total value of trades in the 5 minutes after a pump announcement compared to the average daily value of trades in the pre-pump period. This indicates that the average daily value of trades is higher for pumps organised by CPI and lower for those not, which is not surprising given the higher pre-pump prices and volumes for coins pumped by CPI discussed earlier. The value of trades in the 5 minutes after a pump announcement is also generally higher for pumps organised by CPI, particularly when compared to those taking place on Hotbit. Interestingly, pumps on Kucoin have the highest average value of trades in the 5 minutes after a pump announcement. This is likely due to the Big Pump Signal group, one of the largest groups with some of the biggest price increases [23], using Kucoin as its primary exchange. The large number of participants in pumps organised by this group means a higher volume of a coin is traded, which in turn allows the price to be pushed up higher, which increases the total value of trades after a pump announcement.
Fig. 7: Volume moved during a pump event against the proportion of this relative to the pre-pump volume.
Fig. 8: Total value of trades during pump events against the average daily value of trades pre-pump.
### _Long-Term Impacts_ The third section of analysis focuses on the long-term impact of pump events on cryptocurrencies both overall and separated by pump organiser. It also investigates the notion of a relationship between the number of pumps and long-term price performance. #### Iv-C1 Overall Relative Impact Figure 9 shows the mean prices of all pumped coins, relative to the pre-pump price, for the period spanning the year after a pump event. This is compared to the relative prices of the top 10 coins by market capitalisation, referred to as the market prices from this point onwards. This data is derived using the methodology outlined in Section III-C8, and it is worth noting that all relative prices for the coins are given with respect to the market prices. It also highlights the percentage difference between the pumped coins' price and the market price at different timeframes throughout the year, shown via the vertical lines. Red lines indicate a percentage decrease relative to the market price and green lines indicate a percentage increase relative to the market price.
Fig. 9: Mean relative prices of pumped coins in the year after a pump, adjusted for general market movements.
In the short term, Figure 9 shows that pumps have a positive pricing impact relative to the market and pre-pump pricing. This positive effect, however, is short-lived and the price is on average 11% lower than the pre-pump value, relative to market prices, 30 days after the event. Table III further shows the relative prices for each average at the timeframes highlighted in the figure. It further emphasises that there is a small positive increase in price performance relative to market prices in the first week after a pump, on average around 3%. 
Expanding the timeframe out gives a decrease of around 15% at 60 days and 30% at 365 days, implying a steady decline in the pumped coin's value relative to market prices. #### Iv-C2 Impact by Organiser Figure 10 separates impact for pumps organised by CPI and by others. For CPI-organised pumps, we see behaviour similar to Figure 9: a short-term positive effect followed by a negative long-term one. For non-CPI organised pumps there is no positive impact at all, and in fact the price begins falling almost immediately relative to the market prices. As explored in Section IV-A, CPI tend to choose coins that are more liquid, meaning they have higher volumes and are in general more visible to outside traders. This makes outsiders more likely to be attracted to CPI-pumped coins, since these coins are higher ranked and therefore more visible. Since CPI organised pumps make up 86% of the dataset, their prices have the biggest impact on the overall relative prices, meaning that the positive short-term impact of CPI pumps on coins is reflected in the overall average of the dataset. Table IV summarises the adjusted relative prices for CPI pumps at various timeframes. The results highlight the need for multiple averages, as the mean values for longer than 90 days are significantly different from the median value and the IQR mean, implying an influential outlier. From this it can be seen that coins pumped by CPI lose around 16% of their value in the year after a pump event, relative to market prices. Table V displays the same information for non-CPI organised pumps. The differences between the averages are much smaller than for the CPI-organised pumps, implying there are few if any big outliers in the data. Overall, coins that are pumped by non-CPI groups lose just over 25% of their value in the year after a pump event, again relative to market prices. Figure 11 and Table VI break down these impacts by the number of times a coin was pumped. Behaviour differs between the different non-CPI bins with respect to shorter timeframes, with the 1 bin having a short-term relative price uptick whilst the 2+ bin immediately decreases. Both categories display a relative decrease in price over the course of a year, although as discussed above the 2+ bin has worse performance, 13% less than the 1 bin. Whilst these features could be indicative of more pumps meaning lower long-term prices, the large range of the 2+ pump bin and the lack of coins in it relative to the 1 pump bin indicate that in reality more data is needed before any conclusive interpretations can be made on this relationship. #### Iv-C4 First vs the Rest Table VII shows the average adjusted price impacts of the first and the subsequent three times a coin is pumped, for both CPI and non-CPI organised pumps, as outlined at the end of Section III-C9. Again we see a long-term negative impact across all the coins pumped relative to the market. However, with regards to the variations between the groupings by pump number, a complex picture emerges. The impact of additional pumps, for both CPI and non-CPI organised pumps, is hard to discern across any of the timeframes, no matter the number of pumps or re-pumps. The only consistent trend across the pumps and re-pumps is that the overall impact is negative in the long term. From this we may conclude that, at least when viewed from this perspective, the number of times a coin is pumped has no significant bearing on its long-term price. 
Furthermore, this analysis in particular does not adjust for whether subsequent pumps happen in the 365 day period after pump events, which could provide an explanation for the outlier discussed earlier. \begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline Days & **7** & **14** & **30** & **60** & **90** & **180** & **270** & **365** \\ \hline \multicolumn{9}{c}{CPI} \\ \hline 1–10 & 105.04 & 105.47 & 99.99 & 86.38 & 78.66 & 76.47 & 78.09 & 71.47 \\ 11–18 & 106.57 & 107.42 & 96.50 & 67.27 & 68.06 & 61.85 & 43.49 & 81.83 \\ 19–30 & 106.03 & 107.27 & 94.77 & 67.53 & 59.99 & 48.87 & 19.56 & 50.23 \\ 31+ & 113.19 & 117.77 & 97.44 & 48.08 & 55.94 & 48.09 & 12.22 & 69.08 \\ \hline \multicolumn{9}{c}{non-CPI} \\ \hline 1 & 93.85 & 94.45 & 92.88 & 85.16 & 79.76 & 74.02 & 71.61 & 79.13 \\ 2+ & 96.55 & 94.45 & 88.27 & 87.50 & 73.00 & 73.67 & 74.78 & 66.76 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Summary of long-term impacts of CPI and non-CPI organised pumps, grouped by number of pumps and averaged using the IQR mean. Fig. 11: Relative prices of coins pumped by CPI (top row) and groups other than CPI (bottom row) in the year after a pump, adjusted for general market movements and grouped by total number of pumps in the dataset. \begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline Days & **7** & **14** & **30** & **60** & **90** & **180** & **270** & **365** \\ \hline \multicolumn{9}{c}{CPI} \\ \hline 1 & 107.80 & 109.61 & 97.55 & 69.23 & 66.15 & 58.47 & 37.29 & 68.95 \\ 2 & 107.09 & 107.35 & 99.69 & 89.27 & 71.74 & 61.41 & 50.33 & 63.10 \\ 3 & 104.87 & 104.32 & 101.66 & 97.99 & 78.42 & 71.11 & 60.08 & 63.68 \\ 4 & 104.40 & 103.30 & 101.19 & 97.36 & 81.37 & 76.51 & 66.94 & 59.19 \\ \hline \multicolumn{9}{c}{non-CPI} \\ \hline 1 & 94.76 & 94.45 & 91.33 & 85.84 & 77.36 & 73.88 & 72.93 & 73.38 \\ 2 & 98.08 & 98.43 & 94.42 & 89.12 & 77.76 & 86.19 & 73.63 & 75.64 \\ 3 & 100.05 & 101.08 & 95.25 & 85.29 & 64.68 & 89.07 & 73.95 & 73.90 \\ 4 & 97.58 & 98.84 & 95.53 & 94.28 & 65.98 & 95.21 & 90.43 & 92.17 \\ \hline \hline \end{tabular} \end{table} TABLE VII: Summary of long-term impacts of CPI and non-CPI organised pumps for coins on their 1st to 4th pumps, averaged using the IQR mean. ## V Discussion and Conclusion ### _Contributions and Findings_ We have contributed a new, enlarged dataset of cryptocurrency pump events based on existing sources, which contains around 10,000 events, representing a 10-fold increase compared to the largest comparable source [23]. As well as the dataset itself, we release the code used to collect the data, meaning the dataset can be maintained and updated. Our main finding within this dataset was that, relative to market prices, pump events have a long-term negative impact on the value of cryptocurrencies, which represents a conclusive answer to the main hypothesis of this project. On average this impact is valued at a 27% price decrease relative to market prices 365 days after a pump event. These findings provide specific evidence for the notion proposed by Li et al. that pump events are, in general, detrimental to the price of cryptocurrencies [24]. Regarding the findings of Victor and Hagemann that pump events provide a 10% positive price impact after 100 days [32], this project finds that, after 90 days, the prices of pumped coins are over 25% lower relative to the wider market. Further, our investigations of the dataset reveal a _quality vs quantity tradeoff_, which exposes two different approaches to organising pumps. 
The quality approach involves choosing coins with lower liquidity, and therefore lower value, for pumps at lower frequencies, which achieve a high percentage price increase. The quantity approach, on the other hand, focuses on coins with higher liquidity, and therefore higher value, for pumps at higher frequencies, which achieve lower percentage price increases. Further empirical evidence, in the form of price and volume separation, was found through analysing the market data for each grouping. A significant event within our data was the collapse of the cryptocurrency exchange FTX. An additional analysis presented in Appendix A covers the impact of this event. We find the collapse had no material direct impact on the number of pumps organised. However, the collapse impacted the underlying prices and volumes of cryptocurrencies in general; this in turn impacted the value of coins traded during pump events, decreasing the amount of money that operators, and to a lesser extent participants, could potentially make. These findings highlight the need for further regulation of pump and dumps in the cryptocurrency sphere, given that these schemes are in effect market manipulation and are found to have had a significant negative impact on the value of cryptocurrencies. Regulators and governments are beginning to accept that pump and dump schemes are a significant problem that should be addressed in any future cryptocurrency regulation; a recent UK government consultation cited them as a target for potential market abuse regulation [27]. However, this lack of regulation with respect to pump and dump schemes, which was first highlighted as a regulatory issue in 2018 [14], coupled with the high frequency and relative anonymity of such events, means they will continue to pervade cryptocurrency markets until decisive action is taken. ### _Future Work_ There is significant scope for future work, some suggestions for which are discussed below. _Investigating more channels._ Only the 130 most suspicious out of 800 channels were investigated from the initial list from PumpOlymp. Whilst these channels still resulted in a dataset far larger than existing ones, there are still a significant number of channels which may fruitfully be monitored for more pumps. Future work could expand the dataset by investigating these sources. _Pumps on DEXs._ As the APIs used to retrieve the market OHLCV data are for centralised exchanges, any pumps found taking place on decentralised exchanges could not be analysed and were therefore ignored. Additional work could focus on probing pump events on DEXs by analysing on-chain exchange transactions, using a method such as that described by Li et al. [24]. _Yobit and further data._ As discussed in Section III-A3, pumps organised on Yobit could not be analysed due to its API only providing 7 days of OHLCV data, which excluded around 350 pumps from further analysis. Subsequent research could investigate these by using data from a source such as Kaiko, which provides over 10 years of historical data for over 100 exchanges including Yobit [20]21. Furthermore, a limitation of the methodology used for this project was that the OHLCV data was not checked for missing dates, due to time constraints, which again could be solved through the use of Kaiko data. _General market data calculations._ The method used to calculate the relative market prices relied on the top 10 cryptocurrencies by market capitalisation as of 22nd April 2023. 
Whilst this provided a way of easily measuring the performance of pumped coins relative to the wider cryptocurrency market, it does not take into account previous changes to this top 10 ranking22, meaning the chosen coins may not represent the general market movements for earlier pump events. A solution to this would be to use historic CoinMarketCap market capitalisation data to retrieve the top 10 coins at the time of each pump event and then use the data for these to calculate adjusted relative prices. Footnote 21: The reason Kaiko has Yobit data is that it collects pricing data for all cryptocurrencies listed on an exchange in real time. Footnote 22: For example, DOGE only achieved a top 10 market capitalisation in 2021. ### _Conclusion_ In summary, our work has produced a new and extended dataset of pump events and provided a detailed breakdown and analysis of the performance of these pump events. Exploring the long-term impacts of pump events on cryptocurrencies reveals an overall negative impact over a 365 day period with respect to general market prices. The analysis also highlighted different tactics in the form of a _quality vs quantity_ tradeoff, with different targets and different forms of market impact.
2309.11374
Cooperative Spin Amplification
Quantum amplification is recognized as a key resource for precision measurements. However, most conventional paradigms employ an ensemble of independent particles that usually limit the performance of quantum amplification in gain, spectral linewidth, etc. Here we demonstrate a new signal amplification using cooperative 129Xe nuclear spins embedded within a feedback circuit, where the noble-gas spin coherence time is enhanced by at least one order of magnitude. Using such a technique, the magnetic field can be substantially pre-enhanced by more than three orders of magnitude and is read out in situ with an embedded 87Rb magnetometer. We realize an ultrahigh magnetic sensitivity of 4.0 fT/Hz$^{1/2}$ that surpasses the photon-shot noise and is even below the spin-projection noise of the embedded atomic magnetometer, allowing for exciting applications including searches for dark matter with sensitivity well beyond supernova constraints. Our findings extend the physics of quantum amplification to cooperative spin systems and can be generalized to a wide variety of existing sensors, enabling a new class of cooperative quantum sensors.
Minxiang Xu, Min Jiang, Yuanhong Wang, Haowen Su, Ying Huang, Xinhua Peng
2023-09-20T14:55:34
http://arxiv.org/abs/2309.11374v1
# Cooperative Spin Amplification ###### Abstract Quantum amplification is recognized as a key resource for precision measurements. However, most conventional paradigms employ an ensemble of independent particles that usually limit the performance of quantum amplification in gain, spectral linewidth, etc. Here we demonstrate a new signal amplification using cooperative \({}^{129}\)Xe nuclear spins embedded within a feedback circuit, where the noble-gas spin coherence time is enhanced by at least one order of magnitude. Using such a technique, the magnetic field can be substantially pre-enhanced by more than three orders of magnitude and is read out in situ with an embedded \({}^{87}\)Rb magnetometer. We realize an ultrahigh magnetic sensitivity of \(4.0\,\mathrm{fT/Hz^{1/2}}\) that surpasses the photon-shot noise and is even below the spin-projection noise of the embedded atomic magnetometer, allowing for exciting applications including searches for dark matter with sensitivity well beyond supernova constraints. Our findings extend the physics of quantum amplification to cooperative spin systems and can be generalized to a wide variety of existing sensors, enabling a new class of "cooperative quantum sensors". Quantum amplification that offers the capability of enhancing weak signals is ubiquitous and essential to various frontiers of science [1], ranging from ultrasensitive magnetic and electric field sensing [2; 3; 4], mechanical oscillator motion measurements [5], and optical amplifiers [6; 7] to determination of fundamental constants [8], frequency standards [9], and searches for dark matter [10; 11; 12] and exotic forces beyond the standard model [13]. To date, the well-established paradigm of quantum amplification is mostly based on using independent quantum systems, including superconducting qubits [2], atomic and molecular spins [11; 12; 13], photons [6; 7], nitrogen-vacancy centers in diamonds [14; 4], trapped-ion qubits [3; 15], etc. The individuals in independent systems amplify the measured signal independently and the total response is the summation of the individuals, which in practice leads to limits on the performance of quantum amplifiers, including operation frequency, spectral linewidth, and gain. Cooperative systems have recently attracted extensive attention and provided opportunities for novel applications [16; 17; 18; 19; 20; 21; 22; 23; 24]. In contrast to independent systems, the individuals in cooperative systems experience each other and their evolution depends on the state of the entirety. Various experimental systems have explored the rich phenomena of cooperative systems, for example, cooperative emitting [16; 17; 18; 19; 25] and scattering [20; 21], one-axis-twisting dynamics [22], and spectral narrowing [23; 26]. Cooperative systems could be a promising platform to explore advanced quantum amplification beyond independent systems, partially because such systems provide an ideal way to engineer the coherence time of quantum systems and thus enhance signal response. The combination of cooperative systems and quantum amplification may open up exciting opportunities for developing new quantum amplifiers with improved performance, especially in gain. Such amplifiers would find promising applications in precision measurements, for example, ultrasensitive magnetometers [27; 28], magnetoencephalography [29; 30], geomagnetic anomaly detection [51], and searches for new physics beyond the standard model [12; 13]. 
In this Article, we demonstrate a new magnetic-field signal amplification using cooperative noble-gas nuclear spins. In experiment, we prepare cooperative \({}^{129}\)Xe spins by acquiring the \({}^{129}\)Xe signal with an embedded \({}^{87}\)Rb magnetometer and then feeding the signal back to the \({}^{129}\)Xe spins with a feedback circuit. Our investigation shows the dynamics under different feedback strengths. The nuclear-spin coherence time is significantly prolonged by more than one order of magnitude, and a 2400-fold improvement in signal amplification is realized using such cooperative spins. We name these collective phenomena "cooperative amplification". As a first application, our approach constitutes a new technology for enhancing and measuring magnetic fields with a sensitivity of \(4.0\,\mathrm{fT/Hz^{1/2}}\), which surpasses the photon-shot noise and even the spin-projection noise of the embedded \({}^{87}\)Rb magnetometer. It is noteworthy that this quantum-enhanced measurement scheme does not rely on entanglement [32]. We discuss the promising applications of our amplification technique in the searches for hypothetical particles with a sensitivity well beyond the stringent supernova constraints [33; 34]. The present amplification technique should be generic for a wide range of sensors and constitute a new class of cooperative sensors. Our experiments are carried out in a setup similar to that of Refs. [19; 35], as depicted in Fig. 1(a). A \(0.5\,\mathrm{cm^{3}}\) cubic vapor cell contains 20 torr of \({}^{129}\)Xe, N\({}_{2}\) buffer gas, and a droplet of enriched \({}^{87}\)Rb. The \({}^{129}\)Xe spins are polarized through spin-exchange collisions with optically pumped \({}^{87}\)Rb atoms, as there are no optical transitions available for \({}^{129}\)Xe spins from the ground levels. A bias field \(B_{0}\) is applied along the pumping direction (the \(z\) axis). The two steps, i.e., measurement and feedback, establish the indirect interaction among the spins. The \({}^{129}\)Xe nuclear magnetization generates an effective magnetic field \(\mathbf{B}_{\mathrm{eff}}=\lambda M_{0}\mathbf{P}\) on the \({}^{87}\)Rb atoms through Fermi-contact collisions [36; 37], where \(\lambda=8\pi\kappa_{0}/3\) is the Fermi-enhancement factor, \(\kappa_{0}\approx 540\) for the \({}^{87}\)Rb-\({}^{129}\)Xe system, \(M_{0}\) is the maximum magnetization of \({}^{129}\)Xe with unity polarization, and \(\mathbf{P}\) is the equilibrium polarization vector of the \({}^{129}\)Xe nucleus. The \({}^{87}\)Rb atoms in the vapor cell serve as a sensitive magnetometer to read out the \({}^{129}\)Xe magnetization in situ. The real-time output signal of the \({}^{87}\)Rb magnetometer is connected to a feedback coil and generates a corresponding feedback field \(B_{\mathrm{fb}}\), with a rheostat in series with the coils to adjust the feedback strength [Fig. 1(a); more details are presented in Supplementary Section I]. Because the \({}^{87}\)Rb magnetometer measures both the \(x\) and \(y\) components of the \({}^{129}\)Xe polarization (with responses \(C_{x}\) and \(C_{y}\), respectively), the feedback field can be expressed as \(B_{\mathrm{fb}}=\chi_{1}P_{x}-\chi_{2}P_{y}\). Here, \(\chi_{1}\) and \(\chi_{2}\) represent the feedback gains associated with "measuring \(P_{x}\) and providing feedback in \(y\)" and "measuring \(P_{y}\) and providing feedback in \(y\)", respectively. 
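As a small worked example of the quantities just defined, the sketch below evaluates the Fermi-enhancement factor from the quoted \(\kappa_{0}\) and the feedback field for hypothetical gain values (the \(\chi_{1}\), \(\chi_{2}\), and polarization numbers are illustrative assumptions, not measured values):

```python
import numpy as np

kappa_0 = 540.0                       # for the 87Rb-129Xe pair, quoted above
lam = 8.0 * np.pi * kappa_0 / 3.0     # Fermi-enhancement factor, ~4.5e3
print(f"lambda = 8*pi*kappa_0/3 = {lam:.0f}")

def feedback_field(Px, Py, chi_1, chi_2):
    """Feedback field B_fb = chi_1*Px - chi_2*Py applied along y."""
    return chi_1 * Px - chi_2 * Py

# Hypothetical gains and a small transverse polarization, for illustration only.
print(f"B_fb = {feedback_field(Px=0.05, Py=0.01, chi_1=1e-9, chi_2=5e-10):.2e}")
```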
The values of \(\chi_{1}\) and \(\chi_{2}\) depend on factors such as the magnetometer response, the rheostat, and the coil coefficient. The self-induced feedback field carries the information about the \({}^{129}\)Xe spins and then produces a torque on the spins. Equivalently, each single spin experiences the torque from the collective spins and its time evolution depends on the entirety. Notably, this torque does not come from the dipole-dipole interaction between the single spin and the collective spins, but is mediated by the feedback field. We now consider the dynamics of cooperative \({}^{129}\)Xe spins under the self-induced feedback field. The polarization of \({}^{129}\)Xe in the \(x\), \(y\), and \(z\) directions is denoted as \(P_{x}\), \(P_{y}\), and \(P_{z}\), respectively. The dynamics of cooperative \({}^{129}\)Xe spins in the feedback circuit can be described by the Bloch equation: \[\frac{\mathrm{d}P_{x}}{\mathrm{d}t}=\gamma(P_{y}B_{0}-P_{z}B_{\mathrm{fb}})-\Gamma P_{x}=(\gamma B_{0}+\gamma\chi_{2}P_{z})P_{y}-(\Gamma+\gamma\chi_{1}P_{z})P_{x}, \tag{1}\] where \(\gamma\) is the gyromagnetic ratio of \({}^{129}\)Xe, \(\Gamma=1/T_{2}\) corresponds to the spin decoherence rate, and \(T_{2}\) represents the intrinsic coherence time. In this equation, we adopt the small-angle approximation, treating \(P_{z}\) as a constant. To simplify the equation, we introduce two additional parameters, namely \(\xi=\gamma\chi_{1}P_{z}\) and \(\Delta_{\mathrm{fb}}=\gamma\chi_{2}P_{z}\). The parameter \(\xi\), associated with the process of "measuring \(P_{x}\) and providing feedback in \(y\)", represents the modification of decoherence induced by the feedback (an incoherent effect). On the other hand, the parameter \(\Delta_{\mathrm{fb}}\), linked to the process of "measuring \(P_{y}\) and providing feedback in \(y\)", describes a feedback-induced frequency shift (a coherent effect). The rheostat controls the amplitudes of both \(\xi\) and \(\Delta_{\mathrm{fb}}\), while their sign is determined by the connecting polarity of the feedback coil.
Figure 1: Setup and conceptual description of cooperative dynamics. (a) Sketch of the experimental setup. The polarization and probing of \({}^{129}\)Xe atoms are achieved through spin-exchange collisions with \({}^{87}\)Rb atoms. Real-time feedback is provided to the system via a feedback coil. The feedback field includes the \(P_{x}\) and \(P_{y}\) signals of \({}^{129}\)Xe. The amplitude of the feedback is controlled by an adjustable rheostat, and the sign is controlled by the connecting polarity. A bias field \(B_{0}\) is applied along the pumping direction. The diagram does not include the pump beam. (b) Refocusing effect in the positive feedback mode. Some spins experience dephasing at certain points in time (highlighted with bright colors). The feedback field applies a torque on the dephased spins, causing them to reorient and refocus towards the collective spin. The right inset illustrates the spin dynamics. Each individual spin undergoes a torque (indicated by red arrows) parallel to the collective spin. As a result, the dephased spins tend to refocus, leading to an effective enhancement of the coherence time. Precession is omitted in the dynamical diagram. (c) Spreading effect in the negative feedback mode. In this mode, the feedback-induced torque is anti-parallel to the collective spin, causing the dephased spins to align in the opposite direction.
Consequently, the effective coherence time decreases as the dephased spins deviate from the collective spin.
The ratio \(\Delta_{\mathrm{fb}}/\xi\) remains constant and is determined by the \({}^{87}\)Rb magnetometer. We show that the cooperative spin coherence time can be significantly enhanced by manipulating the feedback strength. According to Eq. (1), the decoherence rate modified by the feedback \(\xi\) becomes \[\frac{1}{T_{\text{eff}}}=\Gamma+\xi, \tag{2}\] where \(T_{\text{eff}}\) is the effective coherence time. In order to clearly illustrate that the behavior of the spins is closely connected with the relation between \(\Gamma\) and \(\xi\), we define the parameter \(C=-\xi/\Gamma\). In our analysis, we focus solely on the \(\chi_{1}\) component, disregarding the contribution of \(\chi_{2}\), which primarily induces a frequency shift. For \(0<C<1\) (positive feedback), we demonstrate that spins initially dephased from the collective spin due to random noise exhibit a tendency to refocus towards the collective spin [Fig. 1(b)]. In the presence of the feedback field, each spin experiences a torque parallel to the collective spin, compelling them to rotate until they realign with the collective spin (Supplementary Section II). As a result, unlike in independent dephasing scenarios, the cooperative spins are able to correct their precession phase according to the entirety, leading to an extended coherence time. Conversely, when \(C<0\) (negative feedback), the feedback-induced torque is anti-parallel to the collective spin [Fig. 1(c)]. Under this torque, the dephased spins tend to spread out until they align in the opposite direction, effectively canceling the collective spin. As a consequence, the decoherence rate worsens in the presence of feedback. It is this modulation of the decoherence process that distinguishes cooperative systems from independent systems. We demonstrate cooperative \({}^{129}\)Xe spin dynamics by adjusting the feedback parameter \(\xi\). When \(\xi\) is set in the \(C\leq 0\) or \(0<C<1\) regime, the transverse magnetization decays exponentially with a modified rate. To track changes in the coherence time, we apply a transverse pulse to tilt the \({}^{129}\)Xe spins by a small angle of about \(5^{\circ}\) and record the resultant decay signal. The signals are fitted with an exponentially decaying sinusoidal function to determine the corresponding coherence time. In the \(C\leq 0\) regime, the coherence time decreases from \(31\,\text{s}\) to \(4\,\text{s}\) with increasing \(\xi\) [Fig. 2(a)]. In the \(0<C<1\) regime, the coherence signal decays more slowly for larger \(|\xi|\), realizing \(T_{\text{eff}}>T_{2}\) [Fig. 2(b)]. In our experiment, the coherence time \(T_{\text{eff}}\) can be tuned to about \(545\,\text{s}\), which is more than one order of magnitude longer than that observed without feedback (\(\approx\)31 s). Furthermore, Figure 2(c) shows the effective \({}^{129}\)Xe coherence time for different values of \(\xi\), which is well fitted by the theoretical inverse function. When \(\xi\) is set in the \(C>1\) regime, superradiance-shaped pulses and masing occur instead of an exponentially decaying signal, and \(T_{\text{eff}}\) can no longer be defined in this regime. 
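The tilt-and-fit procedure just described can be mimicked with a minimal simulation sketch. It assumes the effective single-mode model of Eqs. (1)-(2), with \(\Gamma=1/31\,\mathrm{s^{-1}}\) and \(f_{0}=10.03\) Hz taken from the text; the sampling rate and noise level are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
Gamma = 1.0 / 31.0               # intrinsic decoherence rate, from T2 = 31 s
xi = 1.0 / 545.0 - Gamma         # feedback strength targeting T_eff = 545 s
f0 = 10.03                       # precession frequency (Hz)

def decay(t, A, T_eff, f, phi):
    """Exponentially decaying sinusoid used to fit the free-decay signal."""
    return A * np.exp(-t / T_eff) * np.cos(2 * np.pi * f * t + phi)

t = np.linspace(0.0, 600.0, 120_000)          # 200 Hz sampling, hypothetical
signal = decay(t, 1.0, 1.0 / (Gamma + xi), f0, 0.3)
signal += 0.01 * rng.standard_normal(t.size)  # hypothetical noise level

popt, _ = curve_fit(decay, t, signal, p0=[1.0, 300.0, f0, 0.0])
print(f"fitted T_eff = {popt[1]:.0f} s (Eq. (2) gives {1.0 / (Gamma + xi):.0f} s)")
print(f"C = -xi/Gamma = {-xi / Gamma:.2f}")
```

With these numbers \(C\approx 0.94\), i.e., the operating point lies deep in the \(0<C<1\) refocusing regime.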
Significant magnetic-field amplification is observed using cooperative \({}^{129}\)Xe spins. A transverse oscillating magnetic field \(\mathbf{B}_{\text{ac}}\) is applied on the \({}^{129}\)Xe spins and generates a transverse magnetization of \({}^{129}\)Xe; the magnetization induces an effective magnetic field \(\mathbf{B}_{\text{eff}}^{\perp}\) through Fermi-contact collisions with \({}^{87}\)Rb atoms. As reported in Refs. [12; 13; 35], the amplitude of \(\mathbf{B}_{\text{eff}}^{\perp}\) can be significantly larger than that of \(\mathbf{B}_{\text{ac}}\), with an amplification factor \(\eta_{0}=|\mathbf{B}_{\text{eff}}^{\perp}|/|\mathbf{B}_{\text{ac}}|\). The factor is determined by \(\eta_{0}=\frac{\lambda}{2}M_{0}P_{0}\gamma T_{2}\), where \(T_{2}\) is the intrinsic coherence time. Such amplifiers are based on independent \({}^{129}\)Xe spins, and their amplification ranges from 20 to 200 [12; 13; 35]. In contrast, our approach enhances the coherence time with the cooperative \({}^{129}\)Xe spins as demonstrated above, leading to a modified cooperative amplification (Supplementary Section III) \[\eta=\frac{\lambda}{2}M_{0}P_{0}\gamma T_{\text{eff}}, \tag{3}\] where the coherence time is \(T_{\text{eff}}\) instead of the intrinsic \(T_{2}\). This provides new opportunities to realize improved spin amplification. We experimentally measure \(\eta\) and the bandwidth of the amplifier by sweeping the frequency around the \({}^{129}\)Xe resonance and recording the signal response [Fig. 3(a)]. The fitted Lorentzian profile is overlaid on the experimental data. We further investigate \(\eta\) under different \(T_{\text{eff}}\) by tuning \(\xi\), and show that the resonance peak becomes narrower and higher with longer \(T_{\text{eff}}\). For example, when \(T_{\text{eff}}\) is tuned to about \(163\,\text{s}\), the amplification \(\eta\) reaches approximately 2500. We also find that the resonance frequency \(f\) deviates from the Larmor frequency \(f_{0}\) in the presence of the feedback field [see inset of Fig. 3(a)]. As derived in Supplementary Section II, the shift \((f-f_{0})\) depends linearly on \(\xi\), and its slope equals \(-C_{y}/C_{x}\). The fitted result is \(f-f_{0}\approx-0.46\xi\). The relative amplification \(\eta/\eta_{0}\) is shown in Fig. 3(b). The cooperative response leads to a 5-fold enhancement in the relative amplification \(\eta/\eta_{0}\). Further enhancement of \(\eta\) is realized when \(\xi\) approaches \(-\Gamma\). However, in practice, the fluctuation of the \({}^{87}\)Rb magnetometer response or of the feedback circuit resistance limits the precision of \(\xi\) and makes the \({}^{129}\)Xe spins leave the amplification regime \(0<C<1\). The inset of Fig. 3(b) shows \(\eta\) values under different bias fields \(B_{0}\) from \(0.08\,\mu\text{T}\) to \(3\,\mu\text{T}\) with \(\xi\approx 0.006\,\text{s}^{-1}\), where the amplification factor \(\eta\) is nearly independent of \(B_{0}\) and its average is about 820. In contrast to spin-exchange-relaxation-free magnetometers that require operation at near-zero fields below \(100\,\text{nT}\) [38], the present \({}^{129}\)Xe cooperative sensor can be operated in \(\mu\)T-level magnetic fields. 
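Since Eq. (3) is linear in the coherence time, the relative amplification follows directly from the quoted numbers; the short sketch below uses only \(T_{2}\approx 31\) s, \(T_{\text{eff}}\approx 163\) s, and \(\eta\approx 2500\) from the text (the implied \(\eta_{0}\) and the Lorentzian-linewidth scaling are inferences, not quoted values):

```python
import math

T2, T_eff, eta = 31.0, 163.0, 2500.0   # values quoted in the text
eta_0 = eta * T2 / T_eff               # implied intrinsic amplification
print(f"implied eta_0 ~ {eta_0:.0f}")
print(f"relative amplification eta/eta_0 = T_eff/T2 = {T_eff / T2:.1f}")

# The amplifier bandwidth narrows accordingly: for a Lorentzian response the
# full width at half maximum scales as 1/(pi * T_eff).
print(f"FWHM ~ {1e3 / (math.pi * T_eff):.1f} mHz at T_eff = 163 s")
```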
As a first application, we use cooperative spin amplification to realize magnetic-field precision measurements with fT/Hz\({}^{1/2}\)-level sensitivity. As an example, the bias field is set to \(B_{0}\approx 850\,\text{nT}\), corresponding to a \({}^{129}\)Xe Larmor frequency \(f_{0}\approx 10.03\,\text{Hz}\). By tuning the feedback strength, the effective coherence time is set to \(T_{\text{eff}}\approx 300\,\text{s}\). A resonant oscillating field \(B_{\text{ac}}\approx 13.8\,\text{pT}\) along the \(y\) direction is applied as a test field. Benefiting from the cooperative \({}^{129}\)Xe amplification, the applied test field is pre-amplified to \(65\,\text{nT}\). By taking the response of the cooperative spin amplifier into account, the magnetic sensitivity of the \({}^{87}\)Rb magnetometer is effectively enhanced to about \(4.0\,\text{fT/Hz}^{1/2}\) around the resonance frequency, as illustrated in Fig. 4(a). The sensitivity is over 1800 times better than the photon-shot-noise limit (\(\approx\)7.3 pT/Hz\({}^{1/2}\)) of the \({}^{87}\)Rb magnetometer. Moreover, it surpasses the spin-projection noise (\(\approx\)8.7 fT/Hz\({}^{1/2}\)) of the \({}^{87}\)Rb magnetometer by a factor of 2.2 (Supplementary Section IV). Figure 4(b) depicts the magnetic-field sensitivity for various feedback strengths that correspond to different enhanced coherence times \(T_{\text{eff}}\). The sensitivity data are fitted with the function \([(a/T_{\text{eff}})^{2}+b^{2}]^{1/2}\), where the coefficients are estimated to be \(a\approx 860.3\) and \(b\approx 3.2\). Here the first term originates from non-magnetic photon-shot noise, which is not amplified and can therefore be suppressed by the amplifier. The second term denotes real magnetic noise of about 3.2 fT/Hz\({}^{1/2}\) that is amplified by the cooperative amplifier, including magnetic-shield Johnson noise and unavoidable feedback-circuit magnetic noise. As one can see, the current sensitivity is dominantly limited by the magnetic noise, which can be suppressed by existing techniques. For example, magnetic-shield Johnson noise can be minimized by using ferrite shielding [39]. The theoretical sensitivity when the potential magnetic noise is removed is indicated by the dashed line; e.g., the sensitivity can be improved to better than 1 fT/Hz\({}^{1/2}\) when \(T_{\text{eff}}\) is tuned to 900 s. Further improvement of the cooperative amplifier can be implemented with a smaller coefficient \(a\), which requires a high noble-gas number density, noble-gas spin polarization, and alkali-metal magnetometer response. Extrapolating the present results to devices with alkali-noble-gas pairs with smaller spin-destruction cross sections, such as K-\({}^{3}\)He, \(a\) should be reduced to about 80 with 3 atm of \({}^{3}\)He. \({}^{3}\)He spins also possess a longer intrinsic coherence time (\(\approx\)1000 s), which can become hours long after enhancement by the cooperative approach. These methods would extend the sensitivity below 0.1 fT/Hz\({}^{1/2}\). **Discussions.** We would like to emphasize the main differences between this work and Fermi-contact enhancement. First, the Fermi-contact enhancement factor \(\lambda\) constitutes just one factor in the amplification factor \(\eta\). It should be noted that several other parameters are also important for realizing a significant amplification factor, such as \(P_{0}\) and \(T_{\text{eff}}\). In our experiment, the polarization of \({}^{129}\)Xe can reach \(P_{0}\approx 0.18\) and \(T_{\text{eff}}\) is tuned to more than 500 s, both of which are essential to realizing an amplification factor of more than three orders of magnitude. Second, we introduce the cooperative amplifier to further increase the amplification. A 5-fold enhancement of \(\eta\) is achieved through tuning the feedback strength, while \(\lambda\) remains unchanged. 
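The sensitivity model quoted above can be checked directly with the fitted coefficients (a sketch using only \(a\approx 860.3\) and \(b\approx 3.2\) from the text; sensitivities are in fT/Hz\({}^{1/2}\)):

```python
import math

a, b = 860.3, 3.2   # fitted coefficients quoted in the text

def sensitivity(T_eff, b=b):
    """Magnetic sensitivity model [(a/T_eff)^2 + b^2]^(1/2)."""
    return math.hypot(a / T_eff, b)

print(f"T_eff = 300 s        : {sensitivity(300.0):.1f} fT/Hz^(1/2)")
# With the magnetic noise term removed (b -> 0), as for the dashed line:
print(f"T_eff = 900 s, b = 0 : {sensitivity(900.0, b=0.0):.2f} fT/Hz^(1/2)")
```

The second line reproduces the statement that the sensitivity improves to better than 1 fT/Hz\({}^{1/2}\) at \(T_{\text{eff}}=900\) s once the magnetic noise is suppressed.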
Our technique based on cooperative spins shows potential for application in other areas, such as comagnetometry, a means to measure the precession frequencies of two species of nuclei, including \({}^{129}\)Xe-\({}^{131}\)Xe and \({}^{129}\)Xe-\({}^{3}\)He [40; 41]. Its ability to resist noise and systematic effects associated with the magnetic field makes it useful for searches for violation of local Lorentz invariance [41] and for new spin-dependent forces [42; 40], inertial rotation sensing [43], etc. By allowing for long measurement times, the persistent coherence of cooperative spins allows for high accuracy in determining the precession frequency of nuclear spins: according to the Cramér-Rao lower bound, the frequency uncertainty scales as the measurement time to the power of \(-3/2\) [41]. Our cooperative approach is capable of reuniting dephased spins and resisting magnetic field gradients, making it possible to create a new class of cooperative spin comagnetometers. According to our experiment, where the coherence time is enhanced to about 20 times longer than that of the independent ensemble, the frequency accuracy could be improved by about two orders of magnitude (since \(20^{3/2}\approx 90\)). It is also reported that, in the \({}^{129}\)Xe-\({}^{131}\)Xe isotope comagnetometer, the resonance of \({}^{131}\)Xe can split into triplets because its electric quadrupole moment couples to the electric field gradient induced by the glass wall [44]. These triplets may be narrowed by the cooperative approach, thus allowing for high-precision measurements of the quadrupole splitting. Our amplification technique has potential applications in the search for hypothetical particles theorized by various models beyond the standard model, such as axions and dark photons [12; 45]. These particles are expected to interact with standard-model particles (such as nuclear spins) and produce an oscillating pseudo-magnetic field that can be amplified using our technique. Consequently, the search sensitivity for axions and dark photons can be significantly enhanced, leading to new empirical constraints. With our current experimental parameters, a one-day measurement yields a search sensitivity for axion dark matter of \(|g_{\text{aNN}}|\leq 10^{-10}\,\text{GeV}^{-1}\), which surpasses the most stringent supernova constraints [33; 34] by about two orders of magnitude. The constant \(g_{\text{aNN}}\) characterizes the axion-neutron coupling. Our technique can also be applied to searches for exotic spin-dependent interactions [13], where axions serve as force mediators that couple standard-model particles. Using our current experiments, the search sensitivity is approximately one order of magnitude better than that in previous searches [13; 46].
Figure 2: Demonstration of cooperative \({}^{129}\)Xe dynamics with different feedback strengths. (a) In the regime where \(C\leq 0\), the coherence decay rate becomes higher as \(\xi\) increases. (b) In the regime where \(0<C<1\), the coherence decay rate becomes smaller as \(|\xi|\) increases. All the curves have been normalized and offset along the y-axis for clarity. (c) The effective coherence time \(T_{\text{eff}}\) versus the feedback strength \(\xi\). The red line shows the fit with the inverse function. \(T_{\text{eff}}\) cannot be defined in the \(C>1\) regime; instead of an exponentially decaying signal, superradiance-shaped pulses and masing occur there.
In conclusion, we have demonstrated a novel approach for enhancing quantum amplification through cooperative noble-gas spins, resulting in improved magnetic field sensitivity. 
This approach should be generic to other noble gases, as well as to alkali atoms and nitrogen-vacancy centers. Notably, cooperative spin amplification can operate in the presence of finite bias fields, eliminating the need for strict \(\mu\)-metal magnetic shielding. This extended functionality facilitates applications such as exploring the Schumann resonances of the Earth [47] and detecting geomagnetic field anomalies [31]. In addition, the combination of cooperative spin amplification and Floquet engineering [35] may increase the bandwidth of the amplification.
Quantum amplification is recognized as a key resource for precision measurements. However, conventional paradigms usually employ ensembles of independent particles, which limits the achievable performance of quantum amplification in gain, spectral linewidth, and other figures of merit. Here we demonstrate a new signal-amplification technique in which cooperative 129Xe nuclear spins are embedded in a feedback circuit, enhancing the noble-gas spin coherence time by at least one order of magnitude. Using this technique, magnetic fields are pre-enhanced by more than three orders of magnitude and read out in situ with an embedded 87Rb magnetometer. This realizes an ultrahigh magnetic sensitivity of 4.0 fT/Hz$^{1/2}$, surpassing the photon-shot noise and even the spin-projection noise of the embedded atomic magnetometer, enabling innovative applications such as searches for dark matter. Our findings extend the physics of quantum amplification to cooperative spin systems.
2308.00150
Effects of mirror birefringence and its fluctuations to laser interferometric gravitational wave detectors
Crystalline materials are promising candidates as substrates or high-reflective coatings of mirrors to reduce thermal noises in future laser interferometric gravitational wave detectors. However, birefringence of such materials could degrade the sensitivity of gravitational wave detectors, not only because it can introduce optical losses, but also because its fluctuations create extra phase noise in the arm cavity reflected beam. In this paper, we analytically estimate the effects of birefringence and its fluctuations in the mirror substrate and coating for gravitational wave detectors. Our calculations show that the requirements for the birefringence fluctuations in silicon substrate and AlGaAs coating will be on the order of $10^{-8}$ and $10^{-10}$ rad/$\sqrt{\rm Hz}$ at 100~Hz, respectively, for future gravitational wave detectors. We also point out that optical cavity response needs to be carefully taken into account to estimate optical losses from depolarization.
Yuta Michimura, Haoyu Wang, Francisco Salces-Carcoba, Christopher Wipf, Aidan Brooks, Koji Arai, Rana X Adhikari
2023-07-31T20:57:21
http://arxiv.org/abs/2308.00150v2
# On the effects of mirror birefringence and its fluctuations to laser interferometric gravitational wave detectors ###### Abstract Crystalline materials are promising candidates as substrates or high-reflective coatings of mirrors to reduce thermal noises in future laser interferometric gravitational wave detectors. However, birefringence of such materials could degrade the sensitivity of gravitational wave detectors, not only because it can introduce optical losses, but also because its fluctuations create extra phase noise in the arm cavity reflected beam. In this paper, we analytically estimate the effects of birefringence and its fluctuations in the mirror substrate and coating for gravitational wave detectors. Our calculations show that the requirements for the birefringence fluctuations in silicon substrates and AlGaAs coatings will be on the order of \(10^{-8}\) rad\(/\sqrt{\text{Hz}}\) and \(10^{-10}\) rad\(/\sqrt{\text{Hz}}\) at 100 Hz, respectively, for future gravitational wave detectors. We also point out that the optical cavity response needs to be carefully taken into account to estimate optical losses from depolarization. ## I Introduction The first detections of gravitational waves from binary black holes [1] and binary neutron stars [2; 3] by Advanced LIGO [4] and Advanced Virgo [5] inaugurated gravitational wave physics and astronomy. Improvements in the sensitivity of these laser interferometric detectors in recent years have enabled routine detections and more precise binary parameter estimation [6]. Further improvements in the astrophysical reach of these detectors will allow us to study the origin of massive black holes, the neutron star equation of state, alternative gravity theories, and cosmology. The fundamental limitation to the sensitivity of these detectors in their most sensitive frequency band is set by thermal vibrations of the mirror surfaces [7]. KAGRA [8; 9] and other concepts for future gravitational wave detectors plan to utilize cryogenic crystalline test mass mirrors for thermal noise reduction, instead of fused silica mirrors at room temperature. KAGRA uses sapphire test masses and plans to cool them down to 22 K [10]. Voyager is an upgrade plan of LIGO to use 123 K silicon test masses to increase the astrophysical reach by a factor of 4-5 over the Advanced LIGO design [11]. Next generation detectors such as the Einstein Telescope [12; 13] also plan to use silicon test masses at cryogenic temperatures for the low frequency detectors, and Cosmic Explorer [14; 15] considers using them for an upgrade. In addition, crystalline coatings such as AlGaAs coatings [16] and AlGaP coatings [17] are considered as promising candidates to reduce coating Brownian noise, instead of amorphous silica and tantala coatings. Although crystalline materials are promising for reducing thermal noise, it has been pointed out that slight birefringence of mirror substrates and coatings could cause optical losses due to depolarization of the light and degradation of the interferometric contrast [18]. The birefringence and its inhomogeneity in the sapphire input test masses of KAGRA were found to be higher than expected [19; 20], and around 10% of the power was lost on reflection due to depolarization when the arm cavities were not on resonance [9]. Ideally, crystalline silicon is a cubic crystal and optically isotropic, but it could have strain-induced birefringence from crystal dislocations and from the support in the mirror suspension system. 
Birefringence measurements in silicon mirrors have revealed that the amount of static birefringence is \(\Delta n\sim 10^{-7}\) or less at laser wavelengths of 1.55 \(\mu\)m [21] and 2 \(\mu\)m [22] at room temperature, which satisfies the optical loss requirements for future detectors. Also, previous cavity experiments using AlGaAs coatings reported birefringence at the 1 mrad level [23; 16; 24]. These past studies have focused on static birefringence and optical losses from depolarization. However, a recent measurement of thermal noises in crystalline mirror coatings at cryogenic temperatures reported excess birefringent noise, which could limit the sensitivity of future gravitational wave detectors [25]. Theoretical calculations on thermal fluctuations of birefringence in crystalline mirror coatings have also revealed that the noise from these fluctuations could be similar to Brownian noise [26]. It is also worth noting that experiments to search for vacuum magnetic birefringence, such as PVLAS and OVAL, have been suspected to be limited by thermal birefringence noise of mirrors [27; 28; 29; 30; 31]. These temporal birefringence fluctuations could also limit optical-cavity-based axion dark matter searches using the birefringence effect from axion-photon coupling [32; 33; 34; 35; 36]. In this paper, we study the effects of birefringence and its fluctuations on gravitational wave detectors based on the Fabry-Perot-Michelson interferometer. We show that the polarization axis and the crystal axes of the arm cavity mirrors need to be aligned to avoid optical losses and to reduce noises from birefringence fluctuations. We also show that the cavity response to birefringence needs to be correctly taken into account for estimating the noises and the optical losses of arm cavities. We start by analytically describing the cavity response to birefringence in Sec. II. In Sec. III, we focus on noises from substrate birefringence and coating birefringence, and derive requirements for their fluctuations for future gravitational wave detectors. In Sec. IV, we expand our formulation to include spatial higher order modes, and discuss power losses from inhomogeneous birefringence of the substrate and the coating. Our conclusions and outlook are summarized in Sec. V. ## II Cavity response to birefringence Let us consider a Fabry-Perot cavity formed by an input test mass (ITM) and an end test mass (ETM), as shown in Fig. 1. We consider birefringence of the ITM substrate, the ITM high-reflective coating, and the ETM high-reflective coating. The ordinary axis of the ETM coating is rotated by \(\theta\) with respect to that of the ITM. The input beam is linearly polarized and its polarization is rotated by \(\theta_{\mathrm{pol}}\) with respect to the ordinary axis of the ITM. We assume that the crystal axes of the ITM substrate are aligned with those of its coating. This will not affect the results of this paper, as we will treat the substrate birefringence and the coating birefringence independently in the following sections. For calculating the cavity response to birefringence, we can use the Jones matrix formalism [37]. 
In the basis of the ITM crystal axes, the electric field of the input beam can be written as \[\vec{E}_{\mathrm{in}}=\left(\vec{e}_{\mathrm{o}}\ \ \vec{e}_{\mathrm{e}}\right)\vec{v}_{\mathrm{in}}E_{\mathrm{in}}, \tag{1}\] where \(\vec{e}_{\mathrm{o}}\) and \(\vec{e}_{\mathrm{e}}\) are eigenvectors along the ITM ordinary and extraordinary axes, and \(\vec{v}_{\mathrm{in}}\) is the vector representing the input polarization. We suppose the ITM substrate is lossless, and the amplitude reflectivity and the amplitude transmission of the whole ITM are determined by the high-reflective coating. The amplitude transmission of the ITM can then be written as \[T_{1}=\begin{pmatrix}t_{1}&0\\ 0&t_{1}e^{-i\frac{1}{2}\Delta\phi_{\mathrm{t_{1}}}}\end{pmatrix}, \tag{2}\] where \(\Delta\phi_{\mathrm{t_{1}}}/2\) is the phase difference between the ordinary and extraordinary axes in the ITM transmission from both the substrate and the coating birefringence, and \(t_{1}\) is the amplitude transmission of the ITM. Here, we assumed that the amplitude transmission is the same for both axes. Similarly, the amplitude reflectivities of the ITM and the ETM from the high-reflective coating side can be written as \[R_{j}=\begin{pmatrix}r_{j}&0\\ 0&r_{j}e^{-i\Delta\phi_{\mathrm{r}_{j}}}\end{pmatrix}, \tag{3}\] where \(\Delta\phi_{\mathrm{r}_{j}}\) is the phase difference between the ordinary and extraordinary axes in the ITM and ETM reflection, and \(r_{j}\) is the amplitude reflectivity of the ITM and ETM; \(j=1\) is for the ITM and \(j=2\) is for the ETM. Also, the amplitude reflectivity of the ITM from the substrate side can be written as \[S_{1}=\begin{pmatrix}-r_{1}&0\\ 0&-r_{1}e^{-i\Delta\phi_{\mathrm{s_{1}}}}\end{pmatrix}, \tag{4}\] where \(\Delta\phi_{\mathrm{s_{1}}}\) is the phase difference between the ordinary and extraordinary axes in the ITM reflection from the substrate side. From energy conservation and time reversal symmetry, \(\Delta\phi_{\mathrm{t_{1}}}=\Delta\phi_{\mathrm{r_{1}}}+\Delta\phi_{\mathrm{s_{1}}}\). Here, we use the convention that \(r_{j}\) and \(t_{1}\) are real, and the sign is flipped for reflection from the ITM substrate side. We keep the coordinate axes the same even if the propagation direction flips on mirror reflections, so that the sign for both polarizations will be the same. For the arm cavities in gravitational wave detectors, \(r_{1}\) and \(r_{2}\) are designed such that \(r_{2}\simeq 1\) and \(r_{1}<r_{2}\), so that almost all the light is reflected back. From the phase of the cavity reflected beam, cavity length changes from gravitational waves are read out. In the following subsections, we calculate the polarization eigenmodes in the cavity and the phase of the cavity reflected beam. ### Polarization eigenmodes in the cavity The electric field inside the cavity that propagates from the ITM to the ETM can be written as \[\vec{E}_{\mathrm{cav}}=\left(I-A\right)^{-1}T_{1}\vec{E}_{\mathrm{in}}, \tag{5}\] with \(I\) being the identity matrix. Here, \[A\equiv R_{1}R(-\theta)R_{2}R(\theta)e^{-i\phi}, \tag{6}\] where \(\phi=4\pi L/\lambda\) is the phase acquired in the cavity round trip, with \(L\) and \(\lambda\) being the cavity length and the laser wavelength, and \[R(\theta)\equiv\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}. \tag{7}\]
Figure 1: The schematic of a Fabry-Pérot cavity with the mirror crystal axes and the input beam polarization axis illustrated. With respect to the ITM ordinary axis, the input polarization is rotated by \(\theta_{\mathrm{pol}}\) and the ETM ordinary axis is rotated by \(\theta\).
Note that \(\phi\) includes the phase acquired in the ITM and ETM reflection for their ordinary axes. The resonant polarization modes are the eigenvectors of \[M_{\rm cav}\equiv(I-A)^{-1}\,T_{1}. \tag{8}\] The cavity enhancement factors for each mode will be the eigenvalues of \(M_{\rm cav}\). When \(\theta=0\), the ITM axes and the ETM axes are aligned, and the eigenvectors will be \[\vec{v}_{a}=\begin{pmatrix}1\\ 0\end{pmatrix},\qquad\vec{v}_{b}=\begin{pmatrix}0\\ 1\end{pmatrix}, \tag{9}\] which means that the resonant modes are linear polarizations along the ITM ordinary axis \(\vec{e}_{\rm o}\) and the extraordinary axis \(\vec{e}_{\rm e}\). The cavity enhancement factors will be \[w_{a}=\frac{t_{1}}{1-r_{1}r_{2}e^{-i\phi}},\qquad w_{b}=\frac{t_{1}e^{-i\frac{1}{2}\Delta\phi_{\rm t_{1}}}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\rm r_{1}}+\Delta\phi_{\rm r_{2}})}}. \tag{10}\] The resonant frequency difference between the two eigenmodes will therefore be \[\Delta\nu=\frac{\Delta\phi_{\rm r_{1}}+\Delta\phi_{\rm r_{2}}}{2\pi}\nu_{\rm FSR}, \tag{11}\] where \(\nu_{\rm FSR}=c/(2L)\) is the free spectral range of the cavity. When \(\theta=\pi/2\), the ITM ordinary axis and the ETM extraordinary axis are aligned, and the eigenvectors will again be the same as the ones given in Eq. (9). The cavity enhancement factors will be \[w_{a}=\frac{t_{1}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\rm r_{2}})}},\quad w_{b}=\frac{t_{1}e^{-i\frac{1}{2}\Delta\phi_{\rm t_{1}}}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\rm r_{1}})}}. \tag{12}\] The resonant frequency difference between the two eigenmodes will therefore be \[\Delta\nu=\frac{\Delta\phi_{\rm r_{1}}-\Delta\phi_{\rm r_{2}}}{2\pi}\nu_{\rm FSR}. \tag{13}\] Since we defined the ITM and ETM axes such that \(\Delta\phi_{\rm r_{j}}\) have the same sign for the ITM and ETM, when \(\theta=0\), the phase differences between the axes add and the resonant frequency difference is maximized. When \(\theta=\pi/2\), it is minimized, as the phase differences cancel. When \(0<\theta<\pi/2\), the resonant frequency difference will be in between the maximum and the minimum. When the resonant frequency difference is smaller than the cavity linewidth, i.e., \(\Delta\phi_{\rm r_{j}}\ll 2\pi/\mathcal{F}\), and when the effect from the ITM substrate birefringence is small, i.e., \(\Delta\phi_{\rm t_{1}}\ll\Delta\phi_{\rm r_{1}}\mathcal{F}/\pi\), the resonant frequency difference can be calculated with \[\Delta\nu\simeq\frac{\pi(\arg w_{a}-\arg w_{b})}{\mathcal{F}}\frac{\nu_{\rm FSR}}{2\pi}, \tag{14}\] at \(\phi=0\), where \[\mathcal{F}=\frac{\pi\sqrt{r_{1}r_{2}}}{1-r_{1}r_{2}} \tag{15}\] is the finesse of the cavity. This can be further approximated as [38] \[\Delta\nu\simeq\frac{\delta_{\rm EQ}}{2\pi}\nu_{\rm FSR}, \tag{16}\] where \[\delta_{\rm EQ}\equiv\sqrt{(\Delta\phi_{\rm r_{1}}-\Delta\phi_{\rm r_{2}})^{2}+4\Delta\phi_{\rm r_{1}}\Delta\phi_{\rm r_{2}}\cos^{2}\theta}, \tag{17}\] when \(\delta_{\rm EQ}\ll 1\). 
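To make the eigenmode analysis concrete, the sketch below builds the round-trip Jones operator of Eq. (6) numerically and compares its eigenphase splitting with the approximation of Eqs. (16)-(17); all mirror parameters are illustrative assumptions, not values of any particular detector:

```python
import numpy as np

def rot(th):
    """Rotation matrix R(theta) of Eq. (7)."""
    return np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

r1, r2 = 0.993, 0.99999                  # assumed amplitude reflectivities
dphi_r1, dphi_r2 = 1.0e-3, 0.5e-3        # assumed coating phase differences (rad)
theta = 0.3                              # ETM axis rotation (rad)

R1 = np.diag([r1, r1 * np.exp(-1j * dphi_r1)])
R2 = np.diag([r2, r2 * np.exp(-1j * dphi_r2)])

# Round-trip operator of Eq. (6) at phi = 0; its eigenphase difference gives
# the resonant frequency splitting in units of the free spectral range.
A = R1 @ rot(-theta) @ R2 @ rot(theta)
split = abs(np.diff(np.angle(np.linalg.eigvals(A)))[0])

delta_eq = np.sqrt((dphi_r1 - dphi_r2)**2
                   + 4 * dphi_r1 * dphi_r2 * np.cos(theta)**2)
print(f"eigenphase split   = {split:.3e} rad")
print(f"delta_EQ, Eq. (17) = {delta_eq:.3e} rad")
print(f"Delta nu / nu_FSR  = {split / (2 * np.pi):.3e}")
```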
Also, the cavity eigenmodes are linear polarizations approximated as \[\vec{v}_{a}=\begin{pmatrix}\cos\theta_{\rm EQ}\\ \sin\theta_{\rm EQ}\end{pmatrix},\quad\vec{v}_{b}=\begin{pmatrix}-\sin\theta_{\rm EQ}\\ \cos\theta_{\rm EQ}\end{pmatrix}, \tag{18}\] where the polarization angle is defined by \[\cos 2\theta_{\rm EQ}=\frac{\frac{\Delta\phi^{\prime}_{r_{1}}}{\Delta\phi_{\rm r_{2}}}+\cos 2\theta}{\sqrt{\left(\frac{\Delta\phi^{\prime}_{r_{1}}}{\Delta\phi_{\rm r_{2}}}-1\right)^{2}+4\frac{\Delta\phi^{\prime}_{r_{1}}}{\Delta\phi_{\rm r_{2}}}\cos^{2}\theta}}, \tag{19}\] with \[\Delta\phi^{\prime}_{r_{1}}\equiv\Delta\phi_{r_{1}}+\frac{\pi}{\mathcal{F}}\Delta\phi_{\rm t_{1}}. \tag{20}\] When \(\Delta\phi^{\prime}_{r_{1}}\gg\Delta\phi_{r_{2}}\), \(\theta_{\rm EQ}\) is equal to zero; when \(\Delta\phi^{\prime}_{r_{1}}=\Delta\phi_{r_{2}}\), \(\theta_{\rm EQ}\) is equal to \(\theta/2\); and when \(\Delta\phi^{\prime}_{r_{1}}\ll\Delta\phi_{r_{2}}\), \(\theta_{\rm EQ}\) is equal to \(\theta\). Note that the polarization states resonating inside the cavity are elliptic polarizations given by \(R_{1}T_{1}\vec{v}_{a,b}/(r_{1}t_{1})\), and are different from the linear polarizations given by Eq. (18). The mis-match between the cavity polarization mode and the input beam polarization can be calculated with \[\Lambda^{2}=1-|\vec{v}_{a}\cdot\vec{v}_{\rm in}|^{2}\,. \tag{21}\] When the input beam is linearly polarized with the polarization angle \(\theta_{\rm pol}\) such that \[\vec{v}_{\rm in}=R(\theta_{\rm pol})\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}\cos\theta_{\rm pol}\\ \sin\theta_{\rm pol}\end{pmatrix}, \tag{22}\] Eq. (21) reduces to \[\Lambda^{2}=\sin^{2}{(\theta_{\rm EQ}-\theta_{\rm pol})}. \tag{23}\] The mis-match will be less than 0.1 % when \(|\theta_{\rm EQ}-\theta_{\rm pol}|\) is smaller than 1.8 degrees. For gravitational wave detectors, this is required for both arm cavities. This means that the axes of the two arm cavities need to be aligned to the same degree. Note that the mis-match does not directly mean that there is the same amount of power loss. The actual power loss also depends on the amount of birefringence, as we will discuss in Sec. IV. Figure 2 shows the polarization eigenmodes of the cavity as a function of the ETM rotation angle \(\theta\), calculated using Eqs. (16) and (19). As we have discussed earlier, the resonant frequency difference will be maximized at \(\theta=0\) and minimized at \(\theta=\pi/2\). When \(\theta=\pi/2\) and \(\Delta\phi_{\mathrm{r}_{1}}=\Delta\phi_{\mathrm{r}_{2}}\), the phase difference between the ordinary and extraordinary axes is completely cancelled, and the two modes will be degenerate. In this case, two linear polarizations and two circular polarizations will both be cavity eigenmodes, since the two modes have the same resonant frequency. The bottom panel of Fig. 2 shows the mis-match calculated using Eq. (21), assuming the input polarization is linear and aligned with either of the ITM axes. The mis-match is nulled at \(\theta=0\) and \(\theta=\pi/2\). To minimize the mis-match and to make the resonant frequency difference large, aligning the ETM rotation such that \(\theta=0\) and aligning the input polarization to one of the ITM axes will be the optimal choice. The requirement on the alignment will not be severe, since the dependence on the ETM rotation angle goes with \(\theta^{2}\) at \(\theta=0\). 
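As a quick numerical check of Eq. (23) and of the 0.1 % statement above (a sketch; the angles are arbitrary examples):

```python
import math

# Eq. (23): mis-match = sin^2(theta_EQ - theta_pol) for a linear input beam.
for mis_deg in (0.5, 1.0, 1.8, 5.0):
    mismatch = math.sin(math.radians(mis_deg)) ** 2
    print(f"|theta_EQ - theta_pol| = {mis_deg:3.1f} deg -> "
          f"mis-match = {100 * mismatch:.3f} %")
```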
For deriving the cavity reflected beam, we need to calculate the electric field inside the cavity that propagates from the ETM to the ITM. This can be written as \[\vec{E}^{\prime}_{\mathrm{cav}} = R(-\theta)R_{2}R(\theta)e^{-i\phi}M_{\mathrm{cav}}\vec{E}_{\mathrm{in}} \tag{24}\] \[\equiv M^{\prime}_{\mathrm{cav}}\vec{E}_{\mathrm{in}}. \tag{25}\] The eigenvectors of \(M^{\prime}_{\mathrm{cav}}\) are the same as those of \(M_{\mathrm{cav}}\) within the approximations discussed above, but the cavity enhancement factors will be slightly different. When \(\theta=0\), the cavity enhancement factors will be \[w^{\prime}_{a}=\frac{t_{1}r_{2}e^{-i\phi}}{1-r_{1}r_{2}e^{-i\phi}},\quad w^{\prime}_{b}=\frac{t_{1}r_{2}e^{-i(\phi+\frac{1}{2}\Delta\phi_{\mathrm{t}_{1}}+\Delta\phi_{\mathrm{r}_{2}})}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{1}}+\Delta\phi_{\mathrm{r}_{2}})}}, \tag{26}\] and when \(\theta=\pi/2\), they will be \[w^{\prime}_{a}=\frac{t_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{2}})}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{2}})}},\quad w^{\prime}_{b}=\frac{t_{1}r_{2}e^{-i(\phi+\frac{1}{2}\Delta\phi_{\mathrm{t}_{1}})}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{1}})}}. \tag{27}\] Compared with \(w_{a}\) and \(w_{b}\), these have an extra phase \(\phi\) from the cavity round trip and an extra phase \(\Delta\phi_{\mathrm{r}_{2}}\) for the corresponding axis from one additional reflection from the ETM. ### Phase of cavity reflected beam The noises due to temporal fluctuations of birefringence will be imprinted in the phase of the cavity reflected beam. The electric field of the cavity reflection can be written as \[\vec{E}_{\mathrm{refl}}=M_{\mathrm{refl}}\vec{E}_{\mathrm{in}} \tag{28}\] where \[M_{\mathrm{refl}}\equiv S_{1}+T_{1}M^{\prime}_{\mathrm{cav}}. \tag{29}\] The first term corresponds to the prompt reflection from the ITM, and the second term is the ITM transmitted beam from the cavity circulating beam. In general, when the input beam polarization component is \[\vec{v}_{\mathrm{in}}=a\vec{v}^{\prime}_{a}+b\vec{v}^{\prime}_{b}, \tag{30}\] the polarization component of the reflected beam is \[M_{\mathrm{refl}}\vec{v}_{\mathrm{in}}=a(S_{1}+w^{\prime}_{a}T_{1})\vec{v}^{\prime}_{a}+b(S_{1}+w^{\prime}_{b}T_{1})\vec{v}^{\prime}_{b}. \tag{31}\] Since the resonant condition of each eigenmode is generally different, in general \(|w^{\prime}_{a}|\neq|w^{\prime}_{b}|\). Therefore, the polarization component of the cavity reflected beam will be different from the input polarization. When we use a Faraday isolator to extract the cavity reflection, we extract the polarization component which is the same as the input polarization.
Figure 2: The polarization eigenmodes of a Fabry-Pérot cavity as a function of the ETM rotation angle \(\theta\). The top panel shows the round-trip phase difference between the eigenmodes in units of \(\Delta\phi_{\mathrm{r}_{1}}\), i.e., \(2\pi\Delta\nu/(\nu_{\mathrm{FSR}}\Delta\phi_{\mathrm{r}_{1}})\), which is proportional to the resonant frequency difference. The middle panel shows the polarization angle of the eigenmodes \(\theta_{\mathrm{EQ}}\) calculated using Eq. (19). The bottom panel shows the mis-match of the input beam polarization to the eigenmodes, when it is linear and aligned with the ITM axes, calculated using Eq. (21). Different colors of the lines correspond to different \(\Delta\phi_{\mathrm{r}_{2}}/\Delta\phi_{\mathrm{r}_{1}}\) ratios. The blue lines for the \(\Delta\phi_{\mathrm{r}_{2}}=0\) case in the bottom two plots are zero.
Therefore, the phase of the cavity reflected beam can be calculated with \[\arg\left(E_{\rm out}\right)=\arg\left(E_{\rm refl\parallel}\right)=\arg\left(E_{\rm in}M_{\rm refl}\vec{v}_{\rm in}\cdot\vec{v}_{\rm in}\right). \tag{32}\] In the case when the input beam polarization is aligned to the ITM ordinary axis, this reflected phase is the phase of the (1,1) component of \(M_{\rm refl}\), and that for the ITM extraordinary axis is the (2,2) component of \(M_{\rm refl}\). Let us first consider the effects from the ITM. If we set \(\Delta\phi_{\rm r_{2}}=0\) and the input beam is linearly polarized with the polarization angle \(\theta_{\rm pol}\) as shown in Eq. (22), the reflected electric field in the polarization parallel to \(\vec{v}_{\rm in}\) and in the orthogonal polarization will be \[\frac{E_{\rm refl\parallel}}{E_{\rm in}}=M_{\rm refl}\vec{v}_{\rm in}\cdot\vec{v}_{\rm in}=(-r_{1}+w^{\prime}_{a}t_{1})\cos^{2}\theta_{\rm pol}+(-r_{1}e^{-i\Delta\phi_{\rm s_{1}}}+w^{\prime}_{b}t_{1}e^{-i\frac{1}{2}\Delta\phi_{\rm t_{1}}})\sin^{2}\theta_{\rm pol}, \tag{33}\] \[\frac{E_{\rm refl\perp}}{E_{\rm in}}=M_{\rm refl}\vec{v}_{\rm in}\cdot R(\theta_{\rm pol})\begin{pmatrix}0\\ 1\end{pmatrix}=\left[(-r_{1}+w^{\prime}_{a}t_{1})-(-r_{1}e^{-i\Delta\phi_{\rm s_{1}}}+w^{\prime}_{b}t_{1}e^{-i\frac{1}{2}\Delta\phi_{\rm t_{1}}})\right]\times\frac{\sin\left(2\theta_{\rm pol}\right)}{2}. \tag{34}\] These are similar to the electric fields of the bright reflection port and the dark anti-symmetric port for a Fabry-Perot-Michelson interferometer that has an unbalanced beam splitter. The effects from the ETM birefringence can be calculated by setting \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm r_{1}}=0\), and replacing \(\Delta\phi_{\rm r_{1}}\) with \(\Delta\phi_{\rm r_{2}}\) and \(\theta_{\rm pol}\) with \(\theta+\theta_{\rm pol}\). If we combine the effects from the ITM and the ETM, the phase of the reflected beam around the resonance can be approximated as \[\arg\left(\frac{E_{\rm refl\parallel}}{E_{\rm in}}\right)=(\Delta\phi_{\rm s_{1}}-2\Delta\phi_{\rm t_{1}})\sin^{2}\theta_{\rm pol}-\frac{\cal F}{\pi}\left[\phi+\Delta\phi_{\rm r_{1}}\sin^{2}\theta_{\rm pol}+\Delta\phi_{\rm r_{2}}\sin^{2}\left(\theta+\theta_{\rm pol}\right)\right], \tag{35}\] with the approximation that \(\Delta\phi_{\rm r_{j}}\ll 2\pi/{\cal F}\) and \(r_{2}=1\). It is clear that both the ETM rotation angle \(\theta\) and the input beam polarization angle \(\theta_{\rm pol}\) change the phase of the cavity reflected beam, and will contribute to the phase noise, unless \(\theta_{\rm pol}\) and \(\theta+\theta_{\rm pol}\) are either \(0\) or \(\pi/2\), where the effects are quadratic in these angles. The fluctuations of the phase differences between the ordinary and extraordinary axes also create phase noises, unless \(\theta_{\rm pol}\) and \(\theta+\theta_{\rm pol}\) are both \(0\). It is worth noting that, even if we use this phase to lock the cavity, this does not generally mean that the cavity is locked on resonance to one of its polarization eigenmodes, as the cavity reflected beam contains the phase fluctuations from both polarization eigenmodes. To avoid the mixing of phase noises from the two polarization eigenmodes, it is actually better to have higher static coating birefringence, i.e., \(\Delta\phi_{\rm r_{j}}\gg 2\pi/{\cal F}\). 
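The alignment argument above can be illustrated numerically. The sketch below evaluates the exact reflected field of Eqs. (26) and (33) for an ITM-only case (\(\Delta\phi_{\rm r_{2}}=0\)) on resonance, with assumed phase differences, and shows the reflected phase growing from zero as \(\theta_{\rm pol}\) is misaligned:

```python
import numpy as np

r1, r2 = 0.993, 0.99999                        # assumed mirror reflectivities
t1 = np.sqrt(1.0 - r1**2)
dphi_s1, dphi_t1, dphi_r1 = 2e-4, 3e-4, 1e-4   # assumed ITM phases (rad)

def refl_phase(theta_pol, phi=0.0):
    """arg(E_refl_par / E_in) from Eqs. (26) and (33), with dphi_r2 = 0."""
    wpa = t1 * r2 * np.exp(-1j * phi) / (1 - r1 * r2 * np.exp(-1j * phi))
    wpb = (t1 * r2 * np.exp(-1j * (phi + 0.5 * dphi_t1))
           / (1 - r1 * r2 * np.exp(-1j * (phi + dphi_r1))))
    E = ((-r1 + wpa * t1) * np.cos(theta_pol)**2
         + (-r1 * np.exp(-1j * dphi_s1)
            + wpb * t1 * np.exp(-0.5j * dphi_t1)) * np.sin(theta_pol)**2)
    return np.angle(E)

for deg in (0.0, 1.0, 10.0, 45.0):
    print(f"theta_pol = {deg:4.1f} deg -> arg = {refl_phase(np.radians(deg)):+.3e} rad")
```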
If the static coating birefringence is high such that one of the eigenmodes is out of resonance when the other is resonant, only the \(\Delta\phi_{\rm s_{1}}\) and \(\phi\) terms remain in Eq. (35). ## III Noises from birefringence In this section, we calculate the phase noises from temporal fluctuations of birefringence and derive the requirements for current and future gravitational wave detectors. For calculating the requirements, we have used the interferometer parameters summarized in Table 1 and the displacement sensitivity curves shown in Fig. 3. At the last part of this section, we also discuss the noise from the amplitude fluctuations in the orthogonal polarization at the anti-symmetric port of the Fabry-Perot-Michelson interferometer. Although different interferometers plan to use different materials for the mirrors, the discussions presented here do not depend on the choice of materials. ### Phase noises from substrate birefringence The phase changes from the ITM substrate birefringence can be calculated from Eq. (35) by setting \(\Delta\phi_{\rm r_{1}}=\Delta\phi_{\rm r_{2}}=0\) and \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm t_{1}}\). In this case, Eq. (35) reduces to \[\arg\left(\frac{E_{\rm refl\parallel}}{E_{\rm in}}\right)=-\Delta\phi_{\rm s_{1}}\sin^{2}\theta_{\rm pol}-\frac{\cal F}{\pi}\phi. \tag{36}\] Therefore, the length noise couplings from the fluctuations of \(\theta_{\rm pol}\) and \(\Delta\phi_{\rm s_{1}}\) can be calculated as \[\frac{\delta L}{\delta\theta_{\rm pol}}=\frac{\lambda}{4\pi}\frac{\delta[\arg\left(E_{\rm refl\parallel}\right)]}{\delta\theta_{\rm pol}}\left(\frac{\delta[\arg\left(E_{\rm refl\parallel}\right)]}{\delta\phi}\right)^{-1}=\frac{\lambda}{4{\cal F}}\Delta\phi_{\rm s_{1}}\sin 2\theta_{\rm pol}, \tag{37}\] \[\frac{\delta L}{\delta(\Delta\phi_{\rm s_{1}})}=-\frac{\lambda}{4{\cal F}}\sin^{2}\theta_{\rm pol}. \tag{38}\]
Table 1: Interferometer parameters of Advanced LIGO (aLIGO), A+, Voyager, Cosmic Explorer (CE), Einstein Telescope Low Frequency (ET-LF), and ET High Frequency (ET-HF) used for calculating the requirements. \(L\): arm length, \(\mathcal{F}\): arm finesse, \(t\): ITM thickness, \(\lambda\): laser wavelength.

|  | \(L\) | \(\mathcal{F}\) | \(t\) | \(\lambda\) | Ref. |
| --- | --- | --- | --- | --- | --- |
| aLIGO | 4 km | 450 | 20 cm | 1064 nm | [4] |
| A+ | 4 km | 450 | 20 cm | 1064 nm | [39] |
| Voyager | 4 km | 3000 | 55 cm | 2050 nm | [11] |
| CE | 40 km | 450 | 27.3 cm | 2050 nm | [15] |
| ET-LF | 10 km | 900 | 57 cm | 1550 nm | [13] |
| ET-HF | 10 km | 900 | 30 cm | 1064 nm | [13] |

### Phase noises from coating birefringence Next, we consider the phase changes from the coating birefringence. From Eq. (35), it is clear that the second term, from \(\Delta\phi_{\rm r_{1}}\) and \(\Delta\phi_{\rm r_{2}}\), contributes more to the phase of the reflected beam than the first term, from \(\Delta\phi_{\rm s_{1}}\) and \(\Delta\phi_{\rm t_{1}}\), since the phase acquired inside the cavity is enhanced by a factor of \(\mathcal{F}/\pi\). 
The length noise couplings from the fluctuations of \(\theta_{\rm pol}\), \(\theta\), and \(\Delta\phi_{\rm r_{j}}\) can be calculated as \[\frac{\delta L}{\delta\theta_{\rm pol}}=\frac{\lambda}{4\pi}\left[\Delta\phi_{\rm r_{1}}\sin 2\theta_{\rm pol}+\Delta\phi_{\rm r_{2}}\sin{[2(\theta+\theta_{\rm pol})]}\right], \tag{39}\] \[\frac{\delta L}{\delta\theta}=\frac{\lambda}{4\pi}\Delta\phi_{\rm r_{2}}\sin{[2(\theta+\theta_{\rm pol})]}, \tag{40}\] \[\frac{\delta L}{\delta(\Delta\phi_{\rm r_{1}})}=-\frac{\lambda}{4\pi}\sin^{2}\theta_{\rm pol}, \tag{41}\] \[\frac{\delta L}{\delta(\Delta\phi_{\rm r_{2}})}=-\frac{\lambda}{4\pi}\sin^{2}{(\theta+\theta_{\rm pol})}. \tag{42}\] ### Requirements on birefringence fluctuations The noise couplings discussed above are nulled when \(\theta_{\rm pol}=0\) and \(\theta=0\). For the KAGRA test masses, the sapphire \(c\)-axis was aligned to the cylindrical plane of the test mass within 0.1 deg [20]. For deriving the requirements on the birefringence fluctuations for the substrate and the coating, we assume that the input beam polarization and the ETM axes are aligned to the ITM axes to \(\theta_{\rm pol}=1\) deg and \(\theta=1\) deg, respectively.
Figure 4: The requirements on birefringence fluctuations from the axis rotations (top) and from the phase differences between the ordinary and extraordinary axes (middle) for different gravitational wave detectors. The bottom plot shows the requirement on the substrate birefringence converted from the phase difference requirements on \(\Delta\phi_{\rm s_{1}}\) in the middle plot, assuming uniform \(\Delta n\), using Eq. (43). The solid lines are for a substrate with a static birefringence of \(\Delta n=10^{-7}\) and the dashed lines are for a coating with a static birefringence of \(\Delta\phi_{\rm r_{j}}=1\) mrad. For deriving these requirements, we assumed that the input beam polarization and the ETM axes are aligned to the ITM axes to \(\theta_{\rm pol}=1\) deg and \(\theta=1\) deg, and no safety margin is considered.
Figure 3: The designed displacement sensitivities for different gravitational wave detectors. The strain sensitivity data are taken from Refs. [40; 41; 42] and converted to displacement sensitivities by removing the frequency-dependent responses to gravitational waves [43].
The solid lines in Fig. 4 show the derived requirements for the substrate birefringence fluctuations. We assumed that the ITM substrate has uniform birefringence \(\Delta n\), and \(\Delta\phi_{\rm s_{1}}\) can be written using the mirror thickness \(t\) as \[\Delta\phi_{\rm s_{1}}=\frac{4\pi}{\lambda}\Delta nt. \tag{43}\] We used the static birefringence value of \(\Delta n=10^{-7}\), which is a typical measured value for silicon [21; 22]. The dashed lines in Fig. 4 show the derived requirements for the coating, using the static birefringence value of \(\Delta\phi_{\rm r_{j}}=1\) mrad, which is a typical measured value for AlGaAs coatings [23; 16; 24]. The requirements do not change for other materials when they have the same amount of static birefringence. For deriving the requirement for \(\Delta\phi_{\rm r_{j}}\), we used Eq. (42), as this gives a more stringent requirement than Eq. (41). All the requirements are divided by \(\sqrt{2}\) to take into account that the birefringence noises of the two arm cavities are incoherent, assuming both cavities have similar levels of birefringence. 
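As an illustration, the couplings of Eqs. (37)-(42) can be evaluated for aLIGO-like parameters from Table 1 with the 1 deg alignment assumption used here; the static birefringence values are the typical levels quoted above, and the coupling magnitudes are sketches, not derived requirements:

```python
import numpy as np

lam = 1064e-9          # laser wavelength (m), aLIGO row of Table 1
F = 450.0              # arm cavity finesse
t = 0.20               # ITM thickness (m)
theta_pol = theta = np.radians(1.0)   # alignment assumption used in the text

dphi_s1 = 4 * np.pi * 1e-7 * t / lam  # substrate phase difference, Eq. (43)
dphi_r = 1e-3                         # coating phase difference (rad), ~1 mrad

couplings = {
    "dL/d(theta_pol), substrate (Eq. 37)":
        lam / (4 * F) * dphi_s1 * np.sin(2 * theta_pol),
    "dL/d(dphi_s1) (Eq. 38)":
        lam / (4 * F) * np.sin(theta_pol) ** 2,
    "dL/d(theta_pol), coating (Eq. 39)":
        lam / (4 * np.pi) * dphi_r
        * (np.sin(2 * theta_pol) + np.sin(2 * (theta + theta_pol))),
    "dL/d(dphi_r2) (Eq. 42)":
        lam / (4 * np.pi) * np.sin(theta + theta_pol) ** 2,
}
for name, value in couplings.items():
    print(f"{name:38s} {abs(value):.2e} m/rad")
```

Dividing the displacement sensitivities of Fig. 3 by these couplings yields the requirement curves of Fig. 4.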
The requirements will be relaxed for effects common to the two arms, such as fluctuations in the input beam polarization angle and birefringence induced by laser intensity fluctuations. The requirements on the axis rotations for future gravitational wave detectors are on the order of \(10^{-10}\) rad\(/\sqrt{\rm Hz}\). We note that the requirements on \(\theta_{\rm pol}\) and \(\theta\) presented here are also the requirements on the polarization fluctuations of the input beam and on the roll motion of the mirrors. As for the roll motion of the mirrors, the vertical seismic motion creates less than a \(10^{-11}\) rad\(/\sqrt{\rm Hz}\) level of roll motion above 10 Hz for the Advanced LIGO suspensions, if we conservatively assume that the coupling from vertical to roll motion is unity [35; 44]. Therefore, the birefringence noise from the roll motion of the mirrors is small enough. The requirements on the phase differences between the ordinary and extraordinary axes for future gravitational wave detectors are on the order of \(10^{-8}\) rad\(/\sqrt{\rm Hz}\) for the substrate and \(10^{-10}\) rad\(/\sqrt{\rm Hz}\) for the coating. Birefringence at the \(10^{-8}\) rad\(/\sqrt{\rm Hz}\) level can be feasibly evaluated with shot-noise-limited interferometry at a laser power of the \(P=10\) mW level, as the shot-noise-limited phase sensitivity of a Michelson interferometer is given by \[\phi_{\rm shot}=\sqrt{\frac{hc}{2\lambda P}}, \tag{44}\] where \(h\) is the Planck constant and \(c\) is the speed of light. Evaluation of birefringence at the \(10^{-10}\) rad\(/\sqrt{\rm Hz}\) level requires a 10-W class laser or cavity enhancement. Measurements can be done at relatively low power compared with gravitational wave detectors, as the phase noise from birefringence is attenuated by \(\sin^{2}\theta_{\rm pol}\) and \(\sin^{2}\left(\theta+\theta_{\rm pol}\right)\) when the polarization axis and the mirror crystal axes are aligned. In the evaluation setup, the phase noise can be enhanced by intentionally misaligning the axes. One of the possible sources of birefringence fluctuations is magnetic field fluctuations due to the Faraday effect. Measured magnetic field fluctuations at various gravitational wave detector sites are on the order of \(10^{-12}\) T\(/\sqrt{\rm Hz}\) at 10 Hz [45], and the Verdet constant of silicon is 15 rad\(/(\)T\(\cdot\)m\()\) [46]. These give a \(10^{-11}\) rad\(/\sqrt{\rm Hz}\) level of \(\Delta\phi_{\rm s_{1}}\) for the mirror thicknesses in Table 1, which is below the requirements given above. ### Amplitude noise at the anti-symmetric port So far, we have considered the phase noise in the arm cavity reflected beams in gravitational wave detectors. In gravitational wave detectors, the differential arm length caused by gravitational waves will be read out as the interference fringe changes at the anti-symmetric port. Birefringence fluctuations will also create power fluctuations in the orthogonal polarization, and these will be a noise source when the output Faraday isolator has a finite extinction ratio \(\epsilon\) and the orthogonal polarization is not completely rejected. A slight misalignment of the axes between the input Faraday isolator and the output Faraday isolator would also cause a finite extinction ratio. 
From Eq. (34), the power of the cavity reflected beam in the orthogonal polarization from the birefringence in the ITM can be written as \[\frac{P_{\rm refl\perp}|_{\rm res}}{P_{\rm in}}\simeq\frac{1}{4}\left(\Delta\phi_{\rm s_{1}}-2\Delta\phi_{\rm t_{1}}-\frac{\mathcal{F}}{\pi}\Delta\phi_{\rm r_{1}}\right)^{2}\sin^{2}\left(2\theta_{\rm pol}\right) \tag{45}\] when the cavity is on resonance. Here, \(P_{\rm in}=|E_{\rm in}|^{2}\) is the input power to the cavity, and we used \(r_{2}=1\), \(r_{1}\simeq 1\), and \(t_{1}^{2}=1-r_{1}^{2}\), which are good approximations for the arm cavities of gravitational wave detectors. We also assumed that the amount of birefringence is uniform and small, i.e., \(\Delta\phi_{\rm r_{j}}\ll 2\pi/\mathcal{F}\), \(\Delta\phi_{\rm s_{1}}\ll 1\), and \(\Delta\phi_{\rm t_{1}}\ll 1\). As we can see from Eq. (34), the orthogonal polarization vanishes when there is no birefringence, or when \(\theta_{\rm pol}\) is \(0\) or \(\pi/2\). The orthogonal polarization component is generated from the unbalance of the reflected electric fields between the two eigenmodes. Therefore, when the amount of birefringence is small, the phase of \(E_{\rm refl\perp}\) is always around \(\pi/2\) away from the phase of \(E_{\rm refl\parallel}\). This means that the orthogonal polarization in the cavity reflection is always in the quadrature phase with respect to the gravitational wave signal, independent of the resonant condition of the cavity. In the case of gravitational wave detectors, the anti-symmetric port will therefore be either at the bright or the dark fringe for the orthogonal polarization when it is at the dark fringe for the main polarization. When both arms are completely symmetric and the amount of birefringence is the same, the anti-symmetric port will be at the bright fringe for the orthogonal polarization. This is the same as the reason why the polarization signal from axion dark matter is present at the anti-symmetric port, as discussed in Ref. [35]. In reality, the beam splitter in the Fabry-Perot-Michelson interferometer adds an extra phase difference between the two polarization axes due to the \(\sim\)45 deg incident angle, and the fringe will be slightly shifted. To derive the requirements for the extinction ratio \(\epsilon\) of the output Faraday isolator, let us assume that the power of the orthogonal polarization component at the anti-symmetric port can be roughly estimated from the power from one of the arms. By requiring the power fluctuation from the orthogonal polarization from one of the arms to be less than the shot noise of the local oscillator beam in the main polarization, we can require \[\epsilon<\frac{1}{P_{\text{refl}\perp}}\sqrt{\frac{2hcP_{\text{LO}}}{\lambda}}, \tag{46}\] where \(P_{\text{LO}}\) is the power of the local oscillator beam at the anti-symmetric port. When the requirements for the birefringence fluctuations derived in the previous subsections are met, the noise from the birefringence fluctuations is lower than the shot noise of the gravitational wave detector. Therefore, the requirement can be rewritten as \[\epsilon\lesssim\sqrt{\frac{P_{\text{LO}}}{P_{\text{in}}}}\left(\Delta\phi_{\text{s}_{1}}-2\Delta\phi_{\text{t}_{1}}-\frac{\mathcal{F}}{\pi}\Delta\phi_{\text{r}_{1}}\right)^{-1}. \tag{47}\] For gravitational wave detectors operating with the DC readout scheme [47], \(P_{\text{LO}}\) and \(P_{\text{in}}\) are on the order of 10 mW and 10 kW for the power-recycled case, respectively. 
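The arithmetic of Eq. (47) with these powers can be checked directly (a sketch; the 1 rad birefringence term matches the order-of-magnitude assumption stated next):

```python
import math

P_LO, P_in = 10e-3, 10e3   # local oscillator and input powers (W), quoted above
biref = 1.0                # |dphi_s1 - 2*dphi_t1 - (F/pi)*dphi_r1|, assumed ~1 rad

eps = math.sqrt(P_LO / P_in) / biref
print(f"extinction ratio requirement: eps < {eps:.0e}")
# A power extinction eps corresponds to an axis misalignment of asin(sqrt(eps)).
print(f"isolator alignment: {math.degrees(math.asin(math.sqrt(eps))):.1f} deg")
```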
Assuming that the birefringence terms \(\Delta\phi_{\text{s}_{1}}\), \(\Delta\phi_{\text{t}_{1}}\), and \(\Delta\phi_{\text{r}_{1}}\mathcal{F}/\pi\) are on the order of 1 rad, the requirement on the extinction ratio will be \(\epsilon\lesssim 0.1\%\). This means that the input Faraday isolator and the output Faraday isolator have to be aligned within 1.8 degrees. ## IV Optical losses from inhomogeneous birefringence Birefringence and its inhomogeneity in cavities create power losses from depolarization. The mode content of the cavity reflected beam in the orthogonal polarization will be different depending on the locations of the birefringence and the resonant condition of the cavity. In this section, we discuss the power of the cavity reflected beam in the orthogonal polarization to estimate the optical loss. To show that different locations of birefringence create different mode content, we first consider the effects from the ITM, as we have considered in Eqs. (33) and (34). From Eq. (34), the power loss to the orthogonal polarization when the cavity is out of resonance will be \[\frac{P_{\text{refl}\perp}}{P_{\text{in}}}\simeq\frac{1}{4}(\Delta\phi_{\text {s}_{1}})^{2}\sin^{2}{(2\theta_{\text{pol}})}, \tag{48}\] under the same approximations used to derive Eq. (45). So far, we have only considered birefringence that is uniform over the substrate and the coating. When there is a perturbation from uniform birefringence, spatial higher order modes are generated. The amount of the higher order modes in the orthogonal polarization can be estimated from the inhomogeneous birefringence \(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}\). The power in the higher order modes when the cavity is on resonance and out of resonance will be \[\frac{P_{\text{refl}\perp}^{\text{HOM}}\big{|}_{\text{res}}}{P_{ \text{in}}} \simeq \frac{1}{4}\left(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}-\Delta \phi_{\text{t}_{1}}^{\text{HOM}}\right)^{2}\sin^{2}{(2\theta_{\text{pol}})}, \tag{49}\] \[\frac{P_{\text{refl}\perp}^{\text{HOM}}\big{|}_{\text{off}}}{P_{ \text{in}}} \simeq \frac{1}{4}(\Delta\phi_{\text{s}_{1}}^{\text{HOM}})^{2}\sin^{2}{( 2\theta_{\text{pol}})}, \tag{50}\] respectively. Note that the coefficient for \(\Delta\phi_{\text{t}_{1}}^{\text{HOM}}\) is 1, as opposed to 2 for \(\Delta\phi_{\text{t}_{1}}\) in Eq. (45), since higher order modes do not resonate in the cavity and are generated in the ITM transmission of the intra-cavity beam. To consider the effect of the ITM substrate birefringence, we can set \(\Delta\phi_{\text{r}_{1}}=0\), \(\Delta\phi_{\text{s}_{1}}=\Delta\phi_{\text{t}_{1}}\) and \(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}=\Delta\phi_{\text{t}_{1}}^{\text{HOM}}\). In this case, the amount of the fundamental transverse mode in the orthogonal polarization stays the same whether the cavity is out of resonance or on resonance. However, the amount of higher order modes in the orthogonal polarization is suppressed to second order, as we can see from Eq. (49). This is similar to the Lawrence effect for the thermal lensing of the ITM [48]. It is worth noting that the cavity reflected power in the main polarization \(P_{\text{refl}\parallel}\) could increase when the cavity is on resonance due to this effect, if the optical loss in the cavity is small compared with the optical loss from inhomogeneous birefringence. 
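Eqs. (48)-(50) can be evaluated directly; the following minimal sketch (Python, with purely illustrative input values) also makes the Lawrence-like suppression of the substrate higher-order-mode loss on resonance explicit:

```python
# Depolarization losses from Eqs. (48)-(50); all input values are illustrative.
import numpy as np

def loss_off_resonance(dphi, theta_pol):
    """Eqs. (48)/(50): fractional power lost to the orthogonal polarization,
    cavity out of resonance, for a (possibly higher-order-mode) phase dphi."""
    return 0.25 * dphi**2 * np.sin(2 * theta_pol)**2

def loss_hom_on_resonance(dphi_s_hom, dphi_t_hom, theta_pol):
    """Eq. (49): higher-order-mode power in the orthogonal polarization,
    cavity on resonance."""
    return 0.25 * (dphi_s_hom - dphi_t_hom)**2 * np.sin(2 * theta_pol)**2

theta_pol = np.pi / 4  # worst case: sin(2*theta_pol) = 1
print(loss_off_resonance(0.1, theta_pol))          # 0.1 rad -> 2.5e-3 (0.25 %)
# Substrate case (dphi_s = dphi_t): the HOM loss vanishes on resonance
print(loss_hom_on_resonance(0.1, 0.1, theta_pol))  # -> 0.0
```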
For the KAGRA sapphire ITM, the transmission wavefront error difference between the two polarizations was measured to be around 60 nm in RMS [19, 20], which corresponds to a round-trip phase difference \(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}\) of 0.7 rad in RMS. If we attribute all of this to an inhomogeneous refractive index difference using Eq. (43), this corresponds to a \(\Delta n^{\text{HOM}}\) of \(2\times 10^{-7}\) in RMS, using the KAGRA sapphire mirror thickness of 15 cm and the laser wavelength of 1064 nm. For sapphire, the amount of birefringence along the \(c\)-axis can be calculated with [49] \[\Delta n=\frac{n_{o}(n_{o}^{2}-n_{e}^{2})\psi^{2}}{n_{e}^{2}}, \tag{51}\] where \(n_{e}=1.747\) and \(n_{o}=1.754\) are the refractive indices along the \(c\)-axis and along axes orthogonal to the \(c\)-axis, respectively, and \(\psi\ll 1\) is the inclination of the light propagation direction with respect to the \(c\)-axis. Using this equation, the amount of birefringence observed in KAGRA can be explained by a \(\psi^{\text{HOM}}\) of 0.2 deg in RMS. This is larger than the nominal orientation of the beam propagation axis with respect to the \(c\)-axis, which was aligned within 0.1 deg [20]. This suggests that \(\theta_{\text{pol}}\) is also inhomogeneous and uncontrolled. Using Eq. (50), this inhomogeneous birefringence creates a power loss to the orthogonal polarization of around 10% when the arm cavity is out of resonance, if \(\theta_{\rm pol}\) is around \(\pi/4\). This is consistent with the measured value in KAGRA, as reported in Ref. [9]. The reduction of the power loss to the orthogonal polarization on resonance was also observed, which is consistent with the Lawrence effect described above. In the KAGRA case, the power of the orthogonal polarization inside the power recycling cavity was reduced by a factor of three when the arm cavity was locked on resonance. To make the optical loss due to inhomogeneous birefringence of the ITM substrate always smaller than \(0.1\%\), \(\Delta\phi_{\rm s_{1}}\) and \(\Delta\phi_{\rm s_{1}}^{\rm HOM}\) need to be smaller than \(0.06\) rad in RMS. Achieving this with surface figuring alone could be challenging, as surface figuring cannot compensate for the phase difference between the two axes. This requirement can be eased by aligning the input polarization axis to \(\theta_{\rm pol}=0\) or \(\pi/2\). When considering the effect from the ITM coating birefringence, we can set \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm r_{1}}\). However, \(\Delta\phi_{\rm s_{1}}\) is not exactly \(\Delta\phi_{\rm t_{1}}\), as the penetration length for the coating is different from the coating thickness. Therefore, the Lawrence effect does not completely suppress the higher order modes. If we set \(\Delta\phi_{\rm s_{1}}=l\Delta\phi_{\rm t_{1}}\), where \(0<l<1\) is the ratio of the penetration length to the coating thickness, the higher order modes in the orthogonal polarization increase when the cavity is locked on resonance, for \(l<0.5\). The fundamental transverse mode in the orthogonal polarization increases for high finesse cavities with \(\mathcal{F}/\pi\gg 1\). The mode content in the orthogonal polarization from the ETM coating birefringence can be obtained by replacing \(\Delta\phi_{\rm r_{1}}\) with \(\Delta\phi_{\rm r_{2}}\) and \(\theta_{\rm pol}\) with \(\theta+\theta_{\rm pol}\) in Eqs. 
(45), (48), (49) and (50), and by setting \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm t_{1}}=0\), as \[\frac{P_{\rm refl\perp}|_{\rm res}}{P_{\rm in}} \simeq \frac{1}{4}\left(\frac{\mathcal{F}}{\pi}\Delta\phi_{\rm r_{2}} \right)^{2}\sin^{2}\left[2(\theta+\theta_{\rm pol})\right]\!, \tag{52}\] \[\frac{P_{\rm refl\perp}|_{\rm off}}{P_{\rm in}} \simeq 0, \tag{53}\] \[\frac{P_{\rm refl\perp}^{\rm HOM}|_{\rm res}}{P_{\rm in}} \simeq 0, \tag{54}\] \[\frac{P_{\rm refl\perp}^{\rm HOM}|_{\rm off}}{P_{\rm in}} \simeq 0. \tag{55}\] Therefore, as for the effects from the ETM coating birefringence, the power in the orthogonal polarization increases when the cavity is locked on resonance, and the fundamental transverse mode dominates, because the higher order modes are suppressed in the cavity. The discussion above highlights the fact that the optical losses from birefringence need to be correctly taken into account when measuring the optical losses in the arm cavity. It also suggests that, by measuring the mode content of the beam in the orthogonal polarization when the cavity is out of resonance and on resonance, we can estimate where the optical losses from birefringence mainly come from. Future gravitational wave detector designs call for 10 dB of detected squeezing, requiring the total optical loss to be less than \(10\%\) [50]. From Eqs. (45) and (52), to keep the optical loss from birefringence below \(0.1\%\) when the birefringence terms \(\Delta\phi_{\rm s_{1}}\), \(\Delta\phi_{\rm t_{1}}\), and \(\Delta\phi_{\rm r_{2}}\mathcal{F}/\pi\) are on the order of \(1\) rad, \(|\theta_{\rm pol}|\) and \(|\theta+\theta_{\rm pol}|\) need to be less than \(1.8\) degrees. Similar to the discussion around Eq. (23), the polarization of the injected squeezed vacuum also needs to be aligned to within \(1.8\) degrees to keep the optical loss below \(0.1\%\). ## V Conclusions and outlook In this paper, we have discussed the effects of birefringence and its fluctuations in the mirror substrate and coating for laser interferometric gravitational wave detectors. We have shown that the polarization axis of the beam and the crystal axes of the mirrors need to be aligned to minimize the optical losses and the noise from birefringence fluctuations. The optical losses from birefringence can be feasibly reduced to less than \(0.1\%\) when the axes are aligned within a few degrees. We have also shown that the requirements for the birefringence fluctuations in the substrate and the coating will be on the order of \(10^{-8}\) rad\(/\sqrt{\rm Hz}\) and \(10^{-10}\) rad\(/\sqrt{\rm Hz}\) at \(100\) Hz, respectively, for future gravitational wave detectors with mirrors that have a \(\Delta n=10^{-7}\) level of substrate birefringence and a \(\Delta\phi_{\rm r_{1}}=1\) mrad level of coating birefringence. When the static coating birefringence is large enough that the resonant frequency difference between the two polarization eigenmodes is larger than the cavity linewidth, the requirements on the coating birefringence fluctuations will be relaxed. In addition, we have derived the equations for estimating the amount of optical losses due to depolarization from inhomogeneous birefringence of mirror substrates and coatings. Our results provide the basic theory to study the noise and optical losses from birefringence fluctuations of mirrors in gravitational wave detectors. In our model, we assumed the amount of birefringence and the mis-orientation of the axes to be small. 
We also assumed the two interferometer arms of gravitational wave detectors to be close to symmetric. Detailed interferometer modeling will be necessary to treat larger birefringence, mis-orientation of the axes, inhomogeneity of the birefringence and axis orientations, and asymmetry between the two arms, including birefringent beam splitter effects. These effects would create classical radiation pressure noise, as the intra-cavity power fluctuates due to birefringence fluctuations. Including the power and signal recycling cavities in the model would also be important when these effects are not negligible and the resonant condition in the recycling cavities is different between the polarizations. We leave these studies to future work. ###### Acknowledgements. We would like to thank Hiroki Fujimoto, Kevin Kuns, Stefan W. Ballmer, Valery Frolov and Martin M. Fejer for insightful discussions. This work was supported by the Gordon and Betty Moore Foundation, by the National Science Foundation under Grant No. PHY-1912677, by JSPS KAKENHI Grant No. JP20H05854, and by JST PRESTO Grant No. JPMJPR200B. FSC acknowledges support from the Barish-Weiss postdoctoral fellowship. This paper carries LIGO DCC No. LIGO-P2300220 and JGW Document No. JGW-P2315068.
Crystalline materials are promising candidates for the mirror substrates or high-reflectivity coatings of future laser interferometric gravitational wave detectors, as they can reduce thermal noise. However, the birefringence of these materials can degrade the sensitivity of gravitational wave detectors: it not only introduces optical losses, but its fluctuations also create phase noise in the beam reflected from the arm cavities. In this paper, we analytically evaluate the effects of birefringence in the mirror substrates and coatings of gravitational wave detectors. Our calculations show that the required levels of birefringence fluctuations for the silicon substrates and AlGaAs coatings of future gravitational wave detectors are \(10^{-8}\) and \(10^{-10}\) rad\(/\sqrt{\rm Hz}\), respectively. In addition, the optical cav…
2309.08625
Performance of ChatGPT-3.5 and GPT-4 on the United States Medical Licensing Examination With and Without Distractions
As Large Language Models (LLMs) are predictive models building their response based on the words in the prompts, there is a risk that small talk and irrelevant information may alter the response and the suggestion given. Therefore, this study aims to investigate the impact of medical data mixed with small talk on the accuracy of medical advice provided by ChatGPT. USMLE step 3 questions were used as a model for relevant medical data. We use both multiple choice and open ended questions. We gathered small talk sentences from human participants using the Mechanical Turk platform. Both sets of USMLE questions were arranged in a pattern where each sentence from the original questions was followed by a small talk sentence. ChatGPT 3.5 and 4 were asked to answer both sets of questions with and without the small talk sentences. A board-certified physician analyzed the answers by ChatGPT and compared them to the formal correct answer. The analysis results demonstrate that the ability of ChatGPT-3.5 to answer correctly was impaired when small talk was added to medical data for multiple-choice questions (72.1\% vs. 68.9\%) and open questions (61.5\% vs. 44.3\%; p=0.01), respectively. In contrast, small talk phrases did not impair ChatGPT-4 ability in both types of questions (83.6\% and 66.2\%, respectively). According to these results, ChatGPT-4 seems more accurate than the earlier 3.5 version, and it appears that small talk does not impair its capability to provide medical recommendations. Our results are an important first step in understanding the potential and limitations of utilizing ChatGPT and other LLMs for physician-patient interactions, which include casual conversations.
Myriam Safrai, Amos Azaria
2023-09-12T05:54:45
http://arxiv.org/abs/2309.08625v1
Performance of ChatGPT-3.5 and GPT-4 on the United States Medical Licensing Examination With and Without Distractions ## Abstract Efforts are being made to improve the time effectiveness of healthcare providers. Artificial intelligence tools can help transcribe and summarize physician-patient encounters and produce medical notes and medical recommendations. However, in addition to medical information, discussions between healthcare providers and patients include small talk and other information irrelevant to medical concerns. As Large Language Models (LLMs) are predictive models building their response based on the words in the prompts, there is a risk that small talk and irrelevant information may alter the response and the suggestion given. Therefore, this study aims to investigate the impact of medical data mixed with small talk on the accuracy of medical advice provided by ChatGPT. USMLE step 3 questions were used as a model for relevant medical data. We use both multiple choice and open ended questions. First, we gathered small talk sentences from human participants using the Mechanical Turk platform. Second, both sets of USMLE questions were arranged in a pattern where each sentence from the original questions was followed by a small talk sentence. ChatGPT 3.5 and 4 were asked to answer both sets of questions with and without the small talk sentences. Finally, a board-certified physician analyzed the answers by ChatGPT and compared them to the official correct answer. The analysis results demonstrate that the ability of ChatGPT-3.5 to answer correctly was impaired when small talk was added to medical data for multiple-choice questions (72.1% vs. 68.9%; p=0.67) and open questions (61.5% vs. 44.3%; p=0.01), respectively. In contrast, small talk phrases did not impair ChatGPT-4 ability in both types of questions (83.6% and 66.2%, respectively). According to these results, ChatGPT-4 seems more accurate than the earlier 3.5 version, and it appears that small talk does not impair its capability to provide medical recommendations. Our results are an important first step in understanding the potential and limitations of utilizing ChatGPT and other LLMs for physician-patient interactions, which include casual conversations. ## Introduction One of the key, yet most time-consuming, healthcare tasks is charting and creating medical notes, consuming hours of healthcare providers' time every day [1]. In fact, this task often requires healthcare providers to spend as much, if not more, time than they do in direct patient interaction [2, 3]. For example, in a survey, 67% of the residents reported spending in excess of 4 hours daily on documentation [1]. Despite the importance of medical notes [4, 5], no changes have been made to their format, besides transferring the responsibility of writing them from other medical team members to the physicians [6]. This shift has created a burden for medical providers [7] and physician burnout [8]. Moreover, the recent implementation of electronic health records (EHRs) has significantly increased clinician documentation time [9], making it the most time-consuming physician activity [10]. This emphasizes the pressing need to improve how charting and medical notes are done. Large Language Models (LLMs) have been suggested as a possible solution, improving healthcare documentation, creating notes, summarizing physician-patient encounters, and even providing meaningful suggestions for further treatments [11, 12]. 
For example, Chat Generative Pre-trained Transformer 3.5 (ChatGPT-3.5) has been shown to generate a correct diagnosis for 93% of clinical cases with common chief complaints [13] and to screen for breast cancer with an average correct rate of 88.9% [14]. In addition, ChatGPT-3.5 was able to provide general medical information on common retinal diseases [15], on almost every subject in gynecology [16], and on cancer-related subjects [17]. Moreover, another article demonstrated ChatGPT-3.5's ability to generate clinical letters with high overall accuracy and humanization [18]. Recent investigations have also shown ChatGPT-3.5's ability to write medical notes [12] and to generate a discharge note based on a brief description [19]. More recently, a newer version, ChatGPT-4, was released. ChatGPT-4 can process a greater word limit and offers a stronger ability to solve complex problems, as well as image recognition [20]. This version has additionally shown greater capabilities in terms of clinical evaluation [21, 22]. Namely, while ChatGPT-3.5 obtained a score of 60.9% on a US sample clinical exam, ChatGPT-4 obtained a score of 89.8% on the same exam [21]. A similar result was obtained on the Japanese medical exam, in which ChatGPT-3.5 obtained an average score of 121.3 on the first part of the exam and 149.7 on the second part, while ChatGPT-4 obtained an average score of 167.7 on the first part of the exam and 221.5 on the second part [22]. Following the success of ChatGPT in the medical field, the technology has been tested for summarizing physician-patient encounters [23]. Those appointments between healthcare providers and patients form the foundation of medical care [24]. They necessitate medical evaluations, including the provider's focus on patient needs, obtaining medical histories [25, 26], conducting physical examinations [27, 28], and performing additional tests if necessary [29, 30]. Moreover, they also entail non-medical tasks such as documenting patient records, organizing notes, and making referrals. However, since healthcare provider-patient discussions are unique and based on trust, in addition to medical information, they often include small talk and other information irrelevant to medical concerns [31, 32]. These unique exchanges are an important part of the relationship between medical providers and patients and are common across different cultures [33]. In traditional Chinese medicine, doctors actively initiate small talk to acquire holistic information for diagnosis and attach great importance to it [33]. In contrast, such interaction with small talk has been found to alter the technical skills and performance of medical students [34]. These controversies raise concerns regarding their potential impact on LLMs. As LLMs are predictive models generating their response based on the words in the provided prompt [35], there is a risk that small talk and irrelevant information may alter the response and the provided suggestion. Despite the growing number of studies on the potential of using AI for healthcare purposes, to the best of our knowledge, none have assessed this unique aspect of healthcare provider-patient interactions and the effect that casual conversation and unrelated information could have on ChatGPT's efficacy in processing medical information and, consequently, in writing medical notes summarizing physician-patient interactions. 
This study aims to investigate the impact of interspersing medical data with casual conversation on the precision of medical recommendations provided by ChatGPT-3.5 and ChatGPT-4. ## Material and methods ### Medical Information To assess ChatGPT's capabilities in medical reasoning, we evaluate its responses to questions from the United States Medical Licensing Examination (USMLE). This exam has been successfully used to assess the medical logic of LLMs in previous studies [36]. Specifically, to evaluate the LLM's proficiency in addressing clinical queries, we selected the Step 3 exam, which is the final examination in the USMLE sequence that qualifies individuals to practice medicine unsupervised. The multiple-choice questions in this exam primarily test knowledge related to diagnosis and clinical management and reflect clinical situations that a general physician might encounter1. Footnote 1: [https://www.usmle.org/step-exams/step-3/step-3-exam-content](https://www.usmle.org/step-exams/step-3/step-3-exam-content) USMLE Step 3 questions were sourced from the dataset provided by Kung et al. [36]. Two distinct sets of questions were utilized in the study. The first comprised the original multiple-choice (MC) questions from the USMLE Step 3 exam, while the second presented the same questions in an open-ended (OE) format. Each set contained 122 questions. ### Obtaining Small Talk Sentences We conducted a survey on Amazon's Mechanical Turk platform, which allows researchers to recruit participants for various tasks, including online surveys and experiments. Mechanical Turk has gained considerable popularity in recent years as a tool for research due to its efficiency, cost-effectiveness, and the ability to reach a vast pool of participants [37]. In our survey, we required the participants to write sentences with at least 10 words to encourage more thoughtful and meaningful responses and reduce the likelihood of individuals providing rushed, brief answers (e.g., "I ate something", "I saw someone", etc.). This is because we aim for participants to produce meaningful sentences that emulate small talk, ensuring they convey information in a casual conversational manner. The participants were provided the following instructions. "Please write 5 different sentences as if you were talking to your friend. Each sentence must describe something that has happened to you or an action that you have performed in the past few days. The sentences should not depend on each-other. It is OK to write sentences about simple everyday occurrences (e.g., "1. I sat on a chair on my balcony and looked at the cars passing by."). Each sentence should be at least 10 words long." We note that we intentionally framed the small talk in the context of "talking to a friend" rather than talking to a physician, since we did not want the small talk sentences to have any true influence on the correct answer. By framing the small talk in the context of talking to a friend, we aimed for the correct diagnosis to remain unchanged. We recruited 35 participants, each of whom provided 5 sentences, resulting in 175 sentences. The following are some examples of sentences we received from the Mechanical Turk workers: 1. I had a great time catching up with my friends at the coffee shop. 2. I finished reading a great book and I'm looking for my next one. 3. I biked to the park and watched the birds for an hour. All sentences shorter than 10 words were removed. 
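A minimal sketch of this filtering step, assuming simple whitespace tokenization (the paper does not specify the tokenizer, and the variable names are ours):

```python
# Keep only small talk sentences with at least 10 words.
raw_sentences = [
    "I had a great time catching up with my friends at the coffee shop.",
    "I saw someone.",  # too short: removed by the filter
]

MIN_WORDS = 10
small_talk = [s for s in raw_sentences if len(s.split()) >= MIN_WORDS]
print(len(small_talk))  # 1 -- the short sentence is dropped
```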
The remaining sentences were converted to the third person to better align with the USMLE format. This resulted in a list of 143 small talk sentences, which are provided in the appendix. Converting the three aforementioned sentences to the third person yields the following: 1. The person had a great time catching up with their friends at the coffee shop. 2. The person finished reading a great book and is looking for their next one. 3. The person biked to the park and watched the birds for an hour. ### Small talk Integration Into Medical Information A program was developed that executed the following procedure on the USMLE Step 3 questions. Through sentence tokenization, each question was broken down into individual sentences, and a small talk sentence was inserted after each one. Once processed, each sentence from the USMLE question was followed by a sentence from the small talk file, creating an alternating sequence, as shown in Fig. 1. The final dataset included a total of 488 questions: 122 multiple-choice questions and 122 open-ended questions, each presented with and without small talk. ### ChatGPT Queries ChatGPT was prompted using the OpenAI API (in Python). Each question was submitted as a user query without system messages. Each query was submitted separately as a new query, i.e., our program read each question from the file and submitted it to ChatGPT. We used the openai.ChatCompletion.create function with the default parameters2. Footnote 2: [https://openai.com/blog/openai-api](https://openai.com/blog/openai-api) Figure 1: **Example of a question from the open-ended question dataset with added small talk sentences**. The small talk sentences, added for this illustration, are highlighted in green (the actual dataset does not contain any color highlighting). ### ChatGPT Answers Assessment All the responses from ChatGPT to the various datasets were evaluated by a single board-certified physician (MS). For both multiple-choice and open-ended formats, ChatGPT's responses were validated against the official answers of the original multiple-choice questions. ### Statistical Analysis Statistical analyses were performed using Python (Scipy 1.10.1). The different group analyses were conducted using the chi2_contingency function provided by the scipy.stats library. P values less than 0.05 were considered statistically significant. ## Results The overall performance of ChatGPT-4 was significantly better than that of ChatGPT-3.5, with 75.4% vs. 61.7% correct responses overall, respectively (p<0.001). A significantly better score was observed for ChatGPT-4 both on the USMLE questions without the addition of small talk (75.4% vs. 66.8%, p=0.045) and on the questions including small talk (75.4% vs. 56.6%, p<0.001) (Fig. 2). In addition, the effect of small talk integration within medical information differs between the two ChatGPT versions. ChatGPT-3.5 showed a clear decrease in answer accuracy when small talk sentences were added to the medical data, with a significant drop from 66.8% to 56.6% across all ChatGPT-3.5 answers (p=0.025). When looking at each dataset separately, the influence of small talk integration on each type of question is more prominent (the group-comparison test used for these analyses is sketched below). 
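A minimal sketch of such a comparison using the chi2_contingency function named in the Statistical Analysis subsection; the correct/incorrect counts are our illustrative reconstructions from the reported percentages (66.8% and 56.6% of 244 answers), not the study's raw data:

```python
# Chi-squared test of ChatGPT-3.5 with vs. without small talk.
from scipy.stats import chi2_contingency

table = [[163, 244 - 163],   # without small talk: correct, incorrect
         [138, 244 - 138]]   # with small talk: correct, incorrect
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p ~ 0.025, consistent with the text
```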
Fig 2: **Result of the performance of ChatGPT-3.5 and ChatGPT-4 on USMLE datasets with and without small talk.**_Caption:_ The figure shows the significant difference in ChatGPT-3.5's and ChatGPT-4's performance with and without the addition of small talk sentences. In addition, it demonstrates the significant difference in the performance of ChatGPT-3.5 for the datasets with and without small talk addition. ST - Small talk, with the addition of small talk to the original question. * and ** indicate statistical significance at levels \(p<0.05\) and \(p<0.001\), respectively.

ChatGPT-3.5 demonstrates a non-significant reduction from 72.1% to 68.9% for the multiple-choice questions, while a more considerable and significant drop in performance from 61.5% to 44.3% (p=0.01) was observed for open-ended questions. In contrast, the performance of ChatGPT-4 remained unchanged despite the introduction of small talk, displaying 67.2% and 83.6% correct answers for open-ended and multiple-choice questions, respectively (Fig. 3).

Fig 3: **Performance of ChatGPT-3.5 and ChatGPT-4 on the two types of USMLE questions, i.e., multiple choice and open ended, with and without small talk.**_Caption:_ ChatGPT-4 performed significantly better than ChatGPT-3.5 (p\(<0.001\)). The small talk seemed to have a larger effect on the performance of ChatGPT-3.5 in the open ended questions. ST - Small talk, with the addition of small talk to the original question. OE - Open-ended questions, MC - Multiple choice questions. * and ** indicate statistical significance at levels \(p<0.05\) and \(p<0.001\), respectively.

Upon closer examination of ChatGPT's answers to each question, a pattern of error can be observed in ChatGPT's responses when the correct answer is that no further test or investigation is required. For instance, each dataset included two questions whose correct answer was "No further evaluation is necessary" or "No additional study is indicated." Both ChatGPT versions responded incorrectly to the open questions, suggesting further investigation or treatment regardless of small talk addition. In contrast, when prompted with the multiple-choice dataset, ChatGPT-3.5 answered one of the two questions correctly when no small talk was inserted, but was disturbed by the addition of small talk and responded incorrectly to both questions afterwards. ChatGPT-4 also improved its score on the multiple-choice questions and got one correct answer. In contrast to ChatGPT-3.5, its answers were not impaired by adding small talk, and its performance was the same even after adding irrelevant information. In other questions, where the correct answer was a diagnosis or treatment, the addition of small talk impaired ChatGPT-3.5's performance. For example, as seen in Fig. 4, the response was correct before adding small talk. However, as shown in Fig. 5, once small talk phrases were added to the question, ChatGPT-3.5 failed and provided an incorrect response. Interestingly, even though the small talk caused ChatGPT to respond incorrectly, it does not explicitly mention any of the small talk information in its answer and does not explain its wrong answer based on the specific interference added to this question. ## Discussion The primary purpose of this study was to investigate the effect of the addition of small talk to medical data on the accuracy of medical advice provided by ChatGPT. 
First, as expected, ChatGPT-4 outperforms ChatGPT-3.5 with an overall higher score for open and multiple-choice questions. This matches the expectation, as ChatGPT-4 is a more advanced version and has been shown to outperform ChatGPT-3.5 on multiple-choice questions in the US and Japanese medical exams [21, 36]. However, this is the first study to show a similar improvement in the capacity of ChatGPT-4 to surpass ChatGPT-3.5 in giving medical recommendations to open questions that simulate daily clinical needs. The high score of ChatGPT-4, with roughly two-thirds of open questions answered correctly in our study, indicates its ability to process medical information. These findings suggest the capacity of ChatGPT-4 to respond and provide medical advice and demonstrate its potential future use in the medical field. When evaluating the effect of small talk addition to the different datasets, ChatGPT-3.5 showed a slight drop in performance for multiple-choice questions and a significant one in answering the open-ended questions following the addition of small talk. In contrast, ChatGPT-4's performance was consistent regardless of small talk, with stable accuracy rates for both question types. To our knowledge, this is the first study evaluating the effect of small talk on ChatGPT and other LLMs' efficacy in processing medical information in the context of informal or irrelevant information.

Fig 4: ChatGPT-3.5 provides a correct answer to a question that does not include small talk interference.

A 37-year-old man comes to the emergency department because he has felt nauseated and light-headed for the past hour. The person is finally getting the hang of a new coding language. Medical history is significant for esophageal varices secondary to alcohol-related cirrhosis, and ascites treated with spironolactone. The person went for a walk but forgot to take their wallet with them. He drinks eight to ten alcoholic beverages daily. The person listened to their mom and tried to understand what she was trying to say. While you are obtaining additional history, the patient vomits a large volume of bright red blood and becomes difficult to arouse. The person watched the sunset over the lake. Vital signs are temperature \(36.0^{o}C\) (\(96.8^{o}F\)), pulse 110/min, respirations 12/min, and blood pressure 90/50 mm Hg. The person is planning to go to the movies with friends, eager to see a new Marvel movie. Following initiation of intravenous fluids, what is the most appropriate immediate management? Given the patient's symptoms of vomiting a large volume of bright red blood, difficulty arousing, and hemodynamic instability (low blood pressure), this is a medical emergency. The most likely diagnosis is bleeding from esophageal varices, a common complication of cirrhosis. The immediate management in this situation is to address the bleeding and stabilize the patient. The most appropriate immediate management would be to initiate a transfusion of packed red blood cells to restore blood volume and correct the patient's anemia. This will help stabilize the patient while further interventions can be planned, such as an upper endoscopy to identify the bleeding source and potentially perform interventions like band ligation or sclerotherapy to control the bleeding from the varices.

Fig 5: ChatGPT-3.5 incorrectly responds to a question mixed with small talk (highlighted in green).

Our study demonstrates the various impacts of adding small talk on different versions of ChatGPT. 
It implies that the addition of small talk does not impair ChatGPT-4's performance in processing medical data, and that it can provide the same accuracy in medical recommendations as in a 'medical only' conversation. During a provider-patient interaction, irrelevant information is often mixed with medical data, and the medical data needs to be processed and summarized, in contrast to the small talk. It has been demonstrated in a previous study that ChatGPT can summarize and provide a note for 'medical only' physician-patient encounters [23]. Therefore, our data suggest that ChatGPT-4 can assist in this task without being impaired by a patient-provider casual discussion that might occur and be provided to ChatGPT in a transcript. These findings provide important answers for medical practitioners and LLM developers regarding the potential implementation of ChatGPT and other LLMs as tools in medicine. This is especially important as it is predicted that chatbots will be used by medical professionals, as well as by patients, with increasing frequency [23]. The analysis of the exact scoring of ChatGPT in our study demonstrates that ChatGPT-3.5 answered 72.1% of the multiple-choice questions correctly without small talk integration. This score is higher than the range of 61.5% to 68.8% reported by Kung et al. [36]. It should be noted, however, that our study was conducted approximately 8 months after the original assessment. A possible explanation for this difference is that ChatGPT, as an artificial intelligence system, has learned and adapted from the data. As it encounters more information, it refines its models, which often leads to improved performance and accuracy [38]. It is plausible that the elevated scores observed in our research can be attributed to a marked learning enhancement. These findings likely underscore the continuous improvement of ChatGPT over time. We are optimistic that subsequent studies will yield even more favorable outcomes, enhancing ChatGPT's ability to offer even better medical recommendations and furnish dependable support to healthcare providers in medical record documentation. Each dataset included two questions where the correct answer was that no further investigation was required. Both ChatGPT versions' answers to the open-ended versions of these questions were wrong. In contrast, for the multiple-choice questions, ChatGPT-3.5 answered one of the two questions correctly when no small talk was added, and both incorrectly after this addition, whereas ChatGPT-4 was not influenced by the small talk addition and consistently answered one of the two questions correctly. Our study is the first to report the challenge and complexity for LLMs of responding to these types of questions. These types of answers are crucial in medicine, as patients can easily be referred to countless further tests and investigations, burdening the patients and the medical system [39]. These queries challenge LLMs, for which the specific wording of the prompt dramatically influences the answer provided [40]. In these examples, asking what the next step should be may imply that a next step is indeed required. That finding demonstrates the complexity of using ChatGPT for different queries and the need to acknowledge the limits of this technology at its current stage of development. Finally, we sought to analyze the cause of the small talk disturbance to ChatGPT-3.5's processing. We hypothesized that adding different subjects and specific words would cause ChatGPT-3.5's processing to fail. 
However, while the presence of small talk impaired ChatGPT-3.5's performance on the question datasets, the answers provided by ChatGPT-3.5 did not attribute the wrong answer to any specific subject or word included in the small talk. This result is concerning: since the incorrect responses do not mention any of the unrelated information, it may be difficult for a health provider reviewing the answers to pinpoint errors. Our study has several limitations. The most prominent is that it is challenging to mimic the small talk that occurs between a health provider and a patient. In our model, we framed the small talk in the context of "talking to a friend" rather than a physician to avoid bias and the integration of medical terms. However, in practice, the patient will be talking to a physician; thus, even the small talk may resemble medical information being conveyed. Such small talk might deteriorate the performance of ChatGPT-3.5 and might even affect the performance of ChatGPT-4, which, in our analysis, seemed immune to small talk. In addition, in our work, the small talk sentences and the medical information were added in an alternating sequence to USMLE questions, and each small talk sentence was added as a standalone piece of information. However, in medical practice, the transcript of a physician-patient interaction may be much longer than a USMLE question, and the small talk might be structured differently. USMLE questions have been used previously to assess medical data processing [36], reinforcing the use of this dataset for such a purpose and allowing us to compare our results. Nevertheless, it is possible that different patterns of small talk integration in different scripts might have various effects on ChatGPT's ability to provide medical counsel. We would also like to stress that this work focuses on both medical information and small talk conveyed in text; however, in practice, the irrelevant information can be conveyed in different modes, such as images (either medical-related images or pictures of the patient's family, pets, etc.) or sounds (either caused by a medical condition of the patient, or the patient laughing in response to a joke, imitating their boss, etc.). Despite this, the present analysis provides important new information about how the most common mode of communication [19, 26], including irrelevant information, in physician-patient encounters affects the ability of the different versions of ChatGPT to provide medical advice. Another potential limitation of this study is that it focuses on ChatGPT only and has not assessed different LLMs; its findings therefore cannot be generalized to other LLMs. Future research could thus attempt to investigate whether the addition of small talk interferes with other LLMs' (such as BERT, Claude, LLaMA-1, and LLaMA-2) ability to provide medical advice. In this paper, we took the first step toward understanding the performance of the two ChatGPT versions when faced with physician-patient interactions that include medical information mixed with irrelevant information. These unique interactions pose the challenge of discerning the impact of casual conversations on the accuracy and reliability of the medical recommendations made by these LLMs. This analysis shows that while ChatGPT-3.5's performance was significantly impaired by small talk addition, ChatGPT-4's performance was not affected. The results have significant implications for the integration of LLMs into medical practice. 
In addition, LLM developers, and especially healthcare providers, must be aware of these limitations. It should be emphasized that while LLMs can assist in many tasks, it is crucial to critically review and evaluate the suggestions and notes they generate, particularly in the context of patient interactions filled with non-medical content.
Large Language Models (LLMs) are predictive models that generate their responses based on the words in the prompt, so there is a risk that small talk and irrelevant information may alter the response and the suggestions given. This study therefore aims to investigate the impact of medical data mixed with small talk on the accuracy of the medical advice provided by ChatGPT. USMLE Step 3 questions were used as a model for relevant medical data. Both multiple-choice and open-ended questions were used. Small talk sentences were collected from human participants on the Mechanical Turk platform. The USMLE questions were arranged so that each sentence from the original question was followed by a small talk sentence, and ChatGPT 3.5 and 4 were asked to answer the questions both with and without the small talk sentences. A board-certified physician analyzed ChatGPT's answers and compared them to the official correct answers. According to the analysis results, when small talk sentences were added to the medical data, Chat…
2301.13341
Neural Target Speech Extraction: An Overview
Humans can listen to a target speaker even in challenging acoustic conditions that have noise, reverberation, and interfering speakers. This phenomenon is known as the cocktail-party effect. For decades, researchers have focused on approaching the listening ability of humans. One critical issue is handling interfering speakers because the target and non-target speech signals share similar characteristics, complicating their discrimination. Target speech/speaker extraction (TSE) isolates the speech signal of a target speaker from a mixture of several speakers with or without noises and reverberations using clues that identify the speaker in the mixture. Such clues might be a spatial clue indicating the direction of the target speaker, a video of the speaker's lips, or a pre-recorded enrollment utterance from which their voice characteristics can be derived. TSE is an emerging field of research that has received increased attention in recent years because it offers a practical approach to the cocktail-party problem and involves such aspects of signal processing as audio, visual, array processing, and deep learning. This paper focuses on recent neural-based approaches and presents an in-depth overview of TSE. We guide readers through the different major approaches, emphasizing the similarities among frameworks and discussing potential future directions.
Katerina Zmolikova, Marc Delcroix, Tsubasa Ochiai, Keisuke Kinoshita, Jan Černocký, Dong Yu
2023-01-31T00:26:52
http://arxiv.org/abs/2301.13341v1
# Neural Target Speech Extraction: An Overview ###### Abstract Humans can listen to a target speaker even in challenging acoustic conditions that have noise, reverberation, and interfering speakers. This phenomenon is known as the cocktail-party effect. For decades, researchers have focused on approaching the listening ability of humans. One critical issue is handling interfering speakers because the target and non-target speech signals share similar characteristics, complicating their discrimination. Target speech/speaker extraction (TSE) isolates the speech signal of a target speaker from a mixture of several speakers with or without noises and reverberations using clues that identify the speaker in the mixture. Such clues might be a spatial clue indicating the direction of the target speaker, a video of the speaker's lips, or a pre-recorded enrollment utterance from which their voice characteristics can be derived. TSE is an emerging field of research that has received increased attention in recent years because it offers a practical approach to the cocktail-party problem and involves such aspects of signal processing as audio, visual, array processing, and deep learning. This paper focuses on recent neural-based approaches and presents an in-depth overview of TSE. We guide readers through the different major approaches, emphasizing the similarities among frameworks and discussing potential future directions. Speech processing, target speech extraction, speech enhancement, multi-modal, deep learning ## I Introduction In everyday life, we are constantly immersed in complex acoustic scenes consisting of multiple sounds, such as a mixture of speech signals from multiple speakers and background noise from air-conditioners or music. Humans naturally extract relevant information from such noisy signals as they enter our ears. The cocktail-party problem is a typical example [1], where we can follow the conversation of a speaker of interest (target speaker) in a noisy room with multiple interfering speakers. Humans can manage this complex task due to selective attention or a selective hearing mechanism that allows us to focus our attention on a target speaker's voice and ignore others. Although the mechanisms of human selective hearing are not fully understood yet, many studies have identified essential cues exploited by humans to attend to a target speaker in a speech mixture: spatial, spectral (audio), visual, or semantic cues [1]. One long-lasting goal of speech processing research is designing machines that can achieve listening abilities similar to those of humans, i.e., selectively extracting the speech of a desired speaker based on auxiliary cues. In this paper, we present an overview of recent developments in target speech/speaker extraction (TSE), which estimates the speech signal of a target speaker in a mixture of several speakers, given auxiliary cues to identify the target1. In the following, we refer to auxiliary cues as clues, since they represent hints for identifying the target speaker in the mixture. Fig. 1 illustrates the TSE problem and shows that by exploiting the clues, TSE can focus on the voice of the target speaker while ignoring other speakers or noise. Inspired by psychoacoustic studies [1], several clues have been explored to tackle the TSE problem, such as spatial clues that provide the direction of the target speaker [2, 3], visual clues from video of their face [4, 5, 6, 7, 8, 9], or audio clues extracted from a pre-recorded enrollment recording of their voice [10, 11, 12]. 
Footnote 1: Alternative terms in the literature for TSE include informed source separation, personalized speech enhancement, or audio-visual speech separation, depending on the context and the modalities involved. The TSE problem is directly related to human selective hearing, although we approach it from an engineering point of view and do not try to precisely mimic human mechanisms. TSE is related to other speech and audio-processing tasks such as noise reduction and blind source separation (BSS) that do not use clues about the target speaker. Although noise reduction does suppress the background noise, it cannot handle interfering speakers well. BSS estimates each speech source signal in a mixture, which usually requires estimating the number of sources, a step that is often challenging. Moreover, it estimates the source signals without identifying them, which leads to global permutation ambiguity at its output; it remains ambiguous which of the estimated source signals corresponds to the target speaker. In contrast, TSE focuses on the target speaker's speech by exploiting clues without assuming knowledge of the number of speakers in the mixture and avoids global permutation ambiguity. It thus offers a practical alternative to noise reduction or BSS when the use case requires focusing on a desired speaker's voice. Solving the TSE problem has real implications for the development of many applications: (1) robust voice user interfaces or voice-controlled smart devices that only respond to a specific user; (2) teleconferencing systems that can remove interfering speakers close by; (3) hearing aids/hearables that can emphasize the voice of a desired interlocutor.

Fig. 1: TSE problem and examples of clues

TSE ideas can be traced back to early works on beamformers [2]. Several works also extended BSS approaches to exploit clues about the target speaker [4, 5, 12]. Most of these approaches required a microphone array [5] or models trained on a relatively large amount of speech data from the target speaker [4]. The introduction of neural networks (NNs) enabled the building of powerful models that learn to perform complex conditioning on various clues by leveraging large amounts of speech data of various speakers. This evolution resulted in impressive extraction performance. Moreover, neural TSE systems can operate with a single microphone and with speakers unseen during the training of the models, allowing more flexibility. This overview paper covers recent TSE development and focuses on neural approaches. Its remaining sections are organized as follows. In Section II, we formalize the TSE problem and its relation to noise reduction and BSS and introduce its historical context. We then present a taxonomy of TSE approaches and motivate the focus of this overview paper in Section III. We describe a general neural TSE framework in Section IV. The later sections (V, VI, and VII) introduce implementations of TSE with different clues, such as audio, visual, and spatial clues. We discuss extensions to other tasks in Section VIII. Finally, we conclude by describing the outlook on remaining issues in Section IX and provide pointers to available resources for experimenting with TSE in Section X. ## II Problem definition ### _Speech recorded with a distant microphone_ Imagine recording a target speaker's voice in a living room using a microphone placed on a table. This scenario represents a typical use case of a voice-controlled smart device or a video-conferencing device in a remote-work situation. 
Many sounds may co-occur while the speaker is speaking, e.g., a vacuum cleaner, music, children screaming, voices from another conversation, or from a TV. The speech signal captured at a microphone thus consists of a mixture of the target speaker's speech and interference from the speech of other speakers and background noise2. We can express the mixture signal recorded at a microphone as Footnote 2: In this paper, we do not explicitly consider the effect of reverberation caused by the reflection of sounds on the walls and surfaces in a room, which also corrupt the recorded signal. Some of the approaches we discussed implicitly handle reverberation. \[\mathbf{y}^{m}=\mathbf{x}_{s}^{m}+\underbrace{\sum_{k\neq s}\mathbf{x}_{k}^{m }+\mathbf{v}^{m}}_{\triangleq\mathbf{i}^{m}}, \tag{1}\] where \(\mathbf{y}^{m}=[y^{m}[0],\ldots,y^{m}[T]]\in\mathbb{R}^{T}\), \(\mathbf{x}_{s}^{m}\in\mathbb{R}^{T}\), \(\mathbf{x}_{k}^{m}\in\mathbb{R}^{T}\), and \(\mathbf{v}^{m}\in\mathbb{R}^{T}\) are the time-domain signal of the mixture, the target speech, the interference speech, and noise signals, respectively. Variable \(T\) represents the duration (number of samples) of the signals, \(m\) is the index of the microphone in an array of microphones, \(s\) represents the index of the target speaker and \(k\) is the index for the other speech sources. We drop microphone index \(m\) whenever we deal with single microphone approaches. In the TSE problem, we are interested in only recovering the target speech of speaker \(s\), \(\mathbf{x}_{s}^{m}\), and view all the other sources as undesired signals to be suppressed. We can thus define the interference signal as \(\mathbf{i}^{m}\in\mathbb{R}^{T}\). Note that we make no explicit hypotheses about the number of interfering speakers. ### _TSE problem and its relation to BSS and noise reduction_ The TSE problem is to estimate the target speech, given a clue, \(\mathbf{C}_{s}\), as \[\hat{\mathbf{x}}_{s}=\mathrm{TSE}(\mathbf{y},\mathbf{C}_{s};\theta^{\mathrm{ TSE}}), \tag{2}\] where \(\hat{\mathbf{x}}_{s}\) is the estimate of the target speech, \(\mathrm{TSE}(\cdot;\theta^{\mathrm{TSE}})\) represents a TSE system with parameters \(\theta^{\mathrm{TSE}}\). The clue, \(\mathbf{C}_{s}\), allows identifying the target speaker in the mixture. It can be of various types, such as a pre-recorded enrollment utterance, \(\mathbf{C}_{s}^{(a)}\), a video signal capturing the face or lips movements of the target speaker, \(\mathbf{C}_{s}^{(v)}\), or such spatial information as the direction of arrival (DOA) of the speech of the target speaker, \(\mathbf{C}_{s}^{(d)}\). In the later sections, we expand on how to design TSE systems. Here, we first emphasize the key difference between TSE and BSS and noise reduction. Fig. 2 compares these three problems. BSS [13, 14] estimates all the source signals in a mixture without requiring clues: \[\{\hat{\mathbf{x}}_{1},\ldots,\hat{\mathbf{x}}_{K}\}=\mathrm{BSS}(\mathbf{y}; \theta^{\mathrm{BSS}}), \tag{3}\] where \(\mathrm{BSS}(\cdot;\theta^{\mathrm{BSS}})\) represents a separation system with parameters \(\theta^{\mathrm{BSS}}\), \(\hat{\mathbf{x}}_{k}\) are the estimates of the speech sources, and \(K\) is the number of sources in the mixture. As seen in Eq. (3), BSS does not and cannot differentiate the target speech from other speech sources. Therefore, we cannot know in advance which output corresponds to the target speech, i.e., there is a global permutation ambiguity problem between the outputs and the speakers. 
Besides, since the number of outputs is given by the number of sources, the number of sources \(K\) must be known or estimated. Comparing Eqs. (2) and (3) emphasizes the fundamental difference between TSE and BSS: (1) TSE estimates only the target speech signal, while BSS estimates all the signals, and (2) TSE is conditioned on speaker clue \(\mathbf{C}_{s}\), while BSS only relies on the observed mixture3. Typical use cases for BSS include applications that require estimating speech signals of every speaker, such as automatic meeting transcription systems. Footnote 3: Another setup sitting between TSE and BSS is a task that extracts multiple target speakers, e.g., extracting the speech of all the meeting attendees given such information about them as enrollment or videos of all the speakers. Noise reduction is another related problem. It assumes that the interference only consists of background noise, i.e., \(\mathbf{i}=\mathbf{v}\), and can thus enhance the target speech without requiring clues: \[\hat{\mathbf{x}}_{s}=\mathrm{Denoise}(\mathbf{y};\theta^{\mathrm{Denoise}}), \tag{4}\] where \(\mathrm{Denoise}(\cdot;\theta^{\mathrm{Denoise}})\) represents a noise reduction system with parameters \(\theta^{\mathrm{Denoise}}\). Unlike BSS, a noise reduction system's output only consists of target speech \(\hat{\mathbf{x}}_{s}\), and there is thus no global permutation ambiguity. This is possible if the background noise and speech have distinct characteristics. For example, we can assume that ambient noise and speech signals exhibit different spectro-temporal characteristics that enable their discrimination. However, noise reduction cannot suppress interfering speakers because it cannot discriminate among different speakers in a mixture without clues4. Noise reduction is often used, e.g., in video-conferencing systems or hearing aids. Footnote 4: Some works propose to exploit clues for noise reduction and apply similar ideas of TSE to reduce background noise (and sometimes interfering speakers). In the literature, this is called personalized speech enhancement, which in this paper, we view as a special case of the TSE problem, where only the target speaker is actively speaking [15]. TSE is an alternative to BSS and noise reduction, which uses a clue to simplify the problem. Like BSS, it can handle speech mixtures. Like noise reduction, it only estimates the target speaker, thus avoiding global permutation ambiguity and the need to estimate the number of sources. However, TSE requires access to clues, unlike BSS and noise reduction. Moreover, it must internally perform two sub-tasks: (1) identifying the target speaker and (2) estimating the speech of that speaker in the mixture. TSE is thus a challenging problem that introduces specific issues and requires dedicated solutions. A straightforward way to achieve TSE using BSS methods is to first apply BSS and next select the target speaker among the estimated sources. Such a cascade system allows the separate development of BSS and speaker identification modules. However, this scheme is usually computationally more expensive and imports some disadvantages of BSS, such as the need to estimate the number of speakers in the mixture. Therefore, we focus on approaches that directly exploit the clues in the extraction process. Nevertheless, most TSE research is rooted in BSS, as argued in the following discussion on the historical context. ### _Historical context_ The first studies related to TSE were performed in the 1980s. Flanagan et al. 
[2] explored enhancing a target speaker's voice in a speech mixture, assuming that the target speech originated from a fixed and known direction. They employed a microphone array to record speech and designed a fixed beamformer that enhanced the signals from the target direction [2, 16]. We consider that this work represents an early TSE system that relies on spatial clues.

In the mid-1990s, the BSS problem gained attention with pioneering works on independent component analysis (ICA). ICA estimates spatial filters that separate the sources by relying on the assumption of the independence of the sources in the mixture and the fact that speech signals are non-Gaussian [13]. Frequency-domain ICA suffers from a frequency permutation problem because it treats each frequency independently. In the mid-2000s, independent vector analysis (IVA) addressed the frequency-permutation problem by working on vectors spanning all frequency bins, which allowed modeling dependencies among frequencies [13]. Several works have extended ICA and IVA to perform TSE, which simplifies inference by focusing on a single target source. For example, in the late 2000s, TSE systems were designed by incorporating the voice activity information of the target speaker derived from video signals into the ICA criterion, allowing identification and extraction of only the target source [5]. In the late 2010s, independent vector extraction (IVE) extended IVA to extract a single source out of the mixture. In particular, IVE exploits clues, such as an enrollment utterance of the target speaker, to guide the extraction process and achieve TSE [12]. All these approaches require a microphone array to capture speech.

In the first decade of the 2000s, single-channel approaches for BSS emerged, such as the factorial hidden Markov model (F-HMM) [17] and non-negative matrix factorization (NMF) [18]. These approaches relied on pre-trained spectral models of speech signals learned on clean speech data. An F-HMM is a model of speech mixtures, where the speech of each speaker in the mixture is explicitly modeled using a separate hidden Markov model (HMM). The parameters of each speaker-HMM are learned on the clean speech data of that speaker. The separation process involves inferring the most likely HMM state sequence associated with each speaker-HMM, which requires approximations to make inference tractable. This approach was the first to achieve super-human performance using only single-channel speech [17]. In the early 2000s, F-HMM was also among the first approaches to exploit visual clues [4]5. In NMF, the spectrogram of each source is modeled as the product of pre-learned bases, representing the basic spectral patterns, and their time-varying activations. NMF methods have also been extended to multi-channel signals [13] and used to extract a target speaker [19] with a flexible multi-source model of the background. The main shortcoming of the F-HMM and NMF methods is that they require pre-trained source models and thus struggle with unseen speakers. Furthermore, the inference employs a computationally expensive iterative optimization.

Fig. 2: Comparison of TSE with BSS and noise reduction

In the mid-2010s, deep NNs (DNNs) were first introduced to address the BSS problem. These approaches rapidly gained attention with the success of deep-clustering and permutation invariant training (PIT) [20, 21], which showed that single-channel speaker-open6 BSS was possible.
In particular, the introduction of DNNs enabled more accurate and flexible spectrum modeling and computationally efficient inference. These advances were facilitated by supervised training methods that can exploit a large amount of data.

Footnote 6: BSS is possible for speakers unseen during training, i.e., not present in the training data.

Neural BSS rapidly influenced TSE research. For example, Du et al. [22] trained a speaker-close NN to extract the speech of a target speaker, using training data in which the target speaker was mixed with various interfering speakers. This work was an early neural TSE system using audio clues. However, using speaker-close models requires a significant amount of data from the target speaker and cannot be extended to speakers unseen during training. Subsequently, the introduction of TSE systems conditioned on speaker characteristics derived from an enrollment utterance significantly mitigated this requirement [10, 11, 23]. Enrollment consists of a recording of a target speaker's voice, which amounts to a few seconds of speech. With these approaches, audio clue-based TSE became possible for speakers unseen during training as long as an enrollment utterance was available. Furthermore, the flexibility of NNs to integrate different modalities, combined with the high modeling capability of face recognition or lip-reading systems, offered new possibilities for speaker-open visual clue-based TSE [7, 8]. More recently, neural approaches have also been introduced for spatial clue-based TSE [24, 3].

TSE has gained increased attention. For example, dedicated tasks were part of such recent evaluation campaigns as the deep noise suppression (DNS)7 and Clarity8 challenges. Many works have extended TSE to other tasks, such as a direct automatic speech recognition (ASR) of a target speaker from a mixture, which is called target speaker ASR (TS-ASR) [25, 26], or personalized voice activity detection (VAD)/diarization [27, 28]. Notably, target speaker VAD (TS-VAD)-based diarization [28] has been very successful in such evaluation campaigns as CHiME-6 or DIHARD-3, outperforming state-of-the-art diarization approaches in challenging conditions.

Footnote 7: [https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/)

Footnote 8: [https://claritychallenge.github.io/clarity_CC_doc](https://claritychallenge.github.io/clarity_CC_doc)

Footnote 9: [https://chimechallenge.github.io/chime6/results.html](https://chimechallenge.github.io/chime6/results.html)

Footnote 10: [https://dihardchallenge.github.io/dihard3/results](https://dihardchallenge.github.io/dihard3/results)

## III TSE Taxonomy

TSE is a vast research area spanning a multitude of approaches. This section organizes them to emphasize their relations and differences. We categorize the techniques using four criteria: 1) type of clues, 2) number of channels, 3) speaker-close vs. open, and 4) generative vs. discriminative. Table I summarizes the taxonomy; the works in the scope of this overview paper are emphasized in red.

### _Type of clue_

The type of clue used to determine the target speaker is an important factor in distinguishing among TSE approaches. The most prominent types are audio, visual, and spatial clues. This classification also defines the main organization of this article, which covers such approaches in Sections V, VI, and VII.
Other types have been and could be proposed, as we briefly discuss in Section IX.

An _audio clue_ consists of a recording of a speech signal of the target speaker. Such a clue can be helpful, e.g., in the use case of personal devices, where the user can pre-record an example of their voice. Alternatively, for long recordings, such as meetings, clues can be obtained directly from part of the recording. The interest in audio clues increased sharply recently with the usage of neural models for TSE [10, 11, 12]. Audio clues are perhaps the most universal, because they do not require any additional devices, such as multiple microphones or a camera. However, the performance may be limited compared to other clues, since discriminating speakers based only on their voice characteristics is prone to errors due to inter- and intra-speaker variability. For example, the voice characteristics of different speakers, such as family members, often closely resemble each other. On the other hand, the voice characteristics of one speaker may change depending on such factors as emotions, health, or age.

A _visual clue_ consists of a video of the target speaker talking. This type is often constrained to the speaker's face, sometimes just to the lip area. Unlike audio clues, visual clues are typically synchronized with the audio signals being processed, i.e., they are not pre-recorded. A few works also explored just using a photo of the speaker [37]. Visual clues have been employed to infer the activity pattern and location of the target speaker [5] or to jointly model audio and visual signals [4, 5]. Recent works usually use visual clues to guide discriminative models toward extracting the target speaker [7, 8, 9]. Visual clues are especially useful when speakers in the recording have similar voices [8]. However, they might be sensitive to physical obstructions of the speaker in the video.

A _spatial clue_ refers to the target speaker's location, e.g., the angle from the recording devices. The location can be inferred in practice from a video of the room or a recording of a speaker in the same position. Extracting the speaker based on their location has been researched since the mid-1980s, with beamforming techniques pioneering this topic [2, 16]. More recent IVE models use location for initialization [12]. Finally, several works have shown that NNs informed by location can also achieve promising performance [24, 3]. Spatial clues are inherently applicable only when a recording from multiple microphones is available. However, they can identify the target speaker in the mixture rather reliably, especially when the speakers are stationary.

Different clues may work better in different situations. For example, the performance with audio clues might depend on the similarity of the voices of the present speakers, and obstructions in the video may influence visual clues. As such, it is advantageous to use multiple clues simultaneously to combine their strengths. Many works have combined audio and visual clues [4, 33], and some have even added spatial clues [36, 4].

### _Number of microphones_

Another way to categorize the TSE approaches is based on the number of microphones (channels) they use. Multiple channels allow the spatial diversity of the sources to be exploited to help discriminate the target speaker from interference. Such an approach also closely follows human audition, where binaural signals are crucial for solving the cocktail-party problem.
All approaches with spatial clues require using a microphone array to capture the direction information of the sources in the mixture [2, 3, 16, 24, 36]. Some TSE approaches that exploit audio or visual clues also assume multi-channel recordings, such as the extensions of ICA/IVA approaches [5, 12]. Multi-channel approaches generally produce extracted signals of better quality and are thus preferable when recordings from a microphone array are available. However, sometimes they might fail when the sources are located in the same direction from the viewpoint of the recording device. Moreover, adopting a microphone array is not always an option when developing applications due to cost restrictions. In such cases, single-channel approaches are needed. They rely on spectral models of the speech mixture, using either F-HMMs or, more recently, NNs, and exploit audio [10, 11] or visual clues [7, 8] to identify the target speech. Recent single-channel neural TSE systems have achieved remarkable performance. Interestingly, such approaches can also be easily extended to multi-channel processing by augmenting the input with spatial features [3] or combining the processing with beamforming [24, 30], as discussed in Section IV-C. For example, using a beamformer usually extracts a higher-quality signal because it employs a linear spatial filter to perform the extraction, which can benefit ASR applications [10].

### _Speaker-open vs speaker-close methods_

We usually understand the clues used by TSE as short evidence about the target speaker obtained at the time of executing the method, e.g., one utterance spoken by the target speaker, a video of him/her speaking, or their current location. There are, however, also methods that use a more significant amount of data from the target speaker (e.g., several hours of their speech) to build a model specific to that person. These methods can also be seen as TSE except that the clues involve much more data. We refer to these two categories as the speaker-open and speaker-close methods13. In speaker-open methods, the data of the target speaker are available only at test time, i.e., the model is trained on the data of different speakers. In contrast, the target speaker is part of the training data in speaker-close methods. Many methods in the past were speaker-close, e.g., [4] or [19], where the models were trained on the clean utterances of the target speaker. Also, the first neural models for TSE used a speaker-specific network [22]. Most recent works on neural methods, which use a clue as an additional input, are speaker-open methods [3, 7, 8, 10, 11]. Recent IVE methods [12] are also speaker-open, i.e., they guide the inference of IVE using the embedding of a previously unseen speaker.

### _Generative vs discriminative_

We can classify TSE into approaches using generative or discriminative models. Generative approaches model the joint distribution of the observations, target signals, and clues. The estimated target speech is obtained by maximizing the likelihood. In contrast, discriminative approaches directly estimate the target speech signal given observations and clues. In the TSE literature, generative models were the dominant choice in the pioneering works, including one [4] that used HMMs to jointly model audio and visual modalities. IVE [12] is also based on a generative model of the mixtures. The popularity of discriminative models, in particular NNs, has increased since the mid-2010s, and such models today are the choice for many problems, including TSE.
With discriminative models, TSE is treated as a supervised problem, where the parameters of a TSE model are learned using artificially generated training data. The modeling power of NNs enables us to exploit large amounts of such data to build strong speech models. Moreover, the versatility of NNs enables complex dependencies to be learned between different types of observations (e.g., speech mixture and video/speaker embeddings), which allows the successful conditioning of the extraction process on various clues. However, NNs also bring new challenges, such as generalization to unseen conditions or high computational requirements [38]. Some recent works have also explored using generative NNs, such as variational autoencoders (VAEs) [29], which might represent a middle-ground between the traditional generative approaches and those using discriminative NNs. ### _Scope of overview paper_ In the remainder of our paper, we focus on the neural methods for TSE emphasized in Table I. Recent neural TSE approaches opened the possibility of achieving high-performance extraction with various clues. They can be operated with a single microphone and applied for speaker-open conditions, which are very challenging constraints for other schemes. Consequently, these approaches have received increased attention from both academia and industry. In the next section, we introduce a general framework to provide a uniformized view of the various NN-based TSE approaches, for both single- and multi-channel approaches, and independently of the type of clues. We then respectively review the approaches relying on audio, visual, and spatial clues in Sections V, VI, and VII. ## IV General framework for neural TSE In the previous section, we introduced a taxonomy that described the diversity of approaches to tackle the TSE problem. However, recent neural TSE systems have much in common. In this section, we introduce a general framework that provides a unified view of a neural TSE system, which shares the same processing flow independently of the type of clue used. By organizing the existing approaches into a common framework, we hope to illuminate their similarities and differences and establish a firm foundation for future research. A neural TSE system consists of an NN that estimates the target speech conditioned on a clue. Fig. 3 is a schematic diagram of a generic neural TSE system that consists of two main modules: a clue encoder and a speech extraction module, described in more detail below. ### _Clue encoder_ The clue encoder pulls out (from the clue, \(\mathbf{C}_{s}\)) information that allows the speech extraction module to identify and extract the target speech in the mixture. We can express the processing as \[\mathbf{E}_{s}=\mathrm{ClueEncoder}(\mathbf{C}_{s};\theta^{ \mathrm{Clue}}), \tag{5}\] where \(\mathrm{ClueEncoder}(\cdot;\theta^{\mathrm{Clue}})\) represents the clue encoder, which can be an NN with learnable parameters \(\theta^{\mathrm{Clue}}\), and \(\mathbf{E}_{s}\) are the clue embeddings. Naturally, the specific implementation of the clue encoder and the information carried within \(\mathbf{E}_{s}\) largely depend on the type of clues. For example, when the clue is an enrollment utterance, \(\mathbf{E}_{s}=\mathbf{E}_{s}^{(a)}\in\mathbb{R}^{D^{\text{Enh}}}\) will be a speaker embedding vector of dimension \(D^{\text{Enh}}\) that represents the voice characteristics of the target speaker. 
When dealing with visual clues, \(\mathbf{E}_{s}=\mathbf{E}_{s}^{(v)}\in\mathbb{R}^{D^{\text{Enh}}\times N}\) can be a sequence of the embeddings of length \(N\), representing, e.g., the lip movements of the target speaker. Here \(N\) represents the number of time frames of the mixture signal. Interestingly, the implementation of the speech extraction module does not depend on the type of clues used. To provide a description that is independent of the type of clues, hereafter, we consider that \(\mathbf{E}_{s}\in\mathbb{R}^{D^{\text{Enh}}\times N}\) consists of a sequence of embedding vectors of dimension \(D^{\text{Enh}}\) of length \(N\). Note that we can generate a sequence of embedding vectors for audio clue-based TSE systems by repeating the speaker embedding vector for each time frame. ### _Speech extraction module_ The speech extraction module estimates the target speech from the mixture, given the target speaker embeddings. We can use the same configuration independently of the type of clue. Its process can be decomposed into three main parts: a mixture encoder, a fusion layer, and a target extractor: \[\mathbf{Z}_{y} =\mathrm{MixEncoder}(\mathbf{y};\theta^{\text{Mix}}), \tag{6}\] \[\mathbf{Z}_{s} =\mathrm{Fusion}(\mathbf{Z}_{y},\mathbf{E}_{s};\theta^{\text{ Fusion}}),\] (7) \[\hat{\mathbf{x}}_{s} =\mathrm{TgtExtractor}(\mathbf{Z}_{s},\mathbf{y};\theta^{\text{ TgtExtractor}}), \tag{8}\] where \(\mathrm{MixEncoder}(\cdot;\theta^{\text{Mix}})\), \(\mathrm{Fusion}(\cdot;\theta^{\text{fusion}})\), and \(\mathrm{TgtExtractor}(\cdot;\theta^{\text{TgtExtractor}})\) respectively represent the mixture encoder, the fusion layer, and the target extractor with parameters \(\theta^{\text{Mix}}\), \(\theta^{\text{fusion}}\), and \(\theta^{\text{TgtExtractor}}\). \(\mathbf{Z}_{y}\in\mathbb{R}^{D^{y}\times N}\) and \(\mathbf{Z}_{s}\in\mathbb{R}^{D^{s}\times N}\) are the internal representations of the mixture before and after conditioning on embedding \(\mathbf{E}_{s}\). The mixture encoder performs the following: \[\mathbf{Y} =\mathrm{FE}(\mathbf{y};\theta^{\text{FE}}), \tag{9}\] \[\mathbf{Z}_{y} =\mathrm{MixNet}(\mathbf{Y};\theta^{\text{MixNet}}), \tag{10}\] where \(\mathrm{FE}(\cdot)\) and \(\mathrm{MixNet}(\cdot)\) respectively represent the feature extraction process and an NN with parameters \(\theta^{\text{FE}}\) and \(\theta^{\text{MixNet}}\). The feature extractor computes the features from the observed mixture signal, \(\mathbf{Y}\in\mathbb{R}^{D\times N}\). These can be such spectral features as magnitude spectrum coefficients derived from the short-time Fourier transform (STFT) of the input mixture [7, 8, 10, 11]. When using a microphone array, spatial features like interaural phase difference (IPD) defined in Eq. (21) in Section VII can also be appended. Alternatively, the feature extraction process can be implemented by an NN such as a 1-D convolutional layer that operates directly on the raw input waveform of the microphone signal [23, 39]. This enables learning of a feature representation optimized for TSE tasks. The features are then processed with an NN, \(\mathrm{MixNet}(\cdot)\), which performs a non-linear transformation and captures the time context, i.e., several past and future frames of the signal. The resulting representation, \(\mathbf{Z}_{y}\), of the mixture is (at this point) agnostic of the target. The fusion layer, sometimes denoted as an adaptation layer, is a key component of a TSE system and allows conditioning of the process on the clue. 
It combines \(\mathbf{Z}_{y}\) with the clue embeddings, \(\mathbf{E}_{s}\). Conditioning an NN on auxiliary information is a general problem that has been studied for multi-modal processing or the speaker adaptation of ASR systems. TSE systems have borrowed fusion layers from these fields. Table II lists several options for the fusion layer. Some widely used fusion layers include: (1) the concatenation of \(\mathbf{Z}_{y}\) with the clue embeddings \(\mathbf{E}_{s}\) [7, 8]; (2) addition14 after transforming the embeddings with a linear transformation \(\mathbf{L}\) to match the dimension of \(\mathbf{Z}_{y}\); (3) multiplication [10]; (4) a combination of addition and multiplication denoted as FiLM; (5) a factorized layer [10, 30], i.e., the combination of different transformations of the mixture representation weighted by the clue embedding values. Other alternatives have also been proposed, including attention-based fusion [40]. Note that the fusion operations described here assume just one clue. It is also possible to use multiple clues, as discussed in Section VI-B. Some works also employ the fusion repeatedly at multiple positions in the model [31].

Footnote 14: Concatenation is similar to addition if a linear transformation follows it.

Fig. 3: General framework for neural TSE

The last part of the speech extraction module is the target extractor, which estimates the target signal. We explain below the time-frequency masking-based extractor, which has been widely used [3, 7, 8, 41]. Recent approaches also perform a similar masking operation in the learned feature domain [23, 39]. The time-frequency masking approach was inspired by early BSS studies that relied on the sparseness assumption of speech signals, an idea based on the observation that the energy of a speech signal is concentrated in a few time-frequency bins of a speech spectrum. Accordingly, the speech signals of different speakers rarely overlap in the time-frequency domain in a speech mixture. We can thus extract the target speech by applying a time-frequency mask to the observed speech mixture, where the mask indicates the time-frequency bins where the target speech is dominant over the other signals. Fig. 4 shows an example of an ideal binary mask for extracting the target speech in a mixture of two speakers. Such an ideal binary mask assumes that all the energy in each time-frequency bin belongs to one speaker. In recent mask-based approaches that use real-valued (or complex) masks, this assumption is not needed. The processing of the masking-based extractor can be summarized as

\[\mathbf{M}_{s}=\mathrm{MaskNet}(\mathbf{Z}_{s};\theta^{\mathrm{Mask}}), \tag{11}\]
\[\hat{\mathbf{X}}_{s}=\mathbf{M}_{s}\odot\mathbf{Y}, \tag{12}\]
\[\hat{\mathbf{x}}_{s}=\mathrm{Reconstruct}(\hat{\mathbf{X}}_{s};\theta^{\mathrm{Reconst}}), \tag{13}\]

where \(\mathrm{MaskNet}(\cdot)\) is an NN that estimates the time-frequency mask for the target speech, \(\mathbf{M}_{s}\in\mathbb{R}^{D\times N}\), \(\theta^{\mathrm{Mask}}\) are the network parameters, and \(\odot\) denotes the element-wise Hadamard multiplication. \(\mathbf{Y}\) and \(\hat{\mathbf{X}}_{s}\) are the mixture and the estimated target speech signals in the feature domain. Eq. (12) shows the actual extraction process.
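As a concrete illustration, the following PyTorch sketch strings together the mixture encoding of Eq. (6), a multiplicative fusion layer as in Eq. (7), and the mask-based extraction of Eqs. (11) and (12). The layer types and sizes are illustrative assumptions, not those of any specific published system.

```python
import torch
import torch.nn as nn

class MaskingExtractor(nn.Module):
    """Minimal sketch: multiplicative fusion of the mixture representation
    Z_y with a clue embedding e_s, followed by mask estimation and
    time-frequency masking. Layer sizes are illustrative only."""

    def __init__(self, d_feat=257, d_hidden=256, d_emb=256):
        super().__init__()
        self.mix_net = nn.Conv1d(d_feat, d_hidden, kernel_size=3, padding=1)  # Eq. (6)
        self.emb_proj = nn.Linear(d_emb, d_hidden)  # match embedding to Z_y
        self.mask_net = nn.Sequential(              # Eq. (11)
            nn.Conv1d(d_hidden, d_feat, kernel_size=3, padding=1),
            nn.Sigmoid(),                           # mask values in [0, 1]
        )

    def forward(self, Y, e_s):
        # Y: (batch, d_feat, N) magnitude spectrogram; e_s: (batch, d_emb)
        Z_y = self.mix_net(Y)
        Z_s = Z_y * self.emb_proj(e_s).unsqueeze(-1)  # Eq. (7), multiplicative fusion
        M_s = self.mask_net(Z_s)
        return M_s * Y                                # Eq. (12), Hadamard masking

# Toy usage: a batch of one mixture with 100 frames of 257-dim features.
model = MaskingExtractor()
X_hat = model(torch.randn(1, 257, 100), torch.randn(1, 256))
```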
\(\mathrm{Reconstruct}(\cdot)\) is an operation to reconstruct the time-domain signal by performing the inverse operation of the feature extraction of the mixture encoder, i.e., either the inverse STFT (iSTFT) or a transposed convolution if using a learnable feature extraction. In the latter case, the reconstruction layer has learnable parameters, \(\theta^{\mathrm{Reconst}}\). There are other possibilities to perform the extraction process. For example, we can modify the \(\mathrm{MaskNet}(\cdot)\) NN to directly infer the target speech signal in the feature domain. Alternatively, as discussed in Section IV-C, we can replace the mask-based extraction process with beamforming when a microphone array is available.

### _Integration with microphone array processing_

If we have access to a microphone array to record the speech mixture, we can exploit the spatial information to extract the target speech. One approach is to use spatial clues to identify the speaker in the mixture by informing the system about the target speaker's direction, as discussed in Section VII. Another approach combines TSE with beamforming and uses the latter to perform the extraction process instead of Eq. (12). For example, we can use the output of a TSE system to estimate the spatial statistics needed to compute the coefficients of a beamformer steering in the direction of the target speaker. This approach can also be used with audio or visual clue-based TSE systems and requires no explicit use of spatial clues to identify the target speaker in the mixture. We briefly review the mask-based beamforming approach, which was introduced initially for noise reduction and BSS [42, 43, 44]. A beamformer performs the linear spatial filtering of the observed microphone signals:

\[\hat{X}_{s}[n,f]=\mathbf{W}^{\mathsf{H}}[f]\mathbf{Y}[n,f], \tag{14}\]

where \(\hat{X}_{s}[n,f]\in\mathbb{C}\) is the STFT coefficient of the estimated target signal at time frame \(n\) and frequency bin \(f\), \(\mathbf{W}[f]\in\mathbb{C}^{M}\) is a vector of the beamformer coefficients, \(\mathbf{Y}[n,f]=\left[Y^{1}[n,f],\ldots,Y^{M}[n,f]\right]^{\mathsf{T}}\in\mathbb{C}^{M}\) is a vector of the STFT coefficients of the microphone signals, \(M\) is the number of microphones, and \({}^{\mathsf{H}}\) is the conjugate transpose. We can derive the beamformer coefficients from the spatial correlation matrices of the target speech and the interference. These correlation matrices can be computed from the observed signal and the time-frequency mask estimated by the TSE system [30]. This way of combining a TSE system with beamforming replaces the time-frequency masking operation of Eq. (12) with the spatial linear filtering operation of Eq. (14). It allows distortionless extraction, which is often advantageous when using TSE as a front-end for ASR [10].

### _Training a TSE system_

Before using a TSE model, we first need to learn its parameters: \(\theta^{\mathrm{TSE}}=\{\theta^{\mathrm{Mix}},\theta^{\mathrm{Clue}},\theta^{\mathrm{fusion}},\theta^{\mathrm{TgtExtractor}}\}\). Most existing studies use fully supervised training, which requires a large amount of training data consisting of triplets of the speech mixture \(\mathbf{y}\), the target speech signal \(\mathbf{x}_{s}\), and the corresponding clue \(\mathbf{C}_{s}\) to learn the parameters \(\theta^{\mathrm{TSE}}\). Since this requires access to a clean target speech signal, such training data are usually simulated by artificially mixing clean speech signals and noise following the signal model of Eq. (1).
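A minimal numpy sketch of such a simulation is given below. It assumes a bank of at least two clean, equal-length utterances per speaker and a bank of noise signals; the container names and the SNR convention are our illustrative assumptions.

```python
import numpy as np

def simulate_example(speech_bank, noise_bank, rng, snr_db=5.0):
    """Generate one training triplet (y, x_s, C_s) by artificially mixing
    clean signals, following Eq. (1). `speech_bank` maps a speaker id to a
    list of equal-length clean utterances (1-D arrays); `noise_bank` is a
    list of noise signals of the same length."""
    spk_s, spk_k = rng.choice(sorted(speech_bank), size=2, replace=False)
    utts = speech_bank[spk_s]
    i_tgt, i_clue = rng.choice(len(utts), size=2, replace=False)
    x_s = utts[i_tgt]                                    # target speech
    x_k = speech_bank[spk_k][rng.integers(len(speech_bank[spk_k]))]
    v = noise_bank[rng.integers(len(noise_bank))]        # background noise
    # Scale the noise to the desired SNR relative to the target speech.
    gain = np.sqrt(np.sum(x_s**2) / (10 ** (snr_db / 10) * np.sum(v**2)))
    y = x_s + x_k + gain * v                             # mixture, Eq. (1)
    return y, x_s, utts[i_clue]                          # mixture, target, audio clue
```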
Figure 5 illustrates the data generation process using a multi-speaker audio-visual speech corpus containing multiple videos for each speaker. First, we generate a mixture using randomly selected speech signals from the target speaker, the interference speaker, and the background noise. We obtain an audio clue by selecting another speech signal from the target speaker, as well as a visual clue from the video signal associated with the target speech.

The training of a neural TSE framework follows the training scheme of NNs with error back-propagation. The parameters are estimated by minimizing a training loss function:

\[\theta^{\mathrm{TSE}}=\operatorname*{arg\,min}_{\theta}\mathcal{L}\left(\mathbf{x}_{s},\hat{\mathbf{x}}_{s}\right), \tag{15}\]

where \(\mathcal{L}(\cdot)\) is a training loss, which measures how close the estimated target speech \(\hat{\mathbf{x}}_{s}=\mathrm{TSE}\left(\mathbf{y},\mathbf{C}_{s};\theta\right)\) is to the target source signal \(\mathbf{x}_{s}\). We can use losses similar to those employed for training noise reduction or BSS systems [39, 14].

Fig. 4: Example of time-frequency mask for speech extraction: Time-frequency mask shows spectrogram regions where target source is dominant. By applying this mask to the mixture, we obtain an extracted speech signal that estimates the target speech.

Several variants of the losses operating on different domains exist, such as the cross-entropy between the oracle and the estimated time-frequency masks and the mean squared error (MSE) loss between the magnitude spectra of the source and the estimated target speech. Recently, a negative signal-to-noise ratio (SNR) measured in the time domain has been widely used [23, 6, 39]:

\[\mathcal{L}^{\text{SNR}}(\mathbf{x}_{s},\hat{\mathbf{x}}_{s})=-10\log_{10}\left(\frac{\|\mathbf{x}_{s}\|^{2}}{\|\mathbf{x}_{s}-\hat{\mathbf{x}}_{s}\|^{2}}\right). \tag{16}\]

The SNR loss is computed directly in the time domain, which forces the TSE system to learn to correctly estimate both the magnitude and the phase of the target speech signal. This loss has improved extraction performance [23]. Many works also employ versions of the loss that are invariant to arbitrary scaling, i.e., the scale-invariant SNR (SI-SNR) [39], or to linear filtering of the estimated signal, often called the signal-to-distortion ratio (SDR) [44]. Besides training losses operating on the signal or mask levels, it is also possible to train a TSE system end-to-end with a loss defined on the output of an ASR system [45]. Such a loss can be particularly effective when targeting ASR applications, as discussed in Section VIII.

The clue encoder can be an NN trained jointly with the speech extraction module [10] or pre-trained on a different task, such as speaker identification for audio clue-based TSE [11] or lip-reading for visual clue-based TSE [7]. Using a pre-trained clue encoder enables leveraging large amounts of data to learn robust and highly discriminative embeddings. On the other hand, jointly optimizing the clue encoder allows the embeddings to be optimized directly for TSE. These two trends can also be combined by fine-tuning the pre-trained encoder or using multi-task training schemes, which add a loss on the output of the clue embeddings [46].

### _Considerations when designing a TSE system_

We conclude this section with some considerations about the different options for designing a TSE system.
In the above description, we intentionally ignored the details of the NN architecture used in the speech extraction module, such as the type of layers. Indeed, novel architectures have been and will probably continue to be proposed regularly, leading to gradual performance improvements. For concrete examples, we refer to some public implementations of TSE frameworks presented in Section X. Most TSE approaches borrow a network configuration from architectures proven effective for BSS or noise reduction. One important aspect is that an NN must be able to see enough context in the mixture to identify the target speaker. This has been achieved using such recurrent neural network (RNN)-based architectures as a stack of bidirectional long short-term memory (BLSTM) layers [10], convolutional neural network (CNN)-based architectures with a stack of convolutional layers that gradually increases the receptive field over the time axis to cover a large context [7, 23], or attention-based architectures [47]. The networks in the mixture encoder and the extraction process generally use a similar architecture. The best performance was reported when using a shallow mixture encoder (typically a single layer/block) and a much deeper extraction network, i.e., where the fusion layer is placed in the lower part of the extraction module. Furthermore, we found in our experiments that the multiplication or FiLM layers usually perform well. However, the impact of the choice of the fusion layer seems rather insignificant.

Fig. 5: Example of generating simulation data for training or testing: This example assumes videos are available so that audio and visual clues can be generated. No video is needed for audio clue-based TSE. For visual clue-based TSE, we do not necessarily need multiple videos from the same speaker.

For the feature extraction, early studies used spectral features computed with the STFT [7, 8, 10]. However, most recent approaches employ a learned feature extraction module following its success for separation [23, 39]. This approach allows direct optimization of the features for the given task. However, the choice of input features may depend on the acoustic conditions, and some have reported superior performance using the STFT under challenging reverberant conditions [48] or using handcrafted filterbanks [49]. Except for such general considerations, it is difficult to make solid arguments for a specific network configuration since performance may depend on many factors, such as the task, the type of clue, the training data generation, and the network and training hyper-parameters.

## V Audio-based TSE

In this section, we explain how the general framework introduced in Section IV can be applied in the case of audio clues. In particular, we discuss different options to implement the clue encoder, summarize the development of audio-based TSE, and present some representative experimental results.

### _Audio clue encoder_

An audio clue is an utterance spoken by the target speaker from which we derive the characteristics of their voice, allowing identification in a mixture. This enrollment utterance can be obtained by pre-recording the user of a personal device or with a part of a recording where a wake-up keyword was uttered. The clue encoder is usually used to extract a single vector that summarizes the entire enrollment utterance.
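One common way to obtain such a summarizing vector, used by the NN-based embeddings discussed next, is to pool frame-level features over time, e.g., with their mean and standard deviation. A minimal PyTorch sketch follows, with illustrative feature and embedding dimensions.

```python
import torch
import torch.nn as nn

class StatsPooling(nn.Module):
    """Pooling layer that converts a sequence of frame-level features into a
    single vector by concatenating their mean and standard deviation over time."""

    def forward(self, h):
        # h: (batch, feature_dim, num_frames) frame-level activations
        return torch.cat([h.mean(dim=-1), h.std(dim=-1)], dim=-1)

# Usage: a tiny frame-level encoder, pooling, and a projection to the embedding.
encoder = nn.Sequential(nn.Conv1d(40, 128, kernel_size=5, padding=2), nn.ReLU())
pool = StatsPooling()
project = nn.Linear(2 * 128, 128)            # embedding dimension 128
feats = torch.randn(1, 40, 300)              # e.g., 300 frames of 40-dim features
embedding = project(pool(encoder(feats)))    # one vector per enrollment utterance
```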
Since the clue encoder's goal is to extract information that defines the voice characteristics of the target speaker, embeddings from the speaker verification field are often used, such as i-vectors or NN-based embeddings (e.g., d-vectors or x-vectors). Clue encoders trained directly for TSE tasks are also used. Fig. 6 describes these three options.

#### V-A1 I-vectors

From their introduction around 2010, i-vectors [50] were the ruling speaker verification paradigm until the rise of NN speaker embeddings. The main idea behind i-vectors is modeling the features of an utterance using a Gaussian mixture model (GMM), whose means are constrained to a subspace and depend on the speaker and the channel effects. The subspace is defined by the universal background model (UBM), i.e., a GMM trained on a large amount of data from many speakers, and a total variability subspace matrix. The super-vector of the means of the utterance GMM, \(\mathbf{\mu}\), is decomposed as

\[\mathbf{\mu}=\mathbf{m}+\mathbf{Tw}, \tag{17}\]

where \(\mathbf{m}\) is a super-vector of the means of the UBM, \(\mathbf{T}\) is a low-rank rectangular matrix representing the bases spanning the subspace, and \(\mathbf{w}\) is a random variable with a standard normal prior distribution. Since an i-vector is the maximum a posteriori estimate of \(\mathbf{w}\), it thus consists of values that enable the adaptation of the parameters of the generic UBM speaker model (\(\mathbf{m}\)) to a specific recording. As a result, it captures the speaker's voice characteristics in the recording. An important characteristic of i-vectors is that they capture both the speaker and channel variability. This may be desired in some TSE applications, where we obtain enrollment utterances in the same conditions as the mixed speech. In such a situation, the channel information might also help distinguish the speakers. I-vectors have also been used in several TSE works [10].

#### V-A2 Neural network-based embeddings

State-of-the-art speaker verification systems predominantly use NN-based speaker embeddings, which were later adopted for TSE. The common idea is to train an NN for the task of speaker classification. Such an NN contains a "pooling layer" which converts a sequence of features into one vector. The pooling layer computes the mean and optionally the standard deviation of the sequence of features over the time dimension. The pooled vector is then classified into speaker classes or used in other loss functions that encourage speaker discrimination.

Fig. 6: Illustration of i-vector, NN-based vector, and jointly-trained embeddings: Orange parts are included only in training stage.

For TSE, the speaker embedding is then the vector of the activation coefficients of one of the last network layers. The most common of such NN-based speaker embeddings are d-vectors and x-vectors [51]. Many TSE works employ d-vectors [11]. Since the NNs are trained for speaker classification or a related task, the embeddings are usually highly speaker-discriminative. Most other sources of variability, such as the channel or content, are discarded. Another advantage of this class of embeddings is that they are usually trained on large corpora with many speakers, noises, and other variations, resulting in very robust embedding extractors. Trained models are often publicly available, and the embeddings can be readily used for TSE tasks.

#### V-A3 Jointly-learned embeddings

NN-based embeddings, such as x-vectors, are designed and trained for the task of speaker classification.
Although this causes them to contain speaker information, it is questionable whether the same representation is optimal for TSE tasks. An alternative is to train the neural embedding extractor jointly with the speech extraction module. The resulting embeddings are thus directly optimized for TSE tasks. This approach has been used for TSE in several works [10, 31]. The NN performing the speaker embedding extraction takes an enrollment utterance \(\mathbf{C}_{s}^{(\mathbf{a})}\) as input and generally contains a pooling layer converting the frame-level features into one vector, similar to the embedding extractors discussed above. This NN is trained with the main NN using a common objective function. A second objective function can also be used on the embeddings to improve their speaker discriminability [46]. As mentioned above, the advantage of such embeddings is that they are trained directly for TSE and thus collect the information essential for this task. On the other hand, pre-trained embedding extractors are often trained on larger corpora and may be more robust. A possible middle ground might take a pre-trained embedding extractor and fine-tune it jointly with the TSE task. However, this has, to the best of our knowledge, not been done yet.

### _Existing approaches_

The first neural TSE methods were developed around 2017. One of the first published works, SpeakerBeam [10], explored both the single-channel approach, where the target extractor was implemented by time-frequency masking, and the multi-channel approach using beamforming. This work also compared different variants of fusion layers and clue encoders. This was followed by VoiceFilter [11], which put more emphasis on ASR applications using TSE as a front-end and also investigated streaming variants with minimal latency. A slightly modified variant of the task was presented in works on speaker inventory [40], where not one but multiple speakers can be enrolled. Such a setting might be suitable for meeting scenarios. Recently, many works, such as SpEx [31], have started to use time-domain approaches, following their success in BSS [39].

### _Experiments_

An audio clue is a simple way to condition the system for extracting the target speaker. Many works have shown that the speaker information extracted from audio clues is sufficient for satisfactory performance. Demonstrations of many works are available online15. We present here some results to demonstrate the potential of audio clue-based approaches. The experiments were done with time-domain SpeakerBeam16, which uses a convolutional architecture, a multiplicative fusion layer, and a jointly-learned clue encoder.

Footnote 15: Demonstrations of audio clue approaches: VoiceFilter [11] [https://google.github.io/speaker-id/publications/VoiceFilter/](https://google.github.io/speaker-id/publications/VoiceFilter/), SpeakerBeam [10] [https://www.youtube.com/watch?v=7FSHgKipV6I](https://www.youtube.com/watch?v=7FSHgKipV6I).

Footnote 16: [https://github.com/BUTSpeechFIT/speakerbeam](https://github.com/BUTSpeechFIT/speakerbeam)

The experiments were run on three different datasets (WSJ0-2mix, WHAM!, and WHAMR!) to show the performance in different conditions (clean, noisy, and reverberant, respectively). We describe these datasets in more detail in Section X. All the experiments were evaluated with the SI-SNR metric and measured the improvements over the SI-SNR of the observed mixture. More details about the experiments can be found in [52].
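For reference, a small numpy sketch of the SI-SNR metric and of the improvement over the mixture follows (a standard definition; the function names are ours).

```python
import numpy as np

def si_snr(x, x_hat, eps=1e-8):
    """Scale-invariant SNR in dB between reference x and estimate x_hat
    (1-D time-domain arrays). The estimate is first projected onto the
    reference, which removes any arbitrary scaling."""
    x = x - x.mean()
    x_hat = x_hat - x_hat.mean()
    s_target = (np.dot(x_hat, x) / (np.dot(x, x) + eps)) * x
    e_noise = x_hat - s_target
    return 10 * np.log10((np.dot(s_target, s_target) + eps)
                         / (np.dot(e_noise, e_noise) + eps))

def si_snr_improvement(x, x_hat, y):
    """SI-SNR improvement: SI-SNR of the estimate minus that of the mixture y."""
    return si_snr(x, x_hat) - si_snr(x, y)
```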
Figure 7 compares the TSE results with those of a cascade system that first performs BSS and then independent speaker identification. Speaker identification is done either in an oracle way (selecting the output closest to the reference) or with x-vectors (extracting the x-vectors from all the outputs and the enrollment utterances and selecting the output with the smallest cosine distance as the target). The BSS system uses the same convolutional architecture as TSE, differing only in that it does not have a clue encoder and its output layer is twice as large, as it outputs two separated speech signals. The direct TSE scheme outperformed the cascade system, especially in more difficult conditions such as WHAMR!. This difference reflects two possible causes: 1) the TSE model is directly optimized for the TSE task and does not spend any capacity on extracting other speakers, or 2) the TSE model has additional speaker information.

Fig. 7: Comparison of TSE and cascade BSS systems when using an audio clue in terms of SI-SNR improvement (higher is better) [52].

Figure 8 shows an example of spectrograms obtained using TSE on a recording of two speakers from the WHAMR! database, including noise and reverberation. TSE correctly identifies the target speaker and removes all the interference, including the second speaker, noise, and reverberation.

### _Limitations and outlook_

Using TSE systems conditioned on audio clues is particularly practical due to the simplicity of obtaining the clues, i.e., no additional hardware, such as cameras or multiple microphones, is needed. Considering the good performance demonstrated in the literature, these systems are widely applicable. Nowadays, the methods are rapidly evolving and achieving increasingly higher accuracy. The main challenge in audio clue-based systems is the correct identification of the target speaker. The speech signal of the same speaker might have highly different characteristics in different conditions due to such factors as emotional state, channel effects, or the Lombard effect. TSE systems must be robust to such intra-speaker variability. On the other hand, different speakers might have very similar voices, leading to erroneous identification if the TSE system lacks sufficient accuracy. Resolving both issues requires precise speaker modeling. In this regard, TSE methods may draw inspiration from the latest advances in the speaker verification field, including advanced model architectures, realistic datasets with a huge number of speakers for training, or the use of pre-trained features from self-supervised models.

## VI Visual/Multi-modal clue-based TSE

Visual clue-based TSE assumes that a video camera captures the face of the target speaker who is talking in the mixture [7, 8]. Using visual clues is motivated by psycho-acoustic studies (see the references in a previous work [6]) that revealed that humans look at lip movements to understand speech better. Similarly, TSE systems derive from the lip movements hints about the state of the target speech, such as whether the target speaker is speaking or silent, or more refined information about the phoneme being uttered. A visual clue, which presents different characteristics than audio clues because it captures information from another modality, is time-synchronized with the target speech in the mixture without being corrupted by the interference speakers.
Therefore, a visual clue-based TSE can better handle mixtures of speakers with similar voices, such as same-gender mixtures, than audio clue-based systems because the extraction process is not based on the speaker's voice characteristics17. Another potential advantage is that the users may not need to pre-enroll their voice. Video signals are also readily available for many applications such as video-conferencing.

Footnote 17: Some works can even perform extraction from a mixture of the same speaker's speech [8].

Figure 9 shows a diagram of a visual TSE system that follows the same structure as the general TSE framework introduced in Section IV. Only the visual clue encoder part is specific to the task. We describe it in more detail below and then introduce a multi-modal clue extension. We conclude this section with some experimental results and discussions.

### _Visual clue encoder_

The visual clue encoder computes from the video signal a representation that allows the speech extraction module to identify and extract the target speech in the mixture. This processing involves the steps described below:

\[\mathbf{E}_{s}^{(v)}=\mathrm{Upsample}(\mathrm{NN}(\mathrm{VFE}(\mathbf{C}_{s}^{(v)}),\theta^{\text{\tiny{v-clue}}})), \tag{18}\]

where \(\mathbf{E}_{s}^{(v)}\in\mathbb{R}^{D^{\text{Enh}}\times N}\) represents the sequence of the visual embedding vectors, \(\mathbf{C}_{s}^{(v)}\) is the video signal obtained after pre-processing, \(\mathrm{VFE}(\cdot)\) is the visual feature extraction module, \(\mathrm{NN}(\cdot,\theta^{\text{\tiny{v-clue}}})\) is an NN with parameters \(\theta^{\text{\tiny{v-clue}}}\), and \(\mathrm{Upsample}(\cdot)\) represents the up-sampling operation. The latter up-sampling step is required because the sampling rates of the audio and video devices are usually different. Up-sampling matches the number of frames of the mixture and visual clue encoders.

Fig. 8: Example of spectrograms of mixed, reference, and extracted speech: Example is taken from WHAMR! database.

#### VI-A1 Pre-processing

First, the video signal captured by the camera requires pre-processing to isolate the face of the target speaker. Depending on the application, this may require detecting and tracking the target speaker's face and cropping the video. These pre-processing steps can be performed using well-established video processing algorithms [6].

#### VI-A2 Visual feature extraction

Similar to audio clue-based TSE, the visual clue encoder can directly extract embeddings from raw video data or from visual features. With the first option, the raw video is processed with a CNN whose parameters are jointly learned with the speech extraction module to enable direct optimization of the features for the extraction task without any loss of information. However, since video signals are high-dimensional data, achieving joint optimization can be complex. This approach has been used successfully in speaker-close conditions [53]. Extending it to speaker-open conditions might require a considerable amount of data or careful design of the training loss using, e.g., multi-task training to help the visual encoder capture relevant information. Most visual TSE works instead use a visual feature extractor pre-trained on another task to reduce the dimensionality of the data. Such feature extractors can leverage a large amount of image or video data (that do not need to be speech mixtures) to learn representations robust to variations, such as resolution, luminosity, or head orientation.
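Before reviewing the options for the visual features below, here is a minimal PyTorch sketch of the trainable part of Eq. (18): a temporal network over pre-extracted visual features followed by up-sampling to the audio frame rate. The layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisualClueEncoder(nn.Module):
    """Sketch of the NN(.) and Upsample(.) steps of Eq. (18). The visual
    features (VFE output) are assumed to be pre-computed."""

    def __init__(self, d_vfe=512, d_emb=256):
        super().__init__()
        # Temporal convolution over video frames captures lip-movement dynamics.
        self.net = nn.Conv1d(d_vfe, d_emb, kernel_size=5, padding=2)

    def forward(self, vfe, num_audio_frames):
        # vfe: (batch, d_vfe, N_video) features from a pre-trained extractor
        h = torch.relu(self.net(vfe))
        # Up-sample along time so the clue aligns with the N audio frames.
        return nn.functional.interpolate(h, size=num_audio_frames, mode="nearest")

# Usage: 25 video frames up-sampled to align with 100 audio frames.
enc = VisualClueEncoder()
E_v = enc(torch.randn(1, 512, 25), num_audio_frames=100)  # (1, 256, 100)
```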
The first option is to use facial landmark points as features. Facial landmarks are the key points on a face that indicate the mouth, eyes, or nose positions and offer a very low-dimensional, interpretable representation of a face. Moreover, face landmarks can be easily computed with efficient off-the-shelf algorithms [32]. The other option is to use neural embeddings derived from an image/video processing NN trained on a different task, which was proposed in three concurrent works [7, 8, 9]. Ephrat et al. [8] used visual embeddings obtained from an intermediate layer of a face recognition system called FaceNet. This face recognition system is trained so that embeddings derived from photographs of the same person are close and embeddings from different persons are far from each other. It thus requires only a corpus of still images with person identity labels for training the system. However, the embeddings do not capture the lip movement dynamics and are not explicitly related to the acoustic content. Alternatively, Afouras et al. [7] proposed using embeddings obtained from a network trained to perform lip-reading, i.e., a network trained to estimate the phoneme or word uttered from the video of the speaker's lips. The resulting embeddings are thus directly related to the acoustic content. However, the training requires video with the associated phoneme or word transcriptions, which are more demanding and costly to obtain. The third option, introduced by Owens et al. [9], exploits embeddings derived from an NN trained to predict whether the audio and visual tracks of a video are synchronized. This approach enables self-supervised training, where the training data are simply created by randomly shifting the audio track by a few seconds. The embeddings capture information on the association between the lip motions and the timing of the sounds in the audio. All three options [7, 8, 9] can successfully perform a visual TSE.

#### VI-A3 Transformation and up-sampling

Except with joint-training approaches, the visual features are (pre-)trained on different tasks and thus do not provide a representation optimal for TSE. Besides, since some of the visual features are extracted from the individual frames of a video, the dynamics of lip movements are not captured. Therefore, the visual features are further transformed with an NN, which is jointly trained with the speech extraction module. The NN, which allows learning a representation optimal for TSE, can be implemented with long short-term memory (LSTM) or convolutional layers across the time dimension to model the time series of the visual features, enabling the lip movement dynamics to be captured. Finally, the visual embeddings are up-sampled to match the sampling rate of the audio features \(\mathbf{Z}_{y}\).

### _Audio-visual clue-based TSE_

Audio and visual clue-based TSE systems have complementary properties. An audio clue-based TSE is not affected by speaker movements and visual occlusions. In contrast, a visual clue-based TSE is less affected by the voice characteristics of the speakers in the mixture. By combining these approaches, we can build TSE systems that exploit the strengths of both clues to improve the robustness to various conditions [33, 36]. Figure 10 shows a diagram of an audio-visual TSE system, which assumes access to a pre-recorded enrollment of the target speaker to provide an audio clue and a video camera for a visual clue.
The system uses the audio and visual clue encoders described in Sections V-A and VI-A and combines these clues into an audio-visual embedding, which is given to the speech extraction module. Audio-visual embeddings can be simply the concatenation [35] or the summation of the audio and visual embeddings, or they can be obtained as a weighted sum [33, 34], where the weights can vary depending on the reliability of each clue. The weighted sum approach can be implemented with an attention layer widely used in machine learning, which enables dynamic weighting of the contribution of each clue.

Fig. 9: Visual clue-based TSE system.

### _Experimental results and discussion_

Several visual TSE systems have been proposed, which differ mostly by the type of visual features used and the network configuration. These systems have demonstrated astonishing results, as attested by the demonstrations available online18. Here we briefly describe experiments using the audio, visual, and audio-visual time-domain SpeakerBeam systems [34], which use a configuration similar to the system in Section V-C. The speech extraction module employs a stack of time-convolutional blocks and a multiplicative fusion layer. The audio clue encoder consists of the jointly-learned embeddings described in Section V-A3. The visual clue encoder uses visual features derived from face recognition, as in a previous work [8]. The audio-visual system combines the visual and audio clues with an attention layer [34].

Footnote 18: Demo samples for several approaches are available, e.g., for [9]: [https://andrewowens.com/multisensory](https://andrewowens.com/multisensory), for [8]: [https://looking-to-listen.github.io](https://looking-to-listen.github.io), for [7]: [https://www.robots.ox.ac.uk/~vgg/demo/thconversation](https://www.robots.ox.ac.uk/~vgg/demo/thconversation), and for [34]: [http://www.kecl.ntt.co.jp/icl/signal/member/demo/audio_visual_speakerBeam.html](http://www.kecl.ntt.co.jp/icl/signal/member/demo/audio_visual_speakerBeam.html)

The experiments used mixtures of utterances from the LRS3-TED corpus19, which consists of single-speaker utterances with associated videos. We analyzed the behavior under various conditions by looking at results from same- and different-gender mixtures and two examples of clue corruptions (enrollment corrupted with white noise at an SNR of 0 dB and video with a mask on the speaker's mouth). The details of the experimental setup are available in [34].

Footnote 19: [https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs3.html](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs3.html)
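The attention-based clue combination mentioned above can be sketched as follows in PyTorch; the per-frame scoring network is an illustrative assumption.

```python
import torch
import torch.nn as nn

class AttentionClueFusion(nn.Module):
    """Combine audio and visual clue embeddings by a weighted sum whose
    weights are predicted per time frame by a small attention network."""

    def __init__(self, d_emb=256):
        super().__init__()
        self.score = nn.Linear(d_emb, 1)  # scalar relevance score per clue

    def forward(self, e_audio, e_visual):
        # e_audio, e_visual: (batch, N, d_emb) clue embedding sequences
        clues = torch.stack([e_audio, e_visual], dim=2)   # (batch, N, 2, d_emb)
        w = torch.softmax(self.score(clues), dim=2)       # weights sum to 1 over clues
        return (w * clues).sum(dim=2)                     # (batch, N, d_emb)
```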
This is probably because NNs can effectively model the relationship between the different modalities learned from a large amount of training data. Issues and research opportunities remain with the current visual clue-based TSE systems. First, most approaches do not consider the speaker tracking problem and assume that the audio and video signals are synchronized. These aspects must be considered when designing and evaluating future TSE systems. Second, video processing involves high computational costs, and more research is needed to develop efficient online systems.

## VII Spatial clue-based TSE

When using a microphone array to record a signal, spatial information can be used to discriminate among sources. In particular, access to multi-channel recordings opens the way to extracting target speakers based on their location, i.e., using spatial clues (as indicated in Fig. 1). This section explains how such spatial clues can be obtained and used in TSE systems. While enhancing speakers from a given direction has a long research history [2], we focus here on neural methods that fall within the scope of our overview paper. Note that multi-channel signals can also be utilized in the extraction process using beamforming. Such an extraction process can be used in systems with any type of clue, only requiring that the mixed speech be recorded with multiple microphones. This beamforming process was reviewed in Section IV-C. In this section, we focus specifically on the processing of spatial clues.

### _Obtaining spatial clues_

In some situations, the target speaker's location is approximately known in advance. For example, for in-car ASR, the driver's position is limited to a certain region in a car. In other scenarios, we might have access to a multi-channel enrollment utterance of the speaker recorded in the same position as the final mixed speech. In such a case, audio source localization methods can be applied. Conventionally, this can be done by methods based on generalized cross-correlation or steered-response power, but recently, deep learning methods have also shown success in this task. An alternative is to skip the explicit estimation of the location and directly extract features in which the location is encoded when a multi-channel enrollment is available. We will detail this approach further in the next section. Spatial clues can also be obtained from a video using face detection and tracking systems. A previous work [36] demonstrated this possibility with a 180-degree wide-angle camera positioned parallel to a linear microphone array20. By identifying the target speaker in the video, the azimuth with respect to the microphone array was roughly approximated. Depth cameras can also be used to estimate not only the azimuth but also the elevation and distance of the speaker.

Fig. 10: Audio-visual clue-based TSE system

### _Spatial clue encoder_

The left part of Fig. 12 shows the overall structure and usage of a spatial clue encoder, which usually consists of two parts: the extraction of directional features and an NN post-processing of them. Two possible forms of spatial clues are dominant in the literature: the angle of the target speaker with respect to the microphone array or a multi-channel enrollment utterance recorded in the target location. Both can be encoded into directional features.
When the spatial clue is DOA, the most commonly used directional features are the _angle features_, which are computed as the cosine of the difference between the IPD and the target phase difference (TPD):

\[\text{AF}[n,f]=\sum_{m_{1},m_{2}\in\mathcal{M}}\cos\bigg{(}\text{TPD}\left(m_{1},m_{2},\phi_{s},f\right)-\text{IPD}\left(m_{1},m_{2},n,f\right)\bigg{)} \tag{19}\]

\[\text{TPD}(m_{1},m_{2},\phi_{s},f)=\frac{2\pi fF_{s}}{F}\ \frac{\cos\phi_{s}\ \Delta_{m_{1},m_{2}}}{c} \tag{20}\]

\[\text{IPD}(m_{1},m_{2},n,f)=\angle Y^{m_{2}}[n,f]-\angle Y^{m_{1}}[n,f], \tag{21}\]

where \(\mathcal{M}\) is a set of pairs of microphones used to compute the feature, \(F_{s}\) is the sampling frequency, \(F\) is the number of frequency bins, \(\phi_{s}\) is the target direction, \(c\) is the speed of sound, and \(\Delta_{m_{1},m_{2}}\) is the distance from microphone \(m_{1}\) to microphone \(m_{2}\). An example of angle features is shown on the right of Fig. 12. For time-frequency bins dominated by the source from direction \(\phi_{s}\), the value of the angle feature should be close to 1 or -1. Other directional features have been proposed that exploit a grid of fixed beamformers. A directional power ratio measures the ratio between the power of the response of a beamformer steered into the target direction and the power of the beamformer responses steered into all the directions in the grid. In a similar fashion, a directional signal-to-noise ratio can also be computed, which compares the response of a beamformer in the target direction with the response of a beamformer in the direction with the strongest interference. If the spatial clue consists of a multi-channel enrollment utterance, the directional feature can be formed as a vector of IPDs computed from the enrollment. Alternatively, the DOA can be estimated from the enrollment, and the spatial features derived from it can be used. Note that when using a spatial clue to determine the target speaker, the multi-channel input of the speech extraction module must also be used. This enables the identification of the speaker coming from the target location in the mixture. Furthermore, a target extractor is often implemented as beamforming, as explained in Section IV-C.

Fig. 11: SDR Improvement of TSE with audio, visual, and audio-visual clues for mixtures of same/different gender and for corruptions of audio and visual clues: Audio clues were corrupted by adding white noise at SNR of 0 dB to enrollment utterance. Video clues were corrupted by masking mouth region in video.

Fig. 12: Illustration of usage of spatial clue encoder and directional features

### _Combination with other clues_

Although a spatial clue is very informative and generally can lead the TSE system to a correct extraction of the target, it does fail in some instances. Estimation errors of DOA are harmful to proper extraction. Furthermore, if the spatial separation of the speakers with respect to the microphone array is not significant enough, the spatial clue may not discriminate between them. Combining a spatial clue with audio or visual clues is an option to combat such failure cases.

### _Experimental results_

We next report the results from an experiment with spatial clues [36] that compared the effectiveness of using audio, visual, and spatial clues. The audio-clue encoder was trained jointly with the extraction module, and the visual encoder was a pre-trained lip-reading network. The target speaker's direction was encoded in the angle feature.
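To make Eqs. (19)-(21) concrete, here is a minimal numpy sketch of the angle-feature computation; the array shapes and names are our assumptions rather than the implementation of any particular system:

```python
import numpy as np

def angle_feature(Y, phi_s, mic_pairs, deltas, fs, c=343.0):
    """Angle features of Eq. (19). Y: complex STFT tensor of shape
    (M, N, F) = (mics, frames, frequency bins); phi_s: target DOA in
    radians; deltas[(m1, m2)]: inter-microphone distance in meters."""
    M, N, F = Y.shape
    f_bins = np.arange(F)
    af = np.zeros((N, F))
    for m1, m2 in mic_pairs:
        ipd = np.angle(Y[m2]) - np.angle(Y[m1])                 # Eq. (21)
        tpd = (2 * np.pi * f_bins * fs / F
               * np.cos(phi_s) * deltas[(m1, m2)] / c)          # Eq. (20)
        af += np.cos(tpd[None, :] - ipd)                        # Eq. (19)
    return af

# Toy usage with random spectra for a 2-microphone array:
Y = np.random.randn(2, 50, 257) + 1j * np.random.randn(2, 50, 257)
af = angle_feature(Y, phi_s=np.pi / 3, mic_pairs=[(0, 1)],
                   deltas={(0, 1): 0.05}, fs=16000)
print(af.shape)  # (50, 257)
```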
The spatial and visual embeddings were fused with the extraction network by concatenation and the audio embedding with a factorized layer. The extraction module employed a neural network consisting of temporal convolutional layers. The experiments were performed on a Mandarin audio-visual dataset containing mixtures of two and three speakers. The results in Fig. 13 were divided into several conditions, based on the angle separation between the closest speakers. The spatial clue is very effective, although the performance declines when speakers are near each other (\(<15^{\circ}\)). A combination with other modalities outperformed any individual type of clue in all the conditions. ### _Discussion_ Using spatial clues is a powerful way of conditioning a TSE system to extract the target speaker. It relies on the availability of signals from a microphone array and a way to determine the location of the target speaker. Unfortunately, these restrictions limit the applications to some extent. Neural TSE methods with spatial clues follow a long history of research on the topic, such as beamforming techniques, and extend them with non-linear processing. This approach unifies the methods with those using other clues and allows a straightforward combination of different clues into one system. Such combinations can alleviate the shortcomings of spatial clues, including the failures when the speakers are located in the same direction from the point of view of the microphones. In most current neural TSE works, the target speaker's location is assumed to be fixed. Although the methods should be easily extended to a dynamic case, investigations of such settings remain relatively rare [24]. ## VIII Extension to other tasks The ideas of TSE can be applied to other speech processing tasks, such as ASR and diarization. ### _Target-speaker ASR_ An important application of TSE is TS-ASR, where the goal is to transcribe the target speaker's speech and ignore all the interference speakers. The TSE approaches we described can be naturally used as a front-end to an ASR system to achieve TS-ASR. Such a cascade combination allows for a modular system, which offers ease of development and interpretability. However, the TSE system is often optimized with a signal loss, as in Eq. (16). Such a TSE system inevitably introduces artifacts caused by the remaining interferences, over-suppression, and other non-linear processing distortions. These artifacts limit the expected performance improvement from a TSE front-end. One approach to mitigate the effect of such artifacts is to optimize the TSE front-end with an ASR criterion [10]. The TSE front-end and the ASR back-end are NNs and can be interconnected with differentiable operations, such as beamforming and feature extraction. Therefore, a cascade system can be represented with a single computational graph, allowing all parameters to be jointly trained. Such joint-training can significantly improve the TS-ASR performance. Another approach inserts a fusion layer into an ASR system [26, 45] to directly perform clue conditioning. These integrated TS-ASR systems avoid any explicit signal extraction step, a decision that reduces the computational cost, although such systems may be less interpretable than cascade systems. TS-ASR can use the audio clues provided by pre-recorded enrollment utterances [10, 26, 45] or from a keyword (anchor) for a smart-device scenario [54], for example. 
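The single-computational-graph idea behind joint training of a cascade TS-ASR system can be sketched in a few lines of PyTorch; the toy modules below are placeholders of our own, not the architecture of any published system:

```python
import torch
import torch.nn as nn

# Toy stand-ins for a TSE front-end and an ASR back-end.
class TinyTSE(nn.Module):
    def __init__(self, dim=80):
        super().__init__()
        self.mask = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, mix, clue):
        # Condition a masking network on the clue embedding (concatenation fusion).
        clue = clue.unsqueeze(1).expand(-1, mix.size(1), -1)
        return mix * self.mask(torch.cat([mix, clue], dim=-1))

class TinyASR(nn.Module):
    def __init__(self, dim=80, vocab=30):
        super().__init__()
        self.out = nn.Linear(dim, vocab)

    def forward(self, feats):
        return self.out(feats).log_softmax(dim=-1)

tse, asr = TinyTSE(), TinyASR()
opt = torch.optim.Adam(list(tse.parameters()) + list(asr.parameters()))

mix = torch.randn(4, 120, 80)            # (batch, frames, features)
clue = torch.randn(4, 80)                # enrollment embedding
targets = torch.randint(1, 30, (4, 20))  # token ids (0 = CTC blank)

# One computational graph: the ASR loss back-propagates into the TSE front-end.
log_probs = asr(tse(mix, clue)).transpose(0, 1)   # (frames, batch, vocab)
loss = nn.CTCLoss()(log_probs, targets,
                    torch.full((4,), 120, dtype=torch.long),
                    torch.full((4,), 20, dtype=torch.long))
loss.backward()
opt.step()
```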
Some works have also exploited visual clues, which can be used for the extraction process and to implement an audio-visual ASR back-end, since lip-reading also improves ASR performance [55].

### _Target-speaker VAD and diarization_

The problem of speech diarization consists of detecting who spoke when in a multi-speaker recording. This technology is essential for achieving, e.g., meeting recognition and analysis systems that can transcribe a discussion between multiple participants. Several works have explored using speaker clues to perform this task [27, 28]. For example, a personalized VAD [27] exploits a speaker embedding vector derived from an enrollment utterance of the target speaker to predict the target speaker's activity, i.e., whether they are speaking at a given time. In principle, this can be done with a system like that presented in Section IV, where the output layer performs the binary classification of the speaker activity instead of estimating the target speech signal. Similar systems have also been proposed using visual clues, called audio-visual VAD [56]. Predicting the target speaker's activity is arguably a more straightforward task than estimating the speech signal. Consequently, TS-VAD can use simpler network architectures, leading to more lightweight processing. The above TS-VAD systems, which estimate the speech activity of a single target speaker, have been extended to simultaneously output the activity of multiple target speakers [28]. The resulting system achieved the top diarization performance in the CHiME 6 evaluation campaign21.

Fig. 13: SI-SNR improvement of TSE with audio, visual, and spatial clues in four conditions based on angle separation between speakers [36]

Footnote 21: The results of the CHiME 6 challenge can be found at: [https://chimechallenge.github.io/chime6/results.html](https://chimechallenge.github.io/chime6/results.html). The top system used TS-VAD among other technologies. DIHARD III performed a diarization evaluation on the CHiME 6 challenge data. Here the top system also used TS-VAD: [https://dihardchallenge.github.io/dihard3/results](https://dihardchallenge.github.io/dihard3/results)

## IX Remaining issues and outlook

Research toward computational selective hearing has been a long endeavor. Recent developments in TSE have enabled identifying and extracting a target speaker's voice in a mixture by exploiting audio, visual, or spatial clues, which is one step closer to solving the cocktail-party problem. Progress in speech processing (speech enhancement, speaker recognition) and image processing (face recognition, lip-reading), combined with deep learning technologies to learn models that can effectively condition processing on auxiliary clues, triggered the progress in the TSE field. Some of the works we presented have achieved levels of performance that seemed out-of-reach just a few years ago and are already being deployed in products22. Footnote 22: The following blog details the effort for deploying a visual clue-based TSE system for on-device processing: [https://ai.googleblog.com/2020/10/audiovisual-speech-enhancement-in.html](https://ai.googleblog.com/2020/10/audiovisual-speech-enhancement-in.html). Despite substantial achievements, many opportunities remain for further research, some of which we list below.

### _Deployment of TSE systems_

Most of the systems we described operate offline and are computationally expensive. They are also evaluated under controlled (mostly simulated mixture) settings.
Deploying such systems introduces engineering and research challenges to reduce computational costs while maintaining high performance under less controlled recording conditions. We next discuss some of these aspects.

#### IX-A1 Inactive target speaker

Most TSE systems have been evaluated assuming that the target speaker is actively speaking in the mixture. In practice, we may not know beforehand whether the target speaker will be active. We would expect a TSE system to output no signal when the target speaker is inactive, which may not actually be the case with most current systems, since they are not explicitly trained to do so. The inactive target speaker problem is specific to TSE. The type of clue used may also greatly impact the difficulty of tackling this problem. For instance, visual voice activity detection [5] might alleviate this issue. However, it is more challenging with audio clues [57], and further research may be required.

#### IX-A2 Training and evaluation criteria

Most TSE systems are trained and evaluated using such signal level metrics as SNR or SDR. Although these metrics are indicative of the extraction performance, their use presents two issues. First, they may not always be correlated with human perception and intelligibility or with ASR performance. This issue is not specific to TSE; it is common to BSS and noise reduction methods. For ASR we can train a system end-to-end, as discussed in Section VIII-A. When targeting applications for human listeners, the problem can be partly addressed using other metrics for training or evaluation that correlate better with human perception, such as short-time objective intelligibility (STOI) or perceptual evaluation of speech quality (PESQ) [6]. However, controlled listening tests must be conducted to confirm the impact of a TSE on human listeners [6]. Second, unlike BSS and noise reduction, a TSE system needs to identify the target speech, implying other sources of errors. Indeed, failing to identify the target may lead to incorrectly estimating an interference speaker or outputting the mixture unchanged. Although these errors directly impact the SDR scores, it would be fruitful to agree on evaluation metrics that separate extraction and identification performance to better reveal the behavior of TSE systems. Signal level metrics might not satisfactorily represent the extraction performance for inactive speaker cases. A better understanding of the failures might help develop TSE systems that can recognize when they cannot identify the target speech, which is appealing for practical applications. Consequently, developing better training and evaluation criteria is a critical research direction.

#### IX-A3 Robustness to recording conditions

Training neural TSE systems requires simulated mixtures, as discussed in Section IV-D. Applying these systems to real conditions (multi-speaker mixtures recorded directly with a microphone) requires that the training data match the application scenario relatively well. For example, the type of noise and reverberation may vary significantly depending on where a system is deployed. This raises questions about the robustness of TSE systems to various recording conditions. Neural TSE systems trained with a large amount of simulated data have been shown to generalize to real recording conditions [8]. However, exploiting real recordings where no reference target speech signal is available could further improve performance.
Real recordings might augment the training data or be used to adapt a TSE system to a new environment. The issue is defining unsupervised training losses correlated with the extraction performance of the target speech without requiring access to the reference target signal. Another interesting research direction is combining neural TSE systems, which are powerful under matched conditions, with such generative-based approaches as IVE [12], which are adaptive to recording conditions.

#### IX-A4 Lightweight and low-latency systems

Research on lightweight and low-latency TSE systems is gaining momentum as the use of teleconferencing systems in noisy environments has risen in response to the Covid pandemic. Other important use cases for TSE are hearing aids and hearables, both of which impose very severe constraints in terms of computation costs and latency. The recent DNS23 and Clarity24 challenges that target teleconferencing and hearing aid application scenarios include tracks where target speaker clues (enrollment data) can be exploited. This demonstrates the growing interest in practical solutions for TSE. Footnote 23: [https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/) Footnote 24: [https://claritychallenge.github.io/clarity_CC_doc/](https://claritychallenge.github.io/clarity_CC_doc/) Since TSE is related to BSS and noise reduction, the development of online and low-latency TSE systems can be inspired by the progress of BSS/noise reduction in that direction. However, TSE must also identify the target speech, which may need specific solutions that exploit the long context of the mixture to reliably and efficiently capture a speaker's identity.

#### IX-A5 Spatial rendering

For applications of TSE to hearing aids or hearables, sounds must be localized in space after the TSE processing. Therefore, a TSE system must not only extract the target speech but also estimate its direction to allow rendering it so that a listener perceives the correct direction of the source.

### _Self-supervised and cross-modal learning_

A TSE system identifies the target speech in a mixture based on the intermediate representation of the mixture and the clue. Naturally, TSE benefits from better intermediate representations. For example, speech models learned with self-supervised learning criteria have gained attention as a way to obtain robust speech representations. They have shown potential for pre-training many speech processing downstream tasks, such as ASR, speaker identification, and BSS. Such self-supervised models could also reveal advantages for TSE since they could improve robustness by allowing efficient pre-training on various acoustic conditions. Moreover, for audio-based TSE, using the same self-supervised pre-trained model for the audio clue encoder and the speech extraction module will help to learn the common embedding space between the enrollment and speech signals in the mixture. Similarly, the progress in cross-modal learning, which aims to learn the joint representation of data across modalities, could benefit such multi-modal approaches as visual clue-based TSE.

### _Exploring other clues_

We presented three types of clues that have been widely used for TSE. However, other clues can also be considered. For example, recent works have explored other types of spatial clues such as the distance [58].
Moreover, humans do not only rely on physical clues to perform selective hearing. We also use more abstract clues, such as semantic ones. Indeed, we can rapidly focus our attention on a speaker when we hear our name or a topic we are interested in. Reproducing a similar mechanism would require TSE systems that operate with semantic clues, which introduces novel challenges concerning how to represent semantic information and exploit it within a TSE system. Some works have started to explore this direction, such as conditioning on languages [59] or more abstract concepts [60]. Other interesting clues consist of signals that measure a listener's brain activity to guide the extraction process. Indeed, the electroencephalogram (EEG) signal of a listener focusing on a speaker correlates with the envelope of that speaker's speech signal. Ceolini et al. identified the possibility of using EEG as a clue for TSE with a system similar to the one described in Section IV [61]. An EEG-guided TSE might open the door for futuristic hearing aids controlled by the user's brain activity, which might automatically emphasize the speaker a user wants to hear. However, research is still needed because developing a system that requires only marginal tuning to the listener is especially challenging. Moreover, collecting a large amount of training data is very complicated since it is more difficult to control the quality of such clues. Compared to audio and visual TSE clues, EEG signals are very noisy and affected by changes in the attention of the listener, body movements, and other factors.

### _Beyond speech_

Human selective listening abilities go beyond speech signals. For example, we can focus on listening to the part of an instrument in an orchestra or switch our attention to a siren or a barking dog. In this paper, we focused on TSE, but similar extraction problems have also been explored for other audio-processing tasks. For example, much research has been performed on extracting the track of an instrument in a piece of music conditioned on, e.g., the type of instrument [62], video of the musician playing [63], or the EEG signal of the listener [64]. These approaches may be important to realize, e.g., audio-visual music analysis [65]. Recently, the problem was extended to the extraction of arbitrary sounds from a mixture [66, 67], e.g., extracting the sound of a siren or a klaxon from a recording of a mixture of street sounds. We can use systems such as the one introduced in Section IV to tackle these problems, where the clue can be a class label indicating the type of target sound [66], the enrollment audio of a similar target sound [67], a video of the sound source [9], or a text description of the target sound [68]. Target sound extraction may become an important technology to design, e.g., hearables or hearing aids that could filter out nuisances and emphasize important sounds in our surroundings, or audio-visual scene analysis [9]. Psycho-acoustic studies suggest that humans process speech and music partly using shared auditory mechanisms and that exposure to music can lead to better discrimination of speech sounds [69]. It would be interesting to explore whether, similarly to humans, TSE systems could benefit from exposure to other acoustic signals by training a system to extract target speech, music, or arbitrary sounds.

## X Resources

We conclude by providing pointers to selected datasets and toolkits available for those motivated to experiment with TSE. TSE works mostly use datasets designed for BSS.
These datasets consist generally of artificial mixtures generated from
Humans can listen to a target speaker even under complex acoustic conditions involving noise, reverberation, and interfering speakers; this phenomenon is called the cocktail-party effect. For decades, researchers have focused on this human listening ability. One important challenge is handling interfering speakers, because target and non-target speech signals share similar characteristics and are therefore difficult to distinguish. Target speech/speaker extraction (TSE) isolates the speech signal of a target speaker from a mixture of several speakers, possibly in the presence of noise and reverberation. The technique uses clues to identify the specific speaker in the mixture, such as spatial clues indicating the target speaker's direction, a video of the speaker's lips, or a pre-recorded enrollment utterance from which the speaker's voice characteristics can be derived.
2309.07270
GPU Scheduler for De Novo Genome Assembly with Multiple MPI Processes
$\textit{De Novo}$ Genome assembly is one of the most important tasks in computational biology. ELBA is the state-of-the-art distributed-memory parallel algorithm for the overlap detection and layout simplification steps of $\textit{De Novo}$ genome assembly, but it has a performance bottleneck in pairwise alignment. In this work, we propose three GPU schedulers for ELBA to accommodate multiple MPI processes and multiple GPUs. The GPU schedulers enable multiple MPI processes to perform computation on GPUs in a round-robin fashion. Both strong and weak scaling experiments show that all three schedulers significantly improve the performance of the baseline, although there is a trade-off between parallelism and GPU scheduler overhead. For the best-performing implementation, the one-to-one scheduler achieves $\sim$7-8$\times$ speed-up using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.
Minhao Li, Siyu Wang, Guanghao Wei
2023-09-13T19:20:46
http://arxiv.org/abs/2309.07270v2
# GPU Scheduler for _De Novo_ Genome Assembly with Multiple MPI Processes

###### Abstract

_De Novo_ Genome assembly is one of the most important tasks in computational biology. ELBA is the state-of-the-art distributed-memory parallel algorithm for the overlap detection and layout simplification steps of _De Novo_ genome assembly, but it has a performance bottleneck in pairwise alignment. In this work, we propose three GPU schedulers for ELBA to accommodate multiple MPI processes and multiple GPUs. The GPU schedulers enable multiple MPI processes to perform computation on GPUs in a round-robin fashion. Both strong and weak scaling experiments show that all three schedulers significantly improve the performance of the baseline, although there is a trade-off between parallelism and GPU scheduler overhead. For the best-performing implementation, the one-to-one scheduler achieves \(\sim\)7-8\(\times\) speed-up using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.

## I Introduction

_De Novo_ Genome assembly is the method for constructing genomes from a large number of (short- or long-) DNA fragments, with no prior knowledge of the correct sequence or order of those fragments. It is one of the most important and computationally intensive tasks in computational biology. ELBA [1] is the state-of-the-art distributed-memory parallel algorithm that uses MPI, OpenMP, and sparse matrix multiplication to accelerate the task on the CPU. However, Zeni et al. [2] show that pairwise alignment constitutes about 90% of the overall run-time when using real data sets of genome assembly tasks. Additionally, Zeni et al. [2] show that taking advantage of high-performance computing using GPU and CUDA boosts the performance of pairwise alignment. Our work proposes three GPU scheduler schemes, namely One-to-one, One-to-all, and Opt-one-to-one. All three schedulers show great strong scaling efficiency. For the best-performing implementation, the One-to-one scheduler achieves \(\sim\)7-8\(\times\) speed-up using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler. For the rest of this paper, Section III introduces the methodology and schemes of the three schedulers, and Section IV presents the empirical results of our experiments and evaluates the pros and cons of these schedulers. The implementation builds on the code base of ELBA1, and our implementation is publicly available on GitHub2. Footnote 1: [https://github.com/PASSIONLab/ELBA](https://github.com/PASSIONLab/ELBA) Footnote 2: [https://github.com/garywei944/ELBA/tree/GPU](https://github.com/garywei944/ELBA/tree/GPU)

## II Related works

In this section, we review the prior work on ELBA in the literature that led to the proposal of this work. BELLA [3] is the first work that formulates overlap detection for _De Novo_ genome assembly using sparse matrices under a distributed memory setting. BELLA uses a seed-based approach to detect overlaps in the context of long-read applications and uses a Markov chain model to filter out unreliable k-mers. If the sparse overlap matrix is too large, BELLA divides it into batches based on available RAM. For the alignment phase, BELLA proposes an efficient seed-and-extend algorithm. An adaptive threshold is used to perform the X-drop alignment. Experiments show that using the probability model, BELLA improves the assembly quality. It also achieves a 2.5\(\times\) boost in performance compared to BLASR [4].
As a follow-up work to BELLA, diBELLA 2D [1] uses the Overlap-Layout-Consensus paradigm to tackle the _De Novo_ genome assembly problem, which introduces the theoretical background of ELBA. The algorithm uses a sparse linear algebra-centric approach to optimize the overlap and layout phases. One important component is distributed Sparse General Matrix Multiply, which has high parallelism and is used to perform and boost the overlap step. The input data are divided into equal-sized independent chunks for MPI processes. For the pairwise alignment, diBELLA 2D uses an algorithm similar to X-Drop. Experiments show that the overall pipeline has great strong scaling with respect to the number of MPI processes. The most time-consuming phase is pairwise alignment, which takes more than 90% of the overall time; this provides us with a starting point and encourages us to keep the MPI parallelism. LOGAN is a near-optimal implementation of X-Drop, a dynamic programming algorithm that solves pairwise alignment, using CUDA to boost the performance of pairwise alignment tasks. Multiple layers of parallelism, including intra-sequence parallelism and inter-sequence parallelism, are used in seeking performance improvement. LOGAN takes advantage of the fact that each cell on the same anti-diagonal of the DP (dynamic programming) table is independent of the others and can be processed concurrently. One GPU thread is spawned for each cell; to overcome the limitation of available GPU threads, the DP table is split into segments. Furthermore, multiple pairs of sequences are processed in parallel by assigning each alignment to a GPU block. Experiments show that LOGAN improves performance by up to 10.7\(\times\) over BELLA, which gives us the rationale to combine LOGAN and ELBA to solve the bottleneck of pairwise alignment.

## III Method

We propose three GPU schedulers and compare them with the baseline ELBA GPU scheduler.

### _Baseline_

The vanilla GPU implementation of ELBA [1] uses one MPI process and multiple GPUs. The total work of one process is divided into batches of size \(10,000\), and then each batch is divided into \(c\) sub-batches. At each step, one sub-batch is fed to GPUs to perform the computation. The advantage of this approach is that since there is only one MPI process, it is free to use all of the GPU resources. There is no communication cost and no race conditions. The downside is that it cannot benefit from using multiple MPI processes to accelerate pairwise alignment and other steps. Building on top of this baseline model, after introducing multiple MPI processes, we need to add a GPU scheduler to prevent more than one process from using the same GPU at a time. We experimented with three scheduling algorithms to improve the pairwise alignment and the overall performance.

### _One-to-all Scheduler_

As shown in Figure 1, each MPI process uses all the GPU resources in a round-robin fashion. One process schedules a sub-batch to all GPUs, and after computation is finished, it signals the next process that still has remaining work in the current round. After receiving the completion message, the next process starts scheduling work to all GPUs. The advantage of this algorithm is that it is easy to implement, the communication cost is low, and there is no load imbalance across GPUs. The disadvantage is that even though each process has less work, still only one process is using a GPU at a time. Therefore, it does not fully exploit the benefit of having multiple processes.
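A minimal mpi4py sketch of this round-robin signaling is shown below, under simplifying assumptions: every rank has the same number of sub-batches, and run_on_gpus stands in for the actual GPU alignment kernel (LOGAN in ELBA, which is implemented in C++/CUDA). The actual implementation, described next, additionally tracks how many batches each process has:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_sub_batches = 3  # toy value; assumed identical on every rank here

def run_on_gpus(rank, b):
    # Stand-in for the real GPU pairwise-alignment kernel.
    print(f"rank {rank} runs sub-batch {b} on all GPUs")

for b in range(n_sub_batches):
    # Wait for the "GPU token" from the previous rank in the ring,
    # so only one process touches the GPUs at a time.
    if not (rank == 0 and b == 0):
        comm.recv(source=(rank - 1) % size, tag=0)
    run_on_gpus(rank, b)
    comm.send(None, dest=(rank + 1) % size, tag=0)

if rank == 0:
    comm.recv(source=size - 1, tag=0)  # drain the final token
```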
We use MPI_Send and MPI_Recv to implement the signals because we want to create implicit barriers to force the MPI processes not to access GPUs at the same time. The corner case where rank-1 is smaller than 0 is taken care of by traversing in a ring-array fashion. Before executing, all MPI processes perform an all-to-all communication, as specified from Line 5 to Line 11, to acknowledge how many batches each process has. This is important to avoid deadlocks and waiting for completed processes. A detailed pseudo-code is presented in Algorithm 1. We add this scheduler layer in the distributed batch runner and use LOGAN as a subroutine because MPI and CUDA cannot be compiled together. The master process starts first and signals the next process, and so on. Lines 16 to 20 create an implicit barrier. Whether a process has completed can be checked by comparing its batch number with the total batch number. We use a while loop to find the leftmost uncompleted process. If such a process exists, the current process needs to wait for the signal from that process to avoid conflicts on the GPU. Similarly, a process needs to find the rightmost uncompleted process to send a signal to, as specified from Line 22 to Line 25.

### _One-to-one Scheduler_

As shown in Figure 2, processes are divided into pipelines where each pipeline has one dedicated GPU. Each process in a pipeline schedules one sub-batch of work to its assigned GPU and signals the next process in its pipeline after the job is finished. The scheduling inside one pipeline is round-robin. To be more specific, assume that there are \(m\) GPUs; the \(n\)-th MPI process is assigned to pipeline \(n\) mod \(m\) and performs computation on GPU \(n\) mod \(m\). Instead of traversing sequentially, we traverse with a step size equal to the number of GPUs to create an implicit pipeline. This approach achieves better parallelism because the number of processes that are active at a time is proportional to the number of GPUs. For a fixed sub-batch size, the performance of using only one GPU is not significantly worse than using all GPUs.

Figure 1: **One-to-all Scheduler Workflow** - Each MPI process uses all GPUs; one sub-batch is scheduled each time

Figure 2: **One-to-one Scheduler Workflow** - Each MPI process uses one GPU; one sub-batch is scheduled each time

However, the drawback of this approach is that each pipeline can have a different number of total batches. This may cause some load imbalance on different GPUs. Also, if one GPU has higher computational power than the others, it will become idle after it completes its own work.

### _Opt-one-to-one Scheduler_

To lower the communication cost of the previous approach, we came up with the idea of letting one process finish all the sub-batches in one batch before signaling the next process. As shown in Figure 3, we move the MPI communication from the iteration level to the batch level. The total communication cost should decrease proportionally to the number of sub-batches per batch. However, the GPU might be idling for a short period of time between the runs of sub-batches while the process is doing GPU-unrelated work.

## IV Results

### _Experimental Setup_

We measure the total and alignment running time of the three schedulers proposed in Section III. The experiments are divided into two groups: strong scaling with respect to the number of MPI processes and strong scaling with respect to the number of GPUs. The strong scaling experiments are performed on both the E. Coli 29X (266 MB, 8,605 sequences) and E. Coli 100X (929 MB, 91,394 sequences) datasets.
For reproducibility, all of the experiments are conducted on the NERSC Perlmutter supercomputer cluster. All runs use either 1 or 2 nodes, and each MPI process is assigned 4 CPU cores. Except for the computational resources, other hyperparameters and specifications are the same, namely:

* -cpu-bind=cores, -gpu-bind=none
* -k (the k-mer length): 31
* -s (the k-mer stride): 1
* -alph (the alphabet to use): dna
* -ga (GPU-based x-drop alignment): 15

Codes for the E. Coli 29X dataset use LOWER_KMER_FREQ=20 and UPPER_KMER_FREQ=30, while codes for the E. Coli 100X dataset use LOWER_KMER_FREQ=20 and UPPER_KMER_FREQ=50.

Figure 3: **Opt-one-to-one Scheduler Workflow** - Each MPI process uses one GPU; one batch is scheduled each time

### _Strong Scaling experiments w.r.t. number of MPI processes_

Figure 5 illustrates the strong scaling of our three approaches. We tested on {1, 4, 9, 16, 25} MPI processes and 4 GPUs. The setup with 1 MPI process is treated as the baseline because ELBA can only support 1 MPI process if CUDA is used in pairwise alignment. The best performance achieves approximately 10x speed-up in total time with 4 MPI processes. The alignment time increases from 4 to 25 MPI processes, which is expected because the overhead of MPI communication increases linearly. However, the total runtime still decreases from 4 to 25 MPI processes because other steps also benefit from having more MPI processes, and this outweighs the increasing communication overhead. One interesting observation is that the alignment time is faster for 4 and 9 MPI processes than for only 1 MPI process. This provides evidence that our GPU scheduler benefits from MPI parallelism. Our implementation splits the data on the CPU concurrently before sending it to GPUs.

### _Strong Scaling experiments w.r.t. number of GPUs_

Figure 6 illustrates the strong scaling of our three approaches with respect to the number of GPUs. We run the three approaches on {1, 2, 4} GPUs with 16 MPI processes. Both alignment time and total time scale down, and the (total - alignment) time stays almost constant. This aligns with our hypothesis because having more GPUs means more MPI processes can make progress at the same time. The difference between total time and alignment time scales down with an increasing number of MPI processes, and not with respect to the number of GPUs. One interesting observation is that the alignment time of the one-to-one method is shorter than that of one-to-all. This is expected because multiple MPI processes can move data from CPU to GPU at the same time. Also, since we divide MPI processes into different pipelines, the communication overhead per pipeline is lower.

### _Weak Scaling experiments w.r.t. number of MPI processes_

Table I shows the weak scaling of our three approaches with respect to the number of MPI processes. When the data size increases 10.6 times and the number of MPI processes increases 16 times, the difference between total time and alignment time speeds up by only 7.4 times for all three schedulers, which indicates poor weak-scaling efficiency. More discussion is given in Section IV-E.

### _Comparison between small dataset and large dataset_

After comparing the performance on E. Coli 100X to the performance on E. Coli 29X, as shown in Figure 4, we find that the strong scaling for the small dataset is worse than for the large dataset.
The overall run time increases as the number of MPI processes goes from 4 to 25. However, referring to Figure 4 and Figure 5, surprisingly, neither the total runtime nor the alignment time of the E. Coli 100X experiments follows an \(\mathcal{O}(n)\) or \(\mathcal{O}(\log n)\) runtime scale-up, which makes the weak-scaling analysis above less informative. The fundamental reason is that the computational workload needed by the LOGAN algorithm [2] (essentially the Needleman-Wunsch or Smith-Waterman algorithm with X-Drop) is highly influenced by upstream k-mer-related parameters such as the k-mer length, k-mer stride, and lower and upper k-mer frequencies. All the above experiments are conducted using Prof. Giulia Guidi's preset parameters, where the lower k-mer frequency is 20 and the upper k-mer frequency is 50 for the E. Coli 100X dataset.

## V Conclusions

In this work, we introduce three GPU schedulers for ELBA to accommodate multiple MPI processes and multiple GPUs. We show that all three schedulers are able to significantly improve the performance of the baseline by conducting both strong and weak scaling experiments. For the best-performing implementation, the one-to-one scheduler achieves \(\sim\)7-8\(\times\) speed-up using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.

## VI Acknowledgement

We are grateful to Professor Giulia Guidi for her tutorial on the background of ELBA and guidance on implementing this project.
$\textit{De Novo}$ genome assembly is one of the most important tasks in computational biology, and ELBA is the state-of-the-art distributed-memory parallel algorithm for the overlap detection and layout simplification steps of $\textit{De Novo}$ genome assembly; however, a performance bottleneck exists in pairwise alignment. In this work, we proposed three GPU schedulers for ELBA to accommodate multiple MPI processes and multiple GPUs. These GPU schedulers enable multiple MPI processes to perform computation on GPUs in a round-robin fashion. Strong and weak scaling experiments show that the three schedulers significantly improve the baseline performance, although there is a trade-off between parallelism and GPU scheduler overhead. For the best-performing implementation, the one-to-one scheduler achieves a ~7-8x speed-up using 25 MPI processes compared with the baseline vanilla ELBA GPU scheduler.
2301.13427
Disciplined Saddle Programming
We consider convex-concave saddle point problems, and more generally convex optimization problems we refer to as $\textit{saddle problems}$, which include the partial supremum or infimum of convex-concave saddle functions. Saddle problems arise in a wide range of applications, including game theory, machine learning, and finance. It is well known that a saddle problem can be reduced to a single convex optimization problem by dualizing either the convex (min) or concave (max) objectives, reducing a min-max problem into a min-min (or max-max) problem. Carrying out this conversion by hand can be tedious and error prone. In this paper we introduce $\textit{disciplined saddle programming}$ (DSP), a domain specific language (DSL) for specifying saddle problems, for which the dualizing trick can be automated. The language and methods are based on recent work by Juditsky and Nemirovski arXiv:2102.01002 [math.OC], who developed the idea of conic-representable saddle point programs, and showed how to carry out the required dualization automatically using conic duality. Juditsky and Nemirovski's conic representation of saddle problems extends Nesterov and Nemirovski's earlier development of conic representable convex problems; DSP can be thought of as extending disciplined convex programming (DCP) to saddle problems. Just as DCP makes it easy for users to formulate and solve complex convex problems, DSP allows users to easily formulate and solve saddle problems. Our method is implemented in an open-source package, also called DSP.
Philipp Schiele, Eric Luxenberg, Stephen Boyd
2023-01-31T05:48:22
http://arxiv.org/abs/2301.13427v2
# Disciplined Saddle Programming ###### Abstract We consider convex-concave saddle point problems, and more generally convex optimization problems we refer to as _saddle problems_, which include the partial supremum or infimum of convex-concave saddle functions. Saddle problems arise in a wide range of applications, including game theory, machine learning, and finance. It is well known that a saddle problem can be reduced to a single convex optimization problem by dualizing either the convex (min) or concave (max) objectives, reducing a min-max problem into a min-min (or max-max) problem. Carrying out this conversion by hand can be tedious and error prone. In this paper we introduce _disciplined saddle programming_ (DSP), a domain specific language (DSL) for specifying saddle problems, for which the dualizing trick can be automated. The language and methods are based on recent work by Juditsky and Nemirovski [17], who developed the idea of conic-representable saddle point programs, and showed how to carry out the required dualization automatically using conic duality. Juditsky and Nemirovski's conic representation of saddle problems extends Nesterov and Nemirovski's earlier development of conic representable convex problems; DSP can be thought of as extending disciplined convex programming (DCP) to saddle problems. Just as DCP makes it easy for users to formulate and solve complex convex problems, DSP allows users to easily formulate and solve saddle problems. Our method is implemented in an open-source package, also called DSP. ###### Contents * 1 Introduction * 1.1 Previous and related work * 1.2 Outline * 2 Saddle programming * 2.1 Saddle functions * 2.2 Saddle point problems * 2.3 Saddle extremum functions * 2.4 Saddle problems * 2.5 Solving saddle problems * 2.6 Dual reduction * 3 Applications * 3.1 Robust bond portfolio construction * 3.2 Model fitting robust to data weights * 3.3 Robust production problem with worst case prices * 3.4 Robust Markowitz portfolio construction * 4 Disciplined saddle point programming * 4.1 Saddle function calculus * 4.2 Conically representable saddle functions * 5 Implementation * 5.1 Atoms * 5.2 Calculus rules * 5.3 Saddle point problems * 5.4 Saddle extremum functions * 5.5 Saddle problems * 6 Examples * 6.1 Robust bond portfolio construction * 6.2 Model fitting robust to data weights * 6.3 Robust Markowitz portfolio construction Introduction We consider saddle problems, by which we mean convex-concave saddle point problems or, more generally, convex optimization problems that include the partial supremum or infimum of convex-concave saddle functions. Saddle problems arise in various fields such as game theory, robust and minimax optimization, machine learning, and finance. While there are algorithms specifically designed to solve some types of saddle point or minimax problems, another approach is to convert them into standard convex optimization problems using a trick based on duality that can be traced back to at least the 1920s. The idea is to express the infima or suprema that appear in the saddle problem via their duals, which converts them to suprema or infima, respectively. Roughly speaking, this turns a min-max problem into a min-min (or max-max) problem, which can then be solved by standard methods. Specific cases of this trick are well known; the classical example is converting a matrix game, a specific saddle point problem, into a linear program (LP) [14]. 
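To make this classical reduction concrete, here is a minimal CVXPY sketch that performs the matrix-game-to-LP conversion by hand; the payoff data and names are ours, chosen only for illustration:

```python
import cvxpy as cp
import numpy as np

C = np.array([[1.0, -1.0], [-1.0, 1.0]])  # toy payoff matrix (matching pennies)
m, n = C.shape

x = cp.Variable(m, nonneg=True)  # player one's mixed strategy
t = cp.Variable()                # worst-case payment

# Dualizing the inner maximization over player two's simplex leaves an LP:
# the inner maximum of x^T C y over the simplex equals max_j (C^T x)_j.
prob = cp.Problem(cp.Minimize(t), [cp.sum(x) == 1, C.T @ x <= t])
prob.solve()
print(prob.value, x.value)  # game value 0, strategy (1/2, 1/2)
```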
While the dualizing trick has been known and used for almost 100 years, it has always been done by hand, for specific problems. It can only be carried out by those who have a working knowledge of duality in convex optimization, and are aware of the trick. In this paper we propose an automated method for carrying out the dualizing trick. Our method is based on the theory of conic representation of saddle point problems, developed recently by Juditsky and Nemirovski [17]. Based on this development, we have designed a domain specific language (DSL) for describing saddle problems, which we refer to as disciplined saddle programming (DSP). When a problem description complies with the syntax rules, _i.e._, is DSP-compliant, it is easy to verify that it is a valid saddle problem, and more importantly, automatically carry out the dualizing trick. We have implemented the DSL in an open source software package, also called DSP, which works with CVXPY [1], a DSL for specifying and solving convex optimization problems. DSP makes it easy to specify and solve saddle problems, without any expertise in (or even knowledge of) convex duality. Even for those with the required expertise to carry out the dualizing trick by hand, DSP is less tedious and error prone. DSP is _disciplined_, meaning it is based on a small number of syntax rules that, if followed, guarantee that the specified problem is a valid saddle problem. It is analogous to disciplined convex programming (DCP) [1], which is a DSL for specifying convex optimization problems. When a problem specification follows these syntax rules, _i.e._, is DCP-compliant, it is a valid convex optimization problem, and more importantly can be automatically converted to an equivalent cone program, and then solved. As a practical matter, DCP allows a large number of users to specify and solve even complex convex optimization problems, with no knowledge of the reduction to cone form. Indeed, most DCP users are blissfully unaware of how their problems are solved, _i.e._, a reduction to cone form. DCP was based on the theory of conic representations of convex functions and problems, pioneered by Nesterov and Nemirovski [17]. Widely used implementations of DCP include CVXPY [1], Convex.jl [13], CVXR [14], YALMIP [15], and CVX [1]. Like DCP did for convex problems, DSP makes it easy to specify and solve saddle problems, with most users unaware of the dualization trick and reduction used to solve their problems. ### Previous and related work Saddle problems.Studying saddle problems is a long-standing area of research, resulting in many theoretical insights, numerous algorithms for specific classes of problems, and a large number of applications. Saddle problems are often studied in the context of minimax or maximin optimization [10, 11], which, while dating back to the 1920s and the work of von Neumann and Morgenstern on game theory [14], continue to be active areas of research, with many recent advancements for example in machine learning [13]. A variety of methods have been developed for solving saddle point problems, including interior point methods [15, 16, 17, 18, 19], and second-order methods [20, 21], where many of these methods are specialized to specific classes of saddle problems. Depending on the class of saddle problem, the methods differ in convergence rate. For example, for the subset of smooth minimax problems, an overview of rates for different curvature assumptions is given in [16]. 
Due to their close relation to Lagrange duality, saddle problems are commonly studied in the context of convex analysis (see, for example, [1, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]), with an analysis via monotone operators given in [20]. The practical usefulness of saddle programming in many applications is also increasingly well known. Many applications of saddle programming are robust optimization problems [2, 30]. For example, in statistics, distributionally robust models can be used when the true distribution of the data generating process is not known [1]. Another common area of application is in finance, with [12, §19.3-4] describing a range of financial applications that can be characterized as saddle problems. Similarly, [1, 13, 14] describe variations of the classical portfolio optimization problem as saddle problems. Disciplined convex programming. DCP is a grammar for constructing optimization problems that are provably convex, meaning that they can be solved globally, efficiently and accurately. It is based on the rule that the convexity of a function \(f\) is preserved under composition if all inner expressions in arguments where \(f\) is nondecreasing are convex, all expressions in arguments where \(f\) is nonincreasing are concave, and all other expressions are affine. A detailed description of the composition rule is given in [1, §3.2.4]. Using this rule, functions can be composed from a small set of primitives, called atoms, where each atom has known curvature, sign, and monotonicity. Every function that can be constructed from these atoms according to the composition rule is convex, but the converse is not true. The DCP framework has been implemented in many programming languages, including MATLAB [1, 20], Python [14], R [20], and Julia [21], and is used by researchers and practitioners in a wide range of fields. Well-structured convex-concave saddle point problems. As mentioned earlier, disciplined saddle programming is based on Juditsky and Nemirovski's recent work on well-structured convex-concave saddle point problems [22].

### Outline

In §2 we describe saddle programming, which includes the classical saddle point problem, as well as convex problems that include functions described via partial minimization or maximization of a saddle function. We describe some typical applications of saddle programming in §3. In §4 we describe disciplined saddle programming, which is a way to specify saddle programs in such a way that validity is easy to verify, and the reduction to an equivalent cone program can be automated. We describe our implementation in §5, showing how saddle functions, saddle extremum functions, saddle point problems, and saddle problems are specified. We present numerical examples in §6.

## 2 Saddle programming

### Saddle functions

A _saddle function_ (also referred to as a convex-concave saddle function) \(f:\mathcal{X}\times\mathcal{Y}\to\mathbf{R}\) is one for which \(f(\cdot,y)\) is convex for any fixed \(y\in\mathcal{Y}\), and \(f(x,\cdot)\) is concave for any fixed \(x\in\mathcal{X}\). The argument domains \(\mathcal{X}\subseteq\mathbf{R}^{n}\) and \(\mathcal{Y}\subseteq\mathbf{R}^{m}\) must be nonempty, closed, and convex. We refer to \(x\) as the convex variable, and \(y\) as the concave variable, of the saddle function \(f\).

Examples.

* _Functions of \(x\) or \(y\) alone._ A convex function of \(x\), or a concave function of \(y\), is a trivial example of a saddle function.
* _Lagrangian of a convex optimization problem._ The convex optimization problem \[\begin{array}{ll}\mbox{minimize}&f_{0}(x)\\ \mbox{subject to}&Ax=b,\quad f_{i}(x)\leq 0,\quad i=1,\ldots,m,\end{array}\] with variable \(x\in\mathbf{R}^{n}\), where \(f_{0},\ldots,f_{m}\) are convex and \(A\in\mathbf{R}^{p\times n}\), has Lagrangian \[L(x,\nu,\lambda)=f_{0}(x)+\nu^{T}(Ax-b)+\lambda_{1}f_{1}(x)+\cdots+\lambda_{m}f_{m}(x),\] for \(\lambda\geq 0\) (elementwise). It is convex in \(x\) and affine (and therefore also concave) in \(y=(\nu,\lambda)\), so it is a saddle function with \[\mathcal{X}=\bigcap_{i=0,\ldots,m}\mathbf{dom}\,f_{i},\qquad\mathcal{Y}=\mathbf{R}^{p}\times\mathbf{R}^{m}_{+}.\]
* _Bi-affine function._ The function \(f(x,y)=(Ax+b)^{T}(Cy+d)\), with \(\mathcal{X}=\mathbf{R}^{p}\) and \(\mathcal{Y}=\mathbf{R}^{q}\), is evidently a saddle function. The inner product \(x^{T}y\) is a special case of a bi-affine function. For a bi-affine function, either variable can serve as the convex variable, with the other serving as the concave variable.
* _Convex-concave inner product._ The function \(f(x,y)=F(x)^{T}G(y)\), where \(F:\mathbf{R}^{p}\rightarrow\mathbf{R}^{n}\) is a nonnegative elementwise convex function and \(G:\mathbf{R}^{q}\rightarrow\mathbf{R}^{n}\) is a nonnegative elementwise concave function.
* _Weighted \(\ell_{2}\) norm._ The function \[f(x,y)=\left(\sum_{i=1}^{n}y_{i}x_{i}^{2}\right)^{1/2},\] with \(\mathcal{X}=\mathbf{R}^{n}\) and \(\mathcal{Y}=\mathbf{R}^{n}_{+}\), is a saddle function.
* _Weighted log-sum-exp._ The function \[f(x,y)=\log\left(\sum_{i=1}^{n}y_{i}\exp x_{i}\right),\] with \(\mathcal{X}=\mathbf{R}^{n}\) and \(\mathcal{Y}=\mathbf{R}^{n}_{+}\), is a saddle function.
* _Weighted geometric mean._ The function \(f(x,y)=\prod_{i=1}^{n}y_{i}^{x_{i}}\), with \(\mathcal{X}=\mathbf{R}^{n}_{+}\) and \(\mathcal{Y}=\mathbf{R}^{n}_{+}\), is a saddle function.
* _Quadratic form with quasi-semidefinite matrix._ The function \[f(x,y)=\left[\begin{array}{c}x\\ y\end{array}\right]^{T}\left[\begin{array}{cc}P&S\\ S^{T}&Q\end{array}\right]\left[\begin{array}{c}x\\ y\end{array}\right],\] where the matrix is quasi-semidefinite, _i.e._, \(P\in\mathbf{S}^{n}_{+}\) (the set of symmetric positive semidefinite matrices) and \(-Q\in\mathbf{S}^{n}_{+}\).
* _Quadratic form._ The function \(f(x,Y)=x^{T}Yx\), with \(\mathcal{X}=\mathbf{R}^{n}\) and \(\mathcal{Y}=\mathbf{S}^{n}_{+}\) (the set of symmetric positive semidefinite \(n\times n\) matrices), is a saddle function.
* As a more esoteric example, the function \(f(x,Y)=x^{T}Y^{1/2}x\), with \(\mathcal{X}=\mathbf{R}^{n}\) and \(\mathcal{Y}=\mathbf{S}^{n}_{+}\), is a saddle function.

Combination rules. Saddle functions can be combined in several ways to yield saddle functions. For example the sum of two saddle functions is a saddle function, provided the domains have nonempty intersection. A saddle function scaled by a nonnegative scalar is a saddle function. Scaling a saddle function with a nonpositive scalar, and swapping its arguments, yields a saddle function: \(g(x,y)=-f(y,x)\) is a saddle function provided \(f\) is. Saddle functions are preserved by pre-composition of the convex and concave variables with affine functions, _i.e._, if \(f\) is a saddle function, so is \(f(Ax+b,Cy+d)\). Indeed, the bi-affine function is just the inner product with an affine pre-composition for each of the convex and concave variables.
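As a preview of the implementation covered in §5, such saddle functions can be written directly with the DSP package; the following is a minimal sketch of a bi-affine matrix-game saddle function, under the assumption that the package interface matches the one described later in the paper:

```python
import cvxpy as cp
import dsp  # the open-source DSP package accompanying this paper

x = cp.Variable(2)
y = cp.Variable(2)

# Bi-affine saddle function f(x, y) = x^T C y for a matrix game.
C = [[1.0, -1.0], [-1.0, 1.0]]
f = dsp.inner(x, cp.Constant(C) @ y)

prob = dsp.SaddlePointProblem(
    dsp.MinimizeMaximize(f),
    [x >= 0, cp.sum(x) == 1, y >= 0, cp.sum(y) == 1],
)
prob.solve()  # the dualization trick is carried out automatically
print(prob.value, x.value, y.value)
```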
### Saddle point problems A _saddle point_\((x^{\star},y^{\star})\in\mathcal{X}\times\mathcal{Y}\) is any point that satisfies \[f(x^{\star},y)\leq f(x^{\star},y^{\star})\leq f(x,y^{\star})\text{ for all }x\in\mathcal{X},\ y\in\mathcal{Y}. \tag{1}\] In other words, \(x^{\star}\) minimizes \(f(x,y^{\star})\) over \(x\in\mathcal{X}\), and \(y^{\star}\) maximizes \(f(x^{\star},y)\) over \(y\in\mathcal{Y}\). The basic _saddle point problem_ is to find such a saddle point, \[\text{find }x^{\star},\ y^{\star}\text{ which satisfy (\ref{eq:saddle point problem}).} \tag{2}\] The value of the saddle point problem is \(f(x^{\star},y^{\star})\). Existence of a saddle point for a saddle function is guaranteed, provided some technical conditions hold. For example, Sion's theorem [14] guarantees the existence of a saddle point when \(\mathcal{Y}\) is compact. There are many other cases. Examples. * _Matrix game._ In a matrix game, player one chooses \(i\in\{1,\ldots,m\}\), and player two chooses \(j\in\{1,\ldots,n\}\), resulting in player one paying player two the amount \(C_{ij}\). Player one wants to minimize this payment, while player two wishes to maximize it. In a mixed strategy, player one makes choices at random, from probabilities given by \(x\) and player two makes independent choices with probabilities given by \(y\). The expected payment from player one to player two is then \(f(x,y)=x^{T}Cy\). With \(\mathcal{X}=\{x\mid x\geq 0,\ \mathbf{1}^{T}x=1\}\), and similarly for \(\mathcal{Y}\), a saddle point corresponds to an equilibrium, where no player can improve her position by changing (mixed) strategy. The saddle point problem consists of finding a stable equilibrium, _i.e._, an optimal mixed strategy for each player. * _Lagrangian._ A saddle point of a Lagrangian of a convex optimization problem is a primal-dual optimal pair for the convex optimization problem. ### Saddle extremum functions Suppose \(f\) is a saddle function. The function \(G:\mathcal{X}\to\mathbf{R}\cup\{\infty\}\) defined by \[G(x)=\sup_{y\in\mathcal{Y}}f(x,y),\quad x\in\mathcal{X}, \tag{3}\] is called a _saddle max function_. Similarly, the function \(H:\mathcal{Y}\to\mathbf{R}\cup\{-\infty\}\) defined by \[H(x)=\inf_{x\in\mathcal{X}}f(x,y),\quad y\in\mathcal{Y}, \tag{4}\] is called a _saddle min function_. Saddle max functions are convex, and saddle min functions are concave. We will use the term _saddle extremum_ (SE) functions to refer to saddle max or saddle min functions. Which is meant is clear from context, _i.e._, whether it is defined by minimization (infimum) or maximization (supremum), or its curvature (convex or concave). Note that in SE functions, we always maximize (or take supremum) over the concave variable, and minimize (or take infimum) over the convex variable. This means that evaluating \(G(x)\) os \(H(y)\) involves solving a convex optimization problem. **Examples.** * _Dual function._ Minimizing a Lagrangian \(L(x,\nu,\lambda)\) over \(x\) gives the dual function of the original convex optimization problem. * Maximizing a Lagrangian \(L(x,\nu,\lambda)\) over \(y=(\nu,\lambda)\) gives the objective function restricted to the feasible set. * _Conjugate of a convex function._ Suppose \(f\) is convex. Then \(g(x,y)=f(x)-x^{T}y\) is a saddle function, the Lagrangian of the problem of minimizing \(f\) subject to \(x=0\). Its saddle min is the negative conjugate function: \(\inf_{x}g(x,y)=-f^{\star}(y)\). 
* _Sum of \(k\) largest entries._ Consider \(f(x,y)=x^{T}y\), with \(\mathcal{Y}=\{y\mid 0\leq y\leq 1,\ \mathbf{1}^{T}y=k\}\). The associated saddle max function \(G\) is the sum of the \(k\) largest entries of \(x\).

Saddle points via SE functions. A pair \((x^{\star},y^{\star})\) is a saddle point of a saddle function \(f\) if and only if \(x^{\star}\) minimizes the convex SE function \(G\) in (3) over \(x\in\mathcal{X}\), and \(y^{\star}\) maximizes the concave SE function \(H\) defined in (4) over \(y\in\mathcal{Y}\). This means that we can find saddle points, _i.e._, solve the saddle point problem (2), by solving the convex optimization problem \[\begin{array}{ll}\text{minimize}&G(x)\\ \text{subject to}&x\in\mathcal{X},\end{array} \tag{5}\] with variable \(x\), and the convex optimization problem \[\begin{array}{ll}\text{maximize}&H(y)\\ \text{subject to}&y\in\mathcal{Y},\end{array} \tag{6}\] with variable \(y\). The problem (5) is called a minimax problem, since we are minimizing a function defined as the maximum over another variable. The problem (6) is called a maximin problem. While the minimax problem (5) and maximin problem (6) are convex, they cannot be directly solved by conventional methods, since the objectives themselves are defined by maximization and minimization, respectively. There are solution methods specifically designed for minimax and maximin problems [10, 11], but, as we will see, minimax problems involving SE functions can be transformed to equivalent forms that can be directly solved using conventional methods.

### Saddle problems In this paper we consider convex optimization problems that include SE functions in the objective or constraints, which we refer to as _saddle problems_. The convex problems (5) and (6) that solve the basic saddle point problem are special cases, where the objective is an SE function. As another example, consider the problem of minimizing a convex function \(\phi\) subject to the convex SE constraint \(G(x)\leq 0\), which can be expressed as \[\begin{array}{ll}\text{minimize}&\phi(x)\\ \text{subject to}&f(x,y)\leq 0\text{ for all }y\in\mathcal{Y},\end{array} \tag{7}\] with variable \(x\). The constraint here is called a _semi-infinite constraint_, since (when \(\mathcal{Y}\) is not a singleton) it can be thought of as an infinite collection of convex constraints, one for each \(y\in\mathcal{Y}\) [10]. Saddle problems include the minimax and maximin problems (that can be used to solve the saddle point problem), and semi-infinite problems that involve SE functions. There are many other examples of saddle problems, where SE functions can appear in expressions that define the objective and constraints.

Robust cost LP. As a more specific example of a saddle problem, consider the linear program with robust cost, \[\begin{array}{ll}\text{minimize}&\sup_{c\in\mathcal{C}}c^{T}x\\ \text{subject to}&Ax=b,\quad x\geq 0,\end{array} \tag{8}\] with variable \(x\in\mathbf{R}^{n}\), with \(\mathcal{C}=\{c\mid Fc\leq g\}\). This is an LP with worst case cost over the polyhedron \(\mathcal{C}\) [1, 2]. This is a saddle problem with convex variable \(x\), concave variable \(c\), and an objective which is a saddle max function.

### Solving saddle problems Special cases with tractable analytical expressions. There are cases where an SE function can be worked out analytically. An example is the max of a linear function over a box, \[\sup_{l\leq y\leq u}y^{T}x=(1/2)(u+l)^{T}x+(1/2)(u-l)^{T}|x|,\] where the absolute value is elementwise.
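This closed form is easy to verify numerically; a small sketch (not from the paper), using the fact that the maximizer picks \(u_i\) where \(x_i>0\) and \(l_i\) elsewhere:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
l = rng.standard_normal(5)
u = l + rng.random(5)  # ensure l <= u elementwise

brute = np.where(x > 0, u, l) @ x  # direct maximizer over the box
closed = 0.5 * (u + l) @ x + 0.5 * (u - l) @ np.abs(x)
assert np.isclose(brute, closed)
```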
We will see other cases in our examples.

Subgradient methods. We can readily compute a subgradient of a saddle max function (or a supergradient of a saddle min function), by simply maximizing over the concave variable (minimizing over the convex variable), which is itself a convex optimization problem. We can then use any method to solve the saddle problem using these subgradients, _e.g._, subgradient-type methods, the ellipsoid method, or localization methods such as the analytic center cutting plane method. In [14] such an approach is used for general minimax problems.

Methods for specific forms. Many methods have been developed for finding saddle points of saddle functions with the special form \[f(x,y)=x^{T}Ky+\phi(x)+\psi(y),\] where \(\phi\) is convex, \(\psi\) is concave, and \(K\) is a matrix [1, 2, 3, 4, 10]. Beyond this example, there are many other special forms of saddle functions, with different methods adapted to properties such as smoothness, separability, and strong-convex-strong-concavity.

### Dual reduction

A well-known trick can be used to transform a saddle point problem into an equivalent problem that does not contain SE functions. This method of transforming an inner minimization is not new; it has been used since the 1950s, when Von Neumann proved the minimax theorem using strong duality in his work with Morgenstern on game theory [14]. Using this observation, he showed that the minimax problem of a two player game is equivalent to an LP. Duality allows us to express the convex (concave) SE function as an infimum (supremum), which facilitates the use of standard convex optimization. We think of this as a reduction to an equivalent problem that removes the SE functions from the objective and constraints.

Robust cost LP. We illustrate the dualization method for the robust cost LP (8). The key is to express the robust cost or saddle max function \(\sup_{c\in\mathcal{C}}c^{T}x\) as an infimum. We first observe that this saddle max function is the optimal value of the LP \[\begin{array}{ll}\text{maximize}&x^{T}c\\ \text{subject to}&Fc\leq g,\end{array}\] with variable \(c\). Its dual is \[\begin{array}{ll}\text{minimize}&g^{T}\lambda\\ \text{subject to}&F^{T}\lambda=x,\quad\lambda\geq 0,\end{array}\] with variable \(\lambda\). Assuming that \(\mathcal{C}\) is nonempty, this dual problem has the same optimal value as the primal, _i.e._, \[\sup_{c\in\mathcal{C}}c^{T}x=\inf_{\lambda\geq 0,\,F^{T}\lambda=x}g^{T}\lambda.\] Substituting this into (8) we obtain the problem \[\begin{array}{ll}\text{minimize}&g^{T}\lambda\\ \text{subject to}&Ax=b,\quad x\geq 0,\quad F^{T}\lambda=x,\quad\lambda\geq 0,\end{array} \tag{9}\] with variables \(x\) and \(\lambda\). This simple LP is equivalent to the original robust LP (8), in the sense that if \((x^{\star},\lambda^{\star})\) is a solution of (9), then \(x^{\star}\) is a solution of the robust LP (8). We will see this dualization trick in a far more general setting in §4.
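The reduction can be checked end to end with CVXPY on synthetic data; a minimal sketch (not from the paper), where \(\mathcal{C}\) is the box \(\{c\mid l\leq c\leq u\}\) encoded as \(Fc\leq g\), so the worst case cost is \(u^{T}x\) for \(x\geq 0\):

```python
import cvxpy as cp
import numpy as np

n = 3
l, u = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 5.0])
F = np.vstack([np.eye(n), -np.eye(n)])   # Fc <= g encodes l <= c <= u
g = np.concatenate([u, -l])
A, b = np.ones((1, n)), np.array([1.0])  # simplex-style equality constraint

x = cp.Variable(n, nonneg=True)
lam = cp.Variable(2 * n, nonneg=True)

# the dualized robust LP (9): sup_{c in C} c^T x has become g^T lam
prob = cp.Problem(cp.Minimize(g @ lam), [A @ x == b, F.T @ lam == x])
prob.solve()
print(prob.value)  # equals min over the simplex of u^T x, i.e. 2.0 here
```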
## 3 Applications

In this section we describe a few applications of saddle programming.

### Robust bond portfolio construction

We describe here a simplified version of the problem described in much more detail in [10]. Our goal is to construct a portfolio of \(n\) bonds, given by its holdings vector \(h\in\mathbf{R}_{+}^{n}\), where \(h_{i}\) is the number of units of bond \(i\) held in the portfolio. Each bond produces a cash flow, _i.e._, a sequence of payments to the portfolio holder, up to some period \(T\). Let \(c_{i,t}\) be the payment from bond \(i\) in time period \(t\). Let \(y\in\mathbf{R}^{T}\) be the yield curve, which gives the time value of cash: A payment of one dollar at time \(t\) is worth \(\exp(-ty_{t})\) current dollars, assuming continuously compounded returns. The bond portfolio value, which is the present value of the total cash flow, can be expressed as \[V(h,y)=\sum_{i=1}^{n}\sum_{t=1}^{T}h_{i}c_{i,t}\exp(-ty_{t}).\] This function is convex in the yields \(y\) and concave (in fact, linear) in the holdings vector \(h\). Now suppose we do not know the yield curve, but instead have a convex set \(\mathcal{Y}\) of possible values, with \(y\in\mathcal{Y}\). The worst case value of the bond portfolio, over this set of possible yield curves, is \[V^{\text{wc}}(h)=\inf_{y\in\mathcal{Y}}V(h,y).\] We recognize this as a saddle min function. (In this application, \(y\) is the convex variable of the saddle function \(V\), whereas elsewhere in this paper we use \(y\) to denote the concave variable.) We consider a robust bond portfolio construction problem of the form \[\begin{array}{ll}\text{minimize}&\phi(h)\\ \text{subject to}&h\in\mathcal{H},\quad V^{\text{wc}}(h)\geq V^{\text{lim}},\end{array} \tag{10}\] where \(\phi\) is a convex objective, typically a measure of return and risk, \(\mathcal{H}\) is a convex set of portfolio constraints (for example, imposing \(h\geq 0\) and a total budget), and \(V^{\text{lim}}\) is a specified limit on the worst case value of the portfolio over the yield curve set \(\mathcal{Y}\). The problem (10) thus has a saddle min in a constraint. For some simple choices of \(\mathcal{Y}\) the worst case value can be found analytically. One example is when \(\mathcal{Y}\) has a maximum element. In this special case, the maximum element is the minimizer of the value over \(\mathcal{Y}\) (since \(V\) is a monotone decreasing function of \(y\)). For other cases, however, we need to solve the saddle problem (10).

### Model fitting robust to data weights

We wish to fit a model parametrized by \(\theta\in\Theta\subseteq\mathbf{R}^{n}\) to \(m\) observed data points. We do this by minimizing a weighted loss over the observed data, plus a regularizer, \[\sum_{i=1}^{m}w_{i}\ell_{i}(\theta)+r(\theta),\] where \(\ell_{i}\) is the convex loss function for observed data point \(i\), \(r\) is a convex regularizer function, and the weights \(w_{i}\) are nonnegative. The weights can be used to adjust a data sample that was not representative, as in [1], or to ignore some of the data points (by taking \(w_{i}=0\)), as in [1]. Evidently the weighted loss is a saddle function, with convex variable \(\theta\) and concave variable \(w\). We consider the case when the weights are unknown, but lie in a convex set, \(w\in\mathcal{W}\). The robust fitting problem is to choose \(\theta\) to minimize the worst case loss over the set of possible weights, plus the regularizer, \[\max_{w\in\mathcal{W}}\sum_{i=1}^{m}w_{i}\ell_{i}(\theta)+r(\theta).\] We recognize the first term, _i.e._, the worst case loss over the set of possible weights, as a saddle max function. For some simple choices of \(\mathcal{W}\) the worst case loss can be expressed analytically. For example with \[\mathcal{W}=\{w\mid 0\leq w\leq 1,\;\mathbf{1}^{T}w=k\}\] (with \(k\in[0,m]\)), the worst case loss is given by \[\max_{w\in\mathcal{W}}\sum_{i=1}^{m}w_{i}\ell_{i}(\theta)=\phi(\ell_{1},\ldots,\ell_{m}),\] where \(\phi\) is the sum-of-\(k\)-largest entries [1, §3.2.3]. (Our choice of symbol \(k\) suggests that \(k\) is an integer, but it need not be.)
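This identity can be checked numerically; a small sketch (not from the paper), comparing the LP over \(\mathcal{W}\) with a direct sort:

```python
import numpy as np
from scipy.optimize import linprog

losses = np.array([0.3, 2.0, 1.1, 0.0, 4.2])
k = 3

# maximize w @ losses over W = {0 <= w <= 1, 1^T w = k};
# linprog minimizes, so negate the objective
res = linprog(-losses, A_eq=np.ones((1, 5)), b_eq=[k], bounds=(0, 1))
assert np.isclose(-res.fun, np.sort(losses)[-k:].sum())  # 7.3 = 4.2 + 2.0 + 1.1
```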
In this case we judge the model parameter \(\theta\) by its worst loss on any subset of \(k\) data points. Put another way, we judge \(\theta\) by dropping the \(m-k\) data points on which it does best (_i.e._, has the smallest loss) [1]. CVXPY directly supports the sum-of-\(k\)-largest function, so the robust fitting problem can be formulated and solved without using DSP. To support this function, CVXPY carries out a transformation very similar to the one that DSP does. The difference is that the transformation in CVXPY is specific to this one function, whereas the one carried out in DSP is general, and would work for other convex weight sets.

### Robust production problem with worst case prices

We consider the choice of a vector of quantities \(q\in\mathcal{Q}\subseteq\mathbf{R}^{n}\). Positive entries indicate goods we buy, and negative quantities are goods we sell. The set of possible quantities \(\mathcal{Q}\) is our production set, which is convex. In addition, we have a manufacturing cost associated with the choice \(q\), given by \(\phi(q)\), where \(\phi\) is a convex function. The total cost is the manufacturing cost plus the cost of goods (which includes revenue), \(\phi(q)+p^{T}q\), where \(p\in\mathbf{R}^{n}\) is the vector of prices. We consider the situation when we do not know the prices, but we have a convex set they lie in, \(p\in\mathcal{P}\). The worst case cost of the goods is \(\max_{p\in\mathcal{P}}p^{T}q\). The robust production problem is \[\begin{array}{ll}\text{minimize}&\phi(q)+\max_{p\in\mathcal{P}}p^{T}q\\ \text{subject to}&q\in\mathcal{Q},\end{array} \tag{11}\] with variable \(q\). Here too we can work out analytical expressions for simple choices of \(\mathcal{P}\), such as a range for each component, in which case the worst case price is the upper limit for goods we buy, and the lower limit for goods we sell. In other cases, we solve the saddle problem (11).

### Robust Markowitz portfolio construction

Markowitz portfolio construction [10] chooses a set of weights (the fraction of the total portfolio value held in each asset) by solving the convex problem \[\begin{array}{ll}\mbox{maximize}&\mu^{T}w-\gamma w^{T}\Sigma w\\ \mbox{subject to}&{\bf 1}^{T}w=1,\quad w\in\mathcal{W},\end{array}\] where the variable is the vector of portfolio weights \(w\in{\bf R}^{n}\), \(\mu\in{\bf R}^{n}\) is a forecast of the asset returns, \(\gamma>0\) is the risk aversion parameter, \(\Sigma\in{\bf S}^{n}_{++}\) is a forecast of the asset return covariance matrix, and \(\mathcal{W}\) is a convex set of feasible portfolios. The objective is called the risk adjusted (mean) return. Markowitz portfolio construction is known to be fairly sensitive to the forecasts \(\mu\) and \(\Sigma\), which have to be chosen with some care; see, _e.g._, [1]. One approach is to specify a convex uncertainty set \(\mathcal{U}\) that \((\mu,\Sigma)\) must lie in, and replace the objective with its worst case (smallest) value over this uncertainty set. This gives the robust Markowitz portfolio construction problem \[\begin{array}{ll}\mbox{maximize}&\inf_{(\mu,\Sigma)\in\mathcal{U}}\left(\mu^{T}w-\gamma w^{T}\Sigma w\right)\\ \mbox{subject to}&{\bf 1}^{T}w=1,\quad w\in\mathcal{W},\end{array}\] with variable \(w\). This is described, _e.g._, in [1, 1, 1]. We observe that this is directly a saddle problem, with a saddle min objective, _i.e._, a maximin problem. For some simple versions of the problem we can work out the saddle min function explicitly.
One example, given in [1], uses \(\mathcal{U}=\mathcal{M}\times\mathcal{S}\), where \[\mathcal{M} = \{\mu+\delta\mid|\delta|\leq\rho\},\] \[\mathcal{S} = \{\Sigma+\Delta\mid\Sigma+\Delta\succeq 0,\;|\Delta_{ij}|\leq\eta(\Sigma_{ii}\Sigma_{jj})^{1/2},\;i,j=1,\ldots,n\},\] where \(\rho>0\) is a vector of uncertainties in the forecast returns, and \(\eta\in(0,1)\) is a parameter that scales the perturbation to the forecast covariance matrix. (We interpret \(\delta\) and \(\Delta\) as perturbations of the nominal mean and covariance \(\mu\) and \(\Sigma\), respectively.) We can express the worst case risk adjusted return analytically as \[\inf_{(\mu,\Sigma)\in\mathcal{U}}\left(\mu^{T}w-\gamma w^{T}\Sigma w\right)=\mu^{T}w-\gamma w^{T}\Sigma w-\rho^{T}|w|-\gamma\eta\left(\sum_{i=1}^{n}\Sigma_{ii}^{1/2}|w_{i}|\right)^{2}.\] The first two terms are the nominal risk adjusted return; the last two terms (which are nonpositive) represent the cost of uncertainty.

## 4 Disciplined saddle point programming

### Saddle function calculus

We use the notation \(\phi(x,y):\mathcal{X}\times\mathcal{Y}\subseteq\mathbf{R}^{n}\times\mathbf{R}^{m}\to\mathbf{R}\) to denote a saddle function with convex variables \(x\) and concave variables \(y\). The set of operations that, when performed on saddle functions, preserves the saddle property are called the _saddle function calculus_. The calculus is quite simple, and consists of the following operations:

1. _Conic combination of saddle functions._ Let \(\phi_{i}(x_{i},y_{i})\), \(i=1,\ldots,k\) be saddle functions. Let \(\theta_{i}\geq 0\) for each \(i\). Then the conic combination, \(\phi(x,y)=\sum_{i=1}^{k}\theta_{i}\phi_{i}(x_{i},y_{i})\), is a saddle function.
2. _Affine precomposition of saddle functions._ Let \(\phi(x,y)\) be a saddle function, with \(x\in\mathbf{R}^{n}\) and \(y\in\mathbf{R}^{m}\). Let \(A\in\mathbf{R}^{n\times q}\), \(b\in\mathbf{R}^{n}\), \(C\in\mathbf{R}^{m\times p}\), and \(d\in\mathbf{R}^{m}\). Then, with \(u\in\mathbf{R}^{q}\) and \(v\in\mathbf{R}^{p}\), the affine precomposition, \(\phi(Au+b,Cv+d)\), is a saddle function.
3. _Precomposition of saddle functions._ Let \(\phi(x,y):\mathcal{X}\times\mathcal{Y}\subseteq\mathbf{R}^{n}\times\mathbf{R}^{m}\to\mathbf{R}\) be a saddle function, with \(x\in\mathbf{R}^{n}\) and \(y\in\mathbf{R}^{m}\). The precomposition with a function \(f:\mathbf{R}^{p}\to\mathbf{R}^{n}\), \(\phi(f(u),y)\), is a saddle function if for each \(i=1,\ldots,n\) one of the following holds:
   * \(f_{i}(u)\) is convex and \(\phi\) is nondecreasing in \(x_{i}\) for all \(y\in\mathcal{Y}\) and all \(x\in\mathcal{X}\).
   * \(f_{i}(u)\) is concave and \(\phi\) is nonincreasing in \(x_{i}\) for all \(y\in\mathcal{Y}\) and all \(x\in\mathcal{X}\).

   Similarly, the precomposition with a function \(g:\mathbf{R}^{q}\to\mathbf{R}^{m}\), \(\phi(x,g(v))\), is a saddle function if for each \(j=1,\ldots,m\) one of the following holds:
   * \(g_{j}(v)\) is convex and \(\phi\) is nonincreasing in \(y_{j}\) for all \(x\in\mathcal{X}\) and all \(y\in\mathcal{Y}\).
   * \(g_{j}(v)\) is concave and \(\phi\) is nondecreasing in \(y_{j}\) for all \(x\in\mathcal{X}\) and all \(y\in\mathcal{Y}\).
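These rules can be exercised directly with the atoms introduced later in §5.1; a small sketch (not from the paper), where inner and weighted_log_sum_exp are assumed to be importable from the dsp package:

```python
import cvxpy as cp
import numpy as np
from dsp import inner, weighted_log_sum_exp  # atoms described in §5.1

x, u = cp.Variable(2), cp.Variable(2)
y = cp.Variable(2)
K = np.array([[1.0, 2.0], [3.0, 1.0]])

# rule 1: a conic combination of saddle functions is a saddle function
f = 2.0 * inner(x, K @ y) + 0.5 * weighted_log_sum_exp(x, y)

# rule 2: affine precomposition of the convex variable
g = weighted_log_sum_exp(x + 2.0 * u, y)
```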
### Conically representable saddle functions

Nemirovski and Juditsky propose a class of _conic representable_ saddle functions which facilitate the automated dualization of saddle problems [13]. We will first introduce some terminology and notation, and then describe the class of conic representable saddle functions.

Notation. We use the notation \(\phi(x,y):\mathcal{X}\times\mathcal{Y}\subseteq\mathbf{R}^{n}\times\mathbf{R}^{m}\to\mathbf{R}\) to denote a saddle function which is convex in \(x\) and concave in \(y\). Let \(K_{x}\), \(K_{y}\) and \(K\) be members of a collection \(\mathcal{K}\) of closed, convex, and pointed cones with nonempty interiors in Euclidean spaces such that \(\mathcal{K}\) contains a nonnegative ray, is closed with respect to taking finite direct products of its members, and is closed with respect to passing from a cone to its dual. We denote conic membership \(z\in K\) by \(z\succeq_{K}0\). We call a set \(\mathcal{X}\subseteq\mathbf{R}^{n}\) \(\mathcal{K}\)-representable if there exist constant matrices \(A\) and \(B\), a constant vector \(c\), and a cone \(K\in\mathcal{K}\) such that \[\mathcal{X}=\{x\mid\exists u:Ax+Bu\preceq_{K}c\}.\] CVXPY [16] can implement a function \(f\) exactly when its epigraph \(\{(x,u)\mid f(x)\leq u\}\) is \(\mathcal{K}\)-representable.

Conically representable saddle functions. Let \(\mathcal{X}\) and \(\mathcal{Y}\) be nonempty and possessing \(\mathcal{K}\)-representations \[\mathcal{X}=\{x\mid\exists u:Ax+Bu\preceq_{K}c\},\quad\mathcal{Y}=\{y\mid\exists v:Cy+Dv\preceq_{K}e\}.\] A saddle function \(\phi(x,y):\mathcal{X}\times\mathcal{Y}\rightarrow\mathbf{R}\) is \(\mathcal{K}\)-representable if there exist constant matrices \(P\), \(Q\), \(R\), constant vectors \(p\) and \(s\) and a cone \(K\in\mathcal{K}\) such that for each \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\), \[\phi(x,y)=\inf_{f,t,u}\left\{f^{T}y+t\ \middle|\ Pf+tp+Qu+Rx\preceq_{K}s\right\}.\] This definition generalizes the simple class of bilinear saddle functions. See [17] for much more detail.

Automated dualization. Suppose we have a \(\mathcal{K}\)-representable saddle function \(\phi\) as above. The power of the conic form is that the saddle extremum \[\Phi(x)=\sup_{y\in\mathcal{Y}}\phi(x,y)\] admits a tractable conic form, meaning that it can be implemented in a DSL like CVXPY. Specifically, \[\begin{array}{rcl}\Phi(x)&=&\sup_{y\in\mathcal{Y}}\phi(x,y)\\ &=&\sup_{y\in\mathcal{Y}}\,\inf_{f,t,u}\left\{f^{T}y+t\ \middle|\ Pf+tp+Qu+Rx\preceq_{K}s\right\}\\ &=&\inf_{f,t,u}\left\{\sup_{y\in\mathcal{Y}}\left(f^{T}y+t\right)\ \middle|\ Pf+tp+Qu+Rx\preceq_{K}s\right\}\qquad(12)\\ &=&\inf_{f,t,u}\left\{\inf_{\lambda}\left\{\lambda^{T}e+t\ \middle|\ \begin{array}{c}C^{T}\lambda=f,\ D^{T}\lambda=0\\ \lambda\succeq_{K^{*}}0\end{array}\right\}\ \middle|\ Pf+tp+Qu+Rx\preceq_{K}s\right\}\qquad(13)\end{array}\] where in (12) we use Sion's minimax theorem [14] to reverse the \(\inf\) and \(\sup\), and in (13) we invoke strong duality to replace the supremum over \(y\) with an infimum over \(\lambda\). The final line implies a conic representation of the epigraph of \(\Phi(x)\), \[\{(x,u)\mid\Phi(x)\leq u\}=\left\{(x,u)\ \middle|\ \exists\lambda,f,t,v:\ \begin{array}{l}\lambda^{T}e+t\leq u\\ C^{T}\lambda=f,\ D^{T}\lambda=0,\ \lambda\succeq_{K^{*}}0\\ Pf+tp+Qv+Rx\preceq_{K}s\end{array}\right\},\] which is tractable and can be implemented in a DSL like CVXPY.
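As a concrete illustration (not from the paper), the probability simplex that serves as the strategy set in the matrix game is \(\mathcal{K}\)-representable with a single nonnegative orthant cone and no auxiliary variable \(u\), writing the equality \(\mathbf{1}^{T}y=1\) as two inequalities:

\[\mathcal{Y}=\{y\mid y\geq 0,\ \mathbf{1}^{T}y=1\}=\left\{y\ \middle|\ \left[\begin{array}{c}-I\\ \mathbf{1}^{T}\\ -\mathbf{1}^{T}\end{array}\right]y\preceq_{K}\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right]\right\},\qquad K=\mathbf{R}^{n+2}_{+}.\]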
A mathematical nuance. Switching the \(\inf\) and \(\sup\) in (12) requires Sion's theorem to hold. A sufficient condition for Sion's theorem to hold is that the set \(\mathcal{Y}\) is compact. However, the \(\min\) and \(\max\) can be exchanged even if \(\mathcal{Y}\) is not compact. Then, due to the max-min inequality \[\max_{y\in\mathcal{Y}}\min_{x\in\mathcal{X}}f(x,y)\leq\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}}f(x,y),\] the equality in (12) is replaced with a less than or equal to, and we obtain a convex restriction. Thus, if a user creates a problem involving an SE function (as opposed to a saddle point problem only containing saddle functions in the objective), then DSP guarantees that the problem generated is a restriction. This means that the variables returned are feasible and the returned optimal value is an upper bound on the optimal value for the user's problem. In our implementation, a saddle problem is solved by applying the above automatic dualization to both the objective \(f\) and \(-f\) and then solving each resulting convex problem, with the latter having the role of convex and concave variables switched. We do so in order to obtain both the convex and concave components of the saddle point, since the dualization removes the concave variable. The saddle problem is only reported as solved if the optimal value of the problem with objective \(f\) is within a numerical tolerance of the negated optimal value of the problem with objective \(-f\). If this holds, this actually implies that \[\max_{y\in\mathcal{Y}}\min_{x\in\mathcal{X}}f(x,y)=\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}}f(x,y),\] _i.e._, (12) was valid, even if for example \(\mathcal{Y}\) is not compact. Thus, a user need not concern themselves with the compactness of \(\mathcal{Y}\) (or any other sufficient condition for Sion's theorem) when using DSP to find a saddle point; if a saddle point problem is solved, then the saddle point is guaranteed to exist.

## 5 Implementation

In this section we describe our open source Python implementation of the concepts and methods described in §4, which we also call DSP. DSP works with CVXPY [1], an implementation of a DSL for convex optimization based on DCP. We use the term DSP in two different ways. We use it to refer to the mathematical concept of disciplined saddle programming, and also our specific implementation; which is meant should be clear from the context. The term DSP-compliant refers to a function or expression that is constructed according to the DSP composition rules given in §5.2. It can also refer to a problem that is constructed according to these rules. In the code snippets below, we use the prefix cp to indicate functions and classes from CVXPY. (We give functions and classes from DSP without prefix, whereas they would likely have a prefix such as dsp in real code.)

### Atoms

Saddle functions in DSP are created from fundamental building blocks or atoms. These building blocks extend the atoms from CVXPY [1]. In CVXPY, atoms are either jointly convex or concave in all their variables, but in DSP, atoms are (jointly) convex in a subset of the variables and (jointly) concave in the remaining variables. We describe some DSP atoms below.

Inner product. The atom inner(x,y) represents the inner product \(x^{T}y\). Since either \(x\) or \(y\) could represent the convex variable, we adopt the convention in DSP that the first argument of inner is the convex variable. According to the DSP rules, both arguments to inner must be affine, and the variables they depend on must be disjoint.

Saddle inner product. The atom saddle_inner(F, G) corresponds to the function \(F(x)^{T}G(y)\), where \(F\) and \(G\) are vectors of nonnegative functions that are elementwise convex and concave, respectively.
It is DSP-compliant if \(F\) is DCP convex and nonnegative and \(G\) is DCP concave. If the function \(G\) is not DCP nonnegative, then the DCP constraint G >= 0 is attached to the expression. This is analogous to how the DCP constraint x >= 0 is added to the expression cp.log(x). As an example consider f = saddle_inner(cp.square(x), cp.log(y)). This represents the saddle function \[f(x,y)=x^{2}\log y-I(y\geq 1),\] where \(I\) is the \(\{0,\infty\}\) indicator function of its argument.

Weighted \(\ell_{2}\) norm. The weighted_norm2(x, y) atom represents the saddle function \(\left(\sum_{i=1}^{n}y_{i}x_{i}^{2}\right)^{1/2}\), with \(y\geq 0\). It is DSP-compliant if x is either DCP affine or both convex and nonnegative, and \(y\) is DCP concave. Here too, the constraint y >= 0 is added if \(y\) is not DCP nonnegative.

Weighted log-sum-exp. The weighted_log_sum_exp(x, y) atom represents the saddle function \(\log\left(\sum_{i=1}^{n}y_{i}\exp x_{i}\right)\), with \(y\geq 0\). It is DSP-compliant if x is DCP convex, and \(y\) is DCP concave. The constraint y >= 0 is added if \(y\) is not DCP nonnegative.

Quasi-semidefinite quadratic form. The quasidef_quad_form(x, y, P, Q, S) atom represents the function \[f(x,y)=\left[\begin{array}{c}x\\ y\end{array}\right]^{T}\left[\begin{array}{cc}P&S\\ S^{T}&Q\end{array}\right]\left[\begin{array}{c}x\\ y\end{array}\right],\] where the matrix is quasi-semidefinite, _i.e._, \(P\in\mathbf{S}_{+}^{n}\) and \(-Q\in\mathbf{S}_{+}^{n}\). It is DSP-compliant if \(x\) is DCP affine and \(y\) is DCP affine.

Quadratic form. The saddle_quad_form(x, Y) atom represents the function \(x^{T}Yx\), where \(Y\) is a PSD matrix. It is DSP-compliant if \(x\) is DCP affine, and \(Y\) is DCP PSD.

### Calculus rules

The atoms can be combined according to the calculus described below to form expressions that are DSP-compliant. For example, saddle functions can be added or scaled. DCP-compliant convex and concave expressions are promoted to saddle functions with no concave or convex variables, respectively. For example, with variables x, y, and z, the expression f = 2.5 * saddle_inner(cp.square(x), cp.log(y)) + cp.minimum(y,1) - z is DSP-compliant, with convex variable x, concave variable y, and affine variable z. Calling the is_dsp method of an expression returns True if the expression is DSP-compliant. The methods convex_variables, concave_variables, and affine_variables list the convex, concave, and affine variables, respectively. The convex variables are those that could only be convex, and similarly for concave variables. We refer to the convex variables as the unambiguously convex variables, and similarly for the concave variables. The three lists of variables give a partition of all the variables the expression depends on. For the expression above, f.is_dsp() evaluates as True, f.convex_variables() returns the list [x], f.concave_variables() returns the list [y], and f.affine_variables() returns the list [z]. Note that the role of z is ambiguous in the expression, since it could be either a convex or concave variable.

No mixing variables rule. The DSP rules prohibit mixing of convex and concave variables. For example if we add two saddle expressions, no variable can appear in both its convex and concave variable lists.
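The inspection methods just described can be exercised on the example expression; a small sketch (not from the paper):

```python
import cvxpy as cp
from dsp import saddle_inner  # DSP atom described above

x, y, z = cp.Variable(), cp.Variable(), cp.Variable()
f = 2.5 * saddle_inner(cp.square(x), cp.log(y)) + cp.minimum(y, 1) - z

f.is_dsp()               # True
f.convex_variables()     # [x]
f.concave_variables()    # [y]
f.affine_variables()     # [z]
```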
DSP-compliance is sufficient but not necessary to be a saddle function. Recall that if an expression is DCP convex (concave), then it is convex (concave), but the converse is false. For example, the expression cp.sqrt(1 + cp.square(x)) represents the convex function \(\sqrt{1+x^{2}}\), but is not DCP. But we can express the same function as cp.norm2(cp.hstack([1, x])), which is DCP. The same holds for DSP and saddle functions: If an expression is DSP-compliant, then it represents a saddle function; but it can represent a saddle function and not be DSP-compliant. As with DCP, such an expression would need to be rewritten in DSP-compliant form, to use any of the other features of DSP (such as a solution method). As an example, the expression x.T @ C @ y represents the saddle function \(x^{T}Cy\), but is not DSP-compliant. The same function can be expressed as inner(x, C @ y), which is DSP-compliant. When there are affine variables in a DSP-compliant expression, it means that those variables could be considered either convex or concave; either way, the function is a saddle function.

Example. The code below defines the bi-linear saddle function \(f(x,y)=x^{T}Cy\), the objective of a matrix game, with \(x\) the convex variable and \(y\) the concave variable. Creating a saddle function.

```
 1 from dsp import *  # notational convenience
 2 import cvxpy as cp
 3 import numpy as np
 4
 5 x = cp.Variable(2)
 6 y = cp.Variable(2)
 7 C = np.array([[1, 2], [3, 1]])
 8
 9 f = inner(x, C @ y)
10
11 f.is_dsp()  # True
12
13 f.convex_variables()   # [x]
14 f.concave_variables()  # [y]
15 f.affine_variables()   # []
```

Lines 1-3 import the necessary packages (which we will use but not show in the sequel). In lines 5-7, we create two CVXPY variables and a constant matrix. In line 9 we construct the saddle function f using the DSP atom inner. Both its arguments are affine, so this matches the DSP rules. In line 11 we check whether f is DSP-compliant, which it is. In lines 13-15 we call functions that return lists of the convex, concave, and affine variables, respectively. The results of lines 13-15 might seem odd, but recall that inner marks its first argument as convex and its second as concave.

### Saddle point problems

Saddle point problem objective. To construct a saddle point problem, we first create an objective using

```
obj = MinimizeMaximize(f)
```

where f is a CVXPY expression. The objective obj is DSP-compliant if the expression f is DSP-compliant. This is analogous to the CVXPY constructors cp.Minimize(f) and cp.Maximize(f), which create objectives from expressions.

Saddle point problem. A saddle point problem is constructed using

```
prob = SaddlePointProblem(obj, constraints, cvx_vars, ccv_vars)
```

Here, obj is a MinimizeMaximize objective, constraints is a list of constraints, cvx_vars is a list of convex variables and ccv_vars is a list of concave variables. The objective must be DSP-compliant for the problem to be DSP-compliant. We now describe the remaining conditions under which the constructed problem is DSP-compliant. Each constraint in the list must be DCP, and can only involve convex variables or concave variables; convex and concave variables cannot both appear in any one constraint. The list of convex and concave variables partitions all the variables that appear in the objective or the constraints. In cases where the role of a variable is unambiguous, it is inferred, and does not need to be in either list. For example with the objective MinimizeMaximize(weighted_log_sum_exp(x, y) + cp.exp(u) + cp.log(v) + z), x and u must be convex variables, and y and v must be concave variables, and so do not need to appear in the lists used to construct a saddle point problem.
The variable z, however, could be either a convex or concave variable, and so must appear in one of the lists. The role of a variable can also be inferred from the constraints: Any variable that appears in a constraint with convex (concave) variables must also be convex (concave). With the objective above, the constraint z + v <= 1 would serve to classify z as a concave variable. With this constraint, we could pass empty variable lists to the saddle point constructor, since the roles of all variables can be inferred. When the roles of all variables are unambiguous, the lists are optional. The roles of the variables in a saddle point problem prob can be found by calling prob.convex_variables() and prob.concave_variables(), which return lists of variables that partition all the variables appearing in the objective or constraints. This is useful for debugging, to be sure that DSP agrees with you about the roles of all variables. A DSP-compliant saddle point problem must have an empty list of affine variables. (If it did not, the problem would be ambiguous.)

Solving a saddle point problem. The solve() method of a SaddlePointProblem object solves the problem. The solve method returns the optimal saddle value, _i.e._, the value of the objective at the saddle point. As in CVXPY, the solve method has the side effect of writing all variables' .value attributes.

Example. Here we create and solve a matrix game, continuing the example above where f was defined. We do not need to pass in lists of variables since their roles can be inferred. Creating and solving a matrix game.

```
 1 obj = MinimizeMaximize(f)
 2 constraints = [x >= 0, cp.sum(x) == 1, y >= 0, cp.sum(y) == 1]
 3 prob = SaddlePointProblem(obj, constraints)
 4
 5 prob.is_dsp()  # True
 6 prob.convex_variables()   # [x]
 7 prob.concave_variables()  # [y]
 8 prob.affine_variables()   # []
 9
10 prob.solve()  # solves the problem
11 prob.value  # 1.6666666666666667
12 x.value  # array([0.66666667, 0.33333333])
13 y.value  # array([0.33333333, 0.66666667])
```

### Saddle extremum functions

Local variables. An SE function has one of the forms \[G(x)=\sup_{y\in\mathcal{Y}}f(x,y)\quad\text{or}\quad H(y)=\inf_{x\in\mathcal{X}}f(x,y),\] where \(f\) is a saddle function. Note that \(y\) in the definition of \(G\), and \(x\) in the definition of \(H\), are local or dummy variables, understood to have no connection to any other variable. Their scope extends only to the definition, and not beyond. To express this subtlety in DSP, we use the class LocalVariable to represent these dummy variables. The variables that are maximized over (in a saddle max function) or minimized over (in a saddle min function) must be declared using the LocalVariable() constructor. Any LocalVariable in an SE function cannot appear in any other SE function.

Constructing SE functions. We construct SE functions in DSP using saddle_max(f, constraints) or saddle_min(f, constraints). Here, f is a CVXPY scalar expression, and constraints is a list of constraints. We now describe the rules for constructing a DSP-compliant SE function. If a saddle_max is being constructed, f must be DSP-compliant, and the function's concave variables, and all variables appearing in the list of constraints, must be LocalVariables, while the function's convex variables must all be regular Variables. A similar rule applies for saddle_min. The list of constraints is used to specify the set over which the sup or inf is taken. Each constraint must be DCP-compliant, and can only contain LocalVariables.
With x a Variable, y_loc a LocalVariable, z_loc a LocalVariable, and z a Variable, consider the following two SE functions:

```
f_1 = saddle_max(inner(x, y_loc) + z, [y_loc <= 1])
f_2 = saddle_max(inner(x, y_loc) + z_loc, [y_loc <= 1, z_loc <= 1])
```

Both are DSP-compliant. For the first, calling f_1.convex_variables() would return [x, z], and calling f_1.concave_variables() would return [y_loc]. For the second, calling f_2.convex_variables() would return [x], and f_2.concave_variables() would return [y_loc, z_loc]. Let y be a Variable. Both of the following are not DSP-compliant:

```
f_3 = saddle_max(inner(x, y_loc) + z, [y_loc <= 1, z <= 1])
f_4 = saddle_max(inner(x, y) + z_loc, [y_loc <= 1, z_loc <= 1])
```

The first is not DSP-compliant because z is not a LocalVariable, but appears in the constraints. The second is not DSP-compliant because y is not a LocalVariable, but appears as a concave variable in the saddle function.

SE functions are DCP. When they are DSP-compliant, a saddle_max is a convex function, and a saddle_min is a concave function. They can be used anywhere in CVXPY that a convex or concave function is appropriate. You can add them, compose them (in appropriate ways), use them in the objective or either side of constraints (in appropriate ways).

Examples. Now we provide full examples demonstrating construction of a saddle_max, which we can use to solve the matrix game described in §5.3 as a saddle problem involving an SE function. Creating a saddle max.

```
# Creating variables
x = cp.Variable(2)

# Creating local variables
y_loc = LocalVariable(2)

# Convex in x, concave in y_loc
f = saddle_inner(C @ x, y_loc)

# maximizes over y_loc
G = saddle_max(f, [y_loc >= 0, cp.sum(y_loc) == 1])
```

Note that G is a CVXPY expression. Constructing a saddle_min works exactly the same way.

### Saddle problems

A saddle problem is a convex problem that uses SE functions. To be DSP-compliant, the problem must be DCP (which implies all SE functions are DSP-compliant). When you call the solve method on a saddle problem involving SE functions, and the solve is successful, then all variables' .value fields are overwritten with optimal values. This includes LocalVariables that the SE functions maximized or minimized over; they are assigned to the value of _a particular_ maximizer or minimizer of the SE function at the value of the non-local variables, with no further guarantees.

Example. We continue our example from §5.4 and solve the matrix game using the saddle max G. Creating and solving a saddle problem using a saddle max to solve the matrix game.

```
prob = cp.Problem(cp.Minimize(G), [x >= 0, cp.sum(x) == 1])

prob.is_dsp()  # True

prob.solve()  # solves the problem
prob.value  # 1.6666666666666667
x.value  # array([0.66666667, 0.33333333])
```
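Because G is a DCP-convex expression, it can also appear in a constraint rather than the objective; a small sketch (not from the paper), bounding the worst-case payoff instead of minimizing it:

```python
# bound the worst-case expected payment while minimizing another objective
prob2 = cp.Problem(cp.Minimize(cp.sum_squares(x)),
                   [G <= 2, x >= 0, cp.sum(x) == 1])
prob2.solve()  # feasible, since the game value 5/3 is below the bound of 2
```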
## 6 Examples

In this section we give numerical examples, taken from §3, showing how to create DSP-compliant problems. The specific problem instances we take are small, since our main point is to show how easily the problems can be specified in DSP. But DSP will scale to far larger problem instances.

### Robust bond portfolio construction

Our first example is the robust bond portfolio construction problem described in §3.1. We consider portfolios of \(n=20\) bonds, over a period \(T=60\) half-years, _i.e._, 30 years. The bonds are taken as representative ones in a global investment grade bond portfolio; for more detail, see [10]. The payments from the bonds are given by \(C\in\mathbf{R}^{20\times 60}\), with cash flow of bond \(i\) in period \(t\) denoted \(c_{i,t}\). The portfolio constraint set \(\mathcal{H}\) is given by \[\mathcal{H}=\{h\mid h\geq 0,\;p^{T}h=B\},\] _i.e._, the investments must be nonnegative and have a total value (budget) \(B\), which we take to be $100. Here \(p\in\mathbf{R}^{20}_{+}\) denotes the price of the bonds on September 12, 2022. The portfolio objective is \[\phi(h)=\frac{1}{2}\|(h-h^{\mathrm{mkt}})\circ p\|_{1},\] where \(h^{\mathrm{mkt}}\) is the market portfolio scaled to a value of $100, and \(\circ\) denotes Hadamard or elementwise multiplication. This is called the turn-over distance, since it tells us how much we would need to buy and sell to convert our portfolio to the market portfolio. The yield curve set \(\mathcal{Y}\) is described in terms of perturbations to the nominal or current yield curve \(y^{\mathrm{nom}}\), which is the yield curve on September 12, 2022. We take \[\mathcal{Y}=\left\{y^{\mathrm{nom}}+\delta\;\left|\;\|\delta\|_{\infty}\leq\delta^{\mathrm{max}},\;\|\delta\|_{1}\leq\kappa,\;\sum_{t=1}^{T-1}(\delta_{t+1}-\delta_{t})^{2}\leq\omega\right.\right\}.\] We interpret \(\delta\) as a shock to the yield curve, which we limit elementwise, in absolute sum, and in smoothness. The specific parameter values are given by \[\delta^{\mathrm{max}}=0.02,\quad\kappa=0.9,\quad\omega=10^{-6}.\] In the robust bond portfolio problem (10) we take \(V^{\rm lim}=90\), that is, the worst case value of the portfolio cannot drop below $90 for any \(y\in\mathcal{Y}\). We solve the problem using the following code, where we assume the cash flow matrix C, the price vector p, the nominal yield curve y_nom, and the market portfolio h_mkt are defined. Robust bond portfolio construction.

```
 1 # Constants and parameters
 2 n, T = C.shape
 3 delta_max, kappa, omega = 0.02, 0.9, 1e-6
 4 B = 100
 5 V_lim = 90
 6
 7 # Creating variables
 8 h = cp.Variable(n, nonneg=True)
 9
10 delta = LocalVariable(T)
11 y = y_nom + delta
12
13 # Objective
14 phi = 0.5 * cp.norm1(cp.multiply(h, p) - cp.multiply(h_mkt, p))
15
16 # Creating saddle min function
17 V = 0
18 for i in range(n):
19     t_plus_1 = np.arange(T) + 1  # Account for zero-indexing
20     V += saddle_inner(cp.exp(cp.multiply(-t_plus_1, y)), h[i] * C[i])
21
22 Y = [
23     cp.norm_inf(delta) <= delta_max,
24     cp.norm1(delta) <= kappa,
25     cp.sum_squares(delta[1:] - delta[:-1]) <= omega,
26 ]
27
28 V_wc = saddle_min(V, Y)
29
30 # Creating and solving the problem
31 problem = cp.Problem(cp.Minimize(phi), [h @ p == B, V_wc >= V_lim])
32 problem.solve()  # 15.32
```

We first define the constants and parameters in lines 2-5, before creating the variable for the holdings h in line 8, and the LocalVariable delta, which gives the yield curve perturbation, in line 10. In line 11 we define y as the sum of the current yield curve y_nom and the perturbation delta. The objective function is defined in line 14. Lines 17-20 define the saddle function V via the saddle_inner atom. The yield uncertainty set Y is defined in lines 22-26, and the worst case portfolio value is defined in line 28 using saddle_min. We use the concave expression V_wc to create and solve a CVXPY problem in lines 31-32.
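After the solve, the LocalVariable delta holds a particular worst-case minimizer, as described in §5.5; a small sketch of how the result might be inspected (not from the paper):

```python
# a particular adversarial yield-curve shock chosen by DSP
y_wc = y_nom + delta.value
h.value        # the robust holdings
problem.value  # 15.32, the turn-over distance of the robust portfolio
```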
Table 1 summarizes the results. The nominal portfolio is the market portfolio, which has zero turn-over distance to the market portfolio, _i.e._, zero objective value. This nominal portfolio, however, does not satisfy the worst-case portfolio value constraint, since there are yield curves in \(\mathcal{Y}\) that cause the portfolio value to drop to around $87, less than our limit of $90. The solution of the robust problem has turn-over distance $15.32, and satisfies the constraint that the worst-case value be at least $90.

\begin{table} \begin{tabular}{l c c} & Nominal portfolio & Robust portfolio \\ \hline Turn-over distance & \$0.00 & \$15.32 \\ Worst-case value & \$86.99 & \$90.00 \\ \hline \end{tabular} \end{table} Table 1: Turn-over distance and worst-case value for the nominal (market) portfolio and the robust portfolio. The nominal portfolio does not meet our requirement that the worst-case value be at least $90.

### Model fitting robust to data weights

We consider an instance of the model fitting problem described in §3.2. We use the well known Titanic data set [10], which gives several attributes for each passenger on the ill-fated Titanic voyage, including whether they survived. A classifier is fit to predict survival based on the features sex, age (binned into three groups, 0-26, 26-53, and 53-80), and class (1, 2, or 3). These features are encoded as a Boolean vector \(a_{i}\in\mathbf{R}^{7}\). The label \(y_{i}=1\) means passenger \(i\) survived, and \(y_{i}=-1\) otherwise. There are 1046 examples, but we fit our model using only the \(m=50\) passengers who embarked from Queenstown, one of three ports of embarkation. This is a somewhat non-representative sample; for example, the survival rate among Queenstown departures is 26%, whereas the overall survival rate is 40.8%. We seek a linear classifier \(\hat{y}_{i}=\mathbf{sign}(a_{i}^{T}\theta+\beta_{0})\), where \(\theta\in\mathbf{R}^{7}\) is the classifier parameter vector and \(\beta_{0}\in\mathbf{R}\) is the bias. The hinge loss and \(\ell_{2}\) regularization are used, given by \[\ell_{i}(\theta)=\max(0,1-y_{i}a_{i}^{T}\theta),\qquad r(\theta)=\eta\|\theta\|_{2}^{2},\] with \(\eta=0.05\). The data is weighted to partially correct for the different survival rates for our training set (26%) and the whole data set (40.8%). To do this we set \(w_{i}=z_{1}\) when \(y_{i}=1\) and \(w_{i}=z_{2}\) when \(y_{i}=-1\). We require \(w\geq 0\) and \(\mathbf{1}^{T}w=1\), and \[0.408-0.05\leq\sum_{y_{i}=1}w_{i}\leq 0.408+0.05.\] Thus \(\mathcal{W}\) consists of weights on the Queenstown departure samples that correct the survival rate to within 5% of the overall survival rate. The code shown below solves this problem, where we assume the data matrix is already defined as A_train (with rows \(a_{i}^{T}\)), the survival label vector is defined as y_train, and the indicator of survival in the training set is defined as surv. Model fitting robust to data weights.

```
 1 # Constants and parameters
 2 m, n = A_train.shape
 3 inds_0 = surv == 0
 4 inds_1 = surv == 1
 5 eta = 0.05
 6
 7 # Creating variables
 8 theta = cp.Variable(n)
 9 beta_0 = cp.Variable()
10 weights = cp.Variable(m, nonneg=True)
11 surv_weight_0 = cp.Variable()
12 surv_weight_1 = cp.Variable()
13
14 # Defining the loss function and the weight constraints
15 y_hat = A_train @ theta + beta_0
16 loss = cp.pos(1 - cp.multiply(y_train, y_hat))
17 objective = MinimizeMaximize(saddle_inner(loss, weights)
18                              + eta * cp.sum_squares(theta))
19
20 constraints = [
21     cp.sum(weights) == 1,
22     0.408 - 0.05 <= weights @ surv,
23     weights @ surv <= 0.408 + 0.05,
24     weights[inds_0] == surv_weight_0,
25     weights[inds_1] == surv_weight_1,
26 ]
27
28 # Creating and solving the problem
29 problem = SaddlePointProblem(objective, constraints)
30 problem.solve()
```

After defining the constants and parameters in lines 2-5, we specify the variables for the model coefficient and the weights in lines 8-9 and 10-12, respectively. The loss function and regularizer which make up the objective are defined next in lines 15-18. The weight constraints are defined in lines 20-26. The saddle point problem is created and solved in lines 29 and 30.
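Once solved, the classifier can be evaluated directly from the variable values; a minimal sketch (not from the paper), assuming held-out arrays A_test and y_test:

```python
theta_v, b0 = theta.value, beta_0.value
train_acc = np.mean(np.sign(A_train @ theta_v + b0) == y_train)
test_acc = np.mean(np.sign(A_test @ theta_v + b0) == y_test)
```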
The results are shown in Table 2. We report the test accuracy on all samples in the dataset with a different port of embarkation than Queenstown (996 samples). We see that while the robust classification model has slightly lower training accuracy than the nominal model, it achieves a higher test accuracy, generalizing from the non-representative training data better than the nominal classifier, which uses uniform weights.

\begin{table} \begin{tabular}{c c c} & Nominal classifier & Robust classifier \\ \hline Train accuracy & 82.0\% & 80.0\% \\ Test accuracy & 76.0\% & 78.6\% \\ \hline \end{tabular} \end{table} Table 2: Train and test accuracy for the nominal and robust classification models.

### Robust Markowitz portfolio construction

We consider the robust Markowitz portfolio construction problem described in §3.4. We take \(n=6\) assets, which are the (five) Fama-French factors [14] plus a risk-free asset. The data is obtained from the Kenneth R. French data library [14], with monthly return data available from July 1963 to October 2022. The nominal return and risk are the empirical mean and covariance of the returns. (These obviously involve look-ahead, but the point of the example is how to specify and solve the problem with DSP, not the construction of a real portfolio.) We take parameters \(\rho=0.02\), \(\eta=0.2\), and risk aversion parameter \(\gamma=1\). In the code, we use mu and Sigma for the mean and covariance estimates, respectively, and the parameters are denoted rho, eta, and gamma. Robust Markowitz portfolio construction.

```
# Constants and parameters
n = len(mu)
rho, eta, gamma = 0.02, 0.2, 1

# Creating variables
w = cp.Variable(n, nonneg=True)

delta_loc = LocalVariable(n)
Sigma_perturbed = LocalVariable((n, n), PSD=True)
Delta_loc = LocalVariable((n, n))

# Creating saddle min function
f = w @ mu + saddle_inner(delta_loc, w) \
    - gamma * saddle_quad_form(w, Sigma_perturbed)

Sigma_diag = Sigma.diagonal()
```
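The remainder of the listing is garbled in this copy; only the opening of the constraint list, local_constraints = [, survives. What follows is a minimal sketch of how the example plausibly continues, reconstructed from the uncertainty set \(\mathcal{U}\) defined in §3.4; the exact constraint expressions and the final problem lines are assumptions, not the authors' code:

```python
local_constraints = [
    cp.abs(delta_loc) <= rho,              # |delta| <= rho (assumed)
    Sigma_perturbed == Sigma + Delta_loc,  # perturbed covariance (assumed)
    cp.abs(Delta_loc)
    <= eta * np.sqrt(np.outer(Sigma_diag, Sigma_diag)),  # |Delta_ij| bound (assumed)
]

# worst-case risk adjusted return; Sigma_perturbed is PSD by construction
f_wc = saddle_min(f, local_constraints)

# Creating and solving the problem (assumed)
problem = cp.Problem(cp.Maximize(f_wc), [cp.sum(w) == 1])
problem.solve()
```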
We consider convex-concave saddle point problems, and more generally convex optimization problems, which we refer to as saddle problems, that include the partial supremum or infimum of convex-concave saddle functions. Saddle problems arise in a broad range of fields, including game theory, machine learning, and finance. A saddle problem can be reduced to a single convex optimization problem by dualizing either the convex (minimizing) or the concave (maximizing) objective; carried out by hand, this transformation is tedious and error-prone. In this paper we introduce disciplined saddle programming (DSP), a domain-specific language (DSL) for specifying saddle problems. The language and methods are based on recent work by Juditsky and Nemirovski (arXiv:2102.01002 [math.OC]), who developed a way of representing saddle point programs via linear (conic) representations.
2309.17063
GateSeeder: Near-memory CPU-FPGA Acceleration of Short and Long Read Mapping
Motivation: Read mapping is a computationally expensive process and a major bottleneck in genomics analyses. The performance of read mapping is mainly limited by the performance of three key computational steps: Index Querying, Seed Chaining, and Sequence Alignment. The first step is dominated by how fast and frequent it accesses the main memory (i.e., memory-bound), while the latter two steps are dominated by how fast the CPU can compute their computationally-costly dynamic programming algorithms (i.e., compute-bound). Accelerating these three steps by exploiting new algorithms and new hardware devices is essential to accelerate most genome analysis pipelines that widely use read mapping. Given the large body of work on accelerating Sequence Alignment, this work focuses on significantly improving the remaining steps. Results: We introduce GateSeeder, the first CPU-FPGA-based near-memory acceleration of both short and long read mapping. GateSeeder exploits near-memory computation capability provided by modern FPGAs that couple a reconfigurable compute fabric with high-bandwidth memory (HBM) to overcome the memory-bound and compute-bound bottlenecks. GateSeeder also introduces a new lightweight algorithm for finding the potential matching segment pairs. Using real ONT, HiFi, and Illumina sequences, we experimentally demonstrate that GateSeeder outperforms Minimap2, without performing sequence alignment, by up to 40.3x, 4.8x, and 2.3x, respectively. When performing read mapping with sequence alignment, GateSeeder outperforms Minimap2 by 1.15-4.33x (using KSW2) and by 1.97-13.63x (using WFA-GPU). Availability: https://github.com/CMU-SAFARI/GateSeeder
Julien Eudine, Mohammed Alser, Gagandeep Singh, Can Alkan, Onur Mutlu
2023-09-29T08:49:44
http://arxiv.org/abs/2309.17063v1
# GateSeeder: Near-memory CPU-FPGA Acceleration of Short and Long Read Mapping ###### Abstract **Motivation:** Read mapping is a computationally expensive process and a major bottleneck in genomics analyses. The performance of read mapping is mainly limited by the performance of three key computational steps: Index Querying, Seed Chaining, and Sequence Alignment. The first step is dominated by how fast and frequent it accesses the main memory (i.e., memory-bound), while the latter two steps are dominated by how fast the CPU can compute their computationally-costly dynamic programming algorithms (i.e., compute-bound). Accelerating these three steps by exploiting new algorithms and new hardware devices is essential to accelerate most genome analysis pipelines that widely use read mapping. Given the large body of work on accelerating Sequence Alignment, this work focuses on significantly improving the remaining steps. **Results:** We introduce _GateSeeder_, the _first_ CPU-FPGA-based near-memory acceleration of both short and long read mapping. GateSeeder exploits near-memory computation capability provided by modern FPGAs that couple a reconfigurable compute fabric with high-bandwidth memory (HBM) to overcome the memory-bound and compute-bound bottlenecks. GateSeeder also introduces a new lightweight algorithm for finding the potential matching segment pairs. Using real ONT, HiFi, and Illumina sequences, we experimentally demonstrate that GateSeeder outperforms Minimap2, without performing sequence alignment, by up to 40.3\(\times\), 4.8\(\times\), and 2.3\(\times\), respectively. When performing read mapping with sequence alignment, GateSeeder outperforms Minimap2 by 1.15-4.33\(\times\) (using KSW2) and by 1.97-13.63\(\times\) (using WFA-GPU). **Availability:** [https://github.com/CMU-SAFARI/GateSeeder](https://github.com/CMU-SAFARI/GateSeeder) **Contact:** julien@eudine.fr, mealser@gmail.com, omutlu@ethz.ch **Supplementary information:** Supplementary data are available at _Bioinformatics_ online. ## 1 Introduction Read mapping is the first fundamental step in most genomic analyses [1; 2; 3; 4; 5; 6; 7; 8]. Read mapping compares fragments (known as _reads_) of an organism's genome generated by a sequencing machine against a well-studied reference genome. The main goal of read mapping is to locate each read sequence in a reference genome, attempting to reassemble the reads back into their entire genome sequence. Read mapping remains one of the major performance bottlenecks in many genomic analyses for the three prominent sequencing technologies, Oxford Nanopore Technologies (ONT), PacBio HiFi, and Illumina [9; 10]. This is true even for the widely-used, well-maintained, state-of-the-art read mapper for modern CPUs, Minimap2 [11]. To understand the reasons behind read mapping's large performance overhead, we first briefly describe the workflow of Minimap2 in five key steps: 1) _Index Construction_, 2) _Seed Extraction_, 3) _Index Querying_, 4) _Anchor Sorting_, and 5) _Seed Chaining_.
Modern FPGAs that couple a reconfigurable compute fabric with high-bandwidth memory (HBM) offer higher memory density close to the computation fabric, an order of magnitude more memory bandwidth, and much lower latency to access stored data compared to traditional off-chip DRAM devices. However, exploiting such modern FPGAs requires building an efficient hardware architecture that handles the desired operations using only the operations supported by the FPGA logic. Such modern FPGAs have already proven beneficial for sequence alignment [19] and pre-alignment filtering [20]. However, this new technology has not yet been exploited for performing complete read mapping. To this end, we introduce _GateSeeder_, the first near-memory CPU-FPGA co-design for alleviating both the compute-bound and memory-bound bottlenecks in short and long read mapping.
GateSeeder is based on three **key ideas**: (1) We observe that potential mapping locations always have the largest number of seed matches compared to other locations in the reference genome due to high similarity with a given read. GateSeeder exploits this observation and proposes a new computational step with a linear time complexity in the number of seed matches that finds the potential matching segment pairs based on the highest number of seed matches scattered around a region in the reference genome. We call this approach _Seed Voting_. (2) GateSeeder builds two new hardware architectures for performing _Seed Extraction_ and _Index Querying_ using modern FPGAs with HBM. Although _Seed Extraction_ is not memory-bound, it provides the input queries that are used for querying the index. Thus, minimizing the overall latency requires accommodating both steps, _Seed Extraction_ and _Index Querying_, within the same FPGA chip. (3) GateSeeder introduces the first HBM-friendly hash table that is specially designed to exploit the access parallelism provided by modern FPGAs for _fully_ maximizing the querying throughput. We carefully orchestrate execution on the CPU and the FPGA to hide data transfer latency and increase parallelism. GateSeeder takes reads in FASTQ format and a reference genome (or a precomputed index) in FASTA format and outputs mapping information in PAF format. We summarize the **contributions** of this paper as follows: * We introduce GateSeeder, the first software/hardware co-designed read mapper that exploits modern FPGAs featuring high bandwidth memory (HBM). GateSeeder is fully synthesizable, open-source, and ready to be used on real hardware. * We provide, to our knowledge, the first FPGA accelerator for _Seed Extraction_ and _Index Querying_ for both short and long read mapping. * We propose a new, efficient voting algorithm that replaces the compute-bound seed chaining algorithm while maintaining good accuracy. * We experimentally demonstrate, using real ONT, HiFi, and Illumina sequences, that GateSeeder outperforms Minimap2 by up to 40.3x, 4.8x, and 2.3x, respectively, when mapping the reads against the entire human reference genome. When performing read mapping with sequence alignment, GateSeeder outperforms Minimap2 by 1.15-4.33x (using KSW2) and by 1.97-13.63x (using WFA-GPU).

Figure 1: (a) Roofline model, and (b) execution time breakdown for the four key steps of the state-of-the-art read mapper, Minimap2, when mapping ONT and Illumina reads against the human reference genome (GRCh38). We use 12, 24, and 48 CPU threads on two Intel Xeon Gold 5118 CPUs.

## 2 Methods ### Overview Fig. 2 shows the overview of GateSeeder, a CPU-FPGA co-design for accelerating read mapping. The pipeline can be divided into 7 stages: 1) _Index Construction_, 2) _Read Parsing_, 3) _Seed Extraction_, 4) _Index Querying_, 5) _Location Adjustment_, 6) _Anchor Sorting_, and 7) _Mapping Location Voting_. We explain each step in detail in the next subsections. Stages 1, 2, 6, and 7 are performed on the host CPU, as they better suit general-purpose CPUs and better benefit from CPU multithreading. Stages 3, 4, and 5 are performed on the FPGA featuring HBM, as they better suit near-data FPGA acceleration. **GateSeeder efficiently uses both a host CPU and a modern FPGA in order to enable four different levels of parallelism**. First, the host CPU and the FPGA kernels work concurrently.
The host CPU launches the FPGA kernels asynchronously, such that the host CPU continues executing other stages of GateSeeder (i.e., stages 6 and 7) without the need to wait for the FPGA. Second, GateSeeder exploits CPU multithreading for faster execution. GateSeeder allocates the available CPU threads (a user-defined parameter, \(N\)) and efficiently manages the tasks assigned to each CPU thread via the thread-pool design pattern [21]. Our thread-pool software design does not limit each CPU thread to processing a single read. It rather keeps each CPU thread busy with any remaining task for any available read. This achieves high allocation efficiency and optimized concurrent execution. The CPU threads are orchestrated such that the following five different tasks are quickly applied to each read sequence: (1) parsing the read sequences of a given FASTQ file using stage 2, (2) transferring the parsed read sequences in batches from the host CPU to the FPGA, (3) launching an FPGA kernel that executes stages 3, 4, and 5, (4) transferring the calculated anchors from the FPGA to the CPU, and (5) sorting the anchors using stage 6, performing _Mapping Location Voting_ using stage 7, and writing the mapping results in PAF format. Third, by carefully building an efficient hardware architecture as a Processing Element (PE) for performing stages 3, 4, and 5 on an FPGA chip, GateSeeder is able to run multiple (\(M\)) PEs concurrently on the same FPGA chip for a higher level of parallelism. Fourth, GateSeeder executes in a dataflow manner [22], where the PEs perform different tasks on an FPGA in parallel by allowing consumer tasks to operate before the producer tasks have completed. We describe the FPGA dataflow in more detail in Section S1.1. ### HBM Organization To mitigate the memory bottleneck caused by the data transfer between the memory and the computing elements, modern FPGAs feature HBM. Fig. 3 depicts the internal organization of an HBM, which consists of two main components: 1) HBM stacks and 2) an HBM controller inside the FPGA. A stack comprises multiple memory sections (MSs), each of which is connected to the HBM controller through a 64-bit pseudo channel. In the HBM controller, each pseudo channel is connected to an AXI channel that interacts with the user logic. Each AXI channel is associated with a pseudo channel and can directly access the aligned MS. To make each AXI channel able to access the full HBM space (i.e., all the MSs), an AXI switch is integrated between the AXI channels and the pseudo channels. However, to reach the maximum bandwidth and the minimum latency for the HBM controller, direct routing from the AXI port to the aligned pseudo channel should be used, and accessing unaligned MSs should be avoided [23].

Figure 2: Overview of GateSeeder, which consists of a host CPU with main memory and a modern FPGA board that is equipped with an HBM memory.

As a result, to optimize the throughput of our design, we 1) partition our data into batches with a smaller size than the size of an MS (e.g., the Xilinx Alveo U55C features two 8GB HBM2 memories, each of which has 16 512MB MSs), and 2) carefully design the architecture of each PE such that each AXI channel can access only a unique MS, i.e., limiting the size of the memory space accessed by each AXI channel to the size of one MS.
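As a CPU-side illustration of this placement constraint, the following is a minimal Python sketch (the function name and the dictionary representation are ours, not part of GateSeeder; only the 512 MB MS size comes from the text):

```python
MS_BYTES = 512 * 1024 * 1024        # one 512 MB memory section (Alveo U55C)

def place_in_sections(buf):
    """Cut a buffer into MS-sized pieces, one per pseudo channel, so that
    each AXI channel only ever touches its aligned MS and the slow path
    through the AXI switch is never exercised."""
    pieces = [buf[i:i + MS_BYTES] for i in range(0, len(buf), MS_BYTES)]
    return dict(enumerate(pieces))  # pseudo-channel id -> data piece
```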
### Index Processing #### 2.3.1 Index Construction The purpose of the index is to efficiently store information extracted from the subject reference genome (e.g., _seeds_ and their start locations in the reference genome) and to efficiently facilitate the querying and retrieval of such information when needed. For a given reference genome and a set of parameters (e.g., seed length), the index only needs to be built once and can be reused for different sets of read sequences. As building the index is not considered a contributor to the total execution time of read mapping, we build the index on the host CPU. GateSeeder uses the minimizer algorithm [24] to choose the seeds to be stored in the reference genome index. The minimizer algorithm uses a _hash and compare_ heuristic: (1) it computes the hash values of \(w\) consecutive/overlapping k-mers (a k-mer is a subsequence of length \(k\)), and (2) it compares all the hash values and outputs the k-mer with the smallest hash value as the resulting minimizer seed that represents the subject k-mers. The _Index Construction_ step of GateSeeder is fully configurable for different \(w\) and \(k\) values. The implementation is multi-threaded, and its execution time has the same order of magnitude as that of Minimap2. We build an index data structure that is similar to a _HashMap_ [25], as it offers (1) high data locality and (2) constant-time performance for adding or querying any seed. The high data locality leads to higher throughput: the data are accessed in contiguous blocks, which leverages the memory architecture and enables hardware accelerations such as burst transfers. The constant-time performance leads to a constant number of required clock cycles for performing index-dependent operations and a constant number of memory accesses for fetching indexed data. This helps in easily orchestrating the index querying step with all other steps that depend on its output, and thus increases task-level parallelism. This index data structure has two arrays: a map array and a key array. The map array stores pointers to the key array and is indexed by the seed value (i.e., the hash value of a seed). The key array stores the locations of the extracted seeds in the reference genome. Some seeds can occur very frequently in the reference genome and, as a result, can increase the rate of false-positive mapping locations and unnecessarily increase the time spent querying the index and processing the seed locations [26]. To overcome this issue, we remove from the index the seeds (along with their locations) whose number of locations exceeds a user-defined value, max_occ.
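To make the hash-and-compare heuristic and the two-array layout concrete, here is a small Python sketch (illustrative only: the 2-bit packing used as a hash, the toy hash-space size, the A/C/G/T-only input, and the absence of strand canonicalization are our simplifications, not GateSeeder's actual implementation):

```python
import numpy as np

ENC = {"A": 0, "C": 1, "G": 2, "T": 3}

def kmer_hash(kmer, mask):
    # Toy hash: 2-bit-pack the bases (a stand-in for the real hash).
    h = 0
    for base in kmer:
        h = ((h << 2) | ENC[base]) & mask
    return h

def minimizers(seq, k, w):
    """Hash-and-compare heuristic: hash w consecutive/overlapping k-mers
    and emit the smallest-hash k-mer of each window."""
    mask = (1 << (2 * k)) - 1
    hashes = [kmer_hash(seq[i:i + k], mask) for i in range(len(seq) - k + 1)]
    last = None
    for i in range(len(hashes) - w + 1):
        j = i + int(np.argmin(hashes[i:i + w]))
        m = (hashes[j], j)          # (seed value, start location)
        if m != last:               # overlapping windows repeat minimizers
            yield m
            last = m

def build_index(ref, k, w, max_occ):
    """Two-array layout: map_arr[h] .. map_arr[h+1] delimits the run of
    locations of seed value h inside the flat key array."""
    buckets = {}
    for h, pos in minimizers(ref, k, w):
        buckets.setdefault(h, []).append(pos)
    # Drop seeds occurring more often than max_occ.
    buckets = {h: p for h, p in buckets.items() if len(p) <= max_occ}
    n_hashes = 1 << (2 * k)         # toy hash space; fine for small k
    map_arr = np.zeros(n_hashes + 1, dtype=np.int64)
    key = []
    for h in range(n_hashes):
        map_arr[h] = len(key)
        key.extend(buckets.get(h, []))
    map_arr[n_hashes] = len(key)
    return map_arr, np.asarray(key, dtype=np.int64)

def query(map_arr, key_arr, h):
    # One map-array access yields the two pointers; one contiguous
    # (burst-friendly) read of the key array yields all locations.
    return key_arr[map_arr[h]:map_arr[h + 1]]
```

The `query` function also previews the two-step mechanism of the _Index Querying_ stage described below: one access to the map array, then one burst over a contiguous run of the key array.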
#### 2.3.2 Index Storing As frequent accesses to an index stored in the main memory cause memory bottlenecks, GateSeeder stores the index directly in the HBM of the FPGA. This provides two key advantages: (1) it minimizes data communication latency, due to shorter interconnects between the FPGA chip and HBM compared to the interconnects between the CPU and the main memory, and (2) it provides an order of magnitude more bandwidth than traditional main memory (e.g., DDR4). Since the size of each MS in HBM is limited to a fixed size and the size of the index depends on the subject reference genome, we partition both the map and key arrays of the index into subarrays, each of which has a size smaller than or equal to the size of an MS. By storing each subarray in a different MS, GateSeeder can handle an index of any size as long as the sizes of the index, one batch of read sequences, and one batch of anchors collectively do not exceed the HBM capacity (e.g., 16GB on the Xilinx Alveo U55C). The index is loaded into the HBM of the FPGA _only_ once, before the execution of the read mapper. #### 2.3.3 Index Querying The purpose of the _Index Querying_ stage is to efficiently retrieve all occurrence locations in the reference genome for a given query seed. To maximize the throughput of this stage, (1) we minimize the number of memory accesses, and (2) our design only accesses consecutive memory addresses to leverage burst transfers. Our _Index Querying_ is a two-step mechanism: accessing the map array and accessing the key array of the index. Both steps perform a unique memory access (i.e., a unique entry in the arrays) for each seed, and both steps are performed in parallel. The first step accesses the map array with the value of the seed, which returns two pointers (i.e., addresses in the corresponding memory section) that indicate the start and the end of the list of seed locations stored in the key array. The second step fetches all the locations between the start and end entries in the key array. Each fetched location from the index (corresponding to a location in the reference genome) is then associated with the corresponding location of the seed in the read to form an anchor. To perform both steps in parallel (through pipelining), each PE is connected to the index through two different AXI channels: the first AXI channel is used to access the map array, and the second one to access the key array. ### Read Processing #### 2.4.1 Read Parsing & Storing The goal of _Read Parsing & Storing_ is to convert the input read sequences stored as FASTQ files into sequences that can be efficiently stored in the HBM and processed by the FPGA logic. To efficiently overlap FPGA processing time with data transfer time and to minimize the HBM allocation size for accommodating read sequences, the reads are transferred and processed in batches. We construct the read batches on the CPU side and transfer them to the HBM of the FPGA. To maximize the bandwidth between the FPGA logic and the HBM, we limit the size of each read batch to the size of an MS of the HBM. Since our FPGA design only performs _Seed Extraction_ and _Index Querying_, there is no need to store any metadata (read ID, read length, read number) on the HBM. Each read batch consists of a stream of read sequences concatenated to each other, where each read sequence is separated by a special character E (a character different from the read alphabet A, C, G, T, and N). The metadata (e.g., read ID, read length, number of reads) for a given batch of reads is stored on the CPU side. Each read batch is transferred by a CPU thread to the HBM of the FPGA through the PCIe interface. The FPGA can process in parallel as many batches as the number of PEs implemented in the FPGA chip. Therefore, we limit the number of batches to the number of PEs, which we discuss in Section S1.1.
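The batch layout just described can be sketched as follows (illustrative Python; the function name and byte-level representation are our assumptions, while the E separator and the MS-sized cap come from the text):

```python
SEP = b"E"                          # separator, outside the read alphabet
MS_BYTES = 512 * 1024 * 1024        # one 512 MB HBM memory section

def pack_batches(reads, batch_bytes=MS_BYTES):
    """Concatenate reads into E-separated batches capped at one memory
    section; all per-read metadata stays on the CPU side."""
    batches, metadata = [], []
    buf, info = bytearray(), []
    for read_id, seq in reads:      # seq: bytes over A, C, G, T, N
        entry = seq + SEP
        if buf and len(buf) + len(entry) > batch_bytes:
            batches.append(bytes(buf))
            metadata.append(info)
            buf, info = bytearray(), []
        buf += entry
        info.append((read_id, len(seq)))
    if buf:
        batches.append(bytes(buf))
        metadata.append(info)
    return batches, metadata

# Example: two tiny reads end up in one batch as b"ACGTENACGE".
batches, metadata = pack_batches([("r1", b"ACGT"), ("r2", b"NACG")])
```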
#### 2.4.2 Seed Extraction The goal of _Seed Extraction_ is to quickly extract the seeds of each read stored in the batches. Similar to its _Index Construction_ step, GateSeeder also uses the minimizer algorithm [24] to extract the seeds from read sequences. Our hardware architecture for the _Seed Extraction_ step calculates one minimizer seed every cycle. To reach this performance, we use two key approaches: (1) we replicate the hardware logic responsible for computing the hash values \(w\) times, which allows us to compute the hash values of the subject \(w\) consecutive k-mers in parallel. (2) Our implementation is pipelined, which means the critical path delay of each PE is shortened by dividing it into stages of smaller tasks. This allows GateSeeder to meet target timing constraints (e.g., maximum operating frequency) and achieve more parallelism by calculating multiple minimizer seeds in parallel. ### Calculating the Mapping Locations #### 2.5.1 Anchor Sorting The goal of the _Anchor Sorting_ stage is to sort the anchors according to their location in the reference genome. Sorting the anchors allows us to quickly identify the potential matching segment pairs during the voting stage. Based on the literature [27] and our experimental evaluation, there is no FPGA implementation of sorting algorithms that is faster than the multicore CPU implementations as used in Minimap2 [11]. Our FPGA implementation of a pseudo-in-place merge sort algorithm shows one order of magnitude higher execution time compared to a 24-thread CPU implementation. For this reason, we decide to perform _Anchor Sorting_ on the CPU side and not on the FPGA chip. We implement two types of sorting algorithms: radix sort and merge sort [28]. We observe that for Illumina reads, merge sort is 1.84x faster than radix sort, while for ONT reads, radix sort is 1.32x faster than merge sort. #### 2.5.2 Location Adjustment and Mapping Location Voting The goal of the _Mapping Location Voting_ stage is to quickly find the potential matching segment pairs between a given read sequence and the reference genome. The key idea of _Voting_ is based on the observation that the correct mapping location always has the largest number of anchors compared to the other mapping locations, due to the high similarity between the read sequence and the sequence extracted at the correct mapping location in the reference genome. Based on this observation, we develop a voting algorithm with a linear time complexity in the number of anchors. Our voting mechanism consists of two main steps. The first one is performed on the FPGA after _Index Querying_, and it consists of subtracting the location of the seed within the read sequence from the location of the seed within the reference genome. The list of subtracted locations (\(\delta\)), along with the corresponding locations within the read sequence, also called the list of anchors (\(A\)), constitutes the input of the second step. The second step is the core of our algorithm, and it is performed on the CPU after _Anchor Sorting_. During this step, we iterate once through the list of sorted anchors and, based on those, output a list of matching segment pairs that have the highest number of votes. Our voting mechanism is different from the one used in Genome-on-Diet [29] in two different aspects. (1) GateSeeder only performs one round of voting on the whole read to identify all subsequences in the read that share a large number of votes with the reference genome. The goal of GateSeeder is to only identify the correct mapping locations in the reference genome for each of these subsequences and report them in the PAF file.
Genome-on-Diet, on the other hand, performs multiple rounds of voting on multiple subsequences of the read to map one or more of the read subsequences (two subsequences with a large gap in between) together to cover structural variations (SVs) occurring in the read. The linked subsequences are needed to generate a CIGAR string that represents the SV. (2) The index data structure, the _Seed Extraction_ algorithm, and the indexing parameters that GateSeeder uses are _all_ different from those used in Genome-on-Diet. GateSeeder uses minimizer seeds, while Genome-on-Diet uses sparsified seeds that span a much larger region in the reference genome compared to that of GateSeeder. Thus, GateSeeder also uses a different implementation and different parameters for our voting algorithm than the one used in Genome-on-Diet. To explain our voting algorithm, let a list of anchors be \(A\), where the \(i^{th}\) and \(j^{th}\) anchors are represented as pairs of integer numbers \((L^{i}_{read},L^{i}_{ref})\) and \((L^{j}_{read},L^{j}_{ref})\), respectively. While \(L^{i}_{read}\) and \(L^{j}_{read}\) represent the locations of different seeds within the same read, \(L^{i}_{ref}\) and \(L^{j}_{ref}\) represent the locations of these seeds within the reference genome. Let \(e^{i}_{j}\) be the total number of deletions and insertions between the \(i^{th}\) and \(j^{th}\) anchors, such that we have the following inequality: \[|(L^{j}_{read}-L^{i}_{read})-(L^{j}_{ref}-L^{i}_{ref})|\leq e^{i}_{j}\] This inequality becomes an equality if there are only deletions or only insertions between the two seed matches. Let the subtracted locations for the two anchors be \(\delta^{i}=L^{i}_{ref}-L^{i}_{read}\) and \(\delta^{j}=L^{j}_{ref}-L^{j}_{read}\), such that the following inequality holds: \[|\delta^{j}-\delta^{i}|\leq e^{i}_{j}\] Thus, the difference between two subtracted locations, \(\Delta^{i}_{j}=|\delta^{j}-\delta^{i}|\), gives us a lower bound on the total number of insertions and deletions between two anchors. If there are only insertions or only deletions, then \(\Delta^{i}_{j}=e^{i}_{j}\). For anchors that are close to each other, we expect \(\Delta^{i}_{j}\) to be close to \(e^{i}_{j}\), since the number of consecutive insertions and deletions is small. **Location Adjustment**. Thereby, it makes sense to sort the list of locations based on \(\delta\) and then iterate through the list. For this reason, the goal of the _Location Adjustment_ step is to compute the \(\delta\) values on the FPGA chip after performing the _Index Querying_ step. On the FPGA, the computation of the \(\delta\) values is performed in parallel with the other stages performed on the FPGA and thus has no cost in terms of execution time. To match segments from the read sequence to segments from the reference genome, we define a voting distance, vt_dist. We consider that two anchors \(i\) and \(j\) belong to the same segment if the total number of insertions and deletions between the two anchors is smaller than the user-defined voting distance (i.e., \(e^{i}_{j}\leq\) vt_dist). Computing the exact value of \(e^{i}_{j}\) is computationally expensive and requires DP (it requires performing alignment between the anchors). Since, for anchors that are close to each other, \(\Delta^{i}_{j}\) can be seen as a good approximation of \(e^{i}_{j}\), and the \(\Delta^{i}_{j}\) of all the consecutive anchors can be computed with a linear time complexity, we consider that the two anchors belong to the same segment if \(\Delta^{i}_{j}\leq\) vt_dist.
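As a quick numeric check of this criterion (the coordinates are made up for illustration):

```python
# Two anchors from the same read:
#   anchor i: seed at read position 10, reference position 1010
#   anchor j: seed at read position 60, reference position 1057
delta_i = 1010 - 10          # 1000
delta_j = 1057 - 60          # 997
gap = abs(delta_j - delta_i)
print(gap)                   # 3: at least 3 indels separate the anchors;
                             # with vt_dist >= 3 they vote for the same
                             # matching segment pair
```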
The voting distance can be arbitrarily large depending on the application. If we want to perform alignment on the output segments, the voting distance should have the same order of magnitude as the alignment bandwidth. Indeed, for a given segment pair, we might have a \(\Delta^{i}_{j}\) such that \(\Delta^{i}_{j}=\) vt_dist. If we now consider that we only have insertions or deletions between the two anchors, then \(\Delta^{i}_{j}=e^{i}_{j}\) and thus \(e^{i}_{j}=\) vt_dist. In order to align a segment pair having a section with vt_dist deletions or insertions only, we need a bandwidth of at least vt_dist. For each matching segment pair, we define a voting score corresponding to the number of anchors belonging to the given segment. We use the voting score as a metric to measure the quality of the matching segment pair: the higher the voting score, the more anchors belong to the segment pair and the higher the probability of it being the correct mapping location. Our voting algorithm (Algorithm 1) takes the list of sorted anchors as input and outputs a list of matching segment pairs with the highest voting scores that meet some user-defined constraints, such as the minimum length of the segments. The algorithm starts by initializing two temporary mutable segment pairs, one corresponding to the positive strand and the other to the negative strand. The voting algorithm then iterates over the list of sorted anchors. For each iteration, we check if the anchor belongs to the temporary segment pair of the corresponding strand (Line 4). If yes, we adjust the boundaries of the segment pair based on the anchor and increment the voting score (Line 5). Otherwise, we check if the voting score of the temporary segment pair is greater than the lowest voting score in the list of mapping segment pairs and if the temporary segment pair meets the user-defined constraints. If both conditions are met, we append the segment pair to the list of matching segment pairs (and remove the segment pair with the lowest voting score if necessary) (Line 7). We then initialize the temporary segment pair with the current anchor (Line 8).
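To make this walk-through concrete, here is a compact Python sketch of the voting loop (a sketch only: the strand handling, the boundary bookkeeping of the real implementation, and the `min_votes`/`top_n` stand-ins for the user-defined constraints are our simplifications):

```python
def vote(anchors, vt_dist, min_votes=2, top_n=5):
    """anchors: list of (delta, read_pos) pairs, pre-sorted by delta.
    Returns up to top_n matching segment pairs with the highest voting
    scores; a single pass, linear in the number of anchors."""
    results = []

    def flush(seg):
        # Keep the closed segment only if it meets the (stand-in)
        # user-defined constraints; evict the lowest-scoring pair.
        if seg is not None and seg["votes"] >= min_votes:
            results.append(seg)
            results.sort(key=lambda s: s["votes"], reverse=True)
            del results[top_n:]

    seg = None
    for delta, read_pos in anchors:
        if seg is not None and delta - seg["delta_hi"] <= vt_dist:
            # Anchor joins the current segment: widen the boundaries
            # and add one vote.
            seg["delta_hi"] = delta
            seg["read_lo"] = min(seg["read_lo"], read_pos)
            seg["read_hi"] = max(seg["read_hi"], read_pos)
            seg["votes"] += 1
        else:
            flush(seg)               # close the previous segment
            seg = {"delta_lo": delta, "delta_hi": delta,
                   "read_lo": read_pos, "read_hi": read_pos, "votes": 1}
    flush(seg)
    return results
```

Because the anchors are pre-sorted by \(\delta\), only the \(\Delta\) between consecutive anchors needs to be checked, which is what makes the pass linear.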
## 3 Results We evaluate 1) the time for data transfer and processing per genomic base (bp), 2) the FPGA resource utilization, 3) the end-to-end speedup of GateSeeder compared to Minimap2, and 4) the accuracy of GateSeeder compared to Minimap2. We provide all the commands used to generate the results and a comprehensive treatment of all evaluation results on the GateSeeder GitHub page. Our evaluated datasets and presets are provided in the Supplementary Materials, Sections S1.2 and S1.3. We implement our accelerator designs on an Alveo U55C card featuring the Xilinx Virtex Ultrascale+ XCU55C with 16 GiB HBM2, connected to an AMD EPYC 7302P host system with 64 GiB of memory. All the experiments, including the CPU-only experiments, are run on the described system. ### Data Transfer and Processing Time Analysis In Fig. 4, we evaluate the data transfer time from the host CPU to the FPGA board and from the FPGA board back to the host CPU, the FPGA kernel processing time, and the processing time of a CPU-optimized version of the seeding kernel running on the host CPU. We use 32 CPU threads for the CPU-based step and 8 FPGA PEs for the FPGA-based step. We perform our measurements when running the complete pipeline of GateSeeder. We use the nine presets and normalize the time to the number of bases by dividing the transfer time or processing time by the batch size. Based on Fig. 4, we make three key observations. (1) Our FPGA kernel is always faster than the CPU version, except for the ILMN3 preset, and provides up to 1.96x, 1.58x, and 1.47x speedup for ONT, HiFi, and Illumina reads, respectively, compared to the CPU kernel. The highest performance compared to the CPU kernel is reached when using small max_occ values. This is expected, as for large max_occ values, the average number of locations returned by the key table is larger. Since the returned locations are stored contiguously in memory, the CPU cache hierarchy and the data prefetching mechanisms can be leveraged by the CPU to increase the overall throughput of the CPU kernel. (2) Increasing the max_occ value always increases the execution time of our FPGA kernel. This is expected, as increasing max_occ increases the number of locations to fetch from the HBM. (3) The transfer time is always more than 20x shorter than the FPGA kernel execution time. Transferring the locations from the device to the host is always slower than transferring the read sequences from the host to the device. Since we use asynchronous programming in our host program, we trigger the host-to-device and the device-to-host transfers in parallel. Thus, we are only limited by the device-to-host transfer time. We conclude that offloading the entire seeding stage to the FPGA is always beneficial, since it reduces the CPU workload, the processing time on the FPGA is always faster than or comparable to the CPU processing time, and the transfer time is negligible compared to the processing time.

Figure 3: Near-memory FPGA design of GateSeeder.

### FPGA Resource Utilization We list the resource utilization of GateSeeder on the FPGA in Table 1. For each sequencing technology, and from the FPGA design point of view, the three presets we choose differ only in the values of w and k, as max_occ and vt_dist impact only the CPU-based steps and the batch size impacts only the memory allocation. Thus, we report the resource utilization for each sequencing technology and not for each preset. From Table 1, we observe that regardless of the sequencing technology, there are always enough resources to theoretically accommodate more than 16 PEs. However, in practice, we are limited by the number of HBM AXI channels. Since each PE is designed to use 4 AXI channels, the maximum number of PEs that GateSeeder can accommodate is 8, to cope with the 32 memory channels offered by the HBM of the board we are using. \begin{table} \begin{tabular}{l c c c c} \hline \hline & **PEs** & **CLB** & **LUT** & **FF** & **BRAM** \\ \hline \multirow{2}{*}{ONT} & 1 & 18.88 & 10.33 & 7.05 & 10.69 \\ & 8 & 31.62 & 18.22 & 13.49 & 16.07 \\ \hline \multirow{2}{*}{HiFi} & 1 & 19.30 & 10.51 & 7.21 & 10.69 \\ & 8 & 33.89 & 19.67 & 14.72 & 16.07 \\ \hline \multirow{2}{*}{Illumina} & 1 & 19.32 & 10.51 & 7.17 & 10.69 \\ & 8 & 33.38 & 19.66 & 14.41 & 16.07 \\ \hline \hline \end{tabular} \end{table} Table 1: FPGA resource utilization (in %) for different sequencing data types (i.e., w and k values) and different numbers of PEs. Figure 4: Data transfer time and processing time per bp, when transferring the reads to the FPGA board, transferring the mapping locations back to the host CPU, executing the FPGA kernel, and executing a CPU implementation of the seeding kernel on the host CPU. ### End-to-End Speedup We evaluate the end-to-end speedup that GateSeeder provides over Minimap2. As a baseline, we run Minimap2 without alignment using 32 CPU threads on the same host CPU used by GateSeeder for a fair comparison.
We build the index used by GateSeeder and Minimap2 beforehand, so that the execution time of the _Index Construction_ step is not accounted for in the total execution time. We run GateSeeder using 32 CPU threads and 8 FPGA PEs, as discussed in the previous subsection. We load the read sequences and the index into DRAM before each run of GateSeeder and Minimap2 to reduce the impact of I/O costs. Fig. 5 presents the speedup of GateSeeder over Minimap2 for the nine presets. We make three key observations. (1) GateSeeder provides the largest speedup when using ONT reads. This is expected for two main reasons: 1) the ONT preset uses small k and w values, which causes the number of locations returned by the index to be large, and 2) the length of the evaluated ONT reads is much larger (between 10k and 100k bps) than that of the evaluated HiFi and Illumina reads. Consequently, the number of extracted seeds, the number of queried seeds, and the number of returned locations per read are much larger than those for HiFi and Illumina reads. The larger workload benefits directly from the FPGA acceleration and high parallelism offered by GateSeeder. Minimap2 uses chaining, which has a quadratic time complexity in the number of returned locations, while the voting algorithm used in GateSeeder has a linear time complexity in the number of locations. So for small k and w values and ultra-long reads, _Mapping Location Voting_ provides a non-negligible speedup compared to chaining. (2) Using a small max_occ, as in ONT1, leads to the highest speedup (40.3\(\times\)). This is expected, as it reduces the number of returned locations after querying the index, and hence there is a smaller workload to sort and to perform the voting step on, which reduces the overall execution time. (3) For large values of k (HiFi and Illumina presets), the impact of max_occ on the end-to-end speedup is less important. Indeed, using large values of k increases the number of unique minimizers and decreases the average occurrence of each minimizer. Thus, increasing max_occ while having a small average occurrence has only a limited impact on the execution time. For ONT (k = 15), in contrast, max_occ has a large impact on the end-to-end speedup: the execution time when using max_occ = 10 is 2.4x shorter than when using max_occ = 50. We conclude that, in terms of speedup, GateSeeder performs best for long and inaccurate reads compared to Minimap2. We also conclude that, in terms of execution time, the choice of the max_occ value is impactful for long and inaccurate reads. ### Accuracy Analysis We evaluate the accuracy of GateSeeder compared to Minimap2 using simulated human reads and the mapeval tool from the PAFtools library provided by Minimap2. Simulated reads were mapped to the complete human reference genome GRCh38. A read is considered correctly mapped if its longest mapping overlaps with the true interval, and the overlap length is \(\geq\)10% of the true interval length. We run Minimap2 using its default presets for each read type. We measure the accuracy of GateSeeder for the nine presets.

Figure 5: End-to-end speedup of GateSeeder over Minimap2 for the nine different presets. We run Minimap2 without performing sequence alignment.

We provide in Fig. 6 the _(error rate, fraction of mapped reads)_ pairs that are above different mapping quality thresholds. Based on Fig. 6, we make three key observations.
(1) For accurate read sequences (HiFi and Illumina), increasing max_occ always increases the accuracy. This is not true for the long noisy (ONT) accuracy results. A possible explanation is that increasing max_occ also increases the rate of false-positive seed matches (i.e., random seed matches due to, for example, highly repetitive seeds in human data). Since the amount of false-positive seed matches is higher for noisy reads, it also leads to a higher number of false-positive votes. (2) For HiFi reads, GateSeeder always has better accuracy than Minimap2, even with max_occ set to 1. For ONT and Illumina reads, GateSeeder always has a lower fraction (<2%) of mapped reads for the same error rate compared to Minimap2. (3) For HiFi and Illumina, we observe that the accuracy converges to an upper bound: choosing a max_occ value above 5 and 450 for HiFi and Illumina, respectively, has only a limited effect on the fraction of mapped reads. We conclude that even though GateSeeder uses a lightweight pre-alignment filtering algorithm, _Mapping Location Voting_, compared to chaining, GateSeeder provides high accuracy compared to Minimap2 for all sequencing data types.

Figure 6: Read mapping accuracy of GateSeeder compared to Minimap2, using mapeval from PAFtools.

### Performing Sequence Alignment We examine in Table 2 the benefits of integrating existing state-of-the-art sequence aligners with GateSeeder to perform complete read mapping with sequence alignment. We choose one representative tool from each of the four directions for accelerating sequence alignment: 1) Using modern processors that provide wider registers (e.g., 512-bit wide) for executing vector operations on multiple operands at once for high parallelism. We choose a recent, fastest vectorized implementation [30] of the widely-used aligner KSW2 [31]; it accelerates KSW2 by up to 2.2\(\times\). We refer to this recent implementation in Table 2 as _KSW2 AVX_. 2) Building CMOS-based customized hardware architectures to speed up the alignment process. We choose a non-optimal alignment algorithm, called GACT [32], that has such an accelerator. It divides the DP matrix into overlapping submatrices and greedily processes each submatrix using systolic arrays. We refer to it in Table 2 as _GACT CMOS_. 3) Exploiting the large number of threads and large local memory provided by modern GPUs to compute alignments of many independent sequence pairs concurrently. We choose a recent GPU implementation [33] of the wavefront algorithm (WFA) [34], which reformulates the classic Smith-Waterman-Gotoh recursion and shows significant speedups for highly similar sequence pairs. The GPU implementation [33] of the WFA algorithm improves on the original CPU implementation by 1.5-7.7\(\times\) using long reads. We refer to it in Table 2 as _WFA GPU_. 4) Using a pre-alignment filtering algorithm to reduce the number of mapping locations to be verified by sequence alignment by providing an approximate edit distance calculation. We choose SneakySnake [35] as the representative, since it provides the highest accuracy and speedup compared to other algorithms [2]. We refer to it in Table 2 as _SneakySnake CPU_. We present in Table 2 the read mapping throughput of Minimap2 (which uses KSW2) and of GateSeeder integrated with each of the representative tools that we discuss. We observe that integrating existing tools for sequence alignment with GateSeeder is always beneficial.
It provides up to \(13.63\times\), \(13.67\times\), and \(3.89\times\) higher read mapping throughput than Minimap2 for ONT, HiFi, and Illumina reads, respectively. ## 4 Conclusion We demonstrate that we can use the HBM of modern FPGAs to mitigate the memory bottleneck of the index querying step. We propose an index data structure that leverages the HBM organization. We develop an FPGA+HBM design that performs the seed extraction and index querying steps. We implement our design on a real FPGA with 8 PEs. In addition, we propose a lightweight voting algorithm with a linear time complexity that replaces the computationally expensive chaining step while maintaining good accuracy. We integrate our FPGA design and our voting algorithm into a CPU-FPGA co-designed read mapping tool, GateSeeder. We experimentally demonstrate, using real ONT, HiFi, and Illumina sequences, that GateSeeder outperforms Minimap2 by up to 40.3x, 4.8x, and 2.3x, respectively, when mapping the reads against the entire human reference genome. ## Funding We acknowledge the generous gifts of our industrial partners, including Intel and VMware. This work is also partially supported by the European Union's Horizon programme for research and innovation [101047160 - BioPIM] and the Swiss National Science Foundation (SNSF) \([200021\_213084]\). \begin{table} \begin{tabular}{c|c c c c c} \hline \hline & \multicolumn{4}{c}{**GateSeeder integrated with**} & \multicolumn{1}{c}{**Minimap2**} \\ \cline{2-6} & **KSW2 AVX** & **GACT CMOS** & **WFA GPU** & **SneakySnake CPU** & \\ \hline **ONT** & 1’516 (4.33x) & 3’037 (8.67x) & **4’771 (13.63x)** & 1’324 (3.78x) & 350 \\ \hline **HiFi** & 1’287 (1.15x) & 2’752 (2.47x) & **15’237 (13.67x)** & 4’774 (4.28x) & 1’114 \\ \hline **Illumina** & 281’827 (2.44x) & 156’168 (1.35x) & 228’205 (1.97x) & **450’062 (3.89x)** & 115’511 \\ \hline \hline \end{tabular} \end{table} Table 2: Read mapping throughput (number of mapped reads per second) of Minimap2 and GateSeeder integrated with state-of-the-art pre-alignment filter and sequence alignment tools.
Motivation: Read mapping is a computationally expensive process and a major bottleneck in genome analyses. The performance of read mapping is mainly limited by the performance of three key computational steps: Index Querying, Seed Chaining, and Sequence Alignment. The first step is dominated by how fast and how frequently it accesses main memory (memory-bound), while the latter two steps are dominated by how fast the CPU can compute their computationally costly dynamic programming algorithms (compute-bound). Accelerating these three steps with new algorithms and new hardware devices is essential to accelerate the many genome analysis pipelines that widely use read mapping. Given the large body of work on accelerating Sequence Alignment, this work focuses on improving the remaining steps. Results: We
2307.16809
An enhanced system for the detection and active cancellation of snoring signals
Snoring is a common disorder that affects people's social and marital lives. The annoyance caused by snoring can be partially solved with active noise control systems. In this context, the present work aims at introducing an enhanced system based on the use of a convolutional recurrent neural network for snoring activity detection and a delayless subband approach for active snoring cancellation. Thanks to several experiments conducted using real snoring signals, this work shows that the active snoring cancellation system achieves better performance when the snoring activity detection stage is turned on, demonstrating the beneficial effect of a preliminary snoring detection stage in the perspective of snoring cancellation.
Valeria Bruschi, Michela Cantarini, Luca Serafini, Stefano Nobili, Stefania Cecchi, Stefano Squartini
2023-07-31T16:28:16
http://arxiv.org/abs/2307.16809v1
# An Enhanced System for the Detection and Active Cancellation of Snoring Signals ###### Abstract Snoring is a common disorder that affects people's social and marital lives. The annoyance caused by snoring can be partially solved with active noise control systems. In this context, the present work aims at introducing an enhanced system based on the use of a convolutional recurrent neural network for snoring activity detection and a delayless subband approach for active snoring cancellation. Thanks to several experiments conducted using real snoring signals, this work shows that the active snoring cancellation system achieves better performance when the snoring activity detection stage is turned on, demonstrating the beneficial effect of a preliminary snoring detection stage in the perspective of snoring cancellation. V. Bruschi\({}^{\star}\) M. Cantarini\({}^{\star}\) L. Serafini\({}^{\star}\) S. Nobili\({}^{\dagger}\) S. Cecchi\({}^{\star}\) S. Squartini\({}^{\star}\)+\({}^{\star}\) Department of Information Engineering - Università Politecnica delle Marche, Ancona, Italy \({}^{\dagger}\) Leaff Engineering Srl, Ancona, Italy + Footnote †: This work was supported by the financial program DM MiSE 5 Marzo 2018, project ”_ChaALenge_”—F/180016/01-05/X43. Index Terms: snoring activity detection, active snoring cancellation, convolutional recurrent neural network, adaptive subband algorithm ## 1 Introduction The noise caused by snoring activity is an important problem in our society. Snoring noise can reach a sound level of \(90\,\)dB and have harmful implications, e.g., loss of productivity, attention deficit, and unsafe driving [1, 2]. Recently, various studies have identified significant similarities between snoring and vocal signals [3, 4]. In fact, both of them present high-order harmonics preceded by a fundamental frequency in the spectrum [4]. Snoring activity is composed of two phases, i.e., inspiration and expiration. The power of the snoring signal is mostly concentrated in the lower frequencies of the spectrum. In particular, the inspiration produces a signal between \(100\,\)Hz and \(200\,\)Hz, while the expiration is focused between \(200\,\)Hz and \(300\,\)Hz. Thus, the fundamental frequency, which must be deleted, is located between \(100\,\)Hz and \(300\,\)Hz. In the literature, several approaches can be found for snoring attenuation. Passive solutions involve physical devices such as earplugs or special pillows [5] that may be troublesome for the user. Moreover, these techniques are ineffective at low frequencies and can be very expensive. In contrast, active noise control (ANC) systems can reduce low-frequency noises that passive approaches cannot attenuate. In particular, ANC techniques are based on the introduction of a secondary source that produces a signal capable of generating destructive interference in a desired area controlled by one or more microphones. ANC systems must be adaptive to follow the variations of the noise recorded at the error and reference microphones. They are usually implemented using the filtered-X least mean square (FxLMS) algorithm [6], where the estimate of the secondary path is used to calculate the output signal at the error microphone. Examples of FxLMS applications for active snoring cancellation can be found in [1, 7, 8, 9, 10, 11]. However, snoring is a non-stationary signal that can cause issues during the adaptation process.
Specifically, its irregular nature can result in signal absence, which in turn can negatively impact the performance of the adaptive algorithm. Therefore, to ensure active snoring cancellation, it is crucial to support it with a snoring activity detection algorithm that can identify the presence of snoring. In the literature, deep learning algorithms for sound event detection and classification have also been applied to snoring audio signals. To this end, several studies have employed 2D convolutional neural networks (2D-CNNs) that rely on feature learning of time-frequency representations computed from fixed-length audio segments [12, 13, 14]. In these studies, the high accuracy in snoring detection derives from both the acoustic features chosen and the wide signal analysis windows (\(\geq 1\,\)s), which entail a slow decision response of the algorithm. This issue can be solved by sequential models that analyze the signal over short frames, such as 1D convolutional neural networks (1D-CNNs) and recurrent neural networks (RNNs). In [15, 16], 1D-CNNs proved to perform worse than 2D-CNNs, but the low computational cost due to feature extraction from the raw audio signal makes them suitable for end-to-end systems. In [17, 18], RNNs exploited the features of past and present time-frequency representations of the audio signal over reduced temporal windows (\(25-30\,\)ms) for snoring activity detection, confirming their effectiveness in sequential data analysis. Promising results have also been obtained from the combination of convolutional and sequential models, which together form convolutional recurrent neural networks (CRNNs). The studies described in [19, 20] demonstrated that CRNNs with gated recurrent units (GRUs) or long short-term memory (LSTM) layers outperform 2D-CNNs in snoring detection. However, the performance of each approach is not easily comparable due to the different quantity, quality, and acquisition methods of the data used for training and testing the algorithms. Given these premises, requirements such as reliability in signal classification and the capability to generalize in the presence of different background noises are among those desired for an effective active snoring cancellation system. In this context, an enhanced system for the detection and active cancellation of snoring signals is presented. In particular, starting from the use of a CRNN for snoring activity detection, a delayless subband approach for active snoring cancellation has been improved, reporting good results in terms of convergence time and cancellation quality achieved. The paper is focused on the performance of the active snoring cancellation system with and without the aid of the snoring detection stage; since our interest is to evaluate the active snoring cancellation performance, the comparison of our snoring activity detection system with others in the literature is not addressed here, as it is out of our scope, but it can be addressed in future work. The paper is organized as follows. Section 2 and Section 3 describe the algorithms for snoring activity detection and active snoring cancellation, respectively. Experimental results are reported in Section 4, where several results obtained with snoring signals are presented. Finally, conclusions are drawn in Section 5. ## 2 Snoring Activity Detection In this study, we address the snoring activity detection (SAD) methodology in three stages, as reported in Figure 1.

Figure 1: Scheme of the SAD system.
The first stage involves audio signal processing for acoustic feature computation. The second stage consists of data analysis using a CRNN for a binary snoring/non-snoring classification task, where a snoring event represents the positive class (label \(1\)) and all non-snoring events constitute the negative class (label \(0\)). Finally, in the third stage, the predictions produced by the neural network are post-processed with the "Hangover" algorithm. This pipeline - a binary classifier plus an output filter (Hangover) - is common in Voice Activity Detection tasks. More in detail, in the first stage, the stereo audio signal is converted to monophonic by channel averaging. Log-Mel spectrograms are computed, and \(40\) log-Mel coefficients are extracted using \(30\) ms windows with a shift of \(10\) ms. The second stage involves the classification and is performed by the CRNN, which takes as input the log-Mel coefficients computed in the previous step. The convolutional part of the CRNN comprises three consecutive blocks, each consisting of a convolutional layer, a batch normalization layer, a dropout layer, and a max pooling layer. In each block, the convolutional layers have \(32\) filters of size (3,3), and their output is normalized and regulated by the Leaky Rectified Linear Unit (Leaky ReLU) [21] activation function. All dropout layers are characterized by a rate equal to \(0.3\), while the max-pooling layers have filters decreasing with each block, from (5,1) to (4,1) to (2,1). The output is then flattened and passed to the recurrent part of the network, composed of two blocks. Each consists of a \(32\)-unit GRU layer, with tanh and hard sigmoid activation functions to update and reset the gates, respectively, and a dropout layer with a drop rate of \(0.3\). Finally, a time-distributed feed-forward output layer, with a single neuron and sigmoid as activation function, returns predictions in the range [0,1], each one representing the probability that a frame is associated with a snoring event. Then, the predictions are binary-encoded ("binarization") using a threshold of \(0.5\), so that they can be leveraged by the ASC algorithm.
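For the shape bookkeeping of the architecture just described, here is a minimal Keras sketch (a sketch under assumptions: input patches shaped as (mel bins, time frames, 1 channel), 'same' convolution padding, and squeezing the collapsed mel axis in place of the "flatten" step are our choices, not details given in the paper):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_crnn(n_mels=40):
    # One log-Mel patch: (mel bins, time frames, 1 channel);
    # the time axis is left unspecified.
    inp = layers.Input(shape=(n_mels, None, 1))
    x = inp
    # Three conv blocks: Conv2D -> BatchNorm -> LeakyReLU -> Dropout
    # -> MaxPool over the mel axis only: (5,1), (4,1), (2,1).
    for pool in [(5, 1), (4, 1), (2, 1)]:
        x = layers.Conv2D(32, (3, 3), padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU()(x)
        x = layers.Dropout(0.3)(x)
        x = layers.MaxPooling2D(pool_size=pool)(x)
    # Mel axis is now 40/5/4/2 = 1: squeeze it so the recurrent part
    # sees one 32-dimensional feature vector per 10 ms frame.
    x = layers.Lambda(lambda t: tf.squeeze(t, axis=1))(x)
    # Two GRU blocks with tanh / hard-sigmoid gate activations.
    for _ in range(2):
        x = layers.GRU(32, return_sequences=True,
                       recurrent_activation="hard_sigmoid")(x)
        x = layers.Dropout(0.3)(x)
    # Per-frame snoring probability in [0, 1].
    out = layers.TimeDistributed(layers.Dense(1, activation="sigmoid"))(x)
    return models.Model(inp, out)

model = build_crnn()
model.compile(
    optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95),
    loss="binary_crossentropy",
)
```

The compile call mirrors the training setup reported in Section 4 (AdaDelta with initial value 1, decay rate 0.95, binary cross-entropy).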
Although the Hangover method is not robust against false positives (FPs), it is able to reduce FNs, which are the errors to which the ASC algorithm is most susceptible.

```
Input: L, k, X
Output: buffOut                        // output buffer
outIdx <- 0                            // index in the output buffer
buffInFull(X)                          // returns when buffIn is full
while outIdx <= (L-1) do
    buffIn <- readBuffIn()
    zeros <- FindZeros(buffIn)         // number of 0s in buffIn
    ones  <- FindOnes(buffIn)          // number of 1s in buffIn
    if ones > zeros then
        startIdx <- outIdx
        buffOut[startIdx-X : startIdx+1] <- ones(X,1)   // vector of X ones
        i <- 1
        while true do
            _ <- readBuffIn()          // discard reading
            if (i <= k-X) and (outIdx <= L-1) then
                outIdx <- startIdx + i
                buffOut[outIdx] <- 1
                i <- i + 1
            else
                outIdx <- outIdx + 1
                break
    else
        if outIdx = 0 then
            buffOut[outIdx : outIdx+X] <- buffIn
            outIdx <- outIdx + X
        else
            buffOut[outIdx] <- buffIn[-1]
            outIdx <- outIdx + 1
```
**Algorithm 1** Hangover algorithm

## 3 Active Snoring Cancellation

Active Snoring Cancellation (ASC) is developed considering a feed-forward filtered-X configuration and a subband implementation, as reported in [11]. Figure 2 shows the scheme of the algorithm. A reference microphone picks up the snoring source \(x(n)\), and an error microphone picks up the noise in the area to be quieted, \(e(n)\). A loudspeaker then reproduces the interference signal \(y(n)\), generated by filtering \(x(n)\) with the adaptive filter \(w(n)\), which represents the estimate of the primary path \(p(n)\). The coefficients of this filter are produced by the subband adaptive filtering (SAF) block on the basis of \(x(n)\) filtered with the estimate of the path between the loudspeaker and the error microphone, i.e., the secondary path \(s(n)\), the error \(e(n)\), and the predictions of the snoring detection block. The SAF block has been developed considering a delayless subband adaptive filter algorithm, as first proposed in [22] and efficiently implemented in [11, 10]. In particular, the signal \(x^{\prime}(n)\) and the error \(e(n)\) are decomposed into subbands by an analysis filter-bank, as \(x^{\prime}_{k}(n)\) and \(e_{k}(n)\) for each \(k\)-th subband, respectively. The weights of the \(k\)-th subband \(\textbf{w}_{k}^{SAF}(n)\) are updated following the normalized least mean square (NLMS) algorithm as \[\textbf{w}_{k}^{SAF}(n+1)=\textbf{w}_{k}^{SAF}(n)+\mu_{w}\frac{\textbf{x}_{k}^{\prime*}(n)e_{k}(n)}{\alpha+||\textbf{x}_{k}^{\prime}(n)||^{2}}, \tag{1}\] where \(\textbf{x}_{k}^{\prime*}(n)\) is the complex conjugate of the input signal of the \(k\)-th subband \(x^{\prime}_{k}(n)\), \(\mu_{w}\) is the step size, and \(\alpha\) is a small coefficient that avoids division by zero. The fullband filter \(w(n)\) of length \(N\) is obtained by stacking all the subband weights following the steps below (see the sketch after this list):

* the subband weights are transformed into the frequency domain by an \((N/D)\)-point fast Fourier transform (FFT), with \(D=M/2\) the decimation factor and \(M\) the number of subbands;
* the first half of the array representing the fullband filter is calculated by stacking the complex samples of the FFTs;
* the rest of the array is obtained as the complex-conjugate reversed version of the first half, and the central point is set to zero;
* the fullband filter is computed by an \(N\)-point inverse FFT of the array.
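To make the update rule and the stacking procedure concrete, here is a schematic numpy sketch of ours, not taken from [22]; the exact bin-selection rule of the delayless stacking is simplified to a contiguous placement, so the function names and the bin layout are illustrative assumptions:

```python
import numpy as np

def nlms_subband_update(w_k, x_k, e_k, mu_w=0.03, alpha=1e-8):
    """One NLMS step for the k-th subband, eq. (1).

    w_k: complex subband weight vector; x_k: the most recent len(w_k)
    samples of the filtered reference x'_k(n); e_k: subband error sample.
    """
    norm = alpha + np.vdot(x_k, x_k).real  # alpha avoids division by zero
    return w_k + mu_w * np.conj(x_k) * e_k / norm

def stack_fullband(w_sub, N, M):
    """Assemble the N-tap fullband filter from the subband weights.

    w_sub: (M//2, N//D) array of subband weight vectors, with D = M//2.
    """
    D = M // 2
    spectra = np.fft.fft(w_sub, n=N // D, axis=1)   # (N/D)-point FFTs
    H = np.zeros(N, dtype=complex)
    H[:N // 2] = spectra.reshape(-1)[:N // 2]       # first half (schematic)
    H[N // 2] = 0.0                                 # central point set to zero
    H[N // 2 + 1:] = np.conj(H[1:N // 2])[::-1]     # conjugate, reversed
    return np.fft.ifft(H).real                      # N-point inverse FFT
```

With \(N=512\) and \(M=64\) as in Section 4, each subband filter has \(N/D=16\) complex taps.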
The SAF algorithm is activated when the SAD algorithm provides a prediction of snore presence.

## 4 Experimental Results

### Dataset

The A3-Snore dataset [19] has been selected for the experimental phase. It is a collection of audio files containing snoring events emitted by two male volunteers, aged 48 and 55, during overnight sleep. The recording setup is a ZOOM-H1 Handy Recorder with two unidirectional microphones oriented perpendicularly. Acquisitions were made in a single room measuring \(4\times 2.5\) m with the sensors positioned near the snorer's head. The corpus includes almost \(7\,\)h of audio material split into \(10\)-minute segments, selected according to the highest frequency of snoring events associated with each volunteer ("snorer 1" and "snorer 2"). All audio files, in wav format, are stereophonic with a sampling rate of \(44.1\,\)kHz and \(16\)-bit encoding. A metadata file reports annotations of the start and end timestamps of snoring events with a resolution of \(1\) second. The dataset is organized into two folders, each associated with a snorer, with an unbalanced distribution between snoring and non-snoring events. Table 1 summarizes the composition of the A3-Snore audio collection. Files associated with Snorer 1 have been used for the training set, whereas Snorer 2's files have been split with a ratio of 50% and used for the validation and test sets.

### Snoring Activity Detection

In the experiments, training was performed in a supervised manner for \(500\) epochs by monitoring the Average Precision (AP), also known as the area under the precision-recall curve (AUC-PR), on the validation set, and exploiting the early-stopping strategy to arrest the learning process when the model does not improve for \(20\) consecutive epochs. An adaptive learning rate according to the AdaDelta [23] optimization algorithm was selected, with an initial value equal to \(1\) and a decay rate of \(0.95\). Binary cross-entropy was used as the loss function. The experiments were carried out on an NVIDIA DGX Station A100 with dual 64-Core AMD EPYC 7742 @3.4 GHz and eight NVIDIA A100-SXM4-40 GB GPUs. The server was running Ubuntu 20.04.3 LTS. The neural network has been implemented with the Tensorflow [24] deep learning framework. The CRNN classification performance was evaluated considering the AP, obtaining a value equal to \(77.54\)%. As for the Hangover algorithm, the size \(X\) of the input buffer has been chosen in order to reduce the number of FNs while keeping the latency as low as possible. Moreover, since the Hangover algorithm applies a majority voting scheme, \(X\) should be an odd number. We found the right trade-off by setting \(X=3\); in this way, the post-processing algorithm is able to improve the CRNN output while maintaining a relatively low latency (i.e., \(30\,\)ms). Since for a 10-minute audio file we have \(60\,001\) predictions, we set \(L=60\,001\), whereas \(k\) has been set equal to \(100\). In order to evaluate the performance of the overall snoring activity detection system also from a graphical perspective, we report in Fig. 3(a) a 100-second excerpt of an audio signal employed in testing and the associated predictions generated by the overall snoring activity detection system after the post-processing stage. Moreover, in order to also visualize the performance of the Hangover algorithm, Fig. 3(b) shows the binary predictions output by the CRNN before and after the post-processing stage; the time interval is shorter to better highlight the difference.
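As a side note, the AP reported above is the standard area under the precision-recall curve and can be reproduced, for frame-wise labels and scores, with a few lines of scikit-learn (a generic sketch with made-up toy arrays, not the authors' evaluation script):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# y_true: frame-wise 0/1 ground truth; y_score: sigmoid outputs of the CRNN
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.6, 0.2, 0.9])
print(average_precision_score(y_true, y_score))  # AP, i.e., AUC-PR
```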
### Active Cancellation with Snoring Activity Detection

The presented ASC algorithm has already been validated in [11, 10] by comparing its performance with the state-of-the-art algorithm of [25], taken as reference. In this paper, the ASC algorithm is improved by applying the SAD, and the experiments mainly focus on evaluating the performance of the system with and without SAD.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Snorer & Number of files & Total duration [s] & Snoring duration [s] & Snoring ratio [\%] \\
\hline
1 & 18 & 10\,800 & 1127 & 10.4 \\
2 & 23 & 13\,800 & 2017 & 14.6 \\
Total & 41 & 24\,600 & 3144 & 12.8 \\
\hline
\end{tabular}
\end{table}
Table 1: Statistics of the A3-Snore dataset.

Figure 3: Predictions post-processed by the Hangover algorithm for an audio signal (a), and their difference with respect to the raw predictions before the post-processing stage (b).

Figure 2: Scheme of the ASC algorithm with SAD.

Starting from the snoring signals of the dataset described in Section 4.1, the primary path \(p(n)\) and the secondary path \(s(n)\) are simulated considering responses measured in a semi-anechoic chamber with the setup of [9]. Since \(p(n)\) and \(s(n)\) are modeled as FIR filters with a length of \(256\) samples, the length of the adaptive filter \(w(n)\) is set to \(512\) taps. For the subband structure, the length of the prototype filter is \(256\) samples, the number of subbands is \(M=64\), and the step size is \(\mu_{w}=0.03\). The performance of the proposed system has been evaluated in terms of primary path estimation, varying the signal-to-noise ratio (SNR) of the signal \(d(n)\) (cf. Figure 2). The primary path estimated by the ASC with the SAD is compared with the one estimated without SAD and with the measured primary path. Figure 4 shows the results obtained considering \(\text{SNR}=10\) dB. The difference between the estimated responses and the measured one is evaluated by the log-spectral distance (LSD) in the frequency domain and by the misalignment in the time domain. The LSD evaluates the spectral difference between two frequency responses [26]. Similarly, the misalignment evaluates the difference between the measured and the estimated path in the time domain and gives a measure of the convergence rate [10, 11]. Denoting the measured primary path as \(p(n)\), the estimated primary path as \(w(n)\), and their respective transfer functions as \(P(k)\) and \(W(k)\), the LSD is computed as \[\text{LSD}=\sqrt{\frac{1}{k_{2}-k_{1}+1}\sum_{k=k_{1}}^{k_{2}}\left[10\log_{10}\frac{\left|P(k)\right|^{2}}{\left|W(k)\right|^{2}}\right]^{2}}, \tag{2}\] where \(k_{1}\) and \(k_{2}\) delimit the frequency range within which the LSD is estimated, defined as \(B=[k_{1}\frac{f_{s}}{K},\,k_{2}\frac{f_{s}}{K}]=[100\text{ Hz},\,20\text{ kHz}]\), with \(K=4096\) the number of frequency bins for the FFT computation and \(f_{\text{s}}=44.1\) kHz the sampling frequency. The misalignment is calculated as \[\text{MIS}=20\log_{10}\frac{||p(n)-w(n)||}{||p(n)||}\,. \tag{3}\]
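Both metrics are straightforward to reproduce. The following is a minimal numpy sketch of eqs. (2) and (3), written by us under the stated \(K\), \(f_{s}\), and band values:

```python
import numpy as np

def lsd_and_misalignment(p, w, fs=44100, K=4096, band=(100.0, 20000.0)):
    """LSD, eq. (2), over the band [k1, k2], and misalignment, eq. (3), in dB."""
    P = np.fft.rfft(p, n=K)
    W = np.fft.rfft(w, n=K)
    k1 = int(np.ceil(band[0] * K / fs))
    k2 = int(np.floor(band[1] * K / fs))
    ratio_db = 10.0 * np.log10(np.abs(P[k1:k2 + 1]) ** 2 /
                               np.abs(W[k1:k2 + 1]) ** 2)
    lsd = np.sqrt(np.mean(ratio_db ** 2))
    mis = 20.0 * np.log10(np.linalg.norm(p - w) / np.linalg.norm(p))
    return lsd, mis
```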
Table 2 shows the values of the LSD and the misalignment for signals with different SNR levels. The estimation performance improves as the SNR increases, both with and without SAD, and in terms of both LSD and misalignment. The lowest LSD values are obtained when the SAD is applied, i.e., when the adaptation algorithm of the ASC is executed only when the snoring signal is detected by the SAD. This result is confirmed by Figure 4(b), where the magnitude frequency response of the primary path is well estimated up to 10 kHz with SAD, while the frequency response estimated without SAD deviates from the measured one over the whole frequency spectrum. The difference in misalignment between the two cases is instead harder to discern. In fact, looking at Figure 4(a), the main peak of the impulse response is correctly detected both with and without SAD, but both cases introduce some late reflections that are not present in the measured impulse response.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{SNR [dB]} & \multicolumn{2}{c|}{LSD [dB]} & \multicolumn{2}{c|}{Misalignment [dB]} \\
\cline{2-5}
 & SAD OFF & SAD ON & SAD OFF & SAD ON \\
\hline
10 & 0.79 & **0.72** & -4.05 & **-6.05** \\
15 & 0.49 & **0.37** & -10.11 & **-12.51** \\
20 & 0.25 & **0.21** & **-16.20** & -14.86 \\
\hline
\end{tabular}
\end{table}
Table 2: Values of the LSD and the misalignment obtained with SAD OFF and SAD ON for different SNR values of the input signal. The LSD is calculated in the frequency range [\(100\) Hz–\(20\) kHz].

Figure 4: Comparison between the measured primary path and the primary path estimated in the cases of SAD OFF and SAD ON, (a) in the time domain and (b) in the frequency domain, considering an input signal with \(\text{SNR}=10\) dB.

## 5 Conclusions

In this paper, an enhanced system that combines detection and active cancellation of snoring signals has been proposed. For snoring activity detection, a convolutional recurrent neural network fed with log-Mel coefficients has been implemented to classify snoring and non-snoring events. For active snoring cancellation, a feed-forward filtered-X configuration based on a delayless subband adaptive filter algorithm has been developed. The combined use of the two algorithms results in a single improved system for ASC. This work is a preliminary study that leaves large room for improvement. For the SAD, better-performing neural architectures based on unsupervised or semi-supervised deep learning strategies, coupled with larger and more challenging datasets, can be explored. The ASC can be improved by introducing non-uniform subband structures, and different environments with different reverberation conditions could be taken into account to test the proposed system.
```
Snoring is a common disorder that affects people's social and marital lives. Snoring complaints can be partially addressed by active noise control systems. This work aims to introduce an enhanced system that combines snoring activity detection, based on a convolutional recurrent neural network, with a delayless subband approach for active snoring cancellation. Through experiments conducted on real snoring signals, this work shows that the active snoring cancellation system achieves better performance when the snoring activity detection stage is turned on, proving the effectiveness of a preliminary snoring detection stage for active snoring cancellation.
```
2309.11412
Creases and cusps in growing soft matter
The buckling of a soft elastic sample under growth or swelling has highlighted a new interest in materials science, morphogenesis, and biology or physiology. Indeed, the change of mass or volume is a common fact of any living species, and on a scale larger than the cell size, a macroscopic view can help to explain many features of common observation. Many morphologies of soft materials result from the accumulation of elastic compressive stress due to growth, and thus from the minimization of a nonlinear elastic energy. The similarity between growth and compression of a piece of rubber has revived the instability formalism of nonlinear elastic samples under compression, and in particular Biot's instability. Here we present a modern treatment of this instability in the light of complex analysis and demonstrate the richness of possible profiles that an interface can present under buckling, even if one restricts oneself to the two spatial dimensions. Special attention is given to wrinkles, folds and cusps, a surprising observation in swelling gels or clays. The standard techniques of complex analysis, nonlinear bifurcation theory and path-independent integrals are revisited to highlight the role of physical parameters at the origin of the observed patterns below and above the Biot threshold.
Martine Ben Amar
2023-09-20T15:36:52
http://arxiv.org/abs/2309.11412v1
# Creases and cusps in growing soft matter ###### Abstract The buckling of a soft elastic sample under growth or swelling has highlighted a new interest in materials science, morphogenesis, and biology or physiology. Indeed, the change of mass or volume is a common fact of any living species, and on a scale larger than the cell size, a macroscopic view can help to explain many features of common observation. Many morphologies of soft materials result from the accumulation of elastic compressive stress due to growth, and thus from the minimization of a nonlinear elastic energy. The similarity between growth and compression of a piece of rubber has revived the instability formalism of nonlinear elastic samples under compression, and in particular Biot's instability. Here we present a modern treatment of this instability in the light of complex analysis and demonstrate the richness of possible profiles that an interface can present under buckling, even if one restricts oneself to the two spatial dimensions. Special attention is given to wrinkles, folds and cusps, a surprising observation in swelling gels or clays. The standard techniques of complex analysis, nonlinear bifurcation theory and path-independent integrals are revisited to highlight the role of physical parameters at the origin of the observed patterns below and above the Biot threshold. ###### Contents * I Introduction * II Selection of creases in experiments * III A basic introduction to nonlinear elasticity * III.1 A brief reminder of the principles of linear elasticity * III.2 The principles of nonlinear elasticity * III.3 Constitutive equations in finite elasticity and definition of the elastic stresses * III.4 Simple geometry and stretches * IV Competition between elasticity and continuous fields * IV.1 The origin of the elastic stresses * IV.1 Growth without stress generation * IV.1.1 Elastic energy evaluation, order by order * IV.2 Nonlinear coupling of quasi-singular profiles * IV.3 Nonlinear coupling of harmonic modes * IV.4 Intermediate algebra for the coupling of sinusoidal modes * IV.4.1 Coupling two modes near the \(J_{B}\) threshold * IV.4.2 Nonlinear three mode coupling in the vicinity of the \(J_{B}\) threshold * IV.4.3 Super and subcritical bifurcations * IV.4 Role of surface tension * X How to escape the Biot threshold? * X Singular Profiles below the Biot threshold * X.1 Physical origins of the patches * X.1 Patches as inner boundary layer * XI Theoretical evidence for internal singularities * XI New elastic model for large stresses * X.2 The intermediate boundary layer analysis C The inner core * 1. Rescaling the strains and the invariants * 2. The energy density of the inner core * 2. Energy of the patches * 3. Path independent contour integrals * 4. The J-Integral * 5. Constant growth and finite size effects * 6. Inhomogeneous volumetric growth * 7. The M-Integral * 4. Finite-size effects or the buckling of layers * 5. Selection of a unique harmonic mode * 6. Nonlinearity and creasing above threshold for growing layer * 7. Conclusion * 8. Acknowledgements * A. Nonlinear elasticity at first order: stress and energy expansion * B. Expansion of the elastic and capillary energy density * C. Evaluation of the total energy for a single mode, double and triple mode * D. Profiles and Cartography of the stress * E. Weakly nonlinear analysis for quasi-singular profiles * F. 
Path-independent integrals ## I Introduction The buckling of the outer surface of a living tissue during growth [1; 2; 3] and the corrugation of the surface of a swelling gel [4; 5; 6] are often observed in nature or in the laboratory. In the last three decades, a large number of studies have been devoted to such patterns in order to explain complex geometries in embryogenesis [7; 8; 9], botanical morphogenesis [10; 11; 12], but also in tumorogenesis [13; 14; 15] and organ pathologies (e.g. wound healing [16; 17; 18]). These shape instabilities affect thick samples that experience large volume variations in a non-isotropic manner. Obviously, in a free environment, the constant growth of a homogeneous sample does not generate stress, but if there is a constraint, such as a substrate, or if there is a material or growth inhomogeneity, then the stress is generated that changes the shape of the body. It can buckle, but only if there is enough growth. This suggests a shape change once the relative volume increase exceeds a threshold, about 2 times the original. The origin of the observed patterns at free surfaces results from the compressive stress generated by growth coupled with the hyperelastic properties of soft tissues. These tissues exhibit large deformations even at low stress values, and classical linear elasticity cannot explain the observed shapes. Focusing on the simplest case of a gel layer of constant thickness \(H_{0}\) placed on a substrate, the growth process occurs mainly in the vertical direction and leads to a thickening of the layer with: \(H=J_{t}H_{0}\), where \(J_{t}\) is the relative growth per unit volume at a time \(t\) in this simple geometry. When \(J_{t}\) is increased to a critical value, the top surface begins to wrinkle. For neo-Hookean elasticity, this value \(J_{B}\) of order \(3.38\) can be related to the critical value found by Biot for samples under compression. Of course, this instability is common and not limited to the ideal gel layer. The threshold for wrinkling depends on the nonlinear elasticity model [19; 20], or on the initial geometry of the sample [21; 16], or possibly on the growth anisotropy [22], but the order of magnitude of this number seems quite robust. The mechanical interpretation of a material under compression was first given by M.A. Biot in a seminal paper "Surface instability of rubber in compression" [23]. Surface instability means that the instability is more visible at the surface of the sample, but actually occurs throughout the volume, as opposed to the Azaro-Tiller-Grenfield instability [24; 25], which results from surface diffusion. This instabilty is also different from wrinkles formed by a two-layer system where the top layer is thin and stiff and plays the role of a hard skin [26]. In this case, the surface topography can be realized in a very controlled way and is of enormous importance in industrial and biomedical applications [27]. Biot's instability was first demonstrated for a compressed neo-Hookean hyperelastic sample with a free surface in infinite geometry. It describes a two-dimensional infinite periodic pattern that occurs above a characteristic threshold for the compression level, but when the material geometry is more complex, such as bilayers [20; 28], or when the compression results from anisotropic or inhomogeneous growth, the interface buckling is recovered experimentally, but the analysis can be less straightforward. 
However, if smooth surface undulations can also be considered [29], the experimental patterns quickly evolve to nonlinear mode coupling [30; 31; 32; 33; 34] and even to wrinkles, which are less understood, although they are easily and commonly observed in experiments and are also noted in the physiology of the brain cortex, for example [35]. An even more puzzling observation concerns more cusped interfaces as shown in Fig.(1) (A1) to (A6). In one dimension, a cusp is a special point of a curve where the radius of curvature vanishes (or the curvature is infinite), while a "wrinkle" represents a more or less deep local folding of the interface. Other different interpretations of surface wrinkles concern singular points at the origin of a self-contacting interface, which of course indicates a much more singular interface deformation, see Fig. (1) (A9) and [36; 37; 38; 39; 40]. Do they result from a highly nonlinear coupling of modes occurring after the bifurcation, or do they belong to another class of solutions? In the latter case, they can appear below the Biot threshold \(J_{B}\) and even inhibit the classical instability [41; 42]. More recently, the idea that there can be new families of solutions below the Biot threshold has been supported by matched asymptotic analysis [36; 37; 38; 39; 40] or by the nucleation of new solutions in more complex elasticity models and geometries [20; 43]. Some experimental evidence realized on rubber in compression or on swelling gels also seems to favor the second hypothesis [36; 37; 44]. Of course, numerical evidence is always difficult in the case of spatial singularities, but we must mention the finite element numerical investigation of [45; 46] in favor of a subcritical (or discontinuous bifurcation) before \(J_{B}\) which becomes supercritical (or continuous) at \(J_{B}\) with an important sensitivity of the results to the conditions imposed on the substrate. Another way to study the cusp formation experimentally and theoretically [38] is to create a localized defect in a controlled experiment, mimicking in some way experiments in viscous fluids where the defect is realized by contrarotating cylinders [47]. It should be noted that localized singular structures easily occur in tubes but here the geometry helps the appearance of singular deformations [48; 49]. Despite the similarity that exists between compressive forcing and homogeneous growth in the neo-Hookean approach, this review article focuses on volumetric growth, which is ubiquitous in life. Most of our organs exhibit Biot's instability, which explains our fingerprints, the convolutions of our brains, the villi and the mucosa of the intestines. All these structures appear after a certain time after fertilization in foetal life. They are present in most mammals, except for small rodents. These two observations support an interpretation in terms of morpho-elasticity: the shape of the organ is a determinant factor, as is the volumetric growth, which increases with time from \(J=1\) (no growth expansion) up to critical values. Before giving mathematical proofs concerning wrinkles, our presentation will begin with a selection of experiments (section II) and a brief introduction to the principles of nonlinear elasticity. In this field of study, positive quantities called invariants \(I_{J}\) are introduced to evaluate the elastic energy density. Since they are specific to finite elasticity, they will be introduced in detail in section III. 
In addition, the local growth per unit volume creates an external field that does not obey physical rules and is imposed a priori inside the sample. It is not fully comparable to an externally applied compressive dead load, see Sec. IV. We first revisit the original model of Biot for neo-Hookean elasticity in the incompressibility limit and in semi-infinite geometry [23; 50], but for the threshold determination \(J_{B}\) and for nonlinear buckling and wrinkling, we follow a different strategy based on variational principles. Euler-Lagrange equations derived by incremental perturbation techniques are at the origin of the periodic modes and also of \(J_{B}\), the threshold. We then apply the nonlinear techniques of bifurcations, combined with complex analysis, which greatly simplifies the intermediate algebra. The results of Biot are recovered in a much simpler way and nonlinearities are treated above and below the threshold without difficulty. First, subcritical bifurcations, as indicated by [51; 52; 53], are demonstrated by nonlinear sinusoidal mode coupling. Second, wrinkles above and below the Biot threshold are analytically justified by introducing singularities either inside or above the elastic sample. This notion can be rather abstract, but has been successfully introduced for interfacial flows such as viscous fingering [54; 55; 56; 57], for bubbles in Laplacian and Stokes flows [54; 58], for vortices [59; 60], and for diffusive growth [61; 62]. In fluids, singularities outside the physical plane are used to select the length scale of the interface patterns, but they can be physically introduced into the flow in the experimental setup, leading to a complete change of the interface shape. For example, a coin or a bubble in front of a viscous finger completely changes the shape into a dendritic one [63], and a theoretical interpretation has been given in terms of a dipole. This idea of a dipole was taken up later [64] in fluids and in linear elastic solids. Also, when vortices are created in viscous fluids, they generate cusps at the interface [65; 66] (in the mathematical sense), which are transformed into sharp wrinkles when a weak surface tension is included [47; 67]. Following a similar strategy, we will consider singularities outside and inside the physical domain, with the aim of discovering the main physical ingredients necessary to predict the observed wrinkles. In conclusion, the existence of wrinkles in growing soft materials benefits from many theoretical analyses carried out in the last decades on viscous flows (interfacial and vortex flows) and from treatments of singularities in elasticity based on the Noether theorem and path independent integrals, see section XII. These classical but not easy techniques are presented in the following. We limit ourselves to a very simple modeling of hyperelasticity, being convinced that, once established, it will be possible to extend the mathematics to arbitrary geometries and complex structures of soft materials. After the presentation of some experimental examples in section II and a reminder of the foundations of nonlinear or finite elasticity (sections III to VI), we focus on a variational energy method, section VII, where buckling modes are treated at the linear, (section VIII), and nonlinear, (section IX), levels. We then study the possibility of stress focusing [68] inside the material just below the interface, which can induce interfacial wrinkles, in section X. 
If these zones can be perfectly characterized in morphoelastic growth, (section XI), there is no clear threshold for their observation as demonstrated by the technique of path independent integrals, (section XII). Finally, we come back to the buckling of thin films of finite thickness comparable to the wavelength in section XIII. ## II Selection of Creases in Experiments The formation of wrinkles and creases in samples of elastomers or swelling gels has fascinated physicists for decades and probably still does. Examples of compressed elastomers are given in Fig.(1) panels \(A1,A2,A4\), and all the other panels concern swelling gels in different experimental setups. In fact, the nucleation of wrinkles in materials known to be highly deformable without plasticity is quite astonishing. It contrasts with the difficulty of nucleating a fracture in a 3D brittle sample under tensile loading: in this case, an initial notch or slit must be deliberately made [69; 70]. Experimentally, it is difficult to elucidate the threshold for the appearance of these wrinkles. Indeed, the homogeneous volumetric growth of a material is equivalent to a compression, but the linear instability threshold discovered by Biot has not been precisely verified experimentally. As for wrinkles, it seems even worse, although there is a tendency to detect them above the Biot threshold. It is true that the geometry of the experimental setup has its importance on the threshold, as well as the fact that the material is attached to a solid substrate or to another elastic sample. Another important point concerns the size of the experimental setup compared to the instability wavelength and the fact that the neo-Hookean model (or any hyperelastic model) is not really adapted to swelling. The poroelastic model is more appropriate in this case [14; 36; 71]. Independently, R. Hayward and collaborators [72; 53; 42] point out in a series of articles that the bifurcation around the Biot threshold is probably subcritical, which makes a precise experimental determination difficult. However, singular profiles certainly exist, and the last panel (A9) shows the strong stress concentration that leads to the ejection of material pieces from the outer ring [73; 74] during the course of the experiment. Our main concern in the following will be the prediction of patterns around the Biot threshold or below. Nevertheless, let us recall the theory of finite elasticity with or without growth. It will be a way to introduce the main principles of application as well as the mathematical tools. A short presentation of the theory of swelling gels is also included to emphasize the difference between swelling and volumetric growth. ## III A basic introduction to nonlinear elasticity ### A brief reminder of the principles of linear elasticity Linear elasticity is limited to weak to moderate deformations corresponding to small strains, estimated by the ratio: deformation over a typical length of the deformed specimen. These deformations often occur under external loads, possibly under external fields such as temperature changes. Unlike other heuristic models, such as the Canham-Helfrich [78; 79] models for lipid membranes, elasticity requires knowledge of the initial shape of the body, which is assumed to be free of stress, and focuses on its deformation. Until recently, the goal was to explain and quantify the deformations of stiff materials: steel, wood, concrete, paper, nylon, etc., and their stiffness is usually given by the Young's modulus \(E\) in Pascals. 
For these materials, the value of \(E\) is on the order of \(10^{9}\) to \(10^{12}\) Pascals, which immediately indicates that it will be very difficult to stretch a cuboid by human force. Nevertheless, the field of linear elasticity remains very active: being closely related to geometry, any peculiarity attracts strong interest and curiosity, such as the crumpling of paper [80; 81; 82; 83], the formation of folds [84; 85; 86], or the science of origami [87; 88]. The linearity of the relationship between displacement and load does not automatically imply that the equilibrium equations are linear, as demonstrated by the Föppl-von Kármán equations, where the Hooke formalism is applied but the deformation is expanded to third order [89]. In particular, origami and paper-crumpling studies introduce geometric singularities that can be treated with linear elasticity [68], while folding involves nonlinear elasticity. The linearity of Hooke's law does not automatically imply simplicity of the theoretical treatment when the initial shape is complex. In fact, the formalism leads to partial differential equations, and this geometric complexity is also recovered in nonlinear elasticity. Thus, the main question is when nonlinear elasticity is required a priori.

### The principles of nonlinear elasticity

Once a material is soft, even very soft, with a Young's modulus \(E\) not greater than \(10^{5}\) Pascals, the displacement of any point of the sample under load can be of the order of the size of the original shape. Then, a precise description of the internal stresses and of the geometry of the deformations is required. Not all nonlinear descriptions of the elastic energy density \(W\) are possible, because they must satisfy strong mathematical properties dictated by the laws of mechanics, such as objectivity and convexity. Objectivity means that the elastic energy remains invariant under rigid rotation or translation. Convexity means that for small displacements \(u\), \(\delta W\sim\alpha u^{2}\) with \(\alpha>0\). We consider an undeformed body with no internal stresses, where each point \(M\) is represented by the capital letters \(M(X,Y,Z)\) (for simplicity, Cartesian coordinates are chosen and maintained throughout the manuscript). Then there exists a vectorial mapping function \(\chi\) that relates the new coordinates of the displaced point \(m\) to the coordinates of the original point, such that \(\vec{Om}=\vec{OM}+\vec{u}\), where \(\vec{u}\) is the displacement vector, with the same definition as in linear elasticity. One of the most important mathematical tools is the deformation gradient tensor, which reads: \[\mathbf{F}=\nabla\chi\quad\text{where}\quad F_{ij}=\frac{\partial x_{i}}{\partial X_{j}}=\delta_{ij}+\frac{\partial u_{i}}{\partial X_{j}}\,. \tag{1}\] The hyperelastic energy density \(W\) must respect spatial isotropy (if there is no preferred direction in the structure of the body) and be invariant under any change of the coordinate system. Consequently, it must be represented by the trace or determinant of tensors constructed with \(\mathbf{F}\). We start with the simplest invariants, the most common ones being defined with the right Cauchy tensor \(\mathbf{C}=\mathbf{F}^{\mathbf{T}}\mathbf{F}\) to satisfy the objectivity requirement: \[I_{1}=\mathrm{Tr}(\mathbf{C})\quad\mathrm{I}_{2}=\frac{1}{2}\left\{(\mathrm{Tr}(\mathbf{C}))^{2}-\mathrm{Tr}(\mathbf{C}^{2})\right\}, \tag{2}\] where \(I_{1}\) can be written as \(I_{1}=F_{ij}\cdot F_{ij}\), with summation on repeated indices.
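These invariants are easy to manipulate with a computer algebra system. As an illustration (our sketch, for the homogeneous triaxial stretch used later in section III.4), the following also evaluates the volumetric invariant \(I_{3}\) introduced just below:

```python
import sympy as sp

X, Y, Z = sp.symbols('X Y Z')
l1, l2, l3 = sp.symbols('lambda_1 lambda_2 lambda_3', positive=True)

# Homogeneous triaxial stretch: x = l1*X, y = l2*Y, z = l3*Z
x = (l1 * X, l2 * Y, l3 * Z)
F = sp.Matrix(3, 3, lambda i, j: sp.diff(x[i], (X, Y, Z)[j]))
C = F.T * F  # right Cauchy tensor

I1 = C.trace()
I2 = sp.expand((C.trace()**2 - (C**2).trace()) / 2)
I3 = F.det()

print(I1)  # lambda_1**2 + lambda_2**2 + lambda_3**2
print(I2)  # sum of pairwise products lambda_i**2 * lambda_j**2
print(I3)  # lambda_1*lambda_2*lambda_3
```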
The third invariant \(I_{3}=\)Det\((\mathbf{F})\) is related to the local volume variation and must be a positive number. Homogeneous hyperelastic energy densities are basically functions of these \(3\) invariants, but can also be restricted to two of them, generally \(I_{1}\) and \(I_{3}\), as for the neo-Hookean energy density, while a linear combination of \(I_{1}\), \(I_{2}\) and \(I_{3}\) is called the Mooney-Rivlin model. One may wonder how to recover the weakly nonlinear energy density described by the Lamé coefficients. The simplest way is to first define \(\mathbf{H}=\mathbf{F}-\mathbf{I}\), and then the elastic energy density \(W\) as \[W=\frac{1}{2}\left\{\mu_{L}(\mathrm{Tr}\left(\mathbf{H}^{\mathbf{T}}\cdot\mathbf{H}\right)+\mathrm{Tr}(\mathbf{H})^{2})+\lambda_{L}\mathrm{Tr}(\mathbf{H}^{2})\right\}. \tag{3}\]

Figure 1: In (A1) and (A2), compression of a parallelepiped specimen: micrographs of a pair of wrinkles from the front and from the top view at a strain level of \(45\%\), above the Biot threshold [75]. The critical strain for wrinkling is \(37.2\%\). In (A3), experimental herringbone array of a PDMS swelling gel sample, courtesy of Derek Breid [51; 52]. In (A4), confocal microscopy of elastomer surfaces under compressive strain with an initial thickness of \(23\mu\)m [76]. In (A5) and (A6), two optical micrographs of wrinkle growth for a gel containing \(15\) mol \(\%\) NaAc, obtained by cooling, with an initial thickness of \(15\mu\)m [77]: in (A5) from \(33.2^{\circ}C\) to \(31.7^{\circ}C\), in (A6) down to \(25^{\circ}C\). In (A7), creases in circular geometry: a pioneering experiment by T. Tanaka _et al._ on the swelling of an ionized acrylamide gel in water. In (A8), a ring of charged polyacrylamide gel (yellow) around a hard disk of neutral polyacrylamide gel (transparent) viewed from above: initial diameter \(50\) mm and imposed thickness \(1\) mm. The outer ring swells by immersion in distilled water; the swelling is highly inhomogeneous in this geometry. The inner disk acts as a constraint, and after the appearance of smooth wrinkles, creases develop at the outer boundary above a certain threshold of volume variation [14]. In (A9), the same experimental setup as in (A8) with a focus on a single cuspidal point [74]. For clarity, the attached line of fracture or refolding has been underlined in black. Note that it may appear as a self-contacting interface or as a fracture in compression [38].

Note that such a formulation is not suitable for incompressible materials, since the coefficient \(\lambda_{L}\) diverges. In fact, for incompressible materials, \(I_{3}=1\), a limit corresponding to a Poisson ratio \(\sigma=0.5\) in linear elasticity. If a preferred direction is present in the material, as is often the case in organs such as the heart, arteries, and skeletal muscles, more invariants are needed, indicating an increase in stiffness. These invariants will depend on \(\mathbf{C}\) and on the orientation of a unit vector \(\vec{e}_{0}\), which indicates the direction of the fibers, assuming that this direction is unique. The Helmholtz free energy for an incompressible sample is then \[\mathcal{E}=\iiint_{\Omega}dV\ W(I_{1},I_{2},I_{4},I_{5})-Q\ (I_{3}-1)\,, \tag{4}\] where \(dV\) is the volume element in the reference configuration and \(Q\) is a Lagrange multiplier that enforces the physical property of incompressibility. The energy density \(W\) is a positive scalar that vanishes for \(\mathbf{C}=\mathbf{I}\).
If a material is anisotropic in a single direction, defined by the unit vector \(\vec{e}_{0}\) in the reference configuration, then two invariants must be added, such as \(I_{4}\) and \(I_{5}\), given by \(I_{4}=\vec{e}_{0}\cdot(\mathbf{C}\vec{e}_{0})\) and \(I_{5}=\vec{e}_{0}\cdot(\mathbf{C}^{2}\vec{e}_{0})\) [90]. In the biological context, materials can have other directions of anisotropy, in which case other invariants are introduced with a new vector \(\vec{e}_{1}\). For compressible materials, the energy is composed of two terms: a volumetric term, which is a function of \(I_{3}\), \(\Psi(I_{3})\), and a strain energy function, where all components of the strains are divided by \(I_{3}^{1/3}\) in \(3D\), so \(\bar{I}_{1}=I_{1}/I_{3}^{2/3}\) and \(\bar{I}_{2}=I_{2}/I_{3}^{4/3}\): \[\mathcal{E}=\iiint_{\Omega}dV\ W(\bar{I}_{1},\bar{I}_{2})+\Psi(I_{3})\,. \tag{5}\] Note that in \(2\)D, the new strains are divided by \(\sqrt{I_{3}}\). Compressible elasticity leads to much more complex calculations in practice, and simpler models can be found in the literature [91], such as the compressible Mooney-Rivlin model [92]: \[\begin{cases}W_{MR}&=c_{1}(I_{1}-3)+c_{2}(I_{2}-3)+c(I_{3}-1)^{2}\\ &-2(c_{1}+2c_{2})Log(I_{3})\,.\end{cases} \tag{6}\] Finally, if an external mechanical load \(\vec{B}\) is applied to the bulk of the system and/or a traction \(\vec{T}\) to its surface, the work they exert on the sample must be added to eq.(4) or to eq.(6) according to: \[\mathcal{E}_{add}=-\iiint_{\Omega}dV\ \vec{B}\cdot\vec{x}-\iint_{\partial\Omega}d\mathcal{A}\ \vec{T}\cdot\vec{x}. \tag{7}\] Let us now derive the so-called constitutive equations, which are the counterpart of Hooke's law in the linear elasticity theory.

### Constitutive equations in finite elasticity and definition of the elastic stresses

The constitutive equation is the relation between the stress tensor \(\mathbf{S}\) and the gradient of the deformation tensor \(\mathbf{F}\), which can be obtained from the variation of the elastic energy. The Euler-Lagrange equations result from the extremum of \(\mathcal{E}+\mathcal{E}_{add}\) with respect to the variation of the new position \(\delta x\) and also of \(Q\). Mathematically, this reads: \[\delta[\mathcal{E}+\mathcal{E}_{add}](x,y,z;x_{i})=0\quad\text{and}\quad\delta\mathcal{E}(x,y,z;Q)=0\,, \tag{8}\] for arbitrary variations of \(x_{i}\) and \(Q\). As before, \(x_{i}\) means either \(x\), \(y\), or \(z\), which are the current coordinates of the displaced point \(m\), initially located at \(M\). Then \[\delta\mathcal{E}=\iiint_{\Omega}dV\left(\frac{\partial W}{\partial\mathbf{F}}-Q\,\mathbf{F}^{-\mathbf{T}}\right)\delta\mathbf{F}, \tag{9}\] where we have used the tensorial relation, valid for an arbitrary tensor \(\mathbf{A}\): \(\partial\,\text{Det}(\mathbf{A})/\partial\mathbf{A}=\text{Det}(\mathbf{A})\,\mathbf{A}^{-\mathbf{T}}\). We then derive the Piola stress tensor \(\mathbf{S}\) for an incompressible material: \[\mathbf{S}=\frac{\partial W}{\partial\mathbf{F}}-Q\,\mathbf{F}^{-\mathbf{T}}\,. \tag{10}\] Note that the Piola stress tensor, also called the first Piola-Kirchhoff stress tensor [91], is the transpose of the nominal stress tensor [93]. Once \(W\) is selected, this relation represents the constitutive relation of the material.
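As a quick worked example of eq.(10), added here for concreteness: for the incompressible neo-Hookean density \(W=\frac{\mu}{2}(I_{1}-3)\), one has \(\partial W/\partial\mathbf{F}=\mu\mathbf{F}\), so that \[\mathbf{S}=\mu\,\mathbf{F}-Q\,\mathbf{F}^{-\mathbf{T}}\,,\] which, for a diagonal \(\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},\lambda_{3})\), has components \(S_{i}=\mu\lambda_{i}-Q/\lambda_{i}\), the form used in section III.4 below.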
Since we must perform the variation with respect to the current position \(\vec{x}\) in the coordinate system of the reference configuration \(\vec{X}\), an integration by parts leads, for \(\delta\mathcal{E}+\delta\mathcal{E}_{add}\), to: \[\begin{cases}\delta\mathcal{E}+\delta\mathcal{E}_{add}=\iint_{\partial\Omega}d\mathcal{A}\ (-\vec{T}+\mathbf{S}\cdot\vec{N})\cdot\vec{\delta}x\\ -\iiint_{\Omega}dV\,(\operatorname{Div}\mathbf{S}+\vec{B})\cdot\delta\vec{x}=0\,.\end{cases} \tag{11}\] When the equilibrium is reached: \[\operatorname{Div}\mathbf{S}+\vec{B}=0,\quad\mathbf{S}\cdot\vec{N}=\vec{T}\,. \tag{12}\] The Piola stress tensor \(\mathbf{S}\) is not the only stress that can be defined in finite elasticity. In fact, by definition, a stress is the ratio between a force and a surface, and its value is not the same in the reference configuration as in the current configuration, where the Cauchy stress is evaluated, according to: \[\iint_{\partial\Omega}d\mathcal{A}\ (\mathbf{S}.\vec{N})=\iint_{\partial\Omega}d\,a\ (\mathbf{\sigma}.\vec{n})\,. \tag{13}\] Using Nanson's formula, \(da\,\vec{n}=d\mathcal{A}\,\text{Det}(\mathbf{F})\,\mathbf{F}^{-\mathbf{T}}\vec{N}\), we obtain the Cauchy stress \(\mathbf{\sigma}\): \[\mathbf{\sigma}=\operatorname{Det}(\mathbf{F})^{-1}\mathbf{S}\mathbf{F}^{\mathbf{T}}\quad\text{and}\quad\mathbf{S}\mathbf{F}^{\mathbf{T}}=\mathbf{F}\mathbf{S}^{\mathbf{T}}\,. \tag{14}\] The Cauchy stress is required to be symmetric, unlike \(\mathbf{S}\), and the last equality expresses this symmetry in terms of the Piola stress tensor \(\mathbf{S}\), which is not itself symmetric. Note that although in this section the determinant of \(\mathbf{F}\) is equal to one, we keep this notation, which will change when growth is considered. In the literature and in classical textbooks (see [91; 21; 93] for instance) there are alternative stress tensors, all of which are related to the Piola stress tensor, in contrast to linear elasticity, where a single stress tensor suffices. Relations between them can be established as soon as \(\mathbf{F}\) is known.

### Simple geometry and stretches

When the specimen geometry is simple, such as a cube, a cylinder or a sphere, the deformation gradient tensor can be diagonal in the corresponding coordinate system, and the equations of elasticity become simpler if the deformations follow the same symmetry. Let us start with a parallelepiped with coordinates \(0<X<L_{X},0<Y<L_{Y},0<Z<L_{Z}\), subjected to a compressive force on the two opposite faces normal to \(\vec{e}_{Y}\) (see Fig.(2)). In this case, we expect a simple deformation \(x=\lambda_{1}X,y=\lambda_{2}Y\) and \(z=\lambda_{3}Z\), and the diagonal tensors \(\mathbf{F}\) and \(\mathbf{S}\) are easily obtained: \[\begin{cases}\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},\lambda_{3}),\\ \text{and}\\ \mathbf{S}=\mathrm{Diag}(\frac{\partial\mathrm{W}}{\partial\lambda_{1}}-\frac{\mathrm{Q}}{\lambda_{1}},\frac{\partial\mathrm{W}}{\partial\lambda_{2}}-\frac{\mathrm{Q}}{\lambda_{2}},\frac{\partial\mathrm{W}}{\partial\lambda_{3}}-\frac{\mathrm{Q}}{\lambda_{3}})\,,\end{cases} \tag{15}\] where \(\mathbf{S}\) follows the definition of eq.(10). In this simple geometry and for constant values of \(\lambda_{i}\), \(\mathbf{S}\) is diagonal with constant components, so it automatically satisfies the equilibrium equation eq.(12) in the absence of an internal mechanical load \(\vec{B}\). The eigenvalues of \(\mathbf{F}\) are called stretches.
Since there is no force acting on the surfaces perpendicular to \(\vec{e}_{X}\) and \(\vec{e}_{Z}\), the Lagrange parameter \(Q\) satisfies \[Q=\lambda_{1}\frac{\partial\mathrm{W}}{\partial\lambda_{1}}\quad\text{and}\quad Q=\lambda_{3}\frac{\partial\mathrm{W}}{\partial\lambda_{3}}\,. \tag{16}\] For an isotropic sample, \(W\) is a symmetric function of the stretches \(\lambda_{i}\), and there is no reason to distinguish between the two directions \(1\) and \(3\), so \(\lambda_{1}=\lambda_{3}=1/\sqrt{\lambda_{2}}\) due to the assumption of incompressibility. After applying a compressive load, we finally get: \[\frac{\partial\mathrm{W}}{\partial\lambda_{2}}-\frac{\lambda_{1}}{\lambda_{2}}\frac{\partial\mathrm{W}}{\partial\lambda_{1}}=-P_{0}\,. \tag{17}\] Assuming a neo-Hookean material with the shear modulus \(\mu\) chosen as the unit of stress, the energy density is \(W=1/2(I_{1}-3)\) and the stretch \(\lambda_{2}\) is the solution of the cubic equation: \[\lambda_{2}^{3}+P_{0}\lambda_{2}^{2}-1=0, \tag{18}\] which has a unique positive real root: \(\lambda_{2}\sim 1-P_{0}/3\) for small \(P_{0}\), while for large compression the stretch approaches zero as \(\lambda_{2}\sim 1/\sqrt{P_{0}}\). Note the simplicity of the derivation of such a solution, which, however, implies that the points at the bottom of the cube can slide freely, without friction.

## IV Competition between elasticity and continuous fields

Independently of local forces applied to the surface, the shape of a body can change under various external fields, and elasticity may be only one cause of the deformation among others. The nonlinear elastic formalism explained above concerns only a part of the global visible deformation, and in practice it is not so easy to separate the elastic part from the overall shape change. In the case of volumetric growth, each small piece of the sample which initially has a volume \(\delta\Omega\) becomes \(\delta\omega\) after a growth or drying process, resulting in a change in the total volume but also in a change in shape or morphology. In the following, the word growth will be used to refer to either an increase or a decrease in volume. Furthermore, growth can refer to cell proliferation, as in embryos, or to the swelling of gels, as already shown in the experiments mentioned in section II. It can also refer to drying or volume decrease. To separate the growth from the elastic deformation, we keep the definition of the mapping \(\chi\) between the initial state and the observed state at time \(t\), as defined in eq.(1). This mapping gives only geometric information, and we split the tensor \(\mathbf{F}\) into two components: a tensor \(\mathbf{G}\) mimicking the growth and a tensor \(\mathbf{F_{e}}\) for the elasticity, so that: \[\mathbf{F}=\mathbf{F_{e}G}\ \text{ so }\quad\mathbf{F_{e}}=\mathbf{F}\mathbf{G^{-1}}. \tag{19}\] This relation, inspired by plasticity modeling and proposed in biomechanics by Rodriguez _et al._, is local, and \(\mathbf{G}\) is the growth tensor at a point \(M\) of the sample, obtained after a period \(t\). This law is cumulative in time, meaning that the determinant of \(\mathbf{G}\) gives the local amount of growth variation between the initial time and the time of observation. This approach assumes that transient states are quickly eliminated, leaving a slowly evolving, adiabatic growth state in which time is merely an index.
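As an aside, returning to the compressed cuboid of section III.4: the root of eq.(18) and its two asymptotic limits are easy to check numerically. A small sketch of ours:

```python
import numpy as np

def lambda2(P0):
    """Unique positive real root of eq. (18): l**3 + P0*l**2 - 1 = 0."""
    roots = np.roots([1.0, P0, 0.0, -1.0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[real > 0][0]

for P0 in (0.01, 0.1, 10.0, 100.0):
    # compare the exact root with 1 - P0/3 (small P0) and P0**-0.5 (large P0)
    print(P0, lambda2(P0), 1 - P0 / 3, P0 ** -0.5)
```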
Although not intuitive, the multiplicative decomposition of eq.(19) actually makes it possible to translate quantitatively some aspects of biological growth, such as inhomogeneity, but also growth anisotropy: \(\mathbf{G}\) is a tensor, so it simultaneously represents \(3\) directions and \(3\) eigenvalues, each of them associated with a direction. A question that immediately comes to mind is the order of the two tensors \(\mathbf{F_{e}}\) and \(\mathbf{G}\) when they do not commute. This and other questions have been discussed, see [95]. A physicist would argue that, since the stresses are due to the growth, the position of \(\mathbf{G}\) is obviously on the right side. Another difficult problem arises simply from the fact that growth is often associated with a process defined per unit time and may be better represented in an Eulerian description, while here we are faced with a Lagrangian formulation that relates an initial state to a current state at time \(t\). This approach more or less intuitively assumes that the time scale of growth is extremely long compared to any time scale at the origin of dissipation, reorganization, or remodeling of the samples [96]. Despite its apparent conceptual simplicity, this formalism has generated significant contributions in embryogenesis and morphogenesis, and also in the description of various pathologies such as wound healing, fibrosis and tumorigenesis. As suggested by eq.(19), growth induces stresses, and thus not only a change in volume but also a change in shape, and one may wonder if this is always the case. In the next section, we will examine the origin of the stresses induced by growth.

Figure 2: On the left, a schematic representation of a soft material in blue, in the initial configuration; on the right, the same sample in the current configuration. A normal pressure \(P_{0}\) is applied to both surfaces (X,Z) on the left, which becomes \(p_{0}\) on the right. For clarity, only one side is shown. To emphasize the deformation, a non-deformable substrate is shown in black, and it is assumed that the sample slides on this substrate without friction. Note that the pressures in the reference and current configurations are different due to the expansion of the lateral surfaces.

### The origin of the elastic stresses

#### iv.1.1 Growth without stress generation

Materials can grow without stress if they can follow and adapt themselves to the imposed growth tensor. This is possible if there are no boundary conditions restricting the growth. Homogeneous growth \(\mathbf{G}=G_{0}\mathbf{I}\) of a spherical object (without weight) does not generate any stress in the material. If the growth tensor is more complex, e.g. inhomogeneous and anisotropic, the shape of the body will change as it grows. The question of a stress-free process has recently been explored [97; 98] and examples from living systems have been given. If the deformation of the body can exactly follow the growth process, then \(\mathbf{F}=\mathbf{G}\), independently of the material properties of the body. Such a relation allows one to obtain the tensor \(\mathbf{G}\) and thus the properties of the growth, which are mostly unknown for macroscopic samples. This process requires the absence of constraints from boundaries and external forces such as gravity. The best example is given by fresh planar leaves [99]. To verify such a hypothesis, one possible test is to cut the material at right angles. If there is no crack opening, then the material is considered stress-free.
When the leaves have in-plane residual stresses due to growth, they cannot remain planar, as shown in [74], and they buckle. Recently, a general proof of stress-free growth by conformal mapping was given [100].

#### iv.1.2 Constrained growth process

Obviously, the main source of stress comes from boundary conditions, especially from rigid walls. Imagine a parallelepiped where only one side is rigidly attached to a substrate, so that it cannot evolve freely. This is the case with gels, where the polymer chains adhere to the substrate and thus mimic clamped conditions. But it is also the case of parallel layers with different elastic properties that are attached to each other and grow according to their own rules. The best examples concern growing epithelia, always connected to their ECM (extracellular matrix), such as the imaginal disc of the Drosophila wing [28], the skin layers of the epidermis [101; 102], and also the cortex of the brain connected to the white matter [103; 22] in the embryonic period. Finally, it is known that life is compatible with elastic stresses, which is the basis of the criterion of homeostasis for mammals: compressive stress above the homeostatic pressure reduces cell proliferation, while tensile stress favors proliferation.

### Volumetric Growth and elasticity

The elasticity invariants defined in eq.(2) refer to the elastic tensor \(\mathbf{F_{e}}\) and not to the deformation gradient \(\mathbf{F}\). \(\mathcal{E}\) must now take into account the growth per unit volume of the sample, which is represented by \(\mbox{Det}\;(\mathbf{F})=\mbox{Det}\;(\mathbf{G})=J\) for an incompressible material, and the elastic energy becomes \[\mathcal{E}=\iiint_{\Omega}dV\;J\left\{W(I_{1},I_{2},I_{4},I_{5})-Q\;(I_{3}-1)\right\}. \tag{20}\] The invariants are given by eq.(2), where \(\mathbf{F}\) is replaced by \(\mathbf{F_{e}}\). In eq.(20) the growth appears explicitly through the factor \(J\), which indicates that the material volume has changed, and implicitly in the substitution of \(\mathbf{F}\) by \(\mathbf{F_{e}}\) in all the invariants. If we also consider this substitution in the definition of \(\mathbf{S}\) in eq.(10) and of \(\sigma\) in eq.(14), we have \[\mathbf{S}=J\mathbf{G^{-1}}\left\{\frac{\partial W}{\partial\mathbf{F_{e}}}-Q\,\mathbf{F_{e}^{-T}}\right\};\;\mathbf{\sigma}=\mathbf{F_{e}}\frac{\partial W}{\partial\mathbf{F_{e}}}-Q\,\mathbf{I}. \tag{21}\] In contrast to the Piola stress tensor \(\mathbf{S}\), the Cauchy stress \(\sigma\) shows no signature of the growth, which can be surprising. At this stage, it is important to emphasize that, first, these two tensors are not defined in the same coordinate basis, and second, only forces acting on a given surface are invariant quantities, as will be shown later. To illustrate this paragraph, we consider an anisotropic growth process in the example of section III.4. There is no change in the elastic stretches, except that the compressive loading \(P_{0}\) becomes \(P_{G}=P_{0}(g_{1}g_{3})\) if we want to keep the same stress level. The stretches do not change, and \(\lambda_{2}\) is still the solution of eq.(18) with \(P_{0}\). However, due to the growth, the new coordinates will be \(x_{i}=\lambda_{i}g_{i}X_{i}\). Now consider the case where the bottom surface of the cuboid is attached to a rigid substrate, assuming anisotropic growth but no applied external compressive stress. Then, for \(X=0\), the points of this surface cannot move, and \(y=Y\) and \(z=Z\).
If no displacement is possible in the \(Y\) and \(Z\) directions, the simplest choice is to impose the same rules, \(y=Y\) and \(z=Z\), everywhere in the sample, so that the only allowed displacements are in the \(X\) direction, with \(x=JX\). The elastic stretches are then: \[\lambda_{2}=\frac{1}{g_{2}};\quad\lambda_{3}=\frac{1}{g_{3}};\quad\lambda_{1}=\frac{J}{g_{1}}=g_{2}g_{3}. \tag{22}\] According to eq.(21), the Piola stress at the top in the neo-Hookean approach becomes: \[S_{1}=g_{2}g_{3}\left(g_{2}g_{3}-Q\frac{1}{g_{2}g_{3}}\right)=0;\ Q=g_{2}^{2}g_{3}^{2}. \tag{23}\] In both horizontal directions, we have: \[S_{2}=g_{1}g_{3}\left(\frac{1}{g_{2}}-g_{2}^{3}g_{3}^{2}\right);S_{3}=g_{1}g_{2}\left(\frac{1}{g_{3}}-g_{2}^{2}g_{3}^{3}\right). \tag{24}\] Note that the horizontal stresses are compressive as soon as \(g_{i}>1\), indicating that compressive stresses must be applied to the vertical faces at \(\pm L_{Y}\) and at \(\pm L_{Z}\) to maintain such a deformation. Another possibility is an infinite sample in the \(Y\) and \(Z\) directions. However, growth can also induce a buckling instability, which will be studied in detail in the following. When buckling occurs, this simple choice of deformations must be modified, but the main deformation remains at low stress levels above the buckling threshold. In conclusion, a substrate that prohibits any displacement at the bottom of the parallelepiped is an obstacle to free growth, at the origin of compressive stresses and eventually of a shape bifurcation.

## V Swelling of gels

Swelling hydrogels have the advantage of mimicking the mechanical behavior of growing soft tissue while being precisely controllable. They consist of reticulated networks of polymer chains with a high proportion of small solvent molecules. A phase transition can be induced in the sample when it comes into contact with a reservoir of solvent, resulting in an impressive increase in volume. Although they are perfect candidates for mimicking growing tissues, growth and swelling have different microscopic origins. A swollen hydrogel is a system in both mechanical and thermodynamic equilibrium, and the swelling does not produce any new polymeric components; the polymer chains constitute the only elastic phase and become increasingly dilute during the swelling process. In addition, the solvent has no reason to be uniformly distributed in the sample. For this reason, different poroelastic models have been proposed for the swelling [104; 105; 106], but also for plant or animal tissues [107; 108; 109; 110; 111]. Here, we choose a presentation by Hong et al. [6; 71], slightly modified to be as close as possible to section IV. At equilibrium, the minimization concerns the grand potential \(\hat{\mathcal{W}}=\mathcal{W}(\mathbf{F},C)-\mu C\), where \(C\) is the solvent concentration and \(\mu\) is the chemical potential: \(\mu=\partial\mathcal{W}(\mathbf{F},C)/\partial C\). If the gel is in contact with a reservoir full of solvent, then at the interface between the reservoir and the swelling gel the chemical potentials of the two phases are equal: \(\mu=\mu_{s}\). If molecular incompressibility is assumed, so that the gel volume can be considered as the sum of the volumes of its incompressible components, then \(C\) is related to \(\text{Det}(\mathbf{F})\) by the relation \(\text{Det}(\mathbf{F})=1+\nu C\), where \(\nu C\) is simply the ratio between the volume of the solvent molecules and that of the dry matrix.
Obviously, although the experiments on swelling gels are easier to perform and show interesting patterns similar to those observed in nature, we are still faced with two coupled fields: the elastic and the chemical one. Let us consider the variation of the free energy density: \[\delta\hat{\mathcal{W}}=\delta W(\mathbf{F},C)-\mu\delta C=\frac{\partial W}{\partial\mathbf{F}}\delta\mathbf{F}+\left(\frac{\partial W}{\partial C}-\mu\right)\delta C\,, \tag{25}\] where \(\delta C\) is replaced by \(\delta\,\text{Det}(\mathbf{F})/\nu=\text{Det}(\mathbf{F})\,\mathbf{F}^{-\mathbf{T}}\delta\mathbf{F}/\nu\). Then, the corresponding stress becomes: \[\mathbf{S}=\frac{\partial W}{\partial\mathbf{F}}+\frac{1}{\nu}\left(\frac{\partial W}{\partial C}-\mu\right)\text{Det}(\mathbf{F})\mathbf{F}^{-\mathbf{T}}\,. \tag{26}\] The free energy density \(W(\mathbf{F},C)\) is often represented as the sum of two components, \(W_{e}(\mathbf{F})\) and \(W_{c}(C)\), where the first represents the elastic energy of the polymer matrix and the second the mixing contribution, which depends only on \(C\). For \(W_{e}(\mathbf{F})\), a classical formulation due to Flory and Rehner [112] leads to: \[W_{e}(\mathbf{F})=\frac{1}{2}NkT\left(I_{1}-3-2Log(\lambda_{1}\lambda_{2}\lambda_{3})\right), \tag{27}\] for a compressible polymer matrix that satisfies the neo-Hookean elasticity, \(N\) being the number of polymer chains, while for \(W_{c}(C)\) we have: \[W_{c}(C)=-\frac{kT}{\nu}\left(\nu C\,Log[\frac{(1+\nu C)}{\nu C}]+\frac{\Upsilon}{1+\nu C}\right). \tag{28}\] If we consider the case of a cuboid with clamped conditions at the bottom, then we can again imagine diagonal strain and stress tensors with \(\lambda_{2}=\lambda_{3}\) and \(\mathbf{F}_{11}=\lambda_{1}\), so that \[S_{1}=NkT\left\{\lambda_{1}-\frac{1}{\lambda_{1}}-\frac{1}{N\nu}\lambda_{2}^{2}\left(w^{\prime}+\frac{\mu}{kT}\right)\right\}=0\,, \tag{29}\] \[S_{2}=NkT\left\{\lambda_{2}-\frac{1}{\lambda_{2}}-\frac{1}{N\nu}\lambda_{1}\lambda_{2}\left(w^{\prime}+\frac{\mu}{kT}\right)\right\}, \tag{30}\] with \[w^{\prime}=-\left(Log(\frac{\lambda_{1}-1}{\lambda_{1}})+\frac{1}{\lambda_{1}}+\frac{\Upsilon}{\lambda_{1}^{2}}\right), \tag{31}\] and a similar result for \(S_{3}\), which is equal to \(S_{2}\). The relative increase of the height \(\lambda_{1}\) in the vertical direction leads to a compressive stress in the horizontal directions, at the origin of the buckling of the sample. Here the control parameter is \(\mu/\nu\), at the origin of the swelling/deswelling. Although there is an analogy between volumetric growth and swelling, the theoretical approach will be more uncertain in the second case and also more dependent on the experimental conditions. Therefore, for our purposes, and in the following, we will restrict ourselves to the simplest initial geometry and suggest how we can interpret the experiments shown in section II.

## VI Biot's theory applied to rubber in compression versus volumetric growth

### Compression and critical instability threshold

Thick samples can buckle under compression. This instability occurs when the compressive stresses due to the load reach a threshold value. In fact, as mentioned in section II, experimentalists often characterize buckling by the compressive strain \(\lambda_{2}\) rather than by the compressive load. Indeed, the strain, being the ratio of the deformed length of the specimen to its initial length, is more easily evaluated.
Biot has studied this buckling instability in detail, in particular for the neo-Hookean and Mooney-Rivlin models, for a semi-infinite sample presenting a free surface and subjected to a lateral compression, which we will call \(P_{0}\). This simple geometry allows a diagonal representation of the strains and stresses before the bifurcation, and this instability is often called a surface instability because it is more easily observed at the surface. His proof concerns a simple plane strain instability controlled by a parameter \(\xi\), above which the simple diagonal representation ceases to be valid. \(\xi\) and its critical value \(\xi_{B}\) are given by: \[\xi=\frac{\lambda_{1}^{2}-\lambda_{2}^{2}}{\lambda_{1}^{2}+\lambda_{2}^{2}}\quad\text{and}\quad\xi_{B}=0.839287. \tag{32}\] For the neo-Hookean model, Biot [23] has established the following relation for \(\xi_{B}\): \[\mathcal{Q}_{B}=\xi_{B}^{3}+2\xi_{B}^{2}-2=0\,. \tag{33}\] We will consider three different cases; the first two were considered in [113]. The stresses are defined in the current configuration and \(\mathbf{\sigma}\) represents the Cauchy stress. In the following three cases there is no stress on the top free surface, which leads to \(\sigma_{1}=\lambda_{1}^{2}-Q=0\) when the shear modulus is chosen as unity: \(\mu=1\). It gives \(Q=\lambda_{1}^{2}\). Remember that in this case \(\sigma_{1}-\sigma_{i}=-\sigma_{i}=\lambda_{1}^{2}-\lambda_{i}^{2}\) for \(i=2\) or \(3\).

#### vi.1.1 Case one

We assume that there is no strain in the \(Z\) direction and \(\lambda_{3}=1\): \[\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},1);\ \mathbf{\sigma}=\mathrm{Diag}(0,\lambda_{2}^{2}-\lambda_{1}^{2},1-\lambda_{1}^{2}). \tag{34}\] With this choice, incompressibility imposes \(\lambda_{2}=1/\lambda_{1}\) and the parameter \(\xi\) becomes: \[\xi=\frac{\lambda_{1}^{2}-1/\lambda_{1}^{2}}{\lambda_{1}^{2}+1/\lambda_{1}^{2}}\quad\text{so}\quad\lambda_{1}=\left(\frac{1+\xi}{1-\xi}\right)^{1/4}. \tag{35}\] At the threshold of stability, the values of the stretches are then given by \(\xi=\xi_{B}\), and \(\lambda_{1}=1.839287\) so \(\lambda_{2}=0.543689\), and compressive stresses occur in both directions: in \(Y\) with \(\sigma_{2}=-3.0874\) and in \(Z\) with \(\sigma_{3}=-2.38298\).

#### vi.1.2 Case two

Choosing now \(\lambda_{1}=\lambda_{3}\): \[\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},\lambda_{1})\quad\mathbf{\sigma}=\mathrm{Diag}(0,\lambda_{2}^{2}-\lambda_{1}^{2},0)\,. \tag{36}\] With this choice, the incompressibility imposes \(\lambda_{2}=1/\lambda_{1}^{2}\) and the parameter \(\xi\) and \(\lambda_{1}\) become: \[\xi=\frac{\lambda_{1}^{2}-1/\lambda_{1}^{4}}{\lambda_{1}^{2}+1/\lambda_{1}^{4}}\quad\text{so}\quad\lambda_{1}=\left(\frac{1+\xi}{1-\xi}\right)^{1/6}\,, \tag{37}\] which gives the instability when \(\lambda_{1}=1.50118\) and \(\lambda_{2}=0.443746\). The compressive stress occurs only in the \(Y\) direction, with \(\sigma_{2}=-2.05663\).

#### vi.1.3 Case three

Finally, for the third case, we assume that the compressive loads act similarly in both directions \(Y\) and \(Z\): \[\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},\lambda_{2});\ \mathbf{\sigma}=\mathrm{Diag}(0,\lambda_{2}^{2}-\lambda_{1}^{2},\lambda_{2}^{2}-\lambda_{1}^{2}). \tag{38}\] With this choice, incompressibility imposes \(\lambda_{2}=1/\sqrt{\lambda_{1}}\) and the parameter \(\xi\) and \(\lambda_{1}\) become: \[\xi=\frac{\lambda_{1}^{2}-1/\lambda_{1}}{\lambda_{1}^{2}+1/\lambda_{1}}\quad\text{so}\quad\lambda_{1}=\left(\frac{1+\xi}{1-\xi}\right)^{1/3}, \tag{39}\] which gives the instability when \(\lambda_{1}=2.25354\) and \(\lambda_{2}=0.666142\), and equal compressive stresses in the \(Y\) and \(Z\) directions: \(\sigma_{2}=\sigma_{3}=-4.6347\). Note that this last case is not considered by Biot.
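These critical values can be verified numerically. The following sketch (an illustration added here, not part of the original analysis) solves eq.(33) for \(\xi_{B}\) and propagates it through eqs.(35), (37) and (39), with the shear modulus set to unity as in the text:

```python
import numpy as np

# xi_B is the real root of xi^3 + 2*xi^2 - 2 = 0, eq.(33)
roots = np.roots([1.0, 2.0, 0.0, -2.0])
xi_B = min(roots, key=lambda r: abs(r.imag)).real   # ~0.839287

# lam1 = ((1+xi)/(1-xi))**e from eqs.(35),(37),(39);
# lam2 = lam1**p follows from incompressibility in each case
cases = {"case one": (1/4, -1.0), "case two": (1/6, -2.0), "case three": (1/3, -0.5)}
for name, (e, p) in cases.items():
    lam1 = ((1 + xi_B) / (1 - xi_B)) ** e
    lam2 = lam1 ** p
    sigma2 = lam2**2 - lam1**2                      # Cauchy stress with mu = 1
    print(f"{name}: lam1={lam1:.6f}, lam2={lam2:.6f}, sigma2={sigma2:.4f}")
# expected: 1.839287 / 0.543689 / -3.0874;  1.501180 / 0.443746 / -2.0566;
#           2.253540 / 0.666142 / -4.6347
```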
### Semi-infinite samples under volumetric growth

As shown earlier, the Biot instability is mostly controlled by the strains, which are directly observable; for a solid under compression there is no difference between the elastic and the geometric strains, as opposed to growth. Assuming that the previous analysis remains valid, we will try to apply the Biot approach to volumetric growth. To do so, we will reconsider the three cases defined above.

#### vi.2.1 Case one

This case concerns \(\lambda_{3}=1\), which means that in this direction the displacement is equal to growth. Then the critical elastic strains evaluated in section VI.1.1 are equal to \(\lambda_{1}\sim 1.839\) and \(\lambda_{2}=0.543689\). There are several cases depending on how the growth is organized in the sample. For isotropic growth without displacement in the \(Y\) direction, we have \(x=JX\), \(y=Y\) and \(z=gZ\) with \(\lambda_{1}=J/g\), \(\lambda_{2}=1/g\) and \(J=g^{2}\). So the expansion in the \(X\) and \(Z\) directions at criticality is \(g=1.839\). These values were determined directly in [19] and are recovered in a different way in section VII. The compressive stresses in the \(Y\) and \(Z\) directions become: \(\sigma_{2}=-3.0874\) and \(\sigma_{3}=-2.383\). \(J\) can be evaluated by noting that \(\xi=(J^{2}-1)/(J^{2}+1)\), which once introduced into eq.(33) leads to the polynomial for \(J_{B}\): \[\mathcal{Q}_{J}=J_{B}^{3}-3J_{B}^{2}-J_{B}-1=0;\ J_{B}=3.38298\,. \tag{40}\] This configuration will be examined in detail in all the following sections.

#### vi.2.2 Case two

This case concerns the growth of a sample with two stress-free sides. Assuming \(x=J_{1}X\), \(y=J_{2}Y\) and \(z=J_{1}Z\), then at the threshold \(\lambda_{1}=J_{1}/g=\lambda_{3}=1.5012\) and \(\lambda_{2}=J_{2}/g=0.4437\), with \(g\) defined as \(g^{3}=J_{1}^{2}J_{2}\). There is only a compressive stress in the \(Y\) direction, with the same value as in section VI.1.2: \(\sigma_{2}=-2.0567\).

#### vi.2.3 Case three

In this case it is assumed that \(x=J_{1}X\), \(y=J_{2}Y\) and \(z=J_{2}Z\). If the displacement is forbidden along the \(Y\) and \(Z\) directions, then \(J_{2}=1\) and \(J=J_{1}=g^{3}\). \[\begin{cases}\mathbf{G}=\mathrm{Diag}(g,g,g);\\ \mathbf{F}=\mathrm{Diag}(g^{3},1,1);\\ \mathbf{F}_{\mathbf{e}}=\mathrm{Diag}(g^{2},\frac{1}{g},\frac{1}{g});\\ \boldsymbol{\sigma}=\mathrm{Diag}(0,\frac{1}{g^{2}}-g^{4},\frac{1}{g^{2}}-g^{4})\,.\end{cases} \tag{41}\] This unidirectional growth process produces lateral compressive stresses when \(g\) and \(J_{1}\) are greater than one. In the opposite case, \(J_{1}<1\), the stresses are tensile. This case is similar to eq.(38) and \[\xi_{B}=\frac{g^{4}-1/g^{2}}{g^{4}+1/g^{2}}=\frac{J_{1}^{2}-1}{J_{1}^{2}+1}\,. \tag{42}\] At the threshold, expressing \(\xi_{B}\) in terms of \(J_{B}\) in eq.(33), we obtain the critical threshold for such a growth process, given by: \[\mathcal{Q}_{J}=J_{B}^{3}-3J_{B}^{2}-J_{B}-1=0\,. \tag{43}\] The solution is \(J_{B}=3.38298\); the critical stretches are then \(\lambda_{1}=2.25354\) and \(\lambda_{2}=0.666142\). Note that we recover the same threshold for the growth parameter as in section VI.2.1.
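As a quick consistency check (an added illustration), the real root of eqs.(40) and (43) can be computed numerically and compared with the closed form quoted later in eq.(61):

```python
import numpy as np

# J_B: real root of J^3 - 3*J^2 - J - 1 = 0, eqs.(40) and (43)
roots = np.roots([1.0, -3.0, -1.0, -1.0])
J_B = min(roots, key=lambda r: abs(r.imag)).real
print(J_B)  # ~3.38298

# closed-form expression quoted in eq.(61)
s = np.sqrt(33.0)
J_closed = (3 + 6 ** (1 / 3) * ((9 - s) ** (1 / 3) + (9 + s) ** (1 / 3))) / 3
assert abs(J_B - J_closed) < 1e-9
```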
Growth anisotropy increases the space of possible instability parameters. Here we limit ourselves to three cases and restrict ourselves to homogeneous growth. The Biot instability is generic, but depending on the situation, the thresholds can be different and must be evaluated each time. In the following, we will consider only one case with a different theoretical approach, without using Biot's method, which imposes a critical parameter \(\xi_{B}\) [23]. We prefer a presentation in terms of variational analysis.

## VII Growth of a semi-infinite sample

It is impossible to list all the publications on volumetric growth in soft matter. While growing layers, multilayers, shells, disks and spheres are the most frequently chosen geometries [21], numerical treatments with advanced finite-element software make it possible to represent a variety of shapes closer to reality [114]. Our purpose is different, since we want to give exact results with the simplest and standard hyperelastic model, that is the neo-Hookean model [93; 21; 91] for incompressible materials. In addition, instead of considering all possible growth processes that can be found in nature, anisotropic [7] or space dependent [97; 98], we focus on a spatially constant growth that evolves on a rather long time scale, in order to neglect any transient dynamics. Since elasticity goes hand in hand with geometry [89], we start with the geometry of the sample to fix the notations used in the following.

### The geometry

We consider a semi-infinite growing sample bounded by the plane \(X=0\), infinite in the positive \(X\) direction and extending laterally between \(-\infty,\infty\) in the \(Y\) and \(Z\) directions. We assume \(\lambda_{3}=1\), so that no elastic strain exists in the third direction. The growth is assumed to be isotropic and homogeneous, with a constant relative volume expansion \(g^{3}\). Due to the Biot instability (see the previous section), periodic patterns will appear on top of the sample with a spatial periodicity \(\Lambda\), chosen as the length unit. This geometry orients the growth mostly in the \(X\) direction, and the new position of an arbitrary material point inside the sample leads to compressive stresses in the \(Y\) direction, as described before in section VI.2.1. Thus, defining a Cartesian coordinate system \(X,Y\) in the initial configuration, the position of each point after growth and the elastic deformation becomes \(x\sim JX\) and \(y\sim Y\) in leading order, where \(J=J_{2D}=g^{2}\) is the relevant two-dimensional expansion. Since an adiabatic approach to the growth process is assumed, i.e. transient deformations are quickly eliminated, a free energy describes the possible patterns resulting from a symmetry breaking. Our approach, seldom followed in the mechanics community, will be based on energy variation and will avoid tensorial algebra.

### The variational method based on the free energy minimization

#### vii.2.1 The free energy: elasticity and capillarity

The Euler-Lagrange equations, or the equilibrium equations, result from the extremum of the free energy, the sum of the elastic and possibly the surface energy.
Assuming a perfect periodicity of the patterns, we make a virtual partition of the initial domain into stripes of unit width and focus on \(\mathcal{P}\), the domain between \(-1/2<Y<1/2\); see the blue domains in Fig.(3). The neo-Hookean model depends on only two invariants: \(I_{1}\) for the elastic deformations and \(I_{3}\) for the relative volume change due to elastic stresses, which we renormalize into the geometric invariants \(\tilde{I}_{1}=JI_{1}\) and \(\tilde{I}_{3}=JI_{3}\): \[\tilde{I}_{1}=x_{X}^{2}+x_{Y}^{2}+y_{X}^{2}+y_{Y}^{2}-2J;\quad\tilde{I}_{3}=x_{X}y_{Y}-y_{X}x_{Y}-J\,, \tag{44}\] where the subscript \(X\) (resp. \(Y\)) denotes the partial derivative of any function with respect to the variable \(X\) (resp. \(Y\)). The invariants \(I_{1}\) and \(I_{3}\) have already been defined in section III.2. The energy unit is chosen as the product \(\mu\cdot(\Lambda^{2}t_{3})\), where \(t_{3}\) is the thickness of the sample in the orthogonal direction, which is irrelevant for plane-strain deformations, and the elastic energy of a single strip reads: \[\mathcal{E}_{e}=\frac{1}{2}\iint_{\mathcal{P}}dS\left(\tilde{I}_{1}-2Q\tilde{I}_{3}\right). \tag{45}\] The Lagrange parameter \(Q\) is also a function of \(X\) and \(Y\), fixing the incompressibility constraint \(I_{3}=1\) or \(\tilde{I}_{3}=0\), and \(dS=dXdY\). The capillary energy is often written in Eulerian coordinates: \[\tilde{\mathcal{E}}_{c}=\gamma_{0}\int_{\partial\mathcal{P}}dy\sqrt{1+x_{y}^{2}}\,. \tag{46}\] Considering the upper boundary \(\partial\mathcal{P}\): \[X=0;\quad Y\in[-1/2,1/2],\] where the capillary energy is defined, the following relations hold: \[dy=\frac{\partial y}{\partial Y}|_{{}_{X=0}}\,dY\quad\text{and}\quad dx=\frac{\partial x}{\partial Y}|_{{}_{X=0}}\,dY, \tag{47}\] then eq.(46) is transformed into: \[\mathcal{E}_{c}=\gamma_{0}\int_{\partial\mathcal{P}}dY(\sqrt{x_{Y}^{2}+y_{Y}^{2}}-1)\,, \tag{48}\] where \(\gamma_{0}=\gamma/(\mu\Lambda)\) is the rescaled capillarity coefficient (\(\gamma\) is the surface tension). Capillarity represents the average energy difference between the microscopic components of the sample (such as atoms or molecules) located in the bulk or at the interface. It is positive when the interface separates a dense material from a more dilute phase. In practice, the capillary coefficient \(\gamma_{0}\) is very small for ordinary gels and plays a significant role only when the sample size is of the order of \(0.1\) mm and for extremely soft gels [115]. However, a skin effect can occur on top of elastic samples due to inhomogeneity of the shear modulus or to the growth process itself. This is especially true for the swelling of gels. Despite the weakness of this energy, it plays a crucial role in the determination of the wavelength and in the local regularization of singular profiles.

#### vii.2.2 The Euler-Lagrange equations

They simply result from the first variational derivative of the functional \(\mathcal{E}_{e}\) with respect to the small variations of \(x\) and \(y\): \[\begin{cases}x_{XX}+x_{YY}=Q_{X}\,y_{Y}-Q_{Y}\,y_{X}=\{Q,y\}\,,\\ y_{XX}+y_{YY}=-Q_{X}\,x_{Y}+Q_{Y}\,x_{X}=-\{Q,x\}\,.\end{cases} \tag{49}\] The left-hand side of equation (49) represents the Laplacian \(\Delta\) in Cartesian coordinates, and \(\{Q,x_{i}\}\) is the Poisson bracket of \(Q\) and \(x_{i}\). This mathematical symbol has important properties in mechanics [116].
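Eq.(49) can be recovered mechanically. The sketch below (an added illustration, not part of the original text) feeds the integrand of eq.(45) to SymPy's Euler-Lagrange routine, with \(Q(X,Y)\) held fixed during the variation:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

X, Y = sp.symbols('X Y')
J = sp.Symbol('J', positive=True)
x, y, Q = (sp.Function(n)(X, Y) for n in ('x', 'y', 'Q'))

# integrand of eq.(45): (I1_tilde - 2*Q*I3_tilde)/2
I1t = sp.diff(x, X)**2 + sp.diff(x, Y)**2 + sp.diff(y, X)**2 + sp.diff(y, Y)**2 - 2*J
I3t = sp.diff(x, X)*sp.diff(y, Y) - sp.diff(y, X)*sp.diff(x, Y) - J
L = (I1t - 2*Q*I3t) / 2

# varying x and y reproduces eq.(49): Delta x = {Q,y}, Delta y = -{Q,x}
for eq in euler_equations(L, [x, y], [X, Y]):
    print(sp.simplify(eq))
```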
The zero-order solution, \(x=JX\) and \(y=Y\), verifies these equations when the Lagrange parameter is a constant, \(Q=Q_{0}\). Boundary conditions are also derived from the first variational derivative of \(\mathcal{E}_{e}\) and \(\mathcal{E}_{c}\) with respect to the elementary variations of \(x\) and \(y\), a process which allows the cancellation of the normal components \(S_{11}\) and \(S_{21}\) of the Piola stress tensor \(\mathbf{S}\) [91; 93] at the free boundary \(\partial\mathcal{P}\): \[S_{11}=x_{X}-Q\,y_{Y}\quad\text{and}\quad S_{21}=y_{X}+Q\,x_{Y}\,. \tag{50}\] On top, for \(X=0\), the cancellation of \(S_{11}\) gives \(Q_{0}=J\), while \(S_{21}=0\) is automatically obtained for the zero-order solution. Capillarity appears for buckled solutions and is responsible for the normal \(\Gamma_{11}\) and tangential \(\Gamma_{21}\) components: \[\Gamma_{11}=\gamma_{0}\frac{\partial}{\partial Y}\frac{x_{Y}}{(x_{Y}^{2}+y_{Y}^{2})^{1/2}}\,, \tag{51}\] and \[\Gamma_{21}=\gamma_{0}\frac{\partial}{\partial Y}\frac{y_{Y}}{(x_{Y}^{2}+y_{Y}^{2})^{1/2}}\,, \tag{52}\] which must be added to the normal stresses at \(X=0\). Note the strong nonlinearities in the surface energy. However, since \(\gamma_{0}\) is in practice a very small parameter, the role of the capillary stresses is probably negligible for smooth patterns, but may become important in the case of creases. For completeness, the other two components of the stresses are also given: \[S_{12}=x_{Y}+Q\,y_{X}\quad\text{and}\quad S_{22}=y_{Y}-Q\,x_{X}\,. \tag{53}\] So far, it is assumed that the interface is regular and admits a regular curvature everywhere. Self-contacting interfaces are not considered, although in the last panel (A9) of Fig.(1) on the right, such a property can explain the highly singular pattern obtained in the radial geometry. Assuming that it happens at a position \(Y=0\), then two additional stress boundary conditions must be imposed locally [38; 39; 40], \[S_{22}|_{Y=0^{+}}=S_{22}|_{Y=0^{-}}\text{ and }S_{12}|_{Y=0^{+}}=S_{12}|_{Y=0^{-}}, \tag{54}\] where the second condition indicates the absence of friction on the singular line. Finally, it is easy to show that the Euler-Lagrange equations, eq.(49), are equivalent to the cancellation of the divergence of the Piola stress tensor, see also section III.3 and eq.(12). In Cartesian coordinates, \(\text{Div}(\mathbf{S})_{i}=\partial S_{ij}/\partial X_{j}\).

### Incremental approach and solution of the Euler-Lagrange equations

The classical way to detect a bifurcation in elasticity is to expand the general solution by adding a small perturbation scaled by a small parameter \(\epsilon\). The following results are obtained for \(x\), \(y\) and \(Q\): \[\begin{cases}Q=J+\epsilon q(X,Y)\,,\\ x=JX+\epsilon U(X,Y)\quad\text{with}\quad\Delta U=q_{X}\,,\\ y=Y+\epsilon V(X,Y)\quad\text{with}\quad\Delta V=Jq_{Y}\,.\end{cases} \tag{55}\] The incompressibility condition at order \(\epsilon\) imposes the constraint \(U_{X}+JV_{Y}=0\), and the elimination of \(q\) is easy by cross-differentiation of the previous equations, eq.(55): \(J\partial_{Y}\Delta U-\partial_{X}\Delta V=0\), which can be differentiated a second time to isolate \(U\) from \(V\). Defining \(\Delta_{J}=\partial_{XX}^{2}+J^{2}\partial_{YY}^{2}\): \[\Delta_{J}(\Delta U)=\Delta_{J}(\Delta V)=0, \tag{56}\] and \(\Delta_{J}q=0\). The fourth-order operator \(\Delta_{J}\Delta=\Delta\Delta_{J}\) accepts as possible solutions \(\Re\left(\Phi_{1}(Z)+\Phi_{2}(Z_{1})\right)\), where both functions are holomorphic functions of \(Z=X+IY\) and \(Z_{1}=JX+IY\).
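This property of the operator \(\Delta_{J}\Delta\) is easy to confirm symbolically; a small sketch (an added illustration) with the Biot harmonic as the test function:

```python
import sympy as sp

X, Y = sp.symbols('X Y', real=True)
J = sp.Symbol('J', positive=True)

# holomorphic candidates built on Z = X + iY and Z1 = J*X + iY
P  = sp.exp(-2 * sp.pi * (X + sp.I * Y))
P1 = sp.exp(-2 * sp.pi * (J * X + sp.I * Y))

lap   = lambda u: sp.diff(u, X, 2) + sp.diff(u, Y, 2)         # Delta
lap_J = lambda u: sp.diff(u, X, 2) + J**2 * sp.diff(u, Y, 2)  # Delta_J

U = sp.re(sp.expand_complex(P + P1))   # a real combination Re[Phi1(Z) + Phi2(Z1)]
print(sp.simplify(lap_J(lap(U))))      # -> 0, i.e. eq.(56) is satisfied
```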
Nevertheless, due to the boundary conditions for \(X=0\) (on the top of the strip), \(\Phi_{1}\) and \(\Phi_{2}\) are related and we finally get for \(x\) and \(y\): \[\begin{cases}x&=JX+J\epsilon\Re\left[\Phi(Z)+\tau_{1}\Phi(Z_{1})\right],\\ y&=Y-\epsilon\Im\left[\Phi(Z)+J\tau_{1}\Phi(Z_{1})\right],\\ Q&=J+\epsilon\tau_{0}\tau_{1}\Re\left[\Phi_{Z_{1}}\right];\ \tau_{0}=J^{2}-1\,,\end{cases} \tag{57}\] where \(\Re\) and \(\Im\) are the real and imaginary parts of the holomorphic function \(\Phi\). The notation \(\tau_{0}=J^{2}-1\) is introduced for convenience; \(\tau_{1}\) is free at this stage and will be determined later. A priori \(\Phi\) is arbitrary, with the restriction that it must vanish for \(X\to\infty\), which automatically implies that \(\Phi\) is singular. Any singularity occurring outside of the domain \(\mathcal{P}\) (for \(X<0\)) is physically appropriate, while singularities within the physical domain \(\mathcal{P}\) must be considered with caution. The balance of the elastic and capillary stresses at the surface \(\partial\mathcal{P}\) gives the value of \(\tau_{1}\) as well as the threshold \(J_{B}\) for the buckling instability. Let us first evaluate the stresses at linear order in \(\epsilon\). The calculation is not difficult and can easily be done with, for example, the Mathematica software (see also the Appendix, section XV.1): \[\begin{cases}S_{11}&=\epsilon\Re\left[2J\Phi_{Z}+(1+J^{2})\tau_{1}\Phi_{Z_{1}}\right],\\ S_{21}&=-\epsilon\Im\left[(1+J^{2})\Phi_{Z}+2J^{2}\tau_{1}\Phi_{Z_{1}}\right],\\ S_{12}&=-\epsilon J\Im\left[2\Phi_{Z}+(1+J^{2})\tau_{1}\Phi_{Z_{1}}\right],\\ S_{22}&=1-J^{2}-\epsilon\Re\left[(1+J^{2})\Phi_{Z}+2J^{3}\tau_{1}\Phi_{Z_{1}}\right].\end{cases} \tag{58}\] Only \(S_{22}\) shows a zero-order contribution, \((-\tau_{0})=1-J^{2}\), which is negative for a growing material since \(J>1\). This compressive stress explains the buckling instability and is associated with an elastic strain along \(Y\).

### The boundary conditions at the top of the sample, the Biot threshold and the harmonic modes

To derive the condition of a buckling instability, the quantities of interest are the normal \(S_{11}\) and shear \(S_{21}\) stresses at the top, which must include the capillary contribution. Only the normal capillary stress \(\Gamma_{11}\) is of order \(\epsilon\), while \(\Gamma_{21}\) is of order \(O(\epsilon^{2})\) and can be discarded, so, for \(X=0\), \(S_{11}\) reads: \[\epsilon\left(2J+(1+J^{2})\tau_{1}\right)\cdot\Re\left[\Phi_{Z}\right]-\epsilon\gamma_{0}J(1+\tau_{1})\Re\left[\Phi_{ZZ}\right], \tag{59}\] where \(\tau_{1}\) is not modified. We first neglect the surface tension. Then, the cancellation of \(S_{21}\) gives the value of \(\tau_{1}\): \(\tau_{1}=-(1+J^{2})/(2J^{2})\). Once this \(\tau_{1}\) value is introduced into \(S_{11}\), there are two possibilities:

* Cancellation of \(\mathcal{Q}(J)\), leading to the determination of \(J_{B}\) such that: \[\mathcal{Q}(J_{B})=(J_{B}^{3}-3J_{B}^{2}-J_{B}-1)=0\,,\] (60) for any profile function \(\Phi(Z)\). This value was also found by Biot, see section VI.2.1 where another demonstration by Biot is proposed [113].
* \(\Re\left[\Phi_{Z}\right]=0\), which defines a family of suitable profiles but not a threshold value for observing the interface buckling.
It requires that \(\Phi\) is an even function of \(Z\). The second case does not imply any specific value of \(J\), but selects shape profiles, unlike the first case, which occurs above \(J_{B}\) for any profile. It suggests and explains the diversity of experimental observations for the layer buckling: Indeed, the absence of mode selection at the Biot threshold automatically induces a spontaneous coupling of harmonic modes. The only real root \(J_{B}\) of \(\mathcal{Q}(J)\) is \[J_{B}=\frac{1}{3}(3+6^{1/3}\{(9-\sqrt{33})^{1/3}+(9+\sqrt{33})^{1/3}\})\,, \tag{61}\] so \(J_{B}\sim 3.38\). But, as mentioned above, all holomorphic periodic functions of \(Z\) that vanish for \(X\to+\infty\) are possible eigenmodes of deformations that occur for the same threshold value. In the original papers, Biot only focused on the harmonic modes \(\Phi_{B}=e^{-2\pi nZ}\), which appear for a compressed rubber sample. The polynomial that gives the threshold is not always the same, depending on the experiment. Any modification in the physics of the problem, such as more sophisticated hyperelasticity (Mooney-Rivlin, Ogden or Fung models [91; 93]), anisotropy of the material [90] or of the growth [7; 8], or possibly external loading, will modify the incremental result eq.(57) and the critical polynomial \(\mathcal{Q}\), but not the fundamental property of instability. However, this model does not provide a choice of wavelength at the threshold, unlike the similar instabilities of fluids such as Rayleigh-Benard or Benard-Marangoni [34]. Above the threshold, the determination of the wavelength for periodic fluid instabilities remains a difficult theoretical challenge [117; 118; 34], giving rise to an extensive literature and sometimes controversies, as for the Rayleigh-Benard convection or the diffusion patterns of directional solidification. In [19], a surface tension selection mechanism is proposed for a layer of finite height \(d\). It induces a shift of the threshold \(J_{B}\) and a selection of the ratio of the wavelength to the height \(d\) of the sample, this ratio being of order one. Here the sample height is infinite and the wavelength is chosen as the length unit, which means that the selection must in principle provide the value of the critical threshold \(J_{C}\). A discussion of finite size effects is deferred to the last section XIII. When capillarity is introduced, the normal stress \(S_{11}\) and the shear stress \(S_{21}\), given by eq.(59), are modified by the capillary contribution. Only the periodic function \(\Phi=e^{-2n\pi Z}\) (where \(n\) is an integer) gives a simple solution, with a shift of the bifurcation threshold due to \(\Gamma_{11}\): \[\delta J=J_{C}-J_{B}=n\pi\gamma_{0}J_{B}\frac{(1+J_{B})}{(3J_{B}^{2}-6J_{B}-1)}\,. \tag{62}\] It is possible to recover this threshold by directly minimizing the total energy, the sum of the elastic and capillary energies. In the next section, we give examples of such an evaluation, which takes advantage of the expansion given in section XV.2. Figure 3: At the top, panels A and B, the physical domain \(\mathcal{P}\), corresponding to one wavelength in cyan, is restricted to \(-1/2\leq Y\leq 1/2\), \(X\in[0,+\infty]\). Only one wavelength is plotted. To the right of each panel is the \(\mathcal{C}\) plane with the unit Riemann disk in yellow. The red arrow indicates the correspondence between the point \(\mathcal{O}\) (\(X=0,Y=0\)) of the physical space and the point \(\mathcal{O}^{\prime}\) (\(1,0\)) of the \(\mathcal{C}\) plane.
The region \(X\to+\infty\) is contracted and associated with the center of the Riemann disk (green arrow). Only the interior and the contour of the Riemann disk mimic the physical domain. The blue dots represent the possible singularities, which are not singular for the physical domain in panel (A), but can generate sharp interface variations. Note that the dashed purple lines are images of each other under the conformal mapping. In (B), the same plot except that a singular point enters both the Riemann disk and the physical plane \(\mathcal{P}\). Below, in (b), the interface profile for an outer pole, see eq.(63), with the parameter \(a=0.1\) for the red curve, \(a=1\) for the blue curve, \(a=5\) for the green curve. The profile appears quite singular for \(a=0.1\), but it remains finite. In (c), the real part of the derivative which contributes to the stresses, see \(S_{11}\), according to eq.(58). One wavelength, \(\mathcal{P}\), is shown and the rainbow colors indicate the sharp variations near the origin \(\mathcal{O}\). Note also that the scale for \(X\) varies between \(0\) and \(0.1\). Although quite singular near \(\mathcal{O}\), the stress remains finite and decreases rapidly. Similarly, (d) shows the opposite of the imaginary part present in \(S_{21}\) of eq.(58).

## VIII Periodic profiles and the Riemann theorem

### Construction of periodic profiles

The choice of the periodic functions \(\Phi(Z)\) follows from the Riemann theorem, which states that there exists a bijective conformal mapping between a non-empty, simply connected domain (other than the entire plane) and the unit disk \(\mathcal{C}\). In our case the domain is \(\mathcal{P}\), which covers one period, see Fig.(3). Introducing the complex variable \(\zeta=e^{-2\pi Z}\), Fig.(3) shows the points of correspondence, in particular the upper boundary (\(X=0\)) of \(\partial\mathcal{P}\), which is mapped onto the outer circle \(\partial\mathcal{C}\), and the zone at infinity of \(\mathcal{P}\) (\(X\rightarrow+\infty\)), concentrated in the center of the unit disk. The exterior of the unit disk corresponds to the non-physical half-plane where \(X<0\). The central vertical axis of the strip, \(X>0,Y=0\) (dashed purple line in Fig.(3)), is associated with the horizontal axis of the \(\mathcal{C}\) plane (purple dashed lines), which we extend into the non-physical domain. Every holomorphic function, except constants, \(\Phi(Z)\) or \(\Psi(\zeta)\), has singularities. If they are located in the non-physical plane, these solutions are physically relevant, since they contribute to a finite elastic energy density. But this is not the case when they are located inside \(\mathcal{P}\) or \(\mathcal{C}\), where they require special attention. When they are near the boundary of the Riemann circle \(\partial\mathcal{C}\), they become good candidates for generating creases. We will consider the regular profiles first.

### Regular patterns above the Biot threshold

The patterns of interest are of course the harmonic modes \(\zeta_{k}=e^{-2\pi kZ}\) and their superposition \(\Phi(Z)=a_{k}\zeta^{k}\), where the Einstein notation on double indices is assumed and \(k\) is a positive integer. The Biot solution is simply \(\zeta=e^{-2\pi Z}\). All these modes, without specific parity under the change \(Z\to-Z\), occur strictly at the Biot threshold and can easily overlap. However, when focusing on folds occurring at the interface, a more appropriate choice must be made, with singularities located near the interface.
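At order \(\epsilon\), the interface deflection is proportional to \(\Re[\Phi]\) evaluated at \(X=0\), i.e. on \(\zeta=e^{-2i\pi Y}\). A short plotting sketch (an added illustration, with an arbitrary amplitude \(a_{2}\)) showing how mode superposition distorts the pure Biot mode:

```python
import numpy as np
import matplotlib.pyplot as plt

# interface deflection ~ Re[Phi] at X = 0, where zeta = exp(-2i*pi*Y)
Y = np.linspace(-0.5, 0.5, 1001)
zeta = np.exp(-2j * np.pi * Y)
for label, Phi in [("Biot mode: zeta", zeta),
                   ("superposition: zeta + 0.5*zeta^2", zeta + 0.5 * zeta**2)]:
    plt.plot(Y, Phi.real, label=label)
plt.xlabel("Y"); plt.ylabel("Re[Phi] at X=0"); plt.legend(); plt.show()
```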
The word creases is chosen to describe a sharp and localized variation of the interface shape \(\partial\mathcal{P}\), which is mathematically represented by a continuous function, such that the profile \(x(Y)\) remains at least twice differentiable. Another definition can be that the elastic and/or capillary energy density remains locally finite. A fancy representation in complex analysis has been given by the viscous interfacial flow experts [119; 120; 54; 121]. The creases are then simply generated at the threshold by using the conformal mapping technique [122; 123]. Defining the neighborhood of the central line \(\zeta_{a}=\zeta-1-a\), with \(a>0\), possible solutions with a pole, a logarithm or a square root can be good representations of quasi-singular profiles near the center of the strip \(\mathcal{O}\) or near \(\mathcal{O}^{\prime}\): \[\Phi=\frac{a}{\zeta_{a}};\ \Phi=-\frac{Log(-\zeta_{a})}{Log(a)};\ \Phi=(-\zeta_{a})^{1/2}\,, \tag{63}\] where \(a\) decreases as one approaches the point \(\mathcal{O}^{\prime}\), or the interface near \(\mathcal{O}\) (see Fig.(3)). The amplitude of the singular profile is normalized in the definition given by eq.(63). Fig.(3) shows different profile solutions for \(a=0.1\) (red curve), \(a=1\) (blue curve) and \(a=5\) (green curve) for a pole singularity (first choice of eq.(63), on the left), corresponding to a position \(d_{a}\) relative to the point \(\mathcal{O}\) of the physical plane, with \(d_{a}=-0.0152,-0.110,-0.285\) respectively. Since \(\Phi_{Z}\) enters directly the stress definition, its value gives information about the stresses, see eq.(58) and Fig.(3), panels (c) and (d). Plotted on a single wavelength \(-0.5<Y<0.5\) (with \(a=0.1\), so \(|d_{a}|=0.0152\)), the real and imaginary parts of \(\Phi\) show a strong localization near the interface and quickly decay with increasing values of \(X\). However, even if the stresses at the interface are large, the solution is not singular and the linear expansion remains valid for sufficiently small values of \(\epsilon\). For the logarithm or square root choices presented in eq.(63), see the Appendix (XV.4).

### The case of even holomorphic functions of \(Z\)

The second way to satisfy the cancellation of the stress \(S_{11}\) for \(X=0\) is to choose an even function of \(Z\), which means that \(\Phi(Z)=a_{k}(\zeta^{k}+\zeta^{-k})\), which will automatically diverge in the center of the Riemann disk or at infinity of \(\mathcal{P}\). The only way to satisfy the convergence at infinity is to introduce a singularity inside the Riemann disk. The choice of such a singularity is huge, but \(2D\) elasticity allows only logarithm and square root singularities for elastic energy convergence. In linear elasticity, it is well known that square roots correspond to fractures and logarithms to wedge dislocations, see [124].
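The second branch of the solvability condition below eq.(60) is easy to check numerically for such even functions: on the interface \(X=0\), \(\Re[\Phi_{Z}]\) vanishes identically. A minimal sketch (an added illustration):

```python
import numpy as np

# even mode Phi = zeta^k + zeta^(-k), zeta = exp(-2*pi*Z); on X = 0, Z = iY
k = 2
Y = np.linspace(-0.5, 0.5, 101)
zeta = np.exp(-2j * np.pi * Y)
Phi_Z = -2 * np.pi * k * (zeta**k - zeta**(-k))   # d/dZ of zeta^k + zeta^(-k)
print(np.max(np.abs(Phi_Z.real)))                 # ~1e-15: Re[Phi_Z] = 0 at X = 0
```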
Before proceeding further in this direction, let us start with the nonlinear coupling of regular modes.

## IX Nonlinear bifurcation analysis via energy minimisation and mode coupling

All modes emerge at the Biot threshold, and the mechanisms of selection are never a simple matter in the physics of continuous media. For example, in diffusion-limited growth, the selection of the size and velocity of the needle-crystal remained a puzzle for five decades. In fact, as early as \(1947\), Ivantsov [125; 126; 127; 128] established the relation giving the product of the crystal tip radius and its velocity as a function of the undercooling, but disentangling both quantities through the appropriate physical process has generated many conflicting hypotheses and discussions. The role of the surface tension, much debated due to mathematical difficulties, is now understood by including the surface tension anisotropy [129; 61]. In the same class of problems, the displacement of a viscous finger in a channel remained unsolved for about thirty years [130], until it was again demonstrated that the surface tension selects a discrete set of solutions among a continuum [55; 56; 57]. When an energy emerges, as in our formulation of volumetric growth, solutions can be selected by an energy minimization, which was not the case for the two examples mentioned above. However, our continuum here is a continuum of possible functions, not just a selection of a pure number characteristic of the pattern. Using an expansion of the deformation at order \(\epsilon\), the energy density can be expanded as \(E=\tau_{0}/2+\delta E\), with: \[\delta\mathcal{E}=\epsilon\int_{\mathcal{P}}dS(E_{1}+\epsilon E_{2}+\epsilon^{2}E_{3})=\epsilon\mathcal{E}_{1}+\epsilon^{2}\mathcal{E}_{2}+\epsilon^{3}\mathcal{E}_{3}\,, \tag{64}\] where each order is given in the Appendix (XV.2). If the system is governed by an energy, it is possible to analyze the bifurcation in more detail, to deduce its characteristics and finally to obtain the amplitude \(\epsilon\) of the selected mode by expanding the energy. To prepare such a calculation, which can be really tedious and even impossible for an arbitrary choice of \(\Phi\), we take advantage of the complex formulation.

### Elastic energy evaluation, order by order

Such an evaluation requires surface integrals covering the entire domain of interest \(\mathcal{P}\) (Fig.(4), top left panel), which can be obtained in two ways: either in \(X,Y\) coordinates, or in \(Z,\bar{Z}\) coordinates. The latter choice makes the calculus much easier for the first and second order, as soon as holomorphic functions are chosen in the rectangular geometry. First, we define these surface integrals on \(\mathcal{P}\) as: \[\begin{cases}K^{(1)}(f,\bar{g})=\frac{1}{2I}\iint_{\mathcal{P}}dZd\bar{Z}f_{Z}\bar{g}_{\bar{Z}}\,,\\ K^{(2)}(f_{1},\bar{g}_{1})=\frac{1}{2IJ}\iint_{\mathcal{P}}dZ_{1}d\bar{Z}_{1}f_{Z_{1}}\bar{g}_{\bar{Z}_{1}}\,,\\ K^{(3)}(f,\bar{g}_{1})=\frac{1}{I(J+1)}\iint_{\mathcal{P}}dZd\bar{Z}_{1}f_{Z}\bar{g}_{\bar{Z}_{1}}\,,\\ K^{(4)}(f,g_{1})=\frac{1}{I(J-1)J}\iint_{\mathcal{P}}dZdZ_{1}f_{Z}g_{Z_{1}}\,.\end{cases} \tag{65}\] According to [131], these integrals can be transformed into contour integrals such that: \[\begin{cases}K^{(1)}(f,\bar{g})&=\frac{1}{2I}\oint_{\partial\mathcal{P}}dZf_{Z}\bar{g}(\bar{Z})\,,\\ K^{(2)}(f_{1},\bar{g}_{1})&=\frac{1}{2IJ}\oint_{\partial\mathcal{P}}dZ_{1}f_{Z_{1}}\bar{g}(\bar{Z}_{1})\,,\end{cases} \tag{66}\] and for \(K^{(3)}\) and \(K^{(4)}\), which mix \(Z\) and \(Z_{1}\), using: \[Z_{1}=Z\frac{1+J}{2}+\bar{Z}\frac{J-1}{2}\,, \tag{67}\] we obtain: \[\begin{cases}K^{(3)}(f,\bar{g}_{1})&=\frac{1}{I(J+1)}\oint_{\partial\mathcal{P}}dZf_{Z}\bar{g}(\bar{Z}_{1})\,,\\ K^{(4)}(f,g_{1})&=\frac{1}{I(J-1)}\oint_{\partial\mathcal{P}}dZf_{Z}g(Z_{1})\,.\end{cases} \tag{68}\]
The first order corresponds to \[\begin{cases}\mathcal{E}_{1}&=\tau_{0}\int_{\mathcal{P}}dS\,\Re\left(\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right)\\ &=\tau_{0}\,\Re\left[K^{(1)}(\Phi,1)+J\tau_{1}K^{(2)}(\Phi,1)\right]\,.\end{cases} \tag{69}\] Since \(\Phi\) has no singularity inside the sample, the contour integral for \(\Phi\) vanishes and \(\mathcal{E}_{1}=0\). \(\mathcal{E}_{2}\) and \(\mathcal{E}_{3}\) can be found in section XV.2, eq.(S9). Using eq.(S9), the expansion of \(\mathcal{E}\) at second order gives for \(\mathcal{E}_{2}\): \[\begin{cases}\mathcal{E}_{2}&=\frac{1}{2}(1+3J^{2})(K_{1}+J^{2}\tau_{1}^{2}K_{2})\\ &+\frac{J\tau_{1}}{2}((J+1)^{3}K_{3}-(J-1)^{3}K_{4})\,,\end{cases} \tag{70}\] with \(K_{1}=K^{(1)}(\Phi,\bar{\Phi})\); \(K_{2}=K^{(2)}(\Phi_{1},\bar{\Phi}_{1})\); \(K_{3}=K^{(3)}(\Phi,\bar{\Phi}_{1})\); \(K_{4}=K^{(4)}(\Phi,\Phi_{1})\). All these quantities are reduced to contour integrals obtained along \(\partial\mathcal{P}\), see Fig.(4)(A) on top. We divide the outer contour into horizontal lines and vertical lines travelled in the negative sense. Because of the periodicity, the two vertical contour integrals cancel each other out (blue lines of Fig.(4) above). At infinity \(\Phi_{Z}\) vanishes, so only the integral along \(\mathcal{C}_{0}\) contributes to the energy at this order. This result is valid since there is _no singularity_ inside the physical domain \(\mathcal{P}\). Finally, we get \(K_{1}\): \[K_{1}=-\frac{1}{2}\int_{-1/2}^{1/2}dY\bar{\Phi}(-IY)\Phi_{Z}(IY)\,, \tag{71}\] and \(K_{2}=K_{1}/J;\quad K_{3}=2K_{1}/(J+1);\quad K_{4}=0\). The energy density at second order simplifies to: \[\mathcal{E}_{2}=-\mathcal{Q}(J)\frac{(1+J)(1-J)^{2}}{8J^{3}}K_{1}\,. \tag{72}\] Near the Biot threshold, \(\mathcal{E}_{2}\) behaves as \(\mathcal{E}_{2}\sim-E_{f}(J-J_{B})\). Defining first \[Q_{B}=\frac{d\mathcal{Q}}{dJ}|_{J=J_{B}}=(3J_{B}^{2}-6J_{B}-1), \tag{73}\] \(E_{f}\) reads: \[E_{f}=K_{1}\mathcal{Q}_{2};\quad\text{where}\quad\mathcal{Q}_{2}=Q_{B}\frac{(J_{B}-1)^{2}(J_{B}+1)}{8J_{B}^{3}}. \tag{74}\] At this order of perturbation, we have recovered the linear stability result. It is possible to go one step further and specify the nature of the bifurcation that occurs near \(J_{B}\). For this we consider \(\mathcal{E}_{3}\). At third order, it reads: \[\frac{\mathcal{E}_{3}}{p_{e}}=L_{1}+J^{2}\tau_{1}^{2}L_{2}+\frac{\tau_{1}(J+1)^{2}}{2}L_{3}-\frac{\tau_{1}(J-1)^{2}}{2}L_{4}\,, \tag{75}\] with \(p_{e}=J\tau_{0}\tau_{1}\) and: \[\begin{cases}L_{1}=<\Re\left[\Phi_{Z}\Phi_{Z}\Phi_{Z_{1}}\right]>,\\ L_{2}=<\Re\left[\Phi_{Z_{1}}\Phi_{\bar{Z}_{1}}\Phi_{Z_{1}}\right]>,\\ L_{3}=<\Re\left[\Phi_{Z_{1}}\right]\Re\left[\Phi_{Z_{1}}\Phi_{Z_{1}}\Phi_{Z}\right]>,\\ L_{4}=<\Re\left[\Phi_{Z_{1}}\right]\Re\left[\Phi_{Z_{1}}\Phi_{Z}\right]>\,.\end{cases} \tag{76}\] These formulas allow the calculation of the third order for any profile function \(\Phi\). The calculation is not always easy but can be done, as demonstrated hereafter for the logarithmic function defined in eq.(63).

### Nonlinear coupling of quasi-singular profiles

The purpose of this paragraph is to estimate the amplitude of the profile and the nature of the bifurcation near \(J_{B}\). Since each case is special, we limit ourselves to one of them, namely the logarithmic mode, see eq.(63): \(\Phi=-Log(1+a-e^{-2\pi Z})/Log(a)\), with \(a>0\), shown in Fig.(8)(e). In this figure, only \(\Re\left[\Phi\right]\) is shown for \(X=0\), and the true profile function must be multiplied by \(\epsilon\tau_{0}/(2J)\).
Obviously, the desired profile is chosen with a positive value of \(\epsilon\) to have the sharp-pointed shape in the positive direction. Such a solution appears a priori at the Biot threshold and remains a regular solution, even with stresses accumulated at the interface. The corresponding elastic energy starts at the second order and the elastic energy expansion is written as: \[\mathcal{E}=\mathcal{E}_{2}\epsilon^{2}+\mathcal{E}_{3}\epsilon^{3}=-E_{f}\left(\delta J\epsilon^{2}+e_{3}\epsilon^{3}\right)\,, \tag{77}\] where \(\delta J=J-J_{B}\) and \(e_{3}=-\frac{\mathcal{E}_{3}}{E_{f}}\). \(E_{f}\), \(K_{1}\) and \(\mathcal{Q}_{2}\) have been defined previously, see Eqs.(71,74). Thus, minimizing the energy with respect to \(\epsilon\) leads to: \[\epsilon=-\frac{2}{3}\frac{\delta J}{e_{3}};\quad\text{so}\quad\mathcal{E}=-\frac{4}{27}E_{f}\frac{\delta J^{3}}{e_{3}^{2}}\,. \tag{78}\] To observe a bifurcation with such an expansion in \(\epsilon\) requires a negative value of \(\mathcal{E}\), so \(\delta J\) must be positive for positive values of \(E_{f}\) and \(K_{1}\). \(K_{1}\) depends on the logarithmic dependence of the profile, and can be estimated as: \[K_{1}\sim-\pi\frac{Log(2a)}{Log(a)^{2}}\quad\text{for}\quad 0<a<<1\,. \tag{79}\] The evaluation of the third order \(\mathcal{E}_{3}\) is given in section XV.5 and the corresponding result in eq.(S25). So when \(a\) is a small quantity, we get for \(e_{3}\): \[e_{3}=\frac{2J\pi\tau_{1}\Pi_{1}}{(J-1)^{2}p_{a}Q_{B}};\text{ where }\ p_{a}=aLog(a)Log(2a). \tag{80}\] The numerical value of \(e_{3}\) is then \(e_{3}\simeq-11.71/p_{a}\), which decreases when \(a\) increases. Since \(e_{3}<0\), \(\delta J\) is positive to obtain \(\epsilon>0\), which is required for the profile shown in Fig.(8). In this way, a bifurcation and a crease can be effectively observed. A negative sign would be counterintuitive, with cusps oriented upward. Nevertheless, the cusp amplitude remains tiny, approximately given by \(\left(2p_{a}\delta J/3\right)/11.71\sim 0.01\delta J\) for \(a=0.01\). This treatment does not include surface tension because of obvious technical difficulties [47].
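For orientation, the selected amplitude of eq.(78) can be evaluated with the estimates of eqs.(79)-(80); in the sketch below (an added illustration), the values of \(a\) and \(\delta J\) are illustrative only:

```python
import numpy as np

a, dJ = 0.01, 0.05                      # illustrative regularization and distance to J_B
p_a = a * np.log(a) * np.log(2 * a)     # p_a = a*Log(a)*Log(2a), eq.(80)
e3 = -11.71 / p_a                       # numerical prefactor quoted in the text
eps = -(2.0 / 3.0) * dJ / e3            # selected amplitude, eq.(78)
print(f"p_a = {p_a:.4f}, eps = {eps:.2e}")   # eps ~ 0.01*dJ ~ 5e-4
```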
### Nonlinear coupling of harmonic modes

An efficient treatment of mode coupling near a threshold is to multiply the main harmonic mode, here \(\zeta\), by a slowly varying amplitude satisfying the so-called amplitude equation derived from the separation of scales. This method is easily illustrated by the Euler Elastica [132]. An explicit solution of the Elastica bending problem can be found in [124], section \(19\), exercise \(3\). Depending on the boundary conditions applied at both ends, the threshold value of the force \(F_{c}\) responsible for the bending is found, and the nonlinearities give the amplitude of the bending profile as a function of the force above the threshold value. In this simple case, the bifurcation is supercritical since the amplitude varies as \(\pm\sqrt{F-F_{c}}\) above the threshold. For this simple example, there is also a free energy that includes the elastic energy of bending and the work of the forcing. Then, another way to study the bifurcation is provided by the analysis of the free energy above the threshold; this is the choice made in Appendix A of [89], which we will follow here. In fact, the treatment by the separation of scales is more tedious in our case for at least two reasons: first, three unknown functions \(x,y,Q\) are coupled, and second, it requires an initial guess for the scaling dependence of the coupled functions, which is not easy to find a priori. The energy analysis turns out to be much more efficient and is chosen here. We start with the coupling of \(2\) harmonics and then \(3\) harmonics. For the linear order, we have shown that all harmonic modes appear at the same threshold value \(J_{B}\).

### Intermediate algebra for the coupling of sinusoidal modes

Consider the superposition of several modes where \(\Phi(Z)=\sum_{k=1}^{k_{0}}a_{k}\zeta^{k}\), \(k\) and \(k_{0}\) being positive integers [133]. Then \(K_{1}=\pi\sum_{k}k|a_{k}|^{2}\), so that \(K_{1}\) is always positive and \(\mathcal{E}_{2}\) is negative above the Biot threshold. Unfortunately, at the third order, the calculus becomes much more tedious, even when sinusoidal modes are imposed. Each integral involves a triple series of the mode amplitudes \(a_{n}\): \[\begin{cases}\tilde{L}_{1}&=\sum_{p,q,r}\frac{pqra_{p}a_{q}a_{r}}{p+q+r}(\delta_{p-q-r}+\delta_{p-q+r})\\ &=2\sum_{0<p<q}\frac{pq(q-p)a_{p}a_{q}a_{q-p}}{p(1-J)+q(1+J)}\,,\end{cases} \tag{81}\] \[\begin{cases}\tilde{L}_{2}&=\sum_{p,q,r}\frac{pqra_{p}a_{q}a_{r}}{J(p+q+r)}(\delta_{p-q-r}+\delta_{p-q+r})\\ &=\frac{1}{J}\sum_{0<p<q}p(q-p)a_{p}a_{q}a_{q-p}\,,\\ \tilde{L}_{3}&=\sum_{p,q,r}\frac{pqra_{p}a_{q}a_{r}}{J(p+q)+r}(\delta_{p-q+r}+\delta_{p+q-r})\\ &=\sum_{0<p\leq q}(2-\delta_{p-q})\frac{pqa_{p}a_{q}a_{p+q}}{J+1}+\tilde{L}_{4}\,,\\ \tilde{L}_{4}&=\sum_{p,q,r}\frac{a_{p}a_{q}a_{r}pqr}{J(p+q)+r}\delta_{p-q-r}\\ &=\sum_{0<p<q}a_{p}a_{q}\frac{a_{q-p}pq(q-p)}{(J-1)p+(J+1)q}\,,\end{cases} \tag{82}\] with \(\tilde{L}_{i}=-2\pi^{2}L_{i}\). It is to be noted that a non-vanishing third order in the energy exists if and only if modes are coupled.

#### ix.4.1 Coupling two modes near the \(J_{B}\) threshold

In the case of two modes, \(K_{1}=\pi(1+k|a_{k}|^{2})\). For the third order in \(\epsilon^{3}\), the only non-vanishing values contributing to \(\mathcal{E}_{3}\), eq.(75), are obtained for the exponents \(k=1\) and \(k=2\). Thus, the two-mode profile is limited to \(\zeta+a_{2}\zeta^{2}\), where \(|a_{2}|\) is assumed to be of order \(1\), greater than \(\epsilon\), and \(K_{1}=\pi(1+2|a_{2}|^{2})\). Another scaling can be found below, in section IX.5. We have already found the second order of the energy \(\mathcal{E}_{2}\), see eqs.(72,74). Assuming \(a_{2}\) is real, the results for the associated \(L_{i}\), eq.(81), are: \[L_{1}=4a_{2}/(3+J);\ L_{2}=a_{2}/J;\ L_{4}=2a_{2}/(3J+1)\,, \tag{83}\] and \(L_{3}=L_{4}+a_{2}/(1+J)\), which gives \(\mathcal{E}_{3}\): \[\begin{cases}\mathcal{E}_{3}&=-\pi^{2}a_{2}\mathcal{Q}_{3};\\ \mathcal{Q}_{3}&=\frac{(J-1)^{4}(J+1)\left(J^{2}+1\right)\left(11J^{2}+16J+3\right)}{4J^{4}(J+3)(3J+1)}\,,\end{cases} \tag{83}\] and the generic results found in eq.(78) and eq.(80) apply and give: \[\epsilon=-\frac{2}{3}\frac{\mathcal{Q}_{2}}{\mathcal{Q}_{3}}\delta J\frac{K_{1}}{\pi^{2}a_{2}}\,;\,\,\mathcal{E}=-\frac{4}{27}\frac{K_{1}^{3}}{\pi^{4}a_{2}^{2}}\frac{\mathcal{Q}_{2}^{3}}{\mathcal{Q}_{3}^{2}}\delta J^{3}\,. \tag{84}\] We then deduce that the two-mode profile is a minimizer of the elastic energy above the Biot threshold, \(\delta J>0\). Such a solution exists for every finite value of \(a_{2}\).
The bifurcation occurs for \(\epsilon a_{2}<0\) and is transcritical [118; 32].

#### ix.4.2 Nonlinear three-mode coupling in the vicinity of the \(J_{B}\) threshold

We now consider the following shape deformation given by the three-mode coupling: \(\Phi(Z)=\zeta+a_{2}\zeta^{2}+a_{3}\zeta^{3}\). For simplicity, we choose real values for all the coefficients \(a_{i}\), and \(K_{1}=\pi(1+2a_{2}^{2}+3a_{3}^{2})\). Similarly, the expansion of the elastic energy up to the third order reads: \[\mathcal{E}=-\mathcal{Q}_{2}\delta JK_{1}\epsilon^{2}-\pi^{2}a_{2}\mathcal{Q}_{3}(1+a_{3}\mathcal{Q}_{33})\epsilon^{3}\,, \tag{85}\] where \[\mathcal{Q}_{33}=\frac{4(3+J)(1+3J)\tilde{\mathcal{Q}}_{33}}{(2+J)(5+J)(1+2J)(1+5J)(3+16J+11J^{2})}\,,\] with \(\tilde{\mathcal{Q}}_{33}=10+97J+254J^{2}+196J^{3}+37J^{4}\). The numerical value is \(\mathcal{Q}_{33}=3.8845\) for \(J=J_{B}\). The introduction of \(\zeta^{3}\) does not modify the result of section IX.4.1 unless \(a_{3}\mathcal{Q}_{33}<-1\). The function \(e_{3}\), which enters eq.(78) and eq.(80), becomes \(e_{3}=\pi^{2}a_{2}\mathcal{Q}_{3}(1+a_{3}\mathcal{Q}_{33})\) and is shown numerically in density plots in Fig.(8)(a). Again, the minimum non-trivial value of \(\mathcal{E}\) is found for \(\delta J>0\), with no possibility of obtaining a stable solution below the Biot threshold. Due to the complexity of the formula, we give here only the numerical value of the selected amplitude of the profile and the corresponding energy: \[\epsilon=-\beta_{1}\frac{K_{1}\delta J/\pi}{a_{2}(1+\beta_{2}a_{3})};\,\,\mathcal{E}=-\beta_{3}\frac{(K_{1}\delta J/\pi)^{3}}{a_{2}^{2}(1+\beta_{2}a_{3})^{2}}\,, \tag{86}\] with \(\beta_{1}=0.02575\), \(\beta_{2}=3.88446\), \(\beta_{3}=0.0007271\). As shown in section XV.B, the surface tension creates an additive contribution in \(\epsilon^{4}\) which can change the present result, giving two solutions for \(\epsilon\) instead of one. An exhaustive study is really difficult due to the large number of degrees of freedom, such as \(a_{2}\), \(a_{3}\) and \(\gamma_{0}\), the dimensionless capillary number. However, this number is rather small, and for the numerical study we choose \(\gamma_{0}=0.1\). The amplitude \(\epsilon\) which minimizes the elastic energy is a solution of a quadratic equation, so there are two solutions in addition to \(\epsilon=0\). A first numerical investigation, for coefficients \(a_{2}=0.1\) and \(0.5\) and \(a_{3}=\pm a_{2}\), is shown in Fig.(4)(b,c) and demonstrates nonlinear modes occurring after or before the Biot threshold. Only stable solutions are considered, so only the continuous lines of Fig.(4)(b,c). From these two examples, one can notice that the \(\epsilon\) values are rather small, negative and less than \(0.1\) in absolute value. The interface profiles are shown in Fig.(4)(d). At the top we find strongly distorted profiles for a value of \(J=3.50\) and \(\epsilon=-0.019\) (\(a_{3}=-a_{2}\), \(a_{2}=0.1\)) and \(\epsilon=-0.0065\) for \(a_{3}=0.1\). Below, for \(a_{2}=0.5=-a_{3}\), \(\epsilon=-0.0096\) and \(J=3.58\), and for \(a_{3}=0.5\), \(\epsilon=-0.00356\) and \(J=3.18\). Fig.(4)(d) respects the scale but the height is magnified by a factor of \(10\). In conclusion, this nonlinear treatment shows that nonlinear modes can occur before the Biot threshold, but for strong capillary numbers.
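The constants \(\beta_{1}\) and \(\beta_{2}\) of eq.(86) follow from \(\mathcal{Q}_{2}\), \(\mathcal{Q}_{3}\) and \(\mathcal{Q}_{33}\) evaluated at \(J=J_{B}\); a small consistency check (an added illustration):

```python
import numpy as np

J = min(np.roots([1.0, -3.0, -1.0, -1.0]), key=lambda r: abs(r.imag)).real  # J_B
Q_B = 3 * J**2 - 6 * J - 1                                  # eq.(73)
Q2 = Q_B * (J - 1)**2 * (J + 1) / (8 * J**3)                # eq.(74)
Q3 = (J - 1)**4 * (J + 1) * (J**2 + 1) * (11 * J**2 + 16 * J + 3) \
     / (4 * J**4 * (J + 3) * (3 * J + 1))                   # eq.(83)
Qt33 = 10 + 97 * J + 254 * J**2 + 196 * J**3 + 37 * J**4
Q33 = 4 * (3 + J) * (1 + 3 * J) * Qt33 \
      / ((2 + J) * (5 + J) * (1 + 2 * J) * (1 + 5 * J) * (3 + 16 * J + 11 * J**2))
beta1 = (2 / 3) * (Q2 / Q3) / np.pi
print(round(beta1, 5), round(Q33, 5))   # ~0.02575 and ~3.88446, as quoted in eq.(86)
```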
As the amplitude of the coefficient \(a_{2}\) increases, the mode becomes more and more distorted from the single sinusoidal solution, but the amplitude of the interface remains small due to the small amplitude of \(\epsilon\). To better understand the bifurcation plots, we choose a more appropriate representation for the profile functions in the next section.

### Super and subcritical bifurcations

In the previous paragraph, we assumed that all the coupled harmonics are of the same order of magnitude. Now, we construct profile functions where the harmonics slightly perturb the principal mode \(\zeta\), such as \(\Phi(Z)=\zeta-\epsilon A_{2}\zeta^{2}(1+\alpha_{3}\zeta)\), where \(A_{2}\) and \(\alpha_{3}\) are constants of order \(1\). In this case we get: \[\mathcal{E}=-\mathcal{Q}_{2}\delta J\pi\epsilon^{2}+\pi^{2}A_{2}\mathcal{Q}_{3}\epsilon^{4}\,. \tag{87}\] For positive values of \(A_{2}\), we recover the classical supercritical bifurcation with \(\epsilon\sim\pm\sqrt{\delta J}\) above the Biot threshold. For the opposite sign of \(A_{2}\), with a very weak third-harmonic perturbation of the main mode \(\zeta\), the selected profile becomes \(\Phi(Z)=\zeta+\epsilon B_{2}\zeta^{2}-\epsilon^{2}B_{3}\zeta^{3}\), with \(B_{2}\) and \(B_{3}\) positive, and in this case: \[\mathcal{E}=-\mathcal{Q}_{2}\delta J\pi\epsilon^{2}-\pi^{2}B_{2}\mathcal{Q}_{3}\epsilon^{4}+\pi^{2}B_{3}B_{2}\mathcal{Q}_{3}\mathcal{Q}_{33}\epsilon^{6}\,. \tag{88}\] The extrema of \(\mathcal{E}\) are obtained for \(\epsilon\) values given by: \[\epsilon(J)=\pm\frac{1}{\sqrt{3B_{3}\mathcal{Q}_{33}}}\left(1\pm\sqrt{1+\frac{3}{\pi}\frac{\mathcal{Q}_{2}}{\mathcal{Q}_{3}}\frac{B_{3}}{B_{2}}\mathcal{Q}_{33}\delta J}\right)^{1/2}\,, \tag{89}\] and \[J_{C}=J_{B}-\frac{\pi}{3}\frac{B_{2}}{B_{3}}\frac{\mathcal{Q}_{3}}{\mathcal{Q}_{33}\mathcal{Q}_{2}}\,. \tag{90}\] Fig.(4)(e) and (f) show the evolution of the profile amplitude \(\epsilon\) as the volumetric growth coefficient \(J\) increases and decreases in the vicinity of \(J_{B}\). As \(J\) increases and remains below \(J_{B}\), the chosen value of \(\epsilon\) remains zero (red solid curve), corresponding to the purely axial growth, but such a solution loses its stability at \(J_{B}\) (red dashed curve). Then the value of \(\epsilon\) makes a positive (or negative) jump \(\epsilon_{G}=\sqrt{2}/\sqrt{3B_{3}\mathcal{Q}_{33}}\) (resp. \(-\epsilon_{G}\)). Then \(\epsilon\) rises slightly above \(J_{B}\), following the blue trajectory in the direction of the red arrow. Figure 4: Domain \(\mathcal{P}\) for integration of the energy density, restricted to one wavelength \(-1/2\leq Y\leq 1/2\), \(X\in[0,+\infty]\). Only contours of interest are labeled, such as \(\mathcal{C}_{i}\). Panel (A) is similar to panel (A) of Fig.(3); only the contour \(\mathcal{C}_{0}\) contributes to the elastic energy, as shown in section IX.1. In (B) and (C), which is also panel (B) of Fig.(3), the relevant contours are around each singular point \(\mathcal{S}\) and \(\mathcal{S}_{J}\). In panels (b) and (c), the bifurcation diagram, \(\epsilon\) versus \(J-J_{B}\), for a triple mode coupling with surface tension: \(\gamma_{0}=0.1\). In (b), \(a_{2}=0.1\) and \(a_{3}=-0.1\) (curves in red and magenta) and \(a_{2}=0.1\) and \(a_{3}=0.1\) (curves in brown and green). Dashed curves indicate unstable solutions. In (c), \(a_{2}=0.5\) and \(a_{3}\leq 0\) with the same color code. For \(a_{3}=0.5\), the stable solution appears below the Biot threshold \(J_{B}\), in contrast to the other cases.
In (d), interface profiles (multiplied by 10 for both axes); \(J-J_{B}=0.12\) for the upper case, corresponding to the data of panel (b), and \(J-J_{B}=-0.2\) for the lower case. In blue \(a_{3}=-a_{2}\), in red \(a_{3}=a_{2}\), with the exact values of \(\epsilon\) chosen in each case. In (e), a subcritical bifurcation diagram for the amplitude \(\epsilon\). Continuous lines indicate locally stable solutions, dashed lines unstable solutions, not observed experimentally. Red arrows indicate the trajectory for increasing values of \(J\) while black arrows indicate the trajectory for decreasing values. Note the complete symmetry between positive and negative values of \(\epsilon\). The hysteresis cycle extends between the two vertical arrows indicating the jump of the \(\epsilon\) amplitude at \(J_{C}\) (see eq.(89)) and at \(J=J_{B}\). In (f), the elastic energy density for \(J=J_{B}+0.1\) in blue and \(J=J_{B}-0.1\) in red as a function of \(\epsilon\), with \(3\) stable solutions below \(J_{B}\) (3 red minima) and only \(2\) (in blue) above. If there is a decrease of \(J\) from \(J>J_{B}\), \(\epsilon\) decreases along the blue line, which is stable until \(J=J_{C}\), where the blue trajectory loses its stability (blue dashed curve) and the flat pattern \(\epsilon=0\) is restored. At the transition there is also a jump \(\epsilon_{D}=1/\sqrt{3B_{3}\mathcal{Q}_{33}}\). Note that \(\epsilon\) can be either positive or negative; only \(\epsilon>0\) is shown for clarity, but both signs are equivalent (see Fig.(4)(f), which gives the energy minima for two values of \(J-J_{B}\)). Only E. Hohlfeld and Mahadevan seem to have discovered this subcritical bifurcation by numerical means (finite elements, ABAQUS), while experimentally the hysteresis associated with such a configuration was revealed by J. Yoon, J. Kim and R. C. Hayward in [77]. This scheme nicely represents the hysteresis observed in experiments. Before closing the parenthesis on nonlinear wrinkling patterns, studied with the classical techniques of bifurcation theory, let us outline a recent analysis performed with group theory methods concerning, first, the case of a compressed inextensible beam resting on a nonlinear foundation [43] and, second, the case of a thick compressed nonlinear sample [20] with types of elastic energy different from the simple one considered here. Focusing on the first case, the very interesting point is that the authors succeed in capturing localized patterns, and one can wonder whether it might be possible to establish a nonlinear solitonic solution for the spatial modes detected here.
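To make the hysteresis quantitative, the jumps and the turning point follow directly from eqs.(89)-(90); in the sketch below (an added illustration) \(B_{2}\) and \(B_{3}\) are arbitrary positive constants, not values from the text:

```python
import numpy as np

J_B = 3.38298
B2, B3 = 1.0, 1.0                       # illustrative constants only
Q2, Q3, Q33 = 1.0476, 8.634, 3.88446    # coupling constants at J = J_B (see above)

eps_G = np.sqrt(2.0) / np.sqrt(3 * B3 * Q33)   # jump when J crosses J_B upwards
eps_D = 1.0 / np.sqrt(3 * B3 * Q33)            # jump when J decreases through J_C
J_C = J_B - (np.pi / 3) * (B2 / B3) * Q3 / (Q33 * Q2)   # turning point, eq.(90)
print(f"eps_G = {eps_G:.3f}, eps_D = {eps_D:.3f}, J_C = {J_C:.3f}")
```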
For small deformations, well represented by a few harmonics, and for ordinary elastic materials with a shear modulus around \(10^{4}\,\mathrm{Pa}\), the surface tension may be relevant only in the vicinity of the bifurcation examined in the previous section IX.5. We will first consider the case where the coupling with the first harmonic \(\zeta\) is weak, as in section IX.5. The expansion of the energy density, eq.(88), must now include the capillary terms order by order:

\[\mathcal{E}_{c}=\mathcal{E}_{cs}+\frac{\gamma_{0}}{2}\left(\frac{\pi(J^{2}-1)\epsilon}{2J}\right)^{2}\left(\Bigl(4B_{2}^{2}+B_{2}\frac{(J-1)^{2}\pi}{J}\Bigr)\epsilon^{2}+e_{2c}\epsilon^{4}\right)\,. \tag{91}\]

\(\mathcal{E}_{cs}\) is the capillary energy associated with the main mode \(\zeta\); it is given by eq.(S12) and eq.(S13), while \(e_{2c}\) is given by eq.(S15) in section XV.3. Since the fourth- and sixth-order terms can be positive or negative, they can change the nature of the bifurcation, which can go from subcritical to supercritical if \(\gamma_{0}\) is strong enough. One can now examine in more detail the case where the coupling of the \(3\) modes has an equivalent weight, according to section IX.2. In this parameter range, the surface tension becomes a small parameter for the standard range of values of \(\gamma_{0}\), and capillarity plays a critical role at fourth order. We rescale the free energy and rewrite equation (78) as follows:

\[\mathcal{E}_{t}=-E_{f}\epsilon^{2}\left(\delta J+2\gamma_{0}g_{2}+(e_{3}+2\gamma_{0}g_{3})\epsilon+2\gamma_{0}g_{4}\epsilon^{2}\right)\,,\]

where \(E_{f}\) and \(e_{3}\) have been defined in eq.(78). Here we give only \(g_{2}\):

\[g_{2}=-\frac{\pi J(J+1)\left(1+4a_{2}^{2}+9a_{3}^{2}\right)}{\left(3J^{2}-6J-1\right)\left(1+2a_{2}^{2}+3a_{3}^{2}\right)}\,. \tag{92}\]

Each coefficient of the capillary energy is a function of \(J\), \(a_{2}\) and \(a_{3}\) and is listed in section XV.3. The order of magnitude of these coefficients as \(a_{2}\) and \(a_{3}\) vary can be found in Fig.(8) in section XV.3. In fact, for normal values of the shear modulus, there is little chance that capillarity will alter the results given by eq.(78). Since \(g_{2}\) is negative, the bifurcation threshold is shifted to higher values by capillarity (see the numerical sketch below). This shift depends on the representation of the profile.
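A minimal numerical sketch of this shift, assuming the quadratic term of the rescaled energy vanishes at the new threshold (so that \(\delta J_{th}=-2\gamma_{0}g_{2}>0\)) and using the \(g_{2}\) of eq.(92):

```python
import numpy as np

J_B = 3.3829      # Biot threshold, the value quoted later in the text
gamma0 = 0.1      # dimensionless surface tension, as in Fig.(4)

def g2(J, a2, a3):
    """Capillary coefficient of eq.(92)."""
    return -np.pi * J * (J + 1) * (1 + 4 * a2**2 + 9 * a3**2) / (
        (3 * J**2 - 6 * J - 1) * (1 + 2 * a2**2 + 3 * a3**2))

# g2 < 0 at J = J_B, so the threshold shift dJ_th = -2*gamma0*g2 is positive.
for a2, a3 in [(0.1, -0.1), (0.1, 0.1), (0.5, 0.5)]:
    print(f"a2={a2:4.1f}, a3={a3:4.1f}:  dJ_th = {-2 * gamma0 * g2(J_B, a2, a3):.4f}")
```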
Post-buckling creases were studied extensively a decade ago [29; 44; 53; 138]. These studies suggest that creases can appear before the Biot threshold due to a subcritical bifurcation, as shown here in section IX.5. Note that the numerically detected creases in these studies require the introduction of periodic defects. Cao and Hutchinson [138] demonstrate the remarkable sensitivity of wrinkling results to physical imperfections and defects. This is not surprising, since it is a general property of bifurcation theory [33]. The case of the self-contacting interface is much more difficult to handle, since analyticity is not preserved on a line (or on a segment) in the plane, so the elasticity equations are no longer valid. If we approximate the two Heaviside distributions that mimic the self-contacting interfaces by an analytic function such as \(\Phi=-b^{2}\sqrt{Z^{2}+a^{2}}\), where \(a\) is a tiny quantity, there is no reason to assume real contact between the two surfaces, which will remain separated by \(2a\). Thus, self-contacting interfaces must be created intentionally, like fractures. They can be nucleated by defects, and they then have a better chance of being observed in thin samples, i.e. in \(2\) dimensions compared to \(3\) dimensions (see Dervaux's thesis [74]). Nevertheless, such triggered singularities remain a very attractive area of study, as shown by the experiment of a deflated cavity localized at a finite distance from the upper boundary [38]. Before the generation of the self-contact, a quasi-singular profile is obtained with the scaling \(x\sim|Y|^{2/3}\), which is similar to our last profile function \(\Phi\) of eq.(63), on the right, but with a different exponent. The curvature at the singularity varies as \(|Y|^{-1/3}\), like \(x^{-1/2}\). This experiment is strongly reminiscent of the equivalent one realized in viscous flow by Jeong and Moffatt [47] with contra-rotating motors. Although the interface behavior recovers the same exponent at some distance from the singularity, the curvature remains finite and parabolic at the singularity, the only unknown being the radius of curvature at the tip, which is chosen by the surface tension. In conclusion, the observation of a bifurcation occurring before the Biot threshold is possible if at least \(3\) harmonic modes are initially coupled in the nonlinear regime. For the quasi-singular profile, the answer depends too much on the mathematical description of the profile. However, here we have presented a way to fully analyze the nature of the bifurcation in the neighborhood of the Biot threshold in order to obtain valuable predictions.

## X How to escape the Biot threshold?

In the previous sections, the existence of creases occurring at or just after the Biot threshold was examined. There is no difficulty in generating such creases, as shown above, using the tools of complex analysis. However, it has been suggested by heuristic arguments [29; 139; 44] that singularities inside elastic samples can induce bifurcation below the threshold \(J_{B}\). Singularities induced by stresses are not forbidden in plane strain or plane stress elasticity, provided that the local elastic energy remains finite even if the energy density does not. In practice, in \(2D\), this means that the strains are not more singular than \(R^{1/2}\) and the elastic deformation gradient or the stresses are not more singular than \(R^{-1/2}\), where \(R\) represents the distance to the singular point, \(R\to 0\). In linear plane strain elasticity, this is the case for fracture tips and also for edge dislocations. The main difference between linear elasticity and this work is the fact that linear elasticity does not consider the nucleation of such defects, which exist prior to loading, and focuses more on the opening and/or the displacement of the fracture. There are very few theoretical or experimental investigations of fracture nucleation [140; 141; 142; 143; 144; 70]. The hope here is indeed to generate these peculiar structures by volumetric growth or by compression. The main question we have to solve is the following: is it possible to lower the bifurcation threshold by considering singularities inside the sample? As already mentioned in eq.(60), the solvability condition to observe periodic solutions implies either \(J=J_{B}\) or \(\Re\left[\Phi_{Z}\right]=0\) for \(X=0\), so \(\Phi\) is an even function of \(Z\). Here we avoid a singularity at the interface \(X=0\), which would require a modification of the elastic model with a surface energy [45].
A nonlinear singular solution emerges, but it does not satisfy the simultaneous cancellation of normal and shear stress at \(X=0\) [46]. So we focus on singularities inside the sample. An even function of \(Z\), which can be represented by \(\Phi(Z)=F(Z)+F(-Z)\), automatically exhibits singularities inside the sample if convergence at positive infinity is required: a holomorphic function, other than a constant, cannot converge at both infinities without singularities. The choice is then a periodic function with allowed singularities, eliminating poles and avoiding as much as possible extended branch cuts, which are always a source of stress.

### Singular Profiles below the Biot threshold

In finite elasticity, such as neo-Hookean elasticity, allowed singularities in plane strain must induce a locally finite elastic energy, which in practice means that \(|F_{Z}|\) cannot be more singular than \(|Z-X_{0}|^{-1/2}\), where the positive constant \(X_{0}\) denotes the central position of the singularity. Larger exponents are allowed since they do not contribute locally to the total elastic energy. Existing branch-cut singularities must remain limited in size. However, singular solutions can locally invalidate the hypotheses of the initial elastic model and may require more complexity in the elastic representation and even in the growth process. As an example, in fast fracture dynamics [145; 146; 147], \(3\) possible zones for the crack tip have been identified: first the viscoelastic, then the nonlinear elastic, and finally the traditional linear-elastic zone, which produces the square root singularity [148]. Of course, such a description requires different physical models at different length scales. Whatever the complexity introduced into the modeling, such as multiple invariants of finite elasticity, compressibility, plasticity or strain hardening, and eventually growth variations, it must remain localized in a small domain and must be treated as an inner boundary layer. Let us fix this domain around \(X_{0}\) and choose \(\Phi\) as:

\[\Phi(Z)=F(Z-X_{0})+F(-Z-X_{0})\,, \tag{93}\]

where \(F(Z)=\bar{F}(Z)\). The function \(F\) is periodic with period one, real (its Laurent series has only real coefficients) and convergent for \(Z\to\infty\). Calculating the derivative of \(\Phi\) for \(Z=IY\), it is easy to see that \(\Re\left[\Phi_{Z}\right]=0\), so the normal stress \(S_{11}\) vanishes and there is no need to cancel \(\mathcal{Q}(J)\); see Eqs.(58,59,60). For \(F\), two square root singularities are chosen, located at \(X_{0}\pm l_{0}\) and separated by a branch cut. For symmetry reasons, we will fix the branch cut along the \(X\) axis as follows:

\[F_{Z}=\frac{1}{\sqrt{\tanh^{2}(\pi Z)-\tanh^{2}(\pi l_{0})}}-\cosh(\pi l_{0})\,. \tag{94}\]

The last term ensures the cancellation of \(F_{Z}\) when \(Z\to\infty\). \(l_{0}\) is a tiny parameter that specifies the size \(2l_{0}\) of the branch cut. For later use, \(F(Z)\), the primitive of eq.(94), reads:

\[F(Z)=f(Z)-Z\cosh(\pi l_{0})\,, \tag{95}\]

where

\[f(Z)=\sqrt{\frac{\sinh^{2}(\pi Z)-\sinh^{2}(\pi l_{0})}{\tanh^{2}(\pi Z)-\tanh^{2}(\pi l_{0})}}\frac{h(Z)}{\pi\cosh(\pi Z)}\,, \tag{96}\]

and

\[h(Z)=\tanh^{-1}\left(\frac{\sinh(\pi Z)}{\sqrt{\sinh^{2}(\pi Z)-\sinh^{2}(\pi l_{0})}}\right)\,. \tag{97}\]

Several observations must be made at this stage:

* The choice of \(F_{Z}\) is somewhat arbitrary, but takes into account symmetry arguments and is also dictated by its simplicity.
* Less singular functions, with the square root replaced by a power law \((w)^{1/2}\to(w)^{a}\) where \(a>1/2\), are also possible a priori.
* As soon as we introduce a singular zone around \(X_{0}\), this automatically produces another singularity around \(X_{0}/J\), which will influence the interface at \(X=0\) more strongly. So we are faced with two boundary layers, which are easier to treat independently; hence \(J>1\) and \(l_{0}\ll 1\).
* When introducing such profiles, we are faced with a minimal list of parameters such as \(l_{0}\), \(X_{0}\), \(\epsilon\) and \(J\). We hope to find \(J\) below the Biot threshold \(J_{B}\) and to find constraints on these parameters. These parameters must be fixed in a consistent way.
* Finally, such localized singularities are often found in periodic viscous flows, where the equivalent of our "blob" is the bubble in the viscous flow [119; 120; 58; 149]. Note that in the case of bubbles, there are several families of solutions that depend on the bubble location and its symmetries.

The contribution of such a function \(F(Z)\) to the interface and to the stress accumulated inside the sample is shown in Fig.(5). In order to show that such a scenario can exist, it is necessary to establish the existence of semi-singular patches where the stresses are concentrated. It will also be necessary to relate the four parameters \(l_{0}\), \(X_{0}\), \(\epsilon\) and \(J\) to other physical processes occurring in the inner boundary layer which are neglected in the outer zone.

### Physical origins of the patches

Several origins can explain the existence of patches, sites of focusing of the elastic stress. Such focusing locally destroys the linear expansion of the elastic deformation in \(\epsilon\), making its validity questionable. But also, at large strains, it can invalidate the model itself: the choice of the neo-Hookean elastic energy, the incompressibility hypothesis, the constant volumetric growth. Let us examine each of these possible causes:

* The neo-Hookean model, very convenient for its simplicity, fails to describe a focusing that is either too strong or too localized, and must be corrected by a more sophisticated hyperelasticity model involving nonlinearities in \(I_{1}\) or other invariants such as \(I_{2}\).
* The incompressibility limit is a mathematical limit that is not appropriate for large strains. A more physically relevant model may be a quasi-incompressible approximation with a strong coefficient multiplying the invariant \(I_{3}\) in the elastic energy [44].
* Spatially constant volumetric growth is a naive approximation of a true biological process. In fact, for growing living species, an excess of compressive stress is known to inhibit cell proliferation; this is reported as the principle of homeostasis [150].
* This phenomenon also occurs in swelling. Everyone knows how to expel water from a wet sponge, simply by applying a pressure to it.

The complete solution of the nonlinearities is hopeless, since it is impossible to find a nonlinear solution inside the patch that asymptotically recovers the expansion given by Eqs.(94,96). But we know similar situations in solid mechanics where nonlinearities play an essential role in a localized region around a singular point, as in fracture mechanics or dislocation theory, and are responsible for a constant stress intensity factor.
As shown by Barenblatt [151; 152], these nonlinearities, if they remain localized, do not prevent the validity of the linear theory, except in the zones of high predicted stress, where they soften it. In the next section, we relax some of the limitations of the model by adding nonlinearities and compressibility, while keeping the volumetric growth constant.

### Patches as inner boundary layer

In the first patch located at \(Z=X_{0}\), a new coordinate \(U\) such that \(Z-X_{0}=\frac{1}{\pi}\tanh^{-1}(\tanh(\pi l_{0})U)\) gives the following expansion of eq.(96) in the limit of small \(l_{0}\):

\[F(Z)\sim\frac{1}{2\pi}\left\{Log\left(\frac{U+\sqrt{U^{2}-1}}{-U+\sqrt{U^{2}-1}}\right)+\phi_{0}\right\}\,, \tag{98}\]

and \(x\sim JX+\epsilon J(\Re\left[F(Z)\right]+x_{1})\) with \(x_{1}=\Re\left[\phi_{0}+F(-X_{0})\right]+J\tau_{1}\,\Re\left[F((J-1)X_{0})+F(-(J+1)X_{0})\right]\). This expansion shows that \(F(Z)\) has two singular points \(U=\pm 1\) separated by a branch cut. For \(|Z-X_{0}|>l_{0}\), i.e. \(|U|>1\), and \(l_{0}<Z-X_{0}<1\), the asymptotic behavior of \(F(Z)\) is logarithmic, which gives for the outer profile:

\[\begin{cases}x&\sim JX+\epsilon\left\{\frac{J}{2\pi}Log\left(\frac{(X-X_{0})^{2}+Y^{2}}{l_{0}^{2}}\right)+x_{1}\right\}\,,\\ y&\sim Y-\frac{\epsilon}{\pi}\tan^{-1}\frac{Y}{X-X_{0}}\,.\end{cases} \tag{99}\]

For the second patch \(\mathcal{S_{J}}\) located at \(X_{0}/J\), a similar expansion for \(Z_{1}\), with the definition \(U_{1}=JU_{X}+IU_{Y}\), gives a similar result, the only difference being the expansion of the coordinates \((x,y)\):

\[\begin{cases}x&\sim JX+\frac{\epsilon J\tau_{1}}{2\pi}\left\{Log\left(\frac{(JX-X_{0})^{2}+Y^{2}}{l_{0}^{2}}\right)+x_{2}\right\}\,,\\ y&\sim Y-\frac{\epsilon\tau_{1}}{\pi}\tan^{-1}\frac{Y}{JX-X_{0}}\,.\end{cases} \tag{100}\]

The fact that we have two independent small parameters (\(\epsilon\) and \(l_{0}\)) suggests a complicated double boundary layer. An example is also given by fracture mechanics, where a separation of length scales is necessary and has been estimated in [148; 153]. Indeed, in soft and highly deformable materials, we cannot neglect nonlinear deformations, and dissipation for fast dynamics, even if Hookean elasticity gives a reasonable picture of cracks. The separation of length scales in [148] includes the inner scale (the crack tip) due to a large amount of dissipation, the intermediate scale due to the nonlinearities of elasticity, and finally the outer scale where linear elasticity is allowed. Again, two different length scales, \(\zeta\) for dissipation and \(l\) for nonlinearities, coexist and have been experimentally verified [148]. In our case, we can disregard dissipation since both growth and gel swelling are slow processes, see Fig.(1)(A8,A9) and [74]. However, as with fractures, additional physical properties are needed in order to complete the neo-Hookean model with incompressibility, which has an inherent lack of parameters and length scales. In the next section, we extend the neo-Hookean model by incorporating nonlinearities in the elastic energy, such as \(I_{1}^{2}\) and \(I_{2}\), and also the third invariant \(I_{3}\), which is responsible for compressibility.

## XI Theoretical evidence for internal singularities

### New elastic model for large stresses

We start by locally assuming a more complex elastic energy and we focus on the patches located around \(X_{0}\) and \(X_{0}/J\). They are separated by a distance \(X_{0}(J-1)/J\sim 2X_{0}/3\).
We assume two boundary layers around each patch: an inner zone of size \(l_{0}\) and an outer zone of size \(l_{p}\), such that \(l_{0}<l_{p}\ll X_{0}\), to avoid overlap between the two patches. The existence of two boundary layers is required, first, to eliminate the square root singularity and, second, to recover the logarithmic asymptotics before the linear expansion in \(\Phi(Z)\). Besides nonlinearities in the modeling of the hyperelastic energy density, the limit of complete incompressibility can become questionable when the strains are very large. There are a significant number of models in the literature that treat compressibility, see chapters \(6\) and \(8\) of [91]. Due to the large strains localized in the patches, the best approach is the compressible model of R. Ogden [91; 93], which separates the elastic energy density into two parts: \(\Psi_{iso}\) and a volumetric part, also called the bulk part, \(\Psi_{vol}\). In practice, however, it is very difficult to find explicit solutions that occur on a scale smaller than \(l_{0}\). So we focus first on the intermediate regime where \(|Z-X_{0}|>l_{0}\), and we restrict ourselves to nonlinearities in \(I_{1}\) and a compressibility penalty treated as a quadratic expansion. For the elastic energy density, either \(\Psi^{(p)}\) (nonlinear model) or \(\Psi^{(mr)}\) (the Mooney-Rivlin model), the following expressions are given in terms of the geometric invariants, which are corrected by the growth according to eq.(44):

\[\begin{cases}\Psi^{(p)}&=\tilde{I}_{1}+\frac{\omega_{p}}{2^{2p-1}}\tilde{I}_{1}^{2p}+\frac{\kappa}{J}(\tilde{I}_{3}-J)^{2}\\ \Psi^{(mr)}&=\tilde{I}_{1}+\frac{\omega_{mr}}{2}\tilde{I}_{2}+\frac{\kappa}{J}(\tilde{I}_{3}-J)^{2}\,,\end{cases} \tag{101}\]

where \(p\) is a real number greater than \(1\), \(\omega_{p}\) and \(\omega_{mr}\) are small parameters, positive or negative, representing a first correction to the neo-Hookean elasticity, while \(\kappa\) is expected to be a large positive quantity to mimic quasi-incompressibility.

Figure 5: On the left, profiles corresponding to the deformation given by eq.(96). The parameters are \(J=3\), \(\epsilon=0.1\). For the blue profile, \(X_{0}=0.01\) and \(l_{0}=0.001\); for the red curve, the same \(X_{0}\) value and \(l_{0}=0.005\); for the brown profile, \(l_{0}=0.001\) and \(X_{0}=0.1\). Note that \(l_{0}\) has little effect on the shape of the profile, which depends strongly on the distance of \(X_{0}\) from \(X=0\). The difference in height of the averaged surface has no physical meaning. On the right, the stress \(S_{11}\) in logarithmic scale (\(Log(1+S_{11}^{2})\), multiplied by a scaling factor of \(0.01\) for visualization purposes), with \(l_{0}=0.01\) and \(X_{0}=0.1\). Note the two singular zones around \(X_{0}\) and \(X_{0}/J\). These singularities will merge in the boundary layer. The stress decreases rapidly, so the map is confined to a limited area of the sample between \(0.0<X<0.25\) and \(-0.2<Y<0.2\).

### The intermediate boundary layer analysis

For the first patch, we define the rescaled quantities for the space coordinates in the initial configuration:

\[\hat{X}=(X-X_{0})/l_{p};\quad\hat{Y}=Y/l_{p}\,.
\tag{102}\]

A localized patch \(\mathcal{S}\) or \(\mathcal{S}_{J}\) requires \(l_{p}\leq X_{0}\), and to recover the asymptotics of \(F(Z)\) we assume a solution for the profile \((x_{s},y_{s})\) given by:

\[x_{s}=\frac{\epsilon}{\pi}f_{s}(\hat{R})+JX_{0}\quad\text{and}\quad y_{s}\sim\frac{-\epsilon}{\pi}T\,, \tag{103}\]

where \(\hat{R}=\sqrt{\hat{X}^{2}+\hat{Y}^{2}}\) and \(T=\arctan(\hat{Y}/\hat{X})\). Assuming a possible correspondence with eq.(99), the unknown function \(f_{s}\) must satisfy \(f_{s}(\hat{R})\sim J\,Log\,\hat{R}\) for large values of \(\hat{R}\), but must remain finite for small values of \(\hat{R}\), a property which is not verified by eq.(99). This gives the following scaling for the invariants:

\[\tilde{I}_{1}\sim\epsilon^{2}/(\pi l_{p})^{2}\,,\quad\tilde{I}_{1}^{2p}\sim\epsilon^{4p}/(\pi l_{p})^{4p}\,,\quad\tilde{I}_{3}\sim\epsilon^{2}/(\pi l_{p})^{2}. \tag{104}\]

The size of the boundary layer, given by \(l_{p}\), must eliminate arbitrary coefficients as much as possible. Note that \(l_{p}\) is a tiny value, unlike \(\kappa\), which can take large values to represent a quasi-incompressible material. Define:

\[\begin{cases}\tilde{I}_{1}=\frac{\epsilon^{2}}{\pi^{2}l_{p}^{2}}\hat{I}_{1};\quad\hat{I}_{1}=\frac{1}{\hat{R}^{2}}+f_{s}^{\prime}(\hat{R})^{2},\\ \tilde{I}_{3}-J=-J\hat{K};\quad\hat{K}=1+|\omega_{p}|^{-1/p_{0}}\frac{f_{s}^{\prime}(\hat{R})}{\hat{R}},\end{cases} \tag{105}\]

where \(p_{0}=2p-1\) and \(l_{p}=|\epsilon|/\pi\,|\omega_{p}|^{1/(2p_{0})}J^{-1/2}\); then the elastic energy, for the new elasticity model given by eq.(101), is transformed to:

\[\mathcal{E}_{s}=\frac{\epsilon^{2}}{\pi}\int_{0}^{\infty}\hat{R}d\hat{R}\left(\hat{I}_{1}+\hat{I}_{1}^{2p}+J\kappa\hat{K}^{2}\right). \tag{106}\]

Variation with respect to \(f_{s}\) leads to a second order Euler-Lagrange equation, which can be integrated once without difficulty, and finally we are faced with a first order nonlinear differential equation:

\[\hat{R}f_{s}^{\prime}(\hat{R})\left(1+2p\left(f_{s}^{\prime}(\hat{R})^{2}+\frac{1}{\hat{R}^{2}}\right)^{p_{0}}\right)+\kappa_{0}\frac{f_{s}^{\prime}(\hat{R})}{\hat{R}}=C_{s}, \tag{107}\]

where \(\kappa_{0}=J\kappa\,|\omega_{p}|^{-1/p_{0}}\) and \(p\geq 1\). \(C_{s}\) is an arbitrary integration constant at this stage. In the quadratic case, \(p=p_{0}=1\), the Euler-Lagrange equation for \(\hat{x}_{s}\) is easily solved, and the second Euler-Lagrange equation for \(\hat{y}\) is automatically satisfied once the relation eq.(103) is imposed. Even if an exact solution can be found, we focus on the two limits of interest, first for \(\hat{R}\to 0\) and then for \(\hat{R}\to\infty\):

\[\text{For}\quad\hat{R}\to 0\quad f_{s}^{\prime}(\hat{R})=\frac{C_{s}\hat{R}^{4p-3}}{2p+\kappa_{0}\hat{R}^{4(p-1)}}\,. \tag{108}\]

Regardless of the value of \(p\geq 1\), \(f_{s}\) behaves as a regular function of \(\hat{R}\), as required for a physical solution. Note that \(\kappa_{0}\) only plays a critical role if \(p=1\), which leads to:

\[\text{For}\quad\hat{R}\to 0\quad f_{s}^{\prime}(\hat{R})=\frac{C_{s}}{2p+\kappa_{0}}\hat{R}\,. \tag{109}\]

Finally, choosing \(C_{s}=J\) leads to the convenient asymptotics for \(f_{s}\) when \(\hat{R}\to\infty\), whatever the value of \(p\). So the outer boundary layer seems to satisfy the physical requirements, see eq.(99).
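As a sanity check, eq.(107) can be solved pointwise for \(f_{s}^{\prime}(\hat{R})\) by one-dimensional root finding; a minimal sketch (Python with NumPy/SciPy, using illustrative values of \(J\), \(p\) and \(\kappa_{0}\)) confirming the two limits of eqs.(108) and (99):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters: cubic nonlinearity p = 3/2, so p0 = 2p - 1 = 2.
J, p, kappa0 = 3.0, 1.5, 10.0
p0 = 2 * p - 1
Cs = J   # chosen so that f_s ~ J Log(R) at infinity

def residual(fp, R):
    # Eq.(107): R f'(1 + 2p (f'^2 + 1/R^2)^p0) + kappa0 f'/R - C_s
    return R * fp * (1 + 2 * p * (fp**2 + 1 / R**2) ** p0) + kappa0 * fp / R - Cs

# residual is monotonically increasing in fp, so a bracketing root-finder works.
for R in np.logspace(-2, 2, 9):
    fp = brentq(lambda x: residual(x, R), 0.0, 1e6)
    print(f"R = {R:8.2f}   f_s'(R) = {fp:10.4e}   J/R = {J/R:10.4e}")
# At large R the slope approaches J/R (logarithmic outer behavior, eq.(99));
# at small R it vanishes like the regular power law of eq.(108).
```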
Since we have introduced two boundary layers, an inner core of size \(l_{0}\) and an intermediate zone of size \(l_{p}>l_{0}\), the matching with the asymptotics of \(F(Z)\) given by eq.(98) is easily performed. Another standard way to modify the neo-Hookean model is the Mooney-Rivlin model, which is even simpler:

\[\hat{R}\left(\frac{df_{s}}{d\hat{R}}\right)\left(1+\frac{1}{\hat{R}^{2}}\right)=-\frac{\kappa_{mr}}{\hat{R}}\left(\frac{df_{s}}{d\hat{R}}\right)+C_{s}\,, \tag{110}\]

with \(\kappa_{mr}=J\kappa/\omega_{mr}\) and \(l_{mr}=\epsilon/\pi\sqrt{\omega_{mr}/J}\). The solution for \(f_{s}\) is easily found, and in physical units we have:

\[x_{s}=\frac{J\epsilon}{2\pi}Log\left\{\frac{R^{2}}{l_{mr}^{2}}+\left(1+\frac{\kappa_{mr}}{\omega_{mr}}\right)\right\}\,, \tag{111}\]

with no change for \(y_{s}=-\epsilon T/\pi\). However, in the inner region very close to \(X_{0}\), where \(|Z-X_{0}|\leq l_{0}\), these solutions given by eqs.(108,110) present a singularity of the strain in \(1/R\) due to the choice of \(y_{s}\), which limits their validity in this inner zone. Note that, for both models, \(x_{s}\) has the same asymptotic limit for \(R>l_{p}\), and both are regular for \(R\to 0\); so even if we are not able to find the inner core solution, we can recover the result of eq.(99) and the correct asymptotics of \(\Phi(Z)\). The same analysis applies to the upper singularity centered around \(X_{0}/J\). For the second patch \(\mathcal{S}_{\mathcal{J}}\), the same strategy is followed, and it is easy to show that the associated deformations \((\hat{x}_{S_{J}},\hat{y}_{S_{J}})\) according to eq.(100) satisfy:

\[\begin{cases}\hat{x}_{s_{J}}=\frac{1}{2\pi}Log\left(\frac{1}{\pi^{2}}+J^{2}\hat{X}^{2}+\hat{Y}^{2}\right)\\ \text{and}\\ \hat{y}_{s_{J}}=-\frac{1}{\pi}\tan^{-1}\frac{\hat{Y}}{J\hat{X}}\,.\end{cases} \tag{112}\]

The same matching between deformations in the second patch is done with \(\epsilon\tau_{1}(Jx_{s_{J}},y_{s_{J}})\), which gives the same analysis for both patches. So both singularities can be treated in the same way. In summary, once linearized, the primary incompressible model can have two patches, \(\mathcal{S}\) and \(\mathcal{S}_{J}\), with a singularity inside. The actual patch size is \(l_{p}\). Each patch contains an inner core of size \(l_{0}^{p}\) and an intermediate zone of size \(l_{p}\), corresponding to two boundary layers with a more regular stress distribution. The main information we obtain for the treatment of the intermediate zone is in fact its size \(l_{p}\), given by \(\epsilon\) and the first nonlinear correction of the neo-Hookean energy, scaled by the constant \(|\omega_{p}|\) (a symbolic verification of the Mooney-Rivlin solution is sketched below).
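Before entering the inner core, a quick symbolic check with SymPy that the slope implied by eq.(111), namely \(f_{s}^{\prime}(\hat{R})=C_{s}\hat{R}/(\hat{R}^{2}+1+\kappa_{mr})\), indeed solves the Mooney-Rivlin equation (110) (a minimal sketch; the additive constant inside the logarithm is written here as \(1+\kappa_{mr}\) in rescaled units):

```python
import sympy as sp

R, kmr, Cs = sp.symbols('R kappa_mr C_s', positive=True)

# Candidate slope: derivative of f_s = (C_s/2) Log(R^2 + 1 + kappa_mr)
fp = Cs * R / (R**2 + 1 + kmr)

# Residual of eq.(110): R f'(1 + 1/R^2) + kappa_mr f'/R - C_s
residual = R * fp * (1 + 1 / R**2) + kmr * fp / R - Cs
print(sp.simplify(residual))   # prints 0
```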
### The inner core

In the inner core, the strains and stresses are expected to increase again, and perhaps the specimen is strongly modified by nonlinearities, going from stress hardening to plasticity [154]. Such material transformations have been studied in detail in the fracture mechanics literature [147; 148]. Here we try to stay as close as possible to our original model, although we agree that any modification of the material structure is possible and may change some conclusions. Therefore, we keep the compressible hyperelasticity point of view, but we reinforce the compressibility analysis of the material with the Ogden model, where the elastic energy is decoupled into a purely volumetric elastic response and an isochoric part [93]. As explained in section III.2 at the very beginning of the paper, this requires a new definition of the strains according to eq.(5).

#### iii.3.1 Rescaling the strains and the invariants

We retain the quadratic compressibility model of the previous paragraph, which is still a function of \(I_{3}\), although other approximations may be more appropriate for \(R\sim l_{0}^{p}\) (see Chapter \(6.5\) of [91]). Within the inner core, we suggest the following description for the current coordinates \((x_{c},y_{c})\):

\[x_{c}=\frac{\epsilon}{\pi}l_{0}^{s}\bar{F}(\rho,\theta);\;y_{c}=-\frac{\epsilon l_{0}^{q}}{\pi}\bar{G}(\rho,\theta)\,. \tag{113}\]

with \(\rho=R/l_{0}^{p}\) and \(\theta=T/l_{0}^{q}\). We restrict ourselves to a cubic nonlinearity \(I_{1}^{3}\) for reasons of simplicity, but the method can be applied to any kind of dilatational hyperelasticity. We impose that \(p\) and \(q\) are positive and that \(x_{c}\) and \(y_{c}\) are regular for \(\rho\to 0\). For \(\rho\to\infty\), the matched asymptotic analysis requires that \(x_{c}\) and \(y_{c}\) coincide with the behavior of \(x_{s}\) and \(y_{s}\) for \(R\to 0\). For \(y_{c}\), this simply requires \(\bar{G}\to-\theta\). For \(x_{c}\), we first consider \(x_{s}\) from eq.(109) in the neighborhood of \(R\to 0\):

\[x_{s}=\frac{\epsilon J}{\pi}\frac{R^{2}}{6\,l_{3/2}^{2}}\quad\mbox{where}\quad l_{3/2}=\frac{\epsilon}{\pi\sqrt{J}}|\omega_{3/2}|^{1/4}. \tag{114}\]

Thus, at infinity, \(x_{c}\) must behave as \(x_{c}\sim\rho^{2}\), and matching with \(x_{s}\) gives a more precise result:

\[x_{c}=\frac{\epsilon}{\pi}l_{0}^{s-2p}R^{2}\,\Rightarrow\,l_{0}=\left(\frac{6\,l_{3/2}^{2}}{J}\right)^{1/(2p-s)}. \tag{115}\]

If these conditions are satisfied, the inner coordinates \((x_{c},y_{c})\) can correctly match the corresponding ones \((x_{s},y_{s})\) in the intermediate zone, allowing eq.(112) to be recovered. However, our analysis involves many degrees of freedom in addition to the unknown material parameters \(\kappa\) and \(\omega_{3/2}\). Most likely, there will be several possibilities, and to limit them we recapitulate the different constraints. Since the singularity of the strains calculated with \(x_{s}\) and \(y_{s}\) comes from the derivative with respect to \(T\), we impose that the corresponding strains for \(x_{c}\) and \(y_{c}\) dominate. After evaluating the amplitude of the strains, we get:

\[\begin{cases}\frac{\partial x_{c}}{\partial R}&=\frac{\epsilon}{\pi}l_{0}^{s-p}\frac{\partial\bar{F}}{\partial\rho}\ll\frac{1}{R}\frac{\partial x_{c}}{\partial T}\sim\frac{\epsilon}{\pi}l_{0}^{s-p-q}\frac{1}{\rho}\frac{\partial\bar{F}}{\partial\theta},\\ \\ \frac{\partial y_{c}}{\partial R}&=-\frac{\epsilon l_{0}^{q-p}}{\pi}\frac{\partial\bar{G}}{\partial\rho}\ll\frac{1}{R}\frac{\partial y_{c}}{\partial T}\sim-\frac{\epsilon}{\pi\rho}l_{0}^{-p}\frac{\partial\bar{G}}{\partial\theta}\,.\end{cases} \tag{116}\]

\(\frac{\partial\bar{F}}{\partial\rho}\), \(\frac{\partial\bar{F}}{\partial\theta}\) and \(\frac{\partial\bar{G}}{\partial\rho}\) being quantities of order one, the exponent \(q\) must be positive. Now we examine \(I_{1}\), reduced to the shear strains:

\[I_{1}\simeq\frac{\epsilon^{2}}{\pi^{2}}l_{0}^{-2p}\frac{1}{\rho^{2}}\left(l_{0}^{2(s-q)}\left(\frac{\partial\bar{F}}{\partial\theta}\right)^{2}+\left(\frac{\partial\bar{G}}{\partial\theta}\right)^{2}\right). \tag{117}\]

The two terms in \(I_{1}\) are of different weight. However, only the first one gives the correct asymptotics for the Euler-Lagrange equations, such that \(x_{c}\sim\rho^{2}\). This imposes \(s<q\).
Then we define \(\hat{I}_{1}\) and \(\hat{I}_{3}\):

\[\begin{cases}I_{1}=\frac{\epsilon^{2}}{\pi^{2}}l_{0}^{2(s-q-p)}\hat{I}_{1},\quad I_{3}=\frac{\epsilon^{2}}{\pi^{2}}l_{0}^{s-2p}\hat{I}_{3},\\ \hat{I}_{1}=\frac{1}{\rho^{2}}\left(\frac{\partial\bar{F}}{\partial\theta}\right)^{2},\\ \hat{I}_{3}=\frac{1}{\rho^{2}}\left(\frac{\partial\bar{F}}{\partial\rho}\frac{\partial\bar{G}}{\partial\theta}-\frac{\partial\bar{F}}{\partial\theta}\frac{\partial\bar{G}}{\partial\rho}\right).\end{cases} \tag{118}\]

#### iii.3.2 The energy density of the inner core

Although families of possible deformations have been published for arbitrary elastic energy densities, very few concern compressible materials [155; 156; 157]. Since we can expect a high compression in the inner core, we modify the bulk energy and adopt the Ogden model [91; 93] of compressible constrained materials, but keep the penalty term as before. There are then two choices, depending on the value of \(\vartheta=|\omega_{3/2}|l_{0}^{2s-4q}/J^{2}\). In fact, \(|\omega_{3/2}|\) is a tiny quantity, but \(s<q\), so \(\vartheta\) is arbitrary.

* \(\vartheta\ll 1\): defining \(K_{1}=J\kappa\frac{\epsilon^{4}}{\pi^{4}}l_{0}^{s-4p+2q}\),
\[W_{c}=l_{0}^{s-2q}\left(\frac{\hat{I}_{1}}{\hat{I}_{3}}+K_{1}\hat{I}_{3}^{2}\right).\] (119)
* \(\vartheta\gg 1\): with \(K_{2}=\frac{J\kappa}{|\omega_{3/2}|}\frac{\epsilon^{4}}{\pi^{4}}l_{0}^{6q-4p-s}\),
\[W_{c}=\frac{l_{0}^{3s-6q}|\omega_{3/2}|}{J^{2}}\left(\pm\left(\frac{\hat{I}_{1}}{\hat{I}_{3}}\right)^{3}+K_{2}\hat{I}_{3}^{2}\right).\] (120)

A good way to get very low values of \(l_{0}\) is to choose \(p\simeq s/2\). However, it is sufficient to have \(2p<2+s\). In addition, one must reduce the elastic energy which is confined in the core. This means that \(2q-s\) must be as small as possible. In this context, the elastic energy trapped in the core can be estimated to be of order \(l_{0}^{s-2q+2p}\), which means that \(s-2q+2p\) must be positive for the first hypothesis, while for the second a necessary condition is \(3s-6q+2p>0\). It is relatively easy to choose good parameters, such as \(s=0\), \(p>q\) for the first choice, or \(s=0\), \(p>3q/2\) for the second. However, these conditions may not be sufficient and must be compared with the elastic energy in the intermediate range. Note that the nonlinear eigenvalues \(K_{1}\) and \(K_{2}\) are numbers that are difficult to predict, even as an order of magnitude. Whatever the hypothesis, eq.(119) or eq.(120), the corresponding asymptotics perfectly match the overlap with \(x_{s}\) and \(y_{s}\) for \(\rho\to\infty\); the behavior for \(\rho\to 0\) for eq.(119) is \(\bar{F}(\rho,\theta)\sim\rho\), and the same for \(\bar{G}\). The analysis for the second hypothesis is less obvious. This study proves that the matching is possible and the deformation remains regular but, due to the degrees of freedom \(p\) and \(q\), we do not get any information about \(J\), \(\epsilon\), \(X_{0}\) and \(l_{0}\) as functions of the material parameters \(\kappa\) and \(\omega_{3/2}\). At this stage, it remains to evaluate how the zero order elastic energy is modified by the localized compression zones. In fact, a bifurcation is possible if the elastic energy is reduced.

### Energy of the patches

The goal now is to evaluate the elastic energy involving the entire physical plane (the zero-order and linear expansion in \(\epsilon\)) and the energy due to the patches, including the inner core and the outer ring of both singular patches.
As shown before in section IX, the expansion of the elastic field has \(3\) contributions in powers of \(|\epsilon|\), see eq.(69). If there are no singularities inside the physical plane, the linear term \(\mathcal{E}_{1}\) vanishes. If there are singularities, this is not the case, and one has to evaluate this term. For this, we use the same method of complex analysis as described in IX.1:

\[\begin{cases}\mathcal{E}_{1}&=\tau_{0}\iint_{\mathcal{P}}dS\,\Re\left[\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right]\\ &=\frac{\tau_{0}^{2}}{2J^{2}}\Re\left[\frac{1}{2I}\iint_{\mathcal{P}}dZd\bar{Z}\,\Phi_{Z}\right]\end{cases} \tag{121}\]

Without singularities inside the physical plane, the integration contour, called \(\partial\mathcal{P}\), which can be observed in Fig.(4), first row on the left, corresponds to the outer boundary of the physical strip, and because of the periodicity of \(\Phi\) the integration gives no contribution. This is not the case now, since the calculation requires crossing the cell in the middle along paths that go from the point \(M_{0}\) down to the first and second singularities \(\mathcal{S}\) and \(\mathcal{S}_{J}\) and up along the neighboring path on the right (see Fig.(4), first row). So the additional contribution comes from the two circular contours around the singularities. After simple simplifications, and taking the size of the singularity radius as \(l_{S}\sim l_{1}\) for both patches, we get:

\[\begin{cases}\mathcal{E}_{1}&=\frac{\tau_{0}^{2}}{2J^{2}}\Re\left[\frac{1}{2I}\oint_{\mathcal{P}}dZ\,\Phi(\bar{Z})\right]\\ &=\frac{\tau_{0}^{2}}{4J}\frac{l_{1}}{\pi}\Re\left[\int_{-\pi}^{\pi}dT\,e^{IT}(-IT)\right]=\frac{\tau_{0}^{2}}{2J}l_{1}\,.\end{cases} \tag{122}\]

Thus, the singularities inside the sample give a correction to the elastic energy of order \(\epsilon l_{1}\) which, if \(\epsilon<0\), indicates a possible bifurcation: for negative \(\epsilon\), the singular solution is less energetic than the uniform axial growth. However, one must also evaluate the energy inside the core of the patches, for both singularities. Evaluating the energy inside the inner core in each singular patch leads to:

\[\mathcal{E}_{c}\simeq|\omega_{3/2}|l_{3/2}^{\varpi};\quad\varpi=\frac{2}{2p-s}(3s-6q+2p). \tag{123}\]

In order to compare with the energy of the intermediate zone, \(\varpi\) must be as large as possible. Unfortunately, the value \(s=2p\) is not compatible with our constraints, but \(s=0\), \(q\) small and \(p=1\) leads to \(\varpi\simeq 2\), which is enough for \(\mathcal{E}_{c}\) to be negligible. This quantity has to be compared with \(\epsilon l_{3/2}=\epsilon^{2}|\omega_{3/2}|^{1/4}/\pi\). Moreover, since \(|\omega_{3/2}|^{1/4}>|\omega_{3/2}|\) for small \(|\omega_{3/2}|\), the intermediate singular zone dominates. We conclude that the dominant energy density corresponds to the uniform axial growth corrected by:

\[\delta\mathcal{E}\sim\epsilon\frac{(J^{2}-1)^{2}}{2J}|\epsilon||\omega_{3/2}|^{1/4}\,. \tag{124}\]

Thus, singularities inside the sample lower the instability threshold and are at the origin of a bifurcation. At this stage, there is no restriction on the values of \(J\), except that the distance between the two singularities must be greater than \(2l_{1}\): \(2l_{1}<X_{0}(J-1)/J\), which means that \(\delta J=J-1\) must be larger than \(2Jl_{1}/X_{0}\). This is a necessary condition for our analysis, based on the separation of the two patches. Thus, the new bifurcation threshold results from controlled deviations from the neo-Hookean model.
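As a numerical cross-check of the contour contribution in eq.(122), the angular integral \(\Re\left[\int_{-\pi}^{\pi}dT\,e^{IT}(-IT)\right]\) can be evaluated directly (a minimal sketch):

```python
import numpy as np

# Check of the angular integral in eq.(122): Re[ int_{-pi}^{pi} e^{iT}(-iT) dT ] = 2*pi,
# since Re[-iT e^{iT}] = T sin(T) and the integral of T sin(T) over one period is 2*pi.
T = np.linspace(-np.pi, np.pi, 200001)
value = np.trapz((np.exp(1j * T) * (-1j * T)).real, T)
print(value, 2 * np.pi)   # ~6.28319 vs 6.28319
```

With this value, the prefactors of eq.(122) indeed give \(\mathcal{E}_{1}=\tau_{0}^{2}l_{1}/(2J)\).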
## XII Path independent contour integrals

A fancy and easy way to determine unknown parameters in singular elasto-static fields is to use path-independent integrals [158; 159; 160]. They result from Noether's first theorem [161], as recently demonstrated and recalled by J. Rice and collaborators [162]. This theoretical method relates the geometric parameters of the singularities to the boundary conditions imposed on the far-field elasticity. It has been successfully applied to many topics of elasticity [163], but also to other physical fields as soon as they are governed by variational principles. One can think of interfacial potential flows (Darcy or Euler flows [164]) and electrostatic fields [165; 162]. It is widely used in all aspects of solid mechanics such as fracture [166], dislocations [167; 168; 169], notches [160] and erosion [170]; it is not limited to time-independent formulations [171], nor to linear elasticity, although finite-elasticity singularities must be treated with care [168], especially in the case of non-quadratic formulations. Nonlinear problems, sometimes time-dependent, are often interpreted in terms of internal forces acting on defects present in materials, and path-independent integrals have also been established in these cases [166; 172]. It remains, however, that some applications in nonlinear elasticity have been questioned [173; 174], more precisely for the so-called M integrals. Proofs of the application of this technique are justified in the Appendix, section XV.6. Our goal is to discover relationships between \(l_{0}\), \(X_{0}\) and the neo-Hookean parameters with growth, namely \(\epsilon\) and \(J\).

### The J-Integral

This approach is not fully nonlinear, because we perform an incremental expansion. In addition, our sample is pre-stretched by growth, which destroys the spatial isotropy. Thus, we cannot claim that the J-integral methods are directly applicable. Knowles and Sternberg have demonstrated the validity of the \(J\) integral in fully nonlinear elasticity and also for incremental deformations, but only when the initial state is stress-free, which is a different case. Therefore, it is important to verify that the J-integral remains valid for the model described in section VII.2. This is done in Appendix (XV.6), and we define \(\mathcal{J}\), which is a contour integral, see Fig.(4), panel (B) or (C) above:

\[\mathcal{J}=\oint ds\,(E\,\vec{\mathbf{N}}\cdot\vec{\mathbf{e}}_{x}-N_{k}S_{ik}F_{i1})=0\,. \tag{125}\]

The stress \(S_{ik}\) was introduced in section VII.2, eq.(50) and eq.(53), and the strains are simply given by \(F_{ij}=\partial_{j}x_{i}\). Note that the J-integral is a vector, so \(\mathcal{J}\) is only one component; not every component gives pertinent information. The contour, shown in Fig.(4), panel B on top, first goes from \(M\) to \(M_{0}\), the center of \(\mathcal{C}_{0}\); then it goes down to avoid the two singularities \(\mathcal{S}_{J}\) and \(\mathcal{S}\); it climbs back up to \(M_{0}\) to join the point \(M_{1}\); it continues along \(M_{1}M_{2}\), then \(M_{2}M_{3}\), and finally \(M_{3}M\). Only the brown paths can contribute; the blue paths cancel each other out for reasons of periodicity and symmetry. Focusing on the contour \(\mathcal{C}_{0}\), which is \(MM_{1}\) at the top of the domain, only the energy density contributes, since both normal stress components vanish, a necessary condition for a free boundary.
Decomposing \(\mathcal{J}\) into \(\mathcal{J}^{(0)}+\epsilon\mathcal{J}^{(1)}+\epsilon^{2}\mathcal{J}^{(2)}\), we get for the upper boundary \(\mathcal{C}_{0}\):

\[\mathcal{J}^{(0)}_{\mathcal{C}_{0}}=\frac{(J-1)^{2}}{2};\,\mathcal{J}^{(2)}_{\mathcal{C}_{0}}=\frac{\tau_{0}^{3}}{8J^{2}}\int_{-1/2}^{1/2}dY(\phi_{Z})^{2}|_{{}_{Z=IY}}\,. \tag{126}\]

and \(\mathcal{J}^{(1)}_{\mathcal{C}_{0}}=0\). The last integral is difficult to evaluate exactly, but in the limit of small \(l_{0}\) it gives:

\[\mathcal{J}^{(2)}_{\mathcal{C}_{0}}\sim-\frac{\tau_{0}^{3}}{2J^{2}}\left(\coth(2\pi X_{0})-1\right). \tag{127}\]

If we now focus on the singularities \(\mathcal{S}_{J}\) and \(\mathcal{S}\), the vertical brown contours have no contribution, so only the singularities \(\mathcal{S}\) and \(\mathcal{S}_{\mathcal{J}}\) play a role. By defining a small radius around each singularity, \(R=((X-X_{0})^{2}+Y^{2})^{1/2}\) for \(l_{0}<R<1\), one can approximate \(\phi_{Z}\) close to \(\mathcal{C}_{\mathcal{S}}\) by:

\[\phi_{Z}\sim\frac{e^{-IT}}{\pi R}-\coth(2\pi X_{0})\,, \tag{128}\]

and in the neighborhood of \(\mathcal{C}_{\mathcal{S}_{\mathcal{J}}}\) by:

\[\phi_{Z}\sim\frac{1}{\pi R(J\cos(T)+I\sin(T))}-\coth(2\pi X_{0})\,, \tag{129}\]

and derive the contributions of these singularities to \(\mathcal{J}\), first for \(\mathcal{S}\):

\[\mathcal{J}^{(1)}_{\mathcal{S}}=2\tau_{0};\quad\mathcal{J}^{(2)}_{\mathcal{S}}\sim 2\tau_{0}\coth(2\pi X_{0})\,, \tag{130}\]

and for \(\mathcal{S}_{J}\):

\[\mathcal{J}^{(1)}_{\mathcal{S}_{\mathcal{J}}}=2J\tau_{0}\tau_{1};\quad\mathcal{J}^{(2)}_{\mathcal{S}_{\mathcal{J}}}=-2J^{2}\tau_{0}\tau_{1}^{2}\coth(2\pi X_{0})\,. \tag{131}\]

Finally, after adding the integral contributions to \(\mathcal{J}\) at \(\mathcal{C}_{0}\), around the singularities \(\mathcal{S}\) and \(\mathcal{S}_{\mathcal{J}}\), on the two trajectories between \(M_{0}\) and the singularities, and on the two vertical lines \(M_{1}M_{2}\) and \(M_{3}M\), and after simplifications, we determine the value of \(\mathcal{J}_{P}=\mathcal{J}_{\mathcal{C}_{0}}+\mathcal{J}_{\mathcal{S}}+\mathcal{J}_{\mathcal{S}_{J}}=\mathcal{J}_{\mathcal{C}_{1}}\), where \(\mathcal{J}_{\mathcal{C}_{1}}\) is restricted to the horizontal lower segment. If at \(+\infty\) the volumetric growth is kept at \(J\), then \(\mathcal{J}_{\mathcal{C}_{1}}=\mathcal{J}_{\mathcal{C}_{0}}^{(0)}\), which eliminates the zero order and results in:

\[\mathcal{J}=-\frac{\epsilon\tau_{0}(J-1)^{2}}{J}\left\{1+\frac{\epsilon(1+J)^{2}}{2J}\left(\frac{2}{\tanh(2\pi X_{0})}-1\right)\right\}\,. \tag{132}\]

This evaluation is correct for \(X_{0}>l_{0}\), and it results in \(X_{0}\sim-\epsilon(1+J)^{2}/(2\pi J)\).

Figure 6: Panel (a), continuous lines: the ratio \(\mathcal{R}_{0}\) between \(X_{0}\) and \(|\epsilon|\) for different values of \(J\); dashed lines: \(d_{J}=\mathcal{R}_{0}(J-1)/J\), the distance (divided by \(|\epsilon|\)) between \(\mathcal{S}_{J}\) and \(\mathcal{S}\), which must be greater than \(2l_{1}/|\epsilon|=2\sqrt{|\omega_{1}|}\), represented by the dot-dashed curve in black. Panel (b), continuous lines: \(\mathcal{R}_{0}\); dotted and dot-dashed lines: corrected values due to a change of \(J\) at large distances, \(j_{0}=\pm 0.1\), with dot-dashed lines for \(+\) and dotted lines for \(-\), see eq.(134). In (c), the quantity \(j_{0}d\) (jump of the growth value between the bottom and the top of the strip times the thickness \(d\) of the sample) as a function of \(J\), required for two sizes of the singularity, \(l_{0}=0.01\) and \(l_{0}=0.001\).
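The root of eq.(132) can be found numerically, reproducing the trend of panel (a) of Fig.(6); a minimal sketch (Python/SciPy, with illustrative values of \(J\) and \(\epsilon\)):

```python
import numpy as np
from scipy.optimize import brentq

def X0_of(eps, J):
    """Solve eq.(132) = 0 for X0 (constant volumetric growth J at infinity)."""
    resid = lambda X0: 1 + eps * (1 + J)**2 / (2 * J) * (2 / np.tanh(2 * np.pi * X0) - 1)
    return brentq(resid, 1e-8, 10.0)

for J in (2.0, 3.0, 3.3829):
    for eps in (-0.01, -0.05, -0.1):
        approx = -eps * (1 + J)**2 / (2 * np.pi * J)   # small-eps estimate
        print(f"J={J:6.4f} eps={eps:5.2f}:  X0={X0_of(eps, J):.5f}  (estimate {approx:.5f})")
```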
Again, the negative sign of \(\epsilon\) is confirmed. The numerical values of the \(X_{0}\) solution of eq.(132) are shown in Fig.(6), which shows the ratio \(\mathcal{R}_{0}=X_{0}/|\epsilon|\) in panel (a) with continuous lines, while the distance between the two singularities, also divided by \(|\epsilon|\), \(d_{J}=X_{0}(J-1)/J\), is represented by dashed curves. Both sets of curves show a decrease at small \(\epsilon\) and then an increase, so we can deduce that \(X_{0}\sim-\epsilon\). Since \(d_{J}\) must be greater than \(2l_{p}=2|\epsilon||\omega_{p}|^{1/2p}\), a threshold value for \(J\) depending on \(\sqrt{|\omega_{1}|}\) can be proposed: as an example, from Fig.(6), the value \(J=1.1\) is obviously too low. Our analysis assumes that the growth conditions are maintained at infinity and that the sample is infinite, which is not true in real experiments. If the sample has a finite depth \(d\geq 1\), since the elastic deformations decay exponentially beyond a distance from the interface of the order of the wavelength, our approach remains valid near the interface, but we must consider a substrate that may alter our estimate of \(\mathcal{J}_{\mathcal{C}_{1}}\). In addition, the growth law may change away from the interface. These two points will be explored below.

#### v.1.1 Constant growth and finite size effects

We now assume that our sample has a finite size \(d\) and that it is attached to a solid substrate. For \(X=d\), the growing material cannot penetrate the substrate but can slide freely on it. The deformation \(\Phi(Z)\), estimated from the top of the layer, decreases exponentially as \(\Phi(Z)\sim-2X_{0}e^{-2\pi Z}\) when \(|Z|\gg 1\). We need to adjust this deformation near the substrate when \(Z\sim d\). Following eq.(57), at a distance \(d\) the profile function can be represented by:

\[\begin{cases}x&=JX-d_{1}+\epsilon_{1}\cos(2\pi Y)\left(e^{-2\pi X}-\tilde{\tau}e^{-2\pi JX}\right),\\ y&=Y-\epsilon_{1}\sin(2\pi Y)\left(e^{-2\pi X}-J\tilde{\tau}e^{-2\pi JX}\right).\end{cases} \tag{133}\]

where \(d_{1}=d(J-1)\), \(\epsilon_{1}=-2X_{0}\epsilon\) and \(\tilde{\tau}=-e^{2\pi d(J-1)}\). \(\epsilon_{1}\) results from the matching with the lower expansion and is of the order of \(\epsilon^{2}\). \(\mathcal{J}_{\mathcal{C}_{1}}\) is easy to compute and reads: \(\mathcal{J}_{\mathcal{C}_{1}}=(J-1)^{2}/2\,\{1-(4\epsilon\pi X_{0}e^{-2\pi d})^{2}(J+1)\}\). Obviously, once \(d\) is of the order of the wavelength or larger, this correction becomes negligible: for \(d=1\), \(e^{-4\pi d}=3.46\times 10^{-6}\); for \(d=2.5\), \(e^{-4\pi d}=2\times 10^{-14}\). Note that a sliding substrate allows an easy estimation of finite size effects. Clamped conditions, as discussed later in section XIII, are much more difficult to fit to our singular deformation mode.

#### v.1.2 Inhomogeneous volumetric growth

If the growth becomes slightly inhomogeneous at large distances, becoming \(\tilde{J}=J+\epsilon j_{0}\) at the bottom of the sample, then estimating \(\mathcal{J}_{\mathcal{C}_{1}}=(1-J)^{2}/2+\epsilon(J-1)j_{0}\) will change the \(X_{0}\) value into:

\[X_{0}\sim-\epsilon\frac{(J-1)^{2}(1+J)^{3}}{2\pi J(1-J-J^{2}+J^{3}+j_{0}J)}\,. \tag{134}\]

This estimate for \(X_{0}\) is given for small \(\epsilon\), see Fig.(6)(b). Increasing the volumetric growth at the bottom (\(j_{0}<0\)) also increases the value of \(X_{0}\). However, such an estimate is valid for a change in volumetric growth localized only at the bottom.
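A one-line evaluation of eq.(134) (a minimal sketch with illustrative values; note that for \(j_{0}=0\) the denominator factorizes as \((J-1)^{2}(J+1)\), and the estimate reduces to the previous \(X_{0}\sim-\epsilon(1+J)^{2}/(2\pi J)\)):

```python
import numpy as np

def X0_inhom(eps, J, j0):
    """Small-eps estimate of X0 when the growth is J + eps*j0 at the bottom, eq.(134)."""
    return -eps * (J - 1)**2 * (1 + J)**3 / (
        2 * np.pi * J * (1 - J - J**2 + J**3 + j0 * J))

# j0 < 0 (larger growth at the bottom, since eps < 0) increases X0:
for j0 in (0.1, 0.0, -0.1):
    print(f"j0 = {j0:5.2f}:  X0 = {X0_inhom(-0.05, 3.0, j0):.5f}")
```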
### The M-Integral

Despite debates about the validity of the M-integrals in finite elasticity, let us now consider these integrals, which have the advantage of explicitly introducing the size of the elastic samples. Unlike the \(J\) and \(L\) integrals, proved valid by Knowles and Sternberg, the M-integral technique turns out not to be always applicable for arbitrary energy densities. Nevertheless, when applicable, it remains a very useful tool for demonstrating properties of nonlinear fields, such as creeping closure, for example. As before for the \(J\) integral, it is better to convince ourselves that a path-independent integral \(\mathcal{M}\) can be constructed, and this is realized in the Appendix, section XV.6. For our modeling, the definition of \(\mathcal{M}\) follows:

\[\mathcal{M}=\oint ds\left(E-\tfrac{(J-1)^{2}}{2}\right)\vec{\mathbf{X}}\cdot\vec{\mathbf{N}}-S_{jk}U_{ji}X_{i}N_{k}-(J+1)\left(U_{X}N_{X}+U_{Y}N_{Y}\right)=0\,. \tag{135}\]

where \(U_{X}=x-JX\) and \(U_{Y}=y-Y\), equation (135) being valid up to \(O(\epsilon^{3})\). As before for \(\mathcal{J}\), \(\mathcal{M}\) results from \(4\) contributions: along the horizontal axis \(X=0\), the two patches \(\mathcal{C}_{\mathcal{S}}\) and \(\mathcal{C}_{\mathcal{S}_{J}}\), and the far field. The vertical lines do not contribute, as before. Considering the upper boundary where \(X=0\), only the third term in eq.(135), of order \(|\epsilon|\), contributes. The contribution of the two patches is of order \(\mathcal{J}X_{0}\) for the first two terms of eq.(135). Each factor, either \(\mathcal{J}\) or \(X_{0}\), is of order \(|\epsilon|\), so this result will be neglected. The last term is of order \(|\epsilon|l_{1}\), which is even smaller than \(|\epsilon|X_{0}\), so the patches make a subdominant contribution. Consequently, the only way to compensate \(\mathcal{M}_{X=0}\) is to close the contour at a finite distance \(d\), as done before, and to assume a slight difference in the volumetric growth. Let us first evaluate \(\mathcal{M}_{X=0}\) for a very small value of \(l_{0}\):

\[\begin{cases}\mathcal{M}_{X=0}&=-(J+1)\int_{-1/2}^{1/2}(x-JX)dY\\ &=-J(J+1)(1+\tau_{1})\epsilon\int_{-1/2}^{1/2}dY\,I_{f}\,,\end{cases} \tag{136}\]

with \(I_{f}=F(IY-X_{0})+F(-IY-X_{0})\). A careful analysis of the integral of \(I_{f}\) gives:

\[-2X_{0}+2Log(\sinh(\pi l_{0}))/\pi\]

for small values of \(l_{0}\), so the last term dominates, which finally leads to:

\[\mathcal{M}_{X=0}\sim-\frac{1}{J\pi}(J-1)(J+1)^{2}\epsilon Log(\sinh(\pi l_{0}))\,. \tag{137}\]

Now let us evaluate the contribution of the lower boundary at \(X=d\), where \(\tilde{J}=J+\epsilon j_{0}\). The first and last terms contribute, giving \((J-1)d\epsilon j_{0}-(J+1)d\epsilon j_{0}\), and then:

\[\mathcal{M}_{X=d}=-2d\epsilon j_{0}\quad\text{so}\quad l_{0}\sim\frac{1}{\pi}e^{\left(\frac{2\pi dj_{0}}{(J-1)(J+1)^{2}}\right)}\,. \tag{138}\]

Since \(l_{0}\) is a tiny quantity, the model is validated if \(j_{0}<0\), that is, if the volumetric growth is greater at the bottom than at the top. Values of \(l_{0}\) of order \(10^{-2}\) or \(10^{-3}\) require both a negative jump value \(j_{0}\) and a finite thickness \(d\) for the sample; a graph representing the product \(j_{0}\times d\), see Fig.(6)(c), shows that this product must be of the order of several units for suitable \(l_{0}\) values, except for very low growth values, \(J-1\sim 0.1\). In other words, thin shells \(d\sim 1\) are more likely to reach low values of \(l_{0}\).
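Eq.(138) can be evaluated directly; a minimal sketch (illustrative values, with only the product \(j_{0}d\) entering the exponent), consistent with Fig.(6)(c):

```python
import numpy as np

def l0_of(j0d, J):
    """Branch-cut size from the M-integral balance, eq.(138); j0d = j0 * d."""
    return np.exp(2 * np.pi * j0d / ((J - 1) * (J + 1)**2)) / np.pi

# A growth excess at the bottom (j0 < 0) shrinks the singular core:
for j0d in (-2.0, -5.0, -8.0):
    print(f"j0*d = {j0d}:  l0 = {l0_of(j0d, 2.0):.2e}")
# With J = 2, j0*d ~ -5 gives l0 ~ 1e-2, i.e. a product of 'several units',
# as stated in the text.
```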
In conclusion, our solution is based on the determination of \(\Phi(Z)\), which involves \(2\) parameters, \(X_{0}\) and \(l_{0}\). \(X_{0}/J\) gives the position of the singularity of our doublet closest to the top, while \(X_{0}\) indicates the position of the second singularity. \(l_{0}\) is a parameter that determines the outer solutions. Since these two constants are relevant to the outer solution, they are automatically eliminated in the boundary layer treatment, which concerns the inner solutions, as shown by eq.(111), and so they are not detected at this level. To capture them, the path-independent integral treatment is appropriate, since this fancy technique introduces the boundary conditions at the level of the whole domain. In fracture theory, the \(J\) integral relates the singular stress at the fracture tip to the dimensions of the specimen, while the M-integral gives access to more complex singular fields and in particular to interfacial fractures. Here, the \(X_{0}\) value is determined by the \(J\) integral, and can be slightly modified by growth inhomogeneity at large distances from the interface. The balance of the \(\mathcal{M}\) integral is dominated by the two horizontal boundaries, above and below, when the volumetric growth varies at both ends. This is due to the fact that the \(\mathcal{M}\) integrals associated with the singularities are subdominant, being of order \(\epsilon^{2}\). They are evaluated in section XV.6. Obviously, introducing growth heterogeneity at the bottom is the best way to fix \(l_{0}\). One may wonder whether our results concerning either \(\mathcal{J}\) or \(\mathcal{M}\) remain valid when we add the growth heterogeneity. In fact, the initial axial growth makes the elastic model anisotropic; therefore, we check the validity of these approaches in section XV.6. In addition, we add local heterogeneity at the bottom. It can be shown that the method, which is valid at order \(\epsilon^{2}\) for constant volumetric growth, remains valid only at order \(\epsilon\) in the case of heterogeneity. This is also the reason why we assume that the growth jump is localized at the bottom. Finally, at this stage, \(X_{0}\) and \(l_{0}\) are completely determined by \(J\), \(j_{0}\) and \(\epsilon\). Since \(J\) is given by the nonlinearity of the hyperelastic model, the only unknown is \(\epsilon\). Thus, in order to conveniently treat the two boundary layers, the neo-Hookean model must be modified: a weak compressibility is required, as well as a nonlinearity of the elastic energy in \(I_{1}^{p}\), and finally a variation of the growth in the far field. Surprisingly, a case studied by Pandurangi _et al._ [20] for a semi-infinite sample consists of an elastic energy modeling that also includes compressibility and a quadratic energy in \(I_{2}\). However, they also introduce a graded material property in the vertical direction, while our choice consists of a graded growth in this direction. We can conclude that, although the two approaches are different, the physics of creases requires going beyond simple incompressible neo-Hookean hyperelasticity.

## XIII Finite-size effects or the buckling of layers

One may wonder whether the degeneracy of the solutions presented above is not due either to the simplicity of the neo-Hookean model or to the fact that the initial geometry is too simple.
It is obvious that a length scale is missing in our formulation, since we arbitrarily set the wavelength to unity. Consider the case of a gel layer whose height is given by the parameter \(d\). In order to keep as much as possible the same definitions and equations given in the previous section, we continue to use the wavelength as the unit of length. This situation was also considered by Biot in 1963, but with different boundary conditions [23] and with a different point of view: the limit of small height \(d\) compared to the wavelength, where the analogy with Euler's buckling becomes more explicit [23]. Sinusoidal modes were found, and the dispersion relation giving the wavelength as a function of \(d\) was obtained numerically. In this section, our aim is to revisit his results with different boundary conditions at the substrate, considering \(d>1\). For a single layer, there is no need to change the main equations; just remember that boundary conditions have to be applied on both horizontal sides, \(X=0\) and now \(X=d\), and that during growth the layer can be free or glued at the bottom. In the first case, unlike the second, symmetry allows us to choose half of the sample with boundary conditions at \(X=d/2\), which is now the bottom. In any case, we will have strain conditions at the bottom and stress-free conditions at the top, \(X=0\). These two cases are physically similar and will only differ in numerical values. The two sets of boundary conditions to be applied either at the top or at the bottom are different in nature and, due to the finite extension of the layer, divergent solutions at infinity are now relevant; they were eliminated in the previous section. The non-symmetric case adapts more easily to bilayers and is therefore of more interest. This is especially true when the second layer is stiffer than the upper layer (see [8]), although the case of a soft substrate is more often considered in the literature [175; 176; 177; 178]. Under growth, the description of the new positions \((x,y)\) can follow the same perturbation scheme as before, but must now include \(2\) holomorphic functions: \(\Phi_{e}(Z)\) (an even function of \(Z\)) and \(\Phi_{o}(Z)\) (an odd function of \(Z\)). Then, by defining \(\tilde{Z}=Z-d=X+IY-d\) and \(\tilde{Z}_{1}=Z_{1}-Jd=J(X-d)+IY\), the Euler-Lagrange equation associated with the incompressibility condition gives the following results for the deformations:

\[\begin{cases}x=JX-(J-1)d+J\epsilon\Re\left[F_{1}(\tilde{Z})+F_{2}(\tilde{Z}_{1})\right],\\ y=Y-\epsilon\Im\left[F_{1}(\tilde{Z})+JF_{2}(\tilde{Z}_{1})\right].\end{cases} \tag{139}\]

where

\[\begin{cases}F_{1}=\Phi_{e}(\tilde{Z})+a_{1}\Phi_{o}(\tilde{Z})\,,\\ F_{2}=b_{1}\Phi_{e}(\tilde{Z}_{1})+b_{2}\Phi_{o}(\tilde{Z}_{1})\,.\end{cases} \tag{140}\]

With this definition, the incompressibility condition, valid everywhere in the sample,

\[\frac{\partial(x-JX)}{\partial X}+J\frac{\partial(y-Y)}{\partial Y}=0\quad\forall\quad X\quad\text{and}\quad Y\,, \tag{141}\]

is automatically satisfied at first order in \(\epsilon\). The boundary conditions of anchoring to the solid substrate impose \(x=d\) and \(y=Y\) for \(X=d\). Since \(\Re\left[\Phi_{o}(IY)\right]\) and \(\Im\left[\Phi_{e}(IY)\right]\) vanish independently of the \(Y\) values, the anchorage then imposes \(b_{1}=-1\) and \(b_{2}=-a_{1}/J\). It remains to apply the stress-free conditions involving \(S_{11}\) and \(S_{21}\) on the upper boundary (\(X=0\)), which must be verified for arbitrary \(Y\) values.
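The first-order incompressibility claim can be verified symbolically; a minimal SymPy sketch, taking \(F_{1}=\Phi_{e}(\tilde{Z})\) and \(F_{2}=\Phi_{o}(\tilde{Z}_{1})\) as a concrete choice, since the identity holds for any holomorphic pair:

```python
import sympy as sp

X, Y, J, eps, d = sp.symbols('X Y J epsilon d', real=True)
Zt  = X + sp.I * Y - d           # \tilde{Z}
Zt1 = J * (X - d) + sp.I * Y     # \tilde{Z}_1

# Any holomorphic pair works; take the harmonic modes as a concrete check.
F1 = sp.expand_complex(sp.cosh(2 * sp.pi * Zt))
F2 = sp.expand_complex(sp.sinh(2 * sp.pi * Zt1))

x = J * X - (J - 1) * d + J * eps * sp.re(F1 + F2)
y = Y - eps * sp.im(F1 + J * F2)

# Incompressibility condition eq.(141):
incomp = sp.diff(x - J * X, X) + J * sp.diff(y - Y, Y)
print(sp.simplify(incomp))   # -> 0, the ansatz satisfies eq.(141) identically
```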
Let us limit ourselves to the harmonic modes. ### Selection of a unique harmonic mode Selecting \(\Phi_{e}(Z)=\cosh(2\pi Z)\) and \(\Phi_{o}(Z)=\sinh(2\pi Z)\) to represent the current \((x,y)\) coordinates, the first order incremental correction in \(\epsilon\) becomes: \(\delta x=J\cos(2\pi Y)f_{1}(\tilde{X},\tilde{X}_{1})\) and \(\delta y=\sin(2\pi Y)f_{2}(\tilde{X},\tilde{X}_{1})\), where \(\tilde{X}=2\pi(X-d)\) and \(\tilde{X}_{1}=2\pi J(X-d)\) and \[\begin{cases}f_{1}&=\cosh\tilde{X}-\cosh\tilde{X}_{1}+a_{1}(\sinh\tilde{X}- \frac{\sinh\tilde{X}_{1}}{J})\,,\\ f_{2}&=a_{1}(\cosh\tilde{X}_{1}-\cosh\tilde{X})+J\sinh\tilde{X}_{1}-\sinh \tilde{X}\,.\end{cases}\] Now, considering the cancellation of the normal stress \(S_{11}\) and of the shear stress \(S_{21}\) at the top of the strip, we derive the value of the coefficient \(a_{1}\) in a first step: \[a_{1}=\frac{J(2J\sinh(\tilde{d})-(1+J^{2})\sinh(J\tilde{d}))}{2J^{2}\cosh( \tilde{d})-(1+J^{2})\cosh(J\tilde{d})}\,, \tag{142}\] where \(\tilde{d}=2\pi d\). The dispersion relation \(\mathcal{D}\) then gives the new threshold \(J_{d}\) as a function of the rescaled thickness-to-wavelength ratio \(\tilde{d}=2\pi d\); \(J_{d}\) is the solution of a transcendental equation: \[\mathcal{D} =-4J_{d}^{2}(1+J_{d}^{2})+(1+2J_{d}^{2}+5J_{d}^{4})\cosh(\tilde{d })\cosh(J_{d}\tilde{d})\,, \tag{143}\] \[\qquad-J_{d}(1+6J_{d}^{2}+J_{d}^{4})\sinh(\tilde{d})\sinh(J_{d} \tilde{d})=0\,.\] A numerical sketch for extracting \(J_{d}\) from this relation is given below. ### Nonlinearity and creasing above threshold for growing layer Why focus on a single harmonic mode? Each harmonic mode will correspond to \(\cosh(2\pi mZ)\) and \(\sinh(2\pi mZ)\) and will have a different threshold given by \(J_{md}\), as opposed to the unique threshold independent of \(m\) for an infinite thickness, see section (IX.3). Thus, we cannot simply combine different modes and evaluate the nonlinearities. In fact, nonlinear profiles do not result from a single mode and traditional techniques become more difficult. Other asymptotic techniques consist of the coupled-mode approach of classical bifurcation theory, such as the Landau formalism of the amplitude equations [32, 33, 118, 34], but it remains complex or even impossible to use them with partial differential equations with boundary conditions. Figure 7: The selected threshold \(J_{d}\) as a function of \(2\pi d\), where \(d\) is the ratio of thickness to pattern wavelength. For a thin film, the threshold increases dramatically as \(J_{d}\sim 4.93347/(2\pi d)\). When the thickness is of order or greater than \(\Lambda/\pi\), \(J_{d}\) reaches the asymptotic limit \(J_{d}\sim J_{\infty}=J_{B}\simeq 3.3829\), represented by a solid blue line in (a). In the inset, the critical amplitude \(\mathcal{A}\) defined in eq. (145) (in units of the wavelength) for observing a crease with a single harmonic mode; a plateau of order 0.3 is reached very quickly. In (b), a normalized pattern is plotted for \(\mathcal{A}\) at the critical value \(0.3\), where \(d=\Lambda/\pi\) and \(J_{c}\) is the threshold value. The amplitude \(x\) is divided by \(\mathcal{A}\) for normalization. A cusp can be observed for \(Y=0\), which repeats periodically for \(Y=n\pi\). In (c), superposition of the profiles for increasing amplitudes: \(\mathcal{A}=0.1\) in blue, \(\mathcal{A}=0.2\) and \(\mathcal{A}=0.3\). One method, different from the present one, concerns the use of a nonlinear stream function introduced in [19], which treats the incompressibility exactly and transfers the nonlinearities to the elastic energy [7; 8; 52]. 
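As announced above, Eq. (143) can be solved numerically for \(J_{d}\) at a given thickness \(d\), reproducing the trend of Fig. 7(a). A minimal Python sketch, assuming numpy/scipy (the bracketing values are illustrative; note that \(J=1\) is always a trivial root, so the scan starts above it):

```python
import numpy as np
from scipy.optimize import brentq

def D(J, d_tilde):
    # Dispersion relation, Eq. (143), with d~ = 2*pi*d
    return (-4*J**2*(1 + J**2)
            + (1 + 2*J**2 + 5*J**4)*np.cosh(d_tilde)*np.cosh(J*d_tilde)
            - J*(1 + 6*J**2 + J**4)*np.sinh(d_tilde)*np.sinh(J*d_tilde))

def threshold(d, J_lo=1.2, J_hi=20.0, n=4000):
    # Scan above the trivial root J = 1 for a sign change, then refine.
    # Very thin films may require a larger J_hi bracket.
    dt = 2*np.pi*d
    Js = np.linspace(J_lo, J_hi, n)
    vals = D(Js, dt)
    k = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    return brentq(D, Js[k], Js[k + 1], args=(dt,))

print(threshold(5.0))    # thick layer: ~3.38, the Biot limit J_B
print(threshold(0.05))   # thin film: large threshold, ~4.93/(2*pi*d)
```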
This stream-function method, valid only in \(2D\) geometry or when the elastic deformations are reduced to a two-dimensional space, has made it possible to carry the expansion to third order [19]: \[\begin{cases}x=JX-(J-1)d+\epsilon J\cos(2\pi y)f_{1}\,,\\ Y=y-\epsilon\sin(2\pi y)f_{2}\,.\end{cases} \tag{144}\] The parameter \(\epsilon\) is then predicted at third order. Of course, this prediction depends on the size of the layer \(d\). This formulation assumes that all inversion formulas can be achieved, which is obviously not the case when creases occur at the interface. Creases appear when \(\partial Y/\partial y\) vanishes for \(X=0\), according to the implicit function theorem [32]; this gives the critical value \(\mathcal{A}=\epsilon Jf_{1}\) of the deformation at the cusp position (see the inset in Fig. 7, panel (a), and eq. (144)). It can be noticed that the required amplitude saturates to a finite value around 0.3 for values of the sample width \(d\) of the order of the wavelength or more, and that very thin samples very easily exhibit cusps as they grow, although their threshold \(J_{d}\) is obtained for a higher value. In Fig. 7, panel (b), we plot the profile of the cusp function over one period. It is divided by \(\mathcal{A}\) for normalization so that the amplitude varies between \(-1\) and \(1\). Evaluating whether the amplitude \(\mathcal{A}\) can be reached in practice requires an analytical treatment of the nonlinearities (not reported here, see Ref. [8]). It approximates the amplitude of the regular wavy pattern above the bifurcation threshold \(J_{d}\) as \[x\sim-(J-1)d\pm 0.537\sqrt{J-J_{d}}\cos(2\pi Y) \tag{145}\] where the numerical coefficient \(0.537\) is analytically predicted and the zeroth order in eq. (145) indicates the increase in height due to growth, which appears negative due to our choice of coordinate system. The nonlinear treatment assumes a regular wave pattern and does not assume a priori singularities such as cusps. This estimate must be compared with the amplitude \(\mathcal{A}\); a crease becomes possible when \(\mathcal{A}\sim 0.3=0.537\sqrt{J-J_{d}}\), which implies a distance from the threshold approximately equal to \(0.3\), or \(10\%\) of the Biot threshold. Thus, for a thickness of the order of the pattern wavelength, \(d/\Lambda\sim 1\), creases appear rather quickly at the interface once the threshold value is exceeded. Although nonlinearities can be responsible for creases, they always appear above the Biot threshold and not below. ### Conclusion We have shown that the sinusoidal Biot profile is not the only solution that occurs in a growing hyperelastic sample. Restricting ourselves to the simplest configuration of a semi-infinite two-dimensional neo-Hookean sample, growing with an isotropic constant growth rate \(J\), we have shown that other candidates are possible solutions that appear exactly at the same threshold. Among them, quasi-singular solutions with a periodic array of cusps can be found at the Biot threshold. Nonlinearities can be evaluated by classical nonlinear treatments; supercritical bifurcations are rather common, but subcritical bifurcations can also appear slightly below the Biot threshold when several harmonics are coupled. This explains the diversity of experimental observations independent of the elasticity model, since this diversity occurs at the level of the simplest growth formalism. 
Independent of these patterns, which are always related to the Biot threshold, it has been suggested that patterns can occur well below the Biot threshold if local singularities also occur within the material. We consider this conjecture and show that it can be the source of new families of solutions. In this case, tiny linear singularities occur in pairs near (but not at) the interface, where the compressive elastic field is concentrated. The high level of stress generated requires a slight local modification of the elastic model. Relevant parameters such as the linear extension of the singularities and their positions are recovered by path-independent integrals. In addition, this study proposes a threshold value for the volumetric growth below the Biot threshold, determined by the nonlinearities beyond the neo-Hookean approach. ## XIV Acknowledgements I would like to thank Linda Cummings, Darren Crowdy and Saleh Tanveer for insightful discussions during the programme "Complex analysis: techniques, applications and computations" (Fall 2019, July 2023) of the Isaac Newton Institute of Mathematical Sciences, Cambridge, and the Institute for its support and hospitality. I acknowledge the support of the ANR (Agence Nationale de la Recherche) under the contract MecaTiss (ANR-17-CE30-0007) and the contract EpiMorph (ANR-2018-CE13-0008). ## XV Appendix ### Nonlinear elasticity at first order: stress and energy expansion This appendix is written with the elastic fields given by a complex function. It is written in the initial frame of coordinates, and the stress corresponds to the Piola stress tensor [93]. \[x = JX+\epsilon J\Re\,\left[\Phi(Z)+\tau_{1}\Phi(Z_{1})\right],\] \[y = Y-\epsilon\Im\left[\Phi(Z)+\tau_{1}J\Phi(Z_{1})\right].\] (S1) In this work, we restrict to \(\Phi=\bar{\Phi}\), i.e. to \(\Phi\) having a real expansion in \(Z\). In \(2D\), the deviation of the Jacobian from its imposed value is measured by \(I_{3}=x_{X}\cdot y_{Y}-x_{Y}\cdot y_{X}-J\). At linear order, the derivatives read: \[x_{X} = J+\epsilon J\Re\,\left[\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right],\] \[y_{Y} = 1-\epsilon\Re\,\left[\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right],\] \[x_{Y} = -\epsilon J\Im\left[\Phi_{Z}+\tau_{1}\Phi_{Z_{1}}\right],\] \[y_{X} = -\epsilon\Im\left[\Phi_{Z}+J^{2}\tau_{1}\Phi_{Z_{1}}\right].\] (S2) With this choice for the deformation field, we can verify that at linear order in \(\epsilon\), \(I_{3}=0\) as required. The Euler-Lagrange equations for the neo-Hookean model are given by: \[\Delta x=x_{XX}+x_{YY}=Q_{X};\ \Delta y=y_{XX}+y_{YY}=JQ_{Y}\,.\] (S3) Obviously, only the terms depending on \(Z_{1}\) and \(\bar{Z}_{1}\) are concerned by these equations, which implies that the Lagrange parameter \(Q\) is only a function of \(Z_{1}\) and \(\bar{Z}_{1}\). Both equations of eq. (S3) determine \(Q\) as: \[Q=J+\frac{1}{2}\epsilon\tau_{0}\tau_{1}\left\{\Phi_{Z_{1}}+\bar{\Phi}_{Z_{1}}\right\}\] (S4) It remains to check that the boundary satisfies the cancellation of the shear stress \(S_{21}\) and the normal stress \(S_{11}\) at the free surface \(X=0\). Using eq. (S2), we first obtain the components of the Piola stress tensor. 
For the diagonal elements: \[\begin{cases}S_{11}&=\epsilon\Re\left[2J\Phi_{Z}+(1+J^{2})\tau_{1}\Phi_{Z_{1}} \right],\\ S_{22}&=-\tau_{0}-\epsilon\Re\left[(1+J^{2})\Phi_{Z}+2J^{3}\tau_{1}\Phi_{Z_{1} }\right].\end{cases}\] (S5) For the off-diagonal elements, we have \[S_{21}=y_{X}+Qx_{Y}\ \text{and}\ S_{12}=x_{Y}+Qy_{X}\,.\] (S6) Note that, for \(X=0\), so for \(Z=IY\), the stresses at the top surface become: \[S_{11}=\frac{1}{2}(2J+(1+J^{2})\tau_{1})(\Phi_{Z}+\bar{\Phi}_{Z})\,,\] (S7) \[S_{21}=\frac{I}{2}(1+J^{2}+2J^{2}\tau_{1})(\Phi_{Z}-\bar{\Phi}_{Z})\,.\] (S8) The boundary conditions impose \(S_{11}=S_{21}=0\). But if \(\Phi(Z)\) is an even function of \(Z\), the derivative \(\Phi^{\prime}(Z)\) is odd, so \(S_{11}\) vanishes automatically, and we only need to choose \(\tau_{1}=-(1+J^{2})/(2J^{2})\) to cancel \(S_{21}\). If \(\Phi(Z)\) is an odd function of \(Z\), then \(S_{21}\) cancels automatically and \(S_{11}=0\) imposes \(\tau_{2}=-2J/(1+J^{2})\). Our choice in this manuscript corresponds to the first case. The Biot solution \(\Phi(Z)=e^{-2\pi Z}\) has no parity, so both stress components must vanish at the top of the strip \(X=0\), which explains the existence of the threshold \(J=J_{B}\). Regardless of the choice of \(\tau_{1}\) or \(\tau_{2}\), the threshold value \(J_{B}\) is identical. ### Expansion of the elastic and capillary energy density The expansion of the elastic energy density \(E\) given by eq. (45) at third order in the parameter \(\epsilon\), \(E=E_{0}+\epsilon E_{1}+\epsilon^{2}E_{2}+\epsilon^{3}E_{3}\), reads: \[\begin{cases}E_{0}=\frac{1}{2}(J-1)^{2}\,,\\ \\ E_{1}=\tau_{0}\Re\left[\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right],\\ \\ E_{2}=\frac{1}{2}(3J^{2}+1)(|\Phi_{Z}|^{2}+J^{2}\tau_{1}^{2}|\Phi_{Z_{1}}|^{2} )\\ +\frac{J\tau_{1}}{2}\Re\left[(J+1)^{3}\Phi_{Z}\Phi_{Z_{1}}-(J-1)^{3}\Phi_{Z} \Phi_{Z_{1}}\right],\\ \\ E_{3}/(J\tau_{0}\tau_{1})=\Re\left[\Phi_{Z_{1}}\right]\times(|\Phi_{Z}|^{2}+J ^{2}\tau_{1}^{2}|\Phi_{Z_{1}}|^{2}\\ +\frac{\tau_{1}}{2}\Re\left[(J+1)^{2}\Phi_{Z}\Phi_{Z_{1}}-(J-1)^{2}\Phi_{Z} \Phi_{Z_{1}}\right])\,.\end{cases}\] (S9) Note that if \(J=1\) (no growth), all the coefficients \(E_{i}\) vanish whatever the function \(\Phi(Z)\). The capillary energy density evaluated at \(X=0\), so for \(Z=IY\), can be expanded up to fourth order: \[\mathcal{E}_{c}=\gamma_{0}\epsilon(E_{1c}+\epsilon E_{2c}+\epsilon^{2}E_{3c}+ \epsilon^{3}E_{4c})\,,\] (S10) where \(\gamma_{0}\) is the ratio between the capillary energy and the shear modulus multiplied by the wavelength, and where: \[\begin{cases}E_{1c}=\frac{(J-1)^{2}}{2J}\Re\left[\Phi_{Z}\right],\\ \\ E_{2c}=\frac{\tau_{0}^{2}}{8J^{2}}(\Im\left[\Phi_{Z}\right])^{2}\,,\\ \\ E_{3c}=-\frac{\tau_{0}^{2}(J-1)^{2}}{16J^{3}}\Im\left[\Phi_{Z}\right]^{2}\Re \left[\Phi_{Z}\right],\\ \\ E_{4c}=\frac{\tau_{0}^{2}(J-1)^{2}}{128J^{4}}\Im\left[\Phi_{Z}\right]^{2}\times \\ \{(5-6J+5J^{2})\Re\left[\Phi_{Z}\right]^{2}-(J+1)^{2}|\Phi_{Z}|^{2}\}\,.\end{cases}\] (S11) ### Evaluation of the total energy for a single mode, double and triple mode We consider first a single mode: \(\zeta=e^{-2\pi Z}\). In this case, only \(E_{2c}\), \(E_{4c}\) and \(E_{6c}\) contribute to the capillary energy \(E_{cs}\). 
After integrating over the top interface and defining the following quantities: \[\begin{cases}\alpha_{c}=\left(\frac{(J-1)\pi}{2J}\right)^{2};\quad E_{s0}=\gamma_{0} \alpha_{c}\epsilon^{2}(1+J)^{2},\\ \\ \mathcal{Q}_{1c}=\frac{1}{4}(J^{2}-14J+1),\\ \\ \mathcal{Q}_{2c}=\frac{1}{4}(1-12J+102J^{2}-12J^{3}+J^{4}),\end{cases}\] (S12) the capillary energy reads: \[\mathcal{E}_{cs}=E_{s0}\left(1+\epsilon^{2}\alpha_{c}\mathcal{Q}_{1c}+\epsilon^ {4}\alpha_{c}^{2}\mathcal{Q}_{2c}\right)\] (S13) If the base state includes other harmonics such as \(\zeta^{2}\) and \(\zeta^{3}\) with decreasing amplitudes as in section IX.5, \(\Phi(Z)=\zeta+\epsilon B_{2}\zeta^{2}-\epsilon^{2}B_{3}\zeta^{3}\), the capillary energy includes \(\mathcal{E}_{cs}\) but also additive terms in \(\epsilon^{4}\) and \(\epsilon^{6}\): \[\mathcal{E}_{c}=\mathcal{E}_{cs}+E_{s0}\epsilon^{2}\left(e_{1c}+\epsilon^{2}e_{ 2c}\right).\] (S14) \[\begin{cases}e_{1c}=B_{2}(4B_{2}+\pi(J-1)^{2}/J),\\ e_{2c}=9B_{3}^{2}-\frac{3\alpha_{c}}{\pi}B_{3}(8JB_{2}+\pi(1+J)^{2})\\ +4\alpha_{c}\mathcal{Q}_{1c}e_{1c}.\end{cases}\] (S15) We now consider the three-mode coupling with \(\Phi(Z)=\zeta+a_{2}\zeta^{2}+a_{3}\zeta^{3}\) of section IX.4.2. The goal is to evaluate the weight of each term in the expansion of the elastic energy, eq. (85), and to compare it with the capillary energy. Writing the total energy, elastic plus capillary, we obtain \(\mathcal{E}_{t}=-E_{f}\epsilon^{2}\tilde{\mathcal{E}}_{t}\) (see eq. (85)) where: \[\begin{cases}E_{f}=\pi\mathcal{Q}_{2}(1+2a_{2}^{2}+3a_{3}^{2})\,,\\ \tilde{\mathcal{E}}_{t}=\delta J+2\gamma_{0}g_{2}+(e_{3}+2\gamma_{0}g_{3}) \epsilon+2\gamma_{0}g_{4}\epsilon^{2}.\end{cases}\] (S16) where \(\mathcal{Q}_{2}\) has been given in eq. (74), \(e_{3}\) in eq. (87), and \(e_{4}=0\): \[e_{3}=\pi^{2}a_{2}\mathcal{Q}_{3}(1+a_{3}\mathcal{Q}_{33})/E_{f};\,g_{i}=-E_{ ic}/E_{f}\] (S17) Figure 8: In (a) and (b), density plots of the coefficients \(e_{3}\) and \(g_{2}\) entering the expansion of the free energy, eq. (S16), as a function of \(a_{2}\) and \(a_{3}\). The numerical values can be estimated from the legend to the right of each panel. In (c) and (d), density plots of the coefficients \(g_{3}\) and \(g_{4}\) entering the expansion of the free energy, eq. (S16), as a function of \(a_{2}\) and \(a_{3}\). We first define: \[f_{g}=-\frac{(-1+J)^{2}(1+J)}{(1+2a_{2}^{2}+3a_{3}^{2})(3J^{2}-6J-1)}\] (S18) \[\begin{cases}g_{2}=(1+4a_{2}^{2}+9a_{3}^{2})\frac{J\pi f_{g}}{(J-1)^{2}},\\ g_{3}=a_{2}(1+6a_{3})\pi^{2}f_{g},\\ g_{4}=\frac{\pi^{2}f_{g}}{4J}\{3a_{3}(1+J)^{2}+\mathcal{Q}_{1c}\times(1+36a_{3}^{2}\\ +81a_{3}^{4}+16a_{2}^{2}(1+a_{2}^{2}+3a_{3}+9a_{3}^{2}))\}\end{cases}\] (S19) \(E_{f}\) is positive and all polynomials must be evaluated for \(J=J_{B}\). The order of magnitude of each coefficient \(e_{3},g_{2},g_{3},g_{4}\) is a function of the two parameters \(a_{2}\) and \(a_{3}\) once \(J=J_{B}\) is imposed, and is represented by density plots (see Fig. 8). More specifically, Fig. 8(a) gives the order of magnitude of \(e_{3}\), while panel (b) gives the \(g_{2}\) coefficient as a function of the same quantities. \(g_{2}\) is negative and is therefore responsible for shifting the threshold towards higher values; a similar result was found in [19]. Fig. 8(c) and Fig. 8(d) are dedicated to \(g_{3}\) and \(g_{4}\), respectively. Note that all \(g_{i}\) must be multiplied by the capillary number \(\gamma_{0}\), and only \(g_{4}\) appears alone in the asymptotics of eq. (S16). 
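For readers wishing to reproduce the density plots of Fig. 8, the coefficients of eqs. (S18)-(S19) are simple to evaluate. A minimal Python transcription follows; the value of \(J_{B}\) and the sample point are illustrative, and the closing parenthesis in \(g_{4}\) is placed as in the corrected eq. (S19), which is an assumption about the original expression:

```python
import numpy as np

J_B = 3.3829   # Biot threshold used throughout the text

def coefficients(a2, a3, J=J_B):
    # f_g from Eq. (S18), Q_1c from Eq. (S12), g_i from Eq. (S19)
    fg  = -((J - 1)**2*(1 + J)) / ((1 + 2*a2**2 + 3*a3**2)*(3*J**2 - 6*J - 1))
    Q1c = 0.25*(J**2 - 14*J + 1)
    g2  = (1 + 4*a2**2 + 9*a3**2)*J*np.pi*fg/(J - 1)**2
    g3  = a2*(1 + 6*a3)*np.pi**2*fg
    g4  = (np.pi**2*fg/(4*J))*(3*a3*(1 + J)**2
          + Q1c*(1 + 36*a3**2 + 81*a3**4
                 + 16*a2**2*(1 + a2**2 + 3*a3 + 9*a3**2)))
    return g2, g3, g4

print(coefficients(0.5, 0.1))   # g2 < 0, consistent with the threshold shift
```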
### Profiles and cartography of the stress The profiles and the stress of both a logarithmic singularity and of a square root singularity, localized outside the elastic sample, are displayed in Fig. 8. Panels (e,f,g) are devoted to \(\Phi(Z)=-Log(1+a-e^{-2\pi Z})/Log(a)\) for \(a=0.01,0.1\) and to \(\Phi(Z)=Log(2-e^{-2I\pi Y})\) for \(a=1\). This corresponds to singularities in the physical plane located at a distance \(d_{a}=-0.00158,-0.01516,-0.110318\) outside the elastic sample. The panels (h,i,j) concern the square root singularity: \(\sqrt{e^{-2\pi Z}-1-a}\). An indication of the normal stress \(S_{11}\), which includes the real part of \(\Phi_{Z}\), and the shear stress \(S_{21}\), which includes the imaginary part, is also shown in panels (f,i) and (g,j) of Fig. 8. The stress contributes significantly at the boundary, but decreases very rapidly. \(S_{11}\) and \(S_{21}\) are given by eq. (S5). ### Weakly nonlinear analysis for quasi-singular profiles These profiles have been discussed in section VIII.2. A fully exhaustive study cannot be achieved, and it is not certain that the method of weakly nonlinear analysis converges for these choices of quasi-singularity. The evaluation of the integrals is not easy in the \(X\) and \(Y\) geometry, and we choose a particular mode: \(\Phi(Z)=-(Log(1+a-e^{-2\pi Z})-Log(1+a))/Log(a)\). Obviously, an expansion in sinusoidal modes \(\zeta^{p}\) has little chance of converging quickly, so we have to modify our strategy. The algebra is more complex even with formal mathematical software, and we cannot find in handbooks of integrals the results for the previously defined integrals that enter the energy expansion. Thus, the results cannot be obtained completely analytically without approximations. It is also not possible, except for \(L_{3}\), to integrate in the complex plane of \(\zeta\) or the physical plane \(\Omega\), due to the juxtaposition of \(Z\) and \(Z_{1}\). Let us give an estimate. A first integration on \(Y\) induces the evaluation of two kinds of integrals, \(\mathcal{L}_{m,n}\): \[\mathcal{L}_{m,n}=\int_{0}^{\infty}\frac{e^{-2\pi(m+n)u}du}{\left(\alpha^{2}-e ^{-2\pi mu}\right)\left(\alpha^{2}-e^{-2\pi nu}\right)}\,,\] (S20) where \(m\) and \(n\) are positive integers and \(\alpha=a+1\). Using Watson's lemma [179] and the fact that the denominator is singular near \(X=0\) for vanishing \(a\) values, once the parameter \(l_{m}\) is introduced with \(l_{m}=a(2+a)/(2m\pi)\), we approximate the denominator by a linear expansion, giving \[\mathcal{L}_{m,n}\sim\int_{0}^{\infty}\frac{du}{p_{a}}e^{-2\pi(m+n)u}\left( \frac{1}{l_{m}+u}-\frac{1}{l_{n}+u}\right)\,,\] (S21) with \(p_{a}=2\pi a(2+a)(m-n)\). The two last integrals can be computed explicitly with exponential integral functions \((E_{i})\), and the limit \(l_{m}\to 0\) finally gives the following result: \[\mathcal{L}_{m,n}\sim\frac{1}{4\pi a}l_{m,n}+O(a)\text{ with }l_{m,n}=\frac{Log(m/n)}{m-n}\,.\] (S22) For \(m=n\), \(l_{n,n}=1/n\). Then, taking into account only the contribution in \(a^{-1}\) and defining \(m_{a}=-2\pi^{2}/(aLog(a)^{3})\), we get: \[\begin{cases}L_{1}&\sim m_{a}l_{2,1+J}=\frac{m_{a}}{1-J}Log\frac{2}{1+J}\,,\\ L_{2}&=m_{a}l_{2J,2J}=\frac{m_{a}}{2J}\,,\\ L_{4}&\sim\frac{m_{a}}{2}l_{2J,1+J}=\frac{m_{a}}{2(J-1)}Log\frac{2J}{1+J}\,,\\ L_{3}&\sim\frac{m_{a}}{2}(l_{1+J,1+J}+l_{2J,1+J})\\ &\sim\frac{m_{a}}{2}\left(\frac{1}{1+J}+\frac{Log(2J/(1+J))}{J-1}\right)\,. 
\end{cases}\] (S23) A comparison between numerical values of the integrals (see eq. (S20)) and the estimate given by eq. (S22) is correct for \(a\sim 0.01\), but smaller values are necessary for eq. (S23). This treatment can always be done numerically; the advantage here is to find the scaling of \(\mathcal{E}_{3}\). For the logarithmic choice, we finally derive: \[\begin{cases}\Pi_{1}\sim 4J^{2}(J+1)^{2}Log\left(\frac{2}{J+1}\right)\\ +\left(J^{2}+1\right)\left(4J^{2}Log(J)+(J-1)^{2}\right)\,,\end{cases}\] (S24) and the third order correction gives: \[\mathcal{E}_{3}=-\frac{(J+1)}{8J^{2}}\Pi_{1}\tau_{1}m_{a}\simeq-\frac{38.38}{aLog (a)^{3}}\,.\] (S25) ### Path-independent integrals It may seem pointless to demonstrate that conservation laws remain valid in incremental models of finite elasticity after the pioneering work of Knowles and Sternberg [171], but there are some slight differences with our approach. First, we are concerned with growth, so our initial state is an anisotropic (axially pre-stretched) state. Second, our demonstration is achieved at second order in \(\epsilon\) and not at first order. For this reason, if we can predict the result for the \(\mathcal{J}\) integral, it is less obvious for the \(\mathcal{M}\) integral, which is not believed to be valid in nonlinear elasticity. The strategy to prove the existence of path-independent integrals is simple: it consists in relating a scalar or a vector to a vector or a tensor which is divergence-free. In two dimensions, we shall demonstrate that this is the case for \(\mathcal{J}\) and \(\mathcal{M}\). Let us begin with \(\mathcal{J}\): \[\mathcal{J}_{i}=\iint dS\left\{\frac{\partial(E\delta_{ik})}{\partial X_{k}}- \frac{\partial(S_{jk}F_{ji})}{\partial X_{k}}\right\}\,.\] (S26) where we use the Einstein convention for repeated indices, \(dS=dXdY\), and the expression in brackets defines \(\mathcal{T}_{i}\). The index \(i\) reminds us that the \(\mathcal{J}\) integral is indeed a vector, but here only the component along \(\vec{e}_{X}\) is important. \(\mathbf{F}\) is the deformation gradient tensor, whose components are \(F_{ij}=\partial x_{i}/\partial X_{j}\). It can be replaced by \(\mathbf{U}\) so that \(U_{ij}=F_{ij}\) if \(i\neq j\), \(U_{11}=F_{11}-J\) and \(U_{22}=F_{22}-1\). We want to show that \(\mathcal{T}_{i}\) is a divergence. In two dimensions, we have: \[\mathcal{T}_{1}=\frac{\partial E}{\partial X}-\frac{\partial(S_{11}U_{11}+S_{ 21}U_{21})}{\partial X}-\frac{\partial(S_{12}U_{11}+S_{22}U_{21})}{\partial Y}\,.\] (S27) where \(E\) has been given in eq. (45) and \(\mathbf{S}\) is the Piola stress tensor already defined in eq. (58). 
At equilibrium, the divergence of the Piola stress tensor vanishes: \[\frac{\partial S_{11}}{\partial X}+\frac{\partial S_{12}}{\partial Y}=0\quad \frac{\partial S_{21}}{\partial X}+\frac{\partial S_{22}}{\partial Y}=0\,.\] (S28) \[\begin{cases}\mathcal{T}_{1}&=\frac{\partial E}{\partial X}-\left(S_{11}\frac {\partial^{2}x}{\partial X^{2}}+S_{21}\frac{\partial^{2}y}{\partial X^{2}} \right)\\ \\ &\quad-\left(S_{12}\frac{\partial^{2}x}{\partial X\partial Y}+S_{22}\frac{ \partial^{2}y}{\partial X\partial Y}\right).\end{cases}\] (S29) We recall the Piola stress components, given in section VII.2, Eqs. (50,53), which we evaluate at linear order in \(\epsilon\): \[\begin{cases}S_{11}=\frac{\partial x}{\partial X}-Q\frac{\partial y}{ \partial Y}&S_{12}=\frac{\partial x}{\partial Y}+Q\frac{\partial y}{\partial X }\,,\\ S_{21}=\frac{\partial y}{\partial X}+Q\frac{\partial x}{\partial Y}&S_{22}= \frac{\partial y}{\partial Y}-Q\frac{\partial x}{\partial X}\,.\end{cases}\] (S30) From eq. (S29) it is easy to show that all terms of the neo-Hookean part are eliminated by the stress contribution of the same equation, and that terms proportional to \(Q\) also cancel each other, a relation which is always true even without expansion in \(\epsilon\). So \(\mathcal{T}_{1}\) vanishes, and the same result is obtained for \(\mathcal{T}_{2}\). In our case, \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) vanish rigorously on a closed contour, and so also up to second order. Although \(\mathcal{J}_{2}\) vanishes identically in our geometry because of the chosen contour, we write its generic form, which will be useful to establish the \(\mathcal{M}\) integral: \[\begin{cases}\mathcal{T}_{2}=\frac{\partial E}{\partial Y}\\ -\frac{\partial(S_{11}U_{12}+S_{21}U_{22})}{\partial X}-\frac{\partial(S_{12} U_{12}+S_{22}U_{22})}{\partial Y}\,.\end{cases}\] (S31) Let us consider now \(\mathcal{M}=\iint dS\,N\) with \[N=\frac{\partial(\delta E\,X_{i})}{\partial X_{i}}-\frac{\partial\left(S_{jk}U_{ji }X_{i}\right)}{\partial X_{k}}-\tau_{0}/J\,\frac{\partial U_{X}}{\partial X}\,,\] (S32) where \(\delta E=E-(J-1)^{2}/2\) and \(U_{i}\) is the displacement: \(U_{1}=x-JX\) and \(U_{2}=y-Y\). The last term can be replaced by \(-(1+J)Div(U)=-(1+J)(\partial_{X}U_{X}+\partial_{Y}U_{Y})\). This result, which is easy to demonstrate with Mathematica software, turns out to be less obvious to show analytically, and differs from the result obtained by Knowles and Sternberg for incremental elasticity. The reason is due to growth, since \(J^{2}-1\) differs from zero in our case. Nevertheless, our definition of \(\mathcal{M}\) satisfies the criterion for defining a path-independent integral similar to the classical \(M\) integral of linear elasticity, up to order \(\epsilon^{2}\). After some algebra with this definition, which is slightly different from the one given by [171], we can construct a path-independent integral, valid up to second order for a pre-stretched sample. Compared to eq. (3.36) of [171], the only difference comes from the last term of the previous equation. Transforming \(\mathcal{M}\) into a closed contour integral, \(\mathcal{M}=\oint ds\,m=0\) with \[\begin{cases}m=\left(E-\frac{(J-1)^{2}}{2}\right)\vec{X}\cdot\vec{N}-S_{jk}U_{ji}X_{i}\,N_{k}\\ -(J+1)\left\{(x-JX)\,N_{X}+(y-Y)\,N_{Y}\right\}\,.\end{cases}\] (S33) This result is true up to \(O(\epsilon^{3})\). Then the horizontal boundary at the top contributes at order \(\epsilon\). We have already discussed the case of two boundaries in the main text, section XII.2. 
For completeness, we also consider the patches here, at least to confirm our approach. Considering now the two patches where the function \(F\) or \(\Phi\) has singularities at \(X=X_{0}\) and \(X=X_{0}/J\), we separate \(\mathcal{M}_{\mathcal{J}_{S}}\) into three contributions, \(\mathcal{M}^{(1)}+\mathcal{M}^{(2)}+\mathcal{M}^{(3)}\), which will be evaluated for the two patches one after the other. Around the patch located at \(X=X_{0}\), the contour is a circle with radius \(R\) greater than \(l_{0}\) but less than \(X_{0}\). Defining \[\begin{cases}m_{1}&=E-\frac{(J-1)^{2}}{2}-S_{11}U_{11}-S_{21}U_{21},\\ m_{3}&=E-\frac{(J-1)^{2}}{2}-S_{12}U_{12}-S_{22}U_{22},\\ m_{2}&=S_{12}U_{11}+S_{22}U_{22},\\ m_{4}&=S_{11}U_{12}+S_{21}U_{22},\end{cases}\] (S34) and noting that each \(m_{i}\) is of order \(\epsilon^{2}\), we have \[\mathcal{M}_{\mathcal{J}_{S}}=R\int_{-\pi}^{\pi}dT(X_{0}+R\cos T)\left\{\cos T\,m _{1}-m_{2}\sin T\right\}\,.\] This relation can eventually be simplified to give: \[\mathcal{M}_{\mathcal{J}_{S}}^{(1)}=X_{0}\mathcal{J}_{S}+R^{2}\int_{- \pi}^{\pi}dT\cos T(\cos T\,m_{1}-\sin T\,m_{2})\,.\] Defining in the same way: \[\mathcal{M}_{\mathcal{J}_{S}}^{(2)}=R^{2}\int_{-\pi}^{\pi}dT\sin T\left\{\sin T\,m _{3}-\cos T\,m_{4}\right\},\] and finally: \[\mathcal{M}^{(3)}_{\mathcal{J}_{S}}=\tau_{0}R\int_{-\pi}^{\pi}dT\sin T(y-Y)\,.\] So the leading order for \(\mathcal{M}_{\mathcal{J}_{S}}\) is \(X_{0}\mathcal{J}_{S}\), hence of order \(\epsilon^{2}\). For the second patch, which is around \(X=X_{0}/J\), the difference comes from the fact that \(X_{0}\) has to be changed to \(X_{0}/J\), and of course each integral is different due to the local behavior of the function \(\Phi\) in each patch, given by Eqs. (128,129). From this expansion, we can deduce that the \(\mathcal{M}^{(3)}\) contribution of each patch is negligible for \(R\to 0\) compared to the other contributions. It is also clear that \(X_{0}\mathcal{J}_{S}\) is of order \(X_{0}\epsilon\sim\epsilon^{2}\), as are the integrals in \(R^{2}\). We then conclude that only the upper and lower boundaries contribute to the \(\mathcal{M}\) integral, while the inner singularities contribute only at order \(\epsilon^{2}\).
Bending of soft elastic samples induced by growth or swelling has attracted renewed interest in materials science, morphogenesis, biology and physiology. Indeed, changes in mass and volume are a common fact of all living organisms, and at scales larger than the cell size, a macroscopic viewpoint helps to explain general features of many observations. Many shapes of soft materials result from the accumulation of elastic compressive stress during growth and from the minimization of a linear or nonlinear elastic energy. The analogy between growth and the compression of rubber has revived the study of instabilities in the compression of nonlinear elastic samples, highlighting in particular the Biot instability. Here we present an instability analysis based on complex analysis, which explains the diverse interface shapes observed at buckling, even when restricted to two spatial dimensions. In particular,
2309.15674
Speech collage: code-switched audio generation by collaging monolingual corpora
Designing effective automatic speech recognition (ASR) systems for Code-Switching (CS) often depends on the availability of the transcribed CS resources. To address data scarcity, this paper introduces Speech Collage, a method that synthesizes CS data from monolingual corpora by splicing audio segments. We further improve the smoothness quality of audio generation using an overlap-add approach. We investigate the impact of generated data on speech recognition in two scenarios: using in-domain CS text and a zero-shot approach with synthesized CS text. Empirical results highlight up to 34.4% and 16.2% relative reductions in Mixed-Error Rate and Word-Error Rate for in-domain and zero-shot scenarios, respectively. Lastly, we demonstrate that CS augmentation bolsters the model's code-switching inclination and reduces its monolingual bias.
Amir Hussein, Dorsa Zeinali, Ondřej Klejch, Matthew Wiesner, Brian Yan, Shammur Chowdhury, Ahmed Ali, Shinji Watanabe, Sanjeev Khudanpur
2023-09-27T14:17:53
http://arxiv.org/abs/2309.15674v1
# Speech Collage: Code-Switched Audio Generation by Collaging Monolingual Corpora ###### Abstract Designing effective automatic speech recognition (ASR) systems for Code-Switching (CS) often depends on the availability of the transcribed CS resources. To address data scarcity, this paper introduces _Speech Collage_, a method that synthesizes CS data from monolingual corpora by splicing audio segments. We further improve the smoothness quality of audio generation using an overlap-add approach. We investigate the impact of generated data on speech recognition in two scenarios: using in-domain CS text and a zero-shot approach with synthesized CS text. Empirical results highlight up to 34.4% and 16.2% relative reductions in Mixed-Error Rate and Word-Error Rate for in-domain and zero-shot scenarios, respectively. Lastly, we demonstrate that CS augmentation bolsters the model's code-switching inclination and reduces its monolingual bias. Amir Hussein \({}^{\dagger 1}\), Dorsa Zeinali \({}^{\dagger 2}\), Ondrej Klejch\({}^{3}\), Matthew Wiesner\({}^{1}\), Brian Yan\({}^{4}\), Shammur Chowdhury\({}^{5}\), Ahmed Ali\({}^{5}\), Shinji Watanabe \({}^{4}\), Sanjeev Khudanpur\({}^{1}\)\({}^{1}\)Johns Hopkins University, USA, \({}^{2}\)Northeastern University, USA, \({}^{3}\) University of Edinburgh, UK, \({}^{4}\)Carnegie Mellon University, USA, \({}^{5}\) Qatar Computing Research Institute, Doha Code-switching, ASR, data augmentation, end-to-end, zero-shot learning ## 1 Introduction In multilingual societies, code-switching (CS) is integral to communication, enabling clearer expression and reflecting cultural nuances [1, 2]. While CS is prevalent in daily conversations, it is underrepresented in transcribed datasets. This linguistic phenomenon, where speakers interweave languages within a conversation or utterance, poses challenges for voice technologies like automatic speech recognition (ASR). Given the abundance of monolingual data and the scarcity of labeled CS speech, there is a pressing need to harness monolingual resources for CS applications. The prime challenge lies in developing robust ASR systems for CS in zero-shot settings where no CS training data is available. Several approaches have been proposed to build CS ASR directly from monolingual data by utilizing multilingual training [3, 4, 5, 6, 7, 8]. Further studies advocate for the joint modeling of CS and monolingual ASR, effectively breaking down bilingual tasks into monolingual components [9, 10, 11]. A prominent issue with monolingual training is the model's monolingual bias, which impedes seamless language switching [12]. To address this issue, several data augmentation strategies have been proposed, including textual data augmentation, text-to-speech synthesis, and concatenation-based speech generation. In [13], the authors proposed a methodology to generate code-switching text from monolingual text, which improved ASR performance with language model rescoring. In [8, 14], researchers propose merging monolingual utterances to mimic code-switching. However, this strategy tends to primarily capture inter-sentential switches, often sidelining the nuances of intra-sentential CS. On another front, text-to-speech (TTS) based synthetic audio has gained traction for CS data generation [15, 16, 17, 18, 19, 20, 21]. Despite its potential, TTS-based augmentation suffers from limited speaker variability compared to real data. 
Consequently, there is a growing interest in using audio segment splicing as augmentation to cover more speaker variations and acoustic environments [22, 23]. However, in the previously proposed splicing, speech segments and their corresponding words are randomly selected, and the potential of splicing for code-switching remains unexplored. In this paper, we introduce _Speech Collage1_, a data augmentation technique that constructs synthetic code-switched audio from monolingual data. Our method is inspired by traditional concatenation-based speech synthesis techniques [24, 25]. We demonstrate the efficacy of _Speech Collage_ in two scenarios: a) In-domain CS text, where target-domain CS text is leveraged, and b) Zero-shot CS, where synthesized CS text is used. Our study covers two language pairs: Mandarin-English and Arabic-English. Experimental results show the substantial improvements _Speech Collage_ brings to code-switching ASR in both scenarios. Our contributions include: (i) a novel speaker-agnostic CS data augmentation derived from monolingual resources, (ii) further improving ASR performance with enhanced audio quality in the generated data, and (iii) a zero-shot learning framework tailored for CS. As an additional contribution, we conduct an ablation study to assess the significance of each component on the final performance. We also perform a modified Code Mixed Index (CMI) analysis to identify where the primary gains are achieved through our augmentation method. Footnote 1: Visit our repository for audio samples and implementation [https://github.com/JSALT2022CodesSwitchingASR/generating-code-switched-audio](https://github.com/JSALT2022CodesSwitchingASR/generating-code-switched-audio) ## 2 Speech Collage We propose a framework designed to splice speech units extracted from monolingual corpora. These units are based on code-switched text, either real or synthesized, as depicted in Figure 1. For the merging process, we select word units for English and Arabic, and characters for Mandarin. While smaller units, such as phones, offer greater adaptability, they tend to degrade audio quality [26]. The data constructed from segment splicing encompasses variations from multiple speakers and diverse acoustic environments. We first obtain the unit alignments with audio from the monolingual data by training a standard Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) 2 using the Kaldi ASR toolkit [27]. Utilizing these alignments, in conjunction with the CS text and monolingual audio, our Speech Collage framework generates the CS audio dataset. In cases where the training data possesses multiple segments for a single unit, a segment is selected at random. The generated audio quality is further enhanced using the overlap-add technique, energy normalization, and n-gram matching, as detailed below. The audio enhancement and segment splicing were implemented using the Lhotse toolkit [28]. Footnote 2: [https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell/s5](https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell/s5) [https://github.com/kaldi-asr/kaldi/tree/master/egs/mdp2_arabic/s5](https://github.com/kaldi-asr/kaldi/tree/master/egs/mdp2_arabic/s5) ### Overlap-add To enhance the quality of the generated CS audio, we employ overlap-add with a Hamming window to mitigate discontinuity effects resulting from spliced units. To ensure that each unit is fully captured, we extend the unit segments by \(0.05\) seconds at the start and end of the segment. 
This extension provides an extra \(0.05\) second which is utilized as overlap in the overlap-add process. ### Energy normalization Additionally, we normalize the synthesized utterance by the average of the unit-segment energies to remove artifacts introduced by energy variations between segments. For a speech sequence \(X\) of length \(T\), \(X=\{x_{t}\in\mathbb{R}|t=1,\cdots,T\}\), the energy-normalized sequence is calculated as follows: \[X^{\prime}=\left\{\frac{x_{t}}{\sqrt{\frac{1}{T}\sum_{t}x_{t}^{2}}}|t=1, \cdots,T\right\} \tag{1}\] ### N-gram units To further enhance the quality of the generated CS, we explore splicing consecutive units (n-grams), in alignment with selecting longer units in concatenated speech synthesis [29]. Given a CS sentence, our approach starts by matching the largest consecutive unit from the monolingual alignments. If a specific n-gram is unavailable, the algorithm backs off to a smaller unit. It is worth noting that in this study, we only experimented with unigrams and bigrams. A detailed description of the n-gram Speech Collage implementation is given in Algorithm 1. Using the alignments from monolingual data and a maximum n-gram size, SetupSupervisions(\(\cdot\)) creates a collection \(\mathcal{D}\) of audio segments corresponding to each n-gram unit. Consecutive n-gram units are matched from alignments, starting with \(n\) and progressing to unigrams. If an n-gram is absent, the algorithm backs off to an (\(n-1\)) unit. In GenerateCollage(\(\cdot\)), the function getConsecUnits(\(\cdot\)) returns all consecutive (\(1:n\)) units. Each n-gram unit is randomly drawn from its respective collection using SampleUnit(\(\cdot\)). These segments are appended to the current spliced utterance with overlapAdd(\(\cdot\)) described in §2.1, and the resulting combined utterance undergoes energy normalization NormalizeEnergy(\(\cdot\)) from Eq. (1); a minimal sketch of these two steps is given at the end of this section. Figure 1: High level illustration of the proposed Speech Collage CS generation approach. ### Zero-shot CS framework In this case study we focus on generating Arabic-English code-switching (CS) data, operating under the assumption that no Arabic-English CS training data is available. To generate speech data using the Speech Collage method, we require CS text. We generate the CS text from monolingual resources using the lexicon-based (Random) replacement approach described in [13]. The approach entails the following steps: 1. **Parallel Text Translation**: We leverage a public Arabic-English Machine Translation System3 to generate the parallel English text from the Arabic transcription. Footnote 3: API access available from [https://mt.qcri.org/api](https://mt.qcri.org/api) 2. **Word Level Alignments**: After translation, we fine-tune multilingual BERT (mBERT) [30] to obtain the word-level alignments. 3. **Random Replacement**: Given the alignments, Arabic words are randomly substituted with their corresponding English words at a rate of \(20\%\), as suggested by [13]. ### End-to-End Speech Recognition In this work, we utilized the end-to-end (E2E) conformer ASR architecture [31], with the ESPnet toolkit [32]. The E2E-ASR implementation consists of a conformer encoder and a transformer decoder. Both are multi-block self-attention architectures, with the encoder further enhanced by an additional convolution module. 
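As referenced above, the overlap-add of §2.1 and the energy normalization of Eq. (1) are simple to express in code. The following minimal numpy sketch is illustrative only; the paper's actual implementation uses the Lhotse toolkit, and here the sampling rate and random stand-in segments are assumptions, while the 0.05-second overlap and Hamming window come from the text:

```python
import numpy as np

def normalize_energy(x):
    # Eq. (1): divide by the root-mean-square so the average energy is 1
    return x / (np.sqrt(np.mean(x**2)) + 1e-12)

def splice_units(segments, sr=16000, overlap_s=0.05):
    """Overlap-add splicing of unit segments (each already extended by
    0.05 s at both ends, as in Sec. 2.1), with a Hamming cross-fade."""
    ov = int(overlap_s * sr)
    win = np.hamming(2 * ov)
    fade_in, fade_out = win[:ov], win[ov:]
    out = segments[0].astype(float).copy()
    for seg in segments[1:]:
        seg = seg.astype(float)
        out[-ov:] = out[-ov:] * fade_out + seg[:ov] * fade_in  # cross-fade
        out = np.concatenate([out, seg[ov:]])
    return normalize_energy(out)

# Illustrative usage with random arrays standing in for real audio units:
rng = np.random.default_rng(0)
units = [rng.standard_normal(n) for n in (4000, 3200, 4800)]
cs_utterance = splice_units(units)
```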
The ASR task is formulated as a Bayesian decision: finding the most probable target word sequence \(\hat{\mathbf{Y}}\), from all possible outputs \(\mathbf{Y}^{*}\), by selecting the sequence which maximizes the posterior likelihood \(P(\mathbf{Y}|\mathbf{X})\), given a T-length sequence of D-dimensional speech features, \(\mathbf{X}=\{\mathbf{x}_{\mathbf{t}}\in\mathbb{R}^{D}|t=1,\cdots,T\}\). For text tokenization, we used word-piece byte-pair encoding [33]. The total loss function \(\mathcal{L}_{\text{asr}}\) is a multi-task learning objective that combines the decoder cross-entropy (CE) loss \(\mathcal{L}_{\text{ce}}\) and the CTC loss \(\mathcal{L}_{\text{ctc}}\) [34]: \[\mathcal{L}_{\text{asr}}=\alpha\mathcal{L}_{\text{ctc}}+(1- \alpha)\mathcal{L}_{\text{ce}} \tag{2}\] where \(\alpha\) is used for interpolation. In our approach, the conformer is initially pre-trained on monolingual data and subsequently fine-tuned on monolingual and synthetic CS speech combined. ### Code-Mixing Index To quantify the amount of code-switching we use the _Code-Mixing Index_ (CMI) metric [35]. The CMI for an utterance is defined as: \[CMI=\frac{\frac{1}{2}(N-\max_{i})+\frac{1}{2}P}{N} \tag{3}\] where \(\max_{i}\) represents the number of words in the dominant language \(i\), \(N\) is the total word count in the utterance, and \(P\) is the number of code alternation points, with the constraint \(0\leq P<N\). A low CMI score indicates monolingualism in the text, whereas a high CMI score implies a high degree of code-mixing in the text (a minimal code sketch is given at the end of this section). ## 3 Data and Experimental Setup **In-domain:** The target domain we are considering is Mandarin-English code-switching, specifically SEAME [36]. In this scenario, we utilize monolingual training data from Chinese AISHELL-1 [37], \(100\)h of English data randomly sampled from Tedlium3 [38], and SEAME text [36] to generate \(62.2\) hours of CS data. Evaluation is performed on the SEAME test sets (devman and devsge), measuring the mixed error-rate (MER) that considers word-level English and character-level Mandarin. We also report WER on monolingual English and CER on monolingual Chinese subsets. **Zero-shot:** For this scenario, we use monolingual training data from MGB-2 [39] and Tedlium3. We generate \(80\) hours of CS data using the synthetic CS text described in §2.4. Evaluation is conducted on ESCWA [8], which is a real Arabic-English CS dataset. **Data pre-processing:** All audios are augmented with speed perturbations (\(0.9\), \(1.0\) and \(1.1\)) and transformed into \(83\)-dimensional feature frames (\(80\) log-mel filterbank coefficients plus \(3\) pitch features). Additionally, we augment the features with SpecAugment, with mask parameters \((mT,mF,T,F)=(5,2,27,0.05)\) and bi-cubic time-warping. **Models:** The conformer encoder consists of \(12\) blocks, each with \(2048\) feed-forward dimensions, \(256\) attention dimensions, and \(4\) attention heads. The transformer decoder has \(6\) blocks with configurations similar to the encoder. We combine \(2622\) Mandarin characters with \(3000\) English BPE units for the **In-domain** scenario. As for the **Zero-shot** scenario, we use a shared Arabic-English vocabulary of size \(5000\) BPE. Our training configuration utilizes the Adam optimizer with a learning rate of \(0.001\), warmup steps of \(25\)K, a dropout rate of \(0.1\) and \(40\) epochs. We use joint training with hybrid CTC/attention by setting the CTC weight \(\alpha\) of Eq. (2) to \(0.3\). During inference, we use a beam size of \(10\) with no length penalty. 
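Returning to the CMI of Eq. (3), a minimal Python sketch of the per-utterance computation follows; the per-word language tagger `lang_of` is an assumed helper, not part of the paper:

```python
def cmi(tokens, lang_of):
    """Per-utterance Code-Mixing Index, Eq. (3).
    tokens: list of words; lang_of: callable mapping a word to a language id."""
    langs = [lang_of(t) for t in tokens]
    N = len(langs)
    if N == 0:
        return 0.0
    counts = {}
    for l in langs:
        counts[l] = counts.get(l, 0) + 1
    max_i = max(counts.values())                       # words in dominant language
    P = sum(a != b for a, b in zip(langs, langs[1:]))  # code alternation points
    return (0.5 * (N - max_i) + 0.5 * P) / N

# Illustrative usage: tag Latin-script words as English, the rest as Arabic.
print(cmi("we went to السوق yesterday".split(),
          lambda w: "en" if w.isascii() else "ar"))   # -> 0.3
```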
For the language model (LM), we train a long short-term memory (LSTM) network with \(4\) layers, each of \(2048\) dimensions, over \(20\) epochs. When integrating the LM with E2E-ASR, we apply an LM weight of \(0.2\). ## 4 Results and Analysis ### In-domain CS text We examine the impact of augmenting data with generated CS speech from monolingual data, particularly by integrating in-domain CS text. The results, presented in Table 1, are based on the SEAME evaluation. The results from _Mono_, obtained by training on monolingual Chinese and English data, act as our baseline. A shallow fusion with a _SEAME-LM_, trained on SEAME text data, results in a marginal relative reduction: up to \(2\)% in MER. However, simple CS augmentation using unigram units yields up to \(15.3\)% relative reductions in MER, compared to _Mono_. By further enhancing the audio quality of the generated data, we achieve an overall relative improvement of up to \(34.4\)% in MER compared to _Mono_. Finally, comparing our best results to the ASR trained on SEAME, the absolute gap is up to \(8.9\)% MER. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**DevMan**} & \multicolumn{3}{c}{**DevSge**} \\ \cline{2-7} & CER-MAN & WER-EN & MER & CER-MAN & WER-EN & MER \\ \hline Mono & 37.2 & 67.4 & 32.9 & 56.7 & 47.5 & 38.4 \\ + SEAME-LM & 36.4 & 65.9 & 32.2 & 55.2 & 46.5 & 37.6 \\ + CS-Unigram & 31.5 & 53.3 & 28.4 & 47.5 & 42.2 & 34.4 \\ + CS-Unigram-SE & 29.7 & 53.7 & 27.2 & 44.0 & 40.9 & 33.0 \\ + CS-Bigram-SE & **27.2** & **47.9** & **25.4** & **39.7** & **38.1** & **31.4** \\ \hline SEAME-ASR (topline) & 15.1 & 28.8 & 16.5 & 21.7 & 28.7 & 23.5 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the CER/WER/MER results on SEAME. **CS**: generated CS using in-domain SEAME text. **Mono**: baseline trained on monolingual data, **(Unigram, Bigram)**: generated CS using (unigram, bigram) units, **SE**: signal enhancement from §2, **SEAME-ASR**: topline model trained on SEAME. Given that we utilize SEAME text for data generation, this gap can be attributed to audio mismatches. Thus, we anticipate that further enhancements in audio quality to align with SEAME will bridge this gap. ### Zero-shot CS We investigate the effects of augmenting the dataset with CS speech generated from monolingual data and synthetic CS text. This synthetic CS text is produced from the monolingual Arabic MGB-2 and English Tedlium3 datasets, as described in §2.4. Our evaluations, detailed in Table 2, utilize the ESCWA dataset. Operating under our assumption that we do not have access to real CS data, we use the merged evaluation sets from MGB-2 and Tedlium3 to select the best epochs for the model. The observations align with those from §4.1: the _CS-Unigram_ yields relative reductions of \(12.3\)% in WER and \(22.8\)% in CER. Interestingly, the results from shallow fusion with \(Mono+CS\)-\(LM\) consistently underperform when compared to \(Mono\). Moreover, enhancing the quality of the generated audio further improves results, leading to an overall relative improvement of \(16.2\)% in WER and \(27.6\)% in CER compared to \(Mono\). It is noteworthy that, on monolingual data, performance deteriorates with CS augmentation. This suggests a model bias towards code-switching and a reduced inclination for monolingual data. We further analyze this observation in §4.4. ### Generated CS data size We explore the impact of the amount of generated CS data on ASR system performance. 
Figure 2 illustrates the WER at different percentages of generated CS data. In this experiment, we generated CS data with bigrams at 10%, 50%, and 100%. The 0% point represents the monolingual condition, while 100% corresponds to 80 hours for Arabic-English and 62.2 hours for Mandarin-English. It can be observed that there is a substantial improvement when using 10% of the generated CS data. However, as the percentage of generated CS data increases, the rate of improvement decreases. This suggests that with more data, further gains can be expected, albeit at a diminishing rate. ### Analysis To understand the effect of our proposed CS augmentation, we measure the average CMI. Notably, the conventional CMI does not account for the accuracy of the sentence. To address this, we select predictions that closely align with the reference, using a heuristic WER threshold set at \(\leq 20\)%. It can be observed from Table 3 that employing CS data augmentation consistently elevates the CMI. This affirms our assumption that CS augmentation enhances the model's aptitude for code-switching. ## 5 Conclusion We introduced a framework that generates synthetic code-switched data from monolingual corpora. Our findings demonstrate that integrating this CS data augmentation yields substantial improvements that surpass results from training exclusively on monolingual sources or simply combining with a code-switched language model. Enhancing the quality of the generated audio further improves the performance. Additionally, in a zero-shot learning scenario, our CS augmentation is superior to solely monolingual training. Finally, we show that the improvements from using CS data augmentation stem from the model's increased propensity for code-switching and a decreased bias towards monolingual input. ## 6 Acknowledgements This work was carried out during the 2022 Jelinek Memorial Summer Workshop on Speech and Language Technologies at Johns Hopkins University, which was supported by Amazon, Kanari AI, Microsoft and Google. This work was also partially supported by NSF CCRI Grant No 2120435. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**MGB-2**} & \multicolumn{2}{c}{**TED3**} & \multicolumn{2}{c}{**ESCWA**} \\ \cline{2-7} & CER & WER & CER & WER & CER & WER \\ \hline Mono & **6.1** & **12.9** & **4.4** & **8.5** & 31.1 & 48.7 \\ + CS-LM & 6.3 & 12.5 & 4.6 & 8.7 & 38.0 & 57.0 \\ + CS-Unigram & 6.9 & 14.6 & 5.2 & 10.1 & 24.0 & 42.7 \\ + CS-Unigram-SE & 7.0 & 14.7 & 5.4 & 10.4 & 23.1 & 42.0 \\ + CS-Bigram-SE & 7.0 & 14.7 & 5.2 & 10.2 & **22.5** & **40.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the CER/WER results on ESCWA. **CS**: data generated using synthetic CS text. **Mono**: baseline trained on monolingual data, **(Unigram, Bigram)**: generated CS using (unigram, bigram) units, **SE**: signal enhancement from §2. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Dataset** & **Ref** & **Mono** & **CS-Uni** & **CS-Uni-SE** & **Bi-SE** \\ \hline ESCWA & 15.6 & 8.7 & 10.6 & 11.6 & 10.5 \\ SEAME & 10.4 & 3.3 & 5.4 & 6.2 & 7.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of the average CMI. **Mono**: baseline trained on monolingual data, **SE**: signal enhancement from §2, **Ref**: reference, **(Uni, Bi)**: generated CS using (unigram, bigram) units. Figure 2: WER/MER at different percentages of generated CS data, where **0%** represents Monolingual and **100%** represents Monolingual with all generated CS.
Designing effective automatic speech recognition (ASR) systems for code-switching (CS) depends on the availability of transcribed CS resources. To address data scarcity, this paper introduces Speech Collage, a method that synthesizes CS data from monolingual corpora by splicing audio segments. An overlap-add approach is used to improve the smoothness of the generated audio. We investigate the impact of the generated data in two scenarios: using in-domain CS text and a zero-shot approach with synthesized CS text. Empirical results show relative reductions of up to 34.4% in Mixed-Error Rate and 16.2% in Word-Error Rate for the in-domain and zero-shot scenarios, respectively. Finally, we demonstrate that CS augmentation strengthens the model's code-switching inclination and reduces its monolingual bias.
2309.13205
A Practical Survey on Zero-shot Prompt Design for In-context Learning
The remarkable advancements in large language models (LLMs) have brought about significant improvements in Natural Language Processing(NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single "best" prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various NLP tasks.
Yinheng Li
2023-09-22T23:00:34
http://arxiv.org/abs/2309.13205v1
# A Practical Survey on Zero-shot Prompt Design for In-context Learning ###### Abstract The remarkable advancements in large language models (LLMs) have brought about significant improvements in Natural Language Processing (NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single "best" prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various NLP tasks. ## 1 Introduction In recent years, transformer-based language models (such as [14], [17], [18]) have emerged as a transformative force in the field of artificial intelligence, revolutionizing Natural Language Understanding (NLU) and Generation (NLG). As model size and training data have evolved, the GPT series has exhibited extraordinary capabilities in a wide range of natural language tasks by relying on a paradigm known as in-context learning. According to [17], in-context learning harnesses the context provided by input data to generate appropriate responses or predictions, contrasting with traditional methods that necessitate explicit task-specific training and fine-tuning on labeled datasets. In-context learning enables large language models to capitalize on vast amounts of data and adapt to various tasks in a flexible and dynamic manner. There are several categories of in-context learning, including zero-shot, one-shot, and few-shot learning. In all types of in-context learning, the key to success lies in effective prompt design, which is occasionally referred to as an "art." This survey paper aims to categorize each type of in-context learning, discuss the core principles, examine state-of-the-art design techniques, and explore recent advancements in in-context learning, with a particular focus on zero-shot discrete in-context learning. ## 2 Definition Although there is no formal definition for prompt design optimization, we follow the principle from [17] and provide the definition in (1) for prompt design in in-context learning: \[P^{\star}=\operatorname*{arg\,max}_{P}\mathbb{E}_{x_{i},y_{i}\in\mathcal{D}}[S (f_{\theta}(P,x_{i}),y_{i})] \tag{1}\] Here, \(x_{i}\) represents input sentences and features, while \(y_{i}\) denotes the target labels. \(\theta\) signifies the parameters of any Large Language Model (LLM) or Pretrained Language Model (PLM), which remain frozen in the case of in-context learning. \(f_{\theta}\) represents the output from the LLM given input \(x_{i}\) and prompt \(P\). \(S\) is a scoring function that measures the performance of the model output in relation to the ground truth label \(y_{i}\). The objective of in-context learning (or prompt engineering) is to identify the optimal prompt \(P^{\star}\) that maximizes the score \(S\) on the test distribution. 
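To make Eq. (1) concrete: in practice, the expectation is approximated on a small labeled development set and the argmax is taken over a finite pool of candidate prompts. A minimal Python sketch follows, where `llm` and `score` are assumed callables standing in for \(f_{\theta}\) and \(S\), not any specific API:

```python
def select_prompt(candidates, dev_set, llm, score):
    """Empirical version of Eq. (1): pick the prompt maximizing the
    average score S(f_theta(P, x), y) over (x, y) pairs in dev_set."""
    def avg_score(prompt):
        return sum(score(llm(prompt, x), y) for x, y in dev_set) / len(dev_set)
    return max(candidates, key=avg_score)

# Illustrative usage with toy stand-ins for the model and the metric:
dev = [("2+2", "4"), ("3+5", "8")]
llm = lambda p, x: "4" if "step" in p and x == "2+2" else "?"
score = lambda pred, gold: float(pred == gold)
best = select_prompt(["Answer:", "Let's think step by step."], dev, llm, score)
```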
Based on the structure of \(P\), in-context learning can be further classified into discrete (hard) prompting, when \(P\) consists of a list of tokens, and continuous (soft) prompting, where \(P\) represents an embedding vector (see Figure 1). Additionally, for zero-shot in-context learning, \(P\) is independent of \(x_{i}\), whereas for one-shot or few-shot in-context learning, \(P\) can be a function of \(x_{i}\) (from training data). This survey focuses on zero-shot in-context learning with discrete prompts and examines its application exclusively in decoder-only LLMs, such as the GPT-x series. ## 3 Relevant Work ### Prompts for Encoder-only Transformer Models (BERT) Before the advent of in-context learning, some research efforts were devoted to studying how to design effective prompts to enhance the performance of BERT models. As depicted in Figure 2, prompts in BERT are usually combined with the input to form a cloze-style structure, while for transformer decoder-based models, prompts are more flexible. Numerous studies have investigated prompt design in BERT. In the work by Jiang et al. (2020), the authors proposed heuristic-based approaches for designing discrete prompts. Dependency parsing is employed to identify useful prompts from Wikipedia. In Gao et al. (2021), the authors utilized T5 as a prompt generator with a beam search to create a set of diversified prompts. They then used \(D_{dev}\) to select the single prompt with the best performance. In Shin et al. (2020), a gradient-based prompt search approach was proposed, wherein each prompt token is learned by directly optimizing LMs on the downstream task. In addition to prompt designing strategies, other research work focuses on enriching the prompt candidates and ensembling the outputs from multiple prompts for the same input. To enrich prompts, Jiang et al. (2020) employed back-translation to paraphrase prompts. Building on this work, Haviv et al. (2021) trained a separate BERT model to rewrite prompts using the nearest BERT vector embedding. The concept of in-context learning originates from the work by Brown et al. (2020). However, BERT models can also perform similar tasks by using a single token as output. For example: France's capital is [MASK]. Only the output for the [MASK] position is used for inference. This characteristic enables the ensembling of answers from different prompts, although similar practices are less straightforward for GPT-style models. In Jiang et al. (2020), the authors proposed rank-based ensemble and optimized ensemble methods to aggregate answers generated from different prompts. Among the studies designing prompts for BERT models, the majority focus on discrete prompts (i.e., hard prompts). To the best of our knowledge, we did not find any work attempting to generate continuous prompts. In general, optimizing prompts in BERT brings only marginal improvements to the original model. Given the size and structure of BERT, it is more favorable to fine-tune on downstream tasks. ### Prompts for Decoder-only Transformer (GPT) #### 3.2.1 Continuous Prompt Another line of research has focused on optimizing soft prompts, which removes the constraint that prompts have to be natural language. Soft prompts can be learned and optimized directly within the same language model. The key difference between soft prompt tuning and fine-tuning is that prompt tuning typically fixes the weights of the language model and only performs gradient updates on the network that generates the prompt. 
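As a concrete illustration of this setup, the following minimal PyTorch sketch shows a generic soft-prompt module, not the implementation of any specific paper: \(k\) trainable prompt vectors are prepended to the token embeddings while the language model itself stays frozen (the dimensions and learning rate are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prepend k trainable prompt embeddings; only these receive gradients."""
    def __init__(self, embed_dim, k=20):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(k, embed_dim) * 0.02)

    def forward(self, token_embeds):                 # token_embeds: (B, T, D)
        B = token_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(B, -1, -1)
        return torch.cat([p, token_embeds], dim=1)   # (B, k+T, D)

# Training outline: freeze the LM, optimize only the prompt parameters, e.g.
# for p in lm.parameters(): p.requires_grad_(False)
# optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=1e-3)
```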
Figure 1: Prompt categorization by prompt form

Prefix-Tuning Li and Liang (2021) is one of the early works that tunes prompts on GPT-2 with a small amount of data per task, achieving comparable performance to the full-data fine-tuning setting. Prefix-Tuning does not use a separate network; instead, it utilizes the same transformer network but only optimizes the input embedding of the prompt. In P-Tuning V1 Liu et al. (2021) and V2 Liu et al. (2022), a small trainable prompt encoder is used to generate the input prompt for the language model. While using soft prompts provides more flexibility in prompt design, it requires access to either the weights of language models or the ability to input vectors into language models. As recent language models are hosted as cloud services and large language models are difficult to access via vector inputs, this practice becomes less feasible when using GPT-3 or PaLM Chowdhery et al. (2022). #### 3.2.2 Few-Shot Learning In the GPT paper Brown et al. (2020), few-shot learning demonstrates strong NLP capabilities across various benchmarks. As the title suggests, language models are few-shot learners. In the few-shot setting, a task description along with a few examples is presented to the model, which is then asked to complete the task for an unseen example. Numerous studies have been conducted to optimize few-shot examples and prompts to enhance performance. In Liu et al. (2021), the authors discovered that GPT-3 generally performs better when in-context examples are similar to the test examples. As a result, they proposed an in-context example selection algorithm based on example similarities. Similarity is measured using RoBERTa embedding distance in Euclidean space or cosine distance. Other works, such as Rubin et al. (2021) and Gutierrez et al. (2022), have adopted similar example selection logic and demonstrated better performance over randomly selected examples. In addition to example selection methods, research efforts such as Wu et al. (2022) and Kumar and Talukdar (2021) optimize the ranking and ordering of retrieved examples. While few-shot learning exhibits remarkable performance, according to the no free lunch (NFL) theorem Wolpert and Macready (1995, 1997), providing examples inevitably introduces bias into the prediction algorithm. In cases where out-of-distribution samples occur, applying few-shot learning can hinder the inference process. ## 4 Zero-Shot Discrete Prompts With the recent success of Large Language Models such as GPTs, designing zero-shot discrete prompts has become increasingly popular in practice. In the experiments conducted by Reynolds and McDonell (2021), the authors demonstrate that carefully engineered zero-shot prompts can actually outperform few-shot prompts. They argue that providing examples does not always help, because examples tend to be interpreted as part of a narrative rather than serving as categorical guidance. On the other hand, the advantages of using zero-shot discrete prompts can be listed as follows: (1) zero-shot prompts are highly interpretable, (2) little training data and few examples are required, (3) the design process is more straightforward, as we only need to deal with task instructions, and (4) the prompt structure is flexible, allowing us to insert our input wherever needed. Zero-shot discrete prompts are also known as task instructions. There are two primary approaches to obtaining a good discrete prompt. The first is heuristic-based manual design, while the second relies on an optimization algorithm to find the optimal prompt.
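To ground the terminology before the review, a minimal illustration of a zero-shot discrete prompt (the template wording is ours): the instruction is plain natural language, no examples are included, and the input slot can be placed wherever the template allows, reflecting advantage (4) above.

```python
# A zero-shot discrete prompt is just a natural-language task instruction
# with a slot for the input; no labeled examples are included.
TEMPLATE = (
    "Classify the sentiment of the review as Positive or Negative.\n"
    "Review: {review}\n"
    "Sentiment:"
)

def build_prompt(review: str) -> str:
    return TEMPLATE.format(review=review)

print(build_prompt("The battery lasts two days and it charges quickly."))
```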
Figure 2: Prompt categorization by model types

In this section, we focus on reviewing research on prompt design for transformer decoder style models (e.g., GPT), which have been the focus of the majority of research efforts. ### Manual Design In their work, Reynolds and McDonell (2021) argue that GPT (or other LLMs) resembles a superposition of human authors. Therefore, it can be helpful to ask GPT to pretend to be a character in the prompt or to use the prompt to signify a dialogue between people (i.e., task specification by memetic proxy). The authors also discuss the idea of MetaPrompts, which encapsulate a general intention that will develop toward specific meanings when additional information, such as a task question, is provided. The example prompts they provide, such as "Let's solve this problem by splitting it into steps," have been shown by subsequent works to be significantly helpful. In the work Mishra et al. (2021), the authors propose five principles for designing prompts for GPT-3 based on their observations of GPT-3's failures. These principles include: (1) using simple patterns to specify the expected output, (2) using bulleted lists and assertions, (3) breaking down complex tasks into multiple simpler ones, (4) adding explicit textual statements of output constraints, and (5) customizing the instructions so that the model can directly output the results. These principles can be a good starting point for manual design. Another line of work focuses on improving the reasoning capabilities of large language models via prompt design. Chain-of-Thought (CoT) Wei et al. (2022) was initially proposed for few-shot learning, where the reasoning steps are presented as part of the solution for several few-shot examples. The zero-shot version of CoT was later proposed in Kojima et al. (2022), which demonstrates that inserting the single prompt "let's think step by step" into the task instruction significantly improves performance on mathematical reasoning. The authors also experimented with different templates for prompts and found that instructive prompts help improve the model's performance in mathematical reasoning, while misleading or irrelevant prompts do not contribute to performance enhancement. ### Prompt Optimization Finding the optimal prompt can also be treated as an optimization process, where the goal is to optimize the performance of the target task. Similar to finding the best soft prompt or the optimal examples for few-shot learning, algorithms can be implemented to find the best zero-shot prompt. However, such work typically requires a small set of evaluation data to assess prompt performance. In the work by Zhou et al. (2022), the authors proposed the Automatic Prompt Engineer (APE) for zero-shot prompt design. An LLM is used to generate a group of prompts given task examples or a human description, and an iterative Monte Carlo search method is used to search for the optimal prompt given the objective function. In addition to using Monte Carlo search for prompt optimization, a gradient-free, edit-based search approach called Gradient-free Instructional Prompt Search (GRIPS) is introduced in Prasad et al. (2022). GRIPS starts from a manually designed instruction and iteratively searches among prompts generated by four edit operations (delete, add, swap, paraphrase) to find the optimal prompt for a target task.
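The search loop behind these methods can be summarized in a short sketch; this is our schematic reading of an APE-style Monte Carlo search, not the authors' code, and `propose`, `paraphrase`, and `evaluate` are hypothetical stand-ins for LLM calls and dev-set scoring.

```python
import random
from typing import Callable, List

def ape_style_search(
    propose: Callable[[int], List[str]],   # LLM call: draft n candidate prompts
    paraphrase: Callable[[str], str],      # LLM call: perturb one candidate
    evaluate: Callable[[str], float],      # score a prompt on a small eval set
    n_init: int = 20, n_keep: int = 5, rounds: int = 3,
) -> str:
    pool = propose(n_init)
    for _ in range(rounds):
        pool.sort(key=evaluate, reverse=True)
        survivors = pool[:n_keep]
        # Monte Carlo step: resample new variants around the best candidates
        pool = survivors + [paraphrase(random.choice(survivors))
                            for _ in range(n_init - n_keep)]
    return max(pool, key=evaluate)
```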
Another line of research uses gradient-based methods to generate discrete zero-shot prompts. The work FluentPrompt Shi et al. (2022) follows the idea from AutoPrompt Shin et al. (2020), using a gradient-based method to generate discrete prompts. The authors also use a fluency constraint to encourage human-readable prompt outcomes, which helps improve performance. Another gradient-based prompt generation method, RLPROMPT, is introduced in Deng et al. (2022). This work uses a reinforcement learning structure to generate prompts that optimize a task-based reward function. The prompts generated from this framework are often incoherent gibberish but are claimed to achieve significant performance improvements. ### Evaluation Evaluating prompt design is very challenging. As there is no ground-truth dataset for prompt generation, there is no "best" prompt but only better prompts. Therefore, the evaluation of prompt performance for in-context learning usually falls into the following categories. **Conditional Probability (Likelihood)**: To evaluate the performance of a text generation model, we can measure the probability of the generated text. In our case, we can calculate the conditional probability of the ground truth \(y\) given the prompt \(p\) and the input \(x\), or the joint probability of \(x\), \(y\), and \(p\), averaged over the training data, as shown in (2): \[\mathbb{E}_{x,y\in X,Y}\left[\mathrm{Prob}(y|x,p)\right] \tag{2}\] This is a simple strategy, because the models for in-context learning are generative language models, which produce the likelihood automatically (a scoring sketch is given after the conclusion). However, this metric sometimes fails to represent the actual performance on the downstream task. **Execution Accuracy**: A more direct method to measure the performance of a prompt is to use metrics from the target task Zhou et al. (2022), as ultimately the performance on the task is what we care about. In addition to measuring the execution accuracy directly on the entire training set, there are ways to efficiently estimate the performance on a subset of the training data to save computational cost Zhou et al. (2022), Li et al. (2022). **Prompt Transferability** is another evaluation metric, reported in Zhou et al. (2022), Deng et al. (2022), which is used to assess the quality of prompt generation methods. However, this metric is more useful for selecting a prompt design method than for evaluating the performance of a single prompt. **General Metrics for Language Models** should also be considered when using large language models via zero-shot in-context learning. It is important to measure performance from additional aspects. For example, if we are to build a question-answering system, we need to measure the risk of hallucination Ji et al. (2022). If we are to build an email generation system, we may need to measure toxicity and prevent generating any aggressive content. The work on Holistic Evaluation of Language Models (HELM) Liang et al. (2022) provides a good example of evaluating the performance of language models via in-context learning. Although various metrics have been reported in HELM for existing models, it is worth noting that the design of our prompt will directly impact the models' performance. ## 5 Conclusion The rapid development of large language models (LLMs) has significantly influenced various NLP tasks. Among the techniques to harness their capabilities, in-context learning with different types of prompts--discrete, continuous, few-shot, and zero-shot--has shown remarkable promise.
Discrete prompt engineering emphasizes human-readable prompts that can enhance model performance, while continuous prompt optimization involves soft prompts that can be learned and optimized directly in the same language model. Few-shot learning leverages a small number of examples to guide the model in the right direction, whereas zero-shot discrete prompts only require task instructions, offering a more straightforward design process. Manual design of prompts can be guided by principles based on model behavior, and optimization algorithms can be used to find optimal prompts. Evaluating the performance of prompts is challenging, as there is no single "best" prompt, and various metrics need to be considered. In conclusion, as LLMs continue to evolve, prompt design remains a crucial factor in harnessing their full potential across a wide range of applications. A combination of manual design, optimization techniques, and rigorous evaluation can lead to more effective and efficient use of LLMs in diverse NLP tasks.
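As a companion to the likelihood-based evaluation in Section 4.3, the following is a minimal sketch, assuming a HuggingFace GPT-2, of scoring \(\mathrm{Prob}(y|x,p)\) for one candidate prompt; averaging this score over a dev set and comparing across prompts implements Eq. (2). The prompt and example strings are hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def target_log_prob(prompt: str, x: str, y: str) -> float:
    """Sum of log Prob(token) over the tokens of y, conditioned on prompt + x."""
    ctx = tokenizer(prompt + x, return_tensors="pt").input_ids
    tgt = tokenizer(y, return_tensors="pt").input_ids
    ids = torch.cat([ctx, tgt], dim=1)
    logits = model(ids).logits                        # (1, L, vocab)
    logp = torch.log_softmax(logits[:, :-1], dim=-1)  # position t predicts token t+1
    total = 0.0
    for pos, token in zip(range(ctx.size(1) - 1, ids.size(1) - 1), ids[0, ctx.size(1):]):
        total += logp[0, pos, token].item()
    return total

print(target_log_prob("Answer with one word. ", "France's capital is", " Paris"))
```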
The remarkable progress of large language models (LLMs) has brought about significant improvements in Natural Language Processing (NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, covering various types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We survey various approaches to prompt design, namely manual design, optimization algorithms, and evaluation methods, for optimizing LLM performance. Our review covers the key research papers in prompt engineering and discusses their methodologies and contributions to the field. We also delve into the challenges of evaluating prompt performance, explaining that no single "best" prompt exists and that it is important to consider multiple metrics. In conclusion, this
2309.04779
$g$-factor engineering with InAsSb alloys toward zero band gap limit
Band gap is known as an effective parameter for tuning the Lande $g$-factor in semiconductors and can be manipulated in a wide range through the bowing effect in ternary alloys. In this work, using the recently developed virtual substrate technique, high-quality InAsSb alloys throughout the whole Sb composition range are fabricated and a large $g$-factor of $g\approx -90$ at the minimum band gap of $\sim 0.1$ eV, which is almost twice that in bulk InSb is found. Further analysis to the zero gap limit reveals a possible gigantic $g$-factor of $g\approx -200$ with a peculiar relativistic Zeeman effect that disperses as the square root of magnetic field. Such a $g$-factor enhancement toward the narrow gap limit cannot be quantitatively described by the conventional Roth formula, as the orbital interaction effect between the nearly triply degenerated bands becomes the dominant source for the Zeeman splitting. These results may provide new insights into realizing large $g$-factors and spin polarized states in semiconductors and topological materials.
Yuxuan Jiang, Maksim Ermolaev, Seongphill Moon, Gela Kipshidze, Gregory Belenky, Stefan Svensson, Mykhaylo Ozerov, Dmitry Smirnov, Zhigang Jiang, Sergey Suchalkin
2023-09-09T12:50:01
http://arxiv.org/abs/2309.04779v1
# \(g\)-factor engineering with InAsSb alloys toward zero band gap limit ###### Abstract Band gap is known as an effective parameter for tuning the Lande \(g\)-factor in semiconductors and can be manipulated in a wide range through the bowing effect in ternary alloys. In this work, using the recently developed virtual substrate technique, high-quality InAsSb alloys throughout the whole Sb composition range are fabricated and a large \(g\)-factor of \(g\approx-90\) at the minimum band gap of \(\sim 0.1\) eV, which is almost twice that in bulk InSb is found. Further analysis to the zero gap limit reveals a possible gigantic \(g\)-factor of \(g\approx-200\) with a peculiar relativistic Zeeman effect that disperses as the square root of magnetic field. Such a \(g\)-factor enhancement toward the narrow gap limit cannot be quantitatively described by the conventional Roth formula, as the orbital interaction effect between the nearly triply degenerated bands becomes the dominant source for the Zeeman splitting. These results may provide new insights into realizing large \(g\)-factors and spin polarized states in semiconductors and topological materials. + Footnote †: preprint: APS/123-QED high-quality unstrained, unrelaxed
InAsSb alloys in the whole composition range [14; 30], providing a perfect opportunity for experimental studies of the material parameters and \(g\)-factors in the narrow band gap region. In this work, we present a systematic investigation of the band structure evolution with the composition in InAsSb alloys via a combination of magneto-absorption measurements and \(k\cdot p\) calculations. We find that the Kane energy shows very little bowing effect across the entire composition range, but the \(g\)-factor increases significantly as the band gap reaches the minimum. When \(E_{g}\to 0\), the Landau levels (LLs) of the triply degenerate bands become fully relativistic (i.e., LL energy \(\propto\sqrt{B}\)) due to the dominant orbital interaction, and their relative wavefunction mixing determines the spin states and energy spacing of the LLs. For a typical III-V (more generally, zinc-blende type) semiconductor, we find that these relativistic LLs are highly spin polarized along with maximized energy spacings, which could lead to a \(g\)-factor of \(g\approx-200\) at 1 T (vs. \(g\rightarrow-\infty\) based on the Roth formula), far exceeding that of most two-band Dirac materials. Our findings may provide a new perspective for \(g\)-factor engineering in future devices based on semiconductors and topological materials. Five InAs\({}_{1-x}\)Sb\({}_{x}\) alloy samples are studied in this work, with \(x=0.09\), 0.22, 0.44, 0.50, and 0.63. These samples are grown by solid-source MBE on undoped GaSb(100) substrates. The \(x=0.50\) sample was grown using a VEECO Gen II MBE system at the Army Research Laboratory, and the other samples were grown using a VEECO GEN930 MBE system at Stony Brook University. The growth process has been described previously in Ref. [14]. The core structure and band alignment of our InAs\({}_{0.37}\)Sb\({}_{0.63}\) sample are schematically shown in Fig. 1 as an example. Information on the core structures of these samples is summarized in Table 1. In addition, samples with \(x=0.09\), 0.22, and 0.44 are n-doped (Te-doped, 2\(\times 10^{16}\) cm\({}^{-3}\)), and samples with \(x=0.50\) and 0.63 are grown without intentional doping. To avoid the formation of two-dimensional electron "pockets" due to band bending at the boundaries of the InAsSb layer (absorber), the barriers and cap are p-doped to \(10^{16}\) cm\({}^{-3}\). The three-dimensional character of the carrier motion in InAsSb is confirmed by magneto-transport measurements in tilted magnetic fields [14]. InAsSb alloy samples are then studied with magneto-IR spectroscopy, which is known for its accuracy in determining electronic band structures. The samples are placed inside a superconducting magnet at liquid helium temperature (the effective temperature at the sample is measured to be \(T=5\) K). The samples are illuminated with IR radiation in the Faraday configuration using a Bruker 80v Fourier-transform IR spectrometer. A composite Si bolometer is mounted behind the sample to detect the transmitted light signal at different magnetic fields. Figure 2(a) shows the false color plot of the normalized transmission \(T(B)/T(0\)T) of the InAs\({}_{0.50}\)Sb\({}_{0.50}\) sample as a typical example. A series of absorption modes, which blue-shift in energy with increasing magnetic fields, can be identified and attributed to LL transitions. The low-lying transitions are labeled with \(T_{0}\)-\(T_{5}\).
These modes originate from the same non-zero energy intercept as the magnetic field approaches zero, indicative of the nature of interband LL transitions. The energy intercept allows for a direct readout of the band gap \(E_{g}=108\) meV. To quantitatively describe these LL transitions and extract other material parameters, we employ the well-established eight-band \(k\cdot p\) model to fit the experimental results [24; 27; 31; 32]. The model consists of several parameters, including \(E_{g}\), \(\Delta\), \(E_{P}\), the electron effective mass \(m^{*}\), and the modified Luttinger parameters \(\gamma_{1}\), \(\gamma_{2}\), and \(\gamma_{3}\). To simplify the Hamiltonian, we first assume \(\gamma_{1,2,3}=0\). Meanwhile, we set \(A_{c}=\hbar^{2}/2m^{*}-\hbar^{2}E_{P}(3E_{g}+2\Delta)/6m_{0}E_{g}(E_{g}+\Delta)=0\), where \(\hbar\) is the reduced Planck constant and \(m_{0}\) is the free electron mass, to avoid spurious solutions [33]. Finally, we focus on the \(\Gamma\) point LLs, which carry the dominant contributions to the observed optical transitions. With these assumptions, the \(k\cdot p\) Hamiltonian is greatly simplified while, as we will show below, ensuring a good agreement between the experiment and model. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Sb (\%) & Grading (nm) & Bottom barrier (nm) & Absorber (nm) & Top barrier (nm) & Cap layer (nm) \\ \hline 9 & None & Al\({}_{80}\)Ga\({}_{20}\)As\({}_{6.2}\)Sb\({}_{93.8}\) 500 & InAs\({}_{91}\)Sb\({}_{9}\) 1000 & Al\({}_{80}\)Ga\({}_{20}\)As\({}_{6.2}\)Sb\({}_{93.8}\) 200 & InAs\({}_{91}\)Sb\({}_{9}\) 100 \\ 22 & Al\({}_{85}\)In\({}_{15}\)Sb 1600 & Al\({}_{95.5}\)In\({}_{4.5}\)Sb 500 & InAs\({}_{78}\)Sb\({}_{22}\) 1000 & Al\({}_{95.5}\)In\({}_{4.5}\)Sb 200 & InAs\({}_{78}\)Sb\({}_{22}\) 100 \\ 44 & Al\({}_{60}\)In\({}_{40}\)Sb 3000 & Al\({}_{68}\)In\({}_{32}\)Sb 500 & InAs\({}_{56}\)Sb\({}_{44}\) 1000 & Al\({}_{68}\)In\({}_{32}\)Sb 200 & InAs\({}_{56}\)Sb\({}_{44}\) 100 \\ 50 & Al\({}_{39}\)In\({}_{61}\)Sb 2630 & Al\({}_{63}\)In\({}_{37}\)Sb 250 & InAs\({}_{50}\)Sb\({}_{50}\) 1500 & Al\({}_{63}\)In\({}_{37}\)Sb 200 & InAs\({}_{50}\)Sb\({}_{50}\) 100 \\ 63 & Al\({}_{40}\)In\({}_{60}\)Sb 4000 & Al\({}_{48}\)In\({}_{52}\)Sb 500 & InAs\({}_{37}\)Sb\({}_{63}\) 1000 & Al\({}_{48}\)In\({}_{52}\)Sb 200 & Al\({}_{40}\)In\({}_{60}\)Sb 100 \\ \hline \end{tabular} \end{table} Table 1: Composition and thickness of the core layers in the MBE-grown InAsSb samples of different Sb concentrations. The core layer structure is shown in Fig. 1(a). Figure 1: (a) Structure layout of the InAs\({}_{0.37}\)Sb\({}_{0.63}\) sample. The InAsSb alloy (absorber) is sandwiched between the two Al\({}_{0.48}\)In\({}_{0.52}\)Sb barriers. (b) Schematic band alignment of the InAs\({}_{0.37}\)Sb\({}_{0.63}\) sample as an example. The zero energy corresponds to the top of the GaSb valence band.
The simplified Hamiltonian now reads \[H_{k\cdot p}=\begin{bmatrix}H_{+}&0\\ 0&H_{-}\end{bmatrix}, \tag{1}\] where \[H_{+}=\begin{bmatrix}E_{g}&i\sqrt{3}V^{\dagger}&iV&\sqrt{2}V\\ -i\sqrt{3}V&0&0&0\\ -iV^{\dagger}&0&0&0\\ \sqrt{2}V^{\dagger}&0&0&-\Delta\end{bmatrix},\] \[H_{-}=\begin{bmatrix}E_{g}&-\sqrt{3}V&-V^{\dagger}&i\sqrt{2}V^{\dagger}\\ -\sqrt{3}V^{\dagger}&0&0&0\\ -V&0&0&0\\ -i\sqrt{2}V&0&0&-\Delta\end{bmatrix}.\] Here, \(V=\frac{1}{\sqrt{6}}P_{0}k_{-}\), \(\mathbf{k}=(k_{x},k_{y},k_{z})\) is the wave vector, \(k_{\pm}=k_{x}\pm ik_{y}\), and \(P_{0}\) is related to the Kane energy by \(E_{P}=2m_{0}P_{0}^{2}/\hbar^{2}\). The bases for the Hamiltonian are in the order of the electron band (EB) spin up, heavy hole (HH) spin up, light hole (LH) spin down, split-off (SO) spin down, EB spin down, HH spin down, LH spin up, and SO spin up bands. To calculate the LL energies, we apply the ladder operator formalism and the following ansatz to the two subblocks of the Hamiltonian [31; 32]. For the \(H_{+}\) subblock, the ansatz is \(\left|n_{+}\right\rangle=[\left|n-1\right\rangle,\left|n-2\right\rangle,\left|n\right\rangle,\left|n\right\rangle]^{T}\). For the \(H_{-}\) subblock, the ansatz is \(\left|n_{-}\right\rangle=[\left|n-1\right\rangle,\left|n\right\rangle,\left|n-2\right\rangle,\left|n-2\right\rangle]^{T}\). Here, \([...]^{T}\) denotes the transpose operation, \(n\) is a positive integer, and \(\left|n\right\rangle\) is the \(n^{\mathrm{th}}\) harmonic oscillator eigenfunction. Further details of the calculation can be found in Refs. [31; 32]. With the calculated LLs, we can fit the experimental data and extract the corresponding band parameters. The dashed lines in Fig. 2(a) show the best fits to the data, and Fig. 2(b) shows the calculated LL structure using the fitting parameters in Table 2. In Fig. 2(b), we also label the corresponding low-lying LL transitions \(T_{0}\)-\(T_{5}\), where we assume the dominant contributions to the observed transitions in Fig. 2(a) are the HH to EB LL transitions [27]. \begin{table} \begin{tabular}{c c c c c c} \hline Sb & \(E_{g}\)(eV) & \(\Delta\)(eV) & \(E_{P}\)(eV) & \(g_{\mathrm{exp}}\) & \(g_{\mathrm{theory}}\) \\ \hline 0\% & 0.415 & 0.390 & 19 & 15.0 & 12.8 \\ 9\% & 0.315 & 0.323 & 22 & 20.0 & 21.6 \\ 22\% & 0.220 & 0.276 & 20 & 29.4 & 31.7 \\ 44\% & 0.132 & 0.280 & 19 & 63.2 & 63.2 \\ 50\% & 0.108 & 0.300 & 19 & 76.0 & 87.4 \\ 63\% & 0.100 & 0.375 & 21 & 91.5 & 108.5 \\ 100\% & 0.235 & 0.800 & 23.3 & 51.3 & 49.1 \\ \hline \end{tabular} \end{table} Table 2: Fitting parameters extracted from experiments using the \(k\cdot p\) model. Figure 3: (a-d) False color plots of the normalized transmission \(T(B)/T(0T)\) for InAsSb samples of (a) 9%, (b) 22%, (c) 44%, and (d) 63% Sb compositions. The dashed lines indicate the fitting results from the \(k\cdot p\) model using parameters given in Table 2. The gray areas are opaque regions to IR light and show no intensity. The color scales in all panels are kept the same. Figure 2: (a) False color plot of the normalized transmission \(T(B)/T(0T)\) of the InAs\({}_{0.50}\)Sb\({}_{0.50}\) alloy sample. The dashed lines indicate the fitting results from the \(k\cdot p\) model using parameters given in Table 2. The first few absorption modes are labeled with \(T_{i},i=0,1,...,5\). (b) Calculated LL fan diagram of InAs\({}_{0.50}\)Sb\({}_{0.50}\) at the \(\Gamma\) point. The blue, black, and red colors denote the LLs from the EB, HH, and LH bands, respectively. The arrows show the low-lying LL transitions, in correspondence to those in panel (a).
Following the above analysis, we can analyze the experimental results of the other InAsSb alloys with different Sb compositions. Figure 3 shows the false color plots of the normalized transmission data for Sb compositions of \(9\%\), \(22\%\), \(44\%\), and \(63\%\), respectively. Similar to Fig. 2(a), the dashed lines are the best fits to the data using the \(k\cdot p\) model, which exhibit excellent agreement with the experiment. Table 2 summarizes the band parameters extracted from the fitting for different Sb concentrations. We note that the actual fitting parameters are \(E_{g}\) and \(E_{P}\), whereas \(\Delta\) does not critically affect the fitting results as the SO band is distant from the other bands. Here, we assume that \(\Delta\) follows the bowing relation of ternary InAsSb alloys reported in Ref. [12]. Based on the results in Table 2, we can study the bowing effects of the band parameters. First, the band gap \(E_{g}\) bows positively with the Sb concentration. By comparing the interband LL transition energies of different compositions, we find that the energy decreases as the Sb composition increases, and \(E_{g}\) reaches its minimum of \(\sim 100\) meV at \(63\%\) Sb concentration. The extracted \(E_{g}\) versus Sb composition gives a bowing coefficient of \(0.83\), consistent with our previous result [14]. Second, the Kane energy \(E_{P}\) shows a weak bowing effect throughout the entire Sb composition range. This is in contrast to an earlier work [27], where \(E_{P}\) bows significantly with the Sb concentration. It is likely that the samples in Ref. [27] were grown with relaxed strain due to a strong mismatch of the lattice parameters between the substrate and the epilayers, which degraded the quality of the alloys, particularly near the middle of the composition range. According to Ref. [29], this may lead to additional coupling between the conduction and valence bands and hence bowing of \(E_{P}\). Lastly, we discuss the bowing effect in \(g\)-factors. The \(g\)-factor for the \(n^{\mathrm{th}}\) LL is defined as \(g_{n}=\min_{m}|E_{n,\uparrow(\downarrow)}(B)-E_{m,\downarrow(\uparrow)}(B)|/\mu_{B}B\), where \(\mu_{B}\) is the Bohr magneton and \(\min_{m}\{...\}\) finds the nearest LL of opposite spin. Based on this definition, the experimental \(g\)-factors (\(g_{\rm exp}\)) are extracted from the splitting of the two lowest EB LLs at 1 T, calculated using the \(k\cdot p\) model with experimental band parameters. For comparison, we also calculate the theoretical \(g\)-factors (\(g_{\rm theory}\)) from the Roth formula. In both cases, we observe a negative bowing. That is, the \(g\)-factor gradually increases with increasing Sb composition and reaches a maximum when the band gap reaches a minimum at \(63\%\) Sb. Then, the \(g\)-factor decreases with increasing band gap and Sb composition. Such behavior is expected, as the mixing between the EB, HH, and LH bands enhances the \(g\)-factor, and the mixing is strongly correlated with the size of the band gap. Therefore, the \(g\)-factors and band gaps exhibit opposite bowing effects. However, the bowing in \(g_{\rm exp}\) is found to be smaller than that in \(g_{\rm theory}\). As discussed before, this is because the Roth formula is a single-band theory and fails to handle the orbital mixing effect as the band gap reduces [8]. Further enhancement of the \(g\)-factor is possible when the band gap approaches zero.
In this case, the EB, HH, and LH bands are degenerate (forming a triple point), and their interactions become the dominant effect. For simplicity, as the SO band is still far from these bands, we can omit the SO band in the following discussion. We thus arrive at the following Hamiltonian \(H_{\pm}\) \[H_{+}=\begin{bmatrix}0&itU^{\dagger}&iU\\ -itU&0&0\\ -iU^{\dagger}&0&0\end{bmatrix},H_{-}=\begin{bmatrix}0&-tU&-U^{\dagger}\\ -tU^{\dagger}&0&0\\ -U&0&0\end{bmatrix}.\] Here, \(U=P_{0}k_{-}\), and for a more general discussion, we use \(t\) to denote the ratio of the coupling strength between the EB and HH to that between the EB and LH. The corresponding LL energies read \[E^{0}_{n,\pm} =0,\quad n=0,2,3,4...\] \[E^{\alpha}_{n,+} =\alpha P_{0}k_{B}\sqrt{n(1+t^{2})-t^{2}},\quad n=1,2,3,4...\] \[E^{\alpha}_{n,-} =\alpha P_{0}k_{B}\sqrt{n(1+t^{2})-1},\quad n=1,2,3,4...\] where \(k_{B}=\sqrt{eB/\hbar}\), and \(e\) is the elementary charge. Each LL has three indices. The superscript \(\alpha\) is the band index and takes the value of \(0,+1,-1\), denoting the HH, EB, and LH bands, respectively. The first subscript \(n\) denotes the LL index in each band, and the second subscript \(\pm\) denotes the subblock \(H_{\pm}\) to which the eigenstate relates. Figure 4(a) shows the magnetic field dependence of the calculated LL energies with \(t=\sqrt{3}\), which is the case for III-V semiconductors. Due to the electron-hole symmetry (i.e., \(E^{-1}_{n,\pm}=-E^{+1}_{n,\pm}\)), we will focus on the \(\alpha=+1\) LLs in the discussion below. We will also exclude the discussion of the spin states in the \(\alpha=0\) LLs, as their Zeeman effect is negligible due to the large degeneracy. In this case, we can omit the band index for simplicity. As the basis states for each subblock \(H_{\pm}\) are not pure spin states, the spin up component of a LL is found to be \[S^{\uparrow}_{n,+}=1-\frac{n/2}{n(1+t^{2})-t^{2}},\quad S^{\uparrow}_{n,-}=\frac{(n-1)/2}{n(1+t^{2})-1}.\] Figure 4(b) shows the calculated spin up component of the low-lying LLs as a function of \(t\). We find that, independent of \(t\), LL\({}_{1,+}\) is equally spin mixed while LL\({}_{1,-}\) is fully spin down polarized. The other LLs become more spin polarized with increasing \(t\). Hence, for \(t\) that gives decent spin polarization, the Zeeman splitting is now directly connected to the orbital energy levels (i.e., the LLs) and exhibits a peculiar relativistic \(\sqrt{B}\) magnetic field dependence (Fig. 4(a)), in stark contrast to the conventional linear-in-\(B\) Zeeman splitting. On the other hand, the magnitude of the Zeeman splitting also depends on the choice of \(t\). Figure 4(c) shows the \(t\) dependence of the low-lying LL energies. For \(t=0\), \(1\), and \(t\rightarrow+\infty\), the LLs of opposite dominant spin components are degenerate, and thus the Zeeman effect vanishes. On the contrary, when a LL is equally separated from two neighboring LLs of opposite spins, the optimal Zeeman effect is achieved. For example, a simple calculation using the relation \(E_{2,-}-E_{2,+}=E_{2,+}-E_{1,-}\) gives an optimized \(t\approx 1.7\) for large Zeeman splitting in LL\({}_{2,+}\), which is close to \(t=\sqrt{3}\) in III-V semiconductors. The optimized \(t\) for other LLs is also close to this value. It is interesting to compare the Zeeman effect in such triple point semimetals to that of Dirac semimetals such as graphene [26] and ZrTe\({}_{5}\) [34].
In the two-band model (as in Dirac semimetals), the interaction between the two bands leads to degenerate LLs with no dominant spin components. This is equivalent to taking \(t\to 0\) or \(+\infty\) in our model, where no Zeeman effect exists if one considers only the orbital interaction. The Zeeman effect comes into play through the interaction with remote bands [24; 8; 35], which leads to a relatively small \(g\)-factor. However, in triple point semimetals, the additional interaction with the third band can lift the degeneracy of the LLs (except for the lowest two LLs). Therefore, the Zeeman effect can reveal itself through the splitting of the orbital energy levels and no longer takes effect through perturbations. In this case, the \(g\)-factor can be more easily and effectively manipulated through the interactions between the three bands (EB, HH, and LH) rather than with the remote bands. These observations could be useful in designing high \(g\)-factors in future topological materials. Before closing, we comment on how to enhance the Zeeman effect in practical materials. We find that \(t=\sqrt{3}\) is an ideal ratio, which gives rise to a decent 80% spin polarization in \(n>1\) LLs as well as the ideal energy spacing between spin polarized LLs. In fact, this ratio is protected by the crystal symmetry, and hence it also applies to zinc-blende type semiconductors [36]. Using a typical value of \(E_{P}=20\) eV, the Zeeman splitting for LL\({}_{2,+}\) is about 11 meV at 1 T (i.e., \(\min\{E_{2,-}-E_{2,+},E_{2,+}-E_{1,-}\}\approx 11\) meV), which corresponds to an effective \(g\)-factor of \(g\approx-200\). Our finding is consistent with that reported on triple point (zinc-blende) HgCdTe [37]. Therefore, zinc-blende type semiconductors with zero energy gap are ideal candidates for realizing large Zeeman effects. This work was primarily supported by the NSF (grant nos. DMR-1809120 and DMR-1809708). The MBE growth at Stony Brook was also supported by the U.S. Army Research Office (Grant No. W911NF2010109) and the Center of Semiconductor Materials and Device Modeling. The magneto-IR measurements were performed at the National High Magnetic Field Laboratory, which is supported by the NSF Cooperative Agreement (nos. DMR-1644779 and DMR-2128556) and the State of Florida. S.M., D.S., and Z.J. acknowledge support from the DOE (for magneto-IR) under grant no. DE-FG02-07ER46451. Y.J. acknowledges support from the National Natural Science Foundation of China (Grant No. 12274001) and the Natural Science Foundation of Anhui Province (Grant No. 2208085MA09).
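As a back-of-the-envelope check (ours, not from the paper) of the estimate above, the quoted \(\sim\)11 meV splitting at 1 T can be converted into an effective \(g\)-factor magnitude via \(|g|=\Delta E/(\mu_{B}B)\):

```python
# Convert a Zeeman splitting (eV) at field B (T) into an effective |g|-factor.
MU_B = 5.7883818e-5  # Bohr magneton in eV/T

def effective_g(delta_e_ev: float, b_tesla: float) -> float:
    return delta_e_ev / (MU_B * b_tesla)

print(effective_g(11e-3, 1.0))  # ~190, consistent with the quoted |g| ~ 200
```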
The band gap is known as an effective parameter for tuning the Lande \(g\)-factor in semiconductors and can be manipulated over a wide range through the bowing effect in ternary alloys. In this work, using the recently developed virtual substrate technique, high-quality InAsSb alloys over the whole Sb composition range are fabricated, and a \(g\)-factor of \(g\approx -90\) is found at the minimum band gap of \(\sim\)0.1 eV, which is almost twice the value in bulk InSb. Further analysis toward the zero-gap limit reveals a possible gigantic \(g\)-factor of \(g\approx -200\), characterized by a peculiar relativistic Zeeman effect that disperses as the square root of the magnetic field. Such an enhancement of the \(g\)-factor toward the narrow-gap limit cannot be explained by the conventional Roth formula. This is
2309.10724
Sound Source Localization is All about Cross-Modal Alignment
Humans can easily perceive the direction of sound sources in a visual scene, termed sound source localization. Recent studies on learning-based sound source localization have mainly explored the problem from a localization perspective. However, prior arts and existing benchmarks do not account for a more important aspect of the problem, cross-modal semantic understanding, which is essential for genuine sound source localization. Cross-modal semantic understanding is important in understanding semantically mismatched audio-visual events, e.g., silent objects, or off-screen sounds. To account for this, we propose a cross-modal alignment task as a joint task with sound source localization to better learn the interaction between audio and visual modalities. Thereby, we achieve high localization performance with strong cross-modal semantic understanding. Our method outperforms the state-of-the-art approaches in both sound source localization and cross-modal retrieval. Our work suggests that jointly tackling both tasks is necessary to conquer genuine sound source localization.
Arda Senocak, Hyeonggon Ryu, Junsik Kim, Tae-Hyun Oh, Hanspeter Pfister, Joon Son Chung
2023-09-19T16:04:50
http://arxiv.org/abs/2309.10724v1
# Sound Source Localization is All about Cross-Modal Alignment ###### Abstract Humans can easily perceive the direction of sound sources in a visual scene, termed sound source localization. Recent studies on learning-based sound source localization have mainly explored the problem from a localization perspective. However, prior arts and existing benchmarks do not account for a more important aspect of the problem, cross-modal semantic understanding, which is essential for genuine sound source localization. Cross-modal semantic understanding is important in understanding semantically mismatched audio-visual events, e.g., silent objects, or off-screen sounds. To account for this, we propose a cross-modal alignment task as a joint task with sound source localization to better learn the interaction between audio and visual modalities. Thereby, we achieve high localization performance with strong cross-modal semantic understanding. Our method outperforms the state-of-the-art approaches in both sound source localization and cross-modal retrieval. Our work suggests that jointly tackling both tasks is necessary to conquer genuine sound source localization. ## 1 Introduction Humans can easily perceive where the sound comes from in a scene. We naturally attend to the sounding direction and associate incoming audio-visual signals to understand the event. To achieve human-level audio-visual perception, sound source localization in visual scenes has been extensively studied [50, 51, 4, 47, 8, 35, 31, 33, 53, 54, 52, 36, 39, 38, 20]. Motivated by the fact that humans learn from natural audio-visual correspondences without explicit supervision, most of the studies have been developed on a fundamental assumption that audio and visual signals are temporally correlated. With this assumption, losses of the sound source localization task are modeled by audio-visual correspondence as a self-supervision signal and are implemented by contrasting audio-visual pairs, i.e., contrastive learning. While these approaches appear to be unsupervised methods, they strongly rely on partial supervision information, e.g., using supervisedly pretrained vision networks [50, 51, 47, 53, 54, 20] and visual objectness estimators for post-processing [39, 38]. Without leveraging such strong initial representations, the performance is degraded. Thus, the previous methods are not purely self-supervised approaches. Even further, there are recent studies [45, 39, 38] that point out visual objectness bias in existing sound source localization benchmarks and exploit the objectness prior to improve the localization accuracy. They show that, even without interaction between visual and audio signals, a model may achieve strong accuracy in localization by referring to visual signals alone, which is not the true intention of the sound source localization task. In short, the current evaluation and setting of sound source localization do not capture the true sound source localization performance. In this work, we first reexamine the evaluation of sound source localization methods by introducing a cross-modal retrieval task as an auxiliary evaluation task. With this task, we can measure whether the learned representations have the capability to accurately associate audio and visual modalities, i.e., the more fine-grained audio-visual correspondence that is essential for genuine sound source localization. This aspect has been missed in existing sound source localization benchmarks.
Indeed, our experiments show that higher sound localization performance does not guarantee higher cross-modal retrieval performance. Figure 1: **A conceptual difference between prior approaches and our alignment-based sound source localization.** Second, given this additional criterion, we revisit the importance of semantic understanding shared across audio and visual modalities in both sound source localization and cross-modal retrieval. In the previous methods [50, 51, 54, 47], the cross-modal semantic alignment is induced by instance-level cross-modal contrastive learning, i.e., cross-modal instance discrimination between visual and audio features. However, they are aided by labels or supervisedly pretrained encoders2 to ease the challenging cross-modal feature alignment. Instead, our method learns from scratch, compensating for the lack of guidance by incorporating multiple positive samples into cross-modal contrastive learning. Specifically, we construct a positive set for each modality using both multi-view [10] and conceptually similar samples [17]. Thereby, we enhance feature alignment and achieve high localization performance and strong cross-modal semantic understanding. Footnote 2: Typically, an image encoder is pretrained on ImageNet [16] and an audio encoder is pretrained on AudioSet [25] in supervised ways. We evaluate our method on the VGG-SS and SoundNet-Flickr benchmarks for sound source localization and cross-modal retrieval. As aforementioned, the sound source localization task is closely related to the cross-modal retrieval task, but our experiments show that existing works have a weak performance correlation between them. This implies that both tasks need to be evaluated to assess genuine sound source localization. The proposed method performs favorably against the recent state-of-the-art approaches in both tasks. We summarize the contributions of our work as follows: * We show that existing sound source localization benchmarks are not capable of evaluating cross-modal semantic understanding; as a result, sound source localization methods may perform poorly on cross-modal retrieval tasks. * We propose semantic alignment to improve cross-modal semantic understanding of sound source localization models. * We expand semantic alignment with multi-views and conceptually similar samples, which leads to state-of-the-art performance on both sound source localization and cross-modal retrieval. ## 2 Related work **Sound source localization.** Sound source localization in visual scenes has been investigated by exploiting correspondences between audio and visual modalities. The most widely used approach for sound source localization is cross-modal attention [50, 51, 57] with contrastive loss [13, 29, 42]. Later, the attention-based method is improved by intra-frame hard sample mining [8], iterative contrastive learning with pseudo labels [35], feature regularization [36], positive mining [52], negative-free learning [54] with the stop-gradient operation [12], or momentum encoders [38]. Some sound localization approaches exploit additional semantic labels [47, 33, 53] or object priors [39, 63]. Semantic labels are used to pretrain audio and vision encoders with a classification loss [33, 53] or to refine audio-visual feature alignment [47]. A more explicit way to refine the localization output is to use an object prior. EZ-VSL [39] proposes post-processing to combine the attention-based localization output with a pretrained visual feature activation map.
Similarly, Xuan et al. [63] propose to combine off-the-shelf object proposals with attention-based sound localization results. However, post-processing by an object prior may generate false positive outputs, as it is based solely on vision without audio-visual interaction. In addition to localization, there have been attempts to localize sounding objects and recover the separated sounds simultaneously, also known as the cocktail party problem [27, 37]. The separation of a sound mixture is achieved by predicting masks of the spectrogram guided by visual features [19, 1, 64, 23, 62, 21, 2, 65, 24, 58, 56]. Furthermore, a number of recent papers address audio-visual navigation toward a given sound source [7, 22]. **Self-supervised representation learning.** In a broader categorization, sound source localization belongs to self-supervised multimodal learning. Our work is also relevant to self-supervised audio-visual representation learning and other multimodal learning studies. Contrastive learning aims to learn robust representations from large-scale raw data without annotations. Recent representation learning approaches [60, 10, 28, 11] use instance discrimination by contrastive learning [13, 29, 42] as a pretext task, with notable advancements in visual recognition tasks. Recently, positive mining by nearest-neighbor search is used to learn representations of images [17, 18, 61], videos [26], neural recordings [6], and text-image pairs [34]. In this work, we expand the previous works by incorporating both multi-views and conceptually similar samples into audio-visual modalities for cross-modal feature alignment. A series of audio-visual representation learning studies have shown that audio and visual contents in a video are correlated; therefore, a visual representation can be learned by sound prediction [44], or an audio representation can be distilled from a visual representation [5, 55]. Later, a variety of joint audio-visual representation learning methods were proposed with the assumption that there is a semantic [3, 30, 41, 40] or temporal [14, 43, 32, 15] correspondence between them. However, simply learning sound source localization by audio-visual correspondence with instance discrimination ignores the semantic similarity of audio-visual contents among samples, introducing false negatives or positives. In order to mitigate this issue, clustering [30], sampling [41], weighting [40], and hard mining [32] have been proposed. Similarly, in this work, we go beyond instance discrimination by using multiple positive samples to enforce semantic understanding across modalities. ## 3 Method ### Preliminaries **Contrastive learning** learns representations by contrasting positive and negative pairs. Given an encoded query sample \(q\), its encoded positive pair \(k^{+}\), and negative pairs \(k_{i}\), the loss can be defined as: \[\mathcal{L}=-\mathrm{log}\frac{\mathrm{exp}(q\cdot k^{+}/\tau)}{\sum_{i} \mathrm{exp}(q\cdot k_{i}/\tau)} \tag{1}\] where \(\tau\) is the temperature parameter. **Cross-modal contrastive learning** extends contrastive learning across multiple modalities. In sound source localization, audio-visual correspondence is used to define positive and negative cross-modal pairs.
With an audio-visual dataset \(\mathcal{D}=\{(v_{i},a_{i}):i=1,...,N\}\) and its encoded features \(\mathbf{v}_{i}=f_{v}(v_{i})\) and \(\mathbf{a}_{i}=f_{a}(a_{i})\), the cross-modal contrastive learning loss is defined as: \[\mathcal{L}_{i}=-\mathrm{log}\frac{\mathrm{exp}(s(\mathbf{v}_{i},\mathbf{a}_{i})/\tau)}{\sum_{j}\mathrm{exp}(s(\mathbf{v}_{i},\mathbf{a}_{j})/\tau)} \tag{2}\] where \(s\) is a cross-modal similarity function. The cross-modal contrastive loss Eq. (2) can be extended to a symmetric form [48], as used in a few previous works [39, 38]. ### Cross-Modal Feature Alignment We consider both spatial localization and semantic feature alignment for sound source localization. To this end, we use two different similarity functions \(s_{L}\) and \(s_{A}\) for contrastive learning (Eq. (2)), \(s_{L}\) for localization and \(s_{A}\) for cross-modal feature alignment. Recent studies rely on audio-visual spatial correspondence maps to learn sound source localization by contrasting them. Given a spatial visual feature \(\mathbf{v}\in\mathbb{R}^{c\times h\times w}\) and an audio feature \(\mathbf{a}\in\mathbb{R}^{c}\), the audio-visual similarity with a correspondence map can be calculated as follows: \[s_{L}(\mathbf{v},\mathbf{a})=\sum_{xy\in M}\frac{1}{|M|}\frac{\mathbf{v}^{xy}\cdot\mathbf{a}}{\|\mathbf{v}^{xy}\|\|\mathbf{a}\|} \tag{3}\] where \(\mathbf{v}^{xy}\) is a feature vector at location \((x,y)\), and \(M\) is an optional binary mask when an annotation or pseudo-mask [8, 36] is available. Since we assume no supervision for sound source localization, we do not use any mask; therefore, \(M=\mathbf{1}\). The contrastive loss with the localization similarity \(s_{L}\) enforces location-dependent alignment, giving sparse but strong audio-visual correspondences, which enables localization. However, our empirical studies on cross-modal retrieval indicate that strong localization performance does not guarantee semantic understanding. To overcome the low semantic understanding in recent studies, we propose to add an instance-level contrastive loss. Instance-level contrasting encapsulates the whole context in a scene, enforcing better audio-visual semantic alignment. However, instance-level contrasting may smooth out the spatial discriminativeness learned by Eq. (3). Inspired by SimCLR [10], we adopt a projection layer to align audio-visual semantics in a projection space. The projection layer separates the latent spaces of localization and semantic alignment, thereby preventing the alignment loss from smoothing out the spatial discriminativeness. The similarity function for cross-modal feature alignment is defined as follows: \[s_{A}(\mathbf{v},\mathbf{a})=\frac{p_{v}(\mathsf{avg}(\mathbf{v}))\cdot p_{a}(\mathbf{a})}{\|p_{v}(\mathsf{avg}(\mathbf{v}))\|\|p_{a}(\mathbf{a})\|} \tag{4}\] where \(\mathsf{avg}(\cdot)\) is spatial average pooling, \(p_{v}\) is a projection layer for visual features, and \(p_{a}\) is a projection layer for audio features. Figure 2: **Our sound source localization framework.** Our model constructs multiple positive pairs with augmentation and nearest neighbor search (conceptually similar samples). Using these nine newly constructed pairs, our model employs spatial localization, \(s_{L}\), and semantic feature alignment, \(s_{A}\), for each pair to learn a better sound source localization ability. ### Expanding with Multiple Positive Samples Typically, contrastive learning contrasts one positive pair against multiple negative pairs, as shown in Eq. (1).
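Before expanding to multiple positives, a minimal PyTorch sketch (ours, not the authors' code) of the two similarity functions above: `s_L` averages the per-location cosine map, matching Eq. (3) with \(M=\mathbf{1}\), and `s_A` compares projected, average-pooled features, matching Eq. (4); the feature and projection dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

proj_v = torch.nn.Linear(512, 128)  # projection heads; sizes are illustrative
proj_a = torch.nn.Linear(512, 128)

def s_L(v: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    """Eq. (3): mean over locations of cos(v^{xy}, a). v: (B,c,h,w), a: (B,c)."""
    v = F.normalize(v, dim=1)                  # unit-norm per spatial location
    a = F.normalize(a, dim=1)
    corr = torch.einsum("bchw,bc->bhw", v, a)  # cosine map over (h, w)
    return corr.flatten(1).mean(dim=1)         # average with M = 1

def s_A(v: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    """Eq. (4): cosine similarity of projected global features."""
    zv = proj_v(v.mean(dim=(2, 3)))            # spatial average pooling + projection
    za = proj_a(a)
    return F.cosine_similarity(zv, za, dim=1)

# For the contrastive losses, these similarities are evaluated for all
# audio-visual pairs (v_i, a_j) in a batch, as in Eq. (2).
v, a = torch.randn(4, 512, 7, 7), torch.randn(4, 512)
print(s_L(v, a).shape, s_A(v, a).shape)
```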
In audio-visual learning, under the audio-visual correspondence assumption, an audio-image pair from the same clip is used as a positive pair, while negative pairs are sampled from different clips. However, single-instance discrimination may not be sufficient to achieve strong cross-modal alignment. In this section, we expand contrastive learning beyond single-instance discrimination by constructing positive sets and pairing them. To construct a positive set, we incorporate both hand-crafted positive and conceptual positive samples for each modality. Later, we adjust the contrastive learning to incorporate multiple positive pairs to enforce cross-modal alignment. Obtaining hand-crafted positive samples. Using randomly augmented samples as positive multi-view pairs is widely adopted in self-supervised representation learning, _i.e_., instance discrimination. Similarly, we extend a single anchor audio-image pair to multiple positive pairs by applying simple augmentations on image and audio samples separately. While we utilize common image transformations on images, we apply temporal shifting to audio. It is worth noting that the sound source localization task learns from the underlying semantic consistency rather than subtle time differences as in videos. Thus, a slight shift in the audio may not alter the contextual information significantly. As a result of hand-crafted multi-view positive pair generation, we obtain additional \(\mathbf{v}^{aug}\) and \(\mathbf{a}^{aug}\) samples. Obtaining conceptual positive samples. Apart from manually created augmented views, we additionally expand our positive set with conceptually similar samples. The sampling strategy with nearest neighbor search can be performed in various ways, such as on-the-fly sampling [17, 49, 61, 34], sampling by pretrained encoders [52], or guided sampling [26, 18] using another modality. For selecting our conceptually similar samples, we utilize pretrained encoders. Note that pretrained encoders trained with either supervised or self-supervised learning are effective in positive sample mining, as shown in the experiment section. By employing readily available image and audio encoders, we use \(k\)-nearest neighbor search to sample semantically similar samples in both modalities. In particular, given a pair of image and audio, we compute the cosine similarity with all other samples and choose the top-\(k\) most similar samples among the training set for each modality. From the set of \(k\) samples, we randomly select one sample to obtain conceptually similar samples for each modality, \(\mathbf{v}^{conc}\) and \(\mathbf{a}^{conc}\) (a sampling sketch is given below). By utilizing the conceptually similar samples as positive samples, our model expands semantic understanding. Pair Construction. Once we obtain the conceptual and hand-crafted positive samples for each modality, we proceed to create nine distinct audio-visual pairs by pairing \(\mathbf{V}=\{\mathbf{v},\mathbf{v}^{aug},\mathbf{v}^{conc}\}\) and \(\mathbf{A}=\{\mathbf{a},\mathbf{a}^{aug},\mathbf{a}^{conc}\}\). This is done to ensure semantic alignment and consistency between them through contrastive learning. The negative pairs are randomly drawn from the remaining samples in the training set. It is worth noting that some of these pairs are a combination of hand-crafted and conceptually similar samples, which further enhances the feature alignment of our model during training.
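A minimal sketch of the conceptual-positive sampling described above, assuming embeddings precomputed with a frozen pretrained encoder; the feature size and \(k\) are illustrative.

```python
import torch
import torch.nn.functional as F

def sample_conceptual_positive(feats: torch.Tensor, anchor: int, k: int = 1000) -> int:
    """Pick one of the k nearest neighbors (cosine) of feats[anchor] at random."""
    z = F.normalize(feats, dim=1)   # (N, d) unit-norm embeddings
    sims = z @ z[anchor]            # cosine similarity to every sample
    sims[anchor] = -1.0             # exclude the anchor itself
    topk = sims.topk(k).indices
    return topk[torch.randint(len(topk), (1,))].item()

feats = torch.randn(5000, 512)      # stand-in for precomputed embeddings
print(sample_conceptual_positive(feats, anchor=0, k=100))
```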
### Training

Our loss formulation incorporates both the localization and instance-level similarity functions, applied to the multiple positive pairs constructed by augmentation and conceptually similar sample search. The final loss term is defined as follows: \[\begin{split}\mathcal{L}_{i}=-\sum_{\mathbf{v}_{i}\in\mathbf{V}}\sum_{\mathbf{a}_{i}\in\mathbf{A}}\Bigg{[}\log\frac{\exp(s_{L}(\mathbf{v}_{i},\mathbf{a}_{i})/\tau)}{\sum_{j}\exp(s_{L}(\mathbf{v}_{i},\mathbf{a}_{j})/\tau)}\\ +\log\frac{\exp(s_{A}(\mathbf{v}_{i},\mathbf{a}_{i})/\tau)}{\sum_{j}\exp(s_{A}(\mathbf{v}_{i},\mathbf{a}_{j})/\tau)}\Bigg{]}\end{split} \tag{5}\] where \(\mathbf{V}\) and \(\mathbf{A}\) indicate the positive sample sets.
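A sketch of Eq. (5) follows, assuming the batched similarity functions from the earlier snippet (with the projection layers already bound into `sim_A`); the diagonal of each \(B\times B\) similarity matrix holds the positive pairs. As before, this is an illustrative sketch, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce(sim, tau=0.07):
    """InfoNCE over a (B, B) similarity matrix; the diagonal entries are
    the positives and the j-sum in Eq. (5) runs over the batch."""
    logits = sim / tau
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

def total_loss(V, A, sim_L, sim_A, tau=0.07):
    """Eq. (5): accumulate both loss terms over all 3 x 3 = 9 positive
    combinations of V = {v, v_aug, v_conc} and A = {a, a_aug, a_conc}."""
    loss = 0.0
    for v in V:
        for a in A:
            loss = loss + info_nce(sim_L(v, a), tau) \
                        + info_nce(sim_A(v, a), tau)
    return loss
```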
## 4 Experiments

Our proposed method for sound source localization is validated through experiments conducted on VGGSound [9] and SoundNet-Flickr [5]. First, we conduct a quantitative analysis to evaluate localization accuracy, cross-modal retrieval, and the impact of the various components of our model. Then, we visualize our sound source localization results across different categories of sounds.

### Experiment Setup

**Datasets.** Our method is trained on VGGSound [9] and SoundNet-Flickr-144K [50, 51]. VGGSound is an audio-visual dataset containing \(\sim\)200K videos. The SoundNet-Flickr-144K set is a subset of SoundNet-Flickr [5]. After training, we test sound localization performance on the VGG-SS [8] and SoundNet-Flickr-Test [50] datasets for the main experiments. These evaluation sets provide bounding box annotations of sound sources for \(\sim\)5K and 250 samples, respectively. Moreover, we employ the AVSBench [66] and Extended VGGSound/SoundNet-Flickr [38] datasets for additional evaluations. The AVSBench dataset provides binary segmentation maps marking the audio-visually correspondent pixels for roughly 5K five-second videos belonging to 23 categories. Lastly, the Extended VGGSound/SoundNet-Flickr dataset, proposed by [38], is used to evaluate understanding of non-visible sound sources.

**Implementation details.** We use two ResNet18 models for audio and vision encoding. Unlike prior approaches, we do not fine-tune (or use) a visual encoder initialized from ImageNet pretrained weights. Instead, we train both the audio and vision encoders from scratch. We preprocess images and audio following previous works [52, 8]. To create multiple pairs, we utilize both NN search and generic augmentation approaches. For NN search, we experiment with two different setups to retrieve the \(k\) conceptually similar samples: (1) for supervisedly pretrained encoder experiments, we employ ResNet and VGGSound models pretrained on ImageNet and VGGSound, respectively; (2) for self-supervisedly pretrained encoder experiments, we utilize the CLIP [48] vision encoder and the Wav2CLIP [59] audio encoder. We use \(k\)=1000 for the experiments. For image augmentations, we follow the augmentations used in SimCLR [10]. For audio, we randomly select time-window shifts along the time axis. The model is trained for 50 epochs with the Adam optimizer and a learning rate of 0.0001. \(\tau\) is set to 0.07 in contrastive learning.

### Quantitative Results

**Comparison with strong baselines.** In this section, we conduct a comparative analysis of our sound source localization method against existing approaches. We carry out our evaluations in two settings, following previous approaches. Firstly, we train our model on VGGSound-144K and evaluate it on the VGG-SS and SoundNet-Flickr test sets. Secondly, we train our model on SoundNet-Flickr-144K and evaluate it on the SoundNet-Flickr test set. It is important to note that all the compared models are trained using the same amount of data. The AVEL [57], AVObject [2], and LCBM [53] models rely on video input and, as such, cannot be evaluated on the SoundNet-Flickr dataset, which contains static image and audio pairs.

We present our results in Table 1 and Table 2. Our proposed model achieves higher performance than prior approaches on both test sets. Specifically, it yields a +2.15\% cIoU and +0.6\% AUC improvement on VGG-SS, as well as a +3.7\% cIoU improvement on SoundNet-Flickr, compared to the state-of-the-art methods that use a pretrained vision encoder. It is worth highlighting that, unlike the majority of previous works, our proposed model does not utilize a vision encoder pretrained on ImageNet in the sound source localization backbone. This is because, as discussed in Mo _et al_. [38], using supervisedly pretrained vision encoders turns sound source localization into a weakly supervised problem. However, even without using a pretrained vision encoder, our method achieves state-of-the-art performance in both experiments presented in Table 1 and Table 2. We demonstrate the performance of our model with pretrained models learned through supervised learning (NN Search w/ Supervised Pre. Encoders) and with models pretrained through self-supervised learning (NN Search w/ Self-Supervised Pre. Encoders) in the NN search module. As the results indicate, using self-supervised pretrained encoders in the NN search module performs on par with using supervised pretrained encoders.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & & \multicolumn{2}{c}{**VGG-SS**} & \multicolumn{2}{c}{**SoundNet-Flickr**} \\
**Method** & **Pre. Vision** & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) \\
\hline
Attention [50]\({}_{\text{CVPR18}}\) & ✓ & 18.50 & 30.20 & 66.00 & 55.80 \\
Concrete [47]\({}_{\text{ICCV20}}\) & ✓ & 29.10 & 34.80 & - & - \\
LCBM [53]\({}_{\text{CVPR21}}\) & ✓ & 32.20 & 36.60 & - & - \\
LVS [5]\({}_{\text{CVPR21}}\) & ✗ & 30.30 & 36.40 & 72.40 & 57.80 \\
LVS [5]\(\dagger\)\({}_{\text{CVPR21}}\) & ✗ & 34.40 & 38.20 & 71.90 & 58.20 \\
HardPos [52]\({}_{\text{ICASSP22}}\) & ✗ & 34.60 & 38.00 & 76.80 & 59.20 \\
SSPL (w/o PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 27.00 & 34.70 & 73.90 & 60.20 \\
SSPL (w/ PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 33.90 & 38.00 & 76.70 & 60.50 \\
EZ-VSL (w/o OGL) [39]\({}_{\text{ECCV22}}\) & ✓ & 35.96 & 38.20 & 78.31 & 61.74 \\
SSL-TIE [36] & ✓ & 38.63 & 39.75 & 96.50 & 61.20 \\
SLAVC (w/o OGL) [38]\({}_{\text{NeurIPS22}}\) & ✓ & 37.79 & 39.40 & **83.60** & - \\
\hline
**Ours** & & & & & \\
w/ NN Search w/ Supervised Pre. Encoders & ✗ & **39.94** & **40.02** & **29.60** & **63.44** \\
w/ NN Search w/ Self-Supervised Pre. Encoders & ✗ & 39.20 & 39.20 & 79.20 & 63.00 \\
\hline
_w/ OGL:_ & & & & & \\
EZ-VSL (w/ OGL) [39]\({}_{\text{ECCV22}}\) & ✓ & 38.85 & 39.54 & 83.94 & 63.60 \\
SLAVC (w/ OGL) [38]\({}_{\text{NeurIPS22}}\) & ✓ & 39.80 & - & **86.00** & - \\
\hline
**Ours** & & & & & \\
w/ NN Search w/ Supervised Pre. Encoders & ✗ & **42.64** & **41.48** & 82.40 & **64.40** \\
w/ NN Search w/ Self-Supervised Pre. Encoders & ✗ & 42.47 & 41.42 & 82.80 & 64.48 \\
\hline
_w/ Object Flow:_ & & & & & \\
HearTheFlow [20]\({}_{\text{WACV23}}\) & ✓ & 39.40 & 40.00 & 84.80 & 64.00 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Quantitative results on the VGG-SS and SoundNet-Flickr test sets.** All models are trained with 144K samples from VGGSound and tested on VGG-SS and SoundNet-Flickr. \(\dagger\) is the result of the model released on the official project page. SLAVC [38] does not provide AUC scores.
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
**Method** & **Pre. Vision** & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) \\
\hline
Attention [50]\({}_{\text{CVPR18}}\) & ✓ & 18.50 & 30.20 & 66.00 & 55.80 \\
Concrete [47]\({}_{\text{ICCV20}}\) & ✓ & 29.10 & 34.80 & - & - \\
LCBM [53]\({}_{\text{CVPR21}}\) & ✓ & 32.20 & 36.60 & - & - \\
LVS [5]\({}_{\text{CVPR21}}\) & ✗ & 30.30 & 36.40 & 72.40 & 57.80 \\
LVS [5]\(\dagger\)\({}_{\text{CVPR21}}\) & ✗ & 34.40 & 38.20 & 71.90 & 58.20 \\
HardPos [52]\({}_{\text{ICASSP22}}\) & ✗ & 34.60 & 38.70 & 76.80 & 59.20 \\
SSPL (w/o PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 27.00 & 34.80 & 73.90 & 60.20 \\
SSPL (w/ PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 33.90 & 38.00 & 76.70 & 60.50 \\
EZ-VSL (w/o OGL) [39]\({}_{\text{ECCV22}}\) & ✓ & 35.96 & 38.20 & 78.31 & 61.74 \\
SSL-TIE [36] & ✓ & 38.63 & 39.75 & 96.50 & 61.20 \\
SLAVC (w/o OGL) [38]\({}_{\text{NeurIPS22}}\) & ✓ & 37.79 & 39.40 & **83.60** & - \\
\hline
**Ours** & & & & & \\
w/ NN Search w/ Supervised Pre. Encoders & ✗ & **39.94** & **40.02** & **29.60** & **63.44** \\
w/ NN Search w/ Self-Supervised Pre. Encoders & ✗ & 39.20 & 39.20 & 79.20 & 63.00 \\
\hline
_w/ Object Flow:_ & & & & & \\
HearTheFlow [20]\({}_{\text{WACV23}}\) & ✓ & 38.65 & 39.50 & & \\
\hline \hline
\end{tabular}
\end{table}
Table 2: **Quantitative results on the SoundNet-Flickr test set.** All models are trained and tested on the SoundNet-Flickr 144K dataset. \(\dagger\) is the result of the model from the official project page. SLAVC [38] does not provide results with SoundNet-Flickr 144K.

This shows that our model does not depend on supervised pretrained encoders for the NN search module and can utilize any type of pretrained encoder feature for nearest neighbor search. Note that these pretrained encoders are not used in the backbone networks of the sound source localization module but only in the NN search module, as illustrated in Figure 2. We also discuss the methods employed by previous studies, such as SSPL [54], which utilizes a sub-module called PCM to reduce the impact of background noise; HTF [20], which utilizes optical flow; and EZ-VSL [39], which refines its initial audio-visual localization outcomes through object guidance obtained from an ImageNet pretrained visual encoder. Our model, on the other hand, does not require any task-specific modules or operations in any of its variations to achieve state-of-the-art (SOTA) results. This suggests that using additional semantic and multi-view correspondence, as well as feature alignment, provides more varied and robust supervision for better-aligned audio and visual features than task-specific approaches do. The quantitative results presented in Table 1 and Table 2 also showcase the performance of previous methods that utilize object guidance to refine their final sound source localizations.
Our model outperforms all previous methods that employ object guidance on the VGG-SS test set and achieves comparable results on the SoundNet-Flickr test set, even though our model _does not use object-guided localization (OGL)_. Additionally, we acknowledge that adding OGL to our audio-visual localization results in an improvement on the VGG-SS test set, while degrading performance on the SoundNet-Flickr test set. In contrast, prior methods see modest improvements when utilizing OGL. This can be explained by the fact that our model already localizes the sounding objects accurately, and object guidance can interfere with localization results by introducing visual regions that are not sounding (refer to Section 4.4 for visual results). Unlike prior methods, we do not use OGL in our architecture for the remainder of this paper, unless our method is being directly compared with OGL-based methods. Finally, in comparison to HearTheFlow, which utilizes an additional optical flow modality, our method outperforms it on the VGG-SS test set and achieves slightly lower performance on the SoundNet-Flickr test set without utilizing any additional modalities, relying instead on better audio-visual correspondence and alignment.

**Open set audio-visual localization.** Prior works draw differing conclusions from these open set experiments, in which models are tested on sound categories that are heard or unheard during training. While some conclude that their models have strong generalization ability because their performance on unheard categories is higher than on heard categories [39, 38, 46], other works that do not observe the same trend argue that this is expected since their models are dealing with unseen categories [36]. However, our results show that these conclusions are highly dependent on the chosen train/test splits. Our model performs better than existing works on both splits, but there is no uniform trend between the two. While our method performs better on unheard categories in the splits of [8, 39, 38, 46], it performs worse on unheard categories in the split of [36]. Therefore, we conclude that the observed trends are highly dependent on the randomly selected train/test splits.

**AVSBench [66].** To demonstrate the precise sound localization ability of our model, we conduct experiments on the AVSBench S4 dataset. The dataset's objective is to detect audio-visual correspondence and correlation at the pixel level. For a fair comparison, we use some of the self-supervised sound source localization methods mentioned earlier. All models are trained on VGGSound-144K and directly assessed on the AVSBench S4 dataset without any further fine-tuning (zero-shot setting). Our results, presented in Table 5, indicate that our method achieves the highest performance, as in the previous experiments.

**Retrieval.** We evaluate sound localization models on the VGG-SS dataset for cross-modal retrieval. As shown in Table 6, our method clearly outperforms other state-of-the-art methods. One interesting observation is that EZ-VSL [39] performs notably better than SLAVC [38] on cross-modal retrieval, while SLAVC performs better on sound source localization in Table 1. This shows that, under the current benchmark evaluations, better sound localization performance does not guarantee better audio-visual semantic understanding; sound source localization methods therefore need to be additionally evaluated on cross-modal understanding tasks. Another observation is that the performance gap between our method and the strongest competitor SSL-TIE [36] is notably larger on cross-modal retrieval than on sound source localization.
This is due to the strong cross-modal feature alignment of our method, which is overlooked by the sound source localization benchmarks.

**Extended Flickr and VGG-SS datasets.** The prior study [38] points out that the current sound source localization benchmarks overlook false positive detection. This is because the evaluation samples always contain at least one sounding object in the scene and thus cannot capture false positive outputs, _e.g_., silent objects or off-screen sounds. To analyze false positive detection, Mo and Morgado [38] extended the benchmarks with non-audible, non-visible, and mismatched audio-visual samples. The expectation is that a sound source localization model should not localize any objects when the audio-visual semantics do not match. The experiments on the extended datasets in Table 7 show that our method performs favorably against state-of-the-art competitors. Our method performs better than the competing methods in false positive detection, measured by \(\mathbf{AP}\) and \(\mathbf{max}\)-\(\mathbf{F1}\), while SLAVC [38] achieves better localization performance on Extended Flickr-SoundNet. As both false positive detection and cross-modal retrieval require cross-modal interaction, our method shows strong performance on both tasks.

### Ablation Results

We conduct a series of experiments to verify our design choices and provide further analysis. To save computational time and resources, we primarily perform ablation studies by training our model on VGGSound-144K with the NN Search w/ Supervised Pre. Encoders setup and evaluating it on VGG-SS. Results are in Table 8.

**Impact of Semantic and Multi-View Invariance.** To understand the impact of each type of invariance (consistency), we analyze the performance of our model with the different invariance methodologies in Table 8. As the results of (C _vs._ E) and (D _vs._ F) reveal, using semantically similar samples (semantic invariance) produces better performance (+0.45\% and +0.5\% cIoU, respectively) than augmented multi-view invariance. Moreover, as the results of (A _vs._ C) and (A _vs._ E) show, the combination of these two different types of invariance complements each cue and further enhances the model's performance. Using pair combinations of these two different types of consistency provides additional supervision, invariance, and alignment, leading to a more robust representation space and improved sound localization performance.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
 & & \multicolumn{3}{c}{**Extended Flickr-SoundNet**} & \multicolumn{3}{c}{**Extended VGG-SS**} \\
\cline{3-8}
**Method** & **Pre. Vision** & \(\mathbf{AP}\) & \(\mathbf{max}\)-\(\mathbf{F1}\) & \(\mathbf{LocAcc}\) & \(\mathbf{AP}\) & \(\mathbf{max}\)-\(\mathbf{F1}\) & \(\mathbf{LocAcc}\) \\
\hline \hline
Cross-modal [14] & ✗ & 0.00 & 83.50 & 47.20 & 0.00 & 19.00 & 21.93 \\
\hline
\(\mathbf{S}\) [39] & ✗ & 0.00 & 17.00 & 19.60 & 8.15 & 6.90 & 10.43 \\
Attention [38] & ✗ & 15.98 & 24.50 & 54.16 & 6.50 & 13.30 & 14.04 \\
Text [39] & ✗ & 25.56 & 44.00 & 52.80 & 11.53 & 25.30 & 22.63 \\
DSDS [39] & ✗ & 38.22 & 40.40 & 72.91 & 16.58 & 25.60 & 26.27 \\
DSDS [39] & ✗ & 40.20 & 57.70 & 27.78 & 17.85 & 39.00 & 36.58 \\
DSDS [39] & ✗ & 46.30 & 54.60 & 64.00 & 24.55 & 30.90 & 31.58 \\
DSDS [39] & ✗ & 51.30 & 51.30 & 51.80 & 52.98 & 69.00 & 37.79 \\
\hline
**Ours** & & & & & & & \\
w/ NN Search w/ Supervised Pre. Encoders & ✗ & **46.40** & **66.90** & **72.60** & **34.73** & **40.70** & **30.94** \\
w/ NN Search w/ Self-Supervised Pre. Encoders & ✗ & **42.72** & **60.10** & **79.20** & **31.02** & **40.01** & **79.20** \\
\hline \hline
\end{tabular}
\end{table}
Table 7: **Quantitative results on the Extended VGG-SS and Extended SoundNet-Flickr sets.** All models are trained with 144K samples from VGGSound. The results of the prior approaches are obtained from [38].
\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
 & **Semantic** & **Multi-View** & **Feature Alignment** & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) \\
\hline
(A) & ✓ & ✓ & ✓ & **39.94** & **40.02** \\
(B) & ✓ & ✓ & ✗ & 39.10 & 39.44 \\
(C) & ✓ & ✗ & ✓ & 38.75 & 39.34 \\
(D) & ✓ & ✗ & ✗ & 38.24 & 38.90 \\
(E) & ✗ & ✓ & ✓ & 38.30 & 39.38 \\
(F) & ✗ & ✓ & ✗ & 37.72 & 39.19 \\
(G) & ✗ & ✗ & ✓ & 34.93 & 37.94 \\
(H) & ✗ & ✗ & ✗ & 34.22 & 37.67 \\
\hline \hline
\end{tabular}
\end{table}
Table 8: **Ablation studies on our proposed method to see the impact of each main component.**

**Impact of Feature Alignment.** We perform controlled experiments to verify the effect of the feature alignment strategy; the results are presented in Table 8. Comparing the performance of the proposed model with and without feature alignment (A _vs._ B) highlights the importance of this strategy in boosting performance. Further, examining the results of experiments (C _vs._ D) and (E _vs._ F) reveals that feature alignment provides additional gains irrespective of the consistency type. These findings indicate that global feature-based alignment helps the optimization of audio-visual correspondence.

**Impact of \(k\) in conceptually similar sample selection.** Selecting an appropriate \(k\) value for sampling nearest neighbors is crucial. If this value is set too high, it may result in noisy samples that disrupt the learning phase. Conversely, if the value is set too low, only samples very similar to the anchor will be provided, which limits semantic invariance. Nevertheless, compared to Table 8 (E), we observe a performance gain throughout the range of \(k\) used in the ablation study. Table 9 shows an ablative evaluation of the effect of the \(k\) value used to select neighborhood samples. The results indicate that \(k\)=1000 is an optimal choice.
This choice of \(k\) can be explained by the fact that it strikes a balance between semantic similarity and sufficient diversity.

\begin{table}
\begin{tabular}{c|c c c c c}
\hline \hline
\(k\) **in \(k\)-NN** & **10** & **30** & **100** & **500** & **1000** \\
\hline \hline
cIoU \(\uparrow\) & 38.80 & 38.82 & 39.46 & 39.90 & **39.94** \\
AUC \(\uparrow\) & 39.51 & 39.67 & 39.93 & 40.00 & **40.02** \\
\hline \hline
\end{tabular}
\end{table}
Table 9: **Varying \(k\) in conceptually similar sample selection.**

### Qualitative Results

In this section, we visualize and compare our sound localization results with recent prior works on the standard benchmarks, namely VGG-SS and SoundNet-Flickr. The visualized samples in Figure 3 show that the localized regions of the proposed method are more compact and align more accurately with the sounding objects than those of the other methods. For instance, the small musical instrument in the top-right column is localized accurately compared to the recent methods.

Figure 3: **Sound Localization Results on VGG-SS (top) and SoundNet-Flickr (bottom).**

We also compare our localization results with and without object-guided localization (OGL). As shown in Figure 4, OGL deteriorates our sound localization outputs. OGL captures objectness in a scene and thereby tends to attend to any distinctive object, regardless of whether it is the sound source. Therefore, OGL can be helpful when localization fails completely, owing to the objectness bias in the benchmarks, but it is harmful when the localization is accurate, which is the case for the examples shown. This result is consistent with the quantitative result in Table 2, showing that our method with OGL performs worse.

Figure 4: **OGL degrades our sound localization results on SoundNet-Flickr.**

Throughout the paper, we discuss the importance of cross-modal semantic understanding. We demonstrate the cross-modal interactivity of our method in Figure 5. Genuine sound source localization should be able to localize objects that are correlated with the sound. To visualize cross-modal interaction, we synthetically pair the same image with the sounds of different objects that are visible in the scene. The examples demonstrate that the proposed method can localize different objects depending on the context of the sound, while the competing method cannot.

## 5 Conclusion

In this work, we investigate the cross-modal semantic understanding that has been overlooked in sound source localization studies. We observe that higher sound source localization performance on the current benchmarks does not necessarily imply higher performance in cross-modal retrieval, despite the close relationship between the two abilities. To enforce a strong understanding of audio-visual semantic matching while maintaining localization capability, we propose semantic alignment with multiple views of audio-visual pairs in a simple yet effective way. The ablation study shows that strong semantic alignment is achieved when both the semantic alignment loss and the enriched positive pairs are used. We extensively evaluate our method on sound source localization benchmarks, including cross-dataset and open-set settings. Moreover, our analyses of cross-modal retrieval and false positive detection verify that the proposed method has a strong capability for cross-modal interaction. Our study suggests that sound localization methods should be evaluated not only on localization benchmarks but also on cross-modal understanding tasks.
## 6 Acknowledgment This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00212845, Multimodal Speech Processing for Human-Computer Interaction). H. Pfister and J. Kim were partially supported by NIH grant R01HD104969. T.-H. Oh was partially supported by IITP grant funded by the Korea government (MSIT) (No. 2021-0-02068, Artificial Intelligence Innovation Hub; No. 2022-0-00290, Visual Intelligence for Space-Time Understanding and Generation based on Multi-layered Visual Common Sense).
Humans can perceive the location of a sound source within a visual scene, an ability referred to as sound source localization. Recent studies on learning-based sound source localization have approached the problem mainly as a localization task. However, prior techniques and existing benchmarks do not account for a more important aspect of the problem: cross-modal semantic understanding. Cross-modal semantic understanding plays an essential role in handling semantically mismatched audio-visual events, e.g., silent objects or off-screen sounds. To take cross-modal semantic understanding into account for sound source localization, we propose a cross-modal alignment task jointly with sound source localization. This enables better learning of the interaction between the audio and visual modalities and achieves high localization performance. Moreover, our method performs well on both sound source localization and cross-modal retrieval.
2309.07258
On the integrability of Lie algebroids by diffeological spaces
Lie's third theorem does not hold for Lie groupoids and Lie algebroids. In this article, we show that Lie's third theorem is valid within a specific class of diffeological groupoids that we call 'singular Lie groupoids.' To achieve this, we introduce a subcategory of diffeological spaces which we call 'quasi-etale.' Singular Lie groupoids are precisely the groupoid objects within this category, where the unit space is a manifold. Our approach involves the construction of a functor that maps singular Lie groupoids to Lie algebroids, extending the classical functor from Lie groupoids to Lie algebroids. We prove that the Ševera-Weinstein groupoid of an algebroid is an example of a singular Lie groupoid, thereby establishing Lie's third theorem in this context.
Joel Villatoro
2023-09-13T18:48:31
http://arxiv.org/abs/2309.07258v3
# On the integrability of Lie algebroids by diffeological spaces

###### Abstract.

Lie's third theorem does not hold for Lie groupoids and Lie algebroids. In this article, we show that Lie's third theorem is valid within a specific class of diffeological groupoids that we call 'singular Lie groupoids.' To achieve this, we introduce a subcategory of diffeological spaces which we call 'quasi-etale.' Singular Lie groupoids are precisely the groupoid objects within this category, where the unit space is a manifold. Our approach involves the construction of a functor that maps singular Lie groupoids to Lie algebroids, extending the classical functor from Lie groupoids to Lie algebroids. We prove that the Severa-Weinstein groupoid of an algebroid is an example of a singular Lie groupoid, thereby establishing Lie's third theorem in this context.

###### Contents

* 1 Introduction
* 1.1 Main question
* 1.2 Our solution
* 1.3 Methods and outline
* 1.4 Related work
* 2 Diffeology
* 2.1 Diffeological structures
* 2.2 Smooth maps
* 2.3 Standard constructions
* 3 Quasi-etale diffeological spaces
* 3.1 Quasi-etale maps and spaces
* 3.2 Properties of Quasi-etale maps
* 3.3 Fiber products
* 3.4 Fiber products over manifolds
* 4 Quasi-etale groupoids
* 4.1 Groupoid objects in \(\mathbf{QUED}\)
* 4.2 Local groupoids
* 5 Differentiation
* 5.1 Representing singular Lie groupoids
* 5.2 Construction of the Lie functor
* 5.3 Example - singular Lie groupoids with integrable algebroids
* 6 Proof of Theorem 5.2 and Theorem 5.3
* 6.1 Lifting the division map
* 6.2 Division structures and comparing maps
* 6.3 Local Existence of charts
* 6.4 Uniqueness of charts
* 6.5 Proof of Theorem 5.2
* 6.6 Proving Theorem 5.3
* 7 The classical Severa-Weinstein groupoid
* 7.1 The fundamental groupoid construction
* 7.2 The Severa-Weinstein groupoid is a singular Lie groupoid

## 1. Introduction

Lie theory provides us with a differentiation procedure that takes global symmetries of geometric objects (Lie groupoids) as input and outputs infinitesimal symmetry algebras (Lie algebroids). This differentiation procedure takes the form of a functor: \[\mathbf{Lie}\colon\mathbf{LieGrpd}\to\mathbf{LieAlg}\] Given infinitesimal data, such as a morphism or object in \(\mathbf{LieAlg}\), the "integration problem" refers to constructing corresponding global data in \(\mathbf{LieGrpd}\). Lie's second and third theorems are concerned with the existence of integrations of morphisms and objects, respectively; both hold in the classical case of finite dimensional Lie groups and Lie algebras. If one replaces the classical condition of "simply connected" with "source simply-connected", Lie's second theorem holds in the more general setting of Lie groupoids. On the other hand, Lie's third theorem is known to be false. In other words, there exist Lie algebroids which are not integrated by any Lie groupoid. The first example of a non-integrable algebroid is due to Rui Almeida and Pierre Molino [1]. To any algebroid \(A\) it is possible to functorially associate a kind of fundamental groupoid \(\Pi_{1}(A)\) which is commonly called the "Weinstein groupoid" or occasionally the "Severa-Weinstein" groupoid. It turns out that a Lie algebroid admits an integration if and only if \(\Pi_{1}(A)\) is a smooth manifold. Furthermore, if \(\Pi_{1}(A)\) is smooth, it is the "universal" source simply-connected integration.
The earliest version of this fundamental groupoid construction is by Cattaneo and Felder [13], where they describe how to construct a symplectic groupoid integrating a Poisson manifold via Hamiltonian reduction on the space of cotangent paths. In 2000, a few months after the article of Cattaneo and Felder appeared on the arXiv, Pavol Severa gave a talk in which he proposed a version of this construction where the Hamiltonian action is reinterpreted in terms of homotopies in the category of algebroids [14]. An advantage of this perspective is that it makes clear that Cattaneo and Felder's approach can easily be generalized to work for any Lie algebroid. Marius Crainic and Rui Loja Fernandes [13] were able to study the geometry of the space of algebroid paths in detail and consequently find a precise criterion for the smoothness of the Severa-Weinstein groupoid. Crainic and Fernandes credit Alan Weinstein for suggesting a path-based approach to them, and so they introduced the term "Weinstein groupoid". Even when the Severa-Weinstein groupoid is not smooth, it is clear that it is far from being an arbitrary topological groupoid. For example, in the work of Hsian-Hua Tseng and Chenchang Zhu [14] it is observed that it is the topological coarse moduli space of a groupoid object in etale geometric stacks over manifolds. In other words, it is the orbit space of an etale Lie groupoid equipped with a "stacky" product.

### Main question

Our aim in this article will be to provide a foundation for a version of Lie theory that includes the kinds of groupoids associated to non-integrable algebroids. We intend to do this in a way that will permit us to extend notions of Morita equivalence, symplectic groupoids and differentiation to this larger context while preserving Lie's second theorem. Therefore, we aim to answer the following question:

**Question**.: Does there exist a category \(\mathbf{C}\) with the following properties:

* The category of smooth manifolds is a full subcategory of \(\mathbf{C}\).
* If \(\mathbf{SingLieGrpd}\) is the category of groupoid objects \(\mathcal{G}\rightrightarrows M\) in \(\mathbf{C}\) where \(M\) is a smooth manifold, there exists a functor: \[\widehat{\mathbf{Lie}}\colon\mathbf{SingLieGrpd}\rightarrow\mathbf{LieAlg}\]
* \(\mathbf{LieGrpd}\) is a full subcategory of \(\mathbf{SingLieGrpd}\) and we have that: \[\widehat{\mathbf{Lie}}|_{\mathbf{LieGrpd}}=\mathbf{Lie}\]
* There is a notion of "simply connected object" in \(\mathbf{C}\) which corresponds to being simply connected on the subcategory of manifolds. And, using this notion of simply connectedness, Lie's second and third theorems hold for \(\widehat{\mathbf{Lie}}\).

Answering this question is not so straightforward. If we take \(\mathbf{C}\) to be the (2,1)-category of etale geometric stacks, we get all but the last bullet point [10][11]. Another natural choice could be to take \(\mathbf{C}\) to be the category of diffeological spaces. However, this category is so large that there appears to be little hope of defining a Lie functor in this context. Another complication with answering this question is that the notion of groupoid object referenced in the statement of the question is somewhat subtle. For example, a Lie groupoid is not just a groupoid object in manifolds: it is a groupoid object in manifolds where the source (or, equivalently, target) map is a submersion. Therefore, when proposing such a category \(\mathbf{C}\) we also need to propose a notion of "submersion" to go along with it.
The problem, as stated above, is a sort of "Goldilocks problem." If we choose the category \(\mathbf{C}\) to be too large, we have little hope of defining the Lie functor. If we choose \(\mathbf{C}\) to be too small, it may be the case that not every Lie algebroid can be integrated by a groupoid object in \(\mathbf{C}\).

### Our solution

In this article, we will give a partial answer to this question. We introduce the category of "quasi-etale diffeological spaces" (**QUED** for short) and submit that it is a solution to the problem stated above. The category **QUED** is, in many ways, very natural and generalizes existing notions such as orbifolds, quasifolds and similar types of diffeological structures. We will show that the category **QUED** indeed satisfies the first three bullet points of our question above. In order to do this we will also need to explain what a "submersion" is in this context, and we will construct the Lie functor \(\widehat{\mathbf{Lie}}\colon\mathbf{SingLieGrpd}\rightarrow\mathbf{LieAlg}\). The main results of our paper can be summarized in the following two theorems:

**Theorem 1.1**.: _Let \(\mathbf{SingLieGrpd}\) be the category of \(\mathbf{QUED}\)-groupoids where the space of objects is a smooth manifold. There exists a functor:_ \[\widehat{\mathbf{Lie}}\colon\mathbf{SingLieGrpd}\rightarrow\mathbf{LieAlg}\] _with the property that \(\widehat{\mathbf{Lie}}|_{\mathbf{LieGrpd}}=\mathbf{Lie}\)._

**Theorem 1.2**.: _Given a Lie algebroid \(A\to M\), let \(\Pi_{1}(A)\rightrightarrows M\) be the Severa-Weinstein groupoid of \(A\) and consider it as a diffeological groupoid. Then \(\Pi_{1}(A)\) is an element of \(\mathbf{SingLieGrpd}\) and \(\widehat{\mathbf{Lie}}(\Pi_{1}(A))\) is canonically isomorphic to \(A\)._

What does not appear in this article is a proof of Lie's second theorem and the notion of "simply connected" for **QUED**. There is a diffeological version of simply-connectedness due to Patrick Iglesias-Zemmour [11]. Indeed, it is not too difficult to show that the source fibers of the Severa-Weinstein groupoid are diffeologically simply-connected. However, a full proof of Lie's second theorem will require additional technical development which we intend to address in a later article.

### Methods and outline

The main technical tools used in this article are the theory of diffeological spaces and structures as well as the theory of local groupoids. The bulk of the article is dedicated to developing the technology needed to show that the Lie functor is well-defined. However, the basic idea behind our definition of the Lie functor for **SingLieGrpd** is not fundamentally so complex: if \(\mathcal{G}\rightrightarrows M\) is an element of **SingLieGrpd** and \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a "local covering" from a _local Lie_ groupoid \(\widetilde{\mathcal{G}}\) to \(\mathcal{G}\), then we define the Lie algebroid of \(\mathcal{G}\) to be the Lie algebroid of \(\widetilde{\mathcal{G}}\). In order for this definition to make sense, we will have to prove that every element of **SingLieGrpd** is locally the quotient of a local Lie groupoid, and this is the main technical difficulty.

In Section 2, we will review the basics of diffeology and diffeological spaces. We primarily do this to establish some notational conventions and to keep this article mostly self-contained. All of the material in this section is fairly standard in the field of diffeology and can be found in, for instance, the book of Patrick Iglesias-Zemmour [11].
In Section 3, we will introduce the notion of a quasi-etale diffeological space. Briefly, a quasi-etale diffeological space is a diffeological space which is locally the quotient of a smooth manifold by a "nice" equivalence relation. Nice in this context means that the equivalence classes are totally disconnected (e.g., think of \(\mathbb{Q}\subseteq\mathbb{R}\)) and the equivalence relation is "rigid" in the sense that any endomorphism of the equivalence relation must be an isomorphism. After defining quasi-etale spaces we will explore a few properties of quasi-etale maps and show how it is possible to study maps between quasi-etale spaces in terms of maps between smooth manifolds by "representing" them on local charts. The remaining part of the section is dedicated to defining what a submersion in **QUED** is and proving some key technical properties that we will need later.

In Section 4 we will define what a groupoid object in **QUED** is, as well as state the definition of a _local_ groupoid, which is key to our differentiation procedure. Section 5 defines the Lie functor and states the main technical theorems that we need in order to prove that it is well-defined. It concludes with a classification of source connected singular Lie groupoids with integrable algebroids. Section 6 is dedicated to proving the main results that we rely on in Section 5. It is the most technical part of the article. The main tools used here are a combination of diffeology and the theory of local Lie groupoids. The main point of this section is to show that every element of **SingLieGrpd** is locally the quotient of a local Lie groupoid in a "unique" way. With these proofs we finish the proof of Theorem 1.1. Finally, in Section 7 we conclude by proving that the classical Severa-Weinstein groupoid is an element of **SingLieGrpd** and we explain how to apply the Lie functor to it. With this calculation we complete the proof of Theorem 1.2.

### Related work

The most closely related work to the subject of this article is perhaps that of Chenchang Zhu et al. [11][12][13]. In Tseng and Zhu [10] they describe the geometric stack whose orbit space is the Severa-Weinstein groupoid. They also describe a differentiation procedure for certain groupoid objects in geometric stacks. However, there are two main disadvantages to this approach. One is that it necessitates going from the category of Lie groupoids to a (2,1)-category of stacky groupoids. The inherently higher categorical nature comes with a variety of coherence conditions and other phenomena that can make working with such objects a bit cumbersome. The second problem is with Lie's second theorem. Although stacky groupoids can be used to repair the loss of Lie's third theorem, this comes at the cost of further complicating Lie's second theorem. Unlike for ordinary Lie groupoids, for stacky groupoids it is not the case that source simply connected groupoids operate as "universal" integrations. The version of Lie's second theorem that is true for stacky groupoids can be seen in [15]. The stacky groupoid version replaces the "source simply connected" condition with the much stronger condition of "source 2-connected". Indeed, the simply-connected version of Lie's second theorem for stacky groupoids appears to be false.

Several other authors have also investigated the relationship between diffeology and Lie theory.
For example, Gilbert Hector and Enrique Macias-Virgos [14] wrote an article discussing diffeological groups with a particular emphasis on diffeomorphism groups. They use a diffeological version of the tangent functor to define a kind of Lie algebra that one can associate to a diffeological group. However, it is not clear from their work whether the resulting structure is indeed a Lie algebra in the classical sense. They do, however, show that this procedure recovers the expected Lie algebra in the case of diffeomorphism groups. Also on the topic of diffeological groups, there is an article by Jean-Marie Souriau [16] where he generalizes a variety of properties of Lie groups to the diffeological setting. Some of them relate to the integration and differentiation problem. For example, Souriau observes that under some separability assumptions, it is possible to construct something akin to the exponential map.

There is work by Marco Zambon and Iakovos Androulidakis [1] on the topic of integrating singular subalgebroids by diffeological groupoids. In their work they develop a differentiation procedure for diffeological groupoids that arise as a kind of "singular subgroupoid" of an ambient Lie groupoid. Their integration and differentiation procedure is defined relative to an ambient (integrable) structure algebroid and they do not treat the case of non-integrable algebroids. There is also work by Christian Blohmann [1] towards developing a kind of Lie theory/Cartan calculus for elastic diffeological spaces. The motivation behind this is to study the infinite dimensional symmetries that occur in field theory. However, we remark that the Severa-Weinstein groupoid of a non-integrable algebroid does not appear to be elastic in general, so it is not clear if such a theory would be suitable for the study of non-integrable algebroids.

## Acknowledgements

The author would like to thank Marco Zambon for the numerous discussions on this topic over the years, as well as his suggestions for this manuscript. The author has also greatly benefited from discussions with Christian Blohmann during his stay at the Max Planck Institute. We also acknowledge Cattaneo, Felder, Rui Loja Fernandes, Eckhard Meinrenken, David Miyamoto, and Jordan Watts for their corrections and/or comments on an earlier draft of this article. This article is based on work that was supported by the following sources: Fonds Wetenschappelijk Onderzoek (FWO Project G083118N); the Max Planck Institute for Mathematics in Bonn; the National Science Foundation (Award Number 2137999).

## Notation

The category of sets will be denoted \(\mathbf{Set}\) and the category of finite-dimensional, Hausdorff, second countable, smooth manifolds will be written \(\mathbf{Man}\).

## 2. Diffeology

Diffeological spaces are a generalization of the notion of a smooth manifold. They were independently introduced by Souriau [11] and Chen [12]. Fundamentally, the idea is to endow a space with a 'manifold-like' structure by specifying which maps into the space are smooth. The standard textbook for the theory is by Iglesias-Zemmour [13], who is also responsible for fleshing out a considerable amount of the standard diffeological techniques.

### Diffeological structures

The core observation behind diffeology is that the smooth structure on a manifold \(M\) is completely determined by the set of smooth maps into \(M\) whose domains are Euclidean sets.
The definition of diffeology is obtained by axiomatizing some of the basic properties of this distinguished collection of maps.

**Definition 2.1**.: An _\(n\)-dimensional Euclidean set_ is an open subset \(U\subseteq\mathbb{R}^{n}\). Let \(\mathbf{Eucl}\) denote the category whose objects are Euclidean sets and whose morphisms are smooth functions between Euclidean sets.

**Definition 2.2**.: A _diffeological structure_ on a set \(X\) is a function \(\mathcal{D}_{X}\) which assigns to each object \(U\in\mathbf{Eucl}\) a distinguished collection \(\mathcal{D}_{X}(U)\subseteq\mathbf{Set}(U,X)\). An element of \(\mathcal{D}_{X}(U)\) for some \(U\) is called a _plot_. Plots are required to satisfy the following axioms:

1. If \(\phi\colon U_{\phi}\to X\) is constant then \(\phi\) is a plot.
2. If \(\phi\colon U_{\phi}\to X\) is a plot and \(\psi\colon V\to U_{\phi}\) is a morphism in \(\mathbf{Eucl}\) then \(\phi\circ\psi\) is a plot.
3. If \(\phi\colon U_{\phi}\to X\) is a function and \(\{U_{i}\}_{i\in I}\) is an open cover of \(U_{\phi}\) such that \(\phi|_{U_{i}}\) is a plot for all \(i\in I\) then \(\phi\) is a plot.

A _diffeological space_ is a pair \((X,\mathcal{D}_{X})\) where \(X\) is a set and \(\mathcal{D}_{X}\) is a diffeological structure on \(X\). In a mild abuse of notation we will typically refer to diffeological spaces by only their underlying set. However, it is important to keep in mind that a given set may admit many diffeological structures. Finally, given a plot \(\phi\) on a diffeological space \(X\), we will typically denote the domain of \(\phi\) using a subscript \(U_{\phi}\).

Diffeological spaces are quite general and include objects which range from the ordinary to the quite pathological. Let us go over a few basic examples.

**Example 2.3**.: If \(M\) is a smooth manifold and \(U\) is a Euclidean set, then we declare \(\phi\colon U_{\phi}\to M\) to be a plot if and only if \(\phi\) is smooth as a map of manifolds.

**Example 2.4**.: Suppose \(X\) is a topological space. We can make \(X\) into a diffeological space by declaring any continuous function \(\phi\colon U\to X\) to be a plot.

**Example 2.5**.: Suppose \(X\) is a set. The _discrete diffeology_ on \(X\) is the unique diffeology on \(X\) for which every plot is locally constant.

**Example 2.6**.: Suppose \(X\) is a set. The _coarse diffeology_ on \(X\) is the diffeology on \(X\) which declares every set theoretic function \(\phi\colon U\to X\) from a Euclidean set to be a plot.

**Example 2.7**.: Suppose \(X\) is a diffeological space and \(\iota\colon Y\to X\) is the inclusion of an arbitrary subset. The _subset diffeology_ on \(Y\) is the diffeology on \(Y\) which says that \(\phi\colon U\to Y\) is a plot if and only if \(\iota\circ\phi\) is a plot.

One particular example of the subset diffeology will be of particular relevance to this article.

**Definition 2.8**.: Suppose \(X\) is a diffeological space and \(Y\subseteq X\). We say that \(Y\) is _totally disconnected_ if the subset diffeology on \(Y\) is the discrete diffeology. In other words, a map \(\phi\colon U_{\phi}\to Y\) is a plot if and only if it is locally constant.

**Example 2.9**.: The set of rational numbers \(\mathbb{Q}\subseteq\mathbb{R}\) is a totally disconnected subset of \(\mathbb{R}\).

Diffeological spaces come with a natural topological structure. However, the relationship between topological structures and diffeological structures is much weaker than the usual one between smooth structures and topology.
**Definition 2.10**.: Suppose \(X\) is a diffeological space. A subset \(V\subseteq X\) is said to be _open_ if for all plots \(\phi\colon U_{\phi}\to X\) we have that the inverse image \(\phi^{-1}(V)\subseteq U_{\phi}\) is open. This topology on \(X\) is called the _D-topology_.

From now on, whenever we refer to something being "local" or "open" in a diffeological space, we mean relative to the D-topology.

### Smooth maps

Morphisms of diffeological spaces are defined in a rather straightforward way. Basically, a function is smooth if it pushes forward plots to plots. More formally:

**Definition 2.11**.: Let \(X\) and \(Y\) be diffeological spaces. A function \(f\colon X\to Y\) is a _smooth map_ if for all plots \(\phi\) on \(X\) we have that \(f\circ\phi\) is a plot on \(Y\). We say that \(f\) is a _diffeomorphism_ if it is a bijection and the inverse function \(f^{-1}\) is smooth. The category of diffeological spaces with smooth maps will be denoted **Diffgl**.

This notion of a smooth map extends the usual one for smooth maps between manifolds. Let us consider some basic examples.

**Example 2.12**.: Suppose \(M\) and \(N\) are smooth manifolds. A function \(f\colon M\to N\) is smooth as a map of diffeological spaces if and only if it is smooth as a map of manifolds. This means that the category of smooth manifolds embeds fully faithfully into the category of diffeological spaces.

**Example 2.13**.: Suppose \(X\) is a diffeological space and \(Y\) is a subset of \(X\) equipped with the subset diffeology. The inclusion map \(\iota\colon Y\to X\) is smooth.

There are a few different notions of "quotients" in the context of diffeology. The most fundamental one is called a subduction. One can think of a subduction as the diffeological version of a topological quotient.

**Definition 2.14**.: Suppose \(X\) and \(Y\) are diffeological spaces. A smooth function \(f\colon X\to Y\) is called a _subduction_ if for all plots \(\phi\colon U_{\phi}\to Y\) and points \(u\in U_{\phi}\) there exists an open neighborhood \(V\subseteq U_{\phi}\) of \(u\) together with a plot \(\widetilde{\phi}\colon V\to X\) such that \(\phi|_{V}=f\circ\widetilde{\phi}\).

Let us state a few examples.

**Example 2.15**.: If \(f\colon M\to N\) is a smooth map of manifolds, then \(f\) is a subduction if and only if for all \(p\in N\) there exists an element \(q\in f^{-1}(p)\) such that the differential \(T_{q}f\colon T_{q}M\to T_{p}N\) is a surjection.

**Example 2.16**.: Suppose \(X\) is a diffeological space and let \(\{\phi_{i}\colon U_{i}\to X\}_{i\in I}\) be the set of all plots on \(X\). Consider the function: \[\bigsqcup_{i\in I}\phi_{i}\colon\bigsqcup_{i\in I}U_{i}\to X\] This function is a subduction since every plot on \(X\) factors through \(\bigsqcup_{i\in I}\phi_{i}\) in a canonical way.

In some cases, one requires maps with a greater degree of regularity than a subduction. For example, one might wish to generalize the notion of a submersion of manifolds to the diffeological setting. We saw above that a subduction between manifolds is not quite the same thing as a submersion. This leads us to the notion of a local subduction.

**Definition 2.17**.: Suppose \(X\) and \(Y\) are diffeological spaces and \(f\colon X\to Y\) is smooth. We say that \(f\) is a _local subduction_ if for all plots \(\phi\colon U_{\phi}\to Y\) and points \(x\in X\), \(u\in U_{\phi}\) such that \(\phi(u)=f(x)\), there exists an open neighborhood \(V\subseteq U_{\phi}\) of \(u\) and a plot \(\widetilde{\phi}\colon V\to X\) such that \(\widetilde{\phi}(u)=x\) and \(\phi|_{V}=f\circ\widetilde{\phi}\).
The definitions of local subduction and subduction look similar. The main distinction, for subductions, is that the lift of \(\phi\) is not required to factor through any _specific_ point in \(X\). On the other hand, for a local subduction, one must be able to find a lift through every point in \(X\) that is in the preimage of \(\phi(u)\). This has two main consequences: One is that being a local subduction is a property that is local in the _domain_ of \(f\). The other consequence is that, unlike subductions, local subductions may not necessarily be surjective.

**Example 2.18**.: If \(f\colon M\to N\) is a morphism of smooth manifolds then \(f\) is a local subduction if and only if \(f\) is a (not necessarily surjective) submersion.

**Example 2.19**.: Consider \(f\colon\mathbb{R}\sqcup\mathbb{R}\to\mathbb{R}\) where \(f(x)=0\) on the first connected component and \(f(x)=x\) on the second connected component. Then \(f\) is a subduction but \(f\) is not a local subduction.

Our next example/lemma is one of particular interest to us. Arguments similar to the one below will be used frequently in this article.

**Lemma 2.20**.: _Suppose \(\mathcal{G}\rightrightarrows M\) is a Lie groupoid and let \(X=M/\mathcal{G}\) be the set of orbits. Then the quotient map \(\pi\colon M\to X\) is a local subduction._

Proof.: Suppose \(\phi\colon U_{\phi}\to X\) is a plot and let \(u\in U_{\phi}\) and \(p\in M\) be fixed such that \(\phi(u)=\pi(p)\). Since \(\pi\colon M\to X\) is a subduction, we know there exists an open neighborhood \(V\subseteq U_{\phi}\) of \(u\) and a smooth function \(\widetilde{\phi}\colon V\to M\) with the property that \(\pi\circ\widetilde{\phi}=\phi\). Let \(q:=\widetilde{\phi}(u)\in M\). Since \(p\) and \(q\) lie in the same \(\pi\) fiber, there exists a groupoid element \(g\in\mathcal{G}\) with the property that \(s(g)=q\) and \(t(g)=p\). Let \(\sigma\colon\mathcal{O}\to\mathcal{G}\) be a local section of the source map defined on a neighborhood \(\mathcal{O}\subseteq M\) of \(q\) such that \(\sigma(q)=g\). Now let: \[\overline{\phi}(v):=(t\circ\sigma\circ\widetilde{\phi})(v)\] where \(t\) is the target map for the groupoid. If necessary, we shrink the domain \(V\) of \(\overline{\phi}\) to a smaller open neighborhood of \(u\) to ensure that \(\overline{\phi}\) is well-defined. Then a direct computation shows that \(\pi\circ\overline{\phi}=\phi\) and also \(\overline{\phi}(u)=p\). We conclude that \(\pi\) is a local subduction.
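For a quick sanity check of Lemma 2.20, one can work out the simplest case by hand; the following sketch is our own choice of example and uses only the definitions already given.

```latex
% Sanity check of Lemma 2.20 (illustrative example): the action groupoid of
% Z acting on R by translations,
%     Z x R  =>  R,   s(n,x) = x,   t(n,x) = x + n .
% Its orbit space is X = R/Z = S^1 and the orbit projection pi(x) = x mod 1
% is the usual covering map. Lemma 2.20 asserts pi is a local subduction:
% given a plot phi : U -> S^1 with phi(u) = pi(p), a local lift through the
% chosen preimage p exists -- here this is just the classical local lifting
% property of a covering (equivalently, the fact that a surjective submersion
% admits local sections through every point of every fiber).
\[
  \mathbb{Z}\ltimes\mathbb{R}\rightrightarrows\mathbb{R},\qquad
  X=\mathbb{R}/\mathbb{Z}\cong S^{1},\qquad
  \pi\colon\mathbb{R}\to S^{1}\ \text{is a local subduction.}
\]
```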
### Standard constructions

The flexibility of diffeological spaces means that we have many tools for constructing new diffeological spaces out of old ones. Let us briefly review a few of the standard diffeological constructions that we will require.

**Definition 2.21**.: Suppose \(X\) is a set and \(\{\mathbf{D}_{i}\}_{i\in I}\) is an arbitrary collection of diffeological structures on \(X\). The _intersection diffeology_ relative to this collection is the diffeology on \(X\) which declares a function \(\phi\colon U\to X\) to be a plot if and only if \(\phi\) is a plot in \(\mathbf{D}_{i}\) for all \(i\in I\).

Suppose \(X\) is a set and we have an arbitrary collection \(C=\{\phi\colon U_{\phi}\to X\}\) of set theoretic functions where the domains of the elements of \(C\) are Euclidean spaces. The _diffeology generated by \(C\)_ is the diffeology obtained by taking the intersection of all diffeologies for which every element of \(C\) is a plot. Since the constant plots are contained in every possible diffeology, the intersection diffeology always exists. Since every diffeology contains the discrete diffeology, the intersection of an arbitrary collection of diffeologies is never empty.

Many diffeologies can be constructed by taking intersections. Among these, the quotient diffeology is of particular importance.

**Definition 2.22**.: Suppose \(X\) is a diffeological space and \(\sim\) is an equivalence relation on \(X\). Let \(X/\sim\) be the set of equivalence classes and let \(\pi\colon X\to X/\sim\) be the canonical surjective function. The _quotient diffeology_ on \(X/\sim\) is the intersection of all diffeologies on \(X/\sim\) which make \(\pi\) a subduction.

Surjective subductions and quotient diffeologies are in one-to-one correspondence. By this we mean that a surjective function \(f\colon X\to Y\) of diffeological spaces is a subduction if and only if the diffeology on \(Y\) is the quotient diffeology relative to the equivalence relation given by the fibers of \(f\).

The category of diffeological spaces is fairly well-behaved. In particular it admits finite products and exponential objects.

**Definition 2.23**.: Given two diffeological spaces \(X\) and \(Y\), the _product diffeology_ on \(X\times Y\) is constructed as follows: We say that \(\phi\colon U_{\phi}\to X\times Y\) is a plot if and only if \(\operatorname{pr}_{1}\circ\phi\) and \(\operatorname{pr}_{2}\circ\phi\) are plots.

**Definition 2.24**.: Given two diffeological spaces \(X\) and \(Y\), the _functional diffeology_ on \(C^{\infty}(X,Y)\) is constructed as follows: We say that \(\phi\colon U_{\phi}\to C^{\infty}(X,Y)\) is a plot if and only if the following map is smooth: \[\overline{\phi}\colon U_{\phi}\times X\to Y\qquad(u,x)\mapsto\phi(u)(x)\] Note that we take the product diffeology on \(U_{\phi}\times X\).

## 3. Quasi-etale diffeological spaces

We will now introduce a new class of diffeological spaces that we will call "quasi-etale diffeological spaces", or QUED for short. This subcategory of diffeological spaces, along with its associated groupoids, will be the main focus of the remainder of the article. In short, a QUED is a diffeological space that can be expressed as the (local) quotient of a smooth (Hausdorff) manifold by a well-behaved equivalence relation. The term "quasi-etale" is used because this equivalence relation causes the quotient map to behave in a manner that is reminiscent of an etale map of manifolds. However, unlike a typical etale map, it is not a local diffeomorphism.

### Quasi-etale maps and spaces

**Definition 3.1** (Quasi-etale).: Suppose \(X\) and \(Y\) are diffeological spaces. A map \(\pi\colon X\to Y\) is said to be _quasi-etale_ if it satisfies the following properties:

* (QE1) \(\pi\) is a local subduction;
* (QE2) the fibers of \(\pi\) are totally disconnected;
* (QE3) if \(\mathcal{O}\subseteq X\) is an open subset and we are given a smooth map \(f\colon\mathcal{O}\to X\) with the property that \(\pi\circ f=\pi\), then \(f\) is a local diffeomorphism.

Given \(x\in X\), a _quasi-etale chart around_ \(x\) consists of a quasi-etale map \(\pi\colon M\to X\) where \(M\) is a smooth manifold and such that \(x\) is in the image of \(\pi\). We say that \(X\) is a _quasi-etale diffeological space (QUED)_ if for all \(x\in X\) there exists a quasi-etale chart around \(x\). We write **QUED** to denote the full subcategory of **Diffgl** that consists of quasi-etale diffeological spaces.

Since local subductions are open, every quasi-etale diffeological space is locally the quotient of a smooth manifold modulo an equivalence relation.
In practice, it can be difficult to determine whether an equivalence relation on a manifold gives rise to a quasi-etale chart, and typically the most difficult step to prove is (QE3). However, there are a lot of interesting examples of quasi-etale diffeological spaces. In particular, several kinds of diffeological spaces that appear in the literature happen to be quasi-etale.

**Example 3.2**.: Suppose \(N\) and \(M\) are smooth manifolds. A quasi-etale map \(\pi\colon M\to N\) is the same thing as an etale map. Therefore a classical atlas on a manifold is also a quasi-etale atlas.

**Example 3.3**.: A quasifold (introduced by Prato [11]) is a diffeological space that is locally the quotient of Euclidean space modulo a countable group of affine transformations. If \(\Gamma\) is a countable group of affine transformations of \(\mathbb{R}^{n}\) then the quotient map \(\mathbb{R}^{n}\to\mathbb{R}^{n}/\Gamma\) is quasi-etale. It is a local subduction because the quotient map for a smooth group action is always a local subduction. The fibers are totally disconnected since \(\Gamma\) is countable. The fact that condition (QE3) is satisfied in this context is actually rather non-trivial, but it has already been proved by Karshon and Miyamoto [12] (Corollary 2.15).

**Example 3.4**.: Since orbifolds are a special case of quasifolds, they are also quasi-etale diffeological spaces.

**Example 3.5**.: In the literature, the closest definition to the one we give above is probably the "diffeological etale manifolds" of Ahmadi [1]. In that article, he defines a class of maps which he calls "diffeological etale maps." We will not state the definition of such maps here but will simply utilize some of the properties proved by Ahmadi to show the relationship with our notion: Ahmadi shows that if \(\pi\colon X\to Y\) is a diffeological etale map then it is a local subduction and it has the property that for all \(x\in X\): \[T^{int}_{x}\pi\colon T^{int}_{x}X\to T^{int}_{\pi(x)}Y\] is a bijection ([1], Corollary 5.6(i)). Here \(T^{int}\) denotes the "internal tangent space" (see [10] and [11]). If \(X\) is a smooth manifold and we have an \(f\) as in (QE3) such that \(\pi\circ f=\pi\), we can apply the tangent functor to this equation to easily conclude that \(f\) must be a local diffeomorphism.

Our last example is of particular importance. It says that quotients of Lie groups by totally disconnected subgroups are quasi-etale diffeological spaces. The proof strategy for this lemma is essentially a simplified form of the main proof strategy we will use to show that the Severa-Weinstein groupoid of a Lie algebroid is a quasi-etale diffeological space.

**Lemma 3.6**.: _Suppose \(G\) is a Lie group and \(K\) is a totally disconnected normal subgroup of \(G\). Let \(X=G/K\) be the group of \(K\)-cosets equipped with the quotient diffeology. The quotient map \(\pi\colon G\to X\) is quasi-etale and so \(X\) is a quasi-etale diffeological space._

Proof.: We need to show each of the three properties.

(QE1) Consider the action groupoid \(K\ltimes G\rightrightarrows G\). Clearly \(\pi\) is the quotient map for the orbit space of this Lie groupoid. By Lemma 2.20 we conclude \(\pi\) is a local subduction.

(QE2) The fibers of \(\pi\) are the \(K\)-cosets of the form \(gK\) for \(g\in G\). Since left translation by \(g\) is a diffeomorphism and \(K\) is totally disconnected, it follows that \(gK\) must be totally disconnected.
(QE3) Suppose \(\mathcal{O}\subseteq G\) is a connected open set and \(f\colon\mathcal{O}\to G\) is a smooth function such that \(\pi\circ f=\pi\). Consider the function:

\[\overline{f}(g):=g^{-1}\cdot f(g)\]

Since \(\pi\) is a homomorphism, it is clear that the function \(\pi\circ\overline{f}\colon\mathcal{O}\to X\) is constant and therefore the image of \(\overline{f}\) is contained in a single fiber of \(\pi\). Since \(\mathcal{O}\) is connected and the fibers of \(\pi\) are totally disconnected, \(\overline{f}\) must be constant; that is, there exists \(g_{0}\in G\) such that \(\overline{f}(g)=g_{0}\) for all \(g\in\mathcal{O}\). This implies that

\[f(g)=g\cdot g_{0}\]

Since \(f\) is just right translation by \(g_{0}\), we conclude that \(f\) is a local diffeomorphism. 

### Properties of quasi-etale maps

Quasi-etale diffeological spaces have a variety of favorable properties that make it possible to transport many differential geometry techniques to the category \(\mathbf{QUED}\). The way one does this is by "representing" maps between quasi-etale diffeological spaces using maps between manifolds. This is analogous to how smooth maps between manifolds can be represented in terms of charts.

**Definition 3.7**.: Suppose \(f\colon X\to Y\) is a smooth map in \(\mathbf{QUED}\). A _local representation_ of \(f\) consists of a pair of quasi-etale charts \(\pi_{X}\colon M\to X\) and \(\pi_{Y}\colon N\to Y\) together with a smooth function \(\widetilde{f}\colon M\to N\) such that the following diagram commutes:

\[\begin{CD}M@>{\widetilde{f}}>>N\\@V{\pi_{X}}VV@VV{\pi_{Y}}V\\X@>{f}>>Y\end{CD}\tag{3.7.1}\]

We say that a local representation of \(f\) is _around_ \(x\in X\) if \(x\) is in the image of \(\pi_{X}\). Our next lemma states that representations of smooth maps in \(\mathbf{QUED}\) always exist.

**Lemma 3.8**.: _Suppose \(f\colon X\to Y\) is a smooth map in \(\mathbf{QUED}\). Then for all \(x\in X\) there exists a local representation of \(f\) around \(x\in X\)._

Proof.: Since \(Y\) is quasi-etale, there must exist a quasi-etale chart \(\pi_{Y}\colon N\to Y\) around \(f(x)\in Y\). Now let \(\pi_{X}\colon M^{\prime}\to X\) be a quasi-etale chart around \(x\in X\). Since \(\pi_{Y}\) is a subduction, we know there must exist a local lift \(\widetilde{f}\colon M\to N\) of \(f\circ\pi_{X}\colon M^{\prime}\to Y\) defined on some open subset \(M\subseteq M^{\prime}\). The triple \(\widetilde{f}\), \(\pi_{X}|_{M}\) and \(\pi_{Y}\) is the desired local representation. 

Our next lemma observes that any two points in the same fiber of a quasi-etale chart can be related by a diffeomorphism.

**Lemma 3.9**.: _Suppose \(X\) is a quasi-etale diffeological space and \(\pi\colon M\to X\) is a quasi-etale chart. If \(p,q\in M\) are such that \(\pi(p)=\pi(q)\) then there exists a local diffeomorphism \(f\colon\mathcal{O}\to M\) defined on a neighborhood \(\mathcal{O}\subseteq M\) of \(p\) such that \(f(p)=q\) and \(\pi\circ f=\pi\)._

Proof.: Since \(\pi\colon M\to X\) is a local subduction and \(M\) is a manifold, we know that there must exist an open neighborhood \(\mathcal{O}\subseteq M\) of \(p\) and a smooth function \(f\colon\mathcal{O}\to M\) with the property that \(\pi\circ f=\pi\) and \(f(p)=q\). Since \(\pi\) is assumed to be quasi-etale, (QE3) implies that such an \(f\) must be a local diffeomorphism. 

Our next lemma relates properties of maps between quasi-etale diffeological spaces and properties of their representations.

**Proposition 3.10**.: _Suppose \(f\colon X\to Y\) is a smooth map in \(\mathbf{QUED}\) and we have a local representation of \(f\) as in Diagram 3.7.1._
(a) _If \(f\) is a local subduction, then \(\widetilde{f}\) is a submersion._
(b) _If \(f\) is a local diffeomorphism, then \(\widetilde{f}\) is a local diffeomorphism._
(c) _If \(f\) is constant, then \(\widetilde{f}\) is locally constant._

Proof.: Throughout, let \(p\in M\) be arbitrary and set \(q:=\widetilde{f}(p)\in N\), \(x:=\pi_{X}(p)\in X\) and \(y:=\pi_{Y}(q)\in Y\).

(a) Since \(N\) is a smooth manifold and \(f\circ\pi_{X}\) is a local subduction, there must exist a smooth function \(g\colon\mathcal{O}\to M\) defined on an open neighborhood \(\mathcal{O}\) of \(q\) such that

\[f\circ\pi_{X}\circ g=\pi_{Y}\]

This implies that:

\[\pi_{Y}\circ\widetilde{f}\circ g=\pi_{Y}\]

Since \(\pi_{Y}\) is quasi-etale, by (QE3) it follows that \(\widetilde{f}\circ g\) is a local diffeomorphism. Therefore \(\widetilde{f}\) is a submersion, and the dimension of \(M\) is greater than or equal to the dimension of \(N\).

(b) Now suppose \(f\) is a local diffeomorphism. Since the claim is local, we may assume that \(f\) is a diffeomorphism; in particular, it is a local subduction. Let \(g\) be as in the discussion for part (a). We can repeat the argument from part (a), replacing \(f\) with \(f^{-1}\) and \(\widetilde{f}\) with \(g\), to conclude that \(g\) is a submersion. Since \(\widetilde{f}\circ g\) is a local diffeomorphism and both \(\widetilde{f}\) and \(g\) are submersions, we conclude that \(\widetilde{f}\) is a local diffeomorphism.

(c) Since \(f\) is a constant function, it follows that the image of \(\widetilde{f}\) is contained in \(\pi_{Y}^{-1}(y)\). By assumption, we know that \(\pi_{Y}^{-1}(y)\) is totally disconnected, so \(\widetilde{f}\) is locally constant. 

We conclude this subsection with a lemma that simplifies the process of proving a map is a quasi-etale chart when we already know the space is quasi-etale.

**Lemma 3.11**.: _Suppose \(X\) is a quasi-etale diffeological space and suppose \(\pi\colon M\to X\) is a local subduction and the fibers of \(\pi\) are totally disconnected. Then \(\pi\) is a quasi-etale chart._

Proof.: Let \(p_{0}\in M\) be an arbitrary point and let \(x_{0}:=\pi(p_{0})\). We will show that \(\pi\) is quasi-etale in a neighborhood of \(p_{0}\). Let \(\pi^{\prime}\colon N\to X\) be a quasi-etale chart around \(x_{0}\) and let \(q_{0}\in N\) be such that \(\pi^{\prime}(q_{0})=x_{0}\). Since both \(\pi\) and \(\pi^{\prime}\) are local subductions, we know that by possibly shrinking \(M\) and \(N\) to smaller open neighborhoods of \(p_{0}\) and \(q_{0}\), respectively, we can construct a pair of smooth functions \(f\colon M\to N\) and \(g\colon N\to M\) such that \(f(p_{0})=q_{0}\), \(g(q_{0})=p_{0}\), \(\pi^{\prime}\circ f=\pi\), and \(\pi\circ g=\pi^{\prime}\). Then it follows that \(f\circ g\colon N\to N\) is a smooth function such that \(\pi^{\prime}\circ(f\circ g)=\pi^{\prime}\). Since \(\pi^{\prime}\) is quasi-etale, it follows that \(f\circ g\) is a local diffeomorphism. Therefore, it follows that \(f\) is a submersion. However, since \(\pi^{\prime}\circ f=\pi\), the fibers of \(f\) must be totally disconnected and so \(f\) is a local diffeomorphism. Since \(f\colon M\to N\) preserves the projections to \(X\), and \(\pi^{\prime}\colon N\to X\) is quasi-etale, it follows that \(\pi\) must also be quasi-etale. 

### Fiber products

It is a notable property of diffeological spaces that fiber products of diffeological spaces always exist. However, fiber products of quasi-etale diffeological spaces may not be quasi-etale. This can happen even when the maps involved are quasi-etale.
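Before turning to the example, let us record the concrete construction underlying this discussion; this description of fiber products in \(\mathbf{Diffgl}\) is standard and is offered as a sketch rather than as part of the argument above. Given morphisms \(f\colon X\to Z\) and \(g\colon Y\to Z\), the fiber product is the set

\[X\times_{Z}Y=\{(x,y)\in X\times Y\ :\ f(x)=g(y)\}\]

equipped with the subset diffeology inherited from the product diffeology: \(\phi\colon U_{\phi}\to X\times_{Z}Y\) is a plot if and only if \(\operatorname{pr}_{1}\circ\phi\) and \(\operatorname{pr}_{2}\circ\phi\) are plots of \(X\) and \(Y\) respectively. The issue illustrated below is therefore not the existence of the fiber product but whether it remains in \(\mathbf{QUED}\).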
Consider the following simple example:

**Example 3.12**.: Consider the \(\mathbb{Z}_{2}\) action on \(\mathbb{R}\) by the reflection \(x\mapsto-x\). Let \(X:=\mathbb{R}/\mathbb{Z}_{2}\). Then \(X\) is quasi-etale (it is an orbifold). However, consider the fiber product:

\[\mathbb{R}\times_{X}\mathbb{R}=\{(x,y)\in\mathbb{R}^{2}\ :\ x=y\text{ or }x=-y\}\]

The diffeological space \(\mathbb{R}\times_{X}\mathbb{R}\) is not a quasi-etale diffeological space. To see why, note that the domain of a quasi-etale chart around \((0,0)\) would have to be one-dimensional. It is a standard exercise to show that there cannot exist a local subduction \(\pi\colon\mathbb{R}\to\mathbb{R}\times_{X}\mathbb{R}\) with \((0,0)\) in the image of \(\pi\).

What this tells us is that "local subduction" is not a strong enough condition to function as our notion of "submersion" between quasi-etale diffeological spaces. In order to clarify what is going on here, we will need to use the notion of a "cartesian" morphism in a category.

**Definition 3.13**.: Suppose \(\mathbf{C}\) is an arbitrary category. We say that a morphism \(p\colon X\to Z\) in \(\mathbf{C}\) is _cartesian_ if for all other morphisms \(f\colon Y\to Z\) one of the following holds:

1. The fiber product \(X_{p}{\times_{f}}\,Y\) exists. I.e. we have a pullback square: \[\begin{CD}X_{p}{\times_{f}}\,Y@>{\operatorname{pr}_{1}}>>X\\@V{\operatorname{pr}_{2}}VV@VV{p}V\\Y@>{f}>>Z\end{CD}\]
2. The collection of commutative squares of the following form is empty: \[\begin{CD}W@>>>X\\@VVV@VV{p}V\\Y@>{f}>>Z\end{CD}\]

**Example 3.14**.: Suppose \(\mathbf{C}\) is the category of sets \(\mathbf{Set}\). Then every function in \(\mathbf{Set}\) is cartesian. In this category, case (2) in the above definition never occurs, since the empty set is an initial object in \(\mathbf{Set}\).

**Example 3.15**.: Consider the category of smooth manifolds \(\mathbf{Man}\). Any submersion \(p\colon M\to N\) in \(\mathbf{Man}\) is cartesian. Observe that if we permit the empty set to be an initial object in \(\mathbf{Man}\), condition (2) above never occurs. However, we will proceed with the convention that the empty set is not a manifold.

**Example 3.16**.: In the category of diffeological spaces \(\mathbf{Diffgl}\) every morphism is cartesian.

For a general morphism in \(\mathbf{QUED}\) there does not appear to be a simple geometric criterion for determining when a morphism is cartesian. However, later in this section we will state such a criterion for the special case where the co-domain is a manifold. We use the notion of a cartesian morphism to define a class of maps in \(\mathbf{QUED}\) that is well-behaved for fiber products and extends the classical notion of a submersion.

**Definition 3.17**.: Suppose \(p\colon X\to Z\) is a morphism in \(\mathbf{QUED}\). We say that \(p\) is a \(\mathbf{QUED}\)_-submersion_ if \(p\) is a local subduction and \(p\) is cartesian in \(\mathbf{QUED}\).

In our theory of quasi-etale diffeological spaces, \(\mathbf{QUED}\)-submersions will play the role of submersions in the theory of manifolds. Note that from Example 3.12 we know that quasi-etale maps are _not_ a special case of \(\mathbf{QUED}\)-submersions. Our next lemma says that **QUED**-submersions are stable under taking base changes. It is a consequence of the more general fact that the properties of being a local subduction and being cartesian are both stable under base changes. We will include a proof, regardless, for completeness.
**Lemma 3.18**.: _Suppose \(p\colon X\to Z\) is a_ **QUED**_-submersion and \(f\colon Y\to Z\) is any other morphism. The base change of \(p\) along \(f\):_

\[\operatorname{pr}_{2}\colon X\times_{Z}Y\to Y\]

_is a_ **QUED**_-submersion._

Proof.: Suppose \(p\colon X\to Z\) is a **QUED**-submersion and \(f\colon Y\to Z\) is a morphism in **QUED**. To see that \(\operatorname{pr}_{2}\) is cartesian, let \(g\colon W\to Y\) be an arbitrary morphism in **QUED**. We need to show that:

\[(X\times_{Z}Y)\times_{Y}W\]

is a quasi-etale diffeological space. Observe that \((X\times_{Z}Y)\times_{Y}W\) is canonically diffeomorphic to \(X\times_{Z}W\) (where \(W\) maps to \(Z\) via \(f\circ g\)) as a diffeological space. Since \(p\) is cartesian, we conclude that \(X\times_{Z}W\) is a quasi-etale diffeological space, so \(\operatorname{pr}_{2}\colon X\times_{Z}Y\to Y\) is cartesian.

Now we must show that \(\operatorname{pr}_{2}\colon X\times_{Z}Y\to Y\) is a local subduction. Suppose \(\phi\colon U_{\phi}\to Y\) is a plot and let \(u\in U_{\phi}\) and \((x,y)\in X\times_{Z}Y\) be such that \(\phi(u)=y\). Since \(p\colon X\to Z\) is a local subduction, we know there exists an open neighborhood \(V\subseteq U_{\phi}\) of \(u\) and a lift \(\psi\colon V\to X\) with the property that \(\psi(u)=x\) and \(p\circ\psi=f\circ\phi|_{V}\). Then

\[\widetilde{\phi}:=\psi\times\phi|_{V}\colon V\to X\times_{Z}Y\]

is well-defined and has the properties that \(\operatorname{pr}_{2}\circ\widetilde{\phi}=\phi|_{V}\) and \(\widetilde{\phi}(u)=(x,y)\). 

It is not so easy to construct "obvious" examples of **QUED**-submersions. It is essential to our theory that **QUED**-submersions between manifolds be the same thing as ordinary submersions. However, the proof of this fact is not as obvious as one might expect.

**Example 3.19**.: Suppose \(p\colon M\to N\) is a smooth map of manifolds. Then \(p\) is a **QUED**-submersion if and only if \(p\) is a submersion.

One of the directions in the above statement is clear from the fact that local subductions between manifolds are automatically submersions. The other direction follows from Theorem 3.21, which we will state later in this subsection.

### Fiber products over manifolds

In this last subsection we will give some statements about the behavior of fiber products along **QUED**-submersions in the special case where the fiber product is taken over a smooth manifold.

**Proposition 3.20**.: _Suppose \(p\colon X\to B\) is a local subduction where \(B\) is a smooth manifold. Suppose we have another morphism \(f\colon Y\to B\) in_ **QUED** _for which \(X\times_{B}Y\) is also a quasi-etale diffeological space._

_Then for any point \((x_{0},y_{0})\in X\times_{B}Y\): if \(\pi_{X}\colon M\to X\) and \(\pi_{Y}\colon N\to Y\) are quasi-etale charts around \(x_{0}\) and \(y_{0}\) respectively, then the map:_

\[\pi_{XY}\colon M\times_{B}N\to X\times_{B}Y\qquad\pi_{XY}(m,n):=(\pi_{X}(m),\pi_{Y}(n))\]

_is a quasi-etale chart around \((x_{0},y_{0})\)._

Proof.: Note that \(M\times_{B}N\) is a manifold since \(p\circ\pi_{X}\) is a local subduction (and hence a submersion). We must show that \(\pi_{XY}\) is quasi-etale. By Lemma 3.11, it suffices to show that \(\pi_{XY}\) satisfies (QE1) and (QE2) from Definition 3.1.

(QE1) We need to show that \(\pi_{XY}\) is a local subduction. Suppose \(\phi\colon U_{\phi}\to X\times_{B}Y\) is a plot. Let \(u\in U_{\phi}\) and \((m,n)\in M\times_{B}N\) be such that:

\[\phi(u)=\pi_{XY}(m,n)\]

We may split \(\phi\) into a product \(\phi=\phi_{1}\times\phi_{2}\) where \(\phi_{1}\) is a plot on \(X\) and \(\phi_{2}\) is a plot on \(Y\).
Since \(\pi_{X}\) and \(\pi_{Y}\) are local subductions, it follows that there exist an open neighborhood \(V\subseteq U_{\phi}\) of \(u\) and plots:

\[\widetilde{\phi}_{1}\colon V\to M\qquad\widetilde{\phi}_{2}\colon V\to N\]

on \(M\) and \(N\) respectively such that:

\[\pi_{X}\circ\widetilde{\phi}_{1}=\phi_{1}|_{V}\qquad\pi_{Y}\circ\widetilde{\phi}_{2}=\phi_{2}|_{V}\]

and

\[\widetilde{\phi}_{1}(u)=m\qquad\widetilde{\phi}_{2}(u)=n\]

This pair of plots defines a plot \(\widetilde{\phi}:=\widetilde{\phi}_{1}\times\widetilde{\phi}_{2}\) on \(M\times_{B}N\). Furthermore, \(\pi_{XY}\circ\widetilde{\phi}=\phi|_{V}\) and \(\widetilde{\phi}(u)=(m,n)\). This shows \(\pi_{XY}\) is a local subduction.

(QE2) Next, we show that \(\pi_{XY}\) has totally disconnected fibers. Suppose

\[\phi\colon U_{\phi}\to M\times_{B}N\]

is a plot such that the image of \(\phi\) is contained in \(\pi_{XY}^{-1}(x,y)\) for some \((x,y)\in X\times_{B}Y\). Then \(\operatorname{pr}_{1}\circ\phi\) is a plot with image contained in \(\pi_{X}^{-1}(x)\) and \(\operatorname{pr}_{2}\circ\phi\) is a plot with image contained in \(\pi_{Y}^{-1}(y)\). Since \(\pi_{X}\) and \(\pi_{Y}\) have totally disconnected fibers, it follows that \(\operatorname{pr}_{1}\circ\phi\) and \(\operatorname{pr}_{2}\circ\phi\), and hence \(\phi\) itself, are locally constant. Since \(\phi\) was arbitrary, we conclude that \(\pi_{XY}^{-1}(x,y)\) is totally disconnected. 

The next theorem is our main result of this section. It says, essentially, that a smoothly parameterized family of quasi-etale diffeological spaces has a quasi-etale total space.

**Theorem 3.21**.: _Suppose \(p\colon X\to B\) is a morphism in_ **QUED** _with \(B\) a smooth manifold and \(p\) a local subduction. Then \(p\) is a_ **QUED**_-submersion if and only if the fibers of \(p\) are quasi-etale._

This somewhat innocent looking theorem has a surprisingly involved proof. Before getting to the proof of this theorem, we will first prove a lemma that simplifies the process of determining whether or not a map is quasi-etale. Note that the only difference is a subtle change in the last axiom in the definition of quasi-etale.

**Lemma 3.22**.: _Suppose \(\pi\colon M\to X\) is a smooth map of diffeological spaces with \(M\) a smooth manifold. Then \(\pi\) is quasi-etale if and only if the following conditions hold:_

1. \(\pi\) _is a local subduction,_
2. _the fibers of_ \(\pi\) _are totally disconnected,_
3. _for all_ \(p\in M\) _and smooth functions:_ \[f\colon\mathcal{O}\to M\] _such that_ \(\mathcal{O}\) _is an open neighborhood of_ \(p\)_,_ \(f(p)=p\) _and_ \(\pi\circ f=\pi\)_, we have that_ \(f\) _is a diffeomorphism in a neighborhood of_ \(p\)_._

Proof.: Note that the only difference between the three conditions above and the definition of quasi-etale is the additional stipulation that \(f\) has a fixed point. Therefore, the \(\Rightarrow\) case is clear. Now, suppose we have a map \(\pi\colon M\to X\) which satisfies properties (1-3). We must show that \(\pi\) satisfies (QE3). Consider a smooth map \(f\colon\mathcal{O}\to M\) where \(\mathcal{O}\subseteq M\) is an open subset and \(\pi\circ f=\pi\). To show that \(f\) is a local diffeomorphism, it suffices to show that it is a local diffeomorphism in a neighborhood of an arbitrary \(p\in\mathcal{O}\), so let \(p\in\mathcal{O}\) be fixed. Since \(\pi\) is a local subduction and \(M\) is a smooth manifold, there exists an open neighborhood \(\mathcal{U}\subseteq M\) of \(f(p)\) and a smooth function \(\psi\colon\mathcal{U}\to M\) such that \(\psi(f(p))=p\) and \(\pi\circ\psi=\pi\).
We can assume without loss of generality that \(f^{-1}(\mathcal{U})\subseteq\mathcal{O}\) and therefore the function \(\psi\circ f\) is well defined. This function also satisfies \((\psi\circ f)(p)=p\) and \(\pi\circ(\psi\circ f)=\pi\). By property (3) above we conclude that \(\psi\circ f\) is a local diffeomorphism. Therefore, \(f\) must be an immersion. By a dimension count, we conclude that \(f\) is a local diffeomorphism. 

Our next lemma tells us that if we have a smoothly parameterized family of quasi-etale diffeological spaces with a quasi-etale total space, then one can find a quasi-etale chart on a fiber by restricting a quasi-etale chart on the total space.

**Lemma 3.23**.: _Suppose \(p\colon X\to B\) is smooth with \(X\) quasi-etale and \(B\) a smooth manifold, and assume that for all \(b\in B\)_

\[X_{b}:=p^{-1}(b)\]

_is quasi-etale._

_Then if \(\pi\colon M\to X\) is a quasi-etale chart, it follows that:_

\[\pi|_{(p\circ\pi)^{-1}(b)}\colon(p\circ\pi)^{-1}(b)\to X_{b}\]

_is a quasi-etale chart._

Proof.: This is an immediate consequence of Proposition 3.20 where we take \(f\colon N\to B\) to be the inclusion of a point \(\{b\}\hookrightarrow B\). 

With these observations, we can now prove the theorem.

Proof of Theorem 3.21.: Suppose \(B\) is a smooth manifold and let \(p\colon X\to B\) be a local subduction and a morphism in \(\mathbf{QUED}\). We must show that \(p\) is a \(\mathbf{QUED}\)-submersion if and only if the fibers of \(p\) are quasi-etale diffeological spaces.

(\(\Rightarrow\)) Suppose that \(p\) is a **QUED**-submersion. Then \(p\) must be cartesian and it follows that for all \(b\in B\) we have that the fiber product:

\[\{b\}\times_{B}X\cong p^{-1}(b)\]

is a quasi-etale diffeological space.

(\(\Leftarrow\)) Suppose that \(p\colon X\to B\) is a local subduction in **QUED** where \(B\) is a smooth manifold. Assume that for all \(b\in B\) we have that \(p^{-1}(b)\) is a quasi-etale diffeological space. We must show that \(p\) is cartesian. Let \(f\colon Y\to B\) be an arbitrary smooth map in **QUED**. Our task is to show that \(X\times_{B}Y\) is quasi-etale.

Suppose \(\pi_{X}\colon M\to X\) and \(\pi_{Y}\colon N\to Y\) are quasi-etale charts. Let \(\pi_{XY}\colon M\times_{B}N\to X\times_{B}Y\) be the associated smooth map. We claim that \(\pi_{XY}\) is quasi-etale. First let us establish some notation and make a few observations:

* Let \[\widetilde{p}:=p\circ\pi_{X}\colon M\to B\qquad\widetilde{f}:=f\circ\pi_{Y}\colon N\to B\] We remark that \(\widetilde{p}\) is a submersion between smooth manifolds.
* Given \(b\in B\), let \(X_{b}:=p^{-1}(b)\) and \(M_{b}:=\widetilde{p}^{-1}(b)\). Note that, by Lemma 3.23, we know that \[\pi_{X}|_{M_{b}}\colon M_{b}\to X_{b}\] is a quasi-etale chart.
* Let \(q:=\operatorname{pr}_{2}\colon M\times_{B}N\to N\). Note that since \(q\) is the base change of \(\widetilde{p}\) along \(\widetilde{f}\), it follows that \(q\) is a submersion.

(QE1) We must show \(\pi_{XY}\) is a local subduction. Suppose \(\phi\colon U_{\phi}\to X\times_{B}Y\) is a plot and we have a point \(u\in U_{\phi}\) and \((m,n)\in M\times_{B}N\) such that \(\pi_{XY}(m,n)=\phi(u)\).
We know that there exist plots \(\phi_{X}\colon U_{\phi}\to X\) and \(\phi_{Y}\colon U_{\phi}\to Y\) such that:

\[\phi(u)=(\phi_{X}(u),\phi_{Y}(u))\quad\text{ and }\quad p\circ\phi_{X}=f\circ\phi_{Y}\]

Since \(\pi_{X}\) and \(\pi_{Y}\) are local subductions, there must exist an open neighborhood \(V\) of \(u\in U_{\phi}\) together with lifts \(\widetilde{\phi}_{X}\) and \(\widetilde{\phi}_{Y}\) such that:

\[\pi_{X}\circ\widetilde{\phi}_{X}=\phi_{X}|_{V}\qquad\pi_{Y}\circ\widetilde{\phi}_{Y}=\phi_{Y}|_{V}\]

and:

\[\widetilde{\phi}_{X}(u)=m\qquad\widetilde{\phi}_{Y}(u)=n\]

Therefore, we can define a map:

\[\widetilde{\phi}\colon V\to M\times N\qquad\widetilde{\phi}(v)=(\widetilde{\phi}_{X}(v),\widetilde{\phi}_{Y}(v))\]

We claim that the image of \(\widetilde{\phi}\) is contained in \(M\times_{B}N\). This follows from a direct calculation:

\[\begin{split}\widetilde{p}\circ\widetilde{\phi}_{X}&=p\circ\pi_{X}\circ\widetilde{\phi}_{X}\\ &=p\circ\phi_{X}|_{V}\\ &=f\circ\phi_{Y}|_{V}\\ &=f\circ\pi_{Y}\circ\widetilde{\phi}_{Y}\\ &=\widetilde{f}\circ\widetilde{\phi}_{Y}\end{split}\]

Furthermore, it follows immediately from the definition of \(\pi_{XY}\) that \(\pi_{XY}\circ\widetilde{\phi}=\phi|_{V}\) and \(\widetilde{\phi}(u)=(m,n)\). This shows that \(\pi_{XY}\) is a local subduction.

(QE2) If \(\phi\colon U_{\phi}\to M\times_{B}N\) is a smooth map with image contained in a fiber of \(\pi_{XY}\), then it follows that \(\operatorname{pr}_{1}\circ\pi_{XY}\circ\phi\) is locally constant. Since \(\operatorname{pr}_{1}\circ\pi_{XY}=\pi_{X}\circ\operatorname{pr}_{1}\), this implies that \(\pi_{X}\circ\operatorname{pr}_{1}\circ\phi\) is locally constant. Since \(\pi_{X}\) is quasi-etale, it follows that \(\operatorname{pr}_{1}\circ\phi\) is locally constant. A symmetrical argument shows that \(\operatorname{pr}_{2}\circ\phi\) is locally constant. Since each component of \(\phi\) is locally constant, it follows that \(\phi\) is locally constant.

(QE3) We utilize the simplification from Lemma 3.22. Let us fix \((m_{0},n_{0})\in M\times_{B}N\) and suppose \(g\colon\mathcal{O}\to M\times_{B}N\) is a smooth function such that \(\pi_{XY}\circ g=\pi_{XY}\) and \(g(m_{0},n_{0})=(m_{0},n_{0})\). We are finished if we can show \(g\) is a local diffeomorphism in a neighborhood of \((m_{0},n_{0})\). This follows from the following sequence of claims:

1. In a neighborhood of \((m_{0},n_{0})\), \(g\) maps fibers of \(q\) to fibers of \(q\).
2. Since \(\pi_{Y}\) is quasi-etale, the horizontal (relative to \(q\)) component of \(g\) is a local diffeomorphism.
3. Since \(\pi_{X}|_{M_{b}}\) is quasi-etale for all \(b\in B\), it follows that the vertical component of \(g\) is a local diffeomorphism.
4. Since both the horizontal and vertical components of \(g\) are local diffeomorphisms, it follows that \(g\) is a local diffeomorphism.

(1) Since we are only concerned with the behavior of \(g\) in an open neighborhood of \((m_{0},n_{0})\) and \((m_{0},n_{0})\) is a fixed point of \(g\), we can freely restrict \(g\) to arbitrarily small neighborhoods of \((m_{0},n_{0})\). Therefore, we can assume without loss of generality that there exist two open subsets \(\mathcal{U}\subseteq M\) and \(\mathcal{V}\subseteq N\) such that \(\mathcal{O}=\mathcal{U}\times_{B}\mathcal{V}\). Since \(\widetilde{p}\) is a submersion, we can choose \(\mathcal{U}\) in such a way that for all \(b\in B\) we have that:

\[M_{b}\cap\mathcal{U}\]

is connected.
Note that if \(n\in N\) is a point such that \(\widetilde{f}(n)=b\), it follows that:

\[q^{-1}(n)=M_{b}\times\{n\}\]

Consequently, it follows that for all \(n\in N\) the sets:

\[q^{-1}(n)\cap\mathcal{O}\]

are connected. We claim that \(g\) must preserve the fibers of \(q\) on such a domain. To see why, observe that, using \(\operatorname{pr}_{2}\circ\pi_{XY}=\pi_{Y}\circ q\) and the fact that \(\pi_{XY}\circ g=\pi_{XY}\), we have that:

\[\pi_{Y}\circ q\circ g=\operatorname{pr}_{2}\circ\pi_{XY}\circ g=\operatorname{pr}_{2}\circ\pi_{XY}=\pi_{Y}\circ q\]

Since, for all \(n\in N\), the subsets \(q^{-1}(n)\cap\mathcal{O}\) are connected and the fibers of \(\pi_{Y}\) are totally disconnected, it follows that the image:

\[q\circ g(q^{-1}(n)\cap\mathcal{O})\]

is a point. This implies that the images of the fibers of \(q\) under \(g\) are contained in fibers of \(q\). Since \(q\) is a submersion, it follows that there must exist a smooth function \(h\colon q(\mathcal{O})\to N\) such that:

\[q\circ g=h\circ q\]

(2) Note that since \(\pi_{XY}\circ g=\pi_{XY}\), it follows that \(\pi_{Y}\circ h=\pi_{Y}\). Since \(\pi_{Y}\) is quasi-etale, it follows that \(h\) is a local diffeomorphism.

(3) For each \(n\in q(\mathcal{O})\), write \(b:=\widetilde{f}(n)\). Note that:

\[q^{-1}(n)\cap\mathcal{O}=(M_{b}\cap\mathcal{U})\times\{n\}\]

Furthermore, we remark that \(\widetilde{f}\circ h=\widetilde{f}\), so it follows that:

\[g((M_{b}\cap\mathcal{U})\times\{n\})\subseteq M_{b}\times\{h(n)\}\]

Therefore, let:

\[v_{n}\colon M_{b}\cap\mathcal{U}\to M_{b}\]

be the map induced by \(g\) at the level of fibers. In other words, \(v_{n}\) is the vertical component of \(g\) at the \(q\)-fiber of \(n\). Observe that since \(\pi_{XY}\circ g=\pi_{XY}\), it follows that \(\pi_{X}\circ v_{n}=\pi_{X}\). From a previous lemma (Lemma 3.23), we saw that \(\pi_{X}|_{M_{b}}\) is quasi-etale for all \(b\in B\) whenever \(M_{b}\) is non-empty. From the quasi-etale property, it follows that \(v_{n}\) is a local diffeomorphism.

(4) Since \(h\) and all of the maps \(v_{n}\) are local diffeomorphisms, it follows that \(g\) is a local diffeomorphism. 

## 4. Quasi-etale groupoids

We will now look at groupoid objects in the category of quasi-etale diffeological spaces. We will also need to define the notion of a local groupoid in this context. It turns out that local groupoids will play a key role in constructing the differentiation functor.

### Groupoid objects in **QUED**

**Definition 4.1**.: A **QUED**_-groupoid_ consists of two quasi-etale diffeological spaces \(\mathcal{G}\) and \(X\) called the _arrows_ and _objects_ respectively together with a collection of smooth maps:

* Two **QUED**-submersions called the _source_ and _target_: \[s\colon\mathcal{G}\to X\qquad t\colon\mathcal{G}\to X\]
* A smooth map called the _unit_ \[u\colon X\to\mathcal{G}\qquad x\mapsto 1_{x}\]
* A smooth map called _multiplication_ \[m\colon\mathcal{G}\,_{s}{\times_{t}}\,\mathcal{G}\to\mathcal{G}\qquad(g,h)\mapsto gh\]
* A smooth map called _inverse_ \[i\colon\mathcal{G}\to\mathcal{G}\]

These morphisms are required to satisfy the following properties:

1. Compatibility of source and target with unit: \[\forall x\in X\qquad s(u(x))=t(u(x))=x\]
2. Compatibility of source and target with multiplication: \[\forall(g,h)\in\mathcal{G}\,_{s}{\times_{t}}\,\mathcal{G}\qquad s(m(g,h))=s(h)\quad t(m(g,h))=t(g)\]
3. Compatibility of source and target with inverse: \[\forall g\in\mathcal{G}\qquad s(i(g))=t(g)\]
4. Left and right unit laws: \[\forall g\in\mathcal{G}\qquad m(u(t(g)),g)=g=m(g,u(s(g)))\]
5. Left and right inverse laws: \[\forall g\in\mathcal{G}\qquad m(g,i(g))=u(t(g))\quad m(i(g),g)=u(s(g))\]
6. Associativity law: \[\forall(g,h,k)\in\mathcal{G}\,_{s}{\times_{t}}\,\mathcal{G}\,_{s}{\times_{t}}\,\mathcal{G}\qquad m(g,m(h,k))=m(m(g,h),k)\]

We say that such a groupoid \(\mathcal{G}\rightrightarrows X\) is a _singular Lie groupoid_ if \(X\) is a smooth manifold. We say that \(\mathcal{G}\rightrightarrows X\) is a _Lie groupoid_ if both \(\mathcal{G}\) and \(X\) are manifolds.

It should be noted that the condition that the source and target maps be **QUED**-submersions is crucial. For example, this property ensures that the spaces of composable arrows are also objects in **QUED**. Singular Lie groupoids are precisely the class of diffeological groupoids that we intend to differentiate to Lie algebroids.

**Definition 4.2**.: Suppose \(\mathcal{G}\rightrightarrows X\) and \(\mathcal{H}\rightrightarrows Y\) are **QUED**-groupoids. A _groupoid homomorphism_ \(F\colon\mathcal{G}\to\mathcal{H}\) covering \(f\colon X\to Y\) is a pair of smooth maps with the properties:

1. Compatibility with source and target: \[\forall g\in\mathcal{G}\qquad s(F(g))=f(s(g))\quad t(F(g))=f(t(g))\]
2. Compatibility with multiplication: \[\forall(g,h)\in\mathcal{G}\,_{s}{\times_{t}}\,\mathcal{G}\qquad F(m(g,h))=m(F(g),F(h))\]

**Example 4.3**.: Since a **QUED**-submersion between smooth manifolds is an ordinary submersion, this definition of a Lie groupoid corresponds exactly to the classical definition.

**Example 4.4**.: Suppose \(G\) is a Lie group and \(K\) is a totally disconnected normal subgroup of \(G\). According to Lemma 3.6 we know \(G/K\) is a quasi-etale diffeological space, and therefore \(G/K\) is a singular Lie groupoid (over a point).

**Example 4.5**.: Consider the tangent bundle \(T\mathbb{R}\) and think of it as a bundle of abelian groups over \(\mathbb{R}\). Let us define an action of \(\mathbb{Z}\) on \(T\mathbb{R}\) by the following rule:

\[z\in\mathbb{Z},\ (v,t)\in T\mathbb{R}\qquad z\cdot(v,t):=(v+tz,t)\]

where the second coordinate is the base point and the first coordinate is the vector component. The orbit space of this action \(T\mathbb{R}/\mathbb{Z}\) is quasi-etale, so it is canonically a singular Lie groupoid over \(\mathbb{R}\). This example illustrates how quasi-etale groupoids are able to handle the "transverse obstruction" to integrability.

### Local groupoids

Local groupoids are a weaker form of a groupoid which has a more restrictive product operation. Essentially, we do not require that multiplication be defined for all "composable" pairs. The "local" part of the definition of a local groupoid refers to the fact that multiplication should be defined in an open neighborhood of the units.

**Definition 4.6**.: A _local \(\mathbf{QUED}\)-groupoid_ consists of a pair of quasi-etale spaces \(\mathcal{G}\) and \(X\) called the _arrows_ and _objects_ together with:

* two \(\mathbf{QUED}\)-submersions called the _source_ and _target_: \[s\colon\mathcal{G}\to X\qquad t\colon\mathcal{G}\to X\]
* a smooth map called the _unit_ \[u\colon X\to\mathcal{G}\qquad x\mapsto 1_{x}\]
* an open neighborhood \(\mathcal{M}\subseteq\mathcal{G}\,_{s}{\times}_{t}\,\mathcal{G}\) of \(\{(u(x),u(x))\ :\ x\in X\}\) and a smooth map called _multiplication_ \[m\colon\mathcal{M}\to\mathcal{G}\qquad(g,h)\mapsto gh\]
* an open neighborhood \(\mathcal{I}\subseteq\mathcal{G}\) of \(u(X)\) and a smooth map called _inverse_ \[i\colon\mathcal{I}\to\mathcal{G}\qquad g\mapsto g^{-1}\]

We further require that the following properties hold:
1. Compatibility of source and target with unit: \[\forall x\in X\qquad s(u(x))=t(u(x))=x\]
2. Compatibility of source and target with multiplication: \[\forall(g,h)\in\mathcal{M}\qquad s(m(g,h))=s(h)\quad t(m(g,h))=t(g)\]
3. Compatibility of source and target with inverse: \[\forall g\in\mathcal{I}\qquad s(i(g))=t(g)\]
4. Left and right unit laws: \[\forall g\in\mathcal{G}\qquad m(u(t(g)),g)=g=m(g,u(s(g)))\] whenever the above expression is well-defined.
5. Left and right inverse laws: \[\forall g\in\mathcal{I}\qquad m(g,i(g))=u(t(g))\quad m(i(g),g)=u(s(g))\] whenever the above expression is well-defined.
6. Associativity law: \[\forall(g,h,k)\in\mathcal{G}\,_{s}{\times}_{t}\,\mathcal{G}\,_{s}{\times}_{t}\,\mathcal{G}\qquad m(g,m(h,k))=m(m(g,h),k)\] whenever the above expression is well-defined.

A local **QUED**-groupoid \(\mathcal{G}\rightrightarrows X\) is said to be a _singular local Lie groupoid_ if \(X\) is a smooth manifold. It is a _local Lie groupoid_ if both \(\mathcal{G}\) and \(X\) are manifolds.

Apart from the fact that multiplication is defined only on an open subset, local groupoids will, for our purposes, also differ in how their morphisms are defined. Let us establish some notation. Given a smooth function of diffeological spaces \(f\colon X\to Y\) and a pair of subsets \(S\subseteq X\) and \(T\subseteq Y\) such that \(f(S)\subseteq T\), we write:

\[[f]_{S}\colon[X]_{S}\to[Y]_{T}\]

to denote the class of \(f\) as a germ of a map from \(X\) to \(Y\) defined in an open neighborhood of \(S\). The expressions \([X]_{S}\) and \([Y]_{T}\) denote germs of diffeological spaces.

**Definition 4.7**.: Suppose \(\mathcal{G}\rightrightarrows X\) and \(\mathcal{H}\rightrightarrows Y\) are local **QUED**-groupoids. A _morphism_ \(\mathcal{G}\to\mathcal{H}\) covering \(f\colon X\to Y\) consists of a _germ_ of a smooth function:

\[[\mathcal{F}]_{u(X)}\colon[\mathcal{G}]_{u(X)}\to[\mathcal{H}]_{u(Y)}\]

such that there exists a representative of the germ \(\mathcal{F}\) which satisfies the following properties:

1. Compatibility with source and target: \[\forall g\in\mathcal{G}\qquad s(\mathcal{F}(g))=f(s(g))\quad t(\mathcal{F}(g))=f(t(g))\] whenever the above expressions are well-defined.
2. Compatibility with multiplication: \[\forall(g,h)\in\mathcal{G}\,_{s}{\times}_{t}\,\mathcal{G}\qquad\mathcal{F}(m(g,h))=m(\mathcal{F}(g),\mathcal{F}(h))\] whenever the above expression is well-defined.

It is built in to our notation that \(\mathcal{F}(u(X))\subseteq u(Y)\). In other words, a morphism of local **QUED**-groupoids always maps units to units by definition. A **QUED**-groupoid can also be thought of as a local **QUED**-groupoid. However, we remark that the functor from **QUED**-groupoids to local **QUED**-groupoids is not full or faithful. In other words, **QUED**-groupoids do not form a subcategory of their local counterparts. In order to reduce the potential for confusion, we will always use a symbol such as \(\mathcal{F}\) to denote an actual smooth map compatible with the structure maps and \([\mathcal{F}]\) to denote the germ of such a map around the units. This is a mild abuse of notation, since we should formally use \([\mathcal{F}]_{u(X)}\), but we will omit the repetitive subscripts to reduce notational clutter.

Local Lie groupoids are particularly relevant to the integration problem due to the following theorem.
**Theorem 4.8** ([10],[11]).: _The Lie functor for local groupoids defines an equivalence of categories:_

\[\textbf{Lie}\colon\{\text{Local Lie groupoids}\}\to\{\text{Lie algebroids}\}\]

The first proof of local integrability of Lie algebroids appears in Crainic and Fernandes [10]. However, the fact that the relationship is actually an equivalence of categories was, to some extent, folklore. Later, a complete proof of this equivalence appeared in [11], and this is the earliest such proof that we are aware of. This theorem means that we will not need to deal with Lie algebroids directly to define differentiation. Instead, our strategy will be to construct a functor from singular Lie groupoids to local Lie groupoids. The only other feature of Lie algebroids that we must keep in mind is that Lie algebroids form a _sheaf_. In other words, Lie algebroids can be constructed by gluing together Lie algebroids defined over an open cover with coherent gluing data on the intersections. Since the Lie functor is an equivalence, local Lie groupoids also form a sheaf.

## 5. Differentiation

The key observation that we need for our differentiation procedure is that a quasi-etale chart (around an identity element) on a local singular Lie groupoid inherits a unique local groupoid structure compatible with the chart. In this sense, local Lie groupoids can be used to "desingularize" singular Lie groupoids. This section will be organized as follows. In the first subsection we will state our basic theorems about quasi-etale charts on local singular Lie groupoids. In the next subsection we will show how these theorems can be used to construct a differentiation functor. In the last subsection we will use some of this theory to classify singular Lie groupoids with integrable algebroids.

### Representing singular Lie groupoids

Our method of differentiating a singular Lie groupoid involves locally "representing" the said groupoid with a local Lie groupoid.

**Definition 5.1**.: Suppose \(\mathcal{G}\rightrightarrows M\) is a local singular Lie groupoid. A _local Lie groupoid chart_ of \(\mathcal{G}\rightrightarrows M\) consists of an open subset \(\widetilde{M}\subseteq M\) together with a quasi-etale chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\), where \(\widetilde{\mathcal{G}}\rightrightarrows\widetilde{M}\) is a local Lie groupoid, such that \([\pi]\) constitutes a local groupoid morphism covering the inclusion \(\widetilde{M}\hookrightarrow M\).

We say a local groupoid chart is _around_ \(x\in M\) if \(x\in\widetilde{M}\). We say it is _wide_ if \(\widetilde{M}=M\).

The following two theorems are the main technical results that permit us to differentiate (local) singular Lie groupoids. The first is an existence and uniqueness result for local groupoid charts. The second theorem says that one can always represent a homomorphism of local singular Lie groupoids using local Lie groupoid charts.

**Theorem 5.2**.: _Suppose \(\mathcal{G}\rightrightarrows M\) is a local singular Lie groupoid. There exists a wide local Lie groupoid chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\). Furthermore, if \(\pi^{\prime}\colon\widetilde{\mathcal{G}}^{\prime}\to\mathcal{G}\) is another wide local Lie groupoid chart, there exists a unique isomorphism of local Lie groupoids \([\mathcal{F}]\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{G}}^{\prime}\) such that:_

\[[\pi^{\prime}]\circ[\mathcal{F}]=[\pi]\]

**Theorem 5.3**.: _Let \(\mathcal{G}\rightrightarrows M\) and \(\mathcal{H}\rightrightarrows N\) be local singular Lie groupoids._
_Suppose \([\mathcal{F}]\colon\mathcal{G}\to\mathcal{H}\) is a homomorphism and \(\pi_{\mathcal{G}}\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) and \(\pi_{\mathcal{H}}\colon\widetilde{\mathcal{H}}\to\mathcal{H}\) are wide local groupoid charts. Then there exists a unique local groupoid morphism \([\widetilde{\mathcal{F}}]\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{H}}\) such that:_

\[[\pi_{\mathcal{H}}]\circ[\widetilde{\mathcal{F}}]=[\mathcal{F}]\circ[\pi_{\mathcal{G}}]\]

The idea of the proof is that one starts with an arbitrary quasi-etale chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) and then constructs a local groupoid structure on \(\widetilde{\mathcal{G}}\) by finding representations of all of the groupoid structure maps. Once these maps are constructed, one then needs to show that these representatives satisfy the groupoid axioms in a neighborhood of the identity. The full proofs of Theorem 5.2 and Theorem 5.3 require their own section as well as some further technical development. Therefore, for the sake of exposition, we have moved these proofs to Section 6. For now, let us explore some of the consequences of these theorems. One interesting consequence is a classification of all connected singular Lie groups.

**Theorem 5.4**.: _Suppose \(G\) is a connected singular Lie group. In other words, \(G\) is a singular Lie groupoid over a point. Then as a diffeological group, \(G\) is isomorphic to a quotient of a Lie group modulo a totally disconnected normal subgroup._

Proof.: By Theorem 5.2 we know that we can produce a local Lie group \(\widetilde{G}^{\circ}\) together with a quasi-etale chart \(\pi\colon\widetilde{G}^{\circ}\to G\) which is a homomorphism of local groups. By possibly shrinking \(\widetilde{G}^{\circ}\) to a small enough open neighborhood of the identity, we can assume without loss of generality that \(\widetilde{G}^{\circ}\) is an open subset of \(\widetilde{G}\), where \(\widetilde{G}\) is a simply connected Lie group. Since \(\widetilde{G}^{\circ}\) generates \(\widetilde{G}\), it is possible\({}^{1}\) to extend \(\pi\) uniquely to a homomorphism defined on all of \(\widetilde{G}\), and so we have a smooth group homomorphism \(\pi\colon\widetilde{G}\to G\). Since \(\pi\) is quasi-etale in a neighborhood of the identity, it follows by a simple translation argument that it must be quasi-etale everywhere. Since the fibers of a quasi-etale map must be totally disconnected, it follows that the kernel of \(\pi\) is a totally disconnected normal subgroup of \(\widetilde{G}\). Finally, since \(\pi\) is open, the image of \(\pi\) is an open subgroup. Since we have assumed that \(G\) is connected, it follows that \(\pi\) is surjective. 

Footnote 1: One way to prove this is by considering the subgroup of \(\widetilde{G}\times G\) generated by the graph of \(\pi\). This can also be seen by using the theory of associative completions developed by Malcev [11]. See also Fernandes and Michiels [13] for a more modern version.

### Construction of the Lie functor

We will now prove the main theorem about differentiating singular Lie groupoids. Let us establish some notation for the relevant categories:

\[\mathbf{Alg}:=\{\text{Category of Lie algebroids}\}\]
\[\mathbf{LocLieGrpd}:=\{\text{Category of local Lie groupoids}\}\]
\[\mathbf{SingLocLieGrpd}:=\{\text{Category of singular local Lie groupoids}\}\]

Let us first restate the main theorem from the introduction:

**Theorem 1.1**.: _Let \(\mathbf{SingLieGrpd}\) be the category of \(\mathbf{QUED}\)-groupoids where the space of objects is a smooth manifold._
_There exists a functor:_

\[\widehat{\mathbf{Lie}}\colon\mathbf{SingLieGrpd}\to\mathbf{Alg}\]

_with the property that \(\widehat{\mathbf{Lie}}|_{\mathbf{LieGrpd}}=\mathbf{Lie}\)._

In fact, we will prove a stronger result than the one stated above. The full version is stated for local singular Lie groupoids and includes a claim that says the functor is essentially "unique".

**Theorem 5.5**.: _There exists a functor_

\[\widehat{\mathbf{Lie}}\colon\mathbf{SingLocLieGrpd}\to\mathbf{Alg}\]

_with the following two properties:_

1. _For all wide local Lie groupoid charts_ \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) _we have that_ \(\widehat{\mathbf{Lie}}(\pi)\) _is an isomorphism._
2. \(\widehat{\mathbf{Lie}}=\mathbf{Lie}\) _when restricted to the subcategory of local Lie groupoids._

_Furthermore, such a functor is unique up to a natural isomorphism._

Proof.: Recall that \(\mathbf{Lie}\colon\mathbf{LocLieGrpd}\to\mathbf{Alg}\) is an equivalence of categories. Therefore, we shall instead prove a closely related fact from which the above theorem will immediately follow: We claim that there exists a functor

\[\mathbf{F}\colon\mathbf{SingLocLieGrpd}\to\mathbf{LocLieGrpd}\]

unique up to natural isomorphism, with the following two properties:

1. For all wide local Lie groupoid charts \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) we have that \(\mathbf{F}[\pi]\) is an isomorphism.
2. \(\mathbf{F}|_{\mathbf{LocLieGrpd}}=\mathrm{Id}\).

First let us construct such an \(\mathbf{F}\). For each local singular Lie groupoid \(\mathcal{G}\), choose a wide local groupoid chart \(\pi_{\mathcal{G}}\colon\widetilde{\mathcal{G}}\to\mathcal{G}\). We make this choice in such a way that when \(\mathcal{G}\) is a local Lie groupoid we take \(\widetilde{\mathcal{G}}=\mathcal{G}\) and \(\pi_{\mathcal{G}}=\mathrm{id}\). For each such \(\mathcal{G}\) we define \(\mathbf{F}(\mathcal{G}):=\widetilde{\mathcal{G}}\). If \([\mathcal{F}]\colon\mathcal{G}\to\mathcal{H}\) is a morphism of singular local Lie groupoids, let \(\mathbf{F}[\mathcal{F}]\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{H}}\) be the unique morphism such that:

\[[\pi_{\mathcal{H}}]\circ\mathbf{F}[\mathcal{F}]=[\mathcal{F}]\circ[\pi_{\mathcal{G}}]\]

Such a morphism is guaranteed to exist by Theorem 5.3. We must show that \(\mathbf{F}\) is a functor. Suppose \([\mathcal{F}_{1}]\colon\mathcal{G}\to\mathcal{H}\) and \([\mathcal{F}_{2}]\colon\mathcal{H}\to\mathcal{K}\) are a pair of morphisms of singular local Lie groupoids. Composing the defining equations for \(\mathbf{F}[\mathcal{F}_{1}]\) and \(\mathbf{F}[\mathcal{F}_{2}]\) gives:

\[[\pi_{\mathcal{K}}]\circ\mathbf{F}[\mathcal{F}_{2}]\circ\mathbf{F}[\mathcal{F}_{1}]=[\mathcal{F}_{2}]\circ[\mathcal{F}_{1}]\circ[\pi_{\mathcal{G}}]\]

From the definition of \(\mathbf{F}([\mathcal{F}_{2}]\circ[\mathcal{F}_{1}])\), we conclude from the uniqueness part of Theorem 5.3 that:

\[\mathbf{F}([\mathcal{F}_{2}]\circ[\mathcal{F}_{1}])=\mathbf{F}[\mathcal{F}_{2}]\circ\mathbf{F}[\mathcal{F}_{1}]\]

Now we will show that \(\mathbf{F}\) satisfies (1) and (2) above.

(1) Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a wide local Lie groupoid chart. From the definition of \(\mathbf{F}\) we have:

\[[\pi_{\mathcal{G}}]\circ\mathbf{F}[\pi]=[\pi]\]

Since \(\pi\) is a quasi-etale chart, it follows from Proposition 3.10 that \(\mathbf{F}[\pi]\) must be a submersion in a neighborhood of the identity elements. By a dimension count, we conclude that \(\mathbf{F}[\pi]\) is a diffeomorphism in a neighborhood of the identity elements and so it is an isomorphism of local Lie groupoids.

Property (2) is immediate from the fact that \(\mathbf{F}(\mathcal{G})=\mathcal{G}\) by definition for \(\mathcal{G}\in\mathbf{LocLieGrpd}\).

Now suppose \(\mathbf{F}^{\prime}\) is another functor satisfying properties (1) and (2) above.
Given \(\mathcal{G}\in\mathbf{SingLocLieGrpd}\), let:

\[\eta(\mathcal{G}):=\mathbf{F}^{\prime}[\pi_{\mathcal{G}}]\colon\mathbf{F}(\mathcal{G})\to\mathbf{F}^{\prime}(\mathcal{G})\]

The domain and codomain are as above since \(\mathbf{F}^{\prime}\) satisfies property (2). Furthermore, for each \(\mathcal{G}\) we have that \(\eta(\mathcal{G})\) is an isomorphism since \(\mathbf{F}^{\prime}\) satisfies property (1). We only need to check that \(\eta\) defines a natural transformation. Suppose we have \([\mathcal{F}]\colon\mathcal{G}\to\mathcal{H}\). From the definition of \(\mathbf{F}[\mathcal{F}]\) we have that:

\[[\pi_{\mathcal{H}}]\circ\mathbf{F}[\mathcal{F}]=[\mathcal{F}]\circ[\pi_{\mathcal{G}}]\]

If we apply the functor \(\mathbf{F}^{\prime}\) to this equation and use the fact that \(\mathbf{F}^{\prime}\) fixes morphisms of local Lie groupoids, we get:

\[\eta(\mathcal{H})\circ\mathbf{F}[\mathcal{F}]=\mathbf{F}^{\prime}[\mathcal{F}]\circ\eta(\mathcal{G})\]

This proves that \(\eta\) is a natural transformation. 

By viewing singular Lie groupoids as singular local Lie groupoids, it follows that Theorem 1.1 is a direct corollary of Theorem 5.5. Although the definition of the Lie functor above is a bit abstract, computing this functor is not too difficult for many kinds of singular Lie groupoids. Let us fix a choice of functor \(\widehat{\mathbf{Lie}}\) satisfying Theorem 5.5. Suppose \(\mathcal{G}\) is a singular Lie groupoid and let \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) be a wide local groupoid chart. Notice that if we apply the functor \(\widehat{\mathbf{Lie}}\) to such a local groupoid chart we get an isomorphism:

\[\widehat{\mathbf{Lie}}[\pi]\colon\mathbf{Lie}(\widetilde{\mathcal{G}})\to\widehat{\mathbf{Lie}}(\mathcal{G})\]

In other words, the Lie algebroid of the domain of any local groupoid chart is canonically isomorphic to \(\widehat{\mathbf{Lie}}(\mathcal{G})\). Therefore, we can compute \(\widehat{\mathbf{Lie}}\) by simply constructing a local groupoid chart and then applying the classical Lie functor.

**Example 5.6**.: Suppose \(G\) is a Lie group and suppose \(N\subseteq G\) is a totally disconnected normal subgroup. We observed earlier that \(G/N\) is a singular Lie groupoid (over a point) and the projection map \(\pi\colon G\to G/N\) is a local groupoid chart. Therefore, \(\widehat{\mathbf{Lie}}(G/N)\cong\widehat{\mathbf{Lie}}(G)\). This leads to some interesting calculations. For example, the group \(\mathbb{R}/\mathbb{Q}\) has rather degenerate topology, but \(\widehat{\mathbf{Lie}}(\mathbb{R}/\mathbb{Q})\cong\mathbb{R}\), so it has a perfectly acceptable Lie algebra.

### Example - singular Lie groupoids with integrable algebroids

This subsection is not necessary for proving our main theorems. However, it does tell us what a singular Lie groupoid with an integrable algebroid must look like. It generalizes the example of the singular Lie group.

**Lemma 5.7**.: _Suppose \(\widetilde{\mathcal{G}}\rightrightarrows M\) is a Lie groupoid and \(\mathcal{N}\subseteq\widetilde{\mathcal{G}}\) is a wide normal subgroupoid. We think of \(\mathcal{N}\) as a diffeological space via the subspace diffeology. Suppose further that \(\mathcal{N}\) has the following properties:_

(a) \(\mathcal{N}\) _includes only isotropy arrows. In other words,_ \(s|_{\mathcal{N}}=t|_{\mathcal{N}}\)_._
(b) _The smooth map_ \(s|_{\mathcal{N}}\colon\mathcal{N}\to M\) _is a local subduction._
(c) _For each_ \(x\in M\) _the fiber_ \(\mathcal{N}_{x}:=s^{-1}(x)\cap\mathcal{N}\) _is totally disconnected._

_Then \(\mathcal{G}:=\widetilde{\mathcal{G}}/\mathcal{N}\) with the quotient diffeology is a singular Lie groupoid._

Proof.: First we show that the projection map \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is quasi-etale.
The fibers of \(\pi\) are clearly totally disconnected due to property (c) above. From the definition of the quotient diffeology, we know \(\pi\) is a subduction, but we must show it is a local subduction. Suppose \(\phi\colon U_{\phi}\to\mathcal{G}\) is a plot and \(u_{0}\in U_{\phi}\) and \(g_{0}\in\widetilde{\mathcal{G}}\) are such that \(\phi(u_{0})=\pi(g_{0})\). Since \(\pi\) is a subduction, we know that there must exist a smooth function \(\psi\colon V\to\widetilde{\mathcal{G}}\) such that \(V\subseteq U_{\phi}\) is an open neighborhood of \(u_{0}\) and \(\pi\circ\psi=\phi|_{V}\). Let \(n\in\mathcal{N}\) be the unique element such that \(\psi(u_{0})\cdot n=g_{0}\). Now let \(\sigma\colon\mathcal{O}\to\mathcal{N}\) be a smooth section of \(s|_{\mathcal{N}}\) such that \(\mathcal{O}\) is an open neighborhood of \(s(n)\) and \(\sigma(s(n))=n\). Such a section exists since \(s|_{\mathcal{N}}\) is a local subduction. The function:

\[\widetilde{\phi}(u):=\psi(u)\cdot\sigma(s(\psi(u)))\]

will be well defined in an open neighborhood of \(u_{0}\) in \(U_{\phi}\). Furthermore, \(\pi\circ\widetilde{\phi}=\phi\) on that neighborhood and \(\widetilde{\phi}(u_{0})=g_{0}\).

Now we need to show that \(\pi\) satisfies (QE3). We will use the simplified criteria from Lemma 3.22. Suppose \(f\colon\mathcal{O}\to\widetilde{\mathcal{G}}\) is a smooth function defined on an open \(\mathcal{O}\subseteq\widetilde{\mathcal{G}}\) such that \(\pi\circ f=\pi\) and \(f(g_{0})=g_{0}\) for some point \(g_{0}\in\widetilde{\mathcal{G}}\). We need to show that \(f\) is a diffeomorphism in a neighborhood of \(g_{0}\). Let us set some notation: to avoid confusion, we will write \(\widetilde{s}\) and \(\widetilde{t}\) to denote the source and target maps for \(\widetilde{\mathcal{G}}\) and \(s\) and \(t\) to denote the source and target maps for \(\mathcal{G}\).

Assume without loss of generality that for all \(x\in M\) we have that \(\widetilde{t}^{-1}(x)\cap\mathcal{O}\) is connected. Consider the function:

\[\alpha\colon\mathcal{O}\to\mathcal{N}\qquad\alpha(g):=f(g)\cdot g^{-1}\]

Since the fibers of \(\mathcal{N}\to M\) are totally disconnected, it follows that, for all \(x\in M\), the restriction of \(\alpha\) to \(\widetilde{t}^{-1}(x)\cap\mathcal{O}\) is constant. In other words, \(\alpha\) is constant on target fibers. Therefore, there exists a unique function \(\sigma\colon\widetilde{t}(\mathcal{O})\to\mathcal{N}\) which is a section of the source map and with the property that:

\[\forall g\in\mathcal{O}\qquad\alpha(g)=\sigma(\widetilde{t}(g))\]

We can rewrite this to get that:

\[\forall g\in\mathcal{O}\qquad f(g)\cdot g^{-1}=\sigma(\widetilde{t}(g))\]

In other words:

\[\forall g\in\mathcal{O}\qquad f(g)=\sigma(\widetilde{t}(g))\cdot g\]

From this equation it follows that:

\[\forall g\in f(\mathcal{O})\qquad f^{-1}(g)=\sigma(\widetilde{t}(g))^{-1}\cdot g\]

Since \(f\) has a smooth inverse, \(f\) is a diffeomorphism onto its image. This shows that \(\pi\) is a quasi-etale chart.

To finish the proof, we need to show that the source and target maps of \(\mathcal{G}\) are **QUED**-submersions. Since the inverse map interchanges \(s\) and \(t\), it suffices to treat \(t\). By Theorem 3.21, it suffices to show that the fibers of \(t\) are quasi-etale. Let us fix \(x\in M\) and consider the projection:

\[\pi_{x}\colon\widetilde{t}^{-1}(x)\to t^{-1}(x)\]

We claim that \(\pi_{x}\) is a quasi-etale chart. A standard argument shows that \(\pi_{x}\) is a local subduction. Of course, the fibers of \(\pi_{x}\) are totally disconnected. To show \(\pi_{x}\) is quasi-etale, we will use the simplified criteria from Lemma 3.22.
Suppose \(f\colon\mathcal{O}\to\widetilde{t}^{-1}(x)\) is a smooth function defined on an open \(\mathcal{O}\subseteq\widetilde{t}^{-1}(x)\) such that \(\pi_{x}\circ f=\pi_{x}\) and \(f(g_{0})=g_{0}\) for some point \(g_{0}\in\widetilde{t}^{-1}(x)\). We need to show that \(f\) is a diffeomorphism in a neighborhood of \(g_{0}\). The argument that this \(f\) is a local diffeomorphism is essentially identical to the one from earlier in this proof: we divide \(f\) by the identity map and observe that \(f\) must locally be left translation by an element of \(\mathcal{N}\). 

The converse to the above lemma is also true. Every source connected singular Lie groupoid with an integrable Lie algebroid is the quotient of a Lie groupoid by a totally disconnected wide normal subgroupoid.

**Lemma 5.8**.: _Suppose \(\mathcal{G}\rightrightarrows M\) is a source connected singular Lie groupoid and \(\widehat{\mathbf{Lie}}(\mathcal{G})\) is an integrable Lie algebroid. Then there exists a Lie groupoid \(\widetilde{\mathcal{G}}\) with a wide normal subgroupoid \(\mathcal{N}\) satisfying properties (a), (b) and (c) from the previous lemma such that \(\mathcal{G}\simeq\widetilde{\mathcal{G}}/\mathcal{N}\)._

Proof.: Let \(\pi^{\circ}\colon\widetilde{\mathcal{G}}^{\circ}\to\mathcal{G}\) be a wide local groupoid chart. By a theorem of Fernandes and Michiels [13], it is possible to choose \(\widetilde{\mathcal{G}}^{\circ}\) in such a way that \(\widetilde{\mathcal{G}}^{\circ}\) is an open subset of a source simply connected Lie groupoid \(\widetilde{\mathcal{G}}\). Using the associative completion functor of Fernandes and Michiels, one can extend the map \(\pi^{\circ}\) to a local groupoid chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) defined on all of \(\widetilde{\mathcal{G}}\). Since \(\pi\) is open and \(\mathcal{G}\) is source connected, it follows that \(\pi\) is surjective. Let \(\mathcal{N}=\ker\pi\). We only need to show that \(\mathcal{N}\) satisfies properties (a), (b) and (c).

Property (a) is immediate since \(\pi\) covers the identity map at the level of objects.

For property (b), suppose we have a plot \(\phi\colon U_{\phi}\to M\) and \(u_{0}\in U_{\phi}\) together with \(n\in\mathcal{N}\) such that \(s(n)=\phi(u_{0})\). Since the unit embedding \(u\colon M\to\mathcal{G}\) is smooth, the map \(u\circ\phi\colon U_{\phi}\to\mathcal{G}\) is a plot. Since \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a local subduction, there must exist an open neighborhood \(V\subseteq U_{\phi}\) of \(u_{0}\) and a lift \(\widetilde{\phi}\colon V\to\widetilde{\mathcal{G}}\) such that \(\pi\circ\widetilde{\phi}=u\circ\phi|_{V}\) and \(\widetilde{\phi}(u_{0})=n\). Since \(\pi\circ\widetilde{\phi}=u\circ\phi|_{V}\), it follows that \(s\circ\widetilde{\phi}=\phi|_{V}\) and the image of \(\widetilde{\phi}\) is contained in \(\mathcal{N}\). This exhibits the required local lift of \(\phi\) through \(s|_{\mathcal{N}}\), so property (b) holds.

Finally, property (c) holds because \(\mathcal{N}_{x}=\pi^{-1}(u(x))\) is a fiber of the quasi-etale map \(\pi\) and is therefore totally disconnected. 

## 6. Proof of Theorem 5.2 and Theorem 5.3

Suppose \(\mathcal{G}\rightrightarrows M\) is a local singular Lie groupoid. Following our usual convention, let us write \(s,t,u,m\) and \(i\) to denote the source, target, unit, multiplication and inverse groupoid structure maps for \(\mathcal{G}\rightrightarrows M\), respectively. We will also use:

\[\delta\colon\mathcal{D}\to\mathcal{G}\qquad(g,h)\mapsto m(g,i(h))\]

to denote the division map. Note that the domain of division:

\[\mathcal{D}:=\{(g,h)\in\mathcal{G}_{\,s}{\times}_{\,s}\,\mathcal{G}\ :\ m(g,i(h))\text{ is well-defined}\}\]

is an open neighborhood of the image of \(u\times u\colon M\to\mathcal{G}_{\,s}{\times}_{\,s}\,\mathcal{G}\).
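As a quick sanity check (our own illustration, not part of the original argument), consider a Lie group \(G\) regarded as a groupoid over a point. Every pair of arrows is then composable, so the division map is globally defined:

\[\mathcal{D}=G\times G\qquad\delta(g,h)=gh^{-1}\qquad\delta(g,g)=e=u(t(g))\]

For a genuinely local groupoid, by contrast, \(\mathcal{D}\) is only an open neighborhood of the diagonal of units. The normalization \(\delta(g,g)=u\circ t(g)\) is the property exploited in Lemma 6.1 below.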
It will be desirable for us to be able to compare elements of \(\mathcal{G}\) by dividing them. Our next lemma states that any local singular Lie groupoid is isomorphic to one where this is the case.

**Lemma 6.1**.: _Suppose \(\mathcal{G}^{\prime}\rightrightarrows M\) is a local singular Lie groupoid._

_Then \(\mathcal{G}^{\prime}\) is isomorphic (as a local groupoid) to a local singular Lie groupoid \(\mathcal{G}\rightrightarrows M\) with the property:_

\[\forall(g,h)\in\mathcal{D}\qquad\delta(g,h)=u\circ t(g)\quad\Leftrightarrow\quad g=h\]

Proof.: Consider the following calculation, which starts from the assumption \(gh^{-1}=u\circ t(g)\):

\[\begin{split}(gh^{-1})h&=(u\circ t(g))h\\ g(h^{-1}h)&=h\\ g(u\circ s(h))&=h\\ g&=h\end{split}\]

Let \(\mathcal{G}\subseteq\mathcal{G}^{\prime}\) be an open neighborhood of the units with the property that for all \(g,h\in\mathcal{G}\) we have that each step of the above calculation is well-defined. Since being well-defined is an open condition (it is just about being in the inverse image of open sets under some continuous functions), it follows that \(\mathcal{G}\) is an open set. \(\mathcal{G}\) will be an open neighborhood of the units since the above calculation is always well-defined for units. Now if we have that \(\delta(g,h)=u\circ t(g)\) for \(g,h\in\mathcal{G}\), it follows from the above calculation that \(g=h\). 

**Remark 6.2**.: The above proof can be generalized into the following principle: For any local groupoid and a finite number of equations that are consequences of the groupoid axioms, there exists an open local subgroupoid where the desired equation holds. Crucially, this only holds for a _finite_ number of equations, as an infinite intersection of open sets may not be open.

### Lifting the division map

We will begin by showing that, given a quasi-etale chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\), we can lift the division operation to \(\widetilde{\mathcal{G}}\) in a way that has favorable properties. Given a quasi-etale chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) we will use the conventions that \(\widetilde{s}:=s\circ\pi\) and \(\widetilde{t}:=t\circ\pi\). Since \(s\), \(t\) and \(\pi\) are local subductions, \(\widetilde{s}\) and \(\widetilde{t}\) will be submersions.

Rather than choosing local representations of the usual groupoid structure maps, we will begin by choosing a local representation of division. This is due to the fact that all of the remaining structure maps can be recovered from division. Indeed, (local) groupoids can be studied entirely in terms of their division map and source map (see the appendix of Crainic, Nuno Mestre, and Struchiner [14]). Our first lemma says that it is possible to find a lift of the division map:

**Lemma 6.3**.: _Let \(\mathcal{G}\) be a local singular Lie groupoid and let us fix a point \(x_{0}\in M\)._
_Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a quasi-etale chart and we have a point \(e\in\widetilde{\mathcal{G}}\) such that \(\pi(e)=u(x_{0})\)._ 

_There exists an open neighborhood \(\widetilde{\mathcal{D}}\subseteq\widetilde{\mathcal{G}}_{\,\widetilde{s}}{\times}_{\,\widetilde{s}}\widetilde{\mathcal{G}}\) of \((e,e)\) together with a submersion \(\widetilde{\delta}\colon\widetilde{\mathcal{D}}\to\widetilde{\mathcal{G}}\) such that the following diagram commutes:_ \[\pi\circ\widetilde{\delta}=\delta\circ(\pi\times\pi)\big{|}_{\widetilde{\mathcal{D}}} \tag{6.3.1}\] 

Proof.: Note that since \(s\) is a **QUED**-submersion, it follows that \(\mathcal{G}_{\,s}{\times}_{\,s}\,\mathcal{G}\) is quasi-etale and the map: \[\pi\circ\operatorname{pr}_{1}\times\pi\circ\operatorname{pr}_{2}\colon\widetilde{\mathcal{G}}_{\,\widetilde{s}}{\times}_{\,\widetilde{s}}\,\widetilde{\mathcal{G}}\to\mathcal{G}_{\,s}{\times}_{\,s}\,\mathcal{G}\] is a quasi-etale chart. Since the domain of division, \(\mathcal{D}\), is an open subset of a quasi-etale space, it follows that \(\mathcal{D}\) is quasi-etale as well. Since \(\pi\) is a local subduction, it is possible to choose a local representation of the division map \(\widetilde{\delta}\colon\widetilde{\mathcal{D}}\to\widetilde{\mathcal{G}}\), where \(\widetilde{\mathcal{D}}\subseteq\widetilde{\mathcal{G}}_{\,\widetilde{s}}{\times}_{\,\widetilde{s}}\,\widetilde{\mathcal{G}}\) is an open neighborhood of \((e,e)\), which makes Diagram 6.3.1 commute. Furthermore, we can apply Lemma 3.9 to choose \(\widetilde{\delta}\) in such a way that \(\widetilde{\delta}(e,e)=e\). Since \(\delta\) is a local subduction, Proposition 3.10 tells us that \(\widetilde{\delta}\) will be a submersion. 

The core of our proof of the existence of local groupoid charts will be the fact that a map \(\widetilde{\delta}\) as above will always be (in some open neighborhood) the division map for a local groupoid. Our next lemma tells us that a choice of representation of the division map also induces a lift of the unit embedding. 

**Lemma 6.4**.: _Let \(\mathcal{G}\) be a singular Lie groupoid and let us fix a point \(x_{0}\in M\). Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a quasi-etale chart and we have a point \(e\in\widetilde{\mathcal{G}}\) such that \(\pi(e)=u(x_{0})\). Let \(\widetilde{\delta}\) be a representation of division as in Lemma 6.3._ 

_There exists a smooth function_ \[\widetilde{u}\colon\widetilde{M}\to\widetilde{\mathcal{G}}\] _where \(\widetilde{M}\) is an open neighborhood of \(x_{0}\) in \(M\) and such that:_ 

* \(\pi\circ\widetilde{u}=u\) 
* \(\widetilde{u}(x_{0})=e\) 
* \(\forall x\in\widetilde{M}\) _we have that_ \(\widetilde{\delta}(\widetilde{u}(x),\widetilde{u}(x))=\widetilde{u}(x)\)_._ 

Proof.: First let \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) be an open neighborhood of \(e\) such that for all \(g\in\mathcal{U}\), we have that \((g,g)\in\widetilde{\mathcal{D}}\). In other words, \((g,g)\) is in the domain of \(\widetilde{\delta}\). Write \(\Delta\colon\mathcal{U}\to\widetilde{\mathcal{D}}\) to denote the diagonal embedding. Now, note that for all \(g\in\mathcal{U}\), compatibility of \(\widetilde{\delta}\) with \(\pi\) implies that: \[\widetilde{t}(g)=(\widetilde{s}\circ\widetilde{\delta}\circ\Delta)(g)\] Since \(\widetilde{t}\) is a surjective submersion, this implies that \(\widetilde{\delta}\circ\Delta\) has rank at least equal to the dimension of the object manifold \(M\). On the other hand, if \(g(t)\) is a curve in \(\mathcal{U}\) tangent to a fiber of \(\widetilde{t}|_{\mathcal{U}}\), then it follows that \((\pi\circ\widetilde{\delta}\circ\Delta)(g(t))\) is a constant path. 
Since the fibers of \(\pi\) are totally disconnected (\(\pi\) being quasi-etale), it follows that \((\widetilde{\delta}\circ\Delta)(g(t))\) is itself a constant path; in other words, \(g(t)\) stays in a single fiber of \(\widetilde{\delta}\circ\Delta\). This implies that the kernel distribution of \(\widetilde{t}|_{\mathcal{U}}\) is contained in the kernel distribution of \(\widetilde{\delta}\circ\Delta\). In other words: \[\ker T\widetilde{t}\subseteq\ker T(\widetilde{\delta}\circ\Delta)\] Since the rank of \(\widetilde{\delta}\circ\Delta\) is at least equal to the rank of \(\widetilde{t}|_{\mathcal{U}}\), a dimension count tells us that the kernel distributions are actually equal. Now let us shrink \(\mathcal{U}\) to a smaller neighborhood of \(e\in\widetilde{\mathcal{G}}\) with the property that the fibers of \(\widetilde{t}|_{\mathcal{U}}\colon\mathcal{U}\to M\) and \(\widetilde{\delta}\circ\Delta\colon\mathcal{U}\to\widetilde{\mathcal{G}}\) coincide. This means that for all \(g,h\in\mathcal{U}\), we have that: \[\widetilde{t}(g)=\widetilde{t}(h)\quad\Leftrightarrow\quad\widetilde{\delta}(g,g)=\widetilde{\delta}(h,h) \tag{6.4.1}\] From all of these facts, we conclude that \(\widetilde{t}\) is a diffeomorphism when restricted to the image of \(\widetilde{\delta}\circ\Delta\). In other words, the image of \(\widetilde{\delta}\circ\Delta\) must be the image of a section \(\widetilde{u}\colon\widetilde{M}\to\widetilde{\mathcal{G}}\) of \(\widetilde{t}\). Compatibility of \(\pi\) with \(\widetilde{\delta}\) implies that \(\pi\circ\widetilde{u}=u\). Furthermore, since \(\widetilde{\delta}(e,e)=e\) we know that \(\widetilde{u}(x_{0})=e\). Finally, for any \(x\in\widetilde{M}\), we have that: \[\widetilde{t}(\widetilde{u}(x))=\widetilde{t}(\widetilde{\delta}(\widetilde{u}(x),\widetilde{u}(x)))\] By (6.4.1) we conclude that \(\widetilde{\delta}(\widetilde{u}(x),\widetilde{u}(x))=\widetilde{u}(x)\). 

The final lemma for this subsection tells us that a lift of the division map (as in the above lemmas) can be used as an equality test: 

**Lemma 6.5**.: _Let \(\mathcal{G}\) be a local singular Lie groupoid and let us fix a point \(x_{0}\in M\). Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a quasi-etale chart and we have a point \(e\in\widetilde{\mathcal{G}}\) such that \(\pi(e)=u(x_{0})\). Let \(\widetilde{\delta}\colon\widetilde{\mathcal{D}}\to\widetilde{\mathcal{G}}\) be a representation of division as in Lemma 6.3 and \(\widetilde{u}\colon\widetilde{M}\to\widetilde{\mathcal{G}}\) be a lift of the units as in Lemma 6.4._ 

_There exists an open neighborhood \(\mathcal{O}\subseteq\widetilde{\mathcal{G}}\) of \(e\) with the following properties:_ 

* \(\mathcal{O}\times_{\widetilde{s},\widetilde{s}}\mathcal{O}\subseteq\widetilde{\mathcal{D}}\)__ 
* _For all_ \((g,h)\in\mathcal{O}\times_{\widetilde{s},\widetilde{s}}\mathcal{O}\) _we have that_ \(g=h\) _if and only if_ \(\widetilde{\delta}(g,h)=\widetilde{u}(\widetilde{t}(g))\)_._ 

Proof.: First observe that \(\widetilde{u}(\widetilde{M})\) is a cross section of a submersion and is therefore an embedded submanifold. Since \(\widetilde{\delta}\) is a submersion, we know that the inverse image \(\widetilde{\delta}^{-1}(\widetilde{u}(\widetilde{M}))\) is an embedded submanifold of dimension equal to the dimension of \(\widetilde{\mathcal{G}}\). Now let \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) and \(\Delta\colon\mathcal{U}\to\widetilde{\mathcal{D}}\) be as in the proof of Lemma 6.4. 
From the construction of \(\widetilde{u}\) we know that: \[\Delta(\mathcal{U})\subseteq\widetilde{\delta}^{-1}(\widetilde{u}(\widetilde{M}))\] By a dimension count, \(\Delta(\mathcal{U})\) is actually an open subset of \(\widetilde{\delta}^{-1}(\widetilde{u}(\widetilde{M}))\) containing \((e,e)\). Now let \(\mathcal{O}\subseteq\widetilde{\mathcal{G}}\) be an open neighborhood of \(e\) such that: \[(\mathcal{O}\times_{\widetilde{s},\widetilde{s}}\mathcal{O})\cap\widetilde{\delta}^{-1}(\widetilde{u}(\widetilde{M}))\subseteq\Delta(\mathcal{U})\] We claim that this is the desired open subset. Suppose \(g,h\in\mathcal{O}\) have the same source and \(\widetilde{\delta}(g,h)=\widetilde{u}(\widetilde{t}(g))\). Then: \[(g,h)\in(\mathcal{O}\times_{\widetilde{s},\widetilde{s}}\mathcal{O})\cap\widetilde{\delta}^{-1}(\widetilde{u}(\widetilde{M}))\] Therefore \((g,h)\in\Delta(\mathcal{U})\) and \(g=h\). 

### Division structures and comparing maps 

It will be useful to formalize the properties from the previous three lemmas into a definition. 

**Definition 6.6**.: Let \(\mathcal{G}\rightrightarrows M\) be a local singular Lie groupoid. Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a quasi-etale chart. A _division structure_ on \(\pi\) consists of the following: 

* A smooth function: \[\widetilde{u}\colon\widetilde{M}\to\widetilde{\mathcal{G}}\] where \(\widetilde{M}=\widetilde{s}(\widetilde{\mathcal{G}})\subseteq M\). 
* A smooth function: \[\widetilde{\delta}\colon\widetilde{\mathcal{D}}\to\widetilde{\mathcal{G}}\] where \(\widetilde{\mathcal{D}}\subseteq\widetilde{\mathcal{G}}_{\widetilde{s}}\times_{\widetilde{s}}\widetilde{\mathcal{G}}\) is an open neighborhood of the image of \(\widetilde{u}\times\widetilde{u}\). 

We require these two functions to satisfy the following properties: 

1. \(\widetilde{\delta}\) is a representation of division. In other words, we have: \[\pi\circ\widetilde{\delta}=\delta\circ(\pi\times\pi)\big{|}_{\widetilde{\mathcal{D}}}\] 
2. \(\widetilde{u}\) is a lift of the units. In other words, \(\pi\circ\widetilde{u}=u\big{|}_{\widetilde{M}}\). 
3. For all \((g,h)\in\widetilde{\mathcal{D}}\) we have that: \[\widetilde{\delta}(g,h)=\widetilde{u}\circ\widetilde{t}(g)\quad\Leftrightarrow\quad g=h\] 

The combined effect of Lemma 6.3, Lemma 6.4 and Lemma 6.5 is the claim that, around any point \(x_{0}\in M\), we can find a quasi-etale chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) around \(u(x_{0})\) equipped with a division structure. The power of division structures is that they enable us to develop very useful tests for equality of certain maps. First let us establish some notation. Given a natural number \(n\) and \(U\subseteq\widetilde{\mathcal{G}}\) open, we will write: \[U^{(n)}:=\overbrace{U\times_{\widetilde{s},\widetilde{t}}U\times_{\widetilde{s},\widetilde{t}}\cdots\times_{\widetilde{s},\widetilde{t}}U}^{n\text{-times}}\] In other words, \(U^{(n)}\) is the set of \(n\)-tuples of "composable" elements of \(U\). Our next lemma provides us with a way of determining when exactly a function on the composable arrows takes values only in units. 
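Before stating it, here is a quick illustration of the notation \(U^{(n)}\) in the ordinary, non-singular setting. In the pair groupoid \(M\times M\rightrightarrows M\), with \(s(x,y)=y\) and \(t(x,y)=x\), taking \(U=M\times M\), an element of \(U^{(n)}\) is a chain \[\big{(}(x_{0},x_{1}),(x_{1},x_{2}),\dots,(x_{n-1},x_{n})\big{)},\] since the composability condition \(s(g_{i})=t(g_{i+1})\) forces consecutive coordinates to match. 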
**Lemma 6.7**.: _Suppose \(\mathcal{G}\rightrightarrows M\) is a local singular Lie groupoid and \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a quasi-etale chart equipped with a division structure._ 

_Suppose \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) is an open neighborhood of the image of \(\widetilde{u}\). Given a natural number \(n\), suppose that we have a smooth function:_ \[F\colon\mathcal{U}^{(n)}\to\widetilde{\mathcal{G}}\] _with the following properties:_ 

* _The image of_ \(\pi\circ F\) _contains only unit arrows in_ \(\mathcal{G}\)_._ 
* \(\widetilde{t}\circ F=\widetilde{t}\circ\operatorname{pr}_{1}\)__ 
* _For all_ \(x\in\widetilde{u}^{-1}(\mathcal{U})\)_, we have that_ \(F(\widetilde{u}(x),\dots,\widetilde{u}(x))=\widetilde{u}(x)\)__ 

_Then there exists an open neighborhood \(\mathcal{O}\subseteq\mathcal{U}\) of the image of \(\widetilde{u}\) with the property that for all \((g_{1},\dots,g_{n})\in\mathcal{O}^{(n)}\) we have that_ \[F(g_{1},\dots,g_{n})=(\,\widetilde{u}\circ\widetilde{t}\,)(g_{1})\] _In particular, on a small enough open neighborhood of the image of \(\widetilde{u}\), the function \(F\) takes values only in the image of \(\widetilde{u}\)._ 

Proof.: Notice that since \(\widetilde{t}\circ\operatorname{pr}_{1}\) is a submersion and \[\widetilde{t}\circ F=\widetilde{t}\circ\operatorname{pr}_{1}\] this implies that \(F\) has rank at least equal to the rank of \(\widetilde{t}\circ\operatorname{pr}_{1}\), which is equal to the dimension of \(M\). Now we claim that the kernel distribution of \(TF\) contains the kernel distribution of \(T(\widetilde{t}\circ\operatorname{pr}_{1})\). By a dimension count, this will imply their kernel distributions are equal. To see this, suppose \(\gamma\) is a path tangent to the kernel distribution of \(T(\widetilde{t}\circ\operatorname{pr}_{1})\). We know that \(\widetilde{t}\circ F\circ\gamma\) is constant. Furthermore, since \(\pi\circ F\) takes values only in unit elements, we can conclude that \(F\circ\gamma\) is a path in the \(\pi\)-fiber of a unit element in \(\mathcal{G}\). However, the fibers of \(\pi\) are totally disconnected, and so this implies that \(F\circ\gamma\) is a constant path. Therefore, \(F\) and \(\widetilde{t}\circ\operatorname{pr}_{1}\) are constant rank maps with identical kernel distributions. 

Now, notice that the map: \[\widetilde{u}^{(n)}\colon\widetilde{M}\to\widetilde{\mathcal{G}}^{(n)}\qquad x\mapsto(\widetilde{u}(x),\widetilde{u}(x),\dots,\widetilde{u}(x))\] is a section of \(\widetilde{t}\circ\operatorname{pr}_{1}\). By the local normal form theorem for submersions around a section, it is possible to find an open neighborhood \(\mathcal{W}\) of the image of \(\widetilde{u}^{(n)}\) in the domain of \(F\) with the property that the fibers of \(\widetilde{t}\circ\operatorname{pr}_{1}\) and \(F\) are connected and coincide. Now we claim that \[F|_{\mathcal{W}}=\widetilde{u}\circ\widetilde{t}\circ\operatorname{pr}_{1}|_{\mathcal{W}}\] To see why, first notice that the fibers of these two functions coincide. Furthermore \[F(\widetilde{u}(x),\dots,\widetilde{u}(x))=\widetilde{u}(x)=\widetilde{u}\circ\widetilde{t}\circ\operatorname{pr}_{1}(\widetilde{u}(x),\dots,\widetilde{u}(x))\] Since the fibers of \(F|_{\mathcal{W}}\) and \(\widetilde{u}\circ\widetilde{t}\circ\operatorname{pr}_{1}|_{\mathcal{W}}\) are equal and they agree on one element of each fiber, the two functions are equal. 
Therefore, the proof is completed by choosing an open neighborhood \(\mathcal{O}\subseteq\mathcal{U}\) of the image of \(\widetilde{u}\) with the property that \(\mathcal{O}^{(n)}\subseteq\mathcal{W}\). 

Our next lemma is an upgrade of the previous one: it provides us with a very useful equality test. 

**Lemma 6.8**.: _Suppose \(\mathcal{G}\rightrightarrows M\) is a local singular Lie groupoid and \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a quasi-etale chart equipped with a division structure._ 

_Suppose \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) is an open neighborhood of the image of \(\widetilde{u}\). Given a natural number \(n\), suppose that we have a pair of smooth functions:_ \[\alpha\colon\mathcal{U}^{(n)}\to\widetilde{\mathcal{G}}\qquad\beta\colon\mathcal{U}^{(n)}\to\widetilde{\mathcal{G}}\] _with the following properties:_ 

* \(\pi\circ\alpha=\pi\circ\beta\)__ 
* \(\widetilde{t}\circ\alpha=\widetilde{t}\circ\operatorname{pr}_{1}\)__ 
* _For all_ \(x\in\widetilde{u}^{-1}(\mathcal{U})\)_, we have that_ \[\alpha(\widetilde{u}(x),\dots,\widetilde{u}(x))=\beta(\widetilde{u}(x),\dots,\widetilde{u}(x))=\widetilde{u}(x)\] 

_Then there exists an open neighborhood \(\mathcal{O}\subseteq\mathcal{U}\) of the image of \(\widetilde{u}\) such that \(\alpha|_{\mathcal{O}^{(n)}}=\beta|_{\mathcal{O}^{(n)}}\)_ 

Proof.: Let: \[F\colon\mathcal{U}^{(n)}\to\widetilde{\mathcal{G}}\qquad F(g_{1},\dots,g_{n}):=\widetilde{\delta}(\alpha(g_{1},\dots,g_{n}),\beta(g_{1},\dots,g_{n}))\] We claim that \(F\) satisfies the hypotheses of Lemma 6.7. For the first bullet point, note that for \(\overline{g}\in\mathcal{U}^{(n)}\): \[\pi\circ F(\overline{g})=\delta(\pi\circ\alpha(\overline{g}),\pi\circ\beta(\overline{g}))\] Since we have assumed that \(\pi\circ\alpha=\pi\circ\beta\), dividing them in \(\mathcal{G}\) results in a unit element. Therefore \(\pi\circ F\) takes values only in unit elements. For the second bullet point, note that since \(\widetilde{t}\circ\widetilde{\delta}=\widetilde{t}\circ\operatorname{pr}_{1}\), it follows that \(\widetilde{t}\circ F=\widetilde{t}\circ\alpha\). By assumption, \(\widetilde{t}\circ\alpha=\widetilde{t}\circ\operatorname{pr}_{1}\), so the bullet point holds. Finally, given \(x\in\widetilde{M}\), we have that: \[F(\widetilde{u}(x),\dots,\widetilde{u}(x))=\widetilde{\delta}(\alpha(\widetilde{u}(x),\dots,\widetilde{u}(x)),\beta(\widetilde{u}(x),\dots,\widetilde{u}(x)))=\widetilde{\delta}(\widetilde{u}(x),\widetilde{u}(x))=\widetilde{u}(x)\] Since \(F\) satisfies the hypotheses of Lemma 6.7, it follows that there exists an open neighborhood \(\mathcal{O}\subseteq\mathcal{U}\) of the image of \(\widetilde{u}\) with the property that: \[\forall(g_{1},\dots,g_{n})\in\mathcal{O}^{(n)}\qquad\widetilde{\delta}(\alpha(g_{1},\dots,g_{n}),\beta(g_{1},\dots,g_{n}))=\widetilde{u}(\widetilde{t}(g_{1}))\] By Lemma 6.5, we conclude that: \[\forall(g_{1},\dots,g_{n})\in\mathcal{O}^{(n)}\qquad\alpha(g_{1},\dots,g_{n})=\beta(g_{1},\dots,g_{n})\] 

### Local existence of charts 

**Proposition 6.9** (Existence of local groupoid charts).: _Suppose \(\mathcal{G}\rightrightarrows M\) is a singular Lie groupoid and let \(x_{0}\in M\) be fixed. There exists a local groupoid chart \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) around \(x_{0}\)._ 

Proof.: Let \(\pi\colon\widetilde{\mathcal{G}}^{\prime}\to\mathcal{G}\) be a quasi-etale chart equipped with a division structure around \(x_{0}\). 
We saw earlier that such charts exist. Now let: \[\widetilde{i}\colon\mathcal{I}\to\widetilde{\mathcal{G}}^{\prime}\qquad g\mapsto\widetilde{\delta}(\widetilde{u}(\widetilde{s}(g)),g)\] \[\widetilde{m}\colon\mathcal{M}\to\widetilde{\mathcal{G}}^{\prime}\qquad(g,h)\mapsto\widetilde{\delta}(g,\widetilde{i}(h))\] where \(\mathcal{I}\subseteq\widetilde{\mathcal{G}}^{\prime}\) and \(\mathcal{M}\subseteq\widetilde{\mathcal{G}}^{\prime}\times_{\widetilde{s},\widetilde{t}}\widetilde{\mathcal{G}}^{\prime}\) are the maximal open sets which make these maps well-defined. We claim that there exists an open neighborhood \(\widetilde{\mathcal{G}}\subseteq\widetilde{\mathcal{G}}^{\prime}\) of the image of \(\widetilde{u}\) which makes the above maps a local groupoid structure on \(\widetilde{\mathcal{G}}\) such that \(\pi|_{\widetilde{\mathcal{G}}}\) is a local groupoid chart. We will begin by proving that \(\pi\) is compatible with these structure maps. 

1. (Compatibility with source and target) By definition \(\widetilde{s}=s\circ\pi\) and \(\widetilde{t}=t\circ\pi\), so this condition is automatic. 
2. (Compatibility with multiplication) Consider the following computation, where we apply compatibility of \(\widetilde{\delta}\) with \(\pi\) multiple times: \[\pi\circ\widetilde{m}(g,h)=\pi\circ\widetilde{\delta}(g,\widetilde{\delta}(\widetilde{u}\circ\widetilde{s}(h),h))=\delta(\pi(g),\pi\circ\widetilde{\delta}(\widetilde{u}\circ\widetilde{s}(h),h))=\delta(\pi(g),\delta(\pi\circ\widetilde{u}\circ\widetilde{s}(h),\pi(h)))=\delta(\pi(g),\delta(u\circ s(\pi(h)),\pi(h)))\] Since \(\mathcal{G}\) is a local groupoid, there exists an open neighborhood of the units where \[\delta(\pi(g),\delta(u\circ s(\pi(h)),\pi(h)))=m(\pi(g),\pi(h))\] A similar calculation to what we have done above tells us that we can find an open neighborhood of the units \(\widetilde{\mathcal{G}}\) where: \[\pi(\widetilde{i}(g))=i(\pi(g))\] We can therefore assume without loss of generality that the whole ambient space \(\widetilde{\mathcal{G}}^{\prime}\) has the property that \(\widetilde{m}\) and \(\widetilde{i}\) are compatible with \(\pi\). 

Now we will show the axioms (LG1-6) of a local groupoid must each hold in an open neighborhood of the image of \(\widetilde{u}\). 

1. (Compatibility of source and target with unit) We will do the computation for the source, as the proof for the target is symmetrical: \[\widetilde{s}\circ\widetilde{u}=s\circ\pi\circ\widetilde{u}=s\circ u=\operatorname{id}\] 
2. (Compatibility of source and target with multiplication) We will do the computation for the source, as the proof for the target is symmetrical: \[\widetilde{s}\circ\widetilde{m}(g,h)=s\circ\pi\circ\widetilde{m}(g,h)=s\circ m(\pi(g),\pi(h))=s\circ\pi(h)=\widetilde{s}(h)\] 
3. (Compatibility of source and target with inverse) \[\widetilde{s}\circ\widetilde{i}(g)=s\circ\pi\circ\widetilde{i}(g)=s\circ i\circ\pi(g)=t\circ\pi(g)=\widetilde{t}(g)\] 
4. (Left and right unit laws) We will show the left unit law. First, observe that one can choose an open neighborhood \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) of the image of \(\widetilde{u}\) such that \((\widetilde{u}(\widetilde{t}(g)),g)\in\mathcal{M}\) for all \(g\in\mathcal{U}\). Given such a \(\mathcal{U}\), consider the two functions: \[\alpha\colon\mathcal{U}\to\widetilde{\mathcal{G}}\qquad g\mapsto\widetilde{m}(\widetilde{u}(\widetilde{t}(g)),g)\] \[\beta\colon\mathcal{U}\to\widetilde{\mathcal{G}}\qquad g\mapsto g\] Since \(\pi\) preserves our structure maps and \(\mathcal{G}\) satisfies the left unit law, it follows that \(\pi\circ\alpha=\pi\circ\beta\). Furthermore, \(\widetilde{t}\circ\alpha=\widetilde{t}\). 
Lastly, if \(x\in\widetilde{M}\) we have that: \[\alpha(\widetilde{u}(x))=\widetilde{m}(\widetilde{u}(x),\widetilde{u}(x))=\widetilde{u}(x)\] Therefore, the pair \(\alpha\) and \(\beta\) satisfies the hypotheses of Lemma 6.8 (in the case where \(n=1\)), which establishes the left unit law. 

* (Left and right inverse laws) We will show the proof for the right inverse law; the left inverse case is symmetrical. As in the proof of the left unit law, consider a pair of functions: \[\alpha\colon\mathcal{U}\to\widetilde{\mathcal{G}}\qquad g\mapsto\widetilde{m}(g,\widetilde{i}(g))\] \[\beta\colon\mathcal{U}\to\widetilde{\mathcal{G}}\qquad g\mapsto\widetilde{u}(\widetilde{t}(g))\] where \(\mathcal{U}\) is an open neighborhood of the image of \(\widetilde{u}\) that makes \(\alpha\) and \(\beta\) well-defined. Observe that since \(\pi\) is compatible with our structure maps and \(\mathcal{G}\) satisfies the inverse axiom, it follows that \(\pi\circ\alpha=\pi\circ\beta\). Furthermore, \(\widetilde{t}\circ\alpha=\widetilde{t}\). Finally, if \(x\in\widetilde{M}\) we have that: \[\alpha(\widetilde{u}(x))=\widetilde{m}(\widetilde{u}(x),\widetilde{i}(\widetilde{u}(x)))=\widetilde{m}(\widetilde{u}(x),\widetilde{u}(x))=\widetilde{u}(x)=\beta(\widetilde{u}(x))\] Therefore, \(\alpha\) and \(\beta\) satisfy the hypotheses of Lemma 6.8, and it follows that \(\alpha=\beta\) in an open neighborhood of the image of \(\widetilde{u}\). 

* (Associativity law) As with the previous two axioms, we consider a pair of functions: \[\alpha\colon\mathcal{U}^{(3)}\to\widetilde{\mathcal{G}}\qquad(g,h,k)\mapsto\widetilde{m}(g,\widetilde{m}(h,k))\] \[\beta\colon\mathcal{U}^{(3)}\to\widetilde{\mathcal{G}}\qquad(g,h,k)\mapsto\widetilde{m}(\widetilde{m}(g,h),k)\] where \(\mathcal{U}\) is an open neighborhood of the image of \(\widetilde{u}\), chosen in such a way that \(\alpha\) and \(\beta\) are well-defined. Since \(\pi\) preserves these structure maps and \(\mathcal{G}\) satisfies associativity, it follows that \(\pi\circ\alpha=\pi\circ\beta\). Furthermore, \(\widetilde{t}\circ\alpha=\widetilde{t}\circ\operatorname{pr}_{1}\). Finally, given \(x\in\widetilde{M}\) we have that: \[\alpha(\widetilde{u}(x),\widetilde{u}(x),\widetilde{u}(x))=\widetilde{m}(\widetilde{u}(x),\widetilde{m}(\widetilde{u}(x),\widetilde{u}(x)))=\widetilde{m}(\widetilde{u}(x),\widetilde{u}(x))=\widetilde{u}(x)\] A similar calculation holds for \(\beta\). Therefore, it follows that \(\alpha\) and \(\beta\) satisfy the hypotheses of Lemma 6.8, and therefore \(\alpha=\beta\) in some open neighborhood of the image of \(\widetilde{u}\). 

At this point, we have proved a local (non-wide) form of the existence part of Theorem 5.2. That is, every singular Lie groupoid admits a local groupoid chart around any given object. 

### Uniqueness of charts 

Before we explain why "wide" local groupoid charts exist, we first need to prove a local form of the uniqueness portion of Theorem 5.2. First, we will observe that the local groupoid structure on a local groupoid chart is uniquely determined (in a neighborhood of the units) by its unit embedding. 

**Lemma 6.10**.: _Let \(\mathcal{G}\rightrightarrows M\) be a singular Lie groupoid._ 
_Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a quasi-etale chart and we have two different local groupoid structures on \(\widetilde{\mathcal{G}}\) which both make \(\pi\) into a local groupoid chart and which have the same set of units \(\widetilde{M}\subseteq M\)._ 

_Let_ \[\widetilde{u}\colon\widetilde{M}\to\widetilde{\mathcal{G}}\qquad\widetilde{u}^{\prime}\colon\widetilde{M}\to\widetilde{\mathcal{G}}\] _denote the two unit embeddings. Then for all \(x_{0}\in\widetilde{M}\) the local groupoid structures on \(\widetilde{\mathcal{G}}\) are equal in a neighborhood of \(\widetilde{u}(x_{0})\) if and only if \(\widetilde{u}\) and \(\widetilde{u}^{\prime}\) are equal in a neighborhood of \(x_{0}\)._ 

Proof.: Let \(\widetilde{m}\) and \(\widetilde{m}^{\prime}\) denote the two multiplication maps and let \(x_{0}\in\widetilde{M}\) be fixed. One direction is clear: if the two local groupoid structures are equal, then they must have the same unit embedding. Therefore, we only need to show that if the unit embeddings agree near \(x_{0}\), then \(\widetilde{m}\) and \(\widetilde{m}^{\prime}\) are equal in an open neighborhood of \(e:=\widetilde{u}(x_{0})=\widetilde{u}^{\prime}(x_{0})\). By assumption, \(\pi\circ\widetilde{m}=\pi\circ\widetilde{m}^{\prime}\). Furthermore, \(\widetilde{t}\circ\widetilde{m}=\widetilde{t}\circ\operatorname{pr}_{1}\). Finally, observe that for all \(x\in\widetilde{M}\), we have that: \[\widetilde{m}(\widetilde{u}(x),\widetilde{u}(x))=\widetilde{u}(x)=\widetilde{m}^{\prime}(\widetilde{u}(x),\widetilde{u}(x))\] Therefore, the functions \(\alpha=\widetilde{m}\) and \(\beta=\widetilde{m}^{\prime}\) satisfy the hypotheses of Lemma 6.8, so they must be equal in a neighborhood of \(e\). 

Our next lemma tells us that we can always construct an isomorphism between any two local groupoid charts. 

**Lemma 6.11**.: _Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) and \(\pi^{\prime}\colon\widetilde{\mathcal{G}}^{\prime}\to\mathcal{G}\) are local groupoid charts around \(x_{0}\in M\). Let \(\widetilde{u}\) and \(\widetilde{u}^{\prime}\) be the respective unit embeddings. Then there exists an open neighborhood \(\mathcal{U}\) of \(x_{0}\) and an isomorphism of local groupoids \([F]\colon\widetilde{\mathcal{G}}|_{\mathcal{U}}\to\widetilde{\mathcal{G}}^{\prime}|_{\mathcal{U}}\) such that \([\pi]=[\pi^{\prime}]\circ[F]\)._ 

Proof.: Without loss of generality, we can assume that the two local groupoid charts have the same set of units \(\widetilde{M}\). Lemma 6.10 tells us that if two (local) groupoid structures on \(\widetilde{\mathcal{G}}^{\prime}\) are compatible with \(\pi^{\prime}\) and have the same units near \(\widetilde{u}^{\prime}(x_{0})\), then they are equal in a neighborhood of \(\widetilde{u}^{\prime}(x_{0})\). Therefore, it suffices to construct a diffeomorphism \(F\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{G}}^{\prime}\), defined in a neighborhood of \(\widetilde{u}(x_{0})\), such that \(\pi^{\prime}\circ F=\pi\) and \(F\circ\widetilde{u}=\widetilde{u}^{\prime}\) near \(x_{0}\). Such a diffeomorphism will automatically be compatible with the multiplication operations in some neighborhood of \(\widetilde{u}(x_{0})\). Since \(\pi\) and \(\pi^{\prime}\) are quasi-etale charts of the same diffeological space, we can assume without loss of generality that we have a diffeomorphism \(\widehat{F}\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{G}}^{\prime}\), defined near \(\widetilde{u}(x_{0})\), such that \(\pi^{\prime}\circ\widehat{F}=\pi\) and \(\widehat{F}(\widetilde{u}(x_{0}))=\widetilde{u}^{\prime}(x_{0})\). Let \(\widetilde{\delta}^{\prime}\) be the division map for \(\widetilde{\mathcal{G}}^{\prime}\). 
Now consider the function: \[F(g):=\widetilde{\delta}^{\prime}(\widehat{F}(g),\widehat{F}(\widetilde{u}(\widetilde{s}(g))))\] where \(\widetilde{s}\) is the source map for \(\widetilde{\mathcal{G}}\). Clearly \(F\) will be well-defined in a neighborhood of \(\widetilde{u}(x_{0})\). A direct calculation shows that \(F\circ\widetilde{u}=\widetilde{u}^{\prime}\) and \(\pi^{\prime}\circ F=\pi\). 

We will finish this section with a lemma in which we show that there exists exactly one isomorphism between any two given local groupoid charts. 

**Lemma 6.12**.: _Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) and \(\pi^{\prime}\colon\widetilde{\mathcal{G}}^{\prime}\to\mathcal{G}\) are local groupoid charts around \(x_{0}\in M\). If \(\mathcal{U}\subseteq M\) is an open neighborhood of \(x_{0}\) and \([F]\colon\widetilde{\mathcal{G}}|_{\mathcal{U}}\to\widetilde{\mathcal{G}}^{\prime}|_{\mathcal{U}}\) and \([G]\colon\widetilde{\mathcal{G}}|_{\mathcal{U}}\to\widetilde{\mathcal{G}}^{\prime}|_{\mathcal{U}}\) satisfy_ \[[\pi]=[\pi^{\prime}]\circ[F]\qquad[\pi]=[\pi^{\prime}]\circ[G]\] _then \([F]=[G]\)._ 

Proof.: We showed in the previous lemma that such an isomorphism exists; we must now show that it is unique. We can assume without loss of generality that \(\mathcal{U}=M\). We must show that \([F]\circ[G]^{-1}\colon\widetilde{\mathcal{G}}^{\prime}|_{\mathcal{U}}\to\widetilde{\mathcal{G}}^{\prime}|_{\mathcal{U}}\) is equal to the identity germ. Note that \([F]\circ[G]^{-1}\) satisfies \([\pi^{\prime}]=[\pi^{\prime}]\circ([F]\circ[G]^{-1})\), so we can reduce to the case where \(\widetilde{\mathcal{G}}^{\prime}=\widetilde{\mathcal{G}}\). Therefore, we are in a situation where we have a local groupoid map \([F]\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{G}}\) which is an isomorphism in a neighborhood of the units and is such that \(\pi\circ F=\pi\). We must show that \(F\) equals the identity. Note that we have \(\pi\circ F=\pi\circ\operatorname{Id}_{\widetilde{\mathcal{G}}}\), and also \(\widetilde{t}\circ F=\widetilde{t}\). Furthermore, if \(x\in\widetilde{M}\) we have that \(F(\widetilde{u}(x))=\widetilde{u}(x)\). Therefore, \(F\) and \(\operatorname{Id}_{\widetilde{\mathcal{G}}}\) satisfy the hypotheses of Lemma 6.8, and therefore \(F=\operatorname{Id}\) in a neighborhood of \(\widetilde{u}(x_{0})\). 

### Proof of Theorem 5.2 

Proof.: Suppose \(\mathcal{G}\rightrightarrows M\) is a singular Lie groupoid. First, we show uniqueness. Suppose \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) and \(\pi^{\prime}\colon\widetilde{\mathcal{G}}^{\prime}\to\mathcal{G}\) are wide local groupoid charts. By Lemma 6.11, around each point \(p\in M\) there exists an open neighborhood \(\mathcal{U}_{p}\subseteq M\) of \(p\) and an isomorphism \([\mathcal{F}_{p}]\colon\widetilde{\mathcal{G}}|_{\mathcal{U}_{p}}\to\widetilde{\mathcal{G}}^{\prime}|_{\mathcal{U}_{p}}\) with the property that \([\pi^{\prime}]\circ[\mathcal{F}_{p}]=[\pi]\). By Lemma 6.12, for any pair \(p,q\in M\) such that \(\mathcal{U}_{p}\cap\mathcal{U}_{q}\) is non-empty, it must be the case that \[[\mathcal{F}_{p}]|_{\widetilde{\mathcal{G}}|_{\mathcal{U}_{p}\cap\mathcal{U}_{q}}}=[\mathcal{F}_{q}]|_{\widetilde{\mathcal{G}}|_{\mathcal{U}_{p}\cap\mathcal{U}_{q}}}\] Recall that the category of local groupoids is equivalent to the category of Lie algebroids. Since morphisms of Lie algebroids form a sheaf (over the base manifold), it follows that morphisms of local groupoids also form a sheaf. 
Therefore, there must exist a unique local groupoid morphism \([\mathcal{F}]\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{G}}^{\prime}\) with the property that \([\pi^{\prime}]\circ[\mathcal{F}]=[\pi]\). 

Now we consider existence. By Proposition 6.9, we know that for all \(p\in M\) there must exist an open neighborhood \(\mathcal{U}_{p}\subseteq M\) of \(p\) together with a local groupoid \(\widetilde{\mathcal{G}}_{p}\rightrightarrows\mathcal{U}_{p}\) and a local groupoid chart \(\pi_{p}\colon\widetilde{\mathcal{G}}_{p}\to\mathcal{G}\). Furthermore, we remark that given any two \(p,q\in M\) such that \(\mathcal{U}_{p}\cap\mathcal{U}_{q}\neq\emptyset\), by the uniqueness part of the proof there must exist a unique isomorphism of local groupoid charts: \[[\mathcal{F}_{pq}]\colon\widetilde{\mathcal{G}}_{p}|_{\mathcal{U}_{p}\cap\mathcal{U}_{q}}\to\widetilde{\mathcal{G}}_{q}|_{\mathcal{U}_{p}\cap\mathcal{U}_{q}}\] that is compatible with the projections to \(\mathcal{G}\). Furthermore, on any triple intersection, the uniqueness of this isomorphism implies that a cocycle condition holds. In other words, for all \(p,q,r\in M\) such that \(\mathcal{U}_{p}\cap\mathcal{U}_{q}\cap\mathcal{U}_{r}\neq\emptyset\) we have: \[[\mathcal{F}_{qr}]\circ[\mathcal{F}_{pq}]=[\mathcal{F}_{pr}]\] Since local groupoids form a sheaf, we conclude that there must exist a local groupoid \(\widetilde{\mathcal{G}}\) equipped with isomorphisms: \[[\phi_{p}]\colon\widetilde{\mathcal{G}}|_{\mathcal{U}_{p}}\to\widetilde{\mathcal{G}}_{p}\] which are compatible with the transition maps in the standard way. Furthermore, we remark that for all \(p\in M\) we have a local groupoid chart: \[[\pi_{p}]\circ[\phi_{p}]\colon\widetilde{\mathcal{G}}|_{\mathcal{U}_{p}}\to\mathcal{G}\] Compatibility of the collection \(\{[\phi_{p}]\}_{p\in M}\) with the transition maps implies that the local groupoid charts \(\{[\pi_{p}]\circ[\phi_{p}]\}\) are compatible on double intersections. Again, since morphisms of local groupoids form a sheaf, there must exist a morphism of local groupoids \([\pi]\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) restricting to \([\pi_{p}]\circ[\phi_{p}]\) on each \(\mathcal{U}_{p}\). Since a local groupoid is canonically isomorphic to any neighborhood of its units, we can assume that \(\pi\) is globally defined. Furthermore, such a \(\pi\) will be locally quasi-etale since each of the \([\pi_{p}]\) is quasi-etale. Since the property of being quasi-etale is local, we conclude that \(\pi\) is quasi-etale and so \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a local groupoid chart. 

### Proving Theorem 5.3 

Before we prove Theorem 5.3, let us state a new version of the earlier equality test, upgraded to account for maps between groupoids. 

**Lemma 6.13**.: _Suppose \(\mathcal{G}\rightrightarrows M\) and \(\mathcal{H}\rightrightarrows N\) are local singular Lie groupoids and \(\pi_{\mathcal{G}}\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) and \(\pi_{\mathcal{H}}\colon\widetilde{\mathcal{H}}\to\mathcal{H}\) are local groupoid charts._ 

_Suppose \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) is an open neighborhood of the image of \(\widetilde{u}\)._ 
_Given a natural number \(n\), suppose that we have a pair of smooth functions:_ \[\alpha\colon\mathcal{U}^{(n)}\to\widetilde{\mathcal{H}}\qquad\beta\colon\mathcal{U}^{(n)}\to\widetilde{\mathcal{H}}\] _together with a smooth function:_ \[f\colon M\to N\] _with the following properties (here \(\widetilde{u}\) and \(\widetilde{u}_{\mathcal{H}}\) denote the unit embeddings of \(\widetilde{\mathcal{G}}\) and \(\widetilde{\mathcal{H}}\), and \(\widetilde{t}_{\mathcal{G}}\) and \(\widetilde{t}_{\mathcal{H}}\) the respective target maps):_ 

* \(\pi_{\mathcal{H}}\circ\alpha=\pi_{\mathcal{H}}\circ\beta\)__ 
* \(\widetilde{t}_{\mathcal{H}}\circ\alpha=f\circ\widetilde{t}_{\mathcal{G}}\circ\operatorname{pr}_{1}\)__ 
* _For all_ \(x\in\widetilde{u}^{-1}(\mathcal{U})\)_, we have that_ \[\alpha(\widetilde{u}(x),\dots,\widetilde{u}(x))=\beta(\widetilde{u}(x),\dots,\widetilde{u}(x))=\widetilde{u}_{\mathcal{H}}(f(x))\] 

_Then there exists an open neighborhood \(\mathcal{O}\subseteq\mathcal{U}\) of the image of \(\widetilde{u}\) such that \(\alpha|_{\mathcal{O}^{(n)}}=\beta|_{\mathcal{O}^{(n)}}\)_ 

The proof of this lemma is almost identical to the one for Lemma 6.8. In principle, it involves proving a similarly reformulated version of Lemma 6.7 as well. For the sake of avoiding repetition we will not write out the proof here, except to remark that the only difference is the addition of the function \(f\) in a few of the equations. One should also observe that the local groupoid structure on \(\widetilde{\mathcal{H}}\) induces a division structure, so a version of Lemma 6.5 applies. We can now proceed with proving Theorem 5.3. 

Proof.: Suppose \(\mathcal{G}\rightrightarrows M\) and \(\mathcal{H}\rightrightarrows N\) are local singular Lie groupoids. Suppose \([\mathcal{F}]\colon\mathcal{G}\to\mathcal{H}\) is a local groupoid homomorphism covering \(f\colon M\to N\), and \(\pi_{\mathcal{G}}\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) and \(\pi_{\mathcal{H}}\colon\widetilde{\mathcal{H}}\to\mathcal{H}\) are wide local groupoid charts. We first show the uniqueness part. Suppose \([\widetilde{\mathcal{F}}]\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{H}}\) and \([\widetilde{\mathcal{F}}^{\prime}]\colon\widetilde{\mathcal{G}}\to\widetilde{\mathcal{H}}\) are local groupoid homomorphisms with the properties: \[[\pi_{\mathcal{H}}]\circ[\widetilde{\mathcal{F}}]=[\mathcal{F}]\circ[\pi_{\mathcal{G}}]\qquad[\pi_{\mathcal{H}}]\circ[\widetilde{\mathcal{F}}^{\prime}]=[\mathcal{F}]\circ[\pi_{\mathcal{G}}]\] We can assume that \(\widetilde{\mathcal{F}}\) and \(\widetilde{\mathcal{F}}^{\prime}\) are globally defined on \(\widetilde{\mathcal{G}}\). We invoke Lemma 6.13, where we take \(\mathcal{U}\) to be a common domain of definition for \(\widetilde{\mathcal{F}}\) and \(\widetilde{\mathcal{F}}^{\prime}\), \(n=1\), \(\alpha=\widetilde{\mathcal{F}}\) and \(\beta=\widetilde{\mathcal{F}}^{\prime}\). It is straightforward to check that the hypotheses are satisfied, and so we conclude there must exist an open neighborhood \(\mathcal{O}\subseteq\mathcal{U}\) of the units where \(\widetilde{\mathcal{F}}|_{\mathcal{O}}=\widetilde{\mathcal{F}}^{\prime}|_{\mathcal{O}}\). Since local groupoid morphisms are germs in a neighborhood of the units, we conclude \([\widetilde{\mathcal{F}}]=[\widetilde{\mathcal{F}}^{\prime}]\). 

Now we show existence. Note that since local groupoid morphisms can be defined locally in \(M\) (i.e. they form a sheaf), due to the uniqueness property we have just proved, it suffices to prove that there exists an \(\widetilde{\mathcal{F}}\) with the desired property that is defined in a neighborhood of an arbitrary point of \(M\). Let \(x_{0}\in M\) be a fixed, arbitrary point. 
Since \(\pi_{\mathcal{H}}\colon\widetilde{\mathcal{H}}\to\mathcal{H}\) is a local subduction, there must exist a smooth function \(\overline{\mathcal{F}}\colon\mathcal{U}\to\widetilde{\mathcal{H}}\), defined on an open neighborhood \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) of \(\widetilde{u}(x_{0})\), with the property that \[\pi_{\mathcal{H}}\circ\overline{\mathcal{F}}=\mathcal{F}\circ\pi_{\mathcal{G}}\] and \[\overline{\mathcal{F}}(\widetilde{u}(x_{0}))=\widetilde{u}_{\mathcal{H}}(f(x_{0}))\] Now let: \[\widetilde{\mathcal{F}}(g):=\overline{\mathcal{F}}(g)\cdot\overline{\mathcal{F}}(\widetilde{u}\circ\widetilde{s}(g))^{-1}\] This may not be defined for all \(g\), but note that \(\widetilde{\mathcal{F}}(\widetilde{u}(x_{0}))=\widetilde{u}_{\mathcal{H}}(f(x_{0}))\). Therefore, there exists an open neighborhood of \(\widetilde{u}(x_{0})\) where \(\widetilde{\mathcal{F}}\) is well-defined. Since we are only trying to show existence in a neighborhood of \(x_{0}\), we can assume without loss of generality that \(\widetilde{\mathcal{F}}\) is defined on \(\mathcal{U}\). Now observe that \(\widetilde{\mathcal{F}}\) maps units to units. That is, for all \(x\in M\) such that \(\widetilde{u}(x)\in\mathcal{U}\): \[\widetilde{\mathcal{F}}(\widetilde{u}(x))=\overline{\mathcal{F}}(\widetilde{u}(x))\cdot\overline{\mathcal{F}}(\widetilde{u}(x))^{-1}=\widetilde{u}_{\mathcal{H}}(\widetilde{t}_{\mathcal{H}}\circ\overline{\mathcal{F}}(\widetilde{u}(x)))\] We claim that \(\widetilde{\mathcal{F}}\) defines a local groupoid homomorphism. To see this, let \(\alpha\) and \(\beta\) be defined as follows. For \((g,h)\in\mathcal{U}^{(2)}\): \[\alpha(g,h)=\widetilde{\mathcal{F}}(gh)\qquad\beta(g,h)=\widetilde{\mathcal{F}}(g)\cdot\widetilde{\mathcal{F}}(h)\] Again, we may need to shrink \(\mathcal{U}\) to a smaller open neighborhood so that \(\alpha\) and \(\beta\) are well-defined, but this is no issue. Now we can invoke Lemma 6.13. A straightforward check shows that \(\alpha\) and \(\beta\) satisfy the conditions of the equality test, so we know there must exist an open neighborhood \(\mathcal{O}\) of \(\widetilde{u}(x_{0})\) with the property that \(\alpha=\beta\) on this open set. Therefore, \(\mathcal{O}\) is an open set where \(\widetilde{\mathcal{F}}\) is compatible with multiplication. Finally, we need to show that \(\widetilde{\mathcal{F}}\) satisfies: \[[\pi_{\mathcal{H}}]\circ[\widetilde{\mathcal{F}}]=[\mathcal{F}]\circ[\pi_{\mathcal{G}}]\] However, if we compute for \(g\in\mathcal{O}\): \[\pi_{\mathcal{H}}\circ\widetilde{\mathcal{F}}(g)=\pi_{\mathcal{H}}\left(\overline{\mathcal{F}}(g)\cdot\overline{\mathcal{F}}(\widetilde{u}\circ\widetilde{s}(g))^{-1}\right)=\pi_{\mathcal{H}}(\overline{\mathcal{F}}(g))\] where the last equality holds since \(\pi_{\mathcal{H}}\) is a homomorphism and \(\pi_{\mathcal{H}}\left(\overline{\mathcal{F}}(\widetilde{u}\circ\widetilde{s}(g))\right)=\mathcal{F}(u\circ s(\pi_{\mathcal{G}}(g)))\) is a unit. However, by the definition of \(\overline{\mathcal{F}}\), we know that \(\pi_{\mathcal{H}}\circ\overline{\mathcal{F}}=\mathcal{F}\circ\pi_{\mathcal{G}}\). Therefore: \[[\pi_{\mathcal{H}}]\circ[\widetilde{\mathcal{F}}]=[\mathcal{F}]\circ[\pi_{\mathcal{G}}]\] This concludes the proof of local existence, and so the proof is finished. 

## 7. The classical Severa-Weinstein groupoid 

We have now constructed the differentiation functor. Our final task is to prove that the classical Severa-Weinstein groupoid is, indeed, a singular Lie groupoid. 

### The fundamental groupoid construction 

The Severa-Weinstein groupoid is constructed via an analogy with the fundamental groupoid. Essentially, what one does is reproduce the construction of the fundamental groupoid, but within the category of Lie algebroids. 
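Before giving the formal definitions, it is worth keeping in mind what this analogy produces in the simplest cases; the following facts are standard in the integrability literature and are recalled here purely for orientation. For the tangent algebroid \(A=TM\), algebroid morphisms \(T[0,1]\to TM\) are exactly the differentials of paths in \(M\), algebroid homotopy reduces to homotopy with fixed endpoints, and the construction returns the ordinary fundamental groupoid: \[\Pi_{1}(TM)\simeq\Pi(M):=\{\text{paths in }M\}/\{\text{homotopy rel endpoints}\}\] Similarly, for a Lie algebra \(\mathfrak{g}\), viewed as an algebroid over a point, \(\Pi_{1}(\mathfrak{g})\) is the simply connected Lie group integrating \(\mathfrak{g}\). 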
We will mostly follow the notation and approach to the construction that appears in the work of Crainic and Fernandes [10], as we will need to use several of their results about the geometry of the path space. 

**Definition 7.1**.: Suppose \(A\to M\) and \(B\to N\) are Lie algebroids and \(F_{0},F_{1}\colon A\to B\) are Lie algebroid homomorphisms covering \(f_{0},f_{1}\colon M\to N\). We say that \(F_{0}\) and \(F_{1}\) are _algebroid homotopic_ if there exists a homomorphism of Lie algebroids \(H\colon A\times T[0,1]\to B\) covering \(h\colon M\times[0,1]\to N\) with the following properties: 

1. \(H|_{A\times 0_{\{i\}}}=F_{i}\) for \(i=0,1\) 
2. \(H|_{0_{\partial M}\times T[0,1]}=0\) 

By concatenating homotopies, algebroid homotopies can be seen to form an equivalence relation on the set of all algebroid morphisms \(A\to B\). In particular, in the case of the tangent algebroid, the algebroid homotopy relation coincides with the standard notion of homotopy of smooth maps. 

**Definition 7.2**.: Suppose \(A\to M\) is a Lie algebroid. An \(A\)-path is an algebroid morphism \(a\colon T[0,1]\to A\). The set of all \(A\)-paths will be denoted \(\mathcal{P}(A)\). The _fundamental groupoid_ or _Severa-Weinstein groupoid_ of \(A\) is denoted \(\Pi_{1}(A)\) and is defined to be the set of all \(A\)-paths modulo algebroid homotopy. 

In their work, Crainic and Fernandes make use of the theory of Banach manifolds to study the structure of \(\mathcal{P}(A)\). In order to do this, we must relax the smoothness assumptions on the elements of \(\mathcal{P}(A)\). From now on, we assume that any \(a\in\mathcal{P}(A)\) is a bundle map of class \(C^{1}\) whose base path is of class \(C^{2}\). Under these assumptions, Crainic and Fernandes prove the following theorem: 

**Theorem 7.3**.: [10] _The partition of \(\mathcal{P}(A)\) into algebroid homotopy classes is the set of leaves of a foliation \(\mathcal{F}\) of finite codimension._ 

In addition to this, Crainic and Fernandes also show how this construction can be used to obtain a local Lie groupoid. 

**Theorem 7.4**.: [10] _For any Lie algebroid \(A\), there exists an open neighborhood \(A^{\circ}\) of the zero section together with an embedding \(\exp\colon A^{\circ}\to\mathcal{P}(A)\) with the following properties:_ 

* \(\exp\colon A^{\circ}\to\mathcal{P}(A)\) _is transverse to the foliation_ \(\mathcal{F}\)_._ 
* _There is a local Lie groupoid structure on_ \(A^{\circ}\) _which makes the associated map_ \(\pi\colon A^{\circ}\to\Pi_{1}(A)\) _into a local groupoid homomorphism._ 
* _The Lie algebroid of_ \(A^{\circ}\) _as a local groupoid is canonically isomorphic to_ \(A\)_._ 

An important consequence of the first bullet point above is that the associated map \(\pi\colon A^{\circ}\to\Pi_{1}(A)\) is a local subduction. This is essentially implicit in the literature; however, it is a little difficult to find a clear statement, since it relies on properties of foliations on Banach manifolds. For example, in [11], it is claimed that the restriction of the fundamental groupoid of the foliation \(\mathcal{F}\) to an arbitrary complete transversal results in an etale Lie groupoid. Therefore, by Lemma 2.20, passing to the quotient space should be a local subduction. Unfortunately, Tseng and Zhu do not go into much detail beyond stating this fact. For completeness, we will include a proof of this fact in the following lemma. 

**Lemma 7.5**.: _Let \(\pi\colon A^{\circ}\to\Pi_{1}(A)\) be the map defined above._ 
_Then \(\pi\) is a local subduction._ 

Proof.: First, we remark that since \(\mathcal{F}\) is a foliation on a Banach manifold, around any point \(a\in\mathcal{P}(A)\) there exists a foliated chart \(\Phi\colon\mathbb{R}^{k}\times V\to\mathcal{P}(A)\). By foliated chart, we mean that \(\Phi\) is an open diffeomorphism onto its image, \(k\) is the codimension of \(\mathcal{F}\), \(V\) is an open subset of a Banach vector space, and the leaves of \(\mathcal{F}\) correspond to the slices \(\{c\}\times V\). The existence of such foliated charts for foliations on Banach manifolds seems a little difficult to find stated plainly in the literature. However, it can be inferred from the proof of Theorem 1.1 in Chapter 7 of Lang [10]. 

Now, to see why \(\pi\) is a local subduction, suppose \(\phi\colon U_{\phi}\to\Pi_{1}(A)\) is a plot and let \(v\in A^{\circ}\) and \(u\in U_{\phi}\) be such that \(\pi(v)=\phi(u)\). Let \(p\colon\mathcal{P}(A)\to\Pi_{1}(A)\) be the canonical projection, so that \(\pi=p\circ\exp\). Since \(\Pi_{1}(A)\) is equipped with the quotient diffeology, we know there exists a lift \(\widetilde{\phi}\colon V\to\mathcal{P}(A)\) of \(\phi\) defined on some open neighborhood \(V\) of \(u\). Now suppose we have a submanifold \(\mathcal{T}\subseteq\mathcal{P}(A)\) which is transverse to \(\mathcal{F}\) and is such that the image of \(\widetilde{\phi}\) is contained in \(\mathcal{T}\). By using foliated charts centered around \(\widetilde{\phi}(u)\) and \(\exp(v)\), there exist locally defined functions \(q_{1}\colon\mathcal{O}_{1}\to\mathcal{T}\) and \(q_{2}\colon\mathcal{O}_{2}\to A^{\circ}\), defined on open neighborhoods \(\mathcal{O}_{1}\) of \(\widetilde{\phi}(u)\) and \(\mathcal{O}_{2}\) of \(\exp(v)\), such that \(p\circ q_{1}=p\) and \(p\circ\exp\circ\,q_{2}=p\). Now, since \(p(\widetilde{\phi}(u))=\phi(u)=\pi(v)=p(\exp(v))\), we also know that \(\widetilde{\phi}(u)\) is connected to \(\exp(v)\) by a path \(h(t)\) in \(\mathcal{P}(A)\) which is tangent to \(\mathcal{F}\). Since \(\mathcal{T}\) is a transverse submanifold at \(\widetilde{\phi}(u)\) and \(\exp(A^{\circ})\) is a transverse submanifold at \(\exp(v)\), we can use the classical construction of the holonomy of a foliation (the traditional form of this construction proceeds by linking a series of foliated charts) to obtain a germ of a diffeomorphism \[f\colon\mathcal{T}\to\exp(A^{\circ})\] such that \(f(\widetilde{\phi}(u))=\exp(v)\) and \(p\circ f=p\). Therefore, by choosing a suitably small open neighborhood \(V\) of \(u\), the function \[q_{2}\circ f\circ q_{1}\circ\widetilde{\phi}\] is a lift of \(\phi\) along \(\pi\) with the property that \((q_{2}\circ f\circ q_{1}\circ\widetilde{\phi})(u)=v\). This shows that \(\pi\) is a local subduction. 

In the article of Crainic and Fernandes, the main goal was to obtain a criterion for integrability. Towards this end, they became interested in understanding the fibers of this exponential map and, in particular, they computed the inverse images of the unit elements. 

**Theorem 7.6** (Crainic and Fernandes).: _Let \(A\) be a Lie algebroid and let \(\pi\colon A^{\circ}\to\Pi_{1}(A)\) be as in the previous theorem._ 

_Given \(x\in M\), the fiber \(\pi^{-1}(u(x))\) is countable._ 

Indeed, the actual theorem proved in the article of Crainic and Fernandes is much stronger: the fiber \(\pi^{-1}(u(x))\) is actually an additive subgroup of \(A_{x}\) called the _monodromy group_ at \(x\), and it arises from a group homomorphism out of the second homotopy group of the orbit. The countability of the monodromy group is then an immediate consequence of the countability of the homotopy groups of a smooth manifold. 
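For a concrete illustration (the standard example from the integrability literature, recalled here with one possible sign convention), take a closed 2-form \(\omega\) on a simply connected manifold \(M\) and consider the algebroid \(A_{\omega}=TM\oplus\mathbb{R}\) with anchor \((X,f)\mapsto X\) and bracket \[[(X,f),(Y,g)]=([X,Y],\,X(g)-Y(f)-\omega(X,Y))\] The monodromy group at \(x\) is then the group of spherical periods \[\mathcal{N}_{x}=\Big{\{}\int_{S^{2}}\sigma^{*}\omega\ :\ [\sigma]\in\pi_{2}(M,x)\Big{\}}\subseteq\mathbb{R}\] which is countable but need not be discrete. When it fails to be discrete, \(A_{\omega}\) is not integrable and \(\Pi_{1}(A_{\omega})\) is not a Lie groupoid, although by Theorem 1.2 below it is still a singular Lie groupoid. 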
### The Severa-Weinstein groupoid is a singular Lie groupoid 

In this subsection we will explain the proof of Theorem 1.2. Let us restate this theorem now. 

**Theorem 1.2**.: _Given a Lie algebroid \(A\to M\), let \(\Pi_{1}(A)\rightrightarrows M\) be the Severa-Weinstein groupoid of \(A\) and consider it as a diffeological groupoid. Then \(\Pi_{1}(A)\) is an element of \(\mathbf{SingLieGrpd}\) and \(\widehat{\mathbf{Lie}}(\Pi_{1}(A))\) is canonically isomorphic to \(A\)._ 

Our first lemma is used to argue that we only have to check that \(\Pi_{1}(A)\) is a singular Lie groupoid in a neighborhood of the units. 

**Lemma 7.7**.: _Suppose \(\mathcal{G}\rightrightarrows M\) is a diffeological groupoid over a smooth manifold \(M\). Assume that the source map \(s\colon\mathcal{G}\to M\) is a local subduction. Now suppose there exists an open neighborhood of the units \(\mathcal{U}\subseteq\mathcal{G}\) such that \(\mathcal{U}\) is a local singular Lie groupoid. Then \(\mathcal{G}\) is a singular Lie groupoid._ 

Proof.: The proof of this fact is a translation argument. We need to show that \(\mathcal{G}\) is a \(\mathbf{QUED}\)-groupoid. This means that we must show that \(\mathcal{G}\) is a quasi-etale diffeological space and the source map \(s\colon\mathcal{G}\to M\) is a \(\mathbf{QUED}\)-submersion. For the first part, let \(g_{0}\in\mathcal{G}\) be arbitrary and let \(x_{0}:=s(g_{0})\in M\). Since \(\mathcal{U}\) is quasi-etale, there must exist a quasi-etale chart \(\pi\colon N\to\mathcal{G}\) such that \(u(x_{0})\) is in the image of \(\pi\). Let \(n_{0}\in N\) be a point such that \(\pi(n_{0})=u(x_{0})\). Now let \(\sigma\colon\mathcal{O}\to\mathcal{G}\) be a local section of the source map such that \(\sigma(x_{0})=g_{0}\). Then we can define: \[\pi^{\prime}(n):=\sigma(t\circ\pi(n))\cdot\pi(n)\] Clearly \(\pi^{\prime}\) will be well-defined in an open neighborhood of \(n_{0}\). Furthermore, \(\pi^{\prime}(n_{0})=g_{0}\). Since \(\pi^{\prime}\) is locally \(\pi\) composed with a diffeomorphism, it follows that \(\pi^{\prime}\) is quasi-etale. Therefore, \(\pi^{\prime}\) is a quasi-etale chart around \(g_{0}\). 

To show that \(s\colon\mathcal{G}\to M\) is a \(\mathbf{QUED}\)-submersion, we will utilize Theorem 3.21. We only need to show that the fibers of \(s\) are quasi-etale diffeological spaces. However, we already know that \(\mathcal{U}\) is a local singular Lie groupoid. Therefore, for each \(x\in M\) we know \(s^{-1}(x)\cap\mathcal{U}\) is a quasi-etale diffeological space. To show that \(s^{-1}(x)\) is a quasi-etale diffeological space we once again invoke a simple translation argument: any point of \(s^{-1}(x)\) is a left translation of the unit \(u(x)\). Therefore, a quasi-etale chart around the unit element of \(s^{-1}(x)\) can be translated to a quasi-etale chart around any element of \(s^{-1}(x)\). 

In order to show that \(\Pi_{1}(A)\) is a singular Lie groupoid, we will have to show that the source map is a \(\mathbf{QUED}\)-submersion. Part of that is the claim that the source map is a local subduction. In a way, this is also somewhat implicit in the literature, but we include a proof here. 

**Lemma 7.8**.: _Suppose \(p\colon A\to M\) is a Lie algebroid. Let \(s\colon\Pi_{1}(A)\to M\) be the source map for the fundamental groupoid. Then \(s\) is a local subduction._ 

Proof.: First, we argue that the map: \[\widehat{s}\colon\mathcal{P}(A)\to M\qquad a\mapsto p(a(0))\] is a submersion of Banach manifolds. 
To see this, we rely on the original paper of Crainic and Fernandes [10], where they compute \(T_{a}\mathcal{P}(A)\) to be: \[\{(u,X)\in C^{\infty}([0,1],A\times TM)\ :\ \overline{\nabla}_{a}X=\rho(u)\}\] where \(\nabla\) is a connection on \(A\), \(\rho\) is the anchor map, and: \[\overline{\nabla}_{a}X:=\rho(\nabla_{X}a)+[\rho(a),X]\] The right hand side of this expression is computed by using any set of sections that extend \(X\) and \(a\) to time dependent sections of \(TM\) and \(A\), respectively. Actually, \(\overline{\nabla}\) is an example of what is called an \(A\)-connection. More specifically, \(\overline{\nabla}\) is an \(A\)-connection on the vector bundle \(TM\). The expression \(\overline{\nabla}_{a}X\) should be interpreted as the directional derivative of \(X\) along \(a\). We refer the reader to [10] for more details on the geometry of \(A\)-connections. The key fact is that, for \(A\)-connections, parallel transport along an \(A\)-path is well-defined and the equation \(\overline{\nabla}_{a}X=0\) is equivalent to stating that \(X\) is parallel along \(a\). 

Now, the \(u\) part represents the vertical (fiber-wise) component of the tangent vector, while \(X\) represents the "horizontal" component of the tangent vector. In particular, if we consider the differential of the projection we get: \[T_{a}p\colon T_{a}\mathcal{P}(A)\to T_{p\circ a}C^{2}([0,1],M)\qquad(u,X)\mapsto X\] Therefore we have: \[T_{a}\widehat{s}\colon T_{a}\mathcal{P}(A)\to T_{p\circ a(0)}M\qquad(u,X)\mapsto X(0)\] If we are given any \(v\in T_{p\circ a(0)}M\) and we let \(X\) be the unique solution to the initial value problem: \[\overline{\nabla}_{a}X=0\qquad X(0)=v\] then it follows that \((0,X)\in T_{a}\mathcal{P}(A)\) is a tangent vector with the property that \(T_{a}\widehat{s}(0,X)=v\). This shows that \(\widehat{s}\) is a submersion of Banach manifolds. 

Now, since the co-domain of \(\widehat{s}\) is finite dimensional, the kernel of \(T\widehat{s}\) must admit a closed complement at each point. Therefore, the inverse function theorem for Banach manifolds applies and \(\widehat{s}\) admits local sections through every point in its domain. This immediately implies that \(\widehat{s}\) is a local subduction. As an immediate consequence, \(s\colon\Pi_{1}(A)\to M\) is also a local subduction. 

In a neighborhood of the identity, the fundamental groupoid of an algebroid is just a quotient of \(A^{\circ}\). Therefore, it will be useful to know precisely which kinds of quotients of local groupoids are singular Lie groupoids. The next theorem answers this question. 

**Theorem 7.9**.: _Suppose \(\mathcal{G}\rightrightarrows M\) is a diffeological groupoid where the source map is a local subduction. Suppose further that \(\widetilde{\mathcal{G}}\) is a local Lie groupoid over the same manifold and we have \(\pi\colon\widetilde{\mathcal{G}}\to\mathcal{G}\), a local subduction, such that \([\pi]\colon\widetilde{\mathcal{G}}\to\mathcal{G}\) is a homomorphism of local groupoids covering the identity._ 

_If for all unit elements \(u\in\mathcal{G}\) we have that the fiber \(\pi^{-1}(u)\subseteq\widetilde{\mathcal{G}}\) is totally disconnected, then \(\mathcal{G}\rightrightarrows M\) is a_ **QUED**_-groupoid and \(\pi\) is a wide local groupoid chart._ 

Proof.: By Lemma 7.7 it suffices to show that the image under \(\pi\) of an open neighborhood of the units in \(\widetilde{\mathcal{G}}\) is a local singular Lie groupoid. 
To set our notation, we will use \(\widetilde{s},\widetilde{t},\widetilde{u},\widetilde{m}\) and \(\widetilde{i}\) to denote the local groupoid structure maps on \(\widetilde{\mathcal{G}}\) and \(s,t,u,m,i\) to denote the local groupoid structure maps on \(\mathcal{G}\). 

We begin by arguing that \(\pi\) is a quasi-etale chart (in some neighborhood of the units). We already know that \(\pi\) is a local subduction. Therefore, we need to show that \(\pi\) has totally disconnected fibers (in a neighborhood of the units) and satisfies the rigid endomorphism property. We know that \(\pi\) has totally disconnected fibers over the units of \(\mathcal{G}\). We claim that the fibers of \(\pi\) are totally disconnected in a neighborhood of the units of \(\widetilde{\mathcal{G}}\). To see why, consider the following calculation: \[(b\cdot a^{-1})\cdot a=b\cdot(a^{-1}\cdot a)=b\cdot\widetilde{u}(\widetilde{s}(a))=b\] Let \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) be an open neighborhood of the units with the property that for all \(a,b\in\mathcal{U}\) with \(\widetilde{s}(a)=\widetilde{s}(b)\), the above calculation is well-defined in \(\widetilde{\mathcal{G}}\). Now let \(\widetilde{g}_{0}\in\mathcal{U}\) be given and let \(g_{0}:=\pi(\widetilde{g}_{0})\). Consider the function: \[d\colon\pi^{-1}(g_{0})\cap\mathcal{U}\to\widetilde{\mathcal{G}}\qquad x\mapsto x\cdot\widetilde{g}_{0}^{-1}\] This function is well-defined since we are in \(\mathcal{U}\). Furthermore, this function is injective due to the fact that if we have \(x\cdot\widetilde{g}_{0}^{-1}=y\cdot\widetilde{g}_{0}^{-1}\), then: \[x=(x\cdot\widetilde{g}_{0}^{-1})\cdot\widetilde{g}_{0}=(y\cdot\widetilde{g}_{0}^{-1})\cdot\widetilde{g}_{0}=y\] Furthermore, \(d\) takes values in the kernel of \(\pi\). Specifically, it takes values in \(\pi^{-1}(u(t(g_{0})))\), which is totally disconnected by assumption. Since \(d\) is a smooth injection into a totally disconnected set, it follows that the domain of \(d\) is totally disconnected. This shows that the fibers of \(\pi\) are totally disconnected when \(\pi\) is restricted to a small enough neighborhood of the units. Therefore, we can assume without loss of generality that, from now on, the fibers of \(\pi\) are totally disconnected. 

Next, we must show that \(\pi\) satisfies the rigid endomorphism property. Let us take \(\mathcal{U}\subseteq\widetilde{\mathcal{G}}\) to be an open subset of \(\widetilde{\mathcal{G}}\) with the property that for all \(x,y\in\mathcal{U}\) such that \(\widetilde{s}(x)=\widetilde{s}(y)\) we have that \(x\cdot y^{-1}\) is well-defined. We use the simplified criteria from Lemma 3.11 to show the rigid endomorphism property. Let \(g_{0}\in\mathcal{U}\) be arbitrary and \(f\colon\mathcal{O}\to\widetilde{\mathcal{G}}\) be a smooth function with the property that \(\mathcal{O}\subseteq\mathcal{U}\) is an open neighborhood of \(g_{0}\), \(f(g_{0})=g_{0}\), and \(\pi\circ f=\pi\). We only need to show that \(f\) is a diffeomorphism in a neighborhood of \(g_{0}\). Let \(\alpha\) be the following function: \[\alpha\colon\mathcal{O}\to\widetilde{\mathcal{G}}\qquad g\mapsto f(g)\cdot g^{-1}\] Note \(\alpha\) will be well-defined by our assumptions on \(\mathcal{U}\). If we apply \(\pi\) to \(\alpha\), we observe that \(\pi\circ\alpha\) takes values only in units. Now let us choose \(\mathcal{O}\) to be an open neighborhood of \(g_{0}\) with the property that \(\mathcal{O}\cap\widetilde{t}^{-1}(x)\) is connected for all \(x\in M\). Since the fibers of \(\pi\) are totally disconnected, it follows that \(\alpha\) must be constant on the \(\widetilde{t}\)-fibers. 
In particular, there must exist a local section \(\sigma\colon\widetilde{t}(\mathcal{O})\to\widetilde{\mathcal{G}}\) with the property that: \[\forall g\in\mathcal{O}\qquad\alpha(g)=\sigma(\widetilde{t}(g))\] From this we can conclude that: \[\forall g\in\mathcal{O}\qquad f(g)\cdot g^{-1}=\sigma(\widetilde{t}(g))\] which means that: \[\forall g\in\mathcal{O}\qquad f(g)=\sigma(\widetilde{t}(g))\cdot g\] From this it immediately follows that \(f\) is a diffeomorphism, since it is given by translation along a section. In fact, it has an explicit inverse: \[f^{-1}(g)=\sigma(\widetilde{t}(g))^{-1}\cdot g\] To conclude showing that the image of \(\pi\) is a local singular Lie groupoid, the only thing left to show is that the source map \(s\colon\mathcal{G}\to M\) is a **QUED**-submersion. We will use the criteria from Theorem 3.21, which means that we need to show that for all \(x\in M\) we have that \(s^{-1}(x)\) is a quasi-etale diffeological space. Therefore, we just need to show that \(\pi|_{\widetilde{s}^{-1}(x)}\) is a quasi-etale chart. Let \(X:=\pi(\widetilde{s}^{-1}(x))\). We already know that \(\pi|_{\widetilde{s}^{-1}(x)}\) is a local subduction and that its fibers are totally disconnected. It only remains to show that it satisfies the rigid endomorphism property. We use the simplified criteria from Lemma 3.11. Suppose \(\mathcal{O}\subseteq\widetilde{s}^{-1}(x)\) is open and \(f\colon\mathcal{O}\to\widetilde{s}^{-1}(x)\) is a smooth function such that \(\pi\circ f=\pi\) and \(f(\widetilde{g}_{0})=\widetilde{g}_{0}\) for some point \(\widetilde{g}_{0}\in\mathcal{O}\). We need to show that \(f\) is a diffeomorphism in a neighborhood of \(\widetilde{g}_{0}\). The argument is very similar to the one earlier in the proof. Let: \[\alpha\colon\mathcal{O}\to\widetilde{s}^{-1}(x)\qquad\alpha(g):=f(g)\cdot g^{-1}\] Applying \(\pi\) to \(\alpha\) we get that: \[\pi\circ\alpha(g)=u\circ t(g)\] If we choose \(\mathcal{O}\) such that the intersection with the \(t\)-fibers is connected, then we can conclude that \(\alpha\) is constant when restricted to \(t\)-fibers. Therefore, there must exist a smooth section of the source map \(\sigma\colon t(\mathcal{O})\to\widetilde{\mathcal{G}}\) with the property that: \[\alpha(g)=\sigma(\widetilde{t}(g))\] Therefore, we can conclude that: \[f(g)\cdot g^{-1}=\sigma(\widetilde{t}(g))\] Therefore: \[f(g)=\sigma(\widetilde{t}(g))\cdot g\] which, by the same reasoning as before, is a diffeomorphism. We can now finish the proof of the main theorem for this section. Proof of Theorem 1.2.: Let \(\pi\colon A^{\circ}\to\Pi_{1}(A)\) be the projection from Section 7.1. Recall that \(A^{\circ}\) is an open neighborhood of the zero section in \(A\) and that \(A^{\circ}\) is equipped with a unique local groupoid structure that makes \([\pi]\colon A^{\circ}\to\Pi_{1}(A)\) a local groupoid homomorphism and the zero section of \(A^{\circ}\) the unit embedding. Note that by Lemma 7.8 we know the source map of \(\Pi_{1}(A)\) is a local subduction. Furthermore, we know that \(\pi\) is a subduction by Lemma 7.5. By Theorem 7.9 we conclude that \(\Pi_{1}(A)\) is a singular Lie groupoid. Furthermore, the map \(\pi\colon A^{\circ}\to\Pi_{1}(A)\) is a local groupoid chart. Therefore, the Lie algebroid of \(\Pi_{1}(A)\) is canonically isomorphic to the Lie algebroid of the local groupoid structure on \(A^{\circ}\), which is just \(A\).
Lie's third theorem does not hold for Lie groupoids and Lie algebroids. In this article, we show that Lie's third theorem is valid within a particular class of differential-geometric groupoids: singular Lie groupoids. To this end, we introduce a subcategory of diffeological spaces which we call 'quasi-etale.' Singular Lie groupoids are groupoid objects in that category whose space of units is a manifold. Our approach extends the classical functor from Lie groupoids to Lie algebroids by constructing a functor that maps singular Lie groupoids to Lie algebroids. We prove that the Ševera-Weinstein groupoid of an algebroid is an example of a singular Lie groupoid, thereby establishing Lie's third theorem in this context.
2307.16821
Towards Formal Verification of a TPM Software Stack
The Trusted Platform Module (TPM) is a cryptoprocessor designed to protect integrity and security of modern computers. Communications with the TPM go through the TPM Software Stack (TSS), a popular implementation of which is the open-source library tpm2-tss. Vulnerabilities in its code could allow attackers to recover sensitive information and take control of the system. This paper describes a case study on formal verification of tpm2-tss using the Frama-C verification platform. Heavily based on linked lists and complex data structures, the library code appears to be highly challenging for the verification tool. We present several issues and limitations we faced, illustrate them with examples and present solutions that allowed us to verify functional properties and the absence of runtime errors for a representative subset of functions. We describe verification results and desired tool improvements necessary to achieve a full formal verification of the target code.
Yani Ziani, Nikolai Kosmatov, Frédéric Loulergue, Daniel Gracia Pérez, Téo Bernier
2023-07-31T16:35:16
http://arxiv.org/abs/2307.16821v2
# Towards Formal Verification of a TPM Software Stack ###### Abstract The Trusted Platform Module (TPM) is a cryptoprocessor designed to protect integrity and security of modern computers. Communications with the TPM go through the TPM Software Stack (TSS), a popular implementation of which is the open-source library _tpm2-tss_. Vulnerabilities in its code could allow attackers to recover sensitive information and take control of the system. This paper describes a case study on formal verification of tpm2-tss using the Frama-C verification platform. Heavily based on linked lists and complex data structures, the library code appears to be highly challenging for the verification tool. We present several issues and limitations we faced, illustrate them with examples and present solutions that allowed us to verify functional properties and the absence of runtime errors for a representative subset of functions. We describe verification results and desired tool improvements necessary to achieve a full formal verification of the target code. ## 1 Introduction The _Trusted Platform Module_ (TPM) [20] has become a key security component in modern computers. The TPM is a cryptoprocessor designed to protect integrity of the architecture and ensure security of encryption keys stored in it. The operating system and applications communicate with the TPM through a set of APIs called _TPM Software Stack_ (TSS). A popular implementation of the TSS is the open-source library _tpm2-tss_. It is highly critical: vulnerabilities in its code could allow attackers to recover sensitive information and take control of the system. Hence, it is important to formally verify that the library respects its specification and does not contain runtime errors, often leading to security vulnerabilities. Formal verification of this library is the main motivation of this work. This target is new and highly ambitious for deductive verification: the library code is very large (over 120,000 lines of C code) and complex, heavily based on complex data structures (with multiple levels of imbricated structures and unions), low-level code, linked lists and dynamic memory allocation. In this paper we present a first case study on formal verification of tpm2-tss using the Frama-C verification platform [15]. We focus on a subset of functions involved in storing an encryption key in the TPM, one of the most critical features of the TSS. The functions are annotated in the acsl specification language [2]. Their verification with Frama-C currently faces several limitations of the tool, such as its capacity to reason about complex data structures, dynamic memory allocation, linked lists and their separation from other data. We have managed to overcome these limitations after minor simplifications and adaptations of the code. In particular, we replace dynamic allocation with calloc by another allocator (attributing preallocated memory cells) that we implement, specify and verify. We adapt a recent work on verification of linked lists [4] to our case study, add new lemmas and prove them in the Coq proof assistant [19]. We identify some deficiencies in the new Frama-C-Coq extraction for lists (modified since [4]), adapt it for the proof and suggest improvements. We illustrate all issues and solutions on a simple illustrative example3, while the (slightly adapted) real-life functions annotated in acsl and fully proved in Frama-C are available online as a companion artifact4. Finally, we identify desired extensions and improvements of the verification tool. 
Footnote 3: For convenience of the readers, its full code is also given in Appendix. Footnote 4: Available (with the illustrative example, all necessary lemmas and their proof) in [https://nikolai-kosmatov.eu/iFM2023.zip](https://nikolai-kosmatov.eu/iFM2023.zip). Contributions.The contributions of this paper include the following:

* specification and formal verification in Frama-C of a representative subset of functions of the tpm2-tss library (slightly adapted for verification);
* presentation of main issues we faced during their verification with an illustrative example, and description of solutions and workarounds we found;
* proof in Coq of all necessary lemmas (including some new ones) related to linked lists, realized for the new version of Frama-C-Coq extraction;
* a list of necessary enhancements of Frama-C to achieve a complete formal verification of the tpm2-tss library.

Outline.The paper is organized as follows. Section 2 presents Frama-C. Section 3 introduces the TPM, its software stack and the tpm2-tss library. Sections 4 and 5 present issues and solutions related, resp., to memory allocation and memory management. Necessary lemmas are discussed in Sect. 6. Section 7 describes our verification results. Finally, Sect. 8 and 9 present related work and a conclusion with necessary tool improvements. ## 2 Frama-C Verification Platform Frama-C [15] is an open-source verification platform for C code, which contains various plugins built around a kernel providing basic services for source-code analysis. It offers acsl (ANSI/ISO C Specification Language) [2], a formal specification language for C, which allows users to specify functional properties of programs in the form of _annotations_, such as assertions or function contracts. A function contract basically consists of pre- and postconditions (stated, resp., by requires and ensures clauses) expressing properties that must hold, resp., before and after a call to the function. It also includes an assigns clause listing (non-local) variables and memory locations that _can_ be modified by the function. While useful built-in predicates and logic functions are provided to handle properties such as pointer validity or memory separation, acsl also supplies the user with different ways to define predicates and logic functions. Frama-C offers Wp, a plugin for deductive verification. Given a C program annotated in acsl, Wp generates the corresponding proof obligations (also called verification conditions) that can be proved either by Wp or, via the Why3 platform [13], by SMT solvers or an interactive proof assistant like Coq [19]. To ensure the absence of runtime errors (RTE), Wp can automatically add necessary assertions via a dedicated option, and try to prove them as well. Our choice to use Frama-C/Wp is due to its capacity to perform deductive verification of industrial C code with successful verification case studies [7] and the fact that it is currently the only tool for C source code verification recognized by ANSSI, the French Common Criteria certification body, as an acceptable formal verification technique for the highest certification levels EAL6-EAL7 [8]. ## 3 The TPM Software Stack and the tpm2-tss Library This section briefly presents the Trusted Platform Module (TPM), its software stack and the implementation we chose to study: the tpm2-tss library. 
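Before turning to the TPM code, here is a quick illustration of the acsl constructs of Sect. 2 on a small hypothetical function (not taken from the library): the requires clause demands valid, non-overlapping pointers, assigns lists the locations the function may modify, and ensures relates the post-state to the pre-state via \old.

```c
/*@ requires \valid(a) && \valid(b) && \separated(a, b);
    assigns *a, *b;
    ensures *a == \old(*b) && *b == \old(*a);
*/
void swap(int *a, int *b) {
  int tmp = *a; /* save *a before it is overwritten */
  *a = *b;
  *b = tmp;
}
```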
Readers can refer to the TPM specification [20] and reference books such as [1] for more detail. TPM Software Stack.The TPM is a standard conceived by the Trusted Computing Group (TCG)5 for a passive secure cryptoprocessor designed to protect secure hardware from software-based threats. At its base, a TPM is implemented as a discrete cryptoprocessor chip, attached to the main processor chip and designed to perform cryptographic operations. However, it can also be implemented as part of the firmware of a regular processor or a software component. Footnote 5: [https://trustedcomputinggroup.org/](https://trustedcomputinggroup.org/) Footnote 6: Like CVE-2023-22745 and CVE-2020-24455, documented on www.cve.org. Nowadays, the TPM is well known for its usage in regular PCs to ensure integrity and to provide a secure storage for the keys used to encrypt the disk with _Bitlocker_ and _dm-crypt_. However, it can be (and is actually) used to provide other cryptographic services to the Operating System (OS) or applications. For that purpose, the TCG defines the TPM Software Stack (TSS), a set of specifications to provide standard APIs to access the functionalities and commands of the TPM, regardless of the hardware, OS, or environment used. The TSS APIs provide different levels of complexity, from the Feature API (FAPI) for simple and common cryptographic services to the System API (SAPI) for a one-to-one mapping to the TPM services and commands, providing greater flexibility but making it more complex to use. In between lies the Enhanced System API (ESAPI) providing SAPI-like functionalities but with slightly limited flexibility. Other TSS APIs complete the previous ones for common operations like data formatting and connection with the software or hardware TPM. The TSS APIs, like any software component (or the TPM itself), can have vulnerabilities6 that attackers can exploit to recover sensitive data communicated with the TPM or take control of the system. We study the verification of one of the implementations of the TSS, tpm2-tss, starting more precisely with its implementation of the ESAPI. ESAPI Layer of tpm2-tss.The ESAPI layer provides functions for decryption and encryption, managing session data and policies, thus playing an essential role in the TSS. It is very large (over 50,000 lines of C) and is mainly split into two parts: the API part containing functions in a one-to-one correspondence with TPM commands (for instance, the Esys_Create function of the TSS will correspond to -- and call -- the TPM2_Create command of the TPM), and the back-end containing the core of that layer's functionalities. Each API function will call several functions of the back-end to carry out various operations on command parameters, before invoking the lower layers and finally the TPM. The ESAPI layer relies on a notion of context (ESYS_CONTEXT) containing all data the layer needs to store between calls, so it does not need to maintain a global state. Defined for external applications as an opaque structure, the context includes, according to the documentation, data needed to communicate with the TPM, metadata for each TPM resource, and state information. The specification, however, does not impose any precise data structure: it is up to the developer to provide a suitable definition. Our target implementation uses complex data structures and linked lists. ## 4 Dynamic Memory Allocation Example Overview.We illustrate our verification case study with a simplified version of some library functions manipulating linked lists. The illustrative example is split into Fig. 
1-6 that will be explained below step-by-step. Its full code being available in the companion artifact, we omit in this paper some less significant definitions and assertions which are not mandatory to understand the paper (but we preserve line numbering of the full example for convenience of the reader). This example is heavily simplified to fit the paper, yet it is representative of most issues we faced (except the complexity of data structures). It contains a main list manipulation function, getNode (esys_GetResourceObject in the real code), used to search for a resource in the list of resources and return it if it is found, or to create and add it using function createNode (esys_CreateResourceObject in the real code) if not. Figure 1 provides the linked list structure as well as logic definitions used to handle logic lists in specifications. Our custom allocator (used by createNode) is defined in Fig. 2. Figure 3 defines a (simplified) context and additional logic definitions to handle pointer separation and memory freshness. The search function is shown in Fig. 4 and 5. As it is often done, some acsl notation (e.g. \forall, integer, ==>, <=, !=) is pretty-printed (resp., as \(\forall\), \(\mathbb{Z}\), \(\Rightarrow\), \(\leq\), \(\neq\)). In this section, we detail Fig. 1-3.

Figure 1: Linked list and logic definitions.

Lists of Resources.Lines 11-15 of Fig. 1 show a simplified definition of the linked list of resources used in the ESAPI layer of the library. Each node of the list consists of a structure containing a handle used as a reference for this node, a resource to be stored inside, and a pointer to the next element. In our example, a resource structure (omitted in Fig. 1) is assumed to contain only a few fields of relatively simple types. The real code uses a more extensive and complex definition (with several levels of imbricated structures and unions), covering all possible types of TPM resources. While it does add some complexity to prove certain properties (as some of them may require completely unfolding all resource substructures), it does not introduce new pointers that may affect memory separation properties, so our example remains representative of the real code regarding linked lists and separation properties. In particular, we need to ensure that the resource list is well-formed -- that is, it is not circular, and does not contain any overlap between nodes -- and stays that way throughout the layer. To accomplish that, we use and adapt the logic definitions from [4], given on lines 26-44, 48-57 of Fig. 1. To prove the code, we need to manipulate linked lists and segments of linked lists. Lines 48-57 define the _translating function_ to_ll that translates a C list defined by a NODE_T pointer into the corresponding acsl logic list of (pointers to) its nodes. By convention, the last element end is not included into the resulting logic list. It can be either NULL for a full linked list, or a non-null pointer to a node for a _linked list segment_ which stops just before that node. Lines 34-40 show the _linking predicate_ linked_ll establishing the equivalence between a C linked list and an acsl logic list. This inductive definition includes memory separation between nodes, validity of access for each node, as well as the notion of reachability in linked lists. In acsl, given two pointers p and q, \valid(p) states that *p can be safely read and written, while \separated(p,q) states that the referred memory locations *p and *q do not overlap (i.e. 
all their bytes are disjoint). Lines 26-29 provide predicates to handle separation between a list pointer (or double pointer) and a full list. \nth(l,n) and \length(l) denote, resp., the n-th element of logic list l and the length of l. The predicate unchanged_ll in lines 41-44 states that between two labels (i.e. program points) L1 and L2, all list elements in a logic list refer to a valid memory location at both points, and that their respective next fields retain the same value. It is used to maintain the structure of the list throughout the code. Line 60 includes lemmas necessary to conduct the proof, further discussed in Sec. 6. Lack of Support for Dynamic Memory Allocation.As mentioned above, per the TSS specifications, the ESAPI layer does not maintain a global state between calls to TPM commands. The library code uses contexts with linked lists of TPM resources, so list nodes need to be dynamically allocated at runtime. The acsl language provides clauses to handle memory allocation: in particular, \allocable{L}(p) states that a pointer p refers to the base address of an unallocated memory block, and \fresh{L1,L2}(p, n) indicates that p refers to the base address of an unallocated block at label L1, and to an allocated memory block of size n at label L2. Unfortunately, while the Frama-C/Wp memory model is able to handle dynamic allocation (used internally to manage local variables), these clauses are not currently supported. Without allocability and freshness, proving goals involving validity or separation between a newly allocated node and any other pointer is impossible. Static Memory Allocator.To circumvent that issue, we define in Fig. 2 a bank-based static allocator calloc_NODE_T that replaces calls to calloc used in the real-life code. It attributes preallocated cells, following some existing implementations (like the memb module of Contiki [17]). Line 63 defines a static array of nodes of size _alloc_max. Line 64 introduces an allocation index we use to track the next allocable node and to determine whether an allocation is possible. Predicate valid_rsrc_mem_bank on line 66 states a validity condition for the bank: _alloc_idx must always be between 0 and _alloc_max. It is equal to the upper bound if all nodes have been allocated. The predicates on lines 67-73 specify separation between a logic list of nodes (resp., a pointer or a double pointer to a node) and the allocable part of the heap, and are used later on to simulate memory freshness. Lines 76-99 show a part of the function contract for the allocator defined on lines 100-111. The validity of the bank must hold before and after the function execution (lines 77, 79). Line 78 specifies the variables the function is allowed to modify. The contract contains several cases (_behaviors_) that cover all situations and are disjoint (line 98). We show only one behavior (lines 89-97) describing a successful allocation (when an allocable node exists, as stated on line 90). Postconditions on lines 92-93 ensure the tracking index is incremented by one, and that the returned pointer points to the first allocable block. While this fact is sufficient to deduce the validity clause on line 94, we keep the latter as well (and it is actually expected for any allocator). In the same way, lines 96-97 specify that the nodes of the bank other than the newly allocated block have not been modified. Currently, Frama-C/Wp does not offer a memory model able to handle byte-level assignments in C objects. 
To represent as closely as possible the fact that allocated memory is initialized to zero by a call to calloc in the real-life code, we initialize each field of the allocated node to zero (see the C code on lines 104-106 and the postcondition on line 95).

Figure 2: Allocation bank and static allocator.

```c
113 typedef struct CONTEXT {
114   int placeholder_int;
115   NODE_T *rsrc_list;
116 } CONTEXT;
117 /*@
118   predicate ctx_sep_from_list(CONTEXT *ctx, \list<NODE_T*> ll) =
119     \forall integer i; 0 <= i < \length(ll) ==> \separated(\nth(ll, i), ctx);
120   predicate ctx_sep_from_allocables(CONTEXT *ctx) =
121     \forall int i; _alloc_idx <= i < _alloc_max ==> \separated(ctx, &_rsrc_bank[i]);
122
123   predicate freshness(CONTEXT *ctx, NODE_T **node) =
124     ctx_sep_from_allocables(ctx)
125     && list_sep_from_allocables(to_ll(ctx->rsrc_list, NULL))
126     && ptr_sep_from_allocables(ctx->rsrc_list)
127     && ptr_sep_from_allocables(*node)
128     && dptr_sep_from_allocables(node);
129
130   predicate sep_from_list{L}(CONTEXT *ctx, NODE_T **node) =
131     ctx_sep_from_list(ctx, to_ll{L}(ctx->rsrc_list, NULL))
132     && dptr_sep_from_list(node, to_ll{L}(ctx->rsrc_list, NULL));
133 */
```

Figure 3: Context and predicates to handle separation from a list and memory freshness.

Contexts, Separation Predicates and Freshness.In the target library (and in our illustrative example), pointers to nodes are not passed directly as function arguments, but stored in a context variable, and a pointer to the context is passed as a function argument. Lines 113-116 of Fig. 3 define a simplified context structure, comprised of an int and a NODE_T pointer to the head of a linked list of resources. Additional predicates to handle memory separation and memory freshness are defined on lines 118-132. In particular, the ctx_sep_from_list predicate on lines 118-119 specifies memory separation between a CONTEXT pointer and a logic list of nodes. Lines 120-121 define separation between such a pointer and allocable nodes in the bank. In C, a successful dynamic allocation of a memory block implies its _freshness_, that is, the separation between the newly allocated block (typically located on the heap) and all pre-existing memory locations (on the heap, stack or static storages). As this notion of freshness is currently not supported by Frama-C/Wp, we have to simulate it in another way. Our allocator returns a cell in a static array, so other global variables -- as well as local variables declared within the scope of a function -- will be separated from the node bank. To obtain a complete freshness within the scope of a function, we need to maintain separation between the allocable part of the bank and other memory locations accessible through pointers. In our illustrative example, pointers come from arguments including a pointer to a CONTEXT object (and pointers accessible from it) and a double pointer to a NODE_T node. This allows us to define a predicate to handle freshness in both function contracts. The freshness predicate on lines 123-128 of Fig. 3 specifies memory separation between known pointers within the scope of our functions and the allocable part of the bank, using separation predicates previously defined on lines 120-121, and on lines 67-73 of Fig. 2. This predicate will become unnecessary as soon as dynamic allocation is fully supported by Frama-C/Wp. Similarly, the sep_from_list predicate on lines 130-132 specifies separation between the context's linked list and known pointers, using predicates on lines 118-119, and on lines 28-29 of Fig. 1. 
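The code of Fig. 2 itself (lines 63-111) is not reproduced in this extraction. The following is a rough, simplified sketch of the allocator it describes -- static bank, tracking index, field-by-field zeroing, and the successful-allocation behavior; the NODE_T layout, the bank size and the exact contract are illustrative assumptions, not the paper's exact code:

```c
#include <stdint.h>
#include <stddef.h>

#define ALLOC_MAX 64 /* stands for _alloc_max; the actual value is unspecified */

typedef struct RESOURCE_T { uint32_t handle; uint32_t rsrcType; } RESOURCE_T; /* simplified */
typedef struct NODE_T { uint32_t handle; RESOURCE_T rsrc; struct NODE_T *next; } NODE_T;

static NODE_T _rsrc_bank[ALLOC_MAX]; /* static bank of nodes (cf. line 63) */
static int _alloc_idx = 0;           /* next allocable node (cf. line 64)  */

/* Sketch of the 'successful allocation' behavior (cf. lines 89-97):
   the tracking index is incremented and the first allocable block returned. */
/*@ behavior allocation_ok:
      assumes _alloc_idx < ALLOC_MAX;
      ensures _alloc_idx == \old(_alloc_idx) + 1;
      ensures \result == &_rsrc_bank[\old(_alloc_idx)];
      ensures \valid(\result);
*/
NODE_T *calloc_NODE_T(void) {
  if (_alloc_idx >= ALLOC_MAX) return NULL; /* bank exhausted */
  NODE_T *n = &_rsrc_bank[_alloc_idx];
  _alloc_idx++;
  /* field-by-field zeroing mimics calloc (cf. lines 104-106) */
  n->handle = 0;
  n->rsrc.handle = 0;
  n->rsrc.rsrcType = 0;
  n->next = NULL;
  return n;
}
```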
In the meantime, a static allocator with an additional separation predicate simulating freshness provides a reasonable solution to verify the target library. Since no specific constraint is assumed in our contracts on the position of previously allocated list nodes already added to the list, the verification uses a specific position in the bank only for the newly allocated node. The fact that the newly allocated node does not become valid during the allocation (technically, being part of the bank, it was valid in the sense of acsl already before) is compensated in our contracts by the freshness predicate stating that the new node -- as one of the allocable nodes -- was not used in the list before the allocation (cf. line 310 in Fig. 4). We expect that the migration from our specific allocator to a real-life dynamic allocator -- with a more general contract -- will be very easy to perform, as soon as necessary features are supported by Frama-C. ## 5 Memory Management This section presents how we use the definitions introduced in Sec. 4 to prove selected ESAPI functions involving linked lists. We also identify separation issues related to limitations of the Typed memory model of Wp, as well as a way to manage memory to overcome such issues. In this section, we detail Fig. 4-6. The Search Function.Figure 4 provides the search operation getNode with a partial contract illustrating functional and memory safety properties that we aim to verify and judge necessary for the proof at a larger scale. Some proof-guiding annotations (assertions, loop contracts) have been skipped for readability, but the code is preserved (mostly with the same line numbers). The arguments include a context, a handle to search and a double pointer for the returned node. Lines 380-416 perform the search of a node by its handle: variable temp_node iterates over the nodes of the resource list, and the node is returned if its handle is equal to the searched one (in which case, the function returns 616 for success). Lines 420-430 convert the resource handle to a TPM one, call the creation function to allocate a new node and add it to the list as its new head with the given handle if the allocation was successful (and return 833 if not). The new node is returned by createNode in temp_node_2 (again via a double pointer). Lines 435-462 perform some modifications on the content of the newly allocated node, without affecting the structure of the list. An error code is returned in case of a failure, and 1611 (with the allocated node in *node) otherwise. Lines 450-451, 453-454 and 461 provide some assertions to propagate information to the last return clause of the function, reached in case of the successful addition of the new element to the list. Compared to the real-life code, we have introduced anonymous blocks on lines 380-416 and 422-452 (which are not semantically necessary and were not present in the original code), as well as two local variables tmp_node and tmp_node2 instead of only one. We explain these code adaptations below. 
Figure 4: The function getNode with its partial acsl contract. [The listing is too garbled in this extraction to be reproduced reliably; its main recoverable clauses -- global preconditions on lines 310, 313 and 317, global postcondition on line 321, behaviors handle_in_list (with \result == 616) and not_in_list_and_node_allocated (with \result == 1611), and an entry assertion restating the well-formedness of the list (line 377) -- are discussed in the text below. The full annotated code is available in the companion artifact.]
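Since the listing itself is lost, here is a rough C sketch of getNode's control flow as described above -- search, then create-and-prepend -- reusing the CONTEXT and NODE_T types of Fig. 3 and the allocator sketch; the helper names and signatures are assumptions, and marshaling details are elided:

```c
#include <stdint.h>
#include <stddef.h>

/* CONTEXT and NODE_T as in Fig. 3 and the allocator sketch above.
   The two prototypes below are assumed for illustration. */
int iesys_handle_to_tpm_handle(uint32_t rsrc_handle, uint32_t *tpm_handle);
int createNode(CONTEXT *ctx, uint32_t rsrc_handle, NODE_T **node);

int getNode(CONTEXT *ctx, uint32_t rsrc_handle, NODE_T **node) {
  int r;
  uint32_t tpm_handle;
  { /* block added to circumvent issues with the WP memory model */
    NODE_T *temp_node;
    for (temp_node = ctx->rsrc_list; temp_node != NULL; temp_node = temp_node->next) {
      if (temp_node->handle == rsrc_handle) { *node = temp_node; return 616; }
    }
  }
  r = iesys_handle_to_tpm_handle(rsrc_handle, &tpm_handle);
  { /* block added to circumvent issues with the WP memory model */
    NODE_T *temp_node_2 = NULL;
    r = createNode(ctx, rsrc_handle, &temp_node_2); /* new list head */
    if (r == 833) return r;                         /* allocation failed */
    temp_node_2->rsrc.handle = tpm_handle;
    temp_node_2->rsrc.rsrcType = 0;
    /* ... set the resource name from tpm_handle via marshaling (Fig. 6) ... */
    *node = temp_node_2;
  }
  return 1611;
}
```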
Postconditions on lines 369-370 ensure that if a new node was successfully allocated and added to the list, the old head becomes the second element of the list, while line 372 ensures the separation of known pointers from the new list. We specify that the list of provided behaviors must be complete and disjoint (line 374). As global preconditions, we notably require the list to be well-formed (through the use of the linking predicate, cf. line 313), and the validity of our bank and freshness of allocable nodes with respect to function arguments and global variables (cf. line 310). Line 317 requires memory separation of known pointers from the list of resources using the sep_from_list predicate, and separation among known pointers using the \separated clause. As a global postcondition, we require that our bank stays valid, and that freshness of the (remaining) allocable nodes relatively to function arguments and global variables is maintained (cf. line 321). However, properties regarding the list itself -- such as the preservation of the list when it is not modified (line 332), or ensuring it remains well-formed after being modified (line 371) -- have to be issued to acsl behaviors to be proved, due to the way local variables are handled in the memory model of Wp. The logic list properties are much more difficult for solvers to manipulate in global behaviors. Memory Model Limitation: an Unprovable Property.Consider the assertion on line 377 of Fig. 4. Despite the presence of the same property as a precondition of the function (line 313), currently this assertion cannot be proved by Wp at the entry point for the real-life version of the function. Basically, the real-life version can be obtained7 from Fig. 4 by removing the curly braces on lines 380, 416, 422, 452. This issue is due to a limitation of the Wp memory model. Footnote 7: Another difference -- removing variable tmp_node2 declared on line 423 and using tmp_node instead -- can be ignored in this context. Indeed, for such an assertion (as in general for any annotation to be proved), Wp generates a proof obligation, to be proved by either Wp itself or by external provers via the Why3 platform [13]. Such an obligation includes a representation of the current state of the program memory. In particular, pointers such as the resource list ctx->rsrc_list (and by extension, any reachable node of the list) will be considered part of the heap. To handle the existence of a variable in memory -- whether it be the heap, the stack or the static segments -- Wp uses an allocation table to express when memory blocks are used or freed, which is where the issue lies. 
For instance, on line 428 of Fig. 4, the temp_node_2 pointer has its address taken, and is considered as used locally due to the requires clauses involving it in our function contract for createNode. It is consequently transferred to the memory model, where it has to be allocated. Currently, the memory model of Wp does not provide separated allocation tables for the heap, stack and static segments. Using temp_node_2 the way it is used on line 428 changes the status of the allocation table as a whole. This affects the status of other "allocated" (relatively to the memory model) variables as well, including (but not limited to) any reachable node of the list. Therefore, the call to createNode on line 428 of Fig. 4 in the real-life code that uses the address of a local pointer as a third argument is sufficient to affect the status of the resource list on the scale of the entire function. As a result, the assertion on line 377 is not proved. A Workaround.As a workaround (found thanks to an indication of the Wp team) to the aforementioned issue, we use additional blocks and variable declarations. Figure 5 presents those minor rewrites (with line numbers in alphabetical style to avoid confusion with the illustrative example). The left side illustrates the structure of the original C code, where the address of temp_node is taken and used in the createNode call on line j, and the same pointer is used to iterate on the list. On the right, we add additional blocks and a new pointer temp_node_2, initialized to NULL to match the previous iteration over the list. Each block defines a new scope, outside of which the pointer used by createNode will not exist and side-effect-prone allocations will not happen. This solves the issue.

Figure 5: Comparison of the real-life code of getNode (on the left) and its rewriting with additional blocks (on the right) for proving list properties.

Additional Proof-Guiding Annotations.Additional annotations (mostly omitted in Fig. 4) include, as usual, loop contracts and a few assertions. Assertions can help the tool to establish necessary intermediate properties or activate the application of relevant lemmas. For instance, assertions on lines 450-451 and 453-454 help propagate information over the structure of the linked list (by its logic list representation) outside of each block, and finally to postconditions. Assertions on lines 429 and 461 help propagate separations from the list through the function and its anonymous blocks. Some other intermediate assertions are needed to prove the unchanged nature of the list. Handling Pointer Casts.Another memory manipulation issue we have encountered comes from the function call on line 440 in getNode: after having been added to the resource list, the newly allocated node must have its name (or more precisely, the name of its resource) set from its TPM handle tpm_handle (derived from the handle of the node by the function call on line 420). This is done through marshaling using the uint32_Marshal function, partially shown on lines 298-306 of Fig. 6, whose role is to store a 4-byte unsigned int (in this case, our TPM handle) in a flexible array of bytes (the name of the resource). The function calls memcpy on (commented) line 302, which is the source of our issue (a correct endianness being ensured by a previous byte swap of the input). For most functions of the standard libraries, Frama-C provides basic acsl contracts to handle their use. 
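Anticipating the memcpy discussion just below, here is a minimal sketch of the kind of cast-free copy the paper substitutes for it (hypothetical name and byte order, simplified from the function of Fig. 6): each byte is extracted with a shift and a mask instead of going through pointer casts.

```c
#include <stdint.h>

/* Copy the 4-byte unsigned int *src into dst[0..3] without any pointer
   cast (cf. lines 281-284); the byte order chosen here is illustrative. */
void copy_uint32_bytes(const uint32_t *src, uint8_t *dst) {
  dst[0] = (uint8_t)((*src >> 24) & 0xFFu);
  dst[1] = (uint8_t)((*src >> 16) & 0xFFu);
  dst[2] = (uint8_t)((*src >> 8)  & 0xFFu);
  dst[3] = (uint8_t)( *src        & 0xFFu);
}
```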
However, for memory manipulation functions like memcpy, such contracts rely on pointer casts, whose support in Wp is currently limited. To circumvent this issue, we define our own memory copy function on lines 280-285 (in the spirit of the sketch above): instead of directly copying the 4-byte unsigned int pointed by src byte per byte through pointer casts as memcpy does, we extract one-byte chunks using byte shifts and bitmasks (cf. lines 281-284, 303) without casts. Line 272 requires that both source and destination locations are valid, also without casts. This version is fully handled by Wp. The current contracts are sufficient for the functional properties considered so far and the absence of runtime errors (and we expect they will be easy to extend for more precise properties if needed). ## 6 Lemmas When SMT solvers become inefficient (e.g. for inductive definitions), it can be necessary to add lemmas to facilitate the proof. These lemmas can then be directly instantiated by solvers, but proving them often requires reasoning by induction, with an interactive proof assistant. The previous work using logic lists [4] defined and proved several lemmas using the Coq proof assistant. We have added two new useful lemmas (defined in Fig. 7) and used twelve of the previous ones to verify both the illustrative example and the subset of real-life functions. However, because the formalization of the memory models and various aspects of acsl changed between the version of Frama-C used in the previous work and the one we use, we could not reuse the proofs of these lemmas. While older Frama-C versions directly generated Coq specifications, more recent Frama-C versions let Why3 generate them. Even if the new translation is close to the previous one, the way logic lists are handled was modified significantly. Figure 6: Definition for memcpy replacement in marshal. In the past, Frama-C logic lists were translated into the lists Coq offers in its standard library: an inductively defined type as usually found in functional programming languages such as OCaml and Haskell. Such types come with an induction principle that allows reasoning by induction. Without reasoning inductively, they also offer the possibility to reason by cases on lists: a list is defined either as empty, or as built with the cons constructor. In recent versions of Frama-C, acsl logic lists are axiomatized as follows: two functions nil and cons are declared, as well as a few other functions on lists, including the length of a list (length), the concatenation of two lists (concat), and getting an element from a list given its position (nth). However, there is no induction principle to reason by induction on lists, and because nil and cons are not constructors, it is not possible to reason by cases on lists in this formalization. It is possible to test if a list is empty, but if not, we do not know that it is built with cons. Writing new recursive functions on such lists is also very difficult. Indeed, we only have nth to observe a list, while the usual way to program functions on lists uses the head and the tail of a list for writing the recursive case. Interestingly, when the hypotheses of our lemmas include a fact expressed using linked_ll, it is still possible to reason by cases, because linked_ll is translated into Coq as an inductive predicate. Consequently, there are only two possible cases for the logic list: either it is empty, or it is built with cons. 
When such a hypothesis is missing, we axiomatized a tail function, and a decomposition principle stating that a list is either nil or cons. These axioms are quite classic and can be implemented using a list type defined by induction. We did not need an induction principle on lists, as either the lemmas did not require a proof by induction, or we reasoned inductively on inductive predicates. However, we proved such an induction principle using only the axioms we added. It is thus available to prove some other lemmas provided in [4] -- not needed yet in our current work -- that were proved by induction on lists. Because of these changes, to prove all the lemmas we need, we had to adapt all previous proof scripts, and in a few cases significantly. The largest proof scripts are about 100 lines long excluding our axioms, and the shortest takes a dozen lines. We suggest that the next versions of Frama-C come back to a concrete representation of lists. Thanks to our approach, we expect that the required changes in our proofs of lemmas will remain minimal: we will only have to prove the axioms introduced on tail and our decomposition principle.

Figure 7: New lemmas proved in our verification work (in addition to those in [4]).

## 7 Verification Results Proof results, presented in Fig. 8, were obtained by running Frama-C 26.1 (Iron) on a desktop computer running Ubuntu 20.04.4 LTS, with an Intel(R) Core(TM) i5-6600 CPU @ 3.30 GHz, featuring 4 cores and 4 threads, with 16GB RAM. We ran Frama-C with options -wp-par 3 and -wp-timeout 30. We used the Alt-Ergo v2.4.3 and CVC4 v1.8 solvers, via Why3 v1.5.1. Both functional properties and the absence of runtime errors (RTE) were proved. In our illustrative example, 276 goals were proved in a total time of 3min13s, with 61% proved by SMT solvers and the rest by the internal simplifier engine Qed of Wp and one Wp script. The maximum time to prove a goal was 20s. Solutions to memory manipulation problems presented in this paper were used in a larger verification study over 10 different functions of the target library (excluding macro functions, and interfaces without code whose behaviors needed to be modeled in acsl), related to linked-list manipulations and some internal ESAPI feasibility checks and operations (cryptographic operations excluded). Out of the 767 goals, proved in a total of 21min44s, 58% were proved by SMT solvers and 41% by Qed. Only 5 Wp proof scripts were used, when automatic proof either failed or was too slow. This shows a high level of automation achieved in our project, in particular thanks to carefully chosen predicates and lemmas. The maximum time to prove a goal was 1min50s. As for the 14 lemmas we used, 11 are proved in Coq using our scripts, and the remaining 3 directly by Alt-Ergo. Their proof takes 6 seconds in our configuration, with the maximum time to prove a goal being 650ms.

Figure 8: Proof results for the illustrative example and the real-life code.

## 8 Related Work TPM-related safety and security.Various case studies centered around TPM uses have emerged over the last decade, often focusing on use cases relying on functionalities of the TPM itself. A recent formal analysis of the key exchange primitive of TPM 2.0 [22] provides a security model to capture TPM protections on keys and protocols. Authors of [21] propose a security model for the cryptographic support commands in TPM 2.0, proved using the CryptoVerif tool. A model of TPM commands was used to formalize the session-based HMAC authorization and encryption mechanisms [18]. 
Such works focus on the TPM itself, but to the best of our knowledge, none of the previously published works aim at verifying the tpm2-tss library or any implementation of the TSS. _Linked lists and recursive data structures._ We use logical definitions from [4] to formalize and manipulate C linked lists as acsl logic lists in our effort, while another approach [3] relies on a parallel view of a linked list via a companion ghost array. Both approaches were tested on the linked list module of the Contiki OS [12], which relies on static allocations and simple structures. Realized in SPARK, a deductive verification tool for a subset of the Ada language (the subset itself bearing the same name), the verification of red-black trees [11] is related to the verification of linked lists in Frama-C using ghost arrays, including the auto-verification aspects [5]. However, the trees themselves were implemented using arrays, as pointers have only recently been introduced in SPARK [10]. Programs with pointers in SPARK are based on an ownership policy enforcing non-aliasing, which makes their verification closer to that of Rust programs than of C programs. _Formal verification for real-life code._ Deductive verification on real-life code has been spreading in the last decades, with various verification case studies where bugs were often found by annotating and verifying the code [14]. Such studies include [9], providing feedback on the authors' experience of using acsl and Frama-C on a real-world example. Authors of [7] managed a large-scale formal verification of global security properties on the C code of the JavaCard Virtual Machine. SPARK was used in the verification of a TCP stack [6]. Authors of [16] highlight some issues specific to the verification of the Hyper-V hypervisor, and how they can be solved with VCC, a deductive verification tool for C. ## 9 Conclusion and Future Work This paper presents a first case study on formal verification of the tpm2-tss library, a popular implementation of the TPM Software Stack. Making the bridge between the TPM and applications, this library is highly critical: to take advantage of the security guarantees of the TPM, its deductive verification is highly desired. The library code is very complex and challenging for verification tools. We have presented our verification results for a subset of 10 functions of the ESAPI layer of the library that we verified with Frama-C. We have described current limitations of the verification tool and temporary solutions we used to address them. We have proved all necessary lemmas (extending those of a previous case study for linked lists [4]) in Coq using the most recent version of the Frama-C-Coq translation and identified some necessary improvements in handling logic lists. Finally, we identified desired tool improvements to achieve a full formal verification of the library: support of dynamic allocation and of the basic acsl clauses to handle it, a memory model that works at byte level, and a clearer separation of the statuses of variables between the heap, the stack, and static segments. We expect that the real-life code will become provable (without the adaptations used in this work) as soon as those improvements are implemented. This work opens the way towards a full verification of the tpm2-tss library. Future work includes the verification of a larger subset of functions, including lower-level layers and operations. 
Specification and verification of specific security properties is another future work direction. Finally, combining formally verified modules with modules that undergo a partial verification (e.g. limited to the absence of runtime errors, or to runtime assertion checking of expected specifications on large test suites) can be another promising work direction to increase confidence in the security of the library. _Acknowledgment._ Part of this work was supported by ANR (grants ANR-22-CE39-0014, ANR-22-CE25-0018) and by the French Ministry of Defense via a PhD grant to Yani Ziani.
The Trusted Platform Module (TPM) is a cryptoprocessor designed to protect the integrity and security of modern computers. Communications with the TPM go through the TPM Software Stack (TSS), one of the most popular implementations of which is the open-source library tpm2-tss. Vulnerabilities in its code could allow attackers to recover sensitive information and take control of the system. This paper describes a case study on the formal verification of tpm2-tss using the Frama-C verification platform. Since the code of tpm2-tss relies heavily on linked lists and complex data structures, it is highly challenging for the verification tool; we present several issues and limitations we faced, and the solutions that allowed us to verify functional properties and the absence of runtime errors for a representative subset of functions. We describe the verification results and the tool improvements necessary to achieve a full formal verification of the target code.
2306.17816
Slice genus bound in $DTS^2$ from $s$-invariant
We prove a recent conjecture of Manolescu-Willis which states that the $s$-invariant of a knot in $\mathbb{RP}^3$ (as defined by them) gives a lower bound on its null-homologous slice genus in the unit disk bundle of $TS^2$. We also conjecture a lower bound in the more general case where the slice surface is not necessarily null-homologous, and give its proof in some special cases.
Qiuyu Ren
2023-06-30T17:31:47
http://arxiv.org/abs/2306.17816v1
# Slice genus bound in \(DTS^{2}\) from \(s\)-invariant ###### Abstract. We prove a recent conjecture of Manolescu-Willis which states that the \(s\)-invariant of a knot in \(\mathbb{RP}^{3}\) (as defined by them) gives a lower bound on its null-homologous slice genus in the unit disk bundle of \(TS^{2}\). We also conjecture a lower bound in the more general case where the slice surface is not necessarily null-homologous, and give its proof in some special cases. ## 1. Introduction Rasmussen [14] famously defined the \(s\)-invariant for knots in \(S^{3}\) using Khovanov homology theory [15], and proved that for a knot \(K\) in \(S^{3}\), \[2g_{4}(K)\geq|s(K)|, \tag{1}\] where \(g_{4}(K)\) is the slice genus of \(K\), which can be defined as the minimal genus of an orientable cobordism (in \(S^{3}\times[0,1]\)) from \(K\) to the unknot. Analogously, Manolescu-Marengon-Sarkar-Willis [16] and Manolescu-Willis [16] defined \(s\)-invariants (\(\mathbb{Z}\)-valued, like the usual \(s\)-invariant) for null-homologous knots in \(S^{1}\times S^{2}\) and for all knots in \(\mathbb{RP}^{3}\), respectively, and proved the same inequality (1) in these settings. Here the slice genus \(g_{4}(K)\) for \(K\) is still defined as the minimal genus of an orientable cobordism from \(K\) to an unknot (there are two unknots in \(\mathbb{RP}^{3}\), one null-homologous and one not). For an integer \(d\), let \(D(d)\) denote the \(D^{2}\)-bundle over \(S^{2}\) with euler number \(d\). Thus \(D(0)=D^{2}\times S^{2}\) with boundary \(S^{1}\times S^{2}\); \(D(1)=\mathbb{CP}^{2}\backslash B^{4}\) with boundary \(S^{3}\); \(D(2)=DTS^{2}\), the unit disk bundle of the tangent bundle of \(S^{2}\), with boundary \(\mathbb{RP}^{3}\). For a null-homologous properly embedded orientable connected surface \(\Sigma\subset D(d)\) with boundary a knot \(K\subset\partial D(d)\), \(d=0,1,2\), the genus bound \[2g(\Sigma)\geq-s(K) \tag{2}\] was proved for \(d=0,1\) [16, Theorem 1.15, Corollary 1.9] and conjectured for \(d=2\) [16, Conjecture 6.9]. We prove this conjecture. **Theorem**.: _If \((\Sigma,K)\subset(DTS^{2},\mathbb{RP}^{3})\) is a null-homologous properly embedded orientable connected surface with boundary a knot \(K\subset\mathbb{RP}^{3}\), then_ \[2g(\Sigma)\geq-s(K).\] _Remark 1_.: By reversing the orientation of \(D(d)\), (2) for \(d=0,1,2\) implies \[2g(\Sigma)\geq s(K)\] for \(d=0,-1,-2\). Since a cobordism in \(\partial D(d)\times[0,1]\) from a null-homologous \(K\subset\partial D(d)\) to an unknot can be capped off to become a slice surface in \(D(\pm d)\), (2) for \(d=0,1,2\) can be considered as refinements of (1) for null-homologous knots \(K\) in \(S^{1}\times S^{2},S^{3},\mathbb{RP}^{3}\), respectively. In the only other case, namely when \(K\subset\mathbb{RP}^{3}\) is not null-homologous, (1) is refined by (4) below with \([\Sigma]=\pm 1\). When \(d=0\), \(\Sigma\subset D^{2}\times S^{2}\) being null-homologous is equivalent to \(\partial\Sigma\subset S^{1}\times S^{2}\) being null-homologous. Since the \(s\)-invariant is only defined for null-homologous knots in \(S^{1}\times S^{2}\), the null-homologous condition on \(\Sigma\) puts no restriction. When \(d=1,2\), however, we could ask whether \(s(K)\) gives genus bounds for slice surfaces of \(K\) in \(D(d)\) that are not necessarily null-homologous. For \(d=1\) this is conjectured in [10] and proved by Ren [11, Corollary 1.5]. 
Explicitly, for any \((\Sigma,K)\subset(\mathbb{CP}^{2}\backslash B^{4},S^{3})\) we have \[2g(\Sigma)\geq-s(K)-[\Sigma]^{2}+|[\Sigma]|, \tag{3}\] where \(|\cdot|\) is the \(L^{1}\)-norm (equivalently, absolute value) on \(H_{2}(\mathbb{CP}^{2}\backslash B^{4},S^{3})\cong\mathbb{Z}\). We pose the following conjecture for the case \(d=2\). **Conjecture**.: _If \((\Sigma,K)\subset(DTS^{2},\mathbb{RP}^{3})\) is a properly embedded orientable connected surface with boundary a knot \(K\subset\mathbb{RP}^{3}\), then_ \[2g(\Sigma)\geq-s(K)-\frac{[\Sigma]^{2}}{2}. \tag{4}\] The main theorem proves the conjecture when \([\Sigma]=0\). In fact, the same proof applies to show it in a couple more cases. **Proposition 2**.: _The inequality (4) holds if \([\Sigma]=\pm 1,\pm 2,\pm 3\) in \(H_{2}(DTS^{2},\mathbb{RP}^{3})\cong\mathbb{Z}\)._ In fact, in the various settings above, the \(s\)-invariants are defined for links as well as knots [1, 10, 11]. As remarked in [10, 11], (2) for \(d=0,1,2\) and (3) reduce to computing the \(s\)-invariants of some special family of links; similarly, (4) also reduces to computing the \(s\)-invariant of a family of links. We will explain these reductions in Section 2. In Section 3, we calculate the \(s\)-invariants that enable us to conclude the main theorem and Proposition 2. We also pose a technical question whose positive answer implies the conjecture above. **Acknowledgement**.: The author thanks Ciprian Manolescu for suggesting this problem and for helpful discussions. ## 2. Reduce to \(s\)-invariant calculations In this section, we define links \(T(d;p,q)\subset\partial D(d)\) for \(p,q\geq 0\), such that (2) reduces to the calculation of the \(s\)-invariants of \(T(d;p,p)\), \(d=0,1,2\), and (3) and (4) reduce to that of \(T(d;p,q)\), \(d=1,2\). The strategy is essentially due to [10, 11], but we carry it out explicitly for completeness. Let \(S\subset D(d)\) denote the core \(2\)-sphere of the \(D^{2}\)-bundle \(D(d)\to S^{2}\). Any properly embedded oriented surface \(\Sigma\subset D(d)\) can be perturbed so that it intersects \(S\) transversely at some \(p\) points positively and some \(q\) points negatively. Removing a tubular neighborhood of \(S\), we obtain a properly embedded surface \(\Sigma_{0}\subset\partial D(d)\times[0,1]\), whose boundary on \(\partial D(d)\times\{1\}\) is the original boundary of \(\Sigma\) and whose boundary on \(\partial D(d)\times\{0\}\) is an oriented link in \(\partial D(d)\) consisting of \(p+q\) fibers of the \(S^{1}\)-bundle \(\partial D(d)\to S^{2}\), \(p\) of which oriented positively and \(q\) of which negatively. We denote the mirror of this link by \(T(d;p,q)\); thus, \(\Sigma_{0}\) is a cobordism from \(\overline{T(d;p,q)}\) to \(\partial\Sigma\). **Example 3**.: \(T(0;p,q)\) is the disjoint union of \(p+q\) knots of the form \(S^{1}\times\{*\}\subset S^{1}\times S^{2}\), \(p\) of which oriented upwards and \(q\) of which downwards. It is denoted by \(F_{p,q}\) in [10]. **Example 4**.: \(\overline{\partial D(1)}\to S^{2}\) is the Hopf fibration, hence its fibers have pairwise linking number \(1\). Thus \(T(1;p,q)\) is the torus link \(T(p+q,p+q)\) in which \(p\) of the strands are oriented against the other \(q\) strands. It is denoted by \(F_{p,q}(1)\) in [10] and \(T(p+q,p+q)_{p,q}\) in [11]. **Example 5**.: Think of \(\mathbb{RP}^{3}\) as the \(3\)-ball \(B^{3}\) with antipodal points on the boundary identified. 
Then \(T(2;p,q)\) can be obtained by standardly embedding a half-twist on \(p+q\) strands, \(p\) of which oriented against the other \(q\), into \(B^{3}\) such that the endpoints land on the boundary. This can be seen by realizing \(T(2;p,q)\subset\mathbb{RP}^{3}\) as the quotient of \(T(1;p,q)\subset S^{3}\) by the standard involution on \(S^{3}\). \(T(2;p,p)\) is denoted by \(H_{p}\) in [14]. By [11, 12, 13], if \(\Sigma\) is an oriented cobordism in \(Y\times[0,1]\) between two (null-homologous if \(Y=S^{1}\times S^{2}\)) oriented links \(L_{0}\) and \(L_{1}\) in \(Y\), \(Y=S^{3},S^{1}\times S^{2},\mathbb{RP}^{3}\), such that every component of \(\Sigma\) has a boundary in \(L_{0}\), then \[s(L_{1})-s(L_{0})\geq\chi(\Sigma). \tag{5}\] By construction, if \((\Sigma,L)\subset(D(d),\partial D(d))\) is a properly embedded oriented connected surface without closed components, by deleting a tubular neighborhood of the core \(S\subset D(d)\), we obtain a cobordism \(\Sigma_{0}\) from some \(\overline{T(d;p,q)}\) to \(L\), each of whose components has a boundary in \(L\). Turning the cobordism upside down and applying (5) give \[s(T(d;p,q))-s(\bar{L})\geq\chi(\Sigma_{0})=\chi(\Sigma)-p-q, \tag{6}\] where the last equality holds because topologically \(\Sigma_{0}\) is \(\Sigma\) with \(p+q\) disks removed from its interior. The number \(p-q\) equals the homology class \([\Sigma]\in H_{2}(D(d),\partial D(d))\) upon an identification \(H_{2}(D(d),\partial D(d))\cong H^{2}(D(d))\cong\mathbb{Z}\). Thus, for \([\Sigma]=p-q\) a fixed number, if \(s(T(d;p,q))+p+q\) is independent of the specific \(p,q\), then (6) can be rewritten in terms of \(s(\bar{L})\), \(\chi(\Sigma)\), and \([\Sigma]\). This is the case for \(d=0\), \([\Sigma]=0\), as well as for \(d=1\) with any \([\Sigma]\). Explicitly, by [12, Theorem 1.6][15, Theorem 1.1] we have \[s(T(0;p,p))=-2p+1,\ s(T(1;p,q))=(p-q)^{2}-2\max(p,q)+1. \tag{7}\] We conjecture this is also true for \(d=2\) and any \([\Sigma]\), with \[s(T(2;p,q))=\left\lfloor\frac{(p-q)^{2}}{2}\right\rfloor-p-q+1, \tag{8}\] and give a proof of it for \([\Sigma]=0,\pm 1,\pm 2,\pm 3\). In the special case when \(L=K\) is a knot, we have \(s(\bar{L})=s(\bar{K})=-s(K)\) by [16, Proposition 3.10][12, Proposition 8.8(1)][12, Proposition 4.10]. In this case, plugging (7) into (6) gives (2) for \(d=0,1\), and (3). Plugging (8) into (6) would give the conjectural inequality (4), although we are only able to prove it for \(|p-q|\leq 3\). _Remark 6_.: 1. It is easy to prove (8) for \(pq=0\), since in this case \(T(2;p,q)\) is a positive link and one can apply [14, Remark 6.3]. However this does not help in establishing (4). 2. If (8) were true in general, one can proceed as in [15, Section 4] to determine the entire quantum filtration structure of the Lee homology (as defined in [14]) of \(T(2;p,q)\). ## 3. \(s\)-invariants of \(T(2;p,q)\) As explained in Section 2, the main theorem and Proposition 2 reduce to the following proposition. **Proposition 7**.: _For \(p,q\geq 0\) with \(|p-q|\leq 3\), the \(s\)-invariant (as defined in [14]) of the link \(T(2;p,q)\subset\mathbb{RP}^{3}\) defined in Section 2 is given by_ \[s(T(2;p,q))=\left\lfloor\frac{(p-q)^{2}}{2}\right\rfloor-p-q+1.\] Stosic [14, Theorem 3] calculated the Khovanov homology groups of the positive torus links \(T(n,n)\) in their highest nontrivial homological grading \(h=2n^{2}\). 
For dimension reasons, the Lee spectral sequence from \(Kh(T(n,n))\otimes\mathbb{Q}\) to the Lee homology \(Kh_{Lee}(T(n,n))\) collapses immediately in this homological degree. This can be used to give an alternative proof of \(s(T(1;p,p))=1-2p\), a fact that is reproved in [13, Theorem 1.7]. We prove Proposition 7 by adapting the argument of Stosic. It is worth remarking that the calculation of the more general \(s(T(1;p,q))\) was done in [12] by pushing Stosic's argument slightly further. However, we were not able to achieve the same here, i.e., to prove (8) in its full generality. ### Review of Khovanov homology in \(\mathbb{RP}^{3}\) We first briefly review some properties of the Khovanov/Lee homology and the \(s\)-invariant of links in \(\mathbb{RP}^{3}\), following [15]. We only give definitions that will be relevant to us. We assume the reader is familiar with the usual theory in \(S^{3}\), in particular [1, 10, 11]. Thinking of \(\mathbb{RP}^{3}\backslash*\) as the twisted \(I\)-bundle over \(\mathbb{RP}^{2}\), we see that links in \(\mathbb{RP}^{3}\) can be represented by link diagrams in \(\mathbb{RP}^{2}\), and two different diagrams of the same link are related by the usual three Reidemeister moves. Although the over/under strands are not well-defined at a crossing, it is unambiguous to distinguish positive/negative crossings (if the link is oriented or has only one component) or to define \(0/1\)-resolutions at a crossing in such a link diagram. Let \(L\subset\mathbb{RP}^{3}\) be an oriented link with an oriented link diagram \(D\). Let \(2^{n}=(0\to 1)^{n}\) denote the hypercube of complete resolutions seen as a directed graph, where \(n\) is the number of crossings in \(D\). Every vertex \(v\) corresponds to a complete resolution \(D_{v}\), which is assigned a bigraded abelian group \(C(D_{v})\). Every edge \(e\) from a vertex \(v\) to a vertex \(w\) corresponds to a saddle from \(D_{v}\) to \(D_{w}\), which is assigned four maps \(\partial_{0}^{e},\partial_{-}^{e},\Phi_{0}^{e},\Phi_{+}^{e}\colon C(D_{v})\to C(D_{w})\) of bidegree \((0,0),(0,-2),(4,0),(4,2)\), respectively. The _Khovanov complex_ of \(D\) is \[C(D):=\oplus_{v}C(D_{v})[-n_{-}+|v|]\{n_{+}-2n_{-}\}\] equipped with the differential \(\partial:=\sum_{e}\partial_{0}^{e}\). Here \([\cdot]\) denotes the homological grading shift, \(\{\cdot\}\) denotes the shift in the first grading (called quantum grading) of \(C(D_{v})\), \(n_{\pm}\) denotes the number of positive/negative crossings in \(D\) and \(|v|\) denotes the number of \(1\)'s in \(v\). The _Lee complex_ is \(C_{Lee}(D):=C(D)\otimes\mathbb{Q}\) equipped with the differential \(\partial_{Lee}:=\sum_{e}(\partial_{0}^{e}+\partial_{-}^{e}+\Phi_{+}^{e}+\Phi_{0}^{e})\otimes\mathbb{Q}\). For our purpose, we also consider a _deformed Khovanov complex_, defined as \(C^{\prime}(D):=C(D)\) equipped with the differential \(\partial^{\prime}:=\sum_{e}(\partial_{0}^{e}+\partial_{-}^{e})\). The cohomologies of these three complexes are denoted by \(Kh(L)\), \(Kh_{Lee}(L)\), \(Kh^{\prime}(L)\), respectively, which do not depend on the choice of the link diagram \(D\). The group \(Kh(L)\) is trigraded by \(h\) (homological grading), \(q\) (quantum grading), and \(k\); \(Kh^{\prime}(L)\) is bigraded by \(h,q\); \(Kh_{Lee}(L)\) is graded by \(h\) and filtered by \(q\). As a vector space, \(Kh_{Lee}(L)\cong\mathbb{Q}^{2^{|L|}}\) is spanned by some generators \([s_{\mathfrak{o}}]\), where \(\mathfrak{o}\) runs over all possible orientations of \(L\) as an unoriented link1. 
When \(\mathfrak{o}\) is the given orientation on \(L\), \([s_{\mathfrak{o}}]\) sits in homological degree \(0\), and its quantum filtration degree plus \(1\) is defined as the \(s\)_-invariant_ of \(L\). As a filtered complex, the associated graded complex of \(C_{Lee}(L)\) is exactly \(C^{\prime}(L)\otimes\mathbb{Q}\). Thus, there is a spectral sequence with \(E_{1}\)-page \(Kh^{\prime}(L)\otimes\mathbb{Q}\) that converges to \(Kh_{Lee}(L)\), whose \(r\)-th differential has bidegree \((1,4r)\). Footnote 1: In fact, the definition of \([s_{\mathfrak{o}}]\) depends on some auxiliary choices, which we may ignore here. The orientation on \(L\) plays only a minor role in the group \(Kh^{\bullet}(L)\), where \(\bullet\) denotes one of the three flavors we are considering. Explicitly, negating the orientation on a sublink \(L^{\prime}\subset L\) shifts its grading by \([2\ell]\{6\ell\}\), where \(\ell\) is the linking number (which takes half-integer values) between \(L^{\prime}\) and \(L\backslash L^{\prime}\) with the new orientations. Every cobordism \(\Sigma\colon L_{0}\to L_{1}\) between two oriented links with diagrams \(D_{0},D_{1}\) induces a chain map \(C^{\bullet}(\Sigma)\colon C^{\bullet}(D_{0})\to C^{\bullet}(D_{1})\) with some grading shifts. By design, if \(D_{0/1}\) are the \(0/1\)-resolutions at a crossing of a link diagram \(D\) of some link \(L\), and \(\Sigma\) is the obvious saddle cobordism, then \(C^{\bullet}(D)\) is isomorphic to the mapping cone of \(C^{\bullet}(\Sigma)\) up to grading shifts. More explicitly, for our convenience we record that if \(D_{0/1}\) are the \(0/1\)-resolutions of \(D\) at a positive crossing, and \(L_{0}\) is assigned the induced orientation from \(L\) while \(L_{1}\) is assigned any orientation, then \(C^{\prime}(D)\cong Cone(C^{\prime}(D_{0})\to C^{\prime}(D_{1})[c]\{3c+1\})[1]\{1\}\), where \(c=n_{-}(D_{1})-n_{-}(D)\). Thus we have the following exact triangle of deformed Khovanov homology groups: \[Kh^{\prime}(L_{1})[c+1]\{3c+2\}\to Kh^{\prime}(L)\to Kh^{\prime}(L_{0})\{1\}\xrightarrow{[1]} \tag{9}\] If \(\Sigma\colon L_{0}\to L_{1}\) is an oriented cobordism, it preserves the homological grading \(h\) and changes the quantum grading \(q\) by \(\chi(\Sigma)\). Moreover, it sends a generator \([s_{\mathfrak{o}}]\in Kh_{Lee}(L_{0})\) to some \(\sum_{\mathfrak{o}^{\prime}}\lambda_{\mathfrak{o}^{\prime}}[s_{\mathfrak{o}^{\prime}}]\in Kh_{Lee}(L_{1})\), where \(\mathfrak{o}^{\prime}\) runs over orientations of \(L_{1}\) such that there is an orientation on \(\Sigma\) making it an oriented cobordism \((L_{0},\mathfrak{o})\to(L_{1},\mathfrak{o}^{\prime})\), and \(\lambda_{\mathfrak{o}^{\prime}}\in\mathbb{Q}^{\times}\). In particular, this implies (5). Finally, we remark that there are by definition two unknots in \(\mathbb{RP}^{3}\). The _class-\(0\) unknot_ \(U_{0}\) is an unknot in a small ball contained in \(\mathbb{RP}^{3}\); the _class-\(1\) unknot_ \(U_{1}\) is a copy of the standardly embedded \(\mathbb{RP}^{1}\subset\mathbb{RP}^{3}\). Both these unknots have rank-\(2\) deformed Khovanov homology given by \(Kh^{\prime 0,\pm 1}(U_{i})=\mathbb{Z}\). The deformed Khovanov homology behaves as expected under the disjoint union of two links, one in \(\mathbb{RP}^{3}\) and one in \(S^{3}\). In particular, regarding \(U_{0}\) as a knot in \(S^{3}\), we have \(Kh^{\prime}(L\sqcup U_{0})=Kh^{\prime}(L)\{1\}\oplus Kh^{\prime}(L)\{-1\}\) for any link \(L\subset\mathbb{RP}^{3}\). ### Calculation of \(s\) Now we are ready to prove Proposition 7. 
We first define two auxiliary families of links \(T^{i}_{n}\), \(S^{i}_{n}\), \(0\leq i\leq n-1\). These should be compared with \(D^{i}_{n,n-1}\), \(D^{i}_{n,n}\) in [1, Section 5]. Think of \(\mathbb{RP}^{2}\) as \(D^{2}\) with antipodal points on the boundary identified. A braid diagram can be placed into \(D^{2}\) with its endpoints on \(\partial D^{2}\), identified pairwise to give a link diagram in \(\mathbb{RP}^{2}\); this is called by [10] the projective closure of the given braid. Let \(T^{i}_{n}\) be the link represented by the projective closure of \(\sigma_{n-1}(\sigma_{n-2}\sigma_{n-1})\cdots(\sigma_{2}\cdots\sigma_{n-1})(\sigma_{1}\cdots\sigma_{i})\), and \(S^{i}_{n}\) be the link represented by the projective closure of \(\sigma_{n-1}(\sigma_{n-2}\sigma_{n-1})\cdots(\sigma_{1}\cdots\sigma_{n-1})(\sigma_{n-1}\cdots\sigma_{n-i})\). We equip \(T^{i}_{n}\), \(S^{i}_{n}\) with the orientation where all strands are oriented upwards. By the description in Example 5, \(T(2;n,0)\) is exactly the link \(T^{n-1}_{n}\), and all \(T(2;p,n-p)\) are \(T^{n-1}_{n}\) with a possibly different orientation. Also, by definition \(S^{0}_{n}=T^{n-1}_{n}\), and it is easy to check by Reidemeister moves2 that \(T^{0}_{n}=S^{n-3}_{n-2}\). Footnote 2: If one wishes to think of the link diagrams as sitting in \(D^{2}\), there will be two additional Reidemeister moves when one crosses the boundary, as illustrated in [1, Figure 1]. Note, however, that picture (e) there was incorrectly drawn. For \(n\geq 2\), \(i>0\), resolving the crossing in the standard diagram of \(T^{i}_{n}\) that corresponds to the last letter \(\sigma_{i}\) gives \(T^{i-1}_{n}\) as the \(0\)-resolution and some other link \((T^{i}_{n})_{1}\) as the \(1\)-resolution. Similarly, resolving the crossing of \(S^{i}_{n}\) that corresponds to the last \(\sigma_{n-i}\) gives \(0\)-resolution \(S^{i-1}_{n}\) and \(1\)-resolution \((S^{i}_{n})_{1}\). By Reidemeister moves, one may check that in fact (as unoriented links) \[(T^{i}_{n})_{1}=\begin{cases}T^{n-3}_{n-2}\sqcup U_{0},&i=n-1\\ T^{i-1}_{n-2},&i<n-1\end{cases},\ (S^{i}_{n})_{1}=\begin{cases}S^{i-2}_{n-2},&i>1\\ S^{0}_{n-2}\sqcup U_{0},&i=1\end{cases}. \tag{10}\] Here \(U_{0}\) is the class-\(0\) unknot, and as a convention we define \(T^{-1}_{0}=S^{0}_{0}=\emptyset\) to be the empty link. Giving \((T_{n}^{i})_{1}\), \((S_{n}^{i})_{1}\) the orientations of the right-hand sides in the identification (10), the skein exact triangle (9) gives exact triangles \[Kh^{\prime}((T_{n}^{i})_{1})[n-1]\{3n-4\}\to Kh^{\prime}(T_{n}^{i})\to Kh^{\prime}(T_{n}^{i-1})\{1\}\xrightarrow{[1]}, \tag{11}\] \[Kh^{\prime}((S_{n}^{i})_{1})[n]\{3n-1\}\to Kh^{\prime}(S_{n}^{i})\to Kh^{\prime}(S_{n}^{i-1})\{1\}\xrightarrow{[1]}. \tag{12}\] We prove a lemma that gives a "graphical lower bound" of the deformed Khovanov homology groups of \(T_{n}^{i}\) and \(S_{n}^{i}\), in the spirit of [13, Theorem 2.1]. In fact, most parts of the statement won't be relevant for our purpose. But since the general statement is not much more complicated to state and to prove, we include it fully here. **Lemma 8**.: 1. \(Kh^{\prime h,q}(T_{n}^{i})=0\) _for_ \(h<0\) _or_ \(h>\lfloor n^{2}/4\rfloor\) _or_ \(q-h<\lfloor n^{2}/2\rfloor-2n+1+i\)_._ 2. \(Kh^{\prime h,q}(S_{n}^{i})=0\) _for_ \(h<0\) _or_ \(h>\lfloor n^{2}/4\rfloor+\lceil i/2\rceil\) _or_ \(q-h<\lfloor n^{2}/2\rfloor-n+i\) _or_ \(q-2h<\lfloor n^{2}/4\rfloor-n+i\)_._ _Remark 9_.: The \(s\)-invariant of all \(T(1;p,q)\subset S^{3}\) was deduced in [13] from Theorem 2.1 there. 
The difficulty that prevents us from similarly deducing (8) from Lemma 8 is that we were not able to establish an injectivity result like the addendum in Theorem 2.1(1) in [13] (there we actually have an isomorphism; however, only injectivity is needed for the proof, and only injectivity is expected in our case). See also Section 3.3. Proof.: We induct on \(n,i\). For \(n=1\) this is immediate, since \(S_{1}^{0}=T_{1}^{0}=U_{1}\) is the class-1 unknot. For \(n=2\), \(T_{2}^{0}=U_{0}\) satisfies _(1)_. By (10)(11), nonzero homology groups of \(T_{2}^{1}\) are exactly \[Kh^{\prime h,q}(T_{2}^{1})=\mathbb{Z},\ (h,q)=(0,0),(0,2),(1,1),(1,3),\] thus \(T_{2}^{1}\) satisfies _(1)_ and \(S_{2}^{0}=T_{2}^{1}\) satisfies _(2)_. By (10)(12), \(S_{2}^{1}\) has \[Kh^{\prime h,q}(S_{2}^{1})=\mathbb{Z},\ (h,q)=(0,1),(0,3),(1,2),(2,6),\] and zero elsewhere possibly except when \((h,q)=(1,4),(2,4)\), thus it satisfies _(2)_. Now, by the induction hypothesis and (10)(11)(12), _(1)(2)_ are inductively proved for \(n>2\) by checking the elementary statements that * The vanishing region (in the \(hq\)-coordinate plane) of \(Kh^{\prime}(T_{n}^{0})\) described in _(1)_ is contained in that of \(Kh^{\prime}(S_{n-2}^{n-3})\) described in _(2)_; * For \(i>0\), the vanishing region of \(Kh^{\prime}(T_{n}^{i})\) is contained in that of both \(Kh^{\prime}((T_{n}^{i})_{1})[n-1]\{3n-4\}\) and \(Kh^{\prime}(T_{n}^{i-1})\{1\}\); * The vanishing region of \(Kh^{\prime}(S_{n}^{0})\) is identical to that of \(Kh^{\prime}(T_{n}^{n-1})\); * For \(i>0\), the vanishing region of \(Kh^{\prime}(S_{n}^{i})\) is contained in that of both \(Kh^{\prime}((S_{n}^{i})_{1})[n]\{3n-1\}\) and \(Kh^{\prime}(S_{n}^{i-1})\{1\}\). Proof of Proposition 7.: We divide into four cases according to the value of \(|p-q|\). We give the proof carefully for the case \(|p-q|=0\), and more casually for the remaining cases, as they will be similar. **Case 1**: \(|p-q|=0\). Write \(n=p+q=2m\). We induct on \(m\) to show that \[\operatorname{rank}Kh^{\prime m^{2},*}(T_{n}^{i})=2\binom{i}{m}, \tag{13}\] \[\inf\{q\colon Kh^{\prime m^{2},q}(T_{n}^{n-1})\neq 0\}=3m^{2}-2m. \tag{14}\] When \(m=1\), we have that \(T_{2}^{0}=U_{0}\) satisfies (13), and \(T_{2}^{1}\) satisfies (13)(14) by the description of \(Kh^{\prime}(T_{2}^{1})\) in the proof of Lemma 8. Assume now \(m>1\). We have \(Kh^{\prime m^{2},*}(T^{0}_{n})=0\) by Lemma 8(2) applied to \(S^{n-3}_{n-2}=T^{0}_{n}\); thus \(T^{0}_{n}\) satisfies (13). For \(i>0\), (11) and (10) give \[\operatorname{rank}Kh^{\prime m^{2},*}(T^{i}_{n})\leq\operatorname{rank}Kh^{\prime m^{2},*}(T^{i-1}_{n})+\operatorname{rank}Kh^{\prime(m-1)^{2},*}(T^{i-1}_{n-2}),\quad i<n-1 \tag{15}\] \[\operatorname{rank}Kh^{\prime m^{2},*}(T^{n-1}_{n})\leq\operatorname{rank}Kh^{\prime m^{2},*}(T^{n-2}_{n})+2\operatorname{rank}Kh^{\prime(m-1)^{2},*}(T^{n-3}_{n-2}). \tag{16}\] Using (15) iteratively and (16), as well as the induction hypothesis, we obtain \[\operatorname{rank}Kh^{\prime m^{2},*}(T^{i}_{n})\leq 2\binom{i}{m}.\] On the other hand, due to the existence of the Lee spectral sequence from \(Kh^{\prime}\otimes\mathbb{Q}\) to \(Kh_{Lee}\), \(\operatorname{rank}Kh^{\prime m^{2},*}(T^{n-1}_{n})\) is bounded below by \(\dim Kh^{m^{2}}_{Lee}(T^{n-1}_{n})\), which equals \(\binom{2m}{m}=2\binom{n-1}{m}\): \(Kh^{m^{2}}_{Lee}(T^{n-1}_{n})\) is generated by those generators \([s_{\mathfrak{o}}]\) for which \(\mathfrak{o}\) is an orientation of \(T^{n-1}_{n}\) realizing \(T(2;m,m)\) (note components in \(T^{n-1}_{n}\) have pairwise linking number \(1/2\)). 
We conclude that \(T^{n-1}_{n}\) satisfies (13), and so do all \(T^{i}_{n}\), because the sharpness of the estimate above shows that all of (15)(16) are in fact equalities. The sharpness of the estimate also implies that the map \(Kh^{\prime}(T^{n-3}_{n-2}\sqcup U_{0})\to Kh^{\prime}(T^{n-1}_{n})\) in the exact triangle (11) is injective upon tensoring with \(\mathbb{Q}\). It follows that \[\inf\{q\colon Kh^{\prime m^{2},q}(T^{n-1}_{n})\neq 0\}\leq\inf\{q\colon Kh^{\prime(m-1)^{2},q}(T^{n-3}_{n-2})\neq 0\}-1+3n-4=3m^{2}-2m.\] Lemma 8(1) gives the reverse inequality, so (14) is also proved. We return to the calculation of the \(s\)-invariant. The sharpness of the estimate of \(\operatorname{rank}Kh^{\prime}(T^{n-1}_{n})\) also implies that the Lee spectral sequence from \(Kh^{\prime}(T^{n-1}_{n})\otimes\mathbb{Q}\) to \(Kh_{Lee}(T^{n-1}_{n})\) collapses immediately at homological degree \(h=m^{2}\). It follows that the lowest quantum filtration level of \(Kh^{m^{2}}_{Lee}(T^{n-1}_{n})\) is at \(q=3m^{2}-2m\). Taking into account the bidegree shift \([m^{2}]\{3m^{2}\}\), the \(s\)-invariant of \(T(2;m,m)\) is equal to the quantum filtration degree of \([s_{\mathfrak{o}}]\in Kh^{m^{2}}_{Lee}(T^{n-1}_{n})\) minus \(3m^{2}-1\), where \(\mathfrak{o}\) is any orientation of \(T^{n-1}_{n}\) that realizes \(T(2;m,m)\). Since \(Kh^{m^{2}}_{Lee}(T^{n-1}_{n})\) is spanned by all such \([s_{\mathfrak{o}}]\), every \([s_{\mathfrak{o}}]\) sits in the lowest filtration level. It follows that \(s(T(2;m,m))=(3m^{2}-2m)-(3m^{2}-1)=-2m+1\), proving Proposition 7 for \(|p-q|=0\). **Case 2**: \(|p-q|=2\). Write \(n=p+q=2m\). By an induction on \(m\) one can show that \[\operatorname{rank}Kh^{\prime m^{2}-1,*}(T^{i}_{n})=2\binom{i}{m+1}+2\binom{i}{m-1},\] \[\inf\{q\colon Kh^{\prime m^{2}-1,q}(T^{n-1}_{n})\neq 0\}=3m^{2}-2m-1.\] Moreover, we have \(\operatorname{rank}Kh^{\prime m^{2}-1,*}(T^{n-1}_{n})=\dim Kh^{m^{2}-1}_{Lee}(T^{n-1}_{n})\), which implies the collapsing of the Lee spectral sequence at \(h=m^{2}-1\). After a bidegree shift \([m^{2}-1]\{3m^{2}-3\}\), we calculate that \(s(T(2;m+1,m-1))=(3m^{2}-2m-1)-(3m^{2}-3-1)=-2m+3\), proving the case \(|p-q|=2\). **Case 3**: \(|p-q|=1\). Write \(n=p+q=2m+1\). By an induction on \(m\) (with base case \(m=0\)) one can show that \[\operatorname{rank}Kh^{\prime m^{2}+m,*}(T^{i}_{n})=2\binom{i+1}{m+1},\] \[\inf\{q\colon Kh^{\prime m^{2}+m,q}(T^{n-1}_{n})\neq 0\}=3m^{2}+m-1.\] Moreover, we also conclude the immediate collapsing of the Lee spectral sequence by a dimension count, and calculate that \(s(T(2;m+1,m))=(3m^{2}+m-1)-(3m^{2}+3m-1)=-2m\). **Case 4**: \(|p-q|=3\). Write \(n=p+q=2m+1\). From \[\operatorname{rank}Kh^{\prime m^{2}+m-2,*}(T_{n}^{i})=2{i\choose m+2}+2{i\choose m-1},\] \[\inf\{q\colon Kh^{\prime m^{2}+m-2,q}(T_{n}^{n-1})\neq 0\}=3m^{2}+m-3\ \ (m>0)\] and a dimension count we conclude as above that \(s(T(2;m+2,m-1))=-2m+4\). We remark that in this case one needs to take both \(m=0,1\) as base cases for the induction, where \(T_{3}^{0}=S_{1}^{0}=T_{1}^{0}=U_{1}\) and all \(Kh^{\prime}(T_{3}^{i})\) can be completely determined from (11). ### A question As an analogue to Question 6.1 in [10], we pose the following question, whose truth is verified in small examples (\(n\leq 5\)). 
**Question 10**.: _Is it true that the saddle cobordism \(T_{n}^{i}\to T_{n}^{i-1}\) always induces a surjection on \(Kh^{\prime}\otimes\mathbb{Q}\)?_ A positive answer to Question 10 in the case \(i=n-1\) implies that the saddle cobordism \(T(2;n-2,0)\sqcup U_{0}\to T(2;n,0)\) induces an injection on \(Kh^{\prime}\otimes\mathbb{Q}\). By the same argument as the proof of Theorem 1.1 (\(m=n\)) in [10], this implies (8), thus the conjectural genus bound (4). Of course, (8) is a much weaker statement than Question 10 and would follow from the surjectivity of \[Kh^{\prime pq,pq+\lfloor n^{2}/2\rfloor-n}(T_{n}^{n-1})\otimes\mathbb{Q}\to Kh^{\prime pq,pq+\lfloor n^{2}/2\rfloor-n-1}(T_{n}^{n-2})\otimes\mathbb{Q}\] for all \(p+q=n\) (cf. Remark 9).
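For readers who want a quick sanity check, the short Python script below (an illustrative addition, not part of the original paper) verifies that the closed form (8) reproduces the values \(s=-2m+1,\,-2m,\,-2m+3,\,-2m+4\) obtained in Cases 1-4 of the proof above, and also checks the binomial identity \(\binom{2m}{m}=2\binom{n-1}{m}\) used in the lower bound of Case 1.

```python
from math import comb

def s_conjectured(p: int, q: int) -> int:
    """Conjectured formula (8): s(T(2;p,q)) = floor((p-q)^2 / 2) - p - q + 1."""
    return (p - q) ** 2 // 2 - p - q + 1

for m in range(1, 200):
    # Values established in the four cases of the proof of Proposition 7.
    assert s_conjectured(m, m) == -2 * m + 1          # Case 1: |p-q| = 0
    assert s_conjectured(m + 1, m) == -2 * m          # Case 3: |p-q| = 1
    assert s_conjectured(m + 1, m - 1) == -2 * m + 3  # Case 2: |p-q| = 2
    assert s_conjectured(m + 2, m - 1) == -2 * m + 4  # Case 4: |p-q| = 3
    # Rank count behind Case 1: dim Kh_Lee^{m^2}(T_n^{n-1}) with n = 2m
    # equals C(2m, m) = 2 C(n-1, m).
    assert comb(2 * m, m) == 2 * comb(2 * m - 1, m)

print("Formula (8) matches all four proved cases.")
```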
A recent conjecture of Manolescu-Willis states that the \(s\)-invariant of a knot in \(\mathbb{RP}^{3}\), as defined by them, gives a lower bound on the null-homologous slice genus of the knot in the unit disk bundle of \(TS^{2}\); this paper proves that conjecture.
2309.09275
Breakdown in vehicular traffic: driver over-acceleration, not over-reaction
Contrary to a widely accepted assumption about the decisive role of driver over-reaction for breakdown in vehicular traffic, we have shown that the cause of the breakdown is driver over-acceleration, not driver over-reaction. To reach this goal, we have introduced a mathematical approach for the description of driver over-acceleration in a microscopic traffic flow model. The model, in which no driver over-reaction occurs, explains all observed empirical nucleation features of traffic breakdown.
Boris S. Kerner
2023-09-17T13:45:07
http://arxiv.org/abs/2309.09275v1
# Breakdown in vehicular traffic: driver over-acceleration, not over-reaction ###### Abstract Contrary to a widely accepted assumption about the decisive role of driver over-reaction for breakdown in vehicular traffic, we have shown that the cause of the breakdown is driver over-acceleration, not over-reaction. To reach this goal, we have introduced a mathematical approach for the description of driver over-acceleration in a microscopic traffic flow model. The model, in which no driver over-reaction occurs, explains all observed empirical nucleation features of traffic breakdown. pacs: 89.40.-a, 47.54.-r, 64.60.Cn, 05.65.+b Traffic breakdown is a transition from free flow to congested vehicular traffic occurring mostly at bottlenecks. In 1958-1961, Herman, Gazis, Montroll, Potts, Rothery, and Chandler [1] as well as Kometani and Sasaki [2] assumed that the cause of the breakdown is driver _over-reaction_ to the deceleration of the preceding vehicle: Due to a delayed deceleration of the vehicle resulting from a driver reaction time, the speed becomes less than the speed of the preceding vehicle. If this over-reaction is realized for all following drivers, then traffic instability occurs [1; 2; 3]. The instability leads to a wide moving jam (J) formation in free flow (F), called an F\(\rightarrow\)J transition [4]. The traffic instability is currently a theoretical basis of standard traffic theory (e.g., [3; 5; 6]). However, rather than the F\(\rightarrow\)J transition, in real field data traffic breakdown is a phase transition from free flow to synchronized flow (S) (F\(\rightarrow\)S transition) [7; 8]; the empirical traffic breakdown exhibits a nucleation nature (Fig. 1(a)) [15]. To explain the empirical nucleation nature of the F\(\rightarrow\)S transition, three-phase traffic theory was introduced [7], in which there are three phases: free flow (F), synchronized flow (S), and wide moving jam (J), where the phases S and J belong to congested traffic. Driver over-reaction that should explain traffic breakdown can occur _only_ if space gaps between vehicles are small enough [1; 2; 3; 4; 5; 6]. At large enough gaps, rather than over-reaction, the vehicle speed does _not_ become less than the speed of the decelerating preceding vehicle, i.e., usual _speed adaptation_ to the speed of the preceding vehicle occurs, which causes _no_ instability. * Contrary to standard theory [1; 2; 3; 4; 5; 6], it is assumed in three-phase traffic theory [7] that traffic breakdown is realized at larger gaps between vehicles, at which no driver over-reaction can occur. In three-phase traffic theory, the empirical nucleation nature of the F\(\rightarrow\)S transition is explained through a hypothesis about a discontinuity in the probability of vehicle acceleration when free flow transforms into synchronized flow (Fig. 1(b)) [10]: In free flow, vehicles can accelerate from car-following at a lower speed to a higher speed with a larger probability than in synchronized flow. Vehicle acceleration whose probability exhibits the discontinuity when free flow transforms into synchronized flow is called _over-acceleration_, to distinguish it from "usual" driver acceleration, which does not show a discontinuous character. 
The discontinuous character of over-acceleration is explained as follows: Due to smaller space gaps in synchronized flow, vehicles prevent each other from accelerating out of a local speed decrease; contrarily, due to larger space gaps in free flow at the same flow rate, vehicles can easily accelerate out of the local speed decrease. The discontinuous character of over-acceleration can lead to an S\(\rightarrow\)F instability in synchronized flow [7]. Contrary to the classical traffic instability, which is a growing wave of a local _decrease_ in the vehicle speed [1; 2; 3; 4; 5; 6], the S\(\rightarrow\)F instability is a growing wave of a local _increase_ in the speed [7]. Microscopic three-phase models [11] that simulate the nucleation nature of traffic breakdown (Fig. 1(a)) also show the classical traffic instability leading to wide moving jam emergence. In these complex traffic models [11], both driver over-acceleration and driver over-reaction are important. Thus, up to now there has been no mathematical proof that the cause of the nucleation nature of traffic breakdown is solely over-acceleration without the influence of driver over-reaction. In this paper, we introduce a mathematical approach for over-acceleration \(a_{\rm OA}\): \[a_{\rm OA}=\alpha\Theta(v-v_{\rm syn}) \tag{1}\] that satisfies the hypothesis about the discontinuous character of over-acceleration (Fig. 1(b)). In (1), \(v\) is the vehicle speed, where \(0\leq v\leq v_{\rm free}\), \(v_{\rm free}\) is a maximum speed; \(\alpha\) is a maximum over-acceleration; \(\Theta(z)=0\) at \(z<0\) and \(\Theta(z)=1\) at \(z\geq 0\); \(v_{\rm syn}\) is a given synchronized flow speed (\(v_{\rm syn}<v_{\rm free}\)). Based on (1), we develop a microscopic traffic flow model, in which vehicle acceleration/deceleration \(a\) in a road lane is described by a system of equations: \[a=K_{\Delta v}\Delta v+a_{\rm OA}\ {\rm at}\ g_{\rm safe}\leq g\leq G, \tag{2}\] \[a=a_{\rm max}\ {\rm at}\ g>G, \tag{3}\] \[a=a_{\rm safety}(g,v,v_{\ell})\ {\rm at}\ g<g_{\rm safe}, \tag{4}\] where \(g\) is the space gap to the preceding vehicle, \(\Delta v=v_{\ell}-v\), \(v_{\ell}\) is the preceding vehicle speed, \(K_{\Delta v}\) is a positive coefficient, \(a_{\rm max}\) is a maximum acceleration, \(G\) is a synchronization space-gap, \(G=v\tau_{\rm G}\), \(\tau_{\rm G}\) is a synchronization time headway, \(g_{\rm safe}\) is a safe space-gap, \(g_{\rm safe}=v\tau_{\rm safe}\), \(\tau_{\rm safe}\) is a safe time headway, and \(a_{\rm safety}(g,v,v_{\ell})\) is a safety deceleration. The physics of model (2)-(4) is as follows: (i) In Eq. (2), in addition to over-acceleration (1), there is the function \(K_{\Delta v}\Delta v\) [7; 11] that describes vehicle speed adaptation to the preceding vehicle speed \(v_{\ell}\), occurring independently of the gap \(g\) within the gap range \(g_{\rm safe}\leq g\leq G\). Thus, a decrease in \(v_{\ell}\) does not lead to a stronger decrease in the speed \(v\): No driver over-reaction occurs. (ii) Eq. (3) describes acceleration at large gaps \(g>G\). (iii) Contrary to over-acceleration \(a_{\rm OA}\) (1) applied in Eq. (2), the function \(K_{\Delta v}\Delta v\) in Eq. (2) at \(\Delta v>0\) and Eq. (3) describe "usual" acceleration that does not show a discontinuous character. (iv) Eq. (4) describes safety deceleration that should prevent vehicle collisions at small gaps \(g<g_{\rm safe}\); contrary to Eq. (2), the safety deceleration \(a_{\rm safety}(g,v,v_{\ell})\) in Eq. (4) can lead to driver over-reaction. 
There are many concepts developed in standard models [1; 2; 3; 4; 5; 6] that can be used for the safety deceleration \(a_{\rm safety}(g,v,v_{\ell})\). For the simulations below, we use one of them, described by Helly's function \[a_{\rm safety}(g,v,v_{\ell})=K_{1}(g-g_{\rm safe})+K_{2}\Delta v, \tag{5}\] where \(K_{1}\) and \(K_{2}\) are dynamic coefficients [16]. Obviously, through an appropriate parameter choice in standard models [1; 2; 3; 4; 5; 6], driver over-reaction is not realized even at the smallest possible gap \(g=g_{\rm safe}\) in initial steady states of traffic flow. However, in this case no nucleation of congestion can be simulated with the standard models. Contrarily, if we choose coefficients \(K_{1}\) and \(K_{2}\) in (5) (Fig. 2) such that no driver over-reaction occurs in model (2)-(5) even at \(g\leq g_{\rm safe}\), then, nevertheless, this model shows all known empirical nucleation features of traffic breakdown (Fig. 1(a)).

Figure 1: Empirical nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) at bottlenecks (a) and hypothesis about discontinuous character of over-acceleration (b, c) [10]. (a) Speed data presented in space and time with an averaging method were measured with road detectors installed along road: A moving synchronized flow pattern (MSP) that has emerged at downstream bottleneck (B-down) while propagating upstream induces F\(\rightarrow\)S transition (induced traffic breakdown) leading to emergence of synchronized flow pattern (SP) at upstream bottleneck (B); adapted from [7]. (b, c) Qualitative density-dependence of over-acceleration probability per time interval (b) and equivalent presentation of (b) as discontinuous flow-rate dependence of the mean time delay in over-acceleration (c); F and S are states of free flow and synchronized flow, respectively.

Figure 2: Simulations with model (2)–(5) of nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) on single-lane road of length \(L=10\) km with two identical on-ramp bottlenecks B and B-down at road locations \(x=x_{\rm on,B}=6\) km and \(x=x_{\rm on,B-down}=9\) km, respectively: (a) Speed data presented in space and time as made in Fig. 1(a). (b, c) Averaged (1-min) speeds at \(x=7\) km within MSP (b) and at \(x=5.7\) km within SP induced through MSP propagation at bottleneck B. Flow rate on the road at \(x=0\) is \(q_{\rm in}=2250\) vehicles/h. For each of the bottlenecks, the model is the same as that in [14]: there is a merging region of length \(L_{\rm m}=0.3\) km; vehicles merge at a middle location between vehicles on the road at the preceding vehicle speed \(v^{+}\) when \(g>g_{\rm safe}^{\rm(min)}=\lambda_{\rm b}v^{+}+d\) with \(\lambda_{\rm b}=0.3\) s; on-ramp inflow rates are \(q_{\rm on,B-down}=0\) and \(q_{\rm on,B}=685\) vehicles/h; to induce the MSP at bottleneck B-down, an impulse \(q_{\rm on,B-down}=400\) vehicles/h at \(t=20\) min during 2 min has been applied. All vehicles in traffic flow are identical ones with the following model parameters: \(\tau_{\rm safe}=1\) s, \(\tau_{\rm G}=3\) s, \(a_{\rm max}=2.5\) m/s\({}^{2}\), \(\alpha=1\) m/s\({}^{2}\), \(v_{\rm syn}=80\) km/h, \(K_{\Delta v}=0.8\) s\({}^{-1}\), \(K_{1}=0.15\) s\({}^{-2}\), \(K_{2}=0.95\) s\({}^{-1}\), \(v_{\rm free}=120\) km/h, \(d=7.5\) m. Under conditions \(0\leq v\leq v_{\rm free}\), vehicle motion is found from equations \(dv/dt=a\), \(dx/dt=v\) solved with the second-order Runge-Kutta method with time step \(10^{-2}\) s.
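Before turning to the mechanism behind Fig. 2, the following minimal Python sketch makes the model concrete. It implements the acceleration law (1)-(5) and the second-order Runge-Kutta update quoted in the caption of Fig. 2 for a platoon of vehicles behind a prescribed leader. It is an illustration only, not the paper's simulation code: the on-ramp bottleneck and vehicle-merging rules are omitted, SI units are used, and the synchronized-flow speed is taken as the 80 km/h quoted in the caption.

```python
import numpy as np

# Parameters from the caption of Fig. 2, converted to SI units.
tau_safe, tau_G = 1.0, 3.0           # safe / synchronization time headways [s]
a_max, alpha = 2.5, 1.0              # maximum acceleration / over-acceleration [m/s^2]
K_dv, K1, K2 = 0.8, 0.15, 0.95       # coefficients of Eqs. (2) and (5)
v_free, v_syn = 120 / 3.6, 80 / 3.6  # maximum and synchronized-flow speeds [m/s]
d = 7.5                              # vehicle length [m]

def accel(g, v, v_l):
    """Acceleration a of Eqs. (1)-(5): gap g, own speed v, preceding speed v_l."""
    g_safe, G = v * tau_safe, v * tau_G
    if g > G:                                  # Eq. (3): free acceleration
        return a_max
    if g >= g_safe:                            # Eq. (2): speed adaptation + over-acceleration
        a_OA = alpha if v >= v_syn else 0.0    # Eq. (1): discontinuous over-acceleration
        return K_dv * (v_l - v) + a_OA
    return K1 * (g - g_safe) + K2 * (v_l - v)  # Eqs. (4)-(5): Helly safety deceleration

def rk2_step(x, v, x_lead, v_lead, dt=1e-2):
    """One second-order Runge-Kutta step of dx/dt = v, dv/dt = a for a platoon.

    x, v are arrays ordered downstream-to-upstream; vehicle 0 follows an
    external leader, which is treated as frozen during the step.
    """
    def rhs(x, v):
        xl = np.concatenate(([x_lead], x[:-1]))  # positions of preceding vehicles
        vl = np.concatenate(([v_lead], v[:-1]))  # speeds of preceding vehicles
        g = xl - x - d                           # space gaps
        return v, np.array([accel(*z) for z in zip(g, v, vl)])
    k1x, k1v = rhs(x, v)
    k2x, k2v = rhs(x + dt * k1x, v + dt * k1v)
    v_new = np.clip(v + 0.5 * dt * (k1v + k2v), 0.0, v_free)  # enforce 0 <= v <= v_free
    return x + 0.5 * dt * (k1x + k2x), v_new
```

Iterating `rk2_step` with the quoted time step of \(10^{-2}\) s and perturbing the leader's speed reproduces, qualitatively, the kind of platoon response analyzed in Figs. 3 and 4 below.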
An MSP induced at downstream bottleneck B-down propagates upstream. While reaching upstream bottleneck B, the MSP induces the F\(\rightarrow\)S transition at the bottleneck (Fig. 2). Formula (1) for over-acceleration explains induced traffic breakdown as follows. Due to vehicle merging from the on-ramp, condition \(g<g_{\rm safe}\) can be satisfied, resulting in vehicle deceleration: A local speed decrease occurs at bottleneck B (Fig. 2(a)). The minimum speed \(v_{\rm min}^{\rm(dec)}\) within the local speed decrease satisfies condition \(v_{\rm min}^{\rm(dec)}>v_{\rm syn}\). Therefore, according to (1), vehicles accelerate with over-acceleration \(a_{\rm OA}=\alpha\) from the local speed decrease; this prevents congestion propagation upstream of bottleneck B. Contrarily, the minimum speed within the MSP satisfies condition \(v_{\rm min}^{\rm(MSP)}<v_{\rm syn}\) (Fig. 2(b)). Then, according to (1), over-acceleration \(a_{\rm OA}=0\): When the MSP reaches bottleneck B, synchronized flow is induced. The emergent SP remains at bottleneck B because the speed within the SP is less than \(v_{\rm syn}\) in (1) (Fig. 2(c)) and, therefore, over-acceleration \(a_{\rm OA}=0\). These simulations, in which no driver over-reaction can occur under the chosen model parameters, support the statement of this paper: * Traffic breakdown is caused by over-acceleration, not driver over-reaction. Formula (1) for over-acceleration explains also the S\(\rightarrow\)F instability. We consider the time-development of a local speed increase in an initial steady synchronized flow state (Fig. 3). The cause of the local speed increase is a short-time acceleration of one of the vehicles (vehicle 1 in Figs. 3(a, b) or vehicle 8 in Figs. 3(c-e)); the vehicle must decelerate later to the speed of the preceding vehicle moving at the initial synchronized flow speed (\(v=70\) km/h, Fig. 3). There are two possibilities: (i) The increase in the speed of the following vehicles (vehicles 2-7 in Figs. 3(a, b)) decays over time (Figs. 3(a, b)); this occurs when the maximum speed of vehicle 2 (\(v_{\rm max}^{\rm(2)}=77.9\) km/h) is less than \(v_{\rm syn}\) in (1) and, therefore, over-acceleration \(a_{\rm OA}=0\). (ii) Contrarily, if vehicle 8 (Figs. 3(c, d)) accelerates only 0.5 s longer than vehicle 1 (Figs. 3(a, b)), the local speed increase initiated by vehicle 8 grows over time (vehicles 9-14 in Figs. 3(c, d)), leading to the S\(\rightarrow\)F instability (Figs. 3(c-e)); this occurs because the maximum speed of vehicle 9 (\(v_{\rm max}^{\rm(9)}=81.9\) km/h) is higher than \(v_{\rm syn}\) in (1) and, therefore, over-acceleration \(a_{\rm OA}=\alpha\) causes the S\(\rightarrow\)F instability.

Figure 3: Nucleation character of S\(\rightarrow\)F instability simulated on single-lane road (8 km long) without bottlenecks with initial steady synchronized flow state at \(v=70\) km/h and \(g=27.5\) m: (a, b) No S\(\rightarrow\)F instability. (c–e) S\(\rightarrow\)F instability. In (a–d), time-development of speeds (a, c) and trajectories (b, d) of vehicles 1–7 (a, b) and 8–14 (c, d) caused by initial local speed increase of vehicle 1 (a, b) and vehicle 8 (c, d) simulated through vehicle short-time acceleration with \(a=0.5\) m/s\({}^{2}\) during 6.5 s in (a, b) and 7 s in (c, d). (e) Spatiotemporal development of speed during S\(\rightarrow\)F instability shown in (c, d). Other model parameters are the same as those in Fig. 2.
We have found that in model (2)-(5) under the parameters used in Fig. 2 there is no driver over-reaction to the deceleration of the preceding vehicle even at the smallest possible space gap between vehicles, \(g=g_{\rm safe}\), in an initial homogeneous steady state of traffic flow. In Fig. 4, under condition \(g=g_{\rm safe}\) in an initial synchronized flow, vehicle \(i\) decelerates to a standstill, remains stationary for 1 s and then accelerates. It turns out that none of the following vehicles decelerates to a standstill. The minimum speed of the following vehicles increases slowly over time (vehicles 15-21 in Fig. 4(c)). Finally, rather than a wide moving jam (J), a new state of synchronized flow with speed \(v\approx 15.5\) km/h results from the deceleration of vehicle \(i\).

Figure 4: Absence of driver over-reaction in model (2)–(5) under parameters used in Fig. 2. Simulations made on single-lane road (8 km long) without bottlenecks with initial steady state of synchronized flow with \(v=70\) km/h and \(g=g_{\rm safe}=19.5\) m: Time-development of vehicle trajectories (a), speed in space and time (b), and speeds of a sequence of vehicles 15–21 (c) caused by initial local speed decrease of vehicle \(i\) in (a) simulated through deceleration of vehicle \(i\) with \(a=-0.5\) m/s\({}^{2}\) to the speed \(v=0\); vehicle \(i\) remains stationary for 1 s and then accelerates.

Clearly, other model parameters in (2)-(5), in comparison with those used above (Figs. 2-4), can be chosen at which driver over-reaction occurs. In this case, simulations of the model show the usual results of three-phase traffic theory [7; 11]: (i) In free flow, the F\(\rightarrow\)S transition (traffic breakdown) occurs, whose features are qualitatively the same as presented in Figs. 2 and 3. (ii) Contrary to Fig. 4, in synchronized flow with lower speeds the classical traffic instability occurs, leading to the S\(\rightarrow\)J transition. However, a detailed analysis of these results is outside the scope of this paper. I thank Sergey Klenov for help in simulations and useful suggestions. I thank our partners for their support in the project "LUKAS - Lokales Umfeldmodell für das Kooperative, Automatisierte Fahren in komplexen Verkehrssituationen" funded by the German Federal Ministry for Economic Affairs and Climate Action.
Contrary to the widely accepted assumption about the decisive role of driver over-reaction in traffic breakdown, we have shown that the cause of the breakdown is driver over-acceleration, not over-reaction. To reach this goal, we have introduced a mathematical approach for the description of driver over-acceleration in a microscopic traffic flow model. The model, in which no driver over-reaction occurs, explains all observed empirical nucleation features of traffic breakdown.
2309.06146
On Wolfgang Lusky's paper "The Gurarij spaces are unique''
This note surveys Wolfgang Lusky's proof of uniqueness of the Gurariy spaces and mentions further developments.
Dirk Werner
2023-09-12T11:34:55
http://arxiv.org/abs/2309.06146v1
# On Wolfgang Lusky's paper ###### Abstract. This note surveys Wolfgang Lusky's proof of uniqueness of the Gurariy spaces and mentions further developments. Key words and phrases: Gurariy space; Banach spaces of almost universal disposition 2020 Mathematics Subject Classification: Primary 46B04; Secondary 46B10, 46B25 This piece has been commissioned by the editors of Archiv der Mathematik on the occasion of the 75th anniversary of the journal ## 1. Introduction In 1966, V. I. Gurariy [11] defined the notion of a _Banach space of (almost) universal disposition_ by a certain extension property; see Definition 2.1. He proved the existence of (separable) such spaces and investigated some of their properties; henceforth, such spaces were called _Gurariy spaces_ (alternative spellings: Gurarii, Gurarij, ...); we shall reserve this name for separable spaces of this kind. While it is not a daunting task to prove that any two Gurariy spaces are almost isometric in the sense that their Banach-Mazur distance is \(1\), it remained open to decide whether they are actually isometric. This was asked for instance by J. Lindenstrauss and his collaborators at various junctures ([20, Problem II.4.13], [17]). The isometry problem was solved in 1976 by a fresh PhD from the (likewise rather freshly established) University of Paderborn, Wolfgang Lusky, in his first-ever published paper (the title says it all) [L] The Gurarij spaces are unique. _Arch. Math._ 27, 627-635 (1976). We shall refer to this paper, which is [23] in the bibliography, simply by [L]. The present note aims at surveying the background, Lusky's proof, and the ramifications of this result along with an outlook. Interestingly, some 30 years later Gurariy and Lusky cooperated intensively on a rather different topic, the Müntz spaces, which has led to their monograph [12]. The notation in this note is standard; \(B_{X}\) stands for the closed unit ball of \(X\) and \(\operatorname{ex}B_{X}\) for the set of its extreme points. We are considering only real Banach spaces. ## 2. Banach spaces of almost universal disposition V. I. Gurariy (1935-2005) was a member of the Kharkiv school of Banach spaces led by M. I. Kadets (sometimes spelled Kadec), one of the strongest in Europe, which had its heyday from the late 1950s till the collapse of the Soviet Union that produced a brain-drain in all fields of science. Gurariy himself emigrated to the United States in the early 1990s. After 2000, the Kharkiv school was basically reduced to V. Kadets and his students. In 2022 the terror regime in Moscow set out to destroy the university of Kharkiv altogether [31], but remembering a slogan from many years back: ¡No pasarán! Here is the key definition of his paper [11]. **Definition 2.1**.: Let \(X\) be a Banach space with the following property. * For finite-dimensional spaces \(E\) and \(F\), isometries \(T\colon E\to X\) and \(S\colon E\to F\), and for \(\varepsilon>0\), there exists an operator \(\widehat{T}\colon F\to X\) satisfying \(\widehat{T}S=T\) and \[(1+\varepsilon)^{-1}\|y\|\leq\|\widehat{T}y\|\leq(1+\varepsilon)\|y\|\qquad(y\in F)\] ("an \(\varepsilon\)-isometry"). Then \(X\) is called a Banach space of _almost universal disposition_. A separable such space will also be called a _Gurariy space_. The epithet "almost" in this definition refers to the quantifier "for all \(\varepsilon>0\)"; if \(\varepsilon=0\) is permissible above, then the "almost" will be dropped. However, Gurariy proved in [11, Th. 
10] that no separable space of universal disposition exists, but see Subsection 6.3 below. If, in the above definition, \(S\) is the identical inclusion, i.e., \(E\subset F\), then \(\widehat{T}\) is an extension of \(T\), which can likewise be considered as the identical inclusion. To see that the condition of Definition 2.1 is quite restrictive, let us discuss two examples. **Example 2.2**.: (a) \(c_{0}\) is not a space of almost universal disposition. Indeed, let \(E=\mathbb{R}\), \(T\colon E\to c_{0}\), \(T(r)=(r,0,0,\dots)\), \(F=\ell_{\infty}^{2}=\mathbb{R}^{2}\) with the max-norm, \(S\colon E\to F\), \(S(r)=(r,r)\). Assume that \(\widehat{T}\) has the properties of Definition 2.1, and let \(\widehat{T}(-1,1)=(x_{1},x_{2},\dots)\). Note that \(\widehat{T}(1,1)=(1,0,0,\dots)\) and therefore \[\widehat{T}(0,1) =\Big{(}\frac{1+x_{1}}{2},\frac{x_{2}}{2},\dots\Big{)},\] \[\widehat{T}(1,0) =\Big{(}\frac{1-x_{1}}{2},\frac{-x_{2}}{2},\dots\Big{)}.\] This shows that \(\widehat{T}\) cannot be an \(\varepsilon\)-isometry for small \(\varepsilon\). (If \(x\) is a real number close to \(1\) in modulus, then \(\frac{1\pm x}{2}\) cannot both be close to \(1\).) (b) \(C[0,1]\) is not a space of almost universal disposition. Indeed, let \(E=\mathbb{R}\), \(T\colon E\to C[0,1]\), \(T(r)=r\mathbb{1}\) (the constant function), \(F=\ell_{2}^{2}=\mathbb{R}^{2}\) with the \(\ell_{2}\)-norm, \(S\colon E\to F\), \(S(r)=(r,0)\). Assume that \(\widehat{T}\) has the properties of Definition 2.1, and let \(\widehat{T}(0,1)=f\). Note that \(\widehat{T}(1,0)=\mathbb{1}\) and therefore \[\widehat{T}(1,\pm 1)=\mathbb{1}\pm f,\] and both images must have norm \(\sqrt{2}=\|(1,\pm 1)\|_{2}\) up to \(\varepsilon\). This forces \(\|f\|\leq\sqrt{2}-1\) up to \(\varepsilon\), contradicting \((1+\varepsilon)^{-1}\leq\|f\|\) for small \(\varepsilon\). These examples indicate that positive results might not be very easy to come by. By a technical inductive argument, Gurariy shows in [11, Th. 
6.1] in conjunction with [20, Lemma 4.2], or [16, §21]. **Theorem 3.2**.: _A Banach space \(X\) is an \(L_{1}\)-predual if and only if any four open balls \(U(x_{i},r_{i})\) that intersect pairwise have a nonvoid intersection. It is enough to check this for balls of radius \(1\)._ Let us verify that a Gurariy space \(X\) has this property. So suppose \(U(x_{1},1),\ldots,U(x_{4},1)\) are four open balls in \(X\) with radius \(1\) that intersect pairwise, i.e., \(\|x_{i}-x_{j}\|<2\). Choose \(\varepsilon>0\) such that even \(\|x_{i}-x_{j}\|<2-4\varepsilon\). Let \(E\) be the span of \(x_{1},\ldots,x_{4}\). There are some \(N\in\mathbb{N}\) and a linear operator \(S_{1}\colon E\to\ell_{\infty}^{N}\) such that \[\frac{1}{1+\varepsilon}\|S_{1}x\|_{\infty}\leq\|x\|\leq\|S_{1}x\|_{\infty}\qquad(x\in E).\] Let us consider the balls \(U_{\ell_{\infty}^{N}}(S_{1}x_{i},1-\varepsilon)\) in \(\ell_{\infty}^{N}\). They intersect pairwise since \[\|S_{1}x_{i}-S_{1}x_{j}\|_{\infty}\leq(1+\varepsilon)\|x_{i}-x_{j}\|<(1+\varepsilon)(2-4\varepsilon)<2-2\varepsilon.\] Being pairwise intersecting balls in \(\ell_{\infty}^{N}\), these balls have a point in common. This means that there exists some \(z\in\ell_{\infty}^{N}\) such that \[\|z-S_{1}x_{i}\|_{\infty}<1-\varepsilon\qquad(i=1,\ldots,4).\] Unfortunately, \(S_{1}\) is not an isometry and therefore is not eligible for being used in Definition 2.1. However, we can renorm \(\ell_{\infty}^{N}\) to make \(S_{1}\) an isometry: note that \(B_{\ell_{\infty}^{N}}\cap S_{1}(E)\subset S_{1}(B_{E})\), and we can renorm \(\ell_{\infty}^{N}\) by letting the new unit ball be the convex hull of \(S_{1}(B_{E})\) and \(B_{\ell_{\infty}^{N}}\). Call this renorming \(F\), and let \(S=S_{1}\) considered as an operator from \(E\) to \(F\); this is an isometry. We have \[\frac{1}{1+\varepsilon}\|y\|_{\infty}\leq\|y\|_{F}\leq\|y\|_{\infty}\qquad(y\in F)\] and thus \[\|z-Sx_{i}\|_{F}\leq\|z-S_{1}x_{i}\|_{\infty}<1-\varepsilon.\] Since \(X\) is a Gurariy space, there is an \(\varepsilon\)-isometry \(\widehat{T}\colon F\to X\) satisfying \(\widehat{T}Sx=x\) for \(x\in E\). Let \(x_{0}=\widehat{T}z\); then \(x_{0}\in\bigcap U(x_{i},1)\): \[\|x_{0}-x_{i}\|=\|\widehat{T}z-\widehat{T}Sx_{i}\|\leq(1+\varepsilon)\|z-Sx_{i}\|_{F}<(1+\varepsilon)(1-\varepsilon)<1.\] In the more contemporary literature one can find explicit proofs of Proposition 3.1 based on another characterisation of \(L_{1}\)-preduals and a "pushout argument" [9, Th. 2.17], [5, Prop. 6.2.8]. Now let \(X\) be a separable \(L_{1}\)-predual. By the results of Michael and Pelczynski [26] and Lazar and Lindenstrauss [17] there is a chain of finite-dimensional subspaces \(E_{n}\) of \(X\) such that * \(E_{1}\subset E_{2}\subset\ldots\) ; * \(\dim E_{n}=n\), and \(E_{n}\) is isometrically isomorphic to \(\ell_{\infty}^{n}\), * \(\bigcup E_{n}\) is dense in \(X\). The inclusion \(E_{n}\subset E_{n+1}\) entails some degree of freedom, namely the choice of an isometry \(\psi_{n}\colon\ell_{\infty}^{n}\to\ell_{\infty}^{n+1}\). To study the structure of these \(\psi_{n}\), we need the ad-hoc notion of an admissible basis: if \(\delta_{1},\ldots,\delta_{n}\) denotes the canonical unit vector basis of \(\ell_{\infty}^{n}\) and \(\psi\colon\ell_{\infty}^{n}\to\ell_{\infty}^{n}\) is an isometry, then \(\psi(\delta_{1}),\ldots,\psi(\delta_{n})\) is called an _admissible basis_ for \(\ell_{\infty}^{n}\). 
Note that \(\psi\) takes a vector \((a_{1},\ldots,a_{n})\) to \((\vartheta_{1}a_{\pi(1)},\ldots,\vartheta_{n}a_{\pi(n)})\) for some permutation \(\pi\) and some signs \(\vartheta_{j}=\pm 1\). Thus, an admissible basis is just a permutation of the unit vector basis up to signs, and the isometric image of an admissible basis is again an admissible basis. Let us return to the isometric embedding \(\psi_{n}\colon\ell_{\infty}^{n}\to\ell_{\infty}^{n+1}\), and let \(e_{1,n},\dots\), \(e_{n,n}\) be an admissible basis for \(\ell_{\infty}^{n}\). We can develop the vectors \(f_{j}:=\psi_{n}(e_{j,n})\) into the unit vector basis of \(\ell_{\infty}^{n+1}\). Since \(\psi_{n}\) is an isometry, there is at least one coordinate \(i\) where \(|f_{j}(i)|=1\). Then, if \(k\neq j\), \(f_{k}(i)=0\): pick a sign \(\lambda\) such that \[|f_{j}(i)+\lambda f_{k}(i)|=|f_{j}(i)|+|f_{k}(i)|=1+|f_{k}(i)|\] and so \[1=\|e_{j,n}+\lambda e_{k,n}\|=\|f_{j}+\lambda f_{k}\|\geq|f_{j}(i)+\lambda f_{k}(i)|=1+|f_{k}(i)|,\] hence the claim. Since \(\|\psi_{n}\|=1\), we also have \[\Bigl{|}\sum_{j=1}^{n}f_{j}(i)\Bigr{|}=\Bigl{|}\psi_{n}\Bigl{(}\sum_{j=1}^{n}e_{j,n}\Bigr{)}(i)\Bigr{|}\leq\Bigl{\|}\sum_{j=1}^{n}e_{j,n}\Bigr{\|}=1.\] Therefore, there is an admissible basis \(e_{1,n+1},\dots,e_{n+1,n+1}\) for \(\ell_{\infty}^{n+1}\) such that for some numbers \(a_{jn}\) \[\psi_{n}(e_{j,n})=e_{j,n+1}+a_{jn}e_{n+1,n+1}\qquad(j=1,\dots,n)\] and \[\sum_{j=1}^{n}|a_{jn}|\leq 1.\] We can rephrase these representations in terms of the \(E_{n}\) as follows. **Proposition 3.3**.: _There exist admissible bases in each \(E_{n}\) and real numbers \(a_{jn}\) such that_ \[e_{j,n}=e_{j,n+1}+a_{jn}e_{n+1,n+1}\qquad(j=1,\dots,n;\ n=1,2,\dots)\] \[\sum_{j=1}^{n}|a_{jn}|\leq 1\qquad(n=1,2,\dots).\] This proposition is due to Lazar and Lindenstrauss [17]. The triangular matrix \((a_{jn})_{j\leq n,n\in\mathbb{N}}\) is called a _representing matrix_ for the given \(L_{1}\)-predual \(X\). Conversely, the choice of admissible bases and of an array \((a_{jn})\) leads to an \(L_{1}\)-predual. Lazar and Lindenstrauss use this approach to present another proof of the existence of Gurariy spaces. Let \(a_{n}=(a_{1n},\dots,a_{nn},0,0,\dots)\) be the \(n^{\text{th}}\) column of a matrix as in Proposition 3.3; then each \(a_{n}\) is in the unit ball of \(\ell_{1}\). **Theorem 3.4**.: _If \(\{a_{1},a_{2},\dots\}\) is dense in the unit ball of \(\ell_{1}\), then the corresponding matrix is associated to a Gurariy space._ It should be noted that the representing matrix \(A\) of an \(L_{1}\)-predual \(X\) is not uniquely determined, and much work has been done to study the relation of \(A\) and \(X\) for certain classes of \(L_{1}\)-preduals; see e.g. Lusky's paper [24].
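To illustrate the hypothesis of Theorem 3.4, here is a small Python sketch (an illustrative addition with a hypothetical helper name, not taken from [L] or [17]) that enumerates finitely supported vectors with rational coordinates in the unit ball of \(\ell_{1}\). Running over all depths yields a dense sequence whose members, padded by zeros and ordered so that the \(n\)th column is supported on the first \(n\) coordinates, can serve as the columns \(a_{n}\) of a representing matrix; the associated \(L_{1}\)-predual is then the Gurariy space.

```python
from fractions import Fraction
from itertools import product

def l1_ball_columns(depth):
    """Enumerate finitely supported rational vectors of l^1-norm <= 1.

    The union over all depths is dense in the unit ball of l^1; by
    Theorem 3.4, a representing matrix with these vectors as columns
    produces the Gurariy space.
    """
    seen, cols = set(), []
    for q in range(1, depth + 1):        # denominators of the coordinates
        for n in range(1, depth + 1):    # length of the support
            for ks in product(range(-q, q + 1), repeat=n):
                v = tuple(Fraction(k, q) for k in ks)
                if sum(abs(c) for c in v) <= 1 and v not in seen:
                    seen.add(v)          # Fraction normalizes, so duplicates collapse
                    cols.append(v)
    return cols

# Each listed vector satisfies sum_j |a_jn| <= 1, the condition of Proposition 3.3.
print(len(l1_ball_columns(2)), "columns enumerated at depth 2")
```

The brute-force grid is of course exponential in the depth; it is meant only to make the density hypothesis of Theorem 3.4 tangible, not to be an efficient enumeration.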
## 4. Lusky's uniqueness proof Here is Lusky's uniqueness theorem. **Theorem 4.1**.: _Any two Gurariy spaces are isometrically isomorphic._ Let us first remark that almost isometric spaces (cf. Theorem 2.4) need not be isometric. The following is a classical counterexample due to Pelczynski from [28]: Let \(X\) and \(Y\) be \(c_{0}\) equipped with the equivalent norms (\(x=(x_{n})\)) \[\|x\|_{X} =\|x\|_{\infty}+\Bigl{(}\sum_{n=1}^{\infty}\frac{|x_{n}|^{2}}{2^{n}}\Bigr{)}^{1/2},\] \[\|x\|_{Y} =\|x\|_{\infty}+\Bigl{(}\sum_{n=1}^{\infty}\frac{|x_{n+1}|^{2}}{2^{n}}\Bigr{)}^{1/2}.\] The operators \(\Phi_{n}\colon X\to Y\), \(x\mapsto(x_{n},x_{1},\dots,x_{n-1},x_{n+1},\dots)\) are isomorphisms satisfying \(\lim_{n}\|\Phi_{n}\|\|\Phi_{n}^{-1}\|=1\) so that \(X\) and \(Y\) are almost isometric; but \(X\) is strictly convex while \(Y\) isn't, therefore \(X\) and \(Y\) are not isometric. Benyamini [3] has shown that such counterexamples also exist among \(L_{1}\)-preduals. The proof of Theorem 4.1 consists of a delicate inductive construction of \(\ell_{\infty}^{n}\)-subspaces and admissible bases. The key problem to be solved here is this. **Problem 4.2**.: Let \(X\) be a Gurariy space and \(E\subset F\) be finite-dimensional spaces with \(E\cong\ell_{\infty}^{n}\) and \(F\cong\ell_{\infty}^{n+1}\). Let \(T\colon E\to X\) be an isometry. When does there exist an isometric extension \(\widehat{T}\colon F\to X\)? Lusky notes that this is not always the case [L, p. 630], and he gives the following useful criterion in terms of admissible bases. W.l.o.g. suppose that \(T\) is the identity. Let \(e_{1},\dots,e_{n}\) and \(f_{1},\dots,f_{n+1}\) be admissible bases for \(E\) resp. \(F\) such that \[e_{i}=f_{i}+r_{i}f_{n+1},\qquad i=1,\dots,n.\] **Lemma 4.3**.: _Problem 4.2 has a positive solution if \(\sum_{i=1}^{n}|r_{i}|<1\)._ This criterion is a little hidden in the proof of the Corollary [L, p. 630], where the extreme point condition \(\operatorname{ex}B_{E}\cap\operatorname{ex}B_{F}=\emptyset\) is spelled out to be sufficient; but the heart of the matter is Lemma 4.3. Now let's take a quick glimpse at the proof of Theorem 4.1. Suppose that \(X\) and \(Y\) are Gurariy spaces coming with \(\ell_{\infty}^{n}\)-approximations \(\bigcup_{n}E_{n}\) and \(\bigcup F_{n}\), respectively. Comparing Proposition 3.3 with Lemma 4.3 one realises that one has to perturb the given admissible bases so that Lemma 4.3 becomes applicable. The details of this process are quite technical [L, pp. 631-633] and lead to sequences of admissible bases. Ultimately one can pass to the limit and obtain admissible bases \(\{e_{i,n}\colon i\leq n,\,n\geq 1\}\) resp. \(\{f_{i,n}\colon i\leq n,\,n\geq 1\}\) spanning dense subspaces of \(X\) resp. \(Y\), and the operator \(e_{i,n}\mapsto f_{i,n}\) acts as a well-defined isometry. In an addendum to [L], dated January 10, 1976, Lusky applies his methods to Mazur's rotation problem that asks whether a separable transitive space is isometric to a Hilbert space; a Banach space \(X\) is called _transitive_ if whenever \(\|x\|=\|y\|=1\), there is an isometric automorphism \(T\colon X\to X\) mapping \(x\) to \(y\), i.e., \(Tx=y\). This problem is open to this day, and recent papers on the subject include [4] and [6]. What Lusky proves in his addendum is that the Gurariy space (now that we know it's unique we may use the definite article) is transitive for smooth points. Recall that \(x_{0}\) is a smooth point of the unit ball \(B_{X}\) if \(\|x_{0}\|=1\) and there is exactly one \(x_{0}^{*}\in X^{*}\) such that \(\|x_{0}^{*}\|=x_{0}^{*}(x_{0})=1\); equivalently, the norm function \(x\to\|x\|\) is Gateaux differentiable at \(x_{0}\). 
It is a theorem of Mazur that smooth points are dense in the unit sphere of a separable Banach space. **Theorem 4.4**.: _Let \(x\) and \(y\) be smooth points of the unit ball of the Gurariy space \(G\). Then there is an isometric automorphism \(T\colon G\to G\) mapping \(x\) to \(y\)._ Another result of [L] is a refined version of a theorem originally due to Wojtaszczyk [32] (see also [24]). **Theorem 4.5**.: _Let \(X\) be a separable \(L_{1}\)-predual and \(G\) be the Gurariy space. Then there exist an isometry \(T\colon X\to G\) and a norm-\(1\) projection \(P\colon G\to G\) onto \(T(X)\); further \((\operatorname{Id}-P)(G)\) is isometrically isomorphic to \(G\)._ This indicates that the Gurariy space is "maximal" among the separable \(L_{1}\)-predual spaces; in particular it contains \(C[0,1]\) and is universal, a fact proved by other means by Gevorkyan in [10]. We close this section by mentioning another proof of Theorem 4.1, due to W. Kubis and S. Solecki [15]. Their proof avoids the Lazar-Lindenstrauss machinery and just depends on the defining properties of a Gurariy space. They also prove the universality of the Gurariy space from first principles, without relying on the universality of \(C[0,1]\). Still another proof is in Kubis's paper [14] in _Archiv der Mathematik_, which builds on a Banach-Mazur type game. ## 5. The Poulsen simplex This note wouldn't be complete without mentioning the cousin of the Gurariy space in the world of compact convex sets, the _Poulsen simplex_. The traditional definition of a (compact) simplex is a compact convex subset \(S\) of a Hausdorff locally convex space \(E\) such that the cone generated by \(S\times\{1\}\) in \(E\oplus\mathbb{R}\) is a lattice cone. Thus, a triangle in the plane is a simplex while a rectangle isn't. For our purposes it is important to note that the space \(A(S)\) of affine continuous functions on a compact convex set is an \(L_{1}\)-predual if and only if \(S\) is a simplex. Poulsen [29] had proved the existence of a metrisable simplex, which now bears his name, whose set of extreme points is dense. It is a result due to Lindenstrauss, Olsen, and Sternfeld [19] that such a simplex is uniquely determined up to affine homeomorphism. They write: We discovered the uniqueness of the Poulsen simplex after reading Lusky's paper [L] on the uniqueness of the Gurari space. Our proof of the uniqueness uses the same idea which Lusky used in [L]. The role of admissible bases is now played by peaked partitions of unity. The authors mention a lot of similarities between the Poulsen simplex and the Gurariy space. For example, the counterpart of the defining property of the Poulsen simplex \(S_{P}\) is Lusky's theorem from [L] and [24] that a separable \(L_{1}\)-predual is a Gurariy space \(G\) if and only if \(\operatorname{ex}B_{G^{*}}\) is weak\({}^{*}\) dense in the unit ball \(B_{G^{*}}\). However, \(A(S_{P})\) is not the Gurariy space since for example the transitivity property of Theorem 4.4 fails. But, as shown by Lusky [25], one can salvage this by requiring a slightly more stringent assumption on \(x\) and \(y\), which are now supposed to be positive: in addition, \(1-x\) and \(1-y\) should be smooth points. ## 6. Outlook ### Fraisse theory The Gurariy space is a very homogeneous object, for example [11, Th. 
3]: If \(E\) and \(F\) are finite-dimensional subspaces of the same dimension of a Gurariy space \(G\), then for every \(\varepsilon>0\), every isometric isomorphism from \(E\) to \(F\) extends to an \(\varepsilon\)-isometric automorphism of \(G\). In recent years, such homogeneous structures were investigated by methods of model theory known as Fraisse theory ([8], [2], [13]). Fraisse theory associates a unique limit to certain substructures. This approach is at least implicit in the Kubis-Solecki uniqueness proof, and a detailed exposition involving the Gurariy space, the Poulsen simplex and a whole lot more can be found in M. Lupini's paper [22]. ### Noncommutative Gurariy spaces T. Oikhberg, in his _Archiv der Mathematik_ paper [27], proved the existence and uniqueness of a "noncommutative" Gurariy space, i.e., a Gurariy-like object in the setting of operator spaces a la Effros-Ruan. Again, this can also be viewed from the perspective of Fraisse theory [21]. ### Nonseparable spaces We have already mentioned in Section 2 Gurariy's result that no space of universal disposition can be separable. Since the definition of (almost) universal disposition makes perfect sense beyond the separable case, it was studied in several papers, e.g., [1], [7], [9]. It turns out that there are spaces of almost universal disposition of density character \(\aleph_{1}\), but the uniqueness breaks down (Th. 3.6 and Th. 3.7 in [9]). Likewise, there are spaces of universal disposition of density \(\aleph_{1}\), and again, uniqueness fails ([1], [7]). Indeed, it should be noted that in these papers also the variant of being of (almost) universal disposition with respect to separable spaces, already considered by Gurariy, is studied: in Definition 2.1 one now allows \(E\) and \(F\) to be separable rather than finite-dimensional. ### Banach lattices Recently, M. A. Tursi [30] proved the existence of a uniquely determined Gurariy-like Banach lattice. She exploits ideas of Fraisse theory.
This note surveys Wolfgang Lusky's proof of the uniqueness of the Gurariy space and reports on further developments.
2309.12133
Intertype superconductivity evoked by the interplay of disorder and multiple bands
Nonmagnetic impurity scattering is known to shift up the Ginzburg-Landau parameter $\kappa$ of a superconductor. In this case, when the system is initially in type I, it can change its magnetic response, crossing the intertype domain with $\kappa \sim 1$ between the two standard superconductivity types and arriving at type II. In the present work we demonstrate that the impact of disorder can be much more profound in the presence of the multiband structure of the charge carrier states. In particular, when the band diffusivities differ from each other, the intertype domain tends to expand significantly, including points with $\kappa \gg 1$ that belong to deep type-II in conventional single-band superconductors. Our finding sheds light on the nontrivial disorder effect and significantly complements earlier results on the enlargement of the intertype domain in clean multiband superconductors.
P. M. Marychev, A. A. Shanenko, A. Vagov
2023-09-21T14:54:33
http://arxiv.org/abs/2309.12133v1
# Intertype superconductivity evoked by the interplay of disorder and multiple bands ###### Abstract Nonmagnetic impurity scattering is known to shift up the Ginzburg-Landau parameter \(\kappa\) of a superconductor. In this case, when the system is initially in type I, it can change its magnetic response, crossing the intertype domain with \(\kappa\sim 1\) between the two standard superconductivity types and arriving at type II. In the present work we demonstrate that the impact of disorder can be much more profound in the presence of the multiband structure of the charge carrier states. In particular, when the band diffusivities differ from each other, the intertype domain tends to expand significantly, including points with \(\kappa\gg 1\) that belong to deep type-II in conventional single-band superconductors. Our finding sheds light on the nontrivial disorder effect and significantly complements earlier results on the enlargement of the intertype domain in clean multiband superconductors. It is well known that nonmagnetic disorder can influence the magnetic properties of a superconductor by altering its characteristic lengths [1]. In particular, the Ginzburg-Landau (GL) coherence length \(\xi\) decreases when the electron mean-free path is reduced. At the same time the London magnetic penetration depth \(\lambda\) increases. As a result, the ratio \(\kappa=\lambda/\xi\), referred to as the GL parameter, increases as well. In this case the system, when initially in type I, crosses the intertype (IT) domain between the two standard superconductivity types in the \(\kappa\)-\(T\) plane (\(T\) is the temperature) and exhibits the type-II magnetic response at sufficiently strong disorder. This feature was used to study the IT superconductivity and its boundaries when the magnetic properties of, e.g., Ta and Nb were modified by changing the amount of dissolved nitrogen [2]. The IT superconductivity is of special interest since it is characterized by unconventional magnetic properties and flux-condensate distributions which differ qualitatively from those of the two standard superconductivity types. A number of studies [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 15; 16; 17; 18; 19; 21] demonstrated that for conventional materials, the IT physics manifests itself at \(\kappa\sim 1\). In the present work we report a striking example in which disorder does more than merely shift the system across the IT regime. Here the interplay of the diffusive motion of charge carriers with the multiband structure of the carrier states leads to qualitative changes in the magnetic-response phase diagram. When the band diffusivities differ significantly from each other (as e.g. in films of MgB\({}_{2}\)[23]), the IT domain shows a giant expansion. As a result, it can include large values of the GL parameter (\(\kappa\gg 1\)) that belong to deep type II in conventional single-band superconductors. Our finding significantly complements earlier results on the enlargement of the IT domain in clean multiband superconductors [4; 5; 24]. As the prototype of a multiband superconductor we choose the two-band system with the \(s\)-wave pairing in both bands and Josephson-like interband pair transfer. To describe the corresponding IT domain in the dirty limit, we employ the two-band Usadel equations [1]. To avoid unnecessary complications of an anisotropic case, the system is assumed to be isotropic. 
For simplicity we neglect the interband impurity scattering since our preliminary results demonstrate that it can produce quantitative corrections but does not change the qualitative picture. Investigations of such corrections will be published elsewhere. Then, the equations for the band-dependent gap functions \(\Delta_{\nu}=\Delta_{\nu}({\bf r})\) read (\(\nu=1,2\)) \[\hbar\omega f_{\nu}-\frac{\hbar\mathcal{D}_{\nu}}{2}\big{(}g_{\nu}\mathbf{D}^{2} f_{\nu}-f_{\nu}\mathbf{\nabla}^{2}g_{\nu}\big{)}=\Delta_{\nu}g_{\nu}, \tag{1}\] where \(g_{\nu}=g_{\nu}({\bf r},\omega)\) and \(f_{\nu}=f_{\nu}({\bf r},\omega)\) are the normal and anomalous quasiclassic (frequency-dependent) Green functions related to one another by the normalization condition \(g_{\nu}^{2}+|f_{\nu}|^{2}=1\), \(\omega\) stands for the fermionic Matsubara frequencies, \(\mathcal{D}_{\nu}\) is the diffusion coefficient associated with band \(\nu\), and \({\bf D}=\mathbf{\nabla}-(i2e/\hbar c){\bf A}\) is the gauge-invariant derivative. The Usadel equations (1) are solved together with the self-consistency relation \[\Delta_{\nu}=2\pi T\sum_{\nu^{\prime}=1,2}{\rm g}_{\nu\nu^{\prime}}N_{\nu^{\prime}}\sum_{\omega>0}f_{\nu^{\prime}}, \tag{2}\] where \({\rm g}_{\nu\nu^{\prime}}\) is the element of the symmetric coupling matrix \({\rm g}\) and \(N_{\nu}\) is the band density of states (DOS). The free energy density for the system of interest is given by \[{\sf f}=\frac{{\bf B}^{2}}{8\pi}+\langle\vec{\Delta}^{\dagger},{\rm g}^{-1}\vec{\Delta}\rangle+\sum_{\nu=1,2}{\sf f}_{\nu}, \tag{3}\] where \({\bf B}=\mathbf{\nabla}\times{\bf A}\) is the magnetic field, \(\vec{\Delta}=(\Delta_{1},\Delta_{2})^{T}\) with \(\langle.,.\rangle\) the scalar product in the band vector space, \({\rm g}^{-1}\) is the inverse of the coupling matrix and \[{\sf f}_{\nu}= 2\pi TN_{\nu}\sum_{\omega>0}\Big{\{}2\hbar\omega(1-g_{\nu})-2{\rm Re}(f_{\nu}^{*}\Delta_{\nu})\] \[+\frac{\hbar\mathcal{D}_{\nu}}{2}\big{[}|{\bf D}f_{\nu}|^{2}+(\mathbf{\nabla}g_{\nu})^{2}\big{]}\Big{\}}. \tag{4}\] The stationary point (minimum) of the free energy gives the equilibrium spatial distributions of \(\Delta_{1}({\bf r})\), \(\Delta_{2}({\bf r})\) and \({\bf B}({\bf r})\) [and \({\bf A}({\bf r})\)]. To calculate the boundaries of the IT domain, we employ the perturbation expansion of the two-band Usadel formalism in the small deviation from the superconducting critical temperature \(\tau=1-T/T_{c}\). It was shown previously for clean two-band superconductors [4; 6; 17] that many important details regarding the intertype superconductivity can be obtained already from the leading correction to the GL theory in \(\tau\). The derivation of this correction in the present case is similar to that for clean two-band superconductors [4; 5; 6; 17; 19]. (For general details of the \(\tau\)-expansion in single- and multiband superconductors, see the papers [2; 6; 27] and [3; 29], respectively.) First, one represents the Green functions and the free energy density as series in powers of the gap functions and their spatial derivatives. The series are truncated so as to keep only the terms that contribute to the leading correction to the GL theory in \(\tau\). Second, based on the obtained expressions, one derives the \(\tau\)-expansion of the formalism up to the leading correction to the GL theory. Now, we employ the Usadel equations and invoke the expansion in powers of the gap functions and their spatial gradients (for more detail, see the Supplementary material). 
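The closed-form coefficients quoted below rest on standard sums over the fermionic Matsubara frequencies, \(\hbar\omega=\pi T(2n+1)\). As a quick numerical sanity check of these sums, here is a plain-Python sketch (an illustration under the convention \(\hbar=k_{B}=1\), not part of the derivation; the \(p=1\) sum is omitted since it converges only with the Debye cutoff \(\omega_{D}\)):

```python
import numpy as np
from scipy.special import zeta

T = 1.0                       # temperature in units with hbar = k_B = 1
n = np.arange(500000)
hw = np.pi * T * (2 * n + 1)  # fermionic Matsubara energies hbar*omega_n

def matsubara_sum(p):
    # partial sum of 2*pi*T * sum_{omega > 0} 1/(hbar*omega)^p
    return 2 * np.pi * T * np.sum(hw ** -float(p))

closed_forms = {
    2: np.pi / (4 * T),
    3: 7 * zeta(3) / (4 * np.pi**2 * T**2),
    4: np.pi / (48 * T**3),
    5: 31 * zeta(5) / (16 * np.pi**4 * T**4),
}
for p, exact in closed_forms.items():
    print(p, matsubara_sum(p), exact)  # the pairs agree to several digits
```

The printed partial sums match the closed forms that enter Eq. (6) below and Eq. (S9) of the Supplementary material.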
For the free energy density one gets \[{\sf f}_{\nu}= \Big{[}-N_{\nu}A+a_{\nu}\left(\tau+\frac{\tau^{2}}{2}\right)\Big{]}|\Delta_{\nu}|^{2}+\frac{b_{\nu}}{2}(1+2\tau)|\Delta_{\nu}|^{4}\] \[-\frac{c_{\nu}|\Delta_{\nu}|^{6}}{3}+{\cal K}_{\nu}(1+\tau)|{\bf D}\Delta_{\nu}|^{2}-{\cal Q}_{\nu}|{\bf D}^{2}\Delta_{\nu}|^{2}\] \[-\frac{{\cal L}_{\nu}}{2}\Big{\{}6|\Delta_{\nu}|^{2}|{\bf D}\Delta_{\nu}|^{2}+\big{[}\Delta_{\nu}^{2}({\bf D}^{*}\Delta_{\nu}^{*})^{2}+{\rm c.c.}\big{]}\Big{\}}, \tag{5}\] where \[a_{\nu}=-N_{\nu},\ A=\ln\frac{2e^{\gamma}\hbar\omega_{D}}{\pi T_{c}},\ b_{\nu}=N_{\nu}\frac{7\zeta(3)}{8\pi^{2}T_{c}^{2}},\] \[{\cal K}_{\nu}=N_{\nu}\frac{\pi\hbar{\cal D}_{\nu}}{8T_{c}},\ {\cal Q}_{\nu}=\frac{(\hbar{\cal D}_{\nu})^{2}}{2}b_{\nu},\] \[{\cal L}_{\nu}=N_{\nu}\frac{\pi\hbar{\cal D}_{\nu}}{192T_{c}^{3}},\ c_{\nu}=N_{\nu}\frac{93\zeta(5)}{128\pi^{4}T_{c}^{4}}, \tag{6}\] with \(T_{c}\) the critical temperature, \(\omega_{D}\) the Debye frequency, \(\zeta(\ldots)\) the Riemann zeta function, and \(\gamma=0.577\). As is mentioned above, the series in Eq. (5) is truncated here so as to include only the terms that contribute to the leading correction to the GL theory in \(\tau\). It is instructive to compare the free energy density given by Eqs. (3) and (5) for the two-band system in the dirty limit with the corresponding expansion of the free energy density in the clean limit [4; 5; 3; 29]. First, the coefficients \({\cal K}_{\nu},{\cal Q}_{\nu}\), and \({\cal L}_{\nu}\) are now given by different expressions [see Eq. (6)] including the band-dependent diffusivities \({\cal D}_{\nu}\). Second, the set of the three terms with the coefficient \({\cal Q}_{\nu}\), calculated for the clean limit, is now reduced to the single term in Eq. (5): there are no contributions proportional to \({\mathbf{\nabla}}\times{\bf B}\) and \({\bf B}^{2}\), cf. Eq. (5) with Eq. (20) in [3]. Finally, the first term in the curly braces in the last line of Eq. (5) now has a numerical factor 6 instead of 8 in the clean limit. However, in general, the structure of the free energy density given by Eqs. (3) and (5) is similar to that of the clean system. Thus, to obtain the \(\tau\)-expansion of the present microscopic formalism, we can employ a similar calculation procedure. Based on the previous results for clean two-band superconductors [4; 3; 29], we introduce the \(\tau\)-expansion for the gap functions and fields in the form \[\vec{\Delta} =\tau^{1/2}\vec{\Psi}+\tau^{3/2}\vec{\psi}+\ldots,\ {\bf B}=\tau{\mathbf{\mathcal{B}}}+\tau^{2}{\mathfrak{b}}+\ldots,\] \[{\bf A} =\tau^{1/2}{\mathbf{\mathcal{A}}}+\tau^{3/2}{\mathfrak{a}}+\ldots, \tag{7}\] where \(\vec{\Psi}=(\Psi_{1},\Psi_{2})^{T}\) and \({\mathbf{\mathcal{B}}}\) (\({\mathbf{\mathcal{A}}}\)) correspond to the GL theory while its leading correction is governed by \(\vec{\psi}=(\psi_{1},\psi_{2})^{T}\) and \({\mathfrak{b}}\) (\({\mathfrak{a}}\)). In addition, the magnetic penetration depth \(\lambda\) and the GL coherence length \(\xi\) are divergent as \(\lambda,\xi\propto\tau^{-1/2}\). To extract this dependence from the spatial gradients, we introduce the spatial scaling \({\bf r}\rightarrow\tau^{1/2}{\bf r}\) and obtain the corresponding scaling factor for the spatial derivatives as \({\mathbf{\nabla}}\rightarrow\tau^{-1/2}{\mathbf{\nabla}}\). Then, based on Eqs. 
(3)-(S10) [see also the Supplementary material], one gets the stationary equations for \(\vec{\Psi}\) and \(\vec{\psi}\) as \[\check{L}\vec{\Psi}=0,\quad\check{L}=\check{\rm g}^{-1}-\left(\begin{array}{cc}N_{1}A&0\\ 0&N_{2}A\end{array}\right) \tag{8}\] and \[\check{L}\vec{\psi}+\vec{W}=0, \tag{9}\] where \(\vec{W}=(W_{1},W_{2})^{T}\) and \(W_{\nu}=a_{\nu}\Psi_{\nu}+\frac{b_{\nu}}{2}\Psi_{\nu}|\Psi_{\nu}|^{2}+{\cal K}_{\nu}{\mathbf{\mathcal{D}}}^{2}\Psi_{\nu}\), with \({\mathbf{\mathcal{D}}}={\mathbf{\nabla}}-(i2e/\hbar c){\mathbf{\mathcal{A}}}\). Equation (8) has a nontrivial solution when the determinant of the matrix \(\check{L}\) is zero and we obtain \[({\rm g}_{22}-{\cal G}N_{1}A)({\rm g}_{11}-{\cal G}N_{2}A)-{\rm g}_{12}^{2}=0,\ \vec{\Psi}=\Psi({\bf r})\vec{\xi}, \tag{10}\] where \({\cal G}={\rm g}_{11}{\rm g}_{22}-{\rm g}_{12}^{2}\), \(\Psi\) is the Landau order parameter that controls the two-band system in the GL approximation, and \(\vec{\xi}\) is the eigenvector of \(\check{L}\) corresponding to its zero eigenvalue. The normalization of \(\vec{\xi}\) is not important here (the observables are not sensitive to it) and so, there are various options to choose \(\vec{\xi}\). Here we follow the variant used in [4] and given by \[\vec{\xi}=\left(\begin{array}{c}S^{-1/2}\\ S^{1/2}\end{array}\right),\ S=\frac{{\rm g}_{22}-{\cal G}N_{1}A}{{\rm g}_{12}}, \tag{11}\] where \(S\) controls the relative weights of the bands, changing from 0 (only band 2) to \(\infty\) (only band 1). Introducing the vector \[\vec{\eta}=\left(\begin{array}{c}S^{-1/2}\\ -S^{1/2}\end{array}\right) \tag{12}\] so that \(\vec{\xi}\) and \(\vec{\eta}\) are linearly independent, one can represent \(\vec{\psi}\) as their linear combination given by \[\vec{\psi}=\psi_{\xi}({\bf r})\vec{\xi}+\psi_{\eta}({\bf r})\vec{\eta}, \tag{13}\] where \(\psi_{\xi}\) and \(\psi_{\eta}\) control the spatial distributions of the gap functions in the leading correction to the GL theory. Projecting Eq. (9) onto \(\vec{\xi}\) and utilizing Eq. (13), one gets the GL equation for the Landau order parameter as \[a\Psi+b\Psi|\Psi|^{2}-\mathcal{K}\mathbf{\mathcal{D}}^{2}\Psi=0, \tag{14}\] where the coefficients \(a=\sum_{\nu}|\xi_{\nu}|^{2}a_{\nu}\), \(\mathcal{K}=\sum_{\nu}|\xi_{\nu}|^{2}\mathcal{K}_{\nu}\), and \(b=\sum_{\nu}|\xi_{\nu}|^{4}b_{\nu}\) are averages over the contributing bands, with \(\xi_{1}=S^{-1/2}\) and \(\xi_{2}=S^{1/2}\). Projecting Eq. (9) onto \(\vec{\eta}\) and keeping in mind Eq. (13), we express \(\psi_{\eta}\) in terms of \(\Psi\) as \[\psi_{\eta}=-\frac{\mathcal{G}}{4g_{12}}\big{(}\alpha\Psi+\beta\Psi|\Psi|^{2}-\Gamma\mathbf{\mathcal{D}}^{2}\Psi\big{)}, \tag{15}\] with the coefficients \(\alpha=\sum_{\nu}\eta_{\nu}^{*}\xi_{\nu}a_{\nu}\), \(\Gamma=\sum_{\nu}\eta_{\nu}^{*}\xi_{\nu}\mathcal{K}_{\nu}\), and \(\beta=\sum_{\nu}\eta_{\nu}^{*}\xi_{\nu}|\xi_{\nu}|^{2}b_{\nu}\) [here \(\eta_{1}=S^{-1/2}\) and \(\eta_{2}=-S^{1/2}\)]. Using Eq. (14), one can rearrange Eq. (15) as \[\psi_{\eta}=-\frac{\mathcal{G}}{4g_{12}}\big{(}a\bar{\alpha}\Psi+b\bar{\beta}\Psi|\Psi|^{2}\big{)}, \tag{16}\] with \(\bar{\alpha}=\frac{\alpha}{a}-\frac{\Gamma}{\mathcal{K}}\) and \(\bar{\beta}=\frac{\beta}{b}-\frac{\Gamma}{\mathcal{K}}\). Notice that \(\psi_{\eta}(\mathbf{r})\) is responsible for the difference between the spatial profiles of \(\Delta_{1}(\mathbf{r})\) and \(\Delta_{2}(\mathbf{r})\), i.e. 
it determines the deviation of the band-dependent coherence lengths \(\xi_{1}\) and \(\xi_{2}\) from the GL coherence length \(\xi\), see the discussion in [4; 5]. The leading correction to the GL contribution in the free energy density does not involve the terms depending on \(\psi_{\xi}\). Thus, to calculate the free energy within the extended GL formalism, involving the GL contribution and its leading correction in \(\tau\), one needs to know only the solution to the GL formalism (as \(\psi_{\eta}\) depends on \(\Psi\) and \(\mathbf{\mathcal{A}}\)), see the details in the previous papers for clean two-band systems [4; 5]. The Landau order parameter obeys the first GL equation given by Eq. (14). The second GL equation (the current equation) reads \[\mathbf{\nabla}\times\mathbf{\mathcal{B}}=\frac{4\pi}{c}\,\mathbf{j},\quad\mathbf{j}=\frac{4e\mathcal{K}}{\hbar c}\,\text{Im}[\Psi^{*}\mathbf{\mathcal{D}}\Psi]. \tag{17}\] Using solutions for Eqs. (14) and (17) and employing Eq. (16), one gets the stationary free energy density necessary to investigate the IT physics in dirty two-band superconductors. Now we turn to the problem of switching between superconductivity types I and II. It is well known that type II differs from type I by the possibility to develop the mixed state where a magnetic field penetrates the interior of a superconductor so that the superconducting condensate is specified by a nonuniform spatial distribution. To calculate the boundary between types I and II, one needs to compare the Gibbs free energy of the Meissner state at the thermodynamic critical field \(H_{c}\) with that of a specific spatial configuration of the superconducting condensate [4; 5]. For example, one can choose the single-vortex configuration and calculate the corresponding Gibbs free energy difference between the nonuniform and Meissner states. When this difference is positive, the system is in type I. When it is negative, we arrive at type II. There are several ways to calculate the set of the parameters corresponding to the boundary between types I and II. Within the GL theory all these ways yield the same result: the boundary between types I and II is specified by the relation \(\kappa=\kappa_{0}=1/\sqrt{2}\). This is not the case beyond the GL theory: here the above ways of calculating the boundary between types I and II result in different lines \(\kappa^{*}(T)\) in the \(\kappa\)-\(T\) plane. All these lines intersect at the point \((\kappa_{0},T_{c})\), which is called the Bogomolnyi point (B-point). When the system approaches the B-point, it is governed by the self-dual GL theory given by the two Bogomolnyi (self-duality) equations. The fundamental feature of the B-point is that the corresponding equilibrium state is degenerate, hiding an infinite number of various exotic vortex configurations being degenerate solutions of the Bogomolnyi equations [6]. Below \(T_{c}\) the degeneracy is lifted and successive self-dual configurations shape the internal structure of the IT domain and determine its unconventional superconductive magnetic properties [4; 5; 6; 17]. The difference between the Gibbs free energies of a nonuniform condensate configuration and the Meissner state reads \[G=\int\mathbf{g}\,d^{3}\mathbf{r},\quad\mathbf{g}=\mathbf{f}+\frac{H_{c}^{2}}{8\pi}-\frac{H_{c}B}{4\pi}, \tag{18}\] with the applied and internal fields \(\mathbf{H}=(0,0,H_{c})\) and \(\mathbf{B}=(0,0,B)\). 
Here the thermodynamic critical field is given by \[\frac{H_{c}}{\tau\mathcal{H}_{c}}=1-\tau\left(\frac{1}{2}+\frac{ac}{3b^{2}}+\frac{\mathcal{G}a}{4g_{12}}(\bar{\alpha}-\bar{\beta})^{2}\right)+\ldots, \tag{19}\] with the GL thermodynamic critical field \(\mathcal{H}_{c}=\sqrt{4\pi a^{2}/b}\) and \(c=\sum_{\nu}|\xi_{\nu}|^{6}c_{\nu}\). Notice that \(\mathcal{H}_{c}\) should be multiplied by \(\tau\) to get back to the standard definition of the GL thermodynamic critical field. Using the \(\tau\)-expansion approach, we represent \(G\) as a series in \(\tau\), and keep only the leading correction to the GL contribution (see the Supplemental material). In addition, we employ the expansion in the small deviation \(\delta\kappa=\kappa-\kappa_{0}\), as our study is focused on the IT domain near \(\kappa_{0}\). The relevant details are similar to those in the calculations for clean two-band superconductors and can be found in [4; 5]. Then, the Gibbs free energy difference is obtained as \[\frac{G}{\tau^{2}}= -\sqrt{2}\,\mathcal{I}\,\delta\kappa+\tau\,\Big{\{}\big{[}\bar{\mathcal{Q}}-\bar{c}+\bar{\mathcal{G}}\bar{\beta}(2\bar{\alpha}-\bar{\beta})\big{]}\mathcal{I}\] \[+\Big{[}\frac{3}{2}\bar{\mathcal{L}}-\bar{c}-\bar{\mathcal{Q}}-\bar{\mathcal{G}}\bar{\beta}^{2}\Big{]}\mathcal{J}\Big{\}}+\ldots, \tag{20}\] where \(G\) is given in units of \(\mathcal{H}_{c}^{2}\lambda^{2}L/2\pi\), with \(L\) the system size in the \(z\) direction, the dimensionless coefficients are defined as \[\bar{c}=\frac{ca}{3b^{2}},\;\bar{\mathcal{Q}}=\frac{\mathcal{Q}a}{\mathcal{K}^{2}},\;\bar{\mathcal{L}}=\frac{\mathcal{L}a}{\mathcal{K}b},\;\bar{\mathcal{G}}=\frac{\mathcal{G}a}{4g_{12}}, \tag{21}\] with \(\mathcal{Q}=\sum_{\nu}|\xi_{\nu}|^{2}\mathcal{Q}_{\nu}\), \(\mathcal{L}=\sum_{\nu}|\xi_{\nu}|^{4}\mathcal{L}_{\nu}\). The integrals \(\mathcal{I}\) and \(\mathcal{J}\) are given by \[\mathcal{I}= \int\!|\Psi|^{2}\big{(}1-|\Psi|^{2}\big{)}d^{2}\mathbf{r},\, \mathcal{J}= \int\!|\Psi|^{4}\big{(}1-|\Psi|^{2}\big{)}d^{2}\mathbf{r}, \tag{22}\] where \(\Psi\) is a solution of the GL equations for a particular condensate configuration at \(\kappa=\kappa_{0}\); it is given in units of \(\Psi_{0}=\sqrt{-a/b}\). Using Eq. (20), we find the corresponding critical GL parameter from \(G=0\) as \[\kappa^{*}= \kappa_{0}\bigg{\{}1+\tau\left[\bar{\mathcal{Q}}-\bar{c}+\bar{\mathcal{G}}\bar{\beta}(2\bar{\alpha}-\bar{\beta})\right.\] \[\left.+\left(\frac{3}{2}\bar{\mathcal{L}}-\bar{c}-\bar{\mathcal{Q}}-\bar{\mathcal{G}}\bar{\beta}^{2}\right)\!\frac{\mathcal{J}}{\mathcal{I}}\right]+\ldots\bigg{\}}. \tag{23}\] Utilizing a particular condensate-field configuration, we can now find the corresponding critical GL parameter, taking account of the leading correction to the GL theory. Notice that the dimensionless GL formalism involves only one parameter, i.e. the GL parameter \(\kappa\). It means that for any particular mixed-state configuration taken at \(\kappa=\kappa_{0}\) the spatial distribution of \(\Psi\) is the same in both the clean and dirty limits. Then, in our subsequent analysis we can employ the values of \(\mathcal{I}\) and \(\mathcal{J}\) found previously for the clean two-band case. One of the possibilities to calculate the boundary between types I and II is to consider the appearance/disappearance of a nonuniform (mixed) superconducting state for fields above \(H_{c}\). As such states exist below the upper critical field \(H_{c2}\), it means that we need to check the condition \(H_{c}=H_{c2}\). 
In this case \(\Psi\to 0\) and so, to get the corresponding critical GL parameter \(\kappa_{2}^{*}\), one needs to utilize \(\mathcal{J}\ll\mathcal{I}\)[4] in Eq. (23). We can also choose the single-vortex solution as the reference spatial configuration and check when it is favourable versus the Meissner state. This is equivalent to the condition \(H_{c}=H_{c1}\), where \(H_{c1}\) is the lower critical field [4]. Inserting the corresponding ratio \(\mathcal{J}/\mathcal{I}=0.735\)[4] in Eq. (23), we find the critical parameter \(\kappa_{1}^{*}\). When using the condition of the zero surface energy of a flat domain wall between the superconductive and normal states, one finds \(\mathcal{J}/\mathcal{I}=0.559\)[4]. This ratio is then plugged in Eq. (23), which yields \(\kappa_{s}^{*}\). Finally, there exists also the useful condition of changing the sign of the long-range interaction between vortices. This condition results in \(\mathcal{J}/\mathcal{I}=2\)[4], and adopting this ratio in Eq. (23), we obtain \(\kappa_{li}^{*}\). As these critical GL parameters differ from one another, they yield different boundaries between types I and II beyond the GL theory. This difference shapes the internal structure of the IT domain in the \(\kappa\)-\(T\) plane. To find these boundaries, one needs to explicitly calculate the dimensionless coefficients defined in Eq. (21). These coefficients depend on the three parameters: \(S\) given by Eq. (11) and the ratios \(\theta=\mathcal{D}_{2}/\mathcal{D}_{1}\) and \(\chi=N_{2}/N_{1}\). Equation (11) yields \[S=\frac{1}{2\lambda_{12}}\Bigg{[}\lambda_{22}-\frac{\lambda_{11}}{\chi}+\sqrt{\Big{(}\lambda_{22}-\frac{\lambda_{11}}{\chi}\Big{)}^{2}+4\frac{\lambda_{12}^{2}}{\chi}}\Bigg{]}\,, \tag{24}\] where \(\lambda_{\nu\nu^{\prime}}=g_{\nu\nu^{\prime}}(N_{1}+N_{2})\). Hence, to get the boundaries of the IT domain in the \(\kappa\)-\(T\) plane, we need to specify the dimensionless couplings \(\lambda_{ij}\) together with \(\chi\) and \(\theta\). Below, for the sake of illustration, we use the set \(\lambda_{11}=1.91\), \(\lambda_{22}=0.477\), \(\lambda_{12}=0.204\), and \(\chi=1.37\). These values are extracted from the data used for MgB\({}_{2}\)[31]. The ratio of the band diffusivities is treated as a free parameter here. We remark that this ratio can be very large, up to \(\sim 200\), as in dirty films of MgB\({}_{2}\)[23]. It is important to note that the choice of the dimensionless couplings and the DOS ratio is not decisive for our conclusions; similar results are obtained for other variants. Figure 1: The IT domain in the \(\kappa\)-\(T\) phase diagram. Panel (a) demonstrates the \(\tau\)-derivatives of the GL critical parameters (for their definitions, see the text) versus the ratio \(\theta=\mathcal{D}_{2}/\mathcal{D}_{1}\). Panels (b) and (c) show the IT domain in the \(\kappa\)-\(T\) plane for \(\theta=5\) and \(200\); the upper boundary is given by \(\kappa_{li}^{*}(T)\) whereas the lower boundary is \(\kappa_{2}^{*}(T)\). We remark that for single-band superconductors, the experimental results for the boundaries of the IT domain are in good agreement with the calculations of the extended GL theory down to \(T\sim 0.5T_{c}\)[4]; this is why our results in panels (b) and (c) are given by the dashed lines below \(T=0.5T_{c}\). Our results for \(\kappa_{2}^{*},\kappa_{1}^{*},\kappa_{s}^{*}\) and \(\kappa_{li}^{*}\) are shown in Figs. 1(a-c). In Fig. 1(a) one can see the \(\tau\)-derivatives of the critical GL parameters as functions of \(\theta\). In Figs. 
1(b) and (c) the upper and lower boundaries of the IT domain (\(\kappa_{li}^{*}\) and \(\kappa_{2}^{*}\)) are shown in the \(\kappa\)-\(T\) plane for \(\theta=5\) and 200, respectively. The main result of our present investigation is that the IT domain systematically expands with an increasing ratio of the band diffusivities \(\theta\). Being nearly negligible at \(\theta\sim 1\) [see Fig. 1(a)], it occupies a significant part of the phase diagram for large values of \(\theta\). For example, from Fig. 1(c) one can see that our diffusive two-band system with \(\kappa=7\)-\(8\) belongs to the IT domain at \(T=0.5T_{c}\) while such \(\kappa\)-values are commonly thought to be in type II. For \(\theta\lesssim 1\) the IT domain is nearly negligible with a width of about \(\Delta\kappa\sim 0.01\) (invisible in the figure) and its upper boundary is given by \(\kappa_{2}^{*}\). This is similar to the IT domain in a diffusive single-band system, where \(H_{c2}<H_{c}\), and a first-order transition is expected at the upper critical field [7; 8]. However, for \(\theta>3\) the situation changes qualitatively so that the upper IT boundary corresponds to the sign change of the long-range interaction between vortices (controlled by \(\kappa_{li}^{*}\)), similarly to the IT domain in clean single-band and two-band superconductors. Based on the previous study for clean systems [4; 5; 6; 7], we can conclude that the IT vortex matter in dirty two-band superconductors with sufficiently large ratios of the band diffusion coefficients exhibits the formation of vortex clusters and vortex chains in the IT subdomain above \(\kappa_{s}^{*}(T)\) while vortex liquid droplets proliferate in the IT subdomain below \(\kappa_{s}^{*}(T)\). According to the conclusions of [4; 5; 6; 17], the appearance of such exotic vortex configurations is connected with the self-dual nature of the B-point [4]. In summary, we have considered the nontrivial disorder effect appearing due to the interplay between the diffusive motion of charge carriers and the multiband structure of the single-particle states. Our results demonstrate that when the band diffusion coefficient in the weaker band is significantly larger than that of the stronger band, the nonmagnetic impurity scattering leads to a huge expansion of the IT domain between the standard superconductivity types in the \(\kappa\)-\(T\) plane. In our study we have considered the minimal two-band diffusive model with the \(s\)-wave pairing in both bands that are coupled via Josephson-like interband pair transfer, while the interband impurity scattering is not included. However, our preliminary study makes it possible to conclude that the effect of interest is generic and the qualitative results are not sensitive to the interband scattering. Furthermore, the \(s\)-wave pairing is not crucial for our conclusions. Notice that the B-point is also present in the case of the \(d\)-wave pairing. Finally, our findings complement the previous results on the enlargement of the IT domain in clean multiband superconductors that takes place when the Fermi velocity of the weaker band is significantly larger than that of the stronger band [4; 5]. ## Acknowledgements The work was supported by the Basic Research Program of the HSE University. ## References * (1) J. B. Ketterson and S. N. Song, _Superconductivity_ (Cambridge Univ. Press, Cambridge, 1999). * (2) J. Auer and H. Ullmaier, Magnetic behavior of type-II superconductors with small Ginzburg-Landau parameters, Phys. Rev. 
B **7**, 136 (1973). * (3) U. Krageloh, Flux line lattices in the intermediate state of superconductors with Ginzburg-Landau parameters near \(1/\sqrt{2}\), Phys. Lett. A **28**, 657 (1969). * (4) U. Essmann, Observation of the mixed state, Physica **55**, 83 (1971). * (5) U. Kumpf, Magnetisierungskurven von Supraleitern zweiter Art mit kleinen Ginzburg-Landau-parameteren, Phys. Stat. Solidi B **44**, 829 (1971). * (6) A. E. Jacobs, First-order transitions at \(H_{c1}\) and \(H_{c2}\) in type II superconductors, Phys. Rev. Lett. **26**, 629 (1971). * (7) Yu. N. Ovchinnikov, Generalized Ginzburg-Landau equation and the properties of superconductors with Ginzburg-Landau parameter \(\kappa\) close to 1, JETP **88**, 398 (1999). * (8) I. Luk'yanchuk, Theory of superconductors with \(\kappa\) close to \(1/\sqrt{2}\), Phys. Rev. B **63**, 174504 (2001). * (9) M. Laver, C. J. Bowell, E. M. Forgan, A. B. Abrahamsen, D. Fort, C. D. Dewhurst, S. Muhlbauer, D. K. Christen, J. Kohlbrecher, R. Cubitt, and S. Ramos, Structure and degeneracy of vortex lattice domains in pure superconducting niobium: a small-angle neutron scattering study, Phys. Rev. B **79**, 014518 (2009). * (10) E. H. Brandt and M. P. Das, Attractive vortex interaction and the intermediate mixed state of superconductors, J. Supercond. Nov. Magn. **24**, 57 (2011). * (11) A. Pautrat and A. Brulet, Temperature dependence of clusters with attracting vortices in superconducting niobium studied by neutron scattering, J. Phys.: Condens. Matter **26**, 323201 (2014). * (12) J.-Y. Ge, J. Gutierrez, A. Lyashchenko, V. Filipov, J. Li, and V. V. Moshchalkov, Direct visualization of vortex pattern transition in ZrB\({}_{12}\) with Ginzburg-Landau parameter close to the dual point, Phys. Rev. B **90**, 184511 (2014). * (13) T. Reimann, S. Muhlbauer, M. Schulz, B. Betz, A. Kaestner, V. Pipich, P. Boni, and C. Grunzweig, Visualizing the morphology of vortex lattice domains in a bulk type-II superconductor, Nature Commun. **6**, 8813 (2015). * (14) A. Vagov, A. A. Shanenko, M. V. Milosevic, V. M. Axt, V. M. Vinokur, J. Albino Aguiar, and F. M. Peeters, Superconductivity between standard types: Multiband versus single-band materials, Phys. Rev. B **93**, 174503 (2016). * (15) J.-Y. Ge, V. N. Gladilin, N. E. Sluchanko, A. Lyashenko, V. Filipov, J. O. Indekeu, and V. V. Moshchalkov, Paramagnetic Meissner effect in ZrB\({}_{12}\) single crystal with nonmonotonic vortex-vortex interactions, New J. Phys. **19**, 093020 (2017). * Reimann _et al._ [2017]T. Reimann, M. Schulz, D. F. R. Mildner, M. Bleuel, A. Brulet, R. P. Harti, G. Benka, A. Bauer, P. Boni, and S. Muhlbauer, Domain formation in the type-II/1 superconductor niobium: interplay of pinning, geometry, and attractive vortex-vortex interaction, Phys. Rev. B **96**, 144506 (2017). * Wolf _et al._ [2017]S. Wolf, A. Vagov, A. A. Shanenko, V. M. Axt, and J. Albino Aguiar, Vortex matter stabilized by many-body interactions, Phys. Rev. B **96**, 144515 (2017). * Backs _et al._ [2019]A. Backs, M. Schulz, V. Pipich, M. Kleinhans, P. Boni, and S. Muhlbauer, Universal behavior of the intermediate mixed state domain formation in superconducting niobium, Phys. Rev. B **100**, 064503 (2019). * Saraiva _et al._ [2012]T. T. Saraiva, A. Vagov, V. M. Axt, J. Albino Aguiar, and A. A. Shanenko, Anisotropic superconductors between types I and II, Phys. Rev. B **99**, 014502 (2012). * Vagov _et al._ [2020]A. Vagov, S. Wolf, M. D. Croitoru, and A. A. 
Shanenko, Universal flux patterns and their interchange in superconductors between types I and II, Commun. Phys. **3**, 58 (2020). * Ooi _et al._ [2021]S. Ooi, M. Tachiki, T. Konomi, T. Kubo, A. Kikuchi, S. Arisawa, H. Ito, and K. Umemori, Observation of intermediate mixed state in high-purity cavity-grade Nb by magneto-optical imaging, Phys. Rev. B **104**, 064504 (2021). * Brems _et al._ [2022]X. S. Brems, S. Muhlbauer, W. Y. Cordoba-Camacho, A. A. Shanenko, A. Vagov, J. A. Aguiar, and R. Cubitt, Current-induced self-organisation of mixed superconducting states, Supercond. Sci. Technol. **35**, 035003 (2022). * Curran _et al._ [2015]P. J. Curran, W. M. Desoky, M. V. Milosevic, A. Chaves, J.-B. Laloe, J. S. Moodera, and S. J. Bending, Spontaneous symmetry breaking in vortex systems with two repulsive lengthscales, Sci. Rep. **5**, 15569 (2015). * Wolf _et al._ [2017]S. Wolf, A. Vagov, A. A. Shanenko, V. M. Axt, A. Perali, and J. Albino Aguiar, BCS-BEC crossover induced by a shallow band: Pushing standard superconductivity types apart, Phys. Rev. B **95**, 094521 (2017). * Cavalcanti _et al._ [2020]P. J. F. Cavalcanti, T. T. Saraiva, J. Albino Aguiar, A. Vagov, M. D. Croitoru, and A. A. Shanenko, Multiband superconductors with degenerate excitation gaps, J. Phys.: Condens. Matter **32**, 455702 (2020). * Gurevich [2003]A. Gurevich, Enhancement of the upper critical field by nonmagnetic impurities in dirty two-gap superconductors, Phys. Rev. B **67**, 184515 (2003). * Jacobs [1971]A. E. Jacobs, Theory of inhomogeneous superconductors near \(T=T_{c}\), Phys. Rev. B **4**, 3016 (1971). * Vagov _et al._ [2012]A. Vagov, A. A. Shanenko, M. V. Milosevic, V. M. Axt, and F. M. Peeters, Extended Ginzburg-Landau formalism: Systematic expansion in small deviation from the critical temperature, Phys. Rev. B **85**, 014502 (2012). * Shanenko _et al._ [2011]A. A. Shanenko, M. V. Milosevic, F. M. Peeters, and A. V. Vagov, Extended Ginzburg-Landau Formalism for Two-Band Superconductors, Phys. Rev. Lett. **106**, 047005 (2011). * Vagov _et al._ [2012]A. Vagov, A. A. Shanenko, M. V. Milosevic, V. M. Axt, and F. M. Peeters, Two-band superconductors: Extended Ginzburg-Landau formalism by a systematic expansion in small deviation from the critical temperature, Phys. Rev. B **86**, 144514 (2012). * Golubov _et al._ [2002]A. A. Golubov, J. Kortus, O. V. Dolgov, O. Jepsen, Y. Kong, O. K. Andersen, B. J. Gibson, K. Ahn, and R. K. Kremer, Specific heat of MgB\({}_{2}\) in a one- and a two-band model from first-principles calculations, J. Phys.: Condens. Matter **14**, 1353 (2002). **Supplemental material for the article "Intertype superconductivity evoked by the interplay of disorder and multiple bands".** ## I Expansion in powers of the band gap functions and their gradients ### Expansion of the Green functions The first step in the derivation of the \(\tau\)-expansion (\(\tau=1-T/T_{c}\), the proximity to the critical temperature) for the two-band model of a diffusive superconductor [1] is the expansion of the Green functions \(g_{\nu}(\mathbf{r},\omega)\) and \(f_{\nu}(\mathbf{r},\omega)\) in powers of the band gap function \(\Delta_{\nu}(\mathbf{r})\) and its gradients. This expansion is sought in the form \[g_{\nu}= g_{\nu}^{(0)}+g_{\nu}^{(2)}+g_{\nu}^{(4)}+\ldots,\] \[f_{\nu}= f_{\nu}^{(1)}+f_{\nu}^{(3)}+f_{\nu}^{(5)}+\ldots,\] (S1) where \(g_{\nu}^{(2n)}\) and \(f_{\nu}^{(2n+1)}\) are of the orders of \(\Delta_{\nu}^{2n}\) and \(\Delta_{\nu}^{2n+1}\), respectively (with \(n=0,1,2,\ldots\)). 
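Before the coefficients are written out, note that the truncation must respect the normalization \(g_{\nu}^{2}+|f_{\nu}|^{2}=1\) order by order. The following sympy sketch (an independent consistency check, not part of the paper) verifies this through order \(\Delta_{\nu}^{4}\) for the homogeneous, zero-gradient parts of the coefficients given in Eqs. (S2) and (S3) below:

```python
import sympy as sp

D, w = sp.symbols("Delta w", positive=True)  # |Delta_nu| and hbar*omega

# homogeneous (zero-gradient) parts of the expansion, Eqs. (S2)-(S3) below
g = 1 - D**2 / (2 * w**2) + 3 * D**4 / (8 * w**4)
f = D / w - D**3 / (2 * w**3)

# the normalization g^2 + |f|^2 = 1 must hold through order Delta^4
residual = sp.expand(g**2 + f**2 - 1)
print(residual)  # only terms of order Delta^6 and higher survive
```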
One also keeps in mind that the spatial gradient \(\mathbf{\nabla}\) and gauge-invariant spatial derivative \(\mathbf{D}=\mathbf{\nabla}-(i2e/\hbar c)\mathbf{A}\) are of the order of \(\Delta_{\nu}\). Inserting Eq. (S1) in the two-band Usadel equations [see Eq. (1) in the article] and using the accompanying normalization condition \(g_{\nu}^{2}+|f_{\nu}|^{2}=1\), one obtains \[g_{\nu}^{(0)}= 1,\ g_{\nu}^{(2)}=-\frac{|\Delta_{\nu}|^{2}}{2(\hbar\omega)^{2}},\] \[g_{\nu}^{(4)}= \frac{3|\Delta_{\nu}|^{4}}{8(\hbar\omega)^{4}}-\frac{\mathcal{D}_{\nu}}{2\hbar^{2}\omega^{3}}\mathrm{Re}\left(\Delta_{\nu}^{*}\mathbf{D}^{2}\Delta_{\nu}\right)\] (S2) and \[f_{\nu}^{(1)}= \frac{\Delta_{\nu}}{\hbar\omega},\ f_{\nu}^{(3)}=\frac{\mathcal{D}_{\nu}}{2\hbar\omega^{2}}\mathbf{D}^{2}\Delta_{\nu}-\frac{\Delta_{\nu}|\Delta_{\nu}|^{2}}{2(\hbar\omega)^{3}},\] \[f_{\nu}^{(5)}= \frac{3\Delta_{\nu}|\Delta_{\nu}|^{4}}{8(\hbar\omega)^{5}}+\frac{\mathcal{D}_{\nu}^{2}}{4\hbar\omega^{3}}\mathbf{D}^{2}(\mathbf{D}^{2}\Delta_{\nu})\] \[-\frac{\mathcal{D}_{\nu}}{4\hbar^{3}\omega^{4}}\Big{[}3|\Delta_{\nu}|^{2}\mathbf{D}^{2}\Delta_{\nu}+2\Delta_{\nu}|\mathbf{D}\Delta_{\nu}|^{2}\] \[+2\Delta_{\nu}^{*}(\mathbf{D}\Delta_{\nu})^{2}+\Delta_{\nu}^{2}(\mathbf{D}^{2}\Delta_{\nu})^{*}\Big{]}.\] (S3) We remark that only the terms contributing to the leading correction to the Ginzburg-Landau (GL) theory are highlighted in Eqs. (S1)-(S3). ### Expansion of the free energy density Here we outline the derivation of the free-energy expansion in powers of the band-dependent gap functions and their spatial gradients. To get this expansion, one inserts Eqs. (S1)-(S3) in Eq. (4) of the article. However, before proceeding to this calculation, we need to rearrange the last term in the brackets of Eq. (4). First, we obtain \[\int\frac{\hbar\mathcal{D}_{\nu}}{2} \left[|\mathbf{D}f_{\nu}|^{2}+(\mathbf{\nabla}g_{\nu})^{2}\right]d^{3}\mathbf{r}\] (S4) \[=\int\frac{\hbar\mathcal{D}_{\nu}}{2}\big{[}-f_{\nu}^{*}\mathbf{D}^{2}f_{\nu}-g_{\nu}\mathbf{\nabla}^{2}g_{\nu}\big{]}d^{3}\mathbf{r},\] where surface integrals, obtained by virtue of Gauss's theorem, vanish. Then, using the Usadel equation [see Eq. (1) in the article], one gets \[\int\!\!\frac{\hbar\mathcal{D}_{\nu}}{2}\left[|\mathbf{D}f_{\nu}|^{2}+(\mathbf{\nabla}g_{\nu})^{2}\right]d^{3}\mathbf{r}\] \[=\int\left[f_{\nu}^{*}\Delta_{\nu}+\hbar\omega g_{\nu}-\frac{2\hbar\omega+\hbar\mathcal{D}_{\nu}\mathbf{\nabla}^{2}g_{\nu}}{2g_{\nu}}\right]d^{3}\mathbf{r}.\] (S5) In addition, to keep only the terms up to the order \(\tau^{3}\) in the free energy, it is enough to use the approximations \[\int d^{3}\mathbf{r}\frac{\mathbf{\nabla}^{2}g_{\nu}}{g_{\nu}}= \int d^{3}\mathbf{r}\frac{(\mathbf{\nabla}g_{\nu})^{2}}{g_{\nu}^{2}} \simeq\int d^{3}\mathbf{r}(\mathbf{\nabla}g_{\nu})^{2},\] \[g_{\nu}\simeq 1-\frac{|f_{\nu}|^{2}}{2}-\frac{|f_{\nu}|^{4}}{8}-\frac{|f_{\nu}|^{6}}{16},\] (S6) where the latter expression follows from the normalization condition. Finally, we get the approximate expression \[\int\frac{\hbar\mathcal{D}_{\nu}}{2} \left[|\mathbf{D}f_{\nu}|^{2}+(\mathbf{\nabla}g_{\nu})^{2}\right]d^{3}\mathbf{r}\] (S7) \[\simeq \int d^{3}\mathbf{r}\bigg{[}f_{\nu}^{*}\Delta_{\nu}+\hbar\omega\bigg{(}|f_{\nu}|^{2}+\frac{|f_{\nu}|^{4}}{2}\] \[+\frac{3|f_{\nu}|^{6}}{8}\bigg{)}-\frac{\hbar\mathcal{D}_{\nu}}{2}(\mathbf{\nabla}g_{\nu})^{2}\bigg{]}.\] Now, inserting Eqs. (S1)-(S3) and (S7) in Eq. 
(4) of the article, we get \[\mathfrak{f}_{\nu}= 2\pi TN_{\nu}\sum_{\omega>0}\bigg{\{}-\frac{|\Delta_{\nu}|^{2}}{\hbar\omega}+\frac{|\Delta_{\nu}|^{4}}{4(\hbar\omega)^{3}}-\frac{|\Delta_{\nu}|^{6}}{8(\hbar\omega)^{5}}\] \[+\frac{\hbar\mathcal{D}_{\nu}}{2(\hbar\omega)^{2}}|\mathbf{D}\Delta_{\nu}|^{2}-\frac{(\hbar\mathcal{D}_{\nu})^{2}}{4(\hbar\omega)^{3}}|\mathbf{D}^{2}\Delta_{\nu}|^{2}\] \[-\frac{\hbar\mathcal{D}_{\nu}}{8(\hbar\omega)^{4}}\big{[}6|\Delta_{\nu}|^{2}|\mathbf{D}\Delta_{\nu}|^{2}+\Delta_{\nu}^{2}(\mathbf{D}^{*}\Delta_{\nu}^{*})^{2}\] \[+\Delta_{\nu}^{*2}(\mathbf{D}\Delta_{\nu})^{2}\big{]}\bigg{\}}.\] (S8) Then, we calculate the sums over \(\omega\) as \[2\pi T\sum_{\omega>0}\frac{1}{\hbar\omega}=\ln\frac{2e^{\gamma}\hbar\omega_{D}}{\pi T},\;2\pi T\sum_{\omega>0}\frac{1}{(\hbar\omega)^{2}}=\frac{\pi}{4T},\] \[2\pi T\sum_{\omega>0}\frac{1}{(\hbar\omega)^{3}}=\frac{7\zeta(3)}{4\pi^{2}T^{2}},\;2\pi T\sum_{\omega>0}\frac{1}{(\hbar\omega)^{4}}=\frac{\pi}{48T^{3}},\] \[2\pi T\sum_{\omega>0}\frac{1}{(\hbar\omega)^{5}}=\frac{31\zeta(5)}{16\pi^{4}T^{4}},\] (S9) where \(\omega_{D}\) is the Debye frequency, \(\zeta(\ldots)\) is the Riemann zeta function, and \(\gamma=0.577\). Finally, using Eqs. (S8) and (S9), one gets Eq. (5) of the manuscript. Notice that only the terms that contribute to the leading correction to the GL theory (in the \(\tau\)-expansion) are retained in this equation. ## II The \(\tau\)-expansion ### The free energy functional Now we employ the \(\tau\)-expansion of the gap functions and fields in the form [\(\vec{\Delta}^{T}=(\Delta_{1},\Delta_{2})\)] \[\vec{\Delta} =\tau^{1/2}\vec{\Psi}+\tau^{3/2}\vec{\psi}+\tau^{5/2}\vec{\varphi}+\ldots,\] \[\mathbf{A} =\tau^{1/2}\mathbf{\mathcal{A}}+\tau^{3/2}\mathfrak{a}+\ldots,\] \[\mathbf{B} =\tau\mathbf{\mathcal{B}}+\tau^{2}\mathfrak{b}+\ldots, \tag{S10}\] where \(\vec{\Psi}=(\Psi_{1},\Psi_{2})^{T}\) and \(\mathbf{\mathcal{B}}\) (\(\mathbf{\mathcal{A}}\)) are the GL contributions to the gap functions and fields while the leading corrections are governed by \(\vec{\psi}=(\psi_{1},\psi_{2})^{T}\) and \(\mathfrak{b}\) (\(\mathfrak{a}\)). Below we also need the next-to-leading correction to the gap functions \(\vec{\varphi}=(\varphi_{1},\varphi_{2})^{T}\), which is not introduced in the article. The point is that \(\vec{\varphi}\) appears in the leading correction to the GL theory in the \(\tau\)-expansion of the free energy functional but does not contribute to the stationary free energy. In addition, we introduce the \(\tau\)-scaling of the spatial coordinates; for more detail, see the discussion after Eq. (7) in the article and see also the papers about the extended GL formalism for clean systems [3; 4; 5; 6]. The corresponding \(\tau\)-expansion of the free energy density is written as \[\mathfrak{f}=\tau^{2}\left[\tau^{-1}\mathfrak{f}^{(-1)}+\mathfrak{f}^{(0)}+\tau\mathfrak{f}^{(1)}+\ldots\right].\] (S11) For the lowest-order term we have \[\mathfrak{f}^{(-1)}=\vec{\Psi}^{\dagger}\check{L}\vec{\Psi},\] (S12) where the matrix \(\check{L}\) is given by Eq. (8) of the article. The next order is given by \[\mathfrak{f}^{(0)}=\frac{\mathbf{\mathcal{B}}^{2}}{8\pi}+\big{(}\vec{\Psi}^{\dagger}\check{L}\vec{\psi}+\text{c.c.}\big{)}+\sum_{\nu=1,2}\mathfrak{f}_{\nu}^{(0)},\] (S13) where \[\mathfrak{f}_{\nu}^{(0)}=a_{\nu}|\Psi_{\nu}|^{2}+\frac{b_{\nu}}{2}|\Psi_{\nu}|^{4}+\mathcal{K}_{\nu}|\mathbf{\mathcal{D}}\Psi_{\nu}|^{2},\] (S14) with \(\mathbf{\mathcal{D}}=\mathbf{\nabla}-(i2e/\hbar c)\mathbf{\mathcal{A}}\). 
The coefficients \(a_{\nu},b_{\nu}\), and \(\mathcal{K}_{\nu}\) are defined by Eq. (6) of the article. Finally, \(\mathfrak{f}^{(1)}\) is given by \[\mathfrak{f}^{(1)}= \frac{\mathbf{\mathcal{B}}\cdot\mathfrak{b}}{4\pi}+\big{(}\vec{\Psi}^{\dagger}\check{L}\vec{\varphi}+\text{c.c.}\big{)}+\vec{\psi}^{\dagger}\check{L}\vec{\psi}\] \[+\sum_{\nu=1,2}\big{[}\mathfrak{f}_{\nu,1}^{(1)}+\mathfrak{f}_{\nu,2}^{(1)}\big{]},\] (S15) where \[\mathfrak{f}_{\nu,1}^{(1)}= \frac{a_{\nu}}{2}\,|\Psi_{\nu}|^{2}+b_{\nu}|\Psi_{\nu}|^{4}+\mathcal{K}_{\nu}\,|\mathbf{\mathcal{D}}\Psi_{\nu}|^{2}-\mathcal{Q}_{\nu}|\mathbf{\mathcal{D}}^{2}\Psi_{\nu}|^{2}\] \[-\frac{\mathcal{L}_{\nu}}{2}\Big{[}6\,|\Psi_{\nu}|^{2}|\mathbf{\mathcal{D}}\Psi_{\nu}|^{2}+\Psi_{\nu}^{2}(\mathbf{\mathcal{D}}^{*}\Psi_{\nu}^{*})^{2}\] \[+\Psi_{\nu}^{*2}(\mathbf{\mathcal{D}}\Psi_{\nu})^{2}\Big{]}-\frac{c_{\nu}}{3}|\Psi_{\nu}|^{6}\] (S16) and \[\mathfrak{f}_{\nu,2}^{(1)}= \big{(}a_{\nu}+b_{\nu}|\Psi_{\nu}|^{2}\big{)}\big{(}\Psi_{\nu}\psi_{\nu}^{*}+\text{c.c.}\big{)}\] \[+\mathcal{K}_{\nu}\left[\big{(}\mathbf{\mathcal{D}}\Psi_{\nu}\cdot\mathbf{\mathcal{D}}^{*}\psi_{\nu}^{*}+\text{c.c.}\big{)}-\mathfrak{a}\cdot\mathbf{i}_{\nu}\right],\] (S17) where \(\mathbf{i}_{\nu}=(4e/\hbar c)\,\text{Im}[\Psi_{\nu}^{*}\mathbf{\mathcal{D}}\Psi_{\nu}]\). Using the \(\tau\)-expansion of the free energy functional given by Eqs. (S11)-(S17), one gets the stationary point equations, see Eqs. (8) and (9) in the article. According to Eq. (8), we find that the contribution \(\mathfrak{f}^{(-1)}\) is exactly equal to zero at the stationary point. In addition, the term \(\vec{\Psi}^{\dagger}\check{L}\vec{\varphi}+\text{c.c.}\) is also zero in \(\mathfrak{f}^{(1)}\). Thus, only \(\vec{\Psi}\) and \(\vec{\psi}\) make a contribution to the free energy density up to the order \(\tau^{3}\). Moreover, \(\vec{\psi}\) is written as the linear combination of \(\vec{\xi}\) and \(\vec{\eta}\) as \(\vec{\psi}=\psi_{\xi}\vec{\xi}+\psi_{\eta}\vec{\eta}\), see also Eq. (13) in the article. Then, one can find [3; 4; 5] that only \(\psi_{\eta}\) contributes to the leading correction to the GL theory and furthermore, \(\psi_{\eta}\) is expressed in terms of \(\Psi\), see Eqs. (15) and (16) in the main article. Thus, to find the stationary free energy up to the leading correction to the GL theory (this correction is of the order of \(\tau^{3}\)), one needs to use only the stationary solution of the GL formalism. ### The Gibbs free energy difference Using the stationary free energy, one calculates the Gibbs free energy difference given by Eq. (18) in the article. To simplify the calculations, we introduce the dimensionless quantities \[\tilde{\mathbf{r}}=\frac{\mathbf{r}}{\lambda\sqrt{2}},\,\tilde{\mathbf{\mathcal{B}}}=\frac{\kappa\sqrt{2}}{\mathcal{H}_{c}}\mathbf{\mathcal{B}},\,\tilde{\mathbf{\mathcal{A}}}=\frac{\kappa}{\mathcal{H}_{c}\lambda}\mathbf{\mathcal{A}},\] \[\tilde{\Psi}=\frac{\Psi}{\Psi_{0}},\,\tilde{\mathbf{f}}=\frac{4\pi\,\mathbf{f}}{\mathcal{H}_{c}^{2}},\,\,\tilde{\mathbf{g}}=\frac{4\pi\,\mathbf{g}}{\mathcal{H}_{c}^{2}},\] (S18) where \(\Psi_{0}=\sqrt{-a/b}\) is the uniform solution of the GL formalism and \(\mathcal{H}_{c}\) is the GL thermodynamic critical field, see Eq. (19) in the article. Below we utilize these dimensionless quantities without tilde, for simplicity. Notice that we use the \(\tau\)-scaled spatial coordinates and so, the GL coherence length \(\xi\) and the London penetration depth \(\lambda\) are scaled accordingly. 
The series in \(\tau\) for the Gibbs free energy difference is sought in the form \[\mathbf{g}=\tau^{2}\big{[}\mathbf{g}^{(0)}+\tau\mathbf{g}^{(1)}+\dots\big{]},\] (S19) where the lowest order (GL) contribution is given by \[\mathbf{g}^{(0)}=\frac{1}{2}\left(\frac{\mathcal{B}}{\kappa\sqrt{2}}-1\right)^{2}+\frac{1}{2\kappa^{2}}\,|\mathbf{\mathcal{D}}\Psi|^{2}-|\Psi|^{2}+\frac{1}{2}\,|\Psi|^{4},\] (S20) where \(\mathcal{B}=|\mathbf{\mathcal{B}}|\) and the dimensionless gauge-invariant derivative is given by \(\mathbf{\mathcal{D}}=\mathbf{\nabla}+i\mathbf{\mathcal{A}}\). The leading correction to the GL theory reads \[\mathbf{g}^{(1)}= \left(\frac{\mathcal{B}}{\kappa\sqrt{2}}-1\right)\left[\frac{1}{2}+\bar{c}+\bar{\mathcal{G}}(\bar{\alpha}-\bar{\beta})^{2}\right]+\bar{\mathcal{G}}|\Psi|^{2}\] \[\times\left(\bar{\alpha}-\bar{\beta}|\Psi|^{2}\right)^{2}-\frac{1}{2}|\Psi|^{2}+|\Psi|^{4}+\frac{1}{2\kappa^{2}}|\mathbf{\mathcal{D}}\Psi|^{2}\] \[+\frac{\bar{\mathcal{Q}}}{4\kappa^{4}}|\mathbf{\mathcal{D}}^{2}\Psi|^{2}+\frac{\bar{\mathcal{L}}}{4\kappa^{2}}\Big{\{}6|\Psi|^{2}|\mathbf{\mathcal{D}}\Psi|^{2}\] \[+\left[\Psi^{2}(\mathbf{\mathcal{D}}^{*}\Psi^{*})^{2}+\text{c.c.}\right]\Big{\}}+\bar{c}\,\big{|}\Psi\big{|}^{6},\] (S21) where the dimensionless coefficients \(\bar{c},\bar{\mathcal{Q}},\bar{\mathcal{L}}\), and \(\bar{\mathcal{G}}\) are given by Eq. (21) of the article. Using Eqs. (S19)-(S21) and introducing the expansion in \(\delta\kappa=\kappa-\kappa_{0}\) (with \(\kappa_{0}=1/\sqrt{2}\); see the article), one gets Eq. (20). This makes it possible to employ the self-duality Bogomolnyi equations [4; 5], as the GL theory is reduced to these equations at \(\kappa=\kappa_{0}\). The relevant details of the perturbation expansion in \(\delta\kappa\) and the corresponding calculations are discussed in the previous papers on the IT domain in clean superconductors [4; 5].
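To make the above recipe concrete, the following self-contained numerical sketch chains Eqs. (24), (6), (11), (12), (21) and (23) together for the MgB\({}_{2}\)-derived parameters quoted in the article (\(\lambda_{11}=1.91\), \(\lambda_{22}=0.477\), \(\lambda_{12}=0.204\), \(\chi=1.37\)). The unit conventions (\(\hbar=T_{c}=1\), \(N_{1}=1\)) and all names are assumptions of this sketch, not the authors' code; the \(\mathcal{J}/\mathcal{I}\) ratios are the values quoted from [4].

```python
import numpy as np

# Couplings and DOS ratio quoted in the text (extracted from MgB2 data [31]).
l11, l22, l12 = 1.91, 0.477, 0.204
chi, theta = 1.37, 5.0                  # chi = N2/N1, theta = D2/D1
N = np.array([1.0, chi])                # band densities of states (N1 = 1)
Dv = np.array([1.0, theta])             # band diffusivities (D1 = 1)
Tc = 1.0
z3, z5 = 1.2020569032, 1.0369277551     # zeta(3), zeta(5)

# Eq. (24): the band-weight parameter S
d = l22 - l11 / chi
S = (d + np.sqrt(d**2 + 4 * l12**2 / chi)) / (2 * l12)

# Eq. (6): band coefficients (tau factored out, as in the text; hbar = 1)
a_v = -N
b_v = N * 7 * z3 / (8 * np.pi**2 * Tc**2)
K_v = N * np.pi * Dv / (8 * Tc)
Q_v = Dv**2 / 2 * b_v
L_v = N * np.pi * Dv / (192 * Tc**3)
c_v = N * 93 * z5 / (128 * np.pi**4 * Tc**4)

xi = np.array([S**-0.5, S**0.5])        # Eq. (11)
eta = np.array([S**-0.5, -S**0.5])      # Eq. (12)

# band-averaged coefficients entering Eqs. (14), (15) and (21)
a, b, c = (xi**2 * a_v).sum(), (xi**4 * b_v).sum(), (xi**6 * c_v).sum()
K, Q, L = (xi**2 * K_v).sum(), (xi**2 * Q_v).sum(), (xi**4 * L_v).sum()
alpha, beta = (eta * xi * a_v).sum(), (eta * xi**3 * b_v).sum()
Gamma = (eta * xi * K_v).sum()
abar, bbar = alpha / a - Gamma / K, beta / b - Gamma / K

g12 = l12 / N.sum()
G = (l11 * l22 - l12**2) / N.sum()**2   # det of the coupling matrix
cbar, Qbar = c * a / (3 * b**2), Q * a / K**2          # Eq. (21)
Lbar, Gbar = L * a / (K * b), G * a / (4 * g12)

# Eq. (23): slope of kappa*/kappa0 in tau for the four boundary criteria,
# with the J/I ratios quoted from [4]
for name, JI in [("kappa_2*", 0.0), ("kappa_s*", 0.559),
                 ("kappa_1*", 0.735), ("kappa_li*", 2.0)]:
    slope = (Qbar - cbar + Gbar * bbar * (2 * abar - bbar)
             + (1.5 * Lbar - cbar - Qbar - Gbar * bbar**2) * JI)
    print(name, slope)
```

Only dimensionless combinations enter Eq. (21), so the arbitrary units chosen for \(N_{\nu}\) and \(\mathcal{D}_{\nu}\) drop out of the printed slopes; the numbers are only as meaningful as this reading of the conventions.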
Nonmagnetic impurity scattering is known to raise the Ginzburg-Landau parameter $\kappa$ of a superconductor. In this case, when the system is initially in type I, its magnetic response can change: it crosses the intertype domain with $\kappa \sim 1$ between the two standard superconductivity types and arrives at type II. In this work we show that the impact of disorder can be much more profound in the presence of a multiband structure of the charge-carrier states. In particular, when the band diffusivities differ from each other, the intertype domain expands significantly and can include points with $\kappa \gg 1$ that would belong to deep type II in conventional single-band superconductors. These results shed light on the nontrivial disorder effect and complement earlier results on the enlargement of the intertype domain in clean multiband superconductors.
2310.00454
SimLVSeg: Simplifying Left Ventricular Segmentation in 2D+Time Echocardiograms with Self- and Weakly-Supervised Learning
Echocardiography has become an indispensable clinical imaging modality for general heart health assessment. From calculating biomarkers such as ejection fraction to the probability of a patient's heart failure, accurate segmentation of the heart structures allows doctors to assess the heart's condition and devise treatments with greater precision and accuracy. However, achieving accurate and reliable left ventricle segmentation is time-consuming and challenging due to different reasons. Hence, clinicians often rely on segmenting the left ventricular (LV) in two specific echocardiogram frames to make a diagnosis. This limited coverage in manual LV segmentation poses a challenge for developing automatic LV segmentation with high temporal consistency, as the resulting dataset is typically annotated sparsely. In response to this challenge, this work introduces SimLVSeg, a novel paradigm that enables video-based networks for consistent LV segmentation from sparsely annotated echocardiogram videos. SimLVSeg consists of self-supervised pre-training with temporal masking, followed by weakly supervised learning tailored for LV segmentation from sparse annotations. We demonstrate how SimLVSeg outperforms the state-of-the-art solutions by achieving a 93.32% (95%CI 93.21-93.43%) dice score on the largest 2D+time echocardiography dataset (EchoNet-Dynamic) while being more efficient. SimLVSeg is compatible with two types of video segmentation networks: 2D super image and 3D segmentation. To show the effectiveness of our approach, we provide extensive ablation studies, including pre-training settings and various deep learning backbones. We further conduct an out-of-distribution test to showcase SimLVSeg's generalizability on unseen distribution (CAMUS dataset). The code is publicly available at https://github.com/fadamsyah/SimLVSeg.
Fadillah Maani, Asim Ukaye, Nada Saadi, Numan Saeed, Mohammad Yaqub
2023-09-30T18:13:41
http://arxiv.org/abs/2310.00454v3
UniLVSeg: Unified Left Ventricular Segmentation with Sparsely Annotated Echocardiogram Videos through Self-Supervised Temporal Masking and Weakly Supervised Training ###### Abstract Echocardiography has become an indispensable clinical imaging modality for general heart health assessment. From calculating biomarkers such as ejection fraction to the probability of a patient's heart failure, accurate segmentation of the heart and its structures allows doctors to plan and execute treatments with greater precision and accuracy. However, achieving accurate and robust left ventricle segmentation is time-consuming and challenging for several reasons. This work introduces a novel approach for consistent left ventricular (LV) segmentation from sparsely annotated echocardiogram videos. We achieve this through (1) self-supervised learning (SSL) using temporal masking followed by (2) weakly supervised training. We investigate two different segmentation approaches: 3D segmentation and a novel 2D super image (SI). We demonstrate how our proposed method outperforms the state-of-the-art solutions by achieving a 93.32% (95%CI 93.21-93.43%) dice score on a large-scale dataset (EchoNet-Dynamic) while being more efficient. To show the effectiveness of our approach, we provide extensive ablation studies, including pre-training settings and various deep learning backbones. Additionally, we discuss how our proposed methodology achieves high data utility by incorporating unlabeled frames in the training process. To help support the AI in medicine community, the complete solution with the source code will be made publicly available upon acceptance. Keywords: Left Ventricle Segmentation, Sparse Video Segmentation, 3D Segmentation, Super Image, Self-supervision, Temporal Masking ## 1 Introduction Echocardiograms are a crucial modality in cardiovascular imaging due to their safety, availability, and high temporal resolution [11]. In clinical practice, echocardiogram information is used to diagnose heart conditions and understand the preoperative risks in patients with cardiovascular diseases [7]. By accurately segmenting the heart structures, especially in the end-diastole (ED) and end-systole (ES) frames, clinicians can assess the extent and location of the disease, determine the appropriate treatment approach, and monitor the patient's response to therapy [10]. The typical manual workflow of segmenting LV is as follows: 1) a sonographer acquires an echocardiogram video using an ultrasound device and records the patient's heartbeat, 2) finds ED and ES by locating candidate frames indicated by the recorded heartbeat signal and then verifies them visually with the recorded echocardiogram video, 3) draws some key points to represent the LV region as shown in Figure 1. This manual LV segmentation workflow is typically time-consuming and prone to intra- and inter-observer variability. The inherent speckle noise in echocardiograms makes LV segmentation more challenging, as LV boundaries are sometimes unclear. Hence, sonographers need to consider the temporal context to resolve the ambiguity caused by unclear heart structures in echocardiograms and segment the LV accurately. This adds a further burden for sonographers, since they must go back and forth between echocardiogram frames to analyze the ambiguous boundaries properly. Automatic LV segmentation can help sonographers in solving this arduous task more efficiently. 
A wide range of work performs medical image segmentation with supervised deep learning ([23], [14]). Earlier segmentation approaches on echocardiograms propose a frame-by-frame (2D) image segmentation solution ([25], [13], [16], [21], [2]). The image-based approaches, however, do not capitalize on the periodicity and temporal consistency of the echocardiograms, which may lead to incoherence in the segmentation results from one frame to the next. This has motivated a recent body of video-based echocardiogram segmentation approaches. Li et al. [19] use a Conv-LSTM to ensure spatiotemporal consistency between consecutive frames. Ahn et al. [1] use a multi-frame attention network to perform 3D segmentation. Wu et al. [31] demonstrate the effectiveness of semi-supervision using mean-teacher networks and spatiotemporal fusion on segmentation. Recently, Wei et al. [30] propose a two-stage training to enforce temporal consistency on a 3D U-Net by leveraging an echocardiogram ED & ES sequence constraint. Painchaud et al. [22] improve the average segmentation performance by enforcing temporal smoothness as a post-processing step on video segmentation outputs. These video-based approaches show high temporal consistency and state-of-the-art performance. However, they have certain limitations. Recurrent units in [19] incur a high computational cost. Multi-frame attention in [1] similarly has a computational cost that scales with the number of frames, and the authors are limited to using five frames. [31] limit the temporal context to three frames to obtain an optimal performance-compute trade-off. [30] leverages a constraint in their training pipeline where the segmented area changes monotonically as the first input frame is ED and the last frame is ES in the same (_one_) heartbeat cycle, thus limiting the use of the vast number of unannotated frames in other cycles. On the other hand, image-based networks are computationally cheaper and retain an advantage in being effectively pre-trained on a large corpus of annotated image datasets. Annotated video datasets are, in comparison, more scarce. Fan et al. [5] introduced the idea of super images by flattening videos into image grids and successfully performed video-based tasks such as action recognition using image classifiers. Sobirov et al. [26] employ this approach on medical images for atrial and head and neck cancer segmentation problems. Moreover, publicly available echocardiogram datasets ([21], [17]) typically have only two annotated frames per video, i.e., the end-diastole (ED) and end-systole (ES) frames. In the case of the EchoNet-Dynamic dataset [21], this utilizes less than 1.2% of the available frames when training in a 2D supervised setting. Self-supervised learning (SSL) alleviates this problem. Saeed et al. [24] use contrastive pre-training to provide self-supervision on echocardiograms. Recently, He et al. [8] show that masked autoencoders (MAE) for self-supervised pre-training enable accelerated training and improve accuracy on natural image-based tasks. Feichtenhofer et al. [6] and Tong et al. [28] extend this idea to spatiotemporal masking and show promising results on action recognition. The aforementioned works perform LV segmentation from echocardiogram videos **either by** 1) analyzing frames independently with simple 2D deep learning models **or** 2) performing 2D+time analysis and developing models using complex training schemes. 
In our proposed method, we aim to mimic clinical assessment, where doctors examine multiple frames concurrently, in a simplified approach while achieving state-of-the-art performance. We introduce a novel self-supervised pre-training approach and a loss calculation method for video-based echocardiogram segmentation, specifically designed to handle sparsely annotated frames in the downstream task. Our key contributions are: * We propose a self-supervised temporal masking approach that leverages the vast number of unannotated echocardiogram frames to provide a better network initialization for the downstream LV segmentation task by learning the periodic nature of echocardiograms. * We propose a loss calculation mechanism that allows a video-based segmentation network to learn LV segmentation from sparsely annotated echocardiogram videos without any heartbeat cycle constraint. * We show the compatibility of our approach with the 2D super image and 3D segmentation networks with various encoder backbones. * We demonstrate how our proposed approach outperforms the state-of-the-art in LV segmentation on EchoNet-Dynamic in terms of performance and efficiency through extensive ablation studies. ## 2 Methodology Our proposed method is illustrated in Figure 2. A network utilizes unannotated frames in a pre-training stage and then learns from annotated frames in a weakly-supervised manner. The performance of the proposed method was evaluated with the 3D segmentation and the 2D super image (SI) segmentation [5] approaches, as depicted in Figure 3. The details are described below. **Self-Supervised Temporal Masking.** In the EchoNet-Dynamic [21] dataset, most of the frames are unannotated, so the ability to perform supervised training is limited. To benefit from the vast amount of unlabeled frames, we implement a self-supervised temporal masking algorithm to pre-train our model. As depicted in Figure 2, a clip of an echocardiogram video is retrieved, and a portion of the frames is masked. The model is then pre-trained to reconstruct the masked clip. Through this process, the model learns valuable latent information from the periodic nature of echocardiograms, e.g. the embedded temporal pattern or cardiac rhythm, that benefits the downstream LV segmentation task.

Figure 1: A sequence of an echocardiogram video [21]. The number of frames varies, yet only two are labeled, i.e. the end-diastole (_left-most_) and the end-systole (_right-most_) frame. Annotators draw key points to represent the left ventricular (LV) region. Then, LV segmentation labels are inferred from the given key points.

Figure 2: An illustration of our approach. A video segmentation network is developed to segment the LV on every input echocardiogram frame. The network is pre-trained using a self-supervised temporal masking method and is then fine-tuned on the LV segmentation task with sparse annotations.

More formally, suppose \(V\) is an echocardiogram video with frame size \(H\times W\). From \(V\), we sample a clip \(v\in\mathbb{R}^{H\times W\times F\times 3}\) consisting of \(F\) frames taken with a stride or sampling period of \(T\). Then, we produce a masked clip \(v_{m}\in\mathbb{R}^{H\times W\times F\times 3}\) by randomly choosing \(f\) frames from \(v\) and setting their pixel values to 0. A video network \(\mathcal{G}\) is then pre-trained to reconstruct \(v\) from \(v_{m}\).
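A minimal PyTorch-style sketch of this masking and pre-training step (our own illustration; the model \(\mathcal{G}\), the tensor layout \((C,F,H,W)\), and the function names are assumptions, not the authors' released code):

```python
import torch
import torch.nn.functional as F_nn

def mask_random_frames(clip: torch.Tensor, mask_ratio: float = 0.6) -> torch.Tensor:
    """Zero out a random subset of frames in a clip of shape (C, F, H, W)."""
    _, f, _, _ = clip.shape
    n_masked = int(round(mask_ratio * f))
    idx = torch.randperm(f)[:n_masked]   # frames to hide
    masked = clip.clone()
    masked[:, idx] = 0.0                 # "adjust their pixel values to 0"
    return masked

def pretrain_step(model, clip, optimizer, mask_ratio=0.6):
    """One self-supervised step: reconstruct the full clip v from its masked version v_m."""
    v_m = mask_random_frames(clip, mask_ratio)
    recon = model(v_m)                   # G(v_m), same shape as clip
    loss = F_nn.mse_loss(recon, clip)    # pixel-wise reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```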
The network \(\mathcal{G}\) is optimized by minimizing the mean-squared difference of pixel values between the reference clip \(v\) and the reconstructed clip \(\mathcal{G}(v_{m})\). **LV Segmentation with Sparse Annotation.** The sparse annotation of echocardiogram videos makes LV segmentation challenging, as training a video segmentation model on EchoNet-Dynamic is not trivial. To tackle this issue, inspired by [3], we propose a training strategy to develop a video segmentation network specifically for the LV. As illustrated in Figure 2, the network takes in \(F\) frames and segments the LV on each frame. The loss is then calculated and backpropagated based only on the predictions for frames that have a segmentation label. More formally, let \(\mathcal{G}\) be a video segmentation network with a set of parameters \(\Psi\) which takes in an input echocardiogram clip \(v\in\mathbb{R}^{H\times W\times F\times C}\) and predicts the LV segmentation \(\hat{\mathbf{y}}\in\mathbb{R}^{H\times W\times F}\), where \(F\), \(C\), and \(H\times W\) are the number of frames, the number of channels (here 3), and the frame size, respectively. Also, let \(\mathbf{y}=\{y_{f_{1}},y_{f_{2}},\ldots,y_{f_{n}}\}\) denote the segmentation label of the input clip, where \(y_{f_{i}}\in\mathbb{R}^{H\times W}\) is the \(f_{i}\)-th frame label (\(f_{i}\leq F\)) and \(n\leq F\) is the number of labeled frames. Thus, the total dice loss \(\mathcal{L}_{d}\) can be formulated as: \[\mathcal{L}_{d}(\mathbf{y},\hat{\mathbf{y}})=\sum_{i=1}^{F}\ell_{d}\left(y_{i},\hat{y}_{i}\right)=\underbrace{\sum_{j\in\mathcal{F}_{l}}\ell_{d}\left(y_{j},\hat{y}_{j}\right)}_{\text{labeled (annotated) frames}}+\underbrace{\sum_{k\in\{1,\ldots,F\}\setminus\mathcal{F}_{l}}\ell_{d}\left(y_{k},\hat{y}_{k}\right)}_{\text{unlabeled frames}} \tag{1}\] where \(\ell_{d}\) is the _frame-wise_ dice loss, \(\mathcal{F}_{l}=\{f_{1},\ldots,f_{n}\}\) is the set of labeled frames, and \(y_{k}\) is a dummy label if \(k\in\{1,\ldots,F\}\setminus\mathcal{F}_{l}\) (_unlabeled frames_). The gradient of \(\mathcal{L}_{d}\) w.r.t. a parameter \(\psi\in\Psi\) is given by: \[\frac{\partial\mathcal{L}_{d}}{\partial\psi}(\mathbf{y},\hat{\mathbf{y}})=\sum_{j\in\mathcal{F}_{l}}\frac{\partial\ell_{d}}{\partial\psi}\left(y_{j},\hat{y}_{j}\right)+\sum_{k\in\{1,\ldots,F\}\setminus\mathcal{F}_{l}}\frac{\partial\ell_{d}}{\partial\psi}\left(y_{k},\hat{y}_{k}\right) \tag{2}\] where \(\frac{\partial\ell_{d}}{\partial\psi}\left(y_{k},\hat{y}_{k}\right)\) is set to zero because the \(k\)-th frame is unlabeled. **Since (1)** \(\hat{y}_{j}\in\mathcal{G}\left(v;\,\Psi\right)\), and **(2)** \(\mathcal{G}\) typically consists of shared-weight operators (e.g. convolution and attention), **then** \[\frac{\partial\ell_{d}}{\partial\psi}\left(y_{j},\hat{y}_{j}\right)\in\mathbb{R}\implies\sum_{j\in\mathcal{F}_{l}}\frac{\partial\ell_{d}}{\partial\psi}\left(y_{j},\hat{y}_{j}\right)\in\mathbb{R}\implies\frac{\partial\mathcal{L}_{d}}{\partial\psi}(\mathbf{y},\hat{\mathbf{y}})\in\mathbb{R} \tag{3}\] for all parameters \(\psi\) in \(\Psi\). Thus, although a clip \(v\) is only partially labeled and gradients do not come from unlabeled frames, this framework can still train all the parameters of \(\mathcal{G}\). During training, a clip is randomly extracted around an annotated frame from every video with the specified number of frames \(F\) and sampling period \(T\), which yields more variation and acts as a regularizer.
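A sketch of the loss in Eq. (1), computed over labeled frames only (hypothetical code; the authors' exact dice formulation and label encoding are assumptions):

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Frame-wise soft dice loss for a prediction/label pair of shape (H, W)."""
    inter = (pred * target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def sparse_clip_loss(preds: torch.Tensor, labels: dict) -> torch.Tensor:
    """Sum the dice loss over labeled frames only.

    preds:  (F, H, W) sigmoid outputs, one mask per input frame.
    labels: {frame_index: (H, W) binary mask} for the sparsely annotated frames.
    Unlabeled frames contribute nothing, matching the zeroed terms in Eq. (2).
    """
    total = preds.new_zeros(())
    for f_idx, y in labels.items():
        total = total + dice_loss(preds[f_idx], y)
    return total
```

Because the network weights are shared across frames, the gradient from a single labeled frame still updates every parameter, as argued in Eq. (3).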
In this setup, there is therefore only a segmentation mask for one frame in every clip. To reduce randomness during the evaluation step, a clip is extracted from each video such that an annotated frame lies at the center of the clip. **3D Segmentation Approach.** Echocardiogram videos consist of stacked 2D images, so treating the time axis as the third dimension allows 3D models to segment the LV on an echocardiogram clip. We therefore utilize the 3D U-Net [3] as the architecture. As depicted in Fig. 4, we use a CNN with residual units [15] as the encoder; it has 5 stages whose outputs are passed to the decoder. A residual unit comprises two Conv2D layers, two instance norm layers, two PReLU activation functions, and a skip connection. **2D Super Image Approach.** Unlike the 3D approach, the SI approach addresses the video segmentation problem in a 2D fashion [26]. An echocardiogram video \(v\in\mathbb{R}^{H\times W\times F\times C}\) is rearranged into a single big image \(x\in\mathbb{R}^{\hat{H}\times\hat{W}\times C}\), where \(\hat{H}\) and \(\hat{W}\) are the height and width of the SI respectively. Since the SI works best with a grid layout [5], we set the echocardiogram SI size to \(H\sqrt{F}\times W\sqrt{F}\). Hence, existing techniques for 2D image analysis can be readily utilized to help solve the problem, e.g. state-of-the-art architectures, self-supervised methods, and strong pre-trained models. The 2D U-Net [23] is used as the main architecture with the UniFormer-S [18] as the encoder. We select the UniFormer-S since 1) it leverages the strong properties of convolution and attention, and 2) it is the recent state-of-the-art on EchoNet-Dynamic ejection fraction estimation [20]. In short, the network consists of 4 stages, where the first two stages utilize convolution operators to extract features, and the rest implement multi-head self-attention (MHSA) to learn global contexts. The inductive biases of convolution layers allow the model to learn efficiently, and the MHSA has a large receptive field that is favorable for the SI [5].

Figure 3: The 3D vs. 2D super image segmentation approach. The first approach utilizes a 3D segmentation network, while the second rearranges the echocardiogram clip as a super image and then utilizes a 2D network.

## 3 Experimental Setup Experiments were performed on EchoNet-Dynamic [21], a large-scale echocardiography dataset, using an NVIDIA RTX 6000 GPU with CUDA 11.7 and PyTorch 1.12. **Dataset.** We conducted our experiments on the EchoNet-Dynamic dataset [21]. EchoNet-Dynamic is the largest publicly available dataset of 2D+time echocardiograms of the apical four-chamber (a4c) view of the human heart. The dataset comprises approximately 10,030 heart echocardiogram videos with a fixed frame size of \(112\times 112\). Video length varies from 28 to 1002 frames, yet only two frames per video are annotated (the ED & ES frames). A sample echocardiogram sequence is given in Figure 1. To ensure a fair comparison with reported state-of-the-art methods, we adhered strictly to the organizer's provided split, consisting of 7460 training videos, 1288 validation videos, and 1276 test videos. **Implementation Details.** We pre-trained our video segmentation models for 100 epochs with self-supervision. Each echocardiogram video was randomly sampled on every epoch with a specified number of frames (\(F\)) and a stride or sampling period (\(T\)) to give more variation. We utilized the AdamW optimizer with a 3e-4 learning rate and a 1e-5 weight decay.
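Returning for a moment to the super-image idea from the Methodology, the rearrangement of a clip into a single grid image can be sketched as follows (our own illustration; it assumes \(F\) is a perfect square, matching the grid layout above):

```python
import torch

def to_super_image(clip: torch.Tensor) -> torch.Tensor:
    """Rearrange a clip (F, C, H, W) into a super image (C, H*sqrt(F), W*sqrt(F))."""
    f, c, h, w = clip.shape
    g = int(f ** 0.5)
    assert g * g == f, "the super image assumes a square grid of frames"
    x = clip.view(g, g, c, h, w)       # (g, g, C, H, W): grid rows and columns
    x = x.permute(2, 0, 3, 1, 4)       # (C, g, H, g, W)
    return x.reshape(c, g * h, g * w)  # tile the frames row-major into one image

# Example: 16 frames of 112x112 become one 448x448 super image.
si = to_super_image(torch.randn(16, 3, 112, 112))
assert si.shape == (3, 448, 448)
```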
A set of augmentations was applied to enrich the variation during training, consisting of color jitter, CLAHE, random rotation, and random padded cropping. The model was then fine-tuned on the LV segmentation task with sparse annotations in a weakly-supervised manner for 70 epochs. Every video was sampled twice on every epoch to accommodate the annotated ED and ES frames. The main hyper-parameters were set experimentally.

Figure 4: The 3D U-Net architecture. A residual unit [15] consists of convolutional layers, instance norm layers, PReLU, and a skip connection. Residual Unit [\(C\)] denotes a residual unit with \(C\) feature channels.

## 4 Results **Comparison with the state-of-the-art.** Our method outperforms other approaches on the EchoNet-Dynamic test set, as shown in Table 1. The 3D U-Net achieves a 93.32% overall dice similarity coefficient (DSC), and the SI approach shows on-par performance. Confidence interval (CI) analysis further shows no overlap between the 95% CIs of our methods and those of other state-of-the-art solutions, indicating that our improvements are statistically significant over those methods with a p-value of less than 0.05. The 3D U-Net was trained with 32 consecutively sampled frames, while the SI was trained with 16 frames sampled at every 5th frame. This experiment shows that a video segmentation network trained in a weakly-supervised manner is capable of segmenting the LV with a 3.8x lower computational cost compared to [2].

\begin{table}
\begin{tabular}{l|c|c|c|c|c}
\hline
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{DSC (95\%CI)} & FLOPs & \# Params \\
\cline{2-4}
 & Overall & ES & ED & (G) & (M) \\
\hline
EchoNet & 92.00 (91.87-92.13) & 90.68 (90.55-90.86) & 92.78 (92.61-92.94) & 7.84 & 39.64 \\
nnU-Net [14] & 92.86 (92.74-92.98) & 91.63 (91.43-91.83) & 93.62 (93.48-93.76) & 2.30 & **7.37** \\
SepXception [2] & 92.90 & 91.73 (91.54-91.92) & 93.64 (93.50-93.78) & 4.28 & 55.83 \\
Ours (SI) & 93.31 (93.19-93.43) & 92.26 (92.08-92.44) & **93.95** (93.81-94.09) & (\(\star\)) 2.17 & 24.83 \\
Ours (3D) & **93.32** (93.21-93.43) & **92.29** (92.11-92.47) & **93.95** (93.81-94.09) & (\(\star\)) **1.13** & 18.83 \\
\hline
\end{tabular}
\end{table}
Table 1: Dice similarity coefficient (DSC) on the EchoNet-Dynamic test set. Our approach shows state-of-the-art performance with fewer FLOPs and relatively few parameters. fvcore was utilized to count the FLOPs. Note that we report computational cost compared to [2].

Figure 5: A comparison with other state-of-the-art solutions. Our methods achieve a higher DSC on the EchoNet-Dynamic test set while being more efficient. The bubble size represents the number of parameters.

**Number of Frames and Sampling Period.** The number of frames \(F\) and the sampling period \(T\) play important roles ([20], [31]). A large \(F\) allows a network to retrieve rich temporal information, while increasing \(T\) reduces redundancy between frames. We studied combinations of (\(F\), \(T\)) to find the optimum pair, as provided in Table 2. The (16, 5) combination results in the highest DSC of 93.21% for the SI approach, while (32, 1) gives the best performance for the 3D approach, resulting in a 93.31% DSC. Additionally, all (\(F\), \(T\)) pairs result in better performance compared to [2]. **SSL Temporal Masking.** We conducted an ablation study (Table 3) to find the optimum value of the masking ratio and obtained the best results with 60% masking.
We find that SSL pre-training helps maintain better temporal consistency and improves robustness (Fig. 6). **Different backbones.** An ablation study was performed on different encoders of the segmentation architecture to see how well our approach adapts to model complexity. We implemented ResNet-18 [9], MobileNet-V3 [12], and ViT-B/16 [4] as the encoder of the SI approach. We also tested a smaller version of the 3D U-Net (Fig. 4), which consists of two residual units on every stage (3D U-Net-S).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Approach} & \multirow{2}{*}{\# Frames} & \multicolumn{4}{c|}{Sampling Period} \\
\cline{3-6}
 & & 1 & 2 & 3 & 5 \\
\hline
\multirow{4}{*}{2D Super Image (UniFormer-S)} & 4 & 93.06 & 93.12 & 93.17 & 93.09 \\
 & 9 & 93.11 & 93.14 & 93.15 & 93.13 \\
 & 16 & 93.17 & 93.13 & 93.18 & **93.21** \\
 & 25 & 93.16 & 93.11 & 93.20 & 93.12 \\
\hline
\multirow{2}{*}{3D U-Net} & 16 & 93.23 & 93.25 & 93.11 & 93.04 \\
 & 32 & **93.31** & 93.14 & 93.06 & 92.90 \\
\hline
\end{tabular}
\end{table}
Table 2: An ablation study on the number of frames and the sampling period. During this experiment, the UniFormer-S was pre-trained on ImageNet, and the 3D U-Net was trained from scratch. All reported values are overall DSC (%) scores.

\begin{table}
\begin{tabular}{r|r}
\hline
Masking ratio & Overall DSC (\%) \\
\hline
N/A & 93.19 \\
0.3 & 93.25 \\
0.4 & 93.23 \\
0.5 & 93.21 \\
0.6 & **93.31** \\
0.7 & 93.23 \\
\hline
\end{tabular}
\end{table}
Table 3: An ablation study on the masking ratio during pre-training with the 2D SI approach. N/A denotes no pre-training. The optimum masking ratio is 60%.

As provided in Table 4, the experiment shows that the performance is robust to the choice of encoder backbone. ## 5 Discussions Table 1 shows that, while being more efficient, our video segmentation networks outperform the highest reported DSC on the EchoNet-Dynamic test set. Our networks aggregate both spatial and temporal information by analyzing multiple echocardiogram frames in a single pass. The networks predict an LV segmentation trace for every input frame at once, thus eliminating the redundancy of analyzing the same frames multiple times as in [27] and [31]. In addition, our training pipeline is simple yet effective, easy to implement, and scalable, as it does not require pseudo labels ([29], [30]) or temporal regularization [22]. Compared to ([29], [30]), our proposed approach does not depend on a specific heart stage, thus eliminating the burden of locating the ED and ES frames when creating training data. This also allows us to easily exploit non-ED and non-ES frames for supervision if their corresponding segmentation labels are available. Table 2 highlights the robustness of our approach to the sampling hyperparameters. This allows for a broader design space to meet hardware limitations such as memory and compute power (FLOPs) while still achieving satisfactory segmentation performance. We observed that randomly masking a significant portion (60%) of an echocardiogram clip during SSL pre-training results in the best performance. The masking SSL improves the overall DSC of the SI approach from 93.19% to 93.31%, as reported in Table 3. Further, as shown in Fig. 6, we observe that self-supervision with temporal masking enables the network to maintain better temporal consistency across predictions in a given echocardiogram clip.
We hypothesize that the pre-training stage helps the 3D U-Net model better learn the semantic features that are useful for delineating human heart structures in the a4c view, resulting in more robust predictions. This finding indicates that pre-training with self-supervision remarkably benefits the downstream LV segmentation task. Hence, self-supervised learning on vast amounts of echocardiogram videos is a promising way to obtain strong pre-trained models that generalize well in downstream echocardiography-related clinical tasks.

\begin{table}
\begin{tabular}{l|l|c|c|c|c}
\hline
\multirow{2}{*}{Approach (\# Frames, Period)} & \multirow{2}{*}{Backbone} & \multirow{2}{*}{DSC (\%) Overall} & \multirow{2}{*}{Params (M)} & \multicolumn{2}{c}{FLOPs (G)} \\
\cline{5-6}
 & & & & Single pass & One frame \\
\hline
\multirow{3}{*}{Super Image (SI) (16, 5)} & MobileNetV3 & 93.16 & **6.69** & **12.46** & **0.78** \\
 & ResNet-18 & 93.23 & 14.33 & 21.75 & 1.36 \\
 & ViT-B/16 & 92.98 & 89.10 & 120.20 & 7.51 \\
\hline
3D (32, 1) & 3D U-Net-S & **93.27** & 11.26 & 27.34 & 0.85 \\
\hline
\end{tabular}
\end{table}
Table 4: An ablation study on various encoder backbones. Our approach is robust to the selection of backbone complexity. The SI backbones were pre-trained on the ImageNet dataset, while the 3D U-Net-S was trained from scratch.

We have shown that both the SI and 3D methods trained using sparse annotations are capable of accurately segmenting the left ventricle in echocardiogram videos. The 3D U-Net performs slightly better than the SI network with the UniFormer-S backbone. However, designing a backbone for the 3D U-Net is not straightforward, since it requires tedious hyperparameter tuning. On the other hand, there are plenty of optimized models that can be utilized as a backbone for the SI approach. For instance, MobileNetV3, with only 6.69 M parameters, gives on-par performance with a 93.16% overall DSC, as can be seen in Table 4. Models pre-trained on ImageNet can also help generalize better when only a small amount of data is available. Moreover, many 2D self-supervised learning algorithms can also be employed to further improve performance. ## 6 Conclusion We propose a novel approach to tackle the LV segmentation task on echocardiogram videos. Our method outperforms other works on the EchoNet-Dynamic test set. The method utilizes a video segmentation network that efficiently combines both spatial and temporal information. The network is pre-trained on a reconstruction task and then trained with sparse annotations to predict the LV. Extensive experiments were performed to show the superiority of the proposed approach both quantitatively and qualitatively. We expect this work to motivate researchers to further explore video segmentation approaches for the LV instead of working on frame-by-frame prediction. We limit our experiments in this work to self-supervision using temporal masking only; however, there remains scope to improve the self-supervision by identifying the optimum masking scheme among temporal, random spatiotemporal, space-wise, and block-wise masking. Further, we aim to validate the cross-dataset generalizability of our approach using other publicly available echocardiogram datasets.
Echocardiography has become an indispensable clinical imaging modality for general heart health assessment. From calculating biomarkers such as the ejection fraction to estimating the probability of a patient's heart failure, accurate segmentation of the heart structures allows doctors to assess cardiac conditions and to develop treatments with greater precision and accuracy. However, accurate and reliable left ventricular segmentation is time-consuming and can be challenging for several reasons. For this reason, clinicians commonly perform the diagnosis by segmenting the left ventricle on only two specific echocardiogram frames. Since the scope of this manual segmentation is limited, a challenge arises in developing automatic left ventricular segmentation with high temporal consistency. To address this challenge, this work introduces SimLVSeg, an innovative paradigm that leverages video-based networks for echocardiogram videos characterized by sparse annotations.
2309.14542
Trace Maps on Rigid Stein Spaces
We provide a relative version of the trace map from the work of Beyer, which can be associated to any finite étale morphism $X \to Y$ of smooth rigid Stein spaces and which then relates the Serre duality on $X$ with the Serre duality on $Y$. Furthermore, we consider the behaviour of any rigid Stein space under (completed) base change to any complete extension field and deduce a commutative diagram relating Serre duality over the base field with the Serre duality over the extension field.
Milan Malčić
2023-09-25T21:37:11
http://arxiv.org/abs/2309.14542v1
# Trace maps on rigid Stein spaces ###### Abstract. We provide a _relative_ version of the trace map from [1], which can be associated to any finite étale morphism \(X\to Y\) of smooth rigid Stein spaces and which then relates the Serre duality on \(X\) with the Serre duality on \(Y\). Furthermore, we consider the behaviour of any rigid Stein space under (completed) base change to any complete extension field and deduce a commutative diagram relating Serre duality over the base field with the Serre duality over the extension field. ###### Contents * 1 A generalized version of a connectedness result by Bosch * 2 Review of Beyer's residue maps and trace maps * 3 The theorem on the relative trace map * 4 Base-changing Beyer's trace map ## Introduction Serre duality for rigid Stein spaces was first established in [1], but Peter Beyer subsequently provided another proof appearing in his dissertation [1, 2, 3] and outlined it in [1, Remark 5.1.5]. Given a smooth \(d\)-dimensional rigid Stein space \(X\) over a non-archimedean field \(K\), Beyer's approach allows for an explicit description of the trace map \[t_{X}\colon H^{d}_{c}(X,\omega_{X})\to K\] that makes the canonical bundle \(\omega_{X}\) into a dualizing sheaf. Here \(H^{*}_{c}(X,-)\) denotes the compactly supported cohomology of the rigid space \(X\) with values in coherent sheaves, defined as in [1, §1.1] and [1, §1] and topologized as in [1, §1.3] and [2, 1.6]. Our first goal is to provide a relative version of the trace map for rigid Stein spaces. In fact, if \(\alpha\colon X\to Y\) is a finite étale morphism of smooth connected \(d\)-dimensional Stein spaces over \(K\), there is a natural candidate \[t_{\alpha}\colon\alpha_{*}\omega_{X}\to\omega_{Y}\] that appears in [2, p. 48] and is already called "the relative trace map" there. Locally, for an affinoid \(\operatorname{Sp}(A)\subseteq Y\) and its preimage \(\operatorname{Sp}(B)\subseteq X\), \(t_{\alpha}\) is defined by \[\Omega^{d}_{B/K}=\Omega^{d}_{A/K}\otimes_{A}B\to\Omega^{d}_{A/K},\qquad\omega\otimes b\mapsto\operatorname{Tr}_{B/A}(b)\cdot\omega,\] where \(\Omega^{d}_{A/K}:=\wedge^{d}\Omega^{1}_{A/K}\) and \(\Omega^{1}_{A/K}\) denotes the "universally finite" differential module (characterized by being finitely generated over \(A\) and universal for derivations into finitely generated \(A\)-modules), and \(\operatorname{Tr}_{B/A}\) is the trace of the finitely generated projective \(A\)-module \(B\). We prove that \(t_{\alpha}\) indeed deserves to be called the relative trace map: **Theorem 1** (Theorem 3.4, Proposition 3.10).: _Let \(\alpha\colon X\to Y\) be a finite étale morphism of smooth connected \(d\)-dimensional Stein spaces over \(K\). Then the trace maps of \(X\) and \(Y\) are compatible via \(t_{\alpha}\): under the canonical identification \(H^{d}_{c}(X,\omega_{X})=H^{d}_{c}(Y,\alpha_{*}\omega_{X})\), one has_ \[t_{X}=t_{Y}\circ H^{d}_{c}(t_{\alpha}).\] _In particular, letting \(\operatorname{Ext}^{i}_{X}(\alpha^{*}\mathcal{G},\omega_{X})\to\operatorname{Ext}^{i}_{Y}(\mathcal{G},\omega_{Y})\) denote the morphism constructed from \(t_{\alpha}\) and the adjunction morphism \(\mathcal{G}\to\alpha_{*}\alpha^{*}\mathcal{G}\) as in Proposition 3.10, one obtains a commutative diagram of Serre duality pairings, relating the pairing of \(H^{d-i}_{c}(X,\alpha^{*}\mathcal{G})\) with \(\operatorname{Ext}^{i}_{X}(\alpha^{*}\mathcal{G},\omega_{X})\) to the pairing of \(H^{d-i}_{c}(Y,\mathcal{G})\) with \(\operatorname{Ext}^{i}_{Y}(\mathcal{G},\omega_{Y})\), for every coherent sheaf \(\mathcal{G}\) on \(Y\) and all \(i\geq 0\)._ This theorem is also of interest in the study of reciprocity laws for \((\varphi_{L},\Gamma_{L})\)-modules over Lubin-Tate extensions as done by Schneider and Venjakob. Indeed, in [11, §4.2], Theorem 1 is used as a conceptual way to obtain functorial properties of pairings arising from Serre duality on certain specific rigid Stein spaces.
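As a toy illustration of this local description (our own example, assuming \(\operatorname{char}K\neq 2\) and a scalar \(c\in K\) with \(|c|>1\), so that \(X-c\) is a unit in \(K\langle X\rangle\)): take \(A=K\langle X\rangle\) and \(B=A[Y]/(Y^{2}-(X-c))\). Then \(\operatorname{Sp}(B)\to\operatorname{Sp}(A)\) is finite étale of degree \(2\), and for \(b=b_{0}+b_{1}Y\) with \(b_{0},b_{1}\in A\) one computes \(\operatorname{Tr}_{B/A}(b)=2b_{0}\), so \[t_{\alpha}\bigl((b_{0}+b_{1}Y)\cdot dX\bigr)=2b_{0}\,dX.\] In particular, composing \(t_{\alpha}\) with the map \(\omega\mapsto 1\otimes\omega\) gives multiplication by the degree \(2\).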
We mention that the recent work of Abe and Lazda [1] constructs a trace map on proper pushforwards of analytic adic spaces, and that some of their results can be related to our Theorem 1. Whereas [1] develops a general machinery, our constructions are rather explicit, and it would be an interesting project to fully compare these two approaches. The proof of Theorem 1 builds on the technique of investigating \(t_{X}\) in the special case of so-called _special affinoid wide-open spaces_ (which we review in §2.1 below) via a relation between algebraic local cohomology and compactly supported rigid cohomology established in [10]. For this purpose we also prove a generalized version of Bosch's theorem [11, Satz 7.1] on the connectedness of formal fibers, which closes an argumentative gap in [10]. Namely, there is a crucial technical lemma [10, Lemma 4.2.2] underlying Beyer's arguments, which asserts that special affinoid wide-open spaces can be characterized in two equivalent ways, and the proof of this lemma relies on Bosch's connectedness result. However, Bosch's result contains the assumption that the affinoid algebra under consideration is _distinguished_ (see Definition 1.1), and this assumption is not satisfied in the general setting of [10], resulting in an argumentative gap. The generalized connectedness result is proved in Theorem 1.7 for affinoid algebras that become distinguished after a base change to a finite Galois extension. This indeed bridges the gap in the setting of smooth special affinoid wide-open spaces, since Lemma 1.3 proves that the relevant affinoid algebras can be made distinguished after a base change to a finite Galois extension. We explain this further in §2.1. Our second goal is to show that Beyer's trace map behaves well under (completed) base change to any complete extension field \(K^{\prime}/K\). More precisely, for any separated rigid space \(X\) over \(K\), let \(X^{\prime}:=X\mathbin{\widehat{\otimes}}_{K}K^{\prime}\) denote the base change of \(X\) to \(K^{\prime}\) and let \(\mathcal{F}\rightsquigarrow\mathcal{F}^{\prime}\) be the exact "pullback" functor from coherent sheaves on \(X\) to coherent sheaves on \(X^{\prime}\). For a Stein space \(X\) over \(K\), we construct comparison maps \(H^{j}_{c}(X,\mathcal{F})\otimes_{K}K^{\prime}\to H^{j}_{c}(X^{\prime},\mathcal{F}^{\prime})\) in Section 4 and prove: **Theorem 2** (Corollary 4.12).: _Let \(X\) be a smooth rigid Stein \(K\)-space of dimension \(d\). Then, for every coherent sheaf \(\mathcal{F}\) on \(X\), the diagram formed from these comparison maps and the two Serre duality pairings, relating Serre duality over \(K\) with Serre duality over \(K^{\prime}\), commutes for all \(i\geq 0\)._ This result is likewise relevant in the context of [11, §4.2], where it is used to argue that the constructions carried out there are compatible with base change. **Acknowledgements.** This article is based on the author's Ph.D. thesis [15] written under the supervision of Otmar Venjakob, and we thank him for his advice and many helpful discussions. We also thank Katharina Hübner for her valuable remarks on [15] and for her careful reading of this manuscript. Finally, Section 1 has resulted from an email correspondence with Werner Lütkebohmert and we thank him for his valuable input. This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 _Geometry and Arithmetic of Uniformized Structures_, project number 444845124. **Notation and conventions.** We use the language of [1], i.e. we work with rigid spaces in the sense of Tate. All rigid spaces are assumed to be separated.
(Note that this assumption is superfluous for Stein spaces, since they are automatically separated, see [14, Satz 3.4].) Recall that, if \(\{X_{i}\}_{i\in\Sigma}\) is a subset of the set of connected components of a rigid space \(X\), then \(\cup_{i\in\Sigma}X_{i}\) is admissible open and \(\{X_{i}\}_{i\in\Sigma}\) is an admissible covering thereof. Moreover, the \(X_{i}\), \(i\in\Sigma\), are precisely the connected components of \(\cup_{i\in\Sigma}X_{i}\). The admissible opens of \(X\) that arise in this way are precisely the admissible opens of \(X\) that are also analytic sets, i.e. "open and closed". We refer to such subsets as _clopen_. Let \(K\) be a complete, nontrivially valued, perfect non-archimedean field, \(\mathfrak{o}_{K}\) the ring of integers in \(K\) and \(k\) the residue field of \(K\). Let \(K\langle\xi_{1},\ldots,\xi_{n}\rangle\) be the free Tate algebra in \(n\) variables, for which we also write \(T_{n}\) or \(T_{n}(K)\) when we wish to emphasize the base field. We set \(\mathbb{D}^{n}:=\mathbb{D}^{n}_{K}:=\operatorname{Sp}K\langle\xi_{1},\ldots,\xi_{n}\rangle\) and \(\mathring{\mathbb{D}}^{n}:=\mathring{\mathbb{D}}^{n}_{K}:=\{x\in\mathbb{D}^{n}\colon|\xi_{i}(x)|<1\text{ for all }i=1,\ldots,n\}\). The assumption of \(K\) being perfect goes back to [1], and is only needed for the residue map on local cohomology [1, Definition 4.2.7] (which we refer to in §2.3 here) to exist for any given point. Recall that the reduction of an affinoid algebra \(R\) is \(\widetilde{R}=\mathring{R}/\check{R}\), where \(\mathring{R}=\{f\in R\colon|f|_{\sup}\leq 1\}\) is the \(\mathfrak{o}_{K}\)-algebra of all power-bounded elements in \(R\) and \(\check{R}=\{f\in R\colon|f|_{\sup}<1\}\) is the \(\mathring{R}\)-ideal of all topologically nilpotent elements in \(R\). The _reduction_ \(\widetilde{Z}\) of the affinoid space \(Z:=\operatorname{Sp}(R)\) is the affine algebraic variety given by the maximal spectrum of \(\widetilde{R}\). There is a functorial reduction map \(p\colon Z\to\widetilde{Z}\) defined as in [1, §7.1.5], which is surjective by [1, 7.1.5/Theorem 4]. For the reduction of a point \(z\in Z\), we often use the notation \(p(z)\) and \(\widetilde{z}\) interchangeably. For \(z\in Z\), the fibers \(Z_{+}(z):=p^{-1}(p(z))\) of the reduction map \(p\colon Z\to\widetilde{Z}\) are called _the formal fibers of \(Z\)_, the terminology and notation being as in [10]. All rings are assumed commutative and unital. Given any field \(F\) and any ring \(A\), we say that a ring map \(F\to A\) is _étale_ if and only if \(A\) is isomorphic as an \(F\)-algebra to a finite product of finite separable field extensions of \(F\), see [10, Tag 00U3]. A torsion-free morphism \(A\to B\) from an integral domain \(A\) into a ring \(B\) is called _separable_ if the induced map \(Q(A)\hookrightarrow Q(B)\) of the fraction field \(Q(A)\) of \(A\) into the total ring of fractions \(Q(B)\) of \(B\) is étale. Accordingly, a morphism \(\operatorname{Sp}(S)\to\operatorname{Sp}(R)\) of affinoid spaces is called separable if the ring morphism \(R\to S\) is separable. If \(A\) is a ring and \(\mathfrak{a}\) an ideal of \(A\), we write \(A^{\wedge\mathfrak{a}}\) for the \(\mathfrak{a}\)-adic completion of \(A\). If \(A\) is local, we write \(\widehat{A}\) for the completion with respect to the maximal ideal. If \(M\) is an \(A\)-module, we let \(H^{j}_{\mathfrak{a}}(M)\) denote the \(j\)-th local cohomology of \(M\) with support in \(\mathfrak{a}\) (see, for instance, [10]). It is computed by deriving the left-exact functor \(\Gamma_{\mathfrak{a}}(M):=\{m\in M\colon\mathfrak{a}^{t}m=0\text{ for some }t\in\mathbb{N}\}\).
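To fix ideas about the reduction map and formal fibers, consider a standard example (our illustration, not taken from the sources above): for \(Z=\mathbb{D}^{1}=\operatorname{Sp}K\langle\xi\rangle\) one has \(\mathring{T}_{1}=\mathfrak{o}_{K}\langle\xi\rangle\) and \(\widetilde{T}_{1}=k[\xi]\), so \(\widetilde{Z}=\mathbb{A}^{1}_{k}\). The formal fiber over the origin is \[Z_{+}=p^{-1}(0)=\{z\in\mathbb{D}^{1}\colon|\xi(z)|<1\}=\mathring{\mathbb{D}}^{1},\] an admissible open subset which is not affinoid.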
## 1. A generalized version of a connectedness result by Bosch ### Distinguished affinoid algebras We recall the following notion from [1, 6.4.3/Definition 2]: **Definition 1.1**.: Let \(R\) be a \(K\)-affinoid algebra. A surjective morphism \(\alpha\colon K\langle\xi_{1},\dots,\xi_{m}\rangle\twoheadrightarrow R\) is called _distinguished_ if the residue norm \(|\cdot|_{\alpha}\) coincides with the supremum semi-norm \(|\cdot|_{\sup}\) on \(R\). A \(K\)-affinoid algebra \(R\) is called _distinguished_ if for some \(m\geq 0\) it admits a distinguished \(\alpha\colon K\langle\xi_{1},\dots,\xi_{m}\rangle\twoheadrightarrow R\). **Definition 1.2** ([1, p. 138]).: A \(K\)-affinoid algebra \(R\) is called _absolutely reduced_ over \(K\) if, for every complete field extension \(K^{\prime}\) of \(K\), the algebra \(R\,\widehat{\otimes}_{K}\,K^{\prime}\) is reduced. **Lemma 1.3**.: _Let \(R\) be an absolutely reduced \(K\)-affinoid algebra. Then there exists a finite Galois field extension \(K^{\prime}/K\) such that the algebra \(R\otimes_{K}K^{\prime}\) is distinguished._ Proof.: In the proof of [1, Lemma 2.7], Bosch constructs a finite field extension \(K_{1}/K\) such that the algebra \(R\otimes_{K}K_{1}\) is distinguished. (Note that \(R\otimes_{K}K_{1}=R\,\widehat{\otimes}_{K}\,K_{1}\) since \(K_{1}\) is finite over \(K\).) The field \(K_{1}\) is, in the notation of loc. cit., constructed by adjoining to \(K\) certain algebraic elements \(c_{i\nu}\), \(i=1,\dots,n\), \(\nu\in\mathbb{N}_{0}^{m}\), only finitely many of which are non-zero. The \(c_{i\nu}\) are obtained by using the density of the algebraic closure of \(K\) in its completion to approximate certain elements \(c^{\prime}_{i\nu}\) of the completion. But, by using the density of the separable algebraic closure of \(K\) [1, 3.4.1/Proposition 6], we can assume that the \(c_{i\nu}\) are separable algebraic over \(K\), and we set \(K^{\prime}\) to be the Galois closure of the finite separable extension \(K_{1}\). Then the rest of the argument in the proof of [1, Lemma 2.7] can be applied verbatim to show that \(R\otimes_{K}K^{\prime}\) is distinguished. ### Behaviour of formal fibers under base change Recall that, for an affinoid space \(Z\) and \(z\in Z\), the fibers \(Z_{+}(z):=p^{-1}(p(z))\) of the reduction map \(p\colon Z\to\widetilde{Z}\) are called _the formal fibers_. **Proposition 1.4**.: _Let \(R\) be a \(K\)-affinoid algebra and \(K^{\prime}/K\) a finite Galois extension. Consider \(S:=R\otimes_{K}K^{\prime}\). Let \(\iota\colon R\to S\) be the canonical inclusion and_ \[\phi\colon Z^{\prime}=\operatorname{Sp}(S)\to Z=\operatorname{Sp}(R)\] _the associated morphism of rigid spaces over \(K\), which fits into the commutative diagram (1.2.1). Then the reduction map \(Z^{\prime}\to\widetilde{Z}^{\prime}\) is fiberwise surjective in the sense that for every \(z\in Z\), the induced map_ \[\phi^{-1}(z)\to\widetilde{\phi}^{-1}(\widetilde{z})\] _is surjective._ Proof.: The morphism \(\phi\) is finite, so \(\phi^{-1}(z)\) is a finite set, say \(\phi^{-1}(z)=\{z^{\prime}_{1},\dots,z^{\prime}_{n}\}\). Then \(\widetilde{\phi}\) is also finite by [1, 6.3.5/Theorem 1], so the complement \(\widetilde{\phi}^{-1}(\widetilde{z})\setminus\{\widetilde{z}^{\prime}_{1},\dots,\widetilde{z}^{\prime}_{n}\}\) is a finite set and we have to show that it is in fact empty.
Suppose that it is non-empty, say \[\widetilde{\phi}^{-1}(\widetilde{z})\setminus\{\widetilde{z}^{\prime}_{1},\dots,\widetilde{z}^{\prime}_{n}\}=\{\widetilde{z}^{\prime}_{n+1},\dots,\widetilde{z}^{\prime}_{s}\}\] for some elements \(\widetilde{z}^{\prime}_{n+1},\ldots,\widetilde{z}^{\prime}_{s}\in\widetilde{Z}^{\prime}\) with lifts \(z^{\prime}_{n+1},\ldots,z^{\prime}_{s}\in Z^{\prime}\). Based upon this assumption, we will construct an \(f\in\mathring{R}\) such that \[|f(z)|=1\quad\text{ and }\quad|f(\phi(z^{\prime}_{n+1}))|<1.\] But then, by [3, 7.1.5/Proposition 2], \(|f(z)|=1\) means that \(\widetilde{f}(\widetilde{z})\neq 0\), whereas \(|f(\phi(z^{\prime}_{n+1}))|<1\) means that \(\widetilde{f}(\widetilde{z})=0\) since \(\phi(z^{\prime}_{n+1})\) also reduces to \(\widetilde{z}\). Thus we arrive at a contradiction and the assertion is proved. It remains to construct such an \(f\in\mathring{R}\). For this, we first choose a \(\widetilde{g}\in\widetilde{S}\) such that \[\widetilde{g}(\widetilde{z}^{\prime}_{j})=1\text{ for }j=1,\ldots,n\quad\text{ and }\quad\widetilde{g}(\widetilde{z}^{\prime}_{n+1})=0. \tag{1.2.2}\] This is possible since \(\widetilde{S}\to\prod_{j=1}^{n+1}\widetilde{S}/\mathfrak{m}_{\widetilde{z}^{\prime}_{j}}\) is surjective by the Chinese Remainder Theorem. Then we choose a lift \(g\in\mathring{S}\) of \(\widetilde{g}\). Note that (1.2.2) then yields \[|g(z^{\prime}_{j})|=1\text{ for }j=1,\ldots,n\quad\text{ and }\quad|g(z^{\prime}_{n+1})|<1 \tag{1.2.3}\] by [3, 7.1.5/Proposition 2]. Next, the Galois group \(G=\operatorname{Gal}(K^{\prime}/K)\) acts in an obvious way on \(R\otimes_{K}K^{\prime}=S\) by \(R\)-algebra homomorphisms, and it is easy to prove that the fixed elements satisfy \(S^{G}=R\). Therefore, \[f:=\prod_{\sigma\in G}\sigma(g)\in R.\] Moreover, any morphism \(S\to S\) is contractive with respect to \(|\cdot|_{\sup}\) by [3, 3.8.1/Lemma 4], so in particular we have \(|\sigma(g)|_{\sup}\leq|g|_{\sup}\leq 1\) for all \(\sigma\in G\). Hence \[|f|_{\sup}\leq\prod_{\sigma\in G}|\sigma(g)|_{\sup}\leq 1. \tag{1.2.4}\] Since \(\iota\colon R\to S\) is finite, it is an isometry with respect to \(|\cdot|_{\sup}\) by [3, 3.8.1/Lemma 6], i.e. the \(|\cdot|_{\sup}\) of \(S\) restricts to the \(|\cdot|_{\sup}\) of \(R\). Thus the inequality (1.2.4) shows that in fact \(f\in\mathring{R}\). Next we claim that \[|\sigma(g)(z^{\prime}_{1})|=1\quad\text{ for all }\sigma\in G. \tag{1.2.5}\] To see this, note that each \(\sigma\in G\) permutes the fiber \(\{\mathfrak{m}_{z^{\prime}_{1}},\ldots,\mathfrak{m}_{z^{\prime}_{n}}\}\) over \(\mathfrak{m}_{z}\), so \(\sigma^{-1}(\mathfrak{m}_{z^{\prime}_{1}})=\mathfrak{m}_{z^{\prime}_{j}}\) for some \(j\in\{1,\ldots,n\}\), and thus \(\sigma\) induces an isomorphism \(\sigma\colon S/\mathfrak{m}_{z^{\prime}_{j}}\xrightarrow{\sim}S/\mathfrak{m}_{z^{\prime}_{1}}\) mapping \(g\bmod\mathfrak{m}_{z^{\prime}_{j}}\) to \(\sigma(g)\bmod\mathfrak{m}_{z^{\prime}_{1}}\), which means that \(|g(z^{\prime}_{j})|=|\sigma(g)(z^{\prime}_{1})|\). Since \(|g(z^{\prime}_{j})|=1\) by (1.2.3), this proves (1.2.5). Finally, we compute \[|f(z)|=|f(\phi(z^{\prime}_{1}))|=|\iota(f)(z^{\prime}_{1})|=\Bigl|\prod_{\sigma}\sigma(g)(z^{\prime}_{1})\Bigr|=\prod_{\sigma}|\sigma(g)(z^{\prime}_{1})|=\prod_{\sigma}1=1\] and similarly \[|f(\phi(z^{\prime}_{n+1}))|=\prod_{\sigma}|\sigma(g)(z^{\prime}_{n+1})|=|g(z^{\prime}_{n+1})|\cdot\prod_{\sigma\neq\operatorname{id}}|\sigma(g)(z^{\prime}_{n+1})|<1\] since \(|g(z^{\prime}_{n+1})|<1\) and \(|\sigma(g)(z^{\prime}_{n+1})|\leq 1\).
**Corollary 1.5**.: _Let \(Z\) be a \(K\)-affinoid space and \(K^{\prime}/K\) a finite Galois extension. Consider the base change \(Z^{\prime}:=Z\otimes_{K}K^{\prime}\) of \(Z\) to \(K^{\prime}\) and the associated morphism \(\phi\colon Z^{\prime}\to Z\) of rigid spaces. Let \(\{z^{\prime}_{1},\ldots,z^{\prime}_{n}\}=\phi^{-1}(z)\). Then we have the following relation between formal fibers:_ \[Z_{+}(z)=\bigcup_{i=1}^{n}\phi(Z^{\prime}_{+}(z^{\prime}_{i})). \tag{1.2.6}\] Proof.: The inclusion "\(\supseteq\)" in (1.2.6) is clear due to the commutativity of the diagram (1.2.1). To show the reverse inclusion, let \(y\in Z_{+}(z)\) and take a preimage \(y^{\prime}\in Z^{\prime}\) under \(\phi\). We need to show that \(y^{\prime}\in Z^{\prime}_{+}(z^{\prime}_{i})\) for some \(i\). By the commutativity of (1.2.1) and since \(y\in Z_{+}(z)\), we find that \(\widetilde{\phi}(\widetilde{y}^{\prime})=\widetilde{y}=\widetilde{z}\), so \(\widetilde{y}^{\prime}\in\widetilde{\phi}^{-1}(\widetilde{z})\). Now \(\widetilde{\phi}^{-1}(\widetilde{z})=\{\widetilde{z}^{\prime}_{1},\ldots,\widetilde{z}^{\prime}_{n}\}\) by Proposition 1.4, whence \(\widetilde{y}^{\prime}=\widetilde{z}^{\prime}_{i}\) for some \(i\), i.e. \(y^{\prime}\in Z^{\prime}_{+}(z^{\prime}_{i})\). ### The generalized connectedness result If \(R\) is a \(K\)-affinoid algebra, then \(R\) is Noetherian and hence contains only finitely many minimal prime ideals, say \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{s}\). As in [10, p. 6], we say that \(R\) _has pure dimension_ (or _is equidimensional_) if \(\dim R/\mathfrak{p}_{1}=\dim R/\mathfrak{p}_{2}=\ldots=\dim R/\mathfrak{p}_{s}\), or, equivalently, \(\dim R=\dim R/\mathfrak{p}_{1}=\dim R/\mathfrak{p}_{2}=\ldots=\dim R/\mathfrak{p}_{s}\). The following Theorem 1.6, which is due to Bosch, is an important technical result concerning the connectedness of formal fibers: **Theorem 1.6** ([10, 11]).: _Let \(R\) be a distinguished \(K\)-affinoid algebra which has pure dimension and let \(Z=\operatorname{Sp}(R)\). Then, for every \(z\in Z\), the formal fiber \(Z_{+}(z)=p^{-1}(p(z))\) of the reduction map \(p\colon Z\to\widetilde{Z}\) is connected._ We generalize Bosch's theorem to the case of a not necessarily distinguished affinoid algebra: **Theorem 1.7**.: _Let \(R\) be a \(K\)-affinoid algebra which has pure dimension and let \(Z=\operatorname{Sp}(R)\). Suppose that there exists a finite Galois extension \(K^{\prime}/K\) such that \(S:=R\otimes_{K}K^{\prime}\) is distinguished. Then, for every \(z\in Z\), the formal fiber \(Z_{+}(z)=p^{-1}(p(z))\) of the reduction map \(p\colon Z\to\widetilde{Z}\) is connected._ We postpone the proof for a moment to note that the condition regarding the existence of \(K^{\prime}\) is satisfied whenever \(R\) is absolutely reduced, by Lemma 1.3. In particular, since \(R\) is absolutely reduced whenever \(Z=\operatorname{Sp}(R)\) is smooth, we deduce: **Corollary 1.8**.: _Let \(R\) be a \(K\)-affinoid algebra such that \(Z=\operatorname{Sp}(R)\) is smooth and connected. Then, for every \(z\in Z\), the formal fiber \(Z_{+}(z)=p^{-1}(p(z))\) of the reduction map \(p\colon Z\to\widetilde{Z}\) is connected._ Proof of Theorem 1.7.: Let \(\iota\colon R\to S\) be the canonical inclusion and \(\phi\colon Z^{\prime}=\operatorname{Sp}(S)\to Z=\operatorname{Sp}(R)\) the associated morphism of rigid spaces, which fits into the commutative diagram (1.3.1). The morphism \(\iota\colon R\to S\) is finite flat and injective, so \(\phi\) is finite flat and surjective.
Since the morphism \(\phi\) is finite, \(\phi^{-1}(z)\) is a finite set, say \[\phi^{-1}(z)=\{z^{\prime}_{1},\ldots,z^{\prime}_{n}\}.\] Since \(R\) has pure dimension, the base change \(S=R\otimes_{K}K^{\prime}\) also has pure dimension by [10, Lemma 2.5]. Because \(S\) is moreover distinguished, the formal fibers \(Z^{\prime}_{+}(z^{\prime}_{i}),i=1,\ldots,n\) are connected by Theorem 1.6. Being a flat map between quasi-compact rigid \(K\)-spaces, \(\phi\) is open by [10, Corollary 5.11], so \(\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\) is open in \(Z\) and hence a rigid (sub)space. In particular, the restriction \(Z^{\prime}_{+}(z^{\prime}_{i})\twoheadrightarrow\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\) is a surjective map of rigid spaces whose domain is connected, whence the codomain \(\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\) is also connected. On the other hand, \(\phi\) is finite and hence proper, so \(\phi\) maps closed analytic subsets to closed analytic subsets by [13, Satz 4.1 and its proof], which is why \(\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\) is clopen in \(Z\). Due to the commutativity of the diagram (1.3.1) above, \(\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\) is contained in \(Z_{+}(z)\), i.e. we can regard it as a (clopen) subset of \(Z_{+}(z)\). Therefore, being clopen and connected, each \(\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\) is a connected component of \(Z_{+}(z)\). On the other hand, \[Z_{+}(z)=\bigcup_{i=1}^{n}\phi(Z^{\prime}_{+}(z^{\prime}_{i})) \tag{1.3.2}\] by Corollary 1.5. Next we will show that \(\phi(Z^{\prime}_{+}(z^{\prime}_{i}))=\phi(Z^{\prime}_{+}(z^{\prime}_{1}))\) for all \(i=1,\ldots,n\), which then implies that \(Z_{+}(z)=\phi(Z^{\prime}_{+}(z^{\prime}_{1}))\) due to (1.3.2) and thus completes the proof of the theorem. Since \(\phi(z^{\prime}_{i})=z=\phi(z^{\prime}_{1})\), we conclude that \(z\in\phi(Z^{\prime}_{+}(z^{\prime}_{1}))\cap\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\), so \(\phi(Z^{\prime}_{+}(z^{\prime}_{1}))\cap\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\neq\varnothing\) for all \(i\), which means that the connected components \(\phi(Z^{\prime}_{+}(z^{\prime}_{1}))\) and \(\phi(Z^{\prime}_{+}(z^{\prime}_{i}))\) must coincide. ## 2. Review of Beyer's residue maps and trace maps In this section we summarize Beyer's main results and in particular his construction of the trace map yielding Serre duality. He first constructs the trace map in the special case of so-called _special affinoid wide-open spaces_ (which we review in Subsection 2.1 below), then he uses the fact that a Stein space can be exhausted by subspaces of this type. To prove that the trace maps in the covering are compatible and thus glue to a global trace map, he uses a relation to the local cohomology groups of the underlying affine schemes (which are defined as in, say, [10, §3.1] or, more generally, in [12]). This relation to local cohomology is reviewed in Subsection 2.4 below and used to prove the main result of Section 3, i.e. Theorem 1 from the introduction. ### Special affinoid wide-open spaces **Definition 2.1** ([10, Definition 4.2.1]).: Let \(Z\) be an affinoid space. A subset \(\mathring{W}\subseteq Z\) is called a _special affinoid wide-open space_ if \(\mathring{W}\) is the preimage of a finite set of points under the reduction map \(p\colon Z\to\widetilde{Z}\).
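For instance (a standard example): for \(Z=\mathbb{D}^{d}\) we have \(\widetilde{Z}=\mathbb{A}^{d}_{k}\), and the preimage of the single point \(0\in\mathbb{A}^{d}_{k}\) under the reduction map is \(p^{-1}(0)=\mathring{\mathbb{D}}^{d}\); hence the open unit polydisc is a special affinoid wide-open subspace of \(\mathbb{D}^{d}\), with \(\pi=\operatorname{id}\) serving as an associated morphism in the sense of Lemma 2.2 below.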
There is a gap in the proof of the crucial result [10, Lemma 4.2.2] on equivalent characterizations of special affinoid wide-opens: Bosch's theorem on the connectedness of formal fibers of distinguished affinoid algebras (Theorem 1.6) is applied to a not necessarily distinguished affinoid algebra. Our generalization of Bosch's theorem (Corollary 1.8) can be used to remedy this gap, as we explain in the proof of Lemma 2.2 below. **Lemma 2.2**.: _Let \(Z=\operatorname{Sp}(R)\) be a smooth and connected affinoid space, \(p\colon Z\to\widetilde{Z}\) the reduction map and let \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space. Then there exists a finite surjective morphism \(\pi\colon Z\to\mathbb{D}^{d}\) such that \(\mathring{W}\) is a union of connected components of \(\pi^{-1}(\mathring{\mathbb{D}}^{d})\). For any such morphism \(\pi\), \(\pi^{-1}(\mathring{\mathbb{D}}^{d})\) consists of only finitely many connected components. More precisely: For \(\mathring{W}=p^{-1}(\{\widetilde{z_{1}},\dots,\widetilde{z_{r}}\})\), there exists a finite surjective morphism \(\widetilde{\pi}\colon\widetilde{Z}\to\mathbb{A}^{d}_{k}\) that maps all the \(\widetilde{z_{i}}\) to \(0\), and any lift \(\pi\) of any such \(\widetilde{\pi}\) satisfies the desired properties above. In this setting, if \(\widetilde{\pi}^{-1}(0)=\{\widetilde{z_{1}},\dots,\widetilde{z_{r}},\widetilde{z_{r+1}},\dots,\widetilde{z_{s}}\}\) is the full fiber over zero, then the connected components of \(\pi^{-1}(\mathring{\mathbb{D}}^{d})\) are precisely the \(p^{-1}(\widetilde{z_{1}}),\dots,p^{-1}(\widetilde{z_{s}})\), and \(\mathring{W}\) is the union of the components \(p^{-1}(\widetilde{z_{1}}),\dots,p^{-1}(\widetilde{z_{r}})\)._ Proof.: This is the content of [10, Lemma 4.2.2], but we give an outline of the argument to illustrate where our generalized connectedness theorem plays a role. By [1, Lemma 1.1.4], we can choose a finite surjective morphism \(\pi\colon Z\to\mathbb{D}^{d}\) such that \(\pi(z_{i})=0\) for all \(i=1,\dots,r\), where the \(z_{i}\in Z\) are lifts of the \(\widetilde{z_{i}}\). Taking its reduction, we obtain a \(\widetilde{\pi}\) as in the assertion. On the other hand, given any \(\widetilde{\pi}\) as in the assertion, any lift \(\pi\) of \(\widetilde{\pi}\) is finite surjective by the results of [1, §6.3]. Moreover, let \(\widetilde{z}_{r+1},\dots,\widetilde{z_{s}}\in\widetilde{Z}\) be such that \(\widetilde{\pi}^{-1}(0)=\{\widetilde{z_{1}},\dots,\widetilde{z_{r}},\widetilde{z_{r+1}},\dots,\widetilde{z_{s}}\}\). Then, due to the commutativity of the diagram relating \(\pi\), \(\widetilde{\pi}\) and the reduction maps, and the fact that \(\mathring{\mathbb{D}}^{d}\) is the fiber over \(0\in\mathbb{A}^{d}_{k}\), we see that \(\pi^{-1}(\mathring{\mathbb{D}}^{d})\) as an analytic space is the disjoint union of the \(p^{-1}(\widetilde{z_{i}})\), \(i=1,\dots,s\). Finally, each \(p^{-1}(\widetilde{z_{i}})\) is connected by our generalized connectedness result, Corollary 1.8. Thus the connected components of \(\pi^{-1}(\mathring{\mathbb{D}}^{d})\) are precisely the \(p^{-1}(\widetilde{z_{1}}),\dots,p^{-1}(\widetilde{z_{s}})\). Since moreover \(\mathring{W}=p^{-1}(\{\widetilde{z_{1}},\dots,\widetilde{z_{r}}\})\) by assumption, it follows that \(\mathring{W}\) is the union of the components \(p^{-1}(\widetilde{z_{1}}),\dots,p^{-1}(\widetilde{z_{r}})\). **Remark 2.3**.: The morphism \(\pi\) in Lemma 2.2 can be chosen to be separable.
Indeed, given a smooth connected affinoid space \(Z=\operatorname{Sp}(R)\) and points \(z_{1},\dots,z_{n}\in Z\), one can show (by a slight modification of the proof of [1, Satz 4.1.9]) that there exists a finite surjective separable morphism \(\pi\colon Z\to\mathbb{D}^{d}\) such that \(|\pi(z_{i})|<1\) for all \(i=1,\ldots,n\). The property \(|\pi(z_{i})|<1\) means that \(\widetilde{\pi}(\widetilde{z_{i}})=0\) (due to [1, 7.1.5/Proposition 2]), whence it follows by Lemma 2.2 that \(\pi\) has the other desired properties too. **Remark 2.4**.: We mention that the converse of Lemma 2.2 also holds: If there exists a finite surjective morphism \(\pi\colon Z\to\mathbb{D}^{d}\) such that \(\mathring{W}\) is a union of connected components of \(\pi^{-1}(\mathring{\mathbb{D}}^{d})\), then these connected components are fibers of the reduction map, so in particular \(\mathring{W}\) is special affinoid wide-open. **Lemma 2.5**.: _Let \(X\) be a connected smooth Stein space of dimension \(d\). Then \(X\) has a cover \(\{\mathring{W}_{i}\}_{i\in\mathbb{N}}\) consisting of admissible open, special affinoid wide-open subsets \(\mathring{W}_{i}\) such that_ 1. _the ambient affinoid space_ \(W_{i}\supseteq\mathring{W}_{i}\) _in the sense of Definition_ 2.1 _can be chosen to be connected and contained in_ \(X\)_,_ 2. \(\mathring{W}_{i}\subseteq\mathring{W}_{i+1}\)_, and_ 3. \(\mathring{W}_{i}\) _is smooth of dimension_ \(d\)_._ Proof.: By definition, \(X\) has an admissible open cover \(\{W_{i}\}_{i\in\mathbb{N}}\) by connected affinoid subsets \(W_{i}\) satisfying \(W_{i}\Subset W_{i+1}\). One checks easily that the condition \(W_{i}\Subset W_{i+1}\) ensures that the image of \(W_{i}\) under the reduction map \(p_{i+1}\colon W_{i+1}\to\widetilde{W_{i+1}}\) is a finite subset of \(\widetilde{W_{i+1}}\). Then we can define \(\mathring{W}_{i+1}\) as the preimage of this finite set under \(p_{i+1}\), and it is immediate that this \(\mathring{W}_{i+1}\) satisfies the desired conditions. ### Beyer's trace map First we recall the definition of the map \(\operatorname{res}\colon H^{d}_{c}(\mathring{\mathbb{D}}^{d},\omega_{\mathring{\mathbb{D}}^{d}})\to K\) from [1, Definition 2.1.1]. We let \(K\langle X_{1}^{-1},\ldots,X_{d}^{-1}\rangle^{\dagger}\) denote the ring of overconvergent series of the form \(\sum_{i_{1},\ldots,i_{d}<0}r_{i_{1},\ldots,i_{d}}X_{1}^{i_{1}}\cdots X_{d}^{i_{d}}\), \(r_{i_{1},\ldots,i_{d}}\in K\), and switch to multi-index notation; in particular \[\frac{dX_{1}}{X_{1}}\wedge\cdots\wedge\frac{dX_{d}}{X_{d}}=\frac{dX_{1}\wedge\cdots\wedge dX_{d}}{X_{1}\cdots X_{d}}=:\frac{dX}{X}.\] Then, choosing coordinates \(X=(X_{1},\ldots,X_{d})\) on \(\mathbb{D}^{d}\) yields an isomorphism \(H^{d}_{c}(\mathring{\mathbb{D}}^{d},\omega_{\mathring{\mathbb{D}}^{d}})\cong K\langle X^{-1}\rangle^{\dagger}\cdot\frac{dX}{X}\) (cf. [1, Corollary 1.2.5]), which allows one to define \(\operatorname{res}\) by the following formula: \[\operatorname{res}\colon H^{d}_{c}(\mathring{\mathbb{D}}^{d},\omega_{\mathring{\mathbb{D}}^{d}})\to K,\qquad\sum_{i\leq 0}r_{i}X^{i}\cdot\frac{dX}{X}\mapsto r_{0}. \tag{2.2.1}\] This is independent of the choice of coordinates \((X_{1},\ldots,X_{d})\) on \(\mathbb{D}^{d}\) by [1, Proposition 2.1.3]. We note that Beyer works with the basis \(dX\), with respect to which \(\operatorname{res}\) projects onto the \((-1)\)-th coefficient; this coincides with the \(0\)-th coefficient with respect to the basis \(dX/X\).
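For example (our own sanity check, in dimension \(d=1\)): \[\operatorname{res}\Bigl(\bigl(4+2X^{-1}+X^{-3}\bigr)\cdot\frac{dX}{X}\Bigr)=4,\] since with respect to the basis \(dX\) this form reads \(4X^{-1}\,dX+2X^{-2}\,dX+X^{-4}\,dX\), and \(\operatorname{res}\) picks out the coefficient of \(X^{-1}\,dX\).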
Another important ingredient in the construction of the trace map is the following map from [1, p. 234]: **Definition 2.6** (The map \(\sigma\)).: Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space of dimension \(d\). Let \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space with an associated finite surjective separable morphism \(\pi\colon Z\to\mathbb{D}^{d}\) according to Remark 2.3. We denote the trace map of the corresponding finite field extension \[E=Q(T_{d})\hookrightarrow L=Q(R)\] by \(\operatorname{Tr}_{L/E}\). It induces a map on the \(d\)-forms \[\sigma\colon\Omega^{d}_{L/K}=\Omega^{d}_{E/K}\otimes_{E}L\to\Omega^{d}_{E/K},\qquad\omega\otimes b\mapsto\operatorname{Tr}_{L/E}(b)\cdot\omega, \tag{2.2.2}\] where \(\Omega^{d}_{L/K}=\Omega^{d}_{E/K}\otimes_{E}L\) holds because \(L/E\) is separable. Moreover, \(\Omega^{d}_{L/K}=Q(R)\otimes_{R}\Omega^{d}_{R/K}\) and \(\sigma(\Omega^{d}_{R/K})\subseteq\Omega^{d}_{T_{d}/K}\), so \(\sigma\) restricts to a map \[\sigma\colon\Omega^{d}_{R/K}\to\Omega^{d}_{T_{d}/K},\] which corresponds to a map \[\sigma\colon\pi_{*}\omega_{Z}\to\omega_{\mathbb{D}^{d}}.\] We write \(\sigma=\sigma_{\pi}\) when we want to stress the dependence on \(\pi\). Now we reproduce [10, Definition 4.2.4]: **Definition 2.7** (Trace map for special affinoid wide-open spaces).: Let \(Z\) be a connected smooth affinoid space of dimension \(d\). Let \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space with an associated finite surjective separable morphism \(\pi\colon Z\to\mathbb{D}^{d}\) according to Remark 2.3. Let \(\mathring{Z}:=\pi^{-1}(\mathring{\mathbb{D}}^{d})\). _The trace map_ \[t=t_{\pi}\colon H^{d}_{c}(\mathring{W},\omega_{Z})\to K\] is defined as the following composite map (which is in fact independent of the choice of the finite surjective separable morphism \(\pi\) by [10, Corollary 4.2.11]): \[H^{d}_{c}(\mathring{W},\omega_{Z})\hookrightarrow H^{d}_{c}(\mathring{Z},\omega_{Z})\xrightarrow{\sim}H^{d}_{c}(\mathring{\mathbb{D}}^{d},\pi_{*}\omega_{Z})\xrightarrow{H^{d}_{c}(\sigma)}H^{d}_{c}(\mathring{\mathbb{D}}^{d},\omega_{\mathbb{D}^{d}})\xrightarrow{\operatorname{res}}K,\] where the third map is induced by the \(\sigma\) from Definition 2.6. The definition of the trace map is then extended to smooth Stein spaces, since they admit admissible covers consisting of special affinoid wide-open spaces: **Definition 2.8** (Trace map for Stein spaces).: Let \(X\) be a connected smooth Stein space of dimension \(d\). In the notation of Lemma 2.5, we have the trace maps \(t_{i}\colon H^{d}_{c}(\mathring{W}_{i},\omega_{X})\to K\). Since the diagrams comparing \(t_{i}\) and \(t_{i+1}\) commute (by [10, Corollary 4.2.12]), the \(t_{i}\) induce a map \[t\colon\varinjlim_{i}H^{d}_{c}(\mathring{W}_{i},\omega_{X})=H^{d}_{c}(X,\omega_{X})\to K.\] Beyer shows in [10, Proof of Satz 7.1] that this \(t\) satisfies Theorem 2.9 below. In particular, it follows by standard universal abstract nonsense that, up to a unique automorphism of \(\omega_{X}\), \(t\) is independent of the choice of the covering \(\{\mathring{W}_{i}\}_{i\in\mathbb{N}}\). We write \(t=t_{X}\) when we wish to emphasize the base space \(X\). **Theorem 2.9** (Serre duality for smooth rigid Stein spaces).: _Let \(X\) be a smooth rigid \(K\)-space of dimension \(d\)._
_If \(X\) is Stein, then there is a canonical trace morphism_ \[t\colon H^{d}_{c}(X,\omega_{X})\to K\] _which has the following property: If \(\mathcal{F}\) is a coherent sheaf on \(X\), then the composite of the trace map \(t\) with the canonical pairing_ \[H^{d-i}_{c}(X,\mathcal{F})\times\mathrm{Ext}^{i}_{X}(\mathcal{F},\omega_{X})\to H^{d}_{c}(X,\omega_{X})\] _induces an isomorphism of topological \(K\)-vector spaces_ \[H^{d-i}_{c}(X,\mathcal{F})^{\vee}\xrightarrow{\sim}\mathrm{Ext}^{i}_{X}(\mathcal{F},\omega_{X})\] _for all \(i\geq 0\)._ Here \(H^{d-i}_{c}(X,\mathcal{F})^{\vee}\) denotes the space of continuous linear forms on \(H^{d-i}_{c}(X,\mathcal{F})\), equipped with the strong dual topology. Moreover, \(\mathrm{Ext}^{i}_{X}(\mathcal{F},\omega_{X})\) is equipped with the canonical topology for global sections of a coherent sheaf (see [10, §1.3]), as \(\mathrm{Ext}^{i}_{X}(\mathcal{F},\omega_{X})=H^{0}(X,\mathcal{Ext}^{i}_{X}(\mathcal{F},\omega_{X}))\) because the spectral sequence for the derived functor of the composition is degenerate. Indeed, the spectral sequence degenerates since \(X\) is quasi-Stein and \(\mathcal{Ext}^{i}_{X}(\mathcal{F},\omega_{X})\) is a coherent \(\mathcal{O}_{X}\)-module for all \(i\) (cf. [11, Proposition 3.3 and also the discussion preceding Lemma 3.7]). ### The residue map on local cohomology Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space of dimension \(d\) and \(z\in Z\) a point with corresponding maximal ideal \(\mathfrak{m}_{z}\subseteq R\). Following Beyer, we often shorten notation as follows: Given a coherent sheaf \(\mathcal{F}\) on \(Z\), we set \(M=\Gamma(Z,\mathcal{F})\) and \[H^{j}_{z}(\mathcal{F}):=H^{j}_{\mathfrak{m}_{z}}(M)=H^{j}_{\widehat{\mathfrak{m}}_{z}}(\widehat{M}),\] where the latter identification between the local cohomology groups ("insensitivity to completion") is shown in [10, Proposition 2.15]. Footnote 1: The assertion [10, Proposition 2.15] concerns the case of a local ring and its unique maximal ideal, but the proof carries over verbatim to our (more general) setting of the Noetherian ring \(R\) and any maximal ideal \(\mathfrak{m}_{z}\). There are two fundamental properties of local cohomology that play an important role throughout. One is its relation to the sheaf cohomology of the underlying affine scheme \(\operatorname{Spec}(R)\) as in [11, Theorem 12.47], which ultimately yields the link to compactly supported cohomology described in §2.4 below. The other is the explicit description of local cohomology as a direct limit of Koszul complexes (cf. [1, Lemma 3.1.2] and [11, Theorem 7.11]): **Lemma 2.10**.: _Let \(A\) be a Noetherian ring, \(\mathfrak{a}\subseteq A\) an ideal and \(M\) an \(A\)-module. Let \(t_{1},\dots,t_{d}\in A\) be such that \(\sqrt{\mathfrak{a}}=\sqrt{(t_{1},\dots,t_{d})}\). Write \(t=(t_{1},\dots,t_{d})\). For each \(\rho\in\mathbb{N}\), set \(t^{\rho}:=(t_{1}^{\rho},\dots,t_{d}^{\rho})\). Then there is a canonical isomorphism_ \[\varinjlim_{\rho}M/t^{\rho}M\xrightarrow{\sim}H^{d}_{\mathfrak{a}}(M),\] _where the transition map \(M/t^{\rho_{1}}M\to M/t^{\rho_{2}}M\) for \(\rho_{1}\leq\rho_{2}\) on the left-hand side is given by \(m\mapsto(\prod_{i=1}^{d}t_{i})^{\rho_{2}-\rho_{1}}\cdot m\)._ For instance, this result yields the isomorphisms (2.3.2) and (2.3.3) further below.
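The simplest algebraic instance of Lemma 2.10 (a standard example, not from the sources above): take \(A=M=K[X]\), \(\mathfrak{a}=(X)\), \(t_{1}=X\) and \(d=1\). The lemma gives \[H^{1}_{(X)}(K[X])\cong\varinjlim_{\rho}K[X]/(X^{\rho}),\] with transition maps given by multiplication by \(X^{\rho_{2}-\rho_{1}}\); identifying the class of \(g\bmod(X^{\rho})\) with \(g\cdot X^{-\rho}\) recovers the familiar description \(H^{1}_{(X)}(K[X])\cong K[X,X^{-1}]/K[X]\).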
Now, in [1, Definition 4.2.7], Beyer defines a canonical _residue map_ \[\operatorname{res}_{z}\colon H^{d}_{z}(\omega_{Z})=H^{d}_{\mathfrak{m}_{z}}(\Omega^{d}_{R/K})\to K \tag{2.3.1}\] whose explicit construction we need not recall for our purposes, but rather its properties:

**Lemma 2.11**.: _Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space of dimension \(d\). Let \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space with an associated finite surjective morphism \(\pi\colon Z\to\mathbb{D}^{d}\) according to Lemma 2.2, and let \(\varphi\colon T_{d}\to R\) be the finite injective ring morphism corresponding to \(\pi\). Let \(\{z_{1},\dots,z_{r}\}=\pi^{-1}(0)\cap\mathring{W}\) and \(\{z_{1},\dots,z_{r},z_{r+1},\dots,z_{s}\}=\pi^{-1}(0)\). Denote by \(\mathfrak{m}_{1},\dots,\mathfrak{m}_{s}\subseteq R\) the corresponding maximal ideals in \(R\). Let \(M=\Gamma(Z,\omega_{Z})\) and let \(\mathfrak{m}\subseteq T_{d}\) denote the maximal ideal corresponding to \(0\in\mathbb{D}^{d}\). Then:_ 1. _For every coherent sheaf_ \(\mathcal{F}\) _on_ \(Z\)_, there is a canonical isomorphism_ \[\gamma\colon\bigoplus_{i=1}^{s}H^{d}_{z_{i}}(\mathcal{F})\xrightarrow{\sim}H^{d}_{0}(\pi_{*}\mathcal{F}).\] _We write_ \(\gamma=\gamma_{\mathcal{F},\pi}\) _when we want to stress the dependence on_ \(\mathcal{F}\) _and_ \(\pi\)_._ 2. _Let_ \(X_{1},\dots,X_{d}\) _be a system of parameters for the_ \(\mathfrak{m}\)_-adic completion_ \(T_{d}{}^{\wedge\mathfrak{m}}\)_. There are canonical isomorphisms_ \[H^{d}_{0}(\pi_{*}\omega_{Z})\cong\varinjlim_{\rho}\widehat{M_{\mathfrak{m}}}/(X_{1}^{\rho},\dots,X_{d}^{\rho})\] (2.3.2) _and_ \[\bigoplus_{i=1}^{s}H^{d}_{z_{i}}(\omega_{Z})\cong\varinjlim_{\rho}\bigoplus_{i=1}^{s}\widehat{M_{\mathfrak{m}_{i}}}/(X_{1}^{\rho},\dots,X_{d}^{\rho}),\] (2.3.3) _where we have denoted the image of_ \(X_{j}\) _under each map_ \(T_{d}{}^{\wedge\mathfrak{m}}\to R{}^{\wedge\mathfrak{m}_{i}}\) _induced by_ \(\varphi\) _on completions again by_ \(X_{j}\)_; these form a system of parameters in_ \(R{}^{\wedge\mathfrak{m}_{i}}\)_. Via these isomorphisms,_ \(\gamma^{-1}\) _is identified with the map_ \[\begin{split}\widetilde{\gamma}^{-1}\colon\varinjlim_{\rho}\widehat{M_{\mathfrak{m}}}/(X_{1}^{\rho},\dots,X_{d}^{\rho})&\xrightarrow{\sim}\varinjlim_{\rho}\bigoplus_{i=1}^{s}\widehat{M_{\mathfrak{m}_{i}}}/(X_{1}^{\rho},\dots,X_{d}^{\rho})\\ \begin{bmatrix}\omega\\ X^{\rho}\end{bmatrix}&\mapsto\left(\begin{bmatrix}\omega_{1}\\ X^{\rho}\end{bmatrix},\dots,\begin{bmatrix}\omega_{s}\\ X^{\rho}\end{bmatrix}\right)\end{split}\] (2.3.4) _where_ \([\begin{smallmatrix}\omega\\ X^{\rho}\end{smallmatrix}]\) _denotes the image of_ \(\omega\) _under_ \(\widehat{M_{\mathfrak{m}}}/(X_{1}^{\rho},\dots,X_{d}^{\rho})\to\varinjlim_{j}\widehat{M_{\mathfrak{m}}}/(X_{1}^{j},\dots,X_{d}^{j}),\) _and_ \(\omega_{i}\) _denotes the image of_ \(\omega\) _in_ \(\widehat{M_{\mathfrak{m}_{i}}}\)_._ 3. _The diagram_ \[\begin{CD}\bigoplus_{i=1}^{s}H_{z_{i}}^{d}(\omega_{Z})@>{\gamma}>>H_{0}^{d}(\pi_{*}\omega_{Z})@>{H_{0}^{d}(\sigma)}>>H_{0}^{d}(\omega_{\mathbb{D}^{d}})\\ @V{\sum_{i}\operatorname{res}_{z_{i}}}VV@.@VV{\operatorname{res}_{0}}V\\ K@=K@=K\end{CD}\] _commutes, where_ \(\sigma=\sigma_{\pi}\) _is defined as in Definition_ 2.6 _above._

Proof.: 1. The isomorphism \(\gamma\) is the map from [1, Lemma 4.2.9 (a)]. 2. This is true by the arguments in [1, Proof of Lemma 4.2.9 (c)]. 3. This assertion is precisely [1, Lemma 4.2.9 (c)].
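As a sanity check, consider the simplest case \(d=1\), \(Z=\mathbb{D}^{1}=\operatorname{Sp}(K\langle t\rangle)\) and \(z=0\) (our illustration; the normalization is that of the classical one-variable residue and is not quoted from [1]). Lemma 2.10 gives \[H^{1}_{0}(\omega_{Z})=H^{1}_{(t)}(K\langle t\rangle\,dt)\cong\varinjlim_{\rho}\big(K\langle t\rangle/(t^{\rho})\big)\,dt,\] and the generalized fraction \(\left[\begin{smallmatrix}f\,dt\\ t^{\rho}\end{smallmatrix}\right]\) behaves like the meromorphic form \(f\,dt/t^{\rho}\); one expects \(\operatorname{res}_{0}\) to read off its coefficient of \(t^{-1}\,dt\), i.e. the coefficient of \(t^{\rho-1}\) in \(f\). This is the picture to keep in mind for the generalized fractions appearing in (2.3.4).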
### The map from local cohomology into compactly supported cohomology

Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space of dimension \(d\), let \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space. Given a point \(z\in\mathring{W}\) and a coherent sheaf \(\mathcal{F}\) on \(Z\), [1, Lemma 4.2.6] constructs a canonical map \[H_{z}^{d}(\mathcal{F})\to H_{c}^{d}(\mathring{W},\mathcal{F})\] that is functorial in \(\mathcal{F}\). Again, we need not recall its explicit construction for our purposes, but rather the following properties:

**Lemma 2.12**.: _Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space of dimension \(d\). Let \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space. Let \(\pi\colon Z\to\mathbb{D}^{d}\) be any finite surjective morphism associated to \(\mathring{W}\) as in Lemma 2.2. Let \(\{z_{1},\dots,z_{r}\}=\pi^{-1}(0)\cap\mathring{W}\) and \(\{z_{1},\dots,z_{r},z_{r+1},\dots,z_{s}\}=\pi^{-1}(0)\). Set \(\mathring{Z}=\pi^{-1}(\mathring{\mathbb{D}}^{d}).\) Then the image of the canonical map_ \[\bigoplus_{i=1}^{s}H_{z_{i}}^{d}(\omega_{Z})\to H_{c}^{d}(\mathring{Z},\omega_{Z})\] _is dense in \(H_{c}^{d}(\mathring{Z},\omega_{Z})\) and the image of the following map induced by restriction_ \[\bigoplus_{i=1}^{r}H_{z_{i}}^{d}(\omega_{Z})\to H_{c}^{d}(\mathring{W},\omega_{Z})\] _is dense in \(H_{c}^{d}(\mathring{W},\omega_{Z})\)._

Proof.: This is [1, Lemma 4.2.9 (d)].

## 3. The theorem on the relative trace map

In this section, we introduce the relative trace map, then state and prove the main theorem about it. Throughout this section, let \(\alpha\colon X\to Y\) be a finite etale morphism of smooth \(d\)-dimensional rigid spaces.

### The relative trace map \(t_{\alpha}\)

We start by defining the relative trace map. Since \(\alpha\) pulls back affinoids to affinoids (because \(\alpha\) is finite), any coherent \(\alpha_{*}\mathcal{O}_{X}\)-module \(\mathcal{M}\) can naturally be viewed as a coherent \(\mathcal{O}_{X}\)-module \(\widetilde{\mathcal{M}}\) such that \(\alpha_{*}\widetilde{\mathcal{M}}=\mathcal{M}\) and \((-)^{\sim}\) is an equivalence of categories (cf. [10, Proposition I.9.2.5]). Since \(\alpha\) is etale, there is a natural isomorphism \[(\alpha_{*}\mathcal{O}_{X}\otimes_{\mathcal{O}_{Y}}\omega_{Y})^{\sim}\xrightarrow{\sim}\omega_{X}. \tag{3.1.1}\] Since \(\alpha\) is finite flat, there is the usual trace pairing \[\alpha_{*}\mathcal{O}_{X}\to\mathcal{Hom}_{Y}(\alpha_{*}\mathcal{O}_{X},\mathcal{O}_{Y}). \tag{3.1.2}\] Finally, since \(\alpha\) is finite flat, the natural map \[\mathcal{Hom}_{Y}(\alpha_{*}\mathcal{O}_{X},\mathcal{O}_{Y})\otimes_{\mathcal{O}_{Y}}\omega_{Y}\xrightarrow{\sim}\mathcal{Hom}_{Y}(\alpha_{*}\mathcal{O}_{X},\omega_{Y}) \tag{3.1.3}\] is an isomorphism. _The relative trace map_ is now defined to be the composite map \[\alpha_{*}\omega_{X}\stackrel{(3.1.1)}{\cong}\alpha_{*}(\alpha_{*}\mathcal{O}_{X}\otimes_{\mathcal{O}_{Y}}\omega_{Y})^{\sim}\stackrel{(3.1.2)}{\to}\alpha_{*}(\mathcal{Hom}_{Y}(\alpha_{*}\mathcal{O}_{X},\mathcal{O}_{Y})\otimes_{\mathcal{O}_{Y}}\omega_{Y})^{\sim}\stackrel{(3.1.3)}{\cong}\alpha_{*}\mathcal{Hom}_{Y}(\alpha_{*}\mathcal{O}_{X},\omega_{Y})^{\sim}=\mathcal{Hom}_{Y}(\alpha_{*}\mathcal{O}_{X},\omega_{Y})\xrightarrow{g\mapsto g(1)}\omega_{Y}\] and is denoted by \(t_{\alpha}\). We aim for a more down-to-earth description of the relative trace map in terms of affinoids.
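Before doing so, it may help to record the simplest instance (our illustration, not from the source): if \(\alpha\) is the trivial degree-\(n\) cover \(X=\coprod_{j=1}^{n}Y\to Y\), then \(\alpha_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}^{n}\), the pairing (3.1.2) sends \((a_{j})_{j}\) to the functional \((b_{j})_{j}\mapsto\sum_{j}a_{j}b_{j}\), and unwinding the composite gives \[t_{\alpha}\colon\alpha_{*}\omega_{X}=\omega_{Y}^{n}\to\omega_{Y},\qquad(\omega_{1},\dots,\omega_{n})\mapsto\sum_{j=1}^{n}\omega_{j},\] i.e. the relative trace simply sums the forms over the sheets of the cover.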
Let \(V=\operatorname{Sp}(A)\) be an affinoid in \(Y\) and \(U=\alpha^{-1}(V)=\operatorname{Sp}(B)\) its preimage in \(X\). The associated ring morphism \(A\to B\) is, in particular, finite and flat. Since \(A\) is Noetherian, the finite flat ring morphism \(A\to B\) makes \(B\) into a finitely presented flat \(A\)-module, i.e. a finitely generated projective \(A\)-module. Hence the natural map \[\begin{split}\operatorname{can}\colon B^{*}\otimes_{A}B&\xrightarrow{\sim}\operatorname{End}_{A}(B)\\ f\otimes b&\mapsto[x\mapsto f(x)\cdot b]\end{split}\] is an isomorphism, where \(B^{*}:=\operatorname{Hom}_{A}(B,A)\). The trace \(\operatorname{Tr}_{B/A}\) is now defined as the composite \[B\to\operatorname{End}_{A}(B)\xrightarrow{\operatorname{can}^{-1}}B^{*}\otimes_{A}B\xrightarrow{\operatorname{ev}}A,\] where the first map sends \(b\in B\) to the endomorphism given by multiplication by \(b\), and the last map is given by "evaluation", \(f\otimes b\mapsto f(b)\). If \(B\) is free of finite rank over \(A\), then \(\operatorname{Tr}_{B/A}\) coincides with the usual trace map from linear algebra. Note that the map (3.1.2) of coherent sheaves boils down to \(B\to B^{*},b\mapsto\operatorname{Tr}_{B/A}(b\cdot(-))\) at the level of modules. Therefore, the composite \[\Omega^{d}_{B/K}\xrightarrow[(3.1.1)]{\sim}B\otimes_{A}\Omega^{d}_{A/K}\xrightarrow[(3.1.2)]{}B^{*}\otimes_{A}\Omega^{d}_{A/K}\xrightarrow[(3.1.3)]{\sim}\operatorname{Hom}_{A}(B,\Omega^{d}_{A/K})\xrightarrow[g\mapsto g(1)]{}\Omega^{d}_{A/K}\] coincides with the following map \[\begin{split}\Omega^{d}_{B/K}=\Omega^{d}_{A/K}\otimes_{A}B&\xrightarrow{\tau}\Omega^{d}_{A/K}\\ \omega\otimes b&\mapsto\operatorname{Tr}_{B/A}(b)\cdot\omega.\end{split} \tag{3.1.4}\] We record this fact in the following lemma, for ease of reference.

**Lemma 3.1**.: _The restriction of the relative trace map to \(V\), viewed as a map \(t_{\alpha}\colon\alpha_{*}\omega_{U}\to\omega_{V}\), is associated to the homomorphism of modules \(\tau\colon\Omega^{d}_{B/K}\to\Omega^{d}_{A/K}\) from (3.1.4)._

**Lemma 3.2**.: _If \(R^{\prime}\) is an overring of an integral domain \(R\) such that each \(r^{\prime}\in R^{\prime}\) is integral over \(R\) and such that no element of \(R\setminus\{0\}\) is a zero divisor in \(R^{\prime}\), then the localization \(R^{\prime}_{R\setminus\{0\}}\) of \(R^{\prime}\) at the multiplicative subset \(R\setminus\{0\}\) coincides with the total ring of fractions \(Q(R^{\prime})\)._

Proof.: This is proved in the discussion after [3, 3.1.3/Proposition 3].

Suppose that there are finite surjective separable morphisms \(\pi_{U}\colon U\to\mathbb{D}^{d}\) and \(\pi_{V}\colon V\to\mathbb{D}^{d}\) such that \(\pi_{U}=\pi_{V}\circ\alpha\). Then there is the following compatibility of \(\tau\) with \(\sigma_{U}:=\sigma_{\pi_{U}}\) and \(\sigma_{V}:=\sigma_{\pi_{V}}\) from Definition 2.6:

**Lemma 3.3**.: _The diagram_ \[\begin{CD}\pi_{U*}\omega_{U}=\pi_{V*}\alpha_{*}\omega_{U}@>{\pi_{V*}(t_{\alpha})}>>\pi_{V*}\omega_{V}\\ @V{\sigma_{U}}VV@VV{\sigma_{V}}V\\ \omega_{\mathbb{D}^{d}}@=\omega_{\mathbb{D}^{d}}\end{CD}\] _commutes. By Lemma 3.1, it corresponds to the commutative diagram_ \[\begin{CD}\Omega^{d}_{B/K}@>{\tau}>>\Omega^{d}_{A/K}\\ @V{\sigma_{U}}VV@VV{\sigma_{V}}V\\ \Omega^{d}_{T_{d}/K}@=\Omega^{d}_{T_{d}/K}\end{CD}\]

Proof.: In the following, we write \(Q(B)^{*}=\operatorname{Hom}_{Q(A)}(Q(B),Q(A))\). Due to the formulae (2.2.2) and (3.1.4), we see that it suffices to show that \[\operatorname{Tr}_{Q(B)/Q(T_{d})}|_{B}=\operatorname{Tr}_{Q(A)/Q(T_{d})}\circ\operatorname{Tr}_{B/A}. \tag{3.1.5}\] We will show that the restriction of \(\operatorname{Tr}_{Q(B)/Q(A)}\) to \(B\) coincides with \(\operatorname{Tr}_{B/A}\), i.e. that the diagram \[\begin{CD}B@>{\operatorname{Tr}_{B/A}}>>A\\ @VVV@VVV\\ Q(B)@>{\operatorname{Tr}_{Q(B)/Q(A)}}>>Q(A)\end{CD}\] commutes, whence the desired equality (3.1.5) follows by the transitivity of the trace in towers of field extensions.
Lemma 3.2 tells us that \(Q(B)=B\otimes_{A}Q(A)\), so extension of scalars yields the maps \[\operatorname{End}_{A}(B)\to\operatorname{End}_{Q(A)}(Q(B)),\ \ \phi\mapsto\phi_{Q}:=\phi\otimes\operatorname{id}_{Q(A)},\qquad\text{and}\qquad B^{*}\to Q(B)^{*},\ \ f\mapsto f_{Q}:=f\otimes\operatorname{id}_{Q(A)}.\] Thus we can expand the above diagram to \[\begin{CD}B@>>>\operatorname{End}_{A}(B)@<{\operatorname{can}}<<B^{*}\otimes_{A}B@>{\operatorname{ev}}>>A\\ @VVV@V{\phi\mapsto\phi_{Q}}VV@V{f\otimes b\mapsto f_{Q}\otimes b}VV@VVV\\ Q(B)@>>>\operatorname{End}_{Q(A)}(Q(B))@<{\operatorname{can}}<<Q(B)^{*}\otimes_{Q(A)}Q(B)@>{\operatorname{ev}}>>Q(A)\end{CD}\] Now it is easy to see that each of the three squares in the diagram commutes. Indeed, to see, for instance, that the middle square commutes, note that the isomorphism \(Q(B)\cong B\otimes_{A}Q(A)\) means that any element of \(Q(B)\) can be written as \(\frac{x}{a}\) with \(x\in B\) and \(a\in A\), and any \(\phi_{Q}\) then acts on it as \(\phi_{Q}(\frac{x}{a})=\frac{\phi(x)}{a}\). The commutativity of the middle square amounts to showing that, if \(\phi(x)=\sum_{i}f_{i}(x)b_{i}\) for all \(x\in B\), then \(\phi_{Q}(\frac{x}{a})=\sum_{i}f_{i}(x)\frac{b_{i}}{a}\) for all \(x\in B\) and all \(a\in A\). But \(\phi_{Q}(\frac{x}{a})=\frac{\phi(x)}{a}\) and \(\sum_{i}f_{iQ}(\frac{x}{a})\frac{b_{i}}{1}=\sum_{i}\frac{f_{i}(x)}{a}\frac{b_{i}}{1}=\frac{\sum_{i}f_{i}(x)b_{i}}{a}=\frac{\phi(x)}{a}\) as well, as desired.

The following three subsections are devoted to proving our main result in this section:

**Theorem 3.4**.: _Denote the composite map \(H^{d}_{c}(X,\omega_{X})\xrightarrow{\sim}H^{d}_{c}(Y,\alpha_{*}\omega_{X})\xrightarrow{H^{d}_{c}(Y,t_{\alpha})}H^{d}_{c}(Y,\omega_{Y})\) by \(q_{\alpha}\). Then the following diagram commutes:_ \[\begin{CD}H^{d}_{c}(X,\omega_{X})@>{q_{\alpha}}>>H^{d}_{c}(Y,\omega_{Y})\\ @V{t_{X}}VV@VV{t_{Y}}V\\ K@=K\end{CD} \tag{3.1.6}\]

### Reducing the proof of Theorem 3.4 to special affinoid wide-opens

To reduce the proof of Theorem 3.4 to the case of special affinoid wide-opens, we need the following "relative version" of Lemma 2.5. Recall that \(\alpha\colon X\to Y\) is a finite etale morphism of smooth \(d\)-dimensional rigid spaces.

**Lemma 3.5**.: _There exist admissible covers \(\{\mathring{U}_{i}\}_{i\in\mathbb{N}}\) of \(X\) and \(\{\mathring{V}_{i}\}_{i\in\mathbb{N}}\) of \(Y\) by special affinoid wide-opens as in Lemma 2.5 with ambient affinoid spaces \(U_{i}\subseteq X,V_{i}\subseteq Y\), such that \(\alpha(\mathring{U}_{i})\subseteq\mathring{V}_{i}\) and \(\alpha(U_{i})\subseteq V_{i}\) and \(\alpha\colon U_{i}\to V_{i}\) is finite etale. More precisely, there is a commutative diagram_ \[\begin{CD}U_{i}@>{\alpha}>>V_{i}\\ @V{\pi_{U_{i}}}VV@VV{\pi_{V_{i}}}V\\ \mathbb{D}^{d}@=\mathbb{D}^{d}\end{CD}\] _for each \(i\), where \(U_{i}\subseteq X\) and \(V_{i}\subseteq Y\) are connected smooth affinoids, \(\alpha\) is finite etale, \(\pi_{V_{i}}\) and \(\pi_{U_{i}}\) are finite and separable and are associated to \(\mathring{V}_{i}\) resp. \(\mathring{U}_{i}\) (in the sense of Lemma 2.2), and the maps \(\alpha\), \(\pi_{U_{i}}\) and \(\pi_{V_{i}}\) are surjective._

Proof.: We obtain the existence of \(\{\mathring{V}_{i}\}_{i\in\mathbb{N}}\) by Lemma 2.5. Moreover, we have a connected and smooth affinoid space \(V_{i}=\operatorname{Sp}(A_{i})\) with \(\mathring{V}_{i}\subseteq V_{i}\subseteq Y\), such that \(\mathring{V}_{i}\) is the preimage of finitely many points under the reduction map \(p_{V_{i}}\), say \[\mathring{V}_{i}=p_{V_{i}}^{-1}(\{\widetilde{v}_{1},\dots,\widetilde{v}_{r}\}).\] The preimage of an affinoid space under a finite morphism is again affinoid, so \(U^{\prime}_{i}:=\alpha^{-1}(V_{i})\) is affinoid. Moreover, the restriction \(\alpha\colon U^{\prime}_{i}\to V_{i}\) is also finite etale. Replacing \(U^{\prime}_{i}\) with one of its connected components \(U_{i}\) (which is an affinoid subdomain in \(U^{\prime}_{i}\), say \(U_{i}=\operatorname{Sp}(B_{i})\)), the restriction \(\alpha\colon U_{i}\to V_{i}\) is again finite etale.
Here we use that \(U_{i}\) is "clopen" in \(U^{\prime}_{i}\) (being a connected component): the restriction is again etale since it arises by composition with the open immersion \(U_{i}\hookrightarrow U^{\prime}_{i}\) (which is etale) and, on the other hand, it is again finite since it arises by composition with the closed immersion \(U_{i}\hookrightarrow U^{\prime}_{i}\) (which is finite). The restriction \(\alpha\colon U_{i}\to V_{i}\) being finite etale, the associated ring morphism \(A_{i}\to B_{i}\) is finite flat. We may assume that \(U_{i}\neq\emptyset\). Indeed, since \(X\neq\emptyset\), we see that \(U_{i}\neq\emptyset\) for all \(i\gg 0\), so we may re-index and forget the small \(i\). Next we argue that \(A_{i}\to B_{i}\) is injective. This is true since any flat ring morphism \(R\to S\) from an integral domain \(R\) to a ring \(S\neq 0\) is necessarily injective, and \(A_{i}\) is a domain since \(V_{i}\) is connected. So \(A_{i}\to B_{i}\) is an injective integral morphism, which implies that the map \(\operatorname{Spec}(B_{i})\to\operatorname{Spec}(A_{i})\) is surjective. Thus \[\alpha\colon U_{i}\to V_{i}\] is surjective as well, because any prime ideal lying over a maximal ideal in an integral ring extension is necessarily maximal. Note that \(\alpha\) being finite implies that \(\widetilde{\alpha}\) is also finite (by [1, 6.3.5/Theorem 1]) and that \(\alpha\) being surjective implies that \(\widetilde{\alpha}\) is also surjective (since both \(\alpha\) and \(p_{V_{i}}\) are surjective and thus also the composite \(p_{V_{i}}\circ\alpha=\widetilde{\alpha}\circ p_{U_{i}}\) is surjective). Since \(\widetilde{\alpha}\) is finite, \(\Omega:=\widetilde{\alpha}^{-1}(\{\widetilde{v}_{1},\dots,\widetilde{v}_{r}\})\) is a finite subset of \(\tilde{U}_{i}\) and hence \[\mathring{U}_{i}:=p_{U_{i}}^{-1}(\Omega)\] is special affinoid wide-open in \(U_{i}\). Then \(\alpha(\mathring{U}_{i})\subseteq\mathring{V}_{i}\) because \(p_{V_{i}}\circ\alpha=\widetilde{\alpha}\circ p_{U_{i}}\). In fact, we claim that \(\mathring{U}_{i}=\alpha^{-1}(\mathring{V}_{i})\), so the restriction \(\mathring{U}_{i}\to\mathring{V}_{i}\) of \(\alpha\) is again surjective. To see this, let \(u\in\alpha^{-1}(\mathring{V}_{i})\), which by definition of \(\mathring{V}_{i}\) means that \[\{\widetilde{v}_{1},\dots,\widetilde{v}_{r}\}\ni p_{V_{i}}(\alpha(u))= \widetilde{\alpha}(p_{U_{i}}(u)),\] so \(p_{U_{i}}(u)\in\widetilde{\alpha}^{-1}(\{\widetilde{v}_{1},\dots,\widetilde{v }_{r}\})=\Omega\), i.e. \(u\in p_{U_{i}}{}^{-1}(\Omega)=\mathring{U}_{i}\) which proves our claim. In particular, it follows that \(\{\mathring{U}_{i}\}_{i\in\mathbb{N}}\) is the preimage of the admissible open cover \(\{\mathring{V}_{i}\}_{i\in\mathbb{N}}\) of \(Y\) under \(\alpha\colon X\to Y\), hence it is itself an admissible open cover of \(X\). Now take a finite surjective map \(\widetilde{\pi}_{V_{i}}\colon\widetilde{V}_{i}\to\mathbb{A}^{d}\) with \(\widetilde{\pi}_{V_{i}}(\widetilde{v}_{j})=0\) for all \(j=1,\dots,r\) and a separable lift \(\pi_{V_{i}}\colon V_{i}\to\mathbb{D}^{d}\) (see Remark 2.3). Define \[\widetilde{\pi}_{U_{i}}:=\widetilde{\pi}_{V_{i}}\circ\widetilde{\alpha}.\] Then \(\widetilde{\pi}_{U_{i}}(\Omega)=\widetilde{\pi}_{V_{i}}(\widetilde{\alpha}( \Omega))\subseteq\widetilde{\pi}_{V_{i}}(\{\widetilde{v}_{1},\dots,\widetilde{ v}_{r}\})=\{0\}.\) Moreover, \(\widetilde{\pi}_{U_{i}}\) is finite surjective since \(\widetilde{\pi}_{V_{i}}\) and \(\widetilde{\alpha}\) are both finite and surjective. 
Therefore, any separable lift of \(\widetilde{\pi}_{U_{i}}\) satisfies the conditions in the assertion, by Lemma 2.2. In particular, the separable lift \[\pi_{U_{i}}:=\pi_{V_{i}}\circ\alpha\] satisfies the desired conditions. Here we used that \(\alpha\) is separable, which is true because it is unramified at all closed points of \(U_{i}\) and hence necessarily generically unramified.

Lemma 3.5 reduces us to showing that the diagram \[\begin{CD}H^{d}_{c}(\mathring{U}_{i},\omega_{X})@>{q_{\alpha}}>>H^{d}_{c}(\mathring{V}_{i},\omega_{Y})\\ @V{t_{\mathring{U}_{i}}}VV@VV{t_{\mathring{V}_{i}}}V\\ K@=K\end{CD} \tag{3.2.1}\] commutes for all \(i\). Indeed, taking \(\varinjlim_{i}\) then yields Theorem 3.4 (see Definition 2.8).

### Utilizing local cohomology

Consider the diagram from Lemma 3.5 for a fixed \(i\), but omit the index \(i\) in the notation. Thus \(\alpha\colon\mathring{U}\to\mathring{V}\) is the restriction of a surjective finite etale morphism \[\alpha\colon U=\operatorname{Sp}(B)\to V=\operatorname{Sp}(A).\] The associated ring morphism \(A\to B\) is, in particular, finite and flat. We consider the commutative diagram \[\begin{CD}A@>>>B\\ @AAA@AAA\\ T_{d}@=T_{d}\end{CD} \tag{3.3.1}\] (with vertical maps the ring morphisms corresponding to \(\pi_{V}\) and \(\pi_{U}\), and the top map corresponding to \(\alpha\)), where all morphisms are finite and \(A\) and \(B\) are integral domains. This notation is fixed for the remainder of this section. Let \[\{x_{1},\dots,x_{r}\}=\pi_{U}^{-1}(0)\cap\mathring{U}\quad\text{ and }\quad\{x_{1},\dots,x_{r},x_{r+1},\dots,x_{s}\}=\pi_{U}^{-1}(0).\] Note that \(\{\alpha(x_{1}),\dots,\alpha(x_{r})\}\subseteq\pi_{V}^{-1}(0)\cap\mathring{V}\), since \(\pi_{U}=\pi_{V}\circ\alpha\) by Lemma 3.5. Moreover, \(\{\alpha(x_{1}),\dots,\alpha(x_{s})\}=\pi_{V}^{-1}(0)\) since \(\pi_{U}=\pi_{V}\circ\alpha\) and \(\alpha\) is surjective. Denote the cardinality of \(\{\alpha(x_{1}),\dots,\alpha(x_{s})\}\) by \(s^{\prime}\) and let \[\{y_{1},\dots,y_{s^{\prime}}\}=\{\alpha(x_{1}),\dots,\alpha(x_{s})\}=\pi_{V}^{-1}(0)\] with \(y_{i}\neq y_{j}\) for \(i\neq j\). Then \(s^{\prime}\leq s\) and it may happen that \(s^{\prime}<s\). Define \(r^{\prime}\) in the same way for \(\{\alpha(x_{1}),\ldots,\alpha(x_{r})\}\). We bring local cohomology into the game: Using the maps from §§2.3–2.4, expand the diagram (3.2.1) to a larger diagram (3.3.2), in which the lower horizontal map has dense image by Lemma 2.12. We have placed a question mark in the triangle in (3.3.2) for psychological reasons, as a reminder that we need to show that the triangle commutes. The two outer "slices" in (3.3.2) commute by [1, Proposition 4.2.10]. Next, we use the map \(\tau\colon\Omega^{d}_{B/K}\to\Omega^{d}_{A/K}\) from (3.1.4) to obtain an induced map \[H^{d}_{x}(\tau)\colon H^{d}_{x}(\omega_{U})\to H^{d}_{\alpha(x)}(\omega_{V}) \tag{3.3.3}\] for every point \(x\in U\), via Definition 3.6 below. Recall that we use the notation \(H^{d}_{x}(\omega_{U})=H^{d}_{\mathfrak{m}_{x}}(\Omega^{d}_{B/K})\) etc.

**Definition 3.6** (Induced maps on local cohomology in general).: Let \(R\) and \(S\) be rings, \(\varphi\colon R\to S\) a ring morphism, \(\mathfrak{b}\subseteq S\) an ideal, \(M\) an \(R\)-module and \(N\) an \(S\)-module. Let \(\rho\colon N\to M\) be an \(R\)-linear map. Then we define \[H^{j}_{\mathfrak{b}}(\rho)\colon H^{j}_{\mathfrak{b}}(N)\to H^{j}_{\varphi^{-1}(\mathfrak{b})}(M)\] as follows. The inclusion \(\varphi(\varphi^{-1}(\mathfrak{b}))S\subseteq\mathfrak{b}\) implies that \(\Gamma_{\mathfrak{b}}(N)\subseteq\Gamma_{\varphi(\varphi^{-1}(\mathfrak{b}))S}(N)\), hence there is a natural map \[H^{j}_{\mathfrak{b}}(N)\to H^{j}_{\varphi(\varphi^{-1}(\mathfrak{b}))S}(N). \tag{3.3.4}\] Moreover, \[H^{j}_{\varphi(\varphi^{-1}(\mathfrak{b}))S}(N)\cong H^{j}_{\varphi^{-1}(\mathfrak{b})}(N) \tag{3.3.5}\] by independence of base [26, Proposition 2.14 (2)].
Finally, \(\Gamma_{\varphi^{-1}(\mathfrak{b})}\) is a functor on \(R\)-modules and hence gives rise to \[H^{j}_{\varphi^{-1}(\mathfrak{b})}(N)\to H^{j}_{\varphi^{-1}(\mathfrak{b})}(M). \tag{3.3.6}\] The desired map \(H^{j}_{\mathfrak{b}}(\rho)\) is the composite of these three maps.

Choosing \(R=A,S=B,\mathfrak{b}=\mathfrak{m}_{x},M=\Omega^{d}_{A/K},N=\Omega^{d}_{B/K}\) and \(\rho=\tau\) in Definition 3.6 yields \(H^{d}_{x}(\tau)\). For each \(i\in\{1,\ldots,s^{\prime}\}\) we consider the map \[\bigoplus_{x\in\alpha^{-1}(y_{i})}H^{d}_{x}(\omega_{U})\xrightarrow{\sum_{x}H^{d}_{x}(\tau)}H^{d}_{y_{i}}(\omega_{V}).\]

**Lemma 3.7** (Explicit description of \(\sum_{x}H^{d}_{x}(\tau)\)).: _Let \(M=\Omega^{d}_{A/K},N=\Omega^{d}_{B/K}\) and consider the map \(\tau\colon N\to M\) from (3.1.4). Let \(\mathfrak{m}_{y}\subseteq A\) be a maximal ideal that pulls back to \(\mathfrak{m}\) in \(T_{d}\), where \(\mathfrak{m}\) denotes the ideal corresponding to the point \(0\in\mathbb{D}^{d}\). For every \(\mathfrak{m}_{x}\subseteq B\) that pulls back to \(\mathfrak{m}_{y}\), taking completions in the ring diagram (3.3.1) yields the diagram_ \[\begin{CD}A^{\wedge\mathfrak{m}_{y}}@>>>B^{\wedge\mathfrak{m}_{x}}\\ @AAA@AAA\\ T_{d}{}^{\wedge\mathfrak{m}}@=T_{d}{}^{\wedge\mathfrak{m}}\end{CD}\] _where all morphisms are finite. Let \(X_{1},\ldots,X_{d}\) be a system of parameters for \(T_{d}{}^{\wedge\mathfrak{m}}\). Then the images in \(A^{\wedge\mathfrak{m}_{y}}\) resp. \(B^{\wedge\mathfrak{m}_{x}}\) are also a system of parameters and the map_ \[\bigoplus_{x\in\alpha^{-1}(y)}H^{d}_{x}(N)\xrightarrow{\sum_{x}H^{d}_{x}(\tau)}H^{d}_{y}(M)\] _identifies, via Lemma 2.10, with the map_ \[\varinjlim_{\rho}N^{\wedge\mathfrak{m}_{y}}/(X_{1}^{\rho},\ldots,X_{d}^{\rho})\xrightarrow{\tau^{\wedge\mathfrak{m}_{y}}}\varinjlim_{\rho}M^{\wedge\mathfrak{m}_{y}}/(X_{1}^{\rho},\ldots,X_{d}^{\rho}). \tag{3.3.7}\] _More precisely: under the isomorphism \(\bigoplus_{x\in\alpha^{-1}(y)}H^{d}_{\widehat{\mathfrak{m}}_{x}}(\widehat{N})\cong H^{d}_{\widehat{\mathfrak{m}}_{y}}(N^{\wedge\mathfrak{m}_{y}})\) furnished by (3.3.4) and (3.3.5), and under the identifications from Lemma 2.10 on source and target, the map \(\sum_{x}H^{d}_{x}(\tau)\) corresponds to \(\tau^{\wedge\mathfrak{m}_{y}}\) as in (3.3.7)._

Taking direct sums over \(i\in\{1,\ldots,s^{\prime}\}\), the maps \(\sum_{x}H^{d}_{x}(\tau)\) assemble into a map \[\bigoplus_{i=1}^{s}H^{d}_{x_{i}}(\omega_{U})\to\bigoplus_{i=1}^{s^{\prime}}H^{d}_{y_{i}}(\omega_{V}),\] which we denote by \(\oplus_{i}H^{d}_{x_{i}}(\tau)\), abusing the notation. This yields the diagram (3.3.9) with outer semicircle (3.3.10), where question marks are again placed as a reminder that commutativity needs to be shown. We now explain how the proof of Theorem 3.4 reduces to showing the commutativity of the outer semicircle (3.3.10) and the commutativity of the square from diagram (3.3.9), which is then done in Lemma 3.8 and Lemma 3.9 below in the next subsection.

Proof of Theorem 3.4.: We consider the diagram (3.3.9). The objective is to show that the triangle with the question mark commutes. Since the lower horizontal map (call it \(\eta\)) has dense image, it suffices to show that \[t_{\mathring{U}}\circ\eta=t_{\mathring{V}}\circ q_{\alpha}\circ\eta. \tag{3.3.11}\] By the commutativity of the lower "slice" in the diagram, we have \(t_{\mathring{U}}\circ\eta=\sum\operatorname{res}_{x_{i}}\). On the other hand, \(t_{\mathring{V}}\circ q_{\alpha}\circ\eta\) is equal to the composite of \(\sum\operatorname{res}_{y_{i}}\) with \(\oplus_{i}H^{d}_{x_{i}}(\tau)\) by the commutativity of the square (Lemma 3.9 below) and the commutativity of the upper "slice".
But the composite of \(\sum\operatorname{res}_{y_{i}}\) with \(\oplus_{i}H^{d}_{x_{i}}(\tau)\) then coincides with \(\sum\operatorname{res}_{x_{i}}\) by the commutativity of the outer semicircle (Lemma 3.8 below). Hence both sides of (3.3.11) are equal to \(\sum\operatorname{res}_{x_{i}}\), which completes the proof that (3.2.1) commutes and thus the proof of Theorem 3.4.

### Proving the missing lemmas

We complete the proof of Theorem 3.4 with two lemmas:

**Lemma 3.8**.: _The outer semicircle (3.3.10) commutes._

Proof.: Using the commutative diagram from Lemma 2.11 (iii), we can re-write the diagram (3.3.10). To show that the resulting diagram commutes, we expand it so that it has three parts (1), (2) and (3). Then (1) commutes for obvious reasons and (3) commutes by Lemma 3.3. Thus it remains to prove that (2) commutes. Letting \(M=\Omega^{d}_{A/K},N=\Omega^{d}_{B/K}\) and \(\mathfrak{m}\) denote the maximal ideal corresponding to \(0\in\mathbb{D}^{d}\), Lemma 2.11 (ii) says that the square (2) is equivalent to a diagram of the form \[\begin{CD}\varinjlim_{\rho}\bigoplus_{i=1}^{s}\widehat{N_{\mathfrak{m}_{x_{i}}}}/(X^{\rho}_{1},\dots,X^{\rho}_{d})@>>>\varinjlim_{\rho}\bigoplus_{i=1}^{s^{\prime}}\widehat{M_{\mathfrak{m}_{y_{i}}}}/(X^{\rho}_{1},\dots,X^{\rho}_{d})\\ @V{\widetilde{\gamma}_{U}}V{\cong}V@V{\widetilde{\gamma}_{V}}V{\cong}V\\ \varinjlim_{\rho}\widehat{N_{\mathfrak{m}}}/(X^{\rho}_{1},\dots,X^{\rho}_{d})@>>>\varinjlim_{\rho}\widehat{M_{\mathfrak{m}}}/(X^{\rho}_{1},\dots,X^{\rho}_{d}).\end{CD}\tag{3.4.1}\] Moreover, for each \(y_{i}\) we have an isomorphism \(B^{\wedge\mathfrak{m}_{y_{i}}B}\cong\oplus_{x\in\alpha^{-1}(y_{i})}\widehat{B_{\mathfrak{m}_{x}}}\) of \(B\)-modules, whence \[\widehat{N_{\mathfrak{m}_{y_{i}}}}=N\otimes_{B}B^{\wedge\mathfrak{m}_{y_{i}}B}=\bigoplus_{x\in\alpha^{-1}(y_{i})}N\otimes_{B}\widehat{B_{\mathfrak{m}_{x}}}=\bigoplus_{x\in\alpha^{-1}(y_{i})}\widehat{N_{\mathfrak{m}_{x}}}.\] Thus \[\varinjlim_{\rho}\bigoplus_{i=1}^{s}\widehat{N_{\mathfrak{m}_{x_{i}}}}/(X_{1}^{\rho},\dots,X_{d}^{\rho})\cong\varinjlim_{\rho}\bigoplus_{i=1}^{s^{\prime}}\widehat{N_{\mathfrak{m}_{y_{i}}}}/(X_{1}^{\rho},\dots,X_{d}^{\rho})\] and the diagram (3.4.1) becomes a diagram with horizontal maps according to Lemma 3.7 and vertical maps according to (2.3.4) from Lemma 2.11 (ii). This last diagram obviously commutes, so the proof is complete.

**Lemma 3.9**.: _The square from diagram (3.3.9) commutes._

Proof.: Let \(\mathring{\mathcal{V}}:=\pi_{V}^{-1}(\mathring{\mathbb{D}}^{d})\) and \(\mathring{\mathcal{U}}:=\pi_{U}^{-1}(\mathring{\mathbb{D}}^{d})\); then it suffices to show that a certain diagram consisting of two parts (1) and (2) commutes. Now (1) commutes for obvious reasons, so it remains to prove that (2) commutes. Applying Lemma 2.11 (i) to the left column of (2), and the isomorphisms \(H_{c}^{d}(\mathring{\mathcal{V}},\omega_{V})\cong H_{c}^{d}(\mathring{\mathbb{D}}^{d},\pi_{V*}\omega_{V})\) and \(H_{c}^{d}(\mathring{\mathcal{U}},\omega_{U})\cong H_{c}^{d}(\mathring{\mathbb{D}}^{d},\pi_{U*}\omega_{U})\) to the right column, we can re-write (2) as a diagram that commutes by the functoriality of the horizontal maps, completing the proof.

### Some consequences

Let \(\alpha\colon X\to Y\) be a finite etale morphism of smooth \(d\)-dimensional Stein spaces over \(K\) and let \(\mathcal{G}\) be a coherent sheaf on \(Y\). Let \(\xi_{\alpha}\colon\mathcal{G}\to\alpha_{*}\alpha^{*}\mathcal{G}\) be the adjunction morphism and let \((-)^{\vee}=\operatorname{Hom}_{K}^{cont}(-,K)\) denote the continuous dual.
An easy consequence of Theorem 3.4 is:

**Proposition 3.10**.: _The diagram commutes for all \(i\geq 0\), where the horizontal isomorphisms come from the Serre duality pairing._

Proof.: As explained in the proof of [23, Lemma 4.2.8], the assertion follows from Theorem 3.4 and the naturality of the Yoneda-Cartier pairing in the coherent sheaf together with some functoriality properties.

Consider the special case \(\mathcal{G}=\mathcal{O}_{Y}\). Then \(\alpha^{*}\mathcal{O}_{Y}=\mathcal{O}_{X}\) and hence \(\operatorname{Hom}_{X}(\alpha^{*}\mathcal{O}_{Y},\omega_{X})=\operatorname{Hom}_{X}(\mathcal{O}_{X},\omega_{X})=\omega_{X}(X)\), so the commutativity of the above diagram in particular yields:

**Corollary 3.11**.: _The diagram commutes._

Note that the domain of \(t_{\alpha}(Y)\) is indeed \(\alpha_{*}\omega_{X}(Y)=\omega_{X}(X)\).

## 4. Base-changing Beyer's trace map

Throughout this section, we consider the following setting: Let \(K^{\prime}\) be a (not necessarily finite) complete field extension of \(K\) and, for any (separated) rigid space \(X\) over \(K\), let \[X^{\prime}:=X\mathbin{\widehat{\otimes}}_{K}K^{\prime}\] denote the base change of \(X\) to \(K^{\prime}\) as in [1, §9.3.6]. If \(R\) is a \(K\)-affinoid algebra, we accordingly use the notation \[R^{\prime}:=R\mathbin{\widehat{\otimes}}_{K}K^{\prime}.\] Finally, let \(\mathcal{F}\rightsquigarrow\mathcal{F}^{\prime}\) denote the exact "pullback" functor from coherent sheaves on \(X\) to coherent sheaves on \(X^{\prime}\). In general, \(H^{j}_{c}(X,\mathcal{F})\mathbin{\widehat{\otimes}}_{K}K^{\prime}\not\cong H^{j}_{c}(X^{\prime},\mathcal{F}^{\prime})\) even when \(K^{\prime}\) is finite over \(K\), due to the fact that the left-hand side is Hausdorff whereas the right-hand side can be a non-Hausdorff space, cf. [11, Remark 1.11]. However, for a special affinoid wide-open space (resp. a Stein space) \(S\) over \(K\), we discuss comparison maps \[H^{j}_{c}(S,\mathcal{F})\mathbin{\widehat{\otimes}}_{K}K^{\prime}\to H^{j}_{c}(S^{\prime},\mathcal{F}^{\prime})^{\wedge}\] in §4.1 (resp. §4.3) and prove that Beyer's trace map and Serre duality behave well with respect to them.

### Comparison maps for base change

Recall that, for any rigid space \(X\) with a sheaf \(\mathcal{F}\) of abelian groups and \(Z\subseteq X\) a finite union of admissible affinoids, \(H^{j}_{Z}(X,\mathcal{F})\) is computed by deriving the left-exact functor \(\Gamma_{Z}(X,\mathcal{F}):=\ker(\Gamma(X,\mathcal{F})\to\Gamma(X\setminus Z,\mathcal{F})).\) Moreover, \(H^{j}_{c}(X,\mathcal{F})=\varinjlim_{Z}H^{j}_{Z}(X,\mathcal{F})\) where the limit is taken over all subspaces \(Z\) of the above form. We record the following obvious fact for future reference:

**Lemma 4.1**.: _Let \(S\) be a rigid space, \(Z\subseteq S\) a finite union of admissible affinoids and \(\mathcal{F}\) a coherent sheaf on \(S\) such that \(H^{j}(S,\mathcal{F})=0\) for \(j\geq 1\). Then the long exact sequence [10, Remark 1.1.2 (b)] yields topological isomorphisms_ \[H^{j}_{Z}(S,\mathcal{F})\cong H^{j-1}(S\setminus Z,\mathcal{F})\qquad\text{ for }j\geq 2\] _and_ \[H^{1}_{Z}(S,\mathcal{F})\cong H^{0}(S\setminus Z,\mathcal{F})/H^{0}(S,\mathcal{F}).\]

**Proposition 4.2**.: _Let \(Z\) be a connected smooth affinoid space, \(\mathcal{F}\) a coherent sheaf on \(Z\).
Let \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space with associated finite surjective morphism \(\pi\colon Z\to\mathbb{D}^{d}_{K}\), and denote by \(\varpi\colon\mathring{W}\to\mathring{\mathbb{D}}^{d}_{K}\) its restriction to \(\mathring{W}\). Let \(\varepsilon\in(0,1)\) and set \(S:=\mathring{W}\), \(V:=\varpi^{-1}(\mathbb{D}^{d}_{K}(\varepsilon))\) and \(X:=S\setminus V.\) Then:_ 1. \(V^{\prime}={\varpi^{\prime}}^{-1}(\mathbb{D}^{d}_{K^{\prime}}(\varepsilon))\) _and_ \[X^{\prime}=S^{\prime}\setminus V^{\prime}.\] (4.1.1) 2. _There is a natural map_ \[H^{j}_{V}(S,\mathcal{F})\otimes_{K}K^{\prime}\to H^{j}_{V^{\prime}}(S^{\prime},\mathcal{F}^{\prime})\] (4.1.2) _for all_ \(j\geq 0\)_._ 3. _Taking_ \(\varinjlim_{\varepsilon}\) _in (_4.1.2_) yields a natural map_ \[H^{j}_{c}(S,\mathcal{F})\otimes_{K}K^{\prime}\to H^{j}_{c}(S^{\prime},\mathcal{F}^{\prime})\] (4.1.3) _which induces a map on completions_ \[H^{j}_{c}(S,\mathcal{F})\mathbin{\widehat{\otimes}}_{K}K^{\prime}\to H^{j}_{c}(S^{\prime},\mathcal{F}^{\prime})^{\wedge}.\] (4.1.4) 4. _If_ \(K^{\prime}\) _is moreover finite over_ \(K\)_, then all three maps (_4.1.2_), (_4.1.3_) and (_4.1.4_) are isomorphisms._

Proof.: 1. Since extension of scalars is compatible with the formation of fiber products, it is in particular compatible with taking preimages under morphisms. Applying this to \(\varpi\), we find that \[V^{\prime}=(\varpi^{-1}(\mathbb{D}^{d}_{K}(\varepsilon)))^{\prime}={\varpi^{\prime}}^{-1}(\mathbb{D}^{d}_{K^{\prime}}(\varepsilon))\] and \[X^{\prime}=(\varpi^{-1}(\mathring{\mathbb{D}}^{d}_{K}\setminus\mathbb{D}^{d}_{K}(\varepsilon)))^{\prime}={\varpi^{\prime}}^{-1}((\mathring{\mathbb{D}}^{d}_{K}\setminus\mathbb{D}^{d}_{K}(\varepsilon))^{\prime}).\] Next, we claim that \[(\mathring{\mathbb{D}}^{d}_{K}\setminus\mathbb{D}^{d}_{K}(\varepsilon))^{\prime}=\mathring{\mathbb{D}}^{d}_{K^{\prime}}\setminus\mathbb{D}^{d}_{K^{\prime}}(\varepsilon). \tag{4.1.5}\] To see this, note that \[\mathring{\mathbb{D}}^{d}_{K}\setminus\mathbb{D}^{d}_{K}(\varepsilon)=\bigcup_{i,\delta}U_{K,i,\delta}\ \ \text{with}\ \ U_{K,i,\delta}:=\{x\in\mathring{\mathbb{D}}^{d}_{K}\colon\varepsilon<|x_{i}|\leq 1-\delta\}\] where \(i\) runs through \(1,\ldots,d\) and \(\delta\) runs through a zero sequence. Due to how the base change functor is defined, \((\mathring{\mathbb{D}}^{d}_{K}\setminus\mathbb{D}^{d}_{K}(\varepsilon))^{\prime}\) is obtained by gluing the \((U_{K,i,\delta})^{\prime}\). But \((U_{K,i,\delta})^{\prime}=U_{K^{\prime},i,\delta}\), whence (4.1.5) follows. Altogether, we see that \[X^{\prime}={\varpi^{\prime}}^{-1}(\mathring{\mathbb{D}}^{d}_{K^{\prime}}\setminus\mathbb{D}^{d}_{K^{\prime}}(\varepsilon))=S^{\prime}\setminus{\varpi^{\prime}}^{-1}(\mathbb{D}^{d}_{K^{\prime}}(\varepsilon))=S^{\prime}\setminus V^{\prime},\] which proves (4.1.1). 2. By definition, \(H^{0}_{V}(S,\mathcal{F})=\ker(H^{0}(S,\mathcal{F})\to H^{0}(X,\mathcal{F}))\) and \(H^{0}_{V^{\prime}}(S^{\prime},\mathcal{F}^{\prime})=\ker(H^{0}(S^{\prime},\mathcal{F}^{\prime})\to H^{0}(X^{\prime},\mathcal{F}^{\prime}))\), where the latter equality uses (4.1.1). For any rigid space \(Y\) and any coherent sheaf \(\mathcal{G}\) on \(Y\), we consider the natural map \[H^{0}(Y,\mathcal{G})\otimes_{K}K^{\prime}\to H^{0}(Y,\mathcal{G})\,\widehat{\otimes}_{K}\,K^{\prime}=H^{0}(Y^{\prime},\mathcal{G}^{\prime})\] where the last equality is due to the explicit construction of \(\mathcal{G}^{\prime}\).
This yields the vertical maps in the commutative diagram \[\begin{CD}H^{0}(S^{\prime},\mathcal{F}^{\prime})@>>>H^{0}(X^{\prime},\mathcal{F}^{\prime})\\ @AAA@AAA\\ H^{0}(S,\mathcal{F})\otimes_{K}K^{\prime}@>>>H^{0}(X,\mathcal{F})\otimes_{K}K^{\prime}\end{CD} \tag{4.1.6}\] By restricting the left vertical map to the kernel of the lower horizontal map (which coincides with \(H^{0}_{V}(S,\mathcal{F})\otimes_{K}K^{\prime}\) due to the flatness of \(K\to K^{\prime}\)) and observing that this map then lands in the kernel of the upper horizontal map, we obtain the desired map (4.1.2) for \(j=0\). For \(j\geq 1\) we can apply Lemma 4.1 and use (4.1.1) to see that it is equivalent to produce a natural map \[H^{j-1}(X,\mathcal{F})\otimes_{K}K^{\prime}\to H^{j-1}(X^{\prime},\mathcal{F}^{\prime}). \tag{4.1.7}\] To construct the map (4.1.7), we imitate the proof of [Stacks, Tag 02KH], which calls for a finite Leray covering of \(X\). To see that there exists a finite Leray covering of \(X\), first note that \(\varpi\) is a finite morphism. Indeed, since \(\mathring{W}\) is a union of connected components of \(\pi^{-1}(\mathring{\mathbb{D}}^{d}_{K})\) and hence a clopen subspace, the inclusion \(\mathring{W}\hookrightarrow\pi^{-1}(\mathring{\mathbb{D}}^{d}_{K})\) is a closed immersion and in particular a finite map, whence its composite with the finite map \(\pi^{-1}(\mathring{\mathbb{D}}^{d}_{K})\to\mathring{\mathbb{D}}^{d}_{K}\) is also finite, i.e. \(\varpi\colon\mathring{W}\to\mathring{\mathbb{D}}^{d}_{K}\) is finite. Now, if we let \(\mathfrak{W}\) be the following finite Leray cover of \(\mathring{\mathbb{D}}^{d}_{K}\setminus\mathbb{D}^{d}_{K}(\varepsilon)\) \[\mathring{\mathbb{D}}^{d}_{K}\setminus\mathbb{D}^{d}_{K}(\varepsilon)=\bigcup_{i=1}^{d}U_{i,\varepsilon}\ \ \text{where}\ \ U_{i,\varepsilon}:=\{x\in\mathring{\mathbb{D}}^{d}_{K}\colon\varepsilon<|x_{i}|\}, \tag{4.1.8}\] then \(\mathfrak{U}:=\varpi^{-1}\mathfrak{W}\) is a finite cover of \(X\) which is a Leray cover. Indeed, the latter is due to the fact that, given a space with vanishing higher coherent cohomology, its preimage under any finite morphism also has vanishing higher cohomology. Next, the explicit construction of \(\mathcal{F}^{\prime}\) and the fact that the completed tensor product commutes with finite products yields the following relation between Cech complexes \[\check{C}^{j-1}(\mathfrak{U},\mathcal{F})\,\widehat{\otimes}_{K}\,K^{\prime}=\check{C}^{j-1}(\mathfrak{U}^{\prime},\mathcal{F}^{\prime}). \tag{4.1.9}\] By precomposing with the map \[\check{C}^{j-1}(\mathfrak{U},\mathcal{F})\otimes_{K}K^{\prime}\to\check{C}^{j-1}(\mathfrak{U},\mathcal{F})\,\widehat{\otimes}_{K}\,K^{\prime} \tag{4.1.10}\] we obtain a natural map \[\check{C}^{j-1}(\mathfrak{U},\mathcal{F})\otimes_{K}K^{\prime}\to\check{C}^{j-1}(\mathfrak{U}^{\prime},\mathcal{F}^{\prime}). \tag{4.1.11}\] The cohomology of the left-hand side in (4.1.11) is \(H^{j-1}(\mathfrak{U},\mathcal{F})\otimes_{K}K^{\prime}\) (because \(K\to K^{\prime}\) is flat), so taking cohomology in (4.1.11) yields the desired natural map \[H^{j-1}(\mathfrak{U},\mathcal{F})\otimes_{K}K^{\prime}\to H^{j-1}(\mathfrak{U}^{\prime},\mathcal{F}^{\prime}). \tag{4.1.12}\] Since \(\mathfrak{U}\) and \(\mathfrak{U}^{\prime}\) are Leray covers of \(X\) and \(X^{\prime}\), respectively, this is precisely a map as in (4.1.7). 3. We only need to see that the \(V\) (resp. the \(V^{\prime}\)), for varying \(0<\varepsilon<1\), form a cofinal subfamily of the family of all finite unions of admissible affinoids in \(S\) (resp. in \(S^{\prime}\)). But the \(\mathbb{D}^{d}_{K}(\varepsilon)\) form a cofinal subfamily for \(\mathring{\mathbb{D}}^{d}_{K}\) and one checks that taking preimages under any finite morphism respects such cofinality, which yields the assertion for \(V\).
Moreover, the same argument over \(K^{\prime}\), bearing in mind that \(V^{\prime}={\varpi^{\prime}}^{-1}(\mathbb{D}^{d}_{K^{\prime}}(\varepsilon))\) by (i), yields the assertion for \(V^{\prime}\). 4. Now assume that \(K^{\prime}\) is finite over \(K\). Obviously, it suffices to show that the map (4.1.2) is an isomorphism, or, equivalently, that (4.1.7) is an isomorphism, or, equivalently, that (4.1.12) is an isomorphism. For this, we first claim that the map (4.1.10) is an isomorphism. Indeed, \(\check{C}^{j-1}(\mathfrak{U},\mathcal{F})\) is a Frechet space (which is in particular Hausdorff and complete) and the functors \(K^{\prime}\otimes_{K}(-)\) and \(K^{\prime}\mathbin{\widehat{\otimes}}_{K}(-)\) are isomorphic on complete locally convex Hausdorff spaces due to \(K^{\prime}\) being finite over \(K\), whence the claim follows. Hence the map (4.1.11) is also an isomorphism, which, by passage to cohomology, yields that (4.1.12) is now an isomorphism. This settles the assertion for \(j\geq 1\). The case \(j=0\) follows from the fact that both vertical maps in the diagram (4.1.6) are now isomorphisms, again because the functors \(K^{\prime}\otimes_{K}(-)\) and \(K^{\prime}\mathbin{\widehat{\otimes}}_{K}(-)\) are isomorphic on complete locally convex Hausdorff spaces.

### Special affinoid wide-opens under base change

In this subsection we prove preparatory results for the next subsection.

**Lemma 4.3**.: _If \(R\to S\) is a finite (resp. finite injective) morphism of affinoid \(K\)-algebras, then the base change \(R^{\prime}=K^{\prime}\mathbin{\widehat{\otimes}}_{K}R\to K^{\prime}\mathbin{\widehat{\otimes}}_{K}S=S^{\prime}\) is also finite (resp. finite injective)._

Proof.: The map \(R^{\prime}=K^{\prime}\mathbin{\widehat{\otimes}}_{K}R\to K^{\prime}\mathbin{\widehat{\otimes}}_{K}S=S^{\prime}\) can be identified with the map \(R^{\prime}\mathbin{\widehat{\otimes}}_{R}R\to R^{\prime}\mathbin{\widehat{\otimes}}_{R}S\) by the associativity of the completed tensor product [1, 2.1.7/Proposition 7]. This latter map arises by viewing \(R\to S\) as a map between finite \(R\)-modules and applying \(R^{\prime}\otimes_{R}(-)\) to it, because the functors \(R^{\prime}\otimes_{R}(-)\) and \(R^{\prime}\mathbin{\widehat{\otimes}}_{R}(-)\) are isomorphic on finite \(R\)-modules by [11, Lemma 1.1.5]. But \(R^{\prime}\otimes_{R}(-)\) preserves finiteness (and injectivity too, since \(R\to R^{\prime}\) is flat by [11, Lemma 1.1.5]), so we are done.

**Lemma 4.4**.: _Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space and \(\varphi\colon T_{d}\hookrightarrow R\) a finite injective separable morphism. Then the morphism \(\varphi^{\prime}\colon T_{d}(K^{\prime})\hookrightarrow R^{\prime}\) obtained by base change to \(K^{\prime}\) is finite, injective and separable._

Proof.: The morphism \(\varphi^{\prime}\colon T_{d}(K^{\prime})\hookrightarrow R^{\prime}\) obtained by base change to \(K^{\prime}\) is finite injective by Lemma 4.3. Since \(R\) is an integral domain, \(R\) has pure dimension \(d\), so \(R^{\prime}\) also has pure dimension \(d\) by [1, Lemma 2.5]. Since \(R^{\prime}\) has pure dimension \(d\) and is moreover reduced, every finite injective morphism \(T_{d}(K^{\prime})\hookrightarrow R^{\prime}\) is torsion-free. In particular, \(\varphi^{\prime}\) is torsion-free, so it induces a morphism \[Q(T_{d}(K^{\prime}))\hookrightarrow Q(R^{\prime}).\] We need to prove that this morphism is etale.
Since \(Q(R)\) is a separable field extension of \(Q(T_{d})\) by assumption, the morphism \[Q(T_{d})\hookrightarrow Q(R)\] is etale. Setting \(T_{d}^{\prime}:=T_{d}(K^{\prime})\) and then tensoring the above morphism with \(Q(T_{d}^{\prime})\), it follows by [10, 11] that the structure morphism \[Q(T_{d}^{\prime})\hookrightarrow Q(T_{d}^{\prime})\otimes_{Q(T_{d})}Q(R)\] is etale too. We will show that \[Q(T_{d}^{\prime})\otimes_{Q(T_{d})}Q(R)\cong Q(R^{\prime}), \tag{4.2.1}\] whence \(Q(R^{\prime})\) is etale over \(Q(T_{d}^{\prime})\), completing the proof. It remains to prove (4.2.1). As in the proof of Lemma 4.3, we see that the map \(T_{d}^{\prime}\hookrightarrow R^{\prime}\) arises by applying \(T_{d}^{\prime}\otimes_{T_{d}}(-)\) to the map \(T_{d}\hookrightarrow R\), i.e. \[R^{\prime}=T_{d}^{\prime}\otimes_{T_{d}}R.\] With the multiplicative subset \(S:=T_{d}\setminus\{0\}\subseteq T_{d}\), we find that \[S^{-1}R^{\prime}=S^{-1}(T_{d}^{\prime})\otimes_{S^{-1}T_{d}}S^{-1}R=S^{-1}(T_{d}^{\prime})\otimes_{Q(T_{d})}Q(R)\] where the last equality holds because \(S^{-1}T_{d}=Q(T_{d})\) by definition and \(S^{-1}R=Q(R)\) by Lemma 3.2. On the other hand, we also wish to apply Lemma 3.2 to the finite ring extension \(\varphi^{\prime}\colon T_{d}^{\prime}\hookrightarrow R^{\prime}\). This is possible since \(\varphi^{\prime}\) is torsion-free, as we have established above. Hence Lemma 3.2 tells us that \(Q(R^{\prime})=T^{-1}R^{\prime}\) with the multiplicative set \(T:=T_{d}^{\prime}\setminus\{0\}\). But obviously \(T^{-1}R^{\prime}=T^{-1}(S^{-1}R^{\prime})\). Altogether, we have \[\begin{split}Q(R^{\prime})=T^{-1}(S^{-1}R^{\prime})&=Q(T_{d}^{\prime})\otimes_{T_{d}^{\prime}}S^{-1}R^{\prime}\\ &=Q(T_{d}^{\prime})\otimes_{T_{d}^{\prime}}(S^{-1}(T_{d}^{\prime})\otimes_{Q(T_{d})}Q(R))\\ &=(Q(T_{d}^{\prime})\otimes_{T_{d}^{\prime}}S^{-1}(T_{d}^{\prime}))\otimes_{Q(T_{d})}Q(R)\\ &=S^{-1}Q(T_{d}^{\prime})\otimes_{Q(T_{d})}Q(R)\\ &=Q(T_{d}^{\prime})\otimes_{Q(T_{d})}Q(R),\end{split}\] where the last equality holds because \(S^{-1}Q(T_{d}^{\prime})=Q(T_{d}^{\prime})\). This proves (4.2.1) and completes the proof of the lemma.

**Corollary 4.5**.: _Consider again the setting of Lemma 4.4. Let \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{s}\) denote the minimal prime ideals in \(R^{\prime}\). Then \(R^{\prime}=\prod_{i=1}^{s}R^{\prime}/\mathfrak{p}_{i}\) and composing \(\varphi^{\prime}\) with the projection \(R^{\prime}\twoheadrightarrow R^{\prime}/\mathfrak{p}_{i}\) yields a finite injective map \(\varphi^{\prime}_{i}\colon T_{d}(K^{\prime})\hookrightarrow R^{\prime}/\mathfrak{p}_{i}\) for all \(i\). Hence there is an induced finite field extension \(Q(T_{d}(K^{\prime}))\hookrightarrow Q(R^{\prime}/\mathfrak{p}_{i})\) for each \(i\), and these extensions are all separable. The natural maps_ \[Q(R^{\prime})\to R^{\prime}_{\mathfrak{p}_{1}}\times\ldots\times R^{\prime}_{\mathfrak{p}_{s}}\to Q(R^{\prime}/\mathfrak{p}_{1})\times\ldots\times Q(R^{\prime}/\mathfrak{p}_{s}) \tag{4.2.2}\] _are isomorphisms._

Proof.: Since \(Z^{\prime}\) is smooth, the local rings \(\mathcal{O}_{Z^{\prime},z^{\prime}}\) are integral domains for all \(z^{\prime}\in Z^{\prime}\). This tells us that the irreducible components of \(\operatorname{Spec}(R^{\prime})\) are pairwise disjoint and hence they coincide with the connected components of \(\operatorname{Spec}(R^{\prime})\). In other words, \(R^{\prime}=\prod_{i=1}^{s}R^{\prime}/\mathfrak{p}_{i}\).
Because \(R^{\prime}\) is reduced, \(\mathfrak{p}_{1}\cup\ldots\cup\mathfrak{p}_{s}\) is the set of zero divisors in \(R^{\prime}\) by [Stacks, Tag 00EW]. Since \(\varphi^{\prime}\) is torsion-free (as we have established in the proof of Lemma 4.4), it pulls back zero divisors to zero, so in particular it pulls back each \(\mathfrak{p}_{i}\) to zero, i.e. each \(\varphi^{\prime}_{i}\) is indeed injective. The first map in (4.2.2) is an isomorphism by [Stacks, Tag 02LX]. The localization of a reduced ring at a minimal prime ideal is a field by [Stacks, Tag 00EU], hence \(R^{\prime}_{\mathfrak{p}_{i}}\) is a field and the natural map \(R^{\prime}_{\mathfrak{p}_{i}}\to Q(R^{\prime}/\mathfrak{p}_{i})\) is an isomorphism for each \(i\), so it follows that the second map in (4.2.2) is an isomorphism. Finally, in general, if \(F\) is a field and \(A=A_{1}\times\ldots\times A_{n}\) is a finite product of \(F\)-algebras, then \(A\) is etale over \(F\) if and only if each \(A_{i}\) is etale over \(F\), see [Stacks, Tag 00U2]. Therefore, Lemma 4.4 tells us (equivalently) that the finite field extensions \(Q(T_{d}(K^{\prime}))\hookrightarrow Q(R^{\prime}/\mathfrak{p}_{i})\) induced by \(\varphi^{\prime}_{i}\) are separable for all \(i\).

**Corollary 4.6**.: _Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space, \(\mathring{W}\subseteq Z\) be a special affinoid wide-open space and \(\pi\colon Z\to\mathbb{D}_{K}^{d}\) an associated finite surjective separable morphism. Consider the morphism_ \[\pi^{\prime}\colon Z^{\prime}\to\mathbb{D}_{K^{\prime}}^{d}\] _obtained by base change to \(K^{\prime}\). Then:_ 1. \(\pi^{\prime}\) _is finite, surjective and separable._ 2. _Letting \(Z^{\prime}_{1},\ldots,Z^{\prime}_{s}\) denote the connected components of \(Z^{\prime}\), the restriction \(\pi^{\prime}_{i}\colon Z^{\prime}_{i}\to\mathbb{D}_{K^{\prime}}^{d}\) of \(\pi^{\prime}\) to \(Z^{\prime}_{i}\) is finite, surjective and separable for each \(i\)._ 3. \(\mathring{W}^{\prime}\subseteq Z^{\prime}\) _is a finite union of connected components of \({\pi^{\prime}}^{-1}(\mathring{\mathbb{D}}_{K^{\prime}}^{d})\)._

Proof.: Assertion (i) follows from Lemma 4.4 and assertion (ii) follows from Corollary 4.5. It remains to prove (iii), i.e. that \(\mathring{W}^{\prime}\) embeds into \(Z^{\prime}\) as a finite union of connected components of \({\pi^{\prime}}^{-1}(\mathring{\mathbb{D}}_{K^{\prime}}^{d})\). Setting \(\mathring{Z}:=\pi^{-1}(\mathring{\mathbb{D}}_{K}^{d})\) and \(\mathring{Z}^{\prime}:={\pi^{\prime}}^{-1}(\mathring{\mathbb{D}}_{K^{\prime}}^{d})\), we find that \((\mathring{Z})^{\prime}=\mathring{Z}^{\prime}\) since extension of scalars is compatible with taking preimages under morphisms. The space \(\mathring{W}\) is a finite union of connected components of \(\mathring{Z}\), so in particular it is clopen in \(\mathring{Z}\). Thus we can apply [Con99, Lemma 3.1.1] to the open immersion \(\mathring{W}\hookrightarrow\mathring{Z}\) to deduce that \(\mathring{W}^{\prime}\to\mathring{Z}^{\prime}\) is an open immersion too. On the other hand, since base change takes closed immersions to closed immersions, we see that \(\mathring{W}^{\prime}\to\mathring{Z}^{\prime}\) is also a closed immersion, so \(\mathring{W}^{\prime}\) is clopen in \(\mathring{Z}^{\prime}\). Hence \(\mathring{W}^{\prime}\) is a union of connected components of \(\mathring{Z}^{\prime}\) and the assertion follows.
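To make Corollary 4.6 concrete, here is a simple example (ours, not from the source): let \(L/K\) be a finite Galois extension of degree \(n\), let \(Z=\operatorname{Sp}(L\langle t\rangle)\), and let \(\pi\colon Z\to\mathbb{D}^{1}_{K}\) be induced by \(K\langle t\rangle\hookrightarrow L\langle t\rangle\), which is finite, surjective and separable. Base-changing to \(K^{\prime}=L\) gives \[Z^{\prime}=\operatorname{Sp}(L\mathbin{\widehat{\otimes}}_{K}L\langle t\rangle)\cong\coprod_{i=1}^{n}\operatorname{Sp}(L\langle t\rangle),\] since \(L\otimes_{K}L\cong L^{n}\). Each connected component maps isomorphically onto \(\mathbb{D}^{1}_{L}\), and for \(\mathring{W}=\pi^{-1}(\mathring{\mathbb{D}}^{1}_{K})\) the base change \(\mathring{W}^{\prime}\) is the disjoint union of the \(n\) copies of \(\mathring{\mathbb{D}}^{1}_{L}\), a finite union of connected components of \({\pi^{\prime}}^{-1}(\mathring{\mathbb{D}}^{1}_{L})\), as asserted.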
Due to Corollary 4.6, we can construct a map \[\sigma=\sigma_{\pi^{\prime}}\colon\Omega^{d}_{R^{\prime}/K^{\prime}}\to\Omega^{d}_{T^{\prime}_{d}/K^{\prime}} \tag{4.2.3}\] by invoking the same formula as in Definition 2.6, but now using the trace \(\operatorname{Tr}_{L^{\prime}/E^{\prime}}\) of the finite etale ring map \(E^{\prime}:=Q(T^{\prime}_{d})\hookrightarrow L^{\prime}:=Q(R^{\prime})\) induced by \(\pi^{\prime}\colon Z^{\prime}\to\mathbb{D}^{d}_{K^{\prime}}\).

**Remark 4.7**.: In the decomposition \(L^{\prime}=L^{\prime}_{1}\times\ldots\times L^{\prime}_{s}\) according to (4.2.2), each \(L^{\prime}_{i}\) is finite separable over \(E^{\prime}\), so there is the map \(\Omega^{d}_{L^{\prime}/K^{\prime}}=\bigoplus_{i=1}^{s}\Omega^{d}_{L^{\prime}_{i}/K^{\prime}}\xrightarrow{\sum_{i}\sigma_{i}}\Omega^{d}_{E^{\prime}/K^{\prime}}\) which is readily seen to coincide with \(\sigma\) when restricted to \(\Omega^{d}_{R^{\prime}/K^{\prime}}\).

**Lemma 4.8**.: _The diagram_ \[\begin{CD}\Omega^{d}_{R/K}\mathbin{\widehat{\otimes}}_{K}K^{\prime}@>{(\sigma_{\pi})^{\prime}}>>\Omega^{d}_{T_{d}/K}\mathbin{\widehat{\otimes}}_{K}K^{\prime}\\ @V{\cong}VV@VV{\cong}V\\ \Omega^{d}_{R^{\prime}/K^{\prime}}@>{\sigma_{\pi^{\prime}}}>>\Omega^{d}_{T^{\prime}_{d}/K^{\prime}}\end{CD}\] _commutes. In other words, \((\sigma_{\pi})^{\prime}=\sigma_{\pi^{\prime}}\) under the canonical identifications given by the vertical isomorphisms._

Proof.: We write \(L=Q(R),E=Q(T_{d})\) and \(L^{\prime}=Q(R^{\prime}),E^{\prime}=Q(T^{\prime}_{d})\) for brevity. We can view \((\sigma_{\pi})^{\prime}\) as the composite2

Footnote 2: One obtains the third map in the composite by taking the completion of the map \((L\otimes_{E}\Omega^{d}_{E/K})\otimes_{K}K^{\prime}\to L\otimes_{E}\Omega^{d}_{E^{\prime}/K^{\prime}}\) and observing that \(L\mathbin{\widehat{\otimes}}_{E}\Omega^{d}_{E^{\prime}/K^{\prime}}=L\otimes_{E}\Omega^{d}_{E^{\prime}/K^{\prime}}\) since both factors in the tensor product are finite over \(E\).

\[\Omega^{d}_{R/K}\mathbin{\widehat{\otimes}}_{K}K^{\prime}\to\Omega^{d}_{L/K}\mathbin{\widehat{\otimes}}_{K}K^{\prime}=(L\otimes_{E}\Omega^{d}_{E/K})\mathbin{\widehat{\otimes}}_{K}K^{\prime}\to L\otimes_{E}\Omega^{d}_{E^{\prime}/K^{\prime}}\xrightarrow{\operatorname{Tr}_{L/E}\otimes\operatorname{id}}\Omega^{d}_{E^{\prime}/K^{\prime}}\] and \(\sigma_{\pi^{\prime}}\) as the composite \[\Omega^{d}_{R^{\prime}/K^{\prime}}\to\Omega^{d}_{L^{\prime}/K^{\prime}}\xleftarrow{\sim}L^{\prime}\otimes_{E^{\prime}}\Omega^{d}_{E^{\prime}/K^{\prime}}\xrightarrow{\operatorname{Tr}_{L^{\prime}/E^{\prime}}\otimes\operatorname{id}}\Omega^{d}_{E^{\prime}/K^{\prime}}\] whence we need to show that the diagram formed by these two composites commutes. Under the identification \[L^{\prime}=L\otimes_{E}E^{\prime}\] from the proof of Lemma 4.4, we find that \(\operatorname{Tr}_{L^{\prime}/E^{\prime}}=\operatorname{Tr}_{L/E}\otimes\operatorname{id}_{E^{\prime}}\). Thus we can replace the last arrow in the lower line of the above diagram with \(L\otimes_{E}E^{\prime}\otimes_{E^{\prime}}\Omega^{d}_{E^{\prime}/K^{\prime}}\xrightarrow{\operatorname{Tr}_{L/E}\otimes\operatorname{id}_{E^{\prime}}\otimes\operatorname{id}}\Omega^{d}_{E^{\prime}/K^{\prime}}\) to obtain a diagram which now obviously commutes, completing the proof.

### Base change results for the trace map and for Serre duality

We keep the notation of the previous subsection: Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space, \(\mathring{W}\subseteq Z\) a special affinoid wide-open space and \(\pi\colon Z\to\mathbb{D}^{d}_{K}\) an associated finite surjective separable morphism. Let \(\pi^{\prime}\colon Z^{\prime}\to\mathbb{D}^{d}_{K^{\prime}}\) denote the morphism obtained by base change to \(K^{\prime}\).
In this setting, we can use \(\sigma_{\pi^{\prime}}\) from (4.2.3) to construct a trace map \[t_{\mathring{W}^{\prime}}=t_{\pi^{\prime}}\colon H^{d}_{c}(\mathring{W}^{\prime},\omega_{Z^{\prime}})\to K^{\prime}\] exactly as in Definition 2.7 (even though \(\mathring{W}^{\prime}\) is not necessarily connected). Letting \(Z^{\prime}_{1},\dots,Z^{\prime}_{s}\) denote the connected components of \(Z^{\prime}\), \(\mathring{Z}^{\prime}_{i}:={\pi^{\prime}_{i}}^{-1}(\mathring{\mathbb{D}}^{d}_{K^{\prime}})\) and \(\mathring{W}^{\prime}_{i}\) denote the union of those connected components of \(\mathring{W}^{\prime}\) that are contained in \(\mathring{Z}^{\prime}_{i}\), we find a commutative diagram which tells us that \(t_{\pi^{\prime}}=\sum_{i}t_{\pi^{\prime}_{i}}\) and in particular shows that \(t_{\pi^{\prime}}\) does not depend on \(\pi^{\prime}\) (since we know that each \(t_{\pi^{\prime}_{i}}\) does not depend on \(\pi^{\prime}_{i}\)). We are now ready to prove the main results of this section, first for special affinoid wide-opens and then for Stein spaces in general.

**Proposition 4.9**.: _Let \(Z=\operatorname{Sp}(R)\) be a connected smooth affinoid space of dimension \(d\), \(\mathring{W}\subseteq Z\) a special affinoid wide-open space. Then the following diagram commutes:_ \[\begin{CD}K^{\prime}\otimes_{K}H^{d}_{c}(\mathring{W},\omega_{Z})@>{(4.1.3)}>>H^{d}_{c}(\mathring{W}^{\prime},\omega_{Z^{\prime}})\\ @V{\operatorname{id}_{K^{\prime}}\otimes t_{\mathring{W}}}VV@VV{t_{\mathring{W}^{\prime}}}V\\ K^{\prime}@=K^{\prime}\end{CD} \tag{4.3.1}\]

Proof.: First we consider the special case \(\mathring{W}=\mathring{\mathbb{D}}^{d}_{K},Z=\mathbb{D}^{d}_{K}\) with coordinates \(X=(X_{1},\dots,X_{d})\). Then there is an isomorphism \(H^{d}_{c}(\mathring{\mathbb{D}}^{d}_{K},\omega_{\mathbb{D}^{d}_{K}})\cong K\langle X^{-1}\rangle^{\dagger}\cdot\frac{dX}{X}\). In particular, \(H^{d}_{c}(\mathring{\mathbb{D}}^{d}_{K^{\prime}},\omega_{\mathbb{D}^{d}_{K^{\prime}}})\) is Hausdorff and complete and the diagram (4.3.1) in the assertion is just the diagram with maps according to (2.2.1). But this diagram obviously commutes. Now we consider a general special affinoid wide-open space \(\mathring{W}\subseteq Z\) and an associated finite surjective separable morphism \(\pi\colon Z\to\mathbb{D}_{K}^{d}\). We have to show that the outer contour of the diagram commutes, where we have written \((-)_{K^{\prime}}\) instead of \(K^{\prime}\otimes_{K}(-)\) in the lower line, for ease of notation. The left-hand square certainly commutes, since the first vertical map (from the left) is just the restriction of the second map to a direct summand. Next, we claim that the middle square commutes. To see this, we first consider any given \(\varepsilon\in(0,1)\) and let \(\mathfrak{W}\) be the finite Leray cover of \(\mathring{\mathbb{D}}_{K}^{d}\setminus\mathbb{D}_{K}^{d}(\varepsilon)\) defined in (4.1.8). Then we recall from (the proof of) Proposition 4.2 that the middle square is obtained by taking \(\varinjlim_{\varepsilon}\) of the corresponding diagrams of Cech cohomology groups, which obviously commute, whence our claim follows. Similarly, the commutativity of the right-hand square above follows from the commutativity of the diagram in Lemma 4.8. Finally, the right-hand triangle commutes by what we have already discussed. The assertion follows.

**Theorem 4.10**.: _Let \(X\) be a connected smooth Stein space of dimension \(d\).
Choosing an admissible open cover \(\{\mathring{W}_{i}\}_{i\in\mathbb{N}}\) of \(X\) consisting of special affinoid wide-open subsets (as in Lemma 2.5) and taking the limit over all the maps \(K^{\prime}\otimes_{K}H^{d}_{c}(\mathring{W}_{i},\omega_{X})\to H^{d}_{c}(\mathring{W}^{\prime}_{i},\omega_{X^{\prime}})\) yields a map_ \[K^{\prime}\otimes_{K}H^{d}_{c}(X,\omega_{X})\to H^{d}_{c}(X^{\prime},\omega_{X^{\prime}}) \tag{4.3.2}\] _which is in fact canonical and makes the following diagram commute:_ \[\begin{CD}K^{\prime}\otimes_{K}H^{d}_{c}(X,\omega_{X})@>{(4.3.2)}>>H^{d}_{c}(X^{\prime},\omega_{X^{\prime}})\\ @V{\operatorname{id}_{K^{\prime}}\otimes t_{X}}VV@VV{t_{X^{\prime}}}V\\ K^{\prime}@=K^{\prime}\end{CD}\]

Proof.: First of all, we note that the isomorphism \[K^{\prime}\otimes_{K}\varinjlim_{i}H^{d}_{c}(\mathring{W}_{i},\omega_{X})\cong\varinjlim_{i}K^{\prime}\otimes_{K}H^{d}_{c}(\mathring{W}_{i},\omega_{X})\] is topological. Now, the map (4.3.2) is by definition the unique map that makes the left-hand rectangle of the relevant diagram commute, and we have to show that the resulting outer contour of that diagram commutes (cf. Definition 2.8). But this is immediate, since the right-hand triangle commutes by Proposition 4.9. It only remains to prove that the map (4.3.2) is canonical, i.e. independent of the choice of the cover \(\{\mathring{W}_{i}\}_{i\in\mathbb{N}}\). Given another cover \(\{\mathring{V}_{i}\}_{i\in\mathbb{N}}\) of \(X\) as in the assertion, we can assume that \(\mathring{V}_{i}\subseteq\mathring{W}_{i}\) (otherwise replace \(\mathring{V}_{i}\) by \(\mathring{V}_{i}\cap\mathring{W}_{i}\), which is again special affinoid wide-open by [1, Lemma 5.1.3]). Then we need to show that the corresponding diagram commutes. Granting that the map \(\mathring{V}_{i}^{\prime}\to\mathring{W}_{i}^{\prime}\) is an open immersion, we find induced maps \(H^{d}_{c}(\mathring{V}_{i}^{\prime},\omega_{X^{\prime}})\to H^{d}_{c}(\mathring{W}_{i}^{\prime},\omega_{X^{\prime}})\) whose limit over all \(i\) makes the outer contour and the upper semi-circle in the extended diagram commute, whence the desired commutativity of the rectangle follows. It remains to check that \(\mathring{V}_{i}^{\prime}\to\mathring{W}_{i}^{\prime}\) is an open immersion. Dropping the index \(i\) for notational convenience, it suffices to show that \(\mathring{W}^{\prime}\to X^{\prime}\) is an open immersion (since then \(\mathring{V}^{\prime}\hookrightarrow X^{\prime}\) is an open immersion too, and \(\mathring{V}^{\prime}\) lands in the admissible open \(\mathring{W}^{\prime}\subseteq X^{\prime}\)). Adopting our standard notation \(\mathring{W}\subseteq Z\) and \(\pi\colon Z\to\mathbb{D}^{d}_{K}\), the desired conclusion follows since \(\mathring{W}^{\prime}\) is an admissible open in \({\pi^{\prime}}^{-1}(\mathring{\mathbb{D}}^{d}_{K^{\prime}})\) by Corollary 4.6 (iii), \({\pi^{\prime}}^{-1}(\mathring{\mathbb{D}}^{d}_{K^{\prime}})\) is certainly an admissible open in \(Z^{\prime}\), and \(Z^{\prime}\) is an admissible open in \(X^{\prime}\) by the explicit construction of \(X^{\prime}\).

Next, we prove that the Yoneda-Cartier pairing is also compatible with base change:

**Proposition 4.11**.: _Let \(X\) be a smooth rigid Stein \(K\)-space of dimension \(d\) and let \(\mathcal{F}\) be a coherent sheaf on \(X\). Then the following diagram commutes for all \(i\geq 0\):_

Proof.: Let \(\alpha_{i,\mathcal{F}}\colon H^{d-i}_{c}(X,\mathcal{F})\to H^{d-i}_{c}(X^{\prime},\mathcal{F}^{\prime})\) denote the comparison map for base change. We will prove the following equivalent formulation of the theorem: the diagram commutes for all \(i\geq 0\).
We can view the content of this diagram as giving two maps from the \(\delta\)-functor \(\operatorname{Ext}^{i}_{X}(-,\omega_{X})\) to the \(\delta\)-functor \(\operatorname{Hom}_{K}(H^{d-i}_{c}(X,-),H^{d}_{c}(X^{\prime},\omega_{X^{\prime}}))\), and we want to prove that these maps coincide. Note that the latter is indeed a \(\delta\)-functor, since it is the composite \(\operatorname{Hom}_{K}(-,H^{d}_{c}(X^{\prime},\omega_{X^{\prime}}))\circ H^{d-i}_{c}(X,-)\) of the \(\delta\)-functor \(H^{d-i}_{c}(X,-)\) with the exact functor \(\operatorname{Hom}_{K}(-,H^{d}_{c}(X^{\prime},\omega_{X^{\prime}}))\). Now, since \(\operatorname{Ext}^{i}_{X}(-,\omega_{X})\) is a universal \(\delta\)-functor, it suffices to show that the mentioned maps coincide for \(i=0\), i.e. that the above diagram commutes for \(i=0\). For this, consider any given \(\gamma\in\operatorname{Hom}_{X}(\mathcal{F},\omega_{X})\) and denote its image in \(\operatorname{Hom}_{X^{\prime}}(\mathcal{F}^{\prime},\omega_{X^{\prime}})\) by \(\gamma^{\prime}\) (so, for \(U\subseteq X\) affinoid, \(\gamma^{\prime}\) over \(U^{\prime}\) is \(\mathcal{F}^{\prime}(U^{\prime})=\mathcal{F}(U)\mathbin{\widehat{\otimes}}_{K}K^{\prime}\xrightarrow{\gamma\otimes\operatorname{id}}\omega_{X}(U)\mathbin{\widehat{\otimes}}_{K}K^{\prime}=\omega_{X^{\prime}}(U^{\prime})\)). Then the commutativity of the above diagram for \(i=0\) amounts to the commutativity of a corresponding square, which holds true for all \(\gamma\in\operatorname{Hom}_{X}(\mathcal{F},\omega_{X})\) since it is evident from the construction of our base-change-comparison maps \(\alpha\) that they are functorial in this sense. Thus the proposition is proved.

Now we can summarize the content of Theorem 4.10 and Proposition 4.11 in the following:

**Corollary 4.12**.: _Let \(X\) be a smooth rigid Stein \(K\)-space of dimension \(d\). Then the Serre duality pairing from Theorem 2.9 is compatible with base change, in the following sense: For every coherent sheaf \(\mathcal{F}\) on \(X\), the diagram_ _commutes for all \(i\geq 0\)._
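The displayed diagram of Corollary 4.12 has not survived extraction. Assembled from Theorem 4.10 (compatibility of the trace) and Proposition 4.11 (compatibility of the pairing), the following LaTeX sketch is one plausible rendering of the asserted compatibility; the arrow labels are our reconstruction and may differ from the source:

```latex
\[
\begin{array}{ccccc}
\operatorname{Ext}^{i}_{X}(\mathcal{F},\omega_{X})\times H^{d-i}_{c}(X,\mathcal{F})
  & \longrightarrow & H^{d}_{c}(X,\omega_{X}) & \xrightarrow{\;t_{X}\;} & K \\
\downarrow & & \downarrow & & \downarrow \\
\operatorname{Ext}^{i}_{X'}(\mathcal{F}',\omega_{X'})\times H^{d-i}_{c}(X',\mathcal{F}')
  & \longrightarrow & H^{d}_{c}(X',\omega_{X'}) & \xrightarrow{\;t_{X'}\;} & K'
\end{array}
\]
```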
We present a relative version of Beyer's trace map: it is attached to a finite morphism \(X \to Y\) of smooth rigid Stein spaces and relates Serre duality on \(X\) to Serre duality on \(Y\). Moreover, we show that for every smooth rigid Stein space, Serre duality is compatible with completed base change to an arbitrary complete extension field, in the sense that the duality pairing over the base field and the pairing over the extension field fit into a commutative diagram.
2309.04751
Optical microcavities as platforms for entangled photon spectroscopy
Optical microcavities are often proposed as platforms for spectroscopy in the single- and few-photon regime due to strong light-matter coupling. For classical-light spectroscopies, an empty microcavity simply acts as an optical filter. However, we find that in the single- or few-photon regime treating the empty microcavity as an optical filter does not capture the full effect on the quantum state of the transmitted photons. Focusing on the case of entangled photon-pair spectroscopy, we consider how the propagation of one photon through an optical microcavity changes the joint spectrum of a frequency-entangled photon pair. Using the input-output treatment of a Dicke model, we find that propagation through a strongly coupled microcavity above a certain coupling threshold enhances the entanglement entropy between the signal and idler photons. These results show that optical microcavities are not neutral platforms for quantum-light spectroscopies and their effects must be carefully considered when using change in entanglement entropy as an observable.
Ravyn Malatesta, Lorenzo Uboldi, Evan J. Kumar, Esteban Rojas-Gatjens, Luca Moretti, Andy Cruz, Vinod Menon, Giulio Cerullo, Ajay Ram Srimath Kandada
2023-09-09T10:45:23
http://arxiv.org/abs/2309.04751v1
# Optical microcavities as platforms for entangled photon spectroscopy

###### Abstract

Optical microcavities are often proposed as platforms for spectroscopy in the single- and few-photon regime due to strong light-matter coupling. For classical-light spectroscopies, an empty microcavity simply acts as an optical filter. However, we find that in the single- or few-photon regime treating the empty microcavity as an optical filter does not capture the full effect on the quantum state of the transmitted photons. Focusing on the case of entangled photon-pair spectroscopy, we consider how the propagation of one photon through an optical microcavity changes the joint spectrum of a frequency-entangled photon pair. Using the input-output treatment of a Dicke model, we find that propagation through a strongly coupled microcavity above a certain coupling threshold enhances the entanglement entropy between the signal and idler photons. These results show that optical microcavities are not neutral platforms for quantum-light spectroscopies and their effects must be carefully considered when using change in entanglement entropy as an observable.

## I Introduction

Due to spectacular advances within the field of quantum optics, experimentalists can now control non-classical states of light with high levels of precision in optics laboratories. Combined with advances in single-photon detection, these innovations lay the groundwork for the growing field of quantum-light spectroscopy[1]. There are many advantages to using quantum light for spectroscopy, including access to information otherwise inaccessible using classical spectroscopies[2][3][4] and, importantly, a superior signal-to-noise ratio that can enable spectroscopy at extremely low excitation fluence. Quantum light refers to any state of light that cannot be described classically, such as single photons, squeezed light, or entangled photon pairs. Entangled photon pairs exhibit non-classical correlations that provide an advantage for both linear and nonlinear spectroscopies[5][6][7][8][9]. In the single- or few-photon regime, classical spectroscopic signals are swamped with noise, but entanglement-enhanced spectroscopies can surpass the shot-noise limit by taking advantage of quantum correlations[10][11]. Similarly, entangled light can enhance the signal-to-noise ratios of nonlinear spectroscopies, resulting in sharper spectroscopic features and greater simultaneous time-frequency resolution[12]. Furthermore, entangled photon pairs provide direct access to nonlinear processes even at low-level excitation, facilitating the study of nonlinear processes in photo-sensitive systems that might bleach or otherwise be destroyed at high excitation powers[13]. Hao Li _et al._ describe theoretically how the entanglement entropy of biphoton states (photon-pair states) can be used as a probe of many-body correlations that are often elusive or obscured in classical nonlinear spectroscopic measurements[14].

In the single- or few-photon regime, a challenge arises for spectroscopists because of the low probability of light-matter interactions. One popular method to address this problem is to use an optical microcavity to couple to optical excitations in materials and thus enhance the processes of interest[15][16][17][18]. Optical microcavities are extremely controllable platforms for light-matter interaction; they are used to manipulate molecular states, enhance spontaneous emission, and drive chemical reactions[19].
For all of their uses, microcavities are an extremely versatile platform for quantum spectroscopy, but they are not neutral platforms and cannot be treated as such. To demonstrate this, we consider how the joint spectrum and entanglement entropy of a frequency-entangled biphoton state change after one photon (the idler) propagates through an optical microcavity. We first briefly describe the modeling of the biphoton joint spectrum and its transformation using input-output theory. We then consider an empty microcavity. Although for classical light an empty microcavity behaves as a simple optical filter, we find experimentally that treating the microcavity as an optical filter does not capture the full effect on the transmitted biphoton state. With simple input-output theory, we can model the filtering effect of the empty microcavity but cannot explain the full joint spectral transformation. We next move on to a simple model of a microcavity coupled to \(N\) two-level systems and consider how strong coupling transforms the joint spectrum and entanglement entropy of the biphoton state after the idler passes through an active microcavity. We find that above a certain coupling strength, passing through the microcavity system alone, regardless of detuning, enhances the entanglement entropy even without including many-body interactions in the model. These results confirm that optical microcavities are not neutral platforms for quantum-light spectroscopies and their effects must be carefully considered when using change in entanglement entropy as a spectroscopic observable.

## II Biphoton state transformation

### Joint spectrum and entanglement

Sources of entangled photons that are based on spontaneous parametric downconversion (SPDC) generate two daughter photons, historically called the _signal_ and _idler_, from a single pump photon according to energy and momentum conservation. Following the development of Zielnicki _et al._[20], a generic biphoton state of a signal and idler pair can be written as \[\left|\psi_{s,i}\right\rangle=\int\int d\omega_{s}d\omega_{i}\mathcal{F}\left(\omega_{s},\omega_{i}\right)a_{1}^{\dagger}(\omega_{s})a_{2}^{\dagger}(\omega_{i})\left|0\right\rangle, \tag{1}\] where the creation operators \(a_{1}^{\dagger}(\omega_{s})\) and \(a_{2}^{\dagger}(\omega_{i})\) operate on the vacuum state to create photons at frequencies \(\omega_{s}\) and \(\omega_{i}\), respectively. The joint spectral amplitude, \(\mathcal{F}\left(\omega_{s},\omega_{i}\right)\), describes the frequency correlations between the signal and idler photons. Experimentally, we typically measure the joint spectral intensity (JSI), \[\left|\mathcal{F}\left(\omega_{s},\omega_{i}\right)\right|^{2}=\left|A\left(\omega_{s},\omega_{i}\right)\right|^{2}\!\left|\Phi\left(\omega_{s},\omega_{i}\right)\right|^{2}, \tag{2}\] where \(A\left(\omega_{s},\omega_{i}\right)\) is based on the spectral amplitude of the pump beam and \(\Phi\left(\omega_{s},\omega_{i}\right)\) is determined by the _phase-matching conditions_ and _spatial profile_ of the pump. To quantify the entanglement between the signal and idler photons, we compute the von Neumann entanglement entropy, \(S\). We first normalize the joint spectral amplitude, and then use singular value decomposition to find the Schmidt coefficients \(\lambda_{j}\), which satisfy the normalization condition \(\sum_{j}\lambda_{j}^{2}=1\). We then calculate the entanglement entropy as \[S=-\sum_{j}\lambda_{j}^{2}\ln\left(\lambda_{j}^{2}\right). \tag{3}\]
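Numerically, this Schmidt decomposition reduces to a singular value decomposition of the JSA sampled on a frequency grid. The Python sketch below computes the entanglement entropy of Eq. (3); the Gaussian pump envelope, the Gaussian stand-in for the phase-matching function, and the grid and width values are illustrative assumptions, not the parameters used in this work:

```python
import numpy as np

# Hypothetical discretized joint spectral amplitude F(w_s, w_i): a Gaussian
# pump envelope A and a Gaussian stand-in for the phase-matching function Phi.
w = np.linspace(-1.0, 1.0, 256)                    # detunings, arbitrary units
ws, wi = np.meshgrid(w, w, indexing="ij")
A = np.exp(-((ws + wi) ** 2) / (2 * 0.1 ** 2))     # pump: anti-correlated pair
Phi = np.exp(-((ws - wi) ** 2) / (2 * 0.5 ** 2))   # illustrative only
F = A * Phi

# Normalize the JSA so the Schmidt coefficients satisfy sum_j lambda_j^2 = 1.
F /= np.sqrt(np.sum(np.abs(F) ** 2))

# Singular values of the discretized JSA are the Schmidt coefficients.
lam = np.linalg.svd(F, compute_uv=False)
lam2 = lam[lam > 1e-12] ** 2

# Von Neumann entanglement entropy, Eq. (3).
S = -np.sum(lam2 * np.log(lam2))
print(f"S = {S:.3f}")
```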
### Application of input-output theory

In their seminal work in 1984 [21], Collett and Gardiner developed a general input-output theory that relates output operators to input operators via the internal dynamics of a cavity system governed by quantum Langevin equations. For a single photon mode, if we express the input-output transformation as a frequency-dependent function \(C(\omega_{i})\), we simply write the output creation operator in terms of the input as \[\tilde{a}^{\dagger}(\omega_{i})=C\left(\omega_{i}\right)a^{\dagger}(\omega_{i}). \tag{4}\] Now, considering the case of a biphoton state where we allow only the idler photon to propagate through a microcavity system, we replace the original idler creation operator \(a_{2}^{\dagger}(\omega_{i})\rightarrow\tilde{a}_{2}^{\dagger}(\omega_{i})\) to get the transformed biphoton state \[\left|\Psi\right\rangle=\int\int d\omega_{s}d\omega_{i}\mathcal{F}\left(\omega_{s},\omega_{i}\right)C\left(\omega_{i}\right)a_{1}^{\dagger}(\omega_{s})a_{2}^{\dagger}(\omega_{i})\left|0\right\rangle. \tag{5}\] In this simple approach, the transformed JSA is the product \(\mathcal{F}\left(\omega_{s},\omega_{i}\right)\cdot C\left(\omega_{i}\right)\). The transformed state bears similarity to the expression developed by Kalashnikov _et al._ before they act with a beamsplitter to see how interaction with a resonant medium changes the quantum interference pattern in Hong-Ou-Mandel interferometry[22].

## III Propagation through an empty microcavity

### Theory

Following the input-output formalism of Collett and Gardiner[21], we consider an empty cavity confining a single optical mode. Starting with a one-sided empty microcavity, i.e. a microcavity with substantial loss through a single mirror, the transformation of the output photon creation operator in terms of the input is \[\tilde{a}^{\dagger}\left(\omega\right)=\frac{\frac{1}{2}\gamma-i\left(\omega-\omega_{0}\right)}{\frac{1}{2}\gamma+i\left(\omega-\omega_{0}\right)}a^{\dagger}\left(\omega\right), \tag{6}\] where \(\gamma\) is the coupling strength of the cavity photons to input (output) photons and \(\omega_{0}\) is the frequency of the cavity mode. The coupling strength \(\gamma\) is directly related to the cavity photon lifetime, \(\tau=1/\gamma\). For all our simulations, we choose \(\gamma\) such that the cavity photon lifetime \(\tau\) is \(150\,\mathrm{fs}\). As noted by Collett and Gardiner, the one-sided cavity imposes a frequency-dependent relative phase shift, but does not change the JSI. Therefore the JSI shown in Fig. 1(a) is identical to that of the input biphoton state.

Figure 1: Transmission function, joint spectral intensity, and applied phase shift of a (a) one-sided and (b) two-sided empty microcavity following Gardiner and Collett.

For all simulations shown here, we use the same input state, assuming a Gaussian pump with a central down-converted wavelength of \(685\,\mathrm{nm}\) for both signal and idler photons. To replicate experimental conditions, we apply detection filters to both the signal and idler. For the filter shape, we choose a Gaussian squared, centered at \(685\,\mathrm{nm}\) with an \(8\,\mathrm{nm}\) bandwidth. Until we consider the effect of the pump bandwidth on entanglement entropy, the pump bandwidth is set at \(6\,\mathrm{nm}\). Next, we move on to a two-sided empty microcavity, i.e. a cavity with leaky mirrors on both sides, and so with two input and two output modes.
We assume the coupling to be the same for both mirrors, \(\gamma_{1}=\gamma_{2}=\gamma\), and a single input mode. Thus, in transmission, the output photon creation operator is \[\tilde{a}^{\dagger}\left(\omega\right)=\frac{\gamma}{\gamma+i\left(\omega-\omega_{0}\right)}a^{\dagger}\left(\omega\right). \tag{7}\] Now we see the filtering effect of the empty microcavity acting on the idler photon, as shown in Fig. 1(b). The centering and bandwidth of the transmission function are determined by the cavity mode frequency and the cavity photon lifetime, respectively. The filtering effect of the microcavity slightly affects the entanglement entropy, bringing the base entanglement entropy from \(S=0.395\) to \(S=0.359\).

### Experiment

To further test the formalism developed in the previous section, we experimentally measure the JSI of a biphoton state with one of the photons transmitted through an empty microcavity. The spectrally entangled state is generated in a Type-I \(\beta\)-Barium Borate (BBO) crystal phase-matched for SPDC close to degeneracy at the pump wavelength of \(343\,\mathrm{nm}\). The pump beam here is the third harmonic of the output of a femtosecond laser oscillator (Pharos, Light Conversion) operating at \(1030\,\mathrm{nm}\) and \(75\,\mathrm{MHz}\). The photons are spatially separated and transmitted through a translating-wedge-based identical pulses encoding system (GEMINI, Nireos srl) and a coincidence detection system (Hydraharp, Picoquant), which enable measurement of the spectral correlations between the photons. More details of the measurement system can be found in Ref. [23]. The JSI spectrum of the as-prepared biphoton state is shown in Fig. 2(a), in which the spectral correlation between the signal and idler photons is evident through the diagonal feature. We transmit the idler photon of this state through a planar optical microcavity, which is built on distributed Bragg reflectors (DBR) and has an optical resonance at \(691\,\mathrm{nm}\) with a full width at half maximum of \(8\,\mathrm{nm}\) at normal incidence. The JSI spectrum of the transmitted biphoton state is shown in Fig. 2(b). We observe clear spectral filtering of the biphoton state, with the peak of the JSI map at the peak resonance of the cavity. On closer inspection, we observe a reduction in the degree of spectral correlation in the transmitted state, with the previously extended diagonal feature flattening along the idler axis close to the cavity resonance.

Figure 2: Joint spectral intensity before and after propagating through an empty microcavity from (a, b) experimental measurement and (c, d) input-output theory.

To reproduce this behavior, we consider a biphoton state whose JSI spectrum follows Eq. 2, shown in Fig. 2(c). Based on the formalism developed in the previous section, we estimate the JSI spectrum of the biphoton state whose idler photon is transmitted through a microcavity. By setting the cavity resonance to \(690\,\mathrm{nm}\) in our simulation, we can approximate the experimentally measured transmission function of the empty microcavity. While the filtering effect is reproduced, we miss the effects of the microcavity that depend on the joint spectrum; that is, we miss the effects that depend simultaneously on the signal and idler frequencies, even though only the idler propagated through the microcavity.
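As a concrete illustration of Eqs. (5)-(7), the sketch below applies the empty-cavity transfer functions along the idler axis of a discretized JSA; it reuses the grid `w` and normalized array `F` from the previous sketch, and the cavity parameters are illustrative, not those of the measured DBR microcavity:

```python
import numpy as np

def one_sided_cavity(w, w0, gamma):
    # Eq. (6): unit-magnitude (phase-only) response of a one-sided cavity.
    return (0.5 * gamma - 1j * (w - w0)) / (0.5 * gamma + 1j * (w - w0))

def two_sided_cavity(w, w0, gamma):
    # Eq. (7): Lorentzian transmission of a symmetric two-sided cavity.
    return gamma / (gamma + 1j * (w - w0))

def transmit_idler(F, w_i, C, w0, gamma):
    # Eq. (5): the transformed JSA is F(w_s, w_i) * C(w_i); the transfer
    # function acts on the idler axis (axis 1) only.
    F_out = F * C(w_i, w0, gamma)[np.newaxis, :]
    return F_out / np.sqrt(np.sum(np.abs(F_out) ** 2))  # renormalize

# Example usage with the previous sketch's `F` and `w`:
# F_filtered = transmit_idler(F, w, two_sided_cavity, w0=0.0, gamma=0.3)
```

Feeding `F_filtered` back into the SVD-based entropy computation reproduces the kind of entropy change discussed above for the two-sided cavity.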
## IV Propagation through a strongly-coupled microcavity

Having already established that an empty microcavity has a non-trivial effect on the entanglement of frequency-entangled photon pairs, we now consider a simple model of a strongly-coupled microcavity system. Taking inspiration from Li _et al._[14], we use a Dicke model of \(N\) identical 2-level emitters coupled to an optical cavity, described by the following Hamiltonian: \[\hat{H} = \sum_{j}\frac{\hbar\omega_{e}}{2}\hat{\sigma}_{z,j}+\sum_{k}\hbar(\omega_{k}-i\gamma)\hat{\psi}_{k}^{\dagger}\hat{\psi}_{k} \tag{8}\] \[+\sum_{k,j}\frac{\hbar\lambda_{kj}}{\sqrt{N}}(\hat{\psi}_{k}^{\dagger}+\hat{\psi}_{k})(\hat{\sigma}_{j}^{+}+\hat{\sigma}_{j}^{-}),\] where \(\omega_{e}\) is the frequency of the emitter, \(\{\hat{\sigma}_{z,j},\hat{\sigma}_{j}^{\pm}\}\) are the corresponding spin-1/2 operators for site \(j\), \(\hat{\psi}_{k}^{\dagger}\) is the cavity photon creation operator, and \(\lambda_{kj}\) is the coupling between a cavity photon and a molecular excitation at site \(j\). As before, \(\gamma\) is the coupling of the cavity photon mode to an external photon mode. For simplicity, we consider only the normal cavity mode, \(k=0\), with frequency \(\omega_{0}\). We also constrain ourselves to the strong coupling regime, \(\lambda>\gamma/2\), but stay well below the critical point \(\lambda_{c}\) where the system undergoes a quantum phase transition. Using an input-output treatment of this model, Li _et al._ develop an analytical expression for the response function of the strongly-coupled system [14], which we use to define the transformation function \(C(\omega_{i})\) for an idler photon propagating through the strongly-coupled microcavity. The transformation function depends on many parameters: the frequency of the molecular emitter \(\omega_{e}\), the frequency of the cavity mode \(\omega_{0}\), the cavity lifetime \(1/\gamma\), and the strength of the coupling \(\lambda\).

Applying this transformation to the same input biphoton state as before (Fig. 2(c)), we immediately find markedly different behavior than for transmission through an empty microcavity. We analyze transmission through a strongly-coupled microcavity with both the molecular and cavity resonance at 685 nm, a 150 fs cavity photon lifetime, and equal coupling to the molecular excitation and external photons (\(\gamma=\lambda\)). The resulting transmission spectrum and JSI map, shown in Fig. 3(a), are composed of two peaks associated with the lower and upper polariton states. While this is an expected result, two intriguing details emerge on deeper analysis. First, we see a sharp discontinuity in the applied phase shift corresponding to the molecular resonance. As the strength of the light-matter coupling increases with respect to the coupling to external photons, the applied phase shift begins to resemble a step function, as shown in Fig. 3(b). Second, the entanglement entropy of the transformed state is \(S=0.437\), which is higher than the entanglement entropy of the input state, \(S=0.395\). But the increase in the entropy is, curiously, not monotonically related to the strength of the light-matter coupling. The transformation of the biphoton state due to propagation of the idler photon through the strongly-coupled microcavity system _increases_ the entanglement entropy only above a certain coupling-strength threshold, see Fig. 3(c).
Below this threshold, the entropy is substantially reduced, possibly due to the spectral filtering of the idler photons by the dominant molecular transition. Of course, the exact coupling strength at which the enhanced entanglement entropy surpasses that of the input depends on the specific state and on the microcavity system parameters, including the molecular resonance, cavity photon lifetime, and cavity detuning. In general, across several cavity detunings, shown in Fig. 3(c), at the lower end of the strong-coupling limit, when coupling to external photons out-competes coupling to the molecular excitation (\(\gamma>\lambda>\gamma/2\)), propagation through the coupled microcavity system suppresses the entanglement entropy even below that of propagation through an empty microcavity. As the coupling strength increases, we reach a regime where the microcavity system improves the entanglement entropy past that of the input state for a wide range of cavity-detuning values, until the dependence of the entanglement entropy on cavity detuning plateaus. We find a similar ebb and flow of entropy improvement when considering how the entanglement entropy changes with the bandwidth of the Gaussian pump generating the input biphoton state, as seen in Fig. 4. For very narrow pump bandwidths, the frequencies of the signal and idler are strongly anti-correlated, and the input state thus has a relatively high entanglement entropy. Within this limit, propagation of the idler through a sufficiently strongly coupled microcavity system still improves the entanglement entropy. Beyond \(\lambda=1.35\gamma\), the benefit weakens, but the entanglement entropy remains firmly above that for an empty microcavity.

## V Conclusion

In summary, we show with experiment and simple input-output theory that propagation through empty optical microcavities exerts a non-trivial effect on the state of frequency-entangled biphotons. We also theoretically consider the case of strongly coupled microcavities and identify peculiar transformations of the spectral correlations of the output biphoton state. From our experimental measurements of an empty microcavity, we expect there to be further correlated effects in these systems that our simple input-output approach does not capture, but understanding the interplay of the modeled and unmodeled changes requires further theoretical development and experimentation. Nevertheless, we show, even with a simple theoretical treatment, that the microcavity platform has notable effects on the entanglement entropy of biphoton states. Notably, the experimental configuration and the model we consider here simply correspond to the _linear_ response of microcavities. Previous works propose to use the entanglement entropy of the biphoton state transmitted through the cavity as a probe of many-body processes, including polariton-polariton interactions. While these treatments show the biphoton entanglement entropy is sensitive to such many-body interactions, we must also consider the non-trivial entropy changes identified in this work, which can manifest even in the absence of any correlating mechanisms. While optical microcavities can indeed be excellent platforms that enable spectroscopy with entangled photons, care must be taken to design systems with light-matter coupling strengths that minimize the linear-response-induced variations in the JSI, so that the transformation of the biphoton state can be directly correlated with many-body dynamics.
Figure 3: Changes to the biphoton state due to idler propagation through a strongly-coupled microcavity. (a) Transmission function, joint spectral intensity, and applied phase shift of a strongly-coupled (\(\lambda=\gamma\)) microcavity with a 150 fs lifetime with zero detuning, (b) dependence of the microcavity-induced phase shift on coupling strength, and (c) change in entanglement entropy with coupling strength for variable cavity detuning for a molecular resonance at 685 nm, compared to the input state entropy (dotted gray).

Figure 4: Entanglement entropy by pump bandwidth for increasing coupling strength, compared to an empty microcavity (dashed black).

###### Acknowledgements.

A.R.S.K. acknowledges the start-up funds provided by Wake Forest University and funding from the Center for Functional Materials and the Office of Research and Sponsored Programs at WFU. The authors thank Prof Carlos Silva, Prof Eric Bittner and Dr Andrei Piyatinski for insightful discussions. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039655. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Optical microcavities are often proposed as platforms for spectroscopy in the single- and few-photon regime because of their strong light-matter coupling. For classical-light spectroscopies, an empty microcavity acts simply as an optical filter. However, treating an empty microcavity as an optical filter in the single- or few-photon regime does not fully capture its effect on the quantum state of the transmitted photons. Focusing on the case of entangled photon-pair spectroscopy, we examine how the propagation of one photon through an optical microcavity affects the joint spectrum of a frequency-entangled photon pair. Using an input-output treatment of a Dicke model, we find that propagation through a strongly coupled microcavity above a certain coupling threshold enhances the entanglement entropy between the signal and idler photons. These results show that optical microcavities are not neutral platforms for quantum-light spectroscopies, and their effects must be carefully considered when using the change in entanglement entropy as an observable.
2309.04037
SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks
The fast growth of computational power and scales of modern super-computing systems have raised great challenges for the management of exascale scientific data. To maintain the usability of scientific data, error-bounded lossy compression is proposed and developed as an essential technique for the size reduction of scientific data with constrained data distortion. Among the diverse datasets generated by various scientific simulations, certain datasets cannot be effectively compressed by existing error-bounded lossy compressors with traditional techniques. The recent success of Artificial Intelligence has inspired several researchers to integrate neural networks into error-bounded lossy compressors. However, those works still suffer from limited compression ratios and/or extremely low efficiencies. To address those issues and improve the compression of hard-to-compress datasets, in this paper we propose SRN-SZ, a deep learning-based scientific error-bounded lossy compressor leveraging a hierarchical data grid expansion paradigm implemented by super-resolution neural networks. SRN-SZ applies the most advanced super-resolution network, HAT, for its compression, which is free of time-consuming per-data training. In experiments compared with various state-of-the-art compressors, SRN-SZ achieves up to 75% compression ratio improvement under the same error bound and up to 80% compression ratio improvement under the same PSNR over the second-best compressor.
Jinyang Liu, Sheng Di, Sian Jin, Kai Zhao, Xin Liang, Zizhong Chen, Franck Cappello
2023-09-07T22:15:32
http://arxiv.org/abs/2309.04037v3
SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks

###### Abstract

The fast growth of computational power and scales of modern super-computing systems have raised great challenges for the management of exascale scientific data. To maintain the usability of scientific data, error-bounded lossy compression is proposed and developed as an essential technique for the size reduction of scientific data with constrained data distortion. Among the diverse datasets generated by various scientific simulations, certain datasets cannot be effectively compressed by existing error-bounded lossy compressors with traditional techniques. The recent success of Artificial Intelligence has inspired several researchers to integrate neural networks into error-bounded lossy compressors. However, those works still suffer from limited compression ratios and/or extremely low efficiencies. To address these issues and improve the compression of hard-to-compress datasets, in this paper we propose SRN-SZ, a deep learning-based scientific error-bounded lossy compressor leveraging a hierarchical data grid expansion paradigm implemented by super-resolution neural networks. SRN-SZ applies the most advanced super-resolution network, HAT, for its compression, which is free of time-consuming per-data training. In experiments compared with various state-of-the-art compressors, SRN-SZ achieves up to 75% compression ratio improvement under the same error bound and up to 80% compression ratio improvement under the same PSNR over the second-best compressor.

error-bounded lossy compression, deep learning, super-resolution.

## I Introduction

The rapid growth of the computing power of worldwide exascale supercomputers has enabled the scientific applications running on them to greatly enlarge their scales and outputs. Nevertheless, the data storage capacity and memory bandwidth of those machines have not developed fast enough to catch up with the increasingly huge amount of data generated by those applications, bringing rising requirements for advanced data reduction techniques to efficiently store, transfer, and analyze those data. To this end, error-bounded lossy compression has been recognized as the most suitable strategy for managing extremely large amounts of scientific data. Compared to lossless compression techniques, which can only provide an approximately halved compressed size, it can reduce the data size to 10%, 1%, or even 0.1% of the original size. Unlike many existing lossy compressors (such as the JPEG compressor for image data) that do not constrain the point-wise data error, error-bounded lossy compression can control the point-wise data distortion according to the user's requirements. Therefore, error-bounded lossy compression is of great significance for boosting the utility of scientific data. Existing state-of-the-art scientific error-bounded lossy compressors with diverse compression ratios and speeds, such as SZ3 [1, 2], ZFP [3], and SPERR [4], have shown advantages in various practical use cases. However, despite the success existing error-bounded lossy compressors have achieved, their limitations persist. Across the diverse archetypes of existing compressors, the compression of certain datasets remains clearly under-optimized, suffering from low compression ratios, which is an ongoing challenge for error-bounded lossy compression research.
Inspired by the great breakthroughs in the Artificial Intelligence field, several attempts have been made to leverage neural networks in error-bounded lossy compression. Autoencoder-based AE-SZ [5] and coordinate-network-based CoordNet [6] are two typical examples. Those deep learning-based compressors may provide well-optimized compression ratios in certain cases, but their limitations are still obvious. The coordinate-network-based compressors [6, 7, 8] suffer from extremely low compression efficiencies, as they need to train a new network separately for each input. Although autoencoder-based compressors such as [5, 9] can leverage pre-trained networks to avoid per-input training, their compression ratios cannot outperform SZ3 in most cases [5].

To address the challenge of compressing hard-to-compress data and to overcome the limitations of existing deep learning-based error-bounded lossy compression, in this paper we propose SRN-SZ, a brand-new deep learning-based error-bounded lossy compression framework. The core innovation of SRN-SZ is that it abstracts the compression and decompression of scientific data grids into a hierarchical paradigm of data grid super-resolution; to the best of our knowledge, this is the first work to integrate super-resolution neural networks into an error-bounded lossy compressor. Compared with autoencoders and coordinate networks, super-resolution networks have two-fold advantages: unlike coordinate networks, they can be pre-trained before the practical compression tasks, and at the same time they do not generate any latent information that must be stored for compression, as autoencoders do. Benefiting from those advantages, SRN-SZ achieves acceptable efficiency and further improved compression ratios over the state-of-the-art error-bounded lossy compressors on multiple hard-to-compress datasets. The contributions of our paper are detailed as follows:

* We propose a new scientific error-bounded lossy compressor, SRN-SZ, in which the compression is performed by hierarchical data grid expansion implemented with a hybrid of super-resolution networks and interpolations.
* Leveraging the Hybrid Attention Transformer (HAT) network, we design a specialized training pipeline with several adaptive techniques to optimize the super-resolution quality on scientific data.
* We carry out systematic evaluations with SRN-SZ and 5 other state-of-the-art scientific error-bounded lossy compressors on various scientific datasets from different domains. According to the experimental results, SRN-SZ achieves up to 75% compression ratio improvement under the same error bound and up to 80% compression ratio improvement under the same PSNR.

The rest of this paper is organized as follows: In Section II, we discuss related works. Section III presents the research problem formulation and background. The overall framework of SRN-SZ is demonstrated in Section IV. The compression pipeline and the network training pipeline of SRN-SZ are presented in Section V and Section VI, respectively. In Section VII, the evaluation results are provided and analyzed. Section VIII concludes this work and discusses future work.

## II Related Work

In this section, we discuss the related works in 3 categories: traditional scientific error-bounded lossy compression, deep learning-based scientific lossy compression, and super-resolution neural networks.
### _Traditional Scientific Error-bounded Lossy Compression_

Traditional scientific error-bounded lossy compressors can be classified into prediction-based, transform-based, and dimension-reduction-based ones. Prediction-based compressors utilize data prediction techniques for the compression, such as linear regression (SZ2 [10]) and interpolations (SZ3 [1] and QoZ [11]). Transform-based compressors decorrelate the input data with data transformation techniques so that the transformed data (a.k.a. coefficients) turn out to be much easier to compress than the original dataset; they then compress the coefficient domain to obtain a high compression ratio. Typical examples include ZFP [3], leveraging orthogonal discrete transforms, and SPERR [4], integrating the CDF 9/7 wavelet transform. With dimension reduction techniques such as (high-order) singular value decomposition (SVD), dimension-reduction-based compressors such as TTHRESH [12] can perform the data compression very effectively. Besides the CPU-based compressors, several GPU-specialized error-bounded lossy compressors have also been developed and proposed for better parallelization and throughput; typical examples are CuSZ [13, 14] and FZ-GPU [15].

### _Deep Learning-based Scientific Lossy Compression_

The great success of recent Artificial Intelligence research has started to boost the development of several other relevant research fields, including scientific error-bounded lossy compression. Several research works that leverage deep neural networks in error-bounded lossy compression have been proposed [5, 6, 7, 8, 9]. There are mainly 2 archetypes: autoencoder-based compressors, which store the autoencoder-encoded latent vectors for compression, and coordinate-network-based compressors, which train networks online for each input to map the data coordinates to data values. For autoencoder-based compressors, AE-SZ [5] is an example integrating Sliced-Wasserstein autoencoders, and Hayne et al. [9] leverage a double-level autoencoder for compressing 2D data. Examples of coordinate-network-based compressors include NeurComp [8], CoordNet [6], and [7].

### _Super-resolution Neural Networks_

Following SRCNN [16], which introduced a convolutional neural network model for image super-resolution tasks, a large number of convolutional neural network models [17, 18, 19] have been proposed for super-resolution. Because of the development of the Transformer [20] and its adaptation to Computer Vision tasks [21, 22, 23], vision-transformer-based neural networks like [24, 25, 26] have achieved state-of-the-art performance on the image super-resolution task. Among those works, HAT [26] is the most impressive one, as it has the widest scope of feature extraction for reconstructing each data point with a carefully designed hybrid attention model and achieves state-of-the-art performance.

## III Problem Formulation and Background

### _Mathematical Formulations for Error-bounded Lossy Data Compression_

In this subsection, we propose several key mathematical definitions and the mathematical formulation of our research target for this paper.

#### III-A1 Compression ratio and bit rate

The compression ratio is defined as the input data size divided by the compressed data size. Specifically, for input data \(X\) and compressed data \(Z\), the compression ratio \(\rho\) is: \[\rho=\frac{|X|}{|Z|} \tag{1}\] According to Eq. 1, a higher compression ratio means a better (smaller) compressed size, and vice versa.
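As a quick illustration of Eq. (1), the following Python snippet computes \(\rho\) for a float32 array; zlib and the random input here are only stand-ins for demonstration and are not part of SRN-SZ's pipeline:

```python
import zlib
import numpy as np

def compression_ratio(original: np.ndarray, compressed: bytes) -> float:
    # Eq. (1): rho = |X| / |Z|, both sizes measured in bytes.
    return original.nbytes / len(compressed)

# Stand-in demonstration: zlib on raw float32 bytes.
data = np.random.default_rng(0).random((512, 512)).astype(np.float32)
z = zlib.compress(data.tobytes(), level=9)
print(f"rho = {compression_ratio(data, z):.2f}")
```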
In the visualization of experimental results, researchers often plot curves with another metric closely related to the compression ratio, namely the bit rate. The bit rate is defined as the average number of bits used in the compressed data to store each element of the input data. Denoting the bit rate by \(b\) and letting sizeof() return the byte size, it can be expressed as \[b=\frac{8\cdot sizeof(x)}{\rho}, \tag{2}\] in which \(x\) is an element of the input \(X\). Since the bit rate is inversely proportional to the compression ratio, a lower bit rate is better.

#### III-A2 PSNR

PSNR (Peak Signal-to-Noise Ratio) is one of the most important data distortion metrics for evaluating the quality of the decompressed data from lossy compression. It is defined as follows: \[PSNR=20\log_{10}{vrange(X)}-10\log_{10}{mse(X,X^{\prime})}, \tag{3}\] where \(X\) is the input data and \(X^{\prime}\) is the decompressed data. vrange() calculates the value range of a data array, and _mse_ refers to the mean-squared error. Fixing the input data (and thus the value range), a smaller mean-squared error leads to a higher PSNR; therefore, a higher PSNR means higher precision of the decompressed data.

#### III-A3 Research target

The objective of SRN-SZ is to optimize the compression process with regard to a certain optimization target: maximizing the compression PSNR under each certain compression ratio. Mathematically speaking, given the input data \(X\), compressed data \(Z\), decompression output \(X^{\prime}\), error bound \(e\), and the target compression ratio \(T\), we optimize the compressor \(C\) and decompressor \(D\) of SRN-SZ via the following optimization problem (\(Z=C(X)\) and \(X^{\prime}=D(Z)\)): \[\begin{array}{rl}\underset{C,D}{maximize}&PSNR(X,X^{\prime})\\ s.t.&\frac{|X|}{|Z|}=T,\\ &|x_{i}-x^{\prime}_{i}|\leq e,\ \forall x_{i}\in X.\end{array} \tag{4}\] In this paper, we propose a deep learning-based compressor leveraging super-resolution neural networks for the optimization of Eq. 4.

### _Challenge for Error-bounded Lossy Compression: Low-compression-ratio Datasets_

Recently proposed scientific error-bounded lossy compressors have succeeded in dramatically outperforming the older state-of-the-art compressors. Compared with the historical SZ 2.1 [10], SZ3 [2] has improved the compression ratio by up to 460% [1] under the same data distortion. With higher computational costs, wavelet-based compressors such as SPERR [4] may have doubled or even tripled compression ratios compared with SZ3. However, those exciting improvements in compression ratios are concentrated on datasets that generally exhibit relatively high compression ratios (e.g., over 100). In other words, the recently proposed works with advanced data compression techniques fail to improve the compression of datasets with relatively low compression ratios to extents similar to what they achieve in high-ratio cases. In Figure 1, we present the bit rate-PSNR curves from the compression of 4 scientific datasets with representative existing error-bounded lossy compressors: prediction-based SZ2 [10] and SZ3 [1, 2], SVD-based TTHRESH [12], and wavelet-transform-based SPERR [4] (the compression result of TTHRESH is not shown in Figure 1 (b) as TTHRESH does not support 2D data input). For datasets like Miranda [27] (Figure 1 (a)), SZ3 has boosted the compression ratio of SZ2 by over 100%, and SPERR further achieves 2x-3x the compression ratio of SZ3. However, on other datasets, those 4 compressors all have relatively low compression ratios.
On certain datasets such as NYX-Dark Matter Density and Hurricane-QRain (Figure 1 (c) and (d)), SPERR and TTHRESH have lower compression ratios than SZ3, though they are designed with more complicated data processing techniques and much higher computational costs.

Fig. 1: Rate-distortion (PSNR) of several existing error-bounded compressors.

It is worth noting that the low-compression-ratio data snapshots are actually the bottleneck of compression effectiveness, because their compressed data size will occupy a very large portion of the total over all data fields (which have diverse characteristics) in a single dataset. For example, compressing 100TB of data with a compression ratio of 100 will generate 1TB of compressed data, which means that we can save at most 1TB of space when optimizing the compression. Nevertheless, if the original data has the same size of 100TB but only a potential compression ratio of 5 (20TB of compressed data), merely improving the compression ratio by 10% will lead to around 1.8TB of storage cost reduction. Therefore, overcoming the limitation of existing compressors on low-compression-ratio data will be significant for optimizing the overall compression process for a large variety of scientific simulation datasets.

## IV SRN-SZ Design Overview

We propose SRN-SZ, a deep learning-based error-bounded lossy compressor based on a modular compression framework that integrates a hybrid data reconstruction model with both interpolators and super-resolution neural networks. As shown in Figure 2, the compression framework of SRN-SZ consists of 4 modules: data grid sparsification, data grid expansion, Huffman encoding, and Zstd lossless compression. Moreover, the super-resolution neural networks are first pre-trained with a large-size dataset assorted from the scientific database and then fine-tuned with domain-specific datasets before being leveraged in the data grid expansion module of SRN-SZ.

Fig. 2: SRN-SZ compression framework

In the compression process of SRN-SZ, it first extracts a sparse data grid from the original data input; next, this sparse data grid is expanded step by step with super-resolution networks and interpolators, eventually into a lossy reconstruction of the full-size input grid. Compared to existing deep learning-based compressors, which leverage autoencoder-like networks [5, 28] to generate compact representations or coordinate networks [6, 7, 8] mapping data point indices to data values, SRN-SZ has the advantage of being free of both the storage cost for the compact representations (required by autoencoders) and per-input network training (required by coordinate networks).

We demonstrate the detailed compression algorithm of SRN-SZ in Algorithm 1. Lines 1-2 correspond to data grid sparsification, Lines 3-10 correspond to data grid expansion, and Lines 11-12 correspond to Huffman encoding and Zstd lossless compression. To bound the point-wise compression error, linear quantization is involved in the data grid expansion module; for clearness of demonstration, it is not displayed in Figure 2.

```
Input: input data D, error bound e, grid sparsification rate r, minimum SRN size s
Output: compressed data Z
 1: Sparsify D into D0 with rate r; save D0 losslessly.   /* Data grid sparsification */
 2: Set the current reconstructed data grid D' <- D0 and the quantized errors Q <- {}
 3: while size(D') != size(D) do
 4:   if size(D') <= s then
 5:     D', q = Interp_and_Quantize(D, D', e)   /* Expand D' with interpolation */
 6:   else
 7:     D', q = HAT_and_Quantize(D, D', e)      /* Expand D' with the HAT network */
 8:   end if
 9:   Q <- Q U q                                /* Merge newly acquired quantized errors q */
10: end while
11: H <- Huffman_Encode(Q)                      /* Huffman encoding */
12: Z <- Zstd(H, D0)                            /* Zstd compression */
```
**Algorithm 1** SRN-SZ Compression Algorithm
## V SRN-SZ Compression Pipeline

In this section, we describe the steps of the SRN-SZ compression pipeline in detail. Since the encoding and lossless modules of SRN-SZ are the same as the ones in SZ3 and QoZ [1, 2, 11], in the following subsections we mainly discuss the data grid sparsification and the data grid expansion.

### _Data Grid Sparsification_

Following the advantages it has shown in MGARD [29, 30], SZ3 [1, 2], and QoZ [11], SRN-SZ adopts a level-wise hierarchical data grid reconstruction paradigm for its compression process. It starts from a sparse data grid sampled from the original input dataset. An example for 2D input data is shown in Figure 3: certain data points are uniformly sampled from the full data grid with a fixed stride. The sampled data points in the sparsified data grid will be losslessly saved, and the rest of the data points will be reconstructed in the data grid expansion process. The reason SRN-SZ losslessly saves the sparsified grid, instead of directly reconstructing a lossy version of it from scratch as SZ3 does, is analyzed below: according to the comparison between the evaluations of SZ3 and QoZ [11], for hierarchical level-wise data reconstruction, an accurate base is essential for preserving high reconstruction quality of the data points, while introducing only negligible storage overhead. To balance the compression ratio loss and the data reconstruction accuracy, we conducted some tests and then specified the dimension-wise rate of data grid sparsification as \(\frac{1}{32}\), i.e., we reduce the data grid to \(\frac{1}{32}\) along each dimension and then save the sparsified grid for the data grid expansion.

Fig. 3: Data grid sparsification

### _Data Grid Expansion_

Based on the sparsified data grid, the data grid expansion (i.e., reconstruction) process is involved in both the compression and the decompression of SRN-SZ. In compression, the data grid expansion is executed to acquire the reconstruction errors of data points, and those errors are then quantized and encoded, serving as the correction offsets in decompression. Moreover, during both the compression and the decompression of SRN-SZ, the super-resolution and the error quantization (in compression, or error correction in decompression) are executed alternately, which maximally preserves the accuracy of the data grid expansion. As presented in Figure 4, the data grid expansion is performed iteratively, step by step, until the whole data grid has been reconstructed. In each step, the reconstructed data grid is expanded by 2x along each dimension; its implementation is therefore compatible with both deep learning-based super-resolution neural networks and traditional interpolation methods (a minimal sketch of this loop follows).

Fig. 4: Data grid expansion
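The following Python sketch condenses Algorithm 1's expansion loop for a 2D input. The names `interp` and `srn` are placeholders for the QoZ-style interpolator and the HAT network (each mapping an (h, w) grid to a (2h, 2w) grid), and the side lengths of `data` are assumed to be the sparsified sizes times powers of two:

```python
import numpy as np

def compress_sketch(data, e, stride=32, srn_threshold=64, interp=None, srn=None):
    """Condensed 2-D sketch of Algorithm 1 (not the full implementation)."""
    sparse = data[::stride, ::stride].copy()    # saved losslessly (lines 1-2)
    recon, quants = sparse, []
    while recon.shape != data.shape:            # lines 3-10
        predictor = interp if min(recon.shape) <= srn_threshold else srn
        recon = predictor(recon)                # 2x expansion per dimension
        truth = data[::data.shape[0] // recon.shape[0],
                     ::data.shape[1] // recon.shape[1]]
        # Linear quantization of the prediction errors with bin width 2e,
        # so the corrected values deviate from the truth by at most e.
        q = np.round((truth - recon) / (2 * e)).astype(np.int64)
        recon = recon + 2 * e * q               # error-bounded correction
        quants.append(q)
    return sparse, quants                       # then Huffman + Zstd (11-12)
```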
#### V-B1 HAT super-resolution network

The super-resolution network is the most important data grid expansion technique in SRN-SZ, as it is always applied in the last iteration step of data grid expansion, which contains the reconstruction of most of the data points in the input data (about 75% in the 2D case and about 87.5% in the 3D case). The network SRN-SZ leverages is the HAT (Hybrid Attention Transformer) network [26], a recently proposed work for image super-resolution that has been proven to be state-of-the-art. The network architecture of HAT is illustrated in Figure 5. Developed from [25, 19], HAT is a very deep residual [31] neural network with transformers [20] as its basic components. HAT has 3 main modules: the initial convolutional layers for shallow feature extraction, the deep feature extraction module integrated with residual hybrid attention groups (RHAG), and a reconstruction module leveraging the pixel shuffle technique [32]. The RHAG blocks in the HAT network can be broken down into HAB (hybrid attention block), OCAB (overlapping cross-attention block), and convolutional layers. For more details of the HAT network, we refer readers to [26]. The main advantage of HAT is that, according to the analysis presented in [26], its design empowers it to make use of a large region of data points for computing each value in its super-resolution output. Therefore, both local and global data patterns can be well utilized in the super-resolution process.

Fig. 5: HAT network

Although HAT was originally designed for the super-resolution of natural images, we managed to adapt it to the scientific data grid expansion process in SRN-SZ. Feeding an intermediate data grid of size X x Y (or X x Y x Z) into HAT, SRN-SZ uses the super-resolution output of size 2X x 2Y (or 2X x 2Y x 2Z) from HAT as the data grid expansion result of one step. Some key points in bridging the scientific data and the HAT network are: first, the input and output channels in HAT have been modified from 3 to 1; second, the input data grid is normalized to 0-1 before being fed into the network; last, for 3D data inputs, 2D HAT models can still be used, but the inputs are preprocessed into 2D slices (along all 3 dimensions) instead of 3D blocks. The reason SRN-SZ applies 2D networks to 3D data is that 3D HAT models suffer from extremely high computational costs for training and inference, leading to unacceptable flexibility and scalability. Figure 6 presents the details of performing 3D super-resolution with those 2D slices. Specifically, with a partially reconstructed 3D data grid (blue points), SRN-SZ performs super-resolution on it with the HAT network in 3 different directions: on the top/bottom faces (red points), on the left/right faces (green points), and on the front/back faces (purple points). The super-resolution results on the edges are the average of 2 directions, and the point at the cubic center is reconstructed by a multi-dimensional spline interpolation, which is introduced in [33] and will be detailed in the next subsection and Figure 7 (b) (a simplified sketch of the slice-based scheme follows).

Fig. 6: 3D super-resolution with 2D slices
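A heavily simplified Python sketch of slice-based 3D expansion is given below. Unlike SRN-SZ's actual face/edge/center scheme with spline-interpolated centers, it naively repeats slices along each slicing axis and averages the three directional results, so it only illustrates the general idea:

```python
import numpy as np

def sr3d_from_2d(block: np.ndarray, sr2d) -> np.ndarray:
    """Approximate 2x volumetric super-resolution using a 2D SR operator
    (sr2d: (H, W) -> (2H, 2W)) applied to slices along each of the three
    axes; the three directional volumes are then averaged."""
    outs = []
    for axis in range(3):
        moved = np.moveaxis(block, axis, 0)       # slices along `axis`
        up = np.stack([sr2d(s) for s in moved])   # 2D SR on every slice
        up = np.repeat(up, 2, axis=0)             # naive 2x along `axis`
        outs.append(np.moveaxis(up, 0, axis))
    return np.mean(outs, axis=0)                  # shape (2X, 2Y, 2Z)
```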
#### V-B2 Interpolation-based data predictor

We have observed that, when the data grid being reconstructed is small, the super-resolution network does not work well. Therefore, in the initial steps of data grid expansion, in which the current data grid is smaller than a threshold (i.e., has a dimension shorter than 64), the traditional QoZ-based interpolation [11] is leveraged for the grid expansion, which can auto-tune the best-fit interpolation configurations and error bounds. In addition to the QoZ interpolation, following the design proposed in [33], SRN-SZ also leverages several advanced interpolation designs such as multi-dimensional spline interpolation. Figure 7 presents and compares these two interpolation methods, and SRN-SZ dynamically selects the interpolation method for each interpolation level. This adaptive selection design improves both the efficiency of SRN-SZ and the reconstruction quality in the early steps of data grid expansion.

Fig. 7: Interpolations in SRN-SZ

## VI SRN-SZ Network Training

The super-resolution quality of the HAT network plays the most important role in optimizing the compression ratio under controlled data distortion in SRN-SZ, and the core of optimizing the super-resolution quality of HAT is its training process. The HAT networks in SRN-SZ are pre-trained offline, both with an assorted dataset and with domain-specific datasets. This design contributes to the flexibility and adaptability of SRN-SZ. Several strategies have been proposed for optimizing the training of the HAT networks in SRN-SZ. Figure 8 presents our HAT network training pipeline for SRN-SZ. In the pipeline, each network is trained in two rounds: general training from scratch and domain-specific training for fine-tuning. The following subsections describe the key designs of this pipeline.

### _Training data collection and preprocessing_

We have collected training data snapshots from a variety of well-known scientific simulations, including CESM-ATM [34], RTM [35], OCEAN, Miranda [27], JHTDB [36], Hurricane-ISABEL [37], SCALE-LetKF [38], NYX [39], and so on. The full list of the scientific simulations used by SRN-SZ for HAT network training is shown in Table I. With those assorted data snapshots, we first decompose 3D data arrays into 2D data slices, next normalize them to the [0,1] range, and then split all oversized (over 480x480) slices into smaller (480x480) slices according to the setting in [26]. When yielding the training data batches, the low-resolution and high-resolution image pairs are randomly cropped from those slices. Widely used image data augmentation methods like random flips and rotations are excluded from SRN-SZ network training, as we observe that those augmentation strategies harm the quality of super-resolution on test data. This assorted and pre-processed dataset is used for the general pre-training of the HAT network from scratch (a compact sketch of this preprocessing follows).
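The Python generator below sketches this preprocessing (slice decomposition, normalization to [0,1], and 480x480 tiling); per-slice normalization, slicing only along the first axis, and the generator structure are our assumptions for illustration:

```python
import numpy as np

def training_slices(volume: np.ndarray, tile: int = 480):
    """Yield normalized 2D training slices from a 3D array: slice along the
    first axis, map each slice to [0, 1], and split oversized slices into
    tiles of at most `tile` x `tile` (border tiles may be smaller)."""
    for plane in volume:
        s = plane.astype(np.float64)
        rng = s.max() - s.min()
        s = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        for i in range(0, s.shape[0], tile):
            for j in range(0, s.shape[1], tile):
                yield s[i:i + tile, j:j + tile]
```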
### _Domain-specific fine-tuning_

Datasets from different scientific domains and simulations present diverse patterns and characteristics. To make the trained network adapt better to such varied inputs, we fine-tune the super-resolution for certain scientific simulations that are intensively and consistently used for research and analysis. To this end, we develop domain-specific fine-tuning in SRN-SZ: after an initial training phase with the assorted database, SRN-SZ picks up several additional data snapshots generated by those simulations and then fine-tunes the network separately with the data of each simulation. In this way, SRN-SZ can achieve improved compression ratios on multiple widely used scientific simulation datasets. We will compare the rate-distortion of SRN-SZ with and without domain-specific fine-tuning in Section VII-B5.

### _Denoise training with Gaussian random noise_

As discussed in Section V-B, the data grid to be expanded in SRN-SZ is a lossy sample of the original data input. At the same time, we need its super-resolution to fit the original input as closely as possible. To simulate this process in the training of the HAT networks in SRN-SZ for better super-resolution results, we propose denoise training. Specifically, instead of simply using full data grids and the corresponding down-sampled data grids as the training data pairs, SRN-SZ adds Gaussian noise to the down-sampled data grids before feeding them into the network in the training phase. In this way, the trained network becomes capable of denoising the input to produce more accurate super-resolution outputs.

Fig. 8: SRN-SZ network training pipeline

Moreover, we observe that training networks with intense noise will damage their effectiveness in low-error-bound cases, so we separately train 3 base networks with different intensities of noise: strong noise (with a standard deviation of 1% of the data range), weak noise (with a standard deviation of 0.1% of the data range), and no noise. Those networks correspondingly serve different compression cases: high error bounds (larger than 1e-2), medium error bounds (1e-4 to 1e-2), and low error bounds (smaller than 1e-4); a sketch of the pair generation and network selection follows.
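The sketch below illustrates the denoise-training pair generation and the error-bound-based selection among the three base networks; the stride-2 downsampling and the helper names are illustrative assumptions:

```python
import numpy as np

# Noise intensities from this section, as fractions of the data range.
NOISE_SIGMA = {"strong": 1e-2, "weak": 1e-3, "none": 0.0}

def pick_network(error_bound: float) -> str:
    """Map a (value-range-relative) error bound to one of the 3 base networks."""
    if error_bound > 1e-2:
        return "strong"
    return "weak" if error_bound >= 1e-4 else "none"

def denoise_pair(hr: np.ndarray, level: str, rng=None):
    """One (noisy LR, clean HR) training pair: downsample the full grid,
    then add Gaussian noise scaled to a fraction of the data range."""
    rng = rng or np.random.default_rng()
    lr = hr[::2, ::2]                      # stand-in for the real LR grids
    sigma = NOISE_SIGMA[level] * float(hr.max() - hr.min())
    return lr + rng.normal(0.0, sigma, lr.shape), hr
```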
We do not perform comparison experiments with coordinate-network-based compressors because they suffer from very low compression speed (much slower than SRN-SZ), as they need to run a network training process for each single compression task [6, 7, 8].

#### VII-A3 Network training configurations

For the training of the HAT networks in SRN-SZ, we apply the network structure and training configurations described in [26]. In each training phase (including general training and domain-specific fine-tuning), we train the network on 8 GPUs for 200,000 iterations with a mini-batch size of 32. The initial learning rate is 2e-4 and is halved at steps [100K, 160K, 180K, 190K]. For the network training and compression of AE-SZ, we follow the configurations described in [5].

#### VII-A4 Evaluation Metrics

In the compression experiments, we adopt the value-range-based error bound mode (denoted as \(\epsilon\)), which is equivalent to the absolute error bound (denoted as \(e\)) with the relationship \(e=\epsilon\cdot value\_range\). The evaluation results are based on the following key metrics:

* Decompression error verification: verify that the decompression errors are strictly error-bounded.
* Compression ratio (CR) under the same error bound: the compression ratio is the metric users care about most; for a fair comparison, the compression ratios under fixed error bounds are presented.
* Rate-PSNR plots: plot curves for the compressors relating the compression bit rate to the decompression PSNR.
* Visualization with the same CR: compare the visual quality of the reconstructed data from different compressors at the same CR.
* Ablation study: verify the effectiveness of each SRN-SZ design component separately.

### _Evaluation Results and Analysis_

#### VII-B1 Verification of compression errors versus error bound

First of all, we verify that the decompression errors of SRN-SZ are strictly constrained within the error bounds. To this end, we plot the histograms of decompression errors for each compression task; two of them (on the QRAIN and QGRAUP fields of the Hurricane-ISABEL dataset) are presented in Figure 9. It can be clearly observed that the decompression errors of SRN-SZ always respect the error bound (\(e\)) in all cases, with no out-of-bound point-wise decompression errors. Having examined the error-bounded feature of SRN-SZ, in the following subsections we test, present, and analyze the compression ratios and qualities of SRN-SZ.

Fig. 9: Histograms of decompression errors from SRN-SZ

#### VII-B2 Compression ratio under the same error bounds

The compression ratios of all lossy compressors under the same fixed error bounds (1e-3, 1e-4, and 1e-5) are presented in Table III. An interesting fact is that, although proposed later than SZ3, some newer compressors (QoZ, SPERR, and FAZ) have not raised the compression ratios much on the tested datasets. In contrast, SRN-SZ improves the compression ratios of error-bounded lossy compression in almost all of the tested cases, over a variety of datasets and error bounds. In particular, under the error bound of 1e-4, SRN-SZ achieves a 75% compression ratio improvement over the second-best QoZ on the CLDHGH field of the CESM-ATM dataset, and under the error bound of 1e-3 it achieves a 44% compression ratio improvement on the FREQSH field of the same dataset. On the other datasets, SRN-SZ also obtains 3% to 20% compression ratio improvements.
Last, compared with the other deep learning-based compressor, SRN-SZ outperforms AE-SZ in an overall assessment.

#### VII-B3 Rate-distortion evaluation

Next, we present and analyze the rate-distortion evaluation of SRN-SZ and the other state-of-the-art error-bounded lossy compressors. Figure 10 displays the rate-distortion evaluation results of each lossy compressor on all datasets. In the plots, the x-axis is the bit rate and the y-axis is the PSNR. As in the case of same-error-bound compression ratios, SRN-SZ has the best rate-distortion curves on all the datasets. On the CESM-CLDHGH dataset, SRN-SZ achieves a \(60\%\) to \(80\%\) compression ratio improvement over the second-best SPERR in the PSNR range of 70 \(\sim\) 80. On the Ocean-TMXL dataset, SRN-SZ achieves a \(\sim\)20% compression ratio improvement over the second-best QoZ in the PSNR range of 60 \(\sim\) 70. Additionally, SRN-SZ outperforms all the other compressors by about \(5\%\) to \(15\%\) in compression ratio on the rest of the datasets.

Those results show that, for certain datasets on which traditional or autoencoder-based lossy compressors can only deliver limited compression ratios, SRN-SZ has the potential to optimize the compression further. The reasons can be attributed to three factors. First, those datasets have complex data characteristics and patterns that traditional data modeling techniques cannot fit well. Second, the newly proposed compression framework of SRN-SZ enables the compressor to directly leverage a super-resolution network for data prediction via data grid expansion (super-resolution), instead of applying a redundant autoencoder model whose latent vectors must be stored (as AE-SZ does). Third, the hybrid use of interpolations and super-resolution networks lets the interpolation compensate for the limitations of neural networks on small data grids.

Fig. 10: Rate-distortion evaluation (PSNR)

#### VII-B4 Visualization of decompressed data

As an example of the high compression quality of SRN-SZ, Figure 11 presents several visualizations of the decompression results on the CESM-CLDHGH data field from multiple compressors, together with the original data as the reference. For a fair comparison, for each compressor the data are compressed to a fixed compression ratio (around 32) and then decompressed. According to Figure 11 (we omit the visualization results of AE-SZ because it has poor visual quality, with PSNR \(\approx\) 53 under the specified compression ratio), the decompressed data of SRN-SZ has the lowest distortion from the original input, with a PSNR of 68.5, which is 5 dB higher than the second-best FAZ. The zoomed regions also show that SRN-SZ best preserves the local data patterns. The local visualization of the SRN-SZ decompressed data is nearly identical to the original data, whereas the outputs of the other compressors suffer from visible quality degradation.

#### VII-B5 Ablation Study

To verify and understand how the design details of SRN-SZ contribute to the overall compression quality, especially the components of the network pre-training pipeline, we conduct several ablation studies, identifying and quantifying the contributions of the corresponding design components. First, we examine the impact of the domain-specific fine-tuning (described in Section VI-B) on the training of the HAT networks in SRN-SZ.
We test the compression of SRN-SZ with networks free of domain-specific fine-tuning and compare its rate-distortion to that of the ordinary SRN-SZ. This comparison is detailed in Figure 12, with two examples presented (on Ocean-TMXL and NYX-Dark Matter Density). It shows that the domain-specific fine-tuning process (the blue curves in Figure 12) consistently improves the compression rate-distortion over SRN-SZ without network fine-tuning (the orange curves in Figure 12).

Next, we address the importance of the denoise training in SRN-SZ by analyzing and comparing the compression rate-distortion of SRN-SZ with fixed HAT networks, each trained with a certain intensity of noise added to the training data. In Figure 13, the rate-PSNR curves of SRN-SZ with HAT networks trained under 3 different noise intensities (zero noise, low noise of \(\sigma\)=1e-3, and high noise of \(\sigma\)=1e-2) are illustrated. Each configuration exhibits advantages over the others in a different bit-rate range. SRN-SZ with the high-noise-trained network outperforms the other configurations when the bit rate is smaller than 0.4 (corresponding to error bounds \(>\) 1e-2). The low-noise-trained HAT network optimizes the SRN-SZ compression under medium bit rates, and when the bit rate is large (error bound \(<\) 1e-4), leveraging the network trained with no noise achieves the best rate-distortion. From those results, we conclude that the error-bound-adaptive dynamic usage of differently trained HAT networks (with diverse noise intensities) essentially optimizes the compression of SRN-SZ.

Fig. 11: Visualization of reconstructed data (CESM-CLDHGH)
Fig. 12: Ablation study for the domain-specific fine-tuning
Fig. 13: Ablation study for the denoise training

## VIII Conclusion and Future Work

In this paper, we propose SRN-SZ, a deep learning-based error-bounded compressor that leverages one of the most advanced super-resolution neural network architectures, namely HAT. SRN-SZ abstracts the data prediction process in compression into a hierarchical data grid expansion paradigm, enabling the use of super-resolution neural networks for lossy compression. To exploit the advantages of different data reconstruction techniques, the data grid expansion in SRN-SZ is performed by a self-adaptive hybrid of super-resolution HAT networks and interpolations. For a better adaptation of the super-resolution networks to scientific data, SRN-SZ integrates a carefully designed network training pipeline for optimizing the network performance. In the evaluations, SRN-SZ outperforms all the other state-of-the-art error-bounded lossy compressors in terms of compression ratio and rate-distortion, achieving up to 75% compression ratio improvement under the same error bound and up to 80% compression ratio improvement under the same PSNR.

SRN-SZ still has a few limitations. First, since it is based on neural networks, its running speed is inevitably much lower than that of traditional lossy compressors, and the complexity of its integrated network makes it slower than some autoencoder-based compressors such as AE-SZ. Second, the compression ratios of SRN-SZ may not outperform the existing state-of-the-art compressors on datasets with high compressibility. Third, the training of the HAT networks in SRN-SZ is not yet fully optimized.
In future work, we will revise SRN-SZ in several aspects such as accelerating and fine-tuning the training and inference of its integrated neural networks, improving its compression ratio on easy-to-compress datasets, and so on. ## Acknowledgments This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration, responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering and early testbed platforms, to support the nation's exascale computing imperative. The material was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR), under contract DE-AC02-06CH11357, and supported by the National Science Foundation under Grant OAC-2003709, OAC-2104023, OAC-2311875, OAC-2311877, and OAC-2153451. We acknowledge the computing resources provided on Bebop (operated by Laboratory Computing Resource Center at Argonne) and on Theta and JLSE (operated by Argonne Leadership Computing Facility).
The rapid growth in the computing power and scale of modern supercomputing systems has brought severe challenges to the management of exascale scientific data. To maintain the usability of scientific data, error-bounded lossy compression has been proposed and developed as an essential technique for reducing the size of scientific data. Among the diverse datasets generated by a wide variety of scientific simulations, there are datasets that existing error-bounded lossy compression techniques cannot compress effectively. With the recent success of artificial intelligence, some researchers have begun to integrate neural networks into error-bounded lossy compression. However, these studies still suffer from limited compression ratios or low efficiency. To address these issues and improve the compression of hard-to-compress datasets, this paper proposes SRN-SZ, a deep learning-based error-bounded lossy compression technique for scientific data. SRN-
2309.14224
On $k-$WUR and its generalizations
We introduce two notions called $k-$weakly uniform rotundity ($k-$WUR) and $k-$weakly locally uniform rotundity ($k-$WLUR) in real Banach spaces. These are natural generalizations of the well-known concepts $k-$UR and WUR. By introducing two best approximation notions, namely $k-$weakly strong Chebyshevness and $k-$weakly uniformly strong Chebyshevness, we generalize some of the existing results to $k-$WUR and $k-$WLUR spaces. In particular, we present characterizations of $k-$WUR spaces in terms of $k-$weakly uniformly strong Chebyshevness. Also, the inheritance of the notions $k-$WUR and $k-$WLUR by quotient spaces is discussed. Further, we provide a necessary and sufficient condition for an infinite $\ell_p-$product space to be $k-$WUR (respectively, $k-$WLUR). As a consequence, we observe that the notions WUR and $k-$WUR coincide for an infinite $\ell_p-$product of a Banach space.
P. Gayathri, Vamsinadh Thota
2023-09-25T15:30:55
http://arxiv.org/abs/2309.14224v2
# On \(k-\)WUR and its generalizations

###### Abstract.

We introduce two notions called \(k-\)weakly uniform rotundity (\(k-\)WUR) and \(k-\)weakly locally uniform rotundity (\(k-\)WLUR) in real Banach spaces. These are natural generalizations of the well-known concepts \(k-\)UR and WUR. By introducing two best approximation notions, namely \(k-\)weakly strong Chebyshevness and \(k-\)weakly uniformly strong Chebyshevness, we generalize some of the existing results to \(k-\)WUR and \(k-\)WLUR spaces. In particular, we present characterizations of \(k-\)WUR spaces in terms of \(k-\)weakly uniformly strong Chebyshevness. Also, the inheritance of the notions \(k-\)WUR and \(k-\)WLUR by quotient spaces is discussed. Further, we provide a necessary and sufficient condition for an infinite \(\ell_{p}-\)product space to be \(k-\)WUR (respectively, \(k-\)WLUR). As a consequence, we observe that the notions WUR and \(k-\)WUR coincide for an infinite \(\ell_{p}-\)product of a Banach space.

Key words and phrases: \(k-\)weakly uniformly rotund; \(k-\)weakly locally uniformly rotund; \(k-\)weakly strongly Chebyshev; \(k-\)weakly uniformly strongly Chebyshev; property \(k-w\)UC; product space

**Definition 1.1**.: _[23] Let \(k\in\mathbb{Z}^{+}.\) A space \(X\) is said to be \(k-\)uniformly rotund (in short, \(k-\)UR), if for every \(\epsilon>0\),_

\[\delta_{X}^{k}(\epsilon)\coloneqq\inf\left\{1-\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}x_{i}\right\|:x_{1},x_{2},\ldots,x_{k+1}\in S_{X},V[(x_{i})_{i=1}^{k+1}]\geq\epsilon\right\}>0.\]

It is significant to note that the notion \(k-\)UR reinforces Singer's concept of \(k-\)rotundity [19], which is stated in the following manner. The space \(X\) is said to be \(k-\)rotund, if for any \(x_{1},x_{2},\ldots,x_{k+1}\in S_{X}\) with \(V[(x_{i})_{i=1}^{k+1}]>0,\) it follows that \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_{i}\|<1.\) Similarly, \(k-\)versions of some other rotundity properties have also been introduced and studied in the literature: \(k-\)locally uniform rotundity (in short, \(k-\)LUR) [23], \(k-\)midpoint locally uniform rotundity (in short, \(k-\)MLUR) [9], \(k-\)weakly midpoint locally uniform rotundity (in short, \(k-\)WMLUR) [29] and \(k-\)strong rotundity [26]. We refer to [7, 8, 11, 12, 13, 16, 22, 25, 30] for further study on these notions. It is natural to ask whether the concepts WUR and WLUR admit corresponding \(k-\)versions. In response to this question, we introduce the notions \(k-\)WUR and \(k-\)WLUR (see, Definition 3.1) in this article. These generalizations are natural in Sullivan's sense.

It is well established in the literature that the geometry of Banach spaces and the best approximation theory in Banach spaces are closely related. Several authors investigated various rotundity properties in terms of notions from best approximation theory. To see this, we need the following notations and notions. Let \(k\in\mathbb{Z}^{+}.\) For any non-empty bounded subset \(C\) of \(X,\) the \(k-\)dimensional diameter \(diam_{k}(C)\) is defined as \(diam_{k}(C)=\sup\{V[(x_{i})_{i=1}^{k+1}]:x_{1},x_{2},\ldots,x_{k+1}\in C\}.\) For any non-empty subset \(A\) of \(X,\) \(x\in X\) and \(\delta>0,\) we define \(P_{A}(x)=\{y\in A:\|x-y\|=d(x,A)\}\) and \(P_{A}(x,\delta)=\{y\in A:\|x-y\|\leq d(x,A)+\delta\},\) where \(d(x,A)=\inf\{\|x-y\|:y\in A\}.\) The set \(A\) is said to be proximinal at \(x,\) if \(P_{A}(x)\) is non-empty.
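For instance, in \(X=(\mathbb{R}^{2},\|\cdot\|_{\infty}),\) taking \(A=B_{X}\) and \(x=(2,0),\) we have \(d(x,A)=1\) and \(P_{A}(x)=\{(1,t):-1\leq t\leq 1\};\) thus \(A\) is proximinal at \(x,\) although the nearest point is far from unique. The notions recalled next quantify how small the sets \(P_{A}(x)\) and \(P_{A}(x,\delta)\) are.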
We say that \(A\) is \(k-\)Chebyshev [19] at \(x,\) if \(A\) is proximinal at \(x\) and \(diam_{k}(P_{A}(x))=0.\) We say that \(A\) is \(k-\)strongly Chebyshev (in short, \(k-\)SCh) [26] at \(x,\) if \(A\) is proximinal at \(x\) and for every \(\epsilon>0\) there exists \(\delta>0\) such that \(diam_{k}(P_{A}(x,\delta))<\epsilon.\) Let \(B\) be any non-empty subset of \(X.\) The set \(A\) is said to be proximinal on \(B,\) if \(A\) is proximinal at every \(x\in B;\) we define \(k-\)Chebyshev and \(k-\)strongly Chebyshev on \(B\) similarly. We say that \(A\) is \(k-\)uniformly strongly Chebyshev (in short, \(k-\)USCh) [11] on \(B,\) if \(A\) is proximinal on \(B\) and for every \(\epsilon>0\) there exists \(\delta>0\) such that \(diam_{k}(P_{A}(x,\delta))<\epsilon\) for all \(x\in B.\)

According to Singer [19], a space is \(k-\)rotund if every proximinal convex subset is \(k-\)Chebyshev, and the converse also holds. Recently, a characterization of \(k-\)UR spaces in terms of \(k-\)USCh was obtained by Kar and Veeramani [11]. Also, a characterization of \(k-\)MLUR spaces in terms of \(k-\)SCh was obtained in [13]. Further, Veena Sangeetha et al. [26] proved that a space is \(k-\)strongly rotund iff every closed convex subset is \(k-\)SCh. In light of the aforementioned results, it is reasonable to presume that the new geometric notions \(k-\)WUR and \(k-\)WLUR may be characterized or analyzed in terms of appropriate notions from best approximation theory. To achieve this, two best approximation notions, namely \(k-\)weakly strong Chebyshevness and \(k-\)weakly uniformly strong Chebyshevness (see, Definition 2.5), are defined in this article.

How quotient spaces inherit rotundity notions and how geometric notions behave under \(\ell_{p}-\)products has been extensively explored and is of great interest. Geremia and Sullivan [8] proved that for any \(1<p<\infty,\) \((\oplus_{p}X_{i})_{i\in\mathbb{N}}\) is \(2-\)UR iff all but one of the \(X_{i}\) are UR with a common modulus of convexity and the remaining space is \(2-\)UR. Later, in [30], the authors generalized this result for any \(k\in\mathbb{Z}^{+}.\) In [22], Smith and Turett proved that an \(\ell_{p}-\)product space cannot be \(k-\)UR whenever none of the underlying spaces is UR. Recently, all the preceding results were obtained for \(k-\)rotund spaces in an analogous way [25, 26]. This article seeks to examine the stability of the notions \(k-\)WUR and \(k-\)WLUR in this direction.

The paper is organized as follows. In Section 2, we present some properties of \(k-\)dimensional determinants, which are essential to prove our results. We introduce the notions \(k-\)weakly strong Chebyshev (in short, \(k-w\)SCh), \(k-\)weakly uniformly strong Chebyshev (in short, \(k-w\)USCh) and property \(k-w\)UC, and obtain some relationships among them. These notions will be used to provide some necessary and sufficient conditions for a space to be \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR). In Section 3, we introduce and study the notions \(k-\)WUR and \(k-\)WLUR. Some of the sequential characterizations of \(k-\)WUR and \(k-\)WLUR presented in this section are necessary to prove our main results. Characterizations of \(k-\)WUR spaces in terms of \(k-w\)USCh of the closed unit ball as well as of subspaces are obtained. Further, the concepts \(k-w\)SCh and property \(k-w\)UC are explored in \(k-\)WLUR and \(k-\)WMLUR spaces.
We provide counterexamples to demonstrate that the converses of some implications are not necessarily true. We investigate the stability of the notions \(k-\)WUR, \(k-\)WLUR and \(k-\)WMLUR in Section 4. First, we obtain two results that correlate the \(k-\)WUR and \(k-\)WLUR properties of a space with those of the associated quotient spaces. We provide necessary and sufficient conditions for finite and infinite \(\ell_{p}-\)product spaces to be \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR). As a consequence, we observe that the notions WUR (respectively, WLUR, WMLUR) and \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR) coincide for an infinite \(\ell_{p}-\)product of a Banach space.

## 2. Preliminaries

This section begins with some \(k-\)dimensional determinant properties that will be utilized throughout the article. We assume all subspaces to be closed.

**Remark 2.1**.: _Let \(x_{1},x_{2},\ldots,x_{k+1}\in X\) and \(f_{1},f_{2},\ldots,f_{k}\in X^{*}\). Then,_

1. \(D_{k}[(x_{i}+w)_{i=1}^{k+1};(f_{j})_{j=1}^{k}]=D_{k}[(x_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}],\) _for any_ \(w\in X;\)
2. \(D_{k}[(cx_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]=c^{k}D_{k}[(x_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}],\) _for any_ \(c\in\mathbb{R}\)_;_
3. \(D_{k}[(x_{i})_{i=1}^{k+1};(g_{j})_{j=1}^{k}]>0\) _for some_ \(g_{1},g_{2},\ldots,g_{k}\in S_{X^{*}}\) \(\Leftrightarrow\) _the set_ \(\{x_{i}-x_{k+1}:1\leq i\leq k\}\) _is linearly independent;_
4. \(D_{k}[(x_{i}+Y)_{i=1}^{k+1};(h_{j})_{j=1}^{k}]=D_{k}[(x_{i}+y_{i})_{i=1}^{k+1};(h_{j})_{j=1}^{k}],\) _where_ \(Y\) _is a subspace of_ \(X,\) \(x_{i}+Y\in X/Y,\) \(y_{i}\in Y\) _for all_ \(1\leq i\leq k+1\) _and_ \(h_{j}\in Y^{\perp}\cong(X/Y)^{*}\) _for all_ \(1\leq j\leq k.\)

In the following result, we observe certain continuity properties of \(k-\)dimensional determinants. The proof of [11, Lemmas 2.9 and 2.10] may also be used to prove the following lemma; however, we provide a concise direct proof here.

**Lemma 2.2**.: _Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)bounded sequences in \(X\). Then for any \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) the following statements hold._

1. _If_ \((c_{n}^{(1)}),(c_{n}^{(2)}),\ldots,(c_{n}^{(k+1)})\) _are_ \((k+1)-\)_sequences in_ \(\mathbb{R}\) _such that_ \(c_{n}^{(i)}\to c_{i}\) _for some_ \(c_{i}\in\mathbb{R}\)_, for all_ \(1\leq i\leq k+1,\) _then_ \(|D_{k}[(c_{n}^{(i)}x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]-D_{k}[(c_{i}x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\to 0.\)
2.
_If_ \((y_{n}^{(1)}),(y_{n}^{(2)}),\ldots,(y_{n}^{(k+1)})\) _are_ \((k+1)-\)_sequences in_ \(X\) _such that_ \(y_{n}^{(i)}\overset{w}{\to}y_{i}\) _for some_ \(y_{i}\in X,\) _for all_ \(1\leq i\leq k+1,\) _then_ \(|D_{k}[(y_{n}^{(i)}+x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]-D_{k}[(y_{i}+x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\to 0.\)

Proof.: (1): Let \((c_{n}^{(1)}),(c_{n}^{(2)}),\ldots,(c_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(\mathbb{R}\) such that \(c_{n}^{(i)}\to c_{i}\) for some \(c_{i}\in\mathbb{R}\), for all \(1\leq i\leq k+1.\) Let \(a_{n}=(D_{k}[(c_{n}^{(i)}x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]-D_{k}[(c_{i}x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}])\) for every \(n\in\mathbb{N}.\) Then

\[|a_{n}|=|D_{k}[(c_{n}^{(i)}x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]+\sum_{i=1}^{k}D_{k}[(c_{n}^{(j)}x_{n}^{(j)})_{j=1}^{i},(c_{j}x_{n}^{(j)})_{j=i+1}^{k+1};(f_{j})_{j=1}^{k}]-\sum_{i=1}^{k}D_{k}[(c_{n}^{(j)}x_{n}^{(j)})_{j=1}^{i},(c_{j}x_{n}^{(j)})_{j=i+1}^{k+1};(f_{j})_{j=1}^{k}]-D_{k}[(c_{i}x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\]

\[\leq\sum_{i=1}^{k+1}|D_{k}[(c_{n}^{(j)}x_{n}^{(j)})_{j=1}^{i},(c_{j}x_{n}^{(j)})_{j=i+1}^{k+1};(f_{j})_{j=1}^{k}]-D_{k}[(c_{n}^{(j)}x_{n}^{(j)})_{j=1}^{i-1},(c_{j}x_{n}^{(j)})_{j=i}^{k+1};(f_{j})_{j=1}^{k}]|.\]

Using the properties of determinants, for any \(n\in\mathbb{N}\) we have \(|a_{n}|\leq\sum_{i=1}^{k+1}|b_{n}^{(i)}|,\) where

\[b_{n}^{(i)}=\begin{vmatrix}1&\ldots&1&0&1&\ldots&1\\ f_{1}(c_{n}^{(1)}x_{n}^{(1)})&\ldots&f_{1}(c_{n}^{(i-1)}x_{n}^{(i-1)})&(c_{n}^{(i)}-c_{i})f_{1}(x_{n}^{(i)})&f_{1}(c_{i+1}x_{n}^{(i+1)})&\ldots&f_{1}(c_{k+1}x_{n}^{(k+1)})\\ \vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ f_{k}(c_{n}^{(1)}x_{n}^{(1)})&\ldots&f_{k}(c_{n}^{(i-1)}x_{n}^{(i-1)})&(c_{n}^{(i)}-c_{i})f_{k}(x_{n}^{(i)})&f_{k}(c_{i+1}x_{n}^{(i+1)})&\ldots&f_{k}(c_{k+1}x_{n}^{(k+1)})\end{vmatrix}\]

for all \(1\leq i\leq k+1.\) Denote \(M=\sup\{\|x_{n}^{(i)}\|,\|c_{n}^{(i)}x_{n}^{(i)}\|,\|c_{i}x_{n}^{(i)}\|:1\leq i\leq k+1,n\in\mathbb{N}\}.\) Now, for any \(1\leq i\leq k+1,\) by evaluating the determinant \(b_{n}^{(i)}\) along the \(i^{th}\) column, we have

\[|b_{n}^{(i)}|\leq\sum_{s=1}^{k}|(c_{n}^{(i)}-c_{i})f_{s}(x_{n}^{(i)})D_{k-1}[(c_{n}^{(j)}x_{n}^{(j)})_{j=1}^{i-1},(c_{j}x_{n}^{(j)})_{j=i+1}^{k+1};(f_{j})_{j=1,j\neq s}^{k}]|\leq|c_{n}^{(i)}-c_{i}|M^{k}kk!\]

and hence \(|b_{n}^{(i)}|\to 0.\) Therefore, \(|a_{n}|\to 0.\)

(2): The proof follows by an argument similar to the one used in the proof of (1).

For any \(k\in\mathbb{Z}^{+},\) we define \(\mathcal{S}_{k}(n)=\{\alpha\subseteq\{1,2,\ldots,k\}:\alpha\text{ contains exactly }n\text{ elements}\}\) for every \(n\in\{1,2,\ldots,k\}\) and \(\mathcal{S}_{k}(0)=\emptyset\).
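For instance, \(\mathcal{S}_{3}(2)=\{\{1,2\},\{1,3\},\{2,3\}\}\) and \(\mathcal{S}_{3}(3)=\{\{1,2,3\}\}.\)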
If \(n\in\{1,2,\ldots,k\}\) and \(\alpha\in\mathcal{S}_{k}(n),\) we denote the elements of \(\alpha\) as \(\alpha_{1},\alpha_{2},\ldots,\alpha_{n},\) where \(\alpha_{1}<\alpha_{2}<\cdots<\alpha_{n}.\) **Lemma 2.3**.: _Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) be \((k+2)-\)sequences in \(X\) and \(f_{1},f_{2},\ldots,f_{k+1}\in S_{X^{*}}.\) If \(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{\beta_{j}})_{j=1}^{k}]\to 0\) for all \(\alpha\in\mathcal{S}_{k+2}(k+1)\) and \(\beta\in\mathcal{S}_{k+1}(k),\) then \(D_{k+1}[(x_{n}^{(i)})_{i=1}^{k+2};(f_{j})_{j=1}^{k+1}]\to 0.\)_ Proof.: Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) be \((k+2)-\)sequences in \(X\) and \(f_{1},f_{2},\ldots,f_{k+1}\in S_{X^{*}}.\) Case\(-(i)\): Suppose \(D_{1}[x_{n}^{(1)},x_{n}^{(i+1)};f_{j}]\to 0\) for all \(1\leq i,j\leq k+1.\) For every \(n\in\mathbb{N},\) using Sylvester's determinant identity [10, Page 27], we have \(D_{k+1}[(x_{n}^{(i)})_{i=1}^{k+2};(f_{j})_{j=1}^{k+1}]=det([c_{j,i}^{(n)}]),\) where \(c_{j,i}^{(n)}=D_{1}[x_{n}^{(1)},x_{n}^{(i+1)};f_{j}]\) for all \(1\leq i,j\leq k+1.\) By evaluating the determinant \(det([c_{j,i}^{(n)}])\) along any row and using the assumption, we have \(D_{k+1}[(x_{n}^{(i)})_{i=1}^{k+2};(f_{j})_{j=1}^{k+1}]\to 0.\) Case\(-(ii)\): Suppose \(D_{1}[x_{n}^{(1)},x_{n}^{(i+1)};f_{j}]\) does not converge to \(0\) for some \(1\leq i,j\leq k+1.\) Now, we claim that, there exists \(r\in\mathbb{Z}^{+}\) such that \(2\leq r\leq k\) satisfying \[D_{r-1}[(x_{n}^{(\alpha_{i})})_{i=1}^{r};(f_{\beta_{j}})_{j=1}^{r-1}]\nto 0\text{ for some }\alpha\in\mathcal{S}_{k+2}(r),\beta\in\mathcal{S}_{k+1}(r-1)\] and \[D_{r}[(x_{n}^{(\lambda_{i})})_{i=1}^{r+1};(f_{\mu_{j}})_{j=1}^{r}]\to 0\text{ for every } \lambda\in\mathcal{S}_{k+2}(r+1),\mu\in\mathcal{S}_{k+1}(r).\] If \(D_{2}[(x_{n}^{(\lambda_{i})})_{i=1}^{3};(f_{\mu_{j}})_{j=1}^{2}]\to 0\) for every \(\lambda\in\mathcal{S}_{k+2}(3)\) and \(\mu\in\mathcal{S}_{k+1}(2),\) then by the assumption of Case\(-(ii),\) choose \(r=2.\) If not, then there exist \(\alpha\in\mathcal{S}_{k+2}(3)\) and \(\beta\in\mathcal{S}_{k+1}(2)\) such that \(D_{2}[(x_{n}^{(\alpha_{i})})_{i=1}^{3};(f_{\beta_{j}})_{j=1}^{2}]\nto 0.\) Now if \(D_{3}[(x_{n}^{(\lambda_{i})})_{i=1}^{4};(f_{\mu_{j}})_{j=1}^{3}]\to 0\) for every \(\lambda\in\mathcal{S}_{k+2}(4)\) and \(\mu\in\mathcal{S}_{k+1}(3),\) then choose \(r=3.\) Similarly, proceeding like this and using the hypothesis, the claim holds. 
Therefore, there exist \(r\in\mathbb{Z}^{+}\) with \(2\leq r\leq k,\)\(\epsilon>0\) and a subsequence \((n_{m})\) of \((n)\) satisfying \[|D_{r-1}[(x_{n_{m}}^{(\alpha_{i})})_{i=1}^{r};(f_{\beta_{j}})_{j=1}^{r-1}]|\geq\epsilon \text{ for some }\alpha\in\mathcal{S}_{k+2}(r),\beta\in\mathcal{S}_{k+1}(r-1),\text{for all }m\in\mathbb{N}\] \[D_{r}[(x_{n}^{(\lambda_{i})})_{i=1}^{r+1};(f_{\mu_{j}})_{j=1}^{r}]\to 0\text{ for every }\lambda\in\mathcal{S}_{k+2}(r+1),\mu\in\mathcal{S}_{k+1}(r).\] Without loss of generality, assume \(\alpha=\{1,2,\ldots,r\}\) and \(\beta=\{1,2,\ldots,r-1\}.\) For any \(m\in\mathbb{N},\) consider \[A_{m}=\begin{bmatrix}1&1&\ldots&1\\ f_{1}(x_{n_{m}}^{(1)})&f_{1}(x_{n_{m}}^{(2)})&\ldots&f_{1}(x_{n_{m}}^{(k+2)})\\ \vdots&\vdots&\ddots&\vdots\\ f_{k+1}(x_{n_{m}}^{(1)})&f_{k+1}(x_{n_{m}}^{(2)})&\ldots&f_{k+1}(x_{n_{m}}^{(k +2)})\end{bmatrix}.\] Now using Sylvester's determinant identity [10, Page 27] for every \(m\in\mathbb{N},\) we have \[det(A_{m})D_{r-1}[(x_{n_{m}}^{(i)})_{i=1}^{r};(f_{j})_{j=1}^{r-1}]^{k-r+1}=det (B_{m}),\] where \(B_{m}=[b_{s,t}^{(r,m)}]_{r+1\leq s,t\leq k+2}\) and \(b_{s,t}^{(r,m)}=D_{r}[(x_{n_{m}}^{(i)})_{i=1}^{r},x_{n_{m}}^{(t)};(f_{j})_{j=1 }^{r-1},f_{s-1}]\) for all \(r+1\leq s,t\leq k+2.\) By evaluating the determinant of \(B_{m}\) along any row and using the claim, we get \(det(B_{m})\to 0.\) Note that \(|D_{k+1}[(x_{n_{m}}^{(i)})_{i=1}^{k+2},(f_{j})_{j=1}^{k+1}]|=|det(A_{m})|\leq \frac{1}{\varepsilon^{k-r+1}}|det(B_{m})|\) and hence \(|D_{k+1}[(x_{n_{m}}^{(i)})_{i=1}^{k+2};(f_{j})_{j=1}^{k+1}]|\to 0.\) Thus, \(|D_{k+1}[(x_{n}^{(i)})_{i=1}^{k+2};(f_{j})_{j=1}^{k+1}]|\to 0.\) Now, we characterize Schur's property using \(k-\)dimensional determinants. The space \(X\) is said to have Schur's property, if norm and weak convergences coincide for sequences in \(X.\) **Proposition 2.4**.: _The following statements are equivalent._ 1. \(X\) _has Schur's property._ 2. 
_If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) _are_ \((k+1)-\)_sequences in_ \(X\) _and_ \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{\star}},\) _then_ \(V[(x_{n}^{(i)})_{i=1}^{k+1}]\to 0.\)

Proof.: \((1)\Rightarrow(2)\): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(X.\) Assume that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{\star}}.\) Observe that there exist \((k)-\)sequences \((f_{n}^{(1)}),(f_{n}^{(2)}),\ldots,(f_{n}^{(k)})\) in \(S_{X^{\star}}\) such that \(V[(x_{n}^{(i)})_{i=1}^{k+1}]\leq D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{n}^{(j)})_{j=1}^{k}]+\frac{1}{n}\) for all \(n\in\mathbb{N}.\) Now, it is enough to show that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{n}^{(j)})_{j=1}^{k}]\to 0.\)

Step\(-(1)\): Fix \(f_{2},f_{3},\ldots,f_{k}\in S_{X^{\star}}.\) For any \(f_{1}\in S_{X^{\star}},\) by evaluating \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\) along the \(2^{nd}\) row, we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]=f_{1}(\sum_{i=1}^{k+1}(-1)^{(2+i)}x_{n}^{(i)}M_{n}^{(2,i)}),\) where for any \(1\leq i\leq k+1\) and \(n\in\mathbb{N},\) \(M_{n}^{(2,i)}\) denotes the minor of the \((2,i)^{th}\) entry of the determinant \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}].\) By the assumption, \(f_{1}(\sum_{i=1}^{k+1}(-1)^{(2+i)}x_{n}^{(i)}M_{n}^{(2,i)})\to 0\) for all \(f_{1}\in S_{X^{\star}}\) and hence, by (1), we have \(\|\sum_{i=1}^{k+1}(-1)^{(2+i)}x_{n}^{(i)}M_{n}^{(2,i)}\|\to 0,\) which further implies \(f_{n}^{(1)}(\sum_{i=1}^{k+1}(-1)^{(2+i)}x_{n}^{(i)}M_{n}^{(2,i)})\to 0.\) Therefore, \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};f_{n}^{(1)},(f_{j})_{j=2}^{k}]\to 0.\)

Step\(-(2)\): Fix \(f_{3},f_{4},\ldots,f_{k}\in S_{X^{\star}}.\) For any \(f_{2}\in S_{X^{\star}},\) by evaluating \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};f_{n}^{(1)},(f_{j})_{j=2}^{k}]\) along the \(3^{rd}\) row, we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};f_{n}^{(1)},(f_{j})_{j=2}^{k}]=f_{2}(\sum_{i=1}^{k+1}(-1)^{(3+i)}x_{n}^{(i)}M_{n}^{(3,i)}),\) where for any \(1\leq i\leq k+1\) and \(n\in\mathbb{N},\) \(M_{n}^{(3,i)}\) denotes the minor of the \((3,i)^{th}\) entry of the determinant \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};f_{n}^{(1)},(f_{j})_{j=2}^{k}].\) Now, by using the argument of Step\(-(1),\) we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};f_{n}^{(1)},f_{n}^{(2)},(f_{j})_{j=3}^{k}]\to 0.\) By repeating the same procedure up to Step\(-(k),\) we get \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{n}^{(j)})_{j=1}^{k}]\to 0.\)

\((2)\Rightarrow(1)\): If \(X\) is a finite dimensional space, then there is nothing to prove.
Let \(X\) be an infinite dimensional space and \((x_{n}^{(1)})\) be a sequence in \(X\) such that \(x_{n}^{(1)}\xrightarrow{w}0.\) For each \(n\in\mathbb{N},\) by the Hahn-Banach theorem, there exists \(f_{n}^{(1)}\in S_{X^{\star}}\) such that \(f_{n}^{(1)}(x_{n}^{(1)})=\|x_{n}^{(1)}\|.\) Now, for every \(n\in\mathbb{N}\) and \(2\leq i\leq k,\) there exists \(x_{n}^{(i)}\in\cap_{j=1}^{i-1}ker(f_{n}^{(j)})\cap S_{X}\) and, by the Hahn-Banach theorem, there exists \(f_{n}^{(i)}\in S_{X^{\star}}\) such that \(f_{n}^{(i)}(x_{n}^{(i)})=1.\) Since \(x_{n}^{(1)}\xrightarrow{w}0\) and \((x_{n}^{(i)})\) are bounded sequences for all \(2\leq i\leq k,\) by Lemma 2.2, we have \(D_{k}[0,(x_{n}^{(i)})_{i=1}^{k};(g_{j})_{j=1}^{k}]\to 0\) for all \(g_{1},g_{2},\ldots,g_{k}\in S_{X^{*}}.\) Therefore, by (2), we have \(V[0,(x_{n}^{(i)})_{i=1}^{k}]\to 0,\) which further implies \(D_{k}[0,(x_{n}^{(i)})_{i=1}^{k};(f_{n}^{(j)})_{j=1}^{k}]\to 0.\) Since \(D_{k}[0,(x_{n}^{(i)})_{i=1}^{k};(f_{n}^{(j)})_{j=1}^{k}]=\|x_{n}^{(1)}\|,\) it follows that \(x_{n}^{(1)}\to 0.\) Hence the proof.

In the following definition, we introduce two notions called \(k-\)weakly strong Chebyshevness and \(k-\)weakly uniformly strong Chebyshevness, which are weaker than the notions \(k-\)strong Chebyshevness [26] and \(k-\)uniformly strong Chebyshevness [11], respectively. These new notions will be used to characterize \(k-\)WUR and \(k-\)WMLUR spaces in Section 3.

**Definition 2.5**.: _Let \(A\) and \(B\) be non-empty subsets of \(X,\) \(x\in X\) and \(k\in\mathbb{Z}^{+}.\) Then we say that \(A\) is_

1. \(k-\)_weakly strongly Chebyshev (in short,_ \(k-w\)_SCh) at_ \(x,\) _if_ \(A\) _is proximinal at_ \(x\) _and for every_ \(\epsilon>0,\) \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) _there exists_ \(\delta=\delta(\epsilon,x,(f_{j})_{j=1}^{k})>0\) _such that_ \(|D_{k}[(x_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\leq\epsilon\) _whenever_ \(x_{1},x_{2},\ldots,x_{k+1}\in P_{A}(x,\delta);\)
2. \(k-w\)_SCh on_ \(B,\) _if_ \(A\) _is_ \(k-w\)_SCh at every_ \(x\in B;\)
3. \(k-\)_weakly uniformly strongly Chebyshev (in short,_ \(k-w\)_USCh) on_ \(B,\) _if_ \(A\) _is proximinal on_ \(B\) _and for every_ \(\epsilon>0,\) \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) _there exists_ \(\delta=\delta(\epsilon,(f_{j})_{j=1}^{k})>0\) _such that_ \(|D_{k}[(x_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\leq\epsilon\) _whenever_ \(x_{1},x_{2},\ldots,x_{k+1}\in P_{A}(x,\delta)\) _and_ \(x\in B.\)

The notion \(1-w\)SCh (respectively, \(1-w\)USCh) coincides with the notion weakly strongly Chebyshev [2, 6] (respectively, weakly uniformly strongly Chebyshev [6]). Observe that \(A\) is \(k-w\)USCh on \(B\Rightarrow A\) is \(k-w\)SCh on \(B\Rightarrow A\) is \(k-\)Chebyshev on \(B.\) In Examples 2.6 and 4.17, we will see that the reverse implications are not necessarily true. Further, \(A\) is \(k-\)USCh (respectively, \(k-\)SCh) on \(B\Rightarrow A\) is \(k-w\)USCh (respectively, \(k-w\)SCh) on \(B;\) in general the converse does not hold (see, Example 3.16). However, using Proposition 2.4, the converse holds whenever the space has Schur's property.

**Example 2.6**.: _Consider the space \(X=(\ell_{1},\|\cdot\|_{H})\) from [20, Example 5] and \(k\in\mathbb{Z}^{+}.\) In [20], it is proved that \(X\) is rotund, but not MLUR. Then, by [15, Theorems 5.1.18 and 5.3.28], it follows that \(B_{X}\) is Chebyshev on \(X,\) but not approximatively compact on \(X\) (see, [2, Definition 1.1])._
Therefore, by [26, Lemma 2.8], \(B_{X}\) is not \(k-\)SCh on \(X.\) Since \(X\) has Schur's property, we have \(B_{X}\) is not \(k-\)wSCh on \(X.\) However, \(B_{X}\) is \(k-\)Chebyshev on \(X.\)_ The following sequential version of Definition 2.5 is easy to verify and will be used further. **Proposition 2.7**.: _Let \(A\) and \(B\) be non-empty subsets of \(X\) and \(x\in X.\) Then the following statements hold._ 1. \(A\) _is_ \(k-\)_wSCh at_ \(x\) _iff_ \(A\) _is proximinal at_ \(x\) _and for any_ \((k+1)-\)_sequences_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,\)__\((x_{n}^{(k+1)})\) _in_ \(A\) _such that_ \(\|x_{n}^{(i)}-x\|\to d(x,A)\) _for all_ \(1\leq i\leq k+1,\) _it follows that_ \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)__ 2. \(A\) _is_ \(k-\)_wUSCh on_ \(B\) _iff_ \(A\) _is proximinal on_ \(B\) _and for any_ \((k+1)-\)_sequences_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\)__\(\ldots,(x_{n}^{(k+1)})\) _in_ \(A,\) _a sequence_ \((y_{n})\) _in_ \(B\) _such that_ \(\|x_{n}^{(i)}-y_{n}\|-d(y_{n},A)\to 0\) _for all_ \(1\leq i\leq k+1,\) _it follows that_ \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)__ Now, we introduce a notion called property \(k-\)weakly UC which is a generalization of both property \(w\)UC [6] and property \(k-\)UC [11]. **Definition 2.8**.: _Let \(A\) and \(B\) be non-empty subsets of \(X\) and \(k\in\mathbb{Z}^{+}.\) The pair \((A,B)\) is said to have property \(k-\)weakly UC (in short, property \(k-\)wUC), if for any \((k+1)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) in \(A\) and a sequence \((y_{n})\) in \(B\) such that \(\|x_{n}^{(i)}-y_{n}\|\to d(A,B)\) for all \(1\leq i\leq k+1,\) it follows that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)_ The property \(k-w\)UC coincides with property \(w\)UC [6] for the case \(k=1.\) Further, if \((A,B)\) has property \(k-\)UC, then \((A,B)\) has property \(k-w\)UC. The converse does not hold in general (see, Example 3.16). However, using Proposition 2.4, the converse holds, whenever the space has Schur's property. The following result is a consequence of Lemma 2.3. On the other hand, it reveals that if a pair of subsets has property \(w\)UC, then it has property \(k-w\)UC and a similar statement holds for the notions \(k-w\)USCh and \(k-w\)SCh. **Proposition 2.9**.: _Let \(A\) and \(B\) be non-empty subsets of \(X\) and \(x\in X.\) Then the following statements hold._ 1. _If_ \((A,B)\) _has property_ \(k-w\)_UC, then_ \((A,B)\) _has property_ \((k+1)-w\)_UC._ 2. _If_ \(A\) _is_ \(k-w\)_USCh on_ \(B,\) _then_ \(A\) _is_ \((k+1)-w\)_USCh on_ \(B.\)__ 3. _If_ \(A\) _is_ \(k-w\)_SCh at_ \(x,\) _then_ \(A\) _is_ \((k+1)-w\)_SCh at_ \(x.\)__ Proof.: (1): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) be \((k+2)-\)sequences in \(A,\)\((y_{n})\) be a sequence in \(B\) such that \(\|x_{n}^{(i)}-y_{n}\|\to d(A,B)\) for all \(1\leq i\leq k+2\) and \(f_{1},f_{2},\ldots,f_{k+1}\in S_{X^{\star}}.\) By assumption, it follows that \(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{\beta j})_{j=1}^{k}]\to 0\) for all \(\alpha\in\mathcal{S}_{k+2}(k+1)\) and \(\beta\in\mathcal{S}_{k+1}(k).\) Hence, by Lemma 2.3, we have \(D_{k+1}[(x_{n}^{(i)})_{i=1}^{k+2};(f_{j})_{j=1}^{k+1}]\to 0.\) Thus, \((A,B)\) has property \((k+1)-w\)UC. The proofs of (2) and (3) follow in the similar lines of proof of (1). 
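It is instructive to record the case \(k=1.\) With the convention that the first row of the \(D_{k}\) determinant consists of ones (as in the matrices displayed above), for any \(x_{1},x_{2}\in X\) and \(f\in S_{X^{*}}\) we have

\[D_{1}[x_{1},x_{2};f]=\begin{vmatrix}1&1\\ f(x_{1})&f(x_{2})\end{vmatrix}=f(x_{2})-f(x_{1}).\]

Hence a pair \((A,B)\) has property \(1-w\)UC precisely when any two minimizing sequences \((x_{n}^{(1)}),(x_{n}^{(2)})\) as in Definition 2.8 satisfy \(x_{n}^{(1)}-x_{n}^{(2)}\xrightarrow{w}0,\) which explains the adverb "weakly" in these notions.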
We remark that the converses of the statements of Proposition 2.9 need not be true for any \(k\in\mathbb{Z}^{+}\) (see, Example 3.17). In the following proposition and remark, we present some relations among the notions \(k-w\)SCh, \(k-w\)USCh and property \(k-w\)UC. **Proposition 2.10**.: _Let \(A\) and \(B\) be non-empty subsets of \(X.\) Then the following statements hold._ 1. _If_ \(A\) _is_ \(k-w\)_USCh on_ \(B,\) _then_ \((A,B)\) _has property_ \(k-w\)_UC._ 2. _If_ \((A,B)\) _has property_ \(k-w\)_UC, then_ \(A\) _is_ \(k-w\)_USCh on_ \(B_{0},\) _where_ \(B_{0}=\{y\in B:\|x-y\|=d(A,B)\) _for some_ \(x\in A\}.\)__ Proof.: (1): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(A\) and \((y_{n})\) be a sequence in \(B\) such that \(\|x_{n}^{(i)}-y_{n}\|\to d(A,B)\) for all \(1\leq i\leq k+1.\) Since, for any \(1\leq i\leq k+1,\) \[0\leq\|x_{n}^{(i)}-y_{n}\|-d(y_{n},A)\leq\|x_{n}^{(i)}-y_{n}\|-d(A,B),\] we have \(\|x_{n}^{(i)}-y_{n}\|-d(y_{n},A)\to 0.\) Thus, by assumption, it follows that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{\star}}.\) Hence, \((A,B)\) has property \(k-w\)UC. (2): Clearly, \(A\) is proximinal on \(B_{0}.\) Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in A and \((y_{n})\) be a sequence in \(B_{0}\) such that \(\|x_{n}^{(i)}-y_{n}\|-d(y_{n},A)\to 0\) for all \(1\leq i\leq k+1.\) Since \(d(y_{n},A)=d(A,B)\) for all \(n\in\mathbb{N},\) we have \(\|x_{n}^{(i)}-y_{n}\|\to d(A,B)\) for all \(1\leq i\leq k+1.\) Therefore, by assumption, it follows that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{\star}}.\) Thus, \(A\) is \(k-w\)USCh on \(B_{0}.\) **Remark 2.11**.: _Let \(A\) be a non-empty bounded subset of \(X\) and \(B\) be a non-empty boundedly compact subset of \(X.\) If \(A\) is \(k-w\)SCh on \(B,\) then \((A,B)\) has property \(k-w\)UC._ The next example shows that the converse of the statements of Proposition 2.10 and Remark 2.11 need not be true. In particular, property \(k-w\)UC of the pair \((A,B)\) is not sufficient for the proximinality of \(A\) on \(B.\) **Example 2.12**.: 1. _Let_ \(k\in\mathbb{Z}^{+}\) _and_ \(M=(c_{0},\|\cdot\|_{\infty}).\) _By_ _[_4_, Chapter II, Corollary 6.9]__,_ \(M\) _admits an equivalent norm (say,_ \(\|\cdot\|_{\tau}\)_) such that_ \(X=(c_{0},\|\cdot\|_{\tau})\) _is WUR. Since_ \(X\) _is not reflexive,_ _there exists a subspace_ \(Y\) _of_ \(X\) _such that_ \(Y\) _is not proximinal at some_ \(x\in X.\) _However, by_ _[_6_, Theorem 4.6]__,_ \((Y,\{x\})\) _has property_ \(w\)_UC and hence, by Proposition_ 2.9_, it has property_ \(k-w\)_UC._ 2. _Let_ \(k\in\mathbb{Z}^{+}\) _and_ \(X=(\mathbb{R}^{k+1},\|\cdot\|_{\infty}).\) _Consider_ \(A=B_{X}\) _and_ \(B=3S_{X}\cup\{2(\sum_{i=1}^{k+1}e_{i})\}.\) _It is easy to prove that_ \((A,B)\) _has property_ \(w\)_UC and hence, by Proposition_ 2.9_, it has property_ \(k-w\)_UC. However, it is clear that_ \(A\) _is not_ \(k-\)_Chebyshev at_ \(3e_{1}\in B.\) _Thus,_ \(A\) _is not_ \(k-w\)_USCh on_ \(B.\)__ 3. 
_Let_ \(k\in\mathbb{Z}^{+},\)__\(X=(\mathbb{R}^{k+1},\|\cdot\|_{2})\oplus_{\infty}\mathbb{R}\) _and_ \(Y=(\mathbb{R}^{k+1},\|\cdot\|_{2})\oplus_{\infty}\{0\}\) _be the subspace of_ \(X.\) _Consider_ \(A=B_{Y}\) _and_ \(B=\{(2e_{1},0)\}\cup\{(0,1+\frac{1}{n}):n\in\mathbb{N}\}.\) _Observe that_ \(d(A,B)=1\) _and_ \(B_{0}=\{y\in B:\|x-y\|=d(A,B)\) _for some_ \(x\in A\}=\{(2e_{1},0)\}.\) _Clearly,_ \(A\) _is_ \(k-w\)_USCh on_ \(B_{0}.\) _For all_ \(n\in\mathbb{N}\) _and_ \(1\leq i\leq k+1,\) _define_ \(x_{n}^{(i)}=(e_{i},0)\) _and_ \(y_{n}=(0,1+\frac{1}{n}).\) _Therefore,_ \(\|x_{n}^{(i)}-y_{n}\|\to 1\) _for all_ \(1\leq i\leq k+1,\) _but by Remark_ 2.1_, there exists_ \(g_{1},g_{2},\ldots,g_{k}\in S_{X^{*}}\) _such that_ \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(g_{j})_{j=1}^{k}]=\epsilon\) _for some_ \(\epsilon>0.\) _Thus,_ \((A,B)\) _does not have property_ \(k-w\)_UC._ The proof of the subsequent result follows in similar lines of the proof of [7, Theorem 2.12]. **Theorem 2.13**.: _The following statements hold._ 1. _If_ \(B_{X}\) _is_ \(k-w\)_USCh (respectively,_ \(k-w\)_SCh) on_ \(rS_{X}\) _for some_ \(r\in(1,\infty),\) _then_ \(B_{X}\) _is_ \(k-w\)_USCh (respectively,_ \(k-w\)_SCh) on_ \(tS_{X}\) _for every_ \(t\in(1,\infty).\)__ 2. _If_ \(S_{X}\) _is_ \(k-w\)_USCh (respectively,_ \(k-w\)_SCh) on_ \(rS_{X}\) _for some_ \(r\in(1,\infty),\) _then_ \(S_{X}\) _is_ \(k-w\)_USCh (respectively,_ \(k-w\)_SCh) on_ \(tS_{X}\) _for every_ \(t\in(1,\infty).\)__ 3. _If_ \(S_{X}\) _is_ \(k-w\)_USCh (respectively,_ \(k-w\)_SCh) on_ \(rS_{X}\) _for some_ \(r\in(0,1),\) _then_ \(S_{X}\) _is_ \(k-w\)_USCh (respectively,_ \(k-w\)_SCh) on_ \(tS_{X}\) _for every_ \(t\in(0,\infty).\)__ Now, we present a characterization of \(k-\)rotund spaces in terms of \(k-\)rotundity of the quotient spaces. **Theorem 2.14**.: _Let \(\alpha,\beta\in\mathbb{Z}^{+}\) and \(X\) be a Banach space satisfying \(dim(X)\geq k+2,\)\(1\leq\alpha\leq dim(X)-(k+1)\) and \(k+1\leq\beta\leq dim(X)-1.\) Consider the following statements._ 1. \(X\) _is_ \(k-\)_rotund._ 2. \(X/M\) _is_ \(k-\)_rotund, whenever_ \(M\) _is a proximinal subspace of_ \(X.\)__ 3. \(X/F\) _is_ \(k-\)_rotund, whenever_ \(F\) _is a subspace of_ \(X\) _with_ \(dim(F)=\alpha.\)__ 4. \(X/Y\) _is_ \(k-\)_rotund, whenever_ \(Y\) _is a proximinal subspace of_ \(X\) _with_ \(codim(Y)=\beta.\)__ _Then \((1)\Leftrightarrow(2)\Leftrightarrow(3)\Rightarrow(4).\) Further, if \(X\) is reflexive, then all the statements are equivalent._ Proof.: \((1)\Rightarrow(2)\): Let \(M\) be a proximinal subspace of \(X.\) Let \(x_{1}+M,x_{2}+M,\ldots,x_{k+1}+M\in S_{X/M}\) with \(\|\sum_{i=1}^{k+1}(x_{i}+M)\|=k+1.\) Since \(M\) is proximinal on \(X,\) for every \(1\leq i\leq k+1\) there exists \(y_{i}\in M\) such that \(\|x_{i}-y_{i}\|=d(x_{i},M)=1.\) Note that \[k+1=\left\|\sum_{i=1}^{k+1}(x_{i}+M)\right\|=d\left(\sum_{i=1}^{k+1}x_{i},M \right)\leq\left\|\sum_{i=1}^{k+1}x_{i}-\sum_{i=1}^{k+1}y_{i}\right\|\leq k+1,\] which implies \(\|\sum_{i=1}^{k+1}(x_{i}-y_{i})\|=k+1.\) Therefore, by (1), we have \(V[(x_{i}-y_{i})_{i=1}^{k+1}]=0.\) Using Remark 2.1, it is easy to verify that \(V[(x_{i}+M)_{i=1}^{k+1}]\leq V[(x_{i}-y_{i})_{i=1}^{k+1}]\) and hence \(V[(x_{i}+M)_{i=1}^{k+1}]=0.\) Thus, \(X/M\) is \(k-\)rotund. \((2)\Rightarrow(3)\): Obvious. 
\((3)\Rightarrow(1)\): Suppose there exist \(x_{1},x_{2},\ldots,x_{k+1}\in S_{X}\) with \(\|\sum_{i=1}^{k+1}x_{i}\|=k+1\) such that \(V[(x_{i})_{i=1}^{k+1}]>0.\) By Hahn-Banach theorem, there exists \(f\in S_{X^{*}}\) such that \(f(\sum_{i=1}^{k+1}x_{i})=\|\sum_{i=1}^{k+1}x_{i}\|.\) Therefore, \(f(x_{i})=1\) for all \(1\leq i\leq k+1.\) Choose a subspace \(F\) such that \(F\subseteq ker(f)\), \(F\cap span\{x_{i}-x_{k+1}:1\leq i\leq k\}=\{0\}\) and \(dim(F)=\alpha.\) Hence, by Ascoli's formula, for all \(1\leq i\leq k+1\), we have \[1=|f(x_{i})|=d(x_{i},ker(f))\leq d(x_{i},F)\leq\|x_{i}\|=1.\] Therefore, \(\|x_{i}+F\|=1\) for all \(1\leq i\leq k+1.\) Similarly, we have \(\|\sum_{i=1}^{k+1}(x_{i}+F)\|=k+1.\) By (3), we get \(V[(x_{i}+F)_{i=1}^{k+1}]=0.\) Thus, by Remark 2.1, there exist \(\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\in\mathbb{R}\) such that \(\lambda_{k}=1\) and \(\sum_{i=1}^{k}\lambda_{i}(x_{i}-x_{k+1}+F)=0+F.\) Observe that \(\sum_{i=1}^{k}\lambda_{i}(x_{i}-x_{k+1})\in F.\) Therefore \(\sum_{i=1}^{k}\lambda_{i}(x_{i}-x_{k+1})=0,\) which implies \(V[(x_{i})_{i=1}^{k+1}]=0.\) This is a contradiction. \((2)\Rightarrow(4)\): Obvious. Let \(X\) be a reflexive space. Suppose there exist \(x_{1},x_{2},\ldots,x_{k+1}\in S_{X}\) with \(\|\sum_{i=1}^{k+1}x_{i}\|=k+1\) such that \(V[(x_{i})_{i=1}^{k+1}]>0.\) By Hahn-Banach theorem, there exists \(f\in S_{X^{*}}\) such that \(f(\sum_{i=1}^{k+1}x_{i})=\|\sum_{i=1}^{k+1}x_{i}\|.\) Therefore, \(f(x_{i})=1\) for all \(1\leq i\leq k+1.\) Choose a subspace \(Y\) such that \(Y\subseteq ker(f),\)\(codim(Y)=\beta\) and \(Y\cap span\{x_{i}-x_{k+1}:1\leq i\leq k\}=\{0\}\). Since \(Y\) is proximinal on \(X,\) by replacing \(F\) by \(Y\) in the proof of \((3)\Rightarrow(1)\) and repeating the argument involved in the proof, we get a contradiction. Hence the proof. As a consequence of Example 2.15, we observe that the implication \((4)\Rightarrow(1)\) of Theorem 2.14 need not be true in general, for any \(k\in\mathbb{Z}^{+}.\) **Example 2.15**.: _Let \(k\in\mathbb{Z}^{+}\) and \(X=M\oplus_{1}\left(\mathbb{R}^{k},\|\cdot\|_{1}\right),\) where \(M\) is the Read's space [17]. Clearly, \(X\) is not \(k-\)rotund. Let \(Y\) be any subspace of \(X\) with \(codim(Y)=k+2.\) Since any finite co-dimensional subspace of \(M\) with \(co\)-dimension greater than one is not proximinal on \(M\), by [2, Corollary 4.2], it follows that \(Y\) is not proximinal on \(X.\) Therefore, \(X\) does not have any proximinal subspace of co-dimension \(k+2.\)_ ## 3. Characterizations of \(k-\)Wur, \(k-\)WLUR and \(k-\)Wmlur In this section, we introduce and study two notions called \(k-\)weakly uniform rotundity and \(k-\)weakly locally uniform rotundity. We present a few characterizations of \(k-\)WUR, \(k-\)WLUR and \(k-\)WMLUR in terms of the notions discussed in Section 2. **Definition 3.1**.: _Let \(k\in\mathbb{Z}^{+}.\) A space \(X\) is said to be_ 1. \(k-\)_weakly uniformly rotund (in short,_ \(k-\)_WUR), if for every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\)_,_ \[\delta_{X}^{k}(\epsilon,(f_{j})_{j=1}^{k})\coloneqq\inf\left\{\{1\}\cup\left\{ 1-\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}x_{i}\right\|:\begin{array}{l}x_{1},x_ {2},\ldots,x_{k+1}\in S_{X},\\ |D_{k}[(x_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\geq\epsilon\end{array}\right\} \right\}>0;\] 2. 
\(k-\)_weakly locally uniformly rotund (in short,_ \(k-\)_WLUR) at_ \(x\in S_{X}\)_, if for every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\)_,_ \[\delta_{X}^{k}(\epsilon,x,(f_{j})_{j=1}^{k})\coloneqq\inf\left\{\{1\}\cup\left\{ 1-\frac{1}{k+1}\left\|x+\sum_{i=1}^{k}x_{i}\right\|:\begin{array}{l}x_{1},x_ {2},\ldots,x_{k}\in S_{X},\\ |D_{k}[x,(x_{i})_{i=1}^{k};(f_{j})_{j=1}^{k}]|\geq\epsilon\end{array}\right\} \right\}>0.\] _We say_ \(X\) _is_ \(k-\)_weakly locally uniformly rotund (in short,_ \(k-\)_WLUR), if_ \(X\) _is_ \(k-\)_WLUR at every_ \(x\in S_{X}\)_._ Clearly, the notion \(1-\)WUR (respectively, \(1-\)WLUR) coincide with the notion WUR (respectively, WLUR). The equivalent sequential formulation of the notions \(k-\)WUR and \(k-\)WLUR given in the following results are useful to prove our results. **Proposition 3.2**.: _The following statements are equivalent._ 1. \(X\) _is_ \(k-\)_WUR._ 2. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) _are_ \((k+1)-\)_sequences in_ \(S_{X}\) _such that_ \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 1,\) _then_ \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) _._ 3. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) _are_ \((k+1)-\)_sequences in_ \(B_{X}\) _such that_ \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 1,\) _then_ \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)__ 4. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) _are_ \((k+1)-\)_sequences in_ \(X\) _such that_ \(\|x_{n}^{(i)}\|\to 1\) _for all_ \(1\leq i\leq k+1\) _and_ \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 1,\) _then_ \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)__ Proof.: \((1)\Leftrightarrow(2)\): These implications follow from the Definition 3.1. \((2)\Rightarrow(4)\): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(X\) with \(\|x_{n}^{(i)}\|\to 1\) for all \(1\leq i\leq k+1\) and \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 1.\) Let \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) For all \(n\in\mathbb{N}\) and \(1\leq i\leq k+1,\) define \(\overline{x}_{n}^{(i)}=\frac{x_{n}^{(i)}}{\|x_{n}^{(i)}\|}.\) Since \[1\geq\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}\overline{x}_{n}^{(i)}\right\|\geq \frac{1}{k+1}\left\|\sum_{i=1}^{k+1}x_{n}^{(i)}\right\|-\frac{1}{k+1}\sum_{i=1 }^{k+1}\left\|\overline{x}_{n}^{(i)}-x_{n}^{(i)}\right\|,\] it follows that \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}\overline{x}_{n}^{(i)}\|\to 1.\) Thus, by assumption, \(D_{k}[(\overline{x}_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Further, using Lemma 2.2, we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) \((4)\Rightarrow(3)\): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(B_{X}\) such that \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 1.\) Note that for any \(1\leq i\leq k+1,\) we have \[\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}x_{n}^{(i)}\right\|\leq\frac{\|x_{n}^{(i)} \|}{k+1}+\frac{k}{k+1}\leq 1\] and hence \(\|x_{n}^{(i)}\|\to 1.\) Thus, by (4), \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) \((3)\Rightarrow(2)\): Obvious. The proof of the following corollary is similar to the proof of Proposition 3.2. **Corollary 3.3**.: _Let \(x\in S_{X}\). Then the following statements are equivalent._ 1. 
\(X\) _is_ \(k-\)_WLUR at_ \(x\)_._ 2. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k)})\) _are_ \((k)-\)_sequences in_ \(S_{X}\) _such that_ \(\frac{1}{k+1}\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|\to 1,\) _then_ \(D_{k}[x,(x_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)__ 3. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k)})\) _are_ \((k)-\)_sequences in_ \(B_{X}\) _such that_ \(\frac{1}{k+1}\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|\to 1,\) _then_ \(D_{k}[x,(x_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)__ 4. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k)})\) _are_ \((k)-\)_sequences in_ \(X\) _such that_ \(\|x_{n}^{(i)}\|\to 1\) _for all_ \(1\leq i\leq k\) _and_ \(\frac{1}{k+1}\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|\to 1,\) _then_ \(D_{k}[x,(x_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\)__ It is easy to verify that the observations given in the following remark hold. **Remark 3.4**.: 1. _From the definitions, it follows that_ \(k-\)_UR_ \(\Rightarrow\)__\(k-\)_WUR_ \(\Rightarrow\)__\(k-\)_WLUR_ \(\Rightarrow\)__\(k-\)_rotund. Further,_ \(k-\)_LUR_ \(\Rightarrow\)__\(k-\)_WLUR._ 2. _In general, none of the implications given in_ \((1)\) _can be reversed (see, Examples_ 3.7 _and_ 4.15_). However, if the space is finite dimensional, then all the notions in_ \((1)\) _coincide._ 3. _There is no relation between the notion_ \(k-\)_WUR and any of the notions_ \(k-\)_LUR,_ \(k-\)_MLUR,_ \(k-\)_strongly rotund (see, Examples_ 3.7 _and_ 4.15_). Also, there is no relation between the notion_ \(k-\)_WLUR and any of the notions_ \(k-\)_MLUR,_ \(k-\)_strongly rotund (see, Examples_ 3.7 _and_ 4.15_)._ The following result is an outcome of Lemma 2.3, wherein we show that if a space is WUR (respectively, WLUR), then it is \(k-\)WUR (respectively, \(k-\)WLUR) for any \(k\in\mathbb{Z}^{+}.\) **Proposition 3.5**.: _Let \(x\in S_{X}.\) Then the following statements hold._ 1. _If_ \(X\) _is_ \(k-\)_WUR, then_ \(X\) _is_ \((k+1)-\)_WUR._ 2. _If_ \(X\) _is_ \(k-\)_WLUR at_ \(x,\) _then_ \(X\) _is_ \((k+1)-\)_WLUR at_ \(x.\)__ Proof.: (1): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\dots,(x_{n}^{(k+2)})\) be \((k+2)-\)sequences in \(S_{X}\) such that \(\|\sum_{i=1}^{k+2}x_{n}^{(i)}\|\to k+2\) and \(f_{1},f_{2},\dots,f_{k+1}\in S_{X^{*}}.\) Since for any \(1\leq j\leq k+2,\) we have \[\left\|\sum_{i=1}^{k+2}x_{n}^{(i)}\right\|-1=\left\|\sum_{i=1}^{k+2}x_{n}^{(i) }\right\|-\left\|x_{n}^{(j)}\right\|\leq\left\|\sum_{i=1,i\neq j}^{k+2}x_{n}^{ (i)}\right\|\leq k+1,\] which implies \(\|\sum_{i=1,i\neq j}^{k+2}x_{n}^{(i)}\|\to k+1.\) By assumption, we have \(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{\beta_{j}})_{j=1}^{k}]\to 0\) for all \(\alpha\in\mathcal{S}_{k+2}(k+1)\) and \(\beta\in\mathcal{S}_{k+1}(k).\) Therefore, by Lemma 2.3, \(D_{k+1}[(x_{n}^{(i)})_{i=1}^{k+2};(f_{j})_{j=1}^{k+1}]\to 0.\) Thus, \(X\) is \((k+1)-\)WUR. 
(2): Let \(x\in S_{X},\) \((x_{n}^{(1)}),(x_{n}^{(2)}),\dots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(S_{X}\) with \(\|x+\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+2\) and \(f_{1},f_{2},\dots,f_{k+1}\in S_{X^{*}}.\) Note that \(\|x+\sum_{i=1}^{k}x_{n}^{(\alpha_{i})}\|\to k+1\) for all \(\alpha\in\mathcal{S}_{k+1}(k).\) Since \(X\) is \(k-\)WLUR at \(x,\) it follows that \(|D_{k}[x,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{\beta_{j}})_{j=1}^{k}]|\to 0\) for all \(\alpha,\beta\in\mathcal{S}_{k+1}(k).\) Now, as a result of [24, Lemma 2], for any \(\beta\in\mathcal{S}_{k+1}(k),\) we have

\[|D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{\beta_{j}})_{j=1}^{k}]|\leq\sum_{\alpha\in\mathcal{S}_{k+1}(k)}|D_{k}[x,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{\beta_{j}})_{j=1}^{k}]|,\]

which implies \(|D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{\beta_{j}})_{j=1}^{k}]|\to 0.\) Thus, by Lemma 2.3, \(D_{k+1}[x,(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k+1}]\to 0.\) Hence, \(X\) is \((k+1)-\)WLUR at \(x.\)

The subsequent example shows that the converses of the statements of Proposition 3.5 need not be true for any \(k\in\mathbb{Z}^{+}.\) Further, we will see in Example 4.16 that there exists a strongly rotund space which is \((k+1)-\)WUR, but not \(k-\)WLUR.

**Example 3.6**.: _Let \(k\in\mathbb{Z}^{+},\) \(k\geq 2\) and \(i_{1}<i_{2}<\dots<i_{k}.\) For each \(x=(x_{1},x_{2},\dots)\) in \(l_{2},\) define_

\[\|x\|_{i_{1},i_{2},\dots,i_{k}}^{2}=\left(\sum_{j=1}^{k}|x_{i_{j}}|\right)^{2}+\sum_{i\neq i_{1},i_{2},\dots,i_{k}}x_{i}^{2}.\]

_Let \(X=(l_{2},\|\cdot\|_{i_{1},i_{2},\dots,i_{k}}).\) In [12, Example 2], it is proved that \(X\) is \(k-\)UR, but not \((k-1)-\)rotund. Thus, \(X\) is \(k-\)WUR, but not \((k-1)-\)WLUR._

As noted in Remark 3.4, we now provide an example.

**Example 3.7**.: _Consider the space \(X=(\ell_{2},\|\cdot\|_{W})\) from [20, Example 2] and \(k\in\mathbb{Z}^{+}.\) In [20], it is proved that \(X\) is WUR, but not MLUR, and that it does not have the Kadets-Klee property (see, [15, Definition 2.5.26]). From [15, Theorems 5.1.18 and 5.3.28], it follows that \(B_{X}\) is Chebyshev on \(X,\) but not approximatively compact on \(X.\) Therefore, by [26, Lemma 2.8], \(B_{X}\) is not \(k-\)SCh on \(X.\) Thus, by [13, Theorem 2.6], \(X\) is not \(k-\)MLUR. Observe that \(X\) is not \(k-\)strongly rotund. However, by Proposition 3.5, \(X\) is \(k-\)WUR._

Now, we present some sequential characterizations of \(k-\)WUR in terms of a uniform version of \(k-\)WMLUR.

**Definition 3.8**.: [29] _Let \(k\in\mathbb{Z}^{+}.\) A space \(X\) is said to be \(k-\)WMLUR, if for any \((k+1)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\dots,(x_{n}^{(k+1)})\) in \(S_{X}\) and \(x\in S_{X}\) with \(\|(k+1)x-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0,\) it follows that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\dots,f_{k}\in S_{X^{*}}.\)_

It is easy to verify from the definitions that \(k-\)WLUR \(\Rightarrow\) \(k-\)WMLUR \(\Rightarrow\) \(k-\)rotund. However, none of the implications can be reversed in general (see, Examples 3.23 and 4.15).

**Theorem 3.9**.: _The following statements are equivalent._

1. \(X\) _is_ \(k-\)_WUR._
2. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) _are_ \((k+2)-\)_sequences in_ \(S_{X}\) _such that_ \(\|(k+1)x_{n}^{(k+2)}-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0,\) _then_ \(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) _and_ \(\alpha\in\mathcal{S}_{k+2}(k+1).\)
3.
_If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) _are_ \((k+2)-\)_sequences in_ \(S_{X}\) _such that_ \(\|(k+1)x_{n}^{(k+2)}-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0,\) _then_ \(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) _and for some_ \(\alpha\in\mathcal{S}_{k+2}(k+1).\)__ Proof.: \((1)\Rightarrow(2)\): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) be \((k+2)-\)sequences in \(S_{X}\) with \(\|(k+1)x_{n}^{(k+2)}-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Observe that \(\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+1.\) Thus, by \((1),\) we get \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Now, it is enough to show that \(D_{k}[x_{n}^{(k+2)},(x_{n}^{(\beta_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0\) for all \(\beta\in\mathcal{S}_{k+1}(k).\) For every \(n\in\mathbb{N},\) consider \(y_{n}=\frac{1}{k+1}\sum_{i=1}^{k+1}x_{n}^{(i)}\) and let \(\beta\in\mathcal{S}_{k+1}(k)\). Note that for any \(n\in\mathbb{N},\) we have \(|D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|=(k+1)|D_{k}[y_{n},(x_{n}^ {(\beta_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]|,\) which implies \(D_{k}[y_{n},(x_{n}^{(\beta_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) Since \(y_{n}-x_{n}^{(k+2)}\to 0,\) it follows from Lemma 2.2 that \[|D_{k}[y_{n}-x_{n}^{(k+2)}+x_{n}^{(k+2)},(x_{n}^{(\beta_{i})})_{i=1}^{k};(f_{j })_{j=1}^{k}]-D_{k}[x_{n}^{(k+2)},(x_{n}^{(\beta_{i})})_{i=1}^{k};(f_{j})_{j=1 }^{k}]|\to 0.\] Therefore, \(D_{k}[x_{n}^{(k+2)},(x_{n}^{(\beta_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) \((2)\Rightarrow(3)\): Obvious. \((3)\Rightarrow(1)\): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(S_{X}\) such that \(\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+1\) and \(f_{1},f_{2},\ldots,f_{k}\) in \(S_{X^{*}}.\) For each \(n\in\mathbb{N},\) define \(x_{n}^{(k+2)}=\frac{\sum_{i=1}^{k+1}x_{n}^{(i)}}{\|\sum_{i=1}^{k+1}x_{n}^{(i)} \|}.\) Since \(\|(k+1)x_{n}^{(k+2)}-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0,\) by \((3),\)\(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for some \(\alpha\in\mathcal{S}_{k+2}(k+1).\) If \(\alpha=\{1,2,\ldots,k+1\},\) then it is done. Assume \(D_{k}[x_{n}^{(k+2)},(x_{n}^{(\beta_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0\) for some \(\beta\in\mathcal{S}_{k+1}(k).\) Then, using Lemma 2.2, we have \(D_{k}\left[\frac{1}{k+1}\sum_{i=1}^{k+1}x_{n}^{(i)},(x_{n}^{(\beta_{i})})_{i=1 }^{k};(f_{j})_{j=1}^{k}\right]\to 0.\) Thus, \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Hence, \(X\) is \(k-\)WUR. The proof of the subsequent corollary follows in similar lines of the proof of Theorem 3.9. **Corollary 3.10**.: _Let \(x\in S_{X}.\) Then the following statements are equivalent._ 1. \(X\) _is_ \(k-\)_WLUR at_ \(x.\)__ 2. _If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) _are_ \((k+2)-\)_sequences in_ \(S_{X}\) _such that_ \(x_{n}^{(1)}=x\) _for all_ \(n\in\mathbb{N}\) _and_ \(\|(k+1)x_{n}^{(k+2)}-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0,\) _then_ \(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) _and_ \(\alpha\in\mathcal{S}_{k+2}(k+1).\)__ 3. 
_If_ \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+2)})\) _are_ \((k+2)-\)_sequences in_ \(S_{X}\) _such that_ \(x_{n}^{(1)}=x\) _for all_ \(n\in\mathbb{N}\) _and_ \(\|(k+1)x_{n}^{(k+2)}-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0,\) _then_ \(D_{k}[(x_{n}^{(\alpha_{i})})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) _for all_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) _and for some_ \(\alpha\in\mathcal{S}_{k+2}(k+1).\)

In the following proposition and example, we discuss some relationships between rotundity properties of a space and its double dual.

**Proposition 3.11**.: _If \(X\) is \(k-\)WUR, then \(X^{**}\) is \(k-\)rotund._

Proof.: Suppose \(X^{**}\) is not \(k-\)rotund. Then there exist \((k+1)\) elements \(x_{1}^{**},x_{2}^{**},\ldots,x_{k+1}^{**}\) in \(S_{X^{**}}\) such that \(\|\sum_{i=1}^{k+1}x_{i}^{**}\|=k+1,\) but \(D_{k}[(x_{i}^{**})_{i=1}^{k+1};(\tilde{g}_{j})_{j=1}^{k}]=\epsilon\) for some \(\tilde{g}_{1},\tilde{g}_{2},\ldots,\tilde{g}_{k}\in S_{X^{***}}\) and \(\epsilon>0.\) For every \(1\leq i\leq k+1,\) by Goldstine's theorem, there exists a net \((x_{\alpha_{i}}^{(i)})_{\alpha_{i}\in I_{i}}\) in \(B_{X}\) such that \(x_{\alpha_{i}}^{(i)}\xrightarrow{w^{*}}x_{i}^{**}.\) Then it is easy to find subnets \((x_{\beta}^{(i)})\) of \((x_{\alpha_{i}}^{(i)}),\)\(1\leq i\leq k+1,\) with the same index set. Since the norm is a \(w^{*}-\)lower semicontinuous function, we have

\[k+1=\left\|\sum_{i=1}^{k+1}x_{i}^{**}\right\|\leq\liminf_{\beta}\left\|\sum_{i=1}^{k+1}x_{\beta}^{(i)}\right\|\leq\limsup_{\beta}\left\|\sum_{i=1}^{k+1}x_{\beta}^{(i)}\right\|\leq k+1,\]

which implies \(\|\sum_{i=1}^{k+1}x_{\beta}^{(i)}\|\to k+1.\) Therefore, by assumption, \(D_{k}[(x_{\beta}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Since \(x_{\beta}^{(i)}\xrightarrow{w^{*}}x_{i}^{**}\) for all \(1\leq i\leq k+1,\) we have \(D_{k}[(x_{\beta}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to D_{k}[(x_{i}^{**})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Therefore, \(D_{k}[(x_{i}^{**})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]=0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Further, for every \(1\leq j\leq k,\) by Goldstine's theorem, there exists a net \((f_{\lambda_{j}}^{(j)})_{\lambda_{j}\in J_{j}}\) in \(B_{X^{*}}\) such that \(f_{\lambda_{j}}^{(j)}\xrightarrow{w^{*}}\tilde{g}_{j}.\) Then for every \(1\leq j\leq k\) it is easy to find a subnet \((f_{\gamma}^{(j)})\) of \((f_{\lambda_{j}}^{(j)})\) with the same index set. Since \(f_{\gamma}^{(j)}\xrightarrow{w^{*}}\tilde{g}_{j}\) for all \(1\leq j\leq k,\) it follows that \(D_{k}[(x_{i}^{**})_{i=1}^{k+1};(f_{\gamma}^{(j)})_{j=1}^{k}]\to D_{k}[(x_{i}^{**})_{i=1}^{k+1};(\tilde{g}_{j})_{j=1}^{k}].\) Thus \(D_{k}[(x_{i}^{**})_{i=1}^{k+1};(\tilde{g}_{j})_{j=1}^{k}]=0,\) which is a contradiction. Hence \(X^{**}\) is \(k-\)rotund.

The following example illustrates that in Proposition 3.11 the assumption \(k-\)WUR cannot be replaced by \(k-\)LUR (hence, \(k-\)WLUR). Further, we will see in Example 4.18 that the property \(k-\)WUR of a space \(X\) is not sufficient for the space \(X^{**}\) to be \(k-\)WMLUR. The converse of Proposition 3.11 need not be true in general. To see this, consider a strongly rotund space which is not \(k-\)WLUR (see, Examples 4.15 and 4.16).

**Example 3.12**.: _Let \(X=(l_{1},\|\cdot\|_{1})\) and \(k\in\mathbb{Z}^{+}.\) By [4, Chapter II, Theorem 2.6], \(X\) admits an equivalent norm (say, \(\|\cdot\|_{r}\)) such that \(Y=(l_{1},\|\cdot\|_{r})\) is LUR. Note that, by [4, Chapter II, Corollary 3.5], \(Y^{*}\) is not smooth. Thus, \(Y^{**}\) is not rotund. Now, consider the Banach space \(Z=l_{2}(Y).\) Then, by [14, Theorem 1.1], \(Z\) is LUR (hence, \(k-\)LUR).
Clearly, \(Z^{**}\cong l_{2}(Y^{**}).\) Therefore, by [25, Corollary 2.10], \(Z^{**}\) is not \(k-\)rotund._

We present some necessary and/or sufficient conditions for the notions \(k-\)WUR, \(k-\)WLUR and \(k-\)WMLUR in terms of property \(k-\)\(w\)UC and the notions \(k-\)\(w\)USCh and \(k-\)\(w\)SCh. In the next result, we obtain a characterization of \(k-\)WUR in terms of property \(k-\)\(w\)UC.

**Theorem 3.13**.: _Let \(r>1.\) Then the following statements are equivalent._

1. \(X\) _is_ \(k-\)_WUR._
2. _If_ \(A\) _and_ \(B\) _are non-empty subsets of_ \(X\) _such that_ \(A\) _is convex, then_ \((A,B)\) _has property_ \(k-\)_wUC._
3. \((B_{X},rS_{X})\) _has property_ \(k-\)_wUC._
4. \((S_{X},rS_{X})\) _has property_ \(k-\)_wUC._

Proof.: \((1)\Rightarrow(2)\): Let \(A\) and \(B\) be non-empty subsets of \(X\) and \(A\) be convex. Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(A,\)\((y_{n})\) be a sequence in \(B\) such that \(\|x_{n}^{(i)}-y_{n}\|\to d(A,B)\) for all \(1\leq i\leq k+1\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) If \(d(A,B)=0,\) then it is clear that \((A,B)\) has property \(k-\)\(w\)UC. Assume \(d(A,B)>0.\) Since \(A\) is convex, we have

\[d(A,B)\leq\left\|\frac{1}{k+1}\sum_{i=1}^{k+1}x_{n}^{(i)}-y_{n}\right\|=\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}(x_{n}^{(i)}-y_{n})\right\|\leq\frac{1}{k+1}\sum_{i=1}^{k+1}\|x_{n}^{(i)}-y_{n}\|\]

and hence \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}(x_{n}^{(i)}-y_{n})\|\to d(A,B).\) Now, by (1), we have \(D_{k}\left[\left(\frac{x_{n}^{(i)}-y_{n}}{d(A,B)}\right)_{i=1}^{k+1};(f_{j})_{j=1}^{k}\right]\to 0.\) Therefore, by Remark 2.1, we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Hence, \((A,B)\) has property \(k-\)\(w\)UC.

\((2)\Rightarrow(3)\): Obvious.

\((3)\Rightarrow(4)\): Since \(S_{X}\subseteq B_{X}\) and \(d(S_{X},rS_{X})=d(B_{X},rS_{X}),\) it follows from the assumption that \((S_{X},rS_{X})\) has property \(k-\)\(w\)UC.

\((4)\Rightarrow(1)\): Suppose \((S_{X},rS_{X})\) has property \(k-\)\(w\)UC. By Proposition 2.10 and Theorem 2.13, it follows that \((S_{X},(k+1)S_{X})\) has property \(k-\)\(w\)UC. Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(S_{X}\) with \(\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+1\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) For every \(n\in\mathbb{N},\) define \(y_{n}=\sum_{i=1}^{k+1}x_{n}^{(i)}.\) Then for all \(1\leq i\leq k+1,\) we have

\[\left\|(k+1)\frac{y_{n}}{\|y_{n}\|}-x_{n}^{(i)}\right\|=\left\|\frac{(k+1)x_{n}^{(i)}}{\|y_{n}\|}+(k+1)\frac{y_{n}-x_{n}^{(i)}}{\|y_{n}\|}-x_{n}^{(i)}\right\|\leq\left|\frac{k+1}{\|y_{n}\|}-1\right|+\frac{(k+1)k}{\|y_{n}\|}.\]

Thus, \(\left\|x_{n}^{(i)}-(k+1)\frac{y_{n}}{\|y_{n}\|}\right\|\to d(S_{X},(k+1)S_{X})\) for all \(1\leq i\leq k+1.\) Since \((S_{X},(k+1)S_{X})\) has property \(k-\)\(w\)UC, we get \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Hence, \(X\) is \(k-\)WUR.

The following corollary is an immediate consequence of Theorems 2.13 and 3.13. However, the converse need not be true in general.

**Corollary 3.14**.: _Let \(r\in(0,1).\) If \((S_{X},rS_{X})\) has property \(k-\)\(w\)UC, then \(X\) is \(k-\)WUR._

Now, in view of Proposition 2.10 and Theorem 3.13, we characterize \(k-\)WUR spaces in terms of \(k-\)weakly uniformly strong Chebyshevness of the corresponding closed unit ball.

**Theorem 3.15**.: _Let \(r>1.\) Then the following statements are equivalent._

1. \(X\) _is_ \(k-\)_WUR._
2. \(B_{X}\) _is_ \(k-\)_wUSCh on_ \(X.\)
3.
\(B_{X}\) _is_ \(k-\)_wUSCh on_ \(rS_{X}.\)

Proof.: \((1)\Rightarrow(2)\): It is enough to show that \(B_{X}\) is \(k-\)\(w\)USCh on \(X\backslash B_{X}.\) Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(B_{X},\)\((y_{n})\) be a sequence in \(X\backslash B_{X}\) with \(\|x_{n}^{(i)}-y_{n}\|-d(y_{n},B_{X})\to 0\) for all \(1\leq i\leq k+1.\) Note that for all \(n\in\mathbb{N},\) we have \(d(y_{n},B_{X})=\|y_{n}\|-1,\) which implies \(\|y_{n}\|-\|x_{n}^{(i)}-y_{n}\|\to 1\) for all \(1\leq i\leq k+1.\) Since

\[\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}x_{n}^{(i)}\right\|\geq\|y_{n}\|-\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}x_{n}^{(i)}-(k+1)y_{n}\right\|\geq\|y_{n}\|-\frac{1}{k+1}\sum_{i=1}^{k+1}\|x_{n}^{(i)}-y_{n}\|,\]

it follows that \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 1.\) Thus, by (1), we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Therefore, \(B_{X}\) is \(k-\)\(w\)USCh on \(X\backslash B_{X}.\)

\((2)\Rightarrow(3)\): Obvious.

\((3)\Rightarrow(1)\): By (3) and Proposition 2.10, \((B_{X},rS_{X})\) has property \(k-\)\(w\)UC. Thus, by Theorem 3.13, it follows that \(X\) is \(k-\)WUR.

In light of Theorems 3.13 and 3.15, we now present a few examples to illustrate that some of the implications mentioned in Section 2 cannot be reversed in general. As mentioned immediately after Definitions 2.5 and 2.8, the following example shows that, in general, \(k-\)\(w\)USCh (respectively, property \(k-\)\(w\)UC) does not imply \(k-\)SCh (respectively, property \(k-\)UC).

**Example 3.16**.: _Let \(k\in\mathbb{Z}^{+}\). Consider the space \(X\) as in Example 3.7. Since \(X\) is \(k-\)WUR, by Theorem 3.15, \(B_{X}\) is \(k-\)wUSCh on \(X.\) However, as mentioned in Example 3.7, \(B_{X}\) is not \(k-\)SCh on \(X.\) In addition, observe that \(X\) is \(k-\)WUR, but not \(k-\)UR. Hence, by Theorem 3.13, \((B_{X},(k+1)S_{X})\) has property \(k-\)wUC. However, by [11, Theorem 2.19], \((B_{X},(k+1)S_{X})\) does not have property \(k-\)UC._

As noted in Section 2, from the following example we can observe that the converses of the statements of Proposition 2.9 are not necessarily true.

**Example 3.17**.: _Let \(k\geq 2\). Consider a \(k-\)WUR space \(X\) which is not \((k-1)-\)rotund (see, Example 3.6). Therefore, by Theorem 3.15 and [7, Proposition 2.4], \(B_{X}\) is \(k-\)\(w\)USCh on \(2S_{X}\), but \(B_{X}\) is not \((k-1)-\)Chebyshev on \(2S_{X}.\) Further, by Theorem 3.13, \((B_{X},2S_{X})\) has property \(k-\)\(w\)UC, but it does not have property \((k-1)-\)\(w\)UC._

For any non-empty closed convex subset \(C\) of \(X\) and \(\alpha>0\), we define \(C^{\alpha}=\{x\in X:d(x,C)=\alpha\}.\) For any \(x^{*}\in S_{X^{*}},\) we say that the set \(ker(x^{*})=\{x\in X:x^{*}(x)=0\}\) is a hyperplane of \(X.\) It follows from Proposition 2.10 and Theorem 3.13 that every proximinal convex subset \(C\) of a \(k-\)WUR space is \(k-\)wUSCh on \(C^{\alpha}\) for any \(\alpha>0.\) In fact, something more is true. To see this, we define a notion called \(k-\)equi weakly uniformly strongly Chebyshev as follows.
Let \(\mathcal{M}\) be a collection of proximinal convex subsets of \(X\) and \(\alpha>0.\) We say that \(\mathcal{M}\) is \(k-\)equi weakly uniformly strongly Chebyshev (in short, \(k-\)equi _w_USCh) on \(\mathcal{M}^{\alpha},\) if for every \(\epsilon>0\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) there exists \(\delta>0\) such that \(|D_{k}[(x_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\leq\epsilon\) whenever \(M\in\mathcal{M},\)\(x\in M^{\alpha}\) and \(x_{1},x_{2},\ldots,x_{k+1}\in P_{M}(x,\delta).\) **Theorem 3.18**.: _Let \(\alpha>0.\) Then the following statements are equivalent._ 1. \(X\) _is_ \(k-\)_WUR._ 2. \(\mathcal{C}\) _is_ \(k-\)_equi_ _w_USCh on_ \(\mathcal{C}^{\alpha},\) _where_ \(\mathcal{C}\) _is the collection of all proximinal convex subsets of_ \(X.\)__ 3. \(\mathcal{M}\) _is_ \(k-\)_equi_ _w_USCh on_ \(\mathcal{M}^{\alpha},\) _where_ \(\mathcal{M}\) _is the collection of all proximinal subspaces of_ \(X.\)__ 4. \(\mathcal{H}\) _is_ \(k-\)_equi_ _w_USCh on_ \(\mathcal{H}^{\alpha},\) _where_ \(\mathcal{H}\) _is the collection of all proximinal hyperplanes of_ \(X.\)__ 5. \(\mathcal{F}\) _is_ \(k-\)_equi_ _w_USCh on_ \(\mathcal{F}^{\alpha},\) _where_ \(\mathcal{F}\) _is the collection of all_ \(k-\)_dimensional subspaces of_ \(X.\)__ Proof.: \((1)\Rightarrow(2)\): Let \((C_{n})\) be a sequence of proximinal convex subsets of \(X.\) Let \((y_{n}^{(1)}),(y_{n}^{(2)}),\ldots,\)\((y_{n}^{(k+1)})\) be \((k+1)-\)sequences with \(y_{n}^{(i)}\in C_{n}\) for all \(n\in\mathbb{N}\) and \(1\leq i\leq k+1,\)\((x_{n})\) be a sequence with \(x_{n}\in C_{n}^{\alpha}\) for all \(n\in\mathbb{N}\) such that \(\|y_{n}^{(i)}-x_{n}\|\to\alpha\) for all \(1\leq i\leq k+1\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Since \(C_{n}\) is convex, we have \[d(x_{n},C_{n})\leq\left\|\frac{1}{k+1}\sum_{i=1}^{k+1}y_{n}^{(i)}-x_{n}\right\| =\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}(y_{n}^{(i)}-x_{n})\right\|\leq\frac{1}{ k+1}\sum_{i=1}^{k+1}\|y_{n}^{(i)}-x_{n}\|\] and hence \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}(y_{n}^{(i)}-x_{n})\|\to\alpha.\) Thus, by \((1),\) it follows that \(D_{k}[(\frac{y_{n}^{(i)}-x_{n}}{\alpha})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Therefore, by Remark 2.1, \(D_{k}[(y_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) \((2)\Rightarrow(3)\Rightarrow(4)\): Obvious. \((4)\Rightarrow(1)\): Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(S_{X}\) such that \(\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+1\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) For every \(n\in\mathbb{N},\) define \(y_{n}=\frac{1}{k+1}\sum_{i=1}^{k+1}x_{n}^{(i)}.\) By Hahn-Banach theorem, for every \(n\in\mathbb{N}\) there exists \(g_{n}\in S_{X^{*}}\) such that \(g_{n}(y_{n})=\|y_{n}\|.\) Let \(1\leq i\leq k+1.\) Observe that \(g_{n}(y_{n})\to 1\) and \(g_{n}(x_{n}^{(i)})\to 1\). 
Now, define \(H_{n}=ker(g_{n}),\)\(\beta_{n}=d(y_{n},H_{n})\) and \(z_{n}^{(i)}=y_{n}-x_{n}^{(i)}-g_{n}(y_{n}-x_{n}^{(i)})\frac{y_{n}}{\|y_{n}\|}\) for all \(n\in\mathbb{N}.\) Clearly \(H_{n}\) is proximinal on \(X\) for all \(n\in\mathbb{N}\) and \(\beta_{n}\to 1.\) Note that \(z_{n}^{(i)}\in H_{n}\) and \(\frac{\alpha y_{n}}{\beta_{n}}\in H_{n}^{\alpha}\) for all \(n\in\mathbb{N}.\) Since

\[d(H_{n},H_{n}^{\alpha})\leq\left\|\frac{\alpha z_{n}^{(i)}}{\beta_{n}}-\frac{\alpha y_{n}}{\beta_{n}}\right\|=\frac{\alpha}{\beta_{n}}\left\|x_{n}^{(i)}+g_{n}(y_{n}-x_{n}^{(i)})\frac{y_{n}}{\|y_{n}\|}\right\|\leq\frac{\alpha}{\beta_{n}}\left(\|x_{n}^{(i)}\|+|g_{n}(y_{n}-x_{n}^{(i)})|\right),\]

we have \(\|\frac{\alpha z_{n}^{(i)}}{\beta_{n}}-\frac{\alpha y_{n}}{\beta_{n}}\|\to\alpha.\) Thus, by \((4),\)\(D_{k}[(\frac{\alpha}{\beta_{n}}z_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Further, using Remark 2.1 and Lemma 2.2, we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Hence, \(X\) is \(k-\)WUR.

\((2)\Rightarrow(5)\): Obvious.

\((5)\Rightarrow(1)\): Suppose there exist \(\epsilon>0,\)\(g_{1},g_{2},\ldots,g_{k}\in S_{X^{*}}\) and \((k+1)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) in \(S_{X}\) such that \(\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+1,\) but \(|D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(g_{j})_{j=1}^{k}]|>\epsilon\) for all \(n\in\mathbb{N}.\) Now for every \(n\in\mathbb{N},\) define \(F_{n}=span\{x_{n}^{(i)}-x_{n}^{(k+1)}:1\leq i\leq k\}.\) Using Remark 2.1, observe that \(F_{n}\) is a \(k-\)dimensional subspace of \(X\) and hence it is proximinal on \(X.\) Thus, for every \(n\in\mathbb{N},\) there exist \(\lambda_{n}^{(1)},\lambda_{n}^{(2)},\ldots,\lambda_{n}^{(k)}\in\mathbb{R}\) such that \(\|x_{n}^{(k+1)}+\sum_{i=1}^{k}\lambda_{n}^{(i)}(x_{n}^{(i)}-x_{n}^{(k+1)})\|=d(x_{n}^{(k+1)},F_{n}).\) Denote \(d(x_{n}^{(k+1)},F_{n})=\beta_{n}\) for all \(n\in\mathbb{N}.\) Using [27, Lemma 2.3], we have \(\beta_{n}\to 1.\) Note that \(d(\frac{\alpha}{\beta_{n}}x_{n}^{(k+1)},F_{n})=\alpha,\)\(\frac{\alpha}{\beta_{n}}(x_{n}^{(k+1)}-x_{n}^{(i)})\in F_{n}\) and \(\|\frac{\alpha}{\beta_{n}}(x_{n}^{(k+1)}-x_{n}^{(i)})-\frac{\alpha}{\beta_{n}}x_{n}^{(k+1)}\|\to\alpha\) for all \(1\leq i\leq k+1.\) Therefore, our assumption leads to \(D_{k}[(\frac{\alpha}{\beta_{n}}(x_{n}^{(k+1)}-x_{n}^{(i)}))_{i=1}^{k+1};(g_{j})_{j=1}^{k}]\to 0\). Thus, from Remark 2.1 and Lemma 2.2, \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(g_{j})_{j=1}^{k}]\to 0,\) which is a contradiction. Hence the proof.

We remark that Theorems 3.13, 3.15 and 3.18 are generalizations of [6, Theorems 4.5, 4.6 and 4.15]. In the next two results, we present a necessary and a sufficient condition for a space to be \(k-\)WLUR in terms of the notion \(k-\)wSCh; the determinant \(D_{k}\) driving these computations is recalled below.
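For the reader's convenience (the general form is the one from Section 2, and it is exactly the matrix appearing in the proofs of Theorems 4.7 and 4.12 below), for \(x_{1},x_{2},\ldots,x_{k+1}\in X\) and \(f_{1},f_{2},\ldots,f_{k}\in X^{*},\)

\[D_{k}[(x_{i})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]=\begin{vmatrix}1&1&\ldots&1\\ f_{1}(x_{1})&f_{1}(x_{2})&\ldots&f_{1}(x_{k+1})\\ \vdots&\vdots&\ddots&\vdots\\ f_{k}(x_{1})&f_{k}(x_{2})&\ldots&f_{k}(x_{k+1})\end{vmatrix},\]

and \(D_{k}[x,(x_{i})_{i=1}^{k};(f_{j})_{j=1}^{k}]\) denotes the same determinant with columns corresponding to \(x,x_{1},\ldots,x_{k}.\) In particular, \(D_{1}[x_{1},x_{2};f]=f(x_{2}-x_{1}),\) so for \(k=1\) the notions \(1-\)WUR, \(1-\)WLUR and \(1-\)WMLUR reduce to the classical notions WUR, WLUR and WMLUR.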
**Proposition 3.19**.: _If \(X\) is a \(k-\)WLUR space, then every proximinal convex subset of \(X\) is \(k-\)wSCh on \(X.\)_

Proof.: In view of Remark 2.1, it is enough to prove that every proximinal convex subset \(C\) of \(X\) with \(d(0,C)=1\) is \(k-\)wSCh at \(0.\) Let \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(C\) such that \(\|x_{n}^{(i)}\|\to 1\) for all \(1\leq i\leq k+1\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Choose \(y\in P_{C}(0)\) and observe that \(y\in S_{X}.\) Since \(C\) is convex, for any \(\alpha\in\mathcal{S}_{k+1}(k),\) we have

\[1=d(0,C)\leq\frac{1}{k+1}\left\|y+\sum_{i=1}^{k}x_{n}^{(\alpha_{i})}\right\|\leq\frac{1}{k+1}(\|y\|+\sum_{i=1}^{k}\|x_{n}^{(\alpha_{i})}\|)\]

and hence \(\|y+\sum_{i=1}^{k}x_{n}^{(\alpha_{i})}\|\xrightarrow{}k+1.\) Thus, by assumption, we have \(D_{k}[y,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\xrightarrow{}0\) for any \(\alpha\in\mathcal{S}_{k+1}(k).\) Since, by [24, Lemma 2],

\[|D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\leq\sum_{\alpha\in\mathcal{S}_{k+1}(k)}|D_{k}[y,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]|,\]

we have \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\xrightarrow{}0.\)

We remark that the converse of Proposition 3.19 is not necessarily true (see, Example 4.17).

**Theorem 3.20**.: _Consider the following statements._

1. \(X\) _is_ \(k-\)_WLUR._
2. \(S_{X}\) _is_ \(k-\)_wSCh on_ \(rS_{X}\) _for some_ \(r\in(0,1).\)
3. \((S_{X},C)\) _has property_ \(k-\)_wUC, whenever_ \(C\) _is a non-empty boundedly compact subset of_ \(X\) _with_ \(d(0,C)>0.\)

_Then \((1)\Leftarrow(2)\Leftrightarrow(3).\)_

Proof.: \((2)\Rightarrow(1)\): From the assumption and Theorem 2.13, \(S_{X}\) is \(k-\)wSCh on \(\frac{1}{2}S_{X}.\) Let \(x\in S_{X},\)\((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k)})\) be \((k)-\)sequences in \(S_{X}\) such that \(\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|\to k+1\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) Observe that for any \(1\leq i\leq k,\) we have \(\|x_{n}^{(i)}+x\|\to 2\) and

\[d(\frac{1}{2}x,S_{X})\leq\left\|\frac{x_{n}^{(i)}+x}{\|x_{n}^{(i)}+x\|}-\frac{1}{2}x\right\|\leq\left|\frac{1}{\|x_{n}^{(i)}+x\|}-\frac{1}{2}\right|+\frac{1}{\|x_{n}^{(i)}+x\|},\]

which implies \(\left\|\frac{x_{n}^{(i)}+x}{\|x_{n}^{(i)}+x\|}-\frac{1}{2}x\right\|\to d(\frac{1}{2}x,S_{X}).\) Thus, we have \(D_{k}\left[x,\left(\frac{x_{n}^{(i)}+x}{\|x_{n}^{(i)}+x\|}\right)_{i=1}^{k};(f_{j})_{j=1}^{k}\right]\to 0.\) Therefore, by Remark 2.1 and Lemma 2.2, it follows that \(D_{k}[x,(x_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\)

\((2)\Rightarrow(3)\): This implication follows from Remark 2.11.

\((3)\Rightarrow(2)\): This implication follows from Definitions 2.5 and 2.8.

We note that for the case \(k=1\), Proposition 3.19 and \((2)\Rightarrow(1)\) of Theorem 3.20 are proved in [6, Proposition 4.7] and [5, Theorem 3.12], respectively. We now observe that the other implications of Corollary 3.14 and Theorem 3.20 need not hold for any \(k\geq 2\). For instance, consider the \(k-\)WUR space \(X=(\mathbb{R}^{k},\|\cdot\|_{\infty})\) and \(x=(r,r,\ldots,r)\) for some \(0<r<1.\) It is easy to see that \(S_{X}\) is not \(k-\)Chebyshev at \(x\) (indeed, \(d(x,S_{X})=1-r\) and the \(k+1\) affinely independent points \(x+(1-r)e_{1},\ldots,x+(1-r)e_{k}\) and \((1,1,\ldots,1),\) where \(e_{1},\ldots,e_{k}\) denote the standard unit vectors, all lie in \(P_{S_{X}}(x)\)); hence \(S_{X}\) is not \(k-\)wSCh on \(rS_{X}.\) Further, by Proposition 2.10, \((S_{X},rS_{X})\) does not have property \(k-\)wUC. In light of Definition 3.8 and Theorem 3.9, it is natural to ask whether an analogue of Theorem 3.15 holds for the notion \(k-\)WMLUR.
The following result provides a positive answer to this question. We conclude this section with some characterizations of the \(k-\)WMLUR spaces.

**Theorem 3.21**.: _Let \(r>1.\) Then the following statements are equivalent._

1. \(X\) _is_ \(k-\)_WMLUR._
2. \(B_{X}\) _is_ \(k-\)_wSCh on_ \(X.\)
3. \(B_{X}\) _is_ \(k-\)_wSCh on_ \(rS_{X}.\)

Proof.: \((1)\Rightarrow(2)\): In view of Theorem 2.13, it is enough to prove that \(B_{X}\) is \(k-\)wSCh on \(\frac{k+1}{k}S_{X}.\) Let \(x\in S_{X},\)\((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(B_{X}\) such that \(\|x_{n}^{(i)}-\frac{k+1}{k}x\|\rightarrow\frac{1}{k}\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) We need to show that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Observe that for any \(1\leq i\leq k+1,\)

\[\frac{k+1}{k}-1\leq\frac{k+1}{k}\|x\|-\|x_{n}^{(i)}\|\leq\left\|x_{n}^{(i)}-\frac{k+1}{k}x\right\|\]

holds, which implies \(\|x_{n}^{(i)}\|\to 1.\) Now, we claim that \(D_{k}[x,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0\) for all \(\alpha\in\mathcal{S}_{k+1}(k).\) Let \(\alpha\in\mathcal{S}_{k+1}(k).\) For every \(n\in\mathbb{N},\) define \(z_{n}=(k+1)x-\sum_{i=1}^{k}x_{n}^{(\alpha_{i})}.\) Since

\[d\left(B_{X},\frac{k+1}{k}S_{X}\right)\leq\left\|\frac{1}{k}\sum_{i=1}^{k}x_{n}^{(\alpha_{i})}-\frac{k+1}{k}x\right\|\leq\frac{1}{k}\sum_{i=1}^{k}\left\|x_{n}^{(\alpha_{i})}-\frac{k+1}{k}x\right\|,\]

we have \(\|z_{n}\|\to 1.\) Observe that \(\|(k+1)x-(\sum_{i=1}^{k}x_{n}^{(\alpha_{i})}+z_{n})\|=0\) for all \(n\in\mathbb{N}.\) Hence, by (1), we get \(D_{k}[z_{n},(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) Note that \(|D_{k}[x,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]|=\frac{1}{k+1}|D_{k}[z_{n},(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]|\) for all \(n\in\mathbb{N}.\) Thus, \(D_{k}[x,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) Hence the claim. Now, it follows from [24, Lemma 2] that

\[|D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\leq\sum_{\alpha\in\mathcal{S}_{k+1}(k)}|D_{k}[x,(x_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]|,\]

which implies \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Hence, \(B_{X}\) is \(k-\)wSCh on \(\frac{k+1}{k}S_{X}.\)

\((2)\Rightarrow(3)\): Obvious.

\((3)\Rightarrow(1)\): It follows from the assumption and Theorem 2.13 that \(B_{X}\) is \(k-\)wSCh on \((k+1)S_{X}.\) Let \(x\in S_{X},\)\((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(S_{X}\) such that \(\|(k+1)x-\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to 0\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}.\) We need to show that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Note that for any \(1\leq i\leq k+1,\) we have

\[(k+1)\|x\|-\|x_{n}^{(i)}\|\leq\|x_{n}^{(i)}-(k+1)x\|\leq\left\|\sum_{j=1}^{k+1}x_{n}^{(j)}-(k+1)x\right\|+\left\|\sum_{j=1,j\neq i}^{k+1}x_{n}^{(j)}\right\|\leq\left\|\sum_{j=1}^{k+1}x_{n}^{(j)}-(k+1)x\right\|+k.\]

Thus, \(\|x_{n}^{(i)}-(k+1)x\|\to k.\) Since \(B_{X}\) is \(k-\)wSCh on \((k+1)S_{X},\) we get \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Hence, \(X\) is \(k-\)WMLUR.

We remark that for the case \(k=1,\) Theorem 3.21 is proved in [31, Theorem 2.6]. The following corollary is an immediate consequence of Remark 2.11 and Theorem 3.21.

**Corollary 3.22**.: _The following statements are equivalent._

1. \(X\) _is_ \(k-\)_WMLUR._
2.
_If_ \(A\) _is a closed ball in_ \(X\) _and_ \(B\) _is a non-empty boundedly compact subset of_ \(X,\) _then_ \((A,B)\) _has property_ \(k-w\)_UC._ As specified immediately after Definition 3.8, the following example illustrates that, in general, \(k-\)rotundity does not imply \(k-\)WMLUR. **Example 3.23**.: _Let \(k\in\mathbb{Z}^{+}.\) Consider the space \(X\) as in Example 2.6. Note that \(X\) is \(k-\)rotund, but not MLUR. From Example 2.6, it is clear that \(B_{X}\) is not \(k-\)wSCh on \(X.\) Therefore, by Theorem 3.21, \(X\) is not \(k-\)WMLUR._ ## 4. Stability of \(k-\)WUR, \(k-\)WLUR and \(k-\)WMLUR In this section, we examine the stability of the notions \(k-\)WUR, \(k-\)WLUR and \(k-\)WMLUR. We begin with the inheritance nature of the notions \(k-\)WUR and \(k-\)WLUR by quotient spaces. In view of Theorem 2.14, it is natural to ask whether a similar characterization holds for the notions \(k-\)WUR and \(k-\)WLUR. To answer this question, in the following result, we prove that the collection of all quotient spaces of a \(k-\)WUR space is uniformly \(k-\)WUR. Indeed the reverse implication also holds. **Theorem 4.1**.: _Let \(\alpha,\beta\in\mathbb{Z}^{+}\), \(X\) be a Banach space satisfying \(dim(X)\geq k+2,\)\(1\leq\alpha\leq dim(X)-(k+1)\) and \(k+1\leq\beta\leq dim(X)-1.\) Then the following statements are equivalent._ 1. \(X\) _is_ \(k-\)_WUR._ 2. _For every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) _it follows that_ \[\inf\{\delta_{X/M}^{k}(\epsilon,(f_{j})_{j=1}^{k}):M\subseteq\cap_{j=1}^{k}ker (f_{j})\}>0.\] 3. _For every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) _it follows that_ \[\inf\{\delta_{X/F}^{k}(\epsilon,(f_{j})_{j=1}^{k}):F\subseteq\cap_{j=1}^{k}ker (f_{j})\text{ with }dim(F)=\alpha\}>0.\] 4. _For every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) _it follows that_ \[\inf\{\delta_{X/Y}^{k}(\epsilon,(f_{j})_{j=1}^{k}):Y\subseteq\cap_{j=1}^{k}ker (f_{j})\text{ with }codim(Y)=\beta\}>0.\] Proof.: \((1)\Rightarrow(2)\): Suppose there exist \(\epsilon>0,\)\(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) a sequence \((M_{n})\) of subspaces in \(X\) such that \(M_{n}\subseteq\cap_{j=1}^{k}ker(f_{j})\) and \(\delta_{X/M_{n}}^{k}(\epsilon,(f_{j})_{j=1}^{k})<\frac{1}{n}\) for all \(n\in\mathbb{N}.\) Then there exist \((k+1)-\)sequences \((x_{n}^{(1)}+M_{n}),(x_{n}^{(2)}+M_{n}),\ldots,(x_{n}^{(k+1)}+M_{n})\) with \(x_{n}^{(i)}+M_{n}\in S_{X/M_{n}}\) for all \(n\in\mathbb{N},\)\(1\leq i\leq k+1\) such that \(|D_{k}[(x_{n}^{(i)}+M_{n})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\geq\epsilon\) for all \(n\in\mathbb{N},\) but \(\|\sum_{i=1}^{k+1}(x_{n}^{(i)}+M_{n})\|\to k+1.\) Since \(d(x_{n}^{(i)},M_{n})=1,\) there exists \(y_{n}^{(i)}\in M_{n}\) such that \(\|x_{n}^{(i)}-y_{n}^{(i)}\|\to 1\) for all \(1\leq i\leq k+1.\) Therefore, we have \[\left\|\sum_{i=1}^{k+1}x_{n}^{(i)}+M_{n}\right\|\leq\left\|\sum_{i=1}^{k+1}x_{ n}^{(i)}-\sum_{i=1}^{k+1}y_{n}^{(i)}\right\|\leq\sum_{i=1}^{k+1}\|x_{n}^{(i)}-y_{n}^{(i )}\|\] and hence \(\|\sum_{i=1}^{k+1}(x_{n}^{(i)}-y_{n}^{(i)})\|\to k+1.\) By \((1),\) we get \(D_{k}[(x_{n}^{(i)}-y_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) Thus, by Remark 2.1, we have \(D_{k}[(x_{n}^{(i)}+M_{n})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0.\) This is a contradiction. \((2)\Rightarrow(3)\): Obvious. 
\((3)\Rightarrow(1)\): Suppose there exist \(\epsilon>0,\)\(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) and \((k+1)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,\) \((x_{n}^{(k+1)})\) in \(S_{X}\) such that \(\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+1\), but \(|D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]|\geq\epsilon\) for all \(n\in\mathbb{N}\). By Hahn-Banach theorem, for every \(n\in\mathbb{N}\) there exists \(g_{n}\in S_{X^{*}}\) such that \(g_{n}(\sum_{i=1}^{k+1}x_{n}^{(i)})=\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\). Now, for every \(n\in\mathbb{N}\), choose a subspace \(F_{n}\) of \(X\) such that \(F_{n}\subseteq(\cap_{j=1}^{k}ker(f_{j}))\cap ker(g_{n})\) and \(dim(F_{n})=\alpha\). Let \(1\leq i\leq k+1\). Since \(g_{n}(x_{n}^{(i)})\to 1\) and \[|g_{n}(x_{n}^{(i)})|=d(x_{n}^{(i)},ker(g_{n}))\leq d(x_{n}^{(i)},F_{n})\leq\| x_{n}^{(i)}\|=1,\] it follows that \(\|x_{n}^{(i)}+F_{n}\|\to 1\). For every \(n\in\mathbb{N}\), define \(y_{n}^{(i)}=\frac{x_{n}^{(i)}}{d(x_{n}^{(i)},F_{n})}\). Note that \(y_{n}^{(i)}+F_{n}\in S_{X/F_{n}}\) for all \(n\in\mathbb{N}\) and \(g_{n}(y_{n}^{(i)})\to 1\). Therefore, we have \[\left|g_{n}\left(\sum_{i=1}^{k+1}y_{n}^{(i)}\right)\right|=d\left(\sum_{i=1}^{ k+1}y_{n}^{(i)},ker(g_{n})\right)\leq d\left(\sum_{i=1}^{k+1}y_{n}^{(i)},F_{n} \right)\leq\sum_{i=1}^{k+1}\left\|y_{n}^{(i)}+F_{n}\right\|=k+1\] and hence \(\|\sum_{i=1}^{k+1}y_{n}^{(i)}+F_{n}\|\to k+1\). Thus, by assumption, we get \(D_{k}[(y_{n}^{(i)}+F_{n})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\). Further, by Remark 2.1 and Lemma 2.2, it follows that \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\to 0\). This is a contradiction. \((2)\Rightarrow(4)\): Obvious. \((4)\Rightarrow(1)\): Suppose there exist \(\epsilon>0,f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) and \((k+1)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,\)\((x_{n}^{(k+1)})\) in \(S_{X}\) such that \(\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\to k+1\), but \(D_{k}[(x_{n}^{(i)})_{i=1}^{k+1};(f_{j})_{j=1}^{k}]\geq\epsilon\) for all \(n\in\mathbb{N}\). By Hahn-Banach theorem, for every \(n\in\mathbb{N}\) there exists \(g_{n}\in S_{X^{*}}\) such that \(g_{n}(\sum_{i=1}^{k+1}x_{n}^{(i)})=\|\sum_{i=1}^{k+1}x_{n}^{(i)}\|\). Now, for every \(n\in\mathbb{N}\), choose a subspace \(Y_{n}\) of \(X\) such that \(Y_{n}\subseteq(\cap_{j=1}^{k}ker(f_{j}))\cap ker(g_{n})\) and \(codim(Y_{n})=\beta\). By replacing \(F_{n}\) by \(Y_{n}\) in the proof of \((3)\Rightarrow(1)\) and repeating the argument involved in the proof, we get a contradiction. Hence the proof. The following corollary is an immediate consequence of Theorem 4.1. **Corollary 4.2**.: _If \(X\) is \(k-\)WUR and \(M\) is a subspace of \(X,\) then \(X/M\) is \(k-\)WUR._ Now, we present an analogous result of Theorem 4.1 for the notion \(k-\)WLUR. **Theorem 4.3**.: _Let \(\alpha,\beta\in\mathbb{Z}^{+},\)\(X\) be a Banach space satisfying \(dim(X)\geq k+3,\)\(1\leq\alpha\leq dim(X)-(k+2)\), \(k+2\leq\beta\leq dim(X)-1\) and \(x\in S_{X}\). Then the following statements are equivalent._ 1. \(X\) _is_ \(k-\)_WLUR at_ \(x\)_._ 2. _For every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) _it follows that_ \[\inf\{\delta_{X/M}^{k}(\epsilon,x+M,(f_{j})_{j=1}^{k}):M\subseteq\cap_{j=1}^{ k}ker(f_{j})\text{ and }d(x,M)=1\}>0.\] 3. _For every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) _it follows that_ \[\inf\{\delta_{X/F}^{k}(\epsilon,x+F,(f_{j})_{j=1}^{k}):dim(F)=\alpha\text{ with }F\subseteq\cap_{j=1}^{k}ker(f_{j})\text{ and }d(x,F)=1\}>0.\] 4. 
_For every_ \(\epsilon>0\) _and_ \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) _it follows that_

\[\inf\{\delta_{X/Y}^{k}(\epsilon,x+Y,(f_{j})_{j=1}^{k}):codim(Y)=\beta\text{ with }Y\subseteq\cap_{j=1}^{k}ker(f_{j})\text{ and }d(x,Y)=1\}>0.\]

Proof.: \((1)\Rightarrow(2)\): Suppose there exist \(\epsilon>0\), \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\), a sequence \((M_{n})\) of subspaces in \(X\) such that \(M_{n}\subseteq\cap_{j=1}^{k}ker(f_{j})\), \(d(x,M_{n})=1\) and \(\delta_{X/M_{n}}^{k}(\epsilon,x+M_{n},(f_{j})_{j=1}^{k})<\frac{1}{n}\) for all \(n\in\mathbb{N}\). Then there exist \((k)-\)sequences \((x_{n}^{(1)}+M_{n}),(x_{n}^{(2)}+M_{n}),\ldots,(x_{n}^{(k)}+M_{n})\) with \(x_{n}^{(i)}+M_{n}\in S_{X/M_{n}}\) for all \(n\in\mathbb{N}\) and \(1\leq i\leq k\) such that \(|D_{k}[(x+M_{n}),(x_{n}^{(i)}+M_{n})_{i=1}^{k};(f_{j})_{j=1}^{k}]|\geq\epsilon\) for all \(n\in\mathbb{N}\), but \(\|(x+M_{n})+\sum_{i=1}^{k}(x_{n}^{(i)}+M_{n})\|\to k+1\). Since \(d(x_{n}^{(i)},M_{n})=1\), there exists \(y_{n}^{(i)}\in M_{n}\) such that \(\|x_{n}^{(i)}-y_{n}^{(i)}\|\to 1\) for all \(1\leq i\leq k\). Therefore, we have

\[\left\|(x+M_{n})+\sum_{i=1}^{k}(x_{n}^{(i)}+M_{n})\right\|\leq\left\|x+\sum_{i=1}^{k}x_{n}^{(i)}-\sum_{i=1}^{k}y_{n}^{(i)}\right\|\leq\|x\|+\sum_{i=1}^{k}\|x_{n}^{(i)}-y_{n}^{(i)}\|\]

and hence \(\|x+\sum_{i=1}^{k}(x_{n}^{(i)}-y_{n}^{(i)})\|\to k+1.\) By (1), we get \(D_{k}[x,(x_{n}^{(i)}-y_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) Thus, by Remark 2.1, we have \(D_{k}[(x+M_{n}),(x_{n}^{(i)}+M_{n})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) This is a contradiction.

\((2)\Rightarrow(3)\): Obvious.

\((3)\Rightarrow(1)\): Suppose there exist \(\epsilon>0\), \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) and \((k)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k)})\) in \(S_{X}\) such that \(\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|\to k+1,\) but \(|D_{k}[x,(x_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]|\geq\epsilon\) for all \(n\in\mathbb{N}.\) By Hahn-Banach theorem, there exists \(g\in S_{X^{*}}\) such that \(g(x)=\|x\|\) and for every \(n\in\mathbb{N},\) there exists \(g_{n}\in S_{X^{*}}\) such that \(g_{n}(x+\sum_{i=1}^{k}x_{n}^{(i)})=\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|.\) Observe that \(g_{n}(x_{n}^{(i)})\to 1\) for all \(1\leq i\leq k\) and \(g_{n}(x)\to 1.\) Now, for every \(n\in\mathbb{N},\) choose a subspace \(F_{n}\) of \(X\) such that \(F_{n}\subseteq(\cap_{j=1}^{k}ker(f_{j}))\cap ker(g_{n})\cap ker(g)\) and \(dim(F_{n})=\alpha.\) Let \(1\leq i\leq k.\) Since

\[|g_{n}(x_{n}^{(i)})|=d(x_{n}^{(i)},ker(g_{n}))\leq d(x_{n}^{(i)},F_{n})\leq\|x_{n}^{(i)}\|=1,\]

we have \(\|x_{n}^{(i)}+F_{n}\|\to 1.\) Similarly, \(\|x+F_{n}\|=1\) for all \(n\in\mathbb{N}.\) For every \(n\in\mathbb{N},\) define \(y_{n}^{(i)}=\frac{x_{n}^{(i)}}{d(x_{n}^{(i)},F_{n})}\) and observe that \(y_{n}^{(i)}+F_{n}\in S_{X/F_{n}}\) and \(g_{n}(y_{n}^{(i)})\to 1\). Therefore, we have

\[\left|g_{n}\left(x+\sum_{i=1}^{k}y_{n}^{(i)}\right)\right|=d\left(x+\sum_{i=1}^{k}y_{n}^{(i)},ker(g_{n})\right)\leq d\left(x+\sum_{i=1}^{k}y_{n}^{(i)},F_{n}\right)\leq k+1\]

and hence \(\|(x+F_{n})+\sum_{i=1}^{k}(y_{n}^{(i)}+F_{n})\|\to k+1.\) By (3), we get \(D_{k}[(x+F_{n}),(y_{n}^{(i)}+F_{n})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) Thus, by Remark 2.1 and Lemma 2.2, \(D_{k}[x,(x_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]\to 0.\) This is a contradiction.

\((2)\Rightarrow(4)\): Obvious.
\((4)\Rightarrow(1)\): Suppose there exist \(\epsilon>0\), \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}}\) and \((k)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k)})\) in \(S_{X}\) such that \(\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|\to k+1,\) but \(|D_{k}[x,(x_{n}^{(i)})_{i=1}^{k};(f_{j})_{j=1}^{k}]|\geq\epsilon\) for all \(n\in\mathbb{N}.\) By Hahn-Banach theorem, there exists \(g\in S_{X^{*}}\) such that \(g(x)=\|x\|\) and for every \(n\in\mathbb{N},\) there exists \(g_{n}\in S_{X^{*}}\) such that \(g_{n}(x+\sum_{i=1}^{k}x_{n}^{(i)})=\|x+\sum_{i=1}^{k}x_{n}^{(i)}\|.\) Now, for every \(n\in\mathbb{N},\) choose a subspace \(Y_{n}\) of \(X\) such that \(Y_{n}\subseteq(\cap_{j=1}^{k}ker(f_{j}))\cap ker(g_{n})\cap ker(g)\) and \(codim(Y_{n})=\beta.\) By replacing \(F_{n}\) by \(Y_{n}\) in the proof of \((3)\Rightarrow(1)\) and proceeding in a similar way, we get a contradiction. Hence the proof.

As a consequence of Theorem 4.3, we have the following result.

**Corollary 4.4**.: _If \(X\) is \(k-\)WLUR and \(M\) is a proximinal subspace of \(X,\) then \(X/M\) is \(k-\)WLUR._

Proof.: Let \(\epsilon>0\), \(x+M\in S_{X/M}\) and \(f_{1},f_{2},\ldots,f_{k}\in S_{M^{\perp}}\cong S_{(X/M)^{*}}\). By assumption, there exists \(y\in M\) such that \(d(x,M)=\|x-y\|=1\). Since \(X\) is \(k-\)WLUR at \((x-y),\) it follows from Theorem 4.3 that \(\delta_{X/M}^{k}(\epsilon,(x-y)+M,(f_{j})_{j=1}^{k})>0\). Thus, \(X/M\) is \(k-\)WLUR.

In Example 4.5, we see that, in general, a quotient space of a \(k-\)WLUR space need not be \(k-\)WLUR. Further, we note that there exists a space \(X\) and a subspace \(M\) of \(X\) such that both \(X\) and \(X/M\) are \(k-\)WUR (hence, \(k-\)WLUR), but \(M\) is not proximinal on \(X\). To see this, consider any WUR space which is not reflexive (see, Example 2.12).

**Example 4.5**.: _Let \(k\in\mathbb{Z}^{+}\) and \(X=(\ell_{1},\|\cdot\|_{r})\) be the space considered in [22, Example 1]. That is, for any \(x\in\ell_{1}\), \(\|x\|_{r}=(\|x\|_{1}^{2}+\|S(x)\|_{2}^{2})^{\frac{1}{2}}\), where \(S:\ell_{1}\to\ell_{2}\) is defined as \(S(\alpha_{n})=(\alpha_{n}2^{\frac{-n}{2}})\) for all \((\alpha_{n})\in\ell_{1}\). By [22, Theorem 1], \((\ell_{1},\|\cdot\|_{1})\cong X/M\) for some subspace \(M\) of \(X\). Therefore, \(X/M\) is not \(k-\)rotund. However, following [20, Example 6], it is easy to see that \(X\) is LUR (hence, \(X\) is \(k-\)WLUR)._

From Definition 3.1, it follows that every subspace of a \(k-\)WUR (respectively, \(k-\)WLUR) space is \(k-\)WUR (respectively, \(k-\)WLUR). Further, in view of Corollaries 4.2 and 4.4, it is natural to ask whether \(k-\)WUR and \(k-\)WLUR are three space properties [15, Definition 1.7.8] or not. To see this, consider a space \(X=M\oplus_{1}(\mathbb{R}^{k},\|\cdot\|_{1}),\) where \(M\) is a WUR space and \(k\in\mathbb{Z}^{+}.\) Observe that \(X\) is not \(k-\)WLUR; indeed, for any \(m\in S_{M},\) the span of \((m,0),(0,e_{1}),\ldots,(0,e_{k})\) is an isometric copy of \(\ell_{1}^{k+1}\) in \(X,\) so \(X\) is not even \(k-\)rotund. However, \(X/M\) and \(M\) are \(k-\)WUR. Thus, \(k-\)WUR and \(k-\)WLUR are not three space properties.

Now we present a result that is closely related to Corollary 4.4, which also generalizes [18, Proposition 3.2].

**Proposition 4.6**.: _Let \(Y\) be a subspace of \(X\) such that \(Y^{\perp}\subseteq NA(X),\) where \(NA(X)\) is the set of all norm attaining functionals on \(X\). If \(X\) is \(k-\)WLUR, then \(X/Y\) is \(k-\)rotund._

Proof.: Suppose \(X/Y\) is not \(k-\)rotund.
Then there exist \((k+1)\) elements \(x_{1}+Y,x_{2}+Y,\ldots,x_{k+1}+Y\) in \(S_{X/Y}\) such that \(\|\sum_{i=1}^{k+1}(x_{i}+Y)\|=k+1,\) but \(D_{k}[(x_{i}+Y)_{i=1}^{k+1};(\widetilde{g}_{j})_{j=1}^{k}]=\epsilon\) for some \(\epsilon>0\) and \(\widetilde{g}_{1},\widetilde{g}_{2},\ldots,\widetilde{g}_{k}\in S_{(X/Y)^{*}}.\) By Hahn-Banach theorem, there exists \(\widetilde{f}\in S_{(X/Y)^{*}}\) such that \(\widetilde{f}(\sum_{i=1}^{k+1}x_{i}+Y)=k+1.\) Observe that \(\widetilde{f}(x_{i}+Y)=1\) for all \(1\leq i\leq k+1.\) Let \(T:(X/Y)^{*}\xrightarrow{}Y^{\perp}\) be the isometric isomorphism defined by \(T(\widetilde{h})=\widetilde{h}\circ q,\) where \(q:X\xrightarrow{}X/Y\) is the quotient map. Clearly, \(\widetilde{f}\circ q\in Y^{\perp}\subseteq NA(X)\) and \(\|\widetilde{f}\circ q\|=1.\) Thus, there exists \(z\in S_{X}\) such that \((\widetilde{f}\circ q)(z)=\widetilde{f}(z+Y)=1.\) For every \(1\leq i\leq k+1,\) choose a sequence \((y_{n}^{(i)})\) in \(Y\) such that \(\|x_{i}-y_{n}^{(i)}\|\xrightarrow{}1.\) Let \(\alpha\in\mathcal{S}_{k+1}(k).\) Note that

\[(\widetilde{f}\circ q)\left(\sum_{i=1}^{k}(x_{\alpha_{i}}-y_{n}^{(\alpha_{i})})+z\right)=(\widetilde{f}\circ q)\left(\sum_{i=1}^{k}x_{\alpha_{i}}+z\right)=\widetilde{f}\left(\left(\sum_{i=1}^{k}x_{\alpha_{i}}+z\right)+Y\right)=k+1.\]

Since \((\widetilde{f}\circ q)\left(\sum_{i=1}^{k}(x_{\alpha_{i}}-y_{n}^{(\alpha_{i})})+z\right)\leq\left\|\sum_{i=1}^{k}(x_{\alpha_{i}}-y_{n}^{(\alpha_{i})})+z\right\|\leq\sum_{i=1}^{k}\|x_{\alpha_{i}}-y_{n}^{(\alpha_{i})}\|+1,\) we have \(\|\sum_{i=1}^{k}(x_{\alpha_{i}}-y_{n}^{(\alpha_{i})})+z\|\xrightarrow{}k+1.\) By assumption, we have \(D_{k}[z,(x_{\alpha_{i}}-y_{n}^{(\alpha_{i})})_{i=1}^{k};(f_{j})_{j=1}^{k}]\xrightarrow{}0\) for all \(f_{1},f_{2},\ldots,f_{k}\in S_{X^{*}},\) in particular, \(D_{k}[z,(x_{\alpha_{i}}-y_{n}^{(\alpha_{i})})_{i=1}^{k};(\widetilde{g}_{j}\circ q)_{j=1}^{k}]\xrightarrow{}0.\) Thus, it follows that \(D_{k}[z,(x_{\alpha_{i}})_{i=1}^{k};(\widetilde{g}_{j}\circ q)_{j=1}^{k}]=0,\) which further implies \(D_{k}[z+Y,(x_{\alpha_{i}}+Y)_{i=1}^{k};(\widetilde{g}_{j})_{j=1}^{k}]=0.\) By [24, Lemma 2],

\[|D_{k}[(x_{i}+Y)_{i=1}^{k+1};(\widetilde{g}_{j})_{j=1}^{k}]|\leq\sum_{\alpha\in\mathcal{S}_{k+1}(k)}|D_{k}[z+Y,(x_{\alpha_{i}}+Y)_{i=1}^{k};(\widetilde{g}_{j})_{j=1}^{k}]|=0,\]

which is a contradiction. Hence \(X/Y\) is \(k-\)rotund.

In the rest of this section, we mainly focus on the stability of the notions \(k-\)WUR, \(k-\)WLUR and \(k-\)WMLUR under finite and infinite \(\ell_{p}-\)products. In Example 3.6 it is noted that the notions \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR) and \((k+1)-\)WUR (respectively, \((k+1)-\)WLUR, \((k+1)-\)WMLUR) do not coincide in general. However, we prove that these notions coincide for the \(\ell_{p}-\)product of a Banach space when \(1<p<\infty\) (see, Corollary 4.9). For this, we need the following results.

**Theorem 4.7**.: _Let \(1\leq p\leq\infty,\)\(X_{i}\) be a Banach space for all \(1\leq i\leq k\) and \(X=(\oplus_{p}X_{i})_{i=1}^{k}.\) Then the following statements hold._

1. _If_ \(X\) _is_ \(k-\)_WUR, then_ \(X_{i}\) _is WUR for some_ \(1\leq i\leq k.\)
2. _If_ \(X\) _is_ \(k-\)_WLUR, then_ \(X_{i}\) _is WLUR for some_ \(1\leq i\leq k.\)
3. _If_ \(X\) _is_ \(k-\)_WMLUR, then_ \(X_{i}\) _is WMLUR for some_ \(1\leq i\leq k.\)

Proof.: (1): Let \(k\geq 2,\)\(1\leq p<\infty\) and \(X\) be \(k-\)WUR.
Suppose \(X_{i}\) is not WUR for all \(1\leq i\leq k.\) Then for each \(1\leq i\leq k\) there exist \(f_{i}\in S_{X_{i}^{*}}\) and \((2)-\)sequences \((x_{n}^{(i)}),\)\((y_{n}^{(i)})\) in \(S_{X_{i}}\) such that \(\|x_{n}^{(i)}+y_{n}^{(i)}\|\to 2\), but \(|f_{i}(x_{n}^{(i)}-y_{n}^{(i)})|>\epsilon\) for all \(n\in\mathbb{N}\), for some \(\epsilon>0\). For every \(n\in\mathbb{N}\), define \[z_{n}^{(1)} =\frac{1}{k^{\frac{1}{p}}}(x_{n}^{(1)},x_{n}^{(2)},x_{n}^{(3)}, \ldots,x_{n}^{(k-1)},x_{n}^{(k)}),\] \[z_{n}^{(2)} =\frac{1}{k^{\frac{1}{p}}}(y_{n}^{(1)},x_{n}^{(2)},x_{n}^{(3)}, \ldots,x_{n}^{(k-1)},x_{n}^{(k)}),\] \[z_{n}^{(3)} =\frac{1}{k^{\frac{1}{p}}}(y_{n}^{(1)},y_{n}^{(2)},x_{n}^{(3)}, \ldots,x_{n}^{(k-1)},x_{n}^{(k)}),\] \[\vdots\] \[z_{n}^{(k+1)} =\frac{1}{k^{\frac{1}{p}}}(y_{n}^{(1)},y_{n}^{(2)},y_{n}^{(3)}, \ldots,y_{n}^{(k-1)},y_{n}^{(k)}).\] Clearly, \(z_{n}^{(t)}\in S_{X}\) for all \(1\leq t\leq k+1\) and \(n\in\mathbb{N}\). Now for every \(1\leq j\leq k\), let \(g_{j}=(0,\ldots,0,f_{j},0,\ldots 0)\in S_{X^{*}}\), here \(f_{j}\) is in the \(j^{th}\) coordinate. Since \[D_{k}[(z_{n}^{(t)})_{t=1}^{k+1};(g_{j})_{j=1}^{k}]=\frac{1}{k^{ \frac{k}{p}}}\begin{vmatrix}1&1&1&\ldots&1\\ f_{1}(x_{n}^{(1)})&f_{1}(y_{n}^{(1)})&f_{1}(y_{n}^{(1)})&\ldots&f_{1}(y_{n}^{(1 )})\\ f_{2}(x_{n}^{(2)})&f_{2}(x_{n}^{(2)})&f_{2}(y_{n}^{(2)})&\ldots&f_{2}(y_{n}^{(2 )})\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ f_{k}(x_{n}^{(k)})&f_{k}(x_{n}^{(k)})&f_{k}(x_{n}^{(k)})&\ldots&f_{k}(y_{n}^{(k )})\\ \end{vmatrix}\] \[=\frac{1}{k^{\frac{k}{p}}}\begin{vmatrix}0&0&0&\ldots&1\\ f_{1}(x_{n}^{(1)}-y_{n}^{(1)})&0&0&\ldots&f_{1}(y_{n}^{(1)})\\ f_{2}(x_{n}^{(2)}-y_{n}^{(2)})&f_{2}(x_{n}^{(2)}-y_{n}^{(2)})&0&\ldots&f_{2}(y _{n}^{(2)})\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ f_{k}(x_{n}^{(k)}-y_{n}^{(k)})&f_{k}(x_{n}^{(k)}-y_{n}^{(k)})&f_{k}(x_{n}^{(k )}-y_{n}^{(k)})&\ldots&f_{k}(y_{n}^{(k)})\\ \end{vmatrix},\] it follows that \[|D_{k}[(z_{n}^{(t)})_{t=1}^{k+1};(g_{j})_{j=1}^{k}]|=\frac{1}{k^{ \frac{k}{p}}}\left|\prod_{i=1}^{k}f_{i}(x_{n}^{(i)}-y_{n}^{(i)})\right|>\frac{ 1}{k^{\frac{k}{p}}}\epsilon^{k}.\] Since \[\frac{1}{k+1}\left\|\sum_{t=1}^{k+1}z_{n}^{(t)}\right\| =\frac{1}{(k+1)k^{\frac{1}{p}}}\left(\sum_{i=1}^{k}\left\|ix_{n}^ {(i)}+(k+1-i)y_{n}^{(i)}\right\|^{p}\right)^{\frac{1}{p}}\] \[=\frac{1}{k^{\frac{1}{p}}}\left(\sum_{i=1}^{k}\left\|\frac{i}{k+1 }x_{n}^{(i)}+\left(1-\frac{i}{k+1}\right)y_{n}^{(i)}\right\|^{p}\right)^{ \frac{1}{p}},\] we have \(\|\sum_{t=1}^{k+1}z_{n}^{(t)}\|\to k+1\). This is a contradiction. For the case \(p=\infty\), a similar proof holds. (2): Let \(k\geq 2\), \(1\leq p<\infty\) and \(X\) be \(k-\)WLUR. Suppose \(X_{i}\) is not WLUR for all \(1\leq i\leq k\). Then for each \(1\leq i\leq k\) there exist \(f_{i}\in S_{X_{i}^{*}}\), \(y_{i}\ \in S_{X_{i}}\) and a sequence \((x_{n}^{(i)})\) in \(S_{X_{i}}\) such that \(\|y_{i}+x_{n}^{(i)}\|\to 2\), but \(|f_{i}(y_{i}-x_{n}^{(i)})|>\epsilon\) for all \(n\in\mathbb{N}\), for some \(\epsilon>0\). Now, using the preceding functionals, sequences and by assuming \(y_{n}^{(i)}=y_{i}\) for all \(n\in\mathbb{N}\) and \(1\leq i\leq k\), construct \(k-\)functionals \(g_{1},g_{2},\ldots,g_{k}\) in \(S_{X^{*}}\) and \(k-\)sequences \((z_{n}^{(1)}),(z_{n}^{(2)}),\ldots,(z_{n}^{(k)})\) in \(S_{X}\) as in the proof of (1). Let \(z=\frac{1}{k^{\frac{1}{p}}}(y_{1},y_{2},\ldots,y_{k})\in S_{X}\). 
By following a similar technique as in the proof of (1), we have \(\|z+\sum_{i=1}^{k}z_{n}^{(i)}\|\to k+1\) and \(|D_{k}[z,(z_{n}^{(i)})_{i=1}^{k};(g_{j})_{j=1}^{k}]|>\frac{1}{k^{\frac{k}{p}}}\epsilon^{k}\) for all \(n\in\mathbb{N}\). This is a contradiction. For the case \(p=\infty\), a similar proof holds.

(3): Let \(k\geq 2\), \(1\leq p<\infty\) and \(X\) be \(k-\)WMLUR. Suppose \(X_{i}\) is not WMLUR for all \(1\leq i\leq k\). Therefore, by Theorem 3.21, \(B_{X_{i}}\) is not \(w\)SCh on \(2S_{X_{i}}\) for all \(1\leq i\leq k\). Then for each \(1\leq i\leq k\), there exist \(f_{i}\in S_{X_{i}^{*}}\), \(w_{i}\in 2S_{X_{i}}\) and \((2)-\)sequences \((x_{n}^{(i)})\), \((y_{n}^{(i)})\) in \(B_{X_{i}}\) such that \(\|x_{n}^{(i)}-w_{i}\|\to 1\) and \(\|y_{n}^{(i)}-w_{i}\|\to 1\), but \(|f_{i}(x_{n}^{(i)}-y_{n}^{(i)})|>\epsilon\) for all \(n\in\mathbb{N}\), for some \(\epsilon>0.\) Now, using the preceding sequences and functionals, we define \((k+1)-\)sequences \((z_{n}^{(1)}),(z_{n}^{(2)}),\ldots,(z_{n}^{(k+1)})\) in \(B_{X}\) and \((k)-\)functionals \(g_{1},g_{2},\ldots,g_{k}\) in \(S_{X^{*}}\) as in the proof of (1). Let \(w=\frac{1}{k^{\frac{1}{p}}}(w_{1},w_{2},\ldots,w_{k})\in 2S_{X}.\) Note that for any \(1\leq t\leq k+1\), we have

\[1=d(w,B_{X})\leq\|z_{n}^{(t)}-w\|=\frac{1}{k^{\frac{1}{p}}}\left(\sum_{j=1}^{t-1}\|y_{n}^{(j)}-w_{j}\|^{p}+\sum_{j=t}^{k}\|x_{n}^{(j)}-w_{j}\|^{p}\right)^{\frac{1}{p}}\]

and hence \(\|z_{n}^{(t)}-w\|\to d(w,B_{X}).\) Using a similar argument to the one involved in the proof of (1), we have

\[|D_{k}[(z_{n}^{(t)})_{t=1}^{k+1};(g_{j})_{j=1}^{k}]|=\frac{1}{k^{\frac{k}{p}}}\left|\prod_{i=1}^{k}f_{i}(x_{n}^{(i)}-y_{n}^{(i)})\right|>\frac{1}{k^{\frac{k}{p}}}\epsilon^{k}.\]

Thus, \(B_{X}\) is not \(k-\)wSCh at \(w.\) Therefore, by Theorem 3.21, \(X\) is not \(k-\)WMLUR, which is a contradiction. For the case \(p=\infty\), a similar proof holds. Hence the proof.

We notice that Theorem 4.7 can be extended to the infinite \(\ell_{p}-\)product, as presented in the following corollary.

**Corollary 4.8**.: _Let \(1\leq p\leq\infty,\)\(X_{i}\) be a Banach space for all \(i\in\mathbb{N}\) and \(X=(\oplus_{p}X_{i})_{i\in\mathbb{N}}.\) If \(X\) is \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR), then all but at most \((k-1)\) spaces of the collection \(\{X_{i}\}_{i\in\mathbb{N}}\) are WUR (respectively, WLUR, WMLUR)._

The following corollary is an immediate consequence of Proposition 3.5, Corollary 4.8 and [21, A.2, A.3, A.4].

**Corollary 4.9**.: _Let \(1<p<\infty.\) Then the following statements are equivalent._

1. \(X\) _is WUR (respectively, WLUR, WMLUR)._
2. \(\ell_{p}(X)\) _is WUR (respectively, WLUR, WMLUR)._
3. \(\ell_{p}(X)\) _is_ \(k-\)_WUR (respectively,_ \(k-\)_WLUR,_ \(k-\)_WMLUR)._

From the preceding result, we conclude that unlike the notion WUR (respectively, WLUR, WMLUR), the notion \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR) for \(k>1\) need not lift to the \(\ell_{p}-\)product space. To see this, consider a space \(X\) which is \(k-\)WUR but not rotund (see, Example 3.6). Now, we present a necessary condition for a finite \(\ell_{p}-\)product space to be \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR).

**Theorem 4.10**.: _Let \(X,\)\(Y\) be Banach spaces and \(1\leq p\leq\infty.\) For any \(k\in\mathbb{Z}^{+},\) there exist \(k_{1},k_{2}\in\mathbb{Z}^{+}\) with \(k=k_{1}+k_{2}-1\) such that the following statements hold._

1. _If_ \(X\oplus_{p}Y\) _is_ \(k-\)_WUR, then_ \(X\) _is_ \(k_{1}-\)_WUR and_ \(Y\) _is_ \(k_{2}-\)_WUR._
2.
_If_ \(X\oplus_{p}Y\) _is_ \(k-\)_WLUR, then_ \(X\) _is_ \(k_{1}-\)_WLUR and_ \(Y\) _is_ \(k_{2}-\)_WLUR._
3. _If_ \(X\oplus_{p}Y\) _is_ \(k-\)_WMLUR, then_ \(X\) _is_ \(k_{1}-\)_WMLUR and_ \(Y\) _is_ \(k_{2}-\)_WMLUR._

Proof.: (1): Let \(X\oplus_{p}Y\) be a \(k-\)WUR space. If \(Y\) is WUR, then there is nothing to prove. Assume \(Y\) is not WUR. Then there exists \(k_{2}\in\mathbb{Z}^{+}\) such that \(2\leq k_{2}\leq k\) and \(Y\) is \(k_{2}-\)WUR, but not \((k_{2}-1)-\)WUR. Now, it is enough to show that \(X\) is \(k_{1}-\)WUR, where \(k_{1}=k-k_{2}+1.\) Suppose \(X\) is not \(k_{1}-\)WUR. Then there exist \(\epsilon>0,\)\((k_{1}+1)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k_{1}+1)})\) in \(S_{X}\) with \(\|\sum_{i=1}^{k_{1}+1}x_{n}^{(i)}\|\to k_{1}+1,\) but \(|D_{k_{1}}[(x_{n}^{(i)})_{i=1}^{k_{1}+1};(f_{j})_{j=1}^{k_{1}}]|\geq\epsilon\) for all \(n\in\mathbb{N}\) and for some \(f_{1},f_{2},\ldots,f_{k_{1}}\in S_{X^{*}}.\) Since \(Y\) is not \((k_{2}-1)-\)WUR, there exist \((k_{2})-\)sequences \((y_{n}^{(1)}),(y_{n}^{(2)}),\ldots,(y_{n}^{(k_{2})})\) in \(S_{Y}\) with \(\|\sum_{i=1}^{k_{2}}y_{n}^{(i)}\|\to k_{2},\) but \(|D_{k_{2}-1}[(y_{n}^{(i)})_{i=1}^{k_{2}};(g_{j})_{j=1}^{k_{2}-1}]|\geq\epsilon\) for all \(n\in\mathbb{N}\) and for some \(g_{1},g_{2},\ldots,g_{k_{2}-1}\in S_{Y^{*}}.\) Choose \(r>0\) with \(\|(r,r)\|_{p}=1.\) For every \(n\in\mathbb{N}\) and \(1\leq i\leq k+1,\) define

\[z_{n}^{(i)}=\begin{cases}r(x_{n}^{(i)},y_{n}^{(k_{2})}),&\text{ if }1\leq i\leq k_{1}+1;\\ r(x_{n}^{(k_{1}+1)},y_{n}^{(i-k_{1}-1)}),&\text{ if }k_{1}+2\leq i\leq k_{1}+k_{2}.\end{cases}\]

Clearly, \(z_{n}^{(i)}\in X\oplus_{p}Y\) and \(\|z_{n}^{(i)}\|=1\) for all \(1\leq i\leq k+1,\)\(n\in\mathbb{N}.\) Note that

\[1-\frac{1}{k+1}\left\|\sum_{i=1}^{k+1}z_{n}^{(i)}\right\|=\|(r,r)\|_{p}-\frac{r}{k+1}\left\|\left(\sum_{i=1}^{k_{1}+1}x_{n}^{(i)}+(k_{2}-1)x_{n}^{(k_{1}+1)},\sum_{i=1}^{k_{2}-1}y_{n}^{(i)}+(k_{1}+1)y_{n}^{(k_{2})}\right)\right\|\]
\[=\|(r,r)\|_{p}-\frac{r}{k+1}\left\|\left(\left\|\sum_{i=1}^{k_{1}+1}x_{n}^{(i)}+(k_{2}-1)x_{n}^{(k_{1}+1)}\right\|,\left\|\sum_{i=1}^{k_{2}-1}y_{n}^{(i)}+(k_{1}+1)y_{n}^{(k_{2})}\right\|\right)\right\|\]
\[\leq r\left\|\left(1-\frac{1}{k+1}\left\|\sum_{i=1}^{k_{1}}x_{n}^{(i)}+k_{2}x_{n}^{(k_{1}+1)}\right\|,1-\frac{1}{k+1}\left\|\sum_{i=1}^{k_{2}-1}y_{n}^{(i)}+(k_{1}+1)y_{n}^{(k_{2})}\right\|\right)\right\|\]
\[\leq r\left|1-\frac{1}{k+1}\left\|\sum_{i=1}^{k_{1}}x_{n}^{(i)}+k_{2}x_{n}^{(k_{1}+1)}\right\|\right|+r\left|1-\frac{1}{k+1}\left\|\sum_{i=1}^{k_{2}-1}y_{n}^{(i)}+(k_{1}+1)y_{n}^{(k_{2})}\right\|\right|.\]

Thus, by [7, Lemma 3.8], it follows that \(\frac{1}{k+1}\|\sum_{i=1}^{k+1}z_{n}^{(i)}\|\to 1.\) For every \(1\leq j\leq k,\) define

\[h_{j}=\begin{cases}(f_{j},0),&\text{ if }1\leq j\leq k_{1};\\ (0,g_{j-k_{1}}),&\text{ if }k_{1}+1\leq j\leq k_{1}+k_{2}-1.\end{cases}\]

Clearly, \(h_{j}\in(X\oplus_{p}Y)^{*}\) and \(\|h_{j}\|=1\) for all \(1\leq j\leq k.\) Now, consider

\[D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]=D_{k}[((rx_{n}^{(i)},0))_{i=1}^{k_{1}+1},(r(x_{n}^{(k_{1}+1)},y_{n}^{(l)}-y_{n}^{(k_{2})}))_{l=1}^{k_{2}-1};(h_{j})_{j=1}^{k}]=r^{k}det\left(\begin{bmatrix}A_{n}&B_{n}\\ 0&C_{n}\end{bmatrix}\right),\]

where \(A_{n}=[a_{i,j}^{(n)}],\) here \(a_{1,j}^{(n)}=1,\)\(a_{i+1,j}^{(n)}=f_{i}(x_{n}^{(j)})\) for all \(1\leq i\leq k_{1},\)\(1\leq j\leq k_{1}+1;\)\(B_{n}=[b_{l,m}^{(n)}],\) here \(b_{1,m}^{(n)}=1,\)\(b_{l+1,m}^{(n)}=f_{l}(x_{n}^{(k_{1}+1)})\) for all \(1\leq m\leq k_{2}-1,\)
\(1\leq l\leq k_{1};\)\(C_{n}=[c_{s,t}^{(n)}],\) here \(c_{s,t}^{(n)}=g_{s}(y_{n}^{(t)}-y_{n}^{(k_{2})})\) for all \(1\leq s,t\leq k_{2}-1.\) Therefore, by assumption, for all \(n\in\mathbb{N}\) we have

\[|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|=r^{k}|D_{k_{1}}[(x_{n}^{(i)})_{i=1}^{k_{1}+1};(f_{j})_{j=1}^{k_{1}}]D_{k_{2}-1}[(y_{n}^{(i)})_{i=1}^{k_{2}};(g_{j})_{j=1}^{k_{2}-1}]|\geq r^{k}\epsilon^{2},\]

which contradicts the assumption that \(X\oplus_{p}Y\) is \(k-\)WUR. Thus, \(X\) is \(k_{1}-\)WUR.

(2): Let \(X\oplus_{p}Y\) be a \(k-\)WLUR space. If \(Y\) is WLUR, then there is nothing to prove. Assume \(Y\) is not WLUR. Then there exists \(k_{2}\in\mathbb{Z}^{+}\) such that \(2\leq k_{2}\leq k\) and \(Y\) is \(k_{2}-\)WLUR, but not \((k_{2}-1)-\)WLUR. Now, it is enough to show that \(X\) is \(k_{1}-\)WLUR, where \(k_{1}=k-k_{2}+1.\) Suppose \(X\) is not \(k_{1}-\)WLUR. Then there exist \(x\in S_{X},\)\(\epsilon>0\) and \((k_{1})-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k_{1})})\) in \(S_{X}\) with \(\|x+\sum_{i=1}^{k_{1}}x_{n}^{(i)}\|\to k_{1}+1,\) but \(|D_{k_{1}}[x,(x_{n}^{(i)})_{i=1}^{k_{1}};(f_{j})_{j=1}^{k_{1}}]|\geq\epsilon\) for all \(n\in\mathbb{N}\) and for some \(f_{1},f_{2},\ldots,f_{k_{1}}\in S_{X^{*}}\). Since \(Y\) is not \((k_{2}-1)-\)WLUR, there exist \(y\in S_{Y}\) and \((k_{2}-1)-\)sequences \((y_{n}^{(1)}),(y_{n}^{(2)}),\ldots,(y_{n}^{(k_{2}-1)})\) in \(S_{Y}\) with \(\|y+\sum_{i=1}^{k_{2}-1}y_{n}^{(i)}\|\to k_{2},\) but \(|D_{k_{2}-1}[y,(y_{n}^{(i)})_{i=1}^{k_{2}-1};(g_{j})_{j=1}^{k_{2}-1}]|\geq\epsilon\) for all \(n\in\mathbb{N}\) and for some \(g_{1},g_{2},\ldots,g_{k_{2}-1}\in S_{Y^{*}}\). Using the preceding functionals, consider \(k-\)functionals \(h_{1},h_{2},\ldots,h_{k}\) in \(S_{(X\oplus_{p}Y)^{*}}\) as in the proof of (1). Choose \(r>0\) with \(\|(r,r)\|_{p}=1.\) Let \(z=r(x,y)\). For every \(n\in\mathbb{N}\) and \(1\leq i\leq k,\) define

\[z_{n}^{(i)}=\begin{cases}r(x_{n}^{(i)},y),&\text{ if }1\leq i\leq k_{1};\\ r(x,y_{n}^{(i-k_{1})}),&\text{ if }k_{1}+1\leq i\leq k_{1}+k_{2}-1.\end{cases}\]

Clearly, \(z,z_{n}^{(i)}\in S_{(X\oplus_{p}Y)}\) for all \(1\leq i\leq k\) and \(n\in\mathbb{N}\). Now, using a similar argument as in the proof of (1), we have \(\frac{1}{k+1}\|z+\sum_{i=1}^{k}z_{n}^{(i)}\|\to 1\) and \(|D_{k}[z,(z_{n}^{(i)})_{i=1}^{k};(h_{j})_{j=1}^{k}]|\geq r^{k}\epsilon^{2}\) for all \(n\in\mathbb{N}.\) This contradicts the assumption that \(X\oplus_{p}Y\) is \(k-\)WLUR. Thus, \(X\) is \(k_{1}-\)WLUR.

(3): Let \(X\oplus_{p}Y\) be a \(k-\)WMLUR space. If \(Y\) is WMLUR, then there is nothing to prove. Assume \(Y\) is not WMLUR. Then there exists \(k_{2}\in\mathbb{Z}^{+}\) such that \(2\leq k_{2}\leq k\) and \(Y\) is \(k_{2}-\)WMLUR, but not \((k_{2}-1)-\)WMLUR. Now, it is enough to show that \(X\) is \(k_{1}-\)WMLUR, where \(k_{1}=k-k_{2}+1.\) Suppose \(X\) is not \(k_{1}-\)WMLUR.
Then, by Theorem 3.21, \(B_{X}\) is not \(k_{1}-\)wSCh on \(2S_{X}.\) Therefore, there exist \(x\in S_{X},\)\(\epsilon>0\) and \((k_{1}+1)-\)sequences \((x_{n}^{(1)}),(x_{n}^{(2)}),\ldots,(x_{n}^{(k_{1}+1)})\) in \(B_{X}\) with \(\|x_{n}^{(i)}-2x\|\to 1,\) but \(|D_{k_{1}}[(x_{n}^{(i)})_{i=1}^{k_{1}+1};(f_{j})_{j=1}^{k_{1}}]|\geq\epsilon\) for all \(n\in\mathbb{N}\) and for some \(f_{1},f_{2},\ldots,f_{k_{1}}\in S_{X^{*}}.\) Since, by Theorem 3.21, \(B_{Y}\) is not \((k_{2}-1)-\)wSCh on \(2S_{Y},\) there exist \(y\in S_{Y}\) and \((k_{2})-\)sequences \((y_{n}^{(1)}),(y_{n}^{(2)}),\ldots,(y_{n}^{(k_{2})})\) in \(B_{Y}\) with \(\|y_{n}^{(i)}-2y\|\to 1,\) but \(|D_{k_{2}-1}[(y_{n}^{(i)})_{i=1}^{k_{2}};(g_{j})_{j=1}^{k_{2}-1}]|\geq\epsilon\) for all \(n\in\mathbb{N}\) and for some \(g_{1},g_{2},\ldots,g_{k_{2}-1}\in S_{Y^{*}}.\) Choose \(r>0\) with \(\|(r,r)\|_{p}=1.\) Using the preceding sequences and functionals, consider \((k+1)-\)sequences \((z_{n}^{(1)}),(z_{n}^{(2)}),\ldots,(z_{n}^{(k+1)})\) in \(B_{X\oplus_{p}Y}\) and \(k-\)functionals \(h_{1},h_{2},\ldots,h_{k}\) in \(S_{(X\oplus_{p}Y)^{*}}\) as in the proof of (1). Let \(z=r(x,y)\in S_{X\oplus_{p}Y}\). It is easy to verify that \(\|z_{n}^{(i)}-2z\|\to 1\) for all \(1\leq i\leq k+1.\) Now, following the same technique as in the proof of (1), we get \(|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\geq r^{k}\epsilon^{2}\) for all \(n\in\mathbb{N},\) which implies \(B_{X\oplus_{p}Y}\) is not \(k-\)wSCh on \(X\oplus_{p}Y.\) Therefore, by Theorem 3.21, \(X\oplus_{p}Y\) is not \(k-\)WMLUR, which is a contradiction. Thus, \(X\) is \(k_{1}-\)WMLUR. Hence the proof.

The next result is an immediate consequence of Theorem 4.10 and the fact \((\oplus_{p}X_{i})_{i=1}^{d}\cong(\oplus_{p}X_{i})_{i=1}^{d-1}\oplus_{p}X_{d}.\)

**Corollary 4.11**.: _Let \(d\in\mathbb{Z}^{+}\), \(d>1\) and \(1\leq p\leq\infty.\) Let \(X_{i}\) be a Banach space for all \(1\leq i\leq d\) and \(X=(\oplus_{p}X_{i})_{i=1}^{d}.\) If \(X\) is \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR), then there exist \(k_{1},k_{2},\ldots,k_{d}\in\mathbb{Z}^{+}\) such that \(k=\sum_{i=1}^{d}k_{i}-d+1\) and \(X_{i}\) is \(k_{i}-\)WUR (respectively, \(k_{i}-\)WLUR, \(k_{i}-\)WMLUR)._

In the following result, we provide a sufficient condition for a finite \(\ell_{p}-\)product space to be \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR).

**Theorem 4.12**.: _Let \(X,Y\) be Banach spaces and \(1<p<\infty.\) For any \(k_{1},k_{2},k\in\mathbb{Z}^{+}\) satisfying \(k=k_{1}+k_{2}-1\), the following statements hold._

1. _If_ \(X\) _is_ \(k_{1}-\)_WUR and_ \(Y\) _is_ \(k_{2}-\)_WUR, then_ \(X\oplus_{p}Y\) _is_ \(k-\)_WUR._
2. _If_ \(X\) _is_ \(k_{1}-\)_WLUR and_ \(Y\) _is_ \(k_{2}-\)_WLUR, then_ \(X\oplus_{p}Y\) _is_ \(k-\)_WLUR._
3.
_If_ \(X\) _is_ \(k_{1}-\)_WMLUR and_ \(Y\) _is_ \(k_{2}-\)_WMLUR, then_ \(X\oplus_{p}Y\) _is_ \(k-\)_WMLUR._

Proof.: (1): Let \(X\) be \(k_{1}-\)WUR, \(Y\) be \(k_{2}-\)WUR and \(k=k_{1}+k_{2}-1.\) Let \((z_{n}^{(1)}),(z_{n}^{(2)}),\ldots,(z_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(S_{(X\oplus_{p}Y)}\) with \(\|\sum_{i=1}^{k+1}z_{n}^{(i)}\|\to k+1\) and \(h_{1},h_{2},\ldots,h_{k}\in S_{(X\oplus_{p}Y)^{\star}}.\) Clearly \(h_{j}=(f_{j},g_{j})\) for some \(f_{j}\in B_{X^{\star}},\) \(g_{j}\in B_{Y^{\star}},\) for all \(1\leq j\leq k\) and \(z_{n}^{(i)}=(x_{n}^{(i)},y_{n}^{(i)})\) for some \(x_{n}^{(i)}\in B_{X},\) \(y_{n}^{(i)}\in B_{Y}\) for all \(n\in\mathbb{N},\) \(1\leq i\leq k+1.\) Let \(1\leq i<j\leq k+1.\) Since \[\frac{1}{k+1}\left\|\sum_{t=1}^{k+1}z_{n}^{(t)}\right\|\leq\frac{1}{k+1}\left( \left\|z_{n}^{(i)}+z_{n}^{(j)}\right\|+k-1\right)\leq 1,\] it follows that \(\|z_{n}^{(i)}+z_{n}^{(j)}\|\to 2.\) Note that \[\|z_{n}^{(i)}+z_{n}^{(j)}\| =\|(\|x_{n}^{(i)}+x_{n}^{(j)}\|,\|y_{n}^{(i)}+y_{n}^{(j)}\|)\|\] \[\leq\|(\|x_{n}^{(i)}\|+\|x_{n}^{(j)}\|,\|y_{n}^{(i)}\|+\|y_{n}^{(j )}\|)\|\] \[=\|(\|x_{n}^{(i)}\|,\|y_{n}^{(i)}\|)+(\|x_{n}^{(j)}\|,\|y_{n}^{(j)} \|)\|\] \[\leq 2.\] Therefore, \(\|(\|x_{n}^{(i)}\|,\|y_{n}^{(i)}\|)+(\|x_{n}^{(j)}\|,\|y_{n}^{(j)}\|)\|\to 2.\) Since \((\mathbb{R}^{2},\|\cdot\|_{p})\) is uniformly rotund, it follows that \(\|(\|x_{n}^{(i)}\|-\|x_{n}^{(j)}\|,\|y_{n}^{(i)}\|-\|y_{n}^{(j)}\|)\|\to 0,\) which implies \(\|x_{n}^{(i)}\|-\|x_{n}^{(j)}\|\to 0\) and \(\|y_{n}^{(i)}\|-\|y_{n}^{(j)}\|\to 0.\)

Case\(-(i)\): Assume that the sequence \((\|x_{n}^{(1)}\|)\) converges. Therefore, for every \(1\leq i\leq k+1\) we have \(\|x_{n}^{(i)}\|\to a_{1}\) for some \(a_{1}\in[0,1]\), which further implies \(\|y_{n}^{(i)}\|\to a_{2}\), where \(a_{2}=(1-a_{1}^{p})^{\frac{1}{p}}\). Let \(\alpha\in\mathcal{S}_{k+1}(k_{1}+1)\). Note that for any subsequence \((n_{m})\) of \((n)\), we have \(\|\sum_{i=1}^{k_{1}+1}z_{n_{m}}^{(\alpha_{i})}\|\to k_{1}+1\) and \[\limsup_{m\to\infty}\left\|\sum_{i=1}^{k_{1}+1}z_{n_{m}}^{(\alpha _{i})}\right\| =\limsup_{m\to\infty}\left\|\left(\left\|\sum_{i=1}^{k_{1}+1}x_{n _{m}}^{(\alpha_{i})}\right\|,\left\|\sum_{i=1}^{k_{1}+1}y_{n_{m}}^{(\alpha_{i} )}\right\|\right)\right\|\] \[\leq\left\|\left(\limsup_{m\to\infty}\left\|\sum_{i=1}^{k_{1}+1}x_ {n_{m}}^{(\alpha_{i})}\right\|,\limsup_{m\to\infty}\left\|\sum_{i=1}^{k_{1}+1 }y_{n_{m}}^{(\alpha_{i})}\right\|\right)\right\|\] \[\leq\left\|\left(\limsup_{m\to\infty}\sum_{i=1}^{k_{1}+1}\left\|x _{n_{m}}^{(\alpha_{i})}\right\|,\limsup_{m\to\infty}\sum_{i=1}^{k_{1}+1}\left\| y_{n_{m}}^{(\alpha_{i})}\right\|\right)\right\|\] \[=\|((k_{1}+1)a_{1},(k_{1}+1)a_{2})\|\] \[=k_{1}+1.\] Thus \(\limsup_{m\to\infty}\|\sum_{i=1}^{k_{1}+1}x_{n_{m}}^{(\alpha_{i})}\|=(k_{1}+1 )a_{1}\), which further implies \(\|\sum_{i=1}^{k_{1}+1}x_{n}^{(\alpha_{i})}\|\to(k_{1}+1)a_{1}\). If \(a_{1}=0\), by Lemma 2.2, we have \(D_{k_{1}}[(x_{n}^{(\alpha_{i})})_{i=1}^{k_{1}+1};(f_{\lambda_{j}})_{j=1}^{k_{1 }}]\to 0\) for all \(\lambda\in\mathcal{S}_{k}(k_{1})\). Suppose \(a_{1}\neq 0\). Since \(X\) is \(k_{1}-\)WUR, we have \(D_{k_{1}}[(x_{n}^{(\alpha_{i})})_{i=1}^{k_{1}+1};(f_{\lambda_{j}})_{j=1}^{k_{1 }}]\to 0\) for all \(\lambda\in\mathcal{S}_{k}(k_{1})\). Similarly, for every \(\beta\in\mathcal{S}_{k+1}(k_{2}+1)\) we have \(\|\sum_{j=1}^{k_{2}+1}y_{n}^{(\beta_{j})}\|\to(k_{2}+1)a_{2}\) and \(D_{k_{2}}[(y_{n}^{(\beta_{i})})_{i=1}^{k_{2}+1};(g_{\mu_{j}})_{j=1}^{k_{2}}]\to 0\) for all \(\mu\in\mathcal{S}_{k}(k_{2})\).
Consider \[D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]=\begin{vmatrix}1&1&\ldots&1 \\ f_{1}(x_{n}^{(1)})+g_{1}(y_{n}^{(1)})&f_{1}(x_{n}^{(2)})+g_{1}(y_{n}^{(2)})& \ldots&f_{1}(x_{n}^{(k+1)})+g_{1}(y_{n}^{(k+1)})\\ \vdots&\vdots&\ddots&\vdots\\ f_{k}(x_{n}^{(1)})+g_{k}(y_{n}^{(1)})&f_{k}(x_{n}^{(2)})+g_{k}(y_{n}^{(2)})& \ldots&f_{k}(x_{n}^{(k+1)})+g_{k}(y_{n}^{(k+1)})\end{vmatrix}.\] Since the determinant is multilinear, we can write the preceding determinant as the sum of \(2^{k}\) determinants, each of order \(k+1\). Then, by rearranging the rows, we can rewrite all \(2^{k}\) determinants such that \[|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\leq\sum_{t=1}^{2^{k}}|a_{ n}^{(t)}|,\] where \[a_{n}^{(t)}=\begin{vmatrix}1&1&\ldots&1\\ f_{\alpha_{1}}(x_{n}^{(1)})&f_{\alpha_{1}}(x_{n}^{(2)})&\ldots&f_{\alpha_{1}}(x_ {n}^{(k+1)})\\ \vdots&\vdots&\ddots&\vdots\\ f_{\alpha_{r_{t}}}(x_{n}^{(1)})&f_{\alpha_{r_{t}}}(x_{n}^{(2)})&\ldots&f_{\alpha_{r_{t}}}(x _{n}^{(k+1)})\\ g_{\beta_{1}}(y_{n}^{(1)})&g_{\beta_{1}}(y_{n}^{(2)})&\ldots&g_{\beta_{1}}(y_{n} ^{(k+1)})\\ \vdots&\vdots&\ddots&\vdots\\ g_{\beta_{s_{t}}}(y_{n}^{(1)})&g_{\beta_{s_{t}}}(y_{n}^{(2)})&\ldots&g_{\beta_{s_{t }}}(y_{n}^{(k+1)})\end{vmatrix}\] for some \(\alpha\in\mathcal{S}_{k}(r_{t})\), \(\beta\in\mathcal{S}_{k}(s_{t})\) and \(0\leq r_{t},s_{t}\leq k\) with \(r_{t}+s_{t}=k\) for all \(1\leq t\leq 2^{k}\). Observe that in each determinant \(a_{n}^{(t)}\), either \(r_{t}\geq k_{1}\) or \(s_{t}\geq k_{2}\). Consider the determinant \(a_{n}^{(t_{0})}\), for some \(1\leq t_{0}\leq 2^{k}\).

subcase\(-(a)\): Suppose \(r_{t_{0}}\geq k_{1}\). Then evaluate the determinant \(a_{n}^{(t_{0})}\) using the Laplace expansion of the determinant [10] (by fixing the first \(k_{1}+1\) rows). Since each entry of the determinant \(a_{n}^{(t_{0})}\) is bounded by \(1\) and \(D_{k_{1}}[(x_{n}^{(\alpha_{i})})_{i=1}^{k_{1}+1};(f_{\lambda_{j}})_{j=1}^{k_{1 }}]\to 0\) for all \(\alpha\in\mathcal{S}_{k+1}(k_{1}+1)\), \(\lambda\in\mathcal{S}_{k}(k_{1})\), it follows that \(|a_{n}^{(t_{0})}|\to 0\).

subcase\(-(b)\): Suppose \(s_{t_{0}}\geq k_{2}\). Then evaluate the determinant \(a_{n}^{(t_{0})}\) using the Laplace expansion of the determinant [10] (by fixing the rows \(R_{1},R_{r_{t_{0}}+1},R_{r_{t_{0}}+2},\ldots,R_{r_{t_{0}}+k_{2}}\)). Since each entry of the determinant \(a_{n}^{(t_{0})}\) is bounded by \(1\) and \(D_{k_{2}}[(y_{n}^{(\beta_{i})})_{i=1}^{k_{2}+1};(g_{\mu_{j}})_{j=1}^{k_{2}}]\to 0\) for all \(\beta\in\mathcal{S}_{k+1}(k_{2}+1)\), \(\mu\in\mathcal{S}_{k}(k_{2})\), it follows that \(|a_{n}^{(t_{0})}|\to 0\). Therefore, \(|a_{n}^{(t)}|\to 0\) for all \(1\leq t\leq 2^{k}\). Thus, \(|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\to 0\).

Case\(-(ii)\): Assume that the sequence \((\|x_{n}^{(1)}\|)\) does not converge. We need to show that \(|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\to 0\). Suppose \(|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\) does not converge to \(0\). Then there exist a subsequence \((n_{m})\) of \((n)\) and \(\epsilon>0\) such that \(|D_{k}[(z_{n_{m}}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\geq\epsilon\) for all \(m\in\mathbb{N}\). Since the sequence \((\|x_{n_{m}}^{(1)}\|)\) is bounded, there exists a subsequence \((\|x_{m_{s}}^{(1)}\|)\) of \((\|x_{n_{m}}^{(1)}\|)\) such that \(\|x_{m_{s}}^{(1)}\|\to b_{1}\) for some \(b_{1}\in[0,1]\). Now, by Case\(-(i)\), we have \(|D_{k}[(z_{m_{s}}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\to 0\) as \(s\to\infty\), which is a contradiction.
Thus, \(|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\to 0\).

(2): Let \(X\) be \(k_{1}-\)WLUR, \(Y\) be \(k_{2}-\)WLUR and \(k=k_{1}+k_{2}-1\). Let \(z\in S_{(X\oplus_{p}Y)}\), \((z_{n}^{(1)}),(z_{n}^{(2)}),\ldots\), \((z_{n}^{(k)})\) be \((k)-\)sequences in \(S_{(X\oplus_{p}Y)}\) with \(\|z+\sum_{i=1}^{k}z_{n}^{(i)}\|\to k+1\) and \(h_{1},h_{2},\ldots,h_{k}\in S_{(X\oplus_{p}Y)^{*}}\). Clearly \(h_{j}=(f_{j},g_{j})\) for some \(f_{j}\in B_{X^{*}}\), \(g_{j}\in B_{Y^{*}}\), for all \(1\leq j\leq k\), \(z=(x,y)\) and \(z_{n}^{(i)}=(x_{n}^{(i)},y_{n}^{(i)})\) for some \(x,x_{n}^{(i)}\in B_{X}\), \(y,y_{n}^{(i)}\in B_{Y}\) for all \(n\in\mathbb{N}\), \(1\leq i\leq k\). By considering \(z_{n}^{(k+1)}=z\) for all \(n\in\mathbb{N}\) and following similar steps to those involved in the proof of (1), we obtain \(\|x_{n}^{(i)}\|\to\|x\|\) and \(\|y_{n}^{(i)}\|\to\|y\|\) for all \(1\leq i\leq k\). Now the rest of the proof follows as Case\(-(i)\) in the proof of (1).

(3): Let \(X\) be \(k_{1}-\)WMLUR, \(Y\) be \(k_{2}-\)WMLUR and \(k=k_{1}+k_{2}-1\). Now, by Theorem 3.21, it is enough to show that \(B_{X\oplus_{p}Y}\) is \(k-\)wSCh on \(2S_{X\oplus_{p}Y}\). Let \(z\in S_{X\oplus_{p}Y}\), \((z_{n}^{(1)}),(z_{n}^{(2)}),\ldots,(z_{n}^{(k+1)})\) be \((k+1)-\)sequences in \(B_{(X\oplus_{p}Y)}\) such that \(\|2z-z_{n}^{(i)}\|\to 1\) for all \(1\leq i\leq k+1\) and \(h_{1},h_{2},\ldots,h_{k}\in S_{(X\oplus_{p}Y)^{*}}\). Clearly \(h_{j}=(f_{j},g_{j})\) for some \(f_{j}\in B_{X^{*}}\), \(g_{j}\in B_{Y^{*}}\), for all \(1\leq j\leq k\), \(z=(x,y)\) and \(z_{n}^{(i)}=(x_{n}^{(i)},y_{n}^{(i)})\) for some \(x,x_{n}^{(i)}\in B_{X}\), \(y,y_{n}^{(i)}\in B_{Y}\) for all \(n\in\mathbb{N}\), \(1\leq i\leq k+1\). Let \(1\leq i\leq k+1\). Note that \[\|2z-z_{n}^{(i)}\| =\|(\|2x-x_{n}^{(i)}\|,\|2y-y_{n}^{(i)}\|)\|\] \[\geq\|(\|2x\|-\|x_{n}^{(i)}\|,\|2y\|-\|y_{n}^{(i)}\|)\|\] \[=\|(\|2x\|,\|2y\|)-(\|x_{n}^{(i)}\|,\|y_{n}^{(i)}\|)\|\] \[\geq\|2z\|-\|z_{n}^{(i)}\|\] \[\geq 1,\] which implies \(\|(\|2x\|,\|2y\|)-(\|x_{n}^{(i)}\|,\|y_{n}^{(i)}\|)\|\to 1\). Since \(B_{(\mathbb{R}^{2},\|\cdot\|_{p})}\) is strongly Chebyshev on \((\mathbb{R}^{2},\|\cdot\|_{p})\), we have \((\|x_{n}^{(i)}\|,\|y_{n}^{(i)}\|)\to(\|x\|,\|y\|)\), which further implies \(\|x_{n}^{(i)}\|\to\|x\|\) and \(\|y_{n}^{(i)}\|\to\|y\|\). For any subsequence \((n_{m})\) of \((n)\), observe that \[\liminf_{m\to\infty}\big{\|}2z-z_{n_{m}}^{(i)}\big{\|} =\liminf_{m\to\infty}\big{\|}\big{(}\big{\|}2x-x_{n_{m}}^{(i)}\big{\|} \,,\big{\|}2y-y_{n_{m}}^{(i)}\big{\|}\big{)}\big{\|}\] \[\geq\Big{\|}\Big{(}\liminf_{m\to\infty}\big{\|}2x-x_{n_{m}}^{(i)} \big{\|}\,,\liminf_{m\to\infty}\big{\|}2y-y_{n_{m}}^{(i)}\big{\|}\Big{)}\Big{\|}\] \[\geq\|(\|x\|\,,\|y\|)\|\] \[=1,\] which implies \(\liminf_{m\to\infty}\|2x-x_{n_{m}}^{(i)}\|=\|x\|\). Therefore, \(\|2x-x_{n}^{(i)}\|\to\|x\|\). If \(\|x\|=0\), by Remark 2.1, we have \(D_{k_{1}}[(x_{n}^{(\alpha_{i})})_{i=1}^{k_{1}+1};(f_{\lambda_{j}})_{j=1}^{k_{1}}]\to 0\) for all \(\alpha\in\mathcal{S}_{k+1}(k_{1}+1)\) and \(\lambda\in\mathcal{S}_{k}(k_{1})\). Assume \(\|x\|\neq 0\).
Note that for any \(1\leq i\leq k+1\), we have \[\left\|\frac{x_{n}^{(i)}}{\|x_{n}^{(i)}\|}-2\frac{x}{\|x\|}\right\|=\left\|\frac{x _{n}^{(i)}}{\|x_{n}^{(i)}\|}-\frac{x_{n}^{(i)}}{\|x\|}+\frac{x_{n}^{(i)}}{\|x\|}-2 \frac{x}{\|x\|}\right\|\leq\big{\|}x_{n}^{(i)}\big{\|}\left|\frac{1}{\|x_{n}^{(i)} \|}-\frac{1}{\|x\|}\right|+\frac{1}{\|x\|}\left\|x_{n}^{(i)}-2x\right\|\] and hence \(\left\|\frac{x_{n}^{(i)}}{\left\|x_{n}^{(i)}\right\|}-2\frac{x}{\left\|x\right\|} \right\|\to 1.\) Since \(X\) is \(k_{1}-\)WMLUR, by Theorem 3.21, it follows that \(B_{X}\) is \(k_{1}-\)wSCh on \(2S_{X}.\) Therefore \(D_{k_{1}}\left[\left(\frac{x_{n}^{(\alpha_{i})}}{\left\|x_{n}^{(\alpha_{i})}\right\|} \right)_{i=1}^{k_{1}+1};(f_{\lambda_{j}})_{j=1}^{k_{1}}\right]\to 0;\) further, by Remark 2.1 and Lemma 2.2, we have \(D_{k_{1}}[(x_{n}^{(\alpha_{i})})_{i=1}^{k_{1}+1};(f_{\lambda_{j}})_{j=1}^{k_{1}}]\to 0\) for all \(\alpha\in\mathcal{S}_{k+1}(k_{1}+1)\) and \(\lambda\in\mathcal{S}_{k}(k_{1}).\) Similarly, \(D_{k_{2}}[(y_{n}^{(\beta_{i})})_{i=1}^{k_{2}+1};(g_{\mu_{j}})_{j=1}^{k_{2}}]\to 0\) for all \(\beta\in\mathcal{S}_{k+1}(k_{2}+1)\) and \(\mu\in\mathcal{S}_{k}(k_{2}).\) Now, by repeating the technique involved in Case\(-(i)\) of the proof of (1), we obtain \(|D_{k}[(z_{n}^{(i)})_{i=1}^{k+1};(h_{j})_{j=1}^{k}]|\to 0.\) Hence the proof.

The next result is an immediate consequence of Theorem 4.12 and the fact \((\oplus_{p}X_{i})_{i=1}^{d}\cong(\oplus_{p}X_{i})_{i=1}^{d-1}\oplus_{p}X_{d}.\)

**Corollary 4.13**.: _Let \(d\in\mathbb{Z}^{+}\), \(d>1\) and \(1<p<\infty.\) Let \(X_{i}\) be a Banach space for all \(1\leq i\leq d\) and \(X=(\oplus_{p}X_{i})_{i=1}^{d}.\) If \(X_{i}\) is \(k_{i}-\)WUR (respectively, \(k_{i}-\)WLUR, \(k_{i}-\)WMLUR) for all \(1\leq i\leq d,\) then \(X\) is \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR), where \(k=\sum_{i=1}^{d}k_{i}-d+1.\)_

As a consequence of [21, A.2, A.3, A.4] and Corollaries 4.8, 4.11 and 4.13, we now present a necessary and sufficient condition for an infinite \(\ell_{p}-\)product space to be \(k-\)WUR (respectively, \(k-\)WLUR, \(k-\)WMLUR).

**Theorem 4.14**.: _Let \(1<p<\infty\), \(X_{i}\) be a Banach space for all \(i\in\mathbb{N}\) and \(X=(\oplus_{p}X_{i})_{i\in\mathbb{N}}.\) Then the following statements are equivalent._

1. \(X\) _is_ \(k-\)_WUR (respectively,_ \(k-\)_WLUR,_ \(k-\)_WMLUR)._
2. _There exists_ \(j\in\mathbb{N}\) _such that_ \(X_{i}\) _is WUR (respectively, WLUR, WMLUR) for all_ \(i>j\) _and for each_ \(i\leq j\) _there exists_ \(k_{i}\in\mathbb{Z}^{+}\) _with_ \(\sum_{i=1}^{j}k_{i}-j+1\leq k\) _such that_ \(X_{i}\) _is_ \(k_{i}-\)_WUR (respectively,_ \(k_{i}-\)_WLUR,_ \(k_{i}-\)_WMLUR)._

We now provide a few examples to demonstrate that some of the implications and assertions mentioned in the preceding sections cannot be reversed in general. The subsequent example reveals that some implications observed in Remark 3.4 and the one given immediately below Definition 3.8 cannot be reversed in general.

**Example 4.15**.:
1. _Consider the space_ \(X=(\ell_{2},\|\cdot\|_{L})\) _from_ [20, Example 1] _and_ \(k\in\mathbb{Z}^{+}.\) _In_ [20]_, it is proved that_ \(X\) _is LUR and reflexive, but not WUR. By Corollary_ 4.9_,_ \(\ell_{2}(X)\) _is not_ \(k-\)_WUR. However, by_ [14, Theorem 1.1]_,_ \(\ell_{2}(X)\) _is LUR (hence,_ \(k-\)_LUR)._
2. _Consider the space_ \(X=(\ell_{2},\|\cdot\|_{A})\) _from_ [20, Example 3] _and_ \(k\in\mathbb{Z}^{+}.\) _In_ [20]_, it is proved that_ \(X\) _is strongly rotund (hence, MLUR), but not WLUR.
By Corollary_ 4.9_,_ \(\ell_{2}(X)\) _is not_ \(k-\)_WLUR. However, by_ [21]_,_ \(\ell_{2}(X)\) _is strongly rotund (hence,_ \(k-\)_strongly rotund and_ \(k-\)_MLUR)._

We now present an example of a space which is strongly rotund and \((k+1)-\)WUR, but not \(k-\)WLUR, as specified immediately below Proposition 3.5.

**Example 4.16**.: _For each \(x=(x_{1},x_{2},\dots)\) in \(\ell_{2},\) define \(\|x\|_{1}=\sup\{\|x\|_{i_{1},i_{2}}:i_{1}<i_{2}\},\) where \(\|x\|_{i_{1},i_{2}}\) is defined as in Example 3.6. Let \((c_{n})\) be a decreasing sequence of positive real numbers converging to zero. Define the continuous map \(T:(\ell_{2},\|\cdot\|_{1})\to(\ell_{2},\|\cdot\|_{2})\) by \(T(x_{1},x_{2},\dots)=(c_{2}x_{2},c_{3}x_{3},\dots).\) Now, define \(\|x\|_{r}^{2}=\|x\|_{1}^{2}+\|T(x)\|_{2}^{2}\) for all \(x\in\ell_{2}.\) Let \(B=(\ell_{2},\|\cdot\|_{r}).\) In_ [16, Example 2]_, it is proved that \(B\) is \(2-\)UR and rotund, but not LUR. Now, we will prove that the space \(B\) is not WLUR. Let \((e_{n})\) be the standard basis of \((\ell_{2},\|\cdot\|_{2}).\) It is easy to see that \(\|e_{1}\|_{r}=1,\) \(\|e_{n}\|_{r}\to 1\) and \(\|e_{1}+e_{n}\|_{r}\to 2.\) Consider \(f=\frac{e_{1}}{\|e_{1}\|}\in S_{B^{*}}.\) Observe that \(f(e_{1}-e_{n})=\frac{1}{\|e_{1}\|}\) for all \(n\geq 2.\) Therefore, \(e_{n}-e_{1}\) does not converge to \(0\) weakly. Hence, \(B\) is not WLUR. For any \(k\in\mathbb{Z}^{+}\), consider \(X=B\oplus_{2}B\oplus_{2}\cdots\oplus_{2}B\) (\(k\) times). Clearly, \(X\) is strongly rotund. Since \(B\) is \(2-\)WUR, it follows from Corollary 4.13 that \(X\) is \((k+1)-\)WUR. However, it is easy to see from Theorem 4.7 that \(X\) is not \(k-\)WLUR._

The following example illustrates that the implication observed immediately after Definition 2.5 cannot be reversed in general. The example also shows that the converse of Proposition 3.19 is not necessarily true.

**Example 4.17**.: _Let \(k\in\mathbb{Z}^{+}\) and \(X\) be a \(k-\)strongly rotund, but not \(k-\)WLUR space (see Examples 4.15 and 4.16). Therefore, by [26, Theorem 2.10], every closed convex subset of \(X\) is \(k-\)SCh on \(X;\) in particular, \(B_{X}\) is \(k-\)wSCh on \(2S_{X}.\) However, by Theorem 3.15, \(B_{X}\) is not \(k-\)wUSCh on \(2S_{X}.\)_

As mentioned after Proposition 3.11, the following example demonstrates that \(k-\)weak uniform rotundity of \(X\) does not imply that the space \(X^{**}\) is \(k-\)WMLUR.

**Example 4.18**.: _Let \(Z=(c_{0},\|\cdot\|_{\infty})\) and \(k\in\mathbb{Z}^{+}.\) By [4, Chapter II, Corollary 6.9], \(Z\) admits an equivalent norm (say, \(\|\cdot\|_{r}\)) such that \(Y=(c_{0},\|\cdot\|_{r})\) is WUR. Since it is proved in [1] that \(\ell_{\infty}\) does not have any equivalent WMLUR renorming, we have that \(Y^{**}\) is not WMLUR. Consider the Banach space \(X=\ell_{2}(Y).\) Then, by [21, A.2], \(X\) is WUR (hence, \(k-\)WUR). Clearly, \(X^{**}\cong\ell_{2}(Y^{**}).\) Therefore, by Corollary 4.9, \(X^{**}\) is not \(k-\)WMLUR._
We introduce the notions of $k$-weak uniform rotundity ($k$-WUR) and $k$-weak local uniform rotundity ($k$-WLUR) in real Banach spaces. These are natural generalizations of the known notions $k$-UR and WUR. By introducing two best-approximation notions, namely $k$-weakly strong Chebyshevity and $k$-weakly uniform strong Chebyshevity, we generalize existing results to $k$-WUR and $k$-WLUR spaces. In particular, we characterize $k$-WUR spaces in terms of $k$-weakly uniform strong Chebyshevness. We also discuss how the notions of $k$-WUR and $k$-WLUR are inherited by quotient spaces. Furthermore, we give a necessary and sufficient condition for an infinite $\ell_p$-product space to be $k$-WUR (respectively, $k$-WLUR).
2309.03758
Hybrid of representation learning and reinforcement learning for dynamic and complex robotic motion planning
Motion planning is the soul of robot decision making. Classical planning algorithms like graph search and reaction-based algorithms face challenges in cases of dense and dynamic obstacles. Deep learning algorithms generate suboptimal one-step predictions that cause many collisions. Reinforcement learning algorithms generate optimal or near-optimal time-sequential predictions. However, they suffer from slow convergence, suboptimal converged results, and overfitting. This paper introduces a hybrid algorithm for robotic motion planning: long short-term memory (LSTM) pooling and skip connection for attention-based discrete soft actor critic (LSA-DSAC). First, the graph network (relational graph) and the attention network (attention weight) interpret the environmental state for the learning of the discrete soft actor critic algorithm. The expressive power of the attention network outperforms that of the graph in our task, as shown by a difference analysis of these two representation methods. However, attention-based DSAC faces an overfitting problem in training. Second, the skip connection method is integrated into attention-based DSAC to mitigate overfitting and improve convergence speed. Third, LSTM pooling replaces the sum operator of the attention weight and eliminates overfitting by slightly sacrificing convergence speed at early-stage training. Experiments show that LSA-DSAC outperforms the state-of-the-art in training and most evaluations. The physical robot is also implemented and tested in the real world.
Chengmin Zhou, Xin Lu, Jiapeng Dai, Bingding Huang, Xiaoxu Liu, Pasi Fränti
2023-09-07T15:00:49
http://arxiv.org/abs/2309.03758v1
# Hybrid of representation learning and reinforcement learning for dynamic and complex robotic motion planning

###### Abstract

Motion planning is the soul of robot decision making. Classical planning algorithms like graph search and reaction-based algorithms face challenges in cases of dense and dynamic obstacles. Deep learning algorithms generate suboptimal one-step predictions that cause many collisions. Reinforcement learning algorithms generate optimal or near-optimal time-sequential predictions. However, they suffer from slow convergence, suboptimal converged results, and overfitting. This paper introduces a hybrid algorithm for robotic motion planning: _long short-term memory_ (LSTM) pooling and skip connection for attention-based discrete soft actor critic (LSA-DSAC). First, the graph network (relational graph) and the attention network (attention weight) interpret the environmental state for the learning of the discrete soft actor critic algorithm. The expressive power of the attention network outperforms that of the graph in our task, as shown by a difference analysis of these two representation methods. However, attention-based DSAC faces an overfitting problem in training. Second, the skip connection method is integrated into attention-based DSAC to mitigate overfitting and improve convergence speed. Third, LSTM pooling replaces the sum operator of the attention weight and eliminates overfitting by slightly sacrificing convergence speed at early-stage training. Experiments show that LSA-DSAC outperforms the state-of-the-art in training and most evaluations. The physical robot is also implemented and tested in the real world.

Motion Planning, Navigation, Reinforcement Learning, Representation Learning, Intelligent Robot

## I Introduction

Intelligent robots play an important role in our daily life. For example, autonomous robots have been applied to hotel guidance [1], parcel delivery [2][3], and robotic arms in manufacturing [4][5]. Motion planning or path planning is the soul of robotic decision making. It enables robots to reach their goals and finish their tasks. Classical planning algorithms like graph search (e.g., A* [6]) enable robots to navigate in static environments. However, they cause many collisions in environments with dense and dynamic obstacles because of the huge burden of updating the environmental map in real time. Classical reaction-based algorithms like _dynamic window approach_ (DWA) [7] and _optimal reciprocal collision avoidance_ (ORCA) [8] reduce collisions in environments with dense and dynamic obstacles because they compute the robot's motion by considering only the obstacles' geometry and speed information. This requires fewer information updates compared to the map update. However, high collision rates still exist in reaction-based algorithms when the robot avoids obstacles at high speed because of the increasing burden of the information update. _Deep learning_ (DL) algorithms like the _convolutional neural network_ (CNN) [9] avoid the information update problem of the reaction-based algorithms by training a model that generates the robot's decisions or actions in real time. However, decisions from DL are based on one-step predictions, which result in suboptimal trajectories when the robot moves toward its goals. _Reinforcement learning_ (RL) algorithms like _deep Q network_ (DQN) [10] and _advantage actor critic_ (A2C) or _asynchronous advantage actor critic_ (A3C) [11] improve on the one-step predictions of CNN by training the models based on multi-step time-sequential predictions.
These multi-step time-sequential predictions are better than the one-step predictions because the training of the RL model considers the time-sequential information of the goals and obstacles. RL, however, may suffer from slow convergence and a suboptimal converged result once the input (the environmental state) is of low quality with limited expressive power.

**Progress of representation learning and RL.** Currently, representation learning methods like LSTM pooling [12], graph networks [13][14], and attention networks [15][16] improve the expressive power of the input. RL algorithms are also improved by introducing new architectures like double Q networks [17], the dueling architecture [18], and the deterministic architecture [19][20]. The fused architecture of the double Q networks and the actor-critic network has proved to be one of the most efficient architectures in RL [21]. This architecture is also applied in many RL variants like _twin delayed deep deterministic policy gradient_ (TD3) [22] and _soft actor critic_ (SAC) [23][24][25]. The combination of representation learning and RL is a promising direction for better motion planning performance, because RL is fed with input of high expressive power. This improves the overall convergence of RL algorithms.

**Technical difficulties of existing works.** The combination of representation learning and RL is promising for improving robotic motion planning performance. However, current works in this direction are still not good enough for challenging commercial tasks. Existing works on the combination of representation learning and RL include the _relational graph_ (RG) [13], _proximal policy optimization_ (PPO) with multiple robots [26], CADRL [27], LSTM-A2C [28][29], LSTM-RL [15] and SARL [15]. RG is the combination of a relational graph and DQN. The relational graph describes the relationship of all agents (the robot and obstacles), instead of focusing on the robot-obstacle relationship. The relational graph therefore represents the robot-obstacle relationship only partly and indirectly, so its expressive power is limited. DQN faces over-estimation problems, which cause the slow and suboptimal convergence of the overall networks. PPO with multiple robots faces problems of data quality because it learns obstacle features from the entire source environmental state without precisely and explicitly analyzing the relationship between the robot and obstacles. Moreover, the entire source environmental state is interpreted by the CNN in PPO with multiple robots. Background noise is also included in this interpretation process, resulting in poor quality of the interpreted environmental state. CADRL learns the pairwise feature of the robot and one obstacle by DQN. The trained model is then applied to the multiple-obstacle case. CADRL is myopic because it does not fully consider the relationship between the robot and obstacles: only the closest obstacle feature is used for training instead of all obstacle features. DQN in CADRL also brings high bias and variance. In LSTM-A2C and LSTM-RL, LSTM encodes obstacle features in a distance-based order, which only partly represents the robot-obstacle relationship, resulting in limited expressive power of the interpreted environmental state. A2C/A3C lacks efficient data replay strategies, and A2C/A3C and DQN in LSTM-A2C and LSTM-RL bring high bias and variance, both of which result in slow convergence.
SARL consists of an attention network and DQN, where the attention network interprets robot-obstacle features into an attention weight that better describes the relationship between the robot and obstacles, resulting in improved expressive power. However, the attention network still faces an overfitting problem if the overall architecture has deep and complex networks. Moreover, DQN brings high bias and variance. These two reasons cause the slow and suboptimal convergence of SARL.

**Optimizations and contributions.** For better motion planning performance of the robot among dense and dynamic obstacles, 1) we first implemented the _discrete soft actor critic_ (DSAC), which is the soft actor critic algorithm in the setting of a discrete action space and is also one of the most efficient RL algorithms currently. DSAC is then combined with the _relational graph_ (RG) [13], resulting in the _relational graph based DSAC_ (RG-DSAC), which achieves satisfactory performance in motion planning. However, we found that the expressive power of the relational graph is limited in the experiment. The relational graph only partly describes the relationship between the robot and obstacles, by establishing relationships among all agents without precisely focusing on the relationship between the robot and obstacles. This may result in limited expressive power of the interpreted environmental state. 2) The expressive power of the interpreted environmental state is improved by replacing the relational graph with the _attention weight_ (AW) [15], which precisely and explicitly analyzes and describes the relationship between the robot and obstacles. This results in the _attention weight based DSAC_ (AW-DSAC), which outperformed RG-DSAC in early-stage training but suffered from overfitting. 3) After analysis, we concluded that the _feature loss_ and the _pooling method_ in the attention network may cause the overfitting. Hence, we optimized the attention network by integrating the skip connection method and LSTM pooling into the architecture of the attention network, resulting in the _skip connection for attention-based DSAC_ (SA-DSAC) and LSA-DSAC. SA-DSAC _mitigated_ the problem of overfitting in training in the case of fewer dynamic obstacles. LSA-DSAC _eliminated_ overfitting by slightly sacrificing the convergence speed at early-stage training. Overall, the workflow of our motion planning task is shown in Fig. 1. The main contributions of this paper include 1) the implementation of RG-DSAC and AW-DSAC, 2) LSA-DSAC, which is the optimized version of AW-DSAC obtained by integrating the skip connection method and LSTM pooling into the architecture of the attention network of AW-DSAC, 3) extensive evaluations of our algorithms against the state-of-the-art, and 4) the physical implementation and testing of the robot in the real world.

Fig. 1: Workflow of our LSA-DSAC. The training data is collected from the circle-crossing simulator. The relational graph based DSAC (RG-DSAC) is implemented and selected as the trainable baseline algorithm for comparisons. The relational graph is then replaced by the attention weight (or attention network), resulting in the attention weight based DSAC (AW-DSAC). The skip connection is applied to the attention network to improve the convergence, resulting in the skip connection for attention weight based DSAC (SA-DSAC). Finally, LSTM is applied to SA-DSAC to further improve the convergence, resulting in the LSTM pooling and skip connection for attention weight based DSAC (LSA-DSAC).
The training curves demonstrate the good convergence of LSA-DSAC compared with the other algorithms. This paper also includes the physical implementation, which demonstrates how to transplant our algorithm into the real world. Our test code is available at [https://github.com/CHUENGMINCHOU/LSA-DSAC](https://github.com/CHUENGMINCHOU/LSA-DSAC).

This paper is arranged as follows: Section II presents the state-of-the-art, the problem formulation and the preliminaries of RL and DSAC. Section III presents RG-DSAC, AW-DSAC, SA-DSAC and LSA-DSAC. Section IV presents the network framework of LSA-DSAC, the model training, the model evaluations and the physical implementation.

## II Research Background

This section first presents the state-of-the-art for dynamic robotic motion planning tasks. The state-of-the-art includes the classical algorithm ORCA and the trainable algorithms CADRL, LSTM-RL, LSTM-A2C/A3C, PPO with multiple robots, SARL, and RG-DQN. Then, the problem formulation of the motion planning task is given by mathematical descriptions. Finally, the preliminaries of RL and DSAC are presented. They are fundamental concepts for the subsequent algorithm implementations and optimizations.

### _State-of-the-art for dynamic motion planning_

This part summarizes the state-of-the-art of motion planning algorithms, including ORCA [8], CADRL [27], LSTM-RL [15][10], LSTM-A2C/A3C [11][29][28], PPO with multiple robots [30][26], SARL [15], and RG-DQN [13]. The reaction-based ORCA relies on the positions and velocities of robots and obstacles to compute possible robot velocities. CADRL is based on DQN, which learns pairwise features of the robot and one obstacle. The trained model is then applied to multiple-obstacle cases. LSTM-RL and LSTM-A2C/A3C are based on DQN and A2C/A3C to learn obstacle features that are pooled into hidden features by LSTM. PPO with multiple robots is based on CNN and PPO to learn from the entire source environmental state, which includes the features of the robot and obstacles and potential background noise. SARL is based on DQN, where the attention network pools the pairwise robot-obstacle features into attention features (attention weight). RG-DQN is also based on DQN, where the relation matrix and the message passing process interpret source features into graph features. We implemented ORCA, CADRL, LSTM-RL, LSTM-A2C, and SARL as baseline algorithms for comparisons.

### _Problem formulation_

All algorithms in this paper are trained and tested in the simulators (Fig. 2) provided by ORCA [8]. The simulators include the _circle-crossing_ and _square-crossing_ simulators, which add _predictable complexity_ to the motion planning tasks. Let \(a\) and \(v\) represent the action and velocity of the robot, where \(a=v=\left[v_{x},v_{y}\right]\). Let \(p=\left[p_{x},p_{y}\right]\) represent the robot position. Let \(s_{t}\) represent the robot state at time step \(t\). \(s_{t}\) consists of observable and hidden parts \(s_{t}=\left[s_{t}^{obs},s_{t}^{h}\right]\), \(s_{t}\in R^{9}\). The observable part refers to factors that can be measured or observed by others. It consists of the position, velocity, and radius \(s^{obs}=\left[p_{x},p_{y},v_{x},v_{y},r\right],~{}s^{obs}\in R^{5}\). The hidden part refers to factors that cannot be seen by others. It consists of the planned goal position, preferred speed and heading angle \(s^{h}=\left[p_{gx},p_{gy},v_{pref},\theta\right],s^{h}\in R^{4}\). The state, position, and radius of the obstacles are described by \(\hat{s}\), \(\hat{p}\) and \(\hat{r}\), respectively. We first introduce the one-robot one-obstacle case, and then the one-robot multi-obstacle case.
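To make this state layout concrete, the following minimal sketch (our own NumPy illustration, not the authors' code; all names are assumptions) assembles the 9-dimensional robot state from its observable and hidden parts:

```python
import numpy as np

def robot_state(p, v, r, p_g, v_pref, theta):
    """Full robot state s = [s_obs, s_h] in R^9.

    Observable part s_obs = [p_x, p_y, v_x, v_y, r];
    hidden part s_h = [p_gx, p_gy, v_pref, theta].
    """
    s_obs = np.array([p[0], p[1], v[0], v[1], r])     # visible to other agents
    s_h = np.array([p_g[0], p_g[1], v_pref, theta])   # known only to the robot
    return np.concatenate([s_obs, s_h])               # shape (9,)

s = robot_state(p=(0.0, -4.0), v=(0.0, 0.0), r=0.3,
                p_g=(0.0, 4.0), v_pref=1.0, theta=np.pi / 2)
assert s.shape == (9,)
```

An obstacle exposes only the 5-dimensional observable part to the robot, which is why the pairwise robot-obstacle features used later are 14-dimensional.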
The robot plans its motion by the policy \(\pi\colon(s_{0:t},\hat{s}^{obs}_{0:t})\to a_{t}\), where \(s_{0:t}\) and \(\hat{s}^{obs}_{0:t}\) are the robot states and observable obstacle states from time step \(0\) to time step \(t\), while the obstacles plan their motions by \(\hat{\pi}\colon(\hat{s}_{0:t},s^{obs}_{0:t})\to\hat{a}_{t}\), where \(\hat{s}_{0:t}\) and \(s^{obs}_{0:t}\) are the obstacle states and observable robot states from time step \(0\) to time step \(t\). The robot's objective is to minimize the expectation (average) of the time to its goal \(E\left[t_{g}\right]\) (1) under the policy \(\pi\) without collisions with the obstacles. The constraints of the robot's motion planning can be formulated via (2-5), which represent the _collision avoidance constraint_, _goal constraint_, _kinematics of the robot_ and _kinematics of the obstacle_, respectively. The collision avoidance constraint denotes that the distance between the robot and obstacles \(\left\|p_{t}-\hat{p}_{t}\right\|_{2}\) should be greater than or equal to the radius sum of the robot and obstacles \(r+\hat{r}\). The goal constraint denotes that the position of the robot \(p_{t_{g}}\) should be equal to the goal position \(p_{g}\) when the robot reaches its goal. The kinematics of the robot denotes that the robot position \(p_{t}\) is equal to the sum of the robot position \(p_{t-1}\) and the change of the robot position \(\Delta t\cdot\pi\colon(s_{0:t},\hat{s}^{obs}_{0:t})\). The robot policy \(\pi\colon(s_{0:t},\hat{s}^{obs}_{0:t})\) is a velocity decided by learning from historical robot states and obstacle states. The kinematics of the obstacle is the same as that of the robot. \(\hat{\pi}\colon(\hat{s}_{0:t},s^{obs}_{0:t})\) is a velocity decided by the obstacle policy \(\hat{\pi}\), like ORCA. \[\text{minimize}~{}E\left[t_{g}|s_{0:t},\hat{s}^{obs}_{0:t},\pi,\hat{\pi}\right]~{}s.t. \tag{1}\] \[\left\|p_{t}-\hat{p}_{t}\right\|_{2}\geq r+\hat{r}~{}\forall t \tag{2}\] \[p_{t_{g}}=p_{g} \tag{3}\] \[p_{t}=p_{t-1}+\Delta t\cdot\pi\colon(s_{0:t},\hat{s}^{obs}_{0:t}) \tag{4}\] \[\hat{p}_{t}=\hat{p}_{t-1}+\Delta t\cdot\hat{\pi}\colon(\hat{s}_{0:t},s^{obs}_{0:t}) \tag{5}\] In the one-robot \(N\)-obstacle case, the objective is replaced by \(\text{minimize}~{}E\left[t_{g}|s_{0:t},\{\hat{s}^{obs}_{0,0:t},\ldots,\hat{s}^{obs}_{N-1,0:t}\},\pi,\hat{\pi}\right]\), where we assume that the obstacles use the same policy \(\hat{\pi}\).

Fig. 2: Circle-crossing and square-crossing simulators. Obstacles are randomly generated near the brink of the circle in a circle-crossing environment. Then they move toward their opposite side. In the square-crossing environment, obstacles are randomly generated on the left side or right side and then they move toward random positions on their opposite side.

The collision avoidance constraint is replaced by \[\left\{\begin{array}{l}\|p_{t}-\hat{p}_{0,t}\|_{2}\geq r+\hat{r}\\ \|p_{t}-\hat{p}_{1,t}\|_{2}\geq r+\hat{r}\\ \ldots\\ \|p_{t}-\hat{p}_{N-1,t}\|_{2}\geq r+\hat{r}\end{array}\right.\forall t \tag{6}\]
Kinematics of the obstacles is replaced by \[\left\{\begin{array}{l}\hat{p}_{0:t}=\hat{p}_{0:t-1}+\Delta t\cdot\hat{r}\\ \hat{p}_{1:t}=\hat{p}_{1:t-1}+\Delta t\cdot\hat{r}\\ \hat{p}_{N-1:t}=\hat{p}_{N-1:t-1}+\Delta t\cdot\hat{r}\end{array}\right. \tag{7}\] _Preliminary_. _Markov decision process_ (MDP) is sequential decision process based on Markov Chain [31]. Markov Chain is defined by a variable set \(\textbf{{X}}=\{X_{n}\colon n>0\}\) where the probability \(p(X_{t+1}|X_{t},...,X_{1})=p(X_{t+1}|X_{t})\). This means the state and action of the next step only depend on the state and action of the current step. MDP is described as a tuple \(<S,A,P,R>\). \(S\) denotes the state and here it refers to the state of robot and obstacles. \(A\) denotes an action taken by the robot. Action \(A=[\theta,v]\) is selected from _action space_ where directions \(\theta\in\{0,\frac{\pi}{g},...2\pi\}\) and Speed of each direction \(v\in\{0.2,0.4,..1\}\). Hence, action space consists of 81 actions including a stop action. \(P\) denotes the possibility to transit from one state to the next state. \(R\) denotes the reward or punishment received by the robot after executing actions. The reward function in this paper is defined by \[R(s,a)=\left\{\begin{array}{cl}1&\text{if }p_{current}=p_{g}\\ -0.1+\frac{d_{min}}{2}&\text{if }0<d_{min}<0.2\\ -0.25&\text{if }d_{min}<0\\ \frac{d_{start,to\,goal}(-p_{g}-p_{current})}{d_{start,to,goal}}\cdot 0.5& \text{if }t=t_{max}\text{ and }\\ &p_{t}\neq p_{g}\\ 0&\text{otherwise}\end{array}\right. \tag{8}\] where \(p_{current}\) denotes the position of the robot currently. \(p_{g}\) denotes the position of the goal. \(d_{min}\) denotes the minimum distance of the robot and obstacles during motion planning process. \(d_{start,to\,goal}\) denotes the distance of the start to the goal. \(t_{max}\) is the allowed maximum time for any episode of the motion planning. Our reward function (8) is modified from [15] which cannot work without the imitation learning. (8) accelerates convergence speed by attaching a reward to _the final position of the robot_. This encourages the robot to approach the goal. Other crucial terms of RL include the _value_, _policy_, _value function_, and _policy function_. Value denotes _how good one state is or how good one action is in one state_. The value consists of the _state value_ (\(V\) value) and _state-action value_ (\(Q\) value). Value is defined by the expectation of accumulators rewards \(V(s)=\mathbb{E}[R_{t+1}+\gamma R_{t+1}+\cdots+\gamma^{T-1}R_{T}|s_{t}]\) or \(Q(s,a)=\mathbb{E}[R_{t+1}+\gamma R_{t+1}+\cdots+\gamma^{T-1}R_{T}|(s_{t},a_{t})]\) where \(\gamma\) is a discounted factor. The policy denotes the way to select actions. In function approximation case, policy is represented by the neural network. Value function in deep RL scope is represented by neural networks to estimate the value of environmental state via the function approximation [32]. Policy function is also represented by neural networks. Actions are selected by indirect way (e.g., \(a\gets argmax_{a}R(s,a)+Q(s,a;\theta\) ) in DQN [10][33]) or direct way (e.g., \(\pi_{\theta}:s\to a\) in the actor-critic algorithm [34]). **Discrete soft actor critic.** The policy of classical RL algorithm is obtained by maximizing the objective \(\sum_{t=0}^{T}\mathbb{E}_{(s_{t},a_{t})\sim p_{\pi}}[r(s_{t},a_{t})]\). 
**Discrete soft actor critic.** The policy of a classical RL algorithm is obtained by maximizing the objective \(\sum_{t=0}^{T}\mathbb{E}_{(s_{t},a_{t})\sim p_{\pi}}[r(s_{t},a_{t})]\). The objective of SAC is instead defined by the maximum entropy objective that considers the reward and entropy simultaneously \[J(\pi)=\sum_{t=0}^{T}\mathbb{E}_{(s_{t},a_{t})\sim p_{\pi}}[r(s_{t},a_{t})+\alpha \mathcal{H}\big{(}\pi(\cdot|s_{t})\big{)}],\quad\mathcal{H}\big{(}\pi(\cdot|s_{t}) \big{)}=-\log\pi(\cdot|s_{t}) \tag{9}\] where \(\mathcal{H}\big{(}\pi(\cdot|s_{t})\big{)}\) denotes the entropy and \(\alpha\) is the temperature parameter. In objective maximization, the SAC policy provably converges to the optimal policy by the soft policy iteration, which consists of _policy evaluation_ and _policy improvement_. The optimal policy is obtained by repeated application of policy evaluation and policy improvement. Policy evaluation [24] proves that if \(Q^{k+1}=\mathcal{T}^{\pi}(Q^{k})\), then \(Q^{k}\) converges to the soft Q value of \(\pi\) as \(k\rightarrow\infty\). \(\mathcal{T}^{\pi}(\cdot)\) is a modified Bellman backup operator given by \[\mathcal{T}^{\pi}(Q)(s_{t},a_{t})\triangleq r(s_{t},a_{t})+\gamma\mathbb{E}_{s _{t+1}\sim p}[V(s_{t+1})] \tag{10}\] where \[V(s_{t})=\mathbb{E}_{a_{t}\sim\pi}[Q(s_{t},a_{t})-\log\pi(a_{t}|s_{t})]. \tag{11}\] Applying \(\mathcal{T}^{\pi}(\cdot)\) to the Q value brings the Q value _closer_ to \(Q^{\pi}\). This means \(Q(s_{t},a_{t})\leq\mathcal{T}^{\pi}(Q)(s_{t},a_{t})\leq Q^{\pi}(s_{t},a_{t})\). Policy improvement [24] proves that \(Q^{\pi_{new}}\geq Q^{\pi_{old}}\) in objective maximization. \(\pi_{new}\) is defined by \[\pi_{new}=\arg\min_{\pi^{\prime}\in\Pi}D_{KL}\left(\pi^{\prime}(\cdot|s_{t})\, \middle\|\,\frac{\exp(Q^{\pi_{old}}(s_{t},\cdot))}{Z^{\pi_{old}}(s_{t})}\right) \tag{12}\] where \(Z^{\pi_{old}}(s_{t})\) is the partition function for distribution normalization. It can be ignored because it does not contribute to the gradient of the new policy. \(Q^{\pi_{old}}\) guides the policy update to ensure an improved new policy. The new policy is constrained to a parameterized family of distributions \(\pi^{\prime}\in\Pi\), like Gaussians, to ensure a tractable and optimal new policy. Given the repeated application of policy evaluation and improvement, the policy \(\pi\) eventually converges to the optimal policy \(\pi^{*}\), with \(Q^{\pi^{*}}\geq Q^{\pi}\) for all \(\pi\in\Pi\). SAC is the combination of _soft policy iteration_ and _function approximation_. In (9), the temperature \(\alpha\) is either a fixed value or an adaptive value. In function approximation, networks \(\theta\) and \(\phi\) are used to approximate the action value and policy value. The action value objective and its gradient are obtained by \[\left\{\begin{array}{l}J(\theta)=\mathbb{E}_{(s_{t},a_{t})\sim p_{\pi}}\left[ \frac{1}{2}\big{(}Q(s_{t},a_{t};\theta)-\bar{Q}(s_{t},a_{t})\big{)}^{2}\right]\\ \bar{Q}(s_{t},a_{t})=r(s_{t},a_{t})+\gamma\mathbb{E}_{s_{t+1}\sim p}[V(s_{t+1}; \bar{\theta})]\\ \nabla_{\theta}J(\theta)=\nabla_{\theta}Q(s_{t},a_{t};\theta)\cdot\big{(}Q(s_{t},a_{t};\theta)-r(s_{t},a_{t})-\gamma V(s_{t+1};\bar{\theta})\big{)}\\ V(s_{t+1};\bar{\theta})=Q(s_{t+1},a_{t+1};\bar{\theta})-\alpha\log\pi_{\phi}(a_{t+1}|s_{t+1})\end{array}\right. \tag{13}\] where the state value is approximated by \(V(s_{t+1};\bar{\theta})\), \(\bar{\theta}\) is the target action value network, and \(\gamma\) is a discount factor.
The policy objective and its gradient are obtained by \[\left\{\begin{array}{l}J(\phi)=\mathbb{E}_{s_{t}\sim\mathcal{D}}\left[D_{KL}\left( \pi_{\phi}(\cdot\mid s_{t})\,\middle\|\,\frac{\exp(Q(s_{t},\cdot\,;\theta))}{Z_{ \theta}(s_{t})}\right)\right]=\mathbb{E}_{s_{t}\sim\mathcal{D}}[\mathbb{E}_{a_{t}\sim\pi_{\phi}}[\alpha\log \pi_{\phi}(a_{t}|s_{t})-Q(s_{t},a_{t};\theta)]]\\ \nabla_{\phi}J(\phi)=\nabla_{\phi}\alpha\log\pi_{\phi}(a_{t}|s_{t})+\nabla_{\phi}f_{\phi}(\epsilon_{t};s_{t})\cdot(\nabla_{a_{t}}\alpha\log\pi_{\phi}(a_{t}|s_{t})-\nabla_{a_{t}}Q(s_{t},a_{t}))\\ a_{t}=f_{\phi}(\epsilon_{t};s_{t})\end{array}\right. \tag{14}\] where \(f_{\phi}(\epsilon_{t};s_{t})\) is the network transformation and \(\epsilon_{t}\) is an input noise vector sampled from a fixed distribution like a spherical Gaussian. The temperature objective is defined by \[J(\alpha)=\mathbb{E}_{a_{t}\sim\pi_{t}}[-\alpha\log\pi_{t}\left(a_{t}|s_{t} \right)-\alpha\bar{\mathcal{H}}] \tag{15}\] where \(\bar{\mathcal{H}}\) is the target entropy. The temperature objective gradient is obtained by approximating dual gradient descent [35]. Eventually, the networks and temperature are updated by \[\left\{\begin{array}{l}\theta\leftarrow\theta-\gamma_{\theta}\nabla_{\theta }J(\theta)\\ \phi\leftarrow\phi-\gamma_{\phi}\nabla_{\phi}J(\phi)\\ \alpha\leftarrow\alpha-\gamma_{\alpha}\nabla_{\alpha}J(\alpha)\\ \bar{\theta}\leftarrow\tau\theta+(1-\tau)\bar{\theta}\end{array}\right. \tag{16}\]

SAC is used in tasks with a continuous action space. However, the action space in this paper is discrete. Hence, SAC should be modified to suit our task, and some modifications [25] should be made. They are summarized as follows: 1) The \(Q\) function should be moved from \(Q\colon S\times A\rightarrow\mathbb{R}\) to \[Q\colon S\rightarrow\mathbb{R}^{|A|}. \tag{17}\] This means the \(Q\) values of all possible actions should be output, instead of a single \(Q\) value of the action taken by the robot. 2) The output policy should be the action distribution \[\pi\colon S\rightarrow[0,1]^{|A|} \tag{18}\] instead of the _mean_ and _covariance_ of the action distribution of SAC, \(\pi\colon S\rightarrow\mathbb{R}^{2|A|}\). 3) In the temperature objective (15), the expectation \(\mathbb{E}_{a_{t}\sim\pi_{t}}[\cdot]\) is obtained by Monte-Carlo estimation, which involves taking an expectation over the action distribution [25]. In the discrete action space case, the expectation can be calculated directly, instead of by Monte-Carlo estimation. Hence, the temperature objective changes into \[J(\alpha)=\pi_{t}(s_{t})^{T}[-\alpha\log\pi_{t}\left(s_{t}\right)-\alpha\bar{\mathcal{H}}] \tag{19}\] where \(\bar{\mathcal{H}}\) is the target entropy. Similarly, the policy objective changes into \[J(\phi)=\mathbb{E}_{s_{t}\sim\mathcal{D}}[\pi(s_{t})^{T}[\alpha\log\pi_{\phi}( s_{t})-Q(s_{t};\theta)]] \tag{20}\]
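A minimal PyTorch-style sketch of the discrete-SAC objectives (17)-(20) is given below. The critics output \(|A|\) Q-values per state, the policy outputs a full action distribution, and the temperature loss uses the exact expectation instead of a Monte-Carlo estimate. The function name and tensor shapes are our own assumptions; `alpha` is assumed to be a positive tensor (e.g., the exponential of a learnable log-temperature):

```python
import torch
import torch.nn.functional as F

def discrete_sac_losses(q1, q2, logits, alpha, target_entropy):
    """q1, q2: (B, |A|) Q-values from the twin critics; logits: (B, |A|) policy logits."""
    probs = F.softmax(logits, dim=-1)        # pi: S -> [0,1]^{|A|}, Eq. (18)
    log_probs = F.log_softmax(logits, dim=-1)
    q_min = torch.min(q1, q2)                # clipped double-Q estimate
    # Policy objective, Eq. (20): E_s[ pi(s)^T (alpha * log pi(s) - Q(s)) ]
    policy_loss = (probs * (alpha * log_probs - q_min)).sum(dim=-1).mean()
    # Temperature objective, Eq. (19), with the expectation computed exactly
    alpha_loss = (probs.detach() *
                  (-alpha * log_probs.detach() - alpha * target_entropy)
                  ).sum(dim=-1).mean()
    return policy_loss, alpha_loss
```

With, for example, `q1 = q2 = torch.randn(32, 81)` and `logits = torch.randn(32, 81)` for the 81-action space above, both returned losses are scalars ready for back-propagation.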
## III Method

This section presents the implementations and optimizations of our motion planning algorithms. We first present the implementation of the relational graph based DSAC. Then, the relational graph based DSAC is improved by introducing the attention weight based DSAC. Finally, the attention weight based DSAC is further improved by integrating the skip connection method and LSTM pooling into the architecture of the attention network of the attention weight based DSAC.

### _Relational graph based DSAC (RG-DSAC)_

The mechanism of the relational graph [13] is shown in Fig. 3a. The source input collected from the environment consists of the source robot feature \(s_{r}\) and the \(N\) obstacle features \((o_{i},i\in 1,2,..N)\). These features are in different dimensions. They should be formatted to the same dimension to accommodate the input requirement of the graph by \[s_{r,emb\_1}=MLP(s_{r}) \tag{21}\] \[o_{i,emb\_1}=MLP(o_{i}) \tag{22}\] This is achieved by the _multi-layer perceptron_ (MLP). \(s_{r,emb\_1}\) and \(o_{i,emb\_1}\) denote the embeddings of the robot and the \(i\)-th obstacle feature. All obstacle features are concatenated to form the embedded obstacle state \(s_{o,emb\_1}\). The embedded robot state and embedded obstacle state are concatenated to form the initial feature matrix (environmental representation) \(X\) by \[X=concat(s_{r,emb\_1},s_{o,emb\_1}) \tag{23}\]

Fig. 3: Mechanisms of the relational graph (a), attention weight (b), skip connection method for attention weight (c), and LSTM pooling and skip connection method for attention weight (LSA) (d). In the LSA encoder, the skip connection is for reducing the feature loss, while LSTM replaces the sum operator of the attention network to make the final interpreted features (environmental state) injective.

The first row of \(X\) (\(X[0,:]\)) denotes the robot feature and the remaining rows (\(X[i,:],i\in 1,2,...N\)) denote all obstacle features. Given the feature matrix \(X\), the relation matrix \(A\) that represents the relationship of the _robot-obstacle and obstacle-obstacle_ pairs is computed by a _similarity function_. This is achieved by concatenating each feature \(X[i,:],i\in 0,1,..N\) with all features \(X[j,:],j\in 0,1,..N\) recursively to form the _pairwise features of the robot-obstacle and obstacle-obstacle_ pairs. Then, the relation feature \(A[i,:]\) is obtained by an MLP that maps these pairwise features to a fixed dimension (the same dimension as that of \(X[i,:]\)) via \[A[i,:]=MLP\big{(}concat(X[i,:],X[:,:])\big{)},\ i\in 0,1,..N \tag{24}\] Given the feature matrix and relation matrix, the interaction feature matrix \(H\) is obtained by the _message passing rule_ from the _graph convolutional network_ (GCN) via \[H^{l+1}=\sigma(AH^{l}W^{l})+H^{l},\quad H^{0}=X,\ l\in 0,1,\ldots \tag{25}\] where \(\sigma\) is the activation function, \(W^{l}\) is a layer-specific trainable weight, and \(l\) denotes the index of the neural network layer. The _difference_ between the initial feature matrix \(X\) and the interaction feature matrix \(H\) is that \(H\) includes both initial features and relation features, while \(X\) only includes initial features. The interaction feature matrix \(H^{l+1}\) outperforms the initial feature matrix \(X\) in _expressive power_. This is achieved via the relation matrix \(A\) and the message passing, and it is shown by the training and evaluation performances in the simulation and real-world motion planning. Moreover, the expressive power of the interaction feature can be further improved by LSTM pooling that maps the interaction features of the obstacles \(H^{l+1}[1:,:]\) to sequential hidden features. Hence, the final output of the relational graph (environmental state) that feeds DSAC consists of the interaction feature of the robot and the obtained sequential hidden features via \[S^{rg}=[H^{l+1}[0,:],LSTM(H^{l+1}[1:,:])] \tag{26}\]
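A toy PyTorch sketch of the residual message-passing rule (25) follows; the layer sizes and the softmax-normalized stand-in relation matrix are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class RelationalGraphLayer(nn.Module):
    """One step of H^{l+1} = sigma(A H^l W^l) + H^l from Eq. (25)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)   # layer-specific weight W^l

    def forward(self, h, a):
        # a: (N+1, N+1) relation matrix, h: (N+1, dim) feature matrix
        return torch.relu(a @ self.w(h)) + h       # residual keeps source features

dim, n_agents = 32, 6                              # robot + 5 obstacles (assumed sizes)
h = torch.randn(n_agents, dim)                     # initial feature matrix X
a = torch.softmax(torch.randn(n_agents, n_agents), dim=-1)  # stand-in for Eq. (24)
layer = RelationalGraphLayer(dim)
h = layer(layer(h, a), a)                          # l = 2 layers, as in the experiments
```

The residual term `+ h` is what lets the interaction features retain the initial features alongside the relation information.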
### _Attention weight based DSAC (AW-DSAC)_

Given the relational graph, it is obvious that the relation matrix \(A\) plays an essential role in improving the expressive power of the output features. The relation matrix includes the robot-obstacle relation and the obstacle-obstacle relation. Now, let us recall our task: robotic motion planning among dense obstacles. It is easy to see that the robot-obstacle relation matters in our task. However, the obstacle-obstacle relation does not show much _direct importance_ in this task for generating features with high expressive power, although it has _marginal importance_ for predicting future obstacle trajectories that slightly improve the motion planning performance [13]. To further improve the motion planning performance, much attention should be paid to making the best of the robot-obstacle relation. Moreover, the importance of the obstacles also varies at different time steps. The importance is shown by the robot speed, the moving directions, and the distance between the robot and the obstacle. The recent attention weight mechanism [15] focuses on the pairwise robot-obstacle feature. It computes an attention score that weighs the importance of dynamic obstacles and makes the expressive power of the interpreted environmental state interpretable. Hence, we apply the attention weight to replace the relational graph for high and interpretable expressive power of the output features. As with the relational graph, in the attention weight case (Fig. 3b), the environmental state to feed DSAC \(S^{aw}\) is defined by the feature combination of the robot and obstacles via \[S^{aw}=[s_{r},S^{aw}_{o}] \tag{27}\] where \(S^{aw}_{o}\) denotes the weighted obstacle feature, and it is defined by \[S^{aw}_{o}=\sum_{i=1}^{N}[softmax(\alpha_{i})]\cdot h_{i} \tag{28}\] where \(\alpha_{i}\) and \(h_{i}\) denote the _attention score_ and the _interaction feature_ of the robot and the obstacle \(o_{i}\), respectively. The interaction feature is a high-level feature that better outlines a robot-obstacle relation, compared to the shallow feature \(e_{i}\). The interaction feature is defined by \[h_{i}=f_{h}(e_{i};w_{h}) \tag{29}\] where \(f_{h}(\cdot)\) and \(w_{h}\) denote the MLP and its weight. \(e_{i}\) denotes the _embedded shallow feature_ obtained from the pairwise robot-obstacle feature \([s_{r},o_{i}]\). The attention score is defined by \[\alpha_{i}=f_{\alpha}(e_{i};w_{a}) \tag{30}\] where \(f_{\alpha}(\cdot)\) and \(w_{a}\) denote the MLP and its weight. The embedded shallow feature is defined by \[e_{i}=f_{e}([s_{r},o_{i}];w_{e}),\ i\in 1,2,..N \tag{31}\] where \(f_{e}(\cdot)\) and \(w_{e}\) denote the MLP and its weight.

### _Skip connection for attention weight based DSAC (SA-DSAC)_

Recent progress in supervised DL [36][37][38] unveils that low-level (shallow) and high-level (deep) features play different roles in the learning of neural networks. The low-level feature provides more details of the source environmental state, while the high-level feature outlines an overall structure of the source environmental feature. Both low-level and high-level features contribute to the expressive power. Obviously, the attention weight mechanism only includes the high-level feature to form the interaction feature \(h_{i}\), given the mechanism of the attention weight in Fig. 3b. This causes a loss of the details in the environmental state, and low expressive power of the final feature \(S^{aw}\) follows. To improve the expressive power of the environmental state interpreted by the attention weight, we introduce SA-DSAC, which integrates the skip connection method (Fig. 3c) into the architecture of the attention network for generating an optimized interaction feature by \[h_{i}=f_{h}(concat(e_{i},[s_{r},o_{i}]);w_{h}) \tag{32}\] where \(f_{h}(\cdot)\) and \(w_{h}\) denote the MLP and its weight, and \(e_{i}\) denotes the _embedded shallow feature_ obtained from the pairwise robot-obstacle feature \([s_{r},o_{i}]\) as in (31), as shown in the sketch below.
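The following minimal PyTorch module is our own sketch of the attention pooling (28)-(31) with the skip connection (32); the layer widths are assumptions (the 14-dimensional pairs correspond to the 9-dimensional robot state plus a 5-dimensional observable obstacle state):

```python
import torch
import torch.nn as nn

class SkipAttentionPool(nn.Module):
    """Attention pooling of Eqs. (28)-(31), with the skip connection of Eq. (32)."""
    def __init__(self, pair_dim, emb_dim, out_dim):
        super().__init__()
        self.f_e = nn.Sequential(nn.Linear(pair_dim, emb_dim), nn.ReLU())  # Eq. (31)
        self.f_a = nn.Linear(emb_dim, 1)                                   # Eq. (30)
        self.f_h = nn.Sequential(nn.Linear(emb_dim + pair_dim, out_dim),
                                 nn.ReLU())                                # Eq. (32)

    def forward(self, pairs):
        # pairs: (N, pair_dim), each row is the pairwise feature [s_r, o_i]
        e = self.f_e(pairs)                              # embedded shallow features
        scores = torch.softmax(self.f_a(e), dim=0)       # attention over obstacles
        h = self.f_h(torch.cat([e, pairs], dim=-1))      # skip: reuse [s_r, o_i]
        return (scores * h).sum(dim=0)                   # Eq. (28), sum pooling

pool = SkipAttentionPool(pair_dim=14, emb_dim=64, out_dim=64)
s_o = pool(torch.randn(5, 14))   # weighted obstacle feature for 5 obstacles
```

The `torch.cat([e, pairs], ...)` line is the skip connection: the raw pairwise feature bypasses the embedding MLP and re-enters the interaction MLP, reducing the feature loss.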
### _LSTM pooling and skip connection for attention weight based DSAC (LSA-DSAC)_

Given the attention weight mechanism, we can notice that the weighted obstacle features \(S^{aw}_{o}\) are pooled by summing all weighted interaction features. Recent research [39] unveils the high performance of the sum operation over the _mean_ and _max_ operations in pooling features for generating new features with high expressive power. However, this does not mean that the sum operation is absolutely _injective_ [39]. The more injective the feature is, the more distinguishable the feature is against other features. Hence, high injectivity of a feature means high expressive power [39]. The sum operation just outlines an overall structure of the pooled features, and some pooled features based on the sum operation lack injectivity or are indistinguishable. For instance, _sum_(3,1) = _sum_(2,2) = 4: the features [3,1] and [2,2] are equal statistically, but they are obviously different features. We think that _keeping some source features in the feature pooling process_ contributes to the injectivity of the pooled features. LSTM pooling is expected to be a good solution to achieve this goal, where the source features are simply mapped into sequential hidden features. In this process, the structural information and a part of the feature details of the source features are kept, instead of just keeping the statistical property of the source features via the sum operation. Hence, we introduce LSA-DSAC, which takes LSTM to replace the sum operation in the pooling of the weighted obstacle feature \(S_{o}^{aw}\). LSTM maps the weighted interaction features \(softmax(\alpha_{i})\cdot h_{i}\) to sequential features (Fig. 3d). This better preserves the feature of each weighted interaction feature. This is achieved by \[S_{o}^{aw}=LSTM[softmax(\alpha_{i})\cdot h_{i}],\ i\in 1,2,..N \tag{33}\] Once the environmental state \(S^{aw}=[s_{r},S_{o}^{aw}]\) generated by LSA is well prepared, it feeds DSAC to generate trained models for the motion planning of the robot. In the implementation, separate attention networks are taken to form the _critic_ and the _policy_ of DSAC by \[critic=[\theta_{att\_c},\theta_{c1},\theta_{c2}] \tag{34}\] \[policy=[\theta_{att\_p},\theta_{p}] \tag{35}\] where the double-network architecture (networks \(\theta_{c1},\theta_{c2}\)) is taken in the critic to reduce the overestimation of the Q value, while the policy just has a single network \(\theta_{p}\) for prediction. The attention network connects with the prediction network to form the critic or policy of DSAC (Alg. 1). The training process of LSA-DSAC is shown in Fig. 4. The episodic data \(<s,a,r,s^{\prime}>\) of each time step is obtained by performing the policy of DSAC \[<s_{t},a_{t},r_{t},s_{t+1}>\sim\pi(a_{t}|s_{t};[\theta_{att\_p},\theta_{p}]) \tag{36}\] The episodic data is stored in the replay buffer \(\mathcal{D}\) at the end of each episode by \[\mathcal{D}\leftarrow\mathcal{D}\cup\mathcal{E},\quad\mathcal{E}=\mathcal{E}\cup<s_{t},a_{t},r_{t},s_{t+1}> \tag{37}\] The networks are trained in each step of an episode (Alg. 2).
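A minimal sketch of the LSTM pooling in (33), again under our own naming and sizing assumptions, reads:

```python
import torch
import torch.nn as nn

class LSTMPool(nn.Module):
    """LSTM pooling of Eq. (33): the final hidden state replaces the sum operator."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def forward(self, weighted_h):
        # weighted_h: (N, feat_dim), rows are softmax(alpha_i) * h_i
        _, (h_n, _) = self.lstm(weighted_h.unsqueeze(0))   # add batch dimension
        return h_n.squeeze()                               # S_o^{aw}, shape (hidden_dim,)

pool = LSTMPool(feat_dim=64, hidden_dim=64)
s_o_aw = pool(torch.randn(5, 64))   # sequential pooling keeps per-obstacle structure
```

Unlike the sum, the LSTM hidden state depends on the individual inputs and their order, so, for example, the sequences [3,1] and [2,2] map to different outputs, which is the injectivity argument above.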
In the forward propagation process, the critic loss \(\mathcal{L}oss_{Q}\) and policy loss \(\mathcal{L}oss_{p}\) are obtained by \[\mathcal{L}oss_{Q}=MSE\big{(}Q(a|s)_{c1},Q_{next\_dis}\big{)}+MSE(Q(a|s)_{c2}, Q_{next\_dis}) \tag{38}\] \[\mathcal{L}oss_{p}=-mean[\mathbb{E}_{s\sim p}[Q(s)]+\alpha\cdot\mathcal{H}\big{(} \pi(\cdot\mid s)\big{)}] \tag{39}\] where \(Q_{next\_dis}\) and \(Q(a|s)_{ci},i\in\{1,2\}\) denote the discounted next state value and the current action values, respectively. \(\mathbb{E}_{s\sim p}[Q(s)]\) and \(\mathcal{H}\big{(}\pi(\cdot\mid s)\big{)}\) denote the expectation of the current state value and the current policy entropy, respectively. \(\alpha\) denotes the temperature parameter.

**Compute discounted next state value.** \(Q_{next\_dis}\) is computed by \[Q_{next\_dis}=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})] \tag{40}\] where \(r(s,a)\) denotes the reward from the environment after executing action \(a\) in state \(s\), \(\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]\) denotes the expectation of the next state value, and \(\gamma\) denotes a discount factor. \(\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]\) is computed by \[\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]=\sum p^{\prime}\cdot[\min(Q(s^{ \prime})_{c1},Q(s^{\prime})_{c2})-\alpha\cdot\log p^{\prime}] \tag{41}\] where \(Q(s^{\prime})_{c1}\) and \(Q(s^{\prime})_{c2}\) denote the next state values computed by the target critic via the algorithm _Forward-propagation-critic_. \(p^{\prime}\) and \(\log p^{\prime}\) denote the next policy distribution and its logit value. They are computed by _Forward-propagation-policy_. The forward propagations of the critic and policy (Alg. 3-4) are almost the same. Their difference is that the critic takes two networks to compute two Q values by \[Q(s)_{ci}\leftarrow f_{\theta_{ci}}(S^{aw}),\ i\in\{1,2\} \tag{42}\] Then, an average Q value is obtained. This reduces the bias (overestimation) of the Q value. The policy just takes a single network to compute the policy distribution and its logit value by \[\log p,\ p\leftarrow f_{\theta_{p}}(S^{aw}) \tag{43}\]

**Compute current action values.** To obtain \(Q(a|s)_{ci},i\in\{1,2\}\), the current state values \(Q(s)_{ci},i\in\{1,2\}\) should be computed first by the algorithm _Forward-propagation-critic_. Then, the current action values are computed by gathering the state value along the policy distribution of the action \(a\) via \[Q(a|s)_{c1}=Q(s)_{c1}.gather(a),\quad Q(a|s)_{c2}=Q(s)_{c2}.gather(a) \tag{44}\]

**Compute expectation of current state value.** The process to compute the expectation of the current state value is different from that of the next state value. It is computed by \[\mathbb{E}_{s\sim p}[Q(s)]=\sum[\min(Q(s)_{c1},Q(s)_{c2})\cdot p] \tag{45}\] where \(Q(s)_{ci},i\in\{1,2\}\) and \(p\) are computed respectively by _Forward-propagation-critic_ and _Forward-propagation-policy_.

Fig. 4: Training process of our LSA-DSAC. LSA-DSAC starts by collecting data from the environment using the initialized models. The collected data is saved in the replay buffer from which data is sampled. The policy and critic are updated based on the sampled data until convergence, resulting in trained models which are saved for evaluations in the motion planning tasks.

**Compute policy entropy.** \(\mathcal{H}\big{(}\pi(\cdot\mid s)\big{)}\) is computed by \[\mathcal{H}\big{(}\pi(\cdot\mid s)\big{)}=-\log\pi(\cdot\mid s)=-\sum p\cdot\log p \tag{46}\] Before the back-propagation process, the temperature loss \(\mathcal{L}_{\alpha}\) is also required for the network update.
\(\mathcal{L}_{\alpha}\) is computed by \[\mathcal{L}_{\alpha}=-\,mean\big{[}\log\alpha\cdot(\bar{\mathcal{H}}-\mathcal{H})\big{]} \tag{47}\] where \(\bar{\mathcal{H}}\) is the target entropy. Then, the temperature and all networks are updated by gradient descent with learning rate \(\eta\) via \[(\theta_{att,c},\theta_{ci})\leftarrow(\theta_{att,c},\theta_{ci})-\eta\nabla_{(\theta_{att,c},\theta_{ci})}\mathcal{L}_{Q},\ i\in\{1,2\} \tag{48}\] \[(\theta_{att,p},\theta_{p})\leftarrow(\theta_{att,p},\theta_{p})-\eta\nabla_{(\theta_{att,p},\theta_{p})}\mathcal{L}_{p} \tag{49}\] \[\log\alpha\leftarrow\log\alpha-\eta\nabla_{\log\alpha}\mathcal{L}_{\alpha},\quad\alpha\leftarrow e^{\log\alpha} \tag{50}\] Finally, the target critic is also updated for a new training round via \[\bar{\theta}_{att,c}\leftarrow\theta_{att,c},\quad\bar{\theta}_{c1}\leftarrow\theta_{c1},\quad\bar{\theta}_{c2}\leftarrow\theta_{c2} \tag{51}\]

**Algorithm 1:** _LSA-DSAC_

1. Initialize the replay buffer \(\mathcal{D}\)
2. Initialize the attention net of the critic \(\theta_{att,c}\), the attention net of the policy \(\theta_{att,p}\), the prediction nets of the critic \(\theta_{c1}\) and \(\theta_{c2}\), and the prediction net of the policy \(\theta_{p}\), where \(critic=[\theta_{att,c},\theta_{c1},\theta_{c2}]\), \(policy=[\theta_{att,p},\theta_{p}]\)
3. Initialize the target critic \([\bar{\theta}_{att,c},\bar{\theta}_{c1},\bar{\theta}_{c2}]\): \(\bar{\theta}_{att,c}\leftarrow\theta_{att,c},\ \bar{\theta}_{c1}\leftarrow\theta_{c1},\ \bar{\theta}_{c2}\leftarrow\theta_{c2}\)
4. **For** episode \(i<N\) **do**
5. **For** \(t\neq T_{terminal}\) in episode \(i\) **do**
6. Execute action: \(\langle s_{t},a_{t},r_{t},s_{t+1}\rangle\leftarrow\pi(a_{t}|s_{t};\{\theta_{att,p},\theta_{p}\})\)
7. **Train** if length(\(\mathcal{D}\)) \(\geq\) batch size \(l\)
8. Store data of this episode: \(\mathcal{E}=\mathcal{E}\cup\langle s_{t},a_{t},r_{t},s_{t+1}\rangle\)
9. Update the replay buffer: \(\mathcal{D}\leftarrow\mathcal{D}\cup\mathcal{E}\)
10. \(i=i+1\)
11. Save models: \(\theta_{att,c}\), \(\theta_{att,p}\), \(\theta_{c1}\), \(\theta_{c2}\) and \(\theta_{p}\)

**Algorithm 2: Train**

1. Sample \(K\)-batch experiences randomly from the replay buffer \(\mathcal{D}\)
//Prepare discounted next state value \(Q_{next\_dis}\)
2. Compute the next policy distribution \(p^{\prime}\) and its log value \(\log p^{\prime}\): _Forward-propagation-policy_
3. Compute the next state values \(Q(s^{\prime})_{c1}\), \(Q(s^{\prime})_{c2}\) by the target critic: _Forward-propagation-critic_
4. Compute the expectation of the next state value: \(\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]=\sum[p^{\prime}\cdot\min(Q(s^{\prime})_{c1},Q(s^{\prime})_{c2})-\alpha\cdot\log p^{\prime}]\)
5. Compute the discounted next state value: \(Q_{next\_dis}=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]\)
//Prepare current action values \(Q(a|s)_{c1}\), \(Q(a|s)_{c2}\)
6. Compute the current state values \(Q(s)_{c1}\), \(Q(s)_{c2}\): _Forward-propagation-critic_
7. Compute the current action values: \(Q(a|s)_{c1}=Q(s)_{c1}.gather(a)\), \(Q(a|s)_{c2}=Q(s)_{c2}.gather(a)\)
//Prepare Q value loss (critic loss)
8. Compute the Q value loss: \(\mathcal{L}_{Q}=MSE\big{(}Q(a|s)_{c1},Q_{next\_dis}\big{)}+MSE\big{(}Q(a|s)_{c2},Q_{next\_dis}\big{)}\)
//Prepare policy loss and temperature loss
9. Compute the policy loss \(\mathcal{L}_{p}\) by Eq. (39) and the temperature loss \(\mathcal{L}_{\alpha}\) by Eq. (47)
//Back-propagation
10. Update the critic, policy, and temperature by Eqs. (48)-(50), and update the target critic by Eq. (51)
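To make the loss computations of Eqs. (38)-(47) concrete, the following is a minimal PyTorch-style sketch of one discrete SAC update. The tensor names and the `critic`/`policy`/`target_critic` interfaces (each returning per-action outputs of shape `(B, num_actions)`) are hypothetical stand-ins for the networks in Fig. 5, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def dsac_losses(critic, target_critic, policy, batch, alpha, gamma=0.95):
    """Sketch of the discrete SAC losses in Eqs. (38)-(46)."""
    s, a, r, s_next = batch  # a: (B, 1) integer actions; r: (B, 1) rewards

    # Discounted next state value, Eqs. (40)-(41)
    with torch.no_grad():
        log_p_next, p_next = policy(s_next)          # Forward-propagation-policy
        q1_next, q2_next = target_critic(s_next)     # Forward-propagation-critic
        v_next = (p_next * (torch.min(q1_next, q2_next)
                            - alpha * log_p_next)).sum(dim=1, keepdim=True)
        q_next_dis = r + gamma * v_next

    # Current action values, Eq. (44)
    q1, q2 = critic(s)
    q1_a, q2_a = q1.gather(1, a), q2.gather(1, a)

    # Critic loss, Eq. (38)
    loss_q = F.mse_loss(q1_a, q_next_dis) + F.mse_loss(q2_a, q_next_dis)

    # Policy loss, Eqs. (39), (45)-(46)
    log_p, p = policy(s)
    v = (p * torch.min(q1, q2).detach()).sum(dim=1)  # E_{s~p}[Q(s)], Eq. (45)
    entropy = -(p * log_p).sum(dim=1)                # H(pi(.|s)), Eq. (46)
    loss_p = -(v + alpha * entropy).mean()
    return loss_q, loss_p, entropy

def temperature_loss(log_alpha, entropy, target_entropy):
    # Eq. (47): drives alpha so that the policy entropy tracks target_entropy
    return -(log_alpha * (target_entropy - entropy.detach())).mean()
```

The three returned losses correspond directly to steps 8-9 of Algorithm 2; each would be minimized with its own optimizer, as in Eqs. (48)-(50).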
## IV Experiments

This section presents the implementation details of our algorithms. First, the network framework of our LSA-DSAC is given. Second, the model training details of our algorithms and the state-of-the-art are presented. Third, the model evaluations are conducted. The evaluations include the converged reward evaluation, interpretability evaluation, qualitative evaluation, quantitative evaluation, time complexity evaluation, transferability evaluation, and robustness evaluation. Finally, the physical implementation is presented.

### _Network framework_

In implementation, the network framework of our LSA-DSAC (Fig. 5) takes the architecture with separate attention networks. The prediction networks of the critic and policy connect with separate attention networks to form the critic and policy of DSAC. This contributes to overall convergence, compared with an architecture with a shared attention network. The prediction network of the critic consists of two linear networks, each with three linear layers. The prediction network of the policy has just one linear network, which also has three linear layers. Both the critic network and the policy network consist of a feature interpretation part and a prediction part. The feature interpretation part is our LSA encoder, which translates the features of the robot and obstacles into the attention-based environment features \(S^{aw}\). The attention-based environment features \(S^{aw}\) are the combination of the robot feature \(s_{r}\) and the attention-based obstacle features \(S^{aw}_{o}\). The attention-based environment features \(S^{aw}\) are then fed into the prediction networks, which are linear neural layers. Double prediction networks are used in the critic network to reduce the overestimation of the Q value, while a single prediction network is used in the policy network.

### _Model training_

We first implemented RG-DSAC for motion planning in the circle-crossing simulator. The training result (Fig. 6a) demonstrated that RG-DSAC converged faster than DSAC with the source environmental state. Moreover, the converged result of RG-DSAC also outperformed that of DSAC with the source environmental state. The output of the relational graph is the matrix \(H^{t+1}\) (layer number \(l=2\)). The final interpreted feature fed to DSAC, \(S^{rg}\), is the combination of the robot feature \(H^{t+1}[0,:]\) and the pooled obstacle features \(LSTM(H^{t+1}[1{:},:])\), where the obstacle features are pooled by the LSTM.
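As an illustration of this feature-interpretation step, the sketch below shows how the robot feature can be concatenated with LSTM-pooled obstacle features (the _rob+lstm(obs)_ variant compared in the ablations below). The shapes and module names are assumptions for illustration, not the authors' implementation; the default sizes follow TABLE I (LSTM hidden size 50, robot feature dimension 6), so the concatenated feature matches the DSAC input layer size of 6+50 listed there.

```python
import torch
import torch.nn as nn

class LSTMPoolingEncoder(nn.Module):
    """Pools a variable number of obstacle features with an LSTM and
    concatenates the result with the robot feature (rob+lstm(obs))."""
    def __init__(self, obstacle_dim=50, robot_dim=6, hidden_size=50):
        super().__init__()
        self.lstm = nn.LSTM(obstacle_dim, hidden_size, batch_first=True)

    def forward(self, robot_feat, obstacle_feats):
        # robot_feat: (B, robot_dim); obstacle_feats: (B, N_obstacles, obstacle_dim)
        _, (h_n, _) = self.lstm(obstacle_feats)
        pooled = h_n[-1]                 # (B, hidden_size): final hidden state
        return torch.cat([robot_feat, pooled], dim=1)  # (B, robot_dim + hidden_size)
```

Because the LSTM consumes the obstacle features sequentially, the pooled vector has a fixed size regardless of the number of obstacles, which is what allows the same prediction network to be used in environments with 1 to 10 obstacles.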
To prove the efficacy and efficiency of our method of preparing the final features for training (_rob+lstm(obs)_), we compared it with other potential features for training in ablation experiments, where the other features include: 1) features based on the feature concatenation of the robot and obstacles (_rob+obs_), 2) the source robot feature (_rob_), 3) features from summing the concatenated features of the robot and obstacles (_sum(rob+obs)_), 4) features from concatenating the robot feature and the obstacle features pooled by MLP (_rob+mlp(obs)_), 5) pooled features of the robot and obstacles by MLP (_mlp(rob+obs)_), and 6) pooled features of the robot and obstacles by LSTM (_lstm(rob+obs)_). The ablation experiments (Fig. 6b) demonstrate that the features interpreted by our method outperform the other potential features. The experiments also indicate that the robot feature should be separated from the obstacle features pooled by LSTM or MLP, resulting in the features _rob+lstm(obs)_ and _rob+mlp(obs)_. This contributes to the expressive power of the interpreted features for training. The interpreted feature _rob+mlp(obs)_ contributes marginally to the convergence, while the interpreted feature _rob+lstm(obs)_ of our LSA-DSAC dramatically improves the convergence. We noticed that a separate architecture (two relation networks and two LSTMs) outperforms a shared architecture (one shared relation network with shared and separate LSTMs) (Fig. 6c) in the implementation of RG-DSAC. The experiment shows that separate LSTM encoding contributes dramatically to the convergence (yellow and green curves), while a separate relation network also contributes to the convergence (blue and yellow curves). Fig. 5: Network framework of our LSA-DSAC. The framework of LSA-DSAC consists of the critic network and policy network. The critic network and policy network receive the same environment features from the circle-crossing simulator for the training. In Fig. 6d, the convergence is improved after the attention network replaces the relational graph for the feature interpretation. Then, the convergence is further improved by integrating the skip connection method and LSTM pooling into the attention network. The experiment also shows that AW-DSAC and SA-DSAC _overfit_ in training because of the sum operation, which lacks robustness or injectivity. LSA-DSAC outperforms the remaining algorithms in the converged result at the cost of convergence speed in early-stage training. The experiment is extended to 10-obstacle cases (Fig. 6e), where our LSA-DSAC still outperforms the remaining algorithms in overall convergence speed and converged result. LSA-DSAC is also trained in cases with 1, 2, 3 and 4 obstacles (Fig. 6f). The experiment shows that an increase in environmental complexity (the number of dynamic obstacles) slows the convergence. Finally, LSA-DSAC is compared with RG-DSAC and the state-of-the-art methods, which include CADRL, LSTM-A2C, LSTM-RL and SARL. Note that ORCA is not trainable, so it is not included in the training comparisons. LSA-DSAC is compared with CADRL in the 1-obstacle case because CADRL only supports single- Fig. 6: The training curves of our algorithms and the state-of-the-art. (a) denotes that the environment state interpreted by the relational graph contributes more to the convergence of DSAC than the source environment state. (b) denotes that after acquiring the robot and obstacle features interpreted by the relational graph, the obstacle features should be encoded by the LSTM.
Then, the robot features are concatenated with the LSTM-encoded obstacle features to form new features for the learning of DSAC. This method of preparing new features outperforms the rest of the methods in improving the convergence of DSAC. (c) denotes that the network architecture with a separate relational graph and separate LSTMs (Fig. 5) contributes more to the convergence of DSAC than the architecture with a shared relational graph or shared LSTM. (d-e) denotes that the feature interpretation based on the relational graph only partly represents the relationship between the robot and the obstacles, resulting in a slow convergence of DSAC. The attention weight or attention network focuses on and precisely describes the relationship between the robot and the obstacles, resulting in a fast convergence of DSAC. However, the attention weight overfits in training, and the overfitting problem can be mitigated by applying the skip connection method and the LSTM pooling method. (f) denotes that the convergence of LSA-DSAC slows with the increase of dynamic obstacles in the environment. (g) denotes the comparison of LSA-DSAC and CADRL in convergence. CADRL does not support multi-agent training, therefore the comparison of LSA-DSAC and CADRL is presented in a separate figure. (h-i) denotes the comparisons of LSA-DSAC, RG-DSAC, and the state-of-the-art methods that support multi-agent training. obstacle training (Fig. 6g). The remaining state-of-the-art methods support multi-obstacle training, and they are trained in cases with 5 and 10 obstacles (Fig. 6h-i). The experiments show that our LSA-DSAC is superior to the state-of-the-art in both convergence speed and converged result. The training parameters of our LSA-DSAC are shown in TABLE I.

### _Model evaluation_

The trained models of all algorithms are evaluated comprehensively in the 5-obstacle case in the circle-crossing simulator from seven perspectives. These include the _converged reward evaluation_, _interpretability (explainable ability) evaluation_, _qualitative evaluation_, _quantitative evaluation_, _time complexity evaluation_, _transferability evaluation_, and _robustness evaluation_. Models are evaluated with 500 test sets (episodes). **Converged reward evaluation.** The converged reward indicates the overall performance of the trained models on a small test set during training. It provides a fast impression of how good the model is in training. It is easy to see that our LSA-DSAC outperforms the state-of-the-art in the converged reward, while SARL performs best among the state-of-the-art (TABLE II). CADRL only supports training with one obstacle, therefore its model is not included in the comparison. **Interpretability (explainable ability) evaluation.** Interpretability (explainable ability) here is defined as the ability to decide _directly, explicitly, and uniformly_ how good the motion planning performance is. The attention mechanism (attention network) provides the attention score (a post-training indicator) to evaluate the importance of obstacles, thereby justifying the robot's policy or actions. Then, the motion planning strategy is generated based on the attention score. The motion planning strategy (Fig. 7) of our LSA-DSAC indicates that the attention score is an overall evaluation that considers the moving direction, moving speed, and distance of the robot and obstacles (e.g., humans). The distance between the robot and obstacles sometimes contributes less to the attention score (e.g., human 2 in Fig. 7, which has the minimum distance to the robot).
Interpretability comparisons of the models are shown in TABLE III. Our LSA-DSAC and SARL have interpretability because of the attention score, while the motion planning performance of the rest of the algorithms cannot be explained directly, explicitly, and uniformly. \begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}} \hline Algorithms & Converged reward \\ \hline ORCA [8] & — \\ \hline CADRL [27] & 0.61 (training with one obstacle) \\ \hline LSTM-RL [15][10] & 0.49 \\ \hline SARL [15] & 0.55 \\ \hline LSTM-A2C [29][11][28] & 0.30 \\ \hline Our RG-DSAC & 0.50 \\ \hline Our LSA-DSAC & **0.57** \\ \hline \end{tabular} \end{table} TABLE II: Converged reward of the models in training with five dynamic obstacles. ORCA is not trainable, therefore it is not included in the comparisons. CADRL does not support multi-agent training, and the converged result of CADRL is from training with one dynamic obstacle. \begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}} \hline Parameters/Hyper-parameters & Values \\ \hline LSTM hidden size & 50 \\ \hline Number of MLP layers & 3 \\ \hline ReLU layer after MLP & Yes (first MLP layer) \\ \hline MLP input/output size (interaction and embedded layers) & [150, 100]-[100, 50] \\ \hline MLP input/output size (attention layer) & [100, 100]-[100, 1] \\ \hline Reward & Source reward \\ \hline Gamma & 0.95 \\ \hline Tau & 0.005 \\ \hline Learning rate & 3e-4 \\ \hline Alpha & 0.2 \\ \hline Frequency of network update & Per step \\ \hline Automatic entropy tuning & True \\ \hline Batch size & 128 \\ \hline Input layer size (DSAC network) & 6+50 \\ \hline Hidden layer size (DSAC network) & 128 \\ \hline Output size of policy network (DSAC network) & 81 \\ \hline \end{tabular} \end{table} TABLE I: Parameters and hyper-parameters of LSA-DSAC. \begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}} \hline Algorithm & Interpretability (explainable ability) \\ \hline ORCA & — \\ \hline CADRL & No \\ \hline LSTM-RL & No \\ \hline SARL & Yes \\ \hline LSTM-A2C & No \\ \hline RG-DSAC & No \\ \hline LSA-DSAC & **Yes** \\ \hline \end{tabular} \end{table} TABLE III: Interpretability (explainable ability) evaluation. Here the interpretability is measured by whether post-training indicators are generated to justify the actions or policies. Fig. 7: Examples of obstacles with attention scores in LSA-DSAC. The humans here denote the dynamic obstacles. The distance between the robot and an obstacle is sometimes not decisive for the attention score, such as for human (obstacle) 2 in (a) and (b). Obstacles heading toward the robot, such as human 0, are expected to have a higher attention score, but this does not mean that only the direction of the obstacle decides the attention score. The attention score is an overall evaluation that considers the direction of motion, the speed, and the distance between the robot and the obstacles. Hence, in (b), human 0 has a high attention score, but its attention score is slightly smaller than that of human 1 and human 3. \begin{table} \begin{tabular}{l|l} \hline Algorithm & Learnt motion planning strategy \\ \hline ORCA & Cross \\ CADRL & Cross/Follow-pass \\ LSTM-RL & Partly-bypass/Follow-pass \\ SARL & Full-bypass/Partly-bypass \\ LSTM-A2C & Back-pass/Wait-pass \\ RG-DSAC & Partly-bypass/Follow-pass \\ LSA-DSAC & **Full-bypass/Partly-bypass** \\ \hline \end{tabular} \end{table} TABLE V: Qualitative evaluation.
The quality here is measured according to the efficiency or the property of the learned motion planning strategies. Figure 8: Six learned motion planning strategies. The numbers along the trajectories represent the time step of each robot or obstacle. Full-bypass and partly-bypass are the most efficient motion planning strategies. The performance of the wait-pass and follow-pass strategies is acceptable. Back-pass is the most time-consuming motion planning strategy. The cross strategy is sometimes efficient in motion planning, but it causes many collisions. \begin{table} \begin{tabular}{l|l|l|l} \hline Strategy & Description & Speed & Collision \\ \hline Full-bypass & Bypass all obstacles & Fast & Less \\ Partly-bypass & Bypass most obstacles & Fast & Less \\ Follow-pass & Follow front obstacles and pass & Medium & Less \\ Wait-pass & Wait until obstacles move away and pass & Slow & Less \\ Back-pass & Move back until obstacles move away and pass & Slow & Less \\ Cross & Cross dense obstacles & Fast/Medium/Slow & More \\ \hline \end{tabular} \end{table} TABLE IV: Features of six motion planning strategies. The motion planning strategies here are defined by humans according to human experience. **Qualitative evaluation.** The quality here refers to the trajectory quality of the robot in an episode. In the 500 tests, the robots based on our algorithms and the trainable state-of-the-art methods learnt some of the six motion planning strategies. These six strategies include _full-bypass_, _partly-bypass_, _follow-pass_, _wait-pass_, _back-pass_, and _cross_. Their features and examples are shown in TABLE IV and Fig. 8. For each algorithm, we sampled 50 trajectories from the 500 tests and found that most of the sampled trajectories of SARL and our LSA-DSAC followed the high-quality full-bypass and partly-bypass strategies (TABLE V), while the trajectories of CADRL, LSTM-RL, LSTM-A2C, and RG-DSAC followed the medium-quality follow-pass, wait-pass, and back-pass strategies. CADRL and ORCA took the low-quality cross strategy, which caused more collisions, although the cross strategy sometimes led to fast speed. Fig. 9 presents some examples that indicate the superiority of our LSA-DSAC in trajectory quality compared with the state-of-the-art. **Quantitative evaluation.** The quantity here refers to the statistical motion planning results in the 500 tests of each algorithm from the perspectives of the _success rate_, _time to goal_, _collision rate_, _timeout rate_ (allowed time 25 s), _mean distance between robot and obstacle_, and _mean reward_. The statistics (TABLE VI) show that our LSA-DSAC outperforms the state-of-the-art in all perspectives, except for the time to goal. However, LSA-DSAC still maintains high performance (2\({}^{\text{nd}}\) place) in the time to goal. Fig. 9: Superiority of LSA-DSAC in trajectory quality. This paper presents an example that demonstrates the good performance of our LSA-DSAC in the time cost to reach the goal.
\begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline Algorithms & Success rate & Time to goal & Collision rate & Timeout rate & Mean distance & Mean reward \\ \hline ORCA & 0.43 & 10.86 & 0.564 & 0.006 & 0.08 & — \\ CADRL & 0.89 & 11.30 & 0.106 & 0.004 & 0.16 & 0.47 \\ LSTM-RL & 0.96 & 12.10 & 0.02 & 0.01 & 0.16 & 0.49 \\ SARL & 0.99 & 10.96 & 0.01 & 0.00 & 0.18 & 0.56 \\ LSTM-A2C & 0.88 & 17.04 & 0.05 & 0.07 & 0.12 & 0.36 \\ RG-DSAC & 0.94 & 11.37 & 0.06 & 0.00 & 0.14 & 0.52 \\ LSA-DSAC & **0.996** & 10.94 & **0.004** & **0.00** & **0.15** & **0.57** \\ \hline \end{tabular} \end{table} TABLE VI: Statistical results of the quantitative evaluation. \begin{table} \begin{tabular}{l|l} \hline Algorithms & Time cost (hour/10K epi.) \\ \hline ORCA & — \\ CADRL & 7.4 (train with one obstacle) \\ LSTM-RL & 16.08 \\ SARL & 14.72 \\ LSTM-A2C & 0.42 \\ RG-DSAC & 4.38 \\ LSA-DSAC & 4.56 \\ \hline \end{tabular} \end{table} TABLE VII: Time complexity evaluation. The time complexity here is measured by the time cost of all algorithms in training. \begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline Algorithms & Success rate & Time to goal & Collision rate & Timeout rate & Mean distance & Mean reward \\ \hline ORCA & 0.74 & 9.12 & 0.256 & 0.004 & 0.08 & — \\ CADRL & 0.88 & 11.19 & 0.01 & 0.11 & 0.17 & 0.48 \\ LSTM-RL & 0.91 & 10.54 & 0.03 & 0.06 & 0.12 & 0.49 \\ SARL & 0.92 & 10.96 & 0.02 & 0.06 & 0.17 & 0.51 \\ LSTM-A2C & 0.45 & 15.61 & 0.41 & 0.14 & 0.10 & 0.12 \\ RG-DSAC & 0.40 & 11.09 & 0.59 & 0.01 & 0.11 & 0.10 \\ LSA-DSAC & **0.93** & 10.95 & 0.05 & 0.02 & **0.14** & **0.51** \\ \hline \end{tabular} \end{table} TABLE VIII: Transferability evaluation. Here the transferability is measured by the performance of the trained models (trained in the circle-crossing simulator) in the new environment (the square-crossing simulator). The performance is measured using the same metrics as those of the quantitative evaluation. \begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline Algorithms & Success rate & Time to goal & Collision rate & Timeout rate & Mean distance & Mean rewards \\ \hline ORCA & 0.310 & 1.560 & 0.308 & 0.002 & 0.000 & — \\ CADRL & 0.010 & 2.110 & 0.096 & 0.106 & 0.010 & 0.02 \\ LSTM-RL & 0.078 & 1.160 & 0.008 & 0.050 & 0.020 & 0.04 \\ SARL & 0.070 & 0.000 & 0.010 & 0.060 & 0.010 & 0.05 \\ LSTM-A2C & 0.430 & 1.430 & 0.360 & 0.070 & 0.020 & 0.24 \\ RG-DSAC & 0.540 & 0.280 & 0.530 & 0.010 & 0.030 & 0.42 \\ LSA-DSAC & 0.066 & 0.010 & 0.046 & 0.020 & 0.010 & 0.06 \\ \hline \end{tabular} \end{table} TABLE IX: Robustness evaluation. The robustness here is measured by the value changes of the statistical results from the circle-crossing simulator to the square-crossing simulator. Figure 10: An example of the 500 tests in the square-crossing simulator. (a) presents the simulator, while (b) presents the trajectories of the robot and obstacles at the end of an episode. The square-crossing simulator is only used in the transferability evaluations. **Time complexity evaluation.** Here the time complexity is measured by the _time cost_ of each algorithm in training. The online learning algorithm LSTM-A2C learns from online data. Hence, it takes less time (0.42 h) in training (TABLE VII), while the other off-policy algorithms, including our RG-DSAC and LSA-DSAC, take much more time. However, our LSA-DSAC and RG-DSAC still keep high performance (around 4.5 h) compared with the other off-policy algorithms. **Transferability evaluation.** The transferability here refers to the performance of a trained model (trained in the circle-crossing simulator) in a new environment (the square-crossing simulator, Fig. 10).
The test results (TABLE VIII) show that our LSA-DSAC keeps the best performance among all trained models in the success rate, the mean distance between the robot and obstacles, and the mean reward. For the time to goal, collision rate, and timeout rate, our LSA-DSAC still keeps high performance (\(3^{\text{rd}}\), \(4^{\text{th}}\), and \(3^{\text{rd}}\) place, respectively). **Robustness evaluation.** The robustness here denotes the _stability_ of a trained model in a new environment. The stability is described as the _value changes_ (the changes of the statistical results in the quantitative evaluation). TABLE IX presents the value changes of these models from the circle-crossing simulator to the square-crossing simulator. Although our LSA-DSAC does not perform best among all trained models, it still keeps high performance (\(2^{\text{nd}}\), \(2^{\text{nd}}\), \(3^{\text{rd}}\), \(3^{\text{rd}}\), \(2^{\text{nd}}\), and \(4^{\text{th}}\) place, respectively).

### _Physical implementation_

This paper provides a demonstration of the physical implementation. The motivation for the physical implementation is to provide a possible way to implement the physical robot and enable it to navigate in dense and dynamic scenarios, as in the motion planning in the simulator. This paper emphasizes the evaluations of motion planning algorithms in simulators, instead of evaluations in the real world, because simulators can provide as many tests as needed to extensively evaluate the performance of motion planning algorithms. Moreover, the errors introduced by simulators are predictable. In the real world, unexpected errors may result in an unfair environment when evaluating the motion planning performance of the algorithms. For example, false operations from humans and measurement errors from the sensors may make the results of real-world evaluations differ from those in simulators under the same settings. This paper attempts to create a real-world environment that has the same settings as the simulators to demonstrate the motion planning performance of the algorithms. Unexpected errors from the real world are not considered in the physical implementation. The problems of unexpected errors are expected to be solved by integrating model-based methods or Bayesian inference into model-free RL. However, model-based methods may bring new problems, such as expensive computation [40]. This paper does not extend this topic further, but model-based methods may be considered in future works for motion planning in dense and dynamic scenarios. In the physical implementation, the models of the trainable motion planning algorithms are trained with data from the simulator, while the testing data is collected by the robot sensors from the real world. The mechanism of the physical implementation is as follows: A local area network (LAN) is established in the Robot Operating System (ROS) [41]. Agents in the experimental area are connected to the LAN. They are equipped with marker points that can be captured by cameras to create rigid body models in the motion capture system. The workflow of the physical implementation is shown in Figure 11a. First, the cameras capture the agents' location information (observations) via the marker points. Second, the observations are sent to the host to compute the agents' actions by the RL model and ORCA. Third, the actions are broadcast to the agents by WiFi. Finally, the actions are executed by the agents. Once the robot reaches the goal, the task is finished.
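The capture-compute-broadcast loop just described can be sketched as a minimal host-side ROS node. The topic names, message types, and the `model` interface below are assumptions for illustration only, not the authors' actual stack.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped, Twist

class HostPlannerNode:
    """Receives motion-capture observations, queries the trained policy,
    and broadcasts velocity commands to a robot over the LAN."""
    def __init__(self, model):
        self.model = model  # trained LSA-DSAC policy (placeholder)
        self.cmd_pub = rospy.Publisher('/robot/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/mocap/robot_pose', PoseStamped, self.on_pose)

    def on_pose(self, pose_msg):
        if self.model is None:            # placeholder guard for this sketch
            return
        # Steps 1-2: a camera observation arrives; the host computes an action.
        action = self.model.act(pose_msg)  # hypothetical policy interface
        # Step 3: the action is broadcast to the agent over WiFi/LAN.
        cmd = Twist()
        cmd.linear.x, cmd.angular.z = action  # assumed (v, omega) action
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('host_planner')
    HostPlannerNode(model=None)  # substitute a trained policy here
    rospy.spin()
```

The key design point, matching the described workflow, is that all policy computation happens on the host; the agents only execute the broadcast commands (step 4).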
### _Workflow of physical implementation_

In action execution (Figure 11b), the Jetson Nano converts digital actions into ROS messages. The STM32 then computes the wheel speeds from the ROS messages and converts them into Pulse-Width Modulation (PWM) messages, which can be recognized and executed by the motors. The motion capture system is based on optical tracking technology [42][43][44] to localize the agents. It consists of eight 3D cameras. Finally, our physical implementation is tested in both a ROS Gazebo environment and the real world (Figure 11c). Videos of the ROS Gazebo test and the real-world tests are available as follows: Fig. 11: Details of the physical implementation. (a) presents the detailed steps of the physical implementation. (b) presents the hardware of the robot and obstacles for the action execution. (c) presents the motion capture system and an example of the tests in Gazebo and the real world. The motion capture system localizes the robot and obstacles in real time to compute their positions and velocities. Figure 14: Real-world test in dense and dynamic scenarios. Like the Gazebo test and the real-world test in the static environment, the real-world test in dense and dynamic scenarios uses the model of LSA-DSAC. The model of LSA-DSAC is trained for 1k episodes in the circle-crossing simulator. Two robots use the same model of LSA-DSAC. Each robot treats the other robot as an obstacle in the motion planning task. The obstacles spread along the robots' routes to their destinations and walk around randomly and continuously. Finally, the robots reached their destinations and avoided all dynamic obstacles simultaneously. Note that this paper omitted the real-world test in dense and dynamic scenarios with one robot because each robot treats the other robot as an obstacle in the motion planning. Figure 12: The Gazebo test. The same trained models are evaluated in the Gazebo environment and the circle-crossing simulator simultaneously. The robot and obstacles in these two environments are controlled by LSA-DSAC and ORCA, respectively. The model of LSA-DSAC is trained for 1k episodes in the circle-crossing simulator. Finally, the motion planning results in these two environments were almost the same, despite a few differences in the smoothness of the trajectories given the video results. Figure 13: The real-world test in the static environment. Like the Gazebo test, the real-world test in the static environment uses the model of LSA-DSAC. The model of LSA-DSAC is trained for 1k episodes in the circle-crossing simulator. The obstacles spread along the robot's route to the destination. Finally, the robot reached its destination and avoided all obstacles simultaneously. 1) Gazebo test (Figure 12). The video link is available at [https://youtu.be/A-GdHGoWwCk](https://youtu.be/A-GdHGoWwCk). The purpose of the Gazebo test is to compare the motion planning differences between the Gazebo environment and the simulator. The same trained models are evaluated in the Gazebo environment and the circle-crossing simulator. The experiment demonstrates that the motion planning performance in the Gazebo environment and the circle-crossing simulator is almost the same under the same settings. The difference is that the trajectories of the robot and the obstacles in the Gazebo environment are not as smooth as those of the circle-crossing simulator, given the video demonstration. The sensor errors of the Gazebo environment cause positioning drift, reducing the smoothness of the trajectories.
However, the robot can still reach the destination safely and efficiently by learning an efficient motion planning policy and keeping a safe distance to the obstacles. 2) Real-world test in the static environment (Figure 13). The video link is available at [https://www.youtube.com/watch?v=b1SFbA14AqE](https://www.youtube.com/watch?v=b1SFbA14AqE). The model tested in the Gazebo environment is then tested in the static real-world environment. Given the test result, the robot can reach the destination and avoid all obstacles simultaneously. 3) Real-world test in dense and dynamic scenarios (Figure 14). The video is available at [https://youtu.be/UB6aC3XoZ6c](https://youtu.be/UB6aC3XoZ6c). The real-world test in dense and dynamic scenarios uses the same model as the above two tests. Given the test result, the robots can reach their destinations and avoid all dynamic obstacles simultaneously. Like the Gazebo test, the real-world tests in static and dynamic scenarios have the same problem of positioning drift caused by the sensor errors, reducing the smoothness of the trajectories, given the video demonstration. However, the robot can still reach the destination safely and efficiently by learning an efficient motion planning policy and keeping a safe distance from the obstacles.

## V Conclusion

This paper combines representation learning with reinforcement learning for robotic motion planning in environments with dynamic and dense obstacles. First, the relational graph is combined with DSAC to form RG-DSAC, and satisfactory motion planning performance is achieved. Second, the expressive power of the interpreted features is improved by the attention weight (attention network), which replaces the relational graph in the feature interpretation. This improves network convergence. Third, the attention weight (network) is optimized by the skip connection method and LSTM pooling to eliminate overfitting in training. Therefore, the convergence speed and converged result are further improved. Extensive experiments (training and evaluations) of our algorithms and the state-of-the-art are conducted. The results demonstrate that our LSA-DSAC outperforms the state-of-the-art in training and most evaluations. The details of the physical implementation of the robot and dynamic obstacles are also given to provide a possible method to transfer the simulation to the real world. Motion planning experiments were conducted in indoor scenarios (the ROS Gazebo environment and the real world). This further demonstrates the credibility of our motion planning algorithm and physical implementation method in the real world. Future research may focus on the design of independent objectives for the attention network to further improve the convergence and interpretability. We will also try tree models and Bayesian model-based methods to infer the hidden features of the robot and obstacles. This contributes to better interpretability and to reducing unexpected errors once the robot works in the real world, therefore improving the network convergence and reducing the sim2real gap.

## Acknowledgment

The physical implementation is supported in part by the National Natural Science Foundation of China under Grant _62003218_, the Guangdong Basic and Applied Basic Research Foundation under Grant _2019A1515110234_, and the Shenzhen Science and Technology Program under Grant _RCBS20200714114921371_.
Motion planning is at the core of decision making for robots. Classical planning algorithms struggle with dense and dynamic obstacles. Deep learning algorithms generate low-performance one-step predictions that cause many collisions. Reinforcement learning algorithms generate optimal or near-optimal time-sequential predictions, but they suffer from slow convergence, poor converged results, and overfitting. This paper introduces a hybrid algorithm for robot motion planning: long short-term memory (LSTM) pooling and skip connections for attention-based discrete soft actor-critic (LSA-DSAC). First, the graph network (relational graph) and the attention network (attention weights) interpret the environmental state for the learning of DSAC. Compared with the graph in this task, the attention network
2310.00332
MFL Data Preprocessing and CNN-based Oil Pipeline Defects Detection
Recently, the application of computer vision for anomaly detection has been under attention in several industrial fields. An important example is oil pipeline defect detection. Failure of one oil pipeline can interrupt the operation of the entire transportation system or cause a far-reaching failure. The automated defect detection could significantly decrease the inspection time and the related costs. However, there is a gap in the related literature when it comes to dealing with this task. The existing studies do not sufficiently cover the research of the Magnetic Flux Leakage data and the preprocessing techniques that allow overcoming the limitations set by the available data. This work focuses on alleviating these issues. Moreover, in doing so, we exploited the recent convolutional neural network structures and proposed robust approaches, aiming to acquire high performance considering the related metrics. The proposed approaches and their applicability were verified using real-world data.
Iurii Katser, Vyacheslav Kozitsin, Igor Mozolin
2023-09-30T10:37:12
http://arxiv.org/abs/2310.00332v1
# MFL Data Preprocessing and CNN-based Oil Pipeline Defects Detection ###### Abstract Recently, the application of computer vision for anomaly detection has been under attention in several industrial fields. An important example is oil pipeline defect detection. Failure of one oil pipeline can interrupt the operation of the entire transportation system or cause a far-reaching failure. The automated defect detection could significantly decrease the inspection time and the related costs. However, there is a gap in the related literature when it comes to dealing with this task. The existing studies do not sufficiently cover the research of the Magnetic Flux Leakage data and the preprocessing techniques that allow overcoming the limitations set by the available data. This work focuses on alleviating these issues. Moreover, in doing so, we exploited the recent convolutional neural network structures and proposed robust approaches, aiming to acquire high performance considering the related metrics. The proposed approaches and their applicability were verified using real-world data. _Keywords:_ Deep learning, Computer vision, Convolutional neural networks, Anomaly detection, Fault detection, Oil pipelines, Magnetic Flux Leakage data, Defect, Technical diagnostics. ## 1 Introduction Anomaly detection problems are of great importance in industrial applications because anomalies usually represent faults, failures or the emergence of such Chandola et al. (2009). To detect them automatically, advanced analytic algorithms, including machine learning- and deep learning-based ones, can be applied. In this work, we investigated whether a deep neural network would perform well enough to provide insight into oil pipeline diagnostics. An oil pipeline system spans thousands of kilometers, which makes manual inspection very costly and sometimes impossible. Damage to pipelines that transport oil and gas products leads to severe environmental problems, and eliminating leakages and their consequences is expensive. To avoid accidents, it is recommended to improve the efficiency of diagnostics and increase the frequency of in-line inspection (ILI) tool deployment (Fig. 1). ILI tools, also referred to as pipeline inspection gauges, use the Hall effect for measuring the localized Magnetic Flux Leakage (MFL) intensity along the pipe wall. While moving along the pipe, the gauge inspects the wall and detects the magnetic field leaks. The MFL technique is nowadays the most common approach for nondestructive testing of oil and gas pipelines Loskutov et al. (2006). The data collected during the inspection can be further analyzed and used to solve the main diagnostics problems Katser et al. (2022): detection of damage and defects, their localization, and diagnosis or defect classification. Such analysis results are useful for asset management and repair prioritizing. This data analysis step is partly automated, but there is still a lot of manual work done here, which makes it quite expensive and time consuming. Data analysis and machine learning techniques are very useful for making these processes more efficient, both time- and money-wise. Thus, an improved diagnostic process allows running the whole ILI procedure more often and gaining more knowledge about the pipeline health, resulting in better safety and fewer financial losses due to leakages. The objectives of this research are to appraise the proficiency of data engineering and computer vision (CV) techniques in oil pipeline diagnostics.
## 2 Literature Review and Problem Statement The MFL technique is the most common approach for nondestructive testing of oil and gas pipelines. The data obtained during the pipeline inspection is primarily analyzed by expert- and heuristics-based methods and, more recently, by classical machine learning (ML) methods. A comparison of performance among different ML methods for the defect identification problem is presented in Khodayari-Rostamabad et al. (2009). The main challenge for the ML approach is creating informative and important features that can be used as an input for ML methods. Usually, these diagnostic features are generated using expert knowledge and manually created heuristics. So, on the one hand, the ML methods extend expert-based approaches and improve their quality. On the other hand, using manually generated features limits the quality of ML-based defect detection and fails to fully automate the diagnostic process. A variety of the most successful features is presented and analyzed in detail in Slesarev (2017). To overcome the limitations of the expert-based and ML-based approaches, one can resort to Deep Learning (DL) techniques, which have shown significant progress and achieved remarkable results in numerous applications over just the past few years. The image classification problem is one of the most successful applications of DL and Convolutional Neural Networks (CNNs) in particular. CNNs can also be used to automate the process of feature generation in MFL data analysis. As an advantage, they can solve defect detection, weld strength detection, classification, and segmentation problems at the same time. In the literature, there are examples of applying CNNs to defect detection Feng et al. (2017), weld defect detection Shang et al. (2020), weld and defect classification Yang et al. (2020), and defect size estimation Lu et al. (2019). For all the mentioned applications, CNNs outperformed traditional approaches. Nevertheless, there are still few works dedicated to MFL data analysis using DL, and the existing DL approaches do not always achieve the quality required for full automation of the diagnostic process. A number of particular problems that could be solved using this novel approach are not covered yet. For instance, we could not find any works on applying CNNs to the defect segmentation task, despite the importance of this problem according to Feng et al. (2017). This can be an extension of the current research. This work seeks to address three different problems: 1. Defect detection with DL techniques, 2. Weld strength detection with DL techniques, 3. MFL data preprocessing. To solve the first two problems, it is proposed to apply CNNs of different architectures and compare their results with the existing state-of-the-art approaches. We formulate the defect detection problem as an image classification problem in ML terms because the applied DL techniques are designed for problems formulated this way. To solve the first problem, we state a binary classification problem (healthy pipe or defected pipe). To solve the second problem, we state a multiclass classification problem (healthy pipe, defected pipe, or defected weld), covering the first two problems simultaneously. Figure 1: In-line inspection tool. Also, this research addresses different preprocessing techniques for dealing with typical issues in the MFL data, comparing the results of various preprocessing approaches used with various CNNs.
This work seeks to construct a preprocessing approach that best improves the results of the defect detection problem. ## 3 Dataset Description There are three main classes of data that are attended to by diagnostic personnel. They are presented in Fig. 2. Some other classes of data (concerning pipe tees and bends) are out of the scope of this work, as are different classes of defects and different classes of welds (healthy and defected). Although MFL data looks quite similar for different pipes and ILI tools, it can differ significantly. The data mainly depends on pipe size, wall width, sensor geometry, and other geometric characteristics. Moreover, ILI tools differ a lot for different pipe sizes. Therefore, the repeatability of the results for different datasets should be investigated additionally. Further on, we provide the dataset characteristics, which are also presented in Table 1. The data was collected from a pipe 219 mm in diameter. The MFL dataset provides information about a single inspection tool run. The dataset has 64 features collected from 64 sensors installed at a constant step (10.75 mm) around the perimeter of the ILI tool. The data is collected as an array of 1x64 shape with a constant step (3.37 mm) along the ILI tool movement inside the pipe. The dataset has 4,470,704 samples (steps along the pipe) that represent a 15,162.85 m part of the pipeline. The sample values vary from 0 to 4,095 units. It has 745 defects of different types and 1,462 welds, 34 of which were found to be defected. Figure 2 shows examples of healthy data, data with a weld, and data with a defect. A technical report, attached to the dataset, contains information about the locations of welds and defects, defect types, sizes, and other related characteristics. The report is prepared manually by the domain expert, so it contains some inaccuracies and needs additional preprocessing, as does the data itself. ## 4 Preprocessing Procedures The raw data has several issues that make it unusable for solving CV problems without proper preprocessing. The issues are: 1. Sensor malfunctions (zeroed values cause the bold horizontal line in Fig. 2), 2. Displaced origins between the data and report coordinates, 3. Inaccurate annotations, e.g., missed defects, wrong defect locations, etc., 4. No annotated data for the segmentation task. Figure 2: Image classes distinguished in this work. The preprocessing stages and procedures that resolve these and other issues are given in this section. **Initial dataset transformation into separate images.** The initial dataset represents a long table indexed over the coordinate along the pipe. To state and solve the image classification problem, we first decompose this long table into smaller squared 64x64 subtables. This can also be interpreted as a sliding window that runs over the coordinate (index) of the initial dataset and clips the dataset into non-overlapping subsets (Fig. 3). Each subset can be shown as an image of a pipe part. As a result, we had a dataset of 11,690 images of the healthy class, 711 images of the defected class, and 1,412 images with welds. The characteristics of the pipeline defect dataset are described in Table 3. These classes were assigned to the images according to the markup from the technical report, where the coordinates of the welds and defects are noted. Thus, an image covering a range of coordinates with a defect is interpreted as an image of the defected class. From now on, we refer to this transformed dataset of 13,813 64x64 images.
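A minimal sketch of this non-overlapping sliding-window decomposition is shown below, assuming the raw record is a NumPy array of shape (4470704, 64); the variable names are illustrative only.

```python
import numpy as np

def to_images(record, window=64):
    """Clip a (num_samples, 64) MFL record into non-overlapping
    64x64 sub-tables, each interpreted as one image (Fig. 3)."""
    n = record.shape[0] // window          # number of full windows
    return record[:n * window].reshape(n, window, record.shape[1])

# Example usage: labels are then assigned per image from the report
# coordinates, e.g. images = to_images(mfl_record)  # -> (n, 64, 64)
```

With the stated dataset size, 4,470,704 samples yield 69,854 full 64-step windows; the 13,813 images referred to above are the subset retained after the annotation cleanup described next.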
**Sensor malfunctions problem.** To deal with sensor malfunctions, we propose to fill the gaps (zeroed values) with values calculated by different methods. Additionally, we consider values below 2,000 abnormal in this domain, according to the experts, and replace them with zeroes during preprocessing. 1. Abnormal values are set to 0. Then Min-Max scaling to the \([0.5:1]\) range. 2. Abnormal values are set to the mean of the normal values from one picture. Then Min-Max scaling. 3. Abnormal values are set to the mean of the normal values over the column. Then Min-Max scaling. 4. Abnormal values are set to the mean of the neighboring sensors over the column. Then Min-Max scaling. 5. Abnormal values are set to the interpolation results over the column. Then Min-Max scaling. The results of all the applied methods are presented in Fig. 4. The Min-Max scaling can be applied using the whole dataset or just one image; both approaches are compared in the experiments. Since the ILI tool location data did not match the defect location data from the report, it was necessary to merge the data. The key factor here turned out to be that the signal values from the magnetic flux sensors grew at the weld sites. Hence, the solution was to find the locations of the maxima of the sensor data values and then to align them with the weld coordinates. \begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline Pipeline diameter, mm & 219 \\ Pipeline length, m & 15162.85 \\ Number of samples & 4470704 \\ Number of features & 64 \\ Min value & 0 \\ Max value & 4095 \\ Number of defects & 745 \\ Number of welds (with defects) & 1462 (34) \\ \hline \end{tabular} \end{table} Table 1: Dataset characteristics Figure 3: Non-overlapping sliding window scheme for data preprocessing. **Inaccurate annotations problem.** This problem is common for nondestructive testing of oil and gas pipelines Khodayari-Rostamabad et al. (2009), as well as for manual labelling in general. There appear to be many missing defects that affect the quality of the solution. Besides, there are wrong defect types and locations. To eliminate the wrong location issue, we additionally searched for extrema around the provided locations and chose the defect or weld coordinates accordingly. **Augmentation.** Although we had a lot of data, we had a small number of defects and welds compared with healthy pipe wall instances. The augmentation procedure was used to balance the classes of images and improve the model quality by increasing the number of images in the small classes (defects, welds). The Albumentations library Buslaev et al. (2020) was selected as the augmentation tool. All the augmentations applied to both welds and defects are presented in Table 2. Based on domain knowledge, not all selected augmentations were applied to images with welds. The details of the applied augmentations are presented in Buslaev et al. (2020) and references therein. The characteristics of the augmented dataset used for the research are described in Table 3. Examples of augmentations are shown in Fig. 5. ## 5 Defects Detection Methods Pipeline defect detection is composed of two problems. First, the defect should be detected, and second, it should be evaluated using the segmentation results. We propose here a novel CNN architecture for solving the first problem. Additionally, we present the existing architectures that achieve the best results in the MFL and X-ray defect detection problems.
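Before turning to the architectures, the abnormal-value filling strategies listed in the preprocessing section above can be sketched as follows. This is a minimal illustration on a single 64x64 image assuming NumPy; the method numbering follows the list in Section 4 (method 4, neighbouring-sensor averaging, is omitted for brevity), and the column orientation is an assumption about how the image is laid out.

```python
import numpy as np

def fill_abnormal(img, method=5, threshold=2000):
    """Repair abnormal readings (< threshold) in one 64x64 MFL image,
    then apply Min-Max scaling, following the numbered filling methods."""
    img = img.astype(float)
    bad = img < threshold
    if method == 1:                       # zeros, then scale normals to [0.5, 1]
        lo, hi = img[~bad].min(), img[~bad].max()
        out = 0.5 + 0.5 * (img - lo) / (hi - lo + 1e-9)
        out[bad] = 0.0
        return out
    if method == 2:                       # mean of normal values in the image
        img[bad] = img[~bad].mean()
    else:                                 # per-track repair (methods 3 and 5)
        for c in range(img.shape[1]):
            col, mask = img[:, c], bad[:, c]
            if mask.all() or not mask.any():
                continue
            if method == 3:               # track mean of normal values
                col[mask] = col[~mask].mean()
            else:                         # method 5: linear interpolation
                idx = np.arange(len(col))
                col[mask] = np.interp(idx[mask], idx[~mask], col[~mask])
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)  # single-image Min-Max scaling
```

The single-image scaling shown here corresponds to one of the two normalization strategies compared in the experiments; the dataset-wide variant would replace `lo` and `hi` with global constants (0 and 4,095 in this dataset).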
### CNN Preliminaries A CNN is a special type of neural network that has proven effective in computer vision applications. State-of-the-art results can be achieved in segmentation and classification tasks Sainath et al. (2013). Compared to computer vision algorithms that do not take advantage of CNNs, much less pre-processing is required. More importantly, such networks are able to learn characteristics from data which otherwise would have to be individually accounted for Huet et al. (2018). Even though CNNs have been proposed in different architectures to increase their efficiency for specific tasks and/or datasets, only three types of layers are used without exception, each with a specific purpose. They are convolutional, pooling, and fully connected (linear) layers. The convolutional layers aim to extract feature maps of the input images by applying filters over different regions of the images. For instance, with \(k\) filters, each filter having weight and bias \(W_{i}\) and \(b_{i}\), respectively, the convolution of an image patch \(x_{n}\) can be written as follows: \[f_{i,n}=\sigma(W_{i}x_{n}+b_{i}), \tag{1}\] where \(\sigma\) is the activation function. Besides the rectified linear unit (ReLU), sigmoid, or softmax activation functions, a multitude of different options exist, all having their individual advantages. These are applied on the layers' output neurons (e.g., after a convolutional layer). After a number of convolutional layers, pooling layers are commonly applied in prominent network architectures to reduce the size of particular dimensions. Max-pooling and average-pooling are two examples. The pooling layers, alongside reducing dimension sizes, perform denoising when utilized on images. The fully connected layers are generally the last layers of CNNs, possessing a similar structure to traditional neural networks Mingoti and Lima (2006). ### Existing CNNs We implemented the CNN from Feng et al. (2017) with only one difference: we used square pictures (64x64 pixels) as the input, so we omitted the Normalization layer. The interested reader can find all the details and overall architecture parameters in Feng et al. (2017). From now on, this CNN is marked as CNN-2 by the number of convolutional layers. We also implemented the CNN from Shang et al. (2020), which showed better results than the pretrained and fine-tuned OverFeatNet, VGGNet, and GoogLeNet networks. Since our input size was smaller than in the original paper, we used a smaller kernel size (3x3 instead of 7x7). All the details and CNN parameters are presented in Shang et al. (2020). From now on, this CNN is marked as RayNet. ### Proposed CNN-5 model The proposed model in Fig. 6 consists of five convolutional layers overall. Each convolutional layer is followed sequentially by batch normalization (BN) and Dropout (not shown in Fig. 6). All the convolutional layers have an equal kernel size of 5x5. All the MaxPooling layers have an equal kernel size of 2x2 and stride 2. From now on, this CNN is marked as CNN-5.
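Based on this description, a PyTorch sketch of CNN-5 might look as follows. The channel widths and the exact placement of the pooling layers are assumptions (the paper specifies only five 5x5 convolutions each followed by BN and Dropout, 2x2/stride-2 max pooling, a 64x64 single-channel input, and a dropout rate of 0.33), so this is an illustration rather than the authors' exact network.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, pool=True, p_drop=0.33):
    layers = [nn.Conv2d(c_in, c_out, kernel_size=5, padding=2),
              nn.BatchNorm2d(c_out),
              nn.ReLU(),
              nn.Dropout2d(p_drop)]
    if pool:
        layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

class CNN5(nn.Module):
    """Sketch of CNN-5: five 5x5 conv layers with BN and Dropout,
    2x2 max pooling (stride 2), for 64x64 single-channel MFL images."""
    def __init__(self, num_classes=1, widths=(16, 32, 64, 128, 128)):
        super().__init__()
        c_in, blocks = 1, []
        for i, c_out in enumerate(widths):
            blocks.append(conv_block(c_in, c_out, pool=(i < 4)))  # 64 -> 4
            c_in = c_out
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(widths[-1] * 4 * 4, num_classes))

    def forward(self, x):  # x: (B, 1, 64, 64), as in the experiments below
        return self.head(self.features(x))

# Binary training uses the BCE loss of Eq. (3), e.g. with logits:
# loss = nn.BCEWithLogitsLoss()(CNN5()(x), y)
```

With this layout, the four pooling layers reduce the 64x64 input to a 4x4 feature map before the fully connected head, which matches the (64, 1, 64, 64) batch shape reported in the Results section.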
\begin{table} \begin{tabular}{l c c c} \hline \hline Data & Healthy & Defect & Weld \\ \hline \multicolumn{4}{c}{Before augmentation} \\ \hline Train & 11106 & 569 & 1130 \\ Validation & 584 & 142 & 282 \\ \hline \multicolumn{4}{c}{After augmentation} \\ \hline Train & 11106 & 8535 & 11300 \\ Validation & 584 & 142 & 282 \\ \hline \hline \end{tabular} \end{table} Table 3: Dataset size before and after augmentation ### Performance metric For each class, the binary classification problems are evaluated according to the one-versus-all principle. Recall is used as the binary classification metric for each class. Recall is defined by the formula from Olson and Delen (2008): \[Recall=\frac{TP}{TP+FN} \tag{2}\] where \(TP\) is the number of samples where the model correctly identified the considered class, and \(FN\) is the number of samples where the model did not identify the considered class. ### Loss functions Binary Cross-Entropy is used as the loss function of CNN-5: \[BCE=-\frac{1}{N}\sum_{i=1}^{N}\Big{[}y_{i}\cdot\log\big{(}p(\hat{y}_{i})\big{)}+(1-y_{i})\cdot\log\big{(}1-p(\hat{y}_{i})\big{)}\Big{]} \tag{3}\] ## 6 Results Table 4 presents the results of the comparison of different preprocessing and feature engineering techniques and different CNN architectures for binary classification (normal pipe wall or defect/weld). Table 5 shows the multiclass classification problem (normal pipe wall, defect, or weld). The batch size was equal to 64, so the input to the network had shape (64, 1, 64, 64). In the experiments we used the Adam optimizer with initial learning rate 0.001 and the learning rate scheduler with parameters: threshold = 0.0001, factor = 0.5, min lr = 0.0001, patience = 484. Also, for all the experiments, the number of epochs was equal to 12 and the dropout rate was equal to 0.33. The filling methods were researched for the binary classification problem. Centering means using a peak (extrema) search procedure to define the weld or defect coordinates correctly. The centering procedure was researched for both the binary and multiclass classification problems. Moreover, the Min-Max normalization, using either a single image or the whole dataset, was investigated. Finally, CNN-2 and CNN-5 were compared on centered images with the first filling method using the single-image Min-Max normalization. \begin{table} \begin{tabular}{l c c c} \hline Method & \(\hat{y}=y=0\) & \(\hat{y}=y=1\) & Average \\ \hline CNN-2 & 95.55 & 82.08 & 89.88 \\ RayNet & 96.92 & 80.42 & 89.81 \\ CNN-5 & 97.95 & **91.51** & **95.24** \\ CNN-5+LRN & **98.29** & 89.86 & 94.74 \\ \hline \multicolumn{4}{c}{Filling techniques comparison} \\ \hline CNN-5 (filling 1) & 97.95 & **91.51** & **95.24** \\ CNN-5 (filling 2) & 97.95 & 84.20 & 92.16 \\ CNN-5 (filling 3) & 97.26 & 83.02 & 91.27 \\ CNN-5 (filling 4) & **98.63** & 81.13 & 91.27 \\ CNN-5 (filling 5) & 98.12 & 81.84 & 91.27 \\ \hline \end{tabular} \end{table} Table 4: Comparison of performance using the Recall metric among different classification methods for the binary classification problem. \(y=0\): healthy; \(y=1\): defect/weld Figure 6: Proposed CNN architecture. ## 7 Conclusion Today, the manual analysis of magnetographic images is a bottleneck for pipeline diagnostics, since it is costly and limited by human resources. This study proves that this process can be fully automated, which is likely to make the analysis more reliable, faster, and cheaper. The CNN-5 network, which outperformed the currently used CNNs for pipeline defect detection, was proposed.
Moreover, the results of the experiments prove that proper preprocessing procedures, including missing-value filling techniques and normalization strategies, help significantly improve the results and achieve high quality in oil pipeline diagnostics. Finally, there are several project development options: 1. To increase the sizes of the datasets, 2. To improve the preprocessing procedures, including manual picture selection, 3. To try multiclass defect classification, 4. To try classification of defected and healthy welds, 5. To apply defect depth evaluation, 6. To investigate the repeatability of the results for similar datasets or the transfer learning possibility.
Recently, the use of computer vision for anomaly detection has attracted attention in various industrial fields. An important example is oil pipeline defect detection. A failure of one oil pipeline can interrupt the operation of the entire transportation system or cause a far-reaching failure. Automated defect detection could significantly reduce the inspection time and the related costs. However, there is a gap in the related literature when it comes to dealing with this task. Existing studies do not sufficiently cover research on Magnetic Flux Leakage data and the preprocessing techniques that allow overcoming the limitations set by the available data. This work aims to overcome these issues, and to that end it exploits recent convolutional neural network structures with the goal of achieving high performance. The proposed approaches and their applicability were verified using real-world data.
2303.17776
Learning Internal Representations of 3D Transformations from 2D Projected Inputs
When interacting in a three dimensional world, humans must estimate 3D structure from visual inputs projected down to two dimensional retinal images. It has been shown that humans use the persistence of object shape over motion-induced transformations as a cue to resolve depth ambiguity when solving this underconstrained problem. With the aim of understanding how biological vision systems may internally represent 3D transformations, we propose a computational model, based on a generative manifold model, which can be used to infer 3D structure from the motion of 2D points. Our model can also learn representations of the transformations with minimal supervision, providing a proof of concept for how humans may develop internal representations on a developmental or evolutionary time scale. Focused on rotational motion, we show how our model infers depth from moving 2D projected points, learns 3D rotational transformations from 2D training stimuli, and compares to human performance on psychophysical structure-from-motion experiments.
Marissa Connor, Bruno Olshausen, Christopher Rozell
2023-03-31T02:43:01
http://arxiv.org/abs/2303.17776v1
# Learning Internal Representations of 3D Transformations from 2D Projected Inputs ###### Abstract When interacting in a three dimensional world, humans must estimate 3D structure from visual inputs projected down to two dimensional retinal images. It has been shown that humans use the persistence of object shape over motion-induced transformations as a cue to resolve depth ambiguity when solving this underconstrained problem. With the aim of understanding how biological vision systems may internally represent 3D transformations, we propose a computational model, based on a generative manifold model, which can be used to infer 3D structure from the motion of 2D points. Our model can also learn representations of the transformations with minimal supervision, providing a proof of concept for how humans may develop internal representations on a developmental or evolutionary time scale. Focused on rotational motion, we show how our model infers depth from moving 2D projected points, learns 3D rotational transformations from 2D training stimuli, and compares to human performance on psychophysical structure-from-motion experiments. ## 1 Introduction For effective interactions with 3D objects and environments, humans must estimate 3D structure from visual inputs projected down to two dimensions in a retinal image. This problem of recovering 3D structure from 2D projections is underconstrained - there are an infinite number of possible depths for each 2D point. To resolve this challenge, humans rely on a variety of cues to infer depth including motion parallax, binocular disparity, texture, occlusions, shadows, size, blur, and shading (Reichelt et al., 2010). Specifically, the persistence of object shape over motion-induced transformations provides a powerful cue (even in the absence of other cues (Petersik, 1979; Braunstein et al., 1987; Todd et al., 1988; Dosher et al., 1989; Sperling et al., 1989; Braunstein, 2014))) that can be used to resolve the depth ambiguity for points on an object's surface and improve accuracy of depth perception. Mathematically there are precise definitions of motion-induced geometric transformations such as rotations and translations, and these can be employed to successfully compute point depths from multiple viewpoints (Longuet-Higgins, 1981; Fischler and Bolles, 1981; Tomasi and Kanade, 1992; Nister, 2005; Pollefeys et al., 2008; Snavely et al., 2006) or frames (Godard et al., 2017; Garg et al., 2016; Xie et al., 2016; Zhou et al., 2017). While all of these methods use analytically constructed models to achieve the objective of estimating depth from multiple viewpoints, we aim to understand how biological vision systems may internally represent 3D transformations, how they could learn or adapt to the statistics of these 3D transformations with minimal supervision, and how this knowledge could be used to aid in discerning structure from motion. The results of mental transformation experiments (Shepard and Cooper, 1986; Lamm et al., 2007), as well as qualitative descriptions from subjects performing mental transformation tasks (Zacks and Michelon, 2005), suggest that humans internally imagine 3D spatial transformations when performing tasks such as identifying rotated reference objects (Shepard and Metzler, 1971; Cooper and Shepard, 1973; Just and Carpenter, 1985). 
We propose a computational model to explain the mechanism for internally representing these transformations and for learning them with minimal supervision (Perry et al., 2010) and show that this model can be used to infer 3D structure from the motion of 2D points. Motivated by the manifold hypothesis which states that natural variations in high-dimensional data lie on or near a low-dimensional, nonlinear manifold (Fefferman et al., 2016), we introduce generative manifold models as a possible mechanism for learning and representing internal models of natural motion-induced transformations. These models represent manifolds through continuous, nonlinear transformation operators that traverse the geometric structure of the manifold (Culpepper and Olshausen, 2009; Sohl-Dickstein et al., 2010; Connor and Rozell, 2020; Connor et al., 2021). The transformation operators can be used to infer relationships between different object views and to interpolate or extrapolate views of transformed objects. Neuroscience research has suggested that the brain explicitly exploits the manifold structure of object variations by using hierarchical processing stages to flatten the manifolds produced by different objects undergoing the same physical transformations (e.g., changes in pose and position) (DiCarlo and Cox, 2007; DiCarlo et al., 2012), but to our knowledge no detailed model has been proposed for how a biological system could learn or represent the manifolds of such natural variations from data. Specifically in this work, we develop a proof of concept for the viability of learning 3D transformation representations from 2D projected inputs using a generative manifold model. Focusing on the rotational motion that is used in many structure from motion tasks (Petersik, 1979; Dosher et al., 1989; Braunstein, 2014), we develop a manifold-based method for inferring depth from moving 2D projected points and learning 3D rotational transformation models from 2D training stimuli. Finally, we apply the learned transformation model to structure from motion tasks and compare to human performance on psychophysical experiments. ## 2 Background In this work, we focus on the development of a model that can learn transformations which may be used to model internal mental rotation. While there have been computational models introduced for mental rotation (Funt, 1983; Fukumi et al., 1997; Inui and Ashizawa, 2011; Seepanomwan et al., 2015), they have both assumed prior knowledge of the rotational transformations and been focused on modeling specific brain areas that are involved in this process. In contrast, we focus on the representation of the transformation model itself, including the learning and inference process within such a model. We use our model of 3D transformations to infer point depths from 2D projections of moving points. The ability for humans to perceive depth from moving points and objects, known as the kinetic depth effect (Wallach and O'connell, 1953), has been extensively studied in both psychology and computer vision. The kinetic depth effect has been investigated through a wide array of psychophysical experiments suggesting that humans can generate stable percepts of 3D structures under a wide variety of conditions (Petersik, 1979; Braunstein et al., 1987; Todd et al., 1988; Dosher et al., 1989; Sperling et al., 1989; Braunstein, 2014).
Computational models have been developed to estimate 3D point-cloud structure from multiple views of an object or scene using multiview geometry (Longuet-Higgins, 1981; Tsai and Huang, 1984; Hartley, 1997; Hartley and Sturm, 1997; Hartley and Zisserman, 2004), factorization methods (Tomasi and Kanade, 1992; Kanade and Morris, 1998), and neural network-based models (Eigen et al., 2014; Ladicky et al., 2014; Liu et al., 2015; Godard et al., 2017; Garg et al., 2016; Xie et al., 2016). All of these methods, while very successful at estimating camera motion and depth, have requirements that make them poor representations of the neural mechanisms for learning types of motion, inferring motion in scenes, and estimating point depths. Several methods assume prior knowledge of the types of transformations present in temporal visual inputs (i.e., rotation and translation) and rely on a mathematical specification of how to apply rotational and translational motion through matrix multiplication. Other methods require ground truth depth labels in order to train a system to estimate depth and motion. We cannot assume that vision systems know ground truth point depths or how to apply natural transformations during the development of mechanisms for estimating motion and depth of scenes. In this work, we learn a representation of 3D transformations from observed point motion itself, without a prior assumption about how these transformations affect points in a scene. This could resemble how internal mental transformation models are developed. Importantly, this model is learned using only 2D moving points without requiring ground truth knowledge of point depth. ## 3 Model Description We aim to develop a model for learning 3D rotational transformations from 2D projected inputs and to use that model to describe how humans may employ motion cues to recover the 3D structure of objects in their environment. This perceptual setting is visualized in Fig. 1 where an object is transforming in 3D but the visual inputs are in the form of 2D projections on the retina. In particular, each object is represented as a combination of 3D key points \(\mathbf{x}^{(i)}\in\mathbb{R}^{3},\;\;i=\{1,...,N_{P}\}\) that are projected to 2D point locations \(\mathbf{y}^{(i)}\in\mathbb{R}^{2}\). We assume rigid body motion and incorporate a transformation model that can constrain the possible 3D motion between different transformed viewpoints. This provides the structure necessary to infer depth from points on a moving object. We develop a method that uses a generative manifold model as a representation of transformations, and we show that we can learn rotational transformation operators and use them to accurately infer point depth from rotating points and scenes. We build up to the learnable model of natural transformations in two steps. In the first step, we assume the 3D rotational transformation model is known and we develop a method for inferring the depth of 2D projections of rotating points. In the second step, we utilize the depth inference approach from the first step to develop a learning model that can adapt the transformation representation to ensure it corresponds to the real world transformations. We will preface the descriptions of each of these tasks with an overview of the transport operator model, a learnable generative manifold model. Figure 1: Visualization of the 3D depth inference problem. Three dimensional points on an object are jointly transformed in the 3D world view and the visual inputs are in the form of 2D projected points.
### Transport Operator Model The transport operator technique is a specific manifold learning model that learns to transform points through nonlinear Lie group operators, known as transport operators, that traverse a manifold (Rao and Ruderman, 1999; Miao and Rao, 2007; Culpepper and Olshausen, 2009; Sohl-Dickstein et al., 2010; Cohen and Welling, 2014; Hauberg et al., 2016; Connor and Rozell, 2020; Connor et al., 2021). Lie group operators represent infinitesimal transformations which can be applied to data through an exponential mapping to transform points along a manifold. In particular, this model learns a dictionary of \(M\) transport operators \(\mathbf{\Psi_{m}}\) which each represent a different transformation. These operators are effective for representing an internal transformation model for a few reasons. First, once learned, the transport operators are stored as a representation of possible transformations that may be experienced or observed. This means they can be reused in the future when the same type of transformation is visualized. Second, the transport operator model is a generative manifold model, meaning it can interpolate and extrapolate new views of points undergoing a learned transformation. This provides a way of creating an internal visualization of how an object transforms, similar to what humans describe when performing mental transformation tasks (Zacks and Michelon, 2005). Finally, transport operators can be used to infer the relationship between points in two separate viewpoints and define the 3D transformations between them. With the transport operator model, the relationship between two individual 3D points \(\mathbf{x}_{0}^{(i)}\) and \(\mathbf{x}_{1}^{(i)}\) is defined as follows: \[\begin{split}\mathbf{x}_{0}^{(i)}=\mathrm{expm}\left(\sum_{m=1}^{M }\boldsymbol{\Psi}_{m}c_{m}\right)\mathbf{x}_{1}^{(i)}+\mathbf{n},\\ \mathbf{n}\sim\mathcal{N}(0,I)\quad c_{m}\sim\text{Laplace}\left( 0,\frac{1}{\zeta}\right),\end{split} \tag{1}\] where \(\mathbf{c}\in\mathbb{R}^{M}\) is the set of coefficients that specifies the local structure of transformations between \(\mathbf{x}_{0}^{(i)}\) and \(\mathbf{x}_{1}^{(i)}\). Given this relationship between points, the original work from Culpepper and Olshausen (2009) defines the negative log posterior of the model as: \[E_{\Psi}=\frac{1}{2}\left\|\mathbf{x}_{0}^{(i)}-\mathrm{expm}\left(\sum_{m=1} ^{M}\boldsymbol{\Psi}_{m}c_{m}\right)\,\mathbf{x}_{1}^{(i)}\right\|_{2}^{2}+ \frac{\gamma}{2}\sum_{m}\|\boldsymbol{\Psi}_{m}\|_{F}^{2}+\zeta\|\mathbf{c}\| _{1}, \tag{2}\] where \(\|\cdot\|_{F}\) is the Frobenius norm and \(\gamma,\zeta\geq 0\). The first term is a data fidelity term that specifies how well \(\mathbf{x}_{0}^{(i)}\) can be represented as a transformed version of \(\mathbf{x}_{1}^{(i)}\) when the transformations are constrained by the current dictionary of operators \(\boldsymbol{\Psi}\). The data fidelity objective term is an indication of how well the transport operators fit the data manifold. The second term is a Frobenius norm regularizer on the dictionary elements which constrains the growth of the dictionary magnitudes and helps identify how many operators are necessary for representing transformations on the data manifold.
The third term is the sparsity regularizer which encourages each transformation between point pairs to be represented with a sparse set of coefficients. Given a dictionary of operators \(\boldsymbol{\Psi}\), the 3D transformation between \(\mathbf{x}_{0}^{(i)}\) and \(\mathbf{x}_{1}^{(i)}\) can be estimated by inferring a set of transport operator coefficients \(\mathbf{c}\). This inference is performed by minimizing \(E_{\Psi}\) when \(\gamma=0\). If the operators need to be learned or adapted, \(E_{\Psi}\) is used as an objective for transport operator training as well. Training proceeds by alternating between coefficient inference between point pairs and gradient steps on the transport operators. We adapt the transport operator model to use time-varying views of transforming points in a 2D projection plane to learn a generative motion model. We begin by developing an inference procedure that enables joint depth estimation and coefficient inference from pairs of 2D input points in different viewing frames. ### Depth Inference with Projected Inputs In this section, we will assume that the rotational transport operators are either known _a priori_ or already learned. We describe the training procedure in Section 4.2. Fig. 2(a) shows a top-down view of the setup of this problem. The eye located at the red 'x' in the center represents the viewer at the origin. The placement of the viewer at the origin is natural for learning a representation of self-motion in an egocentric viewing framework where the human is the origin. However, the model is flexible and the same setup can be used to infer object motion in the allocentric viewing framework (see Appendix A for more details). Each 3D point \(\mathbf{x}^{(i)}\) is projected onto the viewing plane to a corresponding 2D point \(\mathbf{y}^{(i)}\) with an associated depth \(\lambda^{(i)}\): \(\mathbf{y}^{(i)}=\mathbf{K}\mathbf{x}^{(i)}\). The matrix \(\mathbf{K}\) is the orthographic projection matrix, which is defined as \(\mathbf{K}=\begin{bmatrix}1&0&0\\ 0&1&0\end{bmatrix}\) in all of our experiments. This projection matrix corresponds to setting the viewing plane to the \(xy\)-plane and defining the unknown depth as the \(z\)-coordinate of the 3D input points. It is assumed that \(\mathbf{K}\) is known during processing. We observe \(N_{P}\) points transforming jointly on a rigid object and concatenate all \(N_{P}\) points into a matrix: \(\mathbf{X}_{0}=\begin{bmatrix}\mathbf{x}_{0}^{(1)}&\ldots&\mathbf{x}_{0}^{(N_{P})}\end{bmatrix}\). The relationship between points in two consecutive frames at \(t=0\) and \(t=1\) is defined as: \[\mathbf{Y}_{0}=\mathbf{K}\mathbf{T}(\mathbf{c})\widehat{\mathbf{X}}_{1}\left( \lambda\right)+\mathbf{W}, \tag{3}\] where \(\mathbf{W}\) is a Gaussian noise matrix, \(\widehat{\mathbf{X}}_{1}\) is a matrix of estimated 3D point locations associated with \(\mathbf{Y}_{1}\), and \(\mathbf{T}(\mathbf{c})\) is the matrix exponential of a weighted combination of transport operators which can each represent a different type of motion: \[\mathbf{T}(\mathbf{c})=\mathrm{expm}\left(\sum_{m=1}^{M}\mathbf{\Psi}_{m}c_{m }\right). \tag{4}\] In (3) we define \(\mathbf{Y}_{0}\) at \(t=0\) as a transformation of points at \(t=1\) in order to estimate the 3D point locations in \(\widehat{\mathbf{X}}_{1}\) at \(t=1\) in a causal manner, as we describe below. To compute \(\widehat{\mathbf{X}}_{1}\), we reverse the process of the projection matrix in two steps.
First, we concatenate \(\mathbf{Y}_{1}\) with a row of zeros in the \(z\)-coordinate position that is lost during projection: \[\widetilde{\mathbf{X}}_{1}=\begin{bmatrix}\mathbf{y}_{1}^{(1)}&\cdots&\mathbf{y}_{ 1}^{(N_{P})}\\ 0&\cdots&0\end{bmatrix}. \tag{5}\] Second, we add the depths to the newly introduced dimension. To do this, we compute the outer product between the standard basis vector associated with the axis lost during projection, \(\mathbf{e}_{z}\), and the depth vector \(\lambda\in\mathbb{R}^{N_{P}}\), resulting in a matrix with two rows of zeros and one row containing the estimated depths, and add that to \(\widetilde{\mathbf{X}}_{1}\): \[\widehat{\mathbf{X}}_{1}\left(\lambda\right)=\widetilde{\mathbf{X}}_{1}+ \mathbf{e}_{z}\lambda^{\top}. \tag{6}\] This model for incorporating the estimated depths can be integrated into the data fidelity term of the objective function in (2) and used for jointly inferring the depth \(\lambda\) and coefficients \(\mathbf{c}\) between point pairs. \[L_{\text{df}} =\frac{1}{2}\sum_{i=1}^{N_{P}}\|\mathbf{y}_{0}^{(i)}-\mathbf{K} \mathbf{T}(\mathbf{c})\widehat{\mathbf{x}}_{1}^{(i)}\left(\lambda^{(i)} \right)\|_{2}^{2} \tag{7}\] \[=\frac{1}{2}\text{trace}\left(\left(\mathbf{Y}_{0}-\mathbf{K} \mathbf{T}(\mathbf{c})\widehat{\mathbf{X}}_{1}(\lambda)\right)^{\top}\left( \mathbf{Y}_{0}-\mathbf{K}\mathbf{T}(\mathbf{c})\widehat{\mathbf{X}}_{1}( \lambda)\right)\right). \tag{8}\] We add two additional constraints to this model to improve the consistency of accurate depth estimation. First, we incorporate a Gaussian prior on the depths which constrains them to magnitudes consistent with the ground truth depths of the rotating objects. Second, we group several consecutive views of the transforming points to reverse the projection procedure on points in the final frame in the sequence. We refer to this sequence of frames as the inference window. During inference and learning, we use ground truth knowledge of point correspondences between frames. From this inference window, we can obtain a causal estimate of the depth in the final frame and infer a fixed set of coefficients that represents the transformations between each consecutive view. This assumes a fixed transformation speed over multiple frames, which can be seen as an extension of the slowness principle to natural transformations that persist over time (Wiskott and Sejnowski, 2002). Fig. 2b shows this setting where the same coefficients \(\mathbf{c}\) are inferred between points in each neighboring frame and the depth is inferred for the final point in the sequence. Figure 2: (a) Top-down views of the depth inference problem setup for points rotating on a cylinder. The 3D points \(\mathbf{x}^{(i)}\) have an associated depth \(\lambda^{(i)}\). Each point is projected onto the orange viewing plane using the projection matrix \(\mathbf{K}\). This results in the 2D projected points \(\mathbf{y}^{(i)}\). The 3D points are rotating counter-clockwise around the axis and the points in the blue shaded box on top indicate the direction of motion of the projected points. (b) Visualization of the inference window sequence for a single point. The inference window is made up of several frames of transformed points. We assume that the transformation speed is constant between the frames, resulting in a constant coefficient value representing the transformation from one frame to the next. The depth \(\lambda\) is inferred for the final frame in the sequence.
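To make the depth-lifting construction above concrete, here is a minimal sketch of Eqs. (4)-(7), assuming NumPy/SciPy and the orthographic projection to the \(xy\)-plane; the variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])      # orthographic projection matrix
e_z = np.array([0.0, 0.0, 1.0])      # basis vector of the axis lost in projection

def T(Psi, c):
    """Eq. (4): matrix exponential of a weighted sum of transport operators."""
    return expm(sum(c_m * P_m for c_m, P_m in zip(c, Psi)))

def lift(Y1, lam):
    """Eqs. (5)-(6): append a zero z-row to the 2D points, then add the depths."""
    X_tilde = np.vstack([Y1, np.zeros(Y1.shape[1])])
    return X_tilde + np.outer(e_z, lam)

def data_fidelity(Y0, Y1, Psi, c, lam):
    """Eq. (7): squared reprojection error between frames t=0 and t=1."""
    residual = Y0 - K @ T(Psi, c) @ lift(Y1, lam)
    return 0.5 * np.sum(residual ** 2)

# Toy usage with one small random operator and 4 point pairs:
rng = np.random.default_rng(0)
Psi = [0.01 * rng.standard_normal((3, 3))]
Y0, Y1 = rng.standard_normal((2, 4)), rng.standard_normal((2, 4))
print(data_fidelity(Y0, Y1, Psi, c=[0.5], lam=np.zeros(4)))
```

In the full objective below (Eq. (10)), this term is summed over the inference window and combined with the sparsity and depth regularizers before jointly minimizing over \(\mathbf{c}\) and \(\lambda\).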
Using more than two motion frames for depth inference provides additional information that can be used to resolve depth ambiguities. To model this setting, we generalize (3) for \(N_{T}\) viewing frames: \[\mathbf{Y}_{N_{T}-n}=\mathbf{K}\mathbf{T}^{n}(\mathbf{c})\widehat{\mathbf{X}} _{N_{T}}\left(\lambda\right)=\mathbf{K}\mathbf{T}(n\mathbf{c})\widehat{ \mathbf{X}}_{N_{T}}\left(\lambda\right),\quad n=\{1,...,N_{T}\}, \tag{9}\] where the change from \(\mathbf{T}^{n}(\mathbf{c})\) to \(\mathbf{T}(n\mathbf{c})\) is possible because raising the matrix exponential to the power \(n\) is the same as applying the same transformation \(\mathbf{T}(\mathbf{c})\) \(n\) times, and thus the same as multiplying its transformation coefficients by \(n\). We define an objective that leverages multiple views and a depth regularizer: \[L=\frac{1}{2N_{T}}\sum_{n=1}^{N_{T}}\sum_{i=1}^{N_{P}}\left[\| \mathbf{y}_{N_{T}-n}^{(i)}-\mathbf{K}\mathbf{T}(n\mathbf{c})\widehat{\mathbf{ x}}_{N_{T}}^{(i)}(\lambda)\|_{2}^{2}\right]+\zeta\|\mathbf{c}\|_{1}+\frac{\beta}{2}\|\lambda\|_{2}^{2}+\frac{\gamma}{2}\sum_{m}\|\mathbf{\Psi}_{m}\|_{F}^{2}. \tag{10}\] With this objective, the depth \(\lambda\) and the coefficients \(\mathbf{c}\) can be jointly inferred for sequences of transforming points. See Section E for more details on the inference process. To highlight the effectiveness of this inference model, we will examine how accurately it can infer depths and transformations using ground truth rotational operators and explore the requirements for the inputs that lead to robust depth estimation. ## 4 Results ### Depth Inference Experiments Three-dimensional rotation matrices can be defined as elements of the 3D rotation group SO(3), and ground truth rotational transport operators can be derived from elements of the \(\mathfrak{so}(3)\) Lie algebra (Hall, 2015). Fig. 3 shows the trajectories of these ground truth 3D rotational operators, each rotating around one of the principal axes. These plots are generated by selecting a few example starting points on a sphere and applying individual dictionary elements \(\mathbf{\Psi_{m}}\) to each point as they evolve over time: \(\mathbf{x}_{t}^{(i)}=\mathrm{expm}(\mathbf{\Psi_{m}}\frac{t}{T})\mathbf{x}_{0} ^{(i)}\), \(t=0,...,T\). Using (10) and the fixed \(\mathbf{\Psi}\) representing ground truth rotational operators, we jointly infer the coefficients and depths from a sequence of transforming points. Fig. 2(b) shows a visualization of this inference setting for a single point. Given \(N_{T}\) views of rotating points, we infer the depth \(\lambda\) for the projected points in the last viewpoint \(\mathbf{Y}_{N_{T}}\) as well as the coefficients \(\mathbf{c}\) that correspond to the shared transformation between every pair of consecutive views in the sequence. This ensures that the depths at time \(N_{T}\) are inferred from samples preceding it in the motion sequence, making this a causal estimate. Because the objective (10) is nonconvex, inference may result in local minima. To avoid local minima, we perform inference for the same inference window several times using several random restarts. That is, we randomly sample a new initialization and infer coefficients and depths using that starting point. This often results in different final inferred outputs. Figure 3: Trajectories generated by ground truth rotational transport operators. Each line represents the trajectory of an individual transport operator dictionary element applied to one of several example starting points selected on the sphere. These three operators generate rotation around each of the three principal axes.
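For concreteness, ground-truth operators of the kind plotted in Fig. 3 can be written down directly from the \(\mathfrak{so}(3)\) basis; the following sketch (our own illustration, not the authors' code) builds the three generators and traces \(\mathbf{x}_{t}=\mathrm{expm}(\mathbf{\Psi}_{m}\frac{t}{T})\mathbf{x}_{0}\) as described above.

```python
import numpy as np
from scipy.linalg import expm

# The three so(3) generators: infinitesimal rotations about the x-, y-, z-axes.
G = [np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]]),   # about x
     np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]]),   # about y
     np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])]   # about z

def trajectory(Psi_m, x0, T=100):
    """x_t = expm(Psi_m * t/T) @ x_0 for t = 0, ..., T (the curves of Fig. 3)."""
    return np.stack([expm(Psi_m * t / T) @ x0 for t in range(T + 1)])

x0 = np.array([1.0, 0.0, 0.0])          # an example starting point on the sphere
path = trajectory(G[2], x0)             # arc of rotation about the z-axis
print(np.allclose(np.linalg.norm(path, axis=1), 1.0))  # rotations preserve norm -> True
```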
We choose the inferred output associated with the lowest final objective from inference. Fig. 4 shows examples of depth inference for points on the surfaces of three different shapes. The plots in the first column show the visual input of projected points in the final viewing plane of the sequence. The plots in the second column show a side view of the point stimuli where the ground truth depth locations are shown on the \(x\)-axis of the plot. The plots in the third column show a side view with the estimated depth locations for each of the points. In both the second and third columns, the points are colored by the ground truth depths. This shows that the estimated depths correspond with the ground truth depths for a variety of shapes. Figure 4: Inferred depths for points on the surface of different shapes. The first column shows the 2D point projections in the final viewing plane. The second column shows a side view of the ground truth 3D point stimuli where the \(x\)-axis in the plots is the depth axis. The third column shows a side view of the estimated depths for the projected points. The points in the second and third columns are colored by the ground truth depth. (a-c) Cylinder (d-f) Sphere (g-i) Cube. We quantitatively evaluate the accuracy of the inferred depths for many trials to analyze the impact of various model parameters. There are three parameters of particular interest during inference. First, we are interested in the impact of the perceptual extent of rotation viewed in a sequence of frames. The perceptual extent of rotation is a combination of two parameters: the number of frames in a rotation sequence \(N_{T}\) and the ground truth rotation angle between each frame in the rotation sequence \(\theta\). The full angular extent of rotation viewed is \(\theta_{\text{path}}=N_{T}\theta\). Experiments have shown that larger angular extents of rotation can lead to more accurate depth estimates for human subjects (Hildreth et al., 1990). Next, we are interested in the impact of the number of jointly transforming points \(N_{P}\). This indicates the amount of coherent rotational motion viewed in the input stimulus. Psychophysical experiments have been run that indicate a greater number of coherently moving points leads to a more robust depth percept (Todd et al., 1988; Dosher et al., 1989; Sperling et al., 1989; Braunstein et al., 1987). In order to quantitatively evaluate the success of inferred depth, we use two metrics. The first is the mean squared error between the estimated depths and the ground truth depths for the \(N_{P}\) rotating points. Ideally, depth inference would result in low MSE between the estimated depth and the ground truth depth. However, with a rotational transformation such as the one we work with here, there exists a depth-angle ambiguity. Namely, when viewing projected points \(\mathbf{y}_{0}^{(i)}\) and \(\mathbf{y}_{1}^{(i)}\) from two separate views, they could be either projections of points with large depths that undergo rotation with a smaller angle or points with small depths that undergo rotation with a larger angle (see Appendix B for a visualization). While it is not ideal for the depth to be off by a scaling factor, the inferred structure can still be accurate.
Additionally, this depth ambiguity is observed in experiments with human subjects (Todd et al., 1988). In psychophysical experiments, one metric for determining the accuracy of a percept is to compare the estimated ordering of point depths to the ground truth ordering of point depths (Hildreth et al., 1990). To analyze the accuracy of the inferred structure in the presence of potential scaling in depth, we compare the ordering of the inferred depths to that of the ground truth depths using the Kendall's Tau rank correlation coefficient (Kendall, 1938). We compare the Kendall's Tau between all \(N_{P}\) points as well as between five randomly selected points. We choose to compare the ordering of five randomly selected points in order to define a metric that can be used to fairly compare the performance as the number of points increases. With greater \(N_{P}\), even if depths are accurate within some error range, there is a greater chance of incorrectly ordering a few points because there is a greater point density. Therefore, comparing five randomly sampled points should provide a consistent metric as we vary \(N_{P}\). Fig. 5a and 5b show the median depth error as we vary the angular extent of rotation \(\theta_{\text{path}}\). In Fig. 5a, each line represents a different number of frames \(N_{T}\). The error bars in all plots represent the bootstrap confidence interval. For each line in Fig. 5a, because the values of \(N_{T}\) are fixed, moving along the \(x\)-axis corresponds to increasing the angle between frames \(\theta\). Each of these lines has a clear minimum, and this minimum occurs at an angular extent in the range of \(N_{T}\) to \(2N_{T}\). This corresponds to an angle \(\theta\) between frames of \(1^{\circ}-2^{\circ}\). Up to this minimum value, the depth error decreases as the rotational extent increases, indicating the benefit of a more complete view of the rotational sequence. For smaller rotational extents, the inference results in low objective function values but large depth errors. This indicates that there are local minima of the optimization that achieve low objective function values but do not accurately reflect the rotation sequence and point depth. With greater rotational extents, the low objective function values correspond more directly to accurate depth inference. As \(\theta\) increases, the magnitudes of the coefficients corresponding to the true rotation increase. This presents a challenge for the non-convex objective because the space of possible coefficient values increases. Therefore, randomly selecting initializations of the coefficients in the neighborhood of the true minimum is less likely, which results in more solutions that correspond to inaccurate local minima. The optimal values between \(N_{T}\) and \(2N_{T}\) have a large enough rotational extent such that low objective function values correspond to accurate depths, but a small enough rotational extent that we can randomly initialize inference with coefficients that result in minima close to the ground truth depth values. Fig. 5b breaks down the performance for individual values of \(\theta\). As \(\theta\) increases up to \(\theta=2^{\circ}\), the depth error decreases with increasing \(\theta\). For \(\theta\) greater than \(5^{\circ}\), the performance starts to degrade. This is consistent with the patterns in Fig. 5a, and it indicates that rotation angles that are too large between frames (corresponding to fast rotational motion) result in less accurate depth inference.\({}^{1}\)
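The ordering metric itself is standard; a minimal sketch using SciPy's Kendall's Tau, with the five-point subsampling described above, is given below. The toy data (assumed by us for illustration) also shows why this metric suits the depth-angle ambiguity: a positive rescaling of the depths leaves the ordering, and hence Tau, unchanged.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)

def ordering_score(depth_true, depth_est, n_sample=5):
    """Kendall's Tau between ground-truth and estimated depth orderings,
    evaluated on a random subset of points (the five-point variant)."""
    idx = rng.choice(len(depth_true), size=n_sample, replace=False)
    tau, _ = kendalltau(depth_true[idx], depth_est[idx])
    return tau

depth_true = rng.standard_normal(20)
depth_est = 0.7 * depth_true + 0.05 * rng.standard_normal(20)  # scaled, noisy estimate
print(ordering_score(depth_true, depth_est))  # close to 1: ordering survives rescaling
```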
In the remaining tests of inference performance with ground truth operators, unless otherwise stated, we set \(N_{T}=30\) and \(\theta=2^{\circ}\). See Section E for more details on model parameters. Footnote 1: We should note that the inference optimization experiences inaccuracy even with a large number of random restarts as a greater number of frames is used in the inference window, leading to large increases in depth error for \(N_{T}>50\) in many settings. Therefore, in Fig. 5b, we only display lines up to an angular extent of rotation for which the non-convex optimization inaccuracy impacts the solutions. The angular extent where this occurs is smaller for the smaller values of \(\theta\) because they require larger values of \(N_{T}\) to achieve the same angular extent of rotation. Figure 5: Quantitative metrics for depth inference when varying the angular extent of rotation, the number of coherently transforming points \(N_{P}\), and the standard deviation of Gaussian noise added to the ground truth operators. (a) Median depth error as angular extent increases. Each line is generated with different numbers of frames in the inference window \(N_{T}\). The optimal performance occurs for angular extents in the range of \(N_{T}\) to \(2N_{T}\). (b) Median depth error as angular extent increases. Each line is generated with different angles of rotation between sequence frames \(\theta\). A rotation angle of \(\theta=2^{\circ}\) results in the lowest depth error at \(180^{\circ}\) of rotation. (c) Median depth error as \(N_{P}\) increases. (d) Mean Kendall’s Tau for 5 randomly selected points as \(N_{P}\) increases. Values of this metric for \(N_{P}<5\) are set to zero because there are not enough points to compare five randomly selected points. (e) Median depth error as the standard deviation of dictionary noise increases. (f) Mean Kendall’s Tau for 5 randomly selected points. Fig. 5c and 5d show the median depth error and the mean Kendall's Tau as we vary \(N_{P}\). This shows that the depth estimation improves as more points are added, with a large performance improvement from \(N_{P}=1\) to \(N_{P}=10\). We reason that this improvement is due to the reduction in transformation ambiguity that results from seeing more points rotating jointly. The greater the number of points on a rigid object undergoing the same transformation, the more information our model has about the correct transformation and depth. Going forward, we use \(N_{P}=20\). Research in structure from motion has shown that increasing the number of transforming points improves the general depth percept (Todd et al., 1988; Dosher et al., 1989; Sperling et al., 1989) but it may not increase the accuracy of the inferred depths (Braunstein et al., 1987). The final quantity we analyze in this controlled setting with ground truth rotational operators is the effect of adding Gaussian noise to the operators. Noisy operators depart from the ground truth rotational transformations, and analyzing the performance with noisy operators can indicate the impact of accurate rotational transformation models on effective depth inference. Fig. 5e and 5f show the median depth error and mean Kendall's Tau metrics as noise is added to the ground truth operators. Both metrics indicate that the depth inference is robust to noise with a standard deviation of around \(10^{-3}\) - \(10^{-2}\), but performance decreases sharply with noise larger than that.
This shows that the model can perform effectively with some transformation inaccuracy but performance decreases with increasingly inaccurate transport operators. This highlights the necessity of accurate rotational operators and inspires the learning and adaptation procedure introduced and analyzed in the next section. ### Learning 3D Transport Operators from 2D Projected Inputs A stated goal of this work is to develop a model that can learn 3D transformational representations from rotating 2D projected input points. The learning procedure is a straight-forward extension of the coefficient and depth inference model from the previous sections. Training of the transport operator dictionary elements is performed using gradient descent. For each training step, a sequence of projected rotated points \(\mathbf{Y}_{n},\;\;n=\{1,...,N_{T}\}\) is generated. First the dictionary weights are fixed and the depth and coefficients are inferred. Then, fixing the depth and coefficients, the gradient on the dictionary elements is computed using the objective in (10) with \(\zeta=\beta=0\). If this gradient step improves the objective, then it is accepted. Otherwise, the step is rejected and the learning rate is decreased. See Section F for more details on the training procedure. With this training procedure, we are able to learn rotational transport operators from randomly initialized operators. Fig. 6 shows the trajectories of the operators during one training run in which the number of dictionary elements \(M\) is set to 3. At the beginning of training, the trajectories do not correspond to common geometric transformations but they quickly adapt to represent near-rotational operators with trajectories similar to the ground truth operators shown in Fig. 3. In Section F we show an example of learning rotational operators from a dictionary with six operators. We quantitatively compare these operators to the ground truth operators using the same depth MSE and Kendall's Tau metrics employed to analyze inference success. We can compute the depth error and Kendall's Tau metrics for inferred depths using operators at various points during training and compare them to the metric values resulting from depth inference using the ground truth operators with noise added. This gives us a proxy for estimating the deviation between the learned operators and ground truth rotational operators. In Fig. 7, we show the depth error and Kendall's Tau for depths inferred using transport operators at different points in the training procedure. For reference, we also plot straight lines which correspond to the values for these metrics in Fig. 5e and 5f which are computed using ground truth rotational operators with added noise with standard deviations of \(10^{-3},10^{-2},10^{-1}\), and \(1\). This shows that our method learns operators that are close in structure to the ground truth operators and the performance they achieve is similar to ground truth operators with additive Gaussian noise with a standard deviation of \(10^{-2}\). Additionally we see that the depth inference performance with learned operators improves significantly over the first 100-200 training steps but requires fine-tuning for many steps after that to achieve optimal performance. ### Kinematogram Experiments Random dot kinematograms are displays of dots on the surface of or within invisible rotating shapes. Still frames of kinematogram inputs appear as random dots with no perceptible structure (see Fig. 8a). 
However, the motion of the dots elicits the perception of a 3D structure. Fig. 8 shows the 2D projection of random points along with the 3D structure of the points on the surface of a cylinder. Figure 8: Visualization of kinematogram visual stimuli. (a) Example of a 2D kinematogram stimulus, which is the projection of random dots on the surface of a cylinder. (b) 3D ground truth structure of the points in the kinematogram stimulus. The points are randomly sampled from the cylinder surface and colored by their depth. This perception of depth through motion is termed the "kinetic depth effect" (Wallach and O'connell, 1953). The random dot kinematogram visual stimulus has been used for many structure from motion experiments because it isolates the use of motion cues from the use of other possible depth cues. We use our depth inference model with transport operators learned from 2D projections of rotational motion in order to estimate depths for random points that are located within the volume of invisible rotating shapes. We compare characteristics of our experimental results to the performance of humans on structure from motion tasks with random dot kinematograms. For these experiments, we create kinematogram stimuli by randomly selecting \(N_{P}\) 3D points within the volume of a cylinder. Sequences are generated by rotating points around the \(x\)-axis at a rotational speed specified by the angle between frames \(\theta\). Figure 6: Transport operator trajectories during training. Each row represents one of the three learned operators. Each column shows the trajectories at a different training step. The operators begin with random initializations at step 1 and quickly reach a rotation structure around 200 steps. From 200 steps to 9000 steps, the operators vary relatively slowly, resulting in operators with clear rotational structure at the end of training. The points in each frame are orthographically projected to the \(xy\)-plane. Point correspondences are estimated by pairing nearest neighbors in the projected inputs from one frame to the next using the Hungarian algorithm (Kuhn, 1955); a minimal sketch of this matching step is given below. We use the inference procedure described in Section 3.2 to infer the depth and coefficients. During inference, we use an inference window of \(N_{T}-1\) preceding frames to infer depths for the points in the current frame. We can vary the parameters of the stimuli and the inference procedure and analyze their impact. Fig. 9 shows depths that are inferred for a random dot kinematogram sequence on a cylindrical structure by minimizing the objective in (10). In this experiment, \(N_{P}=20\), \(\theta=2^{\circ}\) and \(N_{T}=30\). Each line in the top and middle plots of Fig. 9a is the depth for one of five stimulus points. In the early stages of the kinematogram sequence, the number of frames in the inference window is only as large as the number of frames that have appeared (which is less than \(N_{T}\)). Once more than \(N_{T}\) frames have appeared, the depth and coefficient inference will make use of only the current frame and the \(N_{T}-1\) preceding frames. This build up in the angular extent of rotation explains the larger depth errors early in the sequence, and we will analyze this further below. The estimated depth in Fig. 9 has discontinuities that result in the sign of the depth switching. This natural phenomenon is due to the fact that the orthographically projected random dot kinematogram stimulus is a bistable perceptual representation (Andersen and Bradley, 1998). That is, it is an ambiguous representation in which there are two correct perceptual structures. All the points could be rotating in a clockwise direction with a specific combination of positive and negative depths, or they could be rotating in a counterclockwise direction with the opposite combination of positive and negative depths. Figure 7: Inferred depth metrics using operators at different steps in training. Dotted lines represent values of error metrics for depths inferred using ground truth operators with additive Gaussian noise with the standard deviation specified in the legend. These values are obtained from the plots in Fig. 5e and 5f. (a) Median depth error for depth inference performed with operators at different steps in training. The depth error decreases significantly after 200 training steps and continues to decrease until the end of training. The depth error achieves a value consistent with the estimates using ground truth operators with additive Gaussian noise with a standard deviation of \(10^{-2}\). (b) Mean Kendall’s Tau for 5 randomly selected points. The Kendall’s Tau increases significantly around 200 training steps and starts to plateau around 1000 training steps. The Kendall’s Tau reaches a value consistent with ground truth operators with a noise standard deviation of \(10^{-3}\).
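A minimal sketch of the nearest-neighbour matching step mentioned above, assuming SciPy's Hungarian-algorithm solver and squared Euclidean distances as the matching cost; this illustrates the described procedure rather than reproducing the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_points(Y_prev, Y_next):
    """Pair 2D points across frames by minimizing the total squared distance.
    Y_prev, Y_next: arrays of shape (2, N_P); returns, for each point in
    Y_prev, the index of its match in Y_next."""
    diff = Y_prev.T[:, None, :] - Y_next.T[None, :, :]   # (N, N, 2) differences
    cost = np.sum(diff ** 2, axis=-1)                    # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)
    return cols

# Toy check: a shuffled, slightly perturbed copy should be matched exactly.
rng = np.random.default_rng(2)
Y0 = rng.standard_normal((2, 6))
perm = rng.permutation(6)
Y1 = Y0[:, perm] + 0.01 * rng.standard_normal((2, 6))
print(np.array_equal(match_points(Y0, Y1), np.argsort(perm)))  # True
```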
Each of these perceptual estimates is equally correct for the stimulus. Therefore, when computing the error metrics, we correct for the direction of the inferred rotation so it corresponds to the ground truth direction (which is clockwise in all of our experiments). This is done by generating a path with the inferred transformation coefficients and identifying the rotation direction of the points on that path. If the inferred rotation is moving in a counterclockwise direction, we reverse the signs of the depths prior to computing the error metrics. The bottom plot of Fig. 9a shows the depth error for the kinematogram sequence. The depth error is high at the beginning of the sequence due to the limited angular extent of rotation. As the angular extent of rotation increases, the depth error decreases and remains low even while the signs of the depths switch. In Fig. 9b, we overlay the estimated depth for individual points on top of sequences with both positive and negative ground truth depth values. This shows that, whichever direction of rotation is inferred, the depths are aligned with either the positive or negative ground truth depth values. Figure 9: Example of depths inferred for random dots in a kinematogram sequence. (a) In the top plot, each line represents the ground truth depth of a single random point over the rotational sequence of the kinematogram. In the middle plot, each line represents the estimated depth for the same points as in the top plot. The bottom plot shows the depth error between the estimated and ground truth depths over the sequence. (b) Each plot shows the estimated depth for a single point with the sequences of positive and negative ground truth depths overlaid. This bistable phenomenon is observed in psychophysical experiments as well. Specifically, subjects incorrectly identify the rotation direction of orthographically projected stimuli \(50.3\%\) of the time (Petersik, 1979). In the experiments shown in Fig. 10a, the clockwise rotation is estimated \(50.04\%\) of the time. Fig. 10 contains plots demonstrating the performance of our model on random dot kinematogram stimuli as we vary parameters of both the inference algorithm and the kinematogram inputs. These plots report the depth MSE and the Kendall's Tau for 5 randomly selected dots in the stimulus. Fig. 10a and Fig. 10b examine the influence of the number of stimulus points.\({}^{2}\) Increasing the number of points improves the accuracy of the depth estimates but does not significantly impact the accuracy of the depth ordering. The Kendall's Tau values for \(N_{P}\geq 10\) have a spike at the beginning of the kinematogram sequence. This is due to the trade-off between the data fidelity term and the depth regularizer term in the objective.
With fewer rotational frames in the inference window early in the kinematogram sequence, the optimization results in large magnitudes for the inferred depths that lead to small errors in the data fidelity term but large values for the depth regularizer. Therefore, the ordering of the depths is accurate but the exact depth values are inaccurate because they are off by a scale in magnitude. As the kinematogram sequence continues, with more frames in the inference window, the magnitudes of the depths decrease and reduce the depth regularizer term, but this leads to an increase in the data fidelity term associated with less accurate depth ordering. We see this as the depth error decreases (because depth magnitudes are reducing) in conjunction with a decrease in Kendall's Tau values. Decreasing the number of points eliminates this spike.\({}^{3}\) We examine the impact of adding Gaussian noise to the point locations in Fig. 10c and Fig. 10d. The depth is consistently accurate with point location noise up to a standard deviation of \(10^{-2}\), and depth error increases after that. The introduction of point noise also eliminates the spike in Kendall's Tau at the beginning of the sequence. Footnote 3: The Kendall’s Tau we report in Fig. 10b is for five randomly selected stimulus points, so the value for \(N_{P}=3\) is set to zero because there are fewer than five points to use for this metric computation. This perceptual build up of an accurate estimate of point depths is observed in structure from motion experiments (Hildreth et al., 1990). Hildreth et al. performed experiments where they displayed orthographic projections of three points rotating about a central axis and asked subjects to order the depths of the three points. They computed the percentage of correct ordering responses - a metric similar in nature to our Kendall's Tau metric. They found that the percent of correct depth ordering increased as the angular extent of rotation increased up to about \(40^{\circ}\) of rotation, after which it plateaued. We observe the same build up and plateau of point ordering accuracy (as judged by Kendall's Tau). They also observed degradation in performance as Gaussian noise was added to the point locations, as we see in Fig. 10d. ## 5 Discussion The main contribution of this work is a generative model framework for learning and inference of 3D manifold-based transformations from 2D projections. A key innovation of the model is an inference procedure that jointly estimates scene geometry (point depth) and transformation parameters from a sequence of 2D views via gradient descent through a transport operator. Using this procedure, we show that it is possible to learn, without any prior knowledge of transformations or point depth, the dictionary elements that generate rotational motion from 2D projections of rotating points.
This model lays the groundwork for explaining the development and adaptation of internal representations of natural variations that are observed in the world. Additionally, our depth inference model enables the investigation of data characteristics that may influence the capacity for accurate depth estimation. This allows us to connect model performance under various data characteristics and algorithmic parameters to human performance in perceptual studies. An important factor in accurate depth estimation and ordering is that a large angular extent is spanned by the set of input frames used for inference (see Fig. 5 and Fig. 10). This supports the notion that humans build up their perception of 3D structure during random dot kinematogram rotation sequences (Hildreth et al., 1990). We also show that increases in the number of random dot stimuli result in improvement in depth inference performance (see Fig. 5c, Fig. 5d, Fig. 10a, and Fig. 10b). Figure 10: Quantitative metrics for random dot kinematogram depth estimates. Depth error and Kendall’s Tau: (a-b) as \(N_{P}\) increases; (c-d) as the standard deviation of noise added to the point locations increases. This connects to kinematogram experiments that indicate a greater number of coherently transforming points results in a stronger depth percept from moving points (Todd et al., 1988; Dosher et al., 1989; Sperling et al., 1989; Braunstein et al., 1987). Our model also demonstrates the same direction switching phenomenon with the bistable kinematogram stimulus that humans perceive (Petersik, 1979). ### Psychophysical implications Our model has the flexibility to adapt to many different test scenarios that are inspired by human performance on mental rotation and structure from motion tasks. In our experimentation, we tuned parameters like the inference window length \(N_{T}\), the angle between frames \(\theta\), and the number of stimulus points \(N_{P}\) to achieve the most accurate depth estimates. However, experiments show that, even when humans perceive the correct shape, they often have inaccurate estimates of depth magnitude (Todd et al., 1988), especially when viewing limited numbers of transforming points (Dosher et al., 1989). From our model performance, this may suggest that humans rely on a smaller angular extent of rotation for inferring depth or that they do not utilize a prior on expected depths. In the future, we can vary our model parameters to explore comparisons with potentially inaccurate human depth estimation in various test settings. In this work we do not directly relate the internal rotation model developed here to the rich area of mental rotation experiments. In particular, the seminal work in that area suggests a monotonically increasing relationship between the rotation angle between views of an object and the human reaction time (Shepard and Metzler, 1971). The manifold-based model presented here may have a similar connection between processing time and rotation angle because the transport operators can generate transformations similar to the internal representation of 3D rotations described by humans in these studies. It may be fruitful to examine the performance of our transformation model on mental rotation tasks and to compare to human performance on similar tasks. ### Future Improvements Ultimately, addressing the underlying neural mechanisms of 3D perception will require formulating a more biologically plausible model.
The inference and learning in the current model are performed using quasi-Newton optimization and gradient descent, respectively. The optimization objective is non-convex and does not naturally lend itself to a parallel representation similar to neural architectures. Moving forward, we suggest developing an optimization procedure that is more biologically plausible. As our focus in this work is on the development of a transformation learning framework, we assume ground truth point correspondences for the learning and inference experiments in Section 4.1 through Section 4.2. However, identifying point correspondences from different views of the same scene is a challenging task and one that has been a focus of many computer vision algorithms (Ullman, 1979; Fischler and Bolles, 1981; Zbontar et al., 2016; Luo et al., 2016). Going forward, incorporating point correspondence estimates into this framework will lead to a more versatile, biologically plausible model. Finally, a step towards biological plausibility is extending this model to be robust to incoherent point motion and additional moving objects. An initial approach to improving the robustness to incoherent motion is to employ random sample consensus (RANSAC) (Fischler and Bolles, 1981). With this method, transport operator coefficients could be estimated from random subsets of points in the scenes and the final transformation parameters could be chosen as those that explain the transformation between the largest number of random subsets of points.
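As a rough illustration of the RANSAC extension proposed above, the following generic skeleton repeatedly fits parameters on random point subsets and keeps the hypothesis consistent with the most points; `fit` and `error` are placeholders standing in for transport-operator coefficient inference and reprojection error, and the toy usage (a robust mean) is purely illustrative.

```python
import numpy as np

def ransac(points, fit, error, n_iter=100, subset_size=5, tol=1e-2, rng=None):
    """Keep the subset-fitted hypothesis that explains the most points."""
    rng = rng or np.random.default_rng()
    best_params, best_inliers = None, -1
    for _ in range(n_iter):
        subset = points[rng.choice(len(points), subset_size, replace=False)]
        params = fit(subset)                                   # hypothesis from subset
        inliers = sum(error(params, p) < tol for p in points)  # consensus size
        if inliers > best_inliers:
            best_params, best_inliers = params, inliers
    return best_params

# Toy usage: a robust mean of data contaminated by two gross outliers.
data = np.concatenate([np.random.default_rng(3).normal(5.0, 0.1, 50),
                       np.array([50.0, -40.0])])
est = ransac(data, fit=lambda s: s.mean(),
             error=lambda m, p: abs(m - p), subset_size=3, tol=0.5)
print(round(float(est), 1))  # approximately 5.0 despite the outliers
```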
When interacting in a three-dimensional world, humans must estimate 3D structure from visual inputs projected down to two-dimensional retinal images. The persistence of object shape over motion-induced transformations has been used as a cue to resolve depth ambiguity when solving this underconstrained problem. With the aim of understanding how biological vision systems may internally represent 3D transformations, this work proposes a computational model, based on a generative manifold model, which can infer 3D structure from the motion of 2D points. The model can also learn representations of the transformations with minimal supervision, suggesting how humans might develop internal representations on a developmental or evolutionary time scale. Focusing on rotational motion, the model infers depth from 2D projected points, learns 3D rotational transformations from 2D training stimuli, and is compared to human performance on psychophysical structure-from-motion experiments.
2309.05329
A functional limit theorem for lattice oscillating random walk
The paper is devoted to an invariance principle for Kemperman's model of oscillating random walk on $\mathbb{Z}$. This result appears as an extension of the invariance principle for classical random walks on $\mathbb{Z}$ or reflected random walks on $\mathbb{N}_0$. Relying on some natural Markov sub-process which takes into account the oscillation of the random walks between $\mathbb{Z}^-$ and $\mathbb{Z}^+$, we first construct an aperiodic sequence of renewal operators acting on a suitable Banach space and then apply a powerful theorem proved by S. Gou\"ezel.
Marc Peigné, Tran Duy Vo
2023-09-11T09:26:01
http://arxiv.org/abs/2309.05329v1
# A Functional Limit Theorem for Lattice Oscillating Random Walk ###### Abstract The paper is devoted to an invariance principle for Kemperman's model of oscillating random walk on \(\mathbb{Z}\). This result appears as an extension of the invariance principle for classical random walks on \(\mathbb{Z}\) or reflected random walks on \(\mathbb{N}_{0}\). Relying on some natural Markov subprocess which takes into account the oscillation of the random walks between \(\mathbb{Z}^{-}\) and \(\mathbb{Z}^{+}\), we first construct an aperiodic sequence of renewal operators acting on a suitable Banach space and then apply a powerful theorem proved by S. Gouezel. Keywords and phrases: oscillating random walk, invariance principle, skew Brownian motion, renewal sequences of operators, Markov chain ## 1 Model and setting ### Introduction Consider two independent sequences of i.i.d. discrete random variables \((\xi_{n})_{n\geq 1}\) and \((\xi^{\prime}_{n})_{n\geq 1}\), defined on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) and with respective distributions \(\mu\) and \(\mu^{\prime}\). For any fixed \(\alpha\in[0,1]\), the oscillating random walk \(\mathcal{X}^{(\alpha)}=(X^{(\alpha)}_{n})_{n\geq 0}\) is defined recursively by: \(X^{(\alpha)}_{0}=x\), where \(x\in\mathbb{Z}\) is fixed, and, for \(n\geq 0\), \[X^{(\alpha)}_{n+1}=\left\{\begin{array}{ll}X^{(\alpha)}_{n}+\xi_{n}&\mbox{ if }X^{(\alpha)}_{n}\leq-1,\\ \eta_{n}&\mbox{ if }X^{(\alpha)}_{n}=0,\\ X^{(\alpha)}_{n}+\xi^{\prime}_{n}&\mbox{ if }X^{(\alpha)}_{n}\geq 1,\end{array}\right. \tag{1}\] where \(\eta_{n}:=B_{n}\xi_{n}+(1-B_{n})\xi^{\prime}_{n}\) and \((B_{n})_{n\geq 0}\) is a sequence of i.i.d. Bernoulli random variables (independent of \((\xi_{n})\) and \((\xi^{\prime}_{n})\)) with \(\mathbb{P}[B_{i}=1]=\alpha=1-\mathbb{P}[B_{i}=0]\). When we want to emphasize the dependence in \(\mu\) and \(\mu^{\prime}\) of this oscillating process, we denote it by \(\mathcal{X}^{(\alpha)}(\mu,\mu^{\prime})\). This spatially non-homogeneous random walk was first introduced by Kemperman [13] to model discrete-time diffusion in one-dimensional space with three different media, \(\mathbb{Z}^{+}\), \(\mathbb{Z}^{-}\) and a barrier \(\{0\}\). Whenever the process \(\mathcal{X}^{(\alpha)}(\mu,\mu^{\prime})\) stays on the negative half line, its excursion is directed by the jumps \(\xi_{n}\) until it reaches the positive half line; then, it continues being directed by the jumps \(\xi^{\prime}_{n}\) until returning to the negative half line, and so on. After each visit to the origin, the increment is governed by the distribution of \(\eta_{n}\), which is a convex combination of \(\mu\) and \(\mu^{\prime}\). The system we consider here corresponds to the special case where the barrier degenerates to a single point; in a more general setting, the barrier may be any fixed interval \([a,b]\cap\mathbb{Z}\) containing the origin, see [14] for instance. Another interesting variant has been studied by Madras and Tanny in [16], dealing with an oscillating random walk whose barrier moves at some constant speed. Although this leads to differences in the long-term behaviour compared to (1), the model can be traced back to (1) by applying appropriate translations to its random increments. In the present paper, we prove an invariance principle for \(\mathcal{X}^{(\alpha)}(\mu,\mu^{\prime})\), with the skew Brownian motion \((B^{\gamma}_{t})_{t>0}\) on \(\mathbb{R}\) with parameter \(\gamma\in[0,1]\) as the limiting process.
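A direct simulation of the recursion (1) may help fix ideas. The sketch below is our own illustration: `sample_xi` and `sample_xi_prime` stand in for draws from \(\mu\) and \(\mu^{\prime}\), and the toy choice of a lazy simple random walk for both satisfies **H1**-**H3**.

```python
import numpy as np

def oscillating_walk(n_steps, sample_xi, sample_xi_prime, alpha, x0=0, rng=None):
    """Simulate Kemperman's oscillating random walk following Eq. (1)."""
    rng = rng or np.random.default_rng()
    x, path = x0, [x0]
    for _ in range(n_steps):
        xi, xi_p = sample_xi(rng), sample_xi_prime(rng)
        if x <= -1:
            x += xi                  # excursion on the negative half line
        elif x >= 1:
            x += xi_p                # excursion on the positive half line
        else:
            # x == 0: increment eta_n drawn from the mixture alpha*mu + (1-alpha)*mu'
            x = xi if rng.random() < alpha else xi_p
        path.append(x)
    return np.array(path)

# Toy usage: centered, strongly aperiodic increments supported on {-1, 0, 1}.
step = lambda rng: int(rng.choice([-1, 0, 1]))
print(oscillating_walk(10, step, step, alpha=0.5, rng=np.random.default_rng(0)))
```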
The diffusion \((B^{\gamma}_{t})_{t>0}\) is obtained from the standard Brownian motion by independently altering the signs of the excursions away from \(0\), each excursion being positive with probability \(\gamma\) and negative with probability \(1-\gamma\). By [19], its heat kernel is given by: for any \(x,y\in\mathbb{R}\) and \(t>0\),

\[p^{\gamma}_{t}(x,y):=p_{t}(x,y)+(2\gamma-1)\ \mathrm{sign}(y)\,p_{t}(0,|x|+|y|),\]

where \(p_{t}(x,y)=\frac{1}{\sqrt{2\pi t}}e^{-(x-y)^{2}/2t}\) is the transition density of the Brownian motion. Throughout this paper, we suppose that the following general assumptions always hold:

**H1**: \((\xi_{n})_{n\geq 1}\) _and \((\xi^{\prime}_{n})_{n\geq 1}\) are independent sequences of i.i.d. \(\mathbb{Z}\)-valued random variables, with finite variances \(\sigma^{2}\) and \(\sigma^{\prime 2}\), respectively_.

**H2**: _Both distributions \(\mu\) and \(\mu^{\prime}\) are centered (i.e. \(\mathbb{E}[\xi_{n}]=\mathbb{E}[\xi^{\prime}_{n}]=0\))._

**H3**: _Both distributions \(\mu\) and \(\mu^{\prime}\) are strongly aperiodic on \(\mathbb{Z}\), i.e. their supports are not included in \(b+a\mathbb{Z}\) for any \(a>1\) and \(b\in\{0,\ldots,a-1\}\)._

**H4**: _There exists \(\delta>0\) such that \(\mathbb{E}[(\xi^{+}_{n})^{3+\delta}]+\mathbb{E}[(\xi^{\prime-}_{n})^{3+\delta}]<+\infty\), where \(\xi^{+}_{n}:=\max\{0,\xi_{n}\}\) and \(\xi^{\prime-}_{n}:=\max\{0,-\xi^{\prime}_{n}\}\)._

Let us emphasize that, under hypotheses **H1**, **H2** and **H3**, the oscillating random walk \(\mathcal{X}^{(\alpha)}\) is irreducible and null recurrent on \(\mathbb{Z}\); this property is not stated in [20] and we will detail the argument later (see Property 3.2). We denote by \(S=(S_{n})_{n\geq 1}\) (resp. \(S^{\prime}=(S^{\prime}_{n})_{n\geq 1}\)) the random walk defined by \(S_{0}=0\) and \(S_{n}=\xi_{1}+\ldots+\xi_{n}\) for \(n\geq 1\) (resp. \(S^{\prime}_{0}=0\) and \(S^{\prime}_{n}=\xi^{\prime}_{1}+\ldots+\xi^{\prime}_{n}\) for \(n\geq 1\)). Let \((\ell_{i})_{i\geq 0}\) be the sequence of strictly ascending ladder epochs associated with \(S\) and defined recursively by \(\ell_{0}=0\) and, for \(i\geq 1\),

\[\ell_{i+1}:=\inf\{k>\ell_{i}\mid S_{k}>S_{\ell_{i}}\}\]

(with the convention \(\inf\emptyset=+\infty\)). We also consider the sequence of descending ladder epochs \((\ell^{\prime}_{i})_{i\geq 0}\) of \(S^{\prime}\), defined as follows,

\[\ell^{\prime}_{0}=0,\quad\text{and}\quad\ell^{\prime}_{i+1}:=\inf\{k>\ell^{\prime}_{i}\mid S^{\prime}_{k}<S^{\prime}_{\ell^{\prime}_{i}}\},\quad\text{for any }i\geq 1.\]

Under hypothesis **H2**, it holds \(\mathbb{P}[\limsup_{n\to+\infty}S_{n}=+\infty]=\mathbb{P}[\liminf_{n\to+\infty}S^{\prime}_{n}=-\infty]=1\); hence, all the random variables \(\ell_{i}\) and \(\ell^{\prime}_{i}\) are \(\mathbb{P}\)-a.s. finite. In addition, both sequences \((\ell_{i+1}-\ell_{i})_{i\geq 0}\) and \((S_{\ell_{i+1}}-S_{\ell_{i}})_{i\geq 0}\) consist of i.i.d. random variables distributed as \(\ell_{1}\) and \(S_{\ell_{1}}\), respectively; the same property holds for \((\ell^{\prime}_{i+1}-\ell^{\prime}_{i})_{i\geq 0}\) and \((S^{\prime}_{\ell^{\prime}_{i+1}}-S^{\prime}_{\ell^{\prime}_{i}})_{i\geq 0}\). Consequently, the processes \((\ell_{i})_{i\geq 0},(S_{\ell_{i}})_{i\geq 0},(\ell^{\prime}_{i})_{i\geq 0}\) and \((S^{\prime}_{\ell^{\prime}_{i}})_{i\geq 0}\) are all random walks with i.i.d. increments. We denote by \(\mu_{+}\) the distribution of \(S_{\ell_{1}}\) and by \(\mathcal{U}_{+}\) its potential defined by \(\mathcal{U}_{+}:=\sum_{n\geq 0}(\mu_{+})^{*n}\).
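As an aside, the heat kernel \(p^{\gamma}_{t}\) displayed above is easy to implement and to sanity-check numerically: it should integrate to \(1\) in \(y\) and, started from \(0\), put mass \(\gamma\) on the positive half line. A minimal sketch (\(\gamma=0.7\) and the integration grid are arbitrary test choices):

```python
import numpy as np

def p_gauss(t, x, y):
    # transition density of the standard Brownian motion
    return np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def p_skew(t, x, y, gamma):
    # heat kernel of the skew Brownian motion with parameter gamma
    return p_gauss(t, x, y) + (2 * gamma - 1) * np.sign(y) * p_gauss(t, 0, np.abs(x) + np.abs(y))

y = np.linspace(-50, 50, 200_001)
dens = p_skew(1.0, 0.0, y, gamma=0.7)
print(np.trapz(dens, y))                # ~ 1.0   (total mass)
print(np.trapz(dens[y > 0], y[y > 0]))  # ~ 0.7   (mass of the positive half line)
```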
Similarly, \(\mu^{\prime}_{-}\) denotes the distribution of \(S^{\prime}_{\ell^{\prime}_{1}}\) and \(\mathcal{U}^{\prime}_{-}:=\sum_{n\geq 0}(\mu^{\prime}_{-})^{*n}\). In particular, the oscillating random walk \(\mathcal{X}^{(\alpha)}\) visits \(\mathbb{Z}^{-}\) and \(\mathbb{Z}^{+}\) infinitely often; in order to control the excursions inside each of these two half lines, it is natural to consider the following stopping times \(\tau^{S}(-x),\ \tau^{S^{\prime}}(x)\) with \(x\geq 1\), associated with \(S\) and \(S^{\prime}\) respectively and defined by

\[\tau^{S}(-x):=\inf\{n\geq 1\mid-x+S_{n}\geq 0\},\quad\text{and}\quad\tau^{S^{\prime}}(x):=\inf\{n\geq 1\mid x+S^{\prime}_{n}\leq 0\}.\]

In the sequel, we focus on the "ascending renewal function" \(h_{a}\) of \(S\) and the "descending renewal function" \(h^{\prime}_{d}\) of \(S^{\prime}\) defined by

\[h_{a}(x):=\left\{\begin{array}{ll}\mathcal{U}_{+}[0,x]=\sum_{i\geq 0}\mathbb{P}[S_{\ell_{i}}\leq x]&\text{ if }x\geq 0,\\ 0&\text{ otherwise},\end{array}\right.\]

and

\[h^{\prime}_{d}(x):=\left\{\begin{array}{ll}\mathcal{U}^{\prime}_{-}[-x,0]=\sum_{i\geq 0}\mathbb{P}[S^{\prime}_{\ell^{\prime}_{i}}\geq-x]&\text{ if }x\geq 0,\\ 0&\text{ otherwise}.\end{array}\right.\]

We denote by \(\check{h}_{a}\) the function \(x\mapsto h_{a}(-x)\); it appears in the definition of the parameter \(\gamma\) below. Both functions \(h_{a}\) and \(h^{\prime}_{d}\) are increasing and satisfy \(h_{a}(x)=O(x)\) and \(h^{\prime}_{d}(x)=O(x)\). They appear crucially in the quantitative estimates of the fluctuations of \(S\) and \(S^{\prime}\); see subsection 2.1 for precise statements. Let us end this paragraph with the presentation of two quantities that play an important role in the rest of the paper.

\(\bullet\) By classical results on 1-dimensional random walks [9], under hypotheses **H1** and **H2**, both constants \(c=\dfrac{\mathbb{E}[S_{\ell_{1}}]}{\sigma\sqrt{2\pi}}\) and \(c^{\prime}=\dfrac{\mathbb{E}[-S_{\ell^{\prime}_{1}}]}{\sigma^{\prime}\sqrt{2\pi}}\) are finite.

\(\bullet\) Under hypotheses **H1**, **H2** and **H3**, the "crossing sub-process" \(\mathcal{X}^{(\alpha)}_{\mathbf{C}}\) which corresponds to the sign changes of the process \(\mathcal{X}^{(\alpha)}\) is well defined (see section 3.1) and it is positive recurrent on its unique irreducible class. We denote by \(\nu\) its unique invariant probability measure on \(\mathbb{Z}\).

### Main result

**From now on, we fix \(\alpha\in[0,1]\)** and consider the continuous and linearly interpolated version \((X_{nt})\) of \(\mathcal{X}^{(\alpha)}\), defined by: for any \(n\geq 1\) and \(t\in(0,1]\),

\[X^{(\alpha)}_{nt}=\sum_{i=1}^{n}\left(X^{(\alpha)}_{[nt]}+(nt-[nt])\,J_{[nt]+1}\right)\mathbb{1}_{(\frac{i-1}{n},\frac{i}{n}]}(t),\]

where

\[J_{[nt]+1}:=\left\{\begin{array}{ll}\xi_{[nt]+1}&\mbox{if }X^{(\alpha)}_{[nt]}\leq-1,\\ \eta_{[nt]+1}&\mbox{if }X^{(\alpha)}_{[nt]}=0,\\ \xi^{\prime}_{[nt]+1}&\mbox{if }X^{(\alpha)}_{[nt]}\geq 1.\end{array}\right.\]

We also set

\[X^{(\alpha,n)}(t):=\left\{\begin{array}{ll}\frac{X^{(\alpha)}_{nt}}{\sigma\sqrt{n}}&\mbox{if }X_{nt}\leq 0,\\ \frac{X^{(\alpha)}_{nt}}{\sigma^{\prime}\sqrt{n}}&\mbox{if }X_{nt}\geq 0.\end{array}\right.\]

The main result of this paper is the following one.

**Theorem 1.1**.: _Assume that hypotheses_ **H1**_-_**H4** _are satisfied._
Then, as \(n\to+\infty\), the normalized stochastic process \(\{X^{(\alpha,n)}(t),t\in[0,1]\}_{n\geq 1}\) converges weakly in the space of continuous functions \(C([0,1])\) to the skew Brownian motion \(W_{\gamma}:=\{W_{\gamma}(t),t\in[0,1]\}\) with parameter \(\gamma=\frac{c^{\prime}\nu(h^{\prime}_{d})}{c\nu(\check{h}_{a})+c^{\prime}\nu(h^{\prime}_{d})}\)._

Let us clarify the value of the parameter \(\gamma\) in two particular cases of (1).

\(\bullet\) When \(\mu=\mu^{\prime}\), the chain \(\mathcal{X}^{(\alpha)}\) is an ordinary random walk on \(\mathbb{Z}\) driven by a single type of jumps \((\xi_{n})_{n\geq 1}\) and the limit diffusion \(W_{\gamma}\) is the Brownian motion. In this case, the parameter \(\gamma\) equals \(\frac{1}{2}\) since the sequences of ladder heights \((S_{\ell_{i}})_{i\geq 1}\) and \((-S^{\prime}_{\ell^{\prime}_{i}})_{i\geq 1}\) coincide.

\(\bullet\) When \(\mu(x)=\mu^{\prime}(-x)\) for any \(x\in\mathbb{Z}\), the random walk \(\mathcal{X}^{(\alpha)}\) is the so-called "anti-symmetric random walk" (also known as the "reflected random walk"), which appears in several works, see for instance [8] and [18]. By setting \(\xi_{n}=-\xi^{\prime}_{n}\), the behaviours of the chain \(\mathcal{X}^{(\alpha)}\) on the positive and negative half lines are mirror images of each other. Hence, we may "glue" them together to get a single Markov chain on \(\mathbb{Z}^{+}\cup\{0\}\) with \(\{0\}\) as its reflecting boundary. Accordingly, \(\gamma=1\) in this case, and this matches the result in [17], which states that the normalized reflected random walk (constructed as above) converges weakly in \(C([0,1])\) towards the absolute value of the standard Brownian motion.

\(\bullet\) Notice that the limit process \(W_{\gamma}\) does not depend on \(\alpha\in[0,1]\). Henceforth, we fix \(\alpha\) and set \(\mathcal{X}^{(\alpha)}=\mathcal{X}\) in order to simplify the notation.

### Notations

We set \(\mathbb{Z}:=\mathbb{Z}^{+}\cup\mathbb{Z}^{-}\cup\{0\},\ \mathbb{N}:=\{1,2,3,\ldots\}\) and \(\overline{\mathbb{D}}\) the closed unit disk in \(\mathbb{C}\). Given two positive real sequences \(\mathbf{a}=(a_{n})_{n\in\mathbb{N}}\) and \(\mathbf{b}=(b_{n})_{n\in\mathbb{N}}\), we write as usual

* \(a_{n}\thicksim b_{n}\) if \(\lim\limits_{n\to\infty}a_{n}/b_{n}=1\),
* \(a_{n}\approx b_{n}\) if \(\lim\limits_{n\to\infty}(a_{n}-b_{n})=0\),
* \(a_{n}=O(b_{n})\) if \(\limsup\limits_{n\to\infty}a_{n}/b_{n}<+\infty\),
* \(a_{n}=o(b_{n})\) if \(\lim\limits_{n\to\infty}a_{n}/b_{n}=0\),
* \(\mathbf{a}\preceq\mathbf{b}\) if \(a_{n}\leq c\ b_{n}\) for some constant \(c>0\),
* \(\mathbf{a}\asymp\mathbf{b}\) if \(\frac{1}{c}\ b_{n}\leq a_{n}\leq c\ b_{n}\) for some constant \(c\geq 1\).

_The paper is organized as follows. In Section 2, we recall some important estimates in the theory of fluctuations of random walks; we introduce in particular the renewal functions associated with 1-dimensional random walks and related conditional limit theorems. These helpful tools appear in Section 4 to compute the multi-dimensional distributions of the limit process. The core of the paper is Section 3, where we adapt the approach used in [17] in the case of the reflected random walk (with proper adjustments to derive Proposition 3.6 and to determine the parameter \(\gamma\) later on).
The last two sections are devoted to the proof of Theorem 1.1 and the appendix._

## 2 Auxiliary results for random walks

In this section, we present some classical results on fluctuations of random walks on \(\mathbb{Z}\).

### Asymptotic estimates for fluctuations of a random walk

The following statement summarizes classical results on fluctuations of random walks which are used below at various places (see for instance Proposition 11 in [6] and Theorem A in [15]). Recall that \(c=\frac{\mathbb{E}[S_{\ell_{1}}]}{\sigma\sqrt{2\pi}}\) and \(c^{\prime}=\frac{\mathbb{E}[-S_{\ell^{\prime}_{1}}]}{\sigma^{\prime}\sqrt{2\pi}}\).

**Lemma 2.1**.: _(Asymptotic property) Under assumptions_ **H1**_-_**H3**_, for any \(x,y\geq 1\), it holds, as \(n\to\infty\),_

1. \(\mathbb{P}[\tau^{S}(-x)>n]\thicksim 2c\,\frac{h_{a}(x)}{\sqrt{n}},\quad\text{and}\quad\mathbb{P}[\tau^{S^{\prime}}(x)>n]\thicksim 2c^{\prime}\,\frac{h^{\prime}_{d}(x)}{\sqrt{n}};\)
2. \(\mathbb{P}[\tau^{S}(-x)>n,-x+S_{n}=-y]\thicksim\frac{1}{\sigma\sqrt{2\pi}}\,\frac{h_{a}(x)\,h_{d}(y)}{n^{3/2}}\), \[\text{and}\quad\mathbb{P}[\tau^{S^{\prime}}(x)>n,x+S^{\prime}_{n}=y]\thicksim\frac{1}{\sigma^{\prime}\sqrt{2\pi}}\,\frac{h^{\prime}_{d}(x)\,h^{\prime}_{a}(y)}{n^{3/2}},\] _where_ \(h_{d}\) _(resp._ \(h^{\prime}_{a}\)_) is the descending (resp. ascending) renewal function associated with the random walk_ \(S\) _(resp._ \(S^{\prime}\)_);_
3. \(\mathbb{P}[\tau^{S}(-x)=n]\thicksim c\,\frac{h_{a}(x)}{n^{3/2}},\quad\text{and}\quad\mathbb{P}[\tau^{S^{\prime}}(x)=n]\thicksim c^{\prime}\,\frac{h^{\prime}_{d}(x)}{n^{3/2}}.\)

**Lemma 2.2**.: _(Upper bound) For any \(x,y\geq 1\) and \(n\geq 1\), it holds_

1. \(\mathbb{P}[\tau^{S}(-x)>n]\preceq\,\frac{1+x}{\sqrt{n}},\quad\text{and}\quad\mathbb{P}[\tau^{S^{\prime}}(x)>n]\preceq\,\frac{1+x}{\sqrt{n}};\)
2. \(\mathbb{P}[\tau^{S}(-x)>n,-x+S_{n}=-y]\preceq\,\frac{(1+x)(1+y)}{n^{3/2}}\), \[\text{and}\quad\mathbb{P}[\tau^{S^{\prime}}(x)>n,x+S^{\prime}_{n}=y]\preceq\,\frac{(1+x)(1+y)}{n^{3/2}};\]
3. \(\mathbb{P}[\tau^{S}(-x)=n]\preceq\frac{1+x}{n^{3/2}},\quad\text{and}\quad\mathbb{P}[\tau^{S^{\prime}}(x)=n]\preceq\frac{1+x}{n^{3/2}}.\)

As a direct consequence of \(b)\) in Lemmas 2.1 and 2.2, for any \(x\geq 1\) and \(w\geq 0\),

\[\mathbb{P}[\tau^{S}(-x)=n,-x+S_{n}=w]\preceq\,\frac{1+x}{n^{3/2}}\sum_{z\geq w+1}z\mu(z). \tag{2}\]

Indeed, for any \(n\geq 1\),

\[\mathbb{P}[\tau^{S}(-x)=n,-x+S_{n}=w]=\sum_{y\geq 1}\mathbb{P}[\tau^{S}(-x)=n,-x+S_{n-1}=-y,-y+\xi_{n}=w]=\sum_{y\geq 1}\mathbb{P}[\tau^{S}(-x)>n-1,-x+S_{n-1}=-y,-y+\xi_{n}=w]=\sum_{y\geq 1}\mathbb{P}[\tau^{S}(-x)>n-1,-x+S_{n-1}=-y]\mu(y+w)\preceq\frac{1+x}{n^{3/2}}\quad\underbrace{\sum_{y\geq 1}(1+y)\mu(y+w)}_{\preceq\sum_{z\geq w+1}z\mu(z)<+\infty}.\]

Notice also that, more precisely, it holds

\[\mathbb{P}[\tau^{S}(-x)=n,-x+S_{n}=w]\thicksim\frac{h_{a}(x)}{\sigma\sqrt{2\pi}\,n^{3/2}}\sum_{y\geq 1}h_{d}(y)\mu(y+w).\]

### Conditional limit theorems

We now recall some limit theorems which are very helpful to control the fluctuations of the excursions between two successive crossing times, and which significantly reduce the complexity when dealing with the multidimensional distributions of these excursions.
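Before stating these conditional results, the tail asymptotics of Lemma 2.1 a) can be illustrated numerically: \(\sqrt{n}\,\mathbb{P}[\tau^{S}(-x)>n]\) should stabilize (towards \(2c\,h_{a}(x)\)) as \(n\) grows. A Monte Carlo sketch, reusing the hypothetical law \(\mu\) uniform on \(\{-1,0,1\}\) from the simulation above (the printed values carry Monte Carlo noise):

```python
import numpy as np

rng = np.random.default_rng(2)

def tail_estimate(x, n, runs=5_000):
    # empirical P[tau^S(-x) > n]: the walk started at -x stays negative up to time n
    steps = rng.integers(-1, 2, size=(runs, n))   # i.i.d. jumps uniform on {-1, 0, 1}
    walks = -x + np.cumsum(steps, axis=1)
    return (walks < 0).all(axis=1).mean()

for n in (100, 400, 1600):
    print(n, np.sqrt(n) * tail_estimate(3, n))    # roughly constant in n
```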
Now, assume that \(\mathbb{E}[\xi_{1}^{\prime}]=0\) and \(\mathbb{E}[(\xi_{1}^{\prime})^{2}]<+\infty\) and let \((S^{\prime}(t))_{t\geq 0}\) be the continuous time process constructed from the sequence \((S_{n}^{\prime})_{n\geq 0}\) by using the linear interpolation between the values at integer points. By Lemma 2.3 in [1], for \(x\geq 1\), the rescaled process \(\left(\frac{x+S_{[nt]}^{\prime}}{\sigma^{\prime}\sqrt{n}},t\in[0,1]\right)\) conditioned on the event \([\tau^{S^{\prime}}(x)>n]\) converges weakly on \(C([0,1],\mathbb{R})\) towards the Brownian meander. In other words, for any bounded Lipschitz continuous function \(\psi:\mathbb{R}\to\mathbb{R}\) and any \(t\in(0,1]\) and \(x\geq 1\),

\[\lim_{n\to+\infty}\mathbb{E}\left[\psi\left(\frac{x+S_{[nt]}^{\prime}}{\sigma^{\prime}\sqrt{n}}\right)\mid\tau^{S^{\prime}}(x)>[nt]\right]=\frac{1}{t}\int_{0}^{+\infty}\psi(u)u\exp\left(-\frac{u^{2}}{2t}\right)du. \tag{3}\]

Let us also state the Caravenna-Chaumont result about random bridges conditioned to stay positive in the discrete case. Roughly speaking, as \(n\to+\infty\), for any starting point \(x\geq 1\) and any ending point \(y\geq 1\), the random bridge of the random walk \(S^{\prime}\), starting at \(x\), ending at \(y\) at time \(n\) and conditioned to stay positive until time \(n\), after a linear interpolation and a diffusive rescaling, converges in distribution on \(C([0,1],\mathbb{R})\) towards the normalized Brownian excursion \(\mathcal{E}^{+}\):

\[\left(\left(\frac{x+S_{[nt]}^{\prime}}{\sigma^{\prime}\sqrt{n}}\right)_{t\in[0,1]}\,\middle|\,\tau^{S^{\prime}}(x)>n,\;x+S_{n}^{\prime}=y\right)\xrightarrow{\mathscr{L}}\mathcal{E}^{+},\quad\text{as }n\to+\infty.\]

More precisely, for any \(x,y\geq 1\), \(0<s<t\leq 1\) and any bounded Lipschitz continuous function \(\psi:\mathbb{R}\rightarrow\mathbb{R}\),

\[\lim_{n\rightarrow+\infty}\mathbb{E}\left[\psi\left(\frac{x+S^{\prime}_{[ns]}}{\sigma^{\prime}\sqrt{n}}\right)\,\middle|\,\tau^{S^{\prime}}(x)>[nt],\;x+S^{\prime}_{[nt]}=y\right]=\int_{0}^{+\infty}2\psi(u\sqrt{t})\,\frac{u^{2}}{\sqrt{2\pi\frac{s^{3}}{t^{3}}\left(\frac{t-s}{t}\right)^{3}}}\exp\left(-\frac{u^{2}}{2\frac{s}{t}\frac{t-s}{t}}\right)du. \tag{4}\]

## 3 Crossing times and renewal theory

In order to analyse the asymptotic behavior of the process \(\mathcal{X}\), we decompose \(X_{n}\) as a sum of successive excursions in \(\mathbb{Z}^{-}\) or \(\mathbb{Z}^{+}\). It is therefore interesting to introduce the sequence \(\mathbf{C}=(C_{k})_{k\geq 0}\) of "crossing times", i.e. times at which the process \(\mathcal{X}\) changes its sign: more precisely, \(C_{0}=0\) and, for any \(k\geq 0\),

\[C_{k+1}:=\left\{\begin{array}{ll}\inf\{n>C_{k}\mid X_{C_{k}}+(\xi_{C_{k}+1}+\cdots+\xi_{n})\geq 0\}&\text{if }X_{C_{k}}\leq-1,\\ C_{k}+1&\text{if }X_{C_{k}}=0,\\ \inf\{n>C_{k}\mid X_{C_{k}}+(\xi^{\prime}_{C_{k}+1}+\cdots+\xi^{\prime}_{n})\leq-1\}&\text{if }X_{C_{k}}\geq 1.\end{array}\right. \tag{5}\]

Under hypothesis **H2**, the random times \(C_{k}\) are \(\mathbb{P}\)-a.s. finite and form a sequence of finite stopping times with respect to the canonical filtration \(\left(\sigma\left(\xi_{k},\xi^{\prime}_{k}\mid k\leq n\right)\right)_{n\geq 1}\).

### On the crossing sub-process \(\mathcal{X}_{\mathbf{C}}\)

We denote \(\mathcal{X}_{\mathbf{C}}:=(X_{C_{k}})_{k\geq 0}\) the **crossing sub-process of \(\mathcal{X}\)**, which plays an important role in this paper.
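To make definition (5) concrete, here is a sketch extracting the crossing times from a finite simulated trajectory (reusing the hypothetical simulator `oscillating_walk` above); it is a direct transcription of (5), simply discarding the last excursion when it is not completed within the horizon.

```python
def crossing_times(path):
    """Crossing times C_0 = 0 < C_1 < ... observed along a finite path, following (5)."""
    times = [0]
    t, n = 0, len(path) - 1
    while t < n:
        x = path[t]
        if x == 0:
            t += 1                      # C_{k+1} = C_k + 1 when X_{C_k} = 0
        elif x <= -1:                   # wait for the first visit of {0, 1, 2, ...}
            t += 1
            while t <= n and path[t] <= -1:
                t += 1
        else:                           # x >= 1: wait for the first visit of {..., -2, -1}
            t += 1
            while t <= n and path[t] >= 0:
                t += 1
        if t > n:
            break                       # current excursion not completed within the horizon
        times.append(t)
    return times

C = crossing_times(oscillating_walk(10_000))
```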
**Lemma 3.1**.: _The sub-process \(\mathcal{X}_{\mathbf{C}}\) is a time-homogeneous Markov chain on \(\mathbb{Z}\) with transition kernel \(\mathcal{C}=(\mathcal{C}(x,y))_{x,y\in\mathbb{Z}}\) given by_

\[\mathcal{C}(x,y)=\left\{\begin{array}{ll}\sum_{t=0}^{-x-1}\mu_{+}(y-x-t)\,\mathcal{U}_{+}(t)&\text{if }x\leq-1\text{ and }y\geq 0,\\ \alpha\mu(y)+(1-\alpha)\mu^{\prime}(y)&\text{if }x=0\text{ and }y\in\mathbb{Z},\\ \sum_{t=-x+1}^{0}\mu^{\prime}_{-}(y-x-t)\,\mathcal{U}^{\prime}_{-}(t)&\text{if }x\geq 1\text{ and }y\leq 0.\end{array}\right. \tag{6}\]

Proof.: The Markov property is obvious from the above definition. Now, we compute \(\mathcal{C}(x,y)\) for any \(x\leq-1\) and \(y\geq 0\) (other cases are similar) as follows. Noticing that the first crossing time \(C_{1}\) belongs \(\mathbb{P}\)-a.s. to the set \(\{\ell_{k}\mid k\geq 1\}\) and that the sequence \((S_{\ell_{k}})_{k\geq 1}\) is increasing, we may write

\[\mathcal{C}(x,y)=\sum_{k\geq 1}\mathbb{P}[x+S_{\ell_{k-1}}\leq-1,x+S_{\ell_{k}}=y]=\sum_{k\geq 1}\sum_{t=0}^{-x-1}\mathbb{P}[S_{\ell_{k-1}}=t]\,\mathbb{P}[S_{\ell_{k}}-S_{\ell_{k-1}}=y-x-t]=\sum_{t=0}^{-x-1}\,\mathbb{P}[S_{\ell_{1}}=y-x-t]\,\sum_{i\geq 0}\mathbb{P}[S_{\ell_{i}}=t]=\sum_{t=0}^{-x-1}\,\mu_{+}(y-x-t)\,\mathcal{U}_{+}(t).\]

When **H2** holds, the crossing sub-process \(\mathcal{X}_{\mathbf{C}}\) is well defined and it is irreducible, aperiodic and positive recurrent on its unique essential class \(\mathcal{I}_{\mathbf{C}}(X_{0})\). Notice that this essential class can be a proper subset of \(\mathbb{Z}\); this occurs for instance when the support of \(\mu\) is bounded from above or the one of \(\mu^{\prime}\) is bounded from below. Nevertheless, it admits a unique invariant probability measure \(\nu\) supported by \(\mathcal{I}_{\mathbf{C}}(X_{0})\). As noted in [20], an explicit expression of \(\nu\) is only known when \(\alpha\in\{0,1\}\) and the support of \(\mu\) (resp. \(\mu^{\prime}\)) is included in \(\mathbb{Z}^{+}\) (resp. in \(\mathbb{Z}^{-}\)). However, the existence of \(\nu\) is enough for our purpose, regardless of its exact formula. Furthermore, by Theorem 2.2 in [20], under hypothesis **H3**, the oscillating random walk \(\mathcal{X}\) is irreducible on \(\mathbb{Z}\). It is also important to notice the following property.

**Property 3.2**.: _Under hypotheses_ **H1**_-_**H3**_, the oscillating random walk is null recurrent. In other words, setting \(\mathbf{t}_{0}:=\inf\{k\geq 1\mid X_{k}=0\}\), it holds_

\[\mathbb{P}_{0}[\mathbf{t}_{0}<+\infty]=1\quad\text{ and }\quad\mathbb{E}_{0}[\mathbf{t}_{0}]=+\infty.\]

Proof.: We may choose \(x_{0},y_{0}\geq 1\) s.t. \(\mu(-x_{0})>0,\mu^{\prime}(y_{0})>0\) and write, for any \(n\geq 1\),

\[\mathbb{P}_{0}[\mathbf{t}_{0}>n]\geq\mathbb{P}_{0}[\mathbf{t}_{0}>n,B_{1}=1,\xi_{1}=-x_{0}]+\mathbb{P}_{0}[\mathbf{t}_{0}>n,B_{1}=0,\xi_{1}^{\prime}=y_{0}]\geq\alpha\mu(-x_{0})\mathbb{P}[\ell_{1}>n-1]+(1-\alpha)\mu^{\prime}(y_{0})\mathbb{P}[\ell_{1}^{\prime}>n-1].\]

Hence \(\mathbb{E}_{0}[\mathbf{t}_{0}]\geq\alpha\mu(-x_{0})(1+\mathbb{E}[\ell_{1}])+(1-\alpha)\mu^{\prime}(y_{0})(1+\mathbb{E}[\ell_{1}^{\prime}])=+\infty\).

Another important point here is that any excursion between two consecutive crossing times is governed by \(S\) or \(S^{\prime}\) alone; thus, all the results obtained in the previous section can be applied.
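As a side remark, the invariant probability measure \(\nu\), which enters the parameter \(\gamma\) of Theorem 1.1, can be approximated by the empirical distribution of the values \(X_{C_{k}}\) along a long trajectory, by positive recurrence of the crossing sub-process. A sketch, reusing the hypothetical helpers above:

```python
from collections import Counter

path = oscillating_walk(200_000)
C = crossing_times(path)
values = [path[t] for t in C[1:]]   # X_{C_1}, X_{C_2}, ...  (C_0 = 0 discarded as burn-in)
nu_hat = {x: k / len(values) for x, k in sorted(Counter(values).items())}
print(nu_hat)                       # empirical approximation of nu
```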
This decomposition technique is classical and extremely efficient in controlling the successive excursions of Markov processes; for example, we use it in the last section to establish the convergence of the finite-dimensional distributions. As a direct application, we can prove that the strong law of large numbers still holds for the chain \(\mathcal{X}\).

**Lemma 3.3**.: _Assume that \(\mathbb{E}[|\xi_{n}|]+\mathbb{E}[|\xi_{n}^{\prime}|]<+\infty\) and \(\mathbb{E}[\xi_{n}]=\mathbb{E}[\xi_{n}^{\prime}]=0\). Then, it holds_

\[\lim_{n\to+\infty}\frac{X_{n}}{n}=0\ \ \mathbb{P}\text{-a.s.}\]

Proof.: We decompose \(X_{n}\) as \(X_{n}=X_{n}\mathbb{1}_{\{X_{n}\geq 1\}}+X_{n}\mathbb{1}_{\{X_{n}\leq-1\}}\). Let us estimate the first term. For any \(n\geq 1\), there exists a random integer \(k(n)\geq 0\) such that \(C_{k(n)}\leq n<C_{k(n)+1}\); notice that the condition \(X_{n}\geq 1\) yields \(X_{C_{k(n)}}\geq 1\). Hence, we get

\[0\leq\frac{X_{n}\mathbb{1}_{\{X_{n}\geq 1\}}}{n}=\frac{X_{C_{k(n)}}+S_{n}^{\prime}-S_{C_{k(n)}}^{\prime}}{n}\leq\frac{\max\{X_{0},\xi_{C_{k(n)}}\}}{n}+\frac{S_{n}^{\prime}}{n}-\frac{S_{C_{k(n)}}^{\prime}}{C_{k(n)}}\frac{C_{k(n)}}{n}\leq\frac{\max\{X_{0},\xi_{C_{k(n)}}\}}{n}+\frac{S_{n}^{\prime}}{n}+\left|\frac{S_{C_{k(n)}}^{\prime}}{C_{k(n)}}\right|.\]

By the strong law of large numbers, the different terms on the right-hand side above converge \(\mathbb{P}\)-a.s. to \(0\); so does \(\frac{X_{n}\mathbb{1}_{\{X_{n}\geq 1\}}}{n}\). The second term is treated in the same way.

### On aperiodic renewal sequences of operators

Let \((\mathbb{Z}^{\otimes\mathbb{N}},(\mathcal{P}(\mathbb{Z}))^{\otimes\mathbb{N}},\mathcal{X},(\mathbb{P}_{x})_{x\in\mathbb{Z}},\theta)\) be the canonical space, i.e. the space of trajectories associated with the Markov chain \(\mathcal{X}\). For any \(x\in\mathbb{Z}\), the probability measure \(\mathbb{P}_{x}\) is the conditional probability with respect to the event \([X_{0}=x]\); we denote by \(\mathbb{E}_{x}\) the corresponding conditional expectation. The operator \(\theta\) is the classical shift transformation defined by \(\theta((x_{k})_{k\geq 0})=(x_{k+1})_{k\geq 0}\) for any \((x_{k})_{k\geq 0}\in\mathbb{Z}^{\otimes\mathbb{N}}\). In this section, we study the behavior as \(n\to+\infty\) of the sequence

\[H_{n}(x,y)=\sum_{k=1}^{+\infty}\mathbb{P}_{x}[C_{k}=n,X_{n}=y],\]

for any \(x,y\in\mathbb{Z}\). Since the position at time \(C_{k}\) may vary, the excursions of \(\mathcal{X}\) between two successive crossing times are not independent; this motivates us to take into account the long-term behaviour of these quantities and to express them in terms of operators related to the crossing sub-process \(\mathcal{X}_{\mathbf{C}}\). For this purpose, we apply a general renewal theorem due to S. Gouezel [10]. This theorem relies on the decomposition of the operator \(\mathcal{C}\) using a sequence of operators \((\mathcal{C}_{n})_{n\geq 1}\) acting on a suitable Banach space, which are easier to deal with. It is natural in our context to deal with the operators \(\mathcal{C}_{n}=(\mathcal{C}_{n}(x,y))_{x,y\in\mathbb{Z}},n\geq 1\), defined by: for any \(x,y\in\mathbb{Z}\) and any \(n\geq 1\),

\[\mathcal{C}_{n}(x,y):=\mathbb{P}_{x}[C_{1}=n,X_{n}=y].\]

The relation \(\mathcal{C}(x,y)=\sum_{n\geq 1}\mathcal{C}_{n}(x,y)\) is straightforward.
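Numerically, the joint law of \((C_{1},X_{C_{1}})\), hence the kernels \(\mathcal{C}_{n}(x,y)\), can be approximated by direct simulation. A sketch for a starting point \(x\leq-1\) under the hypothetical law \(\mu\) used above; the cutoff `max_steps` is needed because \(C_{1}\), although a.s. finite, has a heavy tail:

```python
from collections import Counter

def first_crossing(x, max_steps=100_000):
    """Sample (C_1, X_{C_1}) started from x <= -1; None if not observed in time."""
    pos, n = x, 0
    while n < max_steps:
        pos += xi()              # on the negative half line, jumps follow mu
        n += 1
        if pos >= 0:
            return n, pos
    return None

samples = [first_crossing(-2) for _ in range(5_000)]
counts = Counter(s for s in samples if s is not None)
# counts[(n, y)] / len(samples) approximates C_n(-2, y) = P_{-2}[C_1 = n, X_n = y]
```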
We also pay attention to the case \(x=0\), that is \(\mathcal{C}_{1}(0,y)=\mathbb{P}_{0}[X_{1}=y]=\alpha\mu(y)+(1-\alpha)\mu^{\prime}(y)\) and \(\mathcal{C}_{n}(0,y)=0\) if \(n\geq 2\). For a function \(\varphi:\mathbb{Z}\to\mathbb{C}\), we formally set

\[\mathcal{C}_{n}\varphi(x):=\sum_{y\in\mathbb{Z}:\,xy\leq 0}\mathcal{C}_{n}(x,y)\varphi(y)=\mathbb{E}_{x}[\varphi(X_{n}),C_{1}=n]\quad\text{if }x\in\mathbb{Z}\setminus\{0\},\]

and \(\mathcal{C}_{1}\varphi(0)=\sum_{y\in\mathbb{Z}}\mathcal{C}_{1}(0,y)\varphi(y)=\mathbb{E}_{0}[\varphi(X_{1})]\) and \(\mathcal{C}_{n}\varphi(0)=0\quad\text{if }n\geq 2\). The quantity \(\mathcal{C}_{n}\varphi(x)\) is well defined for instance when \(\varphi\in L^{\infty}(\mathbb{Z})\). Other Banach spaces can be considered; under moment assumptions, we describe below the action of the \(\mathcal{C}_{n}\) on a bigger Banach space \(\mathcal{B}_{\delta}\), more suitable to the situation, as explained a little further on. Notice that \(\mathcal{C}_{n}(x,y)=\mathcal{C}_{n}\mathbb{1}_{\{y\}}(x)\) for any \(x,y\in\mathbb{Z}\), which yields, by induction,

\[H_{n}(x,y)=\sum_{k=1}^{+\infty}\mathbb{P}_{x}[C_{k}=n,X_{n}=y]=\sum_{k=1}^{+\infty}\sum_{j_{1}+\ldots+j_{k}=n}\mathbb{P}_{x}[C_{1}=j_{1},C_{2}-C_{1}=j_{2},\ldots,C_{k}-C_{k-1}=j_{k},X_{n}=y]=\sum_{k=1}^{+\infty}\sum_{j_{1}+\ldots+j_{k}=n}\mathcal{C}_{j_{1}}\ldots\mathcal{C}_{j_{k}}\mathbb{1}_{\{y\}}(x). \tag{7}\]

As announced above, we apply a result of S. Gouezel, stated in a general framework [10], that of _aperiodic renewal sequences of operators_, i.e. sequences \((\mathcal{C}_{n})_{n\geq 1}\) of operators acting on a Banach space \((\mathcal{B},|\cdot|_{\mathcal{B}})\) and satisfying the following conditions:

\(\bullet\) the operators \(\mathcal{C}_{n},n\geq 1\), act on \(\mathcal{B}\) and \(\sum_{n\geq 1}\|\mathcal{C}_{n}\|_{\mathcal{B}}<+\infty\) (where \(\|\cdot\|_{\mathcal{B}}\) denotes the norm on the space \(\mathcal{L}(\mathcal{B})\) of continuous operators on \((\mathcal{B},|\cdot|_{\mathcal{B}})\));

\(\bullet\) the operator \(\mathcal{C}(z):=\sum_{n\geq 1}z^{n}\mathcal{C}_{n}\), defined for any \(z\in\overline{\mathbb{D}}\), satisfies

R1. \(\mathcal{C}(1)\) has a simple eigenvalue at \(1\) (with corresponding eigenprojector \(\Pi\)) and the rest of its spectrum is contained in a disk of radius \(<1\);

R2. for any complex number \(z\in\overline{\mathbb{D}}\setminus\{1\}\), the spectral radius of \(\mathcal{C}(z)\) is \(<1\);

R3. for any \(n\geq 1\), the real number \(r_{n}\) defined by \(\Pi\mathcal{C}_{n}\Pi=r_{n}\Pi\) is \(\geq 0\).

Condition R2 implies that, for any \(z\in\overline{\mathbb{D}}\setminus\{1\}\), the operator \(I-\mathcal{C}(z)\) is invertible on \(\mathcal{B}\) and

\[(I-\mathcal{C}(z))^{-1}=\sum_{k\geq 0}\mathcal{C}(z)^{k}=\sum_{k\geq 0}\left(\sum_{j\geq 1}\mathcal{C}_{j}z^{j}\right)^{k}=\sum_{n\geq 0}\mathcal{H}_{n}z^{n}\]

with \(\mathcal{H}_{0}=I\) and \(\mathcal{H}_{n}=\sum_{k=1}^{+\infty}\sum_{j_{1}+\ldots+j_{k}=n}\mathcal{C}_{j_{1}}\ldots\mathcal{C}_{j_{k}}.\) The above identity, called the _renewal equation_, is of fundamental importance for understanding the asymptotics of \(H_{n}\) in the non-commutative setting; in particular, the equality (7) yields \(H_{n}(x,y)=\mathcal{H}_{n}\mathbb{1}_{\{y\}}(x)\), so that the asymptotic behaviour of \((H_{n}(x,y))_{n\geq 1}\) is related to that of \((\mathcal{H}_{n})_{n\geq 1}\).
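In the scalar case \(\mathcal{B}=\mathbb{C}\), the renewal equation reduces to the classical identity between the coefficients \(h_{n}\) of \((1-C(z))^{-1}\) and the sums of convolution powers appearing in (7). The following sketch checks this identity numerically for an arbitrary test sequence \(c_{1},c_{2},c_{3}\):

```python
import numpy as np

c = np.zeros(8)
c[1], c[2], c[3] = 0.5, 0.3, 0.2        # hypothetical test values with total mass 1

# Coefficients of H(z) = 1 / (1 - C(z)) via H = 1 + C * H, i.e. h_n = sum_j c_j h_{n-j}
N = 30
h = np.zeros(N)
h[0] = 1.0
for n in range(1, N):
    h[n] = sum(c[j] * h[n - j] for j in range(1, min(n, len(c) - 1) + 1))

# Direct sum over k >= 1 of the k-fold convolutions of (c_n), as in (7)
conv, direct = np.array([1.0]), np.zeros(N)
for _ in range(N):                       # terms with k > N contribute nothing below degree N
    conv = np.convolve(conv, c)[:N]
    direct[:len(conv)] += conv

print(np.allclose(h[1:], direct[1:]))    # True: both computations agree
```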
By [10], if the sequence \((\mathcal{C}_{n})_{n\geq 1}\) satisfies the following additional assumptions

\[\mathrm{R4}(\ell,\beta).\quad\|\mathcal{C}_{n}\|_{\mathcal{B}}\leq C\frac{\ell(n)}{n^{1+\beta}},\]
\[\mathrm{R5}(\ell,\beta).\quad\sum_{j>n}r_{j}\thicksim\frac{\ell(n)}{n^{\beta}},\]

where \(C>0,\ \beta\in(0,1)\) and \(\ell\) is a slowly varying function, then the sequence \((n^{1-\beta}\ell(n)\mathcal{H}_{n})_{n\geq 1}\) converges in \((\mathcal{L}(\mathcal{B}),\|\cdot\|_{\mathcal{B}})\) to the operator \(d_{\beta}\Pi\), with \(d_{\beta}=\frac{1}{\pi}\sin\beta\pi\). In the next subsection, we introduce some Banach space \(\mathcal{B}=\mathcal{B}_{\delta}\) in order to be able to apply this general result.

### Spectral property of the transition matrix \(\mathcal{C}=(\mathcal{C}(x,y))_{x,y\in\mathbb{Z}}\)

The operator \(\mathcal{C}\) acts on the space \(L^{\infty}(\mathbb{Z})\) of bounded functions on \(\mathbb{Z}\). By the following lemma, it satisfies some strong spectral property on this space.

**Lemma 3.4**.: _Assume_ **H1**_-_**H3** _hold. Then, the infinite matrix \(\mathcal{C}\) satisfies the Doeblin condition and therefore, it is a quasi-compact operator on \(L^{\infty}(\mathbb{Z})\), the space of bounded functions on \(\mathbb{Z}\). Furthermore, the eigenvalue \(1\) is simple, with associated eigenvector \(\mathbf{1}\), and the rest of the spectrum is included in a disk of radius \(<1\)._

Proof.: Under the above assumptions, the positive random variable \(S_{\ell_{1}}\) has finite first moment; hence, by the renewal theorem,

\[\lim_{t\to+\infty}\mathcal{U}_{+}(t)=\frac{1}{\mathbb{E}[S_{\ell_{1}}]}>0.\]

The above convergence readily implies \(\delta_{+}:=\inf_{z\in\mathbb{Z}^{+}}\mathcal{U}_{+}(z)>0\). Consequently, by (6), for any \(x\leq-1\) and \(y\geq 0\),

\[\mathcal{C}(x,y)\geq\mu_{+}(y+1)\,\mathcal{U}_{+}(-x-1)\geq\delta_{+}\mu_{+}(y+1).\]

In the same vein, one gets \(\mathcal{C}(x,y)\geq\delta^{\prime}\mu^{\prime}_{-}(y-1)\) for any \(x\geq 1\) and \(y\leq 0\) with \(\delta^{\prime}:=\inf_{z\in\mathbb{Z}^{+}}\mathcal{U}^{\prime}_{-}(z)>0\). Hence, it is easy to show that there exist a probability measure \(\mathbf{m}\) and \(\delta_{0}>0\) s.t. for any \(x\in\mathbb{Z}\),

\[\mathcal{C}(x,.)\geq\delta_{0}\mathbf{m}(.),\]

which immediately implies the quasi-compactness of \(\mathcal{C}\). The control of the peripheral spectrum readily follows.

Thanks to this lemma, one could believe that hypothesis R1 is satisfied by the sequence \((\mathcal{C}_{n})_{n\geq 1}\) acting on \(L^{\infty}(\mathbb{Z})\) since \(\mathcal{C}(1)=\mathcal{C}\). Unfortunately, it holds \(\sum_{n\geq 1}|\mathcal{C}_{n}|_{\infty}=+\infty\). Indeed, it holds \(|\mathcal{C}_{n}|_{\infty}=\sup_{x\in\mathbb{Z}}\mathbb{P}_{x}[C_{1}=n]\); now, if we assume for instance \(x\leq-1\), it holds \(\mathbb{P}_{x}[C_{1}=n]=\mathbb{P}[\tau^{S}(x)=n]\) with \((i)\quad\mathbb{P}[\tau^{S}(x)=n]=O(1/n)\), and \((ii)\quad\liminf_{n\to+\infty}n\mathbb{P}[\tau^{S}(x_{n})=n]>0\) when \(x_{n}\asymp\sqrt{n}\) (see Lemma 5 and Theorem B in [7]). Consequently \(|\mathcal{C}_{n}|_{\infty}\asymp 1/n\). Thus, we have to choose another Banach space \(\mathcal{B}_{\delta}\). By (5), it is clear that \(C_{k+1}-C_{k}\) is distributed as \(\tau^{S}(X_{C_{k}})\) when \(X_{C_{k}}\leq-1\) and as \(\tau^{S^{\prime}}(X_{C_{k}})\) when \(X_{C_{k}}\geq 1\).
Consequently, the behaviour as \(n\to+\infty\) of the \(k^{\text{th}}\) term \(\mathbb{P}_{x}[C_{k}=n,X_{n}=y]\) of the sum defining \(H_{n}(x,y)\) is closely related to the distributions of \(\tau^{S}\) and \(\tau^{S^{\prime}}\); in particular, by Lemma 2.1, its dependence on \(y\) is expressed in terms of \(h_{a}(y)\) and \(h^{\prime}_{d}(y)\). This explains why we have to choose a Banach space on which the action of \(\mathcal{C}\) has "nice" spectral properties (such as compactness or quasi-compactness) and which also contains the functions \(h_{a}\) and \(h^{\prime}_{d}\). The fact that they are both sublinear leads us to examine the action of \(\mathcal{C}\) on the space \(\mathcal{B}_{\delta}\) of complex-valued functions on \(\mathbb{Z}\) defined by

\[\mathcal{B}_{\delta}:=\left\{f:\mathbb{Z}\to\mathbb{C}:|f|_{\mathcal{B}_{\delta}}:=\sup_{x\in\mathbb{Z}}\frac{|f(x)|}{1+|x|^{1+\delta}}<+\infty\right\},\]

with \(\delta\geq 0\). By Lemma 2.2 and the fact that \(h_{a}(x)=O(x),\ h^{\prime}_{d}(x)=O(x)\), the functions \(h_{a},h^{\prime}_{d},\mathbf{h}_{n}:x\mapsto\sqrt{n}\mathbb{P}[\tau^{S}(-x)>n]\) and \(\mathbf{h}^{\prime}_{n}:x\mapsto\sqrt{n}\mathbb{P}[\tau^{S^{\prime}}(x)>n]\) do belong to \(\mathcal{B}_{\delta}\) for any \(\delta\geq 0\); furthermore, applying Lemma 2.1, the sequence \((\mathbf{h}_{n})_{n\geq 0}\) (resp. \((\mathbf{h}^{\prime}_{n})_{n\geq 0}\)) converges to \(2ch_{a}\) (resp. \(2c^{\prime}h^{\prime}_{d}\)) in \(\mathcal{B}_{\delta}\) if \(\delta>0\). This last property is of interest in applying Gouezel's renewal theorem and, for this reason, we assume from now on \(\delta>0\). Furthermore, the map \(\mathcal{C}\) acts on \(\mathcal{B}_{\delta}\) as a compact operator whose spectrum can be controlled as follows.

**Proposition 3.5**.: _Assume that hypotheses_ **H1**_-_**H4** _hold. Then,_

1. _The map_ \(\mathcal{C}\) _acts on_ \(\mathcal{B}_{\delta}\) _and_ \(\mathcal{C}(\mathcal{B}_{\delta})\subset L^{\infty}(\mathbb{Z})\)_._
2. \(\mathcal{C}\) _is a compact operator on_ \(\mathcal{B}_{\delta}\) _with spectral radius_ \(\rho_{\mathcal{B}_{\delta}}=1\) _and with the unique and simple dominant eigenvalue_ \(1\)_._
3. _The rest of the spectrum of_ \(\mathcal{C}\) _on_ \(\mathcal{B}_{\delta}\) _is contained in a disk of radius_ \(<1\)_._

_Consequently, the operator \(\mathcal{C}\) on \(\mathcal{B}_{\delta}\) may be decomposed as_

\[\mathcal{C}=\Pi+Q\]

_where_

\(\bullet\quad\Pi\) _is the eigenprojector from_ \(\mathcal{B}_{\delta}\) _to_ \(\mathbb{C}\mathbf{1}\) _corresponding to the eigenvalue_ \(1\) _and_ \(\Pi(\phi)=\nu(\phi)\mathbf{1}\)_, where_ \(\nu\) _is the unique_ \(\mathcal{C}\)_-invariant probability measure on_ \(\mathbb{Z}\)_;_

\(\bullet\quad\) _the spectral radius of_ \(Q\) _on_ \(\mathcal{B}_{\delta}\) _is_ \(<1\)_;_

\(\bullet\quad\Pi Q=Q\Pi=0\)_._

Proof.: 1. Note that \(\mathcal{U}^{\prime}_{-}(t)=\sum_{n\geq 0}\mathbb{P}[S^{\prime}_{\ell^{\prime}_{n}}=t]=\mathbb{P}[\exists n\geq 0:S^{\prime}_{\ell^{\prime}_{n}}=t]\leq 1\). For any \(\varphi\in\mathcal{B}_{\delta}\) and \(x\geq 1\), we have

\[|\mathcal{C}\varphi(x)|\leq\sum_{y\leq 0}\sum_{t=-x+1}^{0}\mu^{\prime}_{-}(y-x-t)\,|\varphi(y)|\leq|\varphi|_{\mathcal{B}_{\delta}}\sum_{y\leq 0}(1+|y|^{1+\delta})\,\mu^{\prime}_{-}(]-\infty,y-1])\leq|\varphi|_{\mathcal{B}_{\delta}}\left(\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|]+\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|^{2+\delta}]\right),\]

which is finite if \(\mathbb{E}[(\xi^{\prime-}_{n})^{3+\delta}]<+\infty\) (see [5]).
Other cases can be estimated in the same way and yield

\[|\mathcal{C}\varphi|_{\mathcal{B}_{\delta}}\leq|\mathcal{C}\varphi|_{\infty}\leq|\varphi|_{\mathcal{B}_{\delta}}\left(\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|]+\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|^{2+\delta}]\right)<+\infty. \tag{8}\]

2. By (8), the operator \(\mathcal{C}\) acts continuously from \(\mathcal{B}_{\delta}\) into \(L^{\infty}(\mathbb{Z})\); since the inclusion map \(i:L^{\infty}(\mathbb{Z})\hookrightarrow\mathcal{B}_{\delta}\) is compact, the operator \(\mathcal{C}\) is also compact on \(\mathcal{B}_{\delta}\). Let us now compute the spectral radius \(\rho_{\mathcal{B}_{\delta}}\) of \(\mathcal{C}\). The fact that \(\mathcal{C}\) is a stochastic matrix yields \(\rho_{\mathcal{B}_{\delta}}\geq 1\). To prove \(\rho_{\mathcal{B}_{\delta}}\leq 1\), it suffices to show that \(\mathcal{C}\) has bounded powers on \(\mathcal{B}_{\delta}\). For any \(n\geq 1\) and \(x\in\mathbb{Z}\),

\[|\mathcal{C}^{n}\varphi(x)|\leq\sum_{y\in\mathbb{Z}}\mathcal{C}^{n-1}(x,y)|\mathcal{C}\varphi(y)|\leq|\mathcal{C}\varphi|_{\infty}\sum_{y\in\mathbb{Z}}\mathcal{C}^{n-1}(x,y)=|\mathcal{C}\varphi|_{\infty}.\]

Together with (8), it implies

\[|\mathcal{C}^{n}\varphi|_{\mathcal{B}_{\delta}}\leq|\mathcal{C}^{n}\varphi|_{\infty}\leq|\mathcal{C}\varphi|_{\infty}\leq|\varphi|_{\mathcal{B}_{\delta}}\left(\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|]+\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|^{2+\delta}]\right).\]

Hence \(\|\mathcal{C}^{n}\|_{\mathcal{B}_{\delta}}\leq\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|]+\mathbb{E}[|S^{\prime}_{\ell^{\prime}_{1}}|^{2+\delta}]\) for any \(n\geq 1\) and \(\rho_{\mathcal{B}_{\delta}}=\lim\limits_{n\to+\infty}\|\mathcal{C}^{n}\|_{\mathcal{B}_{\delta}}^{1/n}\leq 1\). Let us now control the peripheral spectrum of \(\mathcal{C}\). Let \(\theta\in\mathbb{R}\) and \(\psi\in\mathcal{B}_{\delta}\) such that \(\mathcal{C}\psi=e^{i\theta}\psi\). Obviously, the function \(\psi\) is bounded and \(|\psi|\leq\mathcal{C}|\psi|\). Consequently, \(|\psi|_{\infty}-|\psi|\) is non-negative and super-harmonic (i.e. \(\mathcal{C}(|\psi|_{\infty}-|\psi|)\leq|\psi|_{\infty}-|\psi|\)) on the unique irreducible class \(\mathcal{I}_{\mathbf{C}}(X_{0})\) of \(\mathcal{X}_{\mathbf{C}}\). By the classical theory of denumerable Markov chains, it is thus constant on \(\mathcal{I}_{\mathbf{C}}(X_{0})\); it follows that \(|\psi|\) is constant on \(\mathcal{I}_{\mathbf{C}}(X_{0})\). Without loss of generality, we may assume \(|\psi(x)|=1\) for any \(x\in\mathcal{I}_{\mathbf{C}}(X_{0})\), i.e. \(\psi(x)=e^{i\phi(x)}\) for some \(\phi(x)\in\mathbb{R}\). We may rewrite the equality \(\mathcal{C}\psi=e^{i\theta}\psi\) as

\[\forall x\in\mathcal{I}_{\mathbf{C}}(X_{0})\quad\sum_{y\in\mathcal{I}_{\mathbf{C}}(X_{0})}\mathcal{C}(x,y)e^{i(\phi(y)-\phi(x))}=e^{i\theta}.\]

Note that \(\mathcal{C}(x,y)>0\) for all \(x,y\in\mathcal{I}_{\mathbf{C}}(X_{0})\); by convexity, one readily gets \(e^{i\theta}=e^{i(\phi(y)-\phi(x))}\) for such points \(x,y\). Taking \(x=y\in\mathcal{I}_{\mathbf{C}}(X_{0})\), we thus obtain \(e^{i\theta}=1\). In particular, the function \(\psi\) is harmonic on \(\mathcal{I}_{\mathbf{C}}(X_{0})\), hence constant on this set, by _Liouville's theorem_.
Furthermore, for any \(x\in\mathbb{Z}\), it holds \(\mathcal{C}(x,y)>0\Longleftrightarrow y\in\mathcal{I}_{\mathbf{C}}(X_{0})\); consequently, for any fixed \(y_{0}\in\mathcal{I}_{\mathbf{C}}(X_{0})\) and any \(x\in\mathbb{Z}\),

\[\psi(x)=\mathcal{C}\psi(x)=\sum_{y\in\mathcal{I}_{\mathbf{C}}(X_{0})}\mathcal{C}(x,y)\psi(y)=\psi(y_{0}).\]

Therefore, the function \(\psi\) is constant on \(\mathbb{Z}\). 3. This is a direct consequence of (ii).

### A renewal limit theorem for the sequence of crossing times

The main goal of this part is to prove the following statement.

**Proposition 3.6**.: _The sequence \((\sqrt{n}\mathcal{H}_{n})_{n\geq 1}\) converges in \((\mathcal{L}(\mathcal{B}_{\delta}),\|\cdot\|_{\mathcal{B}_{\delta}})\) to the operator \(\mathbf{c}^{-1}\Pi\) with \(\mathbf{c}=2\pi\big{(}c\nu(\check{h}_{a})+c^{\prime}\nu(h^{\prime}_{d})\big{)}\). In particular, for any \(x,y\in\mathbb{Z}\),_

\[\lim_{n\to+\infty}\sqrt{n}H_{n}(x,y)=\frac{\nu(y)}{2\pi\big{(}c\nu(\check{h}_{a})+c^{\prime}\nu(h^{\prime}_{d})\big{)}}.\]

This is a consequence of the fact that \((\mathcal{C}_{n})_{n\geq 1}\) is an aperiodic renewal sequence of operators on \(\mathcal{B}_{\delta}\) satisfying R4 and R5 (with \(\beta=1/2\) and \(\ell\) constant). The fact that all the \(\mathcal{C}_{n},n\geq 1\), act on \(\mathcal{B}_{\delta}\) and \(\sum_{n\geq 1}\|\mathcal{C}_{n}\|_{\mathcal{B}_{\delta}}<+\infty\) is a consequence of the following lemma.

**Lemma 3.7**.: _Under hypotheses_ **H1**_-_**H4**_, for any \(n\geq 1\), the operator \(\mathcal{C}_{n}\) acts on \(\mathcal{B}_{\delta}\) and_

\[\|\mathcal{C}_{n}\|_{\mathcal{B}_{\delta}}=O\left(\frac{1}{n^{3/2}}\right).\]

Proof.: By (2), for any \(x\geq 1\) and \(\phi\in\mathcal{B}_{\delta}\),

\[|\mathcal{C}_{n}\phi(-x)|\leq\sum_{w\geq 0}\mathbb{P}_{-x}[C_{1}=n;X_{n}=w]\,|\phi(w)|=\sum_{w\geq 0}\mathbb{P}[\tau^{S}(-x)=n,-x+S_{n}=w]\,|\phi(w)|\preceq\frac{1+x}{n^{3/2}}\sum_{w\geq 0}\left(\sum_{z\geq w+1}z\mu(z)\right)|\phi(w)|\leq\frac{1+x}{n^{3/2}}|\phi|_{\mathcal{B}_{\delta}}\underbrace{\sum_{w\geq 0}(1+w^{1+\delta})\left(\sum_{z\geq w+1}z\mu(z)\right)}_{<+\infty\text{ by }{\bf H4}}.\]

Similarly,

\[|\mathcal{C}_{n}\phi(x)|\preceq\frac{1+x}{n^{3/2}}|\phi|_{\mathcal{B}_{\delta}}\mathbb{E}[(\xi_{1}^{\prime-})^{3+\delta}].\]

Moreover, \(|\mathcal{C}_{1}\phi(0)|\preceq|\phi|_{\mathcal{B}_{\delta}}\) and \(\mathcal{C}_{n}\phi(0)=0\) for all \(n\geq 2\). This completes the proof.

Condition R1 coincides with the statement of Proposition 3.5. Similarly, R2 and R3 correspond to assertions i) and ii) of the next proposition. Consequently, \((\mathcal{C}_{n})_{n\geq 1}\) is an aperiodic renewal sequence of operators.

**Proposition 3.8**.: _Suppose that_ **H1**_-_**H4** _are satisfied. Then the sequence \((\mathcal{C}_{n})_{n\geq 1}\) satisfies the following properties:_

i) _The spectral radius \(\rho_{\mathcal{B}_{\delta}}(z)\) of \(\mathcal{C}(z)\) is strictly less than \(1\) for \(z\in\overline{\mathbb{D}}\setminus\{1\}\)._

ii) _For any \(n\geq 1\), it holds \(\Pi\mathcal{C}_{n}\Pi=r_{n}\Pi\) with_

\[r_{n}:=\nu(\mathcal{C}_{n}1)=\sum_{x\in\mathbb{Z}}\nu(x)\mathbb{P}_{x}[C_{1}=n]\geq 0.\]

iii) \(\sum_{j>n}r_{j}\sim\frac{2\big{(}c\nu(\check{h}_{a})+c^{\prime}\nu(h_{d}^{\prime})\big{)}}{\sqrt{n}}\) _as_ \(n\to+\infty\)_._

Proof.: i) The argument is close to the one used to prove Proposition 3.5.
For any \(z\in\overline{\mathbb{D}}\setminus\{1\}\), the operator \(\mathcal{C}(z)\) is compact on \(\mathcal{B}_{\delta}\) with spectral radius \(\rho_{\mathcal{B}_{\delta}}(z)\leq 1\). We now prove \(\rho_{\mathcal{B}_{\delta}}(z)\neq 1\) by contradiction. Suppose \(\rho_{\mathcal{B}_{\delta}}(z)=1\); in other words, there exist \(\theta\in\mathbb{R}\) and \(\varphi\in\mathcal{B}_{\delta}\) such that \(\mathcal{C}(z)\varphi=e^{i\theta}\varphi\). Since \(\mathcal{C}\) is bounded from \(\mathcal{B}_{\delta}\) into \(L^{\infty}(\mathbb{Z})\) and \(0\leq|\varphi|\leq\mathcal{C}|\varphi|\), the function \(|\varphi|\) is \(\mathcal{C}\)-superharmonic, bounded and thus constant on its essential class \(\mathcal{I}_{\mathbf{C}}(X_{0})\). Without loss of generality, we can suppose that \(|\varphi(x)|=1\) for any \(x\in\mathcal{I}_{\mathbf{C}}(X_{0})\); equivalently, \(\varphi(x)=e^{i\phi(x)}\) for some function \(\phi:\mathcal{I}_{\mathbf{C}}(X_{0})\to\mathbb{R}\). For any \(x\in\mathcal{I}_{\mathbf{C}}(X_{0})\), we get

\[\mathcal{C}(z)\varphi(x)=e^{i\theta}\varphi(x)\Longleftrightarrow\sum_{n\geq 1}\sum_{y\in\mathcal{I}_{\mathbf{C}}(X_{0})}z^{n}e^{i(\phi(y)-\phi(x))}\mathbb{P}_{x}[C_{1}=n;X_{n}=y]=e^{i\theta}.\]

The left-hand side being a convex combination of the complex numbers \(z^{n}e^{i(\phi(y)-\phi(x))}\), all of modulus \(\leq 1\), every term with positive weight must equal \(e^{i\theta}\); the aperiodicity of the crossing times then forces \(z=1\), contradicting \(z\in\overline{\mathbb{D}}\setminus\{1\}\). Combining Propositions 3.5 and 3.8, Gouezel's renewal theorem applies and yields Proposition 3.6; in particular, for any fixed \(s\in(0,1]\),

\[\left\|\sqrt{[ns]}\mathcal{H}_{[ns]}-\mathbf{c}^{-1}\Pi\right\|_{\mathcal{B}_{\delta}}\longrightarrow 0,\quad\text{as}\quad n\rightarrow+\infty. \tag{9}\]

## 4 Proof of Theorem 1.1

For \(m\geq 1\), let \(\{\varphi_{i}:\mathbb{R}\rightarrow\mathbb{R}\mid i=1,\ldots,m\}\) be a sequence of bounded and Lipschitz continuous functions with corresponding Lipschitz coefficients \(Lip(\varphi_{i})\). Assume that the time sequence \(\{t_{i}\}_{1\leq i\leq m}\) is strictly increasing with values in \((0,1]\) and set \(t_{0}=0\). In this part, we prove that

\[\lim_{n\rightarrow+\infty}\mathbb{E}_{x}\left[\prod_{i=1}^{m}\varphi_{i}\bigg{(}X^{(n)}(t_{i})\bigg{)}\right]=\int_{\mathbb{R}^{m}}\prod_{i=1}^{m}\varphi_{i}(u_{i})p_{t_{i}-t_{i-1}}^{\gamma}(u_{i-1},u_{i})\,du_{1}\ldots du_{m}\]

with \(u_{0}=0\). Without loss of generality, we assume \(\sigma=\sigma^{\prime}\) and \(x\geq 1\) to avoid unnecessary complexity associated with subcases.

### Convergence of the one-dimensional distributions (\(m=1\))

We first notice that \(\mathbb{E}_{x}[\varphi_{1}(X^{(n)}(t_{1}))]\approx\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\right]\) since

\[\left|\mathbb{E}_{x}[\varphi_{1}(X^{(n)}(t_{1}))]-\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\right]\right|\leq Lip(\varphi_{1})\,\mathbb{E}_{x}\left[\left|X^{(n)}(t_{1})-\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right|\right]\leq Lip(\varphi_{1})\,\frac{\mathbb{E}[|\xi_{[nt_{1}]+1}|]+\mathbb{E}[|\eta_{[nt_{1}]+1}|]+\mathbb{E}[|\xi^{\prime}_{[nt_{1}]+1}|]}{\sigma\sqrt{n}},\]

which tends to \(0\) as \(n\to+\infty\). Now, we can decompose \(\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\right]\) as

\[\underbrace{\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right),X_{[nt_{1}]}=0\right]}_{A_{0}(n)}+\underbrace{\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right),X_{[nt_{1}]}>0\right]}_{A^{+}(n)}+\underbrace{\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right),X_{[nt_{1}]}<0\right]}_{A^{-}(n)}.\]

The term \(A_{0}(n)\) tends to \(0\) as \(n\to+\infty\) since \((X_{n})_{n\geq 0}\) is null recurrent. It remains to control the two other terms.
\(\bullet\) Estimate of \(A^{+}(n)\)

\[A^{+}(n)\approx\sum_{k_{1}=1}^{[nt_{1}]-1}\sum_{\ell\geq 1}\sum_{y\geq 1}\mathbb{E}_{x}\bigg{[}\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right),C_{\ell}=k_{1},X_{k_{1}}=y,y+\xi^{\prime}_{k_{1}+1}>0,\ldots,y+\xi^{\prime}_{k_{1}+1}+\ldots+\xi^{\prime}_{[nt_{1}]}>0\bigg{]}\]
\[=\sum_{k_{1}=1}^{[nt_{1}]-1}\sum_{y\geq 1}\mathbb{E}\left[\varphi_{1}\left(\frac{y+\xi^{\prime}_{k_{1}+1}+\ldots+\xi^{\prime}_{[nt_{1}]}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>[nt_{1}]-k_{1}\right]\left(\sum_{\ell\geq 1}\mathbb{P}_{x}[C_{\ell}=k_{1};X_{k_{1}}=y]\right)\]
\[=\sum_{k_{1}=1}^{[nt_{1}]-1}\sum_{y\geq 1}H_{k_{1}}(x,y)\mathbb{E}\left[\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-k_{1}}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>[nt_{1}]-k_{1}\right].\]

For any \(0\leq s_{1}\leq t_{1}\) and \(n\geq 1\), let \(f_{n}\) be the function defined by

\[f_{n}(s_{1}):=n\sum_{y\geq 1}H_{[ns_{1}]}(x,y)\mathbb{E}\left[\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-[ns_{1}]}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>[nt_{1}]-[ns_{1}]\right]\]

if \(0\leq s_{1}<\frac{[nt_{1}]}{n}\) and \(f_{n}(s_{1})=0\) if \(\frac{[nt_{1}]}{n}\leq s_{1}\leq t_{1}\). Hence,

\[A^{+}(n)=\int_{0}^{t_{1}}f_{n}(s_{1})\,ds_{1}+O\left(\frac{1}{\sqrt{n}}\right).\]

The convergence of the term \(A^{+}(n)\) as \(n\to+\infty\) is a consequence of the two following properties:

\(\bullet\) for any \(n\geq 1\),

\[|f_{n}(s_{1})|\preceq\frac{1+|x|}{\sqrt{s_{1}(t_{1}-s_{1})}}\in L^{1}([0,t_{1}]); \tag{10}\]

\(\bullet\) for any \(s_{1}\in[0,t_{1}]\),

\[\lim_{n\rightarrow+\infty}f_{n}(s_{1})=\frac{\gamma}{\pi\sqrt{s_{1}(t_{1}-s_{1})}}\int_{0}^{+\infty}\varphi_{1}(z\sqrt{t_{1}-s_{1}})z\exp\left(\frac{-z^{2}}{2}\right)dz=\frac{\gamma}{\pi}\int_{0}^{+\infty}\varphi_{1}(u)u\ \frac{\exp\left(\frac{-u^{2}}{2(t_{1}-s_{1})}\right)}{\sqrt{s_{1}(t_{1}-s_{1})^{3}}}du\qquad(\text{set }u=z\sqrt{t_{1}-s_{1}}). \tag{11}\]

Indeed, applying the Lebesgue dominated convergence theorem, we obtain

\[\lim_{n\rightarrow+\infty}A^{+}(n)=\frac{\gamma}{\pi}\int_{0}^{+\infty}\varphi_{1}(u)u\left(\int_{0}^{t_{1}}\frac{1}{\sqrt{s_{1}(t_{1}-s_{1})^{3}}}\exp\left(\frac{-u^{2}}{2(t_{1}-s_{1})}\right)ds_{1}\right)du=\frac{\gamma}{\pi}\int_{0}^{+\infty}\varphi_{1}(u)u\bigg{[}\frac{1}{t_{1}}\exp\left(\frac{-u^{2}}{2t_{1}}\right)\underbrace{\int_{0}^{+\infty}\frac{1}{\sqrt{s}}\exp\left(\frac{-u^{2}}{2t_{1}}s\right)ds}_{=\frac{\sqrt{2\pi t_{1}}}{u}}\bigg{]}du\quad(\text{set }s:=\frac{s_{1}}{t_{1}-s_{1}})=\gamma\int_{0}^{+\infty}\varphi_{1}(u)\frac{2\exp\left(-u^{2}/2t_{1}\right)}{\sqrt{2\pi t_{1}}}du. \tag{12}\]

Similarly,

\[\lim_{n\rightarrow+\infty}A^{-}(n)=(1-\gamma)\int_{-\infty}^{0}\varphi_{1}(u)\frac{2\exp\left(-u^{2}/2t_{1}\right)}{\sqrt{2\pi t_{1}}}du. \tag{13}\]

Combining (12) and (13), we thus obtain

\[\lim_{n\rightarrow+\infty}\mathbb{E}_{x}[\varphi_{1}(X^{(n)}(t_{1}))]=\int_{\mathbb{R}}\varphi_{1}(u)p_{t_{1}}^{\gamma}(0,u)du=\int_{\mathbb{R}}\tilde{\varphi_{1}}(u)\frac{2\exp(-u^{2}/2t_{1})}{\sqrt{2\pi t_{1}}}du,\]

where \(\tilde{\varphi_{1}}(u)=\gamma\varphi_{1}(u)\mathbb{1}_{(0,+\infty)}(u)+(1-\gamma)\varphi_{1}(u)\mathbb{1}_{(-\infty,0)}(u)\). It thus remains to establish (10) and (11).
The first natural idea is to set

\[\psi_{n}(y):=\sqrt{n}\mathbb{E}\left[\varphi_{1}\left(\frac{y+S_{[nt_{1}]-[ns_{1}]}^{\prime}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>[nt_{1}]-[ns_{1}]\right]\]

and to remark that \(f_{n}(s_{1})=\sqrt{n}\mathcal{H}_{[ns_{1}]}(\psi_{n})(x)\) with \(\psi_{n}\in\mathcal{B}_{\delta}\). One can easily check that \((\psi_{n})_{n\geq 0}\) converges pointwise to some function \(\psi\in\mathcal{B}_{\delta}\), but it is much more complicated to prove that this convergence holds in \(\mathcal{B}_{\delta}\). This can be done when \(\delta\geq 1\) with a strong moment assumption (namely moments of order \(\geq 4\) for \(\mu^{\prime}\)) by using a recent result in [11]; unfortunately, such a result does not exist for the Brownian meander, which is useful in the sequel for the convergence of multidimensional distributions. This forces us to propose another strategy that we now present. For this purpose, for any \(n\geq 1\) and any fixed \(0<s_{1}<t_{1}\), we decompose \(f_{n}(s_{1})\) as \(f_{n}(s_{1})=\sum_{y\geq 1}a_{n}(y)b_{n}(y)\), where

\[a_{n}(y):=nH_{[ns_{1}]}(x,y)\,\mathbb{P}[\tau^{S^{\prime}}(y)>[nt_{1}]-[ns_{1}]],\]
\[b_{n}(y):=\mathbb{E}\left[\varphi_{1}\left(\frac{y+S_{[nt_{1}]-[ns_{1}]}^{\prime}}{\sigma\sqrt{n}}\right)\mid\tau^{S^{\prime}}(y)>[nt_{1}]-[ns_{1}]\right]\]

and apply the following classical lemma with \(V=\mathbb{Z}^{+}\):

**Lemma 4.1**.: _Let \(V\) be denumerable and \((a_{n}(v))_{v\in V},(b_{n}(v))_{v\in V}\) be real sequences satisfying_

_(i) \(a_{n}(v)\geq 0\) for any \(n\geq 1,v\in V\) and \(\lim_{n\to+\infty}\sum_{v\in V}a_{n}(v)=A\),_

_(ii) for any \(\epsilon>0\), there exists a finite set \(V_{\epsilon}\subset V\) s.t. \(\sup_{n\geq 1}\sum_{v\notin V_{\epsilon}}a_{n}(v)<\epsilon\),_

_(iii) \(\lim_{n\to+\infty}b_{n}(v)=b\) for any \(v\in V\) and \(\sup_{n\geq 1,v\in V}|b_{n}(v)|<+\infty\)._

_Then \(\lim_{n\to+\infty}\sum_{v\in V}a_{n}(v)b_{n}(v)=Ab\)._

Let us check that these conditions are satisfied by the families \((a_{n}(y))_{y\geq 1},(b_{n}(y))_{y\geq 1}\) defined above.

Condition \((i)\). The sum \(\sum_{y\geq 1}a_{n}(y)\) may be written as

\[\sum_{y\geq 1}a_{n}(y)=\frac{1+o(1)}{\sqrt{s_{1}(t_{1}-s_{1})}}\ \sqrt{[ns_{1}]}\mathcal{H}_{[ns_{1}]}(\mathbf{h}^{\prime}_{[nt_{1}]-[ns_{1}]})(x). \tag{14}\]

On the one hand, the sequence \((\sqrt{[ns_{1}]}\mathcal{H}_{[ns_{1}]})_{n\geq 1}\) converges in \((\mathcal{L}(\mathcal{B}_{\delta}),\|\cdot\|_{\mathcal{B}_{\delta}})\) to the operator \(\frac{1}{2\pi\big{(}c\nu(\check{h}_{a})+c^{\prime}\nu(h^{\prime}_{d})\big{)}}\Pi\); on the other hand, the sequence \((\mathbf{h}^{\prime}_{[nt_{1}]-[ns_{1}]})_{n\geq 1}\) converges in \(\mathcal{B}_{\delta}\) to \(2c^{\prime}h^{\prime}_{d}\). Hence condition \((i)\) holds with

\[A=\frac{1}{\sqrt{s_{1}(t_{1}-s_{1})}}\frac{c^{\prime}\nu(h^{\prime}_{d})}{\pi\big{(}c\nu(\check{h}_{a})+c^{\prime}\nu(h^{\prime}_{d})\big{)}}=\frac{\gamma}{\pi\sqrt{s_{1}(t_{1}-s_{1})}}.\]

Condition \((ii)\). Fix \(\epsilon>0\). We want to find \(y_{\epsilon}\geq 1\) s.t. \(\sum_{y\geq y_{\epsilon}}a_{n}(y)\leq\epsilon\) for any \(n\geq 1\). By Lemma 2.2, there exists a constant \(C_{0}>0\) s.t.
\(0\leq\mathbf{h}^{\prime}_{k}(y)\leq C_{0}(1+y)\) for any \(y,k\geq 1\); hence, for \(y\geq y_{\epsilon}\),

\[0\leq\mathbf{h}^{\prime}_{k}(y)\leq C_{0}\left(1+\frac{y^{1+\delta}}{y_{\epsilon}^{\delta}}\right)\leq 2C_{0}\frac{1+y^{1+\delta}}{y_{\epsilon}^{\delta}}.\]

Consequently the function \(\mathbf{h}^{\prime}_{k}\mathbb{1}_{[y_{\epsilon},+\infty[}\) belongs to \(\mathcal{B}_{\delta}\) and \(|\mathbf{h}^{\prime}_{k}\mathbb{1}_{[y_{\epsilon},+\infty[}|_{\mathcal{B}_{\delta}}\leq\frac{2C_{0}}{y_{\epsilon}^{\delta}}\) for any \(k\geq 1\). By (14), it follows

\[0\leq\sum_{y\geq y_{\epsilon}}a_{n}(y)\preceq\underbrace{\sup_{n\geq 1}\sqrt{[ns_{1}]}\|\mathcal{H}_{[ns_{1}]}\|_{\mathcal{B}_{\delta}}}_{<+\infty}\ \underbrace{\sup_{n\geq 1}|\mathbf{h}^{\prime}_{[nt_{1}]-[ns_{1}]}\mathbb{1}_{[y_{\epsilon},+\infty[}|_{\mathcal{B}_{\delta}}}_{\preceq\frac{1}{y_{\epsilon}^{\delta}}}.\]

We conclude by choosing \(y_{\epsilon}\) large enough.

Condition \((iii)\). By (3), it holds with \(b=\int_{0}^{+\infty}\varphi_{1}(u)u\ \frac{\exp\left(\frac{-u^{2}}{2(t_{1}-s_{1})}\right)}{t_{1}-s_{1}}du\).

### Convergence of the multidimensional distributions

We focus here on the case \(m=2\); the case \(m\geq 3\) is handled by induction. We fix \(0<t_{1}<t_{2}\) and, for \(n\geq 1\) given, let \(\kappa=\kappa_{t_{1}}\) be the first crossing time after time \([nt_{1}]\), defined by

\[\kappa:=\min\{k>[nt_{1}]:X_{[nt_{1}]}X_{k}\leq 0\}.\]

As in the case \(m=1\), it holds

\[\mathbb{E}_{x}[\varphi_{1}(X^{(n)}(t_{1}))\varphi_{2}(X^{(n)}(t_{2}))]\approx\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{X_{[nt_{2}]}}{\sigma\sqrt{n}}\right)\right],\]

and the right hand side term may be decomposed as \(A_{0}(n)+A_{1}^{\pm}(n)+A_{2}^{\pm}(n)\), where

\[A_{0}(n):=\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{X_{[nt_{2}]}}{\sigma\sqrt{n}}\right),X_{[nt_{1}]}=0\right],\]
\[A_{1}^{\pm}(n):=\sum_{k_{2}=[nt_{1}]+1}^{[nt_{2}]}\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{X_{[nt_{2}]}}{\sigma\sqrt{n}}\right)\mathbb{1}_{[\kappa=k_{2}]}\mathbb{1}_{[\pm X_{[nt_{1}]}>0]}\right],\]

and

\[A_{2}^{\pm}(n):=\mathbb{E}_{x}\left[\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{X_{[nt_{2}]}}{\sigma\sqrt{n}}\right)\mathbb{1}_{[\kappa>[nt_{2}]]}\mathbb{1}_{[\pm X_{[nt_{1}]}>0]}\right].\]

As previously, the term \(A_{0}(n)\) tends to \(0\) since \((X_{n})_{n\geq 0}\) is null recurrent.
\(\bullet\) Estimate of \(A_{1}^{\pm}(n)\)

\[A_{1}^{+}(n)\approx\sum_{k_{1}=1}^{[nt_{1}]-1}\sum_{k_{2}=[nt_{1}]+1}^{[nt_{2}]}\sum_{\ell\geq 1}\sum_{y\geq 1}\sum_{z\geq 1}\sum_{w\leq 0}\mathbb{E}_{x}\bigg{[}\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{X_{[nt_{2}]}}{\sigma\sqrt{n}}\right),C_{\ell}=k_{1},X_{k_{1}}=y,y+\xi_{k_{1}+1}^{\prime}>0,\ldots,y+\xi_{k_{1}+1}^{\prime}+\ldots+\xi_{k_{2}-2}^{\prime}>0,y+\xi_{k_{1}+1}^{\prime}+\ldots+\xi_{k_{2}-1}^{\prime}=z,y+\xi_{k_{1}+1}^{\prime}+\ldots+\xi_{k_{2}}^{\prime}=w\bigg{]}\]
\[=\sum_{k_{1}=1}^{[nt_{1}]-1}\sum_{k_{2}=[nt_{1}]+1}^{[nt_{2}]}\sum_{\ell\geq 1}\sum_{y\geq 1}\sum_{z\geq 1}\sum_{w\leq 0}\mathbb{E}_{x}\bigg{[}\varphi_{1}\left(\frac{y+\xi_{k_{1}+1}^{\prime}+\ldots+\xi_{[nt_{1}]}^{\prime}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{X_{[nt_{2}]}}{\sigma\sqrt{n}}\right),C_{\ell}=k_{1},X_{k_{1}}=y,y+\xi_{k_{1}+1}^{\prime}>0,\ldots,y+\xi_{k_{1}+1}^{\prime}+\ldots+\xi_{k_{2}-2}^{\prime}>0,y+\xi_{k_{1}+1}^{\prime}+\ldots+\xi_{k_{2}-1}^{\prime}=z\bigg{]}\mathbb{P}[\xi_{k_{2}}^{\prime}=w-z]\]
\[=\sum_{k_{1}=1}^{[nt_{1}]-1}\sum_{y\geq 1}H_{k_{1}}(x,y)\sum_{k_{2}=[nt_{1}]+1}^{[nt_{2}]}\sum_{z\geq 1}\ \mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-k_{1}}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>k_{2}-k_{1}-1,y+S_{k_{2}-k_{1}-1}^{\prime}=z\bigg{]}\sum_{w\leq 0}\mathbb{E}_{w}\bigg{[}\varphi_{2}\left(\frac{X_{[nt_{2}]-k_{2}}}{\sigma\sqrt{n}}\right)\bigg{]}\mu^{\prime}(w-z).\]

For any \((s_{1},s_{2})\in[0,t_{1}]\times[t_{1},t_{2}]\) and \(n\geq 1\), let \(g_{n}\) be the function defined by

\[g_{n}(s_{1},s_{2})=n^{2}\sum_{y\geq 1}H_{[ns_{1}]}(x,y)\sum_{z\geq 1}\mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S_{[nt_{1}]-[ns_{1}]}^{\prime}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>[ns_{2}]-[ns_{1}]-1,y+S_{[ns_{2}]-[ns_{1}]-1}^{\prime}=z\bigg{]}\sum_{w\leq 0}\mathbb{E}_{w}\bigg{[}\varphi_{2}\left(\frac{X_{[nt_{2}]-[ns_{2}]}}{\sigma\sqrt{n}}\right)\bigg{]}\mu^{\prime}(w-z)\]

if \(0\leq s_{1}<\frac{[nt_{1}]}{n}\) and \(\frac{[nt_{1}]+1}{n}\leq s_{2}\leq\frac{[nt_{2}]}{n}\), and \(0\) otherwise. Hence,

\[A_{1}^{+}(n)=\int_{0}^{t_{1}}\int_{t_{1}}^{t_{2}}g_{n}(s_{1},s_{2})ds_{1}ds_{2}+O\left(\frac{1}{\sqrt{n}}\right).\]

The convergence of the term \(A_{1}^{+}(n)\) as \(n\rightarrow+\infty\) is a consequence of the two following properties, whose proofs are postponed to the end of the present subsection:

\(\bullet\) for any \(n\geq 1\),

\[|g_{n}(s_{1},s_{2})|\preceq\frac{1+|x|}{\sqrt{s_{1}(s_{2}-s_{1})^{3}}}\in L^{1}([0,t_{1}]\times[t_{1},t_{2}]); \tag{15}\]

\(\bullet\) for any \((s_{1},s_{2})\in[0,t_{1}]\times[t_{1},t_{2}]\),

\[\lim_{n\rightarrow+\infty}g_{n}(s_{1},s_{2})=\frac{\gamma}{\pi^{2}}\int_{0}^{+\infty}\int_{-\infty}^{+\infty}\varphi_{1}(u)\tilde{\varphi_{2}}(v)\,u^{2}\,\frac{e^{-\frac{v^{2}}{2(t_{2}-s_{2})}}\,e^{-\frac{u^{2}(s_{2}-s_{1})}{2(t_{1}-s_{1})(s_{2}-t_{1})}}}{\sqrt{s_{1}(t_{1}-s_{1})^{3}(s_{2}-t_{1})^{3}(t_{2}-s_{2})}}\,dudv.
\tag{16}\]

Indeed, applying the Lebesgue dominated convergence theorem, we obtain

\[\lim_{n\rightarrow+\infty}A_{1}^{+}(n)=\frac{\gamma}{\pi^{2}}\int_{0}^{+\infty}\int_{-\infty}^{+\infty}\varphi_{1}(u)\tilde{\varphi_{2}}(v)\bigg{(}\int_{0}^{t_{1}}\int_{t_{1}}^{t_{2}}\frac{u^{2}\,e^{-\frac{v^{2}}{2(t_{2}-s_{2})}}\,e^{-\frac{u^{2}(s_{2}-s_{1})}{2(t_{1}-s_{1})(s_{2}-t_{1})}}}{\sqrt{s_{1}(t_{1}-s_{1})^{3}(s_{2}-t_{1})^{3}(t_{2}-s_{2})}}\,ds_{1}ds_{2}\bigg{)}dudv\]
\[=\frac{\gamma}{\pi^{2}}\frac{\sqrt{2\pi t_{1}}}{t_{1}}\int_{0}^{+\infty}\int_{-\infty}^{+\infty}\varphi_{1}(u)\tilde{\varphi_{2}}(v)\,|u|\,e^{-\frac{u^{2}}{2t_{1}}}\bigg{(}\int_{t_{1}}^{t_{2}}\frac{e^{-\frac{u^{2}}{2(s_{2}-t_{1})}}\,e^{-\frac{v^{2}}{2(t_{2}-s_{2})}}}{\sqrt{(t_{2}-s_{2})(s_{2}-t_{1})^{3}}}\,ds_{2}\bigg{)}dudv\]
\[=\frac{2\gamma}{\pi\sqrt{t_{1}(t_{2}-t_{1})}}\int_{0}^{+\infty}\int_{-\infty}^{+\infty}\varphi_{1}(u)\tilde{\varphi_{2}}(v)\,e^{-\frac{u^{2}t_{2}+v^{2}t_{1}+2|uv|t_{1}}{2t_{1}(t_{2}-t_{1})}}\,dudv,\]

which can be rewritten as

\[\lim_{n\rightarrow+\infty}A_{1}^{+}(n)=\frac{2\gamma^{2}}{\pi\sqrt{t_{1}(t_{2}-t_{1})}}\int_{0}^{+\infty}\int_{0}^{+\infty}\varphi_{1}(u)\varphi_{2}(v)e^{-\frac{u^{2}}{2t_{1}}}e^{-\frac{(u+v)^{2}}{2(t_{2}-t_{1})}}dudv+\frac{2\gamma(1-\gamma)}{\pi\sqrt{t_{1}(t_{2}-t_{1})}}\int_{0}^{+\infty}\int_{-\infty}^{0}\varphi_{1}(u)\varphi_{2}(v)e^{-\frac{u^{2}}{2t_{1}}}e^{-\frac{(u-v)^{2}}{2(t_{2}-t_{1})}}dudv, \tag{17}\]

by using the classical integral \(\int_{0}^{+\infty}\frac{1}{\sqrt{x}}\exp\left(-\lambda_{1}x-\frac{\lambda_{2}}{x}\right)dx=\sqrt{\frac{\pi}{\lambda_{1}}}e^{-2\sqrt{\lambda_{1}\lambda_{2}}}\) for any \(\lambda_{1}>0\) and \(\lambda_{2}\geq 0\). The same argument holds for the term \(A_{1}^{-}(n)\) and yields

\[\lim_{n\rightarrow+\infty}A_{1}^{-}(n)=\frac{2(1-\gamma)^{2}}{\pi\sqrt{t_{1}(t_{2}-t_{1})}}\int_{-\infty}^{0}\int_{-\infty}^{0}\varphi_{1}(u)\varphi_{2}(v)e^{-\frac{u^{2}}{2t_{1}}}e^{-\frac{(u+v)^{2}}{2(t_{2}-t_{1})}}dudv+\frac{2\gamma(1-\gamma)}{\pi\sqrt{t_{1}(t_{2}-t_{1})}}\int_{-\infty}^{0}\int_{0}^{+\infty}\varphi_{1}(u)\varphi_{2}(v)e^{-\frac{u^{2}}{2t_{1}}}e^{-\frac{(u-v)^{2}}{2(t_{2}-t_{1})}}dudv. \tag{18}\]

\(\bullet\) Estimate of \(A_{2}^{+}(n)\)

\[A_{2}^{+}(n)=\sum_{k=1}^{[nt_{1}]-1}\sum_{\ell\geq 1}\sum_{y\geq 1}\mathbb{E}_{x}\bigg{[}\varphi_{1}\left(\frac{X_{[nt_{1}]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{X_{[nt_{2}]}}{\sigma\sqrt{n}}\right),C_{\ell}=k,X_{k}=y,y+\xi_{k+1}^{\prime}>0,\ldots,y+\xi_{k+1}^{\prime}+\ldots+\xi_{[nt_{1}]}^{\prime}>0,\ldots,y+\xi_{k+1}^{\prime}+\ldots+\xi_{[nt_{2}]}^{\prime}>0\bigg{]}\]
\[=\sum_{k=1}^{[nt_{1}]-1}\sum_{y\geq 1}H_{k}(x,y)\mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S_{[nt_{1}]-k}^{\prime}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{y+S_{[nt_{2}]-k}^{\prime}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>[nt_{2}]-k\bigg{]}.\]

For any \(n\geq 1\), let \(g_{n}:r\mapsto g_{n}(r)\) be the real function defined on \([0,t_{1}]\) by

\[g_{n}(r):=n\sum_{y\geq 1}H_{[nr]}(x,y)\mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-[nr]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{y+S^{\prime}_{[nt_{2}]-[nr]}}{\sigma\sqrt{n}}\right),\tau^{S^{\prime}}(y)>[nt_{2}]-[nr]\bigg{]}\]

if \(0\leq r<\frac{[nt_{1}]}{n}\) and \(g_{n}(r)=0\) if \(\frac{[nt_{1}]}{n}\leq r\leq t_{1}\).
In the same way as above, we set \[a_{n}(y):=nH_{[nr]}(x,y)\mathbb{P}[\tau^{S^{\prime}}(y)>[nt_{2}]-[nr]]\] and \[b_{n}(y):=\mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-[nr]}}{\sigma\sqrt{n}}\right)\varphi_{2}\left(\frac{y+S^{\prime}_{[nt_{2}]-[nr]}}{\sigma\sqrt{n}}\right)\mid\tau^{S^{\prime}}(y)>[nt_{2}]-[nr]\bigg{]},\] so that \(g_{n}(r)=\sum_{y\geq 1}a_{n}(y)b_{n}(y)\). Sequences \((a_{n}(y))_{y\geq 1}\) and \((b_{n}(y))_{y\geq 1}\) satisfy the assumptions of Lemma 4.1. Indeed, the limit of \(\sum_{y\geq 1}a_{n}(y)\) is given by (14) and condition \((ii)\) of this lemma has been checked previously. Furthermore, by Theorem 3.2 in [3] and Theorems 2.23 and 3.4 in [12], it holds \[\lim_{n\rightarrow+\infty}b_{n}(y) =\lim_{n\rightarrow+\infty}\mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-[nr]}}{\sigma\sqrt{[nt_{2}]-[nr]}}\frac{\sqrt{[nt_{2}]-[nr]}}{\sqrt{n}}\right)\] \[\qquad\qquad\varphi_{2}\left(\frac{y+S^{\prime}_{[nt_{2}]-[nr]}}{\sigma\sqrt{[nt_{2}]-[nr]}}\frac{\sqrt{[nt_{2}]-[nr]}}{\sqrt{n}}\right)\mid\tau^{S^{\prime}}(y)>[nt_{2}]-[nr]\bigg{]}\] \[=\frac{1}{\sqrt{2\pi(t_{2}-t_{1})}}\int_{0}^{+\infty}\int_{0}^{+\infty}\varphi_{1}(u)\varphi_{2}(v)\frac{\sqrt{t_{2}-r}}{\sqrt{(t_{1}-r)^{3}}}\,u\,e^{\frac{-u^{2}}{2(t_{1}-r)}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\bigg{(}e^{-\frac{(u-v)^{2}}{2(t_{2}-t_{1})}}-e^{\frac{-(u+v)^{2}}{2(t_{2}-t_{1})}}\bigg{)}dudv.\] It immediately yields \[\lim_{n\rightarrow+\infty}A_{2}^{+}(n) =\frac{\gamma}{\pi}\int_{0}^{t_{1}}\bigg{(}\frac{1}{\sqrt{2\pi(t_{2}-t_{1})}}\int_{0}^{+\infty}\int_{0}^{+\infty}\varphi_{1}(u)\varphi_{2}(v)\] \[\qquad\qquad\qquad\times\frac{\sqrt{t_{2}-r}}{\sqrt{r(t_{2}-r)(t_{1}-r)^{3}}}\,u\,e^{\frac{-u^{2}}{2(t_{1}-r)}}\bigg{(}e^{-\frac{(u-v)^{2}}{2(t_{2}-t_{1})}}-e^{\frac{-(u+v)^{2}}{2(t_{2}-t_{1})}}\bigg{)}dudv\bigg{)}dr\] \[=\frac{\gamma}{\pi\sqrt{t_{1}(t_{2}-t_{1})}}\int_{0}^{+\infty}\int_{0}^{+\infty}\varphi_{1}(u)\varphi_{2}(v)e^{\frac{-u^{2}}{2t_{1}}}\bigg{(}e^{-\frac{(u-v)^{2}}{2(t_{2}-t_{1})}}-e^{\frac{-(u+v)^{2}}{2(t_{2}-t_{1})}}\bigg{)}dudv. \tag{19}\] Analogously, one gets \[\lim_{n\rightarrow+\infty}A_{2}^{-}(n)=\frac{1-\gamma}{\pi\sqrt{t_{1}(t_{2}-t_{1})}}\int_{-\infty}^{0}\int_{-\infty}^{0}\varphi_{1}(u)\varphi_{2}(v)e^{\frac{-u^{2}}{2t_{1}}}\bigg{(}e^{-\frac{(u-v)^{2}}{2(t_{2}-t_{1})}}-e^{-\frac{(u+v)^{2}}{2(t_{2}-t_{1})}}\bigg{)}dudv. \tag{20}\] Combining (17), (18), (19) and (20), we conclude \[\lim_{n\rightarrow+\infty}\mathbb{E}_{x}\left[\varphi_{1}(X^{(n)}(t_{1}))\varphi_{2}(X^{(n)}(t_{2}))\right]=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\varphi_{1}(u)\varphi_{2}(v)p_{t_{1}}^{\gamma}(0,u)p_{t_{2}-t_{1}}^{\gamma}(u,v)dudv.\] **Proof of properties (15) and (16)** Following the same strategy as in the one dimensional case, we decompose \(g_{n}(s_{1},s_{2})\) as \(g_{n}(s_{1},s_{2})=\sum_{y\geq 1}\sum_{z\geq 1}\sum_{w\leq 0}a_{n}(y,z,w)b_{n}(y,z,w)\), where \[a_{n}(y,z,w):=n^{2}H_{[ns_{1}]}(x,y)\mathbb{P}[\tau^{S^{\prime}}(y)>[ns_{2}]-[ns_{1}]-1,y+S^{\prime}_{[ns_{2}]-[ns_{1}]-1}=z]\ \mu^{\prime}(w-z)\] and \[b_{n}(y,z,w):= \mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-[ns_{1}]}}{\sigma\sqrt{n}}\right)\mid\tau^{S^{\prime}}(y)>[ns_{2}]-[ns_{1}]-1,y+S^{\prime}_{[ns_{2}]-[ns_{1}]-1}=z\bigg{]}\] \[\qquad\times\mathbb{E}_{w}\bigg{[}\varphi_{2}\left(\frac{X_{[nt_{2}]-[ns_{2}]}}{\sigma\sqrt{n}}\right)\bigg{]}.\] Properties (15) and (16) are a direct consequence of Lemma 4.1, applied to the families \((a_{n}(y,z,w))_{y,z\geq 1,w\leq 0},(b_{n}(y,z,w))_{y,z\geq 1,w\leq 0}\); it thus suffices to check that conditions \((i),(ii)\) and \((iii)\) of this lemma are satisfied in this new situation. Condition \((i)\).
The sum \(\sum_{y\geq 1}\sum_{z\geq 1}\sum_{w\leq 0}a_{n}(y,z,w)\) may be written as \[\sum_{y\geq 1}\sum_{z\geq 1}\sum_{w\leq 0}a_{n}(y,z,w)=\frac{1+o(1)}{\sqrt{s_{1}(s_{2}-s_{1})^{3}}}\sqrt{[ns_{1}]}\,\mathcal{H}_{[ns_{1}]}(\mathbf{b}^{\prime}_{[ns_{2}]-[ns_{1}]})(x), \tag{21}\] where we set \(\mathbf{b}^{\prime}_{k}(y)=k^{3/2}\mathbb{P}[\tau^{S^{\prime}}(y)=k]\). As previously, the sequence \((\sqrt{[ns_{1}]}\mathcal{H}_{[ns_{1}]})_{n\geq 1}\) converges in \((\mathcal{L}(\mathcal{B}_{\delta}),\|\cdot\|_{\mathcal{B}_{\delta}})\) to the operator \(\frac{1}{2\pi\big{(}c\nu(\check{h}_{a})+c^{\prime}\nu(h^{\prime}_{d})\big{)}}\Pi\); furthermore, the sequence \((\mathbf{b}^{\prime}_{[ns_{2}]-[ns_{1}]})_{n\geq 1}\) converges in \(\mathcal{B}_{\delta}\) to the function \(c^{\prime}h^{\prime}_{d}\). Hence condition \((i)\) holds with \[A=\frac{1}{\sqrt{s_{1}(s_{2}-s_{1})^{3}}}\frac{c^{\prime}\nu(h^{\prime}_{d})}{2\pi\big{(}c\nu(\check{h}_{a})+c^{\prime}\nu(h^{\prime}_{d})\big{)}}=\frac{\gamma}{2\pi\sqrt{s_{1}(s_{2}-s_{1})^{3}}}.\] Condition \((ii)\). Fix \(\epsilon>0\) and \(y_{\epsilon}\geq 1\). As above for \(\mathbf{h}^{\prime}_{k}\mathbb{I}_{[y_{\epsilon},+\infty)}\), the function \(\mathbf{b}^{\prime}_{k}\mathbb{I}_{[y_{\epsilon},+\infty)}\) belongs to \(\overline{\mathcal{B}_{\delta}}\) and \(|\mathbf{b}^{\prime}_{k}\mathbb{I}_{[y_{\epsilon},+\infty)}|_{\mathcal{B}_{\delta}}\leq\frac{C_{1}}{y_{\epsilon}}\) for some constant \(C_{1}>0\). By (21), it follows \[0\leq\sum_{y\geq y_{\epsilon},z\geq 1,w\leq 0}a_{n}(y,z,w) \preceq\sqrt{[ns_{1}]}\mathcal{H}_{[ns_{1}]}(\mathbf{b}^{\prime}_{[ns_{2}]-[ns_{1}]}\mathbb{I}_{[y_{\epsilon},+\infty)})(x),\] \[\preceq\underbrace{\sup_{n\geq 1}\|\sqrt{[ns_{1}]}\mathcal{H}_{[ns_{1}]}\|_{\mathcal{B}_{\delta}}}_{<+\infty}\underbrace{\sup_{n\geq 1}|\mathbf{b}^{\prime}_{[ns_{2}]-[ns_{1}]}\mathbb{I}_{[y_{\epsilon},+\infty)}|_{\mathcal{B}_{\delta}}}_{\preceq\frac{1}{y_{\epsilon}}}.\] This last right hand side term is \(<\epsilon\) for sufficiently large \(y_{\epsilon}\). Furthermore, \(0\leq a_{n}(y,z,w)\preceq(1+y)(1+z)\mu^{\prime}(w-z)\) for any fixed \(y,z,w\) and any \(n\geq 1\); hence, for any \(1\leq y<y_{\epsilon}\), it holds \(\sum_{z+|w|>t}a_{n}(y,z,w)<\epsilon\) if \(t\) is large enough since \(\sum_{z\geq 1}z\mu^{\prime}(]-\infty,-z])<+\infty\). This completes the argument. Condition \((iii)\).
On the one hand, by (4), for any \(y,z\geq 1\), \[\lim_{n\rightarrow+\infty}\mathbb{E}\bigg{[}\varphi_{1}\left(\frac{y+S^{\prime}_{[nt_{1}]-[ns_{1}]}}{\sigma\sqrt{n}}\right)\mid\tau^{S^{\prime}}(y)>[ns_{2}]-[ns_{1}]-1,y+S^{\prime}_{[ns_{2}]-[ns_{1}]-1}=z\bigg{]}\] \[=\int_{0}^{+\infty}2\varphi_{1}\big{(}u^{\prime}\sqrt{s_{2}-s_{1}}\big{)}\exp\left(\frac{-u^{\prime 2}}{2\,\frac{t_{1}-s_{1}}{s_{2}-s_{1}}\,\frac{s_{2}-t_{1}}{s_{2}-s_{1}}}\right)\frac{u^{\prime 2}}{\sqrt{2\pi\,\frac{(t_{1}-s_{1})^{3}(s_{2}-t_{1})^{3}}{(s_{2}-s_{1})^{6}}}}\,du^{\prime}\] \[=\frac{2}{\sqrt{2\pi}}\int_{0}^{+\infty}\varphi_{1}(u)\exp\left(\frac{-u^{2}}{2\,\frac{(t_{1}-s_{1})(s_{2}-t_{1})}{s_{2}-s_{1}}}\right)\frac{u^{2}}{\sqrt{\frac{(t_{1}-s_{1})^{3}(s_{2}-t_{1})^{3}}{(s_{2}-s_{1})^{3}}}}\,du.\] On the other hand, the one dimensional case \(m=1\) studied above yields, for any \(w\leq 0\), \[\lim_{n\rightarrow+\infty}\mathbb{E}_{w}\bigg{[}\varphi_{2}\left(\frac{X_{[nt_{2}]-[ns_{2}]}}{\sigma\sqrt{n}}\right)\bigg{]}=\int_{\mathbb{R}}\tilde{\varphi}_{2}(v)\frac{2\exp\left(\frac{-v^{2}}{2(t_{2}-s_{2})}\right)}{\sqrt{2\pi(t_{2}-s_{2})}}dv\] with \(\tilde{\varphi}_{2}(v)=\gamma\varphi_{2}(v)\mathbb{1}_{(0,+\infty)}(v)+(1-\gamma)\varphi_{2}(v)\mathbb{1}_{(-\infty,0)}(v)\). ### Tightness of the sequence \(\left\{X^{(n)}\right\}_{n\geq 1}\) Let us recall that the modulus of continuity of a function \(f:[0,T]\rightarrow\mathbb{R}\) is defined by \[\omega_{f}(\delta):=\sup\{|f(t)-f(s)|:t,s\in[0,T]\,s.t.\,|t-s|\leq\delta\}.\] By Theorems 7.1 and 7.3 in [2], we have to show that the following conditions hold: 1. For every \(\eta>0\), there exist \(a>0\) and \(n_{\eta}\geq 1\) such that \[\mathbb{P}[|X^{(n)}(0)|\geq a]\leq\eta,\quad\forall n\geq n_{\eta}.\] 2. For every \(\epsilon>0\) and \(\eta>0\), there exist \(\delta\in(0,1)\) and \(n_{\epsilon,\eta}\geq 1\) such that \[\mathbb{P}[\omega_{X^{(n)}}(\delta)\geq\epsilon]\leq\eta,\quad\forall n\geq n_{\epsilon,\eta}.\] Proof. Condition (i) is obviously satisfied. Let us now check condition (ii). Set \(I_{n,\delta}:=\{(i,j)\in\mathbb{N}^{2}\mid 1\leq i<j\leq n\text{ and }|i-j|\leq n\delta\}\) and note that we have \[\omega_{X^{(n)}}(\delta)\leq\frac{7}{\min\{\sigma,\sigma^{\prime}\}\sqrt{n}}\left(\sup_{(i,j)\in I_{n,\delta}}|S_{i}-S_{j}|+\sup_{(i,j)\in I_{n,\delta}}|S^{\prime}_{i}-S^{\prime}_{j}|\right). \tag{22}\] Moreover, by [2] (see Chapter 7) one also gets \[\lim_{\delta\to 0}\lim_{n\to+\infty}\mathbb{P}\left[\frac{1}{\sigma\sqrt{n}}\sup_{(i,j)\in I_{n,\delta}}|S_{i}-S_{j}|\geq\epsilon\right]=\lim_{\delta\to 0}\lim_{n\to+\infty}\mathbb{P}\left[\frac{1}{\sigma^{\prime}\sqrt{n}}\sup_{(i,j)\in I_{n,\delta}}|S^{\prime}_{i}-S^{\prime}_{j}|\geq\epsilon\right]=0. \tag{23}\] The condition (ii) immediately follows by (22) and (23). Hence, we conclude that the sequence \(\{X^{(n)}\}_{n\geq 1}\) is tight.
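As an aside, condition (ii) is easy to visualize numerically. The following minimal Python sketch (simple \(\pm 1\) steps, so \(\sigma=1\); purely illustrative, not part of the proof) estimates the modulus of continuity of one rescaled path and shows it shrinking with \(\delta\):

```python
# Empirical modulus of continuity of the rescaled walk S_[nt]/(sigma*sqrt(n)):
# omega(delta) should concentrate near 0 as delta -> 0 (tightness).
import numpy as np

def modulus_of_continuity(path, delta):
    # path[k] = f(k/n); compare all index pairs at distance <= delta*n
    n = len(path) - 1
    w = max(1, int(delta * n))
    return max(np.abs(path[k:] - path[:-k]).max() for k in range(1, w + 1))

rng = np.random.default_rng(0)
n = 10_000
walk = np.concatenate([[0.0], np.cumsum(rng.choice([-1.0, 1.0], size=n))])
X = walk / np.sqrt(n)                     # sigma = 1 for +/-1 steps
for delta in (0.2, 0.05, 0.01):
    print(delta, modulus_of_continuity(X, delta))
```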
This paper is devoted to an invariance principle for Kemperman's model of oscillating random walk on $\mathbb{Z}$. This result appears as an extension of the invariance principle theorem for classical random walks on $\mathbb{Z}$ or reflected random walks on $\mathbb{N}_0$. Relying on some natural Markov sub-process which takes into account the oscillation of the random walks between $\mathbb{Z}^-$ and $\mathbb{Z}^+$, we first construct an aperiodic sequence of renewal operators acting on a suitable Banach space and then apply a powerful theorem proved by S. Gou\'ezel.
2309.03980
Enhancement Pattern Mapping for Detection of Hepatocellular Carcinoma in Patients with Cirrhosis
Background and Aims: Limited methods exist to accurately characterize risk of malignant progression of liver lesions in patients undergoing surveillance for hepatocellular carcinoma (HCC). Enhancement pattern mapping (EPM) measures voxel-based root mean square deviation (RMSD) and improves the contrast-to-noise ratio (CNR) of liver lesions on standard of care imaging. This study investigates the utilization of EPM to differentiate between HCC versus benign cirrhotic tissue. Methods: Patients with liver cirrhosis undergoing MRI surveillance at a single, tertiary-care hospital were studied prospectively. Controls (n=99) were patients without lesions during surveillance or progression to HCC. Cases (n=48) were defined as patients with LI-RADS 3 and 4 lesions who developed HCC within the study period. RMSD measured with EPM was compared to the signal from MRI arterial and portovenous (PV) phases. EPM signals of liver parenchyma between cases and controls were quantitatively validated on an independent patient set using cross validation. Results: With EPM, RMSD of 0.37 was identified as a quantitative cutoff for distinguishing lesions that progress to HCC from background parenchyma on pre-diagnostic scans with an area under the curve (AUC) of 0.83 (CI: 0.73-0.94) and a sensitivity, specificity, and accuracy of 0.65, 0.97, and 0.89, respectively. At the time of diagnostic scans, a sensitivity, specificity, and accuracy of 0.79, 0.93, and 0.88 was achieved with an AUC of 0.89 (CI: 0.82-0.96). EPM RMSD signals of background parenchyma in cases and controls were similar (case EPM: 0.22 +/- 0.08, control EPM: 0.22 +/- 0.09, p=0.8). Conclusions: EPM differentiates between HCC and non-cancerous parenchyma in a surveillance population and may aid in early detection of HCC. Future directions involve applying EPM for risk stratification of indeterminate lesions.
Newsha Nikzad, David Thomas Fuentes, Millicent Roach, Tasadduk Chowdhury, Matthew Cagley, Mohamed Badawy, Ahmed Elkhesen, Manal Hassan, Khaled Elsayes, Laura Beretta, Eugene Jon Koay, Prasun Kumar Jalal
2023-09-07T19:26:44
http://arxiv.org/abs/2309.03980v2
# Enhancement Pattern Mapping for Detection of Hepatocellular Carcinoma in Patients with Cirrhosis ###### Abstract Background and Aims: Limited methods exist to accurately characterize risk of malignant progression of liver lesions in patients undergoing surveillance for hepatocellular carcinoma (HCC). Enhancement pattern mapping (EPM) measures voxel-based root mean square deviation (RMSD) and improves the contrast-to-noise ratio (CNR) of liver lesions on standard of care imaging. This study investigates the utilization of EPM to differentiate between HCC versus benign cirrhotic tissue. Methods: Patients with liver cirrhosis undergoing MRI surveillance at a single, tertiary-care hospital were studied prospectively. Controls (n=99) were patients without lesions during surveillance or progression to HCC. Cases (n=48) were defined as patients with LI-RADS 3 and 4 lesions who developed HCC within the study period. RMSD measured with EPM was compared to the signal from MRI arterial and portovenous (PV) phases. EPM signals of liver parenchyma between cases and controls were quantitatively validated on an independent patient set using cross validation. Results: With EPM, RMSD of 0.37 was identified as a quantitative cutoff for distinguishing lesions that progress to HCC from background parenchyma on pre-diagnostic scans with an area under the curve (AUC) of 0.83 (CI: 0.73-0.94) and a sensitivity, specificity, and accuracy of 0.65, 0.97, and 0.89, respectively. At the time of diagnostic scans, a sensitivity, specificity, and accuracy of 0.79, 0.93, and 0.88 was achieved with an AUC of 0.89 (CI: 0.82-0.96). EPM RMSD signals of background parenchyma in cases and controls were similar (case EPM: 0.22 \(\pm\) 0.08, control EPM: 0.22 \(\pm\) 0.09, p=0.8). Conclusions: EPM differentiates between HCC and non-cancerous parenchyma in a surveillance population and may aid in early detection of HCC. Future directions involve applying EPM for risk stratification of indeterminate lesions. ## 3 Introduction The rapidly rising incidence of hepatocellular carcinoma (HCC) can be attributed to several risk factors, including metabolic syndrome, alcohol, viral hepatitis related to HBV or HCV, and other genetic and environmental etiologies. Morbidity and mortality related to HCC remain major challenges for the healthcare system throughout the world. Guidelines recommend surveillance for patients with cirrhosis for early detection of HCC to improve clinical outcomes. In patients with cirrhosis, the American Association for the Study of Liver Diseases (AASLD) recommends surveillance with abdominal ultrasound (US) every 6 months, with or without serum alpha-fetoprotein (AFP).1 Similarly, the European Association for the Study of the Liver (EASL) recommends surveillance with abdominal US every 6 months and notes the suboptimal cost-effectiveness of biomarkers such as AFP.2 US is an affordable, safe, and accessible imaging method; however, Tzartzeva et al. 
found that the sensitivity of US alone or with AFP for early-stage HCC is only 47%. Suboptimal performance of US for small lesions motivates the use of contrast-enhanced magnetic resonance imaging (MRI),8 which detects smaller malignant lesions with higher sensitivity than US (84%). There is a need for new minimally invasive tools that better risk stratify patients under surveillance and detect HCC at earlier stages with higher sensitivity and specificity, given the limitations of US and MRI.1 While most current surveillance methods utilize clinical, demographic, and blood-based biomarkers, diagnostic methods utilize imaging, specifically computed tomography (CT) and MRI.10,13 The Liver Imaging Reporting and Data System (LI-RADS) represents an attempt to classify liver nodules for probability of malignancy with CT or MRI using a standardized method to minimize discrepancies among radiologists.14 LI-RADS uses tumor size and arterial phase hyperenhancement as defining features for risk stratification, while major features of an enhancing capsule, washout, and threshold growth can further increase the confidence in a malignant or benign diagnosis. However, there is significant heterogeneity within the LI-RADS groups, especially for LR-3 and LR-4 lesions.1,14 Limited diagnostic performance is especially evident in these categories, as an estimated 38% of LR-3 observations are ultimately found to be HCC. Recent studies have investigated machine learning and radiomic approaches for HCC detection, such as enhancement pattern mapping (EPM).10,11 EPM is a novel voxel-based signal analysis technique that quantifies the difference in enhancement over time of a given voxel in the liver compared to either a patient-specific or population-based normal liver model, providing a measurement and visualization of how different the signal is over an entire volume of interest. EPM expands on the available set of imaging features, and it provides an interpretable value based on angiogenesis and tumor perfusion, which are fundamental to HCC pathophysiology, improving its diagnostic performance. Previous medical literature has indicated that the EPM algorithm improves the contrast-to-noise ratio (CNR) for lesion detection in hepatobiliary malignancy.12 Therefore, we embarked on this study to test the hypothesis that EPM can differentiate between HCC and cirrhotic parenchyma on pre-diagnostic and diagnostic MRI scans, with a future view of applications of EPM for early detection of HCC in patients undergoing surveillance. ## 4 Materials and Methods Patient selection and characteristics. With approval from the Baylor College of Medicine Institutional Review Board (H-47711 and H-45208) and MD Anderson Cancer Center (PA14-0646), all consecutive patients presenting with cirrhosis at the Hepatology Clinic between 2012 and 2020 at a single tertiary care hospital (Baylor St. Luke's Medical Center) were prospectively followed. The institutional practice is HCC surveillance with contrast-enhanced MRI every six months. Patients with cirrhosis were included in the study if they had at least two consecutive contrast-enhanced MRIs, utilizing a liver protocol, for HCC surveillance. Patients with cirrhosis presenting with HCC in the initial scan were excluded. Cases were defined as patients with a LI-RADS 3 or 4 lesion identified in a pre-diagnostic scan that progressed to HCC in the subsequent diagnostic scan. Controls did not have a lesion (LI-RADS 3 or more) identified on a single timepoint scan.
Controls were age and sex-matched with cases to minimize confounding variables in patient characteristics. Patients were excluded after initial review if follow-up imaging was unavailable (1/166 patients, 0.6%).
Figure 1. Patient selection process.
Statistical analysis to describe patient demographics was performed using SPSS Statistics (Version 26, SPSS Ltd, Chicago, IL). Data were tested for normality and homogeneity of variance using a Shapiro-Wilk test. Based on this outcome (P<0.05) and after visual examination of each variable's histogram and QQ plot, data were reported as mean (SD) for normally distributed variables and as median (IQR 25th to 75th percentiles) for asymmetrically distributed data. A Student's t-test was used to compare unpaired symmetrical continuous variables, and the non-parametric Mann-Whitney test was used for unpaired asymmetrical continuous variables. The Chi-square test or Fisher's exact test was used to compare binary variables. A P<0.05 and a confidence interval (CI) of 95% were used to define statistical significance. Region of interest (ROI) Placement. Previous studies have demonstrated decreasing segmentation accuracy as a function of lesion size.17 Thus, one region of interest was manually placed on each observed HCC lesion to avoid potential confounding factors from lesion segmentation inaccuracies. Similarly, to avoid any potential confounding factors from auto-segmentation inaccuracies, ROIs in background liver parenchyma were manually and randomly sampled. The normal liver ROIs were selected by visually analyzing the parenchyma of the liver to avoid medium to large blood vessels, cysts, and bile ducts. The software application ITK-SNAP12 was used to perform segmentation of ROIs on the arterial phase scan for all cases. A total of 8 ROIs were selected per image slice, with 3 slices per case, giving a total of 24 ROIs per case. The diameter of each ROI was selected as 6 millimeters. ROIs of lesions were delineated on the arterial phase of the contrast-enhanced scan. Arterial phase hyperenhancement of the lesion, venous and delayed phase washout, in addition to the lesion's size and growth pattern, were the criteria used to assign category codes based on LI-RADS version 2018 guidelines.14 Enhancement pattern mapping. A three-dimensional, voxel-based method called the EPM algorithm was used for quantitative image analysis. A previous implementation of the EPM algorithm11 in multi-phase CT was modified for multi-phase MRI data. Briefly, the generalized enhancement pattern of the liver, i.e., the change in intensity values over the multi-phase MRI acquisition due to uptake and washout of contrast material, was acquired from the registered multi-phase MRI scans. The normal liver enhancement curve was obtained by fitting the MRI intensity values over time within user-selected ROIs, as described in the previous section, sampled uniformly across normal liver parenchyma from the given patient. Second, the root-mean-square deviation (RMSD) for each voxel was computed by taking the average across all time points of the squares of the differences between the generalized normal liver intensity and the voxel intensity and then taking the square root of the average. Finally, the calculated RMSD values of all voxels were mapped to the original MRI coordinates. The normal liver enhancement curve was obtained by fitting the MRI intensity values sampled from the normal liver ROIs over the period of contrast phases by a piece-wise smooth function, where each piece was a second-order polynomial.
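To make the pipeline concrete, here is a minimal Python sketch of the two EPM steps just described. All names are illustrative assumptions, and a per-phase ROI mean stands in for the paper's piecewise second-order polynomial fit; this is not the study's actual implementation (which was in MATLAB, as noted below).

```python
# Minimal EPM sketch (illustrative only).
import numpy as np

def normal_liver_curve(phases, roi_mask):
    # phases: (T, X, Y, Z) registered multi-phase MRI; roi_mask: (X, Y, Z) bool.
    # A per-phase ROI mean stands in for the paper's piecewise second-order
    # polynomial fit over the contrast phases.
    return np.array([phase[roi_mask].mean() for phase in phases])  # shape (T,)

def epm_rmsd(phases, curve):
    # Per-voxel RMSD: square root of the mean, across phases, of the squared
    # deviation between each voxel's intensity and the normal-liver curve.
    diff = phases - curve[:, None, None, None]
    return np.sqrt((diff ** 2).mean(axis=0))  # (X, Y, Z) map in MRI coordinates
```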
The EPM algorithm was implemented numerically using MATLAB (MathWorks, Inc.). Contrast-to-noise (CNR) measurements within the EPM image as well as the original multiphase MRI data were calculated as the average intensity value of the lesion minus the average intensity of healthy tissue, divided by the standard deviation of the intensity of the healthy tissue. Here the intensity average and standard deviation were calculated over the ROI within the lesion and healthy tissue, as described in the previous section. A Wilcoxon rank-sum test was used to evaluate the statistical significance of the difference in the EPM RMSD measurements between cases and controls. RMSD differences between LI-RADS categories of the cases were calculated using the EPM algorithm, and the mean and standard deviation of the EPM RMSD within each category were also evaluated. ROC analysis was applied to study the EPM RMSD threshold to discriminate cases and controls. The optimal cut point for the ROC analysis was defined as the point with the closest Euclidean distance to the perfect classifier (sensitivity=specificity=1). To validate the cut point, the variability of the optimal EPM cut point across 5 folds of the case and control dataset was evaluated. For each fold, the cut point was obtained from the training data, independent of the validation hold-out fold. On average, within 5-fold cross validation, the data are split into 80% training and 20% validation. ## 5 Results Data curation. We identified consecutive patients with cirrhosis who underwent surveillance at our high-volume Hepatology clinic from 2012-2020. 58 cases developed HCC on surveillance and fulfilled the selection criteria, and 48 cases were included in the study. 99 matched patients were designated controls. Cases and controls were similar in baseline characteristics, as outlined in Table 1. The median age was 60 years (IQR 55-64) for the combined cohort, 59 years (IQR 55-64) for controls, and 60 years (IQR 55-64) for cases. Most patients from both cohorts were male (n=89, 60.5%). Table 1. Baseline characteristics of cases and controls. EPM Analysis. On pre-diagnostic scans for cases, the mean CNR was as follows: 3.62 on EPM, 2.39 on arterial phase, and 1.05 on PV phase. Similarly, on diagnostic scans, the mean CNR was as follows: 3.58 on EPM, 2.35 on arterial phase, and 0.89 on PV phase. An example of the ROI including a lesion that was used to calculate the CNR of the image intensity on the arterial phase image and the EPM image is shown in Figure 2. Figure 2. Representative ROI used for CNR analysis of the image intensity on EPM (left) and on arterial phase MRI (right) with lesion indicated. Boxplots of the EPM RMSD within lesions and background liver parenchyma are shown in Figure 3. Lesions in cases demonstrated a greater median EPM RMSD on pre-diagnostic scans (LI-RADS 3 and 4) and diagnostic scans (HCC) compared to parenchyma (p < 0.05). The average EPM RMSD of the background liver parenchyma in pre-diagnostic and diagnostic scans of the case patients and the single timepoint scan of control patients was not statistically different (control parenchymal ROI = 0.22 \(\pm\) 0.09, case parenchymal ROI = 0.22 \(\pm\) 0.08, p = 0.8). Figure 3. Box plots of EPM RMSD between observed lesions for pre-diagnostic and diagnostic scans in cases. (Left) On pre-diagnostic scans in cases, the median EPM RMSD observed was 0.44 for lesions and 0.22 for parenchyma. (Right) On diagnostic scans, the median EPM RMSD observed was 0.50 for lesions and 0.22 for parenchyma.
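The CNR and cut-point definitions used above translate directly into code. A hedged Python sketch follows (function and variable names are assumptions, not the study's MATLAB code):

```python
# Sketch of the CNR and ROC cut-point computations described in the Methods.
import numpy as np
from sklearn.metrics import roc_curve
from sklearn.model_selection import StratifiedKFold

def cnr(image, lesion_mask, healthy_mask):
    # (mean lesion - mean healthy) / std healthy, over the two ROIs
    healthy = image[healthy_mask]
    return (image[lesion_mask].mean() - healthy.mean()) / healthy.std()

def optimal_cut_point(labels, scores):
    # ROC point with the closest Euclidean distance to the perfect
    # classifier, i.e., (FPR, TPR) = (0, 1).
    fpr, tpr, thr = roc_curve(labels, scores)
    return thr[np.argmin(np.hypot(fpr, tpr - 1.0))]

def cv_cut_points(labels, scores, k=5, seed=0):
    # One cut point per training fold, independent of the held-out fold.
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    return [optimal_cut_point(labels[tr], scores[tr])
            for tr, _ in skf.split(scores.reshape(-1, 1), labels)]
```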
ROC analysis for EPM RMSD in differentiating cases and controls on pre-diagnostic and single timepoint scans is shown in Figure 4(a). A sensitivity, specificity, and accuracy of 0.65, 0.97, and 0.89 is achieved at the optimal threshold of EPM RMSD 0.37. The area under the curve (AUC) of EPM RMSD at the pre-diagnostic time point was 0.83 (CI: 0.73-0.94). In a multivariable logistic regression model, adjusting for BMI, age, sex, and diabetes status, the association between EPM and eventual HCC status, as all pre-diagnostic time points progressed to HCC, is significant (OR=8.08e3). ROC analysis for EPM RMSD in differentiating cases and controls on diagnostic and single timepoint scans is shown in Figure 4(b). A sensitivity, specificity, and accuracy of 0.79, 0.93, and 0.88 was achieved at the optimal threshold of EPM RMSD 0.35. The AUC of EPM RMSD at the diagnostic time point was 0.89 (CI: 0.82-0.96). In a multivariable logistic regression model, adjusting for BMI, age, sex, and diabetes status, the association between EPM and HCC status is significant (OR=2.91e5). Further, ROC analysis for EPM RMSD in differentiating cases and controls on single timepoint and pre-diagnostic scans is shown in Figure 4(c). Five-fold cross validation was performed to estimate the out-of-sample performance on an independent test set. The range of optimum EPM RMSD thresholds for discriminating case and control included 0.43, 0.32, 0.30, 0.39, and 0.37. Five-fold analysis achieved an aggregate sensitivity, specificity, and accuracy of 0.75, 0.92, and 0.88, respectively. ROC analysis for EPM RMSD in differentiating cases and controls on single timepoint and diagnostic scans is shown in Figure 4(d). Five-fold cross validation was performed to estimate the out-of-sample performance. The range of optimum EPM RMSD thresholds for discriminating case and control included 0.45, 0.32, 0.35, 0.33, and 0.36. Five-fold analysis achieved an aggregate sensitivity, specificity, and accuracy of 0.83, 0.94, and 0.90, respectively. Figure 4. ROC analysis of optimal threshold for discriminating case and control for pre-diagnostic and diagnostic time points. (a) In-sample ROC analysis for EPM RMSD in differentiating cases and controls on pre-diagnostic and single timepoint scans is shown. Similarly, (b) in-sample ROC analysis for EPM RMSD in differentiating cases and controls on diagnostic and single timepoint scans is shown. (c) Five-fold cross validation ROC analysis for EPM RMSD in differentiating cases and controls on pre-diagnostic and single timepoint scans is shown. Similarly, (d) five-fold cross validation ROC analysis for EPM RMSD in differentiating cases and controls on diagnostic and single timepoint scans is shown. ## 6 Conclusion This study investigated the use of a novel EPM technique for distinguishing HCC from background cirrhotic parenchyma, including non-malignant lesions. Our results suggest that EPM successfully differentiates between cancerous lesions and non-cancerous parenchyma in a surveillance population approximately six months before HCC was diagnosed with the standard MRI protocol. EPM results show a significant increase in the CNR compared to the arterial and PV phase imaging on both pre-diagnostic and diagnostic scans. The CNR improvement on EPM is due exclusively to lesion signal amplification from calculated intensity differences between multiple phase scans.
With EPM, an RMSD of 0.37 served as a quantitative cutoff to characterize lesions that progress to HCC on pre-diagnostic imaging (AUC of 0.83, CI: 0.73-0.94). The median time between pre-diagnostic and diagnostic scans for cases was 6.8 months (IQR 5.5-11.4). Further, the sensitivity (0.79), specificity (0.93), and accuracy (0.88) of EPM show an overall improvement on the diagnostic scan over the sensitivity (0.65), specificity (0.97), and accuracy (0.89) observed for the pre-diagnostic scan. The improvement in EPM performance agrees with the intuition that lesions with higher LI-RADS scores would be expected to have a stronger EPM signal. Machine learning methods are a running theme in novel approaches for HCC detection and direct prediction of the LI-RADS score of a liver lesion; an overall accuracy of 60% has been reported for such direct score prediction. Dissimilarly, this study investigated an approach to differentiate lesions within the LR-3 and LR-4 categories, which are subject to greater heterogeneity and indeterminacy, from background cirrhotic parenchyma. The consistency in background signal between cases and controls suggests that the EPM RMSD value in liver parenchyma is unlikely to predict the future location of a new lesion. The current results indicate that an EPM RMSD cutoff of 0.37 would identify LR-3 and LR-4 lesions that progress to HCC. The range of cutoff values in five-fold cross validation agrees with this in-sample cutoff value. Sensitivity, specificity, and accuracy are comparable between in-sample and cross-validation analyses. The same analysis at the diagnostic time point provides a quantitative reference for the patients with known HCC. Parallels between pre-diagnostic and diagnostic values indicate that the EPM signal is likely to detect HCC earlier in the surveillance period. Dilation of the liver mask was important in achieving robust results. Generally, over-segmentation of the liver did not decrease the registration accuracy. However, under-segmentation of the liver was prone to more registration errors because the registration algorithm could not directly use the entire liver for guidance. Adding the dilation effectively ensured that the liver was included in the mask used to guide the image registration. This approach further had the effect of reducing the sensitivity of the approach to segmentation errors. An additional limitation of this study is that the image analysis pipeline was not fully automated. Manual ROIs were placed on lesions and liver parenchyma to facilitate EPM analysis. Our data did not include comparative analyses between different LI-RADS category lesions, as the primary goal of the study was to assess the foundational feasibility of using EPM to define the malignant potential of indeterminate lesions. Additionally, we did not investigate lesions in the LI-RADS M category, metastatic lesions, or cholangiocarcinoma. We also did not study the risk of progression to HCC based on cirrhosis etiology. In conclusion, EPM identifies quantifiable differences between HCC cases and controls in a population with cirrhosis under surveillance, and a threshold cutoff of 0.37 was found to be predictive approximately six months prior to diagnosis of HCC. Given the physiology integrated into this study's EPM methodology, the findings highlight the potential applications of EPM as an imaging biomarker in the early detection realm.
As lesion enhancement and transformation are intrinsically tied to visual evaluation by radiologists, this study provides a quantitative complement to the traditionally qualitative radiologic assessment. Future studies will include patients with LR-3 and LR-4 lesions without progression to HCC as a control group, and larger cohorts will be diversified so that sex, race, ethnicity, and cirrhosis etiology distributions are representative of the general population. This may subsequently allow for assessment of EPM in risk stratification for patients under surveillance for HCC, distinguishing which lesions are likely to transform into cancer. Our study introduces EPM as a prospective means of predicting lesion progression to malignancy, enabling early curative interventions, and individualizing care for patients at risk of developing HCC.
背景と目的: 肝細胞癌(HCC)のサーベイランスを受ける患者において、肝病変の悪性進行リスクを正確に評価する方法は限られている。エンハンスメントパターンマッピング(EPM)は、ボクセル単位の二乗平均平方根偏差(RMSD)を測定し、標準的な画像検査における肝病変のコントラスト対雑音比(CNR)を改善する。本研究では、HCCと良性の肝硬変組織とを区別するためのEPMの活用を検討した。方法: 単一の三次医療機関でMRIサーベイランスを受ける肝硬変患者を前向きに追跡した。対照群(n=99)はサーベイランス中に病変やHCCへの進行を認めなかった患者、症例群(n=48)は研究期間中にHCCへ進行したLI-RADS 3および4病変を有する患者と定義した。EPMで測定したRMSDをMRI動脈相および門脈相(PV)の信号と比較し、症例と対照の肝実質のEPM信号を独立した患者集合における交差検証で定量的に検証した。結果: EPMでは、RMSD 0.37が診断前スキャンにおいてHCCへ進行する病変と背景実質を区別する定量的カットオフとして同定され、AUCは0.83(CI: 0.73-0.94)、感度・特異度・正確度はそれぞれ0.65、0.97、0.89であった。診断時スキャンでは感度0.79、特異度0.93、正確度0.88、AUC 0.89(CI: 0.82-0.96)が得られた。背景実質のEPM RMSD信号は症例と対照で同等であった(症例EPM: 0.22 +/- 0.08、対照EPM: 0.22 +/- 0.09、p=0.8)。結論: EPMはサーベイランス集団においてHCCと非癌性実質を区別し、HCCの早期発見に寄与しうる。今後の方向性として、不確定病変のリスク層別化へのEPMの応用が挙げられる。
2309.09513
Learning Parallax for Stereo Event-based Motion Deblurring
Due to the extremely low latency, events have been recently exploited to supplement lost information for motion deblurring. Existing approaches largely rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world. To tackle this problem, we propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to recover high-quality images directly from the misaligned inputs, consisting of a single blurry image and the concurrent event streams. Specifically, the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross-modal stereo matching module without the need for ground-truth depths. Then, a dual-feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images. Furthermore, we build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps. Experiments on real-world datasets demonstrate the superiority of the proposed network over state-of-the-art methods.
Mingyuan Lin, Chi Zhang, Chu He, Lei Yu
2023-09-18T06:51:41
http://arxiv.org/abs/2309.09513v1
# Learning Parallax for Stereo Event-based ###### Abstract Due to the extremely low latency, events have been recently exploited to supplement lost information for motion deblurring. Existing approaches largely rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world. To tackle this problem, we propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to recover high-quality images directly from the misaligned inputs, consisting of a single blurry image and the concurrent event streams. Specifically, the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross-modal stereo matching module without the need for ground-truth depths. Then, a dual-feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images. Furthermore, we build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps. Experiments on real-world datasets demonstrate the superiority of the proposed network over state-of-the-art methods. The code and dataset are available at [https://mingyuan-lin.github.io/St-ED_web/](https://mingyuan-lin.github.io/St-ED_web/). Motion Deblurring, Stereo Matching, Event Camera. ## I Introduction Motion blur is a common image degradation in fast-moving photography. Restoring sharp textures from a single blurry image is highly ill-posed due to missing information in terms of texture erasures and motion ambiguities [1, 2, 3]. The event camera can supplement the missing information thanks to its low latency and high dynamics [4, 5, 6], which helps yield reliable reconstruction results. Recently, many event-based motion deblurring approaches have been proposed that commonly rely on the assumption of per-pixel alignments between blurry images and event streams [7, 8, 9, 10, 11, 12]. However, the above assumption is not always fulfilled in real-world applications, _e.g._, the stereo event and intensity camera setup [13, 14], leading to severe degradation of the motion deblurring performance. Although spatially aligned events and intensity frames can be captured by a shared sensor in an event-intensity camera, _e.g._, the DAVIS event camera [5], the low resolution of such cameras hinders the popularization of existing methods in practical applications. Therefore, the association and fusion of multi-sensor and multi-modal data in the stereo event and intensity camera setup are more practical but more challenging due to the existence of parallax between the event camera and the intensity camera, as shown in Fig. 1. In this case, a pixel-level alignment is essential but ill-posed for event-based motion deblurring approaches since motion brings coupled burdens for multi-modal correspondence, _i.e._, _blurry effects_ and _dynamic scene depth_. * **Blurry Effects.** When perceiving the same light field, events and intensities exhibit in different modalities but implicitly share common structures, enabling pixel-level correspondence [15]. However, such correspondence becomes weak and ambiguous when intensity images appear blurry. 
* **Dynamic Scene Depth.** Even though one can apply calibration [16, 17] and stereo rectification between the stereo event and intensity pairs as a pre-processing step to alleviate parallax [18, 19], the resulting homography cannot achieve pixel-level correspondence since the scene depth is a varying and commonly unknown prior. Therefore, this paper proposes a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to effectively dig out and aggregate the information of the input single blurry image and the concurrent event streams.
Fig. 1: Illustrative examples of the impact of the misaligned event and frame data from the _DSEC_ and _MVSEC_ datasets. Our St-EDNet generates fewer artifacts and achieves the best visualization performance.
イベントカメラは極めて低いレイテンシを持つため、近年、モーションデブラーリングで失われた情報を補うために活用されている。既存手法の多くは、強度画像とイベントの完全な画素単位の整合を前提としているが、これは実環境では必ずしも満たされない。この問題に対処するため、我々は、単一のぼやけた画像と同時刻のイベントストリームという位置ずれのある入力から高品質な画像を直接復元する、粗から密への新しい枠組みSt-EDNet(NETwork of Event-based motion Deblurring with STereo event and intensity cameras)を提案する。具体的には、まず、真値の深度を必要としないクロスモーダルなステレオマッチングモジュールにより、ぼやけた画像とイベントストリームの粗い空間的整合を行う。次に、二重特徴埋め込みアーキテクチャにより、粗く整合されたデータの精密な双方向の対応関係を段階的に構築し、潜在する鮮明な画像系列を再構成する。さらに、実環境のイベント、強度画像、密な視差マップを含む新しいデータセットStEIC(STereo Event and Intensity Cameras)を構築した。実データセットでの実験により、提案ネットワークが最先端手法を上回ることが示された。
2309.06620
Games and Argumentation: Time for a Family Reunion!
The rule "defeated(X) $\leftarrow$ attacks(Y,X), $\neg$ defeated(Y)" states that an argument is defeated if it is attacked by an argument that is not defeated. The rule "win(X) $\leftarrow$ move(X,Y), $\neg$ win(Y)" states that in a game a position is won if there is a move to a position that is not won. Both logic rules can be seen as close relatives (even identical twins) and both rules have been at the center of attention at various times in different communities: The first rule lies at the core of argumentation frameworks and has spawned a large family of models and semantics of abstract argumentation. The second rule has played a key role in the quest to find the "right" semantics for logic programs with recursion through negation, and has given rise to the stable and well-founded semantics. Both semantics have been widely studied by the logic programming and nonmonotonic reasoning community. The second rule has also received much attention by the database and finite model theory community, e.g., when studying the expressive power of query languages and fixpoint logics. Although close connections between argumentation frameworks, logic programming, and dialogue games have been known for a long time, the overlap and cross-fertilization between the communities appears to be smaller than one might expect. To this end, we recall some of the key results from database theory in which the win-move query has played a central role, e.g., on normal forms and expressive power of query languages. We introduce some notions that naturally emerge from games and that may provide new perspectives and research opportunities for argumentation frameworks. We discuss how solved query evaluation games reveal how- and why-not provenance of query answers. These techniques can be used to explain how results were derived via the given query, game, or argumentation framework.
Bertram Ludäscher, Yilin Xia
2023-09-12T22:22:15
http://arxiv.org/abs/2309.06620v1
# Games and Argumentation: Time for a Family Reunion! Bertram Ludascher and Yilin Xia School of Information Sciences University of Illinois, Urbana-Champaign {ludaesch,yilinx2}@illinois.edu ## 1 Introduction Consider the following two single-rule programs \(P_{\mathsf{AF}}\) and \(P_{\mathsf{G}}\) that deal with the status of abstract arguments and with positions in a game graph, respectively: \[\mathsf{defeated}(X)\leftarrow\mathsf{attacks}(Y,X),\neg\,\mathsf{defeated}(Y).\] ( \[P_{\mathsf{AF}}\] ) \[\mathsf{win}(X)\leftarrow\mathsf{move}(X,Y),\neg\,\mathsf{win}(Y).\] ( \[P_{\mathsf{G}}\] ) \(P_{\mathsf{AF}}\) states that an argument \(X\) is _defeated_ in an argument framework AF if it is _attacked_ by an argument \(Y\) that is accepted, i.e., not defeated. Conversely, \(P_{\mathsf{G}}\) states that in a game a position \(X\) is _won_ if there is a _move_ to a position \(Y\) that is not won (by the opponent). Both logic rules can be seen as close relatives, even "identical twins",1 and each rule has received considerable attention in the past by different communities: Footnote 1: The rules \(P_{\mathsf{AF}}\) and \(P_{\mathsf{G}}\) are syntactic variants of each other: swap relations \(\mathsf{defeated}\rightleftharpoons\mathsf{win}\) and \(\mathsf{attacks}\rightleftharpoons\mathsf{move}^{-1}\) (the direction of edges is reversed in \(\mathsf{move}^{-1}\)). The first rule \((P_{\mathsf{AF}})\) constitutes an "argument processing unit" (APU) that--together with a suitable semantics--lies at the heart of Dung's abstract argumentation theory [12], a seminal work that spawned a large body of research, including families of models, semantics, tools, and applications of abstract (and structured) argumentation [4, 5, 6]. The second rule (\(P_{\mathsf{G}}\)) has played a key role in the logic programming and non-monotonic reasoning community in their quest to find the "right" semantics for rules with recursion through negation (i.e., which are _not stratified2_), and in database theory. Footnote 2: A _stratified_ logic program \(P\)[2] can use both (positive) recursion and negation, but only in a “layered” manner, i.e., the rule-goal graph of \(P\) must not contain negative cycles. For non-stratified logic programs such as \(P_{\mathsf{AF}}\) and \(P_{\mathsf{G}}\), two declarative semantics emerged as the most popular, i.e., the _stable model semantics_[17] and the _well-founded semantics_[24]. For the latter, the win-move game \(P_{\mathsf{G}}\) has been the poster-child example because its unique three-valued model assigns _true_, _false_, and _undefined_ to \(\mathsf{win}(x)\) iff a position \(x\) in the given game graph is _won_, _lost_, or _drawn_, respectively. In other words, \(P_{\mathsf{G}}\)_solves games_ and thus can be viewed as a "game processing unit" (GPU) when used with an engine that computes the well-founded semantics. Similarly, the rule \(P_{\mathsf{AF}}\) is an APU that can solve argumentation frameworks: The well-founded model of \(P_{\mathsf{AF}}\) yields _grounded extensions_ and _grounded labelings_[10, 22], where an argument \(x\) is _defeated_ (label = out), _accepted_ (label = in), or _undecided_ (label = undec) iff \(\mathsf{defeated}(x)\) is _true_, _false_, and _undefined_ in the well-founded model, respectively.
Although close connections between formal argumentation on one hand, and logic programming, nonmonotonic reasoning, and games on the other have been known for a long time [12, 22, 9, 4], the overlap and cross-fertilization between these and some other communities (e.g., database theory) appears to be smaller than one might expect. There seems to be no work, e.g., that discusses \(P_{\mathsf{AF}}\) and \(P_{\mathsf{G}}\) in the same paper, despite (or because of?) the fact that these rules can be viewed as syntactic variants of the same underlying query. In database theory, the win-move query expressed by \(P_{\mathsf{G}}\) has been used to study the _expressive power_ of query languages [20, 14] and to develop a unified _provenance model_ that can explain the presence and absence of query answers [19, 21]. The game-theoretic notions and concepts developed in these and other database and game-theory papers [16, 13] seem to carry over to argumentation theory and may lead to new insights and results there. Conversely, related notions studied in argumentation theory may carry over to database theory and applications thereof. The purpose of this short paper is therefore to foster a "family reunion" of sorts with the goal of developing new insights and findings through cross-fertilization, i.e., by transferring concepts, ideas, and results between communities. ## 2 Some Sightings of the Win-Move Query in Database Theory We recall first some (possibly lesser known) results about \(P_{\mathsf{G}}\), the "lost twin" of \(P_{\mathsf{AF}}\). **Games vs Stratified Rules.** During the late 1980s and through the 90s, the LP/NMR community developed and studied a number of proposals for a canonical semantics for rules with recursion and negation. Veterans from that era will fondly recall examples of the form \(\{\mathsf{p}\leftarrow\neg\mathsf{q};\ \mathsf{q}\leftarrow\neg\mathsf{p}\}\), or the rather self-defeating (pun intended) \(\{\mathsf{p}\leftarrow\neg\mathsf{p}\}\). Proponents of the _stratified semantics_[2] simply ruled out such unstratifiable programs, i.e., which exhibit recursion through negation. An earlier claim by [11] suggested that stratified rules express all of Fixpoint, i.e., a large class of database queries with PTIME data complexity. As shown in [20], however, the Fixpoint query that computes the positions for which a player has a _winning strategy_3 is _not_ expressible by stratified rules, so stratified Datalog is strictly less expressive than Fixpoint. On the other hand, well-founded Datalog expresses all Fixpoint queries, and \(P_{\mathsf{G}}\) computes the won, lost, and drawn positions in PTIME for any game given by a finite move graph. Footnote 3: Player I can force a win, no matter how Player II moves. **All You Need Is Game.** Consider the rule \(P_{\mathsf{G}}\) over a given move graph, e.g., on the left in Fig. 1(a). \(P_{\mathsf{G}}\) captures the essence of a 2-person game \(G=(V,E)\) with positions \(V\) and moves \(E\). Initially, a pebble is placed on a start position, and Player I starts to move. The players then take turns, moving the pebble along the edges of the graph until a player runs out of moves, in which case the opponent has won, i.e., a position \(x\in V\) is _won_ for a player if there _exists_ a move to a position \(y\) that is _lost_ (i.e., not won) for the opponent. If \(y\) is objectively lost, this means that _all_ outgoing moves from \(y\) lead to a position \(z\) that will again leave the first player in a won position.
This alternation of quantifiers (\(\exists x_{1}\forall y_{1}\exists x_{2}\forall y_{2}\cdots\)) lies at the core of the expressive power of \(P_{\mathsf{G}}\). The rule \(P_{\mathsf{G}}\) turns out to be a **universal query engine**: every \(n\)-ary Fixpoint query can be expressed in _game normal form_ \(P_{\mathsf{G}}:\mathsf{win}(\bar{X})\leftarrow\mathsf{move}(\bar{X},\bar{Y}),\neg\mathsf{win}(\bar{Y})\), i.e., where \(\bar{X}\) and \(\bar{Y}\) are \(n\)-tuples of variables, \(P_{\mathsf{G}}\) is the only recursive rule, and \(\mathsf{move}(\bar{X},\bar{Y})\) is definable via a quantifier-free formula over the input database [14]. **Solving Games.** In Figure 1(a) and (b), positions \(\mathsf{b}\), \(\mathsf{f}\), and \(\mathsf{h}\) are immediately lost (red nodes): No moves are possible from sink nodes. Next we can infer that positions that have an outgoing move to a lost position (for the opponent) are definitely won (green). Based on our initial determination that \(\mathsf{b}\), \(\mathsf{f}\), and \(\mathsf{h}\) are lost, it then follows that \(\mathsf{a}\), \(\mathsf{d}\), and \(\mathsf{e}\) are won.5 What is the status of the remaining positions? The status of \(\mathsf{c}\) is now determined since _all_ outgoing moves from \(\mathsf{c}\) definitely end in a node that is won for the opponent (\(\mathsf{d}\) and \(\mathsf{e}\) are already green), so \(\mathsf{c}\) is objectively lost. Solving a game thus proceeds by iterating the following two coloring (or _labeling_) rules in stages (see the code sketch below):6 Footnote 5: Similarly (_mutatis mutandis_), in the argumentation framework of Figure 1(c), arguments \(\mathsf{b}\), \(\mathsf{f}\), and \(\mathsf{h}\) are _accepted_ (not defeated) and can be labeled in because they are not attacked at all. Footnote 6: This method corresponds to the _alternating fixpoint_ procedure [23] and to Algorithm 6.1 for computing the _grounded labeling_ of an argumentation framework in [22]. * Position \(x\) is _won_ (green) if \(\exists\) move \(x\to y\) and position \(y\) is lost (red) * Position \(x\) is _lost_ (red) if \(\forall\) moves \(x\to y\), position \(y\) is won (green) With each position \(x\) we can associate its _length_ [19], i.e., the stage number when its color first became known. Similarly, we can associate a length with each move, indicating at what stage its _type_ (edge color) became known: in Fig. 1(b), edges into (red) sinks are winning moves (colored green) and labeled with length = 1, so \(\mathsf{a}\), \(\mathsf{d}\), \(\mathsf{e}\) and those edges to sink nodes all have length = 1. In the next stage, all successors of \(\mathsf{c}\) are won, so \(\mathsf{c}\) itself must be lost, and its length is 1 + the _maximal_ length of any of its successors. Similarly, for won \(x\), length(\(x\)) = 1 + the _minimal_ length of any lost successor, etc. After a fixpoint is reached, all remaining uncolored nodes correspond to _drawn_ positions and are colored yellow. Figure 1: The move graph on the left defines a game \(G=(V,E)\) with positions \(V\) and moves \(E\). The _solved game_ \(G^{\lambda}\) in (b) is color-labeled: positions are either _won_ (green), _lost_ (red), or _drawn_ (yellow). This separates “good” moves (solid, colored) from “bad” ones (dashed, gray). The length \(\ell\) of an edge \(x\stackrel{{\ell}}{{\to}}y\) indicates how quickly one can force a win, or how long one can delay a loss, with that move. By reversing the move edges, one obtains an attack graph: its _grounded labeling_ in (c) shows arguments that are out (orange), in (blue), and undec (yellow).
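The two labeling rules and the length bookkeeping above amount to a simple backward-induction fixpoint. Here is a minimal Python sketch; the toy move graph is merely consistent with the facts stated in the text (sinks b, f, h; moves a→b, c→d, c→e, d→f, e→h) and is not the full graph of Figure 1:

```python
# Solve a win-move game: iterate the two labeling rules to a fixpoint;
# anything left unlabeled is drawn. Lengths follow the stage bookkeeping
# described in the text (sinks get length 0 here, a convention we assume).
def solve_game(positions, moves):
    label, length = {}, {}
    changed = True
    while changed:
        changed = False
        for x in positions:
            if x in label:
                continue
            succ = moves.get(x, [])
            lost = [y for y in succ if label.get(y) == "lost"]
            if lost:                          # exists a move to a lost position
                label[x], length[x] = "won", 1 + min(length[y] for y in lost)
                changed = True
            elif all(label.get(y) == "won" for y in succ):  # incl. no moves
                label[x] = "lost"
                length[x] = 1 + max((length[y] for y in succ), default=-1)
                changed = True
    for x in positions:                       # fixpoint reached: rest is drawn
        label.setdefault(x, "drawn")
        length.setdefault(x, float("inf"))
    return label, length

moves = {"a": ["b"], "c": ["d", "e"], "d": ["f"], "e": ["h"]}
print(solve_game("abcdefh", moves))
# b, f, h lost; a, d, e won with length 1; c lost with length 2
```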
We set length = \(\infty\) for drawn positions, since neither player can force a win, but both can avoid losing by repeating moves indefinitely. **Solved Games Explain It All!** Solved games, e.g., \(G^{\lambda}\) in Fig. 1(b), have an intriguing property: node labels (colors) induce different _edge types_, which in turn can be used to _explain why_ a position is won, lost, or drawn, respectively. Fig. 2 shows how edge types are determined from the color-labels of incident vertices. These types, in turn, induce a downstream _provenance subgraph_ \(G^{\lambda}_{x}\) which provides the _justification_ or _explanation_ for the status of any \(x\in V\). The provenance \(G^{\lambda}_{x}\) of \(x\) in the solved game \(G^{\lambda}\) is the subgraph reachable from \(x\) via certain _regular path queries_ (RPQs): The provenance of a won position \(x\) matches the RPQ \(x.\mathsf{green.(red.green)^{*}}\), lost positions match \(x.(\mathsf{red.green})^{*}\), and drawn positions match \(x.\mathsf{yellow}^{+}\). Similarly, an argument's status in the grounded argumentation framework in Fig. 1(c) can be explained by an RPQ-definable subgraph, _mutatis mutandis_. Unless this is known by a different name in argumentation frameworks, this seems to be a new result for grounded extensions. The facts that (i) solved games expose their own provenance (= result _explanations_), and (ii) all First-Order queries \(Q\) have a natural encoding as query evaluation games \(G_{Q}\), were combined in [19] to develop _first-order provenance games_, a unified framework for _why-_, _how-_, and _why-not_ provenance. The framework can explain how a query result was derived, and why some results are missing from an answer \(A=Q(D)\) (see also [21]). Roughly speaking, two players argue (via a query evaluation game) whether or not tuple \(t\in A\), for a given database \(D\). The provenance of \(t\) corresponds to a subgraph in the solved game \(G^{\lambda}_{Q(D)}(t)\) and to the winning strategies for the claim \(t\in A\). Figure 2: Depending on node labels, moves \(x\to y\) are either _winning_ (green, \(\mathsf{W}\rightsquigarrow\mathsf{L}\)), _delaying_ (red, \(\mathsf{L}\rightsquigarrow\mathsf{W}\)), or _drawing_ (yellow, \(\mathsf{D}\rightsquigarrow\mathsf{D}\)). All other moves are either “_bad_” (allowing the opponent to improve the outcome) or cannot exist. ## 3 The Next Move: Strengthening Family Ties This brief exposition of a few results from database theory for the win-move query \(P_{\mathsf{G}}\) should look familiar to researchers in formal argumentation. Clearly, there are many direct correspondences, but there are also slightly different notions that seem to warrant further inspection and investigation, and that could lead to new insights and results. Our starting point was the straightforward link between \(P_{\mathsf{AF}}\) and \(P_{\mathsf{G}}\): Twin rules that have their distinct histories and applications in separate communities, but that haven't been studied together, at least to the best of our knowledge.
Under the well-founded semantics, the solved game \(G^{\lambda}\) (with its additional structure and "built-in" provenance) corresponds to the grounded labeling of an argumentation framework [8, 22]. The additional provenance structure induced by edge types (not all edges are "created equal") suggests a corresponding new structure for AF. Here are a few more propositions and conjectures, aimed at fostering new collaborations between our different communities: * The well-founded model \(\mathcal{M}_{\mathsf{G}}^{w}\) of \(P_{\mathsf{G}}\) is isomorphic to the well-founded model \(\mathcal{M}_{\mathsf{AF}}^{w}\) of \(P_{\mathsf{AF}}\) via a natural mapping (cf. Figure 1(b) and (c)). What about other logical semantics, e.g., stable models? The resulting answer sets are useful in the analysis of AFs, but do they have natural and intuitive interpretations for games? After all, it is the well-founded semantics that yields the canonical model for win-move games. * Color labels of solved games correspond to grounded labelings of AF (Figure 1). The _length_ of game positions in \(\mathcal{M}_{\mathsf{G}}^{w}\)[19] (see also [14]) corresponds to the _min-max numbering_ of arguments in \(\mathcal{M}_{\mathsf{AF}}^{w}\)[3], and also appears as a byproduct in the alternating fixpoint computation [23] when solving a game via the \(P_{\mathsf{G}}\) rule.7 Footnote 7: A similar stage/state number is used for analyzing the provenance of Datalog queries [18]. * The _characteristic function_[12] in argumentation frameworks is closely related to _strategy functions_[13], _winning strategies_[14] and the _unattacked_ operator \(U_{\Theta}\)[7]. What are the precise correspondences? What results might be transferable? * There is a plethora of variants of argument/dialogue games in argumentation theory, in addition to the basic win-move games used in databases [20, 14, 15] and game theory [16, 13]. Has someone classified this "zoo" of game variants before? * The _decomposition theorems_ for graph kernels [16, 13] directly apply to games and thus carry over to argumentation frameworks. Has this been studied before? We invite feedback and welcome collaboration opportunities on these and similar questions. An open source demonstration using Jupyter notebooks, including the example from Figure 1, is available [25]. We plan to evolve and expand these notebooks as teaching materials for some of our undergraduate and graduate courses, covering knowledge representation & reasoning, information modeling, and database theory. **Acknowledgments.** The authors thank Shawn Bowers for his detailed comments and suggestions on an earlier draft. Work supported in part by NSF/OAC-2209628 (TRACE).
The rule "defeated(X) ← attacks(Y,X), ¬ defeated(Y)" states that an argument is defeated if it is attacked by an argument that is not defeated. The rule "win(X) ← move(X,Y), ¬ win(Y)" states that a position in a game is won if there is a move from it to a position that is not won. These logical rules are closely related (indeed, twins), and both have attracted considerable attention in their respective communities. The first rule lies at the core of argumentation frameworks and has given rise to many models and semantics of abstract argumentation. The second rule drove the search for the "right" semantics of logic programs with recursion through negation, yielding the stable and well-founded semantics. These semantics have, in turn, shaped the logic programming and nonmonotonic reasoning communities.
2308.00178
Transparent conductive oxides and low loss nitride-rich silicon waveguides as building blocks for neuromorphic photonics
Fully CMOS-compatible photonic memory devices hold potential for the development of ultrafast artificial neural networks. By leveraging the benefits of photonics, such as high bandwidth, low latency, low-energy interconnects, and high speed, they can overcome the existing limits of electronic processing. To satisfy all these requirements, a new photonic platform is proposed that combines low-loss nitride-rich silicon as a guide and low-loss transparent conductive oxides as an active material that can provide high nonlinearity and bistability under both electrical and optical signals.
Jacek Gosciniak, Jacob B. Khurgin
2023-07-31T22:20:11
http://arxiv.org/abs/2308.00178v1
Transparent conductive oxides and low loss nitride-rich silicon waveguides as building blocks for neuromorphic photonics

###### Abstract

Fully CMOS-compatible photonic memory devices hold potential for the development of ultrafast artificial neural networks. By leveraging the benefits of photonics, such as high bandwidth, low latency, low-energy interconnects, and high speed, they can overcome the existing limits of electronic processing. To satisfy all these requirements, a new photonic platform is proposed that combines low-loss nitride-rich silicon as a guide and low-loss transparent conductive oxides as an active material that can provide high nonlinearity and bistability under both electrical and optical signals.

## Introduction

Neuromorphic computing refers to signal processing that tries to mimic the way the brain processes information [1]. In comparison to traditional computers, which are based on the von Neumann architecture with separate memory and processing units operating sequentially [2], the brain processes signals in parallel [3, 4]. This provides huge benefits in terms of speed and energy efficiency, since data transfer is responsible for a large part of the power consumption. One way to overcome some of those limitations is to develop new algorithms that improve signal processing [5, 6]; however, this still requires data transfer between memory and processor, which limits efficiency. To deal with those limitations, much effort has been put in recent years into the development of artificial neurons and synapses that can be implemented in a network [1].

Neuromorphic computing based on photonics, _i.e._, neuromorphic photonics, uses photons as signal carriers to transfer information between different parts of the network [7-12]. Thanks to almost unlimited bandwidth, compatibility with standard CMOS technology, and almost zero power consumption for carrying out basic matrix multiplication, it can offer a huge improvement over neuromorphic electronics. Full parallelism can be achieved by busing multiple signals on a single waveguide at the speed of light. Simultaneously, optical weights can offer low computational latency. By combining those advantages, an improvement of at least a few orders of magnitude over electronic counterparts is expected. However, the realization of such a demanding task requires a new material platform and a low-loss architecture that are still missing.

Silicon nitride (SiN) is a ubiquitous material for photonic integrated circuit (PIC) technologies, since it is compatible with standard CMOS processes [13, 14]. It allows for cost-effective construction of devices and co-integration of electronic and photonic components on a single chip. Furthermore, photonic devices based on the SiN platform are characterized by higher tolerance to temperature drift, lower optical losses, operation over a broader wavelength range, wider wavelength transparency, and improved crosstalk values compared to other materials [14]. SiN has already proved to be a proper material platform for the realization of neural networks, showing an increased degree of freedom in the design of linear neurons [8, 9]. Thus, the SiN platform can play a key role as a routing layer in neuromorphic photonics [9].
Among the many active materials available for implementation in neuromorphic networks [1], transparent conductive oxides (TCOs) seem to be the material of choice for such tasks, as they provide nonlinearity and bistability under both an electrical signal and optical power coupled to the waveguide [15, 16]. Thus, they enable dual-mode operation and bring a lot of flexibility in terms of operating conditions [17]. As already shown [15, 16], a TCO exhibits two stable states that depend on the history of the system, so it can act as a memristor [18-21]. TCOs belong to the epsilon-near-zero (ENZ) materials, which show large permittivity tunability under an applied voltage and/or light illumination [17, 22-24]. They are characterized by fast switching times and low switching voltages when operating under an electrical switching mechanism [17, 22-24], which is a huge benefit for the realization of efficient neuromorphic networks. Similarly to the SiN platform, TCOs are CMOS-compatible and operate with low optical loss. Thus, a combination of SiN and TCOs can provide an ideal material platform for the realization of low-loss, CMOS-compatible, and extremely fast neuromorphic systems able to process information in-place and under low operating power.

To process all information in-place, a system has to possess some type of bistability, _i.e._, a special activity that takes place in biological neurons, where neurons can switch between active and non-active states under the action of neuromodulating substances [3]. Bistability is thus the property of a system that exhibits two stable steady states, with the system resting in one of those states depending on its history [15, 16, 18, 25-32]. It can refer to two opposite magnetizations of a magnet, low or high resistance of an electronic device, low or high signal transmitted through a device, etc. The two states represent the two values of a binary digit, _i.e._, a bit. To meet the demands of modern systems, such devices should operate at high speed, under low power consumption, and over a wide operation bandwidth. However, up to now, most of the proposed bistable devices suffer either from high power consumption, incompatibility with standard processing technology, narrow bandwidth, or a complicated design combining a nonlinear material with a resonant cavity [31]. Reliable bistable all-optical devices can bring progress in many fields, especially in all-optical neural networks; thus, the search for such devices has intensified in the last few years [15, 16, 31, 32].

### Switching mechanism

Photonic devices with TCO materials can operate in dual mode, electrical and/or optical, thanks to the unique properties of TCO materials, whose real electrical permittivity disperses under an applied electric field or optical pump, thus either generating or exciting free carriers [17, 22]. Depending on the requirements and working conditions, either process can be implemented in the proposed device.
### Electrical switching

Under an applied voltage, electrons accumulate at the TCO, which increases the local density of electrons and reduces the permittivity according to the Drude dispersion formula: \[\varepsilon(\omega)=\varepsilon_{\infty}-\frac{N_{c}e^{2}}{\varepsilon_{0}(\omega^{2}+i\nu\omega)m^{*}(E)}\] where \(\varepsilon_{\infty}\) is the permittivity due to the bound electrons, \(N_{c}\) is the carrier density, \(e\) is the electron charge, \(\omega\) is the working frequency, \(m^{*}(E)\) is the energy-dependent effective mass, and \(\nu\) is the scattering rate. As previously shown [33], even a unity-order permittivity change can be obtained under a reasonable voltage. The increased carrier concentration decreases the permittivity and shifts the material into the ENZ region, which leads to higher absorption and increases the absorption losses of the device, as the mode is more strongly confined to the TCO material. Once the voltage is removed, electrons flow away from the TCO and the TCO returns to its initial low-loss state. It should be emphasized that the switching process under electrical modulation is limited by RC delays that scale with device size [34].

### All-optical switching

In comparison, all-optical switching with a TCO operates via two mechanisms: either interband absorption or intraband absorption of light. For interband absorption, the energy of the optical pump has to be greater than the bandgap of the TCO in order to excite photocarriers from the valence band to the conduction band [17, 23]. As in the case of electrical switching, the photoexcited carriers lower the permittivity of the TCO via Drude dispersion and move the TCO closer to the ENZ region. On the other hand, intraband absorption, with a pump energy lower than the bandgap, heats up electrons in the conduction band, moving them toward higher energies. Due to the non-parabolic nature of the conduction bands in TCOs, these excited electrons have a greater effective mass. From the Drude formula it can be seen that as the effective mass of the electrons increases, the plasma frequency decreases and, in consequence, the TCO permittivity increases. When the optical pump is off, the electrons cool down on a sub-picosecond time scale. Thus, all-optical switching is a very promising mechanism for the realization of active photonic components operating on the femtosecond time scale. Furthermore, when operating under intraband absorption, the same light source can be used as both pump and signal, which reduces the complexity of the system.

### Design

Here we examine the concept of bistability in a SiN rib photonic waveguide arrangement with the TCO placed between the SiN rib and ridge, utilizing intraband absorption of light. Compared to our previous papers [15, 16], in which we utilized plasmonic slot waveguides to enhance the electric field in the TCO and thus enhance the interaction of light with the TCO, here we focus on an all-dielectric device. It may provide lower electric field enhancement inside the TCO, but it simultaneously facilitates integration with the photonic platform, as it does not require any additional fabrication steps. Furthermore, the coupling efficiency between a photonic waveguide and a plasmonic slot waveguide usually does not exceed 50%.
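As a small illustration of the Drude dispersion above, the following sketch (Python) evaluates \(\varepsilon(\omega)\) at 1550 nm while sweeping the plasma frequency \(\omega_{p}\), which grows as \(\sqrt{N_{c}}\) under carrier accumulation; the ITO parameters are the ones quoted in the simulation section below, and the sweep values themselves are illustrative:

```python
import numpy as np

c = 2.99792458e8
omega = 2 * np.pi * c / 1550e-9          # angular frequency at 1550 nm (rad/s)

def drude_eps(omega_p, nu, eps_inf, omega=omega):
    """eps(omega) = eps_inf - omega_p^2 / (omega^2 + i*nu*omega), the Drude form above
    written with omega_p^2 = N_c e^2 / (eps0 m*), so omega_p scales as sqrt(N_c)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * nu * omega)

nu, eps_inf = 1.8e14, 3.9                # ITO scattering rate and eps_inf (see below)
for omega_p in [2.2e15, 2.52e15, 2.9e15]:
    eps = drude_eps(omega_p, nu, eps_inf)
    # carrier accumulation drives Re(eps) through zero, i.e., into the ENZ region
    print(f"omega_p = {omega_p:.2e} rad/s -> eps = {eps.real:+.3f}{eps.imag:+.3f}j")
```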
In comparison, the proposed all-dielectric device can be easily integrated with the SiN photonic platform with an extremely high coupling efficiency, exceeding 95%, and it does not require any additional fabrication steps.

Here, the concept of bistability was investigated using 2D finite element method (FEM) simulations at the telecom wavelength of 1550 nm using the commercial software COMSOL and Lumerical. The thickness of the TCO was chosen as 10 nm, while the thickness of the SiN rib was 200 nm. The thickness and width of the SiN ridge were taken as \(h\)=300 nm and \(w\)=500 nm. The refractive index of SiN is assumed to be \(n\)=1.9963. For all TCOs considered here, the calculations were performed for a thermalization time \(\tau\) = 500 fs. The ITO properties were taken as \(\omega_{p}\)=2.52\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=1.8\(\cdot\)10\({}^{14}\) (rad/s), \(\varepsilon_{\infty}\)=3.9 [11], where \(\omega_{p}\) is the plasma frequency, \(\nu\) is the scattering rate and \(\varepsilon_{\infty}\) is the permittivity due to the bound electrons. Similarly, the 6% Ga:ZnO (GZO) properties were taken as \(\omega_{p}\)=2.93\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=1.78\(\cdot\)10\({}^{14}\) (rad/s), \(\varepsilon_{\infty}\)=2.475 [35], 10% Al:ZnO (AZO) as \(\omega_{p}\)=1.137\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=1.27\(\cdot\)10\({}^{14}\) (rad/s), \(\varepsilon_{\infty}\)=3.8825 [36], while In-doped CdO as \(\omega_{p}\)=2.41\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=3.06\(\cdot\)10\({}^{13}\) (rad/s), \(\varepsilon_{\infty}\)=5.5 [37-39]. The TCO parameters presented above allowed us to calculate the wavelength- and plasma-frequency-dependent complex permittivity of all TCO materials examined in this paper.

Figure 1: Geometry of the proposed photonic bistable device.

In our previous papers we focused on ITO [15, 16], as it is currently the most popular TCO material in the literature [17, 22-24, 32-34]. However, the family of TCO materials is very broad and, depending on the application and the operation wavelength range, a proper TCO material can be identified. In this paper we first examine four TCO materials, AZO, GZO, ITO and In-doped CdO, which represent a wide spectrum of ENZ wavelengths, ranging from \(\lambda\)=1.0 \(\upmu\)m for 6% Ga:ZnO (GZO), through \(\lambda\)=1.5 \(\upmu\)m for ITO and \(\lambda\)=1.82 \(\upmu\)m for In:CdO, to \(\lambda\)=3.34 \(\upmu\)m for 10% Al:ZnO (AZO). As observed, AZO and In:CdO are characterized by the lowest imaginary permittivity, and thus the lowest losses, while the imaginary part of the permittivity of ITO at the ENZ wavelength is rather high (Fig. 2a). As we are interested here in telecom wavelengths, in the rest of the paper we focus on GZO, ITO and In:CdO (Fig. 2b). GZO shows the lowest plasma frequency at the telecom wavelength of 1550 nm, while the plasma frequency of In:CdO is the highest. However, as in the previous case (Fig. 2a), the imaginary part of the permittivity is lowest for In:CdO (Fig. 2b). Furthermore, it should be remembered that In:CdO is characterized by an order of magnitude higher mobility compared to other TCOs, which strongly influences its scattering rate (damping factor) \(\nu\) (\(\nu=e/\mu m_{\mathrm{eff}}\)), where \(\mu\) is the material mobility [37, 38]. By comparing the real part of the permittivity as a function of wavelength, it can be observed that the change of the real part of the permittivity close to the ENZ wavelength is larger for indium-doped CdO and GZO compared to ITO and AZO. Similarly, AZO is characterized by the smoothest transition close to the ENZ wavelength.
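As a quick consistency check on the parameter sets above, the lossless Drude estimate of the ENZ wavelength, \(\lambda_{\mathrm{ENZ}}\approx 2\pi c\sqrt{\varepsilon_{\infty}}/\omega_{p}\) (obtained by setting \(\mathrm{Re}\,\varepsilon=0\) and neglecting \(\nu\)), reproduces the quoted ENZ wavelengths; a minimal sketch:

```python
import numpy as np

c = 2.99792458e8  # speed of light (m/s)

# (omega_p [rad/s], eps_inf) taken from the parameter sets quoted in the text
tcos = {"GZO": (2.93e15, 2.475), "ITO": (2.52e15, 3.9),
        "In:CdO": (2.41e15, 5.5), "AZO": (1.137e15, 3.8825)}

for name, (omega_p, eps_inf) in tcos.items():
    lam_enz = 2 * np.pi * c * np.sqrt(eps_inf) / omega_p   # Re(eps) = 0 crossing
    print(f"{name}: lambda_ENZ ~ {lam_enz * 1e6:.2f} um")
# ~1.01 (GZO), ~1.48 (ITO), ~1.83 (In:CdO), ~3.27 (AZO) um, matching the quoted
# 1.0, 1.5, 1.82, and 3.34 um to within the neglected-loss correction.
```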
From our previous papers [15, 16] we can deduce that the steeper the slope of the permittivity close to the ENZ point, the narrower the absorption curve of the device, which means that less power is required to switch between the two transmission levels of a bistable device. As observed from **Fig. 3**, close to the ENZ region of the TCO, the electric field is confined mostly in the TCO, while the electric field outside the TCO decreases (blue curve in **Fig. 3**).

Figure 3: Electric field distribution in the SiN waveguide with ITO for different values of the ITO permittivity.

Figure 2: (a, b) Dispersion of the real and imaginary parts of the dielectric permittivity of ITO, GZO, AZO and In:CdO as a function of wavelength and plasma frequency.

In consequence, the mode power attenuation is highest in the ENZ region, as observed from **Fig. 4a** for ITO, and decreases quickly when moving away from the ENZ point. The absorption curve resembles the well-known bell shape. Depending on the power coupled into the SiN rib waveguide and the carrier concentration of the ITO that is part of the waveguide, the device can operate in a bistable region with two different stable levels of transmitted output power for the same input (**Fig. 4b-e**) [15, 16]. Thus, it can serve as a memristor that mimics the biological synaptic response and allows processing and storage to be co-located. Memristors have opened new doors for integrated circuits, as they allow one to actively modulate electrical or optical signals and to hold memory states comparable to synaptic activity in the brain [18-21].

As observed from **Fig. 4**, the optical power required to move into the bistable region for the SiN photonic waveguide with ITO is quite high, in the range of a few watts. A higher carrier concentration provides a wider bistable region, but at the cost of an increased input optical power. At the same time, a longer device does not influence the range of the bistability region; however, it strongly influences the output power contrast between the low and high transmission levels. Moreover, with longer devices the absorption rises for both transmission levels. For the higher carrier concentration, the bistability region ranges from 3.25 W to 5.3 W, while for the lower carrier concentration it ranges from 2.55 W to 3.15 W. For a device length of \(l\)=500 nm, the output power ranges from 3 W to 4.8 W for the high transmission level and from 2.2 W to 3.9 W for the low transmission level. In comparison, for a longer device of \(l\)=4000 nm, it ranges from 1.9 W to 2.35 W and from 0.13 W to 0.46 W for the high and low transmission levels, respectively. For the shorter device the difference is around 1.8 W, while for the longer device it is around 0.35-0.45 W. As observed, a change of the input optical power from 3.25 W to 5.3 W causes only a small change in the output power for the longer device - both transmission lines flatten out.

Figure 4: (a) Illustration of bistability and switching for different optical powers in the waveguide and under different carrier concentrations of ITO, \(N_{c}\)=0.93\(\cdot\)10\({}^{27}\) m\({}^{-3}\) and \(N_{c}\)=1.0\(\cdot\)10\({}^{27}\) m\({}^{-3}\). The mechanism of bistability was explained in detail in refs. **15**, **16**. (b) Absorptive loss as a function of the propagating power, exhibiting hysteresis and manifesting all-optical bistability. (c, d, e) Input-output characteristics of the photonic bistable device of (c) 500 nm, (d) 1000 nm and (e) 4000 nm length for different carrier concentrations of 0.93\(\cdot\)10\({}^{27}\) m\({}^{-3}\) and 1.0\(\cdot\)10\({}^{27}\) m\({}^{-3}\).
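The hysteresis just described can be reproduced qualitatively with a toy self-consistency model (Python; all parameters are illustrative and not fitted to the FEM results): the absorbed power heats the electron gas, the absorption is a bell-shaped function of that heating (peaking at the ENZ condition), and sweeping the input power up and then down with a warm-started, damped fixed-point iteration traces two different transmission branches:

```python
import numpy as np

def absorption(u, u0=1.0, w=0.35, a_max=0.8):
    """Bell-shaped absorption vs. electron heating u, peaking at the ENZ condition u0."""
    return a_max / (1.0 + ((u - u0) / w) ** 2)

def steady_heating(p_in, u_start, beta=0.2, iters=400):
    """Self-consistency u = absorption(u) * p_in, solved by damped fixed-point iteration."""
    u = u_start
    for _ in range(iters):
        u = (1 - beta) * u + beta * absorption(u) * p_in
    return u

powers = np.linspace(0.0, 4.0, 400)
u, up, down = 0.0, [], []
for p in powers:            # upward sweep, warm-started from the previous state
    u = steady_heating(p, u); up.append(p - u)          # transmitted = input - absorbed
for p in powers[::-1]:      # downward sweep: the state remembers its history
    u = steady_heating(p, u); down.append(p - u)
down = down[::-1]

bistable = np.abs(np.array(up) - np.array(down)) > 1e-3
print(f"bistable window: {powers[bistable].min():.2f} to {powers[bistable].max():.2f} (arb. units)")
```

Where the two branches differ, one input power supports two stable output levels, which is exactly the memristive behavior shown in **Fig. 4b**.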
The operating conditions of the proposed photonic device can be changed when ITO is replaced by another TCO material (**Fig. 5**). For the same plasma frequency, the power required to operate in the bistable region drops from 3.25-5.30 W for the structure with ITO (**Fig. 4**) to only 0.18-0.37 W for the structure with In:CdO (**Fig. 6**). This is an over 18-fold reduction in the input power required to move into the bistability region. Even GZO can help to reduce the power when working at a lower plasma frequency (**Fig. 5**). This means that even with a lower carrier concentration in the GZO, the power can be reduced severalfold compared to a device with ITO. Furthermore, the absorption curve of the proposed device should be as narrow and steep as possible to ensure low power consumption (**Fig. 5**). This is directly related to the TCO material properties mentioned previously: the steeper the real part of the permittivity close to the ENZ region, the narrower the absorption curve. A lower imaginary part of the permittivity translates into a higher absorption contrast, as the absorption rises strongly only when the TCO material works close to the ENZ region (**Fig. 2**). From **Fig. 5** and **Fig. 2** it can be deduced that ultra-low-loss TCO materials with sharp index dispersion in the ENZ range are preferred, as they offer a sharp and narrow device absorption curve, which reduces the power required for switching. For a device with \(l\)=1000 nm long In:CdO, the difference between the low and high transmission levels in the bistable region changes from 10 mW to 135 mW for an input power of 180 mW, and from 63 mW to 220 mW for an input power of 370 mW. Higher contrast is possible with longer devices, but at the cost of output power, which drops for longer devices. Here we operate at the telecom wavelength of 1550 nm, thus highly doped In:CdO is required; however, the doping level can be reduced when working close to the ENZ wavelength of 1820 nm (Fig. 2a). The proposed photonic bistable device can serve as a building block for complex photonic neural networks. The proper choice of TCO material and operation wavelength allows one to define the operating conditions of the device, while the design allows one to enhance the interaction of light with the TCO material. To imitate brain performance, such devices should be arranged in more complex architectures that can serve for neuromorphic computing based on a photonic platform.

#### Dual-mode operation

By playing simultaneously with both electrical and optical switching [40, 41], or with all-optical switching alone but under both interband and intraband absorption (two light sources at different wavelengths), we can take full advantage of the switching possibilities of TCO materials. As observed from **Fig. 7**, even when we change the carrier concentration in the TCO and the effective mass (through coupling light into the TCO) simultaneously or step by step, we can still stay at the same value of permittivity. In this case, the performance of the device does not change (points E and F and solid line in **Fig. 7a**). However, when we increase the effective mass of the TCO by coupling a short pulse of light into the device and simultaneously increase the carrier concentration in the TCO through either an interband pump or an electric voltage, then, when the light pulse is off, the device transfers from a high-loss regime, \(\varepsilon\)=0, to a low-loss regime, \(\varepsilon\)\(\sim\)-1.8 (points A and B and dotted line in **Fig. 7a**).
On the contrary, by working in a different parameter range, we can transfer from a low-loss regime \(\varepsilon_{r}\)=-2.0 (point C) to a high-loss regime \(\varepsilon_{r}\)=0 (point D) by coupling light into the device and simultaneously either applying a short electrical pulse to the TCO or coupling a short optical pulse into the device (dashed line in **Fig. 7a**).

Figure 6: (a) Absorptive loss as a function of the propagating power for indium-doped CdO and for a carrier concentration \(N\)=1.02\(\cdot\)10\({}^{27}\) m\({}^{-3}\). (b, c) Input-output characteristics of the photonic bistable device of (b) 100 nm and (c) 1000 nm length for a carrier concentration of 1.02\(\cdot\)10\({}^{27}\) m\({}^{-3}\).

Thus, transparent conductive oxides (TCOs) open new possibilities in both photonic integrated circuits (PICs) and neuromorphic photonics, providing a lot of freedom in design and potentially bringing networks to the next operational level.

### Biological brain

As the goal of neuromorphic computing is to mimic the behavior of a biological brain, we should first recall how a signal is processed in biological systems [3, 4]. In the brain, two types of synaptic integration take place, and both are essential for signal processing. First, spatial summation: the process in which synaptic potentials generated at many different synapses on a dendrite of the same neuron are added together at the soma region. Second, temporal summation: the process in which many synaptic potentials generated at the same synapse are added together if they occur in rapid succession. The latter requires high-frequency presynaptic activity to summate all the postsynaptic responses.

Going into more detail: in the absence of any signals in the neuron, the membrane of the individual neuron stays at the so-called resting potential. To generate an action potential, the membrane potential must be reduced below a threshold, which is called depolarization. As depolarization enhances a cell's ability to generate an action potential, it is excitatory. It has already been mentioned that to achieve the necessary depolarization, the synapses must be stimulated at high frequencies. Furthermore, to achieve significant spatial summation, enough synapses must be active simultaneously. This second requirement in biology is called cooperativity, as many coactive synapses must cooperate to produce enough depolarization to cause long-term potentiation, _i.e._, activity. To achieve sufficient temporal summation, the individual presynaptic potential must persist long enough to maintain the depolarization and even deepen it before the next presynaptic potential arrives. This defines the membrane time constant, which determines the time course of the synaptic potential and thus controls temporal summation. In the human brain, the time constant is in the range of 1-15 ms. In consequence, neurons with a larger membrane time constant have a greater capacity for temporal summation, as there is a higher probability that two consecutive signals from a presynaptic neuron will summate and bring the membrane to the threshold for an action potential.

Figure 7: (a) Real and (b) imaginary part of the permittivity map for different carrier concentrations and effective masses.

**Device performance in neural networks**

For the proposed structure, the optical signal corresponds to the biological action potential in neurons, while the thermalization time of electrons in the TCO corresponds to the membrane time constant.
The membrane time constant defines how long the depolarization is maintained by the neuron, while the thermalization time of electrons defines the time needed for excited electrons to return to their initial unexcited state. Consequently, while depolarization corresponds to a membrane potential reduced below threshold, its equivalent in the proposed device is a lower output optical power for a given input optical power under the all-optical switching mechanism, or a higher output optical power for the same carrier concentration under the electrical switching mechanism. Similarly to its biological counterpart, temporal summation in the proposed TCO-based device requires high-frequency input optical pulses to summate all the signals. The time interval between consecutive optical pulses should be shorter than the thermalization time of electrons in the TCO, so that the energy provided to the electrons by the next optical pulse gives rise to a further increase of the energy of the electron gas. If the time interval between consecutive pulses is longer than the electron-lattice relaxation time in the TCO, the electrons excited by the first optical pulse return to their initial unexcited state before the next optical pulse arrives.

When the consecutive pulses are strong enough and are combined in the integration area within a time shorter than the thermalization time of electrons in the TCO, each pulse slightly heats up the electrons and moves them higher in the conduction band. The output power then follows the red curve shown in **Fig. 8**. However, when the combined optical power exceeds the threshold, the optical transmission drops and follows the blue curve. When the optical pulses delivered to the device decrease, or if the spacing between consecutive pulses exceeds the thermalization time of electrons in the TCO, the electrons can thermalize and return to their initial energy level; in consequence, the effective mass decreases and the transmission drops to a lower level for the same input power (points X and Y in **Fig. 8**). A further decrease of the optical input power, and thus of the electron temperature, resets the device and moves it back to its initial state, indicated by point A.

Figure 8: (a) An optical pulse train in the waveguide before and after the proposed device. (b) Absorbed pump energy \(U_{s},\ldots,U_{n}\) under consecutive pulses increases the electron energy and, thus, the electron effective mass \(m^{*}(E)\), and (c) operation principles.

In this arrangement, the integration of pulses can take place in both the spatial and temporal domains, where pulses from other neurons can be combined into a single waveguide using wavelength division multiplexing (WDM). In this case, as the switching of the TCO occurs only above a certain threshold value, the neuron stays at low output power only if the weighted sum of the input optical power exceeds this threshold. Thus, the system naturally emulates the basic integrate-and-fire functionality of a biological neuron, but in an inverse scheme: the system stays at low power only when the threshold is reached. This artificial neuron can integrate over optical power and over time, which makes it very similar to a biological neuron.
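The temporal-summation condition can be illustrated with a minimal leaky-integrator sketch (Python; the per-pulse energy kick and the threshold are illustrative, while the 500 fs decay follows the thermalization time assumed earlier): only pulse trains faster than the thermalization rate accumulate enough electron energy to cross the switching threshold:

```python
import numpy as np

tau = 500e-15          # electron thermalization time (s), as assumed above
du = 0.3               # energy kick per pulse (arb. units, illustrative)
threshold = 1.0        # switching threshold for the electron energy (arb. units)

def peak_energy(pulse_period, n_pulses=20):
    """Leaky integration of a pulse train: u decays with tau between identical kicks."""
    u = 0.0
    for _ in range(n_pulses):
        u = u * np.exp(-pulse_period / tau) + du
    return u

for period in [100e-15, 400e-15, 2000e-15]:   # femtosecond-scale repetition periods
    u = peak_energy(period)
    state = "switched (low transmission)" if u > threshold else "resting"
    print(f"period = {period * 1e15:4.0f} fs -> peak energy {u:.2f} -> {state}")
```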
## Conclusion

For the first time, we have examined a bistable device on the low-loss nitride-rich silicon platform with TCO active materials arranged in a photonic rib waveguide for application in artificial neural networks. Different TCO materials were examined, showing that a significant reduction in optical power can be achieved with the proper choice of material. The proposed photonic device can serve both as a linear weight for a single photonic signal and as a simultaneous spatial and temporal summation unit integrating many photonic signals. Furthermore, depending on the overall summed signal value, the proposed device can keep a record of its previous state and can thus serve as a memristor, which brings it closer to the brain. The proposed device can be easily integrated with photonic SiN waveguides serving as interconnects, with a coupling efficiency exceeding 95%. Furthermore, both materials, _i.e._, silicon nitride and transparent conductive oxides, are CMOS-compatible and are characterized by very low losses, which opens new possibilities for the further development of neural networks.

## Acknowledgements

J.G. thanks the "ENSEMBLE3 - Centre of Excellence for Nanophotonics, advanced materials and novel crystal growth-based technologies" project (GA No. MAB/2020/14) carried out within the International Research Agendas program of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund, and the European Union's Horizon 2020 research and innovation program Teaming for Excellence (Grant Agreement No. 857543) for support of this work.
Fully CMOS-compatible photonic memory devices hold potential for the development of ultrafast artificial neural networks. Leveraging the advantages of photonics, namely high bandwidth, low latency, low-energy interconnects, and high speed, they can overcome the existing limits of electronic processing. To satisfy these requirements, a new photonic platform is proposed that uses low-loss nitride-rich silicon as the guiding material and low-loss transparent conductive oxides as the active material, providing high nonlinearity and bistability under both electrical and optical signals.
2309.10042
Covariant operator bases for continuous variables
Coherent-state representations are a standard tool to deal with continuous-variable systems, as they allow one to efficiently visualize quantum states in phase space. Here, we work out an alternative basis consisting of monomials on the basic observables, with the crucial property of behaving well under symplectic transformations. This basis is the analogue of the irreducible tensors widely used in the context of SU(2) symmetry. Given the density matrix of a state, the expansion coefficients in that basis constitute the multipoles, which describe the state in a canonically covariant form that is both concise and explicit. We use these quantities to assess properties such as quantumness or Gaussianity and to furnish direct connections between tomographic measurements and quasiprobability distribution reconstructions.
A. Z. Goldberg, A. B. Klimov, G. Leuchs, L. L. Sanchez-Soto
2023-09-18T18:00:15
http://arxiv.org/abs/2309.10042v2
# Covariant operator bases for continuous variables

###### Abstract

Coherent-state representations are a standard tool to deal with continuous-variable systems, as they allow one to efficiently visualize quantum states in phase space. Here, we work out an alternative basis consisting of monomials on the basic observables, with the crucial property of behaving well under symplectic transformations. This basis is the analogue of the irreducible tensors widely used in the context of SU(2) symmetry. Given the density matrix of a state, the corresponding expansion coefficients in that basis constitute the state multipoles, which describe the state in a canonically covariant form that is both concise and explicit. We use these quantities to assess properties such as quantumness or Gaussianity.

## 1 Introduction

The notion of observable plays a central role in quantum physics [1]. The term was first used by Heisenberg [2] (_beobachtbare Grosse_) to refer to quantities involved in physical measurements and thus having an operational meaning. They give us information about the state of a physical system and may be predicted by the theory. According to the conventional formulation, observables are represented by selfadjoint operators acting on the Hilbert space associated with the system [3, 4].

Given an abstract observable, one has to find its practical implementation. For discrete degrees of freedom, the associated Hilbert space is finite dimensional and the observable is then represented by a matrix whose explicit form depends on the basis. Choosing this basis such that it possesses specific properties can be tricky [5, 6, 7, 8]. Especially, when the system has an intrinsic symmetry, the basis should have the suitable transformation properties under the action of that symmetry. This idea is the rationale behind the construction of irreducible tensorial sets [9], which are crucial for the description of rotationally invariant systems [10] and can be generalized to other invariances [11].

Things get more complicated in the continuous-variable setting, when the Hilbert space has infinite dimensions. The paradigmatic example is that of a single bosonic mode, where the Weyl-Heisenberg group emerges as a hallmark of noncommutativity [12]. As Fock and coherent states are frequently regarded as the most and least quantum states, respectively, they are typically used as bases in quantum optics. Coherent states constitute an overcomplete basis which lies at the heart of the phase-space formulation of quantum theory [13, 14, 15, 16, 17, 18, 19, 20, 21, 22], where observables become \(c\)-number functions (the _symbols_ of the operators). This is the most convenient construct for visualizing quantum states and processes for continuous variables (CV). In this phase-space approach the operator bases used are recognised to be simple ordered exponentials of the dynamical variables. However, our physical intuition seems to require an explicit invariance under symplectic transformations (i.e., linear canonical transformations), which is not apparent at first sight [23]. This seems to call for proper tensorial sets for CV.

In Ref. [24] it was suggested that for a single mode, the monomials \[\hat{T}_{Kq}=\hat{a}^{\dagger K+q}\hat{a}^{K-q} \tag{1}\] with \(K=0,1/2,1,\ldots\) and \(q=-K,\ldots,+K\) behave as proper tensor operators for the problem at hand. Here \(\hat{a}\) and \(\hat{a}^{\dagger}\) are the bosonic annihilation and creation operators for the mode.
In this work, we examine the properties of these monomials and derive their inverses, which can then be used to directly expand any quantum operator. These operators can then be added to the quantum optician's toolbox and used by anyone working in CV. When the density matrix is expanded in the basis (1), its expansion coefficients are the moments, dubbed state multipoles, which convey complete information. For CV, moments have been considered for studying quantumness [25, 26]. Here, we inspect how the multipoles characterize the state. Drawing inspiration from SU(2), we compare states that hide their information in the large-\(K\) coefficients to those whose information is mostly contained in the smallest-\(K\) multipoles. The result is an intriguing counterplay between the extremal states in the other representations, including Fock states, coherent states, and states with maximal off-diagonal coefficients in the Fock basis.

There are many avenues to explore with the monomials representation. After a brief review of the basic concepts required in Sec. 2, we examine the properties of the basis (1) and its inverse in Sec. 3. The corresponding multipoles appear as the expansion coefficients of the density matrix in that basis. The covariance under symplectic transformations tells us how the different parts of a state are interconverted through standard operations. Note that we are considering only normally ordered polynomials, but everything can be extended to antinormally and symmetrically ordered monomials. In Sec. 4 we introduce the concept of the cumulative multipole distribution and its inverse, find the extremal states for those quantities, and determine in this way which states are the most and least quantum. Our conclusions are finally summarized in Sec. 5.

## 2 Background

We provide here a self-contained background that is familiar to quantum opticians. The reader can find more details in the previously quoted literature [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. A single bosonic mode has creation and annihilation operators satisfying the commutation relations \[[\hat{a},\hat{a}^{\dagger}]=\mathbb{1}. \tag{2}\] These can be used to define the Fock states as excitations \[\ket{n}=\frac{\hat{a}^{\dagger n}}{\sqrt{n!}}\ket{\mathrm{vac}} \tag{3}\] of the vacuum \(\ket{\mathrm{vac}}\) annihilated as \(\hat{a}\ket{\mathrm{vac}}=0\), as well as the canonical coherent states \[\ket{\alpha}=\mathrm{e}^{-\frac{\left|\alpha\right|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\ket{n}. \tag{4}\] These can both be used to resolve the identity: \[\mathbb{1}=\sum_{n=0}^{\infty}\ket{n}\bra{n}=\frac{1}{\pi}\int d^{2}\alpha\ket{\alpha}\bra{\alpha}. \tag{5}\] The coherent states can also be defined as displaced versions of the vacuum state \(\left|\alpha\right\rangle=\hat{D}(\alpha)\left|\mathrm{vac}\right\rangle\) via the displacement operators that take numerous useful forms \[\hat{D}(\alpha)=e^{\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a}}=e^{-\frac{\left|\alpha\right|^{2}}{2}}\,e^{\alpha\hat{a}^{\dagger}}e^{-\alpha^{*}\hat{a}}=e^{\frac{\left|\alpha\right|^{2}}{2}}\,e^{-\alpha^{*}\hat{a}}e^{\alpha\hat{a}^{\dagger}}\,. \tag{6}\] These obey the composition law \[\hat{D}(\alpha)\hat{D}(\beta)=e^{i\,\mathrm{Im}(\alpha\beta^{*})}\hat{D}(\alpha+\beta) \tag{7}\] and the trace-orthogonality condition \[\mathrm{Tr}[\hat{D}(\alpha)\hat{D}(-\beta)]=\pi\delta^{2}(\alpha-\beta)\,.
\tag{8}\] Their matrix elements in the coherent-state basis can be found from the composition law and in the Fock-state basis are given by [27] \[\left\langle m\right|\hat{D}(\alpha)\left|n\right\rangle=\begin{cases}\sqrt{\frac{n!}{m!}}e^{-\frac{\left|\alpha\right|^{2}}{2}}\alpha^{m-n}\,L_{n}^{(m-n)}(|\alpha|^{2}),&n\leq m,\\ \\ \sqrt{\frac{m!}{n!}}e^{-\frac{\left|\alpha\right|^{2}}{2}}(-\alpha^{*})^{n-m}L_{m}^{(n-m)}(|\alpha|^{2}),&m\leq n,\end{cases} \tag{9}\] where \(L_{n}^{(\alpha)}(\cdot)\) denotes the generalized Laguerre polynomial [28]. Given any operator \(\hat{F}\), it can be expressed in the Fock basis as \[\hat{F}=\sum_{m,n}F_{m,n}\left|m\right\rangle\left\langle n\right|,\qquad F_{m,n}=\left\langle m\right|\hat{F}\left|n\right\rangle \tag{10}\] and in the coherent-state basis as \[\hat{F}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta F(\alpha,\beta)\left|\alpha\right\rangle\left\langle\beta\right|,\qquad F\left(\alpha,\beta\right)=\left\langle\alpha\right|\hat{F}\left|\beta\right\rangle. \tag{11}\] However, it is always possible to express this coherent-state representation in a diagonal form. For the particular case of the density operator \(\hat{\varrho}\) this yields the Glauber-Sudarshan \(P\)-function [29, 30] \[\hat{\varrho}=\int d^{2}\alpha\,P(\alpha)\left|\alpha\right\rangle\left\langle\alpha\right|\,, \tag{12}\] with [31] \[P(\alpha)=\frac{e^{\left|\alpha\right|^{2}}}{\pi^{2}}\int d^{2}\beta\left\langle-\beta\right|\hat{\varrho}\left|\beta\right\rangle e^{\left|\beta\right|^{2}+2i\,\mathrm{Im}(\alpha\beta^{*})}. \tag{13}\] The same holds true for any operator \(\hat{F}\) for which \(\left\langle-\beta\right|\hat{F}\left|\beta\right\rangle e^{\left|\beta\right|^{2}}\) is square-integrable. One identity that often shows up in this realm is an expression for the vacuum in terms of normally ordered polynomials: \[\left|\mathrm{vac}\right\rangle\left\langle\mathrm{vac}\right|=:e^{-\hat{a}^{\dagger}\hat{a}}:\,. \tag{14}\] This allows us to express any unit-rank operator from the Fock basis as \[\left|m\right\rangle\left\langle n\right|=\frac{1}{\sqrt{m!n!}}:\hat{a}^{\dagger}{}^{m}\mathrm{e}^{-\hat{a}^{\dagger}\hat{a}}\hat{a}^{n}:\,. \tag{15}\] This directly guarantees that a normally ordered expression will always exist for any operator.

## 3 State multipoles

As heralded in the Introduction, the monomials (1) are the components of finite-dimensional tensor operators with respect to the symplectic group Sp(2, \(\mathbb{R}\)). Their transformation properties are examined in the Appendix A. For completeness, we have to seek operators \(\hat{\mathfrak{T}}_{Kq}\) satisfying the proper orthonormality conditions to be inverses of the monomials: \[\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{T}_{K^{\prime}q^{\prime}})=\delta_{KK^{\prime}}\delta_{qq^{\prime}}. \tag{16}\] Using the trace-orthogonality conditions of the displacement operators, we can rewrite this condition as \[\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{T}_{K^{\prime}q^{\prime}})=\frac{1}{\pi}\int d^{2}\beta\ \operatorname{Tr}[D(\beta)\hat{T}_{K^{\prime}q^{\prime}}]\operatorname{Tr}[D(-\beta)\hat{\mathfrak{T}}_{Kq}]=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{\frac{|\beta|^{2}}{2}}e^{\beta\alpha^{*}-\beta^{*}\alpha}\alpha^{*K^{\prime}+q^{\prime}}\alpha^{K^{\prime}-q^{\prime}}\operatorname{Tr}[D(-\beta)\hat{\mathfrak{T}}_{Kq}]\,.
\tag{17}\] Now, by inspection, we attain orthonormality when \[e^{\frac{|\beta|^{2}}{2}}\operatorname{Tr}[D(-\beta)\hat{\mathfrak{T}}_{Kq}]=(-1)^{2K}\frac{\beta^{K+q}(-\beta^{*})^{K-q}}{(K+q)!(K-q)!}. \tag{18}\] In consequence, we have \[\hat{\mathfrak{T}}_{Kq}=\frac{(-1)^{K+q}}{(K+q)!\,(K-q)!}\frac{1}{\pi}\int d^{2}\beta\ e^{-\frac{|\beta|^{2}}{2}}\hat{D}(\beta)\ \beta^{K+q}\beta^{*K-q}. \tag{19}\] Interestingly, they appear as moments of the operators introduced in the pioneering work by Agarwal and Wolf [32]. This inversion process can be repeated with other ordered polynomials and we find the inverse operators to again appear as moments of the other operators introduced therein. In Appendix B we sketch the procedure for the case of symmetric order. Once they are known, it is easy to expand any operator, such as a density matrix \(\hat{\varrho}\), through \[\hat{\varrho}=\sum_{Kq}\langle\hat{\mathfrak{T}}_{Kq}\rangle\ \hat{T}_{Kq}\,, \tag{20}\] where \(\langle\hat{\mathfrak{T}}_{Kq}\rangle=\operatorname{Tr}(\hat{\varrho}\hat{\mathfrak{T}}_{Kq})\), following the standard notation for SU(2) [10], will be called the state multipoles. They correspond to moments of the basic variables, properly arranged. Conversely, we can expand operators in the basis of the inverse operators, \[\hat{\varrho}=\sum_{Kq}\langle\hat{T}_{Kq}\rangle\ \hat{\mathfrak{T}}_{Kq}\,, \tag{21}\] with \(\langle\hat{T}_{Kq}\rangle=\operatorname{Tr}(\hat{\varrho}\hat{T}_{Kq})\) now being the inverse multipoles. Since inverse operators inherit the Hermitian conjugation properties of the monomials, \[\hat{T}_{Kq}^{\dagger}=\hat{T}_{K\,-q},\qquad\qquad\hat{\mathfrak{T}}_{Kq}^{\dagger}=\hat{\mathfrak{T}}_{K\,-q}, \tag{22}\] the multipoles and inverse multipoles simply transform as \(q\leftrightarrow-q\) under complex conjugation. The purity of a state has a simple expression in terms of the multipoles \[\operatorname{Tr}(\hat{\varrho}^{2})=\sum_{Kq}\langle\hat{\mathfrak{T}}_{Kq}\rangle\langle\hat{T}_{Kq}\rangle\,. \tag{23}\] It is more challenging to express the trace of a state in terms of the multipoles because the operators \(\hat{T}_{Kq}\) are not trace-class; however, by formally writing \(\mathrm{Tr}[\hat{D}(\beta)]=\pi\delta^{2}(\beta)\exp(-|\beta|^{2}/2)\), we can compute \[\mathrm{Tr}(\hat{\mathfrak{T}}_{Kq})=\delta_{K0}\delta_{q0} \tag{24}\] such that normalization dictates that the inverse multipoles satisfy \(1=\mathrm{Tr}(\hat{\varrho})=\langle\hat{T}_{00}\rangle\). In principle, the complete characterization of a CV state requires the knowledge of infinitely many multipoles. For a Gaussian state, only moments up until \(K=1\) are needed. This suggests that either the inverse multipoles \(\langle\hat{T}_{Kq}\rangle\) for larger values of \(K\) or the multipoles \(\langle\hat{\mathfrak{T}}_{Kq}\rangle\) characterize the non-Gaussianity of a state. In consequence, we have to calculate the multipoles of arbitrary states. Before that, we consider the simplest cases of coherent and Fock states, for which the calculations are straightforward. Starting with coherent states, using (19) and recalling the Rodrigues formula for the generalized Laguerre polynomials [28], we get \[\langle\alpha|\hat{\mathfrak{T}}_{Kq}|\alpha\rangle=\frac{(-1)^{K+q}}{(K-q)!}\frac{e^{-|\alpha|^{2}}}{\alpha^{*2q}}L_{K+q}^{(-2q)}(|\alpha|^{2})\,. \tag{25}\] The magnitudes of these multipole moments versus \(|\alpha|^{2}\) for various values of \(K\) and \(q\) are plotted in Fig. 1.
As we can appreciate, they decrease rapidly with \(K\) and large \(|\alpha|\). As for Fock states, we use the matrix elements of the displacement operator \(\langle n|\,\hat{D}(\beta)\,|n\rangle=\exp(-|\beta|^{2}/2)L_{n}(|\beta|^{2})\). Since these only depend on \(|\beta|\) and not its phase, the \(q\neq 0\) terms all vanish, leaving us with \[\langle n|\hat{\mathfrak{T}}_{Kq}|n\rangle=\delta_{q0}\frac{(-1)^{K}}{K!^{2}}\int_{0}^{\infty}rdr\,2\mathrm{e}^{-r^{2}}r^{2K}L_{n}(r^{2})=\delta_{q0}\frac{(-1)^{K+n}}{n!(K-n)!}. \tag{26}\] The inverse multipoles are trivial in both cases, with \[\langle\alpha|\hat{T}_{Kq}|\alpha\rangle=\alpha^{*K+q}\alpha^{K-q}\,,\qquad\qquad\langle n|\hat{T}_{Kq}|n\rangle=\delta_{q0}K!\binom{n}{K}\,. \tag{27}\] Note that the multipoles that vanish for Fock states have \(n>K\) and the inverse multipoles that vanish for Fock states have \(K>n\). For arbitrary states, we note that, since any state can be expressed in terms of its \(P\)-function, we can write \[\langle\hat{\mathfrak{T}}_{Kq}\rangle=\int d^{2}\alpha\ P(\alpha)\langle\alpha|\hat{\mathfrak{T}}_{Kq}|\alpha\rangle=\int d^{2}\alpha\ P(\alpha)\,\frac{(-1)^{K+q}}{(K-q)!}\frac{e^{-|\alpha|^{2}}}{\alpha^{*2q}}L_{K+q}^{(-2q)}(|\alpha|^{2}). \tag{28}\] To get more of a handle on these multipoles, especially when \(P\) is not a well-behaved function, it is more convenient to have an expression in terms of the matrix elements \(\varrho_{mn}=\langle m|\,\hat{\varrho}\,|n\rangle\). This can be provided by expressing \(P(\alpha)\) in terms of matrix elements of the state in the Fock basis and derivatives of delta functions. More directly, we can compute (\(m\leq n\)) \[\langle n|\hat{\mathfrak{T}}_{Kq}|m\rangle=\frac{(-1)^{K+q}}{(K+q)!\,(K-q)!}\frac{1}{\pi}\int d^{2}\beta e^{-|\beta|^{2}}\sqrt{\frac{n!}{m!}}\beta^{m-n}L_{n}^{(m-n)}(|\beta|^{2})\beta^{K+q}\beta^{*K-q}=\delta_{n-m,2q}\,(-1)^{K+q+n}\,\sqrt{\frac{n!}{(n-2q)!}}\binom{K+q}{n}\frac{1}{(K+q)!}. \tag{29}\] These give the matrix elements of the inverse operators \(\hat{\mathfrak{T}}_{Kq}\) in the Fock basis and show that \(\hat{\mathfrak{T}}_{Kq}\) can only have nonnull eigenstates when \(q=0\). Putting these together for an arbitrary state, we find \[\langle\hat{\mathfrak{T}}_{Kq}\rangle=\begin{cases}\sum_{n\geq m}\varrho_{nm}\delta_{n-m,2q}\,(-1)^{K+q+n}\,\sqrt{\frac{n!}{(n-2q)!}}\binom{K+q}{n}\frac{1}{(K+q)!},&q\geq 0\,,\\ \\ \sum_{m\geq n}\varrho_{nm}^{*}\delta_{n-m,-2q}\,(-1)^{K-q+n}\,\sqrt{\frac{n!}{(n+2q)!}}\binom{K-q}{n}\frac{1}{(K-q)!},&q\leq 0\,.\end{cases} \tag{30}\]

Figure 2: Parts of the state in the Fock state basis coupled to by a particular inverse operator \(\hat{\mathfrak{T}}_{Kq}\). Each value of \(q\) labels the off-diagonal stripe of the matrix that affects the value of \(\langle\hat{\mathfrak{T}}_{Kq}\rangle\). Each value of \(K\) labels the maximal antidiagonal row that contributes to the value of \(\langle\hat{\mathfrak{T}}_{Kq}\rangle\). This antidiagonal row is characterized by the row and column number summing to \(2K\).

In this way, we get a simple expression for the inverse monomials in the Fock basis:
\[\hat{\mathfrak{T}}_{Kq}=\begin{cases}\displaystyle\sum_{n=2q}^{K+q}\frac{(-1)^{K+q+n}}{\sqrt{n!\,(n-2q)!}\,(K+q-n)!}\,\left|n-2q\right\rangle\left\langle n\right|,&q\geq 0\,,\\ \\ \displaystyle\sum_{n=-2q}^{K-q}\frac{(-1)^{K-q+n}}{\sqrt{n!\,(n+2q)!}\,(K-q-n)!}\,\left|n\right\rangle\left\langle n+2q\right|,&q\leq 0\,,\end{cases} \tag{31}\] whose orthonormality with the operators \(\hat{T}_{Kq}\) can be directly verified. This expression equally serves to furnish a representation of the moments of the displacement operator in the Fock basis. To understand this result, we plot in Fig. 2 a representation of the nonzero parts of different operators \(\hat{\mathfrak{T}}_{Kq}\) in the Fock basis, which equivalently represents which elements of a density matrix \(\varrho_{mn}\) contribute to a given multipole \(\langle\hat{\mathfrak{T}}_{Kq}\rangle\). The contributing elements are all on the \(2q\)th diagonal, ranging over the first \(2K+1\) antidiagonals. The inverse multipoles \(\langle\hat{T}_{Kq}\rangle\) depend on the \(-q\)th diagonal and all of the antidiagonals starting from the \(2K\)th antidiagonal. This picture makes clear a number of properties that will become useful for our purposes.

To conclude, it is common to find operators of a generic form \(f(\hat{a},\hat{a}^{\dagger})\). Quite often, it is necessary to find their normally ordered form \(:\!f(\hat{a},\hat{a}^{\dagger})\!:\), where \(:\;:\) denotes normal ordering. Such is necessary, for example, in photodetection theory [33]. Although algebraic techniques are available [34], the multipolar expansion that we have developed makes this computation quite tractable. We first compute \[\operatorname{Tr}[\hat{D}(\beta)\,:\!f(\hat{a},\hat{a}^{\dagger})\!:]=e^{\frac{\left|\beta\right|^{2}}{2}}\,\operatorname{Tr}[:\!e^{\beta\hat{a}^{\dagger}}\,f(\hat{a},\hat{a}^{\dagger})\,e^{-\beta^{*}\hat{a}}\!:]=\frac{e^{\frac{\left|\beta\right|^{2}}{2}}}{\pi}\int d^{2}\alpha\,f(\alpha,\alpha^{*})\,e^{\beta\alpha^{*}-\beta^{*}\alpha}\,. \tag{32}\] The integral is nothing but the Fourier transform of the function \(f(\alpha,\alpha^{*})\) with respect to both of its arguments. If we call \(F(\beta,\beta^{*})\) this transform, the multipole moments of \(:f(\hat{a},\hat{a}^{\dagger}):\), denoted by \(F_{Kq}\), become \[F_{Kq}=\frac{(-1)^{K+q}}{\pi(K+q)!(K-q)!}\int d^{2}\beta\,F(\beta,\beta^{*})\,\beta^{K+q}\beta^{*K-q}\,. \tag{33}\] In other words, the moments of the Fourier transform of \(f(\alpha,\alpha^{*})\) give the expansion coefficients of \(:f(\hat{a},\hat{a}^{\dagger}):\) in the \(\hat{T}_{Kq}\) basis.
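Since both Eq. (31) and the monomials (1) are explicit in the Fock basis, the biorthonormality (16) can be verified numerically in a truncated Fock space. A minimal sketch (Python; the truncation dimension and the range of checked orders are arbitrary), indexing each operator by the integers \(p=K+q\) and \(m=K-q\):

```python
import numpy as np
from math import factorial

D = 20  # Fock-space truncation, large enough for the small orders checked below

a = np.diag(np.sqrt(np.arange(1, D)), k=1)    # annihilation operator
ad = a.conj().T                               # creation operator

def T(p, m):
    """Monomial T_{Kq} = a^dag^p a^m, with K = (p+m)/2 and q = (p-m)/2."""
    return np.linalg.matrix_power(ad, p) @ np.linalg.matrix_power(a, m)

def inv_T(p, m):
    """Inverse operator of Eq. (31); it has finite support in the Fock basis."""
    if p < m:                      # q < 0: use Hermitian conjugation, Eq. (22)
        return inv_T(m, p).conj().T
    Z = np.zeros((D, D))
    for n in range(p - m, p + 1):  # the stripe 2q = p - m, antidiagonals up to 2K
        Z[n - (p - m), n] = (-1) ** (p + n) / (
            np.sqrt(factorial(n) * factorial(n - p + m)) * factorial(p - n))
    return Z

# Check Tr(inv_T T') = delta_{KK'} delta_{qq'} for all monomials with p, m < 3
for p in range(3):
    for m in range(3):
        for pp in range(3):
            for mm in range(3):
                tr = np.trace(inv_T(p, m) @ T(pp, mm))
                assert abs(tr - (1.0 if (p, m) == (pp, mm) else 0.0)) < 1e-9
print("biorthonormality verified")
```

The truncation is harmless here because each \(\hat{\mathfrak{T}}_{Kq}\) has finite support, so the traces are exact once the dimension exceeds the largest checked \(K+q\).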
## 4 Extremal states

### Cumulative multipolar distribution

We now turn our attention to the cumulative multipole distribution; that is, \[\mathfrak{A}_{M}(\hat{\varrho})=\sum_{K=0}^{M}\mathfrak{T}_{K}^{2}(\hat{\varrho}) \tag{34}\] with \(M=0,1/2,1,\ldots\) and where \[\mathfrak{T}_{K}^{2}(\hat{\varrho})=\sum_{q=-K}^{K}|\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{\varrho})|^{2} \tag{35}\] is the Euclidean norm of the \(K\)th multipole. The quantities \(\mathfrak{A}_{M}(\hat{\varrho})\) can be used to furnish a generalized uncertainty principle [24] and they are a good indicator of quantumness [35, 36]. For spin variables, it has been shown that \(\mathfrak{A}_{M}(\hat{\varrho})\) are maximized to all orders \(M\) by SU(2)-coherent states, which are the least quantum states in this context, and vanish for the most quantum states, which are called the Kings of Quantumness, the furthest in some sense from coherent states [37, 38, 39]. What states maximize and minimize these cumulative variables for CV? Let us begin by examining a few of the lowest orders.

\(\boldsymbol{M=0}\): For an arbitrary state, we can write \(\mathfrak{A}_{0}\) in terms of the Fock-state coefficients as \[\mathfrak{A}_{0}=\left|\sum_{n}(-1)^{n}\varrho_{nn}\binom{0}{n}\right|^{2}=|\varrho_{00}|^{2}. \tag{36}\] This is uniquely maximized by the vacuum state \(|\mathrm{vac}\rangle\), with \(\varrho_{00}=1\), which is a minimal-energy coherent state and can be considered the least quantum state in this context. The quantity \(\mathfrak{A}_{0}\), on the other hand, is minimized by any state with \(\varrho_{00}=0\), which causes \(\mathfrak{A}_{0}\) to vanish. This is easily attained by Fock states \(|n\rangle\) with \(n>0\). In this sense, all Fock states that are not the vacuum are the most quantum. States become more quantum as they gain more energy and their vacuum component \(\varrho_{00}\) diminishes in magnitude.

\(\boldsymbol{M=1/2}\): For \(K=1/2\), we can readily compute \[\mathfrak{T}_{1/2}=|\varrho_{01}|^{2}+|\varrho_{10}|^{2}=2|\varrho_{01}|^{2}. \tag{37}\] This is minimized by any state with no coherences in the Fock basis (e.g., number states). On the other hand, it is maximized by states with maximal coherence in the smallest-energy section of the Fock basis: \(|\psi_{+}\rangle=\frac{1}{\sqrt{2}}(|0\rangle+e^{i\varphi}\,|1\rangle)\), with \(\varphi\in\mathbb{R}\). Together, \(\mathfrak{A}_{1/2}\) is minimized by any state with \(\varrho_{00}=0\), because that forces \(\varrho_{01}\) to vanish by positivity of the density matrix, and it is still uniquely maximized by the vacuum state, again because of the positivity constraint \(|\varrho_{01}|\leq\sqrt{\varrho_{00}(1-\varrho_{00})}\).

\(\boldsymbol{M=1}\): Now, we find \[\mathfrak{T}_{1}=|\varrho_{00}-\varrho_{11}|^{2}+\tfrac{1}{2}|\varrho_{02}|^{2}+\tfrac{1}{2}|\varrho_{20}|^{2}=(\varrho_{00}-\varrho_{11})^{2}+|\varrho_{02}|^{2}. \tag{38}\] This is minimized by all states with \(\varrho_{00}=\varrho_{11}=0\), again including Fock states but now with more than one excitation, but it is also _minimized_ by the state \(|\psi_{+}\rangle\) that _maximized_ \(\mathfrak{A}_{1/2}\). It is again maximized by the vacuum state with \(\varrho_{00}=1\), but it is also maximized by the single-photon state with \(\varrho_{11}=1\). The cumulative distribution is again the more sensible quantity: \(\mathfrak{A}_{1}\) is minimized by states with vanishing components in the zero- and single-excitation subspaces, of which the Fock state \(|2\rangle\) has the lowest energy, and is uniquely maximized by the vacuum (coherent) state.

\(\boldsymbol{M=3/2}\): We find \[\mathfrak{T}_{3/2}=\tfrac{2}{3!}|\varrho_{30}|^{2}+2\left|\varrho_{10}-\tfrac{1}{\sqrt{2}}\varrho_{21}\right|^{2}. \tag{39}\] As usual this is minimized by any Fock state and by any state with no probability in photon-number sectors up until \(n=3\), while it is maximized by pure states of the form \(|\psi\rangle=e^{i\varphi}\tfrac{1}{\sqrt{3}}\,|0\rangle+\tfrac{1}{\sqrt{2}}\,|1\rangle-e^{-i\varphi}\tfrac{1}{\sqrt{6}}\,|2\rangle\). The cumulative \(\mathfrak{A}_{3/2}\) is again uniquely maximized by the vacuum state and minimized by any Fock state and by any state with no probability in photon-number sectors up until \(n=3\).

\(\boldsymbol{M>3/2}\): The consistent conclusion is that different Euclidean norms of the multipoles for different orders \(K\) can be maximized by different states, but that the cumulative distribution is always maximized by the vacuum state.
All of the orders of multipoles and their cumulative distribution vanish for sufficiently large Fock states, cementing Fock states as maximally quantum according to this condition. As of yet, we have only a circuitous proof that \(\mathfrak{A}_{M}(\hat{\varrho})\) is uniquely maximized by \(|\mathrm{vac}\rangle\) for arbitrarily large \(M\): in Appendix C, we provide joint analytical and numerical arguments that this pattern continues for all \(M\), such that the vacuum state may be considered minimally quantum according to this condition. We can compute this maximal cumulative multipole moment, that of the vacuum, at any order: \[\mathfrak{A}_{M}(|\mathrm{vac}\rangle)=\sum_{K=0}^{M}\frac{1}{K!^{2}}=I_{0}(2)-\,_{1}\tilde{F}_{2}(1;\lfloor M\rfloor+2,\lfloor M\rfloor+2;1), \tag{40}\] with a Bessel function [28] and a regularized hypergeometric function [40]. This approaches \(I_{0}(2)\approx 2.27959\) in the limit of large \(M\). Moreover, by computing \(\mathfrak{A}_{\infty}(|n\rangle)=I_{0}(2)/n!^{2}\), we realize why only \(|0\rangle\) and \(|1\rangle\) behave so similarly in the large-\(M\) limit. Finally, note that the cumulative multipole operators also take the intriguing form \[\hat{\mathfrak{A}}_{M}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{-\frac{|\alpha|^{2}+|\beta|^{2}}{2}}\hat{D}(-\alpha)\otimes\hat{D}(\beta)\sum_{K}^{M}\frac{(\alpha\beta^{*}-\alpha^{*}\beta)^{2K}}{(2K)!^{2}}P_{2K}\left(\frac{\alpha\beta^{*}+\alpha^{*}\beta}{\alpha^{*}\beta-\alpha\beta^{*}}\right) \tag{41}\] \[\hat{\mathfrak{A}}_{\infty}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{-\frac{|\alpha|^{2}+|\beta|^{2}}{2}}\hat{D}(-\alpha)\otimes\hat{D}(\beta)\left|I_{0}(2\sqrt{\alpha\beta^{*}})\right|^{2},\] where \(P_{n}(\alpha)=e^{-|\alpha|^{2}/2}\,\alpha^{n}/\sqrt{n!}\) is the Poissonian amplitude.
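Equation (40) and the Fock-state scaling \(\mathfrak{A}_{\infty}(|n\rangle)=I_{0}(2)/n!^{2}\) are easy to check numerically; a minimal sketch (Python; only integer \(K\) contribute for these diagonal states, since their \(q=0\) multipoles require integer \(K\)):

```python
from math import factorial

def A_vac(M):
    """Eq. (40): cumulative multipole distribution of the vacuum, sum_{K<=M} 1/K!^2."""
    return sum(1.0 / factorial(K) ** 2 for K in range(M + 1))

def A_fock(n, M):
    """A_M(|n>) from Eq. (26): <n|Z_K0|n> = (-1)^(K+n)/(n!(K-n)!) for K >= n."""
    return sum(1.0 / (factorial(n) * factorial(K - n)) ** 2 for K in range(n, M + 1))

print(A_vac(20))             # -> 2.27959..., i.e. I_0(2), the large-M limit
for n in range(4):           # A_inf(|n>) = I_0(2)/n!^2, so |0> and |1> coincide in
    print(n, A_fock(n, 30))  # the limit, while higher Fock states decay as 1/n!^2
```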
Finally, note that the cumulative multipole operators also take the intriguing form \[\mathfrak{\hat{A}}_{M}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{-\frac{|\alpha|^{2}+|\beta|^{2}}{2}}\hat{D}(-\alpha)\otimes\hat{D}(\beta)\sum_{K}^{M}\frac{(\alpha\beta^{*}-\alpha^{*}\beta)^{2K}}{(2K)!^{2}}P_{2K}\left(\frac{\alpha\beta^{*}+\alpha^{*}\beta}{\alpha^{*}\beta-\alpha\beta^{*}}\right) \tag{41}\] \[\mathfrak{\hat{A}}_{\infty}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{-\frac{|\alpha|^{2}+|\beta|^{2}}{2}}\hat{D}(-\alpha)\otimes\hat{D}(\beta)\left|I_{0}(2\sqrt{\alpha\beta^{*}})\right|^{2},\] where \(P_{n}(\alpha)=\mathrm{e}^{-|\alpha|^{2}/2}\alpha^{n}/\sqrt{n!}\) is the Poissonian amplitude.

### Inverse multipole distribution

An important question arises: how does one measure a state's multipole moments? Homodyne detection provides one immediate answer. By interfering a given state \(\hat{\varrho}\) with a coherent state \(|\alpha\rangle\) on a balanced beamsplitter and measuring the difference of the photocurrents of detectors placed at both output ports, one collects a signal proportional to \(x(\theta)=\left\langle\hat{a}e^{-i\theta}+\hat{a}^{\dagger}e^{i\theta}\right\rangle\), where \(\theta\) can be varied by changing the phase \(\arg\alpha\) of the reference beam. Collecting statistics of the quadrature \(x(\theta)\) up to its \(K\)th-order moments for a variety of phases \(\theta\) allows one to read off the moments \(\left\langle\hat{T}_{Kq}\right\rangle=\left\langle\hat{a}^{\dagger\,K+q}\hat{a}^{K-q}\right\rangle\).

This raises the question: what states maximize and minimize the cumulative multipole moments in the inverse basis? We start by defining, in analogy to Eq. (34), the cumulative distribution \[A_{M}(\hat{\varrho})=\sum_{K}^{M}\sum_{q=-K}^{K}\left|\langle\hat{T}_{Kq}\rangle\right|^{2}\,. \tag{42}\] This quantity directly depends on the energy of the state, vanishing if and only if the state is the vacuum. As for the maximization, it is clear that coherent states with more energy cause the cumulative sum \(A_{M}\) to increase, so we must fix the average energy \(\bar{n}=\left\langle\hat{a}^{\dagger}\hat{a}\right\rangle\) when comparing which states maximize and minimize the sum.

Maximizing \(A_{M}\) for a fixed average energy is straightforward because each inverse multipole satisfies \[|\langle\hat{T}_{Kq}\rangle|^{2}\leq\left\langle\hat{a}^{\dagger K+q}\hat{a}^{K+q}\right\rangle\left\langle\hat{a}^{\dagger K-q}\hat{a}^{K-q}\right\rangle. \tag{43}\] The inequality is saturated if and only if \(\hat{a}^{K+q}\left|\psi\right\rangle\propto\hat{a}^{K-q}\left|\psi\right\rangle\); that is, \(\hat{a}^{2q}\left|\psi\right\rangle\propto\left|\psi\right\rangle\), which, for \(q\neq 0\), requires coherent states or superpositions of coherent states with particular phase relationships akin to higher-order cat states [41, 42, 43]: \[\left|\psi^{(q)}\right\rangle=\sum_{l=0}^{2q-1}\psi_{l}\left|\alpha e^{\frac{2\pi il}{2q}}\right\rangle. \tag{44}\] Each of these states provides the same value \(|\langle\hat{T}_{Kq}\rangle|^{2}=|\alpha|^{4K}\). Then, since saturating the inequality for all \(q\) simultaneously requires \(\psi_{l}=0\) for all \(l\neq 0\), only a coherent state maximizes the cumulative sum \(A_{M}\) for any fixed energy \(\bar{n}=|\alpha|^{2}\).

We already know that \(\left|\mathrm{vac}\right\rangle\) minimizes \(A_{M}\). For a given, fixed \(\bar{n}>0\), one can ask what state minimizes the cumulative multipoles. All of the multipoles with \(q\neq 0\) vanish for Fock states; this is because they vanish for any state that is unchanged after undergoing a rotation by \(\pi/2q\) about the origin in phase space. The \(q=0\) multipoles, on the other hand, depend only on the diagonal coefficients of the density matrix in the Fock basis, which can be minimized in parallel. To minimize a multipole moment \[\left|\langle\hat{T}_{K0}\rangle\right|=K!\sum_{n\geq K}\binom{n}{K}\varrho_{nn}, \tag{45}\] there are two cases to consider: \(\bar{n}<K\) and \(\bar{n}\geq K\). If \(\bar{n}<K\), the multipole vanishes by simply partitioning all of the probability among the Fock states with fewer than \(K\) photons and arranging those states in a convex combination with no coherences in the Fock basis. If \(\bar{n}\geq K\), the sum is ideally minimized by setting \(\varrho_{\bar{n}\bar{n}}=1\), by convexity properties of the binomial coefficients (they grow by a larger amount when \(n\) increases than the amount that they shrink when \(n\) decreases). For noninteger \(\bar{n}\), the minimum is achieved by setting \[\varrho_{\lceil\bar{n}\rceil\lceil\bar{n}\rceil}=1-(\lceil\bar{n}\rceil-\bar{n}),\qquad\qquad\varrho_{\lceil\bar{n}\rceil-1\,\lceil\bar{n}\rceil-1}=\lceil\bar{n}\rceil-\bar{n} \tag{46}\] with no coherences between these two Fock states. Here, \(\lceil x\rceil\) is the ceiling function that gives the smallest integer greater than or equal to \(x\). Since this minimization does not depend on \(K\), we have thus found the unique state that minimizes \(A_{M}\) for all \(M\) with arbitrary \(\bar{n}\): \[\arg\min_{\hat{\varrho}}A_{M}(\hat{\varrho}\,|\,\bar{n})=(\lceil\bar{n}\rceil-\bar{n})\left|\lceil\bar{n}\rceil-1\right\rangle\left\langle\lceil\bar{n}\rceil-1\right|+(1+\bar{n}-\lceil\bar{n}\rceil)\left|\lceil\bar{n}\rceil\right\rangle\left\langle\lceil\bar{n}\rceil\right|. \tag{47}\] It is intriguing that coherent states and Fock states respectively maximize and minimize this sum for integer-valued energies, while a convex combination of the nearest-integer Fock states minimizes this sum for a noninteger energy.
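The fixed-energy extremizers can be probed directly in a truncated Fock space. The sketch below (illustrative only; the truncation dimension and \(\bar{n}=1.5\) are arbitrary choices, and \(A_{M}\) is evaluated by summing \(|\langle\hat{a}^{\dagger r}\hat{a}^{s}\rangle|^{2}\) over \((r,s)=(K+q,K-q)\) with \(1\leq r+s\leq 2M\)):

```python
import numpy as np
from math import factorial
from numpy.linalg import matrix_power

N, M, nbar = 40, 3, 1.5
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # truncated annihilation operator

def A_M(rho):
    total = 0.0
    for r in range(2 * M + 1):                 # r = K + q
        for s in range(2 * M + 1 - r):         # s = K - q, so K <= M
            if r == s == 0:
                continue                       # skip the trivial normalization term
            mom = np.trace(rho @ matrix_power(a.T, r) @ matrix_power(a, s))
            total += abs(mom) ** 2
    return total

alpha = np.sqrt(nbar)                          # coherent state with nbar = |alpha|^2
coh = np.array([np.exp(-nbar / 2) * alpha**n / np.sqrt(factorial(n)) for n in range(N)])
sup = np.zeros(N); sup[1] = sup[2] = np.sqrt(0.5)   # pure superposition, <n> = 1.5
mix = np.zeros((N, N)); mix[1, 1] = mix[2, 2] = 0.5 # the two-Fock mixture of Eq. (47)

print("coherent state :", A_M(np.outer(coh, coh)))  # largest at this energy
print("(|1>+|2>)/sqrt2:", A_M(np.outer(sup, sup)))
print("Eq. (47) mix   :", A_M(mix))                 # smallest at this energy
```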
These results should be compared against those for the sum \(\mathfrak{A}_{M}\), which was uniquely maximized by the vacuum state that minimizes the sums here and for which the states that made it vanish were Fock states with large energies. Both sums are minimized by some Fock states and both sums are maximized by some coherent states, but the scalings with energy are opposite: smaller energy leads to larger \(\mathfrak{A}_{M}\) and smaller \(A_{M}\), while larger energy leads to smaller \(\mathfrak{A}_{M}\) and larger \(A_{M}\); it just so happens that the state with smallest energy is both a Fock state and a coherent state.

## 5 Concluding remarks

Expanding the density operator in a conveniently chosen operator set has considerable advantages. By explicitly using the algebraic properties of the basis operators, the calculations are often greatly simplified. But the usefulness of the method depends on the choice of the basis operator set. The idea of irreducible tensor operators is to provide a well-developed and efficient way of using the inherent symmetry of the system. However, the irreducible-tensor machinery was missing for CV systems, in spite of the importance of these systems in modern quantum science and technology. We have provided a complete account of the use of such bases, which should constitute an invaluable tool for quantum optics.

## Acknowledgments

We thank H. de Guise and U. Seyfarth for discussions. This work received funding from the European Union's Horizon 2020 research and innovation programme project STORMYTUNE under grant Agreement No. 899587. AZG acknowledges that the NRC headquarters is located on the traditional unceded territory of the Algonquin Anishinaabe and Mohawk people, as well as support from the NSERC PDF program. LLSS acknowledges support from Ministerio de Ciencia e Innovación (Grant PID2021-127781NB-I00).

## Appendix A Transformation properties of the operators

We present in this appendix some properties of the composition law of two tensor operators. Writing the inverse operators \(\hat{\mathfrak{Z}}_{Kq}\) in the basis of monomial operators \(\hat{T}_{Kq}\) is as simple as reading off coefficients using Fig. 2. We have already identified that each inverse operator \(\hat{\mathfrak{Z}}_{Kq}\) has contributions from a finite stripe with \(K-|q|\) elements along the \(q\)th diagonal. The monomials, on the other hand, have contributions on the \(-q\)th stripe, starting from the \((K-|q|)\)th element and going to infinity. The expansion is thus given by a sum of monomials \(\hat{T}_{K,-q}\) for all possible values of \(K\) up until infinity, whose expansion coefficients can be found iteratively. The coefficient with the lowest value of \(K\) is just given by the coefficient of the top-left element of \(\hat{\mathfrak{Z}}_{Kq}\) in Fig. 2. The coefficient with the next-lowest value of \(K\) can be found iteratively by canceling the contribution from the monomial that begins at the top-left corner and adding the contribution from the monomial that begins after the top-left corner. The iteration must continue to infinity in order to make sure all of the contributions after the \((2K+1)\)th antidiagonal vanish.

Another method of finding these expansion coefficients considers the quantity \(\operatorname{Tr}(\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}})\). We already know by inspection that this will vanish unless \(q=-q^{\prime}\).
We can directly compute these overlaps by summing terms from the Fock-basis expansion of the inverse operators: \[\hat{\mathfrak{Z}}_{Kq} =\sum_{K^{\prime}q^{\prime}}\hat{T}_{K^{\prime}q^{\prime}}\operatorname{Tr}(\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}) \tag{48}\] \[\operatorname{Tr}(\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}) =\delta_{q,-q^{\prime}}\frac{(-1)^{K+K^{\prime}+2|q|}\,_{2}F_{1}(|q|-K,|q|-K^{\prime};2|q|+1;1)}{(2|q|)!(K-|q|)!(K^{\prime}-|q|)!},\] which provides a useful alternative formula for the integrals \[\operatorname{Tr}(\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}) =\frac{(-1)^{K+K^{\prime}+q+q^{\prime}}}{(K+q)!\,(K-q)!\,(K^{\prime}+q^{\prime})!\,(K^{\prime}-q^{\prime})!} \tag{49}\] \[\times\frac{1}{\pi^{3}}\int d^{2}\alpha d^{2}\beta d^{2}\gamma e^{-\frac{|\beta|^{2}+|\gamma|^{2}}{2}}\left\langle\alpha\right|\hat{D}\left(\beta\right)\hat{D}\left(\gamma\right)\left|\alpha\right\rangle\beta^{K+q}\beta^{*K-q}\gamma^{K^{\prime}+q^{\prime}}\gamma^{*K^{\prime}-q^{\prime}}.\]

Just because a particular product \(\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}\) with \(q\neq-q^{\prime}\) is traceless does not mean that it necessarily vanishes. In fact, we can directly compute the product of two such operators to find their structure constants. Each inverse operator \(\hat{\mathfrak{Z}}_{Kq}\) serves to decrease the number of photons in a state by \(2q\), so the product of two inverse operators must be a finite sum of inverse operators whose second index satisfies \(q^{\prime\prime}=q+q^{\prime}\). We start by writing \[\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}=\sum_{K^{\prime\prime}}f_{K^{\prime\prime}}(K,K^{\prime},q,q^{\prime})\hat{\mathfrak{Z}}_{K^{\prime\prime},q+q^{\prime}}. \tag{50}\] In theory, the coefficients \(f_{K^{\prime\prime}}\) are formally given by \(\operatorname{Tr}(\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}\hat{T}_{K^{\prime\prime},q+q^{\prime}})\). Inspecting the Fock-basis expansion, we find some interesting, immediate results: for example, when \(q,q^{\prime}\geq 0\) and \(2q>K^{\prime}-q^{\prime}\), all of the structure constants \(f_{K^{\prime\prime}}\) vanish and we have \(\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}=0\). Similar vanishing segments can be found for any combination of the signs of \(q\) and \(q^{\prime}\), which is not readily apparent from multiplications of the displacement-operator representations. The nonzero structure constants can be found via iteration, using Fig. 2 as a guide. Taking, for example, \(q,q^{\prime}\geq 0\), we find products of the form \[\hat{\mathfrak{Z}}_{Kq}\hat{\mathfrak{Z}}_{K^{\prime}q^{\prime}}=\sum_{n=2q+2q^{\prime}}^{\min(K^{\prime}+q^{\prime},K+q+2q^{\prime})}\frac{(-1)^{K+K^{\prime}+q-q^{\prime}}(K+q+2q^{\prime}-n)!^{-1}}{\sqrt{n!(n-2q-2q^{\prime})!(n-2q^{\prime})!(K^{\prime}+q^{\prime}-n)!}}\left|n-2q-2q^{\prime}\right\rangle\left\langle n\right|; \tag{51}\] the nonzero structure constants obey \(K^{\prime\prime}\leq K^{\prime\prime}_{\max}=\min(K+q^{\prime},K^{\prime}-q)\). The one with the largest \(K^{\prime\prime}\) is the only one that has the term \(\left|K^{\prime\prime}_{\max}-q-q^{\prime}\right\rangle\left\langle K^{\prime\prime}_{\max}+q+q^{\prime}\right|\), so its structure constant must balance the unique contribution to that term from \(\hat{\mathfrak{Z}}_{K^{\prime\prime}_{\max},\,q+q^{\prime}}\).
This means that \[f_{K^{\prime\prime}_{\max}}(K,K^{\prime},q,q^{\prime})=\frac{(-1)^{K+K^{\prime}+q-q^{\prime}}}{(K^{\prime\prime}_{\max}+q-q^{\prime})!(K^{\prime}-q-K^{\prime\prime}_{\max})!(K+q^{\prime}-K^{\prime\prime}_{\max})!}, \tag{52}\] where one of the final two terms in the denominator will simply be \(0!=1\). Then, by iteration, one can balance the contribution of \(\hat{\mathfrak{Z}}_{K^{\prime\prime}_{\max}-k,q+q^{\prime}}\) in order to find the structure constants \(f_{K^{\prime\prime}_{\max}-k}(K,K^{\prime},q,q^{\prime})\). The structure constants for the monomial operators are already known. One can compute [44] \[\hat{T}_{Kq}\hat{T}_{K^{\prime}q^{\prime}}=\sum_{n}c_{n}\hat{a}^{\dagger K+q+K^{\prime}+q^{\prime}-n}\hat{a}^{K+K^{\prime}-q-q^{\prime}-n} \tag{53}\] from normal ordering.

The inverse operators transform nicely under displacements: \[\hat{D}(\alpha)\hat{\mathfrak{Z}}_{Kq}\hat{D}(\alpha)^{\dagger} =\frac{(-1)^{K+q}}{\pi(K+q)!(K-q)!}\int d^{2}\beta e^{-|\beta|^{2}/2}e^{\alpha\beta^{*}-\alpha^{*}\beta}\hat{D}(\beta)\beta^{K+q}\beta^{*K-q}\] \[=\sum_{S=0,1/2}^{\infty}\sum_{l=-S}^{S}\alpha^{S-l}\alpha^{*S+l}\binom{K+S+q+l}{K+q}\binom{K+S-q-l}{K-q}\hat{\mathfrak{Z}}_{K+S,q+l}. \tag{54}\] These displaced operators are inverse to the displaced monomials \[\hat{D}(\alpha)\hat{T}_{Kq}\hat{D}(\alpha)^{\dagger}=\sum_{S=0,1/2}^{K}\sum_{l=-S}^{S}\binom{K+q}{S+l}\binom{K-q}{S-l}(-\alpha^{*})^{K+q-S-l}(-\alpha)^{K-q-S+l}\hat{T}_{Sl}. \tag{55}\] It is interesting to note that the displaced inverse operators are given by an infinite sum of inverse operators and the displaced monomials by a finite sum of monomials, in contrast to the number of terms \(\left|m\right\rangle\left\langle n\right|\) required to expand the original operators in the Fock basis.

## Appendix B Symmetrically ordered monomials

We briefly consider here the example of symmetrically ordered monomials \(\hat{T}^{W}_{Kq}\). We can write them explicitly in terms of the normally ordered polynomials as \[\hat{T}^{W}_{Kq}=\{\hat{a}^{\dagger K+q}\hat{a}^{K-q}\}_{\mathrm{sym}}=\sum_{n=0}^{\min(K+q,K-q)}\frac{(K+q)!(K-q)!}{2^{n}n!(K+q-n)!(K-q-n)!}\hat{T}_{K-n,q}\,, \tag{56}\] where \(\{\cdot\}_{\mathrm{sym}}\) denotes the symmetric (or Weyl) ordering of operators [44]. An important expression for the symmetrically ordered polynomials is \[\hat{T}^{W}_{Kq}=\frac{\partial^{2K}}{\partial\beta^{K+q}\partial(-\beta^{*})^{K-q}}\hat{D}(\beta)\bigg{|}_{\beta=0}. \tag{57}\] We thus look for inverse operators through \[\operatorname{Tr}(\hat{\mathfrak{L}}_{Kq}^{W}\hat{T}_{K^{\prime}q^{\prime}}^{W}) =\frac{\partial^{2K^{\prime}}}{\partial\beta^{K^{\prime}+q^{\prime}}\partial(-\beta^{*})^{K^{\prime}-q^{\prime}}}\operatorname{Tr}[\hat{\mathfrak{L}}_{Kq}^{W}\hat{D}(\beta)]\bigg{|}_{\beta=0}\] \[=\frac{1}{\pi}\int d^{2}\beta\operatorname{Tr}[\hat{D}(-\beta)\hat{\mathfrak{L}}_{Kq}^{W}]\operatorname{Tr}\left[\hat{D}(\beta)\frac{\partial^{2K^{\prime}}}{\partial\alpha^{K^{\prime}+q^{\prime}}\partial(-\alpha^{*})^{K^{\prime}-q^{\prime}}}\hat{D}(\alpha)\bigg{|}_{\alpha=0}\right]\] \[=\int d^{2}\beta\operatorname{Tr}[\hat{D}(-\beta)\hat{\mathfrak{L}}_{Kq}^{W}]\left.\frac{\partial^{2K^{\prime}}}{\partial\alpha^{K^{\prime}+q^{\prime}}\partial(-\alpha^{*})^{K^{\prime}-q^{\prime}}}\delta^{2}(\alpha+\beta)\right|_{\alpha=0}.
\tag{58}\] By inspection, we attain orthonormality when \[\operatorname{Tr}[\hat{\mathfrak{L}}_{Kq}^{W}\hat{D}(\beta)]=\frac{\beta^{K+q}(-\beta^{*})^{K-q}}{(K+q)!(K-q)!}, \tag{59}\] which corresponds to \[\hat{\mathfrak{L}}_{Kq}^{W}=\frac{1}{\pi}\int d^{2}\beta\hat{D}(-\beta)\frac{\beta^{K+q}(-\beta^{*})^{K-q}}{(K+q)!(K-q)!}=\frac{(-1)^{K+q}}{\pi(K+q)!(K-q)!}\int d^{2}\beta\hat{D}(\beta)\beta^{K+q}\beta^{*K-q}, \tag{60}\] simply differing from the expression (19) for \(\hat{\mathfrak{L}}_{Kq}\) by removing the factor of \(\exp(-|\beta|^{2}/2)\). We can find the multipoles for specific states. We simply quote the results \[\langle\alpha|\hat{\mathfrak{L}}_{Kq}^{W}|\alpha\rangle=\frac{2^{K-q+1}(-1)^{K+q}}{(K-q)!}\frac{\text{e}^{-2|\alpha|^{2}}}{\alpha^{*2q}}L_{K+q}^{(-2q)}(2|\alpha|^{2}) \tag{61}\] and \[\langle n|\hat{\mathfrak{L}}_{Kq}^{W}|n\rangle=\delta_{q0}\frac{(-1)^{K}}{K!^{2}}2\int_{0}^{\infty}r^{2K+1}\text{e}^{-r^{2}/2}L_{n}(r^{2})\,dr=\delta_{q0}\frac{(-1)^{K}2^{K+1}\,_{2}F_{1}(K+1,-n;1;2)}{K!}. \tag{62}\] For arbitrary states, we can follow the same procedure as we used for normal order; the final result is (\(m\leq n\)) \[\langle m|\hat{\mathfrak{L}}_{Kq}^{W}|n\rangle =\frac{(-1)^{K+q}}{(K+q)!\,(K-q)!}\int\frac{d^{2}\beta}{\pi}\text{e}^{-|\beta|^{2}/2}\sqrt{\frac{n!}{m!}}\beta^{m-n}L_{n}^{(m-n)}(|\beta|^{2})\beta^{K+q}\beta^{*K-q}\] \[=\delta_{n-m,2q}\frac{(-1)^{K+3q}}{(K+q)!}\sqrt{\frac{n!}{(n-2q)!}}2^{K+q+1}\,_{2}\tilde{F}_{1}(K+q+1,2q-n;2q+1;2). \tag{63}\] Finally, it is straightforward to check that the tensors \(\hat{T}_{Kq}^{W}\) are covariant under symplectic transformations [24].

## Appendix C Vacuum state as maximizing the cumulative multipolar distribution

We here provide analytical and numerical evidence that the vacuum state uniquely maximizes the cumulative multipolar distribution to arbitrary orders \(M>3/2\). First, we note by convexity that the multipole moments are all largest for pure states. We next ask how to maximize a single multipole moment \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|\). The phases can be arranged such that \(\varrho_{nm}(-1)^{n}>0\) for all \(n\) and \(m\) in Eq. (30), while each term is bounded as \(|\varrho_{nm}|\leq\sqrt{\varrho_{mm}\varrho_{nn}}\). It is tempting to use a Cauchy-Schwarz inequality to say that this expression is maximized by states with the relationship \(\varrho_{nn}=\lambda n!\) for some normalization constant \(\lambda\). This fails, however, for two related reasons: one cannot simultaneously saturate the inequality \(|\varrho_{nm}|\leq\sqrt{\varrho_{mm}\varrho_{nn}}\) for all \(m\) and \(n\) while retaining a positive density operator \(\hat{\varrho}\); similarly, the trace of \(\hat{\varrho}\) is bounded, which the Cauchy-Schwarz inequality does not take into consideration.

One can outperform this Cauchy-Schwarz bound by concentrating all of the probability in the term with the largest value of \(1/\sqrt{n!(n-2q)!(K+q-n)!^{2}}\). Taking \[\tilde{n}=\arg\max_{n}\frac{1}{\sqrt{n!\,(n-2q)!\,(K+q-n)!^{2}}}, \tag{64}\] \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|\) is maximized by any pure state with \(\varrho_{\tilde{n}\tilde{n}}=\varrho_{\tilde{n}-2q,\tilde{n}-2q}=1/2\): \[\max|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}=\frac{1}{4\,\tilde{n}!\,(\tilde{n}-2q)!\,(K+q-\tilde{n})!^{2}}. \tag{65}\] This condition changes with \(K\) and \(q\), so there will always be a competition between which terms \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}\) are maximized in the cumulative sum.
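The competition can be made concrete with a few lines of arithmetic (an illustrative sketch for integer pairs \((K,q)\) with \(q\geq 0\); ties, such as the one at \(K=1\), \(q=0\), reflect the degenerate maximizers noted in the main text):

```python
from math import factorial, sqrt

# Locate the dominant Fock component n~ of Eq. (64) and the bound (65).
def n_tilde(K, q):
    weight = lambda n: 1.0 / (sqrt(factorial(n) * factorial(n - 2 * q))
                              * factorial(K + q - n))
    best = max(range(2 * q, K + q + 1), key=weight)
    bound = 1.0 / (4 * factorial(best) * factorial(best - 2 * q)
                   * factorial(K + q - best) ** 2)
    return best, bound

for K, q in [(1, 0), (2, 0), (3, 1), (5, 0), (6, 2)]:
    nt, b = n_tilde(K, q)
    print(f"K={K}, q={q}:  n_tilde = {nt},  bound = {b:.3e}")
```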
The contributions to \(\mathfrak{A}_{M}\) by the various terms \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}\) diminish with increasing \(K\), which can be seen through the following argument. As \(M\) increases by \(1/2\), the number of new terms contributing to the sum increases quadratically: there are \(2M+1\) new multipoles to consider and each multipole is a sum of at most \(M+1\) terms. From the preceding discussion, each multipole is individually maximized when it is made from only a single term, so the cumulative multipole moment \(\mathfrak{A}_{M}\) can only increase by the addition of \(\mathcal{O}(M)\) (competing) terms. In contrast, the magnitudes of each of the multipole moments decay exponentially with increasing \(M\), due to the factorials in the denominator of Eq. (65), stemming from Eq. (30). One can, therefore, guarantee that a state maximizing \(\mathfrak{A}_{M}\) for sufficiently large \(M\) will continue to maximize \(\mathfrak{A}_{M}\) for all larger values of \(M\), at least approximately.

We can also inspect the inverse operators directly to understand the maximization properties. The multipoles being summed as an indicator of quantumness, \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}\), can be expressed as expectation values of the duplicated operator \(\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger}=\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{K,-q}\) with respect to the duplicated states \(\hat{\varrho}\otimes\hat{\varrho}\). The vacuum state \(|0\rangle\otimes|0\rangle\) is the only duplicated state that is an eigenstate of all of the duplicated operators for all \(K\) and \(q\), albeit with different eigenvalues for each operator. These operators act on Fock states as \[(\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger})\ket{n}\otimes\ket{n}\propto|n-2q\rangle\otimes|n+2q\rangle \tag{66}\] and have nonzero matrix elements given by Kronecker products of the stripes found in Fig. 2 (some combinations of \(K\), \(q\), and \(n\) cause the proportionality constant to be zero). These can be used to help find the eigenstates and eigenvalues of the summed joint operators \[\hat{\mathfrak{A}}_{M}=\sum_{K=0}^{M}\sum_{q=-K}^{K}\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger}. \tag{67}\] As mentioned previously, each individual operator \(\hat{\mathfrak{T}}_{Kq}\) only has null eigenstates, unless \(q=0\); this can be seen from the striped pattern in Fig. 2. The same is true of the joint operators \(\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger}\), but is not true of the summed joint operators \(\hat{\mathfrak{A}}_{M}\).

Figure 3: Eigenvalues of \(\hat{\mathfrak{A}}_{M}\) with the eight largest magnitudes up until \(M=10\). The negative eigenvalue with the largest magnitude corresponds to the entangled state \(|0\rangle\otimes|1\rangle-|1\rangle\otimes|0\rangle\), the positive eigenvalue with the largest magnitude is \(|0\rangle\otimes|2\rangle-c\,|1\rangle\otimes|1\rangle+|2\rangle\otimes|0\rangle\) for some positive constant \(c>1\), and the positive eigenvalue with the second largest magnitude is \(|0\rangle\otimes|0\rangle\). These dictate that the symmetric state \(|\psi\rangle\otimes|\psi\rangle\) for which the expectation value of \(\hat{\mathfrak{A}}_{M}\) is largest must be confined to the sector spanned by \(|0\rangle\), \(|1\rangle\), and \(|2\rangle\).
The latter are represented in the Fock basis by sparse antitriangular matrices, which can be visualized by Kronecker products of pairs of matrices from Fig. 2. The eigenstates and eigenvalues can thus be found directly for any \(M\). For example, the joint Fock state with maximal eigenvalue is the joint vacuum state \(\ket{0}\otimes\ket{0}\). The cumulative operators \(\hat{\mathfrak{A}}_{M}\) have positive expectation values when taken with respect to any duplicated state \(\hat{\varrho}\otimes\hat{\varrho}\). However, \(\hat{\mathfrak{A}}_{M}\) may have negative eigenvalues, because some of the eigenstates may not be of the form \(\hat{\varrho}\otimes\hat{\varrho}\). For example, the eigenstate whose eigenvalue has the largest magnitude is always found to be the maximally entangled state \((\ket{0}\otimes\ket{1}-\ket{1}\otimes\ket{0})/\sqrt{2}\), with a large, negative eigenvalue. This is orthogonal to any duplicated state \(\hat{\varrho}\otimes\hat{\varrho}\) because the latter is permutation symmetric, not antisymmetric, so we can readily ignore all contributions to \(\hat{\mathfrak{A}}_{M}\) from this part of its spectrum. Another entangled state is the eigenstate with the next largest eigenvalue: \((\ket{0}\otimes\ket{2}-c\ket{1}\otimes\ket{1}+\ket{2}\otimes\ket{0})/{\cal N}\) for some positive constants \(c\) and \({\cal N}=\sqrt{2+c^{2}}\). This eigenstate obeys permutation symmetry, so it will contribute to the multipole moments. The maximum contribution will come from a state of the form \[\ket{\psi}=\sqrt{p_{0}}\ket{0}+\sqrt{p_{1}}\mathrm{e}^{\mathrm{i}\psi}\ket{1}-\sqrt{1-p_{0}-p_{1}}\ket{2}, \tag{68}\] specifically with \(p_{0}=1-p_{0}-p_{1}\). Since \(c>1\), the contribution is uniquely maximized by \(p_{0}=0\) and \(p_{1}=1\), so again we need only consider the joint Fock states in the analysis. The overlap of \(\ket{1}\otimes\ket{1}\) with this eigenstate is \(c^{2}/{\cal N}^{2}\approx 0.621\). The eigenstate with the third largest-magnitude eigenvalue is the joint vacuum state \(\ket{0}\otimes\ket{0}\). The ratio of its eigenvalue to that with the second largest magnitude approaches \(\approx 0.647>c^{2}/{\cal N}^{2}\) as \(M\) increases. This is enough to ensure that the joint vacuum state uniquely maximizes the cumulative multipole moments for all \(M\). We stress that these optima have not been found through a numerical optimization, but rather through an exact diagonalization of the operators \(\hat{\mathfrak{A}}_{M}\), which means our analysis does not have to worry about local minima or other numerical optimization hazards.

Figure 4: Coefficients of the cumulative multipole sum for the different weights in the optimal state \(\ket{\psi_{\mathrm{opt}}}\). The coefficients rapidly converge for moderate \(M\), with those of \(p_{0}^{2}\) and \(p_{1}^{2}\) rapidly approaching each other.

How can this be made more rigorous? The eigenvalues and eigenstates can be found exactly for any value of \(M\) by diagonalizing the sparse matrix \(\hat{\mathfrak{A}}_{M}\). By \(M=9/2\), the largest eigenvalues have already converged to three significant digits and \(c^{2}/\mathcal{N}^{2}\) to four; by \(M=7\), they have all converged to six significant digits. The contributions from a new, larger value of \(K=M\) strictly reduce the magnitude of each expansion coefficient in the sum of Eq.
(31) by a multiplicative factor, ranging from \(1/(M+q)\) for the term with the smallest \(n\), which has appeared the most times in the cumulative multipole, to \(1\) for the term with the largest \(n\), which has only appeared once previously. There is also the addition of an extra term for \(\left|M-q\right\rangle\left\langle M+q\right|\), suppressed by the small factor \(1/\sqrt{(M+q)!(M-q)!}\). Each term gets divided by an increasingly large factor as \(M\) increases; the factor that decreases the slowest has already started out with a tiny magnitude due to the normalization factor \(1/\sqrt{(M+q)!(M-q)!}\). The magnitudes of the expansion coefficients in the cumulative sums decrease at least exponentially in \(\hat{\mathfrak{A}}_{M}-\hat{\mathfrak{A}}_{M-1/2}\), so the largest eigenvalues and eigenstates of \(\hat{\mathfrak{A}}_{M}\) are fixed once they are known for moderate \(M\) (see visualization in Fig. 3).

The above demonstrates that the state maximizing the cumulative multipole moments for any value of \(M\) must take the form (\(p_{0}+p_{1}+p_{2}=1\)) \[\left|\psi_{\mathrm{opt}}\right\rangle=\sqrt{p_{0}}\left|0\right\rangle+\sqrt{p_{1}}\mathrm{e}^{\mathrm{i}\psi}\left|1\right\rangle+\sqrt{p_{2}}\mathrm{e}^{\mathrm{i}\phi}\left|2\right\rangle, \tag{69}\] because such a state concentrates maximal probability in the subspace with the largest eigenvalues of \(\hat{\mathfrak{A}}_{M}\). We can compute the cumulative multipole moments for such a state, which equal \[\mathfrak{A}_{M}(\left|\psi_{\mathrm{opt}}\right\rangle) =\sum_{K\in\mathbb{Z}}^{M}\left|\frac{\varrho_{00}}{K!}-\frac{\varrho_{11}}{(K-1)!}+\frac{\varrho_{22}}{2!(K-2)!}\right|^{2}+2\frac{\left|\varrho_{20}\right|^{2}}{2(K-1)!^{2}}+\sum_{K\in\mathbb{Z}+\frac{1}{2}}^{M}2\left|\frac{\varrho_{10}}{(K+\frac{1}{2})!}-\frac{\varrho_{21}}{\sqrt{2}(K-\frac{3}{2})!}\right|^{2}. \tag{70}\]

Figure 5: Cumulative multipole sum for the optimal state \(\left|\psi_{\mathrm{opt}}\right\rangle\) as a function of the two independent probabilities \(p_{0}\) and \(p_{1}\). The multipoles to order \(M=100\) are included, by which point they have converged well beyond machine precision. It is clear that the maximum is obtained by setting all of the probability to go to either \(p_{0}\) or \(p_{1}\), with no shared probability between the two.

The relative phases that maximize this sum satisfy \(2\psi-\phi=\pi\), so we can set \(\mathrm{e}^{\mathrm{i}\psi}=1\) and \(\mathrm{e}^{\mathrm{i}\phi}=-1\) without loss of generality. There are now only two constants to optimize over in the sum \[\mathfrak{A}_{M}(|\psi_{\mathrm{opt}}\rangle) =\sum_{K\in\mathbb{Z}}^{M}\left|\frac{p_{0}}{K!}-\frac{p_{1}}{(K-1)!}+\frac{p_{2}}{2!(K-2)!}\right|^{2}+\frac{p_{0}p_{2}}{(K-1)!^{2}}+\sum_{K\in\mathbb{Z}+\frac{1}{2}}^{M}2\left|\frac{\sqrt{p_{0}p_{1}}}{(K+\frac{1}{2})!}+\frac{\sqrt{p_{1}p_{2}}}{\sqrt{2}(K-\frac{3}{2})!}\right|^{2}\,. \tag{71}\] All of the terms decay at least exponentially with \(K\), so it is again evident that optimizing the sum for moderate \(M\) will approximately optimize the sum for all larger \(M\). Computing the contributions to \(\mathfrak{A}_{M}\), we find \[\mathfrak{A}_{M}(|\psi_{\mathrm{opt}}\rangle) \approx 2.27959p_{0}^{2}+2.27959p_{1}^{2}+0.569896p_{2}^{2}-0.622103p_{0}p_{1}+2.96853p_{0}p_{2}+0.688948p_{1}p_{2}+1.94864p_{1}\sqrt{p_{0}p_{2}}, \tag{72}\] which converges to this value by \(M=7\) (see Fig. 4) and we have verified that these digits remain unchanged beyond \(M=100\).
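Equation (71) is explicit enough to optimize by brute force. The following sketch (an illustrative check; the grid resolution and \(M=40\) are arbitrary choices, far past convergence of every term) evaluates it over the probability simplex and recovers the corner maxima with value \(\approx I_{0}(2)\):

```python
import numpy as np
from math import factorial, sqrt

M = 40
inv_fact = lambda m: 0.0 if m < 0 else 1.0 / factorial(m)

def A_opt(p0, p1):
    # Eq. (71) with the optimal phases already substituted.
    p2 = max(0.0, 1.0 - p0 - p1)
    total = 0.0
    for K in range(M + 1):            # integer K
        t = p0 * inv_fact(K) - p1 * inv_fact(K - 1) + 0.5 * p2 * inv_fact(K - 2)
        total += t * t + p0 * p2 * inv_fact(K - 1) ** 2
    for j in range(M):                # half-integer K = j + 1/2
        t = sqrt(p0 * p1) * inv_fact(j + 1) + sqrt(p1 * p2 / 2.0) * inv_fact(j - 1)
        total += 2.0 * t * t
    return total

grid = [(p0, p1) for p0 in np.linspace(0, 1, 101)
        for p1 in np.linspace(0, 1, 101) if p0 + p1 <= 1.0]
best = max(grid, key=lambda p: A_opt(*p))
print(best, A_opt(*best))   # -> a corner, p0 = 1 or p1 = 1, value ~ 2.27959
```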
This means that the sum will be maximized by either \(p_{0}=1\) or \(p_{1}=1\) (visualization in Fig. 5). We can directly compute \(\mathfrak{A}_{M}(|0\rangle)-\mathfrak{A}_{M}(|1\rangle)=1/\lfloor M\rfloor!^{2}\), where \(\lfloor x\rfloor\) is the floor function that gives the greatest integer less than or equal to \(x\). This means that the vacuum state is the unique state with the maximal cumulative multipole moment for all \(M\), while its supremacy diminishes exponentially with \(M\).
2301.13369
Free boundary problem with a nonlocal kernel
In this paper, we propose a new nonlocal model for the two-phase Stefan problem, where the nonlocal version of the one-phase Stefan problem arises naturally as a special case. Among other things, we obtain the optimal condition for the pointwise convergence between the local and nonlocal one-phase Stefan problems and an equivalent characterization of this optimal condition. Moreover, we provide some sufficient criteria for the continuous expansion of free boundaries, and when the sufficient conditions are violated, we construct examples to demonstrate that jumping phenomena could happen on the free boundaries. The jumping phenomena are essentially induced by the nonlocal diffusion and thus do not appear in the classical Stefan problem.
Xinfu Chen, Fang Li, Maolin Zhou
2023-01-31T02:29:55
http://arxiv.org/abs/2301.13369v1
# Free boundary problem with a nonlocal kernel

###### Abstract

In this paper, we propose a new nonlocal model for the two-phase Stefan problem, where the nonlocal version of the one-phase Stefan problem arises naturally as a special case. Among other things, we obtain the optimal condition for the pointwise convergence between the local and nonlocal one-phase Stefan problems and an equivalent characterization of this optimal condition. Moreover, we provide some sufficient criteria for the continuous expansion of free boundaries, and when the sufficient conditions are violated, we construct examples to demonstrate that jumping phenomena could happen on the free boundaries. The jumping phenomena are essentially induced by the nonlocal diffusion and thus do not appear in the classical Stefan problem.

**Keywords**: nonlocal Stefan problem, free boundary, jumping phenomena

**MSC (2020)**: 35K57, 45K05, 35R35

## 1 Introduction

The _classical Stefan problem_ is well known to describe the evolution of the interface between two phases of a substance undergoing a phase change, for example the melting of a solid, such as ice to water. _Latent heat_, defined as the heat or energy that is absorbed or released during a phase change of a substance, acts as an energy source or sink at a moving solid-liquid interface, and the resulting boundary condition is known as the _Stefan boundary condition_. In this paper, we propose and study _the nonlocal version of the two-phase Stefan problem_ \[\begin{cases}\gamma_{t}(t,x)=a\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)dy-a\gamma(t,x)\chi_{\{\gamma>0\}}\\ \qquad\qquad+b\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)+\ell_{0})dy-b(\gamma(t,x)+\ell_{0})\chi_{\{\gamma<-\ell_{0}\}}&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n}.\end{cases} \tag{1.1}\]

On the basis of latent heat, _the nonlocal version of the one-phase Stefan problem_ is proposed as follows: \[\begin{cases}\gamma_{t}(t,x)=d\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)dy-d\gamma(t,x)\chi_{\{\gamma>0\}}&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n},\end{cases} \tag{1.6}\] where the kernel function \(k\) satisfies **(K)**, and for the initial data, we assume that \[\gamma_{0}(x)\in L^{\infty}(\mathbb{R}^{n}),\ \gamma_{0}(x)=-\ell_{0}\ \ \text{for}\ x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\ \gamma_{0}|_{\bar{\Omega}_{0}}\geq 0,\ \gamma_{0}|_{\bar{\Omega}_{0}}\not\equiv 0. \tag{1.7}\]
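To fix ideas, problem (1.6) can be simulated directly. The following minimal 1-D explicit-Euler sketch (an illustration of ours; the grid, kernel, and parameters are arbitrary choices, not taken from this paper) tracks the set \(\{\gamma(t,\cdot)\geq 0\}\), which first stays put and then expands:

```python
import numpy as np

# Minimal 1-D explicit-Euler sketch of the one-phase problem (1.6).
L, nx = 10.0, 1001
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
k = np.exp(-x**2 / 0.5)              # a Gaussian kernel satisfying (K)
k /= k.sum() * dx                    # normalize so that \int k = 1

ell0, d, dt = 1.0, 1.0, 0.01
gamma = np.where(np.abs(x) <= 1.0, 1.0, -ell0)   # initial data of type (1.7)

for it in range(1, 4001):
    gp = np.where(gamma > 0, gamma, 0.0)          # gamma * Chi_{gamma > 0}
    conv = np.convolve(k, gp, mode="same") * dx   # \int k(x-y) gamma^+(t,y) dy
    gamma += dt * d * (conv - gp)
    if it % 1000 == 0:
        om = x[gamma >= 0]
        print(f"t = {it*dt:4.0f}:  {{gamma >= 0}} ~ [{om.min():+.2f}, {om.max():+.2f}]")
```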
The essence of the nonlocal Stefan problem (1.6) is that, at time \(t\),

* if \(x\in\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)\leq 0\}\), then it can only absorb energy from outside;
* if \(x\in\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)>0\}\), then it can not only absorb energy from outside but also transfer its energy outside.

Here, the value \(\ell_{0}\) plays the role of the latent heat, \(\gamma=-\ell_{0}\) corresponds to the state of ice at zero degrees centigrade, and \(\gamma\) reaching zero indicates that sufficient energy has accumulated there for the phase change. The nonlocal version of the two-phase Stefan problem (1.1) is proposed in the same spirit. The phase change happens when \(\gamma\) reaches either zero or \(-\ell_{0}\), and the initial data \(\gamma=-\alpha_{0}\) in \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\), where \(\alpha_{0}\in(0,\ell_{0})\), corresponds to a mixture of water and ice at zero degrees centigrade. Different from the one-phase case, the initial data \(\gamma_{0}|_{\bar{\Omega}_{0}}\) could change sign, and in particular, when both \(\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)<-\ell_{0}\}\) and \(\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)>0\}\) are nonempty, in the set \(\{x\in\mathbb{R}^{n}\,|\,-\ell_{0}\leq\gamma(t,x)\leq 0\}\), energy could be absorbed and released simultaneously.

We point out that the nonlocal version of the one-phase Stefan problem was also proposed and studied in [5]. Discussions are included where the results obtained in this paper relate to those derived in [5]. Moreover, the fractional two-phase Stefan problem was treated in [2], and more generally, the two-phase Stefan problem with anomalous diffusion was investigated in [3].

_The main purpose of this paper is to study the effects of nonlocal diffusion operators on the evolution of free boundaries and to explore connections and discrepancies between the local and nonlocal Stefan problems._

First of all, we establish results about local existence and global existence for the nonlocal Stefan problems.

**Theorem 1.1**.: _Assume that in the problem (1.1), the kernel functions satisfy the assumption **(K)**, the condition (1.2) is valid and the initial data satisfies (1.3). Then the problem (1.1) admits a unique classical solution \(\gamma(t,\cdot)\in L^{\infty}(\mathbb{R}^{n})\) defined for all \(t>0\), and \(\gamma\) satisfies the estimate_ \[\operatorname*{ess}\inf_{\mathbb{R}^{n}}\gamma_{0}\leq\gamma(t,x)\leq\operatorname*{ess}\sup_{\mathbb{R}^{n}}\gamma_{0}\quad\text{for}\ t>0,\,x\in\mathbb{R}^{n}. \tag{1.8}\] _Moreover, if \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\), then \(\gamma(t,\cdot)\) is continuous in \(\bar{\Omega}_{0}\) and \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\) for any \(t>0\)._

Next, we investigate the convergence relations between the local and nonlocal Stefan problems. For simplicity, for \(\epsilon>0\), denote \[k_{\epsilon}(x)=\frac{1}{\epsilon^{n}}k(\frac{x}{\epsilon}),\ \eta_{\epsilon}(x)=\frac{1}{\epsilon^{n}}\eta(\frac{x}{\epsilon}).\]
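The role of this scaling can be checked numerically. The sketch below (a 1-D illustration with \(n=1\); the Gaussian kernel and test function are arbitrary choices of ours) confirms that \(\frac{1}{\epsilon^{2}}(k_{\epsilon}*\phi-\phi)\) approaches \(A\Delta\phi\) with \(A=\frac{1}{2n}\int_{\mathbb{R}^{n}}|z|^{2}k(z)\,dz\) (cf. Proposition 1.2 below):

```python
import numpy as np

# Numerical check that (1/eps^2)(k_eps * phi - phi) -> A phi'' in 1-D.
L, nx = 20.0, 4001
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
k = np.exp(-x**2)                    # a kernel satisfying (K)
k /= k.sum() * dx
A = 0.5 * np.sum(x**2 * k) * dx      # A = (1/2n) \int |z|^2 k(z) dz, n = 1

phi = np.exp(-x**2 / 4)              # a smooth test function
target = A * np.gradient(np.gradient(phi, dx), dx)   # A phi''

for eps in (1.0, 0.5, 0.25, 0.125):
    keps = np.exp(-(x / eps)**2) / eps   # k_eps(x) = k(x/eps)/eps for n = 1
    keps /= keps.sum() * dx
    lhs = (np.convolve(keps, phi, mode="same") * dx - phi) / eps**2
    err = np.max(np.abs(lhs - target)[np.abs(x) < 10])
    print(f"eps = {eps:5.3f}:  max error = {err:.3e}")
```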
Before we present the main results, we briefly explain what should be the natural and optimal assumptions on the nonlocal kernel functions in the study of convergence relations between models with local and nonlocal diffusion.

Define the Fourier transform of the kernel function \(k\) as follows: \[\hat{k}(\xi)=\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}k(x)dx.\] Based on the properties of the Fourier transform, one observes that for \(\phi\in L^{1}(\mathbb{R}^{n})\bigcap C^{2}(\mathbb{R}^{n})\), \[\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\left(\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\phi(y)dy-\frac{1}{\epsilon^{2}}\phi(x)\right)dx=\frac{1}{\epsilon^{2}}\left(\hat{k}(\epsilon\xi)-1\right)\hat{\phi}(\xi),\] \[\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\Delta\phi(x)dx=-|\xi|^{2}\hat{\phi}(\xi),\] and for fixed \(\xi\), \[\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\left(\hat{k}(\epsilon\xi)-1\right)\hat{\phi}(\xi)=-A|\xi|^{2}\hat{\phi}(\xi)\] under the condition \[\hat{k}(\xi)=1-A|\xi|^{2}+o(|\xi|^{2})\quad\text{as}\ \xi\to 0, \tag{1.9}\] where \(A>0\) is a constant. This observation indicates that the condition (1.9) is optimal in the study of the nonlocal approximation of the Laplacian operator. Indeed, the nonlocal approximation of the heat equation is verified under this condition. See [1] for details.

We establish an important equivalent characterization of the condition (1.9).

**Proposition 1.2**.: _Assume that \(k\) satisfies the assumption **(K)**. Then the following two statements are equivalent._

* _(i) For_ \(1\leq j,h\leq n,\ j\neq h\)_,_ \(\int_{\mathbb{R}^{n}}x_{j}k(x)dx=0,\ \int_{\mathbb{R}^{n}}x_{j}x_{h}k(x)dx=0,\ \int_{\mathbb{R}^{n}}x_{j}^{2}k(x)dx=\frac{1}{n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx<+\infty.\)
* _(ii) The Fourier transform of_ \(k\) _satisfies the assumption (1.9)._

_Moreover, \(\frac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx=A.\)_

In order not to interrupt the main theme of this paper, we leave the proof of this proposition to the appendix.

We first establish the convergence result for the two-phase Stefan problem. Let \(\gamma_{\epsilon}\) be the solution of the following problem: \[\begin{cases}(\gamma_{\epsilon})_{t}(t,x)=\frac{a}{\epsilon^{2}}\int_{\{\gamma_{\epsilon}>0\}}k_{\epsilon}(x-y)\gamma_{\epsilon}(t,y)dy-\frac{a}{\epsilon^{2}}\gamma_{\epsilon}(t,x)\chi_{\{\gamma_{\epsilon}>0\}}\\ \qquad+\frac{b}{\epsilon^{2}}\int_{\{\gamma_{\epsilon}<-\ell_{0}\}}\eta_{\epsilon}(x-y)(\gamma_{\epsilon}(t,y)+\ell_{0})dy-\frac{b}{\epsilon^{2}}(\gamma_{\epsilon}(t,x)+\ell_{0})\chi_{\{\gamma_{\epsilon}<-\ell_{0}\}}&t>0,\ x\in\mathbb{R}^{n},\\ \gamma_{\epsilon}(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n}.\end{cases} \tag{1.10}\]

**Theorem 1.3**.: _In the problem (1.10), assume that the conditions of Theorem 1.1 are valid. In addition, assume that the kernel functions satisfy Proposition 1.2(i) and_ \[\int_{\mathbb{R}^{n}}|x|^{3}k(x)dx<+\infty. \tag{1.11}\]
_Then for any given \(T>0\) and \(0<t<T\), \(\gamma_{\epsilon}(t,\cdot)\) converges to \(\gamma(t,\cdot)\) in \(L^{1}_{loc}(\mathbb{R}^{n})\) as \(\epsilon\to 0^{+}\), where \(\gamma\in L^{\infty}((0,T)\times\mathbb{R}^{n})\) is the generalized solution of_ \[\begin{cases}\Delta u\in\beta(u)_{t},\\ \beta(u)(0,x)=\gamma_{0}(x),\end{cases} \tag{1.12}\] _where \(A=\frac{a}{2n}\int_{\mathbb{R}^{n}}|z|^{2}k(z)dz,\ B=\frac{b}{2n}\int_{\mathbb{R}^{n}}|z|^{2}\eta(z)dz\),_ \[u=\begin{cases}A\gamma&\text{for }\gamma>0\\ 0&\text{for }-\ell_{0}\leq\gamma\leq 0\\ B(\gamma+\ell_{0})&\text{for }\gamma<-\ell_{0}\end{cases}\] _and \(\beta(u)\) is a multivalued mapping defined as follows:_ \[\beta(u)=\begin{cases}\frac{1}{B}u-\ell_{0}&\text{for }u<0\\ [-\ell_{0},0]&\text{for }u=0\\ \frac{1}{A}u&\text{for }u>0.\end{cases}\]

Thanks to Proposition 1.2, one sees that in Theorem 1.3, the condition (1.11) is the only extra assumption needed in the study of convergence relations. Obviously, kernel functions that are radially symmetric and compactly supported satisfy the extra condition (1.11).

Next, the convergence relation between the local and nonlocal one-phase Stefan problems is verified under the optimal condition (1.9). Similar to (1.10), we rescale the problem (1.6) as follows: \[\begin{cases}\gamma_{\epsilon t}(t,x)=\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(t,y)dy-\frac{1}{\epsilon^{2}}\gamma_{\epsilon}^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma_{\epsilon}(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n},\end{cases} \tag{1.13}\] where for simplicity, we set \(d=1\) and denote \[\gamma_{\epsilon}^{+}(t,x)=\gamma_{\epsilon}(t,x)\chi_{\{\gamma_{\epsilon}(t,x)>0\}}.\]

**Theorem 1.4**.: _In the problem (1.13), assume that the kernel function satisfies the assumption **(K)**, the condition (1.2) is valid and the initial data satisfies (1.7). Also, assume that the Fourier transform of \(k\) satisfies (1.9). Then for any given \(T>0\), \(\gamma_{\epsilon}^{+}\) converges to the solution \(\theta\) of the one-phase Stefan problem (1.5) in the following sense:_ \[\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\to\int_{\min\{s(x),t\}}^{t}\theta(\tau,x)d\tau\ \ \text{a.e. in }(0,T)\times\mathbb{R}^{n},\] _where we set \(d=A\) in the problem (1.5)._

The convergence relation between the local and nonlocal one-phase Stefan problems was also studied in [5] under the additional conditions that the kernel function is radially symmetric and compactly supported.

From now on, we mainly focus on the nonlocal one-phase Stefan problem and derive some interesting and fundamental properties related to _expansion, boundedness and continuity_ of free boundaries in the nonlocal one-phase Stefan problem (1.6). Due to the lack of regularity in the nonlocal Stefan problems, we will impose the extra condition that \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\) on the initial data \(\gamma_{0}\) when discussing the properties of free boundaries.

**Theorem 1.5**.: _In the problem (1.6), assume that the kernel function satisfies the assumption **(K)**, the condition (1.2) is valid, and the initial data \(\gamma_{0}\) satisfies (1.7) and the extra condition that \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\). We have the following statements._

1. Expansion: _there exists_ \(t_{0}>0\) _such that_ \(\Omega(t)=\Omega(0)\) _for_ \(0\leq t\leq t_{0}\) _and_ \(\Omega(t_{1})\subseteq\Omega(t_{2})\) _for_ \(0<t_{1}<t_{2}\)_._
2. Boundedness: _there exists_ \(R>0\)_, which depends on the initial data only, such that_ \(\Omega(t)\subseteq B_{R}(0)\) _for all_ \(t>0\)_._

Theorem 1.5(i) is also proved in [5], where the kernel function is assumed to be compactly supported and radially symmetric. For the nonlocal two-phase Stefan problem (1.1), due to the interaction between \(\Omega(t)\) and \(\Omega^{-}(t)\) denoted in (1.4), Theorem 1.5(i) might not hold. However, thanks to the comparison principle, Theorem 1.5(ii) remains true for both \(\Omega(t)\) and \(\Omega^{-}(t)\).

We further investigate the continuity of the free boundary in the nonlocal one-phase Stefan problem. For convenience, we prepare an extra assumption about the kernel function as follows:

**(K1)**: \(k(x)\) is radially symmetric and decreasing in \(|x|\).

**Theorem 1.6**.: _Under the conditions of Theorem 1.5, if we additionally assume that \(\bar{\Omega}_{0}\) is convex and the assumption **(K1)** is valid, then \(\Omega(t)\) expands continuously._

In Theorem 1.6, extra conditions on the kernel function \(k(x)\) and the initial domain \(\Omega_{0}\) are needed to guarantee the continuous expansion of the free boundary \(\partial\Omega(t)\). A natural question is what happens without these extra conditions. Two examples are constructed to show that when either the extra condition on the kernel function or that on the initial domain \(\Omega_{0}\) in Theorem 1.6 is violated, a new portion of \(\Omega(t)\) could emerge at a distant place. This is the so-called _jumping phenomena_. Since nonlocal dispersal describes movement between non-adjacent spatial locations, the jumping phenomena are natural. They also reflect the essential differences between local and nonlocal dispersal operators. We also point out that, if the initial data is allowed to be nonconstant outside \(\bar{\Omega}_{0}\), then similar to [5, Theorem 4.6], where the kernel function is assumed to be compactly supported and radially symmetric, jumping phenomena could happen for properly chosen initial data. Indeed, the conclusion is valid as long as the kernel function satisfies the assumption **(K)**. We omit the proof since it is similar.

At the end, the main features of our paper are summarized as follows.

* Formulation of a new nonlocal model for the two-phase Stefan problem, where the nonlocal version of the one-phase Stefan problem arises naturally as a special case.
* The optimal condition (1.9) for the pointwise convergence between the local and nonlocal one-phase Stefan problems in Theorem 1.4.
* An equivalent characterization between the condition (i) about the kernel function and the condition (ii), i.e., (1.9), about the Fourier transform of the kernel function in Proposition 1.2.
* For local and global existence in Theorem 1.1, and for expansion and boundedness of free boundaries in Theorem 1.5, we only require the basic assumption **(K)** on the kernel functions.
* The sufficient conditions derived in Theorem 1.6 for the continuous expansion of the free boundary when the initial data outside the initial domain \(\Omega_{0}\) is assumed to be a negative constant. Counterexamples are constructed to demonstrate that the jumping phenomena could happen when the sufficient conditions are violated.

This paper is organized as follows. Theorem 1.1 and some preliminary results for the problem (1.1) are established in Section 2. In Section 3, we focus on the convergence relations between the local and nonlocal Stefan problems and present the proofs of Theorem 1.3 and Theorem 1.4. In Section 4, we verify Theorems 1.5 and 1.6 concerning the properties of the free boundary of the nonlocal Stefan problem, and we construct two examples where the jumping phenomena happen when one of the additional assumptions in Theorem 1.6 is violated. At the end, the proof of Proposition 1.2 is included in the appendix.
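Before proceeding, the jumping phenomena can already be observed in a crude numerical experiment (an illustrative sketch of ours, not one of the constructions of Section 4): evolving (1.6) with a radially symmetric kernel concentrated on an annulus, which is not decreasing in \(|x|\) and hence violates **(K1)**, lets \(\{\gamma>0\}\) nucleate away from \(\bar{\Omega}_{0}\).

```python
import numpy as np

# Illustrative 1-D sketch: a kernel peaked at |x| = 2.5 violates (K1),
# and {gamma > 0} nucleates at a distance from the initial domain [-1, 1].
L, nx = 12.0, 1201
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
k = np.exp(-(np.abs(x) - 2.5)**2 / 0.05)   # symmetric, but not decreasing
k /= k.sum() * dx

ell0, d, dt = 1.0, 1.0, 0.02
gamma = np.where(np.abs(x) <= 1.0, 4.0, -ell0)

for it in range(1, 1501):
    gp = np.where(gamma > 0, gamma, 0.0)
    gamma += dt * d * (np.convolve(k, gp, mode="same") * dx - gp)
    if it % 500 == 0:
        edges = x[np.flatnonzero(np.diff((gamma > 0).astype(int)))]
        print(f"t = {it*dt:4.0f}: {{gamma > 0}} changes sign near x ~ {np.round(edges, 2)}")
```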
## 2 Wellposedness and preliminaries

### Local and global existence

We first verify the local and global existence for the nonlocal version of the two-phase Stefan problem (1.1). The same arguments can be applied verbatim to the nonlocal version of the one-phase Stefan problem (1.6).

Proof of Theorem 1.1.: Denote \(M_{0}=\|\gamma_{0}\|_{L^{\infty}(\mathbb{R}^{n})}\), \(\mathbb{Y}=L^{\infty}(\mathbb{R}^{n})\), and for \(s>0\), \[\mathbb{X}_{s}=\left\{\phi\in C([0,s),\mathbb{Y})\,\big{|}\,\phi(0,\cdot)=\gamma_{0}(\cdot),\ \|\phi(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq 2M_{0},\,t\in[0,s)\right\},\] and \[\|\phi\|_{C([0,s),\mathbb{Y})}=\sup_{0\leq t<s}\|\phi(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}.\] For \(\phi\in\mathbb{X}_{s}\), \(0<t<s\), define \[\mathcal{T}\phi =\gamma_{0}(x)+a\int_{0}^{t}\int_{\{\phi>0\}}k(x-y)\phi(\tau,y)dyd\tau-a\int_{0}^{t}\phi(\tau,x)\chi_{\{\phi>0\}}d\tau\] \[+b\int_{0}^{t}\int_{\{\phi<-\ell_{0}\}}\eta(x-y)(\phi(\tau,y)+\ell_{0})dyd\tau-b\int_{0}^{t}(\phi(\tau,x)+\ell_{0})\chi_{\{\phi<-\ell_{0}\}}d\tau.\] Then it is routine to show that \(\mathcal{T}\phi\in C([0,s),\mathbb{Y})\), \(\mathcal{T}\phi(0,\cdot)=\gamma_{0}(\cdot)\) and \[\|\mathcal{T}\phi\|_{C([0,s),L^{\infty}(\mathbb{R}^{n}))}\leq M_{0}+2as\|\phi\|_{C([0,s),\mathbb{Y})}+2bs\|\phi\|_{C([0,s),\mathbb{Y})}\leq M_{0}+4s\left(a+b\right)M_{0}.\] Moreover, for \(\phi_{1},\phi_{2}\in\mathbb{X}_{s}\), \[\|\mathcal{T}\phi_{1}-\mathcal{T}\phi_{2}\|_{C([0,s),\mathbb{Y})}\leq 2as\|\phi_{1}-\phi_{2}\|_{C([0,s),\mathbb{Y})}+2bs\|\phi_{1}-\phi_{2}\|_{C([0,s),\mathbb{Y})}.\] Thus it is obvious that there exists \(t_{0}>0\), which depends only on \(a\), \(b\) and \(M_{0}\) and is sufficiently small, such that for \(0<s\leq t_{0}\), \(\mathcal{T}\) maps \(\mathbb{X}_{s}\) into \(\mathbb{X}_{s}\) and \(\mathcal{T}\) is a contraction mapping in \(\mathbb{X}_{s}\). Hence by the contraction mapping theorem, for \(0<s\leq t_{0}\), there exists a unique \(\gamma\in\mathbb{X}_{s}\) satisfying \[\gamma(t,x) =\gamma_{0}(x)+a\int_{0}^{t}\int_{\{\gamma>0\}}k(x-y)\gamma(\tau,y)dyd\tau-a\int_{0}^{t}\gamma(\tau,x)\chi_{\{\gamma>0\}}d\tau\] \[+b\int_{0}^{t}\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)(\gamma(\tau,y)+\ell_{0})dyd\tau-b\int_{0}^{t}(\gamma(\tau,x)+\ell_{0})\chi_{\{\gamma<-\ell_{0}\}}d\tau\] for \(0<t<s\), \(x\in\mathbb{R}^{n}\). Thus, obviously \(\gamma\) is the unique solution to the problem (1.1).

Let \((0,T_{\max})\) denote the maximal time interval for which the solution \(\gamma(t,x)\) of the problem (1.1) exists. It remains to show \(T_{\max}=+\infty\). For this purpose, it suffices to show that \(\|\gamma(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\) is bounded in \((0,T_{\max})\). To be more specific, we claim that \(\gamma\) _satisfies the estimate (1.8) in \((0,T_{\max})\)._

Fix any \(0<T<T_{\max}\). First, assume that the kernel functions \(k\) and \(\eta\) are compactly supported. Then since \(\bar{\Omega}_{0}\) is bounded, it is standard to show that \(\{\gamma(t,x)\geq 0\}\) and \(\{\gamma(t,x)\leq-\ell_{0}\}\) remain bounded for \(0<t<T\).
Notice that if \(|\{\gamma_{0}(x)>0\}|=0\), then by the equation satisfied by \(\gamma(t,x)\), one has \[\gamma(t,x)\leq\operatorname*{ess}\sup_{\mathbb{R}^{n}}\gamma_{0},\ \ 0<t<T,\,x\in\mathbb{R}^{n}.\] Now we consider the case that \(|\{\gamma_{0}(x)>0\}|>0\). Based on the problem (1.1), for any \(1<p<+\infty\), \(0<t<T\), one has \[(\gamma^{+})^{p-1}\gamma_{t}(t,x)\leq(\gamma^{+})^{p-1}\left(a\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)dy-a\gamma(t,x)\chi_{\{\gamma>0\}}\right).\] Then direct computation yields that for \(0<t<T\), \[\frac{1}{p}\frac{d}{dt}\int_{\mathbb{R}^{n}}(\gamma^{+}(t,x))^{p}dx\leq a\int_{\mathbb{R}^{n}}(\gamma^{+}(t,x))^{p-1}\left(\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(t,y)dy-\gamma^{+}(t,x)\right)dx\] \[\leq a\int_{\mathbb{R}^{n}}(\gamma^{+}(t,x))^{p-1}\left(\int_{\mathbb{R}^{n}}k(x-y)dy\right)^{\frac{p-1}{p}}\left(\int_{\mathbb{R}^{n}}k(x-y)(\gamma^{+}(t,y))^{p}dy\right)^{\frac{1}{p}}dx-a\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}^{p}\] \[\leq a\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}^{p-1}\left(\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}k(x-y)(\gamma^{+}(t,y))^{p}dydx\right)^{\frac{1}{p}}-a\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}^{p}\leq 0.\] Hence for any \(1<p<+\infty\), \(0<t<T\), \[\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}\leq\|\gamma^{+}(0,\cdot)\|_{L^{p}(\mathbb{R}^{n})},\] and it follows that \[\gamma(t,x)\leq\operatorname*{ess}\sup_{\mathbb{R}^{n}}\gamma_{0},\ \ 0<t<T,\,x\in\mathbb{R}^{n}.\] Similar arguments can be applied to \((\gamma(t,x)+\ell_{0})^{-}\) to derive that \[\gamma(t,x)\geq\operatorname*{ess}\inf_{\mathbb{R}^{n}}\gamma_{0},\ \ 0<t<T,\,x\in\mathbb{R}^{n}.\] The claim is proved for compactly supported kernel functions since \(T\in(0,T_{\max})\) is arbitrary.

Now consider the case that the kernel functions \(k\) and \(\eta\) are not compactly supported. Then there exists a sequence of kernels \(k_{j}\), \(\eta_{j}\), \(j\geq 1\), which are compactly supported, satisfy the assumption **(K)**, and \[\lim_{j\to\infty}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}=0,\ \lim_{j\to\infty}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}=0. \tag{2.1}\] Let \(\gamma_{j}\) denote the solution to the problem (1.1) with \(k\) replaced by \(k_{j}\) and \(\eta\) replaced by \(\eta_{j}\). Set \(w_{j}=\gamma-\gamma_{j}\), \(j\geq 1\).
Then \(w_{j}\) satisfies \[\begin{cases}(w_{j})_{t}(t,x)=a\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)dy-a\gamma(t,x)\chi_{\{\gamma>0\}}\\ \qquad-a\int_{\{\gamma_{j}>0\}}k_{j}(x-y)\gamma_{j}(t,y)dy+a\gamma_{j}(t,x)\chi_{\{\gamma_{j}>0\}}\\ \qquad+b\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)+\ell_{0})dy-b(\gamma(t,x)+\ell_{0})\chi_{\{\gamma<-\ell_{0}\}}\\ \qquad-b\int_{\{\gamma_{j}<-\ell_{0}\}}\eta_{j}(x-y)(\gamma_{j}(t,y)+\ell_{0})dy+b(\gamma_{j}(t,x)+\ell_{0})\chi_{\{\gamma_{j}<-\ell_{0}\}}&0<t<T_{\max},\ x\in\mathbb{R}^{n},\\ w_{j}(0,x)=0&x\in\mathbb{R}^{n}.\end{cases}\] Then for \(w_{j}>0\), direct computation yields that \[(w_{j})_{t}(t,x) \leq a\int_{\{\gamma>0\}}k(x-y)\left(\gamma(t,y)-\gamma_{j}(t,y)\right)dy+a\int_{\{\gamma>0\}}k(x-y)\gamma_{j}(t,y)dy\] \[-a\int_{\{\gamma_{j}>0\}}\left(k_{j}(x-y)-k(x-y)\right)\gamma_{j}(t,y)dy-a\int_{\{\gamma_{j}>0\}}k(x-y)\gamma_{j}(t,y)dy\] \[+b\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)+\ell_{0})dy-b\int_{\{\gamma_{j}<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)+\ell_{0})dy\] \[+b\int_{\{\gamma_{j}<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)-\gamma_{j}(t,y))dy\] \[+b\int_{\{\gamma_{j}<-\ell_{0}\}}(\eta(x-y)-\eta_{j}(x-y))(\gamma_{j}(t,y)+\ell_{0})dy\] \[\leq (a+b)\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}+aM_{0}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}+bM_{0}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})},\] where the last inequality follows from the fact that \(\gamma_{j}\) satisfies the estimate (1.8). Similarly for \(w_{j}<0\), we have \[(-w_{j})_{t}(t,x)\leq(a+b)\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}+aM_{0}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}+bM_{0}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}.\] The above two inequalities indicate that for \(0<t<T_{\max}\), \[|w_{j}(t,x)|=\lim_{\delta\to 0}\int_{0}^{t}\frac{\partial}{\partial\tau}\left[w_{j}^{2}(\tau,x)+\delta^{2}\right]^{\frac{1}{2}}\,d\tau \tag{2.2}\] \[= \lim_{\delta\to 0}\int_{0}^{t}\frac{w_{j}(\tau,x)}{\left[w_{j}^{2}(\tau,x)+\delta^{2}\right]^{\frac{1}{2}}}\frac{\partial}{\partial\tau}w_{j}(\tau,x)d\tau\] \[\leq (a+b)\int_{0}^{t}\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}(\tau)d\tau+aM_{0}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}t+bM_{0}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}t.\] Denote \[h_{j}(t)=\int_{0}^{t}\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}(\tau)d\tau,\] then (2.2) implies that for \(0<t<T_{\max}\), \[h_{j}^{\prime}(t)\leq(a+b)h_{j}(t)+aM_{0}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}t+bM_{0}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}t.\] Direct computation yields that for \(0<t<T_{\max}\), \[h_{j}(t)\leq\frac{M_{0}}{(a+b)^{2}}e^{(a+b)t}\left(a\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}+b\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}\right),\] which, together with (2.2), yields that for \(0<t<T_{\max}\), \[\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}(t)\leq M_{0}\left(\frac{1}{a+b}e^{(a+b)t}+t\right)\left(a\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}+b\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}\right). \tag{2.3}\] This, together with (2.1) and the fact that \(\gamma_{j}\) satisfies the estimate (1.8) for all \(j\geq 1\), implies the desired claim for general kernel functions under the assumption **(K)**.

Finally, it is routine to verify that if \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\), then \(\gamma(t,\cdot)\) is continuous in \(\bar{\Omega}_{0}\) and \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\) for any \(t>0\).

### Preliminaries

We first present the comparison principle for the nonlocal version of the two-phase Stefan problem (1.1) and omit the proof since it is standard.
Similarly, the comparison principle is also valid for the nonlocal version of the one-phase Stefan problem (1.6).

**Proposition 2.1**.: _Assume that the conditions of Theorem 1.1 are valid. Also assume that \(\gamma_{0}^{*}\in L^{\infty}(\mathbb{R}^{n})\), \(\gamma_{0}^{*}(x)=-\alpha_{0}^{*}\) for \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\ \alpha_{0}^{*}\in(0,\ell_{0})\). Let \(\gamma^{*}\) denote the solution to the problem (1.1) with initial data \(\gamma_{0}^{*}\). If \(\gamma_{0}^{*}\geq\gamma_{0}\), then \(\gamma^{*}\geq\gamma\) for all \(t>0\)._

Moreover, we present a type of strong maximum principle for the nonlocal version of the one-phase Stefan problem (1.6).

**Proposition 2.2**.: _Under the conditions of Theorem 1.5, given \(s\geq 0\), we have \(\gamma(t,x)>0\) in \(\Omega(s)\) for \(t\geq s\)._

Proof.: First, we claim that _if \(x\in\{x\in\Omega(s)\,|\,\gamma(s,x)>0\}\), then for \(t>s\), \(\gamma(t,x)>0\)_. Due to the continuity of the solution in \(t\), we only need to consider the case that \(s>0\). According to \[\gamma_{t}(t,x)=d\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)dy-d\gamma(t,x)\chi_{\{\gamma>0\}}\geq-d\gamma(t,x)\chi_{\{\gamma>0\}},\] the claim follows immediately.

Next we consider the initial domain \(\bar{\Omega}_{0}\). Set \[\gamma_{0\delta}(x)=\begin{cases}\gamma_{0}(x)+\delta&x\in\bar{\Omega}_{0},\\ \gamma_{0}(x)&x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\end{cases}\] where \(\delta>0\), and let \(\gamma_{\delta}\) denote the solution to the problem (1.6) with the initial data (1.7), where \(\gamma_{0}\) is replaced by \(\gamma_{0\delta}\). Thanks to the above claim, one sees that \(\gamma_{\delta}(t,x)>0\) for \(t>0\), \(x\in\bar{\Omega}_{0}\). By letting \(\delta\to 0^{+}\), it is routine to derive that \(\gamma(t,x)\geq 0\) for \(t>0\), \(x\in\bar{\Omega}_{0}\), i.e., \(\bar{\Omega}_{0}\subseteq\Omega(t)\) for \(t>0\). Moreover, since \(\gamma_{0}|_{\bar{\Omega}_{0}}\geq 0,\ \gamma_{0}|_{\bar{\Omega}_{0}}\not\equiv 0\), the claim at the beginning indicates that \(\{x\in\bar{\Omega}_{0}\,|\,\gamma(t,x)>0\}\) is not empty for \(t>0\). Suppose that there exists \(t_{0}>0\) such that \(\gamma(t_{0},x)\) touches zero somewhere in \(\bar{\Omega}_{0}\). By choosing \[x_{0}\in\partial\{x\in\bar{\Omega}_{0}\,|\,\gamma(t_{0},x)>0\}\bigcap\{x\in\bar{\Omega}_{0}\,|\,\gamma(t_{0},x)=0\},\] we have \[0\geq\gamma_{t}(t_{0},x_{0})=d\int_{\{\gamma(t_{0},y)>0\}}k(x_{0}-y)\gamma(t_{0},y)dy>0,\] where the strict inequality is due to the assumption **(K)** and the choice of \(x_{0}\). This is a contradiction and thus \(\gamma(t,x)>0\) for \(t>0\), \(x\in\bar{\Omega}_{0}\).

It remains to consider the set \(\{x\in\Omega(s)\setminus\bar{\Omega}_{0}\,|\,\gamma(s,x)=0\}\), when it is not empty. Fix \(x^{*}\in\{x\in\Omega(s)\setminus\bar{\Omega}_{0}\,|\,\gamma(s,x)=0\}\) and let \(s_{1}\) denote the moment when \(\gamma(t,x^{*})\) first touches zero. Obviously \(s_{1}\leq s\) and by the equation satisfied by \(\gamma\), we have \[\ell_{0}=\int_{0}^{s_{1}}d\int_{\{\gamma(t,y)>0\}}k(x^{*}-y)\gamma(t,y)dydt.\] Then obviously there exists \(t_{1}\in(0,s_{1})\) such that \[\int_{\{\gamma(t_{1},y)>0\}}k(x^{*}-y)\gamma(t_{1},y)dy>0. \tag{2.4}\] We claim that _for any \(t>t_{1}\), \(\int_{\{\gamma(t,y)>0\}}k(x^{*}-y)\gamma(t,y)dy>0\)_. Suppose that the claim is not true, i.e., there exists \(t_{2}>t_{1}\) such that \[\int_{\{\gamma(t_{2},y)>0\}}k(x^{*}-y)\gamma(t_{2},y)dy=0.\] This implies that \(\gamma(t_{2},y)\leq 0\) in the set \(\{y\in\mathbb{R}^{n}\,|\,k(x^{*}-y)>0\}\).
Again thanks to the claim at the beginning, we have \(\gamma(t_{1},y)\leq 0\) in the set \(\{y\in\mathbb{R}^{n}\,|\,k(x^{*}-y)>0\}\), which contradicts (2.4). The claim is proved.
According to this claim and the choice of \(s\), \(x^{*}\), one sees that
\[\gamma_{t}(s,x^{*})=d\int_{\{\gamma(s,y)>0\}}k(x^{*}-y)\gamma(s,y)dy>0.\]
Hence for \(t>s\geq 0\), \(\gamma(t,x^{*})>0\).
Finally, some a priori estimates are established for the nonlocal version of the two-phase Stefan problem.
**Lemma 2.3**.: _Under the assumptions of Theorem 1.1, there exists a constant \(C_{1}>0\), which depends on the initial data only, such that for given \(1\leq p\leq\infty\), we have_
\[\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}\leq C_{1},\ \|(\gamma(t,\cdot)+\ell_{0})^{-}\|_{L^{p}(\mathbb{R}^{n})}\leq C_{1},\ \ t>0.\]
Proof.: Notice that if \(\phi\in L^{1}(\mathbb{R}^{n})\bigcap L^{\infty}(\mathbb{R}^{n})\), then for any \(p>1\), \(\phi\in L^{p}(\mathbb{R}^{n})\) and
\[\|\phi\|_{L^{p}(\mathbb{R}^{n})}\leq\left(\|\phi\|_{L^{\infty}(\mathbb{R}^{n})}^{p-1}\|\phi\|_{L^{1}(\mathbb{R}^{n})}\right)^{\frac{1}{p}}\leq\left(\|\phi\|_{L^{\infty}(\mathbb{R}^{n})}+1\right)\left(\|\phi\|_{L^{1}(\mathbb{R}^{n})}+1\right).\]
Hence it suffices to verify the statements for \(p=1\) and \(p=\infty\). Indeed, when \(p=\infty\), the conclusion is obvious due to Theorem 1.1, i.e.,
\[\|\gamma(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq\|\gamma_{0}\|_{L^{\infty}(\mathbb{R}^{n})}. \tag{2.5}\]
In order to estimate \(\|\gamma^{+}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\), we first consider the case that both \(k\) and \(\eta\) are compactly supported. Let \(\hat{\gamma}(t,x)\) denote the solution to the problem (1.1) with the initial data replaced by
\[\hat{\gamma}(0,x)=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})},\ x\in\bar{\Omega}_{0},\ \ \hat{\gamma}(0,x)=-\alpha_{0},\ x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}.\]
By Theorem 1.1 and Proposition 2.1, we have
\[\hat{\gamma}(t,x)\geq\gamma(t,x),\ -\alpha_{0}\leq\hat{\gamma}(t,x)\leq\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})},\ \ t>0,\,x\in\mathbb{R}^{n}. \tag{2.6}\]
Since \(\bar{\Omega}_{0}\) is bounded and \(k\), \(\eta\) are compactly supported, for \(t>0\), it is routine to show that \(\{\hat{\gamma}(t,x)\geq 0\}\) remains bounded. Set
\[\Sigma^{+}(t)=\bigcup_{0<\tau<t}\{\hat{\gamma}(\tau,x)\geq 0\}.\]
Then by direct computation, for \(0<\tau<t\),
\[\int_{\Sigma^{+}(t)}\hat{\gamma}_{\tau}(\tau,x)dx\leq a\int_{\Sigma^{+}(t)}\int_{\mathbb{R}^{n}}k(x-y)\hat{\gamma}^{+}(\tau,y)dydx-a\int_{\Sigma^{+}(t)}\hat{\gamma}^{+}(\tau,x)dx\leq 0.\]
Thus
\[0\leq\int_{\Sigma^{+}(t)}\hat{\gamma}(t,x)dx\leq\int_{\Sigma^{+}(t)}\hat{\gamma}(0,x)dx=-\alpha_{0}\ |\Sigma^{+}(t)\setminus\bar{\Omega}_{0}|+\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}\ |\bar{\Omega}_{0}|,\]
which implies that
\[|\{\hat{\gamma}(t,x)\geq 0\}|\leq|\Sigma^{+}(t)|\leq\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}}{\alpha_{0}}\right)|\bar{\Omega}_{0}|.\]
Hence, thanks to (2.6), for any given \(t>0\),
\[\|\gamma^{+}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\leq\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}}{\alpha_{0}}\right)|\bar{\Omega}_{0}|. \tag{2.7}\]
Now consider the case that the kernel functions \(k\) and \(\eta\) satisfy the assumption **(K)**, but are not compactly supported.
Then there exists a series of kernel functions \(k_{j}\), \(\eta_{j}\), \(j\geq 1\), which are compactly supported, satisfy the assumption **(K)**, and
\[\lim_{j\to\infty}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}=0,\ \lim_{j\to\infty}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}=0.\]
Let \(\gamma_{j}\) denote the solution to the problem (1.1) with \(k\) replaced by \(k_{j}\) and \(\eta\) replaced by \(\eta_{j}\). Similar to the proof of Theorem 1.1, we have
\[\lim_{j\to\infty}\|\gamma_{j}^{+}-\gamma^{+}\|_{L^{\infty}(\mathbb{R}^{n})}\leq\lim_{j\to\infty}\|\gamma_{j}-\gamma\|_{L^{\infty}(\mathbb{R}^{n})}=0.\]
This, together with (2.7), implies that for any \(R>0\),
\[\int_{B_{R}(0)}\gamma^{+}(t,x)dx=\lim_{j\to\infty}\int_{B_{R}(0)}\gamma_{j}^{+}(t,x)dx\leq\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}}{\alpha_{0}}\right)|\bar{\Omega}_{0}|.\]
Since \(R\) is arbitrary, for any given \(t>0\),
\[\|\gamma^{+}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\leq\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}}{\alpha_{0}}\right)|\bar{\Omega}_{0}|.\]
Obviously, \(\|(\gamma(t,\cdot)+\ell_{0})^{-}\|_{L^{1}(\mathbb{R}^{n})}\) can be estimated in a similar way. The proof is complete.
**Lemma 2.4**.: _Under the assumptions of Theorem 1.1, we have_
\[\int_{\mathbb{R}^{n}}|\gamma(t,x+h)-\gamma(t,x)|\,dx\leq\int_{\mathbb{R}^{n}}|\gamma_{0}(x+h)-\gamma_{0}(x)|\,dx,\ \ t>0,\,h\in\mathbb{R}^{n}.\]
Proof.: First of all, fix \(x\), \(h\in\mathbb{R}^{n}\). For \(\delta\neq 0\), introduce \(\mu_{\delta}(X)=\left(X^{2}+\delta^{2}\right)^{\frac{1}{2}}.\) According to the problem (1.1) satisfied by \(\gamma\), it is routine to verify that
\[\frac{\partial}{\partial t}\mu_{\delta}(\gamma(t,x+h)-\gamma(t,x))\]
\[= \frac{\gamma(t,x+h)-\gamma(t,x)}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\left(\gamma(t,x+h)-\gamma(t,x)\right)_{t}\]
\[\leq \frac{|\gamma(t,x+h)-\gamma(t,x)|}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\]
\[\times\left(a\int_{\mathbb{R}^{n}}k(x-y)|\gamma^{+}(t,y+h)-\gamma^{+}(t,y)|dy-a|\gamma^{+}(t,x+h)-\gamma^{+}(t,x)|\right)\]
\[+\frac{|\gamma(t,x+h)-\gamma(t,x)|}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\cdot b\int_{\mathbb{R}^{n}}\eta(x-y)|(\gamma+\ell_{0})^{-}(t,y+h)-(\gamma+\ell_{0})^{-}(t,y)|dy\]
\[-\frac{|\gamma(t,x+h)-\gamma(t,x)|}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\cdot b|(\gamma+\ell_{0})^{-}(t,x+h)-(\gamma+\ell_{0})^{-}(t,x)|,\]
which yields that
\[|\gamma(t,x+h)-\gamma(t,x)|-|\gamma_{0}(x+h)-\gamma_{0}(x)|\]
\[= \lim_{\delta\to 0}\left[\mu_{\delta}(\gamma(t,x+h)-\gamma(t,x))-\mu_{\delta}(\gamma_{0}(x+h)-\gamma_{0}(x))\right]\]
\[= \lim_{\delta\to 0}\int_{0}^{t}\frac{\partial}{\partial\tau}\mu_{\delta}(\gamma(\tau,x+h)-\gamma(\tau,x))d\tau\]
\[\leq a\int_{0}^{t}\left(\int_{\mathbb{R}^{n}}k(x-y)|\gamma^{+}(\tau,y+h)-\gamma^{+}(\tau,y)|dy-|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|\right)d\tau\]
\[+b\int_{0}^{t}\int_{\mathbb{R}^{n}}\eta(x-y)|(\gamma+\ell_{0})^{-}(\tau,y+h)-(\gamma+\ell_{0})^{-}(\tau,y)|dyd\tau\]
\[-b\int_{0}^{t}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|d\tau.\]
Thus for any \(R>0\),
\[\int_{B_{R}(0)}|\gamma(t,x+h)-\gamma(t,x)|\,dx-\int_{B_{R}(0)}|\gamma_{0}(x+h)-\gamma_{0}(x)|\,dx\]
\[\leq
a\int_{0}^{t}\int_{B_{R}(0)}\left(\int_{\mathbb{R}^{n}}k(x-y)|\gamma^{+}(\tau,y+h)-\gamma^{+}(\tau,y)|dy-|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|\right)dxd\tau\]
\[+b\int_{0}^{t}\int_{B_{R}(0)}\int_{\mathbb{R}^{n}}\eta(x-y)|(\gamma+\ell_{0})^{-}(\tau,y+h)-(\gamma+\ell_{0})^{-}(\tau,y)|dydxd\tau\]
\[-b\int_{0}^{t}\int_{B_{R}(0)}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|dxd\tau\]
\[\leq a\int_{0}^{t}\left(\int_{\mathbb{R}^{n}}|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|dx-\int_{B_{R}(0)}|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|dx\right)d\tau\]
\[+b\int_{0}^{t}\int_{\mathbb{R}^{n}}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|dxd\tau\]
\[-b\int_{0}^{t}\int_{B_{R}(0)}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|dxd\tau.\]
Notice that \(\gamma_{0}(x+h)-\gamma_{0}(x)\) is compactly supported. Then thanks to Lemma 2.3, by letting \(R\to\infty\), we have for \(t>0\),
\[\int_{\mathbb{R}^{n}}|\gamma(t,x+h)-\gamma(t,x)|\,dx\leq\int_{\mathbb{R}^{n}}|\gamma_{0}(x+h)-\gamma_{0}(x)|\,dx.\]
The proof is complete.
**Remark 2.1**.: _The a priori estimates in Lemmas 2.3 and 2.4 are also valid for the nonlocal version of the one-phase Stefan problem based on the same arguments. In particular, these estimates play an important role in proving convergence relations between local and nonlocal Stefan problems._
## 3 Convergence to the local Stefan problem
### Convergence to the two-phase Stefan problem
Theorem 1.3 is about the convergence relations between local and nonlocal two-phase Stefan problems, where the additional assumptions that the kernel functions are radially symmetric and compactly supported are required.
Proof of Theorem 1.3.: Fix \(T>0\). For any test function \(\zeta\in C_{c}^{\infty}(\mathbb{R}^{n}\times[0,T))\), it is routine to show that
\[-\int_{0}^{T}\int_{\mathbb{R}^{n}}\gamma_{\epsilon}\zeta_{t}dxdt-\int_{\mathbb{R}^{n}}\gamma_{0}(x)\zeta(0,x)dx\]
\[= a\int_{0}^{T}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)\gamma_{\epsilon}^{+}(t,x)dxdt \tag{3.1}\]
\[+b\int_{0}^{T}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}\eta_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)(\gamma_{\epsilon}(t,x)+\ell_{0})^{-}dxdt.\]
First, thanks to the conditions imposed on the kernel functions \(k\) and \(\eta\), and \(\zeta\in C_{c}^{\infty}(\mathbb{R}^{n}\times[0,T))\), a second-order Taylor expansion of \(\zeta\) together with the radial symmetry of \(k\) gives
\[\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)=\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k(z)\left(\zeta(t,x-\epsilon z)-\zeta(t,x)\right)dz=\frac{1}{2n}\int_{\mathbb{R}^{n}}|z|^{2}k(z)dz\,\Delta\zeta(t,x)=A\Delta\zeta(t,x), \tag{3.2}\]
uniformly for \((t,x)\in(0,T)\times\mathbb{R}^{n}\), and similarly
\[\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}\eta_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)=B\Delta\zeta(t,x), \tag{3.3}\]
uniformly for \((t,x)\in(0,T)\times\mathbb{R}^{n}\). In particular, both rescaled quantities are bounded uniformly in \(\epsilon\).
Next, thanks to the estimate (1.8), by a standard diagonal argument there exists a sequence \(\{\epsilon_{j}\}\) with \(\lim_{j\to\infty}\epsilon_{j}=0\) such that
\[\text{for every }s\in(0,T)\bigcap\mathbb{Q}\text{ and every }1<p<\infty,\ \gamma_{\epsilon_{j}}(s,\cdot)\text{ converges weakly in }L^{p}_{loc}(\mathbb{R}^{n}). \tag{3.4}\]
We claim that for any \(t\in(0,T)\bigcap\mathbb{Q}^{c}\), there exists \(\gamma(t,\cdot)\) such that for every \(1<p<\infty\),
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \text{ weakly in }L^{p}_{loc}(\mathbb{R}^{n}). \tag{3.5}\]
To prove this claim, fix \(t\in(0,T)\bigcap\mathbb{Q}^{c}\), \(1<p<\infty\) and a bounded set \(\Omega\) in \(\mathbb{R}^{n}\). Obviously, there exist a subsequence of the sequence \(\{\epsilon_{j}\}\), denoted by \(\{\epsilon_{j_{\ell}}\}\), and \(\gamma_{\Omega}(t,\cdot)\in L^{p}(\Omega)\), such that
\[\lim_{j_{\ell}\to\infty}\gamma_{\epsilon_{j_{\ell}}}(t,\cdot)=\gamma_{\Omega}(t,\cdot)\ \text{ weakly in }L^{p}(\Omega).
\tag{3.6}\]
We emphasize that the subsequence \(\{\epsilon_{j_{\ell}}\}\) depends on \(t\in(0,T)\bigcap\mathbb{Q}^{c}\). Then fix \(\phi(x)\in C_{c}(\Omega)\),
\[\int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\Omega}(t,x)\right)\phi(x)dx \tag{3.7}\]
\[= \int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\epsilon_{j_{\ell}}}(t,x)\right)\phi(x)dx+\int_{\Omega}\left(\gamma_{\epsilon_{j_{\ell}}}(t,x)-\gamma_{\Omega}(t,x)\right)\phi(x)dx\]
\[= \int_{\Omega}\left(\gamma_{\epsilon_{j}}(s,x)-\gamma_{\epsilon_{j_{\ell}}}(s,x)\right)\phi(x)dx+\int_{\Omega}\left(\gamma_{\epsilon_{j_{\ell}}}(t,x)-\gamma_{\Omega}(t,x)\right)\phi(x)dx\]
\[+\int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\epsilon_{j}}(s,x)\right)\phi(x)dx-\int_{\Omega}\left(\gamma_{\epsilon_{j_{\ell}}}(t,x)-\gamma_{\epsilon_{j_{\ell}}}(s,x)\right)\phi(x)dx,\]
where \(s\in(0,T)\bigcap\mathbb{Q}\). Notice that based on the problem (1.10), one has
\[\int_{\mathbb{R}^{n}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(s,x)\right)\phi(x)dx\]
\[= a\int_{s}^{t}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\phi(y)dy-\phi(x)\right)\gamma_{\epsilon}^{+}(\tau,x)dxd\tau\]
\[+b\int_{s}^{t}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}\eta_{\epsilon}(x-y)\phi(y)dy-\phi(x)\right)(\gamma_{\epsilon}(\tau,x)+\ell_{0})^{-}dxd\tau.\]
Thanks to (1.8), (3.2) and (3.3), there exists a constant \(C\), which is independent of \(\epsilon>0\), such that
\[\Big{|}\int_{\mathbb{R}^{n}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(s,x)\right)\phi(x)dx\Big{|}\leq C|t-s|.\]
Thanks to this estimate, we can choose \(s\in\mathbb{Q}\) close enough to \(t\) to control the last two terms in (3.7). Hence, together with (3.4) and (3.6), it is standard to show that for any \(\phi(x)\in C_{c}(\Omega)\),
\[\lim_{j\to\infty}\int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\Omega}(t,x)\right)\phi(x)dx=0.\]
Thus
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma_{\Omega}(t,\cdot)\ \mbox{ weakly in }L^{p}(\Omega).\]
Since \(1<p<\infty\) is arbitrary, thanks to Theorem 1.1, one sees that
\[\|\gamma_{\Omega}(t,\cdot)\|_{L^{\infty}(\Omega)}\leq\|\gamma_{0}\|_{L^{\infty}(\mathbb{R}^{n})}.\]
Notice that \(\Omega\subseteq\mathbb{R}^{n}\) is an arbitrary fixed bounded set; thus, due to the uniqueness of weak limits, we can define \(\gamma(t,\cdot)\in L^{\infty}(\mathbb{R}^{n})\) by setting
\[\gamma(t,x)=\gamma_{\Omega}(t,x)\ \mbox{ a.e. in }\Omega.\]
It follows that
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \mbox{ weakly in }L^{p}_{loc}(\mathbb{R}^{n}).\]
Since \(t\in(0,T)\bigcap\mathbb{Q}^{c}\) is arbitrary, the claim is proved.
Furthermore, we improve the weak convergence in (3.5) to strong convergence in \(L^{1}_{loc}(\mathbb{R}^{n})\). For this purpose, fix \(t\in(0,T)\bigcap\mathbb{Q}^{c}\) and a bounded set \(\Omega\subseteq\mathbb{R}^{n}\). Recall that, due to the Fréchet–Kolmogorov theorem together with Lemmas 2.3 and 2.4, \(\{\gamma_{\epsilon_{j}}(t,\cdot)\}\) is precompact in \(L^{1}(\Omega)\). Thus, thanks to (3.5) and the uniqueness of weak limits, it is routine to verify that
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \mbox{ in }L^{1}(\Omega),\]
i.e., for any \(t\in(0,T)\bigcap\mathbb{Q}^{c}\),
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \mbox{ in }L^{1}_{loc}(\mathbb{R}^{n}).
\tag{3.8}\]
Therefore, by letting \(j\to\infty\), it follows from (3.1), (3.2), (3.3), (3.4) and (3.8) that
\[\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\gamma\zeta_{t}+\left(A\gamma^{+}+B(\gamma+\ell_{0})^{-}\right)\Delta\zeta\right)dxdt+\int_{\mathbb{R}^{n}}\gamma_{0}(x)\zeta(0,x)dx=0.\]
The uniqueness of the generalized solution to the problem (1.12) yields the desired conclusion.
### Convergence to the one-phase Stefan problem
This subsection is devoted to the proof of Theorem 1.4, where the convergence relations between local and nonlocal one-phase Stefan problems are verified under the optimal condition (1.9) imposed on the kernel function.
It is known that the classical one-phase problem (1.5) can be reduced to a parabolic variational inequality [11, Chapter 1.9]. To be more specific, define
\[v(t,x)=\begin{cases}\int_{0}^{t}\theta(\tau,x)d\tau&\text{if }x\in\bar{\Omega}_{0},\\ 0&\text{if }x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\,t\leq s(x),\\ \int_{s(x)}^{t}\theta(\tau,x)d\tau&\text{if }x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\,t>s(x),\end{cases}\]
and then transform the problem (1.5) into a variational inequality for the function \(v(t,x)\) as follows
\[\begin{cases}v_{t}-A\Delta v\geq\bar{f}&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ v\geq 0&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ (v_{t}-A\Delta v-\bar{f})v=0&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\end{cases} \tag{3.9}\]
where \(\bar{f}=\gamma_{0}\) is defined in (1.7). It has been proved that there exists a unique solution of the problem (3.9), still denoted by \(v(t,x)\), and
\[D_{x}v,\,D_{x}^{2}v,\,D_{t}v\quad\text{belong to }L^{\infty}((0,T);L^{p}(\mathbb{R}^{n}))\ \text{ for }p<\infty.\]
See [11, Chapter 1.9] for details. Borrowing this idea, define
\[v_{\epsilon}(t,x)=\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau. \tag{3.10}\]
Obviously, Theorem 1.4 is about the convergence relations between \(v_{\epsilon}\) and \(v\). First we compute the equation satisfied by \(v_{\epsilon}\). For any \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\), let \(s_{\epsilon}(x)\) denote the time, if it exists, when \(\gamma_{\epsilon}(t,x)\) first reaches zero. Thus
\[\ell_{0}=\frac{1}{\epsilon^{2}}\int_{0}^{s_{\epsilon}(x)}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(\tau,y)dyd\tau. \tag{3.11}\]
* if \(x\in\bar{\Omega}_{0},t>0\), then
\[v_{\epsilon t}-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy+\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)\]
\[= \gamma_{\epsilon}^{+}(t,x)-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+\frac{1}{\epsilon^{2}}\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\]
\[= \int_{0}^{t}\gamma_{\epsilon\tau}^{+}(\tau,x)d\tau+\gamma_{\epsilon}^{+}(0,x)-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+\frac{1}{\epsilon^{2}}\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\]
\[= \gamma_{0}(x);\]
* if \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},0<t\leq s_{\epsilon}(x)\), then \(v_{\epsilon}(t,x)=0\).
Thus
\[v_{\epsilon t}-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy+\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)=-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy;\]
* if \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},t>s_{\epsilon}(x)\), then
\[v_{\epsilon t}-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy+\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)\]
\[= \gamma_{\epsilon}^{+}(t,x)-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+\frac{1}{\epsilon^{2}}\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\]
\[= \int_{s_{\epsilon}(x)}^{t}\gamma_{\epsilon\tau}^{+}(\tau,x)d\tau-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\int_{s_{\epsilon}(x)}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+\frac{1}{\epsilon^{2}}\int_{s_{\epsilon}(x)}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\]
\[-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\int_{0}^{s_{\epsilon}(x)}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy\]
\[= -\ell_{0},\]
according to (3.11). Hence one sees that \(v_{\epsilon}\) satisfies
\[\begin{cases}v_{\epsilon t}(t,x)=\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy-\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)+f_{\epsilon}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ v_{\epsilon}(0,x)=0&x\in\mathbb{R}^{n},\end{cases} \tag{3.12}\]
where
\[f_{\epsilon}(t,x)=\begin{cases}\gamma_{0}(x)&t>0,\ x\in\bar{\Omega}_{0},\\ -\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy&0<t\leq s_{\epsilon}(x),\ x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\\ -\ell_{0}&t>s_{\epsilon}(x),\ x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}.\end{cases}\]
Secondly, we prepare some useful estimates about \(f_{\epsilon}\).
**Lemma 3.1**.: _Assume that in the problem (1.13), the kernel function \(k\) satisfies the assumption **(K)** and the initial data satisfies (1.7). Then for given \(1\leq p\leq\infty\), \(f_{\epsilon}(t,x)\) is uniformly bounded in \(L^{p}(\mathbb{R}^{n})\) for any \(\epsilon>0\), \(t>0\)._
Proof.: Similar to the proof of Lemma 2.3, if \(\phi\in L^{1}(\mathbb{R}^{n})\bigcap L^{\infty}(\mathbb{R}^{n})\), then for any \(p>1\), \(\phi\in L^{p}(\mathbb{R}^{n})\) and
\[\|\phi\|_{L^{p}(\mathbb{R}^{n})}\leq\left(\|\phi\|_{L^{\infty}(\mathbb{R}^{n})}^{p-1}\|\phi\|_{L^{1}(\mathbb{R}^{n})}\right)^{\frac{1}{p}}\leq\left(\|\phi\|_{L^{\infty}(\mathbb{R}^{n})}+1\right)\left(\|\phi\|_{L^{1}(\mathbb{R}^{n})}+1\right).\]
Hence it suffices to verify the conclusion for \(p=1\) and \(p=\infty\). Since \(f_{\epsilon}(t,x)=\gamma_{0}\) for \(x\in\bar{\Omega}_{0}\) and \(t>0\), we only need to estimate \(f_{\epsilon}\) outside \(\bar{\Omega}_{0}\). It mainly relies on the following estimates:
\[-f_{\epsilon}(t,x)\in[0,\ell_{0}]\ \ \text{for}\ x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\ \ \int_{\mathbb{R}^{n}\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx\leq\int_{\Omega_{0}}\gamma_{0}dx. \tag{3.13}\]
Assume that (3.13) holds. It immediately yields that
\[\|f_{\epsilon}(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq\max\,\left\{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})},\ell_{0}\right\},\ \ \|f_{\epsilon}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\leq 2\int_{\Omega_{0}}\gamma_{0}dx.\]
The desired conclusion follows. Now it remains to verify (3.13). In fact, due to (3.11), the first estimate in (3.13) is obvious.
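Spelling out this short step: for \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\) and \(0<t\leq s_{\epsilon}(x)\), Fubini's theorem gives
\[-f_{\epsilon}(t,x)=\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy=\frac{1}{\epsilon^{2}}\int_{0}^{t}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(\tau,y)dyd\tau\in[0,\ell_{0}],\]
since the integrand is nonnegative and, by (3.11), the integral up to \(\tau=s_{\epsilon}(x)\) equals \(\ell_{0}\); for \(t>s_{\epsilon}(x)\), \(-f_{\epsilon}(t,x)=\ell_{0}\) by definition.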
Intuitively, the second estimate in (3.13) indicates that \(\int_{\mathbb{R}^{n}\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx\) is no more than the total energy absorbed outside \(\bar{\Omega}_{0}\) from time \(0\) to \(t\), which cannot exceed the total energy at the initial time, i.e., \(\int_{\Omega_{0}}\gamma_{0}dx\). To be more precise, by (1.13), one has for any large \(R>0\),
\[\int_{B_{R}(0)\setminus\bar{\Omega}_{0}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(0,x)\right)dx \tag{3.14}\]
\[= \int_{(B_{R}(0)\setminus\bar{\Omega}_{0})\bigcap\{s_{\epsilon}(x)<t\}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(0,x)\right)dx+\int_{(B_{R}(0)\setminus\bar{\Omega}_{0})\bigcap\{s_{\epsilon}(x)\geq t\}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(0,x)\right)dx\]
\[\geq \int_{(B_{R}(0)\setminus\bar{\Omega}_{0})\bigcap\{s_{\epsilon}(x)<t\}}\ell_{0}dx+\frac{1}{\epsilon^{2}}\int_{0}^{t}\int_{(B_{R}(0)\setminus\bar{\Omega}_{0})\bigcap\{s_{\epsilon}(x)\geq t\}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(\tau,y)dydxd\tau\]
\[= \int_{B_{R}(0)\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx.\]
Moreover, it is easy to see that
\[\int_{B_{R}(0)}\gamma_{\epsilon t}(t,x)dx=\frac{1}{\epsilon^{2}}\int_{B_{R}(0)}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(t,y)dydx-\frac{1}{\epsilon^{2}}\int_{B_{R}(0)}\gamma_{\epsilon}^{+}(t,x)dx\]
\[\leq \frac{1}{\epsilon^{2}}\int_{\Omega_{\epsilon}(t)}\gamma_{\epsilon}^{+}(t,x)dx-\frac{1}{\epsilon^{2}}\int_{B_{R}(0)}\gamma_{\epsilon}^{+}(t,x)dx,\]
where the validity of the above inequality is due to the property that \(\gamma_{\epsilon}^{+}(t,\cdot)\in L^{1}(\mathbb{R}^{n})\) proved in Lemma 2.3. This implies that
\[\int_{B_{R}(0)\setminus\bar{\Omega}_{0}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(0,x)\right)dx\]
\[\leq -\int_{\bar{\Omega}_{0}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(0,x)\right)dx+\frac{1}{\epsilon^{2}}\int_{0}^{t}\left(\int_{\Omega_{\epsilon}(\tau)}\gamma_{\epsilon}^{+}(\tau,x)dx-\int_{B_{R}(0)}\gamma_{\epsilon}^{+}(\tau,x)dx\right)d\tau\]
\[\leq \int_{\Omega_{0}}\gamma_{0}dx+\frac{1}{\epsilon^{2}}\int_{0}^{t}\left(\int_{\Omega_{\epsilon}(\tau)}\gamma_{\epsilon}^{+}(\tau,x)dx-\int_{B_{R}(0)}\gamma_{\epsilon}^{+}(\tau,x)dx\right)d\tau.\]
This, together with (3.14), yields that
\[\int_{B_{R}(0)\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx\leq\int_{\Omega_{0}}\gamma_{0}dx+\frac{1}{\epsilon^{2}}\int_{0}^{t}\left(\int_{\Omega_{\epsilon}(\tau)}\gamma_{\epsilon}^{+}(\tau,x)dx-\int_{B_{R}(0)}\gamma_{\epsilon}^{+}(\tau,x)dx\right)d\tau,\]
and thus the second estimate in (3.13) follows immediately by letting \(R\to\infty\).
Moreover, as mentioned in Remark 2.1, on the basis of Lemmas 2.3 and 2.4, we establish some convergence results about \(v_{\epsilon}\) defined in (3.10).
**Lemma 3.2**.: _Assume that in the problem (1.13), the kernel function \(k\) satisfies the assumption **(K)**, the initial data satisfies (1.7). Then for any fixed \(t>0\), there exist a sequence \(\{\epsilon_{\ell}\}\), which depends on \(t\) and satisfies \(\lim_{\ell\to\infty}\epsilon_{\ell}=0\), and \(\tilde{v}^{t}\in L^{1}(\mathbb{R}^{n})\) such that \(v_{\epsilon_{\ell}}(t,\cdot)\to\tilde{v}^{t}(\cdot)\) a.e.
in \(\mathbb{R}^{n}\)._
Proof.: Thanks to Lemma 2.4,
\[\int_{\mathbb{R}^{n}}\left|v_{\epsilon}(t,x+h)-v_{\epsilon}(t,x)\right|dx=\int_{\mathbb{R}^{n}}\left|\int_{0}^{t}\left(\gamma_{\epsilon}^{+}(\tau,x+h)-\gamma_{\epsilon}^{+}(\tau,x)\right)d\tau\right|dx\]
\[\leq \int_{0}^{t}\int_{\mathbb{R}^{n}}\left|\gamma_{\epsilon}(\tau,x+h)-\gamma_{\epsilon}(\tau,x)\right|dxd\tau\]
\[\leq \int_{0}^{t}\int_{\mathbb{R}^{n}}\left|\gamma_{0}(x+h)-\gamma_{0}(x)\right|dxd\tau=t\int_{\mathbb{R}^{n}}\left|\gamma_{0}(x+h)-\gamma_{0}(x)\right|dx.\]
This, together with the Fréchet–Kolmogorov theorem and Lemma 2.3, indicates that for any fixed \(t>0\) and bounded set \(\Omega\subseteq\mathbb{R}^{n}\), \(\{v_{\epsilon}(t,\cdot)\,|\,0<\epsilon<1\}\) is precompact in \(L^{1}(\Omega)\). Then it is easy to show that there exist a sequence \(\{\epsilon_{\ell}\}\) with \(\lim_{\ell\to\infty}\epsilon_{\ell}=0\) and \(\tilde{v}^{t}\in L^{1}(\mathbb{R}^{n})\) such that \(v_{\epsilon_{\ell}}(t,\cdot)\to\tilde{v}^{t}(\cdot)\) in \(L^{1}_{loc}(\mathbb{R}^{n})\) and \(v_{\epsilon_{\ell}}(t,\cdot)\to\tilde{v}^{t}(\cdot)\) a.e. in \(\mathbb{R}^{n}\).
We emphasize that the additional condition (1.9) has not been used so far. After these preparations, we are ready to complete the proof of Theorem 1.4.
Proof of Theorem 1.4.: From now on, fix \(T>0\). Returning to the problem (3.12) satisfied by \(v_{\epsilon}\), by the Fourier transform and the property \(\hat{k}_{\epsilon}(\xi)=\hat{k}(\epsilon\xi)\), we derive that
\[\hat{v}_{\epsilon}(t,\xi)=\int_{0}^{t}e^{\frac{1}{\epsilon^{2}}\left(\hat{k}(\epsilon\xi)-1\right)(t-\tau)}\hat{f}_{\epsilon}(\tau,\xi)d\tau. \tag{3.15}\]
Due to the Parseval formula,
\[\|f_{\epsilon}(t,\cdot)\|_{L^{2}(\mathbb{R}^{n})}=\|\hat{f}_{\epsilon}(t,\cdot)\|_{L^{2}(\mathbb{R}^{n})}.\]
Then thanks to Lemma 3.1, there exists a sequence \(\{\epsilon_{j}\}\) with \(\lim_{j\to\infty}\epsilon_{j}=0\) and \(f_{0},\,G_{0}\in L^{2}((0,T)\times\mathbb{R}^{n})\) such that
\[\lim_{j\to\infty}f_{\epsilon_{j}}=f_{0}\ \text{ weakly in }\ L^{2}((0,T)\times\mathbb{R}^{n}) \tag{3.16}\]
and
\[\lim_{j\to\infty}\hat{f}_{\epsilon_{j}}=G_{0}\ \text{ weakly in }\ L^{2}((0,T)\times\mathbb{R}^{n}). \tag{3.17}\]
Notice that for any test function \(\psi(t,\xi)\in C_{c}((0,T)\times\mathbb{R}^{n})\), on the one side, due to (3.16),
\[\lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\hat{f}_{\epsilon_{j}}(t,\xi)\psi(t,\xi)d\xi dt\]
\[= \lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}f_{\epsilon_{j}}(t,x)dx\right)\psi(t,\xi)d\xi dt\]
\[= \lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\psi(t,\xi)d\xi\right)f_{\epsilon_{j}}(t,x)dxdt\]
\[= \int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\psi(t,\xi)d\xi\right)f_{0}(t,x)dxdt=\int_{0}^{T}\int_{\mathbb{R}^{n}}\hat{f}_{0}(t,\xi)\psi(t,\xi)d\xi dt.\]
On the other side, (3.17) yields that
\[\lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\hat{f}_{\epsilon_{j}}(t,\xi)\psi(t,\xi)d\xi dt=\int_{0}^{T}\int_{\mathbb{R}^{n}}G_{0}(t,\xi)\psi(t,\xi)d\xi dt.\]
Hence
\[G_{0}(t,\xi)=\hat{f}_{0}(t,\xi)\ \ \text{a.e. in }(0,T)\times\mathbb{R}^{n},\]
i.e.,
\[\lim_{j\to\infty}f_{\epsilon_{j}}=f_{0},\ \ \lim_{j\to\infty}\hat{f}_{\epsilon_{j}}=\hat{f}_{0}\ \ \text{weakly in }\ L^{2}((0,T)\times\mathbb{R}^{n}).
\tag{3.18}\]
Introduce the following problem
\[\begin{cases}v_{t}=A\Delta v+f_{0}&0<t\leq T,\ x\in\mathbb{R}^{n},\\ v(0,x)=0&x\in\mathbb{R}^{n},\end{cases} \tag{3.19}\]
and let \(v_{*}\) denote the unique generalized solution in \(V_{2}^{1,1/2}(\mathbb{R}^{n}\times[0,T])\) [12, Chapter III.5]. By applying the Fourier transform to the problem (3.19), we derive that
\[\hat{v}_{*}(t,\xi)=\int_{0}^{t}e^{-A|\xi|^{2}(t-\tau)}\hat{f}_{0}(\tau,\xi)d\tau.\]
Fix \(t\in(0,T)\). For any given \(\phi(\xi)\in C_{c}^{\infty}(\mathbb{R}^{n})\),
\[\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\hat{v}_{\epsilon_{j}}(t,\xi)\phi(\xi)d\xi=\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\left(\int_{0}^{t}e^{\frac{1}{\epsilon_{j}^{2}}\left(\hat{k}(\epsilon_{j}\xi)-1\right)(t-\tau)}\hat{f}_{\epsilon_{j}}(\tau,\xi)d\tau\right)\phi(\xi)d\xi\]
\[= \lim_{j\to\infty}\int_{\mathbb{R}^{n}}\int_{0}^{t}\left(e^{\frac{1}{\epsilon_{j}^{2}}\left(\hat{k}(\epsilon_{j}\xi)-1\right)(t-\tau)}-e^{-A|\xi|^{2}(t-\tau)}\right)\hat{f}_{\epsilon_{j}}(\tau,\xi)\phi(\xi)d\tau d\xi\]
\[+\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\int_{0}^{t}e^{-A|\xi|^{2}(t-\tau)}\hat{f}_{\epsilon_{j}}(\tau,\xi)\phi(\xi)d\tau d\xi.\]
Since \(\|\hat{f}_{\epsilon_{j}}(\tau,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq\|f_{\epsilon_{j}}(\tau,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\), due to Lemma 3.1, the assumption (1.9) and (3.18), we have
\[\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\hat{v}_{\epsilon_{j}}(t,\xi)\phi(\xi)d\xi=\int_{\mathbb{R}^{n}}\int_{0}^{t}e^{-A|\xi|^{2}(t-\tau)}\hat{f}_{0}(\tau,\xi)\phi(\xi)d\tau d\xi=\int_{\mathbb{R}^{n}}\hat{v}_{*}(t,\xi)\phi(\xi)d\xi. \tag{3.20}\]
Moreover, thanks to Lemma 2.3, there exists a subsequence of \(\{\epsilon_{j}\}\), denoted by \(\{\epsilon_{j_{\ell}}\}\), and \(v_{0}^{t}\) in \(L^{2}(\mathbb{R}^{n})\), such that \(v_{\epsilon_{j_{\ell}}}(t,\cdot)\rightharpoonup v_{0}^{t}(\cdot)\) in \(L^{2}(\mathbb{R}^{n})\). Then for any given \(\phi(\xi)\in C_{c}^{\infty}(\mathbb{R}^{n})\),
\[\lim_{j_{\ell}\to\infty}\int_{\mathbb{R}^{n}}\hat{v}_{\epsilon_{j_{\ell}}}(t,\xi)\phi(\xi)d\xi\]
\[= \lim_{j_{\ell}\to\infty}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}v_{\epsilon_{j_{\ell}}}(t,x)dx\right)\phi(\xi)d\xi \tag{3.21}\]
\[= \lim_{j_{\ell}\to\infty}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\phi(\xi)d\xi\right)v_{\epsilon_{j_{\ell}}}(t,x)dx=\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\phi(\xi)d\xi\right)v_{0}^{t}(x)dx\]
\[= \int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}v_{0}^{t}(x)dx\right)\phi(\xi)d\xi.\]
Now (3.20) and (3.21) imply that
\[\hat{v}_{*}(t,\xi)=\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}v_{0}^{t}(x)dx\ \ \text{a.e. in }\mathbb{R}^{n}.\]
Thus \(v_{*}(t,x)=v_{0}^{t}(x)\) a.e. in \(\mathbb{R}^{n}\). Since \(v_{*}(t,x)\) is the unique solution to the problem (3.19), it follows immediately that for any \(0<t<T\), \(v_{\epsilon}(t,\cdot)\rightharpoonup v_{*}(t,\cdot)\) in \(L^{2}(\mathbb{R}^{n})\) as \(\epsilon\to 0\). This, together with Lemma 3.2, implies that
\[v_{\epsilon}(t,x)\to v_{*}(t,x)\ \ \text{a.e. in }(0,T)\times\mathbb{R}^{n}\ \ \text{as }\epsilon\to 0. \tag{3.22}\]
To complete the proof of Theorem 1.4, it remains to verify that \(v_{*}\) satisfies the parabolic variational inequality (3.9) as follows.
\[\begin{cases}v_{t}-A\Delta v\geq\bar{f}&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ v\geq 0&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ (v_{t}-A\Delta v-\bar{f})v=0&\text{a.e.
in }(0,T)\times\mathbb{R}^{n},\end{cases}\]
where \(\bar{f}=\gamma_{0}\) for \(x\in\mathbb{R}^{n}\). Obviously \(v_{*}\) satisfies the first two inequalities in (3.9), since \(v_{\epsilon}\) is always non-negative and \(f_{\epsilon}\geq\bar{f}\) for all \(t>0\) and \(x\in\mathbb{R}^{n}\).
Moreover, thanks to Lemma 3.1, (3.16) and the uniqueness of weak limits, it is standard to show that \(f_{0}\in L^{p}(\mathbb{R}^{n}\times[0,T])\) for any \(p>1\). Then by parabolic regularity theory and the Sobolev embedding theorem, one obtains that \(v_{*}(t,\cdot)\) is continuous in \(\mathbb{R}^{n}\). Thus, the set \(\{v_{*}>0\}\) is open in \((0,T)\times\mathbb{R}^{n}\). Also notice that \(f_{\epsilon}=\bar{f}\) if \(v_{\epsilon}>0\). Hence thanks to (3.22), it is standard to verify that
\[f_{\epsilon}(t,x)\to\bar{f}(t,x)\ \ \text{a.e. in }\{v_{*}>0\}\ \ \text{as }\epsilon\to 0.\]
Thus due to (3.18), \(f_{0}=\bar{f}\) a.e. in \(\{v_{*}>0\}\), i.e., \(v_{*}\) satisfies the third equality in (3.9). The proof of Theorem 1.4 is complete.
## 4 Fundamental properties of the nonlocal Stefan problem
In this section, we investigate the fundamental properties of the nonlocal version of the one-phase Stefan problem (1.6)
\[\begin{cases}\gamma_{t}(t,x)=d\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(t,y)dy-d\gamma^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=\gamma_{0}&x\in\mathbb{R}^{n}.\end{cases}\]
### Expansion and boundedness
Theorem 1.5(i) is about the expansion of \(\Omega(t)\).
Proof of Theorem 1.5(i).: Fix \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\). Let \(t=s(x)\) denote the moment when \(\gamma(s(x),x)=0\) while \(\gamma(t,x)<0\) for \(0<t<s(x)\). By (1.6), one has
\[\ell_{0}=d\int_{0}^{s(x)}\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(\tau,y)dyd\tau=d\int_{0}^{s(x)}\int_{\Omega(\tau)}k(x-y)\gamma^{+}(\tau,y)dyd\tau. \tag{4.1}\]
Also thanks to Theorem 1.1, \(0\leq\gamma^{+}\leq\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}\). This yields that
\[\ell_{0}\leq d\int_{0}^{s(x)}\int_{\Omega(\tau)}k(x-y)\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}dyd\tau\leq ds(x)\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})},\]
i.e., \(s(x)\geq\ell_{0}/\left(d\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}\right)\). Hence by choosing \(t_{0}<\ell_{0}/\left(d\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}\right)\), one has \(\Omega(t)=\Omega(0)\) for \(0\leq t\leq t_{0}\). The rest follows directly from Proposition 2.2.
In the following, we prove Theorem 1.5(ii), which is about the uniform boundedness of \(\Omega(t)\).
Proof of Theorem 1.5(ii).: The proof is lengthy. To begin with, we introduce the first auxiliary \(1-\)dim problem
\[\begin{cases}\gamma_{t}(t,x_{1})=d\int_{\mathbb{R}}k_{1}(x_{1}-y_{1})\gamma^{+}(t,y_{1})dy_{1}-d\gamma^{+}(t,x_{1})&t>0,\ x_{1}\in\mathbb{R},\\ \gamma(0,x_{1})=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}&0\leq x_{1}\leq M,\\ \gamma(0,x_{1})=-\ell_{0}&x_{1}<0\ \text{or}\ x_{1}>M,\end{cases} \tag{4.2}\]
where \(k_{1}(x_{1})=\int_{\mathbb{R}^{n-1}}k(x_{1},x^{\prime})dx^{\prime},\ x^{\prime}=(x_{2},...,x_{n})\), and choose the constant \(M\) such that
\[\bar{\Omega}_{0}\subseteq\{x\in\mathbb{R}^{n}\ |\ 0<x_{1}<M,\ \text{where}\ x=(x_{1},...,x_{n})\}.\]
Such \(M\) exists since \(\bar{\Omega}_{0}\) is bounded. Let \(\gamma_{1}(t,x_{1})\) denote the solution to the problem (4.2).
Notice that \(\gamma_{1}(t,x_{1})\) also satisfies the \(n-\)dim problem (1.6) with initial data
\[\gamma_{0}(x)=\begin{cases}\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}&0\leq x_{1}\leq M,\\ -\ell_{0}&x_{1}<0\ \text{or}\ x_{1}>M,\ x=(x_{1},...,x_{n}),\end{cases}\]
since, for functions depending only on \(x_{1}\), Fubini's theorem gives \(\int_{\mathbb{R}^{n}}k(x-y)\gamma_{1}^{+}(t,y_{1})dy=\int_{\mathbb{R}}k_{1}(x_{1}-y_{1})\gamma_{1}^{+}(t,y_{1})dy_{1}\). Denote
\[\Sigma_{1}(t)=\{x_{1}\in\mathbb{R}\ |\ \gamma_{1}(t,x_{1})\geq 0\}\ \ \text{and}\ \ \Sigma_{1}^{\infty}=\bigcup_{t\geq 0}\ \Sigma_{1}(t).\]
By Proposition 2.1, \(\gamma_{1}(t,x_{1})\geq\gamma(t,x)\) in \(\mathbb{R}^{n}\), where \(\gamma\) denotes the solution to the \(n-\)dim problem (1.6) with initial data (1.7) and \(x=(x_{1},...,x_{n})\). _To prove Theorem 1.5(ii), it suffices to show that \(\Sigma_{1}^{\infty}\) is bounded, since the other \(n-1\) directions can be handled similarly and thus \(\Omega(t)\) will be constrained by a bounded cube._
We first show that \(|\Sigma_{1}^{\infty}|\) is bounded. Thanks to Lemma 2.3, \(\gamma_{1}^{+}(t,\cdot)\in L^{1}(\mathbb{R})\). By direct computation, for \(0<t<T\),
\[\int_{\Sigma_{1}(T)}\gamma_{1t}(t,x_{1})dx_{1}\]
\[= d\int_{\Sigma_{1}(T)}\!\int_{\mathbb{R}}k_{1}(x_{1}\!-\!y_{1})\gamma_{1}^{+}(t,y_{1})dy_{1}dx_{1}-d\int_{\Sigma_{1}(T)}\gamma_{1}^{+}(t,x_{1})dx_{1}\]
\[\leq d\int_{\mathbb{R}}\gamma_{1}^{+}(t,y_{1})dy_{1}-d\int_{\Sigma_{1}(T)}\gamma_{1}^{+}(t,x_{1})dx_{1}=0.\]
Thus
\[0\leq\int_{\Sigma_{1}(T)}\gamma_{1}(T,x_{1})dx_{1}\leq\int_{\Sigma_{1}(T)}\gamma_{1}(0,x_{1})dx_{1}=-\ell_{0}|\Sigma_{1}(T)\setminus[0,M]|+\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}M,\]
which implies that
\[|\Sigma_{1}(T)|\leq\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}\right)M.\]
Since \(T\) is arbitrary, one has
\[|\Sigma_{1}^{\infty}|\leq\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}\right)M. \tag{4.3}\]
Next we will show that \(\gamma_{1}(t,x_{1})\) decays exponentially as \(t\) goes to infinity. For this purpose, we introduce the second auxiliary \(1-\)dim problem with periodic initial data
\[\begin{cases}\gamma_{t}=d\int_{\mathbb{R}}k_{1}(x_{1}-y_{1})\gamma^{+}(t,y_{1})dy_{1}-d\gamma^{+}(t,x_{1})&t>0,\ x_{1}\in\mathbb{R},\\ \gamma(0,x_{1})=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}&\kappa(M+L)\leq x_{1}\leq\kappa(M+L)+M,\\ \gamma(0,x_{1})=-\ell_{0}&\kappa(M+L)+M<x_{1}<(\kappa+1)(M+L),\end{cases}\]
where \(\kappa\in\mathbb{Z}\) and \(L>0\) is a constant to be determined later; let \(\tilde{\gamma}_{1}(t,x_{1})\) denote the solution. By Proposition 2.1,
\[\gamma_{1}(t,x_{1})\leq\tilde{\gamma}_{1}(t,x_{1})\ \ \text{for}\ t>0,\,x_{1}\in\mathbb{R}. \tag{4.4}\]
Obviously, \(\tilde{\gamma}_{1}(t,x_{1})\) is periodic in \(x_{1}\) with period \(M+L\).
Thus this problem can be rewritten as follows
\[\begin{cases}\gamma_{t}=d{\int_{0}^{M+L}}k_{*}(x_{1}\!-\!y_{1})\gamma^{+}(t,y_{1})dy_{1}\!-\!d\gamma^{+}(t,x_{1})&t>0,\ x_{1}\in(0,M+L),\\ \gamma(0,x_{1})=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}&0\leq x_{1}\leq M,\\ \gamma(0,x_{1})=-\ell_{0}<0&M<x_{1}<(M+L),\end{cases} \tag{4.5}\]
where
\[k_{*}(x_{1})=\sum_{\kappa\in\mathbb{Z}}k_{1}(x_{1}+\kappa(M+L))\ \ \mbox{and}\ \ \int_{0}^{M+L}k_{*}(x_{1})dx_{1}=1.\]
Denote
\[\tilde{\Sigma}_{1}(t)=\{x_{1}\in\mathbb{R}\ |\ \tilde{\gamma}_{1}(t,x_{1})\geq 0\}\ \mbox{ and }\ \tilde{\Sigma}_{1}^{\infty}=\bigcup_{t\geq 0}\ \tilde{\Sigma}_{1}(t).\]
We claim that _if \(L>\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M\), then \(|\,\tilde{\Sigma}_{1}^{\infty}\,\bigcap\,(0,M+L)\,|<M+L\)._
In (4.5), fix \(T>0\); by direct computation, one has for \(0<t<T\),
\[\int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1t}(t,x_{1})dx_{1}\]
\[= d\int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\!\int_{0}^{M+L}\!k_{*}(x_{1}\!-\!y_{1})\tilde{\gamma}_{1}^{+}(t,y_{1})dy_{1}dx_{1}-d\int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}^{+}(t,x_{1})dx_{1}\]
\[\leq d\int_{0}^{M+L}\tilde{\gamma}_{1}^{+}(t,y_{1})dy_{1}-d\int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}^{+}(t,x_{1})dx_{1}=0.\]
Thus
\[0 \leq \int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}(T,x_{1})dx_{1}\]
\[\leq \int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}(0,x_{1})dx_{1}=-\ell_{0}|\tilde{\Sigma}_{1}(T)\bigcap\,(M,M+L)|+\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}M.\]
This implies that
\[|\tilde{\Sigma}_{1}(T)\bigcap\,(M,M+L)|\leq\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M. \tag{4.6}\]
Since \(T\) is arbitrary, it is easy to see that \(|\,\tilde{\Sigma}_{1}^{\infty}\,\bigcap\,(0,M+L)\,|<M+L\) provided that
\[L>\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M.\]
The claim is proved.
Thanks to the strong maximum principle established in Proposition 2.2, \([0,M]\subseteq\tilde{\Sigma}_{1}^{\infty}\) and \(\tilde{\Sigma}_{1}^{\infty}\) is open in \((M,M+L)\). Thus, fixing \(L>\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M\), by the previous claim one sees that there exists an open interval \((a,b)\subset(0,M+L)\) satisfying \((a,b)\bigcap\,\tilde{\Sigma}_{1}^{\infty}=\emptyset\). If necessary, we could choose \(b-a\) smaller such that \(k_{*}(b-a)>0\). Denote
\[\tilde{\Sigma}_{D}=(0,a)\bigcup\,(b,M+L)\supseteq\tilde{\Sigma}_{1}^{\infty}\bigcap\,(0,M+L).\]
Under the condition that \(k_{*}(b-a)>0\), the proof of [13, Theorem 2.6 (i)] can be slightly modified to show that the eigenvalue problem
\[d\int_{\tilde{\Sigma}_{D}}k_{*}(x_{1}-y_{1})\phi(y_{1})dy_{1}-d\phi(x_{1})=\lambda\phi(x_{1})\quad\text{for }x_{1}\in\tilde{\Sigma}_{D}\]
admits a principal eigenvalue \(\lambda_{p}\) with the corresponding eigenfunction \(\phi_{p}\) satisfying \(\phi_{p}>0\) in \(\tilde{\Sigma}_{D}\); it is then easy to see that \(\lambda_{p}<0\).
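One way to see \(\lambda_{p}<0\) explicitly (a sketch, assuming, as the choice of \((a,b)\) with \(k_{*}(b-a)>0\) is meant to allow, that \(k_{*}>0\) on a neighbourhood of \(b-a\)): integrating the eigenvalue problem over \(\tilde{\Sigma}_{D}\) and exchanging the order of integration yields
\[\lambda_{p}\int_{\tilde{\Sigma}_{D}}\phi_{p}(y_{1})dy_{1}=d\int_{\tilde{\Sigma}_{D}}\left(\int_{\tilde{\Sigma}_{D}}k_{*}(x_{1}-y_{1})dx_{1}-1\right)\phi_{p}(y_{1})dy_{1}.\]
By the periodicity of \(k_{*}\), \(\int_{\tilde{\Sigma}_{D}}k_{*}(x_{1}-y_{1})dx_{1}=1-\int_{a}^{b}k_{*}(x_{1}-y_{1})dx_{1}\leq 1\), with strict inequality for \(y_{1}\) in a set of positive measure (e.g., \(y_{1}\) close to \(a\)). Since \(\phi_{p}>0\) in \(\tilde{\Sigma}_{D}\), the right-hand side is negative, whence \(\lambda_{p}<0\).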
Moreover, notice that \(v(t,x_{1})=\ell e^{\lambda_{p}t}\phi_{p}(x_{1})\), \(\ell>0\), satisfies the following problem
\[\begin{cases}v_{t}(t,x_{1})=d\int_{\tilde{\Sigma}_{D}}k_{*}(x_{1}-y_{1})v(t,y_{1})dy_{1}-dv(t,x_{1})&t>0,\ x_{1}\in\tilde{\Sigma}_{D},\\ v(0,x_{1})=\ell\phi_{p}(x_{1})&x_{1}\in\tilde{\Sigma}_{D}.\end{cases}\]
Choose \(\ell\) large enough such that \(v(0,x_{1})>\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}\) in \(\tilde{\Sigma}_{D}\). By the comparison principle, it follows that
\[\tilde{\gamma}_{1}(t,x_{1})\leq v(t,x_{1})=\ell e^{\lambda_{p}t}\phi_{p}(x_{1})\quad\text{for }\ t>0,\ x_{1}\in\tilde{\Sigma}_{D}.\]
Therefore by (4.4), the choice of \(\tilde{\Sigma}_{D}\) and the fact that \(\tilde{\gamma}_{1}(t,x_{1})\) is periodic in \(x_{1}\) with period \(M+L\), we have
\[\gamma_{1}(t,x_{1})\leq\ell e^{\lambda_{p}t}\|\phi_{p}\|_{L^{\infty}(\tilde{\Sigma}_{D})}\quad\text{for }\ t>0,\ x_{1}\in\mathbb{R}, \tag{4.7}\]
i.e., \(\gamma_{1}(t,x_{1})\) decays exponentially as \(t\to\infty\) since \(\lambda_{p}<0\).
Now we are ready to complete the last piece of the proof of Theorem 1.5(ii). Suppose that \(\Sigma_{1}^{\infty}\) is unbounded, i.e., there exist a sequence \(\{x_{1i}\}_{i\geq 1}\subseteq\Sigma_{1}^{\infty}\) and \(\{s_{1i}\}_{i\geq 1}\) with \(|x_{1i}|\to\infty\) as \(i\to\infty\) such that
\[\ell_{0}=d\int_{0}^{s_{1i}}\int_{\mathbb{R}}k_{1}(x_{1i}-y_{1})\gamma_{1}^{+}(\tau,y_{1})dy_{1}d\tau,\]
where \(t=s_{1i}\) denotes the moment when \(\gamma_{1}(s_{1i},x_{1i})=0\) while \(\gamma_{1}(t,x_{1i})<0\) for \(0<t<s_{1i}\). To derive a contradiction, we need the following property
\[\lim_{|x_{1}|\to\infty}\int_{\Sigma_{1}^{\infty}}k_{1}(x_{1}-y_{1})dy_{1}=0, \tag{4.8}\]
which follows from the facts that \(k_{1}\in L^{1}(\mathbb{R})\) and \(|\Sigma_{1}^{\infty}|<+\infty\) due to (4.3). Thanks to (4.7) and (4.8), it follows that
\[d\int_{0}^{\infty}\int_{\mathbb{R}}k_{1}(x_{1i}-y_{1})\gamma_{1}^{+}(\tau,y_{1})dy_{1}d\tau\]
\[\leq d\int_{0}^{\infty}\int_{\Sigma_{1}(\tau)}k_{1}(x_{1i}-y_{1})\ell e^{\lambda_{p}\tau}\|\phi_{p}\|_{L^{\infty}(\tilde{\Sigma}_{D})}dy_{1}d\tau\]
\[\leq \frac{d\ell}{-\lambda_{p}}\|\phi_{p}\|_{L^{\infty}(\tilde{\Sigma}_{D})}\int_{\Sigma_{1}^{\infty}}k_{1}(x_{1i}-y_{1})dy_{1}\to 0\quad\text{as }i\to\infty.\]
This contradicts the existence of \(s_{1i}\) when \(i\) is large enough. Therefore, \(\Sigma_{1}^{\infty}\) is bounded and the desired conclusion follows.
### Continuous expansion and jumping phenomena
We first verify Theorem 1.6, which is about the continuous expansion of \(\Omega(t)\) under the extra conditions that the initial domain \(\bar{\Omega}_{0}\) is convex and the kernel function \(k\) satisfies **(K1)**.
Proof of Theorem 1.6.: Suppose that \(\Omega(t)\) first jumps at time \(t=T\), i.e., \(\Omega(t)\) is connected for \(t<T\) while \(\Omega(T)\) is disconnected. Let \(\Omega_{1}(T)\) denote the connected domain which contains \(\Omega(t)\) for \(t<T\). Choose \(y_{T}\in\Omega(T)\setminus\Omega_{1}(T)\). Since \(\Omega(0)=\bar{\Omega}_{0}\) is convex, there exists a unique \(x_{T}\in\partial\Omega(0)\) such that
\[|x_{T}-y_{T}|=\mathrm{dist}(y_{T},\Omega(0)).\]
Moreover, there exists \(z_{T}\), which lies on the line segment \(\overline{x_{T}y_{T}}\) and satisfies \(z_{T}\not\in\Omega(T)\). Let \(\ell\) denote the hyperplane which passes through \((z_{T}+y_{T})/2\) and is perpendicular to the line segment \(\overline{x_{T}y_{T}}\).
W.l.o.g., assume that \(\ell=\{x\in\mathbb{R}^{n}\ |\ x_{1}=0\}\), where \(x=(x_{1},x_{2},...,x_{n})\), and that \(x_{T1}<0\), where \(x_{T}=(x_{T1},x_{T2},...,x_{Tn})\). Since \(\Omega(0)\) is convex, obviously, \(\mathrm{dist}(\ell,\Omega(0))>0\). For simplicity, denote
\[\mathbb{R}^{n}_{-}=\{x\in\mathbb{R}^{n}\ |\ x_{1}<0\},\ \mathbb{R}^{n}_{+}=\{x\in\mathbb{R}^{n}\ |\ x_{1}>0\},\ \tilde{x}=(-x_{1},x_{2},...,x_{n}),\]
and set
\[w(t,x)=\gamma(t,x)-\gamma(t,\tilde{x}),\ x\in\mathbb{R}^{n}_{-}.\]
Then \(y_{T}=\tilde{z}_{T}\) and
\[w(T,z_{T})=\gamma(T,z_{T})-\gamma(T,y_{T})<0. \tag{4.9}\]
Next it is standard to compute that for \(x\in\mathbb{R}^{n}_{-}\),
\[w_{t}(t,x)=\gamma_{t}(t,x)-\gamma_{t}(t,\tilde{x})\]
\[= d\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(t,y)dy-d\int_{\mathbb{R}^{n}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-d\gamma^{+}(t,x)+d\gamma^{+}(t,\tilde{x})\]
\[= d\int_{\mathbb{R}^{n}_{-}}k(x-y)\gamma^{+}(t,y)dy+d\int_{\mathbb{R}^{n}_{+}}k(x-y)\gamma^{+}(t,y)dy\]
\[-d\int_{\mathbb{R}^{n}_{-}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-d\int_{\mathbb{R}^{n}_{+}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-c(t,x)w(t,x)\]
\[= d\int_{\mathbb{R}^{n}_{-}}k(x-y)\gamma^{+}(t,y)dy+d\int_{\mathbb{R}^{n}_{-}}k(x-\tilde{y})\gamma^{+}(t,\tilde{y})dy\]
\[-d\int_{\mathbb{R}^{n}_{-}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-d\int_{\mathbb{R}^{n}_{-}}k(\tilde{x}-\tilde{y})\gamma^{+}(t,\tilde{y})dy-c(t,x)w(t,x)\]
\[= \int_{\mathbb{R}^{n}_{-}}\left[k(x-y)-k(\tilde{x}-y)\right]c(t,y)w(t,y)dy-c(t,x)w(t,x),\]
where
\[c(t,x)=\frac{d\gamma^{+}(t,x)-d\gamma^{+}(t,\tilde{x})}{\gamma(t,x)-\gamma(t,\tilde{x})},\]
and \(k(x-y)-k(\tilde{x}-y)\geq 0\) for \(x,y\in\mathbb{R}^{n}_{-}\) since \(k(x)\) is decreasing in \(|x|\). Moreover, for \(x\in\ell\), \(w(t,x)=0\), and for \(x\in\mathbb{R}^{n}_{-}\),
\[w(0,x)=\gamma(0,x)-\gamma(0,\tilde{x})\geq 0,\]
since \(\Omega(0)\subseteq\mathbb{R}_{-}^{n}\). Thus by the comparison principle, one has \(w(t,x)\geq 0\) for \(t>0\), \(x\in\mathbb{R}_{-}^{n}\), which contradicts (4.9). The proof is complete.
Notice that in Theorem 1.6, extra conditions on kernel functions and initial domains are needed to guarantee the continuous expansion of \(\Omega(t)\). Now we construct two examples to show that when one of these two extra conditions in Theorem 1.6 is violated, _jumping phenomena_ could happen.
**Example 1**.: _This example is about the assumption **(K1)** on kernel functions._
For simplicity, we focus on the one-dimensional case and assume that the initial domain is an interval. According to Theorem 1.6, if the kernel function \(k(x)\) is decreasing in \(|x|\), then \(\Omega(t)\) expands continuously. On the contrary, in this example, we choose a kernel function, which is not decreasing in \(|x|\), and the jumping phenomenon happens. Define
\[k_{*}(x)=\begin{cases}\frac{1}{4\sigma}&1-\sigma\leq|x|\leq 1+\sigma,\\ 0&\text{otherwise},\end{cases}\]
where \(0<\sigma<\frac{1}{4}\) is small. Consider the problem
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}}k_{j}(x-y)\gamma^{+}(t,y)dy-\gamma^{+}(t,x)&t>0,\ x\in\mathbb{R},\\ \gamma(0,x)=c_{0}&x\in\left(-\frac{1}{4},\frac{1}{4}\right),\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4}\right),\end{cases} \tag{4.10}\]
where \(c_{0}\), \(\ell_{0}\) are positive constants, \(k_{j}\) satisfies the assumption **(K)** and
\[\lim_{j\to\infty}\|k_{j}-k_{*}\|_{L^{1}(\mathbb{R})}=0.\]
Let \(\gamma_{j}\) denote the solution to the problem (4.10).
We claim that _if \(2\ell_{0}<c_{0}\), \(0<\sigma<\frac{1}{4}\), then the jumping phenomenon happens for (4.10) when \(j\) is sufficiently large._
To prove the claim, first consider the problem
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}}k_{*}(x-y)\gamma^{+}(t,y)dy-\gamma^{+}(t,x)&t>0,\ x\in\mathbb{R},\\ \gamma(0,x)=c_{0}&x\in\left(-\frac{1}{4},\frac{1}{4}\right),\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4}\right).\end{cases} \tag{4.11}\]
The existence and uniqueness of the solution, denoted by \(\gamma_{*}\), to this problem can be verified by arguments similar to those in the proof of Theorem 1.1. Moreover, similar to the derivation of (2.3) in the proof of Theorem 1.1, one has
\[\lim_{j\to\infty}\|\gamma_{j}-\gamma_{*}\|_{L^{\infty}(\mathbb{R})}=0.\]
Hence it suffices to show that the jumping phenomenon happens in the limiting problem (4.11) if \(2\ell_{0}<c_{0}\), \(0<\sigma<\frac{1}{4}\).
Let \(t_{1}\) denote the moment when \(\gamma_{*}\) first touches zero somewhere in \(\mathbb{R}\setminus(-\frac{1}{4},\frac{1}{4})\). For \(x\in\left(-\frac{1}{4},\frac{1}{4}\right)\), \(0<t<t_{1}\), it is easy to see that \(\int_{\mathbb{R}}k_{*}(x-y)\gamma_{*}^{+}(t,y)dy=0\) due to the definition of \(k_{*}\) and the choice of \(\sigma\). Thus
\[\begin{cases}(\gamma_{*}^{+})_{t}(t,x)=-\gamma_{*}^{+}(t,x)&0<t<t_{1},\ x\in\left(-\frac{1}{4},\frac{1}{4}\right),\\ \gamma_{*}^{+}(0,x)=c_{0}&x\in\left(-\frac{1}{4},\frac{1}{4}\right),\end{cases}\]
so that
\[\gamma_{*}^{+}(t,x)=c_{0}e^{-t}\ \ \text{for}\ 0<t<t_{1},\ x\in\left(-\frac{1}{4},\frac{1}{4}\right).\]
Then for any \(x^{*}\in\{x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4}\right)\ |\ \gamma_{*}(t_{1},x)=0\}\), we compute
\[\ell_{0}=\int_{0}^{t_{1}}\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^{*}-y)c_{0}e^{-t}dydt=c_{0}\left(1-e^{-t_{1}}\right)\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^{*}-y)dy. \tag{4.12}\]
According to the definition of \(k_{*}\), it is routine to verify that \(\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^{*}-y)dy\leq\frac{1}{2}\) and
\[\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^{*}-y)dy=\frac{1}{2}\ \ \text{if and only if}\ \ x^{*}\in\left[-\frac{5}{4}+\sigma,-\frac{3}{4}-\sigma\right]\bigcup\left[\frac{3}{4}+\sigma,\frac{5}{4}-\sigma\right].\]
Hence when \(2\ell_{0}<c_{0}\), one has
\[\left\{x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4}\right)\ \Big{|}\ \gamma_{*}(t_{1},x)=0\right\}=\left[-\frac{5}{4}+\sigma,-\frac{3}{4}-\sigma\right]\bigcup\left[\frac{3}{4}+\sigma,\frac{5}{4}-\sigma\right],\]
where
\[t_{1}=-\ln\left(1-\frac{2\ell_{0}}{c_{0}}\right).\]
Therefore, the jumping phenomenon happens in the problem (4.11) provided that \(0<\sigma<\frac{1}{4}\) and \(2\ell_{0}<c_{0}\).
**Example 2**.: _This example is about the conditions on the shape of initial domains._
To emphasize the effect of initial domains, we still require that the kernel functions constructed in this example satisfy the requirements on kernel functions in Theorem 1.6. Then according to Theorem 1.6, if the initial domain is convex, \(\Omega(t)\) expands continuously. However, in the following example, the initial domain is non-convex and the jumping phenomenon happens. Define
\[\tilde{k}(x)=\begin{cases}2^{-n}\omega_{n}^{-1}&|x|\leq 2,\\ 0&\text{otherwise},\end{cases}\]
where \(\omega_{n}\) denotes the volume of the unit ball in \(\mathbb{R}^{n}\).
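As a quick consistency check, \(\tilde{k}\) is indeed a probability kernel:
\[\int_{\mathbb{R}^{n}}\tilde{k}(x)dx=2^{-n}\omega_{n}^{-1}\,|B_{2}(0)|=2^{-n}\omega_{n}^{-1}\cdot 2^{n}\omega_{n}=1.\]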
Consider the problem
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}^{n}}\tilde{k}_{j}(x-y)\gamma^{+}(t,y)dy-\gamma^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=c_{0}&x\in\bar{\Omega}_{0},\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\end{cases} \tag{4.13}\]
where \(c_{0}\), \(\ell_{0}\) are positive constants, \(n\geq 2\), \(\bar{\Omega}_{0}=\bar{B}_{2}(0)\setminus B_{1}(0)\), the kernel function \(\tilde{k}_{j}\) satisfies the conditions for kernel functions in Theorem 1.6 and
\[\lim_{j\to\infty}\|\tilde{k}_{j}-\tilde{k}\|_{L^{1}(\mathbb{R}^{n})}=0.\]
The existence of such kernel functions is obvious.
We claim that _if \((1-2^{-n})\,c_{0}>\ell_{0}\), then for \(j\) sufficiently large, the jumping phenomenon happens for (4.13)._
Similar to **Example 1**, to prove this claim, it suffices to show that the jumping phenomenon happens for the following model
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}^{n}}\tilde{k}(x-y)\gamma^{+}(t,y)dy-\gamma^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=c_{0}&x\in\bar{\Omega}_{0},\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\end{cases} \tag{4.14}\]
if \((1-2^{-n})\,c_{0}>\ell_{0}\).
Now let \(\tilde{\gamma}\) denote the solution to the problem (4.14) and let \(t_{2}\), if it exists, denote the moment when the solution \(\tilde{\gamma}\) first touches zero somewhere in \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\). When \(0<t<t_{2}\), thanks to the definition of \(\tilde{k}\), it is easy to check that for \(x\neq 0\),
\[\int_{\mathbb{R}^{n}}\tilde{k}(x-y)\tilde{\gamma}^{+}(t,y)dy=\int_{\bar{B}_{2}(0)\setminus B_{1}(0)}\tilde{k}(x-y)\tilde{\gamma}(t,y)dy<\int_{\bar{B}_{2}(0)\setminus B_{1}(0)}\tilde{k}(-y)\tilde{\gamma}(t,y)dy.\]
This indicates that if \(t_{2}<+\infty\), then at \(t=t_{2}\), \(\tilde{\gamma}\) touches zero only at \(x=0\), i.e., the jumping phenomenon happens.
It remains to show the existence of \(t_{2}<+\infty\). Suppose that \(t_{2}=+\infty\). Based on the definition of \(\tilde{k}\) and the first equation in (4.14), it is easy to see that
\[\ell_{0}\geq\int_{0}^{+\infty}\int_{\bar{B}_{2}(0)\setminus B_{1}(0)}\tilde{k}(-y)\tilde{\gamma}^{+}(t,y)dydt>\int_{0}^{+\infty}\left(1-2^{-n}\right)c_{0}e^{-t}dt=\left(1-2^{-n}\right)c_{0}>\ell_{0}.\]
This is impossible. The proof is complete.
## Appendix A Important equivalent characterization
In this appendix, we include the proof of Proposition 1.2.
Proof of Proposition 1.2.: Assume that (i) holds. For clarity, set \(w=(w_{1},...,w_{n})=\dfrac{\xi}{|\xi|}\). Then we compute as follows.
\[\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}} = \dfrac{1}{|\xi|^{2}}\left(1-\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}k(x)dx\right)\]
\[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}ix\cdot w\int_{0}^{|\xi|}e^{-(ix\cdot w)\eta}d\eta k(x)dx\]
\[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}ix\cdot w\int_{0}^{|\xi|}\left(e^{-(ix\cdot w)\eta}-1\right)d\eta k(x)dx\]
\[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}(x\cdot w)^{2}\int_{0}^{|\xi|}\int_{0}^{\eta}e^{-(ix\cdot w)\tau}d\tau d\eta k(x)dx,\]
where the third equality is due to the first equality in (i).
Notice that thanks to the assumptions in (i), we have
\[\int_{\mathbb{R}^{n}}(x\cdot w)^{2}k(x)dx=\dfrac{1}{n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx.\]
Then it follows that
\[\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}}-\dfrac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx\]
\[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}(x\cdot w)^{2}\int_{0}^{|\xi|}\int_{0}^{\eta}e^{-(ix\cdot w)\tau}d\tau d\eta k(x)dx-\dfrac{1}{2}\int_{\mathbb{R}^{n}}(x\cdot w)^{2}k(x)dx\]
\[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}(x\cdot w)^{2}\int_{0}^{|\xi|}\int_{0}^{\eta}\left(e^{-(ix\cdot w)\tau}-1\right)d\tau d\eta k(x)dx.\]
The Lebesgue dominated convergence theorem yields that
\[\lim_{\xi\to 0}\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}}-\dfrac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx=0.\]
Thus (ii) is verified and \(\dfrac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx=A\).
Assume that (ii) holds. First choose \(\xi=(0,...,\xi_{j},...,0)\), \(1\leq j\leq n\), with \(\xi_{j}>0\), then
\[\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}}=\dfrac{1}{|\xi|^{2}}\left(1-\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}k(x)dx\right)=\dfrac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\left(1-e^{-ix_{j}\xi_{j}}\right)k(x)dx\]
(A.1)
\[= \dfrac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\left(1-\cos(x_{j}\xi_{j})+i\sin(x_{j}\xi_{j})\right)k(x)dx.\]
For any \(R>0\), we have
\[\dfrac{|1-\hat{k}(\xi)|}{|\xi|^{2}}\geq\dfrac{1}{\xi_{j}^{2}}\int_{B_{R}(0)}\left(1-\cos(x_{j}\xi_{j})\right)k(x)dx,\]
which yields that
\[\lim_{\xi_{j}\to 0}\frac{|1-\hat{k}(\xi)|}{|\xi|^{2}}\geq\frac{1}{2}\int_{B_{R}(0)}x_{j}^{2}k(x)dx.\]
Since \(R\) is arbitrary and \(1\leq j\leq n\), one sees that
\[\frac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx\leq A<+\infty.\]
(A.2)
This also indicates that
\[\int_{\mathbb{R}^{n}}|x|k(x)dx<+\infty.\]
(A.3)
Next, still choose \(\xi=(0,...,\xi_{j},...,0)\), \(1\leq j\leq n\), with \(\xi_{j}>0\). Notice that
\[\frac{1-\hat{k}(\xi)}{|\xi|}=\frac{1}{\xi_{j}}\int_{\mathbb{R}^{n}}\left(1-e^{-ix_{j}\xi_{j}}\right)k(x)dx=\frac{1}{\xi_{j}}\int_{\mathbb{R}^{n}}ix_{j}\int_{0}^{\xi_{j}}e^{-ix_{j}\eta}d\eta k(x)dx.\]
Due to (A.3), the Lebesgue dominated convergence theorem can be applied and one sees that
\[0=\lim_{\xi\to 0}\frac{1-\hat{k}(\xi)}{|\xi|}=\int_{\mathbb{R}^{n}}ix_{j}k(x)dx,\]
i.e.,
\[\int_{\mathbb{R}^{n}}x_{j}k(x)dx=0,\ 1\leq j\leq n.\]
(A.4)
Now thanks to (A.4), we have
\[\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\sin(x_{j}\xi_{j})k(x)dx\]
\[= \frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}x_{j}\int_{0}^{\xi_{j}}\cos(x_{j}\eta)d\eta k(x)dx=\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}x_{j}\int_{0}^{\xi_{j}}\left(\cos(x_{j}\eta)-1\right)d\eta k(x)dx\]
\[= \frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}-x_{j}^{2}\int_{0}^{\xi_{j}}\int_{0}^{\eta}\sin(x_{j}\tau)d\tau d\eta k(x)dx.\]
Thus (A.2) and the Lebesgue dominated convergence theorem imply that
\[\lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\sin(x_{j}\xi_{j})k(x)dx=0.\]
Now in (A.1), letting \(\xi_{j}\to 0\), again it follows from (A.2) and the Lebesgue dominated convergence theorem that
\[A = \lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\left(1-\cos(x_{j}\xi_{j})\right)k(x)dx=\lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}x_{j}\int_{0}^{\xi_{j}}\sin(x_{j}\eta)d\eta k(x)dx\]
\[= \lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}x_{j}^{2}\int_{0}^{\xi_{j}}\int_{0}^{\eta}\cos(x_{j}\tau)d\tau d\eta k(x)dx=\frac{1}{2}\int_{\mathbb{R}^{n}}x_{j}^{2}k(x)dx.\]
Hence
\[\int_{\mathbb{R}^{n}}x_{j}^{2}k(x)dx=2A=\frac{1}{n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx,\ \ 1\leq j\leq n.\]
(A.5)
Finally, it remains to show that \(\int_{\mathbb{R}^{n}}x_{j}x_{h}k(x)dx=0\), \(1\leq j,h\leq n,\ j\neq h.\) Choose \(\xi=(0,...,\xi_{j},...,\xi_{h},...,0)\) with \(j<h\), \(\xi_{j}>0\) and \(\xi_{h}=\lambda\xi_{j}\). Then thanks to (A.4) and (A.5), it follows that
\[\frac{1-\hat{k}(\xi)}{|\xi|^{2}}=\frac{1}{|\xi|^{2}}\left(1-\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}k(x)dx\right)=\frac{1}{\xi_{j}^{2}+\lambda^{2}\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\left(1-e^{-i(x_{j}+\lambda x_{h})\xi_{j}}\right)k(x)dx\]
\[= \frac{1}{\xi_{j}^{2}+\lambda^{2}\xi_{j}^{2}}\int_{\mathbb{R}^{n}}i\left(x_{j}+\lambda x_{h}\right)\int_{0}^{\xi_{j}}e^{-i(x_{j}+\lambda x_{h})\eta}d\eta k(x)dx\]
\[= \frac{1}{\xi_{j}^{2}+\lambda^{2}\xi_{j}^{2}}\int_{\mathbb{R}^{n}}i\left(x_{j}+\lambda x_{h}\right)\int_{0}^{\xi_{j}}\left(e^{-i(x_{j}+\lambda x_{h})\eta}-1\right)d\eta k(x)dx\]
\[= \frac{1}{\xi_{j}^{2}+\lambda^{2}\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\left(x_{j}+\lambda x_{h}\right)^{2}\int_{0}^{\xi_{j}}\int_{0}^{\eta}e^{-i(x_{j}+\lambda x_{h})\tau}d\tau d\eta k(x)dx.\]
Letting \(\xi_{j}\to 0\), the Lebesgue dominated convergence theorem and (A.5) imply that
\[A=\frac{1}{2(1+\lambda^{2})}\int_{\mathbb{R}^{n}}\left(x_{j}+\lambda x_{h}\right)^{2}k(x)dx=A+\frac{\lambda}{1+\lambda^{2}}\int_{\mathbb{R}^{n}}x_{j}x_{h}k(x)dx.\]
This indicates that
\[\int_{\mathbb{R}^{n}}x_{j}x_{h}k(x)dx=0,\]
since \(\lambda\) is an arbitrary nonzero constant. The proof is complete.
**Data Availability Statement** The authors confirm that this manuscript has no associated data.
This paper proposes a new nonlocal model for the two-phase Stefan problem. The nonlocal model is natural in the sense that the nonlocal version of the one-phase Stefan problem arises as a special case of it. In addition, we derive an optimal condition for the convergence of the nonlocal one-phase Stefan problem to the local one, and give equivalent formulations of this optimal condition. Furthermore, we present necessary conditions for the continuous expansion of the free boundary, and construct examples in which a jump phenomenon occurs when these conditions fail. This jump phenomenon is essentially induced by the nonlocal diffusion and does not appear in the classical Stefan problem.
2309.09507
Pruning Large Language Models via Accuracy Predictor
Large language models (LLMs) containing tens of billions of parameters (or even more) have demonstrated impressive capabilities in various NLP tasks. However, the substantial model size poses challenges to training, inference, and deployment, so it is necessary to compress the model. At present, most model compression for LLMs requires manual design of pruning features, which has problems such as a complex optimization pipeline and difficulty in retaining the capabilities of certain parts of the model. Therefore, we propose a novel pruning approach: first, a training set of a certain number of architecture-accuracy pairs is established, and then a non-neural model is trained as an accuracy predictor. Using the accuracy predictor to further optimize the search space and search, the optimal model can be selected automatically. Experiments show that our proposed approach is effective and efficient. Compared with the baseline, the perplexity (PPL) on Wikitext2 and PTB dropped by 9.48% and 5.76% respectively, and the average accuracy on MMLU increased by 6.28%.
Yupeng Ji, Yibo Cao, Jiucai Liu
2023-09-18T06:38:24
http://arxiv.org/abs/2309.09507v2
# Pruning Large Language Models via Accuracy Predictor

###### Abstract

Large language models (LLMs) containing tens of billions of parameters (or even more) have demonstrated impressive capabilities in various NLP tasks. However, the substantial model size poses challenges to training, inference, and deployment, so it is necessary to compress the model. At present, most model compression for LLMs requires manual design of pruning features, which has problems such as a complex optimization pipeline and difficulty in retaining the capabilities of certain parts of the model. Therefore, we propose a novel pruning approach: first, a training set of a certain number of architecture-accuracy pairs is established, and then a non-neural model is trained as an accuracy predictor. Using the accuracy predictor to further optimize the search space and search, the optimal model can be selected automatically. Experiments show that our proposed approach is effective and efficient. Compared with the baseline, the perplexity (PPL) on Wikitext2 and PTB dropped by 9.48% and 5.76% respectively, and the average accuracy on MMLU increased by 6.28%.

Yupeng Ji\({}^{1}\), Yibo Cao\({}^{2}\), Jiucai Liu\({}^{3}\) \({}^{1,3}\)Tongji University, Shanghai, China \({}^{2}\)Chongqing University, Chongqing, China

neural architecture search, large language model, structural pruning, machine learning

Footnote 1: Corresponding Author

## 1 Introduction

Recently, LLMs have shown impressive reasoning and generation capabilities in various NLP tasks, and these capabilities are further enhanced as the number of model parameters increases [1, 2, 3, 4]. However, the huge computational and memory requirements are still a major obstacle to wider application, so it is necessary to compress LLMs to reduce costs [5, 6, 7, 8, 9, 10]. Some current works on LLM compression focus on model quantization [11, 12, 13], which quantizes model parameters into low-bit representations. In fact, another commonly used method for model compression is network pruning [14], which reduces the size of the model by deleting unimportant weights. Although some pruning work for LLMs has made progress, pruning strategies usually require manual design [15, 16, 17]. Specifically, the pruning ratio of the whole model or of each layer is preset, which makes the pruning results depend on the chosen hyperparameters. On the other hand, due to the model size and the wide range of applications of LLMs, manually designed pruning strategies still face challenges such as complex pipelines and poor suitability for downstream tasks. Neural architecture search (NAS) [18, 19, 20, 21, 22, 23], which aims to automatically find neural network architectures, has been applied to model pruning due to its effectiveness and simplicity. One common approach is to use an accuracy predictor to predict the accuracy of candidate pruned architectures in the search space, which saves the cost of evaluating these candidates [24, 25]. The key to this approach is training the accuracy predictor and building the required dataset. Accuracy predictors in previous works are mostly based on neural networks; although effective, they require a large number of architecture-accuracy pairs and careful design. However, for LLMs, even a single evaluation requires considerable computing resources, so it is unrealistic to build tens of thousands of architecture-accuracy pairs.
In this paper, we propose an alternative approach that first builds a certain number of architecture-accuracy pairs for LLMs (usually a few hundred), and then trains an accuracy predictor based on non-neural models (e.g., tree-based models) to guide LLM structured pruning. The specific algorithm is as follows: (1) Combine expert knowledge to limit the scope of some features (such as layer type and layer ID), narrow the search space, and perform random sampling to obtain the model architectures to be pruned. (2) According to the pruning requirements of each model architecture, prune the model, and evaluate the pruned model from multiple aspects to obtain its accuracy. Then train an accuracy predictor based on gradient boosting decision trees (GBDT) [26]. (3) Use the trained GBDT model to predict more architectures in the search space, and select the architectures with the top predicted accuracy for further evaluation.

The main contributions of our approach are as follows: (1) We explore the use of GBDT as an accuracy predictor, establish the relationship between the model architecture to be pruned and the model performance, and improve the efficiency of searching model architectures. (2) We use the trained accuracy predictor to further guide the process of model pruning, and conduct a refined search based on the required performance of a certain aspect of the model. This solves the problem of the complex optimization pipeline caused by manually designing pruning features, making it easier to find the optimal model architecture.

## 2 Methodology

Sec.2.1 describes the specific process of building architecture-accuracy pairs and the accuracy predictor, including how to narrow the pruning search space, how to estimate the importance of the model parameters, how to evaluate the pruned model, and how to train the accuracy predictor. Sec.2.2 introduces the use of GBDT as a predictor for architecture search, which makes the overall search process more efficient and effective. Fig.1 shows the overview of our approach.

### Building Architecture-Accuracy Pairs

**Narrow the Search Space** Usually, we prune the model according to a predefined pruning ratio for each layer and evaluate the pruned model, so that we obtain an architecture-accuracy pair; the problem is how to predefine the pruning ratio of each layer. Predicting the accuracy of all candidate architectures is infeasible because the search space is large in real applications. Therefore, we have to limit the IDs and types of the LLM layers to be pruned, and set a range for the pruning ratio (the proportion of the layer's parameters set to 0). Furthermore, adjacent \(k\) layers can be restricted to use the same pruning ratio. Finally, random sampling is performed to obtain the architectures.

**Parameters Importance Estimation** Consider a set of samples \(\mathcal{D}=\{(x_{i},y_{i}),i=1,2,...,N\}\); our goal is to remove the parameters that have the least impact, where the importance of the parameters can be represented by the deviation of the loss [27]. Specifically, as shown in Eq.(1), we use a Taylor expansion to approximate the change in the loss \(\Delta\mathcal{L}\) when pruning structures [28].

\[\Delta\mathcal{L}=\left(\frac{\partial\mathcal{L}}{\partial w}\right)^{T}\Delta w+\frac{1}{2}\Delta w^{T}\mathbf{H}\Delta w+R_{2}(w) \tag{1}\]

where \(R_{2}(w)\) is the remainder, \(\frac{\partial\mathcal{L}}{\partial w}\) is the first-order gradient of the loss function, and \(\mathbf{H}\) is the Hessian matrix containing second-order derivatives.
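In practice only the first-order term of Eq.(1) is kept, as justified next. As a hedged illustration (not the paper's exact implementation; the element-wise aggregation is one common choice), the resulting saliency can be computed as:

```python
import torch

def first_order_importance(loss: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Approximate |ΔL| when pruning `weight` (i.e., Δw = -w), keeping only
    the first-order term of Eq.(1): ΔL ≈ g^T Δw with g = ∂L/∂w."""
    (grad,) = torch.autograd.grad(loss, weight, retain_graph=True)
    return (grad * weight).abs().sum()  # larger value => more important to keep
```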
Although the second-order Taylor expansion contains more information, it requires additional memory and computational resources, which is expensive for LLMs. In contrast, the gradient information for the first-order Taylor expansion can be obtained from backpropagation, which is more efficient. Therefore, the first-order Taylor expansion is used for importance estimation.

**Evaluation of The Pruned LLMs** Currently, due to the recent emergence of LLMs and their excellent performance in multiple fields, scientific and comprehensive quantitative evaluation is still being explored [29]. We evaluate LLMs from two aspects: 1) Generation ability: zero-shot perplexity (PPL) analysis on Wikitext2 [30] and PTB [31]; the lower the value, the stronger the ability. 2) World knowledge and problem-solving ability: five-shot accuracy analysis on Massive Multitask Language Understanding (MMLU) [32], which covers 57 subjects across STEM, the humanities, the social sciences, and more; the higher the value, the stronger the ability. As the LLM evaluation mechanism continues to improve, more quantitative evaluation indicators can be added.

**Training Accuracy Predictor Based on GBDT** After obtaining \(M\) architecture-accuracy pairs, as shown in Tab.1, we train an accuracy predictor based on GBDT, with the goal of minimizing the difference between predicted accuracy and target accuracy. It should be noted that some data preprocessing, such as normalization, is necessary. We thus obtain architecture-accuracy pairs \(\mathcal{C}=\{\mathbf{X_{i}},\mathbf{Y_{i}}\}_{i=1}^{M}\), where \(\mathbf{X_{i}}=(x_{i1},x_{i2},...,x_{ip})\) is the architecture features, \(p\) is its dimension, \(\mathbf{Y_{i}}=(y_{i1},y_{i2},...,y_{iq})\) is the accuracy of the pruned LLM, \(q\) is its dimension, and \(M\) is the number of architecture-accuracy pairs.

### Using GBDT for Search Space Search

One important reason for using GBDT as the accuracy predictor is its interpretability. We can optimize the search space according to the feature importance of the GBDT model. Specifically, in order to maintain consistency with the training architecture-accuracy pairs \(\mathcal{C}\), we still need to comply with the requirements of Sec.2.1. Then, for features with higher importance, we try to limit the pruning ratio to a small range, while for features with lower importance, we can appropriately expand the range of the pruning ratio to control the global pruning ratio of the LLM.

Figure 1: The overview of our approach

After determining the search space, we predict the accuracy of \(L\) randomly sampled architectures. Then we evaluate the architectures with the top \(K\) predicted accuracies and choose the best one.

## 3 Experiments

We first introduce the experiment setup in Sec.3.1. Then we carry out experiments in Sec.3.2 and verify the contribution of our approach. Finally, we try to restore the performance of the pruned model in Sec.3.3.

### Experiment Setup

Our experiments are implemented with PyTorch, and the open-source LLM used for testing is LLAMA2-7B [4]. We randomly select 10 samples from Bookcorpus [33] for calculating the gradients.

### Experiment Results and Analysis

**Build Architectures-Accuracy Pairs** There are a total of 32 LlamaDecoderLayers in LLAMA2-7B. We assume that only the LlamaAttention and LlamaMLP modules are pruned, the layer IDs to be pruned are _[3, 29]_1, and the pruning ratio ranges from 0.1 to 0.5 with a step of 0.1.
In order to make the model more robust, during random sampling we restrict part of the samples so that each group of 4 adjacent layers uses the same pruning ratio, while the rest are unconstrained. The former yields 400 samples and the latter 125, for a total of 525 architecture-accuracy pairs, as shown in Tab.1.

Footnote 1: _[3, 29]_ represents a closed interval, that is, the value range is from 3 to 29.

**Train GBDT and Predict** After building the architecture-accuracy pairs, we divide them into training and validation sets at a ratio of 7:3, and a multi-output GBDT is trained (a minimal sketch is given at the end of this subsection). The metrics on the validation set are shown in Tab.2. Then we calculate the feature importance for PPL (Wikitext2 and PTB) and MMLU average accuracy, as shown in Fig.2. It can be seen that the importance of the Attention layers is much higher than that of the MLP layers. For PPL, the layers located at the head and tail are more important, while for MMLU, the importance of the middle layers is higher. This also reflects the different abilities of each layer in the LLM. In addition, for both Attention and MLP, the importance of layers _[19, 25]_ is low; they may represent capabilities of the model that we have not assessed.

When predicting, we partially align with the training set, e.g., pruning only Attention and MLP layers, and keeping the first 3 layers and the last 2 layers unchanged. The difference is that we adjust the pruning ratio ranges based on the computed feature importance. Specifically, the pruning ratio range of some Attention layers (_[3, 6]_, _[11, 18]_, _[26, 29]_) is narrowed to _[0.1, 0.3]_, and the pruning ratio range of some MLP and Attention layers (_[19, 25]_) is expanded to _[0.2, 0.7]_. We then conduct random sampling, obtain 10,000 model architectures, and make predictions. Based on different global pruning ratios, we selected the 18 architectures with the best predicted results for evaluation. As shown in Fig.3, we performed a second-order polynomial fit on the true evaluation results of these architectures and established a 95% confidence band and prediction band. It can be seen that the trained GBDT can be used as a predictor for unseen architectures, improving the efficiency of searching for better architectures in the search space.

We set the method based on LLM-Pruner [15] as the baseline. The only difference from our method is that the pruning ratio of each layer is set to be the same. We filter according to the global pruning ratio, and the final results are shown in Fig.4. It is normal for the results to differ under almost the same global pruning ratio, as the specific pruning ratios of each layer vary.
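As a hedged sketch of the predictor just described (the array shapes and the 27 per-layer features are placeholders, not the paper's exact feature encoding), a multi-output GBDT can be trained with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor

# X: (M, p) architecture features (e.g., per-layer pruning ratios);
# Y: (M, q) accuracies, e.g., [PPL_Wikitext2, PPL_PTB, MMLU_avg_acc].
X = np.random.rand(525, 27)  # placeholder for the 525 sampled architectures
Y = np.random.rand(525, 3)   # placeholder evaluation results
X_tr, X_va, Y_tr, Y_va = train_test_split(X, Y, test_size=0.3, random_state=0)

predictor = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
predictor.fit(X_tr, Y_tr)
r2 = predictor.score(X_va, Y_va)  # validation R^2, averaged over outputs

# Per-output feature importances, used to shrink or expand the
# per-layer pruning-ratio ranges before re-sampling the search space.
importances = np.stack([e.feature_importances_ for e in predictor.estimators_])
```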
It can be seen that, at almost the same global pruning ratio, the PPL of the best pruned model selected with GBDT is about 9.48% lower than the baseline on Wikitext2 and about 5.76% lower on PTB, and the average accuracy evaluated on MMLU is about 6.28% higher than the baseline.

Table 1: Architecture-accuracy pairs with architecture features \(A_{1}\) through \(A_{9}\) (the table body could not be recovered from the source).

### Fast Recovery with QLoRA

Furthermore, we explore restoring model performance under limited data with Quantized Low-Rank Adaptation (QLoRA) [34]. We select several pruned LLMs and collect 52k instruction samples. Tuning on these samples requires merely 4 hours on a single GPU with only 2 epochs. As shown in Tab.3, the PPL (Wikitext2), PPL (PTB), and average accuracy (MMLU) improved by an average of 31.23%, 37.5%, and 11.18%, respectively, after fine-tuning.

## 4 Conclusion

In this paper, we propose a novel pruning approach using a non-neural model for LLMs.
We introduce GBDT into the architecture search of LLMs by building a training set relating pruned-model architectures to accuracy, which improves the efficiency of evaluating candidate pruned models within the search space. In addition, we use the additional information learned by GBDT to prune the search space, and obtain a better-performing model under almost the same global pruning ratio. Our approach improves the efficiency and effectiveness of searching within the search space. Moreover, as more model evaluation indicators become available, our method remains transferable and resolves the poor adaptability of manually designed pruning features.

Figure 4: Comparison results of the pruned models chosen by GBDT and the baseline

Figure 3: Comparison of prediction accuracy using GBDT and true accuracy

\begin{table} \begin{tabular}{c c c c} \hline Global Pruning Ratio & \(PPL_{Wikitext2}\) & \(PPL_{PTB}\) & \(AVG\_ACC_{MMLU}\) \\ \hline 25.50\% & 23.50 (16.80) & 91.43 (58.43) & 0.0942 (0.1917) \\ 25.69\% & 23.71 (18.71) & 94.33 (61.96) & 0.0168 (0.1982) \\ 26.74\% & 29.92 (16.88) & 114.24 (60.48) & 0.0320 (0.1550) \\ 26.81\% & 23.92 (16.16) & 96.57 (58.76) & 0.0101 (0.2043) \\ 28.59\% & 24.51 (19.55) & 96.95 (58.54) & 0.0975 (0.2219) \\ 28.80\% & 27.09 (17.20) & 103.02 (62.09) & 0.1552 (0.1214) \\ 30.07\% & 35.34 (17.02) & 124.00 (58.75) & 0.0011 (0.1307) \\ 30.52\% & 34.97 (30.98) & 133.56 (110.01) & 0.0000 (0.0109) \\ \hline \end{tabular} \end{table} Table 3: Model performance before and after fine-tuning (values after fine-tuning in parentheses)
Large language models (LLMs) containing tens of billions of parameters (or even more) have demonstrated impressive capabilities in various NLP tasks. However, the large model size poses challenges to training, inference, and deployment, so the model needs to be compressed. At present, most model compression for LLMs requires manual design of pruning features, which suffers from problems such as a complex optimization pipeline and difficulty in retaining certain capabilities of the model. We therefore propose a novel pruning approach: first, a training set of a certain number of architecture-accuracy pairs is built, and then a non-neural model is trained as an accuracy predictor. Using the accuracy predictor, the search space can be further optimized and searched, and the optimal model can be selected automatically. Experimental results show that the proposed approach is effective and efficient.
2309.06634
$G$-Mapper: Learning a Cover in the Mapper Construction
The Mapper algorithm is a visualization technique in topological data analysis (TDA) that outputs a graph reflecting the structure of a given dataset. However, the Mapper algorithm requires tuning several parameters in order to generate a ``nice" Mapper graph. This paper focuses on selecting the cover parameter. We present an algorithm that optimizes the cover of a Mapper graph by repeatedly splitting a cover according to a statistical test for normality. Our algorithm is based on $G$-means clustering, which searches for the optimal number of clusters in $k$-means by iteratively applying the Anderson-Darling test. Our splitting procedure employs a Gaussian mixture model to carefully choose the cover according to the distribution of the given data. Experiments on synthetic and real-world datasets demonstrate that our algorithm generates covers so that the Mapper graphs retain the essence of the datasets, while also running significantly faster.
Enrique Alvarado, Robin Belton, Emily Fischer, Kang-Ju Lee, Sourabh Palande, Sarah Percival, Emilie Purvine
2023-09-12T22:51:16
http://arxiv.org/abs/2309.06634v2
# \(G\)-Mapper: Learning a Cover in the Mapper Construction

###### Abstract

The Mapper algorithm is a visualization technique in topological data analysis (TDA) that outputs a graph reflecting the structure of a given dataset. The Mapper algorithm requires tuning several parameters in order to generate a "nice" Mapper graph. This paper focuses on selecting the cover parameter. We present an algorithm that optimizes the cover of a Mapper graph by repeatedly splitting a cover according to a statistical test for normality. Our algorithm is based on \(G\)-means clustering, which searches for the optimal number of clusters in \(k\)-means by iteratively conducting the Anderson-Darling test. Our splitting procedure employs a Gaussian mixture model in order to carefully choose the cover based on the distribution of the given data. Experiments on synthetic and real-world datasets demonstrate that our algorithm generates covers such that the Mapper graphs retain the essence of the datasets.

## 1 Introduction

Topological data analysis (TDA) utilizes techniques from topology in order to extract valuable insights from a dataset. Topology studies properties of mathematical spaces that are preserved under continuous deformations, such as connected pieces or holes, and TDA uncovers these features in datasets. We refer the reader to [6] for an overview of this area. This paper focuses on _Mapper_, introduced in [37] by Singh, Memoli, and Carlsson, one of the fundamental visualization tools in TDA. Mapper has been shown to be useful in various applications including analyzing breast cancer microarray data [28], identifying diabetes subgroups [25], and studying divergence of COVID-19 trends [44].

Mapper is a network-based visualization technique for high-dimensional data. The algorithm takes as input a point cloud dataset and produces as output a graph reflecting the structure of the underlying data. To apply the Mapper algorithm, the user needs to determine the following parameters: a _lens_ (or _filter_) function \(f:X\to Y\) from a high-dimensional point cloud \(X\) to a lower-dimensional space \(Y\), an _(open) cover_ of the target space \(Y\), and a _clustering_ algorithm for cover elements. Optimizing these parameters is an essential part of generating a "nice" Mapper graph. We concentrate on tuning a _cover_ given by a collection of overlapping intervals. While the traditional Mapper algorithm takes a uniform cover, with the number of intervals and the overlap percentage between consecutive intervals specified by the user [37], sophisticated methods have recently been applied to optimize the cover of Mapper.

This paper is concerned with _clustering_ methods, which have been utilized for selecting a cover. Motivated by the _X-means_ algorithm [29] for estimating the number of clusters in _k-means_ according to the _Bayesian Information Criterion_ (BIC), Chalapathi, Zhou, and Wang devised a Mapper construction which repeatedly splits intervals of a coarse cover [8] according to information criteria. This Mapper algorithm is called the _Multipass AIC/BIC_. Our work is primarily inspired by this algorithm, and we present a Mapper algorithm utilizing another iterative clustering algorithm called _\(G\)-means_ [19]. Additionally, based on the _Fuzzy \(C\)-means_ algorithm [15, 3], a centroid-based overlapping clustering method, Bui et al.
presented a Mapper construction called _F-Mapper_ that takes the clusters obtained by the algorithm applied to \(f(X)\) as a cover of the Mapper [5].

### Contributions

We propose a new Mapper construction algorithm called _\(G\)-Mapper_ for optimizing a cover of the Mapper graph based on \(G\)-means clustering [19]. The \(G\)-means clustering algorithm aims to learn the number \(k\) of clusters in the \(k\)-means clustering algorithm according to a statistical test, called the _Anderson-Darling_ test, of the hypothesis that the points in a cluster follow a _Gaussian_ distribution. Our algorithm iteratively splits cover elements, with each splitting decision determined by the Anderson-Darling score. For the split itself, we divide each cover element into two overlapping intervals employing a _Gaussian mixture model_ (GMM), so that the splits are made according to the characteristics of the cover element rather than yielding uniform intervals. This procedure allows us to take variance into account when forming cover elements, making our algorithm perform well without initialization of the cover.

Our \(G\)-Mapper integrates ideas from the current state-of-the-art techniques to fill in the gaps of each method. To find the number of intervals in a cover, the Multipass AIC/BIC method iteratively splits cover elements using information criteria. However, experiments reveal that \(X\)-means, which is based on these criteria, does not perform well when the given data is non-spherical or high-dimensional [19, 20, 36]. Additionally, when a cover element splits, two intervals of a set uniform length are created. In contrast, \(F\)-Mapper takes the data into account to determine where intervals in the cover should be located, but the number of intervals must be decided before starting. \(G\)-Mapper finds the number of intervals by applying an iterative splitting procedure with the Anderson-Darling test. \(G\)-means, using this statistical test, experimentally works well even for non-spherical and high-dimensional data [19]. Moreover, our algorithm splits a cover element into two intervals with the GMM, taking the distribution of the data into consideration.

By applying the \(G\)-Mapper algorithm to several synthetic and real-world datasets, we demonstrate that the algorithm generates covers such that the corresponding Mapper graphs maintain the essence of the datasets. A comparison of Mapper graphs generated by \(G\)-Mapper and Multipass BIC indicates that our algorithm captures characteristics of the datasets that are not detected by the other algorithm, performs better even for high-dimensional datasets, and runs significantly faster. Experiments also reveal that while Multipass BIC requires initialization of the cover, \(G\)-Mapper does not, thanks to utilizing the GMM. In addition, we illustrate that the number of intervals of covers produced by \(G\)-Mapper can be utilized as input to other Mapper algorithms such as \(F\)-Mapper. The code will be publicly available.

### Related Work

Following the original Mapper construction algorithm [37], most implementations of Mapper, including [41, 39], construct the cover using open intervals or hypercubes of a fixed length with a fixed amount of overlap. However, the optimal number of intervals or amount of overlap is often unknown, and the Mapper output is very sensitive to these parameters.
The implementation of Mapper in Giotto-TDA [39] has a _balanced cover_ method in which the user specifies the number of intervals and the algorithm finds a cover with the given number of intervals such that each interval covers the same number of points. This parameter is also difficult to know a priori, and the method may not generate an optimal cover.

Statistical methods have been utilized for optimizing a cover in the Mapper construction algorithm. Carriere, Michel, and Oudot introduced a statistical method to select these parameters [7]. The main idea is to sweep through various Mapper parameters to find the "best" Mapper graph that is structurally stable and close to a particular _Reeb graph_ [33]. _Extended persistence_ [12], developed by Cohen-Steiner, Edelsbrunner, and Harer, is used to find this optimal Mapper graph. This method requires the user to know the Reeb graph to compare the Mapper graph against, and relies on independent sampling conditions of the point cloud. Clustering methods such as fuzzy \(C\)-means [15, 3] and \(X\)-means [29] have been used for optimizing an open cover [5, 8]. These methods were already mentioned in the third paragraph of Section 1. The cover-optimizing algorithm we propose is motivated by a clustering method, \(G\)-means clustering [19], which uses a statistical test.

## 2 Background

This section is devoted to reviewing the Mapper construction in Section 2.1 and the statistical tools used in our proposed algorithm (the \(G\)-Mapper algorithm) to tune the cover parameter. We focus on the cover constructed from a set of points in \(\mathbb{R}\). The central idea is to form cover elements (intervals) around points that appear to be normally distributed. In order to test if a set of points in an interval follows a Gaussian distribution, we apply the Anderson-Darling test, which we explain in Section 2.2. If the statistical test indicates the points do not follow a Gaussian distribution, we split the interval into two intervals using a Gaussian mixture model, reviewed in Section 2.3. Section 2.4 describes the \(G\)-means clustering inspiring our \(G\)-Mapper algorithm.

### Mapper

The Mapper algorithm, first introduced in [37], generates a graph or network, called a Mapper graph. For a given dataset \(X\), constructing a Mapper graph consists of the following procedure, which we illustrate in Example 2.1 (a minimal sketch of the pipeline is also given after the list).

1. Define a lens (or filter) function \(f:X\to Y\) from the point cloud \(X\) to a lower-dimensional space \(Y\). In this paper, we focus on the case where the target space \(Y\) is a subset of \(\mathbb{R}\), i.e., the space \(Y\) is of dimension \(1\).
2. Construct a cover \(\mathcal{U}=\{\,U_{i}\,|\,U_{i}\text{ is open for }i\in I\}\) of the target space \(Y\) given in Step 1, i.e., \(Y\subset\bigcup_{i\in I}U_{i}\). For a one-dimensional space \(Y\), a cover of \(Y\) consists of overlapping intervals.
3. For each element \(U_{i}\) of the cover, apply a clustering algorithm to the pre-image \(f^{-1}(U_{i})\) of \(U_{i}\) under the lens function \(f\). We use a density-based clustering algorithm called DBSCAN [17]. The algorithm is a popular choice for Mapper because it does not require the number of clusters to be pre-determined and it detects arbitrarily shaped clusters. DBSCAN has two main parameters: epsilon (\(\varepsilon\)) and minimum samples (MinPts).
4. Create the Mapper graph whose vertices are the clusters found in Step 3, with an edge between two vertices if the two corresponding clusters share data points.
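The following is a minimal sketch of Steps 1-4 for a 1-dimensional lens (our own illustration, assuming scikit-learn's DBSCAN; it is not the implementation used in the paper):

```python
import itertools
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_graph(X, lens, cover, eps=0.1, min_pts=5):
    """Steps 1-4: pull back each interval of the cover, cluster with DBSCAN,
    and connect clusters that share data points (1-skeleton of the nerve)."""
    f = lens(X)                                   # Step 1: 1-d lens values
    nodes = []                                    # each node = set of point indices
    for lo, hi in cover:                          # Step 2: overlapping intervals
        idx = np.where((f >= lo) & (f <= hi))[0]  # Step 3: pre-image of the interval
        if len(idx) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X[idx])
        for lab in set(labels) - {-1}:            # -1 marks DBSCAN noise points
            nodes.append(set(idx[labels == lab]))
    edges = [(i, j)                               # Step 4: nerve edges
             for i, j in itertools.combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]
    return nodes, edges
```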
Then the output is a simplified representation of the dataset \(X\). Note that the Mapper graph is the \(1\)-skeleton of the nerve of the cover of \(X\) generated in Step 3. In Step 2, the conventional Mapper algorithm [37] employs covers consisting of intervals of uniform length, with overlap \(g\) between every two consecutive intervals. The parameter \(g\) and the number \(r\) of intervals are referred to as the _gain_ and _resolution_, respectively. To optimize a cover in the conventional Mapper algorithm, the user should tune these two parameters. We illustrate the Mapper construction algorithm by applying it to a toy dataset in Example 2.1.

**Example 2.1** (Mapper Graph).: Let \(X\) be the set of points sampled from a circle of radius \(1/2\) with center at \((1/2,1/2)\). Refer to Figure 1.

1. We define the lens function \(f:X\to Y\) to be the projection map onto the first coordinate with the target space \(Y=[-0.03,1.05]\).
2. The cover \(\{U_{i}\}_{i=1}^{3}\) is chosen to be three overlapping intervals of \(Y\). Specifically, we set \(U_{1}=[-0.03,0.39)\), \(U_{2}=(0.30,0.72)\), and \(U_{3}=(0.64,1.05]\). Consecutive intervals overlap by \(20\%\).
3. The DBSCAN clustering algorithm generates four clusters, where each of \(f^{-1}(U_{1})\) and \(f^{-1}(U_{3})\) forms a cluster and \(f^{-1}(U_{2})\) is separated into two clusters based on whether the points are in the upper or lower part of the circle.
4. The Mapper graph constructed in Step 4 is a cycle graph with four vertices and four edges. The color of each vertex in the Mapper graph is the average value of the lens function, with the rainbow color map.

### Statistical test for a Gaussian distribution

The Anderson-Darling test is a statistical test to determine if data follows a Gaussian distribution [1, 38]. The test is similar to two other normality tests, the Kolmogorov-Smirnov test [27] and the Shapiro-Wilk test [35], in that it computes test statistics based on the empirical distribution function (EDF). Let \(X=\{x_{i}\}_{i=1}^{n}\) be a given dataset. We standardize the set to yield a set \(X^{\prime}=\{x^{\prime}_{i}\}_{i=1}^{n}\) such that \(X^{\prime}\) has mean \(0\) and variance \(1\). Let \(x^{\prime}_{(i)}\) be the \(i^{th}\) ordered value of \(X^{\prime}\). Define \(z_{i}=F(x^{\prime}_{(i)})\), where \(F\) is the cumulative distribution function of the standard normal distribution. Let \(F_{n}\) be the sample cumulative distribution function. Then the _Anderson-Darling (AD)_ statistic is defined to be the quadratic EDF statistic measuring differences between \(F\) and \(F_{n}\):
\[A^{2}(X)=n\int_{-\infty}^{\infty}(F_{n}(x)-F(x))^{2}w(x)\ dF(x)\]
where \(w(x)\) is a weighting function given by \(w(x)=\left(F(x)(1-F(x))\right)^{-1}\). This weighting function places more weight on observations in the tails. Evaluating the integral yields
\[A^{2}(X)=-\frac{1}{n}\sum_{i=1}^{n}(2i-1)[\log(z_{i})+\log(1-z_{n+1-i})]-n.\]

Figure 1: Mapper Construction. The dataset \(X\) consists of points sampled from a circle of radius \(1/2\) with center at \((1/2,1/2)\). Constructing a Mapper graph requires selecting a lens function (Figure 1a), cover (Figure 1b), and clustering algorithm (Figure 1c). The parameters are specified in Example 2.1, and the generated Mapper graph is a cycle graph with four vertices and four edges (Figure 1d).
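For reference, a direct transcription of the computational formula above into code (our own sketch, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.stats import norm

def anderson_darling(x):
    """A^2 statistic against the standard normal, after standardizing
    the sample, following the closed-form expression above."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    z = norm.cdf((x - x.mean()) / x.std())        # z_i = F(x'_(i))
    z = np.clip(z, 1e-12, 1 - 1e-12)              # guard the logarithms
    i = np.arange(1, n + 1)
    return -np.mean((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1]))) - n
```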
In the case where both \(\mu\) and \(\sigma\) are unknown and estimated from the data, a modification of the statistic was suggested in [38] as follows:
\[A_{*}^{2}(X)=A^{2}(X)(1+4/n-25/n^{2}).\]
The user selects a critical threshold, which we call the _AD threshold_. The set of values exceeding the critical threshold is called the critical region. The null hypothesis \(H_{0}\), asserting that the data follows a Gaussian distribution, is rejected for values of \(A_{*}^{2}(X)\) in the critical region, and \(H_{0}\) is not rejected for values below the AD threshold. The probability of \(A_{*}^{2}\) occurring in the critical region under the null hypothesis is the _significance level_ \(\alpha\). A table of critical values for varying significance levels can be found in [16].

### Gaussian mixture models

A Gaussian mixture model (GMM) (Section 9.2.2 of [4]) is a probabilistic model representing a probability density function as a finite weighted sum of Gaussian distributions. The model learns a mixture distribution with \(K\) components. For a real-valued vector \(x\),
\[p(x)=\sum_{k=1}^{K}\pi_{k}\:N[x|m_{k},\Sigma_{k}],\]
where \(\pi_{k}\) is a mixture probability with \(\sum_{k=1}^{K}\pi_{k}=1\), and \(N[x|m_{k},\Sigma_{k}]\) is the \(k^{th}\) Gaussian distribution with mean \(m_{k}\) and covariance \(\Sigma_{k}\). See Example 2.2 for an illustration of the GMM. In order to fit the model, the GMM implements the expectation-maximization (EM) algorithm, searching for the maximum likelihood estimators of the model parameters. Since the GMM takes into account the covariance structure of the given data as well as the centers of the components, it is considered a generalization of \(k\)-means clustering. We will use the means and the variances generated by the GMM to create two overlapping intervals for the cover parameter in Mapper.

**Example 2.2** (AD Statistic and GMM).: Let \(X\) be a one-dimensional dataset whose histogram is shown in Figure 2. The AD statistic of the dataset \(X\) is \(53.85\). If we apply the GMM with two components to this dataset \(X\), we get means \(m_{1}=0.21\), \(m_{2}=0.68\) and variances \(\sigma_{1}=0.10\), \(\sigma_{2}=0.16\). We divide the dataset into two sets based on whether a point has a value less than \(0.41\) or not. The AD statistics of the left and right sides are \(1.66\) and \(3.40\), respectively. These values are much smaller than the AD statistic of the entire dataset \(X\).

Figure 2: Anderson-Darling statistics and a Gaussian mixture model. Figure 2a: the histogram of a 1-dimensional dataset whose AD statistic is \(53.85\). Figure 2b: a GMM is applied to the dataset, and the AD statistics of the left and right sides are much smaller than the AD statistic of the entire dataset.

### \(G\)-means clustering

The \(G\)-means clustering algorithm [19] automatically detects the number \(k\) of clusters, employing the Anderson-Darling test to decide if a cluster should be split into two. The algorithm starts by applying \(k\)-means to a dataset of vectors with a small number of centers. The algorithm is initialized with \(k=1\), or a larger \(k\) can be selected if there is some advance knowledge of the range of values of \(k\). Let \(\mathcal{C}\) be the set of derived clusters. Choose a cluster \(C\in\mathcal{C}\) to be tested. Applying \(k\)-means with two components to \(C\) splits the cluster into two clusters \(C_{1}\) and \(C_{2}\).
Let \(\ell\) be the line connecting the two centers of \(C_{1}\) and \(C_{2}\), and project the points of the cluster \(C\) onto \(\ell\). If the corrected Anderson-Darling statistic for the projected points is below the user-specified AD threshold, the cluster \(C\) is kept. Otherwise, the cluster \(C\) is deleted from \(\mathcal{C}\) and \(C_{1}\) and \(C_{2}\) are added. These steps are repeated until no more clusters are added.

## 3 Methods

We propose the \(G\)-Mapper algorithm for learning a cover in the Mapper construction. It is based on the \(G\)-means algorithm for learning the correct number \(k\) of clusters.

### \(G\)-Mapper Algorithm

The input for \(G\)-Mapper consists of (1) the image of a dataset \(X\) under the 1-dimensional lens \(f\), i.e., \(f(X)\), (2) the AD threshold, and (3) the percentage of overlap for intervals when an interval is split into two; this parameter is called g_overlap. The main differences between the \(G\)-Mapper algorithm and the \(G\)-means clustering algorithm are that (1) we are interested in finding overlapping clusters (intervals) rather than disjoint clusters, (2) we design overlapping intervals using variances generated by the GMM rather than applying \(k\)-means, and (3) the \(G\)-Mapper algorithm does not require a projection of points for performing the Anderson-Darling test, since the lens used in our algorithm is 1-dimensional.

For \(G\)-Mapper, we begin with the cover consisting of one interval containing \(f(X)\), i.e., the interval \([\min\{f(X)\},\max\{f(X)\}]\). The \(G\)-Mapper algorithm is iterative and proceeds as follows:

1. Select an interval from the current cover (a collection of intervals).
2. For the data points of the interval, perform a statistical test to determine if they follow a Gaussian distribution. For the test, we use the corrected Anderson-Darling statistic.
3. If the computed statistic is smaller than the AD threshold, keep the original interval.
4. Otherwise, split the interval into two overlapping intervals. We utilize the means and the variances derived from a GMM, as described in detail in the following paragraph.

Repeat Steps 1-4 until no more intervals split. See Example 3.1 for a step-by-step illustration of the \(G\)-Mapper algorithm.

We now provide the details of the splitting procedure in Step 4; a code sketch follows. Let \(m_{1}\), \(m_{2}\) and \(\sigma_{1}\), \(\sigma_{2}\) be the two means and the two standard deviations discovered by the GMM. An interval \((a,b)\) is split into the following two intervals:
\[(a,\min\{m_{1}+(1+\texttt{g\_overlap})\sigma_{1}/(\sigma_{1}+\sigma_{2})(m_{2}-m_{1}),m_{2}\}),\text{ and}\]
\[(\max\{m_{2}-(1+\texttt{g\_overlap})\sigma_{2}/(\sigma_{1}+\sigma_{2})(m_{2}-m_{1}),m_{1}\},b).\]
These two intervals are formed by considering the value [18]
\[\frac{\sigma_{2}}{\sigma_{1}+\sigma_{2}}m_{1}+\frac{\sigma_{1}}{\sigma_{1}+\sigma_{2}}m_{2}=m_{1}+\frac{\sigma_{1}}{\sigma_{1}+\sigma_{2}}(m_{2}-m_{1})=m_{2}-\frac{\sigma_{2}}{\sigma_{1}+\sigma_{2}}(m_{2}-m_{1}),\]
which divides the two means \(m_{1}\) and \(m_{2}\) in the ratio \(\sigma_{1}:\sigma_{2}\), based on the minimum-error decision boundary between two Gaussian distributions [14], and then extending the length between this value and each mean by g_overlap. When applying the GMM algorithm to an interval, we take as the two initial centers \(c\pm\sqrt{2\lambda/\pi}\), where \(c\) and \(\lambda\) are the mean and variance of the data points in the interval, respectively.
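A minimal sketch of this splitting step (our own illustration, assuming scikit-learn's GaussianMixture; the function name and seed handling are ours):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_interval(points, a, b, g_overlap=0.1, seed=0):
    """Split (a, b) into two overlapping intervals using a 2-component GMM
    fitted to the 1-d lens values `points` lying in the interval."""
    x = np.asarray(points, dtype=float).reshape(-1, 1)
    c, lam = x.mean(), x.var()
    # 1-d analogue of the PCA-based initialization: centers at c +/- sqrt(2*lambda/pi)
    init = np.array([[c - np.sqrt(2 * lam / np.pi)],
                     [c + np.sqrt(2 * lam / np.pi)]])
    gmm = GaussianMixture(n_components=2, means_init=init,
                          random_state=seed).fit(x)
    order = np.argsort(gmm.means_.ravel())
    m1, m2 = gmm.means_.ravel()[order]
    s1, s2 = np.sqrt(gmm.covariances_.ravel())[order]
    # overlapping split around the minimum-error boundary of the two Gaussians
    right = min(m1 + (1 + g_overlap) * s1 / (s1 + s2) * (m2 - m1), m2)
    left = max(m2 - (1 + g_overlap) * s2 / (s1 + s2) * (m2 - m1), m1)
    return (a, right), (left, b)
```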
This method is the 1-dimensional version of a recommended initialization using principal component analysis (PCA) [19]. However, even though we have a consistent way of initializing the centers, the initialization of the variances and mixing coefficients may differ each time we run the \(G\)-Mapper algorithm on the same dataset. This can result in different \(G\)-Mapper graphs with the same parameters on the same dataset. For all \(G\)-Mapper results presented in this paper, seeds will be provided to the readers so that they can reproduce the Mapper graphs.

The splitting algorithm has three variations, based on deciding which unprocessed interval to check in Step 1. The three search methods are:

* The _depth-first search_ (DFS) method iteratively performs splitting as deep as possible, choosing an interval with a bigger Anderson-Darling statistic. Once a chosen interval does not split, the method backtracks and explores an unchecked interval.
* The _breadth-first search_ (BFS) method checks all intervals in the current cover. The interval with the biggest Anderson-Darling statistic is split. This process is repeated with the newly updated cover.
* The _randomized_ method randomly selects an interval, where intervals with a bigger Anderson-Darling statistic have a higher probability of being chosen.

In this paper, we adopted the DFS method for generating \(G\)-Mapper graphs, as it is more time-efficient than the BFS method and more reproducible than the randomized method.

**Example 3.1** (\(G\)-Mapper Construction).: We apply the \(G\)-Mapper algorithm to the circle dataset given in Example 2.1. We initialize the algorithm by selecting an AD threshold equal to 10, g_overlap equal to 0.2, and the DBSCAN clustering algorithm. The \(G\)-Mapper algorithm starts with the cover \(\{U_{0}\}\) consisting of one interval, the entire target space \(U_{0}=Y=[-0.03,1.05]\). The Mapper graph at this iteration is a single vertex. In the first iteration, the AD statistic of \(U_{0}\) is 32.09. Since \(32.09>10\), the first interval is split into two intervals \(U_{0}^{\prime}=[-0.03,0.65)\) and \(U_{0}^{\prime\prime}=(0.52,1.05]\). The Mapper graph at this iteration is an edge. In the second iteration, the AD statistics of \(U_{0}^{\prime}\) and \(U_{0}^{\prime\prime}\) are 3.30 and 10.64, respectively. Since \(10.64>10\), the interval \(U_{0}^{\prime\prime}\) is split into \(U_{2}=(0.52,0.89)\) and \(U_{3}=(0.84,1.05]\), so that \(\{U_{1},U_{2},U_{3}\}\) is a cover with \(U_{1}=U_{0}^{\prime}=[-0.03,0.65)\). The Mapper graph is a cycle graph with 4 vertices.

Figure 3: \(G\)-Mapper. The initialization and first two iterations of the splitting procedure are represented in Figure 3a, Figure 3b, and Figure 3c, respectively. The cover, the pre-images of the cover elements, and the corresponding Mapper graph are located on the lower left side, the upper left side, and the right side, respectively. The final Mapper graph is a cycle graph with 4 vertices.

### Parameter Selection for the \(G\)-Mapper Algorithm

The \(G\)-Mapper algorithm involves two key user-specified parameters: AD_threshold, the critical value corresponding to the significance level \(\alpha\) for the statistical test, and g_overlap, the amount of overlap when an interval is split in two. One important thing to note is that, unlike the Multipass AIC/BIC algorithm, the \(G\)-Mapper algorithm does not require the user to initialize an open cover.
It will be shown in the next section that our \(G\)-Mapper performs well without initialization.

The first parameter is AD_threshold, corresponding to the significance level \(\alpha\) for the statistical test. The significance level is the probability of making a Type I error, meaning we incorrectly reject the null hypothesis \(H_{0}\). Like the number of intervals of a cover (resolution) in the conventional Mapper algorithm, AD_threshold influences the number of nodes of the output graph. Setting a lower threshold segments the given data into more pieces, which produces a more detailed Mapper graph with more nodes. Taking a high threshold gives a coarse-grained visualization with a graph with few nodes.

The second parameter is g_overlap, which specifies how much two intervals should overlap when applying a split. In the \(G\)-Mapper algorithm, a split is made using the means and variances estimated by the GMM together with this parameter. Like the overlap parameter (gain) in the conventional Mapper algorithm, g_overlap controls relationships between overlapping clusters. Increasing this parameter generates more edges between nodes in the Mapper graph, which results in a more compact graph representation. Decreasing this parameter leads the output Mapper graph to become less connected, segmenting nodes in the graph into smaller groups.

## 4 Experimental Results

In this section, we present the results of applying our \(G\)-Mapper algorithm to synthetic and real-world datasets, compared to fine-tuned Mapper graphs resulting from the conventional Mapper algorithm. We call the fine-tuned Mapper graphs the _reference Mapper graphs_. This is followed by a comparison to current state-of-the-art Mapper construction algorithms (Multipass BIC, \(F\)-Mapper, and balanced cover). We explore how Mapper graphs built from \(G\)-Mapper and Multipass BIC differ. For the latter two algorithms, we use the number of intervals detected by \(G\)-Mapper as an input parameter. We close the section by providing a runtime comparison between all methods.

### Synthetic Datasets

We first applied the \(G\)-Mapper algorithm to three synthetic datasets (two circles, human, and Klein bottle). Our experiments reveal that \(G\)-Mapper generates Mapper graphs that aptly describe the original datasets. In addition, the output \(G\)-Mapper graphs turn out to be close to the reference Mapper graphs. For each dataset, the original dataset and these two Mapper graphs are presented together in three figures.

We describe the two main parameters (g_overlap and AD_threshold) of \(G\)-Mapper used for the synthetic datasets. For all datasets, we set the parameter g_overlap to 0.1. We pick different Anderson-Darling (AD) thresholds according to the dataset. We take 10 as the threshold for the two circles and human datasets, and a larger value, 15, for the Klein bottle dataset to prevent generating an overly detailed Mapper graph. In order to convey the characteristics of nodes in the Mapper graphs, we color nodes with the rainbow colormap. For all Mapper graphs of synthetic datasets, the color of a node is the average value of the lens function over all data points in the node.

#### 4.1.1 Two Circles Dataset

The two circles dataset consists of 5,000 points sampled from two concentric circles. The lens function is the sum of the \(x\) and \(y\) coordinates, normalized. The reference Mapper graph consists of two concentric circles.
The \(G\)-Mapper graph also consists of two concentric circles, and the graph was found in seven iterations. Note that the homology groups of the underlying space are the same as those of the generated Mapper graphs. In Figure 4, we present the Mapper graphs along with the parameters used.

#### 4.1.2 Human Dataset

The next point cloud dataset we explored is a 3D human shape from [9] consisting of 4,706 points. The lens function is the height function, normalized. In Figure 5, we present the Mapper graphs along with the parameters used. The cover for \(G\)-Mapper was found in 11 iterations. The reference Mapper graph and the \(G\)-Mapper graph are human skeletons consisting of a head, two arms, and two legs. Although each body part of the \(G\)-Mapper graph has fewer nodes than the corresponding part of the reference Mapper graph, the ratios of any pairs of body parts are about the same.

Figure 4: Two Circles Dataset. \(G\)-Mapper Parameters: AD threshold = 10, g_overlap = 0.1, clustering algorithm = DBSCAN with \(\varepsilon=0.1\) and MinPts = 5, and search method = DFS. The cover was found in 7 iterations and consists of 8 intervals. Reference Mapper Parameters: the number of intervals = 7, the overlap = 0.2, and the same DBSCAN parameters.

Figure 5: Human Dataset. \(G\)-Mapper Parameters: AD threshold = 10, g_overlap = 0.1, clustering algorithm = DBSCAN with \(\varepsilon=0.1\) and MinPts = 5, and search method = DFS. The cover was found in 11 iterations and consists of 12 intervals. Reference Mapper Parameters: the number of intervals = 30, the overlap = 0.4, and the same DBSCAN parameters.

#### 4.1.3 Klein Bottle Dataset

The Klein bottle dataset consists of 15,875 points sampled from the Klein bottle embedded in \(\mathbb{R}^{5}\). The dataset is obtained from the Gudhi library [26]. The lens function is the projection map onto the first coordinate, normalized. In Figure 6, we present the Mapper graphs along with the parameters used. The reference Mapper graph has a long cycle with several flares (branches). Our \(G\)-Mapper produces a very similar graph in 16 iterations.

### Real-World Datasets

We next applied \(G\)-Mapper to three real-world datasets: _Passiflora_ leaves, COVID-19 trends, and the CIFAR-10 image dataset. We explain these datasets in detail in Section 4.2.1, Section 4.2.2, and Section 4.2.3, respectively. Each data point is labeled by its morphotype in the Passiflora dataset, by the state in which it was recorded in the COVID-19 dataset, and by its image class in the CIFAR-10 image dataset. These datasets are higher-dimensional than the synthetic datasets explored in the previous section. We visualize the nodes in the Mapper graphs as pie charts in order to represent the proportions of data points within the nodes belonging to different labels.

#### 4.2.1 Passiflora Dataset

The _Passiflora_ dataset [10] consists of 3,319 leaves from 40 different species of the _Passiflora_ genus. Leaves of the _Passiflora_ genus are of particular interest to biologists due to their remarkable diversity of shape. Each leaf in the dataset has 15 landmarks whose locations are 2-dimensional vectors expressed as \(x\) and \(y\) coordinates. Each leaf thus yields a 30-dimensional vector, and the correlation distance is selected to measure the distance between leaf vectors. The correlation distance between two vectors is defined as \(1-r\), where \(r\) is the Pearson correlation coefficient. In this metric, leaf vectors with high correlation will have a distance near zero.
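For concreteness, the correlation distance used here can be computed as follows (our own sketch, assuming NumPy):

```python
import numpy as np

def correlation_distance(u, v):
    """1 - r, where r is the Pearson correlation between two leaf vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    r = np.corrcoef(u, v)[0, 1]
    return 1.0 - r
```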
The authors of [10] classified the 40 species into seven different morphotypes. For the classification, they performed principal component analysis (PCA) on the landmark dataset described in the previous paragraph and on elliptical Fourier descriptors of the leaf outlines. The first and second principal components are visualized in Figure 7a (given in [10, Figure 5] and [30, Figure 3C]). Because of the significant amount of overlap shown in the PCA plot, the authors also relied on their domain knowledge when assigning morphotypes. Mapper may help extend this process further to obtain better separation between morphotypes and extract hidden relationships between morphotypes. The Mapper algorithm has been utilized in [30] for this purpose.

Figure 6: Klein Bottle Dataset. \(G\)-Mapper Parameters: AD threshold = 15, g_overlap = 0.1, clustering algorithm = DBSCAN with \(\varepsilon=0.21\) and MinPts = 5. The cover was found in 16 iterations and consists of 17 intervals. Reference Mapper Parameters: the number of intervals = 19, the overlap = 0.4, and the same DBSCAN parameters.

In Figure 7b, we present the results of applying \(G\)-Mapper to the _Passiflora_ dataset using the first principal component as the lens function. The \(G\)-Mapper graph has a strong linear backbone along the first principal component. There is much overlap in the morphotypes represented by purple, brown, and red, whereas the morphotypes represented in orange, green, and pink are more distinct, as described in the _Passiflora_ PCA plot. The reference Mapper graph shown in Figure 7c is generated with reference to the parameters given in [30]. The \(G\)-Mapper graph is close to the reference Mapper graph in that it has a strong linear backbone and the coloring of nodes is similar. However, these two graphs have different features originating from their constructions. The \(G\)-Mapper graph produces more edges between the multicolored purple, red, and green nodes compared to the reference Mapper graph. We suspect this is due to the greater density and overlap of morphotypes whose projection onto the first principal component lies between 0-0.2. \(G\)-Mapper constructs more intervals in areas that are more dense, whereas conventional Mapper constructs intervals in a uniform manner. In addition, the \(G\)-Mapper graph has more red nodes and fewer blue nodes than the reference Mapper graph, since \(G\)-Mapper takes the distribution of the data into account.

#### 4.2.2 COVID-19 Dataset

COVID-19 data was collected in [13] and can be found in the data repository 1. In order to analyze COVID-19 state-wide trends, the authors of [44] selected 1,431 daily records of COVID-19 cases during the 159 days between April 12, 2020 and September 18, 2020 for nine states (AZ, CA, FL, GA, IL, NC, NJ, NY, TX) from the dataset. These nine states were chosen since they had the largest numbers of confirmed cases. For each day, the dataset has the following 7 attributes: the number of confirmed cases, death cases, active cases, people tested, testing rate, mortality, and the number of cases per 100,000 people, where the number of active cases is defined by subtracting the number of death cases and recovered cases from the number of confirmed cases. Since the number of recovered cases is unavailable in some states, we simply estimate the number of active cases by the number of confirmed cases minus the number of death cases.

Figure 7: _Passiflora_ Dataset. Lens Function: the first principal component, normalized.
\(G\)-Mapper Parameters: AD threshold = 2, g_overlap = 0.1, clustering algorithm = DBSCAN with \(\varepsilon=0.15\) and MinPts = 5, distance = correlation distance, and search method = DFS. The open cover was found in 37 iterations and consists of 38 intervals. Reference Mapper Parameters: the number of intervals = 40, the overlap = 0.5, and the same DBSCAN parameters.

In Figure 8a, we present the results of applying \(G\)-Mapper to this dataset along with the parameters. The figure shows that \(G\)-Mapper identifies COVID-19 trends for each state, giving information concerning the relations between them. The Mapper graph consists of 3 connected components: the main component, the pink component (NY), and the brown component (NJ), which means that NY and NJ each have COVID-19 trends distinct from the other states. The purple branch (IL) is located on the side of the main component. Moving further away from this branch, the orange branch (CA), the green branch (FL), the olive branch (TX), and the gray branch (NC) appear sequentially. Nodes with the two colors blue (AZ) and red (GA) show up, which indicates that these two states have very similar COVID-19 trends at some point. These two states finally bifurcate into the blue branch (AZ) and the red branch (GA). We observe that the \(G\)-Mapper graph resembles the reference Mapper graph presented in Figure 8b in that they share the features described above.

#### 4.2.3 CIFAR-10 Dataset

The last real-world dataset is the famous image dataset CIFAR-10 [24]. The dataset consists of 60,000 images (50,000 training images and 10,000 test images) in 10 classes. The 10 different classes represent airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The input data for applying the Mapper algorithm is obtained by training a ResNet-18 neural network on the training images, passing the test images through the network, and collecting activation vectors from the last layer of the network. Thus 10,000 activation vectors are collected, and each vector is 512-dimensional, vastly higher-dimensional than the previous two datasets. The t-distributed stochastic neighbor embedding (_t-SNE_) algorithm is a well-known dimension reduction method [40]. A 2-dimensional t-SNE embedding of the collection of activation vectors is represented in Figure 9a. The plot indicates that t-SNE separates the data points into the 10 different classes, with some of the classes overlapping. In order to highlight relationships among the 10 classes, the Mapper algorithm has been utilized [31, 44, 8].

Figure 8: COVID-19 Dataset. Lens Function: the number of recorded days, normalized. \(G\)-Mapper Parameters: AD threshold = 1.35, g_overlap = 0.15, clustering algorithm = DBSCAN with \(\varepsilon=0.15\) and MinPts = 5, and search method = DFS. The open cover was found in 30 iterations and consists of 31 intervals. Reference Mapper Parameters: the number of intervals = 30, the overlap = 0.3, and the same DBSCAN parameters.

Figures 9b and 9c represent the \(G\)-Mapper graph and the reference Mapper graph for the dataset, respectively. Both Mapper graphs also classify the data points into the 10 classes, detecting three pairs of related classes. The three pairs are automobiles (orange points) and trucks (cyan points), cats (red points) and dogs (brown points), and airplanes (blue points) and birds (green points).
### Comparison to Other Methods

We compare our \(G\)-Mapper algorithm to other state-of-the-art techniques on the same synthetic and real-world datasets as above. We analyze how Mapper graphs generated by another iterative algorithm, the Multipass BIC algorithm, differ from \(G\)-Mapper graphs. In addition, we produce Mapper graphs from \(F\)-Mapper and the balanced cover strategy, utilizing the number of cover intervals estimated by \(G\)-Mapper as an input parameter. We refer the reader to [8, Sect. VI-A] for a discussion and analysis of the performance of the statistical cover strategy. In this section, we use the same DBSCAN parameters as \(G\)-Mapper.

#### 4.3.1 Multipass BIC

The Multipass BIC algorithm repeatedly splits intervals uniformly, starting from some initialized coarse cover, based on information criteria. The main parameters that need to be specified for this algorithm are the initial number of intervals used before splitting and the amount of overlap between consecutive intervals in the cover. There is also a threshold parameter \(\delta\) for deciding when to split, but this is often set to zero, meaning that a split is performed as long as the information criterion statistic improves. We set \(\delta=0\), adopt the BIC statistic, and take the DFS method. We tried to follow the parameters specified in [8], but we had to adjust the overlap parameters due to an error in the Multipass AIC/BIC code: Mapper graphs built by the code did not create edges between nodes generated from non-consecutive intervals even though they share data points.

Figure 9: _CIFAR-10_ Dataset. Lens Function: the \(L_{2}\) norm of each activation vector. \(G\)-Mapper Parameters: AD threshold = 9, g_overlap = 0.2, clustering algorithm = DBSCAN with \(\varepsilon=2\) and MinPts = 5, and search method = DFS. The open cover was found in 32 iterations and consists of 33 intervals. Reference Mapper Parameters: the number of intervals = 70, the overlap = 0.35, and the same DBSCAN parameters.

Figure 10 shows that the Multipass BIC algorithm provides a simplified representation of the datasets, although it does not thoroughly capture the essence of each dataset. For the two circles dataset, the inner circle does not appear, while the outer circle consists of many nodes. For the human dataset, the two arms in the output graph are much shorter than the two legs. Several flares are missing from the Klein bottle output graph. Additionally, we see for the Klein bottle dataset that starting from a number of intervals smaller than the one given in Figure 10 does not lead to splitting intervals at all. This experiment indicates that choosing the initial number of intervals is a crucial task, one that is not required by the \(G\)-Mapper algorithm.

Figure 11 represents the Mapper graphs generated by applying the algorithm to the real-world datasets. For the Passiflora dataset, the algorithm does not split intervals well for pink and purple data points, while it splits intervals for blue data points unduly. The Mapper graph for the COVID-19 dataset has features similar to the \(G\)-Mapper graph, even though it has many more nodes than the \(G\)-Mapper graph.
The Mapper graph for the CIFAR-10 dataset separates the 10 classes, but it does not capture relations between the classes. In addition, the algorithm splits certain intervals interminably since the data is very high-dimensional, and hence we had to limit the minimum length of intervals.

Figure 11: Results of using the Multipass BIC Algorithm on the Real-World Datasets. Figure 11(a) _Passiflora_ dataset: the number of initial intervals = 2 and the overlap = 0.2. Figure 11(b) COVID-19 dataset: the number of initial intervals = 2 and the overlap = 0.5. Figure 11(c) _CIFAR-10_ dataset: the number of initial intervals = 2 and the overlap = 0.2. The numbers of resulting intervals are 28, 40, and 22, respectively.

Figure 10: Results of using the Multipass BIC Algorithm on the Synthetic Datasets. Figure 10(a) Two Circles: the number of initial intervals = 2 and the overlap = 0.2. Figure 10(b) Human: the number of initial intervals = 2 and the overlap = 0.2. Figure 10(c) Klein Bottle: the number of initial intervals = 4 and the overlap = 0.4. The numbers of resulting intervals are 14, 10, and 8, respectively.

#### 4.3.2 \(F\)-Mapper

The \(F\)-Mapper algorithm finds open intervals based on fuzzy \(C\)-means clustering. In particular, the image of the data points under the lens function, \(f(X)\), is clustered using fuzzy \(C\)-means. From the clustering, each data point \(p\in f(X)\) is assigned a probability of belonging to each cluster. To construct intervals from this clustering, the user needs to specify a probability threshold \(\tau\): if the probability that \(p\) belongs to a specific cluster exceeds \(\tau\), then \(p\) is declared to be part of the open interval corresponding to that cluster. \(F\)-Mapper requires the user to specify the number of clusters, a threshold value, an exponent value determining how "fuzzy" to make the clusters, and an error (convergence) parameter. In [5], an exponent value of 2 and an error of 0.005 were used for all examples. The threshold value can be viewed as an overlap parameter. As mentioned before, the number of clusters is difficult to know a priori and is the parameter we estimate with \(G\)-Mapper. As an input parameter of the \(F\)-Mapper algorithm, we picked the number of intervals obtained from \(G\)-Mapper and applied the algorithm to all the datasets. For the implementation, we used the Python package SciKit-Fuzzy 0.4.2. The results and parameters for \(F\)-Mapper are given in Figure 12 and Figure 13. These figures show that \(F\)-Mapper with our selected inputs produces Mapper graphs almost identical to the reference Mapper graphs. This illustrates that \(G\)-Mapper can be a useful tool for determining the number of intervals to pass to \(F\)-Mapper as an input parameter.

#### 4.3.3 Balanced Cover

The balanced cover method forms open intervals so that each interval contains the same number of points. To apply the method, the user must specify the number of intervals. When we choose the number of intervals found by \(G\)-Mapper as the input of the balanced cover method, as in \(F\)-Mapper, it does not always generate the desired Mapper graph because the covers are constructed differently. Hence, we start from the number of intervals obtained by \(G\)-Mapper and adjust this number in order to decide the optimal input number of intervals for the balanced cover method.
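To make the two cover strategies just described concrete, here is a minimal sketch of both constructions for a one-dimensional lens. The conversion of fuzzy memberships to interval endpoints and the overlap padding are our own illustrative choices, not the reference implementations.

```python
import numpy as np
import skfuzzy as fuzz

def fmapper_cover(lens, n_intervals, tau, m=2.0, error=0.005):
    """F-Mapper-style cover: fuzzy C-means on the lens values, then one interval
    per cluster spanning all points whose membership probability exceeds tau."""
    lens = np.asarray(lens, dtype=float)
    _, u, *_ = fuzz.cluster.cmeans(lens.reshape(1, -1), c=n_intervals,
                                   m=m, error=error, maxiter=1000)
    intervals = [(lens[u[k] > tau].min(), lens[u[k] > tau].max())
                 for k in range(n_intervals)]
    return sorted(intervals)

def balanced_cover(lens, n_intervals, overlap=0.1):
    """Balanced cover: equal point counts per interval, each interval widened
    by a fraction of its length so that neighbouring intervals overlap."""
    chunks = np.array_split(np.sort(np.asarray(lens, dtype=float)), n_intervals)
    return [(c[0] - overlap * (c[-1] - c[0]), c[-1] + overlap * (c[-1] - c[0]))
            for c in chunks]
```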
In Figure 14 and Figure 15, we present Mapper graphs generated by applying the balanced cover method to the synthetic and real-world datasets, along with the parameters used. For the synthetic datasets, we chose the number of intervals found by the \(G\)-Mapper algorithm, with the exception of the human dataset: \(G\)-Mapper found 12 intervals, but this number results in a disconnected Mapper graph for the balanced strategy, so we decided to use 13 intervals instead. For the real-world datasets, we selected the number of intervals derived from \(G\)-Mapper, with the exception of the CIFAR-10 dataset: \(G\)-Mapper found 33 intervals, but with this number the balanced strategy could not detect the relationship between airplanes (blue points) and birds (green points), so we decided to use 29 intervals instead. Figure 14 and Figure 15 indicate that the balanced cover method can generate Mapper graphs similar to the \(G\)-Mapper graphs and the reference Mapper graphs, although they are not almost identical as in the case of \(F\)-Mapper.

#### 4.3.4 Runtime Analysis

In Table 1, we provide the runtimes, in seconds, for generating covers with \(G\)-Mapper, Multipass BIC, \(F\)-Mapper, and the balanced cover strategy. We use a 5 GHz, 8-core laptop with 8 GB of RAM. The times in Table 1 are averaged over five trials, and the sizes and dimensions of the datasets are also listed in the table. The \(G\)-Mapper algorithm computes a cover faster than the Multipass BIC strategy for two of the datasets and is significantly faster for the other four, which are high-dimensional or large. The AD statistic used as the splitting criterion in \(G\)-Mapper is evaluated on an individual interval. In contrast, deciding whether to split an interval in Multipass BIC requires (1) performing a soft clustering on the pre-images (under the lens) of the two intervals that would result from the split, as used for constructing nodes of a Mapper graph, (2) performing a hard clustering on the same set using the previous clustering result, and (3) computing the BIC of the hard clustering. We observe that the first step is the most time-consuming of the three, dominating the time for computing the AD statistic in \(G\)-Mapper. \(G\)-Mapper also performs faster than \(F\)-Mapper for three of the datasets and much faster for the other three, which again are high-dimensional or large. Hence, utilizing the number of cover intervals generated by \(G\)-Mapper as an input to \(F\)-Mapper is more effective with regard to runtime than applying \(F\)-Mapper many times while varying the number of cover intervals. Since the balanced cover strategy only considers the number of data points in each interval, it computes an open cover much faster than the other methods.

## 5 Discussion and Conclusion

We propose a new method (\(G\)-Mapper) for optimizing the cover parameter of a Mapper graph, motivated by \(G\)-means clustering. \(G\)-Mapper is an iterative procedure that splits cover intervals using statistical tests and the Gaussian mixture model (GMM). The Multipass AIC/BIC algorithm, based on \(X\)-means clustering, iteratively splits intervals according to information criteria. These two methods aim to select an optimal number of intervals. Another Mapper construction algorithm, \(F\)-Mapper, relies on fuzzy \(C\)-means clustering. In contrast to the previous two algorithms, \(F\)-Mapper requires choosing the number of intervals in advance. We propose the number of intervals generated by \(G\)-Mapper as an input of \(F\)-Mapper, which turns out to be effective (see Section 4.3.2).
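The core splitting loop of \(G\)-Mapper, as described above, admits a short sketch. The Anderson-Darling test and the two-component GMM come from SciPy and scikit-learn; the exact rule for turning the fitted means and variances into two overlapping intervals is a plausible guess on our part rather than the formula of Section 3.1.

```python
import numpy as np
from scipy.stats import anderson
from sklearn.mixture import GaussianMixture

def ad_statistic(x):
    """Anderson-Darling normality statistic of standardized lens values."""
    z = (x - x.mean()) / x.std()
    return anderson(z, dist="norm").statistic

def split_interval(x, g_overlap):
    """Fit a 2-component GMM and cut at the midpoint of the component means,
    padding both sides using the component spreads (illustrative construction)."""
    gmm = GaussianMixture(n_components=2).fit(x.reshape(-1, 1))
    (m_lo, s_lo), (m_hi, s_hi) = sorted(
        zip(gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel())))
    cut, pad = (m_lo + m_hi) / 2.0, g_overlap * (s_lo + s_hi)
    return (x.min(), cut + pad), (cut - pad, x.max())

def g_mapper_cover(lens, ad_threshold, g_overlap=0.1):
    """Start from one interval covering the whole lens range and keep splitting
    every interval whose lens values fail the normality test."""
    lens = np.asarray(lens, dtype=float)
    cover, stack = [], [(lens.min(), lens.max())]
    while stack:
        lo, hi = stack.pop()  # pop() gives DFS, matching the search method used above
        inside = lens[(lens >= lo) & (lens <= hi)]
        if len(inside) > 8 and ad_statistic(inside) > ad_threshold:
            stack.extend(split_interval(inside, g_overlap))
        else:
            cover.append((lo, hi))
    return sorted(cover)
```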
Our experiments on synthetic and real-world datasets (given in Section 4.1 and Section 4.2) reveal that the \(G\)-Mapper algorithm generates Mapper graphs preserving the characteristics of a given dataset. \(G\)-Mapper works well even on non-spherical and high-dimensional datasets, as \(G\)-means does. We found that our \(G\)-Mapper algorithm extracts essential features that the Multipass BIC algorithm, based on information criteria, fails to uncover (see Section 4.3.1). In addition, the runtime comparison (given in Section 4.3.4) shows that \(G\)-Mapper runs considerably faster than Multipass BIC. The running time of \(G\)-Mapper hardly depends on the dimension and size of a given dataset, while Multipass BIC is greatly influenced by these factors.

| **Dataset** | **Size** | **Dim** | **\(G\)-Mapper** | **Multipass BIC** | **\(F\)-Mapper** | **Balanced** |
|---|---|---|---|---|---|---|
| Two Circles | 5000 | 2 | 0.170 | 1.959 | 0.202 | 0.000177 |
| Human | 4706 | 3 | 0.182 | 1.448 | 0.294 | 0.000211 |
| Klein Bottle | 15875 | 5 | 0.414 | 36.095 | 8.109 | 0.000155 |
| Passiflora | 3319 | 30 | 0.197 | 6.798 | 2.492 | 0.000173 |
| COVID-19 | 1431 | 7 | 0.114 | 2.151 | 0.338 | 0.000147 |
| CIFAR-10 | 10000 | 512 | 0.350 | 60.849 | 8.012 | 0.000133 |

Table 1: Size, dimension, and runtimes in seconds of each dataset for each algorithm.

Our \(G\)-Mapper method makes use of the GMM for splitting an interval into two overlapping intervals, which provides an elaborate splitting of intervals. The GMM also enables \(G\)-Mapper to start from the whole target space without choosing an initial number of intervals. Note that regardless of the distribution of a given dataset, the conventional Mapper employs uniform covers, and the Multipass AIC/BIC algorithm requires initialization of a cover and splits intervals uniformly. We utilize the means and the variances derived from the GMM for designing the two overlapping intervals (refer to Section 3.1). Considering the weight of each mixture component would give an even more refined splitting.

Various clustering algorithms other than \(G\)-means, \(X\)-means, and fuzzy \(C\)-means may also be suitable for optimizing covers in the Mapper construction. Spectral clustering [42] uses connectivity of data points instead of compactness as in \(k\)-means, and its fuzzy versions can be found in [34, 43]. Agglomerative clustering [22, 21] is a hierarchical algorithm that recursively merges the closest pairs of clusters, and its fuzzy version was established in [23]. A recent clustering algorithm [2] introduces a novel technique, partitioned local depth, interpreted as cohesion, but its soft clustering version has not been developed.

We compared the different methods for selecting a cover using qualitative assessments and runtimes. We leave more quantitative comparisons between the Mapper outputs of the different methods as future work. A co-optimal transport metric between hypergraphs is proposed in [11]. Since Mapper graphs can be viewed as hypergraphs, this metric is suitable for comparing the Mapper graph outputs of the different cover selection methods, as it was utilized in [32, 45].
**Acknowledgements.** This research is a product of one of the working groups at the American Mathematical Society (AMS) Mathematical Research Community: _Models and Methods for Sparse (Hyper)Network Science_ in June 2022. The workshop and follow-up collaboration were supported by the National Science Foundation under grant number DMS 1916439. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the American Mathematical Society. Kang-Ju Lee was supported in part by the National Research Foundation of Korea (NRF) Grants funded by the Korean Government (MSIP) (No. 2021R1C1C2014185). The authors would like to thank Nithin Chalapathi, Youjia Zhou, and Bei Wang for their encouraging conversations and their comments concerning how to handle datasets.
The Mapper algorithm is a visualization technique in topological data analysis (TDA) that outputs a graph showing the structure of a given dataset. However, the Mapper algorithm requires tuning several parameters in order to produce a "good" Mapper graph. This paper focuses on selecting the cover parameter. We propose an algorithm that optimizes the cover of a Mapper graph by splitting the cover repeatedly, based on a statistical test for normality. The algorithm is based on $G$-means clustering, which repeatedly applies the Anderson-Darling test in order to search for the optimal number of clusters in $k$-means. The splitting procedure uses a Gaussian mixture model to carefully choose the cover according to the distribution of the given data. Experiments conducted on synthetic and real-world datasets show that the algorithm generates covers so that the resulting Mapper graphs retain the essence of the datasets.
2309.16427
Klever: Verification Framework for Critical Industrial C Programs
Automatic software verification tools help to find hard-to-detect faults in programs checked against specified requirements non-interactively. Besides, they can prove program correctness formally under certain assumptions. These capabilities are vital for verification of critical industrial programs like operating system kernels and embedded software. However, such programs can contain hundreds or thousands of KLOC that prevent obtaining valuable verification results in any reasonable time when checking non-trivial requirements. Also, existing tools do not provide widely adopted means for environment modeling, specification of requirements, verification of many versions and configurations of target programs, and expert assessment of verification results. In this paper, we present the Klever software verification framework, designed to reduce the effort of applying automatic software verification tools to large and critical industrial C programs.
Ilja Zakharov, Evgeny Novikov, Ilya Shchepetkov
2023-09-28T13:23:59
http://arxiv.org/abs/2309.16427v1
# Klever: Verification Framework for Critical Industrial C Programs

###### Abstract

Automatic software verification tools help to find hard-to-detect faults in programs checked against specified requirements non-interactively. Besides, they can prove program correctness formally under certain assumptions. These capabilities are vital for verification of critical industrial programs like operating system kernels and embedded software. However, such programs can contain hundreds or thousands of KLOC that prevent obtaining valuable verification results in any reasonable time when checking non-trivial requirements. Also, existing tools do not provide widely adopted means for environment modeling, specification of requirements, verification of many versions and configurations of target programs, and expert assessment of verification results. In this paper, we present the Klever software verification framework, designed to reduce the effort of applying automatic software verification tools to large and critical industrial C programs.

## 1 Introduction

Tools that implement automatic software verification, also known as software model checking, aim at finding violations of specified requirements non-interactively and at formally proving program correctness under certain assumptions [24]. Automatic software verification tools (for brevity, we will refer to them as _verification tools_ below) participating in the annual competitions on software verification (SV-COMP) demonstrate excellent results for verification tasks included in the benchmark suite [5]. Some verification tools miss only a few faults and report few false alarms. The benchmark suite contains moderate-sized programs that are prepared in advance and checked against a predefined list of properties. However, these verification tools cannot be applied to large software projects without prior preparation.

This paper considers the verification of large critical industrial C programs such as operating system kernels, embedded software, web servers, database management systems, libraries, and utilities on which many people rely in their daily lives and at work. Such programs can contain hundreds or thousands of KLOC and might evolve rapidly (Table 1). Neither advanced algorithms implemented by developers of verification tools nor a substantial increase in computing power can guarantee obtaining valuable verification results in a reasonable time if large enough parts of these programs' source code are strongly relevant to the checked properties. The interest in the application of verification tools to critical industrial programs continually grows due to the following reasons:

* Faults in these programs can result in significant economic losses [9; 36].
* Traditional software quality assurance techniques such as code review, testing, and static analysis often cannot provide correctness guarantees [8; 19].
* Heavyweight formal verification methods like deductive verification are capable of proving functional correctness under certain assumptions, but such methods are successfully applied either to fairly small programs whose size is up to 12 KLOC [1; 21; 27] or to critical components of large programs [16; 17; 20], since verification of these programs as a whole would require enormous effort.

Target programs can extensively interact with their environment during operation, while verification tools are unaware of any restrictions imposed on the corresponding interactions.
Users may need to check that their programs satisfy specific requirements in addition to the supported properties. Program environment and requirements can vary for different versions and configurations of target programs. However, there is a lack of widely adopted methods and tools for preparing programs for large-scale verification. Finally, existing verification tools do not provide users with a comprehensive enough suite of means for expert assessment of verification results.

| **Program name and version** | **Size (MLOC)** | **Lines changed2 (MLOC)** |
|---|---|---|
| Linux 5.7.7 kernel | 27 | 2.6 added (9.7 %), 1.2 deleted (4.4 %) |
| FreeBSD 12.1 | 25 | 5.3 added (22 %), 3 deleted (12 %) |
| GCC 10.1 | 14 | 2.2 added (15 %), 0.99 deleted (6.9 %) |
| GTK 3.24.21 | 3.2 | 0.13 added (4.2 %), 0.1 deleted (3.2 %) |
| PostgreSQL 12.3 | 2.2 | 0.13 added (5.3 %), 0.07 deleted (3.3 %) |
| glibc 2.31 | 2 | 0.24 added (12 %), 0.25 deleted (13 %) |

Table 1: Characteristics of several open-source programs1.

To reduce the effort necessary for the application of automatic software verification tools to large industrial C programs, we have been developing the Klever software verification framework and project-specific adaptations, considered in Sections 2 and 3, respectively. Section 4 demonstrates the publicly available achievements of Klever and evaluates various aspects of its usage. Section 5 outlines the related work. In conclusion, we summarize our overall vision of the practical application of automatic software verification tools.

The main contribution of the paper is an approach and its implementation in the Klever verification framework, tailored to applying verification tools to large industrial programs. This approach is a complex set of methods that tackle each step of the path from selecting the scope of the source code to be checked to analyzing verification results. We provide a solution for each significant challenge that a user might face along this way. Consequently, the paper might look too broad and cover too many aspects. But focusing on a single individual problem, like environment modeling or program decomposition, does not allow seeing the big picture and simplifying the overall verification process in practice. Therefore, our main contribution is a solid approach to the application of automatic software verification tools to industrial C programs that breaks down into corresponding methods for: program decomposition; environment modeling; stating requirement specifications; presenting verification results and enabling their triage. In addition, the presented verification framework can work with any verification tool that supports its interface, which is pretty close to the one adopted by the SV-COMP community. The most vital requirements for us are the possibility to apply the tool directly to the source code, get verification results non-interactively, and visualize them. We stick to these criteria mostly for technical reasons and do not limit the variety of verification approaches and techniques that such tools can implement under the hood. Discussing the application of other analysis and verification tools to various software is not our goal in this paper, as it is too broad, so we limit the overview of related work to the particular problem of applying the aforementioned tools to industrial programs.
## 2 Klever: A Software Verification Framework

Fig. 1 presents the overall workflow of Klever [32]. Rectangles filled with red represent manual actions. Green rectangles denote automatic steps executed by Klever. The only blue rectangle corresponds to the invocation of verification tools. The rectangle with a dotted border represents an optional step that is necessary if users would like to improve verification results. The Klever software verification framework uses SV-COMP compliant verification tools as backends and automates the following steps (they are considered in detail in the corresponding subsections below):

* Decomposition of programs into fragments of a moderate size.
* Generation of environment models for various kinds of interactions with an environment.
* Generation of verification tasks, which includes configuration of verification tools.
* Scheduling and monitoring to enable parallel generation and solution of verification tasks.
* Preliminary processing of verification results.
* Managing verification processes and expert assessment of verification results.

Klever supports verification of programs developed in the GNU C programming language on _x86_64_, _ARM_, and _ARM64_ Linux platforms. Project-specific adaptations include configurations, specifications, and plugins for checking specific software. If a project adaptation has already been implemented, users need to deploy Klever and can start using it as is, without any additional manual efforts. Otherwise, several iterations dedicated to the development of the project-specific adaptation are necessary.

Figure 1: Klever workflow.

During design and development, we cared about supporting the verification of different versions and configurations of target programs, which is highly demanded by the fast-changing industry. Klever does not introduce any changes to the program's source code or build processes. Thus, it can pick up a new version or configuration of a program smoothly and run its verification without extra steps, except for adjusting an appropriate project-specific adaptation if necessary.

The Klever software verification framework is an open-source project3 primarily implemented in Python 3. Configurations from project-specific adaptations are stored as JSON files. Specifications are developed using appropriate domain-specific languages. Project-specific adaptation plugins are implemented as Python 3 modules. The Klever project repository contains numerous tests written in C. These tests are checked automatically by a CI/CD system before any substantial update to the main branch. Klever user documentation can be found at this link4. Footnote 3: [https://forge.ispras.ru/projects/klever](https://forge.ispras.ru/projects/klever), [https://github.com/ldv-klever/klever](https://github.com/ldv-klever/klever) Footnote 4: [https://klever.readthedocs.io/en/latest](https://klever.readthedocs.io/en/latest)

The following subsections describe particular steps of the automated Klever workflow and related components of the Klever software verification framework. It is worth noting that different Klever components depend on several auxiliary tools considered in Appendix A. One can omit any of these subsections if they are not of interest.

### Decomposition of Programs

One can hardly verify any complicated, large industrial program automatically as one piece of code. We suggest decomposing large programs by extracting smaller _program fragments_ from their source code to verify them separately.
Such an approach decreases the demand for computational resources while simultaneously increasing the effort required for environment modeling (more on this in the next subsection). We suggest considering logically interconnected program components, like, say, loadable kernel modules or plugins (Table 2), as program fragments. The decomposition is performed only at the file level: each program fragment can be considered as a set of C source files. In our experience, this is a "golden mean" that enables obtaining useful verification results with moderate effort for modeling the environment. Each program fragment should have at least one entry point -- a function that can be called by other program fragments.

It is hardly possible to propose a universal algorithm for automatic program decomposition. Thus, we suggest a configurable and extendable approach illustrated in Fig. 2. A user should provide configuration properties and the program's source code in the form of a Clade build base5. Program decomposition is performed automatically. However, a user can adjust it by tuning configuration properties and the decomposition specification between runs of Klever. The Project Fragment Generator (PFG) is a Klever component that performs the automatic decomposition. It implements the algorithm illustrated in Fig. 3. Configuration properties, denoted as _conf_, contain the names of files, directories, and program fragments for verification. The configuration also lists the names of tactics that implement particular decomposition (_decomposition_tactic_) and composition (_composition_tactic_) algorithms. These tactics ought to be implemented as part of a specific program adaptation, and their implementation should be based on the particular program architecture. However, the repository contains several simple open-source implementations of such tactics. The build base (_build_base_) contains, among other things, a directed callgraph and a build command graph, which are used by PFG6. The build base also contains other details needed to resolve dependencies between files, such as definitions of macros and types.

| **Project** | **Number of components (approximately)** |
|---|---|
| Linux Kernel | 5,000 |
| BusyBox | 300 |
| GTK | 200 |
| Apache | 150 |
| VLC | 80 |

Table 2: Approximate number of components in open-source projects.

Figure 2: Iterative method of program decomposition.

We focus mostly on the callgraph because functions are the essential means for implementing the main logic of a program, and the distribution of functions among files better reflects how the program is organized. First, PFG reads _conf_ and chooses two project-specific tactics (for decomposition and composition) depending on the program, its version, and the names provided by a user. The user should choose the most suitable tactics for decomposition and composition, depending on the program's design and its build process. Decomposition and composition tactics traverse the build command graph and callgraph, selecting files for program fragments during decomposition or merging already selected program fragments during composition. The decomposition specification (_decomposition_spec_) is a JSON file that specifies files to be added to or removed from particular program fragments7. PFG uses it in the next step. Footnote 7: There are a couple of examples of decomposition specifications in Appendix B.

Then PFG constructs a _file graph_ using the two mentioned graphs. Nodes of the file graph correspond to the program's C source files, and edges match function calls between them.
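As a rough illustration of this step, the file graph can be assembled from the callgraph alone. The sketch below is a minimal one, assuming a hypothetical build-base layout in which `callgraph` maps each function to the functions it calls and `definitions` maps each function to the C source file defining it.

```python
from collections import defaultdict

def build_file_graph(callgraph, definitions):
    """Nodes are C source files; an edge src -> dst means that some function
    defined in src calls a function defined in dst."""
    file_graph = defaultdict(set)
    for caller, callees in callgraph.items():
        for callee in callees:
            src, dst = definitions.get(caller), definitions.get(callee)
            if src and dst and src != dst:
                file_graph[src].add(dst)
    return file_graph

# Toy data mirroring the files of Fig. 4:
graph = build_file_graph(
    callgraph={"comp1_init": ["lib1_open"], "lib1_open": ["core_alloc"]},
    definitions={"comp1_init": "comp1.c", "lib1_open": "lib1.c",
                 "core_alloc": "core.c"})
# graph == {"comp1.c": {"lib1.c"}, "lib1.c": {"core.c"}}
```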
Then, the _decomposition_tactic_ constructs a _program fragment graph_ (_fragments_graph_) for the particular program under verification. Each node of this graph corresponds to a set of C source files of the program. An edge between two program fragments exists if the first one calls at least one function defined in the second one. Decomposition and composition tactics work with the file graph, callgraph, and build command graph. Common operations on them, such as traversing dependencies or finding specific nodes, are separated into a shared library that simplifies the implementation of new tactics. One can find more information about the configuration of program decomposition in the user documentation8. Footnote 8: [https://klever.readthedocs.io/en/latest/dev_decomposition_conf.html](https://klever.readthedocs.io/en/latest/dev_decomposition_conf.html)

Figure 3: Algorithm of decomposition into program fragments.

The _refine_fragments_ step corrects the generated program fragments according to the provided decomposition specification. This specification serves only the purpose of manually adjusting the results of the automatic, tactic-based program decomposition. To achieve that, the user can modify the generated decomposition specification by changing the lists of functions, files, and directories (either concrete names or regular expressions) describing program fragments. This specification can be provided at a follow-up run of Klever, and it supersedes the program fragments generated automatically.

At the next step, PFG marks nodes of the file graph and the program fragment graph as target ones using the configuration. A user can provide a list of functions, files, and directories (yet again, either concrete names or regular expressions) to specify targets for verification. If a file is marked as a target, then all program fragments that contain the file are also marked as targets.

The last step is the composition of program fragments. The idea is to find a few other program fragments for each target program fragment and combine them together into new program fragments. This may reduce the effort required for environment modeling. For example, a composition tactic can search for program fragments that provide missing function definitions for some target program fragment.

An example of decomposition and composition is shown in Fig. 4. There is a file graph in the left part of the picture. For illustration purposes, we added information about function calls (numbers), build command dependencies (shapes of the same color are input files of a single linking command), and lines of code per file (sizes of the ellipse shapes). There is a program fragment graph on the right side of the picture. There are two target components (_comp1.c_ and _comp2.c_) that actively use a couple of libraries (_lib1.c_ and _lib2.c_). The second component is implemented with a _helper_; the first library refers to _core.c_. Decomposition and composition tactics can use this information to simplify algorithms and make them more precise. Imagine that a user expects, as a result of the decomposition, to get two program fragments for the corresponding components with their dependencies included.

Figure 4: Decomposition and composition example.
PFG makes two program fragments, illustrated in the left picture, and then the composition tactic adds a program fragment with the libraries to each component. The dashed line on the right illustrates the result of such a composition.

It is worth mentioning that the decomposition process is not very stable with respect to changes in the code base if it relies on manual file or function listings. However, if there is a way to automatically select program components and implement the routine as a decomposition tactic, then the approach becomes much more stable. The Linux kernel is a good example. It is hardly possible to manually list files that correspond to loadable kernel modules, which may change significantly from version to version. But an algorithm that detects files from the build command tree allows seamlessly obtaining the list of files for each independent module, as long as the overall arrangement of build commands is unchanged.

### Environment Modeling

Libraries, user inputs, other programs, etc. constitute an environment that influences program execution. Program verification requires providing a model that represents certain assumptions about the environment:

* It should contain a function that we refer to as an _entry point_, and all paths analyzed by a verification tool should start with this function. This entry point should invoke the target program's API in the same way as the real environment.
* It should contain models of functions that the program calls during execution and which can influence verification results, but not necessarily all of them. These functions can be defined in other program fragments or libraries. Moreover, some functions that require modeling could be just too complicated for verification tools and need to be replaced by simplified models.
* It should correctly initialize external global variables.

Our experience shows that bug finding is possible even without accurate environment models if providing verification guarantees is out of scope. However, it is crucial to provide precise environment models, taking into account the specifics of the checked requirements and the programs under verification, to achieve high-quality verification results [39]. This becomes even more important after program decomposition, in order to avoid missing faults and false alarms.

The Klever framework allows the development of the environment model either directly in the C programming language or using the proposed DSL approach. A user might choose the approach that suits the project best. We focus mostly on the DSL approach in this section, but before diving deeper into details, let us highlight the main advantages of the proposed solution. The configurable DSL enables detaching the environment model from the code and maintaining a certain level of abstraction from the program interfaces implemented by a particular program fragment. The approach pays off if there are many program fragments with a similar design, like operating system device drivers. Otherwise, the development cost using the DSL might be close to the cost of developing the model in C manually, especially if each program fragment has an entirely unique interface and logic.

We show different kinds of interactions between components, called _interaction scenarios_, in Fig. 5. Each interaction scenario is related to some specific API and associated with arrows of a particular style.
For instance, the device driver provides initialization and exit functions (long-dashed line), device-specific callbacks (dotted line), and interrupt handlers (chained line), and, in turn, it calls library functions from the bus driver (thin line) and the kernel (short-dashed line). We refer to the corresponding function calls and read or write accesses to global variables as _interaction events_. Thus, each interaction scenario can be considered as a set of feasible sequences of interaction events that can happen during real program execution. We assume that all interaction events in an interaction scenario happen in the context of the same thread or process, and each event happening in the environment is followed by the execution of some code from the program fragment. The environment model might include different models of interaction scenarios, depending on the decomposition of a program into fragments. It is possible to verify a device driver separately from other components, both the device and bus drivers together, or even add to them extra files from the operating system kernel.

Figure 5: Interaction scenarios between drivers and their environment.

The next subsection describes the semantics of the used representation of environment models and an approach to their generation.

#### Intermediate Model

We propose a notation for the so-called _intermediate model_ to specify environment models of interaction scenarios, or just _scenario models_ for brevity. In simple words, the intermediate model describes, in a structured way, events that can happen in the environment of a program fragment. The language definition is out of the paper's scope; it is given in the Klever documentation. In this section, we present the semantics of the notation. An intermediate model aims at modeling scenarios before translating them to the C source code of the environment model. Models in the notation can be either generated or developed by a user.

Let the interface of a program fragment be \(I=\langle V_{p},F_{p},R_{p},T_{p}\rangle\), where \(V_{p}\) is a set of global variables, \(F_{p}\) is a set of function declarations, \(R_{p}\) is a set of macros, and \(T_{p}\) is a set of type definitions provided in the C programming language. The "p" suffix means that these objects are declared or defined in the program fragment. An intermediate model for the program fragment with interface \(I\) consists of the following elements:

\[M_{I}=\langle V_{e},F_{e},R_{e},T_{e},E_{F},E_{T}\rangle\]
\[E_{T}=\varepsilon_{1}\parallel\ldots\parallel\varepsilon_{n}\]

where \(V_{e},F_{e},R_{e},T_{e}\) are supplementary global variables, functions, macros, and type definitions provided in the C programming language as separate C source files or headers. The "e" suffix helps to distinguish environment-related objects from the program fragment's ones. \(E_{F}\) and \(E_{T}\) represent external functions and threads spawned by the environment, respectively. The set of function definitions \(F_{e}\) contains two kinds of functions: auxiliary and project-specific ones. Auxiliary functions are reused between projects, and many of them depend on the verification tool used. Models of POSIX or C standard library functions belong to the project-specific functions. \(E_{F}\) and \(E_{T}\) denote definitions of models of external functions and of specific separate threads for events happening in the environment, respectively.
Models of external functions specify undefined functions from \(F_{p}\setminus F_{e}\) (functions that have definitions or models neither in the program fragment nor in the environment model's supplementary files). Both environment thread models and external function models specify scenario models. The former describe events executed in a separate thread spawned by the environment; the latter specify sequences of interaction events executed in an already existing thread. The notation limits the application scope to parallel programs with shared memory. The semantics of environment thread models assume that interaction events happen in threads started simultaneously in the environment model9. Footnote 9: However, the word "run" here is used for convenience because the environment model and the program fragment are never intended for real execution but only for verification.

A scenario model \(\varepsilon\) can be considered as a transition system:

\[\varepsilon=\langle\mathcal{V},\mathcal{A},\mathcal{R},\alpha_{0}\rangle\]

where \(\mathcal{V}\) is a set of _labels_, \(\mathcal{A}\) is a set of actions, and \(\mathcal{R}\) is the transition relation of the scenario model. Labels are, in simple words, local variables of scenario models. States of the transition system correspond to different assignments of values to the labels from \(\mathcal{V}\) and the variables from \(V_{p}\cup V_{e}\) (global variables defined in the program fragment and the environment model, respectively). Actions describe state transitions and interaction events. Each label has a type defined in the program fragment or the standard C library. Each scenario model has a first action, denoted \(\alpha_{0}\). There are three kinds of actions: sending/receiving a signal, base blocks, and jumps. Base blocks contain C code, such as entry point calls or accesses to heap memory and global variables. Each base block action is a couple

\[\alpha=\langle\varphi,\beta\rangle\]

where \(\varphi\) is a precondition defined as a C logical expression over variables and labels from \(\mathcal{V}\cup V_{p}\cup V_{e}\), and \(\beta\) contains C statements over variables and labels from \(\mathcal{V}\cup V_{p}\cup V_{e}\) and functions from \((F_{p}\cup F_{e})\setminus E_{F}\). The base block code should follow the C programming language syntax. Each block should have a single control entry point and a single exit point, without any _goto_ statements or incomplete switches, loops, or conditional operators. It is also forbidden to introduce new variables in base blocks.

Signal exchanges allow describing data and ordering dependencies between actions in different scenario models. Signals behave according to the rendezvous synchronization model proposed by Tony Hoare [22]. Let us consider a scenario model \(\varepsilon_{i}=\langle\mathcal{V}_{i},\mathcal{A}_{i},\mathcal{R}_{i},\alpha_{0_{i}}\rangle\) that sends a signal to \(\varepsilon_{j}=\langle\mathcal{V}_{j},\mathcal{A}_{j},\mathcal{R}_{j},\alpha_{0_{j}}\rangle\). The sending and receiving actions are \(\alpha_{i}\in\mathcal{A}_{i}\) and \(\alpha_{j}\in\mathcal{A}_{j}\):

\[\alpha_{i}=\langle\varphi_{i},\pi_{i},l_{i}\rangle\qquad\alpha_{j}=\langle\varphi_{j},\pi_{j},l_{j},\psi_{j}\rangle\]

Constants \(l_{i}\) and \(l_{j}\) are signal names. A signal exchange happens if and only if \(l_{i}=l_{j}\). The C logical expressions \(\varphi_{i}\), \(\varphi_{j}\), and \(\psi_{j}\) over \(\mathcal{V}_{j}\cup V_{p}\cup V_{e}\) define the preconditions and the postcondition. The sending action \(\alpha_{i}\) does not have a postcondition because its local variables stay unchanged.
Two vectors of labels, \(\pi_{i}:v_{1},\ldots,v_{k}\) where \(v_{t}\in\mathcal{V}_{i}\) for \(t\in 1..k\), and \(\pi_{j}:u_{1},\ldots,u_{k}\) where \(u_{t}\in\mathcal{V}_{j}\) for \(t\in 1..k\), describe the data transfer: \(\forall t=1..k:\ u_{t}:=v_{t}\). The types of the labels at corresponding positions in the two vectors should match each other.

Jumps are actions that help to implement loops and recursion. An entrance to a jump replaces the current transfer relation rules with new ones. The order of actions (\(\mathcal{R}\) and \(\alpha_{0}\)) is specified in the _process_ entry using a simple language. There is a notation for the following kinds of actions:

* \(<name>\) is a base block action;
* \((name)\) is a signal receiving action, where \((!name)\) means that the scenario model waits for a signal to start;
* \([name]\) is a signal sending action;
* a _jump_ just specifies a new sequence of actions to perform; each jump action has its own _process_ entry.

The order of actions is described using two operators:

* "." is a sequential combination operator;
* "|" is a non-deterministic choice operator.

One can use parentheses in expressions; the sequential combination operator has a higher priority. An example of a transfer relation of a scenario model is illustrated in Fig. 6. Its order can be defined by the expression \((!a).<b>.(<c>|d).[e]\) together with the jump action \(d=<f>.d\,|\,[e]\).

Figure 6: Example of an interaction scenario transfer relation.

The notation of intermediate models is sufficient for specifying realistic environment models, as it allows combining an event-oriented description of scenarios with fragments of C code. It gives more freedom to describe an event-driven environment model, dependencies between events, and non-deterministic behavior. Descriptions of actions are separated from definitions of signals and transfer relations of scenario models. Such an organization simplifies comprehension of the whole model structure, and it is easy to illustrate it as a graph. The model-generating process allows weaving the environment model into the files of program fragments without touching the source code, which simplifies dealing with different versions and configurations of a program. The drawbacks of the approach are the large size of the JSON files and the redundancy of many features if an environment model is simple and requires mostly function models. In that case, it is easier to provide the model directly in C; this case is also supported in Klever. A simplified example of an intermediate model and its description can be found in Appendix C.

#### Generating Environment Models

Environment Model Generator (EMG) performs the synthesis of environment models for program fragments according to the workflow illustrated in Fig. 7. As input, EMG receives a program fragment, a build base, and optional environment model specifications developed manually. As output, EMG synthesizes an environment model in the aspect-oriented extension of the C programming language. CIF weaves the program fragment source code to add the environment model, but this step follows EMG's work10. Footnote 10: The next subsection and Appendix A.2 contain more details about the aspect-oriented extension of the C programming language and CIF.

The EMG's synthesis process consists of two steps:

1. First, _scenario generators_ independently synthesize scenario models. Their combination forms an intermediate model.
2. Then, the _translator_ generates the C code of the environment model from the provided intermediate model.
Each scenario generator provides scenario models that can communicate over signals. Scenario generators are parts of program-specific adaptations and can implement various generation strategies. The open-source implementation contains generators for Linux device drivers, as well as a generator that just combines manually written environment model specifications. The notations of environment model specifications for different scenario generators may differ. In practice, it is convenient to use a pipeline of several scenario generators for different kinds of scenario models. For large event-oriented programs, the development of scenario generators may take considerable time. In some cases, it is convenient to manually adjust or develop an intermediate model or a part of it; we provide a specific scenario generator to enable that.

The translator generates the main C source file with the entry point and several files in the aspect-oriented extension of the C programming language called _aspect files_. The first translation step is the extraction of the C code denoted as \(V_{e},F_{e},R_{e},T_{e}\) from the provided intermediate model. This part of the model does not need any translation and can be added to the corresponding files. Then the translator generates a function for each scenario model. We refer to such functions as _control functions_; we give an example of a control function in Appendix C. To generate a control function, the translator creates local variables per each label from \(\mathcal{V}\), adds C control operators to implement the order of interaction events defined by the intermediate model, and adds an implementation per event, as explained below. Subsequently, the translator determines how to replace references to labels in actions with the newly generated control function's local variables.

There is a sound approach for translating signal exchanges to C code without losing scenarios. Unfortunately, preliminary experiments indicated that the existing verification tools could not cope with complicated parallel models when checking memory safety or reachability properties. Thus, we have implemented a simplified translator that imposes additional restrictions on models and misses some possible scenarios. It allows only scenario models that receive signals as their first and last actions. The resulting code is sequential, and it calls the corresponding control functions in place of signal dispatches to scenario models. Such an optimization makes the resulting model straightforward, but it does not capture the interleaving of actions from different scenario models that may run simultaneously. There is a specific translation algorithm for producing simplified parallel models for checking data races, described in [2]. The obtained model is parallel, but it still does not allow signal exchange between the first and the last actions.

To proceed with generating control functions, the translator builds a control flow graph that determines the order of actions, using _if_ and _switch_ operators in place of "|" and ".", and _goto_ jumps in place of auxiliary _jump_ actions. After the control flow graph is ready, the translator supplies it with the code of base block actions and with newly generated code for receiving and sending actions in the appropriate places.

Figure 7: Environment Model Generator workflow.
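To make the last translation step concrete, a toy emitter in the spirit of the translator is sketched below. The tuple encoding of action-order expressions and the emitted C text are our illustrative assumptions; only the overall idea of rendering "|" as a _switch_ over an undefined value and "." as sequential composition follows the description above (jumps and _goto_ are omitted).

```python
def emit(order, indent="    "):
    """Render an action-order expression into C-like control function lines.
    `order` is a leaf action name or a nested ("seq", ...) / ("choice", ...) tuple."""
    if isinstance(order, str):
        return [indent + order + "();  /* base block, signal send or receive */"]
    kind, *subs = order
    if kind == "seq":
        return [line for sub in subs for line in emit(sub, indent)]
    if kind == "choice":
        lines = [indent + "switch (ldv_undef_int()) {  /* non-deterministic choice */"]
        for i, sub in enumerate(subs):
            lines.append(indent + "case %d:" % i)
            lines.extend(emit(sub, indent + "    "))
            lines.append(indent + "    break;")
        return lines + [indent + "}"]
    raise ValueError("unknown operator: %s" % kind)

# The order (!a).<b>.(<c>|d).[e] from Fig. 6, with the jump d flattened away:
print("\n".join(emit(("seq", "receive_a", "block_b",
                      ("choice", "block_c", "jump_d"), "send_e"))))
```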
CIF also adds calls to control functions from aspect files to the original source files of the program in order to trigger the execution of control functions at points where functions that correspond to function scenario models are called. Some extra details about environment modeling in the Klever framework can be found in [25; 33; 39] and in the user documentation11. Footnote 11: [https://klever.readthedocs.io/en/latest/dev_env_model_specs.html](https://klever.readthedocs.io/en/latest/dev_env_model_specs.html)

#### Modeling Complicated and Undefined Functions

As we already mentioned, it may be necessary to model complicated and undefined functions that influence the verification results. An example of a model of the Linux kernel memory allocation function _kmalloc()_ is given in Fig. 8, and Fig. 9 shows how this model is bound to the original function from the Linux kernel. The model reduces the _ldv_kmalloc()_ function to _malloc()_, which is known to any verification tool that participates in SV-COMP. The model takes into account that _malloc()_ does not return NULL according to the SV-COMP rules and that in the Linux kernel there are reserved error codes that cannot be valid pointer addresses (they are filtered out by _ldv_is_err()_). Using _ldv_undef_int()_ causes verification tools to traverse both paths when analyzing the target program with the given model. The first path corresponds to successful memory allocation, while the second one is a failure. This allows us to reveal possible faults on error handling paths that are unlikely to happen during normal operation and testing. _ldv_kmalloc()_ invokes _ldv_check_alloc_flags()_, which may be defined by requirement specifications to check memory allocation function flags. For instance, one can ensure that the flag _GFP_ATOMIC_ is passed when allocating memory in the interrupt context and when holding spinlocks within the Linux kernel.

Figure 8: Model of a memory allocating function.

Figure 9: Aspect file replacing the _kmalloc()_ function definition with its model.

Fig. 10 gives an example of support for different versions of the Linux kernel. In this example, the same model _ldv_is_err()_ is used, since only the return type of _IS_ERR()_ changed while its return values remained the same. In the case of changes in the semantics of an API, one may need to introduce different models. One can find additional information on this topic in the user documentation12. Footnote 12: [https://klever.readthedocs.io/en/latest/dev_common_api_models.html](https://klever.readthedocs.io/en/latest/dev_common_api_models.html)

Figure 10: Aspect file taking into account changes in the Linux kernel API.

### Specifications of Requirements

For checking requirements that do not correspond explicitly to any of the properties supported by verification tools [5], we suggest weaving additional source code into a program. This extra source code should express the requirements using one of the supported properties. For instance, rules of correct usage of a particular API can be formulated as the unreachability of an error function, as in Fig. 11 and Fig. 12. For some requirements, it may be hard or even impossible to express them using additional C expressions and statements. In this case, one has to leverage specific means of verification tools; e.g., this may be the case for finding data races [2]. If one expresses weakly related requirements using the same property, it is possible to check them simultaneously, but we do not recommend this due to the following issues.
The first reason is that it is not an easy task for verification tools to distribute available computational resources fairly between various requirements. The second reason is that most verification tools stop after they find the first violation of a checked property. Therefore, detecting a first fault or a false alarm can prevent finding other faults. Below, we provide an example of a requirements specification; Appendix D contains its high-level description. If the reader is not interested in this, he/she can proceed to the next subsection.

Figure 11: Original program source code.

Figure 12: Modified program source code.

Fig. 13 and Fig. 14 contain an example of a requirements specification that reduces checking the correct usage of the module reference counter API in the Linux kernel to a reachability problem. The requirements specification introduces a model state represented by the global variable _ldv_module_refcounter_, initialized to 0. The model state is changed within the model functions _ldv_module_get()_, _ldv_try_module_get()_, and _ldv_module_put()_ according to the semantics of the corresponding API. These model functions are bound to the original ones by the aspect file shown in Fig. 14. The considered requirements specification makes two checks with _ldv_assert()_. The first one is within _ldv_module_put()_; it is intended to find out when Linux kernel loadable modules decrement the module reference counter without first incrementing it. The second check is within _ldv_check_final_state()_; it tracks that modules should decrement the module reference counter to its initial value before finishing their operation.

Figure 13: Example of expressing requirements as a reachability problem.

To emphasize the statements that are most relevant to the checked requirements, we suggest using special comments. Such comments in the example begin with the keywords _NOTE_ and _ASSERT_. These comments are used during the preliminary processing and visualization of violation witnesses considered in the following subsections. One can find more details about the development of requirement specifications in the corresponding section of the user documentation13. Footnote 13: [https://klever.readthedocs.io/en/latest/dev_req_specs.html](https://klever.readthedocs.io/en/latest/dev_req_specs.html)

### Verification Task Generation

Klever generates a verification task for each pair of a program fragment and a requirements specification. Below, we consider the particular steps of this process.

#### 2.4.1 Weaving and Merging of Source Files and Models

First, CIF14 weaves the source code of the program fragment with aspect files from the requirements specification and the environment model. At this step, preprocessing is also performed. Thereafter, there are several instrumented and preprocessed original C source files as well as additional preprocessed model C source files. CIL combines all the source files. After all, there is a single preprocessed C file, _cil.i_, prepared for an immediate run of a verification tool. Footnote 14: One can find more details about CIF and CIL in Appendix A.2 and Appendix A.3, respectively.

#### 2.4.2 Configuration of Automatic Software Verification Tools

In practice, it may be helpful to experiment with different configurations of verification tools. There is also a need to specify different sets of verification tool options for various requirement specifications. Such sets may even differ for various versions of the same tool.

Figure 14: Aspect file replacing kernel function calls with model ones.
### Verification Task Generation

Klever generates a verification task for each pair of a program fragment and a requirements specification. Below, we consider particular steps of this process.

#### 2.4.1 Weaving and Merging of Source Files and Models

First, CIF14 weaves the source code of the program fragment with aspect files from the requirements specification and the environment model. At this step, preprocessing is also performed. This yields several instrumented and preprocessed original C source files as well as additional preprocessed model C source files. CIL then combines all these source files. Finally, there is a single preprocessed C file _cil.i_ prepared for an immediate run of a verification tool.

Footnote 14: One can find more details about CIF and CIL in Appendix A.2 and Appendix A.3 respectively.

#### 2.4.2 Configuration of Automatic Software Verification Tools

In practice, it may be helpful to experiment with different configurations of verification tools. There is also a need to specify different sets of verification tool options for various requirement specifications. Such sets may even differ for various versions of the same tool. To simplify the configuration process for end-users, we propose to prepare several so-called _verifier profiles_ in advance. Each requirements specification refers to one of the verifier profiles by default. In addition, a user can choose any other available variant without investing time into learning particular verification tool options and capabilities. An example of verifier profiles is given in Appendix E.

At the moment, Klever supports CPAchecker [6] as a verification tool backend, and there is a number of verifier profiles for it. To integrate new verification tools within Klever, users need to do at least the following:

* Describe specific verification tool options suitable for checking requirement specifications as extra verifier profiles.
* Provide verification tool binaries to a Klever deployment script.

One can learn more about configuration of verification tools in the corresponding section of the user documentation15.

Footnote 15: [https://klever.readthedocs.io/en/latest/dev_verifier_profiles.html](https://klever.readthedocs.io/en/latest/dev_verifier_profiles.html)

#### Final Preparation of Verification Tasks

At the final step, Klever combines everything obtained thus far into verification tasks. Each verification task consists of the following set of files:

1. _cil.i_ (its generation was described above).
2. A property file, like in Fig. 15 (see also the sketch at the end of this subsection). Its content is provided by the verifier profile referred to by a checked requirements specification. One can see a complete list of properties supported by various verification tools in [5].
3. A task definition file16. Klever uses it only to bind the two previous files, like in Fig. 16.
4. A benchmark definition file17. It connects all previous files as well as specifies various verification tool options provided by a user in various configuration files. An example of this file is shown in Fig. 17.

Footnote 16: [https://github.com/sosy-lab/benchexec/blob/main/doc/task-definition-example.yml](https://github.com/sosy-lab/benchexec/blob/main/doc/task-definition-example.yml)

Footnote 17: [https://github.com/sosy-lab/benchexec/blob/main/doc/benchexec.md#input-for-benchexec](https://github.com/sosy-lab/benchexec/blob/main/doc/benchexec.md#input-for-benchexec)

Figure 16: Task definition example.

Figure 17: Benchmark definition example.
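As an illustration of the property file (item 2 above, Fig. 15), reachability properties in the SV-COMP property format occupy a single line like the following sketch; the exact entry point and error function names used by Klever's verifier profiles may differ.

```
CHECK( init(main()), LTL(G ! call(reach_error())) )
```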
### Scheduling and Monitoring

Klever Scheduler operates with two types of objects: verification tasks and verification jobs. A _verification job_ is a set of files including a build base18, specifications, and configuration files that are necessary to start automatic verification. Verification jobs are solved by a specific Klever component named Klever Core. During the solution of verification jobs, it generates verification tasks according to the approaches described above. Each Klever Core instance can generate verification tasks in parallel to speed up the entire verification process.

Footnote 18: As a rule, build bases are not explicitly included in verification jobs. They are located and found in the Klever deployment directory.

Monitoring of available computational resources and their fair distribution between verification jobs and verification tasks are responsibilities of Klever Scheduler. Klever Scheduler respects verification task priorities specified by users. Also, Klever Scheduler supports canceling verification jobs. Some verification tasks can require considerably fewer computational resources than specified by the user. To avoid useless memory reservations, we perform speculative scheduling, trying to run a verification tool with smaller limits first.

We use BenchExec19 to isolate runs of Klever Core and verification tools as well as to measure and limit computational resources consumed by them on a single machine. It also allows obtaining verification results from different verification tools in a unified format. Klever Scheduler currently supports parallel solution of verification jobs and tasks on a single machine. Also, it can solve verification tasks in parallel using VerifierCloud20.

Footnote 20: [https://vcloud.sosy-lab.org](https://vcloud.sosy-lab.org)

### Preliminary Processing of Verification Results

One of the most important functions of the framework is the presentation of verification results to users. The Klever component named Verification Result Processor (VRP) performs preliminary processing of verification results for each solved verification task.

Violation witnesses generated by verification tools are machine-readable, and a user can hardly use them in their raw format. Moreover, verification tools can miss some details and even parts of corresponding error paths. We support an extended witness format for CPAchecker to have all the necessary pieces of information21. In addition, it allows adding some important internal information from verification tools to the violation witness. For instance, places where leaked memory is allocated can be specially noted during memory-safety checking. Other verification tools do not provide violation witnesses using our format; thus, their visualization may not be as good as that for CPAchecker.

Footnote 21: [https://klever.readthedocs.io/en/latest/dev.html#extended-violation-witness-format](https://klever.readthedocs.io/en/latest/dev.html#extended-violation-witness-format)

VRP translates violation witnesses into a Klever internal format that is more convenient for visualization and assessment purposes22. Using information from violation witnesses, it finds corresponding expressions and statements within _cil.i_ to be presented to users and adds references to corresponding lines of source files. Also, at this stage, VRP parses special comments from environment models and requirement specifications. This helps to distinguish important pieces of information like invocations of program fragment entry points and changes to model states. In addition, VRP marks all parts of violation witnesses that seem irrelevant to the found violations of checked requirements so that they can be hidden during subsequent visualization.

Footnote 22: [https://klever.readthedocs.io/en/latest/dev.html#error-trace-format](https://klever.readthedocs.io/en/latest/dev.html#error-trace-format)

The CPAchecker verification tool can provide code coverage reports in addition to witnesses [12]. These reports are in the GCC test coverage format (GCOV). For their visualization, one can use standard tools like LCOV23. Code coverage reports are an important artifact for establishing verification in practice. They reflect parts of the program, such as lines of code, branches, and functions, that are actually verified. This information is essential for estimating the quality of an environment model, since neither violation nor correctness witnesses provide data on actual program paths. Code coverage reports help to understand which program entry points should be invoked additionally by environment models. VRP converts code coverage reports to an internal format to facilitate their visualization24.
As for violation witnesses, VRP adds references to the original source files. Also, it merges reports issued for individual verification tasks to get a single code coverage report for the program for each requirements specification. At this step, VRP calculates statistics for each subdirectory of the program source tree, taking into account either the source files analysed by a verification tool or all source files from a build base. These sets of source files can differ since the verification tool may miss analysis of certain source files, e.g., due to some internal failures or if some source files are not included in a considered program configuration.

Footnote 24: [https://klever.readthedocs.io/en/latest/dev.html#code-coverage-format](https://klever.readthedocs.io/en/latest/dev.html#code-coverage-format)

### User Interface

The Klever software verification framework implements a user interface as a web-server named Klever Bridge. Below, we consider two major use cases of Klever Bridge.

#### Managing Verification Processes

To obtain verification results, users should prepare all the necessary data and then start verification. At this stage, Klever Bridge provides the following facilities:

* Each new verification activity starts with the creation of a new verification job (Fig. 18). Verification jobs consist of specifications and configuration files that are necessary to start the verification process and have attributes such as a name, date of creation, author, access rights, etc. Verification jobs can be created based on templates for corresponding project-specific adaptations or on existing verification jobs. To proceed to verification, users should choose program fragments and requirements to be checked. Besides, to get better verification results for particular programs and requirements, we suggest incrementally tuning various configuration options and improving specifications. Klever Bridge provides a full-featured text editor to complete these tasks.
* Klever Scheduler starts the execution of Klever Core as soon as the user submits a newly created verification job and there are available computational resources. Klever Core reports verification results when they become available. It is worth noting that since verification can take a considerable time, Klever Bridge presents verification results to experts as soon as they appear. In this case, experts can proceed to their analysis faster; in particular, it is possible to understand that something was done wrong without waiting for all verification results.
* Klever Bridge provides information on running and already solved verification jobs. For running verification processes, it presents their progress: the number of already solved and the total number of verification tasks, elapsed time, and approximate remaining time. The latter is estimated by Klever Core and Klever Scheduler based on accumulated statistics.

#### Expert Assessment of Verification Results

To simplify the analysis of verification results by experts, we implemented the following facilities in Klever Bridge:

* Visualization of violation witnesses, code coverage and internal failures (Fig. 19). The primary goal of this visualization is to hide from experts as many details as possible that are irrelevant according to domain knowledge. Besides, all verification results are closely linked to the original source files.
* Evaluation of verification results by creating expert marks (Fig. 20). Expert marks can be associated with verification results either manually or automatically.
The automatic assessment saves a great deal of time by avoiding the analysis of similar verification results when different versions or configurations of the same program are verified. Often, in these cases, violation witnesses and internal failure reasons do not differ considerably, and Klever Bridge matches them by fairly simple rules. Experts can provide each mark with a detailed description and tags. To further simplify the analysis, Klever Bridge keeps the entire history of mark changes.
* Showing various statistics over verification results that help to understand the general picture. For instance, it can be very useful to see how many warnings were yielded for a particular verification job, which warnings correspond to faults and false alarms, what the most significant reasons for false alarms are, and so on.
* Users can compare verification jobs both as sets of files and by their verification results. This is useful both for tracking changes across various versions and configurations of target programs and for estimating changes during the development of project-specific adaptations.

Figure 18: Creation of a new verification job.

It might be useful to integrate expert marks with issue tracking systems. However, it is worth noting that a mark does not always correspond to a real bug; marks can also help to distinguish false alarms or issues in the environment models. One can find more examples and detailed explanations of some actions with the Klever user interface in the tutorial25.

Footnote 25: [https://klever.readthedocs.io/en/latest/tutorial.html](https://klever.readthedocs.io/en/latest/tutorial.html)

Figure 19: Visualized violation witness and code coverage.

Figure 20: Creation of a new expert mark.

## 3 Project-Specific Adaptations

This section describes publicly available project-specific adaptations delivered together with the Klever software verification framework. The application domain of Klever is, however, not limited to the programs described in this section. There are two usable adaptations -- for the Linux kernel and BusyBox. The adaptation for the Linux kernel demonstrates the capabilities of verification of complex event-driven software. The BusyBox adaptation is a proof of concept of how Klever is applicable to user-space programs.

### Adaptation for the Linux Kernel

The Linux kernel architecture is monolithic -- it is loaded into RAM during operating system startup, and then it operates completely in the same shared address space. The Linux kernel implements main operating system functionalities such as scheduling, memory management, interprocess communication, interrupt handling, a network stack, and so on. Besides, it supports modules that can extend its functionality further. As a rule, device drivers, file systems, network protocols, and audio codecs are developed as modules. Modules can be either statically compiled into the kernel or dynamically loaded, but in both cases, they become parts of the monolithic structure.

We consider the whole Linux kernel, except loadable modules, as a set of subsystems. Each subsystem usually consists of all the C source files inside a specific subdirectory of the kernel source tree. Typical personal computers can load several hundred modules, but that is a small fraction of all available modules. There are more than 8,000 modules in Linux 5.12-rc3 compiled for the _allmodconfig_ configuration. The average module size is about 1.7 KLOC with a median of 500 lines. The average subsystem size is 7 KLOC with a median of 2 KLOC.
The adaptation for the Linux kernel includes:

* Tactics for decomposition and composition. They extract modules and subsystems from the Linux source code and combine them together.
* A pipeline of several scenario generators to obtain environment models.
* A relatively large set of environment model specifications.
* A set of requirement specifications allowing one to check memory safety, correct usage of the Linux kernel API, and the absence of data races.

Below, these items are treated in detail. In this section, we do not provide verification results for the Linux kernel, since they are the subject of the following section.

#### Decomposition and Composition Tactics

There is only one tactic implemented for decomposing the Linux kernel source code. This tactic uses a build command graph from the Linux kernel build base as an input. It iterates over LD (linker) and AR (archiver) build commands. The tactic divides them into two types, depending on the extension of the output files. Files with the extension ".ko" correspond to loadable kernel modules. Subsystems are associated with files "built-in.o" or "built-in.a" in recent versions of the Linux kernel. The decomposition tactic generates a separate program fragment for each selected build command. It traverses the build command graph to extract CC (compiler) build commands whose output files become a part of the module or subsystem. Each CC build command has an input C source file, and such files together form the final program fragment.

The composition step is optional, so we do not consider it here in detail. There are two composition strategies that allow combining modules or subsystems that call exported functions from each other using a greedy algorithm.

#### Environment Modeling

A program fragment can contain a single module or several modules and subsystems. An interface for the program fragment can include:

* _Subsystem initialization functions_. These functions are entry points of subsystems, and the Linux kernel calls them in a specific order at boot time.
* _Module initialization and exit functions_. Pairs of such functions are entry points for modules. The Linux kernel calls them when loading and unloading modules from memory, respectively.
* _Callbacks_. Each separate group of callbacks implements operations for handling events relevant to a particular resource or device. Callbacks are registered and deregistered in the Linux kernel by calling special functions.
* _Exported functions_. Any module or subsystem can implement functions that are exported and used by other Linux kernel components.

Figure 21: Pipeline of scenario generators.

There is a pipeline of three scenario generators to compose environment models according to Fig. 21:

1. _Init/exit caller_. It is a scenario generator for calling subsystem initialization functions and module initialization and exit functions.
2. _Callbacks caller_. This scenario generator invokes Linux kernel callbacks.
3. _User model composer_. This scenario generator adds handcrafted parts to the intermediate model.

_Init/exit caller_ accepts as input a specification with an order for calling subsystems' initialization functions. Klever contains such a specification that enumerates a list of relevant macros for this purpose. The scenario generator checks whether these macros and _module_init/module_exit_ macros are used in the source code of the program fragment. If so, it groups modules' initialization and exit functions in pairs (a simplified sketch of a resulting environment model is given below).
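As a rough illustration, an environment model produced by this generator for a single module might boil down to the following sketch; the code Klever actually generates is considerably more involved, and the names _ldv_main_, _target_module_init_ and _target_module_exit_ are hypothetical placeholders for the generated entry point and the functions registered via _module_init/module_exit_.

```c
/* A simplified sketch of an environment model for a single module; the
 * generated code is considerably more involved. All names are hypothetical. */
extern int  target_module_init(void);  /* function registered via module_init() */
extern void target_module_exit(void);  /* function registered via module_exit() */

void ldv_main(void)
{
    /* Initialization can fail; in that case the exit function must not run. */
    if (target_module_init())
        return;

    /* ... scenarios invoking the module's callbacks go here ... */

    target_module_exit();
}
```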
The generator also determines the proper order of calling the corresponding functions if the program fragment contains several modules or subsystems. There is an example of an order for calling a subsystem initialization function and a driver's initialization and exit functions in Fig. 22. Both initialization functions can return an error, and the curved arrows reflect these cases. The Linux kernel always initializes subsystems before loadable kernel modules, so the corresponding function precedes the module's initialization function.

Figure 22: Order of calling subsystem initialization, driver initialization and driver exit functions.

_Callbacks caller_ is the most complicated scenario generator. At the first stage, the generator collects all interface specifications. These specifications list declarations used in invocations of model callbacks. Interfaces are grouped into categories corresponding to the specific functionality of the Linux kernel. A category can contain the following types of interfaces:

* Kernel functions include functions for registering, deregistering, creating, or destroying relevant data.
* Callbacks' declarations help to detect callbacks' groups implemented in a program fragment.
* Containers are structures storing pointers to callbacks or other important data, e.g., pointers to device tables representing supported devices and some of their characteristics.
* Resources are objects passed as parameters to multiple callbacks.

Then, the generator selects the categories of those interfaces that are implemented or used in the source code of the program fragment. After that, the generator obtains all the specifications that describe callbacks' invocation. We refer to them as event specifications. The specifications' notation extends the format of an intermediate model. The main extensions are the following: new label attributes for binding labels to particular interfaces and categories, and flags defining labels as resources, callbacks, and containers. There is also a new kind of action named _callback_. It describes a callback's call with labels corresponding to resources.

Each category needs an event specification to generate an environment model. A user can set the connection explicitly. Otherwise, a heuristic algorithm is used to select a suitable event specification for an interface category. The final step is the transformation of transition systems from event specifications to intermediate model notation in accordance with the interfaces implemented or used in the program fragment. At this step, the generator performs several modifications and simplifications, such as replacing callback actions with code blocks containing explicit callback calls. If there are several implementations of callbacks in a certain category, the generator inserts additional scenario models based on the event specification. Also, it prints callback wrappers if callbacks are implemented as static functions.

_User model composer_ accepts serialized and manually modified intermediate models. They can be obtained at any launch of Klever from its working directory and then manually corrected. A user can also write an intermediate model from scratch. The generator does not make any modifications to the provided specifications. It either adds to or replaces parts of the intermediate model already received as input.
In total, the collection describes callbacks from the following categories: USB, USB Serial, IIO, PCI, Platform, Power Management, Class, I2C, SCSI, HID, TTY, Parport, file and seq operations, block driver, Super, Net, interrupts, kthreads, timers, tasklets and workqueues. The level of code coverage and the quality of verification results depend on the number of callbacks invoked by environment models. Fig. 23 shows the share of implementations of device driver callbacks of the most popular types (categories in our terminology) relative to the number of implementations of all types of callbacks for Linux 3.14. The plot demonstrates that more than 80 % of all callbacks belong to the 100 most popular categories. Thus, obtaining a significantly higher level of code coverage would require considerably more effort in developing environment model specifications than we have already spent. The code coverage for particular Linux kernel versions is discussed in Section 4.

Figure 23: Dependence of the share of implementations of callbacks of the most common types on the number of all callbacks implementations for Linux kernel 3.14.

The Klever project's source tree contains environment model specifications in the directory _presets/jobs/specifications/linux_. It includes 36 specifications with 20 KLOC of DSL code and 1.5 KLOC of models in C. The collection of specifications is suitable for the following Linux kernel versions: 2.6.33, 3.14, 4.6.7, 4.15, 4.17 and 5.5. One can use these specifications for other versions as well, but the quality of verification results can decrease. The _presets/jobs/specifications_ directory contains auxiliary functions for different adaptations in the _verifier_ and _common_ subdirectories. They contain models for allocating memory, returning undefined values and so on.

#### Requirement Specifications

Klever includes 34 requirement specifications for checking memory safety and rules of correct usage of the Linux kernel API as well as for finding data races in Linux kernel modules and subsystems. In general, these requirement specifications allow detecting the following faults:

* Buffer overflows.
* Null pointer dereferences.
* Uninitialized memory usages.
* Double or incorrect memory releases.
* Data races and deadlocks.
* Leaks of specific resources of the Linux kernel.
* Incorrect function calls depending on the execution context.
* Incorrect initialization of specific Linux kernel resources.

Requirement specifications for the Linux kernel are located in the directory _presets/jobs/specifications/linux_. The requirement specifications base is placed at _presets/jobs/specifications/linux/Linux.json_. In total, the requirement specifications' size is about 5.4 KLOC. The requirement specifications base consists of approximately 900 lines. Development of a typical new requirements specification can take about a week for experts and a month for novices.

### Adaptation for BusyBox

The BusyBox project incorporates several command-line utilities such as _cat_, _tar_, or _tail_ into a single software system. Each utility is called an _applet_. Each applet can consist of several C source files. During its operation, it relies only on the _libbb_ shared library of the project and the C standard library. There is exactly one entry point function in an applet. Its name is a concatenation of the applet's name and the suffix _main_. The function prototype coincides with the prototype of the _main_ function. The adaptation in Klever matches the BusyBox 1.28.3 version.
It includes one decomposition tactic and a set of requirement and environment model specifications.

#### Decomposition Tactic

The decomposition tactic for the BusyBox project separates the _libbb_ library and applets. Its algorithm includes the following steps:

1. Find all C source files in the project source code. Distinguish files from the _libbb_ directory as a separate program fragment.
2. Find those files that implement entry point functions according to the pattern mentioned above. We refer to them as main files.
3. Using a call graph, collect C source files outside the _libbb_ directory that the main files depend on.
4. Prepare a list of program fragments for applets by joining main files with their found dependencies.

The algorithm has an additional configuration parameter to add C source files from the _libbb_ directory to each program fragment. It is enabled by default. The adaptation has a decomposition specification for BusyBox. It implies removing eight _libbb_ files from each program fragment. The excluded files cause problems for Klever and verification tools. Klever successfully generates program fragments for 185 applets of BusyBox configured in _defconfig_.

#### 3.2.2 Environment Modeling

An applet's environment model calls the main function and provides stubs for certain undefined functions. Different adaptations share a scenario generator that calls functions in random order and with undefined parameters. It is used in the BusyBox adaptation to generate calls of applets' main functions. There is an example of such a harness for the SSL client applet in Fig. 24; a simplified sketch is given at the end of this subsection.

Figure 24: Example of an environment model for the SSL client applet.

The set of environment model specifications includes several types of models in the C programming language:

1. Models of functions from the C standard library and POSIX such as functions for working with strings, _exit_, _fork_, etc. These models can be reused for verification of other user-space programs.
2. Models of functions from the excluded _libbb_ source files.

The general set of functions is tiny; there are no more than a dozen functions of each mentioned type. With the current environment models, line coverage is 93 % and function coverage is 86 %.
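Below is a rough sketch of the kind of harness shown in Fig. 24; the code Klever actually generates differs in details, and _ldv_main_ and _ldv_undef_int()_ (a helper returning an arbitrary, undefined value) are assumed names.

```c
/* A rough sketch of a harness like the one in Fig. 24; the generated code
 * differs in details. All ldv_* names here are assumptions. */
extern int ldv_undef_int(void);                    /* arbitrary (undefined) value */
extern int ssl_client_main(int argc, char **argv); /* the applet's entry point */

void ldv_main(void)
{
    int argc = ldv_undef_int(); /* undefined argument count */
    char **argv;                /* argument vector deliberately left undefined */

    ssl_client_main(argc, argv); /* invoke the applet as the shell would */
}
```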
#### 3.2.3 Requirement Specifications

We verified memory safety and consistency of opens and closes of file descriptors and pipes for BusyBox applets. No faults were found, and the number of false alarms was less than a dozen.

## 4 Practical Evaluation

Klever has been used for verification of various operating system kernels and drivers. As a proof of concept that can be publicly demonstrated, Klever helped to reveal more than 400 faults acknowledged by the Linux kernel developers26. Many of these bugs were reported by students of several universities and participants of the Google Summer of Code program.

We verified a subset of loadable kernel modules of Linux 5.5 released on January 26, 2020 and Linux 5.12-rc3 released on March 14, 2021 to evaluate different aspects of the user experience with Klever. We built the Linux kernel for the architecture _x86_64_ and configuration _allmodconfig_. Target loadable kernel modules were treated as program fragments. On average, the size of the chosen modules was close to the mean size of all modules of the Linux kernel. These modules were from the following subdirectories of directory _drivers_: _hid_, _hwmon_, _media_, _mtd_, _platform_, _staging_, _usb_ and _video_. Table 3 presents characteristics of these modules.

There are about 4 % more modules for Linux 5.12-rc3 in comparison with Linux 5.5. We checked the given modules against 17 requirement specifications intended for checking the correct usage of the Linux kernel API27. Also, we checked memory safety for them. Below, we especially distinguish memory safety, since the solution of the corresponding verification tasks considerably differs from the solution of other verification tasks. Klever uses a CPAchecker configuration based on symbolic memory graphs [15] for memory safety and a CPAchecker configuration based on block-abstraction memoization and predicate analysis [18] for requirement specifications intended for checking the correct usage of the Linux kernel API.

Footnote 27: Identifiers of checked requirement specifications are as follows: alloc:{irq, spinlock, usb lock}, arch:{asm:dma-mapping, mm:ioremap}, drivers:{base:class, clk1, clk2}, drivers:usb:{core:driver, core:urb, core:usb:coherent, core:usb:dev, gadget:udc-core}, fs:sysfs:group, kernel:{module, rcu:update:lock}, net:core:dev.

For experiments, we used Klever 3.1 and the verification tool CPAchecker r36955 from branch _klever-fixes_. Klever 3.1 was released with some improvements in specifications intended specially for Linux 5.5, while for Linux 5.12-rc3 we did not make any specific adjustments in either specifications or tools. All experiments were conducted on an OpenStack virtual machine with 8 virtual cores of the Intel Xeon E312xx (Sandy Bridge) CPU, 64 GB of memory and Debian 9 (Stretch) on board. The computational resource limits per verification task were 5 minutes of CPU time and 5 GB of memory.

\begin{table} \begin{tabular}{|l|c|c|} \hline **Characteristic** & **Linux 5.5** & **Linux 5.12-rc3** \\ \hline Number of modules & 2,059 & 2,141 \\ \hline Number of C source files & 3,383 & 3,645 \\ \hline Total size of modules & 2.7 MLOC & 2.8 MLOC \\ \hline Average size of modules & 1.3 KLOC & 1.3 KLOC \\ \hline \end{tabular} \end{table} Table 3: Characteristics of target modules.

### Verification Results

Klever successfully generated 31,444 verification tasks for Linux 5.5 and 31,886 verification tasks for Linux 5.12-rc3. Table 4 presents the overall statistics of obtained verification results. One can see from the table that the share of _Unsafes_ is approximately 50 times higher for memory safety. Besides, the relative number of _Unknowns_ is also 2.5 times higher. For requirement specifications intended for checking the correct usage of the Linux kernel API, the most common verdict was _Safe_, i.e. the verification tool could prove correctness under certain assumptions.

The changes in verification results for Linux 5.12-rc3 relative to Linux 5.5 are not large. One can see that there are fewer _Safes_ and more _Unknowns_. Foremost, this is associated with some new expressions in the source code that cause failures of various Klever components and CPAchecker. For instance, CIL started to fail on 19 modules from the directory _drivers/staging/greybus/_. The number of _Unsafes_ did not change much. A detailed analysis shows that the most noticeable changes were caused by two reasons. The first reason is that modules started to use new APIs that were not yet modeled. The second reason is that we reported many faults revealed for Linux 5.5, and the corresponding fixes were accepted before the release of Linux 5.12-rc3. Thus, Klever does not find these faults anymore.
In any case, these slight changes demonstrate that, despite quite considerable changes in the target programs (see Table 1 and Table 3), Klever could still produce comparable and valuable verification results without any changes in the project-specific adaptation for the Linux kernel. In the remainder of this subsection, we consider _Unsafes_ and _Unknowns_ in detail.

\begin{table} \begin{tabular}{|l|c|c|} \hline **Verdict** & **Linux 5.5** & **Linux 5.12-rc3** \\ \hline \multicolumn{3}{|c|}{Memory safety} \\ \hline Unsafe & 280 (14 \%) & 278 (13 \%) \\ \hline Safe & 910 (44 \%) & 901 (42 \%) \\ \hline Unknown & 869 (42 \%) & 962 (45 \%) \\ \hline \multicolumn{3}{|c|}{Correct usage of the Linux kernel API} \\ \hline Unsafe & 85 (0.24 \%) & 110 (0.3 \%) \\ \hline Safe & 29,033 (83 \%) & 29,319 (81 \%) \\ \hline Unknown & 5,885 (17 \%) & 6,968 (19 \%) \\ \hline \end{tabular} \end{table} Table 4: Overall statistics of verification results. _Unsafes_ correspond to warnings issued by the verification tool (they may be either faults or false alarms). _Safe_ means that the verification tool could prove correctness under certain assumptions. _Unknowns_ correspond to failures of the verification tool or Klever components.

#### 4.1.1 Unsafes

Table 5 shows the number of faults and the number of false alarms. The number of faults revealed during checking memory safety is approximately the same as the total number of faults that were found for all remaining requirement specifications. We treat these faults separately below.

The false alarm rate when checking memory safety is higher than for requirement specifications intended for checking the correct usage of the Linux kernel API. This is explained by the fact that memory safety checking requires more accurate environment models. For Linux 5.12-rc3 the false alarm rate increased. Primarily, this is due to the same reasons as for the changes in the number of _Unsafes_.

Table 6 demonstrates the distribution of false alarms over different reasons. Inaccurate analysis performed by CPAchecker when checking memory safety gives considerably more false alarms than for other requirement specifications. Besides, memory safety checking is more demanding of the accuracy of environment models, as was already stated above. Most such false alarms are due to the absence of definitions and models of functions that considerably influence verification results, e.g., those functions that allocate and initialize specific kernel resources. There is a noticeable difference in the distribution of false alarms for Linux 5.12-rc3 in contrast to Linux 5.5 for requirement specifications intended for checking the correct usage of the Linux kernel API. It can be explained by the fact that modules started to use new APIs that should be modeled.

Users may wonder whether it is possible to get rid of false alarms, since they hinder the assessment of verification results. Fortunately, false alarms tend to fall into a few large groups.
\begin{table} \begin{tabular}{|l|r|r|} \hline **Unsafe type** & **Linux 5.5** & **Linux 5.12-rc3** \\ \hline \multicolumn{3}{|c|}{Memory safety} \\ \hline Fault & 34 (12 \%) & 25 (9 \%) \\ \hline False alarm & 246 (88 \%) & 253 (91 \%) \\ \hline \multicolumn{3}{|c|}{Correct usage of the Linux kernel API} \\ \hline Fault & 30 (35 \%) & 33 (30 \%) \\ \hline False alarm & 55 (65 \%) & 77 (70 \%) \\ \hline \end{tabular} \end{table} Table 5: Distribution of _Unsafes_.

\begin{table} \begin{tabular}{|l|r|r|} \hline **False alarm reason** & **Linux 5.5** & **Linux 5.12-rc3** \\ \hline \multicolumn{3}{|c|}{Memory safety} \\ \hline Inaccurate environment models & 123 (50 \%) & 128 (51 \%) \\ \hline Inaccurate requirement specifications & 0 (0 \%) & 0 (0 \%) \\ \hline Imprecise verification tool & 114 (46 \%) & 117 (46 \%) \\ \hline Others & 9 (4 \%) & 8 (3 \%) \\ \hline \multicolumn{3}{|c|}{Correct usage of the Linux kernel API} \\ \hline Inaccurate environment models & 20 (36 \%) & 42 (55 \%) \\ \hline Inaccurate requirement specifications & 22 (40 \%) & 25 (32 \%) \\ \hline Imprecise verification tool & 11 (20 \%) & 9 (12 \%) \\ \hline Others & 2 (4 \%) & 1 (1 \%) \\ \hline \end{tabular} \end{table} Table 6: False alarm reasons.

For example, about 20 % of all false alarms related to the verification tool are due to CPAchecker not supporting the attributes _packed_ and _aligned_. Also, there are several non-modeled functions, each causing about 10 % of all false alarms related to environment models. Fixing verification tools is a very laborious job, but they get better every year. Additionally, we improve environment models steadily. Often, eliminating the initial reasons for false alarms is not enough: after that, models for other functions may be required, or CPAchecker can produce false alarms due to imprecise analysis.

Caution: We assessed the reported _Unsafes_ manually, so there may be some mistakes. Readers who try to repeat this work may obtain a somewhat different distribution.

#### 4.1.2 Faults

In this subsection, we discuss the faults that we have found, as they represent quite an important outcome for most users. There are two primary questions regarding the discovered faults:

* Whether these faults are crucial or benign (static analysis tools are often accused of reporting only the latter).
* Whether these faults can be easily found by other approaches.

Klever helped to identify faults that can result in various bad consequences, such as NULL pointer dereferences, buffer overflows, usage of uninitialized data, memory leaks, and so on. We prepared patches fixing some of them and sent these patches to the Linux kernel developers to demonstrate that these faults are really essential. They accepted most of the patches and rejected just a few of them, since those corresponded to situations that are not possible in practice (we need to improve our environment models). Appendix F enumerates the faults that were found by Klever in Linux 5.5. The total number of faults that were fixed by developers after our reports is 20. The number of faults that were also identified by other means is 11. Developers backported 14 commits to stable branches of the Linux kernel repository. These branches provide long-term support for older versions. Backporting to them indicates that the Linux kernel developers agree that the corresponding faults are really significant.

Next, we discuss the ability of other approaches to identify the faults found by Klever.
Many faults (41 of 64) revealed in Linux 5.5 happen on error handling paths that are executed extremely rarely. Moreover, they are unlikely to be covered by testing and dynamic analysis tools. Most faults (52 of 64) manifest themselves when a complicated interaction with an environment takes place. This assumes a specific order of callback invocation depending on their return values as well as the specific initialization of their arguments. Besides, it may be necessary to comprehend the semantics of the called functions, e.g., when, after a successful invocation of one function, another one should be called. These cases may be pretty hard for lightweight static analysis tools. Nevertheless, certain tools may be able to find some of the faults revealed thanks to Klever after specific tuning.

We have not reported all detected faults to the Linux kernel developers yet, since this process occupies quite a lot of time. Nevertheless, we are going to report the remaining faults gradually.

#### Unknowns

Table 7 shows the number of _Unknowns_ due to failures of CPAchecker and other reasons. The share of _Unknowns_ reported by CPAchecker when checking memory safety is substantially higher than that for the remaining requirement specifications. There have been no considerable changes in the distribution of _Unknowns_ due to CPAchecker and other reasons over time, although the absolute number of _Unknowns_ increased, which was already explained at the beginning of this subsection.

Table 8 shows that timeouts are the most common reason for CPAchecker _Unknowns_, especially for memory safety. For Linux 5.12-rc3 their share decreased primarily due to more parsing failures. Users may wonder why we performed experiments with such small limits on computational resources per verification task. For instance, for verification tasks from the SV-COMP benchmark suite these limits are 3 times higher [5]. To justify our choice, we studied the effect of increasing the CPU time limit on the number of timeouts. Using a timeout of 15 minutes resulted in 5 more _Safes_ and one more _Unsafe_ for the requirements specification _drivers:clk1_, while originally there were about 3,300 _Safes_ and 14 _Unsafes_. The CPU time consumption of CPAchecker rose from 15 hours to 25 hours. Further increasing the CPU time limit gave similar improvements in verification results but further degradation in CPU time.

### Code Coverage

Table 9 presents current code coverage. It is reasonably high for particular subdirectories, and for some modules it is 100 %, while for some subdirectories and modules it is very low. This is because we did not develop environment model specifications for all types of modules, as was discussed in the previous section. Code coverage for Linux 5.12-rc3 does not vary much. This again demonstrates that Klever can be applied to newer versions of target programs without considerable changes (indeed, there were no changes at all for the given experiment).

### Verification Time

The overall time spent by Klever and the CPU time consumed by Klever components and CPAchecker are shown in Table 10. In total, it took about 7.4 days to get all the verification results. This time can be considerably reduced by using more machines or a more powerful machine for both the generation and solution of verification tasks. One can see that more time was necessary for Linux 5.12-rc3. The primary reason is that there are more modules and, thus, more verification tasks.
\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{**Subdirectory**} & \multicolumn{2}{c|}{**Linux 5.5**} & \multicolumn{2}{c|}{**Linux 5.12-rc3**} \\ & Line coverage, & Function coverage, & Line coverage, & Function coverage, \\ & KLOC & thousands & KLOC & thousands \\ \hline hid & 24/37 (64 \%) & 1.2/2.1 (58 \%) & 25/38 (65 \%) & 1.3/2.2 (59 \%) \\ \hline hwmon & 33/63 (52 \%) & 1.5/3.9 (38 \%) & 34/67 (51 \%) & 1.5/4.1 (37 \%) \\ \hline media & 114/354 (32 \%) & 5.7/19.6 (29 \%) & 120/368 (33 \%) & 6.0/20.0 (29 \%) \\ \hline mtd & 19/51 (37 \%) & 0.86/2.8 (31 \%) & 18/50 (36 \%) & 0.81/2.7 (30 \%) \\ \hline platform & 13/34 (37 \%) & 0.76/2.5 (30 \%) & 15/39 (38 \%) & 0.89/2.9 (31 \%) \\ \hline staging & 53/159 (33 \%) & 2.4/8.2 (29 \%) & 40/127 (32 \%) & 1.8/6.3 (28 \%) \\ \hline usb & 66/163 (40 \%) & 3.5/8.8 (40 \%) & 63/158 (40 \%) & 3.3/8.5 (39 \%) \\ \hline video & 43/95 (45 \%) & 1.8/4.6 (40 \%) & 48/94 (51 \%) & 1.9/4.6 (41 \%) \\ \hline **Total** & 365/957 (38 \%) & 17.8/52.7 (34 \%) & 364/941 (39 \%) & 17.5/51.7 (34 \%) \\ \hline \end{tabular} \end{table} Table 9: Code coverage.

The solution of verification tasks by CPAchecker when checking memory safety consumed the most CPU time. The generation of verification tasks by Klever components needed a rather considerable portion of the CPU time to check the correct usage of the Linux kernel API. It is worth noting that we launched Klever several times when checking requirement specifications devoted to the correct usage of the Linux kernel API. For that reason, some actions, like the generation of environment models, were duplicated during the generation of verification tasks. These actions are not very time-consuming, so if one checks these requirement specifications together, there may be up to a 10 % improvement in the overall time and the CPU time of Klever components.

\begin{table} \begin{tabular}{|l|c|c|} \hline **Type of time** & **Linux 5.5** & **Linux 5.12-rc3** \\ \hline \multicolumn{3}{|c|}{Memory safety} \\ \hline Overall time & 10 & 11 \\ \hline CPU time of CPAchecker & 52 & 53 \\ \hline CPU time of Klever components & 16 & 17 \\ \hline \multicolumn{3}{|c|}{Correct usage of the Linux kernel API} \\ \hline Overall time & 71 & 85 \\ \hline CPU time of CPAchecker & 223 & 232 \\ \hline CPU time of Klever components & 158 & 190 \\ \hline \end{tabular} \end{table} Table 10: Verification time in hours.

## 5 Related Work

To our knowledge, no existing tool automates the preparation of an arbitrary C program before verification, runs verification tools, processes verification results and provides means for their further analysis and improvement. Below, we consider several frameworks intended for the verification of specific software, such as device drivers or embedded software.

SDV is the best-known example of an application of the automatic software verification technique in practice [4]. It aims at checking the correct usage of the kernel API in Windows device drivers using SLAM, YOGI, and Q [3; 28]. There are also LDV Tools [38], DDVerify [37], and Avinux [34] intended for verification of Linux device drivers. These frameworks are based on CBMC [14] and CPAchecker [6]. As a result, hundreds of previously unknown faults have been found and acknowledged by developers.

CBMC is also known for various other applications, such as verification of TinyOS [10] and embedded software [35]. The authors deliver successful case studies as a proof of concept. There is an IDE for development of embedded software called _mbeddr_ that allows automatically running CBMC to check programs under development against a predefined set of safety properties [11]. The IDE also provides developers with nicely arranged results. However, _mbeddr_ is not intended for automated verification of programs that were not developed using it.
Another example of a successful integration of CBMC into the development process is the recent work devoted to verification of the AWS C Common library [13].

DC2 aims at verification of industrial software [23]. To limit the verification scope, it generates contracts relevant for finding such faults as memory leaks and array-bound overflows. If necessary, users can improve these contracts manually. Then DC2 runs the Varvel model checker. However, it is an in-house NEC research project, so it is not possible to estimate its applicability to various C programs in more detail.

## 6 Conclusion

The paper presents the ongoing work dedicated to applying automatic software verification tools to critical industrial C programs like operating system kernels and embedded software. We described the Klever software verification framework and demonstrated the results of its practical evaluation. We based Klever on solutions accepted by the SV-COMP community. Moreover, we keep in touch with verification tool developers to cooperate and to solve the most vital problems together, discussing the interface, providing feedback and contributing generated verification tasks to the competition benchmark suite.

Our experience clearly demonstrates that automatic software verification is a very promising area, since it enables finding faults in programs, many of which could hardly be detected by other software quality assurance techniques. The more effort is invested in related research, development and, especially, various applications, the greater the achievements that will be reached. We hope that one day automatic software verification techniques will become one of the best practices, along with testing and static analysis. Moreover, we expect that for some industries, like avionics, railways, and autonomous vehicles, the use of this technique will be required by appropriate standards.

## 7 Acknowledgments

We would like to express special thanks to Prof. Dr. Alexander K. Petrenko, who was a scientific supervisor of most of the related work, research and theses. Also, much gratitude is due to Alexey Khoroshilov and Vadim Mutilin for their great ideas and suggestions starting from the beginning of this project, to Pavel Shved, Alexander Strakh, Mikhail Mandrykin, Pavel Andrianov, Anton Vasilyev, Vitaliy Mordan and Denis Efremov for participating in the development of a Klever prototype as well as for improving auxiliary tools and CPAchecker, to Vladimir Gratinskiy for investing enormous efforts in the design and development of the largest and most user-facing Klever component, and to numerous students and users who made various contributions.
Automatic software verification tools help to identify hard-to-detect faults in programs checked against specified requirements non-interactively. They are also capable of proving program correctness under certain assumptions. These capabilities are vital for the verification of critical industrial programs such as operating system kernels and embedded software. However, since such programs can comprise hundreds or thousands of KLOC, obtaining useful verification results in reasonable time may not be possible when checking even slightly non-standard requirements. Moreover, existing tools do not provide widely adopted means for environment modelling, requirements specification, verification of many versions and configurations, and expert assessment of verification results. In this paper, we present the Klever software verification framework, designed to reduce the effort of applying automatic software verification tools.
2309.10056
Long-wave instabilities of sloping stratified exchange flows
We investigate the linear instability of two-layer stratified shear flows in a sloping two-dimensional channel, subject to non-zero longitudinal gravitational forces. We reveal three previously unknown instabilities, distinct from the well-known Kelvin-Helmholtz Instability (KHI) and Holmboe Wave Instability (HWI), in that they have longer wavelengths (of the order of 10 to $10^3$ shear-layer depths) and often slower growth rates. Importantly, they can grow in background flows with gradient Richardson number $\gg 1$, which offers a new mechanism to sustain turbulence and mixing in strongly stratified flows. These instabilities are shown to be generic and relatively insensitive to Reynolds number $\mathrm{Re}$, Prandtl number $\mathrm{Pr}$, base flow profile, and boundary conditions. The nonlinear evolution of these instabilities is investigated through a forced direct numerical simulation, in which the background momentum and density are sustained. The growth of long unstable waves in background flows initially stable to short wave instabilities causes a decrease in the local gradient Richardson number. This leads to local nonlinear processes that result in small-scale overturns resembling Kelvin-Helmholtz billows. Our results establish a new energy exchange pathway, where the mean kinetic energy of a strongly stratified flow is extracted by primary unstable long waves and secondary short waves, and subsequently dissipated into internal energy.
Lu Zhu, Amir Atoufi, Adrien Lefauve, Rich R. Kerswell, P. F. Linden
2023-09-18T18:09:53
http://arxiv.org/abs/2309.10056v1
###### Abstract

We investigate the linear instability of two-layer stratified shear flows in a sloping two-dimensional channel, subject to non-zero longitudinal gravitational forces. We reveal three previously unknown instabilities, distinct from the well-known Kelvin-Helmholtz Instability (KHI) and Holmboe Wave Instability (HWI), in that they have longer wavelengths (of the order of 10 to \(10^{3}\) shear-layer depths) and often slower growth rates. Importantly, they can grow in background flows with gradient Richardson number \(\gg 1\), which offers a new mechanism to sustain turbulence and mixing in strongly stratified flows. These instabilities are shown to be generic and relatively insensitive to Reynolds number Re, Prandtl number Pr, base flow profile, and boundary conditions. The nonlinear evolution of these instabilities is investigated through a forced direct numerical simulation, in which the background momentum and density are sustained. The growth of long unstable waves in background flows initially stable to short wave instabilities causes a decrease in the local gradient Richardson number. This leads to local nonlinear processes that result in small-scale overturns resembling Kelvin-Helmholtz billows. Our results establish a new energy exchange pathway, where the mean kinetic energy of a strongly stratified flow is extracted by primary unstable long waves and secondary short waves, and subsequently dissipated into internal energy.

stratified flows, linear stability analysis, long-wave instability, direct numerical simulation

Lu Zhu\({}^{1}\)†, Amir Atoufi\({}^{1}\), Adrien Lefauve\({}^{1}\), Rich R. Kerswell\({}^{1}\), P. F. Linden\({}^{1}\)

Footnote †: Email address for correspondence: lz447@cam.ac.uk

## 1 Introduction

The study of stratified flows has attracted considerable attention over the past few decades due to their importance in many environmental and industrial processes. In the oceans, stratification occurs due to differences in salinity and/or temperature, leading to mostly stably stratified flows. Turbulence in these flows plays a significant role in the transport of momentum and mass and is crucial in shaping the global climate (Linden, 1979; Riley & Lelong, 2000; Gregg _et al._, 2018; Caulfield, 2020). An interesting open question concerns the maintenance of turbulence and its associated irreversible turbulent mixing under strong stable stratification, which tends to suppress turbulence.

When stratification is relatively weak, stably-stratified flows can be linearly unstable. It is well known that linear shear instabilities, such as Kelvin-Helmholtz instability (KHI) (Hazel, 1972; Smyth _et al._, 1988) and Holmboe wave instability (HWI) (Holmboe, 1962), can cause transition of a laminar stratified flow to turbulence, inducing strong mass and momentum transport (Caulfield, 2021). Over the past 50 years, numerous studies have been carried out to understand these instabilities and their relation to mixing (Thorpe, 1968; Smyth _et al._, 1988; Carpenter _et al._, 2010; Salehipour _et al._, 2015; Zhou _et al._, 2017). In most of these studies, the density isopycnals are perpendicular to the direction of gravity, which does not explicitly drive the flows. However, in many natural systems, density isopycnals are not exactly perpendicular to gravity, in which case nonzero streamwise gravity forces come into play and may partially drive the flow.
One notable example is the internal tide interacting with the sloping bottom topography of the oceanic continental shelf (Garrett & Kunze, 2007). At a critical slope, the internal tide provides an additional energy production pathway that leads to turbulent mixing of temperature, salinity, and other tracers (Gayen & Sarkar, 2010). Similarly, many engineering flows occur along an inclined boundary. Examples can be found in building ventilation systems (Linden, 1999), where indoor/outdoor air is often exchanged through inclined ventilation ducts, producing mixing and dispersion of heat and indoor pollutants. In gas-cooled nuclear reactors, carbon dioxide and air are exchanged through inclined coolant ducts, which can result in the depressurization and damage of the reactors in case of failure (Leach & Thompson, 1975; Mercer & Thompson, 1975).

Studies on the influence of longitudinal gravitational forcing on the onset of turbulence in stratified exchange flows remain limited. One notable recent body of work is the Stratified Inclined Duct (SID) experiment (Meyer & Linden, 2014; Lefauve _et al._, 2019; Lefauve & Linden, 2020). These studies investigated the transition and turbulent mixing of the exchange flow in an inclined duct that connected two reservoirs with fluids at different densities or temperatures. To understand the mechanism of transition in SID, Lefauve _et al._ (2018) conducted a linear stability analysis using a base state extracted from the SID experiment. Subsequently, Ducimetiere _et al._ (2021) systematically investigated the three-dimensional unstable modes in inclined ducts, focusing on the effects of side wall confinement. These studies focused primarily on HWI (and secondarily on KHI), which have wavelengths comparable to the thickness of the shear layers. Interestingly, Ducimetiere _et al._ (2021) observed a secondary instability at significantly longer wavelengths than KHI and HWI and attributed it to the effect of the inclination angle. Recently, Atoufi _et al._ (2023) studied the mechanism of transition by applying shallow water equations as a diagnostic tool to analyse a new numerical database of SID (Zhu _et al._, 2023). They suggested that the instability of long shallow water waves (long-wave KHI in the presence of top and bottom solid boundaries) may cause turbulence in the SID. Although the longitudinal gravitational forcing was included in the numerical simulation data, it was not included explicitly in the shallow water model.

In this paper, we explore explicitly the impact of longitudinal gravitational forces on the instability of long waves and on potential new pathways toward turbulence, restricting ourselves to a two-dimensional geometry. In § 3, we examine the linear instabilities in inclined channels and conduct a thorough exploration of the parameter space. We identify three new families of long-wave instabilities, distinct from the well-known HWI and KHI, and map the regions of parameter space in which these long-wave instabilities dominate the flow. In § 4, we then investigate the evolution of these new instabilities by conducting two-dimensional forced direct numerical simulations (DNS), and discuss their impact on turbulence and energy transfers. Finally, we conclude in § 5.
## 2 Methodology

### Problem formulation and governing equations

In this section, we present the equations required for linear stability analysis (LSA) of a stratified exchange flow between two fluid layers having density \(\rho_{0}\pm\Delta\rho/2\) (where \(\rho_{0}\) is the reference density and \(0<\Delta\rho\ll\rho_{0}\) is the density difference) in a two-dimensional stratified inclined channel (SIC, see fig. 1(a)). Following the SID experimental literature, lengths are nondimensionalized by the half-channel height \(H^{*}\), velocity by the buoyancy-velocity scale \(U^{*}\equiv\sqrt{g^{\prime}H^{*}}\) (where \(g^{\prime}=g\Delta\rho/\rho_{0}\) is the reduced gravity), time by the advective time unit \(H^{*}/U^{*}\), pressure by \(\rho_{0}U^{*2}\), and density variations around \(\rho_{0}\) by \(\Delta\rho/2\), respectively. The non-dimensional continuity, Navier-Stokes and scalar governing equations under the Boussinesq approximation are \[\boldsymbol{\nabla}\cdot\mathbf{u} =0, \tag{1}\] \[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\boldsymbol{\nabla}\mathbf{u} =-\boldsymbol{\nabla}p+\frac{1}{\text{Re}}\nabla^{2}\mathbf{u}+\text{Ri}\,\rho\,\hat{\mathbf{g}},\] (2) \[\frac{\partial\rho}{\partial t}+\mathbf{u}\cdot\boldsymbol{\nabla}\rho =\frac{1}{\text{Re}\,\Pr}\nabla^{2}\rho, \tag{3}\] where \(\mathbf{u}=(u,v,w)\) is the non-dimensional velocity in the three-dimensional coordinate system \(\mathbf{x}=(x,y,z)\), where the \(x\)-, \(y\)- and \(z\)-axes are the longitudinal, spanwise and wall-normal directions of the channel, respectively. In this coordinate system gravity \(\mathbf{g}\) points downward at an angle \(\theta\) to the \(-z\) axis, i.e., \(\mathbf{g}=g\,\hat{\mathbf{g}}=g\,\left[\sin\theta,0,-\cos\theta\right]\), and \(p\) and \(\rho\) are the non-dimensional pressure and density, respectively. The dimensionless parameters are the Reynolds number \(\text{Re}\equiv H^{*}U^{*}/\nu\) (\(\nu\) is the kinematic viscosity), the Prandtl number \(\Pr\equiv\nu/\kappa\) (\(\kappa\) is the scalar diffusivity), and the Richardson number \(\text{Ri}\equiv g^{\prime}H^{*}/(2U^{*})^{2}=1/4\) (fixed here because of the buoyancy velocity scale).

### Formulation of linear stability analysis

We now apply a linear stability analysis (LSA) (Drazin & Reid, 2004; Smyth & Carpenter, 2019) to the SIC, noting that, in agreement with Squire's theorem, Lefauve _et al._ (2018) and Ducimetiere _et al._ (2021) have shown that the fastest-growing mode is two-dimensional (2D). We impose infinitesimal 2D perturbations on a 1D base state. The velocity, density, and pressure fields are thus decomposed as \[\boldsymbol{u} =\boldsymbol{U}+\boldsymbol{u}^{\prime}=[U(z),0,0]+[u^{\prime},0,w^{\prime}], \tag{4}\] \[p =P(z)+p^{\prime},\] (5) \[\rho =R(z)+\rho^{\prime}, \tag{6}\] where capital letters and superscript primes represent the mean and perturbation components of quantities, respectively.

Figure 1: (a) Schematic of the two-dimensional shear flow in a stratified channel inclined at an angle \(\theta\), and (b) base velocity \(U(z)\) and density \(R(z)\) profiles computed from (10) and (11).

A normal mode perturbation of the form \[\phi(x,z,t)=\hat{\phi}(z)\exp{(ikx+\eta t)}, \tag{7}\] is adopted. The base flows are obtained by numerically solving for the laminar exchange flow following Thorpe (1968), which will be introduced in § 2.3. Substituting (4)-(6) into (1)-(3) and linearising yields the same system as Lefauve _et al._ (2018), i.e.
\[\eta\left[\begin{array}{cc}\Delta&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{array}\right]\left[\begin{array}{c}\widehat{w}\\ \widehat{\rho}\end{array}\right]=\left[\begin{array}{cc}\mathcal{L}_{ww}&\mathcal{L}_{w\rho}\\ \mathcal{L}_{\rho w}&\mathcal{L}_{\rho\rho}\end{array}\right]\left[\begin{array}{c}\widehat{w}\\ \widehat{\rho}\end{array}\right], \tag{8}\]

where \(\mathbf{0}\) and \(\mathbf{I}\) are the zero and identity matrices, respectively, and

\[\mathcal{L}_{ww} =-\mathrm{i}kU\Delta+\mathrm{i}k\mathcal{D}^{2}U+\mathrm{Re}^{-1}\Delta^{2}, \tag{9}\] \[\mathcal{L}_{w\rho} =Ri\left(k^{2}\cos\theta-\mathrm{i}k\sin\theta\mathcal{D}\right),\] \[\mathcal{L}_{\rho w} =-\mathcal{D}R,\] \[\mathcal{L}_{\rho\rho} =-\mathrm{i}kU+(\mathrm{Re}\ \mathrm{Pr})^{-1}\ \Delta,\]

where \(\Delta=\mathcal{D}^{2}-k^{2}\) (with the operators \(\mathcal{D}=\partial/\partial z\) and \(\mathcal{D}^{2}=\partial^{2}/\partial z^{2}\)). At the top and bottom boundaries (\(z=\pm 1\)), no-slip and no-flux boundary conditions are applied for velocity and density, respectively. We also demonstrate the negligible effect of choosing a free-slip boundary condition for velocity in appendix A. To obtain the unstable modes, we solve the linear system (8) numerically using a second-order finite-difference discretization method described in Smyth & Carpenter (2019). The spatial resolution is chosen based on the sharpness of the interface and is \((150,150,250,400)\) grid points for \(\mathrm{Pr}=(1,\,7,\,28,\,70)\), respectively. A sensitivity analysis for the resolution ensured convergence of the results.

### 2.3 Base flows

The base state for density in our exchange flow is taken as a hyperbolic tangent (figure 1(b))

\[R(z)=-\tanh(z/\delta)=-\tanh(2\sqrt{\mathrm{Pr}}\,z). \tag{10}\]

The interfacial thickness is \(\delta=1/(2\sqrt{\mathrm{Pr}})\) to approximate the effect of diffusion (Smyth & Peltier, 1991). The typical model (e.g. Smyth _et al._, 1988) considers a shear layer driven by an arbitrary, controllable background shear. A similar procedure is applied to our SIC by modifying the laminar solution developed by Thorpe (1968) and imposing a background body force \(\mathcal{F}=-\gamma RiR\) (where \(\gamma\) is a variable that controls the magnitude of the force). This decouples the base velocity from the inclination angle in SIC, allowing for the exploration of the \(U-\theta\) space, as if the flow were influenced by arbitrary external tidal forces or pressure gradients. The mean velocity profile \(U(z)\) of the steady laminar exchange flow is obtained by integrating the 2D momentum equation

\[-\frac{\partial P}{\partial x}+Ri\sin\theta R+\frac{1}{\mathrm{Re}}\frac{\partial^{2}U}{\partial z^{2}}+\mathcal{F}=0, \tag{11}\]

where \(-\partial P/\partial x=0\) to satisfy the zero-flux condition of SIC. This yields the following laminar base state for the forced SIC

\[U(z)=-\mathrm{Re}\ Ri(\sin\theta-\gamma)I(z)+c_{1}z+c_{2}, \tag{12}\]

where

\[I(z;\mathrm{Pr})=\frac{z^{2}}{2}+(\ln 2)\,\delta z+\frac{\delta^{2}\,\mathrm{Li}_{2}\left(-\mathrm{e}^{2z/\delta}\right)}{2}, \tag{13}\]

where \(\mathrm{Li}_{2}\) is the polylogarithm function of order 2. The constants \(c_{1}\) and \(c_{2}\) are computed from the no-slip boundary conditions at the walls, \(U(z=\pm 1)=0\), and are

\[c_{1}=\frac{1}{2}\mathrm{Re}\ Ri\ (\sin\theta-\gamma)\ [I(1)-I(-1)], \tag{14}\]
\[c_{2}=\frac{1}{2}\mathrm{Re}\ Ri\ (\sin\theta-\gamma)\ [I(1)+I(-1)]. \tag{15}\]
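To make the procedure above concrete, the following is a minimal Python sketch: it evaluates the base profiles (10) and (12)-(15), using `mpmath` for the polylogarithm, and assembles the generalized eigenproblem (8)-(9) with second-order finite differences. The helper names and the simplified boundary conditions (\(\hat{w}=\hat{\rho}=0\) at the walls; the full no-slip condition also requires \(\mathcal{D}\hat{w}=0\), omitted here for brevity) are our own illustrative choices, not the authors' code.

```python
import numpy as np
import scipy.linalg
from mpmath import polylog

Re, Pr, Ri = 1000.0, 7.0, 0.25

def base_profiles(theta, gamma, N):
    """Base density (10) and velocity (12)-(15) on an interior grid z in (-1, 1)."""
    z = np.linspace(-1.0, 1.0, N + 2)[1:-1]          # interior points only
    delta = 1.0 / (2.0 * np.sqrt(Pr))
    R = -np.tanh(z / delta)
    I = lambda s: s**2 / 2 + np.log(2) * delta * s \
        + delta**2 * float(polylog(2, -np.exp(2 * s / delta))) / 2
    A = -Re * Ri * (np.sin(np.radians(theta)) - gamma)
    c1 = -A * (I(1.0) - I(-1.0)) / 2.0               # enforces U(1) = 0
    c2 = -A * (I(1.0) + I(-1.0)) / 2.0               # enforces U(-1) = 0
    U = A * np.array([I(s) for s in z]) + c1 * z + c2
    return z, U, R

def solve_modes(theta, gamma, k, N=150):
    """Eigenvalues eta of (8) at one wavenumber k (simplified wall conditions)."""
    z, U, R = base_profiles(theta, gamma, N)
    dz = z[1] - z[0]
    E = np.eye(N)
    D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dz)
    D2 = (np.diag(np.ones(N - 1), 1) - 2 * E + np.diag(np.ones(N - 1), -1)) / dz**2
    Lap = D2 - k**2 * E                              # Delta = D^2 - k^2
    th = np.radians(theta)
    Lww = -1j * k * np.diag(U) @ Lap + 1j * k * np.diag(D2 @ U) + Lap @ Lap / Re
    Lwr = Ri * (k**2 * np.cos(th) * E - 1j * k * np.sin(th) * D)
    Lrw = -np.diag(D @ R)
    Lrr = -1j * k * np.diag(U) + Lap / (Re * Pr)
    A = np.block([[Lww, Lwr], [Lrw, Lrr]])
    B = np.block([[Lap, np.zeros((N, N))], [np.zeros((N, N)), E]])
    eta = scipy.linalg.eig(A, b=B, right=False)      # generalized eigenvalues
    return eta[np.isfinite(eta)]

# Fastest-growing mode at one point of parameter space:
eta = solve_modes(theta=2.0, gamma=0.0077, k=1.5)
print("max growth rate:", np.max(eta.real))
```

Scanning such a solver over a range of \(k\) yields the dispersion relations discussed in § 3, with the fastest-growing mode given by the largest real part of \(\eta\) over all \(k\).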
This solution \(U(z)\) is sinusoidal-like (figure 1(b)), much like those observed in experiments and simulations (Lefauve _et al._, 2018; Zhu _et al._, 2023). The magnitude of the base velocity depends on \(\mathrm{Re}\), \(\theta\), and \(\gamma\), while its shape depends more on \(\delta\). In addition to the base state described by (12), we also conducted a LSA with a tanh-shaped velocity profile in appendix A, to compare with the standard stratified free-shear-layer model (Smyth _et al._, 1988). Those results were qualitatively consistent with those in the remainder of the paper, in terms of the existence of the same long- and short-wave families in SIC.

## 3 Results: new families of linear instabilities in SIC

Here we present the results from the LSA of SIC. We explore the \(\theta-\gamma\) parameter space and map out three new families of long-wave instabilities in addition to the well-known short-wave HWI and KHI. We also investigate the impacts of \(\mathrm{Re}\) and \(\mathrm{Pr}\) in order to further understand the importance of these newly discovered long waves in the laminar-turbulent transition.

### 3.1 Five families of instabilities

We first fix \((\mathrm{Re},\mathrm{Pr},\mathrm{Ri})=(1000,7,0.25)\) and vary the inclination angle \(\theta\) from \(-10^{\circ}\) to \(10^{\circ}\). When \(\theta>0\) the SIC slopes downward and the streamwise gravity energises the mean flow, and vice versa. We vary the forcing factor \(\gamma\), on which two important physical quantities depend: the interfacial background Richardson number \(Ri_{b}\), defined as the gradient Richardson number of the background flow at the density interface \(z=0\), i.e.,

\[Ri_{b}\equiv Ri\frac{\partial R/\partial z}{(\partial U/\partial z)^{2}}\Big{|}_{z=0}, \tag{16}\]

and the mass flux (or flow rate of buoyancy), which is given by

\[Q_{m}\equiv\frac{1}{2}\int_{-1}^{1}\ RU\ dz. \tag{17}\]

The Richardson number \(Ri_{b}\) is an important measure of the relative importance of stratification compared with shear, which is critical for stratified shear-flow stability (Caulfield, 2020). The mass flux \(Q_{m}\) is closely associated with the hydraulic control of exchange flows; a threshold value of \(Q_{m}\approx 0.5\) indicates the emergence of an internal hydraulic jump (Meyer & Linden, 2014; Lefauve _et al._, 2019), which Atoufi _et al._ (2023) demonstrated to be equivalent to a relatively long KHI (requiring the existence of top and bottom boundaries). Figure 2(a,c) shows the distribution of the growth rate and wave frequency of the fastest-growing modes in the parameter spaces \((\theta,Ri_{b})\) and \((\theta,Q_{m})\), respectively. Examining the contour lines reveals five distinct families of unstable modes, shown schematically in figure 2(b,d). To better understand these modes, we show in figure 3 the dispersion relations of five representative cases (marked by the symbols in figure 2) dominated by the five families of instabilities. The real \(\eta_{r}\) and imaginary \(\eta_{i}\) components of the eigenvalues denote the growth rate and wave frequency (and phase speed \(c=-\eta_{i}/k\)) of the unstable mode, respectively. Notably, two of these unstable modes, namely the Holmboe wave instability (HWI) and the Kelvin-Helmholtz instability (KHI), can be triggered without the presence of a slope (\(\theta=0\), see the vertical dotted black line).
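These two diagnostics are straightforward to evaluate from any discretised base state. A short sketch, reusing the hypothetical `base_profiles` helper from the sketch in § 2.3, with the sign convention that a statically stable interface (density decreasing upward) yields a positive \(Ri_{b}\):

```python
import numpy as np

def base_diagnostics(z, U, R, Ri=0.25):
    """Interfacial Richardson number (16) and mass flux (17) of a base state."""
    dUdz = np.gradient(U, z)
    dRdz = np.gradient(R, z)
    i0 = np.argmin(np.abs(z))                 # grid point closest to z = 0
    Ri_b = -Ri * dRdz[i0] / dUdz[i0] ** 2     # minus: R decreases upward (stable)
    Q_m = 0.5 * np.trapz(R * U, z)
    return Ri_b, Q_m

z, U, R = base_profiles(theta=2.0, gamma=0.0077, N=400)
print(base_diagnostics(z, U, R))
```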
The other three families of modes rely on the presence of a slope (\(\theta\neq 0\)) and are named the long-wave instability (LWI), the _downslope_ very-long-wave instability (VLWI-DS), and the _upslope_ very-long-wave instability (VLWI-US), based on their longer wavelengths (\(O(10\sim 10^{4})\)) compared to the 'short' HWI and KHI (\(O(10^{-1}\sim 10)\)). To the best of our knowledge, these unstable modes have not previously been investigated in the literature. We find that the features of these instabilities are generally insensitive to the shape of the base profile and to the boundary conditions, despite our adopting the base profile (12) and no-slip boundaries in this section. To support this, we show in appendix A that these instabilities are also found using a tanh-shaped base state and a free-slip boundary condition, as used by Smyth & Winters (2003). This suggests that these instabilities can exist in a wide range of stratified exchange flows along a slope.

Figure 2: Parameter space projections of the fastest growing mode: (a) the growth rate \(\eta_{r}\) (colours) and wave frequency \(\eta_{i}\) (lines) and (b) the schematics of the \(Ri_{b}-\theta\) parameter space; (c) the growth rate \(\eta_{r}\) and wave frequency \(\eta_{i}\) and (d) the schematics of the \(Q_{m}-\theta\) parameter space. Markers represent the five cases I, ..., V in table 1 for which the fastest growing mode is calculated. Black solid lines are the natural convective Thorpe base state (i.e. \(\mathcal{F}=0\)), and the horizontal dotted lines in (a) and (b) correspond to \(Ri_{b}=0.25\).

In the following sections, we characterise the five families of unstable modes in more detail.

#### 3.1.1 Holmboe wave instability (HWI)

The HWI (Holmboe 1962) occurs when the density interface is thinner than the shear layer and results from the resonance between vorticity waves at the edges of the shear layer and internal gravity waves at the density interface (Caulfield 1994; Carpenter _et al._ 2010). It gives rise to a pair of counter-propagating growing modes on either side of the density interface. In SIC, the regime dominated by HWI extends from \(\theta=-10^{\circ}\) to \(2^{\circ}\) and \(Ri_{b}=0.3\) to \(4\) (\(Q_{m}=0.1\) to \(0.3\)) in figure 2. The dispersion relation of HWI is shown in figure 3, where HWI has a pair of complex conjugate eigenvalues with non-zero phase speed \(c=-\eta_{i}/k\). Beyond the well-known feature that HWI can exist in horizontal flows at \(Ri_{b}\) values significantly higher than \(0.25\) (Miles 1961; Howard 1961), we notice that HWI can also be induced over a wide range of \(\theta\). More interestingly, the HWI-dominated regime gradually shrinks as \(\theta\) increases from negative values towards \(\theta\approx 2^{\circ}\), beyond which HWI ceases to exist. This indicates that increasing downward slopes have a suppressing effect on HWI, a phenomenon that has not previously been discussed in the literature and constitutes a new result.

#### 3.1.2 Kelvin-Helmholtz instability (KHI)

The KHI arises due to the interaction of vorticity waves at the two edges of finite shear layers, leading to a sequence of stationary vortex billows that roll up the denser fluid and cause significant mixing (Hazel 1972; Smyth _et al._ 1988). However, unlike these previous studies (with the exception of the recent Atoufi _et al._ (2023)), the KHI observed here in the SIC geometry is bounded by no-slip solid boundaries at \(z=\pm 1\).
In SIC, KHI has a zero phase speed and a characteristic wavelength of \(\pi\), consistent with previous studies (Smyth & Peltier 1991; Smyth & Carpenter 2019; Caulfield 2021). KHI dominates the flow at small \(Ri_{b}\lesssim 0.25\), in agreement with the Miles-Howard criterion. Interestingly, like HWI, the longitudinal gravity force can affect the regime of KHI. The upper bound of the KHI-dominant regime in figure 2(a) increases linearly from \(Ri_{b}=0.15\) to \(0.25\) as \(\theta\) increases from \(-10^{\circ}\) to \(10^{\circ}\). This suggests an enhancement of KHI by a downward slope, which we believe to be an additional new result.

Figure 3: Dispersion relations for typical cases: (a) positive growth rate \(\eta_{r}\) versus wave number \(k\) and (b) positive growth rate \(\eta_{r}\) versus wave frequency \(\eta_{i}\). Markers correspond to cases I, II, III, IV, and V from figure 2.

#### 3.1.3 Long-wave instability (LWI)

Of the three new instabilities that arise with slopes, the novel long-wave instability (LWI) dominates the flow at large downward slopes (\(\theta>4^{\circ}\)) and weak shear (strong stratification). In contrast to KHI and HWI, the LWI has a longer wavelength (\(O(10-10^{2})\)). Note that the LWI discussed in this paper is distinct from the long waves supported by shallow-water (hydraulic) theory (Lawrence, 1990; Atoufi _et al._, 2023), which are essentially KH waves with a long wavelength (satisfying the hydrostatic approximation) and which can exist at \(\theta=0\). LWI, on the other hand, specifically requires \(\theta\neq 0\). As depicted in figure 3, its phase speed is near-zero. This instability can be triggered at \(Ri_{b}\gg 1\), at which the shear-induced HWI and KHI vanish. Note that the presence of a mean shear can affect LWI by modifying its growth rate and phase speed. In terms of wave interaction, since vorticity waves vanish as \(Q_{m}\to 0\), we hypothesise that LWI is a result of the interaction between two gravity waves at the density interface whose symmetry is broken by the non-zero slope. However, the \(Q_{m}=0\) condition may be artificial when subjected to a non-zero slope, as it requires the gravity and pressure forces to be precisely cancelled by the external body force \(\mathcal{F}\) in (11). In practice, such a precisely balanced condition is expected to be rarely observed.

#### 3.1.4 Downslope very-long-wave instability (VLWI-DS)

The new VLWI-DS shares similarities with the LWI, in that it can exist at weak shear (strong stratification) and has a long wavelength. However, VLWI-DS dominates the flow under different conditions, namely when \(2^{\circ}<\theta<5^{\circ}\) and \(Ri_{b}>0.25\) (\(Q_{m}<0.5\)). It is also characterized by very long wavelengths of \(O(10^{2}-10^{3})\) (wave numbers \(k=O(10^{-3})\sim O(10^{-2})\)) and, interestingly, a pair of eigenmodes with complex conjugate phase speeds (figure 3). As with the HWI, we thus expect a pair of unstable VLWI-DS modes propagating with opposite phase speeds. The evolution of these unstable long waves and their connection to the onset of turbulence will be further discussed in § 4.

#### 3.1.5 Upslope very-long-wave instability (VLWI-US)

At a negative inclination angle (\(\theta<0\)), i.e. for upward slopes, another type of very-long-wave instability (VLWI-US) appears, with wavelengths \(\geqslant 10^{2}\) (wave numbers \(k<O(10^{-2})\)) and a zero phase speed (figure 3).
This instability is similar to LWI and VLWI-DS in that it requires a slope (\(\theta\neq 0\)) and can exist in a strongly stratified environment. Contrary to the usually significantly smaller growth rates of long waves compared with the corresponding short waves, the VLWI-US in fact has a growth rate comparable to that of HWI; this will be further discussed in § 3.3. Importantly, these long-wave instabilities have the potential to trigger and sustain turbulence in strongly stratified flows, which are _a priori_ regarded as stable. In § 4 we will show that these new instabilities can indeed destabilise the flow at \(Ri_{b}\gg 1\), eventually resulting in nonlinear bursting and a transition to turbulence and mixing. It is also important to note that figure 2 only shows the fastest growing modes, whereas multiple families of instabilities can coexist in certain regions, as shown in figure 3. As a result, the regions of instability overlap, and the neutral boundary of each instability cannot be identified from figure 2. In § 3.3, we will address this challenge by introducing an unsupervised clustering technique to isolate the neutral boundary of each family. Furthermore, in figure 2, we include a black line computed from \(\gamma=0\), i.e. the natural convective 'Thorpe' base state with forcing \(\mathcal{F}=0\). Under the parameters discussed so far (Re = 1000, Pr = 7), this line does not overlap with the regimes of the long-wave instabilities in parameter space. Nonetheless, it is important to note that different Re and Pr or boundary conditions can modify the regimes of the long-wave instabilities and how they interact with the base flow. An example is demonstrated in § 3.4 for Pr = 28.

### 3.2 Eigenfunctions

Further insights into these SIC instabilities can be gained by examining their eigenfunctions, expressed in (7), for representative cases (see figure 2 and table 1). In figure 4, we present the vorticity (first row) and density (second row) eigenfunctions of the fastest growing modes for cases I, ..., V, marked in figure 2, each of which represents one of the five branches of instabilities: HWI, KHI, LWI, VLWI-DS, and VLWI-US, respectively. Note that the \(x\)-axis in these cases has been re-scaled to compare modes having very different wavelengths. In figure 4, the wavelengths of HWI and KHI are \(\approx 4\), that of LWI is \(\approx 70\), that of VLWI-DS is \(\approx 300\), and that of VLWI-US is \(\approx 420\). The density eigenfunctions of all modes are concentrated near the interface, indicating the critical role of stratification. Near the walls, the intensity of the vorticity eigenfunctions is large due to the no-slip effects of the walls. (Note that, with a free-slip velocity boundary condition, the corresponding modes do not exhibit this intense vorticity at the wall; see appendix A.) In the shear layer, one of the HWI modes plotted here (left-propagating) exhibits two pairs of counter-rotating roll cells centred at \(z\approx 0.5\). For KHI, the vorticity and density eigenfunctions are highly concentrated at the interface, leaving a weaker bulk region in the rest of the channel. By contrast, the vorticity eigenfunctions of LWI, VLWI-DS, and VLWI-US fill the channel and are asymmetric with respect to \(z=0\).

### 3.3 Neutral boundaries of instabilities

As mentioned in § 3.1, different families of instabilities can coexist at the same parameters, making it difficult to determine the neutral boundary of each family from the distribution of fastest growing modes in figure 2.
To identify the different neutral boundaries we employ an unsupervised machine learning algorithm called DBSCAN (density-based spatial clustering of applications with noise) (Ester _et al._ 1996). The DBSCAN algorithm clusters the local maxima of the dispersion relations (figure 3(a)) of all the cases in figure 2 using \(k\), \(\eta_{r}\), and \(\eta_{i}\) as input variables. These variables are first logarithmically transformed and normalised before being fed into DBSCAN for clustering. Note that DBSCAN groups the local optimal modes of LWI and VLWI-US together in a single cluster due to their similarity in \(k\), \(\eta_{r}\), and \(\eta_{i}\). An additional step is taken to distinguish between the two branches by using the fact that LWI occurs when \(\theta>0\), while VLWI-US occurs when \(\theta<0\).

Figure 4: Eigenfunctions of the fastest growing modes for the 5 cases given in figure 2 and table 1: (a-b) I, HWI; (d-f) II, KHI; (g-i) III, LWI; (j-l) IV, VLWI-DS; and (m-o) V, VLWI-US. First row: vorticity eigenfunctions; second row: density eigenfunctions.

The clustering analysis in figure 5(b-f) reveals the regimes of the different families of instabilities, which could not have been identified by simply looking at the distribution of the fastest amplifying modes (figure 5(a)). The KHI regime (panel (c)) exactly matches the distribution of the fastest amplifying modes (panel (a)), while other modes (LWI, VLWI-DS, VLWI-US) that overlap with KHI are omitted. This suggests that KHI always has the fastest growth rate. For HWI (panel (b)), increasing \(\theta\) clearly decreases the growth rate while shrinking its 'territory', causing it to disappear when \(\theta>2^{\circ}\). When \(\theta\) is fixed, the fastest growing HWI appears at \(Ri_{b}\approx 1\), while the growth rate decreases as \(Ri_{b}\) departs from 1. The territory of HWI overlaps with that of VLWI-US (panel (f)), which can exist when \(\theta<-0.5^{\circ}\). The growth rates of these two modes are comparable, so that figure 5(a) cannot display the neutral boundaries of these two modes properly. As for LWI (panel (d)), it generally persists at large positive \(\theta\) except for \(Ri_{b}\lesssim 0.2\). The critical \(\theta\) for the appearance of VLWI-DS (panel (e)) is \(\approx 2.5^{\circ}\). It overlaps with KHI and LWI at large \(\theta\) and small \(Ri_{b}\), respectively, but is mostly omitted in the plot of the fastest growing mode due to its relatively small growth rate.

Figure 5: Clustering results: fastest growing mode of each family in the \(Ri_{b}-\theta\) parameter space: (a) fastest amplifying modes (FAM) of all families reproduced from figure 2(a), (b) HWI modes, (c) KHI modes, (d) LWI modes, (e) VLWI-DS modes, and (f) VLWI-US modes.

In general, these long-wave families of instabilities can persist across a wide range of \(Ri_{b}\), ranging from \(Ri_{b}\ll 0.25\) (especially for VLWI-DS and VLWI-US) to \(Ri_{b}\gg 1\). Consequently, we anticipate their widespread presence in sloping stratified exchange flows.
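A minimal sketch of this clustering step, assuming the local maxima of all dispersion relations have already been collected into flat arrays `k_max`, `gr_max` (\(\eta_r\)), `fr_max` (\(|\eta_i|\)) and `theta_max`; the `eps` and `min_samples` values are illustrative, not necessarily those used to produce figure 5:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def cluster_modes(k, gr, fr, eps=0.4, min_samples=10):
    """Cluster local dispersion-relation maxima on (log k, log eta_r, log |eta_i|)."""
    tiny = 1e-12                          # guard for stationary modes with eta_i = 0
    X = np.column_stack([np.log10(k), np.log10(gr), np.log10(fr + tiny)])
    X = StandardScaler().fit_transform(X)          # normalise each feature
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)

# labels == -1 marks DBSCAN noise.  The single long-wave cluster mixing LWI and
# VLWI-US is then split manually by the sign of theta, as described in the text:
labels = cluster_modes(k_max, gr_max, fr_max)
lw = 2                                             # say, the long-wave cluster id
labels = np.where((labels == lw) & (theta_max < 0), labels.max() + 1, labels)
```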
### 3.4 Effect of Reynolds and Prandtl numbers

In this section, we study the impacts of Re and Pr on these different families of instabilities.

#### 3.4.1 Reynolds number effects

Figure 6 shows the \(Ri_{b}-\theta\) parameter space of the fastest growing modes at a lower \(\mathrm{Re}=650\) (panel (a)) and a higher \(\mathrm{Re}=5000\) (panel (b)) than the standard case discussed in § 3.1. Generally, Re has a significant effect on all families of instabilities except KHI. The HWI-dominated regime expands to smaller (and slightly larger) \(Ri_{b}\) but shrinks in \(\theta\) with increasing Re. The largest \(\theta\) for HWI decreases from \(1.3^{\circ}\) to \(0.4^{\circ}\), indicating a stronger suppression effect by the slope. The long-wave families (LWI, VLWI-DS, and VLWI-US) still dominate the large-\(Ri_{b}\) region, and their boundaries approach \(\theta=0^{\circ}\) as Re increases. For instance, the leftmost VLWI-DS appears at \(\theta\approx 2^{\circ}\) for Re = 650, whereas it appears at \(\theta\approx 0.3^{\circ}\) for Re = 5000. Similarly, for VLWI-US, the right-most points change from \(\theta=-0.6^{\circ}\) at Re = 650 to \(\theta=-0.1^{\circ}\) at Re = 5000. It is anticipated that in the inviscid limit Re \(\rightarrow\infty\) the critical \(\theta\) will approach \(0^{\circ}\). Therefore, it is reasonable to speculate that these gravity-induced long waves may be generic in high-Re natural water bodies subjected to shear, stratification, and even the shallowest slopes.

Figure 6: Effect of Reynolds number: fastest growing mode projected onto the \(\text{Ri}_{b}-\theta\) space for (a) Re = 650, and (b) Re = 5000.

#### 3.4.2 Prandtl number effects

Figure 7 displays the \(Ri_{b}-\theta\) parameter space of the fastest growing modes at Pr = 1, 28, and 70, respectively, corresponding to increasingly sharper interfaces of the density base state, following (10). To ensure convergence, the grid resolution for the LSA was set to 150, 250, and 400 points, respectively. As Pr increases, the influence of the slope \(\theta\) on KHI becomes more significant, resulting in a wider upper boundary of KHI, which can be triggered at \(Ri_{b}>0.25\) for large downward slopes \(\theta\gtrsim 5^{\circ}\). Meanwhile, HWI is also significantly affected by Pr. At \(\Pr=1\), HWI does not appear due to the thick density interface determined by (10). However, as \(\Pr\) increases, the region of HWI expands significantly towards larger \(\theta\).

Figure 7: Effect of Prandtl number: fastest growing mode projected onto the \(\text{Ri}_{b}-\theta\) space for (a) Pr = 1, (b) Pr = 28, and (c) Pr = 70, respectively.

The long-wave families exist at all \(\Pr\). As \(\Pr\) increases from 1 to 28, the territory of the long waves converges towards \(\theta=0\). The changes in the territory become less significant from \(\Pr=28\) to \(70\), indicating a potential convergence of the wave regime at moderate \(\Pr\). However, due to the dominance of HWI at high \(\Pr\), the long-wave families are largely masked by the fastest growing HWI at \(Ri_{b}\lesssim 10\) in figure 7(c). Interestingly, at \(\Pr=28\), the profile of the Thorpe exchange flow (\(\mathcal{F}=0\) in (12)) passes sequentially through the HWI, VLWI-DS, LWI, and KHI dominated regimes. This provides an example where VLWI-DS and LWI can dominate Thorpe's SIC flow.

## 4 Nonlinear evolution of unstable modes

To gain insight into the subsequent nonlinear evolution of these unstable modes we conduct forced two-dimensional direct numerical simulations (DNS). We describe our DNS in § 4.1 and discuss the evolution and breakdown of the unstable flows in § 4.2. The instantaneous flow kinematics of these unstable waves and the mechanisms leading to their breakdown are discussed in § 4.3 and § 4.4, respectively.
### 4.1 Forced DNS formulation

To simulate the growth of linearly unstable perturbations on the desired base state, we add to the right-hand sides of (2) and (3) the two forcing terms

\[F_{v}=-\frac{1}{\mathrm{Re}}\frac{\partial^{2}U}{\partial z^{2}}-Ri\sin\theta R,\qquad F_{\rho}=-\frac{1}{\mathrm{Re}\,\mathrm{Pr}}\frac{\partial^{2}R}{\partial z^{2}}, \tag{18}\]

respectively. In this way, the mean velocity and density of the DNS are forced towards the targeted base profiles \(U(z)\) and \(R(z)\). These terms can be regarded as enforcing a pressure-driven exchange flow under a sustained stratification. Similar approaches that apply body forces to stratified flows were introduced in Taylor _et al._ (2016) and Smith _et al._ (2021). We perform the simulations using the open-source solver Dedalus (Burns _et al._, 2020), employing a Fourier-Chebyshev pseudo-spectral scheme for spatial discretisation and a third-order, four-stage diagonally-implicit+explicit Runge-Kutta scheme (Ascher _et al._, 1997) for time stepping. We imposed periodic boundary conditions in the streamwise \(x\) direction, while we applied no-slip and no-flux boundary conditions for velocity and density, respectively, at the solid walls at \(z=\pm 1\), as in the LSA. The streamwise length \(L_{x}\) of the channel was set equal to the wavelength of the fastest growing mode, while the channel height \(L_{z}\) was fixed at \(2\). We employed a uniform grid in the \(x\) direction and a Chebyshev grid in the \(z\) direction. The simulation resolution was determined by the geometrical and physical parameters of the problem. We initialised the simulations by superimposing on the base state the eigenfunctions of the LSA unstable modes with a perturbation magnitude \(\zeta\). The parameters of the production runs are listed in table 1.
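For reference, the forcing terms (18) are simple to evaluate from the discretised base profiles; the sketch below does this with `numpy` finite differences (in the actual Dedalus setup the terms would be added symbolically to the equations, so this is only an illustration):

```python
import numpy as np

def body_forces(z, U, R, theta, Re=1000.0, Pr=7.0, Ri=0.25):
    """Evaluate the forcing terms (18) that hold the base state in place."""
    d2U = np.gradient(np.gradient(U, z), z)   # second derivative of U(z)
    d2R = np.gradient(np.gradient(R, z), z)   # second derivative of R(z)
    F_v = -d2U / Re - Ri * np.sin(np.radians(theta)) * R
    F_rho = -d2R / (Re * Pr)
    return F_v, F_rho
```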
### 4.2 Temporal evolution

In this section, we focus on the temporal evolution of the fastest growing modes of each instability family, cases I, II, III, IV, and V, as marked in figure 2. Figure 8 shows the temporal behaviour of the unstable modes through the time series of the mass flux \(Q_{m}(t)\) (17) and the spatially averaged squared vertical perturbation velocity \(\langle w^{2}\rangle(t)\), where \(\langle\cdot\rangle\) denotes averaging over \(x,z\). The magnitude \(\zeta\) of the perturbation was chosen differently for each mode in order to obtain a reasonably long linear growth period. The forcing magnitude \(\gamma\) is determined so that the base velocity matches the selected cases in figure 2. The exchange flow is simulated by forcing the background flow in time using (18) and allowing the perturbations to grow. In figure 8(a), the background state is controlled by the body forces (18) so that \(Q_{m}\) initially remains constant and consistent with the targeted base state (as marked in figure 2) until the perturbations are significantly amplified and the flow enters the nonlinear stage. This initially constant \(Q_{m}\) value indicates the effectiveness of the forcing method in maintaining a sustained background state before the intense nonlinear dynamics set in. The evolution of the disturbance amplitudes is shown in figure 8(b), with all cases exhibiting a clear exponential growth period for \(w^{2}\), with growth rates matching the corresponding linear unstable modes. For KHI, LWI, and VLWI-US, following the exponential growth period, an intense nonlinear bursting process is caused by the breakdown of the primary waves, leading to intense mixing and changes in \(Q_{m}\) and \(w^{2}\). In contrast, the sudden changes in \(Q_{m}\) do not appear for HWI and VLWI-DS, since their primary waves do not break down. HWI and VLWI-DS have a pair of conjugate modes, represented by oscillating \(w^{2}\) profiles, due to the synchronization of complex-conjugate modes, as discussed in Yang _et al._ (2022). Interestingly, after the nonlinear bursting at \(t=1250\), the nonlinear HWI still maintains the oscillating pattern (Lefauve _et al._, 2018).

The time series of \(Q_{m}\) and \(\ln\langle w^{2}\rangle\) pinpoint the critical time when the nonlinear effects become prominent. Specifically, this occurs when \(Q_{m}\) deviates from its constant level or when \(\ln\langle w^{2}\rangle\) no longer shows exponential growth after reaching a certain amplitude. Note that the critical amplitude for nonlinear bursting remains independent of the initial amplitude of the perturbations. However, it varies for each individual unstable mode, as illustrated in figure 8. The critical time may vary depending on the particular unstable mode, the growth rate, and the magnitude of the initial perturbation.

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline Instability & Case & Re & Pr & \(\kappa\) & \(\theta\) (deg.) & \(\gamma\) & \(k\) & \(\zeta\) & \(L_{x}\) & \(N_{x}\times N_{z}\) \\ \hline HWI & I & & & 0.5 & -0.0028 & 1.5 & 0.2 & 4.2 & \(320\times 144\) \\ KHI & II & 1000 & 7 & 2 & 2 & 0.0077 & 1.5 & \(10^{-6}\) & 4.2 & \(320\times 144\) \\ LWI & III & & & 6 & 0.093 & 0.089 & 0.01 & 70.5 & \(960\times 144\) \\ VLWI-DS & IV & & & 3 & 0.041 & 0.02 & 0.5 & 314.9 & \(6000\times 144\) \\ VLWI-US & V & & & -6 & -0.125 & 0.02 & 0.5 & 444.7 & \(6000\times 144\) \\ \hline \hline \end{tabular} \end{table} Table 1: Numerical parameter values used for the DNS runs.

Figure 8: Time evolution of (a) mass flux \(Q_{m}\) and (b) logarithm of the squared vertical velocity \(\ln\langle w^{2}\rangle\) for the fastest growing modes in table 1. The slopes of the growth for the HWI, KHI, LWI, VLWI-DS, and VLWI-US modes are 0.0038, 0.15, 0.018, 0.0017, and 0.0089, respectively, consistent with \(2\eta_{r}\) of the corresponding unstable modes in the LSA (0.0037, 0.15, 0.018, 0.00018, and 0.0092).

Figure 9 shows the \(x-t\) diagrams of \(\ln\langle w^{2}\rangle_{z}\), where \(\langle\cdot\rangle_{z}\) indicates \(z\)-averages. In case HWI (panel (a)), we observe left-going waves in the spatial-temporal diagram, while its conjugate pair is absent, as only one mode of the pair is imposed as the initial perturbation in the DNS. Nonlinear effects become significant at \(t\approx 1250\) (\(\ln\langle w^{2}\rangle\approx-9\)) owing to a relatively small growth rate (\(0.0038\)), as indicated by the saturation of the exponential growth of \(w^{2}\) in figure 8(b). Interestingly, the spatial-temporal pattern of HWI does not change significantly after \(t\approx 1250\) in figure 9(a). This implies that nonlinear effects only halt the linear growth of the HW structures, which maintain their forms as nonlinear HW, as observed in experiments and nature (Meyer & Linden, 2014; Cudby & Lefauve, 2021). By contrast, KHI generates strong secondary instabilities (Mashayek & Peltier, 2012) after the onset of nonlinear effects at \(t\approx 200\) (\(\ln\langle w^{2}\rangle\approx-5\)). Consequently, the KH billows break up, leading to highly chaotic flow stages with small-scale structures.
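The diagnostics plotted in figures 8 and 9 reduce to simple array operations on DNS snapshots. A sketch, assuming fields sampled on a uniform grid (the Chebyshev grid in \(z\) would require the matching quadrature weights):

```python
import numpy as np

def mass_flux(rho, u, z):
    """Q_m(t): half the z-integral of the x-averaged rho*u, cf. (17)."""
    return 0.5 * np.trapz(np.mean(rho * u, axis=0), z)    # axis 0 is x

def w2_xz(w, z):
    """<w^2>(t), the x,z-average used in figure 8(b)."""
    return np.trapz(np.mean(w ** 2, axis=0), z) / (z[-1] - z[0])

def w2_z(w, z):
    """<w^2>_z(x,t), the z-average plotted in the x-t diagrams of figure 9."""
    return np.trapz(w ** 2, z, axis=1) / (z[-1] - z[0])
```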
In figure 9(c-e), the \(x-t\) diagrams of \(\ln\langle w^{2}\rangle_{z}\) reveal that for the three new families of long waves, small-scale structures emerge in the latter stages of the transitions, characterised by highly fluctuating contour lines. In the case of LWI (figure 9(c)), the onset of an intense chaotic flow period occurs at \(t=710\), during which small-scale structures are initially generated at \(x=25\) and \(x=55\), where \(\ln\langle w^{2}\rangle_{z}\) of the linear wave peaks. These structures then propagate towards the quiet regions and ultimately trigger a disorganised flow field across the channel. Interestingly, we have observed from figure 8(a) that the nonlinear effects set in at \(t=630\) (\(\ln\langle w^{2}\rangle\approx-14\)), as \(Q_{m}\) significantly deviates from its original constant level. At this stage, the two peaks of the linear disturbance approach each other while the contour curves twist. The \(\ln\langle w^{2}\rangle_{z}\) distribution no longer maintains its shape as in the linear growth period. As we will show later, this nonlinear dynamics is the breakdown of the long waves.

In case VLWI-DS, as shown in figure 9(d), the unstable wave moves leftward and grows exponentially until nonlinear dynamics set in at \(t=2860\) (\(\ln\langle w^{2}\rangle\approx-19\)), which is identified by a jump in \(\langle w^{2}\rangle(t)\) in figure 8(b). Small-scale waves/structures are formed at a local peak of the long-wave \(w^{2}\), and propagate both leftward and rightward, creating strong mixing. Despite the intense bursting of the flow, the long wave does not break down as LWI does, presumably due to its low growth rate. It continues to propagate at the same phase speed as the linear wave; the energy that was previously used to amplify the long wave is then fed to the small-scale waves, which eventually break down and dissipate, allowing the long wave to persist for a long period of time and propagate over a long distance.

Figure 9: Spatial-temporal diagrams of \(\ln\langle w^{2}\rangle_{z}\) from the nonlinear simulations of (a) HWI, (b) KHI, (c) LWI, (d) VLWI-DS, and (e) VLWI-US. The black solid lines indicate the times of visibly nonlinear dynamics identified in figure 8.

For VLWI-US (shown in figure 9(e)), local nonlinear bursting and small-scale structures are directly created at \(t=500\) (\(\ln\left\langle w^{2}\right\rangle\approx-16\)) and \(x\approx 150\) and \(350\) on top of the long wave. Similar to VLWI-DS, a preliminary breakdown of the long wave is not observed. Soon after, intense secondary instabilities fill the entire channel and the long waves are no longer distinguishable. In summary, the evolution of these linear long waves eventually leads to the appearance of nonlinear dynamics and intense secondary short-wavelength structures. The formation of these small-scale structures is a result of perturbation amplification, which alters the base state and allows the growth of short-wave instabilities. We will explore this mechanism in more detail in §§ 4.3-4.4.

### 4.3 Features of flow kinematics

In this section we present instantaneous flow fields corresponding to the key stages of evolution of the long-wave instabilities. The kinematics of the short waves, i.e., HWI and KHI, have been well documented in the literature (Smyth & Winters, 2003; Salehipour _et al._, 2015; Mashayek & Peltier, 2012, 2013; Lefauve _et al._, 2018), and will not be repeated here.
Figure 10 shows snapshots of the total density \(\rho=R+\rho^{\prime}\) (colours) and vertical velocity \(w^{\prime}\) (lines) at four stages of LWI. We find two stages of nonlinear breakdown, corresponding to the breakdown of the long wave and the generation of KH-like overturns. At the linear stage (figure 10(a)), the impacts of the density perturbation on the mean are barely observable and the interface is thin and flat. Later, the growth of LWI raises and lowers the interface on the left- and right-hand sides of \(x=40\), creating a large-scale jump (figure 10(b)). Meanwhile, \(w^{\prime}\) becomes localized at \(x=40\). Further amplification of the unstable mode breaks down the jump, generating a chaotic region at \(t=700\). At this stage, the instability is no longer linear, as demonstrated in § 4.2. At \(t=760\), a series of short overturns resembling KH billows are formed inside and on the two sides of the chaotic region at \(x=40\). These waves propagate away from the chaotic region and may eventually lead to (two-dimensional) turbulence.

Figure 10: Nonlinear LWI: density (colour) and vertical velocity (lines) snapshots of the forced DNS at \(K=0.01\) at different time instances: (a) \(t=400\), (b) \(t=600\), (c) \(t=700\), and (d) \(t=760\).

Figure 11 shows snapshots of the flow fields at three stages of VLWI-DS. From \(t=1500\) (panel a) to \(t=2500\) (panel b), VLWI-DS amplifies, while light-blue (e.g., upper layer: \(150<x<200\)) and light-red (e.g., bottom layer: \(250<x<300\)) regions become distinguishable, indicating the enhancement of mixing in these regions, which acts to dissipate the energy injected by gravity. As the base flow is held fixed by the forcing, the mixing is attributed to the amplification of VLWI-DS. At \(t=2900\) (panel c), further growth of the long wave induces intense KH-like overturns near the leading edge of VLWI-DS, characterised by the strong fluctuation of \(w^{\prime}\) in the range \(x=175\sim 250\). These overturns create extra dissipation and mixing of the flow, acting to balance the extra kinetic energy supplied by gravity. In contrast to the KH-like overturns in LWI, the overturns can be centred within the bulk flow of each layer in addition to the interface (figure 11(d)). This is because the propagation of the leading edge of the long waves (dark blue region at \(x=180\)) into the mixed region (light blue region at \(x<180\)) creates a weak interface between the denser and lighter regions inside the flow layer.

Figure 11: Nonlinear VLWI-DS: density (colour) and vertical velocity (lines) snapshots of the forced DNS at \(K=0.5\): (a) \(t=1500\), (b) \(t=2500\), and (c) \(t=2900\). An enlarged plot of panel (c) is shown in panel (d).

Finally, in the VLWI-US case (figure 12), the amplification of the stationary waves directly induces localized KH-like billows, characterised by strong fluctuations of \(w^{\prime}\) in panel (c), at the interface without first breaking down as in LWI. This behaviour may be due to the slower growth rate, which prevents the formation of the distinct nonlinear 'jump' observed in LWI. As discussed in § 4.2, the evolution of these long-wave families eventually leads to intense bursting processes that form strong small-scale KH-like overturns. These overturns are responsible for dissipating the kinetic energy injected by a positive slope that cannot be completely balanced by the dissipation of the long waves. Note that in all the long-wave cases, a
short-wave instability (KHI and HWI) does not initially exist according to the LSA in § 3.1. In the next section, we study how these short-wave KH-like overturns are induced by the nonlinear long waves.

Figure 12: Nonlinear VLWI-US: density (colour) and vertical velocity (lines) snapshots of the forced DNS at \(K=0.1\): (a) \(t=600\), and (b) \(t=888\). An enlarged plot of panel (b) is shown in panel (c).

### 4.4 Breakdown mechanism

In § 3.1, we showed that the KHI can only occur when \(Ri_{b}\lessapprox 0.25\). In the new long-wave cases considered in this study, \(Ri_{b}\gg 1\), hence KHI cannot be triggered. Instead, KH-like overturns are formed by the nonlinear evolution of these long waves. To understand the cause of the formation of the KH-like overturns, we computed the gradient Richardson number \(Ri_{g}\) at the total density interface \(\rho=0\), which is defined as

\[Ri_{g}(x,t)\equiv Ri\frac{\partial\rho/\partial z}{(\partial u/\partial z)^{2}}\bigg{|}_{\rho=0}, \tag{19}\]

where we recall that \(Ri\equiv 1/4\) (see § 2.1). Note that the field \(Ri_{g}(x,t)\) is based on the total velocity \(u\) and density \(\rho\), and thus differs from the constant \(Ri_{b}\) defined in (16), which is based on the initial base flow profiles \(U\) and \(R\). In figure 13, we show the \(x-t\) diagrams of \(Ri_{g}\), with \(w^{2}\) contour lines superimposed. In general, as the unstable long waves grow, the density gradient can decrease due to diffusivity, which thickens the mixing layer. Meanwhile, the velocity gradient can increase as the perturbations are amplified. This results in a decreasing interfacial \(Ri_{g}\), until it drops below 0.25 (shades of blue), with which small-scale structures are associated.

For each individual long-wave family, the process is slightly different. LWI (figure 13(a)) has two stages of nonlinear breakdown. The first stage appears at \(t=680\) when a 'jump' is formed at \(x=45\). This jump changes the density interface and generates two low-\(Ri_{g}\) regions separated by a high-\(Ri_{g}\) region. In these low-\(Ri_{g}\) regions, \(Ri_{g}\) continuously decreases due to the amplification of the long waves and eventually falls below 0.25, which potentially allows the growth of the secondary KHI in these regions. Finally, overturns are formed in these regions, leading to the second stage of nonlinear breakdown. Similarly, the amplification of VLWI-DS (figure 13(b)) also causes a low-\(Ri_{g}\) region that travels along with the waves. As soon as \(Ri_{g}<0.25\), intense overturns are formed in this region and later contaminate the entire duct. For VLWI-US, the overturns are first formed at the edges of the low-\(Ri_{g}\) region (\(170<x<330\)). The close relation between the low-\(Ri_{g}\) regions and the onset of the nonlinear short waves strongly suggests that the overturns are a consequence of the decrease of the local \(Ri_{g}\) caused by the nonlinear evolution of the initially long waves.

Figure 13: Spatial-temporal diagrams of the gradient Richardson number \(Ri_{g}\) at the density interface in forced DNS for (a) LWI, (b) VLWI-DS, and (c) VLWI-US. The colour maps show the values of \(Ri_{g}\), while the lines represent \(\ln\langle w^{2}\rangle_{z}\).

From an energy budget perspective, the formation of short-wave overturns in these long-wave simulations allows for more efficient dissipation of the kinetic energy fed by external forces.
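A sketch of how (19) can be evaluated from total DNS fields \(u(x,z)\) and \(\rho(x,z)\), again with the sign convention that a statically stable interface (density decreasing upward) gives a positive \(Ri_{g}\):

```python
import numpy as np

def interfacial_Ri_g(u, rho, z, Ri=0.25):
    """Gradient Richardson number (19) at the rho = 0 interface,
    one value per x column of the total fields u(x, z), rho(x, z)."""
    dudz = np.gradient(u, z, axis=1)
    drdz = np.gradient(rho, z, axis=1)
    nx = u.shape[0]
    Ri_g = np.empty(nx)
    for i in range(nx):
        j = np.argmin(np.abs(rho[i]))          # grid point nearest the interface
        Ri_g[i] = -Ri * drdz[i, j] / dudz[i, j] ** 2
    return Ri_g

# overturn_mask = interfacial_Ri_g(u, rho, z) < 0.25  # where KH-like overturns can grow
```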
In figure 14(a), we illustrate the pathways of turbulent kinetic energy \(K^{\prime}\), given schematically by

\[\partial_{t}K^{\prime}=\Phi^{K^{\prime}}+P-B-\epsilon, \tag{20}\]

where \(P\), \(\epsilon\), \(B\), and \(\Phi^{K^{\prime}}\) represent the production, dissipation, buoyancy flux, and transport terms of \(K^{\prime}\). The reader may refer to Caulfield (2021) and Lefauve & Linden (2022) for the definitions and a more comprehensive discussion of the kinetic energy budget of stratified shear flows. When \(Ri_{b}\gg 0.25\), initially only the long waves are allowed to grow in the strongly stratified flow, gaining energy from the mean flow through the production and buoyancy terms and losing it through dissipation. As the long waves are amplified, local shear is created and amplified by the growing velocity perturbations, leading to a decrease of the local \(Ri_{g}\). As \(Ri_{g}\lesssim 0.25\), the necessary condition for the growth of short waves (mostly KHI here) is satisfied. The short waves then grow, extracting \(K^{\prime}\) from the long waves and dissipating it to internal energy. This opens a new energy pathway that allows flows with strong stratification (large \(Ri_{b}\)) to dissipate energy by creating small-scale (turbulent) structures. When \(Ri_{b}\ll 0.25\) (figure 14(b)), long waves can coexist with short waves (e.g. case IV in figure 3) and may contribute to the energy dissipation. But they are often significantly weaker than the short waves, since short waves tend to have a faster growth rate. Meanwhile, the short waves directly gain most of the kinetic energy from the mean flow and convert it to internal energy. We also note that the turbulence created by these unstable waves can also induce irreversible mixing which, in turn, contributes to the production of internal energy.

Figure 14: Pathways of turbulent kinetic energy in sloping exchange flows under (a) strong stratification \(Ri_{b}\gg 0.25\), where only very long waves are unstable, and (b) weaker stratification \(Ri_{b}<0.25\), where both short and long waves coexist.

## 5 Conclusions

In this paper, we examined the effects of longitudinal gravitational forces on the stability of two-layer stratified exchange flows by conducting linear stability analyses and nonlinear forced DNS in a sloping channel with solid top and bottom boundaries. In addition to the well-known Holmboe and Kelvin-Helmholtz instabilities, we revealed the existence of three new families of long-wave instabilities subject to non-zero gravitational forces (\(\theta\neq 0\)):

* Long-wave instability (LWI), with wavelengths of the order \(10-100\) channel depths (wave number \(k=O(10^{-2})\sim O(10^{-1})\)) and a near-zero wave speed;
* Downslope very-long-wave instability (VLWI-DS), with wavelengths of the order 1000 channel depths (wave number \(k=O(10^{-3})\sim O(10^{-2})\)), a non-zero wave speed, and complex conjugate eigenmodes implying travelling waves;
* Upslope very-long-wave instability (VLWI-US), with wavelengths \(\geqslant 100\) channel depths (wave number \(k<O(10^{-2})\)) and a near-zero wave speed.

The LWI and VLWI-DS exist at a positive (favourable) slope, where the along-slope component of gravity reinforces the pressure gradient, while VLWI-US emerges under a negative (adverse) slope condition. Interestingly, their onset is largely independent of the base flow speed.
As a result, they can be triggered even at very high background gradient Richardson numbers \(Ri_{b}\gg 1\), and can induce chaos and sustain (two-dimensional) turbulence and mixing in strongly stratified fluids. In a weakly stratified flow (low \(Ri_{b}\)), they can also coexist with the short-wave instabilities (KHI and HWI), but they generally have a lower growth rate. The short-wave HWI and KHI also exhibit interesting features under non-zero slopes. Increasing \(\theta\) tends to suppress the HWI regime while enhancing the KHI. Moreover, the neutral boundary of KHI increases linearly from \(Ri_{b}=0.15\) at \(\theta=-10^{\circ}\) to \(0.25\) at \(\theta=10^{\circ}\).

The long-wave families appear under broad flow conditions. To explore their dependence on flow parameters, we varied the Reynolds number Re, the Prandtl number Pr, the base flow, and the boundary conditions. While increasing Re does not significantly affect KHI, it does enhance the other instabilities. The range of HWI expands to larger \(Ri_{b}\), while the range of the long-wave instabilities approaches \(\theta=0\). Therefore, it can be anticipated that as \(\mathrm{Re}\rightarrow\infty\) (as is often the case in natural flows), the critical slope required to trigger these long-wave instabilities approaches zero, \(|\theta|\to 0\). Increasing Pr, or equivalently decreasing the thickness of the density interface of the base flow, causes the unstable range of HWI to expand towards larger \(\theta\) and smaller \(Ri_{b}\). Meanwhile, the range of the long-wave instabilities slowly approaches \(\theta=0\), indicating that these long waves can exist in both water (with Pr ranging from 7 to 700) and air (\(\mathrm{Pr}\approx 1\)). It should be noted that these instabilities are not limited to the sine-like base state and the no-slip boundary conditions used in this study. Instead, they can also be triggered with, e.g., a tanh-shaped base velocity profile and free-slip (but impenetrable) velocity boundary conditions.

Finally, we studied the nonlinear evolution of the different instabilities in the inclined channel and their connections to turbulence using two-dimensional forced DNS that maintain the base states. For all of the long-wave instabilities, the evolution eventually led to a nonlinear bursting process with significant small-scale secondary KH-like overturns and mixing. Specifically, the LWI exhibits two nonlinear stages, in which an initial breakdown of the long waves is followed by a secondary bursting process, creating multiple intense KH-like overturns. For VLWI-DS and VLWI-US, the long waves do not break down. Instead, they directly alter the base states and induce localized small-scale overturns. The evolution of these long-wave instabilities results in a decrease in the density gradient and an increase in the shear, which in turn reduces the local gradient Richardson number \(Ri_{g}\). Our analysis reveals that the appearance of KH-like overturns is highly correlated with a locally low \(Ri_{g}\) approaching the critical threshold of 0.25 (below which we find the KHI), substantiating the interpretation of these structures as localised KH-like overturns. From a turbulent kinetic energy budget perspective, a new energy pathway allows the transfer of kinetic energy from the mean flow to the long waves (linearly) and then to the short waves (nonlinearly), eventually leading to the dissipation of turbulent kinetic energy, under conditions where the short waves are linearly stable.
The circumstances under which turbulence can persist in strongly stratified flows remain the subject of a fascinating debate within the community (Caulfield 2021). We demonstrated that weakly unstable (very) long waves may trigger turbulence and mixing after long periods of time, even under initially very strongly stratified conditions (\(Ri_{b}\gg 1\)). These results have particular relevance for high-\(\mathrm{Re}\) flows in rivers (Yoshida _et al._ 1998) and straits (Gregg & Ozsoy 2002), or any natural flow having even very shallow slopes \(\theta\approx 0\). A quantitative investigation of the turbulent transition and mixing associated with these long waves would require three-dimensional direct numerical simulations, an endeavour left for future work.

We acknowledge the ERC Research and Innovation Grant No 742480 'Stratified Turbulence And Mixing Processes' (STAMP). A. L. acknowledges a Leverhulme Trust Early Career Fellowship and a NERC Independent Research Fellowship (NE/W008971/1). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.

## Declaration of interests

The authors report no conflict of interest.

## Appendix A Linear stability analysis with free-slip boundary condition

To investigate the potential impacts of the base flow shape on the instabilities, we perform a LSA with a tanh-shaped density base state (10) and a tanh-shaped base velocity

\[U(z)=\gamma\tanh(\iota z), \tag{21}\]

where \(\iota=1.5\kappa\sqrt{Pr}\) defines the thickness of the velocity base profile. A free-slip boundary condition for the velocity is also adopted to understand the effects of the boundary conditions. In figure 15, we show the \(Ri_{b}-\theta\) and \(Q_{m}-\theta\) parameter spaces of the fastest growing modes of the above LSA. Clearly, the five families of instabilities appear even with the different base flow and boundary conditions. This means that these instabilities are not a consequence of an arbitrary flow condition tied to a particular base flow or boundary condition, but rather general flow instabilities that can appear in a wide range of stratified flow systems. Features of these instabilities, e.g. the wave speed, wavelength, growth rate, and regime, are largely consistent with the main cases discussed in § 3.1, which again suggests the universal character of these instabilities. Figure 16 shows the eigenfunctions of the fastest growing modes of the typical case of each instability (marked in figure 15). Note again that the forms of the eigenfunctions of each instability are generally consistent with the main cases in § 3.2 in the middle region of the channel. However, the intense regions near the wall do not appear with a free-slip velocity boundary condition. Therefore, these near-wall structures, as well as the no-slip boundary conditions, are not essential to these instabilities.
We investigate two-layer stratified shear flows in an inclined two-dimensional channel, subject to a non-zero longitudinal gravitational acceleration. We reveal three new instabilities, distinct from the well-known Kelvin-Helmholtz instability (KHI) and Holmboe wave instability (HWI), characterised by long wavelengths (of order 10 to $10^3$ shear-layer depths) and typically slow growth rates. Importantly, these instabilities can grow in background flows with gradient Richardson number $\gg 1$, providing a new mechanism for sustaining turbulence and mixing in strongly stratified flows. We show that these instabilities are generic and relatively insensitive to the Reynolds number $\mathrm{Re}$, the Prandtl number $\mathrm{Pr}$, the base flow profile, and the boundary conditions.
2309.04023
BOLA360: Near-optimal View and Bitrate Adaptation for 360-degree Video Streaming
Recent advances in omnidirectional cameras and AR/VR headsets have spurred the adoption of 360-degree videos that are widely believed to be the future of online video streaming. 360-degree videos allow users to wear a head-mounted display (HMD) and experience the video as if they are physically present in the scene. Streaming high-quality 360-degree videos at scale is an unsolved problem that is more challenging than traditional (2D) video delivery. The data rate required to stream 360-degree videos is an order of magnitude more than traditional videos. Further, the penalty for rebuffering events where the video freezes or displays a blank screen is more severe as it may cause cybersickness. We propose an online adaptive bitrate (ABR) algorithm for 360-degree videos called BOLA360 that runs inside the client's video player and orchestrates the download of video segments from the server so as to maximize the quality-of-experience (QoE) of the user. BOLA360 conserves bandwidth by downloading only those video segments that are likely to fall within the field-of-view (FOV) of the user. In addition, BOLA360 continually adapts the bitrate of the downloaded video segments so as to enable a smooth playback without rebuffering. We prove that BOLA360 is near-optimal with respect to an optimal offline algorithm that maximizes QoE. Further, we evaluate BOLA360 on a wide range of network and user head movement profiles and show that it provides $13.6\%$ to $372.5\%$ more QoE than state-of-the-art algorithms. While ABR algorithms for traditional (2D) videos have been well-studied over the last decade, our work is the first ABR algorithm for 360-degree videos with both theoretical and empirical guarantees on its performance.
Ali Zeynali, Mahsa Sahebdel, Mohammad Hajiesmaili, Ramesh K. Sitaraman
2023-09-07T21:30:57
http://arxiv.org/abs/2309.04023v2
# BOLA360: Near-optimal View and Bitrate Adaptation for 360-degree Video Streaming

###### Abstract

Recent advances in omnidirectional cameras and AR/VR headsets have spurred the adoption of 360° videos that are widely believed to be the future of online video streaming. 360° videos allow users to wear a head-mounted display (HMD) and experience the video as if they are physically present in the scene. Streaming high-quality 360° videos at scale is an unsolved problem that is more challenging than traditional (2D) video delivery. The data rate required to stream 360° videos is an order of magnitude more than traditional videos. Further, the penalty for rebuffering events where the video freezes or displays a blank screen is more severe as it may cause cybersickness. We propose an online adaptive bitrate (ABR) algorithm for 360° videos called BOLA360 that runs inside the client's video player and orchestrates the download of video segments from the server so as to maximize the quality-of-experience (QoE) of the user. BOLA360 conserves bandwidth by downloading only those video segments that are likely to fall within the field-of-view (FOV) of the user. In addition, BOLA360 continually adapts the bitrate of the downloaded video segments so as to enable a smooth playback without rebuffering. We prove that BOLA360 is near-optimal with respect to an optimal offline algorithm that maximizes QoE. Further, we evaluate BOLA360 on a wide range of network and user head movement profiles and show that it provides 13.6% to 372.5% more QoE than state-of-the-art algorithms. While ABR algorithms for traditional (2D) videos have been well-studied over the last decade, our work is the first ABR algorithm for 360° videos with _both_ theoretical and empirical guarantees on its performance.

## 1 Introduction

With recent advancements in omnidirectional cameras and AR/VR headsets, users can enjoy 360° media like YouTube 360 [1], virtual reality video games [2, 3], and augmented reality applications like Google AR/VR [4]. Users either wear a head-mounted display (HMD) or use a device that allows them to change their viewport and field-of-view (FOV)1 when watching a 360° video (see Figure 1). For instance, a user watching world cup soccer as a 360° video can wear an HMD and watch the game by changing their head position as if they were actually in the stadium.

Footnote 1: Field of view is the spatial area that falls within the viewport of the user’s device. A user sees only the portion of the 360° video that is within the FOV.

The rapid increase in the popularity of 360° videos is driven in part by the wide availability of VR headsets, which has grown more than five-fold in the past five years to reach nearly 100 million units in use [5]. A second trend driving the popularity of 360° videos is the wide availability of omnidirectional cameras that make it easy to create 360° video content. While the promise of providing an immersive experience has made 360° videos the holy grail of internet video streaming [6], providing a high quality-of-experience to users while _delivering those videos at scale over the internet_ is a major unsolved problem and is the main motivation of our work.

**Tiled video delivery.** 360° videos are created and stored on servers and delivered to the users (i.e., clients) in an _online_ fashion. A common approach to 360° video delivery is to divide the viewing sphere of the user into a set of tiles (see Figure 1).
Each tile is stored as a sequence of video segments, where each video segment can be played for a fixed duration of \(\delta\) (say, \(\delta=5\)) seconds. Each segment is encoded in multiple bitrates (i.e., resolutions) so that the quality of the segments sent to the user can be adapted to the available bandwidth between the server and the client, a feature known as "adaptive bitrate streaming". Video segments are streamed from the server ahead of time and buffered at the client before they can be rendered to the user. As the user changes their viewport, say by moving their head, the appropriate segments for the tiles within the user's FOV are extracted from the client's buffer and rendered on the user's display.

Figure 1: (a) Users watch 360° videos by moving their viewport to point to any direction in the enclosing sphere (b) the sphere is broken up into tiles and each tile in the user’s FOV is streamed as a sequence of segments [6].

**Challenges of 360° video delivery.** A key challenge in delivering 360° videos is that they are an order of magnitude larger in size than traditional (2D) videos [7, 8, 9]. The reason is that there are multiple tiles required to cover the 360° viewing sphere, with each tile encoded in multiple bitrates in a manner similar to a 2D video. Further, a high resolution of 4K to 8K is recommended for viewing AR/VR media [8]. Thus, the data rate of a 360° video that delivers a 4K stream to each eye (×2) and allows the user to watch the full 360° viewing sphere (×8 tiles, say) is 400 Mbps, compared to about 25 Mbps for a traditional 4K video. In fact, the data rate of such a 360° video is an order of magnitude larger than the US's average last-mile bandwidth [10, 11]. Additionally, when the user's viewport changes, say due to a head movement, the new segments that fall within the user's new FOV must be rendered within a latency of a few tens of milliseconds, so as not to cause a _rebuffering event_ that results in showing either an incorrect/stale segment or no segment at all (i.e., a blank screen). If the "motion-to-photon" latency exceeds a few tens of milliseconds, the user experiences a degraded quality-of-experience, or even cybersickness [6].

**Adaptive bitrate (ABR) algorithms for 360° videos.** To ameliorate the challenge imposed by the size of 360° videos, we study _adaptive bitrate (ABR) algorithms_ that run on the client's device and orchestrate the download of the video segments from the server. ABR algorithms for traditional (2D) videos are a well-studied problem with more than a decade of research [12, 13, 14, 15, 16, 17, 18]. In the traditional 2D video setting, ABR algorithms primarily adapt the bitrates of the downloaded segments to the available client-server bandwidth, ensuring that the video plays continuously without rebuffers (i.e., freezes). ABR algorithms for 360° videos are considerably more complex since they must _simultaneously_ perform two types of adaptations. First, the algorithm must perform _view adaptation_ by predicting ahead of time where the user's head position might be and what tile(s) the user may view in the future. Second, the algorithm must perform _bitrate adaptation_ by deciding at what bitrates to download the segments of the predicted tiles. Importantly, these two types of adaptations must be optimized jointly, since tiles that are more likely to be in the user's viewport should be downloaded at higher bitrates.
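To see why the joint optimization matters, consider the following deliberately simplified greedy allocation: it spends a fixed bandwidth budget where the expected quality gain per unit bandwidth is highest, so tiles with higher view probabilities naturally attract higher bitrates. This toy utility is our own illustration of the coupling and is not BOLA360's objective or algorithm.

```python
import numpy as np

def allocate_bitrates(view_prob, bitrates, budget):
    """Toy joint adaptation: greedily upgrade the tiles most likely to be viewed.
    view_prob[i] is the probability tile i is in the FOV; bitrates is the
    ascending list of available encodings; budget is the bandwidth to spend."""
    choice = np.zeros(len(view_prob), dtype=int)   # start all tiles at lowest level
    spent = bitrates[0] * len(view_prob)
    while spent < budget:
        # expected-quality gain per unit bandwidth of one upgrade step per tile
        gains = [view_prob[i] / (bitrates[choice[i] + 1] - bitrates[choice[i]])
                 if choice[i] + 1 < len(bitrates) else -np.inf
                 for i in range(len(view_prob))]
        i = int(np.argmax(gains))
        if gains[i] == -np.inf:
            break                                   # every tile already at top level
        cost = bitrates[choice[i] + 1] - bitrates[choice[i]]
        if spent + cost > budget:
            break
        choice[i] += 1
        spent += cost
    return choice

# Four tiles, four bitrate levels: the likely-viewed tile gets the high bitrate.
print(allocate_bitrates(np.array([0.6, 0.25, 0.1, 0.05]),
                        bitrates=np.array([1.0, 2.5, 5.0, 8.0]), budget=12.0))
```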
**Why naive ABR solutions do not work.** A naive ABR algorithm divides the available client-server bandwidth equally among all tiles of the 360\({}^{\circ}\) video, resulting in downloading a segment for all tiles. While this prevents rebuffering since there is a downloaded segment for each tile, it leads to lower video quality and wasted segments that are never viewed. An alternative approach predicts the tile(s) the user is likely to watch and downloads segments only for those tile(s). This reduces the number of segments downloaded, allowing for higher quality. However, it is prone to rebuffering if the user watches unpredicted tile(s) for which no segments were downloaded [19, 20, 21, 22, 23, 24, 25, 26]. The provably near-optimal approach that we propose balances these two naive extremes to achieve both high quality and lower rebuffering. **Our Contributions.** We leverage Lyapunov optimization techniques to achieve _both_ high bitrates and low rebuffering by judiciously downloading higher-quality segments for tiles that are more likely to be in the FOV of the users, while using lower-quality segments for the rest of the tiles as a hedge against rebuffering. Our algorithm, BOLA360, is the first _provably_ near-optimal ABR algorithm for 360\({}^{\circ}\) videos that also empirically performs better than state-of-the-art algorithms. We make the following specific contributions. **1)** We formulate maximizing the quality-of-experience (QoE) of 360\({}^{\circ}\) videos as an optimization problem that we call ABR360. We model QoE as a weighted sum of two terms: one term relates to the quality (i.e., bitrate) of the video segments viewed by the user, and the other relates to continuous video playback without rebuffers. **2)** We present BOLA360, an algorithm that finds a near-optimal solution for ABR360 in an online manner without future knowledge of uncertain inputs. In each round, BOLA360 selects a suitable bitrate for each tile based on the current buffer utilization. Further, there are multiple parameters in BOLA360 that could be tuned to improve the performance under different conditions and environments. **3)** We analyze the performance of BOLA360 and show that (i) it never violates the buffer capacity of the client (Theorem 1), and (ii) its average QoE is within a small additive constant factor of the offline optimum of ABR360 (Theorem 2); the additive factor goes to zero when the buffer size goes to infinity. Further, let _playback delay_ be the time elapsed from when a video segment is downloaded by the client to when it is rendered to the user. Our theoretical analysis reveals a tradeoff between playback delay and QoE of BOLA360, i.e., one needs to tolerate a longer playback delay to achieve better QoE (Remark 2). **4)** We implement BOLA360 on a simulation testbed and evaluate its performance extensively using both real and synthetic data traces. Using trace-based simulations, we compare BOLA360 with both baseline and state-of-the-art algorithms used in VA-360 [27], 360ProbDASH [28], Salient-VR [29], Flare [30], and Pano [31]. Our results show that BOLA360 achieves a QoE that is better than the best alternative by 13.6% on average over 14 real network profiles (Figure 6) and 30.3% over multiple videos and 12 different head position probability distributions (Figure 8). **5)** Finally, we consider two extensions to BOLA360 that are relevant in specific real-world situations [32].
While BOLA360 exhibits impressive performance in terms of QoE, average playing bitrate, and rebuffering ratio, there is additional room to improve the QoE by adding heuristics on top of the basic design of BOLA360. Toward this, we augment BOLA360 with heuristics that reduce fluctuations in the bitrate of rendered segments and ensure swift responses to changing network conditions. To address these areas and enhance the practical performance of BOLA360, we propose two innovative heuristics, BOLA360-PL and BOLA360-REP, each targeting specific drawbacks of the original algorithm. Our experimental results reveal substantial enhancements achieved by the two heuristics. Specifically, BOLA360-PL reduces reaction time by up to 67.8%, and BOLA360-REP significantly improves both playing bitrate and reaction time by 91.2% and 80.0%, respectively, especially when coupled with short-term head position predictions. These heuristics provide highly efficient and practical solutions, surpassing the performance of the original algorithm. **Roadmap.** The rest of the paper is organized as follows. We introduce our system model and formulate the ABR problem in Section 3. Using a Lyapunov optimization approach, we develop BOLA360 and prove that it is near-optimal in Section 4. We empirically analyze the behavior of BOLA360 in Section 5 and evaluate its performance in Section 6. Next, in Section 7, we introduce two additional versions of BOLA360 which improve its practical performance. Finally, we review related work in Section 8 and conclude in Section 9. ## 2 Background **ABR Algorithm for 360\({}^{\circ}\) Videos.** Tile-based 360\({}^{\circ}\) videos temporally slice the video into chunks. Each chunk is split into multiple segments to cover the entire 360\({}^{\circ}\) spatial area. The user's screen includes multiple tiles, and each segment represents a short fraction of video for a particular tile. Usually, each segment is encoded in multiple quality levels or bitrates for video streaming. The ABR algorithm for 360\({}^{\circ}\) video has to select the bitrate of the segment for each tile before downloading it. So, the action of the online ABR algorithm for each chunk is a list of selected bitrates, one per tile. **Field of View (FOV).** A 360\({}^{\circ}\) video is encoded in the full 360\({}^{\circ}\) visual sphere. However, the human eye's field of vision covers about 130\({}^{\circ}\)[33]. Therefore, the user interacting with the 360\({}^{\circ}\) video cannot see the entire spatial area of the presented video. The part of the 360\({}^{\circ}\) video inside the user's visible region is called the Field of View or FOV. Figure 2 shows an example of a FOV that consists of a subset of tiles of the full sphere of the 360\({}^{\circ}\) video seen by the user. We use the term view to refer to the group of tiles inside the FOV. When the user interacts with a 360\({}^{\circ}\) video using a VR headset (say), the user can arbitrarily change the FOV and view by moving their head. **Navigation Graph.** The _navigation graph_ for 360\({}^{\circ}\) videos was first introduced in [34] and is used to represent the probability of users transitioning from each view to another view as they watch the 360\({}^{\circ}\) video. Each node \((k,v)\) of the graph corresponds to the \(k^{th}\) chunk of the video and view index \(v\). Also, a weighted edge \(e=\{k,v_{i},v_{j}\}\) from node \((k,v_{i})\) to node \((k+1,v_{j})\) shows the probability of jumping from view \(v_{i}\) to view \(v_{j}\) while chunk \(k\) is playing. Usually, the navigation graph is used to keep the historical head direction traces of multiple users against a single video, or the historical head direction traces of a single user against multiple videos. Every time the user interacts with a video and jumps from view \(v_{i}\) to view \(v_{j}\) at chunk \(k\), the weight of edge \(e=\{k,v_{i},v_{j}\}\) gets updated.
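A minimal sketch of how such a navigation graph could be maintained is given below; the dictionary-based representation, the trace format, and the helper names are our own illustrative choices, not the exact data structure of [34].

```
from collections import defaultdict

class NavigationGraph:
    """Counts view-to-view transitions per chunk and normalizes them into
    empirical transition probabilities (a sketch, not the structure of [34])."""

    def __init__(self):
        # weight[(k, v_i, v_j)] = number of observed jumps from view v_i
        # at chunk k to view v_j at chunk k+1
        self.weight = defaultdict(int)

    def record_trace(self, views):
        """views[k] is the view the user watched during chunk k."""
        for k in range(len(views) - 1):
            self.weight[(k, views[k], views[k + 1])] += 1

    def transition_prob(self, k, v_i, v_j):
        """Empirical probability of jumping from view v_i to v_j at chunk k."""
        total = sum(w for (kk, vi, _), w in self.weight.items()
                    if kk == k and vi == v_i)
        return self.weight[(k, v_i, v_j)] / total if total else 0.0

# Example: two users watching a 4-chunk video
g = NavigationGraph()
g.record_trace([0, 0, 1, 1])
g.record_trace([0, 1, 1, 2])
print(g.transition_prob(0, 0, 0))  # 0.5
print(g.transition_prob(0, 0, 1))  # 0.5
```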
Figure 2: One shot from the entire spatial area of a 360\({}^{\circ}\) video and the FOV of the user in it. ## 3 System Model and Problem Formulation **The 360\({}^{\circ}\) Video Model.** We consider a 360\({}^{\circ}\) video as a sequence of \(K\) _chunks_, where each chunk represents \(\delta\) seconds of the playback time. Each chunk is further partitioned into \(D\) _segments_ to cover the entire 360\({}^{\circ}\) spatial area. Each segment represents \(\delta\) seconds of video for a particular tile of the screen. Moreover, each segment is encoded in \(M\) different _bitrates_, all of which are available at the server; the higher the bitrate, the larger the size in bits. Let \(S_{m}\) denote the size of a segment with bitrate \(m\). We define \(v_{m}\) as the utility value the user gets by watching a tile playing a segment with bitrate \(m\). Therefore, we have the following inequality. \[S_{1}\leq S_{2}\leq...\leq S_{M}\Leftrightarrow v_{1}\leq v_{2}\leq...\leq v_{M}.\] During the playback time of each chunk, the user views only a subset of tiles, which is their FOV. The bitrate of tiles inside the FOV directly impacts the QoE. On the other hand, downloading segments for tiles out of the FOV wastes the bandwidth capacity. A key challenge is that the FOV is unknown to the bitrate selection algorithm at download time. As a result, the online bitrate selection algorithm must predict the FOV and download the segments for tiles based on its prediction. Let \(p_{k,d}\) denote the probability that tile \(d\) is inside the FOV while the \(k^{th}\) chunk is playing. We assume that these probability values are given by a prediction based on previous users watching the video [35, 19, 36, 29, 34], or by an analysis of the chunk's content [37, 38]. For simplicity, we assume that the FOV includes a single tile and \(\sum_{d=1}^{D}p_{k,d}=1\). The algorithm's design could be straightforwardly extended to include multiple tiles in the FOV of the user. **Problem Formulation.** In what follows, we formulate ABR360, an online optimization problem for the bitrate and view adaptation of 360\({}^{\circ}\) video streaming. In ABR360, the objective is to maximize the expected quality of experience (QoE) of the user, including two terms: 1) the utility term, which is related to the quality of the video watched by the user; this utility is an increasing function of the quality of the segment; and 2) the smoothness-of-streaming term, which captures continuous playback without rebuffering. The first term directly depends on the bitrate downloaded by the streaming algorithm, i.e., the higher the bitrate, the higher the utility. The second term captures the expected smoothness of video streaming. Rebuffering happens when at least one of the segments inside the FOV is not completely downloaded during playback time. Note that the above two terms conflict with each other. To maximize the utility, an ABR algorithm must download the highest possible bitrate segments. However, to maximize the expected continuous smooth playback, the ABR algorithm must download low-bitrate segments. Thus, to maximize the sum of both terms, the ABR algorithm must balance the two conflicting requirements.
We now formulate the QoE mathematically as the sum of the two terms \(U_{K}\) and \(R_{K}\). \(U_{K}\) represents the time-average expected playback utility the video player prepares for the user over the sequence of segments and is defined as follows. \[U_{K}=\frac{\sum_{k=1}^{K}\sum_{d=1}^{D}\sum_{m=1}^{M}\mathbb{E}\{a_{k,d,m}\,p_{k,d}\,v_{m}\}}{\mathbb{E}\{T_{end}\}}, \tag{1}\] where \(T_{end}\) is the moment the video player finishes the playback time of the last chunk, and \(a_{k,d,m}\) is a binary optimization variable in the ABR360 problem: \(a_{k,d,m}=1\) if the segment with bitrate \(m\) is selected for download for tile \(d\) of chunk \(k\); 0, otherwise. Let \(t_{k}\) denote the time the video player completes the download of segments that belong to chunk \(k-1\) and decides about the segments of the \(k^{th}\) chunk, and let \(T_{k}\) denote the time interval between finishing downloading chunks \(k-1\) and \(k\), i.e., \(T_{k}=t_{k+1}-t_{k}\). In Equation (1), \(p_{k,d}\) is the probability of tile \(d\) being inside the FOV during the playback time of the \(k^{th}\) chunk. The second QoE term is denoted by \(R_{K}\), which targets the playback smoothness as follows. \[R_{K}=\frac{\sum_{k=1}^{K}\sum_{d=1}^{D}\sum_{m=1}^{M}\mathbb{E}\{a_{k,d,m}\delta\}}{\mathbb{E}\{T_{end}\}}, \tag{2}\] That is, \(R_{K}\) is the ratio of the expected playback length of downloaded segments of video to the length of the streaming session. Note that a low value of \(R_{K}\), when the download time (denominator) is larger than the playback length of the segments (numerator), will result in rebuffering. Thus a large value of \(R_{K}\) is a measure of continuous play. In contrast to \(U_{K}\), \(R_{K}\) has an inverse relation with the download time (or the bitrate), so it decreases with higher bitrates. Note that the expectations in Equation (1) and Equation (2) are over the different possible decisions BOLA360 may take. We use the coefficient \(\gamma>0\) to set the relative importance of the two terms in the user's final QoE, i.e., \(\gamma\) provides an opportunity to tune the relative importance of high-bitrate streaming with respect to a continuous streaming experience. We formulate the ABR360 problem as follows. \[(\text{ABR360})\qquad\max\ U_{K}+\gamma R_{K} \tag{3a}\] \[\text{s.t.,}\quad\sum_{m=1}^{M}a_{k,d,m}\leq 1,\quad\forall d,k,\] (3b) \[Q(t_{k})\leq Q_{max},\] (3c) \[\text{vars.,}\quad a_{k,d,m}\in\{0,1\}. \tag{3d}\] Constraint (3b) limits the algorithm to downloading at most one segment for each tile of a chunk. The second constraint (3c) enforces the buffer capacity constraint. In this constraint, \(Q(t_{k})\) is the buffer level at time \(t_{k}\), i.e., the number of segments available in the buffer at time \(t_{k}\). \(Q_{max}\) is the buffer capacity and denotes the maximum number of segments stored in the buffer. Since the number of segments downloaded for each chunk is not fixed, the actual number of segments that drain out from the buffer when a chunk is played can vary from chunk to chunk. To capture this, let \(n_{k}\) be the average number of segments downloaded for chunks played during the downloading of chunk \(k\). The evolution of the buffer level is characterized as \[Q(t_{k+1})=\max[Q(t_{k})-\frac{n_{k}T_{k}}{\delta},0]+\sum_{d=1}^{D}\sum_{m=1}^{M}a_{k,d,m}, \tag{4}\] where the first term refers to the number of segments removed from the buffer during the download time of chunk \(k\) and the second term counts the number of segments newly downloaded.
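The buffer recursion in Equation (4) translates directly into code; the sketch below mirrors the equation, with variable names of our own choosing.

```
def buffer_update(Q, n_k, T_k, delta, num_downloaded):
    """One step of the buffer-level recursion in Equation (4).

    Q:               buffer level Q(t_k), in segments
    n_k:             average number of segments downloaded per chunk played
                     while chunk k was downloading
    T_k:             download time of chunk k, in seconds
    delta:           playback duration of one chunk, in seconds
    num_downloaded:  sum over tiles/bitrates of a_{k,d,m}, i.e., the number
                     of segments downloaded for chunk k
    """
    drained = n_k * T_k / delta            # segments played out of the buffer
    return max(Q - drained, 0.0) + num_downloaded

# Example: 10 segments buffered, ~6 segments per chunk, 3 s download time,
# 5 s chunks, and 6 newly downloaded segments
print(buffer_update(10.0, 6.0, 3.0, 5.0, 6))  # 10 - 3.6 + 6 = 12.4
```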
**Remark 1**.: _For regular 2D videos with \(D=1\), the number of segments that drain out of the buffer when each chunk is played is fixed, \(n_{k}=1\). In this particular case, \(\min[Q(t_{k}),\frac{T_{k}}{\delta}]\) segments get drained out of the buffer after \(T_{k}\) seconds pass._ ## 4 BOLA360: An Online ABR Algorithm for 360\({}^{\circ}\) Videos In this section, we propose BOLA360, a Lyapunov-based algorithm that finds a near-optimal solution to ABR360. BOLA360 is an online algorithm and its decisions do not require the knowledge of future bandwidth values. ```
1  \(\mathbf{a}(k)\): A decision vector that maximizes the value of \(\eta(k,a(k))\) defined in (5a) with respect to the single-bitrate constraint (5b) for chunk \(k\);
2  if number of non-zero elements in \(\mathbf{a}(k)>0\) then
3      Download bitrates according to \(\mathbf{a}(k)\) and finish the decision making about chunk \(k\);
4  else
5      Wait for \(\Delta\) seconds and repeat the bitrate selection for this chunk again;
6  end if
``` **Algorithm 1**BOLA360 (k) ### Design and Analysis of BOLA360 The design of BOLA360 is based on three key ideas. First, BOLA360 finds a solution for a single-slot maximization problem that leads to a near-optimal solution for the original long-term problem over \(K\) chunks. Second, the single-slot decision of BOLA360 is based on the buffer level; the higher the current buffer level, the higher the selected bitrate for download. This is intuitive since a high buffer level indicates that the input rate into the buffer was higher than the output rate from the buffer, so the algorithm has more freedom to download high-quality segments and reduce the input rate of the buffer. Third, BOLA360 uses a threshold as the indicator of high buffer utilization, and upon reaching the threshold, it moves to an idle state and waits until the buffer level decreases again. This approach limits the buffer utilization of BOLA360. It is worth noting that at the beginning, with an empty buffer, BOLA360 starts by downloading low bitrates. With the above three key ideas, we now proceed to explain the technical details of BOLA360. BOLA360 uses an input parameter \(V\) that controls the trade-off between the performance of the algorithm and its maximum acceptable buffer utilization. Note that the parameter \(V\) also plays a critical role in the playback delay, i.e., for real-time streaming, smaller values of \(V\) are preferable, while in an on-demand streaming application, larger values of \(V\) are acceptable. At the decision time \(t_{k}\) for chunk \(k\), the buffer level \(Q(t_{k})\) and the head position probability values encoded in \(p_{k,d}\) are given. BOLA360 selects the bitrates for segments of chunk \(k\) by solving the maximization problem described in the following. \[\max_{a(k)} \eta(k,a(k))=\sum_{d=1}^{D}\sum_{m=1}^{M}\frac{a_{k,d,m}\big{(}V(v_{m}\cdot p_{k,d}+\gamma\delta)-Q(t_{k})\big{)}}{S_{m}}\] (5a) s.t., \[\sum_{m=1}^{M}a_{k,d,m}\leq 1,\quad\forall k,d, \tag{5b}\] \[\text{vars.,} a_{k,d,m}\in\{0,1\}, \tag{5c}\] where \(a(k)\) is the decision vector of BOLA360 and \[0<V<\frac{Q_{max}-D}{v_{M}+\gamma\delta}\] is a control parameter; the upper bound on \(V\) guarantees that the buffer level required by BOLA360 is less than \(Q_{max}\). Constraint (5b) limits BOLA360 to download at most one segment for each tile.
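Because the objective (5a) is a sum of independent per-tile terms under the per-tile constraint (5b), the maximization decouples: each tile can independently pick the bitrate index with the largest positive score, and download nothing when all its scores are non-positive (the idle case of Algorithm 1). A minimal sketch of this decision, with variable names of our own and the Section 5 parameters as an example input, is shown below.

```
def bola360_select(Q, p, S, v, V, gamma, delta):
    """Single-slot decision of BOLA360 (Eq. (5)); a sketch.

    Q:     current buffer level Q(t_k), in segments
    p:     p[d], probability that tile d is in the FOV for this chunk
    S:     S[m], segment size at bitrate index m (non-decreasing)
    v:     v[m], utility at bitrate index m (non-decreasing)
    V:     Lyapunov control parameter
    gamma: weight of the smoothness term
    delta: chunk duration in seconds

    Returns one entry per tile: the chosen bitrate index, or None when
    nothing should be downloaded for that tile. An all-None result
    corresponds to the idle state of Algorithm 1.
    """
    decisions = []
    for p_d in p:
        # Each tile maximizes its own term (V(v_m * p_d + gamma*delta) - Q)/S_m.
        scores = [(V * (v[m] * p_d + gamma * delta) - Q) / S[m]
                  for m in range(len(S))]
        best = max(range(len(S)), key=lambda m: scores[m])
        decisions.append(best if scores[best] > 0 else None)
    return decisions

# Section 5 setup: Table 1 sizes/utilities, gamma = 0.1, V = 1.66, delta = 5
S = [1, 2, 3, 4, 5, 7.5]
v = [0.000, 0.693, 1.099, 1.386, 1.609, 2.015]
print(bola360_select(Q=0.0, p=[1 / 6] * 6, S=S, v=v, V=1.66, gamma=0.1, delta=5))
# -> [0, 0, 0, 0, 0, 0]: with an empty buffer, the lowest bitrate is chosen
```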
BOLA360 selects the near-optimal bitrates of chunk \(k\) by finding a decision vector \(\mathbf{a}(k)=[a_{k,1,1},a_{k,1,2},...,a_{k,1,M},a_{k,2,1},...,a_{k,D,M}]\) that maximizes the value of \(\eta(k,a(k))\) in Equation (5a). When the buffer level exceeds \(V(v_{M}+\gamma\delta)\), the algorithm decides to wait and download nothing (entering the idle state). In this situation, BOLA360 waits for \(\Delta\) seconds and repeats the bitrate selection for that chunk again. The selection of \(\Delta\) could be dynamic, as suggested in [17]: the algorithm waits until the buffer level reaches \(Q(t_{0})\leq V(v_{M}+\gamma\delta)\). We note that our theoretical analysis is valid even with a dynamic waiting time. The pseudocode for the action taken by BOLA360 for chunk \(k\) is described in Algorithm 1. ### Theoretical Analysis of BOLA360 Our theoretical analysis first provides an upper bound for the buffer level while running BOLA360 in Theorem 1. Second, in Theorem 2, we show the QoE of BOLA360 is within a constant term of the optimal QoE of ABR360. The theoretical results reveal an interesting trade-off between the QoE and the playback delay of BOLA360, which is discussed in Remark 2. **Theorem 1**.: _Under the bitrate control of BOLA360, the buffer level never exceeds \(V(v_{M}+\gamma\delta)+D\),_ \[Q(t_{k})\leq V(v_{M}+\gamma\delta)+D. \tag{6}\] A proof of Theorem 1 is given in Appendix A. **Theorem 2**.: _Let OBJ be the QoE achieved by BOLA360. For a large video, i.e., \(K\to\infty\),_ \[\texttt{OBJ}^{\star}-\frac{D\delta^{2}+\Psi}{2V\delta^{2}}\sigma\leq\texttt{OBJ}, \tag{7}\] _where \(\texttt{OBJ}^{\star}=U_{K}^{\star}+\gamma R_{K}^{\star}\) is the QoE of the offline optimal algorithm, \(\sigma=1/\mathbb{E}\{T_{k}\}\), and \(\Psi\leq\mathbb{E}\{DT_{k}^{2}\}\). That is, BOLA360 achieves a QoE that is within an additive factor of the offline optimum._ A proof of Theorem 2 is given in Appendix B. Our theoretical analysis assumes that the number of chunks in the video is very large, i.e., \(K\to\infty\). Note that this assumption is needed only for the theoretical analysis; the algorithm itself does not require it. **Remark 2** (On the conflict between the playback delay and QoE of streaming).: _Theorem 2 states that as the value of \(V\) increases, the performance of BOLA360 gets closer to the optimal QoE. However, Theorem 1 reveals that the upper bound on the buffer level, and hence on the playback delay, increases with higher values of \(V\). Comparing these results, we observe a trade-off between minimizing playback delay and maximizing QoE in BOLA360. As the playback delay increases, the QoE performance of BOLA360 approaches the offline optimum._ ## 5 Understanding the Behavior of BOLA360 To understand the detailed behavior of BOLA360, in this section, we evaluate the actions and performance of BOLA360 using a trace-based simulation with synthetic traces. In the next section, we conduct a comprehensive study comparing BOLA360 with other state-of-the-art algorithms using both real and synthetic trace data. ### Experimental Setup We conducted our experiments using a video with a duration of 250 seconds, divided into chunks of 5 seconds each. Each chunk is further divided into six tiles, and each segment is encoded at six different bitrates. To represent utility values, we employed a logarithmic function, similar to previous works such as [39, 17, 40]. While our theoretical results only require a non-decreasing utility function, we opted for a concave function that better reflects real-world utility functions.
The concave utility function exhibits a diminishing-return property, meaning that increasing the bitrate from 1 Mbps to 2 Mbps provides more utility than increasing it from 10 Mbps to 11 Mbps, even though the bitrate difference is the same in both cases. In Table 1, we present the utility values generated using the logarithmic function \(v_{m}=\log(S_{m}/S_{1})\). It is important to note that this utility function assigns a zero utility value to the lowest available bitrate. Although selecting the lowest bitrate does not affect the value of \(U_{K}\), its positive impact on \(R_{K}\) makes it a better choice compared to not downloading any segment at all. The head position of the user watching the \(360^{\circ}\) video is represented by a head position probability distribution that is critical for guiding the actions of BOLA360. In this section, we evaluate the performance of BOLA360 using two different head position probability distributions. The first distribution is homogeneous, where each tile is assigned a uniform probability, resulting in an equal likelihood of the user watching any tile (\(p_{k,d}=1/D\) for all tiles). The second distribution is heterogeneous, with a linear increase in probability from the minimum to the maximum. Specifically, we set the maximum and minimum probabilities to 0.317 and 0.017, respectively. Additionally, we set the values of \(\gamma\) and \(V\) to \(\gamma=0.1\) and \(V=1.66\) for the experiments conducted in this section. Our goal in this section is to understand the behavior of BOLA360 using these synthetic inputs, while we use realistic probabilities derived from real-world scenarios in the next section. ### Experimental Results Figure 3 shows the maximum, minimum, and average bitrates of segments downloaded by BOLA360 for each chunk of the video. For the homogeneous distribution, the selected bitrate for all segments of a chunk is the same. BOLA360 chooses its action by solving the maximization problem defined in Equation (5). This action is taken based on the current buffer level. The results in Figure 3 show that the average download bitrate grows with an increase in buffer level. We show the threshold values for the buffer level where the action for the tile with the highest probability changes. In addition, we show the variations of the buffer level over time for both homogeneous and heterogeneous head position probability distributions in Figure 3. When the buffer level is higher than \(V(v_{M}\cdot p_{k,d}+\gamma\delta)\), BOLA360 downloads nothing for that tile and tries to select the bitrate after \(\Delta\) seconds. Note that increasing the value of \(\gamma\) increases the importance of continuous playback without rebuffers. Increasing the value of \(\gamma\) by \(\epsilon\) is similar to reducing the buffer level by \(\epsilon\delta V\), resulting in BOLA360 using correspondingly higher threshold values of the buffer level for bitrate switches. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline **Bitrate (Mbps)** & 0.2 & 0.4 & 0.6 & 0.8 & 1 & 1.5 \\ \hline **Sizes (Mb)** & 1 & 2 & 3 & 4 & 5 & 7.5 \\ \hline **Utility values** & 0.000 & 0.693 & 1.099 & 1.386 & 1.609 & 2.015 \\ \hline \end{tabular} \end{table} Table 1: Available bitrates and utility values Figure 3: The selected bitrate of BOLA360 for the tiles with the highest and lowest probability and the average selected bitrate as a function of buffer level for the homogeneous (left most) and heterogeneous (second left) distributions, and the buffer level variation over time for the homogeneous (third left) and heterogeneous (right most) distributions. Marked threshold values show the buffer levels where the bitrate for the tile with the highest probability changes.
Therefore, increasing the value of \(\gamma\) shifts the bitrate curves in Figure 3 to the right, and vice versa. Lastly, Figure 4 shows the average bitrates of segments downloaded across all tiles and the bitrate of the tiles that the user actually sees (playing bitrate) in their FOV. One can see that BOLA360 responds to bandwidth changes by increasing/decreasing the selected bitrates. ## 6 Comparison of BOLA360 with other approaches We introduce several algorithms that capture baseline techniques as well as the current state-of-the-art known in the prior literature for solving ABR360. We provide an extensive comparison of the QoE achieved by BOLA360 in comparison with these algorithms and show that BOLA360 significantly advances the current state-of-the-art. ### Comparison Algorithms The first comparison algorithm, named DPon, utilizes the estimated bandwidth to determine the bitrates of segments for the next chunk. However, DPon lacks foresight and does not consider future implications, focusing solely on maximizing the immediate impact on the quality of experience (QoE). The second comparison algorithm, Top-D, distributes the estimated bandwidth equally among all \(D\) tiles and selects bitrates accordingly. Top-D is very similar to the algorithms used in some previous works like [41, 28]. The third comparison algorithm, VA-360, is introduced in [27]; it gives a unique weight to each tile and distributes the estimated bandwidth among all tiles based on the given weights, where the head movement probabilities serve as the weights of the tiles. The fourth comparison algorithm, 360ProbDASH, proposed in [28], selects the aggregate bitrate of tiles for each segment to keep the buffer level close to the targeted buffer level. Then, it distributes the selected bitrate among all tiles to maximize QoE and reduce the variance of the tiles' bitrates inside the FOV. The last comparison algorithm, Salient-VR, was proposed in [29]. Salient-VR leverages the estimated bandwidth and buffer level to determine the highest possible bitrates such that the download time of a chunk does not exceed the length of the buffered video. Note that there are other state-of-the-art algorithms, such as Flare [30] or Pano [31], which consider different metrics, like minimizing bitrate variations across segments of a chunk. However, these algorithms may exhibit weaker performance when evaluated using the QoE defined in this work. By adapting their concepts to align with the QoE defined in this work, we can develop ABR360 algorithms that closely resemble 360ProbDASH, DPon, or Top-D. Figure 4: Variation of the average downloaded bitrates and the playing bitrate over time under the bitrate selection of BOLA360 for the homogeneous (left most and second left) and heterogeneous (third left and right most) head position probability distributions. BOLA360 responds to the variation of network bandwidth by changing the selected bitrates. ### Experimental Setup To evaluate the performance of these algorithms, we conducted experiments in multiple different scenarios to demonstrate the algorithms' performance under different settings. We use a 250-second video, split into chunks of 5 seconds and spatially distributed over 8 tiles. Also, the video is encoded in seven different bitrates, i.e., \(M=7\). Similar to Section 5, we use the logarithmic utility function.
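As a quick consistency check, the logarithmic utility \(v_{m}=\log(S_{m}/S_{1})\) reproduces the tabulated utility values; the sketch below assumes the natural logarithm, which matches the numbers in Tables 1 and 2.

```
import math

def utilities(sizes):
    """Logarithmic utility v_m = ln(S_m / S_1), as used in Sections 5 and 6."""
    return [math.log(s / sizes[0]) for s in sizes]

# Table 1 (Section 5): segment sizes in Mb
print([round(u, 3) for u in utilities([1, 2, 3, 4, 5, 7.5])])
# -> [0.0, 0.693, 1.099, 1.386, 1.609, 2.015]

# Table 2 (Section 6): segment sizes in Mb
print([round(u, 3) for u in utilities([2.2, 3.5, 6.75, 10.7, 20.5, 41.0, 82.5])])
# -> [0.0, 0.464, 1.121, 1.582, 2.232, 2.925, 3.624]
```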
The list of available bitrates, segment sizes, and utility values is given in Table 2. The buffer capacity is \(Q_{\text{max}}=64\). Finally, we select \(\gamma=0.3\), \(V=10.9\), and a dynamic value for \(\Delta\), as suggested in [17]. We use 4G bandwidth traces from [42] and the 4G/LTE bandwidth trace dataset [43] collected by IDLAB [44] to simulate the network condition. We select 14 different traces from the 4G/LTE dataset to evaluate the performance of BOLA360 under different network conditions. During our evaluation process, the video is stored on an Apache server. Both server and client use Microsoft Windows as the OS, 24 GB of RAM, and an 8-core, 3 GHz Intel Core-i7 CPU. We used the Chrome DevTools API [45] to transfer the video between server and client and also to emulate the network condition. We fetched the bandwidth capacity from the 4G/LTE dataset and injected it into Chrome DevTools to limit the download capacity between the server and the client. Unless otherwise mentioned, to capture the actual FOV of the user and the head position probability values, we generate the navigation graph [34] for the \(360^{\circ}\) video using public VR head traces published by [46]. ### Performance Evaluation using Real Network and Head Movement Traces In this experiment, we compare the performance of BOLA360 and the other comparison algorithms using real network and head movement traces. We use 4G bandwidth traces, network profile 15 in Appendix D, for this section. We report the playing bitrate, the rebuffering ratio (percentage of the video length spent rebuffering), and the QoE of BOLA360, DP\({}_{\text{on}}\), Top-D, VA-360, and Salient-VR. Note that the average playing bitrate reported in Figure 5 is calculated over the segments the user has seen inside the FOV. We report the results of 100 different trials, where for each trial, we sample the user's head direction from the head position probability distribution and use the same network traces, video, and algorithm parameters. The CDF plots of the average bitrates, rebuffering ratios, and QoE values of the 100 different trials are reported in Figure 5. The results in Figure 5 show that BOLA360 substantially outperforms the other comparison algorithms in QoE as well as in the bitrate of the tiles the user has seen. VA-360 selects relatively high bitrates for all segments of a chunk, while BOLA360 efficiently distributes the available bitrates among different segments, such that BOLA360 is able to achieve a higher playing bitrate and also a lower rebuffering ratio. It is worth noting that the rebuffering of DP\({}_{\text{on}}\) is very low since it is designed to minimize rebuffering by taking actions with an expected download time of less than \(\delta\) seconds regardless of the buffer level. However, its average bitrate is significantly lower than that of BOLA360. **Key takeaway.** BOLA360 outperforms the comparison algorithms in terms of QoE as it is designed to maximize it. Besides, BOLA360 performs better than _all_ comparison algorithms in terms of playing bitrate and rebuffering ratio.
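A quick numerical check, assuming the Section 6 parameters quoted above, confirms that the chosen \(V=10.9\) satisfies the admissibility bound from Section 4 and that the Theorem 1 buffer guarantee stays within \(Q_{\text{max}}\).

```
# Parameters from the Section 6 setup above
Q_max, D, delta = 64, 8, 5
gamma, V = 0.3, 10.9
v_M = 3.624  # highest utility in Table 2

# Admissible range for V: 0 < V < (Q_max - D) / (v_M + gamma * delta)
V_bound = (Q_max - D) / (v_M + gamma * delta)
print(f"V = {V} < {V_bound:.2f}")  # 10.9 < 10.93, so V is admissible

# Theorem 1: the buffer level never exceeds V*(v_M + gamma*delta) + D
buffer_cap = V * (v_M + gamma * delta) + D
print(f"buffer cap = {buffer_cap:.2f} <= Q_max = {Q_max}")  # 63.85 <= 64
```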
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline **Bitrate (Mbps)** & 0.44 & 0.7 & 1.35 & 2.14 & 4.1 & 8.2 & 16.5 \\ \hline **Sizes (Mb)** & 2.2 & 3.5 & 6.75 & 10.7 & 20.5 & 41.0 & 82.5 \\ \hline **Utility values** & 0.000 & 0.464 & 1.121 & 1.582 & 2.232 & 2.925 & 3.624 \\ \hline \end{tabular} \end{table} Table 2: Available bitrates and their utility values for the experiments of Sections 6.3 and 6.4 ### Impact of Network Bandwidth on the Performance of Algorithms The bandwidth capacity and its variations significantly impact the performance of online algorithms for ABR360. In this experiment, we investigate the impact of different network profiles on the performance of BOLA360 and the comparison algorithms. We use 14 different network traces from the 4G/LTE dataset [43] to generate the bandwidth throughput, which are all shown in Appendix D. We use the same video and algorithm/problem parameters (details in Section 6.2) for all algorithms to capture the impact of the network capacity on their performance. The results are reported for 100 trials for each network profile. We report the average QoE, playback delay, rebuffering ratio, and average playing bitrate of the algorithms. The results in Figure 6 report the average QoE and average playing bitrate of BOLA360 and the five comparison algorithms over 100 trials for the 14 network profiles. BOLA360 stands as the best algorithm in all 14 experiments. In these experiments, VA-360 selects relatively higher bitrates compared to the other algorithms, while its rebuffering, shown in Figure 7, is substantial enough to lower its QoE compared to BOLA360. In addition, the playback delay of BOLA360 and the comparison algorithms is shown in Figure 7. We can see that the playback delay of VA-360 was the lowest in all experiments. That clearly shows the trade-off between having low rebuffering and low playback delay. The results show that BOLA360 keeps the playback delay below 7.8 seconds, while its rebuffering ratio was the lowest among the comparison algorithms in 12 out of 14 experiments. **Key takeaway.** Networks with high fluctuations (e.g., profile indices 2 and 7) cause a high rebuffering ratio and lead BOLA360 to select bitrates more cautiously, resulting in lower playing bitrates (compared to other network profiles). Figure 5: Average playing bitrate vs. rebuffering ratio (left most), the CDF of playing bitrate (second left), rebuffering ratio (third left), and QoE (right most) of BOLA360 and the comparison algorithms using real network and head movement traces. The average playing bitrate of BOLA360 was 3.2 Mbps, while this value for VA-360, Salient-VR, and 360ProbDASH was 2.6 Mbps, 2.6 Mbps, and 2.1 Mbps, respectively. However, the average rebuffering for BOLA360, VA-360, Salient-VR, and 360ProbDASH was 0.26%, 2.1%, 0.25%, and 0.25%, respectively. Figure 6: The average QoE (left) and average playing bitrate (right) over the bitrate selection of BOLA360 and the other comparison algorithms for 14 different network profiles and 100 trials. In terms of QoE, BOLA360 outperforms the others in all profiles. On average, BOLA360 achieves about 13.6% more QoE than Salient-VR, which was in second place during these experiments. ### Impact of Head Position Probabilities on the Performance of Algorithms The head position probability values directly impact the QoE characterized in Equations (1) and (2); hence, the performance of the algorithms varies depending on these probabilities.
To observe the impact of the head position probability distribution on the performance of ABR algorithms, we define 12 probability distributions (details in Appendix C). We evaluate BOLA360 and the comparison algorithms against these 12 probability distributions, while the rest of the setting is similar to the experiment in Section 6.3. Specifically, for each chunk \(k\), we replace the set of probabilities with the probabilities calculated from Equation (14) in Appendix C. Note that each head position probability distribution could be interpreted as a different video file. We report the average QoE, playback delay, rebuffering ratio, and average playing bitrate of 100 trials of BOLA360 and the comparison algorithms using each head position probability distribution profile in Figures 8 (average QoE and playing bitrate) and 9 (average rebuffering ratio and playback delay). Figure 8 shows that BOLA360 achieves a relatively higher QoE when the prediction of the FOV is concentrated on a smaller number of tiles. Since Top-D downloads a fixed bitrate for all tiles, the expected QoE of Top-D is independent of the head position probability distribution. A notable observation from the playing bitrates depicted in Figure 8 is that BOLA360 kept the average playing bitrate at a high value for every probability profile, while the achieved QoE is promising, and it kept the rebuffering ratio close to the lowest among all algorithms. **Key takeaway.** The playing bitrate of BOLA360 and most comparison algorithms improves when the head position prediction is concentrated on a smaller number of tiles. Meanwhile, BOLA360 improves the playing bitrate more than the other comparison algorithms. ### Discussion on the Performance of Prediction-based Algorithms This section provides details on why the baseline and state-of-the-art algorithms used in Section 6.2 may fail to perform well in particular scenarios and cannot guarantee their performance in the worst-case scenario. All of the Top-D, DPon, VA-360, 360ProbDASH, and Salient-VR algorithms take action based on the prediction of bandwidth that is given to them. The accuracy of this prediction significantly impacts the performance of these algorithms, such that a prediction with an error may result in a significant difference between the performance of the ABR algorithm and the performance of the optimal offline solution. In addition, these algorithms behave identically under different values of \(\gamma\) in ABR360. For example, for tiny values of \(\gamma\), the bitrate levels of the segments are much more important to the user than the smoothness of streaming; however, Top-D and DPon take the same actions as they would for a large value of \(\gamma\). Figure 8: The average QoE (left) and average playing bitrate (right) over the bitrate selection of BOLA360 and the comparison algorithms using 12 different head position probability distributions over 100 trials. On average, BOLA360 achieves about 30.3% more QoE than Salient-VR, which was the second best algorithm in 11 out of 14 experiments. Figure 7: The average rebuffering ratio (left) and average playback delay (right) over the bitrate selection of BOLA360 and the comparison algorithms for 14 different network profiles and 100 trials. VA-360 usually results in higher rebuffering compared to the other algorithms, while its playback delay is very short. The average playback delay for BOLA360 was 7.8 seconds.
## 7 BOLA360 Enhancements BOLA360 is meticulously designed to excel under all conceivable network conditions, including the most challenging worst-case-like scenarios. The aim of achieving satisfactory performance across all inputs, however, makes BOLA360 often operate conservatively, refraining from switching to higher bitrates in many real-world situations where worst-case conditions fail to materialize. In this section, we propose BOLA360-REP and BOLA360-PL, two heuristic algorithms that improve the practical performance of BOLA360 from two perspectives. First, we introduce BOLA360-PL to address the common drawback of buffer-based ABR algorithms of fetching low-quality bitrates during start or seek times, or during intervals of high oscillation. Second, we propose BOLA360-REP to add segment upgrades to BOLA360. The basic BOLA360 algorithm is not designed to replace previously downloaded segments with higher bitrates, further restricting its adaptability. Consequently, if the network condition momentarily deteriorates and subsequently improves, the algorithm toggles between lower and higher bitrates. While a high buffer level grants an ABR algorithm an opportunity to replace low-bitrate segments downloaded earlier, the fundamental design of BOLA360 fails to support this crucial action. A detailed explanation of both heuristics is given below. BOLA360-PL is a generalized version of BOLA-PL introduced in [32]. It aims to reduce the reaction time of BOLA360 during start and seek times. The reaction time is the duration from when the first segment is fetched (during start time) or the first seek segment is fetched (during seek time) until the bitrates selected by BOLA360 for the tiles stabilize. The main concept behind BOLA360-PL is to virtually increase the buffer level at the start or seek time. This is achieved by estimating the bandwidth and multiplying it by 50% to establish a safe expected bandwidth. To prevent rebuffering, BOLA360-PL limits the bitrate of each segment based on the estimated bandwidth throughput. More specifically, it restricts the size of the entire chunk to \(S_{lim}=Q(t)w_{p}(t)/2D\), where \(w_{p}(t)\) denotes the expected bandwidth capacity at time \(t\). BOLA360-PL virtually inserts a proportional number of segments into the buffer such that the size of the newly downloading chunk does not exceed \(S_{lim}\). The second heuristic is BOLA360-REP, which is a variant of BOLA360 that allows for upgrading previously downloaded segments. One limitation of BOLA360 is its inability to modify previously downloaded segments. Specifically, BOLA360 must make decisions about the next chunk, and it is not designed to replace previously downloaded, lower-quality segments with higher-bitrate ones. BOLA360-REP determines whether it is better to download a new segment for the next chunk or to improve the quality of previously downloaded segments, based on the length of the video available in the buffer. If the decision is to download segments for the next chunk, BOLA360-REP selects the bitrates according to the decision of BOLA360. Figure 9: The average rebuffering ratio (left) and average playback delay (right) over the bitrate selection of BOLA360 and the comparison algorithms using 12 different head position probability distributions over 100 trials. VA-360 usually results in a high rebuffering ratio and short playback delay; meanwhile, the rebuffering ratio of BOLA360 was the lowest in 8 out of 12 experiments. The average playback delay for BOLA360 was 6.7 seconds.
If the decision is to replace previously downloaded segments, BOLA360-REP identifies a tile where there is at least a two-level difference between the bitrate of the downloaded segment for that tile and the bitrate that BOLA360 would select for that tile at the current time. BOLA360-REP then downloads new segments to replace those low-quality segments. ### Experimental Setup We choose the parameters used in Section 6.5 and the head position probability profile 2 defined in that section to evaluate the performance of the heuristic extensions BOLA360-PL and BOLA360-REP. We evaluate the performance of these algorithms in two scenarios: 1) accurate head position probability prediction; and 2) noisy prediction for future chunks. In the first scenario, the head position probabilities provided to the ABR algorithms are identical to the user's actual head position distribution. This means that the algorithm knows the user's head position distribution, even for tiles of chunks that will be played far in the future. In contrast, the second scenario assumes that a 10% error is added to the prediction of head position probabilities for every \(\delta\) seconds of difference between the chunk the user is watching and the chunk the ABR is seeking to obtain head position probabilities for. Note that if the error is greater than 100%, the prediction of the head position is considered unavailable, and the head position probabilities passed to the ABR algorithms are uniform distributions, with \(p_{k,d}=1/D\). ### Experimental Results Figure 10 shows the CDF plots of the average segments' bitrate (left), the reaction time (middle), and the oscillation (right; the average difference between the bitrates of two consecutive segments) of 100 trials with accurate head position probability predictions. The results show that BOLA360-PL significantly reduces the oscillation and reaction time of BOLA360. Since BOLA360-PL improves the bitrate of segments during start and seek times, and these segments are a small fraction of the entire video, the average bitrate of the tiles that BOLA360-PL prepares for the user is only slightly better than the average bitrate of the tiles BOLA360 downloads. In Figure 11, we report the results of the evaluation of BOLA360 and the heuristic versions against the noisy prediction of head positions. Specifically, we report the CDF plots of the average segments' bitrate, reaction time, and oscillation of BOLA360, BOLA360-PL, and BOLA360-REP. The results show that the average bitrates of BOLA360 and BOLA360-PL are reduced compared to the case where accurate head position probabilities were available. On the other hand, BOLA360-REP improves the average bitrate of BOLA360 by up to near 97.6%. In addition, BOLA360-REP reduces the reaction time of BOLA360 by 80.0%. Although BOLA360-REP improves the average bitrate and the reaction time, it increases the oscillation. The average oscillation time for BOLA360 was 1.6 seconds, while this value for BOLA360-REP was 4.5 seconds. Meanwhile, both heuristic versions keep the rebuffering as low as the rebuffering of BOLA360. **Key takeaway.** Each extension of BOLA360 improves the performance in certain aspects, such as bitrate or reaction time. However, each version has drawbacks that may result in lower performance in other aspects. Therefore, no version outperforms the others in all aspects, and depending on the application and user requirements, different versions may be suitable.
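The two heuristic rules described above reduce to small, self-contained computations; the sketch below transcribes the \(S_{lim}\) cap of BOLA360-PL and the two-level replacement test of BOLA360-REP, with function names and the example values being our own illustrative choices (units for \(S_{lim}\) follow the formula as stated in the text).

```
def pl_chunk_size_limit(Q, w_p, D):
    """BOLA360-PL: cap on the total size of the next chunk,
    S_lim = Q(t) * w_p(t) / (2D), where w_p(t) is the estimated bandwidth
    (the 50% factor is the safety margin described above)."""
    return Q * w_p / (2 * D)

def rep_should_replace(downloaded_level, current_level):
    """BOLA360-REP: a previously downloaded segment is replaced when the
    bitrate level BOLA360 would pick now is at least two levels above the
    level already in the buffer."""
    return current_level - downloaded_level >= 2

# Examples (illustrative numbers)
print(pl_chunk_size_limit(Q=10, w_p=8.0, D=8))  # 5.0: cap on the chunk size
print(rep_should_replace(1, 4))                 # True: upgrade the segment
print(rep_should_replace(3, 4))                 # False: only one level apart
```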
## 8 Related Work The prior literature extensively addresses the problem of bitrate and view adaptation in \(360^{\circ}\) video streaming. Previous works commonly employ various machine learning techniques to predict user head movements and incorporate them into existing ABR algorithms. For example, [30] proposes a prediction-based approach and designs an ABR algorithm using historical data from \(360^{\circ}\) video streaming sessions. The focus of their work is on head movement prediction, while the ABR algorithm itself is a heuristic approach lacking rigorous optimization-based mechanisms. The authors in [47] propose a Lyapunov-based model that uses Lyapunov optimization to solve the ABR360 problem. In their work, the quality of the tiles is selected based on motion maps. In addition, they add saliency map information to their model to balance QoE and playback delay. In another work, [48] proposes a different approach by constructing a two-layered hierarchical buffer-based algorithm with short and long buffer layers. The prediction of the FOV is used to update the tiles inside the short buffer layer (short-term improvement). The long buffer layer tries to download the new segments that are not available in the short buffer layer and will be played later. In another work, [41] predicts the head movement by using a saliency map, a tile probability heat map, and LSTM models, and builds an ABR360 algorithm based on these predictions. In another category of work [49, 50, 51, 52, 18], several deep RL-based algorithms are developed for solving bitrate selection problems. They also use a dataset of users' head positions to train the model and find the optimal bitrate selection according to the predicted FOV. In [53], FOV prediction is used to select proper bitrates for tiles in a predicted FOV, with the accuracy of the prediction impacting the final bitrate selection. Other works such as [54, 29, 38, 55, 28] also focus on FOV prediction. The main idea is that users have similar regions-of-interest when watching the same video. They divide the users into clusters such that users inside each cluster have similar regions-of-interest in most videos. Then they give an FOV prediction based on the cluster of a given user and the historical head direction traces of users in the predicted cluster. While these approaches help reduce bandwidth waste, they still require an ABR algorithm to select bitrates within the predicted region. In contrast, BOLA360 is an online algorithm with rigorous performance guarantees, solving the ABR360 problem near-optimally. Guan et al. [31] employ Model Predictive Control (MPC) to select the aggregate bitrate for a segment, allocating it among tiles to maintain quality within the limited bitrate. Figure 10: The CDF of the average bitrate of any downloaded tile (left), reaction time (middle), and oscillation (right) of basic BOLA360 and BOLA360-PL using real network and head movement traces. BOLA360-PL reduces the oscillation and reaction time by 70.9% and 67.8%, respectively. Figure 11: The CDF of the average bitrate of downloaded segments (left), reaction time (middle), and oscillation (right) of basic BOLA360, BOLA360-PL, and BOLA360-REP using real network and head movement traces while the prediction of the head position is dynamically updated. BOLA360-REP improves the average bitrate of downloaded tiles by up to 91.2% compared to basic BOLA360, and reduces the reaction time by 80.0%.
In another category of research [56, 57], an optimized coding/encoding algorithm minimizes bandwidth usage for \(360^{\circ}\) videos, evaluated using real 4K and 8K videos from YouTube. Their experiments use a straightforward ABR algorithm resembling DPon (Section 6). ## 9 Conclusion and Future Directions In this paper, we formulated an optimization problem to maximize users' quality of experience in \(360^{\circ}\) video streaming applications. Then, we proposed BOLA360, an online algorithm that achieves a provably near-optimal solution by selecting a proper bitrate for each tile of a \(360^{\circ}\) video, maximizing the quality while ensuring that the rebuffering rate is minimal. Our comprehensive experimental results showed that BOLA360 performs better than several other alternative algorithms under a wide range of network and head movement profiles. In future work, we plan to develop a data-driven and robust version of BOLA360 to explicitly use future predictions in the decision-making while preserving the theoretical performance guarantees of the algorithm. ## Acknowledgments This research was supported in part by NSF grants CAREER 2045641, CPS-2136199, CNS-2106299, CNS-2102963, CSR-1763617, CNS-2106463, and CNS-1901137. We acknowledge their financial assistance in making this project possible.
2309.14230
Competitive Networked Bivirus SIS spread over Hypergraphs
The paper deals with the spread of two competing viruses over a network of population nodes, accounting for pairwise interactions and higher-order interactions (HOI) within and between the population nodes. We study the competitive networked bivirus susceptible-infected-susceptible (SIS) model on a hypergraph introduced in Cui et al. [1]. We show that the system has, in a generic sense, a finite number of equilibria, and the Jacobian associated with each equilibrium point is nonsingular; the key tool is the Parametric Transversality Theorem of differential topology. Since the system is also monotone, it turns out that the typical behavior of the system is convergence to some equilibrium point. Thereafter, we exhibit a tri-stable domain with three locally exponentially stable equilibria. For different parameter regimes, we establish conditions for the existence of a coexistence equilibrium (both viruses infect separate fractions of each population node).
Sebin Gracy, Brian D. O. Anderson, Mengbin Ye, Cesar A. Uribe
2023-09-25T15:41:58
http://arxiv.org/abs/2309.14230v1
# Competitive Networked Bivirus SIS spread over Hypergraphs ###### Abstract The paper deals with the spread of two competing viruses over a network of population nodes, accounting for pairwise interactions and higher-order interactions (HOI) within and between the population nodes. We study the competitive networked bivirus susceptible-infected-susceptible (SIS) model on a hypergraph introduced in Cui et al. [1]. We show that the system has, in a generic sense, a finite number of equilibria, and the Jacobian associated with each equilibrium point is nonsingular; the key tool is the Parametric Transversality Theorem of differential topology. Since the system is also monotone, it turns out that the typical behavior of the system is convergence to some equilibrium point. Thereafter, we exhibit a tri-stable domain with three locally exponentially stable equilibria. For different parameter regimes, we establish conditions for the existence of a coexistence equilibrium (both viruses infect separate fractions of each population node). ## I Introduction The study of virus spread has been an active area of research for over two centuries. In particular, diverse scientific communities, such as physics [2], mathematics [3], computer science [4], automatic control [5], etc., have significantly aided in furthering our understanding of the complex mechanisms behind the spread of a virus. Fundamental to this effort has been the development of compartmental models where each individual is healthy and susceptible (S), infected with a virus (I), or has recovered from a viral infection (R). Two compartmental models, susceptible-infected-recovered (SIR) and susceptible-infected-susceptible (SIS), have garnered significant attention in several scientific disciplines, particularly in mathematical epidemiology. In contrast to the SIR model, the SIS model allows for the possibility of reinfection and is the focus of the present paper. More specifically, we will deal with networked SIS models, with each node in the network being representative of a large population, and the interconnection among the nodes denotes the possible spreading pathways for the virus. The existing literature on modeling virus spread typically relies on the assumption that there is just a single virus present. However, one often encounters scenarios where there are two viruses, say virus 1 and virus 2, circulating in a meta-population (i.e., a network of population nodes). In such a context, said viruses could be cooperative, i.e., infection with virus 1 (resp. virus 2) increases the likelihood of simultaneous infection with virus 2 (resp. virus 1); see [6] for more details. Another possibility is for the two viruses to compete; infection with virus 1 (resp. virus 2) precludes the possibility of simultaneous infection with virus 2 (resp. virus 1) - this is the focus of the present paper. We stress that the notion of competing viruses is not restricted to just epidemics; it manifests itself in, among others, product adoption in a marketplace and the spread of opinions in social networks [7]. Networked competitive multi-virus SIS models have been analyzed in substantial depth in recent times; see [8, 9, 10, 11, 12, 13, 14, 15]. A major drawback of networked competitive bivirus SIS models studied in the aforementioned papers is that they account only for pairwise interactions between individuals.
In reality, interactions in social groups often involve more than two individuals - it is not unusual that an individual can _simultaneously_ interact with more than one other individual. This motivates the need for higher-order networks such as hypergraphs1, i.e., graphs where an edge can connect more than two nodes, which are quite effective in representing higher-order interactions (HOI) [17]. Inspired by the approach in [18], an SIS model on a hypergraph has been proposed and analyzed in [19]. However, the analytic results therein relied on certain restrictions on the network structure. Overcoming this drawback, a networked SIS model on a hypergraph has been devised and studied in considerable detail in [20]. However, the modeling frameworks in [18, 19, 20] are restrictive in the sense that none of these account for the possibility of more than one virus simultaneously circulating in a given population. Addressing this shortcoming, a competitive networked bivirus SIS model on a hypergraph has been developed and analyzed in [1]. The set of equilibria for the model in [1] can be broadly classified into three categories: the disease-free equilibrium (DFE), where both viruses have been eradicated; the boundary equilibria, where one virus is dead and the other is alive; and the coexistence equilibria, where the two viruses infect separate fractions of every population node in the network. Nevertheless, the results in [1] have the following limitations: a) some of the findings therein have yet to be rigorously established, and b) the analysis, while improving our understanding of the existence and stability of various equilibria, is not exhaustive. The present paper aims to address the aforementioned gaps. Our main contributions, therefore, are as follows: Footnote 1: Simplicial networks (see [16] for more details) have also been used for studying HOI; see [17]. 1. We show that the networked bivirus SIS system with HOI has, in a generic sense, a finite number of equilibria. Furthermore, for each equilibrium, the associated Jacobian is a nonsingular matrix; see Theorem 1. In so doing, since our proof of Theorem 1 does not, unlike the proof of [1, Theorem 5.5], require the HOI infection rates to be set to zero, we establish the correctness of the claim raised in [1, Theorem 5.5]. Building off of Theorem 1 and leveraging the fact that the system is monotone as identified in [1, Theorem 5.5], we prove that the typical behavior of the bivirus SIS system with HOI is convergence to an equilibrium point; see Theorem 2. 2. We identify a parameter regime that not only establishes the existence of three equilibria (a single-virus endemic equilibrium corresponding to virus 1 (resp. virus 2) and the DFE) but also guarantees that all of the said equilibria are locally exponentially stable at the same time; see Proposition 1. 3. We identify a parameter regime, different from the one covered by Proposition 1, for the existence of a coexistence equilibrium. We do so under different configurations of the boundary equilibria, viz. both being unstable and both being stable; see Proposition 3 and Theorem 3, respectively. Additionally, for the parameter regime covered by Proposition 1, we establish the existence of a coexistence equilibrium; see Proposition 4. **Notation**: We denote the set of real numbers by \(\mathbb{R}\) and the set of nonnegative real numbers by \(\mathbb{R}_{+}\). For any positive integer \(n\), we use \([n]\) to denote the set \(\{1,2,...,n\}\).
We use **0** and **1** to denote the vectors whose entries all equal \(0\) and \(1\), respectively, and use \(I\) to denote the identity matrix. For a vector \(x\), we denote the diagonal square matrix with \(x\) along the diagonal by \(\mathrm{diag}(x)\). For any two vectors \(a,b\in\mathbb{R}^{n}\) we write \(a\geq b\) if \(a_{i}\geq b_{i}\) for all \(i\in[n]\), \(a>b\) if \(a\geq b\) and \(a\neq b\), and \(a\gg b\) if \(a_{i}>b_{i}\) for all \(i\in[n]\). Likewise, for any two matrices \(A,B\in\mathbb{R}^{n\times m}\), we write \(A\geq B\) if \(A_{ij}\geq B_{ij}\) for all \(i\in[n]\), \(j\in[m]\), and \(A>B\) if \(A\geq B\) and \(A\neq B\). For a square matrix \(M\), we use \(\sigma(M)\) to denote the spectrum of \(M\), \(\rho(M)\) to denote the spectral radius of \(M\), and \(s(M)\) to denote the spectral abscissa of \(M\), i.e., \(s(M)=\max\{\mathrm{Re}(\lambda):\lambda\in\sigma(M)\}\). A real square matrix \(A\) is called Metzler if all its off-diagonal entries are nonnegative. A matrix \(A\) is said to be an M-matrix if all of its off-diagonal entries are nonpositive, and there exists a constant \(c>0\) such that, for some nonnegative \(B\) and \(c\geq\rho(B)\), \(A=cI-B\). All eigenvalues of an M-matrix have nonnegative real parts. Furthermore, if an M-matrix has an eigenvalue at the origin, we say it is singular; if each eigenvalue has a strictly positive real part, then we say it is nonsingular. If \(A(=[a_{ij}]_{n\times n})\) is a nonnegative matrix, then \(\rho(A)\) decreases monotonically with a decrease in \(a_{ij}\) for any \(i,j\in[n]\). The matrix \(A\) is reducible if, and only if, there is a permutation matrix \(P\) such that \(P^{\top}AP\) is block upper triangular; otherwise, \(A\) is said to be irreducible. If a nonnegative \(A\) is irreducible, and \(Ax=y\) for \(x>\textbf{0}\), then \(y>\textbf{0}\), and \(y\) cannot have a zero in every position where \(x\) has a zero. ## II Problem Formulation ### _Model_ Consider a network of \(n\) nodes. A node represents a well-mixed2 population of individuals. We will assume that the size of the population is fixed. We suppose two viruses, say virus 1 and virus 2, are spreading over such a network. Throughout this paper, we will assume that the two aforementioned viruses are competing. Through pairwise interactions or HOI, as described in more detail below, an otherwise healthy individual in node \(i\) gets infected with virus 1 (resp. virus 2) due to contact with other individuals in node \(i\) who are infected with virus 1 (resp. virus 2) and/or with other individuals in node \(j\) (where \(j\) is a neighbor of \(i\)) who are infected with virus 1 (resp. virus 2). When a single interaction is involved (i.e., between two individuals in node \(i\) or between an individual in node \(i\) and an individual in node \(j\)), we say that the infection is caused due to _pairwise interactions_. An individual in node \(i\) could also be infected with virus 1 (resp. virus 2) due to _simultaneous_ interactions with infected individuals in nodes \(j\) and \(\ell\), where either a) \(j=i\), and/or \(\ell=i\), or b) \(j,\ell\) are neighbors of \(i\). Such interactions are referred to as _higher-order interactions_ (HOI). The notion of competition implies that no individual can be simultaneously infected with virus 1 and virus 2. Footnote 2: Well-mixed means that the probability of any two individuals in a node interacting with each other is the same. 
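To make the spectral notation above concrete, the following short Python sketch (an illustration with arbitrary matrices of our choosing, not part of the analysis) computes \(\rho(\cdot)\) and \(s(\cdot)\), checks the Metzler property, and verifies one instance of the relation between \(s(\Lambda+N)\) and \(\rho(-\Lambda^{-1}N)\) that is recalled as Lemma 2 below.

```python
import numpy as np

def spectral_radius(M):
    # rho(M): largest modulus among the eigenvalues of M
    return np.max(np.abs(np.linalg.eigvals(M)))

def spectral_abscissa(M):
    # s(M): largest real part among the eigenvalues of M
    return np.max(np.linalg.eigvals(M).real)

def is_metzler(M, tol=1e-12):
    # Metzler: all off-diagonal entries are nonnegative
    off_diag = M - np.diag(np.diag(M))
    return bool(np.all(off_diag >= -tol))

# A negative diagonal Lambda plus an irreducible nonnegative N (a weighted 3-cycle)
Lambda = -np.eye(3)
N = 0.5 * np.roll(np.eye(3), 1, axis=1)  # cyclic permutation matrix, scaled
M = Lambda + N
assert is_metzler(M)
# One instance of the Lemma 2 equivalence: s(M) < 0 iff rho(-Lambda^{-1} N) < 1
print(spectral_abscissa(M) < 0, spectral_radius(-np.linalg.inv(Lambda) @ N) < 1)
```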
We assume that the pairwise infection (resp. HOI) rate with respect to virus \(k\) is the same for all nodes, denoted by \(\beta_{1}^{k}\) (resp. \(\beta_{2}^{k}\)) for all \(i\in[n]\) and \(k\in[2]\)3. An individual infected with virus \(k\) recovers from said infection at a healing rate \(\delta_{i}^{k}\) and immediately becomes susceptible again to both virus 1 and virus 2. All individuals within a node have the same healing rate with respect to virus \(k\); individuals in different nodes possibly have different healing rates. We say that node \(i\) is healthy if all individuals in node \(i\) are healthy; otherwise, we say it is infected. Within the same node, it is possible for there to simultaneously exist a fraction of individuals that are infected with virus \(1\) and a different fraction that is infected with virus \(2\). Footnote 3: Indeed, it is far more natural to have possibly different infection rates for each node; this is standard in the literature on classic SIS bivirus networked systems [8, 9, 10, 11, 12, 13, 21]. As evident below, we do not impose constraints on the values of the nonnegative matrices capturing the interactions, and hence the analysis does not differ materially. We choose this particular notation to remain consistent with earlier literature on epidemic models with HOI [20]. As mentioned previously, diseases could spread due to pairwise interactions and HOI. In the case of the former, if an individual in node \(j\) can infect an individual in node \(i\) with virus \(k\), then, with \(a_{ij}^{k}(\geq 0)\) denoting the strength of interactions between an individual in node \(j\) and an individual in node \(i\) with respect to the spread of virus \(k\), we have that \(a_{ij}^{k}>0\); otherwise \(a_{ij}^{k}=0\). For the case of HOI, if an individual in node \(i\) gets infected with virus \(k\) due to simultaneous interactions with individuals in nodes \(j\) and \(\ell\), then, with \(b_{ij\ell}^{k}\) denoting the strength of interaction that nodes \(j\) and \(\ell\) together have on node \(i\) with respect to the spread of virus \(k\), we have that \(b_{ij\ell}^{k}>0\); else, \(b_{ij\ell}^{k}=0\). Let \(x_{i}^{k}(t)\) denote the fraction of individuals infected with virus \(k\) in node \(i\) at time instant \(t\). The evolution of this fraction can, therefore, be represented by the following scalar differential equation [1, Section 5], where, for \(i=1,2,\ldots,n\), we have \[\dot{x}_{i}^{1}= -\delta_{i}^{1}x_{i}^{1}+\beta_{1}^{1}(1-x_{i}^{1}-x_{i}^{2})\sum_{j=1}^{n}a_{ij}^{1}x_{j}^{1}+\beta_{2}^{1}(1-x_{i}^{1}-x_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{1}x_{j}^{1}x_{\ell}^{1}\] \[\dot{x}_{i}^{2}= -\delta_{i}^{2}x_{i}^{2}+\beta_{1}^{2}(1-x_{i}^{1}-x_{i}^{2})\sum_{j=1}^{n}a_{ij}^{2}x_{j}^{2}+\beta_{2}^{2}(1-x_{i}^{1}-x_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{2}x_{j}^{2}x_{\ell}^{2} \tag{1}\] Define \(D^{1}=\operatorname{diag}(\delta_{i}^{1})\), where \(i\in[n]\), and define \(D^{2}\) analogously. 
Define \(X^{1}=\operatorname{diag}(x_{i}^{1})\), where \(i\in[n]\), and define \(X^{2}\) analogously. Let \(A^{1}=[a_{ij}^{1}]_{n\times n}\), and \(A^{2}=[a_{ij}^{2}]_{n\times n}\). Let \(B_{i}^{k}=[b_{ij\ell}^{k}]_{n\times n}\), for each \(i\in[n]\) and \(k\in[2]\). Let \(x^{k}=[x_{1}^{k}\quad x_{2}^{k}\quad\ldots\quad x_{n}^{k}]^{\top}\) for \(k=1,2\). Therefore, in vector form, equation (1) can be written as: \[\dot{x}^{1}= -D^{1}x^{1}+\beta_{1}^{1}(I-X^{1}-X^{2})A^{1}x^{1}+\beta_{2}^{1}(I-X^{1}-X^{2})((x^{1})^{\top}B_{1}^{1}x^{1},(x^{1})^{\top}B_{2}^{1}x^{1},\ldots,(x^{1})^{\top}B_{n}^{1}x^{1})^{\top}\] \[\dot{x}^{2}= -D^{2}x^{2}+\beta_{1}^{2}(I-X^{1}-X^{2})A^{2}x^{2}+\beta_{2}^{2}(I-X^{1}-X^{2})((x^{2})^{\top}B_{1}^{2}x^{2},(x^{2})^{\top}B_{2}^{2}x^{2},\ldots,(x^{2})^{\top}B_{n}^{2}x^{2})^{\top} \tag{2}\] Throughout this document, we will drop the superscript \(k\) while considering the single-virus case. We note that system (2) is a special case of [1, system 5.5] in the following sense: System (2) only accounts for a) the case where, for \(k=1,2\), the rates \(\beta_{1}^{k}\) and \(\beta_{2}^{k}\) are identical for every node \(i\), \(i=1,2,\ldots,n\), and b) the case where virus 1 (resp. virus 2) spreads only due to contact with the infected individuals. In contrast, the model in [1] (see [1, system 5.5]) allows for the possibility of \(\beta_{1}^{k}\) and \(\beta_{2}^{k}\) being not necessarily the same for every node. Furthermore, it also allows for the possibility of the viruses spreading through additional mediums such as a water distribution network, a public transit network, etc. **Remark 1**: _Note that setting \(\beta_{2}^{k}=0\) for \(k=1,2\) results in system (2) coinciding with the classic networked bivirus SIS model studied in, among others, [8, 9, 10, 11, 12, 13]. Setting \(x^{1}(0)=\textbf{0}\) (resp. \(x^{2}(0)=\textbf{0}\)) results in system (2) coinciding with the model used for studying the spread of a single virus over hypergraphs in [20]._ The model in system (2) has three kinds of equilibria, viz. the healthy state or disease-free equilibrium (DFE), \((\textbf{0},\textbf{0})\); single-virus endemic equilibria corresponding to virus \(k\), of the form \((\bar{x}^{k},\textbf{0})\), where \(\textbf{0}\ll\bar{x}^{k}\ll\textbf{1}\) for \(k=1,2\); and coexistence equilibria, \((\bar{x}^{1},\bar{x}^{2})\), where, as we will show in Lemma 1, \(\textbf{0}\ll\bar{x}^{1},\bar{x}^{2}\ll\textbf{1}\), and, furthermore, \(\bar{x}^{1}+\bar{x}^{2}\ll\textbf{1}\). It is unknown whether the single-virus endemic equilibria corresponding to virus \(k\) are unique, in contrast to the classic bivirus SIS network model without HOI. The Jacobian of system (2) evaluated at an arbitrary point, \((x^{1},x^{2})\), in the state space is as given in (3). 
\[J(x^{1},x^{2})=\begin{bmatrix}J_{11}&J_{12}\\ J_{21}&J_{22}\end{bmatrix}, \tag{3}\] where \[J_{11}= -D^{1}+\beta_{1}^{1}(I-X^{1}-X^{2})A^{1}-\operatorname{diag}(\beta_{1}^{1}A^{1}x^{1})+\beta_{2}^{1}(I-X^{1}-X^{2})O_{1}(x^{1})-\beta_{2}^{1}O_{2}(x^{1}) \tag{4}\] \[J_{12}= -\operatorname{diag}(\beta_{1}^{1}A^{1}x^{1})-\beta_{2}^{1}\operatorname{diag}((x^{1})^{\top}B_{i}^{1}x^{1})_{i=1,2,\ldots,n} \tag{5}\] \[J_{21}= -\operatorname{diag}(\beta_{1}^{2}A^{2}x^{2})-\beta_{2}^{2}\operatorname{diag}((x^{2})^{\top}B_{i}^{2}x^{2})_{i=1,2,\ldots,n} \tag{6}\] \[J_{22}= -D^{2}+\beta_{1}^{2}(I-X^{1}-X^{2})A^{2}-\operatorname{diag}(\beta_{1}^{2}A^{2}x^{2})+\beta_{2}^{2}(I-X^{1}-X^{2})O_{3}(x^{2})-\beta_{2}^{2}O_{4}(x^{2}) \tag{7}\] The terms \(O_{1}(x^{1})\), \(O_{2}(x^{1})\), \(O_{3}(x^{2})\) and \(O_{4}(x^{2})\) are as given in (8), (9), (10) and (11), respectively. We will need the following assumptions to ensure the model is well-defined. **Assumption 1**: _The matrix \(D^{k}\), for \(k=1,2\), is a positive diagonal matrix. The matrix \(A^{k}\), for \(k=1,2\), is nonnegative. The matrix \(B_{i}^{k}\) is nonnegative for all \(i\in[n]\) and \(k\in[2]\)._ **Assumption 2**: _The matrix \(A^{k}\), for \(k=1,2\), is irreducible._ We define the set \(\mathcal{D}\) as follows: \[\mathcal{D}:=\{(x^{1},x^{2})\mid x^{k}\geq\textbf{0},k=1,2,\sum_{k=1}^{2}x^{k}\leq\textbf{1}\}. \tag{12}\] It is known that the set \(\mathcal{D}\) is positively invariant, and that the DFE is always an equilibrium for system (2); see [1, Lemma 5.1]. The fact that \(\mathcal{D}\) is positively invariant guarantees that the state values \(x_{i}^{k},k\in[2],i\in[n]\), always stay in the \([0,1]\) interval. Since the states represent fractions of infected individuals in a population node, states taking values outside the \([0,1]\) interval would not correspond to physical reality. ### _Problem Statements_ With respect to system (2), we aim to conclusively answer the following questions in this paper: i) What is the typical behavior that the trajectories exhibit as time goes to infinity? ii) Can we identify a parameter regime such that multiple equilibria are simultaneously stable? iii) Can we identify sufficient conditions for the existence of a coexistence equilibrium? Furthermore, can we establish the stability properties of such an equilibrium based on knowledge of the stability properties of the boundary equilibria? ### _Preliminary Lemmas and analysis of healthy state_ In this subsection, we will establish certain preliminary results on the nature of equilibria of system (2), and recall some results on irreducible matrices - all of these will aid in the development of the main results of the paper. **Lemma 1**: _Consider system (2) under Assumptions 1 and 2. If \(\bar{x}=(\bar{x}^{1},\bar{x}^{2})\in\mathcal{D}\) is an equilibrium of (2), then, for each \(k\in[2]\), either \(\bar{x}^{k}=\textbf{0}\), or \(\textbf{0}\ll\bar{x}^{k}\ll\textbf{1}\). Moreover, \(\sum_{k=1}^{2}\bar{x}^{k}\ll\textbf{1}\)._ The proof is inspired by [11, Lemma 3.1]. _Proof:_ It is clear that \((\textbf{0},\textbf{0})\) is an equilibrium of (2). Therefore, in the rest of the proof, we will show that any non-zero equilibrium \(\bar{x}=(\bar{x}^{1},\bar{x}^{2})\) of (2) must satisfy, for each \(k\in[2]\), \(\textbf{0}\ll\bar{x}^{k}\ll\textbf{1}\) and \(\sum_{k=1}^{2}\bar{x}^{k}\ll\textbf{1}\). 
We start off by showing that \(\bar{x}^{1}+\bar{x}^{2}\ll\textbf{1}\). The terms \(O_{1}(x^{1})\), \(O_{2}(x^{1})\), \(O_{3}(x^{2})\), and \(O_{4}(x^{2})\) are given by \[O_{1}(x^{1})=\begin{bmatrix}(x^{1})^{\top}(B_{1}^{1}+(B_{1}^{1})^{\top})\\ \vdots\\ (x^{1})^{\top}(B_{n}^{1}+(B_{n}^{1})^{\top})\end{bmatrix} \tag{8}\] \[O_{2}(x^{1})=\text{diag}((x^{1})^{\top}B_{i}^{1}x^{1})_{i=1,2,\ldots,n} \tag{9}\] \[O_{3}(x^{2})=\begin{bmatrix}(x^{2})^{\top}(B_{1}^{2}+(B_{1}^{2})^{\top})\\ \vdots\\ (x^{2})^{\top}(B_{n}^{2}+(B_{n}^{2})^{\top})\end{bmatrix} \tag{10}\] \[O_{4}(x^{2})=\text{diag}((x^{2})^{\top}B_{i}^{2}x^{2})_{i=1,2,\ldots,n} \tag{11}\] For any \(i\in[n]\), observe that the following is satisfied: \[\dot{\bar{x}}_{i}^{1}+\dot{\bar{x}}_{i}^{2}= -\delta_{i}^{1}\bar{x}_{i}^{1}-\delta_{i}^{2}\bar{x}_{i}^{2}+\beta_{1}^{1}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j=1}^{n}a_{ij}^{1}\bar{x}_{j}^{1}+\beta_{2}^{1}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{1}\bar{x}_{j}^{1}\bar{x}_{\ell}^{1}+\beta_{1}^{2}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j=1}^{n}a_{ij}^{2}\bar{x}_{j}^{2}+\beta_{2}^{2}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{2}\bar{x}_{j}^{2}\bar{x}_{\ell}^{2} \tag{13}\] Suppose that, for some \(i\in[n]\), \(\bar{x}_{i}^{1}+\bar{x}_{i}^{2}=1\). Then every infection term in (13) vanishes, since each contains the factor \(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2}\). Therefore, since, by Assumption 1, \(\delta_{i}^{k}>0\) for \(k=1,2\), and since \(\bar{x}\in\mathcal{D}\), from (13), it is clear that \(\dot{\bar{x}}_{i}^{1}+\dot{\bar{x}}_{i}^{2}<0\). However, since by assumption \(\bar{x}=(\bar{x}^{1},\bar{x}^{2})\) is an equilibrium, it must be that \(\dot{\bar{x}}_{i}^{1}+\dot{\bar{x}}_{i}^{2}=0\), which is a contradiction. Therefore, for all \(i\in[n]\), \(\bar{x}_{i}^{1}+\bar{x}_{i}^{2}<1\), which implies that \(\sum_{k=1}^{2}\bar{x}^{k}\ll\textbf{1}\); thus guaranteeing that \(\bar{x}^{k}\ll\textbf{1}\) for \(k=1,2\). We are left to show that \(\bar{x}^{k}\gg\textbf{0}\) for \(k=1,2\). To this end, suppose that \(\bar{x}^{1}>\textbf{0}\) is an equilibrium point for which there exists at least one (but possibly more) \(i\in[n]\) such that \(\bar{x}_{i}^{1}=0\). Note that the equilibrium version of the first line of equation (2) yields the following: \[\textbf{0}= -D^{1}\bar{x}^{1}+\beta_{1}^{1}(I-\bar{X}^{1}-\bar{X}^{2})A^{1}\bar{x}^{1}+\beta_{2}^{1}(I-\bar{X}^{1}-\bar{X}^{2})((\bar{x}^{1})^{\top}B_{1}^{1}\bar{x}^{1},(\bar{x}^{1})^{\top}B_{2}^{1}\bar{x}^{1},\ldots,(\bar{x}^{1})^{\top}B_{n}^{1}\bar{x}^{1})^{\top} \tag{14}\] By a suitable rearrangement of terms, we obtain: \[\bar{x}^{1}= S\bar{x}^{1}, \tag{15}\] where \[S= (D^{1})^{-1}\beta_{1}^{1}(I-\bar{X}^{1}-\bar{X}^{2})A^{1}+(D^{1})^{-1}\beta_{2}^{1}(I-\bar{X}^{1}-\bar{X}^{2})\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\\ \vdots\\ (\bar{x}^{1})^{\top}B_{n}^{1}\end{bmatrix}. \tag{16}\] By Assumptions 1 and 2, it is clear that the matrix \(S\) is nonnegative and irreducible. Since, by assumption, \(\bar{x}^{1}>\textbf{0}\), from (15), coupled with the aforementioned property of irreducible nonnegative matrices, we have the following: i) \(S\bar{x}^{1}>\textbf{0}\), and ii) there is at least one \(i\in[n]\) such that \(\bar{x}_{i}^{1}=0\) but \((S\bar{x}^{1})_{i}>0\). Note that ii) contradicts (15). Therefore, if \(\bar{x}^{1}>\textbf{0}\) is an equilibrium point, then it must be that \(\bar{x}^{1}\gg\textbf{0}\). By an analogous argument, it can be shown that \(\bar{x}^{2}\gg\textbf{0}\), thus completing the proof. 
\(\Box\) **Lemma 2**: _[_10_, Proposition 1]_ _Suppose that \(\Lambda\) is a negative diagonal matrix and \(N\) is an irreducible nonnegative matrix. Let \(M\) be the irreducible Metzler matrix \(M=\Lambda+N\). Then, \(s(M)<0\) if and only if \(\rho(-\Lambda^{-1}N)<1\); \(s(M)=0\) if and only if \(\rho(-\Lambda^{-1}N)=1\); and \(s(M)>0\) if and only if \(\rho(-\Lambda^{-1}N)>1\)._ **Lemma 3**: _[_22_, Proposition 2]_ _Let \(A\in\mathbb{R}^{n\times n}\) be Metzler. Then, \(A\) is Hurwitz if, and only if, there exists an \(x\in\mathbb{R}^{n}\) such that \(x\gg\textbf{0}\) and \(Ax\ll\textbf{0}\)._ **Lemma 4**: _[_23_, Chapter 8.3]_ _[_24_, Theorem 2.7]_ _Suppose that \(N\) is an irreducible nonnegative matrix. Then,_ 1. \(r=\rho(N)\) _is a simple eigenvalue of_ \(N\)_._ 2. _There is an eigenvector_ \(\zeta\gg\textbf{0}\) _corresponding to the eigenvalue_ \(r\)_._ 3. \(x>\textbf{0}\) _is an eigenvector only if_ \(Nx=rx\) _and_ \(x\gg\textbf{0}\)_._ 4. _If_ \(A\) _is a nonnegative matrix such that_ \(A<N\)_, then_ \(\rho(A)<\rho(N)\)_._ \(\blacksquare\)__ It can be seen that \((\textbf{0},\textbf{0})\) is an equilibrium of (2), and is referred to as the disease-free equilibrium (DFE). We recall a sufficient condition for convergence to the DFE. **Lemma 5**: _[_1_, Theorem 5.2, statement 1]_ _Consider system (2) under Assumptions 1 and 2. If, for \(k=1,2\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\), then the DFE is locally stable._ Note that the guarantees provided by Lemma 5 are only local. It turns out that the DFE, under appropriate conditions, is endowed with stronger stability guarantees. We define for \(k=1,2\) the following matrices: \[R^{k}:=\begin{bmatrix}\mathbf{1}^{\top}B_{1}^{k}\\ \vdots\\ \mathbf{1}^{\top}B_{n}^{k}\end{bmatrix}.\] With the matrices \(R^{k}\), \(k=1,2\), in hand, we can recall the following result. **Lemma 6**: _[_1_, Theorem 5.2, statement 2]_ _Consider system (2) under Assumptions 1 and 2. If, for \(k=1,2\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k}+\beta_{2}^{k}(D^{k})^{-1}R^{k})<1\), then the DFE is globally exponentially stable._ ## III Monotone dynamical systems and competitive bivirus networked SIS models with HOI Monotone dynamical systems (MDS) are a class of systems that have found resonance in mathematical epidemiology; one of the major reasons for this is the fact that MDS, assuming that they have a finite number of equilibria, converge to a (stable) equilibrium point for almost all initial conditions, i.e., for all but a set of initial conditions of measure zero. The finiteness assumption typically holds generically, i.e., for all but a set of parameter values of measure zero; this exceptional set of parameter values is an algebraic or semi-algebraic set. It is known that under Assumptions 1 and 2, system (2) is monotone; see [1, Theorem 5.5]. That is, suppose that \((x_{A}^{1}(0),x_{A}^{2}(0))\) and \((x_{B}^{1}(0),x_{B}^{2}(0))\) are two initial conditions in \(\text{int}(\mathcal{D})\) satisfying i) \(x_{A}^{1}(0)>x_{B}^{1}(0)\) and ii) \(x_{A}^{2}(0)<x_{B}^{2}(0)\). Since system (2) is monotone, it follows that, for all \(t\in\mathbb{R}_{\geq 0}\), i) \(x_{A}^{1}(t)>x_{B}^{1}(t)\), and ii) \(x_{A}^{2}(t)<x_{B}^{2}(t)\). However, since the proof for finiteness of equilibria in [1, Theorem 5.5] is not complete, it leaves open the issue of generic convergence to an equilibrium point. To remedy this, we provide a different proof for generic finiteness of equilibria that does not rely on \(\beta_{2}^{k}=0\) for \(k=1,2\). 
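As a quick empirical companion to the monotonicity property just described, the following Python sketch (our own construction, with arbitrary illustrative parameters and a crude forward-Euler integrator) simulates a two-node instance of system (2) from two initial conditions ordered as above and checks that the ordering persists along the trajectories; it is a sanity check, not a substitute for the proof in [1, Theorem 5.5].

```python
import numpy as np

n = 2
D1 = D2 = np.eye(n)
A1 = np.ones((n, n))                          # irreducible pairwise matrices
A2 = np.ones((n, n))
B1 = np.zeros((n, n, n)); B1[0, 0, 1] = 1.0   # one HOI term per virus
B2 = np.zeros((n, n, n)); B2[1, 1, 0] = 1.0   # (arbitrary illustrative choices)
b11 = b21 = b12 = b22 = 1.0                   # beta_1^1, beta_2^1, beta_1^2, beta_2^2

def rhs(x1, x2):
    s = 1.0 - x1 - x2                          # susceptible fractions per node
    h1 = np.array([x1 @ B1[i] @ x1 for i in range(n)])
    h2 = np.array([x2 @ B2[i] @ x2 for i in range(n)])
    dx1 = -D1 @ x1 + b11 * s * (A1 @ x1) + b21 * s * h1
    dx2 = -D2 @ x2 + b12 * s * (A2 @ x2) + b22 * s * h2
    return dx1, dx2

# Initial conditions ordered in the "southeast" sense: x_A^1 > x_B^1, x_A^2 < x_B^2
xA1, xA2 = np.array([0.4, 0.4]), np.array([0.1, 0.1])
xB1, xB2 = np.array([0.2, 0.2]), np.array([0.3, 0.3])
dt, tol = 1e-3, 1e-9
for _ in range(50_000):
    dA1, dA2 = rhs(xA1, xA2); dB1, dB2 = rhs(xB1, xB2)
    xA1, xA2 = xA1 + dt * dA1, xA2 + dt * dA2
    xB1, xB2 = xB1 + dt * dB1, xB2 + dt * dB2
    assert np.all(xA1 >= xB1 - tol) and np.all(xA2 <= xB2 + tol)  # ordering kept
```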
Given that nonlinear systems can have complex equilibria patterns, including a continuum of equilibria for the classic bivirus network model, we establish that, for generic parameter matrices, system (2) has a finite number of equilibria. We use arguments very much like those in [12]. Essentially because the healthy equilibrium and the single-virus boundary equilibria can be conveniently studied using single-virus techniques, it is easily established that there are no continua of equilibria confined to any boundary, i.e. any continuum of equilibria necessarily includes a continuum of coexistence equilibria. Therefore, we focus on showing that such equilibria cannot exist for generic parameter values. The tool is the Parametric Transversality Theorem; see [25, p. 145] and [26, p. 68]. The main result is as follows: **Theorem 1**: _Consider the model of (2), under Assumptions 1 and 2. With any fixed matrices \(A^{k}\) and nonnegative \(B_{i}^{k}\), and the exclusion of a set of values for the entries of \(D^{1},D^{2}\) of measure zero, the number of coexistence equilibrium points is finite, and the associated vector field zero is nondegenerate, i.e. the associated Jacobian is nonsingular. Similarly, with any fixed \(D^{1},D^{2}\) and \(B_{i}^{k}\), and the exclusion of a set of values for the entries of \(A^{1},A^{2}\) of measure zero, the same properties of equilibrium points hold._ See Appendix. \(\Box\) Theorem 1, coupled with the fact that system (2) is monotone, allows us to leverage Hirsch's generic convergence theorem [27] to draw conclusions on the limiting behavior of system (2) outside of the specific conditions identified in Lemma 5. We have the following result. **Theorem 2**: _Consider system (2) under Assumptions 1 and 2. For all initial conditions \((x^{1}(0),x^{2}(0))\in\mathcal{D}\) except possibly for a set of measure zero, the system (2) will converge to an equilibrium. If the system does not converge to an equilibrium, it is on a nonattractive limit cycle._ In words, Theorem 2 establishes that the typical behavior of system (2) is convergence to _some_ equilibrium; this could be the healthy state, or (one of the possibly many) single-virus boundary equilibria, or a coexistence equilibrium. It further says that limit cycles, if any, are nonattractive. No more complicated behavior is allowed; chaos can be ruled out, see [28]. Thus, Theorem 2 answers question i) raised in Section II. Theorem 2 strengthens the result in [11, Theorem 3.6] by extending the generic convergence behavior to bivirus SIS models that also account for HOI. Furthermore, it establishes the correctness of a similar claim raised in [1, Theorem 5.5]. ## IV Existence and local stability of boundary equilibria In this section, we identify a parameter regime that permits three equilibria of the bivirus system (2) to be simultaneously locally exponentially stable. Subsequently, for a parameter regime different from the one mentioned above, we identify a condition for the existence and instability of a boundary equilibrium. Finally, when there is only one virus, we identify a condition for the existence and local exponential stability of an endemic equilibrium. **Proposition 1**: _Consider system (2) under Assumptions 1 and 2, and \(B_{i}^{k}\geq 0\) for all \(i\in[n]\) and \(k\in[2]\). Define, for \(k=1,2\), \(\mathbf{1}_{B^{k}}\in\{0,1\}^{n}\) by \((\mathbf{1}_{B^{k}})_{i}=1\) if \(B_{i}^{k}\neq\mathbf{0}\); otherwise \((\mathbf{1}_{B^{k}})_{i}=0\). 
Suppose that the following conditions are fulfilled for \(k=1,2\):_ 1. \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\)_, and_ 2. \(\min_{i:\,B_{i}^{k}\neq\mathbf{0}}\left(\frac{\beta_{1}^{k}}{\delta_{i}^{k}}(A^{k}\mathbf{1}_{B^{k}})_{i}+\frac{\beta_{2}^{k}}{2\delta_{i}^{k}}\mathbf{1}_{B^{k}}^{\top}B_{i}^{k}\mathbf{1}_{B^{k}}\right)>2\)_._ _Then, the following statements are true: i) the DFE is locally exponentially stable; ii) there exist equilibria \(\bar{x}^{k}\gg\mathbf{0}\) such that \(\bar{x}^{k}_{i}\geq\frac{1}{2}\) for \(k=1,2\), for any \(i\) such that \(B_{i}^{k}\neq\mathbf{0}\); iii) any such equilibrium point \((\bar{x}^{1},\mathbf{0})\) is locally exponentially stable; and iv) any such equilibrium point \((\mathbf{0},\bar{x}^{2})\) is locally exponentially stable._ The proof is inspired by [20, Theorem 5.1, statements iv) and v)]. _Proof of statement i):_ Note that the Jacobian evaluated at the DFE is as follows: \[J(\mathbf{0},\mathbf{0})=\begin{bmatrix}-D^{1}+\beta_{1}^{1}A^{1}&\mathbf{0}\\ \mathbf{0}&-D^{2}+\beta_{1}^{2}A^{2}\end{bmatrix}.\] By assumption, \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\), for \(k=1,2\). Therefore, from Lemma 2, it must be that \(s(-D^{k}+\beta_{1}^{k}A^{k})<0\) for \(k=1,2\), which, since \(J(\mathbf{0},\mathbf{0})\) is a block diagonal matrix, and since the matrices \(-D^{1}+\beta_{1}^{1}A^{1}\) and \(-D^{2}+\beta_{1}^{2}A^{2}\) are the only blocks along the main diagonal, implies that \(s(J(\mathbf{0},\mathbf{0}))<0\). Local exponential stability of the DFE, then, follows from [29, Theorem 4.15 and Corollary 4.3]. _Proof of statement ii):_ See [20, Theorem 5.1, statement iv)]. _Proof of statement iii):_ Consider the equilibrium point \((\bar{x}^{1},\mathbf{0})\), and observe that the Jacobian evaluated at this equilibrium is as follows: \[J(\bar{x}^{1},\mathbf{0})=\begin{bmatrix}\bar{J}_{11}&\bar{J}_{12}\\ \mathbf{0}&\bar{J}_{22}\end{bmatrix}, \tag{17}\] where \[\bar{J}_{11} =-D^{1}+\beta_{1}^{1}(I-\bar{X}^{1})A^{1}-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})+\beta_{2}^{1}(I-\bar{X}^{1})O_{1}(\bar{x}^{1})-\beta_{2}^{1}O_{2}(\bar{x}^{1})\] \[\bar{J}_{12} =-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})-\beta_{2}^{1}\mathrm{diag}((\bar{x}^{1})^{\top}B_{i}^{1}\bar{x}^{1})_{i=1,\ldots,n}\] \[\bar{J}_{22} =-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2}.\] The terms \(O_{1}(\bar{x}^{1})\) and \(O_{2}(\bar{x}^{1})\) are as defined in (8) and (9). We will establish the stability of the 11 and 22 blocks (i.e., \(\bar{J}_{11}\) and \(\bar{J}_{22}\)) separately. 
Observe that \[\bar{J}_{11}= -D^{1}+\beta_{1}^{1}(I-\bar{X}^{1})A^{1}-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})+\beta_{2}^{1}(I-\bar{X}^{1})O_{1}(\bar{x}^{1})-\beta_{2}^{1}\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\bar{x}^{1}&&\\ &\ddots&\\ &&(\bar{x}^{1})^{\top}B_{n}^{1}\bar{x}^{1}\end{bmatrix}.\] Define summands \[Q_{1} :=-D^{1}+\beta_{1}^{1}(I-\bar{X}^{1})A^{1}+\beta_{2}^{1}(I-\bar{X}^{1})\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\\ \vdots\\ (\bar{x}^{1})^{\top}B_{n}^{1}\end{bmatrix},\;\text{and}\] \[Q_{2} :=\beta_{2}^{1}(I-\bar{X}^{1})\begin{bmatrix}(\bar{x}^{1})^{\top}(B_{1}^{1})^{\top}\\ \vdots\\ (\bar{x}^{1})^{\top}(B_{n}^{1})^{\top}\end{bmatrix}-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})-\beta_{2}^{1}\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\bar{x}^{1}&&\\ &\ddots&\\ &&(\bar{x}^{1})^{\top}B_{n}^{1}\bar{x}^{1}\end{bmatrix}.\] It is immediate that \(\bar{J}_{11}=Q_{1}+Q_{2}\), which implies that \(\bar{J}_{11}\bar{x}^{1}=Q_{1}\bar{x}^{1}+Q_{2}\bar{x}^{1}\). Since \(\bar{x}^{1}\) is a single-virus endemic equilibrium corresponding to virus 1, by taking recourse to the equilibrium version of the first line of equation (2), it is clear that \(Q_{1}\bar{x}^{1}=\mathbf{0}\). Hence, \(\bar{J}_{11}\bar{x}^{1}=Q_{2}\bar{x}^{1}\). Note that \[Q_{2}\bar{x}^{1} =\beta_{2}^{1}(I-\bar{X}^{1})\begin{bmatrix}(\bar{x}^{1})^{\top}(B_{1}^{1})^{\top}\bar{x}^{1}\\ \vdots\\ (\bar{x}^{1})^{\top}(B_{n}^{1})^{\top}\bar{x}^{1}\end{bmatrix}-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})\bar{x}^{1}-\beta_{2}^{1}\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\bar{x}^{1}&&\\ &\ddots&\\ &&(\bar{x}^{1})^{\top}B_{n}^{1}\bar{x}^{1}\end{bmatrix}\bar{x}^{1}. \tag{18}\] Denote by \((Q_{2}\bar{x}^{1})_{i}\) the \(i^{th}\) entry of the vector \(Q_{2}\bar{x}^{1}\). Therefore, in view of (18), we have the following: \[(Q_{2}\bar{x}^{1})_{i}=-\beta_{1}^{1}\Big{(}\sum_{j=1}^{n}a_{ij}^{1}\bar{x}_{j}^{1}\Big{)}\bar{x}_{i}^{1}+\beta_{2}^{1}(1-2\bar{x}_{i}^{1})((\bar{x}^{1})^{\top}B_{i}^{1}\bar{x}^{1}) \tag{19}\] We consider its sign under two circumstances. Suppose first that \(B_{i}^{1}=\mathbf{0}\). Then, in view of (19), since by Assumption 2 the matrix \(A^{1}\) is irreducible, \(\beta_{1}^{1}>0\), and from statement ii) we know that \(\bar{x}^{1}\gg\mathbf{0}\), it must be that \((Q_{2}\bar{x}^{1})_{i}<0\). Suppose secondly that \(B_{i}^{1}\neq\mathbf{0}\). Since from statement ii) we know that \(\bar{x}_{i}^{1}\geq\frac{1}{2}\), it follows that \(1-2\bar{x}_{i}^{1}\leq 0\); combined with the strictly negative first term in (19), this implies that \((Q_{2}\bar{x}^{1})_{i}<0\). Since the choice of index \(i\) was arbitrary, in both cases we have \((Q_{2}\bar{x}^{1})_{i}<0\) for all \(i\in[n]\). Hence, since \(\bar{J}_{11}\bar{x}^{1}=Q_{2}\bar{x}^{1}\), it follows that \((\bar{J}_{11}\bar{x}^{1})_{i}<0\) for all \(i\in[n]\). Note that Assumptions 1 and 2 guarantee that \(Q_{1}\) is an irreducible Metzler matrix and that \(Q_{2}\) is Metzler; hence, the matrix \(\bar{J}_{11}\) is an irreducible Metzler matrix. Therefore, from Lemma 3, it must be that the matrix \(\bar{J}_{11}\) is Hurwitz. Turning our attention to the matrix \(\bar{J}_{22}\), consider the matrices \(\beta_{1}^{2}(D^{2})^{-1}A^{2}\) and \(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2}\). From Assumption 1, it is clear that \(\beta_{1}^{2}(D^{2})^{-1}A^{2}\) is a nonnegative matrix. 
Since, from statement ii), \(\bar{x}^{1}\) satisfies \(\mathbf{0}\ll\bar{x}^{1}\ll\mathbf{1}\), it is also clear that \(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2}\) is a nonnegative matrix. Furthermore, we also immediately obtain the following: \[\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2}<\beta_{1}^{2}(D^{2})^{-1}A^{2}.\] Therefore, since the spectral radius of a nonnegative matrix decreases monotonically with a decrease in any entry of said matrix, it follows that \(\rho(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2})\leq\rho(\beta_{1}^{2}(D^{2})^{-1}A^{2})\). By assumption, \(\rho(\beta_{1}^{2}(D^{2})^{-1}A^{2})<1\), which implies that \(\rho(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2})<1\), and consequently, from Lemma 2, we have that \(s(-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2})<0\). Therefore, since \(J(\bar{x}^{1},\mathbf{0})\) is block upper triangular, and since we have already established that \(\bar{J}_{11}\) is Hurwitz, it follows that \(s(J(\bar{x}^{1},\mathbf{0}))<0\). Local exponential stability of \((\bar{x}^{1},\mathbf{0})\), then, follows from [29, Theorem 4.15 and Corollary 4.3]. _Proof of statement iv):_ The proof is analogous to that of statement iii). Proposition 1 answers question ii) raised in Section II-B. Proposition 1 guarantees the existence and simultaneous local exponential stability of three equilibria, whereas [1, Theorem 5.3], assuming that an endemic equilibrium exists, guarantees its local stability. Furthermore, the possibility of the DFE being simultaneously locally stable is alluded to in [1, Remark 10]. On the other hand, for a parameter regime different from the one covered in Proposition 1, assuming that the terms corresponding to HOI are sufficiently small, [1, Theorem 5.3] secures global stability of the endemic equilibrium. The following remarks are in order. **Remark 2**: _Proposition 1 sheds light on an interesting phenomenon that bivirus spread over a hypergraph exhibits (but bivirus spread over an ordinary graph does not exhibit): the identification of a parameter regime that permits three equilibria, namely the DFE and the two boundary equilibria, to be simultaneously stable. This is an extension of the single-virus case studied in [20], which permitted the simultaneous stability of the DFE and (since there is only one in this case) an endemic equilibrium._ **Remark 3**: _It is known that, assuming \(\beta_{2}^{k}=0\) for \(k=1,2\), the condition \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})\leq 1\) guarantees that the DFE is the only equilibrium of system (2); see [21, Lemma 2]. However, as Proposition 1 shows, that is not necessarily true when considering bivirus SIS spread over hypergraphs._ Proposition 1 guarantees the existence of boundary equilibria for the case when \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\). It is natural to ask if one is assured of existence even if the spectral radii of the relevant quantities are larger than one. The following proposition addresses this issue. **Proposition 2**: _Consider system (2) under Assumptions 1 and 2. Suppose that, for all \(k\in[2]\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})>1\). Then system (2) has at least three equilibria, namely the DFE, a single-virus endemic equilibrium corresponding to virus \(1\), \((\bar{x}^{1},\mathbf{0})\), and a single-virus endemic equilibrium corresponding to virus \(2\), \((\mathbf{0},\bar{x}^{2})\). 
Furthermore, if \(s(-D^{i}+\beta_{1}^{i}(I-\bar{X}^{k})A^{i})>0\) for \(i,k\in[2]\) such that \(i\neq k\), then the equilibrium points \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) are unstable._ Proof.: Observe that the DFE is always an equilibrium of system (2). If, for some \(k\in[2]\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})>1\), then from [20, Theorem 5.1, statement iii)] we know that there exists an endemic equilibrium, \(\bar{x}^{k}\), where \(\mathbf{0}\ll\bar{x}^{k}\ll\mathbf{1}\). Since, by assumption, \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})>1\) for all \(k\in[2]\), it is also immediate that there exist equilibria, \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\), where \(\mathbf{0}\ll\bar{x}^{k}\ll\mathbf{1}\), for \(k=1,2\). It can be verified that \(J(\bar{x}^{1},\mathbf{0})\) is block upper triangular, with the matrix \(-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2}\) being one of the blocks along the diagonal. By assumption, \(s(-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2})>0\), which implies that \(s(J(\bar{x}^{1},\mathbf{0}))>0\). Consequently, instability of \((\bar{x}^{1},\mathbf{0})\) follows from [29, Theorem 4.7, statement ii)]. The instability of \((\mathbf{0},\bar{x}^{2})\) can be shown analogously, thus completing the proof. \(\Box\) **Remark 4**: _Proposition 2 (resp. Proposition 1) guarantees the existence (resp. existence and local exponential stability) of the equilibrium points, \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\). It turns out that it is possible to compute these points iteratively; see [20, Theorem 5.3]._ ## V (Non)existence of Coexistence equilibria This section identifies sufficient conditions for the existence (resp. nonexistence) of coexistence equilibria. Specifically, for investigating existence, we consider two parameter regimes, viz. for \(k=1,2\), i) \(s(-D^{k}+\beta_{1}^{k}A^{k})>0\), and ii) \(s(-D^{k}+\beta_{1}^{k}A^{k})<0\). Further, for parameter regime i), we consider two stability configurations of the boundary equilibria, viz. a) both being unstable and b) both being stable; for parameter regime ii), we consider the case where both boundary equilibria are stable. **Proposition 3**: _Consider system (2) under Assumptions 1 and 2. Let \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) denote a single-virus endemic equilibrium corresponding to virus 1 and virus 2, respectively. Suppose that the following conditions are satisfied:_ i) \(s(-D^{1}+\beta_{1}^{1}A^{1})>0\)_;_ ii) \(s(-D^{2}+\beta_{1}^{2}A^{2})>0\)_;_ iii) \(s(-D^{1}+\beta_{1}^{1}(I-\bar{X}^{2})A^{1})>0\)_; and_ iv) \(s(-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2})>0\)_._ _Then there exists at least one equilibrium of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\mathbf{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\mathbf{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\mathbf{1}\)._ Before proving the claim in Proposition 3, we need the following background material. In line with the terminology of [30], given an equilibrium point of system (2), we classify the same as saturated or unsaturated. We say that an equilibrium is saturated (resp. strictly saturated) if every eigenvalue of the diagonal block of the Jacobian corresponding to the zero entries of said equilibrium has nonpositive real part, with at most a single eigenvalue at the origin (resp. has strictly negative real part), and unsaturated otherwise [30]. 
A boundary equilibrium of (2) is saturated if and only if said boundary equilibrium is locally exponentially stable; this follows immediately by noting the structure of the Jacobian matrix, evaluated at a boundary equilibrium, see (17). The definition also implies that every fixed point in the interior of \(\mathcal{D}\), irrespective of its stability properties, is saturated [30]; therefore, from Lemma 1, we have that every coexistence equilibrium of system (2) is saturated. _Proof:_ Conditions i) and ii) of Proposition 3 guarantee the existence of the boundary equilibria, \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\); see Proposition 2. Observe that [1, Lemma 5.1] guarantees that, for each \(k\in[2]\), \(x^{k}(0)\geq\mathbf{0}\) implies that \(x^{k}(t)\geq\mathbf{0}\) for all \(t\in\mathbb{R}_{\geq 0}\), and that the set \(\mathcal{D}\) (which is compact) is forward invariant. Therefore, from [30, Theorem 2], it follows that system (2) has at least one saturated fixed point. There are two cases to consider. Case 1: Suppose the aforementioned saturated fixed point is in the interior of \(\mathcal{D}\). Note that any fixed point in the interior of \(\mathcal{D}\) is of the form \((\hat{x}^{1},\hat{x}^{2})\), where \(\hat{x}^{1},\hat{x}^{2}\gg\mathbf{0}\), thus implying that \((\hat{x}^{1},\hat{x}^{2})\) is a coexistence equilibrium. From Lemma 1, it must necessarily satisfy \(\mathbf{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\mathbf{1}\), and \(\hat{x}^{1}+\hat{x}^{2}\ll\mathbf{1}\). Case 2: Suppose, to obtain a contradiction, that there are no fixed points in the interior of \(\mathcal{D}\). This implies that there must be a saturated fixed point on the boundary of \(\mathcal{D}\)[30]. Since conditions i) and ii) guarantee that the DFE is unstable, and hence unsaturated, at least one of the single-virus boundary equilibria must be saturated. However, from Proposition 2, it is clear that conditions iii) and iv) guarantee that the boundary equilibria are unstable, thus implying that they are unsaturated, and the contradiction is obtained. \(\Box\) Proposition 3 is implied by [1, Theorem 5.4], which, assuming \(\beta_{2}^{k}=0\) for \(k=1,2\), is the same as [9, Theorem 5]. The proof technique in [1, Theorem 5.4] is quite involved since it primarily relies on fixed point mapping, Perron-Frobenius theory, etc. Our proof is significantly shorter. Note that [30, Theorem 2] is a key ingredient of our proof strategy. In light of Theorem 1, one could perhaps leverage [30, Theorem 2] to obtain a lower bound on the number of coexistence equilibria for the stability configuration of boundary equilibria given in Proposition 3, as has been done for classic bivirus networked SIS models; see [12, Corollary 3.9, statement 2]. Subsequently, one could possibly exploit the properties of MDS to conclude that there must exist a locally exponentially stable coexistence equilibrium. Observe that in Proposition 3 the demonstration of the existence of a coexistence equilibrium point \((\hat{x}^{1},\hat{x}^{2})\) relies on the assumption that both boundary equilibria are unstable. We now present a different condition that also guarantees the existence of a coexistence equilibrium point \((\hat{x}^{1},\hat{x}^{2})\) even when both boundary equilibria are stable. **Theorem 3**: _Consider system (2) under Assumptions 1 and 2. Let \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) denote a single-virus endemic equilibrium corresponding to virus 1 and virus 2, respectively. Suppose that the following conditions are satisfied:_ i) 
\(s(-D^{1}+\beta_{1}^{1}A^{1})>0\)_;_ ii) \(s(-D^{2}+\beta_{1}^{2}A^{2})>0\)_._ _Suppose, furthermore, that both \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) are locally exponentially stable. Then there exists at least one equilibrium of the form \((\hat{x}^{1},\hat{x}^{2})\), with \(\mathbf{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\mathbf{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\mathbf{1}\), that is either neutrally stable or unstable.4_ Footnote 4: Assuming that the equilibria of system (2) are hyperbolic, a stronger conclusion can be drawn: for generic parameter matrices, the coexistence equilibrium is unstable. _Proof:_ By assumption, \(s(-D^{k}+\beta_{1}^{k}A^{k})>0\) for \(k=1,2\). Therefore, from Proposition 2, it follows that there exists a single-virus endemic equilibrium corresponding to virus 1, \(\bar{x}^{1}\gg\mathbf{0}\), and a single-virus endemic equilibrium corresponding to virus 2, \(\bar{x}^{2}\gg\mathbf{0}\). By assumption, both \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) are locally exponentially stable. The condition \(s(-D^{1}+\beta_{1}^{1}A^{1})>0\) implies that the origin is unstable; this can be observed from the proof of statement i) in Proposition 1. We are left to show that the stable manifold of the origin does not lie in the interior of \(\mathcal{D}\). We will rely on the proof technique of [12, Lemma 3.8]. It suffices to show that for the (linear) system \[\begin{bmatrix}\dot{x}^{1}\\ \dot{x}^{2}\end{bmatrix}=\begin{bmatrix}-D^{1}+\beta_{1}^{1}A^{1}&\boldsymbol{0}\\ \boldsymbol{0}&-D^{2}+\beta_{1}^{2}A^{2}\end{bmatrix}\begin{bmatrix}x^{1}\\ x^{2}\end{bmatrix} \tag{20}\] no trajectory starting in the interior of \(\mathcal{D}\) converges to the origin. First, consider \(x^{1}(t)\). Let \(w^{\top}\) be the left eigenvector associated with \(s(-D^{1}+\beta_{1}^{1}A^{1})\), normalized so that all its entries sum to one. Define \(z:=w^{\top}x^{1}\), and observe that \(\dot{z}=w^{\top}\dot{x}^{1}\), which, from (20), further implies that \(\dot{z}=s(-D^{1}+\beta_{1}^{1}A^{1})z\). Since, by assumption, \(s(-D^{1}+\beta_{1}^{1}A^{1})>0\), and since \(w\) is a positive vector, \(z\) is positive in the interior of \(\mathcal{D}\) and grows exponentially; hence trajectories of (20) starting in the interior of \(\mathcal{D}\) move away from \(x^{1}=\boldsymbol{0}\). An analogous argument can be made for \(x^{2}(t)\), since, by assumption, \(s(-D^{2}+\beta_{1}^{2}A^{2})>0\). Therefore, the stable manifold of the origin does not lie in the interior of \(\mathcal{D}\). Consequently, since we know that system (2) is monotone (see [1, Theorem 5.5]) and the two locally exponentially stable equilibrium points \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) are ordered with respect to the cone underlying the monotonicity, from [31, Proposition 2.9] it follows that there exists an equilibrium point of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\boldsymbol{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\boldsymbol{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\boldsymbol{1}\). Furthermore, the point \((\hat{x}^{1},\hat{x}^{2})\) satisfies \(s(J(\hat{x}^{1},\hat{x}^{2}))\geq 0\), thus concluding the proof. Proposition 3 and Theorem 3 partially answer question iii) raised in Section II-B. Observe that neither of these results covers the case where one boundary equilibrium is locally exponentially stable, and the other is unstable. We next consider a different parameter regime, namely \(s(-D^{k}+\beta_{1}^{k}A^{k})<0\), and identify a sufficient condition for the existence of an unstable coexistence equilibrium. We have the following result. 
**Proposition 4**: _Consider system (2) under Assumptions 1 and 2. Define, for \(k=1,2\), \(\boldsymbol{1}_{B^{k}}\in\{0,1\}^{n}\) by \((\boldsymbol{1}_{B^{k}})_{i}=1\) if \(B^{k}_{i}\neq\boldsymbol{0}\); otherwise \((\boldsymbol{1}_{B^{k}})_{i}=0\). Suppose that the following conditions are fulfilled for \(k=1,2\):_ * \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\)_, and_ * \(\min\limits_{i:\,B^{k}_{i}\neq\boldsymbol{0}}\left(\frac{\beta_{1}^{k}}{\delta_{i}^{k}}(A^{k}\boldsymbol{1}_{B^{k}})_{i}+\frac{\beta_{2}^{k}}{2\delta_{i}^{k}}\boldsymbol{1}_{B^{k}}^{\top}B_{i}^{k}\boldsymbol{1}_{B^{k}}\right)>2\)_._ _Then there exists at least one equilibrium of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\boldsymbol{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\boldsymbol{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\boldsymbol{1}\) that is either neutrally stable or unstable._ _Proof:_ Suppose that the conditions in Proposition 4 are fulfilled. Therefore, it follows that there exist boundary equilibria \((\bar{x}^{1},\boldsymbol{0})\) and \((\boldsymbol{0},\bar{x}^{2})\), and that both are locally exponentially stable; see statements iii) and iv) in Proposition 1. Therefore, since we know that system (2) is monotone (see [1, Theorem 5.5]), from [31, Proposition 2.9] it follows that there exists (at least) one equilibrium point of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\boldsymbol{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\boldsymbol{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\boldsymbol{1}\). Furthermore, \(s(J(\hat{x}^{1},\hat{x}^{2}))\geq 0\), thus delivering the claim. \(\square\) Note that Proposition 4 guarantees the existence of at least one coexistence equilibrium. Given that system (2) is monotone, and since, from Theorem 1, it is known that for each of the equilibrium points the associated Jacobian is nonsingular, the conditions in Proposition 4 guarantee the existence of an odd number of coexistence equilibria, each of which must be unstable. The proof for the same follows from a Brouwer degree argument; see [32]. In fact, for the special case where \(\beta_{2}^{k}=0\) for \(k=1,2\), for the same stability configuration as in Theorem 3 and Proposition 4, a lower bound on the number of coexistence equilibria has been recently provided; see [12, Corollary 3.9, statement 3]. ## VI Numerical Examples We present a series of simulations highlighting interesting phenomena that can emerge when HOIs are incorporated. We use the following bivirus system with HOIs. The network has \(n=5\) nodes, and we set \(D^{1}=D^{2}=I\). The pairwise interactions are captured by two cycle graphs with self-loops, with infection matrices: \[A^{1}=\begin{bmatrix}1&0&0&0&1\\ 1&1&0&0&0\\ 0&1&1&0&0\\ 0&0&1&1&0\\ 0&0&0&1&1\end{bmatrix},\qquad A^{2}=(A^{1})^{\top}. \tag{21}\] The HOI are captured by the following set of hyperedges with unit weight: \[\text{virus }1:(1,2,3),(2,3,1),(3,2,1),(1,4,5),(4,5,1),(5,4,1)\] \[\text{virus }2:(1,2,4),(2,4,1),(4,2,1),(1,3,5),(3,5,1),(5,3,1).\] In other words, this corresponds to the following \(b^{k}_{ij\ell}\) entries being equal to \(1\), with all other entries of \(B^{k}_{i}\) equal to \(0\): \(b^{1}_{123}\), \(b^{1}_{231}\), \(b^{1}_{321}\), \(b^{1}_{145}\), \(b^{1}_{451}\), \(b^{1}_{541}\), and \(b^{2}_{124}\), \(b^{2}_{241}\), \(b^{2}_{421}\), \(b^{2}_{135}\), \(b^{2}_{351}\), \(b^{2}_{531}\). 
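To reproduce the simulations reported in the examples below, one can integrate system (2) directly; the following Python sketch does so for this 5-node instance with the rates of Example 1 (\(\beta_{1}^{1}=\beta_{2}^{1}=0.2\), \(\beta_{1}^{2}=\beta_{2}^{2}=5\)). The random seed, solver tolerances, and the normalization placing the initial condition in \(\text{int}(\mathcal{D})\) are our own choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 5
A1 = np.array([[1, 0, 0, 0, 1],
               [1, 1, 0, 0, 0],
               [0, 1, 1, 0, 0],
               [0, 0, 1, 1, 0],
               [0, 0, 0, 1, 1]], dtype=float)
A2 = A1.T
B1 = np.zeros((n, n, n)); B2 = np.zeros((n, n, n))
for i, j, l in [(1, 2, 3), (2, 3, 1), (3, 2, 1), (1, 4, 5), (4, 5, 1), (5, 4, 1)]:
    B1[i - 1, j - 1, l - 1] = 1.0
for i, j, l in [(1, 2, 4), (2, 4, 1), (4, 2, 1), (1, 3, 5), (3, 5, 1), (5, 3, 1)]:
    B2[i - 1, j - 1, l - 1] = 1.0
b11, b21, b12, b22 = 0.2, 0.2, 5.0, 5.0  # Example 1 rates; D^1 = D^2 = I

def f(t, y):
    x1, x2 = y[:n], y[n:]
    s = 1.0 - x1 - x2
    h1 = np.array([x1 @ B1[i] @ x1 for i in range(n)])
    h2 = np.array([x2 @ B2[i] @ x2 for i in range(n)])
    return np.concatenate([-x1 + b11 * s * (A1 @ x1) + b21 * s * h1,
                           -x2 + b12 * s * (A2 @ x2) + b22 * s * h2])

rng = np.random.default_rng(0)
y0 = rng.uniform(size=2 * n)
y0 /= max(1.0, (y0[:n] + y0[n:]).max())   # ensure (x^1(0), x^2(0)) lies in D
sol = solve_ivp(f, (0.0, 500.0), y0, rtol=1e-9, atol=1e-12)
# The equilibrium reached depends on the initial condition (tri-stability)
print(sol.y[:n, -1].round(3), sol.y[n:, -1].round(3))
```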
In our simulations, we randomly sample \(x^{k}_{i}(0)\) from the uniform distribution on \((0,1)\), and then normalize the vectors \(x^{1}(0)\) and \(x^{2}(0)\) to ensure that \((x^{1}(0),x^{2}(0))\in\text{int}(\mathcal{D})\). The infection rates \(\beta_{1}^{k}\) and \(\beta_{2}^{k}\) are varied to yield different stability properties for the system in (2). **Example 1**: _We set \(\beta_{1}^{1}=\beta_{2}^{1}=0.2\) and \(\beta_{1}^{2}=\beta_{2}^{2}=5\). This ensures the inequalities of both conditions for Proposition 1 are satisfied. As can be observed from Fig. 1(a), for initial conditions close to the DFE, the trajectories converge to the locally exponentially stable DFE, \((x^{1}=\boldsymbol{0},x^{2}=\boldsymbol{0})\). In Figs. 1(b) and 1(c), the initial conditions are further in the interior of \(\mathcal{D}\), and depending on the particular initial condition, we observe convergence to a boundary equilibrium where one of the two viruses is extinct, \((\bar{x}^{1},\boldsymbol{0})\) or \((\boldsymbol{0},\bar{x}^{2})\) for some positive \(\bar{x}^{1}>0.5\times\boldsymbol{1}\) and \(\bar{x}^{2}>0.5\times\boldsymbol{1}\). That is, both boundary equilibria are simultaneously locally exponentially stable. Interestingly, without HOIs, it is impossible for a bivirus system to have the DFE, \((\bar{x}^{1},\boldsymbol{0})\), and \((\boldsymbol{0},\bar{x}^{2})\) all locally exponentially stable [33, Section E]._ **Example 2**: _We set \(\beta_{1}^{1}=\beta_{2}^{1}=2\), \(\beta_{1}^{2}=3\), and \(\beta_{2}^{2}=2.4\). As illustrated in Figs. 2(a) and 2(b), there are two locally exponentially stable boundary equilibria, \((\bar{x}^{1},\boldsymbol{0})\) and \((\boldsymbol{0},\bar{x}^{2})\), and the trajectories converge to one or the other depending on the initial conditions. However, the DFE is unstable, and no trajectory in \(\mathcal{D}\) converges to it except the one starting at the DFE itself. This simulation highlights an interesting observation: for a standard bivirus system with no HOIs, examples of systems with two locally stable boundary equilibria were not identified until recently and are not straightforward to construct [12, 34]. We conclude by remarking that, for each of the simulations presented, the bivirus system exhibits dynamical phenomena when HOIs are present that are not observed when HOIs are absent. In other words, HOIs unlock new possibilities in the competition between two viruses spreading over networks and suggest that a significant amount of understanding remains to be unveiled. ## VII Conclusion This paper analyzed a networked competitive bivirus SIS model that also accounts for the possibility of HOI among the nodes. By taking recourse to the Parametric Transversality Theorem of differential topology, we showed that the bivirus system with HOI has, for generic parameter values, a finite number of equilibria. Furthermore, the Jacobian matrices associated with each of the equilibria are nonsingular. This finding, coupled with the knowledge that the system is monotone, enabled us to establish that the typical behavior that our system exhibits is convergence to some equilibrium. Subsequently, we identified a parameter regime that ensures the existence of multiple boundary equilibria and their simultaneous stability along with that of the DFE. For the special case where only one virus is circulating in the metapopulation, we guarantee the existence and local stability of an endemic equilibrium; our result does not impose any restrictions on the model parameters besides those covered by Assumptions 1 and 2. 
Thereafter, for different parameter regimes, we identified conditions that guarantee the existence of a coexistence equilibrium.
This paper deals with the spread of two competing viruses over a network of population nodes, accounting for pairwise interactions and higher-order interactions (HOI) within and between the population nodes. We study the competitive networked bivirus susceptible-infected-susceptible (SIS) model on a hypergraph introduced in Cui et al. [1]. The system has, in a generic sense, a finite number of equilibria, and the Jacobian associated with each equilibrium point is nonsingular; the key tool in this work is the Parametric Transversality Theorem of differential topology. Moreover, since the system is monotone, its typical behavior is convergence to some equilibrium point. Thereafter, we exhibit a tri-stable domain with three locally exponentially stable equilibria. For different parameter regimes, we establish conditions for the existence of a coexistence equilibrium, in which both viruses infect separate fractions of each population node.
2309.12899
OptCtrlPoints: Finding the Optimal Control Points for Biharmonic 3D Shape Deformation
We propose OptCtrlPoints, a data-driven framework designed to identify the optimal sparse set of control points for reproducing target shapes using biharmonic 3D shape deformation. Control-point-based 3D deformation methods are widely utilized for interactive shape editing, and their usability is enhanced when the control points are sparse yet strategically distributed across the shape. With this objective in mind, we introduce a data-driven approach that can determine the most suitable set of control points, assuming that we have a given set of possible shape variations. The challenges associated with this task primarily stem from the computationally demanding nature of the problem. Two main factors contribute to this complexity: solving a large linear system for the biharmonic weight computation and addressing the combinatorial problem of finding the optimal subset of mesh vertices. To overcome these challenges, we propose a reformulation of the biharmonic computation that reduces the matrix size, making it dependent on the number of control points rather than the number of vertices. Additionally, we present an efficient search algorithm that significantly reduces the time complexity while still delivering a nearly optimal solution. Experiments on SMPL, SMAL, and DeformingThings4D datasets demonstrate the efficacy of our method. Our control points achieve better template-to-target fit than FPS, random search, and neural-network-based prediction. We also highlight the significant reduction in computation time from days to approximately 3 minutes.
Kunho Kim, Mikaela Angelina Uy, Despoina Paschalidou, Alec Jacobson, Leonidas J. Guibas, Minhyuk Sung
2023-09-22T14:37:05
http://arxiv.org/abs/2309.12899v2
# OptCtrlPoints: Finding the Optimal Control Points for Biharmonic 3D Shape Deformation ###### Abstract We propose OptCtrlPoints, a data-driven framework designed to identify the optimal sparse set of control points for reproducing target shapes using biharmonic 3D shape deformation. Control-point-based 3D deformation methods are widely utilized for interactive shape editing, and their usability is enhanced when the control points are sparse yet strategically distributed across the shape. With this objective in mind, we introduce a data-driven approach that can determine the most suitable set of control points, assuming that we have a given set of possible shape variations. The challenges associated with this task primarily stem from the computationally demanding nature of the problem. Two main factors contribute to this complexity: solving a large linear system for the biharmonic weight computation and addressing the combinatorial problem of finding the optimal subset of mesh vertices. To overcome these challenges, we propose a reformulation of the biharmonic computation that reduces the matrix size, making it dependent on the number of control points rather than the number of vertices. Additionally, we present an efficient search algorithm that significantly reduces the time complexity while still delivering a nearly optimal solution. Experiments on SMPL, SMAL, and DeformingThings4D datasets demonstrate the efficacy of our method. Our control points achieve better template-to-target fit than FPS, random search, and neural-network-based prediction. We also highlight the significant reduction in computation time from days to approximately 3 minutes. **CCS Concepts** \(\bullet\)**Computing methodologies \(\rightarrow\) Mesh models; Mesh geometry models; Shape analysis;** ## 1 Introduction The demand for high-quality 3D models is growing rapidly, especially with recently emerging applications in virtual and augmented realities, gaming, robotics, animation, etc. However, creating and designing high-quality 3D models is a tedious and difficult process even for expert designers. Shape deformation is thus an important technique that enables producing plausible variations of existing high-quality, artist-generated 3D assets. Deforming a 3D model is, however, a highly non-trivial task. A straightforward approach is to parameterize deformation as positions of all the mesh vertices [19, 17, 18], although it is difficult for users to edit the shape and also can lead to unrealistic outputs due to its large degree of freedom. To circumvent this conundrum, existing works in geometry processing [11, 12, 13, 14, 15, 16, 17, 18, 19, 20] leverage a _sparse_ set of _deformation handles_ to constrain and parameterize deformation within a lower degree of freedom, facilitating more intuitive editing via interactions with the users. For handle-based deformation to be effective and useful, there are several desirable properties, such as identity (i.e. preserving the shape under zero handle movement), locality, smoothness, closed-form expression, and flexibility for the representation of shapes. Due to such desirable properties, many existing deformation methods [11, 12] use a set of _points_ or _regions_ in the mesh, which is typically a subset of the mesh vertices, as the handles (Fig. 1) while defining the shape deformation function from the handles using _biharmonic_ weights. 
Given any input shape, point and region handles can be directly selected, and their biharmonic weights can also be directly computed, offering flexibility and convenience. This is in contrast to other types of deformation handles, such as cages [15, 16, 17, 18, 19] and skeletons [12, 13]. Cages require the manual construction of a closed polyhedral envelope for the shapes, while skeletons require rigging, where the skeleton structure needs to be delicately constructed to produce detailed deformation and typically necessitate manual weight painting. Given a mesh and a sparse subset of the mesh vertices representing the control points, the biharmonic weights defining the linear map from the control point positions to the mesh vertex positions are calculated by solving a convex quadratic optimization problem [11], which was shown to be equivalent to solving a linear system [11] when relaxing some constraints. To obtain a wide range of plausible variations of the shape from a sparse set of control points, it is crucial to find the _optimal_ set of control points. For an articulated 3D shape, for instance, the control points near the joints would be able to produce more realistic deformations by bending the joints properly (see control points at knees on the left of Fig. 1). Also, more control points would be needed in the regions with more detailed variations (see the additional control points on the leg on the right of Fig. 1). In this work, we present a _data-driven_ method for finding the optimal set of control points, coined OptCtrlPoints. Given a template mesh and its variant shapes (e.g. different poses in an animation), we find the ideal subset of the template mesh vertices as control points that can best fit the template to all the variant shapes via deformation. Our method can thus provide optimized deformation handles to the users and enables easier shape editing. In contrast to previous works [16, 17, 18] that utilize neural networks to learn deformation handles in a data-driven manner, our approach focuses on discovering a set of control points rather than fitting a sphere cage to the template mesh. The limitation of using a sphere cage is that it is unable to accommodate large deformations, making it unsuitable for non-rigid shapes like human or animal bodies. By identifying control points instead, our method enables more flexible and effective deformation modeling for such shapes. Finding the optimal set of vertices poses a challenge due to the substantial amount of computation time involved, which can span several hours or more than a day. Two primary factors contribute to this extended computation duration. Firstly, the process of deforming the template mesh to accurately align with the target shapes is time-consuming. While deforming the mesh using _fixed_ biharmonic weights can be performed quickly by prefactorizing a matrix within a linear system, the need to test different sets of control points prevents prefactorization of the matrix, leading to a considerably slower process. Secondly, solving a combinatorial optimization problem to determine the ideal subset of vertices becomes intractable when dealing with thousands or more vertices. When \(N\) is the number of vertices and \(K\) is the number of control points, a straightforward exhaustive search requires a time complexity of \(\mathcal{O}(N^{K})\), further contributing to the computational challenges. 
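To make the two computational bottlenecks concrete, consider the following simplified Python sketch (ours, not the actual OptCtrlPoints solver): given a bi-Laplacian-type matrix \(Q\), deforming with a fixed control set amounts to one linear solve obtained from the relaxed quadratic program, while the number of candidate control-point subsets grows combinatorially. The 1D-chain stand-in for a mesh and the function names are illustrative assumptions.

```python
import numpy as np
from math import comb

def biharmonic_deform(Q, ctrl_idx, ctrl_pos):
    # Minimize x^T Q x per coordinate, subject to x[ctrl_idx] = ctrl_pos.
    # Stationarity in the free vertices gives Q_ff x_f = -Q_fc c, i.e. the
    # relaxed linear-system form of the constrained quadratic program.
    N = Q.shape[0]
    free = np.setdiff1d(np.arange(N), ctrl_idx)
    Qff = Q[np.ix_(free, free)]
    Qfc = Q[np.ix_(free, ctrl_idx)]
    x = np.zeros((N, ctrl_pos.shape[1]))
    x[ctrl_idx] = ctrl_pos
    x[free] = np.linalg.solve(Qff, -Qfc @ ctrl_pos)
    return x  # deformed vertex positions (N x 3)

# Toy stand-in for a mesh: N vertices on a 1D chain; Q is the squared
# (Dirichlet) graph Laplacian, playing the role of the bi-Laplacian.
N, K = 100, 4
L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
Q = L @ L
ctrl_idx = np.array([0, 33, 66, 99])
ctrl_pos = np.random.rand(K, 3)          # target control-point positions
V = biharmonic_deform(Q, ctrl_idx, ctrl_pos)

# The combinatorial bottleneck: number of candidate control-point subsets
print(comb(N, K))                        # 3,921,225 subsets even for N=100, K=4
```

Re-solving a system like the one above for every candidate subset is what makes the naive search intractable, which motivates a reformulation whose matrix size depends on \(K\) rather than \(N\).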
To overcome these challenges and address the issue of intractable computation, we propose a novel algorithm that incorporates two key ideas. Firstly, we introduce a reformulation of the biharmonic weight computation, which significantly reduces the time required to solve the linear system. This is achieved by introducing a new linear system where the size of the matrix is not dependent on the number of vertices \(N\), but rather on the number of control points \(K\). This reformulation proves particularly effective in cases where the set of control points varies, as in our scenario where we search for the best set. Additionally, we present an efficient search algorithm that leverages the new biharmonic weight computation. This algorithm operates by iteratively updating control points one by one while simultaneously traversing local partitioned regions for each control point. This simple yet effective approach enables us to reduce the time complexity from \(\mathcal{O}(N^{K})\) to \(\Theta(N+K^{2})\), which is linear in the number of vertices, while still providing a nearly optimal solution. In our experiments, we assess the performance of our method by evaluating the alignment of the template to the target through deformation, comparing it with other baselines: Farthest Point Sampling (FPS), random search, and KeypointDeformer [13], a neural-network-based method for predicting keypoints for deformation. We conduct these evaluations on three datasets: SMPL [17], SMAL [16], and DeformingThings4D [18]. Our results demonstrate that the control points discovered by our OptCtrlPoints algorithm offer a better fit of the template to the target shapes, thanks to their ideal locations for producing the desired variations. Additionally, our approach significantly reduces computation time, especially when compared to random search. In approximately 3 minutes, our method can find a good set of control points, whereas without the new biharmonic weight formulation and efficient search, it would take days to achieve a similar outcome. Furthermore, with the DeformingThings4D [18] dataset, we illustrate that our data-driven control point search method can discover an optimized set of control points tailored to the given set of target shapes. We conduct experiments using two setups: targeting all motions or focusing on a specific motion within the animations. The consistently lower fitting errors observed in the specific-motion case compared to the all-motion case highlight the effectiveness of our data-driven approach.

## 2 Related work

### 3D Shape Deformation

3D shape deformation has been a long-standing problem in computer graphics and geometry processing. The problem is to find the best vertex positions for a given mesh in order to obtain a new shape that best fits a target while preserving the local geometric details of the original shape. Previous approaches include free-form deformation [11, 12], which defines a smooth deformation function by interpolating the weights of the voxel grids enclosing the surface, and vertex-based approaches [2, 13, 14, 15], where vertex positions are directly optimized through a target-fitting objective function. Regularization losses are also used, such as mesh Laplacian [23, 24] and local rigidity [1, 16, 17], to preserve the geometric details of the original shape. Learning-based approaches have also been introduced for both free-form [16, 17, JPS\({}^{*}\)18, KJG\({}^{*}\)18] and vertex-based [19, 14] deformation approaches.
More recently, neural implicit functions [15, 16] explore defining the deformation offset on the full coordinate space, instead of only on the surface of the mesh. These works, however, may lead to unrealistic outputs due to their large degrees of freedom and also do not exert intuitive control over the shape, as editing operations are performed in the implicit space. ### Traditional Handle-based Shape Deformation Deformation handles are commonly used to address the need for low-dimensional control on shape deformation and have been well studied in the computer graphics literature [10]. Earlier works use volumetric prisms [1, 12] or off-surface handles [11] to compute detail-preserving shape deformation through variational methods. These variational methods typically require optimization at each time of deformation. Cage [12, 13, 14, 15, 16, 17, 18] is another form of shape handles where a shape is enclosed in a coarse polytope, and the mesh vertices are defined as a linear combination of the cage vertices through generalized barycentric coordinates. Skeletons [12, 13] also define a linear map from the joints and bones to the mesh vertices via linear blend skinning, while now the handles appear inside the shape. Both cages and skeletons allow for shape deformation to be expressible in a closed form, but require manual construction of the source cage or rigging. Jacobson et al. [1] introduced a handle-based deformation function based on solving the _biharmonic equation_ over the mesh surface with boundary constraints that can use a set of points or regions in the mesh as handles to define shape deformation using biharmonic weights. Unlike cages and skeletons, these handles can directly be computed for any source shape and are thus flexible and versatile. Wang et al. [14] then introduced a closed-form formulation to the original constrained quadratic optimization formulation. In our method, we leverage this versatile deformation handle with an efficient reformulation to enable tractable gradient-based optimization for handle _discovery_, in contrast to existing works that assume shape handles to be given. ### Learning Handle-based Shape Deformation Recently, handle-based shape deformations have been revisited in the context of deep learning. DeformSyncNet [20] uses rigid bounding boxes as shape handles for learning a latent shape difference deformation space. Wang et al. [15] introduced Neural Cages, a neural network for cage-based deformations that predicts a source-dependent cage used to deform a source shape to match a given target. KeypointDeformer [16] leverages Neural Cages to learn keypoints that can be used for deformation. However, the degree of plausible output shape deformations yielded by the neural cage-based deformation methods is limited since they start from a sphere-based cage, making highly non-rigid deformations on non-sphere-like shapes difficult. In contrast, we leverage a deformation function defined with biharmonic coordinates that enable flexibility for any given source shape. Liu et al. [11] also use biharmonic coordinates as their deformation function. In contrast to our work, where the goal is the _discovery_ of the control points, they assume them to be given and instead learn a latent space of _metahandles_ for the given set of control points. Moreover, we discover _explicit_ handles, which are 3D mesh vertices, that allow direct user interpretability and controllability, instead of meta-handles in latent space. 
## 3 Background

### Shape Deformation and Deformation Handles

The creation of high-quality 3D models is a tedious process that requires manual expertise, thus making shape deformation an important task as it enables converting an existing 3D model to a new shape while preserving its fine details. However, naive mesh deformation through moving individual mesh vertices is cumbersome, as it is difficult for users and can easily lead to unrealistic outputs. Existing works [1, 12, 13, 14, 15, 16, 17] thus introduce intuitive _deformation handles_, e.g. control points, cages, skeletons, etc., to constrain and parameterize deformation with a low degree of freedom. Several properties are critical for handle-based deformation to be effective and useful:

1. **Identity**: The original shape must be reconstructed under zero movement of shape handles.
2. **Locality**: The deformation produced by each individual handle must be local and smooth.
3. **Closed-form**: The output deformed shape must be expressed in a closed form given the transformations of the deformation handles.
4. **Flexibility**: The deformation handles and function must be defined without any constraints or additional information about the shape (e.g., a cage, a skeleton, or bounding primitives).

For these reasons, many existing shape deformation methods [11, 12, 13, 14] use biharmonic weights as the deformation function. Jacobson et al. [1] first introduced bounded biharmonic coordinates that solve biharmonic equations defined over a mesh to compute the linear map from the handles to the mesh vertices. Wang et al. [14] later introduced a closed-form formulation for the biharmonic coordinate-based deformation, which we base our work on. Below, we explain the details of the closed-form formulation of the biharmonic coordinate deformation function.

### Biharmonic Coordinates

Given a 3D volumetric mesh with \(N\) vertices and \(K\) control points, which form a sparse _subset_ of the mesh vertices (\(K\ll N\)) represented with a binary selector matrix \(\mathbf{S}\in\{0,1\}^{K\times N}\), the biharmonic deformation function [1, 2] from the positions of the control points \(\mathbf{C}\in\mathbb{R}^{K\times 3}\) to the positions of mesh vertices \(\mathbf{V}\in\mathbb{R}^{N\times 3}\) is defined as a _linear_ function: \(\mathbf{V}=\mathbf{WC}\). (The original biharmonic deformation [1, 2] supports region handles, while we limit our scope to point handles for simplicity.) Here, \(\mathbf{W}\in\mathbb{R}^{N\times K}\), called the _biharmonic weights_, is derived from the solution of the following optimization for the vertex positions \(\mathbf{V}\) with respect to the equality constraints on the control point positions: \[\mathbf{V}=\underset{\mathbf{X}\in\mathbb{R}^{N\times 3}}{\text{argmin}}\frac{1}{2}\text{trace}\left(\mathbf{X}^{\top}\mathbf{AX}\right)\text{ subject to }\mathbf{S}\mathbf{X}=\mathbf{C}, \tag{1}\] where \(\mathbf{A}\in\mathbb{R}^{N\times N}\) denotes the discrete Bilaplacian matrix of the mesh. This optimization finds the positions of the mesh vertices \(\mathbf{V}\) minimizing the squared Laplacian energy when the positions of the selected control points are fixed to be \(\mathbf{C}\). Since the Bilaplacian matrix \(\mathbf{A}\) is positive semi-definite, the optimization is a convex quadratic programming problem that can be solved as a linear system as described in [2].
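Before presenting the closed form, note that the constrained problem in Eq. 1 can be solved directly as a saddle-point (KKT) linear system, which is also the form that reappears in Sec. 4.2.1. A minimal NumPy sketch on a toy chain "mesh" (the stand-in Bilaplacian and all sizes are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def biharmonic_deform(A, ctrl_idx, C):
    """Solve Eq. (1), min 0.5 * tr(X^T A X) s.t. X[ctrl_idx] = C,
    via the KKT system [[A, S^T], [S, 0]] [X; Lambda] = [0; C]."""
    N, K = A.shape[0], len(ctrl_idx)
    S = np.zeros((K, N))
    S[np.arange(K), ctrl_idx] = 1.0
    kkt = np.block([[A, S.T], [S, np.zeros((K, K))]])
    rhs = np.vstack([np.zeros((N, 3)), C])
    return np.linalg.solve(kkt, rhs)[:N]  # vertex positions V

# Toy example: squared Laplacian of a path graph as a 1D "Bilaplacian".
N = 6
L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0          # boundary rows of the path-graph Laplacian
A = L @ L                          # rank N-1, like the Bilaplacian
C = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
V = biharmonic_deform(A, [0, N - 1], C)  # smooth interpolation of the endpoints
```

The KKT matrix is nonsingular here because the constraints remove the rank-deficient direction of \(\mathbf{A}\); relaxing some of the constraints instead yields the closed form below.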
The solution thus has a form of \(\mathbf{V}=\mathbf{WC}\) where \[\mathbf{W}(\mathbf{S};\mathbf{A})=\mathbf{S}^{\top}-\mathbf{T}^{\top}\left(\mathbf{TAT}^{\top}\right)^{-1}\mathbf{TAS}^{\top}, \tag{2}\] and \(\mathbf{T}\in\{0,1\}^{(N-K)\times N}\) is the complementary selector matrix of \(\mathbf{S}\) indicating the mesh vertices that are _not_ selected as control points. By the definition of the Bilaplacian matrix \(\mathbf{A}\) including the original positions of the vertices \(\overline{\mathbf{V}}\) in its null space, the given pose of the mesh \(\overline{\mathbf{V}}\) becomes the solution of the optimization when the control points are not moved (\(\mathbf{C}=\mathbf{S}\overline{\mathbf{V}}\)), satisfying the identity condition. To leverage the expressivity of biharmonic coordinates for shape deformation, we need to find the _optimal_ set of control points (a sparse subset of the mesh vertices) that allows us to achieve a wide variety of plausible variants. In what follows, we introduce our _data-driven_ approach of finding the optimal set of control points efficiently given possible variants of the shape.

## 4 OptCtrlPoints

### Problem Definition

Given a template volumetric mesh with \(N\) vertices whose Bilaplacian matrix is \(\mathbf{A}\in\mathbb{R}^{N\times N}\), and a set of \(M\) target shapes \(\{\mathcal{X}_{i}\}_{i=1}^{M}\) that are the possible variants of the template, our goal is to find the optimal \(K\)-subset of the \(N\) source vertices as the control points, denoted as \(\mathbf{S}\in\{0,1\}^{K\times N}\), which can best fit all the target shapes with their corresponding positions in the target (see Fig. 2): \[\begin{split}\mathbf{\bar{S}}=\underset{\mathbf{S}\in\{0,1\}^{K\times N}}{\text{argmin}}&\sum_{i=1}^{M}d(\mathcal{X}_{i},\mathbf{W}(\mathbf{S};\mathbf{A})\mathbf{C}(\mathbf{S};\mathcal{X}_{i}))\\ \text{s.t.}&\quad\sum_{j=1}^{N}\mathbf{S}_{ij}=1\quad\text{for all }i,\end{split} \tag{3}\] where \(d(\cdot,\cdot)\) is a shape-to-shape distance function, \(\mathbf{W}(\mathbf{S};\mathbf{A})\) is the biharmonic weight function in Eq. 2 for the control point selector matrix \(\mathbf{S}\) and the given Bilaplacian matrix \(\mathbf{A}\), and \(\mathbf{C}(\mathbf{S};\mathcal{X}_{i})\) is a function computing the corresponding positions of the control points \(\mathbf{S}\) in the target \(\mathcal{X}_{i}\). Assuming that the point-wise map from each target shape \(\mathcal{X}_{i}\) to the template is given or estimated with an off-the-shelf shape correspondence method (e.g., functional maps [1] or their neural variants; see a survey [13]), let \(f:\{\mathcal{X}_{i}\}_{i=1}^{M}\rightarrow\mathbb{R}^{N\times 3}\) denote a function returning corresponding points of each vertex of the template in the same order. Then \(\mathbf{C}(\mathbf{S};\mathcal{X}_{i})=\mathbf{S}f(\mathcal{X}_{i})\). The main challenge in finding the optimal \(K\)-subset of template vertices as control points in Eq. 3 lies in the extensive computation time required. Note that when the control points and resulting biharmonic weights are _fixed_, the deformations can be computed quickly by prefactorizing \(\mathbf{TAT}^{\top}\) in Eq. 2. However, when computing deformations with _different_ sets of control points, it is not feasible to leverage the prefactorization since the complementary selector matrix \(\mathbf{T}\) varies. Consequently, computing the biharmonic weight matrix \(\mathbf{W}\) in Eq.
2 becomes the bottleneck, taking several seconds to compute even once (for a detailed analysis, refer to Sec. 5). Moreover, identifying the optimal \(K\)-subset from a pool of \(N\) elements is an NP-complete combinatorial optimization problem, making exhaustive search impractical due to the typically high number of vertices, often in the thousands. To address this challenge, we present our efficient control point search framework, called OptCtrlPoints. First, in Sec. 4.2, we introduce a reformulation of Eq. 2 that yields the same biharmonic weight matrix \(\mathbf{W}\) while solving a linear system on a much smaller scale, significantly reducing computation time. Second, in Sec. 4.3, we propose an efficient search algorithm that reduces the time complexity of the search from \(\mathcal{O}(N^{K})\) (for exhaustive search) to \(\Theta(N+K^{2})\) on average, while effectively finding nearly optimal solutions in practice.

### Reformulation of \(\mathbf{W}(\mathbf{S};\mathbf{A})\) (Eq. 2)

Let \(\mathbf{M}\) denote the linear system in \(\mathbf{W}(\mathbf{S};\mathbf{A})\) (Eq. 2): \[\mathbf{M}=\left(\mathbf{TAT}^{\top}\right)^{-1}\mathbf{TAS}^{\top}. \tag{4}\] We begin by introducing a reformulation of \(\mathbf{M}\) for the case when \(\mathbf{A}\) is not a Bilaplacian matrix but rather an arbitrary invertible square matrix. However, in our specific scenario, \(\mathbf{A}\) is the Bilaplacian matrix, which is non-invertible and positive-semidefinite. In Section 4.2.1, we elaborate on how we address the singularity of \(\mathbf{A}\) in this new formulation. We choose a permutation matrix \(\mathbf{P}\in\mathbb{R}^{N\times N}\) such that the product of \(\mathbf{P}\) with \(\mathbf{S}\) and \(\mathbf{T}\) becomes \(\mathbf{SP}=\left[\mathbf{0}_{K\times(N-K)}\mid\mathbf{I}_{K\times K}\right]\) and \(\mathbf{TP}=\left[\mathbf{I}_{(N-K)\times(N-K)}\mid\mathbf{0}_{(N-K)\times K}\right]\), respectively. Moreover, using \(\mathbf{P}\), we define \(\mathbf{B}=\mathbf{P}^{\top}\mathbf{A}\mathbf{P}\) and its inverse \(\mathbf{D}=\mathbf{P}^{\top}\mathbf{A}^{-1}\mathbf{P}\), with block structure \[\mathbf{B}=\left[\begin{array}{cc}\mathbf{B}_{11}&\mathbf{B}_{12}\\ \mathbf{B}_{12}^{\top}&\mathbf{B}_{22}\end{array}\right],\quad\mathbf{D}=\left[\begin{array}{cc}\mathbf{D}_{11}&\mathbf{D}_{12}\\ \mathbf{D}_{12}^{\top}&\mathbf{D}_{22}\end{array}\right], \tag{5}\] where \(\mathbf{B}_{11},\mathbf{D}_{11}\in\mathbb{R}^{(N-K)\times(N-K)}\), \(\mathbf{B}_{12},\mathbf{D}_{12}\in\mathbb{R}^{(N-K)\times K}\), and \(\mathbf{B}_{22},\mathbf{D}_{22}\in\mathbb{R}^{K\times K}\) are block matrices. Now taking into account that a permutation matrix is an orthogonal matrix (i.e. \(\mathbf{P}^{-1}=\mathbf{P}^{\top}\)) and \(\mathbf{P}\mathbf{P}^{\top}=\mathbf{I}\), we can rewrite Eq.
4 as follows: \[\begin{split}\mathbf{M}&=\left(\mathbf{TAT}^{\top}\right)^{-1}\mathbf{TAS}^{\top}\\ &=\left(\left(\mathbf{TP}\right)\left(\mathbf{P}^{\top}\mathbf{A}\mathbf{P}\right)\left(\mathbf{TP}\right)^{\top}\right)^{-1}\left(\mathbf{TP}\right)\left(\mathbf{P}^{\top}\mathbf{A}\mathbf{P}\right)\left(\mathbf{SP}\right)^{\top}\\ &=\left(\left(\mathbf{TP}\right)\mathbf{B}\left(\mathbf{TP}\right)^{\top}\right)^{-1}\left(\mathbf{TP}\right)\mathbf{B}\left(\mathbf{SP}\right)^{\top}\\ &=\left(\left[\begin{array}{cc}\mathbf{I}&\mathbf{0}\end{array}\right]\left[\begin{array}{cc}\mathbf{B}_{11}&\mathbf{B}_{12}\\ \mathbf{B}_{12}^{\top}&\mathbf{B}_{22}\end{array}\right]\left[\begin{array}{c}\mathbf{I}\\ \mathbf{0}\end{array}\right]\right)^{-1}\left[\begin{array}{cc}\mathbf{I}&\mathbf{0}\end{array}\right]\left[\begin{array}{cc}\mathbf{B}_{11}&\mathbf{B}_{12}\\ \mathbf{B}_{12}^{\top}&\mathbf{B}_{22}\end{array}\right]\left[\begin{array}{c}\mathbf{0}\\ \mathbf{I}\end{array}\right]\\ &=\mathbf{B}_{11}^{-1}\mathbf{B}_{12}.\end{split} \tag{6}\] By using the Schur complement [1], we can express the blocks of \(\mathbf{B}=\mathbf{D}^{-1}\) in terms of the blocks of \(\mathbf{D}\) as follows: \[\mathbf{D}^{-1}=\left[\begin{array}{cc}\mathbf{D}_{11}&\mathbf{D}_{12}\\ \mathbf{D}_{12}^{\top}&\mathbf{D}_{22}\end{array}\right]^{-1}=\left[\begin{array}{cc}\mathbf{B}_{11}&\mathbf{B}_{12}\\ \mathbf{B}_{12}^{\top}&\mathbf{B}_{22}\end{array}\right], \tag{7}\] where \[\begin{split}\mathbf{B}_{11}&=\left(\mathbf{D}_{11}-\mathbf{D}_{12}\mathbf{D}_{22}^{-1}\mathbf{D}_{12}^{\top}\right)^{-1},\\ \mathbf{B}_{12}&=-\left(\mathbf{D}_{11}-\mathbf{D}_{12}\mathbf{D}_{22}^{-1}\mathbf{D}_{12}^{\top}\right)^{-1}\mathbf{D}_{12}\mathbf{D}_{22}^{-1},\text{ and}\\ \mathbf{B}_{22}&=\mathbf{D}_{22}^{-1}+\mathbf{D}_{22}^{-1}\mathbf{D}_{12}^{\top}\left(\mathbf{D}_{11}-\mathbf{D}_{12}\mathbf{D}_{22}^{-1}\mathbf{D}_{12}^{\top}\right)^{-1}\mathbf{D}_{12}\mathbf{D}_{22}^{-1}.\end{split} \tag{8}\] Thus, setting \(\mathbf{Q}=\mathbf{D}_{11}-\mathbf{D}_{12}\mathbf{D}_{22}^{-1}\mathbf{D}_{12}^{\top}\), we have \(\mathbf{B}_{11}^{-1}=\mathbf{Q}\) and \(\mathbf{B}_{12}=-\mathbf{Q}^{-1}\mathbf{D}_{12}\mathbf{D}_{22}^{-1}\). Hence we get \[\mathbf{M}=\mathbf{B}_{11}^{-1}\mathbf{B}_{12}=-\mathbf{D}_{12}\mathbf{D}_{22}^{-1}. \tag{9}\] Finally, by considering the orthogonality of the permutation matrix \(\mathbf{P}\) and that \(\mathbf{P}^{\top}=\begin{bmatrix}\mathbf{T}\\ \mathbf{S}\end{bmatrix}\) based on its definition, we can show that \(\mathbf{D}_{12}=\mathbf{T}\mathbf{A}^{-1}\mathbf{S}^{\top}\) and \(\mathbf{D}_{22}=\mathbf{S}\mathbf{A}^{-1}\mathbf{S}^{\top}\) as follows: \[\begin{split}\mathbf{D}&=\mathbf{P}^{\top}\mathbf{A}^{-1}\mathbf{P}\\ &=\left[\begin{array}{c}\mathbf{T}\\ \mathbf{S}\end{array}\right]\mathbf{A}^{-1}\left[\mathbf{T}^{\top}\mid\mathbf{S}^{\top}\right]\\ &=\left[\begin{array}{cc}\mathbf{T}\mathbf{A}^{-1}\mathbf{T}^{\top}&\mathbf{T}\mathbf{A}^{-1}\mathbf{S}^{\top}\\ \mathbf{S}\mathbf{A}^{-1}\mathbf{T}^{\top}&\mathbf{S}\mathbf{A}^{-1}\mathbf{S}^{\top}\end{array}\right]=\left[\begin{array}{cc}\mathbf{D}_{11}&\mathbf{D}_{12}\\ \mathbf{D}_{12}^{\top}&\mathbf{D}_{22}\end{array}\right].\end{split} \tag{10}\] Then, Eq. 9 becomes: \[\mathbf{M}=-\mathbf{T}\mathbf{A}^{-1}\mathbf{S}^{\top}\left(\mathbf{S}\mathbf{A}^{-1}\mathbf{S}^{\top}\right)^{-1}. \tag{11}\] By replacing Eq. 11 in our initial expression from Eq.
2, we obtain the following reformulation: \[\mathbf{W}(\mathbf{S};\mathbf{A})=\mathbf{S}^{\top}+\mathbf{T}^{\top}\left(\mathbf{T}\mathbf{A}^{-1}\mathbf{S}^{\top}\left(\mathbf{S}\mathbf{A}^{-1}\mathbf{S}^{\top}\right)^{-1}\right). \tag{12}\] Note that Eq. 12 includes a linear system with a significantly smaller matrix, \(\mathbf{S}\mathbf{A}^{-1}\mathbf{S}^{\top}\in\mathbb{R}^{K\times K}\), where \(K\ll N\). Also, \(\mathbf{A}^{-1}\) can be pre-computed to speed up the computation at each iteration.

Figure 2: Examples of the template and target shapes, along with the fitting results obtained through biharmonic deformations from the template to the target. The colored points indicate the control points used for the deformation computation. The segmented shapes within the black boxes at the top left corner illustrate the partitioned volume of the source tetrahedral mesh, enabling a level-of-detail search for the optimal placement of each control point.

#### 4.2.1 Handling the Singularity of the Bilaplacian Matrix \(\mathbf{A}\)

The reformulation of \(\mathbf{M}\) (Eq. 11) cannot be directly used in our case since the Bilaplacian matrix \(\mathbf{A}\) is a singular matrix. One possible approach to handling the singularity of \(\mathbf{A}\) is to leverage the shaving-off technique by Jacobson [14] while fixing a single control point during the control point search. Namely, when rewriting the optimization problem in Eq. 1 into a system of linear equations as follows: \[\begin{bmatrix}\mathbf{A}&\mathbf{S}^{\top}\\ \mathbf{S}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{X}\\ \Lambda\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\ \mathbf{C}\end{bmatrix}, \tag{13}\] where \(\Lambda\in\mathbb{R}^{K\times 3}\) is a matrix of Lagrange multipliers, and assuming the fixed control point occupies the last index of vertices without loss of generality, the matrix on the left side of the linear system can be re-split in a way to shave off the last row and column of the Bilaplacian matrix \(\mathbf{A}\) while expanding the selector matrix \(\mathbf{S}\) and the zero matrix region, resulting in \(\tilde{\mathbf{A}}\in\mathbb{R}^{(N-1)\times(N-1)}\), \(\tilde{\mathbf{S}}\in\mathbb{R}^{(K+1)\times(N-1)}\), and \(\tilde{\mathbf{Z}}\in\mathbb{R}^{(K+1)\times(K+1)}\) as follows (see Fig. 3): \[\begin{bmatrix}\tilde{\mathbf{A}}&\tilde{\mathbf{S}}^{\top}\\ \tilde{\mathbf{S}}&\tilde{\mathbf{Z}}\end{bmatrix}\begin{bmatrix}\mathbf{X}\\ \Lambda\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\ \tilde{\mathbf{C}}\end{bmatrix}, \tag{14}\] where \(\tilde{\mathbf{C}}\in\mathbb{R}^{(K+1)\times 3}\) is the concatenation of a zero row and \(\mathbf{C}\in\mathbb{R}^{K\times 3}\). Since the Bilaplacian matrix \(\mathbf{A}\) has rank \(N-1\), \(\tilde{\mathbf{A}}\), obtained by removing the last row and column of \(\mathbf{A}\), has full rank. Thus, the Schur complement trick in Sec. 4.2 can now be directly utilized. One difference is that the bottom-right block matrix on the left side of the new linear system is not a zero matrix but \(\tilde{\mathbf{Z}}\). Hence, Eq. 11 needs to be modified as follows: \[\mathbf{M}=-\tilde{\mathbf{T}}\tilde{\mathbf{A}}^{-1}\tilde{\mathbf{S}}^{\top}\left(\tilde{\mathbf{S}}\tilde{\mathbf{A}}^{-1}\tilde{\mathbf{S}}^{\top}-\tilde{\mathbf{Z}}\right)^{-1}, \tag{15}\] where \(\tilde{\mathbf{T}}\in\{0,1\}^{(N-K)\times(N-1)}\) is the complement of the selector matrix \(\mathbf{T}\) without the last column.
Note that the last vertex represents the fixed control point, and thus, the complementary set of the control point selection in \(\tilde{\mathbf{T}}\) is not changed. \(\mathbf{W}(\mathbf{S};\mathbf{A})\) in Eq. 12 is reformulated accordingly as follows: \[\tilde{\mathbf{W}}(\mathbf{S};\mathbf{A})=\tilde{\mathbf{S}}^{\top}+\tilde{\mathbf{T}}^{\top}\left(\tilde{\mathbf{T}}\tilde{\mathbf{A}}^{-1}\tilde{\mathbf{S}}^{\top}\left(\tilde{\mathbf{S}}\tilde{\mathbf{A}}^{-1}\tilde{\mathbf{S}}^{\top}-\tilde{\mathbf{Z}}\right)^{-1}\right). \tag{16}\] The positions of the vertices, excluding the fixed single control point, can be computed as \(\tilde{\mathbf{W}}\tilde{\mathbf{C}}\); note that the position of the control point is given. Alternatively, one can simply consider regularizing the Bilaplacian matrix \(\mathbf{A}\) by adding a small-weighted identity matrix (e.g., \(\mathbf{A}+\varepsilon\mathbf{I}\)), approximating the solution while achieving numerical stability. In our experiments, we empirically find that this simple approach, which does not even require fixing any control points, performs well in practice for identifying the best set of control points. As a result, we use this regularization approach in our implementation.

Figure 3: Shaving-off technique handling the singularity of the Bilaplacian matrix \(\mathbf{A}\). Assuming the last vertex is fixed as one of the control points, we define a new matrix \(\tilde{\mathbf{A}}\) by removing the last row and column of \(\mathbf{A}\), and we also expand the selector matrix \(\mathbf{S}\) and the zero matrix region. Since the Bilaplacian matrix has rank \(N-1\), the new matrix \(\tilde{\mathbf{A}}\) has full rank, and thus the Schur complement trick in Sec. 4.2 can be used.

### Control Point Search Algorithm

Although the computation of the biharmonic weight matrix \(\mathbf{W}\) in Eq. 12 is fast, finding the \(K\) optimal control points that best align the template mesh to the target shapes via deformation remains computationally infeasible when an exhaustive search of \(\binom{N}{K}\) computations is used. To address this issue, we propose an effective search algorithm that reduces the time complexity to \(\Theta(N+K^{2})\) on average, asymptotically linear in the number of vertices. Despite this reduction in complexity, our algorithm still manages to discover nearly optimal solutions in practice. In our search algorithm, our objective is to iteratively refine a set of control points starting from an initial configuration. We utilize geodesic Farthest Point Sampling (FPS) over the surface of the mesh to establish the initial set. Our algorithm incorporates two key ideas:

* Drawing inspiration from the coordinate descent approach in continuous optimization, we propose to determine the optimal location for each control point individually, while keeping all other current control points fixed.
* We propose a level-of-detail approach where, at each iteration of updating a single control point, we first select one of the partitioned volumes of the template mesh and then traverse each vertex in the selected partition.

Specifically, let \((\mathbf{s}_{k})_{k=1}^{K}\) denote the ordered list of template vertex indices for the control points, where \(\mathbf{s}_{k}\in[1,N]\) for all \(k\), and \(\mathbf{s}_{k}\neq\mathbf{s}_{l}\) for all distinct \(k\) and \(l\). Let \(\mathbf{S}((\mathbf{s}_{k})_{k=1}^{K})\) then represent the \(K\times N\) binary matrix, with elements equal to one for the selected points and zero otherwise.
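Each candidate set evaluated during the search reuses the cached \(\mathbf{A}^{-1}\) and solves only the small \(K\times K\) system of Eq. 12. As a quick numerical sanity check that Eq. 12 reproduces Eq. 2, here is a sketch with a synthetic SPD matrix standing in for the regularized \(\mathbf{A}+\varepsilon\mathbf{I}\) (all sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 16
G = rng.standard_normal((N, N))
A = G @ G.T + 1e-6 * np.eye(N)     # invertible stand-in for A + eps * I
Ainv = np.linalg.inv(A)            # precomputed once, shared by all candidates

idx = rng.choice(N, size=K, replace=False)
mask = np.zeros(N, dtype=bool)
mask[idx] = True
S = np.eye(N)[mask]                # K x N selector matrix
T = np.eye(N)[~mask]               # (N-K) x N complementary selector

# Eq. (2): requires solving an (N-K) x (N-K) system per candidate set.
W_orig = S.T - T.T @ np.linalg.solve(T @ A @ T.T, T @ A @ S.T)
# Eq. (12): only a K x K inverse, given the cached A^{-1}.
W_fast = S.T + T.T @ (T @ Ainv @ S.T @ np.linalg.inv(S @ Ainv @ S.T))

print(np.abs(W_orig - W_fast).max())   # agreement up to round-off
```

Only the last two matrix products depend on the candidate set, which is what makes evaluating thousands of candidate sets during the search cheap.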
The indices of the control points are initialized with the FPS point indices \((\mathbf{s}_{k}^{(0)})_{k=1}^{K}\). We construct the partition of the vertex indices \(\{V_{k}\}_{k=1}^{K}\) based on their proximity to the initial set of control points \((\mathbf{s}_{k}^{(0)})_{k=1}^{K}\), as shown inside the black boxes of Fig. 2. Since our algorithm allows the selection of internal vertices as control points, we employ the distance over the volume mesh graph as a measure of proximity. We update each control point \(\mathbf{s}_{k}\) sequentially using the following two steps for each point. (See Alg. 1 for the details.)

In the first step, we determine the partition to which the \(k\)-th control point \(\mathbf{s}_{k}\) will move. We randomly sample a vertex from each partitioned volume \(V_{l}\). Then, we select the one of the sampled vertices that provides the minimum sum of distances between the template mesh and all the target shapes after deformation (as shown in Eq. 3) when substituted for \(\mathbf{s}_{k}\). By finding the vertex with the minimum sum of fitting distances, we identify the corresponding partitioned region \(V\) for further exploration. (See the FindRegion function in Alg. 1.) This approach allows each control point to explore different regions across the entire shape, mitigating the risk of falling into a local minimum through local search.

In the second step, within the selected region \(V\), we find the best vertex, excluding those already chosen as control points, as a replacement for the \(k\)-th control point \(\mathbf{s}_{k}\) using the same distance measurement in Eq. 3. The vertex selected during this step becomes the new \(k\)-th control point for the subsequent iteration. (See the FindVertex function in Alg. 1.)

Assuming an even partitioning of the template mesh with an equal number of vertices in each region, the average time complexity to update the entire set of control points once is asymptotically \(\mathcal{O}(K^{2})\) for the first step and \(\Theta(N)\) for the second step. Therefore, the total complexity is \(\Theta(N+K^{2})\). This complexity is linear with respect to the number of vertices, and since \(K\ll N\), it significantly reduces the computation time compared to the exhaustive search complexity of \(\mathcal{O}(N^{K})\).

## 5 Experiments

In this section, we present the results of our experiments, where we compare the performance of our proposed method, OptCtrlPoints, with baseline search methods and a neural-network-based keypoint prediction method. We evaluate the performance based on the fitting error to the target shapes after deformation and the computation time.

### Datasets

We evaluate our method on three different datasets of human (SMPL [LMR*15]) and animal (SMAL [2] and DeformingThings4D [LTT*21]) models. For each class of shapes, we take one template mesh and multiple target shapes covering a wide range of non-rigid deformations.

#### 5.1.1 SMPL [LMR*15] and SMAL [2]

For human models, we use synthetic shapes generated from SMPL [LMR*15]. We use the samples generated by Groueix et al. [1], which contain a large variety of body poses and shapes. For animal models, we use four classes of shapes including _fox_, _hippo_, _horse_, and _tiger_ generated from SMAL [2]. We follow Groueix et al. [1] to randomly draw the shape samples. We run our searching algorithm on each animal category separately.
For both SMPL and SMAL, we use the rest pose as the template shape and take \(M=1000\) random shapes for each template as targets \(\{X_{i}\}_{i=1}^{M}\).

#### 5.1.2 DeformingThings4D [LTT*21]

DeformingThings4D [LTT*21] contains characters from Adobe Mixamo and also multiple animated motions for each character. We evaluate our approach both with _all_ motions for _each_ character and also with each motion separately to demonstrate the effectiveness of our _data-driven_ method of locating the ideal set of control points best fitting the given set of targets. We use seven characters in our experiments, namely _bear_, _deer_, _doggie_, _dragon_, _moose_, _procy_ and _raccoon_. For each character, we randomly sample \(M=1000\) different targets from all motions for the per-category experiments, while we use all the frames of the motion as targets for the per-motion experiments. We use the first frame of a specific animation sequence as the template shape. Refer to the appendix for more details about the data used in our experiments.

### Experiment Setup

#### 5.2.1 Data Preprocessing and Implementation Details

For all template meshes, we first convert each mesh into a watertight manifold using the method of Huang et al. [1] and then into a tetrahedral mesh using TetWild [1]. We simplify and regularize each tetrahedral mesh to have 5000 vertices and also normalize it to fit in a unit sphere. We then precompute the Bilaplacian matrix \(\mathbf{A}\) for each template mesh using libigl [IP\({}^{*}\)18], along with its inverse under the regularization described in Sec. 4.2.1, for efficient searching. To compute \(\mathbf{C}(\mathbf{S};\mathcal{X}_{i})=\mathbf{S}f(\mathcal{X}_{i})\) (Sec. 4.1), we leverage the vertex-wise correspondence of the meshes provided by SMPL [LMR*15], SMAL [2], and DeformingThings4D [LTT*21], while the correspondence can also be found using off-the-shelf shape correspondence methods (refer to a survey [1] for the recent literature). Given the vertex-wise correspondence, we use the average of per-vertex L2 distance as our shape-to-shape distance function \(d(\cdot,\cdot)\) (Sec. 4.1). We executed Alg. 1 only once in all the experiments, but we also demonstrate in Sec. 5.6 that iterating the algorithm further improves the selection of control points.

Figure 4: **Qualitative Results on the SMPL [LMR\({}^{*}\)15] and SMAL [ZKJB17] datasets.** We show qualitative comparisons of our approach compared to FPS, random search and KPD, respectively. For each example: (Left) We show each method's output control points and the corresponding deformed template (white) overlayed over the desired target (yellow) to illustrate the alignment of the deformed source to the target shapes using the output control points. Notice that our approach finds better control points near joints, as shown in the shoulder of the human and the legs of the fox, horse, hippo and tiger. (Right) We also show the raw output deformed source shape colored with a vertex-to-vertex alignment-to-target error map for visualization. We see that apart from achieving better fitting error, our approach produces less distortion, especially on the limbs. In all examples, the bottom row is a zoomed-in version of the top row. **Best viewed in zoom and color. Refer to the appendix for additional results.**

Figure 5: **Qualitative Results on DeformingThings4D [LTT\({}^{*}\)21].** We show qualitative comparisons of our approach in both per-category and per-motion settings compared to FPS.
(Left) We show the output control points together with the corresponding deformed source shape (white) overlayed with the desired target (yellow). We see that our approach leads to better fitting compared to the FPS baseline. Moreover, our per-motion setting outputs more specialized control points that lead to better fitting to the specific motion, as shown by the legs of the bear (first row), the heads of the deer and raccoon (second and last rows), the tail of the doggie (third row), and the back and head of the horse (fourth row). (Right) We similarly show a colored visualization of the target shape that corresponds to the vertex-to-vertex error map. We see significantly less distortion in our output shapes compared to the baseline. **Best viewed in zoom and color. Refer to the appendix for additional results.**

#### 5.2.2 Baselines

We compare our method, OptCtrlPoints, with three different baselines:

1. **Farthest Point Sampling (FPS)**: This is the case of directly using the geodesic Farthest Point Sampling vertices, our initial set of control points \(\big{(}\mathbf{s}_{k}^{(0)}\big{)}_{k=1}^{K}\), as the final set without searching. We demonstrate in our experiments that our efficient searching method discovers a much better set of control points, which greatly reduces the fitting error.
2. **Random Search**: Instead of the computationally infeasible exhaustive search that tests all possible \(\binom{N}{K}\) cases, we compare our method with a random search approach. In random search, we randomly select \(K\) control points from the \(N\) vertices multiple times and choose the set with the lowest fitting distance. We iterate the random set sampling process \(N\times K\) times, which is significantly larger than the number of cases in our method.
3. **KeypointDeformer (KPD) [JTM\({}^{*}\)20]**: We additionally compare our method with KPD, which leverages a neural-network-based approach to predict keypoints on shapes and align the template to the target through cage-based deformation (not biharmonic). We trained KPD for each template shape and its corresponding set of \(M\) targets using the released codebase. The cage created by warping a _sphere_ mesh often fails to disentangle the deformations of different parts, especially in shapes with articulating parts (such as the limbs of a human body) or complex topological structures. In our experiments, we demonstrate that our method, employing biharmonic deformation, achieves a better fit of the template shape to the targets with the ideal set of control points.

### Fitting Error Comparisons with SMPL [LMR\({}^{*}\)15] and SMAL [ZKJB17]

We evaluate our OptCtrlPoints and other baselines by assessing how well the template mesh aligns with the target shapes using the output deformation handles. We conducted experiments with all the methods while varying the number of keypoints \(K\) to be 16, 24, and 32. First, we compare our method to FPS and random search, where control points are used as handles for biharmonic deformation. Tab. 1 presents a comparison of the average fitting distances between the template and target shapes. Compared to FPS, where our initial set of control points is used directly, our method demonstrates a significant improvement, reducing the fitting distance by more than half in most cases. Additionally, when compared to random search, which explores a much larger number of control point sets, our efficient search yields substantially lower fitting errors. Qualitative results in Fig.
4 also show that our method achieves more meaningful fine-grained deformations than FPS and random search, thanks to the optimal placement of control points in regions with greater variations or articulations. The first column shows the templates, the second column displays the targets, the next four columns demonstrate the alignment results through deformation, and the last four columns exhibit the fitting error maps over the deformed template shapes. Notably, failure cases of FPS and random search are observed, resulting in distortion in the deformation, such as in the _second row_, where the legs of the fox are distorted, and in the _last row_, where the legs and body of the tiger deviate from the target shapes. In contrast, our method's control points produce much better deformation results. Furthermore, compared to KPD [JTM\({}^{*}\)20], which employs cage-based deformation instead of biharmonic deformation, our method excels in fitting the template to the targets through deformation. Tab. 1 clearly illustrates a substantial gap between the average fitting errors of KPD and our method. Moreover, Fig. 4 vividly showcases the qualitative difference, especially in the arms and legs of the human (_first row_) and the hind legs of the fox and the horse (_second_ and _fourth rows_). **Figures are best viewed with zoom and in color. Additional qualitative results can be found in the supplementary material**.

### Computation Time Analysis

Our proposed reformulation for the biharmonic weight matrix computation (Sec. 4.2) and the efficient search algorithm (Sec. 4.3) enable us to determine the optimal control point locations in a matter of minutes, even when dealing with 1000 target shapes. Without both of these advancements, achieving this task computationally would be challenging.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multicolumn{7}{c}{Average Fitting Distance (\(\times 10^{-4}\))} \\ \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{K} & SMPL & \multicolumn{4}{c}{SMAL [ZKJB17]} \\ \cline{3-7} & & [LMR\({}^{*}\)15] & fox & hippo & horse & tiger \\ \hline \multirow{3}{*}{KPD} & 16 & 17.09 & 23.12 & 23.41 & 34.82 & 21.53 \\ & 24 & 17.16 & 25.63 & 23.65 & 28.42 & 22.92 \\ & 32 & 14.49 & 25.70 & 20.75 & 32.47 & 20.15 \\ \hline \multirow{3}{*}{FPS} & 16 & 11.28 & 10.16 & 12.57 & 10.37 & 8.47 \\ & 24 & 5.70 & 4.41 & 8.32 & 3.92 & 3.76 \\ & 32 & 3.60 & 2.97 & 5.38 & 2.52 & 2.60 \\ \hline \multirow{3}{*}{Random Search} & 16 & 8.38 & 8.18 & 9.20 & 7.88 & 6.78 \\ & 24 & 3.91 & 4.78 & 5.38 & 4.52 & 3.80 \\ & 32 & 2.51 & 3.40 & 3.63 & 2.97 & 2.59 \\ \hline \multirow{3}{*}{Ours} & 16 & **5.16** & **5.07** & **5.49** & **4.45** & **4.15** \\ & 24 & **2.25** & **2.61** & **3.06** & **2.13** & **2.21** \\ & 32 & **1.36** & **1.89** & **1.83** & **1.57** & **1.42** \\ \hline \hline \end{tabular} \end{table} Table 1: Average fitting distance between corresponding vertices of the target shape and the deformed template shape, multiplied by \(10^{4}\).

\begin{table} \begin{tabular}{c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Time (mins)} \\ \cline{2-4} & \(K=16\) & \(K=24\) & \(K=32\) \\ \hline Random Search & 48.8 & 70.4 & 97.0 \\ \hline Ours & **2.8** & **2.9** & **3.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Execution time profiling of our OptCtrlPoints compared to random search.
We show that our method demonstrates significantly faster performance compared to random search, as our time complexity is reduced to \(\Theta(N+K^{2})\), whereas the time requirement increases linearly with the number of control points for random search. We first present profiling results comparing the computation time of the fitting loss (Eq. 3) using the original biharmonic weight formulation (Eq. 2) and our reformulation (Eq. 12). When computing the fitting loss with 16 control points, utilizing a single NVIDIA RTX 3090 and parallelization, our PyTorch implementation takes 1.188 seconds for the original formulation, while our reformulation with precomputation completes the calculation in only 0.024 seconds, which is **49 times faster** than the original formulation. We also demonstrate the efficiency of our method compared to the naive random search approach for finding a better solution. We conducted profiling to compare the overall execution time of our OptCtrlPoints and random search. Tab. 2 presents the average runtime of OptCtrlPoints compared to random search. In random search, we sample subsets of vertices \(N\times K\) times, whereas our method has linear-order complexity with respect to the number of vertices. Consequently, we achieve approximately \(K\) times more speedup, as shown in Tab. 2, resulting in a significant reduction in computation time from about an hour to approximately **3 minutes**, while also obtaining a better set of control points (as demonstrated in the quantitative results in Tab. 1). These findings emphasize the crucial role played by both the new biharmonic weight formulation and the efficient search algorithm, as **without them, it would take days** to find a satisfactory set of control points. For instance, when the number of control points \(K\) is \(16\), \(48.8\,\mathrm{mins}\times 49=2391.2\,\mathrm{mins}=1.66\,\mathrm{days}\).

### Results with DeformingThings4D [LTT\({}^{*}\)21]

In the experiment with the DeformingThings4D dataset, we highlight the effectiveness of our OptCtrlPoints in discovering the optimal set of control points for a given set of targets in a _data-driven_ manner. We present two distinct experimental setups: per-category and per-motion. Quantitative results are presented in Table 3, where our approach outperforms both FPS and random search in both the per-category and per-motion setups. Notably, our OptCtrlPoints consistently achieves a lower fitting error in the per-motion setup compared to the per-category setup. This outcome is attributed to our _data-driven_ approach, which enables us to discover control points that are tailored to the given set of targets. Fig. 5 presents qualitative results. In the _third row_, FPS fails to preserve the geometry of the dog's legs and tail, while our approach successfully retains the leg geometry during the bowl motion by identifying control points on the joints. Moreover, our per-motion approach significantly outperforms the FPS baseline in scenarios involving larger motions and wider variations in the targets. For example, in the _second row_, our method achieves much better results in recovering the geometry of the deer's face and legs, whereas FPS falls short in this regard. Also, in the _fifth row_, our approach excels at capturing head deformations by identifying additional control points on the raccoon's head and neck. **Figures are best viewed with zoom and in color.
Additional qualitative results can be found in the supplementary material.**

### Refinement through Iteration of Algorithm 1

We show that while executing Algorithm 1 only once can achieve desirable results, iterating through the algorithm can yield improved outcomes, as shown in Tab. 4. We see that through iterative execution of Alg. 1, the average fitting distance is further reduced, leading to better results on the SMPL dataset across different numbers of control points.

\begin{table} \begin{tabular}{c|c|c|c} \hline \hline \multirow{2}{*}{Iteration} & \multicolumn{3}{c}{SMPL [LMR\({}^{*}\)15]} \\ \cline{2-4} & \(K=16\) & \(K=24\) & \(K=32\) \\ \hline 1 & 5.14 & 2.31 & 1.34 \\ \hline 2 & 4.77 & 1.95 & 1.17 \\ \hline 3 & 4.74 & 1.80 & 1.15 \\ \hline \hline \end{tabular} \end{table} Table 4: Results from iterating Alg. 1. Fitting distance is improved through iterative execution of Alg. 1 on the SMPL dataset across different numbers of control points. The L2-loss across corresponding vertices is multiplied by \(10^{4}\).

\begin{table} \begin{tabular}{c|c|c c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{3}{*}{Methods} & \multirow{3}{*}{K} & \multicolumn{14}{c}{Average Fitting Distance (\(\times 10^{-4}\))} \\ \cline{3-16} & & \multicolumn{2}{c|}{Bear (3EP)} & \multicolumn{2}{c|}{Deer (OMG)} & \multicolumn{2}{c|}{Doggie (MN5)} & \multicolumn{2}{c|}{Dragon (OF2)} & \multicolumn{2}{c|}{Moose (1DOG)} & \multicolumn{2}{c|}{Procy (STEM)} & \multicolumn{2}{c}{Raccoon (VGG)} \\ \cline{3-16} & & Cat. & Mot. & Cat. & Mot. & Cat. & Mot. & Cat. & Mot. & Cat. & Mot. & Cat. & Mot. & Cat. & Mot. \\ \hline \multirow{3}{*}{FPS} & 16 & 18.79 & 16.06 & 15.89 & 17.40 & 20.13 & 16.97 & 45.33 & 44.35 & 14.72 & 14.99 & 26.24 & 24.57 & 30.83 & 27.88 \\ & 24 & 8.78 & 7.72 & 5.33 & 5.59 & 7.48 & 6.24 & 26.35 & 26.31 & 6.66 & 6.57 & 15.24 & 14.40 & 15.68 & 15.65 \\ & 32 & 6.82 & 6.05 & 3.10 & 3.14 & 6.32 & 5.31 & 16.85 & 17.06 & 4.94 & 4.86 & 9.89 & 9.70 & 10.87 & 11.35 \\ \hline \multirow{3}{*}{Random Search} & 16 & 16.17 & 12.59 & 8.35 & 7.14 & 14.76 & 11.27 & 21.57 & 19.91 & 14.04 & 12.25 & 20.16 & 17.86 & 27.75 & 24.99 \\ & 24 & 9.83 & 7.77 & 3.74 & 3.32 & 8.63 & 6.65 & 16.04 & 15.41 & 6.77 & 6.37 & 12.19 & 10.36 & 16.54 & 15.12 \\ & 32 & 5.95 & 5.05 & 2.54 & 2.31 & 5.86 & 4.56 & 14.02 & 13.50 & 4.52 & 4.17 & 7.82 & 6.92 & 11.96 & 11.23 \\ \hline \multirow{3}{*}{Ours} & 16 & 9.67 & **7.65** & 5.23 & **4.19** & 8.72 & **6.67** & 17.80 & **16.76** & 6.62 & **6.37** & 13.14 & **11.10** & 16.13 & **14.56** \\ & 24 & 5.16 & **4.06** & 1.72 & **1.56** & 4.57 & **3.54** & 10.68 & **10.31** & 3.78 & **3.51** & 6.46 & **5.50** & 9.10 & **8.42** \\ & 32 & 3.47 & **2.73** & 1.28 & **1.15** & 2.95 & **2.31** & 9.29 & **8.95** & 2.74 & **2.44** & 4.35 & **3.73** & 5.80 & **5.37** \\ \hline \hline \end{tabular} \end{table} Table 3: Target-driven shape deformation results for the DeformingThings4D [LTT\({}^{*}\)21] dataset. The control points identified by our OptCtrlPoints method achieve better alignment of the template to the targets compared to FPS and random search. Moreover, when specific motion targets (per-motion, denoted as Mot.) are provided instead of general targets across all motions (per-category, denoted as Cat.), the control points are further tailored, resulting in even lower fitting distances. The L2-loss across corresponding vertices is multiplied by \(10^{4}\).
## 6 Conclusion

We introduced OptCtrlPoints, a data-driven method for determining the optimal set of control points to replicate target shapes as biharmonic deformations of the template mesh. To address the computational challenges associated with finding the best \(K\)-subset out of \(N\) vertices while solving a large-scale linear system at each trial, we proposed a reformulation of the biharmonic weights that significantly speeds up the computation. Additionally, we developed an efficient search algorithm that significantly outperforms random search in terms of both quality and time efficiency. In future work, we plan to extend our method to identify _region_ handles of 3D shapes. This extension will allow us to handle more localized and specific deformations in a controlled and data-driven manner.

This work was partly supported by NRF grant (RS-2023-00209723) and IITP grants (2022-0-00594, RS-2023-00227592) funded by the Korean government (MSIT), Technology Innovation Program (20016615) funded by the Korean government (MOTIE), and grants from ETRI, KT, NCSOFT, and Samsung Electronics. Leonidas Guibas acknowledges support from an ARL grant W911NF-21-2-0104, a Vannevar Bush Faculty Fellowship, and gifts from the Adobe and Snap corporations. Despoina Paschalidou acknowledges support from the Swiss National Science Foundation under grant number P500PT 206946.
We propose OptCtrlPoints, a data-driven framework for reproducing target shapes: it identifies the optimal sparse set of control points for biharmonic 3D shape deformation. Control-point-based 3D deformation methods are widely used in interactive shape editing, and their usability improves when the control points are sparse yet strategically placed across the shape. To achieve this goal, we introduce a data-driven method for determining the optimal set of control points, assuming that a set of possible shape variations is given. The main challenge of this task is its high computational cost, which stems from solving a large linear system for the biharmonic weight computation and from searching for the optimal subset of mesh vertices. To overcome these challenges, the biharmonic computation is reformulated so that the matrix size depends on the number of control points rather than the number of vertices, and an efficient search algorithm is employed.
2310.00326
The Beauty of Roots
A "Littlewood polynomial" is a polynomial whose coefficients are all 1 or -1. The set of all complex roots of all Littlewood polynomials exhibits many complicated, beautiful and fascinating patterns. Some fractal regions of this set closely resemble "dragon sets" formed by iterated function systems. A heuristic argument for this is known, but no precise theorem along these lines has been proved. We invite the reader to try.
John C. Baez
2023-09-30T10:04:57
http://arxiv.org/abs/2310.00326v1
# The beauty of roots

###### Abstract.

One of the charms of mathematics is that simple rules can generate complex and fascinating patterns, which raise questions whose answers require profound thought. For example, if we plot the roots of all polynomials of degree \(23\) whose coefficients are all \(1\) or \(-1\), we get an astounding picture, shown in Figure 1.

Figure 1. Roots of all polynomials of degree \(23\) whose coefficients are \(\pm 1\). The brightness shows the number of roots per pixel.

More generally, define a **Littlewood polynomial** to be a polynomial \(p(z)=\sum_{i=0}^{d}a_{i}z^{i}\) with each coefficient \(a_{i}\) equal to \(1\) or \(-1\). Let \(\mathbf{X}_{n}\) be the set of complex numbers that are roots of some Littlewood polynomial with \(n\) nonzero terms (and thus degree \(n-1\)). The \(4\)-fold symmetry of Figure 1 comes from the fact that if \(z\in\mathbf{X}_{n}\), so are \(-z\) and \(\overline{z}\). The set \(\mathbf{X}_{n}\) is also invariant under the map \(z\mapsto 1/z\), since if \(z\) is the root of some Littlewood polynomial then \(1/z\) is a root of the polynomial with coefficients listed in the reverse order. It turns out to be easier to study the set \[\mathbf{X}=\bigcup_{n=1}^{\infty}\mathbf{X}_{n}=\{z\in\mathbb{C}\,|\;z\text{ is the root of some Littlewood polynomial}\}.\] If \(n\) divides \(m\) then \(\mathbf{X}_{n}\subseteq\mathbf{X}_{m}\), so \(\mathbf{X}_{n}\) for a highly divisible number \(n\) can serve as an approximation to \(\mathbf{X}\), and this is why we drew \(\mathbf{X}_{24}\). Some general properties of \(\mathbf{X}\) are understood. It is easy to show that \(\mathbf{X}\) is contained in the annulus \(1/2<|z|<2\). On the other hand, Thierry Bousch showed [2] that the closure of \(\mathbf{X}\) contains the annulus \(2^{-1/4}\leq|z|\leq 2^{1/4}\). This means that the holes near roots of unity visible in the sets \(\mathbf{X}_{d}\) must eventually fill in as we take the union over all degrees \(d\). More surprisingly, Bousch showed in 1993 that the closure \(\overline{\mathbf{X}}\) is connected and locally path-connected [3]. It is worth comparing the work of Odlyzko and Poonen [7], who previously showed a similar result for roots of polynomials whose coefficients are all \(0\) or \(1\).

Figure 2. The region of \(\mathbf{X}_{24}\) near the point \(z=\frac{1}{2}e^{i/5}\).

The big challenge is to understand the diverse, complicated and beautiful patterns that appear in different regions of the set \(\mathbf{X}\). There are websites that let you explore and zoom into this set online [4, 5, 8]. Different regions raise different questions. For example, what is creating the fractal patterns in Figure 2 and elsewhere? An anonymous contributor suggested a fascinating line of attack which was further developed by Greg Egan [5]. Define two functions from the complex plane to itself, depending on a complex parameter \(q\): \[f_{+q}(z)=1+qz,\qquad f_{-q}(z)=1-qz.\] When \(|q|<1\) these are both contraction mappings, so by a theorem of Hutchinson [6] there is a unique nonempty compact set \(D_{q}\subseteq\mathbb{C}\) with \[D_{q}=f_{+q}(D_{q})\cup f_{-q}(D_{q}).\] We call this set a **dragon**, or the \(\boldsymbol{q}\)**-dragon** to be specific. And it seems that _for \(|q|<1\), the portion of the set \(\mathbf{X}\) in a small neighborhood of the point \(q\) tends to look like a rotated version of \(D_{q}\)_. Figure 3 shows some examples. To precisely describe what is going on, much less prove it, would take real work. We invite the reader to try.
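Both kinds of pictures are easy to reproduce; a minimal NumPy sketch (the degree and sample counts are modest, illustrative choices):

```python
import numpy as np

# Roots of all Littlewood polynomials with n = 12 terms (degree 11).
# Fixing the leading coefficient to +1 loses nothing, since p and -p
# have the same roots.
n = 12
roots = []
for bits in range(2 ** (n - 1)):
    coeffs = [1] + [1 if (bits >> k) & 1 else -1 for k in range(n - 1)]
    roots.extend(np.roots(coeffs))
roots = np.asarray(roots)          # scatter-plot these to approximate Fig. 1

# Approximate the q-dragon D_q by the "chaos game": repeatedly apply the
# maps f_{+q}(z) = 1 + q z and f_{-q}(z) = 1 - q z with random signs.
def dragon(q, n_pts=20000, depth=40, seed=0):
    rng = np.random.default_rng(seed)
    z = np.zeros(n_pts, dtype=complex)
    for _ in range(depth):
        z = 1 + rng.choice([-1.0, 1.0], size=n_pts) * q * z
    return z

D = dragon(0.5 * np.exp(1j / 5))   # near the region shown in Figure 2
```

Plotting `roots` restricted to a small disc around \(q=\frac{1}{2}e^{i/5}\) next to a rotated copy of `D` makes the resemblance described above quite striking.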
A heuristic explanation is known, which can serve as a starting point [1, 5]. Bousch [3] has also proved this related result: **Theorem**.: _For \(q\in\mathbb{C}\) with \(|q|<1\), we have \(q\in\overline{\mathbf{X}}\) if and only if \(0\in D_{q}\). When this holds, the set \(D_{q}\) is connected._
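The criterion in the theorem can also be probed numerically; a rough sketch (sample sizes are arbitrary, and the chaos-game minimum only overestimates the true distance from \(0\) to \(D_{q}\)):

```python
import numpy as np

def dist0_dragon(q, n_pts=50000, depth=60, seed=0):
    """Crude estimate of dist(0, D_q) by random iteration of f_{+-q}(z) = 1 +- qz."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_pts, dtype=complex)
    for _ in range(depth):
        z = 1 + rng.choice([-1.0, 1.0], size=n_pts) * q * z
    return np.abs(z).min()

q = 0.6180339887          # root of 1 - z - z^2, a Littlewood polynomial, so q is in X
print(dist0_dragon(q))    # small: consistent with 0 lying in D_q
print(dist0_dragon(0.3))  # bounded away from 0: for real 0 < q < 1/2, 0 is not in D_q
```

For \(|q|<1/2\), every point of \(D_{q}\) has modulus at least \(1-\frac{|q|}{1-|q|}>0\), matching the fact noted above that \(\mathbf{X}\) avoids the disc \(|z|\leq 1/2\).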
A Littlewood polynomial is a polynomial whose coefficients are all 1 or -1. The set of all complex roots of Littlewood polynomials exhibits many complicated, beautiful, and fascinating patterns. Some fractal regions of this set closely resemble the "dragon sets" formed by iterated function systems. A heuristic argument for this relationship is known, but no precise theorem along these lines has been proved, and the reader is invited to try.
2302.00137
Quantization of the Energy for the inhomogeneous Allen-Cahn mean curvature
We consider the varifold associated to the Allen--Cahn phase transition problem in $\mathbb R^{n+1}$(or $n+1$-dimensional Riemannian manifolds with bounded curvature) with integral $L^{q_0}$ bounds on the Allen--Cahn mean curvature (first variation of the Allen--Cahn energy) in this paper. It is shown here that there is an equidistribution of energy between the Dirichlet and Potential energy in the phase field limit and that the associated varifold to the total energy converges to an integer rectifiable varifold with mean curvature in $L^{q_0}, q_0 > n$. The latter is a diffused version of Allard's convergence theorem for integer rectifiable varifolds.
Huy The Nguyen, Shengwen Wang
2023-01-31T22:57:54
http://arxiv.org/abs/2302.00137v2
# Quantization of the energy for the inhomogeneous Allen-Cahn mean curvature

###### Abstract.

We consider the varifold associated to the Allen-Cahn phase transition problem in \(\mathbb{R}^{n+1}\) (or \(n+1\)-dimensional Riemannian manifolds with bounded curvature) with integral \(L^{q_{0}}\) bounds on the Allen-Cahn mean curvature (first variation of the Allen-Cahn energy) in this paper. It is shown here that there is an equidistribution of energy between the Dirichlet and Potential energy in the phase field limit and that the associated varifold to the total energy converges to an integer rectifiable varifold with mean curvature in \(L^{q_{0}},q_{0}>n\). The latter is a diffused version of Allard's convergence theorem for integer rectifiable varifolds.

## 1. Introduction

Let \(\Omega\subset(M^{n+1},g)\) be an open subset in a Riemannian manifold with bounded curvature. Consider \(u\in W^{2,p}(\Omega)\) satisfying the following equation \[\varepsilon\Delta u_{\varepsilon}-\frac{W^{\prime}(u_{\varepsilon})}{\varepsilon}=f_{\varepsilon}, \tag{1.1}\] where \(W(t)=\frac{(1-t^{2})^{2}}{2}\) is a double-well potential. The equation (1.1) can be viewed as a prescribed first variation problem for the Allen-Cahn energy \[E_{\varepsilon}(u_{\varepsilon})=\int_{\Omega}\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon}\right)dx.\] For any compactly supported test vector field \(\eta\in C_{c}^{\infty}(\Omega,\mathbb{R}^{n+1})\), we have a variation \(u_{s}(x)=u\left(x+s\eta(x)\right)\) and the first variation formula at \(u_{0}=u_{\varepsilon}\) is given by \[\begin{split}\frac{d}{ds}\bigg{|}_{s=0}E_{\varepsilon}\left(u_{s}\right)&=\int_{\Omega}\left(-\varepsilon\Delta u_{\varepsilon}+\frac{W^{\prime}(u_{\varepsilon})}{\varepsilon}\right)\langle\nabla u_{\varepsilon},\eta\rangle dx\\ &=-\int_{\Omega}\left(\frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}|}\right)\langle\nu,\eta\rangle\varepsilon|\nabla u_{\varepsilon}|^{2}dx,\end{split} \tag{1.2}\] where \(\nu=\frac{\nabla u_{\varepsilon}}{|\nabla u_{\varepsilon}|}\) is a unit normal to the level sets at non-critical points of \(u\). By [13], [12], [14], using the framework of [1], the sequence of functionals \(E_{\varepsilon}\) \(\Gamma\)-converges to the \(n\)-dimensional area functional as \(\varepsilon\to 0\). This shows that minimizing solutions to (1.1) with \(f_{\varepsilon}=0\) converge as \(\varepsilon\to 0\) to area minimizing hypersurfaces. For general critical points (\(f_{\varepsilon}=0\)), a deep theorem of Hutchinson-Tonegawa [10, Theorem 1] shows the diffuse varifold obtained by smearing out the level sets of \(u\) converges to a limit which is a stationary varifold with \(a.e.\) integer density. The main result of this paper is to prove Hutchinson-Tonegawa's Theorem [14, Theorem 1] in the context of natural integrability conditions on the first variation of \(E_{\varepsilon}\). Under suitable controls on the first variation of the energy functional \(E_{\varepsilon}\) (the diffuse mean curvature), we can show comparable behaviour for the limit. In the case where \(n=2,3\), Roger-Schatzle [11] have shown, under the assumption \[\liminf_{\varepsilon\to 0^{+}}\left(E_{\varepsilon}(u_{\varepsilon})+\frac{1}{\varepsilon}\|f_{\varepsilon}\|_{L^{2}(\Omega)}^{2}\right)<\infty,\] that the limit is an integer rectifiable varifold with \(L^{2}\) generalised mean curvature. The main focus of this paper is to generalise this result to higher dimensions.
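For intuition, the one-dimensional standing wave makes the expected behaviour explicit: \(u(x)=\tanh(x/\varepsilon)\) solves (1.1) with \(f_{\varepsilon}=0\), and its Dirichlet and potential energy densities agree pointwise; this is the equidistribution of energy that the main theorem below (conclusion (iii)) establishes in the limit. A small numerical check (grid parameters arbitrary):

```python
import numpy as np

eps = 0.05
x = np.linspace(-1.0, 1.0, 4001)
u = np.tanh(x / eps)               # u' = (1 - u^2)/eps, so eps*u'' = W'(u)/eps
ux = np.gradient(u, x)

dirichlet = eps * ux**2 / 2.0                 # (eps/2) |u'|^2
potential = (1.0 - u**2) ** 2 / (2.0 * eps)   # W(u)/eps, W(t) = (1 - t^2)^2 / 2
print(np.max(np.abs(dirichlet - potential)))  # ~0 up to finite-difference error
```

In higher dimensions and with \(f_{\varepsilon}\neq 0\) this exact pointwise balance fails, and showing that the discrepancy still vanishes in the limit is part of what the theorem below asserts under the stated integral bounds.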
Before we state our main theorem, we give a choice of the diffused analogue of "mean curvature" in the Allen-Cahn setting, which will be used to state our bounded \(L^{q_{0}}\) Allen-Cahn mean curvature condition in the theorem. Recall that for an embedded hypersurface \(\Sigma^{n}\subset\Omega\subset\mathbb{R}^{n+1}\) restricted to a bounded domain \(\Omega\) and a compactly supported variation \(\Sigma_{s}\) with \(\Sigma_{0}=\Sigma\), we have the first variation area at \(s=0\) given by \[\frac{d}{ds}\bigg{|}_{s=0}\operatorname{Area}(\Sigma_{s}\cap\Omega)=-\int_{ \mathbb{R}^{n+1}}\langle\mathbf{H},\eta\rangle d\mu_{\Sigma}=\int_{\mathbb{R}^ {n+1}}H\langle\nu,\eta\rangle d\mu_{\Sigma}, \tag{1.3}\] where \(H\) is the mean curvature scalar, \(\mathbf{H}=-H\nu\) is the mean curvature vector, \(\nu\) is a unit normal vector field, \(\eta\) is the variation vector field, and \(d\mu_{\Sigma}\) is the hypersurface measure. By comparing the first variation formula (1.2) for Allen-Cahn energy and the first variation formula (1.3) for area, we can see that \(\left(\frac{f_{\varepsilon}}{\varepsilon|\nabla u|}\right)\) roughly plays the role of the mean curvature scalar in the Allen-Cahn setting. In [1], Allard proved that if a sequence of integral varifolds has \(L^{q_{0}}\) integrable mean curvature scalar with \(q_{0}>n\), then after passing to a subsequence, there is a limit varifold which is also integer rectifiable. Under similar conditions on \(L^{q_{0}}\) integrability of the term \(\left(\frac{f_{\varepsilon}}{\varepsilon|\nabla u|}\right)\) with \(q_{0}>n\), we prove the integer rectifiability of the limit of sequences of Allen-Cahn varifolds : **Theorem 1.1**.: _Let \(u_{\varepsilon}\in W^{1,2}(\Omega),\Omega\subset\mathbb{R}^{n+1}\) satisfy equation (1.1) with \(\varepsilon\to 0\) and \(f_{\varepsilon}\in L^{1}(\Omega)\). If the following holds:_ 1. _Bounds on the total energy_ (1.4) \[\int_{\Omega}\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W( u_{\varepsilon})}{\varepsilon}\right)dx\leq E_{0};\] 2. _Uniform_ \(L^{\infty}\) _bounds_ (1.5) \[\|u_{\varepsilon}\|_{L^{\infty}(\Omega)}\leq c_{0};\] 3. \(L^{q_{0}}\) _bounds on the diffuse mean curvature_ (1.6) \[\int_{\Omega}\left(\frac{|f_{\varepsilon}|}{\varepsilon|\nabla u_{\varepsilon }|}\right)^{q_{0}}\varepsilon|\nabla u_{\varepsilon}|^{2}dx\leq\Lambda_{0}\] _for some_ \(q_{0}>n\)_;_ _then after passing to a subsequence, we have for the associated varifolds (see [11] for the definition) \(V_{u_{\varepsilon}}\to V_{\infty}\) weakly and_ 1. \(V_{\infty}\) _is an integral_ \(n\)_-rectifiable varifold;_ 2. _For any_ \(B_{r}(x_{0})\subset\subset\Omega\)_, the_ \(L^{q_{0}}\) _norm of the generalized mean curvature of_ \(V_{\infty}\) _is bounded by_ \(\Lambda_{0}\)_;_ 3. _The discrepancy measure_ \(\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}-\frac{W(u_{ \varepsilon})}{\varepsilon}\right)dx\to 0\) _weakly, i.e. there is an equidistribution of energy as_ \(\varepsilon\to 0\) _(c.f. Proposition_ 4.4_)._ This theorem shows we can prove a result analagous to Hutchinson-Tonegawa [10], Tonegawa [11] and show as \(\varepsilon\to 0\), the diffuse varifold associated to the Allen-Cahn functional converges to an integer rectifiable varifold. This has some similarities with Allard's compactness theorem for rectifiable varifolds and for integral varifolds but here the sequence consists of diffuse varifolds and hence we require stronger conditions on the proposed mean curvature. 
As we shall see in a later paper, these conditions are exactly what is required to prove a version of Allard's regularity theorem for Allen-Cahn Varifolds As an application, we have the following Corollary **Corollary 1.2**.: _If \(u_{\varepsilon}\) satisfies (1.1) and of one of the following conditions holds:_ 1. \[\|f_{\varepsilon}\|_{L^{s}(\Omega)}\leq C_{1}\varepsilon^{\frac{1} {2}},\quad\text{ for some }2<s<n\] \[\left\|\frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}| }\right\|_{L^{t}(\Omega)}\leq C_{2},\quad\text{ for some }t>\frac{n-2}{s-2}s>\max\{s,n-2\};\] 2. \[\left\|\frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}| }\right\|_{W^{1,p}(\Omega)}\leq C,\quad\text{ for some }p>\frac{n+1}{2},\quad\text{(c.f. \@@cite[cite]{[\@@bibref{}{T}{}{}]})};\] 3. \[\|f_{\varepsilon}\|_{L^{2}(\Omega)}\leq C_{1}\varepsilon^{\frac{1 }{2}},\quad\text{ if the ambient dimension }n+1=2,\quad\text{(c.f. \@@cite[cite]{[\@@bibref{}{R}{}{}]})}\] \[\left\|\frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}| }\right\|_{L^{\infty}(\Omega)}\leq C_{2},\quad\text{ if the ambient dimension }n+1\geq 3;\] _then after passing to a subsequence as \(\varepsilon\to 0\), the associated varifolds \(V_{\varepsilon}\) converge to an integral \(n\)-rectifiable varifold with generalized mean curvature in \(L^{q_{0}}\) for some \(q_{0}>n\)._ Proof.: 1. To see the first condition implies the conditions in Theorem 1.1, we choose \(q_{0}=\frac{t(s-2)}{s}+2\) (\(q_{0}>n\) is satisfied due to the choice of \(t\) and \(s\) above). Then we have \[\int_{\Omega}\left|\frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}| }\right|^{q_{0}}\varepsilon|\nabla u_{\varepsilon}|^{2}dx=\int_{\Omega}\left| \frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}|}\right|^{q_{0}-2} \frac{|f_{\varepsilon}|^{2}}{\varepsilon}dx\] \[\leq C_{1}^{2}C_{2}^{q_{0}-2}\leq\Lambda_{0}\] where we used Holder's inequality in the second line with exponent \(\frac{s}{s-2}\). 2. In the paper [17], assuming condition (2) above, the authors proved the same integer rectifiability and \(L^{q_{0}}\) mean curvature bound for the limit varifold. We show this conditions implies the integral bounds in the hypothesis of Theorem 1.1 for some \(q_{0}>n\). To see this, we compute \[\nabla\left(\phi^{\frac{np}{n+1-p}}\right)=\frac{np}{n+1-p}\phi^{\frac{(n+1)( p-1)}{n+1-p}}\nabla\phi.\] and applying [29, 5.12.4](c.f.[17, Theorem 3.7]) and [17, Theorem 3.8], and Holder's inequality, with \(\varphi=\phi^{\frac{np}{n+1-p}}\) and \(d\mu=\varepsilon|\nabla u_{\varepsilon}|^{2}d\mathcal{L}^{n+1}\). \[\left|\int_{\mathbb{R}^{n}}\varphi d\mu\right|\leq c(n)K(\mu)\int_{\mathbb{R}^ {n}}|\nabla\varphi|d\mathcal{L}^{n}\quad\forall\varphi\in C_{c}^{1}\left( \mathbb{R}^{n+1}\right)\] which implies \[\left|\int_{\mathbb{R}^{n+1}}|\phi|^{\frac{np}{n+1-p}}\varepsilon| \nabla u_{\varepsilon}|^{2}d\mathcal{L}^{n+1}\right| \leq\left|\int_{\mathbb{R}^{n+1}}\varphi d\mu\right|\] \[\leq C(n)K(\mu)\left|\int_{\mathbb{R}^{n+1}}\frac{np}{n+1-p}| \nabla\phi||\phi|^{\frac{(n+1)(p-1)}{n+1-p}}d\mathcal{L}^{n+1}\right|\] \[\leq C(n,p)K(\mu)\left|\int_{\mathbb{R}^{n+1}}|\nabla\phi||\phi| ^{\frac{(n+1)(p-1)}{n+1-p}}d\mathcal{L}^{n+1}\right|\] \[\leq C(n,p)\left(\int_{\mathbb{R}^{n+1}}|\nabla\phi|^{p}\right)^ {1/p}\left(\int_{\mathbb{R}^{n+1}}|\phi|^{\frac{p(n+1)}{n+1-p}}\right)^{\frac{ p-1}{p}}\] \[=C(n,p)\|\nabla\phi\|_{L^{p}(\mathbb{R}^{n+1})}\|\phi\|_{L^{\frac{ p(n+1)(n+1)}{n+1-p}}}^{\frac{p(n+1)(n+1)}{n+1-p}}.\] where \(C(n,p)\to\infty\) as \(p\to n+1\). 
We apply the above inequality with \(\phi=\psi\frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}|}\) and \(d\mu=\varepsilon|\nabla u_{\varepsilon}|^{2}\) together the Sobolev inequality to get for \(\psi\in C_{0}^{1}(\Omega)\) \[\int_{\Omega}\left|\psi\frac{f_{\varepsilon}}{\varepsilon| \nabla u_{\varepsilon}|}\right|^{\frac{np}{n+1-p}}\varepsilon|\nabla u_{ \varepsilon}|^{2}d\mathcal{L}^{n+1} \leq C\left\|\nabla\left(\psi\frac{f_{\varepsilon}}{\varepsilon| \nabla u_{\varepsilon}|}\right)\right\|_{L^{p}(\Omega)}\left\|\psi\frac{f_{ \varepsilon}}{\varepsilon|\nabla u_{\varepsilon}|}\right\|_{L^{\frac{p(n+1)}{n+ 1-p}}}^{\frac{p(n+1)}{n+1-p}}\] \[\leq C\left\|\nabla\left(\psi\frac{f_{\varepsilon}}{\varepsilon| \nabla u_{\varepsilon}|}\right)\right\|_{L^{p}(\Omega)}\left\|\nabla\left(\psi \frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}|}\right)\right\|_{L ^{p}(\Omega)}^{\frac{(p-1)(n+1)}{n+1-p}}\] \[\leq C_{\psi}\left\|\frac{f_{\varepsilon}}{\varepsilon|\nabla u_{\varepsilon}|} \right\|_{W^{1,p}(\Omega}\] where we have \(q_{0}=\frac{pn}{n+1-p}>n\) since \(p>\frac{n+1}{2}\). 3. If \(n+1=2\) then this is proven in [10]. For \(n+1\geq 3\) it can be directly verified that the condition (3) implies the the conditions in Theorem 1.1. Here we give an overview of our proof. In Section 2, we gather together some standard notation on varifolds and the first variation. In section 3, we prove the main estimates required for the proof of the integrality and rectifiability. Specifically we will need a monotonicity formula. For the Allen-Cahn equation and Allen-Cahn flow, a strict monotonicity formula can be proven due to Modica's estimate showing the discrepancy is negative. This estimate is not true without a homogeneous left hand side to equation (1.1). Instead we will use the integral bound (1.6) to derive a decay bound for \(L^{1}\) norm of the discrepancy which we eventually show vanishes in the limit \(\varepsilon\to 0\). This estimate constitutes one of the main advances of this paper. In section 4 we show the limiting varifold we obtain as \(\varepsilon\to 0\) is a rectifiable set and in section 5 we show the limiting varifold is in addition integral. **Acknowledgements.** The research was funded by the EPSRC grant EP/S012907/1. ## 2. Preliminaries and notations Throughout the paper, we will denote a constant by \(C\) if it only depend on the constants \(n,E_{0},c_{0},\Lambda_{0}\) in the conditions of Theorem 1.1. it may be enlarged in some steps of the argument, but we will not relabel it if there is no confusion in the context. We associate to each solution of (1.1) a varifold in the following way : let \(G(n+1,n)\) denote the Grassmannian (the space of unoriented \(n\)-dimensional subspaces in \(\mathbb{R}^{n+1}\)). We regard \(S\in G(n+1,n)\) as the \((n+1)\times(n+1)\) matrix representing orthogonal projection of \(\mathbb{R}^{n+1}\) onto \(S\), that is \[S^{2}=S,\quad S^{T}S=I\] and write \(S_{1}\cdot S_{2}=\operatorname{tr}(S_{1}^{T}\cdot S_{2})\). We say \(V\) is an \(n\)-varifold in \(\Omega\subset\mathbb{R}^{n+1}\) if \(V\) is a Radon measure on \(G_{n}(\Omega)=\Omega\times G(n+1,n)\). Varifold convergence means convergence of Radon measures or weak-\(*\) convergence. 
We let \(V\in\mathbb{V}_{n}(\Omega)\) and let \(\|V\|\) denote the weight measure of \(V\) and we define the first variation of \(V\) by \[\delta V(\eta)\equiv\int_{G_{n}(\Omega)}\nabla\eta(x)\cdot SdV(x)\quad\forall \eta\in C_{c}^{1}(\Omega;\mathbb{R}^{n+1}).\] We let \(\|\delta V\|\) be the total variation of \(\delta V\). If \(\|\delta V\|\) is absolutely continuous with respect to \(\|\delta V\|\) then the Radon-Nikodym derivative \(\frac{\delta V}{\|V\|}\) exists as vector valued measure. We denote by \(H_{V}=-\frac{\delta V}{\|V\|}\), the generalised mean curvature. Let \(u\) be a function, we define the associated energy measure as a Radon measure given by \[d\mu\equiv\left(\frac{\varepsilon|\nabla u|^{2}}{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}\right)d\mathcal{L}^{n+1}\] where \(\mathcal{L}^{n+1}\) is the \((n+1)\) dimensional Lebesgue measure. We also denote the the energy of the \(1\) dimensional solution by \[\sigma=\int_{-1}^{1}\sqrt{2W(s)}ds\] There is an associated varifold \(V\in\mathbb{V}_{n}(\Omega)\) to the functions \(u\) given by \[V(\phi) =\int_{\{|\nabla u|\neq 0\}}\phi\left(x,\left(\frac{\nabla u(x)}{| \nabla u(x)|}\right)^{\perp}\right)d\mu(x)\] \[=\int_{\{|\nabla u|\neq 0\}}\phi\left(x,I-\frac{\nabla u(x)}{| \nabla u(x)|}\otimes\frac{\nabla u(x)}{|\nabla u(x)|}\right)d\mu(x),\quad \phi\in C_{c}(G_{n}(\Omega)).\] where \(I\) is the \((n+1)\times(n+1)\) identity matrix and \[I-\frac{\nabla u(x)}{|\nabla u(x)|}\otimes\frac{\nabla u(x)}{|\nabla u(x)|}\] is orthogonal projection onto the space orthogonal to \(\nabla u(x)\), that is \(\{a\in\mathbb{R}^{n+1}\ |\ \langle a,\nabla u(x)\rangle=0\}.\) By definition \(\|V\|=\mu_{\llcorner[\nabla u|\neq 0]}\) and the first variation may be computed as \[\delta V(\eta)=\int_{\{|\nabla u|\neq 0\}}\nabla\eta\cdot\left(I-\frac{ \nabla u(x)}{|\nabla u(x)|}\otimes\frac{\nabla u(x)}{|\nabla u(x)|}\right)d \mu(x),\quad\forall\eta\in C_{c}^{1}(\Omega;\mathbb{R}^{n+1}). \tag{2.1}\] ## 3. Discrepancy bounds and monotonicity formula In this section, we deduce integral bounds on the discrepancy. There is an almost monotonicity formula for the Allen-Cahn energy functional, we will give estimates of the terms in the almost monotonicity formulas under the assumptions in Theorem 1.1 and obtain a monotonicity formula for the \(n\)-dimensional volume ratio. It will be used in the next section to deduce rectifiability and integrality of the limit varifold as \(\varepsilon\to 0\). Conditions (1)-(3) in Theorem 1.1 are assumed to hold throughout this section. The \(n\)-dimensional volume ratio of the energy measure satisfies the following almost monotonicity formula. **Proposition 3.1** (Almost Monotonicity Formula).: _If \(u_{\varepsilon}\) satisfies (1.1) in \(B_{1}\subset\mathbb{R}^{n+1}\), then for \(r<1\), we have_ \[\frac{d}{dr}\left(\frac{\mu_{\varepsilon}(B_{r})}{r^{n}}\right)=-\frac{1}{r^ {n+1}}\xi(B_{r})+\frac{\varepsilon}{r^{n+2}}\int_{\partial B_{r}}\langle x, \nabla u_{\varepsilon}\rangle^{2}-\frac{1}{r^{n+1}}\int_{B_{r}}\langle x, \nabla u_{\varepsilon}\rangle f_{\varepsilon}. 
\tag{3.1}\] _Here \(\mu_{\varepsilon}(B_{r})=\int_{B_{r}}d\mu_{\varepsilon}=\int_{B_{r}}\left( \frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}\right)\) is the total energy and \(\xi(B_{r})=\int_{B_{r}}\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{ 2}-\frac{W(u_{\varepsilon})}{\varepsilon}\right)\) is the discrepancy measure (difference between the Dirichlet and potential energy) in \(B_{r}\)._ Proof.: Multiplying equation (1.1) by \(\langle x,\nabla u_{\varepsilon}\rangle\) and integrating by parts on \(B_{r}\), we get \[\int_{B_{r}}\langle x,\nabla u_{\varepsilon}\rangle f_{\varepsilon}\] \[=\int_{B_{r}}\varepsilon\Delta u_{\varepsilon}\langle x,\nabla u_{ \varepsilon}\rangle-\int_{B_{r}}\left\langle\frac{\nabla(W(u_{\varepsilon}))}{ \varepsilon},x\right\rangle\] \[=\int_{\partial B_{r}}\left(\varepsilon r\left|\frac{\partial u _{\varepsilon}}{\partial\nu}\right|^{2}-r\frac{W(u_{\varepsilon})}{\varepsilon }\right)-\int_{B_{r}}\left(\varepsilon\delta_{ij}u_{x_{i}}u_{x_{j}}+ \varepsilon\nabla^{2}u(\nabla u_{\varepsilon},x)-\frac{(n+1)W(u_{\varepsilon })}{\varepsilon}\right)\] \[=\int_{\partial B_{r}}\left(\varepsilon r\left|\frac{\partial u _{\varepsilon}}{\partial\nu}\right|^{2}-r\frac{W(u_{\varepsilon})}{\varepsilon }\right)-\int_{B_{r}}\left(\varepsilon|\nabla u_{\varepsilon}|^{2}+ \varepsilon\left\langle\nabla\frac{|\nabla u_{\varepsilon}|^{2}}{2},x\right \rangle-\frac{(n+1)W(u_{\varepsilon})}{\varepsilon}\right)\] \[=r\int_{\partial B_{r}}\left(\varepsilon\left|\frac{\partial u_{ \varepsilon}}{\partial\nu}\right|^{2}-\frac{W(u_{\varepsilon})}{\varepsilon }-\varepsilon\frac{|\nabla u_{\varepsilon}|^{2}}{2}\right)+\int_{B_{r}}\left( \varepsilon\frac{(n-1)|\nabla u_{\varepsilon}|^{2}}{2}+\frac{(n+1)W(u_{ \varepsilon})}{\varepsilon}\right)\] \[=n\int_{B_{r}}\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2} }{2}+\frac{W(u_{\varepsilon})}{\varepsilon}\right)-r\int_{\partial B_{r}} \left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon })}{\varepsilon}\right)+\frac{\varepsilon}{r}\int_{\partial B_{r}}\langle x, \nabla u_{\varepsilon}\rangle^{2}-\xi(B_{r}).\] The conclusion then follows by dividing both sides by \(r^{n+1}\) and noticing \[\frac{d}{dr}\left(\frac{\mu(B_{r})}{r^{n}}\right)=-\frac{n}{r^{n+1}}\int_{B_{ r}}\left(\frac{\varepsilon|\nabla u|^{2}}{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}\right)+\frac{1}{r^{n}}\int_{\partial B_{r}}\left(\frac{ \varepsilon|\nabla u|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon}\right).\] Integrating the almost monotonicity formula (3.1) from \(\varepsilon\) to \(r_{0}\) for \(0<\varepsilon<r_{0}<1\), we have \[\frac{\mu_{\varepsilon}(B_{r_{0}})}{r_{0}^{n}}-\frac{\mu_{ \varepsilon}(B_{\varepsilon})}{\varepsilon^{n}}\] \[=\int_{\varepsilon}^{r_{0}}\left(-\frac{1}{r^{n+1}}\xi(B_{r})+ \frac{\varepsilon}{r^{n+2}}\int_{\partial B_{r}}\langle x,\nabla u_{ \varepsilon}\rangle^{2}-\frac{1}{r^{n+1}}\int_{B_{r}}\langle x,\nabla u_{ \varepsilon}\rangle f_{\varepsilon}\right)dr\] \[\geq-r_{0}\sup_{B_{r_{0}}}\omega_{n+1}\left(\frac{\varepsilon| \nabla u_{\varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon})}{\varepsilon} \right)_{+}+\int_{B_{r_{0}}\setminus B_{\varepsilon}}\frac{\varepsilon\langle x,\nabla u_{\varepsilon}\rangle^{2}}{|x|^{n+2}}-\int_{\varepsilon}^{r_{0}} \frac{1}{r^{n+1}}\int_{B_{r}}\langle x,\nabla u_{\varepsilon}\rangle f_{ \varepsilon}dr, \tag{3.2}\] where \(\omega_{n+1}\) denotes the volume of unit ball in \(\mathbb{R}^{n+1}\). 
We need to estimate the first and third term on the right hand side to obtain a monotonicity formula. In order to estimate the third term, we derive an a priori gradient bound for \(u\). Condition (3) of Theorem 1.1 states a combined integrability for the inhomogeneity \(f_{\varepsilon}\) and \(|\nabla u|\). The following theorem allows us to obtain separate integrability and regularity for each quantity. **Theorem 3.2**.: _There exists \(C,\varepsilon_{0}>0\) depending on \(E_{0},c_{0},\Lambda_{0}\) as defined in Theorem 1.1 such that if \(u\) satisfies (1.1) in \(B_{1}\subset\mathbb{R}^{n+1}\) with \(\varepsilon<\varepsilon_{0}\) and if \(q_{0}>n+1\), then_ \[\sup_{B_{1-\varepsilon}}\varepsilon|\nabla u_{\varepsilon}|\leq C, \tag{3.3}\] _and_ \[\varepsilon^{2-\frac{n+1}{q_{0}}}\|u_{\varepsilon}\|_{C^{1,1-\frac{n+1}{q_{0} }}(B_{1-\varepsilon})}\leq C. \tag{3.4}\] _If \(n<q_{0}\leq n+1\), then_ \[\varepsilon^{\frac{1}{2}}\|u_{\varepsilon}\|_{C^{0,\frac{1}{2}}(B_{1- \varepsilon})}\leq C. \tag{3.5}\] _Furthermore, there exists a \(\delta_{0}>0\) so that \(f\) has the following improved integrability_ \[\|f_{\varepsilon}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1-\varepsilon}(x_{0}))} \leq C\varepsilon^{-\frac{n}{q_{0}}}. \tag{3.6}\] Proof.: We first consider the case \(q_{0}>n+1\): Define the rescaled solution \(\tilde{u}(x):=u(\varepsilon x)\) and \(\tilde{f}(x)=\varepsilon f_{\varepsilon}(\varepsilon x)\) which satisfies the equation \[\Delta\tilde{u}-W^{\prime}(\tilde{u})=\tilde{f},\quad\text{ in }B_{\frac{1}{ \varepsilon}}\subset\mathbb{R}^{n+1}. \tag{3.7}\] By condition (3) in Theorem 1.1, we have by rescaling \[\int_{B_{\frac{1}{\varepsilon}}}\tilde{f}^{q_{0}}\varepsilon^{n-q_{0}}|\nabla \tilde{u}|^{2-q_{0}}=\int_{B_{\frac{1}{\varepsilon}}}\varepsilon^{-2q_{0}} \tilde{f}^{q_{0}}\varepsilon|\nabla\tilde{u}|^{2-q_{0}}\varepsilon^{q_{0}-2} \varepsilon^{n+1}=\int_{B_{1}}\varepsilon^{-q_{0}}f^{q_{0}}\varepsilon|\nabla u |^{2-q_{0}}\leq\Lambda_{0}. \tag{3.8}\] **Claim.** For any \(\bar{B}_{1}(x_{0})\subset B_{\frac{1}{\varepsilon}-1}\), we have \[\|\nabla\tilde{u}\|_{L^{2}(B_{1}(x_{0}))}\leq C(c_{0},\Lambda_{0},q_{0},n).\] Proof of Claim.: By the hypothesis \(\bar{B}_{1}(x_{0})\subset B_{\frac{1}{\varepsilon}-1}\) we have \(B_{2}(x_{0})\subset B_{\frac{1}{\varepsilon}}\). We choose a smooth cutoff function \(\phi\in C_{c}^{\infty}\left(B_{2}(x_{0})\right),[0,1])\) with \(\phi\equiv 1\) in \(B_{1}(x_{0})\) and \(|\nabla\phi|\leq 4\). By integration by parts and Young's inequality, we obtain \[\int_{B_{2}(x_{0})}|\nabla\tilde{u}|^{2}\phi^{2} \leq\int_{B_{2}(x_{0})}2c_{0}|\nabla\tilde{u}||\phi||\nabla\phi|+ \int_{B_{2}(x_{0})}c_{0}\phi^{2}|\Delta\tilde{u}|\] \[\leq\int_{B_{2}(x_{0})}2c_{0}|\nabla\tilde{u}||\phi||\nabla\phi|+ \int_{B_{2}(x_{0})}c_{0}\phi^{2}|W^{\prime}(\tilde{u})|+\int_{B_{2}(x_{0})}c_ {0}\phi^{2}|\tilde{f}|\] \[\leq\frac{1}{2}\int_{B_{2}(x_{0})}|\nabla\tilde{u}|^{2}\phi^{2}+ \int_{B_{2}(x_{0})}2c_{0}^{2}|\nabla\phi|^{2}+\int_{B_{2}(x_{0})}c_{0}\phi^{2 }C_{c_{0}}+\int_{B_{2}(x_{0})}c_{0}\phi^{2}|\tilde{f}|. 
\tag{3.9}\] We write \(c_{0}\phi^{2}|\tilde{f}|=c_{0}|\tilde{f}|\varepsilon^{\frac{n}{q_{0}}-1}| \nabla\tilde{u}|^{\frac{2}{q_{0}}-1}\times\phi^{2}\varepsilon^{1-\frac{n}{q_{ 0}}}|\nabla\tilde{u}|^{1-\frac{2}{q_{0}}}\) and use Young's inequality with exponent \(q_{0}\) to get \[\int_{B_{2}(x_{0})}c_{0}\phi^{2}|\tilde{f}|\leq\frac{1}{\delta q_{0}}\int_{B_{ 2}(x_{0})}\Big{|}c_{0}|\tilde{f}|\varepsilon^{\frac{n}{q_{0}}-1}|\nabla\tilde{u }|^{\frac{2}{q_{0}}-1}\Big{|}^{q_{0}}+\frac{\delta(q_{0}-1)}{q_{0}}\int_{B_{2} (x_{0})}\Big{|}\phi^{2}\varepsilon^{1-\frac{n}{q_{0}}}|\nabla\tilde{u}|^{1- \frac{2}{q_{0}}}\Big{|}^{\frac{q_{0}}{q_{0}-1}}\] \[\leq\frac{c_{0}^{q_{0}}}{\delta q_{0}}\Lambda_{0}+\frac{\delta(q_{0} -1)}{q_{0}}\int_{B_{2}(x_{0})}\phi^{\frac{2q_{0}}{q_{0}-1}}\big{|}\nabla\tilde{u} \big{|}^{\frac{90-2}{q_{0}-1}}\] \[\leq\frac{c_{0}^{q_{0}}}{\delta q_{0}}\Lambda_{0}+\frac{C_{n} \delta(q_{0}-1)}{q_{0}}\left(\int_{B_{2}(x_{0})}\phi^{\frac{4q_{0}}{q_{0}-2}} \big{|}\nabla\tilde{u}\big{|}^{2}\right)^{\frac{q_{0}-2}{2(q_{0}-1)}}\] \[\leq\frac{4C_{n}(q_{0}-1)c_{0}^{n}}{q_{0}^{2}}\Lambda_{0}+\frac{1 }{4}\max\left\{\left[\int_{B_{2}(x_{0})}\phi^{2}|\nabla\tilde{u}|^{2}\right]^ {\frac{4q_{0}}{q_{0}-2}},1\right\}.\] Here we used (3.8) to bound \(\int_{B_{2}(x_{0})}\left|c_{0}\tilde{f}\varepsilon^{\frac{n}{q_{0}}-1}|\nabla \tilde{u}|^{\frac{2}{q_{0}}-1}\right|^{q_{0}}\) and the fact that \(\varepsilon^{1-\frac{n}{q_{0}}}<1\) in the second inequality, Holder's inequality with exponent \(\frac{2(q_{0}-1)}{q_{0}-2}\) in the third inequality. And in the fourth inequality we used \(\phi^{\frac{4q_{0}}{q_{0}-2}}\leq\phi^{2}\), and chose \(\delta\) to be \(\frac{q_{0}}{4C_{n}(q_{0}-1)}\). We insert the above inequality into (3.9) and get \[\int_{B_{2}(x_{0})}|\nabla\tilde{u}|^{2}\phi^{2} \leq\frac{1}{2}\int_{B_{2}(x_{0})}|\nabla\tilde{u}|^{2}\phi^{2}+ \int_{B_{2}(x_{0})}2c_{0}^{2}|\nabla\phi|^{2}+\int_{B_{2}(x_{0})}c_{0}\phi^{2} C_{c_{0}}\] \[+\frac{4C_{n}(q_{0}-1)c_{0}^{n}}{q_{0}^{2}}\Lambda_{0}+\frac{1}{4 }\max\left\{\int_{B_{2}(x_{0})}\phi^{2}|\nabla\tilde{u}|^{2},1\right\}\] We assume \(\int_{B_{2}(x_{0})}\phi^{2}|\nabla\tilde{u}|^{2}\geq 1\), otherwise the desired bound holds trivially. Then by moving the first term \(\frac{1}{2}\int_{B_{2}(x_{0})}|\nabla\tilde{u}|^{2}\phi^{2}\) and the fifth term \(\int_{B_{2}(x_{0})}\phi^{2}|\nabla\tilde{u}|^{2}\) on the right to the left, we prove the claim. Now suppose \(\|\nabla\tilde{u}\|_{L^{p_{0}}(B_{1}(x_{0}))}\leq C(c_{0},\Lambda_{0},q_{0},n)\) (independent of \(\varepsilon\)) for some \(p_{0}>1\) (\(p_{0}\) can be chosen to be \(2\) by the claim above). 
For any \(B_{2}(x_{0})\in B_{\frac{1}{\varepsilon}}(0)\), we have by Holder's inequality \[\|\tilde{f}\|_{L^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}(B_{1}(x_{0}))} =\left(\int_{B_{1}(x_{0})}|\tilde{f}|^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}\right) ^{\frac{p_{0}+q_{0}-2}{p_{0}q_{0}}}\] \[\leq\left[\left\|\left|\tilde{f}\varepsilon^{\frac{n-q_{0}}{q_{0} }}|\nabla\tilde{u}|^{\frac{2}{q_{0}}-1}\right|^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2 }}\right\|_{L^{\frac{p_{0}+q_{0}-2}{p_{0}}}(B_{1}(x_{0}))}\cdot\left\|\left( \varepsilon^{\frac{q_{0}-n}{q_{0}}}\big{|}\nabla\tilde{u}\big{|}^{1-\frac{2}{q _{0}}}\right)^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}\right\|_{L^{\frac{p_{0}+q_{0} -2}{q_{0}-2}}(B_{1}(x_{0}))}\right]^{\frac{p_{0}+q_{0}-2}{p_{0}q_{0}}}\] \[\leq\left[\Lambda_{0}^{\frac{p_{0}}{p_{0}+q_{0}-2}}\varepsilon^{ \frac{(q_{0}-n)p_{0}}{p_{0}+q_{0}-2}}\cdot\left(\int_{B_{1}(x_{0})}|\nabla \tilde{u}|^{p_{0}}\right)^{\frac{q_{0}-2}{p_{0}+q_{0}-2}}\right]^{\frac{p_{0}+q_ {0}-2}{p_{0}q_{0}}}\] \[=\Lambda_{0}^{\frac{1}{q_{0}}}\cdot\varepsilon^{\frac{q_{0}-n}{q_ {0}}}\cdot\left(\int_{B_{1}(x_{0})}|\nabla\tilde{u}|^{p_{0}}\right)^{\frac{q_{0 }-2}{p_{0}q_{0}}}\] \[\leq C(c_{0},\Lambda_{0},q_{0},n)\varepsilon^{\frac{q_{0}-n}{q_ {0}}}\leq C(c_{0},\Lambda_{0},q_{0},n). \tag{3.10}\] **Remark 3.3**.: Here \(q_{0}>n\) will make the scaling subcritical and ensures a uniform bound of \(\|\tilde{f}\|_{L^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}(B_{1}(x_{0}))}\) independent of \(\varepsilon\). Thus \(\tilde{f}\) is uniformly bounded in \(L^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}(B_{1}(x_{0}))\) independent of \(\varepsilon\). By applying the Sobolev inequality to (3.7), standard Calderon-Zygmund estimates and finally using the \(L^{\infty}\) bound of \(u\) in condition (2) of Theorem 1.1, we have \[\|\nabla\tilde{u}\|_{L^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}-p_{0} \frac{q_{0}}{n+1}}(B_{1}(x_{0}))} \leq\|\tilde{u}\|_{W^{\frac{1}{p_{0}q_{0}}{p_{0}+q_{0}-2}-p_{0} \frac{q_{0}}{n+1}}(B_{1}(x_{0}))}\] \[\leq C\|\tilde{u}\|_{W^{2,\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}(B_{1} (x_{0}))}\] \[\leq C\|\tilde{f}\|_{L^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}(B_{1}( x_{0}))}+C\|W^{\prime}(\tilde{u})\|_{L^{\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}(B_{1} (x_{0}))}\] \[\leq C\Lambda_{0}^{\frac{1}{q_{0}}}\cdot\varepsilon^{\frac{q_{0} -n}{q_{0}}}\cdot\left(\int_{B_{1}(x_{0})}|\nabla\tilde{u}|^{p_{0}}\right)^{ \frac{q_{0}-2}{p_{0}q_{0}}}+C\|W^{\prime}(\tilde{u})\|_{L^{\infty}(B_{1}(x_{0 }))}\] \[\leq C(c_{0},\Lambda_{0},q_{0},n)(\varepsilon^{\frac{q_{0}-n}{q_ {0}}}+1)\leq\tilde{C}(c_{0},\Lambda_{0},q_{0},n). \tag{3.11}\] We remark that \(q_{0}>n\) ensures the coefficient \(\varepsilon^{\frac{q_{0}-n}{q_{0}}}\) stays uniformly bounded as \(\varepsilon\to 0\). In the case \(\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}>n+1\), by Calderon-Zygmund estimates we have \[\|\tilde{u}\|_{W^{2,\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}}(B_{1}(x_{0}))}\leq C(c_{ 0},\Lambda_{0},q_{0},n)(\varepsilon^{\frac{q_{0}-n}{q_{0}}}+1)\leq\tilde{C}(c_ {0},\Lambda_{0},q_{0},n).\] The Sobolev inequality then gives \(\|\nabla\tilde{u}\|_{L^{\infty}}\leq C\). In the case \(\frac{p_{0}q_{0}}{p_{0}+q_{0}-2}\leq n+1\), using \(q_{0}>n+1\), we have \(p_{0}<p_{0}\frac{q_{0}}{n+1}\). Namely \[\frac{q_{0}}{p_{0}+q_{0}-2-p_{0}\frac{q_{0}}{n+1}}p_{0}=\frac{q_{0}}{(p_{0}-p _{0}\frac{q_{0}}{n+1})+(q_{0}-2)}p_{0}\geq\frac{q_{0}}{q_{0}-2}p_{0}. \tag{3.12}\] So we improved \(\nabla\tilde{u}\) from \(L^{p_{0}}\) to \(L^{\frac{q_{0}}{q_{0}-2}p_{0}}\). Define \(p_{i}=\frac{q_{0}}{q_{0}-2}p_{i-1}\). 
Using \(q_{0}>n+1\), we iterate finitely many times until \(p_{i}>\frac{(n+1)(q_{0}-2)}{q_{0}-(n+1)}\), i.e. \(\frac{p_{i}q_{0}}{p_{i}+q_{0}-2}>n+1\). The Sobolev inequality gives \(\nabla\tilde{u}\in L^{\infty}\). So if \(q_{0}>n+1\), we get \(\nabla\tilde{u}\in L^{\infty}\). Rescaling back, we get (3.3). By (3.8) where (\(q_{0}>n+1\geq 2\)) and \(\nabla\tilde{u}\in L^{\infty}\), we have \(\tilde{f}\in L^{q_{0}}\). Standard Calderon-Zygmund estimates give \[\|\nabla\tilde{u}\|_{C^{0,1-\frac{n+1}{q_{0}}}(B_{1}(x_{0}))}\leq\|\tilde{u}\|_ {W^{2,q_{0}}(B_{1}(x_{0}))}\leq\|\tilde{f}\|_{L^{q_{0}}(B_{1}(x_{0}))}+\|W^{ \prime}(\tilde{u})\|_{L^{q_{0}}(B_{1}(x_{0}))}<\infty,\] which gives (3.4). Consider now the case \(n<q_{0}\leq n+1\). For any \[p_{i}\leq\frac{2(n+1)}{n+1-q_{0}}-\delta, \tag{3.13}\] we have \[p_{i}+q_{0}-2-p_{i}\frac{q_{0}}{n+1}= p_{i}\frac{n+1-q_{0}}{n+1}+q_{0}-2\] \[\leq \left(\frac{2(n+1)}{n+1-q_{0}}-\delta\right)\frac{n+1-q_{0}}{n+1} +q_{0}-2\] \[\leq q_{0}-\frac{n+1-q_{0}}{n+1}\delta.\] And thus \[\frac{q_{0}}{p_{i}+q_{0}-2-p_{i}\frac{q_{0}}{n+1}}p_{i}\geq\frac{q_{0}}{q_{0}- \frac{n+1-q_{0}}{n+1}\delta}p_{i}\geq p_{i}. \tag{3.14}\] So (3.11) increases the integrability of \(\nabla\tilde{u}\) from \(L^{p_{i}}\) to \(L^{\frac{q_{0}}{q_{0}-\frac{n+1-q_{0}}{n+1}\delta}p_{i}}\). And we can iterate until (3.13) fails, namely \[\|\nabla\tilde{u}\|_{L^{\frac{2(n+1)}{n+1-q_{0}}-\delta}(B_{1}(x_{0}))}\leq C (c_{0},\Lambda_{0},q_{0},n)\varepsilon^{\frac{q_{0}-n}{q_{0}}}\leq C(c_{0}, \Lambda_{0},q_{0},n), \tag{3.15}\] for any \(x_{0}\in B_{\frac{1}{\varepsilon}-2}\)(so that the condition in the claim above is satisfied). By Sobolev inequalities, we then have for any \(x_{0}\in B_{\frac{1}{\varepsilon}-2}\) \[\|\tilde{u}\|_{C^{0,\frac{1}{2}}(B_{1}(x_{0}))} \leq C\|\tilde{u}\|_{W^{1,2(n+1)}(B_{1}(x_{0}))}\] \[\leq C\|\tilde{u}\|_{W^{1,\frac{2(n+1)}{n+1-q_{0}}-\delta}(B_{1}( x_{0}))}\] \[\leq C(c_{0},\Lambda,0,q_{0},n)\varepsilon^{\frac{q_{0}-n}{q_{0}} }\leq C(c_{0},\Lambda,0,q_{0},n).\] Rescaling back gives \[\varepsilon^{\frac{1}{2}}\|u\|_{C^{0,\frac{1}{2}}(B_{1-\varepsilon})}\leq\| \tilde{u}\|_{C^{0,\frac{1}{2}}(B_{\frac{1}{\varepsilon}-1})}\leq C(c_{0}, \Lambda_{0},q_{0},n)\varepsilon^{\frac{q_{0}-n}{q_{0}}}\leq C(c_{0},\Lambda_ {0},q_{0},n),\] which is (3.5). By (3.10) we improve the integrability of \(\tilde{f}\) in (3.10) up to \[\|\tilde{f}\|_{L^{\frac{p_{i}q_{0}}{p_{i}+q_{0}-2}}(B_{1}(x_{0}))}\leq C \varepsilon^{\frac{q_{0}-n}{q_{0}}},\] for \(p_{i}\leq\frac{2(n+1)}{n+1-q_{0}}-\delta\). So if \(q_{0}\in(n,n+1]\), by choosing \(p_{i}=2(n+1)\), we have \[\frac{p_{i}q_{0}}{p_{i}+q_{0}-2}=\frac{p_{i}}{\frac{p_{i}-2}{q_{0}}+1}>\frac{p _{i}}{\frac{p_{i}-2}{n}+1}=\frac{2(n+1)}{\frac{2(n+1)-2}{n}+1}=\frac{2(n+1)}{3}, \tag{3.16}\] rearranging gives \(\frac{p_{i}q_{0}}{p_{i}+q_{0}-2}>\frac{2(n+1)}{3}\geq\frac{n+1}{2}+\delta_{0}\) for some \(\delta_{0}>0\). On the other hand, if \(q_{0}>n+1\), using (3.8) and the uniform gradient bound of \(u\) in Theorem 3.2, we have \(\|\tilde{f}\|_{L^{q_{0}}(B_{1}(x_{0}))}\leq C\varepsilon^{\frac{90-n}{q_{0}}}\), where \(q_{0}>n+1>\frac{n+1}{2}+\delta_{0}\). Combining both cases, for any \(q_{0}>n\) \[\|\tilde{f}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1}(x_{0}))}\leq C\varepsilon^{ \frac{q_{0}-n}{q_{0}}}. 
\tag{3.17}\] and \[\|\tilde{f}\|_{L^{\frac{n+1}{2}+\delta_{0}(B_{\frac{1}{2}-1}(x_{0}))}}\leq C \varepsilon^{\frac{q_{0}-n}{q_{0}}}\varepsilon^{-n-1}, \tag{3.18}\] Rescaling back gives the bound on \(f\), \[\|f\|_{L^{\frac{n+1}{2}+\delta_{0}(B_{1-\varepsilon}(x_{0}))}}\leq C\varepsilon^ {-\frac{n}{q_{0}}}.\] Since in the case \(q_{0}\in(n,n+1]\), we lack gradient bounds of \(u\) as in the case \(q_{0}>n+1\). In order to get better estimates of the discrepancy terms in the almost monotonicity formula, we use some ideas from [10]. We will apply the following Lemma to (3.7) for \(\varepsilon\) sufficiently small such that \(C\varepsilon^{\frac{q_{0}-n}{q_{0}}}\leq\omega\). **Lemma 3.4** (cf [10, Lemma 3.2]).: _Let \(n+1\geq 3,0<\delta\leq\delta_{1}\) and \(R(\delta)=\frac{1}{\delta^{p_{1}}},\omega(\delta)=\delta^{p_{2}}\), where \(p_{1}=5,p_{2}=35\). If \(\tilde{u}\in C^{2}(B_{R}),\tilde{f}\in C^{0}(B_{R}),B_{R}=B_{R}(0)\subset \mathbb{R}^{n+1}\) where_ \[-\Delta\tilde{u}+W^{\prime}(\tilde{u}) =\tilde{f} \text{in }B_{R},\] \[|\tilde{u}|\leq c_{0} \text{in }B_{R},\] \[\|\tilde{f}\|_{L^{\frac{n+1}{2}+\delta_{0}(B_{R})}} \leq\omega,\] \(c_{0}\) _is as assumed in condition (2) of Theorem 1.1 and \(\delta_{0}\) is as in Theorem 3.2. Then_ \[\int_{B_{1}}\left(\frac{|\nabla\tilde{u}|^{2}}{2}-W(\tilde{u})\right)_{+}\leq C\delta. \tag{3.19}\] _And for \(\tau=\delta^{p_{3}}\), where \(p_{3}=\frac{2\delta_{0}}{(n+1)^{2}+(n+1)\delta_{0}+6\delta_{0}}\), we get_ \[\int_{B_{\frac{1}{2}}}\left(\frac{|\nabla\tilde{u}|^{2}}{2}-W(\tilde{u})\right) _{+}\leq c\tau\int_{B_{\frac{1}{2}}}\left(\frac{|\nabla\tilde{u}|^{2}}{2}+W( \tilde{u})\right)+\int_{B_{\frac{1}{2}}\cap\{|\tilde{u}|\geq 1-\tau\}}\frac{| \nabla\tilde{u}|^{2}}{2}. \tag{3.20}\] Proof.: Let us consider the auxiliary function \(\psi\) which solves the Dirichlet problem \[\Delta\psi =-\tilde{f}, \text{in }B_{R}\] \[\psi =0, \text{on }\partial B_{R}. \tag{3.21}\] The auxiliary function allows us to control the inhomogeneous part of the equation. **Claim**.: The function \(\psi\) defined in (3.21) satisfies the bounds \[\|\psi\|_{L^{\infty}(B_{R})}\leq C\delta^{25+5\frac{n+1}{2}+\delta_{0}}\ll 1, \tag{3.22}\] \[\|\nabla\psi\|_{L^{\frac{(n+1)(n+1+2\delta_{0})}{n+1-2\delta_{0}}}(B_{R})}\leq C \omega=C\delta^{35}. \tag{3.23}\] Proof.: Rescaling by \(\frac{1}{R}\), we have \[\Delta\psi_{R} =\tilde{f}_{R}, \text{in }B_{1}\] \[\psi_{R} =0, \text{on }\partial B_{1}, \tag{3.24}\] where \(\psi_{R}(x)=\psi(Rx),\tilde{f}_{R}(x)=R^{2}\tilde{f}(Rx)\). Standard Calderon-Zygmund estimates give \[\|\psi_{R}\|_{W^{2,\frac{n+1}{2}+\delta_{0}}(B_{1})}\leq\|\tilde{f}_{R}\|_{L^{ \frac{n+1}{2}+\delta_{0}}(B_{1})}=R^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\| \tilde{f}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{R})}\leq CR^{2-\frac{n+1}{\frac{n +1}{2}+\delta_{0}}}\omega,\] where \(2-\frac{n+1}{2}>0\). 
Rescaling back yields \[\|\psi\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{R})}+R\|\nabla\psi\|_{L^ {\frac{n+1}{2}+\delta_{0}}(B_{R})}+R^{2}\|\nabla^{2}\psi\|_{L^{\frac{n+1}{2}+ \delta_{0}}(B_{R})}\] \[=R^{\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\|\psi_{R}\|_{L^{\frac{n +1}{2}+\delta_{0}}(B_{1})}+R^{\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\|\nabla \psi_{R}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1})}+R^{\frac{n+1}{\frac{n+1}{2}+ \delta_{0}}}\|\nabla^{2}\psi_{R}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1})}\] \[=R^{\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\|\psi_{R}\|_{W^{2,\frac {n+1}{2}+\delta_{0}}(B_{1})}\] \[\leq CR^{\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}R^{2-\frac{n+1}{ \frac{n+1}{2}+\delta_{0}}}\omega\] \[=CR^{2}\omega\] \[=C\delta^{25}.\] _Here we prove (3.22):_ by the Sobolev inequality since \(\delta_{0}>0\implies\frac{n+1}{2}+\delta_{0}>\frac{n+1}{2}\), we have \[\|\psi\|_{L^{\infty}(B_{R})}=\|\psi_{R}\|_{L^{\infty}(B_{1})} \leq C\|\psi_{R}\|_{W^{2,\frac{n+1}{2}+\delta_{0}}(B_{1})}\] \[\leq CR^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\omega\] \[=C\delta^{25+5\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\ll 1,\] due to the choice of \(\omega\), where we used \(\frac{(n+1)(n+1+2\delta_{0})}{n+1-2\delta_{0}}>n+1\). _Here we prove the gradient bound (3.23):_ \[\|\nabla\psi\|_{L^{\frac{(n+1)(n+1+2\delta_{0})}{n+1-2\delta_{0}} }(B_{R})} \leq R^{\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}-1}\|\nabla\psi_{R} \|_{L^{\frac{(n+1)(n+1+2\delta_{0})}{n+1-2\delta_{0}}}(B_{1})}\] \[\leq CR^{\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}-1}\|\psi_{R}\|_ {W^{2,\frac{n+1}{2}+\delta_{0}}(B_{1})}\] \[\leq CR^{\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}-1}R^{2-\frac{n+1 }{\frac{n+1}{2}+\delta_{0}}}\omega\] \[=CR^{0}\omega\] \[=C\omega=C\delta^{35}.\] We define \(\tilde{u}_{0}:=\tilde{u}+\psi\in W^{2,\frac{n+1}{2}+\delta_{0}}(B_{R})\). By (3.21), (3.22), \(\tilde{u}_{0}\) satisfies \[|\tilde{u}_{0}| \leq c_{0}+1,\] \[\Delta\tilde{u}_{0} =W^{\prime}(\tilde{u}). \tag{3.25}\] We compute for any \(\beta>0\), \[\frac{|\nabla\tilde{u}|^{2}}{2}-W(\tilde{u}) =\frac{|\nabla\tilde{u}_{0}-\nabla\psi|^{2}}{2}-W(\tilde{u}_{0}-\psi)\] \[\leq\left(\frac{1}{2}+\beta\right)|\nabla\tilde{u}_{0}|^{2}+\left( \frac{1}{2}+\frac{1}{\beta}\right)|\nabla\tilde{\psi}|^{2}-W(\tilde{u}_{0})+C| \psi|,\] for some \(C>0\). Thus by (3.22) and (3.23), we have \[\int_{B_{1}}\left(\frac{|\nabla\tilde{u}|^{2}}{2}-W(\tilde{u}) \right)_{+}\] \[\leq\int_{B_{1}}\left(\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W( \tilde{u}_{0})\right)_{+}+\int_{B_{1}}\left(\beta|\nabla\tilde{u}_{0}|^{2}+C| \psi|+\left(\frac{1}{2}+\frac{1}{\beta}\right)|\nabla\psi|^{2}\right)\] \[\leq\int_{B_{1}}\left(\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W( \tilde{u}_{0})\right)_{+}+C\left(\beta+R^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{ 0}}}\omega+\left(\frac{1}{2}+\frac{1}{\beta}\right)\omega^{2}\right).\] By choosing \(\beta=\omega\leq\delta^{p_{2}}\) and using our hypothesis on \(\omega:R^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\omega=\delta^{25+\frac{5(n+ 1)}{\frac{n+1}{2}+\delta_{0}}}\). By our choice of \(p_{1}=2,p_{2}=15\), we ensure \[\beta =\delta^{35}\leq C\delta,\] \[R^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\omega =\delta^{25+5\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\leq C\delta,\] \[\left(\frac{1}{2}+\frac{1}{\beta}\right)\omega^{2} =\frac{1}{2}\delta^{70}+\delta^{35}\leq C\delta,\] for \(n\geq 2\). 
Thus \[\int_{B_{1}}\left(\frac{|\nabla\tilde{u}|^{2}}{2}-W(\tilde{u}) \right)_{+}\leq\int_{B_{1}}\left(\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W(\tilde{ u}_{0})\right)_{+}+C\delta.\] To prove (3.19), it suffices to show \[\int_{B_{1}}\left(\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W(\tilde{u}_{0})\right)_ {+}\leq C\delta. \tag{3.26}\] Here we estimate \(\tilde{u}\). Define \(\tilde{u}_{R}(x)=\tilde{u}(Rx)\) then by the Calderon-Zygmund estimates we have \[\|\tilde{u}_{R}\|_{W^{2,\frac{n+1}{2}+\delta_{0}}(B_{\frac{1}{2}})} \leq C\|\Delta\tilde{u}_{R}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1} )}+C\|\tilde{u}_{R}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1})}\] \[\leq C\left(R^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\|\Delta \tilde{u}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{R})}+1\right)\] \[\leq C\left(R^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}\left(\|W^ {\prime}(\tilde{u})\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{R})}+\|\tilde{f}\|_{L^ {\frac{n+1}{2}+\delta_{0}}(B_{R})}\right)+1\right)\] \[\leq C\left(R^{2-\frac{n+1}{\frac{n+1}{2}+\delta_{0}}}(R^{\frac{n +1}{\frac{n+1}{2}+\delta_{0}}}+\omega)+1\right) \tag{3.27}\] \[\leq CR^{2}\omega\] \[=C\delta^{25}\ll 1.\] Since we have \(|\tilde{u}_{0}|\leq c_{0}\), we apply Calderon-Zygmund to (3.25), for any \(B_{1}(x)\subset B_{R}\) and \(1<r<\infty\) and we get \[\|\tilde{u}_{0}\|_{W^{2,r}(B_{\frac{1}{2}}(x))}\leq C_{r}. \tag{3.31}\] Hence by the Morrey embedding \[\|\nabla\tilde{u}_{0}\|_{L^{\infty}(B_{R-1})}\leq C.\] We define a modified discrepancy \[\xi_{G}:=\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W(\tilde{u}_{0})-G(\tilde{u}_{0}) -\varphi, \tag{3.32}\] for some function \(G\in C^{\infty}(\mathbb{R})\) and \(\varphi\in W^{2,2}(B_{R})\) that we choose as in the following claims **Claim.** If we make the following choice of \(G\), \[G_{\delta}(r):=\delta\left(1+\int_{-c_{0}-1}^{r}\exp\left(-\int_{-c_{0}-1}^{t} \frac{|W^{\prime}(s)|+\delta}{2(W(s)+\delta)}ds\right)dt\right) \tag{3.33}\] then we have the properties \[\begin{split}&\delta\leq G_{\delta}(\tilde{u}_{0})\leq C\delta, \\ & 0<G^{\prime}_{\delta}(\tilde{u}_{0})\leq\delta,\\ & 0<-G^{\prime\prime}_{\delta}(\tilde{u}_{0})=G^{\prime}_{\delta} (\tilde{u}_{0})\frac{|W^{\prime}(\tilde{u}_{0})|+\delta}{2(W(\tilde{u}_{0})+ \delta)}\leq C.\end{split} \tag{3.34}\] Furthermore we have \[G^{\prime}_{\delta}W^{\prime}-2G^{\prime\prime}_{\delta}(W+G_{\delta})\geq \delta G^{\prime}_{\delta} \tag{3.35}\] and \[G^{\prime}_{\delta}(\tilde{u}_{0})\geq C\delta^{3}. \tag{3.36}\] Proof of Claim.: The first three equations of (3.34) follow from the direct computations. 
For (3.35), since \(G_{\delta}\geq\delta\), we obtain \[G^{\prime}_{\delta}W^{\prime}-2G^{\prime\prime}_{\delta}(W+G_{ \delta}) =G^{\prime}_{\delta}\left(W^{\prime}+\frac{|W^{\prime}|+\delta}{( W+\delta)}(W+G_{\delta})\right)\] \[\geq G^{\prime}_{\delta}\left(W^{\prime}+\frac{|W^{\prime}|+ \delta}{(W+\delta)}(W+\delta)\right)\] \[=G^{\prime}_{\delta}\left(W^{\prime}+|W^{\prime}|+\delta\right)\] \[\geq\delta G^{\prime}_{\delta}.\] For (3.36), from the definition of \(G_{\delta}\) (3.33) and the bound \(|\tilde{u}_{0}|\leq c_{0}+1\), we compute \[G^{\prime}_{\delta}(\tilde{u}_{0}) \geq\delta\exp\left(-\int_{-c_{0}-1}^{c_{0}+1}\frac{|W^{\prime}(s )|+\delta}{2(W(s)+\delta)}ds\right)\] \[\geq\delta\exp\left(-\int_{-c_{0}-1}^{-1}\left|\frac{d}{ds}\log( W(s)+\delta)\right|ds-\int_{-1}^{0}\left|\frac{d}{ds}\log(W(s)+\delta)\right|ds-(c_{0}+1 )\right)\] \[\geq\delta\exp\left(-\left(\log(W(-c_{0}-1)+\delta)-\log\delta \right)-\left(\log(1+\delta)-\log\delta\right)-(c_{0}+1)\right)\] \[\geq\delta\exp\left(\tilde{C}-\log(\delta^{2})\right)\] \[\geq C\delta^{3},\] where we used \(W\) is an even function, increasing in \([-1,0]\) and decreasing in \([-c_{0}-1,-1]\) **Claim**.: If we choose \(\varphi\) to satisfy the Dirichlet problem \[\begin{split}-\Delta\varphi&=|\langle\nabla\tilde{u}_{ 0},\nabla\tilde{f}_{0}\rangle-(W^{\prime}+G^{\prime}_{\delta})\tilde{f}_{0}|>0 \quad\text{ in }B_{\frac{R}{2}},\\ \varphi&=0\quad\text{ on }\partial B_{\frac{R}{2}} \end{split} \tag{3.37}\] then we have \[\varphi\geq 0\quad\text{ in }B_{\frac{R}{2}} \tag{3.38}\] and \[\|\varphi\|_{W^{1,\infty}(B_{\frac{R}{2}})}\leq CR^{4-\frac{n+1-2\delta_{0}}{ n+1+2\delta_{0}}}\omega=C\delta^{15+5\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}. \tag{3.39}\] Proof.: Since we have \(\varphi\geq 0\) in \(\partial B_{\frac{R}{2}}\) by applying the maximum principle, we have \(\varphi\geq 0\) in \(B_{\frac{R}{2}}\) which gives us (3.38). The estimates (3.31), (3.29) and (3.30) bound the right hand side of (3.37), that is \[\|\Delta\varphi\|_{L^{\frac{(n+1)(n+1+2\delta_{0})}{n+1-2\delta_{0}}}(B_{\frac {R}{2}})}=|\langle\nabla\tilde{u}_{0},\nabla\tilde{f}_{0}\rangle-(W^{\prime}+ G^{\prime}_{\delta})\tilde{f}_{0}|_{L^{\frac{(n+1)(n+1+2\delta_{0})}{n+1-2 \delta_{0}}}(B_{\frac{R}{2}})}\leq CR^{2}\omega=C\delta^{25}.\] Denote by \(\varphi_{R}(x)=\varphi(\frac{Rx}{2})\), then the Calderon-Zygmund estimates give \[\begin{split}\|\varphi\|_{W^{1,\infty}(B_{\frac{R}{2}})}=\| \varphi_{R}\|_{W^{1,\infty}(B_{1})}&\leq C\|\varphi_{R}\|_{W^{2,\frac{(n+1)(n+1+2\delta_{0})}{n+1-2\delta_{0}}}(B_{1})}\\ &\leq C\|\Delta\varphi_{R}\|_{L^{\frac{(n+1)(n+1+2\delta_{0})}{n+ 1-2\delta_{0}}}(B_{1})}\\ &\leq CR^{2-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\|\Delta \varphi\|_{L^{\frac{(n+1)(n+1+2\delta_{0})}{n+1-2\delta_{0}}}(B_{\frac{R}{2}} )}\\ &\leq CR^{4-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega\\ &=C\delta^{15+5\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\end{split}\] and hence we obtain (3.39). We choose \(\varphi\) according to (3.37). Notice if \(\xi_{G}>0\), then we have \(\nabla\tilde{u}_{0}\neq 0\) and \[W(\tilde{u}_{0})\leq\frac{1}{2}|\nabla\tilde{u}_{0}|^{2}. \tag{3.40}\] The case \(\xi_{G}\leq 0\) immediately gives us our desired estimate since we are seeking an upper bound. 
**Claim**.: For the choice of \(G\) as in (3.33) and \(\varphi\) as in (3.37) we have the differential inequality \[\Delta\xi_{G}\geq-C\left(1+\frac{\delta}{|\nabla\tilde{u}_{0}|}\right)(|\nabla \xi_{G}|+R^{4-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega)+C(\delta^{6}+ \delta^{4}) \tag{3.41}\] in \(B_{\frac{R}{2}}\cap\{\xi_{G}>0\}\cap\{\nabla\tilde{u}_{0}\neq 0\}\). Proof.: We compute the Laplacian of the modified discrepancy \[\Delta\xi_{G} =|\nabla^{2}\tilde{u}_{0}|^{2}+\langle\nabla\tilde{u}_{0},\nabla \Delta\tilde{u}_{0}\rangle-\Delta\varphi-(W^{\prime}+G^{\prime})\Delta\tilde{u} _{0}-(W^{\prime\prime}+G^{\prime\prime})|\nabla\tilde{u}_{0}|^{2}\] \[=|\nabla^{2}\tilde{u}_{0}|^{2}+\langle\nabla\tilde{u}_{0},W^{ \prime\prime}\nabla\tilde{u}_{0}-\nabla\tilde{f}_{0}\rangle-\Delta\varphi-(W^ {\prime}+G^{\prime})(W^{\prime}(\tilde{u}_{0})-\tilde{f}_{0})-(W^{\prime\prime }+G^{\prime\prime})|\nabla\tilde{u}_{0}|^{2}\] \[=|\nabla^{2}\tilde{u}_{0}|^{2}-\langle\nabla\tilde{u}_{0},\nabla \tilde{f}_{0}\rangle-\Delta\varphi-(W^{\prime}+G^{\prime})(W^{\prime}(\tilde{ u}_{0})-\tilde{f}_{0})-G^{\prime\prime}|\nabla\tilde{u}_{0}|^{2}. \tag{3.42}\] By differentiating (3.32), we have \[\nabla\xi_{G}=\nabla^{2}\tilde{u}_{0}\nabla\tilde{u}_{0}-(W^{\prime}+G^{ \prime})\nabla\tilde{u}_{0}-\nabla\varphi,\] and thus \[|\nabla^{2}\tilde{u}_{0}|^{2}|\nabla\tilde{u}_{0}|^{2} \geq|\nabla^{2}\tilde{u}_{0}\nabla\tilde{u}_{0}|^{2}\] \[\geq|\nabla\xi_{G}+(W^{\prime}+G^{\prime})\nabla\tilde{u}_{0}+ \nabla\varphi|^{2}\] \[\geq 2(W^{\prime}+G^{\prime})\,\langle\nabla\tilde{u}_{0},\nabla( \xi_{G}+\varphi)\rangle+(W^{\prime}+G^{\prime})^{2}|\nabla\tilde{u}_{0}|^{2}.\] Dividing by \(|\nabla\tilde{u}_{0}|^{2}\), the first term in (3.42), \(|\nabla^{2}\tilde{u}_{0}|^{2}\), is bounded as follows \[|\nabla^{2}\tilde{u}_{0}|^{2}\geq\frac{2(W^{\prime}+G^{\prime})}{|\nabla \tilde{u}_{0}|^{2}}\langle\nabla\tilde{u}_{0},\nabla(\xi_{G}+\varphi)\rangle+( W^{\prime}+G^{\prime})^{2}.\] The last term in (3.42) is \[|\nabla\tilde{u}_{0}|^{2}=2(\xi_{G}+W+G+\varphi).\] Substituting these into (3.42) and rearranging, we have in \(B_{R}\subset\{\nabla\tilde{u}_{0}=0\}\) \[\Delta\xi_{G}-\frac{2(W^{\prime}+G^{\prime})}{|\nabla\tilde{u}_{ 0}|^{2}}\langle\nabla\tilde{u}_{0},\nabla\xi_{G}\rangle+2G^{\prime\prime}\xi _{G}\] \[\geq(W^{\prime}+G^{\prime})^{2}-W^{\prime}(W^{\prime}+G^{\prime} )-2G^{\prime\prime}(W+G)+\frac{2(W^{\prime}+G^{\prime})}{|\nabla\tilde{u}_{0} |^{2}}\langle\nabla\tilde{u}_{0},\nabla\varphi\rangle\] \[-2G^{\prime\prime}\varphi-\Delta\varphi-\langle\nabla\tilde{u}_{ 0},\nabla\tilde{f}_{0}\rangle+(W^{\prime}+G^{\prime})\tilde{f}_{0}\] \[=(G^{\prime})^{2}+(G^{\prime}W^{\prime}-2G^{\prime\prime}(W+G))+ \frac{2(W^{\prime}+G^{\prime})}{|\nabla\tilde{u}_{0}|^{2}}\langle\nabla \tilde{u}_{0},\nabla\varphi\rangle-2G^{\prime\prime}\varphi-\Delta\varphi\] \[-\langle\nabla\tilde{u}_{0},\nabla\tilde{f}_{0}\rangle+(W^{\prime }+G^{\prime})\tilde{f}_{0}.\] We choose \(G\) to be (3.33) which allows us to apply the estimates (3.34) and (3.35) so that \(\xi_{G}\) satisfies \[\Delta\xi_{G} \geq\frac{2(W^{\prime}+G^{\prime})}{|\nabla\tilde{u}_{0}|^{2}} \,\langle\nabla\tilde{u}_{0},(\nabla\xi_{G}+\nabla\varphi)\rangle-2G_{\delta} ^{\prime\prime}\xi_{G}\] \[+(G_{\delta}^{\prime})^{2}+\delta G_{\delta}^{\prime}-2G_{\delta} ^{\prime\prime}\varphi-\Delta\varphi-\langle\nabla\tilde{u}_{0},\nabla\tilde{ f}_{0}\rangle+(W^{\prime}+G_{\delta}^{\prime})\tilde{f}_{0}, \tag{3.43}\] in \(B_{R}\cap\{\nabla\tilde{u}_{0}\neq 0\}\). 
Furthermore we have by (3.40) \[|W^{\prime}(\tilde{u}_{0})|^{2}=|\tilde{u}_{0}|^{2}(1-|\tilde{u}_{0}|^{2})^{2} \leq CW(\tilde{u}_{0})\leq C|\nabla\tilde{u}_{0}|^{2}.\] From (3.34), the bounds on \(G_{\delta}\) and its derivatives, we get \[\frac{|(W^{\prime}+G_{\delta}^{\prime})(\tilde{u}_{0})\nabla\tilde{u}_{0}|}{| \nabla\tilde{u}_{0}|^{2}}\leq\frac{\frac{1}{2}|\nabla\tilde{u}_{0}|^{3}+\delta| \nabla\tilde{u}_{0}|}{|\nabla\tilde{u}_{0}|^{2}}\leq C\left(1+\frac{\delta}{| \nabla\tilde{u}_{0}|}\right). \tag{3.44}\] Substituting in (3.37), (3.39), and (3.44) into (3.43) and using the fact that \(G^{\prime\prime}<0\), we have \[\Delta\xi_{G} \geq-C\left(1+\frac{\delta}{|\nabla\tilde{u}_{0}|}\right)(|\nabla \xi_{G}|+|\nabla\varphi|)+(G_{\delta}^{\prime})^{2}+\delta G_{\delta}^{\prime }-\Delta\varphi+\Delta\varphi\] \[\geq-C\left(1+\frac{\delta}{|\nabla\tilde{u}_{0}|}\right)\left(| \nabla\xi_{G}|+R^{4-\frac{n+1-2\xi_{0}}{n+1+2\delta_{0}}}\omega\right)+(G_{ \delta}^{\prime})^{2}+\delta G_{\delta}^{\prime}.\] Thus applying equation (3.36) in \(B_{\frac{R}{2}}\cap\{\xi_{G}>0\}\cap\{\nabla\tilde{u}_{0}\neq 0\}\), we have (3.41) \[\Delta\xi_{G}\geq-C\left(1+\frac{\delta}{|\nabla\tilde{u}_{0}|}\right)(|\nabla \xi_{G}|+R^{4-\frac{n+1-2\xi_{0}}{n+1+2\delta_{0}}}\omega)+C(\delta^{6}+\delta ^{4}). \tag{3.45}\] We define \[\eta:=\sup_{B_{1}}\xi_{G} \tag{3.46}\] and consider two cases : **case i)**\(\eta:=\sup_{B_{1}}\xi_{G}<\delta\). Since \[\xi_{G}:=\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W(\tilde{u}_{0})-G(\tilde{u}_{0} )-\varphi<\delta,\] by (3.34) and (3.39) this implies \[\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W(\tilde{u}_{0})\leq\delta+G(\tilde{u}_{0 })+\varphi\leq\delta+C\delta+CR^{4-\frac{n+1-2\xi_{0}}{n+1+2\delta_{0}}}\omega.\] Our choices give \(CR^{4-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega=C\delta^{15+5\frac{n+1-2 \xi_{0}}{n+1+2\delta_{0}}}\leq C\delta\) so \[\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W(\tilde{u}_{0})\leq C\delta\] which, after integrating proves (3.26). **case ii)**\(\eta:=\sup_{B_{1}}\xi_{G}\geq\delta>0\). We choose a cutoff function \(\lambda\in C_{0}^{2}(B_{\frac{R}{2}})\) satisfying \(0\leq\lambda\leq 1\), \(\lambda\equiv 1\) on \(B_{\frac{R}{4}}\) and \(|\nabla^{j}\lambda|\leq CR^{-j}\) for \(j=1,2\). Then \(\exists x_{0}\in B_{\frac{R}{2}}\) such that \[(\lambda\xi_{G})(x_{0})=\max\left\{(\lambda\xi_{G})(x):x\in\bar{B}_{\frac{R}{2} }\right\}\geq\eta>0.\] By (3.31) we have \(\xi_{G}\leq C\) for some \(C(c_{0},\Lambda_{0},E_{0},n)>0\) in \(B_{R-1}\), and thus \[\lambda(x_{0})\geq\frac{\eta}{C}.\] Moreover, \[|\nabla\tilde{u}_{0}(x_{0})|^{2}\geq 2\xi_{G}(x_{0})\geq 2(\lambda\xi_{G})(x_{0} )\geq 2\eta\geq 2\delta>0.\] Since \(x_{0}\) is a critical point, \(\nabla(\lambda\xi_{G})(x_{0})=0\), and we get \[|\nabla\xi_{G}(x_{0})|=\lambda(x_{0})^{-1}|\nabla\lambda(x_{0})|\xi_{G}(x_{0}) \leq C(R\eta)^{-1}.\] At a maximum point \(x_{0}\), the Laplacian of the function \(\lambda\xi_{G}\) satisfies \[0 \geq\Delta(\lambda\xi_{G})(x_{0})\] \[=\lambda(x_{0})\Delta\xi_{G}(x_{0})+2\langle\nabla\lambda(x_{0}), \nabla\xi_{G}(x_{0})\rangle+\Delta\lambda(x_{0})\xi_{G}(x_{0}),\] and thus \[\Delta\xi_{G}(x_{0}) \leq\lambda(x_{0})^{-1}\left(C|\nabla\lambda(x_{0})||\nabla\xi_{ G}(x_{0})|+|\Delta\lambda(x_{0})||\xi_{G}(x_{0})|\right)\] \[\leq C\eta^{-1}\left(CR^{-1}(R\eta)^{-1}+CR^{-2}\right)\] \[\leq CR^{-2}\eta^{-1}(1+\eta^{-1})\] \[\leq CR^{-2}\eta^{-1}(1+\delta^{-1})\] \[\leq CR^{-2}\eta^{-1}\delta^{-1}, \tag{3.47}\] since \(\delta\ll 1\). 
Combining (3.41) and (3.47) we have \[CR^{-2}\eta^{-1}\delta^{-1} \geq-C\left(1+\frac{\delta}{|\nabla\tilde{u}_{0}(x_{0})|}\right) \left(|\nabla\xi_{G}|+R^{4-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega \right)+C(\delta^{6}+\delta^{4})\] \[\geq C\left[\left(1+\frac{\delta}{2\delta}\right)\left((R\eta)^{ -1}+R^{4-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega\right)+\delta^{4} \right].\] Thus the last term above is bounded by \[\delta^{4}\leq C\left(R^{-2}\eta^{-1}\delta^{-1}+(R\eta)^{-1}\right)+CR^{4- \frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega.\] By our choice of \(p_{1}=2,p_{2}=15\), we have \(R^{4-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega=R^{15+5\frac{n+1-2\delta_ {0}}{n+1+2\delta_{0}}}\ll\delta^{4}\). So \[\delta^{4}\leq C\left(R^{-2}\eta^{-1}\delta^{-1}+(R\eta)^{-1}\right),\] dividing both sides by \(\delta^{4}\eta^{-1}\) gives \[\eta \leq C\left(R^{-2}\delta^{-4}\delta^{-1}+R^{-1}\delta^{-4}\right)\] \[\leq C\delta.\] Namely, assuming (3.46) or not, we have \[\xi_{G}\leq C\delta,\] and thus by (3.39) \[\frac{|\nabla\tilde{u}_{0}|^{2}}{2}-W(\tilde{u}_{0}) =\xi_{G}+G_{\delta}(\tilde{u}_{0})+\varphi\] \[\leq C\delta+R^{4-\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\omega\] \[\leq C\delta+\delta^{15+5\frac{n+1-2\delta_{0}}{n+1+2\delta_{0}}}\] \[\leq C\delta.\] This proves (3.26) and as a consequence (3.19). If \(|\tilde{u}|\geq 1-\tau\) in \(B_{\frac{1}{2}}\), then (3.20) follows because the left hand side is less than the second term on the right. So we only need to consider the case there exists \(x_{0}\in B_{\frac{1}{2}}\) with \(\tilde{u}(x_{0})\leq 1-\tau\). By the Sobolev inequality and Calderon-Zygmund estimates we bound \(\tilde{u}\) in the Holder norm as follows \[\|\tilde{u}\|_{C^{\frac{2\delta_{0}}{(n+1)+\delta_{0}}}(B_{1})} \leq\|\tilde{u}\|_{W^{2,\frac{n+1}{2}+\delta_{0}}(B_{1})}\] \[\leq\tilde{C}\left(\|W^{\prime}(\tilde{u})\|_{L^{\frac{n+1}{2}+ \delta_{0}}(B_{1})}+\|\tilde{f}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1})}+\| \tilde{u}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{1})}\right)\] \[\leq C.\] Therefore \(|\tilde{u}|\leq 1-\frac{\tau}{2}\) and \(W(\tilde{u})\geq\frac{\tau^{2}}{4}\) in \(B_{\left(\frac{\tau}{2C}\right)^{\frac{(n+1)+\delta_{0}}{2\delta_{0}}}}\subset B _{1}\). So \[\int_{B_{\frac{1}{2}}}W(\tilde{u})\geq\frac{\tau^{2}}{4}\left(\frac{\tau}{2 \tilde{C}_{2}}\right)^{\frac{(n+1)[(n+1)+\delta_{0}]}{2\delta_{0}}}=C\tau^{ \frac{(n+1)^{2}+(n+1)\delta_{0}+4\delta_{0}}{2\delta_{0}}}.\] By our choice \(p_{3}=\frac{2\delta_{0}}{(n+1)^{2}+(n+1)\delta_{0}+6\delta_{0}}\), \[\int_{B_{\frac{1}{2}}}\left(\frac{|\nabla\tilde{u}|^{2}}{2}-W( \tilde{u})\right)_{+} \leq C\delta\] \[\leq C\tau^{\frac{(n+1)^{2}+(n+1)\delta_{0}+6\delta_{0}}{2\delta_ {0}}}\] \[\leq C\tau^{\frac{(n+1)^{2}+(n+1)\delta_{0}+4\delta_{0}}{2\delta_ {0}}}\] \[\leq C\tau\int_{B_{\frac{1}{2}}}\left(\frac{|\nabla\tilde{u}|^{2}} {2}+W(\tilde{u})\right),\] which proves (3.20). Next we derive energy estimates away from transition regions. 
**Proposition 3.5** ([10, Proposition 3.4]).: _For any \(n\geq 2\), \(0\leq\delta\leq\delta_{1}\), \(\varepsilon>0,u_{\varepsilon}\in C^{2}(\Omega),f_{\varepsilon}\in C^{0}(\Omega)\), if_ \[-\varepsilon\Delta u_{\varepsilon}+\frac{W^{\prime}(u_{\varepsilon})}{ \varepsilon}=f_{\varepsilon}\quad\text{ in }\Omega\] _and_ \[\Omega^{\prime}\subset\subset\Omega,0<r\leq d(\Omega^{\prime},\partial\Omega)\] _then_ \[\int_{\{|u_{\varepsilon}|\geq 1-\delta\}\cap\Omega^{\prime}} \left(\varepsilon|\nabla u_{\varepsilon}|^{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}+\frac{W^{\prime}(u_{\varepsilon})^{2}}{\varepsilon}\right)\] \[\leq C\delta\int_{\{|u_{\varepsilon}|\leq 1-\delta\}\cap\Omega} \varepsilon|\nabla u_{\varepsilon}|^{2}+C\varepsilon\int_{\Omega}|f_{ \varepsilon}|^{2}+C\left(\frac{\delta}{r}+\frac{\delta^{2}}{r^{2}}\right) \varepsilon\mathcal{L}^{n+1}(\Omega)+\frac{C\varepsilon}{r^{2}}\int_{\{|u_{ \varepsilon}|\geq 1\}\cap\Omega}W^{\prime}(u_{\varepsilon})^{2}.\] _(Notice the power of \(f_{\varepsilon}\) in the above inequality will still be 2 instead of \(\frac{n+1}{2}+\delta_{0}\).)_ Proof.: Define a continuous function \[g(t)=\begin{cases}W^{\prime}(t),&\text{ for }|t|\geq 1-\delta\\ 0,&\text{ for }|t|\leq t_{0}\\ \text{ linear},&\text{ for }t\in[-1+\delta,-t_{0}]\cup[t_{0},1-\delta],\end{cases}\] where \(t_{0}=\frac{1}{\sqrt{3}}\) is chosen to be the number in \((0,1)\) such that \(W^{\prime\prime}(t_{0})=0\). Clearly \(|g|\leq|W^{\prime}|\). For \(\eta\in C^{1}_{0}(\Omega)\) satisfying \(0\leq\eta\leq 1\), \(\eta\equiv 1\) on \(\Omega^{\prime}\) and \(|\nabla\eta|\leq Cr^{-1}\), we get by integration by parts \[\int_{\Omega}f_{\varepsilon}g(u_{\varepsilon})\eta^{2} =\int_{\Omega}\left(-\varepsilon\Delta u_{\varepsilon}+\frac{W^{ \prime}(u_{\varepsilon})}{\varepsilon}\right)g(u_{\varepsilon})\eta^{2}\] \[=\int_{\Omega}\varepsilon g^{\prime}(u_{\varepsilon})|\nabla u_{ \varepsilon}|^{2}\eta^{2}+2\int_{\Omega}\varepsilon g(u_{\varepsilon})\eta \langle\nabla u_{\varepsilon},\nabla\eta\rangle+\int_{\Omega}\frac{W^{\prime }(u_{\varepsilon})}{\varepsilon}g(u_{\varepsilon})\eta^{2}. \tag{3.48}\] The left hand side of (3.48) can be bounded by \[\int_{\Omega}f_{\varepsilon}g(u_{\varepsilon})\eta^{2}\leq\frac{\varepsilon} {2}\int_{\Omega}|f_{\varepsilon}|^{2}+\frac{1}{2\varepsilon}\int_{\Omega}g(u_ {\varepsilon})^{2}\eta^{2}\leq\frac{\varepsilon}{2}\int_{\Omega}|f_{ \varepsilon}|^{2}+\frac{1}{2\varepsilon}\int_{\Omega}W^{\prime}(u_{ \varepsilon})g(u_{\varepsilon})\eta^{2}. \tag{3.49}\] By the definition of \(g\) above, we have \[|g(t)| \leq|g(1-\delta)|=W^{\prime}(1-\delta)\leq C\delta,\] \[|g^{\prime}(t)| \leq\frac{|g(1-\delta)|}{1-\delta}\leq\frac{|g(1-\delta)|}{1- \delta_{1}}\leq C\delta,\] for \(|t|\leq 1-\delta\). 
Applying these estimates to the second term on the right hand side of (3.48) we get the bound \[\left|2\int_{\Omega}\varepsilon g(u_{\varepsilon})\eta\langle \nabla u_{\varepsilon},\nabla\eta\rangle\right|\] \[\leq 2\delta\int_{\{|u_{\varepsilon}|\leq 1-\delta\}}\varepsilon \eta|\nabla u_{\varepsilon}||\nabla\eta|+\left|\int_{\{|u_{\varepsilon}|\geq 1 -\delta\}}\varepsilon W^{\prime}(u_{\varepsilon})\langle\nabla u_{ \varepsilon},\nabla\eta\rangle\right|\] \[\leq C\delta\int_{\{|u_{\varepsilon}|\leq 1-\delta\}}\varepsilon| \nabla u_{\varepsilon}|^{2}+\varepsilon\delta r^{-1}\mathcal{L}^{n+1}(\Omega)+ \tau\int_{\{|u_{\varepsilon}|\geq 1-\delta\}}\varepsilon|\nabla u_{ \varepsilon}|^{2}\eta^{2}+C\varepsilon\tau^{-1}r^{-2}\int_{\{|u_{\varepsilon} |\geq 1-\delta\}}W^{\prime}(u_{\varepsilon})^{2}, \tag{3.50}\] for \(\tau>0\). As \(g^{\prime}(t)=W^{\prime\prime}(t)\geq C_{W}>0\) for \(|t|\geq 1-\delta\), we obtain from (3.48), (3.49) and \[C_{W}\int_{\{|u_{\varepsilon}|\geq 1-\delta\}}\varepsilon|\nabla u _{\varepsilon}|^{2} +\frac{1}{2\varepsilon}\int_{\Omega}W^{\prime}(u_{\varepsilon})g(u_ {\varepsilon})\eta^{2}\] \[\leq C_{W}\delta\int_{\{|u_{\varepsilon}|\leq 1-\delta\}} \varepsilon|\nabla u_{\varepsilon}|^{2}+\tau\int_{\{|u_{\varepsilon}|\geq 1- \delta\}}\varepsilon|\nabla u_{\varepsilon}|^{2}\eta^{2}+\frac{\varepsilon}{2} \int_{\Omega}|f_{\varepsilon}|^{2} \tag{3.50}\] \[\int_{\{|u_{\varepsilon}|\geq 1\}\cap\Omega^{\prime}_{i}}W^{\prime}(u_{ \varepsilon})^{2}\leq C\varepsilon^{2}\int_{\Omega^{\prime}_{i-1}}|f_{\varepsilon} |^{2}+Ck^{2}r^{-2}\varepsilon^{2}\int_{\{|u_{\varepsilon}|\geq 1\}\cap\Omega^{ \prime}_{i-1}}W^{\prime}(u_{\varepsilon})^{2},\] for \(i=1,...,k\). The conclusion is obtained by applying the above inequality inductively \(k\) times. We conclude the following integral bound for positive part of discrepancy measure. **Lemma 3.7** ([14, Lemma 3.1] for all \(n\)).: _Let \(n\geq 2\), \(0<\delta\leq\delta_{1}\) (where \(\delta_{1}\) given as in Lemma 3.4), \(0<\varepsilon\leq\rho\), \(\rho_{0}:=\max\{2,1+\delta^{-M}\varepsilon\}\rho\) for some large universal constant \(M\). If \(u_{\varepsilon}\in C^{2}(B_{\rho_{0}}),f_{\varepsilon}\in C^{0}(B_{\rho_{0}})\) satisfies (1.1) in \(B_{\rho_{0}}(0)\) then the positive part of the discrepancy measure satisfies_ \[\rho^{-n}\int_{B_{\rho}}\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon})}{\varepsilon}\right)_{+} \leq C\delta^{p_{3}}\rho^{-n}\int_{B_{2_{\rho}}}\left(\frac{ \varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}\right)+C\delta^{-M}\varepsilon\rho^{-n}\int_{B_{\rho_{0}}}|f_{ \varepsilon}|^{2}\] \[+C\delta^{-M}\rho^{-n}\int_{B_{\rho_{0}}\cap\{|u_{\varepsilon}| \geq 1\}}\frac{W^{\prime}(u_{\varepsilon})^{2}}{\varepsilon}+C\left(\frac{ \varepsilon}{\rho}\right)\delta.\] Proof.: We prove the case \(0<\varepsilon\leq\rho=1\). The case for other \(\rho>0\) follows by rescaling to \(\rho=1\). For \(0<\delta\leq\delta_{1}\) we choose \(R(\delta)=\frac{1}{\delta^{p_{1}}}\) and \(\omega(\delta)=C_{\omega}\delta^{p_{2}}\) as in Lemma 3.4. 
Let \(\{x_{i}\}_{i\in\mathbf{I}}\subset B_{1},\mathbf{I}\subset\mathbb{N}\) be a maximal collection of points satisfying \[\min_{i\neq j}|x_{i}-x_{j}|\geq\frac{\varepsilon}{2}.\] Since \(\varepsilon\leq 1\), we have \[B_{1}(0)\subset\cup_{i\in\mathbf{I}}\bar{B}_{\frac{\varepsilon}{ 2}}(x_{i})\subset B_{\frac{3}{2}}(0),\] \[\sum_{i\in\mathbf{I}}\chi_{B_{\varepsilon}(x_{i})}\leq C_{n}\chi _{B_{2}(0)},\] \[\sum_{i\in\mathbf{I}}\chi_{B_{2R\varepsilon}(x_{i})}\leq C_{n}R^ {n+1}\chi_{B_{1+2R\varepsilon}(0)}.\] For \(i\in\mathbf{I}\) and \(x\in B_{2R}\), we define the rescaled and translated functions as \[\tilde{u}_{i}(x) :=u_{\varepsilon}(x_{i}+\varepsilon x),\] \[\tilde{f}_{i}(x) :=\varepsilon f_{\varepsilon}(x_{i}+\varepsilon x),\] which satisfy the rescaled equation \[-\Delta\tilde{u}_{i}+W^{\prime}(\tilde{u}_{i})=\tilde{f}_{i},\quad\text{ in }B_{2R}(0). \tag{3.51}\] For \(\tilde{u}_{i},\tilde{f}_{i}\) to be well-defined, we choose \(M\geq 5n+6\) and \(\delta_{1}\leq\frac{1}{2}\) so that \[x_{i}+\varepsilon x\in B_{1+2R\varepsilon}(0)\subset B_{1+\delta^{-M} \varepsilon}(0)\subset B_{\rho_{0}}(0).\] We decompose the index set \(\mathbf{I}\) into \[\mathbf{I}_{1} :=\{i\in\mathbf{I}:\|f_{\varepsilon}\|_{L^{\frac{n+1}{2}+\delta_ {0}}(B_{2R\varepsilon}(x_{i}))}<\varepsilon^{\frac{n+1}{2}-1+\delta_{0}}\omega,\|(|u_{\varepsilon}|-1)_{+}\|_{L^{1}(B_{2R\varepsilon}(x_{i}))}<C_{\omega} \varepsilon^{n+1}\},\] \[\mathbf{I}_{2} :=\mathbf{I}\setminus\mathbf{I}_{1}.\] For \(i\in\mathbf{I}_{1}\), we have \[\|\tilde{f}_{i}\|_{L^{\frac{n+1}{2}+\delta_{0}}(B_{2R}(0))}= \varepsilon^{-\frac{n+1}{2}-\delta_{0}}\|\varepsilon f_{\varepsilon}\|_{L^{ \frac{n+1}{2}+\delta_{0}}(B_{2R\varepsilon}(x_{i}))}<\omega\leq C_{\omega},\] \[\|(|\tilde{u}_{i}|-1)_{+}\|_{L^{1}(B_{2R}(x_{i}))}=\varepsilon^{ -n-1}\|(|u_{\varepsilon}|-1)_{+}\|_{L^{1}(B_{2R\varepsilon}(x_{i}))}<C_{ \omega}.\] By the condition \(\|u\|_{L^{\infty}}\leq c_{0}\) in the condition of Theorem 1.1, and choosing \(C_{\omega}\) sufficiently small, we have \[\|\tilde{u}_{i}\|_{L^{\infty}(B_{R})}\leq 1+C\cdot C_{\omega}\leq 2.\] Applying Lemma 3.4 to \(\tilde{u}_{i}\) gives (with the \(p_{3}\) from Lemma 3.4) \[\int_{B_{\frac{1}{2}}}\left(\frac{|\nabla\tilde{u}_{i}|^{2}}{2}-W (\tilde{u}_{i})\right)_{+}\] \[\leq C\delta^{p_{3}}\int_{B_{\frac{1}{2}}}\left(\frac{|\nabla \tilde{u}_{i}|^{2}}{2}+W(\tilde{u}_{i})\right)+\int_{B_{\frac{1}{2}}\cap\{| \tilde{u}_{i}|\geq 1-\delta\}}\frac{|\nabla\tilde{u}_{i}|^{2}}{2}.\] Rescaling back, we get \[\int_{B_{\frac{\varepsilon}{2}}(x_{i})}\left(\frac{\varepsilon| \nabla u_{\varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon})}{\varepsilon}\right)_ {+}\] \[\leq C\delta^{p_{3}}\int_{B_{\frac{\varepsilon}{2}}(x_{i})} \left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon })}{\varepsilon}\right)+\int_{B_{\frac{\varepsilon}{2}}(x_{i})\cap\{|u_{ \varepsilon}|\geq 1-\delta\}}\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}.\] Summing over \(i\in\mathbf{I}_{1}\) and noticing \(B_{\frac{\varepsilon}{2}}(x_{i})\) are disjoint, we get \[\sum_{i\in\mathbf{I}_{1}}\int_{B_{\frac{\varepsilon}{2}}(x_{i})} \left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon })}{\varepsilon}\right)_{+}\] \[\leq C\delta^{p_{3}}\int_{B_{\frac{3}{2}}(0)}\left(\frac{ \varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}\right)+C\int_{B_{\frac{3}{2}}(0)\cap\{|u_{\varepsilon}|\geq 1- \delta\}}\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}\] \[\leq 
C\delta^{p_{3}}\int_{B_{2}(0)}\left(\frac{\varepsilon| \nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon}\right)+ C\varepsilon\int_{B_{2}(0)}|f_{\varepsilon}|^{2}+C\varepsilon\left(\delta+ \int_{B_{2}(0)\cap\{|u_{\varepsilon}|\geq 1\}}W^{\prime}(u_{\varepsilon})^{2} \right), \tag{3.52}\] where we used Proposition 3.5 in the last line. Since for \(n\geq 3\) (the \(n=2\) case need \(\delta_{0}\geq\frac{1}{2}\), but is already dealt with in [10]) \[W^{\prime}(t)^{2}\geq 4t^{2}(1+t)^{2}(1-t)^{2}\geq C_{W}t^{2}(|t|-1)^{2}\geq C _{W}(|t|-1)_{+}^{\frac{n+1}{2}+\delta_{0}}.\] Thus for \(i\in\mathbf{I}_{2}\) (at least one of the bounds in \(\mathbf{I}_{1}\) does not hold), we have \[C_{\omega} \leq\int_{B_{2R}(0)}(|\tilde{u}_{i}|-1)_{+}^{\frac{n+1}{2}+\delta _{0}}+\omega^{-2}\int_{B_{2R}}\tilde{f}_{i}^{2}\] \[\leq C\int_{B_{2R}(0)\cap\{|\tilde{u}_{i}|\geq 1\}}W^{\prime}( \tilde{u}_{i})^{2}+\omega^{-2}\int_{B_{2R}}\tilde{f}_{i}^{2}.\] By elliptic estimates applied to the rescaled equation (3.51), we get \[\int_{B_{\frac{1}{2}}}|\nabla\tilde{u}_{i}|^{2}\leq\tilde{C}\int_{B_{1}}\left( W^{\prime}(\tilde{u}_{i})^{2}+\tilde{u}_{i}^{2}+\tilde{f}_{i}^{2}\right)\] \[\leq C\delta^{p_{3}}\int_{B_{2}(0)}\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon}\right)+C\varepsilon \delta+C\varepsilon\delta^{-M}\int_{B_{\max\{2,1+\delta^{-M}\varepsilon\}}(0)}|f _{\varepsilon}|^{2}\] \[+C\delta^{-M}\int_{B_{\max\{2,1+\delta^{-M}\varepsilon\}}(0)} \frac{W^{\prime}(u_{\varepsilon})^{2}}{\varepsilon}.\] This completes the proof for \(\rho=1\) and rescaling gives the cases for other \(\rho>0\). As a result of these, we have the \(L^{1}\) convergence of the positive part of the discrepancy measure as \(\varepsilon\to 0\). **Lemma 3.8**.: _If we consider \(\xi_{\varepsilon}=\xi_{\varepsilon,+}-\xi_{\varepsilon,-}\) the decomposition of \(\xi_{\varepsilon}\) into positive and negative variations then_ \[\xi_{\varepsilon,+}\to 0\quad\text{ as }\varepsilon\to 0.\] _Furthermore this shows \(\xi\leq 0\)._ Proof.: For \(B_{2\rho}=B_{2\rho}(x)\subset\Omega^{\prime}\subset\subset\Omega,0<\delta< \delta_{0}\) and \(0<\varepsilon\leq\delta^{M}\) then applying Lemma 3.7 we have \[\begin{split}\int_{B_{\rho}}\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon})}{\varepsilon}\right)_{+}& \leq C\delta^{p_{3}}\int_{B_{2\rho}}\left(\frac{\varepsilon| \nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon} \right)+C\delta^{-M}\varepsilon\int_{B_{\rho}}|f_{\varepsilon}|^{2}\\ &\quad+C\delta^{-M}\int_{B_{\rho}\cap\{|u_{\varepsilon}|\geq 1\}} \frac{W^{\prime}(u_{\varepsilon})^{2}}{\varepsilon}+C\left(\frac{\varepsilon} {\rho}\right)\delta\rho^{n}.\end{split} \tag{3.54}\] Proposition 3.6 gives us \[\int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{\rho}}W^{\prime}(u_{\varepsilon})^{2} \leq C_{k}(1+\rho^{-2k}\varepsilon^{2k})\varepsilon^{2}\int_{B_{2\rho}}|f_{ \varepsilon}|^{2}+C_{k}\rho^{-2k}\varepsilon^{2k}\int_{\{|u_{\varepsilon}| \geq 1\}\cap B_{2\rho}}W^{\prime}(u_{\varepsilon})^{2}\] for all \(k\in\mathbb{N}_{0}\). 
Choosing \(k=2\) and applying the bound \[\int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{2\rho}}W^{\prime}(u_{\varepsilon})^{2} \leq C(\Omega^{\prime})\] and inserting these estimates into (3.54), we get \[\begin{split}\int_{B_{\rho}}\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon})}{\varepsilon}\right)_{+}& \leq C\delta^{p_{3}}\int_{B_{2\rho}}\left(\frac{\varepsilon| \nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon} \right)+C(\delta^{-M}\varepsilon+\varepsilon^{2})\int_{B_{\rho}}|f_{ \varepsilon}|^{2}\\ &\quad+C\delta^{-M}\varepsilon^{3}+C\left(\frac{\varepsilon}{ \rho}\right)\delta\rho^{n}.\end{split}\] By the Holder inequality with exponent \(q_{0}/2\), we estimate \[\begin{split}\varepsilon\int_{B_{r/2}}|f_{\varepsilon}|^{2}& =\varepsilon^{2}\int_{B_{r/2}}\left(\frac{f_{\varepsilon}}{| \varepsilon|\nabla u_{\varepsilon}|}\right)^{2}\varepsilon|\nabla u_{ \varepsilon}|^{2}\\ &\leq\varepsilon^{2}\left(\int_{B_{r/2}}\left|\frac{f_{ \varepsilon}}{|\varepsilon|\nabla u_{\varepsilon}|}\right|^{q_{0}}\varepsilon |\nabla u_{\varepsilon}|^{2}\right)^{2/q_{0}}\left(\int_{B_{r/2}}\varepsilon| \nabla u_{\varepsilon}|^{2}\right)^{\frac{q_{0}}{q_{0}-2}}\\ &\leq\varepsilon^{2}C(\Lambda_{0},E_{0}),\end{split} \tag{3.55}\] and obtain \[\begin{split}\int_{B_{\rho}}\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon})}{\varepsilon}\right)_{+}& \leq\tilde{C}\delta^{p_{3}}+\tilde{C}\delta^{-M}\varepsilon^{2}+ \tilde{C}\varepsilon^{2}+\tilde{C}\delta^{-M}\varepsilon^{3}+\tilde{C}\delta \varepsilon\\ &\leq\tilde{C}\delta.\end{split}\] Letting \(\varepsilon\to 0\) we get \(\xi_{\varepsilon,+}(B_{\rho})\to 0\). ## 4. Rectifiability We will proceed by proving upper and lower density bounds for the energy measure. Combining the estimates obtained in the previous section, we get an upper bound on the density ratio of the limit energy measure. **Theorem 4.1**.: _If we consider \(\Omega^{\prime}\subset\subset\Omega\) and \(r_{0}(\Omega^{\prime}):=\min\left\{1,\frac{d(\Omega^{\prime},\partial\Omega)}{2 }\right\}\) then for all \(x_{0}\in\Omega^{\prime},0<r<r_{0}\) there exists a function \(\phi(\varepsilon)\) with \(\lim_{\varepsilon\to 0}\phi(\varepsilon)=0\) such that_ \[r^{-n}\mu_{\varepsilon}(B_{r}(x_{0}))\leq C(\Lambda_{0},\Omega^{\prime})+ \frac{\phi(\varepsilon)}{r^{n}}. \tag{4.1}\] _Letting \(\varepsilon\to 0\) we get_ \[r^{-n}\mu(B_{r}(x_{0}))\leq C(\Lambda_{0},\Omega^{\prime}),\] _where \(\mu=\lim_{\varepsilon\to 0}\mu_{\varepsilon}\) is the weak-* limit of \(\mu_{\varepsilon}=\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+ \frac{W(u_{\varepsilon})}{\varepsilon}\right)dx\) in the sense of Radon measures._ Proof.: For the sake of simplicity we set \(x_{0}=0\) and set \(B_{\rho}(0)=B_{\rho}\). 
By the almost monotonicity formula (3.1), Lemma 3.8 and Holder's inequality \[\frac{d}{d\rho}\left(\frac{\mu_{\varepsilon}(B_{\rho})}{\rho^{n}}\right)=- \frac{1}{\rho^{n+1}}\xi_{\varepsilon}(B_{\rho})+\frac{\varepsilon}{\rho^{n+2} }\int_{\partial B_{\rho}}\langle x,\nabla u\rangle^{2}-\frac{1}{\rho^{n+1}} \int_{B_{\rho}}\langle x,\nabla u\rangle f_{\varepsilon} \tag{4.2}\] We estimate the last term above as follows \[\frac{1}{\rho^{n+1}}\left|\int_{B_{\rho}}\langle x,\nabla u \rangle f_{\varepsilon}\right| \leq\frac{1}{\rho^{n+1}}\int_{B_{\rho}}\left|\langle x,\nabla u \rangle\right|\left|\frac{f_{\varepsilon}}{\varepsilon|\nabla u|}\right| \varepsilon|\nabla u|\] \[\leq\frac{1}{\rho^{n}}\int_{B_{\rho}}\left|\frac{f_{\varepsilon} }{\varepsilon|\nabla u|}\right|\varepsilon|\nabla u|^{2}\] \[\leq\frac{1}{\rho^{n}}\left(\int_{B_{\rho}}\left|\frac{f_{ \varepsilon}}{\varepsilon|\nabla u|}\right|^{q_{0}}\varepsilon|\nabla u|^{2} \right)^{\frac{1}{q_{0}}}\left(\int_{B_{\rho}}\varepsilon|\nabla u|^{2} \right)^{\frac{q_{0}-1}{q_{0}}}\] \[\leq C(\Lambda_{0})\rho^{-\frac{n}{q_{0}}}\left(\frac{\mu_{ \varepsilon}(B_{\rho})}{\rho^{n}}\right)^{\frac{q_{0}-1}{q_{0}}}\] \[\leq C(\Lambda_{0})\rho^{-\frac{n}{q_{0}}}\left(1+\frac{\mu_{ \varepsilon}(B_{\rho})}{\rho^{n}}\right) \tag{4.3}\] where we used the inequality \(a^{1-\frac{1}{q_{0}}}\leq 1+a\) which holds for all \(a\geq 0\). Inserting this inequality into (4.2) and discarding the positive second term on the right had side, we get \[\frac{d}{d\rho}\left(1+\frac{\mu_{\varepsilon}(B_{\rho})}{\rho^{n}}\right)= \frac{d}{d\rho}\left(\frac{\mu_{\varepsilon}(B_{\rho})}{\rho^{n}}\right)\geq -\frac{1}{\rho^{n+1}}\xi_{\varepsilon}(B_{\rho})-C(\Lambda_{0})\rho^{-\frac{n }{q_{0}}}\left(1+\frac{\mu_{\varepsilon}(B_{\rho})}{\rho^{n}}\right). \tag{4.4}\] Multiplying both sides by \(\exp\left(\int C(\Lambda_{0})\rho^{-\frac{n}{q_{0}}}d\rho\right)=\exp\left(\frac{q _{0}}{q_{0}-n}C(\Lambda_{0})\rho^{1-\frac{n}{q_{0}}}\right)\) we have \[\frac{d}{d\rho}\left[\exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0})\rho^{1-\frac{ n}{q_{0}}}\right)\left(1+\frac{\mu_{\varepsilon}(B_{\rho})}{\rho^{n}}\right) \right]\geq-\exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0})\rho^{1-\frac{n}{q_{0} }}\right)\frac{\xi_{\varepsilon}(B_{\rho})}{\rho^{n+1}}.\] Integrating from \(r\) to \(r_{0}\) gives \[\exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0})r_{0}^{1-\frac{n}{q_ {0}}}\right)\left(1+\frac{\mu_{\varepsilon}(B_{r_{0}})}{r_{0}^{n}}\right)- \exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0})r^{1-\frac{n}{q_{0}}}\right) \left(1+\frac{\mu_{\varepsilon}(B_{r})}{r^{n}}\right)\] \[\geq-\int_{r}^{r_{0}}\exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0} )\rho^{1-\frac{n}{q_{0}}}\right)\frac{\xi_{\varepsilon,+}(B_{\rho})}{\rho^{n+ 1}}\] \[\geq-\exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0})r_{0}^{1-\frac{n }{q_{0}}}\right)\int_{r}^{r_{0}}\frac{\xi_{\varepsilon,+}(B_{\rho})}{\rho^{n+ 1}}.\] Namely \[\exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0})r_{0}^{1-\frac{n}{q_ {0}}}\right)\left(1+\frac{\mu_{\varepsilon}(B_{r_{0}})}{r_{0}^{n}}\right)- \frac{\mu_{\varepsilon}(B_{r})}{r^{n}} \geq-C(\Lambda_{0},\Omega^{\prime})\int_{r}^{r_{0}}\frac{\xi_{ \varepsilon,+}(B_{\rho})}{\rho^{n+1}}\] \[\geq-C(\Lambda_{0},\Omega^{\prime})\int_{r}^{r_{0}}\frac{\xi_{ \varepsilon,+}(B_{r_{0}})}{\rho^{n+1}}, \tag{4.5}\] where we used \(\exp\left(\frac{q_{0}}{q_{0}-n}C(\Lambda_{0})r^{1-\frac{n}{q_{0}}}\right)>1\) for \(r>0\). 
Passing to the limit as \(\varepsilon\to 0\) and using Lemma 3.8, we have \[\frac{\mu(B_{r})}{r^{n}}\leq C(\Lambda_{0},\Omega^{\prime},n,q_{0}).\] Next, we obtain estimates of the discrepancy measure for each \(\varepsilon\). **Proposition 4.2**.: _Let \(\delta=\rho^{\gamma},\varepsilon\leq\rho\leq r\) for \(0<\gamma<\frac{1}{M}\leq\frac{1}{2}\), we have \(\delta^{-M}\varepsilon\leq\rho^{1-M\gamma}\leq 1\). For \(B_{3\rho^{1-\beta}}(x)\subset\subset\Omega\), we have_ \[\rho^{-n-1}\xi_{\varepsilon,+}(B_{\rho}(x)) \leq C\rho^{p_{3}\gamma-n-1}\mu_{\varepsilon}(B_{2\rho}(x))\] \[+\tilde{C}_{k}\varepsilon\rho^{-M\gamma-n-1}\int_{B_{3\rho^{1- \beta}}(x)}|f_{\varepsilon}|^{2}+\tilde{C}_{\beta}\varepsilon\rho^{\gamma-2} \left(1+\int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{3r^{1-\beta}}(x)}W^{\prime}(u_{ \varepsilon})^{2}\right). \tag{4.6}\] Proof.: For \(0<\gamma<\frac{1}{M}\leq\frac{1}{2}\), by choosing \(\delta^{-M}\varepsilon\leq\rho^{1-M\gamma}\leq 1\) we get \(\max\{2,1+\delta^{-M}\varepsilon\}=2\). Therefore substituting \(\delta=\rho^{\gamma}\) into Lemma 3.7 we have \[\rho^{-n-1}\xi_{\varepsilon,+}(B_{\rho}) =\rho^{-n-1}\int_{B_{\rho}(x)}\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}-\frac{W(u_{\varepsilon})}{\varepsilon}\right)_{+}\] \[\leq C\rho^{p_{3}\gamma-n-1}\int_{B_{2\rho}(x)}\left(\frac{ \varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}\right)+C\varepsilon\rho^{-M\gamma-n-1}\int_{B_{2\rho}(x)}|f_{ \varepsilon}|^{2}\] \[+C\varepsilon^{-1}\rho^{-M\gamma-n-1}\int_{B_{2\rho}(x)\cap\{|u_ {\varepsilon}|\geq 1\}}W^{\prime}(u_{\varepsilon})^{2}+C\varepsilon\rho^{ \gamma-2}\] On the other hand we have by Proposition 3.6 with \(r:=d(B_{2\rho}(x),\partial B_{3\rho^{1-\beta}}(x))=3\rho^{1-\beta}-2\rho\geq \rho^{1-\beta}\) \[\int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{2\rho}}W^{\prime}(u_{\varepsilon})^{2} \leq C_{k}(1+\rho^{-2k(1-\beta)}\varepsilon^{2k})\varepsilon^{2}\int_{B_{3 \rho^{1-\beta}}}|f_{\varepsilon}|^{2}+C_{k}\rho^{-2k(1-\beta)}\varepsilon^{2k }\int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{3\rho^{1-\beta}}}W^{\prime}(u_{ \varepsilon})^{2}\] Substituting this into our above estimate, we get \[\rho^{-n-1}\xi_{\varepsilon,+}(B_{\rho}) \leq C\rho^{p_{3}\gamma-n-1}\int_{B_{2\rho}(x)}\left(\frac{ \varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{ \varepsilon}\right)+\tilde{C}_{k}\varepsilon\rho^{-M\gamma-n-1}\int_{B_{3\rho ^{1-\beta}}(x)}|f_{\varepsilon}|^{2}\] \[+C\varepsilon^{-1}\rho^{-M\gamma-n-1}\tilde{C}_{k}\rho^{2k\beta- 2k}\varepsilon^{2k}\int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{3\rho^{1-\beta}}(x)}W^{ \prime}(u_{\varepsilon})^{2}+C\varepsilon\rho^{\gamma-2}\] \[\leq C\rho^{p_{3}\gamma-n-1}\mu_{\varepsilon}(B_{2\rho}(x))+ \tilde{C}_{k}\varepsilon\rho^{-M\gamma-n-1}\int_{B_{3\rho^{1-\beta}}(x)}|f_{ \varepsilon}|^{2}\] \[+C\left(\varepsilon\rho^{\gamma-2}+\varepsilon^{-1}\rho^{-M \gamma-n-1}\varepsilon^{2k\beta}\int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{3\rho^{1- \beta}}(x)}W^{\prime}(u_{\varepsilon})^{2}\right)\] \[\leq C\rho^{p_{3}\gamma-n-1}\mu_{\varepsilon}(B_{2\rho}(x))+ \tilde{C}_{k,\beta}\varepsilon\rho^{-M\gamma-n-1}\int_{B_{3\rho^{1-\beta}}(x )}|f_{\varepsilon}|^{2}\] \[+\tilde{C}_{\beta}\varepsilon\rho^{\gamma-2}\left(1+\int_{\{|u_ {\varepsilon}|\geq 1\}\cap B_{3\rho^{1-\beta}}(x)}W^{\prime}(u_{\varepsilon})^{2} \right),\] where we have chosen \(-M\gamma-n+2k\beta+1\geq\gamma-2\) or \(k>\frac{\gamma-2+M\gamma+n+1}{2\beta}\) sufficiently large. In the following theorem we prove the density lower bound for the limit measure. 
**Theorem 4.3**.: _There exists \(\bar{\theta}>0\) such that for any \(\Omega^{\prime}\subset\subset\Omega\) and \(r_{1}(\Omega^{\prime})\leq\frac{d(\Omega^{\prime},\partial\Omega)}{2}\) sufficiently small, we have_ \[r^{-n}\mu(B_{r}(x))\geq\bar{\theta}-Cr^{\gamma},\] _for some \(\gamma>0\), and all \(x\in\operatorname{spt}\mu\cap\Omega^{\prime}\) and \(0<r\leq r_{1}\). In particular,_ \[\theta_{*}^{n}(\mu)\geq\frac{\bar{\theta}}{\omega_{n}}\] _for \(\mu\)-a.e. in \(\Omega\)._ Proof.: Without loss of generality, we assume \(0\in\operatorname{spt}\mu\cap\Omega^{\prime}\) and want to prove a density lower bound at \(0\). We first integrate (4.4) from \(s\) to \(r\). \[\begin{split}&\frac{\mu_{\varepsilon}(B_{r}(x))}{r^{n}}-\frac{ \mu_{\varepsilon}(B_{s}(x))}{s^{n}}\\ &\geq-\int_{s}^{r}\frac{1}{\rho^{n+1}}\xi_{\varepsilon,+}(B_{ \rho}(x))d\rho-\int_{s}^{r}C(\Lambda_{0})\rho^{-\frac{n}{q_{0}}}\left(\frac{ \mu_{\varepsilon}(B_{\rho}(x))}{\rho^{n}}\right)^{\frac{q_{0}-1}{q_{0}}}.\end{split} \tag{4.7}\] By (4.6) in Proposition 4.2, the discrepancy term \[-\int_{s}^{r}\rho^{-n-1}\xi_{\varepsilon,+}(B_{\rho}(x)) \geq-\int_{s}^{r}C\rho^{p_{3}\gamma-n-1}\mu_{\varepsilon}(B_{2 \rho}(x))-\int_{s}^{r}\tilde{C}_{k}\varepsilon\rho^{-M\gamma-n-1}\int_{B_{3 \rho^{1-\beta}}(x)}|f_{\varepsilon}|^{2}\] \[-\int_{s}^{r}\tilde{C}_{\beta}\varepsilon\rho^{\gamma-2}\left(1 +\int_{\{|u_{\varepsilon}|\geq 1\}\cap\Omega}W^{\prime}(u_{\varepsilon})^{2} \right). \tag{4.8}\] By the \(\varepsilon\)-Upper Density Bound (4.1) we get \[-\int_{s}^{r}\rho^{p_{3}\gamma-n-1}\mu_{\varepsilon}(B_{2\rho}(x)) =-\int_{s}^{r}2^{n}\rho^{p_{3}\gamma-1}\frac{\mu_{\varepsilon}(B_ {2\rho}(x))}{(2\rho)^{n}}\] \[\geq-\int_{s}^{r}2^{n}\rho^{p_{3}\gamma-1}\left(C(\Lambda_{0}, \Omega^{\prime})+\frac{\phi(\varepsilon)}{\rho^{n}}\right)\] \[\geq-C(\Lambda_{0},\Omega^{\prime})\left(r^{p_{3}\gamma}-s^{p_{3 }\gamma}\right)-\frac{\phi(\varepsilon)}{p_{3}\gamma-n+1}\left(r^{p_{3} \gamma-n}-s^{p_{3}\gamma-n}\right).\] The last term in (4.8) may be estimated as follows \[-\int_{s}^{r}\tilde{C}_{\beta}\varepsilon\rho^{\gamma-2}\left(1+\int_{\{|u_{ \varepsilon}|\geq 1\}\cap\Omega}W^{\prime}(u_{\varepsilon})^{2}\right)\geq- \tilde{C}_{\beta}\int_{s}^{t}\rho^{\gamma-1}d\rho\leq-\tilde{C}_{\beta}(r^{ \gamma}-s^{\gamma}).\] Using the bound \[\left(\frac{\mu_{\varepsilon}(B_{\rho}(x))}{\rho^{n}}\right)^{\frac{q_{0}-1}{ q_{0}}}\leq\left(1+\frac{\mu_{\varepsilon}(B_{\rho}(x))}{\rho^{n}}\right)\] and the \(\varepsilon\)-Upper Density Bound (4.1), we get \[-\int_{s}^{r}C(\Lambda_{0})\rho^{-\frac{n}{q_{0}}}\left(\frac{ \mu_{\varepsilon}(B_{\rho}(x))}{\rho^{n}}\right)^{\frac{q_{0}-1}{q_{0}}} \geq-\int_{s}^{r}C(\Lambda_{0},\Omega^{\prime})\rho^{-\frac{n}{q _{0}}}\left(1+\frac{\mu_{\varepsilon}(B_{\rho}(x))}{\rho^{n}}\right)\] \[\geq-\int_{s}^{r}C(\Lambda_{0},\Omega^{\prime})\rho^{-\frac{n}{q _{0}}}\left(1+C(\Lambda_{0},\Omega^{\prime})+\frac{\phi(\varepsilon)}{\rho^{n }}\right)\] \[\geq-C(\Lambda_{0},\Omega^{\prime})\left(r^{1-\frac{n}{q_{0}}}-s^{ 1-\frac{n}{q_{0}}}\right)\] \[-C(\Lambda_{0},\Omega^{\prime})\phi(\varepsilon)\left(r^{1-n- \frac{n}{q_{0}}}-s^{1-n-\frac{n}{q_{0}}}\right).\] Thus, plug all the above estimates of terms in (4.7), we get \[\frac{\mu_{\varepsilon}(B_{r}(x))}{r^{n}}-\frac{\mu_{\varepsilon}(B_{ s}(x))}{s^{n}} \geq-C(\Lambda_{0},\Omega^{\prime})\left(r^{p_{3}\gamma}-s^{p_{3} \gamma}\right)-\frac{\phi(\varepsilon)}{p_{3}\gamma-n+1}\left(r^{p_{3}\gamma-n }-s^{p_{3}\gamma-n}\right)\] \[-\int_{s}^{r}\tilde{C}_{\beta}\varepsilon\rho^{-M\gamma-n-1} 
\left(\int_{B_{3\rho^{1-\beta}}(x)}|f_{\varepsilon}|^{2}\right)d\rho-\tilde{C} _{\beta}(r^{\gamma}-s^{\gamma})\] \[-C(\Lambda_{0},\Omega^{\prime})\left(r^{1-\frac{n}{q_{0}}}-s^{1- \frac{n}{q_{0}}}\right)-C(\Lambda_{0},\Omega^{\prime})\phi(\varepsilon)\left(r ^{1-n-\frac{n}{q_{0}}}-s^{1-n-\frac{n}{q_{0}}}\right). \tag{4.9}\] Next, we estimate the term \(\int_{s}^{r}\tilde{C}_{\beta}\varepsilon\rho^{-M\gamma-n-1}\left(\int_{B_{3 \rho^{1-\beta}}(x)}|f_{\varepsilon}|^{2}\right)d\rho\) in the following claim. **Claim**.: There exists \(x\in B_{\frac{r}{2}}\) such that \[\varepsilon^{-n}\mu_{\varepsilon}(B_{\varepsilon}(x))\geq 2\bar{\theta}_{0}> \bar{\theta}_{0}\geq\int_{\varepsilon}^{\frac{r}{4}}\tilde{C}_{\beta} \varepsilon\rho^{-M\gamma-n-1}\left(\int_{B_{3\rho^{1-\beta}}(x)}|f_{ \varepsilon}|^{2}\right)d\rho, \tag{4.10}\] for some universal constant \(\bar{\theta}_{0}>0\). Proof of Claim.: Consider a point \(x\in B_{\frac{r}{2}}\) with \(|u_{\varepsilon}(x)|\leq 1-\tau\), for some \(0<\tau<1\). We can assume \(\varepsilon^{-n}\mu_{\varepsilon}(B_{\varepsilon}(x))\leq 1\)(otherwise the conclusion automatically follows), and so \[\varepsilon^{-n-1}\int_{B_{\varepsilon}(x)}u_{\varepsilon}^{p}\leq\varepsilon ^{-n-1}\int_{B_{\varepsilon}(x)}c_{0}^{p}\leq c_{0}^{p}\omega_{n+1},\forall p>1.\] From Theorem 3.2 we have \[\varepsilon^{\frac{1}{2}}\|u\|_{C^{0,\frac{1}{2}}(B_{1-\varepsilon}(x))}\leq C,\] and thus \[|u_{\varepsilon}|\leq 1-\frac{\tau}{2},\quad\text{ in }B_{\frac{\tau^{2} \varepsilon}{4C^{2}}}(x).\] So since \(W(t)=(1-t^{2})^{2}=(1+t)^{2}(1-t)^{2}\) we find in \(B_{\frac{\tau^{2}\varepsilon}{4C^{2}}}(x)\) \[\begin{split} W(u_{\varepsilon})=(1+|u_{\varepsilon}|)^{2}(1-|u_{ \varepsilon}|)^{2}\geq\frac{\tau^{2}}{4}\end{split} \tag{4.11}\] Denote \[2\bar{\theta}_{0}:=\min\{1,C_{n}\tau^{2n+4}\},\] then for \(x\in B_{\frac{r}{2}}\cap\{|u_{\varepsilon}|\leq 1-\tau\}\) the first inequality in the conclusion of the claim holds. Applying the error estimates Proposition 3.5 with the choice \(\Omega^{\prime}=B_{\frac{r}{4}}\) and \(\Omega=B_{\frac{r}{2}}\), for sufficiently small \(\tau\) \[\mu_{\varepsilon}(B_{\frac{r}{4}}) =\mu_{\varepsilon}\left(B_{\frac{r}{4}}\cap\{|u_{\varepsilon}|<1- \tau\}\right)+\mu_{\varepsilon}\left(B_{\frac{r}{4}}\cap\{|u_{\varepsilon}| \geq 1-\tau\}\right)\] \[\leq C\mu_{\varepsilon}\left(B_{\frac{r}{4}}\cap\{|u_{\varepsilon }|<1-\tau\}\right)+C\varepsilon\int_{B_{\frac{r}{2}}}|f_{\varepsilon}|^{2}+C \varepsilon(\tau r^{n}+\tau^{2}r^{n-1})+Cr^{-2}\varepsilon.\] Notice by (3.55), the second term \(\varepsilon\int_{B_{r/2}}|f_{\varepsilon}|^{2}\leq\varepsilon^{2}C(\Lambda_{0 },E_{0})\). So the last three terms are at most of order \(O(\varepsilon)\). 
Hence, as \(0\in\operatorname{spt}\mu\), by passing to limit \(\varepsilon\to 0\) we have \[0<\mu(B_{\frac{r}{4}})\leq\liminf_{\varepsilon\to 0}\mu_{\varepsilon}(B_{ \frac{r}{4}})\leq\liminf_{\varepsilon\to 0}\mu_{\varepsilon}\left(B_{ \frac{r}{4}}\cap\{|u_{\varepsilon}|<1-\tau\}\right).\] And in the set \(\{|u_{\varepsilon}|\leq 1-\tau\}\), we get by Lemma 3.8 that \[\liminf_{\varepsilon\to 0}\varepsilon^{-1}\mathcal{L}^{n+1}(B_{ \frac{r}{2}}\cap\{|u_{\varepsilon}|\leq 1-\tau\})\] \[\geq\liminf_{\varepsilon\to 0}\varepsilon^{-1}\int_{B_{\frac{r}{2}} \cap\{|u_{\varepsilon}|\leq 1-\tau\}}\frac{W(u_{\varepsilon})}{\tau^{2}}\] \[=\liminf_{\varepsilon\to 0}\frac{1}{\tau^{2}}\left(\mu_{ \varepsilon}-\xi_{\varepsilon}\right)(B_{\frac{r}{2}}\cap\{|u_{\varepsilon}| \leq 1-\tau\})\] \[\geq\frac{1}{\tau^{2}}\liminf_{\varepsilon\to 0}\mu_{ \varepsilon}\left(B_{\frac{r}{4}}\cap\{|u_{\varepsilon}|<1-\tau\}\right)- \liminf_{\varepsilon\to 0}\frac{1}{\tau^{2}}\xi_{\varepsilon,+}(B_{\frac{r}{2}} \cap\{|u_{\varepsilon}|\leq 1-\tau\})\] \[\geq\frac{\mu(B_{\frac{r}{4}})}{\tau^{2}}>0. \tag{4.12}\] (This guarantees we can always choose such a point \(x\in B_{\frac{r}{2}}\) with \(|u_{\varepsilon}(x)|\leq 1-\tau\) if \(0\in\operatorname{spt}\mu\).) To complete the proof, we define for \(0<\rho<r_{1}\) the convolution \[\omega_{\varepsilon,\rho}(x):=\rho^{-n-1}\left(\chi_{B_{\rho}}*\frac{1}{ \varepsilon}|f_{\varepsilon}|^{2}\right)(x)=\rho^{-n-1}\int_{B_{\rho}(x)} \frac{1}{\varepsilon}|f_{\varepsilon}|^{2},\] with \[\|\omega_{\varepsilon,\rho}(x)\|_{L^{1}(B_{\frac{r_{1}}{2}})}\leq\int_{B_{ \frac{r_{1}}{2}+r_{1}}}\frac{1}{\varepsilon}|f_{\varepsilon}|^{2}\leq C( \Lambda_{0},E_{0})<\infty,\] by (3.55). Denote by \(\omega_{\varepsilon}(x):=\int_{0}^{r_{1}}\omega_{\varepsilon,\rho}(x)d\rho\), we have \[\|\omega_{\varepsilon}(x)\|_{L^{1}(B_{\frac{r_{0}}{2}})}\leq r_{1}C(\Lambda_{ 0},E_{0})<\infty.\] Now we can estimate the term on the right hand side in the claim, by a change of variables \(t=3\rho^{1-\beta}\). Here \(\beta:=\beta(r_{1})\) is chosen small enough such that \(3\left(\frac{r_{1}}{4}\right)^{1-\beta}\leq r_{1}\). We calculate, setting \(t=3\rho^{1-\beta}\) \[\int_{\varepsilon}^{\frac{r}{4}}\rho^{-M\gamma-n-1}\left(\int_{B_{3\rho^{1- \beta}}(x)}\frac{1}{\varepsilon}|f_{\varepsilon}|^{2}\right)d\rho=\int_{3^{ \varepsilon 1-\beta}}^{3\left(\frac{r}{4}\right)^{1-\beta}}\left(\frac{t}{3}\right)^{\frac{- M\gamma-n-1}{1-\beta}}\left(\int_{B_{t}(x)}\frac{1}{\varepsilon}|f_{ \varepsilon}|^{2}\right)d\left(\frac{t}{3}\right)^{\frac{1}{1-\beta}}\] \[\leq C_{\beta}\int_{3\epsilon^{1-\beta}}^{3\left(\frac{r}{4}\right)^ {1-\beta}}t^{\frac{-M\gamma-n-1+\beta}{1-\beta}}\left(\int_{B_{t}}\frac{1}{ \varepsilon}|f_{\varepsilon}|^{2}\right)dt\] \[\leq C_{\beta}\int_{3\epsilon^{1-\beta}}^{3\left(\frac{r}{4} \right)^{1-\beta}}t^{\frac{-M\gamma-n-1+\beta}{1-\beta}+(n+1)}\omega_{ \varepsilon,t}(x)dt\] We find \[\frac{-M\gamma-n-1+\beta}{1-\beta}+(n+1)=\frac{-M\gamma-n\beta}{1-\beta}<0\] so that \(t^{\frac{-M\gamma-n\beta}{1-\beta}}\) is a decreasing function. 
Hence we get the bound \[\int_{\varepsilon}^{\frac{r}{4}}\rho^{-M\gamma-n-1}\left(\int_{B_ {3\rho^{1-\beta}(x)}}\frac{1}{\varepsilon}|f_{\varepsilon}|^{2}\right)d\rho \leq C_{\beta}\int_{3\epsilon^{1-\beta}}^{3\left(\frac{r}{4} \right)^{1-\beta}}\left(3\varepsilon^{1-\beta}\right)^{\frac{-M\gamma-n\beta} {1-\beta}}\omega_{\varepsilon,t}(x)dt\] \[\leq C_{\beta}\varepsilon^{-M\gamma-n\beta}\int_{0}^{r_{1}} \omega_{\varepsilon,t}(x)dt\] \[\leq C_{\beta}\varepsilon^{-M\gamma-n\beta}\omega_{\varepsilon} (x). \tag{4.13}\] Choosing \(M\gamma<\frac{1}{2}\) and \(\beta\) sufficiently small so that \(M\gamma+n\beta<\frac{1}{2}\), and applying the weak \(L^{1}\) inequality for the distribution function and (4.13), we get for some \(\tilde{C}_{\beta}\) depending on \(\beta\) \[\mathcal{L}^{n+1}\left(B_{\frac{r}{2}}\cap\left\{\int_{ \varepsilon}^{\frac{r}{4}}\tilde{C}_{\beta}\varepsilon\rho^{-M\gamma-n-1} \left(\int_{B_{3\rho^{1-\beta}(x)}}|f_{\varepsilon}|^{2}\right)d\rho\geq\bar{ \theta}_{0}\right\}\right)\] \[\leq\mathcal{L}^{n+1}\left(B_{\frac{r}{2}}\cap\left\{C_{\beta} \varepsilon^{2}\varepsilon^{-M\gamma-n\beta}\omega_{\varepsilon}(x)\geq\bar{ \theta}_{0}\right\}\right)\] \[\leq C_{\beta}\varepsilon^{2-(M\gamma+n\beta)}\bar{\theta}_{0}^{- 1}\|\omega_{\varepsilon}\|_{L^{1}(B_{\frac{r}{2}})}\] \[\leq C_{\beta}\varepsilon^{2-(M\gamma+n\beta)}\bar{\theta}_{0}^{- 1}C(\Lambda_{0},E_{0})\] \[\to 0, \tag{4.14}\] as \(\varepsilon\to 0\). This guarantees we can always choose such a point \(x^{\prime}\in B_{\frac{r}{2}}\) with \[\left\{\int_{\varepsilon}^{\frac{r}{4}}\tilde{C}_{\beta}\varepsilon\rho^{-M \gamma-n-1}\left(\int_{B_{3\rho^{1-\beta}(x^{\prime})}}|f_{\varepsilon}|^{2} \right)d\rho\leq\bar{\theta}_{0}\right\}.\] We can thus combine (4.12) with (4.14) to find an \(x\in B_{\frac{r}{2}}\) so that the upper bound and lower bound in the claim holds. With this claim, we proceed with the proof of the density lower bound. For the \(\bar{\theta}_{0}\) obtained from the claim, we denote by \(s:=\sup\{0\leq\rho\leq\frac{r}{4}:\frac{\mu_{\varepsilon}(B_{\rho}(x))}{\rho^{ n}}\geq 2\bar{\theta}_{0}\}\). 
And it is obvious from (4.11) \[s\geq\varepsilon.\] By this choice of \(s\), we have \[\frac{\mu_{\varepsilon}(B_{s}(x))}{s^{n}} \geq 2\bar{\theta}_{0},\] \[\frac{\mu_{\varepsilon}(B_{\rho}(x))}{\rho^{n}} \leq 2\bar{\theta}_{0},\forall\rho\in\left[s,\frac{r}{4}\right].\] Substituting \(\frac{r}{4}\) for \(r\) in the integral form of the almost monotonicity formula (4.9), we get from (4.10) the following density lower bound \[2^{n}\left[\frac{\mu_{\varepsilon}(B_{\frac{r}{2}}(x))}{\left( \frac{r}{2}\right)^{n}}\right] \geq\frac{\mu_{\varepsilon}(B_{\frac{r}{2}}(x))}{\left(\frac{r} {4}\right)^{n}}\] \[\geq\frac{\mu_{\varepsilon}(B_{s}(x))}{s^{n}}-C(\Lambda_{0}, \Omega^{\prime})\left(\left(\frac{r}{4}\right)^{p_{3}\gamma}-s^{p_{3}\gamma} \right)-\frac{\phi(\varepsilon)}{p_{3}\gamma-n+1}\left(\left(\frac{r}{4} \right)^{p_{3}\gamma-n}-s^{p_{3}\gamma-n}\right)\] \[-\int_{s}^{r/4}\tilde{C}_{\beta}\varepsilon\rho^{-M\gamma-n-1} \left(\int_{B_{3\rho^{1-\beta}}(x)}|f_{\varepsilon}|^{2}\right)d\rho-\tilde{C }_{\beta}(\left(\frac{r}{4}\right)^{\gamma}-s^{\gamma})\] \[-C(\Lambda_{0},\Omega^{\prime})\left(\left(\frac{r}{4}\right)^{1 -\frac{n}{q_{0}}}-s^{1-\frac{n}{q_{0}}}\right)-C(\Lambda_{0},\Omega^{\prime} )\phi(\varepsilon)\left(\left(\frac{r}{4}\right)^{1-n-\frac{n}{q_{0}}}-s^{1-n -\frac{n}{q_{0}}}\right)\] \[\geq 2\overline{\theta}_{0}-C(\Lambda_{0},\Omega^{\prime})r^{ \gamma_{n}}-C(\Lambda_{0},\Omega^{\prime})\phi(\varepsilon)r^{-n-\frac{n}{q_ {0}}}-C(\Lambda_{0},\Omega^{\prime})\phi(\varepsilon)r^{p_{3}\gamma-n+1}\] \[-\overline{\theta}_{0}\] \[\geq\overline{\theta}_{0}-C(\Lambda_{0},\Omega^{\prime})r^{\gamma _{n}}-C(\Lambda_{0},\Omega^{\prime})\phi(\varepsilon)r^{-n-\frac{n}{q_{0}}}-C (\Lambda_{0},\Omega^{\prime})\phi(\varepsilon)r^{p_{3}\gamma-n+1},\] where \(\gamma_{n}:=\min\{p_{3}\gamma,\gamma,1-\frac{n}{q_{0}}\}>0\), and \(\phi(\varepsilon)\to\) as \(\varepsilon\to 0\) by Theorem 4.1. As \(B_{\frac{r}{2}}(x)\subseteq B_{r}(0)\) we let \(\varepsilon\to 0\) and get for some \(\gamma_{n}>0\) \[\frac{\mu(\overline{B_{r}})}{r^{n}}\geq\limsup_{\varepsilon\to 0}\frac{\mu_{ \varepsilon}(B_{r})}{r^{n}}\geq\limsup_{\varepsilon\to 0}\frac{\mu_{ \varepsilon}(B_{\frac{r}{2}}(x))}{r^{n}}\geq C_{n}\bar{\theta}_{0}-C_{n}r^{ \gamma_{n}}.\] Approximating \(r^{\prime}\nearrow r\) we get for \(0<r<r_{1}(\Omega^{\prime})\) \[\frac{\mu(B_{r}(0))}{r^{n}}\geq c_{0}\overline{\theta}_{0}\] and hence \[\theta_{*}^{n}(\mu)\geq\frac{\overline{\theta}}{\omega_{n}}\quad\text{$\mu$- a.e. in $\Omega$}.\] which completes the proof. A last thing we need before proving rectifiability of the limit measure is the vanishing of full discrepancy when taking limit \(\varepsilon\to 0\). **Proposition 4.4**.: \[|\xi_{\varepsilon}|\to 0\quad\&\quad|\xi|=0.\] Proof.: We first prove the lower \(n\)-dimensional density of the discrepancy measure vanishes. 
Namely \[\theta_{*}^{n}(|\xi|)=\liminf_{\rho\to 0}\frac{|\xi|(B_{\rho})}{\rho^{n}}=0.\] If not, there exists \(0<\rho_{0},\delta<1\) and \(B_{\rho_{0}}\subset\Omega\) such that \[\frac{|\xi|(B_{\rho}(x))}{\rho^{n}}\geq\delta,\quad\forall 0<\rho\leq\rho_{0}.\] Multiplying both sides of (4.2) by an integrating factor and integrating from \(r\) to \(\rho_{0}\) as in the proof of Theorem 4.1 we get \[C(\Lambda_{0},\Omega^{\prime})\left(\frac{\mu_{\varepsilon}(B_{\rho_{0}})}{ \rho_{0}^{n}}\right)-C(\Lambda_{0},\Omega^{\prime})\left(\frac{\mu_{ \varepsilon}(B_{r})}{r^{n}}\right)\geq-C(\Lambda_{0},\Omega^{\prime})\int_{r}^ {\rho_{0}}\frac{\xi_{\varepsilon}(B_{r_{0}})}{\rho^{n+1}}d\rho.\] Using Lemma 3.8\(\xi_{+}=0\) and Theorem 4.1, we have when passing to the limit \(\varepsilon\to 0\) \[\tilde{C}(\Lambda_{0},\Omega^{\prime})\geq C(\Lambda_{0},\Omega^ {\prime})\int_{r}^{\rho_{0}}\frac{\xi_{-}(B_{\rho})}{\rho^{n+1}}d\rho =C(\Lambda_{0},\Omega^{\prime})\int_{r}^{\rho_{0}}\frac{\xi_{-}( B_{\rho})+\xi_{+}(B_{\rho})}{\rho^{n+1}}d\rho\] \[=C(\Lambda_{0},\Omega^{\prime})\int_{r}^{\rho_{0}}\frac{|\xi|(B_ {\rho})}{\rho^{n+1}}d\rho\] \[\geq C(\Lambda_{0},\Omega^{\prime})\int_{r}^{\rho_{0}}\frac{ \delta}{\rho}d\rho\] \[=C(\Lambda_{0},\Omega^{\prime})\delta\ln(\frac{\rho_{0}}{r}).\] This gives a contradiction by letting \(r\to 0\). By the density lower bound Theorem 4.3 and differentiation theorem for measures, we have \[D_{\mu}|\xi|(x)=\liminf_{\rho\to 0}\frac{|\xi|(B_{\rho}(x))}{\mu(B_{ \rho}(x))} \leq\frac{\liminf_{\rho\to 0}\frac{|\xi|(B_{\rho}(x))}{\rho^{n}}}{ \limsup_{\rho\to 0}\frac{\mu(B_{\rho}(x))}{\rho^{n}}}\] \[\leq\frac{\theta_{*}^{n}(|\xi|,x)\omega_{n}}{\bar{\theta}}=0\] and this shows \[|\xi|=D_{\mu}|\xi|\cdot\mu=0.\] **Proposition 4.5**.: _We choose a Borel measurable function \(\nu_{\varepsilon}:\Omega\to\partial B_{1}(0)\) extending \(\frac{\nabla u_{\varepsilon}}{|\nabla u_{\varepsilon}|}\) on \(\nabla u_{\varepsilon}\neq 0\) and consider the varifold \(V_{\varepsilon}=\mu_{\varepsilon}\otimes\nu_{\varepsilon}\) that is_ \[\int_{\{|\nabla u|\neq 0\}}\phi\left(x,I-\frac{\nabla u(x)}{|\nabla u(x)|} \otimes\frac{\nabla u(x)}{|\nabla u(x)|}\right)d\mu_{i}(x),\quad\phi\in C_{c} (G_{n}(\Omega)) \tag{4.15}\] _The first variation is given by_ \[\delta V_{\varepsilon}(\eta)=-\int f_{\varepsilon}\langle\nabla u_{ \varepsilon},\eta\rangle dx+\int\nabla\eta\left(\frac{\nabla u_{\varepsilon} }{|\nabla u_{\varepsilon}|},\frac{\nabla u_{\varepsilon}}{|\nabla u_{ \varepsilon}|}\right)d\xi_{\varepsilon},\quad\forall\eta\in C_{c}^{1}( \Omega\times\mathbb{R}^{n+1}). 
\tag{4.16}\] Proof.: By equation (2.1), we have \[\delta V_{\varepsilon}(\eta) =\int_{\Omega\times G(n+1,n)}\operatorname{div}_{S}\eta(x)dV_{ \varepsilon}(x,S)\] \[=\int_{\Omega}(\operatorname{div}\eta-\nabla\eta(\nu_{\varepsilon },\nu_{\varepsilon}))d\mu_{\varepsilon}\] \[=\int_{\Omega}(\operatorname{div}\eta-\nabla\eta(\nu_{\varepsilon },\nu_{\varepsilon}))\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+ \frac{W(u_{\varepsilon})}{\varepsilon}\right)d\mathcal{L}^{n+1}.\] \[T_{ij} =\varepsilon\frac{\left|\nabla u_{\varepsilon}\right|^{2}}{2} \delta_{ij}-\varepsilon\nabla_{i}u_{\varepsilon}\nabla_{j}u_{\varepsilon}+W \left(u_{\varepsilon}\right)\delta_{ij}\] \[\nabla_{i}T_{ij} =\varepsilon\nabla_{i}\nabla_{k}u_{\varepsilon}\nabla_{k}u_{ \varepsilon}\delta_{ij}-\varepsilon\Delta u_{\varepsilon}\nabla_{j}u_{ \varepsilon}-\varepsilon\nabla_{i}u_{\varepsilon}\nabla_{i}\nabla_{k}u_{\varepsilon}\] \[+W^{\prime}\left(u_{\varepsilon}\right)\nabla_{i}u_{\varepsilon }\delta_{ij}\] \[=\left(-\varepsilon\Delta u_{\varepsilon}+W^{\prime}\left(u_{ \varepsilon}\right)\right)\nabla_{j}u_{\varepsilon}\] Now \[T_{ij}\nabla_{i}\eta_{j} =\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+W\left( u_{\varepsilon}\right)\right)\operatorname{div}\eta-\varepsilon\nabla\eta \left(\nabla u_{\varepsilon},\nabla u_{\varepsilon}\right)\] \[=\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+W\left( u_{\varepsilon}\right)\right)\operatorname{div}\eta-\nabla\eta(\nu_{\varepsilon },\nu_{\varepsilon})\varepsilon|\nabla u_{\varepsilon}|^{2}.\] Integrating by parts, we get \[\int_{\Omega}\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}} {2}+W\left(u_{\varepsilon}\right)\right)\operatorname{div}\eta-\nabla\eta( \nu_{\varepsilon},\nu_{\varepsilon})\varepsilon|\nabla u_{\varepsilon}|^{2} =-\int_{\Omega}\nabla_{i}T_{ij}\eta_{j}\] \[=\int_{\Omega}\left(-\varepsilon\Delta u_{\varepsilon}+W^{\prime }\left(u_{\varepsilon}\right)\right)\left\langle\nabla u_{\varepsilon},\eta\right\rangle\] Hence inserting this into our expression for the first variation we get \[\delta V_{\varepsilon}(\eta)=-\int_{\Omega}\left(-\varepsilon\Delta u_{ \varepsilon}+\frac{W^{\prime}(u_{\varepsilon})}{\varepsilon}\right)\left\langle \nabla u_{\varepsilon},\eta\right\rangle d\mathcal{L}^{n+1}+\int_{\Omega} \nabla\eta(\nu_{\varepsilon},\nu_{\varepsilon})d\xi_{\varepsilon}\] Combining Theorem 4.1, Theorem 4.3 and Proposition 4.4, we obtain **Theorem 4.6**.: _After passing to a subsequence, the associated varifolds \(V_{\varepsilon}\to V\) where \(V\) is a rectifiable \(n\)-varifold with the weak mean curvature in \(L^{q_{0}}_{loc}(\mu_{V})\)._ Proof.: We first compute the first variation of the associated varifolds \(V_{\varepsilon}\) to the energy measure \(\mu_{\varepsilon}(\text{c.f.}\)[10, Proposition 4.10], [17, Equation 4.3]). 
For any \(\eta\in C^{1}_{0}(\Omega;\mathbb{R}^{n+1})\) using Proposition 4.5 and Proposition 4.4 \[\begin{split}|(\delta V)(\eta)|&=|\lim_{\varepsilon \to 0}(\delta V_{\varepsilon})(\eta)|\\ &=\left|\lim_{\varepsilon\to 0}\left(-\int f_{\varepsilon} \langle\nabla u_{\varepsilon},\eta\rangle dx+\int\nabla\eta\left(\frac{\nabla u _{\varepsilon}}{|\nabla u_{\varepsilon}|},\frac{\nabla u_{\varepsilon}}{| \nabla u_{\varepsilon}|}\right)d\xi_{\varepsilon}\right)\right|\\ &\leq\lim_{\varepsilon\to 0}\int|f_{\varepsilon}||\nabla u_{ \varepsilon}||\eta|dx+\lim_{\varepsilon\to 0}\int|\nabla\eta|d|\xi_{ \varepsilon}|\\ &\leq\lim_{\varepsilon\to 0}\int\left|\frac{f_{\varepsilon}}{ \varepsilon|\nabla u|}\right||\eta|\varepsilon|\nabla u_{\varepsilon}|^{2}dx \\ &\leq\lim_{\varepsilon\to 0}\left(\int\left|\frac{f_{\varepsilon}}{ \varepsilon|\nabla u_{\varepsilon}|}\right|^{q_{0}}\varepsilon|\nabla u_{ \varepsilon}|^{2}\right)^{\frac{1}{q_{0}}}\left(\int|\eta|^{\frac{q_{0}}{q_{0 }-1}}\varepsilon|\nabla u_{\varepsilon}|^{2}\right)^{\frac{q_{0}-1}{q_{0}}} \\ &\leq\Lambda_{0}^{\frac{1}{q_{0}}}\|\eta\|_{L^{\frac{q_{0}}{q_{0}- 1}}(\mu_{V})}\left(\leq C(\Lambda_{0},E_{0})|\eta|\right).\end{split} \tag{4.17}\] So we see the limit varifold has locally bounded first variation, combining with the density lower bound Theorem 4.3 we conclude the limit varifold is rectifiable by Allard's rectifiability theorem. Moreover, the above calculation shows \(\delta V\) is a bounded linear functional on \(L^{\frac{q_{0}}{q_{0}-1}}_{loc}(\mu_{V})\) and thus itself is in \(L^{q_{0}}_{loc}(\mu_{V})\). ## 5. Integrality In this section, we prove the integrality of the limit varifold. **Theorem 5.1**.: _Let \(\mu\) be defined by (4.15). Then \(\frac{1}{\alpha}\mu\) is an integral \(n\)-varifold where \(\alpha=\int_{-\infty}^{\infty}(\tanh^{\prime}x)^{2}dx\) is the total energy of the heteroclinic \(1\)-\(d\) solution._ From the previous section, we have already shown the limiting varifold \(V\) is rectifiable. And thus for a.e. \(x_{0}\in\operatorname{spt}\mu_{V}\), we have for any sequence \(\rho_{i}\to 0\) \[\mathcal{D}_{\rho_{i},\#}\circ\mathcal{T}_{x_{0},\#}(\mu_{V})\to\theta_{x_{0}} P_{0},\quad\text{ for some }P_{0}\in G(n+1,n),\] where \(\mathcal{D}_{\rho_{i}}(x)=\rho_{i}^{-1}x\) and \(\mathcal{T}_{x_{0}}(x)=x-x_{0}\) represent dilations and translations in \(\mathbb{R}^{n+1}\) and \(\theta_{x_{0}}\) is the density of \(\mu_{V}\) at \(x_{0}\). By choosing a sequence of rescaling factors \(\rho_{i}\) such that \[\tilde{\varepsilon}_{i}:=\frac{\varepsilon_{i}}{\rho_{i}}\to 0, \tag{5.1}\] the new sequence \(\tilde{u}_{\tilde{\varepsilon}_{i}}(x):=u_{\varepsilon_{i}}(\rho_{i}x+x_{0}),\tilde{f}_{\tilde{\varepsilon}_{i}}(x):=\rho_{i}\tilde{f}_{i}(\rho_{i}x+x_{ 0})\) satisfies \[\tilde{\varepsilon}_{i}\Delta\tilde{u}_{\tilde{\varepsilon}_{i}}-\frac{W^{ \prime}(\tilde{u}_{\tilde{\varepsilon}_{i}})}{\tilde{\varepsilon}_{i}}=\tilde {f}_{\tilde{\varepsilon}_{i}}\] and the associated varifold \(\tilde{V}_{i}\) of this new sequence \(\tilde{u}_{\tilde{\varepsilon}_{i}}\) converges to \(\theta_{x_{0}}P_{0}\). 
By (3.55), we also have \[\frac{1}{\tilde{\varepsilon}_{i}}\int_{B_{\rho}}f_{\tilde{\varepsilon}_{i}}^{ 2}\leq C\left(\int_{B_{\rho}}\left(\frac{f_{\tilde{\varepsilon}_{i}}}{\tilde{ \varepsilon}_{i}|\nabla u_{\tilde{\varepsilon}_{i}}|}\right)^{q_{0}}\tilde{ \varepsilon}_{i}|\nabla u_{\tilde{\varepsilon}_{i}}|^{2}\right)^{\frac{2}{q_{0}}}\] \[=C\left(\rho_{i}^{q_{0}+1-(n+1)}\int_{B_{\rho_{i}\rho}}\left(\frac{f_ {\varepsilon_{i}}}{\varepsilon_{i}|\nabla u_{\varepsilon_{i}}|}\right)^{q_{0}} \varepsilon_{i}|\nabla u_{\varepsilon_{i}}|^{2}\right)^{\frac{2}{q_{0}}}\] \[\leq C\rho_{i}^{\frac{2(q_{0}-n)}{q_{0}}}\to 0,\] as \(q_{0}>n\). Furthermore, by choosing more carefully so that \(\rho_{i}:=\tilde{\varepsilon}_{i}^{\frac{(n-1)q_{0}}{2(q_{0}-n)}}=\tilde{ \varepsilon}_{i}^{\frac{1}{1+\frac{2(q_{0}-n)}{(n-1)q_{0}}}}\), we have \[\frac{1}{\tilde{\varepsilon}_{i}}\int_{B_{\rho}}f_{\tilde{\varepsilon}_{i}}^{ 2}\leq\tilde{\varepsilon}_{i}^{n-1},\quad\text{ for }\rho>\tilde{ \varepsilon}_{i}\] and thus \[\frac{1}{\tilde{\varepsilon}_{i}}\int_{B_{\rho}}f_{\tilde{\varepsilon}_{i}}^{ 2}\leq\rho^{n-1}. \tag{5.2}\] Therefore we have reduced Theorem 5.1 to the following proposition **Proposition 5.2**.: _If the limit varifold is \(\theta_{0}\mathcal{H}^{n}\lfloor P_{0}\) for some \(P_{0}\in G(n+1,n)\) and \(\theta_{0}>0\), then \(\alpha^{-1}\theta_{0}\) is a nonnegative integer, where \(\alpha=\int_{-\infty}^{\infty}(\tanh^{\prime}x)^{2}dx\) is the total energy of the heteroclinic \(1\)-d solution._ In order to prove Proposition 5.2, we need two lemmas. The first Lemma 5.5 is a multi-sheet monotonicity formula (c.f. [1, Theorem 6.2] for the version for integral varifolds, which is used to prove the integrality of the limits of sequences of integral varifolds). The second Lemma 5.7 says at small scales, the energy of each layers are almost integer multiple of the \(1\)-d solution. We first gather some apriori bounds on energy ratio for \(\mu_{\varepsilon}\). **Proposition 5.3**.: _Let \(\delta=\rho^{\gamma},\varepsilon\leq\rho\leq r\) for \(0<\gamma<\frac{1}{M}\leq\frac{1}{2}\), we have \(\delta^{-M}\varepsilon\leq\rho^{1-M\gamma}\leq 1\). Furthermore we choose \(r:=d(B_{2\rho}(x),\partial B_{3\rho^{1-\beta}}(x))\geq\rho^{1-\beta}\). 
Then_ \[\begin{split} Cr^{-n}\mu_{\varepsilon}(B_{r}(x))& \geq s^{-n}\mu_{\varepsilon}(B_{s}(x))-C\int_{s}^{r}\rho^{p_{3} \gamma-n-1}\mu_{\varepsilon}(B_{2\rho}(x))d\rho\\ &-C_{\beta}\varepsilon\int_{s}^{r}\rho^{-M\gamma-n-1}\left(\int _{B_{3\rho^{1-\beta}}(x)}|f_{\varepsilon}|^{2}\right)d\rho\\ &-\tilde{C}_{\beta}\left(1+\int_{\{|u_{\varepsilon}|\geq 1\} \cap B_{3\rho^{1-\beta}}(x)}W^{\prime}(u_{\varepsilon})^{2}\right)\int_{s}^{r} \rho^{\gamma-1}d\rho-C.\end{split} \tag{5.3}\] Proof.: Substitute (4.6) into the equation (4.5) in the proof of Theorem 4.1, we have for \(\varepsilon\leq s\leq\rho\leq r\leq 1\) \[C(\Lambda_{0},q_{0})\left(\frac{\mu_{\varepsilon}(B_{r})}{r^{n}}\right) \geq\left(\frac{\mu_{\varepsilon}(B_{s})}{s^{n}}\right)-C(\Lambda _{0},q_{0})-C\int_{s}^{r}\frac{\xi_{+}(B_{\rho})}{\rho^{n+1}}\] \[\geq\left(\frac{\mu_{\varepsilon}(B_{s})}{s^{n}}\right)-C( \Lambda_{0},q_{0})-C\int_{s}^{r}\rho^{p_{3}\gamma-n-1}\mu_{\varepsilon}(B_{2 \rho}(x))d\rho \tag{5.4}\] \[-C_{\beta}\varepsilon\int_{s}^{r}\rho^{-M\gamma-n-1}\left(\int_{B_{3 \rho^{1-\beta}}(x)}|f_{\varepsilon}|^{2}\right)d\rho \tag{5.5}\] \[-\int_{s}^{r}\tilde{C}_{\beta}\varepsilon\rho^{\gamma-2}\left(1+ \int_{\{|u_{\varepsilon}|\geq 1\}\cap B_{3\rho^{1-\beta}}(x)}W^{\prime}(u_{ \varepsilon})^{2}\right)d\rho.\] Noticing \(\varepsilon\leq\rho\) in the last term, we then conclude the desired energy ratio bound. As a corollary, we have **Corollary 5.4**.: _If in addition to the conditions in Proposition 5.3, we assume_ \[\frac{1}{\varepsilon}\int_{B_{\rho}}f_{\varepsilon}^{2}\leq\rho^{n-1},\quad \text{ for }\rho\geq\varepsilon, \tag{5.6}\] _and_ \[\beta\in\left(0,\frac{1-M\gamma}{2(n-1)}\right),\] _then the following upper bound for the energy ratio for \(\mu_{\varepsilon}\) holds_ \[\frac{\mu_{\varepsilon}(B_{s}(x))}{\rho^{n}}\leq C\frac{\mu_{\varepsilon}(B_{ r}(x))}{r^{n}}+C(\Lambda_{0},E_{0},q_{0},n), \tag{5.7}\] _for \(\varepsilon\leq s\leq r\)._ Proof.: We have \[p_{3}\gamma-1,-M\gamma+\beta(n-1),\gamma-1>-1.\] Thus by Proposition 5.3 and \(\varepsilon\leq\rho\), we have \[C\left(\frac{\mu_{\varepsilon}(B_{r}(x))}{r^{n}}\right) \geq\left(\frac{\mu_{\varepsilon}(B_{s}(x))}{s^{n}}\right)-C\int _{s}^{r}\rho^{p_{3}\gamma-1}\left(\frac{\mu_{\varepsilon}(B_{2\rho}(x))}{ \rho^{n}}\right)d\rho\] \[-C_{\beta}\varepsilon^{2}\int_{s}^{r}\rho^{-M\gamma-2-\beta(n-1) }\left(\frac{\int_{B_{3\rho^{1-\beta}}(x)}|f_{\varepsilon}|^{2}}{\varepsilon \rho^{(1-\beta)(n-1)}}\right)d\rho\] \[-\tilde{C}_{\beta}\left(1+\int_{\{|u_{\varepsilon}|\geq 1\} \cap\Omega}W^{\prime}(u_{\varepsilon})^{2}\right)\int_{s}^{r}\rho^{\gamma-1}d \rho-C\] \[\geq\left(\frac{\mu_{\varepsilon}(B_{s}(x))}{s^{n}}\right)-C\int _{s}^{r}\rho^{p_{3}\gamma-1}\left(\frac{\mu_{\varepsilon}(B_{2\rho}(x))}{ \rho^{n}}\right)d\rho-C\] The conclusion then follows by substituting in (5.6) and applying Gronwall's inequality to the above differential inequality. **Lemma 5.5**.: _For any \(N\in\mathbb{N}\), \(\delta>0\) small, \(\Lambda>0\) large and \(\beta\in(0,\frac{1-M\gamma}{2(n-1)})\) where \(M,\gamma\) are from Proposition 4.2, there exists \(\omega>0\) such that the following holds: Suppose \(u_{\varepsilon}\) satisfies (1.1) and the conditions(1)-(3) in Theorem 1.1 are satisfied, then for any finite _set \(X\subset\{0^{n}\}\times\mathbb{R}\subset\mathbb{R}^{n+1}\), and the number of elements in \(X\) is no more than \(N\). 
If moreover for some \(0<\varepsilon\leq d\leq R\leq\omega\), the followings are satisfied_ \[\mathrm{diam}(X)<\omega R, \tag{5.9}\] \[|x-y|>3d,\quad\text{ for }x,y\in X\text{ and }x\neq y, \tag{5.8}\] \[|\xi_{\varepsilon}|(B_{\rho}(x))+\int_{B_{\rho}(x)}\varepsilon|\nabla u_{ \varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}\leq\omega\rho^{n},\quad \text{ for }x\in X\text{ and }d\leq\rho\leq R, \tag{5.10}\] \[\frac{1}{\varepsilon}\int_{B_{\rho}(x)}|f_{\varepsilon}|^{2}\leq\Lambda\rho^{n -1},\quad\text{ for }3d^{1-\beta}\leq\rho\leq 3R^{1-\beta}. \tag{5.11}\] _Then we have_ \[\sum_{x\in X}d^{-n}\mu_{\varepsilon}(B_{d}(x))\leq(1+\delta)R^{-n}\mu_{ \varepsilon}(\cup_{x\in X}B_{R}(x))+\delta. \tag{5.12}\] The Lemma is proved by applying inductively the following sheets-separation proposition and choosing \(\gamma\) large enough, \(\omega\) small enough. For simplicity of notations, we introduce a notation for the following sheets-separation term, which will be used in the rest of this section. \[\mathcal{S}_{y,x}=:(y_{n+1}-x_{n+1})\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon}\right)- \varepsilon\frac{\partial u_{\varepsilon}}{\partial x_{n+1}}\langle y-x,\nabla u _{\varepsilon}\rangle, \tag{5.13}\] for any pair of points \(x,y\in\mathbb{R}^{n+1}\). **Proposition 5.6**.: _Suppose the conditions in Theorem 1.1 are satisfied and let \(X\subset\{0^{n}\}\times[t_{1}+d,t_{2}-d]\subset\mathbb{R}^{n+1}\) consist of no more than \(N\in\mathbb{N}\) elements and \(\cup_{x\in X}B_{3R^{1-\beta}}\subset\Omega\subset\mathbb{R}^{n+1}\). Furthermore suppose for \(-\infty\leq t_{1}<t_{2}\leq\infty,0<\varepsilon\leq d\leq R\leq\frac{1}{2}, \beta\in(0,\frac{1-M\gamma}{2(n-1)})\) the following are satisfied:_ \[(\Gamma+1)\mathrm{diam}(X)<R,\quad\text{ for some }\Gamma\geq 1, \tag{5.15}\] \[|x-y|>3d,\quad\text{ for }x\neq y\in X, \tag{5.14}\] \[\int_{d}^{R}\rho^{-n-1}\left|\int_{B_{\rho}(x)\cap\{y_{n+1}=t_{j}\}}\mathcal{S }_{y,x}d\mathcal{H}_{y}^{n}\right|d\rho\leq\omega \tag{5.16}\] _for any \(x\in X\), \(j=1,2\) and for some \(\omega>0\),_ \[|\xi_{\varepsilon}|(B_{\rho}(x))+\int_{B_{\rho}(x)}\varepsilon| \nabla u_{\varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}\leq\omega\rho^{ n},\quad\text{ for }d\leq\rho\leq R \tag{5.18}\] \[\frac{1}{\varepsilon}\int_{B_{\rho}(x)}|f_{\varepsilon}|^{2}\leq \Lambda\rho^{n-1},\quad\text{ for }3d^{1-\beta}\leq\rho\leq 3R^{1-\beta},\] (5.19) \[\frac{\mu_{\varepsilon}(B_{2R}(x))}{R^{n}}\leq\Lambda,\quad\forall x \in X\quad\text{(this is implied by Corollary \ref{eq:3} as $R\geq\varepsilon$)}. \tag{5.17}\] _Then by denoting \(S_{t}^{t^{\prime}}:=\{t\leq y_{n+1}\leq t^{\prime}\}\), we have_ \[d^{-n}\mu_{\varepsilon}(B_{d}(x))\leq R^{-n}\mu_{\varepsilon}(B_{R}(x)\cap S_{t_ {1}}^{t_{2}})+CR^{\gamma_{0}}+2\omega, \tag{5.20}\] _for some \(\gamma_{0}>0\) and for all \(x\in X\). Furthermore, if \(X\) consists of more than one point, then there exists \(t_{3}\in(t_{1},t_{2})\) such that \(\forall x\in X\)_ \[|x_{n+1}-t_{3}|>d, \tag{5.21}\] \[\int_{d}^{\tilde{R}}\rho^{-n-1}\int_{B_{\rho}(x)\cap\{y_{n+1}=t_{3}\}}| \mathcal{S}_{y,x}|\,d\mathcal{H}_{y}^{n}d\rho\leq 3N\Gamma\omega, \tag{5.22}\] _where \(\tilde{R}:=\Gamma\mathrm{diam}(X)\) and \(\mathcal{S}_{y,x}\) as defined in (5.13). 
Moreover, both \(X\cap X_{t_{1}}^{t_{3}}\) and \(X\cap X_{t_{3}}^{t_{2}}\) are non-empty and_ \[\tilde{R}^{-n}\left(\mu_{\varepsilon}(\cup_{x\in X\cap X_{t_{1}}^ {t_{3}}}B_{\tilde{R}}(x)\cap S_{t_{1}}^{t_{3}})+\mu_{\varepsilon}(\cup_{x\in X \cap X_{t_{3}}^{t_{2}}}B_{\tilde{R}}(x)\cap S_{t_{3}}^{t_{2}})\right)\leq\] \[\left(1+\frac{1}{\Gamma}\right)^{n}R^{-n}\mu_{\varepsilon}\left( \cup_{x\in X}B_{R}(x)\cap S_{t_{1}}^{t_{2}}\right)+CR^{\gamma_{0}}+2\omega.\] Proof.: First we choose \(\phi\) to be a non-increasing function satisfying \[\phi_{\delta,\rho}=\begin{cases}1,&\text{ on }[0,\rho]\\ 0,&\text{ on }[\rho+\delta,\infty),\end{cases}\] and \(\chi_{\delta}\) satisfying \[\chi_{\delta}\equiv\begin{cases}1,&\text{ on }[t_{1}+\delta,t_{2}-\delta],\\ 0,&\text{ on }(-\infty,t_{1}]\cup[t_{2},\infty),\end{cases}\] with \(\chi_{\delta}^{\prime}\geq 0\) on \([t_{1},t_{1}+\delta]\) and \(\chi_{\delta}^{\prime}\leq 0\) on \([t_{2}-\delta,t_{2}]\). Then we multiply (1.1) on both sides by \(\langle\nabla u,\eta\rangle\), where \(\eta\in C_{0}^{1}(\Omega,\mathbb{R}^{n+1})\) is defined by \(\eta(y):=(y-x)\phi_{\delta,\rho}(|y-x|)\chi_{\delta}(y_{n+1})\). Using integration by parts, we have \[\int f_{\varepsilon}\langle y-x,\nabla u_{\varepsilon}\rangle \phi_{\delta,\rho}(|y-x|)\chi_{\delta}(y_{n+1})\] \[=\int f_{\varepsilon}\langle\nabla u,\eta\rangle\] \[=\int\left(\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}+ \frac{W(u_{\varepsilon})}{\varepsilon}\right)\mathrm{div}\eta-\varepsilon \nabla u\otimes\nabla u:\nabla\eta\] \[=\int\left(|y-x|\phi_{\delta,\rho}^{\prime}\chi_{\delta}+(n+1) \phi_{\delta,\rho}\chi_{\delta}+(y_{n+1}-x_{n+1})\phi_{\delta,\rho}\chi_{ \delta}^{\prime}\right)d\mu_{\varepsilon}\] \[-\int\varepsilon\frac{\phi_{\delta,\rho}^{\prime}\chi_{\delta}} {|y-x|}\langle y-x,\nabla u_{\varepsilon}\rangle^{2}-\int\varepsilon|\nabla u _{\varepsilon}|^{2}\phi_{\delta,\rho}\chi_{\delta}-\int\varepsilon\frac{ \partial u}{\partial x_{n+1}}\langle y-x,\nabla u_{\varepsilon}\rangle\phi_{ \delta,\rho}\chi_{\delta}^{\prime}.\] Letting \(\delta\to 0\), we have \[\int_{B_{\rho}(x)\cap S^{t_{2}}_{t_{1}}}f_{\varepsilon}\langle y-x, \nabla u_{\varepsilon}\rangle\] \[=-\int_{\partial B_{\rho}\cap S^{t_{2}}_{t_{1}}}\rho d\mu_{ \varepsilon}+(n+1)\int_{B_{\rho}\cap S^{t_{2}}_{t_{1}}}d\mu_{\varepsilon}\] \[+\int_{B_{\rho}\cap\{y_{n+1}=t_{2}\}}(y_{n+1}-x_{n+1})d\mu_{ \varepsilon}-\int_{B_{\rho}\cap\{y_{n+1}=t_{1}\}}(y_{n+1}-x_{n+1})d\mu_{\varepsilon}\] \[+\int_{\partial B_{\rho}\cap S^{t_{2}}_{t_{1}}}\varepsilon\rho^{- 1}\langle y-x,\nabla u_{\varepsilon}\rangle^{2}-\int_{B_{\rho}\cap S^{t_{2}}_{ t_{1}}}\varepsilon|\nabla u_{\varepsilon}|^{2}\] \[+\int_{B_{\rho}\cap\{y_{n+1}=t_{2}\}}\varepsilon\frac{\partial u }{\partial x_{n+1}}\langle y-x,\nabla u_{\varepsilon}\rangle-\int_{B_{\rho} \cap\{y_{n+1}=t_{1}\}}\varepsilon\frac{\partial u}{\partial x_{n+1}}\langle y -x,\nabla u_{\varepsilon}\rangle.\] Dividing both sides by \(\rho^{n+1}\) and rearranging gives the following weighted monotonicity formula \[\frac{d}{d\rho}\left(\rho^{-n}\mu_{\varepsilon}(B_{\rho}(x)\cap S ^{t_{2}}_{t_{1}})\right)\] \[=-n\rho^{-n-1}\mu_{\varepsilon}(B_{\rho}(x)\cap S^{t_{2}}_{t_{1} })+\rho^{-n}\mu_{\varepsilon}(\partial B_{\rho}\cap S^{t_{2}}_{t_{1}})\] \[=-(n+1)\rho^{-n-1}\int_{B_{\rho}(x)\cap S^{t_{2}}_{t_{1}}}d\mu_{ \varepsilon}+\rho^{-n}\int_{\partial B_{\rho}(x)\cap S^{t_{2}}_{t_{1}}}d\mu_{\varepsilon}\] \[+\rho^{-n-1}\int_{B_{\rho}(x)\cap S^{t_{2}}_{t_{1}}}\varepsilon| \nabla u_{\varepsilon}|^{2}-\rho^{-n-1}\int_{B_{\rho}(x)\cap 
S^{t_{2}}_{t_{1}} }d\xi_{\varepsilon}\] \[=\rho^{-n-1}\int_{B_{\rho}\cap\{y_{n+1}=t_{2}\}}(y_{n+1}-x_{n+1} )d\mu_{\varepsilon}-\rho^{-n-1}\int_{B_{\rho}\cap\{y_{n+1}=t_{1}\}}(y_{n+1}-x_ {n+1})d\mu_{\varepsilon}\] \[+\rho^{-n-1}\int_{B_{\rho}\cap\{y_{n+1}=t_{2}\}}\varepsilon\frac{ \partial u}{\partial x_{n+1}}\langle y-x,\nabla u_{\varepsilon}\rangle-\rho^{ -n-1}\int_{B_{\rho}\cap\{y_{n+1}=t_{1}\}}\varepsilon\frac{\partial u}{\partial x _{n+1}}\langle y-x,\nabla u_{\varepsilon}\rangle\] \[-\rho^{-n-1}\int_{B_{\rho}(x)\cap S^{t_{2}}_{t_{1}}}d\xi_{ \varepsilon}-\rho^{-n-1}\int_{B_{\rho}(x)\cap S^{t_{2}}_{t_{1}}}f_{\varepsilon }\langle y-x,\nabla u_{\varepsilon}\rangle+\rho^{-n-1}\int_{\partial B_{\rho} \cap S^{t_{2}}_{t_{1}}}\varepsilon\rho^{-1}\langle y-x,\nabla u_{\varepsilon} \rangle^{2}. \tag{5.23}\] By (5.16) condition, the sum of norms of the first fours terms are bounded by \(2\omega\). And by (4.6) and (5.18), the discrepancy term is bounded by \[\rho^{-n-1}\int_{B_{\rho}(x)\cap S^{t_{2}}_{t_{1}}}d\xi_{\varepsilon,+}\] \[\leq C\rho^{p_{3}\gamma-n-1}\mu_{\varepsilon}(B_{2\rho}(x))+ \tilde{C}_{k}\varepsilon\rho^{-M\gamma-n-1}\int_{B_{3\rho^{1-}\beta}(x)}|f_{ \varepsilon}|^{2}+\tilde{C}_{\beta}\varepsilon\rho^{\gamma-2}\left(1+\int_{\{|u _{\varepsilon}|\geq 1\}\cap\Omega}W^{\prime}(u_{\varepsilon})^{2}\right)\] \[\leq\rho\left|\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}- \frac{W(u_{\varepsilon})}{\varepsilon}\right|+\varepsilon|\nabla u_{\varepsilon}| ^{2}|\langle y-x,e_{n+1}\rangle-\langle y-x,\nu_{\varepsilon}\rangle\langle e _{n+1},\nu_{\varepsilon}\rangle|\] \[\leq\rho\left|\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}- \frac{W(u_{\varepsilon})}{\varepsilon}\right|+\varepsilon|\nabla u_{ \varepsilon}|^{2}|y-x|\sqrt{1-\nu_{\varepsilon,n+1}^{2}}\] \[\leq\rho\left|\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}- \frac{W(u_{\varepsilon})}{\varepsilon}\right|+\rho\varepsilon|\nabla u_{ \varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}.\] And thus by the condition (5.17), we have \[\int_{\tilde{t}_{1}}^{\tilde{t}_{2}}\int_{d}^{\tilde{R}}\rho^{-n- 1}\int_{B_{\rho}(x)\cap\{y_{n+1}=t\}}|\mathcal{S}_{y,x}|\,d\mathcal{H}_{\{y_{ n+1}=t\}}^{n}d\rho dt\] \[=\int_{d}^{\tilde{R}}\rho^{-n-1}\int_{B_{\rho}(x)\cap\{\tilde{t }_{1}^{2}\}}\left|(y_{n+1}-x_{n+1})\left(\frac{\varepsilon|\nabla u_{ \varepsilon}|^{2}}{2}+\frac{W(u_{\varepsilon})}{\varepsilon}\right)- \varepsilon\frac{\partial u_{\varepsilon}}{\partial x_{n+1}}\langle y-x,\nabla u _{\varepsilon}\rangle\right|dyd\rho\] \[\leq\int_{d}^{\tilde{R}}\rho^{-n}\int_{B_{\rho}(x)\cap S_{\tilde{t}_{1 }}^{\tilde{L}_{2}}}\left|\frac{\varepsilon|\nabla u_{\varepsilon}|^{2}}{2}-\frac {W(u_{\varepsilon})}{\varepsilon}\right|+\varepsilon|\nabla u_{\varepsilon}|^{2 }\sqrt{1-\nu_{\varepsilon,n+1}^{2}}dyd\rho\] \[\leq\int_{d}^{\tilde{R}}\rho^{-n}\omega\rho^{n}d\rho\] \[\leq\omega\tilde{R}.\] So there must exist \(t_{3}\in[\tilde{t}_{1},\tilde{t}_{2}]\) such that \[\int_{d}^{\tilde{R}}\rho^{-n-1}\int_{B_{\rho}(x)\cap\{y_{n+1}=t_{ 3}\}}|\mathcal{S}_{y,x}|\,d\mathcal{H}_{\{y_{n+1}=t\}}^{n}d\rho\] \[\leq\frac{\omega\tilde{R}}{(\tilde{t}_{2}-\tilde{t}_{1})}\leq \frac{3N\omega\tilde{R}}{\mathrm{diam}(X)}=3N\Gamma\omega.\] By the choice of \(t_{3}\in[\tilde{t}_{1},\tilde{t}_{2}]\), we automatically have \(|x_{n+1}-t_{3}|>d\) for all \(x\in X\). 
Finally, by denoting \[X_{+}:=\{x\in X,x_{n}\geq t_{3}\},X_{-}:=\{x\in X,x_{n}<t_{3}\},\] we have \(X_{\pm}\neq\emptyset\) and \[\left(\cup_{x\in X_{-}}B_{\tilde{R}}(x)\cap S_{t_{1}}^{t_{3}}\right)\cup\left( \cup_{x\in X_{+}}B_{\tilde{R}}(x)\cap S_{t_{3}}^{t_{2}}\right)\subset B_{ \tilde{R}+\mathrm{diam}(X)}(x_{0})\cap S_{t_{1}}^{t_{2}},\] for any \(x_{0}\in X\). By (5.20)(with \(\tilde{R}+\mathrm{diam}(X)\) in place of \(d\)), we then have \[\tilde{R}^{-n}\left(\mu_{\varepsilon}\left(\cup_{x\in X_{-}}B_{ \tilde{R}}(x)\cap S_{t_{1}}^{t_{3}}\right)+\mu_{\varepsilon}\left(\cup_{x\in X _{+}}B_{\tilde{R}}(x)\cap S_{t_{3}}^{t_{2}}\right)\right)\] \[\leq\tilde{R}^{-n}\mu_{\varepsilon}(B_{\tilde{R}+\mathrm{diam}(X )}(x_{0})\cap S_{t_{1}}^{t_{2}})\] \[=\left(1+\frac{1}{\Gamma}\right)^{n}(\tilde{R}+\mathrm{diam}(X) )^{-n}\mu_{\varepsilon}(B_{\tilde{R}+\mathrm{diam}(X)}(x_{0})\cap S_{t_{1}}^ {t_{2}})\] \[\leq\left(1+\frac{1}{\Gamma}\right)^{n}\left(R^{-n}\mu_{ \varepsilon}(B_{R}(x_{0})\cap S_{t_{1}}^{t_{2}})+CR^{\gamma_{0}}+2\omega\right)\] \[\leq\left(1+\frac{1}{\Gamma}\right)^{n}\left(R^{-n}\mu_{ \varepsilon}(\cup_{x\in X}B_{R}(x)\cap S_{t_{1}}^{t_{2}})\right)+CR^{\gamma_{ 0}}+2\omega.\] The next Lemma taken from [10] shows the energy ratio at small scales are very close to the 1-d solution. **Lemma 5.7** (Lemma 5.5 of [10]).: _Suppose the conditions in Theorem 1.1 are satisfied. For any \(\tau\in(0,1)\),\(\delta>0\) small, \(\Lambda>0\) large, there exists \(\omega>0\) sufficiently small and \(L>1\) sufficiently large such that the following holds: Suppose \(u_{\varepsilon}\) satisfies condition of Theorem 1.1 in \(B_{4L\varepsilon}(0)\subset\mathbb{R}^{n+1}\) and_ \[|u_{\varepsilon}(0)|\leq 1-\tau \tag{5.24}\] \[|\xi_{\varepsilon}(B_{4L\varepsilon}(0))|+\int_{B_{4L\varepsilon}(0)} \varepsilon|\nabla u_{\varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}\leq \omega(4L\varepsilon)^{n} \tag{5.26}\] \[\frac{1}{\varepsilon}\int_{B_{4L\varepsilon}(0)}|f_{\varepsilon}| ^{2}\leq\Lambda(4L\varepsilon)^{n-2}\] (5.27) \[\mu_{\varepsilon}(B_{4L\varepsilon}(0))\leq\Lambda(4L\varepsilon) ^{n}. \tag{5.25}\] _Then by denoting \((0,t)\in\mathbb{R}^{n+1}\) to be the point with first \(n\)-th coordinate functions being \(0\) and the \((n+1)\)-th coordinate functions being \(t\), we have_ \[|u(0,t)|\geq 1-\frac{\tau}{2},\quad\text{ for all }L\varepsilon\leq|t|\leq 3L\varepsilon \tag{5.29}\] \[\left|\frac{1}{\omega_{n}(L\varepsilon)^{n}}\mu_{\varepsilon}(B_ {L\varepsilon}(0))-\alpha\right|\leq\delta\] (5.30) \[\left|\int_{-L\varepsilon}^{L\varepsilon}W(u_{\varepsilon}(0,t) )dt-\frac{\alpha}{2}\right|\leq\delta. \tag{5.28}\] Proof.: First we consider the \(1\)-dimensional solution \[q_{0}^{\prime}(t) =\sqrt{W(q_{0}(t))}\quad\forall t\in\mathbb{R},\] \[q_{0}(0) =u(0).\] We will use \(q_{0}\) to choose \(L\) depending on \(\tau,\delta>0\). On \(\mathbb{R}^{n+1}\) we write \(q(x)=q_{0}(x_{n+1})\) and choose \(L>1\) large enough depending on \(\tau,\delta\) so that \[|q(0,t)| \geq 1-\frac{\tau}{3},\quad\text{ for all }L\leq|t|\leq 3L,\] \[\left|\frac{1}{\omega_{n}L^{n-1}}\int_{B_{L}(0)}\left(\frac{| \nabla q|^{2}}{2}+W(q)\right)-\alpha\right| \leq\frac{\delta}{2}\] \[\left|\int_{-L}^{L}W(q(0,t))dt-\frac{\alpha}{2}\right| \leq\frac{\delta}{2} \tag{5.31}\] whenever \(|q(0)|\leq 1-\tau\). The function \(u\) satisfies the Allen-Cahn equation \[-\Delta u+W^{\prime}(u)=f,\] and by our condition (2) in Theorem 1.1 we get \(\|u_{\varepsilon}\|_{L^{\infty}(B_{1/2}(x))}\leq c_{0}\). 
Hence by Calderon-Zygmund estimates we get uniform \(W^{2,\frac{n+1}{2}+\delta_{0}}\) estimates on \(B_{3L}(0)\) of the form \[\|u\|_{W^{2,\frac{n+1}{2}+\delta_{0}}(B_{3L}(0))}\leq C(\Lambda,L). \tag{5.32}\] Suppose, for contradiction, that there is no \(\omega>0\) for which (5.28), (5.29) and (5.30) hold; then there exist \(\omega_{j}\to 0\) and \(u_{j},f_{j}\) satisfying the above estimates but not satisfying (5.28), (5.29) and (5.30). By (5.32), after passing to a suitable subsequence, \(u_{j}\rightharpoonup u\) weakly in \(W^{2,\frac{n+1}{2}+\delta_{0}}(B_{3L}(0))\) and \(f_{j}\rightharpoonup f\) weakly in \(L^{\frac{n+1}{2}+\delta_{0}}(B_{3L}(0))\). By the Sobolev embedding we have \(W^{2,\frac{n+1}{2}+\delta_{0}}(B_{3L}(0))\hookrightarrow C^{0}\) for \(\delta_{0}>0\), and hence \(u_{j}\to u\) uniformly in \(C^{0}(B_{3L}(0))\). **Claim**.: The functions \(u_{j}\to u=q\) strongly in \(W^{1,2}(B_{3L}(0))\). Proof.: Writing \(\nabla=(\nabla^{\prime},\partial_{n+1})\), we get by (5.25) \[\int_{B_{3L}(0)}\left|\frac{|\nabla u|^{2}}{2}-W(u)\right|\leq\liminf_{j\to\infty}\int_{B_{3L}(0)}\left|\frac{|\nabla u_{j}|^{2}}{2}-W(u_{j})\right|\leq\liminf_{j\to\infty}|\xi_{j}|(B_{3L}(0))=0\] and \[\int_{B_{3L}(0)}|\nabla^{\prime}u|\leq\liminf_{j\to\infty}\int_{B_{3L}(0)}|\nabla^{\prime}u_{j}|\leq C(L)\liminf_{j\to\infty}\left(\int_{B_{3L}(0)}|\nabla u_{j}|^{2}\sqrt{1-\nu_{j,n+1}^{2}}\right)^{1/2}=0,\] where \(\nu_{j}=\frac{\nabla u_{j}}{|\nabla u_{j}|}\) for \(\nabla u_{j}\neq 0\). Therefore \(|\nabla u|^{2}=2W(u)\) and \(u(y,t)=u_{0}(t)\) for some \(u_{0}\in W^{2,\frac{n+1}{2}+\delta_{0}}((-L,L))\hookrightarrow C^{1,\alpha}((-L,L))\) with \(|u_{0}^{\prime}|=\sqrt{2W(u_{0})}\). As \(|u_{0}(0)|\leq 1-\tau\) by uniform convergence, we see \(|u_{0}|<1\) and \(|u_{0}^{\prime}|>0\). After a reflection of the form \((y,x_{n+1})\mapsto(y,-x_{n+1})\) if necessary, we may assume \(u_{0}^{\prime}>0\) and hence \(u_{0}^{\prime}=\sqrt{2W(u_{0})}\). This gives us \(u_{0}=q_{0}\) and \(u=q\), which shows \(u_{j}\to u=q\) strongly in \(W^{1,2}(B_{3L}(0))\). From this claim and (5.31) we conclude that \(u_{j}\) satisfies (5.28), (5.29) and (5.30) for sufficiently large \(j\), which is a contradiction. Now we prove Proposition 5.2. Proof of Proposition 5.2.: Without loss of generality, we assume \(P_{0}=\{x\in\mathbb{R}^{n+1}:x_{n+1}=0\}\) and let \(\pi:\mathbb{R}^{n+1}\to P_{0}\) denote the associated orthogonal projection. Furthermore, we know that \[V_{\varepsilon}=\mu_{\varepsilon}\otimes\nu_{\varepsilon}\to V,\] that \(V\) is rectifiable with \[\mu_{V}=\mu,\qquad V=\theta_{0}\mathcal{H}^{n}\llcorner P_{0}\otimes\delta_{P_{0}},\] and that \[\lim_{\varepsilon\to 0}\int_{B_{4}(0)}\varepsilon|\nabla u_{\varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}=0. \tag{5.33}\] Let \(N\in\mathbb{N}\) be the smallest integer with \[N>\frac{\theta_{0}}{\alpha}\] and let \(0<\delta\leq 1\) be small. By Proposition 3.5 and the \(L^{\infty}\) bound condition of \(u_{\varepsilon}\) in Theorem 1.1, we can fix \(\tau>0\) such that for all sufficiently small \(\varepsilon=\varepsilon(\delta)>0\) we have \[\int_{\{|u_{\varepsilon}|\geq 1-\tau\}\cap B_{4}(0)}\frac{W^{\prime}(u_{\varepsilon})^{2}}{\varepsilon}+\frac{W(u_{\varepsilon})}{\varepsilon}\leq\delta.\] We have, by Lemma 3.8, \[\mu_{\varepsilon}(\{|u_{\varepsilon}|\geq 1-\tau\}\cap B_{4}(0))\leq|\xi_{\varepsilon}(B_{4}(0))|+2\int_{\{|u_{\varepsilon}|\geq 1-\tau\}\cap B_{4}(0)}\frac{W(u_{\varepsilon})}{\varepsilon}\leq 3\delta. \tag{5.34}\] We want to apply Lemma 5.5 and Lemma 5.7.
We choose \(\omega>0\) sufficiently small, with \(\omega=\omega(N,\delta,\tau,C)\leq 1\), and \(L=L(\delta,\tau)>1\), as provided by Lemma 5.5, Proposition 5.6 and Lemma 5.7, where \(C\) is the constant such that \[\mu_{\varepsilon}(\Omega)+\frac{1}{\varepsilon}\int_{\Omega}|f_{\varepsilon}|^{2}\leq C,\qquad\Omega=B_{4}(0).\] We define \(A_{\varepsilon}\) to be the set where the hypotheses of our propositions hold, that is, \[A_{\varepsilon}=\left\{x\in B_{1}(0)\left|\begin{array}{l}|u_{\varepsilon}(x)|\leq 1-\tau,\\ \forall\varepsilon\leq\rho\leq 3:|\xi_{\varepsilon}(B_{\rho}(x))|+\int_{B_{\rho}(x)}\varepsilon|\nabla u_{\varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}\leq\omega\rho^{n},\\ \forall\varepsilon\leq\rho\leq 3:\frac{1}{\varepsilon}\int_{B_{\rho}(x)}|f_{\varepsilon}|^{2}\leq\omega\rho^{n-1}.\end{array}\right.\right\}\] We now show that the complement of the set \(A_{\varepsilon}\) has small measure. By Besicovitch's covering theorem, we find a countable sub-covering \(\cup_{i}B_{\rho_{i}}(x_{i})\), \(\rho_{i}\in[\varepsilon,3]\), of \(\{|u_{\varepsilon}|\leq 1-\tau\}\setminus A_{\varepsilon}\) such that every point \(x\in\{|u_{\varepsilon}|\leq 1-\tau\}\setminus A_{\varepsilon}\) belongs to at most \(\mathbf{B}_{n}\) balls in the covering, where \(\mathbf{B}_{n}\) depends only on the dimension \(n\). For each \(i\), either \[|\xi_{\varepsilon}(B_{\rho_{i}}(x_{i}))|+\int_{B_{\rho_{i}}(x_{i})}\varepsilon|\nabla u_{\varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}\geq\omega\rho_{i}^{n},\] or \[\frac{1}{\varepsilon}\int_{B_{\rho_{i}}(x_{i})}|f_{\varepsilon}|^{2}\geq\omega\rho_{i}^{n-1}\geq C\omega\rho_{i}^{n}.\] On the other hand, by (5.2), for sufficiently small \(\varepsilon\) we have \[\frac{1}{\varepsilon}\int_{B_{\rho}(x_{i})}|f_{\varepsilon}|^{2}\leq\omega\rho^{n-1},\quad\forall\rho\in[\varepsilon,3],\] so the first alternative must hold for every \(i\). By (5.7), for each \(i\), we obtain \[\mu_{\varepsilon}\left(\overline{B_{\rho_{i}}(x_{i})}\right)\leq C\rho_{i}^{n}.\] Since the overlap in the Besicovitch covering is finite, and in view of (5.34), we get \[\mu_{\varepsilon}\left(B_{1}(0)\setminus A_{\varepsilon}\right)\leq 3\delta+\sum_{i}C\rho_{i}^{n}\leq 3\delta+C\omega^{-1}\left(|\xi_{\varepsilon}|(B_{4}(0))+\int_{B_{4}(0)}\varepsilon|\nabla u_{\varepsilon}|^{2}\sqrt{1-\nu_{\varepsilon,n+1}^{2}}+\frac{1}{\varepsilon}\int_{B_{4}(0)}|f_{\varepsilon}|^{2}\right)\leq 4\delta, \tag{5.35}\] for \(\varepsilon\) sufficiently small. First, by Lemma 5.5 and Lemma 5.7, for every \(x\in A_{\varepsilon}\) and all \(L\varepsilon\leq R\leq\omega\) we have \[\alpha\omega_{n}-\delta\leq(1+\delta)R^{-n}\mu_{\varepsilon}(B_{R}(x))+\delta.\] By the reduction to the conditions in Proposition 5.2, we obtain \[\mu_{\varepsilon}\left(\Omega\setminus\{|x_{n+1}|\leq\zeta\}\right)\to 0,\quad\text{ for any fixed }\zeta>0.\] Thus, for sufficiently small \(\varepsilon>0\), we get \[A_{\varepsilon}\subset\{|x_{n+1}|\leq\zeta_{\varepsilon}\},\quad\text{ with }\zeta_{\varepsilon}\to 0\text{ as }\varepsilon\to 0.\] For any \(y\in B^{n}_{1}(0)\subset\mathbb{R}^{n}\), consider a maximal subset \[X=\{y\}\times\{t_{1}<...<t_{K}\}\subset A_{\varepsilon}\cap\pi^{-1}(y)\] with \(|x-x^{\prime}|>3L\varepsilon\) if \(x\neq x^{\prime}\in X\), where \(\pi\) denotes the projection to \(\{x_{n+1}=0\}\).
If \(K\geq N\), we apply Lemma 5.5 with \(d=3L\varepsilon\), \(R=\omega\) and Lemma 5.7 to get \[N\alpha\omega_{n}-N\delta\leq(1+\delta)R^{-n}\mu_{\varepsilon}\left(B_{R}(y)\right)+\delta\leq(1+\delta)R^{-n}\mu_{\varepsilon}\left(B_{R+\zeta_{\varepsilon}}(y)\right)+\delta.\] As \[\limsup_{\varepsilon\to 0}(1+\delta)R^{-n}\mu_{\varepsilon}\left(B_{R+\zeta_{\varepsilon}}(y)\right)\leq R^{-n}\mu(\overline{B_{R}(y)})+C\delta=\theta_{0}\omega_{n}+C\delta,\] and \(\delta>0\) is arbitrarily small, we have \[N\alpha\leq\theta_{0},\] which is a contradiction to our definition of \(N\). So we obtain \[K\leq N-1.\] Since \(X\) is maximal, we get \[A_{\varepsilon}\cap\pi^{-1}(y)\subset\{y\}\times\cup_{k=1}^{K}(t_{k}-3L\varepsilon,t_{k}+3L\varepsilon).\] By (5.28), \[A_{\varepsilon}\cap\pi^{-1}(y)\cap\left(\{y\}\times\cup_{k=1}^{K}(t_{k}-3L\varepsilon,t_{k}+3L\varepsilon)\right)=A_{\varepsilon}\cap\pi^{-1}(y)\cap\left(\{y\}\times\cup_{k=1}^{K}(t_{k}-L\varepsilon,t_{k}+L\varepsilon)\right).\] So \[A_{\varepsilon}\cap\pi^{-1}(y)\subset\{y\}\times\cup_{k=1}^{K}(t_{k}-L\varepsilon,t_{k}+L\varepsilon)\] and by (5.30), \[\int_{t_{k}-L\varepsilon}^{t_{k}+L\varepsilon}\frac{W(u_{\varepsilon}(y,t))}{\varepsilon}dt\leq\frac{\alpha}{2}+\delta,\quad\forall k=1,...,K.\] Hence summing over \(k\) gives \[\int_{A_{\varepsilon}\cap\pi^{-1}(y)}\frac{W(u_{\varepsilon})}{\varepsilon}d\mathcal{H}^{1}\leq\frac{(N-1)\alpha}{2}+(N-1)\delta,\] and integrating over \(B^{n}_{1}(0)\subset\mathbb{R}^{n}\) we obtain \[\int_{B^{n+1}_{1}(0)\cap A_{\varepsilon}}\frac{1}{\varepsilon}W(u_{\varepsilon})d\mathcal{H}^{n+1}\leq\int_{B^{n}_{1}(0)}\int_{A_{\varepsilon}\cap\pi^{-1}(y)}\frac{W(u_{\varepsilon})}{\varepsilon}d\mathcal{H}^{1}dy\leq\frac{(N-1)\alpha\omega_{n}}{2}+C\delta.\] Recalling (5.35), we get \[\mu_{\varepsilon}\left(B_{1}(0)\right)\leq 2\int_{B_{1}^{n+1}(0)\cap A_{\varepsilon}}\frac{1}{\varepsilon}W(u_{\varepsilon})d\mathcal{H}^{n+1}+\left|\xi_{\varepsilon}\left(B_{1}(0)\right)\right|+\mu_{\varepsilon}\left(B_{1}(0)\setminus A_{\varepsilon}\right)\leq(N-1)\alpha\omega_{n}+C\delta.\] On the other hand, since \(\lim_{\varepsilon\to 0}\mu_{\varepsilon}\left(B_{1}(0)\right)=\theta_{0}\omega_{n}\) and \(\delta>0\) is arbitrarily small, we obtain \[\theta_{0}\leq(N-1)\alpha.\] And since by definition \(N\) is the smallest integer such that \(\theta_{0}<N\alpha\), we have \[\theta_{0}=(N-1)\alpha.\]
For varifolds associated with the Allen-Cahn phase transition problem in $\mathbb R^{n+1}$ (or in an $(n+1)$-dimensional Riemannian manifold), this paper considers an integral $L^{q_0}$ bound on the Allen-Cahn mean curvature (the first variation of the Allen-Cahn energy). It is shown that, in the phase-field limit, the Dirichlet and potential parts of the energy equidistribute, and that the associated varifolds converge to an integer rectifiable varifold with mean curvature in $L^{q_0}$. The latter result is a diffuse-interface version of Allard's convergence theorem for integer rectifiable varifolds.
2309.03994
The decomposed photon anomalous dimension in QCD and the $\{β\}$-expanded representations for the Adler function
This work is devoted to the study of the $\{\beta\}$-expansion of the perturbative expressions for the $e^+e^-$ annihilation Adler function $D(Q^2)$ and for the related renormalization group functions, namely for the photon vacuum polarization function and its anomalous dimension $\gamma(\alpha_s)$ in QCD at the $\mathcal{O}(\alpha^4_s)$ order. We emphasize that $\gamma(\alpha_s)$ is not a conformal-invariant contribution to $D(Q^2)$ and, therefore, for a consistent analysis it is necessary to decompose its higher-order PT coefficients in powers of the $\beta$-function coefficients in the same way as for the Adler function. The arguments in favor of this statement are given. The comparison of the $\overline{MS}$ and PMC/BLM approximants is demonstrated. Theoretical and phenomenologically related consequences of this comparison are briefly commented on.
A. L. Kataev, V. S. Molokoedov
2023-09-07T19:55:57
http://arxiv.org/abs/2309.03994v2
Decomposed photon anomalous dimension in QCD and the \(\{\beta\}\)-expanded representations for the Adler function ###### Abstract This work is devoted to the study of the \(\{\beta\}\) expansion of the perturbative expressions for the \(e^{+}e^{-}\) annihilation Adler function \(D(Q^{2})\) and for the related renormalization group functions, namely, for the photon vacuum polarization function and its anomalous dimension \(\gamma(\alpha_{s})\) in QCD at the \(\mathcal{O}(\alpha_{s}^{4})\) order. We emphasize that \(\gamma(\alpha_{s})\) is not a conformal-invariant contribution to \(D(Q^{2})\); therefore, for a consistent analysis, it is necessary to decompose its higher-order perturbation theory coefficients in powers of the \(\beta\)-function coefficients in the same way as for the Adler function. The arguments in favor of this statement are given. The comparison of the \(\overline{\text{MS}}\) and principle of maximum conformality/Brodsky-Lepage-Mackenzie scale approximants is demonstrated. We comment briefly on theoretical and phenomenologically related consequences of this comparison. DOI: 10.1103/PhysRevD.108.096027 ## I Introduction Among the various QCD representations for the renormalization-group (RG) invariant quantities studied nowadays are the renormalon-motivated large-\(\beta_{0}\) approximation, discussed in the reviews [1, 2, 3], and the \(\{\beta\}\)-expansion approach, originally proposed in Ref. [4]. It prescribes decomposing the analytically evaluated massless coefficients of the corresponding perturbation theory (PT) QCD series, defined in the gauge-invariant renormalization schemes [such as the minimal-subtraction (MS) like schemes], into the sum of the scale-invariant terms and of the concrete monomials, which contain fixed combinations of the RG \(\beta\)-function coefficients. In general, at a fixed order of PT, this representation allows one to go beyond the large-\(\beta_{0}\) approximation. Indeed, within the \(\{\beta\}\)-expansion approach it is possible to trace extra contributions to the higher-order PT coefficients from the terms generated by the subleading renormalon chains. The consequences of the Borel resummation of the contributions proportional to the subleading powers of \(\beta_{0}\) are studied in the recent work of Ref. [5]. The application of the \(\{\beta\}\) decomposition is rather useful in the theoretical analysis of the effects of the conformal symmetry and of its violation in the product of the PT expressions for the \(e^{+}e^{-}\) annihilation Adler function and for the Bjorken polarized sum rule, whose flavor-nonsinglet coefficient functions enter the Crewther relation [6] and its QCD generalizations, written down in the form of single-power or multiple-power \(\beta\)-function representations (see Refs. [7, 8, 9] and Ref. [10], respectively). For the studies of the analog of the first of the aforementioned two generalizations of the Crewther relation in the case of gauge-dependent renormalization schemes, see Refs. [11, 12]. The \(\{\beta\}\) decomposition [4] and the representations based on the large-\(n_{f}\) and the large-\(\beta_{0}\) expansions were applied in a number of formulations of various extensions of the Brodsky-Lepage-Mackenzie (BLM) scale-fixing prescription [13] to higher orders of PT (see [14, 15, 16, 17] and [18, 19, 20], respectively). Currently, the most popular higher-order BLM extension is the principle of maximum conformality (PMC), which was proposed in Ref.
[21].\({}^{1}\) The results of its application essentially depend on the manner of constructing the \(\{\beta\}\)-decomposed coefficients of the specific PT series (compare the structure of the expressions presented in Refs. [4, 16]). There are different ways to determine the analytical structure of the \(\{\beta\}\)-expanded coefficients for the RG-invariant quantities in QCD and QCD-related models. 1. It is possible to apply the RG-inspired \(\mathcal{R}_{\delta}\) procedure of Ref. [23]. Its definition is rather close to the skeleton-motivated approach proposed in Ref. [24] for determining the energy evolution of the couplings in the analytical PT approach, developed, e.g., in Refs. [25, 26, 27]. 2. Another way of fixing the \(\{\beta\}\)-expansion representation is the multiple \(\beta\)-function decomposition, suggested in Ref. [28] and developed later on in Refs. [29, 30]. It relies considerably on the analogous expansion, introduced in Ref. [10] for the conformal symmetry breaking (CSB) term of the generalized Crewther relation. It is interesting, but not yet completely understood, that at the three-loop level the application of this approach to the PT series for the static potential [30] reproduces all but a few coefficients of the \(\{\beta\}\)-expanded representation fixed through the \(\mathcal{R}_{\delta}\) procedure (taking into account possible misprints in Ref. [31]; see the discussion in [30]). 3. There is the approach based on introducing extra degrees of freedom in the gauge model of strong interactions, namely, the Majorana multiplet of massless gluinos. In this scenario, the number of gluinos \(n_{\tilde{g}}\) appears in addition to the number of the quark flavors \(n_{f}\) and serves as an auxiliary parameter for fixing the terms of the \(\{\beta\}\) expansion. This approach was originally proposed in Ref. [4] while considering the three-loop expression for the Adler function in this effective QCD-like model. It was applied later in Refs. [10, 32] at the same order of PT for the case of the Bjorken polarized sum rule.\({}^{2}\) The applicability of this approach to the Adler function and the Bjorken polarized sum rule at the four-loop level was analyzed in Ref. [33]. Footnote 2: It should be emphasized that, unlike Ref. [4], where the subsequent BLM procedure was used, in [32] the ideas of the PMC were already realized. 4. In the effective QCD-type model with arbitrary numbers of fermion representations, the decomposed expressions for the Adler function and for the Bjorken polarized sum rule at the same PT level were constructed and considered in Refs. [34, 35]. The above-mentioned studies were based on the results of the analytical calculations of the related RG \(\beta\) function, obtained in Ref. [36], and on the expressions given in Ref. [8] for the two important physical quantities mentioned above, analytically evaluated in this QCD-type model at the four-loop level. Note that at the next-to-next-to-leading level the expressions of Ref. [34] yield results identical to the ones of Ref. [10], obtained in another QCD-type model, namely, QCD supplemented by the multiplet of massless gluinos.
Apart from the ambiguities in constructing the \(\{\beta\}\)-expanded coefficients of the RG-invariant functions, there is still no consensus on how to apply the PMC approach to the physically important quantities, namely, to the \(e^{+}e^{-}\) annihilation Adler function [and to the associated \(R_{e^{+}e^{-}}(s)\) and \(R_{\tau}\) ratios] and to the Bjorken polarized sum rule (whose PT expression is closely related to the one of the Gross-Llewellyn-Smith sum rule of the deep-inelastic \(\nu N\) scattering [37, 38]). Indeed, arguments were presented in Refs. [29, 32, 39] that the original PMC analysis, performed in Refs. [23, 31], for instance, for the Adler function, has definite drawbacks related to the absence of the \(\{\beta\}\) expansion for the anomalous dimension of the photon vacuum polarization function; however, in further PMC-based works, including the most recent ones (see, e.g., [40, 41, 42, 43]), these arguments were not accepted. Note that the renormalization group method, elaborated in the classical works of Refs. [44, 45, 46], was applied in Ref. [47], where its relation to the principle of minimal sensitivity (PMS), formulated in the work of Ref. [48], was clarified. The theoretical basis of this approach was questioned several times (see, e.g., Refs. [43, 49, 50]). We leave aside here the published critiques of the PMS presented in these works, as well as definite arguments in favor of its defense (see, e.g., [51]). In our work, we concentrate on a more detailed analysis of what is, from our point of view, the consistent application of the PMC approach, primarily to the Adler function \(D(Q^{2})\). After presenting technical details in Sec. II, in Sec. III we demonstrate the theoretical consequences of taking into account the \(\{\beta\}\)-expanded representation for the QCD expression for the anomalous dimension \(\gamma(\alpha_{s})\) of the photon vacuum polarization function \(\Pi(Q^{2})\), which we will call the photon anomalous dimension. In Sec. IV, we provide extra theoretical arguments in favor of the necessity of applying the \(\{\beta\}\) decomposition to \(\gamma(\alpha_{s})\). We clarify that only after this step is it possible to understand where the important parts of the renormalon contributions to the Adler function, studied in Refs. [1, 2, 52], were hidden in the previous PMC-related considerations. In Sec. V, we construct new, properly defined PMC-type approximations for the Adler function, based on the truncated next-to-next-to-leading order (N\({}^{2}\)LO) and next-to-next-to-next-to-leading order (N\({}^{3}\)LO) \(\overline{\rm MS}\)-scheme massless PT QCD expressions. In part, a qualitative phenomenological comparison with the analogous \(\overline{\rm MS}\)-scheme approximations is presented. The explicit solutions of the RG equations for the functions \(\Pi(Q^{2})\) and \(D(Q^{2})\) are given in Appendix A. In Appendix B, the coefficients of the expansion of the Adler function in powers of the \(\beta\)-function coefficients, with and without taking into account the application of the \(\{\beta\}\) decomposition to \(\gamma(a_{s})\), are presented. ## II Preliminaries Let us start from the consideration of the \(e^{+}e^{-}\) annihilation Adler function.
It is defined in the Euclidean region as \[D(L,a_{s})=-\frac{d\Pi(L,a_{s})}{d\ln Q^{2}}=Q^{2}\int_{0}^{\infty}ds\frac{R_{e^{+}e^{-}}(l,a_{s})}{(s+Q^{2})^{2}}, \tag{1}\] where \(a_{s}(\mu^{2})=a_{s}=\alpha_{s}/\pi\), \(\alpha_{s}\) is the \(\overline{\rm MS}\)-scheme strong coupling constant, \(\mu\) is the renormalization scale, \(L=\ln(\mu^{2}/Q^{2})\) and \(l=\ln(\mu^{2}/s)\), respectively, \(Q^{2}=-q^{2}>0\) is the Euclidean kinematic variable, and \(s=q^{2}>0\) is the timelike Minkowskian variable. The spectral function \(R_{e^{+}e^{-}}(l,a_{s})\) is the theoretical expression for the electron-positron annihilation \(R\) ratio. It is proportional to the experimentally measured total cross section \(\sigma(e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow{\rm hadrons})\). \(\Pi(L,a_{s})\) is the renormalized QCD expression for the photon vacuum polarization function, which enters in the two-point correlator \(\Pi_{\mu\nu}(q)\) of the electromagnetic quark vector currents \(j^{\mu}(x)\) as \[\Pi_{\mu\nu}(q)=i\int d^{4}x\,e^{iqx}\langle 0|T\{j_{\mu}(x)j_{\nu}(0)\}|0\rangle=\frac{1}{12\pi^{2}}(q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\Pi(q^{2}). \tag{2}\] Here \(j^{\mu}(x)=\sum_{f}Q_{f}\bar{\psi}_{f}(x)\gamma^{\mu}\psi_{f}(x)\), and \(Q_{f}\) stands for the electric charge of the quark field \(\psi_{f}(x)\) with flavor \(f\). Note that, since the vector current is conserved both in the renormalized and bare cases, the expression for the tensor \(\Pi_{\mu\nu}(q)\) is transverse in both cases as well. The detailed theoretical studies, conducted in Refs. [53, 54] and used later on in Refs. [55, 56], lead to the following renormalization prescription for the photon vacuum polarization function in QCD: \[\Pi(L,a_{s})=Z(a_{s})+\Pi_{B}(L,a_{sB}). \tag{3}\] Here \(\Pi_{B}(L,a_{sB})\) is the unrenormalized QCD expression for the vacuum polarization function; \(a_{sB}=\alpha_{sB}/\pi=\mu^{2\varepsilon}Z_{a_{s}}(a_{s})a_{s}\), \(\alpha_{sB}\) is the bare strong coupling, and \(Z_{a_{s}}(a_{s})\) is the corresponding renormalization constant, which defines the QCD RG \(\beta\) function. \(Z(a_{s})=(Z_{ph}(a_{s})-1)/a\), where \(a=\alpha/\pi\) and \(\alpha\) is the renormalized QED coupling, defined in the case where the effects of its QED running are not taken into account. \(Z_{ph}(a_{s})\) is the renormalization constant of the photon wave function, considered in the case where only the QCD corrections are taken into account. Within the class of the MS-like subtraction schemes, the expression for \(Z(a_{s})\) contains the pole terms in \(\varepsilon\), \[Z(a_{s})=\sum_{p\geq 1}a_{s}^{p-1}\sum_{k=1}^{p}\frac{Z_{p,-k}}{\varepsilon^{k}}, \tag{4}\] whereas the quantity \(\Pi_{B}(L,a_{sB})\) has the following form: \[\Pi_{B}(L,a_{sB})=\sum_{p\geq 1}\left(\frac{\mu^{2}}{Q^{2}}\right)^{\varepsilon p}a_{sB}^{p-1}\sum_{k=-p}^{\infty}\Pi_{p,k}\varepsilon^{k}, \tag{5}\] where \(\varepsilon=(4-d)/2\) and \(d\) is the space-time dimension. The renormalized photon vacuum polarization function \(\Pi(L,a_{s})\) obeys the following inhomogeneous RG equation: \[\left(\frac{\partial}{\partial\ln\mu^{2}}+\beta(a_{s})\frac{\partial}{\partial a_{s}}\right)\Pi(L,a_{s})=\gamma(a_{s}), \tag{6}\] where \[\gamma(a_{s})=\frac{d\Pi(L,a_{s})}{d\ln\mu^{2}} \tag{7}\] is the QCD anomalous dimension of \(\Pi(L,a_{s})\), and \(\beta(a_{s})\) is the RG \(\beta\) function, which governs the scale dependence of the strong coupling, namely, \[\beta(a_{s})=\frac{\partial a_{s}}{\partial\ln\mu^{2}}=-{\sum_{n\geq 0}}\beta_{n}a_{s}^{n+2}.
\tag{8}\] One should note that the RG \(\beta\) function is included in the renormalized expression for the trace of the energy-momentum tensor in the massless QCD; therefore, it is a measure of violation not only of the symmetry under the dilatation transformations, but under the conformal ones as well. Application of the renormalization procedure leads to the following perturbative expression for the photon vacuum polarization function: \[\Pi(L,a_{s}) = d_{R}\biggl{(}{\sum_{f}}Q_{f}^{2}\biggr{)}\Pi^{NS}(L,a_{s}) \tag{9}\] \[+ d_{R}\biggl{(}{\sum_{f}}Q_{f}\biggr{)}^{2}\Pi^{SI}(L,a_{s}),\] where \(d_{R}\) is the dimension of the fundamental representation of the considered generic simple gauge group. In our study, we are primarily interested in the case of the \(SU(N_{c})\) gauge group with \(d_{R}=N_{c}\). In its particular case of the \(SU(3)\) color group, relevant for physical QCD, \(N_{c}=3\). The quantities \(\Pi^{NS}(L,a_{s})\) and \(\Pi^{SI}(L,a_{s})\) are the flavor nonsinglet (NS) and singlet (SI) contributions to \(\Pi(L,a_{s})\), respectively. Substituting \(\Pi(L,a_{s})\) from Eq. (9) into (6), one can get the PT expression for the photon anomalous dimension, \[\gamma(a_{s})\!=\!d_{R}\!\left(\sum_{f}\!Q_{f}^{2}\right)\!\gamma^{NS}(a_{s})+d _{R}\!\left(\sum_{f}\!Q_{f}\right)^{2}\!\gamma^{SI}(a_{s}), \tag{10}\] where \(\gamma^{NS}(a_{s})\) and \(\gamma^{SI}(a_{s})\) are the PT series in strong coupling, \[\gamma^{NS}(a_{s})=\sum_{n\geq 0}\!\gamma_{n}a_{s}^{n},\qquad\gamma^{SI}(a_{s}) =\sum_{n\geq 3}\!\gamma_{n}^{SI}a_{s}^{n}. \tag{11}\] Taking into account Eqs. (6) and (8), one arrives at the following RG-improved expressions for \(\Pi^{NS}(L,a_{s})\) and \(\Pi^{SI}(L,a_{s})\) at \(L=0\): \[\Pi^{NS}(0,a_{s}(Q^{2})) =\sum_{n\geq 0}\!\Pi_{n}a_{s}^{n}(Q^{2}),\] \[\Pi^{SI}(0,a_{s}(Q^{2})) =\sum_{n\geq 3}\!\Pi_{n}^{SI}a_{s}^{n}(Q^{2}). \tag{12}\] The solution of the RG equation (6) can be found perturbatively. Its explicit form obtained at the \(\mathcal{O}(a_{s}^{4})\) level is presented in Appendix A. Using the expressions presented above, it is possible to derive the following expression for the Adler function: \[D(L,a_{s})=\gamma(a_{s})-\beta(a_{s})\frac{\partial}{\partial a_{s}}\Pi(L,a_{ s}). \tag{13}\] In contrast to the polarization operator, it is the RG-invariant quantity. Therefore, it satisfies the homogeneous RG equation, \[\frac{dD(L,a_{s})}{d\ln\mu^{2}}=\left(\frac{\partial}{\partial\ln\mu^{2}}+ \beta(a_{s})\frac{\partial}{\partial a_{s}}\right)D(L,a_{s})=0. \tag{14}\] Solving the system of the corresponding RG equations, one can get the following PT expression for the Adler function: \[D(a_{s}(Q^{2})) =d_{R}\!\left(\sum_{f}\!Q_{f}^{2}\right)\!D^{NS}(a_{s}(Q^{2}))\] \[\quad+d_{R}\!\left(\sum_{f}\!Q_{f}\right)^{2}\!D^{SI}(a_{s}(Q^{2 })), \tag{15}\] where its NS and SI contributions are defined as \[D^{\rm NS}(a_{s}(Q^{2}))=\sum_{n\geq 0}\!d_{n}a_{s}^{n}(Q^{2}), \tag{16a}\] \[D^{\rm SI}(a_{s}(Q^{2}))=\sum_{n\geq 3}\!d_{n}^{\rm SI}a_{s}^{n}(Q^{2 }). \tag{16b}\] In the massless limit, all logarithmic corrections to \(D(Q^{2})\), controlled by the RG, can be summed up into the running coupling \(a_{s}(Q^{2})\). Using the explicit solution of Eq. (6) for \(\Pi(L,a_{s})\), one can obtain the solution of the RG equation (14), expressed in terms of the PT coefficients of \(\Pi(L,a_{s})\), \(\beta(a_{s})\), and \(\gamma(a_{s})\). The explicit form of its \(\mathcal{O}(a_{s}^{4})\) approximation is given in Appendix A as well. Comparing solutions of the expressions of Eqs. 
(16a) and (16b) with the ones following from Eq. (13) and taking into account the dependence of \(a_{s}(Q^{2})\) on the Euclidean momentum \(Q^{2}\), we can obtain the following relations: \[d_{0}=\gamma_{0}, \tag{17a}\] \[d_{1}=\gamma_{1}, \tag{17b}\] \[d_{2}=\gamma_{2}+\beta_{0}\Pi_{1}, \tag{17c}\] \[d_{3}=\gamma_{3}+2\beta_{0}\Pi_{2}+\beta_{1}\Pi_{1}, \tag{17d}\] \[d_{4}=\gamma_{4}+3\beta_{0}\Pi_{3}+2\beta_{1}\Pi_{2}+\beta_{2}\Pi_{1}, \tag{17e}\] \[d_{3}^{\rm SI}=\gamma_{3}^{\rm SI}, \tag{17f}\] \[d_{4}^{\rm SI}=\gamma_{4}^{\rm SI}+3\beta_{0}\Pi_{3}^{\rm SI}. \tag{17g}\] One should recall that, in the class of the gauge-invariant MS-like schemes, the scheme dependence of the coefficients \(d_{k}\) starts to manifest itself from \(k\geq 2\) due to the scheme-dependent terms \(\Pi_{m}\) at \(m\geq 1\). The expressions (17a)-(17g) follow from the RG-based relation (13), which directly associates the Adler function with the photon vacuum polarization function and its anomalous dimension. The analytical expressions for the coefficients \(d_{0}\div d_{4}\), \(\gamma_{0}\div\gamma_{4}\) and \(\Pi_{0}\div\Pi_{3}\) may be found in Refs. [57, 58] correspondingly (see also references therein). In the MS-like scheme, the coefficients of the corresponding RG \(\beta\) function up to the \(\beta_{3}\) term are known from the results of the analytical calculations of Ref. [59], which were effectively confirmed by the observation of the nullification of the three-loop \(DR\)-like scheme approximation for the RG \(\beta\) function of the \(\mathcal{N}=4\) supersymmetric (SUSY) Yang-Mills theory (see, e.g., [60]) and by the direct analytical QCD calculations of Ref. [61]. ## III The \(\{\beta\}\) Expansion of \(\gamma(a_{s})\) and \(\Pi(a_{s})\) As was already mentioned in the Introduction, the \(\{\beta\}\)-expansion formalism implies representing the higher-order PT corrections to the massless RG-invariant quantities, evaluated in the gauge-invariant renormalization schemes, through sums of monomials in powers of the RG \(\beta\)-function coefficients, with the scale-invariant contributions separated. For instance, the coefficients \(d_{1}\div d_{4}\) and \(d_{3}^{\rm SI}\div d_{4}^{\rm SI}\) of the \(e^{+}e^{-}\) annihilation Adler function, defined in Eqs. (15)-(16b), have the following \(\{\beta\}\)-expanded structure: \[d_{1}=d_{1}[0], \tag{18a}\] \[d_{2}=\beta_{0}d_{2}[1]+d_{2}[0], \tag{18b}\] \[d_{3}=\beta_{0}^{2}d_{3}[2]+\beta_{1}d_{3}[0,1]+\beta_{0}d_{3}[1]+d_{3}[0], \tag{18c}\] \[d_{4}=\beta_{0}^{3}d_{4}[3]+\beta_{1}\beta_{0}d_{4}[1,1]+\beta_{2}d_{4}[0,0,1]+\beta_{0}^{2}d_{4}[2]+\beta_{1}d_{4}[0,1]+\beta_{0}d_{4}[1]+d_{4}[0], \tag{18d}\] \[d_{3}^{\rm SI}=d_{3}^{\rm SI}[0], \tag{18e}\] \[d_{4}^{\rm SI}=\beta_{0}d_{4}^{\rm SI}[1]+d_{4}^{\rm SI}[0], \tag{18f}\] where the terms \(d_{k}[...]\), \(d_{k}^{\rm SI}[...]\) do not contain \(n_{f}\) dependence (except for the flavor \(n_{f}\) dependence arising from the contributions to \(d_{4}[0]\) of the light-by-light scattering-type diagrams [28, 29, 34]). Starting from \(k\geq 3\), there is an ambiguity associated with fixing the terms \(d_{k}[...]\) (apart from the coefficients \(d_{k}[k-1]\), defined by the effects of the leading renormalon chains, obtained in the case of QED in Ref. [62] and reformulated for the case of QCD in Ref. [7]).
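This counting argument can be made concrete with a few lines of sympy. The sketch below (symbol names and the standard SU(3) values \(\beta_{0}=11/4-n_{f}/6\), \(\beta_{1}=51/8-19n_{f}/24\) in the \(a_{s}=\alpha_{s}/\pi\) normalization are our illustrative assumptions, not expressions taken from the text) matches a generic quadratic \(d_{3}(n_{f})\) to the four-term template (18c): the system is underdetermined, and only the leading-renormalon term \(d_{3}[2]\) comes out uniquely, in line with the statement above that only \(d_{k}[k-1]\) is unambiguous.

```python
# A sketch (not from the paper) of the {beta}-expansion ambiguity:
# match d_3(n_f) = c2*n_f**2 + c1*n_f + c0 to the template of Eq. (18c).
# Assumed SU(3) beta coefficients, a_s = alpha_s/pi normalization.
import sympy as sp

nf = sp.symbols("n_f")
b0 = sp.Rational(11, 4) - nf / 6
b1 = sp.Rational(51, 8) - sp.Rational(19, 24) * nf

c2, c1, c0 = sp.symbols("c2 c1 c0")  # known n_f coefficients of d_3
d3_2, d3_01, d3_1, d3_0 = sp.symbols("d3_2 d3_01 d3_1 d3_0")  # d_3[2], d_3[0,1], d_3[1], d_3[0]

template = b0**2 * d3_2 + b1 * d3_01 + b0 * d3_1 + d3_0
# Three equations (powers n_f^2, n_f^1, n_f^0) for four unknowns:
eqs = sp.Poly(sp.expand(template - (c2 * nf**2 + c1 * nf + c0)), nf).coeffs()
sol = sp.solve(eqs, [d3_2, d3_1, d3_0], dict=True)[0]

print(sol[d3_2])  # 36*c2: the leading-renormalon term d_3[2] is fixed uniquely
print(sol[d3_1])  # still contains the free d_3[0,1]: the remaining split is ambiguous
```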
It is related to the differences in splitting the \(n_{f}\)-dependent contributions to the higher-order corrections to an RG-invariant quantity between the flavor-dependent coefficients \(\beta_{i}\) (with \(i\geq 0\)) of the RG \(\beta\) function (see, e.g., [4, 32, 39]). One of the currently existing ways to resolve this problem was proposed in Ref. [28] and more widely studied in Refs. [29, 30]. In accordance with these considerations, the PT expression for the NS contribution to the Adler function at the \(\mathcal{O}(a_{s}^{4})\) level can be presented through the following double sum: \[D^{\rm NS}(a_{s}(Q^{2}))=1+\sum_{n=0}^{3}\left(\frac{\beta(a_{s}(Q^{2}))}{a_{s}(Q^{2})}\right)^{n}\sum_{k=1}^{4-n}D_{n,k}a_{s}^{k}(Q^{2})=1+D_{0,1}a_{s}(Q^{2})+(D_{0,2}-\beta_{0}D_{1,1})a_{s}^{2}(Q^{2})+(D_{0,3}-\beta_{0}D_{1,2}-\beta_{1}D_{1,1}+\beta_{0}^{2}D_{2,1})a_{s}^{3}(Q^{2})+(D_{0,4}-\beta_{0}D_{1,3}-\beta_{1}D_{1,2}-\beta_{2}D_{1,1}+\beta_{0}^{2}D_{2,2}+2\beta_{0}\beta_{1}D_{2,1}-\beta_{0}^{3}D_{3,1})a_{s}^{4}(Q^{2}). \tag{19}\] Here we will not touch upon the justification of representations of the type presented in Eq. (19); the arguments in their favor are given in Refs. [28, 29, 30]. In fact, the theoretical ways of fixing the analytical expressions of the terms \(d_{k}[...]\) are not unique (see, e.g., the considerations presented in Refs. [29, 34, 35]). Here we will consider the one outlined in Ref. [28] and followed in Ref. [30]. As observed by us there, it is applicable to a wider class of functions and quantities entering the corresponding RG equations. One may ask the following question: what are the theoretical and phenomenological reasons for separating the scale-invariant contributions \(d_{k}[0]\) from the total expressions for the coefficients \(d_{k}\)? Among the answers to this question is the problem of a careful consideration of the status of the PMC-related expressions and of unraveling in them the effects related to the scale-invariant limit and to its violation by the CSB effects. The PMC-related considerations enable one to eliminate the \(\beta\)-dependent terms in the coefficients \(d_{k}\) of Eqs. (18b)-(18d) and (18f) by redefining the scale parameter in every order of PT, leaving only the scale-invariant parts \(d_{k}[0]\) in the coefficients of the PT expressions related to the Green's function quantities. As a result, the scale parameter becomes a coupling-dependent function (for the concrete realization of this feature within the large-\(n_{f}\) expansion, see Refs. [14, 15, 16]; its PMC-type realizations are given in Refs. [21, 32]). It is also important that the PT coefficients of the higher-order corrections to the corresponding physical quantities, studied in the gauge-invariant schemes, become independent of the choice of scale. The representation of Eq. (19) allows one not only to separate the scale-invariant contributions \(d_{k}[0]\), but also to reproduce the structure of the \(\{\beta\}\) expansion in Eqs. (18b)-(18d). It also imposes essential restrictions on the terms of this decomposition, namely, \[d_{2}[1]=d_{3}[0,1]=d_{4}[0,0,1]=-D_{1,1}, \tag{20a}\] \[d_{3}[1]=d_{4}[0,1]=-D_{1,2}, \tag{20b}\] \[d_{3}[2]=d_{4}[1,1]/2=D_{2,1}. \tag{20c}\] This property is in correspondence with the feature of "special degeneracy" observed in Ref. [23] while applying the considered \(\mathcal{R}_{\delta}\) procedure. Application of these relations allowed the authors of Ref.
[28] to get the analytical expressions for the terms \(d_{k}[...]\) with \(k\leq 4\) in the \(\{\beta\}\) expansions (18b)-(18d). Their explicit form is given in Appendix B. The representation (19) also enabled us to fix several terms of the \(\{\beta\}\) expansion of the at present totally unknown coefficient \(d_{5}\) [29]. In the approach described above, for finding the terms \(d_{k}[...]\) of the \(\{\beta\}\)-decomposed corrections to the \(e^{+}e^{-}\) annihilation Adler function, it is not necessary to use any information about the possible \(\{\beta\}\) structure of the RG-related quantities, such as the photon anomalous dimension or the vacuum polarization function. In this case, we deal directly with the RG-invariant quantity \(D(Q^{2})\). However, when we pass to the consideration of the relation (13) between \(D(L,a_{s})\), \(\gamma(a_{s})\), and \(\Pi(L,a_{s})\) and to the expressions (17a)-(17g) following from it, the important issue of whether or not to decompose the coefficients of \(\gamma(a_{s})\) and \(\Pi(L,a_{s})\) arises. We adhere here to the statement, made previously in Refs. [29, 32, 39], that it is really necessary to decompose them in combinations of the \(\beta\)-function coefficients. In accordance with this opinion, in order to extract the scale-invariant contributions from the PT expressions for the photon anomalous dimension, we should apply the \(\{\beta\}\)-expansion procedure to the coefficients \(\gamma_{k}\) and \(\gamma_{k}^{\rm SI}\) of the photon anomalous dimension \(\gamma(a_{s})\) in Eq. (11) as well. Additional arguments in favor of this assertion will be given below. Following the proposal of Ref. [39], we write \[\gamma_{1}=\gamma_{1}[0], \tag{21a}\] \[\gamma_{2}=\beta_{0}\gamma_{2}[1]+\gamma_{2}[0], \tag{21b}\] \[\gamma_{3}=\beta_{0}^{2}\gamma_{3}[2]+\beta_{1}\gamma_{3}[0,1]+\beta_{0}\gamma_{3}[1]+\gamma_{3}[0], \tag{21c}\] \[\gamma_{4}=\beta_{0}^{3}\gamma_{4}[3]+\beta_{1}\beta_{0}\gamma_{4}[1,1]+\beta_{2}\gamma_{4}[0,0,1]+\beta_{0}^{2}\gamma_{4}[2]+\beta_{1}\gamma_{4}[0,1]+\beta_{0}\gamma_{4}[1]+\gamma_{4}[0], \tag{21d}\] \[\gamma_{3}^{\rm SI}=\gamma_{3}^{\rm SI}[0], \tag{21e}\] \[\gamma_{4}^{\rm SI}=\beta_{0}\gamma_{4}^{\rm SI}[1]+\gamma_{4}^{\rm SI}[0]. \tag{21f}\] Equation (13) leads to the relations \(d_{k}[0]=\gamma_{k}[0]\) and \(d_{k}^{\rm SI}[0]=\gamma_{k}^{\rm SI}[0]\). This fact, in conjunction with the \(\{\beta\}\) expansion (18a)-(18d) and the equalities (20a)-(20c), entails the following relationships for the terms \(\gamma_{k}[...]\) of the photon anomalous dimension: \[\gamma_{2}[1]=\gamma_{3}[0,1]=\gamma_{4}[0,0,1],\qquad\gamma_{3}[1]=\gamma_{4}[0,1],\] \[\gamma_{3}[2]=\gamma_{4}[1,1]/2.
\tag{22}\] Using the explicit analytical expressions for the coefficients \(\gamma_{0}\div\gamma_{4}\) [58] and \(\beta_{0}\div\beta_{2}\) [59] together with the relations (22), we obtain the analytical expressions for the terms \(\gamma_{k}[...]\) and \(\gamma_{k}^{\rm SI}[...]\), \[\gamma_{1}[0]=\frac{3}{4}C_{F},\qquad\gamma_{2}[0]=-\frac{3}{32}C_{F}^{2}+\frac{1}{16}C_{F}C_{A},\qquad\gamma_{2}[1]=\gamma_{3}[0,1]=\gamma_{4}[0,0,1]=\frac{11}{16}C_{F}, \tag{23a}\] \[\gamma_{3}[1]=\gamma_{4}[0,1]=\left(\frac{239}{192}-\frac{11}{4}\zeta_{3}\right)C_{F}^{2}+\left(\frac{163}{288}+\frac{11}{4}\zeta_{3}\right)C_{F}C_{A},\qquad\gamma_{3}[2]=\frac{1}{2}\gamma_{4}[1,1]=-\frac{77}{144}C_{F}, \tag{23b}\] \[\gamma_{3}[0]=-\frac{69}{128}C_{F}^{3}+\left(-\frac{101}{256}+\frac{33}{16}\zeta_{3}\right)C_{F}^{2}C_{A}+\left(-\frac{53}{192}-\frac{33}{16}\zeta_{3}\right)C_{F}C_{A}^{2}, \tag{23c}\] \[\gamma_{4}[2]=\left(\frac{5467}{1536}-\frac{119}{16}\zeta_{3}+\frac{99}{32}\zeta_{4}\right)C_{F}^{2}+\left(-\frac{123}{512}+\frac{629}{64}\zeta_{3}-\frac{99}{32}\zeta_{4}\right)C_{F}C_{A}, \tag{23d}\] \[\gamma_{4}[1]=\left(-\frac{1477}{256}-\frac{135}{32}\zeta_{3}+\frac{435}{32}\zeta_{5}\right)C_{F}^{3}+\left(\frac{4733}{2048}+\frac{1167}{128}\zeta_{3}-\frac{297}{128}\zeta_{4}-\frac{765}{64}\zeta_{5}\right)C_{F}^{2}C_{A}+\left(-\frac{16453}{18432}-\frac{2109}{256}\zeta_{3}+\frac{297}{128}\zeta_{4}-\frac{135}{128}\zeta_{5}\right)C_{F}C_{A}^{2},\qquad\gamma_{4}[3]=\left(-\frac{107}{384}-\frac{3}{8}\zeta_{3}\right)C_{F}, \tag{23e}\] \[\gamma_{4}[0]=\left(\frac{4157}{2048}+\frac{3}{8}\zeta_{3}\right)C_{F}^{4}+\left(-\frac{3509}{1536}-\frac{73}{128}\zeta_{3}-\frac{165}{32}\zeta_{5}\right)C_{F}^{3}C_{A}+\left(\frac{9181}{4608}+\frac{299}{128}\zeta_{3}+\frac{165}{64}\zeta_{5}\right)C_{F}^{2}C_{A}^{2}+\left(-\frac{30863}{36864}-\frac{147}{128}\zeta_{3}+\frac{165}{64}\zeta_{5}\right)C_{F}C_{A}^{3}+\left(\frac{3}{16}-\frac{1}{4}\zeta_{3}-\frac{5}{4}\zeta_{5}\right)\frac{d_{F}^{abcd}d_{A}^{abcd}}{d_{R}}+\left(-\frac{13}{16}-\zeta_{3}+\frac{5}{2}\zeta_{5}\right)\frac{d_{F}^{abcd}d_{F}^{abcd}}{d_{R}}n_{f}, \tag{23f}\] \[\gamma_{3}^{\rm SI}[0]=\left(\frac{11}{192}-\frac{1}{8}\zeta_{3}\right)\frac{d^{abc}d^{abc}}{d_{R}},\qquad\gamma_{4}^{\rm SI}[1]=\left(\frac{55}{256}-\frac{123}{256}\zeta_{3}+\frac{9}{64}\zeta_{4}+\frac{15}{64}\zeta_{5}\right)\frac{d^{abc}d^{abc}}{d_{R}}, \tag{23g}\] \[\gamma_{4}^{\rm SI}[0]=\left(\left(-\frac{13}{64}-\frac{1}{4}\zeta_{3}+\frac{5}{8}\zeta_{5}\right)C_{F}+\left(\frac{205}{1536}-\frac{13}{64}\zeta_{3}-\frac{5}{32}\zeta_{5}\right)C_{A}\right)\frac{d^{abc}d^{abc}}{d_{R}}, \tag{23h}\] where \(\zeta_{n}=\sum_{k=1}^{\infty}k^{-n}\) is the Riemann \(\zeta\) function; \(C_{F}\) and \(C_{A}\) are the quadratic Casimir operators in the fundamental and adjoint representations of the gauge group, correspondingly; \(d^{abc}=2{\rm Tr}(\{t^{a},t^{b}\}t^{c})\), \(d_{F}^{abcd}={\rm Tr}(t^{(a}t^{b}t^{c}t^{d)})/6\), and \(d_{A}^{abcd}={\rm Tr}(C^{(a}C^{b}C^{c}C^{d)})/6\), where \((C^{a})_{bc}=-if^{abc}\) are the generators of the adjoint representation with the antisymmetric structure constants \(f^{abc}\): \([t^{a},t^{b}]=if^{abc}t^{c}\). The terms proportional to the \(d_{F}^{abcd}d_{F}^{abcd}n_{f}/d_{R}\) and \(d_{F}^{abcd}d_{A}^{abcd}/d_{R}\) group structures originate from the light-by-light scattering effects, and they have to be included in the scale-invariant "\(n_{f}\)-independent" coefficient \(\gamma_{4}[0]\) [28, 29].
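For orientation, the lowest of these terms can be evaluated numerically in the physical \(SU(3)\) case, \(C_{F}=4/3\), \(C_{A}=3\); a minimal sympy sketch (our own cross-check of Eqs. (23a)-(23b), not part of the original text):

```python
# Numerical SU(3) values (C_F = 4/3, C_A = 3) of the lowest terms of
# Eqs. (23a)-(23b); zeta_3 = 1.2020569...
import sympy as sp

CF, CA = sp.Rational(4, 3), sp.Integer(3)
z3 = sp.zeta(3)

terms = {
    "gamma_1[0]": sp.Rational(3, 4) * CF,
    "gamma_2[0]": -sp.Rational(3, 32) * CF**2 + sp.Rational(1, 16) * CF * CA,
    "gamma_2[1]": sp.Rational(11, 16) * CF,
    "gamma_3[2]": -sp.Rational(77, 144) * CF,
    "gamma_3[1]": (sp.Rational(239, 192) - sp.Rational(11, 4) * z3) * CF**2
                  + (sp.Rational(163, 288) + sp.Rational(11, 4) * z3) * CF * CA,
}
for name, val in terms.items():
    print(f"{name} = {sp.simplify(val)} = {float(val):.4f}")
# e.g. gamma_1[0] = 1, gamma_2[0] = 1/12, gamma_2[1] = 11/12
```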
Using now the expressions (17a)-(17g) and (21a)-(21f) and taking into account the following \(\{\beta\}\)-expansion structure of the vacuum polarization function (9) and (12), \[\Pi_{0}=\Pi_{0}[0],\qquad\Pi_{1}=\Pi_{1}[0],\qquad\Pi_{2}=\Pi_{2}[0]+\beta_{0 }\Pi_{2}[1],\qquad\Pi_{3}=\Pi_{3}[0]+\beta_{0}\Pi_{3}[1]+\beta_{1}\Pi_{3}[0,1 ]+\beta_{0}^{2}\Pi_{3}[2], \tag{24}\] we arrive at the substantial relationships between terms \(d_{k}[...]\), \(\gamma_{k}[...]\), and \(\Pi_{k}[...]\), \[d_{1}[0]=\gamma_{1}[0],\qquad d_{2}[0]=\gamma_{2}[0],\qquad d_{2}[1]=\gamma_{2 }[1]+\Pi_{1}[0], \tag{25a}\] \[d_{3}[0]=\gamma_{3}[0],\qquad d_{3}[1]=\gamma_{3}[1]+2\Pi_{2}[0], \qquad d_{3}[0,1]=\gamma_{3}[0,1]+\Pi_{1}[0],\qquad d_{3}[2]=\gamma_{3}[2]+2 \Pi_{2}[1],\] (25b) \[d_{4}[0]=\gamma_{4}[0],\qquad d_{4}[1]=\gamma_{4}[1]+3\Pi_{3}[0], \qquad d_{4}[0,1]=\gamma_{4}[0,1]+2\Pi_{2}[0],\qquad d_{4}[2]=\gamma_{4}[2]+3 \Pi_{3}[1],\] (25c) \[d_{4}[3]=\gamma_{4}[3]+3\Pi_{3}[2],\qquad d_{4}[1,1]=\gamma_{4}[1,1]+3 \Pi_{3}[0,1]+2\Pi_{2}[1],\] (25d) \[d_{4}[0,0,1]=\gamma_{4}[0,0,1]+\Pi_{1}[0],\qquad d_{3}^{\rm SI}[0 ]=\gamma_{3}^{\rm SI}[0],\qquad d_{4}^{\rm SI}[0]=\gamma_{4}^{\rm SI}[0], \qquad d_{4}^{\rm SI}[1]=\gamma_{4}^{\rm SI}[1]+3\Pi_{3}^{\rm SI}[0], \tag{25e}\] where \(\Pi_{3}^{\rm SI}[0]=\Pi_{3}^{\rm SI}\). Applying these relations and using the explicit expressions for \(d_{k}[...]\) from Refs. [28, 29] and for \(\gamma_{k}[...]\) from Eqs. (23a)-(23h), we get the values of terms \(\Pi_{k}[...]\), \[\Pi_{0}[0]=\frac{5}{3},\qquad\Pi_{1}[0]=\left(\frac{55}{16}-3\zeta_{3}\right)C _{F},\qquad\Pi_{2}[1]=\left(\frac{3701}{288}-\frac{19}{2}\zeta_{3}\right)C_{F}, \tag{26a}\] \[\Pi_{2}[0]=\left(-\frac{143}{96}-\frac{37}{8}\zeta_{3}+\frac{15}{2}\zeta_{5} \right)C_{F}^{2}+\left(\frac{73}{72}-\frac{3}{4}\zeta_{3}-\frac{5}{4}\zeta_{5} \right)C_{F}C_{A},\] (26b) \[\Pi_{3}[2]=\left(\frac{196513}{3456}-\frac{809}{24}\zeta_{3}-15 \zeta_{5}\right)C_{F},\qquad\Pi_{3}[0,1]=\left(\frac{3701}{432}-\frac{19}{3} \zeta_{3}\right)C_{F},\] (26c) \[\Pi_{3}[1]=\left(-\frac{22103}{4608}-\frac{1439}{24}\zeta_{3}+9 \zeta_{3}^{2}-\frac{33}{32}\zeta_{4}+\frac{125}{2}\zeta_{5}\right)C_{F}^{2}+ \left(\frac{29353}{1536}-\frac{473}{192}\zeta_{3}-\frac{3}{2}\zeta_{3}^{2}+ \frac{33}{32}\zeta_{4}-\frac{185}{12}\zeta_{5}\right)C_{F}C_{A},\] (26d) \[\Pi_{3}[0]=\left(-\frac{31}{256}+\frac{39}{32}\zeta_{3}+\frac{735}{3 2}\zeta_{5}-\frac{105}{4}\zeta_{7}\right)C_{F}^{3}+\left(-\frac{520933}{55296} +\frac{5699}{384}\zeta_{3}-\frac{33}{4}\zeta_{3}^{2}+\frac{39}{128}\zeta_{4} -\frac{565}{64}\zeta_{5}+\frac{105}{8}\zeta_{7}\right)C_{F}^{2}C_{A}\] \[\qquad\qquad\qquad-\left(\frac{112907}{55296}+\frac{5839}{768} \zeta_{3}-\frac{33}{4}\zeta_{3}^{2}+\frac{99}{128}\zeta_{4}-\frac{835}{384} \zeta_{5}+\frac{35}{16}\zeta_{7}\right)C_{F}C_{A}^{2},\] (26e) \[\Pi_{3}^{\rm SI}=\Pi_{3}^{\rm SI}[0]=\left(\frac{431}{2304}-\frac{63}{256} \zeta_{3}-\frac{1}{8}\zeta_{3}^{2}-\frac{3}{64}\zeta_{4}+\frac{15}{64}\zeta_{5 }\right)\frac{d^{abc}d^{abc}}{d_{R}}. \tag{26f}\] One should note that, in contrast to Eqs. (20a) and (22), for the analogous \(\{\beta\}\)-expanded terms of the photon vacuum polarization function, one has \(\Pi_{3}[0,1]\neq\Pi_{2}[1]\). However, they turn out to be proportional to each other, namely, \(\Pi_{3}[0,1]=2/3\cdot\Pi_{2}[1]\). This follows from the fact that the derivative \(\partial/\partial a_{s}\) is included in Eq. (13). 
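The proportionality just noted is immediate to verify; a one-line sympy check (ours) using Eqs. (26a) and (26c), with \(C_{F}\) kept symbolic:

```python
# Check of Pi_3[0,1] = (2/3) * Pi_2[1] from Eqs. (26a) and (26c).
import sympy as sp

CF, z3 = sp.symbols("C_F zeta_3")
Pi2_1  = (sp.Rational(3701, 288) - sp.Rational(19, 2) * z3) * CF   # Eq. (26a)
Pi3_01 = (sp.Rational(3701, 432) - sp.Rational(19, 3) * z3) * CF   # Eq. (26c)
assert sp.simplify(Pi3_01 - sp.Rational(2, 3) * Pi2_1) == 0
print("Pi_3[0,1] == (2/3) Pi_2[1]")
```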
Thus, the analog of the double-sum representation (19) for the two-point correlator \(\Pi(a_{s})\) is not fulfilled, but it holds for the term \(\beta(a_{s})\partial\Pi(a_{s})/\partial a_{s}\) in Eq. (13) as a whole. Having obtained the concrete results (23a)-(23h) and (26a)-(26f) for the \(\{\beta\}\)-expanded coefficients of \(\gamma(a_{s})\) and \(\Pi(a_{s})\), respectively, we now move on to presenting extra arguments in favor of the necessity of their \(\{\beta\}\) decomposition. ## IV Arguments in favor of the \(\{\beta\}\) expansion of \(\gamma(a_{s})\) While applying the PMC ideas to the \(\overline{\rm MS}\)-scheme expressions for the quantities related to the Adler function, formulated in Refs. [16, 21], the authors of Refs. [23, 31] adhere to the point of view that the \(\{\beta\}\)-expansion procedure should not be applied to the photon anomalous dimension \(\gamma(a_{s})\), which enters in Eq. (13) presented above. In the work of Ref. [32], within the effective QCD model with a multiplet of massless gluinos, the attempt was made to clarify that the nonapplication of the \(\{\beta\}\)-expansion approach to \(\gamma(a_{s})\) contradicts the renormalizability principles. However, neither the arguments given within this effective QCD-related model in Ref. [32], nor the arguments presented within QCD itself in the work of Ref. [29], were heard, and the PMC approximants constructed without the \(\{\beta\}\) expansion of the photon anomalous dimension continue to be applied to the associated Adler function quantities (see, e.g., Refs. [40, 41, 42, 43] and the related discussions even in experimentally oriented works). To our understanding, the opinion of the authors of, e.g., Refs. [23, 31, 40, 41, 42, 43] on this point remains unjustified. Let us repeat now another serious extra argument, given in Ref. [29], for applying the \(\{\beta\}\) expansion to \(\gamma(a_{s})\). 1. In the QED limit, the term \(\widetilde{d}_{2}[0]\) (B2b) becomes equal to \(\widetilde{d}_{2}^{\rm QED}[0]=-3/32-11/48N\), where \(N\) is the number of the charged leptons. This expression is \(N\) dependent and does not correspond to Rosner's result [63] of the calculation of the divergent part of the photon field renormalization constant \(Z_{ph}\) in quenched QED, formulated at the diagrammatic level in Ref. [64]. In this approximation, the constant \(Z_{ph}\) does not contain the internal subgraphs renormalizing the electromagnetic charge. The result of this work is \((Z_{ph}^{-1})_{\rm div}=\frac{a_{B}}{3}\big(1+\frac{3}{4}a_{B}+\left[-\frac{3}{32}\right]a_{B}^{2}\big)\ln\frac{M^{2}}{m^{2}}\), where \(a_{B}=\alpha_{B}/\pi\), \(\alpha_{B}\) is the bare fine-structure constant, \(m\) is the lepton mass, and \(M\) is the large-scale cutoff mass. The bracketed term does not match the expression for \(\widetilde{d}_{2}^{\rm QED}[0]\), obtained when the photon anomalous dimension is not \(\{\beta\}\) decomposed, but it is in full agreement with the result for \(d_{2}^{\rm QED}[0]\), following from the \(U(1)\) limit of Eq. (23a) for \(\gamma_{2}[0]\) at \(C_{F}=1\) and \(C_{A}=0\). There is also the \(\mathcal{N}=1\) SUSY QCD argument in favor of the title of this section: 1. The NSVZ-like relation for the Adler function in \(\mathcal{N}=1\) SUSY QCD, derived in Ref. [65], and its detailed consideration at the three-loop level, made in Ref. [66], serve as extra arguments in favor of the \(\{\beta\}\) expansion of the SUSY analog of the photon anomalous dimension, namely, the anomalous dimension of the matter superfields. Indeed, the Novikov-Shifman-Vainshtein-Zakharov relation will be violated at the three-loop level if one does not decompose this anomalous dimension in the first coefficient of the corresponding \(\beta\) function.\({}^{3}\) Footnote 3: It may be interesting to get the arguments in favor of this statement in the \(\mathcal{N}=1\) SUSY QCD at the four-loop level. Note that, if we consider the Adler function defined in Eq. (15) directly, without involving Eq. (13) linking the functions \(D(a_{s})\), \(\gamma(a_{s})\), and \(\Pi(a_{s})\), then the \(\{\beta\}\) expansion of its PT expression should not depend on \(\gamma(a_{s})\) and \(\Pi(a_{s})\).
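The QED-limit statement in item 1 above amounts to a one-line arithmetic check; a minimal sketch (ours):

```python
# QED-limit check: gamma_2[0] of Eq. (23a) at C_F = 1, C_A = 0 should
# reproduce the -3/32 coefficient bracketed in Rosner's quenched-QED
# result for (Z_ph^{-1})_div.
from fractions import Fraction

CF, CA = Fraction(1), Fraction(0)
gamma2_0 = -Fraction(3, 32) * CF**2 + Fraction(1, 16) * CF * CA
assert gamma2_0 == Fraction(-3, 32)
print("gamma_2[0] (QED limit) =", gamma2_0)  # -3/32
```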
Moreover, from a formal point of view, for the \(\{\beta\}\) decomposition there is no principal difference between, for example, the PT series for the \(D(Q^{2})\) function, for the Bjorken polarized sum rule, or for the static interaction potential of the heavy quark-antiquark pair. However, the results of the \(\{\beta\}\) expansion for the Adler function, presented in Ref. [31], depend on \(\gamma(a_{s})\) in any case. At the same time, in the same paper, the \(\{\beta\}\) decomposition for the static QCD Coulomb-like potential, calculated analytically at the three-loop level in Ref. [67], is implemented on general grounds, as in Ref. [30]. Therefore, the agreement of the results of the \(\{\beta\}\) expansion for the static potential derived in Ref. [31] with those obtained in Ref. [30] in the framework of our formalism is rather natural. Thus, the photon anomalous dimension is a convenient ingredient for analytical calculations, but it should not affect the structure of the \(\{\beta\}\) expansion of the Adler function. If one follows the logic of the works [23, 31, 40, 41, 42, 43] and does not decompose the quantity \(\gamma(a_{s})\) in powers of the \(\beta_{k}\) coefficients, then the analytical expressions for the analogs of the \(d_{k}[...]\) coefficients will be different. We denote these terms by \(\widetilde{d}_{k}[...]\) to distinguish them from our \(d_{k}[...]\). For the comparison of their analytical structure, see the expressions for \(d_{k}[...]\) and \(\widetilde{d}_{k}[...]\) in Appendix B. Note that the explicit analytical expressions for \(\gamma_{4}\) and \(\Pi_{3}\) contain the Riemann \(\zeta_{4}\) contributions [58], which, however, are mutually canceled out in \(d_{4}\) [57], i.e., \[d_{4}^{(\zeta_{4})}=\gamma_{4}^{(\zeta_{4})}+3\beta_{0}\Pi_{3}^{(\zeta_{4})}=0. \tag{29}\] If we properly expand \(\gamma_{4}\) and \(\Pi_{3}\) [in accordance with Eqs. (21d) and (24)], we naturally arrive at the absence of the \(\zeta_{4}\) contributions in the expression for \(d_{4}[0]\) [28, 29]. However, if one assumes that \(\gamma_{4}\) is a scale-invariant term, then the coefficient \(\widetilde{d}_{4}[0]\) [see Eq. (B2g)] will definitely contain \(\zeta_{4}\) contributions. This fact contradicts the consequences of the no-\(\pi\) theorem [68], which explains why the \(\zeta_{4}\) contributions should appear in the expressions for the higher-order PT corrections to the Adler function starting from the coefficient \(d_{5}\) only. Let us now discuss the consequences stemming from the results of Refs. [23, 31, 40, 41, 42] for the terms \(\widetilde{d}_{k}[...]\), obtained when \(\gamma(a_{s})\) is not \(\{\beta\}\) expanded. 1. In this case, the \(\{\beta\}\) decomposition of the coefficients \(d_{k}\) of the Adler function has the following form: \[d_{2}=\beta_{0}\widetilde{d}_{2}[1]+\widetilde{d}_{2}[0], \tag{30a}\] \[d_{3}=\underbrace{\beta_{0}^{2}\widetilde{d}_{3}[2]}_{=0}+\beta_{1}\widetilde{d}_{2}[1]+2\beta_{0}\widetilde{d}_{3}[1]+\widetilde{d}_{3}[0], \tag{30b}\] \[d_{4}=\underbrace{\beta_{0}^{3}\widetilde{d}_{4}[3]}_{=0}+\underbrace{3\beta_{0}^{2}\widetilde{d}_{4}[2]}_{=0}+3\beta_{0}\widetilde{d}_{4}[1]+\underbrace{\tfrac{5}{2}\beta_{1}\beta_{0}\widetilde{d}_{3}[2]}_{=0}+2\beta_{1}\widetilde{d}_{3}[1]+\beta_{2}\widetilde{d}_{2}[1]+\widetilde{d}_{4}[0], \tag{30c}\] where the under-braces indicate the terms with identically zero coefficients.
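The vanishing pattern in Eqs. (30b)-(30c) can be made explicit with a few lines of sympy (a schematic illustration under our own symbol names): by Eq. (17d), if \(\gamma_{3}\) and \(\Pi_{2}\) are kept undecomposed, \(d_{3}\) is at most linear in \(\beta_{0}\), whereas their decompositions (21c) and (24) feed the \(\beta_{0}^{2}\) coefficient \(d_{3}[2]=\gamma_{3}[2]+2\Pi_{2}[1]\) of Eq. (25b).

```python
# Sketch: d_3 = gamma_3 + 2*beta_0*Pi_2 + beta_1*Pi_1, Eq. (17d).
import sympy as sp

b0, b1 = sp.symbols("beta_0 beta_1")
g3, P1, P2 = sp.symbols("gamma_3 Pi_1 Pi_2")   # kept undecomposed on purpose

d3_undec = g3 + 2 * b0 * P2 + b1 * P1
print(sp.Poly(d3_undec, b0).degree())          # 1: no beta_0**2 monomial available

# With the {beta} decompositions of gamma_3 and Pi_2, substituting only
# their beta_0-carrying parts (schematic) produces the beta_0**2 term:
g3_2, P2_1 = sp.symbols("gamma3_2 Pi2_1")      # stand-ins for gamma_3[2], Pi_2[1]
d3_dec = d3_undec.subs({g3: b0**2 * g3_2, P2: b0 * P2_1})
print(sp.Poly(d3_dec, b0).coeff_monomial(b0**2))  # gamma3_2 + 2*Pi2_1, cf. Eq. (25b)
```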
It should be emphasized that this representation does not correspond to the well-known renormalon asymptotics \(d_{k+1}\sim\beta_{0}^{k}k!\) at \(k\gg 1\) for the higher-order PT coefficients of the Adler function in the large-\(\beta_{0}\) approximation (see, e.g., [1, 2, 52]). Indeed, all the terms \(\widetilde{d}_{k+1}[k]\) in Eqs. (30b) and (30c) at \(k\geq 2\) are _identically_ equal to zero. Thus, if we do not decompose the coefficients \(\gamma_{k}\) and \(\Pi_{k}\) in powers of the RG \(\beta\)-function coefficients, then we will not reproduce the large-\(\beta_{0}\) asymptotics for \(d_{k}\) in any order of PT starting from \(k=3\). The leading renormalon chain contribution, whose explicit general formula for arbitrary order \(k\) follows from the analytical results given in Refs. [7, 62], is fixed correctly only when the photon anomalous dimension undergoes the \(\{\beta\}\)-expansion procedure. In turn, in Refs. [23, 31, 40, 41, 42] the missing \(n_{f}\)-dependent contributions are hidden in the expressions for the nonzero terms \(\widetilde{d}_{2}[0],\widetilde{d}_{2}[1]\); \(\widetilde{d}_{3}[0],\widetilde{d}_{3}[1]\); \(\widetilde{d}_{4}[0],\widetilde{d}_{4}[1]\) in Eqs. (30a)-(30c). One more important point, which follows from the totally decomposed \(\{\beta\}\)-expanded representation for the Adler function, is the recovery of its original NLO BLM prescription expression. Thus, the worries of Ref. [64] about the non-recovery of the BLM results upon the application of the \(\mathcal{R}_{\delta}\) procedure of Ref. [23] to the PT expression for the Adler function without a proper expansion of the PT QCD series for the photon anomalous dimension \(\gamma(a_{s})\), critically commented on in the more detailed PMC-related work of Ref. [31], turn out to have a rather solid background. Since, after applying the multiple \(\beta\)-function representation to the Adler function formulated in Eq. (19), we reproduce its NLO BLM prescription expression, we will call this the PMC/BLM approach. ## V Application of the PMC/BLM approximants to the Adler function ### Modified PMC expressions At the first stage of the BLM prescription application, one should consider the scale transformations \(\mu\to\mu^{\prime}\) and introduce the shift parameter \(\Delta=L-L^{\prime}=\ln(\mu^{2}/\mu^{\prime 2})\), where \(L^{\prime}=\ln(\mu^{\prime 2}/Q^{2})\). Using now the scaling operator (which may also be called the dilatation operator), one can obtain the relation between \(a_{s}(\mu^{2})\) and \(a_{s}(\mu^{\prime 2})\) in the following form, considered previously in Refs. [4, 32]: \[a_{s}(\mu^{2})=a_{s}(\exp(\Delta)\cdot\mu^{\prime 2})=\exp\biggl(\Delta\,\frac{d}{d\ln\mu^{\prime 2}}\biggr)a_{s}^{\prime}=\exp\biggl(\Delta\beta(a_{s}^{\prime})\frac{\partial}{\partial a_{s}^{\prime}}\biggr)a_{s}^{\prime}=a_{s}^{\prime}+\frac{\Delta}{1!}\beta(a_{s}^{\prime})+\frac{\Delta^{2}}{2!}\beta(a_{s}^{\prime})\frac{\partial}{\partial a_{s}^{\prime}}\beta(a_{s}^{\prime})+\frac{\Delta^{3}}{3!}\beta(a_{s}^{\prime})\,\frac{\partial}{\partial a_{s}^{\prime}}\biggl(\beta(a_{s}^{\prime})\,\frac{\partial}{\partial a_{s}^{\prime}}\beta(a_{s}^{\prime})\biggr)+..., \tag{31}\] where \(a_{s}^{\prime}=a_{s}(\mu^{\prime 2})\). At the next step, we choose the PMC/BLM scale shift \(\Delta\) as a PT series in powers of \(\beta_{0}a_{s}^{\prime}\), \[\Delta=\ln\biggl(\frac{\mu^{2}}{\mu^{\prime 2}}\biggr)=\Delta_{0}+\sum_{k\geq 1}\Delta_{k}(\beta_{0}a_{s}^{\prime})^{k}.
Taking into account this representation, one can rewrite the relation (31) in the fourth order of approximation in the following form: \[a_{s}=a_{s}^{\prime}-\beta_{0}\Delta_{0}a_{s}^{\prime 2}+(\beta_{0}^{2}\Delta_{0}^{2}-\beta_{1}\Delta_{0}-\beta_{0}^{2}\Delta_{1})a_{s}^{\prime 3}+\biggl{(}\frac{5}{2}\beta_{0}\beta_{1}\Delta_{0}^{2}-\beta_{0}\beta_{1}\Delta_{1}+2\beta_{0}^{3}\Delta_{0}\Delta_{1}-\beta_{0}^{3}\Delta_{0}^{3}-\beta_{2}\Delta_{0}-\beta_{0}^{3}\Delta_{2}\biggr{)}a_{s}^{\prime 4}. \tag{33}\] Using now Eq. (33) at \(\mu^{\prime 2}=Q^{2}\), bearing in mind the RG invariance of the Adler function and its \(\{\beta\}\)-expansion pattern (18a)-(18f), it is possible to get the expressions for the coefficients \(d_{k}^{\prime}\) of the \(D(a_{s}^{\prime})\) function, normalized at the new scale, in the form given in Refs. [4, 32], \[d_{1}^{\prime} =d_{1}[0], \tag{34a}\] \[d_{2}^{\prime} =\beta_{0}(d_{2}[1]-\Delta_{0}d_{1}[0])+d_{2}[0],\] (34b) \[d_{3}^{\prime}+\delta_{f}(d_{3}^{\rm SI})^{\prime} =\beta_{0}^{2}(d_{3}[2]-2\Delta_{0}d_{2}[1]+\Delta_{0}^{2}d_{1}[0]-\Delta_{1}d_{1}[0])+\beta_{1}(d_{3}[0,1]-\Delta_{0}d_{1}[0])+\beta_{0}(d_{3}[1]-2\Delta_{0}d_{2}[0])+d_{3}[0]+\delta_{f}d_{3}^{\rm SI}[0],\] (34c) \[d_{4}^{\prime}+\delta_{f}(d_{4}^{\rm SI})^{\prime} =\beta_{0}^{3}(d_{4}[3]-3\Delta_{0}d_{3}[2]+3\Delta_{0}^{2}d_{2}[1]-2\Delta_{1}d_{2}[1]+2\Delta_{0}\Delta_{1}d_{1}[0]-\Delta_{0}^{3}d_{1}[0]-\Delta_{2}d_{1}[0])\] \[\quad+\beta_{0}\beta_{1}(d_{4}[1,1]-3\Delta_{0}d_{3}[0,1]-2\Delta_{0}d_{2}[1]+5\Delta_{0}^{2}d_{1}[0]/2-\Delta_{1}d_{1}[0])+\beta_{2}(d_{4}[0,0,1]-\Delta_{0}d_{1}[0])\] \[\quad+\beta_{0}^{2}(d_{4}[2]-3\Delta_{0}d_{3}[1]+3\Delta_{0}^{2}d_{2}[0]-2\Delta_{1}d_{2}[0])+\beta_{1}(d_{4}[0,1]-2\Delta_{0}d_{2}[0])+\beta_{0}(d_{4}[1]-3\Delta_{0}d_{3}[0])\] \[\quad+\beta_{0}\delta_{f}(d_{4}^{\rm SI}[1]-3\Delta_{0}d_{3}^{\rm SI}[0])+d_{4}[0]+\delta_{f}d_{4}^{\rm SI}[0], \tag{34d}\] where \(\delta_{f}=(\sum Q_{f})^{2}/\sum Q_{f}^{2}\). Recall that the coefficients \(d_{k}[...]\) are presented in Appendix B. Setting initially \[\Delta_{0}=\Delta_{\rm BLM}=\frac{d_{2}[1]}{d_{1}[0]}=\left(\frac{33}{8}-3\zeta_{3}\right)C_{F}, \tag{35}\] one can introduce the energy scale \(Q_{0}^{2}\), \[Q_{0}^{2}=Q^{2}\exp(-\Delta_{0}). \tag{36}\] At this new scale, the expression for the Adler function reads \[D(Q^{2})=3\sum_{f}Q_{f}^{2}(1+d_{1}[0]a_{s}(Q_{0}^{2})+d_{2}[0]a_{s}^{2}(Q_{0}^{2})+\mathcal{O}(a_{s}^{3}(Q_{0}^{2}))). \tag{37}\]
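The relation (33) can be cross-checked symbolically. The following minimal Python sketch (using sympy; the symbol names are ours, and the convention \(\beta(a_{s})=-\beta_{0}a_{s}^{2}-\beta_{1}a_{s}^{3}-\beta_{2}a_{s}^{4}\) is read off from the leading terms of Eq. (33)) verifies that the operator expansion (31) with the shift (32) reproduces the coefficients of Eq. (33) through \(\mathcal{O}(a_{s}^{\prime 4})\):

```python
import sympy as sp

# a stands for a_s'; D0, D1, D2 are the shift parameters of Eq. (32)
a, b0, b1, b2, D0, D1, D2 = sp.symbols("a b0 b1 b2 D0 D1 D2")
beta = -b0*a**2 - b1*a**3 - b2*a**4
Delta = D0 + D1*(b0*a) + D2*(b0*a)**2

# first terms of exp(Delta * beta * d/da) acting on a, Eq. (31)
t1 = beta
t2 = beta*sp.diff(beta, a)
t3 = beta*sp.diff(beta*sp.diff(beta, a), a)
a_mu = sp.expand(a + Delta*t1 + Delta**2/2*t2 + Delta**3/6*t3)

# compare with the coefficients of a'^2, a'^3, a'^4 in Eq. (33)
assert sp.expand(a_mu.coeff(a, 2) - (-b0*D0)) == 0
assert sp.expand(a_mu.coeff(a, 3) - (b0**2*D0**2 - b1*D0 - b0**2*D1)) == 0
target4 = (sp.Rational(5, 2)*b0*b1*D0**2 - b0*b1*D1 + 2*b0**3*D0*D1
           - b0**3*D0**3 - b2*D0 - b0**3*D2)
assert sp.expand(a_mu.coeff(a, 4) - target4) == 0
```

Higher operator terms start at \(\mathcal{O}(a_{s}^{\prime 5})\) and do not affect the check.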
Further on, taking into account relation (35) and absorbing the remaining \(\beta_{i}\)-dependent contributions in Eq. (34c) into the parameter \(\Delta_{1}\), we arrive at the following expression: \[\beta_{0}\Delta_{1}(n_{f})=\beta_{0}\bigg{(}\frac{d_{3}[2]}{d_{1}[0]}-\frac{d_{2}^{2}[1]}{d_{1}^{2}[0]}\bigg{)}+\frac{d_{3}[1]}{d_{1}[0]}-\frac{2d_{2}[0]d_{2}[1]}{d_{1}^{2}[0]}+\frac{\beta_{1}}{\beta_{0}}\frac{d_{3}[0,1]-d_{2}[1]}{d_{1}[0]}. \tag{38}\] Application of the PMC/BLM approach at the \(\mathcal{O}(a_{s}^{3})\) level eventually yields \[D(Q^{2})=3\sum_{f}Q_{f}^{2}(1+d_{1}[0]a_{s}(Q_{1}^{2})+d_{2}[0]a_{s}^{2}(Q_{1}^{2})+(d_{3}[0]+\delta_{f}d_{3}^{\rm SI}[0])a_{s}^{3}(Q_{1}^{2})+\mathcal{O}(a_{s}^{4}(Q_{1}^{2}))), \tag{39}\] where \(Q_{1}^{2}\) is defined in accordance with Eqs. (32), (35), (36), and (38) as \[Q_{1}^{2}=Q^{2}\exp(-\Delta_{0}-\beta_{0}\Delta_{1}(n_{f})a_{s}(Q_{0}^{2})). \tag{40}\] Following this logic and using Eq. (34d), one can fix the parameter \(\beta_{0}^{2}\Delta_{2}\) as \[\beta_{0}^{2}\Delta_{2}(n_{f}) =\beta_{0}^{2}\bigg{(}\frac{d_{4}[3]}{d_{1}[0]}-3\frac{d_{2}[1]d_{3}[2]}{d_{1}^{2}[0]}+2\frac{d_{2}^{3}[1]}{d_{1}^{3}[0]}\bigg{)}+\beta_{1}\bigg{(}\frac{d_{4}[1,1]}{d_{1}[0]}-3\frac{d_{2}[1]d_{3}[0,1]}{d_{1}^{2}[0]}+\frac{3}{2}\frac{d_{2}^{2}[1]}{d_{1}^{2}[0]}-\frac{d_{3}[2]}{d_{1}[0]}\bigg{)}\] \[\quad+\beta_{0}\bigg{(}\frac{d_{4}[2]}{d_{1}[0]}-3\frac{d_{3}[1]d_{2}[1]}{d_{1}^{2}[0]}+5\frac{d_{2}[0]d_{2}^{2}[1]}{d_{1}^{3}[0]}-2\frac{d_{2}[0]d_{3}[2]}{d_{1}^{2}[0]}\bigg{)}+\frac{d_{4}[1]}{d_{1}[0]}-3\frac{d_{3}[0]d_{2}[1]}{d_{1}^{2}[0]}+\delta_{f}\bigg{(}\frac{d_{4}^{\rm SI}[1]}{d_{1}[0]}-3\frac{d_{3}^{\rm SI}[0]d_{2}[1]}{d_{1}^{2}[0]}\bigg{)}\] \[\quad-2\frac{d_{2}[0]d_{3}[1]}{d_{1}^{2}[0]}+4\frac{d_{2}^{2}[0]d_{2}[1]}{d_{1}^{3}[0]}+\frac{\beta_{1}^{2}}{\beta_{0}^{2}}\frac{d_{2}[1]-d_{3}[0,1]}{d_{1}[0]}+\frac{\beta_{1}}{\beta_{0}}\bigg{(}\frac{d_{4}[0,1]-d_{3}[1]}{d_{1}[0]}-\frac{2d_{2}[0](d_{3}[0,1]-d_{2}[1])}{d_{1}^{2}[0]}\bigg{)}\] \[\quad+\frac{\beta_{2}}{\beta_{0}}\frac{d_{4}[0,0,1]-d_{2}[1]}{d_{1}[0]}. \tag{41}\] In this case, instead of the expressions (39) and (40), we obtain their higher-order counterparts, \[D(Q^{2})=3\sum_{f}Q_{f}^{2}(1+d_{1}[0]a_{s}(Q_{2}^{2})+d_{2}[0]a_{s}^{2}(Q_{2}^{2})+(d_{3}[0]+\delta_{f}d_{3}^{\rm SI}[0])a_{s}^{3}(Q_{2}^{2})+(d_{4}[0]+\delta_{f}d_{4}^{\rm SI}[0])a_{s}^{4}(Q_{2}^{2})+\mathcal{O}(a_{s}^{5}(Q_{2}^{2}))), \tag{42}\] \[Q_{2}^{2}=Q^{2}\exp(-\Delta_{0}-\beta_{0}\Delta_{1}(n_{f})a_{s}(Q_{1}^{2})-\beta_{0}^{2}\Delta_{2}(n_{f})a_{s}^{2}(Q_{1}^{2})). \tag{43}\] In a particular case of the \(SU(3)\) color gauge group, the numerical forms of the parameters \(\Delta_{0}\), \(\beta_{0}\Delta_{1}(n_{f})\), and \(\beta_{0}^{2}\Delta_{2}(n_{f})\), which enter the determination of the scale \(Q_{2}^{2}\) in Eq. (43), are defined as \[\Delta_{0}=\frac{11}{2}-4\zeta_{3}\approx 0.6918, \tag{44a}\] with the analogous \(n_{f}\)-dependent numerical expressions (44b) and (44c) following directly from Eqs. (38) and (41). In the \(SU(3)\) case, the expression (42) takes the following numerical form: \[D(Q^{2}) = 3\sum_{f}Q_{f}^{2}\bigg{(}1+a_{s}(Q_{2}^{2})+(1.9857-0.1153n_{f})a_{s}^{2}(Q_{2}^{2})+(-23.2227-0.4132\delta_{f})a_{s}^{3}(Q_{2}^{2}) \tag{45}\] \[+(81.1571+0.0802n_{f}-2.7804\delta_{f})a_{s}^{4}(Q_{2}^{2})+{\cal O}(a_{s}^{5}(Q_{2}^{2}))\bigg{)}.\] It is worth mentioning that the numerical results of Eq. (45) were previously presented in Ref. [29]. The magnitudes of their \({\cal O}(a_{s}^{2})\) and \({\cal O}(a_{s}^{3})\) coefficients are in agreement with the ones obtained in Ref. [14] with the help of the generalized BLM prescription and the large-\(n_{f}\) expansion [see the related work of Ref. [16], where the numerical expression for the corresponding \({\cal O}(a_{s}^{4})\) coefficient in Eq. (45) was found]. Our expression (45) should also be compared with its counterpart following from the PMC-type considerations of Refs. [23, 31, 40, 41, 42, 43] with the \(\{\beta\}\)-nonexpanded photon anomalous dimension, \[D(Q^{2}) = 3\sum_{f}Q_{f}^{2}(1+a_{s}(\tilde{Q}_{2}^{2})+(2.6042-0.1528n_{f})a_{s}^{2}(\tilde{Q}_{2}^{2})+(9.7418-2.0426n_{f}-0.0198n_{f}^{2}-0.4132\delta_{f})a_{s}^{3}(\tilde{Q}_{2}^{2}) \tag{46}\] \[+(41.0141-12.9110n_{f}+0.4887n_{f}^{2}+0.0045n_{f}^{3}+(-2.3829-0.0241n_{f})\delta_{f})a_{s}^{4}(\tilde{Q}_{2}^{2})+{\cal O}(a_{s}^{5}(\tilde{Q}_{2}^{2}))),\] where we do not specify the explicit form of the corresponding scale \(\tilde{Q}_{2}^{2}\); it does not coincide with \(Q_{2}^{2}\) but is defined in a way similar to Eq. (43). As we have already discussed above, the coefficients in the expression (46) are \(n_{f}\) dependent. This essential difference of Eq. (46) from Eq. (45) is a consequence of the \(\{\beta\}\)-expansion procedure not being applied to the photon anomalous dimension \(\gamma(a_{s})\) in Refs. [23, 31, 40, 41, 42, 43]. This fact was critically commented on in Sec. IV of this work.
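For orientation, the overall size of the leading PMC/BLM scale shift can be evaluated numerically from Eqs. (36) and (44a). The following small script (illustrative only) shows that the NLO BLM scale is \(Q_{0}\approx 0.71\,Q\):

```python
from mpmath import mp, zeta, exp

mp.dps = 8
Delta0 = mp.mpf(11)/2 - 4*zeta(3)   # Eq. (44a)
print(Delta0)                       # ~0.69177
print(exp(-Delta0/2))               # Q_0/Q = exp(-Delta0/2) ~ 0.708, Eq. (36)
```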
We also present here the numerical \(\overline{\rm MS}\)-scheme result for the Adler function, which follows from the analytical \({\cal O}(a_{s}^{4})\) expression obtained in [57, 58] and confirmed in [69]: \[D(Q^{2}) = 3\sum_{f}Q_{f}^{2}(1+a_{s}(Q^{2})+(1.9857-0.1153n_{f})a_{s}^{2}(Q^{2})+(18.2427-4.2158n_{f}+0.0862n_{f}^{2}-0.4132\delta_{f})a_{s}^{3}(Q^{2}) \tag{47}\] \[+(135.7916-34.4402n_{f}+1.8753n_{f}^{2}-0.0101n_{f}^{3}+(-5.9422+0.1916n_{f})\delta_{f})a_{s}^{4}(Q^{2})+{\cal O}(a_{s}^{5}(Q^{2}))).\] It is worth clarifying that the leading large-\(n_{f}\) contributions to Eq. (47) do agree with the numerical form of the analytical QED result obtained previously in Ref. [62], but disagree with the analogous numbers given in Eq. (46) above. This is a consequence of the fact that, although the nonexpanded expression for \(\gamma(a_{s})\) contains a substantial part of the renormalon-related contributions to the Adler function, such contributions are contained in the expression for the vacuum polarization function as well. The latter are absorbed into the scale \(\tilde{Q}_{2}^{2}\) when the PMC procedure is applied, while the renormalon contributions to \(\gamma(a_{s})\) remain. From our point of view, such a variant of the realization of the PMC approach, which in general represents interesting and important ideas, is not fully theoretically justified.

### Energy dependence of the PMC/BLM and \(\overline{\rm MS}\)-scheme Adler function approximants

#### 1. PMC/BLM inputs

Let us now specify what we mean by the expansion parameters \(a_{s}(Q_{0}^{2})\), \(a_{s}(Q_{1}^{2})\), and \(a_{s}(Q_{2}^{2})\) in the NLO, N\({}^{2}\)LO, and N\({}^{3}\)LO PMC/BLM Adler function approximants presented in Eqs. (37), (39), and (42) above. They correspond to the inverse-log representation of the \(\overline{\rm MS}\)-scheme QCD coupling constant, truncated at the corresponding orders of PT and taken, e.g., from Eq. (9.5) of the QCD PDG review of Ref. [70], with the NLO, N\({}^{2}\)LO, and N\({}^{3}\)LO energy scales \(Q_{0}^{2}\), \(Q_{1}^{2}\), and \(Q_{2}^{2}\) fixed at the related orders of these representations through the corresponding PMC/BLM expressions of Eqs. (36), (40), and (43).
In concrete applications, these ways of fixation can be rewritten through the unique \(\overline{\rm MS}\)-scheme representation of the QCD coupling constant related to the arbitrary energy scale \(Q^{2}\), but with appropriately redefined expressions for the \(\overline{\rm MS}\)-scheme scale parameter, namely, \[\Lambda_{\rm NLO}^{\rm(BLM)}(n_{f}) =\Lambda_{\rm NLO}^{(n_{f})}\cdot\exp\biggl{[}-\frac{1}{2}\Delta_{0}\biggr{]}, \tag{48a}\] \[\Lambda_{\rm N^{2}LO}^{\rm(PMC)}(n_{f}) =\Lambda_{\rm N^{2}LO}^{(n_{f})}\cdot\exp\biggl{[}-\frac{1}{2}\Bigl{(}\Delta_{0}+\beta_{0}\Delta_{1}(n_{f})a_{s}^{\rm NLO}(Q^{2}/(\Lambda_{\rm NLO}^{\rm(BLM)})^{2})\Bigr{)}\biggr{]},\] (48b) \[\Lambda_{\rm N^{3}LO}^{\rm(PMC)}(n_{f}) =\Lambda_{\rm N^{3}LO}^{(n_{f})}\cdot\exp\biggl{[}-\frac{1}{2}\Bigl{(}\Delta_{0}+\beta_{0}\Delta_{1}(n_{f})a_{s}^{\rm N^{2}LO}(Q^{2}/(\Lambda_{\rm N^{2}LO}^{\rm(PMC)})^{2})+\beta_{0}^{2}\Delta_{2}(n_{f})\bigl{(}a_{s}^{\rm N^{2}LO}(Q^{2}/(\Lambda_{\rm N^{2}LO}^{\rm(PMC)})^{2})\bigr{)}^{2}\Bigr{)}\biggr{]}, \tag{48c}\] where \(\Lambda_{\rm NLO}^{(n_{f})}\), \(\Lambda_{\rm N^{2}LO}^{(n_{f})}\), \(\Lambda_{\rm N^{3}LO}^{(n_{f})}\) are the expressions for the QCD scale parameter defined in the \(\overline{\rm MS}\) scheme in the corresponding order of PT, while \(\Delta_{0}\), \(\beta_{0}\Delta_{1}(n_{f})\), and \(\beta_{0}^{2}\Delta_{2}(n_{f})\) are defined by Eqs. (44a)-(44c) presented above. We will use the appropriately truncated RG-improved expressions for the running QCD coupling \(a_{s}(Q^{2})\) through the \(\ln(Q^{2}/(\Lambda^{(n_{f})})^{2})\) terms and the coefficients of the QCD \(\beta\) function, taking into account its N\({}^{3}\)LO four-loop coefficient, analytically evaluated in Ref. [71] and confirmed in Ref. [72]. In fact, at present the five-loop \(\beta_{4}\) term of the corresponding \(\beta\) function is also known: it was analytically evaluated in Ref. [73] and confirmed in Refs. [74, 75]. However, for the sake of consistency with the orders of truncation of the PT approximations for the Adler function, this term will not be taken into account.

#### 2. The \(\overline{\rm MS}\)-scheme benchmarks

We will fix as the initial normalization point the \(\tau\)-lepton pole mass \(M_{\tau}=1776.8\) MeV, consider \(n_{f}=3\) active flavors, and use the rounded strong coupling constant value \(\alpha_{s}(M_{\tau}^{2})=0.312\), extracted in Ref. [76] from the QCD sum rules analysis of the ALEPH Collaboration \(\tau\)-lepton decay data. In view of the qualitative aims of the studies presented below, we neglect the effects of theoretical and experimental uncertainties. We note, however, that the result of Ref. [76] falls into the uncertainty bands of the related results independently obtained in Ref. [77] from a more detailed reanalysis of the same ALEPH data.
Considering now the representations of \(a_{s}(M_{\tau}^{2})\) through the inverse powers of logarithms of the \(M_{\tau}^{2}/(\Lambda^{(3)})^{2}\) ratio, properly truncated at the NLO, N\({}^{2}\)LO, and N\({}^{3}\)LO levels, we arrive at the following, admittedly rather rough, values of the \(\overline{\rm MS}\) QCD scale parameter \(\Lambda^{(3)}\) at \(n_{f}=3\): \[\Lambda_{\rm NLO}^{(3)}\!=\!361\,{\rm MeV},\ \ \Lambda_{\rm N^{2}LO}^{(3)}\!=\!330\,{\rm MeV},\ \ \Lambda_{\rm N^{3}LO}^{(3)}\!=\!325\,{\rm MeV}.\] To transform them to the cases of \(n_{f}=4\) and \(n_{f}=5\) effective numbers of quark flavors, we use the threshold transformation formulas available from the results of Refs. [78, 79, 80, 81], with the corresponding matching scales fixed at \(\sqrt{Q^{2}}=2\bar{m}_{c}(\bar{m}_{c}^{2})=2.54\) GeV and \(\sqrt{Q^{2}}=2\bar{m}_{b}(\bar{m}_{b}^{2})=8.36\) GeV. They are related to the following values of the \(\overline{\rm MS}\)-scheme running \(c\)- and \(b\)-quark masses, \(\bar{m}_{c}(\bar{m}_{c}^{2})\!=\!1.27\) GeV and \(\bar{m}_{b}(\bar{m}_{b}^{2})\!=\!4.18\) GeV, taken from the PDG (2022) volume of Ref. [70]. Following these steps, we obtain the corresponding sets of numerical values of the \(\overline{\rm MS}\)-scheme scale parameter for \(n_{f}=4\) and \(n_{f}=5\) active flavors, \[\Lambda_{\rm NLO}^{(4)}=315,\qquad\Lambda_{\rm N^{2}LO}^{(4)}=286,\qquad\Lambda_{\rm N^{3}LO}^{(4)}=282\ {\rm MeV}\] and \[\Lambda_{\rm NLO}^{(5)}=223,\qquad\Lambda_{\rm N^{2}LO}^{(5)}=205,\qquad\Lambda_{\rm N^{3}LO}^{(5)}=203\ {\rm MeV}.\] The choice of the concrete threshold energies is, of course, ambiguous and introduces additional inaccuracies [82]. However, these effects are also not substantial for our aims, and we neglect them as well in our considerations. Using the values of the \(\overline{\rm MS}\)-related QCD scale given above, the expressions from Eqs. (48a)-(48c), the inverse logarithmic representation of the strong coupling in the NLO, N\({}^{2}\)LO, and N\({}^{3}\)LO approximations, and the explicit expressions for \(D(Q^{2})\) in the \(\overline{\rm MS}\) scheme and within the PMC/BLM procedure given in Eqs. (47) and (45), we can obtain the corresponding energy dependence of the Adler function for the \(\overline{\rm MS}\) and PMC/BLM approximants and compare them with each other. It was also checked that the evolution of the adopted value \(\alpha_{s}(M_{\tau}^{2})=0.312\) up to the mass \(M_{Z}=91.188\) GeV [70] of the \(Z^{0}\) boson at the \({\cal O}(\alpha_{s}^{4})\) level of QCD yields \(\alpha_{s}(M_{Z}^{2})=0.1175\). It is consistent with the results of [76] and with the average value of PDG (2022) [70] within the uncertainties we do not account for. Thus, we hope that we have presented enough arguments to convince careful readers that our qualitatively aimed study is quantitatively self-consistent.

#### 3. The phenomenologically relevant \(n_{f}=3\) case

To illustrate the characteristic behavior of the Adler function approximants in the \(\overline{\rm MS}\) scheme and in the PMC/BLM approach in the case of \(n_{f}=3\), we consider the region of the Euclidean transferred momentum \(1.5\leq\sqrt{Q^{2}}\leq 2.4\ {\rm GeV}\), where the lower-energy scale is slightly smaller than the \(\tau\)-lepton mass and the upper one is a bit smaller than twice the charm-quark mass.
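These benchmarks are easy to cross-check. A minimal sketch (assuming the normalization \(a_{s}=\alpha_{s}/\pi\), in which \(\beta_{0}=(11-2n_{f}/3)/4\) and \(\beta_{1}=(102-38n_{f}/3)/16\), and the standard truncated inverse-log NLO solution of the RG equation) reproduces \(\alpha_{s}(M_{\tau}^{2})\approx 0.312\) from \(\Lambda_{\rm NLO}^{(3)}=361\) MeV and evaluates the shifted scale parameter of Eq. (48a):

```python
import math

def a_s_nlo(Q, lam, n_f):
    # truncated inverse-log NLO representation of a_s = alpha_s/pi
    b0 = (11 - 2 * n_f / 3) / 4
    b1 = (102 - 38 * n_f / 3) / 16
    L = math.log(Q**2 / lam**2)
    return (1 - (b1 / b0**2) * math.log(L) / L) / (b0 * L)

print(math.pi * a_s_nlo(1.7768, 0.361, 3))   # ~0.312 at Q = M_tau
print(0.361 * math.exp(-0.5 * 0.6918))       # Lambda_NLO^(BLM) of Eq. (48a): ~0.255 GeV
```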
Note that, in the Minkowskian timelike domain, in the similar energy region \(1.84\leq\sqrt{s}\leq 3.88\ {\rm GeV}\), the subprocess of the production of the light quark-antiquark \(u\), \(d\), \(s\) pairs dominates in the process of \(e^{+}e^{-}\) annihilation into hadrons. In this domain, the experimental data for the total cross section of the discussed subprocess were extracted from measurements provided by the KEDR [83, 84] and BESIII [85] Collaborations. Taking into account the results of the studies and benchmarks presented above, we obtain Fig. 1(a), demonstrating the energy behavior of the NLO, N\({}^{2}\)LO, and N\({}^{3}\)LO massless approximants for the Adler function in the \(\overline{\rm MS}\) scheme (47) and in the PMC/BLM approach. For comparison, the Born quark-parton result \(3\sum Q_{f}^{2}\) is presented there as well. Let us comment on definite consequences following from the comparison of the behavior of the various curves presented in Fig. 1(a).

1. One can see that the NLO PT corrections to the Adler function provide the quantitatively dominant contributions in both the \(\overline{\rm MS}\) and PMC/BLM cases.
2. Taking into account higher-order PT corrections, we observe a characteristic difference in the fine structure of the sets of \(\overline{\rm MS}\)-scheme and PMC/BLM approximants. Indeed, the \(\overline{\rm MS}\) results satisfy the inequalities \(D_{\rm Born}<D_{\rm NLO}(Q^{2})<D_{\rm N^{2}LO}(Q^{2})<D_{\rm N^{3}LO}(Q^{2})\), whereas for PMC/BLM we have \(D_{\rm Born}<D_{\rm NLO}(Q^{2})\), but \(D_{\rm NLO}(Q^{2})>D_{\rm N^{2}LO}(Q^{2})\) and \(D_{\rm N^{2}LO}(Q^{2})<D_{\rm N^{3}LO}(Q^{2})\).
3. It is interesting that the sign structure of the related N\({}^{3}\)LO PT QCD expressions changes from the \(++++\) pattern in the \(\overline{\rm MS}\)-scheme case to the pattern \(++-+\) in the PMC/BLM case (illustrated numerically after this discussion).
4. The PMC/BLM approximants are located considerably below the \(\overline{\rm MS}\) ones and, as was expected, are indeed rather flat [see Fig. 1(b) as well].

The recently presented [86] detailed phenomenological analysis of the Davier-Hoecker-Malaescu-Zhang compilation [87] of the experimental data for the total cross section of the \(e^{+}e^{-}\) annihilation into hadrons process (though without the published data provided by the KEDR and BESIII Collaborations [84, 85] and the most recent, very interesting, new data of the CMD-3 Collaboration [88]), as well as the less detailed analysis of Refs. [89, 90] of the previous \(e^{+}e^{-}\)-to-hadrons experimental data, demonstrate that in this region of energies the experimentally related expression for the Adler function lies higher than even the depicted massless \(\overline{\rm MS}\) approximants. In Ref. [86], it is clearly demonstrated that taking into account the \(s\)-quark mass-dependent corrections, the \(c\)-quark mass-dependent corrections, and the nonperturbative power-suppressed corrections minimizes the difference between the \(\overline{\rm MS}\) expression for the \(D\)-function and the experimentally related behavior of the Adler \(D\)-function, extracted in Refs. [86, 89, 90] from the concrete experimental data of the \(e^{+}e^{-}\) colliders. However, as seen from Fig. 1(a), the application of the PMC/BLM procedure to the massless \(\overline{\rm MS}\) PT approximations of the Adler function moves the considered curves lower, away from the experimentally based results for the Adler function in the considered kinematical region.
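The sign patterns noted in items 2 and 3 can be made explicit by pure arithmetic on the printed coefficients of Eqs. (45) and (47). The following sketch is illustrative only: it evaluates both truncated series at a common fixed value \(a_{s}=0.1\), per unit of the Born factor \(3\sum Q_{f}^{2}\), ignoring that the two expansions actually run at different scales. At \(n_{f}=3\) one has \(\delta_{f}=0\), since the \(u\), \(d\), \(s\) charges sum to zero:

```python
import numpy as np

nf = 3
msbar = [1, 1, 1.9857 - 0.1153 * nf,
         18.2427 - 4.2158 * nf + 0.0862 * nf**2,
         135.7916 - 34.4402 * nf + 1.8753 * nf**2 - 0.0101 * nf**3]
pmc = [1, 1, 1.9857 - 0.1153 * nf, -23.2227, 81.1571 + 0.0802 * nf]

a = 0.1
for name, coeffs in (("MSbar, Eq. (47)", msbar), ("PMC/BLM, Eq. (45)", pmc)):
    sums = np.cumsum([c * a**k for k, c in enumerate(coeffs)])
    print(name, np.round(sums, 5))
# the MSbar coefficients are all positive (+ + + +), while the PMC/BLM
# coefficients follow the + + - + pattern
```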
Therefore, we do not recommend using the PMC/BLM approach in comparisons with the existing experimental data and, in particular, the ones provided by the \(e^{+}e^{-}\) colliders. The indications, previously made within the PMC approach, of an improvement of the agreement of the corresponding PMC PT QCD approximants with the available experimental data for different processes (see the related papers, starting from, e.g., Ref. [31] up to Ref. [43]) may be at least rather questionable. Indeed, even the claims that the PMC-type QCD expressions, related to the totally non-\(\beta\)-expanded Adler function representation of Eq. (46), do not contain definite parts of the effects governed by the leading-order renormalon chain contribution turned out to be incorrect, as shown in the course of this work (compare the material presented in Appendix B). However, it is pleasant that our realizations of the PMC/BLM ideas seem to be self-consistent. Notice the flat behavior of the PMC/BLM approximants and of the related scale transformation factors depicted in Figs. 1(a) and 1(b). The latter figure demonstrates that the PMC/BLM exponential factor \(\exp(-\Delta/2)\) is almost independent of the transferred momentum. These facts are related to the scale invariance of the coefficients \(d_{k}[0]\), obtained within the \(\{\beta\}\) expansion (19), and to the absorption of the coefficients proportional to the QCD \(\beta\)-function terms into the related PMC/BLM scales.

### Theoretically useful \(n_{f}=4\), \(5\) cases

Features similar to those discussed in the previous subsection are observed in the cases of \(n_{f}=4\) and \(n_{f}=5\) active quark flavors. The corresponding curves are presented in Figs. 2(a,b) and 3(a,b), respectively. In general, at \(n_{f}=4,5\) the character of the behavior of the Adler function and of the factor \(\exp(-\Delta/2)\) is similar to the one presented in Fig. 1. However, now the scale dependence of \(D(Q^{2})\) in the PMC/BLM approach in the N\({}^{3}\)LO approximation is more noticeable than in the case of \(n_{f}=3\). Why this is the case is still not clear to us. Among possible explanations are manifestations of the effects related to the regions where the uncertainties of the matching conditions may be detected.

## 6 Conclusion

In this work, we have considered the RG relation between the \(e^{+}e^{-}\) annihilation Adler function \(D(L,a_{s})\), the photon vacuum polarization function \(\Pi(L,a_{s})\), and its anomalous dimension \(\gamma(a_{s})\) in QCD. We have provided arguments in favor of the necessity of the \(\{\beta\}\) expansion of the PT series for \(\gamma(a_{s})\) and \(\Pi(L,a_{s})\) in order to extract the scale-invariant contributions to the \(D(L,a_{s})\) function properly and to satisfy the fundamental renormalization principles. We have demonstrated that, unless this is done, the obtained expressions for the terms \(\tilde{d}_{k}[\ldots]\) of the \(\{\beta\}\) decomposition of the Adler function will not correspond to the well-known renormalon asymptotics for its higher-order PT coefficients, and the values of the terms \(\tilde{d}_{k}[0]\) defined in such a way will not be genuinely scale invariant. We emphasize that the PMC/BLM scale setting approach is actually implemented only after the \(\{\beta\}\)-decomposition procedure is applied to the PT expressions for the photon vacuum polarization function and its anomalous dimension. Thus, the photon anomalous dimension is not the conformal contribution to the Adler function.
All terms of the considered \(\{\beta\}\) expansion for \(\gamma(a_{s})\) are defined within the decomposition procedure in powers of \(\beta(a_{s})/a_{s}\).

## Acknowledgments

We are grateful to G. Cvetic and S. V. Mikhailov for useful discussions. It is a pleasure to thank M. Khellat for his active participation and contribution at the initial stage of this work. A. L. K. would like to acknowledge previous round-table online discussions with S. J. Brodsky at the final stage of the Bled-2022 Workshop. The work of V. S. M. was supported by the Russian Science Foundation, Agreement No. 21-71-30003 (the study of the \(\{\beta\}\)-expansion problem for the photon anomalous dimension and vacuum polarization operator), and by the Ministry of Education and Science of the Russian Federation as part of the program of the Moscow Center for Fundamental and Applied Mathematics, Agreement No. 075-15-2022-284 (the numerical analysis of the PMC/BLM scale setting procedure).

## Appendix A

The solution of the RG equation (6) for the photon vacuum polarization function can be found perturbatively, and at the \(\mathcal{O}(a_{s}^{4})\) level it has the following form for both the NS and SI contributions: \[\Pi^{\text{NS}}(L,a_{s}) =\Pi_{0}+\gamma_{0}L+(\Pi_{1}+\gamma_{1}L)a_{s}(\mu^{2})+\left(\Pi_{2}+(\gamma_{2}+\beta_{0}\Pi_{1})L+\frac{1}{2}\beta_{0}\gamma_{1}L^{2}\right)a_{s}^{2}(\mu^{2})\] \[\quad+\left(\Pi_{3}+(\gamma_{3}+\beta_{1}\Pi_{1}+2\beta_{0}\Pi_{2})L+\left(\beta_{0}\gamma_{2}+\frac{1}{2}\beta_{1}\gamma_{1}+\beta_{0}^{2}\Pi_{1}\right)L^{2}+\frac{1}{3}\beta_{0}^{2}\gamma_{1}L^{3}\right)a_{s}^{3}(\mu^{2})\] \[\quad+\left(\Pi_{4}+(\gamma_{4}+\beta_{2}\Pi_{1}+2\beta_{1}\Pi_{2}+3\beta_{0}\Pi_{3})L+\left(\beta_{1}\gamma_{2}+\frac{1}{2}\beta_{2}\gamma_{1}+\frac{3}{2}\beta_{0}\gamma_{3}+\frac{5}{2}\beta_{0}\beta_{1}\Pi_{1}+3\beta_{0}^{2}\Pi_{2}\right)L^{2}\] \[\quad+\left(\frac{5}{6}\beta_{0}\beta_{1}\gamma_{1}+\beta_{0}^{2}\gamma_{2}+\beta_{0}^{3}\Pi_{1}\right)L^{3}+\frac{1}{4}\beta_{0}^{3}\gamma_{1}L^{4}\right)a_{s}^{4}(\mu^{2})+\ldots, \tag{A1a}\] \[\Pi^{\text{SI}}(L,a_{s})=(\Pi_{3}^{\text{SI}}+\gamma_{3}^{\text{SI}}L)a_{s}^{3}(\mu^{2})+\left(\Pi_{4}^{\text{SI}}+(\gamma_{4}^{\text{SI}}+3\beta_{0}\Pi_{3}^{\text{SI}})L+\frac{3}{2}\beta_{0}\gamma_{3}^{\text{SI}}L^{2}\right)a_{s}^{4}(\mu^{2})+\ldots. \tag{A1b}\] The explicit solution of the RG equation (14), expressed in terms of the PT coefficients of the photon vacuum polarization function, its anomalous dimension, and the RG \(\beta\) function, reads \[D^{\text{NS}}(L,a_{s}) =\gamma_{0}+\gamma_{1}a_{s}(\mu^{2})+(\gamma_{2}+\beta_{0}\Pi_{1}+\beta_{0}\gamma_{1}L)a_{s}^{2}(\mu^{2})\] \[\quad+(\gamma_{3}+\beta_{1}\Pi_{1}+2\beta_{0}\Pi_{2}+(\beta_{1}\gamma_{1}+2\beta_{0}\gamma_{2}+2\beta_{0}^{2}\Pi_{1})L+\beta_{0}^{2}\gamma_{1}L^{2})a_{s}^{3}(\mu^{2})\] \[\quad+\left(\gamma_{4}+\beta_{2}\Pi_{1}+2\beta_{1}\Pi_{2}+3\beta_{0}\Pi_{3}+(2\beta_{1}\gamma_{2}+\beta_{2}\gamma_{1}+3\beta_{0}\gamma_{3}+5\beta_{0}\beta_{1}\Pi_{1}+6\beta_{0}^{2}\Pi_{2})L\right.\] \[\quad+\left(3\beta_{0}^{2}\gamma_{2}+\frac{5}{2}\beta_{0}\beta_{1}\gamma_{1}+3\beta_{0}^{3}\Pi_{1}\right)L^{2}+\beta_{0}^{3}\gamma_{1}L^{3}\right)a_{s}^{4}(\mu^{2})+\ldots, \tag{A2a}\] \[D^{\text{SI}}(L,a_{s})=\gamma_{3}^{\text{SI}}a_{s}^{3}(\mu^{2})+(\gamma_{4}^{\text{SI}}+3\beta_{0}\Pi_{3}^{\text{SI}}+3\beta_{0}\gamma_{3}^{\text{SI}}L)a_{s}^{4}(\mu^{2})+\ldots. \tag{A2b}\]
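The displayed solutions can be verified order by order: at fixed \(a_{s}(\mu^{2})\), the NS Adler function (A2a) is precisely the \(L\) derivative of the NS polarization function (A1a). A short sympy check (symbol names are ours):

```python
import sympy as sp

L, a = sp.symbols("L a")
g0, g1, g2, g3, g4 = sp.symbols("g0:5")   # gamma_k
P0, P1, P2, P3, P4 = sp.symbols("P0:5")   # Pi_k
b0, b1, b2 = sp.symbols("b0 b1 b2")       # beta_k

# Eq. (A1a), through O(a^4)
Pi = (P0 + g0*L + (P1 + g1*L)*a
      + (P2 + (g2 + b0*P1)*L + sp.Rational(1, 2)*b0*g1*L**2)*a**2
      + (P3 + (g3 + b1*P1 + 2*b0*P2)*L
         + (b0*g2 + sp.Rational(1, 2)*b1*g1 + b0**2*P1)*L**2
         + sp.Rational(1, 3)*b0**2*g1*L**3)*a**3
      + (P4 + (g4 + b2*P1 + 2*b1*P2 + 3*b0*P3)*L
         + (b1*g2 + sp.Rational(1, 2)*b2*g1 + sp.Rational(3, 2)*b0*g3
            + sp.Rational(5, 2)*b0*b1*P1 + 3*b0**2*P2)*L**2
         + (sp.Rational(5, 6)*b0*b1*g1 + b0**2*g2 + b0**3*P1)*L**3
         + sp.Rational(1, 4)*b0**3*g1*L**4)*a**4)

# Eq. (A2a), through O(a^4)
D = (g0 + g1*a + (g2 + b0*P1 + b0*g1*L)*a**2
     + (g3 + b1*P1 + 2*b0*P2 + (b1*g1 + 2*b0*g2 + 2*b0**2*P1)*L
        + b0**2*g1*L**2)*a**3
     + (g4 + b2*P1 + 2*b1*P2 + 3*b0*P3
        + (2*b1*g2 + b2*g1 + 3*b0*g3 + 5*b0*b1*P1 + 6*b0**2*P2)*L
        + (3*b0**2*g2 + sp.Rational(5, 2)*b0*b1*g1 + 3*b0**3*P1)*L**2
        + b0**3*g1*L**3)*a**4)

assert sp.expand(D - sp.diff(Pi, L)) == 0
```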
## Appendix B

### 1. Coefficients \(d_{k}[...]\)

Application of the \(\{\beta\}\)-decomposition procedure (19) to the PT series for the Adler function enables one to obtain the expressions for the terms \(d_{k}[...]\) and \(d_{k}^{\text{SI}}[...]\) in the relations (18a)-(18f). Within this procedure, these terms were defined previously in Refs. [28, 29]. The scale-invariant contributions \(d_{k}[0]\) and \(d_{k}^{\text{SI}}[0]\) satisfy the relations \[d_{k}[0]=\gamma_{k}[0],\qquad d_{k}^{\text{SI}}[0]=\gamma_{k}^{\text{SI}}[0], \tag{B1a}\] and the terms \(\gamma_{k}[0]\), \(\gamma_{k}^{\text{SI}}[0]\) were fixed in Eqs. (23a), (23c), and (23f)-(23h). The remaining scale-noninvariant contributions to the \(D(Q^{2})\) function have the following analytical form: \[d_{2}[1]=d_{3}[0,1]=d_{4}[0,0,1]=\boxed{\left(\frac{33}{8}-3\zeta_{3}\right)C_{F}}\,, \tag{B1b}\] \[d_{3}[1]=d_{4}[0,1]=\left(-\frac{111}{64}-12\zeta_{3}+15\zeta_{5}\right)C_{F}^{2}+\left(\frac{83}{32}+\frac{5}{4}\zeta_{3}-\frac{5}{2}\zeta_{5}\right)C_{F}C_{A}, \tag{B1c}\] \[d_{3}[2]=\frac{1}{2}d_{4}[1,1]=\boxed{\left(\frac{151}{6}-19\zeta_{3}\right)C_{F}}\,, \tag{B1d}\] \[d_{4}[1]=\left(-\frac{785}{128}-\frac{9}{16}\zeta_{3}+\frac{165}{2}\zeta_{5}-\frac{315}{4}\zeta_{7}\right)C_{F}^{3}+\left(-\frac{3737}{144}+\frac{3433}{64}\zeta_{3}-\frac{99}{4}\zeta_{3}^{2}-\frac{615}{16}\zeta_{5}+\frac{315}{8}\zeta_{7}\right)C_{F}^{2}C_{A}\] \[\qquad+\left(-\frac{2695}{384}-\frac{1987}{64}\zeta_{3}+\frac{99}{4}\zeta_{3}^{2}+\frac{175}{32}\zeta_{5}-\frac{105}{16}\zeta_{7}\right)C_{F}C_{A}^{2}, \tag{B1e}\] \[d_{4}[2]=\left(-\frac{4159}{384}-\frac{2997}{16}\zeta_{3}+27\zeta_{3}^{2}+\frac{375}{2}\zeta_{5}\right)C_{F}^{2}+\left(\frac{14615}{256}+\frac{39}{16}\zeta_{3}-\frac{9}{2}\zeta_{3}^{2}-\frac{185}{4}\zeta_{5}\right)C_{F}C_{A}, \tag{B1f}\] \[d_{4}[3]=\boxed{\left(\frac{6131}{36}-\frac{203}{2}\zeta_{3}-45\zeta_{5}\right)C_{F}}\,, \tag{B1g}\] \[d_{4}^{\text{SI}}[1]=\left(\frac{149}{192}-\frac{39}{32}\zeta_{3}+\frac{15}{16}\zeta_{5}-\frac{3}{8}\zeta_{3}^{2}\right)\frac{d^{abc}d^{abc}}{d_{R}}. \tag{B1h}\] The boxed analytical expressions are the results for the leading renormalon-chain contributions, obtained in Ref. [62] and also presented in Ref. [7].

### 2. Coefficients \(\widetilde{d}_{k}[...]\)

In the case where the photon vacuum polarization function and its anomalous dimension are not \(\{\beta\}\) decomposed, the counterparts of the expressions (23a), (23c), (23f)-(23h), and (B1b)-(B1h) were obtained in Refs.
[23, 31, 40, 41, 42] and read as follows: \[\widetilde{d}_{1}[0]=\gamma_{1}=\frac{3}{4}C_{F}, \tag{B2a}\] \[\widetilde{d}_{2}[0]=\gamma_{2}=-\frac{3}{32}C_{F}^{2}+\frac{133}{192}C_{F}C_{A}\boxed{-\frac{11}{48}C_{F}T_{F}n_{f}}\,, \tag{B2b}\] \[\widetilde{d}_{2}[1]=\widetilde{d}_{3}[0,1]=\widetilde{d}_{4}[0,0,1]=\Pi_{1}=\boxed{\left(\frac{55}{16}-3\zeta_{3}\right)C_{F}}\,, \tag{B2c}\] \[\widetilde{d}_{3}[0]=\gamma_{3}=-\frac{69}{128}\,C_{F}^{3}+\left(\frac{215}{288}-\frac{11}{24}\zeta_{3}\right)C_{F}^{2}C_{A}+\left(\frac{5815}{20736}+\frac{11}{24}\zeta_{3}\right)C_{F}C_{A}^{2}\] \[\qquad-\left(\frac{169}{288}-\frac{11}{12}\zeta_{3}\right)C_{F}^{2}T_{F}n_{f}-\left(\frac{769}{5184}+\frac{11}{12}\zeta_{3}\right)C_{F}C_{A}T_{F}n_{f}\boxed{\left[-\frac{77}{1296}\,C_{F}T_{F}^{2}n_{f}^{2}\right]}\,, \tag{B2d}\] \[\widetilde{d}_{3}[1]=\widetilde{d}_{4}[0,1]=\Pi_{2}=\left(-\frac{143}{96}-\frac{37}{8}\zeta_{3}+\frac{15}{2}\zeta_{5}\right)C_{F}^{2}+\left(\frac{44215}{3456}-\frac{227}{24}\zeta_{3}-\frac{5}{4}\zeta_{5}\right)C_{F}C_{A}\boxed{\left[-\left(\frac{3701}{864}-\frac{19}{6}\,\zeta_{3}\right)C_{F}T_{F}n_{f}\right]}\,, \tag{B2e}\] \[\widetilde{d}_{3}[2]=\widetilde{d}_{4}[2]=\widetilde{d}_{4}[3]=0, \tag{B2f}\] \[\widetilde{d}_{4}[0]=\gamma_{4}=\left(\frac{4157}{2048}+\frac{3}{8}\zeta_{3}\right)C_{F}^{4}-\left(\frac{7755}{1024}+\frac{71}{16}\zeta_{3}-\frac{935}{128}\zeta_{5}\right)C_{F}^{3}C_{A}+\left(\frac{882893}{110592}+\frac{11501}{4608}\zeta_{3}+\frac{121}{256}\zeta_{4}-\frac{2145}{256}\zeta_{5}\right)C_{F}^{2}C_{A}^{2}\] \[\qquad-\left(\frac{1192475}{663552}-\frac{5609}{4608}\zeta_{3}+\frac{121}{256}\zeta_{4}-\frac{825}{512}\zeta_{5}\right)C_{F}C_{A}^{3}+\left(\frac{2509}{1536}+\frac{67}{32}\zeta_{3}-\frac{145}{32}\zeta_{5}\right)C_{F}^{3}T_{F}n_{f}\] \[\qquad-\left(\frac{66451}{18432}-\frac{2263}{1152}\zeta_{3}+\frac{143}{128}\zeta_{4}-\frac{255}{64}\zeta_{5}\right)C_{F}^{2}C_{A}T_{F}n_{f}+\left(\frac{22423}{41472}-\frac{9425}{2304}\zeta_{3}+\frac{143}{128}\zeta_{4}+\frac{45}{128}\zeta_{5}\right)C_{F}C_{A}^{2}T_{F}n_{f}\] \[\qquad+\left(\frac{4961}{13824}-\frac{119}{144}\zeta_{3}+\frac{11}{32}\zeta_{4}\right)C_{F}^{2}T_{F}^{2}n_{f}^{2}-\left(\frac{8191}{41472}-\frac{563}{576}\zeta_{3}+\frac{11}{32}\zeta_{4}\right)C_{F}C_{A}T_{F}^{2}n_{f}^{2}\] \[\qquad+\boxed{\left[\left[\left(\frac{107}{10368}+\frac{1}{72}\zeta_{3}\right)C_{F}T_{F}^{3}n_{f}^{3}\right]\right]}+\left(\frac{107}{16}-\frac{1}{4}\zeta_{3}-\frac{5}{4}\zeta_{5}\right)\frac{d_{F}^{abcd}d_{A}^{abcd}}{d_{R}}-\left(\frac{13}{16}+\zeta_{3}-\frac{5}{2}\zeta_{5}\right)\frac{d_{F}^{abcd}d_{F}^{abcd}}{d_{R}}n_{f}, \tag{B2g}\] \[\widetilde{d}_{4}[1]=\Pi_{3}=\left(-\frac{31}{256}+\frac{39}{32}\zeta_{3}+\frac{735}{32}\zeta_{5}-\frac{105}{4}\zeta_{7}\right)C_{F}^{3}-\left(\frac{382033}{27648}+\frac{46219}{1152}\zeta_{3}+\frac{11}{64}\zeta_{4}-\frac{9305}{192}\zeta_{5}-\frac{105}{8}\zeta_{7}\right)C_{F}^{2}C_{A}\] \[\qquad+\left(\frac{34499767}{497664}-\frac{147473}{3456}\zeta_{3}+\frac{55}{8}\zeta_{3}^{2}+\frac{11}{64}\zeta_{4}-\frac{28295}{1152}\zeta_{5}-\frac{35}{16}\zeta_{7}\right)C_{F}C_{A}^{2}\] \[\qquad-\left(\frac{7505}{13824}-\frac{1553}{72}\zeta_{3}+3\zeta_{3}^{2}-\frac{11}{32}\zeta_{4}+\frac{125}{6}\zeta_{5}\right)C_{F}^{2}T_{F}n_{f}-\left(\frac{559937}{124416}-\frac{41575}{1728}\zeta_{3}-\frac{1}{2}\zeta_{3}^{2}+\frac{11}{32}\zeta_{4}-\frac{515}{36}\zeta_{5}\right)C_{F}C_{A}T_{F}n_{f}\] \[\qquad+\boxed{\left[\left(\frac{196513}{31104}-\frac{809}{216}\zeta_{3}-\frac{5}{3}\zeta_{5}\right)C_{F}T_{F}^{2}n_{f}^{2}\right]}\,, \tag{B2h}\]
\[\widetilde{d}_{3}^{\rm SI}[0]=\gamma_{3}^{\rm SI}=\left(\frac{11}{192}-\frac{1}{8}\zeta_{3}\right)\frac{d^{abc}d^{abc}}{d_{R}}, \tag{B2i}\] \[\widetilde{d}_{4}^{\rm SI}[0]=\gamma_{4}^{\rm SI} = \left(\left(-\frac{13}{64}-\frac{1}{4}\zeta_{3}+\frac{5}{8}\zeta_{5}\right)C_{F}+\left(\frac{1015}{3072}-\frac{659}{1024}\zeta_{3}+\frac{33}{256}\zeta_{4}+\frac{15}{256}\zeta_{5}\right)C_{A}\right.\] \[\left.+\left(-\frac{55}{768}+\frac{41}{256}\zeta_{3}-\frac{3}{64}\zeta_{4}-\frac{5}{64}\zeta_{5}\right)T_{F}n_{f}\right)\frac{d^{abc}d^{abc}}{d_{R}}, \tag{B2j}\] \[\widetilde{d}_{4}^{\rm SI}[1]=\Pi_{3}^{\rm SI} = \left(\frac{431}{2304}-\frac{63}{256}\zeta_{3}-\frac{1}{8}\zeta_{3}^{2}-\frac{3}{64}\zeta_{4}+\frac{15}{64}\zeta_{5}\right)\frac{d^{abc}d^{abc}}{d_{R}}, \tag{B2k}\] where \(d_{3}^{\rm SI}=\widetilde{d}_{3}^{\rm SI}[0]\) and \(d_{4}^{\rm SI}=\widetilde{d}_{4}^{\rm SI}[0]+3\beta_{0}\widetilde{d}_{4}^{\rm SI}[1]\). The corresponding combinations of the single, double, and triple boxed analytical expressions coincide with the boxed analytical expressions of Eqs. (B1b), (B1d), and (B1g), which are related to the leading renormalon-chain contributions. The second terms from these pairs of boxed terms were absorbed into improperly defined PMC scales, while the remaining renormalon-chain effects still contribute to the non-\(\{\beta\}\)-expanded anomalous dimension.
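Several entries of this appendix can be checked numerically against the series (45)-(47). For \(SU(3)\) (\(C_{F}=4/3\), \(C_{A}=3\), \(T_{F}=1/2\), and the standard value \(d^{abc}d^{abc}/d_{R}=40/9\)), the combination \(d_{2}=\gamma_{2}+\beta_{0}\Pi_{1}\) built from Eqs. (B2b) and (B2c) reproduces the coefficient \(1.9857-0.1153n_{f}\), and Eq. (B2i) reproduces the \(-0.4132\,\delta_{f}\) term:

```python
from mpmath import zeta

CF, CA, TF, dabc = 4/3., 3., 1/2., 40/9.
z3 = zeta(3)

# NLO coefficient of Eq. (47): d_2 = gamma_2 + beta_0 * Pi_1
for nf in (3, 4, 5):
    beta0 = (11 - 2 * nf / 3) / 4
    gamma2 = -3/32. * CF**2 + 133/192. * CF * CA - 11/48. * CF * TF * nf  # (B2b)
    Pi1 = (55/16. - 3 * z3) * CF                                          # (B2c)
    print(nf, gamma2 + beta0 * Pi1, 1.9857 - 0.1153 * nf)   # the pairs agree

# delta_f coefficient of a_s^3 in Eqs. (45)-(47): d_3^SI[0] of Eq. (B2i)
print((11/192. - z3 / 8) * dabc)                            # ~ -0.4132
```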
This work focuses on the study of the \(e^{+}e^{-}\) annihilation Adler function \(D(Q^{2})\) and the related renormalization group functions, namely the photon vacuum polarization function and its anomalous dimension \(\gamma(\alpha_{s})\), to \(\mathcal{O}(\alpha_{s}^{4})\) accuracy. We emphasize that \(\gamma(\alpha_{s})\) is not a conformal-invariant contribution to \(D(Q^{2})\), and that, for the consistency of the analysis, its higher-order PT coefficients must be decomposed in powers of the \(\beta\)-function coefficients. The grounds for this claim are presented. A comparison of the \(\overline{\rm MS}\) and PMC/BLM approximants is given, and brief comments on the theoretical and phenomenological consequences of this comparison are made.
2309.12641
Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects
Surface defect inspection is a very challenging task in which surface defects usually show weak appearances or exist under complex backgrounds. Most high-accuracy defect detection methods require expensive computation and storage overhead, making them less practical in some resource-constrained defect detection applications. Although some lightweight methods have achieved real-time inference speed with fewer parameters, they show poor detection accuracy in complex defect scenarios. To this end, we develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects on the encoder-decoder structure. First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module. The proposed DSA performs element-wise similarity in channel dimension while maintaining linear complexity. In addition, we introduce a novel Channel Reference Attention (CRA) module before each decoder block to strengthen the representation of multi-level features in the bottom-up path. The proposed CRA exploits the channel correlation between features at different layers to adaptively enhance feature representation. The experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency compared with other 17 state-of-the-art methods. Specifically, GCANet achieves competitive accuracy (91.79% $F_{\beta}^{w}$, 93.55% $S_\alpha$, and 97.35% $E_\phi$) on SD-saliency-900 while running 272fps on a single gpu.
Feng Yan, Xiaoheng Jiang, Yang Lu, Lisha Cui, Shupan Li, Jiale Cao, Mingliang Xu, Dacheng Tao
2023-09-22T06:19:11
http://arxiv.org/abs/2309.12641v1
# Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects

###### Abstract

Surface defect inspection is a very challenging task in which surface defects usually show weak appearances or exist under complex backgrounds. Most high-accuracy defect detection methods require expensive computation and storage overhead, making them less practical in some resource-constrained defect detection applications. Although some lightweight methods have achieved real-time inference speed with fewer parameters, they show poor detection accuracy in complex defect scenarios. To this end, we develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects on the encoder-decoder structure. First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module. The proposed DSA performs element-wise similarity in the channel dimension while maintaining linear complexity. In addition, we introduce a novel Channel Reference Attention (CRA) module before each decoder block to strengthen the representation of multi-level features in the bottom-up path. The proposed CRA exploits the channel correlation between features at different layers to adaptively enhance feature representation. The experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency compared with other 17 state-of-the-art methods. Specifically, GCANet achieves competitive accuracy (91.79% \(F_{\beta}^{w}\), 93.55% \(S_{\alpha}\), and 97.35% \(E_{\phi}\)) on SD-saliency-900 while running 272 fps on a single GPU.

Lightweight network, depth-wise self-attention, channel reference attention, surface defects.

## I Introduction

Surface defect inspection is an important task for industrial quality control. Manual inspection is labor-intensive, time-consuming, and inefficient. Traditional machine vision methods mostly depend on effective texture features [1, 2, 3, 4], such as statistical features, filter-based features, and model-based features. The manually designed features cannot detect complicated defects effectively for lack of semantic information. Moreover, these methods show bad reusability and generalization since they are designed for specific surface defects. Recently, due to the strong feature representation ability of neural networks, deep learning methods have made great advances in industrial applications, such as magnetic tile [5], road [6, 7], rail [8, 9], and steel [10, 11, 12]. These methods mainly cover image-level [13, 14, 15], object-level [16, 17, 18], and pixel-level [11, 12, 19, 20] inspections. Although the above methods have achieved promising performance in defect detection, they mostly involve large numbers of parameters and substantial computational overhead. For example, DACNet [11] has 98M parameters and 143G floating point operations (FLOPs), with a speed of 39 FPS on an NVIDIA RTX 3060 Ti, showing limitations in real-time and resource-constrained scenes. Although existing lightweight detection networks [21, 22, 23, 24, 25, 26, 27] can achieve real-time detection with lower computational costs, they show poor performance in some defect scenes due to the complexity of defects. The main challenges of defects are as follows. (i) _weak appearance_.
Defects usually show inconspicuous appearances, such as small size, thin scratches, and low contrast with the background. These weak properties make it challenging to detect complete defect regions. (ii) _complicated background_. There are some distractions (e.g., stains, shadows, and random lighting) in the background, which may lead to false detection results. To improve the performance of lightweight networks in the defect detection task, two main problems should be considered. First, global context information is crucial for the detection of weak defects. As suggested in [28, 29], global information is beneficial for detecting complete object regions. However, it is difficult for lightweight CNNs to learn global dependencies because of limited receptive fields. To learn effective global information, Liu et al. [30] introduce a pyramid pooling module (PPM) [31] after the final layer of CNNs. But pooling operations may damage spatial details. Benefiting from Multi-head Self-Attention (MSA), the Transformer architecture [32] shows a powerful ability at modeling long-range dependencies. MSA models global dependencies by explicitly computing all pairwise similarities, i.e., the similarity of each position with all other positions is computed. But this brings quadratic computational complexity with respect to the spatial size, which limits its application in lightweight networks. In addition, it is important to refine the feature representation when defects appear under complex backgrounds. The low-level features contain abundant background interference compared to the high-level features. So simply fusing original low-level features with other features by addition or concatenation may result in some incomplete or false detection results. To solve the aforesaid issues, we propose a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects. First, we introduce a novel transformer encoder on top of the encoder, which takes the top-level features as inputs to learn global information. To mitigate the computational overhead of MSA, we propose a novel Depth-wise Self-Attention (DSA) module with linear complexity. DSA can learn global information as MSA does. Differently, DSA produces attention weights via element-wise interaction. Meanwhile, considering the complementarity of global and local features, global features are injected into all subsequent decoding stages through shortcuts to activate more complete defect regions. Second, we introduce a Channel Reference Attention (CRA) module to strengthen the feature representation. The high-level and global features are beneficial for suppressing background noise because they contain richer semantic information than low-level features. CRA produces an attention map by computing the channel similarity between them and the low-level features. The attention map can adaptively highlight useful defect information along the channel dimension, which enables the network to focus more on defect details. In summary, the main contributions are as follows: 1. We propose a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects, which achieves fast and accurate defect detection. 2. We present a novel transformer block based on the proposed Depth-wise Self-Attention (DSA). The DSA can learn global information with linear complexity. 3. We present a Channel Reference Attention (CRA) module to refine the expression of features.
The CRA selectively emphasizes meaningful defect detail features by learning similarities between cross-level features. 4. Extensive experiments on three public defect datasets demonstrate that the proposed model achieves a good trade-off between accuracy and efficiency in defect detection scenes.

## II Related Works

Defect detection methods are roughly grouped into traditional machine vision based [1, 2, 3, 4] and deep learning based methods [6, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Traditional machine vision methods show limitations in defect scenes, such as low accuracy and poor reusability. Here, we briefly review deep learning based pixel-wise defect detection methods and some lightweight networks.

### _Deep Learning Based Pixel-wise Defect Detection Methods_

Recently, deep learning methods have made remarkable advances in the surface defect detection task due to the strong feature extraction function of neural networks. Different from image-level [13, 14, 15] and object-level [16, 17, 18] detection methods, pixel-level detection methods [6, 9, 11, 12, 19, 20] obtain fine-grained detection results. The utilization of context information is crucial for pixel-wise defect detection. Considering the complexity of defects, Wang et al. [20] exploited channel and spatial global dependencies to strengthen the representation of defect features. Similarly, Zhou et al. [11] deployed three convolutional branches with different depths in the encoder to learn multi-scale context information and introduced a dense attention mechanism in the decoder. In addition, Sampath et al. [19] added channel and spatial attention modules in the encoder to highlight defect features and filter out background interference. Wan et al. [12] integrated effective context semantics, spatial details, and edge features to achieve accurate defect detection. To address the problem of data imbalance between defect pixels and non-defect pixels in defect images, Li et al. [6] proposed novel adaptive weighted cross-entropy (WCE) loss functions to train the network, which can make the network learn more defect information. Although the above methods achieve excellent performance in defect detection, they are limited in resource-constrained and real-time scenes because of substantial parameters and expensive computational overhead. To achieve real-time defect detection, Huang et al. [33] constructed a compact segmentation network consisting of a lightweight encoder and decoder. Zhang et al. [21] developed a real-time surface defect segmentation network, called FDSNet, which adopts two branches to encode edge details and semantic information of defects, respectively.

### _Lightweight Networks_

Lightweight networks can achieve real-time inference because of low computational costs, i.e., fewer parameters and FLOPs. Currently, there are three main streams in the design of lightweight methods. (i) _lightweight backbone_. Considering the importance of multi-scale contexts, Fan et al. [22] presented a Short-Term Dense Concatenate (STDC) module that can capture multi-scale contexts and used it to develop a lightweight network named STDCNet. Similarly, Liu et al. [26] utilized the designed stereoscopically attentive multi-scale (SAM) unit to develop a lightweight SOD model called SAMNet, with fewer parameters and FLOPs compared to STDCNet. (ii) _multi-branch architecture_.
Poudel et al. [34] designed a two-branch network for real-time segmentation, where the context and detail branches share the initial several layers to reduce computational costs. Xu et al. [24] introduced three different branches to learn details, semantics, and boundary features, respectively. (iii) _lightweight module design_. Peng et al. [23] adopted STDCNet as the encoder and designed a unified attention fusion module to integrate low-level and high-level features. Li et al. [35] introduced correlation-guided feature fusion and a lightweight feature refinement block in the decoder to improve performance. Different from the previous works, the proposed method introduces a novel transformer block in the lightweight backbone to capture global information, considering the limited receptive field of CNNs. Each transformer block uses the proposed Depth-wise Self-Attention to learn global context information. Furthermore, the Channel Reference Attention module is presented to strengthen feature representations.

## III Method

In this section, we first describe the overall architecture of the proposed method in Section III-A. Then, we introduce the designed DSA and CRA modules in Section III-B and Section III-C, respectively. Finally, we present the loss function in Section III-D.

### _Overall Architecture_

The proposed network is an encoder-decoder network, and the details are described in Fig. 1. We adopt the backbone of the lightweight network SAMNet [26] as the encoder. Differently, we use a depthwise separable 3\(\times\)3 convolution (DSConv3\(\times\)3) with stride 1 instead of stride 2 in the first encoding stage (a sketch of this building block is given at the end of this subsection). This enables the network to encode more defect details at the early layers. With an input image \(I\in\mathbb{R}^{H\times W\times 3}\) given, the dimension of the output feature at encoding stage \(i\) is \(\frac{H}{2^{i-1}}\times\frac{W}{2^{i-1}}\times C_{i}\), \(C_{i}\in\{16,32,64,96,128\}\). We introduce an additional transformer encoder to learn global context semantics, which takes as input the output feature of the fifth encoding stage. The transformer block contains a depth-wise self-attention module (DSA) and a feed-forward network (FFN). The DSA can capture global dependencies while maintaining linear complexity. To remedy the problem of feature dilution in the top-down path, global features are injected into each subsequent decoder block (DB) through skip connections. The integration of global and local features is beneficial for detecting complete defect objects. Besides, we introduce a Channel Reference Attention (CRA) module before feature fusion for feature enhancement. The attention weights are dynamically computed from the channel similarities between cross-layer features, which can adaptively highlight important defect features. In each decoder block, global, high-level, and refined low-level features are aggregated together by element-wise summation and the dilated DSConv3\(\times\)3. The output features of the decoder blocks and the transformer encoder are respectively fed into a convolution layer to produce side-output saliency predictions for deep supervision.
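As referenced above, a minimal PyTorch sketch of the DSConv3\(\times\)3 building block follows. The module name and the BatchNorm+ReLU ordering are our assumptions rather than the paper's released code; stride and dilation are exposed because the text uses stride 1 in the first stage and dilated DSConv3\(\times\)3 in the decoder blocks.

```python
import torch.nn as nn

class DSConv3x3(nn.Sequential):
    """Depthwise separable 3x3 convolution: depthwise 3x3 followed by pointwise 1x1."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, dilation: int = 1):
        super().__init__(
            # depthwise 3x3: one filter per input channel (groups=in_ch)
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=dilation,
                      dilation=dilation, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            # pointwise 1x1: mixes information across channels
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
```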
### _Depth-wise Self-Attention_

Transformers capture global context dependencies through multi-head self-attention (MSA). MSA computes the pairwise similarity among all spatial elements, producing an attention map with a size of \(N\times N\), where \(N\) denotes the spatial size of the features. This brings high computational complexity and memory usage. To mitigate this problem, we present a novel Depth-wise Self-Attention, which calculates self-attention in the channel dimension and implicitly models the global context information, as described in Fig. 1.

Fig. 1: The overview of the proposed GCANet. It adopts the lightweight backbone as the encoder. Following the encoder, an extra transformer encoder is introduced to capture global context information. The Depth-wise Self-Attention (DSA) module can model global dependencies with linear complexity. Meanwhile, the Channel Reference Attention (CRA) module is introduced to strengthen the representation ability of features through the channel interaction of cross-level features.

Suppose \(\mathbf{X}\in\mathbb{R}^{H\times W\times C}\) is the input feature map, which is first reshaped to \(\mathbf{X}^{\prime}\in\mathbb{R}^{N\times C}\), where \(N=H\times W\). \(\mathbf{X}^{\prime}\) generates the query \(\mathbf{Q}\), key \(\mathbf{K}\), and value \(\mathbf{V}\) through different linear projections. Formally, we have \[\mathbf{Q}=\mathbf{X}^{\prime}\mathbf{W_{q}},\quad\mathbf{K}=\mathbf{X}^{\prime}\mathbf{W_{k}},\quad\mathbf{V}=\mathbf{X}^{\prime}\mathbf{W_{v}} \tag{1}\] where \(\mathbf{W_{q}}\), \(\mathbf{W_{k}}\), and \(\mathbf{W_{v}}\in\mathbb{R}^{C\times C}\) represent learnable weight matrices. DSA calculates depth-wise similarities between \(\mathbf{K}\) and \(\mathbf{Q}\), producing a global attention map with a size of \(\mathbb{R}^{1\times C}\) instead of \(\mathbb{R}^{N\times N}\). Specifically, with the query \(\mathbf{Q}\) and key \(\mathbf{K}\) obtained, DSA splits them into \(C\) key vectors \(k_{i}\) and \(C\) query vectors \(q_{i}\) along the channel dimension, respectively, where \(q_{i}\) and \(k_{i}\in\mathbb{R}^{N\times 1}\). For each \(q_{i}\) and \(k_{i}\), DSA computes their normalized inner product as the global context descriptor \(g_{i}\in\mathbb{R}^{1}\), which is formulated as follows: \[g_{i}=\phi(q_{i})^{T}\phi(k_{i}) \tag{2}\] where \(\phi(\cdot)\) denotes the \(\ell_{2}\) normalization function, which restricts the inner-product results of \(q_{i}\) and \(k_{i}\) to the range \([-1,1]\). The obtained global context descriptors \(g_{1},...,g_{C}\) are concatenated together and multiplied by a learnable scale factor \(\alpha\), producing a global context attention map \(\mathbf{A}\in\mathbb{R}^{1\times C}\) through a Softmax function. The \(\mathbf{A}\) is expanded to \(\mathbb{R}^{N\times C}\) along the spatial dimension, performing an element-wise multiplication with \(\mathbf{V}\).
Mathematically, we have: \[\mathbf{Z}=Softmax\big{(}\alpha\,[g_{1},g_{2},\ldots,g_{C}]\big{)}\odot\mathbf{V} \tag{3}\] where \([g_{1},\ldots,g_{C}]\in\mathbb{R}^{1\times C}\) denotes the concatenation of the global context descriptors and \(\odot\) denotes element-wise multiplication, with the attention map broadcast along the spatial dimension of \(\mathbf{V}\).
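A compact PyTorch sketch of DSA implementing Eqs. (1)-(3) is given below. The class and argument names are ours, and details such as weight initialization and the single-head formulation are assumptions rather than the paper's released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSelfAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q_proj = nn.Linear(channels, channels)   # W_q of Eq. (1)
        self.k_proj = nn.Linear(channels, channels)   # W_k
        self.v_proj = nn.Linear(channels, channels)   # W_v
        self.alpha = nn.Parameter(torch.ones(1))      # learnable scale factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C), flattened to (B, N, C) with N = H*W
        b, h, w, c = x.shape
        x = x.reshape(b, h * w, c)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # l2-normalize each channel vector over the spatial dimension, Eq. (2)
        q = F.normalize(q, dim=1)
        k = F.normalize(k, dim=1)
        # per-channel inner products -> C global descriptors g_i (linear in N)
        g = (q * k).sum(dim=1)                          # (B, C)
        attn = torch.softmax(self.alpha * g, dim=-1)    # (B, C), Eq. (3)
        # broadcast over the spatial dimension and reweight V
        z = attn.unsqueeze(1) * v                       # (B, N, C)
        return z.reshape(b, h, w, c)
```

Note that the attention statistics cost \(\mathcal{O}(NC)\) rather than the \(\mathcal{O}(N^{2}C)\) of standard MSA.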
The network is trained with deep supervision, where a hybrid loss is imposed on each side-output saliency prediction: \[\mathcal{L}=\sum_{i}\left[\mathcal{L}_{bce}(\mathbf{S}_{i},\mathbf{G})+\mathcal{L}_{iou}(\mathbf{S}_{i},\mathbf{G})+\mathcal{L}_{ssim}(\mathbf{S}_{i},\mathbf{G})\right] \tag{11}\] where \(\mathbf{S}_{i}\) represents the \(i^{th}\) side-output saliency prediction, and \(\mathbf{G}\) represents the corresponding ground-truth. \(\mathcal{L}_{bce}\), \(\mathcal{L}_{iou}\), and \(\mathcal{L}_{ssim}\) represent the binary cross-entropy (BCE) loss [37], intersection over union (IoU) loss [38], and structural similarity (SSIM) loss [39], respectively. BCE loss is a pixel-level classification loss, defined as: \[\mathcal{L}_{bce}=-\frac{1}{HW}\sum\limits_{x=1}^{H}\sum\limits_{y=1}^{W}{[G_{xy}log(S_{xy})+\widetilde{G}_{xy}log(\widetilde{S}_{xy})]} \tag{12}\] where \(H\) and \(W\) represent the height and width of \(\mathbf{G}\), respectively. \(G_{xy}\) and \(S_{xy}\) represent the label and prediction of \(\mathbf{G}\) and \(\mathbf{S}\) at position \((x,y)\), respectively, and \(\widetilde{S}_{xy}=1-S_{xy}\), \(\widetilde{G}_{xy}=1-G_{xy}\). IoU loss is computed based on the IoU measure, which is beneficial for the model to focus on defect pixels, defined as: \[\mathcal{L}_{iou}=1-\frac{\sum\limits_{x=1}^{H}\sum\limits_{y=1}^{W}{G_{xy}\cdot S_{xy}}}{\sum\limits_{x=1}^{H}\sum\limits_{y=1}^{W}{(G_{xy}+S_{xy}-G_{xy}\cdot S_{xy})}} \tag{13}\] SSIM loss is computed based on the structural similarity measure, defined as: \[\mathcal{L}_{ssim}=1-\frac{(2\mu_{a}\mu_{b}+\xi_{1})(2\sigma_{ab}+\xi_{2})}{(\mu_{a}^{2}+\mu_{b}^{2}+\xi_{1})(\sigma_{a}^{2}+\sigma_{b}^{2}+\xi_{2})} \tag{14}\] where \(a\) and \(b\) denote two \(k\times k\) patches cropped from \(\mathbf{S}\) and \(\mathbf{G}\), respectively. \(\mu_{a}\), \(\mu_{b}\), \(\sigma_{a}\), \(\sigma_{b}\), and \(\sigma_{ab}\) represent the means, standard deviations, and covariance of patches \(a\) and \(b\), respectively, with \(\xi_{1}=0.01^{2}\) and \(\xi_{2}=0.03^{2}\).
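A compact sketch of the hybrid loss in Eqs. (11)-(14) is given below, assuming PyTorch as in the authors' implementation. Computing the SSIM patch statistics with average pooling is a common equivalent of explicit \(k\times k\) cropping; the function name, the patch size default, and the unweighted sum of the three terms are our own assumptions.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(pred, gt, k=11):
    """Sketch of the BCE + IoU + SSIM hybrid loss for one side output.
    pred, gt: (B, 1, H, W) tensors; pred holds probabilities in [0, 1]."""
    bce = F.binary_cross_entropy(pred, gt)                   # Eq. (12)

    # IoU loss over the whole map (Eq. 13).
    inter = (pred * gt).sum(dim=(1, 2, 3))
    union = (pred + gt - pred * gt).sum(dim=(1, 2, 3))
    iou = 1.0 - (inter / union.clamp(min=1e-6)).mean()

    # SSIM loss on k x k local patches (Eq. 14); patch means, variances,
    # and covariance are obtained with average pooling.
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_p = F.avg_pool2d(pred, k, stride=1)
    mu_g = F.avg_pool2d(gt, k, stride=1)
    var_p = (F.avg_pool2d(pred * pred, k, stride=1) - mu_p ** 2).clamp(min=0.0)
    var_g = (F.avg_pool2d(gt * gt, k, stride=1) - mu_g ** 2).clamp(min=0.0)
    cov = F.avg_pool2d(pred * gt, k, stride=1) - mu_p * mu_g
    ssim_map = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2))
    ssim = 1.0 - ssim_map.mean()

    return bce + iou + ssim
```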
## IV Experiments

### _Datasets_

To validate the effectiveness of the proposed GCANet for surface defect detection, we conduct experiments on the following surface defect datasets.

#### IV-A1 SD-saliency-900 [10]

There are three typical types of strip steel defects: inclusions, patches, and scratches. Each defect sample has a resolution of 200\(\times\)200 with a corresponding pixel-level label. These defects are characterized by low contrast, various types, different scales, and cluttered backgrounds, which bring difficulties to accurate detection. Each category of defects includes 300 images. We adopt the same training dataset (810 images) and test dataset (900 images) as previous works [10, 11, 40] in the experiments.

#### IV-A2 Magnetic tile [5]

There are 392 defect images, including five categories of defects: uneven, fray, crack, blowhole, and break. Each defect image has a corresponding fine-grained pixel-level label. The defects show various types and scales, complicated background noise (e.g., stains and shadows), and low contrast. In the experiment, the dataset includes 194 training images and 198 test images, which are obtained by randomly dividing the images of each category at a ratio of 1:1.

#### IV-A3 DAGM 2007 [41]

There are 10 types of defects, with each generated by a specific defect model and texture model. Each defect image contains a defect object, roughly labeled with an ellipse. Defects with various types and complex backgrounds bring challenges to detection. The dataset includes 1046 training images and 1054 test images. In the experiment, the training dataset is enlarged to 3138 images by flipping horizontally and vertically.

### _Implementation Details_

The proposed method is implemented with PyTorch. The parameters of the encoder are initialized with the SAMNet backbone pre-trained on ImageNet. The experiments are conducted on a computer with an NVIDIA RTX 3060 Ti. The network is trained with the Adam optimizer, where the learning rate is set to 5e-4. It is trained for 900 epochs with a batch size of 8 on SD-saliency-900, 900 epochs with a batch size of 5 on Magnetic tile, and 270 epochs with a batch size of 8 on DAGM 2007. During the training stage, the original image is resized to 256\(\times\)256 and randomly cropped to 224\(\times\)224 as the input of the network, following previous works [10, 11]. During the test stage, the proposed network takes the 256\(\times\)256 resolution as the input. In addition, we only calculate the output saliency map of the final decoding stage. The obtained saliency prediction is resized to the same resolution as the original image for evaluation.

### _Evaluation Metrics_

We quantitatively evaluate the performance of various models on the following metrics. The _Precision-Recall_ (_PR_) curve [42] is plotted with different precision-recall pairs, each calculated on a binarized saliency map \(\mathbf{S}\). The binarized \(\mathbf{S}\) are obtained by using 255 different thresholds in the range of \([0,1]\). _F-measure_ (\(F_{\beta}\)) [43] is a comprehensive metric that accounts for both precision and recall. The \(F_{\beta}\) of each precision-recall pair is calculated as follows: \[F_{\beta}=\frac{(1+\beta^{2})\text{Precision}\times\text{Recall}}{\beta^{2}\times\text{Precision}+\text{Recall}} \tag{15}\] where \(\beta^{2}=0.3\) in the experiment. The _F-measure_ curve is plotted with the calculated \(F_{\beta}\) scores using precision-recall pairs under different thresholds. _Mean Absolute Error_ (_MAE_) [44] evaluates the difference between \(\mathbf{S}\) and \(\mathbf{G}\) by computing their average pixel-level absolute error, computed as: \[MAE=\frac{1}{H\times W}\sum\limits_{x=1}^{H}\sum\limits_{y=1}^{W}{|S_{xy}-G_{xy}|} \tag{16}\] _Weighted F-measure_ (\(F_{\beta}^{w}\)) [45] defines a generalized \(F_{\beta}\) by assigning different weights to errors at different locations, computed as: \[F_{\beta}^{w}=\frac{\left(1+\beta^{2}\right)\text{Precision}^{w}\times\text{Recall}^{w}}{\beta^{2}\times\text{Precision}^{w}+\text{Recall}^{w}} \tag{17}\] where \(\beta^{2}=1\) in the experiment. _Structural similarity measure_ (\(S_{\alpha}\)) [46] evaluates the structural similarity of \(\mathbf{S}\) and \(\mathbf{G}\) through object-aware (\(S_{o}\)) and region-aware (\(S_{r}\)) measures, defined as: \[S_{\alpha}=\alpha*S_{o}+(1-\alpha)*S_{r} \tag{18}\] where \(\alpha=0.5\) in the experiment. _Enhanced-alignment measure_ (\(E_{\phi}\)) [47] considers local pixel-level and global image-level properties, defined as: \[E_{\phi}=\frac{1}{W\times H}\sum_{(x,y)}\phi(x,y) \tag{19}\] where \(\phi(x,y)\) represents the enhanced alignment matrix. We use the mean \(E_{\phi}\) in the experiment.
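For illustration, the two simplest metrics can be sketched in NumPy as follows; the PR and F-measure curves would wrap `f_measure` in a loop over the 255 binarization thresholds. The function names are ours.

```python
import numpy as np

def mae(s, g):
    """Eq. (16): mean absolute error between prediction s and ground truth g,
    both float arrays in [0, 1] with the same H x W shape."""
    return np.abs(s - g).mean()

def f_measure(s, g, beta2=0.3, threshold=0.5):
    """Eq. (15): F-measure of the prediction binarized at one threshold."""
    pred = s >= threshold
    gt = g >= 0.5
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    return (1 + beta2) * precision * recall / denom if denom > 0 else 0.0
```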
### _Comparisons with State-of-the-arts_

To prove the advantage of the proposed GCANet, it is compared with 17 state-of-the-art segmentation models over the three defect datasets, including defect detection models (i.e., MCNet [33], EDRNet [10], DACNet [11], FDSNet [21], LWNet [33]), efficient models (i.e., BASNet [48], FFRNet [49], ACCoNet [50], PGNet [29], UCTransNet [51]), and lightweight models (i.e., HVPNet [25], STDC2-Seg [22], SAMNet [26], AttaNet [52], PP-LiteSeg [23], CorNet [35], PIDNet-S [24]). It is noted that we generate single-channel predictions at the output layer of some semantic segmentation models (i.e., [21, 22, 23, 24, 52]), and we employ the same loss function and training strategy as the proposed model for these models during the training process.

#### IV-D1 Quantitative Comparison

Table I presents the quantitative comparison results of various models on accuracy (\(MAE\), \(S_{m}\), \(F_{\beta}^{w}\), and \(E_{\phi}\)) and running efficiency (#Params, FLOPs, and FPS). It is observed that the proposed GCANet obtains the best trade-off between accuracy and efficiency. Compared with the defect detection model DACNet [11], the proposed method only decreases by 0.96%, 0.62%, and 0.38% on \(F_{\beta}^{w}\), \(S_{m}\), and \(E_{\phi}\) on SD-saliency-900, but with \(98\times\) fewer FLOPs, \(53\times\) fewer parameters, and \(7\times\) faster speed. The proposed method is even superior to [11] in terms of \(F_{\beta}^{w}\), \(S_{m}\), and \(F_{\beta}\) on Magnetic tile and DAGM 2007. Although lightweight defect detection models such as FDSNet [21] have achieved faster speed with fewer parameters and FLOPs, their performance is much lower than that of other defect detection methods (e.g., MCNet, EDRNet, and DACNet). The proposed method also achieves competitive results in defect scenes in comparison with the other efficient models. Meanwhile, the experimental results also show that most existing lightweight models perform poorly in defect scenes. This suggests that these lightweight methods are not appropriate for defect detection, since defect objects are more complex than other objects. Compared with these methods, our lightweight method obtains better performance in defect scenes.

Fig. 3: Illustration of the trade-off between accuracy and efficiency for different methods. The weighted F-measure (\(F_{\beta}^{w}\)) is the average of that on three defect datasets.

Fig. 3 intuitively illustrates the trade-off comparison of various methods between accuracy (average \(F_{\beta}^{w}\) of the three datasets) and running efficiency (#Params). The proposed method (the red dot) lies at the top-left corner, which clearly suggests that our method achieves competitive performance with fewer parameters. In addition, Fig. 4 indicates that the proposed GCANet (the red curve) also obtains competitive performance with respect to the PR and F-measure curves over the three defect datasets. To demonstrate the robustness of the proposed method against background interference, we further compare the performance of various methods on SD-saliency-900 with severe salt-and-pepper noise (\(\rho=20\%\)) added, similar to previous works [10, 11]. The experimental results in Table II suggest that the proposed GCANet is less affected by interference noise than the other methods, with its performance only reduced by 1.46%, 1.29%, and 0.37% on \(F_{\beta}^{w}\), \(S_{\alpha}\), and \(E_{\phi}\), respectively.
Overall, compared with other methods, the proposed method displays competitive performance on the six evaluation metrics while maintaining a lower computational overhead. This implies that the proposed GCANet is an efficient lightweight defect detection model.

#### IV-D2 Visual Comparison

Fig. 5 displays some detection results of various methods on the three defect datasets. The visualization results show that the complexity of defects may lead to some false or incomplete detection results. For example, when there are some distractions in the background (\(1^{\rm st}\) and \(3^{\rm rd}\) rows), some methods are prone to making false predictions, detecting them as defective regions. Some methods fail to detect some defect details, such as small defects (\(4^{\rm th}\) and \(9^{\rm th}\) rows) and fine scratches (\(5^{\rm th}\) and \(6^{\rm th}\) rows). For some low-contrast defects (\(2^{\rm nd}\), \(7^{\rm th}\), and \(8^{\rm th}\) rows), which are highly similar to the background, some methods cannot detect them accurately either. In contrast, the proposed method obtains more precise detection results than the other methods on those challenging defects, which demonstrates the effectiveness of GCANet in defect scenarios.

### _Ablation Study_

To investigate the validity of the various modules, the following ablation experiments are conducted on SD-saliency-900 and DAGM 2007, respectively.

#### IV-E1 Ablation for Network Architecture

To prove the validity of each module in GCANet, we perform an ablation study for the network architecture. Specifically, the baseline adopts the same encoder and decoder as the lightweight SOD model SAMNet, except that the input is not downsampled in the first encoder stage. Based on that, we introduce the lightweight transformer encoder after the last encoder stage, and then add the different modules, respectively.

Fig. 4: Comparisons of various methods in terms of PR curves and F-measure curves on different defect datasets, respectively. Note that the closer the PR curve is to the coordinates (1,1), and the higher the F-measure curve, the better the performance. The top five methods are highlighted in different colors.

As shown in Table III, the injection of global information into each decoder block greatly improves the performance of the baseline. But as illustrated in Fig. 6, the existence of background interference brings some wrong or incomplete detection results. Similarly, the introduction of the CRA module also brings significant performance improvements while incurring only little computational overhead compared to the baseline. It can make the network focus more on defect objects, as illustrated in Fig. 6. By simultaneously introducing CRA and injecting global information into each decoder block, the model can detect more accurate defect regions, yielding the best performance. This indicates that strengthening the feature representation and introducing global information into the network are effective for defect detection.

#### IV-E2 Effectiveness of DSA

To prove the effectiveness of the proposed DSA module, it is compared with the traditional MSA [32], as shown in Table IV. Specifically, we replace the DSA in each transformer block with MSA (denoted as w MSA). It is found that the proposed DSA reduces FLOPs by 50M but obtains better performance compared with MSA. Furthermore, we compare the transformer encoder with the PPM [31] in Table IV, where we replace the transformer encoder with the PPM (denoted as w PPM).
The experimental results show that the transformer encoder obtains better performance than the PPM in the network. The main reason is that pooling operations damage local details, resulting in poor performance. The experimental results demonstrate the effectiveness of DSA.

Fig. 5: Visual comparisons of different models on SD-saliency-900 (\(1^{\rm st}\sim 4^{\rm th}\) rows), Magnetic tile (\(5^{\rm th}\sim 7^{\rm th}\) rows), and DAGM 2007 (\(8^{\rm th}\) and \(9^{\rm th}\) rows), respectively.

## V Conclusion

In this paper, we propose a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects. GCANet introduces a novel transformer encoder to learn global information, remedying the limitation of CNNs lacking global information. In each transformer block, the Depth-wise Self-Attention (DSA) module produces global weights through element-wise interaction between features, implicitly modeling global dependencies. In addition, the Channel Reference Attention (CRA) module is embedded before each decoder block for feature enhancement. CRA utilizes the interaction of cross-layer features to suppress background interference and mine important defect details. Extensive experiments on three defect datasets demonstrate that the proposed lightweight method achieves promising performance with low computational costs compared to 17 other state-of-the-art methods.
Surface defect inspection is a highly challenging task, for instance when surface defects appear only weakly or against complex backgrounds. High-accuracy defect detection methods require expensive computation and storage overhead, making them less practical for some resource-constrained defect detection applications. Lightweight methods, however, while reducing the number of parameters and achieving real-time inference speed, suffer from low detection accuracy in complex defect scenarios. To this end, we develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection with an encoder-decoder structure. First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module. The proposed DSA computes element-wise similarities along the channel dimension
2310.10653
Secure and Trustworthy NFC-based Sensor Readout for Battery Packs in Battery Management Systems
Wireless Battery Management Systems (BMS) are increasingly being considered for modern applications. The ever-increasing complexity and production costs of BMS modules and wired connections have resulted in a necessity for new ideas and approaches. Despite this growing trend, there is a lack of generic solutions focused on battery cells' sensor readout, where wireless communication allows for a more flexible and cost-efficient sensor installation in battery packs. Many wireless technologies, such as those that use the 2.4 GHz frequency band, suffer from interference and other limitations. In this article, we present an alternative approach to communication in BMS that relies on the use of Near Field Communication (NFC) technology for battery sensor readouts. As an answer to the rising concern over counterfeited battery packs, we consider an authentication schema for battery pack validation. We further consider security measures for the processed and stored BMS status data. To show that a general BMS application can make use of our design, we implement a BMS demonstrator using the targeted components. We further test the demonstrator on the technical and functional level, also performing an evaluation of its performance and energy usage, as well as a security threat model analysis.
Fikret Basic, Martin Gaertner, Christian Steger
2023-08-31T22:55:21
http://arxiv.org/abs/2310.10653v1
# Secure and Trustworthy NFC-based Sensor Readout for Battery Packs in Battery Management Systems

###### Abstract

Wireless Battery Management Systems (BMS) are increasingly being considered for modern applications. The ever-increasing complexity and production costs of BMS modules and wired connections have resulted in a necessity for new ideas and approaches. Despite this growing trend, there is a lack of generic solutions focused on battery cells' sensor readout, where wireless communication allows for a more flexible and cost-efficient sensor installation in battery packs. Many wireless technologies, such as those that use the 2.4 GHz frequency band, suffer from interference and other limitations. In this article, we present an alternative approach to communication in BMS that relies on the use of Near Field Communication (NFC) technology for battery sensor readouts. As an answer to the rising concern over counterfeited battery packs, we consider an authentication schema for battery pack validation. We further consider security measures for the processed and stored BMS status data. To show that a general BMS application can make use of our design, we implement a BMS demonstrator using the targeted components. We further test the demonstrator on the technical and functional level, also performing an evaluation of its performance and energy usage, as well as a security threat model analysis.

Battery Management System, Security, Sensor, Wireless, Near Field Communication, Anti-Counterfeiting.

## I Introduction

Green energy and sustainability are becoming more important than ever before, with Battery Management Systems (BMS) also seeing increased interest from the industry and the research community. This has resulted in higher digitization and use-case expansion, which required additional attention to the already complex systems that utilize BMS [1]. BMS play an important role in many systems today that rely on the use of large battery packs. They are often mentioned as one of the main critical controller components of smart power grids and electric or hybrid vehicles [2, 3]. They are used as control devices in such systems, where they regulate the usage of individual battery cells by offering monitoring and diagnostic services, as well as the possibility to track the lifetime usage of each individual cell [4]. To offer safer and more efficient energy usage, a BMS also handles cell balancing control during the charging and discharging cycles. A BMS can be deployed in different topologies and usually consists of various devices. It generally contains a central BMS controller, which, in a modulated setting, communicates with individual Battery Cell Controllers (BCCs). The BCCs help in relaying diagnostic data back to the BMS controller through monitoring and control of individual battery cells. These cells are packed together in parallel or serial connections inside battery modules, with accompanying temperature, pressure, or other sensors. Data received from these sensors is critical in preventing dangerous incidents like thermal runaway, which happens due to a rapid increase in battery temperature [5]. A BMS controller is able to derive diagnostic data based on the monitored and measured data from the battery cells, e.g., State of Charge (SoC), State of Health (SoH), etc. [1, 2, 3]. However, these controllers can only act as long as they have the correct information on the current state inside the battery pack, i.e., they are dependent on the sensor readouts.
Two main factors influence the accuracy of these readings: (i) the number of sensors used, and (ii) the relative position of a sensor to its measurement target. Traditionally, BMS use wired connections to handle the communication between individual modules, i.e., between the BCCs and the battery cell sensors. This, however, imposes several limitations, as shown in Table I. In order to alleviate the aforementioned wired limitations, it is possible to replace wired networks with wireless technology. We see several challenges that need to be addressed when choosing an appropriate wireless technology, as indicated in Table II. Different communication technologies have already been tested in an attempt to solve the mentioned limitations. Bluetooth [6, 7] and ZigBee [8] have been tested and evaluated within the BMS domain. However, while they give promising results for the data throughput, these studies fail to address the main challenge of the mentioned technologies, that being interference. Security is also only partially covered, mostly under the given technologies' security stack, with ZigBee being especially subject to limited throughput and security concerns [9]. Schneider et al. [10] address most of the concerned challenges, but do not focus on the security aspects and the newer modulated BMS considerations. We further discuss the BMS wireless and security findings, and their relevance to our work, in Section II. To address the BMS wired limitations and wireless requirements, we propose a system architecture for modular BMS that uses NFC technology for the battery module cells' sensor readout. This includes extending the conventional BCC by adding a connection to an active NFC reader. The battery cells' sensors would, hence, only connect to the provided passive NFC device per battery pack module, not requiring an additional connection or power draw for their functionality. Security and data processing are handled via an additional Microcontroller Unit (MCU). These additional components would form a new overall control block together with the BCC. This block would still remain modulated, i.e., it would maintain the same input and output connections. A BMS with the presented architecture is able to perform security operations on the logged status data with a minimal overhead increase, while retaining its original functionality. **Contributions:** In this work, we present an answer to the listed challenges by proposing a design model that utilizes NFC as the chosen technology for the wireless communication between the battery cell sensors and the BCCs. By introducing NFC, we are not only able to answer the design restrictions imposed through the use of wired communication, but also to address the challenges introduced when using wireless communication technologies. After providing the relevant background information and presenting related work in Section II, we make the following contributions in this article: * We present a novel approach for the NFC-based BMS battery cells' sensor readout by indicating design points for the device use and placement, as well as the exchange protocol between the modules (Section III). * We address the battery cell source validity [20, 21] question by proposing an authentication model for verifying individual battery cell modules. BCCs should communicate only with trusted battery cells, for reasons of both security and safety (Section IV-A).
* We investigate the security protocol and design requirements for the purpose of storing and securely handling the derived BMS status data as the next operational step after the sensor readout (Sections IV-B & IV-C). * We experimentally show the feasibility of our design by realizing a BMS system prototype and implementing the NFC and security control functionalities. The system is further evaluated on: (i) its security dependability through a threat model analysis, (ii) time measurements for the individual BMS NFC readout phases, (iii) a protocol overhead analysis for the secure BMS monitoring and diagnostic data logging, (iv) the system energy consumption, and (v) the potential NFC sensor readout throughput (Section V). This article presents an extended version of the published paper [22] that includes a more detailed analysis and investigation of the proposed design specifications related to the NFC integration for the BMS inter-module communication and sensor data readout. On top of the design and authentication approach presented in that paper, an additional security investigation and evaluation for the purpose of securely logging sensor monitoring and diagnostic BMS data were conducted. Additionally, in relation to the NFC system design, a throughput and energy consumption analysis has been conducted as well.

## II Background and Related Work

### _Wireless Battery Management Systems (WBMS)_

The increase in the number of battery cells in modern BMS has resulted in an increase in the number of used component devices, especially regarding intermediate control components. This all further led to an increase in expenses and complexity in cable installation. New topologies and architectures focused on using wireless technologies had to be introduced. Primarily, they were seen as an extension to the already different derivations of modular and distributed BMS [4]. Some of the pioneering research includes the realization of a WBMS under custom chips and protocols by M. Lee et al. [12]. In this work, they introduce the WiBaAN protocol that works in the 900 MHz band with a data rate of up to 1 Mbit/s, allowing for direct communication between a large set of battery cells and the main BMS controller. However, while novel at the time of publishing, the relatively low data throughput rate, the used frequency band, the manufacturing costs, and the lack of newer research updates regarding modulated BMS topologies could present a limitation for modern BMS derivations. Several design models have been proposed and investigated in the domain of the 2.4 GHz frequency band. Shell et al. [7] present a Bluetooth-based BMS design approach. They show its feasibility under the standard BMS environment and commercial applications. De Maso-Gentile et al. [6] present a different design approach that is more focused on applying Bluetooth gateway access to conventional BMS CAN infrastructures. However, most of the proposals based on Bluetooth technology are primarily centred on intra-module communication and do not account for direct battery sensor readout. Bluetooth, specifically the newer BLE, has a limited throughput rate which can often fluctuate due to noisy channels, even in the newer 5.x standards [23]. This can make it difficult to fulfill the necessary standard requirements for data transfer under conventional BLE topologies. Research has also been conducted using ZigBee technology by Rahman et al. [8].
While it showed potential in its applicability, ZigBee would suffer from restrictions due to its low data rates and unstable channels. Wi-Fi was also considered under specialized BMS investigations. Gherman et al. [15] propose a WBMS built on a single chip that uses Wi-Fi as the communication technology for their demonstrator. A different kind of research, more focused on smart cells, was proposed by Huang et al. [24]. Here, the communication between the individual cells and the main controller is done over a Wi-Fi channel, with the BMS controller using the channel for the cell balancing control. The focus of this research was on cell balancing and smart cells, with Wi-Fi being mostly used as a demonstrative wireless technology, with no significant focus on the wireless aspects and challenges. The presented research gives an insight into the communication between the BCCs or similar modules and the main BMS controller using the prescribed wireless technologies. Also, as mentioned in Section I, 2.4 GHz technologies generally suffer from an increased chance of interference in complex environments, e.g., Electric Vehicles (EVs), where many devices and modules could compete over the use of the bandwidth channels. This work extends the wireless usage of the BMS components through NFC utilization, by also focusing on the often overlooked communication between the BCC and the sensors. BMS today are also often considered for cloud service extensions [25, 26]. These solutions offer the distribution of BMS modules over a wider area and, hence, further reduce the use of wires and the deployment complexity. They also provide functionality extensions. Cloud services aim to cover the calculation of the important State of Health (SoH) and State of Charge (SoC) BMS functions on a more efficient cloud basis, by using different data sources and even resource-demanding machine learning algorithms, which otherwise would not be possible on resource- and process-constrained BMS MCU field controllers. These services, however, are outside the scope of this work, as they focus mainly on the external, rather than on the internal, BMS communication.

### _Security in Battery Management Systems (BMS)_

The current research work related to BMS security is limited, as it is a relatively novel topic that has only recently started sparking interest. Nonetheless, some research has already been done focused on different aspects of BMS security design. Sripad et al. [18] present an investigation of the cybersecurity threats of BMS, particularly of EVs, especially related to their interaction with battery packs and to overcharging and discharging manipulation concerns. The FACTS approach proposed by Khalid et al. in [20] deals with a formal threat analysis of BMS by investigating and comparing different existing frameworks. It also goes into a detailed analysis, and points out and classifies important general security threats found in a BMS. Further BMS threat analysis models have also been proposed by Kumbhar et al. [19]. This work also takes a broader direction and includes a security overview of BMS Internet-of-Things (IoT) solutions. A similar work that looks at the IoT security perspective with BMS and their related environments is by Lopez et al. [27]. While most of the mentioned publications address a broad BMS security analysis topic, they still serve as a good starting ground to complement the presented work.
### _Near Field Communication (NFC) Applications_

NFC is a high frequency (HF) communication standard based on Radio Frequency Identification (RFID), which operates on the 13.56 MHz frequency band, has a typical range of up to \(10\,cm\), and, depending on the standard, supports data rates of up to 848 kbit/s [28, 29]. It handles different modes of communication, among them the communication between an active reader and a passive tag device. Like RFID, it supports energy harvesting from the active to the passive device during the data exchange. The use of NFC technology in more extensive system infrastructures has already been investigated. Specifically, research presented by Ulz et al. [30] proposes the use of NFC-based communication for robot-machine interaction in an Industry 4.0 setting. Additionally, work by Chen et al. [31] investigates secure authentication and anti-counterfeiting methods using RFID. Alzahrani et al. [32] propose an NFC-focused anti-counterfeiting system. Despite a large amount of research being done both for general wireless BMS and for the integration of NFC in similar environments, not much specific work has yet been done that combines these two fields of interest, which is also indicated by the recent survey paper by A. Samanta and S. S. Williamson [9]. Work done by Schneider et al. [10] focuses largely on this field by also proposing a design approach for wireless BMS battery sensors utilizing the same RFID technology. However, one of the main focal points in that paper is placed on the issues caused by galvanic isolation. Moreover, due to the date when the paper was published, it does not account for the newer BMS modular architectures and modern NFC derivations, alongside the security aspects. In this work, we try to bridge that gap and show the potential of using NFC in hard-to-reach sensor environments while at the same time giving attention to the security requirements.

## III Design of the Novel BMS NFC Sensor Readout

For the targeted design architecture, we divide the entire system into three main modules:

* _BMS controller_
* _Cell control board_ (CCB)
* _Battery module_

The modules, as well as their placement and connections, are illustrated in Fig. 1. Here, the BMS controller plays the role of the main control unit responsible for receiving and interpreting diagnostic data and conducting the necessary safety control actions. It can contain one or multiple operational MCUs. The BMS controller communicates with the CCB, which contains a BCC, an NFC reader as the communication interface, and, optionally, an additional control MCU for the protocol handling, connected via a supported communication bus protocol, e.g., the Serial Peripheral Interface (SPI). In a traditional design, the CCB usually contains only a simple BCC aimed primarily at the BMS functional support. To supplement the communication handling requirements for the NFC reader, as well as the preceding and subsequent data and security processing, we introduce another process MCU that works either along with the BCC chip or on top of the BCC functional block. In most scenarios, the communication between the CCBs and the BMS controller is established using either a Controller Area Network (CAN) protocol or some other form of network connection, like Ethernet, Transformer Physical Layer (TPL), or SPI [33]. One BMS controller can communicate with multiple CCBs, depending on the system and protocol limitations [1, 4].
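To make the described module hierarchy concrete, a minimal sketch is given below; the class and field names are purely illustrative and do not correspond to any BMS standard or vendor API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BatteryModule:
    """Battery cells plus sensors, exposed to the CCB through a passive NTAG."""
    cell_count: int
    sensor_ids: List[str]    # e.g. temperature/pressure sensors on the I2C bus
    ntag_uid: bytes          # unique identifier, used later for authentication

@dataclass
class CellControlBoard:
    """BCC + active NFC reader (+ optional process MCU) serving one battery module."""
    bcc_id: str
    module: BatteryModule

@dataclass
class BMSController:
    """Central controller; talks to multiple CCBs over CAN/TPL/Ethernet/SPI."""
    bus: str                 # e.g. "TPL" or "CAN"
    ccbs: List[CellControlBoard] = field(default_factory=list)
```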
The battery module contains battery cells, sensors, and an NFC communication interface to the CCB. In the presented design, this interface is an _NFC-Tag_ (NTAG). The communication for the NTAG and sensors is primarily done with the Inter-Integrated Circuit (I2C) protocol. For the charging and discharging cycles, as well as the related voltage readings, we still rely on the hardwired measurements from conventional modulated BMS designs, as these are usually less demanding in terms of placement and installation compared to the investigated sensor connections, and would otherwise require special handling.

### _NFC Communication_

To make the communication between the BCCs and battery cells over the wireless NFC technology possible, appropriate devices and communication modes need to be chosen. In the presented design, the _Reader/Writer mode_ is chosen as the mode of communication. The NFC reader plays the role of the active device that is connected to a specialized controller BCC, as well as to an MCU for pre-processing and security operations. In traditional designs of the modulated BMS, this MCU can also already be found as an integral part of the BCC. It is, however, vital that the main functionality conditions are fulfilled, namely that the communication over NFC can be accurately processed and handled, and that the security operations can be carried out. Before the communication begins, the NFC reader needs to have discovered the targeted NTAG(s) using the discovery loop process. Immediately afterwards, the authentication process starts. It is important to make sure that no formal communication can begin before the battery modules have been authenticated, so as to avoid any potential vulnerabilities that might arise afterwards. Following the successful authentication, the NTAG proceeds to initiate self-configuration and prepares to communicate with both the sensors and the NFC reader. Since, in a standard environment, the same devices are going to be used for the subsequent measurement readings as well, the initialization and configuration steps can be cached and therefore omitted.

### _Energy Harvesting and Positioning_

A disadvantage that NFC has compared to most other wireless technologies is its relatively short range. This is of no issue in the presented design, as the BCCs and battery cells are usually tightly packed and installed together. The NFC in the presented design uses the energy harvesting feature to power up the NTAG from the reader. The energy harvesting is additionally used to power the necessary readout of the adjacent sensor. This feature limits the distance between the antennas. Depending on the environment, the maximum distance is approximately \(5.4\,cm\). For feasible communication and an optimal initialization time, we opted to use a distance of \(2\,cm\). The NTAG is not powered right at the boot-up of the system. It first needs to check if enough energy can be received from the present NFC field. The harvested energy also needs to match the internally pre-configured voltage level. A voltage level of up to 3 V can be supplied, which was deemed sufficient for the sensor readout operation. As both the sensor and the NTAG reside on the battery module, it would be possible for them to be directly powered, as is done in a conventional design. However, this characteristic is not present in our design model, as wiring to the battery modules would violate one of the design requirements, set on reducing the extent of the necessary wires.
### _Data Exchange Protocol_

The NFC reader is intended to establish the wireless connection to the dedicated battery sensors of the battery module. It plays the role of the active device, meaning that it initiates the communication. The sensors are able to transmit their values to the passive NTAG using the I2C connection. In this scenario, the master mode is used and the NTAG takes the role of the adapter module. The data is passed directly between the sensors and the NFC interface. Static Random Access Memory (SRAM) storage is used for the intermediate data placement before the read operation takes place. Additional commands had to be provided for the interaction on the battery module's I2C bus, as well as for the data transmission. These include: (i) the I2C read & write commands, and (ii) the content read command, which allows direct content reads from the intermediate SRAM storage. No MCU or any additional component is needed here, making the design relatively simple and cheap.

Fig. 1: Proposed BMS modular design architecture utilizing NFC components.

Fig. 2 shows a swimlane sequence diagram that encapsulates all the main operation processes intended to be covered during a common data exchange run. It covers the following steps (a minimal sketch of this flow is given after the list):

1. Configuration: the initialization step at the session start, intended for loading up all the necessary configuration and operational material. It is expected to be run only once, usually at the start-up of a system (e.g., the start of a vehicle). However, certain options could be cached, and hence pre-configured, with the aim of reducing the overall process execution time.
2. Validation & registration: the CCB instructs its NFC reader to find and assign NTAGs, first by using a discovery loop. Afterwards, validation takes place using the proposed signature authentication algorithm described in more detail in Section IV-A.
3. Battery cells measurement readout: starts with the initialization step aimed at the measurement configuration. During this one-time procedure, the NTAG initializes its communication with the sensors, but also enables the energy harvesting feature covered in Section III-B. After the initialization is finished, a process loop is run that, based on the sampling time, periodically reads out and processes the battery cell measurement data. The cells' data are further covered by the conventional BCC monitoring and diagnostic operations.
4. Data protection: an optional step for the purpose of securing the read measurement and BCC-derived data. These operations are discussed in Sections IV-B & IV-C. If used, it is intended to be included together with the measurement process loop.

Fig. 2: General system design representation using a swimlane sequence diagram showing the communication flow between the CCB with its NFC reader, and the battery module with its NTAG and sensor. The process follows three main operational steps, with the appliance of the security operations being an optional step.
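The following is a simplified sketch of the exchange flow above. All device methods are hypothetical placeholders for the actual reader and NTAG commands, not a real NFC driver API; the `verify_signature` and `parse_measurements` callables stand in for the authentication of Section IV-A and the BCC-side data handling, respectively.

```python
import time

def data_exchange_session(reader, valid_uids, sample_period_s,
                          verify_signature, parse_measurements, protect=None):
    """Sketch of steps 1-4: configure, validate, then periodically read out
    sensor data staged in the NTAG's SRAM (all device calls are hypothetical)."""
    reader.load_configuration()            # step 1: one-time session configuration
    tag = reader.discovery_loop()          # step 2: find and assign the NTAG
    if tag.uid not in valid_uids or not verify_signature(tag):
        raise RuntimeError("untrusted battery module")  # abort before any exchange

    tag.check_energy_harvesting()          # step 3: one-time measurement init
    tag.configure_i2c_sensors()

    while True:                            # periodic measurement process loop
        raw = tag.read_sram_content()      # direct content read from SRAM
        sample = parse_measurements(raw)
        if protect is not None:            # step 4 (optional): see Sec. IV-B & IV-C
            sample = protect(sample)
        yield sample
        time.sleep(sample_period_s)
```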
## IV Security Mechanisms

### _Battery Module Authentication Protocol_

In terms of security, NFC's advantage over the use of other wireless technologies lies in both its short range and its frequency band. This property limits the list of technologies that a potential attacker could use to attack the system. Since battery modules are usually enclosed in a protective case together with a BCC, the main potential attack vectors on these modules would be the ones initiated through counterfeiting [27]. It is important that only battery cells that come from valid and approved manufacturers are installed, as inadequate battery cells could potentially lead to hazards through compromising the BMS controller, or even going higher to the high-speed network outside the BMS environment [33]. To be able to securely verify that the battery modules are valid, we integrate the use of an authentication protocol in our design. This process is achieved by verifying a value that needs to be unique to each device. Since NTAGs are usually shipped with a Unique Identifier (UID) value, we can use it as the input for an Elliptic Curve Digital Signature Algorithm (ECDSA). In our design, we use the _secp128r1_ curve as the Elliptic Curve (EC) function, which offers a good balance between performance and output sizes. The signature value, which is calculated with a private key during the manufacturing process or subsequently updated, is then stored in a protected memory space located on the NTAG chip. The BCC needs to have access to the public key, either pre-embedded or accessed through other secure channels. The authentication protocol is shown in Fig. 3. Before the signature verification takes place, the UID validity is first checked against the list of valid devices. A failure of either check can lead to a warning message presented through the BMS controller, or to a complete shutdown of the system, depending on the targeted use-case.
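A minimal sketch of this check is shown below, using the Python `cryptography` package for readability. Note two assumptions: that library does not ship _secp128r1_ (in our implementation the curve is handled by the _ecc-nano_ library on the MCU), so the sketch would have to run with a substitute curve such as SECP256R1, and the exact signing input and hash are device-specific.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def authenticate_module(uid: bytes, signature: bytes, public_key, valid_uids) -> bool:
    """Sketch: UID allow-list check, then ECDSA signature verification.
    public_key is an EllipticCurvePublicKey of the manufacturer."""
    if uid not in valid_uids:              # step 1: UID validity check
        return False
    try:                                   # step 2: verify the stored signature
        public_key.verify(signature, uid, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```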
### _BMS Status Data Protection_

To protect the transmitted battery sensor data and the derived diagnostic data, it is necessary to apply different security measures. These measures would present an answer to the aforementioned security requirements and would be handled as an extension to the current BMS communication design, but also to its data acquisition protocol. Primarily, for the BMS use-case, it is important to fulfill the integrity and availability security requirements, since changes in the accuracy of the data and its sampling rate directly affect the output of the BMS control decisions. Data confidentiality also plays an important role, since the exposure of BMS data to unwanted third parties can also lead to the exposure of users' privacy, e.g., a driver's behavior in electric vehicles. Based on the design from Section III and Fig. 1, an extension in the form of a security module would be necessary as part of the CCB. This would free the design space of the battery module from otherwise additional hardware modifications. It also means, however, that the transferred sensor data is not going to be encrypted or otherwise secured on the analogue connection between the battery module and the CCB, i.e., either through the proposed NFC interface or an adequate wired transfer. This is deemed to be acceptable, as the CCB and the battery module are usually tightly coupled and enclosed together, and attacks on those connections from the outside would be either difficult or even unfeasible. What is therefore important before the data transfer takes place between these modules is that the authentication of the battery module was successful, as described in Section IV-A. Fig. 4 shows the additional operational steps for secure data handling. The input of the key would take place at the start of the measurement session and would be run only once for that session. Data sampling would contain the main functionality for receiving and applying the monitoring and diagnostic operations from the standard BCC. Before the security operations can be applied, the data first needs to be structurally prepared, e.g., by using compression or padding, during the data processing step. Finally, the designated security operations are run. Placing the security operations on the CCB rather than on the main BMS controller adds several benefits. Mainly, it presents an additional layer of security for the otherwise different and uniform communication interfaces and standards used for the communication between the CCBs and the BMS controllers. It also frees up resources of the main BMS controller which would be necessary for secure storage in the case of lifetime logging operations. Such data could then be stored on the memory units connected to the individual CCBs, encrypted and integrity-protected against malicious modifications. This is especially important under the modulated topology, where one BMS controller can communicate with multiple CCBs, and would therefore reduce the computational and storage constraints on the main BMS controller. The CCB's controller needs to contain the necessary hardware and software components for the targeted security protocols.

### _Security Protocols_

To protect data confidentiality, it would be necessary to employ encryption of the sampled sensor data. Embedded devices rely on the use of either Hardware Security Modules (HSM), Secure Elements (SE), Trusted Platform Modules (TPM), or processor extensions with security function implementations. Security modules under the BMS use-case should be able to provide encryption and decryption operations, and tag verification for the integrity check. The security module also provides other security functions, like a Random Number Generator (RNG), secure boot, and secure key generation and storage, among others. The integrated algorithms are also often hardware-implemented, meaning that they benefit from accelerated operations and physical security considerations. The Advanced Encryption Standard (AES) is often employed for symmetric encryption operations due to its high-security profile and small footprint. AES also benefits from hardware implementations for a faster algorithm execution. During employment, AES would need to run in different modes to provide the encryption operation across a larger set of data. Traditionally, the CBC and CTR modes are used, with Authenticated Encryption with Associated Data (AEAD) also gaining prominence where available, with modes like EAX, GCM, or CCM. To protect the data against modifications, i.e., to guarantee its integrity, it is recommended to apply Message Authentication Code (MAC) calculations. These can be done on an arbitrary length of sampled sensor or diagnostic data, either before or after the encryption has taken place. The calculated MAC bytes would be used for the integrity check. The MAC calculation can be left out in case the affiliated encryption algorithm is from the AEAD group and hence includes an integrity tag check as part of its procedure. These functions are sufficient in providing the necessary BMS sensor data protection intended for either its intermediate storage or further data propagation and processing.

Fig. 4: BMS measurement data sampling and secure handling.

Fig. 3: Sequence diagram of the authentication protocol.
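As an illustration of the described handling, a minimal software sketch of an encrypt-then-MAC step using the Python `cryptography` package is given below. In the actual design these operations run on the CCB's hardware security module rather than in software, and the key handling and record layout here are simplified assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.padding import PKCS7

def protect_sample(sample: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    """Pad to the 128-bit block size, encrypt with AES-CBC, append a CMAC tag."""
    padder = PKCS7(128).padder()
    padded = padder.update(sample) + padder.finalize()

    iv = os.urandom(16)                               # fresh IV per record
    enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ciphertext = iv + enc.update(padded) + enc.finalize()

    mac = CMAC(algorithms.AES(mac_key))               # integrity over the ciphertext
    mac.update(ciphertext)
    return ciphertext + mac.finalize()
```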
## V Evaluation

### _Test System Implementation_

To test the presented design model, we implemented a test suite that contains the necessary BMS modules, as well as the additional NFC equipment. We aimed to use NFC modules that support the latest _NFC Type 5 Tag_ technology. Furthermore, the used components are automotive-grade where applicable, for the purpose of replicating a real-world use-case as closely as possible. To that end, all devices used, except for the temperature sensor, come from the NXP Semiconductors lineup of products. The system is shown in Fig. 5. As the main BMS controller, we use an S32K144 MCU board. It communicates with the CCB via the FRDMDUAL33664 shield over the TPL protocol. It is further connected to an RD33771CDST that houses an MC33771C, which functions as a BCC. The CCB contains an automotive NFC reader for handling the NFC transmissions and another S32K144 as the MCU for programming and testing. The MCU board is connected with the NFC reader via SPI. The battery module consists of a BATT-14CEMULATOR that serves as a battery emulator, an NTAG component as the passive NFC device, and a BMP180 temperature sensor. The NFC devices are of the NCF33xx product family. The antennas of the active NFC reader and the passive NTAG device are placed parallel to each other, with the reader placed at a short distance over the NTAG, corresponding to the positioning discussion in Section III-B. The sensor is placed in close proximity to the NTAG device. For the setup, the battery emulator's own temperature sensor was disabled from transferring the temperature data, with the readings instead routed through the attached NTAG component and the added BMP180 temperature sensor that communicates with the NTAG via the I2C protocol. Hence, the BCC is able to receive the emulated cell voltage data from the battery emulator, while the temperature sensor data is sent through the NFC interface. Both the temperature and the cell voltage data are first received by the BCC and then transmitted to the BMS controller. For the authentication protocol, we base our implementation on the _originality signature_ feature found on NXP's RFID devices. The signature calculation and verification are handled via the _ecc-nano_ library [34]. Elements of the project development and evaluation were handled in a recent master's thesis [35]. BMS status data protection: for this investigation, a security module that comes integrated with the S32K144 was used to provide the necessary security operations. The offered functionalities of this module are based on the Secure Hardware Extension (SHE) specification [36], and they include, among others: secure key derivation and storage, a provided True Random Number Generator (TRNG), the AES encryption algorithm with the CBC mode, and a Cipher-based MAC (CMAC) for data integrity and authentication.

### _NFC Sensor Readout Process Time Measurements_

We divide the main BMS monitoring process into two phases: (i) the _Initialization phase_, executed only once for device preparation and configuration, and (ii) the _Monitoring phase_, a continuous action that is called at every sample step to measure and retrieve the cell sensor data. The individual steps, as well as their time measurements, are shown in Fig. 6. All represented time values are median values taken after multiple measurements. The process starts after the NTAGs have already been discovered. As the first step, the authentication protocol is run. This protocol run includes sending an authentication request from the NFC reader, the response from the NTAG, and the verification calculation on the MCU that is connected with the NFC reader.
The authentication step showed a median time of \(369.30\pm 0.37\,ms\), with the majority of it being spent on the verification process. The relatively high execution time is attributed to this step being very hardware- and software-dependent, with optimizations being possible by using dedicated security components. With the NTAG verified, the energy harvesting check is handled, which lasts \(19.64\pm 0.25\,ms\). Finally, the NTAG operation initialization is run, which measured \(29.16\pm 2.44\,ms\), followed by the sensor initialization that took \(116.1\pm 1.19\,ms\). After the initialization phase is finished, there is no need to reconfigure the devices during the system run. For the monitoring phase, sensor measurements are read and transmitted to the BCC using NFC communication. This phase is repeatable, with each action showing a time of \(27.2\pm 0.54\,ms\).

Fig. 5: Evaluation setup for the BMS NFC sensor readout.

Fig. 6: Time measurement results for the Initialization phase (Authentication, Energy Harv., NTAG Init., Sensor Init.) and Monitoring phase (Sensor Meas.).

### _Security Threat Analysis_

The proposed design has been subjected to a security threat analysis, for the purpose of evaluating the achieved security protection [37]. This has been conducted by listing the individual _Assets_ (A), _Threats_ (T), and _Countermeasures_ (C). To better illustrate the carried-out process, a visual representation of the targeted use-case system model was made using a Data Flow Diagram (DFD), which can be seen in Fig. 7. Here, a demonstration is made with the indicated threats, their influenced assets, and the answering countermeasures, also illustrating their potential points of impact. The threats are derived based on the carried-out security requirements analysis, as well as the basic security threats found in common BMS models in prior research works [18, 19, 38]. In our security model, we make the following assumptions: (i) a battery module can only be communicated with via an adequate BCC, (ii) both the CCB and the battery module are enclosed in a chassis and external communication can only be achieved through the BMS controller, (iii) every newly added and unknown battery module is considered untrustworthy, (iv) the CCB is deemed to contain adequate hardware and software components for security protection and calculations. We indicate three important assets that need to be protected:

* (A1) _Sensor data_: data retrieved from the cell sensors.
* (A2) _System integrity_: hardware and software integrity.
* (A3) _Diagnostic data_: status data derived from the monitored battery readings.

An attacker would look to exploit a vulnerability of the system, i.e., the potential to conduct a successful attack. Each attack is tied to a threat and the assets that are targeted by it. In the following, each separate threat is listed with a short description, the assets that it impacts, and the countermeasures:

* **(T1)** _Battery control obstruction_ \(\mapsto(A1),(A3)\) A potential threat that disturbs the cell balancing control through a fake source of sensor and diagnostic data. Mitigated through **(C1)** _Authentication through signature validation_ in the proposed design. Here, BCCs validate every individual battery module, ensuring that the BMS controller only receives authorized status messages.
* **(T2)** _Tamper with BMS status messages_ \(\mapsto(A2),(A3)\) A similar threat to (T1), but one that is more covert and tries to tamper with the data rather than obstruct it.
Also mitigated via the **(C1)** countermeasure.

* **(T3)** _Backdoor access_ \(\mapsto(A1),(A2)\) An attacker might try to gain access to the system through either the NFC interface or a counterfeited battery module. Protected through the **(C1)** countermeasure, but also by reducing the attack proximity through the **(C3)** _NFC physical layer characteristics_.
* **(T4)** _Remote attack_ \(\mapsto(A1),(A2),(A3)\) Various attacks can be launched from outside of the system on unprotected channels by using wireless communication. In this context, we primarily consider probing attacks that target the NFC channels. **(C2)** _Cell pack sealing_ protects against remote attacks by isolating the interfaces via material shielding. Also, **(C3)** would hamper the possibility of such an attack through the frequency spectrum and range limitations.
* **(T5)** _BMS log data compromise_ \(\mapsto(A1),(A3)\) Such an attack can take place on the CCB, either from local access or from possible backdoor access via an exposed (T3), or through (T4). This includes both the privacy leak of the associated system through the compromise of the read raw data, and also any kind of unauthorized changes to the data intermediately stored on the CCB. The data can be protected by **(C5)** _Data security measures_, which include the prescribed encryption, authentication, and integrity validations.

### _Data Security Overhead Analysis_

An evaluation was conducted for the purpose of testing the BMS data security handling. This evaluation includes a model that was built to depict a real-world representation of the BMS data structure, which includes both the monitoring and diagnostic data components. The evaluation follows the design principles described in Section IV-B and the security protocol considerations in Section IV-C. To fulfill the security conditions, we employ the use of a security module as stated in Section V-A. The BMS test system uses a battery emulator that emulates 14 cell voltages together with a sensor temperature value derived from the extended NFC measurement components. The software presents each measurement with an identifier and the measured value. These values are considered monitoring values. The BCC is further capable of deriving diagnostic values for the active status report. A one-time reading from one battery module is considered a sample. In our testing case, one such sample has a length of 162 bytes. For security purposes, padding is added to round the total size up to 176 bytes, a multiple of 16, since the security algorithms used are of the 128-bit block length.

Fig. 7: Security threat analysis visual overview using a Data Flow Diagram.

The evaluation was divided into three phases, following the design given in Fig. 4. Values are shown as mean values derived from multiple measurements. The initial key insertion step was measured at a constant \(20\,ms\). (1) _Data sampling_ was measured at \(112.98\pm 0.54\,ms\). Of this, the measurement step only required \(3.5\,ms\), with the remaining \(109.5\,ms\) being used for the diagnostic derivations. (2) _Data processing_ was shown to have little impact and to be very fast, with a resulting time of \(1.0\pm 0.1\,ms\). (3) _Security operations_ include the AES-CBC and CMAC calculations for the data confidentiality, integrity, and authenticity security coverage. The execution was relatively fast, resulting in a total time of \(992\pm 8.75\,\mu s\). As can be concluded from the evaluation, the main operational overhead comes from the data sampling step.
Overall, the evaluation shows that integrating security operations along with the traditional BMS data sampling results in a minimal overhead change, an increase of \(1.7\,\%\), and therefore would not interfere with the standard time-critical BMS safety operations. Even if the security data logging were limited to the measurement steps only, e.g., because diagnostic checks are performed infrequently, it would still result in an acceptable overhead range, adding an additional \(\approx 1.5\,ms\) to the \(3.5\,ms\) of measurement sampling. The main challenges would come from determining how often the security logging should take place, and from defining the necessary memory capacity for long-term administration. The measurements were also done over a longer operation run, but no significant changes between the measurements were detected. The results follow a linear time increase. Total times for 1, 5, and 100 sample runs are shown in Table III. ### _Energy Consumption_ We measured the energy consumption of our BMS implementation to investigate how much additional energy would be required by the added CCB components, the energy overhead of the added sensors and the NTAG, and the overall energy consumption of the BMS controller when considering the added security operations. For conducting the measurements, we used a Nordic Semiconductor Power Profiler Kit. The **CCB** was evaluated on two added elements: the extended process MCU and the active NFC reader. Fig. 8 shows the current consumption of the CCB's MCU, which is responsible for the control of the NFC reader. The same operational segments were also considered in parallel when measuring the consumption of the CCB's NFC reader board. From Fig. 8, we can observe four operational segments: 1. _NFC Discovery_: mainly comprises the discovery loop for the battery module's passive NTAG component; it shows the highest peak in current consumption, but the average remains consistent with other operations. 2. _Preparation & Validation_: board configuration and start-up steps; also includes the signature authentication step (Section IV-A). It shows a constant and stable consumption over most of its period. 3. _Module(s) Configuration_: configuration command exchange for the NFC and sensor devices on the battery module. The consumption oscillates more due to a more intensive NFC reader interaction. 4. _Readout & Processing_: one iteration of the battery sensor readout and data handling. The drop in current consumption indicates inactivity on the MCU once the operation ends, while at the beginning a higher consumption can be observed from the NFC data exchange. Fig. 9 shows the graphical comparison of average power consumption between the CCB's MCU and NFC reader after five different measurement runs. The operational voltage for both components was set at \(5\,V\). Overall, the added devices resulted in an increase of up to \(1\,W\) of power consumption, without optimization considerations. This means that for the repeatable monitoring phase (described in Section V-B), without any other additional computational overhead, the energy consumption amounts to \(25.82\,mJ\).
Fig. 8: Current consumption over time for the process MCU in the CCB during the start-up configuration period for the active battery sensor readout, showing: the discovery process from the NFC reader; internal configuration and secure NTAG validation; battery module, sensor, and NTAG configuration; and the NFC readout process.
Fig. 9: Power consumption of the CCB's MCU and NFC reader.
The power consumption shown covers the whole NFC reader board during the active period. This means that it accounts for all regulators, communication interfaces, the NFC chip controller, and, most importantly, the active RF transceiver. Since the communication interaction between the active and passive NFC devices was taking place for most of the operational run, the RF field also remained mainly active. In a general environment, NFC readers are intended to offer a polling feature, i.e., a periodical wake-up from the stand-by state for the purpose of detecting present passive NFC devices. This feature greatly reduces the average current consumption over time. However, in the presented design, the communication remains active for most of the monitoring phase, since the positioning of the devices does not change and, depending on the sampling rate, the next measurement might occur soon after the last one finished. Optimisation and adjustment of the standby mode and RF activation are largely dependent on the targeted system implementation goals and are left open for developers. **Battery pack sensor** consumption was negligible compared to the other energy consumption of the system, with a current consumption of \(40\,nA\) in the standby state, peaking for a short time at up to \(22\,\mu A\) during the initialization and active periods. The NTAG relies on the energy harvesting feature for the operation and control of the sensors (see Section III-B). In our test case, this results in an additional current draw on the NFC reader, with an average rise of \(5\,mA\), when in range of the NTAG. We analyzed the **BMS controller** for the total power and energy consumption of one full diagnostic sampling cycle. The average drawn power was \(122.16\,mW\), with an average energy consumption of \(13.80\,mJ\), over ten different system runs. Additionally, the security operations and their preceding data processing resulted in only a slight increase in energy consumption: \(0.28\,mJ\) for one sampling cycle, plus \(2.66\,mJ\) for the one-time key-insertion operation. We can observe that the added security operations result in only a minimal increase of up to \(2\,\%\) in energy consumption per sample run. ### _Battery Sensor Throughput Analysis_ The throughput of battery module sensor data largely depends on several factors. Primarily, it is dependent on: (i) the total number of sensors used per module, (ii) the number of battery modules used per CCB, (iii) the number of CCBs used per main central BMS controller, and, in this case of using the NFC components, (iv) the total number of communicating NFC components (active and passive) and the number of sensors per passive component (communication chains). As indicated in Section V-A, for the experimental setup a battery emulator was used that offers the reading of fourteen battery cells and one temperature sensor. As such, under our setup, we represent a system that has: one sensor per battery module, one passive NTAG device per battery module, connected directly with the battery sensor, and a CCB with one active NFC reader per assigned battery module. Further points on the realization and potential future work based on the aforementioned throughput factors are discussed in Section VI. The readout of the NTAG is done through the provided SRAM.
The SRAM in our test environment offers 256 bytes of data transfer, with the data divided into blocks of 4 bytes. In our setup, reading the whole SRAM would take \(82.28\,ms\). However, in a real setting, this readout would likely require much less data. As indicated, each sensor would need 1-2 blocks of 4 bytes each to process and send its derived data. The number of sensors per pack is also usually limited, and it is very unlikely that, with current battery modules, the data requirements would exceed one SRAM read request. Each measurement requires three actions to take place: 1. CCB (NFC Read.) \(\rightarrow\) (NTAG) Batt. module; write command to enable and start the sensor measurement. 2. CCB (NFC Read.) \(\rightarrow\) (NTAG) Batt. module; read command to read out the saved sensor values from the specific block of the SRAM. 3. CCB (NFC Read.) \(\leftarrow\) (NTAG) Batt. module; transmission of the sensor values from the NTAG's SRAM. The request and response frames contain additional data in the form of flags, IDs, commands, addresses, and Cyclic Redundancy Check (CRC) appendices. Thus, the first two write and read SRAM commands carry an additional 15 bytes of header, which can be reduced to 7 bytes if the ID component is removed (it is not necessary if the communication is 1-to-1). The response SRAM read frame only has 3 header bytes (flags and CRC). The remaining payload depends on the sensor data, which in our case is two blocks, i.e., 8 bytes. Of those, the measured value is contained in 2 bytes. Based on the experiments from the implementation setup, the CCB was able to conduct 33 measurements per second, i.e., at 8 bytes per reading, 264 bytes/s of pure measurement data. As noted, each measurement is a three-stage process with the same repeating overhead each time. The time required for processing the received data by the CCB is negligible compared to the transfer time: it accounted for \(1-2\,ms\) between each read request, used for handling the processing of the received sensor data. The measurements otherwise correspond to running a single sensor measurement during the "Monitoring phase" as indicated in Section V-B. The total amount of time and handled sensor payload data when running repeated measurements for different numbers of iterations is shown in Table IV. As expected, linear growth of time with the number of repeated iterations is observed. Compared to the measurements conducted for the overall BMS process after applying security operations in Section V-D, it can be concluded that the amount of data processed would suffice for the current setup, even when using multiple sensors per battery module. Furthermore, for the modular topology presented here, the measurements and sampling are conducted in parallel and are independent of the number of CCBs used, being limited only by the processing power of the assigned BMS controller. However, for a BMS with requirements for a faster sampling rate, in this case \(<30\,ms\), additional optimization aspects would need to be considered. ## VI Discussion and Future Work As seen in Section V-B, the initialization phase is very time-demanding. The primary reason for this is the long execution time of the signature validation, which takes around 69% of the total initialization phase time in case all four initialization phase steps are executed sequentially.
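As a quick back-of-the-envelope check of the figures reported above and in Section V-B, the snippet below recomputes the signature-validation share of the initialization phase and the effective measurement throughput. All input values are the measured ones; the per-measurement byte count ignores any body carried by the write command, which is not specified above.

```python
# Measured initialization steps in ms (Section V-B).
auth, harvest, ntag_init, sensor_init = 369.30, 19.64, 29.16, 116.1
total_init = auth + harvest + ntag_init + sensor_init
print(f"signature validation share: {auth / total_init:.0%}")  # -> 69%

# Effective sensor throughput (Section V-F).
measurements_per_s = 33
payload_bytes = 8  # two 4-byte SRAM blocks per measurement
print(measurements_per_s * payload_bytes, "payload bytes/s")   # -> 264

# Bytes on the wire per measurement: two command frames with 15-byte
# headers plus one response frame with a 3-byte header and the payload.
print(15 + 15 + 3 + payload_bytes, "bytes per measurement")    # -> 41
```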
Based on our evaluation tests and findings, we note the following important points that can be addressed during the implementation of the proposed design to reduce this time and help in phase delivery: (a) the signature validation hardware and software need to be optimized for the target system to reduce the overall initial execution time, (b) process parallelization can reduce the number of sequential steps in the execution, (c) configuration and status caching can be used for the targeted devices. The proposed system design solution can also be used in different BMS settings regardless of the use case, and should fit the needs of automotive and industrial environments. In this context, the applicability of the presented solution is not limited to conventional temperature sensors, but extends to other sensors that are part of a battery module. The placement of the battery cell's sensors and the target of measurement play an important role and could benefit from using NFC transfer. The closer the sensor is to the core of the battery, the more accurate and timely the results will be. To this end, it would be possible to utilize NFC to transfer the data from these sensors from the inside to the outside of the battery. These measurements would add an additional layer to the safety precautions of the BMS and, hence, would increase the safety of the overall system. Separate research would need to be conducted to investigate the optimal placement and usability of NFC communication for data transfer with regard to the actual sensor placement in a battery module. Next to the handling of the sensor, antenna positioning should be further investigated as well [16]. For the current setup, a parallel placement is used with no physical considerations. Future work should also include research on the limits of the NFC range when considering the obstructing environment of the enclosed BMS modules. Additionally, an analysis should be made of the possible range and performance when not considering the energy harvesting feature. As mentioned in Section III-B, the proposed design uses the energy harvesting feature of NFC technology to allow for less reliance on wired connections with the batteries. However, by disabling this feature and powering the components from the underlying batteries, the range can be greatly extended, but at a higher cost, as additional wiring would also need to be provided. Concerning the system design, another important point to consider for future work is the analysis of the number of NFC elements used. The current design proposes the use of one active NFC device per CCB and one passive device per battery module. Considerations should be made on the adequate distribution of active and passive NFC devices. This is especially important for the passive NFC devices, i.e., the NTAGs, since an adequate hardware solution should be provided that considers the potential of multiple sensors placed in one battery module, or multiple NTAGs being handled by one active NFC component. Optimisations in the system design could lead to a reduction in the overall production cost. In this section, we have primarily discussed the hardware aspects of future work and improvement, but an investigation should also be made into optimization methods for improving the execution time and reliability of the connection during the NFC sensor readout process.
This can be realized on different software layers, targeting both the lower driver control and the application stack. Among others, this investigation may include the consideration of different communication protocol extensions, but also improved security realization. For future work, we also plan to further extend the investigation of data security control within the BMS environment. Security attention should be given to extending the authentication algorithm, but also to adding an extra security layer for communication with external components and services. Additional threat aspects need to be considered when the attack surface is extended [33, 38]. ## VII Conclusion In this work, we have presented the idea of using NFC as a wireless communication interface for battery sensor readouts in BMS. A system design has been proposed that considers the construction of a modular BMS with NFC components, with special regard to the data exchange protocol and NFC requirements. To alleviate the risk of counterfeit battery cells and prevent the safety and security threats that can arise from them, an authentication model has been proposed and evaluated. A further study has been conducted that investigates the security handling and control of the derived sensor and diagnostic data once they are logged on a cell control board. Experimental results using real components show the feasibility of our approach, but also reveal design challenges that open up possibilities for further research in this field. ## Acknowledgments We would like to express our sincere thanks to the BMS team from NXP Semiconductors Austria GmbH Co & KG for the support and cooperation during the design and evaluation phases of this work, as well as for providing the necessary equipment for conducting our experimental tests. This project has received funding from the "EFREtop: Securely Applied Machine Learning - Battery Management Systems" (Acronym "SEAMAL BMS", FFG Nr. 880564).
Wireless battery management systems (BMS) are increasingly being considered for modern applications. The growing complexity of BMS modules and the increasing cost of their connections call for new ideas and approaches. Despite this growing trend, general-purpose solutions targeting the sensor readout of battery cells are lacking, even though wireless communication would make sensor placement in battery packs more flexible and cost-efficient. Many wireless technologies, e.g., those using the 2.4 GHz frequency band, suffer from interference and other limitations. In this paper, we present a new approach to BMS communication: reading out battery sensors via NFC technology. In answer to the concerns surrounding counterfeit battery packs, we consider an authentication scheme for battery packs, as well as security measures for the processed and stored BMS status data. To show that common BMS applications can make use of our design, the targeted
2309.12127
Compact sheaves on a locally compact space
We describe the compact objects in the $\infty$-category of $\mathcal C$-valued sheaves $\text{Shv} (X,\mathcal C)$ on a hypercomplete locally compact Hausdorff space $X$, for $\mathcal C$ a compactly generated stable $\infty$-category. When $X$ is a non-compact connected manifold and $\mathcal C$ is the unbounded derived category of a ring, our result recovers a result of Neeman. Furthermore, for $X$ as above and $\mathcal C$ a nontrivial compactly generated stable $\infty$-category, we show that $\text{Shv} (X,\mathcal C)$ is compactly generated if and only if $X$ is totally disconnected.
Oscar Harr
2023-09-21T14:47:56
http://arxiv.org/abs/2309.12127v2
# Compact sheaves on a locally compact space ###### Abstract. We describe the compact objects in the \(\infty\)-category of \(\mathcal{C}\)-valued sheaves \(\operatorname{Shv}(X,\mathcal{C})\) on a hypercomplete locally compact Hausdorff space \(X\), for \(\mathcal{C}\) a compactly generated stable \(\infty\)-category. When \(X\) is a non-compact connected manifold and \(\mathcal{C}\) is the unbounded derived category of a ring, our result recovers a result of Neeman. Furthermore, for \(X\) as above and \(\mathcal{C}\) a nontrivial compactly generated stable \(\infty\)-category, we show that \(\operatorname{Shv}(X,\mathcal{C})\) is compactly generated if and only if \(X\) is totally disconnected. The aim of this note is to clarify and expand on a point made by Neeman [10]. Let \(M\) be a non-compact connected manifold, and let \(\operatorname{Shv}(M,\mathcal{D}(\mathbb{Z}))\) denote the unbounded derived category of sheaves of abelian groups on \(M\). Neeman shows that, up to equivalence, the only compact object in \(\operatorname{Shv}(M,\mathcal{D}(\mathbb{Z}))\) is the zero sheaf. This implies that \(\operatorname{Shv}(M,\mathcal{D}(\mathbb{Z}))\) is very far from compactly generated. Nevertheless, it follows from Lurie's covariant Verdier duality theorem [12, Thm 5.5.5.1] that \(\operatorname{Shv}(M,\mathcal{D}(\mathbb{Z}))\) satisfies a related smallness condition: it is _dualizable_ in the symmetric monoidal \(\infty\)-category \(\mathcal{P}_{\operatorname{stab}}^{\otimes}\) of stable presentable \(\infty\)-categories, which holds more generally if \(M\) is replaced with any locally compact Hausdorff space \(X\). Although every compactly generated presentable stable \(\infty\)-category is dualizable [12, Prop D.7.2.3], Neeman's example thus shows that the converse is false. The existence of this large and interesting class of stable presentable \(\infty\)-categories that are dualizable but not compactly generated forms part of the motivation behind Efimov's continuous extensions of localizing invariants [11], see also [13]. This note is concerned with the following two questions about the \(\infty\)-category of \(\mathcal{C}\)-valued sheaves on a general locally compact Hausdorff space \(X\), where \(\mathcal{C}\) is some compactly generated stable \(\infty\)-category (e.g. the unbounded derived \(\infty\)-category of a ring or the \(\infty\)-category of spectra): 1. How rare is it for \(\operatorname{Shv}(X,\mathcal{C})\) to be compactly generated? 2. How far is \(\operatorname{Shv}(X,\mathcal{C})\) from being compactly generated in general? With a relatively mild completeness assumption on \(X\) (see Section 1), we answer question (2) by showing that a \(\mathcal{C}\)-valued sheaf \(\mathscr{F}\) on \(X\) is compact as an object of \(\operatorname{Shv}(X,\mathcal{C})\) if and only if it has compact support, compact stalks, and is locally constant (Theorem 2.3). Thus if \(X\) is for instance a CW complex, the subcategory of compact objects \(\operatorname{Shv}(X,\mathcal{C})^{\omega}\) remembers only the _homotopy type_ of the compact path components of \(X\), and it is therefore impossible to reconstruct the entire sheaf category \(\operatorname{Shv}(X,\mathcal{C})\), or equivalently the homeomorphism type of \(X\), from this information. 
In his 2022 ICM talk, Efimov mentions that the \(\infty\)-category of \(\mathcal{D}(R)\)-valued sheaves on a locally compact Hausdorff space \(X\) 'is almost never compactly generated (unless \(X\) is totally disconnected, like a Cantor set)' [10, slide 13]. As a corollary to our description of the compact objects of \(\operatorname{Shv}(X,\mathcal{C})\), we verify, modulo the same completeness assumption mentioned above, that indeed the _only_ locally compact Hausdorff spaces \(X\) with \(\operatorname{Shv}(X,\mathcal{C})\) compactly generated, for some nontrivial \(\mathcal{C}\), are the totally disconnected ones (Proposition 3.1), thereby answering question (1). ### Notation and conventions Throughout this note, we use the theory of higher categories and higher algebra, an extensive textbook account of which can be found in the work of Lurie [12, 13, 14]. We will also make frequent use of the six-functor formalism for derived sheaves on locally compact Hausdorff spaces, as described classically by [21] and with general coefficients by [22]. For convenience, we assume the existence of an uncountable Grothendieck universe \(\mathcal{U}\) of _small_ sets and further Grothendieck universes \(\mathcal{U}^{\prime}\) and \(\mathcal{U}^{\prime\prime}\) of _large_ and _very large_ sets respectively, such that \(\mathcal{U}\in\mathcal{U}^{\prime}\in\mathcal{U}^{\prime\prime}\). 'Topological space' always implicitly refers to a small topological space, and similarly with 'spectrum'. On the other hand, an '\(\infty\)-category' is an '\((\infty,1)\)-category' is a quasicategory, which unless otherwise stated is large. We let \(\widehat{\mathcal{C}\mathrm{at}}_{\infty}\) denote the very large \(\infty\)-category of (large) \(\infty\)-categories. Because we are dealing with sheaves on topological spaces, we deem it best to make a clear distinction between actual topological spaces on the one hand, and on the other hand the objects of the \(\infty\)-category \(\mathcal{S}\) of 'spaces' in the sense of Lurie. Following the convention suggested in [13], we will refer to the latter as _anima_. Given an \(\infty\)-category \(\mathcal{C}\), we let \(\mathcal{C}^{\omega}\subseteq\mathcal{C}\) denote the subcategory spanned by the compact objects. Recall that an object \(C\in\mathcal{C}\) is said to be _compact_ if the presheaf of large anima \(D\mapsto\mathrm{Map}_{\mathcal{C}}(C,D)\) preserves small filtered colimits. **Acknowledgements.** I was partially supported by the Danish National Research Foundation through the Copenhagen Centre for Geometry and Topology (DNRF151). I am grateful to Marius Verner Bach Nielsen for comments on the draft, and to Jesper Grodal and Maxime Ramzi for valuable discussions about the arguments appearing in this note, and to the latter for pointing out that the original version of Lemma 3.2 was false in the generality in which I had stated it. ## 1. \(\mathcal{C}\)-hypercomplete spaces Given an \(\infty\)-category \(\mathcal{C}\) and a topological space \(X\), we let \(\mathrm{Shv}(X,\mathcal{C})\) denote the \(\infty\)-category of \(\mathcal{C}\)-valued sheaves on \(X\) in the sense of Lurie [15].
That is, \(\mathrm{Shv}(X,\mathcal{C})\) is the full subcategory of the presheaf \(\infty\)-category \(\mathrm{Fun}(\mathrm{Open}(X)^{\mathrm{op}},\mathcal{C})\) consisting of presheaves \(\mathscr{F}\) satisfying the _sheaf condition_: for any open set \(U\subseteq X\) and any open cover \(\{U_{i}\to U\}_{i\in I}\), the canonical map \[\mathscr{F}(U)\to\lim_{V}\mathscr{F}(V)\] is an equivalence, where \(V\) ranges over open sets \(V\subseteq U_{i}\subseteq X\), \(i\in I\), considered as a poset under inclusion. When \(\mathcal{C}=\mathcal{S}\) is the \(\infty\)-category of anima, we will abbreviate \(\mathrm{Shv}(X)=\mathrm{Shv}(X,\mathcal{S})\). **Remark 1.1**.: When \(\mathcal{C}=\mathcal{D}(R)\) is the unbounded derived \(\infty\)-category of a ring, the \(\infty\)-category \(\mathrm{Shv}(X,\mathcal{D}(R))\) is related to, but generally not the same as, the derived \(\infty\)-category \(\mathcal{D}(\mathrm{Shv}(X,R))\) of the ordinary category of sheaves of \(R\)-modules on \(X\), which is the object studied (via its homotopy category) by Neeman [23]. However, they do coincide under the completeness assumption that we will impose on \(X\) below, see [11]. Since this completeness assumption is verified when \(X\) is a topological manifold, our results include those of Neeman. We are interested in topological spaces satisfying the following condition: **Definition 1.2**.: A topological space \(X\) is \(\mathcal{C}\)_-hypercomplete_ if the stalk functors \(x^{*}\colon\,\mathrm{Shv}(X,\mathcal{C})\to\mathcal{C}\) are jointly conservative for \(x\) ranging over the points of \(X\). The reason for our choice of terminology is that \(X\) is \(\mathcal{S}\)-hypercomplete if and only if the \(0\)-localic \(\infty\)-topos \(\mathrm{Shv}(X)\) has enough points, which is equivalent to \(\mathrm{Shv}(X)\) being hypercomplete as an \(\infty\)-topos by Claim (6) in [15, § 6.5.4]. (This is _not_ true for arbitrary \(\infty\)-topoi, i.e. there are hypercomplete \(\infty\)-topoi that do not have enough points.) This subtlety, whereby a morphism of sheaves may fail to be an equivalence even though it is so on all stalks, does not occur for non-derived sheaves: the \(1\)-topos \(\mathrm{Shv}(X,\mathcal{S}_{\leq 0})\) of sheaves of sets on a topological space \(X\) _always_ has enough points. We refer to [15, § 6.5.4] for a discussion of why it is often preferable to work with non-hypercomplete sheaves, rather than, say, imposing hypercompleteness by replacing \(\mathrm{Shv}(X)\) with its hypercompletion \(\mathrm{Shv}(X)^{\wedge}\). Several classes of topological spaces are known to be \(\mathcal{S}\)-hypercomplete, and hence also \(\mathcal{C}\)-hypercomplete for any compactly generated \(\infty\)-category \(\mathcal{C}\).1 Footnote 1: Indeed, for any such \(\mathcal{C}\) there is a conservative functor \[\operatorname{Shv}(X,\mathcal{C})\to\prod_{C\in\mathcal{C}^{\omega}}\operatorname{Shv}(X)\] given informally by mapping \(\mathscr{F}\) to \((\mathscr{F}_{C})_{C\in\mathcal{C}^{\omega}}\), where \(\mathscr{F}_{C}=\operatorname{Map}_{\mathcal{C}}(C,-)\circ\mathscr{F}\), which is a sheaf since \(\operatorname{Map}_{\mathcal{C}}(C,-)\) preserves limits. Also, since \(C\) is compact, we have a canonical equivalence \((\mathscr{F}_{C})_{x}\simeq\operatorname{Map}_{\mathcal{C}}(C,\mathscr{F}_{x})\) natural in \(\mathscr{F}\) for each \(x\in X\), and it follows that if \(X\) is \(\mathcal{S}\)-hypercomplete then it is also \(\mathcal{C}\)-hypercomplete.
Although only the first two are relevant for this note, here is a list of some classes of topological spaces that have this property: * paracompact spaces that are locally of covering dimension \(\leq n\) for some fixed \(n\) [11, Cor 7.2.1.12], * arbitrary CW complexes [14], * finite-dimensional Heyting spaces [11, Rem 7.2.4.18], and * Alexandroff spaces, since the \(\infty\)-topos of sheaves associated to an Alexandroff space is equivalent to a presheaf \(\infty\)-topos. ## 2. When is a sheaf compact? Let \(\mathcal{C}\) be a compactly generated stable \(\infty\)-category, e.g. the unbounded derived category \(\mathcal{D}(R)\) of a ring \(R\) or the \(\infty\)-category of spectra \(\operatorname{Sp}\). Given a sheaf \(\mathscr{F}\in\operatorname{Shv}(X,\mathcal{C})\), we define the _support_ of \(\mathscr{F}\) by \[\operatorname{supp}\mathscr{F}=\{x\in X\mid\mathscr{F}_{x}\not\simeq 0\}\subseteq X.\] As in [12], our study of the compact objects of \(\operatorname{Shv}(X,\mathcal{C})\) proceeds from an analysis of their supports. By slightly adapting the proof of [12, Lem 1.4], we get the following description of the support of a compact sheaf: **Lemma 2.1**.: _Let \(X\) be a \(\mathcal{C}\)-hypercomplete locally compact Hausdorff space and let \(\mathscr{F}\in\operatorname{Shv}(X,\mathcal{C})^{\omega}\). Then the support \(\operatorname{supp}\mathscr{F}\) is compact._ Proof.: We first show that \(\operatorname{supp}\mathscr{F}\) is contained in a compact subset of \(X\). Consider the canonical map \[\operatorname{colim}_{U}(j_{U})_{!}j_{U}^{*}\mathscr{F}\to\mathscr{F}, \tag{1}\] where the colimit ranges over the poset of precompact open sets ordered by the rule \(U\leq V\) if \(\overline{U}\subseteq V\), and for each such \(U\) we have denoted by \(j_{U}\colon U\hookrightarrow X\) the inclusion. Since precompact open sets form a basis for the topology on \(X\), the map (1) is an equivalence of sheaves. Let \(\phi\colon\mathscr{F}\xrightarrow{\sim}\operatorname{colim}_{U}(j_{U})_{!}j_{U}^{*}\mathscr{F}\) be some choice of inverse. Any finite union of precompact open sets is again precompact open, so the poset of precompact open sets is filtered. Hence compactness of \(\mathscr{F}\) implies that \(\phi\) factors through \((j_{U})_{!}j_{U}^{*}\mathscr{F}\) for some precompact open \(U\), and it follows that \(\operatorname{supp}\mathscr{F}\) is contained in a compact subset \(\overline{U}\subseteq X\), as claimed. By the above, it remains only to be seen that \(\operatorname{supp}\mathscr{F}\) is closed, or equivalently that its complement \(X\setminus\operatorname{supp}\mathscr{F}\) is open. Suppose \(x\in X\setminus\operatorname{supp}\mathscr{F}\). Then we have a recollement fiber sequence \[j_{!}j^{*}\mathscr{F}\to\mathscr{F}\to i_{*}i^{*}\mathscr{F},\] where \(j\colon X\setminus\{x\}\hookrightarrow X\) and \(i\colon\{x\}\hookrightarrow X\) are the inclusions, and since \(x\not\in\operatorname{supp}\mathscr{F}\) we have \(j_{!}j^{*}\mathscr{F}\simeq\mathscr{F}\). Since \(j_{!}\) is a fully faithful left adjoint, it reflects compact objects, and we conclude that \(j^{*}\mathscr{F}\) is again compact. But then \(j^{*}\mathscr{F}\) is supported on a compact subset of \(X\setminus\{x\}\) by the above, which must be closed as a subset of \(X\), and hence \(x\) lies in the interior of \(X\setminus\operatorname{supp}\mathscr{F}\) as desired. **Lemma 2.2**.: _If \(f\colon X\to Y\) is a proper map of locally compact Hausdorff spaces, then the pullback functor \(f^{*}\) preserves compact objects.
In particular, if \(X\) is a compact Hausdorff space and \(E\in\mathcal{C}^{\omega}\), then \(E_{X}\in\operatorname{Shv}(X,\mathcal{C})^{\omega}\), where \(E_{X}\) denotes the constant sheaf at \(E\)._ Proof.: Since \(f\) is proper, the pullback \(f^{*}\) is left adjoint to \(f_{*}\simeq f_{!}\), which is itself left adjoint to \(f^{!}\). Hence \(f_{*}\) preserves colimits, and it follows that its left adjoint \(f^{*}\) preserves compact objects. The statement about constant sheaves follows by taking \(f\) to be the projection from \(X\) to a point. Our main result is the following description of the compact objects in \(\operatorname{Shv}(X,\mathcal{C})\): **Theorem 2.3**.: _Let \(X\) be a \(\mathcal{C}\)-hypercomplete locally compact Hausdorff space. A sheaf \(\mathscr{F}\in\operatorname{Shv}(X,\mathcal{C})\) is compact if and only if_ 1. \(\operatorname{supp}\mathscr{F}\) _is compact;_ 2. \(\mathscr{F}\) _is locally constant; and_ 3. \(\mathscr{F}_{x}\in\mathcal{C}^{\omega}\) _for each_ \(x\in X\)_._ In particular, note that conditions (i) and (ii) together imply that if \(\mathscr{F}\) is compact, then its support \(\operatorname{supp}\mathscr{F}\) must be a compact open subset of \(X\). On the other hand, (iii) guarantees that if \(\mathscr{F}\) is constant on \(U\subseteq X\), say with value \(E\), then \(E\in\mathcal{C}^{\omega}\). Proof.: 'Necessity.' Suppose we are given \(\mathscr{F}\in\operatorname{Shv}(X,\mathcal{C})^{\omega}\). Then \(\mathscr{F}\) must satisfy (i) by Lemma 2.1 and (iii) by Lemma 2.2, since the stalk \(\mathscr{F}_{x}\) at \(x\in X\) is the same as the pull-back \(i^{*}_{x}\mathscr{F}\) along the inclusion \(i_{x}\colon\{x\}\hookrightarrow X\), which is always a proper map. It remains only to be seen that \(\mathscr{F}\) satisfies (ii), i.e. that it is locally constant. Fix a point \(x\in X\), and let \(i_{x}\) again denote the inclusion of this point into \(X\). Let \(E=i^{*}_{x}\mathscr{F}\) denote the stalk of \(\mathscr{F}\) at \(x\). By [11, Cor 7.1.5.6], there is an equivalence \(\operatorname{colim}_{U}\mathscr{F}(U)\simeq E\), where \(U\) ranges over the poset of open neighborhoods of \(x\). As \(E\) is compact, this implies that \(\mathscr{F}(U)\to E\) has a section for some \(U\). Pick a precompact open neighborhood \(W\ni x\) with \(\overline{W}\subseteq U\), and let \(i\colon\overline{W}\hookrightarrow X\) denote the inclusion. As the canonical map \(\mathscr{F}(U)\to E\) factors through the restriction \(\mathscr{F}(U)\to(i^{*}\mathscr{F})(\overline{W})\to\mathscr{F}(W)\), the map \((i^{*}\mathscr{F})(\overline{W})\to E\) also admits a section \(E\to(i^{*}\mathscr{F})(\overline{W})\). Viewing the latter as a morphism from the constant presheaf with value \(E\) to \(i^{*}\mathscr{F}\), we get an induced map \(\sigma\colon E_{\overline{W}}\to i^{*}\mathscr{F}\) of sheaves over \(\overline{W}\) which by construction induces an equivalence of stalks at \(x\). Here both \(E_{\overline{W}}\) and \(i^{*}\mathscr{F}\) are compact, so the cofiber \(\mathscr{Q}=\operatorname{cofib}(\sigma)\) is again compact. But then \(\operatorname{supp}\mathscr{Q}\) is compact, hence closed, so \(W^{\prime}=W\setminus\operatorname{supp}\mathscr{Q}\) is open, and \(\mathscr{Q}_{x}\simeq 0\) so \(x\in W^{\prime}\). Furthermore, \(\sigma\) restricts to an equivalence of sheaves on \(W^{\prime}\) by construction, so \(\mathscr{F}|_{W^{\prime}}\) is equivalent to the constant sheaf on \(W^{\prime}\) with value \(E\), as desired. 'Sufficiency.' Let \(i\colon\operatorname{supp}\mathscr{F}\hookrightarrow X\) denote the inclusion.
Since \(i\) is both proper and an open immersion, both \(i_{*}\simeq i_{!}\) and \(i^{*}\simeq i^{!}\) preserve and reflect compact objects. We may therefore assume that \(X\) is compact, after possibly replacing it with \(\operatorname{supp}\mathscr{F}\). Pick a finite collection of closed subsets \(Z_{i}\subseteq X\), \(i=1,\dots,n\), such that \(\mathscr{F}\) is constant in a neighborhood of each \(Z_{i}\) and such that \(X\) is covered by the interiors \(Z_{i}^{\circ}\). Write \(I=\{1,\dots,n\}\) for short and put \(Z_{J}=\bigcap_{j\in J}Z_{j}\) for each \(J\subseteq I\). Descent (Corollary A.3) implies that the canonical functor \[\operatorname{Shv}(X,\mathcal{C})\to\lim_{[k]\in\Delta_{\leq n-1}}\prod_{\substack{J\subseteq I\\ |J|=k+1}}\operatorname{Shv}(Z_{J},\mathcal{C})\] is an equivalence. The canonical projection from \(\operatorname{Shv}(X,\mathcal{C})\) to \(\operatorname{Shv}(Z_{J},\mathcal{C})\) is the restriction map. By construction, we have that for each \(J\subseteq I\), the restriction \(\mathscr{F}|_{Z_{J}}\) is constant with value a compact object, and hence compact as an object of \(\operatorname{Shv}(Z_{J},\mathcal{C})\) by the preceding lemma. For \([k]\in\Delta_{\leq n-1}\), let \(i_{k}\colon\coprod_{|J|=k+1}Z_{J}\to X\) denote the canonical map. According to [11, Lem 6.3.3.6], the identity functor \(\operatorname{id}\colon\operatorname{Shv}(X,\mathcal{C})\to\operatorname{Shv}(X,\mathcal{C})\) is the limit of a diagram \(\Delta_{\leq n-1}\to\operatorname{Fun}(\operatorname{Shv}(X,\mathcal{C}),\operatorname{Shv}(X,\mathcal{C}))\) which sends the object \([k]\in\Delta_{\leq n-1}\) to the composition \((i_{k})_{*}i_{k}^{*}\), and so for any filtered system \(\{\mathscr{G}_{\alpha}\}_{\alpha\in A}\), we find \[\operatorname{Map}(\mathscr{F},\operatorname{colim}_{A}\mathscr{G}_{\alpha}) \simeq\lim_{[k]\in\Delta_{\leq n-1}}\operatorname{Map}(\mathscr{F},(i_{k})_{*}i_{k}^{*}\operatorname{colim}_{A}\mathscr{G}_{\alpha})\] \[\simeq\lim_{[k]\in\Delta_{\leq n-1}}\operatorname{Map}(i_{k}^{*}\mathscr{F},\operatorname{colim}_{A}i_{k}^{*}\mathscr{G}_{\alpha})\] \[\simeq\lim_{[k]\in\Delta_{\leq n-1}}\operatorname{colim}_{A}\operatorname{Map}(i_{k}^{*}\mathscr{F},i_{k}^{*}\mathscr{G}_{\alpha})\] \[\simeq\operatorname{colim}_{A}\lim_{[k]\in\Delta_{\leq n-1}}\operatorname{Map}(i_{k}^{*}\mathscr{F},i_{k}^{*}\mathscr{G}_{\alpha})\] \[\simeq\operatorname{colim}_{A}\operatorname{Map}(\mathscr{F},\mathscr{G}_{\alpha}),\] where the third equivalence uses that the restriction \(i_{k}^{*}\mathscr{F}\) is compact2 and the second-to-last equivalence uses that filtered colimits are left exact in \(\mathcal{S}\).
Footnote 2: Indeed, we have already observed that \(\mathscr{F}|_{Z_{J}}\) is compact for each \(J\), and hence the associated object \(i_{k}^{*}\mathscr{F}\) in the product \(\Pi_{J}\operatorname{Shv}(Z_{J},\mathcal{C})\) is also compact according to [11, Lem 5.3.4.10]. As a corollary, we recover Neeman's result: **Corollary 2.4** (Neeman).: _Let \(M\) be a non-compact connected manifold. Then \(\mathscr{F}\in\operatorname{Shv}(M,\mathcal{C})^{\omega}\) if and only if \(\mathscr{F}\simeq 0\)._ In fact, our result shows that the conclusion of Neeman's result holds more generally if \(M\) is replaced by a \(\mathcal{C}\)-hypercomplete locally compact Hausdorff space \(X\) whose quasicomponents are all non-compact. As a further corollary to our theorem, we can also describe the dualizable objects of, say, the \(\infty\)-category of \(\operatorname{Mod}_{R}\)-valued sheaves on a \(\operatorname{Mod}_{R}\)-hypercomplete locally compact Hausdorff space \(X\), where \(R\) is some \(\mathbb{E}_{\infty}\)-ring. As with the compact objects, the dualizable objects turn out to be very sparse: **Corollary 2.5**.: _Let \(\operatorname{Mod}_{R}^{\otimes}\) be the symmetric monoidal \(\infty\)-category of modules over an \(\mathbb{E}_{\infty}\)-ring \(R\), and let \(X\) be a \(\operatorname{Mod}_{R}\)-hypercomplete locally compact Hausdorff space. With respect to the induced symmetric monoidal structure on \(\operatorname{Shv}(X,\operatorname{Mod}_{R})\), a sheaf \(\mathscr{F}\in\operatorname{Shv}(X,\operatorname{Mod}_{R})\) is dualizable if and only if_ 1. \(\mathscr{F}\) _is locally constant, and_ 2. \(\mathscr{F}_{x}\) _is a perfect_ \(R\)_-module_3 _for each_ \(x\in X\)_._ Footnote 3: i.e. a compact object of \(\operatorname{Mod}_{R}\) Proof.: 'Sufficiency.' Since \(\operatorname{Shv}(X,\operatorname{Mod}_{R})^{\otimes}\) is closed symmetric monoidal, it suffices to show that for each sheaf \(\mathscr{F}\) satisfying the two conditions, the canonical map \[\mathscr{H}om(\mathscr{F},R_{X})\otimes\mathscr{F}\to\mathscr{H}om(\mathscr{F},\mathscr{F}) \tag{2}\] is an equivalence, where \(R_{X}\) is the constant sheaf at \(R\) and \(\mathscr{H}om(-,-)\) denotes the internal mapping object in \(\operatorname{Shv}(X,\operatorname{Mod}_{R})\). For sufficiently small open subsets \(U\subseteq X\), the restriction \(\mathscr{F}|_{U}\) is equivalent to the constant sheaf \(F_{U}=\pi^{*}F\) at a perfect \(R\)-module \(F\), and since pullback, being symmetric monoidal, preserves dualizable sheaves, we find that (2) restricts to an equivalence on \(U\). Since (2) is a morphism of sheaves which is locally an equivalence, it must be an equivalence, proving the claim. 'Necessity.' Assume that \(\mathscr{F}\) is dualizable, and let \(x\in X\) be some point. The condition on the stalks is immediate, since pullback preserves dualizable sheaves. We must show that \(\mathscr{F}\) is locally constant in a neighborhood of \(x\). Pick a precompact open neighborhood \(U\ni x\). Then \(\mathscr{F}|_{\overline{U}}\) is again dualizable, and since the monoidal unit \(R_{\overline{U}}=\pi^{*}R\in\operatorname{Shv}(\overline{U},\operatorname{Mod}_{R})\) is compact, it follows that \(\mathscr{F}|_{\overline{U}}\) is compact as an object of \(\operatorname{Shv}(\overline{U},\operatorname{Mod}_{R})\). But then the previous theorem implies that it must be locally constant on \(\overline{U}\), and hence also on the subset \(U\) as desired. ## 3. When is \(\operatorname{Shv}(X,\mathcal{C})\) compactly generated?
In this section, we prove the following characterization of those locally compact Hausdorff spaces \(X\) that have \(\operatorname{Shv}(X,\mathcal{C})\) compactly generated: **Proposition 3.1**.: _Let \(\mathcal{C}\) be a non-trivial compactly generated stable \(\infty\)-category, and let \(X\) be a \(\mathcal{C}\)-hypercomplete locally compact Hausdorff space. Then \(\operatorname{Shv}(X,\mathcal{C})\) is compactly generated if and only if \(X\) is totally disconnected._ ### Proof of the proposition The proof will use the following observation: **Lemma 3.2**.: _Let \(\mathcal{C}\) be a compactly generated stable \(\infty\)-category, and let \(\{C_{i}\}_{i\in I}\) and \(\{D_{i}\}_{i\in I}\) be filtered systems in \(\mathcal{C}\) indexed over the same poset \(I\)._ 1. _Suppose that for each_ \(i\in I\)_, there is some_ \(j\geq i\) _so that the transition map_ \(C_{i}\to C_{j}\) _factors through the zero object_ \(*\)_. Then_ \(\operatorname{colim}_{I}C_{i}\simeq*\)_. If each_ \(C_{i}\) _is compact, then the converse holds._ 2. _Suppose that for each comparable pair_ \(i\leq j\) _in_ \(I\) _there are horizontal equivalences making_ \[\begin{CD}C_{i} @>{\sim}>> D_{i}\\ @VVV @VVV\\ C_{j} @>{\sim}>> D_{j}\end{CD}\] _commute, where the vertical maps are the transition maps. If each_ \(C_{i}\) _is compact, then_ \(\operatorname{colim}_{I}C_{i}\simeq*\) _if and only if_ \(\operatorname{colim}_{I}D_{i}\simeq*\)_._ Proof.: Note that (2) follows from (1), since the existence of such commutative squares implies that \(\{C_{i}\}_{I}\) has the vanishing property for transition maps described in (1) if and only if \(\{D_{i}\}_{I}\) has that property. For the first claim in (1), it suffices to show that \(\operatorname{Map}_{\mathcal{C}}(D,\operatorname{colim}_{i\in I}C_{i})\) is contractible for each compact object \(D\in\mathcal{C}^{\omega}\). For this, first observe that \[\pi_{0}\operatorname{Map}_{\mathcal{C}}(D,\operatorname{colim}_{i\in I}C_{i})\cong\operatorname{colim}_{i\in I}\pi_{0}\operatorname{Map}(D,C_{i})\cong*,\] since our assumption guarantees that any homotopy class \(D\to C_{i}\) is identified with \(D\to*\to C_{i}\) after postcomposing with the transition map \(C_{i}\to C_{j}\) for sufficiently large \(j\geq i\). Applying the same argument for the compact object \(\Sigma^{n}D\), \(n\geq 1\), we find that \[\pi_{n}\operatorname{Map}_{\mathcal{C}}(D,\operatorname{colim}_{i\in I}C_{i})\cong\pi_{0}\operatorname{Map}_{\mathcal{C}}(\Sigma^{n}D,\operatorname{colim}_{i\in I}C_{i})\] vanishes also. Assume now that each \(C_{i}\) is compact and that \(\operatorname{colim}_{I}C_{i}\simeq*\). Then \[\operatorname{colim}_{j\in I}\operatorname{Map}_{\mathcal{C}}(C_{i},C_{j})\simeq\operatorname{Map}_{\mathcal{C}}(C_{i},\operatorname{colim}_{j\in I}C_{j}),\] and since \(\pi_{0}\) commutes with filtered colimits of anima, it follows that for sufficiently large \(j\geq i\) the transition map \(C_{i}\to C_{j}\) is homotopic to \(C_{i}\to*\to C_{j}\). Proof of Proposition 3.1.: 'Sufficiency.' The \(\infty\)-category of sheaves of anima \(\operatorname{Shv}(X)\) is compactly generated by [10, Prop 6.5.4.4], and hence so is \(\operatorname{Shv}(X,\mathcal{C})\simeq\operatorname{Shv}(X)\otimes\mathcal{C}\) according to [10, Lem 5.3.2.11]. 'Necessity.' Let \(x\in X\). We must show that if \(y\in X\) lies in the same connected component as \(x\), then \(y=x\).
For this, pick an object \(C\not\simeq 0\) in \(\mathcal{C}\) and let \(x_{*}C\) denote the skyscraper sheaf at \(x\) with value \(C\). By assumption there is a filtered system \(\{\mathscr{F}_{i}\}_{i\in I}\) of compact sheaves with \(\operatorname{colim}_{I}\mathscr{F}_{i}\simeq x_{*}C\). For each \(i\), the fact that \(\mathscr{F}_{i}\) is locally constant and that \(x\) and \(y\) lie in the same connected component means there is a non-canonical equivalence of stalks \(x^{*}\mathscr{F}_{i}\simeq y^{*}\mathscr{F}_{i}\). One should not expect to find a system of such non-canonical equivalences assembling into a natural transformation, essentially because the neighborhoods on which the \(\mathscr{F}_{i}\) are constant could get smaller and smaller as \(i\) increases. Nevertheless, given a comparable pair \(i\leq j\) in \(I\), one can pick equivalences making the diagram \[\begin{CD}x^{*}\mathscr{F}_{i} @>{\sim}>> y^{*}\mathscr{F}_{i}\\ @VVV @VVV\\ x^{*}\mathscr{F}_{j} @>{\sim}>> y^{*}\mathscr{F}_{j}\end{CD}\tag{3}\] commute, where the vertical maps are the transition maps. To see this, simply note that the set of points \(z\in X\) for which there are equivalences making the corresponding diagram (with \(y\) replaced by \(z\)) commute is a clopen subset of \(X\), since any point admits a neighborhood on which both \(\mathscr{F}_{i}\) and \(\mathscr{F}_{j}\) are constant. Since all of the \(\mathscr{F}_{i}\) have compact stalks by Theorem 2.3, it follows from Lemma 3.2 that the stalk \((x_{*}C)_{y}\simeq\operatorname{colim}_{I}y^{*}\mathscr{F}_{i}\) is nonzero. But \(X\) is Hausdorff, so this implies that \(y=x\) as desired. **Remark 3.3**.: Lemma 3.2 is also true if \(\mathcal{C}\) is any ordinary category, e.g. the category of abelian groups Ab. It is illuminating to consider why the lemma holds in this concrete setting. Given a filtered system of abelian groups \(\{A_{i}\}_{i\in I}\), the associated colimit can be described as a quotient of \(\bigoplus_{I}A_{i}\), in which an element \(a\in A_{i}\) represents zero if and only if there is \(j\geq i\) for which the transition map \(\varphi_{ij}\colon A_{i}\to A_{j}\) maps \(a\) to zero. Clearly \(\operatorname{colim}_{I}A_{i}\cong 0\) is implied by the assumption that for every \(i\in I\) there is \(j\geq i\) with \(\varphi_{ij}\colon A_{i}\to A_{j}\) being zero. For the partial converse, assume now that each \(A_{i}\) is a compact object of Ab, i.e. a finitely generated abelian group, and that \(\operatorname{colim}_{I}A_{i}\cong 0\). Let \(i\in I\) and pick a generating set \(a_{1},\dots,a_{n}\) for \(A_{i}\). Since \(\operatorname{colim}_{I}A_{i}\cong 0\), there are \(j_{1},\dots,j_{n}\geq i\) with \(\varphi_{ij_{s}}(a_{s})=0\) for each \(s\). Using that \(I\) is filtered, pick \(j\in I\) so that \(j\geq j_{s}\) for each \(s\). Then \(\varphi_{ij}(a_{s})=\varphi_{j_{s}j}\varphi_{ij_{s}}(a_{s})=0\) for each \(s\), and hence \(\varphi_{ij}=0\). ### Hausdorff schemes Unlike in point-set topology, compactly generated categories of sheaves are abundant in algebraic geometry. Using results of Hochster [10], Proposition 3.1 can be interpreted as saying that \(\operatorname{Shv}(X,\mathcal{C})\) is only compactly generated when \(X\) happens to come from algebraic geometry: **Proposition 3.4**.: _Let \(\mathcal{C}\) be a nontrivial compactly generated stable \(\infty\)-category, and let \(X\) be a \(\mathcal{C}\)-hypercomplete locally compact Hausdorff space. Then \(\operatorname{Shv}(X,\mathcal{C})\) is compactly generated if and only if \(X\) is the underlying space of a scheme._ Indeed, a locally compact Hausdorff space is totally disconnected if and only if it admits a basis of compact open sets, if and only if it is the underlying space of a scheme.
The second equivalence is the Hausdorff case of [10, Thm 9]. For the first equivalence, note in one direction that if \(X\) admits a basis of compact open sets, then every \(x\in X\) has \(\{x\}=\bigcap_{U\ni x}U\), with \(U\) ranging over compact open neighborhoods of \(x\). Since each compact open neighborhood is clopen, we thus have that \(\{x\}\) is a quasi-component in \(X\), and hence that \(X\) is totally disconnected. For the other direction, we must show that for every \(x\in X\) and every open neighborhood \(V\ni x\), there is a compact open \(W\) with \(x\in W\subseteq V\). Since \(X\) is locally compact, we may assume that \(V\) is precompact. By assumption \(\{x\}=\bigcap_{U\ni x}U\), with \(U\) ranging over clopen neighborhoods of \(x\). Since each of these \(U\) is in particular closed, we have that each \(U\cap\partial\overline{V}\) is compact. By the finite intersection property, it therefore follows from \(\bigcap_{U\ni x}U\cap\partial\overline{V}=\varnothing\) that for small enough clopen \(U\ni x\), \(U\cap\partial\overline{V}=\varnothing\). Hence \(U\cap\overline{V}=U\cap V\) is a compact open neighborhood of \(x\) contained in \(V\), as desired. ### When is \(\operatorname{Shv}(X)\) compactly generated? Proposition 3.1 says that the \(\infty\)-category of sheaves on \(X\) with coefficients in a stable \(\infty\)-category is rarely compactly generated when \(X\) is a locally compact Hausdorff space. If we had asked the same question 'without coefficients,' this would have been an easier observation: **Proposition 3.5**.: _Let \(X\) be a quasi-separated4 topological space. The \(\infty\)-topos \(\operatorname{Shv}(X)\) of sheaves of anima on \(X\) is compactly generated if and only if the sobrification of \(X\) is the underlying space of a scheme._ Footnote 4: Recall that a topological space \(X\) is said to be _quasi-separated_ if for any pair of compact open subsets \(U,V\subseteq X\), the intersection \(U\cap V\) is again compact. Note that all Hausdorff spaces are quasi-separated. Proof.: One direction is [11, Thm 7.2.3.6]. For the other direction, assume that \(\operatorname{Shv}(X)\) is compactly generated. Then so is the frame \(\mathcal{U}\simeq\tau_{\leq-1}\operatorname{Shv}(X)\) of open subsets of \(X\) by [11, Cor 5.5.7.4]. But this means that \(X\) admits a basis of compact open sets, and hence the sobrification of \(X\) is the underlying space of a scheme according to [10, Thm 9]. ## Appendix A Descent for maps with local sections In this short appendix, we prove a descent lemma that was used in the proof of Theorem 2.3, which is an immediate generalization of [1, Cor 4.1.6]. Let \(\mathcal{C}\) be a compactly generated \(\infty\)-category and let \(f\colon X\to Y\) be a continuous map of topological spaces. Recall that the _Čech nerve_ of \(f\) is the augmented simplicial topological space \(X_{\bullet}\) with \(X_{-1}=Y\) and \(p\)-simplices \[X_{p}=\underbrace{X\times_{Y}\cdots\times_{Y}X}_{p+1\text{ times}}\] for \(p\geq 0\), with face maps given by projections and degeneracy maps given in the obvious way. More formally, if \(\Delta_{+}\) is the category of finite (possibly empty) ordinals and \(\mathcal{T}\text{op}\) is the category of topological spaces, then \(X_{\bullet}\colon\Delta_{+}^{\text{op}}\to\mathcal{T}\text{op}\) is defined by right Kan extending \((f\colon X\to Y)\colon\Delta_{+,\leq 0}^{\text{op}}\to\mathcal{T}\text{op}\) along the inclusion functor \(\Delta_{+,\leq 0}^{\text{op}}\subset\Delta_{+}^{\text{op}}\).
Letting \(\operatorname{Shv}^{*}(-,\mathcal{C})\) denote the contravariant functor from \(\mathcal{T}\text{op}\) to \(\widehat{\operatorname{Cat}}_{\infty}\) given informally by \(X\mapsto\operatorname{Shv}(X,\mathcal{C})\) on objects and \(f\mapsto f^{*}\) on morphisms, we then have the following useful definition: **Definition A.1**.: The map \(f\) is of _\(\mathcal{C}\)-descent type_ if the canonical functor \[\operatorname{Shv}(X,\mathcal{C})\to\lim_{\Delta}\operatorname{Shv}^{*}(X_{\bullet},\mathcal{C})\] is an equivalence. Let us say that \(f\) _admits local sections_ if for every \(x\in X\), there is an open set \(U\ni x\) such that the restriction \(f\colon f^{-1}(U)\to U\) admits a section. **Proposition A.2**.: _If \(f\) admits local sections, then \(f\) is of \(\mathcal{C}\)-descent type._ Proof.: By ordinary Čech descent, we may assume that \(f\) admits a section globally on \(X\), after possibly passing to an open cover of \(X\) on which this is true. Let \(\varepsilon\colon Y\to X\) be a choice of such a section. The section \(\varepsilon\) allows us to endow the Čech nerve \(X_{\bullet}\) with the structure of a split augmented simplicial space, by defining the extra degeneracies \(h_{i}\colon X_{p}\to X_{p+1}\) by \[h_{i}(x_{0},\ldots,x_{p})=(x_{0},\ldots,x_{i-1},\varepsilon(y),x_{i},\ldots,x_{p})\] where \(y=f(x_{0})=\cdots=f(x_{p})\). It then follows that the split coaugmented cosimplicial \(\infty\)-category \(\operatorname{Shv}^{*}(X_{\bullet},\mathcal{C})\) is a limit diagram by [11, Lem 6.1.3.16]. **Corollary A.3**.: _Let \(\{A_{i}\}_{i\in I}\) be a collection of subsets of \(X\) such that \(X=\bigcup_{I}A_{i}^{\mathrm{o}}\), where \(A_{i}^{\mathrm{o}}\) is the interior of \(A_{i}\). Then the canonical map \(\coprod_{I}A_{i}\to X\) is of \(\mathcal{C}\)-descent type._ Proof.: The canonical map \(\coprod_{I}A_{i}\to X\) admits a section on \(A_{j}^{\mathrm{o}}\) given by \(A_{j}^{\mathrm{o}}\hookrightarrow A_{j}\to\coprod_{I}A_{i}\), where the second map is the canonical injection.
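To illustrate how Corollary A.3 feeds into the proof of Theorem 2.3, consider the smallest nontrivial case: a cover of \(X\) by two closed subsets \(A\) and \(B\) with \(X=A^{\circ}\cup B^{\circ}\). Truncating the descent limit at \(\Delta_{\leq 1}\), as in that proof, the corollary specializes (informally, and in our own formulation rather than one taken from the sources) to a gluing equivalence \[\operatorname{Shv}(X,\mathcal{C})\simeq\operatorname{Shv}(A,\mathcal{C})\times_{\operatorname{Shv}(A\cap B,\mathcal{C})}\operatorname{Shv}(B,\mathcal{C}),\] i.e. a \(\mathcal{C}\)-valued sheaf on \(X\) amounts to a sheaf on \(A\), a sheaf on \(B\), and an equivalence between their restrictions to \(A\cap B\).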
We describe the compact objects in the \(\infty\)-category of \(\mathcal{C}\)-valued sheaves \(\operatorname{Shv}(X,\mathcal{C})\) on a hypercomplete locally compact Hausdorff space \(X\), for \(\mathcal{C}\) a compactly generated stable \(\infty\)-category. When \(X\) is a non-compact connected manifold and \(\mathcal{C}\) is the unbounded derived category of a ring, our result recovers a result of Neeman. Furthermore, for \(X\) as above and \(\mathcal{C}\) a nontrivial compactly generated stable \(\infty\)-category, we show that \(\operatorname{Shv}(X,\mathcal{C})\) is compactly generated if and only if \(X\) is totally disconnected.
2309.15648
SANGEA: Scalable and Attributed Network Generation
The topic of synthetic graph generators (SGGs) has recently received much attention due to the wave of the latest breakthroughs in generative modelling. However, many state-of-the-art SGGs do not scale well with the graph size. Indeed, in the generation process, all the possible edges for a fixed number of nodes must often be considered, which scales in $\mathcal{O}(N^2)$, with $N$ being the number of nodes in the graph. For this reason, many state-of-the-art SGGs are not applicable to large graphs. In this paper, we present SANGEA, a sizeable synthetic graph generation framework which extends the applicability of any SGG to large graphs. By first splitting the large graph into communities, SANGEA trains one SGG per community, then links the community graphs back together to create a synthetic large graph. Our experiments show that the graphs generated by SANGEA have high similarity to the original graph, in terms of both topology and node feature distribution. Additionally, these generated graphs achieve high utility on downstream tasks such as link prediction. Finally, we provide a privacy assessment of the generated graphs to show that, even though they have excellent utility, they also achieve reasonable privacy scores.
Valentin Lemaire, Youssef Achenchabe, Lucas Ody, Houssem Eddine Souid, Gianmarco Aversano, Nicolas Posocco, Sabri Skhiri
2023-09-27T13:35:45
http://arxiv.org/abs/2309.15648v1
# SANGEA: Scalable and Attributed Network Generation ###### Abstract The topic of synthetic graph generators (SGGs) has recently received much attention due to the wave of the latest breakthroughs in generative modelling. However, many state-of-the-art SGGs do not scale well with the graph size. Indeed, in the generation process, all the possible edges for a fixed number of nodes must often be considered, which scales in \(\mathcal{O}(N^{2})\), with \(N\) being the number of nodes in the graph. For this reason, many state-of-the-art SGGs are not applicable to large graphs. In this paper, we present SANGEA, a sizeable synthetic graph generation framework which extends the applicability of any SGG to large graphs. By first splitting the large graph into communities, SANGEA trains one SGG per community, then links the community graphs back together to create a synthetic large graph. Our experiments show that the graphs generated by SANGEA have high similarity to the original graph, in terms of both topology and node feature distribution. Additionally, these generated graphs achieve high utility on downstream tasks such as link prediction. Finally, we provide a privacy assessment of the generated graphs to show that, even though they have excellent utility, they also achieve reasonable privacy scores. ## 1 Introduction Generative models are trained on real data and used to generate synthetic samples to be shared for training models on downstream tasks. Graphs are represented by their node feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\), and by their adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\), which scales in \(\mathcal{O}(N^{2})\), with \(N\) being the number of nodes in the graph and \(D\) being the number of node features. This quadratic complexity makes it very _challenging_ to deal with large graphs. The deep generative learning literature is rich in models that deal with synthetic graph generation (You et al., 2018; Liao et al., 2019; Liu et al., 2019; Goyal et al., 2020; Dai et al., 2020; Chen et al., 2022; Jo et al., 2022), but most state-of-the-art models still suffer from graphs' intrinsic scalability issues. Synthetic graph generators (SGGs) are generally classified in the literature into two main categories: one-shot and recurrent generators. The former usually require storing a dense adjacency matrix in memory, which is only feasible for graphs with few nodes. As for the latter, they take a long time to train because they recursively go through all the nodes in the graph during training and generation. Moreover, they are not node-invariant, so the ordering of the nodes matters considerably. In addition, since the topology creates dependencies between nodes within a graph, data parallelisation within a graph is not trivial and often causes overhead. In summary, graph generation is challenging to scale. One of the purposes of graph generation is to share data privately. However, the risks of re-identification still apply to synthetic datasets. Graphs are not immune to this phenomenon and have actually been shown to leak more private information than other data modalities, due to the information they carry in their topology (Wu et al., 2021). For this reason, in the present work, we also provide a privacy assessment methodology by means of the nearest neighbour distance ratio (NNDR) (Guzman et al., 2021), adapted to graphs.
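As background for the NNDR-based privacy assessment just mentioned, here is a minimal sketch of the ratio computation. It assumes that fixed-length records (e.g., node feature vectors or node embeddings) have already been extracted from the real and synthetic graphs; the function name, the direction of comparison, and the use of scikit-learn are our own choices, and the graph adaptation used in this paper may differ.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nndr(real: np.ndarray, synthetic: np.ndarray) -> np.ndarray:
    """For each synthetic record, the distance to its nearest real record
    divided by the distance to its second-nearest. Ratios close to 1
    suggest the record is not suspiciously close to one training point."""
    nn = NearestNeighbors(n_neighbors=2).fit(real)
    dist, _ = nn.kneighbors(synthetic)  # shape (n_synthetic, 2)
    return dist[:, 0] / np.maximum(dist[:, 1], 1e-12)
```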
Our goal in this paper is to generate, from a single large attributed graph, another large attributed graph that matches the statistical properties of the original one while being privacy-preserving. We present SANGEA (Scalable and Attributed Network GEnerAtion), a lightweight method to scale _any_ graph generative model to many nodes and edges under the assumption that the training graph presents a community structure. The essence of our approach is dividing the input graph into densely connected communities that can be generated independently. Then, SANGEA learns to model inter-community interactions based on independent subgraphs. Since this divide-and-conquer strategy may not leverage joint distributions of the communities and the links between them, SANGEA iteratively improves the generated graph until it matches the original distribution. SANGEA offers numerous advantages: i) it limits the original generation to different graphs with fewer nodes, allowing the use of any high-quality but potentially unscalable state-of-the-art generation method; ii) only one-shot generation models are used to predict links between communities and to perform the updates, making them fast to learn and fast at inference; iii) only node-invariant models are used, making the process more generalizable and less prone to overfitting, which is a challenge as we have only one training sample; iv) our refinement process conditions the updates on the synthetic graph in a similar manner to recurrent methods, thus removing the need to sample from a high-dimensional joint distribution like other one-shot generation methods do; v) empirical results show that our proposed method achieves high privacy scores. The contribution of this paper is threefold. Firstly, it proposes a novel approach to make _any_ state-of-the-art model scalable to large graphs that present a community structure. Secondly, extensive experiments are presented on five models from the literature, comparing our proposed approach against these models to show that we match their quality while allowing generative model training and sampling for graphs of up to 90,000 nodes and 450,000 edges. Thirdly, a privacy assessment has been performed for our generated data. The rest of this paper is organized as follows. The next section presents essential works related to deep synthetic graph generators. Section 3 details our proposed model by explaining our training and generation procedures. Then, Section 4 presents the experimental setup and reports results with analyses. Section 5 concludes by highlighting the main findings of this research and by discussing directions for future works.

## 2 Related Works

Many approaches have been considered in synthetic graph generation: on the one hand, traditional statistical methods; on the other, deep learning-based methods such as auto-encoders, diffusion models, and auto-regressive methods, many of which were adapted from the tabular domain to the graph domain. First, the Barabasi-Albert model (Albert and Barabasi, 2002) was proposed to capture the scale-free property observed in numerous real-world graphs. This property states that the degree distribution follows a power law. The Barabasi-Albert model has two parameters: the number of nodes and the number of edges to be added at each iteration. The graph is initialized with a fixed number of connected nodes. At each iteration, a new node is added and is connected to the existing nodes with probability proportional to their current degree.
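As an illustration of the preferential-attachment process just described, here is a minimal sketch using networkx (our example; the parameter values are arbitrary):

```python
import networkx as nx

# Barabasi-Albert graph: n nodes; each new node attaches m edges,
# preferentially to high-degree nodes.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)

degrees = [d for _, d in G.degree()]
print(max(degrees))  # heavy-tailed: a few hubs with very high degree
```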
Then, Chen et al. (2007) introduced a model to deal with the small-world property, namely the characteristics of high network clustering and short characteristic path length. The model consists of a regular lattice, typically a one-dimensional lattice with almost periodic boundary conditions. In other words, each vertex is connected to a fixed number of the vertices nearest to it, and a small number of 'shortcut' bonds are added between randomly chosen vertices. BTER (Kolda et al., 2014) exploits the same concepts as the well-known Erdos-Renyi generation technique (Erdos, 1960) but in a two-level way, first modelling communities and then linking them together. Another statistical method is DANCer (Benyahia et al., 2016), which creates a complete graph using preferential attachment (Barabasi and Albert, 1999) and then performs micro (edge) and macro (community) updates so that the final graph matches the distribution of a reference. While these statistical techniques leverage important properties of large graphs, we believe they lack the expressiveness of deep models, and they do not generate node attributes. On the other hand, deep learning models were proposed to learn graph generative models; the following paragraphs classify them into different families. In the Auto-Encoder (AE) family, the first Graph Variational AE (GVAE) (Simonovsky and Komodakis, 2018) proposed generating a graph by sampling independent node representations from a known latent distribution and decoding them into a graph. Some other approaches built upon this model achieved better graph quality, for example by extending the loss with higher-level constraints on the graph (Zahirnia et al., 2022). However, they all suffer from having to store a dense adjacency matrix, at least at generation time, which scales quadratically with the number of nodes, making them unscalable. More recently, many works have been released on performing graph generation with diffusion methods: NVDiff (Chen et al., 2022), GDSS (Jo et al., 2022), EDP-GNN (Niu et al., 2020) and DiGress (Vignac et al., 2023). These models learn a reversible process from a graph representation to a known distribution; however, these methods too suffer from the need to store the dense adjacency matrix, both at train time and at generation time, making them unscalable. There also exist SGGs based on reinforcement learning (Xu et al., 2020), adversarial networks (Cao and Kipf, 2022) or flow (Shah and Koltun, 2020). However, none of those works is currently considered state-of-the-art for large graph generation (Faez et al., 2021). In addition, their application domain is limited to molecular graph generation. Another family of SGGs is auto-regressive (AR) models, such as GraphRNN (You et al., 2018). These embed each node in a recursive manner, and in doing so they update a state vector to condition the generation at each step. Some of those models, like GRAN (Liao et al., 2019), have been extended with attention layers for more expressiveness. These models are very efficient at modelling small graphs, as they do not suffer from the independent generation (of nodes/edges) of one-shot generation methods. However, they often fail to represent high-level characteristics in the generated graphs, as long-term dependencies are difficult to capture by recurrent models. Some works enable recurrent models to accurately represent large graphs. GraphGen (Goyal et al., 2020) represents graphs by their minimum DFS codes1, which drastically reduces the size of the input space to the model.
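To see why the one-shot AE and diffusion families above hit a memory wall, here is a minimal sketch of a GVAE-style one-shot generation step (ours; it uses the classic inner-product decoder as a stand-in for the decoders discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, latent_dim = 1_000, 16

# One-shot generation: sample one latent vector per node...
Z = rng.normal(size=(N, latent_dim))

# ...then score *every* node pair at once: sigmoid(Z @ Z.T) is an N x N
# dense matrix of edge probabilities -- the O(N^2) bottleneck.
probs = 1.0 / (1.0 + np.exp(-Z @ Z.T))
A = (rng.random((N, N)) < probs).astype(np.int8)
np.fill_diagonal(A, 0)
A = np.triu(A) + np.triu(A, 1).T  # symmetrize for an undirected graph

print(A.shape, A.sum() // 2)  # the dense N x N matrix must be materialized
```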
BiGG (Dai et al., 2020) is an auto-regressive model based on GraphRNN (You et al., 2018) that represents the recursive process by binary trees, which reduces the number of recursive steps. They also claim to scale with \(\mathcal{O}(\sqrt{M\log N})\) memory-wise, \(M\) being the number of edges in the graph. However, neither GraphGen nor BiGG is able to generate node features in their original formulation2.

Footnote 1: A graph (and its isomorphisms) can be uniquely identified by its minimum DFS code, without the need for an arbitrary ordering of nodes or edges.

Footnote 2: GraphGen is able to generate node and edge labels but not feature vectors.

A few works in the literature use random walks to learn generative models. These have the advantage of being invariant under node reordering. Additionally, random walks only include the nonzero entries of the adjacency matrix, thus efficiently exploiting the sparsity of real-world graphs. Bojchevski et al. (2018) proposed NetGAN, which trains a generator of random walks and a discriminator that distinguishes synthetic from real random walks. After training, the generator is used to sample a large number of random walks, and a count matrix is computed for all edges. Then a threshold strategy is used to binarize this matrix. Finally, there are works that combine hierarchical graph structure and deep models to tackle the scaling issues of graphs while preserving good expressiveness. One such model applies this hierarchical idea with chemistry motifs (Jin et al., 2020) for molecule generation. Similarly, but not restricted to molecules, HiGen (Karami and Luo, 2023) proposes an AR-based method to exploit graphs' hierarchical structure. It generates a high-level graph of communities in the first stage, then it extends each node into a community and each edge into inter-community links with a recursive model, potentially multiple times if there are more than two levels. However, they condition the expansion in the second stage only on the representation of the previous level and not on what has already been expanded elsewhere in the graph, nor do they show the quality of the attribute generation. Lastly, GELLCELL (Hammarstedt, 2022) proposes a technique for generating each community with the CELL model (Rendsburg et al., 2020) and then connecting those communities by using a link prediction model based on XGBoost. Unfortunately, CELL (Rendsburg et al., 2020) is based on statistical measures and was not shown to match the state of the art, and the linking of the communities is agnostic of the context around the nodes. Our work is the first to extend the BTER principle of two-step, top-down graph generation using deep networks, combining the efficiency of one-shot models (by means of model architecture choices) and the precision of conditional generation thanks to the refinement process. Our meta-algorithm is community-generator agnostic, unlike existing approaches in the literature. We show that the graphs generated using our method exhibit statistical similarity in terms of topology and node features, while also leading to low privacy risks.
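As a concrete illustration of the NetGAN-style random-walk pipeline described above (our example, not the authors' implementation):

```python
import random
import networkx as nx

def sample_random_walks(G, num_walks=1000, walk_len=16, seed=0):
    """Fixed-length random walks; only neighbor lists (the nonzero entries
    of the adjacency matrix) are ever touched, exploiting sparsity."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    walks = []
    for _ in range(num_walks):
        v = rng.choice(nodes)
        walk = [v]
        for _ in range(walk_len - 1):
            nbrs = list(G.neighbors(v))
            if not nbrs:
                break
            v = rng.choice(nbrs)
            walk.append(v)
        walks.append(walk)
    return walks

G = nx.karate_club_graph()
walks = sample_random_walks(G)
# NetGAN-style postprocessing: count traversed edges, then threshold.
counts = {}
for w in walks:
    for u, v in zip(w, w[1:]):
        e = (min(u, v), max(u, v))
        counts[e] = counts.get(e, 0) + 1
print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])
```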
## 3 Our Model

This section presents our proposed model for large-scale synthetic graph generation. The essence of our method is described in Section 3.1; more details about the training and the generation procedures are then given in Section 3.2 and Section 3.3, respectively.

### The SANGEA Algorithm

We propose a divide-and-conquer strategy for generating graphs. The main idea is to separate the graph into different communities of controllable size. Usually, a graph is more densely connected within these communities than outside of them. Each community is used to train one SGG model. The SGG models are trained independently. Once trained, they are used to generate a synthetic version of their respective community. Then, the synthetic communities are patched together using a link prediction model. Finally, we refine the synthetic graph's links until we are satisfied with the quality of the generated graph. The pseudo-code of SANGEA's training and generation steps is reported in Algorithms 1 and 2, respectively. A detailed explanation of these algorithms follows in Sections 3.2 and 3.3. With this approach, we limit SGGs to graphs with fewer nodes, namely the communities. Then, we use link prediction models to link the generated communities, as these models are usually more lightweight at training and inference time than SGGs. Finally, in the refinement step, we use extra link prediction models (refiners) to refine the final synthetic graph's topology. The refiners are link prediction models that can be trained on \(k\)-hop neighbourhoods, rather than on a full graph, similarly to recursive models. Besides, the SGGs are trained on communities. Thus, at no point does the full graph's _dense_ adjacency matrix need to be stored in memory when training SANGEA. Indeed, thanks to the community structure of the generation, we limit the memory cost of inference (generation) to the square of the size of the largest community rather than that of the full graph, as further explained in the memory section. During training, the _sparse_ representation of the graph is sufficient to perform all operations.

### Training Process

Algorithm 1 shows the entire training process of the SANGEA algorithm. This process is also depicted in Figure 1. The training is divided into 5 phases.

```
Algorithm 1: SANGEA learning process

Notation: G[c_i != c_j]: inter-community edges of G.
          G[c = k]:      node subgraph of G, keeping only nodes whose community is k.
Input:    a large graph G
Output:   a set of trained community generators, a base linker and a set of k-refiners

 1: c <- assign_communities(G)                                          // Phase 1
 2: C <- unique(c)
 3: for k in [1, ..., C] do
 4:     g_k <- G[c = k]
 5:     community_generators[k] <- train_generator(g_k)                 // Phase 2
 6: end for
 7: base_linker <- train_autoencoder(U_{1<=k<=C} g_k, G[c_i != c_j])    // Phase 3
 8: base_refiner <- train_autoencoder(G, G)                             // Phase 4
 9: for k in [1, ..., C] do
10:     k_refiners[k] <- finetune_autoencoder(base_refiner, G, g_k)     // Phase 5 (a-b)
11: end for
12: k_refiners["inter"] <- finetune_autoencoder(base_refiner, G, G[c_i != c_j])  // Phase 5 (c)

Comment: the last argument of train_autoencoder and finetune_autoencoder is the
set of edges used as labels in the loss.
```

_i) Louvain Partitioning:_ line 1 in Algorithm 1 shows a call to \(assign\_communities\). This function assigns to each node a label, i.e. a community, as given by the Louvain method (Blondel et al., 2008). This greedy algorithm, designed for very large graphs, aims at optimizing modularity, which measures how densely connected the communities are within themselves and how sparse the links between different communities are.
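A minimal sketch of this partitioning step, assuming a recent networkx that ships `louvain_communities` (the `resolution` knob is the one the paper tunes per dataset, see Section 4):

```python
import networkx as nx

G = nx.les_miserables_graph()  # stand-in for the large training graph

# Louvain modularity maximization; `resolution` controls community size
# (the paper tunes it per dataset so communities stay around 1,000 nodes).
communities = nx.community.louvain_communities(G, resolution=1.0, seed=0)

subgraphs = [G.subgraph(c).copy() for c in communities]  # one per community SGG
print([sg.number_of_nodes() for sg in subgraphs])
```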
_ii) Community Generator Training:_ once the communities have been found, the original, large graph can be separated into independent, disconnected components that correspond to the graphs defined by the nodes of each community. These are graphs of smaller size than the original graph, thus any generation technique that only applies to small graphs can be trained on them. In this phase, we train one generator per community, with each generator being totally independent of the others. Note that the rest of the SANGEA algorithm is agnostic of what method is used to generate the communities.

Figure 1: Full training procedure of the SANGEA generation method.

_iii) Base Linker Training:_ the base linker is a graph autoencoder (GAE) model composed of a GNN encoder module and an MLP decoder module. This model is trained for the link prediction task. However, no message passing (Gilmer et al., 2017) over inter-community links is allowed at this stage: message passing is allowed only over the intra-community links, and the model is trained to predict the inter-community links only.

_iv) Base Refiner Training:_ in Phase 4, we create a new GAE, possibly with different hyperparameters than the base linker, also trained for the link prediction task. However, the message passing now goes through the whole training graph, and the edges used as training samples in the loss are also all the edges of the original training graph. This model is never used for prediction. However, it is used in Phase 5 of the training, described below.

_v) k-Refiners Fine-Tuning:_ in this phase, the base refiner learned in Phase 4 is copied and then further trained using the entire original graph for message passing and the creation of embeddings. However, only a specific subset of links is used as samples in the loss. Specifically, we create \(C+1\) copies of the base refiner: one for the links within each community and one for the inter-community links. This fine-tuning approach has two main goals: (i) it is a reasonable assumption that what is learned on the whole graph is transferable to specific parts of that graph, especially if the model is fine-tuned on that part of the graph; (ii) some communities can be tiny, and training a model on a few samples without overfitting is a complicated task. Using this base-refiner/fine-tuning approach, we still obtain good generalization results for those communities.
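A minimal sketch of the Phase 3 base linker in PyTorch Geometric (our example; the paper uses an MLP decoder, whereas PyG's `GAE` defaults to an inner-product decoder, and all dimensions here are illustrative):

```python
import torch
from torch_geometric.nn import GCNConv, GAE

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

model = GAE(Encoder(in_dim=767, hid_dim=64, out_dim=16))
optim = torch.optim.Adam(model.parameters(), lr=0.01)

def train_step(x, mp_edges, label_edges):
    """One base-linker step: message passing restricted to intra-community
    edges (mp_edges); the loss is computed on inter-community edges only."""
    model.train()
    optim.zero_grad()
    z = model.encode(x, mp_edges)
    loss = model.recon_loss(z, label_edges)
    loss.backward()
    optim.step()
    return float(loss)
```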
### Generation Process

Once the training has been completed, it is time to generate a large graph using these trained models.

```
Algorithm 2: SANGEA generation process

Notation: G_hat[c_i != c_j]: inter-community edges of G_hat.
          G_hat[(c_i = label) and (c_j = label)]: set of links of G_hat where both
          end nodes are within community `label`.
          K_label, K_inter: set of all possible edges that match that label (i.e.
          all possible inter-community edges, or all possible edges within a community).
Input:    the models trained at the training phase, C the number of communities,
          R the number of refinement steps.
Output:   generated graph G_hat.

 1: for k in [1, ..., C] do
 2:     g_hat_k <- community_generators[k].generate()
 3: end for
 4: G_hat <- U_{1<=k<=C} g_hat_k
 5: s <- base_linker.score_links(G_hat, K_inter)
 6: G_hat <- G_hat U sample(s)
 7: for r in [1, ..., R] do
 8:     for label in [1, ..., C, "inter"] do
 9:         if label is "inter" then
10:             s_hat <- 1 - k_refiners["inter"].score_links(G_hat, G_hat[c_i != c_j])
11:         else
12:             s_hat <- 1 - k_refiners[label].score_links(G_hat, G_hat[(c_i = label) and (c_j = label)])
13:         end if
14:         s <- k_refiners[label].score_links(G_hat, K_label)
15:         G_hat <- G_hat \ sample(s_hat)    // drop low-scored existing edges
16:         G_hat <- G_hat U sample(s)        // add high-scored candidate edges
17:     end for
18: end for
```

_i) Base Generation:_ graph generation using SANGEA happens in two phases. In the first phase, for each community, a graph is generated using the corresponding community generator, resulting in a collection of synthetic graphs, one per community. This collection of disconnected components is then used for the message passing of the base linker, and the inter-community edges are predicted in one shot. We generate as many edges as there were in the original graph.

_ii) Refinement:_ due to the independence of the community generators, and due to the base linker's lack of access to the whole graph (in terms of message passing), we designed a refinement phase where we iteratively update the graph by means of a new link predictor that, this time, has access to all links for the message passing. Therefore, at each refinement step, we input the full graph to all the \(k\)-refiners, each updating a different part of the graph. Doing this, we condition the updates of the links on the current state of the graph, in a way that is analogous to recurrent models. However, this is all done using one-shot models. We can perform this phase \(R\) times for the desired number of refinement steps. Each refinement step replaces edges that have low scores with ones that have high scores, with the objective of improving the final topology of the generated graph. The number of refinements controls the trade-off between privacy and generation quality, and would in fact depend on the actual downstream use of the generated data.
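The edge-swapping logic of one refinement step can be sketched as follows (ours; `score_fn` is a hypothetical stand-in for a trained k-refiner, and the sampling scheme is one plausible reading of lines 14-16 of Algorithm 2):

```python
import numpy as np

rng = np.random.default_rng(0)

def refine_once(edges, candidates, score_fn, budget):
    """One refinement step (a sketch): drop `budget` low-scored existing
    edges and add `budget` high-scored candidate edges, keeping the edge
    count constant while improving the topology."""
    edges = list(edges)
    drop_p = 1.0 - score_fn(edges)               # 1 - P(edge): removal scores
    drop = set(rng.choice(len(edges), size=budget, replace=False,
                          p=drop_p / drop_p.sum()).tolist())
    kept = [e for i, e in enumerate(edges) if i not in drop]

    cand = [c for c in candidates if c not in set(kept)]
    add_p = score_fn(cand)                       # P(edge): addition scores
    add = rng.choice(len(cand), size=budget, replace=False,
                     p=add_p / add_p.sum())
    return kept + [cand[i] for i in add]

# Toy stand-in for a trained k-refiner: prefers edges between close node ids.
score_fn = lambda es: np.array([1.0 / (1 + abs(u - v)) for u, v in es])
edges = [(0, 1), (0, 9), (2, 3)]
candidates = [(1, 2), (3, 4), (5, 9)]
print(refine_once(edges, candidates, score_fn, budget=1))
```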
### Memory Usage

_i) At Training Time:_ most one-shot generation techniques usually require the model to store a dense adjacency matrix in memory, which causes these models not to scale very well. Here, we show the theoretical memory upper-bound usage of SANGEA's full procedure. Let us imagine we have a large graph of \(N\) nodes and \(M\) edges. The Louvain method, which runs with \(\mathcal{O}(M)\) memory cost, yields communities for the large graph, the largest of which has \(N_{c^{*}}\) nodes, with \(c^{*}\) being the biggest community. Because we control the size of the communities, we can assume that \(N_{c^{*}}\ll N\) (Lambiotte et al., 2014). In the worst case, the method used as a community generator saves the whole dense adjacency matrix, which would imply a memory consumption proportional to \(\mathcal{O}(N_{c^{*}}^{2})\). For the base linker, the full training graph goes through the GNN layers, which store one latent representation for each node. This means that its memory impact is proportional to \(N\); then, for each edge used for the loss, the pair of corresponding node representations goes through an MLP, which only needs to store gradients per node, having a memory cost proportional to \(N\) as well. The memory cost can be further reduced. In order to compute a node embedding, one may store the \(k\)-hop neighbourhood of that node. Because loss samples are edges, to perform a backpropagation we only need the \(k\)-hop neighbourhoods of the two end nodes of that edge. This bounds the memory impact by \(\mathcal{O}(N_{k})\), with \(N_{k}\) being the maximum number of nodes over all \(k\)-hop neighbourhoods. Assuming sparsity, \(N_{k}\) is often much lower than \(N\). This memory frugality comes at a computational expense. In practice, the value of \(k\) can be parameterized to match the memory capacity of the device running the computation. With this method, the whole memory impact of the base linker at training time is \(\mathcal{O}(N_{k})\). This holds for all other GAEs of the training process. This shows that at training time, our models can run with memory complexity bounded by \(\mathcal{O}(\max(N_{c^{*}}^{2},M))\), omitting \(N_{k}\) as it can be assumed to be smaller than \(M\).

_ii) At Inference Time:_ in the worst case, each community generator requires generating a dense adjacency matrix for its community. This means that the memory impact is bounded by \(\mathcal{O}(N_{c^{*}}^{2})\). Then comes the base linker. This model scores all possible edges not within communities. The number of such edges is \(N_{inter}=N^{2}-\sum_{i=1}^{C}N_{i}^{2}\), which grows in \(\mathcal{O}(N^{2})\). However, rather than scoring all edges at once and then sampling, it is possible to score all edges between a pair of communities, sample amongst those, and then discard the memory used for those scores. This means that at a given time, we only store all the possible edges between a pair of communities, thus making the generation bounded in memory by \(\mathcal{O}(N_{c^{*}}^{2})\). Applying the same reasoning to the refinement, for each community refiner we are bounded in memory by \(\mathcal{O}(N_{i}^{2})\), and for the inter-refiner, using the same trick as for the base linker, we are bounded by \(\mathcal{O}(N_{c^{*}}^{2})\). Combining inference memory complexities, we obtain a final memory upper bound growing in \(\mathcal{O}(N_{c^{*}}^{2})\) for the whole process.
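A back-of-the-envelope check of this bound (our numbers; we assume float32 dense adjacency matrices and communities capped at roughly a thousand nodes, as in Section 4):

```python
N = 89_250        # full graph (Flickr scale)
N_c_star = 1_000  # largest community after Louvain partitioning

bytes_per_entry = 4  # float32
full_graph_gb = N * N * bytes_per_entry / 1e9
per_community_mb = N_c_star ** 2 * bytes_per_entry / 1e6

print(f"dense A, full graph    : {full_graph_gb:6.1f} GB")    # ~31.9 GB
print(f"dense A, one community : {per_community_mb:6.1f} MB")  # ~4.0 MB
```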
## 4 Experiments

In the present work, we present an approach to scale up any SGG to large graphs. In this section, we present the experiments that we designed to validate that our method is indeed efficient and generates high-quality samples. This section aims at answering the following research questions (**RQs**):

1. Which models in the literature handle large graphs?
2. Can we make state-of-the-art models scalable to large graphs thanks to our approach?
3. Is one approach better than the other in terms of utility and privacy?
4. Does our approach bring performance gains compared to state-of-the-art approaches that deal with large graphs?

### Data Description

Table 1 lists the various datasets used for our empirical evaluation and statistics considering their _maximum connected component_. We chose one-graph datasets from Fey and Lenssen (2019). Cora and CiteSeer are citation networks; IMDB's nodes represent movies, actors, or directors. Amazon's3 nodes represent products, and its edges represent co-purchase relations between products. The Flickr dataset is an ensemble of images presented in a graph where the nodes are the images, and the edges are assigned based on the presence of shared properties (e.g., the same geographical area, the same gallery, or comments made by the same user).

| | Nodes | Edges | Features | Classes |
| --- | --- | --- | --- | --- |
| Cora | 2485 | 5069 | 1433 | 7 |
| CiteSeer | 2120 | 3679 | 3703 | 6 |
| IMDB | 10384 | 16097 | 3066 | 4 |
| Amazon Computers | 13381 | 245778 | 767 | 10 |
| Flickr | 89250 | 449878 | 500 | 7 |

Table 1: Summary of the datasets (_maximum connected component_).

### Evaluation Metrics

Multiple aspects have been considered to compare the proposed approaches: first, the structural and attribute similarity between the generated and original graphs (Thompson et al., 2022); second, the utility of the generated graphs for downstream tasks; third, training time, generation time, and memory consumption, in order to assess the ability to handle large graphs; and finally, the privacy risk associated with the generated graph. Multiple metrics to assess the structural similarity between the generated and original graphs have been considered in our experiments, namely degree histogram, degree centrality, closeness centrality, eigenvector centrality, and clustering coefficient. The graphs are represented as normalized versions of these metrics and then compared using the Wasserstein distance (Vallender, 1974). Compared to the widely used Maximum Mean Discrepancy (MMD), the Wasserstein distance is more reliable. Indeed, the MMD requires additional parameters to be chosen, and issues of sensitivity to these choices have recently been raised by O'Bray et al. (2021). Node attribute similarity is implicitly taken into account by a distance measure over node embedding distributions, specifically because node embeddings also depend on node features. Nevertheless, for this comparison we opted for the MMD, as it does not require the complex tasks of binning and then computing optimal transport between histograms in a multidimensional space. Two versions of a Graph Convolutional Network (GCN), one untrained and one trained on the link prediction task, are used to embed the input graphs into an embedding space of size 16. Then the MMD is used to compute the distance between the embeddings of the generated graph and the ones associated with the original graph. We also assess graph utility by training a GNN model on the generated graphs on the link prediction task and testing on the original graph. Specifically, we train a VGAE link predictor on the generated graph and measure AUROC on the original graph.
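One of the topology metrics above can be reproduced in a few lines (our sketch; the paper's exact normalization may differ):

```python
import networkx as nx
import numpy as np
from scipy.stats import wasserstein_distance

def degree_centrality_ws(G_real, G_synth):
    """Wasserstein distance between degree-centrality distributions."""
    a = np.array(list(nx.degree_centrality(G_real).values()))
    b = np.array(list(nx.degree_centrality(G_synth).values()))
    return wasserstein_distance(a, b)

G_real = nx.barabasi_albert_graph(500, 3, seed=0)
G_synth = nx.barabasi_albert_graph(500, 3, seed=1)
print(degree_centrality_ws(G_real, G_synth))
```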
Since our use case consists in training an SGG from a single training graph, we evaluate the privacy concerns that it may imply. We choose to evaluate privacy using the Nearest Neighbour Distance Ratio (NNDR) on node embeddings of the original and generated graphs. This metric is popular in the privacy domain (Gussenbauer et al., 2021). The full methodology works as follows: we first train a GCN embedder (details in the supplementary material) on the original data on the node classification task. Then, for each node embedding in the generated set, we compute the distance to all nodes in the original set using the Euclidean distance. Thus, we have a distance vector \(\mathbf{d}^{i}\in\mathbb{R}^{N}\) for the \(i\)-th node in the generated graph, with \(N\) being the number of nodes of the original graph. Finally, if \(d^{i}_{1}\) and \(d^{i}_{2}\) are respectively the smallest and second smallest distances in \(\mathbf{d}^{i}\), the NNDR for node \(i\) can be computed as \(NNDR_{i}=\frac{d^{i}_{1}}{d^{i}_{2}}\). For each generated node, the NNDR measures the ratio between the distances to its two closest neighbours in the training set. It can be interpreted as _the higher the ratio, the harder it will be to infer that a given target node was a member of the SGG's training set_. Since this metric is dependent on the embedder chosen, we do the following: we estimate the NNDR between the original graph and itself, then between the original graph and perturbed versions of itself, with increasing perturbation strength. We then choose the embedder that shows (i) a low NNDR value between the original graph and itself, and (ii) an increasing NNDR value on increasingly perturbed versions of the original data.
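A minimal sketch of the NNDR computation (ours; `orig` and `near_copies` stand in for the GCN embeddings of original and generated nodes):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nndr(gen_emb, orig_emb):
    """NNDR_i = d_1 / d_2: ratio of the distances from each generated node
    embedding to its nearest and second-nearest original embeddings."""
    nn = NearestNeighbors(n_neighbors=2).fit(orig_emb)
    dists, _ = nn.kneighbors(gen_emb)    # shape: (n_generated, 2)
    return dists[:, 0] / dists[:, 1]     # values in [0, 1]; higher is safer

rng = np.random.default_rng(0)
orig = rng.normal(size=(1000, 16))                      # original embeddings
near_copies = orig + 0.01 * rng.normal(size=orig.shape)
print(nndr(near_copies, orig).mean())  # close to 0: memorization risk
```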
### Experimental Protocol

All experiments are performed on a machine running an Intel(R) Xeon(R) Gold 6134 CPU@3.20GHz processor with 32 physical cores, one Nvidia Tesla V100 card with 32GB of GPU memory, and 128GB of RAM, with the Ubuntu 18.04.6 operating system. The first step in our experiments is to assess the capability of these models to deal with large graphs. Five approaches (Dai et al., 2020; Jo et al., 2022; Chen et al., 2022; Goyal et al., 2020; Zahirnia et al., 2022) have been considered in our experiments.4 Based on the results of this step, competitors to our approach will be identified according to their ability to handle big graphs. Then, these models will be used as community generators within our proposed approach. First, communities are identified using the Louvain algorithm (Blondel et al., 2008). They are used as training examples for the community generators. Each community generator is trained on one subgraph (i.e. community), and this is done for all considered state-of-the-art approaches. Once communities are generated, our proposed approach is used to generate the final version of the graph (more details in Section 3). The next step is to compare the different variants of our proposed approach on multiple dimensions: statistical properties, utility metrics, scalability, and privacy risk. In addition, a comparison to the selected state-of-the-art approaches is performed. In our experiments we set the number of refinements to \(R=30\); the model parameters for all experiments are reported in Table 13 in the supplementary material. We search through all of those values through hyper-parameter optimization using the Optuna framework for the downstream task of link prediction. For the MMD metric, we used a Gaussian RBF with parameter \(\sigma=0.5\); for the community partitioning, we used a resolution parameter of 1 for the Cora and CiteSeer datasets, to ensure sufficiently large communities, and of 1.5, 5.5 and 5.5 for the IMDB, Amazon and Flickr datasets respectively, to obtain communities of around a thousand nodes at most.

Footnote 4: Our most direct competitors are HiGen (Karami and Luo, 2023) and GELLCELL (Hammarstedt, 2022), but neither of them has code publicly available, nor do they report results on large, attributed real-world graphs. We therefore consider GraphGen and BiGG as our closest scalable competitors.

### Results and Analysis

In this section, detailed answers to the questions raised in the introduction of Section 4 are given, supported by numerical results.

_Scalability:_ in Table 2, one can notice that three out of the five tested approaches fail to train the generative models on IMDB's maximum connected graph; only GraphGen and BiGG are capable of dealing with this graph. It is also clear that all implemented state-of-the-art approaches fail to train on Amazon Computers' graph, which has 15 times more edges than IMDB. We refer the reader to the supplementary material for a similar result on the Flickr dataset. We note that the Flickr dataset is very large and has been used only to provide scalability results; the communities were copied from the original graph and were linked together using our proposed approach. We conclude from these results that we have only two competitors for medium-sized graphs. Furthermore, it is not possible to train current state-of-the-art approaches on big graphs, given our computational capacity and the limitations of existing models from the literature. For the running-time metrics of the state-of-the-art approaches and our model, we refer the interested reader to the appendix. SANGEA is able to generate graphs of the size of Amazon Computers, as well as of Flickr, which has over 6 times the number of nodes and twice the edges; all state-of-the-art approaches considered here, however, fail to do so (running times are available in the supplementary material). While the memory usage is greatly improved, all the independent steps of SANGEA add a considerable time overhead. This overhead can be greatly reduced with parallel training, which is facilitated by SANGEA's innate design. No matter the size of the graph, we scale with the size of the largest community, which we control, thus enabling our model to choose where to set the time-memory cost trade-off.

_Graph generation quality:_ Table 2 reports results on the structural and attribute similarity between the original training graph and the generated one on the IMDB and Amazon Computers datasets (Thompson et al., 2022). SANGEA allows for node feature generation via the community generators; this is the case for Sa(GDSS) and Sa(NVDiff), while the other scalable models do not generate node features. Our method proves superior on the MMD-over-GCN-embeddings metrics. On the IMDB dataset, Sa(GDSS) closely follows GraphGen when it does not outperform it on most statistical topology metrics, and outperforms BiGG on most of them. The increased performance on the downstream task of link prediction for the IMDB dataset might again be explained by the lack of node feature generation in the competitors' models. As to the Amazon Computers dataset, our models show superior performance, since no other model could scale to that dataset size. These results suggest that on medium to large graphs, SANGEA can match, and even surpass, other state-of-the-art methods.

| Metric | GraphGen (IMDB) | BiGG (IMDB) | Sa(GDSS) (IMDB) | Sa(NVDiff) (IMDB) | Sa(GDSS) (Amazon) | Sa(NVDiff) (Amazon) |
| --- | --- | --- | --- | --- | --- | --- |
| MMD (tr.) | - | - | **0.305** | 0.379 | 0.419 | **0.375** |
| MMD (untr.) | - | - | **0.00210** | 0.0713 | 0.0426 | **0.0329** |
| WS spectral | **1.53e-3** | 9.88e-3 | 5.66e-3 | 3.50e-3 | **1.68e-3** | 1.74e-3 |
| WS deg. hist. | 125e-4 | 29.7e-4 | **8.58e-4** | 18.2e-4 | **8.56e-5** | 11.0e-5 |
| WS deg. cent. | 3.19e-5 | 1.11e-5 | **1.05e-5** | 5.40e-5 | **2.04e-6** | 9.88e-6 |
| WS clos. cent. | 26.34e-5 | 8.87e-6 | **2.84e-6** | 10.0e-6 | **2.76e-6** | 12.8e-6 |
| WS eig. cent. | **1.62e-5** | 5.07e-5 | 4.12e-5 | 3.59e-5 | **1.78e-5** | 3.33e-5 |
| WS clust. coeff. | 29.7e-6 | **1.44e-6** | 15.1e-6 | 20.7e-6 | **4.24e-5** | 5.78e-5 |
| AUROC (LP) | 0.74 | 0.73 | **0.76** | 0.74 | 0.814 | **0.833** |

Table 2: Structural and attribute similarity results on the IMDB and Amazon Computers datasets. OOM stands for Out Of Memory and TO stands for TimeOut; any competitor among GraphGen, BiGG, NVDiff, GVAEmm or GDSS not shown in the table ran into an OOM or TO (on Amazon Computers, this is the case for all competitors). Sa(GDSS) stands for SANGEA using GDSS as community generator.

_Privacy:_ Table 3 shows the NNDR values obtained on 4 datasets, using SANGEA and also two more models from the state of the art, NVDiff and GDSS, for which the results are reported on generated communities only (because we were not able to generate the full graph with these models). We compare the generated graphs with three baselines: the original graph and two perturbed versions of that graph, one at 50% and one at 75%.
Perturbation at \(p\%\) corresponds to the original data where \(p\%\) of the edges have been replaced by random ones, and \(p\%\) of the node feature matrix has been changed. The table shows that, for both of our generated graphs and for each dataset, they always at least match the 50% perturbation, often reaching the level of, or surpassing, the 75% perturbation. This shows that even though our method generates graphs of high utility and close statistical properties, it achieves, for the individual nodes of the training graph, the privacy level of at least a 50% perturbation of the graph, which is higher than the privacy levels (NNDR values) reached by using NVDiff or GDSS alone.

| | CiteSeer | Cora | IMDB | Amazon |
| --- | --- | --- | --- | --- |
| Orig. data | 0.05 ± 0.21 | 0.05 ± 0.20 | 0.32 ± 0.43 | 0.50 ± 0.43 |
| Pert. data (50%) | 0.89 ± 0.01 | 0.85 ± 0.14 | 0.81 ± 0.25 | 0.89 ± 0.14 |
| Pert. data (75%) | 0.90 ± 0.09 | 0.90 ± 0.09 | 0.97 ± 0.10 | 0.91 ± 0.10 |
| NVDiff | 0.66 ± 0.39 | **0.98 ± 0.03** | **0.99 ± 0.06** | 0.88 ± 0.25 |
| GDSS | **0.84 ± 0.00** | 0.69 ± 0.00 | 0.22 ± 0.00 | **0.92 ± 0.01** |
| Sa(NVDiff) | 0.90 ± 0.09 | 0.86 ± 0.16 | 0.92 ± 0.15 | 0.91 ± 0.20 |
| Sa(GDSS) | **0.91 ± 0.09** | **0.99 ± 0.03** | **0.97 ± 0.11** | **0.99 ± 0.01** |

Table 3: NNDR (mean ± standard deviation) of generated graphs. NVDiff and GDSS without SANGEA were evaluated on the largest community of each dataset, all others on the full dataset.

_Ablation study:_ one of the main novelties of this work, which also differentiates it the most from HiGen (Karami and Luo, 2023) and GELLCELL (Hammarstedt, 2022), is the refinement process. This process conditions the predictions and updates of the links in the generated graph on previously generated links and nodes. Table 4 shows, for two different datasets, using two different community generators, that the refinement process improves the utility of the final generated graphs, which confirms that this process is a valuable feature of the method proposed in the present work. Results on more datasets are provided in the supplementary material and highlight similar findings.

| Metric | Cora, Sa(NVDiff), w.o. ref. | Cora, Sa(NVDiff), w. ref. | IMDB, Sa(GDSS), w.o. ref. | IMDB, Sa(GDSS), w. ref. |
| --- | --- | --- | --- | --- |
| MMD (gen tr.) | 13.7e-2 | **9.6e-2** | 0.438 | **0.305** |
| MMD (gen untr.) | 17.9e-1 | **4.9e-1** | 0.0631 | **0.00210** |
| WS spectral | 11.0e-4 | **5.22e-4** | 43.2e-3 | **5.66e-3** |
| WS deg. hist. | 17.6e-3 | **1.46e-3** | 41.4e-4 | **8.58e-4** |
| WS deg. cent. | 9.25e-5 | **6.66e-5** | **1.05e-5** | **1.05e-5** |
| WS clos. cent. | 10.6e-5 | **9.10e-5** | 26.2e-6 | **2.84e-6** |
| WS eigenv. cent. | 13.1e-4 | **3.88e-4** | 31.2e-5 | **4.12e-5** |
| WS clust. coeff. | 35.8e-4 | **4.64e-4** | 45.6e-5 | **1.51e-5** |
| AUROC (LP) | 0.71 | **0.74** | 0.71 | **0.76** |

Table 4: Ablation study of the refinement process.

## 5 Conclusion

We presented a novel approach called SANGEA, a lightweight method to scale graph generative models to many nodes and edges. It generates, from a single large training graph, another large graph that matches the statistical properties of the original one while achieving high privacy scores. Extensive experiments have been conducted to assess the effectiveness of our approach; five state-of-the-art approaches from the literature have been considered to benchmark against. We show in our experiments that SANGEA can work with graphs with up to 90,000 nodes and 450,000 edges, while the chosen literature approaches fail. Moreover, the quality of generation has been assessed using multiple graph quality metrics. Numerical results show a high similarity between our generated graph and the original one compared to our direct competitors. SANGEA also achieves a better utility score on the link prediction task. In addition, because of the single-large-training-graph constraint of our setting, a privacy assessment methodology has been proposed and discussed. Our results show that the generated graphs naturally obtain high privacy scores and hence are low-risk. Our proposed approach suffers from a set of limitations. First, feature generation is conditional on the community generator offering this property. Moreover, our feature generation is limited to node features. Finally, the input graphs are assumed to be static, whereas in many applications dynamic, evolving graphs are of interest; for example, such data can exhibit user dynamics like join/leave events and evolving relationships. These limitations are interesting directions for extending the capabilities of our approach in future work.
The topic of synthetic graph generators (SGGs) has recently received much attention due to the wave of the latest breakthroughs in generative modelling. However, many state-of-the-art SGGs do not scale well with the graph size. Indeed, in the generation process, all the possible edges for a fixed number of nodes must often be considered, which scales in $\mathcal{O}(N^2)$, with $N$ being the number of nodes in the graph. For this reason, many state-of-the-art SGGs are not applicable to large graphs. In this paper, we present SANGEA, a scalable synthetic graph generation framework which extends the applicability of any SGG to large graphs. By first splitting the large graph into communities, SANGEA trains one SGG per community, then links the community graphs back together to create a synthetic large graph. Our experiments show that the graphs generated by SANGEA have high similarity to the original graph, in terms of both topology and node…
2301.00301
Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy
The ''Propose-Test-Release'' (PTR) framework is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is nice. We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from ''Private Aggregation of Teacher Ensembles (PATE)'' -- privately releasing the entire model with a delicate data-dependent analysis.
Rachel Redberg, Yuqing Zhu, Yu-Xiang Wang
2022-12-31T22:22:53
http://arxiv.org/abs/2301.00301v1
# Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy

###### Abstract

The "Propose-Test-Release" (PTR) framework (Dwork and Lei, 2009) is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is "nice". We extend PTR to a more general setting by privately testing _data-dependent privacy losses_ rather than _local sensitivity_, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from "Private Aggregation of Teacher Ensembles (PATE)" (Papernot et al., 2017, 2018) -- privately releasing the entire model with a delicate data-dependent analysis.

## 1 Introduction

The guarantees of differential privacy (DP) (Dwork et al., 2006) are based on worst-case outcomes across all possible datasets. A common paradigm is therefore to add noise scaled by the _global sensitivity_ of a query \(f\), i.e. the maximum change in \(f\) between any pair of neighboring datasets. A given dataset \(X\) might have a _local sensitivity_ that is much smaller than the global sensitivity, in which case we can hope to add a smaller amount of noise (calibrated to the local rather than the global sensitivity) while achieving the same privacy guarantee. However, this must not be undertaken naively - the local sensitivity is a dataset-dependent function, and so calibrating noise to the local sensitivity could leak information about the dataset (Nissim et al., 2007). The "Propose-Test-Release" (PTR) framework (Dwork and Lei, 2009) resolves this issue by introducing a test to privately check whether a proposed bound on the local sensitivity is valid. Only if the test "passes" is the output released with noise calibrated to the proposed bound on the local sensitivity. PTR is a powerful and flexible tool for designing data-adaptive DP algorithms, but it has several limitations. First, it applies only to noise-adding mechanisms which calibrate noise according to the sensitivity of a query. Second, the test in "Propose-Test-Release" is computationally expensive for all but a few simple queries, such as privately releasing the median or mode. Third, while some existing works (Decarolis et al., 2020; Kasiviswanathan et al., 2013; Liu et al., 2021) follow the approach of testing "nice" properties of a dataset before exploiting these properties in a private release akin to PTR, there has not been a systematic recipe for _discovering_ which properties should be tested. In this paper, we propose a generalization of PTR which addresses these limitations. The centerpiece of our framework is a differentially private test on the _data-dependent privacy loss_. This test does not directly consider the local sensitivity of a query and is therefore not limited to additive noise mechanisms. Moreover, in many cases, the test can be efficiently implemented by privately releasing a high-probability upper bound, thus avoiding the need to search an exponentially large space of datasets. Furthermore, the derivation of the test itself often spells out exactly what properties of the input dataset need to be checked, which streamlines the design of data-adaptive DP algorithms. Our contributions are summarized as follows:
1. We propose a generalization of PTR which can handle algorithms beyond noise-adding mechanisms. Generalized PTR allows us to plug in _any_ data-dependent DP analysis to construct a high-probability DP test that adapts to favorable properties of the input dataset - without painstakingly designing each test from scratch.

2. We demonstrate that many existing examples of PTR and PTR-like algorithms can be unified under the generalized PTR framework, sometimes resulting in a tighter analysis (see an example of report-noisy-max in Sec A.1).

3. We show that one can publish a DP model through privately upper-bounding a one-dimensional statistic -- no matter how complex the output space of the mechanism is. We apply this result to solve an open problem from PATE (Papernot et al., 2017, 2018).

4. Our results broaden the applicability of private hyper-parameter tuning (Liu and Talwar, 2019; Papernot and Steinke, 2021) by enabling joint parameter selection of DP-specific parameters (e.g., noise level) and native parameters of the algorithm (e.g., learning rate, regularization weight), which may jointly affect the data-dependent DP losses.

## 2 Related Work

**Data-dependent DP algorithms.** Privately calibrating noise to the local sensitivity is a well-studied problem. One approach is to add noise calibrated to the smooth sensitivity (Nissim et al., 2007), an upper bound on the local sensitivity which changes slowly between neighboring datasets. An alternative to this - and the focus of our work - is Propose-Test-Release (PTR) (Dwork and Lei, 2009), which works by calculating the distance \(\mathcal{D}_{\beta}(X)\) to the nearest dataset to \(X\) whose local sensitivity violates a proposed bound \(\beta\). The PTR algorithm then adds noise to \(\mathcal{D}_{\beta}(X)\) before testing whether this privately computed distance is sufficiently large. PTR spin-offs abound. Notable examples include stability-based methods (Thakurta and Smith, 2013) (stable local sensitivity of 0 near the input data) and privately releasing upper bounds of the local sensitivity (Kasiviswanathan et al., 2013; Liu et al., 2021; Decarolis et al., 2020). We refer readers to Chapter 3 of Vadhan (2017) for a concise summary of these classical results. Recent work (Wang et al., 2022) has provided Renyi DP bounds for PTR and demonstrated its applications to robust DP-SGD. Our work (see Section 5.2) also considers applications of PTR in data-adaptive private deep learning: instead of testing the local sensitivity of each gradient step as in Wang et al. (2022), our PTR-based PATE algorithm tests the data-dependent privacy loss as a whole. Liu et al. (2021) proposed a new variant called High-dimensional Propose-Test-Release (HPTR). HPTR provides a systematic way of solving DP statistical estimation problems by using the exponential mechanism (EM) with carefully constructed scores based on certain one-dimensional robust statistics, which have stable local sensitivity bounds. HPTR focuses on designing data-adaptive DP mechanisms from scratch; our method, in contrast, converts existing randomized algorithms (including EM and even some that do not satisfy DP) into those with formal DP guarantees. Interestingly, our proposed method also depends on a one-dimensional statistic of direct interest: the data-dependent privacy loss.
**Data-dependent DP losses.** The flip side of data-dependent DP algorithms is the study of data-dependent DP losses (Papernot et al., 2018; Soria-Comas et al., 2017; Wang, 2017), which fix the randomized algorithm but parameterize the resulting privacy loss by the specific input dataset. For example, in the simple mechanism that adds Laplace noise with parameter \(b\), the data-dependent DP loss is \(\epsilon(X)=\Delta_{\text{LS}}(X)/b\). The data-dependent DP losses are often much smaller than the DP loss, but they themselves depend on the data and thus may reveal sensitive information; algorithms satisfying a data-dependent privacy guarantee are not formally DP with guarantees any smaller than those of the worst case. Existing work has considered privately publishing these data-dependent privacy losses (Papernot et al., 2018; Redberg and Wang, 2021), but notice that privately publishing these losses does not improve the DP parameter of the given algorithm. Part of our contribution is to resolve this conundrum by showing that a simple post-processing step of the privately released upper bound of \(\epsilon(\text{Data})\) gives a formal DP algorithm.

**Private hyper-parameter tuning.** Our work has a nice connection with private hyper-parameter tuning. Prior work (Liu and Talwar, 2019; Papernot and Steinke, 2021) requires each candidate configuration to be released with the same DP (or Renyi DP) parameter set. Another hidden assumption is that the parameters must not be privacy-correlated (i.e., the parameter choice will not change the privacy guarantee); otherwise we need to use the largest DP bound across all candidates. For example, Liu and Talwar (2019) show that if each mechanism (instantiated with one group of hyper-parameters) is \((\epsilon,0)\)-DP, then running a random number of mechanisms and reporting the best option satisfies \((3\epsilon,0)\)-DP. Our work directly generalizes the above results by (1) considering a wide range of hyper-parameters, either privacy-correlated or not; and (2) requiring only that individual candidates have a _testable_ data-dependent DP.

## 3 Preliminaries

Datasets \(X,X^{\prime}\in\mathcal{X}\) are neighbors if they differ by no more than one datapoint - i.e., \(X\simeq X^{\prime}\) if \(d(X,X^{\prime})\leq 1\). We will define \(d(\cdot)\) to be the number of coordinates that differ between two datasets of the same size \(n\): \(d(X,Y)=\#\{i\in[n]:X_{i}\neq Y_{i}\}\). We use \(||\cdot||\) to denote the radius of the smallest Euclidean ball that contains the input set, e.g. \(||\mathcal{X}||=\sup_{x\in\mathcal{X}}||x||\). The parameter \(\phi\) denotes the privacy parameters associated with a mechanism (e.g. noise level, regularization). \(\mathcal{M}_{\phi}\) is a mechanism parameterized by \(\phi\). For mechanisms with continuous output space, we will take \(\Pr[\mathcal{M}(X)=y]\) to be the probability density function of \(\mathcal{M}(X)\) at \(y\).

**Definition 3.1** (Differential privacy (Dwork et al., 2006)).: Fix \(\epsilon,\delta\geq 0\). A randomized algorithm \(\mathcal{M}:\mathcal{X}\to\mathcal{S}\) satisfies \((\epsilon,\delta)\)-DP if for all neighboring datasets \(X\simeq X^{\prime}\) and for all measurable sets \(S\subset\mathcal{S}\),

\[\Pr[\mathcal{M}(X)\in S]\leq e^{\epsilon}\Pr[\mathcal{M}(X^{\prime})\in S]+\delta.\]
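To make the data-dependent loss \(\epsilon(X)=\Delta_{\text{LS}}(X)/b\) from the related-work discussion concrete, here is a small sketch (ours) computing both the worst-case and the data-dependent Laplace loss for a median query; `local_sensitivity_median` is a hypothetical brute-force helper valid only for this toy setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, b):
    """Release `value` with Laplace noise of scale b."""
    return value + rng.laplace(scale=b)

def local_sensitivity_median(x, lo=0.0, hi=1.0):
    """Hypothetical helper: for the median, pushing one entry to an
    extreme of the data range maximizes the change."""
    best = 0.0
    for i in range(len(x)):
        for v in (lo, hi):
            y = np.array(x)
            y[i] = v
            best = max(best, abs(np.median(y) - np.median(x)))
    return best

x = rng.uniform(size=101)  # dataset in [0, 1]
b = 0.05
release = laplace_mechanism(np.median(x), b)

eps_worst = 1.0 / b                          # global sensitivity of the median is 1
eps_data = local_sensitivity_median(x) / b   # eps(X) = Delta_LS(X) / b
print(f"release={release:.3f}, worst-case eps={eps_worst:.1f}, eps(X)={eps_data:.4f}")
```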
Suppose we wish to privately release the output of a real-valued function \(f:\mathcal{X}\to\mathbb{R}\). We can do so by calculating the _global sensitivity_ \(\Delta_{GS}\), calibrating the noise scale to the global sensitivity, and then adding the sampled noise to the output.

**Definition 3.2** (Local / global sensitivity).: The local \(\ell_{\star}\)-sensitivity of a function \(f\) is defined as \(\Delta_{LS}(X)=\max\limits_{X^{\prime}:X\simeq X^{\prime}}||f(X)-f(X^{\prime})||_{\star}\), and the global sensitivity of \(f\) is \(\Delta_{GS}=\sup_{X}\Delta_{LS}(X)\).

### Propose-Test-Release

Calibrating the noise level to the local sensitivity \(\Delta_{LS}(X)\) of a function would allow us to add less noise and therefore achieve higher utility when releasing private queries. However, the local sensitivity is a data-dependent function, and naively calibrating the noise level to \(\Delta_{LS}(X)\) will not satisfy DP. PTR resolves this issue in a three-step procedure: **propose** a bound on the local sensitivity, privately **test** that the bound is valid (with high probability), and if so, calibrate noise according to the bound and **release** the output. PTR privately computes the distance \(\mathcal{D}_{\beta}(X)\) between the input dataset \(X\) and the nearest dataset \(X^{\prime\prime}\) whose local sensitivity exceeds the proposed bound \(\beta\):

\[\mathcal{D}_{\beta}(X)=\min\limits_{X^{\prime\prime}}\{d(X,X^{\prime\prime}):\Delta_{LS}(X^{\prime\prime})>\beta\}.\]

```
Algorithm 1: Propose-Test-Release (Dwork and Lei, 2009)

1: Input: dataset X; privacy parameters eps, delta; proposed bound beta on
   Delta_LS(X); query function f : X -> R.
2: if D_beta(X) + Lap(1/eps) <= log(1/delta)/eps then output ⊥,
3: else release f(X) + Lap(beta/eps).
```

**Theorem 3.3**.: _Algorithm 1 satisfies (\(2\epsilon,\delta\))-DP (Dwork and Lei, 2009)._

Rather than proposing an arbitrary threshold \(\beta\), one can also privately release an upper bound of the local sensitivity and calibrate noise according to this upper bound. This was used for node DP in graph statistics (Kasiviswanathan et al., 2013), and for fitting topic models using spectral methods (Decarolis et al., 2020).
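A minimal sketch of Algorithm 1 (ours; `local_sens_dist` is a hypothetical oracle for \(\mathcal{D}_{\beta}(X)\), which is exactly the part that is expensive for all but simple queries):

```python
import numpy as np

rng = np.random.default_rng(0)

def ptr_release(x, f, local_sens_dist, beta, eps, delta):
    """Classic PTR. `local_sens_dist(x, beta)` is a hypothetical oracle
    returning D_beta(x), the distance from x to the nearest dataset whose
    local sensitivity exceeds beta."""
    d_noisy = local_sens_dist(x, beta) + rng.laplace(scale=1.0 / eps)
    if d_noisy <= np.log(1.0 / delta) / eps:
        return None                              # the bottom output
    return f(x) + rng.laplace(scale=beta / eps)  # noise calibrated to beta
```

By Theorem 3.3, the full procedure is \((2\epsilon,\delta)\)-DP; generalized PTR replaces the distance oracle with a test on \(\epsilon_{\phi}(X)\).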
## 4 Generalized PTR

This section introduces the generalized PTR framework. We first formalize the notion of _data-dependent_ differential privacy that conditions on an input dataset \(X\).

**Definition 4.1** (Data-dependent privacy).: Suppose we have \(\delta>0\) and a function \(\epsilon:\mathcal{X}\to\mathbb{R}\). We say that mechanism \(\mathcal{M}\) satisfies \((\epsilon(X),\delta)\) data-dependent DP2 for dataset \(X\) if for all possible output sets \(S\) and neighboring datasets \(X^{\prime}\),

\[\Pr[\mathcal{M}(X)\in S]\leq e^{\epsilon(X)}\Pr[\mathcal{M}(X^{\prime})\in S]+\delta,\]
\[\Pr[\mathcal{M}(X^{\prime})\in S]\leq e^{\epsilon(X)}\Pr[\mathcal{M}(X)\in S]+\delta.\]

Footnote 2: We will sometimes write that \(\mathcal{M}(X)\) satisfies \(\epsilon(X)\) data-dependent DP with respect to \(\delta\).

In generalized PTR, we propose a value \(\phi\) for the randomized algorithm \(\mathcal{M}\), which could be a noise scale or regularization parameter - or a set including both; for example, \(\phi=(\lambda,\gamma)\) in Example 4.4. We then say that \(\mathcal{M}_{\phi}\) is the mechanism \(\mathcal{M}\) parameterized by \(\phi\), and \(\epsilon_{\phi}(X)\) its data-dependent DP. The following example illustrates how to derive the data-dependent DP for a familiar friend - the Laplace mechanism.

**Example 4.2**.: _(Data-dependent DP of the Laplace mechanism.) Given a function \(f:\mathcal{X}\to\mathbb{R}\), we define_

\[\mathcal{M}_{\phi}(X)=f(X)+\text{Lap}(\phi).\]

_We then have_

\[\log\frac{\Pr[\mathcal{M}_{\phi}(X)=y]}{\Pr[\mathcal{M}_{\phi}(X^{\prime})=y]}\leq\frac{|f(X)-f(X^{\prime})|}{\phi}.\]

_Maximizing the above calculation over all possible outputs \(y\) and using Definition 4.1,_

\[\epsilon_{\phi}(X)=\max_{X^{\prime}:X^{\prime}\simeq X}\frac{|f(X)-f(X^{\prime})|}{\phi}=\frac{\Delta_{LS}(X)}{\phi}.\]

The data-dependent DP \(\epsilon_{\phi}(X)\) is a function of both the dataset \(X\) and the parameter \(\phi\). Maximizing \(\epsilon_{\phi}(X)\) over \(X\) recovers the standard DP guarantee of running \(\mathcal{M}\) with parameter \(\phi\).

```
Algorithm 2: Generalized Propose-Test-Release

1: Input: dataset X; mechanism M_phi : X -> R and its privacy budget (eps, delta);
   an (eps^, delta^)-DP test T; false positive rate <= delta'; data-dependent DP
   function eps_phi(.) w.r.t. delta.
2: if not T(X) then output ⊥,
3: else release theta = M_phi(X).
```

**Theorem 4.3** (Privacy guarantee of generalized PTR).: _Consider a proposal \(\phi\) and a data-dependent DP function \(\epsilon_{\phi}(X)\) w.r.t. \(\delta\). Suppose that we have an (\(\hat{\epsilon},\hat{\delta}\))-DP test \(\mathcal{T}:\mathcal{X}\to\{0,1\}\) such that when \(\epsilon_{\phi}(X)>\epsilon\),_

\[\mathcal{T}(X)=\begin{cases}0&\text{with probability }1-\delta^{\prime},\\ 1&\text{with probability }\delta^{\prime}.\end{cases}\]

_Then Algorithm 2 satisfies (\(\epsilon+\hat{\epsilon},\delta+\hat{\delta}+\delta^{\prime}\))-DP._

Proof sketch.: There are three main cases to consider:

1. we decide not to run \(\mathcal{M}_{\phi}\);
2. we decide to run \(\mathcal{M}_{\phi}\) and \(\epsilon_{\phi}(X)>\epsilon\);
3. we decide to run \(\mathcal{M}_{\phi}\) and \(\epsilon_{\phi}(X)\leq\epsilon\).

In the first case, the decision to output \(\bot\) is post-processing of an \((\hat{\epsilon},\hat{\delta})\)-DP mechanism and inherits its privacy guarantees. The second case occurs when the \((\hat{\epsilon},\hat{\delta})\)-DP test "fails" (produces a false positive), which happens with probability at most \(\delta^{\prime}\). The third case is a composition of an \((\hat{\epsilon},\hat{\delta})\)-DP algorithm and an \((\epsilon,\delta)\)-DP algorithm.

Generalized PTR is a _strict_ generalization of Propose-Test-Release. For some function \(f\), define \(\mathcal{M}_{\phi}\) and \(\mathcal{T}\) as follows:

\[\mathcal{M}_{\phi}(X)=f(X)+\mathrm{Lap}(\phi);\]
\[\mathcal{T}(X)=\begin{cases}1&\text{if }\mathcal{D}_{\beta}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon}\right)>\frac{\log(1/\delta)}{\epsilon},\\ 0&\text{otherwise}.\end{cases}\]

Notice that our choice of parameterization is \(\phi=\frac{\beta}{\epsilon}\), where \(\phi\) is the scale of the Laplace noise. In other words, we know from Example 4.2 that \(\epsilon_{\phi}(X)>\epsilon\) exactly when \(\Delta_{LS}(X)>\beta\). For noise-adding mechanisms such as the Laplace mechanism, the sensitivity is proportional to the privacy loss (in both the global and local sense, i.e. \(\Delta_{GS}\propto\epsilon\) and \(\Delta_{LS}\propto\epsilon(X)\)). Therefore, for these mechanisms, the only difference between privately testing the local sensitivity (Algorithm 1) and privately testing the data-dependent DP (Theorem 4.3) is a change of parameterization.
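The skeleton of Algorithm 2 is short; the work lies in designing the test. A minimal sketch (ours, with hypothetical callables):

```python
def generalized_ptr(x, mech, dp_test):
    """Algorithm 2 (a sketch). `dp_test` is an (eps^, delta^)-DP callable that
    returns True only when, with high probability, eps_phi(x) <= eps; `mech`
    is M_phi. Both are hypothetical callables supplied by the user."""
    if not dp_test(x):
        return None   # output "bot" without running the mechanism
    return mech(x)

# Recovering classic PTR for a query f with Laplace noise scale phi = beta/eps:
#   dp_test(x): D_beta(x) + Lap(1/eps_hat) > log(1/delta')/eps_hat
#   mech(x):    f(x) + Lap(phi)
```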
Therefore, for these mechanisms, the only difference between privately testing the local sensitivity (Algorithm 1) and privately testing the data-dependent DP (Theorem 4.3) is a change of parameterization.

### Limitations of local sensitivity

Why do we want to generalize PTR beyond noise-adding mechanisms? Compared to classic PTR, the generalized framework is more flexible in both the type of test conducted and the type of mechanism whose output we wish to release. For many mechanisms, the local sensitivity either does not exist or is only defined for specific data-dependent quantities (e.g., the sensitivity of the score function in the exponential mechanism) rather than the mechanism's output. The following example illustrates this issue.

**Example 4.4** (Private posterior sampling).: _Let \(\mathcal{M}:\mathcal{X}\times\mathcal{Y}\to\Theta\) be a private posterior sampling mechanism [20, 16, 22] for approximately minimizing \(F_{X}(\theta)\). \(\mathcal{M}\) samples \(\theta\sim P(\theta)\propto e^{-\gamma(F_{X}(\theta)+0.5\lambda||\theta||^{2})}\) with parameters \(\gamma,\lambda\). Note that \(\gamma,\lambda\) cannot be appropriately chosen for this mechanism to satisfy DP without going through a sensitivity calculation of \(\arg\min F_{X}(\theta)\). In fact, the global and local sensitivity of the minimizer is unbounded even in linear regression problems, i.e., when \(F_{X}(\theta)=\frac{1}{2}||y-X\theta||^{2}\)._

Output perturbation algorithms do work for the above problem when we regularize, but they are known to be suboptimal in theory and in practice [11]. In Section 5.1 we demonstrate how to apply generalized PTR to achieve a data-adaptive posterior sampling mechanism.

Even for noise-adding mechanisms where PTR seems applicable, it does not always lead to a tight privacy guarantee. Specifically, through an example of privacy amplification by post-processing (Example A.1 in the appendix), we demonstrate that the local sensitivity does not capture all sufficient statistics for a data-dependent privacy analysis and is thus loose.

### Which \(\phi\) to propose

The main limitation of generalized PTR is that one needs to "propose" a good guess of the parameter \(\phi\). Take the example of \(\phi\) being the noise level in a noise-adding mechanism. Choosing too small a \(\phi\) will result in a useless output \(\bot\), while choosing too large a \(\phi\) will add more noise than necessary. Finding this "Goldilocks" \(\phi\) might require trying out many different possibilities, each of which consumes privacy budget.

This section introduces a method to jointly tune privacy parameters (e.g., the noise scale) along with parameters related only to the utility of an algorithm (e.g., the learning rate or batch size in stochastic gradient descent), while avoiding the \(\bot\) output. Algorithm 3 takes a list of parameters as input, runs generalized PTR with each of them, and returns the output with the best utility. We show that the privacy guarantee with respect to \(\epsilon\) is independent of the number of \(\phi\) that we try.

Formally, let \(\phi_{1},...,\phi_{k}\) be a set of hyper-parameters and let \(\tilde{\theta}_{i}\in\{\bot\}\cup\text{Range}(\mathcal{M})\) denote the output of running generalized PTR on a private dataset \(X\) with \(\phi_{i}\). Let \(X_{val}\) be a public validation set and \(q(\tilde{\theta}_{i})\) be the score of evaluating \(\tilde{\theta}_{i}\) on \(X_{val}\) (e.g., validation accuracy).
The goal is to select a pair \((\tilde{\theta}_{i},\phi_{i})\) such that the DP model \(\tilde{\theta}_{i}\) maximizes the validation score. The generalized PTR framework with privacy calibration is described in Algorithm 3. Its privacy guarantee is an application of Liu and Talwar (2019).

```
1:Input: Privacy budget per PTR algorithm (\(\epsilon^{*},\delta^{*}\)); cut-off \(T\); parameters \(\phi_{1:k}\); flipping probability \(\tau\); validation score function \(q(\cdot)\).
2:Initialize the set \(S=\varnothing\).
3:Draw \(G\) from a geometric distribution \(\mathcal{D}_{\tau}\) and let \(\hat{T}=\min(T,G)\).
4:for i = 1, ..., \(\hat{T}\) do
5: pick a random \(\phi_{i}\) from \(\phi_{1:k}\).
6: evaluate \(\phi_{i}\): \((\tilde{\theta}_{i},q(\tilde{\theta}_{i}))\leftarrow\) Algorithm 2(\(\phi_{i},(\epsilon^{*},\delta^{*})\)).
7:\(S\gets S\cup\{\tilde{\theta}_{i},q(\tilde{\theta}_{i})\}\).
8:endfor
9:Output the highest-scored candidate from \(S\).
```

**Algorithm 3** PTR with hyper-parameter selection

**Theorem 4.5** (Theorem 3.4 of Liu and Talwar (2019)).: _Fix any \(\tau\in[0,1],\delta_{2}>0\) and let \(T=\frac{1}{\tau}\log\frac{1}{\delta_{2}}\). If each oracle access to Algorithm 2 is \((\epsilon^{*},\delta^{*})\)-DP, then Algorithm 3 is \((3\epsilon^{*}+3\sqrt{2\delta^{*}},\sqrt{2\delta^{*}}T+\delta_{2})\)-DP._

The theorem implies that one can try a random number of \(\phi\) while paying a constant \(\epsilon\). In practice, we can roughly set \(\tau=\frac{1}{10k}\) so that the algorithm is likely to test all \(k\) parameters. We emphasize that the privacy and utility guarantees (stated in the appendix) are not our contribution. But the idea of applying generalized PTR to enforce a uniform DP guarantee over all choices of parameters with a data-dependent analysis is new and, in our opinion, significantly broadens the applicability of the generic hyper-parameter tuning machinery from Liu and Talwar (2019).

### Construction of the DP test

Classic PTR uses the Laplace mechanism to construct a differentially private upper bound of \(\mathcal{D}_{\beta}(X)\), the distance from the input dataset \(X\) to the closest dataset whose local sensitivity exceeds the proposed bound \(\beta\). The tail bound of the Laplace distribution then ensures that if \(\mathcal{D}_{\beta}(X)=0\) (i.e., if \(\Delta_{LS}(X)>\beta\)), then the output will be released with only a small probability \(\delta\). The following theorem shows that we could instead use a differentially private upper bound of the data-dependent DP \(\epsilon_{\phi}(X)\) to test whether to run the mechanism \(\mathcal{M}_{\phi}\).

**Theorem 4.6** (Generalized PTR with private upper bound).: _Suppose we have a differentially private upper bound of \(\epsilon_{\phi}(X)\) w.r.t. \(\delta\) such that with probability at least \(1-\delta^{\prime}\), \(\epsilon_{\phi}^{P}(X)>\epsilon_{\phi}(X)\). Further suppose we have an \((\hat{\epsilon},\hat{\delta})\)-DP test \(\mathcal{T}\) such that_

\[\mathcal{T}(X)=\begin{cases}1&\text{ if }\epsilon_{\phi}^{P}(X)<\epsilon,\\ 0&\text{ otherwise}.\end{cases}\]

_Then Algorithm 2 is \((\epsilon+\hat{\epsilon},\delta+\hat{\delta}+\delta^{\prime})\)-DP._

In Section 5.2, we demonstrate that one can upper bound the data-dependent DP through a modification of the smooth sensitivity framework applied to \(\epsilon_{\phi}(X)\). Moreover, in Section 5.1 we provide a direct application of Theorem 4.6 to private linear regression by making use of the per-instance DP technique (Wang, 2017).
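In code, the recipe of Theorem 4.6 is short. A sketch, assuming the caller supplies `eps_phi_private_upper`, an \((\hat{\epsilon},\hat{\delta})\)-DP routine that upper-bounds \(\epsilon_{\phi}(X)\) with probability at least \(1-\delta^{\prime}\) (both that routine and the mechanism interface are illustrative, not an existing API):

```python
def generalized_ptr(X, mechanism_phi, eps_phi_private_upper, eps):
    """Generalized PTR with a private upper bound (Theorem 4.6).

    If eps_phi_private_upper is (eps_hat, delta_hat)-DP and, with
    probability >= 1 - delta', dominates eps_phi(X), this release is
    (eps + eps_hat, delta + delta_hat + delta')-DP.
    """
    if eps_phi_private_upper(X) >= eps:
        return None           # test failed: output "bottom"
    return mechanism_phi(X)   # (eps, delta)-DP on datasets that pass the test
```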
The applications in Section 5 illustrate two distinct approaches to constructing the DP test for generalized PTR:

1. Private sufficient statistics release (used in the private linear regression example of Section 5.1) specifies the data-dependent DP as a function of the dataset and privately releases each data-dependent component.
2. The second approach (used in the PATE example of Section 5.2) uses the smooth sensitivity framework to privately release the data-dependent DP as a whole, and then constructs a high-confidence test using the Gaussian mechanism.

These two approaches cover most of the scenarios arising in data-adaptive analysis. For example, in the appendix we demonstrate the merits of generalized PTR in handling data-adaptive private generalized linear models (GLMs) using private sufficient statistics release. Moreover, sufficient statistics release together with our private hyper-parameter tuning (Algorithm 3) can be used to construct data-adaptive extensions of DP-PCA and Sparse-DP-ERM (see details in the future work section).

## 5 Applications

In this section, we put our approaches for constructing the DP test into action and provide applications in private linear regression and PATE.

### Private Linear Regression

**Theorem 5.1** ((Wang, 2017)).: _For input data \(X\in\mathcal{X}\) and \(Y\in\mathcal{Y}\), define the following:_

* \(\lambda_{\min}(X)\) _denotes the smallest eigenvalue of_ \(X^{T}X\)_;_
* \(||\theta_{\lambda}^{*}||\) _is the magnitude of the solution_ \(\theta_{\lambda}^{*}=(X^{T}X+\lambda I)^{-1}X^{T}Y\)_;_
* _and_ \(L(X,\mathbf{y}):=||\mathcal{X}||(||\mathcal{X}||||\theta_{\lambda}^{*}||+||\mathcal{Y}||)\) _is the local Lipschitz constant, denoted_ \(L\) _in brief._

_For brevity, denote \(\lambda^{*}=\lambda+\lambda_{\min}(X)\). The algorithm used in Example 4.4 with parameter \(\phi=(\lambda,\gamma)\) obeys \((\epsilon_{\phi}(Z),\delta)\) data-dependent DP for each dataset \(Z=(X,Y)\) with \(\epsilon_{\phi}(Z)\) equal to_

\[\sqrt{\frac{\gamma L^{2}\log(2/\delta)}{\lambda^{*}}}+\frac{\gamma L^{2}}{2(\lambda^{*}+||\mathcal{X}||^{2})}+\frac{1+\log(2/\delta)||\mathcal{X}||^{2}}{2\lambda^{*}}.\]

Notice that the data-dependent DP is a function of \((\lambda_{\min},L,||\theta_{\lambda}^{*}||,\lambda,\gamma)\), where \((\lambda_{\min},L,||\theta_{\lambda}^{*}||)\) are data-dependent quantities. One can apply the generalized PTR framework as in the following example.
**Example 5.2** (OPS with PTR).: _We demonstrate here how to apply generalized PTR to the one-posterior sample (OPS) algorithm, a differentially private mechanism which outputs one sample from the posterior distribution of a Bayesian model with bounded log-likelihood._

* _Propose_ \(\phi=(\lambda,\gamma)\)_._
* _Based on_ \((\lambda,\gamma)\)_, differentially privately release_ \(\lambda_{min},||\theta_{\lambda}^{*}||,L\) _with privacy budget_ \((\epsilon,\delta/2)\)_._
* _Conditioned on a high-probability event (with probability at least_ \(1-\delta/2\)_) of_ \(\lambda_{min},||\theta_{\lambda}^{*}||,L\)_, test whether_ \(\epsilon_{\phi}^{P}(X)\)_, the sanitized data-dependent DP, is smaller than the predefined privacy budget_ \((\hat{\epsilon},\hat{\delta})\)_._
* _Based on the outcome of the test, decide whether to release_ \(\theta\propto e^{-\frac{\gamma}{2}(||Y-X\theta||^{2}+\lambda||\theta||^{2})}\)_._

**Theorem 5.3**.: _The algorithm outlined in Example 5.2 satisfies \((\epsilon+\hat{\epsilon},\delta+\hat{\delta})\)-DP._

The main idea of the above algorithm boils down to privately releasing all data-dependent quantities in the data-dependent DP, constructing high-probability confidence intervals of these quantities, and then deciding whether to run the mechanism \(\mathcal{M}\) with the proposed parameters. We defer the details of the privacy calibration of the data-dependent quantities to the appendix.

One may ask why we cannot directly tune the privacy parameters \((\lambda,\gamma)\) based on the sanitized data-dependent DP. This is because, in many scenarios, the data-dependent quantities depend on the choice of privacy parameters; e.g., \(||\theta_{\lambda}^{*}||\) is a complicated function of \(\lambda\). Thus, the optimization over \(\lambda\) becomes a circular problem: to solve for \(\lambda\), we need to sanitize \(||\theta_{\lambda}^{*}||\), which needs a choice of \(\lambda\) to begin with. Alternatively, generalized PTR provides a clear and flexible framework to test the validity of privacy parameters adapted to the dataset.

Figure 1: Differentially private linear regression algorithms on UCI datasets. The \(y\)-axis reports the MSE error with confidence intervals. \(\epsilon\) is evaluated with \(\delta=10^{-6}\).

**Remark 5.4**.: The above "circular" issue is even more serious for generalized linear models (GLMs) beyond linear regression. The data-dependent DP there involves a local strong-convexity parameter, a complex function of the regularizer \(\lambda\) to which we only have zeroth-order access. In the appendix, we demonstrate how to apply generalized PTR to provide a generic solution to a family of private GLMs where the link function satisfies a self-concordance assumption.

We next apply Algorithm 3 to Example 5.2 with UCI regression datasets. Standard z-scoring is applied and each data point is normalized to have a Euclidean norm of \(1\). We consider \((60\%,10\%,30\%)\) splits for the training, validation and testing sets.

**Baselines**

* Output Perturbation (Outpert) (Chaudhuri et al., 2011): \(\theta=(X^{T}X+\lambda I)^{-1}X^{T}\mathbf{y}\). Release \(\hat{\theta}=\theta+\mathbf{b}\) with an appropriate \(\lambda\), where \(\mathbf{b}\) is a Gaussian random vector.
* Posterior sampling (OPS). Sample \(\hat{\theta}\sim P(\theta)\propto e^{-\gamma(F(\theta)+0.5\lambda||\theta||^{2})}\) with parameters \(\gamma,\lambda\).
* Adaptive posterior sampling (AdaOPS) (Wang, 2018). Run OPS with \((\lambda,\gamma)\) chosen adaptively according to the dataset.
Outpert and OPS serve as two non-adaptive baselines. In particular, we consider OPS-Balanced (Wang, 2018), which chooses \(\lambda\) to minimize a data-independent upper bound of the empirical risk and dominates other OPS variants. AdaOPS is a state-of-the-art algorithm for adaptive private regression, which automatically chooses \(\lambda\) by minimizing an upper bound of the data-dependent empirical risk.

We implement OPS-PTR as follows: propose a list of \(\lambda\) through grid search (we choose \(k=30\) and \(\lambda\) ranges over \([2.5,2.5^{10}]\) on a logarithmic scale); instantiate Algorithm 3 with \(\tau=0.1/k\), \(T=\frac{1}{\tau}\log(1/\delta_{2})\) and \(\delta_{2}=\delta/2\); calibrate \(\gamma\) to meet the privacy requirement for each \(\lambda\); sample \(\hat{\theta}\) using \((\lambda,\gamma)\) and return the one with the best validation accuracy. Notice that we use a "no \(\bot\)" variant of Algorithm 2, as the calibration of \(\gamma\) is clear given a fixed \(\lambda\) and privacy budget (see more details in the appendix). We can propose various combinations of \((\lambda,\gamma)\) for more general applications.

Figure 1 demonstrates how the MSE error of the linear regression algorithms varies with the privacy budget \(\epsilon\). Outpert suffers from the large global sensitivity of the output \(\theta\). OPS performs well but does not benefit from the data-dependent quantities. AdaOPS is able to adaptively choose \((\lambda,\gamma)\) based on the dataset, but suffers from the estimation error of the data-dependent empirical risk. On the other hand, OPS-PTR selects a \((\lambda,\gamma)\) pair that minimizes the empirical error on the validation set directly, and the privacy parameter \(\gamma\) adapts to the dataset, thus achieving the best result.

### PATE

In this section, we apply the generalized PTR framework to solve an open problem from the Private Aggregation of Teacher Ensembles (PATE) (Papernot et al., 2017, 2018): privately publishing the entire model through privately releasing data-dependent DP losses.

Our algorithm makes use of the smooth sensitivity framework (Nissim et al., 2007) and the Gaussian mechanism to construct a high-probability test of the data-dependent DP. The one-dimensional statistical nature of data-dependent DP enables efficient computations under the smooth sensitivity framework. Thus, this approach is generally applicable to other private data-adaptive analyses beyond PATE.

PATE is a knowledge transfer framework for model-agnostic private learning. In this framework, an ensemble of teacher models is trained on disjoint private data, and the teachers' aggregated consensus answers are used to supervise the training of a "student" model, agnostic to the underlying machine-learning algorithms. By publishing only the aggregated answers, and through a careful analysis of the "consensus", PATE has become a practical technique in recent private model training.

The tight privacy guarantee of PATE relies heavily on a delicate data-dependent DP analysis, for which the authors of PATE use the smooth sensitivity framework to privately publish the data-dependent privacy cost. However, it remains an open problem to show that the released model is DP under the data-dependent analysis. Our generalized PTR resolves this gap by carefully testing a private upper bound of the data-dependent privacy cost. Our algorithm is fully described in Algorithm 4, where the modification over the original PATE framework is highlighted in blue.
Algorithm 4 takes as input the privacy budget \((\epsilon^{\prime},\hat{\epsilon},\delta)\), unlabeled public data \(x_{1:T}\) and \(K\) teachers' predictions on these data. The parameter \(\hat{\epsilon}\) denotes the privacy cost of publishing the data-dependent DP and \(\epsilon^{\prime}\) is the predefined privacy budget for testing. \(n_{j}(x_{i})\) denotes the number of teachers that agree on label \(j\) for \(x_{i}\) and \(C\) denotes the number of classes. The goal is to privately release a list of plurality outcomes, \(\operatorname*{argmax}_{j\in[C]}n_{j}(x_{i})\) for \(i\in[T]\), and use these outcomes to supervise the training of a "student" model in the public domain. The parameter \(\sigma_{1}\) denotes the noise scale for the vote counts.

In their privacy analysis, Papernot et al. (2018) compute the data-dependent \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) of labeling the entire group of student queries. \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) can be orders of magnitude smaller than its data-independent version if there is a strong agreement among teachers. Note that \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) is a function of the RDP order \(\alpha\) and the dataset \(X\), analogous to our Definition 4.1 but subject to RDP (Mironov, 2017).

**Theorem 5.5** ((Papernot et al., 2018)).: _If the top three vote counts of \(x_{i}\) are \(n_{1}>n_{2}>n_{3}\) and \(n_{1}-n_{2},n_{2}-n_{3}\gg\sigma_{1}\), then the data-dependent RDP of releasing \(\operatorname*{argmax}_{j}\{n_{j}+\mathcal{N}(0,\sigma_{1}^{2})\}\) satisfies \((\alpha,\exp\{-2\alpha/\sigma_{1}^{2}\}/\alpha)\)-RDP, while the data-independent RDP (using the Gaussian mechanism) satisfies \((\alpha,\frac{\alpha}{\sigma_{1}^{2}})\)-RDP._

```
1:Input: Unlabeled public data \(x_{1:T}\); aggregated teacher predictions \(n(\cdot)\); privacy parameters \(\hat{\epsilon},\epsilon^{\prime},\delta\); noise parameter \(\sigma_{1}\).
2:Set \(\alpha=\frac{2\log(2/\delta)}{\hat{\epsilon}}+1\), \(\sigma_{s}=\sigma_{2}=\sqrt{\frac{3\alpha+2}{\hat{\epsilon}}},\delta_{2}=\delta/2\), smoothness parameter \(\beta=\frac{0.2}{\alpha}\).
3:Compute noisy labels: \(y_{i}^{p}\leftarrow\operatorname*{argmax}_{j\in[C]}\{n_{j}(x_{i})+\mathcal{N}(0,\sigma_{1}^{2})\}\) for all \(i\in[1:T]\).
4:\(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\leftarrow\) data-dependent RDP at the \(\alpha\)-th order.
5:\(SS_{\beta}(X)\leftarrow\) the smooth sensitivity of \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\).
6:Privately release \(\mu:=\log(SS_{\beta}(X))+\beta\cdot\mathcal{N}(0,\sigma_{2}^{2})+\sqrt{2\log(2/\delta_{2})}\cdot\sigma_{2}\cdot\beta\).
7:\(\operatorname*{RDP}_{\sigma_{1}}^{\operatorname*{upper}}(\alpha)\leftarrow\) an upper bound of the data-dependent RDP through Lemma 5.6.
8:\(\epsilon_{\sigma_{1}}\leftarrow\) DP guarantee converted from \(\operatorname*{RDP}_{\sigma_{1}}^{\operatorname*{upper}}(\alpha)\).
9:If \(\epsilon^{\prime}\geq\epsilon_{\sigma_{1}}\), return a student model trained using \((x_{1:T};y_{1:T}^{p})\).
10:Else return \(\bot\).
```

**Algorithm 4** PATE with generalized PTR

However, \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) is data-dependent and thus cannot be revealed. The authors therefore privately publish the data-dependent RDP using the smooth sensitivity framework (Nissim et al., 2007).
The smooth sensitivity framework computes a smooth upper bound \(SS_{\beta}(X)\) on the local sensitivity of \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\), such that \(SS_{\beta}(X)\leq e^{\beta}SS_{\beta}(X^{\prime})\) for any neighboring datasets \(X\) and \(X^{\prime}\). By adding Gaussian noise scaled by the smooth sensitivity (i.e., releasing \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^{2})\)), the privacy cost is safely published. Unlike in most noise-adding mechanisms, the standard deviation \(\sigma_{s}\) cannot be published, since \(SS_{\beta}(X)\) is a data-dependent quantity. Moreover, this approach fails to provide a valid privacy guarantee for the noisy labels obtained through the PATE algorithm, as the published privacy cost could be smaller than the real privacy cost. Our solution in Algorithm 4 looks like the following:

* Privately release an upper bound of the smooth sensitivity \(SS_{\beta}(X)\) with \(e^{\mu}\).
* Conditioned on a high-probability event of \(e^{\mu}\), publish the data-dependent RDP as \(\text{RDP}^{\text{upper}}_{\sigma_{1}}(\alpha)\).
* Convert \(\text{RDP}^{\text{upper}}_{\sigma_{1}}(\alpha)\) back to a standard DP guarantee using the RDP-to-DP conversion at \(\delta/2\).
* Test whether the converted DP is above the predefined budget \(\epsilon^{\prime}\).

The following lemma states that \(\text{RDP}^{\text{upper}}_{\sigma_{1}}(\alpha)\) is a valid upper bound of the data-dependent RDP.

**Lemma 5.6** (Private upper bound of data-dependent RDP).: _We are given an RDP function \(\text{RDP}(\alpha,X)\) and a \(\beta\)-smooth sensitivity bound \(SS_{\beta}(\cdot)\) of \(\text{RDP}(\alpha,X)\). Let \(\mu\) (defined in Algorithm 4) denote the private release of \(\log(SS_{\beta}(X))\). Let the \((\beta,\sigma_{s},\sigma_{2})\)-GNSS mechanism be_

\[\text{RDP}^{\text{upper}}(\alpha):=\text{RDP}(\alpha,X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^{2})+\sigma_{s}\sqrt{2\log(\frac{2}{\delta_{2}})}e^{\mu}.\]

_Then the release of \(\text{RDP}^{\text{upper}}(\alpha)\) satisfies \((\alpha,\frac{3\alpha+2}{2\sigma_{s}^{2}})\)-RDP for all \(1<\alpha<\frac{1}{2\beta}\); w.p. at least \(1-\delta_{2}\), \(\text{RDP}^{\text{upper}}(\alpha)\) is an upper bound of \(\text{RDP}(\alpha,X)\)._

The proof (deferred to the appendix) makes use of the facts that: (1) the log of \(SS_{\beta}(X)\) has a bounded global sensitivity \(\beta\) through the definition of smooth sensitivity; (2) releasing \(\text{RDP}_{\sigma_{1}}(\alpha,X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^{2})\) is \((\alpha,\frac{\alpha+1}{\sigma_{s}^{2}})\)-RDP (Theorem 23 from Papernot et al. (2018)).

Now, we are ready to state the privacy guarantee of Algorithm 4.

Figure 2: Privacy and utility tradeoffs with PATE. When \(\sigma_{1}\) is aligned, the three algorithms provide the same utility. The \(y\)-axis plots the privacy cost of labeling \(T=200\) public data points with \(\delta=10^{-5}\). The left figure considers the high-consensus case, where the data-adaptive analysis is preferred.

**Theorem 5.7**.: _Algorithm 4 satisfies \((\epsilon^{\prime}+\hat{\epsilon},\delta)\)-DP._

In the proof, the choice of \(\alpha\) ensures that the cost of the \(\delta/2\) contribution (used in the RDP-to-DP conversion) is roughly \(\hat{\epsilon}/2\). Then the release of \(\mathrm{RDP}_{\sigma_{1}}^{\mathrm{upper}}(\alpha)\) with \(\sigma_{s}=\sqrt{\frac{2+3\alpha}{\hat{\epsilon}}}\) accounts for another cost of \((\hat{\epsilon}/2,\delta/2)\)-DP.
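To make the test portion of Algorithm 4 concrete, here is a sketch of steps 2 and 5–9 in Python. The routines `data_dependent_rdp` and `smooth_sensitivity` stand in for the PATE-specific computations of Papernot et al. (2018) and are assumed to be supplied by the caller; neither is a library function.

```python
import numpy as np

rng = np.random.default_rng(0)

def pate_ptr_test(X, data_dependent_rdp, smooth_sensitivity,
                  eps_hat, eps_prime, delta):
    """Sketch of the PTR test in Algorithm 4 (steps 2 and 5-9).

    Returns the sanitized cost eps_sigma1 if it fits the budget eps_prime,
    else None (the "bottom" output).
    """
    alpha = 2 * np.log(2 / delta) / eps_hat + 1
    sigma_s = sigma_2 = np.sqrt((3 * alpha + 2) / eps_hat)
    delta_2 = delta / 2
    beta = 0.2 / alpha                          # smoothness parameter

    rdp = data_dependent_rdp(alpha, X)          # cannot be revealed as-is
    ss = smooth_sensitivity(alpha, X, beta)     # beta-smooth bound on its LS

    # Step 6: privatize log(SS); its global sensitivity is beta.
    mu = (np.log(ss) + beta * rng.normal(scale=sigma_2)
          + np.sqrt(2 * np.log(2 / delta_2)) * sigma_2 * beta)

    # Step 7 (Lemma 5.6): noisy RDP plus a margin, using e^mu >= SS w.h.p.
    rdp_upper = (rdp + ss * rng.normal(scale=sigma_s)
                 + sigma_s * np.sqrt(2 * np.log(2 / delta_2)) * np.exp(mu))

    # Step 8: RDP -> DP conversion at delta / 2.
    eps_sigma1 = rdp_upper + np.log(2 / delta) / (alpha - 1)

    # Step 9: release the labels only if the sanitized cost fits the budget.
    return eps_sigma1 if eps_prime >= eps_sigma1 else None
```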
**Empirical results.** We next empirically evaluate Algorithm 4 (PATE-PTR) on the MNIST dataset. Following the experimental setup from Papernot et al. (2018), we treat the training set as the private domain and the testing set as the public domain. We first partition the training set into 400 disjoint subsets and train 400 teacher models, one on each subset. Then we select \(T=200\) unlabeled data points from the public domain, with the goal of privately labeling them. To illustrate the behaviors of the algorithms under various data distributions, we consider two settings of unlabeled data: high-consensus and low-consensus. In the low-consensus setting, we choose \(T\) unlabeled data points such that there is no high agreement among teachers, so the advantage of data-adaptive analysis is diminished. We provide further details on the distributions of these two settings in the appendix.

**Baselines.** We consider the Gaussian mechanism as a data-independent baseline, where the privacy guarantee is valid but does not take advantage of the properties of the dataset. The data-dependent DP (Papernot et al., 2018) serves as a non-private baseline, which requires further sanitization. Note that these two baselines provide different privacy analyses of the same algorithm (see Theorem 5.5).

Figure 2 plots the privacy-utility tradeoffs of the three approaches by varying the noise scale \(\sigma_{1}\). The purple region denotes a set of privacy budget choices (\(\hat{\epsilon}+\epsilon^{\prime}\) used in Algorithm 4) such that the utility of the three algorithms is aligned under the same \(\sigma_{1}\). In more detail, the purple region is lower-bounded by \(\hat{\epsilon}+\epsilon_{\sigma_{1}}\). We first fix \(\sigma_{s}=\sigma_{2}=15\) so that \(\hat{\epsilon}\) is fixed. Then we empirically calculate the average of \(\epsilon_{\sigma_{1}}\) (the private upper bound of the data-dependent DP) over 10 trials. Running Algorithm 4 with any choice of \(\hat{\epsilon}+\epsilon^{\prime}\) from the purple region implies \(\epsilon^{\prime}>\epsilon_{\sigma_{1}}\). Therefore, PATE-PTR will output the same noisy labels (with high probability) as the two baselines.

**Observation.** As \(\sigma_{1}\) increases, the privacy loss of the Gaussian mechanism decreases, while the data-dependent DP curve does not change much. This is because the data-dependent DP of each query is a complex function of both the noise scale and the data, and does not monotonically decrease as \(\sigma_{1}\) increases (see more details in the appendix). However, the data-dependent DP still dominates the Gaussian mechanism for a wide range of \(\sigma_{1}\). Moreover, PATE-PTR nicely interpolates between the data-independent DP guarantee and the non-private data-adaptive DP guarantee. In the low-consensus case, the gap between the data-dependent DP and the DP guarantee of the Gaussian mechanism unsurprisingly decreases. Meanwhile, PATE-PTR (the purple region) performs well when the noise scale is small but deteriorates when the data-independent approach proves more advantageous. This example demonstrates that using PTR as a post-processing step to convert the data-dependent DP to standard DP is effective when the data-adaptive approach dominates others.
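The qualitative gap that Figure 2 visualizes can be sanity-checked directly from the two bounds in Theorem 5.5. A schematic comparison with illustrative parameter values only (the strong-agreement condition \(n_{1}-n_{2},n_{2}-n_{3}\gg\sigma_{1}\) is assumed to hold for every query):

```python
import numpy as np

def rdp_data_dependent(alpha, sigma1):
    """High-consensus bound from Theorem 5.5: exp(-2*alpha/sigma1^2) / alpha."""
    return np.exp(-2 * alpha / sigma1**2) / alpha

def rdp_gaussian(alpha, sigma1):
    """Data-independent Gaussian-mechanism bound: alpha / sigma1^2."""
    return alpha / sigma1**2

def to_dp(rdp_per_query, alpha, T, delta):
    """Compose over T queries, then convert RDP to (eps, delta)-DP (Lemma C.6)."""
    return T * rdp_per_query + np.log(1 / delta) / (alpha - 1)

alpha, sigma1, T, delta = 20.0, 5.0, 200, 1e-5
print(to_dp(rdp_data_dependent(alpha, sigma1), alpha, T, delta))  # ~2.6
print(to_dp(rdp_gaussian(alpha, sigma1), alpha, T, delta))        # ~160.6
```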
We argue that this limitation is inherited from classic PTR. In situations where classic PTR is not applicable, we've outlined several approaches to constructing the DP test for our framework (see Sections 4.3 and 5.2). Furthermore, the data-dependent privacy loss is often more straightforward to compute than local sensitivity, and often exists in intermediate steps of classic DP analysis already. Most DP analysis involves providing a high-probability tail bound of the privacy loss random variable. If we stop before taking the max over the input dataset, then we get a data-dependent DP loss right away (as in Example 4.2). There are several exciting directions for applying generalized PTR to more problems. Sufficient statistics release and our private hyperparameter tuning (Algorithm 3) can be used to construct data-adaptive extensions of DP-PCA (Dwork et al., 2014) and Sparse-DP-ERM (Kifer et al., 2012). For DP-PCA we could use our Algorithm 3 to tune the variance of the noise added to the spectral gap; for Sparse-DP-ERM we would test the restricted strong convexity parameter (RSC), i.e. not adding additional regularization if the RSC is already large. ## 7 Conclusion Generalized PTR extends the classic "Propose-Test-Release" framework to a more general setting by testing the data-dependent privacy loss of an input dataset, rather than its local sensitivity. In this paper we've provided several examples - private linear regression with hyperparameter selection and PATE - to illustrate how generalized PTR can enhance DP algorithm design via a data-adaptive approach. ### Acknowledgments The work was partially supported by NSF Award # 2048091 and the Google Research Scholar Award. Yuqing was supported by the Google PhD Fellowship. ###### Contents * 1 Introduction * 2 Related Work * 3 Preliminaries * 3.1 Propose-Test-Release * 4 Generalized PTR * 4.1 Limitations of local sensitivity * 4.2 Which \(\phi\) to propose * 4.3 Construction of the DP test * 5 Applications * 5.1 Private Linear Regression * 5.2 PATE * 6 Limitations and Future Work * 7 Conclusion * A Omitted examples in the main body * A.1 Limits of the classic PTR in private binary voting * A.2 Self-concordant generalized linear model (GLM) * A.3 Differentially privately release \(\lambda_{min}\left(\nabla^{2}F(\theta)\right)\) * A.4 Other applications of generalized PTR * B Omitted proofs in Section 4 * C Experimental details * C.1 Experimental details in private linear regression * C.2 Details of PATE case study * D Omitted proofs in private GLM * D.1 Per-instance DP of GLM ## Appendix A Omitted examples in the main body In this appendix, we provide more examples to demonstrate the merits of generalized PTR. We focus on a simple example of post-processed Laplace mechanism in Section A.1 and then an example on differentially private learning of generalized linear models in Section 4. In both cases, we observe that generalized PTR provides data-adaptive algorithms with formal DP guarantees, that are simple, effective and not previously proposed in the literature (to the best of our knowledge). ### Limits of the classic PTR in private binary voting The following example demonstrates that classic PTR does not capture sufficient data-dependent quantities even when the local sensitivity exists and can be efficiently tested. **Example A.1**.: _Consider a binary class voting problem: \(n\) users vote for a binary class \(\{0,1\}\) and the goal is to output the class that is supported by the majority. 
Let \(n_{i}\) denote the number of people who vote for class \(i\). We consider the report-noisy-max mechanism:_

\[\mathcal{M}(X):\operatorname*{argmax}_{i\in\{0,1\}}n_{i}(X)+\text{Lap}(b),\]

_where \(b=1/\epsilon\) denotes the scale of the Laplace noise._

In this example, we will (1) demonstrate the merit of data-dependent DP; and (2) empirically compare classic PTR with generalized PTR. We first explicitly state the data-dependent DP.

**Theorem A.2**.: _The data-dependent DP of the above example is_

\[\epsilon(X):=\max_{X^{\prime}}\{|\log\frac{p}{p^{\prime}}|,|\log\frac{1-p}{1-p^{\prime}}|\},\]

_where \(p:=\Pr[n_{0}(X)+\text{Lap}(1/\epsilon)>n_{1}(X)+\text{Lap}(1/\epsilon)]\) and \(p^{\prime}:=\Pr[n_{0}(X^{\prime})+\text{Lap}(1/\epsilon)>n_{1}(X^{\prime})+\text{Lap}(1/\epsilon)]\). There are four possible neighboring datasets \(X^{\prime}\): \(n_{0}(X^{\prime})=\max(n_{0}(X)\pm 1,0),n_{1}(X^{\prime})=n_{1}(X)\) or \(n_{0}(X^{\prime})=n_{0}(X),n_{1}(X^{\prime})=\max(n_{1}(X)\pm 1,0)\)._

In Figure 3(a), we empirically compare the above data-dependent DP with the Laplace mechanism by varying the gap between the two vote counts \(|n_{0}(X)-n_{1}(X)|\). The noise scale is fixed to \(\epsilon=10\). The data-dependent DP substantially improves over the standard DP if the gap is large. However, the data-dependent DP is a function of the dataset. We next demonstrate how to apply generalized PTR to exploit the data-dependent DP.

Notice that the probability of \(n_{0}(X)+\text{Lap}(1/\epsilon)>n_{1}(X)+\text{Lap}(1/\epsilon)\) is equal to the probability that a random variable \(Z:=U-V\) exceeds \(\epsilon(n_{1}(X)-n_{0}(X))\), where \(U,V\) are two independent Lap(1) random variables. We can compute the pdf of \(Z\) through the convolution of two Laplace distributions, which implies \(f_{Z}(z)=\frac{1+|z|}{4e^{|z|}}\). Let \(t\) denote the difference between \(n_{1}(X)\) and \(n_{0}(X)\), i.e., \(t=n_{1}(X)-n_{0}(X)\). Then we have

\[p=\Pr[Z>\epsilon\cdot t]=\frac{2+\epsilon\cdot t}{4\exp(\epsilon\cdot t)}.\]

Similarly, \(p^{\prime}=\frac{2+\epsilon\cdot(t+\ell)}{4\exp(\epsilon\cdot(t+\ell))}\), where \(\ell\in[-1,1]\) denotes adding or removing one data point to construct the neighboring dataset \(X^{\prime}\). Therefore, we can upper bound \(\log(p/p^{\prime})\) by

\[\log\frac{p}{p^{\prime}}=\log\left(\frac{2+\epsilon\cdot t}{4\exp(\epsilon\cdot t)}\cdot\frac{4\exp(\epsilon(t+\ell))}{2+\epsilon\cdot(t+\ell)}\right)\leq\epsilon+\log\left(\frac{2+\epsilon t}{2+\epsilon(t+1)}\right)=\epsilon+\log\left(1-\frac{\epsilon}{2+\epsilon(t+1)}\right),\]

where the inequality follows because the expression is maximized at \(\ell=1\). Then we can apply generalized PTR by privately lower-bounding \(t\).

On the other hand, the local sensitivity \(\Delta_{LS}(X)\) of this noise-adding mechanism is \(0\) if \(t>1\). Specifically, if the gap is larger than one, adding or removing one user will not change the result. To apply classic PTR, we let \(\gamma(X)\) denote the distance to the nearest dataset \(X^{\prime\prime}\) such that \(\Delta_{LS}>0\) and test whether \(\gamma(X)+\text{Lap}(1/\epsilon)>\frac{\log(1/\delta)}{\epsilon}\). Notice that in this example \(\gamma(X)=\max(t-1,0)\) can be computed efficiently. We provide the detailed implementation of these approaches.

1. Gen-PTR: lower bound \(t\) with \(t^{p}=t-\frac{\log(1/\delta)}{\tilde{\epsilon}}+\text{Lap}(1/\tilde{\epsilon})\). Calculate an upper bound of the data-dependent DP \(\epsilon^{p}\) using Theorem A.2 with \(t^{p}\). The algorithm then tests whether \(\epsilon^{p}\) is within a predefined privacy budget \(\epsilon^{\prime}\).
If the test passes, the algorithm returns \(\operatorname*{argmax}_{i\in\{0,1\}}n_{i}(X)+\text{Lap}(1/\epsilon)\), which satisfies \((\tilde{\epsilon}+\epsilon^{\prime},\delta)\)-DP.

2. Classic PTR: lower bound \(t\) with \(t^{p}=t-\frac{\log(1/\delta)}{\tilde{\epsilon}}+\text{Lap}(1/\tilde{\epsilon})\). If \(t^{p}>1\), classic PTR outputs the ground-truth result, else it returns a random class. This algorithm satisfies \((\tilde{\epsilon},\delta)\)-DP.

3. Laplace mechanism: \(\mathcal{M}(X):\operatorname*{argmax}_{i\in\{0,1\}}n_{i}(X)+\text{Lap}(1/\epsilon)\). \(\mathcal{M}\) is \((\epsilon,\delta)\)-DP.

We argue that though Gen-PTR and classic PTR are similar in privately lower-bounding the data-dependent quantity \(t\), the latter does not capture sufficient information for a data-adaptive analysis. That is to say, only testing the local sensitivity restricts us from learning helpful information that could amplify the privacy guarantee if the test fails. In contrast, our generalized PTR, where privacy parameters and the local sensitivity parameterize the data-dependent DP, can handle those failure cases nicely. To confirm this conjecture, Figure 3(b) plots a privacy-utility trade-off curve for these three approaches.

We consider a voting example with \(n_{0}(X)=n_{1}(X)+100\) and \(t=100\), chosen such that the data-adaptive analysis is favorable. In Figure 3(b), we vary the noise scale \(b=1/\epsilon\) over \([0,0.5]\). For each choice of \(b\), we plot the privacy guarantee of the three algorithms when the error rate is aligned. For Gen-PTR, we set \(\tilde{\epsilon}=\frac{1}{2b}\) and empirically calculate \(\epsilon^{p}\) over \(100000\) trials.

Figure 3: In Figure 3(a), we compare the privacy guarantees by varying the gap. In Figure 3(b), we fix \(t=n_{0}(X)-n_{1}(X)=100\) and compare the privacy cost when the accuracy is aligned. Gen-PTR with any choice of privacy budget \((\tilde{\epsilon}+\epsilon^{\prime})\) chosen from the purple region achieves the same utility as the Laplace mechanism but with a smaller privacy cost. The curve of Gen-PTR is always below that of classic PTR, which implies that Gen-PTR can result in a tighter privacy analysis when the utility is aligned.

In the plot, when \(\epsilon\ll\frac{\log(1/\delta)}{t}\), classic PTR is even worse than the Laplace mechanism. This is because classic PTR is likely to return \(\bot\) while the Laplace mechanism returns \(\operatorname*{argmax}_{i\in\{0,1\}}n_{i}(X)+\operatorname{Lap}(1/\epsilon)\), which contains more useful information. Compared to the Laplace mechanism, Gen-PTR requires an extra privacy allocation \(\tilde{\epsilon}\) to release the gap \(t\). However, it still achieves an overall smaller privacy cost when the error rate is \(\leq 10^{-5}\) (the purple region). Meanwhile, Gen-PTR dominates classic PTR (i.e., the dashed black curve is always below the blue curve). Note that classic PTR and Gen-PTR utilize the gap information differently: classic PTR outputs \(\bot\) if the gap is not sufficiently large, while Gen-PTR encodes the gap into the data-dependent DP function and tests the data-dependent DP at the end. This empirical result suggests that testing the local sensitivity is loose compared to testing the data-dependent DP. Thus, Gen-PTR can provide a better privacy-utility trade-off.

### Self-concordant generalized linear model (GLM)

In this section, we demonstrate the effectiveness and flexibility of generalized PTR in handling a family of GLMs where the link function satisfies a self-concordance assumption.
This section is organized as follows:

* Introduce a family of GLMs with the self-concordance property.
* Introduce a general output perturbation algorithm for private GLMs.
* Analyze the data-dependent DP of GLMs with the self-concordance property.
* Provide an example of applying our generalized PTR framework to logistic regression.

Consider the empirical risk minimization problem of the generalized linear model

\[\theta^{*}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{n}l_{i}(\theta)+r(\theta),\]

where \(l:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) belongs to a family of convex GLM losses: \(l_{i}(\theta)=l(y_{i},x_{i}^{T}\theta)\). Let \(r:\mathbb{R}^{d}\to\mathbb{R}\) be a regularization function. We now define the self-concordance property.

**Definition A.3** (Generalized self-concordance [3]).: A convex and three-times differentiable function \(f:\Theta\to\mathbb{R}\) is \(R\)-generalized-self-concordant on an open nonempty convex set \(\Theta^{*}\subset\Theta\) with respect to norm \(\|\cdot\|\) if for all \(u\in\Theta^{*}\) and all \(v\in\mathbb{R}^{d}\),

\[\nabla^{3}f(u)[v,v,v]\leq 2R\|v\|(\nabla^{2}f(u)[v,v]).\]

The closer \(R\) is to \(0\), the "nicer" -- more self-concordant -- the function is. A consequence of (generalized) self-concordance is the spectral (multiplicative) stability of the Hessian under small perturbations of the parameters.

**Lemma A.4** (Stability of Hessian [23, Theorem 2.1.1], [3, Proposition 1]).: _Let \(H_{\theta}:=\nabla^{2}F_{s}(\theta)\). If \(F_{s}\) is \(R\)-self-concordant at \(\theta\), then for any \(v\) such that \(R\|v\|_{H_{\theta}}<1\), we have that_

\[(1-R\|v\|_{H_{\theta}})^{2}\nabla^{2}F_{s}(\theta)\prec\nabla^{2}F_{s}(\theta+v)\prec\frac{1}{(1-R\|v\|_{H_{\theta}})^{2}}\nabla^{2}F_{s}(\theta).\]

_If instead we assume \(F_{s}\) is \(R\)-generalized-self-concordant at \(\theta\) with respect to norm \(\|\cdot\|\), then_

\[e^{-R\|v\|}\nabla^{2}F_{s}(\theta)\prec\nabla^{2}F_{s}(\theta+v)\prec e^{R\|v\|}\nabla^{2}F_{s}(\theta).\]

The two bounds are almost identical when \(R\|v\|\) and \(R\|v\|_{H_{\theta}}\) are close to \(0\) (note that for \(x\leq 1/2\), we have \(e^{-2x}\leq 1-x\leq e^{-x}\)). In particular, the loss function of binary logistic regression is \(1\)-generalized self-concordant.

**Example A.5** (Binary logistic regression).: _Assume \(\|x\|_{2}\leq 1\) for all \(x\in\mathcal{X}\) and \(y\in\{-1,1\}\). Then binary logistic regression with datasets in \(\mathcal{X}\times\mathcal{Y}\) has a log-likelihood of \(F(\theta)=\sum_{i=1}^{n}\log(1+e^{-y_{i}x_{i}^{T}\theta})\). The univariate function \(l:=\log(1+\exp(\cdot))\) satisfies_

\[|l^{\prime\prime\prime}|=\left|\frac{\exp{(\cdot)}(1-\exp{(\cdot)})}{(1+\exp{(\cdot)})^{3}}\right|\leq\frac{\exp{(\cdot)}}{(1+\exp{(\cdot)})^{2}}:=l^{\prime\prime}.\]

We next apply the modified output perturbation algorithm to privately release \(\theta^{*}\). The algorithm is simply:

1. Solve
\[\theta^{*}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{n}l_{i}(\theta)+r(\theta).\]
2. Release
\[\hat{\theta}=\theta^{*}+Z,\]
where \(\gamma>0\) is a tuning parameter and \(Z\sim\mathcal{N}(0,\gamma^{-1}(\sum_{i=1}^{n}\nabla^{2}l_{i}(\theta^{*})+\nabla^{2}r(\theta^{*}))^{-1})\).

The data-dependent DP of the above procedure is stated as follows.

**Theorem A.6** (Data-dependent DP of GLM).: _Denote the smooth part of the loss function \(F_{s}=\sum_{i=1}^{n}l(y_{i},\langle x_{i},\cdot\rangle)+r_{s}(\cdot)\). Assume the following:_
1. _The GLM loss function_ \(l\) _is convex, three-times continuously differentiable and_ \(R\)_-generalized-self-concordant w.r.t._ \(\|\cdot\|_{2}\)_;_
2. \(F_{s}\) _is locally_ \(\alpha\)_-strongly convex w.r.t._ \(\|\cdot\|_{2}\)_;_
3. _in addition, denote_ \(L:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime}(y,x^{T}\theta)|\) _and_ \(\beta:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime\prime}(y,x^{T}\theta)|\)_; that is,_ \(l(\cdot)\) _is_ \(L\)_-Lipschitz and_ \(\beta\)_-smooth._

_We then have the data-dependent DP_

\[\epsilon(Z)\leq\frac{R(L+\beta)}{\alpha}(1+\log(2/\delta))+\frac{\gamma L^{2}}{\alpha}+\sqrt{\frac{\gamma L^{2}}{\alpha}\log(2/\delta)}.\]

The proof follows by taking an upper bound of the per-instance DP loss (Theorem D.1) \(\epsilon(Z,z)\) over \(z=(x,y)\in(\mathcal{X},\mathcal{Y})\).

Notice that the Hessians can be arbitrarily singular and \(\alpha\) could be \(0\), which leads to an infinite privacy loss without additional assumptions. Thus, we will impose an additional regularization of the form \(\frac{\lambda}{2}||\theta||^{2}\), which ensures that \(F_{s}\) is \(\lambda\)-strongly convex for any dataset. This is not yet DP because it is still about a fixed dataset; we also need a pre-specified privacy budget \((\epsilon,\delta)\). We next demonstrate how to apply generalized PTR to provide a general solution to the above GLM, using logistic regression as an example.

**Remark A.7** (Logistic regression).: For logistic regression, we know \(L\leq 1\), \(\beta\leq 1/4\), and if \(\|x\|_{2}\leq 1\), the loss is \(1\)-generalized self-concordant. For any dataset \(Z=(X,y)\), the data-dependent DP \(\epsilon(X)\) w.r.t. \(\delta\) simplifies to:

\[\frac{1.25}{\alpha}(1+\log(2/\delta))+\frac{\gamma}{\alpha}+\sqrt{\frac{\gamma}{\alpha}\log(2/\delta)}.\]

Now the data-dependent DP is a function of \(\alpha\) and \(\gamma\), where \(\alpha\) denotes the local strong convexity at \(\theta_{\lambda}^{*}\) and \(\gamma\) controls the noise scale. We next show how to select these two parameters adapted to the dataset.

**Example A.8**.: _We demonstrate here how we apply generalized PTR to output perturbation for the logistic regression problem._

1. _Take an exponential grid of parameters_ \(\{\lambda\}\) _and propose each_ \(\lambda\)_._
2. _Solve for_ \(\theta_{\lambda}^{*}=\operatorname*{argmin}_{\theta}F(\theta)+\lambda\|\theta\|^{2}/2\)_._
3. _Calculate the smallest eigenvalue_ \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) _(e.g., using the power method)._
4. _Differentially privately release_ \(\lambda_{\text{min}}\) _with_ \(\lambda_{\text{min}}^{p}:=\max\{\lambda_{\text{min}}+\frac{\sqrt{\log(4/\delta)}}{\epsilon/2}\cdot\Delta_{GS}\cdot Z-\frac{\sqrt{2\log(4/\delta)\cdot\log(1/\delta)}\Delta_{GS}}{\epsilon/2},0\}\)_, where_ \(\Delta_{GS}\) _denotes the global sensitivity of_ \(\lambda_{\text{min}}\) _from Theorem_ A.11_._
5. _Let_ \(\epsilon^{p}(\cdot)\) _be instantiated with_ \(\epsilon(X)\) _w.r.t._ \(\delta\) _from Remark_ A.7_, where_ \(\alpha=\lambda_{\text{min}}^{p}+\lambda\)_. Then, conditioned on a high-probability event,_ \(\epsilon^{p}(\cdot)\) _(a function of_ \(\gamma\)_) is a valid DP bound that holds for all datasets and all parameters_ \(\gamma\)_._
6. _Calculate the maximum_ \(\gamma\) _such that_ \(\epsilon^{p}_{\delta/2}(\gamma)\leq\epsilon/2\)_._
7. _Release_ \(\hat{\theta}\sim\mathcal{N}(\theta_{\lambda}^{*},\gamma^{-1}\nabla^{2}F_{s}(\theta_{\lambda}^{*})^{-1})\)_._
8. _Evaluate the utility on the validation set and return the_ \((\lambda,\gamma)\) _pair that leads to the highest utility._
**Theorem A.9**.: _For each proposed \(\lambda\), the algorithm that releases \(\hat{\theta}\sim\mathcal{N}(\theta_{\lambda}^{*},\gamma^{-1}\nabla^{2}F_{s}(\theta_{\lambda}^{*})^{-1})\) is \((\epsilon,2\delta)\)-DP._

Proof.: The proof follows the recipe of generalized PTR with a private upper bound (Theorem 4.6). First, the release of \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) is \((\epsilon/2,\delta/2)\)-DP. Then, with probability at least \(1-\delta\), \(\epsilon^{p}_{\delta}(\cdot)>\epsilon_{\delta}(X)\) holds for all \(X\) and \(\gamma\). Finally, \(\gamma\) is chosen such that the valid upper bound is \((\epsilon/2,\delta/2)\)-DP.

_For the hyper-parameter tuning over \(\lambda\) (Steps 1 and 8), we can use Algorithm 3 to evaluate each \(\lambda\)._

_Unlike in Example 5.2, \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) is a complicated data-dependent function of \(\lambda\). Thus, we cannot privately release the data-dependent quantity \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) without an input \(\lambda\). The PTR approach allows us to test a number of different \(\lambda\) and hence get a more favorable privacy-utility trade-off._

An interesting perspective on this algorithm for logistic regression is that increasing the regularization \(\lambda\) effectively increases the number of data points within the soft "margin" of separation, hence a larger contribution to the Hessian from the loss function.

**Remark A.10**.: The PTR solution for GLMs follows a similar recipe: propose a regularization strength \(\lambda\); construct a lower bound of the strong convexity \(\alpha\) at the optimal solution \(\theta_{\lambda}^{*}\); and test the validity of the data-dependent DP using Theorem D.1.

Before moving on to other applications of generalized PTR, we show how to differentially privately release \(\lambda_{min}\) according to the requirements of the logistic regression example.

### Differentially privately release \(\lambda_{min}\left(\nabla^{2}F(\theta)\right)\)

To privately release \(\lambda_{min}\left(\nabla^{2}F(\theta)\right)\), we first need to compute its global sensitivity. Once we have that, we can release it differentially privately using either the Laplace mechanism or the Gaussian mechanism.

**Theorem A.11** (Global sensitivity of the minimum eigenvalue at the optimal solution).: _Let \(F(\theta)=\sum_{i=1}^{n}f_{i}(\theta)+r(\theta)\) and \(\tilde{F}(\theta)=F(\theta)+f(\theta)\), where \(f_{1},...,f_{n}\) are the per-datapoint loss functions and \(f\) is the loss function corresponding to an added datapoint \(x\). Let \(\theta^{*}=\operatorname*{argmin}_{\theta}F(\theta)\) and \(\tilde{\theta}^{*}=\operatorname*{argmin}_{\theta}\tilde{F}(\theta)\). Assume \(f\) is \(L\)-Lipschitz and \(\beta\)-smooth, \(r(\theta)\) is \(\lambda\)-strongly convex, and \(F\) and \(\tilde{F}\) are \(R\)-self-concordant._
_If in addition \(\lambda\geq RL\), then we have_

\[\sup_{X,x}\left(\lambda_{min}(\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*}))\right)\leq 2RL+\beta.\]

Proof.: We decompose the difference as

\[\lambda_{min}(\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*}))=\left(\lambda_{min}(\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}(\nabla^{2}\tilde{F}(\theta_{\lambda}^{*}))\right)+\left(\lambda_{min}(\nabla^{2}\tilde{F}(\theta_{\lambda}^{*}))-\lambda_{min}(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*}))\right). \tag{1}\]

We first bound the part on the left. By applying Weyl's inequality \(\lambda(X+E)-\lambda(X)\leq||E||_{2}\), we have

\[\sup_{x}||\nabla^{2}F(\theta_{\lambda}^{*})-\nabla^{2}\tilde{F}(\theta_{\lambda}^{*})||_{2}=||\nabla^{2}f(\theta_{\lambda}^{*})||_{2}\leq\beta. \tag{2}\]

In order to bound the part on the right, we apply the semidefinite ordering given by self-concordance, which yields

\[e^{-R||\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}||}\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\prec\nabla^{2}\tilde{F}(\theta_{\lambda}^{*})\prec e^{R||\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}||}\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*}).\]

By the Courant-Fischer theorem and the monotonicity theorem, we also have that for the smallest eigenvalue

\[e^{-R||\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}||}\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right)\leq\lambda_{\min}\left(\nabla^{2}\tilde{F}(\theta_{\lambda}^{*})\right)\leq e^{R||\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}||}\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right). \tag{3}\]

Moreover, by Proposition A.12, we have that

\[\|\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}\|_{2}\leq\frac{\|\nabla f(\tilde{\theta}_{\lambda}^{*})\|}{\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right)}\leq\frac{L}{\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right)}.\]

If \(\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right)\geq RL\), we can use that \(e^{x}-1\leq 2x\) for \(x\leq 1\). Substituting the above bound into (3) and then into (1), together with (2), we get a data-independent global sensitivity bound of

\[\lambda_{min}(\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*}))\leq 2RL+\beta,\]

as stated.

**Proposition A.12**.: _Let \(\|\cdot\|\) be a norm and \(\|\cdot\|_{*}\) be its dual norm. Let \(F(\theta)\), \(f(\theta)\) and \(\tilde{F}(\theta)=F(\theta)+f(\theta)\) be proper convex functions and let \(\theta^{*}\) and \(\tilde{\theta}^{*}\) be their minimizers, i.e., \(0\in\partial F(\theta^{*})\) and \(0\in\partial\tilde{F}(\tilde{\theta}^{*})\). Suppose in addition that \(F,\tilde{F}\) are \(\alpha,\tilde{\alpha}\)-strongly convex, respectively, with respect to \(\|\cdot\|\) within the restricted domain \(\theta\in\{t\theta^{*}+(1-t)\tilde{\theta}^{*}\mid t\in[0,1]\}\)._
_Then there exist \(g\in\partial f(\theta^{*})\) and \(\tilde{g}\in\partial f(\tilde{\theta}^{*})\) such that_

\[\|\theta^{*}-\tilde{\theta}^{*}\|\leq\min\left\{\frac{1}{\alpha}\|\tilde{g}\|_{*},\frac{1}{\tilde{\alpha}}\|g\|_{*}\right\}.\]

Proof.: Applying the first-order condition to \(F\) restricted to the line segment between \(\tilde{\theta}^{*}\) and \(\theta^{*}\), we get

\[F(\tilde{\theta}^{*})\geq F(\theta^{*})+\langle\partial F(\theta^{*}),\tilde{\theta}^{*}-\theta^{*}\rangle+\frac{\alpha}{2}\|\tilde{\theta}^{*}-\theta^{*}\|^{2}, \tag{4}\]

\[F(\theta^{*})\geq F(\tilde{\theta}^{*})+\langle\partial F(\tilde{\theta}^{*}),\theta^{*}-\tilde{\theta}^{*}\rangle+\frac{\alpha}{2}\|\tilde{\theta}^{*}-\theta^{*}\|^{2}. \tag{5}\]

Note that by the convexity of \(F\) and \(f\), \(\partial\tilde{F}=\partial F+\partial f\), where \(+\) is the Minkowski sum. Therefore, \(0\in\partial\tilde{F}(\tilde{\theta}^{*})\) implies that there exists \(\tilde{g}\) such that \(\tilde{g}\in\partial f(\tilde{\theta}^{*})\) and \(-\tilde{g}\in\partial F(\tilde{\theta}^{*})\). Taking \(-\tilde{g}\in\partial F(\tilde{\theta}^{*})\) in Equation (5) and \(0\in\partial F(\theta^{*})\) in Equation (4) and adding the two inequalities, we obtain

\[0\geq\langle-\tilde{g},\theta^{*}-\tilde{\theta}^{*}\rangle+\alpha\|\tilde{\theta}^{*}-\theta^{*}\|^{2}\geq-\|\tilde{g}\|_{*}\|\theta^{*}-\tilde{\theta}^{*}\|+\alpha\|\tilde{\theta}^{*}-\theta^{*}\|^{2}.\]

For \(\|\tilde{\theta}^{*}-\theta^{*}\|=0\) the claim is trivially true; otherwise, we can divide both sides of the above inequality by \(\|\tilde{\theta}^{*}-\theta^{*}\|\) and get \(\|\theta^{*}-\tilde{\theta}^{*}\|\leq\frac{1}{\alpha}\|\tilde{g}\|_{*}\). It remains to show that \(\|\theta^{*}-\tilde{\theta}^{*}\|\leq\frac{1}{\tilde{\alpha}}\|g\|_{*}\). This can be obtained by exactly the same arguments, applying strong convexity to \(\tilde{F}\) instead. Note that we actually get something slightly stronger than the statement, because the inequality holds for all \(g\in\partial f(\theta^{*})\).

### Other applications of generalized PTR

Besides one-posterior sampling for GLMs, there are plenty of examples to which our generalized PTR could be applied, e.g., DP-PCA (Dwork et al., 2014) and Sparse-DP-ERM (Kifer et al., 2012) (when the design matrix is well-behaved).

(Dwork et al., 2014) provides a PTR-style privacy-preserving principal component analysis (PCA). The key observation of (Dwork et al., 2014) is that the local sensitivity is quite "small" if there is a large eigengap between the \(k\)-th and the \((k+1)\)-th eigenvalues. Therefore, their approach (Algorithm 2) chooses to privately release a lower bound of the \(k\)-th eigengap (\(k\) is fixed as an input) and uses that to construct a high-confidence upper bound of the local sensitivity. For noise-adding mechanisms, the local sensitivity is proportional to the data-dependent loss and generalized PTR is applicable. We can formulate the data-dependent DP of DP-PCA as follows:

**Theorem A.13**.: _For a given matrix \(A\in\mathcal{R}^{m\times n}\), assume each row of \(A\) has an \(\ell_{2}\) norm of at most \(1\). Let \(V_{k}\) denote the top \(k\) eigenvectors of \(A^{T}A\) and \(d_{k}\) denote the gap between the \(k\)-th and the \((k+1)\)-th eigenvalues._
_Then releasing \(V_{k}V_{k}^{T}+E\), where \(E\in\mathcal{R}^{n\times n}\) is a symmetric matrix whose upper triangle consists of i.i.d. samples from \(\mathcal{N}(0,\sigma^{2})\), satisfies \((\epsilon(A),\delta)\) data-dependent DP with \(\epsilon(A)=\frac{2\sqrt{\log(1.25/\delta)}}{\sigma(d_{k}-2)}\)._

The proof is based on the local sensitivity result from (Dwork et al., 2014) and the noise calibration of the Gaussian mechanism. We can combine Theorem A.13 with our Algorithm 3 to instantiate the generalized PTR framework. The improvement over Dwork et al. (2014) would be to allow joint tuning of the parameter \(k\) and the noise variance (added to the spectral gap \(d_{k}\)).

## Appendix B Omitted proofs in Section 4

The utility of Algorithm 3 depends on the number of rounds for which Algorithm 2 is invoked. We next provide the utility guarantee of Algorithm 3, which follows from a simplification of the result in Section A.2 of Papernot and Steinke (2021).

**Theorem B.1**.: _Suppose applying Algorithm 2 with each \(\phi_{i}\) has an equal probability of achieving the highest validation score. Let \(\hat{T}\) denote the number of invocations of Algorithm 2, where \(\hat{T}\) follows a truncated geometric distribution. Then the expected quantile of the highest-scored candidate is given by \(\mathbb{E}_{\hat{T}}\bigg{[}1-\frac{1}{\hat{T}+1}\bigg{]}\)._

In practice, we can roughly set \(\tau=\frac{1}{10k}\) so that the algorithm is likely to test all \(k\) parameters.

Proof.: Suppose each oracle access to \(Q(X)\) has a probability \(1/k\) of achieving the best validation accuracy. Let \(\beta\) denote the probability that \(\mathcal{A}\) (shorthand for Algorithm 3) outputs the best choice of \(\phi_{i}\). Then

\[\beta=1-\Pr[\mathcal{A}(X)\text{ is not best}]=1-\mathbb{E}_{\hat{T}}\bigg{[}\Pr[Q(X)\text{ is not best}]^{\hat{T}}\bigg{]}=1-\mathbb{E}_{\hat{T}}\bigg{[}(1-\frac{1}{k})^{\hat{T}}\bigg{]}.\]

Let \(f(x)=\mathbb{E}[x^{\hat{T}}]\). Applying a first-order approximation to \(f(1-\frac{1}{k})\), we have \(f(1-\frac{1}{k})\approx f(1)-f^{\prime}(1)\cdot\frac{1}{k}=1-\mathbb{E}[\hat{T}]/k\). Then, if \(k\) is large and we choose \(\tau=0.1/k\), \(\mathcal{A}\) roughly returns the best \(\phi_{i}\).

## Appendix C Experimental details

### Experimental details in private linear regression

We start with the privacy calibration of the OPS-PTR algorithm. Algorithm 5 provides the detailed privacy calibration for the private linear regression problem.

**Theorem C.1**.: _Algorithm 5 is \((\epsilon,2\delta)\)-DP._

Proof.: There are three data-dependent quantities in Theorem 5.1: \(\lambda_{\min}\), \(||\theta_{\lambda}^{*}||\) and \(L\). First, notice that \(\lambda_{\min}\) has a global sensitivity of \(||\mathcal{X}||^{2}\) by Weyl's inequality. Under the assumption \(||\mathcal{X}||^{2}\leq 1\), we privately release \(\lambda_{\min}\) using \((\epsilon/4,\delta/3)\) in Step 3. Notice that with probability at least \(1-\delta/2\), \(\tilde{\lambda}_{\min}\) is a lower bound of \(\lambda_{\min}\). Then, we apply Lemma C.2 (from Wang (2018)) to privately release \(\log(||\mathcal{Y}||+||\mathcal{X}||\cdot||\hat{\theta}||)\) using \((\epsilon/4,\delta/3)\). Note that both the local Lipschitz constant \(L\) and the norm \(||\theta_{\lambda}^{\star}||\) are functions of \(\log(||\mathcal{Y}||+||\mathcal{X}||\cdot||\hat{\theta}||)\). Thus, we can construct a private upper bound of these by post-processing of \(\Delta\).
Then, with probability at least \(1-\delta\) (by a union bound over \(\tilde{\lambda}_{\min}\) and \(\Delta\)), instantiating Theorem 5.1 with \(\tilde{\lambda}_{\min}\) and \(\tilde{L}\) provides a valid upper bound of the data-dependent DP. We then tune the parameter \(\gamma\) using the remaining privacy budget \((\epsilon/2,\delta/3)\). **Lemma C.2** (Lemma 12 (Wang, 2018)).: _Let \(\theta_{\lambda}^{\star}\) be the ridge regression estimate with parameter \(\lambda\) and let the smallest eigenvalue of \(X^{T}X\) be \(\lambda_{\min}\); then the function \(\log(\|\mathcal{Y}\|+\|\mathcal{X}\|\|\theta_{\lambda}^{\star}\|)\) has a local sensitivity of \(\log(1+\frac{\|\mathcal{X}\|^{2}}{\lambda_{\min}+\lambda})\)._

### Details of PATE case study

**Definition C.3** (Renyi DP (Mironov, 2017)).: We say a randomized algorithm \(\mathcal{M}\) is \((\alpha,\epsilon_{\mathcal{M}}(\alpha))\)-RDP with order \(\alpha\geq 1\) if for neighboring datasets \(X,X^{\prime}\) \[\mathbb{D}_{\alpha}(\mathcal{M}(X)||\mathcal{M}(X^{\prime})):=\frac{1}{\alpha-1}\log\mathbb{E}_{o\sim\mathcal{M}(X^{\prime})}\bigg{[}\bigg{(}\frac{\Pr[\mathcal{M}(X)=o]}{\Pr[\mathcal{M}(X^{\prime})=o]}\bigg{)}^{\alpha}\bigg{]}\leq\epsilon_{\mathcal{M}}(\alpha).\] In the limit \(\alpha\to\infty\), RDP reduces to \((\epsilon,0)\)-DP. We now define the data-dependent Renyi DP that is conditioned on an input dataset \(X\). **Definition C.4** (Data-dependent Renyi DP (Papernot et al., 2018)).: We say a randomized algorithm \(\mathcal{M}\) is \((\alpha,\epsilon_{\mathcal{M}}(\alpha,X))\)-RDP with order \(\alpha\geq 1\) for dataset \(X\) if for neighboring datasets \(X^{\prime}\) \[\mathbb{D}_{\alpha}(\mathcal{M}(X)||\mathcal{M}(X^{\prime})):=\frac{1}{\alpha-1}\log\mathbb{E}_{o\sim\mathcal{M}(X^{\prime})}\bigg{[}\bigg{(}\frac{\Pr[\mathcal{M}(X)=o]}{\Pr[\mathcal{M}(X^{\prime})=o]}\bigg{)}^{\alpha}\bigg{]}\leq\epsilon_{\mathcal{M}}(\alpha,X).\] RDP features two useful properties. **Lemma C.5** (Adaptive composition).: \(\epsilon_{(\mathcal{M}_{1},\mathcal{M}_{2})}(\cdot)=\epsilon_{\mathcal{M}_{1}}(\cdot)+\epsilon_{\mathcal{M}_{2}}(\cdot)\)_._ **Lemma C.6** (From RDP to DP).: _If a randomized algorithm \(\mathcal{M}\) satisfies \((\alpha,\epsilon(\alpha))\)-RDP, then \(\mathcal{M}\) also satisfies \((\epsilon(\alpha)+\frac{\log(1/\delta)}{\alpha-1},\delta)\)-DP for any \(\delta\in(0,1)\)._ **Definition C.7** (Smooth Sensitivity).: Given the smoothness parameter \(\beta\), a \(\beta\)-smooth sensitivity of \(f(X)\) is defined as \[SS_{\beta}(X):=\max_{d\geq 0}e^{-\beta d}\cdot\max_{\tilde{X}^{\prime}:dist(X,\tilde{X}^{\prime})\leq d}\Delta_{LS}(\tilde{X}^{\prime})\] **Lemma C.8** (Private upper bound of data-dependent RDP, Restatement of Theorem 5.6).: _Given an RDP function \(\mathrm{RDP}(\alpha,X)\) and a \(\beta\)-smooth sensitivity bound \(SS_{\beta}(\cdot)\) of \(\mathrm{RDP}(\alpha,X)\), let \(\mu\) (defined in Algorithm 4) denote the private release of \(\log(SS_{\beta}(X))\), and let the \((\beta,\sigma_{s},\sigma_{2})\)-GNSS mechanism be_ \[\mathrm{RDP}^{upper}(\alpha):=\mathrm{RDP}(\alpha,X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^{2})+\sigma_{s}\sqrt{2\log(\frac{2}{\delta_{2}})}e^{\mu}\] _Then, the release of \(\mathrm{RDP}^{upper}(\alpha)\) satisfies \((\alpha,\frac{3\alpha+2}{2\sigma_{s}^{2}})\)-RDP for all \(1<\alpha<\frac{1}{2\beta}\); w.p.
at least \(1-\delta_{2}\), \(\mathrm{RDP}^{upper}(\alpha)\) is an upper bound of \(\mathrm{RDP}(\alpha,X)\)._ Proof sketch.: We first show that releasing the smooth sensitivity \(SS_{\beta}\) through \(e^{\mu}\) satisfies \((\alpha,\frac{\alpha}{2\sigma_{2}^{2}})\)-RDP. Notice that the log of \(SS_{\beta}(X)\) has a bounded global sensitivity \(\beta\) (Definition C.7 implies that \(|\log SS_{\beta}(X)-\log SS_{\beta}(X^{\prime})|\leq\beta\) for any neighboring datasets \(X,X^{\prime}\)). By the Gaussian mechanism, adding noise with scale \(\beta\sigma_{2}\) to \(\log SS_{\beta}(X)\) is \((\alpha,\frac{\alpha}{2\sigma_{2}^{2}})\)-RDP. Therefore, the release of \(\mathrm{RDP}(\alpha,X)\) is \((\alpha,\epsilon_{s}(\alpha)+\frac{\alpha}{2\sigma_{2}^{2}})\)-RDP. Since the release of \(f(X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^{2})\) is \((\alpha,\frac{\alpha+1}{\sigma_{s}^{2}})\)-RDP (Theorem 23 from Papernot et al. (2018)) for \(\alpha<\frac{1}{2\beta}\), we have \(\epsilon_{s}(\alpha)+\frac{\alpha}{2\sigma_{2}^{2}}=\frac{3\alpha+2}{2\sigma_{s}^{2}}\) when \(\sigma_{2}=\sigma_{s}\). We next prove the second statement. First, notice that with probability at least \(1-\delta_{2}/2\), \(e^{\mu}\geq SS_{\beta}(X)\) using the standard Gaussian tail bound. Let \(E\) denote the event that \(e^{\mu}\geq SS_{\beta}(X)\). Then \[\Pr\biggl{[}\mathrm{RDP}^{\mathrm{upper}}(\alpha)\leq\mathrm{RDP}(\alpha,X)\biggr{]}\leq\Pr\biggl{[}\mathrm{RDP}^{\mathrm{upper}}(\alpha)\leq\mathrm{RDP}(\alpha,X)\,\big{|}\,E\biggr{]}+\Pr[E^{c}]\leq\underbrace{\Pr\biggl{[}\mathcal{N}(0,\sigma_{s}^{2})\cdot SS_{\beta}(X)\geq\sigma_{s}\cdot\sqrt{2\log(2/\delta_{2})}e^{\mu}\,\big{|}\,E\biggr{]}}_{\text{denoted by }(*)}+\delta_{2}/2,\] where the last step uses \(\Pr[E^{c}]\leq\delta_{2}/2\) and the symmetry of the Gaussian distribution. Conditioned on the event \(E\), \(e^{\mu}\) is a valid upper bound of \(SS_{\beta}(X)\), which implies \[(*)\leq\Pr[\mathcal{N}(0,\sigma_{s}^{2})\cdot SS_{\beta}(X)\geq\sigma_{s}\cdot\sqrt{2\log(2/\delta_{2})}SS_{\beta}(X)|E]\leq\delta_{2}/2\] Therefore, with probability at least \(1-\delta_{2}\), \(\mathrm{RDP}^{\mathrm{upper}}(\alpha)\geq\mathrm{RDP}(\alpha,X)\). **Theorem C.9** (Restatement of Theorem 5.7).: _Algorithm 4 satisfies \((\epsilon^{\prime}+\hat{\epsilon},\delta)\)-DP._ Proof.: The privacy analysis consists of two components -- the privacy cost of releasing an upper bound of the data-dependent RDP (\(\epsilon_{\text{upper}}(\alpha):=\epsilon_{s}(\alpha)+\frac{\alpha}{2\sigma_{2}^{2}}\)) and the valid upper bound \(\epsilon_{\sigma_{1}}^{p}(\alpha)\). First, setting \(\alpha=\frac{2\log(2/\delta)}{\epsilon}+1\) and using the RDP-to-DP conversion with \(\delta/2\) ensures that the contribution of the \(\delta/2\) term is roughly \(\epsilon/2\) (i.e., \(\frac{\log(2/\delta)}{\alpha-1}=\epsilon/2\)). Second, choosing \(\sigma_{s}=\sqrt{\frac{2+3\alpha}{\epsilon}}\) gives us another \(\epsilon/2\). **Experimental details.** \(K=400\) teacher models are trained individually on disjoint subsets using the AlexNet model. We set \(\sigma_{2}=\sigma_{s}=15.0\). Our data-dependent RDP calculation and the smooth-sensitivity calculation follow Papernot et al. (2018). Specifically, we use the following theorem (Theorem 6 from Papernot et al. (2018)) to compute the data-dependent RDP of each unlabeled data point \(x\) from the public domain. **Theorem C.10** (data-dependent RDP, Papernot et al.
(2018)).: _Let \(\tilde{q}\geq\Pr[\mathcal{M}(X)\neq\mathrm{argmax}_{j\in[C]}n_{j}(x)]\), i.e., an upper bound of the probability that the noisy label does not match the majority label. Assume \(\alpha\leq\mu_{1}\) and \(\tilde{q}\leq e^{(\mu_{2}-1)\epsilon_{2}}/\bigg{(}\frac{\mu_{1}}{\mu_{1}-1}\cdot\frac{\mu_{2}}{\mu_{2}-1}\bigg{)}^{\mu_{2}}\); then we have:_ \[\epsilon_{\mathcal{M}}(\alpha,X)\leq\frac{1}{\alpha-1}\log\bigg{(}(1-\tilde{q})\cdot A(\tilde{q},\mu_{2},\epsilon_{2})^{\alpha-1}+\tilde{q}\cdot B(\tilde{q},\mu_{1},\epsilon_{1})^{\alpha-1}\bigg{)}\] _where \(A(\tilde{q},\mu_{2},\epsilon_{2}):=(1-\tilde{q})/\bigg{(}1-(\tilde{q}e^{\epsilon_{2}})^{\frac{\mu_{2}-1}{\mu_{2}}}\bigg{)}\), \(B(\tilde{q},\mu_{1},\epsilon_{1})=e^{\epsilon_{1}}/\tilde{q}^{\frac{1}{\mu_{1}-1}}\), \(\mu_{2}=\sigma_{1}\cdot\sqrt{\log(1/\tilde{q})}\), \(\mu_{1}=\mu_{2}+1\), \(\epsilon_{1}=\mu_{1}/\sigma_{1}^{2}\) and \(\epsilon_{2}=\mu_{2}/\sigma_{2}^{2}\)._ In the experiments, the non-private data-dependent DP baseline is also based on the above theorem. Notice that the data-dependent RDP of each query is a function of \(\tilde{q}\), where \(\tilde{q}\) denotes an upper bound of the probability that the plurality output does not match the noisy output. \(\tilde{q}\) is a complex function of both the noise scale and the data, and is not monotonically decreasing as \(\sigma_{1}\) increases. **Simulation of two distributions.** The motivation of the experimental design is to compare the three approaches under different data distributions. Notice that there are \(K=400\) teachers, which implies that the vote count for each class is bounded by \(400\). In the simulation of the high-consensus distribution, we choose \(T=200\) unlabeled public data points such that the majority vote count is larger than \(150\) (i.e., \(\max_{j\in[C]}n_{j}(x)>150\)). For the low-consensus distribution, we instead select \(T\) unlabeled data points such that the majority vote count is smaller than \(150\).

## Appendix D Omitted proofs in private GLM

### Per-instance DP of GLM

**Theorem D.1** (Per-instance differential privacy guarantee).: _Consider two adjacent datasets \(Z\) and \(Z^{\prime}=[Z,(x,y)]\), and denote the smooth part of the loss function \(F_{s}=\sum_{i=1}^{n}l(y_{i},\langle x_{i},\cdot\rangle)+r_{s}(\cdot)\) (thus \(\tilde{F}_{s}=F_{s}+l(y,\langle x,\cdot\rangle)\)). Let the local neighborhood be the line segment between \(\theta^{*}\) and \(\tilde{\theta}^{*}\). Assume_ 1. _the GLM loss function_ \(l\) _is convex, three-times continuously differentiable and_ \(R\)_-generalized-self-concordant w.r.t._ \(\|\cdot\|_{2}\)_,_ 2. \(F_{s}\) _is locally_ \(\alpha\)_-strongly convex w.r.t._ \(\|\cdot\|_{2}\)_,_ 3. _and in addition, denote_ \(L:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime}(y,x^{T}\theta)|\)_,_ \(\beta:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime\prime}(y,x^{T}\theta)|\)_._ _Then the algorithm obeys \((\epsilon,\delta)\)-pDP for \(Z\) and \(z=(x,y)\) with any \(0<\delta<2/e\) and_ \[\epsilon\leq\epsilon_{0}(1+\log(2/\delta))+e^{\frac{RL\|x\|_{2}}{\alpha}}\left[\frac{\gamma L^{2}\|x\|_{H^{-1}}^{2}}{2}+\sqrt{\gamma L^{2}\|x\|_{H^{-1}}^{2}\log(2/\delta)}\right]\] _where \(\epsilon_{0}\leq e^{\frac{RL\|x\|_{2}}{\alpha}}-1+2\beta\|x\|_{H_{1}^{-1}}^{2}+2\beta\|x\|_{\tilde{H}_{2}^{-1}}^{2}.\) If we instead assume that \(l\) is \(R\)-self-concordant.
Then the same results hold, but with all \(e^{\frac{RL\|x\|_{2}}{\alpha}}\) replaced with \((1-RL\|x\|_{H^{-1}})^{2}\)._ Under the stronger three-times continuously differentiable assumption, by the mean value theorem, there exists \(\xi\) on the line segment between \(\theta^{*}\) and \(\tilde{\theta}^{*}\) such that \[H=\left[\int_{t=0}^{1}\nabla^{2}F_{s}((1-t)\theta^{*}+t\tilde{\theta}^{*})dt\right]=\nabla^{2}F_{s}(\xi).\] The two distributions of interest are \(\mathcal{N}(\theta^{*},[\gamma\nabla^{2}F_{s}(\theta^{*})]^{-1})\) and \(\mathcal{N}(\tilde{\theta}^{*},[\gamma\nabla^{2}F_{s}(\tilde{\theta}^{*})+\nabla^{2}l(y,x^{T}\tilde{\theta}^{*})]^{-1})\). Denote \([\nabla^{2}F_{s}(\theta^{*})]^{-1}=:\Sigma\) and \([\nabla^{2}F_{s}(\tilde{\theta}^{*})+\nabla^{2}l(y,x^{T}\tilde{\theta}^{*})]^{-1}=:\tilde{\Sigma}\). Both the means and the covariance matrices are different, so we cannot use the multivariate Gaussian mechanism naively. Instead, we take the tail-bound interpretation of \((\epsilon,\delta)\)-DP and make use of the per-instance DP framework as internal steps of the proof. First, we can write down the privacy loss random variable in analytic form \[\log\frac{|\Sigma|^{-1/2}e^{-\frac{\gamma}{2}\|\theta-\theta^{*}\|_{\Sigma^{-1}}^{2}}}{|\tilde{\Sigma}|^{-1/2}e^{-\frac{\gamma}{2}\|\theta-\tilde{\theta}^{*}\|_{\tilde{\Sigma}^{-1}}^{2}}}=\underbrace{\frac{1}{2}\log\left(\frac{|\Sigma^{-1}|}{|\tilde{\Sigma}^{-1}|}\right)}_{(*)}+\underbrace{\frac{\gamma}{2}\left[\|\theta-\theta^{*}\|_{\Sigma^{-1}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{\tilde{\Sigma}^{-1}}^{2}\right]}_{(**)}\] The general idea of the proof is to simplify the expression above and upper-bound the two terms separately using self-concordance and the matrix inversion lemma, and ultimately to show that the privacy loss random variable is dominated by another random variable having an appropriately scaled and shifted \(\chi\)-distribution, and therefore admits a Gaussian-like tail bound. To ensure the presentation is readable, we define a few shorthands. We will use \(H\) and \(\tilde{H}\) to denote the Hessians of \(F_{s}\) and \(F_{s}+f\) respectively, and subscripts \(1\) and \(2\) indicate whether the Hessian is evaluated at \(\theta^{*}\) or \(\tilde{\theta}^{*}\). \(H\) without any subscript or superscript represents the Hessian of \(F_{s}\) evaluated at \(\xi\) as previously used. \[(*)=\frac{1}{2}\log\frac{|H_{1}|}{|H|}\frac{|H|}{|H_{2}|}\frac{|H_{2}|}{|\tilde{H}_{2}|}=\frac{1}{2}\left[\log\frac{|H_{1}|}{|H|}+\log\frac{|H|}{|H_{2}|}+\log\frac{|H_{2}|}{|\tilde{H}_{2}|}\right]\] By the \(R\)-generalized self-concordance of \(F_{s}\), we can apply Lemma D.3, \[-\|\theta^{*}-\xi\|_{2}R\leq\log\frac{|H_{1}|}{|H|}\leq R\|\theta^{*}-\xi\|_{2},\quad-R\|\xi-\tilde{\theta}^{*}\|_{2}\leq\log\frac{|H|}{|H_{2}|}\leq R\|\xi-\tilde{\theta}^{*}\|_{2}.\] The generalized linear model ensures that the Hessian of \(f\) is rank-\(1\): \[\nabla^{2}f(\tilde{\theta}^{*})=l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})xx^{T}\] and we can apply Lemma 3 in both ways (taking \(A=H_{2}\) and \(A=\tilde{H}_{2}\)) and obtain \[\frac{|H_{2}|}{|\tilde{H}_{2}|}=\frac{1}{1+l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})x^{T}H_{2}^{-1}x}=1-l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})x^{T}\tilde{H}_{2}^{-1}x\] Note that \(l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})x^{T}\tilde{H}_{2}^{-1}x\) is the in-sample leverage score and \(l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})x^{T}H_{2}^{-1}x\) is the out-of-sample leverage score of the locally linearized problem at \(\tilde{\theta}^{*}\).
We denote them by \(\mu_{2}\) and \(\mu_{2}^{\prime}\) respectively (similarly, for consistency of notation, we denote the in-sample and out-of-sample leverage scores at \(\theta^{*}\) by \(\mu_{1}\) and \(\mu_{1}^{\prime}\)). Combining the above arguments, we get \[(*)\leq R\|\theta^{*}-\xi\|_{2}+R\|\xi-\tilde{\theta}^{*}\|_{2}+\log(1-\mu_{2})\leq R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}+\log(1-\mu_{2}) \tag{6}\] \[(*)\geq-R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}+\log(1-\mu_{2}). \tag{7}\] We now move on to deal with the second part, where we would like to express everything in terms of \(\|\theta-\theta^{*}\|_{H_{1}}\), which we know from the algorithm is \(\chi\)-distributed. \[(**)=\frac{\gamma}{2}\left[\|\theta-\theta^{*}\|_{H_{1}}^{2}-\|\theta-\theta^{*}\|_{H_{2}}^{2}+\|\theta-\theta^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}+\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{\tilde{H}_{2}}^{2}\right]\] By the generalized self-concordance at \(\theta^{*}\), \[e^{-R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}\|\cdot\|_{H_{1}}^{2}\leq\|\cdot\|_{H_{2}}^{2}\leq e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}\|\cdot\|_{H_{1}}^{2}.\] This allows us to convert from \(\|\cdot\|_{H_{2}}\) to \(\|\cdot\|_{H_{1}}\), and as a consequence: \[\left|\|\theta-\theta^{*}\|_{H_{1}}^{2}-\|\theta-\theta^{*}\|_{H_{2}}^{2}\right|\leq[e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}-1]\|\theta-\theta^{*}\|_{H_{1}}^{2}.\] Also, \[\|\theta-\theta^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}=\left\langle\tilde{\theta}^{*}-\theta^{*},2\theta-2\theta^{*}+\theta^{*}-\tilde{\theta}^{*}\right\rangle_{H_{2}}=2\langle\theta-\theta^{*},\tilde{\theta}^{*}-\theta^{*}\rangle_{H_{2}}-\|\theta^{*}-\tilde{\theta}^{*}\|_{H_{2}}^{2}\] Therefore \[\left|\|\theta-\theta^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}\right|\leq 2\|\theta-\theta^{*}\|_{H_{2}}\|\theta^{*}-\tilde{\theta}^{*}\|_{H_{2}}+\|\theta^{*}-\tilde{\theta}^{*}\|_{H_{2}}^{2}\leq 2e^{R\|\tilde{\theta}^{*}-\theta^{*}\|_{2}}\|\theta-\theta^{*}\|_{H_{1}}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}+e^{R\|\tilde{\theta}^{*}-\theta^{*}\|_{2}}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2}.\] Then lastly we have \[0\geq\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{\tilde{H}_{2}}^{2}=-l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})\left[\langle x,\theta-\theta^{*}\rangle+\langle x,\theta^{*}-\tilde{\theta}^{*}\rangle\right]^{2}\geq-2\beta\|x\|_{H_{1}^{-1}}^{2}\|\theta-\theta^{*}\|_{H_{1}}^{2}-2\beta\|x\|_{H^{-1}}^{2}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2},\] so that \[\left|\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{\tilde{H}_{2}}^{2}\right|\leq 2\beta\|x\|_{H_{1}^{-1}}^{2}\|\theta-\theta^{*}\|_{H_{1}}^{2}+2\beta\|x\|_{H^{-1}}^{2}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2}\] Combining the above derivations, we get \[|(**)|\leq\frac{\gamma}{2}\left[a\|\theta-\theta^{*}\|_{H_{1}}^{2}+b\|\theta-\theta^{*}\|_{H_{1}}+c\right] \tag{8}\] where \[a:=\left[e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}-1+2\beta\|x\|_{H_{1}^{-1}}^{2}\right],\quad b:=2e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}\|\theta^{*}-\tilde{\theta}^{*}\|_{H},\quad c:=(e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}+2\beta\|x\|_{H^{-1}}^{2})\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2}\] Lastly, by (6), (7) and (8), \[\left|\log\frac{p(\theta|Z)}{p(\theta|Z^{\prime})}\right|\leq R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}+|\log(1-\mu_{2})|+\frac{\gamma}{2}[aW^{2}+bW+c],\] where according to the algorithm
\(W:=\|\theta-\theta^{*}\|_{H_{1}}\) follows a half-normal distribution with \(\sigma=\gamma^{-1/2}\). By the standard Gaussian tail bound, we have, for all \(\delta<2/e\), \[\mathbb{P}(|W|\geq\gamma^{-1/2}\sqrt{\log(2/\delta)})\leq\delta.\] This gives a high-probability upper bound on the absolute value of the privacy loss random variable \(\log\frac{p(\theta|Z)}{p(\theta|Z^{\prime})}\) under \(p(\theta|Z)\). By the tail-bound-to-privacy conversion lemma (Lemma 17), we get that for any set \(S\subset\Theta\), \(\mathbb{P}(\theta\in S|Z)\leq e^{\epsilon}\mathbb{P}(\theta\in S|Z^{\prime})+\delta\) for any \(0<\delta<2/e\) and \[\epsilon=R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}+|\log(1-\mu_{2})|+\frac{\gamma c}{2}+\frac{a}{2}\log(2/\delta)+\frac{\gamma^{1/2}b}{2}\sqrt{\log(2/\delta)}.\] Denote \(v:=\theta^{*}-\tilde{\theta}^{*}\); by strong convexity, \[\|v\|_{2}\leq\|\nabla l(y,x^{T}\theta)[\tilde{\theta}^{*}]\|_{2}/\alpha=|l^{\prime}|\|x\|_{2}/\alpha\leq L\|x\|_{2}/\alpha\] and \[\|v\|_{H}\leq\|\nabla l(y,x^{T}\theta)[\tilde{\theta}^{*}]\|_{H^{-1}}=|l^{\prime}|\|x\|_{H^{-1}}\leq L\|x\|_{H^{-1}}.\] Also using the fact that \(|\log(1-\mu_{2})|\leq 2\mu_{2}\) for \(\mu_{2}<0.5\) and \(\mu_{2}\leq\beta\|x\|_{\tilde{H}_{2}^{-1}}^{2}\), we can then combine similar terms and obtain a more compact representation: \[\epsilon\leq\epsilon_{0}(1+\log(2/\delta))+e^{\frac{RL\|x\|_{2}}{\alpha}}\left[\frac{\gamma L^{2}\|x\|_{H^{-1}}^{2}}{2}+\sqrt{\gamma L^{2}\|x\|_{H^{-1}}^{2}\log(2/\delta)}\right]\] where \[\epsilon_{0}\leq e^{\frac{RL\|x\|_{2}}{\alpha}}-1+2\beta\|x\|_{H_{1}^{-1}}^{2}+2\beta\|x\|_{\tilde{H}_{2}^{-1}}^{2}\] is the part of the privacy loss that does not get smaller as \(\gamma\) decreases. **Proposition D.2**.: _Let \(\|\cdot\|\) be a norm and \(\|\cdot\|_{*}\) be its dual norm. Let \(F(\theta)\), \(f(\theta)\) and \(\tilde{F}(\theta)=F(\theta)+f(\theta)\) be proper convex functions and \(\theta^{*}\) and \(\tilde{\theta}^{*}\) be their minimizers, i.e., \(0\in\partial F(\theta^{*})\) and \(0\in\partial\tilde{F}(\tilde{\theta}^{*})\). If, in addition, \(F\) and \(\tilde{F}\) are \(\alpha\)- and \(\tilde{\alpha}\)-strongly convex, respectively, with respect to \(\|\cdot\|\) within the restricted domain \(\theta\in\{t\theta^{*}+(1-t)\tilde{\theta}^{*}\mid t\in[0,1]\}\), then there exist \(g\in\partial f(\theta^{*})\) and \(\tilde{g}\in\partial f(\tilde{\theta}^{*})\) such that_ \[\|\theta^{*}-\tilde{\theta}^{*}\|\leq\min\left\{\frac{1}{\alpha}\|\tilde{g}\|_{*},\frac{1}{\tilde{\alpha}}\|g\|_{*}\right\}.\] Proof.: Applying the first-order condition to \(F\) restricted to the line segment between \(\tilde{\theta}^{*}\) and \(\theta^{*}\), we get \[F(\tilde{\theta}^{*})\geq F(\theta^{*})+\langle\partial F(\theta^{*}),\tilde{\theta}^{*}-\theta^{*}\rangle+\frac{\alpha}{2}\|\tilde{\theta}^{*}-\theta^{*}\|^{2} \tag{9}\] \[F(\theta^{*})\geq F(\tilde{\theta}^{*})+\langle\partial F(\tilde{\theta}^{*}),\theta^{*}-\tilde{\theta}^{*}\rangle+\frac{\alpha}{2}\|\tilde{\theta}^{*}-\theta^{*}\|^{2} \tag{10}\] Note by the convexity of \(F\) and \(f\), \(\partial\tilde{F}=\partial F+\partial f\), where \(+\) is the Minkowski sum. Therefore, \(0\in\partial\tilde{F}(\tilde{\theta}^{*})\) implies that there exists \(\tilde{g}\) such that \(\tilde{g}\in\partial f(\tilde{\theta}^{*})\) and \(-\tilde{g}\in\partial F(\tilde{\theta}^{*})\).
Taking \(-\tilde{g}\in\partial F(\tilde{\theta}^{*})\) in Equation (10) and \(0\in\partial F(\theta^{*})\) in Equation (9) and adding the two inequalities, we obtain \[0\geq\langle-\tilde{g},\theta^{*}-\tilde{\theta}^{*}\rangle+\alpha\|\tilde{\theta}^{*}-\theta^{*}\|^{2}\geq-\|\tilde{g}\|_{*}\|\theta^{*}-\tilde{\theta}^{*}\|+\alpha\|\tilde{\theta}^{*}-\theta^{*}\|^{2}.\] For \(\|\tilde{\theta}^{*}-\theta^{*}\|=0\) the claim is trivially true; otherwise, we can divide both sides of the above inequality by \(\|\tilde{\theta}^{*}-\theta^{*}\|\) and get \(\|\theta^{*}-\tilde{\theta}^{*}\|\leq\frac{1}{\alpha}\|\tilde{g}\|_{*}\). It remains to show that \(\|\theta^{*}-\tilde{\theta}^{*}\|\leq\frac{1}{\tilde{\alpha}}\|g\|_{*}\). This can be obtained by exactly the same arguments as above but applying strong convexity to \(\tilde{F}\) instead. Note that we can actually get something slightly stronger than the statement because the inequality holds for all \(g\in\partial f(\theta^{*})\). A consequence of (generalized) self-concordance is the spectral (_multiplicative_) stability of the Hessian under small perturbations of the parameters. **Lemma D.3** (Stability of Hessian (Nesterov and Nemirovskii, 1994, Theorem 2.1.1; Bach, 2010, Proposition 1)).: _Let \(H_{\theta}:=\nabla^{2}F_{s}(\theta)\). If \(F_{s}\) is \(R\)-self-concordant at \(\theta\), then for any \(v\) such that \(R\|v\|_{H_{\theta}}<1\), we have that_ \[(1-R\|v\|_{H_{\theta}})^{2}\nabla^{2}F_{s}(\theta)\prec\nabla^{2}F_{s}(\theta+v)\prec\frac{1}{(1-R\|v\|_{H_{\theta}})^{2}}\nabla^{2}F_{s}(\theta).\] _If instead we assume \(F_{s}\) is \(R\)-generalized-self-concordant at \(\theta\) with respect to the norm \(\|\cdot\|\), then_ \[e^{-R\|v\|}\nabla^{2}F_{s}(\theta)\prec\nabla^{2}F_{s}(\theta+v)\prec e^{R\|v\|}\nabla^{2}F_{s}(\theta)\] The two bounds are almost identical when \(R\|v\|\) and \(R\|v\|_{H_{\theta}}\) are close to \(0\); in particular, for \(x\leq 1/2\), \(e^{-2x}\leq 1-x\leq e^{-x}\).
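As a quick numerical illustration of the generalized-self-concordance bound in Lemma D.3, the following minimal Python sketch (our own illustration, not part of the original analysis) checks the multiplicative Hessian stability for a one-dimensional logistic loss, which is \(R\)-generalized-self-concordant with \(R=\max_{i}|x_{i}|\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D logistic loss: F_s(theta) = sum_i log(1 + exp(-y_i * x_i * theta)).
# Each summand's log-Hessian has derivative bounded by |x_i|, so the sum
# satisfies Lemma D.3 with R = max_i |x_i|.
x = rng.normal(size=20)
y = rng.choice([-1.0, 1.0], size=20)
R = np.max(np.abs(x))

def hessian(theta):
    s = 1.0 / (1.0 + np.exp(-y * x * theta))   # sigmoid of the margins
    return np.sum(s * (1.0 - s) * x**2)

theta, v = 0.3, 0.5
lo = np.exp(-R * abs(v)) * hessian(theta)
hi = np.exp(+R * abs(v)) * hessian(theta)
assert lo <= hessian(theta + v) <= hi          # the sandwich of Lemma D.3
print(lo, hessian(theta + v), hi)
```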
The "Propose-Test-Release" (PTR) framework is a classic recipe for designing differential privacy (DP) algorithms; it is a data-adaptive approach built to add less noise when the input dataset is well behaved. By extending PTR to a more general setting in which the data-dependent privacy loss, rather than the local sensitivity, is privately tested, the approach applies far more broadly: it is no longer limited to standard noise-adding mechanisms and covers, for example, queries with unbounded or undefined sensitivity. To demonstrate the generality of generalized PTR, private linear regression is used as a case study. Furthermore, the algorithm is used to solve an open problem in Private Aggregation of Teacher Ensembles (PATE), based on a data-dependent privacy analysis.
2309.15890
An Introduction to Complex Networks in Climate Finance
In this perspective, we introduce recent research into the structure and function of complex investor networks supporting sustainability efforts. Using the case of solar, wind and hydro energy technologies, this perspective explores the complexity in low-carbon finance markets, defined as markets that direct capital flows towards low-carbon technologies, using network approaches to study their structure and dynamics. Investors are modeled as nodes which form a network or higher-order network connected by edges representing projects in which joint funding or security-related insurance was provided or other investment-related interaction occurred. We review the literature on investor networks generally, particularly in the case of complex networks, and address areas where these ideas were applied in this emerging field. The complex investor dynamics which emerge from the extant funding scenarios are not well understood. These dynamics have the potential to result in interesting non-linear behaviour, growth, and decline, which can be studied, explained and controlled using the tools of network science.
Alexander P. Kartun-Giles, Nadia Ameli
2023-09-27T16:47:25
http://arxiv.org/abs/2309.15890v1
# An Introduction to Complex Networks in Climate Finance

###### Abstract

In this perspective, we introduce recent research into the structure and function of complex investor networks supporting sustainability efforts. Using the case of solar, wind and hydro energy technologies, this perspective explores the complexity in low-carbon finance markets, defined as markets that direct capital flows towards low-carbon technologies, using network approaches to study their structure and dynamics. Investors are modeled as nodes which form a network or higher-order network connected by edges representing projects in which joint funding or security-related insurance was provided or other investment-related interaction occurred. We review the literature on investor networks generally, particularly in the case of complex networks, and address areas where these ideas were applied in this emerging field. The complex investor dynamics which emerge from the extant funding scenarios are not well understood. These dynamics have the potential to result in interesting non-linear behaviour, growth, and decline, which can be studied, explained and controlled using the tools of network science.

Keywords: complex networks; climate change; economics and finance; statistical physics

[MISSING_PAGE_POST]

Figure 1: Green debt issued in the Balkans to date. Nodes are banks, and links exist between banks when they insure a bond issuance together (i.e., investors provide financing for a loan to support a green energy project, and the loan is insured by the larger financial system by buying the debt and reselling to investors as a security). A multilayer network is formed, since banks work together on deals that are domiciled in a specific country. Whenever two banks work together to underwrite a loan for a project whose country of domicile is listed in Kosovo, they are connected with a blue link, with a red link when in Bosnia, and with a purple link in Montenegro. The node degree is reflected in its relative size. The Austrian financial service provider _Erste Group Bank_ has _activity_ 2, since it takes part in two layers, and _degree_ 11, since it has interacted with that many banks.

are drawn [14]. In recent years, however, we turn to the grandest scale, and look at how these ideas apply in real systems such as political groups, social networks, city formations, and beyond. It is the aspect of _randomness_ emerging from _deterministic_ laws in these systems that unites them under the theme of complexity, and it can be remarkable to see case studies of probabilistic analogies between systems usually seen in physics and these economic systems, which suggest a vast amount of untapped potential in describing their behaviour. In particular, this article focuses on financial flows channeled into sustainability. Actors usually form a bipartite graph of investors and projects, and the underlying dependency structure takes the form of influences between investors and, in general, the economic agents involved. The dynamics influenced by this structure constitute syndicated investment in renewable energy. How is the structure of the economic interactions related to the investment dynamics? Is the behaviour universal across different energy markets (such as wind, solar, or hydro)? Is there any observable connection between the complexity of these economic interactions and the rate of renewable energy investment at all?
All these questions are important in climate finance, and therefore scholars turn to the theory of complex networks to help answer them. This article is structured as follows. In Section 2, we discuss the relevant background to complex networks in climate finance, and investor networks more generally. In Section 3, we discuss empirical evidence in different climate finance scenarios and markets. Finally, in Section 4, we conclude with some take-away messages, and discuss the potential for future research development in this area.

## 2 Background

### Econophysics and Investor Networks

Econophysics is a "revolutionary reaction" to standard economic theory that threatens to enforce a paradigm shift in thinking [15]. Usually, complex networks appear in economics within this general area. An early and highly cited example is Mantegna's use of graph theory to study the influence between stock prices [2]. A weighted complete graph is obtained from the matrix of correlation coefficients between stocks of a portfolio by considering the synchronous time evolution of (the difference of the logarithm of) daily stock prices. For a review of milestones and challenges in econophysics, see [16]. Within this field, complex networks are commonplace. Remco van der Hofstad writes in his recent book on complex networks that

The advent of the computer age has incited an increasing interest in the fundamental properties of real networks. Due to the increased computational power, large data sets can now easily be stored and investigated, and this has had a profound impact in the empirical studies on large networks. A striking conclusion from this empirical work is that many real networks share fascinating features.

The two primary and first-studied examples of this are the scale-free degree distribution and the small world property, known informally as _six degrees of separation_. This universal behaviour observed in real networks has led to the new subject of network science [17]. As a subdiscipline of theoretical physics, network science uses techniques and ideas from statistical physics such as random graphs, stochastic processes, combinatorics, and wider mathematical ideas involving probability (as distinct from statistics), analysis (i.e., calculus) and dynamics (dynamical processes, particularly on networks) to reveal the structure and function of complex systems [18; 19]. A growing trend in corporate finance is to apply centrality measures to investor networks derived from various datasets. In Bajo et al. [20], the value of a firm is shown to be strongly correlated with the degree of centrality of its investors in the wider US investor network (as well as with other centrality measures, in an attempt to show the results are robust to a variety of measures). Investors are nodes, and links form between pairs of investors when they co-invest in an equity as listed in a public US equity holding database. They write:

In our sample, the information on the equity holdings by US institutional investors allows to construct a network of relations. Stemming from the simple observation that often institutional blockholders share co-ownership relationships with other institutional investors, we interpret the blockholder as actor and the co-ownership link as a tie. The network is then the set of actors and their ties.

Fracassi et al. (2018) also consider centrality, showing how managers are influenced by their social peers when making corporate policy decisions, while Crane et al.
(2018) show how investors acting together in cliques can amplify their voice concerning how the company is run, which strengthens governance, while weakening governance via threat of exit. In Dordi et al. (2018), ten actors are identified that can accelerate the transition away from fossil fuels, using a centrality analysis of shareholder data from Bloomberg and the Carbon Underground 200 list of companies (200 companies that own 98% of global oil, gas and coal reserves). The study finds that the top ten owners of CU200 fossil fuel reserve holders are Blackrock, Vanguard, the Government of India, State Street, the Kingdom of Saudi Arabia, Dimensional Fund Advisors, Life Insurance Corporation, Norges Bank, Fidelity Investments and Capital Group. Similarly, Galaz et al. (2018) identify a limited set of financial actors mediating flows of capital that affect biomes of the earth. In Dimson et al. (2018), the authors study coordinated engagements by a network of shareholders cooperating to influence firms on environmental and social issues. They write in the conclusion that

Our evidence indicates that, for maximum effect, coordinated engagements on (ESG) issues should preferably have a credible lead investor who is well suited geographically, linguistically, culturally and socially to influencing target companies.

Shareholder activist networks are studied by Yang et al. (2018). Pension funds, special interest groups and religious organizations interact in a network of networks to influence corporate behaviour through the joint control of shares for what they perceive to be societal benefit. They show a correlation between both eigenvector and degree centrality, and the "efficiency of results" obtained by the activists.

### Nonequilibrium Statistical Physics Meets Climate Finance

Network evolution--see, for example, Figure 2--concerns a topic within network science where growing network models, known as models of "nonequilibrium statistical physics", are used as null models of network growth. They attempt to explain, via simple combinatorial rules, the large-scale universal behaviour of real networks, including their degree distribution, clustering, homology, and anything else concerning their structure. An early and fundamental observation in network science is the power-law degree distribution observed in many real networks (such as citation networks, social networks, the internet, world airline connections, etc.). How does this appear? Even more important is observing it in the first place, by comparing real networks with a null statistical model, which in the case of Barabasi and Albert is the Erdos-Renyi random graph. The degree distribution of the latter has been known since the 1950s to follow a Poisson distribution (in the so-called thermodynamic limit, where the number of nodes tends to infinity while the expected degree is constrained to converge to a positive constant). The fact that random networks do not have power-law degrees (also called a _scale-free_ degree distribution) suggests there exist global organizing principles at play that "fatten the tail" or, more formally, _skew_ the degree distribution. The question in finance which has only recently been explored is how this happens in financial systems such as green bond syndication networks, or investor networks as discussed above.
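The null-model comparison described above is easy to reproduce in a few lines. The following minimal Python sketch (an illustration with simulated graphs, not the financial data discussed here) contrasts the degree tail of an Erdos-Renyi null model with a Barabasi-Albert preferential attachment graph of the same mean degree:

```python
import networkx as nx

n, m = 5000, 3
er = nx.gnp_random_graph(n, p=2 * m / n, seed=1)   # null model: Poisson degrees
ba = nx.barabasi_albert_graph(n, m, seed=1)        # preferential attachment: power law

def tail(G, k):
    # Fraction of nodes with degree >= k.
    degs = [d for _, d in G.degree()]
    return sum(d >= k for d in degs) / len(degs)

for k in (10, 20, 40, 80):
    print(k, round(tail(er, k), 4), round(tail(ba, k), 4))
# The ER tail collapses to ~0 almost immediately, while the BA tail
# decays slowly -- the "fattened tail" one looks for in real networks.
```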
A simple observation is that the degree distribution (see, e.g., Figure 3), which is the discrete probability mass function for the node degree (or, in layman's terms, the proportion of nodes with a certain degree, plotted against the degree), has an exponent \(\gamma<2\). An example hypergraph evolution model where banking syndicates of more than two parties can form is shown in Figure 2.

Figure 3: Degree distribution for the green bond and loan network. The red squares represent the probability that a randomly selected bank in the international banking network supporting green loans and bonds is involved in \(k\) bond issuances. The non-linear model \(P(k)\propto k^{-1.77}\) is the black line. This suggests a highly right-skewed degree distribution with exponent 1.77, which occurs because syndicates arrive in time faster than banks do, leaving a dense network in which the ratio of deals to banks exceeds 1 and grows in time.

Figure 2: Hypergraph evolution. Nodes represent banks, and hyperedges (coloured edges) represent syndicates. New hyperedges attach to the current network nodes based on preferential attachment.

Chung et al. point out that preferential attachment models of network evolution cannot explain such a large skew [28]. Alternative suggested models involve the node duplication mechanism presented by Chung et al. in their work on biological networks. The Pitman-Yor process is also a candidate, which involves preferential attachment and can explain degree distributions whose exponent can be even lower than unity [27; 28]. A major research question is to explain the skew we observe in the green bond syndication network of Section 3.4. This develops the early research of Rickman et al. [9] and Ameli et al. [7], who also consider the effect of the fitness model of Bianconi-Barabasi [29] using the work of Pham et al. on attachment functions [30].

### Investor Hubs Dominate the Market

The network analysis in [9] provides the first quantitative evidence of a right-skewed degree distribution. With _hubs_ defined in this context to be vertices of \(G\) with more pairwise connections than average by one standard deviation, it is observed that _"The domination of energy markets by a few organizations can be driven by large incumbents achieving cost reductions through, e.g., economies of scale, better access to finance, or vertical integration of services bringing in multiple revenue streams"_ ([9], Section 3.1). The authors also write that _"we observe a strong positive correlation between growth of wind markets and the level of debt hub activity"_ ([9], Section 3.2), and discuss this in depth.

### Fit Get Richer, and Rich Get Richer

In Ameli et al., the preferential attachment model is compared with the fitness model [7; 29]. Instead of the standard method of considering attachment of new nodes based on the existing degree, nodes may attach to the existing lenders or sponsors based on the intrinsic fitness of nodes, as was proposed by Bianconi and Barabasi in the Bianconi-Barabasi model introduced in 2001 to develop the theory of competition and multiscaling in evolving networks [29]. Ameli et al. address this in the context of climate finance networks of energy efficiency investors.

### Community Detection

Community detection is one of the largest areas of complex networks [31]. The goal is to define _community_ in such a way that groups of financial actors are identified in a way that reveals a non-trivial structure of the larger system. Larosa et al.
identify a significant home bias [32], writing _"The investor community analysis reveals geographical investment patterns. In far-east Asian countries (Korea and Vietnam) the interactions between domestic investors (i.e., community density) are more frequent compared to the rest of the world. In India, the financial landscape is dominated by domestic state-owned banks, while Japan has a strong presence over the continent through the investment made by its second biggest bank, namely Sumitomo Mitsui Banking Corporation and a private utility (Kansai Electric Power Co., Inc.)... Investors mainly cluster together at national and regional level confirming the existence of a "home bias" in investments"_ ([8], Section 3.1). Home bias, in layman's terms, occurs when investors are more likely to invest in projects in their native country or region. For obvious reasons, knowledge of the local economy and the ability to predict the long-term prospects of a venture are a major advantage. The community detection of Larosa et al. is the first quantitative evidence of this effect. Larosa et al. detail their methodology in [8], Section 2, using the Jaccard coefficient [33]. The effect is, in fact, very important for the corresponding network science. Home bias leads to local clustering and the potential emergence of network geometry, as nearby links are favored over long-range links, on the whole. Longer-range links exist, but they connect large hubs in a way similar to the World Airlines Network [10]. As such, it is important to ask the following question: To what extent does home bias lead to the emergent network structure in climate finance networks of this type?

### Centrality Measures

Centrality is a measure of the importance of a node in a network. Important examples include betweenness centrality [34], where the extent to which nodes lie on multiple shortest paths in the network is considered important [35; 36; 37]. PageRank, used by Google, is an example of providing centrality scores to websites containing a searchable keyword based on the extent to which random walks around the interconnected websites spend time on a particular webpage. How do we measure centrality in climate finance networks? What makes a lender node \(i\in I\) or a project sponsor node \(s\in S\) important to the network? Larosa et al. introduce a new measure based on the number of communities an investor node takes part in. They introduce the community-based centrality score (CC), writing the following: _"The CC is strongly anchored to the link community structure. In fact, well-connected investors are not just the ones with many active co-investments, but rather those who operate in communities with high connecting power. Investors with high CC score will belong to communities capable of reaching distant groups of actors, hence spreading the available financial resources to different players. We express CC as the weighted sum of communities a node belongs to over the X communities weighted by the average similarity between pairs of communities"_. The authors discuss this in further detail in [8], Section 2. Further work identifying the centrality measures important in climate finance markets is of great interest.

## 3 Empirical Evidence

### Wind Markets

Wind makes up a significant part of renewable energy consumption around the world. For example, in the UK, wind makes up about a quarter of the energy contribution of the country [38], with 11 thousand wind turbines (14 GW onshore and 14 GW offshore) active by 2023.
In a recent journal article, _The internal dynamics of fast-growing wind finance markets_ [9], Rickman et al. investigate, inter alia, the claim that preferential attachment (PA) drives the evolution of the hypergraph \(G\) introduced in Section 2.2. PA was initially introduced in a paper by Barabasi and Albert in 1999 in order to explain the emergence of the scale-free property in complex networks [39]. The arrival of new lenders is a discrete process of unknown temporal distribution, but it is hypothesized in [9] that they form new links to existing equity investors with probability proportional to the attachment kernel \[A_{l}(w_{l})=w_{l}^{\beta_{l}}, \tag{1}\] and sponsors, on arrival, form new links to lenders with a probability proportional to the attachment kernel \[A_{s}(w_{s})=w_{s}^{\beta_{s}}. \tag{2}\] With multiple lenders involved in a single project, this constitutes a hyperedge of \(G\). When a project has received multiple funding sources in the form of equity or debt loans, this also constitutes a hyperedge. These authors do not attempt to recreate the BNEF data for the wind finance market via a random model [11] involving preferential attachment. This aspect remains an open avenue of further research. Instead, assuming this hypothesis, the exponents \(\beta_{l}\) and \(\beta_{s}\) are estimated via likelihood-based statistical methods. The lender exponent \(\beta_{l}\) of Equation (1) and the sponsor exponent \(\beta_{s}\) of Equation (2) are obtained via partial maximum likelihood estimation [40]. The validity of the PA model is itself assessed via the likelihood ratio test of Clegg [41]; see [9], Section 2.5. The authors found that the preferential attachment theory described 11 out of the 16 countries analyzed in the study. They write that _debt investors (lenders) face competition for projects and past lending experience is a major determinant of who will be selected as a project partner_ ([9], Section 3.3). The authors claim that preferential attachment in this market is based on financial learning. Egli et al. write that _"On the level of the renewable energy finance industry, investors benefitted from growing renewable energy technology (RET) markets and subsequent learning-by-doing (e.g., better risk assessment). Larger markets allowed banks to form in-house project finance teams specialized in RETs. The knowledge and data that these teams accumulated allowed for a more accurate technology assessment. Consequently, project risks declined. For example, as the market had accumulated experience on historical wind speeds, investors shifted from calculating project returns on wind resource estimations with 90% certainty"_ ([42], Drivers of Change).

### Hydro Markets

Hydroelectric power (hydropower) is the largest international contributor to renewable energy production, producing more than half of the total output. Hydropower is particularly popular in developing countries, and thus plays an important role in the UN sustainability goals [38]. Larosa et al. write that _"financing hydropower projects requires investors to pay large upfront capital and lock in their capital for decades (hydro projects can last for 100 years), while also bearing high investment risks"_. With this in mind, the hydroelectric project financing landscape is addressed by Larosa et al. in _"Finding the right partners? Examining inequalities in the global investment landscape of hydropower"_ [8].
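Since the analysis of this market (below) centres on community detection rather than network evolution, a minimal Python sketch of the basic workflow may help fix ideas. The data and bank names here are invented, and greedy modularity optimisation is used only as a simple stand-in for the link-community/Jaccard methodology of Larosa et al.:

```python
import networkx as nx
from networkx.algorithms import community

# Toy bipartite investor-project data: each project maps to the set of
# investors that co-financed it (illustrative, not BNEF/Bloomberg data).
projects = {
    "dam_A": {"Bank1", "Bank2", "Bank3"},
    "dam_B": {"Bank2", "Bank3"},
    "dam_C": {"Bank4", "Bank5"},
    "dam_D": {"Bank4", "Bank5", "Bank6"},
}

# Project the bipartite structure onto investors: co-investment edges.
G = nx.Graph()
for investors in projects.values():
    inv = sorted(investors)
    for i in range(len(inv)):
        for j in range(i + 1, len(inv)):
            G.add_edge(inv[i], inv[j])

# Communities of co-investors; a strong national grouping here would be
# the signature of the "home bias" discussed above.
for c in community.greedy_modularity_communities(G):
    print(sorted(c))
```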
Given the unique aspect of intercontinental development at work, internationally diverse financial actors need to be assembled. The focus is therefore on centrality and community detection rather than network evolution; see Figure 3 in [8]. Financing hydropower projects necessitates substantial upfront capital investment, with funds tied up for extended periods, often spanning a century due to the long lifespan of such projects. Indeed, the construction of a large hydropower dam typically exceeds a billion dollars, demanding patient capital and enduring the natural investment cycle. Thus, an intricate network of diverse investors and effective capital distribution become essential for hydropower assets.

### Energy Efficiency Markets

Energy efficiency technologies are interventions that reduce energy consumption, such as using light-emitting diodes (LEDs) in place of conventional filament bulbs. This technology has a major place in modern science due primarily to its efficiency. The 2014 Nobel Prize in Physics was awarded for the blue LED, as it enables white light and thus a more universal deployment of energy efficiency with the climate in mind [43]. Ameli et al. write that _"investments in energy efficiency (EE) are particularly crucial to reduce the energy demand for a growing world economy and are listed as core measures for sustainable recovery plans"_ [7]. As with the research in wind markets, Ameli et al. focus on the theme of preferential attachment in a bipartite graph of investors and energy efficiency projects. Applying ideas from Pham et al. in their recent work concerning the joint estimation of preferential attachment and node fitness in growing complex networks, the authors look at how influential the intrinsic fitness of a node to acquire links is compared with its degree-based link acquisition (i.e., simple preferential attachment compared with a fitness-based network evolution model [29]). They _"empirically estimate the preferential attachment (PA) function and node fitnesses from observed network data"_ [7]. The authors suggest that there is a balance between preferential attachment and node fitness determining the evolution of their network, writing the following: _"Following Pham et al. approach, we measure the respective influences of the preferential attachment and the fitness models"_ [7; 30]. The PAFit method is discussed by Pham: _"Our main contributions are twofold. The first contribution is a statistical method called PAFit to simultaneously estimate the PA and node fitness functions without imposing any assumptions on their functional forms. To the best of our knowledge, PAFit is the first ever method in the literature that can do so"_. Given this approach to the theory of network evolution, the authors draw the conclusion that _"...this suggests that the 'rich get richer' mechanism becomes weaker when the 'fit get richer' effect is considered, showing that to some extent technology's ability to attract new investment is explained by its fitness"_. They also discuss the snapshot over time of the network (see Figure 4), observing the total number of investments that different types of investors (e.g., from the utilities sector) have made ([7]: evolution, dynamics and growth of the energy efficiency network).

### Green Bonds, Loans, and Networks of Underwriter Syndicates

Green bonds, loans and debt securities are designated to finance environmentally friendly projects.
This may take the form of renewable energy infrastructure, such as a wind farm, or refurbishment of real estate to make it more sustainable. Whatever project requires funding, the project managers approach a bank looking for funding. They attempt to acquire investment by selling green bonds to investors [9]. These are underwritten, i.e., insured by a banking syndicate which buys all the bonds and resells them to investors for profit, thereby taking on the risk in case the project collapses. This guarantees returns for the holders of the bonds (these may be private customers, such as pension funds, or individuals purchasing online using their own funds) [44].

Figure 4: Aggregated network of financial actors involved in energy efficiency financing (2000–2017) in different sectors, taken from [7]. Nodes are investors, and edges are financial interactions between them, with the following key: pink from state-owned utility, brown from investor-owned utilities, light blue from manufacturing and services, green from the governmental sector, dark purple from an energy cooperative, light-green research and the university sector, blue from institutional investors, orange construction and real estate, turquoise diversified, deep green chemicals and steel, green-brown food, bright red retail, light purple defence, and bright purple the remaining uncategorised areas.

One can build a complex network--see Figure 5--from transaction data in the following way. The modeling consists of

1. A hypergraph \(G(V,E)\), where \(V\) is the vertex set and \(E\) is the edge set, with \(|V|=n\) and \(|E|=m\), and each edge \(e\) is simply a subset of \(V\); see [45], Introduction.
2. The vertices, which represent banks.
3. The hyperedges (i.e., higher-order edges representing groups of investors and an investment rather than simply pairs of investors), which represent project financing by the corresponding banking syndicate.

The amount of money invested is large enough in many cases to require large syndicates of banks to underwrite the risk. See also Berge [46] and Beckenbach [47] for a discussion of bipartite hypergraphs. Note that there is also a potential to view this as a simplicial complex [48]. Higher-order network models of banking networks are an interesting area of further research.

## 4 Final Words and Open Avenues of Research

The area is still developing, but has immense potential. Understanding the aspects of complex networks with particular application to banking networks funding climate initiatives allows policy makers to intervene in and influence climate funding in a positive way for society. The ways in which different marketplaces, different sectors such as finance, technology or utilities, or different geographic regions lead to different network structures, or the ways in which the structure is universal, are still not well understood. Community detection needs further work, for example, by developing network embedding techniques that incorporate financial metrics (e.g., weighted edges representing money mobilized in a deal). As we discussed, higher-order network models of banking networks are an interesting area of further research. Two recent articles have addressed preferential attachment as the main driver of network evolution. It remains an open question as to whether random models of climate finance hypergraphs which evolve based on "rich get richer" or "fit get richer" models are able to reproduce the data of BNEF in a sophisticated way; a minimal version of such a model is sketched below.
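The following Python sketch grows a "rich get richer" hypergraph of the kind just described, under strong simplifying assumptions (a fixed bank population, a uniform syndicate size, and parameters that are illustrative rather than calibrated to any dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each arriving deal is a hyperedge over s banks, drawn without
# replacement with probability proportional to each bank's current
# deal count + 1 (linear preferential attachment).
n_banks, n_deals, s = 200, 1000, 3
deals_per_bank = np.zeros(n_banks)

hyperedges = []
for _ in range(n_deals):
    w = deals_per_bank + 1.0
    members = rng.choice(n_banks, size=s, replace=False, p=w / w.sum())
    hyperedges.append(members)
    deals_per_bank[members] += 1

# Deals arrive faster than banks (deals/banks = 5 here), and the
# resulting distribution of deals per bank is highly right-skewed.
print("max deals:", deals_per_bank.max(), "median:", np.median(deals_per_bank))
```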
A positive answer would present an important connection between statistical physics and climate finance, and allow further insights into how these networks evolve and develop. How to then encourage the transition to green energy based on this detailed understanding is a difficult and multi-disciplinary task, but one well founded on the excellent descriptive analysis that can be provided by these early works in complex networks. Further work on network evolution is critical to understanding the mechanisms that generate the highly skewed degree distributions observed in banking syndicate networks. We look forward to a future review concerning research developing these ideas, and to the corresponding new insights into climate finance as we track the critically important goals of the Paris Agreement.

Figure 5: A sample of the multilayer network of banks (nodes) underwriting green bonds and loans in two countries, the UK (blue links) and France (red links). Goldman Sachs, BNP Paribas, and HSBC connect the layers, serving as international actors which unite layers more often than local banks.

We hope that in the future, the link between policy and network structure can be addressed, as well as the ways in which this structure leads to better and more sustainable green growth. This is a major challenge which we hope the networks community can begin to address to provide a remarkable example of physics in society.

**Author Contributions**: Writing--original draft, A.P.K.-G. and N.A. All authors have read and agreed to the published version of the manuscript.

**Funding**: Both authors acknowledge support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 802891).

**Data Availability Statement**: Bloomberg data used for any banking syndicate information (not published elsewhere) is proprietary and shareable on request with the corresponding author.

**Acknowledgments**: We thank Denitsa Angelova, Ginestra Bianconi, Claudia Brown, Max Falkenberg, Michael Grubb, Ben Hinder, Francesca Larosa, Figo Lau, Sumit Kothari, and Jamie Rickman for many helpful discussions.

**Conflicts of Interest**: The authors declare no conflict of interest.
In this perspective, we introduce recent research into the structure and function of the complex investor networks that support sustainability efforts. Using the case of solar, wind and hydro energy technologies, we explore the complexity of low-carbon finance markets, defined as markets that direct capital flows towards low-carbon technologies, using network approaches to study their structure and dynamics. Investors are modeled as nodes of a network, connected whenever joint funding, related insurance contracts, or other investment-related interactions occurred on a project. We review the literature on investor networks, particularly in the case of complex networks, and discuss how these ideas are being applied in this emerging field. The complex investor dynamics arising from the extant funding scenarios are not well understood; these dynamics have the potential to involve interesting non-linear behaviour, growth, and decline, which can be studied, explained and controlled using the tools of network science.
2309.04745
Reconfigurable Three-Dimensional Thermal Dome
Thermal metamaterial represents a groundbreaking approach to control heat conduction, and, as a crucial component, thermal invisibility is of utmost importance for heat management. Despite the flourishing development of thermal invisibility schemes, they still face two limitations in practical applications. First, objects are typically completely enclosed in traditional cloaks, making them difficult to use and unsuitable for objects with heat sources. Second, although some theoretical proposals have been put forth to change the thermal conductivity of materials to achieve dynamic invisibility, their designs are complex and rigid, making them unsuitable for large-scale use in real three-dimensional spaces. Here, we propose a concept of a thermal dome to achieve three-dimensional invisibility. Our scheme includes an open functional area, greatly enhancing its usability and applicability. It features a reconfigurable structure, constructed with simple isotropic natural materials, making it suitable for dynamic requirements. The performance of our reconfigurable thermal dome has been confirmed through simulations and experiments, consistent with the theory. The introduction of this concept can greatly advance the development of thermal invisibility technology from theory to engineering and provide inspiration for other physical domains, such as direct current electric fields and magnetic fields.
Yuhong Zhou, Fubao Yang, Liujun Xu, Pengfei Zhuang, Dong Wang, Xiaoping Ouyang, Ying Li, Jiping Huang
2023-09-09T10:25:47
http://arxiv.org/abs/2309.04745v2
# Reconfigurable Three-Dimensional Thermal Dome

Yuhong Zhou \({}^{a}\), Fubao Yang \({}^{a}\), Liujun Xu \({}^{b}\), Pengfei Zhuang \({}^{a}\), Dong Wang \({}^{de}\), Xiaoping Ouyang \({}^{c,*}\), Ying Li \({}^{d,e,*}\), Jiping Huang \({}^{a,*}\)

\({}^{a}\) Department of Physics, State Key Laboratory of Surface Physics, and Key Laboratory of Micro and Nano Photonic Structures (MOE), Fudan University, Shanghai 200438, China \({}^{b}\) Graduate School of China Academy of Engineering Physics, Beijing 100193, China \({}^{c}\) School of Materials Science and Engineering, Xiangtan University, Xiangtan 411105, China \({}^{d}\) Interdisciplinary Center for Quantum Information, State Key Laboratory of Modern Optical Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou 310027, China \({}^{e}\) International Joint Innovation Center, Key Lab. of Advanced Micro/Nano Electronic Devices & Smart Systems of Zhejiang, The Electromagnetics Academy of Zhejiang University, Zhejiang University, Haining 314400, China \({}^{*}\) Corresponding author. E-mail: oyxp2003@aliyun.com (X. Ouyang); eleying@zju.edu.cn (Y. Li); jphuang@fudan.edu.cn (J. Huang)

**ABSTRACT** Thermal metamaterial represents a groundbreaking approach to control heat conduction, and, as a crucial component, thermal invisibility is of utmost importance for heat management. Despite the flourishing development of thermal invisibility schemes, they still face two limitations in practical applications. First, objects are typically completely enclosed in traditional cloaks, making them difficult to use and unsuitable for objects with heat sources. Second, although some theoretical proposals have been put forth to change the thermal conductivity of materials to achieve dynamic invisibility, their designs are complex and rigid, making them unsuitable for large-scale use in real three-dimensional spaces. Here, we propose a concept of a thermal dome to achieve three-dimensional invisibility. Our scheme includes an open functional area, greatly enhancing its usability and applicability. It features a reconfigurable structure, constructed with simple isotropic natural materials, making it suitable for dynamic requirements. The performance of our reconfigurable thermal dome has been confirmed through simulations and experiments, consistent with the theory. The introduction of this concept can greatly advance the development of thermal invisibility technology from theory to engineering and provide inspiration for other physical domains, such as direct current electric fields and magnetic fields.

Keywords: thermal domes, reconfigurable metamaterials, three-dimensional invisibility

## 1 Introduction

The urgent necessity of rendering objects invisible to infrared detection has sparked significant research into thermal cloaking [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. Traditionally, thermal invisibility cloaks were designed by initially enveloping the target with insulating materials, followed by guiding the heat flow around the cloaked area, thus achieving invisibility. Various theories such as transformation thermotics [1, 2, 15, 16, 17, 18], scattering cancellation [19, 20, 21, 22, 23], and topology optimization [24, 25, 26, 27, 28] have been developed following this approach.
However, the thermal invisibility devices designed based on the above principles face significant engineering challenges, in contrast to the tremendous potential of other metamaterials in engineering applications [29, 30, 31, 32]. They pose manufacturing and installation difficulties, problems with reusability, and, most critically, completely enclosed designs cannot accommodate internal heat sources, because the continuous rise in internal temperature could result in disastrous outcomes [33]. It is noteworthy that scenarios requiring the concealment of heat-generating objects are quite common, yet they have been conspicuously overlooked in previous studies. To overcome these limitations, researchers have begun exploring non-traditional cloak designs [34, 35] that do not require the cloaked area to be fully enclosed, thereby allowing interaction with the external environment. However, such studies have primarily focused on two-dimensional structures and have relied heavily on negative thermal conductivity in their designs, a feature that poses considerable challenges in real-world applications. Moreover, these newly proposed "external cloaks" fail to address a critical problem: the dissipation of internal heat sources, which is an essential requirement for many practical applications. Hence, there is a clear need for further research to address these issues, potentially opening new avenues in the field of thermal cloaking. Furthermore, reconfigurable capabilities significantly amplify a device's potential to adjust to dynamic requirements, thereby boosting its applicability and acceptance across various engineering sectors. Existing thermal cloaking devices, however, are usually tailored to a specific background; changes in the environment necessitate a redesigned cloak, making the process inconvenient and economically inefficient. Indeed, researchers have proposed numerous methods, such as nonlinearity [36], chameleon-like behaviors [37, 38], convection [39, 40, 41], and height manipulation [42, 43], to regulate the thermal conductivity of materials, aiming to meet dynamic stealth requirements. However, translating these techniques into three-dimensional spaces presents significant hurdles, rendering them impractical for many real-world applications.

This paper introduces a groundbreaking solution to these challenges: the thermal dome (see Fig. 1). A device uniquely designed with practical applications in mind, the thermal dome features an open hidden area, facilitating easy installation and reuse. Remarkably, this device accomplishes thermal invisibility for heat-generating objects, a pioneering achievement in the field. Inspired by Lego structures [44, 45], we have fused an open architecture with a multi-layered design, granting the thermal dome a reconfigurable nature. Users can intuitively assemble it to meet specific requirements and adapt to varying environments, much like assembling Lego blocks. This level of flexibility and adaptability sets the thermal dome apart from conventional thermal cloaks, underscoring its exceptional engineering significance. By solving the governing differential equations, we designed a semi-ellipsoidal thermal dome and validated its functionality in a hemispherical form using common bulk materials. The introduction of this thermal dome concept marks a paradigm shift in the field of thermal invisibility devices, propelling them into a phase of practicality and inspiring further exploration of their feasibility in real-world scenarios.
## 2 Design principles of thermal domes

We consider three-dimensional heat transfer in the absence of convection and radiation. In a homogeneous medium with a constant thermal conductivity of \(\kappa_{\mathrm{b}}\), heat flows uniformly from the high-temperature surface towards the low-temperature surface. However, the introduction of a target object disturbs the heat flow due to the object's different thermal conductivity, \(\kappa_{\mathrm{c}}\). To eliminate this disturbance, a thermal dome can be placed on the target, which cloaks the target as an object with the same thermal conductivity as the background, achieving the purpose of stealth. To achieve this functionality, the shape and material of the thermal dome must be carefully designed. While the shape of the thermal dome can be of any form, choosing a shape with poor symmetry can result in irregular faces of the thermal dome, making the design process challenging. The semi-ellipsoidal shape has excellent symmetry and can form a variety of shapes by changing the lengths of its three axes to meet the designer's needs, making it an ideal choice for the shape of a thermal dome. As shown in Fig. 2(a), the semi-axis of the core (dome) is specified as \(l_{ci}\) (\(l_{di}\)) along the \(x_{i}\) axis, where \(i=1,2,3\) represents the three dimensions. The heat transfer equation in the ellipsoidal coordinate system is written as [46]

\[\frac{\partial}{\partial\rho_{1}}\left[g(\rho_{1})\frac{\partial T}{\partial\rho_{1}}\right]+\frac{g(\rho_{1})}{\rho_{1}+l_{i}^{2}}\frac{\partial T}{\partial\rho_{1}}=0, \tag{1}\]

where \(g\left(\rho_{1}\right)=\prod\limits_{i}\left(\rho_{1}+l_{i}^{2}\right)^{1/2}\). We implement an external thermal field along the \(x_{i}\) axis as depicted in Fig. 2(b). The semi-ellipsoid is visualized as a complete ellipsoid halved, with the solution procedure identical to that of the complete ellipsoid barring an additional boundary condition: the entire surface on which the base of the dome is located must be at an equal temperature to ensure the background temperature field remains undisturbed. In addition, to solve the differential equations, we need to impose some additional boundary conditions, namely, the equality of temperatures and normal heat fluxes at the interfaces between different regions. Applying the generalized solution to the boundary conditions, we obtain the design requirement for the thermal dome:

\[\kappa_{b}=\frac{L_{ci}\kappa_{c}+\left(1-L_{ci}\right)\kappa_{d}+\left(1-L_{di}\right)\left(\kappa_{c}-\kappa_{d}\right)f}{L_{ci}\kappa_{c}+\left(1-L_{ci}\right)\kappa_{d}-L_{di}\left(\kappa_{c}-\kappa_{d}\right)f}\kappa_{d}, \tag{2}\]

where \(f=g\left(\rho_{c}\right)/g\left(\rho_{d}\right)=\prod\limits_{i}l_{ci}/l_{di}\) denotes the volume fraction, and \(L_{ci}\) and \(L_{di}\) are the shape factors.

Figure 1: (a) displays the application scenario of the thermal dome, which is able to shield the target from infrared detection. (b), (c), and (d) highlight the unique features and advantages of the thermal dome compared to traditional thermal cloaks. Specifically, (b) shows the reconfigurable nature of the thermal dome, enabling it to adapt to changing environments. (c) demonstrates the open structure of the thermal dome, which makes it extremely convenient in practical use, such as the replacement of the protected target as shown in the figure. (d) illustrates that the thermal dome remains applicable even when the target object generates heat, thereby expanding its range of applications.
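For a concrete sense of how Eq. (2) constrains the design, it can be inverted numerically for the dome conductivity \(\kappa_{d}\). Below is a minimal sketch (ours, not from the paper; the function names are illustrative, the shape factors are set to \(1/3\), their value for a sphere, and the core is taken as insulated, \(\kappa_{c}=0\)). Under these assumptions it recovers the conductivity of roughly 55 W m\({}^{-1}\) K\({}^{-1}\) used for the single-layer dome in the simulations below.

```python
from scipy.optimize import brentq

def kappa_b_implied(kappa_d, kappa_c, L_c, L_d, f):
    """Right-hand side of Eq. (2): the background conductivity a design is matched to."""
    num = L_c * kappa_c + (1 - L_c) * kappa_d + (1 - L_d) * (kappa_c - kappa_d) * f
    den = L_c * kappa_c + (1 - L_c) * kappa_d - L_d * (kappa_c - kappa_d) * f
    return num / den * kappa_d

def solve_kappa_d(kappa_b, kappa_c, L_c, L_d, f):
    """Invert Eq. (2) for the dome conductivity by bracketed root finding."""
    return brentq(lambda kd: kappa_b_implied(kd, kappa_c, L_c, L_d, f) - kappa_b,
                  1e-9, 1e9)

# Hemispherical dome with an insulated core (kappa_c = 0; L_c = L_d = 1/3 for a sphere),
# inner radius 10 cm, outer radius 11 cm, background conductivity 10 W/m/K.
f = (10.0 / 11.0) ** 3   # volume fraction, prod(l_ci / l_di)
print(solve_kappa_d(kappa_b=10.0, kappa_c=0.0, L_c=1 / 3, L_d=1 / 3, f=f))  # ~55.3
```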
The detailed solution steps are outlined in Supplementary Note 1. As previously mentioned, besides the thermal conductivity condition, another requisite for the semi-ellipsoidal structure not to impact the background temperature field is to have the entire surface on which its base is located at an equal temperature. Furthermore, if the thermal conductivity of the thermal dome needs to be independent of the core region, the core region can be insulated, whereby \(\kappa_{c}\) can be considered as equal to \(0\) W m\({}^{-1}\) K\({}^{-1}\), making the designed thermal dome applicable to arbitrary objects. Specifically, for a hemispherical thermal dome, as shown in Fig. 2(c), we can establish its thickness in relation to its geometric size and thermal conductivity:

\[d=\left(\sqrt[3]{\frac{2r+1}{2r-2}}-1\right)l_{\mathrm{c}}, \tag{3}\]

where \(r=\kappa_{\mathrm{d}}/\kappa_{\mathrm{b}}\) represents the ratio of the thermal conductivity of the thermal dome to that of the background. Importantly, at this stage, we have introduced an insulating layer beneath the hemispherical thermal dome to ensure its functionality across various objects. The inner radius \(l_{\mathrm{c}}\) of the thermal dome is determined by the specific usage scenario, while the thickness of the dome is closely related to the choice of material, as seen from Eq. (3). The thickness variation of a single-layer thermal dome as a function of \(l_{\mathrm{c}}\) and \(r\) is depicted in Fig. 2(d). The ratio \(r\) has a significant impact on the thickness \(d\), with \(d\) becoming infinite when the device uses the same material as the background (i.e., \(r=1\)). Conversely, when the thermal conductivity of the thermal dome \(\kappa_{\mathrm{d}}\) is much larger than that of the background \(\kappa_{\mathrm{b}}\) (i.e., \(r\) is a large number), the thickness of the thermal dome becomes very small. This is because the thermal dome acts as a compensation for thermal conductivity, and if its material has a high thermal conductivity, only a small amount of material is needed to compensate. Therefore, selecting an appropriate material to manufacture the thermal dome based on the specific scenario is crucial. For instance, if the background is made of cement and a thin thermal dome is preferred, copper would be a better choice. According to Eq. (3), the thickness of the layer decreases to 0.16 mm when \(l_{\mathrm{c}}\) is set to 10 cm in this case.

Figure 2: (a) presents a schematic representation of the thermal dome, while (b) exhibits its cross-sectional view. (c) presents a cross-sectional view of a single-layer hemispherical thermal dome. In (d), we show the thickness of a single-layer hemispherical thermal dome as a function of \(l_{\mathrm{c}}\) and \(r\).

The aforementioned approach can be readily expanded to accommodate core-shell structures with \(n\) layered shells. We can employ computational software to determine the parameters for the thermal domes in each layer. An expedited method for designing a multilayer thermal dome involves leveraging effective medium theory, where the design process can be conducted iteratively, layer by layer. Further details are provided in Supplementary Note 2.

## 3 Function verification and simulation results

We utilized the commercial software COMSOL Multiphysics to execute finite-element simulations and validate our theoretical design. We carried out steady-state simulations with the Heat Transfer Module, and the transient results will be elaborated in Supplementary Note 3.
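Before examining the simulated fields, Eq. (3) itself is easy to check numerically. A minimal sketch (ours, purely illustrative; the conductivity values are the ones quoted elsewhere in this paper for copper, cement, and the simulated single-layer dome):

```python
def dome_thickness(l_c, kappa_d, kappa_b):
    """Eq. (3): thickness of a single-layer hemispherical dome with an insulated core.
    Requires kappa_d > kappa_b, i.e. r > 1, for a positive finite thickness."""
    r = kappa_d / kappa_b
    return (((2 * r + 1) / (2 * r - 2)) ** (1 / 3) - 1) * l_c

# Copper layer on a cement background (385 and 1.28 W/m/K, the values listed in the
# experimental section): ~0.017 cm, i.e. roughly the 0.16 mm quoted above.
print(dome_thickness(l_c=10.0, kappa_d=385.0, kappa_b=1.28))

# Single-layer dome used in the simulations below (kappa_b = 10, kappa_1 = 55 W/m/K):
# ~1.0 cm, matching the 1 cm shell.
print(dome_thickness(l_c=10.0, kappa_d=55.0, kappa_b=10.0))
```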
For simplicity, we use the hemispherical thermal dome for verification. The background is dimensioned at 30 \(\times\) 30 \(\times\) 15 cm\({}^{3}\) and features a thermal conductivity, \(\kappa_{\mathrm{b}}\), of 10 W m\({}^{-1}\) K\({}^{-1}\). An object represented by a core with \(R_{\mathrm{o}}=9\) cm and \(\kappa_{\mathrm{o}}=500\) W m\({}^{-1}\) K\({}^{-1}\) is introduced. The temperature and isotherm distributions in the background illustrate the perturbation of heat flow. Once heat transfer reaches equilibrium, we analyze the efficacy of our thermal dome through the examination of the temperature distribution in three distinct groups. As depicted in Fig. 3(c), the presence of the object distorts the temperature distribution of the background, causing the isotherms to bend and reflect the disturbance. In contrast, the background devoid of the object exhibits a uniform temperature distribution and straight isotherms, as shown in Fig. 3(a). Upon placement of the thermal dome over the object (Fig. 3(b); featuring a shell with an inner radius \(R_{\mathrm{o}}=9\) cm, thickness \(d=1\) cm, and \(\kappa_{\mathrm{a}}=0.023\) W m\({}^{-1}\) K\({}^{-1}\) representing the adiabatic layer, and another shell with inner radius \(R_{1}=10\) cm, thickness \(d=1\) cm, and \(\kappa_{1}=55\) W m\({}^{-1}\) K\({}^{-1}\) representing the thermal dome), the temperature distribution and isotherms in the background precisely match those of the reference group. To accurately contrast their differences, we exported data from a cross-section at \(z=7.5\) cm (\(-15\) cm \(<x<15\) cm) (Fig. 3(d)). We utilized a dimensionless temperature \(T^{*}=100(T_{0}-T)/T_{0}\) and a dimensionless position \(x^{*}=2x/L\), where \(T_{0}\) and \(L\) denote the temperature of the reference and the length of the background, respectively. The temperature without the thermal dome (depicted by an orange line) diverges from the temperature of the reference across the entire space. Conversely, the blue line (with the thermal dome) aligns perfectly with \(T_{0}\) in the background, signifying successful achievement of the thermal cloaking effect by the thermal dome.

The scenario of a reconfigured thermal dome is considered when the background thermal conductivity alters. In such a case, with \(\kappa_{\rm b}\) changing from 10 W m\({}^{-1}\) K\({}^{-1}\) to 23 W m\({}^{-1}\) K\({}^{-1}\), the original thermal dome no longer satisfies the stealth requirement (Fig. 3(e)). One solution is to append another layer to the outer surface of the single-layer thermal dome (Fig. 3(f)). The Lego-like structure of the thermal dome facilitates easy layering or removal, a feat challenging in traditional thermal cloaks. Upon reconfiguration, the new thermal dome exhibits impressive performance in the new background. To verify the preceding discussion, we plot the dimensionless temperature \(T^{*}\) (Fig. 3(g)).

Past designs of thermal invisibility cloaks failed to account for scenarios with heat sources within the concealed area. However, a significant number of real-world objects to be hidden do emit heat, rendering conventional thermal cloaks ineffective. The open structure of the thermal dome allows the hidden area to directly engage a cold source that absorbs the heat generated within the hidden area. This ensures that the invisibility function remains uncompromised, and the internal temperature does not continually rise.
We showcase the simulation validation of the thermal dome and the conventional thermal cloak in Fig. 3(h) and Fig. 3(i), respectively. As time passes, the temperature within the thermal dome remains virtually stable, whereas the temperature within the traditional thermal cloak's space continues to rise. For further scrutiny, we selected two points, one inside the hidden region and one in the background area, and plotted their temperature values over time in Fig. 3(j) and Fig. 3(k). The comparison with the pure background reference group reveals the disadvantages of traditional thermal cloaks when heat sources are present in the hidden area: first, the internal temperature incessantly escalates over time; second, in the absence of entirely insulating materials, the elevated internal temperature impacts the temperature distribution in the background, leading to invisibility function failure. In contrast, the thermal dome not only preserves excellent invisibility but also maintains a stable internal temperature, akin to the cold source temperature. Thus, the thermal dome emerges as an effective solution for concealing objects with heat sources.

Throughout the preceding discussion, regardless of whether we consider a single-layer or multi-layer thermal dome, the temperature bias is vertical, directing heat flow from top to bottom. However, altering the direction of the temperature bias to horizontal does not compromise the perfect cloaking function of the thermal dome, as shown in Supplementary Note 4. When the temperature bias is arbitrary (Fig. S5), the temperature field becomes non-uniform, and the cloaking function of the thermal dome is no longer perfect. Yet, the simulation results reveal that the thermal dome still offers a substantial amount of cloaking effect compared to the control group without the thermal dome. Under such conditions, the object remains undetectable by low-precision infrared cameras. Furthermore, other shapes of thermal domes will be discussed in Supplementary Note 5.

Figure 3: Temperature distributions for different groups. (a)-(c) show the temperature distributions of the reference, with a thermal dome, and without a thermal dome, respectively. (d) shows the dimensionless temperature on a chosen line. Region one represents the background, and region two represents the thermal dome and the object. (e) and (f) demonstrate the reconfigurable capability of the thermal dome to adapt to changing backgrounds: (e) shows the simulation result for the original thermal dome in a changed background, and (f) shows the simulation result for the reconfigured thermal dome in a changed background. In this case, we added a new layer (\(\kappa_{2}\)=85 W m\({}^{-1}\) K\({}^{-1}\), \(d\)=1 cm) to the original thermal dome. (g) shows the dimensionless temperature on a chosen line. (h) and (i) compare the performance of the thermal dome and the thermal cloak with a heat source in the hidden area at 5 minutes and 10 minutes, respectively. The heat source in the thermal dome and the thermal cloak emits heat outward at a rate of 500 kW per square meter. In (j) and (k), we quantitatively show the variation of temperature at different positions with time.

## 4 Experimental validation of the thermal dome

### Experimental results

We performed experimental validation of the hemispherical thermal dome and the results are depicted in Fig. 4. Constraints of the experimental setup meant that the heat source was situated below, and the cold source was positioned above.
The temperature distribution within the background can be inferred from its surface. Any disturbance in the background heat flow will lead to distortion in its surface isotherms; otherwise, the isotherms remain straight. Therefore, we employed an infrared camera to assess the temperature distribution on the sample's surface, thus validating the function of the thermal dome. In the single-layer thermal dome experiment (Fig. 4(c)), Layer 1 was utilized as the thermal dome, while cement served as the background. Three samples were prepared: a reference group comprised solely of the background, a control group with the presence of objects but without a thermal dome, and a group with objects safeguarded by the thermal dome. The experimental and simulation results for these three groups are presented in Fig. 4(d). Both the experimental and simulation outcomes demonstrate that the isotherms of the group without a thermal dome are distorted, whereas the isotherms of the group equipped with a thermal dome, and of the reference group, are straight. This signifies that the thermal dome shields the object, preventing its detection. For an intuitive comparison, data from the same line were scrutinized, as shown in Fig. 4(e). Here, \(T\) denotes the temperature, and \(T^{*}=100(303-T)/303\) is a dimensionless temperature that represents the deviation. Both the reference group and the group with a thermal dome exhibit a temperature close to 303 K, with deviations approximating 0 K. The group without a thermal dome shows a significantly larger temperature deviation. Fig. 4(f) delineates the functionality of the reconfigured thermal dome in a new background, where stainless steel (316L) is used as the background material, and the thermal dome comprises Layer 1 and Layer 2. Experimental results with their corresponding simulation results for the group with the new thermal dome and the group with the original thermal dome are displayed in Fig. 4(g). Additionally, \(T\) and \(T^{*}\) are plotted in Fig. 4(h) to visually compare the efficacy of the new multi-layer thermal dome with the original single-layer dome. From Fig. 4(g) and Fig. 4(h), we can affirm the following: multi-layer thermal domes, with their Lego-like structures, can adapt to various environments by simply adjusting the number of layers, a feat challenging to achieve with traditional thermal cloaks.

Figure 4: Experimental demonstration of the thermal dome. (a) depicts the schematic diagram of the experimental setup, in which the sample is positioned between the cold (273 K) and hot (323 K) sources, with the open side connected to the cold source. (b) shows the structure of the sample, which is filled with foam to prevent thermal convection; the foam acts as both the adiabatic layer and the object. (c) illustrates the materials used for the single-layer thermal dome and the background. The experimental results with corresponding simulation results for three groups of samples are presented in (d). In (e), temperature data from a single line in the three sets of samples were extracted to further compare their similarities and differences. (f) displays the materials used for the new multi-layer thermal dome and the new background. (g) shows the experimental results with corresponding simulation results for the group with the new multi-layer thermal dome and the group with the original single-layer thermal dome. Similarly, in (h), temperature differences between the two were quantitatively analyzed at the same location.
In real-world applications, there are inevitably additional factors that affect the functionality of the thermal dome. For instance, during the assembly of a multi-layer thermal dome, the thermal contact resistance (TCR) between different layers can have an impact. In Supplementary Note 6, we discuss various factors influencing the TCR and simulate the temperature distribution of the thermal dome considering thermal contact resistance. We also explore strategies to mitigate the influence of TCR on the functionality of the thermal dome. Additionally, convective and radiative heat transfer between the sample and the external environment cannot be disregarded in practical applications. In Supplementary Note 7, we address these factors and conclude that the thermal dome maintains its functionality when convective and radiative heat transfers do not significantly alter the original temperature distribution, that is, when the temperature field remains relatively uniform.

### Experimental setups

#### Background Materials

Cement: Dimensions: 15 \(\times\) 15 \(\times\) 7.5 cm\({}^{3}\), thermal conductivity: \(\kappa_{\rm b}\) = 1.28 W m\({}^{-1}\) K\({}^{-1}\). The cement used in this study is a common construction material. Its thermal conductivity varies depending upon moisture content and curing time. We maintained uniform thermal conductivity across all sample sets by using cement from the same batch and initiating the curing process simultaneously.

316L Steel: Dimensions: 15 \(\times\) 15 \(\times\) 7.5 cm\({}^{3}\), thermal conductivity: \(\kappa_{\rm b}\) = 16.2 W m\({}^{-1}\) K\({}^{-1}\). This steel variant was shaped using computer numerical control (CNC) machining.

**Conductive Layers**

Layer 1: A 316 stainless steel layer created using 3D printing, with an inner radius \(R_{1}\) of 6 cm, thickness \(d\) of 0.25 cm, and thermal conductivity \(\kappa_{1}\) of 16.2 W m\({}^{-1}\) K\({}^{-1}\).

Layer 2: A copper layer created through CNC machining, having an inner radius \(R_{2}\) of 6.25 cm, thickness \(d\) of 0.12 cm, and thermal conductivity \(\kappa_{2}\) of 385 W m\({}^{-1}\) K\({}^{-1}\).

**Target Object and Adiabatic Layer**

Constructed of styrofoam for simplicity, both the target object and the adiabatic layer possess the same thermal conductivity, \(\kappa_{\rm o}\) = \(\kappa_{\rm a}\) = 0.042 W m\({}^{-1}\) K\({}^{-1}\), and a maximum radius \(R\) of 6 cm.

**Thermal Sources**

Hot Source: This comprises a heating table set to a constant temperature of 323 K and a copper oil bath pan, which is placed directly on the heating table and filled with silicone oil.

Cold Source: A copper pan filled with an ice-water mixture acts as the cold source, maintaining a temperature of 273 K.

**Sample Preparation**

Three sets of samples were prepared: a reference group, a thermal dome group, and a group without a thermal dome. We began by fabricating three 15 \(\times\) 15 \(\times\) 7.5 cm\({}^{3}\) cuboid molds using polyvinyl chloride foam boards and preparing a common batch of cement. The reference group involved simply pouring the cement into the mold, followed by mixing and drying. For the thermal dome group, we fixed the thermal dome to the mold bottom, poured cement, mixed it, and allowed it to dry. The group without a thermal dome involved a similar process, but with a resin shell manufactured via 3D printing that matched the thermal dome's size. After a fortnight, the cement was fully dried, the molds and resin shell were removed, and the resulting voids were filled with styrofoam.
For samples featuring a 316L steel background, we directly placed the thermal dome into the pre-formed background.

**Experimental Setup Construction**

To minimize convective heat transfer between the sample surface and the environment, we first wrapped two layers of foam around the sample. We then prepared a copper basin for the oil bath, which was placed on a heating platform set to 323 K and filled with silicone oil. This oil bath heating method ensured consistent heating of the bottom surface, eliminating inconsistencies caused by gaps between the sample and the heating platform. To serve as a cold source, another copper basin filled with an ice-water mixture was placed atop the sample, with the intervening gap filled with silicone grease to ensure effective contact. To avoid uneven temperatures caused by disparate ice distribution, we maintained the ice in a single block floating atop the water. The ambient temperature in the laboratory was kept at 298 K.

**Data Collection**

After heating began, we collected temperature data from the observation surface using an infrared camera at ten-second intervals. To mitigate environmental interference, we used a blackboard as a backdrop behind the sample. We also applied a transparent film on the sample surface and used a black foam backdrop to reduce the effect of material emissivity on temperature measurements. While these measures may not entirely eliminate emissivity effects, they ensure accurate temperature differences between samples under identical environmental conditions, providing reliable conclusions.

## 5 Conclusion and discussion

In this paper, we deviate from the traditional approach of designing thermal cloaking devices and propose an entirely new concept for achieving thermal invisibility: directing the heat flow directly towards an isothermal surface. The structure of the cloaking device under this notion is no longer closed, but open, allowing its internal hidden space to directly interact with the outside world. We evocatively refer to such a cloaking device as the "thermal dome". Compared to traditional thermal cloaks, the open structure of this device makes it more versatile in various scenarios, including those with internal heat sources, while also simplifying its reuse in practical applications. The combination of multilayered and open structures allows the thermal dome to be assembled like Lego, providing it with reconfigurability. This property enables users to customize the thermal dome to specific environmental conditions, greatly improving its practicality. Moreover, we have manufactured the hemispherical thermal dome using common materials, eliminating the need for the extreme materials required by some other thermal cloaks. Both simulation and experimental results confirm the functionality of the thermal dome. It is worth noting that while we have set the base of the thermal dome to be an isothermal boundary condition in our paper to achieve zero disturbance to the background, it is entirely feasible to replace this condition with a substantial heat reservoir in reality. Under such circumstances, the disturbance caused by heat flow directed towards the heat reservoir through the thermal dome can be considered negligible. Alternatively, if we merely need to prevent disturbance to the background temperature at a particular location caused by an object, we can effectively use the thermal dome to guide the heat flow passing through the object and disperse it to less crucial locations.
This novel approach opens up a new research direction in the field of thermal invisibility. Further exploration of this concept could provide substantial theoretical support for the practical application of thermal cloaking devices.

### Acknowledgements

This work was supported by the National Natural Science Foundation of China to J.H. (12035004), the Science and Technology Commission of Shanghai Municipality to J.H. (20JC1414700), the National Natural Science Foundation of China to Y.L. (92163123 and 52250191), and the Fundamental Research Funds for the Central Universities to Y.L. (2021FZZX001-19).

### Compliance with ethics guidelines

The authors declare that they have no conflict of interest or financial conflicts to disclose.

### Appendix A. Supplementary information

All data are available in the manuscript or the Supplementary information.
2309.13239
On optimality of Mallows model averaging
In the past decades, model averaging (MA) has attracted much attention as it has emerged as an alternative tool to the model selection (MS) statistical approach. Hansen [Econometrica 75 (2007) 1175--1189] introduced a Mallows model averaging (MMA) method with model weights selected by minimizing a Mallows' $C_p$ criterion. The main theoretical justification for MMA is an asymptotic optimality (AOP), which states that the risk/loss of the resulting MA estimator is asymptotically equivalent to that of the best but infeasible averaged model. MMA's AOP is proved in the literature by either constraining weights in a special discrete weight set or limiting the number of candidate models. In this work, it is first shown that under these restrictions, however, the optimal risk of MA becomes an unreachable target, and MMA may converge more slowly than MS. In this background, a foundational issue that has not been addressed is: When a suitably large set of candidate models is considered, and the model weights are not harmfully constrained, can the MMA estimator perform asymptotically as well as the optimal convex combination of the candidate models? We answer this question in both nested and non-nested settings. In the nested setting, we provide finite sample inequalities for the risk of MMA and show that without unnatural restrictions on the candidate models, MMA's AOP holds in a general continuous weight set under certain mild conditions. In the non-nested setting, a sufficient condition and a negative result are established for the achievability of the optimal MA risk. Implications on minimax adaptivity are given as well. The results from simulations back up our theoretical findings.
Jingfu Peng, Yang Li, Yuhong Yang
2023-09-23T03:11:07
http://arxiv.org/abs/2309.13239v3
# On optimality of Mallows model averaging

###### Abstract

In the past decades, model averaging (MA) has attracted much attention as it has emerged as an alternative tool to the model selection (MS) statistical approach. Hansen [_Econometrica_ **75** (2007) 1175-1189] introduced a Mallows model averaging (MMA) method with model weights selected by minimizing a Mallows' \(C_{p}\) criterion. The main theoretical justification for MMA is an asymptotic optimality (AOP), which states that the risk/loss of the resulting MA estimator is asymptotically equivalent to that of the best but infeasible averaged model. MMA's AOP is proved in the literature by either constraining weights in a special discrete weight set or limiting the number of candidate models. In this work, it is first shown that under these restrictions, however, the optimal risk of MA becomes an unreachable target, and MMA may converge more slowly than MS. In this background, a foundational issue that has not been addressed is: When a suitably large set of candidate models is considered, and the model weights are not harmfully constrained, can the MMA estimator perform asymptotically as well as the optimal convex combination of the candidate models? We answer this question in a nested model setting commonly adopted in the area of MA. We provide finite sample inequalities for the risk of MMA and show that without unnatural restrictions on the candidate models, MMA's AOP holds in a general continuous weight set under certain mild conditions. Several specific methods for constructing the candidate model sets are proposed. Implications on minimax adaptivity are given as well. The results from simulations back up our theoretical findings.

**Keywords: Model averaging, model selection, asymptotic optimality, minimax adaptivity.**

## 1 Introduction

In statistical modeling, multiple candidate models are usually considered to explore the data. Model selection (MS) guides us in the search for the best model among candidates based on a traditional selection criterion, such as AIC (Akaike, 1973), \(C_{p}\) (Mallows, 1973), and BIC (Schwarz, 1978), the use of cross-validation (Allen, 1974; Stone, 1974), or the solution of a penalized regression problem, such as Lasso (Tibshirani, 1996), adaptive Lasso (Zou, 2006), SCAD (Fan and Li, 2001), and MCP (Zhang, 2010) (see Ding et al. (2018) for a recent review). The key theoretical properties of these methods, namely consistency in selection, asymptotic efficiency, and minimax-rate optimality, have been well established in the literature. Once a final model is selected, all subsequent estimation, prediction, and inference are typically based on the selected model as if it were given in advance. However, it has been increasingly recognized that choosing just one model inherently ignores the possibly high uncertainty in the selection process (Chatfield, 1995; Draper, 1995; Yuan and Yang, 2005). Model averaging (MA), on the other hand, provides an alternative that reduces the variability of MS while offering a possibility of reducing modeling bias by averaging over the candidate models properly. MA has a rich heritage in Bayesian statistics; see, e.g., Draper (1995), George and McCulloch (1997), and Hoeting et al. (1999) for more details and references therein.
From a frequentist perspective, several attractive strategies have been proposed to combine models, including boosting (Freund, 1995), bagging (Breiman, 1996a), random forest (Amit and Geman, 1997), information criterion weighting (Buckland et al., 1997; Hjort and Claeskens, 2003), progressive mixture (Yang, 2000c; Catoni, 2004; Juditsky et al., 2008), exponentially weighted aggregation (George, 1986; Leung and Barron, 2006; Dalalyan and Salmon, 2012), and Q-aggregation (Dai et al., 2012; Rigollet, 2012; Lecue and Rigollet, 2014), to name a few (see Section A.4 of the appendix for other related works). In particular, by minimizing some specific performance measures, a growing MA literature develops methods to pursue the optimal convex combination of the candidate models based on the same data. To the best of our knowledge, this problem was first considered by Blaker (1999) in a setting with two candidate models, and then studied in a general context by Hansen (2007), who proposed a Mallows model averaging (MMA) method to select weights for averaging across nested linear models by minimizing the Mallows' \(C_{p}\) criterion (Mallows, 1973). Adopting other performance measures, such as cross-validation error and Kullback-Leibler divergence, MMA-type strategies have been developed for other or more general frameworks, such as the heteroskedastic error regression model (Hansen and Racine, 2012; Liu and Okui, 2013), time-series error models (Hansen, 2008; Zhang et al., 2013; Cheng et al., 2015), the high-dimensional regression model (Ando and Li, 2014, 2017; Zhang et al., 2020), the generalized linear model (Ando and Li, 2017; Zhang et al., 2016), the quantile regression model (Lu and Su, 2015), the varying-coefficient model (Zhu et al., 2019), the semiparametric model (Fang et al., 2022), and the general supervised learning framework (Wolpert, 1992; Breiman, 1996b; van der Laan et al., 2007), among many others.

Given the increasing and potentially wide applications of the MMA-type methods, an essential question arising from an estimation perspective is how good this popular class of methods for constructing an MA estimator is. This paper focuses on MMA introduced by Hansen (2007) and revisits its optimality. Note that the MMA criterion is an unbiased estimate of the squared risk of the MA estimator plus a constant, and the resulting MMA estimator targets the minimization of the squared risk/loss of MA. The optimality of MMA has been studied from an asymptotic viewpoint in the MA literature. An asymptotic optimality (AOP) theory states that a good MA estimator can be asymptotically equivalent to the optimal convex combination of the given candidates in terms of the statistical risk/loss. There are two major approaches to establishing the MMA's AOP. Hansen (2007) first proved it when the weight vectors are contained in a special discrete set. His results require that the candidates are nested but do not impose any additional assumption on the number of candidate models. Since the discrete weight set is quite restrictive, Wan et al. (2010) made an important contribution by considering direct minimization of the MMA criterion over the continuous weight set with possibly non-nested models. Their paper justifies the MMA's AOP but requires a restriction on the candidate model set. Similar assumptions also arise in a number of subsequent papers; see Ando and Li (2014, 2017); Zhang et al. (2020); Zhang (2021).
Summarizing the literature in relation to the real goal of AOP: while the aforementioned theoretical advancements are novel and valuable, the consequences of the restrictions imposed on the weight set or the candidate models are still unclear. Consider a typical nested model framework with the \(m\)-th candidate model containing the first \(m\) regressors. For Hansen (2007)'s approach, a sensible choice for the candidate model set is to include \(M_{n}\geq m_{n}^{*}\) nested models, where \(m_{n}^{*}\) is the size of the optimal single model. We show in Section 3.1 that when \(m_{n}^{*}\) is not too small relative to the sample size \(n\) (e.g., \(m_{n}^{*}\) grows at order \(n^{\alpha}\) for some \(0<\alpha<1\)), the best possible MA risk in the discrete weight set is suboptimal. For the approach in Wan et al. (2010), as shown in Section 3.2, the required restriction on the candidate models is so strong that the optimal single model \(m_{n}^{*}\) is excluded, and the MMA criterion can only combine a set of underperforming models. Note that the MMA-type literature often motivates its approaches as overcoming the problems of MS and hence performing better. However, the MA estimator based on such candidate model sets actually converges more slowly than MS. Against this background, a critical issue that has not been addressed in the existing literature is: When the weight set allows the full potential of MA, and the number of candidate models is not harmfully constrained, can the MMA estimator perform asymptotically as well as the infeasible optimal averaged model?

Inspired by the previous work of Hansen (2007) and Wan et al. (2010), this paper answers the aforementioned foundational question on MMA in the context of linear regression with nested models. We derive non-asymptotic risk bounds for MMA when the random errors are sub-Gaussian, which show that the squared risk of the MMA estimator is bounded above by the optimal MA risk plus additional terms associated with the estimation errors of the weights and the variance of the error term, respectively. These risk bounds have three main implications. First, when the convergence rate of the optimal MA risk is not too fast (e.g., the optimal MA risk converges more slowly than \((\log n)^{3}/n\)), the MMA estimator asymptotically attains the optimal risk among all averaged models without any unnatural restrictions on the weight set or the candidate model set. Second, instead of incorporating all nested models, the full advantage of MA can still be realized by grouping regressors properly or removing inferior models at the outset, prior to implementing MMA. Third, the resulting MMA estimator exhibits optimal minimax adaptivity over some general coefficient classes, such as ellipsoids and hyperrectangles. The results from our finite sample simulations support these findings.

The rest of the paper is organized as follows. In Section 2, we set up the regression framework and give the MMA estimators. In Section 3, we theoretically investigate the consequences of using a discrete weight set or restricting the candidate model set. In Section 4, we develop non-asymptotic risk bounds for MMA, from which the MMA's AOP theory is obtained. Section 5 suggests two strategies for constructing the candidate model set. Section 6 shows the minimax adaptivity of MMA. Section 7 presents the results of simulation experiments. Concluding remarks are given in Section 8.
The proofs, additional simulation results, and discussions on the other related works can be found in the Appendix.

## 2 Problem setup

### Setup and notation

Consider the linear regression model

\[y_{i}=f_{i}+\epsilon_{i}=\sum_{j=1}^{p_{n}}\beta_{j}x_{ij}+\epsilon_{i},\quad i=1,\ldots,n, \tag{2.1}\]

where \(\epsilon_{1},\ldots,\epsilon_{n}\) are i.i.d. sub-Gaussian random variables with \(\mathbb{E}\epsilon_{i}=0\) and \(\mathbb{E}\epsilon_{i}^{2}=\sigma^{2}\), and \(\mathbf{x}_{j}=(x_{1j},\ldots,x_{nj})^{\top}\), \(j=1,\ldots,p_{n}\) are nonstochastic regressor vectors. Defining the response vector \(\mathbf{y}=(y_{1},\ldots,y_{n})^{\top}\), the regression mean vector \(\mathbf{f}=(f_{1},\ldots,f_{n})^{\top}\), the coefficient vector \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{p_{n}})^{\top}\), the regressor matrix \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{p_{n}}]\in\mathbb{R}^{n\times p_{n}}\), and the noise vector \(\boldsymbol{\epsilon}=(\epsilon_{1},\ldots,\epsilon_{n})^{\top}\), we can write (2.1) in matrix form:

\[\mathbf{y}=\mathbf{f}+\boldsymbol{\epsilon}=\mathbf{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}. \tag{2.2}\]

For the sake of simplicity, we assume \(p_{n}\leq n\) and that \(\mathbf{X}\) has full column rank. To estimate the true regression mean vector \(\mathbf{f}\), \(M_{n}\) strictly nested linear models are considered as candidates. The \(m\)-th candidate model includes the first \(k_{m}\) regressors, where \(1\leq k_{1}<k_{2}<\cdots<k_{M_{n}}\leq p_{n}\). The information about the sizes of the candidate models is stored in a set \(\mathcal{M}=\{k_{1},\ldots,k_{M_{n}}\}\), and then \(M_{n}=|\mathcal{M}|\), where \(|\mathcal{S}|\) denotes the cardinality of a set \(\mathcal{S}\) throughout this paper. Let \(\mathbf{X}_{k_{m}}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{k_{m}}]\) be the design matrix of the \(m\)-th candidate model, which estimates \(\mathbf{f}\) by the least squares method: \(\widehat{\mathbf{f}}_{k_{m}}=\mathbf{X}_{k_{m}}(\mathbf{X}_{k_{m}}^{\top}\mathbf{X}_{k_{m}})^{-1}\mathbf{X}_{k_{m}}^{\top}\mathbf{y}\equiv\mathbf{P}_{k_{m}}\mathbf{y}\). Let \(\mathbf{w}=(w_{1},\ldots,w_{M_{n}})^{\top}\) denote a weight vector in the unit simplex of \(\mathbb{R}^{M_{n}}\):

\[\mathcal{W}_{M_{n}}=\left\{\mathbf{w}\in[0,1]^{M_{n}}:\sum_{m=1}^{M_{n}}w_{m}=1\right\}. \tag{2.3}\]

Given the candidate model set \(\mathcal{M}\), the MA estimator of \(\mathbf{f}\) is \(\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}}=\sum_{m=1}^{M_{n}}w_{m}\widehat{\mathbf{f}}_{k_{m}}\), where the subscript \(\mathbf{w}|\mathcal{M}\) emphasizes the dependence of the MA estimator on the candidate model set \(\mathcal{M}\). For the theoretical work, we consider the normalized squared \(\ell_{2}\) loss \(L_{n}(\widehat{\mathbf{f}},\mathbf{f})=n^{-1}\|\widehat{\mathbf{f}}-\mathbf{f}\|^{2}\) and its corresponding risk \(R_{n}(\widehat{\mathbf{f}},\mathbf{f})=\mathbb{E}L_{n}(\widehat{\mathbf{f}},\mathbf{f})\) as measures of the performance of an estimator \(\widehat{\mathbf{f}}\), where \(\|\cdot\|\) refers to the Euclidean norm. For abbreviation, let \(L_{n}(m,\mathbf{f})\), \(R_{n}(m,\mathbf{f})\), \(L_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\), and \(R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\) stand for \(L_{n}(\widehat{\mathbf{f}}_{m},\mathbf{f})\), \(R_{n}(\widehat{\mathbf{f}}_{m},\mathbf{f})\), \(L_{n}(\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}},\mathbf{f})\), and \(R_{n}(\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}},\mathbf{f})\), respectively.
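For concreteness, the nested least-squares setup can be simulated in a few lines. The following is a minimal sketch (ours, purely illustrative: a Gaussian toy design, polynomially decaying coefficients, and \(\sigma^{2}=1\)); it is reused in a later sketch for the weight optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.arange(1, p + 1, dtype=float) ** -1.0   # decaying coefficients
f = X @ beta
y = f + rng.standard_normal(n)                    # model (2.1) with sigma^2 = 1

k_sizes = np.arange(1, p + 1)                     # all nested models, k_m = m
# Column m of F_hat is the least-squares fit P_{k_m} y using the first k_m regressors.
F_hat = np.column_stack(
    [X[:, :k] @ np.linalg.lstsq(X[:, :k], y, rcond=None)[0] for k in k_sizes]
)
losses = ((F_hat - f[:, None]) ** 2).mean(axis=0) # L_n(m, f) for each candidate
```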
We denote by \(m_{n}^{*}=\arg\min_{m\in\{1,\ldots,p_{n}\}}R_{n}(m,\mathbf{f})\) the size of the optimal single model, by \(m^{*}|\mathcal{M}=\arg\min_{m\in\mathcal{M}}R_{n}(m,\mathbf{f})\) the size of the optimal candidate model in \(\mathcal{M}\), and by \(\mathbf{w}^{*}|\mathcal{M}=\arg\min_{\mathbf{w}\in\mathcal{W}_{M_{n}}}R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\) the optimal weight vector based on the candidate model set \(\mathcal{M}\) and the general continuous weight set \(\mathcal{W}_{M_{n}}\). The quantities \(m_{n}^{*}\), \(m^{*}|\mathcal{M}\), and \(\mathbf{w}^{*}|\mathcal{M}\) are all infeasible in practice since they depend on the unknown parameters and \(\sigma^{2}\). In this paper, we estimate the weights by minimizing the MMA criterion proposed by Hansen (2007):

\[C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})=\frac{1}{n}\|\mathbf{y}-\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}}\|^{2}+\frac{2\widehat{\sigma}^{2}}{n}\mathbf{k}^{\top}\mathbf{w}, \tag{2.4}\]

that is, \(\widehat{\mathbf{w}}|\mathcal{M}=\arg\min_{\mathbf{w}\in\mathcal{W}_{M_{n}}}C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})\), where \(\widehat{\sigma}^{2}\) is an estimator of \(\sigma^{2}\), and \(\mathbf{k}=(k_{1},\ldots,k_{M_{n}})^{\top}\) is the vector of the sizes of the candidate models in \(\mathcal{M}\). The resulting MMA estimator of \(\mathbf{f}\) is

\[\widehat{\mathbf{f}}_{\widehat{\mathbf{w}}|\mathcal{M}}=\sum_{m=1}^{M_{n}}\widehat{w}_{m}\widehat{\mathbf{f}}_{k_{m}}. \tag{2.5}\]

Note that when \(\sigma^{2}\) is known, \(\widehat{\mathbf{w}}|\mathcal{M}\) is chosen based on the minimization of an unbiased estimate of \(R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\) plus a constant, since \(\mathbb{E}C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})=R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})+\sigma^{2}\). Let \(\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) and \(\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) denote the risk functions of the resulting MMA estimator, which take the randomness of \(\widehat{\mathbf{w}}\) into account. The former is slightly different from the latter, since in the latter function \(\widehat{\mathbf{w}}\) is directly plugged into the expression of \(R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\). Let \(Q_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) denote either of the two quantities \(\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) and \(\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\).

From now on, we will use the symbols \(\lesssim\), \(\gtrsim\), and \(\asymp\) for the comparison of positive sequences, where \(a_{n}\lesssim b_{n}\) means \(a_{n}=O(b_{n})\), \(a_{n}\gtrsim b_{n}\) means \(b_{n}=O(a_{n})\), and \(a_{n}\asymp b_{n}\) means both \(a_{n}\lesssim b_{n}\) and \(a_{n}\gtrsim b_{n}\). Also, \(a_{n}\sim b_{n}\) means that \(a_{n}/b_{n}\to 1\) as \(n\to\infty\). Let \(\lfloor a\rfloor\) and \(\lceil a\rceil\) return the floor and the ceiling of \(a\), respectively. For any two real numbers \(a\) and \(b\), we use the notation \(a\wedge b=\min(a,b)\) and \(a\lor b=\max(a,b)\).

### Definitions of optimality

We first give some notation that will play a key role in our theoretical analysis. Let \(\mathbf{P}_{j}\triangleq\mathbf{X}_{j}(\mathbf{X}_{j}^{\top}\mathbf{X}_{j})^{-1}\mathbf{X}_{j}^{\top}\) be the projection matrix on the column space of the first \(j\) columns of the full design matrix \(\mathbf{X}\).
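As a computational aside before proceeding, note that the criterion (2.4) is a convex quadratic in \(\mathbf{w}\), so \(\widehat{\mathbf{w}}|\mathcal{M}\) can be computed as a standard quadratic program over the simplex. A minimal sketch continuing the toy setup above (the helper name and the general-purpose SLSQP solver are our choices, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def mma_weights(y, F_hat, k_sizes, sigma2_hat):
    """Minimize the Mallows criterion (2.4) over the unit simplex W_{M_n}."""
    n, M = F_hat.shape
    def criterion(w):
        resid = y - F_hat @ w
        return resid @ resid / n + 2.0 * sigma2_hat * (k_sizes @ w) / n
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(criterion, np.full(M, 1.0 / M), method="SLSQP",
                   bounds=[(0.0, 1.0)] * M, constraints=cons)
    return res.x

w_hat = mma_weights(y, F_hat, k_sizes, sigma2_hat=1.0)  # sigma^2 known in the toy setup
f_mma = F_hat @ w_hat                                   # the MMA estimator (2.5)
```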
As pointed out by Xu and Zhang (2022), the successive subtraction of \(\mathbf{P}_{j},j=1,\ldots,p_{n}\) yields \(p_{n}\) mutually orthogonal matrices \(\mathbf{D}_{j}\triangleq\mathbf{P}_{j}-\mathbf{P}_{j-1}=\mathbf{\phi}_{j}\mathbf{\phi}_{j}^{\top}\), \(j=1,\ldots,p_{n}\), where \(\mathbf{P}_{0}=\mathbf{0}_{n\times n}\) and \(\mathbf{\phi}_{j}\in\mathbb{R}^{n}\) is an eigenvector of \(\mathbf{D}_{j}\) satisfying \(\|\mathbf{\phi}_{j}\|=1\). Obviously, \(\{\mathbf{\phi}_{1},\ldots,\mathbf{\phi}_{p_{n}}\}\) forms an orthonormal basis for the column space of \(\mathbf{X}\). Let us denote the _transformed coefficients_ \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{p_{n}})^{\top}\) of \(\mathbf{f}\) by

\[\theta_{j}=\theta_{j}(\mathbf{f})=\frac{\mathbf{\phi}_{j}^{\top}\mathbf{f}}{\sqrt{n}},\quad j=1,\ldots,p_{n}. \tag{2.6}\]

When the columns of \(\mathbf{X}\) are mutually orthogonal with \(\ell_{2}\) norm \(n\), the transformed coefficient \(\theta_{j}\) coincides with the regression coefficient \(\beta_{j}\). Otherwise, \(\theta_{j}\) depends additionally on the dependence between the covariates.

There are two important approaches to defining the optimality of MMA: AOP within a given class of averaged estimators, and minimax adaptivity within given classes of true regression mean vectors.

**Definition 1**.: _Given a candidate model set \(\mathcal{M}\) and a weight set \(\mathcal{W}\), an MA estimator \(\widehat{\mathbf{f}}_{\widetilde{\mathbf{w}}|\mathcal{M}}\) with \(\widetilde{\mathbf{w}}\) trained
Note that the relation \(R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}\right)\leq R_{n}\left( \mathbf{w}^{*}|\mathcal{M},\mathbf{f}\right)\) holds for any \(\mathcal{M}\subseteq\mathcal{M}_{a}\). Thus, \(R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}\right)\) can be seen as the full potential of MA under the nested model setting we consider. Therefore, in contrast to the restricted AOP, a more natural definition of the optimality of MA is the _full AOP_. **Definition 2**.: _An MA estimator \(\widehat{\mathbf{f}}_{\widetilde{\mathbf{w}}|\mathcal{M}}\) with \(\widetilde{\mathbf{w}}\) trained on the data is said to achieve the full AOP if it satisfies_ \[Q_{n}\left(\widetilde{\mathbf{w}}|\mathcal{M},\mathbf{f}\right)=[1+o(1)]R_{n }\left(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}\right) \tag{2.10}\] _as \(n\to\infty\)._ Then two important questions arise: 1. Does the MMA estimator (2.5) obtain the full AOP by combining candidates in \(\mathcal{M}_{a}\) and minimizing the criterion (2.4) over \(\mathcal{W}_{|\mathcal{M}_{a}|}\) directly? 2. Can we reduce the candidate model set \(\widehat{\mathcal{M}}\subset\mathcal{M}_{a}\) yet it still satisfies the full AOP property \[\mathbb{E}Q_{n}(\widehat{\mathbf{w}}|\widehat{\mathcal{M}},\mathbf{f})=[1+o( 1)]R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}\right)?\] (2.11) The second question is particularly interesting from an application perspective. Another approach to defining the optimality of MA is the minimax adaptivity. Suppose the transformed coefficients \(\mathbf{\theta}\) defined in (2.6) belongs to the parameter space \(\Theta\subseteq\mathbb{R}^{p_{n}}\), and the corresponding mean vector space of \(\mathbf{f}\) is defined by \(\mathcal{F}_{\Theta}=\{\mathbf{f}=\sum_{j=1}^{p_{n}}\theta_{j}\mathbf{\phi}_{j}: \mathbf{\theta}\in\Theta\}\). Define the minimax risk \(R_{M}(\mathcal{F}_{\Theta})=\inf_{\widehat{\mathbf{f}}}\sup_{\mathbf{f}\in \mathcal{F}_{\Theta}}R_{n}(\widehat{\mathbf{f}},\mathbf{f}),\) where the infimum is over all estimator \(\widehat{\mathbf{f}}\). In addition, define the minimax risk of the linear-combined estimators \(R_{L}(\mathcal{F}_{\Theta})=\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{ \Theta}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f}),\) where \(\inf_{\mathbf{w}}\) denote the infimum over all \(\mathbf{w}\in\mathbb{R}^{p_{n}}\), and the subscript \(L\) here is to emphasize that \(\widehat{\mathbf{f}}\) is restricted to the class of all the linear combinations of the models in \(\mathcal{M}_{a}\). **Definition 3**.: _An estimator \(\widetilde{\mathbf{f}}\) is called adaptive in the exact minimax sense on the family of the mean vector spaces \(\mathbf{\mathcal{F}}=\{\mathcal{F}_{\Theta}:\Theta\in\mathbf{\mathcal{A}}\}\) if_ \[\sup_{\mathbf{f}\in\mathcal{F}_{\Theta}}R_{n}(\widetilde{\mathbf{f}},\mathbf{ f})=[1+o(1)]R_{M}(\mathcal{F}_{\Theta}) \tag{2.12}\] _holds for every \(\mathcal{F}_{\Theta}\in\mathbf{\mathcal{F}}\). 
An MA estimator \(\widehat{\mathbf{f}}_{\widetilde{\mathbf{w}}|\mathcal{M}_{a}}\) with \(\widetilde{\mathbf{w}}\) estimated on data is called adaptive in the exact linear-combined minimax sense on the family of classes \(\mathbf{\mathcal{F}}\) if_

\[\sup_{\mathbf{f}\in\mathcal{F}_{\Theta}}R_{n}(\widetilde{\mathbf{w}}|\mathcal{M}_{a},\mathbf{f})=[1+o(1)]R_{L}(\mathcal{F}_{\Theta}) \tag{2.13}\]

_holds for every \(\mathcal{F}_{\Theta}\in\mathbf{\mathcal{F}}\)._

3. Is the MMA estimator adaptive in the exact minimax sense or adaptive in the exact linear-combined minimax sense on some general families of coefficient classes \(\Theta\), such as the families of Sobolev ellipsoids and hyperrectangles?

The answers to questions Q1-Q3 may provide a previously unavailable insight into the theoretical foundation of MMA.

## 3 Revisiting the existing AOP theories on MMA

The main purpose of this section is to investigate the consequences of using the discrete weight set (Hansen, 2007) or restricting the candidate model set (Wan et al., 2010) in the restricted-AOP theory.

### Discrete weight set

Recall that Hansen (2007) established the MMA's AOP when the weights are contained in the discrete weight set (2.8) but without imposing any additional restriction on \(\mathcal{M}\). For simplicity, we consider a set of successive candidate models \(\mathcal{M}_{s}=\{1,2,\ldots,M_{n}\}\), which has usually been adopted to implement the MMA-type methods (Hansen, 2007; Zhang et al., 2016, 2020). Let \(\mathbf{w}_{N}^{*}|\mathcal{M}_{s}=\arg\min_{\mathbf{w}\in\mathcal{W}_{|\mathcal{M}_{s}|}(N)}R_{n}(\mathbf{w}|\mathcal{M}_{s},\mathbf{f})\) denote the optimal discrete weight vector in \(\mathcal{W}_{|\mathcal{M}_{s}|}(N)\). We first focus on the magnitude of the risk increment \(R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\). Certain assumptions on the nature of the regression mean vector are made to evaluate this risk increment in a feasible way.

**Assumption 1**.: _The regression mean vector \(\mathbf{f}\) satisfies \(\limsup_{n}n^{-1}\|\mathbf{f}\|^{2}<\infty\)._

**Assumption 2**.: _The transformed coefficients (2.6) are ordered, which means \(\{|\theta_{j}|,j\geq 1\}\) is a non-increasing positive sequence._

Assumption 1 is a standard assumption for regression estimation problems. Assumption 2 offers considerable convenience in characterizing the unknown optimal weights. When the columns of \(\mathbf{X}\) are mutually orthogonal, we see that \(\theta_{j}\) is proportional to \(\beta_{j}\). In this case, Assumption 2 ensures that the regressors are ordered from most important to least important. The idea of ordering regressors to prepare candidate models has been commonly adopted in the implementation of MA; for example, see Hansen (2007); Ando and Li (2017); Zhang et al. (2016, 2020). Under Assumptions 1-2, we further provide two different conditions on the transformed coefficients \(\theta_{j},j=1,\ldots,p_{n}\).

**Condition 1**.: _(Slowly decaying coefficients) There exist constants \(k>1\) and \(0<\delta\leq\nu<1\) with \(k\nu^{2}<1\) such that \(\delta\leq|\theta_{\lfloor kl\rfloor}/\theta_{l}|\leq\nu\) when \(l\) is large enough._

**Condition 2**.: _(Fast decaying coefficients) For every constant \(k>1\), \(\lim_{l\to\infty}|\theta_{\lfloor kl\rfloor}/\theta_{l}|=0\)._

Condition 1 contains the case \(\theta_{j}=j^{-\alpha_{1}}\) for \(\alpha_{1}>1/2\), which serves as the principal case in the MA literature (Hansen, 2007).
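Before stating the formal results, the effect of weight discretization in this principal case can be explored numerically. With all nested models as candidates, an averaged estimator is determined by the cumulative weights \(\Omega_{j}=\sum_{m:k_{m}\geq j}w_{m}\), in terms of which the nested-MA risk is \(R_{n}=\sum_{j}(1-\Omega_{j})^{2}\theta_{j}^{2}+(\sigma^{2}/n)\sum_{j}\Omega_{j}^{2}\); since this objective is separable and quadratic in each \(\Omega_{j}\), the optimal continuous weights are \(\Omega_{j}^{*}=\theta_{j}^{2}/(\theta_{j}^{2}+\sigma^{2}/n)\) (with \(\Omega_{1}=1\) forced), and the best weights in \(\mathcal{W}_{|\mathcal{M}_{s}|}(N)\) round each \(\Omega_{j}^{*}\) to the nearest multiple of \(1/N\). A minimal sketch (ours, purely illustrative):

```python
import numpy as np

def ma_risk(omega, theta, sigma2, n):
    """Nested-MA risk in cumulative-weight form."""
    return np.sum((1 - omega) ** 2 * theta ** 2) + sigma2 / n * np.sum(omega ** 2)

n = 10_000
sigma2 = 1.0
theta = np.arange(1, n + 1, dtype=float) ** -1.0      # theta_j = j^{-1}, alpha_1 = 1

omega_star = theta ** 2 / (theta ** 2 + sigma2 / n)   # optimal continuous weights
omega_star[0] = 1.0                                   # Omega_1 = sum of all w_m = 1
risk_cont = ma_risk(omega_star, theta, sigma2, n)

for N in (1, 2, 5):                                   # N = 1 corresponds to MS
    omega_N = np.round(omega_star * N) / N            # best weights in W(N)
    omega_N[0] = 1.0
    print(N, ma_risk(omega_N, theta, sigma2, n) / risk_cont)
```

In line with the propositions below, for this slowly decaying case the printed risk ratios are expected to stay bounded away from one for any fixed \(N\), with the \(N=1\) (MS) ratio the largest.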
In contrast, the coefficients satisfying Condition 2 decay much faster. An example is the exponentially decaying coefficients \(\theta_{j}=\exp(-j^{\alpha_{2}})\) for some \(\alpha_{2}>0\).

**Proposition 1**.: _Suppose Assumptions 1-2 hold. When both Condition 1 and \(M_{n}\gtrsim m_{n}^{*}\) are satisfied, we have_

\[R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\asymp R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}).\]

_When either Condition 2 or \(M_{n}=o(m_{n}^{*})\) holds, we have_

\[R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)=o\left[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\right].\]

Proposition 1 theoretically clarifies the effects of weight discretization and \(M_{n}\) on the optimal MA risk. Similar results have also been given in Theorem 6 of Xu and Zhang (2022). When \(\theta_{l}\) decays slowly and \(M_{n}\) is large, the difference \(R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\) is of the same order as the risk \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\). In this case, weight discretization increases the optimal risk in the general continuous weight set \(\mathcal{W}_{|\mathcal{M}_{s}|}\) by a significant fraction. However, when \(\theta_{l}\) decays fast or \(M_{n}\) is small relative to the size of the optimal model, the discrete weight set asymptotically does not influence the optimal risk of MA. This proposition implies that in some important scenarios, such as \(p_{n}=n\) and \(\theta_{j}=j^{-\alpha_{1}},\alpha_{1}>1/2\), where the optimal single model \(m_{n}^{*}\) grows at order \(n^{1/(2\alpha_{1})}\), it is impossible to achieve the full potential of MA by minimizing the MMA criterion in a discrete weight set with any fixed \(N\). On the other hand, MS can be viewed as MA in the discrete set \(\mathcal{W}_{|\mathcal{M}_{s}|}(1)\). Recall that \(m_{n}^{*}\) denotes the optimal single model among all candidate models and \(m^{*}|\mathcal{M}_{s}\) stands for the optimal model in \(\mathcal{M}_{s}\). Thus we have \(R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)\geq R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\). A natural question to ask is whether \(R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\) has a substantial improvement over \(R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)\) when \(N\geq 2\).

**Proposition 2**.: _Suppose Assumptions 1-2 hold. Under Condition 1 and \(M_{n}\gtrsim m_{n}^{*}\), define_

\[\kappa\triangleq\log_{k}\left(\frac{m_{n}^{*}}{M_{n}}\lor 1\right),\]

_where \(k\) is the constant given in Condition 1. If \(N>(1+\delta^{2\kappa+2})/(2\delta^{2\kappa+2})\), we have_

\[R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\asymp R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right).\]

_Under Condition 2 or \(M_{n}=o(m_{n}^{*})\), for any \(N\), we have_

\[R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)=o\left[R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)\right].\]
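To make Propositions 1-2 concrete, the following minimal numerical sketch compares the optimal continuous-weight MA risk with its discretized counterpart, using the orthogonalized risk decomposition developed in Appendix A.1 (see (A.1.11)-(A.1.12)). The setting (\(\theta_{j}=j^{-\alpha_{1}}\) with \(\alpha_{1}=1\), \(\sigma^{2}=1\), \(p_{n}=M_{n}=n=1000\)) and the helper names are our illustrative choices, not part of the formal results.

```python
import numpy as np

def ma_risk(gamma, theta2, sig2n):
    # MA risk in terms of cumulative weights gamma_j, following (A.1.11):
    # sum_j [(1 - gamma_j)^2 theta_j^2 + (sigma^2/n) gamma_j^2].
    return np.sum((1 - gamma) ** 2 * theta2 + sig2n * gamma ** 2)

n, alpha1 = 1000, 1.0
theta2 = np.arange(1, n + 1) ** (-2 * alpha1)   # squared transformed coefficients
sig2n = 1.0 / n                                  # sigma^2 / n with sigma^2 = 1

gamma_opt = theta2 / (theta2 + sig2n)            # continuous optimum (A.1.12)
gamma_opt[0] = 1.0                               # convention gamma_1 = 1
for N in [1, 2, 5, 20]:                          # N = 1 corresponds to MS
    # Nearest grid point in {0, 1/N, ..., 1}; since gamma_opt is nonincreasing,
    # the rounded sequence is a valid cumulative-weight vector.
    gamma_N = np.round(gamma_opt * N) / N
    print(N, ma_risk(gamma_N, theta2, sig2n) / ma_risk(gamma_opt, theta2, sig2n))
```

In this slowly decaying case, the printed risk ratios exceed one noticeably for small \(N\) and approach one only as \(N\) grows, and the ratio for \(N=2\) is already far below that for \(N=1\), consistent with the messages of Propositions 1 and 2.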
When \(\theta_{l}\) decays slowly and \(M_{n}\) is large, the optimal model size \(m^{*}|\mathcal{M}_{s}\) is not very small relative to the sample size \(n\). In this case, the MS uncertainty is relatively high, and MA under the discrete weight set still reduces the risk of MS substantially, although it does not provide the full potential of MA. For example, when \(\theta_{j}=j^{-\alpha_{1}},\alpha_{1}>1/2\), Condition 1 is satisfied for any \(k>1\) and \(\delta=k^{-\alpha_{1}}\). Then, for a large candidate model set with \(M_{n}\geq m_{n}^{*}\), the condition for improving over MS is

\[N>\frac{1+\delta^{2}}{2\delta^{2}}=\frac{1+k^{2\alpha_{1}}}{2}.\]

Due to the arbitrariness of \(k\), it suffices to require \(N\geq 2\).

### Restriction of the candidate model set

Directly minimizing the MMA criterion over the continuous weight set \(\mathcal{W}_{|\mathcal{M}|}\) was considered by Wan et al. (2010). But they imposed the additional restriction (2.9) on \(\mathcal{M}\). As will be seen, (2.9) is a rather strong condition that can lead to the exclusion of some important models. In this subsection, we continue to focus on a nested framework with successive candidates \(\mathcal{M}_{s}=\{1,2,\ldots,M_{n}\}\).

**Example 1** (Polynomially decaying coefficients).: _Consider \(\theta_{j}=j^{-\alpha_{1}},\alpha_{1}>1/2\), and assume \(M_{n}=o(p_{n})\). Condition (2.9) is equivalent to the following restriction on the rate of increase of the number of candidate models in \(\mathcal{M}_{s}\):_

\[M_{n}=\left\{\begin{array}{ll}o(n^{\frac{1}{2\alpha_{1}+1}})&\quad 1/2<\alpha_{1}<1,\\ o(n^{\frac{1}{4\alpha_{1}-1}})&\quad\alpha_{1}\geq 1.\end{array}\right. \tag{3.1}\]

_Therefore we need \(M_{n}=c_{n}(m_{n}^{*})^{2\alpha_{1}/(2\alpha_{1}+1)}\) with \(c_{n}\to 0\) as \(n\to\infty\), where \(m_{n}^{*}\sim(n/\sigma^{2})^{1/(2\alpha_{1})}\). In this case, the optimal rate of convergence of MS is \(R_{n}\left(m_{n}^{*},\mathbf{f}\right)\asymp n^{-1+1/(2\alpha_{1})}\). But the rate of convergence of MA based on \(\mathcal{M}_{s}\) is \(M_{n}^{-2\alpha_{1}+1}\), which converges no faster than \(n^{-(2\alpha_{1}-1)/(2\alpha_{1}+1)}\) and thus much slower than MS. For a specific example, if \(\alpha_{1}=1\), the MMA converges more slowly than \(n^{-1/3}\), in contrast to the rate \(n^{-1/2}\) for MS._

**Example 2** (Exponentially decaying coefficients).: _Now the transformed coefficients decay fast: \(\theta_{j}=\exp(-cj^{\alpha_{2}})\), \(\alpha_{2}>0\). A sufficient condition for (2.9) is \(M_{n}<\left(1/2\right)^{1/\alpha_{2}}m_{n}^{*}\), where \(m_{n}^{*}\sim[\log(n/\sigma^{2})^{1/(2c)}]^{1/\alpha_{2}}\). In this case, MA based on \(\mathcal{M}_{s}\) converges at the rate of \(M_{n}^{1-\alpha_{2}}/n^{1/2}\), which is still slower than the optimal MS rate \(m_{n}^{*}/n\)._

In both representative examples, an undesired consequence of reducing the candidate model set to \(\mathcal{M}_{s}\) with (2.9) is that the optimal single model \(m_{n}^{*}\) is excluded, and the resulting MA estimators converge more slowly than MS. In more general cases of coefficients, the implications of the condition (2.9) on \(M_{n}\) and \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\) are summarized in the following proposition.

**Proposition 3**.: _Suppose Assumptions 1-2 are satisfied. Under Condition 1, a necessary condition of (2.9) is \(M_{n}=o\left(m_{n}^{*}\right)\). In such a case, we have_

\[R_{n}(m_{n}^{*},\mathbf{f})=o[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})]. \tag{3.2}\]

_Under Condition 2, for (2.9) to hold, it is also necessary to require \(M_{n}\leq\lfloor Cm_{n}^{*}\rfloor\) with a constant \(0<C<1\).
In this case, (2.9) still leads to the relation (3.2)._

Proposition 3 confirms that the widely used condition (2.9) excludes even the optimal single model \(m_{n}^{*}\). When \(\theta_{l}\) decays slowly, MA based on the restrictive candidate model set has a significant disadvantage compared to MS in terms of rate of convergence, which runs against the motivation of MA. When \(\theta_{l}\) decays fast, MS uncertainty is relatively low, and MA generally does not have any real benefit compared to MS. The restricted MMA with (2.9), however, is actually worse. Comparing the two restricted-AOP theories given by Hansen (2007) and Wan et al. (2010), it seems that MA with the discrete weight set is safer since it always leads to the optimal MS rate when \(M_{n}\gtrsim m_{n}^{*}\), while MA based on the restrictive candidate set does not. Nevertheless, both theories have the same drawback of not achieving the MA's full potential. Therefore, the first question we raised remains largely unanswered. The next section sheds some new light on this matter. Both non-asymptotic and asymptotic results will be given.

**Remark 1**.: _Note that a recent work of Zhang (2021) proved the MMA's AOP under a milder and more interpretable assumption_

\[\frac{|\mathcal{M}|^{2}}{nR_{n}\left(\mathbf{w}^{*}|\mathcal{M},\mathbf{f}\right)}\to 0 \tag{3.3}\]

_than (2.9). Following the proof in Proposition 3, we can see that (3.3) still fails to include \(m_{n}^{*}\) and thus suffers the same consequence (3.2)._

## 4 Main results

### A risk bound

We start with non-asymptotic results. Recall that \(Q_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) is any one of two quantities: \(\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) and \(\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\). Given a general nested candidate model set \(\mathcal{M}=\{k_{1},k_{2},\ldots,k_{M_{n}}\}\), define

\[\psi(\mathcal{M})=\left(1+\sum_{j=1}^{M_{n}-1}\frac{k_{j+1}-k_{j}}{k_{j}}\right)(1+\log M_{n})^{2}. \tag{4.1}\]

Then we have the following upper bound on \(Q_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\).

**Theorem 1**.: _Suppose that Assumption 1 holds. Then we have_

\[\begin{split} Q_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})&+\frac{C\sigma^{2}}{n}\psi(\mathcal{M})+\frac{C\sigma}{\sqrt{n}}[\psi(\mathcal{M})]^{\frac{1}{2}}[R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}}\\ &+C\rho\left(n,\mathcal{M},\mathbf{f},\widehat{\sigma}^{2},\sigma^{2}\right),\end{split} \tag{4.2}\]

_where \(C\) is some universal constant, and \(\rho\left(n,\mathcal{M},\mathbf{f},\widehat{\sigma}^{2},\sigma^{2}\right)\) is the estimation error related to \(\widehat{\sigma}^{2}\), which is defined by_

\[\rho\left(n,\mathcal{M},\mathbf{f},\widehat{\sigma}^{2},\sigma^{2}\right)=\frac{k_{M_{n}}}{n\sigma^{2}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}+\left[\frac{k_{M_{n}}}{n\sigma^{2}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}\right]^{\frac{1}{2}}[R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}}.\]

The risk bound (4.2) is valid for any sample size and does not rely on Assumption 2 that the transformed coefficients are ordered. Note that the risk of the MMA estimator \(Q_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) is bounded by the infeasible optimal MA risk \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\) plus three additional terms. The first two terms are related to the candidate model set \(\mathcal{M}\). The third term \(\rho\left(n,\mathcal{M},\mathbf{f},\widehat{\sigma}^{2},\sigma^{2}\right)\) is mainly about the estimation error of \(\widehat{\sigma}^{2}\). As the risk bound suggests, the variance estimation may also have a significant effect on the performance of MMA. When a poor estimator of \(\sigma^{2}\) with non-converging squared risk is considered, the upper bound in (4.2) becomes non-converging if the largest size \(k_{M_{n}}\) is of order \(n\). In contrast, when \(\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}\) converges at the parametric rate \(1/n\), the term \(\rho\left(n,\mathcal{M},\mathbf{f},\widehat{\sigma}^{2},\sigma^{2}\right)\) does not affect the rate of convergence of the upper bound.
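To get a feel for the size of the penalty term (4.1) in the leading case, the short sketch below (ours; it assumes nothing beyond the definition of \(\psi\)) evaluates \(\psi\) for the successive set \(\{1,\ldots,n\}\). Since \(\sum_{j=1}^{n-1}1/j\approx\log n\), the penalty grows like \((\log n)^{3}\), which is the quantity that the conditions of Theorem 2 below trade off against \(nR_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\).

```python
import numpy as np

def psi(ks):
    # psi(M) from (4.1) for a nested candidate set with sizes k_1 < ... < k_{M_n}.
    ks = np.asarray(ks, dtype=float)
    spread = 1.0 + np.sum((ks[1:] - ks[:-1]) / ks[:-1])
    return spread * (1.0 + np.log(len(ks))) ** 2

n = 10_000
print(psi(np.arange(1, n + 1)))   # successive set {1, ..., n}
print((1.0 + np.log(n)) ** 3)     # the (log n)^3 benchmark: same order
```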
### Estimation of \(\sigma^{2}\)

Here we present two variance estimators that prove useful in different situations. Consider a model-based estimator from the least squares theory

\[\widehat{\sigma}^{2}_{m_{n}}=\frac{1}{n-m_{n}}\|\mathbf{y}-\widehat{\mathbf{f}}_{m_{n}}\|^{2}, \tag{4.3}\]

where \(\widehat{\mathbf{f}}_{m_{n}}=\mathbf{X}_{m_{n}}(\mathbf{X}_{m_{n}}^{\top}\mathbf{X}_{m_{n}})^{-1}\mathbf{X}_{m_{n}}^{\top}\mathbf{y}\) is the least squares estimator involving the first \(m_{n}\) regressors. With an elementary calculation, we have

\[\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\sigma^{2})^{2}\lesssim\frac{1}{n-m_{n}}\vee\frac{n\|\mathbf{\theta}_{-m_{n}}\|^{2}}{(n-m_{n})^{2}}\vee\frac{n^{2}\|\mathbf{\theta}_{-m_{n}}\|^{4}}{(n-m_{n})^{2}}\,, \tag{4.4}\]

where \(\mathbf{\theta}_{-m_{n}}=(\theta_{m_{n}+1},\ldots,\theta_{p_{n}})^{\top}\). When \(n-p_{n}\asymp n\), the variance estimator \(\widehat{\sigma}_{p_{n}}^{2}\) based on the largest candidate model converges at the parametric rate \(1/n\). When \(p_{n}=n\), the estimation error of \(\widehat{\sigma}_{m_{n}}^{2}\) with \(m_{n}=\lfloor kn\rfloor\) (\(0<k<1\)) is not slower than \((1/n)\vee\|\mathbf{\theta}_{-m_{n}}\|^{4}\). As will be seen in the next subsection, \(\widehat{\sigma}_{m_{n}}^{2}\) may be sufficient for the AOP of MMA (e.g., in the examples of polynomially and exponentially decaying coefficients), even if it does not converge at the parametric rate in some cases. Moreover, when \(p_{n}=n\), the first difference variance estimator proposed by Rice (1984) can also be used. For the one-dimensional nonparametric regression \(y_{i}=f(u_{i})+\epsilon_{i}\), where the model (2.1) is a linear approximation for \(f\), consider

\[\widehat{\sigma}_{D}^{2}=\frac{1}{2(n-1)}\sum_{i=2}^{n}\left[y_{(i)}-y_{(i-1)}\right]^{2},\]

where \(y_{(i)}\) denotes the observed response at the \(i\)-th smallest \(u\) value. Under a mild smoothness assumption on \(f\), \(\widehat{\sigma}_{D}^{2}\) has the property \(\mathbb{E}(\widehat{\sigma}_{D}^{2}-\sigma^{2})^{2}\sim cn^{-1}\text{Var}(\epsilon^{2})\). This estimator also extends to multidimensional design points (Munk et al., 2005).
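Both estimators are straightforward to implement. A minimal sketch follows; the function names are ours, and the toy regression function in the usage example is an arbitrary illustrative choice.

```python
import numpy as np

def sigma2_model(y, X, m):
    # Model-based estimator (4.3): residual variance of the least-squares fit
    # using the first m regressors.
    Xm = X[:, :m]
    resid = y - Xm @ np.linalg.lstsq(Xm, y, rcond=None)[0]
    return resid @ resid / (len(y) - m)

def sigma2_rice(y, u):
    # Rice's (1984) first-difference estimator, with responses ordered by u.
    d = np.diff(y[np.argsort(u)])
    return d @ d / (2 * (len(y) - 1))

# Usage: y_i = f(u_i) + eps_i with Var(eps) = 0.25.
rng = np.random.default_rng(0)
u = rng.uniform(size=500)
y = np.sin(2 * np.pi * u) + rng.normal(scale=0.5, size=500)
print(sigma2_rice(y, u))   # close to 0.25
```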
### AOP

With a suitable estimator \(\widehat{\sigma}^{2}\), the AOP of MMA is readily available as shown in the following theorem.

**Theorem 2**.: _Suppose Assumption 1 holds. As \(n\to\infty\), if_

\[k_{M_{n}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}=o\left[nR_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\right] \tag{4.5}\]

_and_

\[\psi(\mathcal{M})=o\left[nR_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\right], \tag{4.6}\]

_then \(\widehat{\mathbf{w}}|\mathcal{M}\) is AOP in the sense that (2.7) holds for the continuous weight set \(\mathcal{W}_{|\mathcal{M}|}\)._

_In particular, using the estimator (4.3) with \(m_{n}=\lfloor kn\rfloor\wedge p_{n}\) (\(0<k<1\)), if Assumptions 1-2,_

\[(1/n)\vee\|\mathbf{\theta}_{-m_{n}}\|^{4}=o\left(\frac{m_{n}^{*}}{n}\right), \tag{4.7}\]

_and_

\[(\log p_{n})^{3}=o\left(m_{n}^{*}\right) \tag{4.8}\]

_hold, then \(\widehat{\mathbf{w}}|\mathcal{M}_{a}\) achieves the full AOP in terms of (2.10)._

Theorem 2 establishes the MMA's AOP for the general nested model set \(\mathcal{M}\) and weight set \(\mathcal{W}_{|\mathcal{M}|}\) with variance estimation. Compared with the restricted-AOP theory in Hansen (2007), our result does not restrict the model weights to the discrete set \(\mathcal{W}_{|\mathcal{M}|}(N)\). As demonstrated in Proposition 1, relaxing the model weights from \(\mathcal{W}_{|\mathcal{M}|}(N)\) to \(\mathcal{W}_{|\mathcal{M}|}\) improves the optimal MA risk, substantially so in various situations. Second, the condition (4.6) in Theorem 2 significantly improves the condition (2.9) in Wan et al. (2010) by allowing more helpful candidate models to be combined. In fact, Theorem 2 permits the use of the largest candidate model set \(\mathcal{M}_{a}\), which answers question Q1 raised in Section 2.2: MMA can achieve the full AOP by combining all models in \(\mathcal{M}_{a}\) without additional restrictions on the weight set. The conditions (4.7)-(4.8) are two specific forms of (4.5)-(4.6) when \(\widehat{\sigma}^{2}=\widehat{\sigma}^{2}_{m_{n}}\) and \(\mathcal{M}=\mathcal{M}_{a}\). Note that a prerequisite for (4.7)-(4.8) is

\[m_{n}^{*}\to\infty, \tag{4.9}\]

which is required in Hansen (2007) for MA and in Li (1987) for MS. This condition means that there are no candidate models with fixed dimensions for which the approximation error is zero. When \(p_{n}=n\) and \(m_{n}=\lfloor kn\rfloor\), \(0<k<1\), the condition (4.7) is satisfied in Examples 1-2, and (4.8) may restrict the situations in which the largest candidate model set \(\mathcal{M}_{a}\) can be applied, as seen below.

**Example 1** (continued).: _The transformed coefficients are \(\theta_{j}=j^{-\alpha_{1}}\), \(\alpha_{1}>1/2\). In this case, we have \(m_{n}^{*}\asymp n^{1/(2\alpha_{1})}\). When \(m_{n}=\lfloor kn\rfloor\), \(0<k<1\), we obtain \(\|\boldsymbol{\theta}_{-m_{n}}\|^{4}=O(1/n^{4\alpha_{1}-2})\) and \([(1/n)\vee(1/n^{4\alpha_{1}-2})]=o(m_{n}^{*}/n)\), which implies (4.7). And note that \((\log n)^{3}/n^{1/(2\alpha_{1})}\to 0\). Thus the condition (4.8) is also satisfied._

**Example 2** (continued).: _When the coefficients decay as \(\theta_{j}=\exp(-j^{\alpha_{2}})\), \(\alpha_{2}>0\), we have \(m_{n}^{*}\asymp(\log n)^{1/\alpha_{2}}\). In this case, the condition (4.7) is satisfied by observing \(\|\boldsymbol{\theta}_{-m_{n}}\|^{4}=O[\exp(-2m_{n}^{\alpha_{2}})]=o(1/n)\) when \(m_{n}=\lfloor kn\rfloor\), \(0<k<1\).
Furthermore, if \(0<\alpha_{2}<1/3\), the condition (4.8) is also satisfied due to \((\log n)^{3}/(\log n)^{1/\alpha_{2}}\to 0\)._

Based on the above analysis, we observe that with the optimal single model \(m_{n}^{*}\) being included in \(\mathcal{M}_{a}\) in both examples, the full AOP is achieved for the MMA estimator based on \(\mathcal{M}_{a}\) when the coefficients do not decay too fast, which considerably strengthens the restricted-AOP theories established by Hansen (2007) and Wan et al. (2010).

## 5 Construction of candidate model set

This section proposes two types of reduced candidate model sets, on which the MMA estimators achieve the full AOP on broader parameter regions than the estimator based on the largest candidate set \(\mathcal{M}_{a}\).

### Candidate model set with grouped regressors

Instead of combining the candidate models with successively increasing sizes, we consider a smaller set \(\mathcal{M}_{g}=\{k_{1},k_{2},\ldots,k_{M_{n}}\}\), where the candidate model sizes grow group-wise. Define \(k_{0}=0\).

**Theorem 3**.: _Let \(k_{M_{n}}=p_{n}\) and_

\[\max_{1\leq j\leq M_{n}-1}\frac{k_{j+1}-k_{j}}{k_{j}-k_{j-1}}\leq 1+\zeta_{n},\]

_where \(\zeta_{n}\geq 0\). Suppose Assumptions 1-2, \(k_{1}=o(m_{n}^{*})\), and \(\zeta_{n}=o(1)\) hold, and the conditions (4.5)-(4.6) are satisfied for \(\mathcal{M}_{g}\), then we have_

\[Q_{n}\left(\widehat{\mathbf{w}}|\mathcal{M}_{g},\mathbf{f}\right)=[1+o(1)]R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}\right).\]

Theorem 3 indicates that rather than combining all nested models, the MMA estimator based on \(\mathcal{M}_{g}\) still achieves the optimal risk \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\) asymptotically when the sizes of the candidate models are appropriately selected. Similar strategies of constructing group-wise estimators have been widely used in various nonparametric estimation problems (Cavalier and Tsybakov, 2001, 2002; Rigollet, 2006); see Section 6 of Dalalyan and Salmon (2012) for an application in the context of model aggregation. The conditions in Theorem 3 are quite mild. First, notice that the condition (4.5) is satisfied for \(\mathcal{M}_{g}\) with the variance estimators given in Section 4.2. Next, we provide some specific choices of \(k_{1}\) and \(\zeta_{n}\) that satisfy the remaining assumptions in Theorem 3. To keep in line with the analysis in Section 4.3, we focus here on the case \(p_{n}=n\).

#### 5.1.1 Equal size groups

Consider \(\mathcal{M}_{g1}\) with \(\zeta_{n}=0\), \(k_{1}=\lceil(\log n)^{t}\rceil\), \(k_{m}=mk_{1}\) for \(m=2,\ldots,M_{n}-1\), and \(k_{M_{n}}=p_{n}\), where \(0<t<3\) and \(M_{n}=\arg\min_{m\in\mathbb{N}}k_{m}\geq p_{n}\). We have

\[\psi(\mathcal{M}_{g1})\asymp(\log M_{n})^{3}\asymp(\log n-t\log\log n)^{3}. \tag{5.1}\]

Now we verify the conditions \(k_{1}=o(m_{n}^{*})\) and (4.6) in the following examples.

**Example 1** (continued).: _Note that \(k_{1}/n^{1/(2\alpha_{1})}\to 0\) and \(\psi(\mathcal{M}_{g1})/n^{1/(2\alpha_{1})}\to 0\). Thus the MMA estimator based on \(\mathcal{M}_{g1}\) still attains the full AOP in the case of polynomially decaying coefficients._

**Example 2** (continued).: _Since \(k_{1}/(\log n)^{1/\alpha_{2}}\to 0\) and \(\psi(\mathcal{M}_{g1})/(\log n)^{1/\alpha_{2}}\to 0\) when \(0<\alpha_{2}\leq 1/3\), \(\mathcal{M}_{g1}\) improves on \(\mathcal{M}_{a}\) a little by achieving the full AOP of MMA when \(\theta_{j}=\exp(-j^{\alpha_{2}})\), \(0<\alpha_{2}\leq 1/3\)._
#### 5.1.2 Increasing size groups

In this subsection, we construct the groups in the same spirit as the weakly geometrically increasing blocks in Cavalier and Tsybakov (2001). For two constants \(t_{1}>0\) and \(t_{2}>0\), define \(\zeta_{n}=t_{1}/(\log n)^{t_{2}}\). Consider \(\mathcal{M}_{g2}\) with \(k_{1}=\lceil\zeta_{n}^{-1}\rceil\), \(k_{m}=k_{m-1}+\lfloor k_{1}(1+\zeta_{n})^{m-1}\rfloor\) for \(m=2,\ldots,M_{n}-1\), and \(k_{M_{n}}=p_{n}\), where

\[M_{n}=\arg\min_{m\in\mathbb{N}}\left(k_{1}+\sum_{j=2}^{m}\lfloor k_{1}(1+\zeta_{n})^{j-1}\rfloor\right)\geq p_{n}.\]

When \(p_{n}=n\), the result in Cavalier and Tsybakov (2001) shows that \(M_{n}\lesssim(\log n)^{t_{2}+1}\). Thus we have

\[\psi(\mathcal{M}_{g2})\asymp\zeta_{n}M_{n}(\log M_{n})^{2}\lesssim(\log n)(\log\log n)^{2}.\]

**Example 1** (continued).: _Since \(k_{1}/n^{1/(2\alpha_{1})}\to 0\) and \(\psi(\mathcal{M}_{g2})/n^{1/(2\alpha_{1})}\to 0\), the MMA estimator based on \(\mathcal{M}_{g2}\) attains the same full-AOP property as those based on \(\mathcal{M}_{a}\) and \(\mathcal{M}_{g1}\)._

**Example 2** (continued).: _Set \(t_{2}=1\). When \(\theta_{j}=\exp(-j^{\alpha_{2}})\), \(0<\alpha_{2}<1\), and \(m_{n}^{*}\asymp(\log n)^{1/\alpha_{2}}\), note that \(k_{1}/m_{n}^{*}\to 0\) and \(\psi(\mathcal{M}_{g2})/m_{n}^{*}\to 0\). Thus the MMA estimator with \(\mathcal{M}_{g2}\) achieves the full AOP on a broader parameter region compared to those based on \(\mathcal{M}_{a}\) and \(\mathcal{M}_{g1}\)._
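The two grouped constructions are easy to generate in practice. The sketch below is ours, with the illustrative choices \(t=1\) and \(t_{1}=t_{2}=1\) and the convention \(p_{n}=n\); it only encodes the size sequences defined above.

```python
import numpy as np

def grouped_equal(p, t=1.0):
    # M_g1: equal-size groups with k_1 = ceil((log n)^t) and k_m = m * k_1,
    # with the last size capped at p (here p = n).
    k1 = int(np.ceil(np.log(p) ** t))
    return list(range(k1, p, k1)) + [p]

def grouped_geometric(p, t1=1.0, t2=1.0):
    # M_g2: weakly geometrically increasing groups, zeta_n = t1 / (log n)^t2,
    # k_1 = ceil(1/zeta_n), k_m = k_{m-1} + floor(k_1 * (1 + zeta_n)^(m-1)).
    zeta = t1 / np.log(p) ** t2
    k1 = int(np.ceil(1.0 / zeta))
    sizes, step = [k1], float(k1)
    while sizes[-1] < p:
        step *= 1.0 + zeta
        sizes.append(min(sizes[-1] + int(step), p))
    return sizes

p = 10_000
print(len(grouped_equal(p)), len(grouped_geometric(p)), p)
# Both sets contain far fewer models than the p models in M_a.
```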
### Candidate model set based on MS

Another approach is to combine a smaller number of candidate models with sizes centered on \(m_{n}^{*}\). Since \(m_{n}^{*}\) is unknown in practice, we estimate it by some MS method and then consider the candidate model set \(\widehat{\mathcal{M}}_{MS}=\widehat{\mathcal{M}}_{MS}(k_{l},k_{u})=\{\widehat{l}_{n},\ldots,\widehat{m}_{n},\ldots,\widehat{u}_{n}\}\), where \(\widehat{l}_{n}=1\vee\left\lfloor k_{l}^{-1}\widehat{m}_{n}\right\rfloor\), \(\widehat{u}_{n}=p_{n}\wedge\left\lfloor k_{u}\widehat{m}_{n}\right\rfloor\), \(k_{l}>1\), and \(k_{u}>1\). To obtain asymptotic properties of \(\widehat{\mathcal{M}}_{MS}\), we need another assumption on the transformed coefficients, which is naturally satisfied for both polynomially and exponentially decaying coefficients.

**Assumption 3**.: _The transformed coefficients satisfy \(\lim_{k\to\infty}\left|\theta_{\lfloor kl\rfloor}/\theta_{l}\right|\to 0\) for any \(l\in\mathbb{N}\)._

Let \(c_{1}\) and \(c_{2}\) be two constants with \(0<c_{1}<1<c_{2}\). Let \(F_{n}\) denote the event \(\left\lfloor c_{1}m_{n}^{*}\right\rfloor\leq\widehat{m}_{n}\leq\left\lfloor c_{2}m_{n}^{*}\right\rfloor\) and \(\bar{F}_{n}\) be its complement.

#### 5.2.1 Increasing \(k_{l}\) and \(k_{u}\)

Consider a candidate model set \(\widehat{\mathcal{M}}_{MS1}=\widehat{\mathcal{M}}_{MS}\) with \(k_{l}\to\infty\) and \(k_{u}\to\infty\).

**Theorem 4**.: _Suppose that Assumptions 1-3 hold. If the condition (4.5) is satisfied for \(\mathcal{M}_{a}\),_

\[\mathbb{E}\psi(\widehat{\mathcal{M}}_{MS1})=o(m_{n}^{*}), \tag{5.2}\]

_and the event \(F_{n}\) satisfies_

\[\mathbb{P}(\bar{F}_{n})=o\left(\frac{m_{n}^{*}}{n}\right), \tag{5.3}\]

_then the equation (2.11) holds for \(\widehat{\mathcal{M}}_{MS1}\)._

Theorem 4 states that MMA achieves the full AOP in terms of (2.11) with the estimated candidate model set \(\widehat{\mathcal{M}}_{MS1}\) under certain regularity conditions. Observe that the condition (5.2) is quite mild. Based on the definition (4.1), we have

\[\mathbb{E}\psi(\widehat{\mathcal{M}}_{MS1}) \asymp\mathbb{E}\log(k_{l}k_{u})\left\{\log\left[(k_{u}-k_{l}^{-1})\widehat{m}_{n}\right]\right\}^{2}\]
\[\lesssim\left(\log k_{l}+\log k_{u}\right)\left[\log(k_{u}-k_{l}^{-1})+\log m_{n}^{*}\right]^{2},\]

where the first inequality follows from Jensen's inequality, and the second inequality is due to

\[\mathbb{E}\widehat{m}_{n}=\mathbb{E}(\widehat{m}_{n}1_{F_{n}})+\mathbb{E}(\widehat{m}_{n}1_{\bar{F}_{n}})\lesssim c_{2}m_{n}^{*}+n\cdot\frac{m_{n}^{*}}{n}\lesssim m_{n}^{*}.\]

If we set \(k_{l}=k_{u}=\log n\), then a sufficient condition for (5.2) is \((\log\log n)[\log\log n+\log m_{n}^{*}]^{2}=o\left(m_{n}^{*}\right)\), which holds in Examples 1-2. Next we verify that the condition (5.3) is satisfied when Mallows' \(C_{p}\) MS criterion (Mallows, 1973) is adopted. Suppose \(\sigma^{2}\) is known; then from Kneip (1994), we obtain

\[\mathbb{P}\left(|R_{n}(\widehat{m}_{n},\mathbf{f})-R_{n}(m_{n}^{*},\mathbf{f})|>n^{-1}[x^{2}\lor x(m_{n}^{*})^{1/2}]\right)\leq C_{1}\exp(-C_{2}x)\quad\text{for}\,x\geq 0, \tag{5.4}\]

where \(C_{1}\) and \(C_{2}\) are two constants that depend only on \(\sigma^{2}\). Combining (5.4) with the fact \(\varpi_{n}\triangleq[R_{n}(c_{1}m_{n}^{*},\mathbf{f})-R_{n}(m_{n}^{*},\mathbf{f})]\wedge[R_{n}(c_{2}m_{n}^{*},\mathbf{f})-R_{n}(m_{n}^{*},\mathbf{f})]\gtrsim m_{n}^{*}/n\) under Conditions 1-2, we see

\[\mathbb{P}\left(\bar{F}_{n}\right)\leq\mathbb{P}\left(|R_{n}(\widehat{m}_{n},\mathbf{f})-R_{n}(m_{n}^{*},\mathbf{f})|>\varpi_{n}\right)\lesssim\exp\Big{[}-C(m_{n}^{*})^{\frac{1}{2}}\Big{]}, \tag{5.5}\]

where \(C\) is a fixed constant. To connect (5.5) with the condition (5.3), consider the following two examples.

**Example 1** (continued).: _When \(\theta_{j}=j^{-\alpha_{1}}\), \(\alpha_{1}>1/2\), and \(m_{n}^{*}\asymp n^{1/(2\alpha_{1})}\), we have \(\exp\left[-C(m_{n}^{*})^{1/2}\right]=o\left(m_{n}^{*}/n\right)\) for any fixed \(C\), which meets the condition (5.3)._

**Example 2** (continued).: _When \(\theta_{j}=\exp(-j^{\alpha_{2}})\), \(0<\alpha_{2}<1/2\), and \(m_{n}^{*}\asymp(\log n)^{1/\alpha_{2}}\), note that \(\exp\left[-C(m_{n}^{*})^{1/2}\right]=1/[n^{C(\log n)^{1/(2\alpha_{2})-1}}]=o(m_{n}^{*}/n)\) for any constant \(C\). It also verifies (5.3)._

The above analysis implies that the MMA estimator with \(\widehat{\mathcal{M}}_{MS1}\) retains the full AOP as that based on \(\mathcal{M}_{a}\) when the transformed coefficients \(\boldsymbol{\theta}\) decay slowly. It also expands the region for the full AOP of \(\mathcal{M}_{a}\) when the coefficients decay fast.
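In implementation, \(\widehat{m}_{n}\) can be obtained from Mallows' \(C_{p}\), and the set \(\widehat{\mathcal{M}}_{MS}(k_{l},k_{u})\) then follows directly. A sketch under the assumptions of this subsection (known \(\sigma^{2}\), nested models in the given ordering; function names ours):

```python
import numpy as np

def mallows_cp_size(y, X, sigma2):
    # \hat{m}_n: the minimizer of Mallows' C_p over the nested least-squares
    # fits of sizes m = 1, ..., p (assumes p <= n and full column rank).
    n, p = X.shape
    Q, _ = np.linalg.qr(X)               # first m columns span the m-th model
    rss = y @ y - np.cumsum((Q.T @ y) ** 2)
    cp = rss / n + 2.0 * sigma2 * np.arange(1, p + 1) / n
    return int(np.argmin(cp)) + 1

def ms_centered_set(m_hat, p, k_l, k_u):
    # \hat{M}_MS(k_l, k_u): sizes from 1 v floor(m_hat/k_l) to p ^ floor(k_u*m_hat).
    return list(range(max(1, int(m_hat / k_l)), min(p, int(k_u * m_hat)) + 1))

# For \hat{M}_MS1, the choice k_l = k_u = log n discussed above can be used:
# ms_centered_set(m_hat, p, np.log(n), np.log(n)).
```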
#### 5.2.2 Bounded \(k_{l}\) and \(k_{u}\)

Let \(\widehat{\mathcal{M}}_{MS2}=\widehat{\mathcal{M}}_{MS}\) with \(k_{l}\lor k_{u}\) being upper bounded by some positive constant \(C\).

**Theorem 5**.: _Suppose that Assumptions 1-2 hold. Under Condition 1, if there exists a constant \(0<C_{1}<1\) such that \(\mathbb{P}(F_{n})\geq C_{1}\), then we have_

\[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\widehat{\mathcal{M}}_{MS2},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\gtrsim R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}). \tag{5.6}\]

_Under Condition 2, if \(m_{n}^{*}\to\infty\), \(\mathbb{E}R_{n}(\widehat{m}_{n},\mathbf{f})/R_{n}(m_{n}^{*},\mathbf{f})\to 1\), and there exists a constant \(C_{2}\geq 1\) such that \(\widehat{u}_{n}-\widehat{l}_{n}\leq C_{2}\) almost surely, then we get (2.11) for \(\widehat{\mathcal{M}}_{MS2}\)._

This theorem states that when the coefficients decay slowly, such as in the case \(\theta_{j}=j^{-\alpha_{1}},\alpha_{1}>1/2\), the MMA estimator based on a restricted \(\widehat{\mathcal{M}}_{MS2}\) cannot achieve the full potential of MA. However, when the coefficients decay fast, reducing the number of candidate models around \(\widehat{m}_{n}\) to a constant level is beneficial for MMA. Indeed, Theorem 5 states that MMA based on \(\widehat{\mathcal{M}}_{MS2}\) with bounded \(\widehat{u}_{n}-\widehat{l}_{n}\) achieves the optimal MA risk when \(\theta_{j}=\exp(-j^{\alpha_{2}})\), \(0<\alpha_{2}<\infty\). Nevertheless, requiring \(k_{l}\) and \(k_{u}\) to increase to \(\infty\) is still necessary for the full AOP in the case of polynomially decaying coefficients.

Table 1 summarizes the available MMA strategies discussed in Sections 3-5. We emphasize that the parameter regions given in the last two columns are the known sufficient conditions for the full AOP of MMA. Whether these methods achieve the full AOP in larger regions remains open. More comparisons are available through simulations in the Appendix.

## 6 Minimax adaptivity

To the best of our knowledge, minimax properties have not been established for MMA, although some minimax results have been obtained for very different MA methods (see, e.g., Yang, 2001, 2004; Leung and Barron, 2006; Dalalyan and Salmon, 2012; Bellec, 2018). The purpose of this section is to fill this gap for MMA. For simplicity, in this section we assume \(p_{n}=n\) and \(\epsilon_{1},\ldots,\epsilon_{n}\) are i.i.d. \(N(0,\sigma^{2})\). We investigate the exact minimax adaptivity (defined in Definition 3) of the MMA estimator based on \(\mathcal{M}_{a}=\{1,\ldots,n\}\) when the transformed coefficient vector \(\mathbf{\theta}\) belongs to each of two types of classes. The first class is the ellipsoid

\[\Theta(\alpha,R)=\left\{\mathbf{\theta}\in\mathbb{R}^{n}:\sum_{j=1}^{n}j^{2\alpha}\theta_{j}^{2}\leq R\right\}, \tag{6.1}\]

where \(\alpha>0\) and \(R>0\). Let \(\mathcal{F}_{\Theta(\alpha,R)}=\{\mathbf{f}=\sum_{j=1}^{n}\theta_{j}\mathbf{\phi}_{j}:\mathbf{\theta}\in\Theta(\alpha,R)\}\) denote the class of regression mean vectors associated with \(\Theta(\alpha,R)\). The second is the hyperrectangle

\[\Theta^{H}(c,q)=\left\{\mathbf{\theta}\in\mathbb{R}^{n}:|\theta_{j}|\leq cj^{-q},j=1,\ldots,n\right\}, \tag{6.2}\]

where \(c>0\) and \(q>1/2\). And let \(\mathcal{F}_{\Theta^{H}(c,q)}\) be the corresponding mean vector class of \(\Theta^{H}(c,q)\).

**Theorem 6**.: _Suppose \(\widehat{\sigma}_{D}^{2}\) or \(\widehat{\sigma}_{m_{n}}^{2}\) with \(m_{n}=\lfloor kn\rfloor\) \((0<k<1)\) is adopted.
Then the MMA estimator \(\widehat{\mathbf{f}}_{\widehat{\mathbf{w}}|\mathcal{M}_{a}}\) is adaptive in the exact minimax sense on the family of the ellipsoids \(\mathbf{\mathcal{F}}=\left\{\mathcal{F}_{\Theta(\alpha,R)},\alpha>0,R>0\right\}\), and it is adaptive in the exact linear-combined minimax sense on the family of the hyperrectangles \(\mathbf{\mathcal{F}}^{H}=\left\{\mathcal{F}_{\Theta^{H}(c,q)},c>0,q>1/2\right\}\)._

This theorem answers question Q3: the MMA estimator is minimax optimal in the sense of Definition 3 even with the estimated \(\sigma^{2}\). The detailed definitions of the variance estimators \(\widehat{\sigma}_{D}^{2}\) and \(\widehat{\sigma}_{m_{n}}^{2}\) are given in Section 4.2.

\begin{table} \begin{tabular}{l l l l l l} \hline & Method & Candidate model set & Weight set & \(\theta_{j}=j^{-\alpha_{1}}\) & \(\theta_{j}=\exp\left(-j^{\alpha_{2}}\right)\) \\ \hline Restricted AOP & WR & \(\mathcal{M}_{a}\) & \(\mathcal{W}_{|\mathcal{M}_{a}|}(N)\) with fixed \(N\geq 1\) & \(\emptyset\) & \((0,+\infty)\) \\ & MR & \(\mathcal{M}_{s}\) with (2.9) & \(\mathcal{W}_{|\mathcal{M}_{s}|}\) & \(\emptyset\) & \(\emptyset\) \\ Full AOP & M-ALL & \(\mathcal{M}_{a}\) & \(\mathcal{W}_{|\mathcal{M}_{a}|}\) & \((1/2,+\infty)\) & \((0,1/3)\) \\ & M-G1 & \(\mathcal{M}_{g1}\) & \(\mathcal{W}_{|\mathcal{M}_{g1}|}\) & \((1/2,+\infty)\) & \((0,1/3]\) \\ & M-G2 & \(\mathcal{M}_{g2}\) & \(\mathcal{W}_{|\mathcal{M}_{g2}|}\) & \((1/2,+\infty)\) & \((0,1)\) \\ & M-MS1 & \(\widehat{\mathcal{M}}_{MS1}\) & \(\mathcal{W}_{|\widehat{\mathcal{M}}_{MS1}|}\) & \((1/2,+\infty)\) & \((0,1/2)\) \\ & M-MS2 & \(\widehat{\mathcal{M}}_{MS2}\) & \(\mathcal{W}_{|\widehat{\mathcal{M}}_{MS2}|}\) & \(\emptyset\) & \((0,+\infty)\) \\ \hline \end{tabular} \end{table} Table 1: MA methods with different weight set or candidate model set restrictions. The last two columns summarize the ranges of \(\alpha_{1}\) and \(\alpha_{2}\) on which MMA is shown to achieve the full AOP in two representative examples respectively.

Note that \(\widehat{\mathbf{f}}_{\widehat{\mathbf{w}}|\mathcal{M}_{a}}\) is a linear combination of candidate estimators in \(\mathcal{M}_{a}\); thus, \(\widehat{\mathbf{f}}_{\widehat{\mathbf{w}}|\mathcal{M}_{a}}\) is also adaptive in the exact linear-combined minimax sense on the family of the ellipsoids. However, based on Theorem 5 of Donoho et al. (1990), we deduce that the MMA estimator \(\widehat{\mathbf{f}}_{\widehat{\mathbf{w}}|\mathcal{M}_{a}}\) is not adaptive in the exact minimax sense on the family of the hyperrectangles due to \(R_{L}[\mathcal{F}_{\Theta^{H}(c,q)}]/R_{M}[\mathcal{F}_{\Theta^{H}(c,q)}]\rightarrow\rho,1<\rho<\infty\). But it is still seen that \(\widehat{\mathbf{f}}_{\widehat{\mathbf{w}}|\mathcal{M}_{a}}\) achieves minimax-rate optimality among all estimators.

## 7 Simulation studies

Although the discrete weight set restriction (2.8) and the candidate model set restriction (2.9) have been commonly used to develop the theoretical properties of MMA, they have rarely been examined numerically. This section examines the MMA estimators with these two restrictions relative to the unrestricted MMA. The data is simulated from the linear regression model (2.1), where \(p_{n}=\lfloor 2n/3\rfloor\), \(x_{1i}=1\), the remaining \(x_{ji}\) are independently generated from \(N(0,1)\), and the random error terms \(\epsilon_{i}\) are i.i.d. from \(N(0,\sigma^{2})\) and are independent of the \(x_{ji}\)'s. We consider two cases of the regression coefficients:

* _Case 1_ (Polynomially decaying coefficients). Here, \(\beta_{j}=j^{-\alpha_{1}}\) and \(\alpha_{1}\) is varied from \(0.5\) to \(1.5\).
* _Case 2_ (Exponentially decaying coefficients). Here, \(\beta_{j}=\exp(-j^{\alpha_{2}})\) and \(\alpha_{2}\) is varied from \(0.25\) to \(1.25\).

The signal-to-noise ratio, which is defined by \(\sum_{j=2}^{p_{n}}\beta_{j}^{2}/\sigma^{2}\), is set to be one via the parameter \(\sigma^{2}\). And the sample size \(n\) increases from \(30\) to \(1000\). The candidate models used to implement MA are nested and estimated by least squares. To highlight the issue of the weight/candidate model restriction, we assume that \(\sigma^{2}\) is known for all methods. Let \(\mathbf{f}=(f_{1},\ldots,f_{n})^{\top}\) denote the mean vector of the true regression function. The accuracy of an estimation procedure is evaluated in terms of the squared \(\ell_{2}\) loss \(n^{-1}\|\mathbf{f}-\widehat{\mathbf{f}}\|^{2}\), where \(\widehat{\mathbf{f}}=(\widehat{f}_{1},\ldots,\widehat{f}_{n})^{\top}\) is the estimated mean vector. We replicate the data generation process \(R=1000\) times to approximate the risks of the competing methods.
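The following compact sketch (ours) mirrors this data-generating process for one replication and computes the unrestricted MMA fit by minimizing the criterion (2.4) over the weight simplex; we use scipy's SLSQP solver purely for illustration, and any quadratic-programming routine would do.

```python
import numpy as np
from scipy.optimize import minimize

def nested_fits(y, X, sizes):
    # Column j holds the least-squares fit based on the first sizes[j] regressors.
    return np.column_stack(
        [X[:, :m] @ np.linalg.lstsq(X[:, :m], y, rcond=None)[0] for m in sizes])

def mma_weights(y, F, sizes, sigma2):
    # Minimize C_n(w) = ||y - Fw||^2/n + 2*sigma2*k'w/n over the weight simplex.
    n, k = len(y), np.asarray(sizes, dtype=float)
    crit = lambda w: (y - F @ w) @ (y - F @ w) / n + 2.0 * sigma2 * (k @ w) / n
    res = minimize(crit, np.full(len(sizes), 1.0 / len(sizes)), method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(sizes),
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

rng = np.random.default_rng(1)
n = 150
p = 2 * n // 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.arange(1, p + 1) ** -1.0            # Case 1 with alpha_1 = 1
sigma2 = float(np.sum(beta[1:] ** 2))         # signal-to-noise ratio of one
f = X @ beta
y = f + rng.normal(scale=np.sqrt(sigma2), size=n)

sizes = list(range(1, p + 1))                 # M_a with continuous weights
F = nested_fits(y, X, sizes)
w = mma_weights(y, F, sizes, sigma2)
print(np.mean((f - F @ w) ** 2))              # squared l2 loss of the MMA fit
```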
The restricted-AOP MMA estimators considered are WR with \(N=2\) (WR1), WR with \(N=5\) (WR2), MR with \(M_{n}=2\vee\lfloor(m_{n}^{*})^{1/2}\rfloor\) (MR1), and MR with \(M_{n}=2\vee\lfloor m_{n}^{*}/2\rfloor\) (MR2). Detailed definitions of these methods are given in Section 3 and Table 1. In each replication, we normalize the squared \(\ell_{2}\) loss of these four methods by dividing by that of the MMA estimator based on \(\mathcal{M}_{a}\) and \(\mathcal{W}_{p_{n}}\) (representing a full-AOP MMA method).

From Table 2, the relative risks of the methods WR1 and WR2 are significantly larger than 1 in Case 1, which implies that using the discrete weight sets increases the risk of the full-AOP MMA by a sizable margin. This result is consistent with Proposition 1. In Case 2, however, when \(\alpha_{2}=1.25\) and \(n=1000\), the relative risks of WR1 and WR2 are 0.898 (0.030) and 1.011 (0.023), respectively, which shows that WR1 performs better than, and WR2 comparably to, the MMA based on \(\mathcal{W}_{p_{n}}\). This phenomenon is not surprising. Although Proposition 1 states that MA with the discrete weight restriction has an asymptotically equivalent oracle risk to that under the continuous weight set in Case 2, the latter actually pays a higher price to pursue the oracle MA risk when \(n\) is finite, and the trade-off favors simplicity in this special case.

We find that the MR1 and MR2 methods mostly have much larger relative risks than the WR methods in both cases. Moreover, their relative risks grow further as the sample size increases from 30 to 1000. These findings support our theoretical understandings in Section 3.2. Another interesting observation concerns the result when \(\alpha_{2}=0.25\) in Case 2. Although the data is generated from a true regression model with exponentially decaying coefficients, this setting is more like a polynomially decaying case in the finite sample situation. Indeed, when \(\alpha_{1}=0.75\) and \(n=1000\), we have \(m_{n}^{*}\approx 75\), while in Case 2 with \(\alpha_{2}=0.25\) and \(n=1000\), \(m_{n}^{*}\) is around \(77\), not markedly different from Case 1.
Thus it is not surprising that the numerical performance of the competing methods in Case 2 (\(\alpha_{2}=0.25\)) is similar to that in Case 1. Further discussion of this phenomenon can be found in Liu and Yang (2011) and Zhang and Yang (2015).

In Section A.3 of the Appendix, we provide more simulation results to assess the full-AOP theory in Section 4 and to compare the different candidate model sets given in Section 5. Overall, these results support our full-AOP theory on MMA and present evidence favoring the use of candidate model sets with reduced sizes.

## 8 Discussion

This paper focuses on the problem of combining a set of nested linear models by minimizing an MMA criterion. As background, we first revisited two well-known AOP theories of MMA, which are based on the weight set restriction (Hansen, 2007) and the candidate model set restriction (Wan et al., 2010), respectively. We found that under these restrictions, MMA may not achieve its full potential, and it can perform much worse than MS.

\begin{table} \begin{tabular}{c l c c c c c c} \hline \hline \multirow{2}{*}{\(n\)} & \multirow{2}{*}{method} & \multicolumn{3}{c}{Case 1} & \multicolumn{3}{c}{Case 2} \\ \cline{3-8} & & \(\alpha_{1}=0.51\) & \(\alpha_{1}=1\) & \(\alpha_{1}=1.5\) & \(\alpha_{2}=0.25\) & \(\alpha_{2}=0.75\) & \(\alpha_{2}=1.25\) \\ \hline 30 & WR1 & 1.091 (0.016) & 1.135 (0.017) & 1.137 (0.020) & 1.103 (0.011) & 1.111 (0.017) & 1.131 (0.030) \\ & WR2 & 1.020 (0.007) & 1.044 (0.008) & 1.033 (0.010) & 1.020 (0.005) & 1.042 (0.009) & 1.011 (0.012) \\ & MR1 & 1.972 (0.071) & 1.624 (0.050) & 1.923 (0.080) & 1.747 (0.040) & 2.243 (0.096) & 0.954 (0.043) \\ & MR2 & 1.441 (0.043) & 1.388 (0.035) & 1.254 (0.042) & 1.280 (0.021) & 1.167 (0.036) & 0.954 (0.043) \\ 100 & WR1 & 1.113 (0.011) & 1.126 (0.017) & 1.124 (0.022) & 1.126 (0.007) & 1.125 (0.021) & 1.093 (0.028) \\ & WR2 & 1.025 (0.004) & 1.037 (0.007) & 1.031 (0.009) & 1.022 (0.003) & 1.028 (0.009) & 1.051 (0.013) \\ & MR1 & 2.081 (0.041) & 1.926 (0.041) & 2.072 (0.072) & 2.179 (0.031) & 1.821 (0.058) & 1.397 (0.079) \\ & MR2 & 1.420 (0.022) & 1.306 (0.024) & 1.491 (0.043) & 1.440 (0.015) & 1.018 (0.026) & 1.397 (0.079) \\ 300 & WR1 & 1.129 (0.006) & 1.116 (0.015) & 1.065 (0.019) & 1.133 (0.005) & 1.025 (0.021) & 1.025 (0.037) \\ & WR2 & 1.031 (0.003) & 1.041 (0.006) & 1.047 (0.011) & 1.029 (0.002) & 1.036 (0.009) & 1.081 (0.020) \\ & MR1 & 2.286 (0.027) & 2.601 (0.050) & 3.735 (0.107) & 2.586 (0.024) & 3.703 (0.116) & 3.415 (0.358) \\ & MR2 & 1.496 (0.013) & 1.356 (0.020) & 1.392 (0.032) & 1.514 (0.010) & 1.647 (0.044) & 3.415 (0.358) \\ 1000 & WR1 & 1.123 (0.004) & 1.090 (0.009) & 1.052 (0.017) & 1.124 (0.003) & 0.957 (0.021) & 0.898 (0.030) \\ & WR2 & 1.026 (0.002) & 1.055 (0.006) & 1.062 (0.014) & 1.034 (0.002) & 1.016 (0.013) & 1.011 (0.023) \\ & MR1 & 2.541 (0.018) & 3.740 (0.056) & 4.945 (0.128) & 3.558 (0.022) & 10.469 (0.443) & 8.506 (0.757) \\ & MR2 & 1.525 (0.008) & 1.432 (0.015) & 1.447 (0.030) & 1.524 (0.007) & 4.076 (0.166) & 8.506 (0.757) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparisons of the restricted-AOP MMA methods. The squared \(\ell_{2}\) loss of each method is divided by the \(\ell_{2}\) loss of the MMA estimator based on \(\mathcal{M}_{a}\) and \(\mathcal{W}_{p_{n}}\) in each simulation.

In this paper, inspired by the pioneering work of Hansen (2007), Wan et al.
(2010), and Zhang (2021), we have addressed three key questions about the optimality of MMA: Can MMA achieve the performance of the optimal convex combination of all the nested models (i.e., the full-AOP property)? How should the candidate model set be constructed? Is MMA adaptive in an exact minimax sense for some nonparametric classes? Correspondingly, our main contribution is threefold. First, a non-asymptotic risk bound of MMA is obtained under the sub-Gaussian assumption, which shows that when the optimal MA risk does not converge too fast, the full AOP can be achieved by minimizing the MMA criterion over the largest candidate model set. Second, two types of reduced candidate model sets are proposed, on which the full-AOP property of MMA is retained and in some aspects improved. Third, the MMA estimator is shown to be adaptive in the exact minimax sense over the family of ellipsoids. It is also proved to be adaptive in the exact linear-combined minimax sense on the family of hyperrectangles. To the best of our knowledge, it was previously unknown whether MMA has any minimax property.

In closing, we provide several directions for future research. The focus of this paper has been on a linear regression setup with nested models. It is of great interest to extend the theoretical framework to combining ordered linear smoothers (Chernousova et al., 2013; Bellec and Yang, 2020) and other non-nested models (Wan et al., 2010; Zhang, 2021). Another extension, motivated by an observation from Table 1, is to develop an MA method that can achieve the full AOP on the whole parameter region, if possible. Based on the works of Zhang and Yang (2015) and Qian et al. (2022), we conjecture that such a universal full AOP may be established by properly using cross-validation or hypothesis testing. We leave these for future work.

## Appendix

Section A.1 contains the proofs of all the theorems, corollaries, and propositions in this paper. Section A.2 proves that MMA is asymptotically optimal (AOP) in terms of statistical loss. Section A.3 provides additional simulation results. And other related works are discussed in Section A.4.

### Proofs

#### A.1.1 Notations

In this appendix, we will use the symbols defined in Section 2.1 of the main text. In addition, for any \(n\times n\) real matrix \(\mathbf{A}\), let \(\|\mathbf{A}\|_{2}\) and \(\|\mathbf{A}\|_{\mathrm{F}}\) denote the operator norm and the Frobenius norm of \(\mathbf{A}\), respectively.

#### A.1.2 Preliminaries

Define \(\mathbf{P}_{j}=\mathbf{X}_{j}(\mathbf{X}_{j}^{\top}\mathbf{X}_{j})^{-1}\mathbf{X}_{j}^{\top}\) as the projection matrix based on the first \(j\) columns of \(\mathbf{X}\). Let \(\mathbf{D}_{j}=\mathbf{P}_{j}-\mathbf{P}_{j-1},j=1,\ldots,p_{n}\), where \(\mathbf{P}_{0}=\mathbf{0}_{n\times n}\). Note that \(\mathbf{D}_{j}\) is a projection matrix, and \(\mathbf{D}_{j},j=1,\ldots,p_{n}\) are mutually orthogonal, i.e., \(\mathbf{D}_{j}\mathbf{D}_{j^{\prime}}=\mathbf{D}_{j^{\prime}}\mathbf{D}_{j}=\mathbf{D}_{j}\delta_{jj^{\prime}}\), where \(\delta_{jj^{\prime}}\) is the Kronecker delta. Using eigendecomposition, we have \(\mathbf{D}_{j}=\mathbf{\phi}_{j}\mathbf{\phi}_{j}^{\top}\), where \(\mathbf{\phi}_{j}\in\mathbb{R}^{n}\) satisfies \(\|\mathbf{\phi}_{j}\|=1\). Due to the orthogonality of \(\mathbf{D}_{j},j=1,\ldots,p_{n}\), we see that \(\{\mathbf{\phi}_{1},\ldots,\mathbf{\phi}_{p_{n}}\}\) forms an orthonormal basis for the column space of \(\mathbf{X}\).
Thus, we can represent the model (2.2) as an equivalent sequence model \[\widehat{\theta}_{j}=\theta_{j}+e_{j},\quad j=1,\ldots,p_{n},\] (A.1.1) where \(\widehat{\theta}_{j}=\mathbf{\phi}_{j}^{\top}\mathbf{y}/\sqrt{n}\), \(\theta_{j}=\mathbf{\phi}_{j}^{\top}\mathbf{f}/\sqrt{n}\), and \(e_{j}=\mathbf{\phi}_{j}^{\top}\mathbf{\epsilon}/\sqrt{n}\). Assume \(\epsilon_{1},\ldots,\epsilon_{n}\) are i.i.d. \(\eta\)-sub-Gaussian random variables. Note that \(e_{j},j=1,\ldots,p_{n}\) are \((\eta/\sqrt{n})\)-sub-Gaussian variables, which satisfy \(\mathbb{E}e_{j}=0\), \(\mathbb{E}e_{j}^{2}=\sigma^{2}/n\), and \(\mathbb{E}e_{j}e_{j^{\prime}}=0\) when \(j\neq j^{\prime}\). Based on the sequence model (A.1.1), the least squares estimator \(\widehat{\mathbf{f}}_{m}\) has the following equivalent form \[\widehat{\mathbf{f}}_{m}=\mathbf{P}_{m}\mathbf{y}=\sum_{j=1}^{m}\mathbf{D}_{j }\mathbf{y}=\sum_{j=1}^{m}\mathbf{\phi}_{j}\mathbf{\phi}_{j}^{\top}\mathbf{y}=\sqrt{n }\sum_{j=1}^{m}\mathbf{\phi}_{j}\widehat{\theta}_{j}.\] The \(\ell_{2}\) risk of \(\widehat{\mathbf{f}}_{m}\) is \[\begin{split} R_{n}(m,\mathbf{f})&=\frac{1}{n} \mathbb{E}\left\|\widehat{\mathbf{f}}_{m}-\mathbf{f}\right\|^{2}=\frac{1}{n} \mathbb{E}\left\|\sum_{j=1}^{m}\mathbf{D}_{j}\mathbf{y}-\sum_{j=1}^{p_{n}} \mathbf{D}_{j}\mathbf{f}\right\|^{2}\\ &=\mathbb{E}\left\|\sum_{j=1}^{m}\mathbf{\phi}_{j}\widehat{\theta}_{ j}-\sum_{j=1}^{p_{n}}\mathbf{\phi}_{j}\theta_{j}\right\|^{2}=\mathbb{E}\left\|\sum_{j=1}^ {m}\mathbf{\phi}_{j}e_{j}-\sum_{j=m+1}^{p_{n}}\mathbf{\phi}_{j}\theta_{j}\right\|^{2} \\ &=\frac{m\sigma^{2}}{n}+\sum_{j=m+1}^{p_{n}}\theta_{j}^{2},\end{split}\] (A.1.2) where the last equality is due to the orthogonality of \(\{\mathbf{\phi}_{1},\ldots,\mathbf{\phi}_{p_{n}}\}\) and \(\mathbb{E}e_{j}^{2}=\sigma^{2}/n\). Define \(k_{0}=0\). The MA estimator based on \(\mathcal{M}=\{k_{1},k_{2},\ldots,k_{M_{n}}\}\) is \[\begin{split}\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}}& =\sum_{m=1}^{M_{n}}w_{m}\widehat{\mathbf{f}}_{k_{m}}=\sum_{m=1}^ {M_{n}}w_{m}\left(\sqrt{n}\sum_{j=1}^{k_{m}}\mathbf{\phi}_{j}\widehat{\theta}_{j}\right) \\ &=\sqrt{n}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j} \mathbf{\phi}_{l}\widehat{\theta}_{l},\end{split}\] (A.1.3) where \(\gamma_{j}=\sum_{m=j}^{M_{n}}w_{m}\) is the cumulative weight. 
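The representation (A.1.1)-(A.1.3) is easy to verify numerically. In the sketch below (ours), the basis \(\{\mathbf{\phi}_{j}\}\) is obtained from a QR decomposition, which spans the same nested column spaces up to sign changes that cancel in the fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 50, 10, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

Q, _ = np.linalg.qr(X)                 # columns play the role of phi_1, ..., phi_p
theta_hat = Q.T @ y / np.sqrt(n)       # \hat{theta}_j as in (A.1.1)

# Check: sqrt(n) * sum_{j <= m} phi_j theta_hat_j equals the least-squares fit P_m y.
fit_seq = np.sqrt(n) * Q[:, :m] @ theta_hat[:m]
fit_ls = X[:, :m] @ np.linalg.lstsq(X[:, :m], y, rcond=None)[0]
print(np.allclose(fit_seq, fit_ls))    # True
```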
A similar calculation to (A.1.2) yields the \(\ell_{2}\) loss of \(\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}}\)

\[\begin{split}& L_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})=\frac{1}{n}\left\|\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}}-\mathbf{f}\right\|^{2}\\ &=\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\left(\gamma_{j}\widehat{\theta}_{l}-\theta_{l}\right)^{2}+\sum_{j=k_{M_{n}}+1}^{p_{n}}\theta_{j}^{2}\end{split}\] (A.1.4)

and the corresponding MA risk

\[\begin{split}& R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})=\mathbb{E}L_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\\ &=\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\left[(1-\gamma_{j})^{2}\,\theta_{l}^{2}+\frac{\sigma^{2}}{n}\gamma_{j}^{2}\right]+\sum_{j=k_{M_{n}}+1}^{p_{n}}\theta_{j}^{2}.\end{split}\] (A.1.5)

Furthermore, the MMA criterion (2.4) can also be rewritten as

\[\begin{split}& C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})=\frac{1}{n}\left\|\mathbf{y}-\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}}\right\|^{2}+\frac{2\widehat{\sigma}^{2}}{n}\mathbf{k}^{\top}\mathbf{w}\\ &=\frac{1}{n}\left\|\mathbf{y}-\sqrt{n}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\boldsymbol{\phi}_{l}\widehat{\theta}_{l}\right\|^{2}+\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}2\gamma_{j}\frac{\widehat{\sigma}^{2}}{n}\\ &=\frac{1}{n}\left\|\mathbf{y}\right\|^{2}-\frac{2}{\sqrt{n}}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\mathbf{y}^{\top}\boldsymbol{\phi}_{l}\widehat{\theta}_{l}+\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}^{2}\widehat{\theta}_{l}^{2}+\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}2\gamma_{j}\frac{\widehat{\sigma}^{2}}{n}\\ &=\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\left[\gamma_{j}^{2}\widehat{\theta}_{l}^{2}+2\gamma_{j}\left(\frac{\widehat{\sigma}^{2}}{n}-\widehat{\theta}_{l}^{2}\right)\right]+\frac{1}{n}\sum_{i=1}^{n}y_{i}^{2},\end{split}\] (A.1.6)

where the last equality follows from \(\mathbf{y}^{\top}\boldsymbol{\phi}_{l}/\sqrt{n}=\widehat{\theta}_{l}\).

#### A.1.3 Technical lemmas

We state or prove several preliminary lemmas used to prove the propositions in Section 3 and the main results in Section 4. Lemma 1 compares the optimal risks of MS and MA based on the successive candidate model set \(\mathcal{M}_{s}=\{1,2,\ldots,M_{n}\}\). Define \(m_{n}^{*}=\arg\min_{m\in\{1,\ldots,p_{n}\}}R_{n}(m,\mathbf{f})\) the size of the optimal single model, \(m^{*}|\mathcal{M}_{s}=\arg\min_{m\in\mathcal{M}_{s}}R_{n}(m,\mathbf{f})\) the size of the optimal candidate model in \(\mathcal{M}_{s}\), and \(\mathbf{w}^{*}|\mathcal{M}_{s}=\arg\min_{\mathbf{w}\in\mathcal{W}_{M_{n}}}R_{n}(\mathbf{w}|\mathcal{M}_{s},\mathbf{f})\) the optimal weight vector based on the candidate model set \(\mathcal{M}_{s}\).

**Lemma 1**.: _Suppose that Assumptions 1-2 hold.
For the set of successive candidate models \(\mathcal{M}_{s}=\{1,2,\ldots,M_{n}\}\), we always have_ \[R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f})\asymp R_{n}(\mathbf{w}^{*}|\mathcal{M }_{s},\mathbf{f}).\] _For a large set \(\mathcal{M}_{s}\) with \(M_{n}\gtrsim m_{n}^{*}\), we have_ \[R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f})\asymp R_{n}(\mathbf{w}^{*}|\mathcal{ M}_{s},\mathbf{f})\asymp R_{n}(m_{n}^{*},\mathbf{f})\asymp\frac{m_{n}^{*}}{n}.\] _Under Condition 1, we get_ \[R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s}, \mathbf{f})\asymp R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f}).\] _Under Condition 2, we get_ \[R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s}, \mathbf{f})=o\left[R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f})\right].\] _For a small set \(\mathcal{M}_{s}=\{1,2,\ldots,M_{n}\}\) with \(M_{n}=o(m_{n}^{*})\), we have_ \[R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s}, \mathbf{f})=o\left[R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f})\right].\] Proof.: Note that Assumption 1 is equivalent to \[\frac{1}{n}\left\|\mathbf{f}\right\|^{2}=\frac{1}{n}\left\|\sum_{j=1}^{p_{n}} \mathbf{D}_{j}\mathbf{f}\right\|^{2}=\frac{1}{n}\left\|\sqrt{n}\sum_{j=1}^{p_{ n}}\mathbf{\phi}_{j}\theta_{j}\right\|^{2}=\sum_{j=1}^{p_{n}}\theta_{j}^{2}<\infty.\] (A.1.7) This coincides with Assumption 1 in Peng and Yang (2022). Thus, Theorems 1-2 of Peng and Yang (2022) and Theorems 1-4 of Xu and Zhang (2022) imply the results of this lemma. **Lemma 2**.: _Let \(\{\xi(t),t\in\mathcal{T}\}\) be a stochastic process with \(\mathbb{E}\xi(t)=0\) and finite variance \(\mathbb{E}[\xi(t)]^{2}=\sigma^{2}(t)\) for all \(t\in\mathcal{T}\), where \(\mathcal{T}\) is a finite index set. Suppose that there exist \(\lambda>0\) and \(\varphi(\lambda)<\infty\) such that_ \[\max_{t\in\mathcal{T}}\mathbb{E}\exp(\lambda|\xi(t)|)\leq\varphi(\lambda).\] (A.1.8) _Then for all \(r\geq 1\), there exists a constant \(C\) depending on \(\lambda\) and \(r\) such that_ \[\left(\mathbb{E}\max_{t\in\mathcal{T}}|\xi(t)|^{r}\right)^{\frac{1}{r}}\leq C( \log|\mathcal{T}|+1).\] Proof.: The proof of this lemma is motivated by Lemma 1 in Golubev (2010). Notice that for \(r\geq 1\), the function \(F(x)=\log^{r}[x+\exp(r-1)]\) is concave on \((0,\infty)\) since \[F^{\prime\prime}(x)=\frac{r\log^{r-2}[x+\exp(r-1)]}{[x+\exp(r-1)]^{2}}\left\{r -1-\log\left[x+\exp(r-1)\right]\right\}\leq 0.\] Using Jensen's inequality, we have \[\left[\mathbb{E}\max_{t\in\mathcal{T}}|\xi(t)|^{r}\right]^{\frac {1}{r}} =\frac{1}{\lambda}\left\{\mathbb{E}\left[\max_{t\in\mathcal{T}}| \lambda\xi(t)|\right]^{r}\right\}^{\frac{1}{r}}=\frac{1}{\lambda}\left\{ \mathbb{E}\log^{r}\left[\exp\left(\max_{t\in\mathcal{T}}|\lambda\xi(t)|\right) \right]\right\}^{\frac{1}{r}}\] \[\leq\frac{1}{\lambda}\log\left[\mathbb{E}\exp\left(\max_{t\in \mathcal{T}}\lambda|\xi(t)|\right)+\exp(r-1)\right]\] \[\leq\frac{1}{\lambda}\log\left[\sum_{t\in\mathcal{T}}\mathbb{E} \exp(\lambda|\xi(t)|)+\exp(r-1)\right]\] \[\leq\frac{\log\left[\varphi(\lambda)|\mathcal{T}|+\exp(r-1)\right] }{\lambda}\leq C(\log|\mathcal{T}|+1),\] which proves the lemma. 
#### A.1.4 Proof of Proposition 1

From Assumption 2 and (A.1.2), we see that the optimal single model \(m_{n}^{*}\) satisfies

\[\theta_{m_{n}^{*}}^{2}>\frac{\sigma^{2}}{n}\geq\theta_{m_{n}^{*}+1}^{2}.\] (A.1.9)

Hence the optimal MS risk is

\[R_{n}(m_{n}^{*},\mathbf{f})=\frac{m_{n}^{*}\sigma^{2}}{n}+\sum_{j=m_{n}^{*}+1}^{p_{n}}\theta_{j}^{2}.\] (A.1.10)

Using (A.1.5), we get the MA risk

\[\begin{split} R_{n}\left(\mathbf{w}|\mathcal{M}_{s},\mathbf{f}\right)&=\sum_{j=1}^{M_{n}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left(\gamma_{j}-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\right)^{2}\\ &+\sum_{j=1}^{M_{n}}\frac{\theta_{j}^{2}\sigma^{2}}{n\theta_{j}^{2}+\sigma^{2}}+\sum_{j=M_{n}+1}^{p_{n}}\theta_{j}^{2}.\end{split}\] (A.1.11)

The infeasible optimal weights \(\mathbf{w}^{*}|\mathcal{M}_{s}=(w_{1}^{*},\ldots,w_{M_{n}}^{*})^{\top}\) can be obtained by setting

\[\gamma_{1}^{*}=1,\,\gamma_{j}^{*}=\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}},\,j=2,\ldots,M_{n},\] (A.1.12)

where \(\gamma_{j}^{*}=\sum_{m=j}^{M_{n}}w_{m}^{*}\). Hence the optimal MA risk based on \(\mathcal{M}_{s}\) is

\[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})=\frac{\sigma^{2}}{n}+\sum_{j=2}^{M_{n}}\frac{\theta_{j}^{2}\sigma^{2}}{n\theta_{j}^{2}+\sigma^{2}}+\sum_{j=M_{n}+1}^{p_{n}}\theta_{j}^{2}.\]

We first prove the results when Condition 1 and \(M_{n}\gtrsim m_{n}^{*}\) hold. Define \(G:\mathbb{N}\rightarrow\mathbb{N}\) by

\[G(x)=\arg\min_{t\in\mathbb{N}}\left(\lfloor kt\rfloor\geq x\right),\]

where \(k\) is the constant given in Condition 1 and \(\mathbb{N}\) is the set of natural numbers. Define a sequence of functions \(G_{d}(x)\) indexed by integer \(d\)

\[G_{d}(x)=\left\{\begin{array}{ll}x&\quad d=0,\\ \left(G\circ G_{d-1}\right)(x)&\quad d\geq 1,\end{array}\right.\] (A.1.13)

where the notation \((f\circ g)(x)\) means the composition of functions \(f(g(x))\). Given a fixed \(N\), define \(d_{1}^{*}=\arg\min_{d\in\mathbb{N}}\nu^{2d}\leq 1/(N-1)\) and \(i_{n}^{*}=M_{n}\wedge G_{d_{1}^{*}+1}(m_{n}^{*})\), where \(0<\nu<1\) is the constant defined in Condition 1. Since \(M_{n}\gtrsim m_{n}^{*}\) and \(d_{1}^{*}\) is a fixed integer, we see \(i_{n}^{*}\asymp m_{n}^{*}\). We have

\[\begin{split}\frac{\theta_{m_{n}^{*}}^{2}}{\theta_{i_{n}^{*}}^{2}}&\leq\frac{\theta_{m_{n}^{*}}^{2}}{\theta_{G_{1}(m_{n}^{*})}^{2}}\times\frac{\theta_{G_{1}(m_{n}^{*})}^{2}}{\theta_{G_{2}(m_{n}^{*})}^{2}}\times\cdots\times\frac{\theta_{G_{d_{1}^{*}}(m_{n}^{*})}^{2}}{\theta_{G_{d_{1}^{*}+1}(m_{n}^{*})}^{2}}\times\frac{\theta_{G_{d_{1}^{*}+1}(m_{n}^{*})}^{2}}{\theta_{i_{n}^{*}}^{2}}\\ &\leq\nu^{2d_{1}^{*}+2}\leq\frac{\nu^{2}}{N-1},\end{split}\] (A.1.14)

where the second inequality follows from Condition 1 and \(\theta_{i_{n}^{*}}^{2}\geq\theta_{G_{d_{1}^{*}+1}(m_{n}^{*})}^{2}\), and the last inequality is due to the definition of \(d_{1}^{*}\). Therefore

\[\gamma_{i_{n}^{*}}^{*}-\frac{N-1}{N}\geq\frac{\theta_{i_{n}^{*}}^{2}}{\theta_{i_{n}^{*}}^{2}+\theta_{m_{n}^{*}}^{2}}-\frac{N-1}{N}\geq\frac{N-1}{N-1+\nu^{2}}-\frac{N-1}{N}\triangleq C_{1}>0,\] (A.1.15)

where the first inequality is due to (A.1.9) and (A.1.12), and the second inequality is due to (A.1.14). Define another model index \(j_{n}^{*}=G_{1}(i_{n}^{*})\).
Note that

\[\begin{split}\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{j_{n}^{*}}^{2}}&=\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{G_{1}(m_{n}^{*}+1)}^{2}}\times\frac{\theta_{G_{1}(m_{n}^{*}+1)}^{2}}{\theta_{G_{2}(m_{n}^{*}+1)}^{2}}\times\cdots\times\frac{\theta_{G_{d_{1}^{*}+1}(m_{n}^{*}+1)}^{2}}{\theta_{i_{n}^{*}}^{2}}\times\frac{\theta_{i_{n}^{*}}^{2}}{\theta_{G_{1}(i_{n}^{*})}^{2}}\\ &\geq\delta^{2d_{1}^{*}+4}\frac{\theta_{G_{d_{1}^{*}+1}(m_{n}^{*}+1)}^{2}}{\theta_{i_{n}^{*}}^{2}},\end{split}\]

where \(0<\delta<1\) is the constant defined in Condition 1. Since \(i_{n}^{*}=M_{n}\wedge G_{d_{1}^{*}+1}(m_{n}^{*})\) and \(M_{n}\gtrsim m_{n}^{*}\), there must exist a constant \(0<c\leq 1\) such that

\[\frac{\theta_{G_{d_{1}^{*}+1}(m_{n}^{*}+1)}^{2}}{\theta_{i_{n}^{*}}^{2}}>c\]

under Condition 1. We thus have

\[1-\gamma_{j_{n}^{*}}^{*}\geq 1-\frac{\theta_{j_{n}^{*}}^{2}}{\theta_{j_{n}^{*}}^{2}+\theta_{m_{n}^{*}+1}^{2}}\geq 1-\frac{1}{1+c\delta^{2d_{1}^{*}+4}}\triangleq C_{2}>0.\] (A.1.16)

Let \(\mathbf{w}_{N}^{*}|\mathcal{M}_{s}=\arg\min_{\mathbf{w}\in\mathcal{W}_{|\mathcal{M}_{s}|}(N)}R_{n}(\mathbf{w}|\mathcal{M}_{s},\mathbf{f})\) denote the optimal discrete weight vector in \(\mathcal{W}_{|\mathcal{M}_{s}|}(N)\). Note that restricting \(\mathbf{w}_{N}|\mathcal{M}_{s}=(w_{N,1},\ldots,w_{N,M_{n}})^{\top}\in\mathcal{W}_{|\mathcal{M}_{s}|}(N)\) is equivalent to restricting \(\boldsymbol{\gamma}_{N}|\mathcal{M}_{s}=(\gamma_{N,1},\ldots,\gamma_{N,M_{n}})^{\top}\in\Gamma_{|\mathcal{M}_{s}|}(N)=\{\gamma_{N,j}=t_{j}/N:N=t_{1}\geq t_{2}\geq\cdots\geq t_{M_{n}}\geq 0,t_{j}\in\mathbb{N}\cup\{0\}\}\), where \(\gamma_{N,j}=\sum_{m=j}^{M_{n}}w_{N,m}\). Based on (A.1.15) and (A.1.16), when \(j_{n}^{*}<j\leq i_{n}^{*}\), we see that the optimal cumulative weights satisfy

\[\frac{N-1}{N}+C_{1}\leq\gamma_{j}^{*}\leq 1-C_{2}.\]

However, the optimal discrete cumulative weight \(\gamma_{N,j}^{*}=\sum_{m=j}^{M_{n}}w_{N,m}^{*}\) is either 1 or \((N-1)/N\) when \(j_{n}^{*}<j\leq i_{n}^{*}\). Combining (A.1.11) with (A.1.15) and (A.1.16), we see at once that

\[R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\]
\[=\sum_{j=1}^{M_{n}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left(\gamma_{N,j}^{*}-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\right)^{2}-\sum_{j=1}^{M_{n}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left(\gamma_{j}^{*}-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\right)^{2}\]
\[\geq\sum_{j=2}^{M_{n}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left(\gamma_{N,j}^{*}-\gamma_{j}^{*}\right)^{2}\geq\sum_{j=j_{n}^{*}+1}^{i_{n}^{*}}\frac{\sigma^{2}}{n}(C_{1}^{2}\wedge C_{2}^{2})\]
\[=\frac{(C_{1}^{2}\wedge C_{2}^{2})(i_{n}^{*}-j_{n}^{*})\sigma^{2}}{n}\asymp\frac{m_{n}^{*}}{n}\asymp R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}),\]

where the constants \(C_{1}\) and \(C_{2}\) are defined in (A.1.15) and (A.1.16) respectively, and the last approximation follows from Lemma 1. Due to

\[R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\leq R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\leq R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right),\] (A.1.17)

the proof of the results under Condition 2 or \(M_{n}=o(m_{n}^{*})\) is a direct application of Lemma 1. This completes the proof.

#### A.1.5 Proof of Proposition 2

We first prove the claim under Condition 1 and \(M_{n}\gtrsim m_{n}^{*}\).
Recall that \(\mathbf{w}_{N}^{*}|\mathcal{M}_{s}=\arg\min_{\mathbf{w}\in\mathcal{W}_{|\mathcal{M}_{s}|}(N)}R_{n}(\mathbf{w}|\mathcal{M}_{s},\mathbf{f})\) denotes the optimal discrete weight vector in \(\mathcal{W}_{|\mathcal{M}_{s}|}(N)\), and \(m^{*}|\mathcal{M}_{s}=\arg\min_{m\in\mathcal{M}_{s}}R_{n}(m,\mathbf{f})\) is the size of the optimal candidate model in \(\mathcal{M}_{s}\). Since MS can be seen as MA on the discrete weight set with \(N=1\), we have \(R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)=R_{n}\left(\mathbf{w}_{1}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\), where \(\mathbf{w}_{1}^{*}|\mathcal{M}_{s}=(w_{1,1}^{*},\ldots,w_{1,M_{n}}^{*})^{\top}\) and the optimal discrete cumulative weights for MS are \(\gamma_{1,j}^{*}=\sum_{m=j}^{M_{n}}w_{1,m}^{*}\). From (A.1.2) and (A.1.11), we have \[\gamma_{1,j}^{*}=\left\{\begin{array}{ll}1&\quad 1\leq j\leq(m_{n}^{*}\wedge M_{n}),\\ 0&\quad(m_{n}^{*}\wedge M_{n})<j\leq M_{n}.\end{array}\right.\] (A.1.18) From (A.1.11), we see that the risk difference between MS and MA is \[R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\] \[=R_{n}\left(\mathbf{w}_{1}^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\] \[=\sum_{j=1}^{M_{n}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left(\gamma_{1,j}^{*}-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\right)^{2}-\sum_{j=1}^{M_{n}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left(\gamma_{N,j}^{*}-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\right)^{2}\] (A.1.19) \[\geq\sum_{j=1}^{M_{n}\wedge m_{n}^{*}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left(1-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\right)^{2}-\frac{1}{4N^{2}}\sum_{j=1}^{M_{n}\wedge m_{n}^{*}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right),\] where the inequality is due to (A.1.18) and the fact \[\left|\gamma_{N,j}^{*}-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\right|\leq\frac{1}{2N}.\] Define \[d_{2}^{*}=\left\{\begin{array}{ll}\arg\min_{d\in\mathbb{N}}\{G_{d}(m_{n}^{*}+1)<M_{n}\}&M_{n}<m_{n}^{*},\\ 0&M_{n}\geq m_{n}^{*},\end{array}\right.\] where the function \(G_{d}\) is given by (A.1.13). It is easy to check that \[d_{2}^{*}\sim\log_{k}\left(\frac{m_{n}^{*}}{M_{n}}\lor 1\right),\] where \(k>1\) is the constant given in Condition 1. When \(N>(1+\delta^{2d_{2}^{*}+2})/(2\delta^{2d_{2}^{*}+2})\), there must exist a positive constant \(\tau\) that satisfies \(\delta^{2d_{2}^{*}+2}\geq(1+\tau)/[2N-(1+\tau)]\), where \(0<\delta<1\) is the constant given in Condition 1. Then we define \[d_{3}^{*}=\arg\max_{d\in\mathbb{N}\cup\{0\}}\delta^{2d+2d_{2}^{*}+2}\geq\frac{1+\tau}{2N-(1+\tau)}\] (A.1.20) and the model index \(j_{n}^{*}=G_{1}(M_{n})\wedge G_{d_{3}^{*}}\left(m_{n}^{*}+1\right)\).
When \(j_{n}^{*}=G_{1}(M_{n})\), we have \[\begin{split}\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{j_{n}^{*}}^{2}}&=\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{G_{1}(m_{n}^{*}+1)}^{2}}\times\frac{\theta_{G_{1}(m_{n}^{*}+1)}^{2}}{\theta_{G_{2}(m_{n}^{*}+1)}^{2}}\times\cdots\times\frac{\theta_{G_{d_{2}^{*}}(m_{n}^{*}+1)}^{2}}{\theta_{G_{1}(M_{n})}^{2}}\\ &\geq\delta^{2d_{2}^{*}}\frac{\theta_{G_{d_{2}^{*}}(m_{n}^{*}+1)}^{2}}{\theta_{G_{1}(M_{n})}^{2}}\geq\delta^{2d_{2}^{*}}\frac{\theta_{G_{d_{2}^{*}}(m_{n}^{*}+1)}^{2}}{\theta_{G_{d_{2}^{*}+1}(m_{n}^{*}+1)}^{2}}\\ &\geq\delta^{2d_{2}^{*}+2},\end{split}\] (A.1.21) where the first inequality follows from Condition 1, and the second inequality is due to \(G_{d_{2}^{*}}(m_{n}^{*}+1)<M_{n}\) and \(G_{d_{2}^{*}+1}(m_{n}^{*}+1)<G_{1}(M_{n})\). When \(j_{n}^{*}=G_{d_{3}^{*}}\left(m_{n}^{*}+1\right)\), we have \[\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{j_{n}^{*}}^{2}}=\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{G_{1}(m_{n}^{*}+1)}^{2}}\times\frac{\theta_{G_{1}(m_{n}^{*}+1)}^{2}}{\theta_{G_{2}(m_{n}^{*}+1)}^{2}}\times\cdots\times\frac{\theta_{G_{d_{3}^{*}-1}(m_{n}^{*}+1)}^{2}}{\theta_{G_{d_{3}^{*}}(m_{n}^{*}+1)}^{2}}\geq\delta^{2d_{3}^{*}}.\] (A.1.22) Combining (A.1.21) with (A.1.22), we have \[\begin{split}\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{j_{n}^{*}}^{2}}&\geq\delta^{2d_{2}^{*}+2}\wedge\delta^{2d_{3}^{*}}=\delta^{(2d_{2}^{*}+2)\vee(2d_{3}^{*})}\\ &\geq\delta^{2d_{2}^{*}+2d_{3}^{*}+2}\geq\frac{1+\tau}{2N-(1+\tau)},\end{split}\] (A.1.23) where the second inequality is due to \(0<\delta<1\), and the last inequality is due to the definition (A.1.20). Thus when \(j\geq j_{n}^{*}\), we have \[1-\frac{\theta_{j}^{2}}{\theta_{j}^{2}+\frac{\sigma^{2}}{n}}\geq 1-\frac{1}{1+\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{j}^{2}}}\geq 1-\frac{1}{1+\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{j_{n}^{*}}^{2}}}\] (A.1.24) \[\geq 1-\frac{1}{1+\frac{1+\tau}{2N-(1+\tau)}}=\frac{1+\tau}{2N}.\] Substituting (A.1.24) into (A.1.19), and keeping only the terms with \(j_{n}^{*}<j\leq M_{n}\wedge m_{n}^{*}\), gives the desired claim \[R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)-R_{n}\left(\mathbf{w}_{N}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\] \[\geq\sum_{j=j_{n}^{*}+1}^{M_{n}\wedge m_{n}^{*}}\left(\frac{\sigma^{2}}{n}+\theta_{j}^{2}\right)\left[\frac{(1+\tau)^{2}}{4N^{2}}-\frac{1}{4N^{2}}\right]\] \[\geq\frac{(\tau^{2}+2\tau)(M_{n}\wedge m_{n}^{*}-j_{n}^{*})\sigma^{2}}{4N^{2}n}\asymp\frac{m_{n}^{*}}{n}\asymp R_{n}(m^{*}|\mathcal{M}_{s},\mathbf{f}).\] The proof of the result under Condition 2 or the condition \(M_{n}=o(m_{n}^{*})\) is straightforward based on Lemma 1 and (A.1.17). This completes the proof of this proposition.

#### a.1.6 Proof of Proposition 3

Under Condition 1, we first verify by contradiction that a necessary condition for (2.9) is \(M_{n}=o(m_{n}^{*})\). Suppose \(M_{n}\geq m_{n}^{*}\). It is already seen from Peng and Yang (2022) that \[R_{n}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)\asymp R_{n}(m_{n}^{*},\mathbf{f})\asymp\frac{m_{n}^{*}}{n}\] under Condition 1. We thus obtain \[\frac{|\mathcal{M}_{s}|\sum_{m=1}^{|\mathcal{M}_{s}|}R_{n}\left(w_{m}^{0}|\mathcal{M}_{s},\mathbf{f}\right)}{nR_{n}^{2}\left(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}\right)}\asymp\frac{M_{n}^{2}m_{n}^{*}/n}{(m_{n}^{*})^{2}/n}\] (A.1.25) \[=\frac{M_{n}^{2}}{m_{n}^{*}}\geq M_{n}\geq m_{n}^{*}\to\infty,\] which contradicts the assumption (2.9).
Suppose \(M_{n}<m_{n}^{*}\) but \(M_{n}\asymp m_{n}^{*}\). Then there must exist a constant \(C>1\) and a positive integer \(K\) such that for any \(n>K\), we have \(m_{n}^{*}<CM_{n}\). In this case, the main task is to show that the risk of the optimal single model in \(\mathcal{M}_{s}\) and the risk of the optimal averaged model based on \(\mathcal{M}_{s}\) both have the order \(m_{n}^{*}/n\). Note first that the optimal single model in \(\mathcal{M}_{s}\) needs to include \(M_{n}\) terms, which has the risk \[R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)=\frac{M_{n}\sigma^{2}}{n}+\sum_{j=M_{n}+1}^{m_{n}^{*}}\theta_{j}^{2}+\sum_{j=m_{n}^{*}+1}^{p_{n}}\theta_{j}^{2}.\] As there must exist an index \(d_{4}^{*}\) such that \(G_{d_{4}^{*}}(m_{n}^{*}+1)\leq m_{n}^{*}/C<M_{n}\), it follows that the second term in \(R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)\) is bounded by \[\begin{split}\sum_{j=M_{n}+1}^{m_{n}^{*}}\theta_{j}^{2}&\leq\left(m_{n}^{*}-M_{n}\right)\theta_{G_{d_{4}^{*}}(m_{n}^{*}+1)}^{2}\leq\frac{\left(m_{n}^{*}-M_{n}\right)\theta_{m_{n}^{*}+1}^{2}}{\delta^{2d_{4}^{*}}}\\ &\leq\frac{\left(m_{n}^{*}-M_{n}\right)\sigma^{2}}{n\delta^{2d_{4}^{*}}}\lesssim\frac{m_{n}^{*}}{n},\end{split}\] (A.1.26) where the second inequality follows from Condition 1 and the third inequality follows from (A.1.9). Since the order of the last term in \(R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)\) is also no bigger than \(m_{n}^{*}/n\) (Peng and Yang, 2022), we thus get \(R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)\asymp m_{n}^{*}/n\). Furthermore, it is easy to check that \[\begin{split}& R_{n}\left(m^{*}|\mathcal{M}_{s},\mathbf{f}\right)\geq R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\\ &\geq R_{n}(\mathbf{w}^{*}|\mathcal{M}_{l},\mathbf{f})\asymp R_{n}(m_{n}^{*},\mathbf{f})\asymp\frac{m_{n}^{*}}{n},\end{split}\] where \(\mathcal{M}_{l}\) is a large candidate model set which includes \(m_{n}^{*}\), and the last two approximations are due to Peng and Yang (2022). It follows immediately that \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\asymp m_{n}^{*}/n\). In the same manner as in (A.1.25), we also obtain a contradiction with the assumption (2.9) when \(M_{n}<m_{n}^{*}\) and \(M_{n}\asymp m_{n}^{*}\). Thus, under Condition 1, a necessary condition for (2.9) is \(M_{n}=o(m_{n}^{*})\). Define \(d_{5}^{*}=\arg\max_{d\in\mathbb{N}}\{G_{d}(m_{n}^{*})\geq M_{n}\}\). Since \(M_{n}=o(m_{n}^{*})\), we have \(d_{5}^{*}\to\infty\) as \(n\to\infty\).
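In view of \(G_{d}(x)\asymp x/k^{d}\), the requirement \(G_{d}(m_{n}^{*})\geq M_{n}\) amounts, roughly, to \(k^{d}\lesssim m_{n}^{*}/M_{n}\), so that \(d_{5}^{*}\asymp\log_{k}(m_{n}^{*}/M_{n})\); this parallels the expression for \(d_{2}^{*}\) above and makes the divergence \(d_{5}^{*}\to\infty\) explicit.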
Then the MA risk is lower bounded by \[\begin{split}& R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\geq\sum_{j=M_{n}+1}^{m_{n}^{*}}\theta_{j}^{2}\\ &=\sum_{j=G_{1}(m_{n}^{*})+1}^{m_{n}^{*}}\theta_{j}^{2}+\sum_{j=G_{2}(m_{n}^{*})+1}^{G_{1}(m_{n}^{*})}\theta_{j}^{2}+\cdots+\sum_{j=G_{d_{5}^{*}}(m_{n}^{*})+1}^{G_{d_{5}^{*}-1}(m_{n}^{*})}\theta_{j}^{2}\\ &\geq\theta_{m_{n}^{*}}^{2}[m_{n}^{*}-G_{1}(m_{n}^{*})]+\theta_{G_{1}(m_{n}^{*})}^{2}[G_{1}(m_{n}^{*})-G_{2}(m_{n}^{*})]\\ &\qquad\qquad+\cdots+\theta_{G_{d_{5}^{*}-1}(m_{n}^{*})}^{2}[G_{d_{5}^{*}-1}(m_{n}^{*})-G_{d_{5}^{*}}(m_{n}^{*})]\\ &\geq\frac{\sigma^{2}}{n}\left(m_{n}^{*}-\frac{m_{n}^{*}}{k}\right)+\frac{\sigma^{2}}{n\nu^{2}}\left(\frac{m_{n}^{*}}{k}-\frac{m_{n}^{*}}{k^{2}}\right)+\cdots+\frac{\sigma^{2}}{n\nu^{2(d_{5}^{*}-1)}}\left(\frac{m_{n}^{*}}{k^{d_{5}^{*}-1}}-\frac{m_{n}^{*}}{k^{d_{5}^{*}}}\right)\\ &\geq\frac{m_{n}^{*}\sigma^{2}}{n}\left(1-\frac{1}{k}\right)\sum_{l=0}^{d_{5}^{*}-1}\frac{1}{(k\nu^{2})^{l}},\end{split}\] (A.1.27) where the first inequality follows from (A.1.5), and the third inequality is due to (A.1.9) and Condition 1. Since \(d_{5}^{*}\to\infty\) and \(k\nu^{2}<1\), we thus get \[\sum_{l=0}^{d_{5}^{*}-1}\frac{1}{(k\nu^{2})^{l}}\to\infty.\] Due to \(R_{n}(m_{n}^{*},\mathbf{f})\asymp m_{n}^{*}/n\), from (A.1.27) we conclude \(R_{n}(m_{n}^{*},\mathbf{f})=o[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})]\). When Condition 2 holds, arguing by contradiction again, we see that a necessary condition for (2.9) is \(M_{n}\leq\lfloor Cm_{n}^{*}\rfloor\) with a constant \(0<C<1\). Note that \(\lfloor Cm_{n}^{*}\rfloor\leq\lfloor(C+1)m_{n}^{*}/2\rfloor\leq m_{n}^{*}\). Then the MA risk is lower bounded by \[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\geq\sum_{j=M_{n}+1}^{\lfloor(C+1)m_{n}^{*}/2\rfloor}\theta_{j}^{2}\] \[\geq(\lfloor(C+1)m_{n}^{*}/2\rfloor-M_{n})\theta_{\lfloor(C+1)m_{n}^{*}/2\rfloor}^{2}.\] Under Condition 2, we have \(\theta_{\lfloor(C+1)m_{n}^{*}/2\rfloor}^{2}/\theta_{m_{n}^{*}}^{2}\to\infty\) and \(\theta_{m_{n}^{*}}^{2}\asymp 1/n\). Thus we get \(R_{n}(m_{n}^{*},\mathbf{f})=o[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})]\), which proves the proposition.

#### a.1.7 Proof of the results in the two examples

Based on the risk of MS (A.1.2), we have \[\sum_{m=1}^{|\mathcal{M}_{s}|}R_{n}\left(\mathbf{w}_{m}^{0}|\mathcal{M}_{s},\mathbf{f}\right)=\sum_{m=1}^{M_{n}}R_{n}\left(m,\mathbf{f}\right)\] \[=\sum_{j=1}^{M_{n}}\frac{j}{n}\sigma^{2}+\sum_{j=2}^{p_{n}}\theta_{j}^{2}+\cdots+\sum_{j=M_{n}+1}^{p_{n}}\theta_{j}^{2}\] (A.1.28) \[=\sum_{j=1}^{M_{n}}\frac{j}{n}\sigma^{2}+\sum_{j=2}^{M_{n}}(j-1)\theta_{j}^{2}+M_{n}\sum_{j=M_{n}+1}^{p_{n}}\theta_{j}^{2}.\] When \(\theta_{j}=j^{-\alpha_{1}},\alpha_{1}>1/2\), approximating the sums in (A.1.28) by integrals, we obtain that the numerator of (2.9) has the order \[M_{n}\sum_{m=1}^{M_{n}}R_{n}\left(m,\mathbf{f}\right)\asymp\left\{\begin{array}{ll}M_{n}^{-2\alpha_{1}+3}&1/2<\alpha_{1}<1,\\ M_{n}\log M_{n}&\alpha_{1}=1,\\ M_{n}&\alpha_{1}>1.\end{array}\right.\] We now turn to evaluate the order of the denominator of (2.9). Define \(g(x)=\int_{0}^{\frac{1}{1+x^{2\alpha_{1}}}}t^{-\frac{1}{2\alpha_{1}}}(1-t)^{\frac{1}{2\alpha_{1}}-1}dt\), whose derivative is \(g^{\prime}(x)=-\frac{2\alpha_{1}}{1+x^{2\alpha_{1}}}\).
Based on the proof of Example 1 in Peng and Yang (2022), we have \[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f}) \asymp n^{-1+1/(2\alpha_{1})}\left[g(0)-g\left(\frac{M_{n}}{m_{n}^ {*}}\right)\right]+M_{n}^{-2\alpha_{1}+1}\] (A.1.29) \[\asymp n^{-1+1/(2\alpha_{1})}\left[-g^{\prime}(0)\left(\frac{M_{ n}}{m_{n}^{*}}\right)\right]+M_{n}^{-2\alpha_{1}+1}\] \[\asymp\frac{M_{n}}{n}+M_{n}^{-2\alpha_{1}+1}\asymp M_{n}^{-2 \alpha_{1}+1},\] where the second approximation follows from Taylor's expansion, the third approximation follows from \(m_{n}^{*}\asymp n^{1/(2\alpha_{1})}\), and the last approximation follows from the fact \(M_{n}=o(m_{n}^{*})\) and \(m_{n}^{*}\asymp n^{1/(2\alpha_{1})}\). Combining (A.1.28) with (A.1.29) gives (3.1). When \(\theta_{j}=\exp(-j^{\alpha_{2}})\), \(\alpha_{2}>0\), in the same manner, we can see that the numerator of (2.9) has the order \(M_{n}\). Define \(\text{Ga}(x;a)=\int_{t=x}^{\infty}t^{a-1}\exp(-t)dt\) for \(x>0\). Based on the proof of Example 2 in Peng and Yang (2022), we have \[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{s},\mathbf{f})\asymp\frac{M_{n }}{n}+\text{Ga}\left(2M_{n}^{\alpha_{2}};\frac{1}{\alpha_{2}}\right)\] \[\asymp\frac{M_{n}}{n}+(2M_{n}^{\alpha_{2}})^{\frac{1}{\alpha_{2}} -1}\exp(-2M_{n}^{\alpha_{2}}),\] where the second approximation is based on the asymptotic expansion of the incomplete gamma-function. Thus (2.9) is reduced to \(M_{n}<(1/2)^{1/\alpha_{2}}\,m_{n}^{*}\), where \(m_{n}^{*}=[(1/2)\log(n/\sigma^{2})]^{1/\alpha_{2}}\). This completes the proof. #### a.1.8 Proof of Theorem 1 Recall that \(\widehat{\theta}_{l}=\boldsymbol{\phi}_{l}^{\top}\mathbf{y}/\sqrt{n}\), \(\theta_{l}=\boldsymbol{\phi}_{l}^{\top}\mathbf{f}/\sqrt{n}\), and \(e_{l}=\boldsymbol{\phi}_{l}^{\top}\boldsymbol{\epsilon}/\sqrt{n}\), \(l=1,\ldots,p_{n}\). Define \(z_{l}=\sqrt{n}e_{l}/\sigma,l=1,\ldots,k_{M_{n}}\), \(\widehat{\gamma}_{j}=\sum_{m=j}^{M_{n}}\widehat{w}_{m}\), \(\gamma_{j}^{*}=\sum_{m=j}^{M_{n}}w_{m}^{*}\), \(j=1,\ldots,M_{n}\), where \(\widehat{w}_{m}\) and \(w_{m}^{*}\) are \(m\)-th elements of \(\widehat{\mathbf{w}}|\mathcal{M}\) and \(\mathbf{w}^{*}|\mathcal{M}\), respectively. Based on (A.1.4) and (A.1.6), we have \[L_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})-C_{n}(\mathbf{w}| \mathcal{M},\mathbf{y})\] (A.1.30) \[=2\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\left(e_{ l}^{2}-\frac{\sigma^{2}}{n}+\theta_{l}e_{l}\right)+2\sum_{j=1}^{M_{n}}\sum_{l=k_{j- 1}+1}^{k_{j}}\gamma_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{ n}\right)\] \[\quad+\frac{1}{n}\sum_{j=1}^{p_{n}}\left(\boldsymbol{\phi}_{j}^{ \top}\mathbf{f}\right)^{2}-\frac{1}{n}\|\mathbf{f}\|^{2}-\frac{1}{n}\mathbf{f }^{\top}\boldsymbol{\epsilon}-\frac{1}{n}\|\boldsymbol{\epsilon}\|^{2}\] \[=2\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\left(e_{ l}^{2}-\frac{\sigma^{2}}{n}+\theta_{l}e_{l}\right)+2\sum_{j=1}^{M_{n}}\sum_{l=k_{j- 1}+1}^{k_{j}}\gamma_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{ n}\right)\] \[\quad-\frac{1}{n}\mathbf{f}^{\top}\boldsymbol{\epsilon}-\frac{1} {n}\|\boldsymbol{\epsilon}\|^{2},\] where the second equality follows from \(\widehat{\theta}_{l}=\theta_{l}+e_{l}\) and \(\theta_{j}=\phi_{j}^{\top}\mathbf{f}/\sqrt{n}\), and the last step follows from \(\|\mathbf{f}\|^{2}=\sum_{j=1}^{p_{n}}\left(\phi_{j}^{\top}\mathbf{f}\right)^{2}\). 
In addition, for any non-random \(\mathbf{w}|\mathcal{M}\), we have \[\begin{split}&\mathbb{E}C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})-R_{n}( \mathbf{w}|\mathcal{M},\mathbf{f})\\ &=\mathbb{E}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\left[( \gamma_{j}^{2}-2\gamma_{j})(\widehat{\theta}_{l}^{2}-\theta_{l}^{2})+2\gamma_ {j}\frac{\widehat{\sigma}^{2}}{n}-\gamma_{j}^{2}\frac{\sigma^{2}}{n}\right]\\ &\quad+\frac{1}{n}\mathbb{E}\sum_{i=1}^{n}y_{i}^{2}-\sum_{j=1}^{ p_{n}}\theta_{j}^{2}\\ &=2\mathbb{E}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_ {j}\left(\frac{\widehat{\sigma}^{2}}{n}-\frac{\sigma^{2}}{n}\right)+\frac{1}{n }\mathbb{E}\left(\|\mathbf{f}\|^{2}+2\mathbf{f}^{\top}\boldsymbol{\epsilon}+ \|\boldsymbol{\epsilon}\|^{2}\right)-\sum_{j=1}^{p_{n}}\theta_{j}^{2}\\ &=2\mathbb{E}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_ {j}\left(\frac{\widehat{\sigma}^{2}}{n}-\frac{\sigma^{2}}{n}\right)+\sigma^{2 },\end{split}\] (A.1.31) where the first equality follows from (A.1.5) and (A.1.6), the second equality follows from \(\mathbb{E}\widehat{\theta}_{l}^{2}=\theta_{l}^{2}+\sigma^{2}/n\), and the last equality is due to \(\|\mathbf{f}\|^{2}=\sum_{j=1}^{p_{n}}\left(\boldsymbol{\phi}_{j}^{\top}\mathbf{ f}\right)^{2}=n\sum_{j=1}^{p_{n}}\theta_{j}^{2}\). Combining (A.1.30) with (A.1.31), we have \[\begin{split}\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M}, \mathbf{f})&=\mathbb{E}C_{n}(\widehat{\mathbf{w}}|\mathcal{M}, \mathbf{y})-\sigma^{2}+2\mathbb{E}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}} \widehat{\gamma}_{j}\left(e_{l}^{2}-\frac{\sigma^{2}}{n}+\theta_{l}e_{l}\right) \\ &+2\mathbb{E}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{ \gamma}_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right) \\ \leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})& +\frac{2\sigma^{2}}{n}\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j }}\widehat{\gamma}_{j}\left(z_{l}^{2}-1\right)\right|+\frac{2\sigma}{\sqrt{n}} \mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(1-\widehat{ \gamma}_{j})\theta_{l}z_{l}\right|\\ &+2\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}} \widehat{\gamma}_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n} \right)\right|+2\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}} \gamma_{j}^{*}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right) \right|,\end{split}\] (A.1.32) where the inequality in (A.1.32) follows from \(C_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{y})\leq C_{n}(\mathbf{w}^{*}| \mathcal{M},\mathbf{y})\) and the absolute value inequalities, and \(z_{l}=\sqrt{n}e_{l}/\sigma\), \(l=1,\ldots,k_{M_{n}}\). 
From (A.1.5) and (A.1.6), in the same manner we can see that \[\begin{split}& R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})-C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})\\ &=\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\left[(\gamma_{j}^{2}-2\gamma_{j})(\theta_{l}^{2}-\widehat{\theta}_{l}^{2})+\gamma_{j}^{2}\frac{\sigma^{2}}{n}-2\gamma_{j}\frac{\widehat{\sigma}^{2}}{n}\right]+\sum_{j=1}^{p_{n}}\theta_{j}^{2}-\frac{1}{n}\sum_{i=1}^{n}y_{i}^{2}\\ &=\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(\gamma_{j}^{2}-2\gamma_{j})\left(\frac{\sigma^{2}}{n}-e_{l}^{2}-2\theta_{l}e_{l}\right)+2\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right)\\ &\quad-\frac{1}{n}\mathbf{f}^{\top}\boldsymbol{\epsilon}-\frac{1}{n}\|\boldsymbol{\epsilon}\|^{2},\end{split}\] (A.1.33) where the second equality follows from \(\widehat{\theta}_{l}^{2}=\theta_{l}^{2}+2\theta_{l}e_{l}+e_{l}^{2}\). Combining (A.1.33) with (A.1.31), we have \[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f}) =\mathbb{E}C_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{y})-\sigma^{2}+\mathbb{E}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(\widehat{\gamma}_{j}^{2}-2\widehat{\gamma}_{j})\left(\frac{\sigma^{2}}{n}-e_{l}^{2}-2\theta_{l}e_{l}\right)\] (A.1.34) \[+2\mathbb{E}\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right)\] \[\leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f}) +\frac{\sigma^{2}}{n}\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_{j}^{2}\left(z_{l}^{2}-1\right)\right|+\frac{2\sigma^{2}}{n}\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_{j}\left(z_{l}^{2}-1\right)\right|\] \[+\frac{2\sigma}{\sqrt{n}}\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(1-\widehat{\gamma}_{j})^{2}\theta_{l}z_{l}\right|+2\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right)\right|\] \[+2\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}^{*}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right)\right|.\] The main idea of the proof is to take the upper bounds of the terms in (A.1.32) and (A.1.34). We first bound \(\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_{j}(z_{l}^{2}-1)\right|\). Define \(k_{0}=0\), \(\widehat{\gamma}_{M_{n}+1}=0\), and a random variable \(\kappa_{1}=\max_{1\leq j\leq M_{n}}\{|\sum_{l=1}^{k_{j}}(z_{l}^{2}-1)|k_{j}^{-1/2}\}\). Note that \[\begin{split}&\sum_{j=1}^{M_{n}}\frac{\left(k_{j}^{\frac{1}{2}}-k_{j-1}^{\frac{1}{2}}\right)^{2}}{k_{j}-k_{j-1}}=1+\sum_{j=2}^{M_{n}}\left(\frac{k_{j}^{\frac{1}{2}}-k_{j-1}^{\frac{1}{2}}}{k_{j}-k_{j-1}}\right)^{2}(k_{j}-k_{j-1})\\ &\leq 1+\sum_{j=2}^{M_{n}}\frac{k_{j}-k_{j-1}}{4k_{j-1}}=1+\sum_{j=1}^{M_{n}-1}\frac{k_{j+1}-k_{j}}{4k_{j}},\end{split}\] (A.1.35) where the inequality is due to the concavity of the function \(h_{1}(x)=x^{1/2}\), which yields \(k_{j}^{1/2}-k_{j-1}^{1/2}\leq(k_{j}-k_{j-1})/(2k_{j-1}^{1/2})\).
Using summation by parts, we can rewrite the first term as \[\begin{split}&\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_{j}(z_{l}^{2}-1)\right|=\mathbb{E}\left|\sum_{j=1}^{M_{n}}(\widehat{\gamma}_{j}-\widehat{\gamma}_{j+1})\sum_{l=1}^{k_{j}}(z_{l}^{2}-1)\right|\\ &\leq\mathbb{E}\left\{\kappa_{1}\sum_{j=1}^{M_{n}}(\widehat{\gamma}_{j}-\widehat{\gamma}_{j+1})k_{j}^{\frac{1}{2}}\right\}=\mathbb{E}\left\{\kappa_{1}\sum_{j=1}^{M_{n}}\widehat{\gamma}_{j}\left(k_{j}^{\frac{1}{2}}-k_{j-1}^{\frac{1}{2}}\right)\right\}\\ &\leq\mathbb{E}\left\{\kappa_{1}\left[\sum_{j=1}^{M_{n}}\widehat{\gamma}_{j}^{2}\left(k_{j}-k_{j-1}\right)\right]^{\frac{1}{2}}\left[\sum_{j=1}^{M_{n}}\frac{\left(k_{j}^{\frac{1}{2}}-k_{j-1}^{\frac{1}{2}}\right)^{2}}{k_{j}-k_{j-1}}\right]^{\frac{1}{2}}\right\}\\ &\leq\frac{C\sqrt{n}}{\sigma}\left(\mathbb{E}\kappa_{1}^{2}\right)^{\frac{1}{2}}\left[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\right]^{\frac{1}{2}}\left(1+\sum_{j=1}^{M_{n}-1}\frac{k_{j+1}-k_{j}}{4k_{j}}\right)^{\frac{1}{2}},\end{split}\] (A.1.36) where the first inequality follows from the definition of \(\kappa_{1}\), the second inequality follows from the Cauchy-Schwarz inequality, and the third inequality follows from the Cauchy-Schwarz inequality, (A.1.5), and (A.1.35). The task is now to construct an upper bound for \(\left(\mathbb{E}\kappa_{1}^{2}\right)^{1/2}\) by Lemma 2. It remains to check (A.1.8) for the stochastic process \(\xi_{1}(t)=\sum_{l=1}^{k_{t}}(z_{l}^{2}-1)k_{t}^{-1/2}\). Recall that \(z_{l}=\sqrt{n}e_{l}/\sigma=\boldsymbol{\phi}_{l}^{\top}\boldsymbol{\epsilon}/\sigma\). Define an \(n\times n\) matrix \[\mathbf{A}\triangleq\frac{\sum_{l=1}^{k_{t}}\boldsymbol{\phi}_{l}\boldsymbol{\phi}_{l}^{\top}}{\sigma^{2}\sqrt{k_{t}}}.\] Then we can write \(\xi_{1}(t)\) as \[\xi_{1}(t)=\boldsymbol{\epsilon}^{\top}\left(\frac{\sum_{l=1}^{k_{t}}\boldsymbol{\phi}_{l}\boldsymbol{\phi}_{l}^{\top}}{\sigma^{2}\sqrt{k_{t}}}\right)\boldsymbol{\epsilon}-\sqrt{k_{t}}=\boldsymbol{\epsilon}^{\top}\mathbf{A}\boldsymbol{\epsilon}-\mathbb{E}\boldsymbol{\epsilon}^{\top}\mathbf{A}\boldsymbol{\epsilon}.\] Using the Hanson-Wright inequality for sub-Gaussian random variables (Theorem 1.1 of Rudelson and Vershynin, 2013), we know that there exists a positive absolute constant \(c\) such that for any \(x\geq 0\), \[\mathbb{P}\left(|\xi_{1}(t)|>x\right) =\mathbb{P}\left(\left|\boldsymbol{\epsilon}^{\top}\mathbf{A}\boldsymbol{\epsilon}-\mathbb{E}\boldsymbol{\epsilon}^{\top}\mathbf{A}\boldsymbol{\epsilon}\right|>x\right)\] (A.1.37) \[\leq 2\exp\left[-c\min\left(\frac{x}{\eta^{2}\|\mathbf{A}\|_{2}},\frac{x^{2}}{\eta^{4}\|\mathbf{A}\|_{\mathrm{F}}^{2}}\right)\right]\] \[\leq 2\exp\left[-c\min\left(x,x^{2}\right)\right],\] where the second inequality follows from \(\|\mathbf{A}\|_{2}=1/(\sigma^{2}\sqrt{k_{t}})\leq 1/\sigma^{2}\) and \(\|\mathbf{A}\|_{\mathrm{F}}^{2}=\mathrm{tr}(\mathbf{A}^{\top}\mathbf{A})=1/\sigma^{4}\). The inequality (A.1.37) also implies that \[\mathbb{P}\left(|\xi_{1}(t)|>\frac{\log x}{\lambda}\right)\leq\left\{\begin{array}{ll}2x^{-\frac{c}{\lambda^{2}}\log x}&0\leq x<\exp(\lambda),\\ 2x^{-\frac{c}{\lambda}}&x\geq\exp(\lambda),\end{array}\right.\] where \(\lambda>0\).
Thus we have \[\begin{split}\mathbb{E}\exp(\lambda|\xi_{1}(t)|)&=\int_{0}^{ \infty}\mathbb{P}(\exp(\lambda|\xi_{1}(t)|)>x)dx=\int_{0}^{\infty}\mathbb{P} \left(|\xi_{1}(t)|>\frac{\log x}{\lambda}\right)dx\\ &\leq 2\int_{0}^{\exp(\lambda)}x^{-\frac{c}{\lambda^{2}}\log x}dx+2 \int_{\exp(\lambda)}^{\infty}x^{-\frac{c}{\lambda}}dx.\end{split}\] (A.1.38) When \(0<\lambda<c\), the first term of (A.1.38) is upper bounded by \[\begin{split} 2\int_{0}^{\exp(\lambda)}x^{-\frac{c}{\lambda^{2}} \log x}dx&=\frac{2\lambda^{2}}{c}\int_{-\frac{c}{\lambda}}^{ \infty}\exp\left[-\frac{\lambda^{2}(u^{2}+u)}{c}\right]du\\ &\leq\frac{2\lambda^{2}}{c}\exp\left(\frac{\lambda^{2}}{4c} \right)\sqrt{\frac{\pi c}{\lambda^{2}}}\\ &\leq 2\exp\left(\frac{c}{4}\right)\sqrt{\pi c}<\infty.\end{split}\] (A.1.39) And the second term of (A.1.38) is \[2\int_{\exp(\lambda)}^{\infty}x^{-\frac{c}{\lambda}}dx=\frac{2}{\frac{c}{ \lambda}-1}\exp(-c+\lambda)<\infty.\] (A.1.40) Combining (A.1.39)-(A.1.40) with (A.1.38), we see that when \(0<\lambda<c\), \(\mathbb{E}\exp(\lambda|\xi_{1}(t)|)\) is uniformly upper bounded for any \(t=1,\ldots,M_{n}\), which meets the condition (A.1.8) of Lemma 2. Thus we have \(\left(\mathbb{E}\kappa_{1}^{2}\right)^{1/2}\leq C(1+\log M_{n})\), and the term \(\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_ {j}(z_{l}^{2}-1)\right|\) is upper bounded by \[\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\widehat{\gamma}_ {j}(z_{l}^{2}-1)\right|\leq\frac{C\sqrt{n}}{\sigma}[\mathbb{E}R_{n}(\widehat{ \mathbf{w}}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}}\left[\psi(\mathcal{M}) \right]^{\frac{1}{2}},\] (A.1.41) where \(\psi(\mathcal{M})\) is defined in (4.1). We now turn to find the upper bound of \(\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(1-\widehat{\gamma }_{j})\theta_{l}z_{l}\right|\). Define \(S_{t}=\sum_{l=k_{t}+1}^{k_{M_{n}}}\theta_{l}^{2}\) and a random variable \(\kappa_{2}=\max_{1\leq t\leq M_{n}}\left\{\left|\sum_{l=k_{t}+1}^{k_{M_{n}}} \theta_{l}z_{l}\right|S_{t}^{-1/2}\right\}\). Note that \[\begin{split}&\sum_{j=1}^{M_{n}}\frac{\left[(S_{j-1}+1)^{\frac{1}{2} }-(S_{j}+1)^{\frac{1}{2}}\right]^{2}}{S_{j-1}-S_{j}}\\ &=\sum_{j=1}^{M_{n}}\left[\frac{(S_{j-1}+1)^{\frac{1}{2}}-(S_{j}+ 1)^{\frac{1}{2}}}{S_{j-1}-S_{j}}\right]^{2}(S_{j-1}-S_{j})\\ &\leq\frac{1}{4}\sum_{j=1}^{M_{n}}(S_{j-1}-S_{j})<\infty,\end{split}\] (A.1.42) where the inequality follows from \(h_{2}(x)=(x+1)^{1/2}\) and \(h_{2}^{\prime}(x)=(1/2)(x+1)^{-1/2}\leq 1/2\) when \(x\geq 0\), and the second inequality is due to (A.1.7). 
Using summation by parts again, we see that \[\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(1-\widehat{\gamma}_{j})\theta_{l}z_{l}\right|\] \[=\mathbb{E}\left|\sum_{j=2}^{M_{n}}(\widehat{\gamma}_{j-1}-\widehat{\gamma}_{j})\sum_{l=k_{j-1}+1}^{k_{M_{n}}}\theta_{l}z_{l}\right|\] \[\leq\mathbb{E}\left\{\kappa_{2}\sum_{j=2}^{M_{n}}(\widehat{\gamma}_{j-1}-\widehat{\gamma}_{j})(S_{j-1}+1)^{\frac{1}{2}}(S_{j-1})^{\frac{1}{2}}(S_{j-1}+1)^{-\frac{1}{2}}\right\}\] \[\leq\mathbb{E}\left\{\kappa_{2}\sum_{j=2}^{M_{n}}(\widehat{\gamma}_{j-1}-\widehat{\gamma}_{j})(S_{j-1}+1)^{\frac{1}{2}}\right\}\] (A.1.43) \[=\mathbb{E}\left\{\kappa_{2}\sum_{j=1}^{M_{n}}(1-\widehat{\gamma}_{j})\left[(S_{j-1}+1)^{\frac{1}{2}}-(S_{j}+1)^{\frac{1}{2}}\right]\right\}\] \[\leq\mathbb{E}\left\{\kappa_{2}\left[\sum_{j=1}^{M_{n}}(1-\widehat{\gamma}_{j})^{2}(S_{j-1}-S_{j})\right]^{\frac{1}{2}}\left[\sum_{j=1}^{M_{n}}\frac{\left[(S_{j-1}+1)^{\frac{1}{2}}-(S_{j}+1)^{\frac{1}{2}}\right]^{2}}{S_{j-1}-S_{j}}\right]^{\frac{1}{2}}\right\}\] \[\leq C(\mathbb{E}\kappa_{2}^{2})^{\frac{1}{2}}[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}},\] where the first inequality is due to the definition of \(\kappa_{2}\), the third inequality follows from the Cauchy-Schwarz inequality, and the last inequality is due to the Cauchy-Schwarz inequality, (A.1.5), and (A.1.42). Now we construct an upper bound for \((\mathbb{E}\kappa_{2}^{2})^{1/2}\) by Lemma 2. Consider the stochastic process \(\xi_{2}(t)=(\sum_{l=k_{t}+1}^{k_{M_{n}}}\theta_{l}z_{l})S_{t}^{-1/2}\). Recall that \(z_{l}=\boldsymbol{\phi}_{l}^{\top}\boldsymbol{\epsilon}/\sigma\). Define an \(n\)-dimensional vector \[\mathbf{a}\triangleq\frac{1}{\sigma S_{t}^{\frac{1}{2}}}\left(\boldsymbol{\phi}_{k_{t}+1},\ldots,\boldsymbol{\phi}_{k_{M_{n}}}\right)\begin{pmatrix}\theta_{k_{t}+1}\\ \vdots\\ \theta_{k_{M_{n}}}\end{pmatrix}.\] We write \(\xi_{2}(t)\) as \[\xi_{2}(t)=\frac{1}{\sigma S_{t}^{\frac{1}{2}}}\left(\theta_{k_{t}+1},\ldots,\theta_{k_{M_{n}}}\right)\begin{pmatrix}\boldsymbol{\phi}_{k_{t}+1}^{\top}\\ \vdots\\ \boldsymbol{\phi}_{k_{M_{n}}}^{\top}\end{pmatrix}\boldsymbol{\epsilon}=\mathbf{a}^{\top}\boldsymbol{\epsilon}.\] Since the elements of \(\boldsymbol{\epsilon}\) are i.i.d. \(\eta\)-sub-Gaussian variables, from Theorem 2.6 in Wainwright (2019), we have for any \(\lambda\in\mathbb{R}\), \[\mathbb{E}\exp[\lambda\xi_{2}(t)]=\mathbb{E}\exp(\lambda\mathbf{a}^{\top}\boldsymbol{\epsilon})\leq\exp\left(\frac{\lambda^{2}\eta^{2}\|\mathbf{a}\|^{2}}{2}\right)=\exp\left(\frac{\lambda^{2}\eta^{2}}{2\sigma^{2}}\right),\] where the last equality is due to \(\|\mathbf{a}\|^{2}=1/\sigma^{2}\). This leads to \[\mathbb{E}\exp(\lambda|\xi_{2}(t)|)\leq\mathbb{E}\exp[\lambda\xi_{2}(t)]+\mathbb{E}\exp[-\lambda\xi_{2}(t)]\leq 2\exp\left(\frac{\lambda^{2}\eta^{2}}{2\sigma^{2}}\right)<\infty,\] which verifies the condition (A.1.8) of Lemma 2.
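The way such a uniform bound on exponential moments is converted into a logarithmic bound on the maximum can be sketched as follows (a heuristic for the first moment; the second-moment bound \(\mathbb{E}\kappa^{2}\lesssim(1+\log M_{n})^{2}\) used here requires a more careful argument along the same lines). If \(\mathbb{E}\exp(\lambda|\xi(t)|)\leq C_{0}\) uniformly in \(t\), then by Jensen's inequality and bounding the maximum by the sum, \[\exp\left(\lambda\mathbb{E}\max_{1\leq t\leq M_{n}}|\xi(t)|\right)\leq\mathbb{E}\exp\left(\lambda\max_{1\leq t\leq M_{n}}|\xi(t)|\right)\leq\sum_{t=1}^{M_{n}}\mathbb{E}\exp(\lambda|\xi(t)|)\leq M_{n}C_{0},\] so that \(\mathbb{E}\max_{t}|\xi(t)|\leq\lambda^{-1}(\log M_{n}+\log C_{0})\lesssim 1+\log M_{n}\).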
Thus combining Lemma 2 with (A.1.43), we have the second term \[\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(1-\widehat{\gamma }_{j})\theta_{l}z_{l}\right|\leq C[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{ M},\mathbf{f})]^{\frac{1}{2}}(1+\log M_{n}).\] (A.1.44) Based on the same reasoning adopted in (A.1.41) and (A.1.44), and the fact that \(0\leq\widehat{\gamma}_{j}\leq 1,j=1,\ldots,M_{n}\), we can also prove that \[\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}} \widehat{\gamma}_{j}^{2}(z_{l}^{2}-1)\right|\leq\frac{C\sqrt{n}}{\sigma}[ \mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}} \left[\psi(\mathcal{M})\right]^{\frac{1}{2}}\] (A.1.45) and \[\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}(1- \widehat{\gamma}_{j})^{2}\theta_{l}z_{l}\right|\leq C[\mathbb{E}R_{n}( \widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}}(1+\log M_{n}).\] (A.1.46) Using the Cauchy-Schwarz inequality and (A.1.5), we observe that \[\begin{split}&\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{ k_{j}}\widehat{\gamma}_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{ n}\right)\right|\\ &\leq\mathbb{E}\left\{\left(\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^ {k_{j}}\widehat{\gamma}_{j}^{2}\frac{\sigma^{2}}{n}\right)^{\frac{1}{2}} \left[\frac{nk_{M_{n}}}{\sigma^{2}}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{ \sigma}^{2}}{n}\right)^{2}\right]^{\frac{1}{2}}\right\}\\ &\leq[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f} )]^{\frac{1}{2}}\left[\frac{k_{M_{n}}}{n\sigma^{2}}\mathbb{E}\left(\sigma^{2}- \widehat{\sigma}^{2}\right)^{2}\right]^{\frac{1}{2}},\end{split}\] (A.1.47) and \[\mathbb{E}\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}} \gamma_{j}^{*}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n} \right)\right|\leq[R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})]^{\frac{1}{2} }\left[\frac{k_{M_{n}}}{n\sigma^{2}}\mathbb{E}\left(\sigma^{2}-\widehat{\sigma }^{2}\right)^{2}\right]^{\frac{1}{2}}.\] (A.1.48) Substituting (A.1.41), (A.1.43), and (A.1.45)-(A.1.48) into (A.1.32) and (A.1.34) yields \[\begin{split} Q_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f} )&\leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})+\frac{C\sigma} {\sqrt{n}}[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})]^{ \frac{1}{2}}\left[\psi(\mathcal{M})\right]^{\frac{1}{2}}\\ &+\left[\frac{k_{M_{n}}}{n\sigma^{2}}\mathbb{E}\left(\sigma^{2}- \widehat{\sigma}^{2}\right)^{2}\right]^{\frac{1}{2}}\left[[\mathbb{E}R_{n}( \widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}}+[R_{n}(\mathbf{w }^{*}|\mathcal{M},\mathbf{f})]^{\frac{1}{2}}\right].\end{split}\] (A.1.49) In particular, when \(Q_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\) represents \(\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\), (A.1.49) also implies that \[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f}) \leq 2\left\{R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})+\left[ \frac{k_{M_{n}}}{n\sigma^{2}}\mathbb{E}\left(\sigma^{2}-\widehat{\sigma}^{2} \right)^{2}\right]^{\frac{1}{2}}[R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f}) ]^{\frac{1}{2}}\right\}\] (A.1.50) \[+\left\{\frac{C\sigma}{\sqrt{n}}\left[\psi(\mathcal{M})\right]^{ \frac{1}{2}}+\left[\frac{2k_{M_{n}}}{n\sigma^{2}}\mathbb{E}\left(\sigma^{2}- \widehat{\sigma}^{2}\right)^{2}\right]^{\frac{1}{2}}\right\}^{2}.\] Therefore, after inserting (A.1.50) into the right side of (A.1.49) and some additional algebra, we see that (4.2) holds. 
#### a.1.9 Proof of (4.4)

For completeness, we provide a brief proof for (4.4). We first decompose \(\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\sigma^{2})^{2}\) into the variance term and the bias term \[\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\sigma^{2})^{2}=\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\mathbb{E}\widehat{\sigma}_{m_{n}}^{2})^{2}+(\mathbb{E}\widehat{\sigma}_{m_{n}}^{2}-\sigma^{2})^{2}.\] (A.1.51) Note that \[\widehat{\sigma}_{m_{n}}^{2} =\frac{1}{n-m_{n}}\left\|\mathbf{y}-\widehat{\mathbf{f}}_{m_{n}}\right\|^{2}\] (A.1.52) \[=\frac{n\|\boldsymbol{\theta}_{-m_{n}}\|^{2}}{n-m_{n}}+\frac{\boldsymbol{\epsilon}^{\top}(\mathbf{I}-\mathbf{P}_{m_{n}})\boldsymbol{\epsilon}}{n-m_{n}}+\frac{2\mathbf{f}^{\top}(\mathbf{P}_{p_{n}}-\mathbf{P}_{m_{n}})\boldsymbol{\epsilon}}{n-m_{n}},\] where \(\boldsymbol{\theta}_{-m_{n}}=(\theta_{m_{n}+1},\ldots,\theta_{p_{n}})^{\top}\). Thus, the bias term of (A.1.51) equals \[(\mathbb{E}\widehat{\sigma}_{m_{n}}^{2}-\sigma^{2})^{2}=\left(\frac{n\|\boldsymbol{\theta}_{-m_{n}}\|^{2}}{n-m_{n}}+\sigma^{2}-\sigma^{2}\right)^{2}=\frac{n^{2}\|\boldsymbol{\theta}_{-m_{n}}\|^{4}}{(n-m_{n})^{2}}.\] (A.1.53) We proceed to construct an upper bound for the variance term \(\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\mathbb{E}\widehat{\sigma}_{m_{n}}^{2})^{2}\). According to Theorem 1.1 of Rudelson and Vershynin (2013), we have \[\mathbb{P}\left(\left|\frac{\boldsymbol{\epsilon}^{\top}(\mathbf{I}-\mathbf{P}_{m_{n}})\boldsymbol{\epsilon}}{n-m_{n}}-\mathbb{E}\frac{\boldsymbol{\epsilon}^{\top}(\mathbf{I}-\mathbf{P}_{m_{n}})\boldsymbol{\epsilon}}{n-m_{n}}\right|>x\right)\] (A.1.54) \[\leq 2\exp\left[-c(n-m_{n})(x\wedge x^{2})\right].\] By the sub-Gaussian property of \(\boldsymbol{\epsilon}\), we also have \[\mathbb{P}\left(\left|\frac{2\mathbf{f}^{\top}(\mathbf{P}_{p_{n}}-\mathbf{P}_{m_{n}})\boldsymbol{\epsilon}}{n-m_{n}}\right|>x\right)\leq 2\exp\left[-\frac{c(n-m_{n})^{2}x^{2}}{n\|\boldsymbol{\theta}_{-m_{n}}\|^{2}}\right].\] (A.1.55) Combining (A.1.54)-(A.1.55) with (A.1.52) yields \[\mathbb{P}\left(|\widehat{\sigma}_{m_{n}}^{2}-\mathbb{E}\widehat{\sigma}_{m_{n}}^{2}|>x\right)\] \[\leq 4\exp\left\{-c\min\left[(n-m_{n})x,\frac{(n-m_{n})^{2}x^{2}}{(n-m_{n})\vee(n\|\boldsymbol{\theta}_{-m_{n}}\|^{2})}\right]\right\}.\] By integrating the tail probability, we have \[\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\mathbb{E}\widehat{\sigma}_{m_{n}}^{2})^{2} =\int_{0}^{\infty}\mathbb{P}\left(|\widehat{\sigma}_{m_{n}}^{2}-\mathbb{E}\widehat{\sigma}_{m_{n}}^{2}|>\sqrt{x}\right)dx\] (A.1.56) \[\lesssim\frac{1}{n-m_{n}}\vee\frac{n\|\boldsymbol{\theta}_{-m_{n}}\|^{2}}{(n-m_{n})^{2}}.\] Combining (A.1.53) with (A.1.56) gives (4.4).

#### a.1.10 Proof of Theorem 2

The proof of this theorem is straightforward in view of Theorem 1, (4.4), and Lemma 1.

#### a.1.11 Proof of Theorem 3

The proof of this theorem follows from the techniques in Cavalier and Tsybakov (2001). We first show that \[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{g},\mathbf{f})\leq(1+\zeta_{n})R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})+\frac{k_{1}\sigma^{2}}{n}.\] (A.1.57) Define an \(M_{n}\)-dimensional weight vector \(\bar{\mathbf{w}}=(\bar{w}_{1},\ldots,\bar{w}_{M_{n}})^{\top}\), where \(\bar{w}_{m}=\sum_{j=k_{m-1}+1}^{k_{m}}w_{j}^{*}\), \(\bar{\gamma}_{m}=\sum_{j=m}^{M_{n}}\bar{w}_{j}\), and \(w_{j}^{*}\) is the \(j\)-th element of \(\mathbf{w}^{*}|\mathcal{M}_{a}\).
According to (A.1.5), we have \[R_{n}(\bar{\mathbf{w}}|\mathcal{M}_{g},\mathbf{f})\leq\sum_{j=1}^{p_{n}}(1-\gamma_{j}^{*})^{2}\theta_{j}^{2}+\frac{\sigma^{2}}{n}\sum_{j=1}^{M_{n}}(k_{j}-k_{j-1})\bar{\gamma}_{j}^{2},\] (A.1.58) where the inequality follows from the fact that \(\bar{\gamma}_{m}\geq\gamma_{j}^{*}\) for any \(k_{m-1}+1\leq j\leq k_{m}\). Note that \[\begin{split}\sum_{j=1}^{M_{n}}(k_{j}-k_{j-1})\bar{\gamma}_{j}^{2}&\leq k_{1}+(1+\zeta_{n})\sum_{j=2}^{M_{n}}(k_{j-1}-k_{j-2})\bar{\gamma}_{j}^{2}\\ &\leq k_{1}+(1+\zeta_{n})\sum_{j=1}^{p_{n}}(\gamma_{j}^{*})^{2},\end{split}\] (A.1.59) where the second inequality is due to \(\bar{\gamma}_{m}\leq\gamma_{j}^{*}\) when \(k_{m-2}+1\leq j\leq k_{m-1}\). Substituting (A.1.59) into (A.1.58), we obtain (A.1.57). Then, provided \(k_{1}=o(m_{n}^{*})\) and \(\zeta_{n}=o(1)\), we have \(R_{n}(\mathbf{w}^{*}|\mathcal{M}_{g},\mathbf{f})\sim R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\). The proof is completed using the AOP theory of MMA given in Theorem 2.

#### a.1.12 Proof of Theorem 4

Define the random variable \(\Delta_{n1}=R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS1},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\), which measures the risk increment of using the reduced candidate model set \(\widehat{\mathcal{M}}_{MS1}\). In view of the risk bound (4.2), it suffices to prove \[\frac{\mathbb{E}\Delta_{n1}}{R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})}=\frac{\mathbb{E}(\Delta_{n1}1_{\bar{F}_{n}})+\mathbb{E}(\Delta_{n1}1_{F_{n}})}{R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})}\to 0\] (A.1.60) and \[\frac{\mathbb{E}\psi(\widehat{\mathcal{M}}_{MS1})}{nR_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})}\to 0.\] (A.1.61) The condition (A.1.61) is satisfied due to (5.2) and Lemma 1. Then our main task is to prove (A.1.60). For the first part of (A.1.60), we have \[\frac{\mathbb{E}(\Delta_{n1}1_{\bar{F}_{n}})}{R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})}\lesssim\frac{\mathbb{P}(\bar{F}_{n})}{m_{n}^{*}/n}\to 0,\] (A.1.62) where the inequality is due to Lemma 1 and \[\Delta_{n1} \leq R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS1},\mathbf{f})\leq\max_{\mathcal{M}\subseteq\{1,\ldots,p_{n}\}}R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\] \[\leq\max_{m\in\{1,\ldots,p_{n}\}}R_{n}(m,\mathbf{f})<C,\] and the approximation is due to the assumption (5.3). Now we turn to prove the second part of (A.1.60).
From (A.1.5), we have \[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})=\frac{\sigma^{2}}{n}+\sum_{j=2}^{p_{n}}\frac{\theta_{j}^{2}\sigma^{2}}{n\theta_{j}^{2}+\sigma^{2}}.\] Since \(R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS1},\mathbf{f})\) is defined by directly plugging \(\widehat{\mathcal{M}}_{MS1}\) into the expression of \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\), we have \[R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS1},\mathbf{f})=\frac{\widehat{l}_{n}\sigma^{2}}{n}+\sum_{j=\widehat{l}_{n}+1}^{\widehat{u}_{n}}\frac{\theta_{j}^{2}\sigma^{2}}{n\theta_{j}^{2}+\sigma^{2}}+\sum_{j=\widehat{u}_{n}+1}^{p_{n}}\theta_{j}^{2}.\] When \(F_{n}\) holds, \(\Delta_{n1}\) is upper bounded by \[\begin{split}\Delta_{n1} &=R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS1},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\\ &=\sum_{j=2}^{\widehat{l}_{n}}\left(\frac{\sigma^{2}}{n}-\frac{\sigma^{2}}{n+\frac{\sigma^{2}}{\theta_{j}^{2}}}\right)+\sum_{j=\widehat{u}_{n}+1}^{p_{n}}\frac{\theta_{j}^{2}}{1+\frac{\sigma^{2}}{n\theta_{j}^{2}}}\\ &\leq\frac{\widehat{l}_{n}}{n}\sigma^{2}+\sum_{j=\widehat{u}_{n}+1}^{p_{n}}\frac{\theta_{j}^{2}}{1+\frac{\theta_{m_{n}^{*}+1}^{2}}{\theta_{j}^{2}}}\\ &\leq\frac{\widehat{l}_{n}}{n}\sigma^{2}+\sum_{j=m_{n}^{*}+1}^{p_{n}}\frac{\theta_{j}^{2}}{1+\frac{\theta_{j}^{2}}{\theta_{\widehat{u}_{n}}^{2}}}\\ &\leq\frac{c_{2}m_{n}^{*}}{nk_{l}}+\sum_{j=m_{n}^{*}+1}^{p_{n}}\frac{\theta_{j}^{2}}{1+\frac{\theta_{j}^{2}}{\theta_{\lfloor c_{1}m_{n}^{*}k_{u}\rfloor}^{2}}},\end{split}\] where the first inequality follows from (A.1.9), the second inequality uses \(\theta_{j}^{2}\leq\theta_{\widehat{u}_{n}}^{2}\) and \(\theta_{j}^{2}\leq\theta_{m_{n}^{*}+1}^{2}\) for \(j>\widehat{u}_{n}\), and the last step is due to the definitions of \(\widehat{l}_{n}\), \(\widehat{u}_{n}\), and the event \(F_{n}\). From this, we see that when \(k_{l}\to\infty\) and \(k_{u}\to\infty\), \[\begin{split}\mathbb{E}(\Delta_{n1}1_{F_{n}}) &\leq\frac{c_{2}m_{n}^{*}}{nk_{l}}+\sum_{j=m_{n}^{*}+1}^{p_{n}}\frac{\theta_{j}^{2}}{1+\frac{\theta_{j}^{2}}{\theta_{\lfloor c_{1}m_{n}^{*}k_{u}\rfloor}^{2}}}=o\left(\frac{m_{n}^{*}}{n}\right)+o\left(\sum_{j=m_{n}^{*}+1}^{p_{n}}\theta_{j}^{2}\right)\\ &=o\left[R_{n}(m_{n}^{*},\mathbf{f})\right]=o\left[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\right],\end{split}\] where the first equality is due to Assumption 3, the second follows from (A.1.10), and the last is due to Lemma 1. Thus, we have proved the theorem.

#### a.1.13 Proof of Theorem 5

Define the random variable \(\Delta_{n2}=R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS2},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\). Let us first prove the results under Condition 1. It is evident that \[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\widehat{\mathcal{M}}_{MS2},\mathbf{f})-R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\geq\mathbb{E}\Delta_{n2}\geq\mathbb{E}(\Delta_{n2}1_{F_{n}}),\] (A.1.63) where \(R_{n}(\widehat{\mathbf{w}}|\widehat{\mathcal{M}}_{MS2},\mathbf{f})\) is defined by plugging \(\widehat{\mathcal{M}}_{MS2}\) into the expression of \(R_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\). When the event \(F_{n}\) holds, we have \[\Delta_{n2}\geq\sum_{j=2}^{\lfloor c_{1}k_{l}^{-1}m_{n}^{*}\rfloor}\left(\frac{\sigma^{2}}{n}-\frac{\sigma^{2}}{n+\frac{\sigma^{2}}{\theta_{j}^{2}}}\right)+\sum_{j=\lfloor c_{2}k_{u}m_{n}^{*}\rfloor+1}^{p_{n}}\frac{\theta_{j}^{2}}{1+\frac{\sigma^{2}}{n\theta_{j}^{2}}}.\] (A.1.64) Recall the function \(G_{d}\) defined in (A.1.13).
Under Condition 1, there must exist two integers \(d_{3}^{*}\) and \(t_{n}^{*}=G_{d_{3}^{*}}(m_{n}^{*}+1)\) such that \(\theta_{m_{n}^{*}+1}^{2}/\theta_{t_{n}^{*}}^{2}\geq\delta^{2d_{3}^{*}}\) and \(\lfloor c_{1}k_{l}^{-1}m_{n}^{*}\rfloor-t_{n}^{*}\asymp m_{n}^{*}\) when \(k_{l}\) is bounded. Hence the first term on the right side of (A.1.64) can be lower bounded by \[\sum_{j=2}^{\lfloor c_{1}k_{l}^{-1}m_{n}^{*}\rfloor}\left(\frac{\sigma^{2}}{n}-\frac{\sigma^{2}}{n+\frac{\sigma^{2}}{\theta_{j}^{2}}}\right)\] \[=\sum_{j=2}^{\lfloor c_{1}k_{l}^{-1}m_{n}^{*}\rfloor}\frac{\sigma^{2}}{n}-\sum_{j=2}^{t_{n}^{*}}\frac{\sigma^{2}}{n+\frac{\sigma^{2}}{\theta_{j}^{2}}}-\sum_{j=t_{n}^{*}+1}^{\lfloor c_{1}k_{l}^{-1}m_{n}^{*}\rfloor}\frac{\sigma^{2}}{n+\frac{\sigma^{2}}{\theta_{j}^{2}}}\] \[\geq\frac{(\lfloor c_{1}k_{l}^{-1}m_{n}^{*}\rfloor-t_{n}^{*})\sigma^{2}}{n}-\frac{(\lfloor c_{1}k_{l}^{-1}m_{n}^{*}\rfloor-t_{n}^{*})\sigma^{2}}{n(1+\delta^{2d_{3}^{*}})}\] \[\asymp\frac{m_{n}^{*}}{n}.\] Similarly, when \(k_{u}\) is bounded, the second term in (A.1.64) has a lower bound with the order \(m_{n}^{*}/n\). Combining this with (A.1.63), we have \[\mathbb{E}(\Delta_{n2}1_{F_{n}})\gtrsim\frac{m_{n}^{*}}{n}\mathbb{P}(F_{n})\gtrsim\frac{m_{n}^{*}}{n}\sim R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}),\] where the second inequality is due to the condition \(\mathbb{P}(F_{n})>C_{1}\), and the last approximation follows from Lemma 1. Under Condition 2, it is easy to see \[\mathbb{E}R_{n}(\widehat{\mathbf{w}}|\widehat{\mathcal{M}}_{MS2},\mathbf{f})=[1+o(1)]\mathbb{E}R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS2},\mathbf{f}).\] (A.1.65) Indeed, based on the risk bound (4.2), we only need to show that \(\mathbb{E}\psi(\widehat{\mathcal{M}}_{MS2})=o(m_{n}^{*})\). Note that \[\mathbb{E}\psi(\widehat{\mathcal{M}}_{MS2})\asymp\mathbb{E}\log(k_{l}k_{u})[\log(\widehat{u}_{n}-\widehat{l}_{n})]^{2}\leq C=o(m_{n}^{*}),\] where the inequality is due to the fact that \(\widehat{u}_{n}-\widehat{l}_{n}\) is bounded almost surely. Thus (A.1.65) is proved. Then define a candidate model set that contains a single model \(\widehat{\mathcal{M}}_{MS3}=\{\widehat{m}_{n}\}\). We see that \[\begin{split}&\mathbb{E}R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS2},\mathbf{f})\leq\mathbb{E}R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS3},\mathbf{f})\\ &=\mathbb{E}R_{n}(\widehat{m}_{n},\mathbf{f})\sim R_{n}(m_{n}^{*},\mathbf{f})\sim R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f}),\end{split}\] (A.1.66) where the last approximation follows from Lemma 1. On the other hand, we have \[R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\leq\mathbb{E}R_{n}(\mathbf{w}^{*}|\widehat{\mathcal{M}}_{MS2},\mathbf{f}).\] (A.1.67) By combining (A.1.66)-(A.1.67) with (A.1.65), we obtain the desired conclusion.

#### a.1.14 Proof of Theorem 6

We first give some well-established minimax results. According to (A.1.5), we have \[R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})=\sum_{j=1}^{n}\left[(1-\gamma_{j})^{2}\theta_{j}^{2}+\frac{\sigma^{2}\gamma_{j}^{2}}{n}\right],\] (A.1.68) where \(\gamma_{j}=\sum_{m=j}^{n}w_{m}\). Note that the MA risk (A.1.68) coincides with the risk of the linear estimator \(\widehat{\boldsymbol{\theta}}(\boldsymbol{\gamma})=(\gamma_{1}\widehat{\theta}_{1},\ldots,\gamma_{n}\widehat{\theta}_{n})^{\top}\) in the Gaussian sequence model (A.1.1), i.e., \(R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})=\mathbb{E}\|\widehat{\boldsymbol{\theta}}(\boldsymbol{\gamma})-\boldsymbol{\theta}\|^{2}\).
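As a quick sanity check on this identification: writing \(\widehat{\theta}_{j}=\theta_{j}+e_{j}\) with \(\mathbb{E}e_{j}=0\) and \(\mathbb{E}e_{j}^{2}=\sigma^{2}/n\), each coordinate of the linear estimator contributes \[\mathbb{E}\left(\gamma_{j}\widehat{\theta}_{j}-\theta_{j}\right)^{2}=(1-\gamma_{j})^{2}\theta_{j}^{2}+\frac{\sigma^{2}\gamma_{j}^{2}}{n},\] and summing over \(j\) recovers (A.1.68).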
For the Gaussian sequence model, Pinsker (1980) obtained an exact evaluation of the linear minimax risk over the ellipsoid \(\Theta(\alpha,R)\) and showed that the optimal minimax risk is asymptotically equivalent to the optimal linear minimax risk. Pinsker (1980)'s results yield the minimax risk and the linear-combined minimax risk of MA \[R_{M}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\sim R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\sim C_{1}\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\alpha}{2\alpha+1}},\] (A.1.69) where \(C_{1}\) is the Pinsker constant, which only depends on \(\alpha\) and \(R\). Define \(x_{+}=\max(x,0)\). The minimax optimal weights are given by \(\widetilde{w}_{j}^{*}=\widetilde{\gamma}_{j}^{*}-\widetilde{\gamma}_{j+1}^{*}\), \(j=1,\ldots,n\), with the convention \(\widetilde{\gamma}_{n+1}^{*}=0\), where \[\widetilde{\gamma}_{j}^{*}=\left[1-C_{2}\left(\frac{\sigma^{2}}{n}\right)^{\frac{\alpha}{2\alpha+1}}j^{\alpha}\right]_{+},\] (A.1.70) and \(C_{2}\) is a constant that depends on \(\alpha\) and \(R\). Since \(\widetilde{\gamma}_{1}^{*}\to 1\) and \(\widetilde{\gamma}_{j}^{*}\geq\widetilde{\gamma}_{j+1}^{*}\), we see that \((\widetilde{w}_{1}^{*},\ldots,\widetilde{w}_{n}^{*})\) approximately lies in the unit simplex \(\mathcal{W}_{n}\). Then, taking the supremum on both sides of (4.2) over \(\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}\) gives \[\begin{split}&\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\widehat{\mathbf{w}}|\mathcal{M}_{a},\mathbf{f})\leq\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})+\frac{C\sigma^{2}}{n}\psi(\mathcal{M}_{a})\\ &+\frac{C\sigma}{\sqrt{n}}\left[\psi(\mathcal{M}_{a})\right]^{\frac{1}{2}}\left[\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\right]^{\frac{1}{2}}+C\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}\rho\left(n,\mathcal{M}_{a},\mathbf{f},\widehat{\sigma}^{2},\sigma^{2}\right).\end{split}\] (A.1.71) The first term on the right side of (A.1.71) is upper bounded by \[\begin{split}&\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}^{*}|\mathcal{M}_{a},\mathbf{f})\leq\inf_{\mathbf{w}\in\mathcal{W}_{n}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})\\ &=R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]+\inf_{\mathbf{w}\in\mathcal{W}_{n}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})-\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f}),\end{split}\] (A.1.72) where the inequality is due to the definition of \(\mathbf{w}^{*}|\mathcal{M}_{a}\), and the equality is due to the definition of \(R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\).
The last term on the right side of (A.1.71) is upper bounded by \[\begin{split}&\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}\rho\left(n,\mathcal{M}_{a},\mathbf{f},\widehat{\sigma}^{2},\sigma^{2}\right)\leq\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}\left[\frac{1}{\sigma^{2}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}\right]\\ &+\left\{\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}\left[\frac{1}{\sigma^{2}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}\right]\right\}^{\frac{1}{2}}\left\{R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]+\inf_{\mathbf{w}\in\mathcal{W}_{n}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.-\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})\right\}^{\frac{1}{2}}.\end{split}\] (A.1.73) Thus, it remains to prove \[\inf_{\mathbf{w}\in\mathcal{W}_{n}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})-\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})=o\left(R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\right),\] (A.1.74) \[n^{-1}\psi(\mathcal{M}_{a})=o\left(R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\right),\] (A.1.75) and \[\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}=o\left(R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\right)\] (A.1.76) for all \(\alpha>0\) and \(R>0\). For (A.1.74), using the arguments in Chapter 3 of Tsybakov (2008), we have \[\begin{split}&\inf_{\mathbf{w}\in\mathcal{W}_{n}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})-\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})\\ &\asymp\frac{1-\widetilde{\gamma}_{1}^{*}}{n}=o\left(R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\right),\end{split}\] (A.1.77) where the last equality is due to (A.1.69)-(A.1.70). The condition (A.1.75) can be easily proved for all \(\alpha>0\) and \(R>0\) by noticing \(\psi(\mathcal{M}_{a})\asymp(\log n)^{3}\) and (A.1.69). The condition (A.1.76) is satisfied when the estimator \(\widehat{\sigma}_{D}^{2}\) with the parametric rate \(1/n\) is adopted. When \(\widehat{\sigma}^{2}=\widehat{\sigma}_{m_{n}}^{2}\) with \(m_{n}=\lfloor kn\rfloor\) \((0<k<1)\), we have \[\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\sigma^{2})^{2}\lesssim n^{-1}\vee\left(\sum_{j=\lfloor kn\rfloor+1}^{n}\theta_{j}^{2}\right)^{2}\] \[\leq n^{-1}\vee\left[(kn)^{-2\alpha}\sum_{j=\lfloor kn\rfloor+1}^{n}j^{2\alpha}\theta_{j}^{2}\right]^{2}\lesssim n^{-1}\lor n^{-4\alpha},\] where the first inequality follows from (4.4), and the third inequality follows from (6.1). Thus, we obtain \[\sup_{\mathbf{f}\in\mathcal{F}_{\Theta(\alpha,R)}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}\lesssim n^{-1}\lor n^{-4\alpha}=o\left(R_{L}\left[\mathcal{F}_{\Theta(\alpha,R)}\right]\right)\] (A.1.78) for all \(\alpha>0\) and \(R>0\). Combining (A.1.71)-(A.1.76), we have proved the exact linear-combined minimax adaptivity of MMA on the family of ellipsoids. According to (A.1.69), MMA also achieves the exact minimax adaptivity on the family of ellipsoids.
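To make (A.1.68)-(A.1.70) concrete, the following small R sketch (purely illustrative; the constant `C2` below is an arbitrary placeholder rather than the Pinsker constant, and \(\theta_{j}=j^{-1}\) is just one admissible sequence) evaluates the cumulative weights (A.1.70), checks that the induced weights essentially lie in the unit simplex, and computes the MA risk (A.1.68):

```r
# Illustrative sketch (not from the paper's code): Pinsker-type cumulative
# weights (A.1.70) and the induced MA risk (A.1.68) for theta_j = j^(-1).
# C2 is an arbitrary placeholder, not the Pinsker constant.
n <- 1000; sigma2 <- 1; alpha <- 1; C2 <- 1
j <- 1:n
theta2 <- j^(-2)                                            # theta_j^2
gam <- pmax(1 - C2 * (sigma2 / n)^(alpha / (2 * alpha + 1)) * j^alpha, 0)
w <- gam - c(gam[-1], 0)                                    # w_j = gamma_j - gamma_{j+1}
stopifnot(all(w >= 0), abs(sum(w) - gam[1]) < 1e-10)        # sum(w) = gamma_1, close to 1
risk <- sum((1 - gam)^2 * theta2 + sigma2 * gam^2 / n)      # MA risk (A.1.68)
c(risk = risk, minimax_rate = (sigma2 / n)^(2 * alpha / (2 * alpha + 1)))
```

Since `gam` is non-increasing, the induced weights are nonnegative and sum to \(\widetilde{\gamma}_{1}^{*}\approx 1\), matching the remark above that the weight vector approximately lies in \(\mathcal{W}_{n}\).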
The linear-combined minimax risk over the hyperrectangle is \[R_{L}\left[\mathcal{F}_{\Theta^{H}(c,q)}\right]=\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta^{H}(c,q)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})\] (A.1.79) \[=\sum_{j=1}^{n}\frac{c^{2}j^{-2q}\sigma^{2}}{nc^{2}j^{-2q}+\sigma^{2}}\asymp n^{-1+\frac{1}{2q}},\] where the second equality is due to (6.2) and (A.1.68), and the last approximation can be obtained by a technique similar to that in the proof of Theorem 1 of Peng and Yang (2022). Likewise, by taking the supremum on both sides of (4.2) over \(\mathbf{f}\in\mathcal{F}_{\Theta^{H}(c,q)}\), we see that the results can be proved if we show \[\inf_{\mathbf{w}\in\mathcal{W}_{n}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta^{H}(c,q)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})-\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta^{H}(c,q)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})=o\left(R_{L}\left[\mathcal{F}_{\Theta^{H}(c,q)}\right]\right),\] (A.1.80) \[n^{-1}\psi(\mathcal{M}_{a})=o\left(R_{L}\left[\mathcal{F}_{\Theta^{H}(c,q)}\right]\right),\] (A.1.81) and \[\sup_{\mathbf{f}\in\mathcal{F}_{\Theta^{H}(c,q)}}\mathbb{E}\left(\widehat{\sigma}^{2}-\sigma^{2}\right)^{2}=o\left(R_{L}\left[\mathcal{F}_{\Theta^{H}(c,q)}\right]\right)\] (A.1.82) for all \(c>0\) and \(q>1/2\). Note that \[\begin{split}&\inf_{\mathbf{w}\in\mathcal{W}_{n}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta^{H}(c,q)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})-\inf_{\mathbf{w}}\sup_{\mathbf{f}\in\mathcal{F}_{\Theta^{H}(c,q)}}R_{n}(\mathbf{w}|\mathcal{M}_{a},\mathbf{f})\\ &=\frac{\sigma^{4}}{n^{2}c^{2}+n\sigma^{2}}=o\left(R_{L}\left[\mathcal{F}_{\Theta^{H}(c,q)}\right]\right),\end{split}\] (A.1.83) which implies (A.1.80). The equation (A.1.81) holds for all \(c>0\) and \(q>1/2\) since \(\psi(\mathcal{M}_{a})\asymp(\log n)^{3}\) and (A.1.79). The condition (A.1.82) is naturally satisfied for the estimator \(\widehat{\sigma}_{D}^{2}\). When \(\widehat{\sigma}^{2}=\widehat{\sigma}_{m_{n}}^{2}\) with \(m_{n}=\lfloor kn\rfloor\) \((0<k<1)\) is adopted, we have \[\mathbb{E}(\widehat{\sigma}_{m_{n}}^{2}-\sigma^{2})^{2} \lesssim n^{-1}\vee\left(\sum_{j=\lfloor kn\rfloor+1}^{n}\theta_{j}^{2}\right)^{2}\leq n^{-1}\vee\left(c^{2}\sum_{j=\lfloor kn\rfloor+1}^{n}j^{-2q}\right)^{2}\] \[\lesssim n^{-1}\lor n^{-2q+1}=o\left(n^{-1+\frac{1}{2q}}\right)\] for all \(q>1/2\), which implies (A.1.82). Thus, we see that the MMA estimator is adaptive in the exact linear-combined minimax sense on the family of hyperrectangles.

### AOP in terms of the squared loss

Theorems 1-2 in the main text focus on the squared risk of the MMA estimator. Note that definitions of AOP in terms of statistical loss have also been commonly adopted in the MS (Stone, 1984; Li, 1987; Shao, 1997) and MA literature (Hansen, 2007; Wan et al., 2010). The following corollary shows that, under the same assumptions as in Theorem 2, MMA is optimal in the sense that its squared loss asymptotically converges to that of the oracle MA estimator in probability.

**Corollary 1**.: _Suppose Assumption 1 holds.
As \(n\to\infty,\) if the conditions (4.5)-(4.6) are satisfied, then we have_ \[\frac{L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})}{\inf_{\mathbf{w}\in \mathcal{W}_{Mn}}L_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}\to_{p}1,\] _where \(\to_{p}\) means convergence in probability._ Proof.: From (A.1.30), the MMA criterion can be decomposed as \[C_{n}(\mathbf{w}|\mathcal{M},\mathbf{y})=L_{n}(\mathbf{w}| \mathcal{M},\mathbf{f})-2\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{ j}\left(e_{l}^{2}-\frac{\sigma^{2}}{n}\right)\] \[-2\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\theta_{ l}e_{l}-2\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\left(\frac{ \sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right)\] \[+\frac{1}{n}\mathbf{f}^{\top}\boldsymbol{\epsilon}+\frac{1}{n} \|\boldsymbol{\epsilon}\|^{2}.\] Following the technique in Li (1987), it is sufficient to verify \[\sup_{\mathbf{w}\in\mathcal{W}_{Mn}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_ {j-1}+1}^{k_{j}}\gamma_{j}\left(e_{l}^{2}-\frac{\sigma^{2}}{n}\right)\right| }{R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}\to_{p}0,\] (A.2.1) \[\sup_{\mathbf{w}\in\mathcal{W}_{Mn}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_ {j-1}+1}^{k_{j}}\gamma_{j}\theta_{l}e_{l}\right|}{R_{n}(\mathbf{w}|\mathcal{M },\mathbf{f})}\to_{p}0,\] (A.2.2) \[\sup_{\mathbf{w}\in\mathcal{W}_{Mn}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_ {j-1}+1}^{k_{j}}\gamma_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{ 2}}{n}\right)\right|}{R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}\to_{p}0,\] (A.2.3) \[\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\left|\frac{L_{n}(\mathbf{w}|\mathcal{M}, \mathbf{f})}{R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}-1\right|\to_{p}0.\] (A.2.4) In particular, (A.2.4) is equivalent to \[\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{ j-1}+1}^{k_{j}}\gamma_{j}^{2}\left(e_{l}^{2}-\frac{\sigma^{2}}{n}\right) \right|}{R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}\to_{p}0\] (A.2.5) and \[\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_ {j-1}+1}^{k_{j}}\gamma_{j}^{2}\theta_{l}e_{l}\right|}{R_{n}(\mathbf{w}| \mathcal{M},\mathbf{f})}\to_{p}0.\] (A.2.6) As an example, we prove (A.2.1) and (A.2.3). Recall that \(z_{l}=\sqrt{n}e_{l}/\sigma\), \(l=1,\ldots,k_{M_{n}}\). 
For any \(\delta>0\), we have \[\mathbb{P}\left\{\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\left(e_{l}^{2}-\frac{\sigma^{2}}{n}\right)\right|}{R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}>\delta\right\}\] \[=\mathbb{P}\left\{\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\sigma^{2}\gamma_{j}\left(z_{l}^{2}-1\right)\right|}{nR_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}>\delta\right\}\] \[\leq\mathbb{P}\left\{\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\kappa_{1}\left[\sum_{j=1}^{M_{n}}\sigma^{4}\gamma_{j}^{2}\left(k_{j}-k_{j-1}\right)\right]^{\frac{1}{2}}\left[\sum_{j=1}^{M_{n}}\frac{\left(k_{j}^{\frac{1}{2}}-k_{j-1}^{\frac{1}{2}}\right)^{2}}{k_{j}-k_{j-1}}\right]^{\frac{1}{2}}}{\left[nR_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\right]^{\frac{1}{2}}\left[nR_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\right]^{\frac{1}{2}}}>\delta\right\}\] (A.2.7) \[\leq(\mathbb{E}\kappa_{1}^{2})\delta^{-2}\left[nR_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\right]^{-1}\sigma^{2}\left[\sum_{j=1}^{M_{n}}\frac{\left(k_{j}^{\frac{1}{2}}-k_{j-1}^{\frac{1}{2}}\right)^{2}}{k_{j}-k_{j-1}}\right]\] \[\leq\frac{C\psi(\mathcal{M})}{nR_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})}\to 0,\] where the first inequality follows from (A.1.36), the second inequality follows from (A.1.5) and Markov's inequality, and the last inequality follows from the upper bound on \(\mathbb{E}\kappa_{1}^{2}\) and the definition of \(\psi(\mathcal{M})\). For any \(\delta>0\), we have \[\mathbb{P}\left\{\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\left|\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right)\right|}{R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}>\delta\right\}\] \[\leq\mathbb{P}\left\{\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\left(\sum_{j=1}^{M_{n}}\sum_{l=k_{j-1}+1}^{k_{j}}\gamma_{j}^{2}\frac{\sigma^{2}}{n}\right)^{\frac{1}{2}}\left[\frac{nk_{M_{n}}}{\sigma^{2}}\left(\frac{\sigma^{2}}{n}-\frac{\widehat{\sigma}^{2}}{n}\right)^{2}\right]^{\frac{1}{2}}}{R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})}>\delta\right\}\] \[\leq\mathbb{P}\left\{\sup_{\mathbf{w}\in\mathcal{W}_{M_{n}}}\frac{\left[\frac{k_{M_{n}}}{n\sigma^{2}}\left(\sigma^{2}-\widehat{\sigma}^{2}\right)^{2}\right]^{\frac{1}{2}}}{\left[R_{n}(\mathbf{w}|\mathcal{M},\mathbf{f})\right]^{\frac{1}{2}}}>\delta\right\}\] \[\leq\mathbb{P}\left\{\left[\frac{k_{M_{n}}}{n\sigma^{2}}\left(\sigma^{2}-\widehat{\sigma}^{2}\right)^{2}\right]^{\frac{1}{2}}>\delta\left[R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\right]^{\frac{1}{2}}\right\}\] \[\leq\frac{\mathbb{E}\left[\frac{k_{M_{n}}}{\sigma^{2}}\left(\sigma^{2}-\widehat{\sigma}^{2}\right)^{2}\right]}{\delta^{2}nR_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})}\to 0.\] The remaining claims (A.2.2), (A.2.5), and (A.2.6) can be proved using techniques similar to those in Section A.1.8 and (A.2.7), so we omit the details here. ### Additional Numerical Results #### a.3.1 Assessing the full AOP of MMA To illustrate the full-AOP theory in Section 4, we focus on the MMA estimator based on the largest candidate model set \(\mathcal{M}_{a}\) as a representative. Let \(\mathbf{f}^{(r)}\) and \(\widehat{\mathbf{f}}^{(r)}\) denote the true mean vector and the estimated mean vector in the \(r\)-th replicate, respectively.
We plot the risk ratio \[\text{Ratio}=\frac{R^{-1}\sum_{r=1}^{R}\|\mathbf{f}^{(r)}-\widehat{\mathbf{f}}^{(r)}_{\widehat{\mathbf{w}}|\mathcal{M}_{a}}\|^{2}}{R^{-1}\sum_{r=1}^{R}\min_{\mathbf{w}\in\mathcal{W}_{p_{n}}}\|\mathbf{f}^{(r)}-\widehat{\mathbf{f}}^{(r)}_{\mathbf{w}|\mathcal{M}_{a}}\|^{2}}\] (A.3.1) as a function of \(n\), where \(\widehat{\mathbf{f}}^{(r)}_{\widehat{\mathbf{w}}|\mathcal{M}_{a}}\) is the MMA estimator in the \(r\)-th replicate. The optimizations involved in (A.3.1) can be efficiently performed by quadratic programming; for example, the quadprog package in R is applicable (the explicit quadratic program is spelled out below, after the discussion of Figure 2 (a)). The simulation results are displayed in Figure 1. As shown in the left panel of Figure 1, the curves decrease gradually and tend to \(1\) as the sample size \(n\) increases. This feature confirms our theoretical understanding that MMA attains the full AOP without restricting the weight set or the candidate model set when coefficients decay at a polynomial rate. Another observation is that when the sample size \(n\) is fixed, the risk ratio increases as \(\alpha_{1}\) increases from \(0.51\) to \(1.5\). This phenomenon implies that it is more difficult for MMA to achieve the full AOP when coefficients decay fast, which is expected. The simulation results in Case 2 also seem to support our AOP theory in Section 4, which claims that the MMA estimator based on \(\mathcal{M}_{a}\) achieves the full AOP when \(0<\alpha_{2}<1/3\). Indeed, as observed in the right panel of the figure, the curve with \(\alpha_{2}<1/3\) still shows an apparent downward trend. However, the curves with large \(\alpha_{2}\) exhibit quite different patterns. It seems that the risk ratio experiences a two-phase process, a sharp increase when \(n\leq 300\) followed by a slight decrease when \(n\) approaches \(1000\). Due to limited computing power, it is not easy to check by simulation whether these curves will finally tend to \(1\) when \(n\) is sufficiently large. #### a.3.2 Comparing different choices of candidate model set The primary purpose of this subsection is to compare several full-AOP MMA strategies, which are based on different candidate model sets as summarized in Table 1. The competing methods include M-G1 with \(k_{1}=\lceil\log n\rceil\) and \(\zeta_{n}=0\), M-G2 with \(k_{1}=\lceil\log n\rceil\) and \(\zeta_{n}=1/\log n\), M-MS1 with \(k_{l}=k_{u}=\log n\), and M-MS2 with \(\widehat{l}_{n}=1\vee(\widehat{m}_{n}-5)\) and \(\widehat{u}_{n}=p_{n}\wedge(\widehat{m}_{n}+5)\), where \(\widehat{m}_{n}\) in M-MS1 and M-MS2 is selected by Mallows' \(C_{p}\) criterion. To show the differences between the competing methods, we divide the \(\ell_{2}\) loss of these four methods by the \(\ell_{2}\) loss of the full-AOP MMA based on \(\mathcal{M}_{a}\). The simulation results are presented in Figure 2. As can be seen from Figure 2 (a), the relative risks of the methods M-G1, M-G2, and M-MS1 are near 1. This feature corroborates the findings in Theorems 3-4 that the full AOP is still realized based on these properly constructed candidate model sets. Figure 2 (a) also illustrates the consequence of over-reducing the number of candidate models. The M-MS2 method, which combines at most 11 models around \(\widehat{m}_{n}\), exhibits much higher relative risks than 1 when the coefficients decay slowly. This observation accords with our statement in Theorem 5 that M-MS2 cannot achieve the full potential of MA in Case 1.
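As promised above, the optimization in the denominator of (A.3.1) can be written as an explicit quadratic program. Let \(\mathbf{F}^{(r)}\) denote the matrix (a shorthand introduced only here) whose columns are the candidate estimates \(\widehat{\mathbf{f}}^{(r)}_{1|\mathcal{M}_{a}},\ldots,\widehat{\mathbf{f}}^{(r)}_{p_{n}|\mathcal{M}_{a}}\); then, assuming \(\mathcal{W}_{p_{n}}\) denotes the simplex of weight vectors, computing \(\min_{\mathbf{w}\in\mathcal{W}_{p_{n}}}\|\mathbf{f}^{(r)}-\widehat{\mathbf{f}}^{(r)}_{\mathbf{w}|\mathcal{M}_{a}}\|^{2}\) amounts to solving \[\min_{\mathbf{w}\in\mathbb{R}^{p_{n}}}\ \mathbf{w}^{\top}\big(\mathbf{F}^{(r)}\big)^{\top}\mathbf{F}^{(r)}\mathbf{w}-2\big(\mathbf{f}^{(r)}\big)^{\top}\mathbf{F}^{(r)}\mathbf{w}\quad\text{subject to}\quad\mathbf{1}^{\top}\mathbf{w}=1,\ \mathbf{w}\geq\mathbf{0},\] which is the standard input form of quadratic-programming routines such as solve.QP in quadprog, with one equality constraint and \(p_{n}\) nonnegativity constraints.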
Figure 1: Assessing the full AOP of the MMA estimator based on the largest candidate model set \(\mathcal{M}_{a}\) and the general continuous weight set \(\mathcal{W}_{p_{n}}\).

Figure 2: Relative risks of the competing methods in Cases 1-2. In each replication, the squared \(\ell_{2}\) loss of each method is divided by the squared \(\ell_{2}\) loss of the full-AOP MMA estimator based on \(\mathcal{M}_{a}\) and \(\mathcal{W}_{p_{n}}\).

From Figure 2 (b), we observe that the methods M-G1, M-G2, and M-MS1 perform slightly better than the MMA estimator based on \(\mathcal{M}_{a}\) when \(\alpha_{2}=0.45\) and \(0.75\). In addition, the methods M-MS1 and M-MS2 show an obvious advantage when \(\alpha_{2}=1.25\). These results further support our understanding in Section 5 that contracting the candidate model set provides certain benefits for MMA when coefficients decay fast. Interestingly, when coefficients decay extremely fast (\(\alpha_{2}=1.25\)), the curves of the methods M-G1 and M-G2 show an upward trend with some fluctuations. A sensible explanation is that the M-G methods exclude the best candidate model in this case. Note that their smallest candidate model has size \(k_{1}=\lceil\log n\rceil\), while the optimal single model in this case is \(m_{n}^{*}\asymp(\log n)^{4/5}\). Therefore, excluding the best candidate models from below can be harmful as well, due to unnecessarily large variances in the models. This is in contrast to the situation of excluding the best models from above, as done in the MR methods, which induces unnecessarily large biases in the candidate models. We also notice that the results with \(\alpha_{1}=1.5\) in Case 1 show patterns more similar to those in Case 2, while the relative risk curves with \(\alpha_{2}=0.25\) in Case 2 are more like those in Case 1. Indeed, this phenomenon is caused by the same reason stated at the end of Section ??. See Liu and Yang (2011) and Zhang and Yang (2015) for more related theoretical and numerical discussions. ### Discussions on other related works #### a.4.1 Aggregation It is worth mentioning that our work relates to a vast literature on aggregation procedures, first studied by Yang (2000c, 2001, 2004), Nemirovski (2000), Juditsky and Nemirovski (2000), and Catoni (2004). The optimal rates of aggregation have been established by Tsybakov (2003) and Wang et al. (2014), and various rate-optimal procedures have been proposed with different weight constraints (see, e.g., Tsybakov, 2003; Yang, 2004; Bunea et al., 2007; Lounici, 2007; Rigollet and Tsybakov, 2011; Dalalyan and Tsybakov, 2012; Lecue, 2013; Wang et al., 2014). A significant difference between the traditional aggregation procedures and the MMA-type methods is that the former often focus on the step of combining models, namely, _pure aggregation_, wherein one has already obtained the candidate estimates from previous studies or from data splitting (see, e.g., Yang, 2001; Lecue, 2007; Rigollet and Tsybakov, 2007). When the candidate models and the aggregation weights are trained on the same sample, some substantial progress has also been made in the aggregation literature. The exponential weighting (EW) methods in Leung and Barron (2006); Alquier and Lounici (2011); Rigollet and Tsybakov (2011); Dalalyan and Salmon (2012) and the Q-aggregation in Dai et al. (2014); Bellec (2018) are suitable for combining least squares or affine estimators from the same data.
In particular, the EW method described in Dalalyan and Salmon (2012) can be applied for convex aggregation of a list of affine estimators. Note that the EW method can be formulated as the entropy-penalized empirical risk minimization problem \[\widehat{\pi}_{EW}=\arg\inf_{\pi}\left\{\int_{\mathcal{W}_{M_{n}}}C_{n}(\mathbf{w})\pi(d\mathbf{w})+\frac{\lambda}{n}D_{\mathrm{KL}}(\pi||\pi_{0})\right\},\] (A.4.1) where \(\pi\) is a probability measure on \(\mathcal{W}_{M_{n}}\), \(C_{n}(\mathbf{w})\) is the MMA criterion (2.4), \(\lambda\) is a temperature parameter, \(\pi_{0}\) is a given prior, and \(D_{\mathrm{KL}}\) stands for the Kullback-Leibler divergence. The final EW estimator is \[\widehat{\mathbf{f}}_{EW}=\int_{\mathcal{W}_{M_{n}}}\widehat{\mathbf{f}}_{\mathbf{w}|\mathcal{M}}\widehat{\pi}_{EW}(d\mathbf{w}).\] (A.4.2) When \(\pi_{0}\) is the uniform distribution on \(\mathcal{W}_{M_{n}}\) and \(\lambda\geq 8\sigma^{2}\), Proposition 2 of Dalalyan and Salmon (2012) implies that \[\mathbb{E}L_{n}(\widehat{\mathbf{f}}_{EW},\mathbf{f})\leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})+\frac{CM_{n}\log(n)}{n}.\] (A.4.3) When \(M_{n}\) is large, Dalalyan and Salmon (2012) suggest a heavy-tailed prior \(\pi_{0}\) that favors sparse weight vectors. Their Proposition 3 shows that with a properly defined \(\pi_{0}\), \[\mathbb{E}L_{n}(\widehat{\mathbf{f}}_{EW},\mathbf{f})\leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})+\frac{C\log(nM_{n})}{n}.\] (A.4.4) First, notice that the EW estimator (A.4.2) coincides with the MMA estimator (2.5) when \(\lambda=0\) but differs from (2.5) when \(\lambda>0\). The risk bounds (A.4.3) and (A.4.4), which are obtained under the condition \(\lambda\geq 8\sigma^{2}\), are therefore not applicable for understanding the MMA method as intended in this paper. Second, the core proof technique in Dalalyan and Salmon (2012) is based on Stein's lemma (Stein, 1981), which requires \(\epsilon\) to follow a Gaussian distribution and the error variance to be estimated from independent data, which is typically unavailable. In contrast, our MMA approach can handle sub-Gaussian errors with \(\sigma^{2}\) estimated from the same data. It is worth mentioning that the risk bounds (A.4.3)-(A.4.4) also target the optimal MA risk \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\), as the MMA approach does. They can justify the full AOP of the EW method when the priors are properly selected. Proposition 7.2 of Bellec (2018) gives a risk bound for MMA when \(\epsilon\) is normally distributed and \(\sigma^{2}\) is known. Integrating the tail probability of their equation (7.4) yields \[\mathbb{E}L_{n}(\widehat{\mathbf{w}}|\mathcal{M},\mathbf{f})\leq R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})+\frac{C\log M_{n}}{n}+\left(\frac{C\log M_{n}}{n}\right)^{\frac{1}{2}}.\] (A.4.5) The bound (A.4.5) cannot justify MMA's AOP unless the optimal MA risk \(R_{n}(\mathbf{w}^{*}|\mathcal{M},\mathbf{f})\) converges slower than \((\log M_{n}/n)^{1/2}\). Note that the framework in Bellec (2018) allows one to combine a set of affine estimators, which may be applicable to some other MA problems. However, in our MMA context, Theorem 1 substantially improves (A.4.5) for AOP under much milder conditions. Our examination of MMA is in the nested model framework, which serves as a representative setup in the MS and MA literature (see, e.g., Polyak and Tsybakov, 1991; Li, 1987; Hansen, 2007). Nested models can be seen as a special case of the ordered linear smoother (Kneip, 1994).
Aggregation of ordered linear smoothers has been studied in Chernousova et al. (2013) and Bellec and Yang (2020). However, their risk bounds are in terms of the best model instead of the optimal combination of models. As shown in Peng and Yang (2022), the optimal MS risk can be substantially reduced by MA under certain conditions. #### a.4.2 Minimax adaptivity The minimax statement in Definition 3 is known as exact minimax adaptivity, which was first introduced by Efroimovich and Pinsker (1984) in the Gaussian white noise model and was further investigated for various estimators in other specific problems (see, e.g., Donoho and Johnstone, 1995; Efromovich and Pinsker, 1996; Nemirovski, 2000; Yang, 2000c; Cavalier and Tsybakov, 2002; Dalalyan and Salmon, 2012; Bellec, 2018). Our setup focuses on minimax adaptivity over the spaces of the transformed parameters \(\mathbf{\theta}\) rather than those of the original regression coefficients \(\mathbf{\beta}\). A similar setup was adopted by Dalalyan and Salmon (2012) based on a discrete-cosine transformation of \(\mathbf{f}\). Another goal considered in the literature is minimax-rate adaptation, which is less demanding but more tangible, with much wider applicability. Some MS and MA schemes have been considered to construct minimax-rate optimal estimators that require almost no assumption on the behaviors of the candidate models. For example, see Barron et al. (1999), Juditsky and Nemirovski (2000), and Yang (2000a, b); Yang and Barron (1998) for early representative work. In this paper, we show that the MMA estimator is adaptive in the exact minimax sense over the family of ellipsoids and hyperrectangles. Some other approaches, such as the blockwise constant (BC) rules (Efroimovich and Pinsker, 1984; Efromovich and Pinsker, 1996; Donoho and Johnstone, 1995; Nemirovski, 2000; Cavalier and Tsybakov, 2001, 2002), have also been used to derive exact minimax adaptive estimators on various classes. There are two notable differences between the BC rule and the MMA method. First, the adaptivity of the BC rule can be obtained only when the orders of some hyperparameters, such as the lengths of blocks, are set correctly, whereas no parameters need to be determined prior to implementing MMA. Second, the BC rule requires \(\sigma^{2}\) to be known, while the MMA method can accommodate the setting with unknown \(\sigma^{2}\), which is more applicable in regression problems. The effects of the variance estimation on MMA are seen in the risk bound (4.2). It is worth noting that the exact minimax adaptivity property over the family of ellipsoids can also be obtained by the aggregation methods in Dalalyan and Salmon (2012) and Bellec (2018), in which the candidate models are constructed from the Pinsker filters and the variance \(\sigma^{2}\) is assumed to be known or estimated from an independent sample.
Over the past few decades, model averaging (MA) has attracted attention as an alternative to the statistical approach of model selection (MS). Hansen [Econometrica 75 (2007) 1175--1189] introduced the method of Mallows model averaging (MMA), which selects the model weights using Mallows' \(C_p\) criterion. The principal theoretical justification of MMA is asymptotic optimality (AOP), which states that the risk/loss of the resulting MA estimator is asymptotically equal to that of the best but infeasible averaged model. The AOP of MMA has been proved in the literature either by restricting the weights to a special discrete weight set or by imposing restrictions on the number of candidate models. In this work, it is shown that under these restrictions the optimal MA risk cannot be attained, and MMA
2309.10730
GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models
The remarkable capabilities and intricate nature of Artificial Intelligence (AI) have dramatically escalated the imperative for specialized AI accelerators. Nonetheless, designing these accelerators for various AI workloads remains both labor- and time-intensive. While existing design exploration and automation tools can partially alleviate the need for extensive human involvement, they still demand substantial hardware expertise, posing a barrier to non-experts and stifling AI accelerator development. Motivated by the astonishing potential of large language models (LLMs) for generating high-quality content in response to human language instructions, we embark on this work to examine the possibility of harnessing LLMs to automate AI accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework intended to democratize AI accelerator design by leveraging human natural languages instead of domain-specific languages. Specifically, we first perform an in-depth investigation into LLMs' limitations and capabilities for AI accelerator design, thus aiding our understanding of our current position and garnering insights into LLM-powered automated AI accelerator design. Furthermore, drawing inspiration from the above insights, we develop a framework called GPT4AIGChip, which features an automated demo-augmented prompt-generation pipeline utilizing in-context learning to guide LLMs towards creating high-quality AI accelerator design. To our knowledge, this work is the first to demonstrate an effective pipeline for LLM-powered automated AI accelerator generation. Accordingly, we anticipate that our insights and framework can serve as a catalyst for innovations in next-generation LLM-powered design automation tools.
Yonggan Fu, Yongan Zhang, Zhongzhi Yu, Sixu Li, Zhifan Ye, Chaojian Li, Cheng Wan, Yingyan Lin
2023-09-19T16:14:57
http://arxiv.org/abs/2309.10730v1
# GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models ###### Abstract The remarkable capabilities and intricate nature of Artificial Intelligence (AI) have dramatically escalated the imperative for specialized AI accelerators. Nonetheless, designing these accelerators for various AI workloads remains both labor- and time-intensive. While existing design exploration and automation tools can partially alleviate the need for extensive human involvement, they still demand substantial hardware expertise, posing a barrier to non-experts and stifling AI accelerator development. Motivated by the astonishing potential of large language models (LLMs) for generating high-quality content in response to human language instructions, we embark on this work to examine the possibility of harnessing LLMs to automate AI accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework intended to democratize AI accelerator design by leveraging human natural languages instead of domain-specific languages. Specifically, we first perform an in-depth investigation into LLMs' limitations and capabilities for AI accelerator design, thus aiding our understanding of our current position and garnering insights into LLM-powered automated AI accelerator design. Furthermore, drawing inspiration from the above insights, we develop a framework called GPT4AIGChip, which features an automated demo-augmented prompt-generation pipeline utilizing in-context learning to guide LLMs towards creating high-quality AI accelerator design. To our knowledge, this work is the first to demonstrate an effective pipeline for LLM-powered automated AI accelerator generation. Accordingly, we anticipate that our insights and framework can serve as a catalyst for innovations in next-generation LLM-powered design automation tools. AI Accelerators, Design Automation, Large Language Models ## I Introduction The landscape of artificial intelligence (AI), driven by deep neural networks (DNNs), has recently undergone transformational progress, leading to a pressing demand for specialized AI accelerators. The remarkable capabilities and the intricate nature of AI workloads have further amplified this demand. However, the design of specialized accelerators catering to diverse AI tasks remains an arduous and time-consuming venture. Moreover, the level of hardware expertise required for using existing design exploration and automation tools [11, 13, 14, 36, 43, 17, 35] presents a formidable challenge for non-experts, stifling innovative advances in AI accelerators. This complex and technical domain is currently characterized by a steep learning curve, which limits access and prevents AI accelerator design from expanding to general AI developers, creating a widening gap between the pace of AI algorithm development and that of the corresponding accelerators. In the face of these challenges, we find inspiration in the emerging capabilities of large language models (LLMs) [16, 25, 35, 44], with their amazing capacity to generate high-quality content based on human language instructions. These capabilities present a tantalizing prospect, inspiring the central question of this study: _"Can we harness the power of LLMs to automate the design of AI accelerators?"_ Specifically, as shown in Fig.
1, LLM-powered AI accelerator design automation aims to explore the accelerator design space with the assistance of LLMs, thus generating high-quality accelerator implementations that can satisfy user requirements while minimizing human involvement.

Fig. 1: A generic LLM-powered AI accelerator design automation pipeline.

To answer the above question, we first conduct a comprehensive investigation into the limitations and capabilities of LLMs in terms of generating AI accelerator designs. This is to understand the current landscape, while also exploring how we can harness the power of LLMs to automate AI accelerator design. Informed by the insights drawn from this investigation, we develop a framework dubbed GPT4AIGChip, denoting "GPT for **AI** **G**enerated **C**hip". GPT4AIGChip aims to democratize AI accelerator design and thus make it more accessible, particularly to those not well-versed in hardware, by leveraging human natural language as design instructions rather than relying on domain-specific languages. We summarize our contributions as follows: * We thoroughly investigate the limitations and capabilities of leveraging existing LLMs to generate AI accelerator designs in order to understand our current position and draw useful insights on how to effectively leverage current LLMs in a design automation pipeline. As a tangible application of these insights, we have developed GPT4AIGChip, a framework that is the first to demonstrate LLM-powered AI accelerator design automation. * Through the above comprehensive investigation, we have identified three crucial insights for leveraging the strengths of current LLMs: Insight-(1) current LLMs struggle with understanding lengthy codes exhibiting long dependencies, particularly for their infrequently seen languages like high-level synthesis (HLS), thus necessitating the decoupling of different hardware functionalities in the design space; Insight-(2) given the scarcity of annotated data for efficient finetuning of an open-sourced LLM, employing a mix of in-context learning and the logical reasoning prowess of typically closed-sourced yet powerful LLMs is a more effective choice; and Insight-(3) it is critical to augment the prompts of LLMs with high-quality demonstrations, which are correlated to the context of the input design instructions. * Our GPT4AIGChip instantiates the aforementioned Insight-(1) by constructing a decoupled accelerator design template written in HLS. In this way, it decouples different hardware modules and functionalities of an accelerator design, thus for the first time enabling LLM-powered AI accelerator design automation. * We instantiate the aforementioned Insight-(2)/-(3) by equipping GPT4AIGChip with a demo-augmented prompt generator, which leverages both the in-context learning and the logical reasoning capabilities of powerful GPT models, enabling automated exploration of the accelerator search space powered by LLMs. * Extensive experiments validate and demonstrate the effectiveness of our GPT4AIGChip framework in generating high-quality AI accelerator designs, while significantly reducing the required human effort and expertise in the design process. The insights and framework derived from this work can spark further innovation in next-generation LLM-powered design automation tools and illuminate the promising field of AI-generated accelerators.
## II **Where We Are**: The Limitations and Capabilities of Current LLMs for AI Accelerator Design ### _An Overview of the Assessment_ While LLMs excel in various generation tasks (e.g., question answering, language translation, and chatbot dialogues), these tasks mainly involve natural languages, which LLMs are extensively trained on. However, their ability to handle languages and tasks that they encounter less frequently during pretraining, e.g., generating AI accelerator designs using HLS languages, remains an open question. Therefore, to effectively utilize LLMs in automating AI accelerator design, it is crucial to have a comprehensive understanding of the capabilities and limitations of state-of-the-art (SOTA) LLMs. This could help avoid unwarranted optimism or pessimism. Our assessment aims to provide this understanding, serving as a foundation for future innovations in LLM-powered automated AI accelerator design. To attain this goal, we first identify the common limitations of existing LLMs for AI accelerator design and then validate whether finetuning open-sourced LLMs on annotated hardware codes (e.g., HLS) could enhance their understanding of hardware codes and design. Ultimately, considering the identified shortcomings in these two steps, we reconsider the capabilities of LLMs that could be effectively utilized for practical AI accelerator design automation. ### _Failures and Limitations of Existing LLMs_ To leverage LLMs in generating HLS implementations of AI accelerators based on user instructions, one intuitive approach is to pair user instructions with commonly adopted HLS templates to serve as LLM prompts. In this subsection, we adapt an HLS implementation from [46] as our template, which utilizes a widely accepted for-loop-based design approach. We assess the capabilities of the SOTA LLM, GPT-4 [24], in generating hardware implementations according to our specified instructions. However, we observe that LLMs frequently generate non-synthesizable, functionally incorrect code. The common failures are outlined below. **Misinterpretation of variable definitions.** Implementing hardware accelerator code requires precise definitions of variables or functions to accurately instantiate hardware modules. However, LLMs often struggle with this task. For instance, accurately creating an array variable to match the size of an on-chip Block RAM (BRAM) is crucial for computation tile sizes and data reuse levels. Unfortunately, as illustrated in Fig. 2 (a), LLMs may instantiate modules with incorrect sizes, even when following given instructions. This issue arises from LLMs' difficulty in understanding the connection between identifiers in the instruction and the variables defined in the code, leading to misinterpretation of the provided quantities in the instruction. **Inability to capture long dependencies.** AI accelerator parameters are often intertwined, where a set of parameters for one module can influence and be influenced by other modules, causing long dependencies. Current LLMs frequently falter when handling these long dependencies, neglecting earlier design configurations when generating later modules. For example, as mentioned earlier, BRAM sizes can affect tiled computation sizes, which may in turn influence the number of processing elements (PEs). However, as Fig. 2 (b) shows, when we specify the parallelism factor along the input channel dimension of weights to be \(4\), LLMs disregard this when generating MAC arrays and consider only \(2\) parallel input channels. 
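To make this kind of cross-module dependency concrete, the schematic HLS-style fragment below sketches how a buffer tile size chosen early on constrains the legal unroll factor of a later MAC loop; all names (e.g., TILE_IC, mac_tile) are hypothetical illustrations of ours, not excerpts from the template in [46].

```cpp
// Schematic HLS-style fragment (hypothetical names, for illustration only).
// The tile size TILE_IC, fixed when sizing the on-chip buffers, silently
// constrains two later decisions: the unroll factor of the MAC loop and the
// partitioning of the buffers feeding it. Dropping either link reproduces
// the failures sketched in Fig. 2 (b) and (c).

#define TILE_IC 4 // tiled input channels held on chip
#define TILE_OC 8 // tiled output channels held on chip

void mac_tile(const float in_buf[TILE_IC],
              const float w_buf[TILE_OC][TILE_IC],
              float out_buf[TILE_OC]) {
  // Partition along the input-channel dimension so the fully unrolled reads
  // below can be served in one cycle; emitting UNROLL without this matching
  // ARRAY_PARTITION is exactly the coupling that LLMs tend to miss.
#pragma HLS ARRAY_PARTITION variable=w_buf complete dim=2
#pragma HLS ARRAY_PARTITION variable=in_buf complete dim=1
  for (int oc = 0; oc < TILE_OC; ++oc) {
#pragma HLS PIPELINE II=1
    float acc = 0.0f;
    for (int ic = 0; ic < TILE_IC; ++ic) {
      // The parallelism here must equal TILE_IC chosen above; generating
      // "ic < 2" instead (as in Fig. 2 (b)) breaks the intended design.
#pragma HLS UNROLL
      acc += in_buf[ic] * w_buf[oc][ic];
    }
    out_buf[oc] += acc;
  }
}
```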
**Overlooking crucial user instructions.** LLMs may struggle with reasoning about the relationships (1) among different hardware design concepts and (2) between natural language instructions and the code to be generated, thus overlooking crucial user instructions. Unlike the above long-dependency issue, this limitation involves relationships among multiple design concepts, which are not necessarily separated by long code blocks. For example, _unrolling (parallelizing)_ computation along certain data dimensions necessitates _partitioning_ the BRAM instance of the dependent data along the same dimensions. However, as Fig. 2 (c) shows, the code generated by existing LLMs often treats the design concepts in isolation. Additionally, LLMs may struggle to link natural language instructions with appropriate code generation, e.g., when instructed to modularize certain sub-modules, they may fail to identify the relevant domain-specific pragmas. **Over-simplified implementation.** AI accelerator implementation requires certain coding styles (e.g., HLS) to accurately depict hardware behaviors. For instance, to broadcast a single element from one buffer to multiple locations in another (partitioned) buffer, direct value assignment between the two buffers should be avoided. Instead, we need to first assign the element to a register-like variable and then assign the data from this intermediate register to multiple locations in the other buffer, thus preventing access conflicts within the source buffer. However, as shown in Fig. 2 (d), existing LLMs often overlook such details in hardware design and generate impractical designs.

Fig. 2: Visualization of the identified common failures and limitations of existing LLMs for AI accelerator design automation.

### _Closed-sourced LLMs vs. Finetuned Open-sourced LLMs_ The identified limitations above indicate LLMs' deficiency in understanding hardware design codes like HLS. One potential solution could be to finetune open-sourced LLMs with annotated HLS codes. However, this approach faces two challenges: the lack of high-quality HLS codes annotated with corresponding design descriptions, and the fact that the most advanced LLMs are closed-sourced and thus cannot be finetuned. These hurdles suggest adopting an open-sourced yet less powerful LLM. This subsection investigates powerful closed-sourced LLMs versus finetuned open-sourced LLMs, aiming for an affordable solution that does not require an overly large annotated dataset. We benchmark the closed-sourced LLM, GPT-4 [24], and the open-sourced LLM for code generation, CodeGen [21]. Following the settings in [34], we target a fundamental AI accelerator design automation task: implementing the inner product of two vectors in HLS. We use the Pass@\(k\) metric, representing the proportion of successful compilations among \(k\) attempts, as the performance metric. The prompt for both pipelines is: _"Implement the inner product operator between two vectors in HLS. You are an expert in AI accelerator design with extensive HLS coding knowledge."_ **Finetuning strategy.** We adopt a two-stage finetuning process: First, we collect seven thousand open-sourced HLS code snippets from GitHub and finetune CodeGen [21] using a masked prediction objective [8]. This enhances the LLM's hardware knowledge, particularly of HLS implementations. Second, we create ten customized HLS templates with implementation instructions and finetune the LLM to generate the corresponding AI accelerator design. This equips the LLM with AI accelerator implementation expertise. **Observations.** Tab.
I indicates that the pretrained open-sourced CodeGen lacks proficiency in AI accelerator design without additional finetuning, as evident from its 0% Pass@100 rate. Although our two-step finetuning process improves the model's capabilities, there remains a notable discrepancy (e.g., 11% less Pass@100) compared to the closed-sourced GPT-4. These findings suggest that in data-limited situations, LLMs' initial competence in the target task outweighs their finetuning ability. Consequently, a powerful closed-sourced LLM becomes a more suitable choice. ### _Capabilities of LLMs_ Given the identified failures of LLMs, we reconsider their capabilities in practical LLM-powered AI accelerator design automation. Previous works [21, 16, 4, 50, 7] suggest that LLMs have learned rich representations that can be generalized to unseen coding problems through supervised finetuning or in-context learning [22, 5, 31]. To address the challenge of collecting high-quality instruction-code pairs required for finetuning, we explore the possibility of in-context learning by providing LLMs with HLS demonstrations composed of code pairs and corresponding design descriptions, in addition to user instructions. Inspired by the failure cases in Sec. II-B, we adopt decoupled hardware templates to avoid long dependencies, as detailed in Sec. IV-B. We summarize the identified capabilities of LLMs for successful AI accelerator design automation as follows: **Generalization capability from in-context demonstrations.** As shown in Fig. 3, given two snippets of HLS demonstrations, LLMs can generate a new hardware design following user instructions. They do so by maintaining the same coding style and modifying the design to meet new requirements, inferring the differences between the demonstrated design description and the user instruction. Our finding is that when high-quality demonstrations, which bear a certain degree of correlation to user instructions, are provided, the in-context learning capability of LLMs can be effectively activated. **Logical reasoning capability under multiple demonstrations.** Feeding more diverse demonstrations to LLMs could better harness their in-context learning capabilities. However, this necessitates logical reasoning skills to discern which demonstration should be referred to when implementing each item in the user instructions. As shown in Fig. 3 (and in Sec. V-E), when given two different demonstrations, LLMs can provide a step-by-step analysis of the similarity between the functionalities required by the user instruction and those implemented in the demonstrations, subsequently selecting the appropriate one as a starting point. This reflects the logical reasoning capabilities of LLMs when provided with suitable demonstrations and also underscores the significance of high-quality demonstrations. ## III **What We Can Learn**: Insights from the Assessment Drawing upon our comprehension of the limitations and exploitable capabilities of LLMs in Sec. II, we distill insights and provide guidelines on how to efficiently harness LLMs for AI accelerator design automation. These insights establish the groundwork for our GPT4AIGChip framework in Sec. IV. Moreover, they possess the potential to inspire future innovations in LLM-driven hardware design automation. In particular, the following insights are derived: **Decoupled hardware template design.** Current LLMs struggle to generate effective AI accelerator designs when built upon commonly adopted HLS templates with long dependencies. 
This issue arises from the domain gap between the pretraining data and AI accelerator design tasks, which are infrequently encountered during pretraining, causing a lack of knowledge about both domain-specific languages and hardware functionality. Hence, it is crucial to (1) properly design the hardware template to decouple different hardware functionalities in the design space, thus avoiding long dependencies, and (2) decompose a complex task into simpler subtasks, i.e., different hardware functionalities should be generated independently and progressively, rather than generating lengthy codes all at once. **Prioritize in-context learning given limited data.** Given the scarcity of annotated data, which is too limited to finetune an open-sourced but less powerful LLM in a supervised manner, leveraging the combination of in-context learning and logical reasoning capabilities of a closed-sourced but powerful LLM is a more effective choice. **Proper prompt engineering.** The key to successful in-context learning for hardware design is augmenting prompts with high-quality demonstrations in the target domain. These demonstrations are anticipated to: (1) exhibit a strong correlation with the input design instructions, and (2) encompass a wide range of design parameters to convey ample domain knowledge, enhancing the capability of addressing various input instructions. ## IV **Instantiate the Drawn Insights**: The Proposed GPT4AIGChip Framework ### _The Overall Pipeline of GPT4AIGChip_ Utilizing the insights from Sec. III, we develop the GPT4AIGChip framework, which is designed to empower non-expert users in leveraging LLMs for automated AI accelerator design. Notably, our GPT4AIGChip instantiates the three insights outlined in Sec. III by implementing in-context learning atop the closed-sourced but powerful GPT-4 [24] and incorporating two essential components: (1) the LLM-friendly hardware template (see Sec. IV-B), which simplifies intricate AI accelerator codes into a modular structure, and (2) the demo-augmented prompt generator (see Sec. IV-C), which enhances LLMs' capacity to generate optimized AI accelerator designs by supplementing prompts with well-chosen demonstrations. By integrating the LLM-friendly hardware template with the demo-augmented prompt generator, our GPT4AIGChip adopts an iterative approach to enhance the generated AI accelerator design, progressively approaching the optimal solution. Each iteration follows a four-stage workflow, as depicted in Fig. 4 and outlined below: * The search engine identifies the next design and its corresponding instruction for each module in the LLM-friendly hardware template, drawing on feedback from previously searched designs to guide the implementation and evaluation. * The demo-augmented prompt generator creates a prompt for each module, combining relevant demonstrations (instruction-code pairs) to enhance the LLMs' in-context learning. * The LLMs equipped with the above prompts sequentially generate the hardware design implementation. * The design validation flow scrutinizes the LLM-generated codes, executing necessary modifications to ensure deployability.

Fig. 3: Visualization of the identified generalization and logical reasoning capabilities of existing LLMs for AI accelerator design automation.

Fig. 4: Visualization of the workflow of our proposed GPT4AIGChip framework.

### _The LLM-friendly Hardware Template Design_ As highlighted in Sec.
III, providing the LLM with a hardware design template is vital to compensate for its limited AI accelerator design knowledge. Yet, as shown in Sec. II-B, existing HLS accelerator templates [10, 27, 39, 46] pose significant challenges for LLM-based AI accelerator generation due to their complex design parameter coupling and inter-dependency. To address this, we first establish design principles for an LLM-friendly accelerator micro-architecture and source code template. Guided by these principles, we propose a unique modular AI accelerator template, tailored to optimize LLMs' capabilities in generating AI accelerator designs. We then discuss the implications and advantages of our template in enhancing LLM-assisted design generation across broader scenarios. **The desired template design principles.** To ensure effective LLM-assisted generation of accelerator designs, we identify three key principles for design templates: (1) high modularity, (2) decoupled module design, and (3) deep design hierarchies that facilitate step-by-step design generation, all of which address the inherent limitations of LLMs. * **High modularity**: Due to the token capacity restrictions of LLMs, the size of the input sample code used during in-context learning [5, 22, 31] and the code size for the final generated design during each round (i.e., a single LLM model inference) are significantly limited. With high modularity, the template is segmented into smaller and thus more manageable modules. Such a modular design generation approach can substantially reduce the required code size for both LLMs' input and output. * **Decoupled module design**: Segmenting the code template into smaller modules can inadvertently introduce coupling and dependencies among their configuration settings. As discussed in Sec. II-B, this contradicts our goal of input token size reduction through high modularity, as the LLM must recall previous modules' settings. To counteract this, we propose independent module generation, each maintaining its own local settings. However, this can lead to sub-optimal overall designs, with potential data rate mismatches between connected modules causing stalls or deadlocks. To harmonize module operation, we suggest an additional search engine and adaptable inter-module communication schemes. These can optimally harmonize all local settings, mediating communication-rate and bandwidth discrepancies. Consequently, the LLM can generate each module based on its local settings, maintaining the decoupling principle. * **Deep design hierarchies for step-by-step design generation**: The complexity of accelerators can result in large code sizes within even a single module, which may exceed the handling capacity of an LLM. To address this, our template adopts a hierarchy-based, module-by-module generation approach, streamlining the process and reducing the complexity at each stage. Every module consists of multiple sub-modules adhering to a decoupling principle, which may further contain their own sub-modules. This recursive nesting persists until further division is infeasible (see Level-L in Fig. 5 (b)). This allows the LLM to systematically generate design hierarchies for each module, constraining the code size and complexity at each step. **Overview of the proposed accelerator template.** Incorporating the three key principles above, we introduce a new, modular, and decoupled accelerator micro-architecture and corresponding code template, as displayed in Fig. 5 (b).
While here we focus on the widely-used GEMM operator considering its extensive application across various AI algorithms, our template retains a generic structure. Comprising a collection of versatile modules, each can be flexibly redesigned according to the prompts and local configuration settings, offering varying hardware efficiency or even unique functionalities. To guide precise and decoupled code generation by LLMs, each module in our template strictly corresponds to a function instantiation in the source code, as illustrated in Fig. 5 (b). Each module is hierarchically composed of nested sub-modules to facilitate LLMs' step-by-step generation. Modules are interconnected via stream-based communication links and asynchronous data FIFOs, to reduce the controlling overhead of handling potential mismatches between different modules' data production and consumption rates. Processing onset and termination within each module primarily hinge on data availability, promoting fine-grained operation overlap and streamlining control overhead [15]. For modules with multiple input ports, such as the interconnect module in Fig. 5 (b), we include additional synchronization logic to ensure data alignment and accuracy. **Key components of the proposed accelerator template.** We elaborate on different modules shown in Fig. 5 (b) below. * **Buffer modules**: These are designed to facilitate parallel data access for subsequent computing units and to harness varied data reuse patterns. They define (1) the on-chip memory partition and the corresponding data allocation for parallel access, and (2) the procedures for refreshing, resetting, or retaining the data within buffers, dictated by control signals associated with different data reuse patterns. Double buffers are assumed for all possible design styles to ensure optimal throughput. * **Computing units modules**: These modules primarily handle computations, e.g., multiplications and additions, within their parallel computing units. Implemented as a collection of Multiply-and-Accumulate (MAC) units, their interconnects can be tailored according to different design prompts, striking a balance between spatial data reuse, MACs' data propagation latency, and on-chip buffer bandwidth contention. We structure nested design hierarchies for easier LLM-assisted generation and to accommodate a variety of MAC interconnect styles. For example, individual MACs may be linked to create a 1D PE-lane sub-module, while multiple PE lanes can be interconnected to forge a larger 2D PE-array module, enhancing scalability. * **Interconnect modules**: They are designed to flexibly distribute and synchronize data between buffer modules and computing units modules. Their flexibility becomes crucial when the computing units consist of multiple 2D PE arrays and when the algorithm-to-PE-array mapping can change at run-time. * **Control (Ctrl) modules**: They handle initial control data retrieval from the host, control data decoding, and potential runtime control data generation to alter various modules' modes. * **Flexible communication arbitrators**: These are designed to manage potential mismatches in data production/consumption rates and bandwidths between interconnected modules, thus facilitating rate and bandwidth conversion. 
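To illustrate the decoupling principle behind these modules, the minimal sketch below shows template-style modules connected only through streams. It is our own simplified rendering under stated assumptions (Vivado HLS's hls::stream and the DATAFLOW pragma, with hypothetical names such as input_buffer and pe_lane), not code from the actual template.

```cpp
// Minimal sketch of the decoupling principle (hypothetical module names; not
// code from the paper's template). Each module is one function that talks to
// its neighbors only through streams, so an LLM can generate every function
// from purely local settings, while FIFOs absorb rate mismatches.
#include <hls_stream.h>

typedef float data_t;

// Buffer module: drains DRAM-side data into a stream at its own pace.
static void input_buffer(const data_t *dram, int n, hls::stream<data_t> &to_pe) {
  for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
    to_pe.write(dram[i]);
  }
}

// Computing-units module: starts as soon as data is available on its input
// stream, independent of how the producer schedules its writes.
static void pe_lane(hls::stream<data_t> &from_buf, hls::stream<data_t> &to_out,
                    int n) {
  data_t acc = 0;
  for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
    acc += from_buf.read(); // blocking read: availability-driven control
  }
  to_out.write(acc);
}

// Output module: writes the result back, again via a stream-only interface.
static void drain(hls::stream<data_t> &from_pe, data_t *result) {
  *result = from_pe.read();
}

// Top level: DATAFLOW lets all modules run concurrently; the FIFO depths
// stand in, very loosely, for the rate-matching role of the communication
// arbitrators described above.
void top(const data_t *dram, data_t *result, int n) {
#pragma HLS DATAFLOW
  hls::stream<data_t> s0("buf_to_pe");
  hls::stream<data_t> s1("pe_to_out");
#pragma HLS STREAM variable=s0 depth=16
#pragma HLS STREAM variable=s1 depth=2
  input_buffer(dram, n, s0);
  pe_lane(s0, s1, n);
  drain(s1, result);
}
```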
**Implication and advantages of the proposed template.** The proposed template offers three key benefits: (1) by reducing the code size with decoupled module design and deep design hierarchies, the template allows LLMs to use limited input and output token capacity to generate complicated accelerator designs in a step-by-step manner, enhancing LLMs' in-context learning capability; (2) as illustrated in Sec. II-C, LLMs' potential for generating AI accelerator designs can be further enlarged when finetuned using an additional dataset of sample code and prompt pairs. Our proposed template simplifies this finetuning process. It allows developers to generate the dataset in a module-by-module and hierarchy-by-hierarchy fashion, greatly reducing design complexity; and (3) the principles identified for the proposed template extend beyond the domain of HLS. The same issues concerning LLM-assisted design generation, as outlined in Sec. II-B, still exist when leveraging LLMs to generate designs for other programming languages. Therefore, the aforementioned key principles are generally applicable, though the technical implementations for different domains may vary. ### _The Demo-Augmented Prompt Generator Design_ As outlined in Sec. III and prior studies [4, 23, 44], a carefully enhanced prompt, reinforced with demonstrations, can effectively facilitate in-context learning for LLMs. This imparts vital task-specific knowledge to LLMs, unlocking their full capabilities. However, the prompt length limit makes it impractical to incorporate all possible demonstrations into one prompt. To tackle this, our demo-augmented prompt generator aims to efficiently generate prompts that automatically select the most relevant demonstrations from our crafted library and incorporate them into the prompt, balancing prompt length with the resulting in-context learning performance. **Workflow.** Inspired by previous studies highlighting LLMs' ability to identify similarities between different instructions [4], we employ LLMs to facilitate demonstration selection within our demo-augmented prompt generator, which further minimizes the demand for human expertise in AI accelerator design. Specifically, as shown in Fig. 4, in each iteration, given a design instruction generated by the search engine, we deploy an LLM to identify the similarity between the generated design instruction and those within the demonstration library. We then select the two most similar instructions, pairing them with their corresponding implementation as the demonstrations for this iteration of code generation. Then, we generate the demo-augmented prompt using the following template: * Assume you are an expert in HLS codes for AI accelerator design, I will now provide you with an instruction on generating the [_Module Name_] for AI accelerator design. Below are two demonstration instructions and the corresponding generated code. Demo A: Instruction: [_Demo A Instruction_]. Code: [_Demo A Code_]; Demo B: Instruction: [_Demo B Instruction_]. Code: [_Demo B Code_]. Now please generate the code with the following instruction: [_Design Instruction_]. **Demonstration library.** The high-quality demonstration library is a critical component in our demo-augmented prompt generator. The demonstrations within this library essentially serve as the primary knowledge source for LLMs to acquire domain-specific insights. As suggested in Sec.
III, our goal is to assemble a demonstration library that encompasses a diverse range of design choices for the target domain, i.e., GEMM in this paper. This ensures that the library provides demonstrations with abundant domain knowledge, catering to the diverse design instructions generated from the search engine. To achieve this, we adhere to the following guiding principles during the construction of our demonstration library: * **Highly correlated instruction and code pairs**: Each demonstration includes a detailed implementation instruction and the corresponding code, accompanied by comments. Each line of instruction is explicitly linked to specific code segments, providing clarity on their correlation and rationale. * **Diverse design choices**: To ensure LLMs find demonstrations with sufficient domain knowledge for a given design instruction, we generate separate demonstrations illustrating modifications for each design parameter in our search space (Sec. IV-D). Given the large search space size that GPT4AIGChip considers, generating a large number of distinct AI accelerator designs based on the principles outlined above requires substantial human effort. Fortunately, our proposed LLM-friendly hardware template (Sec. IV-B) makes it feasible. This template exhibits high modularity and allows for the decoupled generation of each module within an AI accelerator. This approach significantly reduces the required human effort in two ways: (1) each module is a concise and structured implementation focused on a specific function. Therefore, the implementation of one module does not require considering the impact on other modules, resulting in a streamlined and less labor-intensive implementation process; and (2) not all modules in the template are affected by all the parameters in the search space. As a result, the potential design variations for each module are substantially reduced compared to the entire AI accelerator design.

Fig. 5: (a) LLMs with a non-modular template are limited by one-shot design generation, coupled design parameters, and long dependency; (b) In contrast, the proposed modular and decoupled accelerator template facilitates step-by-step design generation in a hierarchical manner.

### _Implementation of Other Components in GPT4AIGChip_ **Hardware design space.** To ensure the performance of generated accelerators for diverse designs, a generic accelerator design space is crucial. It enables a flexible design process, providing the code generator with multiple options to customize the design of each target operator. Utilizing the template from Sec. IV-B, we identify five key hardware design parameters: * **MAC array sizes**: They indicate the total number of MACs in the MAC array of the instantiated accelerator. * **Network-on-Chip (NoC) styles**: They determine the distribution of data to and from computing units, as well as the propagation of data among them. They can be classified into three main types: uni-cast, multi-cast, and broadcast. To promote design diversity, these styles are independently employed in various hierarchies of computing units, including individual MACs, 1D MAC lanes, and 2D MAC arrays. Furthermore, NoC styles are configured separately for different data types, e.g., the two input operands and one output in GEMM, enlarging design variations. * **On-chip buffer sizes**: They denote the capacities of three principal buffers integral to the accelerator design, which consist of two input buffers and one (partial) output buffer.
The sizes of all additional auxiliary buffers and registers are determined based on the capacities of these three principal buffers. * **On-chip buffer partition styles**: They determine the data allocation among on-chip memory blocks within each buffer. By dividing the data across multiple blocks, parallel accesses can be achieved through buffer partitioning. Two dimensions, namely data width and height, can be employed for such partitioning. * **Data reuse patterns**: They define how data, once buffered, is reused during computation. Changing how buffers and DRAM exchange data leads to diverse reuse strategies, e.g., first-input operand reuse, second-input operand reuse, or output reuse. **The adopted search algorithm.** GPT4AIGChip's accelerator search adopts an evolutionary algorithm known as tournament selection [19], which iteratively evolves the accelerator's design. This iterative evolution process begins by initializing a population \(P\), consisting of \(|P|\) randomly selected accelerator designs \(\{hw\}\) from the available design space. Specifically, in each iteration, a subset \(S\) of a fixed size is randomly selected from \(P\). The superior designs in \(S\), judged by the hardware performance of their LLM-generated implementations, become parent designs \(\{hw\}_{parent}\). New accelerator designs \(\{hw\}_{child}\) then emerge via mutation (a random tweak of a parent design parameter) and crossover (a random element exchange between two parents). The resultant \(\{hw\}_{child}\) are added into \(P\). To preserve a constant population size \(|P|\), the oldest designs are phased out from \(P\), following [32]. Finally, once the maximum number of cycles is reached, the accelerator design with the highest performance throughout the search is chosen as the optimal solution. A minimal sketch of this search loop is given at the end of this section. **Design validation and code correction flow.** GPT4AIGChip also integrates a process to validate and ensure the functionality of LLM-generated designs, consisting of three main stages: (1) Synthesizability Evaluation, (2) Correctness Verification, and (3) Performance Analysis. In Synthesizability Evaluation, we initially employ standardized Vivado HLS tools [42] to synthesize LLM-generated codes. Subsequently, output log messages are processed via a custom error parser, armed with empirically-tested error detection and correction protocols. If the parser confronts errors beyond its capabilities, it may necessitate (1) LLM-driven design regeneration or (2) human intervention for error rectification. The current setup incorporates procedures for detecting and addressing undefined variables, improper HLS pragma usage, and out-of-bound array (memory) access. Furthermore, in Correctness Verification, we ensure that the design produces the desired results through a correctness check: a test-bench template is constructed featuring the anticipated input and output, and the produced output is subsequently compared with the expected output to confirm accuracy. Given the variety of possible errors, this step does not include automatic correction; in case of incorrect results, design regeneration is required. Finally, we assess performance metrics (e.g., latency) and resource usage to provide feedback to the search engine, as depicted in Fig. 4. Vivado HLS built-in tools generate these performance and resource estimations after design synthesis.
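As referenced above, the sketch below illustrates the tournament-selection search loop. It is a minimal sketch under stated assumptions: the Design encoding, the hyperparameter names, and the evaluate callback (standing in for LLM-based code generation followed by the validation flow) are hypothetical simplifications of ours, not GPT4AIGChip's actual implementation.

```cpp
// Sketch of the tournament-selection search loop (hypothetical encoding and
// hyperparameters; evaluate() stands in for LLM-based code generation plus
// the validation flow, returning a fitness such as the reciprocal of latency).
#include <algorithm>
#include <cstddef>
#include <deque>
#include <functional>
#include <random>
#include <vector>

struct Design {
  std::vector<int> params;  // one entry per searchable design parameter
  double fitness = 0.0;
};

Design search(int num_params, int num_values, std::size_t pop_size,
              std::size_t tour_size, int cycles,
              const std::function<double(const Design &)> &evaluate) {
  std::mt19937 rng(42);
  std::uniform_int_distribution<int> val(0, num_values - 1);
  std::uniform_int_distribution<int> pidx(0, num_params - 1);
  auto random_design = [&] {
    Design d;
    d.params.resize(num_params);
    for (int &p : d.params) p = val(rng);
    d.fitness = evaluate(d);
    return d;
  };
  std::deque<Design> pop;  // kept in insertion (age) order
  for (std::size_t i = 0; i < pop_size; ++i) pop.push_back(random_design());
  Design best = *std::max_element(
      pop.begin(), pop.end(),
      [](const Design &a, const Design &b) { return a.fitness < b.fitness; });
  std::uniform_int_distribution<std::size_t> pick(0, pop_size - 1);
  for (int c = 0; c < cycles; ++c) {
    // Tournament: sample a subset S and keep its two fittest members as
    // parents (assumes tour_size >= 2).
    std::vector<Design> sub;
    for (std::size_t i = 0; i < tour_size; ++i) sub.push_back(pop[pick(rng)]);
    std::sort(sub.begin(), sub.end(), [](const Design &a, const Design &b) {
      return a.fitness > b.fitness;
    });
    Design child = sub[0];
    // Crossover: randomly exchange elements between the two parents.
    for (int p = 0; p < num_params; ++p)
      if (rng() & 1u) child.params[p] = sub[1].params[p];
    // Mutation: random tweak of one design parameter.
    child.params[pidx(rng)] = val(rng);
    child.fitness = evaluate(child);
    if (child.fitness > best.fitness) best = child;
    pop.push_back(child);  // add the child ...
    pop.pop_front();       // ... and phase out the oldest design
  }
  return best;
}
```

Keeping the population in an age-ordered deque makes the "phase out the oldest design" rule from [32] a constant-time operation.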
## V Experimental Evaluation

### _Experiment Setup_

We demonstrate the effectiveness of our GPT4AIGChip framework for automatically exploring the optimal AI accelerator design written in HLS on top of the design space introduced in Sec. IV-D. In particular, we adopt GPT-4 [24] as the default LLM and adopt the standard Vivado HLS design flow [42], using a ZCU104 FPGA with XCZU7EV MPSoC [1], for accelerator validation. We allocate 1024 digital signal processors (DSPs) for both baselines and our generated designs, unless noted. The final performance is measured onboard with PYNQ [41] as the deployment environment.

### _Benchmark with Manual and Automated Designs_

To validate the quality of the generated hardware designs of our GPT4AIGChip framework, we benchmark with two baselines, including (1) hardware designs delivered by an industry-level design automation tool CHaiDNN [40] and (2) manually optimized designs by human experts based on our template, in terms of the acceleration efficiency for six different networks under two input resolutions. In the case of manually optimized designs, an expert hardware designer adjusts and finetunes the parameters outlined in our suggested templates. This process continues until either no further performance gains can be achieved within a reasonable timeframe (approximately one day) or the expert designer determines that no further improvements can be made based on empirical evidence.

**Observation.** As shown in Fig. 6, we observe that (1) our GPT4AIGChip framework can consistently outperform CHaiDNN [40], e.g., a 2.0%\(\sim\)16.0% latency reduction across six networks and two resolution settings, indicating that our GPT4AIGChip framework has successfully and practically demonstrated the first LLM-powered AI accelerator design automation framework; and (2) our searched accelerator design can match the hardware efficiency achieved by manual designs from human experts while enjoying much-reduced labor costs. This also implies that our searched accelerator implementation has the potential to serve as a good reference or initialization for human designers.

### _Effectiveness of the Search Scheme_

To validate the effectiveness of our search scheme, we visualize the Pareto frontier, i.e., the trade-off between latency and the utilized resources in terms of DSPs, on top of ResNet-50 with an input resolution of \(224\times 224\), achieved by the hardware designs searched via GPT4AIGChip, and benchmark with those delivered via a random search in our design space. In addition, we also benchmark with manually optimized accelerator implementations by human experts.

**Observation.** In Fig. 7, we can observe that although the designs generated by a random search under-perform manually optimized designs by human experts in terms of the achievable latency-resource trade-off under a relatively rich DSP setting, our GPT4AIGChip framework can outperform the random search baseline and match the hardware efficiency achieved by manual expert designs under comparable resources, indicating the effectiveness of our search scheme in Sec. IV-D. Note that given a tight DSP budget, the efficiency of different designs is naturally similar.

### _Ablation Study of the Prompt Generator_

To validate the effectiveness of our proposed demo-augmented prompt generator, we validate the impact of different prompt design strategies on LLMs' accelerator design generation performance. Specifically, we use the generation of the output stationary computing units defined in Sec.
IV-B as the target application with Pass@10, as defined in Sec. II-C, as the evaluation metric.

**Prompt format and explicitness.** We first validate the necessity of using the demo-augmented prompt when generating the code. Specifically, we consider two alternative prompt designs: (1) _No Demo_, which uses the same design instruction but without demo instruction and code; (2) _High-Level Description_, which replaces the explicit instructions in the demo with high-level descriptions (e.g., output stationary computing units). As shown in Tab. II, our demo-augmented prompt improves the LLM's ability to generate desired accelerator designs by 50% and 30% in terms of Pass@10 rates over the two baselines _No Demo_ and _High-Level Description_, respectively.

**Demonstration selection.** We further validate the importance of incorporating appropriate demonstrations in the prompt. Specifically, we consider two alternatives other than selecting the most similar demonstrations: (1) random selection, denoted as _Random_; (2) selecting the most unrelated demonstrations, denoted as _Dissimilar_. As shown in Tab. III, using the most similar demonstrations can better impart necessary knowledge in the prompt, leading to 10% and 30% higher Pass@10 rates over _Random_ and _Dissimilar_, respectively.

**The number of demonstrations.** We also evaluate the number of demonstrations needed in the prompt. Tab. IV shows that (1) introducing demonstrations (i.e., # Demo > 0) significantly improves the performance, and (2) providing 2 demonstrations leads to a satisfactory trade-off between Pass@10 rates and prompt length.

### _Visualizing the Generated Designs_

To better illustrate the design generation process from in-context demonstrations, we provide one example in Fig. 8, where the LLM is prompted by two demonstrations that correspond to the inner-/outer-product-based matrix multiplication implementations, respectively, and is instructed to generate the implementation of a row-wise-product-based design. We can observe that our framework exhibits good logical reasoning about the relationship between the desired design and the two given demonstrations, as the generated implementation can successfully identify the reusable codes from the two demonstrations and fuse them in a logically correct manner, i.e., adopt a spatial reduction along the PE lane, following the inner-product solution, and broadcast one element along the PE row, following the outer-product solution. This indicates the generalization capability of our GPT4AIGChip framework.

## VI Limitations and Future Work

We recognize that our work marks the initial step towards LLM-aided AI accelerator design automation, with significant efforts still needed to propel this promising yet demanding field forward. In order to provide insights into future research within this domain, we discuss our limitations and the resulting inspiration for future work.

**Dependence on the demonstration library.** Our framework's dependence on an available demonstration library tailored to the target domain is still essential to compensate for the existing LLMs' limited comprehension of AI accelerator designs. However, constructing such a library demands non-trivial hardware expertise and constrains the capacity for cross-domain applicability. To tackle this challenge, a promising avenue for future investigation involves amassing annotated hardware design codes alongside corresponding descriptions, which can be utilized for LLM finetuning.
This approach encodes AI accelerator design knowledge into the LLMs' weights, diminishing the reliance on human expertise and enhancing generalization across diverse domains. Additionally, the accumulation of high-quality paired data can also set a benchmark for the community to evaluate the capabilities of forthcoming LLMs.

Fig. 6: Benchmark GPT4AIGChip's generated designs with CHaiDNN's [40] generated ones and manually optimized ones by experts on six networks.

Fig. 7: Benchmark the latency-resource Pareto frontier achieved by GPT4AIGChip's search scheme, random search, and manual optimization.

**Human involvement requirements.** Our framework still necessitates human involvement to rectify the interfaces of the generated hardware modules and assemble them into the final accelerator. This requirement arises from the diverse implementation styles of different design choices, leading to inevitable inconsistencies in the module-wise interface, particularly when expanding the hardware template to encompass a wider array of domains. In future endeavors, this challenge could potentially be addressed by crafting novel hardware templates and design principles that strike a balance between generality and modularity. Such an approach would cater to the preferred formats of both human designers and LLMs, potentially reducing the need for extensive human intervention. Additionally, when coupled with LLM finetuning, explicit regularizations can be applied to LLMs to ensure the consistency of interface implementation.

**Verification cost of the generated accelerator.** Beyond the accelerator design phase, the verification cost associated with accelerator design can be substantial, a facet not encompassed by our present framework. Addressing this omission and consequently furnishing a comprehensive and accurate AI accelerator generation pipeline entails the development of verification tools tailored to LLM-generated accelerator designs. In future endeavors, such tools are of paramount importance to guarantee the soundness of the designs and to automatically rectify prevalent LLM-related anomalies. These tools could be constructed based on human insights into LLM design biases and/or through the creation of another LLM specifically customized for verification purposes.

## VII Related Work

**LLMs and in-context learning.** LLMs, e.g., GPT-4 [24] and Alpaca [16], have demonstrated outstanding performance across various tasks, attributed to extensive pretraining, high-quality datasets, and effective tuning methods [28, 29, 16, 9, 30, 48, 8, 9, 10]. One key ability of LLMs is in-context learning, where providing a few relevant input-output pairs can largely enhance task performance [31, 12, 18, 20, 33, 47, 49]. Nevertheless, employing in-context learning for AI accelerator design automation encounters challenges stemming from token length limitations in LLMs (e.g., 4096 tokens for GPT-4 [24]). The complexity of AI accelerator design code can easily surpass this limit, necessitating a decoupled hardware template and the judicious selection of highly correlated demonstrations.

**LLM-powered code generation.** Previous work has leveraged LLMs to generate code, particularly in languages with substantial pretraining resources like Python, or highly structured ones like SQL [6, 37, 26, 21, 50]. For hardware code generation, LLMs have been finetuned on Verilog codes for code completion [34] and used for bug-fixing [2].
Existing LLM-powered code generation primarily targets languages that are highly structured and akin to natural languages or focuses on simpler tasks such as code completion. They are not able to generate complex, domain-specific hardware designs based on human instructions, which require a deep understanding and reasoning of the target domain. In contrast, our work represents the first endeavor to harness the power of LLMs for generating AI accelerator designs based on natural languages.

**Design automation for DNN accelerators.** To facilitate AI accelerator design, various design automation tools have been developed for FPGA- and/or ASIC-based accelerators. For example, Deepburning [38] and TensorLib [14] utilize pre-built templates with customizable parameters to construct FPGA-based DNN accelerators; AutoDNNChip [43] adopts a cohesive graph representation, encompassing a broad spectrum of accelerator designs, to enhance the applicability of its proposed design automation. However, these existing tools still require nontrivial hardware expertise and often rely on domain-specific languages, necessitating deep knowledge of hardware architecture and thus limiting the accessibility to most AI developers. To address these limitations, GPT4AIGChip aspires to produce high-quality hardware codes in response to natural language instructions, reducing the entry barrier for AI developers unfamiliar with hardware and thus expediting AI accelerator innovations.

## VIII Conclusion

This work delves into the capabilities of LLMs for AI accelerator design automation. As a crucial first step, we conducted an in-depth investigation of LLMs' strengths and limitations in automated AI accelerator generation, drawing vital insights into the prospects of LLM-powered design automation. Building on these insights, we develop GPT4AIGChip, which integrates an automated prompt-generation pipeline using in-context learning to guide LLMs towards creating high-quality AI accelerator designs. Various experiments and ablation studies validate the effectiveness of GPT4AIGChip in generating high-performance AI accelerators in response to human natural languages. To our knowledge, this work marks the first successful demonstration of a pipeline for LLM-powered automated AI accelerator generation, highlighting the untapped potential of LLMs in design automation and suggesting promising avenues for next-generation AI accelerator development.

## Acknowledgement

The work is supported by the National Science Foundation (NSF) through the CCRI program (Award number: 2016727), an NSF CAREER award (Award number: 2048183), and CoCoSys, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA.

Fig. 8: Visualize the generated row-wise-product-based implementation of GPT4AIGChip following the instructions and the demonstrations of inner-/outer-product-based matrix multiplication, for which we show the design descriptions only and hide the codes for visual clarity.
The remarkable capabilities and intricate nature of artificial intelligence (AI) have dramatically escalated the need for dedicated AI accelerators. However, designing these accelerators for various AI workloads is labor- and time-intensive. While existing design exploration and automation tools can significantly reduce human involvement, they still demand substantial hardware expertise, posing a barrier to non-experts and hindering AI accelerator development. Motivated by the astonishing potential of large language models (LLMs), which can generate high-quality content in response to human-language instructions, this work aims to leverage LLMs to automate AI accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework intended to democratize AI accelerator design by leveraging human natural languages instead of domain-specific languages. Specifically, LLM
2307.16835
Unveiling the geometric meaning of quantum entanglement: discrete and continuous variable systems
We show that the manifold of quantum states is endowed with a rich and nontrivial geometric structure. We derive the Fubini-Study metric of the projective Hilbert space of a multi-qubit quantum system, endowing it with a Riemannian metric structure, and investigate its deep link with the entanglement of the states of this space. As a measure, we adopt the Entanglement Distance E preliminary proposed in [1]. Our analysis shows that entanglement has a geometric interpretation: E(|psi>) is the minimum value of the sum of the squared distances between |psi> and its conjugate states, namely the states v^mu . sigma^mu |psi>, where v^mu are unit vectors and mu runs on the number of parties. We derive a general method to determine when two states are not the same state up to the action of local unitary operators. We prove that the entanglement distance, along with its convex roof expansion to mixed states, fulfills the three conditions required for an entanglement measure: that is i) E(|psi>) =0 iff |psi> is fully separable; ii) E is invariant under local unitary transformations; iii) E doesn't increase under local operation and classical communications. Two different proofs are provided for this latter property. We also show that in the case of two qubits pure states, the entanglement distance for a state |psi> coincides with two times the square of the concurrence of this state. We propose a generalization of the entanglement distance to continuous variable systems. Finally, we apply the proposed geometric approach to the study of the entanglement magnitude and the equivalence classes properties, of three families of states linked to the Greenberger-Horne-Zeilinger states, the Briegel Raussendorf states and the W states. As an example of an application for the case of a system with continuous variables, we have considered a system of two coupled Glauber coherent states.
Arthur Vesperini, Ghofrane Bel-Hadj-Aissa, Lorenzo Capra, Roberto Franzosi
2023-07-31T16:58:43
http://arxiv.org/abs/2307.16835v2
# Unveiling the geometric meaning of quantum entanglement ###### Abstract We show that the manifold of quantum states is endowed with a rich and nontrivial geometric structure. We derive the Fubini-Study metric of the projective Hilbert space of a quantum system, endowing it with a Riemannian metric structure, and investigate its deep link with the entanglement of the states of this space. As a measure we adopt the _entanglement distance_ \(E\) preliminarily proposed in Ref. [1]. Our analysis shows that entanglement has a geometric interpretation: \(E(|\psi\rangle)\) is the minimum value of the sum of the squared distances between \(|\psi\rangle\) and its conjugate states, namely the states \(\mathbf{v}^{\mu}\cdot\mathbf{\sigma}^{\mu}|\psi\rangle\), where \(\mathbf{v}^{\mu}\) are unit vectors and \(\mu\) runs on the number of parties. Within the proposed geometric approach, we derive a general method to determine when two states are not the same state up to the action of local unitary operators. Furthermore, we prove that the entanglement distance, along with its convex roof expansion to mixed states, fulfils the three conditions required for an entanglement measure: that is _i)_ \(E(|\psi\rangle)=0\) iff \(|\psi\rangle\) is fully separable; _ii)_ \(E\) is invariant under local unitary transformations; _iii)_ \(E\) does not increase under local operations and classical communication. Two different proofs are provided for this latter property. We also show that in the case of two-qubit pure states, the entanglement distance for a state \(|\psi\rangle\) coincides with two times the square of the concurrence of this state. ## I Introduction Entanglement is essential in quantum information theory and for its application to quantum technologies. Indeed, entanglement is a fundamental resource in quantum cryptography, teleportation, quantum computation and quantum metrology applications [2; 3; 4]. However, entanglement remains elusive, since its characterization and quantification in the case of a general system remain an open problem [5; 6; 7]. A huge literature has been devoted to the entanglement quantification problem in the last decades and, despite that, rigorous achievements pertain to the bipartite case [8]. In particular, the entropy of entanglement is accepted as a measure for pure states of bipartite systems [9]; the entanglement of formation [10], the entanglement distillation [11; 12; 13] and related entropies of entanglement [14] are acknowledged as faithful measures also for bipartite mixed systems [15]. A broad literature is devoted to the study of entanglement in multipartite systems. Several approaches have been proposed, such as the study of equivalence classes in the case of multipartite entangled pure states [16; 17], and the characterization of entanglement by the Schmidt measure in the case of mixed multipartite entangled states [18], or by generalisations of concurrence [19; 20]. Also, entanglement-estimation-oriented approaches derived from the concept of statistical distance [21], such as the quantum Fisher information [22; 23; 24], have been proposed. In the present work we derive the Fubini-Study metric [25; 26; 27] that endows the manifold of quantum states with a Riemannian metric structure. Thus, we investigate the deep link between the Riemannian metric structure associated with the projective Hilbert space of a quantum system and the entanglement of the states of this space.
The entanglement measure that we have adopted for our analysis is the entanglement distance (ED) \(E\), a measure preliminarily proposed in Ref. [1] by some of us. Our investigation shows that entanglement has a geometric interpretation. In fact, \(E(|\psi\rangle)\) is the minimum value of the sum of the squared distances between \(|\psi\rangle\) and its conjugate states, namely the states \(\mathbf{v}^{\mu}\cdot\mathbf{\sigma}^{\mu}|\psi\rangle\), where \(\mathbf{v}^{\mu}\) are unit vectors and \(\mu\) runs on the number of parties. Furthermore, the proposed geometric approach allows us to derive a general method to determine whether or not two states actually are the same state up to the action of local unitary (LU) operators. Furthermore, we show that the ED, along with its convex roof expansion to mixed states, is an entanglement monotone in the sense of Refs. [14; 28], that is, it fulfils the three following conditions: _i)_ \(E(|\psi\rangle)=0\) iff \(|\psi\rangle\) is fully separable; _ii)_ \(E\) is invariant under LU transformations; _iii)_ \(E\) does not increase, on average, under local operations and classical communication (LOCC). Two different proofs are provided for this latter property. We also show that in the case of a two-qubit pure state, the entanglement distance for a state \(|\psi\rangle\) coincides with twice the square of the concurrence of this state. We then propose a necessary condition (sufficient for \(M=2\)) for the LU equivalence of two pure quantum states, relying on the local properties of the associated Fubini-Study metrics. Finally, we report some examples of the application of the proposed geometric approach to three parameter-dependent families of states, derived from the Greenberger-Horne-Zeilinger states [29], the Briegel-Raussendorf states [17] and the W states [16]. We have shown that for \(M=2\) the three families belong to the same class. For \(M=3\) the family of Greenberger-Horne-Zeilinger states and the family of Briegel-Raussendorf states belong to the same class, whereas for \(M=4\) the three families belong to disjoint classes. ## II Geometry of projective Hilbert space Quantum mechanics is essentially a geometric theory. In that sense, a useful geometrical tool is that of the Riemannian metric structure associated with the manifold of states of quantum mechanics. The Hilbert space is endowed with a Hermitian scalar product that naturally induces a distance between vectors. If \(\mathcal{H}\) denotes the Hilbert space of a general quantum system, given two vectors \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) in \(\mathcal{H}\), from the scalar product \(\langle\psi_{1}|\psi_{2}\rangle\) one derives the norm \(\|\cdot\|\) and the (finite) distance between these two vectors as \[D(|\psi_{1}\rangle,|\psi_{2}\rangle)=\||\psi_{1}\rangle-|\psi_{2}\rangle\|=\langle\psi|\psi\rangle^{1/2}\,, \tag{1}\] where \(|\psi\rangle=|\psi_{1}\rangle-|\psi_{2}\rangle\). In the case of two normalized vectors \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\), it results \[D(|\psi_{1}\rangle,|\psi_{2}\rangle)=\left[2\left(1-\text{Re}(\langle\psi_{1}|\psi_{2}\rangle)\right)\right]^{1/2}. \tag{2}\] Furthermore, each Hilbert space has the structure of a differentiable manifold, so it is always possible to define a local chart on \(\mathcal{H}\) which includes two close states. This allows one to derive the metric tensor induced by the above-defined distance. Let \(|\psi\rangle\) and \(|\psi\rangle+|d\psi\rangle\) be two close vectors.
The squared (differential) distance between these is derived by developing \(D\) up to the second order, and it results \[d^{2}(|\psi\rangle+|d\psi\rangle,|\psi\rangle)=\langle d\psi|d\psi\rangle\,. \tag{3}\] Thus, by means of a local chart, the normalized vectors in \(\mathcal{H}\) smoothly depend on an \(N\)-dimensional parameter \(\xi\in\mathbb{R}^{N}\) and one has \[|d\psi\rangle=\sum_{\mu}|\partial_{\mu}\psi(\xi)\rangle d\xi^{\mu}\,, \tag{4}\] where with \(\partial_{\mu}\psi\) we mean \(\partial\psi/\partial\xi^{\mu}\). Thus one has \[d^{2}(|\psi\rangle+|d\psi\rangle,|\psi\rangle)=\sum_{\mu\nu}\langle\partial_{\mu}\psi|\partial_{\nu}\psi\rangle d\xi^{\nu}d\xi^{\mu}\,. \tag{5}\] Although the matrix elements \(\langle\partial_{\mu}\psi|\partial_{\nu}\psi\rangle\) might seem to be the entries of a Riemannian metric tensor for \(\mathcal{H}\), they do not provide a physically meaningful distance between states. In fact, the Hilbert space provides a redundant description of quantum states. The latter are associated with the rays of the Hilbert space, and two normalized kets differing by a phase factor \(e^{i\alpha}\) represent the same quantum state. Consistently, the distance between \(|\psi\rangle\) and \(|\psi\rangle+|d\psi\rangle\), and the one between \(|\psi^{\prime}\rangle=e^{i\alpha}|\psi\rangle\) and \(|\psi^{\prime}\rangle+|d\psi^{\prime}\rangle\), must be the same. By resorting to a local chart, we may express this requirement in a mathematical framework. An appropriate metric tensor for the state space has to be invariant under the gauge transformation \(|\psi(\xi)\rangle\to e^{i\alpha(\xi)}|\psi(\xi)\rangle\). This is accomplished with the Fubini-Study metric, which gives the (squared) distance between two neighbouring rays \[d_{FS}^{2}(|\psi\rangle+|d\psi\rangle,|\psi\rangle)=\langle d\psi|d\psi\rangle-\langle\psi|d\psi\rangle\langle d\psi|\psi\rangle\,, \tag{6}\] from which one derives the metric tensor \[g_{\mu\nu}=\langle\partial_{\mu}\psi|\partial_{\nu}\psi\rangle-\langle\partial_{\mu}\psi|\psi\rangle\langle\psi|\partial_{\nu}\psi\rangle\,. \tag{7}\] The Fubini-Study metric (6) is therefore defined on the projective Hilbert space \(\mathcal{P}\mathcal{H}\) [25; 26], that is, on the set of equivalence classes of non-zero vectors \(|\psi\rangle\in\mathcal{H}\) for the relation \(\sim_{p}\) on \(\mathcal{H}\) given by \(|\psi\rangle\sim_{p}|\phi\rangle\) iff \(|\psi\rangle=\alpha|\phi\rangle\), for some \(\alpha\in\mathbb{C}\), \(\alpha\neq 0\). It is worth remarking that one can define the square of the (finite) distance between two rays \([|\phi_{1}\rangle]_{p},[|\phi_{2}\rangle]_{p}\in\mathcal{P}\mathcal{H}\), associated with the normalized states \(e^{i\alpha_{1}}|\phi_{1}\rangle,e^{i\alpha_{2}}|\phi_{2}\rangle\), respectively, as follows \[D_{FS}^{2}(|\phi_{1}\rangle,|\phi_{2}\rangle)=(1-|\langle\phi_{1}|\phi_{2}\rangle|^{2})\,. \tag{8}\] One can easily verify that the latter distance induces the metric tensor (7). In fact, by expanding \(|\phi_{1}\rangle\) up to second order as \[|\phi_{1}(\xi)\rangle=|\psi\rangle+\sum_{\mu}|\partial_{\mu}\psi\rangle d\xi^{\mu}+\frac{1}{2}\sum_{\mu\nu}|\partial_{\mu\nu}^{2}\psi\rangle d\xi^{\mu}d\xi^{\nu}\,, \tag{9}\] and setting \(|\phi_{2}\rangle=|\psi\rangle\), from Eq. (8) one gets \[D_{FS}^{2}(|\phi_{1}\rangle,|\phi_{2}\rangle)=\sum_{\mu\nu}g_{\mu\nu}d\xi^{\mu}d\xi^{\nu}\,, \tag{10}\] where \(g_{\mu\nu}\) is that of Eq. (7).
## III The geometric meaning of the entanglement measure As mentioned above, the present study aims at investigating the deep link between the Riemannian metric structure associated with the projective Hilbert space and the entanglement properties of the states of this space. To this end, we endow the projective Hilbert space with a metric, derived from the Fubini-Study one, whose properties make it an attractive basis for the definition of an entanglement measure. We consider the case of the Hilbert space \(\mathcal{H}=\mathcal{H}^{0}\otimes\mathcal{H}^{1}\otimes\cdots\otimes\mathcal{H}^{M-1}\), the tensor product of \(M\) single-qubit Hilbert spaces. An entanglement measure must be invariant under LU transformations. Thus, given \([|\phi\rangle]_{p},[|\psi\rangle]_{p}\in\mathcal{P}\mathcal{H}\) and the associated normalized vectors \(|\phi\rangle,|\psi\rangle\in\mathcal{H}\), we define the following equivalence relation between elements of the projective Hilbert space \[[|\phi\rangle]_{p}\sim[|\psi\rangle]_{p}\,,\quad\text{iff}\ |\phi\rangle=e^{i\alpha}\prod_{\mu=0}^{M-1}U^{\mu}|\psi\rangle\,, \tag{11}\] where, for \(\mu=0,\ldots,M-1\), each operator \(U^{\mu}\) is an arbitrary \(SU(2)\) LU operator that operates on the \(\mu\)th qubit, and \(\alpha\in\mathbb{R}\). With this equivalence relation, one derives the quotient set \(\mathcal{P}\mathcal{H}/\sim\). Thus, the entanglement measure \(E\) has to be a function \(E:\mathcal{P}\mathcal{H}/\sim\rightarrow\mathbb{R}^{+}\), that is, a function of the equivalence classes of \(\mathcal{P}\mathcal{H}\) by \(\sim\), \[[|\psi\rangle]=\{|\phi\rangle\in\mathcal{P}\mathcal{H}\,|\;|\phi\rangle\sim|\psi\rangle\}\;. \tag{12}\] Following Ref. [1], we derive an entanglement measure from a distance inspired by the Fubini-Study one. For each normalized ket \(|\psi\rangle\in\mathcal{H}\) we consider \[\left\{|U,\psi\rangle=\prod_{\mu=0}^{M-1}U^{\mu}|\psi\rangle\right\}\,, \tag{13}\] the set of all the vectors derived from \(|\psi\rangle\) under the action of LU operators, where, for \(\mu=0,\ldots,M-1\), each operator \(U^{\mu}\) is an arbitrary \(SU(2)\) LU operator that operates on the \(\mu\)th qubit. Note that the set of kets in (13), derived by varying the operators \(U^{\mu}\), is the class \([|\psi\rangle]\), and also that all these kets have the same degree of entanglement. For each vector \(|U,\psi\rangle\) in (13), we define a local chart in a neighborhood of it by means of the unitary operator \(e^{-i\sum_{\mu=0}^{M-1}\sigma_{\mathbf{n}}^{\mu}\xi^{\mu}}\), depending on real parameters \(\xi^{\mu}\), and where \(\mathbf{n}^{\mu}\) are assigned unit vectors. In this way, to the point \(\xi^{\mu}=0\), for \(\mu=0,\ldots,M-1\), corresponds the vector \(|U,\psi\rangle\). Here and in the following we use the notation \(\sigma_{\mathbf{n}}^{\mu}=\mathbf{n}^{\mu}\cdot\mathbf{\sigma}^{\mu}\), and for \(\mu=0,\ldots,M-1\), we denote by \(\sigma_{1}^{\mu}\), \(\sigma_{2}^{\mu}\) and \(\sigma_{3}^{\mu}\) the three Pauli matrices operating on the \(\mu\)-th qubit, where the index \(\mu\) labels the spins. We consider an infinitesimal variation of the ket \(|U,\psi\rangle\) given by \[|dU,\psi\rangle=\sum_{\mu=0}^{M-1}d\tilde{U}^{\mu}|U,\psi\rangle\,, \tag{14}\] where \[d\tilde{U}^{\mu}=-i\sigma_{\mathbf{n}}^{\mu}d\xi^{\mu} \tag{15}\] rotates the \(\mu\)th qubit by an infinitesimal angle \(2d\xi^{\mu}\) around the unit vector \(\mathbf{n}^{\mu}\). By substituting \(|U,\psi\rangle\) and \(|dU,\psi\rangle\) in Eq.
(6), in place of \(|\psi\rangle\) and \(|d\psi\rangle\), respectively, we get \[d_{FS}^{2}(|U,\psi\rangle+|dU,\psi\rangle,|U,\psi\rangle)=\sum_{\mu\nu}g_{\mu\nu}(|\psi\rangle,\mathbf{v})d\xi^{\mu}d\xi^{\nu}\,, \tag{16}\] where the corresponding _projective Fubini-Study metric tensor_ is \[g_{\mu\nu}(|\psi\rangle,\mathbf{v})=\langle\psi|\sigma_{\mathbf{v}}^{\mu}\sigma_{\mathbf{v}}^{\nu}|\psi\rangle-\langle\psi|\sigma_{\mathbf{v}}^{\mu}|\psi\rangle\langle\psi|\sigma_{\mathbf{v}}^{\nu}|\psi\rangle\,, \tag{17}\] \(\mathbf{v}=(\mathbf{v}^{0},\ldots,\mathbf{v}^{M-1})\) and the unit vectors \(\mathbf{v}^{\mu}\), \(\mu=0,\ldots,M-1\), are derived by a rotation of the original ones of Eq. (15), according to \(\mathbf{v}^{\nu}\cdot\mathbf{\sigma}^{\nu}=U^{\nu\dagger}\mathbf{n}^{\nu}\cdot\mathbf{\sigma}^{\nu}U^{\nu}\), where there is no summation on the index \(\nu\). Of course, for each state \(|\psi\rangle\), the metric tensor \(g_{\mu\nu}(|\psi\rangle,\mathbf{v})\) is not invariant under rotations of the unit vectors \(\mathbf{v}^{\mu}\). In order to derive a measure that is invariant under rotations of the unit vectors, we define the entanglement measure of \([|\psi\rangle]\) as the infimum of the trace of \(g_{\mu\nu}(|\psi\rangle,\mathbf{v})\) over all the possible orientations of the unit vectors \(\mathbf{v}^{\mu}\). In formulas, we define the ED as \[E(|\psi\rangle)=\inf_{\{\mathbf{v}^{\mu}\}_{\mu}}\mathrm{tr}(g(|\psi\rangle,\mathbf{v}))\,, \tag{18}\] where \(\mathrm{tr}\) is the trace operator and where the \(\inf\) is taken over all possible orientations of the unit vectors \(\mathbf{v}^{\nu}\) (\(\nu=0,\ldots,M-1\)). We emphasize that, in general, the inspection of the block structure of \(g(|\psi\rangle)\) is informative about \(k\)-separability. Consider a choice of unit vectors \(\mathbf{v}^{\nu}\), giving rise to a metric \(g(|\psi\rangle,\mathbf{v})\) which is, up to permutation of the qubits' indices, diagonal by blocks. In a previous paper by one of the authors [30], it was shown that \(n\geq p\geq k\), with \(n\) the number of such blocks, \(p\) the _persistency of entanglement_ and \(k\) the _degree of separability_. In particular, this implies that if there exists a given choice of unit vectors yielding \(g(|\psi\rangle,\mathbf{v})\) irreducible for any permutation of its indices (i.e. \(n=1\)), then \(|\psi\rangle\) is genuinely multipartite entangled (i.e. \(k=1\)). From Eq. (17) we derive \[\mathrm{tr}[g(|\psi\rangle,\mathbf{v})]=\sum_{\mu=0}^{M-1}\left[1-(\mathbf{v}^{\mu}\cdot\langle\psi|\mathbf{\sigma}^{\mu}|\psi\rangle)^{2}\right]\,, \tag{19}\] which shows that the unit vectors \[\tilde{\mathbf{v}}^{\mu}=\pm\langle\psi|\mathbf{\sigma}^{\mu}|\psi\rangle/\|\langle\psi|\mathbf{\sigma}^{\mu}|\psi\rangle\|\,, \tag{20}\] provide the \(\inf\) of \(\mathrm{tr}(g)\). Therefore, we obtain the following directly computable formula for the ED \[E(|\psi\rangle)=M-\sum_{\mu=0}^{M-1}\left\|\langle\psi|\mathbf{\sigma}^{\mu}|\psi\rangle\right\|^{2}. \tag{21}\] Note that the latter equation can be seen as the sum of the \(M\) single-qubit EDs \[E_{\mu}(|\psi\rangle)=1-\|\langle\psi|\mathbf{\sigma}^{\mu}|\psi\rangle\|^{2}. \tag{22}\] \(E_{\mu}(|\psi\rangle)\) is a measure of the bipartite entanglement of qubit \(\mu\) with the rest of the system. The \(\inf\) operation makes the measure (18) independent of the choice of the operators \(U^{\mu}\). Consequently, its numerical value is associated with the class (12), and does not depend on a specific element chosen inside the class. This is a necessary condition for a well-defined entanglement measure [14].
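As an illustration, Eq. (21) can be evaluated directly from the single-qubit Bloch vectors. The following is a minimal numpy sketch (our own, not from the original paper; all helper names are ours); for a GHZ state it returns the maximal value \(E/M=1\):

```python
import numpy as np
from functools import reduce

# Pauli matrices
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def single_site_op(op, mu, M):
    """Embed a single-qubit operator acting on qubit mu into the M-qubit space."""
    ops = [I2] * M
    ops[M - 1 - mu] = op          # qubit 0 is the least significant bit
    return reduce(np.kron, ops)

def entanglement_distance(psi):
    """Eq. (21): E(|psi>) = M - sum_mu ||<psi|sigma^mu|psi>||^2, |psi> normalized."""
    M = int(np.log2(psi.size))
    E = float(M)
    for mu in range(M):
        bloch = np.array([np.vdot(psi, single_site_op(S, mu, M) @ psi).real
                          for S in (SX, SY, SZ)])
        E -= bloch @ bloch
    return E

# GHZ state of M=3 qubits (theta = pi/4 in Eq. (38) below): E/M = 1
M = 3
ghz = np.zeros(2**M, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(entanglement_distance(ghz) / M)   # -> 1.0
```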
The entanglement measure can be derived by a minimum distance principle, if studied in the framework of the Riemannian geometry of the projective Hilbert space. In fact, according to Eq. (8) the square of the distance between the rays associated with the unit vectors \(|\phi\rangle\) and \(|\phi^{\mu}(\mathbf{v}^{\mu})\rangle\equiv\sigma_{\mathbf{v}}^{\mu}|\phi\rangle\) is \[D_{FS}^{2}(|\phi\rangle,|\phi^{\mu}(\mathbf{v}^{\mu})\rangle)=1-|\langle\phi|\phi^{\mu}(\mathbf{v}^{\mu})\rangle|^{2}\,. \tag{23}\] We name \(\mathbf{v}^{\mu}\)-conjugate of \(|\phi\rangle\) the states \(|\phi^{\mu}(\mathbf{v}^{\mu})\rangle\), for \(\mu=0,\ldots,M-1\). Therefore \[E(|\phi\rangle)=\inf_{\{\mathbf{v}^{\mu}\}_{\mu}}\sum_{\mu=0}^{M-1}D_{FS}^{2}(|\phi\rangle,|\phi^{\mu}(\mathbf{v}^{\mu})\rangle)\,. \tag{24}\] This shows that \(E(|\phi\rangle)\) is the minimum, over the unit vectors \(\mathbf{v}^{\mu}\), of the sum of the squared (finite) distances between a state \(|\phi\rangle\) and all the states derived under the action of the operators \(\sigma_{\mathbf{v}}^{\mu}\). For fully separable states this minimum is zero, whereas for maximally entangled states it reaches the maximum possible value \(M\). Therefore, in this geometric framework, entanglement represents an obstruction to minimizing the sum of the squared distances between a state \(|\phi\rangle\) and all its \(\mathbf{v}^{\mu}\)-conjugate states. ## IV Geometric characterization of the equivalence classes One of the basic questions that this geometric approach can answer is to determine when two states certainly do not belong to the same equivalence class. Two states sharing the same degree of entanglement might indeed be LU equivalent or not. Via the study of the full metric tensor associated with two given states with the same entanglement magnitude, one can determine whether these two states belong to different equivalence classes. Let us consider an equivalence class, say \([|\phi\rangle]\), and let us consider two states \(|\phi_{1}\rangle,|\phi_{2}\rangle\in[|\phi\rangle]\). Then, there exist \(M\) LU operators \(U^{\mu}\), \(\mu=0,\ldots,M-1\), each one operating on the \(\mu\)th qubit, and \(\alpha\in\mathbb{R}\), such that \[|\phi_{1}\rangle=e^{i\alpha}U|\phi_{2}\rangle\,, \tag{25}\] where \(U=\prod_{\mu=0}^{M-1}U^{\mu}\). We can write \[\begin{split}&\langle\phi_{1}|\sigma_{\mathbf{v}}^{\mu}\sigma_{\mathbf{v}}^{\nu}|\phi_{1}\rangle-\langle\phi_{1}|\sigma_{\mathbf{v}}^{\mu}|\phi_{1}\rangle\langle\phi_{1}|\sigma_{\mathbf{v}}^{\nu}|\phi_{1}\rangle\\ =&\langle\phi_{2}|U^{\dagger}\sigma_{\mathbf{v}}^{\mu}\sigma_{\mathbf{v}}^{\nu}U|\phi_{2}\rangle-\langle\phi_{2}|U^{\dagger}\sigma_{\mathbf{v}}^{\mu}U|\phi_{2}\rangle\langle\phi_{2}|U^{\dagger}\sigma_{\mathbf{v}}^{\nu}U|\phi_{2}\rangle\\ =&\langle\phi_{2}|\sigma_{\mathbf{n}}^{\mu}\sigma_{\mathbf{n}}^{\nu}|\phi_{2}\rangle-\langle\phi_{2}|\sigma_{\mathbf{n}}^{\mu}|\phi_{2}\rangle\langle\phi_{2}|\sigma_{\mathbf{n}}^{\nu}|\phi_{2}\rangle\,,\end{split} \tag{26}\] where \(\sigma_{\mathbf{n}}^{\mu}=U^{\mu\dagger}\sigma_{\mathbf{v}}^{\mu}U^{\mu}\) for \(\mu=0,\ldots,M-1\). Thus, it results \[g_{\mu\nu}(|\phi_{1}\rangle,\mathbf{v})=g_{\mu\nu}(|\phi_{2}\rangle,\mathbf{n})\,. \tag{27}\]
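To apply this identity in practice, one needs the metric tensor \(g_{\mu\nu}(|\psi\rangle,\mathbf{v})\) of Eq. (17) for arbitrary sets of unit vectors. A minimal sketch, again our own illustration, reusing the helpers of the previous listing:

```python
def metric_tensor(psi, vs):
    """g_{mu nu}(|psi>, v) of Eq. (17); vs is an (M, 3) array of unit vectors v^mu."""
    # Assumes SX, SY, SZ and single_site_op from the previous listing are in scope.
    M = int(np.log2(psi.size))
    paulis = (SX, SY, SZ)
    # sigma_v^mu = v^mu . sigma^mu, embedded in the M-qubit space
    sig = [sum(vs[mu][i] * single_site_op(paulis[i], mu, M) for i in range(3))
           for mu in range(M)]
    mean = [np.vdot(psi, s @ psi).real for s in sig]
    g = np.empty((M, M))
    for mu in range(M):
        for nu in range(M):
            g[mu, nu] = (np.vdot(psi, sig[mu] @ sig[nu] @ psi).real
                         - mean[mu] * mean[nu])
    return g
```

Scanning the choices of \(\mathbf{v}\) (and \(\mathbf{n}\)) for two given states then implements the non-equivalence test stated next.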
Therefore, two states do not belong to the same equivalence class if there exists at least one set of unit vectors \(\mathbf{v}^{\mu}\), \(\mu=0,\ldots,M-1\), such that it is not possible to determine a set of unit vectors \(\mathbf{n}^{\mu}\), \(\mu=0,\ldots,M-1\), satisfying Eq. (27) for \(\mu,\nu=0,\ldots,M-1\). Formally, \[|\phi_{1}\rangle\underset{LU}{\sim}|\phi_{2}\rangle\implies\forall\{\mathbf{v}^{\mu}\}_{\mu},\exists\{\mathbf{n}^{\mu}\}_{\mu}\bigm{|}g(|\phi_{1}\rangle,\mathbf{v})=g(|\phi_{2}\rangle,\mathbf{n}). \tag{28}\] Note that, in the general case, one cannot use Eq. (28) to determine with certainty whether two states belong to the same equivalence class, as the EM encodes information only up to the second order of the associated quantum statistical distributions, namely two-qubit correlators. In other words, it is possible for Eq. (27) to hold for any choice of \(\mathbf{n}^{\mu}\) while \(|\phi_{1}\rangle\) and \(|\phi_{2}\rangle\) differ in some higher-order statistical property and are thus not equivalent. Clearly this is not the case if \(M=2\), since the full statistics do not possess any higher-order property. For \(M=2\), the implication (28) goes both ways; hence the right-hand side stands as a necessary and sufficient condition for state equivalence. By using the unit vectors \(\tilde{\mathbf{v}}^{\nu}\) of Eq. (20) in place of the \(\mathbf{v}^{\nu}\) in Eq. (17), we get the matrix \[\tilde{g}(|\psi\rangle)=g(|\psi\rangle,\tilde{\mathbf{v}})\,, \tag{29}\] that we name entanglement metric (EM). Thus, for two states \(|\phi_{1}\rangle\) and \(|\phi_{2}\rangle\) that differ from one another under the action of LU transformations, we have \[\tilde{g}(|\phi_{1}\rangle)=g(|\phi_{2}\rangle,\mathbf{n})\,, \tag{30}\] where the unit vectors \(\mathbf{n}^{\nu}\) are derived by suitable rotations of the unit vectors \(\tilde{\mathbf{v}}^{\nu}\) provided by Eq. (20) in the case of the state \(|\phi_{2}\rangle\). Note that, in general, the unit vectors \(\mathbf{n}^{\nu}\) do not coincide with the ones given in Eq. (20) for the state \(|\phi_{2}\rangle\). In the following, we will apply this geometric approach to states belonging to three families, to verify whether or not they are in the same equivalence class. ## V Monotonicity of the entanglement distance The single-qubit ED (22) can straightforwardly be generalized to mixed states via a convex roof construction, i.e. \[E_{\mu}(\rho):=\min_{\{p_{j},\psi_{j}\}}\sum_{j}p_{j}E_{\mu}(|\psi_{j}\rangle), \tag{31}\] where the minimization is carried out over all possible realizations \(\{p_{j},\psi_{j}\}\) of \(\rho\) as a mixture of pure states [31]. A measure is called a measure of entanglement or entanglement monotone if it satisfies the following necessary conditions [14; 28]: 1. \(E(\rho)=0\) iff \(\rho\) is fully separable. 2. \(E(\rho)\) is invariant under LU operations, i.e., \(E(\rho)=E(\prod_{\mu=0}^{M-1}U^{\mu}\rho U^{\mu\dagger})\), where for \(\mu=0,\ldots,M-1\), each operator \(U^{\mu}\) is an arbitrary \(SU(2)\) LU operator that operates on the \(\mu\)th qubit. 3. \(E(\rho)\) cannot increase, on average, under local operations and classical communication. If these three conditions are met for a single-qubit measure such as (31), they are also met by their sum, the total ED \(E(\rho)=\sum_{\mu}E_{\mu}(\rho)\). Let us now prove that the entanglement distance satisfies these three conditions.
From Eq. (18) we have \(E(|\psi\rangle)=0\) iff \(\inf_{\mathbf{v}^{\mu}}g_{\mu\mu}(|\psi\rangle,\mathbf{v}^{\mu})=0\), for \(\mu=0,\ldots,M-1\). From (17) it results that \(\inf_{\mathbf{v}^{\mu}}g_{\mu\mu}(|\psi\rangle,\mathbf{v}^{\mu})=0\) iff for all \(\mu\) there exists \(\mathbf{v}^{\mu}\) such that \(\sigma_{\mathbf{v}}^{\mu}|\psi\rangle=|\psi\rangle\). Therefore \(|\psi\rangle\) is simultaneously an eigenstate of all the \(\sigma_{\mathbf{v}}^{\mu}\) with the maximum eigenvalue (\(+1\)); this is possible iff \(|\psi\rangle\) is a product of single-party eigenstates of the \(\sigma_{\mathbf{v}}^{\mu}\), for \(\mu=0,\dots,M-1\). This proves condition (i). To prove condition (ii), we start from Eq. (19), by considering \(U=\prod_{\nu=0}^{M-1}U^{\nu}\), where each operator \(U^{\nu}\) is an arbitrary \(SU(2)\) LU operator that operates on the \(\nu\)th qubit. We have \[E(U|\psi\rangle)=\inf_{\{\mathbf{v}^{\mu}\}_{\mu}}\sum_{\mu=0}^{M-1}\left[1-\left(\langle\psi|U^{\mu\dagger}\sigma_{\mathbf{v}}^{\mu}U^{\mu}|\psi\rangle\right)^{2}\right]=\inf_{\{\mathbf{u}^{\mu}\}_{\mu}}\sum_{\mu=0}^{M-1}\left[1-\left(\langle\psi|\sigma_{\mathbf{u}}^{\mu}|\psi\rangle\right)^{2}\right]=E(|\psi\rangle)\,, \tag{32}\] where \(\mathbf{u}^{\nu}\cdot\mathbf{\sigma}^{\nu}=U^{\nu\dagger}\mathbf{v}^{\nu}\cdot\mathbf{\sigma}^{\nu}U^{\nu}\) for \(\nu=0,\dots,M-1\). Both these properties are inherited by the related measure on mixed states (31). Two different proofs of condition (iii) can be found in the Appendix. ## VI Comparison between the concurrence and the entanglement distance Let us consider a general \(M=2\) qubits normalized pure state \[|\psi\rangle=\sum_{j=0}^{3}w_{j}|j\rangle\,, \tag{33}\] such that \(\sum_{j=0}^{3}|w_{j}|^{2}=1\). The concurrence for the pure state (33) is defined as [32] \[C(|\psi\rangle)=|\langle\psi|\tilde{\psi}\rangle|\,, \tag{34}\] where \(|\tilde{\psi}\rangle=\sigma_{2}^{0}\otimes\sigma_{2}^{1}\sum_{j=0}^{3}w_{j}^{*}|j\rangle\) is the spin-flipped state. By direct computation one gets [32] \[C(|\psi\rangle)=2|w_{0}w_{3}-w_{1}w_{2}|\,. \tag{35}\] By a direct calculation, from Eq. (21) one derives for the same general state \[E(|\psi\rangle)=8\left[|w_{0}|^{2}|w_{3}|^{2}+|w_{1}|^{2}|w_{2}|^{2}-w_{0}^{*}w_{3}^{*}w_{1}w_{2}-w_{0}w_{3}w_{1}^{*}w_{2}^{*}\right]=8|w_{0}w_{3}-w_{1}w_{2}|^{2}\,. \tag{36}\] Therefore we get the following general result for \(M=2\) qubit states \[E(|\psi\rangle)/2=[C(|\psi\rangle)]^{2}\,. \tag{37}\] This proves that the concurrence for pure states is a special case of the ED, valid for the case \(M=2\). ## VII Examples In this section, we apply our geometric method to investigate the entanglement properties of three classes of one/multi-parameter families of states. In all cases, the degree of entanglement of each state depends on these parameters and, in particular, the values of the parameters corresponding to maximally entangled states are known for each of the families. The first is a one-parameter family of states related to the Greenberger-Horne-Zeilinger states [29], since for a suitable choice of the parameter one gets a Greenberger-Horne-Zeilinger state. We will name the elements of such a family Greenberger-Horne-Zeilinger-like states (GHZLS). The second is a one-parameter family of states too. This class of states has been introduced by Briegel and Raussendorf in Ref. [17]; for this reason, we name the elements in this family Briegel-Raussendorf states (BRS). The third is an \((M-1)\)-parameter family of states, related to the W states. In particular, we consider a weighted combination of the \(M\) states that compose a W state of \(M\) qubits. In this case, the state with the highest degree of entanglement is known to correspond to the case of equal weights.
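Before turning to the examples, the identity (37) is easy to check numerically with the helpers introduced above (our sketch; the state is a random normalized two-qubit vector):

```python
rng = np.random.default_rng(0)
w = rng.normal(size=4) + 1j * rng.normal(size=4)
w /= np.linalg.norm(w)                      # random normalized state, Eq. (33)

C = 2 * abs(w[0] * w[3] - w[1] * w[2])      # concurrence, Eq. (35)
E = entanglement_distance(w)                # ED, Eq. (21)
print(np.isclose(E, 2 * C**2))              # Eq. (37): E = 2 C^2 -> True
```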
In the following, we consider the standard \(M\)-qubit basis \(\{|0\cdots 0\rangle\,,\,|0\cdots 01\rangle,\dots,|1\cdots 1\rangle\}\) for \(\mathcal{H}\), where \(|0\rangle_{\mu}\) (\(|1\rangle_{\mu}\)) denotes the eigenstate of \(\sigma_{3}^{\mu}\) with eigenvalue \(+1\) (\(-1\)). Thus, each basis vector is identified by \(M\) integers \(n_{0},\dots,n_{M-1}=0,1\) as \(|\{n\}\rangle=|n_{M-1}\,n_{M-2}\cdots n_{0}\rangle\). Therefore, we enumerate such basis vectors according to the binary integer representation \(|k\rangle=\left|\{n^{k}\}\right\rangle\), with \(k=\sum_{\mu=0}^{M-1}n_{\mu}^{k}2^{\mu}\), where \(n_{\nu}^{k}\) is the \(\nu\)-th digit of the number \(k\) in binary representation and \(k=0,\dots,2^{M}-1\). ### Entanglement properties #### Greenberger-Horne-Zeilinger-like states In this section, we consider a one-parameter family of states, the GHZLS, defined according to \[|GHZ,\theta\rangle_{M}=\cos(\theta)|0\rangle+\sin(\theta)|2^{M}-1\rangle\,, \tag{38}\] where \(0\leq\theta\leq\pi/2\). For \(\theta=0,\pi/2\) the states are fully separable, whereas for \(\theta=\pi/4\) the state has the maximum degree of entanglement. In this case, the trace of the metric tensor (17) results \[\operatorname{tr}(g)=M-\cos^{2}(2\theta)\sum_{\nu=0}^{M-1}(v_{3}^{\nu})^{2}\,, \tag{39}\] and, consistently with (20), it is minimised by the values \(\tilde{v}_{1}^{\nu}=\tilde{v}_{2}^{\nu}=0\), \(\tilde{v}_{3}^{\nu}=\pm 1\), for \(\nu=0,\dots,M-1\). Therefore, we have \[\tilde{g}=\sin^{2}(2\theta)J_{M}\,, \tag{40}\] where \(J_{M}\) is the \(M\times M\) matrix of ones. The ED per qubit for the GHZLS results \[E(|GHZ,\theta\rangle_{M})/M=\sin^{2}(2\theta)\,. \tag{41}\] #### Briegel Raussendorf states We denote with \(\Pi_{0}^{j}=(\mathbb{I}+\sigma_{3}^{j})/2\) and \(\Pi_{1}^{j}=(\mathbb{I}-\sigma_{3}^{j})/2\) the projector operators onto the eigenstates of \(\sigma_{3}^{j}\), \(|0\rangle_{j}\) (with eigenvalue \(+1\)) and \(|1\rangle_{j}\) (with eigenvalue \(-1\)), respectively. Each \(M\)-qubit state of the BRS class is derived by applying to the fully separable state \[|r,0\rangle=\bigotimes_{j=0}^{M-1}\frac{1}{\sqrt{2}}(|0\rangle_{j}+|1\rangle_{j})\,, \tag{42}\] the non-LU operator \[U_{0}(\phi)=\exp(-i\phi H_{0})=\prod_{j=1}^{M-1}\left(\mathbb{I}+\alpha\Pi_{0}^{j}\Pi_{1}^{j+1}\right)\,, \tag{43}\] where \(H_{0}=\sum_{j=1}^{M-1}\Pi_{0}^{j}\Pi_{1}^{j+1}\) and \(\alpha=(e^{-i\phi}-1)\). The full operator (43) is diagonal on the states of the standard basis \(\{|0\cdots 0\rangle\,,\,|0\cdots 01\rangle,\ldots,|1\cdots 1\rangle\}\). In fact, the eigenvalue \(\lambda_{k}\) of the operator (43), corresponding to a given eigenstate \(|k\rangle\) of this basis, results \[\lambda_{k}=\sum_{j=0}^{n(k)}\binom{n(k)}{j}\alpha^{j}=(1+\alpha)^{n(k)}=e^{-i\phi\,n(k)}\,, \tag{44}\] where \(n(k)\) is the number of ordered couples 01 inside the binary sequence of the basis vector \(|k\rangle\). For the initial state (42) we consistently get \[|r,0\rangle_{M}=2^{-M/2}\sum_{k=0}^{2^{M}-1}|k\rangle\,, \tag{45}\] and, under the action of \(U_{0}(\phi)\), one obtains \[|r,\phi\rangle_{M}=2^{-M/2}\sum_{k=0}^{2^{M}-1}\sum_{j=0}^{n(k)}\binom{n(k)}{j}\alpha^{j}|k\rangle\,. \tag{46}\] For \(\phi=2\pi k\), with \(k\in\mathbb{Z}\), this state is separable, whereas, for all the other choices of the value \(\phi\), it is entangled. In particular, in [17] it is argued that the values \(\phi=(2k+1)\pi\), where \(k\in\mathbb{Z}\), give maximally entangled states.
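Since \(\lambda_{k}=e^{-i\phi n(k)}\), the BRS amplitudes are fully determined by the count \(n(k)\) of '01' pairs in the binary string of \(k\). A small sketch (ours, reusing the helpers above) that builds \(|r,\phi\rangle_{M}\) and confirms via Eq. (21) that \(\phi=\pi\) yields \(E/M=1\):

```python
def brs(phi, M):
    """Briegel-Raussendorf state |r,phi>_M of Eq. (46)."""
    amps = np.empty(2**M, dtype=complex)
    for k in range(2**M):
        bits = format(k, f"0{M}b")                         # string n_{M-1} ... n_0
        n_k = sum(bits[i:i + 2] == "01" for i in range(M - 1))
        amps[k] = np.exp(-1j * phi * n_k)                  # lambda_k of Eq. (44)
    return amps / 2**(M / 2)

for M in (2, 3, 4):
    print(M, entanglement_distance(brs(np.pi, M)) / M)     # -> 1.0 for each M
```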
#### Briegel Raussendorf states: case \(M=2\) In the case \(M=2\) the one-parameter family of BRS is \[|r,\phi\rangle_{2}=\sum_{k=0}^{3}c_{k}|k\rangle\,, \tag{47}\] with \(c_{k}=1/2\) if \(k\neq 1\), and \(c_{1}=e^{-i\phi}/2\), where \(\phi\in[0,2\pi]\). By direct calculation one gets for the trace of the Fubini-Study metric \[\text{tr}(g)=\sum_{\nu=0}^{1}\left[1-c^{2}\left(cv_{1}^{\nu}+(-1)^{\nu+1}\,sv_{2}^{\nu}\right)^{2}\right]\,, \tag{48}\] where \(c=\cos\left(\phi/2\right)\) and \(s=\sin\left(\phi/2\right)\). Eq. (48) is minimised with the choice \(\tilde{\mathbf{v}}^{\nu}=\pm(c,(-1)^{\nu+1}s,0)\). Consistently, the EM and the ED per qubit result \[\tilde{g}=s^{2}J_{2} \tag{49}\] and \[E(|r,\phi\rangle_{2})/2=s^{2}\,. \tag{50}\] #### Briegel Raussendorf states: case \(M=3\) In the case \(M=3\) we have \[|r,\phi\rangle_{3}=\sum_{k=0}^{7}c_{k}|k\rangle\,, \tag{51}\] with \(c_{k}=1/2^{3/2}\) if \(k\neq 1,2,3,5\), and \(c_{k}=e^{-i\phi}/2^{3/2}\) if \(k=1,2,3,5\), where \(\phi\in[0,2\pi]\). The trace of \(g\), \[\text{tr}(g)=\left[3-\left((c^{2}v_{1}^{0}+csv_{2}^{0})^{2}+(c^{2}v_{1}^{1})^{2}+(c^{2}v_{1}^{2}-csv_{2}^{2})^{2}\right)\right]\,, \tag{52}\] is minimised with the choices \(\tilde{\mathbf{v}}^{0}=(c,s,0)\), \(\tilde{\mathbf{v}}^{1}=(1,0,0)\) and \(\tilde{\mathbf{v}}^{2}=(c,-s,0)\), yielding the ED \(E(|r,\phi\rangle_{3})=3-2c^{2}-c^{4}=s^{2}(3+c^{2})\). #### Briegel Raussendorf states: case \(M=4\) In the case \(M=4\), the trace of \(g\) is minimised with the analogous choices \(\tilde{\mathbf{v}}^{0}=(c,s,0)\), \(\tilde{\mathbf{v}}^{1}=(1,0,0)\), \(\tilde{\mathbf{v}}^{2}=(1,0,0)\) and \(\tilde{\mathbf{v}}^{3}=(c,-s,0)\). The EM in this case results \[\tilde{g}=s^{2}\left(\begin{array}{cccc}1&c&0&0\\ c&1+c^{2}&c^{2}&0\\ 0&c^{2}&1+c^{2}&c\\ 0&0&c&1\end{array}\right)\,, \tag{57}\] thus the ED reads \[E(|r,\phi\rangle_{4})/4=s^{2}\left(4+2c^{2}\right)/4\,. \tag{58}\] #### W-states In this section, we consider an \((M-1)\)-parameter family of states, the \(W\) states, defined according to \[|W,\mathbf{\alpha}\rangle_{M}=\sum_{j=1}^{M}\alpha_{j}|2^{j-1}\rangle\,, \tag{59}\] with \[\begin{split}\alpha_{1}&=c_{1}\\ \alpha_{2}&=s_{1}c_{2}\\ &\;\;\vdots\\ \alpha_{k}&=s_{1}s_{2}\cdots s_{k-1}c_{k}\\ &\;\;\vdots\\ \alpha_{M-1}&=s_{1}s_{2}\cdots s_{M-2}c_{M-1}\\ \alpha_{M}&=s_{1}s_{2}\cdots s_{M-2}s_{M-1}\,,\end{split} \tag{60}\] where we have set \(c_{j}=\cos(\theta_{j})\), \(s_{j}=\sin(\theta_{j})\), and where \(0\leq\theta_{j}\leq\pi/2\), \(j=1,\ldots,M-1\). If the number of indices \(k\) such that \(\alpha_{k}=0\) is \(r\), then the state (59) is \(r\)-partite. For \(\alpha_{k}=1/\sqrt{M}\), for each \(k\), the state (59) has the maximum degree of entanglement within the family. In the case of the state (59), the trace of the metric tensor (17) results \[\text{tr}(g)=\left[M-\sum_{\nu=0}^{M-1}[(1-2|\alpha_{j(\nu)}|^{2})v_{3}^{\nu}]^{2}\right]\,, \tag{61}\] where \(j(\nu)\) is an invertible map \(j:\{0,\ldots,M-1\}\rightarrow\{1,\ldots,M\}\). Consistently with (20), Eq. (61) is minimised by the values \(\tilde{v}_{1}^{\nu}=\tilde{v}_{2}^{\nu}=0\), \(\tilde{v}_{3}^{\nu}=\pm 1\), for \(\nu=0,\ldots,M-1\). Therefore, the ED for these states results \[E(|W,\mathbf{\alpha}\rangle_{M})/M=4(1-\sum_{j}|\alpha_{j}|^{4})/M\,. \tag{62}\] The state with the highest entanglement corresponds to the choice \(\alpha_{j}=1/\sqrt{M}\), for \(j=1,\ldots,M\). The corresponding value of the ED is \[E(|W,\mathbf{\alpha}\rangle_{M})/M=4(1-1/M)/M\,. \tag{63}\] Therefore, except for the case \(M=2\), \(E(|W,\mathbf{\alpha}\rangle_{M})/M<1\). This is due to the non-vanishing of the expectation values of the operators \(\tilde{\mathbf{v}}^{\nu}\cdot\mathbf{\sigma}^{\nu}\) on the state with the highest entanglement. In this sense, such a state cannot be considered a maximally entangled state. Remarkably, the \(W\) state with maximal entanglement remains a state of maximal entanglement under particle loss.
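Eqs. (62)-(63) can be checked in the same way (our sketch, reusing the helpers above); for the equal-weight \(M=3\) state both expressions give \(8/9\):

```python
def w_state(alpha):
    """|W,alpha>_M of Eq. (59): amplitude alpha_j on the basis state |2^{j-1}>."""
    M = len(alpha)
    psi = np.zeros(2**M, dtype=complex)
    for j, a in enumerate(alpha, start=1):
        psi[2**(j - 1)] = a
    return psi

M = 3
psi = w_state(np.ones(M) / np.sqrt(M))                       # equal weights
print(entanglement_distance(psi) / M, 4 * (1 - 1 / M) / M)   # both -> 8/9, Eq. (63)
```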
#### W-states: case \(M=2\) For \(M=2\) it is \[|W,\mathbf{\alpha}\rangle_{2}=\alpha_{1}|1\rangle+\alpha_{2}|2\rangle\,, \tag{64}\] where \(\alpha_{1}=\cos(\theta_{1})\) and \(\alpha_{2}=\sin(\theta_{1})\), with \(\theta_{1}\in[0,\pi/2]\). With the choice \(\tilde{v}_{1}^{\nu}=\tilde{v}_{2}^{\nu}=0\), \(\tilde{v}_{3}^{\nu}=(-1)^{\nu}\), for \(\nu=0,1\), by direct calculation one gets the following expressions for the EM and the ED \[\tilde{g}=\sin^{2}(2\theta_{1})J_{2}\,, \tag{65}\] \[E(|W,\mathbf{\alpha}\rangle_{2})/2=\sin^{2}(2\theta_{1})\,, \tag{66}\] respectively. #### W-states: case \(M=3\) For \(M=3\) it is \[|W,\mathbf{\alpha}\rangle_{3}=\alpha_{1}|1\rangle+\alpha_{2}|2\rangle+\alpha_{3}|4\rangle\,, \tag{67}\] where \(\alpha_{1}=\cos(\theta_{1})\), \(\alpha_{2}=\sin(\theta_{1})\cos(\theta_{2})\) and \(\alpha_{3}=\sin(\theta_{1})\sin(\theta_{2})\). By direct calculation one gets the ED \[E(|W,\mathbf{\alpha}\rangle_{3})/3=\frac{4}{3}\left[1-\left(\cos^{4}(\theta_{1})+\sin^{4}(\theta_{1})\cos^{4}(\theta_{2})+\sin^{4}(\theta_{1})\sin^{4}(\theta_{2})\right)\right]\,. \tag{68}\] ### Comparison between the families of states #### Entanglement measure The ED \(E(|r,\phi\rangle_{M})/M\), as a function of \(\phi\) and for \(M=2,3,4\), correctly captures the expected degree of entanglement of the BRS. In fact, for fully separable states (\(\phi=0,2\pi\)) it vanishes, whereas for the maximally entangled state, corresponding to \(\phi=\pi\), the ED reaches the maximum (possible) value, \(E(|r,\pi\rangle_{M})/M=1\). The latter indicates that the expectation value of any operator \(\tilde{\mathbf{v}}^{\nu}\cdot\mathbf{\sigma}^{\nu}\) (\(\nu=0,\ldots,M-1\)) vanishes on the maximally entangled states of this class. The ED for the GHZLS is null in the case of fully separable states (\(\theta=0,\pi/2\)) and gives the maximum value (\(M\)) in the case of the maximally entangled state (\(\theta=\pi/4\)). Also for the class of GHZLS, the expectation value of the operators \(\tilde{\mathbf{v}}^{\nu}\cdot\mathbf{\sigma}^{\nu}\) (\(\nu=0,\ldots,M-1\)) on the maximally entangled state is zero. In the case \(M=2\), \(\mathbf{\alpha}=(\cos(\theta_{1}),\sin(\theta_{1}))\); thus for fully separable states (\(\theta_{1}=0,\pi/2\)) \(E(|W,\mathbf{\alpha}\rangle_{2})/2\) is null, whereas in the case \(\theta_{1}=\pi/4\) it results \(E(|W,\mathbf{\alpha}\rangle_{2})/2=1\). In Fig. 1 we consider the W class of states for the case \(M=3\), with \(\mathbf{\alpha}=(\cos(\theta_{1}),\sin(\theta_{1})\cos(\theta_{2}),\sin(\theta_{1})\sin(\theta_{2}))\), and report in a 3D plot the measure \(E(|W,\mathbf{\alpha}\rangle_{3})/3\) as a function of \(\theta_{1},\theta_{2}\), according to Eq. (62), for the states (59). The measure (18) captures in a surprisingly clear way the entanglement properties of this family of states. In particular, \(E(|W,\mathbf{\alpha}\rangle_{3})/3\) is null in the case of fully separable states (\(\theta_{1}=0\)). The maximum entanglement \(E(|W,\mathbf{\alpha}\rangle_{3})/3=8/9\) is obtained in the case of the state given by \(\theta_{1}=\arccos\bigl{(}1/\sqrt{3}\bigr{)}\) and \(\theta_{2}=\arccos\bigl{(}1/\sqrt{2}\bigr{)}\).
\(E(|W,\mathbf{\alpha}\rangle_{3})/3<1\) indicates that the expectation value of the operators \(\tilde{\mathbf{v}}^{\nu}\cdot\mathbf{\sigma}^{\nu}\) (\(\nu=0,\ldots,M-1\)) on the W states is different from zero. For this reason, the state with the maximum value of entanglement is not a maximally entangled state. Moreover, a lower value of the entanglement could be seen as a drawback of this class of states, because the quantum correlations are weaker than in the case of the other states considered. Nevertheless, the W states are robust under local measurements. For the case of bi-separable states (\(\theta_{2}=0,\pi/2\)) it results \(0<E(|W,\mathbf{\alpha}\rangle_{3})/3\leq 2/3\). #### Equivalence classes characterization The present section aims to determine when a pair of states, chosen from two different families among the three considered, actually belong to the same equivalence class. We compare the states with the highest entanglement of each family. Since the relevant geometric quantities for our analysis are, in the case of the GHZLS, size-independent, we take the states of this family as the reference ones. #### Case \(M=2\) Let us start with the case \(M=2\). The maximally entangled state within the GHZLS is \(|GHZ,\pi/4\rangle_{2}\), the unit vectors that minimize the trace of the metric tensor are \(\tilde{\mathbf{v}}^{0}=\tilde{\mathbf{v}}^{1}=(0,0,\pm 1)\), and the corresponding EM is \(\tilde{g}(|GHZ,\pi/4\rangle_{2})=J_{2}\). In the case of the second family, the maximally entangled state is \(|r,\pi\rangle_{2}\); the unit vectors that minimize the trace of \(g\) are \(\tilde{\mathbf{v}}^{0}=\pm(0,-1,0)\) and \(\tilde{\mathbf{v}}^{1}=\pm(0,1,0)\). The corresponding EM results \(\tilde{g}(|r,\pi\rangle_{2})=J_{2}=\tilde{g}(|GHZ,\pi/4\rangle_{2})\). Therefore, according to our criterion, \(|GHZ,\pi/4\rangle_{2}\) and \(|r,\pi\rangle_{2}\) belong to the same equivalence class: \([|GHZ,\pi/4\rangle_{2}]=[|r,\pi\rangle_{2}]\). One can verify the correctness of this result, since by direct calculation one gets \(|GHZ,\pi/4\rangle_{2}=e^{-i\sigma_{2}^{0}\pi/4}|r,\pi\rangle_{2}\). For the third family, the maximally entangled state is \(|W,(1/\sqrt{2},1/\sqrt{2})\rangle_{2}\); the unit vectors that minimize the trace of \(g\) are \(\tilde{\mathbf{v}}^{0}=\pm(0,0,1)\) and \(\tilde{\mathbf{v}}^{1}=\pm(0,0,-1)\). The corresponding EM results \(\tilde{g}(|W,(1/\sqrt{2},1/\sqrt{2})\rangle_{2})=J_{2}=\tilde{g}(|GHZ,\pi/4\rangle_{2})\). Therefore, according to our criterion, \([|GHZ,\pi/4\rangle_{2}]=[|W,(1/\sqrt{2},1/\sqrt{2})\rangle_{2}]\). One can verify the correctness of this result, since by direct calculation one gets \(|GHZ,\pi/4\rangle_{2}=e^{-i\pi/2}e^{i\sigma_{1}^{0}\pi/4}e^{i\sigma_{1}^{1}\pi/4}|W,(1/\sqrt{2},1/\sqrt{2})\rangle_{2}\). #### Case \(M=3\) Let us consider the case \(M=3\). The maximally entangled state within the GHZLS is \(|GHZ,\pi/4\rangle_{3}\), the unit vectors that minimize the trace of the metric tensor are \(\tilde{\mathbf{v}}^{0}=\tilde{\mathbf{v}}^{1}=\tilde{\mathbf{v}}^{2}=(0,0,\pm 1)\), and the corresponding EM is \(\tilde{g}(|GHZ,\pi/4\rangle_{3})=J_{3}\). In the case of the second family, the maximally entangled state is \(|r,\pi\rangle_{3}\). In this case \(\tilde{g}(|r,\pi\rangle_{3})\neq\tilde{g}(|GHZ,\pi/4\rangle_{3})\), as shown in Eq. (53).
#### Case \(M=3\) Let us now consider the case \(M=3\). The maximally entangled state within the GHZLS is \(|GHZ,\pi/4\rangle_{3}\), the unit vectors that minimize the trace of the metric tensor are \(\tilde{\mathbf{v}}^{0}=\tilde{\mathbf{v}}^{1}=\tilde{\mathbf{v}}^{2}=(0,0,\pm 1)\), and the corresponding EM is \(\tilde{g}(|GHZ,\pi/4\rangle_{3})=J_{3}\). In the case of the second family, the maximally entangled state is \(|r,\pi\rangle_{3}\). In this case \(\tilde{g}(|r,\pi\rangle_{3})\neq\tilde{g}(|GHZ,\pi/4\rangle_{3})\), as shown in Eq. (53). Nevertheless, we have \[g(|r,\pi\rangle_{3},\mathbf{v})=\left(\begin{array}{ccc}1&v_{1}^{0}v_{3}^{1}&-v_{1}^{0}v_{1}^{2}\\ v_{1}^{0}v_{3}^{1}&1&-v_{3}^{1}v_{1}^{2}\\ -v_{1}^{0}v_{1}^{2}&-v_{3}^{1}v_{1}^{2}&1\end{array}\right), \tag{69}\] thus the choice of unit vectors \(\mathbf{v}^{0}=\pm(1,0,0)\), \(\mathbf{v}^{1}=\pm(0,0,1)\) and \(\mathbf{v}^{2}=\mp(1,0,0)\) gives \(g(|r,\pi\rangle_{3},\mathbf{v})=J_{3}\). Therefore, according to our criterion, \(|GHZ,\pi/4\rangle_{3}\) and \(|r,\pi\rangle_{3}\) might belong to the same equivalence class. In fact, by direct calculation one can verify that \(|GHZ,\pi/4\rangle_{3}=e^{i\sigma_{2}^{0}\pi/4}e^{-i\sigma_{2}^{2}\pi/4}|r,\pi\rangle_{3}\), hence \([|GHZ,\pi/4\rangle_{3}]=[|r,\pi\rangle_{3}]\). For the third family, the state with the highest entanglement is \(|W,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})\rangle_{3}\). The latter is not a maximally entangled state, since \(E(|W,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})\rangle_{3})/3=8/9\). The unit vectors that minimize the trace of \(g\) are \(\tilde{\mathbf{v}}^{\nu}=\pm(0,0,1)\) for \(\nu=0,1,2\). The corresponding EM is \[\tilde{g}(|W,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})\rangle_{3})=\left(\begin{array}{ccc}\frac{8}{9}&-\frac{4}{9}&-\frac{4}{9}\\ -\frac{4}{9}&\frac{8}{9}&-\frac{4}{9}\\ -\frac{4}{9}&-\frac{4}{9}&\frac{8}{9}\end{array}\right). \tag{70}\] Furthermore, no choice of the unit vectors leads to the expression \(J_{3}\) for the metric tensor. Therefore, by our criterion, the states of the third family and those of the first two families are inequivalent: \([|W,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})\rangle_{3}]\neq[|GHZ,\pi/4\rangle_{3}]\). This result agrees with the study of Ref. [16].
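The entries of Eq. (70) can be recovered directly from the one- and two-qubit expectation values of the symmetric W state (a worked verification, assuming the covariance form of the off-diagonal entries): each qubit is found in \(|1\rangle\) with probability \(1/3\), so \[\langle\sigma_{3}^{\nu}\rangle=\frac{2}{3}-\frac{1}{3}=\frac{1}{3}\,,\qquad\langle\sigma_{3}^{\mu}\sigma_{3}^{\nu}\rangle=\frac{1}{3}-\frac{2}{3}=-\frac{1}{3}\quad(\mu\neq\nu)\,,\] and therefore \[\tilde{g}_{\nu\nu}=1-\langle\sigma_{3}^{\nu}\rangle^{2}=\frac{8}{9}\,,\qquad\tilde{g}_{\mu\nu}=\langle\sigma_{3}^{\mu}\sigma_{3}^{\nu}\rangle-\langle\sigma_{3}^{\mu}\rangle\langle\sigma_{3}^{\nu}\rangle=-\frac{1}{3}-\frac{1}{9}=-\frac{4}{9}\,.\] The trace gives \(3\times 8/9=8/3=E(|W,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})\rangle_{3})\), consistent with \(E/3=8/9\).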
#### Case \(M=4\) Finally, we consider the case \(M=4\). The maximally entangled state within the GHZLS is \(|GHZ,\pi/4\rangle_{4}\), the unit vectors that minimize the trace of the metric tensor are \(\tilde{\mathbf{v}}^{0}=\tilde{\mathbf{v}}^{1}=\tilde{\mathbf{v}}^{2}=\tilde{\mathbf{v}}^{3}=(0,0,\pm 1)\), and the corresponding EM is \(\tilde{g}(|GHZ,\pi/4\rangle_{4})=J_{4}\). In the case of the second family, the maximally entangled state is \(|r,\pi\rangle_{4}\). This is a genuine maximally entangled state, since \(E(|r,\pi\rangle_{4})/4=1\). We have \(\tilde{g}(|r,\pi\rangle_{4})\neq\tilde{g}(|GHZ,\pi/4\rangle_{4})\), as shown in Eq. (57). We have \[g(|r,\pi\rangle_{4},\mathbf{v})=\left(\begin{array}{cccc}1&v_{1}^{0}v_{3}^{1}&0&0\\ v_{1}^{0}v_{3}^{1}&1&0&0\\ 0&0&1&-v_{3}^{2}v_{1}^{3}\\ 0&0&-v_{3}^{2}v_{1}^{3}&1\end{array}\right), \tag{71}\] thus, in this case, no choice of the unit vectors leads to an expression for the metric tensor equivalent to \(J_{4}\): the vanishing off-diagonal blocks cannot reproduce the corresponding nonzero entries of \(J_{4}\). Therefore, we conclude \([|GHZ,\pi/4\rangle_{4}]\neq[|r,\pi\rangle_{4}]\). This confirms the result reported in Ref. [33].

Figure 3: As in Fig. 2, we report here a scheme that represents the topological structure of the equivalence classes for some of the states of the three families: the \(\boldsymbol{\alpha}\)-W states, the \(\phi\)-BRS and the \(\theta\)-GHZLS. In this case, we consider three-qubit states. The magenta cloudlet represents the equivalence class to which the (fully separable) states of the three families belong: \(|W,\boldsymbol{\alpha}\rangle_{3}\) with \(\boldsymbol{\alpha}=(1,0,0),(0,1,0),(0,0,1)\); \(|r,\phi\rangle_{3}\) with \(\phi=0,2\pi\); and \(|GHZ,\theta\rangle_{3}\) with \(\theta=0,\pi/2\). In the case of \(M=3\) qubits, the equivalence classes of the states of the three families with the highest degree of entanglement do not coincide. In fact, the class \([|W,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})\rangle_{3}]\) is disjoint from the class \([|r,\pi\rangle_{3}]=[|GHZ,\pi/4\rangle_{3}]\), as depicted with the two red cloudlets.

Figure 4: We report here a scheme analogous to those of Figs. 2 and 3. In this case, we consider \(M=4\) qubit states. The magenta cloudlet represents the equivalence class to which the (fully separable) states of the three families belong: \(|W,\boldsymbol{\alpha}\rangle_{4}\) with \(\boldsymbol{\alpha}=(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1)\); \(|r,\phi\rangle_{4}\) with \(\phi=0,2\pi\); and \(|GHZ,\theta\rangle_{4}\) with \(\theta=0,\pi/2\). In this case, the equivalence classes of the states of the three families with the highest degree of entanglement are all disjoint: \([|W,(1/\sqrt{4},1/\sqrt{4},1/\sqrt{4},1/\sqrt{4})\rangle_{4}]\neq[|r,\pi\rangle_{4}]\neq[|GHZ,\pi/4\rangle_{4}]\), as depicted with the three disjoint red cloudlets.

## VI Concluding remarks In the present work, we have investigated the deep link between the Riemannian metric structure associated with the projective Hilbert space and the entanglement measure for the states of this space. In particular, we have shown that entanglement has a remarkable geometrical interpretation. In fact, we have shown that the ED of a state \(|\psi\rangle\) is the minimum of the sum of the squared distances between \(|\psi\rangle\) and all its conjugate states \(\mathbf{v}^{\mu}\cdot\mathbf{\sigma}^{\mu}|\psi\rangle\), where \(\mathbf{v}^{\mu}\) are unit vectors and \(\mu\) runs over the parties. In this sense, entanglement is an obstruction to the minimization of the sum of the distances between \(|\psi\rangle\) and its conjugate states. Also, within the proposed geometric approach, we have derived a general method to determine whether two states are not LU-equivalent. For bipartite states, it further allows one to confirm whether they are, on the contrary, LU-equivalent. The entanglement measure named ED, proposed in Ref. [1], has the desirable property of providing a directly computable measure of entanglement for a general multi-qudit hybrid pure state. A convex roof extension of it to the most general case of mixed states can easily be built.
In the present work, we have proved that the ED of a state \(|\psi\rangle\) _i)_ is null if and only if \(|\psi\rangle\) is fully separable; _ii)_ is invariant under LU operations; _iii)_ does not increase, on average, under LOCC. This definitively validates the entanglement distance as an appropriate entanglement measure for multipartite states, pure or mixed. Finally, we have applied the proposed geometric approach to the study of the entanglement magnitude and of the equivalence-class properties of three families of pure states. We acknowledge support from the RESEARCH SUPPORT PLAN 2022 - Call for applications for funding allocation to research projects curiosity driven (F CUR) - Project "Entanglement Protection of Qubits' Dynamics in a Cavity"-EPQDC, and the support of the Italian National Group of Mathematical Physics (GNFM-INdAM). ## Appendix ### Monotonicity, on average, under unilocal quantum operations In Ref. [28], it is shown that LOCC may be decomposed as a series of unilocal quantum operations (UQ). It follows that, to prove condition (iii), it is enough to show that the measure \(E(\rho)\) * is non-increasing, on average, under UQ; * is convex as a measure on mixed states. The second condition is automatically fulfilled by the convex roof construction (31). The first one can be decomposed into the following four conditions: * (a) \(E\) is LU-invariant, that is, \(E(\rho)=E(U\rho U^{\dagger})\). * (b) \(E\) is non-increasing on average under (not necessarily complete) unilocal von Neumann measurements, that is, \(E(\rho)\geq\sum_{j}p_{j}E(\rho_{j})\), where \(\rho_{j}\) is one of the outcomes of such a measurement, with associated probability \(p_{j}\). * (c) \(E\) is invariant under the addition of an uncorrelated ancilla \(A\), that is, \(E(\rho)=E(\rho\otimes\rho_{A})\), with \(\rho_{A}\) the state of \(A\). * (d) \(E\) is non-increasing under the removal of any local part \(A\) of the system, that is, \(E(\rho)\geq E(\mathrm{Tr}_{A}(\rho))\). Condition (a) holds by construction, as shown in Eq. (32). Unilocal von Neumann measurements can be realized by a completely positive map \(\Theta\) that converts \(|\psi\rangle\) to \(M_{j}|\psi\rangle\) with probability \(p_{j}=\langle\psi|M_{j}^{\dagger}M_{j}|\psi\rangle\), where \(\sum_{j}M_{j}^{\dagger}M_{j}\leq\mathbb{I}\). Therefore, condition (b) reads \[E(|\psi\rangle)\geq\sum_{j}p_{j}E(|\psi_{j}\rangle)\,, \tag{72}\] where \(|\psi_{j}\rangle=M_{j}|\psi\rangle/\sqrt{p_{j}}\).
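Inequality (72) is easy to probe numerically before proving it. The following minimal sketch (an illustration, not part of the proof) draws a random two-qubit state and a random two-outcome unilocal measurement, and compares \(E(|\psi\rangle)\) with \(\sum_{j}p_{j}E(|\psi_{j}\rangle)\), using the single-qubit form \(E_{\mu}=1-|\langle\mathbf{\sigma}^{\mu}\rangle|^{2}\) derived below:

```python
import numpy as np
rng = np.random.default_rng(0)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def E(psi):
    """Pure-state ED of a 2-qubit state: sum over qubits of 1 - |Bloch|^2."""
    total = 0.0
    for mu in range(2):
        b = []
        for s in (sx, sy, sz):
            op = np.kron(s, np.eye(2)) if mu == 0 else np.kron(np.eye(2), s)
            b.append(np.vdot(psi, op @ psi).real)
        total += 1.0 - sum(x * x for x in b)
    return total

# Random 2-qubit pure state.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Random 2-outcome Kraus measurement {K_0, K_1} on qubit 0, with
# K_0^dag K_0 + K_1^dag K_1 = I (stacked blocks of a random 4x2 isometry).
V = np.linalg.qr(rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2)))[0]
K0, K1 = V[:2, :], V[2:, :]

lhs, rhs = E(psi), 0.0
for K in (K0, K1):
    phi = np.kron(K, np.eye(2)) @ psi     # unilocal action on qubit 0
    p = np.vdot(phi, phi).real
    if p > 1e-12:
        rhs += p * E(phi / np.sqrt(p))
print(lhs >= rhs - 1e-10, lhs, rhs)       # inequality (72) holds
```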
The proof of this inequality follows from the calculation below. To simplify the formulas, in the following we indicate the expectation value of an operator \(\mathcal{O}\) on a state \(|\psi\rangle\) with \(\langle\mathcal{O}\rangle_{\psi}\). Let \(\bar{\mu}\) be the qubit on which the generalized measurement \(\{M_{j}\}_{j}\) operates; we drop the index \(\bar{\mu}\) on the Kraus operators \(\{M_{j},M_{j}^{\dagger}\}_{j}\) for ease of notation. For a general state \(|\psi\rangle\) and for \(\mu\neq\bar{\mu}\), one has \([M_{j},\sigma_{k}^{\mu}]=0\) for any \(j\) and \(k\). Consistently, from Eq. (17) we have \[\begin{split} g_{\mu\mu}(|\psi\rangle,\mathbf{v}^{\mu})&=\langle\big{(}\sigma_{\mathbf{v}}^{\mu}-\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi}\big{)}^{2}\rangle_{\psi}\\ &\geq\langle\sum_{j}M_{j}^{\dagger}M_{j}\big{(}\sigma_{\mathbf{v}}^{\mu}-\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi}\big{)}^{2}\rangle_{\psi}\\ &=\sum_{j}\langle M_{j}^{\dagger}\big{(}\sigma_{\mathbf{v}}^{\mu}-\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi}\big{)}^{2}M_{j}\rangle_{\psi}\\ &=\sum_{j}p_{j}\langle\big{(}\sigma_{\mathbf{v}}^{\mu}-\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi}\big{)}^{2}\rangle_{\psi_{j}}\\ &=\sum_{j}p_{j}\Big{[}\langle\big{(}\sigma_{\mathbf{v}}^{\mu}-\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi_{j}}\big{)}^{2}\rangle_{\psi_{j}}+\big{(}\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi_{j}}-\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi}\big{)}^{2}\Big{]}\\ &\geq\sum_{j}p_{j}\langle\big{(}\sigma_{\mathbf{v}}^{\mu}-\langle\sigma_{\mathbf{v}}^{\mu}\rangle_{\psi_{j}}\big{)}^{2}\rangle_{\psi_{j}}=\sum_{j}p_{j}g_{\mu\mu}(|\psi_{j}\rangle,\mathbf{v}^{\mu})\,,\end{split} \tag{73}\] where the first inequality uses \(\sum_{j}M_{j}^{\dagger}M_{j}\leq\mathbb{I}\) together with \([M_{j},\sigma_{k}^{\mu}]=0\). In summary, for \(\mu\neq\bar{\mu}\) one has \[g_{\mu\mu}(|\psi\rangle,\mathbf{v}^{\mu})\geq\sum_{j}p_{j}g_{\mu\mu}(|\psi_{j}\rangle,\mathbf{v}^{\mu}) \tag{74}\] and then \[\begin{split}\inf_{\mathbf{v}^{\mu}}g_{\mu\mu}(|\psi\rangle,\mathbf{v}^{\mu})\geq&\inf_{\mathbf{v}^{\mu}}\sum_{j}p_{j}g_{\mu\mu}(|\psi_{j}\rangle,\mathbf{v}^{\mu})\geq\\ &\sum_{j}p_{j}\inf_{\mathbf{v}^{\mu}_{j}}g_{\mu\mu}(|\psi_{j}\rangle,\mathbf{v}^{\mu}_{j})\,.\end{split} \tag{75}\] We note that in the general case \[0\leq g_{\mu\mu}(|\psi\rangle,\mathbf{v})\leq 1\,. \tag{76}\] Let us now consider the contribution to the entanglement coming from the qubit \(\mu=\bar{\mu}\). In this case we have \[\sum_{j}p_{j}\langle\big{(}\sigma_{\mathbf{v}}^{\bar{\mu}}-\langle\sigma_{\mathbf{v}}^{\bar{\mu}}\rangle_{\psi_{j}}\big{)}^{2}\rangle_{\psi_{j}}\,, \tag{77}\] in which, for each possible outcome \(j\) of the measurement, it is possible to choose a unit vector \(\mathbf{v}^{\bar{\mu}}=\mathbf{v}_{j}^{\bar{\mu}}\) so that \[\sigma_{\mathbf{v}_{j}}^{\bar{\mu}}|\psi_{j}\rangle=|\psi_{j}\rangle\,. \tag{78}\] Therefore \[\inf_{\mathbf{v}^{\bar{\mu}}}g_{\bar{\mu}\bar{\mu}}(|\psi_{j}\rangle,\mathbf{v}^{\bar{\mu}})=0\,. \tag{79}\] This proves that the contribution to the entanglement of the state \(|\psi_{j}\rangle\) coming from the \(\bar{\mu}\)th qubit is null. Finally, we have \[\begin{split} E(|\psi\rangle)=&\inf_{\{\mathbf{v}^{\mu}\}_{\mu}}\sum_{\mu=0}^{M-1}g_{\mu\mu}(|\psi\rangle,\mathbf{v}^{\mu})\geq\\ &\sum_{j}p_{j}\sum_{\mu=0}^{M-1}\inf_{\mathbf{v}_{j}^{\mu}}g_{\mu\mu}(|\psi_{j}\rangle,\mathbf{v}_{j}^{\mu})=\sum_{j}p_{j}E(|\psi_{j}\rangle)\,,\end{split} \tag{80}\] which proves condition (b). Condition (c) is clearly satisfied for pure states, and this property is preserved by the convex roof construction (31).
Finally, note that the reduced density matrix for qubit \(\mu\) reads \[\rho^{\mu}=\operatorname{Tr}_{\mu^{c}}\big{[}|\psi\rangle\langle\psi|\big{]}=\frac{1}{2}(\mathbb{I}+\bar{\mathbf{v}}^{\mu}\cdot\boldsymbol{\sigma}^{\mu}), \tag{81}\] where \(\mu^{c}\) is the complement of \(\mu\) in the set of all qubits of the system and \(\bar{\mathbf{v}}^{\mu}=\langle\psi|\boldsymbol{\sigma}^{\mu}|\psi\rangle\). Remark also that \(\operatorname{Tr}\big{[}(\rho^{\mu})^{2}\big{]}=\frac{1}{2}(1+|\bar{\mathbf{v}}^{\mu}|^{2})\). Then, from Eq. (21), the single-qubit measure follows: \[\begin{split} E_{\mu}(|\psi\rangle)&=\min_{\mathbf{v}^{\mu}}g_{\mu\mu}(|\psi\rangle,\mathbf{v}^{\mu})\\ &=1-|\langle\psi|\boldsymbol{\sigma}^{\mu}|\psi\rangle|^{2}=1-|\bar{\mathbf{v}}^{\mu}|^{2}\\ &=2\Bigg{(}1-\operatorname{Tr}\Big{[}\Big{(}\operatorname{Tr}_{\mu^{c}}\big{[}|\psi\rangle\langle\psi|\big{]}\Big{)}^{2}\Big{]}\Bigg{)},\end{split} \tag{82}\] which is invariant with respect to the partial trace applied to any subsystem \(A\subseteq\mu^{c}\), since this information is discarded anyway, and we have \(E_{\mu}\left(|\psi\rangle\langle\psi|\right)=E_{\mu}\Big{(}\operatorname{Tr}_{A}\big{[}|\psi\rangle\langle\psi|\big{]}\Big{)}\). Condition (d) is hence fulfilled by the pure-state ED. After such a removal, it can easily be checked that the convex roof construction (31) rewrites as \[E_{\mu}\Big{(}\operatorname{Tr}_{A}\big{[}\rho\big{]}\Big{)}=\min_{\{p_{j},\psi_{j}\}}\sum_{j}p_{j}E_{\mu}\Big{(}\operatorname{Tr}_{A}\big{[}|\psi_{j}\rangle\langle\psi_{j}|\big{]}\Big{)} \tag{83}\] for any \(\rho=\sum_{j}p_{j}|\psi_{j}\rangle\langle\psi_{j}|\). Thus \(E_{\mu}\Big{(}\operatorname{Tr}_{A}\big{[}\rho\big{]}\Big{)}=E_{\mu}(\rho)\), so \(E(\rho)\geq E\Big{(}\operatorname{Tr}_{A}\big{[}\rho\big{]}\Big{)}\), proving that condition (d) also holds for the mixed-state measure.
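The identity \(1-|\bar{\mathbf{v}}^{\mu}|^{2}=2\big(1-\operatorname{Tr}[(\rho^{\mu})^{2}]\big)\) used in Eq. (82) can be checked directly (a minimal numerical sketch with a random state):

```python
import numpy as np
rng = np.random.default_rng(1)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Random 2-qubit pure state; look at qubit 0.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# E_mu via the Bloch vector ...
bloch = [np.vdot(psi, np.kron(s, np.eye(2)) @ psi).real for s in (sx, sy, sz)]
e_bloch = 1 - sum(b * b for b in bloch)

# ... and via the purity of the reduced density matrix, as in Eq. (82).
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho0 = np.einsum('ikjk->ij', rho)            # partial trace over qubit 1
e_purity = 2 * (1 - np.trace(rho0 @ rho0).real)

print(np.isclose(e_bloch, e_purity))         # True
```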
## Appendix C Concavity of the function of the partial trace In Ref. [28], another necessary and sufficient condition is proposed for a magnitude to be LOCC-monotone on average. Namely, any function \(E_{\mu}(\psi)=f(\operatorname{Tr}_{\mu}(\psi))\) such that \(f\) is concave and LU-invariant is an entanglement monotone for pure states. In addition, the related mixed-state measure obtained by the convex roof construction is LOCC-monotone on average, i.e. an entanglement monotone. As noticed in Eq. (82), \(E_{\mu}(|\psi\rangle)=f(\operatorname{Tr}_{\mu^{c}}[\psi])\) with \(f(x)=2(1-\operatorname{Tr}\bigl{[}x^{2}\bigr{]})\). \(f\) is clearly LU-invariant. Let us now prove that \(f\) is concave. Consider two single-qubit density matrices \(\rho_{k}=\frac{1}{2}(\mathbb{I}+\mathbf{n}_{k}\cdot\boldsymbol{\sigma})\), \(k=1,2\), and \(\lambda\in[0,1]\). We have \[f\Big{(}\lambda\rho_{1}+(1-\lambda)\rho_{2}\Big{)}=1-|\lambda\mathbf{n}_{1}+(1-\lambda)\mathbf{n}_{2}|^{2} \tag{84}\] and \[\lambda f(\rho_{1})+(1-\lambda)f(\rho_{2})=1-\lambda|\mathbf{n}_{1}|^{2}-(1-\lambda)|\mathbf{n}_{2}|^{2}, \tag{85}\] and the convexity of the Euclidean squared norm implies \[f\Big{(}\lambda\rho_{1}+(1-\lambda)\rho_{2}\Big{)}\geq\lambda f(\rho_{1})+(1-\lambda)f(\rho_{2}), \tag{86}\] that is, \(f\) is concave, which completes our proof that \(E_{\mu}(|\psi\rangle)\) is a valid entanglement monotone for pure states. According to Ref. [28], Eq. (31) is then itself an entanglement monotone.
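The concavity inequality (86), with the signs as in Eqs. (84)-(85), can also be sampled numerically (a minimal sketch over random Bloch vectors):

```python
import numpy as np
rng = np.random.default_rng(2)

def f(n):
    """f(rho) = 2(1 - Tr[rho^2]) = 1 - |n|^2 for rho = (I + n.sigma)/2."""
    return 1 - np.dot(n, n)

for _ in range(1000):
    n1 = rng.normal(size=3); n1 /= max(1.0, np.linalg.norm(n1))  # Bloch ball
    n2 = rng.normal(size=3); n2 /= max(1.0, np.linalg.norm(n2))
    lam = rng.uniform()
    assert f(lam * n1 + (1 - lam) * n2) >= lam * f(n1) + (1 - lam) * f(n2) - 1e-12
print("concavity verified on random samples")
```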
The geometric structure of quantum states is shown to be rich and nontrivial. We derive the Fubini-Study metric of the projective Hilbert space of a multi-qubit quantum system, endow it with a Riemannian metric structure, and investigate its deep connection with the entanglement of the states. As a measure we adopt the entanglement distance E proposed in [1]. This analysis shows that entanglement has a geometric interpretation: E(|psi>) is the minimum of the sum of the squared distances between |psi> and its conjugate states, where v^mu are unit vectors and mu runs over the number of parties. We derive a general method to determine whether two states differ under the action of local unitary operations. The entanglement distance, together with its convex roof extension to mixed states, satisfies the three conditions required of an entanglement measure.
2302.14624
The 2022 NIST Language Recognition Evaluation
In 2022, the U.S. National Institute of Standards and Technology (NIST) conducted the latest Language Recognition Evaluation (LRE) in an ongoing series administered by NIST since 1996 to foster research in language recognition and to measure state-of-the-art technology. Similar to previous LREs, LRE22 focused on conversational telephone speech (CTS) and broadcast narrowband speech (BNBS) data. LRE22 also introduced new evaluation features, such as an emphasis on African languages, including low resource languages, and a test set consisting of segments containing between 3s and 35s of speech randomly sampled and extracted from longer recordings. A total of 21 research organizations, forming 16 teams, participated in this 3-month long evaluation and made a total of 65 valid system submissions to be evaluated. This paper presents an overview of LRE22 and an analysis of system performance over different evaluation conditions. The evaluation results suggest that Oromo and Tigrinya are easier to detect while Xhosa and Zulu are more challenging. A greater confusability is seen for some language pairs. When speech duration increased, system performance significantly increased up to a certain duration, and then a diminishing return on system performance is observed afterward.
Yooyoung Lee, Craig Greenberg, Eliot Godard, Asad A. Butt, Elliot Singer, Trang Nguyen, Lisa Mason, Douglas Reynolds
2023-02-28T15:05:33
http://arxiv.org/abs/2302.14624v1
# The 2022 NIST Language Recognition Evaluation ###### Abstract In 2022, the U.S. National Institute of Standards and Technology (NIST) conducted the latest Language Recognition Evaluation (LRE) in an ongoing series administered by NIST since 1996 to foster research in language recognition and to measure state-of-the-art technology. Similar to previous LREs, LRE22 focused on conversational telephone speech (CTS) and broadcast narrowband speech (BNBS) data. LRE22 also introduced new evaluation features, such as an emphasis on African languages, including low resource languages, and a test set consisting of segments containing between 3s and 35s of speech randomly sampled and extracted from longer recordings. A total of 21 research organizations, forming 16 teams, participated in this 3-month long evaluation and made a total of 65 valid system submissions to be evaluated. This paper presents an overview of LRE22 and an analysis of system performance over different evaluation conditions. The evaluation results suggest that Oromo and Tigrinya are easier to detect while Xhosa and Zulu are more challenging. A greater confusability is seen for some language pairs. When speech duration increased, system performance significantly increased up to a certain duration, and then a diminishing return on system performance is observed afterward. Yooyoung Lee\({}^{1}\), Craig Greenberg\({}^{1}\), Eliot Godard\({}^{1,*}\), Asad A. Butt\({}^{1,*}\), Elliot Singer\({}^{2}\), Trang Nguyen\({}^{2}\), Lisa Mason\({}^{3}\), Douglas Reynolds\({}^{3}\)\({}^{1}\)NIST ITL/IAD/Multimodal Information Group, MD, USA \({}^{2}\)MIT Lincoln Laboratory, Lexington, MA, USA \({}^{3}\)U.S. Department of Defense, MD, USA lre_poc@nist.gov **Index Terms**: human language technology, LRE, language recognition, language detection, speech technology performance evaluation ## 1 Introduction The 2022 NIST Language Recognition Evaluation (LRE), held in fall of 2022, was the latest in an ongoing series of language recognition evaluations conducted by NIST since 1996 [1]. The primary objectives of the LRE series are to: 1) advance language recognition technologies with innovative ideas, 2) facilitate the development of language recognition technology by providing data and research direction, and 3) measure the performance of the current state-of-the-art technology. Figure 1 shows the number of target languages and participants (based on sites) for all NIST LREs. LRE22 was conducted entirely online using a web-based platform like LRE15 [2] and LRE17 [3, 4]. The updated LRE22 web-platform1 supported a variety of evaluation activities, such as registration, data license submission, data distribution, system output submission and validation/scoring, and system description/presentation uploads. A total of 16 teams from 21 organizations in 13 different countries made submissions for LRE22. Figure 2 displays a world map with a heatmap representing the number of participating sites per country. Since two teams did not submit valid system descriptions, an analysis considering only 14 teams is presented in this paper. It should be noted that all participant information, including country, was self-reported. Footnote 1: [https://lre.nist.gov](https://lre.nist.gov) ## 2 Task The general task in the NIST LREs is language detection, i.e. to automatically determine whether a particular target language was spoken in a given test segment of speech.
Since LRE11 [5], the focus of the language detection task had turned to distinguishing between closely related, and sometimes mutually intelligible, languages. However, LRE22 introduced a new emphasis on distinguishing between African languages, including low resource languages. Table 1 shows the 14 target languages included in LRE22. Similar to LRE17, LRE22 participants were required to provide a 14-dimensional vector of log-likelihood scores corresponding to the languages in Table 1. Unlike LRE17, language clusters were not considered in this evaluation; a language cluster is a group of two or more closely related languages from the same speech community [6]. Like LRE17, there were two training conditions in LRE22: _fixed_ and _open_. For the _fixed_ training condition, participants were restricted to use only a limited pre-specified set of data for system training and target model development.

Figure 1: Language and participant count for the NIST LREs.

Figure 2: Heatmap of the world showing the number of LRE22 participating sites per country.

For the _open_ training condition, participants were allowed to utilize unlimited amounts of publicly available and/or proprietary data for their system training and target model development. To facilitate more meaningful cross-system comparisons, LRE22 participants were required to provide submissions to the _fixed_ condition while participation in the optional _open_ condition was strongly encouraged to understand the impacts that larger amounts of training and development data have on system performance. In order to encourage participation in the _open_ training condition, the deadline for this condition was made one week later than the required _fixed_ training condition submission deadline. A total of 65 valid submissions were received, 40 for the _fixed_ training condition and 25 for the _open_ condition. LRE participants were required to specify one submission as _primary_ for each training condition they took part in, while all other systems submitted were considered _alternate_. ## 3 Data This section provides a brief description of data used in LRE22 for training, development (_dev_), and evaluation (_test_) sets, along with the associated metadata. ### Training set As mentioned in Section 2, there were two training conditions in LRE22. The _fixed_ condition limited the system training and development data to the following specific data sets provided to participants by the Linguistic Data Consortium (LDC): 2017 NIST LRE _dev_ set and previous NIST LRE training data (LDC2022E16), 2017 NIST LRE _test_ set (LDC2022E17), 2022 NIST LRE _dev_ set (LDC2022E14). The VoxLingua107 data set [7] was also permitted for use in the _fixed_ condition. The _open_ training condition removed the limitations of the _fixed_ condition. In addition to the data listed in the _fixed_ condition, participants could use any additional data to train and develop their system, including proprietary data and data that are not publicly available. LDC also made selected data from the IARPA Babel Program [8] available to participants to be used in the _open_ training condition. ### Development and test sets The development (_dev_) set is normally used to build/optimize a system model during the development process while the evaluation (_test_) set is used to evaluate the performance of the system model.
The speech segments in the LRE22 _dev_ and _test_ sets were selected from data sets collected by the Linguistic Data Consortium (LDC) to support LR technology evaluations; namely the Maghrebi Language Identification (MAGLIC), Speech Archive of South African Languages (SASAL), and Low Resource African Languages (LRAL) corpora. The MAGLIC corpus was a CTS-only collection based in Tunisia and includes four regional language varieties spoken in North Africa: Algerian Arabic, Libyan Arabic, Tunisian Arabic, and North African French. The SASAL corpus was a CTS and BNBS collection located in South Africa and contains several African language varieties, a subset of which were included in LRE22: Afrikaans, Ndebele, Tsonga, Venda, Xhosa, and Zulu, as well as South African English and Indian-accent South African English. The LRAL corpus was a BNBS collection based in Ethiopia, and, of the languages in LRAL, two were selected for inclusion in LRE22: Oromo and Tigrinya. All audio data provided was sampled at 8 kHz, a-law encoded, and formatted as SPHERE [9] files. When the source audio recordings were higher bandwidth or encoded differently, they were downsampled and transcoded to 8-kHz a-law. Unlike in previous LREs, the amount of speech in the LRE22 segments was uniformly sampled between approximately 3 and 35 seconds, as determined by an automatic speech activity detector. Figure 3 shows a stacked histogram for the _dev_ and _test_ sets. The _dev_ set consisted of 300 segments per target language while the _test_ set contained a total of 26,473 segments ranging from 383 to 2,769 segments across the target languages. ### Metadata The metadata collected by LDC can be categorized into audio- and audit-related metadata. The audio metadata indicates information related to the audio recording or segment, such as speech duration, data source type (i.e., either CTS or BNBS), and source file (i.e., the original recording from which the audio segment was extracted). The audit metadata reflects a human auditor's judgement of the speech, having listened to an audio recording, such as whether the recording contained a single speaker, if the person speaking was a native speaker, the speech clarity, the speaker sex, or if the recording took place in a noisy environment. In this paper, we limit our analyses to data source type and speech duration.

\begin{table} \begin{tabular}{|l|l||l|l|} \hline **Language** & **Code** & **Language** & **Code** \\ \hline Afrikaans & afr-afr & Ndebele & nbl-nbl \\ \hline Tunisian Arabic & ara-aeb & Oromo & orm-orm \\ \hline Algerian Arabic & ara-arq & Tigrinya & tir-tir \\ \hline Libyan Arabic & ara-ayl & Tsonga & tso-tso \\ \hline South African English & eng-ens & Venda & ven-ven \\ \hline Indian-accent South African English & eng-iaf & Xhosa & xho-xho \\ \hline North African French & fra-ntf & Zulu & zul-zul \\ \hline \end{tabular} \end{table} Table 1: LRE22 target languages

Figure 3: Distribution of speech segments per target language for both dev and test sets.

Figure 4: System performance (actual and minimum costs) on primary submissions under the fixed training condition.

## 4 Performance Measure As stated in Section 2, LRE22 participants were required to provide a 14-dimensional vector of log-likelihood scores for the 14 target languages (see Table 1 for the LRE22 target languages). Unlike LRE17, language clusters were not considered in this evaluation. Pair-wise performance was computed for all target/non-target language pairs.
A decision threshold derived from log-likelihood ratios was used to determine the number of missed detections and false alarms, computed separately for each target language. The missed detections (Misses) indicate the segments that are the target language but are not predicted to be, while the false alarms (FAs) indicate the segments that are falsely identified as the target language. The probabilities of missed detections (\(P_{Miss}\)) and false alarms (\(P_{FA}\)) are then combined using a linear cost function [10]: \[C(L_{T},L_{N})=C_{Miss}\times P_{Target}\times P_{Miss}(L_{T})+C_{FA}\times(1-P_{Target})\times P_{FA}(L_{T},L_{N}) \tag{1}\] where \(L_{T}\) and \(L_{N}\) are the target and non-target languages, respectively. Here, \(C_{Miss}\) (cost of a missed detection), \(C_{FA}\) (cost of a false alarm), and \(P_{Target}\) (the _a priori_ probability of the specified target language) are application-motivated cost model parameters. Two sets of cost-function parameters were used in LRE22: the first set of parameters provides equal weighting to the costs of errors (\(C_{Miss}=C_{FA}=1\)) and a target probability of 0.5, while the second set of parameters changed the target probability to 0.1. The final metric, \(C_{Primary}\), consisted of the mean value of the costs using the two different cost function parameters, normalized by dividing by the cost of a "no information" system. Costs using thresholds that minimize the Bayes risk, \(actC_{Primary}\), as well as costs using thresholds that minimize the empirical cost, \(minC_{Primary}\), were computed. We refer readers to the LRE22 evaluation plan [10] for details of the performance measures.
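The cost computation of Eq. (1) and the construction of \(C_{Primary}\) can be sketched as follows (a hypothetical illustration: the per-pair error rates `p_miss` and `p_fa`, the choice of the "no information" normalization, and the averaging over the 14 target languages follow the evaluation plan [10], which this sketch only approximates):

```python
import numpy as np

def pair_cost(p_miss, p_fa, p_target, c_miss=1.0, c_fa=1.0):
    """Eq. (1): detection cost for one target/non-target language pair."""
    return c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa

def c_primary(p_miss, p_fa):
    """Sketch of the LRE22 primary metric: average the costs over the two
    parameter sets (P_target = 0.5 and 0.1), each normalized by an assumed
    'no information' cost min(C_miss*P_t, C_fa*(1-P_t)); the full metric
    additionally averages over all language pairs (see [10])."""
    costs = []
    for p_t in (0.5, 0.1):
        c = pair_cost(p_miss, p_fa, p_t)
        c_default = min(1.0 * p_t, 1.0 * (1 - p_t))
        costs.append(c / c_default)
    return float(np.mean(costs))

# Example: a system with 5% misses and 2% false alarms on some pair.
print(c_primary(p_miss=0.05, p_fa=0.02))
```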
## 5 Results and Analyses A total of 14 teams from academic and industrial sectors successfully completed LRE22. For both the _fixed_ and _open_ training conditions, the teams were allowed to have one _primary_ submission and one or more _alternate_ submissions. In this section, we present a summary of results and key findings on the _primary_ submissions using the performance metrics defined in Section 4. Figure 4 illustrates system performance for all the _primary_ submissions under the _fixed_ training condition. The x-axis shows anonymized team names and the y-axis shows \(C_{Primary}\) values for both the actual and minimum costs (N.B., a lower \(C_{Primary}\) value indicates better performance). The orange dashed line indicates the actual cost, \(actC_{Primary}\), and the blue one the minimum cost, \(minC_{Primary}\), for a reference system; we used an off-the-shelf algorithm as a reference to validate the LRE22 data construction and evaluation process. The reference system was trained and fine-tuned only on VoxLingua107 and the LRE22 development set. The shaded color on each team's bar indicates the difference between \(actC_{Primary}\) and \(minC_{Primary}\), which represents a calibration error. In Figure 4, we observe that, given the primary submissions under the _fixed_ condition, the \(C_{Primary}\) values range from 0.11 to 0.73 across all the teams. It is observed that the top-performing systems (e.g., T1-T4) have small calibration errors (i.e., the absolute difference between the actual and minimum costs is relatively small) while a few teams (e.g., T5, T7, T11 and T12) are less well-calibrated. As described in Section 2, the _fixed_ training condition is required while _open_ is optional; 7 out of the 14 teams submitted their system outputs to the _open_ training condition. Figure 5 illustrates a performance comparison of the training conditions (_fixed_ vs _open_) for these seven teams only (ordered by _open_ system performance). The results show that systems submitted to the _open_ condition generally outperform their _fixed_ condition counterparts across the teams (except T9), and a calibration error is observed for team T7 under the _open_ training condition. To understand the variability of language-level system performance and language detection difficulty, Figure 6 shows a box plot of the primary submission performance under the _fixed_ training condition. The x-axis is the team name (ordered by median), the y-axis is the actual cost (\(actC_{Primary}\)), and each point represents a target language. The black line within a box is the median, the box edges represent the lower and upper quartiles, and the whiskers extending from the box indicate variability outside the upper and lower quartiles. We observe a high dispersion of language performance for a few teams such as T4, T5, and T9. Overall, the _Oromo (orm-orm)_ and _Tigrinya (tir-tir)_ points marked in blue are located at the bottom of Figure 6 (easier to detect) while _Xhosa (xho-xho)_ and _Zulu (zul-zul)_ are at the top (harder to detect); a similar trend is observed across the teams.

Figure 5: A performance comparison of the fixed and open training conditions.

Figure 6: Language-level performance on primary submissions under the fixed training condition.

To examine language-pair confusability, we conducted a data analysis using heatmap confusion matrices, as shown in Figure 7. The axes are language codes; the diagonal values from upper-left to bottom-right are \(P_{Miss}\) (false reject rates) and the off-diagonal values are \(P_{FA}\) (false alarm rates). A higher false alarm probability implies a potential confusability for that language pair. For simplicity, results for \(P_{Target}=0.5\) for the four leading systems are shown using heatmap confusion matrices. Given the _test_ set and systems, a higher confusability is observed for three clusters of language pairs, as follows: 1) among the Arabic languages (ara-aeb, ara-arq, ara-ayl), 2) between South African English (eng-ens) and Indian-accented South African English (eng-iaf), and 3) among Ndebele (nbl-nbl), Tsonga (tso-tso), Venda (ven-ven), Xhosa (xho-xho) and Zulu (zul-zul). To gain insight into how metadata variables (i.e., factors) affect system performance, we conducted experiments given the metadata listed in Section 3.3. For simplicity, the following analyses are demonstrated using _data source type_ and _speech duration_ only. The LRE22 data was collected in two primary genres, namely conversational telephone speech (CTS) and broadcast narrowband speech (BNBS), which we call _data source type_. Figure 8 shows system performance (\(actC_{Primary}\)) partitioned by _data source type_ (CTS vs BNBS) for all the _primary_ submissions under the _fixed_ training condition. The top-left pie chart is the distribution of CTS and BNBS in the _test_ set, which is imbalanced. The bar plot shows a performance comparison between CTS (blue) and BNBS (orange) across all the teams. The results indicate that, given the imbalanced distribution, CTS is more challenging and that _data source type_ has a strong effect on system performance; a similar trend is observed across the systems.
Durations of the _test_ set segments varied between 3s and 35s of speech, randomly sampled and extracted from longer recordings as determined by an automatic Speech Activity Detector (SAD), which we call _SAD duration_. Figure 9(a) shows the distribution of _SAD duration_ for the _test_ set and Figure 9(b) shows the performance of a top-performing system by _SAD duration_. Given the _test_ set and systems, it is seen that when _SAD duration_ increases, \(actC_{Primary}\) significantly decreases up to a certain duration (between 15s and 20s). After that, a diminishing return on system performance improvement is observed across the systems. ## 6 Conclusions We presented a summary of the 2022 NIST Language Recognition Evaluation, with an emphasis on low resource languages and random durations of speech segments. The results showed that almost no calibration error was observed for the top-performing systems for both the _fixed_ and _open_ training conditions. Overall, the submissions under the _open_ training condition had better performance compared to the _fixed_ condition submissions, with only one exception. Given the _test_ set and the _primary_ systems under the _fixed_ training condition, we found that Oromo and Tigrinya were easier to detect while Xhosa and Zulu were harder to detect. A greater confusability was observed for the language pairs 1) among Zulu, Xhosa, Ndebele, Tsonga, and Venda, 2) between South African and Indian-accent South African English, and 3) among the Tunisian, Algerian, and Libyan Arabic languages. Some of the metadata, such as _data source type_ and _SAD duration_, had a significant effect on system performance for all systems. In terms of _SAD duration_, when speech duration increased, system performance significantly increased up to a certain duration, and then we observed a diminishing return on system performance afterward. ## 7 Disclaimer The results presented in this paper are not to be construed or represented as endorsements of any participant's system, methods, or commercial product, or as official findings on the part of NIST or the U.S. Government. The work of MIT Lincoln Laboratory (MITLL) is sponsored by the Department of Defense under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Air Force.

Figure 7: Language confusability of the leading systems.

Figure 8: Data source type distribution and its effect on systems.

Figure 9: SAD duration effect on system performance: (a) SAD duration distribution for the dev and test sets; (b) T1 system performance vs. SAD duration.
In 2022, the U.S. NIST conducted the latest Language Recognition Evaluation (LRE), part of a series administered by NIST since 1996 to foster research in language recognition and to measure state-of-the-art technology. As in previous LREs, LRE22 focused on conversational telephone speech (CTS) and broadcast narrowband speech (BNBS) data. LRE22 introduced new evaluation features, with an emphasis on African languages, including low resource languages, and an evaluation set of speech segments containing between 3s and 35s of speech randomly extracted from longer recordings. During the 3-month evaluation period, 21 research organizations formed 16 teams and contributed 65 system submissions for evaluation. This paper presents an overview of LRE22 and an analysis of system performance under different evaluation conditions. The evaluation results suggest that Oromo and Tigrinya are easier to detect while Xhosa and Zulu are more challenging.
2306.17802
Castling equivalence for logarithmic flat connections
Let $X$ be a complex manifold containing a hypersurface $D$ and let $D^s$ denote the singular locus. We study the problem of extending a flat connection with logarithmic poles along $D$ from the complement $X \setminus D^s$ to all of $X$. In the setting where $D$ is a weighted homogeneous plane curve, we give a new proof of Mebkhout's theorem that extensions always exist. Our proof makes use of a Jordan decomposition for logarithmic connections as well as a version of Grothendieck's decomposition theorem for vector bundles over the `football' orbifold which is due to Martens and Thaddeus. In higher dimensions, we point out a close relationship between the extension problem and castling equivalence of prehomogeneous vector spaces. In particular, we show that the twisted fundamental groupoids of castling equivalent linear free divisors are `birationally' Morita equivalent and we use this to generate examples of non-extendable flat connections.
Francis Bischoff
2023-06-30T17:04:13
http://arxiv.org/abs/2306.17802v1
# Castling equivalence for logarithmic flat connections ###### Abstract Let \(X\) be a complex manifold containing a hypersurface \(D\) and let \(D^{s}\) denote the singular locus. We study the problem of extending a flat connection with logarithmic poles along \(D\) from the complement \(X\setminus D^{s}\) to all of \(X\). In the setting where \(D\) is a weighted homogeneous plane curve, we give a new proof of Mebkhout's theorem that extensions always exist. Our proof makes use of a Jordan decomposition for logarithmic connections as well as a version of Grothendieck's decomposition theorem for vector bundles over the 'football' orbifold which is due to Martens and Thaddeus. In higher dimensions, we point out a close relationship between the extension problem and castling equivalence of prehomogeneous vector spaces. In particular, we show that the twisted fundamental groupoids of castling equivalent linear free divisors are 'birationally' Morita equivalent and we use this to generate examples of non-extendable flat connections. ###### Contents * 1 Introduction * 2 Homogeneous Lie groupoids * 3 Extending representations * 4 Castling equivalence ## 1 Introduction Let \(X\) be a complex manifold containing a hypersurface \(D\). A version of the Riemann-Hilbert problem asks whether a flat connection on the complement \(U=X\setminus D\) admits an extension to a flat connection on \(X\) with logarithmic singularities along \(D\). In the case where \(X=\mathbb{CP}^{1}\) is the complex projective line and \(D\) is a collection of points, this question was posed by Hilbert and has been extensively studied (eg. [20, 17, 6, 21, 2]). When the hypersurface \(D\) is smooth, or has simple normal crossing singularities, there is a construction due to Deligne (attributed to Manin) [5, 3] which provides a canonical extension given the choice of a (set-theoretic) branch of the logarithm.1 In this paper we study the following different but related _extension problem_ in the setting where \(D\) is a singular Saito free divisor [22]. Let \(D^{s}\) denote the singular locus of \(D\), let \(X^{\times}=X\setminus D^{s}\) and let \(D^{\times}=D\setminus D^{s}\). Hence \(D^{\times}\) is a smooth hypersurface in the complex manifold \(X^{\times}\). The spaces involved are arranged according to the following diagram of inclusions. Footnote 1: Because this construction uses matrix logarithms, it only works when the structure group is \(GL(m,\mathbb{C})\). For other structure groups there are counterexamples to the Riemann-Hilbert problem even when the divisor is smooth. \[\begin{array}{ccccc}D^{s}&=&D^{s}&&\\ \cap&&\cap&&\\ D&\subset&X&\supset&U\\ \cup&&\cup&&\\ D^{\times}&\subset&X^{\times}&\supset&U\end{array}\] **Question** (Extension problem).: _Let \((E,\nabla)\) be a logarithmic flat connection on \((X^{\times},D^{\times})\). Is there an extension \((\tilde{E},\tilde{\nabla})\) to a logarithmic flat connection on \((X,D)\)?_ Note that by Hartogs' theorem, the extension is unique and its existence depends only on whether the vector bundle \(E\) can be extended. By applying the Deligne-Manin construction, any flat connection on \(U\) may be extended to a logarithmic flat connection \((E,\nabla)\) on \((X^{\times},D^{\times})\), since \(D^{\times}\) is smooth. Hence, solving the extension problem in this case immediately leads to a solution of the Riemann-Hilbert problem.
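On the punctured disc, the monodromy-to-residue step of such a Deligne-Manin-type extension can be made concrete (a minimal numerical sketch, assuming structure group \(GL(m,\mathbb{C})\); the set-theoretic branch of the logarithm in the text corresponds to the branch used by `logm` here):

```python
import numpy as np
from scipy.linalg import expm, logm

# Monodromy of a flat connection on the punctured disc.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # rotation by pi/2

# The residue A = log(T) / (2 pi i) determines the logarithmic extension
# nabla = d - A dz/z, whose monodromy exp(2 pi i A) recovers T.
A = logm(T) / (2j * np.pi)

print(np.allclose(expm(2j * np.pi * A), T))  # True: monodromy recovered
print(np.round(A, 6))                        # eigenvalues encode the branch
```

Different branch choices shift the eigenvalues of \(A\) by integers and yield different extensions.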
However, the two problems are also quite distinct: there are logarithmic flat connections \((E,\nabla)\) which fail to extend even though their restriction to \(U\)_does_ extend. An illustrative example of this phenomenon is provided by \((\mathbb{C}^{n},D)\), where \(D\) is the union of the coordinate hyperplanes. In this setting, logarithmic flat connections with trivial monodromy are equivalent to \((\mathbb{C}^{*})^{n}\)-equivariant vector bundles, and these have been completely classified as a special case of the study of equivariant vector bundles over toric varieties [13, 14, 12, 9, 10, 15, 16]. In this classification, an equivariant vector bundle over \(\mathbb{C}^{n}\setminus D^{s}\) is equivalent to the data of a vector space \(V\) which is equipped with an \(n\)-tuple of decreasing \(\mathbb{Z}\)-filtrations \(F_{1},F_{2},...,F_{n}\). This equivariant bundle extends over \(\mathbb{C}^{n}\) if and only if the tuple of filtrations can be simultaneously split. Note that restricting the bundle to \(\mathbb{C}^{n}\setminus D\) corresponds to throwing away the filtrations. This restricted bundle can then be extended all the way to \(\mathbb{C}^{n}\) by now choosing an \(n\)-tuple of simultaneously split filtrations (for example, choosing \(n\) copies of a single filtration). In other words, this provides examples of non-extendable connections on \((\mathbb{C}^{n}\setminus D^{s},D^{\times})\) whose restriction to \(U=\mathbb{C}^{n}\setminus D\) does nevertheless admit an extension. ### Mebkhout's extension The case \(n=2\) is noteworthy. Indeed, since it is always possible to simultaneously split a pair of filtrations, equivariant bundles over \(\mathbb{C}^{2}\setminus\{(0,0)\}\) may always be extended. In fact, this is a special case of a theorem of Mebkhout [19] (see also a similar result by Kita [11]), who proved that the extension problem can always be solved when \(D\) is a reduced curve in \(\mathbb{C}^{2}\). The first purpose of this paper is to give a new proof of this result in the case where \(D\) is a weighted homogeneous plane curve. This is the content of Theorem 3.2. Whereas Mebkhout's original proof relied on the theory of coherent sheaves, our proof makes use of a groupoid theoretic approach to logarithmic connections which was developed in [1]. Briefly, our proof proceeds in two steps:

1. Using a Jordan decomposition theorem, we show in Proposition 2.4 that any logarithmic flat connection \((E,\nabla)\) admits the structure of a \(\mathbb{C}^{*}\)-equivariant bundle. Note that this is the same \(\mathbb{C}^{*}\)-action with respect to which \(D\) is weighted homogeneous. As a result, \(E\) descends to a vector bundle on a 'football' orbifold \(\mathbb{P}^{1}_{p,q}\).

2. Applying Martens and Thaddeus' version of the Grothendieck decomposition theorem for vector bundles over the football orbifolds [18], we decompose \(E\) into a sum of line bundles, which can easily be seen to extend to \(\mathbb{C}^{2}\).

From this perspective, the existence of non-extendable connections may be attributed to the failure of the Grothendieck decomposition theorem in higher dimensions. Indeed, Proposition 2.4 continues to hold in higher dimensions, implying that there is a close relationship between the extension problem for weighted homogeneous hypersurfaces and the study of vector bundles on weighted projective space.
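To make the failure of simultaneous splitting concrete, here is a standard worked example in the toric setting recalled above (it is not taken from this paper). Take \(n=3\) and let the three filtrations on \(V=\mathbb{C}^{2}\) each have the shape \[F_{i}:\quad\mathbb{C}^{2}\supset L_{i}\supset 0\,,\] where \(L_{1},L_{2},L_{3}\subset\mathbb{C}^{2}\) are pairwise distinct lines. A simultaneous splitting would be a basis \(\{e_{1},e_{2}\}\) of \(\mathbb{C}^{2}\) such that every step of every filtration is spanned by a subset of the basis; in particular, each line \(L_{i}\) would have to be spanned by \(e_{1}\) or \(e_{2}\), which is impossible for three pairwise distinct lines. The corresponding equivariant bundle on \(\mathbb{C}^{3}\setminus D^{s}\) therefore does not extend to \(\mathbb{C}^{3}\), whereas any two of the three filtrations can be split simultaneously, matching the statement above that a pair of filtrations always splits.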
### Castling equivalence In Section 4 we turn our attention to the second purpose of this paper, which is to point out a relationship between the extension problem and castling equivalence for prehomogeneous vector spaces. Castling equivalence is an operation on linear representations which first arose in the work of Sato and Kimura on the classification of prehomogeneous vector spaces [23]. It was related to free divisors in the work of [4, 8, 7, 24], who introduced _linear_ free divisors (a special case of prehomogeneous vector spaces) and showed that they are preserved by castling equivalence. A noteworthy application of their result is that castling equivalence generates infinitely many new examples of linear free divisors. Now suppose that \((X_{1},D_{1})\) and \((X_{2},D_{2})\) are castling equivalent linear free divisors and let \((X_{i}^{\times},D_{i}^{\times})\) denote the respective complements of their singular loci. Each pair \((X_{i}^{\times},D_{i}^{\times})\) gives rise to a holomorphic Lie groupoid \(\Pi(X_{i}^{\times},D_{i}^{\times})\) which has the property that its category of representations is equivalent to the category of flat connections on \(X_{i}^{\times}\) with logarithmic singularities along \(D_{i}^{\times}\). Our main result is Theorem 4.1, which states that _twisted fundamental groupoids_ derived from castling equivalent linear free divisors are Morita equivalent. In particular, they have equivalent categories of representations. The significance of this theorem is that it implies that the categories of logarithmic flat connections for \((X_{1},D_{1})\) and \((X_{2},D_{2})\) can both be embedded into a common category. Furthermore, by transporting a flat logarithmic connection for \((X_{2},D_{2})\) along the Morita equivalence, we obtain a flat logarithmic connection for \((X_{1}^{\times},D_{1}^{\times})\) which _may fail to extend_. This is demonstrated in Proposition 4.3 which provides a simple method to construct non-extendable connections from the data of representations of special linear groups. **Acknowledgements.** I would like to thank L. Narvaez Macarro for informing me of Mebkhout's result. I would also like to thank M. Gualtieri for suggesting several improvements to the paper. ## 2 Homogeneous Lie groupoids In this section we briefly recall homogeneous groupoids and some of their representation theory, as developed in [1]. Let \(X\) be a complex manifold equipped with a \((\mathbb{C}^{*})^{k}\)-action and let \(\mathcal{G}\rightrightarrows X\) be a holomorphic Lie groupoid. The action determines a source-simply connected action groupoid \(\mathbb{C}^{k}\ltimes X\) and we assume that there is a groupoid morphism \[i:\mathbb{C}^{k}\ltimes X\to\mathcal{G}.\] There is a natural projection morphism \(p:\mathbb{C}^{k}\ltimes X\to\mathbb{C}^{k}\) and we assume that we have an extension of this morphism to \(\mathcal{G}\): \[\pi:\mathcal{G}\to\mathbb{C}^{k}.\] Note that the subgroup \(2\pi i\mathbb{Z}^{k}\subset\mathbb{C}^{k}\) determines an isotropic subgroupoid \(\mathbb{Z}^{k}\times X\subset\mathcal{G}\). **Definition 2.1**.: Let \((\mathcal{G},i,\pi)\) be a triple as above. This data * is _central_ if for all \(n\in\mathbb{Z}^{k}\) and \(g\in\mathcal{G}\), we have \[(n,t(g))*g=g*(n,s(g)).\] * has a _unique s-equivalence class_ if every point of \(X\) is equivalent under the equivalence relation generated by the \(\mathcal{G}\)-orbit closures. 
The data \((\mathcal{G},i,\pi)\) defines a _homogeneous groupoid_ if it is both central and has a unique \(s\)-equivalence class. It is furthermore called _positive_ in the case that \(X\) is a vector space and one of the \(\mathbb{C}^{*}\)-actions has strictly positive weights. In [1] several examples of homogeneous groupoids are described. They arise from representations of algebraic groups and as the twisted fundamental groupoids of weighted homogeneous Saito free divisors. Let us recall a proposition which will be used in a future section. **Proposition 2.2**.: [1, Proposition 3.4] _Let \((\mathcal{G},i,\pi)\) be a homogeneous groupoid. Then the conjugation action of \(\mathbb{C}^{k}\) descends to an action of \((\mathbb{C}^{*})^{k}\) on \(\mathcal{G}\) by Lie groupoid automorphisms. Furthermore, the morphism \(\pi\) is invariant under this action._ ### Monodromy and the Jordan decomposition Let \(P\to X\) be a right principal \(H\)-bundle, where \(H\) is a connected complex reductive group. Recall that the \(H\)-equivariant isomorphisms between the fibres of \(P\) assemble into the Atiyah groupoid \(\mathcal{G}(P)\), which is a Lie groupoid over \(X\). A representation of the groupoid \(\mathcal{G}\) consists of a principal bundle \(P\) and a morphism \(\phi:\mathcal{G}\to\mathcal{G}(P)\) covering the identity. In the case of a homogeneous groupoid we can restrict to the subgroupoid \(\mathbb{Z}^{k}\times X\) to get the monodromy, which we view as a map \[M:\mathbb{Z}^{k}\to Aut_{H}(P),\] where \(Aut_{H}(P)\) is the group of gauge transformations of \(P\). In fact, the image of the monodromy lies in the subgroup \(Aut(\phi)\) of automorphisms of \(\phi\). Let \(S\) and \(U\) be the semisimple and unipotent components of \(M\), respectively, defined according to the multiplicative Jordan-Chevalley decomposition. In [1, Lemma 4.3] we show that \(S\) and \(U\) are well-defined holomorphic automorphisms of \(\phi\), and that the conjugacy class of \(S_{n}\), for \(n\in\mathbb{Z}^{k}\), is constant over \(X\). We furthermore prove the following functorial Jordan decomposition theorem. Let \(Rep(\mathcal{G},H)\) be the category of \(H\)-representations of \(\mathcal{G}\), and let \(\mathcal{JC}_{H}\) denote the category whose objects are triples \((P,\phi_{s},U)\), where \((P,\phi_{s})\) is a representation with semisimple monodromy, and \(U:\mathbb{Z}^{k}\to Aut(\phi_{s})\) is a morphism valued in unipotent automorphisms. **Theorem 2.3**.: [1, Theorem 4.6] _There is an isomorphism of categories_ \[Rep(\mathcal{G},H)\cong\mathcal{JC}_{H}.\] Let us briefly describe the isomorphism. Given a representation \((P,\phi)\) we assign to it the triple \((P,\phi_{s},U)\), where \(U\) is the unipotent part of the monodromy of \(\phi\), and \(\phi_{s}=\sigma_{U}\phi\), where \(\sigma_{U}\) is a groupoid \(1\)-cocycle which 'untwists' the unipotent monodromy. ### Equivariant structures We now describe an application of Theorem 2.3 which allows us to construct torus equivariant bundles from groupoid representations. We start by assuming that the homogeneous groupoid has the form \(\mathbb{C}\ltimes X\to\mathcal{G}\to\mathbb{C}\). Note that any homogeneous groupoid gives rise to one of this form by forgetting some of the \(\mathbb{C}^{*}\)-actions. For simplicity, we will assume that the structure group is \(H=GL(m,\mathbb{C})\). Recall from Proposition 2.2 that there is an induced \(\mathbb{C}^{*}\)-action on the groupoid \(\mathcal{G}\). 
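At the level of a single fibre, the factorization encoded by Theorem 2.3 is the multiplicative Jordan-Chevalley decomposition of the monodromy matrix; a minimal symbolic illustration (not from the paper) is:

```python
import sympy as sp

# Multiplicative Jordan decomposition M = S U of an invertible matrix:
# S semisimple, U unipotent, [S, U] = 0.
M = sp.Matrix([[2, 1],
               [0, 2]])

P, J = M.jordan_form()                          # M = P * J * P**(-1)
D = sp.diag(*[J[i, i] for i in range(J.rows)])  # diagonal of the Jordan form
S = P * D * P.inv()                             # semisimple part
U = S.inv() * M                                 # unipotent part

assert S * U == M and S * U == U * S
assert (U - sp.eye(2)) ** 2 == sp.zeros(2, 2)   # U - 1 is nilpotent
print(S)   # Matrix([[2, 0], [0, 2]])
print(U)   # Matrix([[1, 1/2], [0, 1]])
```

Here \(S\) and \(U\) commute, \(S\) is diagonalizable and \(U-\mathbb{I}\) is nilpotent, mirroring the factorization \((P,\phi)\mapsto(P,\phi_{s},U)\).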
The following result allows us to lift this action to any representation of \(\mathcal{G}\). **Proposition 2.4**.: _Let \((P,\phi)\) be a \(GL(m,\mathbb{C})\)-representation of \(\mathcal{G}\) and choose \(S\in GL(m,\mathbb{C})\) in the conjugacy class of the semisimple component of the monodromy of \(\phi\). Then_ * \((P,\phi)\) _admits a reduction of structure group to the centralizer_ \(C_{GL(m,\mathbb{C})}(S)\)_. Denote the resulting representation_ \((K,\phi)\)_._ * _The_ \(\mathbb{C}^{*}\)_-action on_ \(X\) _(and_ \(\mathcal{G}\)_) lifts to an equivariant action on_ \(K\) _which acts by automorphisms of_ \(\phi\)_._ _In particular, \(P\) admits the structure of a \(\mathbb{C}^{*}\)-equivariant bundle over \(X\)._ Proof.: First, by Theorem 2.3, the representation \(\phi\) admits a factorisation \((P,\phi_{s},U)\), where \(\phi_{s}\) has semisimple monodromy. More precisely, \(\psi=i^{*}(\phi_{s}):\mathbb{C}\ltimes X\to\mathcal{G}(P)\) is a representation and the monodromy \(M(x)=\psi(2\pi i,x)\) is a semisimple automorphism of \(\phi\). By [1, Lemma 4.3], the conjugacy class of \(M\) is constant along \(X\). Let \(S\in GL(m,\mathbb{C})\) be a representative of this class and define \[K=\{p\in P\ |\ Mp=p*S\}.\] We claim that \(K\) is a holomorphic reduction of structure group to \(C_{GL(m,\mathbb{C})}(S)\), the centralizer of \(S\). Indeed, it is possible to choose a local trivialization \(P|_{U}\cong U\times GL(m,\mathbb{C})\) such that \(M|_{U}=S\). Then \[K|_{U}\cong\{(u,h)\in U\times GL(m,\mathbb{C})\ |\ Sh=hS\}=U\times C_{GL(m,\mathbb{C})}(S).\] Note that since \(M\) is an automorphism of \(\phi\), it follows that \(\phi\) preserves \(K\). The representation \(\psi\) defines an equivariant \(\mathbb{C}\)-action on \(K\) (or \(P\)) which lifts the given action on \(X\). We will now modify it so that it descends to a \(\mathbb{C}^{*}\)-action. Let \(A\in Lie(Z(C_{GL(m,\mathbb{C})}(S)))\), the Lie algebra of the centre, such that \(\exp(2\pi iA)=S\). Now define \[\tilde{\psi}:\mathbb{C}\ltimes X\to\mathcal{G}(K),\qquad(\lambda,x)\mapsto R_{\exp(-\lambda A)}\psi(\lambda,x),\] where \(R_{\exp(-\lambda A)}:K\to K\) denotes the right action of \(\exp(-\lambda A)\in C_{GL(m,\mathbb{C})}(S)\) on \(K\). Note that the map \(\tilde{\psi}(\lambda,x):K_{x}\to K_{\lambda*x}\) is \(C_{GL(m,\mathbb{C})}(S)\)-equivariant precisely because \(\exp(-\lambda A)\) lies in the centre of this group. Hence this defines a holomorphic action. For \(p\in K\), \[\tilde{\psi}(2\pi i,x)(p)=\psi(2\pi i,x)(p)*e^{-2\pi iA}=M(x)(p)*S^{-1}=p.\] Therefore, the action descends to an action of \(\mathbb{C}^{*}\). In order to show that \(\mathbb{C}^{*}\) acts by automorphisms of \(\phi\), we need to check that for all \(\mu\in\mathbb{C}\), \(g\in\mathcal{G}\) and \(p\in K_{s(g)}\), the following equation is satisfied \[\mu*\phi(g)(p)=\phi(\mu*g)(\mu*p).\] Unpacking the definitions, this equation is given by \[\phi_{s}(\mu,t(g))\phi(g)(p)*e^{-\mu A}=\phi((\mu,t(g))g(\mu,s(g))^{-1})\phi_{s}(\mu,s(g))(p)*e^{-\mu A}.\] Cancelling off the factor \(e^{-\mu A}\), and using the definition of \(\phi_{s}\) from [1, Section 4.2], the equation simplifies to \[c(\mu)_{\mu*t(g)}\phi((\mu,t(g))g)=\phi((\mu,t(g))g(\mu,s(g))^{-1})c(\mu)_{\mu*s(g)}\phi(\mu,s(g)),\] which follows from the fact that \(c(\mu)\in Aut(\phi)\).
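The central-logarithm step of this proof can be illustrated numerically (a sketch with hypothetical matrices): because \(A\) is constructed as a function of \(S\), it automatically lies in \(Lie(Z(C_{GL(m,\mathbb{C})}(S)))\) and so commutes with anything commuting with \(S\).

```python
import numpy as np
from scipy.linalg import expm

# Given a semisimple S in GL(m, C), pick A with exp(2 pi i A) = S which is a
# function of S (take logarithms of the eigenvalues on each eigenspace).
S = np.diag([1j, 1j, -1.0])                  # semisimple, two eigenvalues

eigvals, V = np.linalg.eig(S)
A = V @ np.diag(np.log(eigvals) / (2j * np.pi)) @ np.linalg.inv(V)

print(np.allclose(expm(2j * np.pi * A), S))  # True: exp(2 pi i A) = S

# A commutes with anything commuting with S, e.g. a matrix mixing
# the two copies of the eigenvalue 1j:
B = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 5]], dtype=complex)     # B S = S B
print(np.allclose(B @ S, S @ B), np.allclose(B @ A, A @ B))  # True True
```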
Namely, we must be able to find a logarithm of \(S\) in the centre of its centralizer \(\log(S)\in Lie(Z(C_{GL(m,\mathbb{C})}(S)))\). Such logarithms exist because the eigenspace decomposition of \(S\) implies that \(C_{GL(m,\mathbb{C})}(S)\) is a product of general linear groups and so its centre is a connected torus. For general structure groups this property can fail. For example, \(-id\in SL(2,\mathbb{C})\) is contained in the non-identity component of the centre. This does not admit a central logarithm and hence leads to counter-examples of Proposition 2.4 in the case of structure group \(SL(2,\mathbb{C})\). Let \(\phi\) be an \(H\)-representation, for \(H\) a general complex reductive group. Let \(S\in H\) be an element of the conjugacy class of the semisimple component of its monodromy. We will say that \(\phi\) has 'well-behaved' monodromy if \(S\) is contained in the identity component of \(Z(C_{H}(S))\). The condition of having well-behaved monodromy is necessary and sufficient for Proposition 2.4 to hold in this more general setting. Because the centralizer of a semisimple matrix in \(GL(m,\mathbb{C})\) is a product of general linear groups and has connected centre, we can iterate the construction of Proposition 2.4 in the more general setting of a homogeneous groupoid associated to a \((\mathbb{C}^{*})^{k}\)-action. **Corollary 2.5**.: _Proposition 2.4 holds for a general homogeneous groupoid. In particular, a \(GL(m,\mathbb{C})\)-representation \((P,\phi)\) of a homogeneous groupoid \(\mathbb{C}^{k}\ltimes X\to\mathcal{G}\to\mathbb{C}^{k}\) admits an equivariant \((\mathbb{C}^{*})^{k}\)-action._ ## 3 Extending representations Let \(X\) be a complex manifold, let \(D\subset X\) be a Saito free divisor [22] and let \(U=X\setminus D\). Recall from [1, Section 3.2] that this determines a twisted fundamental groupoid \(\Pi(X,D)\). The key fact for us is that the category of representations of \(\Pi(X,D)\) is _equivalent_ to the category of flat connections on \(X\) with logarithmic singularities along \(D\). Let \(D^{s}\subset D\) denote the singular locus of \(D\), and let \(X^{\times}=X\setminus D^{s}\) and \(D^{\times}=D\setminus D^{s}\). Then \((X^{\times},D^{\times})\) defines a smooth Saito free divisor with the property that \(X^{\times}\setminus D^{\times}=X\setminus D=U\). In this section we study the problem of extending a logarithmic flat connection on \((X^{\times},D^{\times})\) to a logarithmic flat connection on \((X,D)\). We will study this in terms of the corresponding groupoid representations. Recall that the orbits of the groupoid \(\Pi(X,D)\) define a stratification of \(X\). In [22], this is called the _logarithmic stratification_. The connected components of \(U\) and \(D^{\times}\) are orbits and hence the singularity locus \(D^{s}\) decomposes as a union of orbits. Therefore, \(X^{\times}\) is a saturated open subset of \(X\) and \(\Pi(X,D)|_{X^{\times}}=\Pi(X^{\times},D^{\times})\) is a subgroupoid of \(\Pi(X,D)\). Restricting representations defines a functor \[R:Rep(\Pi(X,D),H)\to Rep(\Pi(X^{\times},D^{\times}),H).\] The extension problem which we study in this section can be formulated as the problem of characterising the essential image of the functor \(R\). When \(X=\mathbb{C}^{n}\) the following lemma shows that this problem reduces to that of determining when the principal bundle \(P\) underlying a representation of \(\Pi(X^{\times},D^{\times})\) is trivializable. **Lemma 3.1**.: _The functor \(R\) is fully faithful. 
If \(X=\mathbb{C}^{n}\), then its image consists of representations defined on trivializable bundles._ Proof.: The lemma is trivial if \(D^{s}\) is empty. Assuming that it is non-empty, it is an analytic subset of codimension \(c\geq 2\). Because \(D^{s}\) is a union of orbits, we have that \(\Pi(X,D)|_{D^{s}}=s^{-1}(D^{s})\), where \(s:\Pi(X,D)\to X\) is the source map. Hence \(\Pi(X,D)|_{D^{s}}\) is an analytic subset of \(\Pi(X,D)\) of codimension \(c\). By Hartogs' theorem [25, Theorem 5B] any holomorphic function on \(X^{\times}\) (resp. \(\Pi(X^{\times},D^{\times})\)) extends uniquely to a holomorphic function on \(X\) (resp. \(\Pi(X,D)\)). The functor \(R\) is clearly faithful since \(X^{\times}\) is dense in \(X\). To see that it is full, note that a homomorphism between two representations is locally given by a holomorphic map \(T:X^{\times}\to H\), and so by Hartogs' theorem it extends to \(X\). If \(X=\mathbb{C}^{n}\), then principal bundles over \(X\) (and hence in the image of \(R\)) are trivializable. Conversely, let \((P,\phi)\in Rep(\Pi(X^{\times},D^{\times}),H)\) be a representation such that \(P=X^{\times}\times H\). Then the representation is given by a homomorphism \(\phi:\Pi(X^{\times},D^{\times})\to H\). By Hartogs' theorem, it extends to a homomorphism \(\phi:\Pi(X,D)\to H\). ### Weighted homogeneous plane curves In this section, we assume that \(X=\mathbb{C}^{2}\) is equipped with a \(\mathbb{C}^{*}\)-action which is generated by \[E=px\partial_{x}+qy\partial_{y},\] for positive integers \(p\) and \(q\). We further assume that \(D=f^{-1}(0)\), for \(f\) a weighted homogeneous function. This means that \(E(f)=nf\), for a constant \(n\). In this case, the twisted fundamental groupoid \(\Pi(X,D)\) is a homogeneous groupoid by [1, Theorem 3.6]. Assuming that \(D\) is singular, the singularity locus consists of a single point \(D^{s}=\{(0,0)\}\). Hence \(X^{\times}=\mathbb{C}^{2}\setminus\{(0,0)\}.\) The following theorem, a special case of Mebkhout's [19, Theorem 10.3-3], states that the extension problem may always be solved for \((\mathbb{C}^{2},D)\). **Theorem 3.2**.: _Let \(H\) be a connected complex reductive group. Let \(D\subset\mathbb{C}^{2}\) be a weighted homogeneous plane curve and let \(X^{\times}=\mathbb{C}^{2}\setminus\{(0,0)\}\). Then restricting representations defines an equivalence of categories_ \[Rep(\Pi(\mathbb{C}^{2},D),H)\cong Rep(\Pi(X^{\times},D^{\times}),H).\] Proof.: By Lemma 3.1, it suffices to show that for any \((P,\phi)\in Rep(\Pi(X^{\times},D^{\times}),H)\), the underlying bundle \(P\) is trivializable. Furthermore, it suffices to prove this in the case \(H=GL(n,\mathbb{C})\). Indeed, for the general case, let \(H\subseteq GL(n,\mathbb{C})\) be an embedding and let \(Q=(P\times GL(n,\mathbb{C}))/H\) be the corresponding extension of structure group, so that \((Q,\phi)\in Rep(\Pi(X^{\times},D^{\times}),GL(n,\mathbb{C}))\). Then given a trivialization \(Q\cong X^{\times}\times GL(n,\mathbb{C})\), the subbundle \(P\) gives rise to a holomorphic map \(f_{P}:X^{\times}\to GL(n,\mathbb{C})/H\). Since \(H\) is reductive, the quotient \(GL(n,\mathbb{C})/H\) is an affine variety, and hence we may apply Hartogs' theorem to obtain an extension of \(f_{P}\) to all of \(\mathbb{C}^{2}\). This implies that \(P\) extends to a bundle over \(\mathbb{C}^{2}\), which further implies that it is trivializable. Assume for the remainder of the proof that \(H=GL(n,\mathbb{C})\).
We will work with the associated vector bundle \(V_{P}=(P\times\mathbb{C}^{n})/H\), which is equivalent to \(P\). By Proposition 2.4, \(V_{P}\) admits an equivariant action of \(\mathbb{C}^{*}\), which lifts the given action on \(X^{\times}\). The 'quotient' \(V_{P}/\mathbb{C}^{*}\) is a vector bundle over the 'football' \(\mathbb{P}_{p,q}=[X^{\times}/\mathbb{C}^{*}]\). This is an orbifold: it has coarse moduli space given by \(\mathbb{P}^{1}\), and it has isotropy groups \(\mathbb{Z}/p\mathbb{Z}\) over \([1,0]\) and \(\mathbb{Z}/q\mathbb{Z}\) over \([0,1]\). We may therefore apply the Grothendieck decomposition theorem (i.e. [18, Theorem 2.4]) to give a decomposition of \(V_{P}\) as a direct sum of equivariant line bundles. More precisely, by [18, Proposition 2.2], the Picard group of \(\mathbb{P}_{p,q}\) is generated by an equivariant line bundle \(\mathcal{O}(1)=X^{\times}\times\mathbb{C}\), with \(\mathbb{C}^{*}\)-action given by \[\mu*(x,y,\lambda)=(\mu^{p}x,\mu^{q}y,\mu\lambda).\] Hence, we have a decomposition of \(\mathbb{C}^{*}\)-equivariant vector bundles over \(X^{\times}\) given as follows \[V_{P}\cong\bigoplus_{i}\mathcal{O}(n_{i}).\] In particular, this implies that the underlying holomorphic vector bundle of \(V_{P}\) is trivializable. _Remark 3.3_.: For general plane curves, a very similar result was proved by M. Kita in [11, Proposition 2]. However, his result is only stated for logarithmic connections obtained from the Deligne-Manin construction. _Remark 3.4_.: Although Theorem 3.2 is stated for flat connections on \(X^{\times}\), it is in fact a purely local statement which holds for logarithmic flat connections defined on a small punctured neighbourhood of the origin. This is because \(\Pi(X^{\times},D^{\times})\) is Morita equivalent to \(\Pi(W^{\times},D^{\times}\cap W)\), for any polydisc \(W\) centred at \((0,0)\). More generally, a similar extension result holds for hypersurfaces of the form \(\mathbb{C}^{k}\times D\subset\mathbb{C}^{k+2}\). On the other hand, the assumption that \(H\) is reductive cannot be relaxed. For example, let \(B\subset GL(2,\mathbb{C})\) be the Borel subgroup of upper triangular matrices. Since \(Y=GL(2,\mathbb{C})/B\) is the total space of a \(\mathbb{C}^{*}\)-bundle over the projective line, it is not affine and hence we cannot apply Hartogs' theorem to \(Y\)-valued functions. For a counter-example to the theorem in this setting, consider a non-trivial short exact sequence of toric vector bundles on \(\mathbb{CP}^{1}\): \[0\to\mathcal{O}\to\mathcal{O}(1)^{\oplus 2}\to\mathcal{O}(2)\to 0.\] Let \(P\) be the principal \(B\)-bundle of frames which split the flag \(\mathcal{O}\to\mathcal{O}(1)^{\oplus 2}\). This defines a flat bundle on \(X^{\times}\) with logarithmic poles along the coordinate hyperplanes \(D^{\times}\). This bundle cannot be extended to \(\mathbb{C}^{2}\). Otherwise, we could linearize it to obtain a reduction of structure group to \((\mathbb{C}^{*})^{2}\) and this would induce an isomorphism between \(\mathcal{O}(1)^{\oplus 2}\) and \(\mathcal{O}\oplus\mathcal{O}(2)\). ## 4 Castling equivalence In this section we explore another phenomenon which gives rise to bundles which do not extend. This is the phenomenon of castling equivalence for prehomogeneous vector spaces. Castling transformation is an operation on linear representations which was introduced by Sato and Kimura in their work [23] classifying irreducible prehomogeneous vector spaces.
Recall that a prehomogeneous vector space consists of a linear representation \(V\) of an algebraic group \(G\) such that there is an open dense orbit. Let \(G\) be a linear algebraic group, \(V\) an \(n\)-dimensional linear representation and \(1\leq r<n\) an integer. Then \(\mathsf{Hom}(\mathbb{C}^{r},V)\) is a linear representation of \(G\times SL(r,\mathbb{C})\) and \(\mathsf{Hom}(\mathbb{C}^{n-r},V^{*})\) is a linear representation of \(G\times SL(n-r,\mathbb{C})\). These two representations are said to be related by a castling transformation. More generally, two representations are castling equivalent if they are related by a sequence of castling transformations. In [23] it is shown that castling equivalent representations share a number of features, such as their algebras of polynomial relative invariants, their generic isotropy groups, and the property of being prehomogeneous. Essentially, this is due to the fact that the Grassmannians \(Gr(r,V)\) and \(Gr(n-r,V^{*})\) are isomorphic. Let \((V,D)\) be a linear free divisor (cf. [1, Section 3.2.2]) and let \(G\subset GL(V)\) be the connected component of the group of linear transformations that preserve \(D\). Then \((G,V)\) defines a prehomogeneous vector space. Conversely, given a prehomogeneous vector space \((G,V)\), it makes sense to ask whether the complement of the open dense orbit defines a linear free divisor. In [8] it is shown that the property of defining a linear free divisor is preserved by castling transformations. Hence, we may talk about castling equivalence of linear free divisors. The following result shows that the twisted fundamental groupoids of castling equivalent free divisors are 'birationally' Morita equivalent. **Theorem 4.1**.: _Let \((X_{1},D_{1})\) and \((X_{2},D_{2})\) be castling equivalent linear free divisors, let \(D_{i}^{s}\), \(i=1,2\), denote the singular loci, and let \(X_{i}^{\times}=X_{i}\setminus D_{i}^{s}\) and \(D_{i}^{\times}=D_{i}\setminus D_{i}^{s}\). Then the twisted fundamental groupoids \(\Pi(X_{1}^{\times},D_{1}^{\times})\) and \(\Pi(X_{2}^{\times},D_{2}^{\times})\) are Morita equivalent. Furthermore, this Morita equivalence is unique up to rescaling of \(X_{1}\)._ Proof.: Let \((G\times SL(r,\mathbb{C}),X_{1}=\mathsf{Hom}(\mathbb{C}^{r},V))\) and \((G\times SL(n-r,\mathbb{C}),X_{2}=\mathsf{Hom}(\mathbb{C}^{n-r},V^{*}))\) be two representations related by a castling transformation. Let \(I_{r,n}\subset\mathsf{Hom}(\mathbb{C}^{r},V)\) be the subspace of injective linear transformations. This is a \(G\times SL(r,\mathbb{C})\)-invariant subset, which is the complement of a determinantal variety \(B_{r,n}\) of codimension \(n-r+1\geq 2\). It follows that both \(I_{r,n}\) and \(B_{r,n}\) are unions of orbits. If \((G\times SL(r,\mathbb{C}),\mathsf{Hom}(\mathbb{C}^{r},V))\) defines a linear free divisor, then for dimension reasons \(B_{r,n}\subseteq D^{s}\) and hence \(X_{1}^{\times}\subseteq I_{r,n}\). Furthermore, \(\Pi(X_{1},D_{1})=(\tilde{G}\times SL(r,\mathbb{C}))\ltimes\mathsf{Hom}(\mathbb{C}^{r},V)\), where \(\tilde{G}\) is the universal cover of \(G\). Analogous statements hold for \((G\times SL(n-r,\mathbb{C}),X_{2}=\mathsf{Hom}(\mathbb{C}^{n-r},V^{*}))\). We will construct a Morita equivalence between the subgroupoids \((\tilde{G}\times SL(r,\mathbb{C}))\ltimes I_{r,n}\) and \((\tilde{G}\times SL(n-r,\mathbb{C}))\ltimes I_{n-r,n}\), and then show that this induces a Morita equivalence between \(\Pi(X_{1}^{\times},D_{1}^{\times})\) and \(\Pi(X_{2}^{\times},D_{2}^{\times})\).
First, the group \(GL(r,\mathbb{C})\) acts freely on \(I_{r,n}\) with quotient \(Gr(r,V)\), the Grassmannian of \(r\)-planes in \(V\). It follows that the subgroup \(SL(r,\mathbb{C})\) also acts freely on \(I_{r,n}\). The quotient \(L_{r,n}=I_{r,n}/SL(r,\mathbb{C})\) is a principal \(\mathbb{C}^{*}\)-bundle over \(Gr(r,V)\), identified with the determinant of the tautological rank \(r\) vector bundle \(T_{r}\). This induces a Morita equivalence between \((\tilde{G}\times SL(r,\mathbb{C}))\ltimes I_{r,n}\) and \(\tilde{G}\ltimes L_{r,n}\). Similarly, we have a Morita equivalence between \((\tilde{G}\times SL(n-r,\mathbb{C}))\ltimes I_{n-r,n}\) and \(\tilde{G}\ltimes L_{n-r,n}\). We will obtain the desired Morita equivalence by constructing an isomorphism between \(L_{r,n}\) and \(L_{n-r,n}\). The bundles \(L_{r,n}\) and \(L_{n-r,n}\) have respective bases \(Gr(r,V)\) and \(Gr(n-r,V^{*})\). There is a canonical \(\tilde{G}\)-equivariant isomorphism between these spaces, given by sending an \(r\)-plane to its annihilator: \[A\!:\!Gr(r,V)\to Gr(n-r,V^{*}),\qquad W\mapsto Ann(W).\] Let \(T_{r}\) be the canonical \(r\)-plane bundle over \(Gr(r,V)\), and let \(T_{n-r}\) be the canonical dual \(n-r\)-plane bundle over \(Gr(n-r,V^{*})\). These two bundles are related by the following short exact sequence over \(Gr(r,V)\): \[0\to T_{r}\to V\times Gr(r,V)\to A^{*}T_{n-r}^{*}\to 0.\] This is a sequence of \(\tilde{G}\)-equivariant bundles once we equip the trivial bundle \(V\times Gr(r,V)\) with the action \[g\cdot(v,W)=(g(v),g(W)).\] Taking the determinant of the sequence yields the \(\tilde{G}\)-equivariant isomorphism \[\phi:L_{r,n}\cong\det(V)\otimes A^{*}(L_{n-r,n}).\] If we choose a volume form on \(V\), then \(\det(V)\cong\mathbb{C}\), and hence \(L_{r,n}\cong L_{n-r,n}\). However, the two actions of \(\tilde{G}\) on \(L_{r,n}\) differ by the determinant character and this must be corrected. To this end, consider the homomorphism \(\tilde{G}\times SL(r,\mathbb{C})\to GL(X_{1})\) and let \(H_{1}\) be its image. Then \(\tilde{G}\times SL(r,\mathbb{C})\to H_{1}\) is the universal cover. Because \(H_{1}\) is the connected subgroup of linear transformations that preserve \(D_{1}\), it contains a central factor of \(\mathbb{C}^{*}\) acting as scaling transformations. This lifts to a homomorphism into the centre \(\mathbb{C}\to Z(\tilde{G}\times SL(r,\mathbb{C}))\). Because the centre of \(SL(r,\mathbb{C})\) is finite, \(\mathbb{C}\) maps entirely into \(\tilde{G}\). It then follows that \(z\in\mathbb{C}\) acts on \(V\) as scalar multiplication by \(e^{z}\). Consider the character \(\det:\tilde{G}\to\mathbb{C}^{*}\) which is obtained from the action of \(G\) on \(\det(V)\). This is surjective because \(z\in\mathbb{C}\) is sent to \(e^{nz}\). Let \(K\subset\tilde{G}\) be the connected component of the kernel. Then \(\tilde{G}\cong\mathbb{C}\ltimes K\). Because \(K\) is the kernel of the determinant character, the above isomorphism \(\phi\) between \(L_{r,n}\) and \(L_{n-r,n}\) is \(K\)-equivariant. Furthermore, \(z\in\mathbb{C}\) acts on \(L_{r,n}\) by multiplication by \(e^{rz}\) and on \(L_{n-r,n}\) by multiplication by \(e^{(r-n)z}\). We therefore have an isomorphism given by \[(\mathbb{C}\times K)\ltimes L_{r,n}\to(\mathbb{C}\times K)\ltimes L_{n-r,n},\qquad(z,k,v)\mapsto(\frac{r}{r-n}z,k,\phi(v)).\] Note that the only choice in constructing this isomorphism is the trivialisation of \(\det(V)\). The different choices are obtained by rescaling, which may be induced by rescaling \(X_{1}\).
A Morita equivalence between groupoids induces a bijection between orbits, and isomorphisms of the corresponding isotropy groups. Therefore, the bijection preserves the codimension of the orbits. Hence, it restricts to a Morita equivalence between \(\Pi(X_{1}^{\times},D_{1}^{\times})\) and \(\Pi(X_{2}^{\times},D_{2}^{\times})\). **Corollary 4.2**.: _Let \(\mathcal{C}\) denote a castling equivalence class of linear free divisors. For every \((X,D)\in\mathcal{C}\), the category of \(H\)-representations \(\text{Rep}(\Pi(X^{\times},D^{\times}),H)\) only depends on the class \(\mathcal{C}\). Hence, we denote it \(\text{Rep}(\mathcal{C},H)\). Furthermore, there is a fully faithful functor_ \[\text{Rep}(\Pi(X,D),H)\to\text{Rep}(\mathcal{C},H).\] _In other words, the representations of castling equivalent linear free divisors all embed into a common category._ Let \((V,D)\) be a linear free divisor and assume that the dimension of \(V\) is \(n\geq 3\). Let \(G\) be the connected group of linear transformations which preserve \(D\). Then \(G=G\times SL(1,\mathbb{C})\) and \(V=\mathsf{Hom}(\mathbb{C},V)\). Hence there is a castling transformation between \((G,V)\) and \((G\times SL(n-1,\mathbb{C}),\mathsf{Hom}\big{(}\mathbb{C}^{n-1},V^{*}\big{)})\). In other words, the \(n\)-dimensional linear free divisor \((V,D)\) is castling equivalent to a linear free divisor \((V^{\prime},D^{\prime})\) of dimension \(n(n-1)\). As observed in [7], this provides a method for constructing infinitely many new castling equivalent linear free divisors of increasing dimensions. The next proposition shows that this also gives rise to a method for constructing non-extendable flat logarithmic connections. In order to state the proposition, we need to recall the _residue_ of a logarithmic flat connection. Recall that \[\Pi(V^{\prime},D^{\prime})\cong(\tilde{G}\times SL(n-1,\mathbb{C}))\ltimes V^{ \prime},\] where \(\tilde{G}\) is the universal cover of \(G\). The origin \(0\in V^{\prime}\) is an orbit of the groupoid whose isotropy group is \(\tilde{G}\times SL(n-1,\mathbb{C})\). Hence, given an \(H\)-representation \((\phi,P)\) of \(\Pi(V^{\prime},D^{\prime})\), we can restrict it to the origin to get an \(H\)-representation of \(\tilde{G}\times SL(n-1,\mathbb{C})\). This is the residue, which can be further restricted to a residual \(SL(n-1,\mathbb{C})\) action. **Proposition 4.3**.: _Given a linear free divisor \((V,D)\) of dimension \(dim(V)=n\geq 3\), let \((V^{\prime},D^{\prime})\) denote the linear free divisor associated to the castling transform \((G\times SL(n-1,\mathbb{C}),\mathsf{Hom}\big{(}\mathbb{C}^{n-1},V^{*}\big{)})\), where \(G\) is the connected group of linear transformations of \(V\) that preserve \(D\). Then there is a fully faithful functor_ \[F:\text{Rep}(\Pi(V,D),H)\to\text{Rep}(\Pi(V^{\prime},D^{\prime}),H)\] _whose essential image consists of the representations with trivial residual \(SL(n-1,\mathbb{C})\) action._ Proof.: The proof of Theorem 4.1 constructs a Morita equivalence between \(\Pi(I_{1,n},D)\) and \(\Pi(I_{n-1,n},D^{\prime})\) and this induces an equivalence \(\tilde{F}\) between their categories of representations. In our case, \(V\setminus\{0\}=I_{1,n}=L_{1,n}\) and \(I_{n-1,n}\) is a principal \(SL(n-1,\mathbb{C})\)-bundle over \(L_{n-1,n}\cong L_{1,n}\). Let \(\pi:I_{n-1,n}\to V\setminus\{0\}\) be the bundle projection map. Then the functor \(\tilde{F}\) is simply given by pulling a representation back along \(\pi^{*}\). 
Since the pullback sends the trivial bundle on \(I_{1,n}\) to the trivial bundle on \(I_{n-1,n}\), by Lemma 3.1, \(\tilde{F}\) restricts to the desired fully faithful functor \(F\). It remains to determine the essential image of \(F\). First, note that for the trivial bundle \[\pi^{*}(I_{1,n}\times H)=I_{n-1,n}\times H\] the group \(SL(n-1,\mathbb{C})\) acts only on the first factor. This remains the case for the extension of this bundle to \(V^{\prime}\). Hence, the residual action of \(SL(n-1,\mathbb{C})\) is trivial. In fact, this is true for all representations in the image of \(F\), since they are obtained by pulling back _trivializable_ bundles. Conversely, let \((P,\phi)\in\text{Rep}(\Pi(V^{\prime},D^{\prime}),H)\) be a representation with trivial residual \(SL(n-1,\mathbb{C})\) action. By linearizing the action of \(SL(n-1,\mathbb{C})\ltimes V^{\prime}\) (e.g. via [1, Theorem 4.8]), we may choose a trivialization of \(P\) such that \(\phi:(\tilde{G}\times SL(n-1,\mathbb{C}))\ltimes V^{\prime}\to H\) satisfies \[\phi(1,s,v)=1,\] for \(s\in SL(n-1,\mathbb{C})\) and \(v\in V^{\prime}\). In other words, the action of \(SL(n-1,\mathbb{C})\) on \(P|_{I_{n-1,n}}\cong I_{n-1,n}\times H\) is trivial on the second factor. Hence, \(\tilde{F}^{-1}(P,\phi)\) has underlying bundle given by \[P|_{I_{n-1,n}}/SL(n-1,\mathbb{C})\cong(I_{n-1,n}\times H)/SL(n-1,\mathbb{C})=I_{n-1,n}/SL(n-1,\mathbb{C})\times H=(V\setminus\{0\})\times H.\] This is trivializable and hence \(\tilde{F}^{-1}(P,\phi)\in\text{Rep}(\Pi(V,D),H)\). _Remark 4.4_.: Note that for a fixed structure group \(H\), the sequence of embeddings obtained by repeated application of Proposition 4.3 will eventually stabilise. This is because the dimension of the extra factor of \(SL(n,\mathbb{C})\) which gets added at each stage grows quickly, and hence eventually there are no non-trivial morphisms to \(H\). In other words, after a certain stage, the functor \(F:\text{Rep}(\Pi(V,D),H)\to\text{Rep}(\Pi(V^{\prime},D^{\prime}),H)\) becomes an equivalence. _Example 4.5_.: Consider the linear free divisor \((\mathbb{C}^{3},D)\), where \(D\) is the union of coordinate hyperplanes, which is cut out by \(xyz=0\). The connected linear group preserving \(D\) is \((\mathbb{C}^{*})^{3}\). Hence, \((\mathbb{C}^{3},D)\) is castling equivalent to the prehomogeneous vector space \(((\mathbb{C}^{*})^{3}\times SL(2,\mathbb{C}),\mathsf{Hom}\big{(}\mathbb{C}^{2},\mathbb{C}^{3}\big{)})\). This defines a linear free divisor \((\mathbb{C}^{6},D^{\prime})\), where \(D^{\prime}\) is the hypersurface cut out by the vanishing of \[f(u_{1},u_{2},v_{1},v_{2},w_{1},w_{2})=(u_{1}v_{2}-u_{2}v_{1})(v_{1}w_{2}-v_{2}w_{1})(w_{1}u_{2}-w_{2}u_{1}).\] By Proposition 4.3 we have an embedding \[\operatorname{Rep}(\Pi(\mathbb{C}^{3},D),H)\to\operatorname{Rep}(\Pi(\mathbb{C}^{6},D^{\prime}),H).\] One upshot of Proposition 4.3 is a method for constructing non-extendable logarithmic flat connections. Let \((V,D)\) be a linear free divisor with \(\dim(V)=n\geq 3\) and let \((V^{\prime},D^{\prime})\) be the castling equivalent free divisor constructed as in Proposition 4.3. A non-trivial homomorphism \(\psi:SL(n-1,\mathbb{C})\to H\) determines a representation of \(\Pi(V^{\prime},D^{\prime})\) as follows \[\phi:(\tilde{G}\times SL(n-1,\mathbb{C}))\ltimes V^{\prime}\to H,\qquad(g,s,v)\mapsto\psi(s).\] Since this has non-trivial residual \(SL(n-1,\mathbb{C})\) action, it is not in the image of \(F\).
On the other hand, by Theorem 4.1 it does determine a logarithmic flat connection \((E,\nabla)\) on \((V^{\times},D^{\times})\), which necessarily does not extend to \((V,D)\).
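To spell out one instance (an illustrative choice of ours, combining Example 4.5 with the construction above): take \((V,D)=(\mathbb{C}^{3},\{xyz=0\})\), so that \(n=3\), \(\tilde{G}=\mathbb{C}^{3}\) and \((V^{\prime},D^{\prime})\) is the six-dimensional linear free divisor of Example 4.5, and let \(\psi:SL(2,\mathbb{C})\to GL(2,\mathbb{C})\) be the defining representation. Then \[\phi:(\mathbb{C}^{3}\times SL(2,\mathbb{C}))\ltimes V^{\prime}\to GL(2,\mathbb{C}),\qquad(g,s,v)\mapsto s,\] has non-trivial residual \(SL(2,\mathbb{C})\) action, and the resulting rank \(2\) logarithmic flat connection on \(((V^{\prime})^{\times},(D^{\prime})^{\times})\) does not extend to \((\mathbb{C}^{6},D^{\prime})\).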
Let $X$ be a complex manifold and let $D$ be a hypersurface in $X$ with singular locus $D^s$. We study the problem of extending flat connections with logarithmic singularities along $D$ from $X \setminus D^s$ to all of $X$. When $D$ is a weighted homogeneous plane curve, we give a new proof of a theorem of Mebkhout. The proof uses the Jordan decomposition of logarithmic connections and the Grothendieck decomposition theorem for vector bundles introduced by Martens and Thaddeus. In higher dimensions, we point out a relationship between the extension problem and the castling equivalence of prehomogeneous vector spaces. In particular, the twisted fundamental groupoids of castling equivalent linear free divisors are shown to be birationally Morita equivalent, which gives rise to non-extendable flat connections.
2309.16341
Topological bulk and edge correlations of BCS condensate in a two-dimensional singlet-triplet spin pairing model
The condensate of the Bardeen-Cooper-Schrieffer (BCS) pair in the ground state, which may contain information on both topology and spin pairing, underpins the superconductivity of the system. In this paper, we study a singlet-triplet spin pairing model on a square lattice and investigate the consequences of the competition of on-site and nearest neighbor pairing parameters. We show that the ground state of the system has the form of the condensate of the BCS pair, and the topological transition is associated with the nonanalytic behavior of the pairing order parameters. A real space correlation function between opposite spin directions is introduced to characterize the topological phase of the many-body ground state. Numerical results demonstrate that this method works well in the presence of disordered perturbation, lattice defects, or irregular boundary conditions. The real space correlation function between two edges of the system is also discussed, which directly reflects the existence of topological edge modes in the many-body ground state.
E. S. Ma, K. L. Zhang, Z. Song
2023-09-28T11:00:33
http://arxiv.org/abs/2309.16341v1
Topological bulk and edge correlations of BCS condensate in a two-dimensional singlet-triplet spin pairing model ###### Abstract The condensate of the Bardeen-Cooper-Schrieffer (BCS) pair in the ground state, which may contain information on both topology and spin pairing, underpins the superconductivity of the system. In this paper, we study a singlet-triplet spin pairing model on a square lattice and investigate the consequences of the competition of on-site and nearest neighbor pairing parameters. We show that the ground state of the system has the form of the condensate of the BCS pair, and the topological transition is associated with the nonanalytic behavior of the pairing order parameters. A real space correlation function between opposite spin directions is introduced to characterize the topological phase of the many-body ground state. Numerical results demonstrate that this method works well in the presence of disordered perturbation, lattice defects, or irregular boundary conditions. The real space correlation function between two edges of the system is also discussed, which directly reflects the existence of topological edge modes in the many-body ground state. ## I Introduction The topological phase of matter has received much attention in recent decades due to its robust physical properties, which offer potential applications for novel devices and quantum information technology [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. From the perspective of topological band theory, these phases fall into two categories [17]: fully gapped topological phases, such as topological insulators and topological superconductors [10; 9; 18], and gapless topological phases, such as topological semimetals and nodal superconductors [12; 13; 19; 20; 21; 22; 23; 24; 25; 26]. The common characteristic of these topological matters is the existence of topologically protected edge modes. Unlike topological insulators (semimetals), the edge modes of topological (nodal) superconductors are neither particles nor holes but Bogoliubov quasiparticles, which provide a superconducting channel at the boundary. In terms of the classification of superconducting pairing by spin structure, the Cooper pairs may contain singlet or triplet pairing components, which is frequently discussed in the realm of superconductivity [27; 28; 29; 30; 31]. The existence of topologically protected edge modes can be predicted by the bulk topological invariants constructed from the bulk Hamiltonian in momentum space, which is referred to as bulk-boundary correspondence (BBC) [32; 33; 34; 35; 36] and is of fundamental importance in studies of the topological phase of matter. In recent years, more efforts have been made to develop real-space characterization methods for topological phases, for example, real-space topological markers constructed from projectors and position operators [37; 38; 39; 40] and approaches based on correlation functions [41; 42] or entanglement spectra [43; 44; 45]. The main advantage of these methods is that they are more relevant to real systems in experiments, i.e., for systems without translation symmetry, examples of which include systems with defects or disorder [37; 40]. The topological Anderson insulator predicted and discovered in recent years [46; 47; 48] is another well-known example of the role of real-space characterization methods. Most recently, a theoretical proposal suggests that some topological markers may be measured by real-space experiments [39].
In this paper, we study a mixed singlet-triplet spin pairing model on a square lattice, the Hamiltonian of which is quadratic and includes pairing terms, i.e., on-site pairing and nearest neighbor pairing between opposite spin directions. We investigate the consequences of the competition of different pairing parameters. We show that the ground state of the system has the form of the condensate of the Bardeen-Cooper-Schrieffer (BCS) [49] pair. To characterize the phase transition and the properties of the ground state, we introduce the pairing order parameter and find that the topological transition is associated with the nonanalytic behavior of the order parameter. Furthermore, we find that the phase factor in the ground state related to the system topology can be revealed by a real space correlation function in the opposite spin direction, which provides a real-space scheme for detecting the topological phase of a class of systems. Numerical results demonstrate that this method works well in the presence of disordered perturbation, random lattice defects, or when irregular boundary conditions are adopted. In addition, we compute the real space correlation function between two edges of the system, which directly reflects the existence of topological edge modes in the many-body ground state, verifying the BBC from the perspective of real space bulk and edge correlation functions. This paper is organized as follows. In Sec. II, we introduce the model, reveal the topological phase diagram, and show that the ground state has the form of the condensate of the BCS pair. In Sec. III, we investigate the real space bulk and edge correlation functions of the many-body ground state of the system. Finally, we summarize and discuss the results of the paper in Sec. IV. ## II Model and phase diagram First, we consider a mixed singlet-triplet spin pairing model defined on a square lattice, the Hamiltonian of which has the form \[H = \sum_{\mathbf{r}}\sum_{\mathbf{a}=\hat{x},\hat{y}}(\Delta_{+}c_{ \mathbf{r},\downarrow}c_{\mathbf{r}+\mathbf{a},\uparrow}+\Delta_{-}c_{\mathbf{ r}+\mathbf{a},\downarrow}c_{\mathbf{r},\uparrow}) \tag{1}\] \[+\Delta_{0}\sum_{\mathbf{r}}c_{\mathbf{r},\downarrow}c_{\mathbf{ r},\uparrow}+\text{H.c.},\] where \(\Delta_{0}\) and \(\Delta_{\pm}\) are real parameters for on-site and nearest neighbor pairings, respectively. The index \(\mathbf{r}=(m,n)\) denotes the lattice coordinate; \(\hat{x}\) and \(\hat{y}\) are unit vectors in the \(x\) and \(y\) directions. Different from most related works, the Hamiltonian in Eq. (1) only contains the pairing terms, and we are interested in the consequence of the competition between the on-site and nearest neighbor pairings. Employing periodic boundary conditions in both directions and applying the Fourier transformation \[c_{\mathbf{k},\sigma}=\sum_{\mathbf{r}}e^{i\mathbf{k}\cdot\mathbf{r}}c_{ \mathbf{r},\sigma}, \tag{2}\] we obtain the Hamiltonian in \(\mathbf{k}\) space \[H=\sum_{\mathbf{k}}C_{\mathbf{k}}^{\dagger}H_{\text{BdG}}\left(\mathbf{k} \right)C_{\mathbf{k}}, \tag{3}\] where the Nambu spinor is defined as \(C_{\mathbf{k}}^{\dagger}=\left(\begin{array}{cc}c_{\mathbf{k},\uparrow}^{ \dagger}&c_{-\mathbf{k},\downarrow}^{\dagger}&c_{\mathbf{k},\downarrow}^{ \dagger}&c_{-\mathbf{k},\uparrow}\end{array}\right)\).
The Bogoliubov-de-Gennes (BdG) representation of the Hamiltonian is a block diagonal matrix: \[H_{\text{BdG}}\left(\mathbf{k}\right)=\frac{1}{2}\left(\begin{array}{cc}H \left(\mathbf{k}\right)&\mathbf{0}\\ \mathbf{0}&-H\left(-\mathbf{k}\right)\end{array}\right), \tag{4}\] where \(H\left(\mathbf{k}\right)\) represents a pseudo spin Hamiltonian \(H\left(\mathbf{k}\right)=B_{x}(\mathbf{k})\sigma_{x}+B_{y}(\mathbf{k})\sigma _{y}\) in the effective magnetic field \[B_{x}(\mathbf{k}) = \left(\Delta_{+}+\Delta_{-}\right)\left(\cos k_{x}+\cos k_{y} \right)+\Delta_{0},\] \[B_{y}(\mathbf{k}) = \left(\Delta_{+}-\Delta_{-}\right)\left(\sin k_{x}+\sin k_{y} \right). \tag{5}\] In fact, Hamiltonian \(H\left(\mathbf{k}\right)\) can be related to a spinless Kitaev model by the unitary transformation \(c_{\mathbf{k},\uparrow}=(c_{\mathbf{k}}-c_{-\mathbf{k}}^{\dagger})/\sqrt{2}, c_{-\mathbf{k},\downarrow}^{\dagger}=(c_{\mathbf{k}}+c_{-\mathbf{k}}^{ \dagger})/\sqrt{2}\). In this sense, \(c_{\mathbf{k},\uparrow}\) and \(c_{-\mathbf{k},\downarrow}^{\dagger}\) are pseudo-spin operators. The real functions \(B_{x}(\mathbf{k})\) and \(B_{y}(\mathbf{k})\) are the coefficients of singlet and triplet pairing in momentum space. The Fermi statistics place constraints on the forms of functions \(B_{x}(\mathbf{k})\) and \(B_{y}(\mathbf{k})\). For example, we take the singlet term, \[\sum_{\mathbf{k}}B_{x}(\mathbf{k})(c_{\mathbf{k},\uparrow}^{ \dagger}c_{-\mathbf{k},\downarrow}^{\dagger}-c_{\mathbf{k},\downarrow}^{ \dagger}c_{-\mathbf{k},\uparrow}^{\dagger}) \tag{6}\] \[= \sum_{\mathbf{k}}B_{x}(-\mathbf{k})(-c_{\mathbf{k},\downarrow}^{ \dagger}c_{-\mathbf{k},\uparrow}^{\dagger}+c_{\mathbf{k},\uparrow}^{\dagger} c_{-\mathbf{k},\downarrow}^{\dagger}),\] so that \(B_{x}(\mathbf{k})=B_{x}(-\mathbf{k})\). Similarly, we have \(B_{y}(\mathbf{k})=-B_{y}(-\mathbf{k})\) for the triplet term. This constraint can also be given by the particle-hole symmetry of the BdG Hamiltonian, that is \(H_{\text{BdG}}\left(\mathbf{k}\right)=-\mathcal{C}H_{\text{BdG}}\left(- \mathbf{k}\right)\mathcal{C}^{-1}\), where \(\mathcal{C}=\sigma_{x}\otimes\sigma_{x}\mathcal{K}\) and \(\mathcal{K}\) is the complex-conjugation operator. The model also obeys time reversal and inversion symmetry, i.e., for the BdG Hamiltonian we have \(H_{\text{BdG}}\left(\mathbf{k}\right)=\mathcal{T}H_{\text{BdG}}\left(-\mathbf{ k}\right)\mathcal{T}^{-1}\) and \(H_{\text{BdG}}\left(\mathbf{k}\right)=\mathcal{P}H_{\text{BdG}}\left(-\mathbf{ k}\right)\mathcal{P}^{-1}\), with \(\mathcal{T}=\mathcal{K}\) and \(\mathcal{P}=\sigma_{0}\otimes\sigma_{x}\); \(\sigma_{0}\) is a \(2\times 2\) identity matrix. By diagonalizing the Hamiltonian \(H\) in Eq. (3), the ground state is obtained as \[\left|\text{G}\right\rangle=\prod_{\mathbf{k}}\frac{1+e^{i\phi_{\mathbf{k}}}c_ {-\mathbf{k}\downarrow}^{\dagger}c_{\mathbf{k}\uparrow}^{\dagger}}{\sqrt{2}} \left|0\right\rangle, \tag{7}\] where the angle is \[\phi_{\mathbf{k}}=\arg\left(B_{x}-iB_{y}\right). \tag{8}\] Note that the ground state can be rewritten in the form \[\left|\text{G}\right\rangle=\sum_{n=0}^{N^{2}}\frac{2^{-N^{2}/2}}{n!}\left(s^{+ }\right)^{n}\left|0\right\rangle. \tag{9}\] Unlike the conventional BCS wave function, the ground state in Eq. (9) describes the condensate of the BCS pair. Figure 1: Phase diagram in the \(\Delta_{0}\)-\(\Delta_{+}\) parameter plane for the model studied in this paper.
The black lines indicate the phase boundary that separates the topological trivial (gapped; gray region) and nontrivial (gapless; yellow region) phases identified by the winding number of the vector field \(\widehat{\mathbf{B}}\) or the order parameter \(O\) of the ground state. The dashed line represents the parameter \(\Delta_{+}=\Delta_{-}\), where the system is trivial. The other parameter is set as \(\Delta_{-}=0.2\). The three red dots correspond to the parameters of the systems taken in the numerical computations for Figs. 3 (a)-(c). The operators \[s^{+} = \left(s^{-}\right)^{\dagger}=\sum_{\mathbf{k}}e^{i\phi_{\mathbf{k}}} c^{\dagger}_{-\mathbf{k},\downarrow}c^{\dagger}_{\mathbf{k},\uparrow},\] \[s^{z} = \frac{1}{2}\sum_{\mathbf{k}}\left(c^{\dagger}_{\mathbf{k}, \uparrow}c_{\mathbf{k},\uparrow}+c^{\dagger}_{-\mathbf{k},\downarrow}c_{- \mathbf{k},\downarrow}-1\right), \tag{10}\] are pseudo-spin operators that satisfy the Lie algebra commutation relation \([s^{z},s^{\pm}]=\pm s^{\pm}\). We note that the angle \(\phi_{\mathbf{k}_{c}}\) is ill-defined at the zero point of \(\left|\mathbf{B}\right|\) with \[B_{x}(\mathbf{k}_{\mathrm{c}})=B_{y}(\mathbf{k}_{\mathrm{c}})=0, \tag{11}\] which corresponds to the topological defect of the vector field \(\widehat{\mathbf{B}}=(\cos\phi_{\mathbf{k}},\sin\phi_{\mathbf{k}})\) if the solutions of \(\mathbf{k}_{\mathrm{c}}=(k_{\mathrm{xc}},k_{\mathrm{yc}})\) are isolated points in the \(\mathbf{k}\)-plane. In this sense, the condensate of the collective BCS-pair state \(s^{+}\left|0\right\rangle\) is topologically nontrivial and is characterized by the vortex of field \(\widehat{\mathbf{B}}\). Obviously, such a ground state is a gapless state. In fact, when \(\Delta_{+}\neq\Delta_{-}\), we have \[k_{\mathrm{xc}}=-k_{\mathrm{yc}}=\pm\arccos\left[-\frac{\Delta_{0}}{2\left(\Delta_{+}+ \Delta_{-}\right)}\right], \tag{12}\] in the topological nontrivial region \(\left|\Delta_{0}\right|<2\left|\Delta_{+}+\Delta_{-}\right|\)\(\left(\Delta_{+}\neq\Delta_{-}\right)\). The two zero points \((k_{\mathrm{xc}},k_{\mathrm{yc}})\) and \((-k_{\mathrm{xc}},-k_{\mathrm{yc}})\) are Dirac points in momentum space, the topological nature of which is characterized by the winding number [50; 51; 12; 52] of the vortex in the vector field \(\widehat{\mathbf{B}}\). The combination of time reversal and inversion symmetry protects the Dirac points in the following sense: the diagonal term in \(H_{\mathrm{BdG}}(\mathbf{k})\) that opens a gap is forbidden by time reversal and inversion symmetry. The positions of the Dirac points only shift as the system parameters change, until the Dirac points merge and a gap opens when \(\left|\Delta_{0}\right|\geqslant 2\left|\Delta_{+}+\Delta_{-}\right|\). The phase diagram of the system is presented in Fig. 1. In the next section, we will show that the vector field \(\widehat{\mathbf{B}}\) can be extracted from the real space correlation function of the ground state, where the periodic boundary condition is no longer needed. To characterize the phase transition and the properties of the ground state, we introduce the following pairing order parameter \[O=\frac{1}{N^{2}}\sum_{\mathbf{k}}\left|\left\langle\mathrm{G}\right|c^{ \dagger}_{\mathbf{k},\uparrow}c^{\dagger}_{-\mathbf{k},\downarrow}+c_{- \mathbf{k},\downarrow}c_{\mathbf{k},\uparrow}\left|\mathrm{G}\right\rangle \right|.
\tag{13}\] Direct calculation shows that \[O=\frac{1}{N^{2}}\sum_{\mathbf{k}}\left|\cos\phi_{\mathbf{k}}\right|, \tag{14}\] which characterizes the pairing channel in \(\mathbf{k}\) space, and is related to the angle \(\phi_{\mathbf{k}}\) in the vector field \(\widehat{\mathbf{B}}\) that contains information on the system topology. In Fig. 2 (a), we plot the numerical results of the order parameter \(O\) in the \(\Delta_{0}\)-\(\Delta_{+}\) parameter plane. We can see that in the gapped phase, the \(\mathbf{k}\)-space pairing strength is stronger than that in the gapless phase. The absolute value of the gradient of the order parameter \(O\) presented in Fig. 2 (b) indicates that the topological phase transition is associated with the nonanalytic behavior of the order parameter \(O\). Figure 2: Numerical results of the order parameter. (a) The pairing order parameter \(O\) in Eq. (14) in the \(\Delta_{0}\)-\(\Delta_{+}\) parameter plane. (b) The corresponding absolute value of the gradient of the order parameter \(O\) in the \(\Delta_{0}\)-\(\Delta_{+}\) plane, which indicates the phase boundary. Other parameters are set as \(\Delta_{-}=0.2\) and \(N=100\). ## III Bulk-boundary correspondence by real space correlation In the previous section, we have shown that the order parameter \(O\) is related to the angle \(\phi_{\mathbf{k}}\) in the vector field \(\widehat{\mathbf{B}}\), which contains information on the system topology. Therefore, it is promising to extract the topological properties of the system from a specific correlation function, preferably the real space correlation function. To this end, we consider the following correlation function in real space \[\mathcal{C}_{\mathbf{r}}=\left\langle\mathrm{G}\right|c_{\mathbf{0},\uparrow}c_{ \mathbf{r},\downarrow}\left|\mathrm{G}\right\rangle, \tag{15}\] where the coordinate origin \(\mathbf{0}\) is placed in the center of the lattice when the open boundary condition is adopted. The correlation function \(\mathcal{C}_{\mathbf{r}}\) and the phase \(e^{i\phi_{\mathbf{k}}}\) in the ground state are related by the Fourier transformation \[e^{i\phi_{\mathbf{k}}}=2\sum_{\mathbf{r}}e^{i\mathbf{k}\cdot\mathbf{r}} \mathcal{C}_{\mathbf{r}}. \tag{16}\] Figure 4: Numerical results of the real space correlation function \(\mathcal{C}_{\mathbf{r}}\) and phase angle \(\phi_{\mathbf{k}}\) in the presence of (a) disordered perturbation, where the system parameters are nonuniform in space and each deviates by a uniformly distributed random real number within the interval \([-0.2,0.2]\); (b) random defects, where the spatial coordinates of 10 lattice defects are randomly taken; and (c) irregular boundary as shown in (c1). The system parameters are taken as \(\Delta_{+}=0.8\), \(\Delta_{-}=0.2\) and \(\Delta_{0}=1\). Figure 3: Plots of the numerical results of the phase angle \(\phi_{\mathbf{k}}\) computed from the real space correlation function \(\mathcal{C}_{\mathbf{r}}\) in Eq. (16). The direction of the arrow at different \(\mathbf{k}\) represents the phase angle \(\phi_{\mathbf{k}}\). The system parameters taken are marked by the red dots in the phase diagram of Fig. 1: (a) topological nontrivial case \(\Delta_{+}=0.8\); (b) critical case \(\Delta_{+}=0.3\); and (c) topological trivial case \(\Delta_{+}=0.1\). Other parameters are taken as \(N=20\), \(\Delta_{0}=1\) and \(\Delta_{-}=0.2\). Open boundary conditions in both directions of the square lattice are taken.
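As a quick numerical sanity check on Eq. (16) (a self-contained sketch of our own, not the authors' code; the inverse relation it uses is derived as Eq. (17) below), the following Python snippet builds \(e^{i\phi_{\mathbf{k}}}\) from Eqs. (5) and (8) at the parameters of Fig. 3 (a), synthesizes \(\mathcal{C}_{\mathbf{r}}\) on a periodic \(N\times N\) lattice, and recovers the phase exactly by a discrete Fourier transform.

```python
import numpy as np

# Sketch: verify the transform pair relating C_r and e^{i phi_k} on the torus.
N = 20
d_plus, d_minus, d0 = 0.8, 0.2, 1.0            # parameters of Fig. 3(a)
k = 2.0 * np.pi * np.arange(N) / N
kx, ky = np.meshgrid(k, k, indexing="ij")

# Effective field of Eq. (5) and the ground-state phase of Eq. (8)
bx = (d_plus + d_minus) * (np.cos(kx) + np.cos(ky)) + d0
by = (d_plus - d_minus) * (np.sin(kx) + np.sin(ky))
phase = (bx - 1j * by) / np.abs(bx - 1j * by)  # e^{i phi_k} = (B_x - iB_y)/|B|

# C_r = (1/2N^2) sum_k e^{i phi_k} e^{-ik.r}; numpy's fft2 uses the e^{-i} kernel
C_r = np.fft.fft2(phase) / (2.0 * N**2)

# Eq. (16): e^{i phi_k} = 2 sum_r e^{ik.r} C_r, i.e. 2 N^2 times numpy's ifft2
phase_rec = 2.0 * N**2 * np.fft.ifft2(C_r)
assert np.allclose(phase_rec, phase)           # exact on the periodic lattice
```

In practice one would obtain \(\mathcal{C}_{\mathbf{r}}\) from the many-body ground state itself (as in Appendix A) rather than from the known phase; the snippet only illustrates that the Fourier pair is consistent.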
In fact, from the definition of the correlation function and taking the form of the ground state \(\left|\mathrm{G}\right\rangle\) in Eq. (7) into account, we have \[\mathcal{C}_{\mathbf{r}} = \frac{1}{N^{2}}\sum_{\mathbf{k},\mathbf{k}^{\prime}}e^{i\mathbf{k} ^{\prime}\cdot\mathbf{r}}\left\langle\mathrm{G}\right|c_{\mathbf{k},\uparrow}c _{\mathbf{k}^{\prime},\downarrow}\left|\mathrm{G}\right\rangle \tag{17}\] \[= \frac{1}{N^{2}}\sum_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{r}} \left\langle\mathrm{G}\right|c_{\mathbf{k},\uparrow}c_{-\mathbf{k},\downarrow} \left|\mathrm{G}\right\rangle\] \[= \frac{1}{2N^{2}}\sum_{\mathbf{k}}e^{i\phi_{\mathbf{k}}}e^{-i \mathbf{k}\cdot\mathbf{r}}.\] Thus, we have the relation in Eq. (16) by the Fourier transformation. This inspires us to employ the real space correlation function \(\mathcal{C}_{\mathbf{r}}\) for detecting the topological properties of the ground state. Different from the topological invariant defined in momentum space, this method is applicable to systems without translation symmetry, including systems with disorder and defects. In the following, we present the numerical results to demonstrate our conclusions. Numerically, the real space correlation function for the ground state of a quadratic Hamiltonian can be computed by the method presented in Appendix A. First, we compute the real space correlation function \(\mathcal{C}_{\mathbf{r}}\) defined in Eq. (15). The open boundary condition in both directions of the lattice is taken. Although the derivation of the relation between \(\mathcal{C}_{\mathbf{r}}\) and the phase \(e^{i\phi_{\mathbf{k}}}\) in Eq. (16) requires periodic boundary conditions, it is expected that if \(\mathcal{C}_{\mathbf{r}}\) decays rapidly with the distance between \(\mathbf{0}\) and \(\mathbf{r}\) (which is examined by the subsequent numerical calculations), then the phase \(e^{i\phi_{\mathbf{k}}}\) is almost unaffected by the boundary conditions. Then, we are allowed to compute the phase \(e^{i\phi_{\mathbf{k}}}\) through Eq. (16) for each \(\mathbf{k}\). We present the numerical results in Fig. 3, in which the phase angle \(\phi_{\mathbf{k}}\) is denoted by the direction of the arrow at \(\mathbf{k}\). The results indicate that the phase angle can be correctly obtained from the real space correlation function \(\mathcal{C}_{\mathbf{r}}\): we can see that two vortices emerge in the topological nontrivial case [Fig. 3 (a)] and then merge and vanish in the critical and topological trivial cases [Figs. 3 (b) and (c)] when the system parameter \(\Delta_{+}\) varies. Furthermore, numerical simulations show that the topological feature is robust in the presence of (a) disordered perturbation, (b) random defects and (c) irregular boundaries. In Fig. 4, we present the lattice geometries, the numerical results of the real space correlation function \(\mathcal{C}_{\mathbf{r}}\), and the phase angle \(\phi_{\mathbf{k}}\) for these three cases. In Fig. 4 (a), the system parameters \(\Delta_{+}\), \(\Delta_{-}\) and \(\Delta_{0}\) are nonuniform in space, and each deviates by a uniformly distributed random real number within the interval \([-0.2,0.2]\). The result in Fig. 4 (a1) indicates that the correlation function \(\mathcal{C}_{\mathbf{r}}\) decays rapidly with the distance between \(\mathbf{0}\) and \(\mathbf{r}\). In comparison with Fig. 3 (a), the result in Fig. 4 (a2) shows that the pattern of the vortices is robust against disordered perturbation.
Fig. 4 (b) shows the numerical results for the lattice with random defects, where the spatial coordinates of 10 lattice defects are randomly taken. Fig. 4 (c) shows the numerical results for the lattice with irregular boundaries. We can see that the signatures of the vortices are also robust for these two cases. Now, we turn to the investigation of the relation between the bulk topology and edge correlation. It can be shown that in the topological nontrivial phase, the Majorana zero modes appear at the boundaries of the system (see Appendix B). The zero modes may contribute to the correlation function between two edges of the system [54; 53], which is one of the signatures of the system topology. To verify this point for our model, we introduce the following edge correlation function: \[\mathcal{C}_{j}^{\mathrm{edge}}=\left\langle\mathrm{G}\right|c_{(1,1),\uparrow }c_{(N,j),\downarrow}\left|\mathrm{G}\right\rangle, \tag{18}\] where \(\left|\mathrm{G}\right\rangle\) is the ground state of the system under cylindrical boundary conditions; \((1,1)\) and \((N,j)\) are the lattice coordinates of the two ends of the system, and \(j\) is the lattice coordinate along one of the edges of the lattice cylinder. In Fig. 5, we present the numerical results of the edge correlation functions for the systems in different phases with different parameter \(\Delta_{+}\). The results indicate that the edge correlation function is nonzero in the topological nontrivial phase but vanishes in the trivial phase. Therefore, we conclude that the edge correlation function can reflect the phase diagram in Fig. 1. The above numerical results of the bulk correlation function and the edge correlation function in different phases verify the BBC from another perspective, in contrast with the relation between the topological invariant and single-particle edge mode. Figure 5: Numerical results of the edge correlation function defined in Eq. (18) for the topological nontrivial phase \(\Delta_{+}>0.3\), critical point \(\Delta_{+}=0.3\) and topological trivial phase \(\Delta_{+}=0.1\). Other system parameters are set as \(N=30\), \(\Delta_{0}=1\) and \(\Delta_{-}=0.2\). The cylindrical boundary condition for the square lattice is taken. ## IV Summary and discussion In summary, we have investigated a mixed singlet-triplet spin pairing model on a square lattice. The ground state of the system has the form of the condensate of BCS pairs, and in the gapless phase, topological edge modes emerge when the open boundary condition is adopted. The topological transition is associated with the nonanalytic behavior of the order parameter. Furthermore, we find that the phase factor in the ground state related to the system topology can be revealed by a real space correlation function. Numerical results demonstrate that this method works well in the presence of disordered perturbation, lattice defects, or when irregular boundary conditions are adopted. In addition, the results of the real space correlation function between two edges of the system directly reflect the existence of topological edge modes, verifying the BBC from the perspective of real-space bulk and edge correlations. The conclusions in this paper, including the results of numerical simulations, reveal the consequence of the competition between the on-site and nearest neighbor pairings in a quadratic Hamiltonian and provide another real-space scheme for diagnosing the system topology. ###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (under Grant No. 12374461). ## Appendix A Real space correlation In this appendix, we present the method for computing the real space correlation function of the ground state of the quadratic Hamiltonian. We follow the method used in Ref. [55]. For simplicity, the system parameters are set to be uniform in real space, and the geometry is taken as an \(N\times N\) square lattice. Other situations with disordered perturbations, random defects and irregular boundaries can be directly generalized. Under the basis \[C^{\dagger} = \left(c^{\dagger}_{{\bf r}_{1},\downarrow}\cdots c^{\dagger}_{{ \bf r}_{N^{2}},\downarrow},c^{\dagger}_{{\bf r}_{1},\uparrow},\cdots c^{ \dagger}_{{\bf r}_{N^{2}},\uparrow},\right. \tag{10}\] \[\left.c_{{\bf r}_{1},\downarrow}\cdots c_{{\bf r}_{N^{2}}, \downarrow},c_{{\bf r}_{1},\uparrow},\cdots c_{{\bf r}_{N^{2}},\uparrow} \right),\] the real space Hamiltonian in Eq. (1) can be written as \[H=C^{\dagger}\widetilde{H}C, \tag{11}\] where \(\widetilde{H}\) has the form \[\widetilde{H}=\left(\begin{array}{cc}{\bf 0}&M\\ -M&{\bf 0}\end{array}\right), \tag{12}\] and \(M\) is a \(2N^{2}\times 2N^{2}\) antisymmetric matrix. The nonzero matrix elements are given in the following \[M_{iN+j+N^{2},(i-1)N+j} = \frac{\Delta_{+}}{2},\] \[M_{(i-1)N+j+1+N^{2},(i-1)N+j} = \frac{\Delta_{+}}{2},\] \[M_{(i-1)N+j+N^{2},iN+j} = \frac{\Delta_{-}}{2},\] \[M_{(i-1)N+j+N^{2},(i-1)N+j+1} = \frac{\Delta_{-}}{2},\] \[M_{(i-1)N+j+N^{2},(i-1)N+j} = \frac{\Delta_{0}}{2}, \tag{13}\] where \(i,j\in[1,N-1]\) and each corresponding transpose matrix element has a negative sign difference. By diagonalizing \(\widetilde{H}\), we have \[H = C^{\dagger}S{\cal E}S^{T}C \tag{14}\] \[= \Phi^{\dagger}{\cal E}\Phi\] \[= \sum_{m=1}^{N^{2}}\sum_{\sigma=\uparrow,\downarrow}\varepsilon_{ m,\sigma}\left(\gamma^{\dagger}_{m,\sigma}\gamma_{m,\sigma}-\gamma_{m,\sigma} \gamma^{\dagger}_{m,\sigma}\right),\] where \(\varepsilon_{m,\sigma}\geqslant 0\) and \(S\) is a real orthogonal matrix, which has the following form \[S=\left(\begin{array}{cc}\varphi&\chi\\ \chi&\varphi\end{array}\right), \tag{15}\] due to the particle-hole symmetry of the BdG Hamiltonian. The columns of the matrix \(S\) are formed by the eigenvectors of \(\widetilde{H}\) and \(\varphi\) and \(\chi\) are both \(2N^{2}\times 2N^{2}\) matrices. The diagonal matrix \({\cal E}\) has the form \[{\cal E} = S^{T}\widetilde{H}S \tag{16}\] \[= {\rm diag}\left(\varepsilon_{1,\downarrow},\cdots,\varepsilon_{ N^{2},\downarrow},\varepsilon_{1,\uparrow},\cdots,\varepsilon_{N^{2}, \uparrow},\right.\] \[\left.-\varepsilon_{1,\downarrow},\cdots,-\varepsilon_{N^{2}, \downarrow},-\varepsilon_{1,\uparrow},\cdots,-\varepsilon_{N^{2},\uparrow} \right),\] and the new basis is \[\Phi^{\dagger} = C^{\dagger}S \tag{17}\] \[= \left(\gamma^{\dagger}_{1,\downarrow},\cdots,\gamma^{\dagger}_{N ^{2},\downarrow},\gamma^{\dagger}_{1,\uparrow},\cdots,\gamma^{\dagger}_{N^{2 },\uparrow},\right.\] \[\left.\gamma_{1,\downarrow},\cdots,\gamma_{N^{2},\downarrow}, \gamma_{1,\uparrow},\cdots,\gamma_{N^{2},\uparrow}\right),\] where \(\left\{\gamma^{\dagger}_{m}\right\}\) are fermionic operators, satisfying \[\left\{\gamma_{m^{\prime},\sigma^{\prime}},\gamma^{\dagger}_{m,\sigma}\right\} =\delta_{m^{\prime},m}\delta_{\sigma^{\prime},\sigma},\left\{\gamma_{m^{\prime},\sigma^{\prime}},\gamma_{m,\sigma}\right\}=0. 
\tag{18}\] Then the real space fermionic operators can be expressed as \[c_{{\bf r}_{i},\downarrow} = \sum_{m=1}^{N^{2}}\left[\varphi_{i,m}\gamma_{m,\downarrow}+ \varphi_{i,m+N^{2}}\gamma_{m,\uparrow}\right. \tag{19}\] \[\left.+\chi_{i,m}\gamma^{\dagger}_{m,\downarrow}+\chi_{i,m+N^{2}} \gamma^{\dagger}_{m,\uparrow}\right],\] and \[c_{\mathbf{r}_{i},\uparrow} = \sum_{m=1}^{N^{2}}\left[\varphi_{i+N^{2},m}\gamma_{m,\downarrow}+ \varphi_{i+N^{2},m+N^{2}}\gamma_{m,\uparrow}\right. \tag{11}\] \[\left.+\chi_{i+N^{2},m}\gamma_{m,\downarrow}^{\dagger}+\chi_{i+N^ {2},m+N^{2}}\gamma_{m,\uparrow}^{\dagger}\right].\] The ground state of the system has the form \[\left|G\right>=\prod_{m=1}^{N^{2}}\prod_{\sigma=\uparrow,\downarrow}\gamma_{m, \sigma}\left|0\right>. \tag{12}\] Taking the anticommutation relation of \(\left\{\gamma_{m}^{\dagger}\right\}\) in Eq. (10) and the form of the ground state \(\left|G\right>\) in Eq. (12) into account, the real space correlation function can be computed as \[\left<G\right|c_{\mathbf{r}_{j},\uparrow}c_{\mathbf{r}_{i}, \downarrow}\left|G\right> \tag{13}\] \[= \sum_{m=1,n=1}^{N^{2}}\left<\left[\varphi_{j+N^{2},m}\gamma_{m, \downarrow}+\varphi_{j+N^{2},m+N^{2}}\gamma_{m,\uparrow}\right]\right.\] \[\times\left[\chi_{i,n}\gamma_{n,\downarrow}^{\dagger}+\chi_{i,n+ N^{2}}\gamma_{n,\uparrow}^{\dagger}\right]\right>\] \[= \sum_{m=1}^{2N^{2}}\varphi_{j+N^{2},m}\chi_{i,m}\] \[= \left(\varphi\chi^{T}\right)_{j+N^{2},i},\] where \(\varphi\) and \(\chi\) defined in Eq. (11) are computed from the exact diagonalization of the Hamiltonian matrix \(\widetilde{H}\) in Eq. (13). ## Appendix B Majorana lattice and edge modes To gain intuition on the edge modes, we introduce the Majorana fermion operators \(a_{\mathbf{r},\sigma}=c_{\mathbf{r},\sigma}^{\dagger}+c_{\mathbf{r},\sigma}\), \(b_{\mathbf{r},\sigma}=-i(c_{\mathbf{r},\sigma}^{\dagger}-c_{\mathbf{r},\sigma })\), which satisfy the anticommutation relations \(\{a_{\mathbf{r},\sigma},a_{\mathbf{r}^{\prime},\sigma^{\prime}}\}=2\delta_{ \mathbf{r},\mathbf{r}^{\prime}}\delta_{\sigma,\sigma^{\prime}}\), \(\{b_{\mathbf{r},\sigma},b_{\mathbf{r}^{\prime},\sigma^{\prime}}\}=2\delta_{ \mathbf{r},\mathbf{r}^{\prime}}\delta_{\sigma,\sigma^{\prime}}\) and \(\{a_{\mathbf{r},\sigma},b_{\mathbf{r}^{\prime},\sigma^{\prime}}\}=0\). The Majorana representation of the real space Hamiltonian in Eq. (1) is \[H = \sum_{\mathbf{r}}\sum_{\mathbf{a}=\hat{x},\hat{y}}\Upsilon_{ \mathbf{r}}^{\dagger}T_{\mathrm{NN}}\Upsilon_{\mathbf{r}+\mathbf{a}}+\mathrm{H.c.} \tag{14}\] \[+\sum_{\mathbf{r}}\Upsilon_{\mathbf{r}}^{\dagger}T_{\mathrm{O}} \Upsilon_{\mathbf{r}},\] where \(\Upsilon_{\mathbf{r}}^{\dagger}=(a_{\mathbf{r},\uparrow}\quad a_{\mathbf{r}, \downarrow}\quad b_{\mathbf{r},\uparrow}\quad b_{\mathbf{r},\downarrow})\), and the coefficient matrices for the nearest neighbor and on-site pairing terms are \[T_{\mathrm{NN}}=\frac{1}{4}i\Delta_{-}\left(\sigma_{x}\otimes\sigma_{+}\right) -\frac{1}{4}i\Delta_{+}\left(\sigma_{x}\otimes\sigma_{-}\right), \tag{15}\] and \[T_{\mathrm{O}}=-\frac{1}{4}\Delta_{0}\left(\sigma_{x}\otimes\sigma_{y}\right), \tag{16}\] respectively, with \(\sigma_{\pm}=\left(\sigma_{x}\pm\sigma_{y}\right)/2\). Next, we consider the \(N\times N\) square lattice with cylindrical boundary conditions: open boundary conditions in the \(x\) direction and periodic boundary conditions in the \(y\) direction.
Then, take the following Fourier transformations for the Majorana fermion operators: \[\left(\begin{array}{c}a_{m,k_{y},\sigma}\\ b_{m,k_{y},\sigma}\end{array}\right)=\frac{1}{\sqrt{N}}\sum_{n}e^{-ik_{y}n} \left(\begin{array}{c}a_{m,n,\sigma}\\ b_{m,n,\sigma}\end{array}\right), \tag{17}\] where \(k_{y}=2\pi l/N\), with \(l=0,1,...,N-1\). Note that in general, \(a_{m,k_{y},\sigma}\) and \(b_{m,k_{y},\sigma}\) are not Majorana fermion operators, except at \(k_{y}=0\) and \(\pi\); we refer to such operators as auxiliary operators. Then, the Hamiltonian can be written as \(H=\sum_{k_{y}}H_{k_{y}}\), where \[H_{k_{y}} = \sum_{m}\Upsilon_{m,k_{y}}^{\dagger}T_{\mathrm{NN}}^{k_{y}}\Upsilon _{m+1,k_{y}}+\mathrm{H.c.} \tag{18}\] \[+\sum_{m}\Upsilon_{m,k_{y}}^{\dagger}T_{\mathrm{O}}^{k_{y}}\Upsilon _{m,k_{y}},\] with \(\Upsilon_{m,k_{y}}^{\dagger}=\left(a_{m,k_{y}},\uparrow\quad a_{m,k_{y},\downarrow} \quad b_{m,k_{y},\uparrow}\quad b_{m,k_{y},\downarrow}\right)\), and \[T_{\rm NN}^{k_{y}} = T_{\rm NN},\] \[T_{\rm O}^{k_{y}} = i\eta_{k_{y}}^{*}\left(\sigma_{x}\otimes\sigma_{+}\right)-i\eta_{ k_{y}}\left(\sigma_{x}\otimes\sigma_{-}\right),\] \[\eta_{k_{y}} = \frac{1}{4}(\Delta_{+}e^{-ik_{y}}+\Delta_{-}e^{ik_{y}}+\Delta_{0}). \tag{100}\] We note that for each \(k_{y}\), the Hamiltonian \(H_{k_{y}}\) represents a lattice of ladders about auxiliary operators \(a_{m,k_{y},\sigma}\) and \(b_{m,k_{y},\sigma}\). In Fig. 11, we show the numerical results of the single-particle spectra of \(H_{k_{y}}\) for three sets of typical parameters. We can see the existence of the flat-band zero modes in the topological nontrivial case in Fig. 11 (a). Actually, in the large \(N\) limit, the Hamiltonian \(H_{k_{y}}\) is expected to possess zero energy edge modes, which can be determined by the following matrix equation \[(T_{\rm NN}^{k_{y}})^{\dagger}\Psi_{m-1,k_{y}}+T_{\rm O}^{k_{y}}\Psi_{m,k_{y}}+ T_{\rm NN}^{k_{y}}\Psi_{m+1,k_{y}}=0, \tag{101}\] where \(m=1,2,...N\), and \(\Psi_{m,k_{y}}\) is a four-dimensional vector under the basis of \(\Upsilon_{m,k_{y}}\). The boundary condition is \[\Psi_{0,k_{y}}=0,\Psi_{N+1,k_{y}}=0. \tag{102}\] There are four zero energy edge modes when the winding number [50; 51; 12; 13] of the pseudo spin Hamiltonians \(H\left(\mathbf{k}\right)\) and \(-H\left(-\mathbf{k}\right)\) in Eq. (4) \[\mathcal{W}_{\pm}(k_{y})=\frac{1}{2\pi i}\int_{-\pi}^{\pi}dk_{x}\partial_{k_{ x}}\ln[\pm g(\pm\mathbf{k})], \tag{103}\] are nonzero, where \(g(\mathbf{k})=B_{x}(\mathbf{k})+iB_{y}(\mathbf{k})\). It can be checked that the condition for \(\mathcal{W}_{\pm}(k_{y})\) to be nonzero is \(|p_{\pm}|<1\) and \(|q_{\pm}|<1\), where \[p_{\pm} = -\frac{2\eta_{k_{y}}\pm\sqrt{4\eta_{k_{y}}^{2}-\Delta_{+}\Delta_{ -}}}{\Delta_{+}},\] \[q_{\pm} = -\frac{2\eta_{k_{y}}^{*}\pm\sqrt{4(\eta_{k_{y}}^{*})^{2}-\Delta_ {+}\Delta_{-}}}{\Delta_{-}}. \tag{104}\] Under this condition, the zero modes are determined to be \[\gamma_{k_{y},\uparrow} = A_{\uparrow}\sum_{j=1}^{N}\left[\left(p_{+}^{j}-p_{-}^{j}\right)a _{j,k_{y},\uparrow}\right. \tag{105}\] \[\left.+i\left(p_{+}^{N-j+1}-p_{-}^{N-j+1}\right)b_{j,k_{y}, \uparrow}\right],\] \[\gamma_{k_{y},\downarrow} = A_{\downarrow}\sum_{j=1}^{N}\left[\left(q_{+}^{j}-q_{-}^{j}\right)a _{j,k_{y},\downarrow}\right.\] (106) \[\left.+i\left(q_{+}^{N-j+1}-q_{-}^{N-j+1}\right)b_{j,k_{y}, \downarrow}\right],\] and their corresponding Hermitian conjugate \(\gamma_{k_{y},\sigma}^{\dagger}\), in which \(A_{\sigma}\) is a normalization constant. 
It can be checked that the above edge zero mode operators are fermionic operators, since they satisfy the anticommutation relations \[\{\gamma_{k_{y},\sigma},\gamma_{k_{y}^{\prime},\sigma^{\prime}}^{\dagger}\} = \delta_{k_{y},k_{y}^{\prime}}\delta_{\sigma,\sigma^{\prime}},\qquad\{\gamma_{k_{y},\sigma},\gamma_{k_{y}^{\prime},\sigma^{\prime}}\} = 0. \tag{107}\]
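The procedure of Appendix A can be condensed into a short numerical routine. Below is a minimal sketch in Python/NumPy, assuming the antisymmetric pairing matrix \(M\) of Eq. (13) has already been built; note that the \(\varphi\)/\(\chi\) block structure of \(S\) holds up to gauge freedom within degenerate subspaces.

```python
import numpy as np

def ground_state_correlation(M: np.ndarray) -> np.ndarray:
    """Diagonalize H = [[0, M], [-M, 0]] (real symmetric when M^T = -M),
    collect the eigenvectors with non-negative energy into the blocks
    (phi, chi) of the orthogonal matrix S, and return phi @ chi.T, whose
    (j + N^2, i) entry is the correlator <G| c_{r_j,up} c_{r_i,down} |G>."""
    n = M.shape[0]                    # n = 2 N^2
    H = np.block([[np.zeros_like(M), M],
                  [-M, np.zeros_like(M)]])
    eps, S = np.linalg.eigh(H)        # eigenvalues in ascending order
    pos = np.argsort(-eps)[:n]        # columns with eps >= 0
    phi, chi = S[:n, pos], S[n:, pos]
    return phi @ chi.T

# Toy check with a random antisymmetric M (a stand-in for Eq. (13), N = 2):
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
corr = ground_state_correlation(A - A.T)
```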
The ground state is a condensate of BCS pairs; this condensate contains the information of both the topology and the spin pairing, and its supercurrent nature is suggested. In this paper, we study a singlet-triplet spin-pairing model on the square lattice and investigate the consequences of the competition between the on-site and nearest-neighbor pairing parameters. We regard the ground state of this system as a condensate of BCS pairs, and the topological transition is associated with non-analytic behavior of the pairing. A spatial correlation function of symmetric spin directions is introduced to characterize the topological phase of the many-body ground state. Numerical results show that this method works well even in the presence of disordered perturbations, lattice defects, or irregular boundary conditions. The spatial correlation function across the boundary between two systems is also discussed, which, for the many-body ground state,
2309.09261
Leveraging Large Language Models for Sequential Recommendation
Sequential recommendation problems have received increasing attention in research during the past few years, leading to the inception of a large variety of algorithmic approaches. In this work, we explore how large language models (LLMs), which are nowadays introducing disruptive effects in many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches to leverage the power of LLMs in different ways. Our results from experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that leverages LLM embeddings for producing recommendations, can provide competitive performance by highlighting semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.
Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, Marios Fragkoulis
2023-09-17T12:53:53
http://arxiv.org/abs/2309.09261v1
# Leveraging Large Language Models for Sequential Recommendation ###### Abstract. Sequential recommendation problems have received increasing attention in research during the past few years, leading to the inception of a large variety of algorithmic approaches. In this work, we explore how large language models (LLMs), which are nowadays introducing disruptive effects in many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches to leverage the power of LLMs in different ways. Our results from experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that leverages LLM embeddings for producing recommendations, can provide competitive performance by highlighting semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.1 Footnote 1: [https://github.com/dh-r/LLM-Sequential-Recommendation](https://github.com/dh-r/LLM-Sequential-Recommendation) **ACM Reference Format:** Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, and Marios Fragkoulis. 2023. Leveraging Large Language Models for Sequential Recommendation. In _Seventeenth ACM Conference on Recommender Systems (RecSys '23), September 18-22, 2023, Singapore, Singapore._ ACM, New York, NY, USA, 9 pages. [https://doi.org/10.1145/3604915.3610639](https://doi.org/10.1145/3604915.3610639) ## 1. Introduction Sequential recommendation problems have received increased interest recently (Sutton et al., 2017; Wang et al., 2018). In contrast to the traditional, sequence-agnostic matrix-completion setup (Sutton et al., 2017), the problem in sequential recommendation is to predict the next user interest or action, given a sequence of past user interactions. Practical applications of sequential recommendation include next-purchase prediction, next-track music recommendation, or next Point-of-Interest suggestions for tourism. Due to their high practical relevance, a multitude of algorithmic approaches have been proposed in the past few years (Groff et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), including approaches that utilize side information about the items, such as an item's category (Sutton et al., 2017; Wang et al., 2018). From a technical perspective, the sequential recommendation problem shares similarities with the next word prediction problem (Groff et al., 2018; Wang et al., 2018). Under this light, we can observe a parallel between research in Natural Language Processing (NLP) and sequential recommendation, where novel recommendation models are inspired by NLP models (Chen et al., 2018). GRU4Rec (Groff et al., 2018) adopted the Gated Recurrent Unit (GRU) mechanism from (Chen et al., 2018), SASRec (Wang et al., 2018) used the transformer architecture from (Wang et al., 2018), and BERT4Rec[35] adopted BERT[7]. The influence of NLP research to sequential recommendation models extends naturally to Large Language Models (LLMs). LLMs, in particular ones based on Generative Pretrained Transformers [32], are exhibiting disruptive effects in various AI-based applications with their semantically rich and meaningful responses. 
However, limited research exists so far on leveraging the inherent semantic information of LLMs, which the abovementioned approaches lack, for sequential recommendation problems. A number of recent works in fact started to explore the potential of relying on LLMs for recommendation tasks; see [27; 40] for recent surveys. Here, we extend this line of research for sequential recommendation problems, providing the following contributions and insights. * We devise three orthogonal methods of leveraging LLMs for sequential recommendation. In our first approach (LLMSeqSim), we retrieve a semantically-rich embedding from an existing LLM (from OpenAI) for each item in a session. We then compute an aggregate session embedding to recommend catalog products with a similar embedding. In the second approach (LLMSeqPrompt), we fine-tune an LLM with dataset-specific information in the form of prompt-completion pairs and ask the model to produce next item recommendations for test prompts. Finally, our third approach (LLM2BERT4Rec) consists of initializing existing sequential models with item embeddings obtained from an LLM. * Experiments on two datasets, including a real-world dataset from Delivery Hero, reveal that initializing a sequential model with LLM embeddings is particularly effective: applying it to the state-of-the-art model BERT4Rec improves accuracy in terms of NDCG by 15-20%, making it the best-performing model in our experiments. * Finally, we find that in certain applications simply using LLM embeddings to find suitable items for a given session (LLMSeqSim) can lead to state-of-the-art performance. ## 2. Background & Related Work The recent developments in LLMs have taken the world by surprise. Models like OpenAI GPT [4], Google BERT [7], and Facebook LLaMA [36], which employ deep transformer architectures, demonstrate how innovations in NLP can reshape mainstream online activities, such as search, shopping, and customer care. Inevitably, research in recommender systems is significantly impacted by the developments in the area of LLMs as well. According to recent surveys [27; 40], LLMs are mainly utilized for recommendation problems in two ways: by providing embeddings that can be used to initialize existing recommendation models [29; 39; 43], and by producing recommendations leveraging their inherent knowledge encoding [2; 13; 22]. LLMs as recommendation models can provide recommendations given _a)_ only a task specification (zero-shot), _b)_ a few examples given inline to the prompt of a task (few-shot), or _c)_ after fine-tuning the model's weights for a task given a set of training examples [4]. This incremental training process deviates from typical recommendation models, which have to be trained from zero on domain data. In fact, LLMs show early indications of adaptability to different recommendation domains with modest fine-tuning [15; 16]. Finally, LLMs have been applied in various recommendation tasks, such as rating prediction [25], item generation [26], and reranking [17] across domains [29; 39]. In this work we explore the potential of using LLMs for sequential recommendation problems [20]. In short, in sequential recommendation problems, we consider as input a sequence of user interactions \(S^{u}=(S^{u}_{1},S^{u}_{2},...,S^{u}_{n})\), for user \(u\), where \(n\) is the length of the sequence and \(S^{u}_{i}\) are individual items. The aim is to predict the next interaction of the given sequence.
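To make the input and output of this task concrete, here is a minimal sketch of the formulation, assuming the leave-one-out split described later in Section 4.1; the type alias and function name are illustrative.

```python
from typing import List, Tuple

Session = List[str]  # ordered item IDs of one user session, oldest first

def leave_one_out(session: Session) -> Tuple[Session, str]:
    """Split a session into (prompt, ground truth): all but the last
    interaction form the model input, the last one is the target S^u_n."""
    assert len(session) >= 2, "need at least one input item and a target"
    return session[:-1], session[-1]

prompt, target = leave_one_out(["lipstick", "mascara", "makeup remover"])
```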
Besides the recent sequential recommendation models mentioned in the introduction [14; 21; 35], in earlier works, the sequential recommendation problem has been modelled as a Markov Chain [9] or a Markov Decision Process [34]. Neighborhood-based approaches, such as SKNN [19], have also been proposed. Early research work regarding LLMs for sequential recommendation problems has shown mixed results (Kang et al., 2018; Chen et al., 2019; Chen et al., 2020; Li et al., 2021; Li et al., 2021; Li et al., 2021). The very recent VQ-Rec model (Li et al., 2021) employs a transformer architecture and applies a novel representation scheme to embeddings retrieved from BERT in order to adapt to new domains. VQ-Rec outperforms a number of sequential recommendation models across datasets of different domains, and it has been shown that SASRec with LLM embeddings is better than the original SASRec method for half of the datasets representing different domains. Finally, in an upcoming work (Li et al., 2021), SASRec with LLM embeddings is shown to improve over SASRec. The recent approaches presented in (Li et al., 2021) and (Li et al., 2021) differ from our work in particular in terms of the goals they pursue. VQ-Rec (Li et al., 2021) targets cross-domain recommendations with a novel item representation scheme, while (Li et al., 2021) evaluates whether recommendation models leveraging different modalities perform better than existing recommendation models that rely on item identifiers. The work presented in this paper complements these recent lines of research and proposes and evaluates three alternative ways of leveraging LLMs for sequential recommendation. Differently from earlier approaches, our work shows that initializing an existing sequential model with LLM-based embeddings is highly effective and helps to outperform existing state-of-the-art models. In addition, we find that retrieving relevant items solely based on LLM embedding similarity can lead to compelling recommendations depending on the dataset. ## 3. Three LLM-based approaches for sequential recommendations In this section, we describe the three technical approaches sketched in Section 1. ### LLMSeqSim: Recommending Semantically Related Items via LLM Embeddings With this first approach, our goal is to explore if recommendations can benefit from a holistic notion of similarity provided by LLMs. To achieve this, we leverage _LLM embeddings_ to produce recommendations in three steps. First, we query the text-embedding-ada-002 OpenAI embedding model with the names of the products in the item catalog and retrieve their embeddings. Second, we compute a session embedding for each session in our test set by combining the embeddings of the individual products in the session. Here, we try different combination strategies: _a)_ the average of the product embeddings, _b)_ a weighted average using linear and exponential decay functions depending on the position of the item in the session, and _c)_ only the embedding of the last product.3 Third, we compare the session embedding to the embeddings of the items in the product catalog using cosine, Euclidean, and dot product similarity.4 Finally, we recommend the top-_k_ products from the catalog with the highest embedding similarity to the session embedding.
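For concreteness, the three steps can be sketched as follows, assuming the item embeddings have already been retrieved from the embeddings API into a NumPy matrix; the exponential-decay weighting corresponds to strategy _b)_ above, and the function names are illustrative.

```python
import numpy as np

def session_embedding(item_vecs: np.ndarray, decay: float = 0.8) -> np.ndarray:
    """Combine the embeddings of a session's items (one row per item,
    oldest first) with exponentially decaying weights, so that the most
    recent item contributes most."""
    n = item_vecs.shape[0]
    w = decay ** np.arange(n - 1, -1, -1)   # oldest item -> smallest weight
    return (w[:, None] * item_vecs).sum(axis=0) / w.sum()

def recommend_top_k(session_vec: np.ndarray, catalog: np.ndarray, k: int = 20) -> np.ndarray:
    """Return the indices of the k catalog items whose embeddings have the
    highest cosine similarity to the session embedding."""
    s = session_vec / np.linalg.norm(session_vec)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    return np.argsort(-(c @ s))[:k]
```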
Footnote 2: [https://platform.openai.com/docs/guides/embeddings/second-generation-models](https://platform.openai.com/docs/guides/embeddings/second-generation-models) Footnote 3: We also tried to create an aggregated session embedding by concatenating the plain product names and then querying the Open AI embeddings API. This however led to worse results. Footnote 4: The choice of the similarity measure did not significantly impact the results. ### LLMSeqPrompt: Prompt-based Recommendations by a Fine-Tuned LLM In this approach, we inject domain knowledge into the collective information that a base LLM incorporates, with the goal of increasing the quality of the recommendations by an LLM that is given information about an ongoing session in the form of a prompt. To this end, we fine-tune an OpenAI ada model on training samples consisting of a prompt (the input) and a completion (the intended output). In our case, the prompt is a session, which contains a list of product names except for the last product, and the completion is the name of the last product in the same session, see Figure 1. To optimize performance, we fine-tune the model until the validation loss converges. After training, we provide the prompts of the sessions in the test set to the fine-tuned model to obtain recommendations. We note that we make no strong assumption regarding the order of the returned recommendations. Therefore, we use the tendency of the model to provide duplicate recommendations as a proxy of its confidence and rank the recommendations by frequency of appearance. Then, to create a full slate of unique recommendations, we retrieve the embedding of each duplicate product using the OpenAI embeddings API and take the catalog's product that is closest in terms of embedding similarity using the dot product measure. Finally, we note that the fine-tuned LLM, being a generative model, may also return hallucinated products, which we map to catalog products using the same method as for duplicate products. ### LLM2BERT4Rec: Recommending with an LLM-enhanced Sequential Model In our third approach, our goal is to leverage the semantically-rich item representations provided by an LLM to enhance an existing sequential recommendation model. Specifically, in our work we focus on BERT4Rec [35], a state-of-the-art transformer-based model, which employs the transformer architecture [37] of BERT [7]. BERT's transformer architecture consists of an embedding layer, a stack of encoder layers, and a projection head. Furthermore, BERT features a masked language model training protocol, which involves masking items at random positions and letting the model predict their true identity. Initially, the embedding layer embeds an input sequence of (potentially masked) item IDs into a sequence of embeddings using both the item ID and the item position. Then the transformer encoder layers process the embedding sequence using a multi-head attention module and a feed-forward network shared across all positions. Finally, the projection head projects the embeddings at each masked position to a probability distribution in order to obtain the true identity of the masked item. The projection head reuses the item embeddings of the embedding layer to reduce the model's size and to avoid overfitting. To allow BERT4Rec to leverage the rich information encoded in LLMs, we initialize BERT4Rec's item embeddings using the LLM embeddings described in Section 3.1.
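The dimension alignment is detailed in the next paragraph; as a minimal sketch, assuming scikit-learn's PCA and a hypothetical embedding-layer assignment, the initialization amounts to:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_init(llm_embeddings: np.ndarray, dim: int = 64) -> np.ndarray:
    """Project the (num_items, 1536) LLM embedding matrix onto its first
    `dim` principal components to match the model's embedding size."""
    return PCA(n_components=dim).fit_transform(llm_embeddings)

# Stand-in data; in practice llm_embeddings holds the retrieved vectors.
init = pca_init(np.random.default_rng(0).standard_normal((1000, 1536)))
# Hypothetical usage: copy `init` into the item embedding table before
# training, e.g. model.item_embedding.weight.data = torch.from_numpy(init).
```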
In order to align the embedding dimension of the LLM embeddings (1536) with the configured dimension of BERT4Rec's embedding layer (e.g., 64), we employ Principal Components Analysis (PCA) to get 64 principal components of the LLM embeddings, which we then use to initialize the item embeddings of BERT4Rec's embedding layer. Finally, we train the enhanced model the same way as our baseline BERT4Rec model. ## 4. Experimental Evaluation In this section, we describe our experimental setup (Section 4.1) and the results of our empirical evaluation (Section 4.2). ### Experimental setup _Datasets and Data Splitting._ We use the public Amazon Beauty [12] dataset and a novel, real-world e-commerce dataset from Delivery Hero5 for our experiments. The Beauty dataset contains product reviews and ratings from Amazon. In line with prior research [1], we pre-processed the dataset to include at least five interactions per user and item (p-core = 5). The Delivery Hero dataset contains anonymous QCommerce sessions for dark store and local shop orders. To better simulate a real-world setting, we did not preprocess this dataset, except that we removed sessions with only one interaction from the test set. QCommerce is a segment of e-Commerce focusing on fast delivery times on the last mile. Dataset statistics are given in Table 1. To create a train and test set in a sound way, we first split a dataset containing sessions temporally such that all test sessions succeed train sessions in time. Then in the test set, we adopt the leave-one-out approach followed by [21; 35] where all but the last interaction of each session represents the prompt, while the last interaction serves as the ground truth. Figure 1. Example prompt and completion for fine-tuning from the Beauty dataset. _Metrics._ We use the standard ranking accuracy metrics NDCG, MRR, and HitRate at the usual cut-off lengths of 10 and 20. Furthermore, we consider the following _beyond-accuracy_ metrics to obtain a more comprehensive picture of the performance of the different algorithms: catalog coverage, serendipity, and novelty. _Catalog coverage_ represents the fraction of catalog items that appeared in at least one top-n recommendation list of the users in the test set [18]. _Serendipity_ measures the average number of correct recommendations for each user that are not recommended by a popularity baseline [10]. _Novelty_ computes the negative log of the relative item popularity, or self-information [45]. _Models._ We include both session-based algorithms of different families, GRU4Rec [14] and SKNN [19], as well as two state-of-the-art sequential models, BERT4Rec [35] and SASRec [21]. We tested all variants of the SKNN nearest-neighbor method proposed in [30] and report the results in the online material. In addition, we include the three LLM-based approaches proposed in Section 3. Finally, we include a popularity-based baseline (MostPopular) in the experiments. _Hyperparameter Tuning._ We systematically tuned all models (except LLMSeqSim and LLMSeqPrompt) on three validation folds with the Tree Parzen Estimator (TPE) sampler [3], and used the average NDCG@20 across the folds as the optimization goal. For LLMSeqPrompt, we applied manual hyperparameter search. The examined hyperparameter ranges and optimal values for each dataset are reported in the online material. ### Results and Discussion Table 2 and Table 3 show the results obtained for the Amazon Beauty and the Delivery Hero dataset on the hidden test set, respectively.
We report the best results of 5 runs. The table is sorted according to NDCG@20. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **\# sessions** & **\# items** & **\# interactions** & **Avg. length** & **Density** \\ \hline Beauty 5-core & 22,363 & 12,101 & 198,502 & 8.9 & 0.073\% \\ \hline Delivery Hero & 258,710 & 38,246 & 1,474,658 & 5.7 & 0.015\% \\ \hline \hline \end{tabular} \end{table} Table 1. Dataset statistics Figure 2. Distribution of items ranked by popularity (left) and histogram of session length (right) for the datasets _Accuracy Results._ The highest values in terms of NDCG@20 are obtained by **LLM2BERT4Rec** for both datasets. In both cases, the gains obtained by using LLM-based item embeddings are substantial, demonstrating the benefits of relying on semantically-rich embeddings in this sequential model. The NDCG value increased by more than 20% for Beauty and over 15% on the Delivery Hero dataset.6 To confirm that the semantics of the LLM embeddings is the driver of performance, we ran an experiment in which we permuted the item embeddings such that the embedding of each item is initialized to the principal components of the LLM embedding of another product from the catalogue. The experiment maintains the statistical properties of the embeddings, but deprives the item embeddings of the semantics of the LLM embeddings. The resulting model exhibited worse performance than the baseline BERT4Rec model with randomly-initialized item embeddings, clearly showing that the performance improvement cannot be credited to the statistical properties of the embeddings. Footnote 6: We also examined the value of LLM embeddings for the SASRec model, where we observed marked increases in the NDCG, but not to the extent that it outperformed LLM2BERT4Rec. We report these additional results in the online material. The relative performance of **LLMSeqSim**, again considering NDCG values, varies across the two datasets. On the Beauty dataset, the model is highly competitive, with NDCG@20 values only being slightly lower than LLM2BERT4Rec. At shorter list lengths, i.e., at NDCG@10, the LLMSeqSim model even leads to the best performance for this dataset. Notably, the embedding combination strategy that led to the best results considered only the last item of the session (see Section 3.1). For the Delivery Hero dataset, in contrast, the picture is very different, and LLMSeqSim leads to quite poor performance, only outperforming the popularity-based baseline. We hypothesize that this phenomenon is a result of the quite different characteristics of the two datasets. For example, in Figure 2, we observe that many items in the real-world Delivery Hero dataset occur very infrequently. This may limit the capacity of LLMSeqSim to find similar items, given also the substantially broader item catalog in the Delivery Hero dataset. Furthermore, a manual inspection of a sample of test prompts, recommendations, and ground truths of the two datasets indicates that users in the Beauty dataset frequently rate items of a certain brand. Since brand names are part of the product names that are input to the LLM, recommending similar items may turn out to be particularly effective. Looking at the other accuracy metrics (**Hit Rate** and **MRR**), we find that these are generally highly correlated with the NDCG results. Notable exceptions are the MRR values of the LLMSeqSim model and the V_SKNN approach on the Beauty dataset.
While these two approaches lead to slightly inferior results at NDCG@20 and in particular also for HR@20, they are superior in terms of MRR. This means that these methods place the hidden target item higher up in the recommendation list in case the target item is included in the top 20. Similar observations regarding the good performance of some methods in terms of MRR on specific datasets were previously reported also in [30]. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Model**}} & \multicolumn{5}{c}{**Top@10**} & \multicolumn{5}{c}{**Top@20**} \\ & **nDCG** & **HR** & **MRR** & **CatCov** & **Seren** & **Novel** & **nDCG** & **HR** & **MRR** & **CatCov** & **Seren** & **Novel** \\ \hline LLM2BERT4Rec & 0.041 & **0.076** & 0.030 & 0.180 & **0.072** & 11.688 & **0.051** & **0.118** & 0.033 & 0.260 & **0.110** & 11.888 \\ LLMSeqSim & **0.044** & 0.063 & **0.038** & **0.763** & 0.063 & **13.819** & 0.048 & 0.079 & **0.039** & **0.889** & 0.079 & **13.858** \\ V\_SKNN & 0.041 & **0.071** & 0.033 & 0.673 & 0.069 & 12.241 & 0.047 & 0.095 & 0.034 & 0.889 & 0.091 & 12.492 \\ BERT4Rec & 0.034 & 0.067 & 0.024 & 0.231 & 0.064 & 12.293 & 0.043 & 0.103 & 0.027 & 0.312 & 0.098 & 12.423 \\ GRU4Rec & 0.027 & 0.051 & 0.020 & 0.145 & 0.047 & 11.409 & 0.035 & 0.082 & 0.022 & 0.214 & 0.074 & 11.597 \\ SASRec & 0.026 & 0.051 & 0.019 & 0.121 & 0.048 & 11.485 & 0.033 & 0.080 & 0.021 & 0.182 & 0.073 & 11.678 \\ LLMSeqPrompt & 0.025 & 0.045 & 0.019 & 0.500 & 0.044 & 13.001 & 0.030 & 0.064 & 0.020 & 0.688 & 0.063 & 13.361 \\ MostPopular & 0.005 & 0.010 & 0.003 & 0.001 & 0.001 & 9.187 & 0.006 & 0.018 & 0.003 & 0.002 & 0.001 & 9.408 \\ \hline \hline \end{tabular} \end{table} Table 2. Evaluation results for the Amazon Beauty dataset Interestingly, as also reported in (Kang et al., 2019; Wang et al., 2020), **nearest-neighbor** approaches can be quite competitive depending on the dataset. On Beauty, V_SKNN outperforms all of the more sophisticated neural models (BERT4Rec, GRU4Rec, SASRec) in all accuracy metrics except Hit Rate@20. On the Delivery Hero dataset, in contrast, the neural models perform better in all accuracy metrics except MRR and NDCG@10. Further inspection (see online material) showed that SKNN's performance drops as the length of sessions increases, while the performance of the other models remains stable. The performance of the LLMSeqPrompt model again depends on the dataset. On the Beauty dataset, it leads to accuracy values that are often only slightly lower than SASRec, which is typically considered a strong state-of-the-art baseline. On the Delivery Hero dataset, in contrast, the drop in performance compared to the other models is substantial. Still, LLMSeqPrompt leads to accuracy values that are markedly higher than the popularity baseline. Given its versatility, ease of configuration and promising performance, LLMSeqPrompt merits further research. _Beyond-Accuracy Results._ We make the following observations for **coverage**, **serendipity** and **novelty**. The LLMSeqSim model consistently leads to the best coverage and novelty. This is not too surprising, given the nature of the approach, which is solely based on embedding similarities. Unlike other methods that use collaborative signals, i.e., past user-item interactions, the general popularity of an item in terms of the amount of observed past interactions does not play a role in LLMSeqSim, neither directly nor implicitly.
Thus, the model has no tendency to concentrate the recommendations on a certain subset of (popular) items. We recall that the used novelty measure is based on the popularity of the items in the recommendations. The serendipity results are largely aligned with the accuracy measures across the datasets. This generally confirms the value of personalizing the recommendations to individual user preferences, compared to recommending mostly popular items to everyone. We reiterate that our serendipity measure counts the fraction of correctly recommended items that would not be recommended by a popularity-based approach. ## 5. Conclusions In this work, we devised and evaluated three approaches that leverage LLMs for sequential recommendation problems. A systematic empirical evaluation revealed that BERT4Rec initialized with LLM embeddings achieves the best performance for two datasets, and that the LLM-based initialization leads to a substantial improvement in accuracy. In our future work, we plan to investigate if our findings generalize to different domains, using alternative datasets with diverse characteristics. Furthermore, we will explore if using other LLMs, e.g., ones with different architectures and training corpora, will lead to similar performance gains, including a hybrid of LLM2BERT4Rec with LLMSeqSim towards combining their accuracy and beyond-accuracy performance. Finally, it remains open whether passing other types of information besides product names, e.g., category information, to an LLM can help to further improve the performance of the models. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Model**}} & \multicolumn{5}{c}{**Top@10**} & \multicolumn{5}{c}{**Top@20**} \\ & **nDCG** & **HR** & **MRR** & **CatCov** & **Seren** & **Novel** & **nDCG** & **HR** & **MRR** & **CatCov** & **Seren** & **Novel** \\ \hline LLM2BERT4Rec & **0.102** & **0.179** & **0.078** & 0.245 & **0.151** & 10.864 & **0.120** & **0.252** & **0.083** & 0.311 & **0.198** & 11.050 \\ BERT4Rec & 0.088 & 0.157 & 0.067 & 0.325 & 0.128 & 10.821 & 0.104 & 0.221 & 0.071 & 0.429 & 0.165 & 11.032 \\ GRU4Rec & 0.085 & 0.153 & 0.064 & 0.127 & 0.124 & 10.570 & 0.101 & 0.218 & 0.068 & 0.172 & 0.161 & 10.823 \\ SASRec & 0.084 & 0.149 & 0.065 & 0.170 & 0.120 & 10.674 & 0.100 & 0.212 & 0.069 & 0.229 & 0.156 & 10.913 \\ V\_SKNN & 0.087 & 0.148 & 0.068 & 0.381 & 0.120 & 10.444 & 0.100 & 0.200 & 0.072 & 0.452 & 0.146 & 10.602 \\ LLMSeqPrompt & 0.063 & 0.116 & 0.047 & 0.400 & 0.107 & 12.048 & 0.070 & 0.144 & 0.049 & 0.611 & 0.123 & 13.788 \\ LLMSeqSim & 0.039 & 0.069 & 0.029 & **0.633** & 0.069 & **16.315** & 0.046 & 0.096 & 0.031 & **0.763** & **0.093** & **16.536** \\ MostPopular & 0.024 & 0.049 & 0.017 & 0.000 & 0.000 & 7.518 & 0.032 & 0.079 & 0.019 & 0.001 & 0.000 & 7.836 \\ \hline \hline \end{tabular} \end{table} Table 3. Evaluation results for the Delivery Hero dataset
Sequential recommendation problems have attracted increasing research attention in recent years, leading to the creation of a wide variety of algorithmic approaches. In this work, we examine how large language models (LLMs), which are currently having disruptive effects in many AI-based applications, can be leveraged to build or improve sequential recommendation approaches. Specifically, we propose and evaluate three approaches that leverage the power of LLMs. The experiments show that, on two datasets, initializing BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20%. Furthermore, we find that a simple approach that uses LLM embeddings to generate recommendations achieves competitive performance by highlighting semantically related items. The code and data of the experiments are publicly shared to ensure reproducibility.
2309.06152
Distinguishing the importance of different charge trapping centers in CaF2-based 2D material MOSFETs
Crystalline CaF2 is attracting huge attention due to its great potential of being the gate dielectric of two-dimensional (2D) material MOSFETs. It is deemed to be much superior to boron nitride and traditional SiO2 because of its larger dielectric constant, wider band gap, and lower defect density. Nevertheless, the CaF2-based MOSFETs fabricated in experiments still present notable reliability issues, and the underlying reason remains unclear. Here we studied the various intrinsic defects and adsorbates in CaF2/MoS2 and CaF2/MoSi2N4 interface systems to reveal the most active charge trapping centers in CaF2-based 2D material MOSFETs. An elaborate table comparing the importance of different defects in both n-type and p-type devices is provided. Most impressively, the oxygen molecules adsorbed at the interface or surface, which are inevitable in experiments, are as active as the intrinsic defects in channel materials, and they can even change MoSi2N4 to p-type spontaneously. These results mean that it is necessary to develop high-vacuum packaging processes as well as to prepare high-quality 2D materials for better device performance.
Zhe Zhao, Tao Xiong, Jian Gong, Yue-Yang Liu
2023-09-12T11:52:04
http://arxiv.org/abs/2309.06152v1
Distinguishing the importance of different charge trapping centers in CaF\({}_{2}\)-based 2D material MOSFETs ###### Abstract Crystalline CaF\({}_{2}\) is attracting huge attention due to its great potential of being the gate dielectric of two-dimensional (2D) material MOSFETs. It is deemed to be much superior to boron nitride and traditional SiO\({}_{2}\) because of its larger dielectric constant, wider band gap, and lower defect density. Nevertheless, the CaF\({}_{2}\)-based MOSFETs fabricated in experiments still present notable reliability issues, and the underlying reason remains unclear. Here we studied the various intrinsic defects and adsorbates in CaF\({}_{2}\)/MoS\({}_{2}\) and CaF\({}_{2}\)/MoSi\({}_{2}\)N\({}_{4}\) interface systems to reveal the most active charge trapping centers in CaF\({}_{2}\)-based 2D material MOSFETs. An elaborate table comparing the importance of different defects in both n-type and p-type devices is provided. Most impressively, the oxygen molecules adsorbed at the interface or surface, which are inevitable in experiments, are as active as the intrinsic defects in the channel materials, and they can even change MoSi\({}_{2}\)N\({}_{4}\) to p-type spontaneously. These results mean that it is necessary to develop high-vacuum packaging processes as well as to prepare high-quality 2D materials for better device performance. ## 1 Introduction Two-dimensional (2D) materials offer new possibilities for "More Moore" due to their ultra-thin thickness and smooth surface with no dangling bonds [1, 2, 3]. With the ultra-scaled channel, higher requirements are raised for the quality and reliability of gate dielectric materials. To match the silicon technologies, oxides (such as SiO\({}_{2}\) [4], HfO\({}_{2}\) [5] and Al\({}_{2}\)O\({}_{3}\) [6]) are usually used, but these materials are non-layered, which makes it difficult to form a good interface with the 2D channels. To deal with the problem, 2D dielectrics such as h-BN have been studied [7]. However, the band gap (~6 eV) and dielectric constant (5.06 \(\varepsilon_{0}\)) of h-BN are not satisfying for dielectric materials [8]. Its band offset with 2D materials is not large enough, which will lead to many reliability problems [9]. Excitingly, the recent experimental preparation of crystalline CaF\({}_{2}\) provides strong support for the solution of this dilemma [10, 11]. By using molecular beam epitaxy (MBE), crystalline CaF\({}_{2}\) can be grown on a silicon or germanium substrate [12]. It has a larger band gap (12.1 eV) and larger dielectric constant (8.43 \(\varepsilon_{0}\)) than h-BN [13]. The grown CaF\({}_{2}\) is terminated by F atoms, which means that there is no dangling bond on its surface [14]. Another important point is that CaF\({}_{2}\) itself is stable in air, and is not easily dissolved in water [15]. CaF\({}_{2}\) can form good type-I band alignment with many 2D materials, which means that it will be very advantageous as a gate dielectric of semiconductor devices. Nevertheless, notable device reliability issues were still observed in CaF\({}_{2}\)-based MOSFETs [13, 15, 16, 17], which contradicts the perfect electrical properties of CaF\({}_{2}\). For example, the \(I_{\rm D}\)-\(V_{\rm G}\) hysteresis is significant (although lower than that in the MoS\({}_{2}\)/SiO\({}_{2}\) FET), and it shows obvious variability when the same device is operated at different scanning times.
On the other hand, if different devices are operated under the same \(V_{\rm G}\), the \(I_{\rm D}\)-\(V_{\rm G}\) characteristics such as on/off current ratio and subthreshold swing (SS) (150-90 mV dec\({}^{-1}\)) differ greatly [13]. In addition, some devices with large negative threshold voltage (\(V_{\text{th}}\)) are prone to fail due to the bias overload of the CaF\({}_{2}\) layer. The physical origin of hysteresis and threshold voltage shift is widely attributed to the charge trapping and de-trapping of microscopic defects [18, 19, 20, 21, 22, 23, 24], and the strength of the charge trapping effect is closely related to the type of defects [25, 26, 27, 28]. Therefore, it is very urgent to distinguish the activity of various defects in CaF\({}_{2}\)-based transistors so that corresponding strategies can be proposed to deal with them. Figure 1: Atomic structure and type-I band alignment of the two kinds of interface models. ## 2 Method Among the 2D materials, MoS\({}_{2}\) is one of the most widely used semiconductors. It has a direct band gap of 1.8 eV, and has been used to design high-performance electronic as well as optoelectronic devices [29]. On the other hand, some new materials are also being synthesized, such as MoSi\({}_{2}\)N\({}_{4}\) [30]. MoSi\({}_{2}\)N\({}_{4}\) is very promising because of its excellent photocatalytic performance [31], mechanical strength [32], and electrical transport properties [33]. Therefore, we construct both MoS\({}_{2}\)/CaF\({}_{2}\) and MoSi\({}_{2}\)N\({}_{4}\)/CaF\({}_{2}\) interface models to make the simulation results representative. The lattice parameters of CaF\({}_{2}\), MoS\({}_{2}\) and MoSi\({}_{2}\)N\({}_{4}\) are 3.90 Å, 3.16 Å, and 2.91 Å, respectively. To obtain good lattice matching, the primitive cell of MoS\({}_{2}\) is repeated five times to match the CaF\({}_{2}\) cell repeated four times. The resulting CaF\({}_{2}\) deformation is only 1.28%. Similarly, the primitive cell of MoSi\({}_{2}\)N\({}_{4}\) is repeated four times to match the CaF\({}_{2}\) repeated three times, and the CaF\({}_{2}\) deformation is only 0.52%. All the first-principles calculations are performed with the software PWmat [34, 35]. The SG15 pseudopotential [18] is adopted, and the plane-wave cutoff energy is 50 Ry. The Heyd-Scuseria-Ernzerhof (HSE) functional [37] is used in the calculation of electronic structures to improve the accuracy of the calculations. The vdW force between the layers of the materials is also considered. ## 3 Results and Discussion The interface models are shown in Fig. 1(a) and (c). A 5-layer CaF\({}_{2}\) is adopted because the experimental MBE-grown CaF\({}_{2}\) is about 2 nm thick. The band alignments, as manifested by the projected density of states (PDOS), are shown in Fig. 1(b) and (d). It can be seen that the VBM (valence band maximum) and CBM (conduction band minimum) are provided by MoS\({}_{2}\) and MoSi\({}_{2}\)N\({}_{4}\), and the band offsets are greater than 2 eV, which makes charge tunneling difficult. This confirms that using CaF\({}_{2}\) as the gate dielectric of a 2D material MOSFET is likely to yield good device reliability [38]. Therefore, when considering practical applications, we believe that the reliability issues should stem from some intrinsic or external charge trapping centers. Intuitively, we should first study the F vacancy defect in the CaF\({}_{2}\) layer. However, it has been demonstrated in experiment that defects are not easily generated in CaF\({}_{2}\) [13].
Besides, it has been proved by first-principles calculation that even though F vacancies (V\({}_{\rm F}\)) and Ca vacancies (V\({}_{\rm Ca}\)) exist, there is no defect state near the band edge of the channel material due to the large band offset between the two materials [39]. Consequently, we turn our attention to the trapping centers inside the channel material, at the semiconductor/dielectric interface, and at the dielectric surface. For MoS\({}_{2}\), we considered the S vacancy defect (V\({}_{\rm S}\)), Mo vacancy defect (V\({}_{\rm Mo}\)), MoS\({}_{3}\) vacancy defect (V\({}_{\rm MoS_{3}}\)) and MoS\({}_{5}\) vacancy defect (V\({}_{\rm MoS_{5}}\)) at different spatial locations. On the other hand, considering that gas adsorption occurs very easily in the process of device manufacturing, we also studied water and oxygen molecules adsorbed at different positions. For a more intuitive display of the defects and adsorption, the related structural diagrams are shown in Fig. 2, and the corresponding energy level distributions are shown in Fig. 3. First, there is an occupied defect state denoted by d1 for the V\({}_{\rm S}\) in MoS\({}_{2}\), whose energy is 0.38 eV below the VBM, and there are two empty defect states with similar energy denoted by d2, whose energy is 0.57 eV below the CBM. According to charge transfer theories, the charge trapping rate will decrease exponentially with the increasing energy barrier between the initial and final electronic states, thus we can consider that only the defect levels that lie less than 1 eV away from the band edge of the channel material are active trapping centers. Therefore, it can be concluded that d1 is an important hole trapping state when negative gate voltage is applied, and d2 is an important electron trapping state when positive gate voltage is applied. Similarly, the Mo vacancies are active in trapping holes and electrons, but they are not as active as the S vacancy in electron trapping, because the V\({}_{\rm Mo}\) defect levels are farther away from the CBM. In addition to the common V\({}_{\rm S}\) and V\({}_{\rm Mo}\), experiments have reported that complex vacancy defects (such as V\({}_{\rm MoS_{3}}\) and V\({}_{\rm MoS_{5}}\)) are found in MoS\({}_{2}\) [40]. These two complex vacancies contain many dangling bonds, and thus can introduce a series of defect states (up to 13) that lie close to either the VBM or the CBM. Consequently, they will be very active charge trapping centers. Nevertheless, the formation energy of these complex defects is very high, which makes them low in density. More details of the defect levels are listed in Table 1. Figure 2: The different defects in MoS\({}_{2}\), from (a) to (d): V\({}_{\rm S}\), V\({}_{\rm Mo}\), V\({}_{\rm MoS_{3}}\), and V\({}_{\rm MoS_{5}}\) defects. An oxygen molecule (e) and a water molecule (f) are adsorbed at the CaF\({}_{2}\)-MoS\({}_{2}\) interface, respectively. Oxygen is adsorbed in the interlayer of MoS\({}_{2}\) (g) and on the surface of CaF\({}_{2}\) (h), respectively. The atoms highlighted in red represent the defects and adsorption sites. Figure 3: The energy level distribution of different defects. (a) S vacancy (V\({}_{\rm S}\)), (b) Mo vacancy (V\({}_{\rm Mo}\)), (c) MoS\({}_{3}\) vacancy (V\({}_{\rm MoS_{3}}\)), and (d) MoS\({}_{5}\) vacancy (V\({}_{\rm MoS_{5}}\)). It has been mentioned in previous reports that the hysteresis of CaF\({}_{2}\)-MoS\({}_{2}\) devices can be reduced after they are heated and dried [13]. This indicates that molecules had been adsorbed during device preparation, so the activity of these adsorbates needs to be discussed. Fig.
4(a) shows the adsorption of O\({}_{2}\) at the CaF\({}_{2}\)-MoS\({}_{2}\) interface, and three defect levels denoted by d1, d2 and d3 are observed. They are only 1 eV, 0.85 eV and 0.54 eV below the VBM, respectively. Therefore, they will be active hole traps in p-MOSFETs. In contrast, the adsorption of water molecules at the interface is much less important. It can be seen from Fig. 4(b) that there is no obvious defect state near the band edge of MoS\({}_{2}\). To further check the importance of oxygen, we studied oxygen adsorbed at other positions. Fig. 4(c) shows the situation where the oxygen molecule is adsorbed in the interlayer of MoS\({}_{2}\). It can be seen that the defect state is only 0.37 eV below the VBM, which will trap holes easily and thus affect the device performance. Fig. 4(d) shows the case where the oxygen is adsorbed on the surface of CaF\({}_{2}\). An occupied defect state that is close to the CBM rather than the VBM is seen. Considering that the negative gate voltage in a p-FET will drag the defect level down towards the VBM, the oxygen on the CaF\({}_{2}\) surface will be a very active hole trapping center under large gate voltage. To exhibit the importance of different defects more clearly, Table 1 summarizes and compares the information of all defects. The defect levels that are more than 1 eV away from the MoS\({}_{2}\) band edge are regarded as electronically unimportant [41, 42, 43]. Moreover, the formation energy/adsorption energy is considered to provide an overall evaluation of their importance. Now we study the MoSi\({}_{2}\)N\({}_{4}\)/CaF\({}_{2}\) system. MoSi\({}_{2}\)N\({}_{4}\) is a 2D material with 7 atomic layers. One Mo atomic layer lies in the middle while two N-Si-N tri-layers lie on the top and bottom surfaces symmetrically. Vacancy defects caused by the shedding of N (Fig. 5a) and Si (Fig. 5b) atoms in the surface layer are the primary problems to be considered. At the same time, the influence of oxygen molecule (Fig. 5c) and water molecule (Fig. 5d) adsorption during device manufacturing is also considered here. The atoms highlighted in red in the figure represent defects and adsorption sites. For the N vacancy (V\({}_{\rm N}\)) (Fig. 6a), two defect levels are induced into the band gap, of which the half-occupied d1 state is 0.98 eV above the VBM and the empty d2 state is 0.45 eV below the CBM. Such small energy barriers make them very active hole/electron trapping centers. In contrast, the V\({}_{\rm Si}\) defect induces no defect levels close to the CBM, as is shown in Fig. 6(b), but it induces many defect levels below the VBM. Specifically, the electrons in the VBM have spontaneously transferred to the defect states, shifting the Fermi level below the VBM and making the CaF\({}_{2}\)-MoSi\({}_{2}\)N\({}_{4}\) heterostructure p-type as a whole. Interestingly, the adsorption of an oxygen molecule at the CaF\({}_{2}\)-MoSi\({}_{2}\)N\({}_{4}\) interface has a very similar effect. As is shown in Fig. 6(c), the electrons in the VBM are spontaneously captured by the oxygen, and the MoSi\({}_{2}\)N\({}_{4}\) becomes a p-type material. If the oxygen density is high, this will greatly impair the performance and reliability of the device.
Table 1: The energy levels introduced by each defect and adsorbate in the CaF\({}_{2}\)-MoS\({}_{2}\) system, their positions relative to the VBM and CBM (in eV), their importance for n-FET and p-FET operation, and the corresponding formation/adsorption energies (in eV). In comparison, the adsorption of water molecules at the interface does not have such an effect, as is shown in Fig. 6(d). The water-related defect energy level is far away from the band edge of MoSi\({}_{2}\)N\({}_{4}\). This further confirms that water molecule adsorption is less important than oxygen adsorption in impacting device performance and reliability. To present the importance of different defects more intuitively, Table 2 summarizes and compares the information of all defects in the CaF\({}_{2}\)-MoSi\({}_{2}\)N\({}_{4}\) system.
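As a minimal sketch of the screening rule used to fill Tables 1 and 2, i.e., the <1 eV charge-transfer criterion discussed above; the function and field names are illustrative.

```python
def trap_activity(e_vs_vbm: float, e_vs_cbm: float, occupied: bool,
                  window: float = 1.0) -> dict:
    """Classify a defect level: an occupied level within `window` (eV) of
    the channel VBM is an active hole trap (relevant under negative gate
    bias, i.e., p-FET), and an empty level within `window` of the CBM is
    an active electron trap (positive bias, n-FET)."""
    return {
        "p_fet_active": occupied and abs(e_vs_vbm) < window,
        "n_fet_active": (not occupied) and abs(e_vs_cbm) < window,
    }

# Example: the occupied V_S state 0.38 eV below the MoS2 VBM is an active
# hole trap, consistent with the discussion of Fig. 3(a).
print(trap_activity(e_vs_vbm=-0.38, e_vs_cbm=-1.91, occupied=True))
```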
## 4 Conclusion In conclusion, we have investigated the various defects and adsorbates in CaF\({}_{2}\)-based 2D material MOSFET structures to distinguish their importance in degrading the device performance and reliability. First, the intrinsic defects in the channel materials, including the V\({}_{\rm S}\) and V\({}_{\rm Mo}\) in MoS\({}_{2}\), and the V\({}_{\rm N}\) and V\({}_{\rm Si}\) in MoSi\({}_{2}\)N\({}_{4}\), are very active charge trapping centers. Second, the oxygen molecules adsorbed at the channel/CaF\({}_{2}\) interface or on the CaF\({}_{2}\) surface are very important trap centers, and they can even spontaneously change the MoSi\({}_{2}\)N\({}_{4}\) to p-type. Third, the adsorbed water molecules are very inactive in capturing charges, and are thus much less important in affecting device performance. An elaborate table comparing the detailed properties of the different defects is provided so that researchers in both experiment and theory can refer to it easily. These results mean that the exclusion of adsorbates during device fabrication is as important as growing high-quality channel materials for obtaining better device performance. ## Author Contributions Zhe Zhao: Conceptualization, Methodology, Data collection, Writing - original draft. Tao Xiong: Writing - review & editing. Jian Gong and Yue-Yang Liu: Supervision, Writing - review & editing. ## Conflicts of interest There are no conflicts to declare. ## Acknowledgements This work was financially supported by the National Natural Science Foundation of China Grant No. 12004375, in part by the National Natural Science Foundation of China Grant No. 62174155, the National Natural Science Foundation of China Grant No. 62004193, the National Natural Science Foundation of China Grant No. 62125404, the Inner Mongolia Natural Science Foundation No. 2023ZD27, and the National Natural Science Foundation of China Grant No. 11964022.
Crystalline CaF2 is attracting much attention because of its great potential as the gate dielectric of two-dimensional (2D) material MOSFETs, owing to its larger dielectric constant, wider band gap, and lower defect density. However, the CaF2-based MOSFETs fabricated in experiments still suffer from notable reliability issues, and the cause has not been clarified. Here, we investigated the various intrinsic defects and adsorbates in CaF2/MoS2 and CaF2/MoSi2N4 interface systems and revealed the most active charge trapping centers in CaF2-based 2D material MOSFETs. A detailed table comparing the importance of the different defects is provided. Most strikingly, the oxygen molecules adsorbed at the interface or surface are as active as the defects in the channel material, and can even turn MoSi2N4 p-type
2307.00110
Whole-Body Human Ultrasound Tomography
We developed a system for whole-body human ultrasound tomography in reflection and transmission modes. A custom 512-element ultrasound receiver array and a rotating single-element ultrasound transmitter are used to generate 2D isotropically resolved images across the entire human cross-section. We demonstrate this technique in regions such as the abdomen and legs in healthy volunteers. Compared to handheld-probe-based ultrasonography, this approach provides a substantially larger field of view, depends less on operator training, and obtains quantitative tissue parameter profiles in addition to reflectivity images. Whole-body ultrasound tomography could be valuable in applications such as organ disease screening, image-guided needle biopsy, and treatment monitoring.
David C. Garrett, Jinhua Xu, Geng Ku, Lihong V. Wang
2023-06-30T19:52:33
http://arxiv.org/abs/2307.00110v1
# Whole-Body Human Ultrasound Tomography ###### Abstract We developed a system for whole-body human ultrasound tomography in reflection and transmission modes. A custom 512-element ultrasound receiver array and a rotating single-element ultrasound transmitter are used to generate 2D isotropically resolved images across the entire human cross-section. We demonstrate this technique in regions such as the abdomen and legs in healthy volunteers. Compared to handheld-probe-based ultrasonography, this approach provides a substantially larger field of view, depends less on operator training, and obtains quantitative tissue parameter profiles in addition to reflectivity images. Whole-body ultrasound tomography could be valuable in applications such as organ disease screening, image-guided needle biopsy, and treatment monitoring. ## Introduction Since its inception in the mid-20\({}^{\text{th}}\) century, ultrasound imaging has revolutionized healthcare by providing rapid and affordable insight into tissue structure and function. Early systems employed single transducers scanned linearly or circularly with subjects immersed in a water bath [1], [2], later followed by membrane approaches to image regions in the abdomen [3]. Initial results were promising for disease diagnosis [4], but bulky electronics and slow acquisition times necessitated mechanical scanning over several minutes. Later developments in transducers and electronics led to linear probes [5], where multiple channels could be used in parallel. The handheld probe remains the most used form of ultrasonography and has found many clinical applications. However, probes require trained operation [6], provide only reflection-mode images over a narrow field of view (FOV), and have limited ability to visualize features behind bone or air pockets. More recently, alternate approaches using smaller immersion tanks with planar [7], linear [8], ring [9], or hemispherical [10] transducer arrays have been investigated for ultrasound tomography (UST) imaging of the breast [11] or limbs of the body. These systems record both reflected and transmitted signals, allowing for reflectivity, speed of sound, and attenuation profiles to be recovered. In extending to human-scale imaging, acoustically opaque regions like bone or air pockets have been conventionally viewed as insurmountable obstacles. A recent study achieved whole-body imaging in piglets despite the presence of bone and air [12]. Another recent system enables volumetric reflection-mode imaging of human extremities like the arm, visualizing vasculature and bones [13]. However, these system geometries and parameters (e.g., acoustic frequency, transmitter power, and detection sensitivity) are not yet suitable for whole-body human imaging. In this work, we return to geometries like those used by the early ultrasonography practitioners but with the advantage of modern electronics and transducer technology. We employ a custom circular array with 512 receiver elements combined with a single-element transmitter which rotates around the subject. This configuration allows for whole-body UST imaging of humans immersed in water, yielding 2D isotropic images of reflectivity, speed of sound, and attenuation profiles. Using full 360\({}^{\circ}\) viewing angles, we overcome the limited acoustic penetration through tissues such as bone or air pockets. We demonstrate this technique by imaging regions in the abdomen and legs in healthy volunteers.
Several organs and key features can be clearly observed in reflection-mode images, and we also demonstrate recovery of tissue speed of sound and attenuation. ## Results We developed a custom 60 cm diameter 512-element acoustic receiver array with 1 MHz center frequency. A 1.5-inch diameter 2.25 MHz transducer (Olympus V395) with a custom diverging cylindrical polymethylpentene (TPX) lens is used as a transmitter. The transmitter is mounted on a plastic gear that is rotated around the subject by a stepper motor. The array is mounted on two vertical linear motor stages to adjust its height in a water immersion tank. Water acts as acoustic coupling between tissue and the transducers. An arbitrary function generator (Siglent SDG2042X) connected to a 300-Watt RF power amplifier (ENI 350L) excites the transmitter using a 400 \(\upmu\)s chirp signal spanning 0.3-2.0 MHz. The system hardware is shown in Figure 1. We demonstrate whole-body UST with a healthy female volunteer. The subject is seated in the water immersion tank with the head held against a cushion to reduce motion and with arms raised slightly to lift the ribs. Figure 2 shows an example reflection-mode image of the abdomen. The image is displayed in inverse grayscale (brighter regions are more anechoic) normalized to the peak pixel amplitude. Various structures are visualized, including the liver, stomach, spleen, abdominal aorta, and vertebral body. Note that despite the presence of bone and air pockets, our imaging geometry allows high-fidelity imaging of regions deep in the body. Using data collected during the same scan, we also obtain transmission-mode profiles of the speed of sound and attenuation coefficient, which are overlaid on the reflection-mode images in Figure 2. The transmission-mode image reconstruction uses a filtered back projection algorithm similar to that used in x-ray computed tomography, where the arrival time delay and the attenuation of the subject data with respect to the homogeneous data are found for each transmitter-receiver ray. Slowness and attenuation coefficient maps are solved for by multiplying the derived arrival delay and attenuation vectors by the inverse of a matrix corresponding to the crossing ray length density. Due to the large size of the data, it is not practical to store and operate on such a matrix directly. Therefore, the conjugate gradient descent algorithm is used to solve the matrix inversion. The speed of sound map can then be obtained by inverting the slowness map. We observe higher tissue speed of sound in the liver, which agrees with literature values of approximately 1560 m/s [14]. Figure 1: a) System diagram. AWG: arbitrary waveform generator; P.A.: power amplifier; MN: matching network; DAQ: data acquisition module. b) System photograph. We further performed 15 scans at 1 cm vertical intervals from approximately the ribcage to the pelvis. Each scan was acquired over 10 seconds, and the subject was in the immersion tank for approximately 15 minutes. Examples of other 2D images are shown in Figure 3. Note that this volunteer previously had her left kidney removed, so only the right one is visualized. Figure 3: Example of elevational scans of a female subject from approximately the ribcage to the pelvis. RK: right kidney (left kidney was removed). Figure 2: Example UST images. a) Reflectivity image of human abdomen. IVC: inferior vena cava. AA: abdominal aorta. RL: right lobe of liver. LL: left lobe of liver. VB: vertebral body. SC: spinal cord. St: stomach. Sp: spleen. 
b) and c) show the speed of sound and attenuation profiles, respectively, overlaid on the reflectivity image. With the subject standing in the immersion tank, we also imaged the legs as shown in Figure 4. In the upper legs, the femur, surrounding muscles, and adipose boundaries are clearly observed. The tibia and fibula are visualized in the lower legs as well as adipose boundaries. ## Discussion We developed a system for whole-body ultrasound imaging. Compared with clinical handheld-probe-based ultrasonography, our approach images cross-sections of the whole human body and visualizes three contrasts: reflectivity, speed of sound, and attenuation. This may be of clinical use for screening organ size or structure as an early indicator of inflammation or disease [15]. The speed of sound and attenuation could also be used as diagnostic tools, for instance to assess changes due to non-alcoholic fatty liver disease. Whole-body UST could also be used in applications such as image-guided needle biopsy, where x-ray computed tomography is conventionally used. With our whole-body FOV, the biopsy needle could be localized with respect to tissues of interest without the use of ionizing radiation. Furthermore, clinical ultrasonography typically requires trained operation for observing regions of interest. Our approach requires only that the patient remain still, after which the imaging process could be automated. This could be an appealing feature for regular screening and would help reduce costs compared to other modalities. However, our current implementation involving patient water immersion is likely unsuitable for imaging of diseased subjects. A similar imaging geometry could therefore be implemented using water bags like those used in shockwave lithotripsy. In the future, we plan to enhance this system with additional photoacoustic and thermoacoustic contrast. Using the same acoustic receivers, these images could be immediately co-registered with our UST images to overlay optical and microwave absorption profiles. We also aim to improve our transmission-mode reconstruction quality using techniques such as full-wave inversion [16] to better localize variations in the speed of sound and attenuation coefficient. Additional acoustic elements could also reduce image acquisition time and provide 3D imaging capability. Figure 4: Reflection-mode images of a) the upper leg; and b) the lower leg of a female subject. ## Materials and Methods ### System hardware All 512 receiver array elements are 3 mm \(\times\) 10 mm polymer piezoelectric elements (PVDF-TrFE, PolyK Technologies LLC) capacitively coupled to polyimide electrodes which are directly connected to parallel preamplifiers. The preamplifiers are implemented on custom annular printed circuit boards and provide 15 dB voltage gain with 100 k\(\Omega\) input impedance. The elements and preamplifiers are housed in a stainless-steel shielded enclosure. Casting epoxy is used as a backing material for each element, and an angled back panel is used to reduce reverberation. All channels are low-pass filtered (\(f_{c}=2\) MHz) and digitized (Photosound Legion) in parallel at 5 MSPS. The preamplifiers are powered by rechargeable lithium polymer batteries. To account for geometrical error during manufacturing, the technique described in [17] is used to calibrate each element's position. 
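The slowness and attenuation recovery described in the Results section amounts to a large sparse least-squares solve. The following is a minimal sketch of the speed-of-sound branch, assuming a precomputed sparse ray-length matrix `L` (one row per transmitter-receiver ray, one column per image pixel) and a vector `delays` of arrival-time delays relative to the water-only scan; the variable names and the water sound speed are illustrative assumptions, not taken from the system's code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def reconstruct_speed_of_sound(L, delays, shape, c_water=1480.0):
    """Recover a speed-of-sound map from per-ray arrival delays.

    Solves L @ s = delays in the least-squares sense for the slowness
    deviation s (in s/m) with conjugate gradients on the normal
    equations, avoiding an explicit inverse of the ray-density matrix.
    """
    n_pix = L.shape[1]
    # Symmetric positive semi-definite operator L^T L, applied matrix-free.
    normal_op = LinearOperator((n_pix, n_pix),
                               matvec=lambda v: L.T @ (L @ v),
                               dtype=np.float64)
    s, info = cg(normal_op, L.T @ delays, maxiter=500)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    # Delays are measured relative to water, so add back the water slowness.
    return 1.0 / (s.reshape(shape) + 1.0 / c_water)
```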
### Imaging parameters To enhance the signal-to-noise ratio (SNR) while remaining within the mechanical index limit, a linear chirp signal versus time (\(t\)) is used with a time-varying frequency \(f(t)=ct+f_{0}\), where \(c=(f_{1}-f_{0})/T\) is the linear chirp rate, \(f_{0}=0.3\) MHz is the lower frequency, \(f_{1}=2.0\) MHz is the upper frequency, and \(T=400\) \(\upmu\)s is the chirp duration. The transmitted frequencies are limited by the bandwidths of the transmitter and receivers. We used the maximal pulse duration permitted by our maximal acquisition time of 800 \(\upmu\)s, allowing recovery of the round-trip reflected signals over the entire FOV. The resulting transmitted chirp signal is \[x(t)=\sin\Big{[}2\pi(\frac{c}{2}t^{2}+f_{0}t)\Big{]}.\] Compared to a pulse with similar peak pressure, this results in an expected SNR gain of \(\sim\sqrt{T\cdot B}\), where \(B=f_{1}-f_{0}\) is the acoustic bandwidth. In addition to the target, we also perform a scan with only water in the imaging domain, resulting in recorded signals \(x_{w,i}(t)\) for each receiver element \(i\). This provides the response of each transducer to the chirp, which is then cross-correlated with the target's chirp response \(x_{c,i}(t)\). The pulse response for the target signals \(\chi_{s,i}(t)\) is then recovered for each element \(i\) as: \[\chi_{s,i}(t)=\frac{x_{w,i}(t)\star x_{c,i}(t)}{\max\bigl{[}x_{w,i}(t)\star x_{w,i}(t)\bigr{]}}\] where \(\star\) denotes cross-correlation. We normalize by the maximum of the autocorrelation of \(x_{w,i}(t)\) to account for sensitivity variation in the receiver elements. The transmitter operates with a pulse repetition rate of 180 Hz. With the gear rotation time of 10 seconds, this results in 1800 transmitted pulses over a full circular scan around the target. ### Human imaging protocol A healthy female volunteer consented to being imaged in this system. This imaging procedure was approved by the Caltech Institutional Review Board (protocol IR21-1099). Prior to human imaging, we used a calibrated hydrophone (Onda HGL-0085) positioned immediately in front of the transmitter to verify that the mechanical index was less than 0.2, whereas the limit from the U.S. Food and Drug Administration is 1.9 [18]. ## Acknowledgements This work was supported in part by National Institutes of Health grant R35 CA220436 (Outstanding Investigator Award). L.W. has a financial interest in Microphotoacoustics, Inc., CalPACT, LLC, and Union Photoacoustic Technologies, Ltd., which, however, did not support this work.
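To make the pulse-compression step of the Imaging parameters section concrete, here is a minimal sketch of the chirp generation and pulse-response recovery (numpy/scipy assumed; the recorded signals `x_w` and `x_c` are placeholders mirroring the notation above):

```python
import numpy as np
from scipy.signal import correlate

fs = 5e6                       # 5 MSPS digitizer rate
f0, f1, T = 0.3e6, 2.0e6, 400e-6
t = np.arange(0.0, T, 1.0 / fs)
c_rate = (f1 - f0) / T         # linear chirp rate
x_tx = np.sin(2 * np.pi * (0.5 * c_rate * t**2 + f0 * t))  # transmitted chirp

def pulse_response(x_w, x_c):
    """Recover the pulse response chi_s: cross-correlate the target
    recording x_c with the water-only reference x_w and normalize by
    the peak of the reference autocorrelation (per-element sensitivity)."""
    return correlate(x_w, x_c) / np.max(correlate(x_w, x_w))

# Expected matched-filter SNR gain: sqrt(T * B) = sqrt(400e-6 * 1.7e6) ~ 26.
```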
We developed a system for whole-body human ultrasound tomography operating in reflection and transmission modes. A custom 512-element ultrasound receiver array with a rotating single-element ultrasound transmitter generates 2D isotropically resolved images across the entire human cross-section. We demonstrate this technique in regions such as the abdomen and legs of healthy volunteers. Compared to handheld-probe-based ultrasonography, this approach provides a substantially larger field of view, depends less on operator training, and obtains quantitative tissue parameter profiles in addition to reflectivity images. Whole-body ultrasound tomography could be valuable in applications such as organ disease screening, image-guided needle biopsy, and treatment monitoring.
2301.13584
Straight-Through meets Sparse Recovery: the Support Exploration Algorithm
The {\it straight-through estimator} (STE) is commonly used to optimize quantized neural networks, yet the contexts in which it performs effectively remain unclear despite its empirical successes. To take a step forward in this understanding, we apply STE to a well-understood problem: {\it sparse support recovery}. We introduce the {\it Support Exploration Algorithm} (SEA), a novel algorithm promoting sparsity, and we analyze its performance in support recovery (a.k.a. model selection) problems. SEA explores more supports than the state of the art, leading to superior performance in experiments, especially when the columns of $A$ are strongly coherent. The theoretical analysis considers recovery guarantees when the linear measurement matrix $A$ satisfies the {\it Restricted Isometry Property} (RIP). The sufficient conditions of recovery are comparable to, but more stringent than, those of the state of the art in sparse support recovery. Their significance lies mainly in their applicability to an instance of the STE.
Mimoun Mohamed, François Malgouyres, Valentin Emiya, Caroline Chaux
2023-01-31T12:31:13
http://arxiv.org/abs/2301.13584v3
# Support Exploration Algorithm for Sparse Support Recovery ###### Abstract We introduce a new algorithm promoting sparsity called _Support Exploration Algorithm (SEA)_ and analyze it in the context of support recovery/model selection problems. The algorithm can be interpreted as an instance of the _straight-through estimator (STE)_ applied to the resolution of a sparse linear inverse problem. SEA uses a non-sparse exploratory vector and makes it evolve in the input space to select the sparse support. We exhibit an oracle update rule for the exploratory vector and consider the STE update. The theoretical analysis establishes general sufficient conditions of support recovery. The general conditions are specialized to the case where the matrix \(A\) performing the linear measurements satisfies the _Restricted Isometry Property (RIP)_. Experiments show that SEA can efficiently improve the results of any algorithm. Because of its exploratory nature, SEA also performs remarkably well when the columns of \(A\) are strongly coherent. ## 1 Introduction Sparse representations and sparsity-inducing algorithms are widely used in statistics and machine learning [20], as well as in signal processing [18]. For instance, in machine learning, sparse representations are used to select relevant variables. They are also sought to interpret trained models. In signal processing, linear inverse problems have a wide array of applications. The sparsity assumption is ubiquitous since most real signals can be represented as sparse signals in some domains. For instance, communication signals have a sparse representation in Fourier space, as natural images do in wavelet space. While sparse models are appealing, they are hard to estimate due to the underlying combinatorial difficulty of identifying a sparse support. Support recovery. Throughout the article, we consider the sparsity \(k\in\mathbb{N}\). We assume \(x^{*}\in\mathbb{R}^{n}\) is a sparse unknown vector satisfying \(\|x^{*}\|_{0}\leq k\), \(A\in\mathbb{R}^{m\times n}\) is a known matrix, and \(y\in\mathbb{R}^{m}\) is a linear observation of \(x^{*}\) contaminated with an arbitrary additive error/noise \(e\in\mathbb{R}^{m}\), \[y=Ax^{*}+e. \tag{1}\] We denote by \(S^{*}=\operatorname{supp}(x^{*})\) the support of \(x^{*}\). We present the new algorithm in a support recovery context. The support recovery objective1, also called variable or model selection, searches for a support \(S\) with cardinality at most \(k\) such that \(S^{*}\subset S\). We say that _the algorithm recovers_ \(S^{*}\) if it finds such an \(S\). Footnote 1: The adaptation of the article to “signed support recovery” is possible and straightforward. We chose to simplify the presentation and not discuss sign recovery. When \(e\neq 0\), support recovery is a stronger guarantee than the one in the most standard compressed sensing setting, initiated in [8] and [15], where the goal is to upper-bound \(\|x-x^{*}\|_{2}\) for a well-chosen \(x\). The first particularity of support recovery is to assume that \(x^{*}\) is truly \(k\)-sparse - not just compressible. Also, in short, support recovery guarantees involve a hypothesis on \(\min_{i\in S^{*}}|x_{i}^{*}|\), in addition to the incoherence hypothesis on \(A\) [32, 26, 34, 7, 33]. Indeed, we cannot expect to recover an element \(i\in S^{*}\) if \(|x_{i}^{*}|\) is negligible when compared to all the other quantities involved in the problem [32]. 
Support recovery models and algorithms. A famous model for support recovery is \[\underset{x\in\mathbb{R}^{n},\;\|x\|_{0}\leq k}{\text{Minimize}}F\left(x\right):=\|Ax-y\|_{2}^{2}. \tag{2}\] However, the sparsity constraint induces a combinatorial, non-differentiable and non-convex aspect in the problem, which is NP-hard [13]. To avoid going through the \(\binom{n}{k}\) possible supports, each leading to a differentiable and convex sub-problem, various algorithms have been created. There are three main families of algorithms: relaxation, combinatorial approaches and greedy algorithms. The most famous relaxed model uses the \(\ell^{1}\) norm and is known as the LASSO [31] or Basis Pursuit Algorithm [10]. Combinatorial approaches like Branch and Bound algorithms [3] find the global minimum of (2) but lack scalability. Greedy algorithms can be divided into two categories. Greedy pursuits like Matching Pursuit (MP) [25] and Orthogonal Matching Pursuit (OMP) [28] are algorithms that build up an estimate of \(x^{*}\) iteratively by alternating between adding components to the current support and an optimization step to approximate these components. Thresholding algorithms like Iterative Hard Thresholding (IHT) [6], the Hard Thresholding Algorithm [19], Compressive Sampling Matching Pursuit (CoSaMP) [27], OMP with Replacement (OMPR) [23], Exhaustive Local Search (ELS) [1] (a.k.a. Fully Corrective Forward Greedy Selection with Replacement [30]), Hard Thresholding Pursuit (HTP) [19] and Subspace Pursuit (SP) [12] add a replacement step in the iterative process. It allows them to explore various supports before stopping at a local optimum. The new algorithm introduced in this article belongs to this last family. However, a clear difference from the existing algorithms is the introduction of a non-sparse vector \(\mathcal{X}^{t}\in\mathbb{R}^{n}\), which evolves during the iterative process and indicates at each iteration which support should be tested. We call \(\mathcal{X}^{t}\) the _Support Exploration Variable_. It is derived from the straight-through estimator (STE) [21, 4], designed to deal with non-differentiable functions. As an illustrative example, the support exploration variable is the analog of the full-precision weights used by BinaryConnect - which also uses STE - to optimize the binary weights of neural networks [11, 22]. Contributions. The main contribution of the article is the introduction of a new sparsity-inducing algorithm that we call _Support Exploration Algorithm (SEA)_. It is based on the STE and uses the full gradient history over iterations as a heuristic in order to select the next support to optimize over. An important feature of SEA is that it can be used as a post-processing step to improve the results of all existing algorithms. SEA is supported by four support recovery statements. In Theorem 3.1, we establish a general statement. It provides the main intuition on why SEA can recover the correct support. As an illustration, this statement is instantiated in the simple orthogonal and noiseless case in Corollary 3.2. It is then instantiated, under a condition on \(x^{*}\), in the case where \(A\) satisfies a Restricted Isometry Property (RIP) condition. We compare the performances of SEA to those of state-of-the-art algorithms on: 1/ synthetic experiments for Gaussian matrices; 2/ spike deconvolution problems; 3/ classification and regression problems for real datasets. 
The experiments show that SEA improves the results of state-of-the-art algorithms and, because it explores many supports, performs remarkably well when the matrix \(A\) is coherent. The code is available in the git repository of the project. 2 Footnote 2: For the double-blind review, the anonymized code is in a zipped file in the supplementary materials. This will be replaced by the repository link in the final version of the paper. SEA is described in Section 2. The theoretical analysis of the algorithm is provided in Section 3. The experiments are in Section 4. Conclusions and perspectives are in Section 5. The proofs of the theoretical statements are in Appendices A, B, C. Complementary experimental results are in Appendices D, E and F. ## 2 Method We define the main notations in Section 2.1 and SEA in Section 2.2. We detail its link with STE in Section 2.3. ### Notations For any \(a,b\in\mathbb{R}\) (\(a\) and \(b\) can be real numbers), the set of integers between \(a\) and \(b\) is denoted by \(\llbracket a,b\rrbracket\) and \(\lfloor a\rfloor\) denotes the floor of \(a\). For any set \(S\subseteq\llbracket 1,n\rrbracket\), we denote the cardinality of \(S\) by \(|S|\). The complement of \(S\) in \(\llbracket 1,n\rrbracket\) is denoted by \(\overline{S}\). The vectors \(0_{\mathbb{R}^{n}}\) and \(0_{\mathbb{R}^{m}}\) are respectively the null vectors of \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\). The vector \(1_{\mathbb{R}^{m}}\) is the all-ones vector of \(\mathbb{R}^{m}\). Given \(x\in\mathbb{R}^{n}\) and \(i\in\llbracket 1,n\rrbracket\), the \(i^{th}\) entry of \(x\) is denoted by \(x_{i}\). The \(i^{th}\) entry of \(|x|\) is denoted by \(|x|_{i}\) and is defined by \(|x|_{i}=|x_{i}|\). The support of \(x\) is denoted by \(\mathrm{supp}(x)=\{i:x_{i}\neq 0\}\). The \(\ell_{0}\) quasi-norm of \(x\) is defined by \(\|x\|_{0}=|\mathrm{supp}(x)|\). The set of indices of the \(k\) largest absolute entries of \(x\) is denoted by \(\mathrm{largest}_{k}\left(x\right)\). When ties lead to multiple possible choices for \(\mathrm{largest}_{k}\left(x\right)\), we assume \(\mathrm{largest}_{k}\left(x\right)\) arbitrarily chooses one of the possible solutions. For any \(S\subseteq\llbracket 1,n\rrbracket\), \(A\in\mathbb{R}^{m\times n}\), and \(x\in\mathbb{R}^{n}\), we define \(x_{|S}\in\mathbb{R}^{|S|}\), the restriction of the vector \(x\) to the indices in \(S\). We also define \(A_{S}\in\mathbb{R}^{m\times|S|}\), the restriction of the matrix \(A\) to the set \(S\), as the matrix composed of the columns of \(A\) whose indices are in \(S\). The transpose of the matrix \(A\) is denoted by \(A^{T}\in\mathbb{R}^{n\times m}\). The pseudoinverse of \(A\) is denoted by \(A^{\dagger}\in\mathbb{R}^{n\times m}\). The pseudoinverse of \(A_{S}\) is denoted by \(A_{S}^{\dagger}=(A_{S})^{\dagger}\in\mathbb{R}^{|S|\times m}\). For any \(d\in\mathbb{N}\), the identity matrix of size \(d\) is denoted by \(I_{d}\). The symbol \(\odot\) denotes the Hadamard product. ### The Support Exploration Algorithm We propose a new iterative algorithm called Support Exploration Algorithm (SEA), given by Algorithm 1, dedicated to support recovery by (approximately) solving problem (2). The solution returned by SEA is obtained by computing the sparse iterate \(x^{t}\) through a least-squares projection given a support \(S^{t}\) at iteration \(t\) (line 7). The key idea is that support \(S^{t}\) is designated at line 6 by a non-sparse variable \(\mathcal{X}^{t}\) called the _support exploration variable_. 
As described below, the use of a support exploration variable offers an original mechanism to explore supports in a more diverse way than classical greedy algorithms. The support exploration variable is updated at line 8 using an STE update explained in Section 2.3. As the algorithm explores supports in a way that allows the functional to sometimes increase, the retained solution is the best one encountered along the iterations (line 11). The role of \(\mathcal{X}^{t}\) may be intuited by first considering an oracle case where the true solution \(x^{*}\) and its support \(S^{*}\) are known by the algorithm. In that case, at iteration \(t\), we can use the oracle update rule \(\mathcal{X}^{t+1}\leftarrow\mathcal{X}^{t}-u^{t}\), using the direction \(u^{t}\) defined for any index \(i\) by \[u_{i}^{t}=\begin{cases}-\eta x_{i}^{*}&i\in S^{*}\cap\overline{S^{t}}\\ 0&i\in\overline{S^{*}}\cup S^{t},\end{cases} \tag{3}\] where \(S^{t}=\mathrm{largest}_{k}\left(\mathcal{X}^{t}\right)\) is the set of indices of the \(k\) largest absolute entries in \(\mathcal{X}^{t}\) and \(\eta>0\) is an arbitrary step size. We show the important supports in Figure 1. Notice \(u_{i}^{t}\) is non-zero for indices \(i\) from the true support \(S^{*}\) but for which \(|\mathcal{X}_{i}^{t}|\) is too small for \(i\) to be in \(S^{t}\). Whatever the initial content of \(\mathcal{X}^{0}\), the oracle update rule always makes the same increment on \(|\mathcal{X}_{i}^{t}|\), for \(i\in S^{*}\cap\overline{S^{t}}\). This guarantees that, at some subsequent iteration \(t^{\prime}\geq t\), the true support \(S^{*}\) is recovered among the \(k\) largest absolute entries in \(\mathcal{X}^{t^{\prime}}\), i.e., \(S^{*}\subset S^{t^{\prime}}\). Since \(x^{*}\) and \(S^{*}\) are not available in practice, we replace the oracle update \(u^{t}\) by the surrogate \(\eta A^{T}(Ax^{t}-y)\) (see line 8). The choice of this surrogate is a natural one. For instance, one can show that \(u^{t}=\eta A^{T}(Ax^{t}-y)\) in the simple case where \(A\) is orthogonal and the observation is noiseless (see Corollary 3.2 and its proof in Appendix B). We will see in Theorem 3.3 and in its proof in Appendix C that \(u^{t}-\eta A^{T}(Ax^{t}-y)\) is small, under suitable hypotheses on \(x^{*}\) and the RIP constants of \(A\). An important feature of SEA is that it can be used as a post-processing of the solution \(\hat{x}\) of another algorithm. This is simply done by initializing \(\mathcal{X}^{0}=\hat{x}\). In this case \(S^{0}=\text{supp}(\hat{x})\) (line 6) and \(x^{0}\) improves or is equal to \(\hat{x}\) (line 7). Since SEA returns the result obtained for the best time-step \(t_{BEST}\) (line 11), it can only improve \(\hat{x}\). In the experiments, we have investigated the initialization with the result of ELS [1, 30] and the initialization \(\mathcal{X}^{0}=0\). Figure 1: Visual representation of the main sets of indices encountered in the article. Eventually, the solution returned by SEA is selected as the best iterate encountered along the iterations (line 11). Of course, we do not compute \(t_{BEST}\) after the 'repeat' loop. We present it that way in Algorithm 1 for clarity only. In practice, we compute \(t_{BEST}\) on the fly, after line 7. This way, computing \(t_{BEST}\) and memorizing \(x^{t_{BEST}}\) is done at no extra cost. Finally, as is often the case, there are many possible strategies to design the halting criterion of the 'repeat' loop of Algorithm 1. 
It can for instance be based on the value of \(\|Ax^{t}-y\|_{2}\) or on the values of \(T_{max}\) established in the theorems of Section 3. We preferred to focus our experiments on illustrating the potential benefits of SEA and, as a consequence, we have not investigated this aspect and leave this study for the future. We always used a large fixed number of passes in the 'repeat' loop of Algorithm 1. Similarly, it is clear that \(\eta\) (line 8) has no impact on \(x^{t_{BEST}}\) when the algorithm is initialized with \(\mathcal{X}^{0}=0\). Indeed, in this case the whole trajectory \((\mathcal{X}^{t})_{t\in\mathbb{N}}\) is dilated by \(\eta>0\) and the dilation has no effect on the selected supports \(S^{t}\). When \(\mathcal{X}^{0}\neq 0\), the initial support exploration variable is forgotten more or less rapidly depending on the value of \(\eta\). This should have an effect on the output of the algorithm. As for the (related) halting criterion of the 'repeat' loop of Algorithm 1, we have not studied the tuning of the step size \(\eta\) and leave this study for future research. ### Link with the straight-through estimator The update of the support exploration variable \(\mathcal{X}^{t}\) in SEA can be interpreted as a straight-through estimator [21, 4] (STE). An STE is used when optimizing a function \(F\) that depends on a variable \(x\) obtained in a non-differentiable way from another variable \(\mathcal{X}\) as \(x=H\left(\mathcal{X}\right)\). \(\mathcal{X}\) is updated as \(\mathcal{X}\leftarrow\mathcal{X}-\eta\frac{\partial F}{\partial x}\) by using, since \(H\) is non-differentiable, the approximation at the core of STE: \(\frac{\partial F}{\partial\mathcal{X}}=\frac{\partial F}{\partial x}\frac{ \partial x}{\partial\mathcal{X}}\approx\frac{\partial F}{\partial x}\). The STE has been successfully used in many applications where \(H\) is a quantization. The STE had a very significant impact, for instance, on the optimization of neural networks over binary, ternary or more generally quantized weights [11, 22, 35]. The SEA algorithm is the STE applied to the resolution of (2) using the function \(H:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n}\) defined3 by Footnote 3: The choice made below, when the argmin is not reduced to a single element, has no impact on the value of \(F\) and is therefore not significant. \[H\left(\mathcal{X}\right)\in\operatorname*{argmin}_{\begin{subarray}{c}x\in \mathbb{R}^{n}\\ \operatorname{supp}(x)\subset\operatorname{largest}_{k}(\mathcal{X})\end{subarray}} \|Ax-y\|_{2}^{2}.\] Using this definition, the solutions of (2) are indeed of the form \(H(\mathcal{X}^{*})\), for \[\mathcal{X}^{*}\in\operatorname*{argmin}_{\mathcal{X}}F(H(\mathcal{X})).\] To the best of our knowledge, this is the first time the STE is used to solve a sparse linear inverse problem. ## 3 Theoretical analysis We provide, in Section 3.1, the most general support recovery theorem stating that SEA recovers \(S^{*}\) when \(u^{t}\) and \(\eta A^{T}(Ax^{t}-y)\) are close. We then specialize the theorem: \(1/\) to the noiseless case when \(A\) is orthogonal; \(2/\) to the case of a matrix \(A\) satisfying a RIP constraint in Section 3.2. In the latter statement, we obtain separate conditions on \(A\) and \(x^{*}\) that we compare with existing support recovery conditions, for the LASSO, OMP and HTP. 
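To fix ideas before the analysis, the following is a minimal numpy sketch of Algorithm 1 as described in Section 2; the least-squares projection of line 7 is done here with `lstsq` rather than conjugate gradients, and the sketch is an illustrative reading of the algorithm, not the authors' released code.

```python
import numpy as np

def sea(A, y, k, eta=1.0, n_iter=1000, X0=None):
    """Support Exploration Algorithm (sketch).

    X is the non-sparse support exploration variable. Each iteration
    selects the support of its k largest-magnitude entries (line 6),
    computes the least-squares projection x on that support (line 7),
    and applies the STE update X <- X - eta * A^T (A x - y) (line 8).
    The best iterate encountered is returned (line 11).
    """
    m, n = A.shape
    X = np.zeros(n) if X0 is None else np.asarray(X0, dtype=float).copy()
    best_x, best_res = np.zeros(n), np.inf
    for _ in range(n_iter):
        S = np.argsort(np.abs(X))[-k:]                  # largest_k(X)
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        r = A @ x - y
        res = np.linalg.norm(r)
        if res < best_res:                              # track t_BEST on the fly
            best_x, best_res = x.copy(), res
        X -= eta * (A.T @ r)                            # STE / gradient step
    return best_x
```

Setting `X0` to the output of another algorithm reproduces the warm-start use of SEA as a post-processing step, e.g. the \(\mathrm{SEA}_{\mathrm{ELS}}\) variant studied in Section 4.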
### General recovery theorem Recall that in Algorithm 1, replacing line 8 by the oracle update \(\mathcal{X}^{t+1}=\mathcal{X}^{t}-u^{t}\), where \(u^{t}\) is defined in (3), leads to an algorithm that recovers \(S^{*}\). Since \(u^{t}\) cannot be computed, we update \(\mathcal{X}^{t}\) with a regular gradient step, see line 8 of Algorithm 1. For \(t\in\mathbb{N}\), we define the gradient noise \(b^{t}\in\mathbb{R}^{n}\), the error between this computable dynamic and \(u^{t}\), as \[b^{t}=u^{t}-\eta A^{T}(Ax^{t}-y). \tag{4}\] We define the maximal gradient noise norm \[\varepsilon=\sup_{t\in\mathbb{N}}\|b^{t}\|_{\infty}\in\mathbb{R}. \tag{5}\] Finally, we define the Recovery Condition (_RC_) as \[\varepsilon<\frac{1}{2\sum_{i\in S^{*}}\frac{1}{\eta|x_{i}^{*}|}}.\] ( _RC_ ) **Theorem 3.1** (Recovery - General case).: _If (RC) holds, then for all initialisation \(\mathcal{X}^{0}\) and all \(\eta\), there exists \(t_{s}\leq T_{max}\) such that \(S^{*}\subset S^{t_{s}}\), where \(S^{t}\) is defined in Algorithm 1 line 6 and_ \[T_{max}=\frac{\sum_{i\in S^{*}}\frac{\max_{j\in S^{*}}|\mathcal{X}^{0}_{j}|+| \mathcal{X}^{0}_{i}|}{\eta|x_{i}^{*}|}+k+1}{1-2\varepsilon\sum_{i\in S^{*}} \frac{1}{\eta|x_{i}^{*}|}}. \tag{6}\] The proof is in Appendix A. The main interest of Theorem 3.1 is to express clearly that, when \(u^{t}-\eta A^{T}(Ax^{t}-y)\) is sufficiently small, SEA recovers the correct support. However, the condition (_RC_) is difficult to use and interpret since it involves \(A\), \(x^{*}\), and all the sparse iterates \(x^{t}\). This is why we particularize it in Corollary 3.2, Theorem 3.3 and Corollary 3.4. The conclusion of Theorem 3.1 is that the iterative process of SEA recovers the correct support at some iteration \(t\). We have in general no guarantee that this time-step \(t\) is equal to \(t_{BEST}\). We are however guaranteed that SEA returns a sparse solution such that \(\|Ax^{t_{BEST}}-y\|_{2}\leq\|Ax^{*}-y\|_{2}\), which can be considered as a criterion of success. We will see in Corollary 3.2, Theorem 3.3 and Corollary 3.4 that, when \(A\) is sufficiently incoherent and \(\|e\|_{2}\) is small enough, we actually have \(S^{*}\subset\operatorname{supp}(x^{t_{BEST}})\). Concerning the value of \(T_{max}\), a quick analysis of the function \(u\mapsto\frac{1}{1-au}\), for \(a=2\sum_{i\in S^{*}}\frac{1}{\eta|x_{i}^{*}|}>0\) and for \(u<1/a\), shows that \(T_{max}\) increases with \(\varepsilon\) when (_RC_) holds. In other words, the number of iterations required by the algorithm to provide the correct solution increases with the discrepancy between \(u^{t}\) and \(\eta A^{T}(Ax^{t}-y)\). This confirms the intuition behind the construction of SEA. The initializations \(\mathcal{X}^{0}\neq 0\) have an apparent negative impact on the number of iterations required in the worst case. This is because in the worst case \(\mathcal{X}^{0}\) would be poorly chosen and SEA needs iterations to correct this poor choice. However, we can expect a well-chosen initialization of \(\mathcal{X}^{0}\) to reduce the number of iterations required by SEA to recover the correct support. Concerning \(\eta\), notice that, since \(u^{t}\) is proportional to \(\eta>0\), \(\varepsilon\) is proportional to \(\eta>0\) and therefore (_RC_) is independent of \(\eta\). When possible, any \(\eta\) allows the recovery of \(S^{*}\). The only influence of \(\eta\) is on \(T_{max}\). In this regard, since \(\varepsilon\) is proportional to \(\eta>0\), \(\eta\) has no influence on the denominator of (6). 
It only influences the numerator of (6). In this numerator, we see that the larger \(\eta\) is, the faster SEA will override the initialization \(\mathcal{X}^{0}\). This is very much related to the question of the quality of the initialization discussed above. The following corollary particularizes Theorem 3.1 to the noiseless and orthogonal case. In practice, a complicated algorithm like SEA is of course useless in such a case. We give this corollary mostly to illustrate the diversity of links between the properties of the triplet \((A,x^{*},e)\), \(\varepsilon\), and the behavior of SEA. **Corollary 3.2** (Recovery - Orthogonal case).: _If \(A\) is an orthogonal matrix (\(A^{-1}=A^{T}\)) and \(\|e\|_{2}=0\), then_ \[\varepsilon=0.\] _As a consequence, for all \(x^{*}\), for initialisation \(\mathcal{X}^{0}=0_{\mathbb{R}^{n}}\) and all \(\eta\), if SEA performs more than \(k+1\) iterations, we have_ \[S^{*}\subset S^{t_{BEST}}\qquad\text{and}\qquad x^{t_{BEST}}=x^{*}.\] The proof is in Appendix B. ### Recovery theorem in the RIP case In this section, we assume that for any \(i\in\llbracket 1,n\rrbracket\), \(\|A_{i}\|_{2}=1\). As has been standard since it was proposed by Candès and Tao in [9], we define for all \(l\in\llbracket 1,n\rrbracket\) the \(l\)th Restricted Isometry Constant of \(A\) as the smallest non-negative number \(\delta_{l}\) such that for any \(x\in\mathbb{R}^{n}\), \(\|x\|_{0}\leq l\), \[(1-\delta_{l})\|x\|_{2}^{2}\leq\|Ax\|_{2}^{2}\leq(1+\delta_{l})\|x\|_{2}^{2}. \tag{7}\] If \(\delta_{l}<1\), \(A\) is said to satisfy the Restricted Isometry Property of order \(l\) or the \(l\)-RIP. In this section, we assume that \(A\) satisfies the \((2k+1)\)-RIP. In the scenarios of interest, \(\delta_{2k+1}\) is small. We define, \[\alpha_{k}^{{}^{RIP}}=\delta_{2k+1}\left(\frac{\delta_{2k}}{1-\delta_{k}}+1 \right)\in\mathbb{R}_{+}^{*} \tag{8}\] and \[\gamma_{k}^{{}^{RIP}}=\delta_{2k+1}\frac{\sqrt{1+\delta_{k}}}{1-\delta_{k}}+1 \in\mathbb{R}_{+}^{*}. \tag{9}\] As soon as \(\delta_{k}\) is far from \(1\), which will be the case in the scenarios of interest, \(\alpha_{k}^{{}^{RIP}}\) has the order of magnitude \(\delta_{2k+1}\) and \(\gamma_{k}^{{}^{RIP}}\) has the order of magnitude of \(1+\delta_{2k+1}\). As is common for support recovery statements, the next theorem involves a condition on \(x^{*}\). It is indeed impossible to recover an element \(i\) of \(S^{*}\) if \(x_{i}^{*}\) is negligible compared to the other quantities of (1). We call this condition _the Recovery Condition for the RIP case (RC\({}_{{}_{RIP}}\))_. It is defined by \[\gamma_{k}^{{}^{RIP}}\|e\|_{2}<\frac{1}{2\sum_{i\in S^{*}}\frac{1}{|x_{i}^{*} |}}-\alpha_{k}^{{}^{RIP}}\|x^{*}\|_{2}.\] ( \[RC_{{}_{RIP}}\] ) If (RC\({}_{{}_{RIP}}\)) holds, \(x^{*}\) is said to satisfy the (RC\({}_{{}_{RIP}}\)) condition. **Theorem 3.3** (Recovery - RIP case).: _Assume \(A\) satisfies the \((2k+1)\)-RIP and for all \(i\in\llbracket 1,n\rrbracket\), \(\|A_{i}\|_{2}=1\). Then_ \[\varepsilon\leq\eta(\alpha_{k}^{{}^{RIP}}\|x^{*}\|_{2}+\gamma_{k}^{{}^{RIP}}\| e\|_{2}).\] _If moreover \(x^{*}\) satisfies (RC\({}_{{}_{RIP}}\)), then for all initialisation \(\mathcal{X}^{0}\) and all \(\eta\), there exists \(t_{s}\leq T_{{}_{RIP}}\) such that \(S^{*}\subset S^{t_{s}}\), where_ \[T_{{}_{RIP}}=\frac{\sum_{i\in S^{*}}\frac{\max_{j\notin S^{*}}|\mathcal{X}^{0} _{j}|+|\mathcal{X}^{0}_{i}|}{\eta|x_{i}^{*}|}+k+1}{1-2(\alpha_{k}^{{}^{RIP}}\| x^{*}\|_{2}+\gamma_{k}^{{}^{RIP}}\|e\|_{2})\sum_{i\in S^{*}}\frac{1}{|x_{i}^{*}|}}. 
\tag{10}\] _If moreover, \(x^{*}\) is such that_ \[\min_{i\in S^{*}}|x_{i}^{*}|>\frac{2}{\sqrt{1-\delta_{2k}}}\|e\|_{2} \tag{11}\] _and SEA performs more than \(T_{{}_{RIP}}\) iterations, then_ \[S^{*}\subset S^{t_{{}^{BEST}}}\quad\text{ and }\quad\|x^{t_{{}^{BEST}}}-x^{*}\|_{2} \leq\frac{2}{\sqrt{1-\delta_{k}}}\|e\|_{2}.\] The proof is in Appendix C. The hypotheses of the theorem are on the RIP of \(A\) and there are two hypotheses on \(x^{*}\): (RC\({}_{{}_{RIP}}\)) and (11). The condition (RC\({}_{{}_{RIP}}\)) is difficult to interpret. Below, we give an example to illustrate it. The first example is when for all \(i\in S^{*}\), \(x_{i}^{*}=c\) for some constant \(c\in\mathbb{R}\). Condition (_RC\({}_{\mbox{\tiny\it RIP}}\)_) becomes in this case \[\gamma_{k}^{\mbox{\tiny\it RIP}}\|e\|_{2}\leq|c|\left(\frac{1-2\alpha_{k}^{ \mbox{\tiny\it RIP}}|S^{*}|^{\frac{3}{2}}}{2|S^{*}|}\right).\] This can only hold under the condition that \(\alpha_{k}^{\mbox{\tiny\it RIP}}\leq\frac{1}{2|S^{*}|^{\frac{3}{2}}}\), where we recall that \(\alpha_{k}^{\mbox{\tiny\it RIP}}\) has the order of magnitude of \(\delta_{2k+1}\) and \(|S^{*}|\leq k\). If this condition on the RIP of \(A\) holds, any value of \(c\) satisfying \[|c|\geq\left(\frac{2\gamma_{k}^{\mbox{\tiny\it RIP}}|S^{*}|}{1-\alpha_{k}^{ \mbox{\tiny\it RIP}}|S^{*}|^{\frac{3}{2}}}\right)\|e\|_{2}\] leads to an \(x^{*}\) that satisfies (_RC\({}_{\mbox{\tiny\it RIP}}\)_). Notice that, in this particular example, a rapid analysis shows that (_RC\({}_{\mbox{\tiny\it RIP}}\)_) is a stronger requirement than (11). To illustrate (_RC\({}_{\mbox{\tiny\it RIP}}\)_), we provide below a simplified condition which is shown in Corollary 3.4 to be stronger than (_RC\({}_{\mbox{\tiny\it RIP}}\)_) in the noiseless scenario. We say \(x^{*}\) satisfies the Simplified Recovery Condition in the RIP case if there exists \(\Lambda\in(0,1)\) such that \[2k\alpha_{k}^{\mbox{\tiny\it RIP}}\frac{\|x^{*}\|_{2}}{\min_{i\in S^{*}}|x_{i} ^{*}|}\leq\Lambda.\] ( _SRC\({}_{\mbox{\tiny\it RIP}}\)_) **Corollary 3.4** (Noiseless recovery - simplified RIP case).: _Assume \(\|e\|_{2}=0\), \(A\) satisfies the \((2k+1)\)-RIP and for all \(i\in\llbracket 1,n\rrbracket,\ \|A_{i}\|_{2}=1\)._ _If moreover \(x^{*}\) satisfies (SRC\({}_{\mbox{\tiny\it RIP}}\)), then \(x^{*}\) satisfies (RC\({}_{\mbox{\tiny\it RIP}}\)). As a consequence, for all \(x^{*}\), for initialisation \(\mathcal{X}^{0}=0_{\mathbb{R}^{n}}\) and all \(\eta\), if SEA performs more than_ \[T^{\prime}_{\mbox{\tiny\it RIP}}=\frac{k+1}{1-\Lambda} \tag{12}\] _iterations, we have_ \[S^{*}\subset S^{\mbox{\tiny\it t}_{\mbox{\tiny\it BEST}}}\qquad\mbox{and} \qquad x^{\mbox{\tiny\it t}_{\mbox{\tiny\it BEST}}}=x^{*}.\] The proof is in Appendix C.4. Notice that if \(\alpha_{k}^{\mbox{\tiny\it RIP}}\) is too large, there does not exist any \(x^{*}\) satisfying (_SRC\({}_{\mbox{\tiny\it RIP}}\)_). This is for instance the case if \(\alpha_{k}^{\mbox{\tiny\it RIP}}\geq 0.5\). On the contrary, a sufficient condition for the existence of vectors \(x^{*}\) satisfying (_SRC\({}_{\mbox{\tiny\it RIP}}\)_) is that the constant \(\alpha_{k}^{\mbox{\tiny\it RIP}}\) satisfies \(2k^{\frac{3}{2}}\alpha_{k}^{\mbox{\tiny\it RIP}}\leq\Lambda<1\). 
In this case, when all the entries of \(x^{*}\) are equal, we have \(\|x^{*}\|_{2}=\sqrt{|S^{*}|}\min_{i\in S^{*}}|x_{i}^{*}|\) and \[2k\alpha_{k}^{\mbox{\tiny\it RIP}}\frac{\|x^{*}\|_{2}}{\min_{i\in S^{*}}|x_{i}^{*}|} = 2k\alpha_{k}^{\mbox{\tiny\it RIP}}\sqrt{|S^{*}|}\leq 2k^{\frac{3}{2}}\alpha_{k}^{\mbox{\tiny\it RIP}}\leq\Lambda<1.\] When this holds, the set of \(x^{*}\) satisfying (_SRC\({}_{\mbox{\tiny\it RIP}}\)_) is a convex cone whose interior is not empty. The set grows as \(\alpha_{k}^{\mbox{\tiny\it RIP}}\) decreases. Compared to the support recovery guarantees in the noisy case for the LASSO [32, 26, 34], the OMP [7], the HTP [19, 33] and the ARHT [1], the recovery conditions provided in Theorem 3.3 and Corollary 3.4 for SEA seem stronger. All conditions involve a condition on the incoherence of \(A\) and a condition similar to (11). In the case of the LASSO algorithm, the latter is not very explicit. However, none of these support recovery conditions involve a condition like (\(RC_{{}_{RIP}}\)) or (\(SRC_{{}_{RIP}}\)). A clear drawback of these conditions is that the support of an \(x^{*}\) such that \(\max_{i\in S^{*}}|x_{i}^{*}|\gg\min_{i\in S^{*}}|x_{i}^{*}|\) is not guaranteed to be recovered. This is because, if \(i\not\in S^{t}\) and \(|x_{i}^{*}|\gg\min_{i\in S^{*}}|x_{i}^{*}|\), \(b^{t}\) can be large. However, it is possible to get around this problem since SEA inherits the support recovery properties of any well-chosen initialization. Also, we have not observed this phenomenon in the experiments of Section 4. Similarly, SEA performs well even when \(A\) is coherent, see Section 4.2. This is not explained by Theorem 3.3 and Corollary 3.4, which consider the classical RIP assumption. Improving the theoretical analysis in these directions is left for the future. The current statements show that SEA is a sound algorithm. To the best of our knowledge, this is the first time such guarantees are given for an algorithm based on the STE. ## 4 Experimental analysis We compare SEA to state-of-the-art algorithms on three tasks: extensive signal recovery through phase transition diagrams in Section 4.1, spike deconvolution problems for signal processing in Section 4.2, and linear regression and logistic regression tasks in supervised learning settings in Section 4.3. The tested algorithms are IHT [6], OMP [25, 28], OMPR [23] and ELS [1] (a.k.a. Fully Corrective Forward Greedy Selection with Replacement [30]). OMPR and ELS are initialized with the solution of OMP. Two versions of SEA are studied: the cold-start version \(\mathrm{SEA}_{0}\), where SEA is initialized with the null vector, and the warm-start version \(\mathrm{SEA}_{\mathrm{ELS}}\), where SEA is initialized with the solution of ELS. For all algorithms, each least-squares projection for a fixed support, as in Line 7 of Algorithm 1, is solved using the conjugate gradient descent of scikit-learn [29]. The maximal number of iterations is \(256k\). Matrix \(A\) is normalized before solving the problem. For each experiment, appropriate metrics, defined in the relevant subsection, are used for performance evaluation. The code is implemented in Python 3 and is available in the git repository of the project 4. Footnote 4: For the double-blind review, the anonymized code is in a zipped file in the supplementary materials. This will be replaced by the repository link in the final version of the paper. 
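As a concrete rendering of the synthetic setup detailed in Section 4.1 below (Gaussian \(A\) with \(\mathcal{N}(0,1)\) entries and normalized columns, \(k\)-sparse \(x^{*}\) with uniformly drawn support and standard-normal non-zero entries), one problem instance might be generated as follows; this is a sketch and the function name is ours.

```python
import numpy as np

def gaussian_instance(m, n, k, seed=None):
    """One noiseless phase-transition instance: y = A x*, ||x*||_0 = k."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)                   # column normalization
    S_star = rng.choice(n, size=k, replace=False)    # support, no replacement
    x_star = np.zeros(n)
    x_star[S_star] = rng.standard_normal(k)
    return A, A @ x_star, x_star, set(S_star)
```

A run is then counted as a success when the support of the returned solution contains \(S^{*}\).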
### Phase transition diagram experiment The phase transition diagram experiment is an extensive experiment commonly used to compare algorithm performance on synthetic data. Introduced by Donoho and Tanner in [14], phase transition diagrams show the recovery limits of an algorithm depending on the undersampling/indeterminacy \(\zeta=\frac{m}{n}\) of \(A\), and the sparsity/density \(\rho=\frac{k}{m}\) of \(x^{*}\). We consider the noiseless setup (i.e., \(e=0_{\mathbb{R}^{m}}\) in (1)). We fix \(n=64\); \(m\) takes all values in \(\llbracket 9,n\rrbracket\) and \(k\) all values in \(\llbracket 9,m\rrbracket\). For each triplet \((m,n,k)\) and each algorithm, we run \(r=1000\) experiments (described below) to assess the success rate \(\frac{s_{\zeta,\rho}}{r}\) of the algorithm, where \(s_{\zeta,\rho}\) is the number of problems successfully solved. A problem is considered successfully solved if the support of the output of the algorithm contains \(S^{*}\). For a triplet \((m,n,k)\) and an algorithm, the matrix \(A\in\mathbb{R}^{m\times n}\) is a Gaussian matrix. Its entries are drawn independently from the normal distribution \(\mathcal{N}(0,1)\). The restricted isometry constants are poor when \(\zeta=\frac{m}{n}\) is small and improve when \(m\) grows [2]. The sparse vector \(x^{*}\in\mathbb{R}^{n}\) is random. Indices of the support are randomly picked, uniformly without replacement. The non-zero entries of \(x^{*}\) are independently drawn from the standard normal distribution. Figure 2 shows results from this experiment. Each colored curve indicates the threshold below which the algorithm has a success rate larger than \(95\%\). We see that IHT achieves poor recovery performance, with successes only at small values of sparsity \(k\). SEA\({}_{0}\) is on par with OMP. OMPR and ELS improve on OMP's performance, in particular when \(\frac{m}{n}\geq 0.5\), i.e. when matrices \(A\) are less coherent. SEA\({}_{\text{ELS}}\) further improves on ELS's performance and outperforms the other algorithms for all \(\frac{m}{n}\). The largest improvement is for \(\frac{m}{n}=0.65\), which corresponds to the most coherent matrices \(A\). Thus, SEA refines a good support candidate into a better one by exploring new supports and achieves recovery for higher values of sparsity \(k\) than its competitors. The actual superiority of SEA\({}_{\text{ELS}}\) for coherent matrices (\(\zeta<0.65\)) is particularly remarkable and illustrates its ability to successfully explore supports in difficult problems where competitors fail. Figure 2: Empirical support recovery phase transition curves. Problems below each curve are solved by the related algorithm with a success rate larger than \(95\%\). We study the noisy setup (i.e., \(e\neq 0_{\mathbb{R}^{m}}\) in (1)) in Appendix D. ### Deconvolution experiment Deconvolution problems arise in many signal processing areas, among which are microscopy and remote sensing. Of particular interest here is the deconvolution of sparse signals, also known as point source deconvolution [5] or spike deconvolution [17, 16], assuming the linear operator is known (contrary to blind approaches [24]). The objective here is thus to recover spike positions and amplitudes. We consider the noiseless setup (i.e., \(e=0_{\mathbb{R}^{m}}\) in (1)). We choose \(n=64\) and a convolution matrix \(A\) corresponding to a Gaussian filter with standard deviation \(3\). The coherence of the matrix \(A\) is \(\max_{i\neq j}|A_{i}^{T}A_{j}|=0.97\). 
The problem is therefore very difficult and the support recovery theorems do not apply. For each sparsity level \(k\in\llbracket 1,16\rrbracket\), every algorithm is tested on \(r=1000\) different noiseless problems corresponding to different \(k\)-sparse \(x^{*}\). The maximal number of iterations is \(1000\) for all algorithms. The \(k\)-sparse vector \(x^{*}\) is random. Its support is drawn uniformly without replacement and its non-zero entries are drawn uniformly in \([-2,-1]\cup[1,2]\) as in [18]. Figure 3 illustrates the results for a \(6\)-sparse vector \(x^{*}\). Isolated spikes are located by all algorithms. However, the closer the spikes, the harder they are to locate. Both \(\text{SEA}_{0}\) and \(\text{SEA}_{\text{ELS}}\) are able to recover the original signal, while the other algorithms fail. In Appendix E.1, we give, for the experiment of Figure 3, the evolution of \(\|Ax^{t}-y\|_{2}\) as \(t\) varies, for \(\text{SEA}_{0}\) and \(\text{SEA}_{\text{ELS}}\). In Figure 4, the performance of each algorithm is reported, for all \(k\in\llbracket 1,16\rrbracket\), by the average over \(r\) runs of the support distance metric [18] defined by \[\text{supp}_{\text{dist}}(x)=\frac{k-|S^{*}\cap\text{ supp}(x)|}{k}. \tag{13}\] For sparsity \(k<14\), \(\text{SEA}_{0}\) and \(\text{SEA}_{\text{ELS}}\) outperform the other algorithms. By exploring various supports, SEA finds better supports than its competitors. As \(k\) increases, due to the increasing difficulty of the problem, no algorithm is able to recover \(S^{*}\). We provide additional experiments in Appendix E, leading to the same conclusions. ### Supervised learning experiment In a supervised learning setting, matrix \(A\in\mathbb{R}^{m\times n}\) (often denoted by \(X\)) contains \(m\) \(n\)-dimensional feature vectors associated with the training examples and arranged in rows, while the related labels are in vector \(y\in\mathbb{R}^{m}\). In the training phase, a sparse vector \(x\) (often denoted \(\beta\) or \(w\)) is optimized to fit \(y\approx Ax\) using an appropriate loss function: in this context, support recovery is called model selection. Based on the experimental setup of [1], we compare all the algorithms on linear regression and logistic regression tasks in terms of loss over the training set for different levels of sparsity. Figure 3: Representation of an instance of \(x^{*}\) and \(y\) with the solutions provided by the algorithms when \(k=6\). Results are reported in two axes for clarity. Figure 4: Mean of support distance \(\mathrm{supp}_{\mathrm{dist}}\) between \(S^{*}\) and the support of the solutions provided by several algorithms as a function of the sparsity level \(k\). We use the preprocessed public datasets5 provided by [1], following the same preprocessing pipeline: we augment \(A\) with an extra column equal to \(1_{\mathbb{R}^{m}}\) to allow a bias and normalize the columns of \(A\). Footnote 5: https://drive.google.com/file/d/1RDu2d46qGLI77AzliBQleSsB5WwF83TF/view
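The support distance metric (13) used in Figure 4 is straightforward to compute; a sketch consistent with the definition above (the function name is ours):

```python
import numpy as np

def supp_dist(x, S_star):
    """Support distance (13): fraction of S* missed by supp(x)."""
    k = len(S_star)
    return (k - len(set(S_star) & set(np.flatnonzero(x)))) / k
```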
In both cases, \(\text{SEA}_{\text{ELS}}\) is able to increase further ELS performances and outperforms the other algorithms. As confirmed in Appendix F by the experiments on other regression and binary classification datasets, \(\text{SEA}_{0}\) performs well in small dimensions, while a good initialization is mandatory in higher dimensions. These experiments give some evidence that SEA can perform very well when some error/noise is present in the observation and when no perfect sparse vector exists. ## 5 Conclusion and perspectives In this article, we proposed SEA: a new principled algorithm for sparse support recovery, based on STE. We established guarantees when the matrix \(A\) satisfies the RIP. Experiments show that SEA supplements state-of-the-art algorithms and outperforms them in particular when \(A\) is coherent. The theoretical guarantees involve conditions on \(x^{*}\) that are not present for Figure 5: Performance on the regression dataset \(\text{cal\_housing}\) (\(m=20639\) examples, \(n=8\) features). similar statements for other algorithms and that might restrict its applicability. Also, the algorithm seems to perform well when \(A\) is coherent and this is not explained by the current theoretical analysis which only applies to matrices satisfying the RIP. Improving the theoretical analysis in these directions are promising perspective. There are many perspectives of applications of SEA and the STE to sparse inverse problems such as sparse matrix factorization, tensor problems, as well as real-world applications for instance in biology and astronomy. Finally, it would be interesting to investigate the adaptation of the methods developed in this article to other applications of STE, such as BinaryConnect. ## 6 Acknowledgement This work has benefited from the AI Interdisciplinary Institute ANITI, which is funded by the French "Investing for the Future - PIA3" program under the Grant agreement ANR-19-P3IA-0004. F. Malgouyres gratefully acknowledges the support of IRT Saint Exupery and the DEEL project6 and thanks Franck Mamalet for all the discussions on the STE. Footnote 6: [https://www.deel.ai/](https://www.deel.ai/) M. Mohamed was suppported by a PhD grant from "Emploi Jeunes Doctorants (EJD)" plan which is funded by the French institution "Region Sud - Provence-Alpes-Cote d'Azur" and Euranova France. M. Mohamed gratefully acknowledges their financial support. Figure 6: Performance on the regression dataset year (\(m=463715\) examples, \(n=90\) features).
The straight-through estimator (STE) is commonly used to optimize quantized neural networks, yet the contexts in which it performs effectively remain unclear despite its empirical successes. To take a step forward in this understanding, we apply STE to a well-understood problem: sparse support recovery. We introduce the Support Exploration Algorithm (SEA), a novel algorithm promoting sparsity, and analyze its performance in support recovery (a.k.a. model selection) problems. SEA explores more supports than the state of the art, leading to superior performance in experiments, especially when the columns of $A$ are strongly coherent. The theoretical analysis considers recovery guarantees when the linear measurement matrix $A$ satisfies the Restricted Isometry Property (RIP). The sufficient conditions of recovery are comparable to, but more stringent than, those of the state of the art in sparse support recovery. Their significance lies mainly in their applicability to an instance of the STE.
2307.16772
Weighted topological pressure revisited
Feng--Huang (2016) introduced weighted topological entropy and pressure for factor maps between dynamical systems and established their variational principle. Tsukamoto (2022) redefined those invariants quite differently for the simplest case and showed via the variational principle that the two definitions coincide. We generalize Tsukamoto's approach, redefine the weighted topological entropy and pressure for higher dimensions, and prove the variational principle. Our result allows for an elementary calculation of the Hausdorff dimension of affine-invariant sets such as self-affine sponges and certain sofic sets that reside in Euclidean space of arbitrary dimension.
Nima Alibabaei
2023-07-31T15:38:39
http://arxiv.org/abs/2307.16772v1
# Weighted topological pressure revisited ###### Abstract Feng-Huang (2016) introduced weighted topological entropy and pressure for factor maps between dynamical systems and established their variational principle. Tsukamoto (2022) redefined those invariants quite differently for the simplest case and showed via the variational principle that the two definitions coincide. We generalize Tsukamoto's approach, redefine the weighted topological entropy and pressure for higher dimensions, and prove the variational principle. Our result allows for an elementary calculation of the Hausdorff dimension of affine-invariant sets such as self-affine sponges and certain sofic sets that reside in Euclidean space of arbitrary dimension. Key words and phrases: Dynamical systems, weighted topological entropy, weighted topological pressure, variational principle, affine-invariant sets, self-affine sponges, sofic sets, Hausdorff dimension For a dynamical system \((X,T)\), denote its **topological entropy** by \(h_{\rm top}(T)\). Let \(P(f)\) be the **topological pressure** for a continuous function \(f:X\to\mathbb{R}\) (see section 2 for the definition of these quantities). Let \(\mathscr{M}^{T}(X)\) be the set of \(T\)-invariant probability measures on \(X\) and \(h_{\mu}(T)\) the **measure-theoretic entropy** for \(\mu\in\mathscr{M}^{T}(X)\) (see subsection 3.2). The variational principle then states that [10, 11, 12, 13] \[P(f)=\sup_{\mu\in\mathscr{M}^{T}(X)}\left(h_{\mu}(T)+\int_{X}fd\mu\right).\] ### Background We first look at _self-affine sponges_ to understand the background of weighted topological entropy introduced by Feng-Huang. Let \(m_{1},m_{2},\ldots,m_{r}\) be natural numbers with \(m_{1}\leq m_{2}\leq\cdots\leq m_{r}\). Consider an endomorphism \(T\) on \(\mathbb{T}^{r}=\mathbb{R}^{r}/\mathbb{Z}^{r}\) represented by the diagonal matrix \(A={\rm diag}(m_{1},m_{2},\ldots,m_{r})\). For \(D\subset\prod_{i=1}^{r}\{0,1,\ldots,m_{i}-1\}\), define \[K(T,D)=\left\{\sum_{n=0}^{\infty}A^{-n}e_{n}\in\mathbb{T}^{r}\Bigg{|}e_{n}\in D \right\}.\] This set is compact and \(T\)-invariant, i.e., \(TK(T,D)=K(T,D)\). These sets for \(r=2\) are known as _Bedford-McMullen carpets_ or _self-affine carpets_. The following figure exhibits a famous example, the case of \(D=\{(0,0),(1,1),(0,2)\}\subset\{0,1\}\times\{0,1,2\}\). The analysis of these sets is complicated compared to "self-similar" sets. Bedford [1] and McMullen [12] independently studied these sets and showed that, in general, their Hausdorff dimension is strictly smaller than their Minkowski dimension (a.k.a. box-counting dimension). The set in this figure has Hausdorff dimension \(\log_{2}{(1+2^{\log_{3}2})}=1.349\cdots\) and Minkowski dimension \(1+\log_{3}\frac{3}{2}=1.369\cdots\). The sets \(K(T,D)\) for \(r\geq 3\) are called _self-affine sponges_. Kenyon-Peres [11] calculated their Hausdorff dimension for the general case (see Theorem 1.5 in this section). In addition, they showed the following variational principle for the Hausdorff dimension of \(K(T,D)\): \[\dim_{H}K(T,D)=\sup_{\mu\in\mathscr{M}^{T}(\mathbb{T}^{r})}\Bigg{\{}\frac{1}{ \log m_{r}}h_{\mu}(T)+\sum_{i=2}^{r}\left(\frac{1}{\log m_{r-i+1}}-\frac{1}{ \log m_{r-i+2}}\right)h_{\mu_{i}}(T_{i})\Bigg{\}}. \tag{1.1}\] Figure 1. 
First four generations of the Bedford-McMullen carpet. Here, the endomorphism \(T_{i}\) on \(\mathbb{T}^{r-i+1}\) is defined from \(A_{i}=\operatorname{diag}(m_{1},m_{2},\ldots,m_{r-i+1})\), and \(\mu_{i}\) is defined as the push-forward measure of \(\mu\) on \(\mathbb{T}^{r-i+1}\) by the projection onto the first \(r-i+1\) coordinates. With a proper setting, Feng-Huang's weighted topological entropy of \(K(T,D)\) equals \(\dim_{H}K(T,D)\). ### The original definition of the weighted topological pressure Motivated by the geometry of self-affine sponges described in the previous subsection, Feng-Huang introduced a generalized notion of pressure. Consider dynamical systems \((X_{i},T_{i})\)\((i=1,\,2,\,\ldots,\,r)\) and factor maps \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,\,2,\,\ldots,\,r-1)\): \[(X_{1},T_{1})\xrightarrow{\pi_{1}}(X_{2},T_{2})\xrightarrow{\pi_{2}}\cdots \xrightarrow{\pi_{r-1}}(X_{r},T_{r})\.\] We refer to this as a **sequence of dynamical systems**. Let \(\boldsymbol{w}=(w_{1},w_{2},\ldots,w_{r})\) be a weight with \(w_{1}>0\) and \(w_{i}\geq 0\) for \(i\geq 2\). Feng-Huang [10] ingeniously defined the \(\boldsymbol{w}\)-weighted topological pressure \(P^{\boldsymbol{w}}_{\text{FH}}(f)\) for a continuous function \(f:X_{1}\to\mathbb{R}\) and established the variational principle [10, Theorem 1.4] \[P^{\boldsymbol{w}}_{\text{FH}}(f)=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1})} \left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}\ast\mu}(T_{i})+w_{1}\int_{X_{1}}fd\mu \right). \tag{1.2}\] Here \(\pi^{(i)}\) is defined by \[\pi^{(0)}=\operatorname{id}_{X_{1}}:X_{1}\to X_{1},\] \[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:X_{1}\to X _{i+1},\] and \(\pi^{(i-1)}{}_{\ast}\mu\) is the push-forward measure of \(\mu\) by \(\pi^{(i-1)}\) on \(X_{i}\). The \(\boldsymbol{w}\)-weighted topological entropy \(h^{\boldsymbol{w}}_{\text{top}}(T_{1})\) is the value of \(P^{\boldsymbol{w}}_{\text{FH}}(f)\) when \(f\equiv 0\). In this case, (1.2) becomes \[h^{\boldsymbol{w}}_{\text{top}}(T_{1})=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1}) }\left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}\ast\mu}(T_{i})\right). \tag{1.3}\] We explain here Feng-Huang's method of defining \(h^{\boldsymbol{w}}_{\text{top}}(T_{1})\). For the definition of \(P^{\boldsymbol{w}}_{\text{FH}}(f)\), see their original paper [10]. Let \(n\) be a natural number and \(\varepsilon\) a positive number. Let \(d^{(i)}\) be a metric on \(X_{i}\). For \(x\in X_{1}\), define the \(\boldsymbol{n}\)**-th \(\boldsymbol{w}\)-weighted Bowen ball of radius \(\boldsymbol{\varepsilon}\) centered at \(\boldsymbol{x}\)** by \[B^{\boldsymbol{w}}_{n}(x,\varepsilon)=\left\{y\in X_{1}\left|\begin{array}{ l}d^{(i)}\big{(}T_{i}^{j}(\pi^{(i-1)}(x)),T_{i}^{j}(\pi^{(i-1)}(y))\big{)}< \varepsilon\text{ for every}\\ 0\leq j\leq\lceil(w_{1}+\cdots+w_{i})n\rceil\text{ and }1\leq i\leq r. \end{array}\right.\right\}.\] Consider \(\Gamma=\{B^{\boldsymbol{w}}_{n_{j}}(x_{j},\varepsilon)\}_{j}\), an at-most countable cover of \(X_{1}\) by Bowen balls. Let \(n(\Gamma)=\min_{j}n_{j}\). For \(s\geq 0\) and \(N\in\mathbb{N}\), let \[\Lambda^{\boldsymbol{w},s}_{N,\varepsilon}=\inf\left\{\sum_{j}e^{-sn_{j}} \Bigg{|}\ \Gamma=\{B^{\boldsymbol{w}}_{n_{j}}(x_{j},\varepsilon)\}_{j}\text{ covers }X_{1}\text{ and }n(\Gamma)\geq N\right\}.\] This quantity is non-decreasing in \(N\). 
The following limit hence exists: \[\Lambda_{\varepsilon}^{\mathbf{w},s}=\lim_{N\to\infty}\Lambda_{N,\varepsilon}^{\mathbf{w},s}.\] There is a value of \(s\) where \(\Lambda_{\varepsilon}^{\mathbf{w},s}\) jumps from \(\infty\) to \(0\), which we will denote by \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon)\): \[\Lambda_{\varepsilon}^{\mathbf{w},s}=\left\{\begin{array}{ll}\infty&(s<h_{ \mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon))\\ 0&(s>h_{\mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon))\end{array}\right..\] The value \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon)\) is non-decreasing as \(\varepsilon\to 0\). Therefore, we can define the \(\mathbf{w}\)-weighted topological entropy \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1})\) by \[h_{\mathrm{top}}^{\mathbf{w}}(T_{1})=\lim_{\varepsilon\to 0}h_{\mathrm{top}}^{\mathbf{w} }(T_{1},\varepsilon).\] An important point about this definition is that in some dynamical systems, such as self-affine sponges, the quantity \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1})\) is directly related to the Hausdorff dimension of \(X_{1}\). **Example 1.1**.: Consider the self-affine sponges introduced in subsection 1.2. Define \(p_{i}:\mathbb{T}^{r-i+1}\to\mathbb{T}^{r-i}\) by \[p_{i}(x_{1},x_{2},\ldots,x_{r-i},x_{r-i+1})=(x_{1},x_{2},\ldots,x_{r-i}).\] Let \(X_{1}=K(T,D)\), \(X_{i}=p_{i-1}\circ p_{i}\circ\cdots\circ p_{1}(X_{1})\), and \(T_{i}:X_{i}\to X_{i}\) be the endomorphism defined from \(A_{i}=\mathrm{diag}(m_{1},m_{2},\ldots,m_{r-i+1})\). Define the factor maps \(\pi_{i}:X_{i}\to X_{i+1}\) as the restrictions of \(p_{i}\). Let \[\mathbf{w}=\left(\frac{\log m_{1}}{\log m_{r}},\quad\frac{\log m_{1}}{\log m_{r-1 }}-\frac{\log m_{1}}{\log m_{r}},\ldots,\quad\frac{\log m_{1}}{\log m_{2}}- \frac{\log m_{1}}{\log m_{3}},\quad 1-\frac{\log m_{1}}{\log m_{2}}\right). \tag{1.4}\] Then \(n\)-th \(\mathbf{w}\)-weighted Bowen ball is approximately a square with a side length of \(\varepsilon m_{1}^{-n}\). Therefore, \[\dim_{H}K(T,D)=\frac{h_{\mathrm{top}}^{\mathbf{w}}(T_{1})}{\log m_{1}}. \tag{1.5}\] ### Tsukamoto's approach and its extension Following the work of Feng-Huang [17] described in the previous subsection, Tsukamoto [14] published an intriguing approach to these invariants. There, he gave a new definition of the weighted topological pressure for two dynamical systems and a factor map: \[\begin{CD}(X_{1},T_{1})@>{\pi}>{}>(X_{2},T_{2}).\end{CD}\] He then proved the variational principle using his definition, showing the surprising coincidence of the two definitions. His expression of weighted topological entropy allowed for relatively easy calculations for sets like self-affine carpets. We will extend Tsukamoto's idea, redefine the weighted topological pressure for an arbitrary length of a sequence of dynamical systems, and establish the variational principle. Here we will explain our definition in the case \(f\equiv 0\). See section 2 for the general setting. We will not introduce Tsukamoto's definition since it is obtained by letting \(r=2\) in the following argument. Let \(\boldsymbol{a}=(a_{1},\,a_{2},\,\cdots,a_{r-1})\) with \(0\leq a_{i}\leq 1\) for each \(i\). Let \(N\) be a natural number and \(\varepsilon\) a positive number. 
We define a new metric \(d_{N}^{(i)}\) on \(X_{i}\) by \[d_{N}^{(i)}(x_{1},\,x_{2})=\max_{0\leq n<N}d^{(i)}(T_{i}^{\,n}x_{1},T_{i}^{\,n }x_{2}).\] For \(\Omega\subset X_{1}\), we define \[\#_{1}^{\boldsymbol{a}}(\Omega,N,\varepsilon)=\min\left\{n\in\mathbb{N} \left|\begin{array}{l}\mbox{There exists an open cover }\{U_{j}\}_{j=1}^{n}\mbox{ of }\Omega\\ \mbox{with diam}(U_{j},\,d_{N}^{(1)})<\varepsilon\mbox{ for all }1\leq j\,\leq n \end{array}\right.\right\}.\] Let \(\Omega\subset X_{i+1}\). If \(\#_{i}^{\boldsymbol{a}}\) is already defined, let \[\left.\begin{array}{l}\#_{i+1}^{\boldsymbol{a}}(\Omega,N, \varepsilon)\\ =\min\left\{\sum_{j=1}^{n}\left(\#_{i}^{\boldsymbol{a}}(\pi_{i}^{-1}(U_{j}),N, \varepsilon)\right)^{a_{i}}\left|\begin{array}{l}n\in\mathbb{N},\,\{U_{j}\}_ {j=1}^{n}\mbox{ is an open cover of }\Omega\\ \mbox{with diam}(U_{j},\,d_{N}^{(i+1)})<\varepsilon\mbox{ for all }1\leq j\,\leq n \end{array}\right.\right\}.\end{array}\right\}.\] We define the **topological entropy of \(\boldsymbol{a}\)-exponent**\(h^{\boldsymbol{a}}(\boldsymbol{T})\), where \(\boldsymbol{T}=(T_{i})_{i}\), by \[h^{\boldsymbol{a}}(\boldsymbol{T})=\lim_{\varepsilon\to 0}\left(\lim_{N\to\infty} \frac{\log\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\varepsilon)}{N}\right).\] This limit exists since \(\log\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\,\varepsilon)\) is sub-additive in \(N\) and non-decreasing as \(\varepsilon\) tends to \(0\). From \(\boldsymbol{a}\), define \(\boldsymbol{w}_{\boldsymbol{a}}=(w_{1},\,\cdots,\,w_{r})\) by \[\left\{\begin{array}{l}w_{1}=a_{1}a_{2}a_{3}\cdots a_{r-1}\\ w_{2}=(1-a_{1})a_{2}a_{3}\cdots a_{r-1}\\ w_{3}=(1-a_{2})a_{3}\cdots a_{r-1}\\ \qquad\qquad\vdots\\ w_{r-1}=(1-a_{r-2})a_{r-1}\\ w_{r}=1-a_{r-1}\end{array}\right.\quad.\] Then our main result Theorem 2.1 below yields **Theorem 1.2**.: _For \(\boldsymbol{a}=(a_{1},\,a_{2},\,\cdots,a_{r-1})\) with \(0\leq a_{i}\leq 1\) for each \(i\),_ \[h^{\boldsymbol{a}}(\boldsymbol{T})=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1})} \left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)},\mu}(T_{i})\right). \tag{1.6}\] The strategy of the proof is adopted from Tsukamoto's paper. However, there are some additional difficulties. Let \(h^{\boldsymbol{a}}_{\rm var}(\boldsymbol{T})\) be the right-hand side of (1.6). We use the "zero-dimensional trick" for proving \(h^{\boldsymbol{a}}(\boldsymbol{T})\leq h^{\boldsymbol{a}}_{\rm var}( \boldsymbol{T})\), meaning we reduce the proof to the case where all dynamical systems are zero-dimensional. Merely taking a zero-dimensional extension for each \(X_{i}\) does not work. Therefore we realize this by taking step by step an extension of the whole sequence of dynamical systems (see subsection 3.3). Then we show \(h^{\mathbf{a}}(\mathbf{T})\leq h^{\mathbf{a}}_{\mathrm{var}}(\mathbf{T})\) by using an appropriate measure, the definition of which is quite sophisticated (see \(\sigma_{N}\) in the proof of Theorem 4.1). In proving \(h^{\mathbf{a}}(\mathbf{T})\geq h^{\mathbf{a}}_{\mathrm{var}}(\mathbf{T})\), the zero-dimensional trick can not be utilized. The proof, therefore, requires a detailed estimation of these values for arbitrary covers, which is more complicated than the original argument in [13]. 
Theorem 1.2 and Feng-Huang's version of variational principle (1.3) yield **Corollary 1.3**.: _For \(\mathbf{a}=(a_{1},a_{2},\cdots,a_{r-1})\) with \(0<a_{i}\leq 1\) for each \(i\),_ \[h^{\mathbf{a}}(\mathbf{T})=h^{\mathbf{w}_{\mathbf{a}}}_{\mathrm{top}}(T_{1}).\] This corollary is rather profound, connecting the two seemingly different quantities. We can calculate the Hausdorff dimension of certain self-affine sets using this result, as seen in the following example and section 6. **Example 1.4**.: Let us take another look at self-affine sponges. Kenyon-Peres [11, Theorem 1.2] calculated their Hausdorff dimension as follows (recall that \(m_{1}\leq m_{2}\leq\cdots\leq m_{r}\)). **Theorem 1.5**.: _Define a sequence of real numbers \((Z_{j})_{j}\) as follows. Let \(Z_{r}\) be the indicator of \(D\), namely, \(Z_{r}(i_{1},\ldots,i_{r})=1\) if \((i_{1},\ldots,i_{r})\in D\) and \(0\) otherwise. Define \(Z_{r-1}\) by_ \[Z_{r-1}(i_{1},\ldots,i_{r-1})=\sum_{i_{r}=0}^{m_{r}-1}Z_{r}(i_{1},\ldots,i_{r- 1},i_{r}).\] _More generally, if \(Z_{j+1}\) is already defined, let_ \[Z_{j}(i_{1},\ldots,i_{j})=\sum_{i_{j+1}=0}^{m_{j+1}-1}Z_{j+1}(i_{1},\ldots,i_{ j},i_{j+1})^{\log m_{j+1}/\log m_{j+2}}.\] _Then_ \[\dim_{H}K(T,D)=\frac{\log Z_{0}}{\log m_{1}}.\] We can prove this result fairly elementary by Corollary 1.3 without requiring measure theory on the surface. Set \(a_{i}=\log_{m_{r-i+1}}m_{r-i}\) for each \(i\), then \(\mathbf{w}_{\mathbf{a}}\) equals \(\mathbf{w}\) in (1.4). Combining (1.5) and Corollary 1.3, \[\dim_{H}K(T,D)=\frac{h^{\mathbf{w}_{\mathbf{a}}}_{\mathrm{top}}(T_{1})}{\log m_{1}}= \frac{h^{\mathbf{a}}(\mathbf{T})}{\log m_{1}}.\] Hence, we need to show the following claim. **Claim 1.6**.: _We have_ \[h^{\mathbf{a}}(\mathbf{T})=\log Z_{0}.\] Proof.: Observe first that taking the infimum over closed covers instead of open ones in the definition of \(h^{\boldsymbol{a}}(\boldsymbol{T})\) does not change its value. Define a metric \(d^{(i)}\) on each \(X_{i}\) by \[d^{(i)}(x,y)=\min_{n\in\mathbb{Z}^{r-i+1}}|x-y-n|.\] Let \[D_{j}=\{(e_{1},\ldots,e_{j})|\text{ there are }e_{j+1},\ldots,e_{r}\text{ with }(e_{1}, \ldots,e_{r})\in D\}.\] Define \(p_{i}:D_{r-i+1}\to D_{r-i}\) by \(p_{i}(e_{1},\ldots,e_{r-i+1})=(e_{1},\ldots,e_{r-i})\). Fix \(0<\varepsilon<\frac{1}{m_{r}}\) and take a natural number \(n\) with \(m_{1}^{-n}<\varepsilon\). Fix a natural number \(N\) and let \(\psi_{i}:D_{r-i+1}^{N+n}\to D_{r-i}^{N+n}\) be the product map of \(p_{i}\), i.e., \(\psi_{i}(v_{1},\ldots,v_{N+n})=(p_{i}(v_{1}),\ldots,p_{i}(v_{N+n}))\). For \(x\in D_{r-i+1}^{N+n}\), define (recall that \(A_{i}=\operatorname{diag}(m_{1},m_{2},\ldots,m_{r-i+1})\)) \[U_{x}^{(i)}=\left\{\sum_{k=0}^{\infty}A_{i}^{-k}e_{k}\in X_{i}\middle|e_{k} \in D_{r-i+1}\text{ for each }k\text{ and }(e_{1},\ldots,e_{N+n})=x\right\}.\] Then \(\{U_{x}^{(i)}\}_{x\in D_{r-i+1}^{N+n}}\) is a closed cover of \(X_{i}\) with \(\operatorname{diam}(U_{x}^{(i)},d_{N}^{(i)})<\varepsilon\). For \(x,y\in D_{r-i+1}^{N+n}\), we write \(x\backsim y\) if and only if \(U_{x}^{(i)}\cap U_{y}^{(i)}\neq\varnothing\). We have for any \(i\) and \(x\in D_{r-i}^{N+n}\) \[\pi_{i}^{-1}(U_{x}^{(i+1)})\subset\bigcup_{\begin{subarray}{c}x^{\prime}\in D _{r-i}^{N+n}\\ x^{\prime}\backsim x\end{subarray}}\bigcup_{y\in\psi_{i}^{-1}(x^{\prime})}U_{y }^{(i)}.\] Notice that for each \(x\in D_{r-i}^{N+n}\), the number of \(x^{\prime}\in D_{r-i}^{N+n}\) with \(x^{\prime}\backsim x\) is not more than \(3^{r}\). 
Therefore, for every \(v=(v_{1}^{(1)},\ldots,v_{N+n}^{(1)})\in D_{r-1}^{N+n}\), there are \((v_{1}^{(k)},\ldots,v_{N+n}^{(k)})\in D_{r-1}^{N+n}\), \(k=2,3,\ldots,L\), and \(L\leq 3^{r}\), with \[\#_{1}^{\boldsymbol{a}}(\pi_{1}^{-1}(U_{v}^{(2)}),\,N,\,\varepsilon)\leq\sum_ {k=1}^{L}Z_{r-1}(v_{1}^{(k)})\cdots Z_{r-1}(v_{N+n}^{(k)}).\] We inductively continue while considering that the multiplicity is at most \(3^{r}\) and obtain \[\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\,\varepsilon)\] \[\leq 3^{r(r-1)}\sum_{x_{1}\in D_{1}^{N+n}}\left(\sum_{\begin{subarray} {c}x_{2}\in\psi_{2}^{-1}(x_{1})\\ x_{2}\in\psi_{2}^{-1}(x_{1})\end{subarray}}\left(\cdots\left(\sum_{x_{r-2}\in \psi_{r-2}^{-1}(x_{r-3})}\right.\right.\right.\] \[\left.\left.\left.\left(\sum_{\begin{subarray}{c}(v_{1},\ldots,v_ {N+n})\in\psi_{r-1}^{-1}(x_{r-2})\\ v_{j}\in D_{r-1}\text{ for each }j\end{subarray}}\left(Z_{r-1}(v_{1})\cdots Z_{r-1}(v_{N+n}) \right)^{a_{1}}\right)^{a_{2}}\right)^{a_{3}}\cdots\right)^{a_{r-2}}\right)^{a_{ r-1}}\] \[=3^{r(r-1)}\left\{\sum_{x_{1}\in D_{1}}\left(\sum_{x_{2}\in p_{2 }^{-1}(x_{1})}\left(\cdots\left(\sum_{x_{r-1}\in p_{r-1}^{-1}(x_{r-2})}Z_{r-1}( x_{1},\ldots,x_{r-1})^{a_{1}}\right)^{a_{2}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}} \right\}^{N+n}\] \[=3^{r(r-1)}Z_{0}^{\,N+n}.\] Therefore, \[h^{\boldsymbol{a}}(\boldsymbol{T})=\lim_{\varepsilon\to 0}\left(\lim_{N\to \infty}\frac{\log\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\,\varepsilon)}{N}\right) \leq\log Z_{0}.\] Next, we prove \(h^{\boldsymbol{a}}(\boldsymbol{T})\geq\log Z_{0}\). We fix \(0<\varepsilon<\frac{1}{m_{r}}\) and utilize \(\varepsilon\)-separated sets. Take and fix \(\boldsymbol{s}=(t_{1},\ldots,t_{r})\in D\), and set \(\boldsymbol{s}_{i}=(t_{1},\ldots,t_{r-i+1})\). Fix a natural number \(N\) and let \(\psi_{i}:D_{r-i+1}^{N}\to D_{r-i}^{N}\) be the product map of \(p_{i}\) as in the previous definition. Define \[Q_{i}=\left\{\sum_{k=1}^{N}{A_{i}}^{-k}e_{k}+\sum_{k=N+1}^{\infty}{A_{i}}^{-k} \boldsymbol{s}_{i}\in X_{i}\right|e_{1},\ldots,e_{N}\in D_{r-i+1}\right\}.\] Then \(Q_{i}\) is an \(\varepsilon\)-separated set with respect to the metric \(d_{N}^{(i)}\) on \(X_{i}\). Consider an arbitrary open cover \(\mathscr{F}^{(i)}\) of \(X_{i}\) for each \(i\) with the following properties (this \((\mathscr{F}^{(i)})_{i}\) is defined as \(\boldsymbol{\mathfrak{a}}\)**chain of open (\(N\), \(\varepsilon\))-covers** of \((X_{i})_{i}\) in Definition 3.1). 1. For every \(i\) and \(V\in\mathscr{F}^{(i)}\), we have \(\operatorname{diam}(V,d_{N}^{(i)})<\varepsilon\). 2. For each \(1\leq i\leq r-1\) and \(U\in\mathscr{F}^{(i+1)}\), there is \(\mathscr{F}^{(i)}(U)\subset\mathscr{F}^{(i)}\) such that \[\pi_{i}^{-1}(U)\subset\bigcup\mathscr{F}^{(i)}(U)\] and \[\mathscr{F}^{(i)}=\bigcup_{U\in\mathscr{F}^{(i+1)}}\mathscr{F}^{(i)}(U).\] We have \(\#(V\cap Q_{i})\leq 1\) for each \(V\in\mathscr{F}^{(i)}\) by (1). Let \((e_{1}^{(2)},e_{2}^{(2)},\cdots,e_{N}^{(2)})\in D_{r-1}^{N}\) and suppose \(U\in\mathscr{F}^{(2)}\) satisfies \[\sum_{k=1}^{N}{A_{2}}^{-k}e_{k}^{(2)}+\sum_{k=N+1}^{\infty}{A_{2}}^{-k} \boldsymbol{s}_{2}\in U\cap Q_{2}.\] Then \(\pi_{1}^{-1}(U)\) contains at least \(Z_{r-1}(e_{1}^{(2)})\cdots Z_{r-1}(e_{N}^{(2)})\) points of \(Q_{1}\). 
Hence, \[\#_{1}^{\boldsymbol{a}}(\pi_{1}^{-1}(U),\,N,\,\varepsilon)\geq Z_{r-1}(e_{1}^ {(2)})\cdots Z_{r-1}(e_{N}^{(2)}).\] We continue this reasoning inductively and get \(\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\varepsilon)\) \[\geq\sum_{e^{(1)}\in D_{1}^{N}}\left(\sum_{e^{(2)}\in\psi_{2} ^{-1}(e^{(1)})}\left(\cdots\left(\sum_{e^{(r-2)}\in\psi_{r-2}^{-1}(e^{(r-3)})} \right.\right.\right.\] \[\left.\left.\left(\sum_{\begin{subarray}{c}(e_{1}^{(2)},\ldots,e_ {N}^{(2)})\in\psi_{r-1}^{-1}(e^{(3)})\\ e_{j}^{(2)}\in D_{r-1}\text{ for each }j\end{subarray}}\left(Z_{r-1}(e_{1}^{(2)}) \cdots Z_{r-1}(e_{N}^{(2)})\right)^{a_{1}}\right)^{a_{2}}\right)^{a_{3}}\cdots \left.\right)^{a_{r-2}}\right)^{a_{r-1}}\] \[=\left\{\sum_{x_{1}\in D_{1}}\left(\sum_{x_{2}\in p_{2}^{-1}(x_{ 1})}\left(\cdots\left(\sum_{x_{r-1}\in p_{r-1}^{-1}(x_{r-2})}Z_{r-1}(x_{1}, \ldots,x_{r-1})^{a_{1}}\right)^{a_{2}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1} }\right\}^{N}\] \[=Z_{0}^{\,N}.\] This implies \[h^{\boldsymbol{a}}(\boldsymbol{T})\geq\log Z_{0}.\] We conclude that \[h^{\boldsymbol{a}}(\boldsymbol{T})=\log Z_{0}.\] We would like to mention the work of Barral and Feng [1, 2], and of Yayama [13]. These papers independently studied the related invariants when \((X,T)\) and \((Y,S)\) are subshifts over finite alphabets. ## 2. Weighted topological pressure Here, we introduce the generalized, new definition of weighted topological pressure. Let \((X_{i},T_{i})\)\((i=1,\,2,\,\ldots,\,r)\) be dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,\,2,\,\ldots,\,r-1)\) factor maps. For a continuous function \(f:X_{1}\to\mathbb{R}\) and a natural number \(N\), set \[S_{N}f(x)=f(x)+f(T_{1}x)+f(T_{1}^{2}x)+\cdots+f(T_{1}^{N-1}x).\] Let \(d^{(i)}\) be a metric on \(X_{i}\). Recall that we defined a new metric \(d^{(i)}_{N}\) on \(X_{i}\) by \[d^{(i)}_{N}(x_{1},\,x_{2})=\max_{0\leq n<N}d^{(i)}(T_{i}^{\,n}x_{1},T_{i}^{\, n}x_{2}).\] We may write these as \(S_{N}^{T_{1}}f\) or \(d^{T_{i}}_{N}\) to clarify the maps \(T_{1}\) and \(T_{i}\) in the definitions above. Let \(\boldsymbol{a}=(a_{1},\,a_{2},\cdots,a_{r-1})\) with \(0\leq a_{i}\leq 1\) for each \(i\) and \(\varepsilon\) a positive number. For \(\Omega\subset X_{1}\), we define \(P_{1}^{\mathbf{a}}(\Omega,\,f,\,N,\,\varepsilon)\) \[=\inf\left\{\sum_{j=1}^{n}\exp\left(\sup_{U_{j}}S_{N}f\right)\Bigg{|}\begin{array} []{l}n\in\mathbb{N},\,\{U_{j}\}_{j=1}^{n}\text{ is an open cover of }\Omega\\ \text{with diam}(U_{j},d_{N}^{T_{1}})<\varepsilon\text{ for all }1\leq j\leq n \end{array}\right\}.\] (Letting \(\Omega=X_{1}\), the above defines the standard topological pressure \(P(f)\) on \((X_{1},T_{1})\). The topological entropy \(h_{\text{top}}(T_{1})\) is the value of \(P(f)\) when \(f\equiv 0\).) Let \(\Omega\subset X_{i+1}\). 
If \(P_{i}^{\mathbf{a}}\) is already defined, let \[P_{i+1}^{\mathbf{a}}(\Omega,\,f,\,N,\,\varepsilon)\] \[=\inf\left\{\sum_{j=1}^{n}\left(P_{i}^{\mathbf{a}}(\pi_{i}^{-1}(U_{j} ),\,f,\,N,\,\varepsilon)\right)^{a_{i}}\Bigg{|}\begin{array}{l}n\in\mathbb{ N},\,\{U_{j}\}_{j=1}^{n}\text{ is an open cover of }\Omega\\ \text{with diam}(U_{j},d_{N}^{T_{i+1}})<\varepsilon\text{ for all }1\leq j \leq n\end{array}\right\}.\] We define the **topological pressure of \(\mathbf{a}\)-exponent \(P^{\mathbf{a}}(f)\)** by \[P^{\mathbf{a}}(f)=\lim_{\varepsilon\to 0}\left(\lim_{N\to\infty}\frac{\log P_{r}^{ \mathbf{a}}(X_{r},\,f,\,N,\,\varepsilon)}{N}\right).\] This limit exists since \(\log P_{r}^{\mathbf{a}}(X_{r},\,f,\,N,\,\varepsilon)\) is sub-additive in \(N\) and non-decreasing as \(\varepsilon\) tends to \(0\). When we want to clarify the maps \(T_{i}\) and \(\pi_{i}\) used in the definition of \(P^{\mathbf{a}}(f)\), we will denote it by \(P^{\mathbf{a}}(f,\,\mathbf{T})\) or \(P^{\mathbf{a}}(f,\,\mathbf{T},\,\mathbf{\pi})\) with \(\mathbf{T}=(T_{i})_{i=1}^{r}\) and \(\mathbf{\pi}=(\pi_{i})_{i=1}^{r}\). From \(\mathbf{a}=(a_{1},\,a_{2},\,\cdots,a_{r-1})\), we define a probability vector (i.e., all entries are non-negative, and their sum is \(1\)) \(\mathbf{w_{a}}=(w_{1},\,\cdots,\,w_{r})\) by \[\left\{\begin{array}{l}w_{1}=a_{1}a_{2}a_{3}\cdots a_{r-1}\\ w_{2}=(1-a_{1})a_{2}a_{3}\cdots a_{r-1}\\ w_{3}=(1-a_{2})a_{3}\cdots a_{r-1}\\ \vdots\\ w_{r-1}=(1-a_{r-2})a_{r-1}\\ w_{r}=1-a_{r-1}\end{array}\right.\,. \tag{2.1}\] Let \[\pi^{(0)}=\text{id}_{X_{1}}:X_{1}\to X_{1},\] \[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:X_{1}\to X _{i+1}.\] We can now state the main result of this paper. **Theorem 2.1**.: _Let \((X_{i},T_{i})\) (\(i=1,2,\,\ldots,r\)) be dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,2,\,...,\,r-1)\) factor maps. For any continuous function \(f:X_{1}\to\mathbb{R}\),_ \[P^{\mathbf{a}}(f)=\sup_{\mu\in\mathcal{M}^{T_{1}}(X_{1})}\left(\sum_{i=1}^{r}w_{i }h_{\pi^{(i-1)},\mu}(T_{i})+w_{1}\int_{X_{1}}fd\mu\right). \tag{2.2}\] We define \(P^{\boldsymbol{a}}_{\text{var}}(f)\) to be the right-hand side of this equation. Then we need to prove \[P^{\boldsymbol{a}}(f)=P^{\boldsymbol{a}}_{\text{var}}(f).\] ## 3. Preparation ### Basic properties and tools Let \((X_{i},T_{i})\)\((i=1,2,\ldots,r)\) be dynamical systems, \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,2,\ldots,r-1)\) factor maps, \(\boldsymbol{a}=(a_{1},\cdots,a_{r-1})\in[0,1]^{r-1}\), and \(f:X_{1}\to\mathbb{R}\) a continuous function. We will use the following notions in sections 3.3 and 5. **Definition 3.1**.: Consider a cover \(\mathscr{F}^{(i)}\) of \(X_{i}\) for each \(i\). For a natural number \(N\) and a positive number \(\varepsilon\), the family \((\mathscr{F}^{(i)})_{i}\) is said to be **a chain of (\(\boldsymbol{N}\), \(\boldsymbol{\varepsilon}\))-covers** of \((X_{i})_{i}\) if the following conditions are true: 1. For every \(i\) and \(V\in\mathscr{F}^{(i)}\), we have \(\text{diam}(V,d^{(i)}_{N})<\varepsilon\). 2. For each \(1\leq i\leq r-1\) and \(U\in\mathscr{F}^{(i+1)}\), there is \(\mathscr{F}^{(i)}(U)\subset\mathscr{F}^{(i)}\) such that \[\pi_{i}^{-1}(U)\subset\bigcup\mathscr{F}^{(i)}(U)\] and \[\mathscr{F}^{(i)}=\bigcup_{U\in\mathscr{F}^{(i+1)}}\mathscr{F}^{(i)}(U).\] Moreover, if all the elements of each \(\mathscr{F}^{(i)}\) are open/closed/compact, we call \((\mathscr{F}^{(i)})_{i}\)**a chain of open/closed/compact (\(\boldsymbol{N}\), \(\boldsymbol{\varepsilon}\))-covers** of \((X_{i})_{i}\). 
**Remark 3.2**.: Note that we can rewrite \(P^{\boldsymbol{a}}_{r}(X_{r},\,f,\,N,\,\varepsilon)\) using chains of open covers as follows. For a chain of \((N,\,\varepsilon)\)-covers \((\mathscr{F}^{(i)})_{i}\) of \((X_{i})_{i}\), let \[\mathscr{P}^{\boldsymbol{a}}\left(f,\,N,\,\varepsilon,\,(\mathscr{F}^{(i)})_ {i}\right)\] \[=\sum_{U^{(r)}\in\mathscr{F}^{(r)}}\left(\sum_{U^{(r-1)}\in\mathscr{F}^{(r-1 )}(U^{(r)})}\left(\cdots\left(\sum_{U^{(1)}\in\mathscr{F}^{(1)}(U^{(2)})}e^{ \sup_{U^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}.\] Then \[P^{\boldsymbol{a}}_{r}(X_{r},\,f,\,N,\,\varepsilon)\] \[=\inf\left\{\mathscr{P}^{\boldsymbol{a}}\left(f,\,N,\,\varepsilon,\,(\mathscr{ F}^{(i)})_{i}\right)\right|(\mathscr{F}^{(i)})_{i}\text{ is a chain of open $(N,\,\varepsilon)$-covers of $(X_{i})_{i}$ }\right\}.\] Just like the classic notion of pressure, we have the following property. **Lemma 3.3**.: _For any natural number \(m\),_ \[P^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m})=mP^{\boldsymbol{a}}(f, \boldsymbol{T}),\] _where \(\boldsymbol{T}^{m}=(T_{i}^{\,m})_{i=1}^{r}\)._ Proof.: Fix \(\varepsilon>0\). It is obvious from the definition of \(P_{1}^{\boldsymbol{a}}\) that for any \(\Omega_{1}\subset X_{1}\) and a natural number N, \[P_{1}^{\boldsymbol{a}}(\Omega_{1},\,S_{m}^{T_{1}}f,\,\boldsymbol{T}^{m},\,N, \,\varepsilon)\leq P_{1}^{\boldsymbol{a}}(\Omega_{1},\,f,\boldsymbol{T},\, mN,\,\varepsilon).\] Let \(\Omega_{i+1}\subset X_{i+1}\). By induction on \(i\), we have \[P_{i}^{\boldsymbol{a}}(\Omega_{i+1},\,S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\, \varepsilon)\leq P_{i}^{\boldsymbol{a}}(\Omega_{i+1},\,f,\boldsymbol{T},\, mN,\,\varepsilon).\] Thus, \[P_{r}^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\, \varepsilon)\leq P_{r}^{\boldsymbol{a}}(f,\boldsymbol{T},\,mN,\,\varepsilon). \tag{3.1}\] There exists \(0<\delta<\varepsilon\) such that for any \(1\leq i\leq r\), \[d^{(i)}(x,y)<\delta\implies d_{m}^{T_{i}}(x,y)<\varepsilon\qquad(\text{for }x,y\in X_{i}).\] Then \[d_{N}^{T_{i}^{m}}(x,y)<\delta\implies d_{mN}^{T_{i}}(x,y)<\varepsilon\quad( \text{for }x,\,y\in X_{i}\text{ and }\,1\leq i\leq r). \tag{3.2}\] Let \(i=1\) in (3.2), then we have for any \(\Omega_{1}\subset X_{1}\), \[P_{1}^{\boldsymbol{a}}(\Omega_{1},\,f,\boldsymbol{T},\,mN,\, \varepsilon)\leq P_{1}^{\boldsymbol{a}}(\Omega_{1},\,S_{m}^{T_{1}}f, \boldsymbol{T}^{m},N,\,\delta).\] Take \(\Omega_{i+1}\subset X_{i+1}\). Again by induction on \(i\) and by (3.2), we have \[P_{i}^{\boldsymbol{a}}(\Omega_{i+1},\,f,\boldsymbol{T},\,mN, \,\varepsilon)\leq P_{i}^{\boldsymbol{a}}(\Omega_{i+1},\,S_{m}^{T_{1}}f, \boldsymbol{T}^{m},N,\,\delta).\] Hence, \[P_{r}^{\boldsymbol{a}}(f,\boldsymbol{T},\,mN,\,\varepsilon)\leq P_{r}^{ \boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\,\delta).\] Combining with (3.1) we have \[P_{r}^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\, \varepsilon)\leq P_{r}^{\boldsymbol{a}}(f,\boldsymbol{T},\,mN,\,\varepsilon) \leq P_{r}^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\,\delta).\] Therefore, \[P^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m})=mP^{\boldsymbol{a}}(f, \boldsymbol{T}).\] We will later use the following standard lemma of calculus. **Lemma 3.4**.: 1. _For_ \(0\leq a\leq 1\) _and non-negative numbers_ \(x,y\)_,_ \[(x+y)^{a}\leq x^{a}+y^{a}.\] 2. _Suppose that non-negative real numbers_ \(p_{1},p_{2},\ldots,p_{n}\) _satisfy_ \(\sum_{i=1}^{n}p_{i}=1\)_. 
Then for any real numbers_ \(x_{1},x_{2},\ldots,x_{n}\) _we have_ \[\sum_{i=1}^{n}\left(-p_{i}\log p_{i}+x_{i}p_{i}\right)\leq\log\sum_{i=1}^{n}e^{ x_{i}}.\] _In particular, letting_ \(x_{1}=x_{2}=\cdots=x_{n}=0\) _gives_ \[\sum_{i=1}^{n}(-p_{i}\log p_{i})\leq\log n.\] _Here,_ \(0\cdot\log 0\) _is defined as_ \(0\)_._ The proof for (1) is elementary. See [22, SS9.3, Lemma 9.9] for (2). ### Measure theoretic entropy In this subsection, we will introduce the classical measure-theoretic entropy (a.k.a. Kolmogorov-Sinai entropy) and state some of the basic lemmas we need to prove Theorem 2.1. The main reference is the book of Walters [22]. Let \((X,T)\) be a dynamical system and \(\mu\in\mathscr{M}^{T}(X)\). A set \(\mathscr{A}=\{A_{1},\ldots,A_{n}\}\) is called a finite partition of X with measurable elements if \(X=A_{1}\cup\cdots\cup A_{n}\), each \(A_{i}\) is a measurable set, and \(A_{i}\cap A_{j}=\varnothing\) for \(i\neq j\). In this paper, a partition is always finite and consists of measurable elements. Let \(\mathscr{A}\) and \(\mathscr{A}^{\prime}\) be partitions of \(X\). We define a new partition \(\mathscr{A}\vee\mathscr{A}^{\prime}\) by \[\mathscr{A}\vee\mathscr{A}^{\prime}=\left\{A\cap A^{\prime}\,|\,A\in\mathscr{ A}\text{ and }A^{\prime}\in\mathscr{A}^{\prime}\right\}.\] For a natural number \(N\), we define a refined partition \(\mathscr{A}_{N}\) of \(\mathscr{A}\) by \[\mathscr{A}_{N}=\mathscr{A}\lor T^{-1}\mathscr{A}\lor T^{-2}\mathscr{A}\vee \cdots\lor T^{-(N-1)}\mathscr{A},\] where \(T^{-i}\mathscr{A}=\{T^{-i}(A)\,|\,A\in\mathscr{A}\}\) is a partition for \(i\in\mathbb{N}\). For a partition \(\mathscr{A}\) of \(X\), let \[H_{\mu}(\mathscr{A})=-\sum_{A\in\mathscr{A}^{\prime}}\mu(A)\log\left(\mu(A)\right).\] We set \[h_{\mu}(T,\mathscr{A})=\lim_{N\to\infty}\frac{H_{\mu}(\mathscr{A}_{N})}{N}.\] This limit exists since \(H_{\mu}(\mathscr{A}_{N})\) is sub-additive in \(N\). The **measure theoretic entropy**\(h_{\mu}(T)\) is defined by \[h_{\mu}(T)=\sup\left\{h_{\mu}(T,\mathscr{A})\,|\,\mathscr{A}\text{ is a partition of }X\right\}.\] Let \(\mathscr{A}\) and \(\mathscr{A}^{\prime}\) be partitions. Their **conditional entropy** is defined by \[H_{\mu}(\mathscr{A}|\mathscr{A}^{\prime})=-\sum_{\begin{subarray}{c}A^{\prime} \in\mathscr{A}^{\prime}\\ \mu(A^{\prime})\neq 0\end{subarray}}\mu(A^{\prime})\sum_{A\in\mathscr{A}} \frac{\mu(A\cap A^{\prime})}{\mu(A^{\prime})}\log\bigg{(}\frac{\mu(A\cap A^{ \prime})}{\mu(A^{\prime})}\bigg{)}.\] **Lemma 3.5**.: 1. \(H_{\mu}(\mathscr{A})\) _is sub-additive in_ \(\mathscr{A}\)_: i.e., for partitions_ \(\mathscr{A}\) _and_ \(\mathscr{A}^{\prime}\)_,_ \[H_{\mu}(\mathscr{A}\vee\mathscr{A}^{\prime})\leq H_{\mu}(\mathscr{A})+H_{\mu} (\mathscr{A}^{\prime}).\] 2. \(H_{\mu}(\mathscr{A})\) _is concave in_ \(\mu\)_: i.e., for_ \(\mu,\nu\in\mathscr{M}^{T}(X)\) _and_ \(0\leq t\leq 1\)_,_ \[H_{(1-t)\mu+t\nu}(\mathscr{A})\geq(1-t)H_{\mu}(\mathscr{A})+tH_{\nu}(\mathscr{ A}).\] 3. _For partitions_ \(\mathscr{A}\) _and_ \(\mathscr{A}^{\prime}\)_,_ \[h_{\mu}(T,\mathscr{A})\leq h_{\mu}(T,\mathscr{A}^{\prime})+H_{\mu}(\mathscr{A} ^{\prime}|\mathscr{A}).\] For the proof confer [22, Theorem 4.3 (viii), SS4.5] for (1), [22, Remark SS8.1] for (2), and [22, Theorem 4.12, SS4.5] for (3). ### Zero-dimensional principal extension Here we will see how we can reduce the proof of \(P^{\boldsymbol{a}}(f)\leq P^{\boldsymbol{a}}_{\mathrm{var}}(f)\) to the case where all dynamical systems are zero-dimensional. 
First, we review the definitions and properties of (zero-dimensional) principal extension. The introduction here closely follows Tsukamoto's paper [21] and references the book of Downarowicz [15]. Suppose \(\pi:(Y,S)\to(X,\,T)\) is a factor map between dynamical systems. Let \(d\) be a metric on \(Y\). We define the **conditonal topological entropy** of \(\pi\) by \[h_{\mathrm{top}}(Y,S\,|\,X,T)=\lim_{\varepsilon\to 0}\left(\lim_{N\to \infty}\frac{\sup_{x\in X}\log\#(\pi^{-1}(x),N,\varepsilon)}{N}\right).\] Here, \[\#(\pi^{-1}(x),\,N,\varepsilon)=\min\left\{n\in\mathbb{N}\middle|\begin{array} []{l}\text{There exists an open cover $\{U_{j}\}_{j=1}^{n}$ of $\pi^{-1}(x)$}\\ \text{with $\mathrm{diam}(U_{j},\,d_{N})<\varepsilon$ for all $1\leq j \leq n$}\end{array}\right\}.\] A factor map \(\pi:(Y,S)\to(X,\,T)\) between dynamical systems is said to be a **principal factor map** if \[h_{\mathrm{top}}(Y,S\,|\,X,T)=0.\] Also, \((Y,S)\) is called a **principal extension** of \((X,T)\). The following theorem is from [15, Corollary 6.8.9]. **Theorem 3.6**.: _Suppose \(\pi:(Y,S)\to(X,\,T)\) is a principal factor map. Then \(\pi\) preserves measure-theoretic entropy, namely,_ \[h_{\mu}(S)=h_{\pi,\mu}(T)\] _for any \(S\)-invariant probability measure \(\mu\) on Y._ More precisely, it is proved in [11, Corollary 6.8.9] that \(\pi\) is a principal factor map if and only if it preserves measure-theoretic entropy. Suppose \(\pi:(X_{1},T_{1})\to(X_{2},T_{2})\) and \(\phi:(Y,S)\to(X_{2},T_{2})\) are factor maps between dynamical systems. We define a fiber product \((X_{1}\times_{X_{2}}Y,T_{1}\times S)\) of \((X_{1},T_{1})\) and \((Y,S)\) over \((X_{2},T_{2})\) by \[X_{1}\times_{X_{2}}Y=\left\{(x,y)\in X_{1}\times Y|\,\pi(x)=\phi(y)\right\},\] \[T_{1}\times S:X_{1}\times_{X_{2}}Y\ni(x,y)\longmapsto(T_{1}(x),S(y))\in X_{1} \times_{X_{2}}Y.\] We have the following commutative diagram: (3.3) Here \(\pi^{\prime}\) and \(\psi\) are restrictions of the projections onto \(Y\) and \(X_{1}\), respectively: \[\pi^{\prime}:X_{1}\times_{X_{2}}Y\ni(x,y)\longmapsto y\in Y,\] \[\psi:X_{1}\times_{X_{2}}Y\ni(x,y)\longmapsto x\in X_{1}.\] Since \(\pi\) and \(\phi\) are surjective, both \(\pi^{\prime}\) and \(\psi\) are factor maps. **Lemma 3.7**.: _If \(\phi\) is a principal extension in the diagram (3.3), then \(\psi\) is also a principal extension._ Proof.: Let \(d^{1}\) and \(d^{Y}\) be metrics on \(X_{1}\) and \(Y\), respectively. Define a metric \(\widetilde{d}\) on \(X_{1}\times_{X_{2}}Y\) by \[\widetilde{d}\big{(}(x,y),(x^{\prime},y^{\prime})\big{)}=\max\,\{d^{1}(x,x^{ \prime}),d^{Y}(y,y^{\prime})\}.\] Let \(x\in X_{1}\). We have \[\psi^{-1}(x)=\{x\}\times\{y\in Y|\,\pi(x)=\phi(y)\}=\{x\}\times\phi^{-1}(\pi( x)),\] which in turn implies \(\widetilde{d}|_{\psi^{-1}(x)}=d^{Y}|_{\phi^{-1}(\pi(x))}\). Then the metric space \((\psi^{-1}(x),\widetilde{d}_{N})\) is isometric to \((\phi^{-1}(\pi(x)),d^{Y}_{N})\) for any natural number \(N\). Therefore for any \(\varepsilon>0\), \[\#(\psi^{-1}(x),N,\varepsilon)=\#(\phi^{-1}(\pi(x)),N,\varepsilon).\] Since \(\pi\) is surjective, \[\sup_{x\in X_{1}}\#(\psi^{-1}(x),N,\varepsilon)=\sup_{x\in X_{1}}\#(\phi^{-1 }(\pi(x)),N,\varepsilon)=\sup_{y\in Y}\#(\phi^{-1}(y),N,\varepsilon).\] Hence, \[h_{\text{top}}(X_{1}\times_{X_{2}}Y,T_{1}\times S|\,X_{1},T_{1})=h_{\text{top} }(Y,S|\,X_{2},T_{2})=0.\] A dynamical system \((Y,S)\) is said to be **zero-dimensional** if there is a clopen basis of the topology of \(Y\), where clopen means any element in the basis is both closed and open. 
A basic example of a zero-dimensional dynamical system is the Cantor set \(\{0,1\}^{\mathbb{N}}\) with the shift map. A principal extension \((Y,S)\) of \((X,T)\) is called a **zero-dimensional principal extension** if \((Y,S)\) is zero-dimensional. The following important theorem can be found in [11, Theorem 7.6.1]. **Theorem 3.8**.: _For any dynamical system, there is a zero-dimensional principal extension._ Let \((Y_{i},R_{i})\) (\(i=1,\,2,\,\dots,\,m\)) be dynamical systems, \(\pi_{i}:Y_{i}\to Y_{i+1}\) (\(i=1,\,2,\,...,\,m-1\)) factor maps, and \(\boldsymbol{a}=(a_{1},\cdots,a_{m-1})\in[0,1]^{m-1}\). Fix \(2\leq k\leq m-1\) and take a zero-dimensional principal extension \(\phi_{k}:(Z_{k},S_{k})\to(Y_{k},R_{k})\). For each \(1\leq i\leq k-1\), let \((Y_{i}\times_{Y_{k}}Z_{k},R_{i}\times S_{k})\) be the fiber product and \(\phi_{i}:Y_{i}\times_{Y_{k}}Z_{k}\to Y_{i}\) be the restriction of the projection as in the earlier definition. We have By Lemma 3.7, \(\phi_{i}\) is a principal factor map. We define \(\Pi_{i}:Y_{i}\times_{Y_{k}}Z_{k}\to Y_{i+1}\times_{Y_{k}}Z_{k}\) by \(\Pi_{i}(x,y)=(\pi_{i}(x),y)\) for each \(i\). Then we have the following commutative diagram: (3.4) Let \[(Z_{i},S_{i})=(Y_{i}\times_{Y_{k}}Z_{k},R_{i}\times S_{k})\text{ for }1\leq i\leq k -1,\ \ (Z_{i},S_{i})=(Y_{i},R_{i})\text{ for }k+1\leq i\leq m,\] \[\Pi_{k}=\pi_{k}\circ\phi_{k}:Z_{k}\to Y_{k+1},\ \ \Pi_{i}=\pi_{i}:Z_{i}\to Z_{i+1}\text{ for }k+1\leq i\leq m-1,\] \[\phi_{i}=\operatorname{id}_{Z_{i}}:Z_{i}\to Z_{i}\text{ for }k+1\leq i\leq m.\] **Lemma 3.9**.: _In the settings above,_ \[P^{\boldsymbol{a}}_{\operatorname{var}}(f,\boldsymbol{R},\boldsymbol{\pi}) \geq P^{\boldsymbol{a}}_{\operatorname{var}}(f\circ\phi_{1},\boldsymbol{S}, \boldsymbol{\Pi})\] _and_ \[P^{\boldsymbol{a}}(f,\boldsymbol{R},\boldsymbol{\pi})\leq P^{\boldsymbol{a}} (f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi}).\] _Here, \(\boldsymbol{R}=(R_{i})_{i}\), \(\boldsymbol{\pi}=(\pi_{i})_{i}\), \(\boldsymbol{S}=(S_{i})_{i}\) and \(\boldsymbol{\Pi}=(\Pi_{i})_{i}\)._ Proof.: We remark that the following proof does not require \(Z_{k}\) to be zero-dimensional. Let \[\pi^{(0)}=\operatorname{id}_{Y_{1}}:Y_{1}\to Y_{1},\] \[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:Y_{1}\to Y_{i+1}\] and \[\Pi^{(0)}=\operatorname{id}_{Z_{1}}:Z_{1}\to Z_{1},\] \[\Pi^{(i)}=\Pi_{i}\circ\Pi_{i-1}\circ\cdots\circ\Pi_{1}:Z_{1}\to Z_{i+1}.\] Let \(\nu\in\mathscr{M}^{S_{1}}(Y_{1})\) and \(1\leq i\leq m\). Since all the horizontal maps in (3.4) are principal factor maps, we have \[h_{\Pi^{(i-1)}{}_{*}\nu}(S_{i})=h_{(\phi_{i})_{*}\Pi^{(i-1)}{}_{*}\nu}(R_{i})= h_{\pi^{(i-1)}{}_{*}(\phi_{1})_{*}\nu}(R_{i}).\] It follows that \[P^{\boldsymbol{a}}_{\operatorname{var}}(f\circ\phi_{1},\boldsymbol{S}, \boldsymbol{\Pi}) =\sup_{\nu\in\mathscr{M}^{S_{1}}(Z_{1})}\left(\sum_{i=1}^{m}w_{i} h_{\Pi^{(i-1)}{}_{*}\nu}(S_{i})+w_{1}\int_{Z_{1}}f\circ\phi_{1}d\nu\right)\] \[=\sup_{\nu\in\mathscr{M}^{S_{1}}(Z_{1})}\left(\sum_{i=1}^{m}w_{i} h_{\pi^{(i-1)}{}_{*}(\phi_{1})_{*}\nu}(R_{i})+w_{1}\int_{Y_{1}}fd\big{(}(\phi_{1}) _{*}\nu\big{)}\right)\] \[\leq\sup_{\mu\in\mathscr{M}^{T_{1}}(Y_{1})}\left(\sum_{i=1}^{m}w_ {i}h_{\pi^{(i-1)}{}_{*}\mu}(R_{i})+w_{1}\int_{Y_{1}}fd\mu\right)\] \[=P^{\boldsymbol{a}}_{\operatorname{var}}(f,\boldsymbol{R}, \boldsymbol{\pi}).\] (The reversed inequality is generally true by the surjectivity of factor maps, yielding equality. However, we do not use this fact.) 
Let \(d^{i}\) be a metric on \(Y_{i}\) for each \(i\) and \(\widetilde{d^{k}}\) a metric on \(Z_{k}\). We define a metric \(\widetilde{d^{i}}\) on \((Z_{i},S_{i})\) for \(1\leq i\leq k-1\) by \[\widetilde{d^{i}}\big{(}(x_{1},y_{1}),(x_{2},y_{2})\big{)}=\max\left\{d^{i}(x_ {1},x_{2}),\widetilde{d^{k}}(y_{1},y_{2})\right\}\quad\big{(}(x_{1},y_{1}),(x_ {2},y_{2})\in Z_{i}=Y_{i}\times_{Y_{k}}Z_{k}\big{)}\.\] Set \(\widetilde{d}^{i}=d^{i}\) for \(k+1\leq i\leq m\). Take an arbitrary positive number \(\varepsilon\). There exists \(0<\delta<\varepsilon\) such that for every \(1\leq i\leq m\), \[\widetilde{d}^{i}(x,y)<\delta\implies d^{i}(\phi_{i}(x),\phi_{i}(y))<\varepsilon \quad(x,y\in Z_{i}). \tag{3.5}\] Let \(N\) be a natural number. We claim that \[P^{\boldsymbol{a}}_{r}(f,\boldsymbol{R},\boldsymbol{\pi},N,\varepsilon)\leq P ^{\boldsymbol{a}}_{r}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N, \delta).\] Take \(M>0\) with \[P^{\boldsymbol{a}}_{r}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N, \delta)<M.\] Then there exists a chain of open \((N,\,\delta)\)-covers \((\mathscr{F}^{(i)})_{i}\) of \((Z_{i})_{i}\) (see Definition 3.1 and Remark 3.2) with \[\mathscr{P}^{\boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\, \boldsymbol{\Pi},N,\delta,\,(\mathscr{F}^{(i)})_{i}\right)<M.\] We can find a compact set \(C_{U}\subset U\) for each \(U\in\mathscr{F}^{(m)}\) such that \(\bigcup_{U\in\mathscr{F}^{(m)}}C_{U}=Z_{m}\). Let \(\mathscr{K}^{(m)}:=\{C_{U}|U\in\mathscr{F}^{(m)}\}\). Since \(\Pi_{m-1}^{-1}(C_{U})\subset\Pi_{m-1}^{-1}(U)\) is compact for each \(U\in\mathscr{F}^{(m)}\), we can find a compact set \(E_{V}\subset V\) for each \(V\in\mathscr{F}^{(m-1)}(U)\) such that \(\Pi_{m-1}^{-1}(C_{U})\subset\bigcup_{V\in\mathscr{F}^{(k)}(U)}E_{V}\). Let \(\mathscr{K}^{(m-1)}(C_{U}):=\{E_{V}|V\in\mathscr{F}^{(m-1)}(U)\}\) and \(\mathscr{K}^{(m-1)}:=\bigcup_{C\in\mathscr{K}^{(m)}}\mathscr{K}^{(m-1)}(C)\). We continue likewise and obtain a chain of compact \((N,\,\delta)\)-covers \((\mathscr{K}^{(i)})_{i}\) of \((Z_{i})_{i}\) with \[\mathscr{P}^{\boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\, \boldsymbol{\Pi},\,N,\,\delta,\,(\mathscr{K}^{(i)})_{i}\right)\leq\mathscr{P} ^{\boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\,\boldsymbol{\Pi},\,N,\,\delta,\,(\mathscr{F}^{(i)})_{i}\right)<M.\] Let \(\phi_{i}(\mathscr{K}^{(i)})=\left\{\phi_{i}(C)\,\big{|}\,C\in\mathscr{K}^{(i) }\right\}\) for each \(i\). Note that for any \(\Omega\subset Z_{i}\), \[\pi_{i-1}^{-1}(\phi_{i}(\Omega))=\phi_{i-1}(\Pi_{i-1}^{-1}(\Omega)).\] This and (3.5) assure that \((\phi_{i}(\mathscr{K}^{(i)}))_{i}\) is a chain of compact \((N,\,\varepsilon)\)-covers of \((Y_{i})_{i}\). 
We have \[\mathscr{P}^{\boldsymbol{a}}\left(f,\,\boldsymbol{R},\,\boldsymbol{\pi},\,N, \,\varepsilon,\,(\phi_{i}(\mathscr{K}^{(i)}))_{i}\right)=\mathscr{P}^{ \boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\,\boldsymbol{\Pi},\,N, \,\delta,\,(\mathscr{K}^{(i)})_{i}\right)<M.\] Since \(f\) is continuous and each \(\phi_{i}(\mathscr{K}^{(i)})\) is a closed cover, we can slightly enlarge each set in \(\phi_{i}(\mathscr{K}^{(i)})\) and create a chain of open \((N,\,\varepsilon)\)-covers \((\mathscr{O}^{(i)})_{i}\) of \((Y_{i})_{i}\) satisfying \[\mathscr{P}^{\boldsymbol{a}}\left(f,\,\boldsymbol{R},\,\boldsymbol{\pi},\,N,\varepsilon,\,(\mathscr{O}^{(i)})_{i}\right)<M.\] Therefore, \[P^{\boldsymbol{a}}_{r}(f,\boldsymbol{R},\boldsymbol{\pi},N,\varepsilon)\leq \mathscr{P}^{\boldsymbol{a}}\left(f,\,\boldsymbol{R},\,\boldsymbol{\pi},\,N, \,\varepsilon,\,(\mathscr{O}^{(i)})_{i}\right)<M.\] Since \(M>P^{\boldsymbol{a}}_{r}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N, \delta)\) was chosen arbitrarily, we have \[P^{\boldsymbol{a}}_{r}(f,\boldsymbol{R},\boldsymbol{\pi},N,\varepsilon)\leq P ^{\boldsymbol{a}}_{r}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N,\delta).\] This implies \[P^{\boldsymbol{a}}(f,\boldsymbol{R},\boldsymbol{\pi})\leq P^{\boldsymbol{a}}(f \circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi}).\] The following proposition reduces the proof of \(P^{\boldsymbol{a}}(f)\leq P^{\boldsymbol{a}}_{\rm var}(f)\) in the next section to the case where all dynamical systems are zero-dimensional. **Proposition 3.10**.: _For all dynamical systems \((X_{i},\,T_{i})\) (\(i=1,\,2,\,\ldots,\,r\)) and factor maps \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,\,2,\,...,\,r-1)\), there are zero-dimensional dynamical systems \((Z_{i},\,S_{i})\) (\(i=1,\,2,\,\ldots,\,r\)) and factor maps \(\Pi_{i}:Z_{i}\to Z_{i+1}\)\((i=1,\,2,\,...,\,r-1)\) with the following property; for every continuous function \(f:X_{1}\to\mathbb{R}\) there exists a continuous function \(g:Z_{1}\to\mathbb{R}\) with_ \[P^{\boldsymbol{a}}_{\rm var}(f,\boldsymbol{T},\boldsymbol{\pi})\geq P^{ \boldsymbol{a}}_{\rm var}(g,\boldsymbol{S},\boldsymbol{\Pi})\] _and_ \[P^{\boldsymbol{a}}(f,\boldsymbol{T},\boldsymbol{\pi})\leq P^{\boldsymbol{a}}(g,\boldsymbol{S},\boldsymbol{\Pi}).\] Proof.: We will first construct zero-dimensional dynamical systems \((Z_{i},\,S_{i})\)\((i=1,\,2,\,\ldots,\,r)\) and factor maps \(\Pi_{i}:Z_{i}\to Z_{i+1}\)\((i=1,\,2,\,...,\,r-1)\) alongside the following commutative diagram of dynamical systems and factor maps: (3.6) where all the horizontal maps are principal factor maps. By Theorem 3.8, there is a zero-dimensional principal extension \(\psi_{r}:(Z_{r},S_{r})\to(X_{r},T_{r})\). The set \(\{*\}\) is the trivial dynamical system, and the maps \(X_{r}\to\{*\}\) and \(Z_{r}\to\{*\}\) send every element to \(*\). For each \(1\leq i\leq r-1\), the map \(X_{i}\times_{X_{r}}Z_{r}\to X_{i}\) in the following diagram is a principal factor map by Lemma 3.7. For \(1\leq i\leq r-2\), define \(\pi_{i}^{(2)}:X_{i}\times_{X_{r}}Z_{r}\to X_{i+1}\times_{X_{r}}Z_{r}\) by \[\pi_{i}^{(2)}(x,z)=(\pi_{i}(x),y).\] Then every horizontal map in the right two rows of (3.6) is a principal factor map. Next, take a zero-dimensional principal extension \(\psi_{r-1}:(Z_{r-1},S_{r-1})\to(X_{r-1}\times_{X_{r}}Z_{r},T_{r-1}\times S_{r})\) and let \(\Pi_{r-1}=\pi_{r-1}^{(2)}\circ\psi_{r-1}\). The rest of (3.6) is constructed similarly, and by Lemma 3.7, each horizontal map is a principal factor map. Let \(f:X_{1}\to\mathbb{R}\) be a continuous map. 
Applying Lemma 3.9 to the right two rows of (3.6), we get \[P^{\boldsymbol{a}}_{\mathrm{var}}(f,\boldsymbol{T},\boldsymbol{\pi})\geq P^{ \boldsymbol{a}}_{\mathrm{var}}(f\circ\phi_{1},\boldsymbol{S^{(2)}},\boldsymbol {\Pi^{(2)}})\] and \[P^{\boldsymbol{a}}(f,\boldsymbol{T},\boldsymbol{\pi})\leq P^{\boldsymbol{a}} (f\circ\phi_{1},\boldsymbol{S^{(2)}},\boldsymbol{\Pi^{(2)}})\] for \(\boldsymbol{\Pi^{(2)}}=(\pi_{i}^{(2)})_{i}\) and \(\boldsymbol{S^{(2)}}=(T_{i}\times S_{r})_{i}\). Again by Lemma 3.9, \[P^{\boldsymbol{a}}_{\mathrm{var}}(f\circ\phi_{1},\boldsymbol{S^{(2)}}, \boldsymbol{\Pi^{(2)}})\geq P^{\boldsymbol{a}}_{\mathrm{var}}(f\circ\phi_{1} \circ\phi_{2},\boldsymbol{S^{(3)}},\boldsymbol{\Pi^{(3)}})\] and \[P^{\boldsymbol{a}}(f\circ\phi_{1},\boldsymbol{S^{(2)}},\boldsymbol{\Pi^{(2)}} )\leq P^{\boldsymbol{a}}(f\circ\phi_{1}\circ\phi_{2},\boldsymbol{S^{(3)}}, \boldsymbol{\Pi^{(3)}})\] where \(\boldsymbol{\Pi^{(3)}}=\big{(}(\pi_{i}^{(3)})_{i=1}^{r-2},\Pi_{r-1}\big{)}\), and \(\boldsymbol{S^{(3)}}\) is the collection of maps associated with \(Z_{r}\) and the third row from the right of (3.6). We continue inductively and obtain the desired inequalities, where \(g\) is taken as \(f\circ\phi_{1}\circ\phi_{2}\circ\cdots\circ\phi_{r}\). ## 4. Proof of \(P^{\boldsymbol{a}}(f)\leq P^{\boldsymbol{a}}_{\mathrm{var}}(f)\). Let \(\boldsymbol{a}=(a_{1},\cdots,a_{r-1})\in[0,1]^{r-1}\). Recall that we defined \((w_{1},\ldots,w_{r})\) by \[\left\{\begin{array}{l}w_{1}=a_{1}a_{2}a_{3}\cdots a_{r-1}\\ w_{2}=(1-a_{1})a_{2}a_{3}\cdots a_{r-1}\\ w_{3}=(1-a_{2})a_{3}\cdots a_{r-1}\\ \qquad\qquad\vdots\\ w_{r-1}=(1-a_{r-2})a_{r-1}\\ w_{r}=1-a_{r-1}\end{array}\right.\] and \(P^{\mathbf{a}}_{\rm var}(f)\) by \[P^{\mathbf{a}}_{\rm var}(f)=\sup_{\mu\in\mathscr{A}^{T_{1}}(X_{1})}\left(\sum_{i=1}^{ r}w_{i}h_{\pi^{(i-1)}}{}_{*\mu}(T_{i})+w_{1}\int_{X_{1}}fd\mu\right)\] where \[\pi^{(0)}={\rm id}_{X_{1}}:X_{1}\to X_{1},\] \[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:X_{1}\to X_{i+1}.\] The following theorem suffices by Theorem 3.10 in proving \(P^{\mathbf{a}}(f)\leq P^{\mathbf{a}}_{\rm var}(f)\) for arbitrary dynamical systems. **Theorem 4.1**.: _Suppose \((X_{i},\,T_{i})\) (\(i=1,\,2,\,\ldots,\,r\)) are zero-dimensional dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\,\,(i=1,\,2,\,...,\,r-1)\) are factor maps. Then we have_ \[P^{\mathbf{a}}(f)\leq P^{\mathbf{a}}_{\rm var}(f)\] _for any continuous function \(f:X_{1}\to\mathbb{R}\)._ Proof.: Let \(d^{(i)}\) be a metric on \(X_{i}\) for each \(i=1,2,\ldots,r\). Take a positive number \(\varepsilon\) and a natural number \(N\). First, we will backward inductively define a finite clopen partition \(\mathscr{A}^{(i)}\) of \(X_{i}\) for each \(i\). Since \(X_{r}\) is zero-dimensional, we can take a sufficiently fine finite clopen partition \(\mathscr{A}^{(r)}\) of \(X_{r}\). That is, each \(A\in\mathscr{A}^{(r)}\) is both open and closed, and \({\rm diam}(A,d^{(r)}_{N})<\varepsilon\). Suppose \(\mathscr{A}^{(i+1)}\) is defined. For each \(A\in\mathscr{A}^{(i+1)}\), take a clopen partition \(\mathscr{B}(A)\) of \(\pi_{i}^{-1}(A)\subset X_{i}\) such that any \(B\in\mathscr{B}(A)\) satisfies \({\rm diam}(B,d^{(i)}_{N})<\varepsilon\). We let \(\mathscr{A}^{(i)}=\bigcup_{A\in\mathscr{A}^{(i+1)}}\mathscr{B}(A)\). Then \(\mathscr{A}^{(i)}\) is a finite clopen partition of \(X_{i}\). 
We define \[\mathscr{A}^{(i)}_{N}=\mathscr{A}^{(i)}\lor T_{i}^{-1}\mathscr{A}^{(i)}\lor T _{i}^{-2}\mathscr{A}^{(i)}\vee\cdots\lor T_{i}^{-(N-1)}\mathscr{A}^{(i)}.\] We employ the following notations. For \(i<j\) and \(A\in\mathscr{A}^{(j)}_{N}\), let \(\mathscr{A}^{(i)}_{N}(A)\) be the set of "children" of A; \[\mathscr{A}^{(i)}_{N}(A)=\left\{B\in\mathscr{A}^{(i)}_{N}\Big{|}\,\pi_{j-1} \circ\pi_{j-2}\circ\cdots\circ\pi_{i}(B)\subset A\right\}.\] Also, for \(B\in\mathscr{A}^{(i)}_{N}\) and \(i<j\), we denote by \(\widetilde{\pi}_{j}B\) the unique "parent" of \(B\) in \(\mathscr{A}^{(j)}_{N}\); \[\widetilde{\pi}_{j}B=A\in\mathscr{A}^{(j)}_{N}\text{ such that }\pi_{j-1}\circ\pi_{j-2}\circ\cdots\circ\pi_{i}(B) \subset A.\] We will evaluate \(P^{\mathbf{a}}(f,N,\varepsilon)\) from above using \(\{\mathscr{A}^{(i)}\}\). Let \(A\in\mathscr{A}^{(2)}_{N}\), and start by setting \[Z^{(1)}_{N}(A)=\sum_{B\in\mathscr{A}^{(1)}_{N}(A)}e^{\sup_{B}S_{N}f}.\] Let \(A\in\mathscr{A}^{(i+1)}_{N}\). If \(Z^{(i-1)}_{N}\) is already defined, set \[Z^{(i)}_{N}(A)=\sum_{B\in\mathscr{A}^{(i)}_{N}(A)}\left(Z^{(i-1)}_{N}(B)\right) ^{a_{i-1}}.\] We then define \(Z_{N}\) by \[Z_{N}=\sum_{A\in\mathscr{A}_{N}^{(r)}}\left(Z_{N}^{(r-1)}(A)\right)^{a_{r-1}}.\] It is straightforward from the construction that \[P_{r}^{\boldsymbol{a}}(X_{r},f,N,\varepsilon)\leq Z_{N}.\] Therefore, we only need to prove that there is a \(T_{1}\)-invariant probability measure \(\mu\) on \(X_{1}\) such that \[\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}*\mu}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{ 1}}fd\mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\] Since each \(A\in\mathscr{A}_{N}^{(1)}\) is closed, we can choose a point \(x_{A}\in A\) so that \[S_{N}f(x_{A})=\sup_{A}S_{N}f.\] We define a probability measure \(\sigma_{N}\) on \(X_{1}\) by \[\sigma_{N}=\frac{1}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(1)}}Z_{N}^{ (r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}Z_{N}^{(r-2)}(\widetilde{\pi}_{r-1}A) ^{a_{r-2}-1}\] \[\times\cdots\times Z_{N}^{(2)}(\widetilde{\pi}_{3}A)^{a_{2}-1}Z_ {N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\delta_{x_{A}}\] where \(\delta_{x_{A}}\) is the Dirac measure at \(x_{A}\). 
This is indeed a probability measure on \(X_{1}\) since \[\sigma_{N}(X_{1})=\frac{1}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(1)}} Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}Z_{N}^{(r-2)}(\widetilde{\pi}_{r- 1}A)^{a_{r-2}-1}\] \[\times\cdots\times Z_{N}^{(2)}(\widetilde{\pi}_{3}A)^{a_{2}-1}Z_ {N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\] \[=\frac{1}{Z_{N}}\sum_{A_{r}\in\mathscr{A}_{N}^{(r)}}Z_{N}^{(r-1) }(A_{r})^{a_{r-1}-1}\sum_{A_{r-1}\in\mathscr{A}_{N}^{(r-1)}(A_{r})}Z_{N}^{(r-2 )}(A_{r-1})^{a_{r-2}-1}\] \[\cdots\sum_{A_{3}\in\mathscr{A}_{N}^{(3)}(A_{4})}Z_{N}^{(2)}(A_{3 })^{a_{2}-1}\sum_{A_{2}\in\mathscr{A}_{N}^{(2)}(A_{3})}Z_{N}^{(1)}(A_{2})^{a_{ 1}-1}\underbrace{\sum_{A_{1}\in\mathscr{A}_{N}^{(1)}(A_{2})}e^{S_{N}f(x_{A_{1} })}}_{=Z_{N}^{(1)}(A_{2})}\] \[=\frac{1}{Z_{N}}\sum_{A_{r}\in\mathscr{A}_{N}^{(r)}}Z_{N}^{(r-1) }(A_{r})^{a_{r-1}-1}\sum_{A_{r-1}\in\mathscr{A}_{N}^{(r-1)}(A_{r})}Z_{N}^{(r-2 )}(A_{r-1})^{a_{r-2}-1}\] \[\cdots\sum_{A_{3}\in\mathscr{A}_{N}^{(3)}(A_{4})}Z_{N}^{(2)}(A_{3 })^{a_{2}-1}\underbrace{\sum_{A_{2}\in\mathscr{A}_{N}^{(2)}(A_{3})}Z_{N}^{(1)} (A_{2})^{a_{1}}}_{=Z_{N}^{(2)}(A_{3})}\] \[=\cdots=\frac{1}{Z_{N}}\sum_{A_{r}\in\mathscr{A}_{N}^{(r)}}Z_{N}^ {(j-1)}(A_{r})^{a_{r-1}}=1.\] Although \(\sigma_{N}\) is not generally \(T_{1}\)-invariant, the following well-known trick allows us to create a \(T_{1}\)-invariant measure \(\mu\). We begin by setting \[\mu_{N}=\frac{1}{N}\sum_{k=0}^{N-1}{T_{1}}^{k}{}_{*}\sigma_{N}.\] Since \(X_{1}\) is compact, we can take a sub-sequence of \((\mu_{N})_{N}\) so that it weakly converges to a probability measure \(\mu\) on \(X_{1}\). Then \(\mu\) is \(T_{1}\)-invariant by the definition of \(\mu_{N}\). We will show that this \(\mu\) satisfies \[\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}{}_{*}\mu}(T_{i},\mathscr{A}^{(i)})+w_{1}\int _{X_{1}}fd\mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\] We first prove \[\sum_{i=1}^{r}w_{i}H_{\pi^{(i-1)}{}_{*}\sigma_{N}}(\mathscr{A}^{(i)}_{N})+w_{ 1}\int_{X_{1}}S_{N}fd\mu=\log Z_{N}.\] To simplify the notations, let \[\sigma^{(i)}_{N} =\pi^{(i-1)}{}_{*}\sigma_{N}\] \[=\frac{1}{Z_{N}}\sum_{B\in\mathscr{A}^{(1)}_{N}}Z_{N}^{(r-1)}( \widetilde{\pi}_{r}B)^{a_{r-1}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B)^{a_ {1}-1}e^{S_{N}f(x_{B})}\delta_{\pi^{(i)}(x_{B})}\] and \[W_{N}^{(j)}=\sum_{A\in\mathscr{A}^{(j+1)}_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{ r}A)^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A)^{a_{j+1}-1}Z_{N}^{( j)}(A)^{a_{j}}\log\Big{(}Z_{N}^{(j)}(A)\Big{)}.\] **Claim 4.2**.: _We have the following equations:_ \[H_{\sigma_{N}}(\mathscr{A}^{(1)}_{N})=\log Z_{N}-\int_{X_{1}}S_{N}fd\sigma_{N }-\sum_{j=1}^{r-1}\frac{a_{j}-1}{Z_{n}}W_{N}^{(j)},\] \[H_{\sigma^{(i)}_{N}}(\mathscr{A}^{(i)}_{N})=\log Z_{N}-\frac{a_{i-1}}{Z_{n}}W _{N}^{(i-1)}-\sum_{j=i}^{r-1}\frac{a_{j}-1}{Z_{n}}W_{N}^{(j)}\ \ (\text{ for 2 }\leq i\leq r\,).\] _Here, \(\sum_{j=r}^{r-1}\frac{a_{j}-1}{Z_{n}}W_{N}^{(j)}\) is defined to be \(0\)._ Proof.: Let \(A\in\mathscr{A}^{(1)}_{N}\). 
We have \[\sigma_{N}(A)=\frac{1}{Z_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1} \cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}.\] Then \[\sigma_{N}(A)=\frac{1}{Z_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1} \cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}.\] \[H_{\sigma_{N}}(\mathscr{A}_{N}^{(1)})=-\sum_{A\in\mathscr{A}_{N}^{(1)}} \sigma_{N}(A)\log\left(\sigma_{N}(A)\right)\] \[= \log Z_{N}-\underbrace{\sum_{A\in\mathscr{A}_{N}^{(1)}}\sigma_{N} (A)S_{N}f(x_{A})}_{\text{(I)}}\] \[-\sum_{j=1}^{r-1}\frac{a_{j}-1}{Z_{N}}\underbrace{\sum_{A\in \mathscr{A}_{N}^{(1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_ {N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\log\left(Z_{N}^{(j) }(\widetilde{\pi}_{j+1}A)\right)}_{\text{(II)}}.\] For (I), we have \[\int_{X_{1}}S_{N}fd\sigma_{N} =\frac{1}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(1)}}Z_{N}^{(r-1)}( \widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(2)}(\widetilde{\pi}_{3}A)^{a _{2}-1}Z_{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}S_{N}f(x_{A})\] \[= \text{(I)}.\] We will show that (II) \(=W_{N}^{(j)}\). Let \(A^{\prime}\in\mathscr{A}_{N}^{(j+1)}\). Then any \(A\in\mathscr{A}_{N}^{(1)}(A^{\prime})\) satisfies \(\widetilde{\pi}_{j+1}A=A^{\prime}\). Hence, \[\text{(II)} =\sum_{A^{\prime}\in\mathscr{A}_{N}^{(j+1)}}\sum_{A\in\mathscr{A} _{N}^{(1)}(A^{\prime})}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z _{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\log\left(Z_{N}^{(j) }(\widetilde{\pi}_{j+1}A)\right)\] \[= \sum_{A^{\prime}\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}( \widetilde{\pi}_{r}A^{\prime})^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi }_{j+2}A^{\prime})^{a_{j+1}-1}Z_{N}^{(j)}(A^{\prime})^{a_{j}-1}\log\left(Z_{N} ^{(j)}(A^{\prime})\right)\] \[\times\underbrace{\sum_{A\in\mathscr{A}_{N}^{(1)}(A^{\prime})}Z_ {N}^{(j-1)}(\widetilde{\pi}_{j}A)^{a_{j-1}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi }_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}}_{\text{(II)}^{\prime}}.\] The term (II)\({}^{\prime}\) can be calculated similarly to how we showed \(\sigma_{N}(X_{1})=1\). Namely, \[\text{(II)}^{\prime}=\sum_{A_{j}\in\mathscr{A}_{N}^{(j)}(A^{\prime})}Z_{N}^{ (j-1)}(A_{j})^{a_{j-1}-1}\sum_{A_{j-1}\in\mathscr{A}_{N}^{(j-1)}(A_{j})}Z_{N}^ {(j-2)}(A_{j-1})^{a_{j-2}-1}\] \[\cdots\sum_{A_{3}\in\mathscr{A}_{N}^{(3)}(A_{4})}Z_{N}^{(2)}(A_{ 3})^{a_{2}-1}\sum_{A_{2}\in\mathscr{A}_{N}^{(2)}(A_{3})}Z_{N}^{(1)}(A_{2})^{a_ {1}-1}\underbrace{\sum_{A_{1}\in\mathscr{A}_{N}^{(1)}(A_{2})}e^{S_{N}f(x_{A_{1} })}}_{=Z_{N}^{(1)}(A_{2})}\] \[=\cdots=\sum_{A_{j}\in\mathscr{A}_{N}^{(j)}(A^{\prime})}Z_{N}^{(j-1)}(A_{j})^{a _{j-1}}=Z_{N}^{(j)}(A^{\prime}).\] Thus, we get \[(\Pi) =\sum_{A\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r} A)^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A)^{a_{j+1}-1}\cdot Z_{N}^{ (j)}(A)^{a_{j}}\log\Big{(}Z_{N}^{(j)}(A)\Big{)}\] \[=W_{N}^{(j)}.\] This completes the proof of the first assertion. Next, let \(2\leq i\leq r\). 
For any \(A\in\mathscr{A}_{N}^{(i)}\), \[\sigma_{N}^{(i)}(A) =\frac{1}{Z_{n}}\sum_{\begin{subarray}{c}B\in\mathscr{A}_{N}^{(1) },\\ \pi^{(i)}(x_{B})\in A\end{subarray}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}B)^{a_{r-1 }-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B)^{a_{1}-1}e^{S_{N}f(x_{B})}\] \[=\frac{1}{Z_{n}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1} \cdots Z_{N}^{(i-1)}(\widetilde{\pi}_{i}A)^{a_{i-1}-1}\] \[\qquad\qquad\times\sum_{B\in\mathscr{A}_{N}^{(1)}(A)}Z_{N}^{(i-2 )}(\widetilde{\pi}_{i-1}B)^{a_{i-2}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B )^{a_{1}-1}e^{S_{N}f(x_{B})}.\] As in the evaluation of \((\Pi)^{\prime}\), we have \[\sum_{B\in\mathscr{A}_{N}^{(1)}(A)}Z_{N}^{(i-2)}(\widetilde{\pi}_{i-1}B)^{a_{ i-2}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B)^{a_{1}-1}e^{S_{N}f(x_{B})}=Z_{N}^{ (i-1)}(A)^{a_{i-1}}.\] Hence, \[\sigma_{N}^{(i)}(A)=\frac{1}{Z_{n}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r- 1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_ {i-1}}.\] Therefore, \[H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})=-\sum_{A\in\mathscr{ A}_{N}^{(i)}}\sigma_{N}^{(i)}(A)\log\sigma_{N}^{(i)}(A)\] \[=\log Z_{N}-\frac{1}{Z_{n}}\sum_{A\in\mathscr{A}_{N}^{(i)}}Z_{N}^ {(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{ i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_{i-1}}\] \[\qquad\qquad\qquad\qquad\times\log\Big{(}Z_{N}^{(r-1)}(\widetilde {\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{i+1}A)^{a_{i}-1}Z_{ N}^{(i-1)}(A)^{a_{i-1}}\Big{)}\] \[=\log Z_{N}-\frac{a_{i-1}}{Z_{n}}\sum_{A\in\mathscr{A}_{N}^{(i)}} Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{ i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_{i-1}}\log\Big{(}Z_{N}^{(i-1)}(A)\Big{)}\] \[\qquad-\sum_{j=i}^{r-1}\frac{a_{j}-1}{Z_{n}}\sum_{A\in\mathscr{A }_{N}^{(i)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}( \widetilde{\pi}_{i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_{i-1}}\log\Big{(}Z_{N}^{(j )}(\widetilde{\pi}_{j+1}A)\Big{)}.\] Note that we have \[\sum_{A\in\mathscr{A}_{N}^{(i)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1} \cdots Z_{N}^{(i)}(\widetilde{\pi}_{i+1}A)^{a_{i-1}}Z_{N}^{(i-1)}(A)^{a_{i-1}} \log\Big{(}Z_{N}^{(j)}(\widetilde{\pi}_{j+1}A)\Big{)}\] \[=\sum_{A_{j+1}\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A_{j+ 1})^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A_{j+1})^{a_{j}-1}Z_{ N}^{(j)}(A_{j+1})^{a_{j-1}-1}\log\Big{(}Z_{N}^{(j)}(A_{j+1})\Big{)}\] \[\times\sum_{A_{j}\in\mathscr{A}_{N}^{(j)}(A_{j+1})}Z_{N}^{(j-1)}(A_{j})^{a_{j- 2}-1}\cdots\sum_{A_{i+1}\in\mathscr{A}_{N}^{(i+1)}(A_{i+2})}Z_{N}^{(i)}(A_{i+ 1})^{a_{i+1}-1}\underbrace{\sum_{A_{i}\in\mathscr{A}_{N}^{(i)}(A_{i+1})}Z_{N} ^{(i-1)}(A_{i})^{a_{i-1}}}_{=Z_{N}^{(i)}(A_{i+1})}\] \[=\cdots=\sum_{A_{j+1}\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_ {r}A_{j+1})^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A_{j+1})^{a_ {j}-1}Z_{N}^{(j)}(A_{j+1})^{a_{j-1}}\log\Big{(}Z_{N}^{(j)}(A_{j+1})\Big{)}.\] We conclude that \[H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})=\log Z_{N}-\frac{a_{i-1}}{Z_{n}}W_ {N}^{(i-1)}-\sum_{j=i}^{r-1}\frac{a_{j}-1}{Z_{n}}W_{N}^{(j)}.\] This completes the proof of the claim. 
By this claim, \[\sum_{i=1}^{r}w_{i}H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})+w_{1}\int_{X_{ 1}}\!S_{N}fd\mu=\log Z_{N}-\sum_{i=2}^{r}\frac{w_{i}a_{i-1}}{Z_{n}}W_{N}^{(i-1 )}-\sum_{i=1}^{r-1}\sum_{j=i}^{r-1}\frac{w_{i}(a_{j}-1)}{Z_{n}}W_{N}^{(j)}.\] However, we have \[\sum_{i=2}^{r}w_{i}a_{i-1}W_{N}^{(i-1)}+\sum_{i=1}^{r-1}\sum_{j=i}^{r-1}w_{i}( a_{j}-1)W_{N}^{(j)}=0.\] Indeed, the coefficient of \(W_{N}^{(k)}\) (\(1\leq k\leq r-1\)) is \[w_{k+1}a_{k}+(a_{k}-1)\sum_{i=1}^{k}w_{i} =w_{k+1}a_{k}+(a_{k}-1)a_{k}a_{k+1}\cdots a_{r-1}\] \[=a_{k}\{w_{k+1}-(1-a_{k})a_{k+1}a_{k+2}\cdots a_{r-1}\}=0.\] Thus, we have \[\sum_{i=1}^{r}w_{i}H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})+w_{1}\int_{X_{1 }}\!S_{N}fd\mu=\log Z_{N}. \tag{4.1}\] Let \(\mu^{(i)}=\pi^{(i-1)}{}_{*}\mu\) and \(\mu_{N}^{(i)}=\pi^{(i-1)}{}_{*}\mu_{N}.\) **Lemma 4.3**.: _Let \(N\) and \(M\) be natural numbers. For any \(1\leq i\leq r\),_ \[\frac{1}{M}H_{\mu_{N}^{(i)}}(\mathscr{A}_{M}^{(i)})\geq\frac{1}{N}H_{\sigma_{N }^{(i)}}(\mathscr{A}_{N}^{(i)})-\frac{2M\log|\mathscr{A}^{(i)}|}{N}.\] _Here, \(|\mathscr{A}^{(i)}|\) is the number of elements in \(\mathscr{A}^{(i)}\)._ Suppose this is true, and let \(N\) and \(M\) be natural numbers. Together with (4.1), we obtain the following evaluation; \[\sum_{i=1}^{r}\frac{w_{i}}{M}H_{\mu_{N}^{(i)}}(\mathscr{A}_{M}^{(i )})+w_{1}\int_{X_{1}}fd\mu_{N} \geq\sum_{i=1}^{r}\frac{w_{i}}{N}H_{\sigma_{N}^{(i)}}(\mathscr{A}_ {N}^{(i)})-\sum_{i=1}^{r}\frac{2M\log|\mathscr{A}^{(i)}|}{N}+\frac{w_{1}}{N} \int_{X_{1}}S_{N}fd\sigma_{N}\] \[=\frac{\log Z_{N}}{N}-\sum_{i=1}^{r}\frac{2M\log|\mathscr{A}^{(i) }|}{N}.\] Let \(N=N_{k}\to\infty\) along the sub-sequence \((N_{k})\) for which \(\mu_{N_{k}}\rightharpoonup\mu\). This yields \[\sum_{i=1}^{r}\frac{w_{i}}{M}H_{\mu^{(i)}}(\mathscr{A}_{M}^{(i)})+w_{1}\int_{ X_{1}}fd\mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\] We let \(M\to\infty\) and get \[\sum_{i=1}^{r}w_{i}h_{\mu^{(i)}}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{1}}fd \mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\] Hence, \[P_{\mathrm{var}}^{\boldsymbol{a}}(f)\geq P^{\boldsymbol{a}}(f).\] We are left to prove Lemma 4.3. Proof of Lemma 4.3.: This statement appears in the proof of variational principle in [22, Theorem 8.6], and Tsukamoto also proves it in [22, Claim 6.3]. The following proof is taken from the latter. We will explain for \(i=1\); the same argument works for all \(i\). Let \(\mathscr{A}=\mathscr{A}^{(1)}\). Recall that \(\mu_{N}=\frac{1}{N}\sum_{k=0}^{N-1}{T_{1}^{k}}_{*}\sigma_{N}\). Since the entropy function is concave (Lemma 3.5), we have \[H_{\mu_{N}}(\mathscr{A}_{M})\geq\frac{1}{N}\sum_{k=0}^{N-1}H_{{T_{1}^{k}}_{* }\sigma_{N}}(\mathscr{A}_{M})=\frac{1}{N}\sum_{k=0}^{N-1}H_{\sigma_{N}}(T_{1} ^{-k}\mathscr{A}_{M}).\] Let \(N=qM+r\) with \(0\leq r<M\), then \[\sum_{k=0}^{N-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A}_{M}) =\sum_{s=0}^{q}\sum_{t=0}^{M-1}H_{\sigma_{N}}(T_{1}^{-sM-t} \mathscr{A}_{M})-\sum_{k=N}^{qM+M-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A}_{M})\] \[\geq\sum_{t=0}^{M-1}\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t} \mathscr{A}_{M})-M\log|\mathscr{A}_{M}|\] \[\geq\sum_{t=0}^{M-1}\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t} \mathscr{A}_{M})-M^{2}\log|\mathscr{A}|. \tag{4.2}\] We will evaluate \(\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t}\mathscr{A}_{M})\) from below for each \(0\leq t\leq M-1\). First, observe that \[T_{1}^{-sM-t}\mathscr{A}_{M}=\bigvee_{j=0}^{M-1}T_{1}^{-sM-t-j}\mathscr{A}.\] We have \[\{sM+t+j\,|\,0\leq s\leq q,0\leq j\leq M-1\}=\{t,t+1,\ldots,t+qM+M-1\}\] without multiplicity. 
Therefore, \[H_{\sigma_{N}}(\mathscr{A}_{N}) \leq H_{\sigma_{N}}\left(\bigvee_{k=0}^{t+(q+1)M-1}T_{1}^{-k}\mathscr{A}\right)\qquad\text{by }N<t+(q+1)M\] \[\leq\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t}\mathscr{A}_{M})+\sum_{k=0}^{t-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A})\qquad\text{by subadditivity of entropy.}\] This implies \[\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t}\mathscr{A}_{M}) \geq H_{\sigma_{N}}(\mathscr{A}_{N})-\sum_{k=0}^{t-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A})\] \[\geq H_{\sigma_{N}}(\mathscr{A}_{N})-M\log|\mathscr{A}|\qquad\text{by }t<M.\] Now, we sum over \(t\) and obtain \[\sum_{t=0}^{M-1}\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t}\mathscr{A}_{M})\geq MH_{\sigma_{N}}(\mathscr{A}_{N})-M^{2}\log|\mathscr{A}|.\] Combining with (4.2), this implies \[\sum_{k=0}^{N-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A}_{M})\geq MH_{\sigma_{N}}(\mathscr{A}_{N})-2M^{2}\log|\mathscr{A}|.\] It follows that \[\frac{1}{M}H_{\mu_{N}}(\mathscr{A}_{M})\geq\frac{1}{MN}\sum_{k=0}^{N-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A}_{M})\geq\frac{1}{N}H_{\sigma_{N}}(\mathscr{A}_{N})-\frac{2M\log|\mathscr{A}|}{N}.\] This completes the proof of Lemma 4.3 and hence of Theorem 4.1. ## 5. Proof of \(P^{\boldsymbol{a}}_{\rm var}(f)\leq P^{\boldsymbol{a}}(f)\). It seems difficult to implement the zero-dimensional trick to prove \(P^{\boldsymbol{a}}_{\rm var}(f)\leq P^{\boldsymbol{a}}(f)\). Hence, the proof is more complicated. **Theorem 5.1**.: _Suppose that \((X_{i},T_{i})\) (\(i=1,2,\ldots,r\)) are dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,2,...,r-1)\) are factor maps. Then we have_ \[P^{\boldsymbol{a}}_{\rm var}(f)\leq P^{\boldsymbol{a}}(f)\] _for any continuous function \(f:X_{1}\to\mathbb{R}\)._ Proof.: Take and fix \(\mu\in\mathscr{M}^{T_{1}}(X_{1})\). Let \(\mu_{i}=\pi^{(i-1)}{}_{*}\mu\). We need to prove \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T}).\] However, the following argument shows that an evaluation up to a constant suffices: suppose there is a positive number \(C\), depending neither on \(f\) nor on \((T_{i})_{i}\), satisfying \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+C. \tag{5.1}\] Applying this to \(S_{m}f\) and \(\boldsymbol{T}^{m}=(T_{i}{}^{m})_{i}\) for \(m\in\mathbb{N}\) yields \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i}{}^{m})+w_{1}\int_{X_{1}}\!S_{m}fd\mu\leq P^{\boldsymbol{a}}(S_{m}f,\boldsymbol{T}^{m})+C.\] We employ Lemma 3.3 and get \[m\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+mw_{1}\int_{X_{1}}\!fd\mu\leq mP^{\boldsymbol{a}}(f,\boldsymbol{T})+C.\] Divide by \(m\) and let \(m\to\infty\). We obtain the desired inequality \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T}).\] Therefore, we only need to prove (5.1). Let \(\mathscr{A}^{(i)}=\{A_{1}^{(i)},A_{2}^{(i)},\cdots,A_{m_{i}}^{(i)}\}\) be an arbitrary partition of \(X_{i}\) for each \(i\). We will prove \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+C.\] We start by approximating elements of \(\mathscr{A}^{(i)}\) with compact sets using backward induction. 
For \(1\leq i\leq r\), let \[\Lambda_{i}^{0}=\{0,1,\cdots,m_{r}\}\times\{0,1,\cdots,m_{r-1}\}\times\cdots\times\{0,1,\cdots,m_{i+1}\}\times\{0,1,\cdots,m_{i}\},\] \[\Lambda_{i}=\{0,1,\cdots,m_{r}\}\times\{0,1,\cdots,m_{r-1}\}\times\cdots\times\{0,1,\cdots,m_{i+1}\}\times\{1,2,\cdots,m_{i}\}.\] We will denote an element \((j_{r},j_{r-1},\cdots,j_{i})\) in \(\Lambda_{i}^{0}\) or \(\Lambda_{i}\) by \(j_{r}j_{r-1}\cdots j_{i}\). For each \(A_{j}^{(r)}\in\mathscr{A}^{(r)}\), take a compact set \(C_{j}^{(r)}\subset A_{j}^{(r)}\) such that \[\log m_{r}\cdot\sum_{j=1}^{m_{r}}\mu_{r}(A_{j}^{(r)}\setminus C_{j}^{(r)})<1.\] Define \(C_{0}^{(r)}\) as the remainder of \(X_{r}\), which may not be compact; \[C_{0}^{(r)}=\bigcup_{j=1}^{m_{r}}A_{j}^{(r)}\setminus C_{j}^{(r)}=X_{r}\setminus\bigcup_{j=1}^{m_{r}}C_{j}^{(r)}.\] Then \(\mathscr{C}^{(r)}:=\{C_{0}^{(r)},C_{1}^{(r)},\cdots,C_{m_{r}}^{(r)}\}\) is a measurable partition of \(X_{r}\). Next, consider the partition \(\pi_{r-1}^{-1}(\mathscr{C}^{(r)})\vee\mathscr{A}^{(r-1)}\) of \(X_{r-1}\). For \(j_{r}j_{r-1}\in\Lambda_{r-1}\), let \[B_{j_{r}j_{r-1}}^{(r-1)}=\pi_{r-1}^{-1}(C_{j_{r}}^{(r)})\cap A_{j_{r-1}}^{(r-1)}.\] Then \[\pi_{r-1}^{-1}(\mathscr{C}^{(r)})\vee\mathscr{A}^{(r-1)}=\left\{B_{j_{r}j_{r-1}}^{(r-1)}\Big{|}\ j_{r}j_{r-1}\in\Lambda_{r-1}\ \right\},\] and for each \(j_{r}\in\Lambda_{r}^{0}\) \[\bigcup_{j_{r-1}=1}^{m_{r-1}}B_{j_{r}j_{r-1}}^{(r-1)}=\pi_{r-1}^{-1}(C_{j_{r}}^{(r)}).\] For each \(j_{r}j_{r-1}\in\Lambda_{r-1}\), take a compact set \(C_{j_{r}j_{r-1}}^{(r-1)}\subset B_{j_{r}j_{r-1}}^{(r-1)}\) (which could be empty) such that \[\log|\Lambda_{r-1}|\cdot\sum_{j_{r}=0}^{m_{r}}\sum_{j_{r-1}=1}^{m_{r-1}}\mu_{r-1}(B_{j_{r}j_{r-1}}^{(r-1)}\setminus C_{j_{r}j_{r-1}}^{(r-1)})<1.\] Define \(C_{j_{r}0}^{(r-1)}\) as the remainder of \(\pi_{r-1}^{-1}(C_{j_{r}}^{(r)})\); \[C_{j_{r}0}^{(r-1)}=\pi_{r-1}^{-1}(C_{j_{r}}^{(r)})\setminus\bigcup_{j_{r-1}=1}^{m_{r-1}}C_{j_{r}j_{r-1}}^{(r-1)}.\] Then \(\mathscr{C}^{(r-1)}=\left\{C_{j_{r}j_{r-1}}^{(r-1)}\Big{|}j_{r}j_{r-1}\in\Lambda_{r-1}^{0}\right\}\) is a measurable partition of \(X_{r-1}\). Continue in this manner, and suppose we have obtained the partition \(\mathscr{C}^{(k)}=\left\{C_{J}^{(k)}\Big{|}J\in\Lambda_{k}^{0}\right\}\) of \(X_{k}\) for \(k=i+1,i+2,\ldots,r\). We will define \(\mathscr{C}^{(i)}\). Each element in \(\pi_{i}^{-1}(\mathscr{C}^{(i+1)})\vee\mathscr{A}^{(i)}\) can be expressed using \(J^{\prime}\in\Lambda_{i+1}^{0}\) and \(j_{i}\in\{1,2,\ldots,m_{i}\}\) by \[B_{J^{\prime}j_{i}}^{(i)}=\pi_{i}^{-1}(C_{J^{\prime}}^{(i+1)})\cap A_{j_{i}}^{(i)}.\] Choose a compact set \(C_{J}^{(i)}\subset B_{J}^{(i)}\) for each \(J\in\Lambda_{i}\) so that \[\log|\Lambda_{i}|\cdot\sum_{J^{\prime}\in\Lambda_{i+1}^{0}}\sum_{j_{i}=1}^{m_{i}}\mu_{i}\left(B_{J^{\prime}j_{i}}^{(i)}\setminus C_{J^{\prime}j_{i}}^{(i)}\right)<1.\] Finally, for \(J^{\prime}\in\Lambda_{i+1}^{0}\), let \[C_{J^{\prime}0}^{(i)}=\pi_{i}^{-1}(C_{J^{\prime}}^{(i+1)})\setminus\bigcup_{j_{i}=1}^{m_{i}}C_{J^{\prime}j_{i}}^{(i)}.\] Set \(\mathscr{C}^{(i)}=\left\{C_{J}^{(i)}\Big{|}J\in\Lambda_{i}^{0}\right\}\). This is a partition of \(X_{i}\). 
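For orientation, here is the specialization of this construction to the simplest nontrivial case \(r=2\); this illustration is ours and is a direct instance of the definitions above. The index sets reduce to \[\Lambda_{2}^{0}=\{0,1,\ldots,m_{2}\},\qquad\Lambda_{1}=\{0,1,\ldots,m_{2}\}\times\{1,2,\ldots,m_{1}\}.\] One first replaces \(\mathscr{A}^{(2)}\) by \(\mathscr{C}^{(2)}=\{C_{0}^{(2)},C_{1}^{(2)},\ldots,C_{m_{2}}^{(2)}\}\) with compact \(C_{j}^{(2)}\subset A_{j}^{(2)}\) and small total error. Then, inside each fiber \(\pi_{1}^{-1}(C_{j_{2}}^{(2)})\), one chooses compact sets \[C_{j_{2}j_{1}}^{(1)}\subset B_{j_{2}j_{1}}^{(1)}=\pi_{1}^{-1}(C_{j_{2}}^{(2)})\cap A_{j_{1}}^{(1)}\] and collects the leftover mass of the fiber into \(C_{j_{2}0}^{(1)}\). By construction, \(\mathscr{C}^{(1)}\) refines \(\pi_{1}^{-1}(\mathscr{C}^{(2)})\), which is the property used repeatedly below.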
**Lemma 5.2**.: _For \(\mathscr{C}^{(i)}\) constructed above, we have_ \[h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)})\leq h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+1.\] Proof.: By Lemma 3.5, \[h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)}) \leq h_{\mu_{i}}\big{(}T_{i},\mathscr{A}^{(i)}\vee\pi_{i}^{-1}( \mathscr{C}^{(i+1)})\big{)}\] \[\leq h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+H_{\mu_{i}}\left( \mathscr{A}^{(i)}\vee\pi_{i}^{-1}(\mathscr{C}^{(i+1)})\big{|}\mathscr{C}^{(i )}\right).\] Since \(C_{J}^{(i)}\subset B_{J}^{(i)}\) for \(J\in\Lambda_{i}\), \[H_{\mu_{i}}\left(\mathscr{A}^{(i)}\vee\pi_{i}^{-1}(\mathscr{C}^ {(i+1)})\big{|}\mathscr{C}^{(i)}\right)\] \[=-\sum_{\begin{subarray}{c}J\in\Lambda_{i}^{0}\\ \mu_{i}(C_{J}^{(i)})\neq 0\end{subarray}}\mu_{i}(C_{J}^{(i)})\sum_{K\in \Lambda_{i}}\frac{\mu_{i}(B_{K}^{(i)}\cap C_{J}^{(i)})}{\mu_{i}(C_{J}^{(i)})} \log\left(\frac{\mu_{i}(B_{K}^{(i)}\cap C_{J}^{(i)})}{\mu_{i}(C_{J}^{(i)})}\right)\] \[=-\sum_{\begin{subarray}{c}J^{\prime}\in\Lambda_{i+1}^{0}\\ \mu_{i}(C_{J^{\prime}0}^{(i)})\neq 0\end{subarray}}\mu_{i}(C_{J^{\prime}0}^{(i)}) \sum_{j_{i}=1}^{m_{i}}\frac{\mu_{i}(B_{J^{\prime}j_{i}}^{(i)}\cap C_{J^{\prime }0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)})}\log\left(\frac{\mu_{i}(B_{J^{ \prime}j_{i}}^{(i)}\cap C_{J^{\prime}0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)})} \right).\] By Lemma 3.4, we have \[-\sum_{j_{i}=1}^{m_{i}}\frac{\mu_{i}(B_{J^{\prime}j_{i}}^{(i)}\cap C_{J^{ \prime}0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)})}\log\left(\frac{\mu_{i}(B_{J^ {\prime}j_{i}}^{(i)}\cap C_{J^{\prime}0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)} )}\right)\leq\log|\Lambda_{i}|.\] Therefore, \[H_{\mu_{i}}\left(\mathscr{A}^{(i)}\vee\pi_{i}^{-1}(\mathscr{C}^{(i+1)}) \big{|}\mathscr{C}^{(i)}\right)\leq\log|\Lambda_{i}|\sum_{J^{\prime}\in\Lambda_ {i+1}^{0}}\mu_{i}\left(\pi_{i}^{-1}(C_{J^{\prime}}^{(i+1)})\setminus\bigcup_{j _{i}=1}^{m_{i}}C_{J^{\prime}j_{i}}^{(i)}\right)<1.\] Recall the definition of \(\boldsymbol{w}\) in (2.1). We have \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+w_{1} \int_{X_{1}}fd\mu\] \[=\ \lim_{N\to\infty}\frac{1}{N}\Bigg{\{}H_{\mu_{r}}(\mathscr{C} _{N}^{(r)})+a_{1}a_{2}\cdots a_{r-1}N\int_{X_{1}}fd\mu\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{i=1}^{r-1}a_{i} a_{i+1}\cdots a_{r-1}\left(H_{\mu_{i}}(\mathscr{C}_{N}^{(i)})-H_{\mu_{i+1}}( \mathscr{C}_{N}^{(i+1)})\right)\Bigg{\}}\] \[=\ \lim_{N\to\infty}\frac{1}{N}\Bigg{\{}H_{\mu_{r}}(\mathscr{C}_{N}^ {(r)})+a_{1}a_{2}\cdots a_{r-1}\int_{X_{1}}S_{N}fd\mu\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{i=1}^{r-1} a_{i}a_{i+1}\cdots a_{r-1}H_{\mu_{i}}\left(\mathscr{C}_{N}^{(i)}\Big{|}\pi_{i}^{-1}( \mathscr{C}_{N}^{(i+1)})\right)\Bigg{\}}.\] Here, we used the relation \[H_{\mu_{i}}(\mathscr{C}_{N}^{(i)})-H_{\mu_{i+1}}(\mathscr{C}_{N}^{(i+1)}) =H_{\mu_{i}}(\mathscr{C}_{N}^{(i)})-H_{\mu_{i}}(\pi_{i}^{-1}( \mathscr{C}_{N}^{(i+1)}))\] \[=H_{\mu_{i}}\left(\mathscr{C}_{N}^{(i)}\Big{|}\pi_{i}^{-1}( \mathscr{C}_{N}^{(i+1)})\right).\] We fix \(N\) and evaluate from above the following terms using backward induction: \[H_{\mu_{r}}(\mathscr{C}_{N}^{(r)})+a_{1}a_{2}\cdots a_{r-1}\int_{X_{1}}S_{N}fd \mu+\sum_{i=1}^{r-1}a_{i}a_{i+1}\cdots a_{r-1}H_{\mu_{i}}\left(\mathscr{C}_{N}^ {(i)}\Big{|}\pi_{i}^{-1}(\mathscr{C}_{N}^{(i+1)})\right). 
\tag{5.2}\] First, consider the term \[a_{1}a_{2}\cdots a_{r-1}\left(H_{\mu}\left(\mathscr{C}_{N}^{(1)}\Big{|}\pi_{1}^{-1}(\mathscr{C}_{N}^{(2)})\right)+\int_{X_{1}}S_{N}fd\mu\right).\] For \(C\in\mathscr{C}_{N}^{(i+1)}\), let \(\mathscr{C}_{N}^{(i)}(C)=\{D\in\mathscr{C}_{N}^{(i)}\,|\,\pi_{i}(D)\subset C\}\). Then, by Lemma 3.4, \[H_{\mu}\left(\mathscr{C}_{N}^{(1)}\Big{|}\pi_{1}^{-1}(\mathscr{C}_{N}^{(2)})\right)+\int_{X_{1}}S_{N}fd\mu\leq\sum_{C\in\mathscr{C}_{N}^{(2)}}\mu_{2}(C)\log\sum_{D\in\mathscr{C}_{N}^{(1)}(C)}e^{\sup_{D}S_{N}f}.\] Applying this inequality to (5.2), the following term appears: \[a_{2}a_{3}\cdots a_{r-1}\left(H_{\mu_{2}}\left(\mathscr{C}_{N}^{(2)}\Big{|}\pi_{2}^{-1}(\mathscr{C}_{N}^{(3)})\right)+a_{1}\sum_{C\in\mathscr{C}_{N}^{(2)}}\mu_{2}(C)\log\sum_{D\in\mathscr{C}_{N}^{(1)}(C)}e^{\sup_{D}S_{N}f}\right). \tag{5.3}\] This can be evaluated similarly using Lemma 3.4 as \[H_{\mu_{2}}\left(\mathscr{C}_{N}^{(2)}\Big{|}\pi_{2}^{-1}(\mathscr{C}_{N}^{(3)})\right)+a_{1}\sum_{C\in\mathscr{C}_{N}^{(2)}}\mu_{2}(C)\log\sum_{D\in\mathscr{C}_{N}^{(1)}(C)}e^{\sup_{D}S_{N}f}\] \[=\sum_{\begin{subarray}{c}C\in\mathscr{C}_{N}^{(3)}\\ \mu_{3}(C)\neq 0\end{subarray}}\mu_{3}(C)\left\{\sum_{D\in\mathscr{C}_{N}^{(2)}(C)}\left(-\frac{\mu_{2}(D)}{\mu_{3}(C)}\log\frac{\mu_{2}(D)}{\mu_{3}(C)}+\frac{\mu_{2}(D)}{\mu_{3}(C)}\log\left(\sum_{E\in\mathscr{C}_{N}^{(1)}(D)}e^{\sup_{E}S_{N}f}\right)^{a_{1}}\right)\right\}\] \[\leq\sum_{C\in\mathscr{C}_{N}^{(3)}}\mu_{3}(C)\log\sum_{D\in\mathscr{C}_{N}^{(2)}(C)}\left(\sum_{E\in\mathscr{C}_{N}^{(1)}(D)}e^{\sup_{E}S_{N}f}\right)^{a_{1}}.\] Continue likewise and obtain the following upper bound for (5.2): \[\log\sum_{C^{(r)}\in\mathscr{C}_{N}^{(r)}}\left(\sum_{C^{(r-1)}\in\mathscr{C}_{N}^{(r-1)}(C^{(r)})}\left(\cdots\left(\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}. \tag{5.4}\] For \(1\leq i\leq r\), let \(\mathscr{C}_{c}^{(i)}=\{C\in\mathscr{C}^{(i)}\,|\,C\text{ is compact}\}\). There is a positive number \(\varepsilon_{i}\) such that \(d^{(i)}(y_{1},y_{2})>\varepsilon_{i}\) for any two distinct \(C_{1},C_{2}\in\mathscr{C}_{c}^{(i)}\) and \(y_{1}\in C_{1},y_{2}\in C_{2}\). Fix a positive number \(\varepsilon\) with \[\varepsilon<\min_{1\leq i\leq r}\varepsilon_{i}. \tag{5.5}\] Let \(\mathscr{F}^{(i)}\) be a chain of open \((N,\,\varepsilon)\)-covers of \(X_{i}\) (see Definition 3.1). Consider \[\log\mathscr{P}^{\boldsymbol{a}}\left(f,\,N,\varepsilon,\,(\mathscr{F}^{(i)})_{i}\right)\] \[=\log\sum_{U^{(r)}\in\mathscr{F}^{(r)}}\left(\sum_{U^{(r-1)}\in\mathscr{F}^{(r-1)}(U^{(r)})}\left(\cdots\left(\sum_{U^{(1)}\in\mathscr{F}^{(1)}(U^{(2)})}e^{\sup_{U^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}. \tag{5.6}\] We will evaluate (5.4) from above by (5.6) up to a constant. We need the next lemma. **Lemma 5.3**.: 1. _For any_ \(V\subset X_{r}\) _with_ \(\operatorname{diam}(V,d_{N}^{(r)})<\varepsilon\)_,_ \[\left|\left\{D\in\mathscr{C}_{N}^{(r)}\Big{|}\,D\cap V\neq\varnothing\right\}\right|\leq 2^{N}.\] 2. _Let_ \(1\leq i\leq r-1\) _and_ \(C\in\mathscr{C}_{N}^{(i+1)}\)_. 
For any_ \(V\subset X_{i}\) _with_ \(\operatorname{diam}(V,d_{N}^{(i)})<\varepsilon\)_,_ \[\left|\left\{D\in\mathscr{C}_{N}^{(i)}(C)\Big{|}\,D\cap V\neq\varnothing\right\}\right|\leq 2^{N}.\] Proof.: (1) \(D\in\mathscr{C}_{N}^{(r)}\) can be expressed using \(C_{k_{s}}^{(r)}\in\mathscr{C}^{(r)}\) (\(s=0,1,\ldots,N-1\)) as \[D=C_{k_{0}}^{(r)}\cap T_{r}^{-1}C_{k_{1}}^{(r)}\cap T_{r}^{-2}C_{k_{2}}^{(r)}\cap\cdots\cap T_{r}^{-N+1}C_{k_{N-1}}^{(r)}.\] If \(D\cap V\neq\varnothing\), we have \(T_{r}^{-s}(C_{k_{s}}^{(r)})\cap V\neq\varnothing\) for every \(0\leq s\leq N-1\). Then for each \(s\) \[\varnothing\neq T_{r}^{s}\left(T_{r}^{-s}(C_{k_{s}}^{(r)})\cap V\right)\subset C_{k_{s}}^{(r)}\cap T_{r}^{s}(V).\] Since \(\operatorname{diam}(V,d_{N}^{(r)})<\varepsilon\), the set \(T_{r}^{s}(V)\) has diameter less than \(\varepsilon\) in \(d^{(r)}\) and hence, by (5.5), intersects at most one of the compact sets in \(\mathscr{C}_{c}^{(r)}\). Therefore, each \(k_{s}\) is either \(0\) or a single element of \(\{1,2,\ldots,m_{r}\}\) determined by \(V\) and \(s\), so there are at most \(2^{N}\) such sets. (2) The proof works in the same way as in (1). \(C\) can be written using \(J_{k}\in\Lambda_{i+1}^{0}\) (\(k=0,1,\ldots,N-1\)) as \[C=C_{J_{0}}^{(i+1)}\cap T_{i+1}^{-1}C_{J_{1}}^{(i+1)}\cap T_{i+1}^{-2}C_{J_{2}}^{(i+1)}\cap\cdots\cap T_{i+1}^{-N+1}C_{J_{N-1}}^{(i+1)}.\] Then any \(D\in\mathscr{C}_{N}^{(i)}(C)\) is of the form \[D=C_{J_{0}k_{0}}^{(i)}\cap T_{i}^{-1}C_{J_{1}k_{1}}^{(i)}\cap T_{i}^{-2}C_{J_{2}k_{2}}^{(i)}\cap\cdots\cap T_{i}^{-N+1}C_{J_{N-1}k_{N-1}}^{(i)}\] with \(0\leq k_{l}\leq m_{i}\) (\(l=0,1,\ldots,N-1\)). If \(D\cap V\neq\varnothing\), then, by the same diameter argument, each \(k_{l}\) is either \(0\) or a single element of \(\{1,2,\ldots,m_{i}\}\) determined by \(V\) and \(l\). Therefore, there are at most \(2^{N}\) such sets. For any \(C^{(1)}\in\mathscr{C}_{N}^{(1)}\), there is \(V\in\mathscr{F}^{(1)}\) with \(V\cap C^{(1)}\neq\varnothing\) and \[e^{\sup_{C^{(1)}}S_{N}f}\leq e^{\sup_{V}S_{N}f}.\] Let \(C^{(2)}\in\mathscr{C}_{N}^{(2)}\), then by Lemma 5.3, \[\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\leq\sum_{\begin{subarray}{c}U\in\mathscr{F}^{(2)}\\ U\cap C^{(2)}\neq\varnothing\end{subarray}}2^{N}\sum_{V\in\mathscr{F}^{(1)}(U)}e^{\sup_{V}S_{N}f}.\] By Lemma 3.4, \[\left(\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\right)^{a_{1}}\leq 2^{a_{1}N}\sum_{\begin{subarray}{c}U\in\mathscr{F}^{(2)}\\ U\cap C^{(2)}\neq\varnothing\end{subarray}}\left(\sum_{V\in\mathscr{F}^{(1)}(U)}e^{\sup_{V}S_{N}f}\right)^{a_{1}}.\] For \(C^{(3)}\in\mathscr{C}_{N}^{(3)}\), we apply Lemma 5.3 and Lemma 3.4 similarly and obtain \[\left(\sum_{C^{(2)}\in\mathscr{C}_{N}^{(2)}(C^{(3)})}\left(\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\right)^{a_{1}}\right)^{a_{2}}\\ \leq 2^{a_{1}a_{2}N}2^{a_{2}N}\sum_{\begin{subarray}{c}O\in\mathscr{F}^{(3)}\\ O\cap C^{(3)}\neq\varnothing\end{subarray}}\left(\sum_{U\in\mathscr{F}^{(2)}(O)}\left(\sum_{V\in\mathscr{F}^{(1)}(U)}e^{\sup_{V}S_{N}f}\right)^{a_{1}}\right)^{a_{2}}.\] We continue this reasoning and get \[\sum_{C^{(r)}\in\mathscr{C}_{N}^{(r)}}\left(\sum_{C^{(r-1)}\in\mathscr{C}_{N}^{(r-1)}(C^{(r)})}\left(\cdots\left(\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}\\ \leq 2^{\alpha N}\sum_{U^{(r)}\in\mathscr{F}^{(r)}}\left(\sum_{U^{(r-1)}\in\mathscr{F}^{(r-1)}(U^{(r)})}\left(\cdots\left(\sum_{U^{(1)}\in\mathscr{F}^{(1)}(U^{(2)})}e^{\sup_{U^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}.\] Here \(\alpha\) stands for \(\sum_{i=1}^{r-1}a_{i}a_{i+1}\cdots a_{r-1}\). We take the logarithm of both sides; the left-hand side equals (5.4), which is an upper bound for (5.2). 
Furthermore, consider the infimum over the chains of open (\(N\), \(\varepsilon\))-covers \((\mathscr{F}^{(i)})_{i}\). By Remark 3.2, this yields \[H_{\mu_{r}}(\mathscr{C}_{N}^{(r)})+a_{1}a_{2}\cdots a_{r-1}\int_{X_{1}}S_{N}fd\mu+\sum_{i=1}^{r-1}a_{i}a_{i+1}\cdots a_{r-1}H_{\mu_{i}}\left(\mathscr{C}_{N}^{(i)}\Big{|}\pi_{i}^{-1}(\mathscr{C}_{N}^{(i+1)})\right)\\ \leq\log\mathscr{P}^{\boldsymbol{a}}(f,\,N,\,\varepsilon)+\alpha N\log 2.\] Divide by \(N\), then let \(N\to\infty\) and \(\varepsilon\to 0\). We obtain \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+w_{1}\int_{X_{1}}fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+\alpha\log 2.\] Lemma 5.2 yields \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{1}}fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+\alpha\log 2+r.\] We take the supremum over the partitions \((\mathscr{A}^{(i)})_{i}\): \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+\alpha\log 2+r.\] By the argument at the beginning of this proof, we conclude that \[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T}).\] ## 6. Example: Sofic Sets Kenyon-Peres [10] calculated the Hausdorff dimension of sofic sets in \(\mathbb{T}^{2}\). In this section, we will see that we can calculate the Hausdorff dimension of certain sofic sets in \(\mathbb{T}^{d}\) with arbitrary \(d\). We give an example for the case \(d=3\). ### Definition of Sofic Sets This subsection follows [10]. Weiss defined _sofic systems_ as subshifts which are factors of shifts of finite type. Boyle, Kitchens, and Marcus proved that this is equivalent to the following definition. **Definition 6.1** ([10, Proposition 3.6]).: Consider a finite directed graph \(G=\langle V,E\rangle\) in which loops and multiple edges are allowed. Suppose its edges are colored in \(l\) colors in a "right-resolving" fashion: every two edges emanating from the same vertex have different colors. Then the set of color sequences that arise from infinite paths in \(G\) is called the **sofic system**. Let \(m_{1}\leq m_{2}\leq\cdots\leq m_{r}\) be natural numbers, \(T\) an endomorphism on \(\mathbb{T}^{r}=\mathbb{R}^{r}/\mathbb{Z}^{r}\) represented by the diagonal matrix \(A=\operatorname{diag}(m_{1},m_{2},\ldots,m_{r})\), and \(D=\prod_{i=1}^{r}\{0,1,\ldots,m_{i}-1\}\). Define a map \(R_{r}:D^{\mathbb{N}}\to\mathbb{T}^{r}\) by \[R_{r}((e^{(k)})_{k=1}^{\infty})=\left(\sum_{k=1}^{\infty}\frac{e_{1}^{(k)}}{{m_{1}}^{k}},\cdots,\sum_{k=1}^{\infty}\frac{e_{r}^{(k)}}{{m_{r}}^{k}}\right)\] where \(e^{(k)}=(e_{1}^{(k)},\cdots,e_{r}^{(k)})\in D\) for each \(k\). Suppose the edges in some finite directed graph are labeled by the elements in \(D\) in the right-resolving fashion, and let \(S\subset D^{\mathbb{N}}\) be the resulting sofic system. The image of \(S\) under \(R_{r}\) is called a **sofic set**. ### An example of a sofic set Here we will look at an example of a sofic set and calculate its Hausdorff dimension via its weighted topological entropy. Let \(D=\{0,1\}\times\{0,1,2\}\times\{0,1,2,3\}\) and consider the directed graph \(G=\langle V,E\rangle\) with \(V=\{1,2,3\}\) and \(D\)-labeled edges in Figure 2. Let \(Y_{1}\subset D^{\mathbb{N}}\) be the resulting sofic system. Let \(C=\{0,1\}\times\{0,1,2\}\) and \(B=\{0,1\}\). 
Define \(p_{1}:D\to C\) and \(p_{2}:C\to B\) by \[p_{1}(i,j,k)=(i,j),\quad p_{2}(i,j)=i.\] Let \(p_{1}^{\mathbb{N}}:D^{\mathbb{N}}\to C^{\mathbb{N}}\) and \(p_{2}^{\mathbb{N}}:C^{\mathbb{N}}\to B^{\mathbb{N}}\) be the product map of \(p_{1}\) and \(p_{2}\), respectively. Set \(Y_{2}=p_{1}^{\mathbb{N}}(Y_{1})\) and \(Y_{3}=p_{2}^{\mathbb{N}}(Y_{2})\). Note that \(Y_{2}=\{(0,0),(1,0),(0,1)\}^{\mathbb{N}}\) and \(Y_{3}=\{0,1\}^{\mathbb{N}}\), meaning they are full shifts. The sets \(X_{1}=R_{3}(Y_{1})\), \(X_{2}=R_{2}(Y_{2})\), and \(X_{3}=R_{1}(Y_{3})\) are sofic sets. Define \(\pi_{1}:X_{1}\to X_{2}\) and \(\pi_{2}:X_{2}\to X_{3}\) by \[\pi_{1}(x,y,z)=(x,y),\quad\pi_{2}(x,y)=x.\] Furthermore, let \(T_{1}\), \(T_{2}\), and \(T_{3}\) be the endomorphisms on \(X_{1}\), \(X_{2}\), and \(X_{3}\) represented by the matrices \(\operatorname{diag}(2,3,4)\), \(\operatorname{diag}(2,3)\), and \(\operatorname{diag}(2)\), respectively. Then \((X_{i},T_{i})_{i}\) and \((\pi_{i})_{i}\) form a sequence of dynamical systems. Figure 2. Directed graph \(G\). For a natural number \(N\), denote by \(Y_{i}|_{N}\) the restriction of \(Y_{i}\) to its first \(N\) coordinates, and let \(p_{i,N}:Y_{i}|_{N}\to Y_{i+1}|_{N}\) be the projections for \(i=1,2\). Since \(Y_{2}\) and \(Y_{3}\) are full shifts, we can use the same technique as in Example 1.1. Therefore, we have for any exponent \(\boldsymbol{a}=(a_{1},a_{2})\in[0,1]^{2}\), \[h^{\boldsymbol{a}}(\boldsymbol{T})=\lim_{N\to\infty}\frac{1}{N}\log\sum_{u\in\{0,1\}^{N}}\left(\sum_{v\in p_{2,N}{}^{-1}(u)}\left|p_{1,N}{}^{-1}(v)\right|^{a_{1}}\right)^{a_{2}}.\] Now, let us evaluate \(\left|p_{1,N}{}^{-1}(v)\right|\) using matrix products. This idea of using matrix products is due to Kenyon-Peres [10]. Take \((a,b)\in\left\{0,1\right\}^{2}\) and let \[a_{ij}=|\{e\in E\mid e\text{ is from }j\text{ to }i\text{ and the first two coordinates of its label are }(a,b)\}|.\] Define a \(3\times 3\) matrix by \(A_{(a,b)}=(a_{ij})_{ij}\). Then we have \[A_{(0,0)}=\begin{pmatrix}0&1&1\\ 0&0&1\\ 1&1&0\end{pmatrix},A_{(0,1)}=\begin{pmatrix}1&1&1\\ 1&1&0\\ 0&1&2\end{pmatrix},A_{(1,0)}=\begin{pmatrix}1&2&2\\ 0&1&2\\ 2&2&1\end{pmatrix},A_{(1,1)}=O.\] Note that \(A_{(0,0)}{}^{2}=A_{(0,1)}\) and \(A_{(0,0)}{}^{3}=A_{(1,0)}\). For \(v=(v_{1},\cdots,v_{N})\in Y_{2}|_{N}\) we have \[\left|p_{1,N}{}^{-1}(v)\right|\asymp\|A_{v_{1}}A_{v_{2}}\cdots A_{v_{N}}\|.\] Here \(A\asymp B\) means there is a constant \(c>0\) independent of \(N\) with \(c^{-1}B\leq A\leq cB\). For \(\alpha=\frac{1+\sqrt{5}}{2}\), we have \(\alpha^{2}=\alpha+1\) and \[A_{(0,0)}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}=\begin{pmatrix}1+\alpha\\ \alpha\\ 1+\alpha\end{pmatrix}=\alpha\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix},\quad A_{(0,1)}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}=\alpha^{2}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix},\quad A_{(1,0)}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}=\alpha^{3}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}.\] Therefore, \[\|A_{v_{1}}A_{v_{2}}\cdots A_{v_{N}}\|\asymp\left\|A_{v_{1}}A_{v_{2}}\cdots A_{v_{N}}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}\right\|\asymp\lambda_{v_{1}}\lambda_{v_{2}}\cdots\lambda_{v_{N}}\] where \(\lambda_{(0,0)}=\alpha\), \(\lambda_{(0,1)}=\alpha^{2}\), \(\lambda_{(1,0)}=\alpha^{3}\). 
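The matrix identities and the common-eigenvector relation above are elementary to check. The following minimal sketch (Python with numpy as an assumed dependency; it is our illustration, not part of the original argument) verifies them numerically and also illustrates the product asymptotics for a random word:

```python
import numpy as np

# Transition matrices A_{(a,b)} as given in the text
A00 = np.array([[0, 1, 1], [0, 0, 1], [1, 1, 0]])
A01 = np.array([[1, 1, 1], [1, 1, 0], [0, 1, 2]])
A10 = np.array([[1, 2, 2], [0, 1, 2], [2, 2, 1]])

assert (A00 @ A00 == A01).all()        # A_{(0,0)}^2 = A_{(0,1)}
assert (A00 @ A00 @ A00 == A10).all()  # A_{(0,0)}^3 = A_{(1,0)}

alpha = (1 + np.sqrt(5)) / 2           # golden ratio, alpha^2 = alpha + 1
v = np.array([alpha, 1.0, alpha])
assert np.allclose(A00 @ v, alpha * v)  # common eigenvector, eigenvalue alpha

# ||A_{v_1} ... A_{v_N}|| is comparable to lambda_{v_1} ... lambda_{v_N}
rng = np.random.default_rng(0)
word = rng.choice(3, size=20)          # random word over the three matrices
mats, lams = [A00, A01, A10], [alpha, alpha**2, alpha**3]
P = np.eye(3)
for i in word:
    P = P @ mats[i]
ratio = np.linalg.norm(P) / np.prod([lams[i] for i in word])
print(f"norm / eigenvalue product = {ratio:.3f}")  # bounded away from 0 and infinity
```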
Fix \(u\in\{0,1\}^{N}\) and suppose there are \(n\) zeros in \(u\). If there are \(k\) entries equal to \((0,0)\) in \(v=(v_{1},\cdots,v_{N})\in p_{2,N}{}^{-1}(u)\), then there are \(n-k\) entries equal to \((0,1)\) and \(N-n\) entries equal to \((1,0)\) in \(v\). Then \[\lambda_{v_{1}}{}^{a_{1}}\cdots\lambda_{v_{N}}{}^{a_{1}}=\alpha^{a_{1}k}\alpha^{2a_{1}(n-k)}\alpha^{3a_{1}(N-n)}.\] Therefore, \[\sum_{v\in p_{2,N}{}^{-1}(u)}\left|p_{1,N}{}^{-1}(v)\right|^{a_{1}} =\sum_{(v_{1},\cdots,v_{N})\in p_{2,N}{}^{-1}(u)}\lambda_{v_{1}}{}^{a_{1}}\cdots\lambda_{v_{N}}{}^{a_{1}}=\sum_{k=0}^{n}\binom{n}{k}\alpha^{a_{1}k}\alpha^{2a_{1}(n-k)}\alpha^{3a_{1}(N-n)}\] \[=\left(\alpha^{a_{1}}+\alpha^{2a_{1}}\right)^{n}\alpha^{3a_{1}(N-n)}.\] This implies \[\sum_{u\in\{0,1\}^{N}}\left(\sum_{v\in p_{2,N}{}^{-1}(u)}{|{p_{1,N}}^{-1}(v)|}^{a_{1}}\right)^{a_{2}} =\sum_{n=0}^{N}\binom{N}{n}\big{(}\alpha^{a_{1}}+\alpha^{2a_{1}}\big{)}^{a_{2}n}\alpha^{3a_{1}a_{2}(N-n)}\] \[=\left\{\big{(}\alpha^{a_{1}}+\alpha^{2a_{1}}\big{)}^{a_{2}}+\alpha^{3a_{1}a_{2}}\right\}^{N}.\] We conclude that \[h^{\boldsymbol{a}}(\boldsymbol{T}) =\lim_{N\to\infty}\frac{1}{N}\log\left\{\big{(}\alpha^{a_{1}}+\alpha^{2a_{1}}\big{)}^{a_{2}}+\alpha^{3a_{1}a_{2}}\right\}^{N}\] \[=\log\Bigg{\{}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{a_{1}}+\left(\frac{3+\sqrt{5}}{2}\right)^{a_{1}}\right)^{a_{2}}+\left(2+\sqrt{5}\right)^{a_{1}a_{2}}\Bigg{\}}.\] As in Example 1.4, the Hausdorff dimension of \(X_{1}\) is obtained by letting \(a_{1}=\log_{4}3\) and \(a_{2}=\log_{3}2\); \[\dim_{H}(X_{1}) =\log\left\{\left(\left(\frac{1+\sqrt{5}}{2}\right)^{\log_{4}3}+\left(\frac{3+\sqrt{5}}{2}\right)^{\log_{4}3}\right)^{\log_{3}2}+\sqrt{(2+\sqrt{5})}\right\}\] \[=1.4598\cdots.\] ## Acknowledgement I am deeply grateful to my mentor, Masaki Tsukamoto, who not only has reviewed this paper several times throughout the writing process but has patiently helped me understand ergodic theory in general with his expertise. I also want to thank my family and friends for their unconditional support and everyone who has participated in my study for their time and willingness to share their knowledge. This work could not have been possible without their help.
Feng--Huang (2016) introduced the weighted topological entropy and pressure of factor maps between dynamical systems and established the corresponding variational principle. Tsukamoto (2022) redefined these invariants in a very different form in the simplest case and, using the variational principle, showed that the two definitions coincide. We generalize Tsukamoto's approach, redefine the weighted topological entropy and pressure in the higher-dimensional setting, and prove the variational principle. This result enables elementary calculations of the Hausdorff dimension of self-similar sponges and affine-invariant sets in Euclidean space of arbitrary dimension.
2309.10877
Multi-sideband interference structures by high-order photon-induced continuum-continuum transitions in helium
Following up on a previous paper on two-color photoionization of Ar(3p) [Bharti et al., Phys. Rev. A 103 (2021) 022834], we present measurements and calculations for a modified three-sideband (3-SB) version of the "reconstruction of attosecond beating by interference of two-photon transitions" (RABBITT) configuration applied to He(1s). The 3-SB RABBITT approach allows us to explore interference effects between pathways involving different orders of transitions within the continuum. The relative differences in the retrieved oscillation phases of the three sidebands provide insights into the continuum-continuum transitions. The ground state of helium has zero orbital angular momentum, which simplifies the analysis of oscillation phases and their angle-dependence within the three sidebands. We find qualitative agreement between our experimental results and the theoretical predictions for many cases but also observe some significant quantitative discrepancies.
D. Bharti, H. Srinivas, F. Shobeiry, A. T. Bondy, S. Saha, K. R. Hamilton, R. Moshammer, T. Pfeifer, K. Bartschat, A. Harth
2023-09-19T18:58:13
http://arxiv.org/abs/2309.10877v2
Multi-sideband interference structures by high-order photon-induced continuum-continuum transitions in helium ###### Abstract Following up on a previous paper on two-color photoionization of Ar(\(3p\)) [Bharti _et al._, Phys. Rev. A **103** (2021) 022834], we present measurements and calculations for a modified three-sideband (3-SB) version of the "reconstruction of attosecond beating by interference of two-photon transitions" (RABBITT) configuration applied to He(\(1s\)). The 3-SB RABBITT approach allows us to explore interference effects between pathways involving different orders of transitions within the continuum. The relative differences in the retrieved oscillation phases of the three sidebands provide insights into the continuum-continuum transitions. The ground state of helium has zero orbital angular momentum, which simplifies the analysis of oscillation phases and their angle-dependence within the three sidebands. We find qualitative agreement between our experimental results and the theoretical predictions for many cases but also observe some significant quantitative discrepancies. ## I Introduction The reconstruction of attosecond beating by interference of two-photon transitions (RABBITT) is a widely employed technique to characterize an attosecond pulse train and measure attosecond time delays in photoionization processes, e.g., [1; 2; 3]. In the standard scheme, photoionization by various spectral harmonics in an attosecond extreme ultraviolet (XUV) pulse train (the pump photons) results in multiple discrete peaks (main peaks) within the photoelectron signal. Simultaneously, the presence of a time-delayed infrared (IR) field (the probe photons) creates an additional photoelectron peak, referred to as a "sideband," between every two main peaks. The photoelectron yield within these sidebands oscillates as a function of the time delay, and this oscillatory pattern can be utilized to determine the relative photoionization time delay. However, in order to accurately determine photoionization time delays using the RABBITT method, it is essential to consider the contribution arising from the continuum-continuum transitions induced by the probe pulse. This task is simplified by decomposing the phase of the two-photon transition matrix element into the sum of the Wigner phase, associated with the single-photon ionization process, and the continuum-continuum phase denoted as \(\phi_{cc}\). Details can be found, for example, in Refs. [4; 5; 6]. In 2019, Harth _et al._[7] introduced a variant of the RABBITT scheme known as three-sideband (3-SB) RABBITT, in which the interaction with the probe pulse results in the creation of not just one but three sidebands between every two main peaks. In the context of the 3-SB RABBITT approach, Bharti _et al._[8] extended the "decomposition approximation" to determine the phase of the \(N^{\text{th}}\)-order Above-Threshold-Ionization (ATI) matrix element. In this extension, the \(\phi_{cc}\) contributions from each participating transition are simply summed. In the 3-SB scheme, one important consequence of the decomposition approximation is the prediction that the oscillation phases in the three sidebands formed between the same pair of main peaks should be identical, except for an additional \(\pi\) phase shift in the central sideband. Numerical calculations performed for atomic hydrogen revealed slight deviations from this expectation, and these deviations consistently diminished with an increase in kinetic energy. 
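In compact form, the decomposition approximation underlying this prediction can be summarized as follows; the notation here is ours and is only meant as a shorthand for the statements above. The phase of an \(N\)-photon (one XUV plus \(N-1\) IR photons) pathway is approximated by \[\phi^{(N)}\approx\phi_{W}+\sum_{k=1}^{N-1}\phi_{cc}^{(k)},\] where \(\phi_{W}\) is the Wigner phase of the one-photon ionization step and the \(\phi_{cc}^{(k)}\) are the continuum-continuum phases of the individual probe-photon transitions. Within this approximation, the retrieved oscillation phases of the three sidebands of one group obey \(\phi_{R,l}\approx\phi_{R,h}\approx\phi_{R,c}\pm\pi\).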
Experiments on atomic hydrogen, of course, are extremely challenging due to the difficulties in creating a dense H target. A proof-of-principle experimental realization of the 3-SB scheme was reported by Bharti _et al._[9] for an argon target. As a noble gas with the first ionization potential of \(\approx 15.8\) eV for the \(3p\) electron, this target was experimentally favorable, since low-order (7, 9, 11, ...) harmonics of the frequency-doubled fundamental field (515 nm, obtained from 1030 nm) in the XUV pulse train were able to ionize the atom. On the other hand, ionization of the \(3s\) electron is also possible, resonances occur at relatively low ejected-electron energies of 10\(-\)15 eV due to the possible inner-shell promotion of the \(3s\) electron, and the numerical treatment is very challenging. A further complication arises from the fact that already the XUV step alone promotes the \(3p\) electron to two different angular-momentum states (\(s\) and \(d\)). All this, together with the uncertainties regarding the detailed time dependence of the electric fields, resulted in only qualitative agreement between the experimental data and theoretical predictions based on the \(R\)-matrix with time dependence (RMT) approach. In this paper, we report the outcomes of a 3-SB RABBITT experiment performed on helium. The ground state of helium has zero orbital angular momentum, and hence the single-photon ionization induced by an XUV pulse results in the creation of a photoelectron with a single orbital angular momentum \(\ell=1\). This simplifies the interpretation of the photoelectron interference patterns resulting from the interaction of the photoelectron with the IR photons. Due to its higher ionization potential of approximately 24.6 eV, however, helium presents further experimental challenges compared to argon, as it requires relatively high-energy XUV photons to create a sufficient number of main peaks. From a theoretical perspective, in contrast, helium is significantly easier to handle than argon. In fact, since the remaining \(1s\) electron is tightly bound, a single-active-electron (SAE) approach may be sufficient to explain the basic features. This paper is organized as follows. We begin with a brief review of the basic idea behind the 3-SB setup in Sec. II. This is followed by a description of the experimental apparatus in Sec. III and the accompanying theoretical SAE and RMT approaches in Sec. IV. We first show angle-integrated data in Sec. V.1 before focusing on the angle-dependence of the RABBITT phases in the three sidebands of each individual group in Sec. V.2. We finish with a summary and an outlook in Sec. VI. ## II The 3-SB scheme for helium In this section, we briefly review the 3-SB scheme introduced in [7] and the analytical treatment presented in [8], as applied to the 3-SB RABBITT experiment in general and then in our particular case of the helium target. The basic scheme is illustrated in Fig. 1. The active electron originates from an \(s\)-orbital, and the absorption of an XUV photon with a frequency corresponding to either \(H_{q-1}\) or \(H_{q+1}\) transitions it to a \(p\)-orbital. This transition gives rise to the main photoelectron peaks labeled \(M_{q-1}\) and \(M_{q+1}\), respectively. The frequencies \(H_{q-1}\) and \(H_{q+1}\) correspond to the odd harmonics of orders \((q-1)\) and \((q+1)\), respectively, of the pulse used for generating the XUV train via high-order harmonic generation (HHG) [10; 11]. 
Additional transitions within the continuum, resulting from either the absorption or emission of probe photons, lead to the emergence of three sidebands situated between two main peaks. These transitions to the sidebands can traverse different angular-momentum channels in compliance with the dipole selection rule. We designate the trio of sidebands positioned between \(M_{q-1}\) and \(M_{q+1}\) according to their energy positions within the group, labeling them as \(S_{q,l}\), \(S_{q,c}\), and \(S_{q,h}\), where the second subscript designates the lower, central, and higher sidebands, respectively. All pathways leading to the same sideband interfere to produce the net photoelectron yield of the sideband. Changing the time delay between the XUV and IR pulses adds a dynamic phase to the interference of absorption and emission paths, leading to oscillations in the sideband yield. As shown in [9], the general form of the signal in the sidebands is given by \[S_{q,j}(\tau,\theta)=A_{q,j}(\theta)+B_{q,j}(\theta)\,\cos(4\,\omega\tau-\phi_{R,q,j}(\theta)) \tag{1}\] with \(j\) standing for \((l,c,h)\). Each signal is characterized by a constant term \(A\), an oscillation amplitude \(B\), and a RABBITT phase \(\phi_{R}\). As seen from the above equation, each of these parameters depends on the sideband group \(q\), the location of the sideband within that group \((l,c,h)\), and the detection angle \(\theta\). In angle-integrated detection mode, the angle-dependence is averaged over, but the general form of the equation remains unchanged. In both the angle-differential and angle-integrated cases, the signal oscillates as a function of the delay \(\tau\) with an angular frequency of four times the angular frequency \(\omega\) of the fundamental IR field. Detailed expressions of the above parameters in terms of transition amplitudes can be found in [8; 9]. Figure 1: 3-SB transition diagram with all angular-momentum channels, illustrating only the lowest-order pathways from emission and absorption processes necessary for yield oscillations in each sideband. \(I_{p}\) is the ionization threshold, and the dashed line just above it labels the special case of the threshold sideband. See text for details. ## III Experiment Since the experimental methodology was described in detail in Ref. [9], we only provide a brief summary of the most important settings for the current experiment. The fiber-based laser utilized in the experiment emits 50 fs (full width at half maximum) ultra-short infrared pulses centered around 1030 nm with a pulse energy of 1.2 mJ. The pulse is divided into two arms of a Mach-Zehnder type interferometer using a holey mirror, which reflects approximately 85\(\%\) of the beam in the pump arm and allows the remainder to pass through into the probe arm. Within the pump arm, a barium borate crystal is employed to produce the second harmonic (515 nm) of the laser pulse with an efficiency of 25-30\(\%\). The fundamental IR beam is subsequently filtered via a dichroic mirror, and the second harmonic is focused onto an argon gas jet to produce an XUV pulse train through HHG. The driving 515 nm beam was spatially filtered out from the beamline, and the resulting XUV beam then passed through a 150 nm-thick aluminum filter. Meanwhile, the IR beam within the probe arm was directed through a retro-reflector mounted on a piezo-electric-translation stage, before it was spatially and temporally recombined with the XUV beam. 
Both the XUV and IR beams were focused inside a reaction microscope (ReMi) [12] onto a supersonic gas jet of the target species. The relative strength of the harmonics was determined by analyzing the photoelectron spectra generated solely by the XUV beam in different gases. Beyond the helium ionization threshold, four harmonics (\(H_{11}\), \(H_{13}\), \(H_{15}\), and \(H_{17}\)) were observed, with their strength decreasing as the photon energy increased. Below the ionization threshold, \(H_{7}\) is positioned at the low-energy transmission edge of the aluminum filter, resulting in significant attenuation of this harmonic and effective elimination of all harmonics below \(H_{7}\). The harmonic \(H_{9}\), whose energy is also below the ionization energy, is crucial for generating oscillations in the yield of the threshold sideband (see the dashed line just above the ionization threshold in Fig. 1). This threshold sideband (\(S_{10,h}\)) belongs to the so-called "uRABBITT" scheme [13; 14; 15; 16; 17; 18; 19]. We adjusted the IR probe intensity by utilizing an iris in the probe arm. RABBITT measurements were performed at three IR probe peak intensities: \(I_{1}\approx 5\times 10^{11}\;\mathrm{W/cm^{2}}\), \(I_{2}\approx 7\times 10^{11}\;\mathrm{W/cm^{2}}\), and \(I_{3}\approx 1.2\times 10^{12}\;\mathrm{W/cm^{2}}\). The IR and XUV fields were linearly polarized parallel to each other and along the spectrometer axis of the ReMi. The latter enables the reconstruction of the three-dimensional momenta of the electrons and corresponding ions (in coincidence) created during the photoionization process [12]. Furthermore, the ReMi is capable of selecting electrons emitted in different directions, thereby enabling the construction of photoelectron angular distributions (PADs). The XUV-IR temporal delay was sampled at regular intervals of \(T_{0}/60\), where \(T_{0}=3.44\) fs is the period of the IR pulse. The delay was scanned over a range equivalent to one and a half times the optical cycle of the IR pulse. The XUV-IR beamline was actively stabilized [9; 20] to achieve a stability of approximately 30 attoseconds over a data acquisition period of more than ten hours. ## IV Theory The numerical approaches to model the experiment were also described in detail in previous publications. Hence, we again limit ourselves to a brief summary with references given at the appropriate spots. ### The SAE approach We employed the same SAE model as Birk _et al._[21] and Meister _et al._[22]. Specifically, we used the one-electron potential \[V(r)=-\frac{1}{r}-\left(\frac{1}{r}+1.3313\right)\,\mathrm{exp}(-3.0634\,r), \tag{2}\] where \(r\) is the distance from the nucleus, to calculate the valence orbitals. The difference between the calculated excitation energies and the recommended values from the NIST database [23] is less than 0.2 eV even in the worst-case scenario. For both the XUV pulse train and the fundamental IR, we used temporal fields based on the measured spectrum of the IR pulse and on the measured XUV-only photoionization spectrum considering the experimental resolution. This gives reasonably accurate information about the relative strength of the harmonics. Additionally, we incorporated an estimated atto-chirp into the theoretical XUV pulse train. Since both the XUV pulse train and the fundamental IR are linearly polarized along the same direction, the initial state can be propagated very efficiently and accurately. Specifically, we used an updated version of the code described by Douguet _et al._[24]. 
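For reference, the model potential of Eq. (2) is easy to tabulate. The short sketch below (Python with numpy assumed; it is our illustration, not code from Refs. [21; 22; 24]) evaluates it and confirms the expected limits, \(V(r)\to-2/r\) for \(r\to 0\) (bare \(Z=2\) nucleus) and \(V(r)\to-1/r\) at large \(r\) (singly charged core):

```python
import numpy as np

def V(r):
    """One-electron model potential of Eq. (2), in atomic units."""
    return -1.0 / r - (1.0 / r + 1.3313) * np.exp(-3.0634 * r)

# Compare against the limiting Coulomb forms at small and large distances.
for r in (0.01, 0.1, 1.0, 5.0, 10.0):
    print(f"r = {r:5.2f}:  V = {V(r):10.4f},  -2/r = {-2.0/r:10.4f},  -1/r = {-1.0/r:10.4f}")
```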
In contrast to the heavier noble gases, the SAE approach is expected to be suitable for the helium target, as long as obvious two-electron correlation effects, e.g., autoionizing resonances, are not affecting the process significantly. This is, indeed, the case for the present study. ### The RMT approach As a second method, we employed the general \(R\)-matrix with time dependence (RMT) method [25]. To calculate the necessary time-independent basis functions and dipole matrix elements for the present work, we set up the simplest possible model, a nonrelativistic 1-state approach. This model, labeled RMT below, enables efficient calculations whose predictions can be readily compared with both the experimental data and those from the SAE approach to make a first assessment regarding the likely quality of the theoretical predictions. We took the same pulse as in the SAE calculation. Instead of the numerical orbital employed in the SAE calculation, which is close to the Hartree-Fock orbital of the ground-state configuration, however, we took the known \(1s\) orbital of He\({}^{+}\). While this is not optimal for obtaining the very best ground-state energy, this disadvantage is mitigated almost completely by the continuum-continuum terms in the \(R\)-matrix Hamiltonian. Using this orbital will, however, be advantageous in future studies, where we plan to include additional states in the close-coupling expansion to check the convergence and sensitivity of the numerical predictions. In contrast to SAE, exchange effects between the two electrons are treated explicitly within the \(R\)-matrix box. ## V Results and discussion We first present in Sec. V.1 our angle-integrated RABBITT results. This is followed by a discussion of angle-differential measurements in Sec. V.2. For the latter case, we will concentrate on only two sideband groups, \(S_{12}\) and \(S_{14}\), where we have a sufficient amount of data to conduct the angle-resolved investigation of the RABBITT phase. ### Angle-Integrated RABBITT scans Figure 2 shows the results from three angle-integrated RABBITT measurements recorded at the probe intensities \(I_{1}\approx 5\times 10^{11}\,\mathrm{W/cm^{2}}\) (top), \(I_{2}\approx 7\times 10^{11}\,\mathrm{W/cm^{2}}\) (center), and \(I_{3}\approx 1.2\times 10^{12}\,\mathrm{W/cm^{2}}\) (bottom). The log-scale colormap in the lower part of each panel shows the RABBITT trace obtained by subtracting the delay-integrated signal from the original trace to highlight the oscillations. The upper part of the panels displays the XUV-only photoelectron spectrum (gray line) and the delay-integrated photoelectron spectrum (red line), both normalized to their peak values and plotted on a logarithmic scale. Looking at the RABBITT traces, we see that the central sideband displays the clearest oscillation at all applied IR intensities. This was expected since both the absorption and the emission paths that populate this sideband are of the same order. In the lower and higher sidebands, the two interfering terms are of different (second and fourth) orders, and hence the contrast in the oscillation is reduced compared to the central sideband. Next, the RABBITT traces demonstrate that the oscillation contrast is generally better for the higher sideband in each group than for the lower one. This is due to the rapid decrease of the main peaks with increasing energy, which makes the amplitude of the two transition paths in the interference more balanced in the case of the higher sideband. 
In \(S_{12,h}\), for example, the upper main peak (\(M_{13}\)) is much weaker than the lower main peak (\(M_{11}\)), resulting in the magnitude of a three-photon transition from the stronger lower main peak (\(M_{11}+3\,\omega\)) becoming comparable to the magnitude of a one-photon transition from the weaker upper main peak (\(M_{13}-\omega\)). Hence the interference of these two terms leads to a strong delay-dependent oscillation. In contrast, for the lower sideband \(S_{12,l}\), the magnitude of a one-photon transition from the lower main peak (\(M_{11}+\omega\)) is much stronger than the magnitude of a three-photon transition from the already weaker upper main peak (\(M_{13}-3\,\omega\)), thus resulting in a small contrast of the oscillation. Additionally, at the lowest applied IR intensity (top panel of Fig. 2), the highest energy main peak \(M_{17}\) is almost entirely depleted, but it becomes repopulated as the intensity is increased. The same is seen for the main peak \(M_{15}\), which first becomes weaker in the presence of the IR pulse but then gains strength when the intensity is increased. Looking at the \(S_{16}\) group, we observe that the contrast in the lower sideband (\(S_{16,l}\)) is very weak at the lowest applied intensity, but it gradually improves as the IR intensity increases. On the other hand, the contrast of the oscillation in the higher sideband \(S_{16,h}\) deteriorates with increasing IR intensity. When the kinetic energy or the IR intensity is increased, higher-order transitions should be accounted for as well. Figure 3 shows a selection of higher-order transition pathways leading to the lower sideband of the \(S_{16}\) group and the interference schemes contributing to the oscillations of the yield. At the low IR intensity, the oscillation in the photoelectron yield of the lower sideband is predominantly influenced by the interference scheme \(T_{A}\). However, with increasing kinetic energy or IR intensity, the involvement of higher-order transitions featuring \(M_{15}\) and \(M_{13}\) becomes significant in shaping the oscillations of the yield. Figure 2: Results from the angle-integrated RABBITT measurements taken at peak IR intensities \(I_{1}\approx 5\times 10^{11}\,\mathrm{W/cm^{2}}\) (top), \(I_{2}\approx 7\times 10^{11}\,\mathrm{W/cm^{2}}\) (center), and \(I_{3}\approx 1.2\times 10^{12}\,\mathrm{W/cm^{2}}\) (bottom). The logarithmic colormap in each of the lower panels displays the angle-integrated RABBITT traces, while the upper panel shows the XUV-only (gray line) and the delay-integrated photoelectron spectrum (red line) normalized to the peak value. The dashed line in the top panels is the (relative) signal predicted by the RMT model. The main lines and the sideband groups are also indicated. The right \(y\)-axis shows the phases of the delay-dependent yield oscillations of the sidebands obtained from our fitting method, along with their estimated fitting errors. Since the absolute phase is not known experimentally, we set it to zero for the center sideband in the \(S_{12}\) group. Also, a phase \(\pm\pi\) was added to the center sideband to simplify the comparison with the other two. The solid circles represent the experimental data while the open symbols are the theoretical SAE (circles) and RMT (boxes) predictions. 
In order to determine the phases of the oscillation from each sideband, the photoelectron spectrum was first integrated within an energy window of 0.8 eV across the peak and was then fitted to a cosine function of the form \(A+B\cos{(4\,\omega\tau-\phi_{R})}\). For the lower and higher sidebands, the trivial \(\pi\) phase was removed from the obtained phases. As the absolute experimental phase is unknown, the retrieved phases from the three measurements were shifted to align the data point for \(S_{12,c}\) at zero for comparison of the relative phases. The phases of the oscillations obtained from the fitting process, along with their corresponding fitting errors, are shown in the upper panels of Fig. 2 and also listed in Table 1. Several noteworthy results can be seen from Table 1: 1) After accounting for the additional phase of \(\pi\), the phases in the \(S_{12}\) and \(S_{14}\) groups are nearly the same in all three sidebands of the respective group. This is in excellent agreement with the "decomposition approximation" [8]. 2) Relative to \(S_{12}\), the phases in the \(S_{14}\) group are larger by \(\approx 0.5\) rad. 3) The lower sideband in \(S_{16}\) exhibits a significantly different phase compared to the other two. This is likely due to the contribution of the higher-order interference schemes (\(T_{B},T_{C}\), \(T_{D}\), and \(T_{E}\) in Fig. 3) dominating over the lowest-order interference (\(T_{A}\)). Since these interference schemes, involving higher-order pathways, produce oscillations at the same frequency but \(\pi\) out of phase with that arising from the lowest-order interference scheme \(T_{A}\), their contribution in the yield oscillation results in the observed change of \(\approx\pi\). It should also be noted that the oscillation due to \(T_{E}\) includes the spectral-phase difference of \(H_{15}\) and \(H_{13}\), while the schemes \(T_{A}\), \(T_{B}\), \(T_{C}\), and \(T_{D}\) contain the spectral phase difference of \(H_{17}\) and \(H_{15}\). Regarding the agreement between experiment and theory, we see that both SAE and RMT agree with the experimental findings of nearly the same phases for each member of the \(S_{12}\) and \(S_{14}\) groups. Furthermore, the theoretical phases within \(S_{14}\) are systematically larger than those observed in the experiment. This is simply due to an overestimate of the chirp that was included in the description of the XUV pulse. Since RABBITT calculations are very time-consuming, we decided not to repeat them with only the chirp being reduced. There is also good qualitative agreement in the fact that \(S_{16,c}\) and \(S_{16,h}\) have about the same RABBITT phase, while that of \(S_{16,l}\) is very different. Finally, there is significant quantitative disagreement between experiment and the two theories, as well as between the two theoretical predictions themselves, for the threshold sideband \(S_{10,h}\). However, there is general agreement in the qualitative finding of a strong intensity dependence. This strong sensitivity to the intensity is caused by near-resonant interactions with Rydberg states. ### Angle-Differential RABBITT scans To examine the angle-dependence of the RABBITT phases, we analyzed the photoelectrons emitted at various angles relative to the spectrometer axis. This involved segregating the data into angle-differential data sets, which were generated by integrating the photoelectron yield over 10-degree angular windows at 5-degree intervals. 
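The phase retrieval described above is a plain least-squares fit of each delay-dependent sideband yield to the model of Eq. (1). As a concrete illustration, the following minimal sketch performs such a fit on synthetic data (Python with numpy/scipy assumed; \(T_{0}=3.44\) fs and the \(T_{0}/60\) sampling follow the values quoted earlier, while the amplitudes and the noise level are invented for demonstration):

```python
import numpy as np
from scipy.optimize import curve_fit

T0 = 3.44                   # IR period in fs
omega = 2 * np.pi / T0      # fundamental IR angular frequency

def sideband(tau, A, B, phi):
    """Sideband-yield model of Eq. (1): A + B cos(4*omega*tau - phi)."""
    return A + B * np.cos(4 * omega * tau - phi)

# Synthetic delay scan: 1.5 optical cycles sampled in steps of T0/60
tau = np.arange(0.0, 1.5 * T0, T0 / 60)
rng = np.random.default_rng(1)
y = sideband(tau, A=1.0, B=0.3, phi=0.7) + rng.normal(0.0, 0.02, tau.size)

popt, pcov = curve_fit(sideband, tau, y, p0=[1.0, 0.2, 0.0])
phi_R = popt[2] % (2 * np.pi)
print(f"retrieved phase: {phi_R:.3f} rad, "
      f"fit uncertainty: {np.sqrt(pcov[2, 2]):.3f} rad")
```

The covariance returned by the fit also provides the kind of fitting-error estimate quoted together with the phases in this work.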
The phase of the oscillation in each sideband was extracted from each angle-differential data set using the same method as outlined in the angle-integrated case. The photoelectron spectra from each sideband were integrated over a 0.6 eV energy window centered on the peak, and the resulting delay-dependent signal was fitted to a cosine function to obtain the phase. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline probe & \(S_{10}\) & \multicolumn{3}{c|}{\(S_{12}\)} & \multicolumn{3}{c|}{\(S_{14}\)} & \multicolumn{3}{c|}{\(S_{16}\)} \\ \cline{2-11} intensity & \(S_{h}\) & \(S_{l}\) & \(S_{c}\) & \(S_{h}\) & \(S_{l}\) & \(S_{c}\) & \(S_{h}\) & \(S_{l}\) & \(S_{c}\) & \(S_{h}\) \\ \hline \multirow{2}{*}{\(I_{1}\)} & \(-1.96\) & \(0.11\) & \(0.00\) & \(0.08\) & \(0.52\) & \(0.52\) & \(0.55\) & \(-2.18\) & \(0.67\) & \(0.60\) \\ & \(\pm 0.02\) & \(\pm 0.02\) & \(\pm 0.01\) & \(\pm 0.02\) & \(\pm 0.06\) & \(\pm 0.01\) & \(\pm 0.01\) & \(\pm 0.49\) & \(\pm 0.08\) & \(\pm 0.06\) \\ \hline \multirow{2}{*}{\(I_{2}\)} & \(-1.29\) & \(0.31\) & \(0.00\) & \(0.07\) & \(0.58\) & \(0.56\) & \(0.58\) & \(-2.46\) & \(0.56\) & \(0.54\) \\ & \(\pm 0.01\) & \(\pm 0.02\) & \(\pm 0.01\) & \(\pm 0.01\) & \(\pm 0.04\) & \(\pm 0.01\) & \(\pm 0.01\) & \(\pm 0.11\) & \(\pm 0.04\) & \(\pm 0.07\) \\ \hline \multirow{2}{*}{\(I_{3}\)} & \(-0.64\) & \(0.32\) & \(0.00\) & \(0.07\) & \(0.73\) & \(0.53\) & \(0.59\) & \(-2.53\) & \(0.61\) & \(0.11\) \\ & \(\pm 0.01\) & \(\pm 0.02\) & \(\pm 0.01\) & \(\pm 0.01\) & \(\pm 0.02\) & \(\pm 0.11\) & \(\pm 0.01\) & \(\pm 0.05\) & \(\pm 0.03\) & \(\pm 1.40\) \\ \hline \end{tabular} \end{table} Table 1: Angle-integrated RABBITT phases (in radians) obtained from the fitting procedure applied to the measurements at the three IR intensities. Figure 3: Selected interference schemes for \(4\omega\) oscillations in the lower sideband. Scheme \(T_{A}\) dominates at low probe intensity. The oscillation generated by scheme \(T_{A}\) is out of phase by \(\pi\) compared to the oscillations produced by schemes \(T_{B}\), \(T_{C}\), \(T_{D}\), and \(T_{E}\). Figures 4 and 5 illustrate the angle-dependence of the retrieved phases from the three sidebands in the \(S_{12}\) and \(S_{14}\) groups, respectively, for different IR intensities: (a) \(I_{1}\), (b) \(I_{2}\), and (c) \(I_{3}\). For better comparison, and since we do not know the absolute experimental phase, a common phase was added to the three sidebands of each group to shift the central sideband phase to zero at the first data point (\(5^{\circ}\)) for each intensity. The statistical quality of the data above \(60^{\circ}\) was insufficient to retrieve a meaningful oscillation phase at all, and the estimated fitting error suggests some caution regarding the extracted phases at the larger angles, especially at the lowest intensity. The figures demonstrate a consistent angle-dependent RABBITT phase in the central sidebands of both the \(S_{12}\) and \(S_{14}\) groups across all three probe intensities. In contrast, both the lower and the higher sidebands exhibit some dependence on both their group and the IR intensity. We first discuss the angle-dependent oscillation phase in the central sideband while considering only the lowest-order transitions depicted in Fig. 1. 
According to the selection rules for electric dipole transitions, a three-photon transition to the central sideband results in the creation of a combination of two partial waves corresponding to angular momentum states of \(p\) (\(\ell=1\)) and \(f\) (\(\ell=3\)), where \(m=0\) is conserved. The amplitudes of these partial waves vary with the angle of electron emission and are determined by the spherical harmonics \(Y_{\ell,0}(\theta)\). In our specific experimental setup, where both beams are linearly polarized along the same direction, we can substitute the spherical harmonics with Legendre polynomials denoted as \(P_{\ell}(\theta)\). Consequently, the interference of all partial waves generated during the absorption and emission processes will result in a signal of the form \[S_{c}(\tau,\theta) =A_{c}(\theta)+B_{c}(\theta)\,\cos(4\,\omega\tau-\phi_{R,c}(\theta)) \tag{3a}\] \[=A_{c}(\theta)+a_{11}P_{1}^{2}(\theta)\,\cos(4\,\omega\tau-\phi_{11})\] (3b) \[+a_{33}P_{3}^{2}(\theta)\,\cos(4\,\omega\tau-\phi_{33})\] (3c) \[+a_{13}\,P_{1}(\theta)\,P_{3}(\theta)\,\cos(4\,\omega\tau-\phi_{13}). \tag{3d}\] Each of these oscillation terms is characterized by an oscillation amplitude determined by the magnitude of the dipole transition matrix elements \(a_{\ell,\ell^{\prime}}\) and products of the Legendre polynomials \(P_{\ell}(\theta)P_{\ell^{\prime}}(\theta)\), as well as a phase \(\phi_{\ell,\ell^{\prime}}\). Figure 5: Angle-dependent phase retrieved from the \(S_{14}\) group in the RABBITT scans for IR peak intensities of \(I_{1}\) (a), \(I_{2}\) (b), and \(I_{3}\) (c). Referring to Eqs. (3), we use the terms "same-channel interference" (3b,3c) to indicate interference occurring between two partial waves of identical angular momentum and "cross-channel interference" (3d) to indicate interference between two partial waves with distinct angular momentum. The oscillation phase of each interference term in Eq. (3) encompasses several components, including the XUV chirp, the Wigner phases, and phases arising from continuum-continuum transitions. While both the XUV chirp and the contribution from the Wigner phase remain consistent across all interference terms, the contribution of the continuum-continuum transition phase varies due to its dependence on the angular momenta of the states involved in the transition pathway. The distinct contribution of the continuum-continuum coupling phase (\(\phi_{cc}\)) in the oscillation phase of the different interfering terms, along with the variation in the associated oscillation amplitude with the angle of electron emission, leads to the angular dependence of the overall retrieved oscillation phase \(\phi_{R}(\theta)\) [26; 27]. The variation in \(\phi_{cc}\) across different angular momenta is predicted to be small, typically less than \(\pi/10\) for kinetic energies above 5 eV, and to diminish rapidly as the kinetic energy increases [28]. Consequently, the differences in the oscillation phases among various interference terms are generally minor, resulting in only a slight variation in the overall retrieved oscillation phase \(\phi_{R}(\theta)\) with respect to angle. However, a significant change in \(\phi_{R}(\theta)\) may occur when the electron emission angle approaches the node position of one of the Legendre polynomials. In cases of cross-channel interference, where \(\ell\neq\ell^{\prime}\), the product \(P_{\ell}(\theta)P_{\ell^{\prime}}(\theta)\) changes sign after passing through a node. 
This sign change is equivalent to introducing a \(\pi\) phase to its contribution in the oscillation phase \(\phi_{\ell\ell^{\prime}}\). Depending on the oscillation amplitude of this cross-channel interference compared to the contributions from other terms, the overall retrieved phase \(\phi_{R}\) might experience rapid variation near or beyond these node positions. For instance, the decline observed in the RABBITT phase \(\phi_{R}(\theta)\) in the central sideband in Fig. 4(a) beyond \(40^{\circ}\) indicates that the angle-dependent amplitude in the cross-channel (\(p-f\)) interference has surpassed that of the same-channel interferences (\(p-p\) and \(f-f\)). Since the angle-dependent behavior of the RABBITT phase for the central sideband remains consistent across all three applied probe intensities, the aforementioned arguments apply to all three cases. Furthermore, in \(S_{14}\), where the difference in the oscillation phase (\(\phi_{\ell,\ell^{\prime}}\)) among different interfering terms diminishes due to the reduced variation in \(\phi_{cc}\) across various angular momenta, the \(\pi\) phase jump associated with the dominance of cross-channel interference should become more abrupt. This will result in nearly negligible phase variation until the angle reaches a point where cross-channel interference prevails over other interfering terms, then causing a sudden \(\pi\) phase jump if the matrix elements are large enough. While this is not seen in the angular regime for which we have experimental data, it is noticeable in theoretical predictions at larger angles (cf. Fig. 6 below). We next discuss the angular variation of the RABBITT phase in the lower and higher sidebands of \(S_{12}\) for the case of the lowest applied intensity, as shown in Fig. 4(a). In both the lower and higher sidebands, a four-photon transition populates \(s\), \(d\), and \(g\) states, while a two-photon process populates \(s\) and \(d\) states (cf. Fig. 1). Based on the propensity rule for continuum-continuum transitions [28], the creation of a \(g\) electron in the lower sideband via emission of three IR photons is less likely. Consequently, despite the occurrence of a \(\pi\)-jump in the oscillation phases of \(s-g\) and \(d-g\) cross-channel interferences near \(30^{\circ}\), this does not result in a significant change in the RABBITT phase due to the reduced oscillation amplitude associated with cross-channel interferences involving a \(g\) electron. A variation becomes noticeable in the lower sideband starting around \(50^{\circ}\) when the \(d\)-wave approaches its node position at \(57^{\circ}\). However, this variation does not lead to a substantial change in the observed angular range. In other words, none of the cross-channel interferences outweigh the remaining interference terms in this range. Moving to the higher sideband, in accordance with the propensity rule, three-photon absorption in the continuum creates a \(g\) electron with notable probability. Therefore, it is reasonable to expect that the angle-dependence of the higher sideband will start to vary early, following the node position of \(P_{4}(\theta)\) around \(30^{\circ}\). Clearly, the significant variation in the RABBITT phase around \(50^{\circ}\) in the higher sideband suggests that cross-channel interferences involving \(d\) and/or \(g\) states start to become comparable and outweigh the remaining interference terms above \(50^{\circ}\). 
However, our current technique does not allow us to definitively determine which interference term is the most significant. Moving to the next sideband group, \(S_{14}\), we observe that the higher sideband shows less angular variation within the observed angular range compared to the \(S_{12}\) group. This can again be explained by the fact that the difference in the oscillation phase (\(\phi_{\ell,\ell^{\prime}}\)) among different interfering terms in \(S_{14}\) is diminished due to the reduced variation in \(\phi_{cc}\) across various angular momenta. Additionally, the transition-amplitude ratios to different angular-momentum states change with the energy, potentially shifting the angle where cross-channel interference becomes dominant or, in some cases, nearly eliminating cross-channel interference altogether. Unfortunately, the error bars in the \(S_{14,l}\) numbers are too large to draw conclusions with high confidence. As the probe intensity increases, higher-order transition terms become increasingly significant. In such cases, the transition pathways depicted in Fig. 1 may no longer provide an adequate description, necessitating the inclusion of higher-order terms. Furthermore, we note that the angle-dependence of the RABBITT phase changes slowly with varying probe intensity for most sidebands. However, a notable exception is seen in the lower sideband of \(S_{12}\), where there is a substantial variation in the angle-dependence of the RABBITT phase with changing probe intensity. This behavior can be attributed to the increasing impact of six-photon transitions involving the under-threshold harmonic (\(H_{9}\)) and Rydberg states. These transitions can be intensified by resonances and may also introduce rapid, energy-dependent resonance phases, resulting in a pronounced angle-dependence of the RABBITT phase. Figure 6 depicts a comparison between the experimental data and predictions from the SAE and RMT models for the \(S_{12}\) and \(S_{14}\) groups. We only show the case for the peak intensity of \(I_{2}=7\times 10^{11}\mathrm{W/cm^{2}}\), since the findings are similar for the other two. Starting with the \(S_{12}\) group, we see fairly good agreement between both sets of theoretical predictions and experiment for the center sideband \(S_{12,c}\) and the higher sideband \(S_{12,h}\), whereas there are quantitative differences for \(S_{12,l}\). These include the relative position at small angles: experiment has the phase of \(S_{12,l}\) above that of \(S_{12,c}\) and \(S_{12,h}\), while the SAE model in particular predicts \(S_{12,l}\) to start significantly below. A possible reason is that higher-order transitions involving Rydberg states are affecting the phase of \(S_{12,l}\), and it shifts gradually above that of \(S_{12,c}\) as the probe intensity is increased. Nevertheless, there is agreement in the drop of the RABBITT phases beyond \(\approx 40^{\circ}\), with \(S_{12,h}\) dropping the fastest with increasing angle. On the other hand, there is substantial disagreement between experiment and theory for the \(S_{14}\) group. While SAE and RMT in general agree well with each other, the experimental data suggest a much stronger angular dependence of all RABBITT phases, while the theoretical predictions are nearly flat. A notable exception to the otherwise good agreement between the SAE and RMT prediction is the rapid increase in the SAE RABBITT phase of \(S_{14,h}\) at angles beyond \(\approx 50^{\circ}\) and \(S_{14,c}\) beyond \(\approx 65^{\circ}\).
We note that the signal strength drops fast with increasing angle. Hence, the predictions become very sensitive to the details of the model, in this case the treatment (or the lack thereof) of exchange effects and correlations in the description of the ground state. While RMT is likely to be superior to SAE in this regard, neither model reproduces the rapid decrease in the measured phase of \(S_{14,h}\). Instead, both theories predict the most rapid changes in the angular range where no experimental data are available. RMT, for example, predicts \(S_{h}\) to drop first around \(65^{\circ}\), followed by \(S_{c}\) around \(70^{\circ}\), and finally \(S_{l}\) around \(75^{\circ}\).

Figure 6: SAE and RMT angle-dependent phase for the \(S_{12}\) and \(S_{14}\) groups, compared with the experimental data for an IR peak intensity of \(I_{2}=7\times 10^{11}\mathrm{W/cm^{2}}\).

## VI Conclusions and outlook In this joint experimental and theoretical study, we extended our previous work on argon and carried out a 3-SB experiment on helium. This target was chosen for several reasons: 1) the XUV step is much simpler in helium compared to argon, since only one orbital angular momentum is generated; 2) we expected a better chance for theory to handle helium rather than the more complex argon target; 3) there are no known autoionizing resonances in the range of ejected-electron energies studied in the present work. Indeed, we found that relatively simple numerical models were able to qualitatively, and in some cases also quantitatively, reproduce most of the experimental findings. Exceptions include the angle-integrated phase extracted for the threshold sideband, which is heavily affected by IR-induced transitions involving Rydberg states, and the angle-dependence of the RABBITT phase for one of the two sideband groups studied in this work. We hope that the experimental data produced in this study will serve as motivation for future work. This includes a detailed study of the threshold sideband, as well as measurements with improved statistics at lower intensity to reduce higher-order effects. In addition, we will attempt to increase the cutoff of the higher harmonics in order to be able to compare more sideband groups. We will also perform a thorough investigation of the sensitivity of theoretical predictions to the details of the model. We are currently extending the RMT calculations to include more coupled states to further improve the bound-state description and even to account for coupling to the ionization continuum in the spirit of the \(R\)-matrix with pseudo-states (RMPS) approach [29]. Since many delays have to be scanned through, such calculations are computationally expensive and have to be planned with great care in light of the available resources. ###### Acknowledgements. The experimental part of this work was supported by the DFG-QUTIF program under Project No. HA 8399/2-1 and IMPRS-QD. A.T.B., S.S., K.R.H., and K.B. acknowledge funding from the NSF through grant No. PHY-2110023 as well as the Frontera Pathways allocation PHY-20028. A.T.B. is grateful for funding through NSERC. The calculations were performed on Stampede-2 and Frontera at the Texas Advanced Computing Center in Austin (TX).
2309.04802
CPMR: Context-Aware Incremental Sequential Recommendation with Pseudo-Multi-Task Learning
The motivations of users to make interactions can be divided into static preference and dynamic interest. To accurately model user representations over time, recent studies in sequential recommendation utilize information propagation and evolution to mine from batches of arriving interactions. However, they ignore the fact that people are easily influenced by the recent actions of other users in the contextual scenario, and applying evolution across all historical interactions dilutes the importance of recent ones, thus failing to model the evolution of dynamic interest accurately. To address this issue, we propose a Context-Aware Pseudo-Multi-Task Recommender System (CPMR) to model the evolution in both historical and contextual scenarios by creating three representations for each user and item under different dynamics: static embedding, historical temporal states, and contextual temporal states. To dually improve the performance of temporal states evolution and incremental recommendation, we design a Pseudo-Multi-Task Learning (PMTL) paradigm by stacking the incremental single-target recommendations into one multi-target task for joint optimization. Within the PMTL paradigm, CPMR employs a shared-bottom network to conduct the evolution of temporal states across historical and contextual scenarios, as well as the fusion of them at the user-item level. In addition, CPMR incorporates one real tower for incremental predictions, and two pseudo towers dedicated to updating the respective temporal states based on new batches of interactions. Experimental results on four benchmark recommendation datasets show that CPMR consistently outperforms state-of-the-art baselines and achieves significant gains on three of them. The code is available at: https://github.com/DiMarzioBian/CPMR.
Qingtian Bian, Jiaxing Xu, Hui Fang, Yiping Ke
2023-09-09T14:07:11
http://arxiv.org/abs/2309.04802v3
# CPMR: Context-Aware Incremental Sequential Recommendation with Pseudo-Multi-Task Learning ###### Abstract. The motivations of users to make interactions can be divided into static preference and dynamic interest. To accurately model user representations over time, recent studies in sequential recommendation utilize information propagation and evolution to mine from batches of arriving interactions. However, they ignore the fact that people are easily influenced by the recent actions of other users in the contextual scenario, and applying evolution across all historical interactions dilutes the importance of recent ones, thus failing to model the evolution of dynamic interest accurately. To address this issue, we propose a Context-Aware **P**seudo-**M**ulti-**T**ask **R**ecommender System (CPMR) to model the evolution in both historical and contextual scenarios by creating three representations for each user and item under different dynamics: static embedding, historical temporal states, and contextual temporal states. To dually improve the performance of temporal states evolution and incremental recommendation, we design a Pseudo-Multi-Task Learning (PMTL) paradigm by stacking the incremental single-target recommendations into one multi-target task for joint optimization. Within the PMTL paradigm, CPMR employs a shared-bottom network to conduct the evolution of temporal states across historical and contextual scenarios, as well as the fusion of them at the user-item level. In addition, CPMR incorporates one real tower for incremental predictions, and two pseudo towers dedicated to updating the respective temporal states based on new batches of interactions. Experimental results on four benchmark recommendation datasets show that CPMR consistently outperforms state-of-the-art baselines and achieves significant gains on three of them. The code is available at: [https://github.com/DiMarzioBian/CPMR](https://github.com/DiMarzioBian/CPMR).
Recommender Systems, Incremental Recommendation, Context-aware Recommendation, Graph Neural Networks, Pseudo-Multi-Task Learning
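Only the abstract of the paper is reproduced above, so the following PyTorch-style sketch of the pseudo-multi-task layout it describes is necessarily speculative: the module names, layer choices (GRU cells, linear towers), dimensions, and dot-product scoring are all assumptions, not the released implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

class SharedBottom(nn.Module):
    """Evolves the historical and contextual temporal states and fuses them with
    the static embeddings (one recurrent cell per scenario, as a stand-in for
    whatever evolution operator CPMR actually uses)."""
    def __init__(self, dim):
        super().__init__()
        self.hist_cell = nn.GRUCell(dim, dim)  # evolution in the historical scenario
        self.ctx_cell = nn.GRUCell(dim, dim)   # evolution in the contextual scenario
        self.fuse = nn.Linear(3 * dim, dim)    # fusion of the three representations

    def forward(self, static, hist, ctx, batch_msg):
        hist = self.hist_cell(batch_msg, hist)
        ctx = self.ctx_cell(batch_msg, ctx)
        fused = self.fuse(torch.cat([static, hist, ctx], dim=-1))
        return fused, hist, ctx

class CPMRSketch(nn.Module):
    """One real tower (incremental predictions) plus two pseudo towers that write
    back the updated historical/contextual temporal states."""
    def __init__(self, dim=64):
        super().__init__()
        self.bottom = SharedBottom(dim)
        self.real_tower = nn.Linear(dim, dim)        # real task: recommendation scores
        self.pseudo_hist_tower = nn.Linear(dim, dim) # pseudo task: historical-state update
        self.pseudo_ctx_tower = nn.Linear(dim, dim)  # pseudo task: contextual-state update

    def forward(self, static, hist, ctx, batch_msg, item_emb):
        fused, hist, ctx = self.bottom(static, hist, ctx, batch_msg)
        scores = self.real_tower(fused) @ item_emb.t()
        return scores, self.pseudo_hist_tower(hist), self.pseudo_ctx_tower(ctx)

# Toy forward pass: 8 users, 100 items, 64-dim representations.
model = CPMRSketch(dim=64)
u_static, u_hist, u_ctx = (torch.randn(8, 64) for _ in range(3))
batch_msg = torch.randn(8, 64)   # aggregated messages from the new interaction batch
item_emb = torch.randn(100, 64)
scores, u_hist, u_ctx = model(u_static, u_hist, u_ctx, batch_msg, item_emb)
print(scores.shape)  # torch.Size([8, 100])
```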
The motivations behind users' interactions can be divided into static preferences and dynamic interests. To model user representations accurately over time, recent studies in sequential recommendation mine batches of arriving interactions through information propagation and evolution. However, they ignore the fact that people are easily influenced by the recent actions of other users, and applying evolution across all historical interactions dilutes the importance of recent ones, thus failing to model the evolution of dynamic interest accurately. To address this problem, we propose a Context-Aware Pseudo-Multi-Task Recommender System (CPMR), which models the evolution in both historical and contextual scenarios by creating three representations, under different dynamics, for each user and item. To dually improve the performance of temporal-state evolution and incremental recommendation, we design a Pseudo-Multi-Task Learning (PMTL) paradigm.
2303.17825
Refinements of Katz-Sarnak theory for the number of points on curves over finite fields
This paper goes beyond Katz-Sarnak theory on the distribution of curves over finite fields according to their number of rational points, theoretically, experimentally and conjecturally. In particular, we give a formula for the limits of the moments measuring the asymmetry of this distribution for (non-hyperelliptic) curves of genus $g \geq 3$. The experiments point to a stronger notion of convergence than the one provided by the Katz-Sarnak framework for all curves of genus $\geq 3$. However, for elliptic curves and for hyperelliptic curves of every genus we prove that this stronger convergence cannot occur.
Jonas Bergström, Everett W. Howe, Elisa Lorenzo García, Christophe Ritzenthaler
2023-03-31T06:47:41
http://arxiv.org/abs/2303.17825v2
# Refinements of Katz-Sarnak theory for the number of points on curves over finite fields ###### Abstract. This paper goes beyond Katz-Sarnak theory on the distribution of curves over finite fields according to their number of rational points, theoretically, experimentally and conjecturally. In particular, we give a formula for the limits of the moments measuring the asymmetry of this distribution for (non-hyperelliptic) curves of genus \(g\geq 3\). The experiments point to a stronger notion of convergence than the one provided by the Katz-Sarnak framework for all curves of genus \(\geq 3\). However, for elliptic curves and for hyperelliptic curves of every genus we prove that this stronger convergence cannot occur. Key words and phrases: Katz-Sarnak theory; distribution; moments; Serre's obstruction. 2010 Mathematics Subject Classification: 11G20, 11R45, 14H10, 14H25. \({}^{1}\)Throughout this paper, the word 'curve' will always mean a projective, absolutely irreducible, smooth variety of dimension 1. Explicit formulas for \(S_{n}(q,\mathcal{X})\), the weighted sum of the \(n\)th powers of the Frobenius traces of the curves in \(\mathcal{X}(\mathbb{F}_{q})\) (see Definition 2.1 below), are known for small \(n\), see for instance [1, Th. 3.4] for \(\mathcal{H}_{g}\) (note that the odd \(n\) values are equal to \(0\) in this case) and [1] for \(\mathcal{M}_{3}^{\mathrm{nhyp}}\). However, it is possible to give an interpretation for \[\mathfrak{a}_{n}(\mathcal{X}):=\lim_{q\to\infty}\frac{S_{n}(q,\mathcal{X})}{q^{\dim\mathcal{X}+n/2}}\] with \(\mathcal{X}=\mathcal{M}_{g}\), \(\mathcal{H}_{g}\) or \(\mathcal{M}_{g}^{\mathrm{nhyp}}\) for every \(g\geq 2\) and even \(n\geq 2\) in terms of representation theory of the compact symplectic group \(\mathrm{USp}_{2g}\). This is achieved in [1, Th. 3.8] using the ideas of Katz and Sarnak. Our first contributions are gathered in Theorem 2.3. Using the results of Johnson [15] and Hain [14], together with results of [10, 11] about the first cohomology group of symplectic local systems on \(\mathcal{M}_{g}\), we can prove that for even values of \(n>0\) we have \[\mathfrak{a}_{n}(\mathcal{M}_{g})-\frac{S_{n}(q,\mathcal{M}_{g})}{q^{\dim\mathcal{M}_{g}+n/2}}=O(q^{-1}) \tag{1.1}\] when \(g\geq 2\), whereas Katz-Sarnak would only give \(O(q^{-1/2})\). Since \(\mathfrak{a}_{n}(\mathcal{M}_{g})=0\) for odd values of \(n\), this suggests replacing the exponent in the power of \(q\) in the denominator of the expression defining \(\mathfrak{a}_{n}(\mathcal{M}_{g})\) with a smaller number. As far as we know this has not been considered previously. We therefore introduce for odd \(n\) \[\mathfrak{b}_{n}(\mathcal{M}_{g}):=-\lim_{q\to\infty}\frac{S_{n}(q,\mathcal{M}_{g})}{q^{3g-3+(n-1)/2}}.\] Theorem 2.3 gives \(\mathfrak{b}_{n}(\mathcal{M}_{g})\) in terms of an explicit integral and in terms of the representation theory of \(\mathrm{USp}_{2g}\). This second description makes it easy to compute. The deep relations between the sums of traces and Katz-Sarnak theory become clearer once we switch to a probabilistic point of view. In Section 3, we introduce the classical probability measure \(\mu_{q,g}\) on the interval \([-2g,2g]\) derived from the numbers of \(\mathbb{F}_{q}\)-isomorphism classes of curves of genus \(g>1\) with given traces of Frobenius. From Katz-Sarnak, we then know that the sequence of measures \((\mu_{q,g})\) weakly converges to a continuous measure \(\mu_{g}\) with an explicit density \(\mathfrak{f}_{g}\), at a convergence rate of \(O(q^{-1/2})\) (see [14, Th. 2.1] for equivalent definitions of weak convergence of measures).
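Unwinding the definitions (a reformulation, with \(I_{g}=[-2g,2g]\) and using that \(\#\mathcal{M}_{g}(\mathbb{F}_{q})\sim q^{3g-3}\)), the two quantities introduced above can be written as moment limits against \(\mu_{q,g}\):
\[\mathfrak{a}_{n}(\mathcal{M}_{g})=\lim_{q\to\infty}\int_{I_{g}}\tau^{n}\,d\mu_{q,g}(\tau)\ \ (n\text{ even}),\qquad\mathfrak{b}_{n}(\mathcal{M}_{g})=-\lim_{q\to\infty}\sqrt{q}\int_{I_{g}}\tau^{n}\,d\mu_{q,g}(\tau)\ \ (n\text{ odd}).\]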
In this language, the numbers \(\mathfrak{a}_{n}(\mathcal{M}_{g})\) can be understood as the \(n\)th moments of the measure \(\mu_{g}\), and for these moments we have a faster convergence rate of \(O(q^{-1})\) by (1.1). Notice, however, as explained in Remark 3.2, that this rate of convergence for moments cannot be extended to all continuous functions, and therefore does not improve on the Katz-Sarnak result above. In Section 4, we investigate whether the Katz-Sarnak limiting distributions can be used to approximate the number of curves over a given finite field \(\mathbb{F}_{q}\) of a given genus and with a given trace of Frobenius; one might hope that integrating that distribution over an interval of length \(1/\sqrt{q}\) around \(t/\sqrt{q}\) would give a value close to the number of genus-\(g\) curves over \(\mathbb{F}_{q}\) having trace \(t\). We show that this does _not_ happen for elliptic curves or for hyperelliptic curves of any genus. For elliptic curves, Proposition 4.4 shows that the number of elliptic curves with a given trace can be an arbitrarily large multiple of this naive Katz-Sarnak prediction (see also Figure 3). For hyperelliptic curves, Proposition 4.1 shows (roughly speaking) that if the number of curves is asymptotically bounded above and below by two multiples of the naive Katz-Sarnak prediction, then the ratio of these two multiples is bounded below by a fixed number strictly greater than \(1\) (see Figure 1). On the other hand, experimentally, one sees that the elliptic and hyperelliptic cases differ in the sense that it is easy to 'correct' the distribution in the hyperelliptic cases to observe a good approximation by the density function \(\mathfrak{f}_{g}\) (see Figure 2). Even stronger, computations for all non-hyperelliptic curves of genus 3 (see Figure 4) make us dream that the naive Katz-Sarnak approximation _does_ directly give an accurate estimate for the number of curves with a given number of points. This leads us to claim the bold Conjecture 5.1. The heuristic idea behind this conjecture is that for each trace, one is averaging over many isogeny classes which somehow would allow this stronger convergence as long as there are no obvious arithmetic obstructions. Our attempts to use the better convergence rates of the moments in the case of \(\mathcal{M}_{g}\) for \(g\geq 3\) to prove this conjecture were unfortunately unsuccessful. Finally, in Section 5 we revisit the work of [11] on the symmetry breaking for the trace distribution of (non-hyperelliptic) genus 3 curves, by looking at the difference between the number of curves with trace \(t\) and the number of curves with trace \(-t\). In probabilistic terms, this asymmetry is given by a signed measure \(\nu_{q,g}\). Although these signed measures weakly converge to \(0\) when \(q\) goes to infinity, by Corollary 5.3, the moments of \(\sqrt{q}\,\nu_{q,g}\) converge to \(-2\mathfrak{b}_{n}(\mathcal{M}_{g})\) when \(n\) is odd (and are trivially \(0\) when \(n\) is even). In particular, this shows that 'zooming in' on the Katz-Sarnak distribution, one can spot a difference between the behaviour for hyperelliptic curves (for which the corresponding signed measures would all be \(0\)) and for non-hyperelliptic curves. In the same spirit as Section 4, one then introduces a limit measure with density function \(\mathfrak{h}_{g}\) whose \(n\)th moments are \(\mathfrak{b}_{n}(\mathcal{M}_{g})\).
The experimental data for \(g=3\) (see Figure 5) and the convergence of moments lead us to conjecture that the sequence of signed measures \((\sqrt{q}\,\nu_{q,g})\) weakly converges to the continuous signed measure with density \(-2\,\mathfrak{h}_{g}\) for all \(g\geq 3\). Notice that in contrast to the case of positive bounded measures, the convergence of moments of signed measures on a compact interval does not directly imply weak convergence; see Example 5.4. With such a conjecture in hand, one may then improve on the result of [11] which heuristically approximated the limit density of \((\sqrt{q}\,\nu_{q,g})\) by the function \(x(1-x^{2}/3)\cdot\left(\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\right)\). Using the first values of \(\mathfrak{b}_{n}(\mathcal{M}_{3})\), we get the better approximation \[x\left(5/4-x^{2}/2+x^{4}/60\right)\left(\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\right).\] **Acknowledgement.** We thank Dan Petersen for helpful conversations in connection with the Gross-Schoen cycle and Sophie Dabo for discussions on measure theory. ## 2. Limits of sums of powers of traces Fix a prime power \(q\). Let us start by recalling some definitions and results from [1]. **Definition 2.1**.: _Let \(\mathcal{X}=\mathcal{H}_{g}\), \(\mathcal{M}_{g}\) or \(\mathcal{M}_{g}^{\mathrm{nhyp}}\) for any \(g\geq 2\), or \(\mathcal{X}=\mathcal{M}_{1,1}\)._ * _Recall from Section_ 1 _that one defines_ \[S_{n}(q,\mathcal{X})=\sum_{[C]\in\mathcal{X}(\mathbb{F}_{q})}\sum_{C^{\prime}\in[C]}\frac{(q+1-\#C^{\prime}(\mathbb{F}_{q}))^{n}}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C^{\prime})}\] _where if_ \([C]\) _is a point of_ \(\mathcal{X}(\mathbb{F}_{q})\) _representing the_ \(\overline{\mathbb{F}}_{q}\)_-isomorphism class of a curve_ \(C/\mathbb{F}_{q}\)_, the second sum spans the set of representatives of all twists_ \(C^{\prime}\) _of_ \(C\)_._ * _For every_ \(n\geq 1\)_, let_ \[\mathfrak{a}_{n}(\mathcal{X}):=\lim_{q\to\infty}\frac{S_{n}(q,\mathcal{X})}{q^{\dim\mathcal{X}+n/2}}\] _with_ \(\mathcal{X}=\mathcal{H}_{g}\) _or_ \(\mathcal{M}_{g}\) _or_ \(\mathcal{M}_{g}^{\mathrm{nhyp}}\) _for any_ \(g\geq 2\)_, or with_ \(\mathcal{X}=\mathcal{M}_{1,1}\)_._ Define \(w_{k}:=\sum_{j=1}^{g}2\cos k\theta_{j}\) and \(dm_{g}:=\frac{1}{g!\,\pi^{g}}\prod_{i<j}(2\cos\theta_{i}-2\cos\theta_{j})^{2}\prod_{i}2\sin^{2}\theta_{i}\,d\theta_{1}\ldots d\theta_{g}\), and recall from [1, Th. 2.1] that for every \(g\geq 2\) and \(n\geq 1\), \[\mathfrak{a}_{n}(\mathcal{X})=\int_{(\theta_{1},\ldots,\theta_{g})\in[0,\pi]^{g}}w_{1}^{n}\,dm_{g},\] with \(\mathcal{X}=\mathcal{H}_{g}\) or \(\mathcal{M}_{g}\) or \(\mathcal{M}_{g}^{\mathrm{nhyp}}\). Notice that for a fixed value of \(g\), \(\mathfrak{a}_{n}(\mathcal{X})\) does not depend on \(\mathcal{X}\) and that \(\mathfrak{a}_{n}(\mathcal{X})=0\) for odd \(n\). In order to go deeper in the limit distribution, we will also look at the 'next term' of the limit of \(\frac{S_{n}(q,\mathcal{X})}{q^{\dim\mathcal{X}+n/2}}\) when \(\mathcal{X}=\mathcal{M}_{g}\). **Definition 2.2**.: _For every \(g\geq 2\) and \(n\geq 1\), let_ \[\mathfrak{b}_{n}(\mathcal{M}_{g}):=-\lim_{q\to\infty}\sqrt{q}\left(\frac{S_{n}(q,\mathcal{M}_{g})}{q^{3g-3+n/2}}-\mathfrak{a}_{n}(\mathcal{M}_{g})\right).\] To state our results, we need to recall basic facts about the representations of \(\operatorname{USp}_{2g}\) with coefficients in \(\mathbb{Q}_{\ell}\) where \(\ell\) is a prime distinct from the characteristic of \(\mathbb{F}_{q}\).
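As a quick numerical sanity check of the integral formula above (a sketch assuming SciPy; genus \(2\) keeps the quadrature two-dimensional), one can verify that \(dm_{g}\) has total mass \(1\) and that the first even moments match the representation-theoretic values \(\mathfrak{a}_{2}=1\) and \(\mathfrak{a}_{4}=3\) discussed below.

```python
import math
from scipy.integrate import dblquad

g = 2  # genus: dm_g below is the Weyl measure for USp(4), pushed to [0, pi]^2

def dm_density(t1, t2):
    # Density of dm_g with respect to dtheta_1 dtheta_2 (formula from the text).
    vand = (2.0 * math.cos(t1) - 2.0 * math.cos(t2)) ** 2
    sines = (2.0 * math.sin(t1) ** 2) * (2.0 * math.sin(t2) ** 2)
    return vand * sines / (math.factorial(g) * math.pi ** g)

def moment(n):
    # n-th moment of w_1 = 2cos(theta_1) + 2cos(theta_2) against dm_g.
    f = lambda t2, t1: (2.0 * math.cos(t1) + 2.0 * math.cos(t2)) ** n * dm_density(t1, t2)
    value, _error = dblquad(f, 0.0, math.pi, 0.0, math.pi)
    return value

print(round(moment(0), 6))  # 1.0: dm_g is a probability measure
print(round(moment(2), 6))  # 1.0: the trivial representation occurs once in V tensor V
print(round(moment(4), 6))  # 3.0: and three times in the fourth tensor power
```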
The irreducible representations \(V_{\lambda}\) of \(\operatorname{USp}_{2g}\) are indexed by the highest weight \(\lambda=(\lambda_{1},\ldots,\lambda_{g})\) with \(\lambda_{1}\geq\ldots\geq\lambda_{g}\geq 0\). The corresponding characters \(\chi_{\lambda}\) are the symplectic Schur polynomials \(\mathbf{s}_{\langle\lambda\rangle}(x_{1},\ldots,x_{g})\in\mathbb{Z}[x_{1},\ldots,x_{g},x_{1}^{-1},\ldots,x_{g}^{-1}]\) in the sense that if \(A\in\operatorname{USp}_{2g}\) has eigenvalues \(\alpha_{1},\ldots,\alpha_{g},\alpha_{1}^{-1},\ldots,\alpha_{g}^{-1}\) then \(\chi_{\lambda}(A)=\mathbf{s}_{\langle\lambda\rangle}(\alpha_{1},\ldots,\alpha_{g})\), see [1, Prop. 24.22 and (A.45)]. In the notation we will suppress the \(\lambda_{j}\) that are \(0\). Put \(|\lambda|=\lambda_{1}+\ldots+\lambda_{g}\) and note that \(V_{\lambda}^{\vee}\cong V_{\lambda}\). Let \(V=V_{(1)}\) denote the standard representation. **Theorem 2.3**.: 1. _Let_ \(\mathcal{X}=\mathcal{H}_{g}\)_,_ \(\mathcal{M}_{g}\)_,_ \(\mathcal{M}_{g}^{\mathrm{nhyp}}\) _for any_ \(g\geq 2\) _or_ \(\mathcal{M}_{1,1}\)_. For every_ \(n\geq 1\)_,_ \(\mathfrak{a}_{n}(\mathcal{X})\) _is equal to the number of times the trivial representation appears in the_ \(\operatorname{USp}_{2g}\)_-representation_ \(V^{\otimes n}\) _with_ \(V\) _the standard representation. (This is precisely_ [1, Th. 3.8]_, but we will give a different proof.)_ 2. _For every_ \(g\geq 3\) _and_ \(n\geq 1\)_,_ \(\mathfrak{b}_{n}(\mathcal{M}_{g})\) _is equal to the number of times the representation_ \(V_{(1,1,1)}\) _appears in the_ \(\operatorname{USp}_{2g}\)_-representation_ \(V^{\otimes n}\) _with_ \(V\) _the standard representation. In particular_ \(\mathfrak{b}_{n}(\mathcal{M}_{g})=0\) _for_ \(n\) _even._ 3. _For every_ \(n\geq 1\)_,_ \(\mathfrak{b}_{n}(\mathcal{M}_{2})=0\)_._ 4. _For every_ \(g\geq 2\) _and_ \(n\geq 1\)_,_ \[\mathfrak{a}_{n}(\mathcal{M}_{g})-\frac{\mathfrak{b}_{n}(\mathcal{M}_{g})}{\sqrt{q}}=\frac{S_{n}(q,\mathcal{M}_{g})}{q^{3g-3+n/2}}+O(q^{-1}).\] 5. _For every_ \(g\geq 3\) _and_ \(n\geq 1\) _we have_ \[\mathfrak{b}_{n}(\mathcal{M}_{g})=\int_{(\theta_{1},\ldots,\theta_{g})\in[0,\pi]^{g}}w_{1}^{n}\Big{(}\frac{1}{6}w_{1}^{3}-\frac{1}{2}w_{1}w_{2}+\frac{1}{3}w_{3}-w_{1}\Big{)}\,dm_{g}.\] (2.1) Proof.: Poincare duality gives a symplectic pairing on the first \(\ell\)-adic etale cohomology group of a curve. We will be interested in the action of Frobenius on these cohomology groups and since we need to take the size of the eigenvalues of Frobenius into account we will consider representations of \(\operatorname{GSp}_{2g}\). Let \(\mathbb{Q}_{\ell}(-1)\) denote the _multiplier representation_ or _similitude character_; if we identify \(\operatorname{GSp}_{2g}\) as the group of automorphisms of a \(2g\)-dimensional vector space that preserve a symplectic form \(s\) up to scaling, then \(\mathbb{Q}_{\ell}(-1)\) is the representation \(\eta\) that sends an element of \(\operatorname{GSp}_{2g}(\mathbb{Q}_{\ell})\) to the factor by which it scales \(s\). Let \(\mathbb{Q}_{\ell}(1)\) be the inverse (or dual) of \(\mathbb{Q}_{\ell}(-1)\), and for an integer \(j\) put \(\mathbb{Q}_{\ell}(j)=\mathbb{Q}_{\ell}(\operatorname{sgn}j)^{\otimes|j|}\). For a representation \(U\) put \(U(j):=U\otimes\mathbb{Q}_{\ell}(j)\).
With the standard representation \(W\) of \(\operatorname{GSp}_{2g}\) we can get irreducible representations \(W_{\lambda}\), for \(\lambda=(\lambda_{1},\ldots,\lambda_{g})\) with \(\lambda_{1}\geq\ldots\geq\lambda_{g}\geq 0\), using the same construction as for \(\operatorname{USp}_{2g}\), see [1, (17.9)]. If we homogenize the polynomial \(s_{\langle\lambda\rangle}(x_{1},\ldots,x_{g},t)\) to degree \(|\lambda|\) using a variable \(t\) of weight \(2\) and with \(x_{i}\) of weight \(1\) for \(i=1,\ldots,g\), then for \(A\in\operatorname{GSp}_{2g}\) with \(\eta(A)=s\) and eigenvalues \(\alpha_{1},\dots,\alpha_{g},s\alpha_{1}^{-1},\dots,s\alpha_{g}^{-1}\) we have \(\chi_{\lambda}(A)=s_{\langle\lambda\rangle}(\alpha_{1},\dots,\alpha_{g},s)\). Now, for every \(n\), there are integers \(c_{\lambda,n}\geq 0\) such that \[W^{\otimes n}\cong\bigoplus_{|\lambda|\leq n}W^{\oplus c_{\lambda,n}}_{\lambda}\big{(}(-n+|\lambda|)/2\big{)}. \tag{2.2}\] Note that if \(n\not\equiv|\lambda|\bmod 2\) then \(c_{\lambda,n}=0\). Note also that (2.2) holds with the same \(c_{\lambda,n}\) when replacing \(\operatorname{GSp}_{2g}\) with \(\operatorname{USp}_{2g}\), i.e. replacing \(W\) by \(V\) and ignoring the multiplier representation. Note also that \(W^{\vee}_{\lambda}\cong W_{\lambda}(|\lambda|)\). Let \(\mathcal{X}=\mathcal{H}_{g}\), \(\mathcal{M}_{g}\) or \(\mathcal{M}_{g}^{\operatorname{nhyp}}\) for any \(g\geq 2\), or \(\mathcal{X}=\mathcal{M}_{1,1}\). Let \(\pi:\mathcal{Y}\to\mathcal{X}\) be the universal object and define the \(\ell\)-adic local system \(\mathbb{V}=R^{1}\pi_{*}\mathbb{Q}_{\ell}\). To any irreducible representation of \(\operatorname{GSp}_{2g}\) (the symplectic pairing coming as above from the first cohomology group of the curves) corresponding to \(\lambda\) we can then use Schur functors to define a local system \(\mathbb{V}_{\lambda}\). Let \(H^{j}_{c}\) denote compactly supported \(\ell\)-adic cohomology and \(\operatorname{Fr}_{q}\) the geometric Frobenius acting on \(\mathcal{X}\otimes\overline{\mathbb{F}}_{q}\). For general results on etale cohomology of stacks, see for instance [14]. For almost all primes \(p\) we have \(H^{j}_{c}(\mathcal{X}\otimes\mathbb{C},\mathbb{V}_{\lambda})\cong H^{j}_{c}(\mathcal{X}\otimes\overline{\mathbb{Q}}_{p},\mathbb{V}_{\lambda})\cong H^{j}_{c}(\mathcal{X}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{\lambda})\). From this we get bounds on \(\dim_{\mathbb{Q}_{\ell}}H^{j}_{c}(\mathcal{X}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{\lambda})\) that are independent of \(p\). This will tacitly be used below when we let \(q\) go to infinity. Put \(\overline{\mathcal{X}}=\mathcal{X}\otimes\overline{\mathbb{F}}_{q}\). The Lefschetz trace formula and (2.2) then tell us that \[S_{n}(q,\mathcal{X})=\sum_{j=0}^{2\dim\mathcal{X}}(-1)^{j}\operatorname{Tr}(\operatorname{Fr}_{q},H^{j}_{c}(\overline{\mathcal{X}},\mathbb{V}^{\otimes n}))=\sum_{\lambda}c_{\lambda,n}\sum_{j=0}^{2\dim\mathcal{X}}(-1)^{j}\operatorname{Tr}(\operatorname{Fr}_{q},H^{j}_{c}(\overline{\mathcal{X}},\mathbb{V}_{\lambda}))\,q^{(n-|\lambda|)/2}\,;\] compare [1, §8]. Since \(\mathbb{V}_{\lambda}\) is pure of weight \(\lambda\), it follows from Deligne's theory of weights [13, 14] that the trace of Frobenius on \(H^{j}_{c}(\overline{\mathcal{X}},\mathbb{V}_{\lambda})\) is equal (after choosing an embedding of \(\overline{\mathbb{Q}}_{\ell}\) in \(\mathbb{C}\)) to a sum of complex numbers with absolute value at most \(q^{(j+|\lambda|)/2}\).
From this we see that only when \(j=2\dim\mathcal{X}\) can we get a contribution to \(\mathfrak{a}_{n}(\mathcal{X})\). Since \(\mathcal{X}\) is a smooth Deligne-Mumford stack, Poincare duality shows that for every \(i\) with \(0\leq i\leq 2\dim\mathcal{X}\), we have \[H^{2\dim\mathcal{X}-i}_{c}(\overline{\mathcal{X}},\mathbb{V}_{\lambda})\cong H^{i}(\overline{\mathcal{X}},\mathbb{V}_{\lambda})^{\vee}(-\dim\mathcal{X}-|\lambda|).\] The zeroth cohomology group of a local system consists of the global invariants, and among the irreducible local systems, only the constant local system \(\mathbb{V}_{(0)}\cong\mathbb{Q}_{\ell}\) has such. Moreover, \(H^{0}(\overline{\mathcal{X}},\mathbb{Q}_{\ell})\) is one-dimensional, since \(\mathcal{X}\) is irreducible. Finally, since the action of \(\operatorname{Fr}_{q}\) on \(H^{0}(\overline{\mathcal{X}},\mathbb{Q}_{\ell})\) is trivial, we get by Poincare duality that \(\operatorname{Fr}_{q}\) acts on \(H^{2\dim\mathcal{X}}_{c}(\overline{\mathcal{X}},\mathbb{Q}_{\ell})\) by multiplication by \(q^{\dim\mathcal{X}}\). It follows that \(\mathfrak{a}_{n}(\mathcal{X})=c_{(0),n}\). This proves (1). Assume now that \(g\geq 3\). From the work of Johnson and Hain we know that \(H^{1}(\mathcal{M}_{g},\mathbb{V}_{\lambda})\) is nonzero if and only if \(\lambda=(1,1,1)\); see [10], [11] and [12, Th. 4.1 and Cor. 4.2]. In these references, it is the rational Betti cohomology group of \(\mathcal{M}_{g}\) over the complex numbers that is considered. Furthermore, \(H^{1}(\mathcal{M}_{g}\otimes\overline{\mathbb{F}}_{q},\mathbb{V}_{(1,1,1)})\) is one-dimensional and generated by the Gross-Schoen cycle, see [17, Rem. 12.1], which lives in the second Chow group, see [17, Ex. 6.4]. Since this result also holds in \(\ell\)-adic cohomology, as noted in [17, §1.2], the action of \(\operatorname{Fr}_{q}\) on this cohomology group is by multiplication by \(q^{2}\). Recall that \(\dim\mathcal{M}_{g}=3g-3\). By Poincare duality we find that the action of \(\operatorname{Fr}_{q}\) on \(H^{6g-7}_{c}(\mathcal{M}_{g}\otimes\overline{\mathbb{F}}_{q},\mathbb{V}_{(1,1,1)})\) is by \(q^{3g-3+3-2}\). We can now conclude the following. If \(n\) is even then \(c_{(1,1,1),n}=0\), and so every eigenvalue of Frobenius contributing to \(q^{3g-3+n/2}c_{(0),n}-S_{n}(q,\mathcal{M}_{g})\) has absolute value at most \(q^{3g-4+n/2}\). If \(n\) is odd then \(c_{(0),n}=0\), and so there are no eigenvalues of Frobenius contributing to \(S_{n}(q,\mathcal{M}_{g})\) of absolute value \(q^{3g-3+n/2}\) and we can conclude by the above that \(\mathfrak{b}_{n}(\mathcal{M}_{g})=c_{(1,1,1),n}\). This proves (2). Because of the hyperelliptic involution, \(H^{i}_{c}(\mathcal{M}_{2},\mathbb{V}_{\lambda})=0\) for all \(\lambda\) such that \(|\lambda|\) is odd. Moreover, \(H^{1}(\mathcal{M}_{2},\mathbb{V}_{\lambda})\) is nonzero precisely when \(\lambda=(2,2)\). It is then one-dimensional and \(\mathrm{Fr}_{q}\) acts by multiplication by \(q^{3}\). This follows from results of [14, 14] and will be explained in more detail in forthcoming work by Petersen and Tommasi. By Poincare duality, \(\mathrm{Fr}_{q}\) acts on \(H^{5}_{c}(\mathcal{M}_{2},\mathbb{V}_{2,2})\) by multiplication by \(q^{3+4-3}\). Hence, for all even \(n\), every eigenvalue of Frobenius contributing to \(q^{3+n/2}c_{(0),n}-S_{n}(q,\mathcal{M}_{2})\) has absolute value at most \(q^{3+(n-2)/2}\). This proves (3).
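(As a concrete illustration of (1) and (2) for small \(n\) — a standard exercise with the symplectic Pieri rule, assuming \(g\geq 3\) so that \(V_{(1,1,1)}\neq 0\): starting from \(V^{\otimes 2}\cong V_{(2)}\oplus V_{(1,1)}\oplus\mathbf{1}\) and tensoring once more with \(V\), \[V^{\otimes 3}\cong V_{(3)}\oplus 2\,V_{(2,1)}\oplus V_{(1,1,1)}\oplus 3\,V_{(1)},\] so that \(\mathfrak{a}_{2}(\mathcal{M}_{g})=1\), \(\mathfrak{a}_{3}(\mathcal{M}_{g})=0\) and \(\mathfrak{b}_{3}(\mathcal{M}_{g})=1\).)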
Statement (4) is only a reformulation of the properties of \(\mathfrak{a}_{n}(\mathcal{M}_{g})\) and \(\mathfrak{b}_{n}(\mathcal{M}_{g})\) proven above. Finally, for every \(k\geq 1\), put \(p_{k}(x_{1},\ldots,x_{g}):=\sum_{i=1}^{g}(x_{i}^{k}+x_{i}^{-k})\). The polynomial \(\mathbf{s}_{\langle(1,1,1)\rangle}(x_{1},\ldots,x_{g})\) equals \[\frac{1}{6}p_{1}^{3}-\frac{1}{2}p_{1}p_{2}+\frac{1}{3}p_{3}-p_{1}.\] The irreducible representations of \(\mathrm{USp}_{2g}\) are self-dual. As a consequence, if \(U\) is a representation of \(\mathrm{USp}_{2g}\) then the number of times the representation \(V_{\lambda}\) appears in \(U\) equals the number of times the trivial representation appears in \(V_{\lambda}\otimes U\). If \(A\in\mathrm{USp}_{2g}\) has eigenvalues \(\alpha_{1},\ldots,\alpha_{g},\alpha_{1}^{-1},\ldots,\alpha_{g}^{-1}\), with \(\alpha_{j}=e^{i\theta_{j}}\) for \(j=1,\ldots,g\), then \(p_{k}(\alpha_{1},\ldots,\alpha_{g})=w_{k}(\theta_{1},\ldots,\theta_{g})\). Statement (5) now follows from (2). _Remark 2.4_.: Why did we not define \(\mathfrak{b}_{n}\) for \(\mathcal{M}_{1,1}\)? For every prime \(p\) and \(n>0\) it follows from [14] (see also [1, §2]) that \[\sum_{j=0}^{2}(-1)^{j}\operatorname{Tr}(\mathrm{Fr}_{p},H^{j}_{c}(\mathcal{M}_{1,1}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{(n)}))=-\operatorname{Tr}(\mathrm{Fr}_{p},H^{1}_{c}(\mathcal{M}_{1,1}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{(n)}))=-1-\operatorname{Tr}(T_{p},\mathbf{S}_{n+2}),\] where \(T_{p}\) is the \(p\)th Hecke operator acting on \(\mathbf{S}_{n+2}\), the (complex) vector space of elliptic modular cusp forms of level \(1\) and weight \(n+2\). Moreover, for every prime power \(q\), the eigenvalues of \(\mathrm{Fr}_{q}\) acting on \(H^{1}_{c}(\mathcal{M}_{1,1}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{(n)})\) will have absolute value \(q^{(n+1)/2}\). It is in general not clear that the limit \[-\lim_{q\to\infty}\sqrt{q}\left(\frac{S_{n}(q,\mathcal{M}_{1,1})}{q^{1+n/2}}-\mathfrak{a}_{n}(\mathcal{M}_{1,1})\right), \tag{2.3}\] which would be the way to define \(\mathfrak{b}_{n}(\mathcal{M}_{1,1})\), always exists when \(n\) is even. (For odd \(n\), \(S_{n}(q,\mathcal{M}_{1,1})=0\), hence the limit (2.3) will be \(0\).) For even \(0\leq n\leq 8\), the limit (2.3) is also \(0\) since there are no elliptic cusp forms of level \(1\) and weight less than or equal to \(10\). We then have that \(S_{10}(p,\mathcal{M}_{1,1})=42p^{6}-\operatorname{Tr}(T_{p},\mathbf{S}_{12})+O(p^{5})\) and \(S_{12}(p,\mathcal{M}_{1,1})=132p^{7}-11p\cdot\operatorname{Tr}(T_{p},\mathbf{S}_{12})+O(p^{6})\). The so-called Frobenius angle, \(0\leq\varphi_{p}\leq\pi\), of the Hecke eigenform (the Ramanujan \(\Delta\) function) in the one-dimensional space \(\mathbf{S}_{12}\) is defined by \(a_{p}:=\operatorname{Tr}(T_{p},\mathbf{S}_{12})=2p^{11/2}\cos\varphi_{p}\). The Sato-Tate conjecture for \(\Delta\) (proven in [1]) then tells us that there are sequences of primes \(p^{\prime}_{1},p^{\prime}_{2},\ldots\) and \(p^{\prime\prime}_{1},p^{\prime\prime}_{2},\ldots\) such that the Frobenius angles of \(a_{p^{\prime}_{1}},a_{p^{\prime}_{2}},\ldots\) (respectively \(a_{p^{\prime\prime}_{1}},a_{p^{\prime\prime}_{2}},\ldots\)) are all between \(0\) and \(\pi/3\) (respectively \(2\pi/3\) and \(\pi\)). This implies that the limit (2.3) does not exist for \(n=10\) and \(n=12\). It is unlikely to exist for even \(n>12\), but the limit will then involve an interplay between different Hecke eigenforms. In [1, Th.
3.9] it is shown that for fixed \(g\) we have \[\lim_{n\to\infty}\mathfrak{a}_{2n}(\mathcal{M}_{g})^{1/(2n)}=2g.\] In the remainder of this section we prove a similar result for \(\mathfrak{b}_{2n+1}(\mathcal{M}_{g})\). **Proposition 2.5**.: _For fixed \(g\geq 3\) one has_ \[\lim_{n\to\infty}\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}=2g.\] Proof.: Consider the functions \(w_{1}\) and \(f:=\frac{1}{6}w_{1}^{3}-\frac{1}{2}w_{1}w_{2}+\frac{1}{3}w_{3}-w_{1}\) on \(X:=[0,\pi]^{g}\). The maximum value of \(|w_{1}|\) is attained at exactly two points in \(X\), namely the points \(x:=(0,\ldots,0)\) and \(y:=(\pi,\ldots,\pi)\). We have \(w_{1}(x)=2g\) and \(w_{1}(y)=-2g\), and we also have \(f(x)=(2/3)(2g^{3}-3g^{2}-2g)>0\) and \(f(y)=(-2/3)(2g^{3}-3g^{2}-2g)<0\). Let \(V\) be the (open) subset of \(X\) where \(w_{1}f>0\), so that \(x\) and \(y\) both lie in \(V\), and let \(W=X\setminus V\). Let \(M\) be the supremum of \(|w_{1}|\) on \(W\), so that \(M<2g\). For \(\varepsilon\in(0,2g-M)\) let \(U_{\varepsilon}\) be the subset of \(X\) where \(|w_{1}|>2g-\varepsilon\), so that \(U_{\varepsilon}\subset V\), and let \(V_{\varepsilon}=V\setminus U_{\varepsilon}\). Let \(\varepsilon\) be an element of \((0,2g-M)\). For every \(n\) we have \[\mathfrak{b}_{2n+1}(\mathcal{M}_{g}) =\int_{X}w_{1}^{2n+1}f\,dm_{g}\] \[=\int_{U_{\varepsilon}}w_{1}^{2n+1}f\,dm_{g}+\int_{V_{\varepsilon }}w_{1}^{2n+1}f\,dm_{g}+\int_{W}w_{1}^{2n+1}f\,dm_{g}\] \[\geq\int_{U_{\varepsilon}}w_{1}^{2n+1}f\,dm_{g}+\int_{W}w_{1}^{2n +1}f\,dm_{g}\] \[\geq(2g-\varepsilon)^{2n+1}\int_{U_{\varepsilon}}|f|\,dm_{g}-M^{2 n+1}\int_{W}|f|\,dm_{g},\] where the third line follows from the fact that \(w_{1}^{2n+1}f\) is positive on \(V_{\varepsilon}\) and the fourth follows from the bounds on \(|w_{1}|\) in \(U_{\varepsilon}\) and \(W\). Let \(A:=\int_{U_{\varepsilon}}|f|\,dm_{g}\) and \(B:=\int_{W}|f|\,dm_{g}.\) Then \[\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\geq(2g-\varepsilon)\bigg{(}A- \Big{(}\frac{M}{2g-\varepsilon}\Big{)}^{2n+1}B\bigg{)}^{1/(2n+1)},\] and the rightmost factor tends to \(1\) as \(n\to\infty\). Therefore, \(\liminf\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\geq 2g\). We also have \[\mathfrak{b}_{2n+1}(\mathcal{M}_{g}) =\int_{U_{\varepsilon}}w_{1}^{2n+1}f\,dm_{g}+\int_{X\setminus U_{ \varepsilon}}w_{1}^{2n+1}f\,dm_{g}\] \[\leq(2g)^{2n+1}\int_{U_{\varepsilon}}|f|\,dm_{g}+(2g-\varepsilon )^{2n+1}\int_{X\setminus U_{\varepsilon}}|f|\,dm_{g},\] so if we let \(C:=\int_{X}|f|\,dm_{g}\) then \(\mathfrak{b}_{2n+1}(\mathcal{M}_{g})\leq(2g)^{2n+1}A+(2g-\varepsilon)^{2n+1}C\), so \[\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\leq 2g\bigg{(}A+\Big{(}\frac{2g- \varepsilon}{2g}\Big{)}^{2n+1}C\bigg{)}^{1/(2n+1)}.\] Once again the rightmost factor tends to \(1\) as \(n\to\infty\), so \(\limsup\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\leq 2g\), and the proposition is proven. ## 3. Convergence of moments of the measures \(\mu_{q,g}\) Let \(\mathcal{M}_{g}^{\prime}(\mathbb{F}_{q})\) be the set of \(\mathbb{F}_{q}\)-isomorphism classes of curves of genus \(g>1\) over \(\mathbb{F}_{q}\). If \(g=1\), we abuse notation and let \(\mathcal{M}_{1}=\mathcal{M}_{1,1}\) be the moduli space of elliptic curves and \(\mathcal{M}_{1}^{\prime}(\mathbb{F}_{q})\) the set of \(\mathbb{F}_{q}\)-isomorphism classes of elliptic curves over \(\mathbb{F}_{q}\). 
Define a measure \(\mu_{q,g}\) by \[\mu_{q,g}:=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{C\in\mathcal{M}_{g}^{\prime}(\mathbb{F}_{q})}\frac{\delta_{\tau(C)}}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}\,,\] where \(\tau(C):=\operatorname{Tr}(C)/\sqrt{q}\) is the _normalized trace_ of \(C\) and \(\delta_{\tau(C)}\) is the Dirac \(\delta\) measure supported at \(\tau(C)\). We see that \(\mu_{q,g}\) is a discrete probability measure on \(I_{g}=[-2g,2g]\), since \[\mu_{q,g}(I_{g})=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{C\in\mathcal{M}_{g}^{\prime}(\mathbb{F}_{q})}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{[C]\in\mathcal{M}_{g}(\mathbb{F}_{q})}\underbrace{\sum_{C^{\prime}\in\operatorname{Twist}(C)}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C^{\prime})}}_{=1\text{ by the mass formula for twists}}=1.\] For \(\tau\in I_{g}\) we write \(\mathcal{N}_{q,g}(\tau):=\mu_{q,g}(\{\tau\})\); for an integer \(t\), \(\mathcal{N}_{q,g}(t/\sqrt{q})\) is thus the weighted proportion of curves of genus \(g\) over \(\mathbb{F}_{q}\) with trace \(t\). _Remark 3.2_.: The \(O(q^{-1})\) convergence of the moments cannot be extended to a statement of the form \(\int f\,d\mu_{q,g}=\int f\,\mathfrak{f}_{g}\,d\tau+O(q^{-1})\) for all continuous functions \(f\). Consider for instance: * a plateau function: take a piecewise linear function equal to \(1\) on \((-1/\sqrt{q}+1/q,1/\sqrt{q}-1/q)\) and \(0\) on \((-\infty,-1/\sqrt{q}]\cup[1/\sqrt{q},\infty)\); * a signal function: zero everywhere except for a
small triangle with vertices \((-1/\sqrt{q},0),(0,1)\) and \((1/\sqrt{q},0)\). Such a stronger convergence would lead to the convergence of \(\sqrt{q}\cdot\mathcal{N}_{q,g}(0)\) to \(2\mathfrak{f}_{g}(0)\) in the first case and to \(\mathfrak{f}_{g}(0)\) in the second case. Indeed, in both cases \(\int_{I_{g}}f\,d\mu_{q,g}=\mathcal{N}_{q,g}(0)\) and we can write \(\mathfrak{f}_{g}(\tau)=\mathfrak{f}_{g}(0)+(\mathfrak{f}_{g}(\tau)-\mathfrak{f}_{g}(0))\) with \(|\mathfrak{f}_{g}(\tau)-\mathfrak{f}_{g}(0)|\leq c|\tau|\) with \(c\geq 0\) a constant when \(|\tau|\) is small enough. For instance, in the second case, rewriting the right-hand side gives \[\int_{I_{g}}f(\tau)\mathfrak{f}_{g}(\tau)\,d\tau=\mathfrak{f}_{g}(0)\underbrace{\int_{-1/\sqrt{q}}^{1/\sqrt{q}}f(\tau)\,d\tau}_{=1/\sqrt{q}}+\int_{-1/\sqrt{q}}^{1/\sqrt{q}}f(\tau)(\mathfrak{f}_{g}(\tau)-\mathfrak{f}_{g}(0))\,d\tau+O\Big{(}\frac{1}{q}\Big{)}.\] But \[\left|\int_{-1/\sqrt{q}}^{1/\sqrt{q}}f(\tau)(\mathfrak{f}_{g}(\tau)-\mathfrak{f}_{g}(0))\,d\tau\right|\leq c\int_{-1/\sqrt{q}}^{1/\sqrt{q}}|\tau|\,d\tau=O\Big{(}\frac{1}{q}\Big{)}.\] Multiplying both sides by \(\sqrt{q}\) gives the announced results. ## 4. The elliptic and hyperelliptic cases: results and experiments Katz-Sarnak results show that for every interval \(J\subseteq I_{g}\), the probability that a random curve of genus \(g\) over \(\mathbb{F}_{q}\) (or a random hyperelliptic curve of genus \(g\) over \(\mathbb{F}_{q}\)) has normalized trace in \(J\) tends towards a fixed value as \(q\to\infty\), this value being \(\int_{J}\mathfrak{f}_{g}(\tau)\,d\tau\), where \(\mathfrak{f}_{g}\) is the density function for the measure \(\mu_{g}\) defined at the beginning of Section 3. Here the interval \(J\) is fixed, and we let \(q\) tend to infinity. One can wonder how rapid this convergence is. For instance, suppose the interval \(J\) has length \(x\). How large must \(q\) become in order for the actual probability that a normalized trace lies in \(J\) to be well-approximated by the Katz-Sarnak prediction? Could it even be the case that the approximation is reasonably good when \(q\) is as large as \(1/x^{2}\), so that \(x\approx 1/\sqrt{q}\) and there is exactly one integer \(t\) with \(t/\sqrt{q}\in J\)? In other words, can we use the Katz-Sarnak distribution to estimate the number of curves over \(\mathbb{F}_{q}\) with a given trace? Since the measures \(\mu_{q,g}\) converge weakly to \(\mu_{g}\), one might hope that for every \(\tau\in I_{g}\), the integral of \(\mu_{q,g}\) over an interval of length \(1/\sqrt{q}\) containing \(\tau\) would be close to the integral of \(\mu_{g}\) over this interval. If we let \(t\) be the unique integer such that \(t/\sqrt{q}\) is contained in this interval, this optimistic approximation then translates to \[\mathcal{N}_{q,g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\approx\frac{1}{\sqrt{q}}\mathfrak{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}.\] Since \(\mathcal{N}_{q,g}(t/\sqrt{q})\) gives us the weighted proportion of curves with trace \(t\), if this approximation is close to the truth we would have a good estimate for the number of such curves. For hyperelliptic curves, we can prove that this type of naive approximation cannot hold.
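Before the hyperelliptic statement, the elliptic case (\(g=1\)) is easy to explore by brute force. The sketch below (plain Python; the prime \(p=101\) is an arbitrary small choice) computes the weighted counts exactly: for \(p>3\), each \(\mathbb{F}_{p}\)-isomorphism class \(E\) arises from exactly \((p-1)/\#\operatorname{Aut}_{\mathbb{F}_{p}}(E)\) short Weierstrass pairs \((a,b)\), so counting pairs by trace and dividing by \(p-1\) gives \(\sum_{E:\operatorname{Tr}(E)=t}1/\#\operatorname{Aut}_{\mathbb{F}_{p}}(E)\). Since \(\#\mathcal{M}_{1,1}(\mathbb{F}_{p})=p\), this can be compared with the naive prediction \(\sqrt{p}\,\mathfrak{f}_{1}(t/\sqrt{p})\), where \(\mathfrak{f}_{1}(\tau)=\frac{1}{2\pi}\sqrt{4-\tau^{2}}\) is the \(g=1\) (Sato-Tate) density.

```python
import math

p = 101  # small prime, p > 3 (arbitrary choice)
squares = {(x * x) % p for x in range(p)}

counts = {}  # trace t -> number of pairs (a, b) with that trace
for a in range(p):
    for b in range(p):
        if (4 * a**3 + 27 * b**2) % p == 0:
            continue  # discriminant zero: singular curve, skip
        npts = 1  # the point at infinity
        for x in range(p):
            rhs = (x**3 + a * x + b) % p
            if rhs == 0:
                npts += 1        # one point with y = 0
            elif rhs in squares:
                npts += 2        # two points y = +/- sqrt(rhs)
        t = p + 1 - npts
        counts[t] = counts.get(t, 0) + 1

f1 = lambda tau: math.sqrt(max(4.0 - tau * tau, 0.0)) / (2.0 * math.pi)
for t in sorted(counts):
    weighted = counts[t] / (p - 1)              # = sum over classes of 1/#Aut
    naive = math.sqrt(p) * f1(t / math.sqrt(p))
    print(f"t = {t:+3d}: weighted {weighted:7.3f}   naive {naive:7.3f}")
```

The weighted counts fluctuate visibly around the naive prediction from one trace to the next (by Deuring's theorem they are class numbers of imaginary quadratic orders), which is the phenomenon quantified in Proposition 4.4 and visible in Figure 3.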
To state our result precisely, we introduce a function \(\mathcal{N}_{q,g}^{\text{hyp}}(\tau)\), which we define analogously to how we defined \(\mathcal{N}_{q,g}(\tau)\): \[\mathcal{N}_{q,g}^{\text{hyp}}(\tau):=\frac{1}{\#\mathcal{H}_{g}(\mathbb{F}_{ q})}\sum_{\begin{subarray}{c}C\in\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\\ \tau(C)=\tau\end{subarray}}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}.\] Here by \(\mathcal{H}_{g}(\mathbb{F}_{q})\) we mean the set of \(\overline{\mathbb{F}}_{q}\)-isomorphism classes of hyperelliptic curves of genus \(g\) over \(\mathbb{F}_{q}\), and by \(\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\) we mean the set of \(\mathbb{F}_{q}\)-isomorphism classes of such curves. Note that for an integer \(t\) with \(t/\sqrt{q}\in I_{g}\), the value \(q^{2g-1}\mathcal{N}_{q,g}^{\text{hyp}}(t/\sqrt{q})\) is then the weighted number of genus-\(g\) hyperelliptic curves over \(\mathbb{F}_{q}\) with trace \(t\). **Proposition 4.1**.: _Fix \(g>1\) and \(\varepsilon\in[0,2g)\), let \(r_{g}:=\sum_{i=0}^{2g+2}(-2)^{i}/i!\), and let \(v=\int_{2g-\varepsilon}^{2g}\mathfrak{f}_{g}(\tau)\,d\tau\). Suppose there are constants \(b_{g}\leq c_{g}\) such that for every sufficiently large prime power \(q\) and for every integer \(t\) in \([-(2g-\varepsilon)\sqrt{q},(2g-\varepsilon)\sqrt{q}\,]\), we have_ \[\frac{b_{g}}{\sqrt{q}}\mathfrak{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\leq \mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\leq\frac{ c_{g}}{\sqrt{q}}\mathfrak{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}.\] _Then \(b_{g}\leq(1-r_{g})/(1-2v)\) and \(c_{g}\geq(1+r_{g}-4v)/(1-2v)\)._ The proof is based on the following lemma. **Lemma 4.2**.: _Fix \(g>1\), and let \(r_{g}\) be as in Proposition 4.1. If \(q\) is an odd prime power then_ \[\sum_{t\,\mathrm{even}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{t}{ \sqrt{q}}\bigg{)}=\frac{1+r_{g}}{2}+O\Big{(}\frac{1}{q}\Big{)}\quad\text{and} \quad\sum_{t\,\mathrm{odd}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{t} {\sqrt{q}}\bigg{)}=\frac{1-r_{g}}{2}+O\Big{(}\frac{1}{q}\Big{)}.\] Proof.: Fix an odd prime power \(q\), fix a nonsquare \(n\in\mathbb{F}_{q}\), and consider the set \(H\) consisting of all pairs \((c,f)\), where \(c\in\{1,n\}\) and \(f\in\mathbb{F}_{q}[x]\) is a monic separable polynomial of degree \(2g+1\) or \(2g+2\). A result of Carlitz [10, §6] shows that \(\#H=2q^{2g+2}-2q^{2g}.\) The group \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\) acts on \(H\): given a matrix \([\begin{smallmatrix}r&s\\ t&u\end{smallmatrix}]\) and an element \((c,f)\) of \(H\), let \((d,h)\) be the unique element of \(H\) such that \[d\,h(x)=c\,e^{2}(tx+u)^{2g+2}f\Big{(}\frac{rx+s}{tx+u}\Big{)}\] for some \(e\in\mathbb{F}_{q}^{\times}.\) Note that the stabilizer of \((c,f)\) is isomorphic to the reduced automorphism group \(\mathrm{RedAut}(C)\) of the hyperelliptic curve \(C\colon y^{2}=cf\), that is, the quotient of the full automorphism group of \(C\) by the subgroup generated by the hyperelliptic involution. The map \(\gamma\) that sends \((c,f)\in H\) to the hyperelliptic curve \(y^{2}=cf\) takes \(H\) onto \(\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\). Given a curve \(C\in\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\), let \((c,f)\in H\) be such that \(\gamma((c,f))=C\). Then \[\#(\mathrm{PGL}_{2}(\mathbb{F}_{q})\cdot(c,f))=\frac{\#\,\mathrm{PGL}_{2}( \mathbb{F}_{q})}{\#\,\mathrm{RedAut}(C)},\] so that \[\frac{\#\gamma^{-1}(C)}{\#\,\mathrm{PGL}_{2}(\mathbb{F}_{q})}=\frac{1}{\#\, \mathrm{RedAut}(C)}=\frac{2}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}. 
\tag{4.1}\] Let \(H_{\mathrm{even}}\) be the subset of \(H\) consisting of the pairs \((c,f)\) such that the curve \(\gamma(c,f)\) has even trace. Let \(H_{\mathrm{even}}^{\prime}\) be the subset of \(H\) consisting of the pairs \((c,f)\) such that \(f\) has degree \(2g+2\) and has an even number of roots. Then \(H_{\mathrm{even}}^{\prime}\subseteq H_{\mathrm{even}}\), and \(H_{\mathrm{even}}\setminus H_{\mathrm{even}}^{\prime}\) consists of pairs \((c,f)\in H_{\mathrm{even}}\) such that \(f\) has degree \(2g+1\). Therefore \[\big{|}\#H_{\mathrm{even}}-\#H_{\mathrm{even}}^{\prime}\big{|}\leq 2q^{2g+1}.\] Leont\({}^{\prime}\)ev [13, Lem. 4, p. 302] gives the generating function for the number of (not necessarily separable) monic polynomials of a fixed degree over \(\mathbb{F}_{q}\) that have a given number of roots. To find the number of such polynomials with an even number of roots, we simply need to take the average of the values of this generating function evaluated at \(-1\) and at \(1\). We find that \[\#\left\{\begin{aligned} &\text{monic polynomials of degree $2g+2$}\\ &\text{over $\mathbb{F}_{q}$ with an even number of roots}\end{aligned}\right\}=\frac{1+r_{g}}{2}q^{2g+2}+O(q^{2g+1}).\] The result of Carlitz mentioned earlier shows that \[\#\left\{\begin{aligned} &\text{non-separable monic polynomials}\\ &\text{of degree $2g+2$ over $\mathbb{F}_{q}$}\end{aligned}\right\}=q^{2g+1}.\] Therefore \(\#H_{\mathrm{even}}^{\prime}=(1+r_{g})q^{2g+2}+O(q^{2g+1})\), so that \(\#H_{\mathrm{even}}=(1+r_{g})q^{2g+2}+O(q^{2g+1})\) as well. Using (4.1) we see that \[\begin{split}\sum_{t\,\mathrm{even}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}&=\frac{1}{\#\mathcal{H}_{g}(\mathbb{F}_{q})}\sum_{\begin{subarray}{c}C\in\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\\ \mathrm{Tr}(C)\,\mathrm{even}\end{subarray}}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}\\ &=\frac{1}{\#\mathcal{H}_{g}(\mathbb{F}_{q})}\sum_{\begin{subarray}{c}C\in\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\\ \mathrm{Tr}(C)\,\mathrm{even}\end{subarray}}\frac{\#\gamma^{-1}(C)}{2\#\operatorname{PGL}_{2}(\mathbb{F}_{q})}\\ &=\frac{\#H_{\mathrm{even}}}{2\#\mathcal{H}_{g}(\mathbb{F}_{q})\#\operatorname{PGL}_{2}(\mathbb{F}_{q})}\\ &=\frac{1}{2q^{2g-1}(q^{3}-q)}\big{(}(1+r_{g})q^{2g+2}+O(q^{2g+1})\big{)}\\ &=\frac{1+r_{g}}{2}+O\Big{(}\frac{1}{q}\Big{)}.\end{split}\] This gives us the first equality in the conclusion of the lemma. The second follows analogously. Proof of Proposition 4.1.: Suppose the hypothesis of the proposition holds for a given \(g\) and \(\varepsilon\). For a given \(q\), we let \(m=\lfloor 2\sqrt{q}\rfloor\) and we consider several subintervals of \([-2g\sqrt{q},2g\sqrt{q}]\): \[J_{0} :=\big{[}-mg,mg\big{]} \qquad J_{2} :=\big{[}-2g\sqrt{q},-(2g-\varepsilon)\sqrt{q}\big{)}\] \[J_{1} :=\big{[}-(2g-\varepsilon)\sqrt{q},(2g-\varepsilon)\sqrt{q}\, \big{]} \qquad J_{3} :=\big{(}(2g-\varepsilon)\sqrt{q},2g\sqrt{q}\,\big{]}.\] Now we interpret the sum \[S_{\mathrm{even}}:=\sum_{t\,\mathrm{even}}\mathcal{N}_{q,g}^{\mathrm{hyp}} \bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\] in two ways. 
On the one hand, from Lemma 4.2 we have \[S_{\mathrm{even}}=\bigg{(}\frac{1+r_{g}}{2}\bigg{)}+O\Big{(}\frac{1}{q}\Big{)}\,.\] On the other hand, for \(q\) large enough we have \[\begin{split}S_{\mathrm{even}} &=\sum_{\begin{subarray}{c}t\in J_{1}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}+\sum_{\begin{subarray}{c}t\in J_{2}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}+\sum_{\begin{subarray}{c}t\in J_{3}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}\\ &=\sum_{\begin{subarray}{c}t\in J_{1}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}+2\sum_{\begin{subarray}{c}t\in J_{3}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}\\ &\leq\frac{c_{g}}{2}\sum_{\begin{subarray}{c}t\in J_{1}\\ t\,\mathrm{even}\end{subarray}}\mathfrak{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)} \bigg{(}\frac{2}{\sqrt{q}}\bigg{)}+2\sum_{t\in J_{3}}\mathcal{N}_{q,g}^{ \mathrm{hyp}}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\,,\end{split} \tag{4.2}\] where in the second equality we used that \(\mathcal{N}_{q,g}^{\mathrm{hyp}}(-\tau)=\mathcal{N}_{q,g}^{\mathrm{hyp}}(\tau)\) (twisting by the hyperelliptic involution reverses the sign of the trace), so the sums over \(J_{2}\) and \(J_{3}\) coincide. The first sum in (4.2) is a Riemann sum for the integral of \(\mathfrak{f}_{g}(\tau)\,d\tau\) over the interval \([-2g+\varepsilon,2g-\varepsilon]\), so as \(q\to\infty\) the first term in (4.2) approaches \(c_{g}(1-2v)/2\). The second sum is the mass that the hyperelliptic analogue of \(\mu_{q,g}\) assigns to the interval \([2g-\varepsilon,2g]\). Since these measures also converge weakly to \(\mu_{g}\), the second term of (4.2) approaches \(2v\) as \(q\to\infty\). Combining these two interpretations of \(S_{\mathrm{even}}\), we find that \[\bigg{(}\frac{1+r_{g}}{2}\bigg{)}\leq\frac{c_{g}(1-2v)}{2}+2v\] so that \(c_{g}\geq(1+r_{g}-4v)/(1-2v)\). Similarly, we can consider the sum \[S_{\mathrm{odd}}:=\sum_{t\,\mathrm{odd}}\mathcal{N}_{q,g}^{\mathrm{hyp}} \bigg{(}\frac{t}{\sqrt{q}}\bigg{)}.\] From Lemma 4.2 we see that \[S_{\rm odd}=\left(\frac{1-r_{g}}{2}\right)+O\Big{(}\frac{1}{q}\Big{)}\,.\] But we also have \[S_{\rm odd}\geq\frac{b_{g}}{2}\sum_{\begin{subarray}{c}t\in J_{1}\\ t\ {\rm odd}\end{subarray}}\mathfrak{f}_{g}\Big{(}\frac{t}{\sqrt{q}}\Big{)}\Big{(} \frac{2}{\sqrt{q}}\Big{)},\] and the expression on the right approaches \(b_{g}(1-2v)/2\) as \(q\to\infty\). This shows that \[\left(\frac{1-r_{g}}{2}\right)\geq\frac{b_{g}(1-2v)}{2},\] so we find that \(b_{g}\leq(1-r_{g})/(1-2v)\). _Remark 4.3_.: In the statement of Proposition 4.1, we only assume that the condition on \(\mathcal{N}_{q,g}^{\rm hyp}(t/\sqrt{q})\) holds for \(t\) more than \(\varepsilon\sqrt{q}\) away from the ends of the interval \([-2g\sqrt{q},2g\sqrt{q}]\) because when \(|t|>g\lfloor 2\sqrt{q}\rfloor\) we have \(\mathcal{N}_{q,g}^{\rm hyp}(t/\sqrt{q})=0\). If we did not exclude the tail ends of the interval, the hypothesis of the proposition would only hold if we took \(b_{g}=0\), which is not an interesting approximation. Figure 1 shows the value of \(\mathcal{N}_{q,g}^{\text{hyp}}(t/\sqrt{q})\) for all integers \(t\in[-4\sqrt{q},4\sqrt{q}]\), where \(q=1009\), together with the density function \(\mathfrak{f}_{2}\) for the limiting Katz-Sarnak measure, scaled by the two factors \(b=38/45\) and \(c=52/45\) given by Proposition 4.1 for \(g=2\) and \(\varepsilon=0\). The key to Proposition 4.1 is the imbalance between the likelihood of even versus odd traces for hyperelliptic curves. 
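The constants appearing above are easy to reproduce. The script below is our own companion check, not code from the paper: it recomputes \(r_{2}=7/45\) and the bounds \(b=38/45\), \(c=52/45\) quoted for Figure 1, and then brute-forces over small prime fields the proportion of monic polynomials of degree \(2g+2\) with an even number of roots, which by Lemma 4.2 should be close to \((1+r_{g})/2\); the agreement is only up to \(O(1/q)\), so for such small \(q\) it is rough.

```python
# Companion check (ours) for Proposition 4.1 and Lemma 4.2.
from fractions import Fraction
from itertools import product
from math import factorial

def r(g):  # r_g = sum_{i=0}^{2g+2} (-2)^i / i!
    return sum(Fraction((-2) ** i, factorial(i)) for i in range(2 * g + 3))

g = 2
rg = r(g)
print("r_2        =", rg)          # expect 7/45
print("bound on b =", 1 - rg)      # expect 38/45 (eps = 0, hence v = 0)
print("bound on c =", 1 + rg)      # expect 52/45

# Brute-force the proportion of monic degree-(2g+2) polynomials over F_q
# with an even number of roots in F_q; Lemma 4.2 predicts (1 + r_g)/2 + O(1/q).
d = 2 * g + 2
for q in (5, 7):                   # small primes keep the loop cheap
    powers = [[pow(a, i, q) for i in range(d)] for a in range(q)]
    even = 0
    for coeffs in product(range(q), repeat=d):   # f = x^d + sum_i c_i x^i
        n_roots = sum(
            1 for a in range(q)
            if (pow(a, d, q) + sum(c * p for c, p in zip(coeffs, powers[a]))) % q == 0
        )
        even += (n_roots % 2 == 0)
    print(f"q={q}: observed {even / q**d:.4f}, predicted {float((1 + rg) / 2):.4f}")
```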
The obvious workaround would be to scale the counts for the even and odd traces by the factors given in the proposition for \(\varepsilon=0\). One can ask whether the scaled curve counts then better match the limiting Katz-Sarnak distribution. Figure 2 suggests that perhaps this parity factor is the main obstruction to obtaining decent estimates from the naive Katz-Sarnak approximation. The proof of Proposition 4.1 carries through for elliptic curves exactly as it does for hyperelliptic curves of a given genus \(g>1\). We do not include genus-1 curves in the statement of the proposition, however, because as we will see in Proposition 4.4, for \(g=1\) there is no value of \(c_{1}\) that satisfies the hypothesis of the proposition when \(\varepsilon\leq 1\), while the conclusion of the proposition is trivial when \(\varepsilon>1\) because the resulting upper bound on \(b_{1}\) will be greater than \(1\) and the lower bound on \(c_{1}\) will be less than \(1\). When \(g=1\), the density function of the limiting Katz-Sarnak measure on \(I_{1}\) is \(\mathfrak{f}_{1}=(2\pi)^{-1}\sqrt{4-\tau^{2}}\). Let \(N_{q,t}\) denote the weighted number of elliptic curves over \(\mathbb{F}_{q}\) with trace \(t\). For some values of \(t\) in \([-2\sqrt{q},2\sqrt{q}\,]\) we have \(N_{q,t}=0\); in addition to those \(t\) with \(|t|>\lfloor 2\sqrt{q}\rfloor\), this happens for most values of \(t\) that are not coprime to \(q\). But even if we exclude these values, and even if we restrict attention to values of \(t\) that are near the center of the interval \([-2\sqrt{q},2\sqrt{q}\,]\), the following proposition shows that we cannot hope to approximate \(N_{q,t}\) by the quantity \[q^{1/2}\,\mathfrak{f}_{1}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}=\frac{1}{2\pi} \sqrt{4q-t^{2}}\,.\] **Proposition 4.4**.: _For every \(c>0\), there are infinitely many values of \(q\) and \(t\) such that \(|t|\leq\sqrt{q}\) and \(N_{q,t}>c\sqrt{4q-t^{2}}\)._ Proof.: Let \(\Delta_{0}\) be a fundamental quadratic discriminant with \(\Delta_{0}<-4\) and let \(\chi\) be the quadratic character modulo \(\Delta_{0}\). For a given value of \(n\), let \(f\) be the product of the first \(n\) primes \(p\) that are inert in \(\mathbb{Q}(\sqrt{\Delta_{0}})\). Since the product over all inert primes of \(1+1/p\) diverges (see [13, Lem. 1.14] and [1, p. 176]), when \(n\) is large enough we have \[\prod_{p|f}\bigg{(}1+\frac{1}{p}\bigg{)}>\frac{c\pi^{2}}{3}\frac{\sqrt{|\Delta _{0}|}}{h(\Delta_{0})}\,.\] Choose \(n\) so that this holds, and let \(q_{0}\) be a prime of the form \(x^{2}-f^{2}\Delta_{0}y^{2}\), where \(x\) and \(y\) are positive integers. Note that \(x\) must be coprime to \(q_{0}\) because \(0<x<q_{0}\). Let \(\varpi=x+fy\sqrt{\Delta_{0}}\), viewed as an element of the upper half plane. Since \(x\) is coprime to \(q_{0}\), \(\varpi\) is the Weil number of an isogeny class of ordinary elliptic curves over \(\mathbb{F}_{q_{0}}\). Let \(\theta\) be the argument of \(\varpi\) and let \(m\) be the smallest integer such that \(\pi/3\leq m\theta<2\pi/3\). Write \(\varpi^{m}=u+fv\sqrt{\Delta_{0}}\) for integers \(u\) and \(v\), let \(q=q_{0}^{m}=u^{2}-f^{2}v^{2}\Delta_{0}\), and let \(t=2u\). Then \(\varpi^{m}\) is the Weil number for an isogeny class \(\mathfrak{I}\) of ordinary elliptic curves over \(\mathbb{F}_{q}\), and the trace of this isogeny class is \(t\). We have \(|t|\leq\sqrt{q}\) because the argument of \(\varpi^{m}\) lies between \(\pi/3\) and \(2\pi/3\). 
The number of elliptic curves in the isogeny class \(\mathfrak{I}\) is equal to the Kronecker class number \(H(\Delta)\) of the discriminant \(\Delta:=t^{2}-4q=4f^{2}v^{2}\Delta_{0}\). By [11, p. 696] we have \[H(\Delta)=h(\Delta_{0})\prod_{p^{e}\parallel F}\left(1+\Big{(}1-\tfrac{\chi(p )}{p}\Big{)}(p+\cdots+p^{e})\right),\] where \(F=2fv\), so \[\frac{H(\Delta)}{\sqrt{4q-t^{2}}}=\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}} \prod_{p^{e}\parallel F}\left(p^{-e}+\Big{(}1-\tfrac{\chi(p)}{p}\Big{)}(1+p^{ -1}+\cdots+p^{1-e})\right).\] Now, \[p^{-e}+\Big{(}1-\tfrac{\chi(p)}{p}\Big{)}(1+p^{-1}+\cdots+p^{1-e})\geq\begin{cases} 1+1/p&\text{if }\chi(p)=-1;\\ 1-1/p^{2}&\text{if }\chi(p)\neq-1,\end{cases}\] so we have \[\begin{split}\frac{H(\Delta)}{\sqrt{4q-t^{2}}} &\geq\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}}\prod_{\begin{subarray}{c}p\mid F\\ \chi(p)=-1\end{subarray}}\Big{(}1+\frac{1}{p}\Big{)}\prod_{\begin{subarray}{c}p\mid F\\ \chi(p)\neq-1\end{subarray}}\Big{(}1-\frac{1}{p^{2}}\Big{)}\\ &\geq\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}}\prod_{p|f}\Big{(}1+\frac{1}{p}\Big{)}\prod_{p}\Big{(}1-\frac{1}{p^{2}}\Big{)}\\ &\geq\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}}\Big{(}\frac{c\pi^{2}}{3}\frac{\sqrt{|\Delta_{0}|}}{h(\Delta_{0})}\Big{)}\Big{(}\frac{6}{\pi^{2}}\Big{)}\\ &\geq 2c.\end{split}\] Since the curves in \(\mathfrak{I}\) are ordinary and the discriminants of their endomorphism rings are neither \(-3\) nor \(-4\), they all have automorphism groups of order \(2\), so \(N_{q,t}=H(\Delta)/2\). It follows that \[N_{q,t}\geq c\sqrt{4q-t^{2}},\] as claimed. Figure 3 shows the weighted number of elliptic curves over \(\mathbb{F}_{100003}\) of each possible trace, as well as the limiting density function \(\mathfrak{f}_{1}(\tau)=(2\pi)^{-1}\sqrt{4-\tau^{2}}\). We see that the plotted points do not appear to be near the density function. ## 5. The non-hyperelliptic case: experiments and conjectures We consider now the case of non-hyperelliptic curves of genus \(g=3\) (considering all curves of genus \(3\) would certainly show the same pattern). For this purpose, for \(g\geq 3\) we introduce the function \(\mathcal{N}_{q,g}^{\text{nhyp}}(\tau)\), which we define analogously to how we defined \(\mathcal{N}_{q,g}(\tau)\) and \(\mathcal{N}_{q,g}^{\text{hyp}}(\tau)\): \[\mathcal{N}_{q,g}^{\text{nhyp}}(\tau):=\frac{1}{\#\mathcal{M}_{g}^{\text{nhyp} }(\mathbb{F}_{q})}\sum_{\begin{subarray}{c}C\in\mathcal{M}_{g}^{\text{nhyp}\, \prime}(\mathbb{F}_{q})\\ \tau(C)=\tau\end{subarray}}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}.\] Here by \(\mathcal{M}_{g}^{\text{nhyp}}(\mathbb{F}_{q})\) we mean the set of \(\overline{\mathbb{F}}_{q}\)-isomorphism classes of non-hyperelliptic curves of genus \(g\) over \(\mathbb{F}_{q}\), and by \(\mathcal{M}_{g}^{\text{nhyp}\,\prime}(\mathbb{F}_{q})\) we mean the set of \(\mathbb{F}_{q}\)-isomorphism classes of such curves. The associated measures will still weakly converge to the measure \(\mu_{g}\) with density \(\mathfrak{f}_{g}\). But experimentally, the behavior looks much smoother than in the elliptic or hyperelliptic cases, as illustrated by Figure 4 for \(g=3\) and \(q=53\).\({}^{2}\) Heuristically, this could be understood as an averaging, for a given trace, over several isogeny classes; but this idea does not work for the hyperelliptic locus, as we have seen in Section 4, so something more is needed for a family of curves to 'behave nicely.' 
As seen in Remark 3.2, although the higher convergence rate of moments observed in Theorem 2.3 does not yield a proof of faster weak convergence, it does single out the non-hyperelliptic case. Added to the experimental data in genus \(3\), it leads us to state the following conjecture. Footnote 2: When using the data of [14] to draw this figure, we noticed that there were some errors in the code when computing the automorphism group of twists for small dimensional strata, giving \(728\) extra ‘weighted’ curves. This is a very small proportion with respect to \(53^{6}+1\) curves and does not affect the general shape of the curve. **Conjecture 5.1**.: _Let \(g\geq 3\). For all \(\tau\in I_{g}\), for all \(\varepsilon>0\), and for all large enough \(q\), there exists \(t\in\mathbb{Z}\) such that \(|\tau-t/\sqrt{q}|<1/(2\sqrt{q})\) and \(|\sqrt{q}\cdot\mathcal{N}_{q,g}^{\text{nhyp}}(t/\sqrt{q})-\mathfrak{f}_{g}(t/ \sqrt{q})|<\varepsilon\)._ Another way to phrase this conjecture is to replace the measure \(\mu_{q,g}\) by a measure with density given by the histogram with height \(\sqrt{q}\cdot\mathcal{N}_{q,g}^{\text{nhyp}}(t/\sqrt{q})\) and base centered at \(t/\sqrt{q}\) of length \(1/\sqrt{q}\) for all integers \(t\in[-2g\sqrt{q},2g\sqrt{q}]\). The conjecture asserts that the densities of these measures converge to the density \(\mathfrak{f}_{g}\) at each point of \(I_{g}\). This is stronger than weak convergence of the measures [14]. We now conclude by looking at the symmetry breaking for the trace distribution of (non-hyperelliptic) genus \(3\) curves. In general, if \(C\) is a hyperelliptic curve of genus \(g\) over \(\mathbb{F}_{q}\) with trace \(t\), then its quadratic twist for the hyperelliptic involution has trace \(-t\), and therefore the distribution of the number of hyperelliptic curves of genus \(g\) over \(\mathbb{F}_{q}\) as a function of their trace is symmetric. For non-hyperelliptic curves, the distribution has no reason to be symmetric anymore. Actually, if a principally polarized abelian variety over \(\mathbb{F}_{q}\) is the Jacobian (over \(\mathbb{F}_{q}\)) of a non-hyperelliptic curve, then its quadratic twist is never a Jacobian. This obstruction, known as _Serre's obstruction_, is a huge obstacle to finding a closed formula for the maximal number of rational points for \(g=3\) [1], whereas such formulas are known for \(g=1\) [1] and \(g=2\) [13]. Although we cannot improve on the state of the art of this question, we can study this asymmetry from the probabilistic angle, using the results obtained above. To visualize this asymmetry, let us consider the signed measure \(\nu_{q,g}=\mu_{q,g}-(-1)^{*}\mu_{q,g}\) where \((-1)^{*}\mu_{q,g}\) is the discrete image (pushforward) measure defined by \[(-1)^{*}\mu_{q,g}=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{C\in \mathcal{M}_{g}^{\prime}(\mathbb{F}_{q})}\frac{\delta_{-\tau(C)}}{\#\operatorname {Aut}_{\mathbb{F}_{q}}(C)}.\] We get the following consequence of Theorem 2.3. **Proposition 5.2**.: _The sequence of signed measures \((\nu_{q,g})\) weakly converges to the \(0\) measure._ Proof.: By definition, the even moments of \(\nu_{q,g}\) are zero. By Theorem 2.3 the odd moments of \(\sqrt{q}\,\nu_{q,g}\) are equal to \[2\frac{S_{n}(q,\mathcal{M}_{g})}{q^{3g-3+(n-1)/2}}=-2\mathfrak{b}_{n}( \mathcal{M}_{g})+O\left(\frac{1}{\sqrt{q}}\right).\] Hence all moments of \(\nu_{q,g}\) tend to \(0\) as \(q\to\infty\). 
Now if \(f\) is any continuous function on the compact interval \(I_{g}=[-2g,2g]\), then by the Stone-Weierstrass theorem, for every \(\varepsilon>0\) we can find a polynomial \(P\) such that \(|f(\tau)-P(\tau)|\leq\varepsilon\) for all \(\tau\in I_{g}\). Therefore we have \[\left|\int_{I_{g}}f\,d\nu_{q,g}\right|\leq\left|\int_{I_{g}}(f-P)\,d\nu_{q,g}+ \int_{I_{g}}P\,d\nu_{q,g}\right|\leq\varepsilon\|\nu_{q,g}\|+\left|\int_{I_{g} }P\,d\nu_{q,g}\right|.\] The last term is a finite linear combination of moments, which converges to \(0\) as \(q\) goes to infinity. The variation of \(\nu_{q,g}\) is also uniformly bounded, since \[\|\nu_{q,g}\|=|\nu_{q,g}|(I_{g})=\sum_{\tau}\left|\mathcal{N}_{q,g}(\tau)- \mathcal{N}_{q,g}(-\tau)\right|\leq 2\sum_{\tau}\mathcal{N}_{q,g}(\tau)=2\mu_{q,g} (I_{g})=2.\] A limit equal to the zero measure is not very informative, and the proof of Proposition 5.2 shows that it would be much more interesting to study the weak convergence of the sequence of signed measures \((\sqrt{q}\,\nu_{q,g})\). We have from the previous proof the following corollary. **Corollary 5.3**.: _The even moments of \(\sqrt{q}\,\nu_{q,g}\) are zero and the odd \(n\)th moments of the sequence \((\sqrt{q}\,\nu_{q,g})\) converge to \(-2\mathfrak{b}_{n}(\mathcal{M}_{g})\)._ Unfortunately we cannot prove weak convergence: the rest of the proof fails, as we do not know if one can bound \(\sqrt{q}\,\|\nu_{q,g}\|\) uniformly in \(q\) (which is a necessary condition for weak convergence). Moreover, one cannot expect a general result from the convergence of moments alone, as in the case of (positive) measures, as the following counterexample shows. _Example 5.4_.: Consider the sequence of signed measures \((\mu_{i})\) with density \(i\sin(ix)\) on the interval \([0,2\pi]\). The sequence of \(n\)th moments converges to \(-(2\pi)^{n}\), which is the \(n\)th moment of the signed measure \(\mu=-\delta_{2\pi}\). But \(\|\mu_{i}\|=4i\), which is not bounded, and therefore the sequence \((\mu_{i})\) does not weakly converge (to \(\mu\)); see for instance [1, Prop. 1.4.7]. The integral interpretation (2.1) of \(\mathfrak{b}_{n}(\mathcal{M}_{g})\) shows that it is equal to the \(n\)th moment of \[\mathfrak{h}_{g}(\tau)=\int_{A_{\tau}}\Bigl{(}\frac{1}{6}w_{1}^{3}-\frac{1}{2 }w_{1}w_{2}+\frac{1}{3}w_{3}-w_{1}\Bigr{)}\,dm_{g},\] with \(A_{\tau}=\{(\theta_{1},\ldots,\theta_{g})\in[0,\pi]^{g}\,:\,\sum_{j}2\cos \theta_{j}=\tau\}\). Because of the convergence of the moments, we conjecture the following. **Conjecture 5.5**.: _For \(g\geq 3\), the sequence of signed measures \((\sqrt{q}\,\nu_{q,g})\) weakly converges to the continuous signed measure with density \(-2\,\mathfrak{h}_{g}\)._ Such a result would for instance imply that \(\sqrt{q}\,\|\nu_{q,g}\|\) is uniformly bounded, hence there exists a constant \(C>0\) such that for all \(q\) and all \(\tau=t/\sqrt{q}\), we have \(|\mathcal{N}_{q,g}(\tau)-\mathcal{N}_{q,g}(-\tau)|\leq C/\sqrt{q}\). In genus \(3\), in the same spirit as in Section 4, one can run experiments which illustrate how the values \[\left\{q\,\left(\mathcal{N}_{q,g}\left(\frac{t}{\sqrt{q}}\right)-\mathcal{N}_ {q,g}\left(\frac{-t}{\sqrt{q}}\right)\right)\right\}_{0\leq t\leq g\lfloor 2 \sqrt{q}\rfloor}\] are close to the values \(-2\,\mathfrak{h}_{3}(t/\sqrt{q})\). See for instance Fig. 5 for \(q=53\). Seeing the data, one may even wonder if something stronger would hold, along the same lines as Conjecture 5.1, at least for \(g=3\). 
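Example 5.4 above can also be verified numerically. The snippet below is our own check (assuming `numpy` and `scipy`): the \(n\)th moments of \(\mu_{i}\) approach \(-(2\pi)^{n}\) while the total variation \(\|\mu_{i}\|=4i\) blows up.

```python
# Numerical check (ours) of Example 5.4: mu_i has density i*sin(i*x) on [0, 2*pi].
import numpy as np
from scipy.integrate import quad

n = 2
target = -(2 * np.pi) ** n          # n-th moment of mu = -delta_{2*pi}
for i in (5, 20, 80):
    moment, _ = quad(lambda x: x**n * i * np.sin(i * x), 0, 2 * np.pi, limit=400)
    tv, _ = quad(lambda x: i * abs(np.sin(i * x)), 0, 2 * np.pi, limit=400)
    print(f"i={i:>2}: moment {moment:9.4f} -> {target:.4f},"
          f"  ||mu_i|| = {tv:.1f} (exact 4i = {4*i})")
```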
Under this conjecture, one can use the moments of the density function \(\mathfrak{h}_{3}\) to revisit the result of [12]. Based on results of [1], the authors gave a heuristic explanation for the distribution of the points \[p_{t,q}=\left(\frac{t}{\sqrt{q}},q\,\left(\mathcal{N}_{q,g}\left(\frac{t}{ \sqrt{q}}\right)-\mathcal{N}_{q,g}\left(\frac{-t}{\sqrt{q}}\right)\right)\right)\] when \(0\leq t\leq g\lfloor 2\sqrt{q}\rfloor\) by comparing it with the distribution of differences around the mean in the binomial law [12, Cor. 2.3]. With the arguments given there, the distribution is approximated by the function \[\mathcal{V}^{\lim}(\tau)=\tau(1-\tau^{2}/3)\cdot\left(\frac{1}{\sqrt{2\pi}}e^ {-\tau^{2}/2}\right).\] Graphically, for \(q=53\), the comparison looks acceptable but not perfect (see Fig. 5). This is fair, as the heuristic grew from a result that holds when the degree of the plane curves in play is larger than \(2q-1\). As we are presently dealing with non-hyperelliptic curves of genus \(3\), represented as plane curves of degree \(4\), the condition is obviously never fulfilled. It is therefore already striking that a close, albeit imperfect, match was found in this way. We now take a different road based on Conjecture 5.5 and approximate the density \(-2\,\mathfrak{h}_{3}\) by a function \(\nu^{\lim}\) using the moments \(\mathfrak{b}_{n}(\mathcal{M}_{3})\). By Theorem 2.3, they can be efficiently computed using any symmetric polynomial package. We used Maple and the package SF [10] to compute \(\mathfrak{b}_{n}(\mathcal{M}_{3})\) for \(n=1,3,5,\ldots,25\), and found the following values: \[\begin{array}{cc|cc|cc}\hline\hline n&\mathfrak{b}_{n}(\mathcal{M}_{3})&n& \mathfrak{b}_{n}(\mathcal{M}_{3})&n&\mathfrak{b}_{n}(\mathcal{M}_{3})\\ \hline 1&0&11&10\,395&19&481\,835\,250\\ 3&1&13&135\,564&21&8\,308\,361\,040\\ 5&9&15&1\,927\,926&23&150\,309\,679\,212\\ 7&84&17&29\,524\,716&25&2\,836\,568\,118\,720\\ 9&882&&&&\\ \hline\hline\end{array}\] Taking \(\nu^{\lim}(\tau)\) of the form \(P(\tau)\left(\frac{1}{\sqrt{2\pi}}e^{-\tau^{2}/2}\right)\) with \(P\) an odd polynomial of degree \(5\), we want \[\int_{\mathbb{R}}\tau^{2n+1}\cdot\nu^{\lim}(\tau)\,d\tau=-2\mathfrak{b}_{2n+1 }(\mathcal{M}_{3}),\] for \(n=0,1\) and \(2\), and one finds that \[\nu^{\lim}(\tau)=\left(1/60\,\tau^{5}-1/2\,\tau^{3}+5/4\,\tau\right)\left( \frac{1}{\sqrt{2\pi}}e^{-\tau^{2}/2}\right).\] Remarkably, the moments of \(\nu^{\lim}(\tau)\) still agree with \(-2\mathfrak{b}_{2n+1}(\mathcal{M}_{3})\) for \(n=3,4\) and \(5\). However, for \(n=6\) we find that \(\int_{\mathbb{R}}\tau^{13}\cdot\nu^{\lim}(\tau)\,d\tau=-2\cdot 135135\neq-2\cdot \mathfrak{b}_{13}(\mathcal{M}_{3})\). In Figure 5 we see a comparison between the graph of points \(\{p_{t,53}\}_{0\leq t\leq 42}\) and the functions \(\mathcal{V}^{\lim}\) and \(\nu^{\lim}\); the comparison favors the latter.
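The moment computations in the last paragraph can be checked exactly with a few lines of code; this is our own verification, not the authors' Maple/SF computation. It uses the classical fact that the \(2k\)th moment of the standard Gaussian density is the double factorial \((2k-1)!!\), together with the values of \(\mathfrak{b}_{n}(\mathcal{M}_{3})\) from the table above.

```python
# Exact verification (ours) that nu_lim matches -2*b_n(M_3) for n = 1, ..., 11
# and first fails at n = 13, using int tau^(2k) phi(tau) dtau = (2k-1)!!.
from fractions import Fraction

def gaussian_moment(k):             # (2k-1)!! = 2k-th moment of N(0,1)
    out = 1
    for j in range(1, 2 * k, 2):
        out *= j
    return out

P = {5: Fraction(1, 60), 3: Fraction(-1, 2), 1: Fraction(5, 4)}  # nu_lim = P * phi
b = {1: 0, 3: 1, 5: 9, 7: 84, 9: 882, 11: 10395, 13: 135564}     # from the table

for n in range(1, 15, 2):
    # tau^n * tau^d integrates against phi as the (n + d)/2-th Gaussian moment
    moment = sum(c * gaussian_moment((n + d) // 2) for d, c in P.items())
    flag = "ok" if moment == -2 * b[n] else "MISMATCH"
    print(f"n={n:2}: moment(nu_lim) = {moment}, -2*b_n = {-2 * b[n]}  [{flag}]")
```

Running this reproduces the agreement through \(n=11\) and the discrepancy \(-2\cdot 135135\) versus \(-2\cdot 135564\) at \(n=13\) noted in the text.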
We go beyond the Katz-Sarnak theory on the distribution of curves over finite fields according to their number of rational points, theoretically, experimentally, and conjecturally. In particular, we derive a formula for the limits of the moments of this distribution for non-hyperelliptic curves of genus \(g \geq 3\). The experimental results suggest, for all curves of genus \(\geq 3\), a notion of convergence stronger than the one provided by the Katz-Sarnak framework. For elliptic curves and for hyperelliptic curves of any genus \(g\), however, this stronger convergence does not hold.
2310.20169
Plateau borders in soap films and Gauss' capillarity theory
We provide, in the setting of Gauss' capillarity theory, a rigorous derivation of the equilibrium law for the three dimensional structures known as Plateau borders which arise in "wet" soap films and foams. A key step in our analysis is a complete measure-theoretic overhaul of the homotopic spanning condition introduced by Harrison and Pugh in the study of Plateau's laws for two-dimensional area minimizing surfaces ("dry" soap films). This new point of view allows us to obtain effective compactness theorems and energy representation formulae for the homotopic spanning relaxation of Gauss' capillarity theory which, in turn, lead to a proof of sharp regularity properties of energy minimizers. The equilibrium law for Plateau borders in wet foams is also addressed as a (simpler) variant of the theory for wet soap films.
Francesco Maggi, Michael Novack, Daniel Restrepo
2023-10-31T04:37:55
http://arxiv.org/abs/2310.20169v1
# Plateau borders in soap films and Gauss' capillarity theory ###### Abstract. We provide, in the setting of Gauss' capillarity theory, a rigorous derivation of the equilibrium law for the three dimensional structures known as _Plateau borders_ which arise in "wet" soap films and foams. A key step in our analysis is a complete measure-theoretic overhaul of the homotopic spanning condition introduced by Harrison and Pugh in the study of Plateau's laws for two-dimensional area minimizing surfaces ("dry" soap films). This new point of view allows us to obtain effective compactness theorems and energy representation formulae for the homotopic spanning relaxation of Gauss' capillarity theory which, in turn, lead to a proof of sharp regularity properties of energy minimizers. The equilibrium law for Plateau borders in wet foams is also addressed as a (simpler) variant of the theory for wet soap films. ###### Contents * 1 Introduction * 2 Induced essential partitions (Theorem 1.2) * 3 Homotopic spanning on generalized soap films (Theorem 1.3) * 4 The fundamental closure theorem for homotopic spanning conditions * 5 Direct Method on generalized soap films (Theorem 1.4) * 6 Existence of minimizers and convergence to Plateau's problem (Theorem 1.5) * 7 Equilibrium across transition lines in wet soap films (Theorem 1.6) * 8 Equilibrium across transition lines in wet foams (Theorem 1.7) * A Equivalence of homotopic spanning conditions * B Convergence of every minimizing sequence of \(\Psi_{\rm bk}(v)\) * C An elementary lemma ## 1. Introduction ### Overview Equilibrium configurations of soap films and foams are governed, at leading order, by the balance between surface tension forces and atmospheric pressure. This balance is expressed by the _Laplace-Young law of pressures_, according to which such systems can be decomposed into smooth interfaces with constant mean curvature equal to the pressure difference across them, and by the _Plateau laws_, which precisely postulate which arrangements of smooth interfaces joined together along lines of "singular" points are stable, and thus observable. The physics literature identifies two (closely related) classes of soap films and foams, respectively labeled as "dry" and "wet". This difference is either marked in terms of the amount of liquid contained in the soap film/foam [14, Section 1.3], or in terms of the scale at which the soap film/foam is described [1, Chapter 2, Section 3 and 4]. In the dry case, the Plateau laws postulate that (i) interfaces can only meet three at a time, forming 120-degree angles along lines of "\(Y\)-points"; and (ii) lines of \(Y\)-points can only meet in fours at isolated "\(T\)-points", where six interfaces asymptotically form a perfectly symmetric tetrahedral angle; see, e.g. [12, Equilibrium rules A1, A2, page 24]. In the wet case, small but positive amounts of liquid are bounded by negatively curved interfaces, known as _Plateau borders_, and arranged near ideal lines of \(Y\)-points or isolated \(T\)-points; see Figure 1.1 and [13, Fig. 1.8 and Fig. 1.9]. A "third Plateau law" is then postulated to hold across the transition lines between wet and dry parts of soap films/foams, and can be formulated as follows: \[\begin{split}&\textit{the unit normal to a soap film/foam changes continuously}\\ &\textit{across the transition lines between wet and dry interfaces};\end{split}\tag{1.1}\] see, e.g., [12, Equilibrium rule B, page 25] and [9, Section 4.1.4]. 
Figure 1.1. (a) A Plateau border develops around a “wet” line of \(Y\)-points. The wet region is bounded by interfaces of _negative_ constant mean curvature. The equilibrium condition which needs to hold across the transition lines (here depicted in bold) between the negatively curved interfaces of a Plateau border and the incoming dry interfaces is that these interfaces meet tangentially. In the case of soap films, where the dry interfaces have zero mean curvature, the jump in the mean curvature across the transition lines implies a discontinuity in the gradient of the unit normal. (b) An arrangement of Plateau borders near a tetrahedral singularity. The transition lines are again depicted in bold. The incoming dry interfaces are omitted for clarity. It is important to recall that Plateau borders play a crucial role in determining the mechanical properties of the many physical and biological systems in which they are observed. As a sample of older and newer papers discussing Plateau borders, we mention here [10, 1, 11, 12, 13, 14, 15]. Postulate (1.1) is assumed in all these works. The goal of this paper is to answer the natural problem of rigorously deriving the equilibrium condition for Plateau borders (1.1) in the context of Gauss' capillarity theory. Since the case of soap films is much harder and more interesting from the mathematical viewpoint, we will postpone the discussion of foams until the very last section of this introduction. The main highlight is that, in addressing Plateau borders of soap films, we will develop a new "theory of spanning" for surfaces of geometric measure theory (GMT) which will find further applications in the two companion papers [16, 17]; see the closing of this overview for more details about these additional applications. We now give an informal description of our approach. The starting point is [18], which introduced the idea of modeling soap films as regions \(E\) of positive volume \(|E|=v\) contained in the complement \(\Omega=\mathbb{R}^{n+1}\setminus\mathbf{W}\) of a "wire frame" \(\mathbf{W}\) (\(n=2\) is the physical case, although the planar case \(n=1\) is also quite interesting in applications). We associate to \(E\) the surface tension energy \(\mathcal{H}^{n}(\Omega\cap\partial E)\) (where \(\mathcal{H}^{n}\) stands for \(n\)-dimensional (Hausdorff) measure, i.e., area when \(n=2\) and length when \(n=1\)), and minimize \(\mathcal{H}^{n}(\Omega\cap\partial E)\) under the constraints that \(|E|=v\) (for some given \(v>0\)) and \[\Omega\cap\partial E\text{ is spanning }\mathbf{W}\,. \tag{1.2}\] From the mathematical viewpoint the meaning assigned to (1.2) is, of course, the crux of the matter. In the informal spirit of this overview, we momentarily leave the concept of "spanning" only intuitively defined. As proved in [10], this minimization process leads to the identification of _generalized minimizers_ in the form of pairs \((K,E)\) with \(E\subset\Omega\), \(|E|=v\), and such that \[\Omega\cap\partial E\subset K\text{ and }K\text{ is spanning }\mathbf{W}\,. \tag{1.3}\] These pairs are minimizing in the sense that \[\mathcal{H}^{n}(\Omega\cap\partial E)+2\,\mathcal{H}^{n}(K\setminus\partial E )\leq\mathcal{H}^{n}(\Omega\cap\partial E^{\prime})\,, \tag{1.4}\] whenever \(E^{\prime}\subset\Omega\), \(|E^{\prime}|=v\) and \(\Omega\cap\partial E^{\prime}\) is spanning \(\mathbf{W}\). If \(K=\Omega\cap\partial E\), then generalized minimizers are of course minimizers in the proper sense. 
If not, the _collapsed interface_ \(K\setminus\partial E\) is a surface whose positive area has to be counted with a multiplicity factor \(2\) (which arises from the asymptotic collapsing along \(K\setminus\partial E\) of oppositely oriented boundaries in minimizing sequences \(\{E_{j}\}_{j}\); see Figure 1.2). We expect collapsing to occur whenever the Plateau problem for \(\mathbf{W}\) admits a minimizer \(S\) with Plateau-type singularities. Whenever this happens, a _wetting conjecture_ is made: sequences \(\{(K_{v_{j}},E_{v_{j}})\}_{j}\) of generalized minimizers with \(|E_{v_{j}}|=v_{j}\to 0^{+}\) as \(j\to\infty\) will be such that \(\sup\{\operatorname{dist}(x,E_{v_{j}}):x\in\Sigma(S)\}\to 0\), where \(\Sigma(S)\) denotes the set of Plateau-type singularities of \(S\). Thus we expect that Plateau's singularities are never "left dry" in the small volume capillarity approximation of the Plateau problem. A lot of information about generalized minimizers can be extracted from (1.4), and this is the content of [10, 11, 12]. With reference to the cases when \(n=1\) or \(n=2\), one can deduce from (1.4) that if \(\mathcal{H}^{n}(K\setminus\partial E)>0\), then \(K\setminus\partial E\) is a smooth minimal surface (a union of segments if \(n=1\)) and that \(\partial E\) contains a regular part \(\partial^{*}E\) that is a smooth constant mean curvature surface (a union of circular arcs if \(n=1\)) with _negative_ curvature. This is of course strongly reminiscent of the behavior of Plateau borders, and invites us to analyze the validity of (1.1) in this context. A main obstacle is that, due to serious technical issues (described in more detail later on) related to how minimality is expressed in (1.4), it turns out to be very difficult to say much about the "transition line" \[\partial E\setminus\partial^{*}E\] between the zero and the negative constant mean curvature interfaces in \(K\), across which one should check the validity of (1.1). More precisely, all that descends from (1.4) and a direct application of Allard's regularity theorem [1] is that \(\partial E\setminus\partial^{*}E\) _has empty interior in \(K\)_. Far from being a line in dimension \(n=2\), or a discrete set of points when \(n=1\), the transition line \(\partial E\setminus\partial^{*}E\) could very well have positive \(\mathcal{H}^{n}\)-measure and be everywhere dense in \(K\)! With such poor understanding of \(\partial E\setminus\partial^{*}E\), proving the validity of (1.1) - that is, the continuity of the unit normals to \(K\setminus\partial E\) and \(\partial^{*}E\) in passing across \(\partial E\setminus\partial^{*}E\) - is of course out of the question. We overcome these difficulties by performing a major measure-theoretic overhaul of the Harrison-Pugh homotopic spanning condition [11, 12] used in [13, 14, 15, 16] to give a rigorous meaning to (1.2), and thus to formulate the homotopic spanning relaxation of Gauss' capillarity discussed above. The transformation of this purely topological concept into a measure-theoretic one is particularly powerful. 
Its most important consequence for the problem discussed in this paper is that it allows us to upgrade the partial minimality property (1.4) of \((K,E)\) into the full minimality property \[\mathcal{H}^{n}(\Omega\cap\partial E)+2\,\mathcal{H}^{n}(K\setminus\partial E )\leq\mathcal{H}^{n}(\Omega\cap\partial E^{\prime})+2\,\mathcal{H}^{n}(K^{ \prime}\setminus\partial E^{\prime}) \tag{1.5}\] whenever \(E^{\prime}\subset\Omega\), \(|E^{\prime}|=v\), \(\Omega\cap\partial E^{\prime}\subset K^{\prime}\) and \(K^{\prime}\) is spanning \(\mathbf{W}\). The crucial difference between (1.4) and (1.5) is that the latter is much more efficient than the former when it comes to studying the regularity of generalized minimizers \((K,E)\), something that is evidently done by energy comparison with competitors \((K^{\prime},E^{\prime})\). Such comparisons are immediate when working with (1.5), but they are actually quite delicate to set up when we only have (1.4). In the latter case, given a competitor \((K^{\prime},E^{\prime})\), to set up the energy comparison with \((K,E)\) we first need to find a sequence of non-collapsed competitors \(\{E^{\prime}_{j}\}_{j}\) (with \(E^{\prime}_{j}\subset\Omega\), \(|E^{\prime}_{j}|=v\), and \(\Omega\cap\partial E^{\prime}_{j}\) spanning \(\mathbf{W}\)) such that \(\mathcal{H}^{n}(\Omega\cap\partial E^{\prime}_{j})\to\mathcal{H}^{n}(\Omega \cap\partial E^{\prime})+2\,\mathcal{H}^{n}(K^{\prime}\setminus\partial E^{ \prime})\). Intuitively, \(E^{\prime}_{j}\) needs to be a \(\delta_{j}\)-neighborhood of \(K^{\prime}\cup E^{\prime}\) for some \(\delta_{j}\to 0^{+}\) and the energy approximation property has to be deduced from the theory of Minkowski content. But applying the theory of Minkowski content to \((K^{\prime},E^{\prime})\) (which is the approach followed, e.g., in [14]) requires \((K^{\prime},E^{\prime})\) to satisfy rectifiability and uniform density properties that substantially restrict the class of available competitors \((K^{\prime},E^{\prime})\). In contrast, once the validity of (1.5) is established, a suitable generalization (Theorem 1.2) of the partition theorem of sets of finite perimeter into indecomposable components [1, Theorem 1] combined with a subtle variational argument (see Figure 1.7) allows us to show that, in any ball \(B\subset\!\!\subset\Omega\) with sufficiently small radius and for some sufficiently large constant \(\Lambda\) (both depending just on \((K,E)\)), the connected components \(\{U_{i}\}_{i}\) of \(B\setminus(K\cup E)\) satisfy a perturbed area minimizing property of the form \[\mathcal{H}^{n}(B\cap\partial U_{i})\leq\mathcal{H}^{n}(B\cap\partial V)+ \Lambda\,|U_{i}\Delta V|\,, \tag{1.6}\] with respect to _completely arbitrary perturbations_ \(V\subset B\), \(V\Delta U_{i}\subset\!\!\subset B\). By a classical theorem of De Giorgi [1, 13], (1.6) implies (away from a closed singular set of codimension at least \(8\), which is thus empty if \(n\leq 6\)) the \(C^{1,\alpha}\)-regularity of \(B\cap\partial U_{i}\) for each \(i\), and thus establishes _the continuity of the normal stated in (1.1)_. In fact, locally at each \(x\) on the transition line, \(K\) is the union of the graphs of two \(C^{1,\alpha}\)-functions \(u_{1}\leq u_{2}\) defined on an \(n\)-dimensional disk, having zero mean curvature above the interior of \(\{u_{1}=u_{2}\}\), and opposite constant mean curvature above \(\{u_{1}<u_{2}\}\). 
We can thus exploit the regularity theory for double-membrane free boundary problems devised in [17, 18] to deduce that the transition line \(\partial E\setminus\partial^{*}E\) is indeed \((n-1)\)-dimensional, and to improve the \(C^{1,\alpha}\)-regularity of \(B\cap\partial U_{i}\) to \(C^{1,1}\)-regularity. Given the mean curvature jump across \(\partial E\setminus\partial^{*}E\) we have thus established the _sharp_ degree of regularity for minimizers of the homotopic spanning relaxation of Gauss' capillarity theory. The measure-theoretic framework for homotopic spanning conditions laid down in this paper provides the starting point for additional investigations that would otherwise seem inaccessible. In two forthcoming companion papers we indeed establish (i) the convergence towards Plateau-type singularities of energy-minimizing diffused interface solutions of the Allen-Cahn equation [14], and (ii) some sharp convergence theorems for generalized minimizers in the homotopic spanning relaxation of Gauss' capillarity theory in the vanishing volume limit, including a proof of the above mentioned wetting conjecture [14]. The rest of this introduction is devoted to a rigorous formulation of the results presented in this overview. We begin in Section 1.2 with a review of the Harrison and Pugh homotopic spanning condition in relation to the classical Plateau problem and to the foundational work of Almgren and Taylor [1, 20]. In Section 1.3 we introduce the new measure-theoretic formulation of homotopic spanning and discuss its relation to the measure-theoretic notion of _essential connectedness_ introduced by Cagnetti, Colombo, De Philippis and the first-named author in the study of symmetrization inequalities [1, 1]. In Section 1.4 we introduce the _bulk_ and _boundary_ spanning relaxations of Gauss' capillarity theory and state a general closure theorem for "generalized soap films" that applies to both relaxed problems (Theorem 1.4). In Section 1.5 we prove the existence of generalized soap film minimizers (Theorem 1.5) and their convergence in energy to solutions to the Plateau problem. A sharp regularity theorem (Theorem 1.6) for these minimizers, which validates (1.1), is stated in Section 1.6. Finally, in Section 1.7 we reformulate the above results in the case of foams; see in particular Theorem 1.7. ### Homotopic spanning: from Plateau's problem to Gauss' capillarity The theories of currents and of sets of finite perimeter, i.e. the basic distributional theories of surface area at the center of GMT, fall short in the task of modeling Plateau's laws. Indeed, two-dimensional area minimizing currents in \(\mathbb{R}^{3}\) are carried by smooth minimal surfaces, and thus cannot model \(Y\)-type\({}^{1}\) and \(T\)-type singularities. This basic issue motivated the introduction of **Almgren minimal sets** as models for soap films in [15]: these are sets \(S\subset\mathbb{R}^{n+1}\) that are relatively closed in a given open set \(\Omega\subset\mathbb{R}^{n+1}\), and satisfy \(\mathcal{H}^{n}(S)\leq\mathcal{H}^{n}(f(S))\) whenever \(f:\Omega\to\Omega\) is a _Lipschitz_ (not necessarily injective) map with \(\{f\neq\mathrm{id}\}\subset\subset\Omega\). Taylor's historical result [20] validates the Plateau laws in this context, by showing that, when\({}^{2}\) \(n=2\), Almgren minimal sets are locally \(C^{1,\alpha}\)-diffeomorphic either to planes, to \(Y\)-cones, or to \(T\)-cones. Footnote 1: Currents modulo \(3\) are compatible with \(Y\)-type singularities, but not with \(T\)-type singularities. 
The issue of proposing and solving a formulation of Plateau's problem whose minimizers are Almgren minimal sets, and indeed admit Plateau-type singularities, is quite elusive, as carefully explained in [13]. In this direction, a major breakthrough has been obtained by Harrison and Pugh in [12] with the introduction of a new spanning condition, which, following the presentation in [10], can be defined as follows: **Definition A** (Homotopic spanning (on closed sets)).: Given a closed set \(\mathbf{W}\subset\mathbb{R}^{n+1}\) (the "wire frame"), a **spanning class for \(\mathbf{W}\)** is a family \(\mathcal{C}\) of smooth embeddings of \(\mathbb{S}^{1}\) into \[\Omega=\mathbb{R}^{n+1}\setminus\mathbf{W}\] that is _closed under homotopies in \(\Omega\)_: if \(\Phi:[0,1]\times\mathbb{S}^{1}\to\Omega\) is a smooth family of embeddings \(\Phi_{t}=\Phi(t,\cdot):\mathbb{S}^{1}\to\Omega\) with \(\Phi_{0}\in\mathcal{C}\), then \(\Phi_{t}\in\mathcal{C}\) for every \(t\in(0,1]\). A set \(S\), contained and relatively closed in \(\Omega\), is said to be \(\mathcal{C}\)**-spanning W** if \[S\cap\gamma(\mathbb{S}^{1})\neq\varnothing\,,\qquad\forall\gamma\in\mathcal{C}\,.\] Denoting by \(\mathcal{S}(\mathcal{C})\) the class of sets \(S\) \(\mathcal{C}\)-spanning \(\mathbf{W}\), one can correspondingly formulate the **Plateau problem** (with homotopic spanning) \[\ell=\ell(\mathcal{C}):=\inf\left\{\mathcal{H}^{n}(S):S\in\mathcal{S}( \mathcal{C})\right\}. \tag{1.7}\] Existence of minimizers of \(\ell\) holds as soon as \(\ell<\infty\), and minimizers \(S\) of \(\ell\) are Almgren minimal sets in \(\Omega\) [14, 15] that are indeed going to exhibit Plateau-type singularities (this is easily seen in the plane, but see also [1] for a higher dimensional example). Moreover, given the same \(\mathbf{W}\), different choices of \(\mathcal{C}\) are possible and can lead to different minimizers; see Figure 1.3. Finally, the approach is robust enough to provide the starting point for several important extensions [1, 1, 1, 10, 11, 12, 13, 14, 15], including higher codimension, anisotropic energies, etc. The study of soap films as minimizers of Gauss' capillarity energy with small volume and under homotopic spanning conditions has been initiated in [16, 17], with the introduction of the model \[\psi(v):=\inf\left\{\mathcal{H}^{n}(\Omega\cap\partial E):|E|=v\,,\ \Omega\cap \partial E\text{ is $\mathcal{C}$-spanning $\mathbf{W}$}\right\}, \tag{1.8}\] where \(E\subset\Omega\) is an open set with smooth boundary. Without the spanning condition, at small volumes, minimizers of \(\mathcal{H}^{n}(\Omega\cap\partial E)\) would be small diffeomorphic images of half-balls [13]. However, the introduction of the \(\mathcal{C}\)-spanning constraint rules out small droplets, and forces the exploration of a different part of the energy landscape of \(\mathcal{H}^{n}(\Omega\cap\partial E)\). As informally discussed in Section 1.1, this leads to the emergence of generalized minimizers \((K,E)\). More precisely, in [17] the existence is proved of pairs \((K,E)\) in the class \[\mathcal{K}=\left\{(K,E):K\text{ is relatively closed and $\mathcal{H}^{n}$-rectifiable in $\Omega$, $E$ is open,}\right. 
\tag{1.9}\] \[\left.\text{$E$ has finite perimeter in $\Omega$, and }\Omega\cap\operatorname{cl}\left(\partial^{*}E\right)=\Omega\cap\partial E\subset K\right\}\] (where \(\partial^{*}E\) denotes the reduced boundary of \(E\)) such that, for every competitor \(E^{\prime}\) in \(\psi(v)\), it holds \[\mathcal{H}^{n}(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}(\Omega\cap(K \setminus\partial^{*}E))\leq\mathcal{H}^{n}(\Omega\cap\partial E^{\prime})\,. \tag{1.10}\] Starting from (1.10) one can apply Allard's regularity theorem [10] and various _ad hoc_ comparison arguments [17, 18] to prove that \(\Omega\cap\partial^{*}E\) is a smooth hypersurface with constant mean curvature (negative if \(\mathcal{H}^{n}(K\setminus\partial^{*}E)>0\)), that \(\Omega\cap(\partial E\setminus\partial^{*}E)\) has empty interior in \(K\), and that \(K\setminus(\Sigma\cup\partial E)\) is a smooth minimal hypersurface, where \(\Sigma\) is a closed set with codimension at least \(8\). Figure 1.3. The dashed lines denote the embeddings of \(\mathbb{S}^{1}\) whose homotopy classes relative to \(\Omega\) generate different spanning classes \(\mathcal{C}\), to which there correspond different minimizers of \(\ell\). ### Measure theoretic homotopic spanning In a nutshell, the idea behind our measure theoretic revision of the Harrison-Pugh homotopic spanning condition is the following. Rather than asking that \(S\cap\gamma(\mathbb{S}^{1})\neq\varnothing\) for every \(\gamma\in\mathcal{C}\), as done in Definition A, we shall replace \(\gamma\) with an open "tube" \(T\) containing \(\gamma(\mathbb{S}^{1})\), and ask that \(S\), with the help of a generic "slice" \(T[s]\) of \(T\), "disconnects" \(T\) itself into two nontrivial regions \(T_{1}\) and \(T_{2}\); see Figure 1.4. The key to make this idea work is, of course, giving a proper meaning to the word "disconnects". To this end, we recall the notion of **essential connectedness** introduced in [1, 1] in the study of the rigidity of equality cases in Gaussian and Euclidean perimeter symmetrization inequalities. Essential connectedness is the "right" notion to deal with such problems since it leads to the formulation of sharp rigidity theorems, and can indeed be used to address other rigidity problems (see [1, 2, 3]). This said, it seems remarkable that the very same notion of what it means for "one Borel set to disconnect another Borel set" proves to be extremely effective also in the context of the present paper, which is of course very far from the context of symmetrization theory. 
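As a warm-up for the definitions that follow, the notion of points of density \(t\), on which essential boundaries are built, can be illustrated with a quick Monte Carlo experiment of ours (not from the paper), assuming `numpy` is available:

```python
# Monte Carlo illustration (ours) of points of density t: for a Borel set T in
# the plane, x lies in T^(t) when |T n B_r(x)| / (pi r^2) -> t as r -> 0+. The
# origin has density 1/2 for a half-plane, 1/4 for a quadrant; interior points
# of T have density 1.
import numpy as np

rng = np.random.default_rng(0)

def density(indicator, x, r, n=200_000):
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = r * np.sqrt(rng.uniform(0.0, 1.0, n))   # area-uniform sampling of B_r(x)
    px, py = x[0] + rad * np.cos(theta), x[1] + rad * np.sin(theta)
    return indicator(px, py).mean()

half_plane = lambda px, py: py > 0
quadrant = lambda px, py: (px > 0) & (py > 0)

for r in (1.0, 0.1, 0.01):
    print(f"r={r:5}: half-plane at 0: {density(half_plane, (0, 0), r):.3f} (-> 0.5),"
          f" quadrant at 0: {density(quadrant, (0, 0), r):.3f} (-> 0.25),"
          f" half-plane at (0,1): {density(half_plane, (0, 1), r):.3f} (-> 1)")
```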
Denoting by \(T^{(t)}\) (\(0\leq t\leq 1\)) the **points of density \(t\)** of a Borel set \(T\subset\mathbb{R}^{n+1}\) (i.e., \(x\in T^{(t)}\) if and only if \(|T\cap B_{r}(x)|/\omega_{n+1}\,r^{n+1}\to t\) as \(r\to 0^{+}\), where \(\omega_{k}\) is the Lebesgue measure of the unit ball in \(\mathbb{R}^{k}\)), and by \(\partial^{e}T=\mathbb{R}^{n+1}\setminus(T^{(0)}\cup T^{(1)})\) the **essential boundary** of \(T\), given Borel sets \(S\), \(T\), \(T_{1}\) and \(T_{2}\) in \(\mathbb{R}^{n+1}\), and given \(n\geq 0\), we say that \(S\) **essentially disconnects \(T\) into** \(\{T_{1},T_{2}\}\), if \[\begin{split}&\{T_{1},T_{2}\}\text{ is a non-trivial Borel partition of }T\,,\\ &\text{ and }T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\text{ is }\mathcal{H}^{n}\text{-contained in }S\,.\end{split} \tag{1.11}\] (For example, if \(K\) is a set of full \(\mathcal{L}^{1}\)-measure in \([-1,1]\), then \(S=K\times\{0\}\) essentially disconnects the unit disk in \(\mathbb{R}^{2}\).) Moreover, we say that \(T\) is **essentially connected**\({}^{3}\) if \(\varnothing\) does not essentially disconnect \(T\). The requirement that \(\{T_{1},T_{2}\}\) is a non-trivial Borel partition of \(T\) means that \(|T\Delta(T_{1}\cup T_{2})|=0\) and \(|T_{1}|\,|T_{2}|>0\). By saying that "\(E\) is \(\mathcal{H}^{n}\)-contained in \(F\)" we mean that \(\mathcal{H}^{n}(E\setminus F)=0\). We also notice that, in (1.11), we have \(T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}=T^{(1)}\cap\partial^{e}T_{i}\) (\(i=1,2\)), a fact that is tacitly and repeatedly considered in the use of (1.11) in order to shorten formulas. Footnote 3: Whenever \(T\) is of locally finite perimeter, being essentially connected is equivalent to being indecomposable. Figure 1.4. (a) Homotopic spanning according to Harrison–Pugh: \(S\) must intersect every curve \(\gamma\in\mathcal{C}\), in particular, the \(\mathcal{C}\)-spanning property may be lost by removing a single point from \(S\); (b) Homotopic spanning based on essential connectedness: for a.e. section \(T[s]\) of the tube \(T\) around a curve \(\gamma\in\mathcal{C}\), the union \(T[s]\cup S\) (essentially) disconnects \(T\) (i.e., divides \(T\) into two non-trivial parts, depicted here with two different shades of gray). With this terminology in mind, we introduce the following definition: **Definition B** (Measure theoretic homotopic spanning).: Given a closed set \(\mathbf{W}\) and a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), the **tubular spanning class \(\mathcal{T}(\mathcal{C})\)** associated to \(\mathcal{C}\) is the family of triples \((\gamma,\Phi,T)\) such that \(\gamma\in\mathcal{C}\), \(T=\Phi(\mathbb{S}^{1}\times B_{1}^{n})\), and\({}^{4}\) Footnote 4: Here \(B_{1}^{n}=\{x\in\mathbb{R}^{n}:|x|<1\}\) and \(\mathbb{S}^{1}=\{s\in\mathbb{R}^{2}:|s|=1\}\). \[\Phi:\mathbb{S}^{1}\times\operatorname{cl}B_{1}^{n}\to\Omega\text{ is a diffeomorphism with }\Phi|_{\mathbb{S}^{1}\times\{0\}}=\gamma\,.\] When \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), the **slice of \(T\)** defined by \(s\in\mathbb{S}^{1}\) is \[T[s]=\Phi(\{s\}\times B_{1}^{n})\,.\] Finally, we say that a Borel set \(S\subset\Omega\) is \(\mathcal{C}\)**-spanning \(\mathbf{W}\)** if for each \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), \(\mathcal{H}^{1}\)-a.e. 
\(s\in\mathbb{S}^{1}\) has the following property: \[\text{for }\mathcal{H}^{n}\text{-a.e. }x\in T[s] \tag{1.12}\] \[\exists\text{ a partition }\{T_{1},T_{2}\}\text{ of }T\text{ s.t. }x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\] \[\text{and s.t. }S\cup T[s]\text{ essentially disconnects }T\text{ into }\{T_{1},T_{2}\}\,.\] Before commenting on (1.12), we notice that the terminology of Definition B is coherent with that of Definition A thanks to the following theorem. **Theorem 1.1**.: _Given a closed set \(\mathbf{W}\subset\mathbb{R}^{n+1}\), a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), and a set \(S\) relatively closed in \(\Omega\), then \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) in the sense of Definition A if and only if \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) in the sense of Definition B._ Theorem 1.1 is proved in Appendix A. There we also comment on the delicate reason why, in formulating (1.12), the partition \(\{T_{1},T_{2}\}\) must be allowed to depend on specific points \(x\in T[s]\). This would not seem necessary by looking at the simple situation depicted in Figure 1.4, but it is actually so when dealing with more complex situations; see Figure A.1. Homotopic spanning according to Definition B is clearly stable under modifications of \(S\) by \(\mathcal{H}^{n}\)-negligible sets, but there is more to it. Indeed, even a notion like "\(\mathcal{H}^{n}(S\cap T)>0\) for every \(T\in\mathcal{T}(\mathcal{C})\)" would be stable under modifications by \(\mathcal{H}^{n}\)-negligible sets, and would probably look more appealing in its simplicity. The catch, of course, is finding an extension of Definition A for which compactness theorems, like Theorem 1.4 below, hold true. This is evidently not the case, for example, if one tries to work with a notion like "\(\mathcal{H}^{n}(S\cap T)>0\) for every \(T\in\mathcal{T}(\mathcal{C})\)". The first key insight on Definition B is that, if restricted to Borel sets \(S\) that are locally \(\mathcal{H}^{n}\)-finite in \(\Omega\), then it can be reformulated in terms of partitions into indecomposable sets of finite perimeter. This is the content of the following theorem, whose case \(S=\varnothing\) corresponds to the standard decomposition theorem for sets of finite perimeter [1, Theorem 1]. For an illustration of this result, see Figure 1.5. Figure 1.5. An example of induced essential partition. The union of the boundaries of the \(U_{i}\)’s (inside of \(U\)) is contained in \(S\), and the containment may be strict. However, the part of \(S\) not contained in \(U\cap\bigcup_{i}\partial U_{i}\) is not such as to disconnect any of the \(U_{i}\)’s. In particular, each \(U_{i}\) is essentially connected. **Theorem 1.2** (Induced essential partitions (Section 2)).: _If \(U\subset\mathbb{R}^{n+1}\) is a bounded set of finite perimeter and \(S\subset\mathbb{R}^{n+1}\) is a Borel set with \(\mathcal{H}^{n}(S\cap U^{(1)})<\infty\), then there exists a unique\({}^{5}\) essential partition \(\{U_{i}\}_{i}\) of \(U\) induced by \(S\), that is to say, \(\{U_{i}\}_{i}\) is a countable partition of \(U\) modulo Lebesgue negligible sets such that, for each \(i\), \(S\) does not essentially disconnect \(U_{i}\)._ Footnote 5: Uniqueness is meant modulo relabeling and modulo Lebesgue negligible modifications of the \(U_{i}\)’s. 
Given \(U\) and \(S\) as in the statement of Theorem 1.2 we can define6 the **union of the** (reduced) **boundaries** (relative to \(U\)) **of the essential partition** induced by \(S\) on \(U\) by setting7

\[\operatorname{UBEP}(S;U)=U^{(1)}\cap\bigcup_{i}\partial^{*}U_{i}\,. \tag{1.13}\]

Footnote 6: Uniquely modulo \(\mathcal{H}^{n}\)-null sets thanks to Federer's theorem recalled in (1.37) below.

Footnote 7: Given a Borel set \(E\), we denote by \(\partial^{*}E\) its reduced boundary relative to the maximal open set \(A\) wherein \(E\) has locally finite perimeter.

Two properties of \(\operatorname{UBEP}\)'s which well illustrate the concept are: first, if \(\mathcal{R}(S)\) denotes the rectifiable part of \(S\), then \(\operatorname{UBEP}(S;U)\) is \(\mathcal{H}^{n}\)-equivalent to \(\operatorname{UBEP}(\mathcal{R}(S);U)\); second, if \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\), then \(\operatorname{UBEP}(S^{*};U)\) is \(\mathcal{H}^{n}\)-contained in \(\operatorname{UBEP}(S;U)\). Both properties are proved in Theorem 2.1 (an expanded restatement of Theorem 1.2).

We can use the concepts just introduced to provide an alternative and technically more workable characterization of homotopic spanning in the measure theoretic setting. This is the content of our first main result, which is illustrated in Figure 1.6.

**Theorem 1.3** (Homotopic spanning for locally \(\mathcal{H}^{n}\)-finite sets (Section 3)).: _If \(\mathbf{W}\subset\mathbb{R}^{n+1}\) is a closed set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), and \(S\subset\Omega\) is locally \(\mathcal{H}^{n}\)-finite in \(\Omega\), then \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) if and only if for every \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) we have that, for \(\mathcal{H}^{1}\)-a.e. \(s\in\mathbb{S}^{1}\),_

\[T[s]\text{ is }\mathcal{H}^{n}\text{-contained in }\operatorname{UBEP}(S\cup T[s];T)\,. \tag{1.14}\]

Figure 1.6. With \(\mathbf{W}\) consisting of two disks in the plane, and \(T\) a test tube for the \(\mathcal{C}\)-spanning condition: (a) \(S\) consists of a segment with a gap: since the gap is inside of \(T\), the essential partition of \(T\) induced by \(S\cup T[s]\) consists of only one set, \(U_{1}=T\), so that \(T\cap\partial^{*}U_{1}=\varnothing\) and (1.14) cannot hold; (b) \(S\) consists of a full segment; in this case (with the possible exception of a choice of \(s\) such that \(T[s]\) is contained in \(S\)), the essential partition of \(T\) induced by \(S\cup T[s]\) consists of two sets \(\{U_{1},U_{2}\}\), such that \(T[s]\subset T\cap\partial^{*}U_{1}\cap\partial^{*}U_{2}\); in this case (1.14) holds.

### Direct Method on generalized soap films and Gauss' capillarity

The most convenient setting for addressing the existence of minimizers in Gauss' capillarity theory is of course that of sets of finite perimeter [12, 13]. However, if the notion of homotopic spanning is limited to closed sets, as is the case when working with Definition A, then one cannot directly impose homotopic spanning on sets of finite perimeter, and this is the reason behind the specific formulation (1.8) of \(\psi(v)\) used in [19, 18]. Equipped with Definition B we can now formulate Gauss' capillarity theory with homotopic spanning conditions directly on sets of finite perimeter.
We shall actually consider _two_ different possible formulations,

\[\psi_{\rm bk}(v)=\inf\big\{\mathcal{H}^{n}(\Omega\cap\partial^{*}E):|E|=v\text{ and }\Omega\cap(\partial^{*}E\cup E^{(1)})\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\big\}\,,\]

\[\psi_{\rm bd}(v)=\inf\big\{\mathcal{H}^{n}(\Omega\cap\partial^{*}E):|E|=v\text{ and }\Omega\cap\partial^{*}E\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\big\}\,,\]

where the subscripts "bk" and "bd" indicate that the spanning is prescribed via the _bulk_ of \(E\) (that is, in measure theoretic terms, via the set \(\Omega\cap(\partial^{*}E\cup E^{(1)})\)) or via the (reduced) _boundary_ of \(E\). Inspired by the definition of the class \(\mathcal{K}\) introduced in (1.9), we also introduce the class \(\mathcal{K}_{\rm B}\) of **generalized soap films** defined by

\[\mathcal{K}_{\rm B}=\Big\{(K,E):\,K\text{ and }E\text{ are Borel subsets of }\Omega\,,\ E\text{ has locally finite perimeter in }\Omega\,,\text{ and }\partial^{*}E\cap\Omega\stackrel{{\mathcal{H}^{n}}}{{\subset}}K\Big\}\,. \tag{1.15}\]

Here the subscript "B" stands for "Borel", and \(\mathcal{K}_{\rm B}\) is a sort of measure-theoretic version of \(\mathcal{K}\). In the companion paper [16] the following relaxation formulas for the problems \(\psi_{\rm bk}\) and \(\psi_{\rm bd}\) are proved:

\[\psi_{\rm bk}(v)=\Psi_{\rm bk}(v)\,,\qquad\psi_{\rm bd}(v)=\Psi_{\rm bd}(v)\,,\qquad\forall v>0\,, \tag{1.16}\]

where the following minimization problems on \(\mathcal{K}_{\rm B}\) are introduced:

\[\Psi_{\rm bk}(v)=\inf\big\{\mathcal{F}_{\rm bk}(K,E):(K,E)\in\mathcal{K}_{\rm B}\,,|E|=v\,,K\cup E^{(1)}\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\big\}\,, \tag{1.17}\]

\[\Psi_{\rm bd}(v)=\inf\big\{\mathcal{F}_{\rm bd}(K,E):(K,E)\in\mathcal{K}_{\rm B}\,,|E|=v\,,K\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\big\}\,. \tag{1.18}\]

Here \(\mathcal{F}_{\rm bk}\) and \(\mathcal{F}_{\rm bd}\) are the relaxed energies defined, for \((K,E)\in\mathcal{K}_{\rm B}\) and \(A\subset\Omega\), as

\[\mathcal{F}_{\rm bk}(K,E;A)=2\,\mathcal{H}^{n}(A\cap K\cap E^{(0)})+\mathcal{H}^{n}(A\cap\partial^{*}E)\,, \tag{1.19}\]

\[\mathcal{F}_{\rm bd}(K,E;A)=2\,\mathcal{H}^{n}(A\cap K\setminus\partial^{*}E)+\mathcal{H}^{n}(A\cap\partial^{*}E)\,. \tag{1.20}\]

(We also set, for brevity, \(\mathcal{F}_{\rm bk}(K,E):=\mathcal{F}_{\rm bk}(K,E;\Omega)\) and \(\mathcal{F}_{\rm bd}(K,E):=\mathcal{F}_{\rm bd}(K,E;\Omega)\).) We refer to these problems, respectively, as the "bulk-spanning" and "boundary-spanning" Gauss' capillarity models. In this paper we shall directly work with these relaxed models. In particular, the validity of (1.16), although of definite conceptual importance, does not play any formal role in our deductions.
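The difference between the two relaxed energies can be checked on a hand-built configuration (an illustration based directly on (1.19)–(1.20); the specific sets are chosen here for convenience and play no role in the sequel): let \(E\) be an open ball with \(\operatorname{cl}E\subset\Omega\), let \(D\) and \(D^{\prime}\) be smooth \(n\)-dimensional disks with \(D\cap\operatorname{cl}E=\varnothing\) and \(D^{\prime}\subset E\), and let \(K=(\Omega\cap\partial E)\cup D\cup D^{\prime}\), so that \((K,E)\in\mathcal{K}_{\rm B}\). Since \(K\cap E^{(0)}=D\) and \(K\setminus\partial^{*}E=D\cup D^{\prime}\), we find

\[\mathcal{F}_{\rm bk}(K,E)=\mathcal{H}^{n}(\partial E)+2\,\mathcal{H}^{n}(D)\,,\qquad\mathcal{F}_{\rm bd}(K,E)=\mathcal{H}^{n}(\partial E)+2\,\mathcal{H}^{n}(D)+2\,\mathcal{H}^{n}(D^{\prime})\,:\]

film contained in the bulk of the liquid (here, \(D^{\prime}\)) is invisible to \(\mathcal{F}_{\rm bk}\), while it is charged with multiplicity \(2\) by \(\mathcal{F}_{\rm bd}\).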
A first remark concerning the advantage of working with the relaxed problems \(\Psi_{\rm bk}\) and \(\Psi_{\rm bd}\) rather than with their "classical" counterparts \(\psi_{\rm bk}\) and \(\psi_{\rm bd}\) is that while the latter two with \(v=0\) are trivial (sets with zero volume have zero distributional perimeter), the problems \(\Psi_{\rm bk}(0)\) and \(\Psi_{\rm bd}(0)\) are actually non-trivial, equal to each other, and amount to a measure-theoretic version of the Harrison–Pugh formulation of Plateau's problem \(\ell\) introduced in (1.7): more precisely, if we set

\[\ell_{\rm B}:=\frac{\Psi_{\rm bk}(0)}{2}=\frac{\Psi_{\rm bd}(0)}{2}=\inf\big\{\mathcal{H}^{n}(S):S\text{ is a Borel set }\mathcal{C}\text{-spanning }\mathbf{W}\big\}\,, \tag{1.21}\]

then, by Theorem 1.1, we evidently have \(\ell_{\rm B}\leq\ell\); and, as we shall prove in the course of our analysis, we actually have \(\ell=\ell_{\rm B}\) as soon as \(\ell<\infty\).

Our second main result concerns the applicability of the Direct Method on the competition classes of \(\Psi_{\mathrm{bk}}(v)\) and \(\Psi_{\mathrm{bd}}(v)\).

**Theorem 1.4** (Direct Method for generalized soap films (Sections 4 and 5)).: _Let \(\mathbf{W}\) be a closed set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) a spanning class for \(\mathbf{W}\), \(\{(K_{j},E_{j})\}_{j}\) a sequence in \(\mathcal{K}_{\mathrm{B}}\) such that \(\sup_{j}\mathcal{H}^{n}(K_{j})<\infty\), and let a Borel set \(E\) and Radon measures \(\mu_{\mathrm{bk}}\) and \(\mu_{\mathrm{bd}}\) in \(\Omega\) be such that \(E_{j}\stackrel{{\rm loc}}{{\to}}E\) and_

\[\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner(\mathcal{R}(K_{j})\cap E_{j}^{(0)})\stackrel{{*}}{{\rightharpoonup}}\mu_{\mathrm{bk}}\,,\]

\[\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner(\mathcal{R}(K_{j})\setminus\partial^{*}E_{j})\stackrel{{*}}{{\rightharpoonup}}\mu_{\mathrm{bd}}\,,\]

_as \(j\to\infty\)._
_Then:_

**(i) Lower semicontinuity:** _the sets_

\[K_{\mathrm{bk}}:=\big(\Omega\cap\partial^{*}E\big)\cup\Big\{x\in\Omega\cap E^{(0)}:\theta_{*}^{n}(\mu_{\mathrm{bk}})(x)\geq 2\Big\}\,,\]

\[K_{\mathrm{bd}}:=\big(\Omega\cap\partial^{*}E\big)\cup\Big\{x\in\Omega\setminus\partial^{*}E:\theta_{*}^{n}(\mu_{\mathrm{bd}})(x)\geq 2\Big\}\,,\]

_are such that \((K_{\mathrm{bk}},E),(K_{\mathrm{bd}},E)\in\mathcal{K}_{\mathrm{B}}\) and_

\[\mu_{\mathrm{bk}}\geq\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}\llcorner(K_{\mathrm{bk}}\cap E^{(0)})\,,\]

\[\mu_{\mathrm{bd}}\geq\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}\llcorner(K_{\mathrm{bd}}\setminus\partial^{*}E)\,,\]

_with_

\[\liminf_{j\to\infty}\mathcal{F}_{\mathrm{bk}}(K_{j},E_{j})\geq\mathcal{F}_{\mathrm{bk}}(K_{\mathrm{bk}},E)\,,\qquad\liminf_{j\to\infty}\mathcal{F}_{\mathrm{bd}}(K_{j},E_{j})\geq\mathcal{F}_{\mathrm{bd}}(K_{\mathrm{bd}},E)\,.\]

**(ii) Closure:** _we have that_

\[\text{if }K_{j}\cup E_{j}^{(1)}\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\text{ for every }j\,,\text{ then }K_{\mathrm{bk}}\cup E^{(1)}\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\,,\]

_and that_

\[\text{if }K_{j}\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\text{ for every }j\,,\text{ then }K_{\mathrm{bd}}\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\,.\]

The delicate part of Theorem 1.4 is proving the closure statements. This will require first extending the characterization of homotopic spanning from locally \(\mathcal{H}^{n}\)-finite sets to generalized soap films (Theorem 3.1), and then discussing the behavior, under weak-star convergence of the associated Radon measures, of the objects appearing in conditions like (1.14) (Theorem 4.1).

### Existence of minimizers in \(\Psi_{\mathrm{bk}}(v)\) and convergence to \(\ell\)

From this point onward, we focus our analysis on the bulk-spanning relaxation \(\Psi_{\mathrm{bk}}(v)\) of Gauss' capillarity. There are a few important reasons for this choice: (i) from the point of view of physical modeling, working with the boundary or with the bulk spanning conditions seems comparable; (ii) the fact that \(\Psi_{\mathrm{bk}}(0)=\Psi_{\mathrm{bd}}(0)\) suggests that, at small values of \(v\), the two problems should actually be equivalent (have the same infima and the same minimizers); (iii) the bulk-spanning variant is the one which is relevant for the approximation of Plateau-type singularities with solutions of the Allen–Cahn equations discussed in [14]; (iv) despite their similarities, carrying over the following theorems for both problems would require the repeated introduction of two versions of many arguments, with a significant increase in length, and possibly at the expense of clarity.

The following theorem provides the starting point in the study of \(\Psi_{\mathrm{bk}}(v)\).
**Theorem 1.5** (Existence of minimizers and vanishing volume limit for \(\Psi_{\rm bk}\) (Section 6)).: _If \(\mathbf{W}\) is a compact set in \(\mathbb{R}^{n+1}\) and \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\) such that \(\ell<\infty\), then_

\[\ell_{\rm B}=\ell\,, \tag{1.22}\]

_and, moreover:_

**(i) Existence of minimizers and Euler–Lagrange equation:** _for every \(v>0\) there exist minimizers \((K,E)\) of \(\Psi_{\rm bk}(v)\) such that \((K,E)\in\mathcal{K}\) and both \(E\) and \(K\) are bounded; moreover, there is \(\lambda\in\mathbb{R}\) such that_

\[\lambda\int_{\partial^{*}E}X\cdot\nu_{E}\,d\mathcal{H}^{n}=\int_{\partial^{*}E}{\rm div}^{K}\,X\,d\mathcal{H}^{n}+2\int_{K\cap E^{(0)}}{\rm div}^{K}\,X\,d\mathcal{H}^{n}\,, \tag{1.23}\]

_for every \(X\in C^{1}_{c}(\mathbb{R}^{n+1};\mathbb{R}^{n+1})\) with \(X\cdot\nu_{\Omega}=0\) on \(\partial\Omega\);_

**(ii) Regularity from the Euler–Lagrange equations:** _if \((K,E)\in\mathcal{K}\) is a minimizer of \(\Psi_{\rm bk}(v)\), then there is a closed set \(\Sigma\subset K\), with empty interior in \(K\), such that \(K\setminus\Sigma\) is a smooth hypersurface; moreover, \(K\setminus(\Sigma\cup\partial E)\) is a smooth minimal hypersurface, \(\Omega\cap\partial^{*}E\) is a smooth hypersurface with mean curvature constantly equal to \(\lambda\), and \(\mathcal{H}^{n}(\Sigma\setminus\partial E)=0\); in particular, \(\Omega\cap(\partial E\setminus\partial^{*}E)\) has empty interior in \(K\);_

**(iii) Convergence to the Plateau problem:** _if \((K_{j},E_{j})\) is a sequence of minimizers for \(\Psi_{\rm bk}(v_{j})\) with \(v_{j}\to 0^{+}\), then there exists a minimizer \(S\) of \(\ell\) such that, up to extracting subsequences, as Radon measures in \(\Omega\),_

\[\mathcal{H}^{n}\llcorner(\partial^{*}E_{j}\cap\Omega)+2\,\mathcal{H}^{n}\llcorner(K_{j}\cap E_{j}^{(0)})\stackrel{{*}}{{\rightharpoonup}}2\,\mathcal{H}^{n}\llcorner S\,, \tag{1.24}\]

_as \(j\to\infty\); in particular, \(\Psi_{\rm bk}(v)\to 2\,\ell=\Psi_{\rm bk}(0)\) as \(v\to 0^{+}\)._

The conclusions of Theorem 1.5 about \(\Psi_{\rm bk}(v)\) can be read in parallel to the conclusions about \(\psi(v)\) obtained in [10]. The crucial difference is that, in place of the "weak" minimality inequality (1.10), which in this context would be equivalent to \(\mathcal{F}_{\rm bk}(K,E)\leq\mathcal{H}^{n}(\Omega\cap\partial^{*}E^{\prime})\) for every competitor \(E^{\prime}\) in \(\psi_{\rm bk}(v)\), we now have the proper minimality inequality

\[\mathcal{F}_{\rm bk}(K,E)\leq\mathcal{F}_{\rm bk}(K^{\prime},E^{\prime}) \tag{1.25}\]

for every competitor \((K^{\prime},E^{\prime})\) in \(\Psi_{\rm bk}(v)\). Not only is the final conclusion stronger, but the proof is also entirely different: whereas [10] required the combination of a whole bestiary of specific competitors (like the cup, cone, and slab competitors described therein) with the full force of Preiss' theorem, the approach presented here seems more robust, as it does not exploit any specific geometry and is squarely rooted in the basic theory of sets of finite perimeter.
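It may help to read the Euler–Lagrange equation (1.23) in the simplest configuration (a formal illustration under additional assumptions of our choosing, not a statement contained in the theorem): if \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(\Omega\cap\partial^{*}E\), then \(\mathcal{H}^{n}(K\cap E^{(0)})=0\), the last integral in (1.23) drops out, and we are left with

\[\lambda\int_{\partial^{*}E}X\cdot\nu_{E}\,d\mathcal{H}^{n}=\int_{\partial^{*}E}{\rm div}^{K}\,X\,d\mathcal{H}^{n}\,,\qquad\forall X\in C^{1}_{c}(\Omega;\mathbb{R}^{n+1})\,,\]

which is the standard weak formulation of the fact that \(\Omega\cap\partial^{*}E\) has constant scalar mean curvature \(\lambda\) with respect to \(\nu_{E}\), consistently with statement (ii); the extra term in (1.23) similarly encodes, with multiplicity \(2\), the stationarity of the collapsed region \(K\cap E^{(0)}\), whose regular part is indeed a minimal hypersurface by statement (ii).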
### Equilibrium across transition lines in wet soap films

We now formalize the validation of (1.1) for soap films in the form of a sharp regularity theorem for minimizers \((K,E)\) of \(\Psi_{\rm bk}(v)\). The starting point to obtain this result is the connection between homotopic spanning and partitions into indecomposable sets of finite perimeter established in Theorem 1.3/Theorem 3.1. This connection hints at the possibility of showing that if \((K,E)\) is a minimizer of \(\Psi_{\rm bk}(v)\), then the elements \(\{U_{i}\}_{i}\) of the essential partition of \(\Omega\) induced by \(K\cup E^{(1)}\) are actually \((\Lambda,r_{0})\)-minimizers of the perimeter in \(\Omega\), i.e., there exist positive constants \(\Lambda\) and \(r_{0}\) such that

\[P(U_{i};B_{r}(x))\leq P(V;B_{r}(x))+\Lambda\,|V\Delta U_{i}|\,,\]

whenever \(V\Delta U_{i}\subset\subset\Omega\) and \({\rm diam}\,(V\Delta U_{i})<r_{0}\). The reason why this property is not obvious is that proving the \((\Lambda,r_{0})\)-minimality of \(U_{i}\) requires working with _arbitrary local competitors_ \(V\) of \(U_{i}\). However, when working with homotopic spanning conditions, checking the admissibility of competitors is the notoriously delicate heart of the matter - as reflected in the fact that only very special classes of competitors have been considered in the literature (see, e.g., the cup and cone competitors and the Lipschitz deformations considered in [10], the slab competitors and exterior cup competitors of [14], etc.).

The idea used to overcome this difficulty, which is illustrated in Figure 1.7, is the following. By Theorem 1.2, we can locally represent \(\mathcal{F}_{\mathrm{bk}}(K,E;B_{r}(x))\) as the sum of perimeters \(P(U_{i};B_{r}(x))+P(U_{j};B_{r}(x))+P(U_{k};B_{r}(x))\). Given a local competitor \(V_{i}\) for \(U_{i}\) we can carefully define a competitor \((K^{\prime},E^{\prime})\) so that the elements of the essential partition induced by \(K^{\prime}\cup(E^{\prime})^{(1)}\) in \(\Omega\), which can be used to represent \(\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{r}(x))\) as the sum \(P(V_{i};B_{r}(x))+P(V_{j};B_{r}(x))+P(V_{k};B_{r}(x))\), are such that

\[\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{r}(x))-\mathcal{F}_{\mathrm{bk}}(K,E;B_{r}(x))=P(V_{i};B_{r}(x))-P(U_{i};B_{r}(x))\,. \tag{1.26}\]

The trick is that by suitably defining \(K^{\prime}\) and \(E^{\prime}\) we can recover the entirety of \(B_{r}(x)\cap\partial^{*}U_{j}\) and \(B_{r}(x)\cap\partial^{*}U_{k}\) by attributing different parts of these boundaries to different terms in the representation of \(\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{r}(x))\). In other words, we are claiming that things can be arranged so that we still have

\[B_{r}(x)\cap\big(\partial^{*}U_{j}\cap\partial^{*}U_{k}\big)\overset{\mathcal{H}^{n}}{\subset}K^{\prime}\cup(E^{\prime})^{(1)}\,. \tag{1.27}\]

The fact that we have been able to preserve all but one reduced boundary among those of the elements of the essential partition of \(B_{r}(x)\) induced by \((K,E)\) is enough to show that \(K^{\prime}\cup(E^{\prime})^{(1)}\) is still \(\mathcal{C}\)-spanning \(\mathbf{W}\) by means of Theorem 1.3/Theorem 3.1. By the regularity theory of \((\Lambda,r_{0})\)-perimeter minimizers (see, e.g. [13, Part III]) we can deduce the \(C^{1,\alpha}\)-regularity of the elements of the partition (away from a closed singular set with area minimizing dimensional bounds).
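The sum-of-perimeters representation used in this argument can be verified directly (a minimal sketch, under simplifying assumptions of our choosing: \(B_{r}(x)\cap\partial^{*}E=\varnothing\), \(|E\cap B_{r}(x)|=0\), and \(K\cap B_{r}(x)\) \(\mathcal{H}^{n}\)-equivalent to \(B_{r}(x)\cap\bigcup_{i}\partial^{*}U_{i}\)): in this case \(B_{r}(x)\subset E^{(0)}\), so that \(\mathcal{F}_{\rm bk}(K,E;B_{r}(x))=2\,\mathcal{H}^{n}(K\cap B_{r}(x))\), and (1.46) applied in \(B_{r}(x)\) gives

\[\mathcal{F}_{\rm bk}(K,E;B_{r}(x))=2\,\mathcal{H}^{n}\Big(B_{r}(x)\cap\bigcup_{i}\partial^{*}U_{i}\Big)=\sum_{i}P(U_{i};B_{r}(x))\,,\]

each interface being counted once from each of the two chambers it bounds; compare with (1.47).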
The \(C^{1,\alpha}\)-regularity of the elements of the partition is already sufficient to prove the continuity of the normal across \(\Omega\cap(\partial E\setminus\partial^{*}E)\), but it also allows us to invoke the regularity theory for free boundaries in the double membrane problem, and to obtain the following sharp regularity result, with which we conclude our introduction.

Figure 1.7. On the left, a minimizer \((K,E)\) of \(\Psi_{\mathrm{bk}}(v)\), and the essential partition induced by \((K,E)\) in a ball \(B_{r}(x)\); the multiplicity \(2\) parts of \(K\cap B_{r}(x)\) are depicted with bold lines, to distinguish them from the multiplicity one parts in \(B_{r}(x)\cap\partial^{*}E\). On the right, a choice of \((K^{\prime},E^{\prime})\) that guarantees both the energy gap identity (1.26) and the \(\mathcal{H}^{n}\)-containment (1.27) needed to preserve homotopic spanning. The volume constraint can of course be restored as a lower order perimeter perturbation by taking a diffeomorphic image of \((K^{\prime},E^{\prime})\), an operation that trivially preserves homotopic spanning.

**Theorem 1.6** (Equilibrium along transition lines for soap films (Section 7)).: _If \(\mathbf{W}\) is a compact set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\) such that \(\ell<\infty\), \(v>0\), and \((K_{*},E_{*})\) is a minimizer of \(\Psi_{\mathrm{bk}}(v)\), then there is \((K,E)\in\mathcal{K}\) such that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(K_{*}\), \(E\) is Lebesgue equivalent to \(E_{*}\), \((K,E)\) is a minimizer of \(\Psi_{\rm bk}(v)\), both \(E\) and \(K\) are bounded, \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and_

\[K\cap E^{(1)}=\varnothing\,; \tag{1.28}\]

_in particular, \(K\) is the disjoint union of \(\Omega\cap\partial^{*}E\), \(\Omega\cap(\partial E\setminus\partial^{*}E)\), and \(K\setminus\partial E\)._

_Moreover, there is a closed set \(\Sigma\subset K\) with the following properties:_

**(i):** _\(\Sigma=\varnothing\) if \(1\leq n\leq 6\), \(\Sigma\) is locally finite in \(\Omega\) if \(n=7\), and \(\mathcal{H}^{s}(\Sigma)=0\) for every \(s>n-7\) if \(n\geq 8\);_

**(ii):** _\((\Omega\cap\partial^{*}E)\setminus\Sigma\) is a smooth hypersurface with constant mean curvature (denoted by \(\lambda\) if computed with respect to \(\nu_{E}\));_

**(iii):** _\((K\setminus\partial E)\setminus\Sigma\) is a smooth minimal hypersurface;_

**(iv):** _if \(\Omega\cap(\partial E\setminus\partial^{*}E)\setminus\Sigma\neq\varnothing\), then \(\lambda<0\); moreover, for every \(x\in\Omega\cap(\partial E\setminus\partial^{*}E)\setminus\Sigma\), \(K\) is the union of two \(C^{1,1}\)-hypersurfaces that detach tangentially at \(x\); more precisely, there are \(r>0\), \(\nu\in\mathbb{S}^{n}\), \(u_{1},u_{2}\in C^{1,1}(\mathbf{D}^{\nu}_{r}(x))\) such that_

\[u_{1}(x)=u_{2}(x)=0\,,\qquad u_{1}\leq u_{2}\text{ on }\mathbf{D}^{\nu}_{r}(x)\,,\]

_with \(\{u_{1}<u_{2}\}\) and \(\operatorname{int}\{u_{1}=u_{2}\}\) both non-empty, and_

\[\mathbf{C}^{\nu}_{r}(x)\cap K=\cup_{i=1,2}\big\{y+u_{i}(y)\,\nu:y\in\mathbf{D}^{\nu}_{r}(x)\big\}\,, \tag{1.29}\]

\[\mathbf{C}^{\nu}_{r}(x)\cap\partial^{*}E=\cup_{i=1,2}\big\{y+u_{i}(y)\,\nu:y\in\{u_{1}<u_{2}\}\big\}\,, \tag{1.30}\]

\[\mathbf{C}^{\nu}_{r}(x)\cap E=\big\{y+t\,\nu:y\in\mathbf{D}^{\nu}_{r}(x)\,,\ t\in\big(u_{1}(y),u_{2}(y)\big)\big\}\,. \tag{1.31}\]
_Here,_

\[\mathbf{D}^{\nu}_{r}(x)=x+\{y\in\nu^{\perp}:|y|<r\}\,,\]

\[\mathbf{C}^{\nu}_{r}(x)=x+\{y+t\,\nu:y\in\nu^{\perp}\,,|y|<r\,,|t|<r\}\,.\]

**(v):** _we have_

\[\Gamma:=\Omega\cap(\partial E\setminus\partial^{*}E)=\Gamma_{\rm reg}\cup\Gamma_{\rm sing}\,,\qquad\Gamma_{\rm reg}\cap\Gamma_{\rm sing}=\varnothing\,,\]

_where: \(\Gamma_{\rm reg}\) is relatively open in \(\Gamma\) and for every \(x\in\Gamma_{\rm reg}\) there are \(r>0\) and \(\beta\in(0,1)\) such that \(\Gamma_{\rm reg}\cap B_{r}(x)\) is a \(C^{1,\beta}\)-embedded \((n-1)\)-dimensional manifold; \(\Gamma_{\rm sing}\) is relatively closed in \(\Gamma\) and can be partitioned into a family \(\{\Gamma_{\rm sing}^{k}\}_{k=0}^{n-1}\) where, for each \(k\), \(\Gamma_{\rm sing}^{k}\) is locally \(\mathcal{H}^{k}\)-rectifiable in \(\Omega\)._

### Equilibrium across transition lines in wet foams

Based on the descriptions provided in [10, 11], an effective mathematical model for dry foams at equilibrium in a container is that of locally perimeter minimizing clusters, originating with different terminology in [1], and presented in [15, Part IV] as follows. Given an open set \(\Omega\subset\mathbb{R}^{n+1}\), a locally perimeter minimizing cluster is a finite Lebesgue partition \(\{U_{i}\}_{i}\) of \(\Omega\) into sets of finite perimeter such that, for some \(r_{0}>0\),

\[\sum_{i}P(U_{i};B)\leq\sum_{i}P(V_{i};B) \tag{1.32}\]

whenever \(B\subset\subset\Omega\) is a ball with radius less than \(r_{0}\), and \(\{V_{i}\}_{i}\) is a Lebesgue partition of \(\Omega\) with \(V_{i}\Delta U_{i}\subset\subset B\) and \(|V_{i}|=|U_{i}|\) for every \(i\). The previously cited results of Almgren and Taylor [1, 16] imply that, up to modification of the \(U_{i}\)'s by sets of zero Lebesgue measure, when \(n=2\), \(K=\Omega\cap\bigcup_{i}\partial U_{i}\) is a closed subset of \(\Omega\) that is locally \(C^{1,\alpha}\)-diffeomorphic to a plane, a \(Y\)-cone, or a \(T\)-cone; moreover, the part of \(K\) that is a surface is actually smooth and each of its connected components has constant mean curvature. Similar results hold when \(n=1\) (by elementary methods) and when \(n\geq 3\) (by exploiting [12]).

The theory for the relaxed capillarity energy \(\mathcal{F}_{\rm bk}\) developed in this paper provides an option for modeling wet foams. Again based on the descriptions provided in [11, 13], the following seems to be a reasonable model for wet foams at equilibrium in a container. Given an open set \(\Omega\subset\mathbb{R}^{n+1}\) we model wet foams by introducing the class

\[\mathcal{K}_{\rm foam}\]

of those \((K,E)\in\mathcal{K}_{\rm B}\) such that, for some positive constants \(\Lambda_{0}\) and \(r_{0}\),

\[\mathcal{F}_{\rm bk}(K,E;B)\leq\mathcal{F}_{\rm bk}(K^{\prime},E^{\prime};B)+\Lambda_{0}\,|E\Delta E^{\prime}| \tag{1.33}\]

whenever \(B\) is a ball compactly contained in \(\Omega\) and with radius less than \(r_{0}\), and \((K^{\prime},E^{\prime})\in\mathcal{K}_{\rm B}\) is such that \((K\Delta K^{\prime})\cup(E\Delta E^{\prime})\subset\subset B\) and there are finite Lebesgue partitions \(\{U_{i}\}_{i}\) and \(\{U_{i}^{\prime}\}_{i}\) of \(B\) induced, respectively, by \(K\cup E^{(1)}\) and by \(K^{\prime}\cup(E^{\prime})^{(1)}\), such that \(|U_{i}|=|U_{i}^{\prime}|\) for every \(i\). Notice that the inclusion of the term \(\Lambda_{0}\,|E\Delta E^{\prime}|\) in (1.33) allows for energy perturbations due to gravity or other forces.
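As a heuristic check of the consistency of this model with (1.32) (a formal computation anticipating Lemma 7.1, under the extra assumptions that \(|E|=|E^{\prime}|=0\) and that, inside \(B\), \(K\) and \(K^{\prime}\) are \(\mathcal{H}^{n}\)-equivalent to the unions of the reduced boundaries of \(\{U_{i}\}_{i}\) and \(\{U_{i}^{\prime}\}_{i}\)): since \(|E|=0\) forces \(E^{(0)}=\mathbb{R}^{n+1}\) and makes the term \(\Lambda_{0}\,|E\Delta E^{\prime}|\) vanish, (1.19) and (1.46) give

\[\mathcal{F}_{\rm bk}(K,E;B)=2\,\mathcal{H}^{n}(B\cap K)=\sum_{i}P(U_{i};B)\,,\]

and similarly for \((K^{\prime},E^{\prime})\), so that (1.33) reduces to \(\sum_{i}P(U_{i};B)\leq\sum_{i}P(U_{i}^{\prime};B)\) under the constraints \(|U_{i}|=|U_{i}^{\prime}|\), which is precisely (1.32).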
Lemma 7.1 will clarify that by taking \((K,E)\in\mathcal{K}_{\rm foam}\) with \(|E|=0\) we obtain a slightly more general notion of dry foam than the one proposed in (1.32).

**Theorem 1.7** (Equilibrium along transition lines for wet foams (Section 8)).: _If \(\Omega\subset\mathbb{R}^{n+1}\) is open and \((K_{*},E_{*})\in\mathcal{K}_{\rm foam}\), then there is \((K,E)\in\mathcal{K}\cap\mathcal{K}_{\rm foam}\) such that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(K_{*}\), \(E\) is Lebesgue equivalent to \(E_{*}\), \(K\cap E^{(1)}=\varnothing\), and such that, for every ball \(B\subset\subset\Omega\), the open connected components \(\{U_{i}\}_{i}\) of \(B\setminus(K\cup E)\) are such that each \(U_{i}\) is (Lebesgue equivalent to an) open set with \(C^{1,\alpha}\)-boundary in \(B\setminus\Sigma\). Here \(\Sigma\) is a closed subset of \(\Omega\) with \(\Sigma=\varnothing\) if \(1\leq n\leq 6\), \(\Sigma\) locally finite in \(\Omega\) if \(n=7\), and \(\mathcal{H}^{s}(\Sigma)=0\) for every \(s>n-7\) if \(n\geq 8\)._

### Organization of the paper

The sections of the paper contain the proofs of the main theorems listed above, as already specified in the statements. To these sections we add three appendices. In Appendix A, as already noted, we prove the equivalence of Definition A and Definition B. In Appendix B we prove that, with some regularity of \(\partial\Omega\), _every_ minimizing sequence of \(\Psi_{\rm bk}(v)\) converges to a minimizer, without need for modifications at infinity: strictly speaking, this is not needed to prove Theorem 1.5, but it is a result of its own conceptual interest, it will be crucial for the analysis presented in [12], and it is easily discussed here in light of the proof of Theorem 1.5. Finally, Appendix C contains an elementary lemma concerning the use of homotopic spanning in the plane which, to our knowledge, has not previously appeared in the literature.

### Acknowledgements

We thank Guido De Philippis, Darren King, Felix Otto, Antonello Scardicchio, Salvatore Stuvard, and Bozhidar Velichkov for several interesting discussions concerning these problems. FM has been supported by NSF Grant DMS-2247544. FM, MN, and DR have been supported by NSF Grant DMS-2000034 and NSF FRG Grant DMS-1854344. MN has been supported by NSF RTG Grant DMS-1840314.

### Notation

**Sets and measures:** We denote by \(B_{r}(x)\) (resp., \(B_{r}^{k}(x)\)) the open ball of center \(x\) and radius \(r\) in \(\mathbb{R}^{n+1}\) (resp., \(\mathbb{R}^{k}\)), and omit \((x)\) when \(x=0\). We denote by \({\rm cl}\,(X)\), \({\rm int}(X)\), and \(I_{r}(X)\) the closure, interior, and open \(r\)-neighborhood of \(X\subset\mathbb{R}^{k}\). We denote by \(\mathcal{L}^{n+1}\) and \(\mathcal{H}^{s}\) the Lebesgue measure and the \(s\)-dimensional Hausdorff measure on \(\mathbb{R}^{n+1}\), \(s\in[0,n+1]\). If \(E\subset\mathbb{R}^{k}\), we set \(|E|=\mathcal{L}^{k}(E)\) and \(\omega_{k}=|B_{1}^{k}|\). We denote by \(E^{(t)}\), \(t\in[0,1]\), the **points of density \(t\)** of a Borel set \(E\subset\mathbb{R}^{n+1}\), so that \(E\) is \(\mathcal{L}^{n+1}\)-equivalent to \(E^{(1)}\), and, for every pair of Borel sets \(E,F\subset\mathbb{R}^{n+1}\),

\[(E\cup F)^{(0)}=E^{(0)}\cap F^{(0)}\,. \tag{1.34}\]

We denote by \(\partial^{e}E=\mathbb{R}^{n+1}\setminus(E^{(0)}\cup E^{(1)})\) the **essential boundary** of \(E\).
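For instance (an elementary computation, included only to fix ideas): if \(H=\{x\in\mathbb{R}^{n+1}:x_{n+1}<0\}\) is an open half-space and \(x\in\partial H\), then \(|H\cap B_{r}(x)|=\omega_{n+1}\,r^{n+1}/2\) for every \(r>0\), so that

\[H^{(1)}=H\,,\qquad\partial H\subset H^{(1/2)}\,,\qquad\partial^{e}H=\partial H\,,\]

in agreement with the fact that \(\partial^{*}H=\partial H\) with \(\nu_{H}=e_{n+1}\) (see the notation for sets of finite perimeter below).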
Given Borel sets \(E_{j},E\subset\Omega\) we write

\[E_{j}\to E\,,\qquad E_{j}\stackrel{{\rm loc}}{{\to}}E\,,\]

when, respectively, \(|E_{j}\Delta E|\to 0\) or \(|(E_{j}\Delta E)\cap\Omega^{\prime}|\to 0\) for every \(\Omega^{\prime}\subset\subset\Omega\), as \(j\to\infty\). Given a Radon measure \(\mu\) on \(\mathbb{R}^{n+1}\), the \(k\)-dimensional lower density of \(\mu\) is the Borel function \(\theta_{*}^{k}(\mu):\mathbb{R}^{n+1}\to[0,\infty]\) defined by

\[\theta_{*}^{k}(\mu)(x)=\liminf_{r\to 0^{+}}\frac{\mu(\operatorname{cl}(B_{r}(x)))}{\omega_{k}\,r^{k}}\,.\]

We repeatedly use the fact that, if \(\theta_{*}^{k}(\mu)\geq\lambda\) on some Borel set \(K\) and for some \(\lambda\geq 0\), then \(\mu\geq\lambda\,\mathcal{H}^{k}\llcorner K\); see, e.g. [13, Theorem 6.4].

**Rectifiable sets:** Given an integer \(0\leq k\leq n+1\), a Borel set \(S\subset\mathbb{R}^{n+1}\) is **locally \(\mathcal{H}^{k}\)-rectifiable** in an open set \(\Omega\) if \(S\) is locally \(\mathcal{H}^{k}\)-finite in \(\Omega\) and \(S\) can be covered, modulo \(\mathcal{H}^{k}\)-null sets, by a countable union of Lipschitz images of \(\mathbb{R}^{k}\) in \(\mathbb{R}^{n+1}\). We say that \(S\) is **purely \(\mathcal{H}^{k}\)-unrectifiable** if \(\mathcal{H}^{k}(S\cap M)=0\) whenever \(M\) is a Lipschitz image of \(\mathbb{R}^{k}\) into \(\mathbb{R}^{n+1}\). Finally, we recall that if \(S\) is a locally \(\mathcal{H}^{k}\)-finite set in \(\Omega\), then there is a pair \((\mathcal{R}(S),\mathcal{P}(S))\) of Borel sets, uniquely determined modulo \(\mathcal{H}^{k}\)-null sets, and thus called, with a slight abuse of language, _the_ **rectifiable part** and _the_ **unrectifiable part** of \(S\), such that \(\mathcal{R}(S)\) is locally \(\mathcal{H}^{k}\)-rectifiable in \(\Omega\), \(\mathcal{P}(S)\) is purely \(\mathcal{H}^{k}\)-unrectifiable, and \(S=\mathcal{R}(S)\cup\mathcal{P}(S)\); see, e.g. [10, 13.1].

**Sets of finite perimeter:** If \(E\) is a Borel set in \(\mathbb{R}^{n+1}\) and \(D1_{E}\) is the distributional derivative of the characteristic function of \(E\), then we set \(\mu_{E}=-D1_{E}\). If \(A\) is the _largest open set_ of \(\mathbb{R}^{n+1}\) such that \(\mu_{E}\) is a Radon measure in \(A\) (of course it could be \(A=\varnothing\)), then \(E\) is of locally finite perimeter in \(A\) and the reduced boundary \(\partial^{*}E\) of \(E\) is defined as the set of those \(x\in A\cap\operatorname{spt}\mu_{E}\) such that \(\mu_{E}(B_{r}(x))/|\mu_{E}|(B_{r}(x))\) has a limit \(\nu_{E}(x)\in\mathbb{S}^{n}\) as \(r\to 0^{+}\). Moreover, we have the general identity (see [13, (12.12) & p. 168])

\[A\cap\operatorname{cl}(\partial^{*}E)=A\cap\operatorname{spt}\mu_{E}=\{x\in A:0<|E\cap B_{r}(x)|<|B_{r}(x)|\ \forall r>0\}\subset A\cap\partial E\,. \tag{1.35}\]

By De Giorgi's rectifiability theorem, \(\partial^{*}E\) is locally \(\mathcal{H}^{n}\)-rectifiable in \(A\), \(\mu_{E}=\nu_{E}\,\mathcal{H}^{n}\llcorner(A\cap\partial^{*}E)\) on \(A\), \(\partial^{*}E\subset A\cap E^{(1/2)}\subset A\cap\partial^{e}E\), and

\[(E-x)/r\stackrel{{\rm loc}}{{\to}}H_{E,x}:=\{y\in\mathbb{R}^{n+1}:y\cdot\nu_{E}(x)<0\}\,,\qquad\text{as }r\to 0^{+}\,. \tag{1.36}\]
By a result of Federer,

\[A\text{ is }\mathcal{H}^{n}\text{-contained in }E^{(0)}\cup E^{(1)}\cup\partial^{*}E\,; \tag{1.37}\]

in particular, \(\partial^{*}E\) is \(\mathcal{H}^{n}\)-equivalent to \(A\cap\partial^{e}E\), a fact frequently used in the following. By _Federer's criterion for finite perimeter_, if \(\Omega\) is open and \(E\) is a Borel set, then

\[\mathcal{H}^{n}(\Omega\cap\partial^{e}E)<\infty\qquad\Rightarrow\qquad E\text{ is of finite perimeter in }\Omega\,, \tag{1.38}\]

see [10, 4.5.11]. If \(E\) and \(F\) are of locally finite perimeter in an open set \(\Omega\), then so are \(E\cup F\), \(E\cap F\), and \(E\setminus F\), and by [13, Theorem 16.3], we have

\[\Omega\cap\partial^{*}(E\cup F)\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\big\{\big(E^{(0)}\cap\partial^{*}F\big)\cup\big(F^{(0)}\cap\partial^{*}E\big)\cup\{\nu_{E}=\nu_{F}\}\big\}\,, \tag{1.39}\]

\[\Omega\cap\partial^{*}(E\cap F)\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\big\{\big(E^{(1)}\cap\partial^{*}F\big)\cup\big(F^{(1)}\cap\partial^{*}E\big)\cup\{\nu_{E}=\nu_{F}\}\big\}\,, \tag{1.40}\]

\[\Omega\cap\partial^{*}(E\setminus F)\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\big\{\big(E^{(1)}\cap\partial^{*}F\big)\cup\big(F^{(0)}\cap\partial^{*}E\big)\cup\{\nu_{E}=-\nu_{F}\}\big\}\,, \tag{1.41}\]

where \(\{\nu_{E}=\pm\nu_{F}\}:=\{x\in\partial^{*}E\cap\partial^{*}F:\nu_{E}(x)=\pm\nu_{F}(x)\}\). By exploiting Federer's theorem (1.37), (1.39), (1.40), and (1.41) we can also deduce (the details are left to the reader)

\[(E\cap F)^{(0)}\stackrel{{\mathcal{H}^{n}}}{{=}}E^{(0)}\cup F^{(0)}\cup\{\nu_{E}=-\nu_{F}\}\,, \tag{1.42}\]

\[(E\setminus F)^{(0)}\stackrel{{\mathcal{H}^{n}}}{{=}}E^{(0)}\cup F^{(1)}\cup\{\nu_{E}=\nu_{F}\}\,. \tag{1.43}\]

Finally, combining (1.39), (1.41), and (1.43), we find

\[\partial^{*}(E\Delta F)\stackrel{{\mathcal{H}^{n}}}{{=}}(\partial^{*}E)\Delta(\partial^{*}F)\,. \tag{1.44}\]

**Partitions:** Given a Radon measure \(\mu\) on \(\mathbb{R}^{n+1}\) and a Borel set \(U\subset\mathbb{R}^{n+1}\) we say that \(\{U_{i}\}_{i}\) is a **\(\mu\)-partition of \(U\)** if \(\{U_{i}\}_{i}\) is an at most countable family of Borel subsets of \(U\) such that

\[\mu\Big(U\setminus\bigcup_{i}U_{i}\Big)=0\,,\qquad\mu(U_{i}\cap U_{j})=0\quad\forall i\neq j\,; \tag{1.45}\]

and we say that \(\{U_{i}\}_{i}\) is a **monotone \(\mu\)-partition** if, in addition to (1.45), we also have \(\mu(U_{i})\geq\mu(U_{i+1})\) for every \(i\). When \(\mu=\mathcal{L}^{n+1}\) we replace "\(\mu\)-partition" with "Lebesgue partition". When \(U\) is a set of finite perimeter in \(\mathbb{R}^{n+1}\), we say that \(\{U_{i}\}_{i}\) is a **Caccioppoli partition** of \(U\) if \(\{U_{i}\}_{i}\) is a Lebesgue partition of \(U\) and each \(U_{i}\) is a set of finite perimeter in \(\mathbb{R}^{n+1}\): in this case we have

\[\partial^{*}U\stackrel{{\mathcal{H}^{n}}}{{\subset}}\bigcup_{i}\partial^{*}U_{i}\,,\qquad 2\,\mathcal{H}^{n}\Big(U^{(1)}\cap\bigcup_{i}\partial^{*}U_{i}\Big)=\sum_{i}\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}U_{i})\,, \tag{1.46}\]

see, e.g., [1, Section 4.4]; moreover,

\[1\leq\#\big\{i:x\in\partial^{*}U_{i}\big\}\leq 2\,,\qquad\forall x\in\bigcup_{i}\partial^{*}U_{i}\,, \tag{1.47}\]

thanks to (1.36) and to the fact that there cannot be three disjoint half-spaces in \(\mathbb{R}^{n+1}\).

## 2. Induced essential partitions (Theorem 1.2)
Given a Borel set \(S\), we say that a Lebesgue partition \(\{U_{i}\}_{i}\) of \(U\) is **induced by** \(S\) if, for each \(i\),

\[U^{(1)}\cap\partial^{e}U_{i}\text{ is }\mathcal{H}^{n}\text{-contained in }S\,. \tag{2.1}\]

We say that \(\{U_{i}\}_{i}\) is _an_ **essential partition of \(U\) induced by** \(S\) if it is a Lebesgue partition of \(U\) induced by \(S\) such that, for each \(i\),

\[S\text{ does not essentially disconnect }U_{i}\,. \tag{2.2}\]

The next theorem, which expands the statement of Theorem 1.2, shows that \(\mathcal{H}^{n}\)-finite sets uniquely determine induced essential partitions on sets of finite perimeter.

**Theorem 2.1** (Induced essential partitions).: _If \(U\subset\mathbb{R}^{n+1}\) is a bounded set of finite perimeter and \(S\subset\mathbb{R}^{n+1}\) is a Borel set with \(\mathcal{H}^{n}(S\cap U^{(1)})<\infty\), then there exists an essential partition \(\{U_{i}\}_{i}\) of \(U\) induced by \(S\) such that each \(U_{i}\) is a set of finite perimeter and_

\[\sum_{i}P(U_{i};U^{(1)})\leq 2\,\mathcal{H}^{n}(S\cap U^{(1)})\,. \tag{2.3}\]

_Moreover:_

**(a):** _if \(S^{*}\) is a Borel set with \(\mathcal{H}^{n}(S^{*}\cap U^{(1)})<\infty\), \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\), \(\{V_{j}\}_{j}\) is a Lebesgue partition8 of \(U\) induced by \(S^{*}\), and \(\{U_{i}\}_{i}\) is the essential partition of \(U\) induced by \(S\), then_

Footnote 8: Notice that here we are not requiring that \(S^{*}\) does not essentially disconnect each \(V_{j}\), i.e., we are not requiring that \(\{V_{j}\}_{j}\) is an essential partition induced by \(S^{*}\). This detail will be useful in the applications of this theorem.

\[\bigcup_{j}\partial^{*}V_{j}\text{ is }\mathcal{H}^{n}\text{-contained in }\bigcup_{i}\partial^{*}U_{i}\,; \tag{2.4}\]

**(b):** _if \(S\) and \(S^{*}\) are \(\mathcal{H}^{n}\)-finite sets in \(U^{(1)}\), and either9 \(S^{*}=\mathcal{R}(S)\) or \(S^{*}\) is \(\mathcal{H}^{n}\)-equivalent to \(S\), then \(S\) and \(S^{*}\) induce \(\mathcal{L}^{n+1}\)-equivalent essential partitions of \(U\)._

Footnote 9: Here \(\mathcal{R}(S)\) denotes the \(\mathcal{H}^{n}\)-rectifiable part of \(S\).

Proof of Theorem 1.2.: Immediate consequence of Theorem 2.1.

The proof of Theorem 2.1 follows the main lines of the proof of [1, Theorem 1], which is indeed the case \(S=\varnothing\) of Theorem 2.1. We premise to this proof two lemmas that will find repeated applications in later sections too. To introduce the first lemma, we notice that while it is evident that if \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and \(S\) is \(\mathcal{H}^{n}\)-contained in some Borel set \(S^{*}\), then \(S^{*}\) is also \(\mathcal{C}\)-spanning \(\mathbf{W}\), it is not immediately clear whether the rectifiable part \(\mathcal{R}(S)\) of \(S\) (which may not be \(\mathcal{H}^{n}\)-equivalent to \(S\)) retains the \(\mathcal{C}\)-spanning property.

**Lemma 2.2**.: _If \(\mathbf{W}\) is compact, \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \(\mathcal{H}^{n}\llcorner S\) is a Radon measure in \(\Omega\), then \(\mathcal{R}(S)\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\)._
_Moreover, the sets \(T_{1}\) and \(T_{2}\) appearing in (1.12) are sets of finite perimeter._

Proof.: We make the following _claim_: if \(T\) is open, \(T^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}T\), \(\mathcal{H}^{n}\llcorner Z\) is a Radon measure in an open neighborhood of \(T\), and \(Z\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\), then

\[T_{1}\text{ and }T_{2}\text{ are of locally finite perimeter in }T\,, \tag{2.5}\]

\[\mathcal{R}(Z)\text{ essentially disconnects }T\text{ into }\{T_{1},T_{2}\}\,. \tag{2.6}\]

Indeed: since \(T\) is open, we trivially have \(T\subset T^{(1)}\), and hence \(T\) is \(\mathcal{H}^{n}\)-equivalent to \(T^{(1)}\). Taking also into account that \(Z\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\), we thus find

\[T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\stackrel{{\mathcal{H}^{n}}}{{=}}T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\stackrel{{\mathcal{H}^{n}}}{{\subset}}Z\cap T^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}Z\cap T\,.\]

By Federer's criterion (1.38) and the fact that \(\mathcal{H}^{n}\llcorner Z\) is a Radon measure in an open neighborhood of \(T\), we deduce (2.5). By Federer's theorem (1.37), \(\partial^{e}T_{i}\cap T\) is \(\mathcal{H}^{n}\)-equivalent to \(\partial^{*}T_{i}\cap T\) (\(i=1,2\)), which, combined with the \(\mathcal{H}^{n}\)-equivalence of \(T^{(1)}\) and \(T\), gives

\[\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{(1)}\stackrel{{\mathcal{H}^{n}}}{{=}}\partial^{*}T_{1}\cap\partial^{*}T_{2}\cap T\,.\]

Since \(\partial^{*}T_{1}\cap\partial^{*}T_{2}\cap T\) is \(\mathcal{H}^{n}\)-rectifiable and \(\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}Z\), we conclude that \(\mathcal{H}^{n}(\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{(1)}\cap\mathcal{P}(Z))=0\). Hence,

\[\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}\mathcal{R}(Z)\,,\]

and (2.6) follows.

To prove the lemma: let \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), and let \(J\subset\mathbb{S}^{1}\) be a set of full \(\mathcal{H}^{1}\)-measure such that (A.1) holds for every \(s\in J\), so that, for every \(s\in J\), one finds that for \(\mathcal{H}^{n}\)-a.e. \(x\in T[s]\) there is a partition \(\{T_{1},T_{2}\}\) of \(T\) with \(x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\) and such that \(S\cup T[s]\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\). By applying the claim with \(Z=S\cup T[s]\), we see that \(\mathcal{R}(S\cup T[s])\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\), and that \(T_{1}\) and \(T_{2}\) have locally finite perimeter in \(T\). On noticing that \(\mathcal{R}(S\cup T[s])\) is \(\mathcal{H}^{n}\)-equivalent to \(\mathcal{R}(S)\cup T[s]\), we conclude the proof.

The second lemma is just a simple compactness statement for finite perimeter partitions.

**Lemma 2.3** (Compactness for partitions by sets of finite perimeter).: _If \(U\) is a bounded open set and \(\{\{U_{i}^{j}\}_{i=1}^{\infty}\}_{j=1}^{\infty}\) is a sequence of Lebesgue partitions of \(U\) into sets of finite perimeter such that_

\[\sup_{j}\,\sum_{i=1}^{\infty}P(U_{i}^{j})<\infty\,, \tag{2.7}\]

_then, up to extracting a subsequence, there exists a Lebesgue partition \(\{U_{i}\}_{i\in\mathbb{N}}\) of \(U\) such that, for every \(i\) and every open \(A\subset U\),_

\[\lim_{j\to\infty}|U_{i}^{j}\Delta U_{i}|=0\,,\qquad P(U_{i};A)\leq\liminf_{j\to\infty}P(U_{i}^{j};A)\,. \tag{2.8}\]
_Moreover,_

\[\lim_{i\to\infty}\limsup_{j\to\infty}\sum_{k=i+1}^{\infty}|U_{k}^{j}|^{s}=0\,,\qquad\forall s\in\Big(\frac{n}{n+1},1\Big)\,. \tag{2.9}\]

Proof.: Up to a relabeling we can assume that each \(\{U_{i}^{j}\}_{i}\) is monotone. By (2.7) and the boundedness of \(U\), a diagonal argument combined with standard lower semicontinuity and compactness properties of sets of finite perimeter implies that we can find a not relabeled subsequence in \(j\) and a family \(\{U_{i}\}_{i}\) of Borel subsets of \(U\) with \(|U_{i}|\geq|U_{i+1}|\) and \(|U_{i}\cap U_{j}|=0\) for every \(i\neq j\), such that (2.8) holds. We are thus left to prove (2.9) and

\[\Big|U\setminus\bigcup_{i=1}^{\infty}U_{i}\Big|=0\,. \tag{2.10}\]

We start by noticing that for each \(i\) there is \(J(i)\in\mathbb{N}\) such that \(|U_{k}^{j}|\leq 2\,|U_{k}|\) for every \(j\geq J(i)\) and \(1\leq k\leq i\). Therefore, if \(k\geq i+1\) and \(j\geq J(i)\) we find \(|U_{k}^{j}|\leq|U_{i}^{j}|\leq 2\,|U_{i}|\), so that, if \(j\geq J(i)\),

\[\sum_{k=i+1}^{\infty}|U_{k}^{j}|^{s}\leq C(n)\,\sum_{k=i+1}^{\infty}P(U_{k}^{j})\,|U_{k}^{j}|^{s-(n/(n+1))}\leq C\,|U_{i}|^{s-(n/(n+1))}\,, \tag{2.11}\]

where we have also used the isoperimetric inequality and (2.7). Since \(|U_{i}|\to 0\) as \(i\to\infty\) (indeed, \(\sum_{i}|U_{i}|\leq|U|<\infty\)), (2.11) implies (2.9). To prove (2.10), we notice that if we set \(M=|U\setminus\cup_{i}U_{i}|\), and we assume that \(M\) is positive, then up to further increasing the value of \(J(i)\) we can require that

\[|U_{k}^{j}|\leq|U_{k}|+\frac{M}{2^{k+2}}\,,\qquad\forall 1\leq k\leq i\,,\,\forall j\geq J(i)\,, \tag{2.12}\]

(in addition to \(|U_{k}^{j}|\leq 2\,|U_{k}|\)). By (2.12) we obtain that, if \(j\geq J(i)\), then

\[|U|-\sum_{k=i+1}^{\infty}|U_{k}^{j}|=\sum_{k=1}^{i}|U_{k}^{j}|\leq\sum_{k=1}^{i}\Big(|U_{k}|+\frac{M}{2^{k+2}}\Big)\leq|U|-M+\sum_{k=1}^{i}\frac{M}{2^{k+2}}\leq|U|-\frac{M}{4}\,. \tag{2.13}\]

Rearranging (2.13) and using the sub-additivity of \(z\mapsto z^{s}\) we conclude that

\[(M/4)^{s}\leq\sum_{k=i+1}^{\infty}|U_{k}^{j}|^{s}\,.\]

We obtain a contradiction with \(M>0\) by letting \(i\to\infty\) and using (2.9).

Proof of Theorem 2.1.: Let \(\mathcal{U}(S)\) be the set of all the monotone Lebesgue partitions of \(U\) induced by \(S\). We notice that \(\mathcal{U}(S)\neq\varnothing\), since \(\mathcal{U}(S)\) contains the trivial partition with \(U_{1}=U\) and \(U_{i}=\varnothing\) if \(i\geq 2\). If \(U_{i}\in\{U_{i}\}_{i}\) for some \(\{U_{i}\}_{i}\in\mathcal{U}(S)\), then \(\partial^{e}U_{i}\) is \(\mathcal{H}^{n}\)-contained in \(\partial^{e}U\cup(U^{(1)}\cap S)\), which, by Federer's theorem (1.37) applied to \(U\) and by \(\mathcal{H}^{n}(S\cap U^{(1)})<\infty\), has finite \(\mathcal{H}^{n}\)-measure; it then follows from Federer's criterion (1.38) that \(U_{i}\) is a set of finite perimeter. We now fix \(s\in(n/(n+1),1)\), and consider a maximizing sequence \(\{\{U_{i}^{j}\}_{i}\}_{j}\) for

\[m=\sup\Big\{\sum_{i=1}^{\infty}|U_{i}|^{s}:\{U_{i}\}_{i}\in\mathcal{U}(S)\Big\}\,.\]

By standard arguments concerning reduced boundaries of disjoint sets of finite perimeter (see, e.g.
[14, Chapter 16]), we deduce from (2.1) that, for every \(j\),

\[\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner\partial^{*}U_{i}^{j}=\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i}^{j}\cap U^{(1)})+\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i}^{j}\cap\partial^{*}U)\leq 2\,\mathcal{H}^{n}\llcorner(S\cap U^{(1)})+\mathcal{H}^{n}\llcorner\partial^{*}U\,. \tag{2.14}\]

Also, due to the sub-additivity of \(z\mapsto z^{s}\) and the general fact that \(\partial^{e}(A\cap B)\subset\partial^{e}A\cup\partial^{e}B\), we can refine \(\{U_{i}^{j}\}_{i}\) by replacing each \(U_{i}^{j}\) with the disjoint family

\[\big\{U_{i}^{j}\cap U_{k}^{\ell}:k\geq 1\,,1\leq\ell<j\big\}\,,\]

thus obtaining a new sequence in \(\mathcal{U}(S)\) which is still maximizing for \(m\). As a consequence of this remark, we can assume without loss of generality that the considered maximizing sequence \(\{\{U_{i}^{j}\}_{i}\}_{j}\) for \(m\) has the additional property that

\[U\cap\bigcup_{i}\partial^{*}U_{i}^{j}\subset U\cap\bigcup_{i}\partial^{*}U_{i}^{j+1}\,,\qquad\forall j\,. \tag{2.15}\]

Thanks to (2.14) we can apply Lemma 2.3 and, up to extracting a subsequence in \(j\), we can find a Lebesgue partition \(\{U_{i}\}_{i\in\mathbb{N}}\) of \(U\) by sets of finite perimeter which satisfies (2.8) and (2.9). Moreover, after taking a further subsequence, we may assume that \(\mathcal{H}^{n}\llcorner\partial^{*}U_{i}^{j}\stackrel{{*}}{{\rightharpoonup}}\mu_{i}\) for some Radon measures \(\mu_{i}\) such that \(\mathcal{H}^{n}\llcorner\partial^{*}U_{i}\leq\mu_{i}\) [13, Prop. 12.15]. Therefore, by (2.8), Federer's theorem for reduced boundaries, and (2.1) for \(\{U_{i}^{j}\}_{i}\), we see that

\[\mathcal{H}^{n}\llcorner(\partial^{*}U)+\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i}\cap U^{(1)})=\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i})\leq\mathrm{w}^{*}\!\lim_{j\to\infty}\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i}^{j})\]

\[=\mathrm{w}^{*}\!\lim_{j\to\infty}\Big(\mathcal{H}^{n}\llcorner(\partial^{*}U)+\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i}^{j}\cap U^{(1)})\Big)\leq\mathcal{H}^{n}\llcorner(\partial^{*}U)+2\,\mathcal{H}^{n}\llcorner(S\cap U^{(1)})\,.\]

By subtracting \(\mathcal{H}^{n}\llcorner(\partial^{*}U)\) from both sides, we deduce (2.3).
We now show, first, that \(\{U_{i}\}_{i}\in\mathcal{U}(S)\) (i.e., we check the validity of (2.1) on \(\{U_{i}\}_{i}\)), and then that \(S\) does not essentially disconnect any of the \(U_{i}\)'s. This will complete the proof of the first part of the statement. To prove that \(U^{(1)}\cap\partial^{e}U_{i}\stackrel{{\mathcal{H}^{n}}}{{\subset}}S\), let us introduce the \(\mathcal{H}^{n}\)-rectifiable set \(S_{0}\) defined by

\[S_{0}=U^{(1)}\cap\bigcup_{i,j}\partial^{*}U_{i}^{j}\,. \tag{2.16}\]

By \(\{U_{i}^{j}\}_{i}\in\mathcal{U}(S)\), \(S_{0}\) is contained in \(S\) modulo \(\mathcal{H}^{n}\)-null sets. Therefore, in order to prove (2.1) it will be enough to show that

\[U^{(1)}\cap\partial^{*}U_{i}\stackrel{{\mathcal{H}^{n}}}{{\subset}}S_{0}\,,\qquad\forall i\,. \tag{2.17}\]

Should this not be the case, we would have \(\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}U_{i}\setminus S_{0})>0\) for some \(i\). We could thus pick \(x\in U^{(1)}\cap\partial^{*}U_{i}\) such that

\[\theta^{n}\big(\mathcal{H}^{n}\llcorner(U^{(1)}\cap\partial^{*}U_{i}\setminus S_{0})\big)(x)=1\,. \tag{2.18}\]

Since \(\theta^{n}(\mathcal{H}^{n}\llcorner\partial^{*}U_{i})(x)=1\) and \(S_{0}\subset U^{(1)}\), this implies \(\mathcal{H}^{n}(S_{0}\cap B_{r}(x))={\rm o}(r^{n})\), while \(\partial^{*}U_{i}\subset U_{i}^{(1/2)}\) gives \(|U_{i}\cap B_{r}(x)|=(\omega_{n+1}/2)\,r^{n+1}+{\rm o}(r^{n+1})\). Therefore, given \(\delta>0\) we can find \(r>0\) such that

\[\mathcal{H}^{n}(S_{0}\cap B_{r}(x))<\delta\,r^{n}\,,\qquad\min\big\{|U_{i}\cap B_{r}(x)|,|B_{r}(x)\setminus U_{i}|\big\}\geq\Big(\frac{\omega_{n+1}}{2}-\delta\Big)\,r^{n+1}\,,\]

and then exploit the relative isoperimetric inequality and (2.8) to conclude that

\[c(n)\,\Big[\Big(\frac{\omega_{n+1}}{2}-\delta\Big)\,r^{n+1}\Big]^{n/(n+1)}\leq P(U_{i};B_{r}(x))\leq\liminf_{j\to\infty}P(U_{i}^{j};B_{r}(x))\leq\mathcal{H}^{n}(S_{0}\cap B_{r}(x))\leq\delta\,r^{n}\,,\]

where in the next to last inequality we have used the definition (2.16) of \(S_{0}\). Choosing \(\delta>0\) small enough we reach a contradiction, thus deducing that \(\{U_{i}\}_{i}\in\mathcal{U}(S)\).

Taking into account the subadditivity of \(z\mapsto z^{s}\), in order to prove that \(S\) does not essentially disconnect any \(U_{i}\) it is sufficient to show that \(\{U_{i}\}_{i}\) is a maximizer of \(m\). To see this, we notice that \(|U_{i}^{j}\Delta U_{i}|\to 0\) as \(j\to\infty\) implies

\[m=\lim_{j\to\infty}\Big(\sum_{i=1}^{k}|U_{i}^{j}|^{s}+\sum_{i=k+1}^{\infty}|U_{i}^{j}|^{s}\Big)=\sum_{i=1}^{k}|U_{i}|^{s}+\lim_{j\to\infty}\sum_{i=k+1}^{\infty}|U_{i}^{j}|^{s}\,,\]

so that, letting \(k\to\infty\) and exploiting (2.9), we conclude that

\[m=\sum_{i=1}^{\infty}|U_{i}|^{s}\,. \tag{2.19}\]

This completes the proof of the first part of the statement (existence of essential partitions).

Let now \(S\), \(S^{*}\), \(\{U_{i}\}_{i\in I}\), and \(\{U_{j}^{*}\}_{j\in J}\) be as in statement (a) (with \(\{U_{j}^{*}\}_{j\in J}\) in place of \(\{V_{j}\}_{j}\)) - that is, \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\), \(\{U_{i}\}_{i\in I}\) is the essential partition of \(U\) induced by \(S\), and \(\{U_{j}^{*}\}_{j\in J}\) is a Lebesgue partition of \(U\) induced by \(S^{*}\) - and set \(Z=\cup_{i}\partial^{*}U_{i}\) and \(Z^{*}=\cup_{j}\partial^{*}U_{j}^{*}\). Arguing by contradiction with (2.4), let us assume \(\mathcal{H}^{n}(Z^{*}\setminus Z)>0\).
By the definition of Lebesgue partition we have that \(Z\setminus U^{(1)}\) and \(Z^{*}\setminus U^{(1)}\) are both \(\mathcal{H}^{n}\)-equivalent to \(\partial^{*}U\). Therefore we have \(\mathcal{H}^{n}((Z^{*}\setminus Z)\cap U^{(1)})>0\). Since \(U^{(1)}\) is \(\mathcal{H}^{n}\)-equivalent to the union of the sets \(\{U_{i}^{(1)}\cup\partial^{*}U_{i}\}_{i\in I}\), we can find \(i\in I\) and \(j\in J\) such that \(\mathcal{H}^{n}(U_{i}^{(1)}\cap\partial^{*}U_{j}^{*})>0\). This implies that both \((U_{i}\cap U_{j}^{*})^{(1/2)}\) and \((U_{i}\setminus U_{j}^{*})^{(1/2)}\) are non-empty, and thus that \(\{U_{j}^{*}\cap U_{i},U_{i}\setminus U_{j}^{*}\}\) is a non-trivial Borel partition of \(U_{i}\). Since

\[U_{i}^{(1)}\cap\partial^{e}(U_{j}^{*}\cap U_{i})\stackrel{{\mathcal{H}^{n}}}{{\subset}}U^{(1)}\cap\partial^{*}U_{j}^{*}\stackrel{{\mathcal{H}^{n}}}{{\subset}}S^{*}\,,\]

we conclude that \(S^{*}\) essentially disconnects \(U_{i}\), against the fact that \(S\) does not essentially disconnect \(U_{i}\) and the fact that \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\).

We finally prove statement (b). Let \(\{U_{i}\}_{i\in I}\) and \(\{U_{j}^{*}\}_{j\in J}\) be essential partitions of \(U\) induced by \(S\) and \(S^{*}\) respectively. Given \(i\in I\) such that \(|U_{i}|>0\), there is at least one \(j\in J\) such that \(|U_{i}\cap U_{j}^{*}|>0\). We _claim_ that it must be \(|U_{i}\setminus U_{j}^{*}|=0\). Should this not be the case, \(\partial^{*}U_{j}^{*}\) would essentially disconnect \(U_{i}\), thus implying that \(S^{*}\) (which \(\mathcal{H}^{n}\)-contains \(U^{(1)}\cap\partial^{*}U_{j}^{*}\)) essentially disconnects \(U_{i}\). Now, either because we are assuming that \(S^{*}\) is \(\mathcal{H}^{n}\)-equivalent to \(S\), or because we are assuming that \(S^{*}=\mathcal{R}(S)\) and we have Lemma 2.2, the fact that \(S^{*}\) essentially disconnects \(U_{i}\) implies that \(S\) essentially disconnects \(U_{i}\), a contradiction. Having proved the claim, for each \(i\in I\) with \(|U_{i}|>0\) there is a unique \(\sigma(i)\in J\) such that \(|U_{i}\Delta U_{\sigma(i)}^{*}|=0\). This completes the proof.

## 3. Homotopic spanning on generalized soap films (Theorem 1.3)

The goal of this section is to prove Theorem 1.3 and, actually, to obtain an even more general result. Let us recall that the objective of Theorem 1.3 is to reformulate the homotopic spanning property of a Borel set \(S\), in the case when \(S\) is locally \(\mathcal{H}^{n}\)-finite, in terms of unions of boundaries of induced essential partitions. We shall actually need this kind of characterization also for sets \(S\) of the more general form \(S=K\cup E^{(1)}\), where \((K,E)\in\mathcal{K}_{\rm B}\). For an illustration of the proposed characterization of homotopic spanning on this type of sets, see Figure 3.1.

**Theorem 3.1** (Homotopic spanning for generalized soap films).: _If \(\mathbf{W}\subset\mathbb{R}^{n+1}\) is a closed set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), \(K\) is a Borel set locally \(\mathcal{H}^{n}\)-finite in \(\Omega\), and \(E\) is of locally finite perimeter in \(\Omega\) such that \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(K\), then the set_

\[S=\mathcal{R}(K)\cup E^{(1)} \tag{3.1}\]

_is \(\mathcal{C}\)-spanning \(\mathbf{W}\) if and only if, for every \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and \(\mathcal{H}^{1}\)-a.e.
\(s\in\mathbb{S}^{1}\),_

\[T[s]\cap E^{(0)}\text{ is }\mathcal{H}^{n}\text{-contained in }\operatorname{UBEP}(K\cup T[s];T)\,. \tag{3.2}\]

**Remark 3.2**.: An immediate corollary of Theorem 3.1 is that if \(K\) is \(\mathcal{H}^{n}\)-finite and \((K,E)\in\mathcal{K}_{\rm B}\), then \(K\cup E^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) if and only if \(\mathcal{R}(K)\cup E^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\). Indeed, \(\mathcal{R}(K\cup T[s])=\mathcal{R}(K)\cup T[s]\), so that, by (1.13), \(\operatorname{UBEP}(K\cup T[s];T)=\operatorname{UBEP}(\mathcal{R}(K)\cup T[s];T)\).

Proof of Theorem 1.3.: This is Theorem 3.1 with \(E=\varnothing\).

Proof of Theorem 3.1.: _Step one_: We prove the following claim: if \(S\) essentially disconnects \(G\) into \(\{G_{1},G_{2}\}\) and \(H\subset G\) satisfies

\[\min\{|H\cap G_{1}|\,,\,|H\cap G_{2}|\}>0\,, \tag{3.3}\]

then \(S\) essentially disconnects \(H\) into \(\{H\cap G_{1},H\cap G_{2}\}\). Indeed, if \(x\in H^{(1)}\), then \(x\in\partial^{e}(H\cap G_{i})\) if and only if \(x\in\partial^{e}G_{i}\) (\(i=1,2\)). Hence \(H^{(1)}\cap\partial^{e}(G_{1}\cap H)\cap\partial^{e}(G_{2}\cap H)\subset G^{(1)}\cap\partial^{e}G_{1}\cap\partial^{e}G_{2}\), which, by (3.3) and our assumption on \(S\) and \(G\), gives the desired conclusion.

_Step two_: Taking from now on \(S\), \(K\), and \(E\) as in the statement, we preliminarily notice that if \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), \(s\in\mathbb{S}^{1}\), and \(\{U_{i}\}_{i}\) is the essential partition of \(T\) induced by \(\mathcal{R}(K)\cup T[s]\), then

\[T\cap\partial^{*}E\overset{\mathcal{H}^{n}}{\subset}T\cap\bigcup_{i}\partial^{*}U_{i}\,. \tag{3.4}\]

Indeed, since \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(\mathcal{R}(K)\), if a Borel set \(G\) is such that \(|G\cap E|\,|G\setminus E|>0\), then, by step one, \(\mathcal{R}(K)\) essentially disconnects \(G\). In particular, since, for each \(i\), \(\mathcal{R}(K)\cup T[s]\) does not essentially disconnect \(U_{i}\), we find that, for each \(i\),

\[\text{either }U_{i}^{(1)}\subset E^{(0)}\qquad\text{or }U_{i}^{(1)}\subset E^{(1)}\,. \tag{3.5}\]

Clearly, (3.5) immediately implies (3.4).

Figure 3.1. In panel (a) we have depicted a pair \((K,E)\) where \(E\) is a tube inside \(T\) and \(K\) consists of the union of the boundary of \(E\) and the _non_-spanning set \(S\) of Figure 1.6-(a). Notice that \(K\) is not \(\mathcal{C}\)-spanning, if we see things from the point of view of Definition A, since it misses every loop \(\gamma\) contained in the interior of \(E\); while, of course, \(K\cup E\) is \(\mathcal{C}\)-spanning because \(E\) has been added. In panel (b) we have depicted the essential partition \(\{U_{i}\}_{i=1}^{5}\) of \(T\) induced by \(K\cup T[s]\). Notice that \(E=U_{1}\), therefore no \(\partial^{*}U_{i}\cap\partial^{*}U_{j}\) \(\mathcal{H}^{1}\)-contains \(T[s]\cap E\). In particular, \(T[s]\cap E\) (which is \(\mathcal{H}^{1}\)-equivalent to \(T[s]\setminus E^{(0)}\)) is not \(\mathcal{H}^{1}\)-contained in \(\operatorname{UBEP}(K\cup T[s];T)\), and we see again, this time from the point of view of Definition B as reformulated in Theorem 1.3, that \(K\) is not \(\mathcal{C}\)-spanning.
As stated in Theorem 3.1, from the viewpoint of Definition B it is only the \(\mathcal{H}^{1}\)-containment of \(T[s]\cap E^{(0)}\) into \(\operatorname{UBEP}(K\cup T[s];T)\) that establishes the \(\mathcal{C}\)-spanning property of \(K\cup E\): and this \(\mathcal{H}^{1}\)-containment indeed holds, since \(T[s]\cap E^{(0)}=T[s]\setminus\operatorname{cl}(E)\) is \(\mathcal{H}^{1}\)-contained in the union of \(\partial^{*}U_{2}\cap\partial^{*}U_{3}\) and \(\partial^{*}U_{4}\cap\partial^{*}U_{5}\).

_Step three_: We prove the "only if" part of the statement, that is, given \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and \(s\in\mathbb{S}^{1}\), we assume that

\[\text{for $\mathcal{H}^{n}$-a.e. $x\in T[s]$}\,, \tag{3.6}\]
\[\exists\text{ a partition $\{T_{1},T_{2}\}$ of $T$ with $x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}$}\,,\]
\[\text{and s.t. $\mathcal{R}(K)\cup E^{(1)}\cup T[s]$ essentially disconnects $T$ into $\{T_{1},T_{2}\}$}\,,\]

and then prove that

\[T[s]\cap E^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $\bigcup_{i}\partial^{*}U_{i}$}\,, \tag{3.7}\]

where \(\{U_{i}\}_{i}\) is the essential partition of \(T\) induced by \(\mathcal{R}(K)\cup T[s]\). To this end, arguing by contradiction, we suppose that for some \(s\in\mathbb{S}^{1}\) there is \(G\subset T[s]\cap E^{(0)}\) with \(\mathcal{H}^{n}(G)>0\) and such that \(G\cap\bigcup_{i}\partial^{*}U_{i}=\varnothing\). In particular, there is an index \(i\) such that \(\mathcal{H}^{n}(G\cap U_{i}^{(1)})>0\), which, combined with (3.5) and \(G\subset E^{(0)}\), implies

\[U_{i}^{(1)}\subset E^{(0)}\,. \tag{3.8}\]

Now by (3.6) and \(\mathcal{H}^{n}(G\cap U_{i}^{(1)})>0\), we can choose \(x\in G\cap U_{i}^{(1)}\) such that \(\mathcal{R}(K)\cup E^{(1)}\cup T[s]\) essentially disconnects \(T\) into some \(\{T_{1},T_{2}\}\) with \(x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\). Then, \(\{U_{i}\cap T_{1},U_{i}\cap T_{2}\}\) is a non-trivial partition of \(U_{i}\), so that, by step one and (3.8), \(\mathcal{R}(K)\cup T[s]\) essentially disconnects \(U_{i}\) into \(\{U_{i}\cap T_{1},U_{i}\cap T_{2}\}\). This contradicts the defining property (2.2) of essential partitions, and concludes the proof of the "only if" part.

_Step four_: We prove the "if" part of the statement. More precisely, given \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and \(s\in\mathbb{S}^{1}\), we assume that (3.7) holds at \(s\), and then proceed to prove that (3.6) holds at \(s\). We first notice that, since \(\{E^{(1)},E^{(0)},\partial^{*}E\}\) is a partition of \(\Omega\) modulo \(\mathcal{H}^{n}\), it is enough to prove (3.6) for \(\mathcal{H}^{n}\)-a.e. \(x\in T[s]\cap(E^{(1)}\cup E^{(0)}\cup\partial^{*}E)\). If \(x\in T[s]\cap\partial^{*}E\), then by letting \(T_{1}=T\cap E\) and \(T_{2}=T\setminus E\) we obtain a partition of \(T\) such that \(x\in T\cap\partial^{*}E=T\cap\partial^{*}T_{1}\cap\partial^{*}T_{2}\subset\partial^{e}T_{1}\cap\partial^{e}T_{2}\), and such that \(\partial^{*}E\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\). Since \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(\mathcal{R}(K)\), we deduce (3.6). If \(x\in T[s]\cap E^{(0)}\), then, thanks to (3.7) and denoting by \(\{U_{i}\}_{i}\) the essential partition of \(T\) induced by \((\mathcal{R}(K)\cup T[s])\), there is an index \(i\) such that \(x\in T\cap\partial^{*}U_{i}\).
Setting \(T_{1}=U_{i}\) and \(T_{2}=T\setminus U_{i}\), we have that \(T\cap\partial^{*}U_{i}\) (which contains \(x\)) is in turn contained in \(\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T\). Since the latter set is non-empty, \(\{T_{1},T_{2}\}\) is a non-trivial partition of \(T\). Moreover, by definition of essential partition,

\[T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}=T\cap\partial^{e}U_{i}\overset{\mathcal{H}^{n}}{\subset}\mathcal{R}(K)\cup T[s]\,,\]

so that \(\mathcal{R}(K)\cup T[s]\) essentially disconnects \(T\), and (3.6) holds. Finally, if \(x\in T[s]\cap E^{(1)}\), we let \(s_{1}=s\), pick \(s_{2}\neq s\), denote by \(\{I_{1},I_{2}\}\) the partition of \(\mathbb{S}^{1}\) defined by \(\{s_{1},s_{2}\}\), and set

\[T_{1}=\Phi(I_{1}\times B_{1}^{n})\cap E\,,\qquad T_{2}=\Phi(I_{2}\times B_{1}^{n})\cup\,\left(\Phi(I_{1}\times B_{1}^{n})\setminus E\right).\]

This is a Borel partition of \(T\), and using the fact that \(x\in E^{(1)}\), we compute

\[|T_{1}\cap B_{r}(x)|=|\Phi(I_{1}\times B_{1}^{n})\cap E\cap B_{r}(x)|=|\Phi(I_{1}\times B_{1}^{n})\cap B_{r}(x)|+\mathrm{o}(r^{n+1})=\frac{|B_{r}(x)|}{2}+\mathrm{o}(r^{n+1})\,.\]

Therefore \(x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\), and by standard facts about reduced boundaries [13, Chapter 16],

\[\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{(1)}\overset{\mathcal{H}^{n}}{\subset}\partial^{*}T_{1}\cap T^{(1)}\overset{\mathcal{H}^{n}}{\subset}\left(\partial^{*}E\cup\left((T[s_{1}]\cup T[s_{2}])\cap E^{(1)}\right)\right)\cap T^{(1)}\,.\]

Since \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(\mathcal{R}(K)\), we have shown (3.6).

## 4. The fundamental closure theorem for homotopic spanning conditions

In Theorem 1.3 and Theorem 3.1 we have presented two reformulations of the homotopic spanning condition in terms of \(\mathcal{H}^{n}\)-containment into unions of boundaries of essential partitions. The goal of this section is to discuss the closure of such reformulations, and to provide a statement (Theorem 4.1 below) which will lie at the heart of the closure theorems proved in Section 5.

**Theorem 4.1** (Basic closure theorem for homotopic spanning).: _Let \(\mathbf{W}\subset\mathbb{R}^{n+1}\) be closed and let \(\mathcal{C}\) be a spanning class for \(\mathbf{W}\).
Let us assume that:_

**(a):** \(K_{j}\) _are_ \(\mathcal{H}^{n}\)_-finite Borel subsets of_ \(\Omega\) _with_ \(\mathcal{H}^{n}\mathop{\mathsf{L}}K_{j}\stackrel{{\ast}}{{\rightharpoonup}}\mu\) _as Radon measures in_ \(\Omega\)_;_

**(b):** \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\)_,_ \(\{s_{j}\}_{j}\) _is a sequence in_ \(\mathbb{S}^{1}\) _with_ \(s_{j}\to s_{0}\) _as_ \(j\to\infty\)_;_

**(c):** _if_ \(\{U_{i}^{j}\}_{i}\) _denotes the essential partition of_ \(T\) _induced by_ \(K_{j}\cup T[s_{j}]\)_, then there is a limit partition_ \(\{U_{i}\}_{i}\) _of_ \(\{U_{i}^{j}\}_{i}\) _in the sense of (_2.8_) in Lemma_ 2.3_._

_Under these assumptions, if \(\mu(T[s_{0}])=0\), \(F_{j},F\subset\Omega\) are sets of finite perimeter with \(F_{j}\to F\) as \(j\to\infty\) and such that, for every \(j\), \(\Omega\cap\partial^{\ast}F_{j}\) is \(\mathcal{H}^{n}\)-contained in \(K_{j}\) and_

\[T[s_{j}]\cap F_{j}^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $K_{j}^{\ast}$}\,, \tag{4.1}\]

_then_

\[T[s_{0}]\cap F^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $K^{\ast}$}\,, \tag{4.2}\]

_where we have set_

\[K_{j}^{\ast}=\operatorname{UBEP}(K_{j}\cup T[s_{j}];T)=T\cap\bigcup_{i}\partial^{\ast}U_{i}^{j}\,,\qquad K^{\ast}=T\cap\bigcup_{i}\partial^{\ast}U_{i}\,. \tag{4.3}\]

**Remark 4.2**.: Notice that \(\{U_{i}\}_{i}\) may fail to be the essential partition of \(T\) induced by \(K^{\ast}\) (which is the "optimal" choice of a Borel set potentially inducing \(\{U_{i}\}_{i}\) on \(T\)): indeed, some of the sets \(U_{i}\) may fail to be essentially connected, even though \(U_{i}^{j}\to U_{i}\) as \(j\to\infty\) and every \(U_{i}^{j}\), as an element of an essential partition, is necessarily essentially connected; see Figure 4.1.

Figure 4.1. The situation in the proof of Theorem 4.1 in the basic case when \(K_{j}=\Omega\cap\partial^{\ast}F_{j}\). The essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\) is denoted by \(\{U_{i}^{j}\}_{i}\). The limit partition \(\{U_{i}\}_{i}\) of \(\{U_{i}^{j}\}_{i}\) may fail to be the essential partition of \(T\) induced by \(K^{\ast}=T\cap\cup_{i}\partial^{\ast}U_{i}\), since some of the \(U_{i}\) may be essentially disconnected. In the picture, denoting by \(\{V_{k}\}_{k}\) the essential partition of \(T\) induced by \(K^{\ast}\), we have \(U_{5}=V_{5}\cup V_{6}=T\cap F\). We also notice, in reference to the notation set in (4.6), that \(X_{1}^{j}=\{5\}\) and \(X_{0}^{j}=\{1,2,3,4\}\).

Proof of Theorem 4.1.: _Step one_: We start by showing that, for each \(j\) and \(i\) such that \(|U_{i}^{j}|>0\), we have

\[\text{either}\quad(U_{i}^{j})^{(1)}\subset F_{j}^{(1)}\,,\qquad\text{or}\quad(U_{i}^{j})^{(1)}\subset F_{j}^{(0)}\,, \tag{4.4}\]

and for each \(i\) such that \(|U_{i}|>0\),

\[\text{either}\quad U_{i}^{(1)}\subset F^{(1)}\,,\qquad\text{or}\quad U_{i}^{(1)}\subset F^{(0)}\,. \tag{4.5}\]

Postponing for the moment the proof of (4.4) and (4.5), let us record several consequences of these inclusions.
First, if we set

\[X_{1}^{j}=\left\{i:|U_{i}^{j}|>0\,,\,(U_{i}^{j})^{(1)}\subset F_{j}^{(1)}\right\},\qquad X_{0}^{j}=\left\{i:|U_{i}^{j}|>0\,,\,(U_{i}^{j})^{(1)}\subset F_{j}^{(0)}\right\}, \tag{4.6}\]
\[X_{1}=\left\{i:|U_{i}|>0\,,\,U_{i}^{(1)}\subset F^{(1)}\right\},\qquad X_{0}=\left\{i:|U_{i}|>0\,,\,U_{i}^{(1)}\subset F^{(0)}\right\}, \tag{4.7}\]

then, thanks to (4.4) and (4.5), we have

\[X^{j}:=\left\{i:|U_{i}^{j}|>0\right\}=X_{0}^{j}\cup X_{1}^{j}\,,\qquad X:=\left\{i:|U_{i}|>0\right\}=X_{0}\cup X_{1}\,. \tag{4.8}\]

Combining (4.4) and (4.5) with \(F_{j}\to F\) and \(U_{i}^{j}\to U_{i}\), we find that for every \(i\in X\), there is \(J_{i}\in\mathbb{N}\) such that, for every \(m\in\{0,1\}\),

\[\text{if $i\in X_{m}$, then $i\in X_{m}^{j}$ for all $j\geq J_{i}$}. \tag{4.9}\]

Lastly, \(\left\{U_{i}^{j}\right\}_{i\in X_{1}^{j}}\) is a Lebesgue partition of \(T\cap F_{j}\), and thus, by Federer's theorem (1.37),

\[T\cap F_{j}^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}\bigcup_{i\in X_{1}^{j}}(U_{i}^{j})^{(1)}\cup\partial^{*}U_{i}^{j}\,,\qquad T\cap\partial^{*}F_{j}\stackrel{{\mathcal{H}^{n}}}{{\subset}}T\cap\bigcup_{i\in X_{1}^{j}}\partial^{*}U_{i}^{j}\ \subset\ T\cap K_{j}^{*}\,. \tag{4.10}\]

_To prove (4.4) and (4.5)_: Since \(\{U_{i}^{j}\}_{i}\) is the essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\) and \(K_{j}^{*}=\text{UBEP}(K_{j}\cup T[s_{j}];T)\), we have

\[K_{j}^{*}\text{ is $\mathcal{H}^{n}$-contained in $K_{j}\cup T[s_{j}]$}\,,\qquad\forall j\,, \tag{4.11}\]
\[K_{j}\cup T[s_{j}]\text{ does not essentially disconnect }U_{i}^{j}\,,\qquad\forall i,j\,. \tag{4.12}\]

Since \(\Omega\cap\partial^{*}F_{j}\) is \(\mathcal{H}^{n}\)-contained in \(K_{j}\cup T[s_{j}]\), the combination of (4.12) with Federer's theorem (1.37) gives (4.4). The combination of \(|U_{i}^{j}\Delta U_{i}|\to 0\) as \(j\to\infty\) with (4.4) gives (4.5).

_Step two_: We reduce the proof of (4.2) to that of

\[\mathcal{H}^{n}(U_{i}^{(1)}\cap T[s_{0}])=0\,,\qquad\forall i\in X_{0}\,. \tag{4.13}\]

Indeed, \(\{U_{i}^{(1)}:i\in X_{0}\}\cup\{F^{(0)}\cap\partial^{*}U_{i}:i\in X_{0}\}\) is an \(\mathcal{H}^{n}\)-partition of \(T\cap F^{(0)}\). In particular, \(T\cap F^{(0)}\) is \(\mathcal{H}^{n}\)-contained in \(\cup_{i\in X_{0}}U_{i}^{(1)}\cup\partial^{*}U_{i}\), so that, should (4.13) hold, then \(T[s_{0}]\cap F^{(0)}\) would be \(\mathcal{H}^{n}\)-contained in \(\cup_{i\in X_{0}}\partial^{*}U_{i}\), and thus in \(K^{*}\), thus proving (4.2).

_Step three_: We change variables from \(T\) to \(Y=\Phi^{-1}(T)=\mathbb{S}^{1}\times B_{1}^{n}\). We set \(Y[s]=\Phi^{-1}(T[s])=\{s\}\times B_{1}^{n}\) for the \(s\)-slice of \(Y\), and

Footnote 10: Here we identify \(\mathbb{S}^{1}\) with \(\mathbb{R}/(2\pi\mathbb{Z})\) and, with a slight abuse of notation, denote by \(\mathcal{L}^{n+1}\) the Lebesgue measure on \(\mathbb{S}^{1}\times B_{1}^{n}\), which we use to define sets of finite perimeter and points of density in \(\mathbb{S}^{1}\times B_{1}^{n}\).
\[Y_{i}=\Phi^{-1}(U_{i})\,,\qquad Y_{i}^{j}=\Phi^{-1}(U_{i}^{j})\,,\qquad W_{i}=Y\setminus Y_{i}\,,\qquad W_{i}^{j}=Y\setminus Y_{i}^{j}\,. \tag{4.14}\]

Since \(\Phi\) is a diffeomorphism, by [10, Lemma A.1] and the area formula we have that

\[\partial^{*}\Phi^{-1}(H)=\Phi^{-1}(\partial^{*}H)\,,\qquad(\Phi^{-1}(H))^{(m)}=\Phi^{-1}(H^{(m)})\,,\quad m\in\{0,1\}\,, \tag{4.15}\]

for every set of finite perimeter \(H\subset T\); in particular, setting

\[M_{j}=\Phi^{-1}(F_{j}\cap T)\,,\qquad M=\Phi^{-1}(F\cap T)\,,\]

by Federer's theorem (1.37), we see that (4.1) is equivalent to

\[Y[s_{j}]\text{ is $\mathcal{H}^{n}$-contained in $\bigcup_{i}\partial^{\ast}Y_{i}^{j}\cup M_{j}^{(1)}\cup\partial^{\ast}M_{j}$}\,. \tag{4.16}\]

By (4.10) and (4.15), we may rewrite (4.16) as

\[Y[s_{j}]\text{ is $\mathcal{H}^{n}$-contained in $\bigcup_{i\in\mathbb{N}}\partial^{\ast}Y_{i}^{j}\cup\bigcup_{i\in X_{1}^{j}}(Y_{i}^{j})^{(1)}$}\,. \tag{4.17}\]

Similarly, \(Y_{i}^{(1)}=\Phi^{-1}(U_{i}^{(1)})\) for every \(i\), and thus (4.13) is equivalent to

\[\mathcal{H}^{n}(Y_{i}^{(1)}\cap Y[s_{0}])=0\,,\qquad\forall i\in X_{0}\,. \tag{4.18}\]

We are thus left to prove that (4.17) implies (4.18). To this end, let us denote by \(\mathbf{p}\) the projection of \(Y=\mathbb{S}^{1}\times B_{1}^{n}\) onto \(B_{1}^{n}\), and consider the sets

\[G_{i}=\mathbf{p}\big{(}Y_{i}^{(1)}\cap Y[s_{0}]\big{)}\,,\qquad G_{i}^{\ast}=G^{\ast}\cap G_{i}\,,\]

corresponding to the set \(G^{\ast}\subset B_{1}^{n}\) with \(\mathcal{H}^{n}(B_{1}^{n}\setminus G^{\ast})=0\) defined as follows: (i) denoting by \(H_{y}=\{s\in\mathbb{S}^{1}:(s,y)\in H\}\) the "circular slice of \(H\subset Y\) above \(y\)", if \(y\in G^{\ast}\), \(j\in\mathbb{N}\), \(k\) is an index for the partitions \(\{Y_{k}\}_{k}\) and \(\{Y_{k}^{j}\}\), and \(H\in\{Y_{k},W_{k},Y_{k}^{j},W_{k}^{j}\}\), then \(H_{y}\) is a set of finite perimeter in \(\mathbb{S}^{1}\) with

\[H_{y}\stackrel{{\mathcal{H}^{1}}}{{=}}(H_{y})^{(1)_{\mathbb{S}^{1}}}\,,\qquad\partial^{\ast}_{\mathbb{S}^{1}}(H_{y})\stackrel{{\mathcal{H}^{0}}}{{=}}(\partial^{\ast}H)_{y}\,, \tag{4.19}\]

(and thus with \(\partial^{\ast}_{\mathbb{S}^{1}}(H_{y})=(\partial^{\ast}H)_{y}\), since \(\mathcal{H}^{0}\)-equivalent sets are actually equal); this is a standard consequence of the slicing theory for sets of finite perimeter, see, e.g., [1, Theorem 2.4] or [16, Remark 18.13]; (ii) for every \(y\in G^{\ast}\) and \(j\in\mathbb{N}\),

\[(s_{j},y)\in\bigcup_{k\in\mathbb{N}}\partial^{\ast}Y_{k}^{j}\cup\bigcup_{k\in X_{1}^{j}}(Y_{k}^{j})^{(1)}\,; \tag{4.20}\]

this is immediate from (4.17); (iii) for every \(y\in G^{\ast}\), and \(k\) an index for the partitions \(\{Y_{k}\}_{k}\) and \(\{Y_{k}^{j}\}\),

\[\lim_{j\to\infty}\mathcal{H}^{1}((Y_{k})_{y}\Delta(Y_{k}^{j})_{y})=0\,; \tag{4.21}\]

this is immediate from Fubini's theorem and \(Y_{k}^{j}\to Y_{k}\) as \(j\to\infty\); (iv) for every \(y\in G^{\ast}\),

\[\sum_{k}\mathcal{H}^{0}((\partial^{\ast}Y_{k}^{j})_{y})<\infty\,; \tag{4.22}\]

indeed, by applying in the order the coarea formula, the area formula and (2.3) we find

\[\sum_{k}\int_{B_{1}^{n}}\mathcal{H}^{0}((\partial^{\ast}Y_{k}^{j})_{y})\,d\mathcal{H}^{n}\leq\sum_{k}P(Y_{k}^{j};Y)\leq(\text{Lip}\Phi^{-1})^{n}\,\sum_{k}P(U_{k}^{j};T)\leq 2\,(\text{Lip}\Phi^{-1})^{n}\,\mathcal{H}^{n}(K_{j}\cup T[s_{j}])\,.\]

Now, let us pick \(y\in G_{i}^{\ast}\). Since \(y\in G_{i}\) implies \((s_{0},y)\in Y_{i}^{(1)}\), and \(Y_{i}^{(1)}\cap\partial^{\ast}Y_{i}=\varnothing\), we find \((s_{0},y)\not\in\partial^{\ast}Y_{i}\), i.e.
\(s_{0}\not\in(\partial^{\ast}Y_{i})_{y}\). By \(y\in G^{\ast}\), we have \((\partial^{\ast}Y_{i})_{y}=\partial^{\ast}_{\mathbb{S}^{1}}(Y_{i})_{y}\), so that

\[s_{0}\not\in\partial^{\ast}_{\mathbb{S}^{1}}(Y_{i})_{y}\,. \tag{4.23}\]

Since \((Y_{i})_{y}\) has finite perimeter, \(\partial^{\ast}_{\mathbb{S}^{1}}(Y_{i})_{y}\) is a finite set, and so (4.23) implies the existence of an open interval \(\mathcal{A}_{y}\subset\mathbb{S}^{1}\), containing \(s_{0}\), \(\mathcal{H}^{1}\)-contained either in \((Y_{i})_{y}\) or in \((W_{i})_{y}\), and such that

\[\partial_{\mathbb{S}^{1}}\mathcal{A}_{y}\subset(\partial^{\ast}Y_{i})_{y}=\partial^{\ast}_{\mathbb{S}^{1}}(W_{i})_{y}\,. \tag{4.24}\]

We claim that there is \(G_{i}^{**}\subset G_{i}^{*}\), with full \(\mathcal{H}^{n}\)-measure in \(G_{i}^{*}\) (and thus in \(G_{i}\)), such that

\[\mathcal{A}_{y}\text{ is $\mathcal{H}^{1}$-contained in $(Y_{i})_{y}$}\,,\qquad\forall y\in G_{i}^{**}\,. \tag{4.25}\]

Indeed, let us consider the countable decomposition \(\{G_{i,m}^{*}\}_{m=1}^{\infty}\) of \(G_{i}^{*}\) given by

\[G_{i,m}^{*}=\Big{\{}y\in G_{i}^{*}:\text{dist}\big{(}\{s_{0}\},\partial_{\mathbb{S}^{1}}\mathcal{A}_{y}\big{)}\in\big{[}1\big{/}(m+1),1\big{/}m\big{)}\Big{\}}\subset B_{1}^{n}\,,\]

and let

\[Z_{i,m}=\big{\{}y\in G_{i,m}^{*}:\mathcal{A}_{y}\text{ is $\mathcal{H}^{1}$-contained in $(W_{i})_{y}$}\big{\}}\,.\]

If \(\mathcal{H}^{n}(Z_{i,m})>0\), then there is \(y^{*}\in Z_{i,m}^{(1)}\), so that \(\mathcal{H}^{n}(Z_{i,m}\cap B_{r}^{n}(y^{*}))=\omega_{n}\,r^{n}+\text{o}(r^{n})\). Therefore, if \(r<1/(m+1)\) and \(B_{r}^{1}(s_{0})\) denotes the open interval of center \(s_{0}\) and radius \(r\) inside \(\mathbb{S}^{1}\), then

\[\mathcal{L}^{n+1}\big{(}Y_{i}\cap\big{(}B_{r}^{1}(s_{0})\times B_{r}^{n}(y^{*})\big{)}\big{)}=\int_{B_{r}^{n}(y^{*})}\mathcal{H}^{1}(B_{r}^{1}(s_{0})\cap(Y_{i})_{y})\,d\mathcal{H}_{y}^{n}\]
\[=\int_{Z_{i,m}\cap B_{r}^{n}(y^{*})}\mathcal{H}^{1}(B_{r}^{1}(s_{0})\cap(Y_{i})_{y})\,d\mathcal{H}_{y}^{n}+\text{o}(r^{n+1})=\text{o}(r^{n+1})\,,\]

where in the last identity we have used the facts that \(y\in Z_{i,m}\cap B_{r}^{n}(y^{*})\), \(s_{0}\in\mathcal{A}_{y}\), and \(r<1/(m+1)\) to conclude that \(B_{r}^{1}(s_{0})\) is \(\mathcal{H}^{1}\)-contained in \((W_{i})_{y}\); in particular, \((s_{0},y^{*})\in Y_{i}^{(0)}\), against the fact that \(Z_{i,m}\subset G_{i}\,(=\mathbf{p}(Y[s_{0}]\cap Y_{i}^{(1)}))\). We have thus proved that each \(Z_{i,m}\) is \(\mathcal{H}^{n}\)-negligible, and therefore that there is \(G_{i}^{**}\subset G_{i}^{*}\), \(\mathcal{H}^{n}\)-equivalent to \(G_{i}^{*}\), such that (4.25) holds true.

Having proved (4.25), we now notice that, by (4.20), \(y\in G_{i}^{*}\) implies

\[s_{j}\in\bigcup_{k\in\mathbb{N}}(\partial^{*}Y_{k}^{j})_{y}\cup\bigcup_{k\in X_{1}^{j}}\big{(}(Y_{k}^{j})^{(1)}\big{)}_{y}=\bigcup_{k}\partial_{\mathbb{S}^{1}}^{*}(Y_{k}^{j})_{y}\cup\bigcup_{k\in X_{1}^{j}}\big{(}(Y_{k}^{j})_{y}\big{)}^{(1)_{\mathbb{S}^{1}}}\,.
\tag{4.26}\]

If (4.26) holds because \(s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k}^{j})_{y}\) for some \(k\), then, thanks to (4.22), there must be \(k^{\prime}\neq k\) such that \(s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k^{\prime}}^{j})_{y}\) too; since either \(k\) or \(k^{\prime}\) must be different from \(i\), we conclude that \(s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_{y}\) for some \(k(j)\neq i\); if, instead, (4.26) holds because \(s_{j}\in\big{(}(Y_{k}^{j})_{y}\big{)}^{(1)_{\mathbb{S}^{1}}}\) for some \(k\in X_{1}^{j}\), then we can recall that, thanks to (4.9), \(i\in X_{0}^{j}\) for every \(j\geq J_{i}\), and thus \(i\neq k\); in summary, for each \(y\in G_{i}^{*}\),

\[\text{if $j\geq J_{i}$, then $\exists k(j)\neq i$ s.t. $s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_{y}\cup\big{(}(Y_{k(j)}^{j})_{y}\big{)}^{(1)_{\mathbb{S}^{1}}}$}\,. \tag{4.27}\]

With the goal of obtaining a lower bound on the relative perimeters of the sets \(Y_{i}^{j}\) in a neighborhood of \(G_{i}\) (see (4.31) below), we now consider \(y\in G_{i}^{**}\), and pick \(r>0\) such that \(\text{cl}\,B_{r}^{1}(s_{0})\subset\mathcal{A}_{y}\). Correspondingly, since \(s_{j}\to s_{0}\) and (4.27) holds, we can find \(J^{*}=J^{*}(i,y,r)\geq J_{i}\) such that, for \(j\geq J^{*}\),

\[s_{j}\in B_{r}^{1}(s_{0})\cap\big{[}\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_{y}\cup\big{(}(Y_{k(j)}^{j})_{y}\big{)}^{(1)_{\mathbb{S}^{1}}}\big{]}\subset\mathcal{A}_{y}\cap\big{[}\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_{y}\cup\big{(}(Y_{k(j)}^{j})_{y}\big{)}^{(1)_{\mathbb{S}^{1}}}\big{]}\,. \tag{4.28}\]

Now, by (4.21), \(k(j)\neq i\), and \(\mathcal{A}_{y}\overset{\mathcal{H}^{1}}{\subset}(Y_{i})_{y}\), we have

\[\lim_{j\to\infty}\mathcal{H}^{1}(\mathcal{A}_{y}\cap(Y_{k(j)}^{j})_{y})=0\,. \tag{4.29}\]

Since, by (4.19), \((Y_{k(j)}^{j})_{y}\) is \(\mathcal{H}^{1}\)-equivalent to a finite union of intervals, (4.28) implies the existence of an open interval \(\mathcal{I}_{y}^{j}\) such that

\[s_{j}\in\text{cl}\,_{\mathbb{S}^{1}}\mathcal{I}_{y}^{j}\,,\qquad\mathcal{I}_{y}^{j}\overset{\mathcal{H}^{1}}{\subset}(Y_{k(j)}^{j})_{y}\,,\qquad\partial_{\mathbb{S}^{1}}\mathcal{I}_{y}^{j}\subset(\partial^{*}Y_{k(j)}^{j})_{y}\subset(\partial^{*}W_{i}^{j})_{y}\,, \tag{4.30}\]

which, due to (4.28) and (4.29), must satisfy

\[\lim_{j\to\infty}\operatorname{diam}\big{(}\mathcal{I}^{j}_{y}\big{)}=0\,.\]

In particular,

\[\partial_{\mathbb{S}^{1}}\,\mathcal{I}^{j}_{y}\subset B^{1}_{r}(s_{0})\,,\qquad\forall j\geq J^{*}\,,\]

and thus, by the last inclusion in (4.30),

\[\mathcal{H}^{0}\big{(}B^{1}_{r}(s_{0})\cap\partial_{\mathbb{S}^{1}}^{*}(W^{j}_{i})_{y}\big{)}\geq\mathcal{H}^{0}(B^{1}_{r}(s_{0})\cap\partial_{\mathbb{S}^{1}}\mathcal{I}^{j}_{y})\geq 2\,,\]

whenever \(j\geq J^{*}\).
Since \(y\in G^{**}_{i}\) and \(r>0\) were arbitrary, by the coarea formula and Fatou's lemma,

\[\liminf_{j\to\infty}P(W^{j}_{i};B^{1}_{r}(s_{0})\times G^{**}_{i})\geq\liminf_{j\to\infty}\int_{G^{**}_{i}}\mathcal{H}^{0}\big{(}B^{1}_{r}(s_{0})\cap\partial_{\mathbb{S}^{1}}^{*}(W^{j}_{i})_{y}\big{)}\,d\mathcal{H}^{n}_{y}\geq 2\,\mathcal{H}^{n}(G^{**}_{i})=2\,\mathcal{H}^{n}(G_{i})\,. \tag{4.31}\]

Now, since \(\partial^{*}W^{j}_{i}=\partial^{*}Y^{j}_{i}=\Phi^{-1}(\partial^{*}U^{j}_{i})\), by (4.11) we have

\[Y\cap\bigcup_{i}\partial^{*}W^{j}_{i}\text{ is $\mathcal{H}^{n}$-contained in $Y[s_{j}]\cup\Phi^{-1}\big{(}T\cap K_{j}\big{)}$}\,,\]

which implies, for every \(j\) large enough to have \(s_{j}\in B^{1}_{r}(s_{0})\),

\[P(W^{j}_{i};B^{1}_{r}(s_{0})\times G^{**}_{i})\leq\mathcal{H}^{n}(G^{**}_{i})+\mathcal{H}^{n}\big{(}\Phi^{-1}(T\cap K_{j})\cap(B^{1}_{r}(s_{0})\times B^{n}_{1})\big{)}\]
\[\leq\mathcal{H}^{n}(G_{i})+\operatorname{Lip}(\Phi^{-1})^{n}\,\mathcal{H}^{n}\big{(}K_{j}\cap\Phi(B^{1}_{r}(s_{0})\times B^{n}_{1})\big{)}\,. \tag{4.32}\]

By combining (4.31) with (4.32) we conclude that for every \(r>0\)

\[\mathcal{H}^{n}(G_{i})\leq\operatorname{Lip}(\Phi^{-1})^{n}\,\mu\big{(}\Phi(\operatorname{cl}\left(B^{1}_{r}(s_{0})\right)\times B^{n}_{1})\big{)}\,. \tag{4.33}\]

By \(\mu(T[s_{0}])=0\), if we let \(r\to 0^{+}\) in (4.33), we conclude that \(\mathcal{H}^{n}(G_{i})=0\). Now, since \(G_{i}=\operatorname{\mathbf{p}}\bigl{(}Y^{(1)}_{i}\cap Y[s_{0}]\bigr{)}\), we have

\[\mathcal{H}^{n}\big{(}Y^{(1)}_{i}\cap Y[s_{0}]\big{)}=\mathcal{H}^{n}(G_{i})\,, \tag{4.34}\]

thus proving (4.18), and hence the theorem.

## 5. Direct Method on generalized soap films (Theorem 1.4)

In Section 5.1 we prove Theorem 1.4, while in Section 5.2 we indicate the changes to that argument that are needed to prove a different closure theorem, which will be crucial in the companion papers [14, 15]. In particular, Section 5.2 will not be needed for the other main results of this paper (although it is included here since it is definitely easier to understand in this context).

### Proof of Theorem 1.4

Let us first of all recall the setting of the theorem. We are given a closed set \(\mathbf{W}\) in \(\mathbb{R}^{n+1}\), a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), and a sequence \(\{(K_{j},E_{j})\}_{j}\) in \(\mathcal{K}_{\mathrm{B}}\) such that

\[\sup_{j}\,\mathcal{H}^{n}(K_{j})<\infty\,, \tag{5.1}\]

and, for some Borel set \(E\) and Radon measures \(\mu_{\mathrm{bk}}\) and \(\mu_{\mathrm{bd}}\) in \(\Omega\), it holds that \(E_{j}\stackrel{{\mathrm{loc}}}{{\to}}E\) and

\[\mathcal{H}^{n}\operatorname{\mathsf{L}}\left(\Omega\cap\partial^{*}E_{j}\right)+2\,\mathcal{H}^{n}\operatorname{\mathsf{L}}\left(\mathcal{R}(K_{j})\cap E_{j}^{(0)}\right)\stackrel{{\ast}}{{\rightharpoonup}}\mu_{\mathrm{bk}}\,, \tag{5.2}\]
\[\mathcal{H}^{n}\operatorname{\mathsf{L}}\left(\Omega\cap\partial^{*}E_{j}\right)+2\,\mathcal{H}^{n}\operatorname{\mathsf{L}}\left(\mathcal{R}(K_{j})\setminus\partial^{*}E_{j}\right)\stackrel{{\ast}}{{\rightharpoonup}}\mu_{\mathrm{bd}}\,, \tag{5.3}\]

as \(j\to\infty\).
In this setting we want to prove that the sets

\[K_{\rm bk}:=\left(\Omega\cap\partial^{*}E\right)\cup\left\{x\in\Omega\cap E^{(0)}:\theta_{*}^{n}(\mu_{\rm bk})(x)\geq 2\right\}, \tag{5.4}\]
\[K_{\rm bd}:=\left(\Omega\cap\partial^{*}E\right)\cup\left\{x\in\Omega\setminus\partial^{*}E:\theta_{*}^{n}(\mu_{\rm bd})(x)\geq 2\right\}, \tag{5.5}\]

are such that \((K_{\rm bk},E),(K_{\rm bd},E)\in{\mathcal{K}}_{\rm B}\) and

\[\mu_{\rm bk}\geq{\mathcal{H}}^{n}\operatorname{\mathsf{L}}(\Omega\cap\partial^{*}E)+2\,{\mathcal{H}}^{n}\operatorname{\mathsf{L}}(K_{\rm bk}\cap E^{(0)})\,, \tag{5.6}\]
\[\mu_{\rm bd}\geq{\mathcal{H}}^{n}\operatorname{\mathsf{L}}(\Omega\cap\partial^{*}E)+2\,{\mathcal{H}}^{n}\operatorname{\mathsf{L}}(K_{\rm bd}\setminus\partial^{*}E)\,, \tag{5.7}\]

with

\[\liminf_{j\to\infty}{\mathcal{F}}_{\rm bk}(K_{j},E_{j})\geq{\mathcal{F}}_{\rm bk}(K_{\rm bk},E)\,,\qquad\liminf_{j\to\infty}{\mathcal{F}}_{\rm bd}(K_{j},E_{j})\geq{\mathcal{F}}_{\rm bd}(K_{\rm bd},E)\,; \tag{5.8}\]

and that the closure statements

\[\text{if }K_{j}\cup E_{j}^{(1)}\text{ is }{\mathcal{C}}\text{-spanning }{\mathbf{W}}\text{ for every }j, \tag{5.9}\]
\[\text{then }K_{\rm bk}\cup E^{(1)}\text{ is }{\mathcal{C}}\text{-spanning }{\mathbf{W}}\,, \tag{5.10}\]

and

\[\text{if }K_{j}\text{ is }{\mathcal{C}}\text{-spanning }{\mathbf{W}}\text{ for every }j, \tag{5.11}\]
\[\text{then }K_{\rm bd}\text{ is }{\mathcal{C}}\text{-spanning }{\mathbf{W}}\,, \tag{5.12}\]

hold true.
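Before entering the proof, the following elementary computation may help to explain the factor \(2\) and the density threshold \(\theta_{*}^{n}\geq 2\) appearing in (5.2) and (5.4); it is only an illustration of ours, and the specific choice of \(E_{j}\) and \(K_{j}\) below plays no role in the argument. Let \(D\) be an \(n\)-dimensional disk with unit normal \(\nu\), compactly contained in \(\Omega\), and set

\[E_{j}=\big{\{}y+t\,\nu:y\in D\,,\,|t|<\delta_{j}\big{\}}\,,\qquad K_{j}=\Omega\cap\partial^{*}E_{j}\,,\qquad\delta_{j}\to 0^{+}\,,\]

so that \((K_{j},E_{j})\in\mathcal{K}_{\rm B}\), \(E_{j}\to E=\varnothing\), and \(\mathcal{R}(K_{j})\cap E_{j}^{(0)}=\varnothing\). Since the lateral part of \(\partial E_{j}\) has \(\mathcal{H}^{n}\)-measure \(2\,\delta_{j}\,\mathcal{H}^{n-1}(\partial D)\to 0\), while the two flat faces of \(\partial E_{j}\) converge to \(D\), we find

\[\mathcal{H}^{n}\operatorname{\mathsf{L}}\left(\Omega\cap\partial^{*}E_{j}\right)+2\,\mathcal{H}^{n}\operatorname{\mathsf{L}}\left(\mathcal{R}(K_{j})\cap E_{j}^{(0)}\right)\stackrel{{\ast}}{{\rightharpoonup}}2\,\mathcal{H}^{n}\operatorname{\mathsf{L}}D=\mu_{\rm bk}\,,\]

so that \(\theta_{*}^{n}(\mu_{\rm bk})=2\) at \(\mathcal{H}^{n}\)-a.e. point of \(D\) and, by (5.4), \(K_{\rm bk}\) is \(\mathcal{H}^{n}\)-equivalent to \(D\): the collapsed film is recovered, counted with multiplicity two, in agreement with (5.6).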
Proof of Theorem 1.4.: By \(\Omega\cap\partial^{*}E\subset K_{\rm bk}\cap K_{\rm bd}\) we have \((K_{\rm bk},E),(K_{\rm bd},E)\in{\mathcal{K}}_{\rm B}\). By [13, Theorem 6.4], \(\theta_{*}^{n}(\mu_{\rm bk})\geq 2\) on \(K_{\rm bk}\cap E^{(0)}\) implies \(\mu_{\rm bk}\operatorname{\mathsf{L}}(K_{\rm bk}\cap E^{(0)})\geq 2\,{\mathcal{H}}^{n}\operatorname{\mathsf{L}}(K_{\rm bk}\cap E^{(0)})\), and, similarly, we have \(\mu_{\rm bd}\operatorname{\mathsf{L}}(K_{\rm bd}\setminus\partial^{*}E)\geq 2\,{\mathcal{H}}^{n}\operatorname{\mathsf{L}}(K_{\rm bd}\setminus\partial^{*}E)\). Since, by the lower semicontinuity of distributional perimeter, we have \(\min\{\mu_{\rm bk},\mu_{\rm bd}\}\geq{\mathcal{H}}^{n}\operatorname{\mathsf{L}}(\partial^{*}E\cap\Omega)\), (5.6), (5.7) and (5.8) follow. We are thus left to prove that if either (5.9) or (5.11) holds, then (5.10) or (5.12) holds respectively. We divide the proof into three parts, numbered by Roman numerals.

**I. Set up of the proof:** Fixing from now on a choice of \((\gamma,\Phi,T)\in{\mathcal{T}}({\mathcal{C}})\) against which we want to test the \({\mathcal{C}}\)-spanning properties (5.10) and (5.12), we introduce several key objects related to \((\gamma,\Phi,T)\).

_Introducing \(s_{0}\)_: Up to extracting subsequences, let \(\mu\) be the weak-star limit of \({\mathcal{H}}^{n}\operatorname{\mathsf{L}}K_{j}\), and set

\[J=\{s\in{\mathbb{S}}^{1}:\mu(T[s])=0\}\,, \tag{5.13}\]

so that \({\mathcal{H}}^{1}({\mathbb{S}}^{1}\setminus J)=0\). We fix \(s_{0}\in J\).

_Introducing \(s_{j}\), \(\{U_{i}^{j}\}_{i}\), and \(K_{j}^{*}\)_: For \({\mathcal{H}}^{1}\)-a.e. \(s\in{\mathbb{S}}^{1}\) it holds that \({\mathcal{H}}^{n}(K_{j}\cap T[s])=0\) for every \(j\) and (thanks to Theorem 1.3/Theorem 3.1) the essential partition \(\{U_{i}^{j}[s]\}_{i}\) induced on \(T\) by \(K_{j}\cup T[s]\) is such that

\[T[s]\cap E_{j}^{(0)}\text{ is }{\mathcal{H}}^{n}\text{-contained in }{\rm UBEP}(K_{j}\cup T[s];T)\,,\qquad\text{(if (5.9) holds)}\,,\]
\[T[s]\text{ is }{\mathcal{H}}^{n}\text{-contained in }{\rm UBEP}(K_{j}\cup T[s];T)\,,\qquad\text{(if (5.11) holds)}\,.\]

Therefore we can find a sequence \(s_{j}\to s_{0}\) as \(j\to\infty\) such that

\[{\mathcal{H}}^{n}(K_{j}\cap T[s_{j}])=0\qquad\forall j\,, \tag{5.14}\]

and, denoting by \(\{U_{i}^{j}\}_{i}\) the essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\) (i.e. \(U_{i}^{j}=U_{i}^{j}[s_{j}]\)), and setting for brevity

\[K_{j}^{*}={\rm UBEP}(K_{j}\cup T[s_{j}];T)=T\cap\bigcup_{i}\partial^{*}U_{i}^{j}\,, \tag{5.15}\]

we have

\[T[s_{j}]\cap E^{(0)}_{j}\text{ is $\mathcal{H}^{n}$-contained in $K^{*}_{j}$}\,,\qquad\text{(if (5.9) holds)}\,, \tag{5.16}\]
\[T[s_{j}]\text{ is $\mathcal{H}^{n}$-contained in $K^{*}_{j}$}\,,\qquad\text{(if (5.11) holds)}\,. \tag{5.17}\]

_Introducing \(\{U_{i}\}_{i}\) and \(K^{*}\)_: By (5.1), Lemma 2.3, and up to extracting a subsequence, we can find a Lebesgue partition \(\{U_{i}\}_{i}\) of \(T\) such that

\[\{U_{i}\}_{i}\text{ is the limit of $\{\{U_{i}^{j}\}_{i}\}_{j}$ in the sense specified by (2.8)}\,. \tag{5.18}\]

Correspondingly we set

\[K^{*}=T\cap\bigcup_{i}\partial^{*}U_{i}\,. \tag{5.19}\]

Having introduced \(s_{0}\), \(s_{j}\), \(\{U_{i}^{j}\}_{i}\), \(K^{*}_{j}\), \(\{U_{i}\}_{i}\), and \(K^{*}\), we notice that if (5.9) holds, then we can apply Theorem 4.1 with \(F_{j}=E_{j}\) and find that

\[T[s_{0}]\cap E^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $K^{*}$}\,,\qquad\text{(if (5.9) holds)}\,; \tag{5.20}\]

if, instead, (5.11) holds, then Theorem 4.1 can be applied with \(F_{j}=F=\varnothing\) to deduce

\[T[s_{0}]\text{ is $\mathcal{H}^{n}$-contained in $K^{*}$}\,,\qquad\text{(if (5.11) holds)}\,. \tag{5.21}\]

We now make the following claim:

**Claim:** We have

\[K^{*}\setminus(T[s_{0}]\cup E^{(1)})\text{ is $\mathcal{H}^{n}$-contained in $K_{\rm bk}$}\,, \tag{5.22}\]
\[K^{*}\setminus T[s_{0}]\text{ is $\mathcal{H}^{n}$-contained in $K_{\rm bd}$}\,. \tag{5.23}\]

The rest of the proof of the theorem is then divided into two parts: deducing the conclusion from the claim, and proving the claim.

**II. Conclusion of the proof from the claim:**_Proof that_ (5.11) _implies_ (5.12): By \(\mathcal{H}^{1}(\mathbb{S}^{1}\setminus J)=0\), the arbitrariness of \(s_{0}\in J\), and that of \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), thanks to Theorem 1.3 we can conclude that \(K_{\rm bd}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) by showing that

\[T[s_{0}]\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}(K_{\rm bd}\cup T[s_{0}];T)$}\,.
\tag{5.24}\]

Now, since \(\{U_{i}\}_{i}\) is a Lebesgue partition of \(T\) induced by \(K^{*}\) (in the very tautological sense that \(K^{*}\) is defined as \(T\cap\cup_{i}\partial^{*}U_{i}\)!) and, by (5.23) in the claim, \(K^{*}\) is \(\mathcal{H}^{n}\)-contained in \(K_{\rm bd}\cup T[s_{0}]\), by Theorem 2.1-(a) we have that if \(\{Z_{i}\}_{i}\) is the essential partition of \(T\) induced by \(K_{\rm bd}\cup T[s_{0}]\), then \(\cup_{i}\partial^{*}U_{i}\) is \(\mathcal{H}^{n}\)-contained in \(\cup_{i}\partial^{*}Z_{i}\): therefore, by definition of \(K^{*}\) and by definition of UBEP, we have that

\[K^{*}\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}\big{(}K_{\rm bd}\cup T[s_{0}];T\big{)}$}\,. \tag{5.25}\]

By combining (5.25) with (5.21) we immediately deduce (5.24) and conclude.

_Proof that_ (5.9) _implies_ (5.10): Thanks to Theorem 3.1 it suffices to prove that

\[T[s_{0}]\cap E^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}(K_{\rm bk}\cup T[s_{0}];T)$}\,. \tag{5.26}\]

By (5.20), the proof of (5.26) can be reduced to that of

\[K^{*}\cap E^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}(K_{\rm bk}\cup T[s_{0}];T)$}\,. \tag{5.27}\]

Now, let us consider the Lebesgue partition of \(T\) defined by \(\{V_{k}\}_{k}=\{U_{i}\setminus E\}_{i}\cup\{T\cap E\}\). By [10, Theorem 16.3] we easily see that for each \(i\)

\[E^{(0)}\cap\partial^{*}U_{i}\overset{\mathcal{H}^{n}}{\subset}\partial^{*}(U_{i}\setminus E)\overset{\mathcal{H}^{n}}{\subset}\left(E^{(0)}\cap\partial^{*}U_{i}\right)\cup\partial^{*}E\,, \tag{5.28}\]

which, combined with \(T\cap\partial^{*}(T\cap E)=T\cap\partial^{*}E\subset K_{\rm bk}\) and with (5.22) in the claim, gives

\[T\cap\bigcup_{k}\partial^{*}V_{k}=(T\cap\partial^{*}E)\cup\Big{\{}T\cap\bigcup_{i}\partial^{*}(U_{i}\setminus E)\Big{\}}\overset{\mathcal{H}^{n}}{\subset}(T\cap\partial^{*}E)\cup\left(E^{(0)}\cap K^{*}\right)
Thanks to (5.31) we easily see that \(K_{j}^{*}=T\cap\cup_{i}\partial^{*}U_{i}^{j}\) can be decomposed as \[K_{j}^{*}\stackrel{{\mathcal{H}^{n}}}{{=}}\bigcup_{(i,k)\in X_{ 0}^{j}\times X_{0}^{j}\,,i\neq j}M_{ik}^{j}\cup\bigcup_{(i,k)\in X_{1}^{j} \times X_{1}^{j}\,,i\neq j}M_{ik}^{j}\cup\bigcup_{(i,k)\in X_{0}^{j}\times X _{1}^{j}}M_{ik}^{j}\,, \tag{5.32}\] where \(M_{ik}^{j}=T\cap\partial^{*}U_{i}^{j}\cap\partial^{*}U_{k}^{j}\) (an analogous decomposition of \(K^{*}\) holds as well, and will be used in the following, but is not explicitly written for the sake of brevity). We now prove that \[M_{ik}^{j}\subset E_{j}^{{(0)}}\,,\qquad\forall i,k\in X_{0}^{j} \,,i\neq k\,, \tag{5.33}\] \[M_{ik}^{j}\subset\partial^{e}E_{j}\,,\qquad\forall i\in X_{0}^{j }\,,k\in X_{1}^{j}\,,\] (5.34) \[M_{ik}^{j}\subset E_{j}^{{(1)}}\,,\qquad\forall i,k\in X_{1}^{j }\,,i\neq k\,. \tag{5.35}\] _To prove (5.33) and (5.35)_: if \(i\neq k\), \(i,k\in X_{0}^{j}\), and \(x\in M_{ik}^{j}\), then (by \(|U_{i}^{j}\cap U_{k}^{j}|=0\)) \(U_{i}^{j}\) and \(U_{k}^{j}\) blow-up two complementary half-spaces at \(x\), an information that combined with the \(\mathcal{L}^{n+1}\)-inclusion of \(U_{i}^{j}\cup U_{k}^{j}\) in \(\mathbb{R}^{n+1}\setminus E_{j}\) implies \[|B_{r}(x)|+{\rm o}(r^{n+1})=|B_{r}(x)\cap U_{i}^{j}|+|B_{r}(x)\cap U_{k}^{j}| \leq|B_{r}(x)\setminus E_{j}|\,,\] that is, \(x\in E_{j}^{{(0)}}\), thus proving (5.33); the proof of (5.35) is analogous. _To prove (5.34)_: if \(i\in X_{0}^{j}\), \(k\in X_{1}^{j}\), and \(x\in M_{ik}^{j}\), then \[|B_{r}(x)\cap E_{j}|\geq|B_{r}(x)\cap U_{k}^{j}|=\frac{|B_{r}(x)|}{2}+{\rm o} (r^{n+1})\,,\] \[|B_{r}(x)\setminus E_{j}|\geq|B_{r}(x)\cap U_{i}^{j}|=\frac{|B_{r}(x)|}{2}+{ \rm o}(r^{n+1})\,,\] so that \(x\not\in E_{j}^{{(0)}}\) and \(x\not\in E_{j}^{{(1)}}\), i.e. \(x\in\partial^{e}E_{j}\), that is (5.34). With (5.33)-(5.35) at hand, we now prove that \[T\cap\partial^{*}E_{j}\stackrel{{\mathcal{H}^{n}}}{{=}}\bigcup_{( i,k)\in X_{0}^{j}\times X_{1}^{j}}M_{ik}^{j}\,, \tag{5.36}\] \[K_{j}^{*}\cap E_{j}^{{(0)}}\stackrel{{\mathcal{H}^{n}}}{{=}}\bigcup_ {(i,k)\in X_{0}^{j}\times X_{0}^{j}\,,k\neq i}M_{ik}^{j}\,. \tag{5.37}\] (Analogous relations hold with \(K^{*}\) and \(E\) in place of \(K_{j}^{*}\) and \(E_{j}\).) _To prove (5.36)_: By \(\partial^{*}E_{j}\subset\partial^{e}E_{j}\) and (4.4) we find \(\partial^{*}E_{j}\cap(U^{j}_{i})^{{}_{(1)}}=\varnothing\) for every \(i,j\); hence, since \(\{(U^{j}_{i})^{{}_{(1)}}\}_{i}\cup\{\partial^{*}U^{j}_{i}\}_{i}\) is an \(\mathcal{H}^{n}\)-partition of \(T\), and by repeatedly applying (5.33), (5.34) and (5.35), we find \[\bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{1}}M^{j}_{ik} \stackrel{{\eta^{n}}}{{\subset}} T\cap\partial^{*}E_{j}\stackrel{{\eta^{n}}}{{=}} \bigcup_{i}\bigl{(}T\cap\partial^{*}E_{j}\cap\partial^{*}U^{j}_{i}\bigr{)} \stackrel{{\eta^{n}}}{{=}}\bigcup_{i,k}M^{j}_{ik}\cap\partial^{* }E_{j}\] \[\stackrel{{\eta^{n}}}{{=}} \bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{1}}M^{j}_{ik}\cap \partial^{*}E_{j}\,,\] which gives (5.36). _To prove (5.37)_: By (5.33), (5.34), and (5.35), \(M^{j}_{ik}\) has empty intersection with \(E^{{}_{(0)}}_{j}\) unless \(i,k\in X^{j}_{0}\), in which case \(M^{j}_{ik}\) is \(\mathcal{H}^{n}\)-contained in \(E^{{}_{(0)}}_{j}\): hence, \[\bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{0},k\neq i}M^{j}_{ik}\stackrel{{ \mathcal{H}^{n}}}{{\subset}}K^{*}_{j}\cap E^{{}_{(0)}}_{j}= \bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{0},\,k\neq i}E^{{}_{(0)}}_{j}\cap M^{ j}_{ik}\,,\] that is (5.37). 
With (5.36) and (5.37) at hand, we now prove the following perimeter formulas: for every open set \(A\subset T\) and every \(j\), \[\sum_{i\in X^{j}_{0}}P(U^{j}_{i};A)=\mathcal{H}^{n}\bigl{(}A\cap \partial^{*}E_{j}\bigr{)}+2\,\mathcal{H}^{n}\bigl{(}A\cap K^{*}_{j}\cap E^{{} _{(0)}}_{j}\bigr{)}\,, \tag{5.38}\] \[\sum_{i\in X^{j}_{1}}P(U^{j}_{i};A)=\mathcal{H}^{n}\bigl{(}A\cap \partial^{*}E_{j}\bigr{)}+2\,\mathcal{H}^{n}\bigl{(}A\cap K^{*}_{j}\cap E^{{} _{(1)}}_{j}\bigr{)}\,. \tag{5.39}\] Analogously, for \(\alpha=0,1\), \[\sum_{i\in X_{\alpha}}P(U_{i};A)=\mathcal{H}^{n}\bigl{(}A\cap\partial^{*}E \bigr{)}+2\,\mathcal{H}^{n}\bigl{(}A\cap K^{*}\cap E^{{}_{(\alpha)}}\bigr{)}\,. \tag{5.40}\] _To prove (5.38) and (5.39)_: Indeed, by (5.36) and (5.37), \[\sum_{i\in X^{j}_{0}}P(U^{j}_{i};A) = \sum_{(i,k)\in X^{j}_{0}\times X^{j}_{1}}\mathcal{H}^{n}(A\cap M ^{j}_{ik})+\sum_{i\in X^{j}_{0}}\sum_{k\in X^{j}_{0}\setminus\{i\}}\mathcal{H} ^{n}(A\cap M^{j}_{ik})\] \[= \mathcal{H}^{n}\Bigl{(}\bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{ 1}}A\cap M^{j}_{ik}\Bigr{)}+2\,\mathcal{H}^{n}\Bigl{(}\bigcup_{(i,k)\in X^{j}_ {0}\times X^{j}_{0},\,i\neq k}A\cap M^{j}_{ik}\Bigr{)}\] \[= \mathcal{H}^{n}(A\cap\partial^{*}E)+2\,\mathcal{H}^{n}\bigl{(}A \cap K^{*}_{j}\cap E^{{}_{(0)}}_{j}\bigr{)}\,,\] that is (5.38). The proof of (5.39) is analogous (since (5.39) is (5.38) applied to the complements of the \(E_{j}\)'s - recall indeed that \(\Omega\cap\partial^{*}E_{j}=\Omega\cap\partial^{*}(\Omega\setminus E_{j})\)). _Conclusion of the proof of (5.22) in the claim_: We want to prove that \(K^{*}\setminus(T[s_{0}]\cup E^{{}_{(1)}})\) is \(\mathcal{H}^{n}\)-contained in \(K_{\rm bk}\). Since \(\{E^{{}_{(0)}},E^{{}_{(1)}},\partial^{*}E\}\) is an \(\mathcal{H}^{n}\)-partition of \(\Omega\), and \(\Omega\cap\partial^{*}E\) is contained in \(K_{\rm bk}\), looking back at the definition (5.4) of \(K_{\rm bk}\) it is enough to show that \[\theta^{n}_{*}(\mu_{\rm bk})(x)\geq 2\ \text{for}\ \mathcal{H}^{n}\text{-a.e.}\ x \in(K^{*}\cap E^{{}_{(0)}})\setminus T[s_{0}]\,. 
\tag{5.41}\]

To this end, we begin by noticing that, if \(Y_{0}\) is an arbitrary finite subset of \(X_{0}\), then there is \(j(Y_{0})\) such that \(Y_{0}\subset X^{j}_{0}\) for every \(j\geq j(Y_{0})\); correspondingly,

\[\sum_{i\in Y_{0}}P(U_{i};A)\leq\liminf_{j\to\infty}\sum_{i\in Y_{0}}P(U^{j}_{i};A)\leq\liminf_{j\to\infty}\sum_{i\in X^{j}_{0}}P(U^{j}_{i};A)\,.\]

By the arbitrariness of \(Y_{0}\), (5.40) with \(\alpha=0\), (5.38), and (4.11) (notice that the \(\mathcal{H}^{n}\)-containment of the \(\mathcal{H}^{n}\)-rectifiable set \(K_{j}^{*}\) into \(K_{j}\cup T[s_{j}]\) is equivalent to its \(\mathcal{H}^{n}\)-containment in \(\mathcal{R}(K_{j}\cup T[s_{j}])=\mathcal{R}(K_{j})\cup T[s_{j}]\)), we conclude that, if \(A\subset T\) is open and such that \(\operatorname{cl}\left(A\right)\cap T[s_{0}]=\varnothing\), so that \(A\cap T[s_{j}]=\varnothing\) for \(j\) large enough, then

\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap K^{*}\cap E^{(0)}\big{)}=\sum_{i\in X_{0}}P(U_{i};A)\leq\liminf_{j\to\infty}\sum_{i\in X_{0}^{j}}P(U_{i}^{j};A)\]
\[=\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap K_{j}^{*}\cap E_{j}^{(0)}\big{)}\]
\[\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\big{(}\mathcal{R}(K_{j})\cup T[s_{j}]\big{)}\cap E_{j}^{(0)}\big{)}\]
\[=\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\cap E_{j}^{(0)}\big{)}\leq\mu_{\mathrm{bk}}(\operatorname{cl}\left(A\right))\,, \tag{5.42}\]

where we have used the definition (5.2) of \(\mu_{\mathrm{bk}}\). Now, if \(x\in(K^{*}\cap E^{(0)})\setminus T[s_{0}]\), then we can apply (5.42) with \(A=B_{s}(x)\) and \(s>0\) such that \(\operatorname{cl}\left(B_{s}(x)\right)\cap T[s_{0}]=\varnothing\), together with the fact that \(x\in E^{(0)}\) implies \(\mathcal{H}^{n}(B_{s}(x)\cap\partial^{*}E)=\operatorname{o}(s^{n})\) as \(s\to 0^{+}\), to conclude that

\[\mu_{\mathrm{bk}}(\operatorname{cl}\left(B_{s}(x)\right))\geq 2\,\mathcal{H}^{n}\big{(}B_{s}(x)\cap K^{*}\cap E^{(0)}\big{)}+\operatorname{o}(s^{n})\,,\qquad\text{as }s\to 0^{+}\,. \tag{5.43}\]

Since \(K^{*}\cap E^{(0)}\) is an \(\mathcal{H}^{n}\)-rectifiable set, so that \(\mathcal{H}^{n}\big{(}B_{s}(x)\cap K^{*}\cap E^{(0)}\big{)}=\omega_{n}\,s^{n}+\operatorname{o}(s^{n})\) for \(\mathcal{H}^{n}\)-a.e. \(x\in K^{*}\cap E^{(0)}\), we deduce (5.41) from (5.43).

_Conclusion of the proof of (5.23) in the claim_: We want to prove the \(\mathcal{H}^{n}\)-containment of \(K^{*}\setminus T[s_{0}]\) in \(K_{\mathrm{bd}}\). As in the proof of (5.22), combining Federer's theorem (1.37) with the definition (5.5) of \(K_{\mathrm{bd}}\), we are left to prove that

\[\theta_{*}^{n}(\mu_{\mathrm{bd}})(x)\geq 2\text{ for }\mathcal{H}^{n}\text{-a.e. }x\in K^{*}\setminus(T[s_{0}]\cup\partial^{*}E)\,.
\tag{5.44}\]

As proved in (5.42), if \(A\subset T\) is open and such that \(\operatorname{cl}\left(A\right)\cap T[s_{0}]=\varnothing\), then by exploiting (5.38) and (5.40) with \(\alpha=0\) we have

\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap K^{*}\cap E^{(0)}\big{)}\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\cap E_{j}^{(0)}\big{)}\,; \tag{5.45}\]

the same argument, this time based on (5.39) and (5.40) with \(\alpha=1\), also gives

\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap K^{*}\cap E^{(1)}\big{)}\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\cap E_{j}^{(1)}\big{)}\,; \tag{5.46}\]

and, finally, since \(\Omega\setminus\partial^{*}E\) is \(\mathcal{H}^{n}\)-equivalent to \(\Omega\cap(E^{(0)}\cup E^{(1)})\), the combination of (5.45) and (5.46) gives

\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap K^{*}\setminus\partial^{*}E\big{)}\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\setminus\partial^{*}E_{j}\big{)}\leq\mu_{\mathrm{bd}}(\operatorname{cl}\left(A\right))\,, \tag{5.47}\]

where we have used the definition (5.3) of \(\mu_{\mathrm{bd}}\). Now, for \(\mathcal{H}^{n}\)-a.e. \(x\in K^{*}\setminus(T[s_{0}]\cup\partial^{*}E)\) we have \(\mathcal{H}^{n}(B_{r}(x)\cap\partial^{*}E)=\operatorname{o}(r^{n})\) and \(\mathcal{H}^{n}(B_{r}(x)\cap K^{*}\setminus\partial^{*}E)=\omega_{n}\,r^{n}+\operatorname{o}(r^{n})\) as \(r\to 0^{+}\), as well as \(\operatorname{cl}\left(B_{r}(x)\right)\cap T[s_{0}]=\varnothing\) for \(r\) small enough, so that (5.47) with \(A=B_{r}(x)\) readily implies (5.44). The proof of the claim, and thus of the theorem, is now complete.

### A second closure theorem

We now present a variant of the main arguments of this section and an alternative closure theorem to Theorem 1.4. As already noticed, this second closure theorem, Theorem 5.1 below, will play a role only in the companion paper [14], where Plateau's laws will be studied in relation to the Allen-Cahn equation, so that this section can be omitted on a first reading focused on Gauss' capillarity theory alone.

To introduce Theorem 5.1, let us consider the following question: given an \(\mathcal{H}^{n}\)-finite set \(S\) which is \(\mathcal{C}\)-spanning \(\mathbf{W}\), _what parts of \(S\) are essential to its \(\mathcal{C}\)-spanning property_? We already know from Lemma 2.2 that the unrectifiable part of \(S\) is not necessary, since \(\mathcal{R}(S)\) is also \(\mathcal{C}\)-spanning. However, some parts of \(\mathcal{R}(S)\) could be discarded too; indeed, rectifiable sets can be "porous at every scale", and thus completely useless from the point of view of achieving \(\mathcal{C}\)-spanning.
To make an example, consider the rectifiable set \(P\subset\mathbb{R}^{2}\) obtained by removing from \([0,1]\) all the intervals \((q_{i}-\varepsilon_{i},q_{i}+\varepsilon_{i})\), where \(\{q_{i}\}_{i}\) are the rational numbers in \([0,1]\) and \(2\sum_{i}\varepsilon_{i}=\varepsilon\) for some given \(\varepsilon\in(0,1)\): it is easily seen that \(P\) is a rectifiable set with positive \(\mathcal{H}^{1}\)-measure in \(\mathbb{R}^{2}\) (indeed \(\mathcal{H}^{1}(P)\geq 1-2\sum_{i}\varepsilon_{i}=1-\varepsilon>0\)), contained in \(\mathbb{R}\times\{0\}\), which fails to essentially disconnect any stripe of the form \((a,b)\times\mathbb{R}\) with \((a,b)\subset\subset(0,1)\). Intuitively, if a set like \(P\) stands as an isolated portion of \(S\), then \(\mathcal{R}(S)\setminus P\) should still be \(\mathcal{C}\)-spanning.

We can formalize this idea as follows. Denoting as usual \(\Omega=\mathbb{R}^{n+1}\setminus\mathbf{W}\), we consider the open covering \(\{\Omega_{k}\}_{k}\) of \(\Omega\) defined by

\[\{\Omega_{k}\}_{k}=\{B_{r_{mh}}(x_{m})\}_{m,h}\,, \tag{5.48}\]

where \(\{x_{m}\}_{m}=\mathbb{Q}^{n+1}\cap\Omega\) and \(\{r_{mh}\}_{h}=\mathbb{Q}\cap(0,\operatorname{dist}(x_{m},\partial\Omega))\). For every \(\mathcal{H}^{n}\)-finite set \(S\) we define the **essential spanning part of \(S\) in \(\Omega\)** as the Borel set

\[\operatorname{ESP}(S)=\bigcup_{k}\,\operatorname{UBEP}(S;\Omega_{k})=\bigcup_{k}\,\left\{\Omega_{k}\cap\bigcup_{i}\partial^{*}U_{i}[\Omega_{k}]\right\},\]

where \(\{U_{i}[\Omega_{k}]\}_{i}\) denotes the essential partition of \(\Omega_{k}\) induced by \(S\). Since each \(\operatorname{UBEP}(S;\Omega_{k})\) is a countable union of reduced boundaries and is \(\mathcal{H}^{n}\)-contained in the \(\mathcal{H}^{n}\)-finite set \(S\), we see that \(\operatorname{ESP}(S)\) is always \(\mathcal{H}^{n}\)-rectifiable. The idea is that by following the unions of boundaries of essential partitions induced by \(S\) over smaller and smaller balls we are capturing all the parts of \(S\) that may potentially contribute to achieve a spanning condition with respect to \(\mathbf{W}\). Thinking about Figure 1.5: the tendrils of \(S\) appearing in panel (a) and not captured by \(\operatorname{UBEP}(S;U)\) will eventually be included in \(\operatorname{ESP}(S)\) by considering \(\operatorname{UBEP}\)'s of \(S\) relative to suitable subsets of \(U\). Another way to visualize the construction of \(\operatorname{ESP}(S)\) is noticing that if \(B_{r}(x)\subset B_{s}(x)\subset\Omega\), then

\[B_{r}(x)\cap\operatorname{UBEP}(S;B_{s}(x))\subset\operatorname{UBEP}(S;B_{r}(x))\,,\]

which points to the monotonicity property behind the construction of \(\operatorname{ESP}(S)\) (a short sketch of this inclusion is given below). Intuitively, we expect that

\[\text{if $S$ is $\mathcal{C}$-spanning $\mathbf{W}$, then $\operatorname{ESP}(S)$ is $\mathcal{C}$-spanning $\mathbf{W}$} \tag{5.49}\]

(where \(\mathcal{C}\) is an arbitrary spanning class for \(\mathbf{W}\)). This fact will be proved in a moment as a particular case of Theorem 5.1 below.
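For completeness, here is a short sketch of the monotonicity inclusion displayed above; the sketch is ours, and only uses Theorem 2.1-(a), under the standing assumption that \(S\) is \(\mathcal{H}^{n}\)-finite. If \(\{V_{i}\}_{i}\) is the essential partition of \(B_{s}(x)\) induced by \(S\), then \(\{V_{i}\cap B_{r}(x)\}_{i}\) is a Lebesgue partition of \(B_{r}(x)\) with

\[B_{r}(x)\cap\bigcup_{i}\partial^{*}\big{(}V_{i}\cap B_{r}(x)\big{)}=B_{r}(x)\cap\bigcup_{i}\partial^{*}V_{i}\,,\]

a set which is \(\mathcal{H}^{n}\)-contained in \(S\) by the definition of essential partition; hence, by Theorem 2.1-(a) applied in \(B_{r}(x)\), this union of boundaries is \(\mathcal{H}^{n}\)-contained in \(\operatorname{UBEP}(S;B_{r}(x))\), which is the displayed inclusion up to modification on an \(\mathcal{H}^{n}\)-null set.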
Next, we introduce the notion of convergence behind our second closure theorem. Consider a sequence \(\{S_{j}\}_{j}\) of Borel subsets of \(\Omega\) such that \(\sup_{j}\mathcal{H}^{n}(S_{j})<\infty\). If we denote by \(\{U_{i}^{j}[\Omega_{k}]\}_{i}\) the essential partition induced on \(\Omega_{k}\) by \(S_{j}\), then a diagonal argument based on Lemma 2.3 shows the existence of a (not relabeled) subsequence in \(j\), and, for each \(k\), of a Borel partition \(\{U_{i}[\Omega_{k}]\}_{i}\) of \(\Omega_{k}\) such that \(\{U_{i}^{j}[\Omega_{k}]\}_{i}\) converges to \(\{U_{i}[\Omega_{k}]\}_{i}\) as \(j\to\infty\) in the sense specified by (2.8). Since \(\operatorname{UBEP}(S_{j};\Omega_{k})=\Omega_{k}\cap\bigcup_{i}\partial^{*}U_{i}^{j}[\Omega_{k}]\), we call any set \(S\) of the form11

Footnote 11: The limit partition \(\{U_{i}[\Omega_{k}]\}_{i}\) appearing in (5.50) may not be the essential partition induced by \(S\) on \(\Omega_{k}\) since the individual \(U_{i}[\Omega_{k}]\), arising as \(L^{1}\)-limits, may fail to be essentially connected. This said, \(\{U_{i}[\Omega_{k}]\}_{i}\) is automatically a partition of \(\Omega_{k}\) induced by \(S\).

\[S=\bigcup_{k}\,\Big{\{}\Omega_{k}\cap\bigcup_{i}\partial^{*}U_{i}[\Omega_{k}]\Big{\}}\,, \tag{5.50}\]

a **subsequential partition limit of \(\{S_{j}\}_{j}\) in \(\Omega\)**. Having in mind (5.49), it is natural to ask if the following property holds:

\[\text{if $S_{j}$ is $\mathcal{C}$-spanning $\mathbf{W}$ for each $j$, and $S$ is a subsequential partition limit of $\{S_{j}\}_{j}$ in $\Omega$,}\]
\[\text{then $S$ is $\mathcal{C}$-spanning $\mathbf{W}$}\,. \tag{5.51}\]

Our next theorem implies both (5.49) and (5.51) as particular cases (corresponding to taking \(E_{j}=\varnothing\) and, respectively, \(K_{j}=S\) and \(K_{j}=S_{j}\) for every \(j\)).

**Theorem 5.1** (Closure theorem for subsequential partition limits).: _Let \(\mathbf{W}\) be a closed set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) a spanning class for \(\mathbf{W}\), and \(\{(K_{j},E_{j})\}_{j}\) a sequence in \(\mathcal{K}_{\mathrm{B}}\) such that \(\sup_{j}\mathcal{H}^{n}(K_{j})<\infty\) and \(K_{j}\cup E_{j}^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) for every \(j\)._

_If \(S_{0}\) and \(E_{0}\) are, respectively, a subsequential partition limit of \(\{K_{j}\}_{j}\) in \(\Omega\) and an \(L^{1}\)-subsequential limit of \(\{E_{j}\}_{j}\) (corresponding to a same not relabeled subsequence in \(j\)), and we set_

\[K_{0}=(\Omega\cap\partial^{*}E_{0})\cup S_{0}\,,\]

_then \((K_{0},E_{0})\in\mathcal{K}_{\mathrm{B}}\) and \(K_{0}\cup E_{0}^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\)._

Proof.: Since \(\Omega\cap\partial^{*}E_{0}\subset K_{0}\) by definition of \(K_{0}\), we trivially have \((K_{0},E_{0})\in\mathcal{K}_{\mathrm{B}}\). Aiming to prove that \(K_{0}\cup E_{0}^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), we fix \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), and define \(s_{0}\), \(s_{j}\), \(\{U_{i}^{j}\}_{i}\) and \(\{U_{i}\}_{i}\) exactly as in part I of the proof of Theorem 1.4. Thanks to Theorem 4.1 and by arguing as in part II of the proof of Theorem 1.4, we have reduced ourselves to proving that

\[K^{*}\setminus(T[s_{0}]\cup E_{0}^{(1)})\text{ is }\mathcal{H}^{n}\text{-contained in }K_{0}\,. \tag{5.52}\]

By Federer's theorem (1.37) and since \(\Omega\cap\partial^{*}E_{0}\subset K_{0}\) it is enough to prove

\[(K^{*}\cap E_{0}^{(0)})\setminus T[s_{0}]\text{ is }\mathcal{H}^{n}\text{-contained in }S_{0}\,,\]

and, thanks to the construction of \(S_{0}\), we shall actually be able to prove

\[K^{*}\setminus T[s_{0}]\text{ is }\mathcal{H}^{n}\text{-contained in }S_{0}\,.
\tag{5.53}\]

To this end let us pick \(k\) such that \(\Omega_{k}\subset\subset T\) and \(\Omega_{k}\cap T[s_{0}]=\varnothing\). Then, for \(j\geq j(k)\), we have \(\Omega_{k}\cap T[s_{j}]=\varnothing\), so that

\[\Omega_{k}\cap\text{UBEP}\big{(}K_{j}\cup T[s_{j}];T\big{)}\subset\text{UBEP}\big{(}K_{j}\cup T[s_{j}];\Omega_{k}\big{)}=\text{UBEP}\big{(}K_{j};\Omega_{k}\big{)}\,.\]

Since \(\{U_{i}^{j}\}_{i}\) is the essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\), if \(\{U_{m}^{j}[\Omega_{k}]\}_{m}\) is the essential partition of \(\Omega_{k}\) induced by \(K_{j}\), we have just proved that, for every \(i\) and \(j\geq j(k)\),

\[\Omega_{k}\cap\partial^{*}U_{i}^{j}\subset\Omega_{k}\cap\bigcup_{m}\partial^{*}U_{m}^{j}[\Omega_{k}]\,. \tag{5.54}\]

Since \(\{U_{m}^{j}[\Omega_{k}]\}_{m}\) is a Lebesgue partition of \(\Omega_{k}\) into essentially connected sets, by (5.54) the indecomposable components of \(\Omega_{k}\cap U_{i}^{j}\) must belong to \(\{U_{m}^{j}[\Omega_{k}]\}_{m}\). In other words, for each \(i\) and each \(j\geq j(k)\) there is \(M(k,i,j)\) such that

\[\Omega_{k}\cap U_{i}^{j}=\bigcup_{m\in M(k,i,j)}U_{m}^{j}[\Omega_{k}]\,.\]

As a consequence of \(U_{i}^{j}\to U_{i}\) and of \(U_{m}^{j}[\Omega_{k}]\to U_{m}[\Omega_{k}]\) as \(j\to\infty\) we find that, for a suitable set of indexes \(M(k,i)\),

\[\Omega_{k}\cap U_{i}=\bigcup_{m\in M(k,i)}U_{m}[\Omega_{k}]\,,\]

and therefore

\[\Omega_{k}\cap\partial^{*}U_{i}\stackrel{{\mathcal{H}^{n}}}{{\subset}}\bigcup_{m\in M(k,i)}\partial^{*}U_{m}[\Omega_{k}]\subset S_{0}\,.\]

Since we have proved this inclusion for every \(i\) and for every \(k\) such that \(\Omega_{k}\subset\subset T\) with \(\Omega_{k}\cap T[s_{0}]=\varnothing\), it follows that \(K^{*}\setminus T[s_{0}]\) is \(\mathcal{H}^{n}\)-contained in \(S_{0}\), that is (5.53).

## 6. Existence of minimizers and convergence to Plateau's problem (Theorem 1.5)

In this section we prove two main results: the first one (Theorem 6.1) concerns the equivalence of the Harrison-Pugh Plateau problem \(\ell\) with its measure-theoretic reformulation \(\ell_{\rm B}\) (see (1.21)); the second one (Theorem 6.2) is a refined version of Theorem 1.5.

**Theorem 6.1** (Existence for \(\ell_{\rm B}\) and \(\ell=\ell_{\rm B}\)).: _If \({\bf W}\subset\mathbb{R}^{n+1}\) is closed, \(\mathcal{C}\) is a spanning class for \({\bf W}\), and the Harrison-Pugh formulation of the Plateau problem_

\[\ell=\inf\big{\{}\mathcal{H}^{n}(S):S\text{ is a relatively closed subset of $\Omega$, $S$ is $\mathcal{C}$-spanning ${\bf W}$}\big{\}}\]

_is finite, then the problem_

\[\ell_{\rm B}=\inf\big{\{}\mathcal{H}^{n}(S):S\text{ is a Borel subset of $\Omega$, $S$ is $\mathcal{C}$-spanning ${\bf W}$}\big{\}}\]

_admits minimizers, and given any minimizer \(S\) for \(\ell_{\rm B}\), there exists a relatively closed set \(S^{*}\) which is \(\mathcal{H}^{n}\)-equivalent to \(S\) and a minimizer for \(\ell\). In particular, \(\ell=\ell_{\rm B}\)._
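Let us record, for the reader's convenience, the elementary half of the identity \(\ell=\ell_{\rm B}\) (this observation is ours, and is implicit in the statement): since every relatively closed competitor for \(\ell\) is in particular a Borel competitor for \(\ell_{\rm B}\),

\[\ell_{\rm B}\leq\ell\,,\]

so that the actual content of Theorem 6.1 is the existence of Borel minimizers and the reverse inequality, which is obtained by replacing a Borel minimizer \(S\) with an \(\mathcal{H}^{n}\)-equivalent relatively closed set \(S^{*}\), admissible for \(\ell\).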
Moreover, **(i):** _if \((K_{*},E_{*})\) is a minimizer of \(\Psi_{\rm bk}(v)\), then there is \((K,E)\in\mathcal{K}\) such that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(K_{*}\), \(E\) is Lebesgue equivalent to \(E_{*}\), \((K,E)\) is a minimizer of \(\Psi_{\rm bk}(v)\), both \(E\) and \(K\) are bounded, \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), \(K\cap E^{(1)}=\varnothing\), and there is \(\lambda\in\mathbb{R}\) such that_
\[\lambda\int_{\Omega\cap\partial^{*}E}X\cdot\nu_{E}\,d\mathcal{H}^{n}=\int_{\Omega\cap\partial^{*}E}\operatorname{div}^{K}X\,d\mathcal{H}^{n}+2\int_{K\cap E^{(0)}}\operatorname{div}^{K}X\,d\mathcal{H}^{n}\,, \tag{6.1}\]
\[\forall X\in C_{c}^{1}(\mathbb{R}^{n+1};\mathbb{R}^{n+1})\quad\text{with }X\cdot\nu_{\Omega}=0\text{ on }\partial\Omega\,,\]
_and there are positive constants \(c=c(n)\) and \(r_{1}=r_{1}(K,E)\) such that_
\[|E\cap B_{\rho}(y)|\leq(1-c)\,\omega_{n+1}\,\rho^{n+1}\,, \tag{6.2}\]
_for every \(y\in\Omega\cap\partial E\) and \(\rho<\min\{r_{1},\operatorname{dist}(y,\mathbf{W})\}\); under the further assumption that \(\partial\mathbf{W}\) is \(C^{2}\), there is a positive \(r_{0}=r_{0}(n,\mathbf{W},|\lambda|)\) such that_
\[\mathcal{H}^{n}(K\cap B_{r}(x))\geq c\,r^{n} \tag{6.3}\]
_for every \(x\in\operatorname{cl}(K)\) and \(r<r_{0}\);_ **(ii):** _if \((K_{j},E_{j})\) is a sequence of minimizers for \(\Psi_{\rm bk}(v_{j})\) with \(v_{j}\to 0^{+}\), then there exists a minimizer \(S\) of \(\ell\) such that, up to extracting subsequences, as Radon measures in \(\Omega\),_
\[\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner\big(K_{j}\cap E_{j}^{(0)}\big)\;\stackrel{*}{\rightharpoonup}\;2\,\mathcal{H}^{n}\llcorner S\,. \tag{6.4}\]

Proof of Theorem 6.1. By Theorem A.1, if \(\ell<\infty\), then \(\ell_{\rm B}<\infty\). Let now \(\{S_{j}\}_{j}\) be a minimizing sequence for \(\ell_{\rm B}\); then \(\{(S_{j},\varnothing)\}_{j}\) is a sequence in \(\mathcal{K}_{\rm B}\) satisfying (5.1). By Theorem 1.4, we find a Borel set \(S\) which is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and is such that
\[2\,\liminf_{j\to\infty}\mathcal{H}^{n}(S_{j})=\liminf_{j\to\infty}\mathcal{F}_{\rm bk}(S_{j},\varnothing)\geq\mathcal{F}_{\rm bk}(S,\varnothing)=2\,\mathcal{H}^{n}(S)\,.\]
This shows that \(S\) is a minimizer of \(\ell_{\rm B}\). By Lemma 2.2, \(S\) is \(\mathcal{H}^{n}\)-rectifiable, for, otherwise, \(\mathcal{R}(S)\) would be admissible for \(\ell_{\rm B}\) and have strictly less area than \(S\). We conclude the proof by showing that, up to modifications on an \(\mathcal{H}^{n}\)-null set, \(S\) is relatively closed in \(\Omega\) (and thus is a minimizer of \(\ell\) too). Indeed, the property of being \(\mathcal{C}\)-spanning \(\mathbf{W}\) is preserved under any diffeomorphism \(f\) with \(\{f\neq{\rm id}\,\}\subset\subset\Omega\). In particular, \(\mathcal{H}^{n}(S)\leq\mathcal{H}^{n}(f(S))\) for every such \(f\), so that the multiplicity one rectifiable varifold \(V_{S}=\mathbf{var}\,(S,1)\) associated to \(S\) is stationary.
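For clarity, the stationarity of \(V_{S}\) is the outcome of the usual first variation computation (sketched here with the notation just introduced): if \(X\in C^{1}_{c}(\Omega;\mathbb{R}^{n+1})\) and \(f_{t}=\mathrm{id}+t\,X\), then \(f_{t}\) is an admissible diffeomorphism for every sufficiently small \(|t|\), so that \(t\mapsto\mathcal{H}^{n}(f_{t}(S))\) has a minimum at \(t=0\), and therefore
\[0=\frac{d}{dt}\bigg|_{t=0}\mathcal{H}^{n}(f_{t}(S))=\int_{S}\operatorname{div}^{S}X\,d\mathcal{H}^{n}=\delta V_{S}(X)\,.\]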
By a standard application of the monotonicity formula, we can find \(S^{*}\) \(\mathcal{H}^{n}\)-equivalent to \(S\) such that \(S^{*}\) is relatively closed in \(\Omega\). Since \(\mathcal{H}^{n}(S)=\mathcal{H}^{n}(S^{*})\) and \(\mathcal{C}\)-spanning is preserved under \(\mathcal{H}^{n}\)-null modifications, we conclude the proof.

Proof of Theorem 6.2. _Step one_: We prove conclusion (i). To this end, let \((K_{*},E_{*})\in\mathcal{K}_{\rm B}\) be a minimizer of \(\Psi_{\rm bk}(v)\). Clearly, \((\mathcal{R}(K_{*}),E_{*})\in\mathcal{K}_{\rm B}\) is such that \(\mathcal{R}(K_{*})\cup E_{*}^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) (thanks to Theorem 3.1/Remark 3.2) and \(\mathcal{F}_{\rm bk}(\mathcal{R}(K_{*}),E_{*})\leq\mathcal{F}_{\rm bk}(K_{*},E_{*})\). In particular, \((\mathcal{R}(K_{*}),E_{*})\) is a minimizer of \(\Psi_{\rm bk}(v)\), and energy comparison between \((\mathcal{R}(K_{*}),E_{*})\) and \((\mathcal{R}(K_{*})\setminus E_{*}^{(1)},E_{*})\) (which is also a competitor for \(\Psi_{\rm bk}(v)\)) proves that
\[\mathcal{H}^{n}(\mathcal{R}(K_{*})\cap E_{*}^{(1)})=0\,. \tag{6.5}\]
Since "\(\mathcal{C}\)-spanning \(\mathbf{W}\)" is preserved under diffeomorphisms, by a standard first variation argument (see, e.g. [10, Appendix C]) we see that \((\mathcal{R}(K_{*}),E_{*})\) satisfies (6.1) for some \(\lambda\in\mathbb{R}\). In particular, the integer \(n\)-varifold \(V={\rm var}(\mathcal{R}(K_{*}),\theta)\), with multiplicity function \(\theta=2\) on \(\mathcal{R}(K_{*})\cap E_{*}^{(0)}\) and \(\theta=1\) on \(\Omega\cap\partial^{*}E_{*}\), has bounded mean curvature in \(\Omega\), and thus satisfies \(\|V\|(B_{r}(x))\geq c(n)\,r^{n}\) for every \(x\in K\) and \(r<\min\{r_{0},{\rm dist}(x,\mathbf{W})\}\), where \(r_{0}=r_{0}(n,|\lambda|)\) and, by definition,
\[K:=\Omega\cap{\rm spt}V\,.\]
In particular, since (6.5) implies \(\|V\|\leq 2\,\mathcal{H}^{n}\llcorner\mathcal{R}(K_{*})\), we conclude (e.g. by [13, Corollary 6.4]) that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(\mathcal{R}(K_{*})\), and is thus \(\mathcal{H}^{n}\)-rectifiable and relatively closed in \(\Omega\). Now let
\[E=\big\{x\in\Omega:\exists\ r<{\rm dist}(x,\mathbf{W})\ {\rm s.t.}\ |E_{*}\cap B_{r}(x)|=|B_{r}(x)|\big\}\,,\]
so that, trivially, \(E\) is an open subset of \(\Omega\) with \(E\subset E_{*}^{(1)}\). By applying (1.35) to \(E_{*}\), and by noticing that if \(x\in\Omega\setminus E\) then \(|E_{*}\cap B_{r}(x)|<|B_{r}(x)|\) for every \(r>0\), and that if \(x\in\Omega\cap{\rm cl}\,(E)\) then \(|E_{*}\cap B_{r}(x)|>0\) for every \(r>0\), we see that
\[\Omega\cap\partial E\ \subset\ \big\{x\in\Omega:0<|E_{*}\cap B_{r}(x)|<|B_{r}(x)|\ \forall r>0\big\}\ =\ \Omega\cap{\rm cl}\,(\partial^{*}E_{*})\,. \tag{6.6}\]
Since \(\|V\|\geq\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{*})\) and \(\mathcal{H}^{n}(B_{r}(x)\cap\partial^{*}E_{*})=\omega_{n}\,r^{n}+{\rm o}(r^{n})\) as \(r\to 0^{+}\) for every \(x\in\Omega\cap\partial^{*}E_{*}\), we see that \(\Omega\cap\partial^{*}E_{*}\subset\Omega\cap{\rm spt}\|V\|=K\), and since \(K\) is relatively closed in \(\Omega\), we have \(\Omega\cap{\rm cl}\,(\partial^{*}E_{*})\subset K\), and so \(\Omega\cap\partial E\subset K\). In particular, \(E\) is of finite perimeter, and thus, by applying (1.35) to \(E\),
\[\Omega\cap{\rm cl}\,(\partial^{*}E)\ =\ \big\{x\in\Omega:0<|E\cap B_{r}(x)|<|B_{r}(x)|\ \forall r>0\big\}\ \subset\ \Omega\cap\partial E\,. \tag{6.7}\]
Finally, if there is \(x\in(\Omega\cap E_{*}^{(1)})\setminus E\), then it must be \(0<|E_{*}\cap B_{r}(x)|<|B_{r}(x)|\) for every \(r>0\), and thus \(x\in\Omega\cap{\rm cl}\,(\partial^{*}E_{*})\subset K\). However, we _claim_ that for every \(x\in\Omega\cap{\rm cl}\,(\partial^{*}E_{*})\) and \(r<\min\{r_{*},{\rm dist}(x,\mathbf{W})\}\) (with \(r_{*}=r_{*}(K_{*},E_{*})\)) it holds
\[|B_{r}(x)\cap E_{*}|\leq(1-c)\,\omega_{n+1}\,r^{n+1}\,, \tag{6.8}\]
in contradiction with \(x\in E_{*}^{(1)}\); this proves that \(\Omega\cap E_{*}^{(1)}\subset E\), and thus that \(E_{*}\) and \(E\) are Lebesgue equivalent.
Combining the latter information with (6.6) and (6.7) we conclude that \(\Omega\cap\operatorname{cl}\left(\partial^{*}E\right)=\Omega\cap\partial E\subset K\), which completes the proof that \((K,E)\in\mathcal{K}\), conditionally on proving (6.8).

To prove (6.8), let us fix \(x\in\Omega\cap\operatorname{cl}\left(\partial^{*}E_{*}\right)\) and set \(u(r)=|B_{r}(x)\setminus E_{*}|\), so that, for a.e. \(r>0\) we have
\[u^{\prime}(r)=\mathcal{H}^{n}(E_{*}^{(0)}\cap\partial B_{r}(x))\,,\qquad P(B_{r}(x)\setminus E_{*})=u^{\prime}(r)+P(E_{*};B_{r}(x))\,. \tag{6.9}\]
Since \(|E_{*}|=v>0\), we have \(\mathcal{H}^{n}(\Omega\cap\partial^{*}E_{*})>0\), therefore there must be \(y_{1},y_{2}\in\Omega\cap\partial^{*}E_{*}\) with \(|y_{1}-y_{2}|>4r_{*}\) for some \(r_{*}\) depending on \(E_{*}\). In particular there is \(i\in\{1,2\}\) such that \(B_{r_{*}}(x)\cap B_{r_{*}}(y_{i})=\varnothing\), and we set \(y=y_{i}\). Since \(y_{i}\in\Omega\cap\partial^{*}E_{*}\), there are \(w_{*}>0\) and a smooth map \(\Phi:\Omega\times(-w_{*},w_{*})\to\Omega\) such that \(\Phi(\cdot,w)\) is a diffeomorphism of \(\Omega\) with \(\{\Phi(\cdot,w)\neq\operatorname{Id}\}\subset\subset B_{r_{*}}(y)\), and
\[|\Phi(E_{*},w)|=|E_{*}|-w\,,\qquad P(\Phi(E_{*},w);B_{r_{*}}(y))\leq P(E_{*},B_{r_{*}}(y))\,(1+2\,|\lambda|\,|w|)\,, \tag{6.10}\]
for every \(|w|<w_{*}\). We then consider \(r_{1}\) such that \(|B_{r_{1}}|<w_{*}\), so that for every \(r<\min\{r_{1},\operatorname{dist}(x,\mathbf{W})\}\) we have \(0\leq u(r)<w_{*}\), and thus, writing \(\Phi^{w}:=\Phi(\cdot,w)\), we can define
\[(K_{r},E_{r})=\Big(\Phi^{u(r)}\big(K\cup\partial B_{r}(x)\big),\Phi^{u(r)}\big(E_{*}\cup B_{r}(x)\big)\Big)\,.\]
Since \(\Phi^{u(r)}\) is a diffeomorphism, we have \(\Omega\cap\partial^{*}E_{r}\subset K_{r}\), and by the first relation in (6.10) and \(\Phi^{u(r)}=\operatorname{Id}\) on \(\Omega\setminus B_{r_{*}}(y)\), we get
\[|E_{r}|-|E_{*}|=|B_{r}(x)|-|B_{r}(x)\cap E_{*}|+|\Phi^{u(r)}(E_{*})\cap B_{r_{*}}(y)|-|E_{*}\cap B_{r_{*}}(y)|=u(r)-u(r)=0\,.\]
Hence \(\mathcal{F}_{\operatorname{bk}}(K_{*},E_{*})\leq\mathcal{F}_{\operatorname{bk}}(K_{r},E_{r})\), from which we deduce
\[P(E_{*};B_{r}(x))+P(E_{*};B_{r_{*}}(y))+2\,\mathcal{H}^{n}(K_{*}\cap E_{*}^{(0)}\cap B_{r}(x))\]
\[\leq\mathcal{H}^{n}(B_{r}(x)\cap E_{*}^{(0)})+P(\Phi^{u(r)}(E_{*});B_{r_{*}}(y))\leq u^{\prime}(r)+P(E_{*},B_{r_{*}}(y))\,(1+2\,|\lambda|\,u(r))\,,\]
where we have used (6.9) and (6.10); by adding up \(u^{\prime}(r)\) on both sides of the inequality, and using (6.9) again, we find that
\[c(n)\,u(r)^{n/(n+1)}\leq P(B_{r}(x)\setminus E_{*})\leq 2\,u^{\prime}(r)+2\,|\lambda|\,\Psi_{\operatorname{bk}}(v)\,u(r)\,,\]
for a.e. \(r<\min\{r_{1},\operatorname{dist}(x,\mathbf{W})\}\); since, by (6.6), \(x\in\Omega\cap\operatorname{cl}\left(\partial^{*}E_{*}\right)\) implies \(u(r)>0\) for every \(r>0\), we can apply a standard ODE argument, sketched below, to conclude that (6.8) holds true.
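In sketch, the ODE argument runs as follows: set \(C=2\,|\lambda|\,\Psi_{\rm bk}(v)\) and note that, up to further decreasing \(r_{1}\), we may assume \(C\,u(r)^{1/(n+1)}\leq c(n)/2\) (since \(u(r)\leq|B_{r}|\)); the term \(C\,u(r)=C\,u(r)^{1/(n+1)}\,u(r)^{n/(n+1)}\) can then be absorbed into the left hand side, leaving \(u^{\prime}(r)\geq(c(n)/4)\,u(r)^{n/(n+1)}\), that is,
\[\frac{d}{dr}\,u(r)^{1/(n+1)}\geq\frac{c(n)}{4\,(n+1)}\qquad\text{for a.e. such }r\,;\]
integrating, and recalling that \(u(r)=|B_{r}(x)\setminus E_{*}|>0\), we find \(u(r)\geq\big(c(n)\,r/4(n+1)\big)^{n+1}\), which is (6.8) for a suitable \(c=c(n)\).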
We now prove the remaining assertions in statement (i). First of all, when \(\partial\mathbf{W}\) is \(C^{2}\), we can argue similarly to [10, Theorem 4.1] to deduce from the modified monotonicity formula of Kagaya and Tonegawa [11] that the area lower bound in (6.3) holds for every \(x\in\operatorname{cl}\left(K\right)\) and every \(r<r_{0}\). The validity of the volume upper bound in (6.2) is immediate from (6.8) and the Lebesgue equivalence of \(E_{*}\) and \(E\). The monotonicity formula for \(V\) combined with \(\mathcal{H}^{n}(\Omega\cap K)<\infty\) implies of course that \(V\) has bounded support. Having proved that \(K\) is bounded, \(|E|<\infty\) and \(\Omega\cap\partial E\subset K\) imply that \(E\) is bounded too.

Since \(\mathcal{R}(K_{*})\) and \(K\) are \(\mathcal{H}^{n}\)-equivalent, we have that \(K\cup E_{*}^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\). It turns out that \(K\cup E^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) too, since \(E\) and \(E_{*}\) are Lebesgue equivalent _and_ of finite perimeter, and therefore such that \(E^{(1)}\) and \(E_{*}^{(1)}\) are \(\mathcal{H}^{n}\)-equivalent. In fact, on noticing that \(\Omega\cap(E^{(1)}\setminus E)\subset\Omega\cap\partial E\subset K\), we see that \(K\cup E^{(1)}=K\cup E\), so that \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), as claimed.

Finally, we prove that \(K\cap E^{(1)}=\varnothing\). We first notice that, since \(E\subset\Omega\) is open and \(K=\Omega\cap\operatorname{spt}V\) with \(\|V\|\leq 2\,\mathcal{H}^{n}\llcorner\mathcal{R}(K_{*})\), if \(K\cap E\neq\varnothing\), then \(\mathcal{H}^{n}(\mathcal{R}(K_{*})\cap E)>0\); and since \(E\subset E_{*}^{(1)}\) by construction, we arrive at a contradiction with (6.5). Hence, \(K\cap E=\varnothing\). Now, if \(x\in K\cap E^{(1)}\), then, by (6.2), \(x\not\in\Omega\cap\partial E\); combining this with \(K\cap E=\varnothing\), we find \(K\cap E^{(1)}\subset\Omega\setminus\operatorname{cl}\left(E\right)\subset E^{(0)}\), and thus \(K\cap E^{(1)}=\varnothing\).

_Step two_: For every \(v_{1}\geq 0\) and \(v_{2}>0\) we have
\[\Psi_{\operatorname{bk}}(v_{1}+v_{2})\leq\Psi_{\operatorname{bk}}(v_{1})+(n+1)\,\omega_{n+1}^{1/(n+1)}\,v_{2}^{n/(n+1)}\,. \tag{6.11}\]
Since \(\Psi_{\rm bk}(0)=2\,\ell<\infty\), (6.11) implies in particular that \(\Psi_{\rm bk}(v)<\infty\) for every \(v>0\) (just take \(v_{1}=0\) and \(v_{2}=v\)). Indeed, let \((K_{1},E_{1})\) be a competitor in \(\Psi_{\rm bk}(v_{1})\) and let \(\{B_{r_{j}}(x_{j})\}_{j}\) be a sequence of balls with \(|x_{j}|\to\infty\) and \(|E_{1}\cup B_{r_{j}}(x_{j})|=v_{1}+v_{2}\) for every \(j\). Setting for the sake of brevity \(B_{j}=B_{r_{j}}(x_{j})\), since \(\partial^{*}(E_{1}\cup B_{j})\) is \(\mathcal{H}^{n}\)-contained in \((\partial^{*}E_{1})\cup\partial B_{j}\), we have that \((K_{2},E_{2})\), with \(K_{2}=K_{1}\cup\partial B_{j}\) and \(E_{2}=E_{1}\cup B_{j}\), is a competitor of \(\Psi_{\rm bk}(v_{1}+v_{2})\). Since \(\partial B_{j}\cap E_{2}^{(0)}=\varnothing\) implies \(E_{2}^{(0)}\subset E_{1}^{(0)}\setminus\partial B_{j}\), we find that
\[\Psi_{\rm bk}(v_{1}+v_{2}) \leq 2\,\mathcal{H}^{n}\big(K_{2}\cap E_{2}^{(0)}\big)+\mathcal{H}^{n}(\Omega\cap\partial^{*}E_{2})\]
\[\leq 2\,\mathcal{H}^{n}(K_{1}\cap E_{1}^{(0)}\setminus\partial B_{j})+\mathcal{H}^{n}(\Omega\cap\partial^{*}E_{1})+\mathcal{H}^{n}(\partial B_{j})\]
\[\leq \mathcal{F}_{\rm bk}(K_{1},E_{1})+(n+1)\,\omega_{n+1}^{1/(n+1)}\,|B_{j}|^{n/(n+1)}\,.\]
Since \(|x_{j}|\to\infty\), \(|E_{1}|=v_{1}\), and \(|E_{1}\cup B_{r_{j}}(x_{j})|=v_{1}+v_{2}\) imply \(|B_{j}|\to v_{2}\), we conclude by the arbitrariness of \((K_{1},E_{1})\).
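Here the constant in (6.11) is exactly the area of the sphere bounding a ball of volume \(v_{2}\): if \(|B_{\rho}|=\omega_{n+1}\,\rho^{n+1}=v_{2}\), then
\[\mathcal{H}^{n}(\partial B_{\rho})=(n+1)\,\omega_{n+1}\,\rho^{n}=(n+1)\,\omega_{n+1}^{1/(n+1)}\,v_{2}^{n/(n+1)}\,,\]
which is the quantity recovered as \(|B_{j}|\to v_{2}\) in the last estimate of step two.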
_Step three_: Now let \(\{(K_{j},E_{j})\}_{j}\) be a minimizing sequence for \(\Psi_{\rm bk}(v)\). Since \(\Psi_{\rm bk}(v)<\infty\), assumption (5.1) of Theorem 1.4 holds. Therefore there is \((K,E)\in\mathcal{K}_{\rm B}\) such that \(K\cup E^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and such that, up to extracting subsequences,
\[\lim_{j\to\infty}|(E_{j}\Delta E)\cap B_{R}|=0\quad\forall R>0\,,\qquad\liminf_{j\to\infty}\mathcal{F}_{\rm bk}(K_{j},E_{j})\geq\mathcal{F}_{\rm bk}(K,E)\,; \tag{6.12}\]
actually, to be more precise, if \(\mu\) denotes the weak-star limit of \(\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner(\mathcal{R}(K_{j})\cap E_{j}^{(0)})\) in \(\Omega\), then
\[\mu\geq 2\,\mathcal{H}^{n}\llcorner(K\cap E^{(0)})+\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E)\,. \tag{6.13}\]
We _claim_ that
\[(K,E)\mbox{ is a minimizer of }\Psi_{\rm bk}(|E|)\,.\]
(Notice that, at this stage of the argument, we are not excluding that \(v^{*}:=v-|E|\) is positive, nor that \(|E|=0\).) Taking into account (6.11), to prove the claim it suffices to show that
\[\Psi_{\rm bk}(v)\geq\mathcal{F}_{\rm bk}(K,E)+(n+1)\,\omega_{n+1}^{1/(n+1)}\,(v^{*})^{n/(n+1)}\,. \tag{6.14}\]
To see this, we start noticing that, given any sequence \(\{r_{j}\}_{j}\) with \(r_{j}\to\infty\), by (6.12) and (6.13) we have that
\[E_{j}\cap B_{r_{j}}\stackrel{{\rm loc}}{{\to}}E\,,\qquad|E_{j}\setminus B_{r_{j}}|\to v^{*}\,,\qquad\mbox{as }j\to\infty\,, \tag{6.15}\]
\[\liminf_{j\to\infty}\,2\,\mathcal{H}^{n}\big(\mathcal{R}(K_{j})\cap E_{j}^{(0)}\cap B_{r_{j}}\big)+\mathcal{H}^{n}(B_{r_{j}}\cap\partial^{*}E_{j})\geq\mathcal{F}_{\rm bk}(K,E)\,. \tag{6.16}\]
Moreover, since \(|E_{j}|<\infty\), we can choose \(r_{j}\to\infty\) so that \(\mathcal{H}^{n}(E_{j}^{(1)}\cap\partial B_{r_{j}})\to 0\), while, taking into account that \(P(E_{j}\setminus B_{r_{j}})=\mathcal{H}^{n}(E_{j}^{(1)}\cap\partial B_{r_{j}})+\mathcal{H}^{n}((\partial^{*}E_{j})\setminus B_{r_{j}})\), we have
\[\mathcal{F}_{\rm bk}(K_{j},E_{j})\geq 2\,\mathcal{H}^{n}\big(\mathcal{R}(K_{j})\cap E_{j}^{(0)}\cap B_{r_{j}}\big)+\mathcal{H}^{n}(B_{r_{j}}\cap\partial^{*}E_{j})+P(E_{j}\setminus B_{r_{j}})-\mathcal{H}^{n}(E_{j}^{(1)}\cap\partial B_{r_{j}})\,.\]
By combining these facts with (6.15), (6.16), and the Euclidean isoperimetric inequality, we conclude that
\[\Psi_{\rm bk}(v)=\lim_{j\to\infty}\mathcal{F}_{\rm bk}(K_{j},E_{j})\geq\mathcal{F}_{\rm bk}(K,E)+(n+1)\,\omega_{n+1}^{1/(n+1)}\,\lim_{j\to\infty}|E_{j}\setminus B_{r_{j}}|^{n/(n+1)}\,,\]
that is (6.14).

_Step four_: We prove the existence of minimizers in \(\Psi_{\rm bk}(v)\), \(v>0\). By step three, there is \((K,E)\in\mathcal{K}_{\rm B}\) such that \(K\cup E^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), \((K,E)\) is a minimizer of \(\Psi_{\rm bk}(|E|)\) and, combining (6.11) and (6.14),
\[\Psi_{\rm bk}(v)=\Psi_{\rm bk}(|E|)+(n+1)\,\omega_{n+1}^{1/(n+1)}\,(v-|E|)^{n/(n+1)}\,. \tag{6.17}\]
Since \((K,E)\) is a minimizer in \(\Psi_{\rm bk}(|E|)\), by step one we can assume that \(K\) is \(\mathcal{H}^{n}\)-rectifiable and that both \(K\) and \(E\) are bounded.
We can thus find \(B_{r}(x_{0})\subset\subset\Omega\) such that \(|B_{r}(x_{0})|=v-|E|\), \(|B_{r}(x_{0})\cap E|=0\), and \(\mathcal{H}^{n}(K\cap B_{r}(x_{0}))=0\). In this way \((K_{*},E_{*})=(K\cup\partial B_{r}(x_{0}),E\cup B_{r}(x_{0}))\in\mathcal{K}_{\rm B}\) is trivially \(\mathcal{C}\)-spanning \(\mathbf{W}\) and such that \(|E_{*}|=v\), and thus is a competitor for \(\Psi_{\rm bk}(v)\). At the same time,
\[\mathcal{F}_{\rm bk}(K_{*},E_{*})=\mathcal{F}_{\rm bk}(K,E)+(n+1)\,\omega_{n+1}^{1/(n+1)}\,(v-|E|)^{n/(n+1)}\]
so that, by (6.17), \((K_{*},E_{*})\) is a minimizer of \(\Psi_{\rm bk}(v)\). Having proved that minimizers of \(\Psi_{\rm bk}(v)\) do indeed exist, a further application of step one completes the proof of statement (i).

_Step five_: We finally prove statement (ii). Let us consider a sequence \(v_{j}\to 0^{+}\) and corresponding minimizers \((K_{j},E_{j})\) of \(\Psi_{\rm bk}(v_{j})\). By (6.11) with \(v_{1}=0\) and \(v_{2}=v_{j}\) we see that \(\{(K_{j},E_{j})\}_{j}\) satisfies the assumptions of Theorem 1.4. Since \(|E_{j}|=v_{j}\to 0\), setting \(\mu_{j}=\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner(\mathcal{R}(K_{j})\cap E_{j}^{(0)})\), the conclusion of Theorem 1.4 is that there are a Radon measure \(\mu\) in \(\Omega\) and a Borel set \(K\) such that \(K\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and \(\mu_{j}\stackrel{*}{\rightharpoonup}\mu\) as \(j\to\infty\).

with \(K^{*}=\bigcup_{i}\partial^{*}U_{i}\). Now, thanks to (1.40), (1.41), and the inclusion in (1.46), we have
\[U^{(1)}\cap\partial^{*}(U\cap E)\stackrel{{\mathcal{H}^{n}}}{{=}}U^{(1)}\cap\partial^{*}E\stackrel{{\mathcal{H}^{n}}}{{\subset}}U^{(1)}\cap K^{*}\,,\]
which combined with (7.2) gives
\[2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*})=\mathcal{H}^{n}\big(U^{(1)}\cap\partial^{*}E\big)+\sum_{i}\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}U_{i})\,. \tag{7.3}\]
Therefore, using in order
\[U^{(1)}\cap\partial^{*}E\stackrel{{\mathcal{H}^{n}}}{{\subset}}U^{(1)}\cap K^{*}\,,\qquad K^{*}\stackrel{{\mathcal{H}^{n}}}{{\subset}}K\,,\qquad\mathcal{H}^{n}(K^{*}\cap E^{(1)})=0\,,\]
and Federer's theorem (1.37), we obtain
\[\mathcal{F}_{\rm bk}(K,E;U^{(1)}) = \mathcal{H}^{n}(U^{(1)}\cap\partial^{*}E)+2\,\mathcal{H}^{n}(U^{(1)}\cap K\cap E^{(0)})\]
\[= 2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*}\cap\partial^{*}E)-\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}E)+2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*}\cap E^{(0)})+2\,\mathcal{H}^{n}(U^{(1)}\cap(K\setminus K^{*})\cap E^{(0)})\]
\[= 2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*})-\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}E)+2\,\mathcal{H}^{n}(U^{(1)}\cap(K\setminus K^{*})\cap E^{(0)})\]
\[= \sum_{i}\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}U_{i})+2\,\mathcal{H}^{n}(U^{(1)}\cap(K\setminus K^{*})\cap E^{(0)})\,,\]
where in the last identity we have used (7.3).

The next lemma is a slight reformulation of [13, Lemma 10] and [13, Lemma 4.1].

**Lemma 7.2**.: _If \(\mathbf{W}\) is closed, \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), \(S\) is relatively closed in \(\Omega\) and \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \(B\subset\Omega\) is an open ball, then for any \(\gamma\in\mathcal{C}\) we either have \(\gamma(\mathbb{S}^{1})\cap(S\setminus B)\neq\varnothing\), or \(\gamma(\mathbb{S}^{1})\) has non-empty intersection with at least two connected components of \(B\setminus S\)._
In particular, it intersects the boundaries of both components._

Figure 7.1. The situation in Lemma 7.1: (a) a depiction of the left hand side of (7.1), where \(K\setminus\partial^{*}E\) is drawn with a bold line to indicate that, in the computation of \(\mathcal{F}_{\rm bk}(K,E;U^{(1)})=\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}E)+2\,\mathcal{H}^{n}(U^{(1)}\cap K\setminus\partial^{*}E)\), it is counted with multiplicity \(2\); (b) a depiction of the right hand side of (7.1), where \(K\setminus K^{*}\) is drawn with a bold line to indicate that it has to be counted with multiplicity \(2\).

We are now ready for the proof of Theorem 1.6.

Proof of Theorem 1.6. The opening part of the statement of Theorem 1.6 is Theorem 6.2-(i), therefore we can directly consider a minimizer \((K,E)\in\mathcal{K}\) of \(\Psi_{\rm bk}(v)\) such that both \(E\) and \(K\) are bounded, \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and
\[K\cap E^{(1)}=\varnothing\,, \tag{7.4}\]
and begin by proving the existence of a closed set \(\Sigma\subset K\) such that (i): \(\Sigma=\varnothing\) if \(1\leq n\leq 6\), \(\Sigma\) is locally finite in \(\Omega\) if \(n=7\), and \(\mathcal{H}^{s}(\Sigma)=0\) for every \(s>n-7\) if \(n\geq 8\); (ii): \((\partial^{*}E)\setminus\Sigma\) is a smooth hypersurface with constant mean curvature; (iii): \(K\setminus(\operatorname{cl}(E)\cup\Sigma)\) is a smooth minimal hypersurface; (iv)\({}_{\alpha}\): if \(x\in[\Omega\cap(\partial E\setminus\partial^{*}E)]\setminus\Sigma\), then there are \(r>0\), \(\nu\in\mathbb{S}^{n}\), \(u_{1},u_{2}\in C^{1,\alpha}(\mathbf{D}^{\nu}_{r}(x);(-r/4,r/4))\) (\(\alpha\in(0,1/2)\) arbitrary) such that \(u_{1}(x)=u_{2}(x)=0\), \(u_{1}\leq u_{2}\) on \(\mathbf{D}^{\nu}_{r}(x)\), \(\{u_{1}<u_{2}\}\) and \(\operatorname{int}\{u_{1}=u_{2}\}\) are both non-empty, and
\[\mathbf{C}^{\nu}_{r}(x)\cap K = \cup_{i=1,2}\big\{y+u_{i}(y)\,\nu:y\in\mathbf{D}^{\nu}_{r}(x)\big\}\,, \tag{7.5}\]
\[\mathbf{C}^{\nu}_{r}(x)\cap\partial^{*}E = \cup_{i=1,2}\big\{y+u_{i}(y)\,\nu:y\in\{u_{1}<u_{2}\}\big\}\,, \tag{7.6}\]
\[\mathbf{C}^{\nu}_{r}(x)\cap E = \big\{y+t\,\nu:y\in\{u_{1}<u_{2}\}\,,u_{1}(y)<t<u_{2}(y)\big\}\,. \tag{7.7}\]
(The sharp version of conclusion (iv), that is conclusion (iv)\({}_{\alpha}\) with \(\alpha=1\), and conclusion (v), will be proved in the final step five of this proof.) The key step to prove conclusions (i)-(iv)\({}_{\alpha}\) is showing the validity of the following claim.

_Claim_: There exist positive constants \(\Lambda\) and \(r_{0}\) such that if \(B_{2r}(x)\subset\subset\Omega\), then, denoting by \(\{U_{j}\}_{j}\) the open connected components of \(B_{2r}(x)\setminus(E\cup K)\),
\[B_{r}(x)\cap K=B_{r}(x)\cap\cup_{j}\partial U_{j}\,, \tag{7.8}\]
\[\#\big\{j:B_{r}(x)\cap U_{j}\neq\varnothing\big\}<\infty\,, \tag{7.9}\]
\[B_{2\,r}(x)\cap\operatorname{cl}\left(\partial^{*}U_{j}\right)=B_{2\,r}(x)\cap\partial U_{j}\,, \tag{7.10}\]
\[P(U_{j};B_{r}(x))\leq P(V_{j};B_{r}(x))+\Lambda\left|U_{j}\Delta V_{j}\right|, \tag{7.11}\]
whenever \(V_{j}\) satisfies \(V_{j}\Delta U_{j}\subset\subset B_{r}(x)\) and \(\operatorname{diam}\left(U_{j}\Delta V_{j}\right)<r_{0}\).

_Deduction of (i)-(iv) from the claim_: Let \(\{B_{2r_{i}}(x_{i})\}_{i\in\mathbb{N}}\) be a countable family of balls, locally finite in \(\Omega\), such that \(B_{2r_{i}}(x_{i})\subset\subset\Omega\) and \(\Omega=\cup_{i}B_{r_{i}}(x_{i})\).
Setting for brevity
\[\Omega_{i}=B_{r_{i}}(x_{i})\,,\]
by (7.9) there are finitely many connected components \(\{U^{i}_{j}\}_{j=1}^{J_{i}}\) of \(B_{2r_{i}}(x_{i})\setminus(E\cup K)\) such that \(U^{i}_{j}\cap\Omega_{i}\neq\varnothing\). Thanks to (7.11), we deduce from [10, Theorem 28.1] that, if we set \(\Sigma^{i}_{j}=\Omega_{i}\cap(\partial U^{i}_{j}\setminus\partial^{*}U^{i}_{j})\), then \(\Omega_{i}\cap\partial^{*}U^{i}_{j}\) is a \(C^{1,\alpha}\)-hypersurface for every \(\alpha\in(0,1/2)\), and \(\Sigma^{i}_{j}\) is a closed set that satisfies the dimensional estimates listed in conclusion (i). In particular, if we set
\[\Sigma=\cup_{i\in\mathbb{N}}\cup_{j=1}^{J_{i}}\Sigma^{i}_{j}\,, \tag{7.12}\]
then \(\Sigma\subset K\) thanks to \(\Sigma^{i}_{j}\subset\Omega_{i}\cap\partial U^{i}_{j}\) and to (7.8), and conclusion (i) holds by the local finiteness of the covering \(\{B_{2r_{i}}(x_{i})\}_{i}\) of \(\Omega\) and by \(J_{i}<\infty\) for every \(i\).

Before moving to prove the remaining conclusions, we first notice that (7.8) gives
\[\Omega_{i}\cap K\setminus\Sigma = \Omega_{i}\cap\cup_{j=1}^{J_{i}}\partial U^{i}_{j}\setminus\Sigma \subset \Omega_{i}\cap\cup_{j=1}^{J_{i}}(\partial U^{i}_{j}\setminus\Sigma^{i}_{j})\ =\ \Omega_{i}\cap\cup_{j=1}^{J_{i}}\partial^{*}U^{i}_{j}\,; \tag{7.13}\]
second, we notice that, since \(K\) is \(\mathcal{H}^{n}\)-finite,
\[\{E\cap\Omega_{i},U^{i}_{j}\cap\Omega_{i}\}_{j=1}^{J_{i}}\mbox{ is a Caccioppoli partition of }\Omega_{i}\,; \tag{7.14}\]
finally, we recall that, by (1.23), for every \(X\in C^{1}_{c}(\Omega;\mathbb{R}^{n+1})\) it holds
\[\lambda\,\int_{\partial^{*}E}X\cdot\nu_{E}\,d\mathcal{H}^{n}=\int_{\partial^{*}E}\operatorname{div}^{K}X\,d\mathcal{H}^{n}+2\,\int_{K\cap E^{(0)}}\operatorname{div}^{K}X\,d\mathcal{H}^{n}\,. \tag{7.15}\]
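In combining (7.14) with (1.47) below, the fact being used (presumably the content of (1.47), i.e. the standard structure theorem for Caccioppoli partitions) is the following: if \(\{F_{h}\}_{h}\) is a Caccioppoli partition of an open set \(A\), then
\[A\ \stackrel{{\mathcal{H}^{n}}}{{=}}\ \bigcup_{h}F_{h}^{(1)}\,\cup\bigcup_{h\neq k}\big(\partial^{*}F_{h}\cap\partial^{*}F_{k}\big)\,,\]
so that \(\mathcal{H}^{n}\)-a.e. point of \(A\) lying on some \(\partial^{*}F_{h}\) belongs to exactly one other reduced boundary \(\partial^{*}F_{k}\), \(k\neq h\).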
_To prove conclusion (ii)_: Given \(x\in\Omega\cap\partial^{*}E\setminus\Sigma\), there is \(i\in\mathbb{N}\) such that \(x\in\Omega_{i}\cap\partial^{*}E\). By \(\Omega\cap\partial^{*}E\subset K\) and by (7.13) there is \(j(x)\in\{1,...,J_{i}\}\) such that \(x\in\partial^{*}U^{i}_{j(x)}\). By (7.14), we can use (1.47) and \(x\in\Omega\cap\partial^{*}E\cap\partial^{*}U^{i}_{j(x)}\) to deduce that
\[x\not\in\cup_{j\neq j(x)}\partial^{*}U^{i}_{j}\,. \tag{7.16}\]
Let \(r>0\) be such that \(B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}\) is a \(C^{1}\)-hypersurface. Since \(\Sigma\) contains \(\Omega_{i}\cap\cup_{j}(\partial U^{i}_{j}\setminus\partial^{*}U^{i}_{j})\) and (7.10) holds, (7.16) implies that there is \(r>0\) such that
\[B_{r}(x)\subset\subset\Omega_{i}\setminus\Sigma\,,\qquad B_{r}(x)\cap\cup_{j}\partial U^{i}_{j}=B_{r}(x)\cap\partial U^{i}_{j(x)}=B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}\,. \tag{7.17}\]
Since \(B_{r}(x)\cap\cup_{j\neq j(x)}\partial U^{i}_{j}=\varnothing\) and \(B_{r}(x)\cap U^{i}_{j(x)}\neq\varnothing\), we also have that
\[B_{r}(x)\cap\cup_{j}U^{i}_{j}=B_{r}(x)\cap U^{i}_{j(x)}\,,\]
and thus, by (7.14), that \(\{E\cap B_{r}(x),U^{i}_{j(x)}\cap B_{r}(x)\}\) is an \(\mathcal{H}^{n}\)-partition of \(B_{r}(x)\). In particular, \(B_{r}(x)\cap\partial^{*}E=B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}\): intersecting with \(B_{r}(x)\) in (7.13) and taking into account (7.17), we conclude that
\[B_{r}(x)\cap K = B_{r}(x)\cap[\Omega_{i}\cap K\setminus\Sigma]\ \subset\ B_{r}(x)\cap[\Omega_{i}\cap\cup_{j=1}^{J_{i}}\partial^{*}U^{i}_{j}]\ =\ B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}\ =\ B_{r}(x)\cap\partial^{*}E\,, \tag{7.18}\]
and (7.15) implies that, for every \(X\in C^{1}_{c}(B_{r}(x);\mathbb{R}^{n+1})\),
\[\lambda\int_{\partial^{*}E}X\cdot\nu_{E}\,d\mathcal{H}^{n}=\int_{\partial^{*}E}\operatorname{div}^{K}X\,d\mathcal{H}^{n}\,. \tag{7.19}\]
Hence, \(\partial^{*}E\) can be represented, locally in \(B_{r}(x)\), as the graph of distributional solutions of class \(C^{1,\alpha}\) to the constant mean curvature equation. By Schauder's theory, \(B_{r}(x)\cap\partial^{*}E\) is a smooth hypersurface whose mean curvature with respect to \(\nu_{E}\) is equal to \(\lambda\), thanks to (7.19).

_To prove conclusions (iii) and (iv)_: Let us now pick \(x\in K\setminus(\Sigma\cup\partial^{*}E)\) and let \(i\in\mathbb{N}\) be such that \(x\in\Omega_{i}\cap K\). By (7.13) there is \(j(x)\in\{1,...,J_{i}\}\) such that \(x\in\partial^{*}U^{i}_{j(x)}\). By (7.14) and by (1.47), either \(x\in\partial^{*}E\) (which is excluded from the onset), or there is \(k(x)\neq j(x)\) such that \(x\in\partial^{*}U^{i}_{k(x)}\). We have thus proved that
\[x\in\partial^{*}U^{i}_{j(x)}\cap\partial^{*}U^{i}_{k(x)}\,,\qquad x\not\in\cup_{j\neq j(x),k(x)}\partial^{*}U^{i}_{j}\,. \tag{7.20}\]
To prove conclusion (iii) we notice that if we are in the case when \(x\in K\setminus(\Sigma\cup\partial E)=K\setminus(\Sigma\cup\operatorname{cl}(E))\) (thanks to \(K\cap E=\varnothing\)), then \(x\not\in\operatorname{cl}(E)\) implies that, for some \(r>0\), \(B_{r}(x)\cap(\Sigma\cup\operatorname{cl}(E))=\varnothing\). In particular, by (7.14) and (7.20), \(\{B_{r}(x)\cap U^{i}_{j(x)},B_{r}(x)\cap U^{i}_{k(x)}\}\) is an \(\mathcal{H}^{n}\)-partition of \(B_{r}(x)\), and by (7.13)
\[B_{r}(x)\cap K=B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}=B_{r}(x)\cap\partial^{*}U^{i}_{k(x)}\,,\]
which is a \(C^{1,\alpha}\)-hypersurface. Under these conditions, (7.15) boils down to
\[\int_{K}\operatorname{div}^{K}X\,d\mathcal{H}^{n}=0\,,\qquad\forall X\in C^{1}_{c}(B_{r}(x);\mathbb{R}^{n+1})\,, \tag{7.21}\]
so that \(K\) can be represented, locally in \(B_{r}(x)\), as the graph of distributional solutions of class \(C^{1,\alpha}\) to the minimal surface equation. By Schauder's theory, \(B_{r}(x)\cap K\) is a smooth minimal surface.
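In both applications of Schauder's theory the equation in question is the same: in local graph coordinates, and up to the sign convention for the normal, a \(C^{1,\alpha}\) function \(u\) representing \(\partial^{*}E\) (respectively, \(K\)) satisfies, in the distributional sense,
\[\operatorname{div}\Big(\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}}\Big)=\lambda\qquad\Big(\text{respectively, }\operatorname{div}\Big(\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}}\Big)=0\Big)\,;\]
since for \(u\in C^{1,\alpha}\) the coefficients of this equation are \(C^{0,\alpha}\), Schauder estimates give \(u\in C^{2,\alpha}\), and iterating the argument yields \(u\in C^{\infty}\).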
To finally prove conclusion (iv), let us assume that \(x\in\Omega\cap(\partial E\setminus\partial^{*}E)\setminus\Sigma\). In this case (7.14) and (7.20) do not imply that \(\{B_{r}(x)\cap U^{i}_{j(x)},B_{r}(x)\cap U^{i}_{k(x)}\}\) is an \(\mathcal{H}^{n}\)-partition of \(B_{r}(x)\); actually, by \(\Omega\cap\partial E=\Omega\cap\operatorname{cl}(\partial^{*}E)\), the fact that \(x\in\partial E\) implies that \(B_{s}(x)\cap\partial^{*}E\neq\varnothing\) for every \(s>0\), so that \(|B_{s}(x)\cap E|>0\) for every \(s>0\), and the situation is such that, for every \(s<r\),
\[\{B_{s}(x)\cap U^{i}_{j(x)},B_{s}(x)\cap U^{i}_{k(x)},B_{s}(x)\cap E\}\text{ is an }\mathcal{H}^{n}\text{-partition of }B_{s}(x)\]
with all three sets in the partition having positive measure. Now, by the first inclusion in (7.19), there exists \(\nu\in\mathbb{S}^{n}\) such that, up to further decreasing the value of \(r\) and for some \(u_{1},u_{2}\in C^{1,\alpha}(\mathbf{D}_{r}^{\nu}(x);(-r/4,r/4))\) with \(u_{1}(x)=u_{2}(x)=0\) and \(\nabla u_{1}(x)=\nabla u_{2}(x)=0\), it must hold
\[\mathbf{C}_{r}^{\nu}(x)\cap U_{j(x)}^{i}=\left\{y+t\,\nu:y\in\mathbf{D}_{r}^{\nu}(x)\,,t>u_{2}(y)\right\},\qquad\mathbf{C}_{r}^{\nu}(x)\cap U_{k(x)}^{i}=\left\{y+t\,\nu:y\in\mathbf{D}_{r}^{\nu}(x)\,,t<u_{1}(y)\right\}.\]
By \(U_{j(x)}^{i}\cap U_{k(x)}^{i}=\varnothing\) we have \(u_{1}\leq u_{2}\) on \(\mathbf{D}_{r}^{\nu}(x)\), so that (7.21) gives
\[\mathbf{C}_{r}^{\nu}(x)\cap E=\left\{y+t\,\nu:y\in\left\{u_{1}<u_{2}\right\},u_{1}(y)<t<u_{2}(y)\right\},\]
and \(\left\{u_{1}<u_{2}\right\}\) is non-empty. Again by (7.19) and (7.13) we also have that
\[\mathbf{C}_{r}^{\nu}(x)\cap K = \cup_{k=1}^{2}\left\{y+u_{k}(y)\,\nu:y\in\mathbf{D}_{r}^{\nu}(x)\right\},\]
\[\mathbf{C}_{r}^{\nu}(x)\cap\partial^{*}U_{j(x)}^{i}\cap\partial^{*}U_{k(x)}^{i} = \left\{y+u_{1}(y)\,\nu:y\in\mathbf{D}_{r}^{\nu}(x)\cap\left\{u_{1}=u_{2}\right\}\right\},\]
\[\mathbf{C}_{r}^{\nu}(x)\cap\partial^{*}E = \cup_{k=1}^{2}\left\{y+u_{k}(y)\,\nu:y\in\mathbf{D}_{r}^{\nu}(x)\cap\left\{u_{1}<u_{2}\right\}\right\}.\]
This completes the proof of conclusion (iv)\({}_{\alpha}\).

_Proof of the claim_: Assuming without loss of generality that \(x=0\), we want to find \(\Lambda\) and \(r_{0}\) positive such that if \(B_{2r}\subset\subset\Omega\), then, denoting by \(\{U_{j}\}_{j}\) the open connected components of \(B_{2r}\setminus(E\cup K)\), we have
\[B_{r}\cap K=B_{r}\cap\cup_{j}\partial U_{j}\,, \tag{7.22}\]
\[\#\big\{j:B_{r}\cap U_{j}\neq\varnothing\big\}<\infty\,, \tag{7.23}\]
\[B_{2\,r}\cap\mathrm{cl}\,(\partial^{*}U_{j})=B_{2\,r}\cap\partial U_{j}\,, \tag{7.24}\]
and that \(P(U_{j};B_{r})\leq P(V_{j};B_{r})+\Lambda\,|U_{j}\Delta V_{j}|\) whenever \(V_{j}\) satisfies \(V_{j}\Delta U_{j}\subset\subset B_{r}\) and \(\mathrm{diam}\,(U_{j}\Delta V_{j})<r_{0}\).

_Step one_: We prove that
\[K\cap\mathrm{int}\,U_{j}^{(1)}=\varnothing\,,\qquad\mathrm{int}\,U_{j}^{(1)}=U_{j}\quad\forall j\,. \tag{7.25}\]
To this end, we begin by noticing that, for every \(j\),
\[B_{2\,r}\cap\partial U_{j} \subset B_{2\,r}\cap K\,, \tag{7.26}\]
\[U_{j}\ \subset\ \mathrm{int}(U_{j}^{(1)})\ \subset\ B_{2\,r}\cap\mathrm{cl}\,U_{j}\ \subset\ B_{2\,r}\cap(U_{j}\cup K)\,, \tag{7.27}\]
\[B_{2\,r}\cap\partial[\mathrm{int}(U_{j}^{(1)})] \subset B_{2\,r}\cap K\,. \tag{7.28}\]
Indeed, for every \(k\) and \(j\), \(U_{k}\cap U_{j}=\varnothing\) with \(U_{k}\) and \(U_{j}\) open gives \(U_{k}\cap\partial U_{j}=\varnothing\), so that \(B_{2r}\cap\partial U_{j}\subset B_{2r}\setminus\cup_{k}U_{k}=B_{2\,r}\cap(E\cup K)=B_{2\,r}\cap K\), thanks to the fact that \(E\cap\partial U_{j}=\varnothing\) (as \(U_{j}\cap E=\varnothing\)). Having proved (7.26), one easily deduces the third inclusion in (7.27), while the first two are evident. Finally, from (7.27), and since \(K\) is closed, we find
\[B_{2\,r}\cap\mathrm{cl}\left(\mathrm{int}(U_{j}^{(1)})\right)\subset B_{2\,r}\cap\left(\mathrm{cl}\,(U_{j})\cup K\right),\]
so that, subtracting \(\mathrm{int}(U_{j}^{(1)})\), and recalling that \(U_{j}\subset\mathrm{int}(U_{j}^{(1)})\), we find
\[B_{2\,r}\cap\partial[\mathrm{int}(U_{j}^{(1)})]\subset B_{2\,r}\cap(K\cup\partial U_{j})\]
and deduce (7.28) from (7.26).
Next, we claim that,
\[\text{if }K_{*}=K\setminus\bigcup_{j}\mathrm{int}\,U_{j}^{(1)},\text{ then }(K_{*},E)\in\mathcal{K}\text{ and }K_{*}\cup E\text{ is }\mathcal{C}\text{-spanning}\,. \tag{7.29}\]
_To prove that \((K_{*},E)\in\mathcal{K}\)_: The only assertion that is not immediate is the inclusion \(\Omega\cap\partial E\subset K_{*}\). To prove it we notice that if \(z\in\mathrm{int}\,U_{j}^{(1)}\), then \(B_{s}(z)\subset\mathrm{int}\,U_{j}^{(1)}\) for some \(s>0\), so that \(U_{j}\cap E=\varnothing\) gives \(|E\cap B_{s}(z)|=0\). Since \(E\) is open this implies \(B_{s}(z)\cap E=\varnothing\), hence \(z\notin\partial E\).

_To prove that \(E\cup K_{*}\) is \(\mathcal{C}\)-spanning_: Since \(E\cup K_{*}\) is relatively closed in \(\Omega\), it suffices to verify that for arbitrary \(\gamma\in\mathcal{C}\), \((K_{*}\cup E)\cap\gamma(\mathbb{S}^{1})\neq\varnothing\). Since \(K\setminus B_{2r}=K_{*}\setminus B_{2r}\), we directly assume that \((K\cup E)\cap(\gamma(\mathbb{S}^{1})\setminus B_{2r})=\varnothing\). Since \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), by Lemma 7.2, there are two distinct connected components \(U_{j}\) and \(U_{k}\) of \(B_{2r}\setminus(K\cup E)\) such that \(\gamma(\mathbb{S}^{1})\cap B_{2\,r}\cap(\partial U_{j})\cap(\partial U_{k})\neq\varnothing\). We conclude by showing that
\[B_{2\,r}\cap(\partial U_{j})\cap(\partial U_{k})\subset K_{*}\,,\qquad\forall j\neq k\,. \tag{7.30}\]
Indeed any point in \(B_{2r}\cap(\partial U_{j})\cap(\partial U_{k})\) is an accumulation point for both \(U_{j}\) and \(U_{k}\), and thus, by (7.27), for both \(\mathrm{int}\,U_{j}^{(1)}\) and \(\mathrm{int}\,U_{k}^{(1)}\). Since \(U_{j}\cap U_{k}=\varnothing\) implies \((\mathrm{int}\,U_{j}^{(1)})\cap(\mathrm{int}\,U_{k}^{(1)})=\varnothing\), an accumulation point for both \(\mathrm{int}\,U_{j}^{(1)}\) and \(\mathrm{int}\,U_{k}^{(1)}\) must lie in \([\partial(\mathrm{int}\,U_{j}^{(1)})]\cap[\partial(\mathrm{int}\,U_{k}^{(1)})]\). We thus deduce (7.30) from (7.28), and complete the proof of (7.29).

_To deduce (7.25) from (7.29), and complete step one_: By (7.29), \((K_{*},E)\) is admissible in \(\Psi_{\mathrm{bk}}(v)\). Since \((K,E)\) is a minimizer of \(\Psi_{\mathrm{bk}}(v)\), we conclude that \(\mathcal{H}^{n}(K\setminus K_{*})=0\). If there were \(z\in\mathrm{int}(U_{j}^{(1)})\cap K\) for some \(j\), then by (6.3), and with \(\rho>0\) such that \(B_{\rho}(z)\subset\mathrm{int}(U_{j}^{(1)})\), we would find
\[c\,\rho^{n}\leq\mathcal{H}^{n}(K\cap B_{\rho}(z))\leq\mathcal{H}^{n}(K\cap\mathrm{int}(U_{j}^{(1)}))\leq\mathcal{H}^{n}(K\setminus K_{*})=0\,.\]
This shows that \(K\cap\mathrm{int}(U_{j}^{(1)})=\varnothing\). Using this last fact in combination with \(\mathrm{int}(U_{j}^{(1)})\subset B_{2\,r}\cap(U_{j}\cup K)\) from (7.27), we conclude that \(\mathrm{int}(U_{j}^{(1)})\subset U_{j}\), and thus that \(\mathrm{int}(U_{j}^{(1)})=U_{j}\) by the first inclusion in (7.27).

_Step two_: We prove (7.24), i.e. \(B_{2\,r}\cap\mathrm{cl}\,(\partial^{*}U_{j})=B_{2\,r}\cap\partial U_{j}\). The \(\subset\) inclusion is a general fact, see (1.35). To prove the reverse inclusion we recall, again from (1.35), that \(z\in B_{2\,r}\cap\mathrm{cl}\,(\partial^{*}U_{j})\) if and only if \(0<|B_{\rho}(z)\cap U_{j}|<|B_{\rho}|\) for every \(\rho>0\).
Now, if \(z\in B_{2\,r}\cap\partial U_{j}\), then clearly, being \(U_{j}\) open, we have \(|U_{j}\cap B_{\rho}(z)|>0\) for every \(\rho>0\); moreover, should \(|B_{\rho}(z)\cap U_{j}|=|B_{\rho}|\) hold for some \(\rho\), then we would have \(z\in\mathrm{int}(U_{j}^{(1)})\), and thus \(z\in U_{j}\) by (7.25), a contradiction.

_Step three_: We prove, for each \(j\), the \(\mathcal{H}^{n}\)-equivalence of \(\partial^{*}U_{j}\) and \(\partial U_{j}\), that is
\[\mathcal{H}^{n}(B_{2\,r}\cap\partial U_{j}\setminus\partial^{*}U_{j})=0\,. \tag{7.31}\]
By a standard argument [13, Theorem 21.11] it will suffice to prove the existence of \(r_{0}>0\) and \(\alpha,\beta\in(0,1/2)\) (depending on \(n\)) such that, for each \(j\) and each \(z\in B_{2\,r}\cap\partial U_{j}\), it holds
\[\alpha\,|B_{\rho}|\leq|B_{\rho}(z)\cap U_{j}|\leq(1-\beta)\,|B_{\rho}|\,, \tag{7.32}\]
for every \(\rho<\min\{r_{0},\mathrm{dist}(z,\partial B_{2\,r})\}\).

_Proof of the lower bound in (7.32)_: Since diffeomorphic images of \(\mathcal{C}\)-spanning sets are \(\mathcal{C}\)-spanning, a standard argument using diffeomorphic volume-fixing variations shows the existence of positive constants \(\Lambda\) and \(r_{0}\) such that if \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\), \(K^{\prime}\cup(E^{\prime})^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \((K^{\prime}\Delta K)\cup(E^{\prime}\Delta E)\subset\subset B_{\rho}(z)\) for some \(\rho<r_{0}\) and \(B_{\rho}(z)\subset\subset B_{2\,r}\), then
\[\mathcal{F}_{\mathrm{bk}}(K,E)\leq\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime})+\Lambda\,|E\Delta E^{\prime}|\,. \tag{7.33}\]
We claim that we can apply (7.33) with
\[E^{\prime}=E\cup\big(B_{\rho}(z)\cap\mathrm{cl}\,U_{j}\big)\,,\quad K^{\prime}=\big(K\cup(U_{j}^{(1)}\cap\partial B_{\rho}(z))\big)\setminus(E^{\prime})^{(1)}\,, \tag{7.34}\]
where \(\rho<r_{0}\), \(B_{\rho}(z)\subset\subset B_{2\,r}\), and
\[\mathcal{H}^{n}\big(\partial B_{\rho}(z)\cap[\partial^{*}E\cup\partial^{*}U_{j}]\big)=\mathcal{H}^{n}(K\cap\partial B_{\rho}(z))=0\,. \tag{7.35}\]
Indeed, \(K^{\prime}\cup(E^{\prime})^{(1)}\) contains \(K\cup E^{(1)}\), and thus \(K\cup E\) (\(E\) being open), and is thus \(\mathcal{C}\)-spanning. To check that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\), we argue as follows. First, we notice that \(\mathcal{H}^{n}\big(\big\{\nu_{E}=\nu_{B_{\rho}(z)\cap\operatorname{cl}(U_{j})}\big\}\big)=0\), since this set is \(\mathcal{H}^{n}\)-contained in the union of \(\partial B_{\rho}(z)\cap\partial^{*}E\) and \(\{\nu_{E}=\nu_{\operatorname{cl}(U_{j})}\}\), that are \(\mathcal{H}^{n}\)-negligible by (7.35) and by the fact that \(\nu_{E}=-\nu_{\operatorname{cl}(U_{j})}\) \(\mathcal{H}^{n}\)-a.e. on \(\partial^{*}E\cap\partial^{*}\operatorname{cl}(U_{j})\), thanks to \(|E\cap\operatorname{cl}(U_{j})|=0\). By \(\mathcal{H}^{n}(\{\nu_{E}=\nu_{B_{\rho}(z)\cap\operatorname{cl}(U_{j})}\})=0\) and (1.39) we thus have
\[\Omega\cap\partial^{*}E^{\prime}\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\left\{\big[E^{(0)}\cap\partial^{*}\big(B_{\rho}(z)\cap\operatorname{cl}U_{j}\big)\big]\cup\big[\big(B_{\rho}(z)\cap\operatorname{cl}U_{j}\big)^{(0)}\cap\partial^{*}E\big]\right\}. \tag{7.36}\]
Since \(U_{j}\) is Lebesgue equivalent to \(\operatorname{cl}(U_{j})\) (indeed, \(B_{2\,r}\cap\partial U_{j}\subset K\)), we have \(U_{j}^{(1)}=[\operatorname{cl}(U_{j})]^{(1)}\) and \(\partial^{*}[\operatorname{cl}(U_{j})]=\partial^{*}U_{j}\), so that (1.40) and (7.35) give
\[\partial^{*}\big(B_{\rho}(z)\cap\operatorname{cl}(U_{j})\big)\stackrel{{\mathcal{H}^{n}}}{{=}}\big\{[\operatorname{cl}(U_{j})]^{(1)}\cap\partial B_{\rho}(z)\big\}\cup\big\{B_{\rho}(z)\cap\partial^{*}[\operatorname{cl}(U_{j})]\big\}\]
\[=\big(U_{j}^{(1)}\cap\partial B_{\rho}(z)\big)\cup\big(B_{\rho}(z)\cap\partial^{*}U_{j}\big)\subset\big(U_{j}^{(1)}\cap\partial B_{\rho}(z)\big)\cup K\,, \tag{7.37}\]
by \(B_{2\,r}\cap\partial U_{j}\subset K\). By (7.36) and \(\mathcal{H}^{n}((E^{\prime})^{(1)}\cap\partial^{*}E^{\prime})=0\) we thus find that
\[\Omega\cap\partial^{*}E^{\prime}\cap\partial^{*}\big(B_{\rho}(z)\cap\operatorname{cl}(U_{j})\big)\stackrel{{\mathcal{H}^{n}}}{{\subset}}K^{\prime}\,. \tag{7.38}\]
Moreover, by \(\Omega\cap\partial^{*}E\subset\Omega\cap\partial E\subset K\) and
\[(\partial^{*}E)\cap\big(B_{\rho}(z)\cap\operatorname{cl}U_{j}\big)^{(0)}\subset E^{(1/2)}\cap\big(B_{\rho}(z)\cap\operatorname{cl}U_{j}\big)^{(0)}\subset\mathbb{R}^{n+1}\setminus(E^{\prime})^{(1)}\,,\]
we find \((\partial^{*}E)\cap\big(B_{\rho}(z)\cap\operatorname{cl}U_{j}\big)^{(0)}\subset K\setminus(E^{\prime})^{(1)}\subset K^{\prime}\), which combined with (7.38) finally proves the \(\mathcal{H}^{n}\)-containment of \(\Omega\cap\partial^{*}E^{\prime}\) in \(K^{\prime}\), and thus \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\). We have thus proved that \((K^{\prime},E^{\prime})\) as in (7.34) is admissible in (7.33). Since \(\mathcal{F}_{\mathrm{bk}}(K,E;\partial B_{\rho}(z))=0\) by (7.35) and \(\mathcal{F}_{\mathrm{bk}}(K,E;A)=\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};A)\) if \(A=\Omega\setminus\operatorname{cl}(B_{\rho}(z))\), we deduce from (7.33) that
\[\mathcal{F}_{\mathrm{bk}}(K,E;B_{\rho}(z))\leq\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};\operatorname{cl}(B_{\rho}(z)))+\Lambda\,|E\Delta E^{\prime}|\,. \tag{7.39}\]
To exploit (7.39), we first notice that \(\{B_{\rho}(z)\cap U_{k}\}_{k}\) is a Lebesgue partition of \(B_{\rho}(z)\setminus E\) with \(B_{\rho}(z)^{(1)}\cap\partial^{*}(B_{\rho}(z)\cap U_{k})=B_{\rho}(z)\cap\partial^{*}U_{k}\) for every \(k\), so that, by Lemma 7.1,
\[\mathcal{F}_{\mathrm{bk}}(K,E;B_{\rho}(z))=2\,\mathcal{H}^{n}\Big(B_{\rho}(z)\cap E^{(0)}\cap\Big(K\setminus\bigcup_{k}\partial^{*}U_{k}\Big)\Big)+\sum_{k}P(U_{k};B_{\rho}(z))\,. \tag{7.40}\]
Similarly, \(\{B_{\rho}(z)\cap U_{k}\}_{k\neq j}\) is a Lebesgue partition of \(B_{\rho}(z)\setminus E^{\prime}\), so that again by Lemma 7.1 we find
\[\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{\rho}(z))=2\,\mathcal{H}^{n}\Big(B_{\rho}(z)\cap(E^{\prime})^{(0)}\cap\Big(K^{\prime}\setminus\bigcup_{k\neq j}\partial^{*}U_{k}\Big)\Big)+\sum_{k\neq j}P(U_{k};B_{\rho}(z))\]
\[=2\,\mathcal{H}^{n}\Big(B_{\rho}(z)\cap(E^{\prime})^{(0)}\cap\Big(K\setminus\bigcup_{k}\partial^{*}U_{k}\Big)\Big)+\sum_{k\neq j}P(U_{k};B_{\rho}(z))\,, \tag{7.41}\]
where in the last identity we have used that, by (7.34), we have \(B_{\rho}(z)\cap(E^{\prime})^{(0)}\cap\partial^{*}U_{j}=\varnothing\) and \(B_{\rho}(z)\cap K^{\prime}\cap(E^{\prime})^{(0)}=B_{\rho}(z)\cap K\cap(E^{\prime})^{(0)}\).
Combining (7.39), (7.40), (7.41) and the fact that \((E^{\prime})^{(0)}\subset E^{(0)}\), we find that
\[P(U_{j};B_{\rho}(z))\leq\mathcal{F}_{\mathrm{bk}}\big(K^{\prime},E^{\prime};\partial B_{\rho}(z)\big)+\Lambda\,|B_{\rho}(z)\cap U_{j}|\,. \tag{7.42}\]
The first term in \(\mathcal{F}_{\mathrm{bk}}\big(K^{\prime},E^{\prime};\partial B_{\rho}(z)\big)\) is \(P(E^{\prime};\partial B_{\rho}(z))\): taking into account \(\mathcal{H}^{n}(\partial^{*}E\cap\partial B_{\rho}(z))=0\), by (7.36) and the second identity in (7.37) we find
\[P(E^{\prime};\partial B_{\rho}(z))=\mathcal{H}^{n}\big(\partial B_{\rho}(z)\cap E^{(0)}\cap\partial^{*}\big(B_{\rho}(z)\cap\operatorname{cl}U_{j}\big)\big)=\mathcal{H}^{n}(E^{(0)}\cap U_{j}^{(1)}\cap\partial B_{\rho}(z))=\mathcal{H}^{n}(U_{j}^{(1)}\cap\partial B_{\rho}(z))\,,\]
while for the second term in \(\mathcal{F}_{\mathrm{bk}}\big(K^{\prime},E^{\prime};\partial B_{\rho}(z)\big)\), by \(\mathcal{H}^{n}(K\cap\partial B_{\rho}(z))=0\),
\[\mathcal{H}^{n}(K^{\prime}\cap(E^{\prime})^{(0)}\cap\partial B_{\rho}(z))=\mathcal{H}^{n}((E^{\prime})^{(0)}\cap U_{j}^{(1)}\cap\partial B_{\rho}(z))=0\]
since \((E^{\prime})^{(0)}\subset(B_{\rho}(z)\cap\operatorname{cl}\,(U_{j}))^{(0)}\) and \(B_{\rho}(z)\cap\operatorname{cl}\,(U_{j})\) has positive Lebesgue density at points in \(U_{j}^{(1)}\cap\partial B_{\rho}(z)\). Having thus proved that \(\mathcal{F}_{\operatorname{bk}}\big(K^{\prime},E^{\prime};\partial B_{\rho}(z)\big)=\mathcal{H}^{n}(U_{j}^{(1)}\cap\partial B_{\rho}(z))\), we conclude from (7.42) that
\[P(U_{j};B_{\rho}(z))\leq\mathcal{H}^{n}(U_{j}^{(1)}\cap\partial B_{\rho}(z))+\Lambda\,|B_{\rho}(z)\cap U_{j}|\,,\]
for a.e. \(\rho<r_{0}\). Since \(z\in B_{2\,r}\cap\partial U_{j}=B_{2\,r}\cap\operatorname{cl}\,(\partial^{*}U_{j})\) and (1.35) imply that \(|B_{\rho}(z)\cap U_{j}|>0\) for every \(\rho>0\), a standard argument (see, e.g. [13, Theorem 21.11]) implies that, up to further decreasing the value of \(r_{0}\) depending on \(\Lambda\), and for some constant \(\alpha=\alpha(n)\in(0,1/2)\), the lower bound in (7.32) holds true.

_Proof of the upper bound in (7.32)_: We argue by contradiction, assuming that, no matter how small \(\beta\in(0,1/2)\) is, we can find \(j\), \(z\in B_{2\,r}\cap\partial U_{j}\), and \(\rho<\min\{r_{0},\operatorname{dist}(z,\partial B_{2\,r})\}\), such that
\[|B_{\rho}(z)\cap U_{j}|>(1-\beta)\,|B_{\rho}|\,. \tag{7.43}\]
We first notice that for every \(k\neq j\) it must be \(B_{\rho/2}(z)\cap\partial U_{k}=\varnothing\): indeed if \(w\in B_{\rho/2}(z)\cap\partial U_{k}\) for some \(k\neq j\), then by the lower bound in (7.32) and by (7.43) we find
\[\alpha\,|B_{\rho/2}|\leq|U_{k}\cap B_{\rho/2}(w)|\leq|B_{\rho}(z)\setminus U_{j}|<\beta\,|B_{\rho}|\,,\]
which gives a contradiction if \(\beta<\alpha/2^{n+1}\). By \(B_{\rho/2}(z)\cap\partial U_{k}=\varnothing\) it follows that
\[B_{\rho/2}(z)\subset\operatorname{cl}\,(U_{j})\cup\operatorname{cl}\,(E)\,. \tag{7.44}\]
Let us now set
\[E^{\prime}=E\setminus B_{\rho/2}(z)\,,\qquad K^{\prime}=\big(K\setminus B_{\rho/2}(z)\big)\cup\big(E^{(1)}\cap\partial B_{\rho/2}(z)\big)\,. \tag{7.45}\]
By (1.41), if \(\mathcal{H}^{n}(\partial^{*}E\cap\partial B_{\rho/2}(z))=0\), then \((K^{\prime},E^{\prime})\in\mathcal{K}\), since \((\Omega\setminus B_{\rho/2}(z))\cap\partial^{*}E\subset K\setminus B_{\rho/2}(z)\subset K^{\prime}\) implies
\[\Omega\cap\partial^{*}E^{\prime}\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\big\{\big((\partial^{*}E)\setminus B_{\rho/2}(z)\big)\cup\big(E^{(1)}\cap\partial B_{\rho/2}(z)\big)\big\}\subset K^{\prime}\,.\]
Moreover \(K^{\prime}\cup(E^{\prime})^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) since it contains \((K\cup E)\setminus B_{\rho/2}(z)\), and
\[(K\cup E)\setminus B_{\rho/2}(z)\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\,. \tag{7.46}\]
Indeed, if \(\gamma\in\mathcal{C}\) and \(\gamma(\mathbb{S}^{1})\cap(K\cup E)\setminus B_{\rho/2}(z)=\varnothing\), then by applying Lemma 7.2 to \(S=K\cup E\) and \(B=B_{2\,r}\) we see that either \(\gamma(\mathbb{S}^{1})\cap(K\cup E)\setminus B_{2\,r}\neq\varnothing\) (and thus \(\gamma(\mathbb{S}^{1})\cap(K\cup E)\setminus B_{\rho/2}(z)\neq\varnothing\) by \(B_{\rho/2}(z)\subset B_{r}\)), or there are \(k\neq h\) such that \(\gamma(\mathbb{S}^{1})\cap\partial U_{k}\neq\varnothing\) and \(\gamma(\mathbb{S}^{1})\cap\partial U_{h}\neq\varnothing\). Up to possibly switching \(k\) and \(h\), we have that \(k\neq j\), so that (7.44) implies that \(\varnothing\neq\gamma(\mathbb{S}^{1})\cap\partial U_{k}=\gamma(\mathbb{S}^{1})\cap\partial U_{k}\setminus B_{\rho/2}(z)\), where the latter set is contained in \(K\setminus B_{\rho/2}(z)\) by (7.22) and \(B_{\rho/2}(z)\subset B_{r}\). This proves (7.46).

We can thus plug the competitor \((K^{\prime},E^{\prime})\) defined in (7.45) into (7.39), and find
\[\mathcal{F}_{\operatorname{bk}}(K,E;B_{\rho/2}(z))\leq\mathcal{F}_{\operatorname{bk}}\big(K^{\prime},E^{\prime};\operatorname{cl}\,(B_{\rho/2}(z))\big)+\Lambda\,|E\cap B_{\rho/2}(z)|\,,\]
for every \(\rho<\min\{r_{0},\operatorname{dist}(z,\partial B_{2\,r})\}\) such that \(\mathcal{H}^{n}(K\cap\partial B_{\rho/2}(z))=0\). Now, by Lemma 7.1 and by (7.44) we have
\[\mathcal{F}_{\operatorname{bk}}(K,E;B_{\rho/2}(z))\geq P(U_{j};B_{\rho/2}(z))=P(E;B_{\rho/2}(z))\,,\]
while (1.40) gives
\[\operatorname{cl}\,(B_{\rho/2}(z))\cap K^{\prime}\stackrel{{\mathcal{H}^{n}}}{{=}}\operatorname{cl}\,(B_{\rho/2}(z))\cap\partial^{*}E^{\prime}\stackrel{{\mathcal{H}^{n}}}{{=}}E^{(1)}\cap\partial B_{\rho/2}(z)\,,\]
thus proving that, for a.e. \(\rho<\min\{r_{0},\operatorname{dist}(z,\partial B_{2\,r})\}\),
\[P(E;B_{\rho/2}(z))\leq\mathcal{H}^{n}(E^{(1)}\cap\partial B_{\rho/2}(z))+\Lambda\,|E\cap B_{\rho/2}(z)|\,.\]
Since \(z\in B_{2\,r}\cap\partial U_{j}\) and \(B_{\rho/2}(z)\cap\partial^{*}U_{j}=B_{\rho/2}(z)\cap\partial^{*}E\), by (1.35) we see that \(|E\cap B_{\rho/2}(z)|>0\) for every \(\rho<\min\{r_{0},\mathrm{dist}(z,\partial B_{2\,r})\}\). By a standard argument, up to further decreasing the value of \(r_{0}\), we find that for some \(\alpha^{\prime}=\alpha^{\prime}(n)\) it holds
\[|E\cap B_{\rho/2}(z)|\geq\alpha^{\prime}\,|B_{\rho/2}|\,,\qquad\forall\rho<\min\{r_{0},\mathrm{dist}(z,\partial B_{2\,r})\}\,,\]
and since \(|E\cap B_{\rho/2}(z)|=|B_{\rho/2}(z)\setminus U_{j}|\) this gives a contradiction with (7.43) up to further decreasing the value of \(\beta\).

_Step three_: We prove (7.22) and (7.23). The lower bound in (7.32) implies (7.23), i.e., \(J=\#\{j:U_{j}\cap B_{r}\neq\varnothing\}<\infty\).
Next, by \(B_{2\,r}\cap\partial U_{j}\subset K\) (last inclusion in (7.27)), to prove (7.22) it suffices to show that
\[K\cap B_{r}\subset\cup_{j=1}^{J}\partial U_{j}\,. \tag{7.47}\]
Now, if \(z\in K\cap B_{r}\), then by \(K\cap E=\varnothing\) we have either \(z\in K\setminus\mathrm{cl}\,(E)\) or \(z\in B_{r}\cap\partial E\), and, in the latter case, \(|E\cap B_{\rho}(z)|\leq(1-c)\,|B_{\rho}|\) for every \(\rho<\min\{r_{0},\mathrm{dist}(z,\partial\mathbf{W})\}\) thanks to (6.2). Therefore, in both cases, \(z\) is an accumulation point for \((\cup_{j=1}^{J}U_{j})^{(1)}\cap B_{r}\). Since \(J\) is finite, there must be at least one \(j\) such that \(z\in\mathrm{cl}\,(U_{j})\); hence \(z\in\partial U_{j}\) thanks to \(K\cap U_{j}=\varnothing\). Before moving to the next step, we also notice that
\[\mathcal{F}_{\mathrm{bk}}(K,E;B_{r})=\sum_{j=1}^{J}P(U_{j};B_{r})\,. \tag{7.48}\]
Indeed, by (7.22), (7.23), and (7.31) we have
\[K\cap B_{r}=B_{r}\cap\cup_{j=1}^{J}\partial U_{j}\stackrel{{\mathcal{H}^{n}}}{{=}}B_{r}\cap\cup_{j=1}^{J}\partial^{*}U_{j}\,, \tag{7.49}\]
so that, in the application of Lemma 7.1, i.e. in (7.40), the multiplicity \(2\) term vanishes, and we find (7.48).

_Step four_: In this step we consider a set of finite perimeter \(V_{1}\) such that, for some \(B:=B_{\rho}(z)\subset B_{r}\) with \(\rho<r_{0}\) and \(\mathcal{H}^{n}(K\cap\partial B)=0\), we have
\[U_{1}\Delta V_{1}\subset\subset B\,. \tag{7.50}\]
We then define a pair of Borel sets \((K^{\prime},E^{\prime})\) as
\[E^{\prime} = \left(E\setminus B\right)\,\cup\,\left[B\cap\left(V_{1}\Delta(E\cup U_{1})\right)\right], \tag{7.51}\]
\[K^{\prime} = \left(K\setminus B\right)\,\cup\,\left[B\cap\left(\partial^{*}V_{1}\cup\partial^{*}U_{2}\cup\cdots\cup\partial^{*}U_{J}\right)\right], \tag{7.52}\]
and show that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\), \(K^{\prime}\cup(E^{\prime})^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and
\[\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime})-\mathcal{F}_{\mathrm{bk}}(K,E)\leq P(V_{1};B)-P(U_{1};B)\,. \tag{7.53}\]
As a consequence of (7.53), (7.33) and \(|E\Delta E^{\prime}|=|U_{1}\Delta V_{1}|\), we find of course that \(P(U_{1};\Omega)\leq P(V_{1};\Omega)+\Lambda\,|U_{1}\Delta V_{1}|\), thus showing that \(U_{1}\) is a \((\Lambda,r_{0})\)-perimeter minimizer in \(\Omega\).

Proving that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\) is immediately reduced to showing that \(B\cap\partial^{*}E^{\prime}\) is \(\mathcal{H}^{n}\)-contained in \(B\cap(\partial^{*}V_{1}\cup\partial^{*}U_{2}\cup\cdots\cup\partial^{*}U_{J})\), thanks to \(\mathcal{H}^{n}(K\cap\partial B)=0\). Now, on taking into account that, by (1.39) and (1.41), \(\partial^{*}(X\cup Y)\) and \(\partial^{*}(X\setminus Y)\) are both \(\mathcal{H}^{n}\)-contained in \((\partial^{*}X)\cup(\partial^{*}Y)\), and thus \(\partial^{*}(X\Delta Y)\) is too, we easily see that
\[B\cap\partial^{*}E^{\prime}=B\cap\partial^{*}[V_{1}\Delta(E\cup U_{1})]\stackrel{{\mathcal{H}^{n}}}{{\subset}}(B\cap\partial^{*}V_{1})\cup(B\cap\partial^{*}(E\cup U_{1}))\,.\]
However, \(B\cap(E\cup U_{1})=B\setminus(\cup_{j=2}^{J}U_{j})\), so that \(\partial^{*}X=\partial^{*}(\mathbb{R}^{n+1}\setminus X)\) gives
\[B\cap\partial^{*}(E\cup U_{1})=B\cap\partial^{*}(\cup_{j=2}^{J}U_{j})\stackrel{{\mathcal{H}^{n}}}{{\subset}}B\cap\cup_{j\geq 2}\partial^{*}U_{j}\,,\]
where we have used again the \(\mathcal{H}^{n}\)-containment of \(\partial^{*}(X\cup Y)\) in \((\partial^{*}X)\cup(\partial^{*}Y)\).
This proves that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\). To prove that \(K^{\prime}\cup(E^{\prime})^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), we show that the set \(S\) defined by
\[S=\big((K\cup E)\setminus B\big)\cup\big(\mathrm{cl}\,(B)\cap\cup_{j\geq 2}\partial U_{j}\big)\,,\]
is \(\mathcal{H}^{n}\)-contained in \(K^{\prime}\cup(E^{\prime})^{(1)}\) and is \(\mathcal{C}\)-spanning \(\mathbf{W}\). To prove that \(S\) is \(\mathcal{H}^{n}\)-contained in \(K^{\prime}\cup(E^{\prime})^{(1)}\), we start by noticing that \((K\cup E)\setminus\mathrm{cl}\,(B)\) is \(\mathcal{H}^{n}\)-equivalent to \((K\cup E^{(1)}\cup\partial^{*}E)\setminus\mathrm{cl}\,(B)\subset K\cup E^{(1)}\) (by \((K,E)\in\mathcal{K}_{\mathrm{B}}\)), whereas \(|(E\Delta E^{\prime})\setminus B|=0\) implies \((E^{(1)}\Delta(E^{\prime})^{(1)})\setminus\mathrm{cl}\,(B)=\varnothing\): hence \(S\setminus\mathrm{cl}\,(B)\) is \(\mathcal{H}^{n}\)-contained in \(K^{\prime}\cup(E^{\prime})^{(1)}\). Next, by (7.31) and by definition of \(K^{\prime}\),
\[S\cap B=B\cap\cup_{j\geq 2}\partial U_{j}\stackrel{{\mathcal{H}^{n}}}{{=}}B\cap\cup_{j\geq 2}\partial^{*}U_{j}\subset K^{\prime}\,.\]
Finally, by \(\mathcal{H}^{n}(K\cap\partial B)=0\), (7.26), and Federer's theorem, \((S\cap\partial B)\setminus K\) is \(\mathcal{H}^{n}\)-equivalent to \((E^{(1)}\cap\partial B)\setminus K\), where \(E^{(1)}\cap A=(E^{\prime})^{(1)}\cap A\) in an open neighborhood \(A\) of \(\partial B\), thanks to \(U_{1}\Delta V_{1}\subset\subset B\).

To prove that \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), since \(S\) is relatively closed in \(\Omega\) and thanks to Theorem A.1, we only need to check that \(S\cap\gamma(\mathbb{S}^{1})\neq\varnothing\) for every \(\gamma\in\mathcal{C}\). Since \((K\cup E)\cap\gamma(\mathbb{S}^{1})\neq\varnothing\) for every \(\gamma\in\mathcal{C}\), this is immediate unless \(\gamma\) is such that \(S\cap\gamma(\mathbb{S}^{1})\setminus B=\varnothing\); in that case, however, Lemma 7.2 implies the existence of \(j\neq k\) such that \(\gamma(\mathbb{S}^{1})\cap B\cap\partial U_{j}\) and \(\gamma(\mathbb{S}^{1})\cap B\cap\partial U_{k}\) are both non-empty. Since either \(j\geq 2\) or \(k\geq 2\), we conclude by (7.26) that \(\gamma(\mathbb{S}^{1})\cap B\cap K^{\prime}\neq\varnothing\), thus completing the proof.

We are thus left to prove the validity of (7.53). Taking (7.48) and \(\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B)\leq\mathcal{F}_{\mathrm{bd}}(K^{\prime},E^{\prime};B)\) into account, this amounts to showing that
\[\mathcal{F}_{\mathrm{bd}}(K^{\prime},E^{\prime};B)=\mathcal{H}^{n}(B\cap\partial^{*}E^{\prime})+2\,\mathcal{H}^{n}\big(B\cap K^{\prime}\setminus\partial^{*}E^{\prime}\big)=P(V_{1};B)+\sum_{j=2}^{J}P(U_{j};B)\,. \tag{7.54}\]
To this end we notice that by (1.44) and \(B\cap E^{\prime}=B\cap[V_{1}\Delta(E\cup U_{1})]\) we have
\[B\cap\partial^{*}E^{\prime}\stackrel{{\mathcal{H}^{n}}}{{=}}B\cap\big\{\partial^{*}V_{1}\cup\partial^{*}(E\cup U_{1})\big\}\stackrel{{\mathcal{H}^{n}}}{{=}}B\cap\big\{(\partial^{*}V_{1})\,\cup\,(U_{1}^{(0)}\cap\partial^{*}E)\,\cup\,(E^{(0)}\cap\partial^{*}U_{1})\big\}\,,\]
where we have used (1.39) and \(\mathcal{H}^{n}(\{\nu_{E}=\nu_{U_{1}}\})=0\) (as \(E\cap U_{1}=\varnothing\)).
By (1.46) and (1.47), since \(\{B\cap E,B\cap U_{j}\}_{j=1}^{J}\) is a Caccioppoli partition of \(B\), we have \[U_{1}^{(0)}\cap\partial^{*}E=(\partial^{*}E)\cap\bigcup_{j\geq 2}(\partial^{*}U_{j})\,,\qquad E^{(0)}\cap\partial^{*}U_{1}=(\partial^{*}U_{1})\cap\bigcup_{j\geq 2}(\partial^{*}U_{j})\,,\] so that \[B\cap\partial^{*}E^{\prime}\stackrel{{\mathcal{H}^{n}}}{{=}}B\cap\Big{\{}(\partial^{*}V_{1})\cup\Big{(}[(\partial^{*}E)\cup(\partial^{*}U_{1})]\cap\bigcup_{j\geq 2}(\partial^{*}U_{j})\Big{)}\Big{\}}\,,\] \[B\cap(K^{\prime}\setminus\partial^{*}E^{\prime})\stackrel{{\mathcal{H}^{n}}}{{=}}B\cap\Big{(}\bigcup_{j\geq 2}\partial^{*}U_{j}\Big{)}\setminus\big{[}(\partial^{*}E)\cup(\partial^{*}U_{1})\big{]}\,.\] We thus find \[\mathcal{H}^{n}(B\cap\partial^{*}E^{\prime})+2\,\mathcal{H}^{n}(B\cap(K^{\prime}\setminus\partial^{*}E^{\prime}))\] \[=P(V_{1};B)+2\,\mathcal{H}^{n}\Big{(}B\cap\Big{(}\bigcup_{j\geq 2}\partial^{*}U_{j}\Big{)}\setminus(\partial^{*}E\cup\partial^{*}U_{1})\Big{)}+\mathcal{H}^{n}\Big{(}B\cap\Big{(}\bigcup_{j\geq 2}\partial^{*}U_{j}\Big{)}\cap(\partial^{*}E\cup\partial^{*}U_{1})\Big{)}\] \[=P(V_{1};B)+\sum_{j\geq 2}P(U_{j};B)\,,\] that is (7.54). _Step five_: In this final step we prove conclusions (iv) and (v). To this end we fix \(x\in[\Omega\cap(\partial E\setminus\partial^{*}E)]\setminus\Sigma\), and recall that, by conclusion (iv)\({}_{\alpha}\), there are \(r>0\), \(\nu\in\mathbb{S}^{n}\), and \(u_{1},u_{2}\in C^{1,\alpha}({\bf D}^{\nu}_{r}(x);(-r/4,r/4))\) (\(\alpha\in(0,1/2)\) arbitrary) such that \(u_{1}(x)=u_{2}(x)=0\), \(u_{1}\leq u_{2}\) on \({\bf D}^{\nu}_{r}(x)\), \(\{u_{1}<u_{2}\}\) and \(\operatorname{int}\{u_{1}=u_{2}\}\) are both non-empty, and \[{\bf C}^{\nu}_{r}(x)\cap K = \cup_{i=1,2}\big{\{}y+u_{i}(y)\,\nu:y\in{\bf D}^{\nu}_{r}(x)\big{\}}\,, \tag{7.55}\] \[{\bf C}^{\nu}_{r}(x)\cap\partial^{*}E = \cup_{i=1,2}\big{\{}y+u_{i}(y)\nu:y\in\{u_{1}<u_{2}\}\big{\}}\,, \tag{7.56}\] \[{\bf C}^{\nu}_{r}(x)\cap E = \big{\{}y+t\,\nu:y\in\{u_{1}<u_{2}\}\,,u_{1}(y)<t<u_{2}(y)\big{\}}\,. \tag{7.57}\] We claim that \((u_{1},u_{2})\) has the minimality property \[\mathcal{A}(u_{1},u_{2})\leq\mathcal{A}(w_{1},w_{2}):=\int_{{\bf D}^{\nu}_{r}(x)}\sqrt{1+|\nabla w_{1}|^{2}}+\sqrt{1+|\nabla w_{2}|^{2}}\,, \tag{7.58}\] among all pairs \((w_{1},w_{2})\) with \(w_{1},w_{2}\in\operatorname{Lip}({\bf D}^{\nu}_{r}(x);(-r/2,r/2))\) that satisfy \[\begin{cases}w_{1}\leq w_{2}\,,&\text{on }{\bf D}^{\nu}_{r}(x)\,,\\ w_{k}=u_{k}\,,&\text{on }\partial{\bf D}^{\nu}_{r}(x),\,k=1,2\,,\qquad\int_{{\bf D}^{\nu}_{r}(x)}w_{2}-w_{1}=\int_{{\bf D}^{\nu}_{r}(x)}u_{2}-u_{1}\,.\end{cases} \tag{7.59}\] Indeed, starting from a given pair \((w_{1},w_{2})\) as in (7.59), we can define \((K^{\prime}\cap{\bf C}^{\nu}_{r}(x),E^{\prime}\cap{\bf C}^{\nu}_{r}(x))\) by replacing \((u_{1},u_{2})\) with \((w_{1},w_{2})\) in (7.55) and (7.57), and then define \((K^{\prime},E^{\prime})\in\mathcal{K}_{\rm B}\) by setting \(K^{\prime}\setminus{\bf C}^{\nu}_{r}(x)=K\setminus{\bf C}^{\nu}_{r}(x)\) and \(E^{\prime}\setminus{\bf C}^{\nu}_{r}(x)=E\setminus{\bf C}^{\nu}_{r}(x)\). Since \(\partial{\bf C}^{\nu}_{r}\setminus(K^{\prime}\cup E^{\prime})=\partial{\bf C}^{\nu}_{r}\setminus(K\cup E)\) it is easily seen (by a simple modification of Lemma 7.2 where balls are replaced by cylinders) that \((K^{\prime},E^{\prime})\) is \(\mathcal{C}\)-spanning \({\bf W}\).
Since \(|E^{\prime}|=|E|\), the minimality of \((K,E)\) in \(\Psi_{\rm bk}(v)\) implies that \(\mathcal{F}_{\rm bk}(K,E)\leq\mathcal{F}_{\rm bk}(K^{\prime},E^{\prime})\), which readily translates into (7.58). Recalling that both \(A_{0}=\operatorname{int}\{u_{1}=u_{2}\}\) and \(A_{+}=\{u_{1}<u_{2}\}\) are non-empty open subsets of \({\bf D}^{\nu}_{r}(x)\), and denoting by \(\operatorname{MS}(u)[\varphi]=\int_{{\bf D}^{\nu}_{r}(x)}\nabla\varphi\cdot[(\nabla u)/\sqrt{1+|\nabla u|^{2}}]\) the distributional mean curvature operator, we find that \[\operatorname{MS}(u_{1})+\operatorname{MS}(u_{2}) =0\,, \text{on }{\bf D}^{\nu}_{r}(x)\,,\] \[\operatorname{MS}(u_{k}) =0\,, \text{on }A_{0}\text{ for each }k=1,2\,,\] \[\operatorname{MS}(u_{2})=-\operatorname{MS}(u_{1}) =\lambda\,, \text{on }A_{+}\,, \tag{7.60}\] for some constant \(\lambda\in\mathbb{R}\); in particular, \(u_{1}\) and \(u_{2}\) are smooth in \(A_{0}\) and in \(A_{+}\). We notice that it must be \[\lambda<0\,. \tag{7.61}\] Indeed, arguing by contradiction, should it be that \(\lambda\geq 0\), then by (7.60) we find \(\operatorname{MS}(u_{2})\geq 0\) and \(\operatorname{MS}(u_{1})\leq 0\) on \(A_{+}\). Since \(A_{+}\) is open and non-empty, there is an open ball \(B\subset A_{+}\) such that \(\partial B\cap\partial A_{+}=\{y_{0}\}\). Denoting by \(x_{0}\) the center of \(B\) and setting \(\nu_{0}=(x_{0}-y_{0})/|x_{0}-y_{0}|\), by \(u_{1}\leq u_{2}\), \(u_{1}(y_{0})=u_{2}(y_{0})\) and \(u_{k}\in C^{1}({\bf D}^{\nu}_{r}(x))\) we find that \(\nabla u_{1}(y_{0})=\nabla u_{2}(y_{0})\). At the same time, by applying Hopf's lemma in \(B\) at \(y_{0}\), we see that since \(\operatorname{MS}(u_{2})\geq 0\) and \(\operatorname{MS}(u_{1})\leq 0\) on \(B\), it must be \(\nu_{0}\cdot\nabla u_{2}(y_{0})<0\) and \(\nu_{0}\cdot\nabla u_{1}(y_{0})>0\), against \(\nabla u_{1}(y_{0})=\nabla u_{2}(y_{0})\). By (7.60), (7.61), and \(u_{2}\geq u_{1}\) on \({\bf D}^{\nu}_{r}(x)\) we can apply the sharp regularity theory for the double membrane problem developed in [23, Theorem 5.1] and deduce that \(u_{1},u_{2}\in C^{1,1}({\bf D}^{\nu}_{r}(x))\). Next we notice that, for every \(\varphi\in C^{\infty}_{c}(A_{+})\), and setting \(u_{+}=u_{2}-u_{1}\), \[2\,\lambda\,\int_{A_{+}}\varphi=\operatorname{MS}(u_{2})[\varphi]-\operatorname{MS}(u_{1})[\varphi]=\int_{A_{+}}\operatorname{A}(x)[\nabla u_{+}]\cdot\nabla\varphi\,,\] where we have set, with \(f(z)=\sqrt{1+|z|^{2}}\), \[\operatorname{A}(x)=\int_{0}^{1}\,\nabla^{2}f\big{(}s\,\nabla u_{2}(x)+(1-s)\,\nabla u_{1}(x)\big{)}\,ds\,.\] In particular, \(u_{+}\in C^{1,1}({\bf D}^{\nu}_{r}(x))\) is a non-negative distributional solution of \[\operatorname{div}\big{(}\operatorname{A}(x)\nabla u_{+}\big{)}=-2\,\lambda\,,\qquad\text{on }A_{+}\,,\] with a strictly positive right-hand side (by (7.61)) and with \(\mathrm{A}\in\mathrm{Lip}(A_{+};\mathbb{R}^{n\times n}_{\mathrm{sym}})\) uniformly elliptic.
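For the reader's convenience, we sketch the standard linearization behind the definition of \(\mathrm{A}(x)\) (a computation left implicit above). By the fundamental theorem of calculus applied to \(\nabla f\) along the segment joining \(\nabla u_{1}(x)\) to \(\nabla u_{2}(x)\), \[\nabla f(\nabla u_{2}(x))-\nabla f(\nabla u_{1}(x))=\int_{0}^{1}\nabla^{2}f\big{(}s\,\nabla u_{2}(x)+(1-s)\,\nabla u_{1}(x)\big{)}[\nabla u_{+}(x)]\,ds=\mathrm{A}(x)[\nabla u_{+}(x)]\,,\] so that testing \(\operatorname{MS}(u_{2})-\operatorname{MS}(u_{1})\) with \(\varphi\) gives exactly the identity displayed above. Moreover, a direct computation gives \[\nabla^{2}f(z)=\frac{1}{\sqrt{1+|z|^{2}}}\,\Big{(}\mathrm{Id}-\frac{z\otimes z}{1+|z|^{2}}\Big{)}\geq\frac{\mathrm{Id}}{(1+|z|^{2})^{3/2}}\,,\] so that the boundedness and Lipschitz continuity of \(\nabla u_{1}\) and \(\nabla u_{2}\) (recall \(u_{1},u_{2}\in C^{1,1}\)) yield the uniform ellipticity and Lipschitz regularity of \(\mathrm{A}\) claimed above.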
We can thus apply the regularity theory for free boundaries developed in [13, Theorem 1.1, Theorem 4.14] to deduce that \[\mathrm{FB}=\mathbf{D}_{r}^{\nu}(x)\cap\partial\{u_{+}=0\}=\mathbf{D}_{r}^{\nu}(x)\cap\partial\{u_{2}=u_{1}\}\] can be partitioned into sets \(\mathrm{Reg}\) and \(\mathrm{Sing}\) such that \(\mathrm{Reg}\) is relatively open in \(\mathrm{FB}\), and such that for every \(z\in\mathrm{Reg}\) there are \(r>0\) and \(\beta\in(0,1)\) such that \(B_{r}(z)\cap\mathrm{FB}\) is a \(C^{1,\beta}\)-embedded \((n-1)\)-dimensional manifold, and such that \(\mathrm{Sing}=\cup_{k=0}^{n-1}\mathrm{Sing}_{k}\) is relatively closed in \(\mathrm{FB}\), with each \(\mathrm{Sing}_{k}\) locally \(\mathcal{H}^{k}\)-rectifiable in \(\mathbf{D}_{r}^{\nu}(x)\). Since, by (7.56), \[\mathbf{C}_{r}^{\nu}(x)\cap(\partial E\setminus\partial^{*}E)=\left\{y+u_{1}(y)\,\nu:y\in\mathrm{FB}\right\}\] and \(u_{1}\in C^{1,1}(\mathbf{D}_{r}^{\nu}(x))\), we conclude by a covering argument that \(\Omega\cap(\partial E\setminus\partial^{*}E)\) has all the required properties, and complete the proof of the theorem. ## 8. Equilibrium across transition lines in wet foams (Theorem 1.7) Proof of Theorem 1.7.: Let \(\Omega\subset\mathbb{R}^{n+1}\) be open and let \((K_{*},E_{*})\in\mathcal{K}_{\mathrm{foam}}\). We can find \((K,E)\in\mathcal{K}\) such that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(K_{*}\), \(E\) is Lebesgue equivalent to \(E_{*}\), and \(K\cap E^{(1)}=\varnothing\) by repeating with minor variations the considerations made in step one of the proof of Theorem 6.2 (we do not have to worry about the \(\mathcal{C}\)-spanning condition, but have to keep track of the volume constraint imposed for each \(U_{i}\), which can be done by using the volume-fixing variations for clusters from [15, Part IV]). In proving the regularity part of the statement, thanks to Theorem 2.1-(a) we can directly work with balls \(B\subset\subset\Omega\) having radius less than \(r_{0}\) (with \(r_{0}\) as in (1.33)), and consider the open connected components \(\{U_{i}\}_{i}\) of \(B\setminus(K\cup E)\). Using Lemma 7.1 and, again, volume-fixing variation techniques in place of the theory of homotopic spanning, we can proceed to prove statements analogous to (7.8), (7.9), (7.10), and (7.11), thus proving the \((\Lambda,r_{0})\)-minimality of each \(U_{i}\) in \(B\). The claimed \(C^{1,\alpha}\)-regularity of each \(U_{i}\) outside of a closed set \(\Sigma\) with the claimed dimensional estimates then follows from De Giorgi's regularity theory for perimeter minimizers [1, 13, 14]. ## Appendix A Equivalence of homotopic spanning conditions In Theorem A.1 we prove that, when \(S\) is a closed set, the notion of "\(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\)" introduced in Definition B boils down to the one in Definition A. We then show that the property of being \(\mathcal{C}\)-spanning is stable under reduction to the rectifiable part of a Borel set, see Lemma 2.2. **Theorem A.1**.: _Given a closed set \(\mathbf{W}\subset\mathbb{R}^{n+1}\), a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), and a set \(S\) relatively closed in \(\Omega\), the following two properties are equivalent:_ **(i):**: _for every_ \(\gamma\in\mathcal{C}\)_, we have_ \(S\cap\gamma(\mathbb{S}^{1})\neq\varnothing\)_;_ **(ii):**: _for every_ \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) _and for_ \(\mathcal{H}^{1}\)_-a.e._ \(s\in\mathbb{S}^{1}\)_, we have_ \[\begin{split}&\text{for $\mathcal{H}^{n}$-a.e. $x\in T[s]$}\,,\ \exists\text{ a partition $\{T_{1},T_{2}\}$ of $T$ with $x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}$}\,,\\ &\text{and s.t. $S\cup T[s]$ essentially disconnects $T$ into $\{T_{1},T_{2}\}$}\,.\end{split}\tag{A.1}\]
_In particular, \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) according to Definition A if and only if it does so according to Definition B._ **Remark A.2** (\(x\)-dependency of \(\{T_{1},T_{2}\}\)).: In the situation of Figure 1.4 it is clear that the same choice of \(\{T_{1},T_{2}\}\) can be used to check the validity of (A.1) at every \(x\in T[s]\). One may thus wonder if it could suffice to reformulate (A.1) so that the partition \(\{T_{1},T_{2}\}\) is independent of \(x\). The simplest example we are aware of showing that this simpler definition would not work is the following. In \(\mathbb{R}^{3}\), let \(\mathbf{W}\) be a closed \(\delta\)-neighborhood of a circle \(\Gamma\), let \(U\) be the open \(\delta\)-neighborhood of a loop with link number _three_ (or any higher _odd_ number) with respect to \(\mathbf{W}\), let \(K\) be the disk spanned by \(\Gamma\), and let \(S=\Omega\cap[(K\setminus U)\cup\partial U]\), see Figure A.1. Now consider a "test tube" \(T\) which compactly contains \(U\) and is such that, for every \(s\), \(U\cap T[s]\) consists of three disks \(\{D_{i}\}_{i=1}^{3}\). Since \(U\subset T\), the property "\(S\cup T[s]\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\) in such a way that \(T[s]\subset T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\)" would immediately imply "\(U\cap(S\cup T[s])=U\cap T[s]\) essentially disconnects \(T\cap U=U\) into \(\{U_{1},U_{2}\}\) with \(U\cap T[s]\subset U\cap\partial^{e}U_{1}\cap\partial^{e}U_{2}\)", where \(U_{i}=T_{i}\cap U\) (see step one in the proof of Theorem 3.1 for a formal proof of this intuitive assertion). However, the latter property does not hold. To see this, denoting by \(\{A_{i}\}_{i=1}^{3}\) the three connected components of \(U\setminus T[s]\), we would have \(U_{1}=A_{i}\cup A_{j}\) and \(U_{2}=A_{k}\) for some choice of \(i\neq j\neq k\neq i\), whereas, independently of the choice made, \(U\cap\partial^{e}U_{1}\cap\partial^{e}U_{2}\) always fails to contain one of the disks \(\{D_{i}\}_{i=1}^{3}\): for example, if \(U_{1}=A_{1}\cup A_{2}\) and \(U_{2}=A_{3}\), then \(U\cap\partial^{e}U_{1}\cap\partial^{e}U_{2}=D_{2}\cup D_{3}\), and \(D_{1}\) is entirely missed. We conclude that the set \(S\) just constructed, although clearly \(\mathcal{C}\)-spanning \(\mathbf{W}\) in terms of Definition A, fails to satisfy the variant of (A.1) where the same partition \(\{T_{1},T_{2}\}\) is required to work for \(\mathcal{H}^{n}\)-a.e. choice of \(x\in T[s]\).

Figure A.1. The situation in Remark A.2. The components \(A_{1}\), \(A_{2}\) and \(A_{3}\) (depicted in purple, yellow, and green respectively) of \(U\setminus T[s]\) are bounded by the three disks \(\{D_{i}\}_{i=1}^{3}\) (depicted as boldface segments).

Proof of Theorem A.1.: _Step one_: We prove that (ii) implies (i). Indeed, if there is \(\gamma\in\mathcal{C}\) such that \(S\cap\gamma(\mathbb{S}^{1})=\varnothing\), then, \(S\) being closed, we can find \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) such that \(\operatorname{dist}(S,T)>0\). By (ii), there is \(s\in\mathbb{S}^{1}\) such that \(S\cup T[s]\) essentially disconnects \(T\). By \(\operatorname{dist}(S,T)>0\) we see that \((S\cup T[s])\cap T=T[s]\), so that \(T[s]\) essentially disconnects \(T\), a contradiction. _Step two_: We now prove that (i) implies (ii). To this end we consider an arbitrary \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and aim at proving the existence of \(J\) of full \(\mathcal{H}^{1}\)-measure in \(\mathbb{S}^{1}\) such that, if \(s\in J\), then (A.1) holds.
This is trivial, with \(J=\mathbb{S}^{1}\), if \(|S\cap T|=|T|\). Indeed, in this case, we have \(T=S^{(1)}\cap T\), which, combined with \(S\) being closed, implies \(T=S\cap T\). In particular, \(S\cup T[s]=T\) for every \(s\in\mathbb{S}^{1}\), and since, trivially, \(T\) essentially disconnects \(T\), the conclusion follows. We thus assume that \(|S\cap T|<|T|\): in particular, \[U=T\setminus S\] is a non-empty, open set, whose connected components are denoted by \(\{U_{i}\}_{i\in I}\) (\(I\) a countable set). By the Lebesgue points theorem, \(\mathcal{L}^{n+1}\)-a.e. \(x\in T\) belongs either to \(U^{(0)}\) or to \(U\). Then, by the smoothness of \(\Phi\) and by the area formula, we can find a set \(J\) of full \(\mathcal{H}^{1}\)-measure in \(\mathbb{S}^{1}\) such that \[\mathcal{H}^{n}\big{(}T[s]\setminus(U^{(0)}\cup U)\big{)}=0\,,\qquad\forall s\in J\,.\] (A.2) In particular, given \(s\in J\), we just need to prove (A.1) when either \(x\in T[s]\cap U^{(0)}\) or \(x\in T[s]\cap U\). Before examining these two cases we also notice that we can further impose on \(J\) that \[\mathcal{H}^{n}\Big{(}T[s]\cap\Big{[}\partial^{e}U\cup\partial^{e}S\cup\big{(}U^{(1)}\setminus U\big{)}\cup\bigcup_{i\in I}\big{(}U^{(1)}_{i}\setminus U_{i}\big{)}\Big{]}\Big{)}=0\,,\qquad\forall s\in J\,.\] (A.3) Indeed, again by the Lebesgue points theorem, the sets \(\partial^{e}U\), \(\partial^{e}S\), \(U^{(1)}\setminus U\), and \(\cup_{i\in I}U^{(1)}_{i}\setminus U_{i}\) are all \(\mathcal{L}^{n+1}\)-negligible. _Case one, \(x\in T[s]\cap U^{(0)}\)_: To fix ideas, notice that \(U^{(0)}\neq\varnothing\) implies \(|S\cap T|>0\), and in particular \(S\) has positive Lebesgue measure. Given an arbitrary \(s^{\prime}\in J\setminus\{s\}\) we denote by \(\{I_{1},I_{2}\}\) the partition of \(\mathbb{S}^{1}\) bounded by \(\{s,s^{\prime}\}\), and then consider the Borel sets \[T_{1}=\Phi(I_{1}\times B_{1}^{n})\cap S\,,\qquad T_{2}=\Phi(I_{2}\times B_{1}^{n})\cup\,\Big{(}\Phi(I_{1}\times B_{1}^{n})\setminus S\Big{)}\,.\] We first notice that \(\{T_{1},T_{2}\}\) is a non-trivial partition of \(T\): indeed \(|T_{1}|>0\) since \(x\) has density \(1/2\) for \(\Phi(I_{1}\times B_{1}^{n})\) and (by \(x\in U^{(0)}\)) density \(1\) for \(S\cap T\); at the same time \(|T_{2}|=|T\setminus T_{1}|\geq|T\setminus S|>0\).
Next, we claim that \[T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\text{ is $\mathcal{H}^{n}$-contained in $S$}\,.\] (A.4) Indeed, since \(\Phi(I_{1}\times B_{1}^{n})\) is an open subset of \(T\) with \(T\cap\partial[\Phi(I_{1}\times B_{1}^{n})]=T[s]\cup T[s^{\prime}]\), and since \(\partial^{e}T_{1}\) coincides with \(\partial^{e}S\) inside the open set \(\Phi(I_{1}\times B_{1}^{n})\), we easily see that \[T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2} = T\cap\partial^{e}T_{1}=T\cap\partial^{e}\big{(}\Phi(I_{1}\times B_{1}^{n})\cap S\big{)}\subset\big{(}\Phi(I_{1}\times B_{1}^{n})\cap\partial^{e}S\big{)}\cup\Big{(}\big{(}T[s]\cup T[s^{\prime}]\big{)}\setminus S^{(0)}\Big{)}\,.\] Now, on the one hand, by \(\mathcal{H}^{n}(\partial^{e}S\cap(T[s]\cup T[s^{\prime}]))=0\) (recall (A.3)), it holds \[\big{(}T[s]\cup T[s^{\prime}]\big{)}\setminus S^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $T\cap S^{(1)}$}\,;\] while, on the other hand, by \(\Omega\cap\partial^{e}S\subset\Omega\cap\partial S\subset\Omega\cap S\) (since \(S\) is closed in \(\Omega\)) and by \(\Phi(I_{1}\times B_{1}^{n})\subset T\subset\Omega\), we also have that \(\Phi(I_{1}\times B_{1}^{n})\cap\partial^{e}S\subset T\cap S\); therefore \[T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\text{ is $\mathcal{H}^{n}$-contained in $T\cap(S\cup S^{(1)})=T\cap S$}\,,\] where we have used that \(S\) is closed to infer \(S^{(1)}\subset S\). Having proved (A.4) and the non-triviality of \(\{T_{1},T_{2}\}\), we conclude that \(S\) (and, thus, \(S\cup T[s]\)) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\). We are left to prove that \(x\in T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\). To this end, we notice that \(x\in T[s]\cap(T\setminus S)^{(0)}\) and \(\Phi(I_{1}\times B_{1}^{n})\subset T\) imply \[|T_{1}\cap B_{r}(x)|=|\Phi(I_{1}\times B_{1}^{n})\cap S\cap B_{r}(x)|=|\Phi(I_{1}\times B_{1}^{n})\cap B_{r}(x)|+\text{o}(r^{n+1})=\frac{|B_{r}(x)|}{2}+\text{o}(r^{n+1})\,,\] so that \(x\in(T_{1})^{(1/2)}\subset\partial^{e}T_{1}\); since \(T\cap\partial^{e}T_{1}=T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\) and \(x\in T\) we conclude the proof in the case when \(x\in T[s]\cap U^{(0)}\). _Case two, \(x\in T[s]\cap U\)_: In this case there exists \(i\in I\) such that \(x\in U_{i}\), and, correspondingly, we claim that \[\exists\{V_{1},V_{2}\}\text{ a non-trivial Borel partition of }U_{i}\setminus T[s]\,,\] (A.5) \[\text{ s.t. }x\in\partial^{e}V_{1}\cap\partial^{e}V_{2}\text{ and }T\cap(\partial V_{1}\cup\partial V_{2})\subset S\cup T[s]\,.\] Given the claim, we conclude by setting \(T_{1}=V_{1}\) and \(T_{2}=V_{2}\cup(T\setminus U_{i})\). Indeed, since \(V_{2}\cap U_{i}=T_{2}\cap U_{i}\) with \(U_{i}\) open implies \(U_{i}\cap\partial^{e}V_{1}=U_{i}\cap\partial^{e}T_{1}\), we deduce from (A.5) that \[x\in U_{i}\cap\partial^{e}V_{1}\cap\partial^{e}V_{2}=U_{i}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\,;\] at the same time, \(S\cup T[s]\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\) since, again by (A.5), \[T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}=T\cap\partial^{e}T_{1}=T\cap\partial^{e}V_{1}\subset T\cap\partial V_{1}\subset S\cup T[s]\,.\] We are thus left to prove (A.5).
To this end, let us choose \(r(x)>0\) small enough that \(B_{r(x)}(x)\subset U_{i}\) and that \(B_{r(x)}(x)\setminus T[s]\) consists of exactly two connected components \(\{V_{1}^{x},V_{2}^{x}\}\); in this way, \[x\in(V_{1}^{x})^{(1/2)}\cap(V_{2}^{x})^{(1/2)}\,.\] (A.6) Next, we define \[V_{1} =\text{ the connected component of }U_{i}\setminus T[s]\text{ containing }V_{1}^{x}\,,\] \[V_{2} =U_{i}\setminus(T[s]\cup V_{1})\,.\] Clearly \(\{V_{1},V_{2}\}\) is a partition of \(U_{i}\setminus T[s]\), and, thanks to \(\partial V_{1}\cup\partial V_{2}\subset T[s]\cup\partial U_{i}\), we have \[T\cap(\partial V_{1}\cup\partial V_{2})\subset T\cap(T[s]\cup\partial U_{i})\subset S\cup T[s]\,.\] Therefore (A.5) follows by showing that \(|V_{1}|\,|V_{2}|>0\). Since \(V_{1}\) contains the connected component \(V_{1}^{x}\) of \(B_{r(x)}(x)\setminus T[s]\), which is open and non-empty, we have \(|V_{1}|>0\). Arguing by contradiction, we assume that \[|V_{2}|=|U_{i}\setminus(T[s]\cup V_{1})|=0\,.\] Since \(V_{1}\) is a connected component of the open set \(U_{i}\setminus T[s]\) this implies that \[U_{i}\setminus T[s]=V_{1}\,.\] Let \(x_{1}\in V_{1}^{x}\) and \(x_{2}\in V_{2}^{x}\) (where \(V_{1}^{x}\) and \(V_{2}^{x}\) are the two connected components of \(B_{r(x)}(x)\setminus T[s]\)). Since \(V_{1}\) is connected and \(\{x_{1},x_{2}\}\subset U_{i}\setminus T[s]=V_{1}\), there is a smooth embedding \(\gamma_{1}\) of \([0,1]\) into \(V_{1}\) with \(\gamma_{1}(0)=x_{1}\) and \(\gamma_{1}(1)=x_{2}\). Arguing as in [5, Step 2] using Sard's theorem, we may modify \(\gamma_{1}\) by composing with a smooth diffeomorphism such that the modified \(\gamma_{1}\) intersects \(\partial B_{r(x)}(x)\) transversally at finitely many points. Thus \(\gamma_{1}([0,1])\setminus\operatorname{cl}B_{r(x)}(x)\) is partitioned into finitely many curves \(\gamma_{1}((a_{i},b_{i}))\) for disjoint arcs \((a_{i},b_{i})\subset[0,1]\). Since \(B_{r(x)}(x)\setminus T[s]\) is disconnected into \(V_{1}^{x}\) and \(V_{2}^{x}\) and \(\gamma_{1}\) is disjoint from \(T[s]\), there exists \(i\) such that, up to interchanging \(V_{1}^{x}\) and \(V_{2}^{x}\), \(\gamma_{1}(a_{i})\in\operatorname{cl}V_{1}^{x}\cap\partial B_{r(x)}(x)\) and \(\gamma_{1}(b_{i})\in\operatorname{cl}V_{2}^{x}\cap\partial B_{r(x)}(x)\). Let us call \(\tilde{\gamma}_{1}\) the restriction of \(\gamma_{1}\) to \([a_{i},b_{i}]\). Next, we choose a smooth embedding \(\gamma_{2}\) of \([0,1]\) into \(B_{r(x)}(x)\) such that \(\gamma_{2}(0)=\tilde{\gamma}_{1}(a_{i})\), \(\gamma_{2}(1)=\tilde{\gamma}_{1}(b_{i})\), and \(\gamma_{2}([0,1])\) intersects \(T[s]\cap B_{r(x)}(x)\) at exactly one point, denoted by \(x_{12}=\gamma_{2}(t_{0})\), with \[\gamma_{2}^{\prime}(t_{0})\neq 0\,.\] (A.7) Since \(\tilde{\gamma}_{1}((a_{i},b_{i}))\cap\operatorname{cl}B_{r(x)}(x)=\varnothing\) and \(\gamma_{2}([0,1])\subset\operatorname{cl}B_{r(x)}(x)\), we can choose \(\gamma_{2}\) so that the concatenation of \(\tilde{\gamma}_{1}\) and \(\gamma_{2}\) defines a smooth embedding \(\gamma_{*}\) of \(\mathbb{S}^{1}\) into \(U_{i}\subset T\). Up to reparametrizing we may assume that \(\gamma_{*}(1)=x_{12}\). Since \(\gamma_{1}([0,1])\subset V_{1}\) and \(V_{1}\cap(S\cup T[s])=\varnothing\), we have that \[\gamma_{*}(\mathbb{S}^{1})\cap(S\cup T[s])=\gamma_{2}([0,1])\cap(S\cup T[s])=\{x_{12}\}\subset T[s]\cap B_{r(x)}(x)\,.\] (A.8) A first consequence of (A.8) is that \(\gamma_{*}(\mathbb{S}^{1})\cap S=\varnothing\).
Similarly, the curve \(\gamma_{**}:\mathbb{S}^{1}\to\Omega\) defined via \(\gamma_{**}(t)=\gamma_{*}(\overline{t})\) (\(t\in\mathbb{S}^{1}\)), where the bar denotes complex conjugation, has the same image as \(\gamma_{*}\) and thus satisfies \(\gamma_{**}(\mathbb{S}^{1})\cap S=\varnothing\) as well. Therefore, in order to obtain a contradiction with \(|V_{2}|=0\), it is enough to prove that either \(\gamma_{*}\in\mathcal{C}\) or \(\gamma_{**}\in\mathcal{C}\). To this end we are now going to prove that one of \(\gamma_{*}\) or \(\gamma_{**}\) is homotopic to \(\gamma\) in \(T\) (and thus in \(\Omega\)), where \(\gamma\) is the curve from the tube \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) considered at the start of the argument. Indeed, let \(\mathbf{p}:\mathbb{S}^{1}\times B_{1}^{n}\to\mathbb{S}^{1}\) denote the canonical projection \(\mathbf{p}(t,x)=t\), and consider the curves \(\sigma_{*}=\mathbf{p}\circ\Phi^{-1}\circ\gamma_{*}:\mathbb{S}^{1}\to\mathbb{S}^{1}\) and \(\sigma_{**}=\mathbf{p}\circ\Phi^{-1}\circ\gamma_{**}\). By (A.8), \(\sigma_{*}^{-1}(\{s\})=\{1\}\), and \(1\) is a regular point of \(\sigma_{*}\) by (A.7) and since \(\Phi\) is a diffeomorphism. Similarly, \(\sigma_{**}^{-1}(\{s\})=\{1\}\) and \(1\) is a regular point of \(\sigma_{**}\). Now, by our construction of \(\gamma_{**}\), exactly one of \(\gamma_{*}\) or \(\gamma_{**}\) is orientation preserving at \(1\) and the other is orientation reversing. So we may compute the winding numbers of \(\sigma_{*}\) and \(\sigma_{**}\) via (see e.g. [10, pg 27]): \[\deg\sigma_{*}=\operatorname{sgn}\,\det D\sigma_{*}(1)=-\operatorname{sgn}\,\det D\sigma_{**}(1)=-\deg\sigma_{**}\in\{+1,-1\}\,.\] If we define \(\sigma=\mathbf{p}\circ\Phi^{-1}\circ\gamma\), then \(\sigma\) has winding number \(1\), and so is homotopic in \(\mathbb{S}^{1}\) to whichever of \(\sigma_{*}\) or \(\sigma_{**}\) has winding number \(1\). Since \(\Phi\) is a diffeomorphism of \(\mathbb{S}^{1}\times B^{n}_{1}\) into \(\Omega\), we conclude that \(\gamma\) is homotopic relative to \(\Omega\) to one of \(\gamma_{*}\) or \(\gamma_{**}\), and, thus, that \(\gamma_{*}\in\mathcal{C}\) or \(\gamma_{**}\in\mathcal{C}\) as desired. ## Appendix B Convergence of every minimizing sequence of \(\Psi_{\mathrm{bk}}(v)\) In proving Theorem 1.5 we have shown that every minimizing sequence \(\{(K_{j},E_{j})\}_{j}\) of \(\Psi_{\mathrm{bk}}(v)\) has a limit \((K,E)\) such that, denoting by \(B^{(w)}\) a ball of volume \(w\), it holds that \[\Psi_{\mathrm{bk}}(v)=\Psi_{\mathrm{bk}}(|E|)+P(B^{(v-|E|)})\,,\qquad\Psi_{\mathrm{bk}}(|E|)=\mathcal{F}_{\mathrm{bk}}(K,E)\,,\] with both \(K\) and \(E\) bounded. In particular, minimizers of \(\Psi_{\mathrm{bk}}(v)\) can be constructed in the form \((K\cup\partial B^{(v-|E|)}(x),E\cup B^{(v-|E|)}(x))\) provided \(x\) is such that \(B^{(v-|E|)}(x)\) is disjoint from \(K\cup E\cup\mathbf{W}\). This argument, although sufficient to prove the existence of minimizers of \(\Psi_{\mathrm{bk}}(v)\), is not sufficient to prove the convergence of every minimizing sequence of \(\Psi_{\mathrm{bk}}(v)\), i.e., to exclude the possibility that \(|E|<v\). This is done in the following theorem at the cost of assuming the \(C^{2}\)-regularity of \(\partial\Omega\). This result will be important in the companion paper [11].
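We also record, for convenience, the elementary formula behind the notation \(B^{(w)}\) (not stated explicitly in the text): a ball of volume \(w\) in \(\mathbb{R}^{n+1}\) has radius \((w/\omega_{n+1})^{1/(n+1)}\), where \(\omega_{n+1}=|B_{1}|\), and therefore \[P(B^{(w)})=(n+1)\,\omega_{n+1}^{1/(n+1)}\,w^{n/(n+1)}\,;\] in particular, since \(w\mapsto w^{n/(n+1)}\) is strictly concave and vanishes at \(w=0\), the map \(w\mapsto P(B^{(w)})\) is strictly subadditive.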
**Theorem B.1**.: _If \(\mathbf{W}\) is the closure of a bounded open set with \(C^{2}\)-boundary, \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), and \(\ell<\infty\), then for every \(v>0\) and every minimizing sequence \(\{(K_{j},E_{j})\}_{j}\) of \(\Psi_{\mathrm{bk}}(v)\) there is a minimizer \((K,E)\) of \(\Psi_{\mathrm{bk}}(v)\) such that \(K\) is \(\mathcal{H}^{n}\)-rectifiable and, up to extracting subsequences and as \(j\to\infty\),_ \[E_{j}\to E\,,\qquad\mu_{j}\stackrel{{\ast}}{{\rightharpoonup}}\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{\ast}E)+2\,\mathcal{H}^{n}\llcorner(K\cap E^{(0)})\,,\] (B.1) _where \(\mu_{j}=\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{\ast}E_{j})+2\,\mathcal{H}^{n}\llcorner(\mathcal{R}(K_{j})\cap E^{(0)}_{j})\)._ Proof.: By step three in the proof of Theorem 6.2, there is \((K,E)\in\mathcal{K}_{\mathrm{B}}\) satisfying (B.1) and such that \(K\) and \(E\) are bounded, \((K,E)\) is a minimizer of \(\Psi_{\mathrm{bk}}(|E|)\), \(K\) is \(\mathcal{H}^{n}\)-rectifiable, and \(|E|\leq v\); moreover, if \(v>|E|\), then there is \(x\in\mathbb{R}^{n+1}\) such that \(B^{(v-|E|)}(x)\) is disjoint from \(K\cup E\cup\mathbf{W}\) and \((K^{\prime},E^{\prime})=(K\cup\partial B^{(v-|E|)}(x),E\cup B^{(v-|E|)}(x))\) is a minimizer of \(\Psi_{\mathrm{bk}}(v)\). We complete the proof by deriving a contradiction in the case \(v^{\ast}=v-|E|>0\). The idea is to relocate \(B^{(v^{\ast})}(x)\) to save perimeter by touching \(\partial\mathbf{W}\) or \(\partial E\); see Figure B.1. First of all, we claim that \(K=\Omega\cap\partial E\). If not, since \((K,E)\) and \((K^{\prime},E^{\prime})\) respectively are minimizers of \(\Psi_{\mathrm{bk}}(|E|)\) and \(\Psi_{\mathrm{bk}}(v)\), then there are \(\lambda,\lambda^{\prime}\in\mathbb{R}\) such that \((K,E)\) and \((K^{\prime},E^{\prime})\) respectively satisfy (6.1) with \(\lambda\) and \(\lambda^{\prime}\). By localizing (6.1) for \((K^{\prime},E^{\prime})\) at points in \(\Omega\cap\partial^{\ast}E\) we see that it must be \(\lambda=\lambda^{\prime}\); by localizing at points in \(\partial B^{(v-|E|)}(x)\), we see that \(\lambda\) is equal to the mean curvature of \(\partial B^{(v-|E|)}(x)\), so that \(\lambda>0\); by arguing as in the proof of [10, Theorem 2.9] (see [11] for the details), we see that if \(K\setminus(\Omega\cap\partial E)\neq\varnothing\), then \(\lambda\leq 0\), a contradiction. Having established that \(K=\Omega\cap\partial E\), we move a half-space \(H\) compactly containing \(\operatorname{cl}\,(E)\cup\mathbf{W}\) until the boundary hyperplane \(\partial H\) first touches \(\operatorname{cl}\,(E)\cup\mathbf{W}\). Up to a rotation and a translation, we can thus assume that \(H=\{x_{n+1}>0\}\) and \[0\in\operatorname{cl}\,(E)\cup\mathbf{W}\subset\operatorname{cl}\,(H)\,.\] (B.2) We split (B.2) into two cases, \(0\in\Omega\cap\partial E\) and \(0\in\mathbf{W}\), which are then separately discussed for the sake of clarity. In both cases we write \(x=(x^{\prime},x_{n+1})\in\mathbb{R}^{n}\times\mathbb{R}\equiv\mathbb{R}^{n+1}\), and set \[\mathbf{C}_{\delta} = \{x:x_{n+1}\in(0,\delta)\,,|x^{\prime}|<\delta\}\,,\] \[\mathbf{L}_{\delta} = \left\{x:\left|x^{\prime}\right|=\delta,x_{n+1}\in(0,\delta)\right\},\] \[\mathbf{T}_{\delta} = \left\{x:x_{n+1}=\delta\,,\left|x^{\prime}\right|<\delta\right\},\] \[\mathbf{D}_{\delta} = \left\{x:x_{n+1}=0\,,\left|x^{\prime}\right|<\delta\right\},\] for every \(\delta>0\).
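Before entering the two cases, we record an elementary observation that is used repeatedly below (it is implicit in the estimates that follow): up to \(\mathcal{H}^{n}\)-negligible sets, the boundary of the cylinder decomposes into its lateral, top, and bottom parts, \[\partial\mathbf{C}_{\delta}\stackrel{{\mathcal{H}^{n}}}{{=}}\mathbf{L}_{\delta}\cup\mathbf{T}_{\delta}\cup\mathbf{D}_{\delta}\,,\] so that relative perimeters in \(\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\) can be estimated on \(\mathbf{L}_{\delta}\), \(\mathbf{T}_{\delta}\), and \(\mathbf{D}_{\delta}\) separately.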
_Case one, \(0\in\Omega\cap\partial E\)_: In this case, by the maximum principle [13, Lemma 3], (6.1), and the Allard regularity theorem, we can find \(\delta_{0}>0\) and \(u\in C^{2}(\mathbf{D}_{\delta_{0}};[0,\delta_{0}])\) with \(u(0)=0\) and \(\nabla u(0)=0\) such that \(\mathbf{C}_{\delta_{0}}\subset\subset\Omega\) and \[E\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{\delta_{0}}:\delta_{0}>x_{n+1}>u(x^{\prime})\right\},\] (B.3) \[(\partial E)\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{\delta_{0}}:x_{n+1}=u(x^{\prime})\right\}.\] Since \(0\leq u(x^{\prime})\leq C\,|x^{\prime}|^{2}\) for some \(C=C(E)\), if we set \[\Gamma_{\delta}=\left\{x\in\mathbf{C}_{\delta}:0<x_{n+1}<u(x^{\prime})\right\},\qquad\delta\in(0,\delta_{0})\,,\] (B.4) then we have \[\left|\Gamma_{\delta}\right| \leq C\,\delta^{n+2}\,,\] (B.5) \[P\big{(}\Gamma_{\delta};\mathbf{L}_{\delta}\big{)} \leq C\,\delta^{n+1}\,.\] (B.6) We then set \[E_{\delta}=E\cup\Gamma_{\delta}\cup\left(B_{r_{\delta}}(z_{\delta})\setminus H\right),\] (B.7) see Figure B.1-(a), where \(r_{\delta}>0\) and \(z_{\delta}\in\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right)\) are uniquely determined by requiring, first, that \[\operatorname{cl}\left(B_{r_{\delta}}(z_{\delta})\right)\cap\partial H=\partial\mathbf{C}_{\delta}\cap\partial H=\left\{x:x_{n+1}=0\,,\left|x^{\prime}\right|\leq\delta\right\},\] (B.8) and, second, that \[\left|E_{\delta}\right|=v\,.\] (B.9) To see that this choice is possible, we first notice that, since \(E\cap\Gamma_{\delta}=\varnothing\), (B.9) is equivalent to \[\left|B_{r_{\delta}}(z_{\delta})\setminus H\right|=v-\left|E\right|-\left|\Gamma_{\delta}\right|=v^{*}-\left|\Gamma_{\delta}\right|.\] (B.10) Taking (B.5) into account we see that (B.8) and (B.10) uniquely determine \(z_{\delta}\in\mathbb{R}^{n+1}\) and \(r_{\delta}>0\) as soon as \(\delta_{0}\) is small enough to guarantee \(v^{*}-|\Gamma_{\delta_{0}}|>0\). In fact, by (B.5), \(v^{*}-|\Gamma_{\delta}|\to v^{*}>0\) and \(\mathcal{H}^{n}(\partial\mathbf{C}_{\delta}\cap\partial H)\to 0\) as \(\delta\to 0^{+}\), so that, up to further decreasing \(\delta_{0}\), we have \(z_{\delta}\not\in H\) and \[\Big{|}r_{\delta}-\Big{(}\frac{v^{*}}{\omega_{n+1}}\Big{)}^{1/(n+1)}\Big{|}\leq C\,\delta^{n+2}\,,\] (B.11) where \(C=C(E,n,v^{*})\). We now use the facts that \(K\cup E^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and that \(E\subset E_{\delta}\) to prove that \[(K_{\delta},E_{\delta})=((\Omega\cap\partial^{*}E_{\delta})\cup(K\cap E_{\delta}^{(0)}),E_{\delta})\] (B.12) is such that \(K_{\delta}\cup E_{\delta}^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) (and thus is admissible in \(\Psi_{\mathrm{bk}}(v)\) by (B.9)). To this end, it is enough to show that \[K\cup E^{(1)}\overset{\mathcal{H}^{n}}{\subset}K_{\delta}\cup E_{\delta}^{(1)}\,.\] (B.13) Indeed, by \(E\subset E_{\delta}\) and Federer's theorem (1.37) we have \[E^{(1)}\subset E_{\delta}^{(1)}\,,\qquad E_{\delta}^{(0)}\subset E^{(0)}\,,\qquad E^{(1)}\cup\partial^{*}E\overset{\mathcal{H}^{n}}{\subset}E_{\delta}^{(1)}\cup\partial^{*}E_{\delta}\,.\] (B.14) (Notice indeed that \(\partial^{*}E\subset E^{(1/2)}\subset\mathbb{R}^{n+1}\setminus E_{\delta}^{(0)}\).)
Next, using in order Federer's theorem (1.37), (B.14) and \(K\subset\Omega\), and the definition of \(K_{\delta}\), we have \[E^{(1)}\cup(K\setminus E_{\delta}^{(0)})\overset{\mathcal{H}^{n}}{=}E^{(1)}\cup[K\cap(\partial^{*}E_{\delta}\cup E_{\delta}^{(1)})]\overset{\mathcal{H}^{n}}{\subset}E_{\delta}^{(1)}\cup(\Omega\cap\partial^{*}E_{\delta})\subset E_{\delta}^{(1)}\cup K_{\delta}\,.\] But \(K\cap E_{\delta}^{(0)}\subset K_{\delta}\) by definition, which combined with the preceding containment completes the proof of (B.13). Having proved that \((K_{\delta},E_{\delta})\) is admissible in \(\Psi_{\mathrm{bk}}(v)\), we have \[\mathcal{F}_{\mathrm{bk}}(K,E)+P(B^{(v^{*})})=\Psi_{\mathrm{bk}}(v)\leq\mathcal{F}_{\mathrm{bk}}(K_{\delta},E_{\delta})\,.\] (B.15) By (B.15), the definition of \(K_{\delta}\), and (B.14), we find \[P(E;\Omega)+2\,\mathcal{H}^{n}(K\cap E^{(0)})+P(B^{(v^{*})}) \leq P(E_{\delta};\Omega)+2\,\mathcal{H}^{n}(K_{\delta}\cap E_{\delta}^{(0)})\leq P(E_{\delta};\Omega)+2\,\mathcal{H}^{n}(K\cap E_{\delta}^{(0)})\leq P(E_{\delta};\Omega)+2\,\mathcal{H}^{n}(K\cap E^{(0)})\,,\] from which we deduce \[P(E;\Omega)+P(B^{(v^{*})})\leq P(E_{\delta};\Omega)\,.\] (B.16) We now notice that \(E_{\delta}\) coincides with \(E\) in the open set \(\Omega\cap H\setminus\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\), and with \(B_{r_{\delta}}(z_{\delta})\) in the open set \(\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right)\), so that \[\Big{(}\Omega\cap H\setminus\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\Big{)}\cap\partial^{*}E_{\delta}=\Big{(}\Omega\cap H\setminus\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\Big{)}\cap\partial^{*}E\,,\] \[\big{(}\Omega\setminus\operatorname{cl}\left(H\right)\big{)}\cap\partial^{*}E_{\delta}=\big{(}\partial B_{r_{\delta}}(z_{\delta})\big{)}\setminus\operatorname{cl}\left(H\right),\] and (B.16) is equivalent to \[P\big{(}E;\Omega\cap(\partial H\cup\operatorname{cl}\left(\mathbf{C}_{\delta}\right))\big{)}+P(B^{(v^{*})})\] (B.17) \[\leq P\big{(}E_{\delta};\Omega\cap(\partial H\cup\operatorname{cl}\left(\mathbf{C}_{\delta}\right))\big{)}+P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right))\,.\] In fact, it is easily proved that \((\partial^{*}E)\cap(\partial H)\setminus\operatorname{cl}\left(\mathbf{C}_{\delta}\right)=(\partial^{*}E_{\delta})\cap(\partial H)\setminus\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\) (which is evident from Figure B.1), so that (B.17) readily implies \[P(B^{(v^{*})})\leq P\big{(}E_{\delta};\Omega\cap\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\big{)}+P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right))\,.\] (B.18) Now, \(\mathbf{C}_{\delta}\subset\subset\Omega\). Moreover, by (B.3), we have that \(\mathbf{T}_{\delta}\) (the top part of \(\partial\mathbf{C}_{\delta}\)) is contained in \(E^{(1)}\subset E_{\delta}^{(1)}\), and is thus \(\mathcal{H}^{n}\)-disjoint from \(\partial^{*}E_{\delta}\). Similarly, again by (B.3) we have \(E\cup\Gamma_{\delta}=\mathbf{C}_{\delta}\), and thus \(\mathbf{D}_{\delta}\subset(E\cup\Gamma_{\delta})^{(1/2)}\); at the same time, by (B.8) we also have \(\mathbf{D}_{\delta}\subset(B_{r_{\delta}}(z_{\delta})\setminus H)^{(1/2)}\); therefore \(\mathbf{D}_{\delta}\subset E_{\delta}^{(1)}\), and thus \(\mathbf{D}_{\delta}\) is \(\mathcal{H}^{n}\)-disjoint from \(\partial^{*}E_{\delta}\).
Finally, again by \(E\cup\Gamma_{\delta}=\mathbf{C}_{\delta}\) we see that \(P(E_{\delta};\mathbf{C}_{\delta})=0\). Therefore, in conclusion, \[P\big{(}E_{\delta};\Omega\cap\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\big{)}=P(E_{\delta};\mathbf{L}_{\delta})=P(\Gamma_{\delta};\mathbf{L}_{\delta})\leq C\,\delta^{n+1}\,,\] (B.19) where we have used again first (B.3), and then (B.6). Combining (B.18)-(B.19) we get \[P(B^{(v^{*})})\leq P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right))+C\,\delta^{n+1}\,.\] (B.20) Finally, by (B.8), (B.5), and (B.11) we have \[P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right))\leq P(B^{(v^{*})})-C(n)\,\delta^{n}\,;\] by combining this estimate with (B.20), we reach a contradiction for \(\delta\) small enough. _Case two, \(0\in\mathbf{W}\)_: In this case, by the \(C^{2}\)-regularity of \(\partial\Omega\) we can find \(\delta_{0}>0\) and \(u\in C^{2}(\mathbf{D}_{\delta_{0}};[0,\delta_{0}])\) with \(u(0)=0\) and \(\nabla u(0)=0\) such that \[\mathbf{W}\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{\delta_{0}}:\delta_{0}>x_{n+1}>u(x^{\prime})\right\},\] (B.21) \[(\partial\Omega)\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{\delta_{0}}:x_{n+1}=u(x^{\prime})\right\}.\] We have \(0\leq u(x^{\prime})\leq C\,|x^{\prime}|^{2}\) for every \(|x^{\prime}|<\delta_{0}\) (and some \(C=C(\mathbf{W})\)), so that defining \(\Gamma_{\delta}\) as in (B.4) we still obtain (B.5) and (B.6). We then define \(E_{\delta}\), \(r_{\delta}\), and \(z_{\delta}\) as in (B.7), (B.8) and (B.9). Notice that now \(E\) and \(\Gamma_{\delta}\) may not be disjoint (see Figure B.1-(b)), so that (B.9) is not equivalent to (B.10), but to \[\big{|}B_{r_{\delta}}(z_{\delta})\setminus H\big{|}=v-|E|-|\Gamma_{\delta}\setminus E|=v^{*}-|\Gamma_{\delta}\setminus E|\,.\] This is still sufficient to repeat the considerations based on (B.8) and (B.5) proving that \(r_{\delta}\) and \(z_{\delta}\) are uniquely determined, and satisfy (B.11). We can repeat the proof that \((K_{\delta},E_{\delta})\) defined as in (B.12) is admissible in \(\Psi_{\mathrm{bk}}(v)\) (since that proof was based only on the inclusion \(E\subset E_{\delta}\)), and thus obtain (B.16). The same considerations leading from (B.16) to (B.18) apply in the present case too, and so we arrive at \[P(B^{(v^{*})})\leq P\big{(}E_{\delta};\Omega\cap\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\big{)}+P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right))\,.\] (B.22) Now, by (B.21), \(\mathbf{T}_{\delta}\) is contained in \(\mathbf{W}\), so that \(P(E_{\delta};\mathbf{T}_{\delta})=0\). At the same time, if \(x=(x^{\prime},0)\in\mathbf{D}_{\delta}\cap\Omega\), then \(u(x^{\prime})>0\), and thus \(x\in(E_{\delta}\cap H)^{(1/2)}\); since, by (B.8), we also have \(x\in(E_{\delta}\setminus H)^{(1/2)}\), we conclude that \(\mathbf{D}_{\delta}\cap\Omega\subset E_{\delta}^{(1)}\), and thus that \[P\big{(}E_{\delta};\Omega\cap\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\big{)}=P\big{(}E_{\delta};\Omega\cap\mathbf{L}_{\delta}\big{)}\leq\mathcal{H}^{n}(\Omega\cap\mathbf{L}_{\delta})\leq C\,\delta^{n+1}\,,\] where we have used \(0\leq u(x^{\prime})\leq C\,|x^{\prime}|^{2}\) for every \(|x^{\prime}|<\delta_{0}\) again.
We thus deduce from (B.22) that \[P(B^{(v^{*})})\leq P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right))+C\,\delta^{n+1}\,,\] and from here we conclude as in case one. ## Appendix C An elementary lemma In this appendix we provide a proof of Lemma 7.2. The proof is an immediate corollary of a geometric property of closed \(\mathcal{C}\)-spanning sets (see (C.2)-(C.3) below) first proved in \(\mathbb{R}^{n+1}\) for \(n\geq 2\) [20, Lemma 4.1]. Here we extend this property to the plane. The difference between \(\mathbb{R}^{2}\) and \(\mathbb{R}^{n+1}\) for \(n\geq 2\) stems from a part of the argument where one constructs a new admissible spanning curve by modifying an existing one inside a ball. Specifically, ensuring that the new curve does not intersect itself requires an extra argument in \(\mathbb{R}^{2}\). **Lemma C.1**.: _Let \(n\geq 1\), \(\mathbf{W}\subset\mathbb{R}^{n+1}\) be closed, \(\mathcal{C}\) be a spanning class for \(\mathbf{W}\), \(S\subset\Omega:=\mathbb{R}^{n+1}\setminus\mathbf{W}\) be relatively closed and \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \(B_{r}(x)\subset\subset\Omega\). Let \(\{\Gamma_{i}\}_{i}\) be the countable family of equivalence classes of \(\partial B_{r}(x)\setminus S\) determined by the relation:_ \[y\sim z\iff\exists\tilde{\gamma}\in C^{0}([0,1],\operatorname{cl}B_{r}(x)\setminus S):\tilde{\gamma}(0)=y\text{, }\tilde{\gamma}(1)=z\text{, }\tilde{\gamma}((0,1))\subset B_{r}(x)\,.\] (C.1) _Then if \(\gamma\in\mathcal{C}\), either_ \[\gamma\cap(S\setminus B_{r}(x))\neq\varnothing\] (C.2) _or there exists a connected component \(\sigma\) of \(\gamma\cap\operatorname{cl}B_{r}(x)\) which is homeomorphic to an interval and such that_ \[\text{the endpoints of }\sigma\text{ belong to two distinct equivalence classes of }\partial B_{r}(x)\setminus S\text{.}\] (C.3) _In particular, the conclusion of Lemma 7.2 holds._ **Remark C.2**.: The planar version of Lemma C.1 allows one to extend the main existence result [15, Theorem 2.7] to \(\mathbb{R}^{2}\). Proof of Lemma C.1.: The proof is divided into two pieces. First we show how to deduce Lemma 7.2 from the fact that at least one of (C.2)-(C.3) holds. Then we show in \(\mathbb{R}^{2}\) that (C.3) must hold whenever (C.2) does not, completing the lemma since the case \(n\geq 2\) is contained in [15, Lemma 4.1]. _Conclusion of Lemma 7.2 from (C.2)-(C.3)_: We must show that either \(\gamma(\mathbb{S}^{1})\setminus B_{r}(x)\neq\varnothing\) or \(\gamma(\mathbb{S}^{1})\) intersects at least two open connected components of \(B_{r}(x)\setminus S\). If \(\gamma(\mathbb{S}^{1})\setminus B_{r}(x)\neq\varnothing\) we are done, so suppose that \(\gamma(\mathbb{S}^{1})\setminus B_{r}(x)=\varnothing\). Then (C.3) must be true, so that the endpoints of some arc \(\sigma=\gamma((a,b))\subset B_{r}(x)\) for an interval \((a,b)\subset\mathbb{S}^{1}\) belong to distinct equivalence classes. Choose \(\rho\) small enough so that \(B_{\rho}(\gamma(a))\cup B_{\rho}(\gamma(b))\subset\Omega\setminus S\) and \(a^{\prime}\), \(b^{\prime}\in(a,b)\) such that \(\gamma(a^{\prime})\in B_{\rho}(\gamma(a))\) and \(\gamma(b^{\prime})\in B_{\rho}(\gamma(b))\). If \(\gamma(a^{\prime})\) and \(\gamma(b^{\prime})\) belonged to the same open connected component of \(B_{r}(x)\setminus S\), we would contradict (C.3), so they belong to different components as desired.
_Verification of (C.2)-(C.3) in \(\mathbb{R}^{2}\)_: As in [15, Lemma 10], we may reduce to the case where \(\gamma\) intersects \(\partial B_{r}(x)\) transversally at finitely many points \(\{\gamma(a_{k})\}_{k=1}^{K}\cup\{\gamma(b_{k})\}_{k=1}^{K}\) such that \(\gamma\cap B_{r}(x)=\cup_{k}\gamma((a_{k},b_{k}))\) and \(\{[a_{k},b_{k}]\}_{k}\) are mutually disjoint closed arcs in \(\mathbb{S}^{1}\). If (C.2) holds we are done, so we assume that \[\gamma\cap S\setminus B_{r}(x)=\varnothing\] (C.4) and prove (C.3). Note that each pair \(\{\gamma(a_{k}),\gamma(b_{k})\}\) bounds two open arcs in \(\partial B_{r}(x)\); we make a choice now as follows. Choose \(s_{0}\in\partial B_{r}(x)\setminus\cup_{k}\{\gamma(a_{k}),\gamma(b_{k})\}\). Based on our choice of \(s_{0}\), for each \(k\) there is a unique open arc \(\ell_{k}\subset\partial B_{r}(x)\) such that \(\partial_{\partial B_{r}(x)}\ell_{k}=\{\gamma(a_{k}),\gamma(b_{k})\}\) and \(s_{0}\notin\operatorname{cl}\,_{\partial B_{r}(x)}\ell_{k}\). We claim that \[\text{if }k\neq k^{\prime}\text{, then either }\ell_{k}\cap\ell_{k^{\prime}}=\varnothing\text{, or }\ell_{k}\subset\subset\ell_{k^{\prime}}\text{, or }\ell_{k^{\prime}}\subset\subset\ell_{k}\,.\] (C.5) _To prove (C.5)_: We consider simple closed curves \(\gamma_{k}\) with images \(\gamma((a_{k},b_{k}))\cup\operatorname{cl}\,_{\partial B_{r}(x)}\ell_{k}\). By the Jordan curve theorem, each \(\gamma_{k}\) defines a connected open subset \(U_{k}\) of \(B_{r}(x)\) with \(\partial U_{k}\cap\partial B_{r}(x)=\operatorname{cl}\,_{\partial B_{r}(x)}\ell_{k}\). Aiming for a contradiction, if (C.5) were false, then for some \(k\neq k^{\prime}\), either \[\gamma(a_{k})\in\ell_{k^{\prime}}\subset\operatorname{cl}U_{k^{\prime}}\text{ and }\gamma(b_{k})\in\partial B_{r}(x)\setminus\operatorname{cl}\,_{\partial B_{r}(x)}\ell_{k^{\prime}}\subset\partial B_{r}(x)\setminus\operatorname{cl}U_{k^{\prime}}\text{ or}\] \[\gamma(b_{k})\in\ell_{k^{\prime}}\subset\operatorname{cl}U_{k^{\prime}}\text{ and }\gamma(a_{k})\in\partial B_{r}(x)\setminus\operatorname{cl}\,_{\partial B_{r}(x)}\ell_{k^{\prime}}\subset\partial B_{r}(x)\setminus\operatorname{cl}U_{k^{\prime}}\,;\] in particular, \(\gamma((a_{k},b_{k}))\) has non-trivial intersection with both the open sets \(U_{k^{\prime}}\) and \(B_{r}(x)\setminus\operatorname{cl}U_{k^{\prime}}\). By the continuity of \(\gamma\) and the connectedness of \((a_{k},b_{k})\), we thus deduce that \(\gamma((a_{k},b_{k}))\cap\partial U_{k^{\prime}}\neq\varnothing\). Upon recalling that \(\gamma((a_{k},b_{k}))\subset B_{r}(x)\), we find \(\gamma((a_{k},b_{k}))\cap\partial U_{k^{\prime}}\cap B_{r}(x)=\gamma((a_{k},b_{k}))\cap\gamma((a_{k^{\prime}},b_{k^{\prime}}))\neq\varnothing\). But this contradicts the fact that \(\gamma\) smoothly embeds \(\mathbb{S}^{1}\) into \(\Omega\). The proof of (C.5) is finished. Returning to the proof of (C.3), let us assume for contradiction that \[\gamma(a_{k})\sim\gamma(b_{k})\quad\forall 1\leq k\leq K\,.\] (C.6) We are going to use (C.4), (C.5), and (C.6) to create a piecewise smooth embedding \(\overline{\gamma}:\mathbb{S}^{1}\to\Omega\) which is a homotopic deformation of \(\gamma\) (and thus approximable by elements in \(\mathcal{C}\)) such that \(\overline{\gamma}\cap S=\varnothing\). After reindexing the equivalence classes \(\Gamma_{i}\), we may assume that \(\{\Gamma_{1},\dots,\Gamma_{I_{\gamma}}\}\) are those equivalence classes containing any pair \(\{\gamma(a_{k}),\gamma(b_{k})\}\) for \(1\leq k\leq K\).
We will construct \(\overline{\gamma}\) in steps by redefining \(\gamma\) on those \([a_{k},b_{k}]\) with images under \(\gamma\) having endpoints belonging to the same \(\Gamma_{i}\). For future use, let \(\Omega_{i}\) be the equivalence classes of \(B_{r}(x)\setminus S\) determined by the relation (C.1). Note that they are open connected components of \(B_{r}(x)\setminus S\). _Construction corresponding to \(\Gamma_{1}\)_: Relabelling in \(k\) if necessary, we may assume that \(\{1,\dots,K_{1}\}\) for some \(1\leq K_{1}\leq K\) are the indices such that \(\{\gamma(a_{k}),\gamma(b_{k})\}\subset\Gamma_{1}\). By further relabelling and applying (C.5) we may assume: first, that \(\ell_{1}\) is a "maximal" arc among \(\{\ell_{1},\dots,\ell_{K_{1}}\}\), in other words \[\text{for given $k\in\{2,\dots K_{1}\}$, either $\ell_{1}\cap\ell_{k}=\varnothing$ or $\ell_{k}\subset\!\!\subset\ell_{1}$}\,;\] (C.7) and second, that for some \(K_{1}^{1}\leq K_{1}\), \(\{\ell_{2},\dots,\ell_{K_{1}^{1}}\}\) are those arcs contained in \(\ell_{1}\). Since \(\Omega_{1}\) is open and connected, we may connect \(\gamma(a_{1})\) to \(\gamma(b_{1})\) by a smooth embedding \(\overline{\gamma}_{1}:[a_{1},b_{1}]\to\operatorname{cl}B_{r}(x)\setminus S\) with \(\overline{\gamma}_{1}((a_{1},b_{1}))\subset\Omega_{1}\). Also, by the Jordan curve theorem, \(\ell_{1}\cup\overline{\gamma}_{1}\) defines an open connected subset \(W_{1}\) of \(B_{r}(x)\) with \(\partial W_{1}\cap S=\varnothing\). Using (C.5), we now argue towards constructing pairwise disjoint smooth embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\Gamma_{1}\cup\Omega_{1}\). We first claim that \[W_{1}\setminus S\text{ is path-connected}\,.\] (C.8) To prove (C.8), consider any \(y,z\in W_{1}\setminus S\). Since \(\Omega_{1}\supset W_{1}\setminus S\) is open and path-connected, we may obtain a continuous \(\tilde{\gamma}:[0,1]\to\Omega_{1}\) connecting \(y\) and \(z\). If \(\tilde{\gamma}([0,1])\subset W_{1}\setminus S\), we are done. Otherwise, \(\tilde{\gamma}([0,1])\) intersects \(\Omega_{1}\setminus(W_{1}\setminus S)=\Omega_{1}\setminus W_{1}\), the equality following from \(\Omega_{1}\cap S=\varnothing\). Combining this information with \(\tilde{\gamma}(\{0,1\})\subset W_{1}\setminus S\), we may therefore choose \([\delta_{1},\delta_{2}]\subset(0,1)\) to be the smallest interval such that \(\tilde{\gamma}([0,1]\setminus[\delta_{1},\delta_{2}])\subset W_{1}\setminus S\). On \((\delta_{1},\delta_{2})\), we redefine \(\tilde{\gamma}\) using the fact that \(\tilde{\gamma}(\{\delta_{1},\delta_{2}\})\subset\partial W_{1}\cap B_{r}(x)=\overline{\gamma}_{1}((a_{1},b_{1}))\) by letting \(\tilde{\gamma}((\delta_{1},\delta_{2}))=\overline{\gamma}_{1}(I)\), where \(\overline{\gamma}_{1}(I)\) has endpoints \(\tilde{\gamma}(\delta_{1})\) and \(\tilde{\gamma}(\delta_{2})\) and \(I\subset(a_{1},b_{1})\). The modified \(\tilde{\gamma}\) is a concatenation of continuous curves and is thus continuous; furthermore, \(\tilde{\gamma}^{-1}(W_{1}\setminus S)=[0,\delta_{1})\cup(\delta_{2},1]\). It only remains to "push" \(\tilde{\gamma}\) entirely inside \(W_{1}\setminus S\), which we may easily achieve by projecting \(\tilde{\gamma}((\delta_{1}-\varepsilon,\delta_{2}+\varepsilon))\) inside \(W_{1}\setminus S\) for small \(\varepsilon\) using the distance function to the smooth curve \(\overline{\gamma}_{1}((a_{1},b_{1}))=\partial W_{1}\cap B_{r}(x)\subset B_{r}(x)\setminus S\). This completes (C.8).
But now since \(W_{1}\setminus S\) is path-connected and open, we may connect any two points in it by a smooth embedding of \([0,1]\), which in particular allows us to connect \(\gamma(a_{2})\) and \(\gamma(b_{2})\) by a smooth embedding \(\overline{\gamma}_{2}:[a_{2},b_{2}]\to\operatorname{cl}W_{1}\setminus S\) with \(\overline{\gamma}_{2}((a_{2},b_{2}))\subset W_{1}\setminus S\). Let \(W_{2}\) be the connected open subset of \(W_{1}\) determined by the Jordan curve \(\overline{\gamma}_{2}\cup\ell_{2}\). Arguing exactly as in (C.8), \(W_{2}\setminus S\) is open and path-connected, so we can iterate this argument to obtain mutually disjoint embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\operatorname{cl}W_{1}\setminus S\subset\Gamma_{1}\cup\Omega_{1}\) with \(\overline{\gamma}_{k}((a_{k},b_{k}))\subset\Omega_{1}\) for \(1\leq k\leq K_{1}^{1}\). Next, let \(\ell_{K_{1}^{1}+1}\) be another maximal curve with endpoints in \(\Gamma_{1}\). The same argument as in the proof of (C.8) implies that \(\Omega_{1}\setminus\operatorname{cl}W_{1}\) is path-connected, and so \(\gamma(a_{K_{1}^{1}+1})\), \(\gamma(b_{K_{1}^{1}+1})\) may be connected by a smooth embedding \(\overline{\gamma}_{K_{1}^{1}+1}:[a_{K_{1}^{1}+1},b_{K_{1}^{1}+1}]\to(\Gamma_{1}\cup\Omega_{1})\setminus\operatorname{cl}W_{1}\), which, together with \(\ell_{K_{1}^{1}+1}\), defines a connected domain \(W_{K_{1}^{1}+1}\subset\Omega_{1}\) by the Jordan curve theorem. In addition, \(W_{K_{1}^{1}+1}\cap W_{1}=\varnothing\) since \((\ell_{K_{1}^{1}+1}\cup\overline{\gamma}_{K_{1}^{1}+1})\cap\operatorname{cl}W_{1}=\varnothing\) by (C.7) and the definition of \(\overline{\gamma}_{K_{1}^{1}+1}\). Repeating the whole iteration procedure for those intervals contained in \(\ell_{K_{1}^{1}+1}\), and then for the rest of the maximal arcs, we finally obtain mutually disjoint embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\Gamma_{1}\cup\Omega_{1}\) with \(\overline{\gamma}_{k}((a_{k},b_{k}))\subset\Omega_{1}\) as desired for \(1\leq k\leq K_{1}\). _Conclusion of the proof of (C.3)_: Repeating the \(\Gamma_{1}\) procedure for \(\{\Gamma_{2},\dots,\Gamma_{I_{\gamma}}\}\) and using the pairwise disjointness of the \(\Gamma_{i}\), we obtain mutually disjoint embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\operatorname{cl}B_{r}(x)\setminus S\) with \(\overline{\gamma}_{k}((a_{k},b_{k}))\subset B_{r}(x)\setminus S\) for \(1\leq k\leq K\). We define \(\overline{\gamma}:\mathbb{S}^{1}\to\Omega\) by \[\overline{\gamma}(t)=\begin{cases}\gamma(t)&t\in\mathbb{S}^{1}\setminus\cup[a_{k},b_{k}]\\ \overline{\gamma}_{k}(t)&t\in[a_{k},b_{k}]\,,\ \ 1\leq k\leq K\,.\end{cases}\] Since \(\overline{\gamma}=\gamma\) outside \(B_{r}(x)\subset\subset\Omega\), \(\overline{\gamma}\) is homotopic to \(\gamma\) relative to \(\Omega\). Furthermore, \(\overline{\gamma}\) is piecewise smooth, and so it can be approximated in the \(C^{0}\) norm by \(\{\gamma_{j}\}\subset\mathcal{C}\). However, by (C.4) and the construction of \(\overline{\gamma}_{k}\), \(\overline{\gamma}\cap S=\varnothing\), which implies that \(S\cap\gamma_{j}=\varnothing\) for large \(j\). This contradicts the fact that \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and so (C.3) is true.
Within the framework of Gauss' theory of capillarity, we rigorously derive the equilibrium theorem for Plateau borders, known as three-dimensional structures. A key step in the analysis is a complete measure-theoretic overhaul of the homotopic spanning condition introduced by Harrison and Pugh in their study of Plateau's theorem for two-dimensional area-minimizing surfaces ("dry" soap films). This new point of view yields effective compactness theorems and energy representation formulas for the homotopic spanning relaxation of Gauss' capillarity theory, by means of which we prove the regularity of energy minimizers. The equilibrium theorem for Plateau borders in wet foams is also treated as a simple variant of the theory of wet soap films.
2308.16691
Prospects for observing supermassive black hole binaries with the space-ground interferometer
A list of candidates for \textit{supermassive binary black holes} (SMBBHs), compiled from available data on the variability in the optical range and the shape of the emission spectrum, is analysed. An artificial neural network is constructed to estimate the radiation flux at 240~GHz. For those candidate SMBBHs for which the network building procedure was feasible, the criterion of the possibility of observing the source with the \textit{Millimetron Space Observatory} (MSO) was tested. The result is presented as a table of 17 candidate SMBBHs. Confirmation (or refutation) of the duality of these objects by means of observational data which could be obtained with a space-ground interferometer with parameters similar to those of the MSO will be an important milestone in the development of the theory of galaxy formation.
A. M. Malinovsky, E. V. Mikheeva
2023-08-31T12:51:02
http://arxiv.org/abs/2308.16691v2
# Prospects for observing supermassive black hole binaries with the space-ground interferometer ###### Abstract A list of candidates for _supermassive binary black holes_ (SMBBHs), compiled from available data on the variability in the optical range and the shape of the emission spectrum, is analysed. An artificial neural network is constructed to estimate the radiation flux at 240 GHz. For those candidate SMBBHs for which the network building procedure was feasible, the criterion of the possibility of observing the source with the _Millimetron Space Observatory_ (MSO) was tested. The result is presented as a table of 17 candidate SMBBHs. Confirmation (or refutation) of the duality of these objects by means of observational data which could be obtained with a space-ground interferometer with parameters similar to those of the MSO will be an important milestone in the development of the theory of galaxy formation. ## 1 Introduction It is believed that in the central part of any massive galaxy (\(M>10^{12}M_{\odot}\)) there is a _supermassive black hole_ (SMBH)1. This statement is now generally accepted, although unambiguous observational evidence for it is only available for the Milky Way [1, 2]. For several hundred other galaxies, there are estimates of central SMBH masses measured by various methods [3, 4, 5]. The most reliable methods include those based on the study of stellar or gas dynamics, while the currently widely used reverberation method contains a poorly understood systematic error associated with the type of the object under study [6]. Another new method of measuring the masses of SMBHs is the measurement of the black hole shadow using radio interferometry, which is not only an important achievement of observational astronomy, but also provides independent estimates of the masses of the SMBHs M87* [7] and SgrA* [8]. Footnote 1: An SMBH is a black hole with mass \(M>10^{4}M_{\odot}\). The currently intensively investigated SMBHs have masses \(M>10^{6}M_{\odot}\). Over the years of studying SMBHs in galaxy centers, several correlations have been found linking the mass of the central SMBH of a galaxy with such parameters as the mass of the galaxy in which it is located, the mass of the stellar bulge, and the total mass of all globular clusters (see [9, 10, 11] and references therein). The presence of such correlations undoubtedly points to a connection between the SMBH mass and the evolution of galaxies (see [12] and references therein). The nature of this relation can change with time, as indicated by recent observational data on the measurement of the masses of SMBHs and bulges of their host galaxies at high redshifts \(z\simeq 6\) (see [13, 14]). The exact physical mechanism that gives rise to SMBHs has not yet been elucidated. Of course, it is necessary to distinguish between the occurrence of "seeds", which could have a mass in the range of \(10^{2}-10^{5}M_{\odot}\), and the growth of the black hole mass due to accretion of matter from the surrounding space and/or mergers with other black holes. According to the modern ideas, the SMBH seeds can arise as a result of 1) the direct collapse of a gas cloud, 2) the evolution of a dense stellar cluster, or 3) the merger of many stellar-mass black holes [15].
According to modern concepts, the gravitationally bound dark matter halo, which is the dynamically dominant component of any galaxy, undergoes multiple mergers with halos of smaller masses during its evolution; in numerical simulations one can also identify a main merger event, when a halo interacts with another halo of comparable mass. This means that several SMBHs should be observed in galaxies for some time, and thus double (dual or binary) SMBHs [16]. After capture of an SMBH by a halo containing another (more massive, for definiteness) black hole, the size of the SMBH orbit starts to decrease due to dynamical friction [17]. The characteristic time of this process is \(\sim 10^{8}\) years. At this stage, the SMBHs are gravitationally unbound and are referred to as "dual" SMBHs. When the distance between the SMBHs is reduced to \(\sim 1-100\) pc, a gravitationally bound pair is formed, and the system of two SMBHs becomes "binary" [18]. It is such pairs of SMBHs that are under consideration in this paper. The further evolution of the SMBBH is determined by the dynamics of the pair's interaction with individual stars and the gas of the gravitationally bound central star cluster, resulting in the loss of angular momentum and energy by the pair. Many uncertainties remain in the description of this stage, mainly related to the rate at which the loss cone fills, making the duration of this stage hard to estimate. Nevertheless, when the distance between the SMBHs decreases to \(10^{-2}-10^{-3}\) pc, the most efficient mechanism of energy loss by the system becomes gravitational wave emission, leading to the merger of the SMBHs after \(\sim 10^{8}\) years. The details of the evolution of the binary system are an active area of research [19, 20, 21]. A key characteristic of SMBBHs is that they are rare [19]. Although the frequency of occurrence of binary systems remains uncertain and depends on the unknown evolutionary rate on small scales (the central parsec problem), the fraction of active galactic nuclei at redshift \(z<0.7\) containing detectable SMBBHs is estimated to be \(\sim 10^{-3}\) [22]. Close values were obtained using different approaches (see [23, 24, 25]). Simple estimates show that the orbital period of an SMBH in the binary system, \(T\), the semi-major axis, \(a\), and the total mass of the SMBBH, \(M+m\), are related by \[(T/{\rm year})^{2}\simeq 5.92\,\frac{\left(a/(0.01\,{\rm pc})\right)^{3}}{(M+m)/10^{9}M_{\odot}}.\] This means that, if one intends to study binary systems with periods of no more than a few years, it is preferable to use sufficiently massive ("hypermassive", as proposed in [26], based on the capabilities of radio-interferometric observations) black holes separated by sub-pc distances. The best-known candidate for an SMBBH is OJ 287 [27]. According to the most preferred model, the mass of the main component is \(\sim 10^{10}M_{\odot}\), the mass of the secondary component is \(\sim 10^{8}M_{\odot}\), and the orbital period of the binary system is \(\sim 12\) years. The large (compared to other known SMBHs) angular size of the more massive component's shadow, \(\sim 0.2\,\mu s\), and the total flux on the order of 1 Jy make it a good candidate for observations with a space-ground interferometer [28]. 
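To make the scales concrete, the quoted relation is easy to evaluate numerically; the following minimal Python sketch (the function name and sample inputs are ours, not from the paper) returns the period in years for a given separation and total mass:

```python
import math

def orbital_period_years(a_pc: float, total_mass_msun: float) -> float:
    """Orbital period from the approximate relation quoted above:
    (T/yr)^2 ~ 5.92 * (a / 0.01 pc)^3 / ((M+m) / 1e9 Msun)."""
    t_squared = 5.92 * (a_pc / 0.01) ** 3 / (total_mass_msun / 1e9)
    return math.sqrt(t_squared)

# Illustrative sub-pc, "hypermassive" values (not fits to any table entry):
print(orbital_period_years(a_pc=0.01, total_mass_msun=1e9))   # ~2.4 yr
print(orbital_period_years(a_pc=0.056, total_mass_msun=2e10)) # ~7 yr, OJ 287-like scale
```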
One of the main directions in the search for SMBBHs is the analysis of the variability (in the optical range) of active galactic nuclei, mainly quasars. Thus, in [29], the _Catalina Real-time Transient Survey_ (CRTS) catalog containing 243500 spectroscopically confirmed quasars was analyzed. As a result, 111 objects showing signs of variability with periods on the order of a year were selected as candidate SMBBHs. The search for SMBBH candidates was also undertaken in [30] by analyzing the _Palomar Transient Factory_ (PTF) catalog, which contains 35383 spectroscopically confirmed quasars. In the analysis of this catalog, 50 quasars with statistically significant periodicity were identified, and a joint analysis with the CRTS catalog added 33 sources to the list of new SMBBH candidates. Together with OJ 287, this makes a total of 145 candidates for SMBBHs. The success of radio interferometry in the study of the SMBHs located at the centers of the galaxy M87 [7] and the Milky Way [8] suggests that similarly remarkable success can be achieved in the study of SMBBHs. In other words, with a very high-resolution radio interferometer, it will be possible to see emission from the vicinity of both SMBHs of a binary system. However, flux data at 240 GHz are not readily available for the candidate SMBBHs selected for their variability in the optical band. This is partly because observations in the sub-mm range require special conditions, first of all a low concentration of water vapor in the atmosphere at the observation site. For this reason, it is logical to turn to modeling the flux values based on the available data. The interest in SMBBHs is growing every year due to the important place of these objects in understanding the formation of SMBHs and host galaxies [19]. So far, there is no unambiguous evidence for the binarity of the available candidates, even for such an intensively observed object as OJ 287; i.e., very high-resolution interferometric observations are required. In this paper, we present a methodology for modeling the spectra of SMBHs in the mm and submm ranges using artificial intelligence methods. The source fluxes at 240 GHz predicted by an artificial neural network are used to test the observability criterion for sources on a space-ground interferometer with parameters similar to those of the MSO. The candidate SMBBHs satisfying the observability criterion are collected in Table 2. Since the available candidates were selected on the basis of properties only indirectly related to SMBH binarity, i.e. properties that may be due to other reasons, it is important to construct as complete a catalog as possible for interferometric observations in which the binary structure can be established. ## 2 Modeling the spectra of SMBBH candidates For the purposes of this study, those of the 145 known SMBBH candidates were selected for which data on fluxes at frequencies above and below 240 GHz are available. There were 17 such objects. Seven more candidate SMBBHs found in other publications were added to this list (see Table 1). This table lists the names of the candidate sources, the mass estimate2 expressed in units of the solar mass, the physical distance between the black holes in the binary system, \(D\), expressed in parsecs, the estimated orbital period of the system, \(P\), the redshift, \(z\), and the reference. Footnote 2: The equality of component masses has been assumed. An _artificial neural network_ (ANN) was built to model the flux at 240 GHz. 
An ANN is a mathematical model with a structure similar to that of the brain: "neurons", computational units, are connected by synapses that pass data between them. The ANN thus makes it possible to find the relationship between input and output data. Assume that the output data \(y\) is a function of the input data \(x\), \(y=f(x)\). In classical programming, the function \(f\) is known, which makes it possible to determine the corresponding output data for given input data. In machine learning tasks, the function \(f\) is usually not known. During the training of the model, it is provided with both input and output data, which makes it possible to establish the dependence between them. Then, when the trained model is run, the corresponding output data are calculated for new input data. In our case, the input data are frequency values and the output data are the flux densities at these frequencies. First, the model was trained on publicly available data (see the NASA/IPAC Extragalactic Database (NED)3). The trained network then determined the flux density at the frequencies of interest. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Source name & Mass, & \(D\), & \(P\) & \(z\) & reference \\ & \(\log(M/M_{\odot})\) & pc & & & \\ \hline UM 269 & 8.41 & 0.00313 & 490.5 days & 0.308 & [30, 29] \\ CSO 0402+379 & 8.18 & 7.3 & 150 000 years & 0.055 & [31, 32] \\ FBQS J081740.1+232731 & 9.55 & 0.011 & 1190 days & 0.891 & [30, 29] \\ BZQ J0842+4525 & 9.48 & 0.012 & 1886 days & 1.408 & [30, 29] \\ SDSS J084716.04+373218.1 & 8.1 & 0.022 & 40 years & 0.454 & [33, 34] \\ OJ 287 & 10.26 & 0.056 & 4380 days & 0.306 & [27] \\ MCG +11-11-032 & 8 & 0.0036 & 760 days & 0.036 & [19] \\ SBS 0924+606B & 8.9 & 0.044 & 40 years & 0.295 & [33, 34] \\ SDSS J094715.56+631716.4 & 9.22 & 0.014 & 1724 days & 0.487 & [30, 29] \\ SDSS J093819.25+361858.7 & 9.32 & 0.007 & 1265 days & 1.677 & [30, 29] \\ SDSS J100021.80+223318.7 & 9.3 & 0.052 & 35 years & 0.418 & [35, 36] \\ SDSS J102349.38+522151.2 & 9.59 & 0.014 & 1785 days & 0.955 & [30, 29] \\ SDSS J124044.49+231045.8 & 8.94 & 0.008 & 1428 days & 0.722 & [30, 29] \\ BZQ J1305-1033 & 8.50 & 0.008 & 1694 days & 0.286 & [30, 29] \\ SDSS J132103.41+123748.2 & 8.91 & 0.008 & 1538 days & 0.687 & [30, 29] \\ SDSS J133654.44+171040.3 & 9.24 & 0.008 & 1408 days & 1.231 & [30, 29] \\ SDSS J141244.09+421257.6 & 9.69 & 0.00622 & 433.4 days & 0.805 & [30, 29] \\ 3C 298.0 & 9.57 & 0.013 & 1960 days & 1.437 & [30, 29] \\ TEX 1428+370 & 8.53 & 0.00214 & 288.3 days & 0.566 & [30, 29] \\ SDSS J150243.09+111557.3 & 8.06 & 140 & 20 million years & 0.391 & [37] \\ FBQS J150911.2+215508 & 8.54 & 0.00241 & 314.4 days & 0.438 & [30, 29] \\ PG 1553+113 & 8 & 0.0038 & 3 years & 0.360 & [38] \\ HS 1630+2355 & 9.86 & 0.020 & 2040 days & 0.821 & [30, 29] \\ PKS 2203-215 & 8.91 & 0.00408 & 497 days & 0.577 & [30, 29] \\ \hline \end{tabular} \end{table} Table 1: Candidates for SMBBHs _Python_ was used as the programming language and _Tensorflow_ as the machine learning library [39]. In this work, a type of neural network called a _multilayer perceptron_ (MLP) is used, which is a feed-forward network, i.e. the signal propagates straight from the input of the network to the output. An MLP consists of at least three layers: a layer (set) of input neurons receiving information, one or more hidden layers4, which process the information, and a layer of output neurons, which output the results of the calculations. 
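For illustration only, a minimal TensorFlow/Keras sketch of an MLP of the kind described here (two hidden layers of 50 sigmoid neurons, trained with the 10000 epochs and 0.01 learning rate reported in the next paragraph); the toy data and the log-scaling of frequencies and fluxes are our assumptions, not the authors' actual pipeline:

```python
import numpy as np
import tensorflow as tf

# Toy training pairs: log10(frequency/Hz) -> log10(flux density/Jy).
# In the paper, such pairs come from NED photometry of each source.
log_nu = np.array([[9.0], [9.7], [10.5], [11.0], [12.0]], dtype=np.float32)
log_flux = np.array([[0.1], [-0.2], [-0.5], [-0.7], [-1.1]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="sigmoid", input_shape=(1,)),
    tf.keras.layers.Dense(50, activation="sigmoid"),
    tf.keras.layers.Dense(1),  # predicted log-flux at the input frequency
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss="mse")
model.fit(log_nu, log_flux, epochs=10_000, verbose=0)

# Interpolate the spectrum at the "main frequency" of 240 GHz.
print(model.predict(np.log10(np.array([[240e9]], dtype=np.float32))))
```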
The network is trained by what is called "supervised learning", i.e. it is provided with many examples containing "known input - known output" pairs. At the beginning of training, the neuron weights (characterising their inputs) are initialized randomly. By comparing the obtained final result with the known "output", it is possible to calculate the error and then, using backward propagation of errors (the gradient descent method), to recalculate the weights of all neurons, after which a new final result is calculated. The number of such cycles, called "epochs", was 10000, and the learning rate (a measure of the adjustment of the weights in each epoch) was 0.01. The activation function, which calculates the output signal of a neuron depending on the sum of its input signals, was taken to be the so-called sigmoid, \(f(x)=\frac{1}{1+e^{-x}}\). Footnote 4: In our case, the network contained two hidden layers with 50 neurons each. The simulation results for all 24 sources from Table 1 are shown in Figures 1-3. Figure 1: Modeled spectra of the sources from Table 1. Part 1. Figure 2: Modeled spectra of the sources from Table 1. Part 2. Figure 3: Modeled spectra of the sources from Table 1. Part 3. ## 3 SMBBH space-VLBI Since the angular sizes of SMBHs are very small, their space-ground VLBI observations in the mm and submm ranges are of particular importance; see, for example, [28]. In the same paper, a source selection criterion for observations with the MSO was formulated. This criterion was adapted for the purposes of this study, with 240 GHz being used as the "main frequency" (see [28]). According to the selection criterion, the modeled flux values obtained with the ANN are substituted into eq. (8) from [28]: \[F_{\rm av}=\sqrt{\frac{1}{2\pi}\int C(u,v)C^{*}(u,v){\rm d}\psi}=\frac{F_{ANN}}{\pi\rho R(1-r^{2})}\sqrt{J_{1}^{2}(x_{1})+J_{1}^{2}(x_{2})r^{2}-2J_{1}(x_{1})J_{1}(x_{2})J_{0}(x_{3})r},\] where \(F_{\rm av}\) is the visibility function averaged over the azimuthal angle in the \(u\)-\(v\) plane, \(\psi\equiv\arcsin(v/\rho)\), the value of \(\rho\) is calculated from the possible values of the minimum and maximum baseline projections for the selected MSO orbit and the coordinates of the selected sources, \(C(u,v)\) is the two-dimensional Fourier image of the source model, \(J_{\nu}(x)\) is the Bessel function, \(F_{ANN}\) is the modeled flux value, and \(R\) and \(r\) are the parameters of the source model (a more detailed consideration of the source model can be found in [28]). Next, we checked whether there is a range of baseline projections in which the value of the averaged flux is higher than the detection limit, which is 6.45 mJy at 240 GHz for the MSO. The dependence of the averaged flux for the selected sources on the value of the baseline projection, expressed in Earth diameters, is shown in Figs. 4-6. In all figures, the horizontal dashed line corresponds to the sensitivity level of the telescope. Figure 4: Averaged visibility function for acceptable values of the baseline projection for the sources from Table 1. Part 1. Figure 5: Averaged visibility function for acceptable values of the baseline projection for the sources from Table 1. Part 2. Figure 6: Averaged visibility function for acceptable values of the baseline projection for the sources from Table 1. Part 3. Table 2 summarizes the results of the analysis. It includes the sources from Table 1 for which there is an interval of baseline projections at which the amplitude of the visibility function is above the threshold value. 
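As a sketch of how this criterion can be checked numerically, the averaged visibility can be evaluated with SciPy's Bessel functions; since the arguments \(x_{1},x_{2},x_{3}\) are defined in [28] in terms of the baseline projection and the source-model parameters, they are treated here as precomputed inputs, and all numerical values below are hypothetical:

```python
import numpy as np
from scipy.special import j0, j1

DETECTION_LIMIT_JY = 6.45e-3  # MSO sensitivity at 240 GHz (see text)

def averaged_visibility(f_ann, rho, R, r, x1, x2, x3):
    """Azimuthally averaged visibility amplitude, eq. (8) of [28] as quoted."""
    bracket = j1(x1)**2 + (j1(x2)**2) * r**2 - 2.0 * j1(x1) * j1(x2) * j0(x3) * r
    return f_ann / (np.pi * rho * R * (1.0 - r**2)) * np.sqrt(bracket)

# Hypothetical inputs: a 0.1 Jy source and made-up model/baseline parameters.
f_av = averaged_visibility(f_ann=0.1, rho=2.0, R=0.05, r=0.3,
                           x1=1.2, x2=0.8, x3=1.0)
print(f_av, f_av > DETECTION_LIMIT_JY)
```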
Table 2 presents the name of the source, its right ascension, \(\alpha\), and declination, \(\delta\), at epoch J2000, the modeled flux, \(F_{ANN}\), expressed in Jy, the angular size of the shadow of the more massive SMBH, \(\theta\) (defined as in [26]), measured in arc microseconds, \(\mu s\), and the angular distance between the components of the binary system, \(d\), also expressed in \(\mu s\). ## 4 Results We have analysed the available list of candidates for binary SMBHs (SMBBHs). The list was compiled on the basis of available data on variability in the optical range or the type of emission spectrum. In order to estimate the radiation flux at 240 GHz, we constructed an artificial neural network. For those SMBBH candidates for which this procedure proved feasible, the criterion of observability of the source with the Millimetron Space Observatory (MSO) was checked. The result of the study is presented in Table 2. It represents a list of 17 candidates whose binarity can be confirmed (or disproved) by observations with a space-ground interferometer having an orbit and a sensitivity at 240 GHz similar to those of the planned MSO. The table of SMBBH candidates may be extended after additional observations or studies, as the main reason why the list is much shorter than the initial number of SMBBH candidates is the lack of observational data around the frequency of 240 GHz (at which the modeling was carried out), mainly at lower frequencies. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Source name & \(\alpha\), & \(\delta\), & \(F_{ANN}\), & \(\theta\), & \(d\), \\ & h m s & \({}^{\circ}\,{}^{\prime}\,{}^{\prime\prime}\) & Jy & \(\mu\)s & \(\mu\)s \\ \hline CSO 0402+379 & 04 05 09.3 & +38 03 32.2 & 0.043 & 0.068 & 6580 \\ FBQS J081740.1+232731 & 08 17 40.2 & +23 27 32.0 & 0.027 & 0.220 & 1.375 \\ BZQ J0842+4525 & 08 42 15.3 & +45 25 45.0 & 0.050 & 0.173 & 1.385 \\ OJ 287 & 08 54 48.9 & +20 06 30.6 & 2.72 & 1.935 & 11.96 \\ MCG +11-11-032 & 08 55 12.5 & +64 23 45.6 & 0.23 & 0.067 & 4.82 \\ SBS 0924+606B & 09 28 37.98 & +60 25 21.0 & 0.019 & 0.087 & 9.69 \\ SDSS J102349.38+522151.2 & 10 23 49.5 & +52 21 51.8 & 0.027 & 0.237 & 1.717 \\ SDSS J124044.49+231045.8 & 12 40 44.5 & +23 10 46.1 & 0.014 & 0.058 & 1.072 \\ BZQ J1305-1033 & 13 05 33.0 & -10 33 19.1 & 0.37 & 0.035 & 1.794 \\ SDSS J132103.41+123748.2 & 13 21 03.4 & +12 37 48.1 & 0.016 & 0.055 & 1.0935 \\ SDSS J133654.44+171040.3 & 13 36 54.4 & +17 10 40.8 & 0.032 & 0.101 & 0.935 \\ 3C 298.0 & 14 19 08.2 & +06 28 35.1 & 0.022 & 0.213 & 1.498 \\ TEX 1428+370 & 14 30 40.6 & +36 49 03.9 & 0.14 & 0.025 & 0.319 \\ SDSS J150243.09+111557.3 & 15 02 43.1 & +11 15 57.3 & 0.014 & 0.01 & 25500 \\ FBQS J150911.2+215508 & 15 09 11.2 & +21 55 08.8 & 0.0094 & 0.029 & 0.411 \\ PG 1553+113 & 15 55 43.0 & +11 11 24.4 & 0.06 & 0.01 & 0.73 \\ PKS 2203-215 & 22 06 41.4 & -21 19 40.5 & 0.17 & 0.059 & 0.602 \\ \hline \end{tabular} \end{table} Table 2: SMBBH candidates meeting the selection criterion ## 5 Discussion Since SMBBHs are an important but insufficiently studied stage in the evolution of SMBHs, their search and observations are very important. At the same time, most of the existing methods for their search (see [19]), aimed at identifying candidates for SMBBHs, require confirmation, i.e., additional studies and direct space-VLBI observations. As mentioned in the Introduction, the probability that an active galactic nucleus hosts a gravitationally bound pair of black holes is \(\sim 10^{-3}\). 
The analysis of the Catalina survey made it possible to select just over a hundred candidates for binary SMBHs out of a quarter of a million active galactic nuclei, which is \(\sim 4\times 10^{-4}\) and does not contradict the theoretical estimate. The growing interest in SMBBHs is also related to the planned space gravitational-wave observatories [40], the development of which is currently being widely discussed. However, results in this area can only be expected in the distant future. Thus, it is in radio interferometric observations that the first SMBBHs may be discovered. This is because SMBBH candidates are selected on the basis of properties that may be caused not only by the binarity of the SMBH but also by other physical reasons. In this context, it is particularly important to construct as complete as possible a catalog for interferometric observations in which the binary nature can be unambiguously established. ## 6 Acknowledgement The authors are grateful to P.B. Ivanov for constructive suggestions and to A.G. Rudnitsky and M.A. Shchurov for help in performing the calculations.
``` A list of candidate supermassive binary black holes (SMBBHs), compiled from available data on optical variability and the shape of the emission spectrum, is analysed. An artificial neural network is constructed to estimate the radiation flux at 240 GHz. For those candidate SMBBHs for which the network-building procedure was feasible, the possibility of observing the source with the Millimetron Space Observatory (MSO) was assessed. The result is presented as a table of 17 candidate SMBBHs. Confirming (or refuting) the binarity of these objects with observations on a space-ground interferometer with parameters similar to those of the MSO will be an important milestone in the development of the theory of galaxy formation. ```
2309.09498
Combating Advanced Persistent Threats: Challenges and Solutions
The rise of advanced persistent threats (APTs) marks a significant cybersecurity challenge, characterized by sophisticated orchestration, stealthy execution, extended persistence, and the targeting of valuable assets across diverse sectors. Provenance graph-based kernel-level auditing has emerged as a promising approach to enhance visibility and traceability within intricate network environments. However, it still faces challenges, including reconstructing complex lateral attack chains, detecting dynamic evasion behaviors, and defending against smart adversarial subgraphs. To bridge the research gap, this paper proposes an efficient and robust APT defense scheme leveraging provenance graphs, including a network-level distributed audit model for cost-effective lateral attack reconstruction, a trust-oriented APT evasion behavior detection strategy, and a hidden Markov model based adversarial subgraph defense approach. Through prototype implementation and extensive experiments, we validate the effectiveness of our system. Lastly, crucial open research directions are outlined in this emerging field.
Yuntao Wang, Han Liu, Zhendong Li, Zhou Su, Jiliang Li
2023-09-18T05:46:11
http://arxiv.org/abs/2309.09498v2
# Combating Advanced Persistent Threats: Challenges and Solutions ###### Abstract The rise of advanced persistent threats (APTs) marks a significant cybersecurity challenge, characterized by sophisticated orchestration, stealthy execution, extended persistence, and the targeting of valuable assets across diverse sectors. Provenance graph-based kernel-level auditing has emerged as a promising approach to enhance visibility and traceability within intricate network environments. However, it still faces challenges, including reconstructing complex lateral attack chains, detecting dynamic evasion behaviors, and defending against smart adversarial subgraphs. To bridge the research gap, this paper proposes an efficient and robust APT defense scheme leveraging provenance graphs, including a network-level distributed audit model for cost-effective lateral attack reconstruction, a trust-oriented APT evasion behavior detection strategy, and a hidden Markov model based adversarial subgraph defense approach. Through prototype implementation and extensive experiments, we validate the effectiveness of our system. Lastly, crucial open research directions are outlined in this emerging field. Provenance graph, advanced persistent threat (APT), lateral movement, APT evasion, adversarial subgraph. ## I Introduction Advanced persistent threats (APTs) [1] have emerged as a significant cybersecurity threat characterized by highly organized and well-funded attackers, stealthy and evasive execution, long-term persistence, and precise targeting of high-value assets. APT attacks can have devastating consequences across various sectors, including government, critical infrastructures, corporations, and individuals. The objectives of APT attacks often encompass espionage, theft of sensitive information and intellectual property, financial gain, and disruption of critical information infrastructures. Based on statistics from 360 Security1, APTs (e.g., Stuxnet, Gauss, Flame, and Duqu) constituted nearly 60% of cyberattacks targeting governments, transnational corporations, and critical infrastructures over the last two years. A typical APT attack lifecycle comprises the following steps [2]. Footnote 1: [https://sc.360.net/](https://sc.360.net/) * _Initial Compromise:_ APT attackers establish their foothold through tactics such as spear-phishing emails, social engineering, watering hole attacks, or exploiting software vulnerabilities. This initial compromise serves as a starting point for the attacker to infiltrate the target network. * _Lateral Movement:_ APTs are typically orchestrated by a team of sophisticated hackers working in a coordinated fashion. Once inside the network, APT attackers can employ diverse techniques to move laterally across systems. This involves escalating privileges, exploiting weak credentials, and leveraging known vulnerabilities to gain access to vital assets. * _Persistence:_ APT attackers ensure their continued access by implementing persistence mechanisms, such as backdoors, Trojans, or remote access tools. These mechanisms enable them to maintain control and re-enter compromised systems even after being detected. * _Data Exfiltration:_ APT attackers meticulously identify and exfiltrate sensitive data over an extended period. This step necessitates a deep understanding of the victim's data landscape and careful evasion of security measures. 
To combat the complex and evolving nature of APT attacks, provenance graph-based kernel-level auditing [1, 2] offers a promising approach by enhancing visibility, traceability, and detection capabilities within intricate and dynamic network environments. It involves real-time capturing and analysis of intricate system interactions, encompassing network communications, process interactions, and file operations. By constructing causal relationship graphs of these entities, the provenance graph provides an all-encompassing depiction of system behavior, yielding the following advantages [2]: * _Traceability:_ The provenance graph facilitates the tracing of actions and interactions within a system, streamlining the identification of suspicious or malicious behaviors. * _Real-time Visibility:_ Through the real-time capture of low-level system activities, the provenance graph delivers a dynamic comprehension of ongoing processes and potential threats. * _Covert Behavior Detection:_ The provenance graph aids in the revelation of concealed APT activities that may elude traditional detection mechanisms. * _Attack Reconstruction:_ Leveraging the provenance graph, security analysts can reconstruct the sequence of point-of-interest (PoI) events leading to an attack, thus assisting in post-incident analysis and response. However, provenance graph-based kernel-level APT audit technology encounters the following new challenges. * _Reconstruction of Lateral Attack Chains:_ Adversaries can breach system boundaries through highly covert attacks, such as leveraging zero-day vulnerabilities or backdoors. They exploit lateral movements and domain controller hijacking in the target intranet to establish specific hop chains, triggering security alerts such as data exfiltration, password cracking, and shellcode payloads. As such, it is challenging for traditional host-based provenance intrusion detection systems (Prov-HIDS) to fully reconstruct APT attack patterns [2]. Additionally, the host-level provenance graph usually contains millions of data entities [3], leading to dependency explosion problems during provenance graph audits, thereby impacting the availability of APT provenance services. * _Identification of APT Evasion Behaviors:_ Recent studies [3, 4] highlight that real APT attacks often utilize strategic tactics, such as integrating numerous unrelated inter-process communication (IPC) sequences into attack primitives to evade provenance graph-based APT audits (referred to as APT evasion behaviors). Moreover, the varied functional deployment of network devices (e.g., switches, DNS servers, and domain controllers) complicates achieving compatible network-level provenance graph analysis, further amplifying the intricacy of identifying APT evasion behaviors. * _Adversarial Subgraph Detection:_ Prov-HIDS systems generally rely on subgraph matching [5, 6] and cyber threat intelligence (CTI) to simulate APT behaviors for matching and auditing. Nevertheless, provenance graphs are vulnerable to adversarial attacks. For example, adversaries can craft adversarial sub-provenance graphs [7] that avoid disrupting attack primitives, thereby evading detection through matching. Consequently, the effectiveness of provenance graph auditing diminishes. Hence, it is urgent to design a robust and efficient APT detection scheme based on provenance graphs, with the ability to reconstruct APT lateral movements, detect APT evasion behaviors, and uncover adversarial subgraphs. 
In an effort to address the above challenges, this paper proposes a novel provenance graph based APT defense approach with low complexity and high robustness. Specifically, we present a general architecture of network-layer provenance graph-based APT auditing. Then, under this architecture, we devise three components: (i) a network-level distributed provenance graph audit model for cost-effective lateral attack chain reconstruction, (ii) a trust-oriented dynamic APT evasion behavior detection strategy for improved availability of APT defense services, and (iii) a hidden Markov model (HMM)-based adversarial subgraph detection strategy for enhanced robustness of APT defense services. Finally, we implement a real prototype and carry out extensive experiments to validate the feasibility and effectiveness of our proposed system. The remainder of this paper is organized as follows. Section II shows the working principle and key challenges of provenance graph-based APT auditing. Section III presents the proposed solutions under the provenance graph-based APT audit architecture. Section IV demonstrates the prototype implementation and experimental evaluation. Section V outlines future research directions, and Section VI concludes this work. ## II Working Principle and Challenges of Provenance Graph-Based APT Audit ### _Overview of Provenance Graph-Based APT Audit_ _Provenance Graph._ As shown in Fig. 1, a provenance graph \(G=\{N,E\}\) is a directed graph enriched with chronological information, serving to capture and depict the interactions and causal relationships among diverse system entities, including processes, files, and network connections. The graph \(G\) is constructed through the collection of system logs from sources such as Windows ETW and Linux Auditd using probes (e.g., CamFlow) run on the OS [4]. These logs provide the foundation for modeling large-scale system entities and their intricate interdependencies. The provenance graph then becomes a comprehensive representation of how these entities interact over time. * _Entity._ It refers to the subjects and objects of system operations. In provenance graph auditing, as depicted in Fig. 1, the system entities mainly consist of three types: _sockets_ (also called network connections, represented as parallelograms), _files_ (represented as rectangles), and _processes_ (represented as ellipses). * _Edge._ It refers to the causal dependency relationships between entities, which primarily include _read_, _write_, _execute_, and _connect_. For instance, in the provenance graph, an edge related to a file entity typically represents a read or write operation; in the case of a process entity, the edge usually indicates an execute operation; while for a socket entity, its edge typically represents a connect operation. Fig. 1: Overview of Provenance Graph-Based APT Audit Approach. 
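As a concrete (and deliberately tiny) illustration of this data model, not the authors' implementation, such a graph can be sketched with `networkx`:

```python
import networkx as nx

# Directed multigraph: nodes are system entities, edges are causal events.
G = nx.MultiDiGraph()
G.add_node("bash", kind="process")
G.add_node("/etc/passwd", kind="file")
G.add_node("10.0.0.5:443", kind="socket")

# Edge labels follow the dependency types named above, with timestamps.
G.add_edge("bash", "/etc/passwd", op="read", t=1001)
G.add_edge("bash", "10.0.0.5:443", op="connect", t=1002)

# Backward trace from a point-of-interest (PoI) entity:
print(list(nx.ancestors(G, "10.0.0.5:443")))  # -> ['bash']
```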
### _State-of-the-Arts_ The pioneering work of SLEUTH [8] introduced the provenance graph approach for real-time APT attack scenario reconstruction by leveraging causal relationship tracking and provenance graph modeling. In addition, the attack process is reconstructed in [8] through the construction and annotation of a lower-level event dependency graph. Subsequently, NODOZE [9] devised novel algorithms for threat detection and heterogeneous graph construction, while Poirot [10] designed subgraph querying and matching algorithms, thereby addressing the alignment challenge between APT attack primitives and provenance graphs. HOLMES [11] innovatively merged the high-level scenario graph (HSG) with the ATT&CK attack framework, thus resolving semantic alignment issues and effectively mitigating noise problems stemming from irrelevant sequences. However, the efficiency shortcomings of the aforementioned approaches hindered the practical deployment of APT provenance graph auditing services. The state-of-the-art literature on APT defense enhancements mainly focuses on three perspectives: reducing latency, countering highly covert APT behaviors, and causal relationship analysis. In terms of latency reduction, StreamSpot [12] and UNICORN [5] introduced novel real-time runtime analysis frameworks for local hosts, which achieve attack detection without prior attack knowledge and demonstrate high accuracy with low false positive rates. Pertaining to defense against fileless attacks, ProvDetector [6] introduced provenance graphs into concealed malicious attack detection and presented novel path algorithms to identify potentially malicious portions within provenance graphs, in order to establish recognition profiles for anomalous processes in each program. Through causal relationship analysis and natural language processing (NLP) techniques, ATLAS [1] proposed a sequence-based model built on audit logs, facilitating end-to-end attack story generation. Additionally, DEPIMPACT [13] extended ATLAS by introducing attack dependency subgraph weights, exploiting the similarity and closeness of attack sequences to achieve provenance graph compression and efficient auditing. Nevertheless, the above advanced approaches primarily target host-level APT detection, failing to account for network-level (i.e., the entire network consisting of multiple hosts) provenance auditing, and thus lack collaborative defense strategies among hosts. Furthermore, current APT defense strategies are susceptible to intelligent attacks such as APT evasion and adversarial subgraphs, resulting in a significant decline in the effectiveness of provenance graph detection. Table I shows the comparison of our work with existing state-of-the-arts. ### _Challenges of Provenance Graph-Based APT Audit_ * _Low-Cost Lateral Attack Chain Reconstruction at the Network Level._ APT attacks are typically characterized by a high degree of stealth and prolonged persistence. A significant challenge is efficiently filtering relevant data from millions of provenance logs and establishing meaningful correlations to rapidly reconstruct APT attack chains. Current provenance graph audit schemes are confined to single-host operating systems, whereas real APT attacks exhibit a highly organized nature, often involving distributed and multi-point infiltrations. Relying solely on the auditing of a single host is inadequate to comprehensively reconstruct the complete attack event. Hence, it is imperative to devise a network-level collaborative provenance audit approach involving multiple hosts, while effectively compressing and aggregating the extensive and multi-source provenance graphs. This is beneficial to the cost-effective reconstruction of APT lateral attack chains within complex and dynamic scenarios. * _Dynamic Detection of APT Evasion Behaviors with Temporal Correlations._ Teams of APT attackers frequently employ various evasion strategies, such as interspersing numerous unrelated IPC sequences within attack primitives, to evade audit approaches based on provenance graphs. Consequently, the availability of APT detection services diminishes. 
However, existing provenance graph auditing approaches rarely account for APT evasion attacks, resulting in a demand for large-scale and fine-grained APT evasion behavior identification. Due to the massive and multi-source provenance graphs across diverse platforms, the temporal correlations of entity interactions within provenance graphs, and intelligent poisoning behaviors for targeted manipulation, it is challenging to design rapid stealthy evasion behavior detection mechanisms in such dynamic and uncertain environments. * _Highly Robust and Self-Adaptive Adversarial Subgraph Defense._ Existing APT defense strategies based on provenance graphs commonly rely on subgraph matching mechanisms, rendering them susceptible to adversarial attacks. Attackers can construct adversarial provenance subgraphs that evade matching detection without compromising the attack primitives, thereby eroding the reliability of APT detection services. Nevertheless, adversarial attacks are scarcely considered in current works. As a result, there is a rising need for countering adversarial attacks. Given the concealed nature of adversarial subgraphs, the diversity of adversarial attack patterns, and the real-time and dynamically transmissible requirements of defense strategies, the design of robust and self-adaptive defense mechanisms against adversarial subgraphs is challenging. ## III Solutions to Provenance Graph-Based APT Audit Aiming to address the challenges of lateral movement reconstruction, evasion behavior detection, and adversarial subgraph defense in current provenance graph based APT defense, this section presents cost-effective and robust provenance graph based APT defense approaches, including a network-level distributed provenance graph audit model (Sect. III-A), a trust-oriented dynamic APT evasion behavior detection strategy (Sect. III-B), and an HMM-based adversarial subgraph detection strategy (Sect. III-C). ### _Network-Layer Distributed Provenance Graph Audit_ In this subsection, we devise a distributed provenance graph audit model to efficiently reconstruct lateral attack chains from two perspectives: network-level global auditing and graph data compression. As shown in Fig. 2, it encompasses (i) a graph data compression module based on causality preserved aggregation (CPA) to address the issue of graph dependency explosion, (ii) a graph weight aggregation module based on linear discriminant analysis (LDA) to construct weighted provenance graphs, and (iii) a distributed APT lateral attack chain construction module using weighted provenance graphs. Fig. 2: An Illustration of Network-Layer Distributed Provenance Graph Audit for Lateral Attack Chain Reconstruction. #### III-A1 CPA-Based Graph Data Compression The CPA algorithm is utilized to effectively streamline the dependencies within the provenance graph, which involves an extensive volume of data entities (e.g., IPC and files). Specifically, for two interconnected entity flows (\(\to U\to V\rightarrow\)) with a dependency relationship, the following conditions are considered, as sketched in the code below. _(i) Forward ingress aggregation condition:_ When the occurrence times of all ingress event edges into entity \(U\) precede the event edge \(U\to V\), the timestamp of the last ingress edge is designated as the global ingress time. _(ii) Backward egress aggregation condition:_ When the occurrence times of all egress event edges from entity \(V\) follow the event edge \(U\to V\), the timestamp of the initial egress edge is designated as the global egress time. _(iii) Aggregation:_ For entity flows that meet both the forward and backward aggregation conditions, the two entities are equivalently aggregated into a single one. 
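A minimal sketch of checking these two timestamp conditions (the helper name and the use of `networkx` are ours; a production system would operate on streaming audit events):

```python
import networkx as nx

def cpa_mergeable(G: nx.MultiDiGraph, u, v, t_uv: int) -> bool:
    """True iff the entity flow u -> v meets both CPA conditions:
    (i) every ingress edge of u occurs before the event edge u -> v, and
    (ii) every egress edge of v occurs after it."""
    ingress_ok = all(d["t"] < t_uv for _, _, d in G.in_edges(u, data=True))
    egress_ok = all(d["t"] > t_uv for _, _, d in G.out_edges(v, data=True))
    return ingress_ok and egress_ok

# When both conditions hold, u and v can be aggregated into one node,
# e.g. with nx.contracted_nodes(G, u, v, self_loops=False).
```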
#### III-A2 LDA-Based Weighted Graph Aggregation The LDA model is leveraged to trace PoI alert events by constructing weighted sub-provenance graphs. Internally, three primary features, i.e., file size correlation, temporal relevance, and in-out degree ratio, are employed for characterizing the entities in the provenance graph. Subsequently, the edges of the provenance graph are clustered through a multi-round K-means++ algorithm. Next, the LDA model is employed to compute the projection vectors that maximize the Fisher criterion for alarm-related edges versus non-alarm-related edges within the two differentiated groups of edges. #### III-A3 Lateral Attack Chain Construction via Weighted Provenance Graphs Given the bidirectional interactivity in APT attack chains (i.e., the triggering of a PoI alert at the entry point evolves into a positive propagation toward linked sockets), the _file_ (for payload delivery) and the _socket_ (for network connections) are the two primary elements. Firstly, the positive weights of PoI events are initialized. Then, based on the magnitude of out-degrees, the weights are evenly distributed and progressively reduced with successive convergence. For the subsequent layer of new incoming events, the weight factors are computed based on the aforementioned three features. The weight factors serve as discriminative markers for lateral movement, aiding in the restoration of the corresponding APT lateral infiltration chain. ### _Trust-Oriented Dynamic APT Evasion Behavior Detection_ In this subsection, we devise a dynamic APT evasion behavior detection strategy, which encompasses (i) temporal correlation for attack-related substructure optimization in the provenance graph, and (ii) dynamic trust assessment for suppressing behavioral sequences from untrusted entities. #### III-B1 Optimized Attack-Related Substructures of the Provenance Graph Adversaries can launch _APT evasion attacks_ by extending the completion time of their attack infiltration primitives and introducing irrelevant operations to saturate the payload entity flow with benign entities. Thereby, they can evade traditional pattern-matching based provenance graph detection [5, 12]. To address this issue, a _forgetting factor_ for PoI alert events is introduced, which is associated with the penalty coefficient, the current time slot, and historical interactions. The penalty coefficient of an attacker represents the number of detected attack subgraphs within a specific time window (the length of which depends on the value of the forgetting factor). For an attacker, if the penalty coefficient surpasses a predefined threshold, the causal dependencies of his distributed attack primitives can be temporally correlated via the stack. This allows the construction of provenance entity links related to the original attack behaviors, resulting in an optimized attack-related substructure within the original provenance graph. Furthermore, it helps reduce the impact of benign entities intentionally introduced by adversaries during trust evaluation. 
#### III-B2 Dynamic APT Evasion Behavior Analysis Based on Trust Evaluation As shown in Fig. 3, a defender (i.e., the assessment subject) can obtain a sequence of optimized provenance graphs about an attacker (i.e., the assessment object) from the evidence repository. This sequence records the attacker's historical trustworthiness in chronological order, while each provenance graph in the sequence records the attacker's historical interactions with the victim host within a fixed time window. Through sequence extraction methods, the sequence can be divided into three parts: subsequences of continuously trustworthy operations, continuously untrustworthy operations, and continuously uncertain operations. Then, we design a trust mechanism to distinguish an APT evasion attacker from an innocent user who merely commits misoperations, by evaluating trustworthiness from both _direct_ and _indirect_ trust aspects. The direct trust is evaluated based on Dempster-Shafer theory, considering the time span of continuously trustworthy/untrustworthy/uncertain operations and time decay effects. It rewards users for continuously providing trustworthy interactions while penalizing users for malicious or uncertain behaviors. The indirect trust obtained from third-party recommendations can help enhance the accuracy of trust evaluation, especially when direct interactions are infrequent [15]. Afterward, the latest trust evaluation results are stored in the evidence database. Fig. 3: An Illustration of Trust-Oriented Dynamic APT Evasion Behavior Detection. ### _HMM-Based Adversarial Sub-provenance Graph Defense_ This subsection devises (i) a fast adversarial subgraph modeling method to explore adversaries' evasion principles during infiltration attacks, and (ii) an HMM-based self-evolving adversarial subgraph detection algorithm. #### III-C1 Fast Adversarial Subgraph Modeling It comprises three steps, sketched schematically in the code below. _Step 1: Test model construction based on subgraph matching._ We train a general test AI model for discriminating adversarial subgraphs by optimizing the loss function, which is defined as one minus the average number of successful attack subgraph matches over all subgraphs. _Step 2: Proof-of-concept (PoC) framework design for adversarial subgraphs._ Initially, we utilize the subgraph deconstruction method [10] to disassemble the subgraphs into individual substructures. These substructures are then summarized into an \(N\)-dimensional vector using an encoding function. Subsequently, we employ a cosine distance-based discriminant function to determine whether the subgraph is adversarial. This is achieved by comparing the cosine distance to a preset threshold. _Step 3: Adversarial subgraph construction._ This step aims to create adversarial subgraphs without disrupting the original attack primitives. Initially, we select benign substructures to replace parts of the original graph's structure, with the objective of minimizing the cosine distance. Then, we update the cosine distance by applying the distance discriminant function to the modified subgraph. The above processes are repeated until the test model incorrectly classifies the subgraph as normal, resulting in the generation of an adversarial subgraph. 
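A schematic sketch of Steps 2-3 under strong simplifying assumptions: subgraphs are represented by substructure-count vectors, the discriminant is a cosine distance to a benign reference profile, and benign substitutions are greedily accepted; all names and the vector encoding are ours:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def craft_adversarial(vec, benign_subs, benign_ref, threshold):
    """Greedily add benign substructure vectors, leaving the coordinates that
    encode the attack primitives untouched, until the cosine-distance
    discriminant classifies the subgraph as normal (Step 3)."""
    vec = vec.copy()
    for b in benign_subs:                 # candidate benign substitutions
        trial = vec + b                   # simplified stand-in for replacement
        if cosine_distance(trial, benign_ref) < cosine_distance(vec, benign_ref):
            vec = trial                   # keep substitutions that help
        if cosine_distance(vec, benign_ref) < threshold:
            break                         # the test model is now evaded
    return vec
```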
#### III-C2 Robust Adversarial Subgraph Detection Based on HMM As depicted in the lower left of Fig. 4, we first construct a general attack subgraph using the ATT&CK model2 and the DARPA transparent computing dataset3. Next, we count the attack directions (i.e., the potential entities to be linked next in the graph) within the APT to create a transfer matrix. Then, as shown in the lower right of Fig. 4, based on the adversarial equivalent graph obtained from our proposed fast modeling method, we count the adversarial transformation entities (i.e., the benign entities equivalent to the malicious entities) to derive the emission matrix. Finally, utilizing the obtained transfer matrix and emission matrix, we run the HMM Viterbi algorithm on the captured stream of provenance graphs to determine the most probable sequence of attack entities (i.e., those with the highest hit rate). When the hit rate surpasses a predefined threshold, the entity is identified as an adversarial subgraph. A sketch of the decoding step follows below. Fig. 4: An Illustration of HMM-Based Adversarial Subgraphs Defense. Footnote 2: [https://attack.mitre.org/](https://attack.mitre.org/) Footnote 3: [https://github.com/darpa-io2/Transparent-Computing](https://github.com/darpa-io2/Transparent-Computing) 
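A compact Viterbi decoder over an observed entity stream; the transfer and emission matrices below are toy stand-ins for the ATT&CK-derived and equivalence-derived counts described above:

```python
import numpy as np

# Rows/cols of A: hidden attack-entity states; cols of B: observed entity types.
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transfer (transition) matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission matrix
pi = np.array([0.5, 0.5])                 # initial state distribution

def viterbi(obs):
    """Most probable hidden attack-entity sequence for an observation stream."""
    n, T = A.shape[0], len(obs)
    delta = np.zeros((T, n)); psi = np.zeros((T, n), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A  # score of every previous state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):           # backtrack the best predecessors
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(delta[-1].max())

path, score = viterbi([0, 1, 1, 0])
print(path, score)  # flag as adversarial when the hit rate exceeds a threshold
```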
## IV Implementation and Evaluation ### _Experimental Setup_ We implement an APT penetration test prototype with 15 servers to simulate a real enterprise network. Six types of vulnerabilities are considered: _buffer overflow_, _domain controller hijacking_, _living-off-the-land (LoL)_, _data leakage_, _maintaining access_, and _middleware exploitation_. These vulnerabilities are distributed across the 15 servers, and each server is equipped with a lightweight provenance graph interface. Network-layer APT lateral movements, APT evasion attacks, and adversarial subgraph attacks are considered in our prototype. ### _Experimental Results_ Fig. 5(a) illustrates the number of compromised nodes as the number of pivot servers (used for lateral movements) increases. As depicted in Fig. 5(a), different APT attack modes yield varying outcomes. For instance, in mode 1, the adversary executes a hijacking attack during the 4th round of lateral movement, successfully taking control of both the domain controller and the domain users. In contrast, the adversary in mode 6 only succeeds in taking control of one server throughout the lateral movement attempts. Fig. 5(b) shows the evolution of the trust value of the assessment object as the number of interactions increases under the APT evasion attack. It can be observed that the proposed approach exhibits significant improvement compared to the traditional probabilistic trust model [15]. Fig. 5(c) shows the detection performance with and without adversarial subgraphs under various attack scenarios. It can be seen that adversarial subgraphs significantly deteriorate the defensive effectiveness of the conventional StreamSpot [12] and UNICORN [5] schemes. Furthermore, the proposed scheme effectively defends against adversarial attacks and outperforms the mimicry-StreamSpot and mimicry-UNICORN approaches, while maintaining a small performance gap compared to the conventional StreamSpot and UNICORN schemes without adversarial attacks. ## V Future Directions This section explores future directions that necessitate further research investigation in APT detection based on provenance graphs. ### _Fusing Provenance Graphs and Knowledge Graphs for APT Detection_ Combining provenance graphs and knowledge graphs in APT detection is imperative to address semantic gaps and enhance threat provenance. Provenance graphs capture fine-grained system interactions, while knowledge graphs provide semantic context. The synergy offers comprehensive insights for accurate attack detection and attribution, towards holistic and efficient APT detection. Effective fusion of heterogeneous data and knowledge representation remains the major challenge. ### _Tamper-Resistant Provenance Graph Storage_ The integrity of the kernel-level provenance graph can be compromised by unauthorized modifications, posing risks to the reliability of audit trails. Cryptographic methods, such as digital signatures and secure hashing, offer a potential remedy against tampering threats. Immutable ledger technologies, such as blockchain, further bolster resistance to tampering by dispersing storage and enforcing consensus-based verification. Nevertheless, obstacles persist, encompassing the efficient querying of encrypted data and the management of access control in distributed environments. Developing tamper-resistant mechanisms is vital to upholding the trustworthiness of provenance-based APT audits. ### _Collaborative and Privacy-Preserving Threat Intelligence Sharing_ APT threats often target multiple entities across sectors. Future research should focus on establishing collaborative frameworks for sharing threat intelligence derived from provenance graph-based auditing. However, organizations may be hesitant to share sensitive data due to confidentiality and privacy issues, raising the need for privacy-preserving threat intelligence aggregation and sharing without exposing sensitive information. Other issues that remain to be investigated include standardizing data formats and incentivizing collaboration. ### _Integration with Cloud and Edge Environments_ Leveraging the integration of cloud and edge computing enhances APT detection services by enabling dynamic data correlation and analysis. Edge devices collect and preprocess local data for minimized latency. Cloud servers offer scalability and computational power for in-depth analysis and storage. This synergy optimizes APT detection, allowing real-time alerts at the edge and comprehensive analysis in the cloud. Research challenges include data synchronization, privacy preservation, and adaptation to resource constraints. ## VI Conclusion APT attacks have far-reaching consequences across various sectors, including governments, critical infrastructures, and corporations, necessitating effective defense strategies. While existing provenance graph-based research sheds light on APT defense, the effectiveness of APT detection remains hindered by intricate lateral attack patterns, dynamic evasion strategies, and adaptive adversarial subgraphs. This study advocates a novel approach for enhanced efficiency and robustness in existing provenance graph-based APT audit schemes, by devising a network-level distributed provenance graph audit model, a dynamic evasion behavior detection strategy, and a robust adversarial subgraph detection strategy. Via prototype implementation and experimental evaluations, the potential of the proposed system to significantly enhance APT defense capabilities is validated. This work is anticipated to shed more light on the ongoing exploration of comprehensive solutions against evolving APT threats in today's digital landscape.
The rise of advanced persistent threats (APTs) marks a significant cybersecurity challenge, characterized by sophisticated orchestration, stealthy execution, long-term persistence, and the targeting of valuable assets across diverse sectors. Provenance graph-based kernel-level auditing has emerged as a promising approach for improving visibility and traceability within complex network environments. However, challenges remain in reconstructing complex lateral attack chains, detecting dynamic evasion behaviors, and defending against smart adversarial subgraphs. This paper proposes an efficient and robust provenance graph-based APT defense scheme, comprising a network-level distributed audit model, a trust-oriented APT evasion behavior detection strategy, and a hidden Markov model based adversarial subgraph defense approach. The effectiveness of the system is validated through a prototype implementation and extensive experiments.
2308.16849
The construction of a $E_7$-like quantum subgroup of $SU(3)$
In this short note we construct an embedding of the planar algebra for $\overline{\operatorname{Rep}(U_q(sl_3))}$ at $q = e^{2\pi i \frac{1}{24}}$ into the graph planar algebra of di Francesco and Zuber's candidate graph $\mathcal{E}_4^{12}$. Via the graph planar algebra embedding theorem we thus construct a rank 11 module category over $\overline{\operatorname{Rep}(U_q(sl_3))}$ whose graph for action by the vector representation is $\mathcal{E}_4^{12}$. This fills a small gap in the literature on the construction of $\overline{\operatorname{Rep}(U_q(sl_3))}$ module categories. As a consequence of our construction, we obtain the principal graphs of subfactors constructed abstractly by Evans and Pugh.
Cain Edie-Michell, Lance Marinelli
2023-08-31T16:30:20
http://arxiv.org/abs/2308.16849v2
# The construction of a \(E_{7}\)-like quantum subgroup of \(SU(3)\) ###### Abstract. In this short note we construct an embedding of the planar algebra for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) at \(q=e^{2\pi i\frac{1}{24}}\) into the graph planar algebra of di Francesco and Zuber's candidate graph \(\mathcal{E}_{4}^{12}\). Via the graph planar algebra embedding theorem we thus construct a rank 11 module category over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) whose graph for action by the vector representation is \(\mathcal{E}_{4}^{12}\). This fills a small gap in the literature on the construction of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) module categories. As a consequence of our construction, we obtain the principal graphs of subfactors constructed abstractly by Evans and Pugh. ## 1. Introduction To every module category over a modular tensor category (MTC), there is an associated _modular invariant_. This is a positive-integer-valued matrix commuting with the \(SL(2,\mathbb{Z})\) representation of the MTC. These modular invariants are a useful tool for studying module categories, and have played a key role in classification efforts. However, the modular invariant is not a complete invariant. There are many examples of modular invariants which do not come from module categories [1], and also of distinct module categories with the same modular invariant [1, Sections 11 and 12]. A modular invariant is referred to as _physical_ if it is realised by a module category. Even in the situation where a modular invariant is known to be physical, it can be difficult to determine the structure of the corresponding module categories. A large class of MTCs come from the (semisimplified) representation theory of quantum groups at roots of unity [1, Chapter 7]. These categories are typically denoted \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{g}))}\). In the special case of the Lie algebra \(\mathfrak{sl}_{3}\), the modular invariants were classified by Gannon [1]. In the work of Evans and Pugh [1], all of the \(SU(3)\) modular invariants were shown to be physical. For all bar one modular invariant, their proof was via explicit construction of the corresponding module categories (using Ocneanu cell systems). The remaining modular invariant was shown to be physical via a relative tensor product construction. As the relative tensor product of module categories is a difficult construction to work with in practice, the explicit structure of the corresponding module category has not been confirmed. It should also be noted that in [1, Section 5.4] some structure of this module category is deduced based on an assumption on its corresponding algebra object. Further, in [2], an explicit construction of this module category is claimed without detail. The modular invariant in question can be found in [1], labelled as \(\left(\mathcal{E}_{9}^{(2)}\right)^{c}\). There has been some work on deducing the module fusion graph (the graph representing the action of \(\Lambda_{1}\) on the module) for the module category corresponding to this modular invariant. 
In [11] Di Francesco and Zuber suggest the following graph (with some physical supporting evidence): [Figure: the candidate graph \(\mathcal{E}_{4}^{12}\), omitted in this version.] As it will be useful throughout this paper, the Frobenius-Perron eigenvector for this graph is \[\lambda=\left\{\frac{[5]_{q}}{[3]_{q}},\frac{[5]_{q}}{[3]_{q}},\frac{[2]_{q}[4]_{q}}{[3]_{q}},\frac{[2]_{q}[4]_{q}}{[3]_{q}},[3]_{q},[5]_{q},[3]_{q},1,[5]_{q},[3]_{q},1\right\}.\] In this paper, we fix a small gap in the literature by explicitly constructing a module category with module fusion graph \(\mathcal{E}_{4}^{12}\). Our technique for constructing this module category is to use the graph planar algebra embedding theorem [1, Theorem 1.3]. The use of this technique has typically been referred to as _cell systems_ in the context of quantum groups [1, 1, 1]. More precisely, we find the following element of \(oGPA(\mathcal{E}_{4}^{12})\). We direct the reader to Subsection 2.2 for the definition of \(oGPA(\mathcal{E}_{4}^{12})\). **Definition 1.1**.: Let \(q=\zeta_{24}\), and \(z\) the root of the polynomial \(9x^{16}-14x^{8}+9\) with numerical value closest to \(-0.996393+0.0848571i\). We define \(W\in\text{Hom}_{oGPA(\mathcal{E}_{4}^{12})}(-\to++)\) as the functional defined on basis elements by \[W_{1,6,9} =\begin{cases}\sqrt{[2]_{q}}&6\xrightarrow{\alpha}9\\ 0&6\xrightarrow{\beta}9\end{cases} W_{2,6,9} =\begin{cases}z^{-1}\sqrt{\frac{1}{[2]_{q}}}&6\xrightarrow{\alpha}9\\ \zeta_{24}^{19}\sqrt{\frac{[3]_{q}}{[2]_{q}}}&6\xrightarrow{\beta}9\end{cases} W_{3,6,9} =\begin{cases}z\sqrt{\frac{1}{[2]_{q}}}&6\xrightarrow{\alpha}9\\ \zeta_{3}z\sqrt{\frac{[3]_{q}}{[4]_{q}([2]_{q}+[3]_{q})}}&6\xrightarrow{\beta}9\end{cases}\] \[W_{4,6,9} =\begin{cases}\sqrt{\frac{1}{[2]_{q}}}&6\xrightarrow{\alpha}9\\ \zeta_{8}^{5}\sqrt{\frac{[3]_{q}([2]_{q}+[3]_{q})}{[4]_{q}[5]_{q}}}&6\xrightarrow{\beta}9\end{cases} W_{5,6,9} =\begin{cases}\mathbf{i}z^{-1}\sqrt{\frac{1}{[2]_{q}}}&6\xrightarrow{\alpha}9\\ \zeta_{48}^{11}z\sqrt{\frac{[4]_{q}}{[5]_{q}}}&6\xrightarrow{\beta}9\end{cases}\] \[W_{3,6,7} =z\sqrt{\frac{[2]_{q}}{[4]_{q}}} \quad W_{3,10,7} =z\sqrt{\frac{[2]_{q}^{2}}{[4]_{q}([2]_{q}+[3]_{q})}} W_{3,10,9} =z\sqrt{\frac{[3]_{q}(1+[2]_{q})}{[2]_{q}[4]_{q}}}\] \[W_{4,6,7} =\zeta_{8}^{5}\sqrt{\frac{[2]_{q}[3]_{q}}{[4]_{q}(1+[2]_{q})}} W_{4,10,7} =z\sqrt{\frac{[2]_{q}+[3]_{q}}{[4]_{q}}} W_{4,10,9} =z\sqrt{\frac{[3]_{q}^{2}}{[2]_{q}[4]_{q}(1+[2]_{q})}}\] \[W_{5,6,7} =\zeta_{8}\sqrt{\frac{[2]_{q}}{[3]_{q}}} W_{5,8,7} =z\sqrt{\frac{[2]_{q}}{[3]_{q}}} W_{5,10,7} =z\sqrt{\frac{1}{[2]_{q}}}\] \[W_{5,10,9} =z\sqrt{\frac{1}{[2]_{q}}} W_{5,10,11} =z\sqrt{[2]_{q}}\] with the remaining values on basis elements defined by the rotational formula \(W_{a,b,c}=\sqrt{\frac{\lambda_{b}}{\lambda_{c}}}W_{b,c,a}\). Here we use the notation that \(\zeta_{\ell}:=e^{2\pi i\frac{1}{\ell}}\). Our main result shows that this distinguished element satisfies the relations required to give an embedding of the planar algebra of \(\overline{\text{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) associated to the object \(\Lambda_{1}\). **Theorem 1.2**.: _The map sending the trivalent vertex generator to_ \[W\in\text{Hom}_{oGPA(\mathcal{E}_{4}^{12})}(-\to++)\] _defines a tensor functor_ \[\mathcal{P}_{\overline{\text{Rep}(U_{q}(\mathfrak{sl}_{3}))};\Lambda_{1}}\to oGPA(\mathcal{E}_{4}^{12}).\] The graph planar algebra embedding theorem [1, Theorem 1.3] (along with [1, Theorem 1.1] for the slight technical alteration needed for our set-up) then gives the construction of the module category. 
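The numerical inputs of Definition 1.1 are straightforward to reproduce; the following short check (our code, not part of the paper) locates the stated root \(z\) and evaluates the quantum integers \([n]_{q}=\frac{q^{n}-q^{-n}}{q-q^{-1}}\) at \(q=\zeta_{24}\):

```python
import numpy as np

# Roots of 9x^16 - 14x^8 + 9 (coefficients listed in decreasing degree).
coeffs = [9] + [0] * 7 + [-14] + [0] * 7 + [9]
roots = np.roots(coeffs)                       # all 16 roots lie on |x| = 1
target = -0.996393 + 0.0848571j
z = roots[np.argmin(np.abs(roots - target))]   # the root closest to the target
print(z)

q = np.exp(2j * np.pi / 24)
def qint(n):
    """Quantum integer [n]_q; real and positive at this root of unity."""
    return ((q**n - q**-n) / (q - q**-1)).real
print([qint(n) for n in (2, 3, 4, 5)])  # [2]_q ~ 1.932, ..., [5]_q ~ 3.732
```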
**Corollary 1.3**.: _There exists a module category \(\mathcal{M}\) over \(\overline{\text{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) such that the action graph for \(\Lambda_{1}\) is \(\mathcal{E}_{4}^{12}\)._ As shown in [1], we obtain several subfactors of the hyperfinite \(\mathrm{II}_{1}\) factor \(\mathcal{R}\) as a consequence of Corollary 1.3. The subfactor with smallest index (\(=24\left(2+\sqrt{3}\right)\)) has principal graph obtained from the graph \(\mathcal{E}_{4}^{12}\) via the equations of [1, Section 7] (graph omitted here). Our strategy for obtaining the embedding in Definition 1.1 is low-brow, but effective. We begin by numerically approximating a solution for the embedding of the element \([2]_{q}\cdot p_{\Lambda_{2}}\in\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) into \(oGPA(\mathcal{E}_{4}^{12})\). As the element \([2]_{q}\cdot p_{\Lambda_{2}}\) satisfies the Hecke algebra relations, the equations governing its embedding into \(oGPA(\mathcal{E}_{4}^{12})\) are polynomial (of max degree \(3\)), and are amenable to numerical approximation. From this numerical approximation we can then guess exact values for most of the coefficients of the embedding. With many of the coefficients exactly determined, many of the polynomial equations governing the embedding are now linear, and can be solved exactly. This gives us a candidate for the embedding of the element \([2]_{q}\cdot p_{\Lambda_{2}}\). Using the techniques developed in [1], we can then determine the embedding of the trivalent vertex. ## 2. Preliminaries ### Planar algebras For a pivotal tensor category \(\mathcal{C}\) and an object \(X\in\mathcal{C}\), we write \(\mathcal{P}_{\mathcal{C},X}\) for the full subcategory of \(\mathcal{C}\) whose objects are the tensor products \(X^{s_{1}}\otimes\cdots\otimes X^{s_{n}}\) for strings \(s\in\{+,-\}^{n}\), where we understand \(X^{+}=X\) and \(X^{-}=X^{*}\). If the object \(X\) Cauchy tensor generates \(\mathcal{C}\) (in the sense of [1]), then \(\mathcal{P}_{\mathcal{C},X}\) contains a projection onto every simple object of \(\mathcal{C}\). Hence the Cauchy completion of \(\mathcal{P}_{\mathcal{C},X}\) is monoidally equivalent to \(\mathcal{C}\). In this sense, the subcategory \(\mathcal{P}_{\mathcal{C},X}\) remembers all the information of the original category \(\mathcal{C}\), while being significantly simpler. An important example of a planar algebra is the Kazhdan-Wenzl presentation for \(\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))\). Let \(X=\Lambda_{1}\) be the vector representation. The planar algebra \(\mathcal{P}_{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N})),\Lambda_{1}}\) is then described in [11] via generators and relations. The generators of this planar algebra are certain vertex morphisms (diagrams omitted here). The planar algebra is then constructed as the free planar algebra built from the generating morphisms (allowing duality morphisms, along with tensor products, compositions, and sums of these morphisms), modulo the generating relations. We have relations between the generators which are sufficient when \(q=e^{2\pi i\frac{1}{N+k}}\) for some \(k\in\mathbb{N}\) by [1] (diagrammatic relations omitted here). Note that for this paper, we specialise to the case where \(N=3\) and \(q=e^{2\pi i\frac{1}{24}}\). ### The graph planar algebra A key planar algebra used in this paper is the graph planar algebra constructed from a graph \(\Gamma\) and a Frobenius-Perron eigenvector \(\lambda\) for \(\Gamma\). The construction of the graph planar algebra is due to Jones [15]. The graph planar algebra can be defined tersely as follows. Let \(\mathcal{M}\) be a semisimple category, i.e. \(\left(\operatorname{Vec}_{\mathbb{C}}^{f.d.}\right)^{\oplus m}\) for some \(m\in\mathbb{N}\), and \(\Gamma\) an endofunctor of \(\mathcal{M}\), which can be fully described by a graph with \(m\) vertices.
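To unpack the last sentence with a minimal example of our own (not from the original): if \(A\in M_{m}(\mathbb{Z}_{\geq 0})\) is the adjacency matrix of the graph, with \(A_{ij}\) counting the edges from the \(i\)-th vertex to the \(j\)-th, then, up to a choice of orientation convention, the corresponding endofunctor acts on an object \((V_{1},\ldots,V_{m})\) of \(\mathcal{M}\) by \[\Gamma(V_{1},\ldots,V_{m})=\Bigl(\bigoplus_{j=1}^{m}V_{j}^{\oplus A_{1j}},\;\ldots,\;\bigoplus_{j=1}^{m}V_{j}^{\oplus A_{mj}}\Bigr),\] so specifying \(\Gamma\) up to natural isomorphism is the same data as the graph.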
We define \[oGPA(\Gamma):=\mathcal{P}_{\operatorname{End}(\mathcal{M}),\Gamma}\] where a Frobenius-Perron eigenvector \(\lambda\) of \(\Gamma\) is used to define the rigidity maps. It was shown in [1] (with an adaptation made in [1] to allow for non-self-dual objects) that there is a much more explicit way of defining \(oGPA(\Gamma)\). Namely let \(s\) and \(t\) be two strings in \(\{+,-\}\). We then have that \[\operatorname{Hom}_{oGPA(\Gamma)}(s\to t)\cong\operatorname{span}_{\mathbb{C}}\{(p,q):p\text{ is an $s$ path},q\text{ is a $t$ path},s(p)=s(q),t(p)=t(q)\}\] with operations \[(p^{\prime},q^{\prime})\circ(p,q)=\delta_{q^{\prime},p}(p^{\prime},q)\] \[(p,q)\otimes(p^{\prime},q^{\prime})=\delta_{t(p),s(p^{\prime})}\delta_{t(q),s(q^{\prime})}(pp^{\prime},qq^{\prime})\] extended linearly. We then have the distinguished rigidity maps given by \[\operatorname{ev}_{(+,-)}:=\sum_{(e,\,\overline{e})\text{ a }(+,-)\text{-path}}\sqrt{\frac{\lambda_{t(e)}}{\lambda_{s(e)}}}((e,\overline{e}),s(e)):(+,-)\to 1\] \[\operatorname{coev}_{(-,+)}:=\sum_{(\overline{e},\,e)\text{ a }(-,+)\text{-path}}\sqrt{\frac{\lambda_{s(e)}}{\lambda_{t(e)}}}(t(e),(\overline{e},e)):1\to(-,+)\] \[\operatorname{ev}_{(-,+)}:=\sum_{(\overline{e},\,e)\text{ a }(-,+)\text{-path}}\sqrt{\frac{\lambda_{s(e)}}{\lambda_{t(e)}}}((\overline{e},e),t(e)):(-,+)\to 1\] \[\operatorname{coev}_{(+,-)}:=\sum_{(e,\,\overline{e})\text{ a }(+,-)\text{-path}}\sqrt{\frac{\lambda_{t(e)}}{\lambda_{s(e)}}}(s(e),(e,\overline{e})):1\to(+,-)\] These operations give \(oGPA(\Gamma)\) the structure of a pivotal multi-tensor category. This category also has a \(\dagger\) structure given by the anti-linear extension of \[(p,q)^{\dagger}=(q,p).\] With this dagger structure, \(oGPA(\Gamma)\) is unitary. We refer the reader to [20, Section 2.2] and [1, Section 2.2] for more details on the category \(oGPA(\Gamma)\). The graph planar algebra is useful for this paper due to the graph planar algebra embedding theorem [1, Theorem 1.3]. This result shows that module categories over a tensor category are classified by embeddings of the associated planar algebra into graph planar algebras. This allows us to obtain Corollary 1.3 from Theorem 1.2. ## 3. Finding the solution Our first goal is to find an embedding of the element \([2]_{q}\cdot p_{\Lambda_{2}}\) into \(oGPA(\mathcal{E}_{4}^{12})\). Writing \(U^{a}{}_{b}\) for the block of this embedded element between vertices \(a\) and \(b\) (with rows and columns indexed by paths of length two), the matrix \(U^{1}{}_{9}\) is a \(2\times 2\) projection satisfying \((U^{1}{}_{9})^{2}=[2]_{q}\cdot U^{1}{}_{9}\) by (Hecke), and with trace \([2]_{q}\) by [3, Lemma 5.6]. This means we can unitarily conjugate \(U^{1}{}_{9}\) by an element of \(U(2)\) to arrange that \[U^{1}{}_{9}=\begin{bmatrix}[2]_{q}&0\\ 0&0\end{bmatrix},\] with rows and columns indexed by the two paths \(6^{\alpha}\) and \(6^{\beta}\). This uses up the \(U(2)\) degree of freedom, up to the \(U(1)\oplus U(1)\) diagonal subgroup of this \(U(2)\). Thus with this fixed choice of \(U^{1}{}_{9}\) as above, we have a gauge group of \(U(1)^{25}\). In particular, this means that the absolute values of the coefficients in our solution are now fixed. We now numerically approximate a solution for the remaining coefficients. As expected, the phases on these coefficients are unrecognisable (as the numerical approximation picks out a random point in the solution space \(U(1)^{25}\)). However, many of the absolute values (which are invariant under the action of \(U(1)^{25}\)) of our numerical coefficients can be immediately identified.
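As a sanity check on the normal-form step above, the following minimal numerical sketch (ours, assuming only numpy; not part of the original computation) builds a \(2\times 2\) Hermitian matrix satisfying the two constraints on \(U^{1}{}_{9}\) and conjugates it into the stated diagonal form:

```python
import numpy as np

q2 = 2 * np.cos(np.pi / 12)  # [2]_q at q = zeta_24, approx 1.93185

# [2]_q times a rank-one projection solves U^2 = [2]_q U with tr(U) = [2]_q.
rng = np.random.default_rng(0)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v /= np.linalg.norm(v)
U = q2 * np.outer(v, v.conj())

assert np.allclose(U @ U, q2 * U)        # Hecke-type projection identity
assert np.isclose(np.trace(U).real, q2)  # trace is [2]_q

# Conjugating by the unitary of eigenvectors lands in diag(0, [2]_q), the
# normal form up to reordering; the remaining freedom is diagonal U(1)+U(1).
eigvals, V = np.linalg.eigh(U)
print(np.round((V.conj().T @ U @ V).real, 6))
```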
The distinct numerical values in our numerical solution for which we can make guesses for their exact values are as follows:

\begin{tabular}{c|ccccc}
\hline\hline
Numerical Value & 0 & 0.175067 & 0.207107 & 0.239146 & 0.341081 \\
\hline
Exact Guess & \(0\) & \(\frac{1}{[4]_{q}}\left([2]_{q}+\frac{[3]_{q}}{[5]_{q}}\right)-1\) & \(\frac{[2]_{q}}{[3]_{q}}\left(1+\frac{[2]_{q}}{[3]_{q}}\right)-1\) & \(\frac{[3]_{q}}{[4]_{q}}\left(1+\frac{1}{[2]_{q}}\right)-1\) & \(\frac{1}{[3]_{q}}\left([2]_{q}+\frac{[4]_{q}}{[2]_{q}}\right)-1\) \\
\hline\hline
Numerical Value & 0.366025 & 0.393847 & 0.439158 & 0.481717 & 0.5 \\
\hline
Exact Guess & \(\frac{1}{[3]_{q}}\) & \(\frac{1}{[4]_{q}}\left([2]_{q}+[3]_{q}\right)-1\) & \(\frac{[2]_{q}}{[3]_{q}}+\frac{[3]_{q}}{[5]_{q}}-1\) & \(\sqrt{\frac{[4]_{q}}{[2]_{q}}}\) & \(\frac{1}{2}\) \\
\hline\hline
Numerical Value & 0.517638 & 0.538005 & 0.605 & 0.68125 & 0.707107 \\
\hline
Exact Guess & \(\frac{1}{[2]_{q}}\) & \(\frac{1}{[4]_{q}}\left([5]_{q}+\frac{[3]_{q}}{[2]_{q}}\right)-1\) & \(\sqrt{\frac{1}{[3]_{q}}}\) & \(\sqrt{\frac{[4]_{q}}{[2]_{q}}}\) & \(\frac{1}{\sqrt{2}}\) \\
\hline\hline
Numerical Value & 0.745315 & 0.790471 & 0.800893 & 0.8556 & 0.865966 \\
\hline
Exact Guess & \(\sqrt{\frac{1}{[3]_{q}}\left(1+\frac{1}{[2]_{q}}\right)}\) & \(\sqrt{\frac{1}{[3]_{q}}\left(1+\frac{[2]_{q}}{[3]_{q}}\right)}\) & \(\sqrt{\frac{[3]_{q}}{[2]_{q}}}\) & \(\sqrt{\frac{[3]_{q}}{[5]_{q}}}\) & \(\sqrt{\frac{[2]_{q}}{[4]_{q}}}\) \\
\hline\hline
Numerical Value & 0.896575 & 0.975056 & 1.020367 & 1.035276 & 1.07313 \\
\hline
Exact Guess & \(\frac{[4]_{q}}{[5]_{q}}\) & \(\frac{1}{[5]_{q}}+\frac{[2]_{q}}{[3]_{q}}\) & \(\frac{[3]_{q}}{[2]_{q}[4]_{q}}\left(1+\frac{[3]_{q}}{[2]_{q}}\right)\) & \(\frac{1}{[3]_{q}}\left([2]_{q}+\frac{[4]_{q}}{[5]_{q}}\right)\) & \(\sqrt{\frac{1}{[2]_{q}}\left(1+\frac{[4]_{q}}{[3]_{q}}\right)}\) \\
\hline\hline
Numerical Value & 1.207107 & 1.239146 & 1.393847 & 1.41421 & 1.692705 \\
\hline
Exact Guess & \(\frac{[2]_{q}}{[3]_{q}}\left(1+\frac{[2]_{q}}{[3]_{q}}\right)\) & \(\frac{[3]_{q}}{[4]_{q}}\left(1+\frac{1}{[2]_{q}}\right)\) & \(\frac{1}{[4]_{q}}\left([2]_{q}+[3]_{q}\right)\) & \(\sqrt{2}\) & \(\frac{[2]_{q}}{[4]_{q}}\left(1+[2]_{q}\right)\) \\
\hline\hline
Numerical Value & 1.93185 & & & & \\
\hline
Exact Guess & \([2]_{q}\) & & & & \\
\hline\hline
\end{tabular}
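These guesses are easy to spot-check, since at \(q=\zeta_{24}\) the quantum integers \([n]_{q}=\frac{q^{n}-q^{-n}}{q-q^{-1}}\) are small real numbers. A short sketch of ours, assuming numpy, reproducing a few entries of the table:

```python
import numpy as np

q = np.exp(2j * np.pi / 24)  # q = zeta_24

def qint(n):
    """Quantum integer [n]_q = (q^n - q^-n) / (q - q^-1); real at this root."""
    return ((q**n - q**-n) / (q - q**-1)).real

q2, q3, q4, q5 = qint(2), qint(3), qint(4), qint(5)

print(round(1 / q3, 6))                       # 0.366025
print(round((q2 + q3) / q4 - 1, 6))           # 0.393847
print(round(q2 / q3 + q3 / q5 - 1, 6))        # 0.439158
print(round(q4 / q5, 6))                      # 0.896575
print(round(np.sqrt((1 + q4 / q3) / q2), 5))  # 1.07313
print(round(q2, 6))                           # 1.931852
```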
The potential solution for the embedding of the trivalent vertex into \(\text{Hom}_{oGPA(\mathcal{E}_{4}^{12})}(-\to++)\) is given in Definition 1.1. Here we use a slight alteration of Boltzmann weight notation, with the value \(W_{v_{1},v_{2},v_{3}}\) representing the coefficient of the basis element \((v_{1}\xleftarrow{\gamma_{3}}v_{3},v_{1}\xrightarrow{\gamma_{1}}v_{2}\xrightarrow{\gamma_{2}}v_{3})\), with edge labels suppressed unless needed. To give the reader some idea of the structure of the solution for the embedding of \([2]_{q}\cdot p_{\Lambda_{2}}\), we include the single \(5\times 5\) block and three \(3\times 3\) blocks. (Block matrices omitted here.)
To get around this computational roadblock, we observe that the coefficients of the embedding of the trivalent vertex are significantly nicer than the coefficients of the embedding of \([2]_{q}\cdot p_{\Lambda_{2}}\). As shown in [13], the category \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\) has an alternate presentation given in terms of the single trivalent generator. The relations of this presentation are three diagrammatic relations (omitted here); one of them evaluates a closed diagram to \([2]_{q}\). Hence if we can verify these three relations, we will show that our potential solution indeed defines an embedding \(\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}))};\Lambda_{1}}\to oGPA(\mathcal{E}_{4}^{12})\). While relation (ii) is quartic, the simpler form of the algebraic numbers for the coefficients of the embedding of the trivalent vertex means that these equations are much easier for the computer to verify. Helping our cause is the fact that there are only 171 individual equations to verify for relation (ii). This allows us to give a proof of Theorem 1.2. Proof of Theorem 1.2.: We directly verify, using a computer, that the element of \(oGPA(\mathcal{E}_{4}^{12})\) given in Definition 1.1 satisfies the three relations above. This gives a \(\dagger\)-embedding of \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\to oGPA(\mathcal{E}_{4}^{12})\). As \(oGPA(\mathcal{E}_{4}^{12})\) is unitary, we have that the image of \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\) in \(oGPA(\mathcal{E}_{4}^{12})\) is a unitary subcategory. In particular all negligible elements of \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\) are mapped to zero. Thus we get an embedding \(\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}))};\Lambda_{1}}\to oGPA(\mathcal{E}_{4}^{12})\) as desired.
In this short note we construct an embedding of the planar algebra for $\overline{\operatorname{Rep}(U_q(\mathfrak{sl}_3))}$ at $q = e^{2\pi i \frac{1}{24}}$ into the graph planar algebra of Di Francesco and Zuber's candidate graph $\mathcal{E}_4^{12}$. Via the graph planar algebra embedding theorem we thus construct a rank 11 module category over $\overline{\operatorname{Rep}(U_q(\mathfrak{sl}_3))}$ whose graph for action by the vector representation is $\mathcal{E}_4^{12}$. This fills a small gap in the literature on the construction of $\overline{\operatorname{Rep}(U_q(\mathfrak{sl}_3))}$ module categories. As a consequence of our construction, we obtain the principal graphs of subfactors constructed abstractly by Evans and Pugh.
2309.08786
The radius of comparison of $C (X)$
Let X be a compact Hausdorff space. Then the radius of comparison rc ( C (X)) is related to the covering dimension dim (X) by rc ( C (X)) \geq [ dim (X) - 7 ] / 2. Except for the additive constant, this improves a result of Elliott and Niu, who proved that if X is metrizable then rc (C (X)) \geq [ dim_{\mathbb{Q}} (X) - 4 ] / 2. There are compact metric spaces X for which the estimate of Elliott and Niu gives no information, but for which rc ( C (X)) is infinite or has arbitrarily large finite values.
N. Christopher Phillips
2023-09-15T22:08:15
http://arxiv.org/abs/2309.08786v1
# The Radius of Comparison of \(C(X)\) ###### Abstract. Let \(X\) be a compact Hausdorff space. Then the radius of comparison \(\operatorname{rc}(C(X))\) is related to the covering dimension \(\dim(X)\) by \[\operatorname{rc}(C(X))\geq\frac{\dim(X)-7}{2}.\] Except for the additive constant, this improves a result of Elliott and Niu, who proved that if \(X\) is metrizable then \[\operatorname{rc}(C(X))\geq\frac{\dim_{\mathbb{Q}}(X)-4}{2}.\] There are compact metric spaces \(X\) for which the estimate of Elliott and Niu gives no information, but for which \(\operatorname{rc}(C(X))\) is infinite or has arbitrarily large finite values. 2010 Mathematics Subject Classification: Primary 46L80 This material is based upon work supported by the Simons Foundation Collaboration Grant for Mathematicians #587103 and by the US National Science Foundation under Grant DMS-2055771. ## 1. Introduction The radius of comparison \(\operatorname{rc}(A)\) of a unital C*-algebra \(A\) was introduced in [6] to distinguish counterexamples to the Elliott classification program in the absence of Jiang-Su stability. Since AH algebras are a rich source of examples of simple C*-algebras, both nonclassifiable (like the ones in [6]) and classifiable, and since the radius of comparison is sometimes well behaved in direct limits, in particular in AH systems (in [2], see Proposition 3.2.4(iii) and Proposition 3.2.3), it is of interest to compute \(\operatorname{rc}(C(X,M_{n}))\) for \(n\in\mathbb{Z}_{>0}\) and a compact Hausdorff space \(X\). Since \(\operatorname{rc}(M_{n}(A))=\frac{1}{n}\operatorname{rc}(A)\), this reduces to the computation of \(\operatorname{rc}(C(X))\). For further motivation, let \(\dim(X)\) denote the covering dimension of the topological space \(X\). (See Definition 1.1 in Chapter 3 of [5].) It has long been known that \(\operatorname{rc}(C(X))\leq\frac{1}{2}\dim(X)\). (See (2.1) in [4], or Subsection 4.1 in [2], for a slightly more precise result; these are not the original sources. The factor \(\frac{1}{2}\) arises from the use of complex scalars.) Moreover, \(\operatorname{rc}(A)\) behaves somewhat like a noncommutative generalization of a dimension for topological spaces. This inequality therefore suggests that, as happens for other noncommutative dimensions such as real rank, stable rank, and decomposition rank, \(\operatorname{rc}(C(X))\) should be related to \(\dim(X)\). For a compact metrizable space \(X\) and an abelian group \(G\), let \(\dim_{G}(X)\) be the cohomological dimension of \(X\) with coefficients \(G\), as in Section 1 of [3]. In [4], Elliott and Niu proved that \[\operatorname{rc}(C(X))\geq\frac{\dim_{\mathbb{Q}}(X)-4}{2}. \tag{1.1}\] (They give a slightly better estimate when \(\dim_{\mathbb{Q}}(X)\) is odd, but that estimate can be recovered from (1.1) by rounding up, because \(\operatorname{rc}(C(X))\in\mathbb{Z}_{\geq 0}\cup\{\infty\}\) by Corollary 2.4 of [4]; also see Lemma 2.6 below.) While satisfactory for spaces such as finite complexes, this result leaves open the question of whether, in general, \(\operatorname{rc}(C(X))\) is related to \(\dim(X)\), \(\dim_{\mathbb{Q}}(X)\), something else (such as \(\dim_{\mathbb{Z}}(X)\)), or some new dimension for topological spaces. We prove (Corollary 3.4 below) that for any compact metrizable space \(X\), one has \[\operatorname{rc}(C(X))\geq\frac{\dim(X)-7}{2}, \tag{1.2}\] showing that the covering dimension is at least nearly the right commutative dimension.
(We get a slightly better bound for some values of \(\dim(X)\); see Theorem 3.3 below.) On the other hand, the additive constant in (1.2) is worse than the one in (1.1), and we don't know how to improve it in general. We illustrate the differences between \(\dim(X)\) and \(\dim_{\mathbb{Q}}(X)\). To begin, recall from Example 1.3(1) of [3] that for any compact metrizable space \(X\) and any abelian group \(G\), one has \[\dim_{G}(X)\leq\dim_{\mathbb{Z}}(X)\leq\dim(X).\] Theorem 7.1 of [3] gives a compact metrizable space \(X\) such that \(\dim(X)=\infty\) and \(\dim_{\mathbb{Z}}(X)\leq 3\). So \(\dim_{\mathbb{Q}}(X)\leq 3\). The best estimate gotten from (1.1) is \(\operatorname{rc}(C(X))\geq 0\), but the estimate (1.2) shows that \(\operatorname{rc}(C(X))=\infty\). We point out that this space \(X\) contains no closed subspace \(Y\) with \(3<\dim(Y)<\infty\). Indeed, if \(Y\subset X\) is closed and \(\dim(Y)<\infty\), then, using Theorem 1.4 of [3] and Corollary 1.2 of [3], we have \(\dim(Y)=\dim_{\mathbb{Z}}(Y)\leq\dim_{\mathbb{Z}}(X)\leq 3\). Theorem 1.4 of [3] states that if \(\dim(X)<\infty\), then \(\dim_{\mathbb{Z}}(X)=\dim(X)\). However, it is still possible for \(\dim_{\mathbb{Q}}(X)\) to be much smaller than \(\dim_{\mathbb{Z}}(X)\). Let \(P\subset\mathbb{Z}_{>0}\) be the set of primes. For \(p\in P\), approximately following the introduction to Section 2 of [3], let \(\mathbb{Z}_{\langle p\rangle}\) denote the localization of \(\mathbb{Z}\) at the prime ideal \(\langle p\rangle\) generated by \(p\), and let \(\mathbb{Z}_{p^{\infty}}=\varinjlim_{n}\mathbb{Z}/p^{n}\mathbb{Z}\), via the maps \(\mathbb{Z}/p^{n}\mathbb{Z}\to\mathbb{Z}/p^{n+1}\mathbb{Z}\) which send \(1+p^{n}\mathbb{Z}\) to \(p+p^{n+1}\mathbb{Z}\). For an example of what can happen, we combine Proposition 4.1 of [3] and Theorem 5.1 of [3], referring to Theorem 2.4 of [3] for the statement of the Bockstein inequalities, to see that for every function \(r\colon P\to\mathbb{Z}_{>0}\cup\{\infty\}\) there is a compact metrizable space \(X\) such that \[\dim_{\mathbb{Q}}(X)=1,\quad\dim_{\mathbb{Z}/p\mathbb{Z}}(X)=\dim_{\mathbb{Z} _{p^{\infty}}}(X)=r(p),\quad\text{and}\quad\dim_{\mathbb{Z}_{\langle p\rangle }}(X)=r(p)+1.\] It then follows from Theorem 2.1 of [3] that \(\dim_{\mathbb{Z}}(X)=1+\sup_{p\in P}r(p)\). It is not stated in [3], but one can see by examining the proofs there that the space \(X\) gotten from the construction also satisfies \(\dim(X)=1+\sup_{p\in P}r(p)\). (The key point is that the spaces constructed in Corollary 5.3 of [3] have dimension \(n\).) For every \(d\in\{2,3,\ldots,\infty\}\), an appropriate choice of \(r\) gives a compact metrizable space \(X\) such that \(\dim_{\mathbb{Z}}(X)=\dim(X)=d\), but \(\dim_{\mathbb{Q}}(X)=1\). The estimate (1.1) gives no information, but the estimate (1.2) shows that \(\operatorname{rc}(C(X))\geq\frac{1}{2}(d-7)\), which can be infinite, or finite but arbitrarily large. Elliott and Niu build witnesses to lower bounds for \(\operatorname{rc}(C(X))\) using Chern classes and the Chern character. This approach depends on vector bundles with nonzero classes in \(\mathbb{Q}\otimes K^{0}(Y)\) for suitable subspaces \(Y\subset X\). It requires that \(X\) have closed subspaces with nonzero cohomology in high degrees, and therefore can't work for the space \(X\) discussed above with \(\dim(X)=\infty\) and \(\dim_{\mathbb{Z}}(X)\leq 3\). Our proof uses instead the following standard characterization of the covering dimension \(\dim(X)\). 
The statement in [5] is actually more general: it applies to arbitrary normal topological spaces. **Theorem 1.1** (Theorem 2.2 in Chapter 3 of [5]).: Let \(X\) be a compact Hausdorff space and let \(d\in\mathbb{Z}_{\geq 0}\). Then \(\dim(X)\leq d\) if and only if for every closed subset \(X_{0}\subset X\) and every continuous function \(\xi_{0}\colon X_{0}\to S^{d}\), there is a continuous function \(\xi\colon X\to S^{d}\) such that \(\xi|_{X_{0}}=\xi_{0}\). We relate extendability of maps \(X_{0}\to S^{2d-1}\) for closed subspaces \(X_{0}\) of \(X\) to extendability of trivial rank one projections in \(C(X_{0},M_{d})\), and relate extendability of projections to the radius of comparison. The proof does implicitly contain vector bundles over subspaces \(X_{0}\subset X\), but they are all stably trivial, so that their classes in \(K^{0}(X_{0})\) are zero. Most of this work was done during a visit to the Westfälische Wilhelms-Universität Münster in July 2021, and we are grateful to that institution for its hospitality. The key Proposition 3.1 was provided by Thomas Nikolaus during that visit. We are also grateful to John Klein for helpful email correspondence. ## 2. Extending projections In this section, we prove, in effect, that if \(r\geq\operatorname{rc}(C(X))\), then for \(X_{0}\subset X\) closed, every rank one projection \(p_{0}\in C(X_{0},M_{r+3})\) which is Murray-von Neumann equivalent to a constant projection can be extended to a rank one projection \(p\in C(X,M_{r+3})\) with the same property. (See Remark 2.9.) Without the restriction on the matrix size of \(p\), there is always an extension, in fact, an extension to a projection in \(C(X,M_{4(r+3)})\). The point here is that the matrix size need not be increased. The following somewhat nonstandard terminology is convenient. **Convention 2.1**.: Let \(X\) be a compact Hausdorff space, let \(n\in\mathbb{Z}_{>0}\), and let \(p\in C(X,M_{n})\) be a projection. We say that \(p\) is _trivial_ if \(p\) is Murray-von Neumann equivalent in \(C(X,M_{n})\) to a constant projection. A trivial projection in \(C(X,M_{n})\) is Murray-von Neumann equivalent in \(C(X,M_{n})\) to any constant projection with the correct rank. **Lemma 2.2**.: Let \(X\) be a compact Hausdorff space, and let \(Y\subset X\) be closed. Let \(n\in\mathbb{Z}_{>0}\), and let \(p_{0}\in C(Y,M_{n})\) and \(q\in C(X,M_{n})\) be projections. Suppose \(p_{0}(x)\perp q(x)\) for all \(x\in Y\). 1. There are a closed set \(Z\subset X\) with \(Y\subset\operatorname{int}(Z)\) and a projection \(p\in C(Z,M_{n})\) such that \(p|_{Y}=p_{0}\) and \(p(x)\perp q(x)\) for all \(x\in Z\). 2. If moreover \(e\in C(X,M_{n})\) is a projection, and \(s_{0}\in C(Y,M_{n})\) satisfies \(s_{0}s_{0}^{*}=p_{0}\) and \(s_{0}^{*}s_{0}=e|_{Y}\), then \(Z\) and \(p\) in (1) may be chosen so that there is \(s\in C(Z,M_{n})\) satisfying \(ss^{*}=p\), \(s^{*}s=e|_{Z}\), and \(s|_{Y}=s_{0}\). Proof.: For the first part, choose a positive element \(a\in C(X,M_{n})\) such that \(a|_{Y}=p_{0}\). For \(x\in X\) define \(b(x)=[1-q(x)]a(x)[1-q(x)]\). Then \(b(x)=p_{0}(x)\) for all \(x\in Y\). So there is a closed set \(Z\subset X\) with \(Y\subset\operatorname{int}(Z)\) such that for all \(x\in Z\), we have \(\|b(x)^{2}-b(x)\|<\frac{1}{4}\). This implies \(\frac{1}{2}\not\in\operatorname{sp}(b(x))\), so the continuous functional calculus \(p(x)=\chi_{(\frac{1}{2},\infty]}(b(x))\) is defined, and gives a continuous projection valued function on \(Z\).
Clearly \(p|_{Y}=p_{0}\) and \(p(x)\perp q(x)\) for all \(x\in Z\). For the second part, apply (1), calling the resulting closed set and projection \(T\) and \(r\). Choose any \(t\in C(X,M_{n})\) such that \(t|_{Y}=s_{0}\). For \(x\in T\) define \(c(x)=r(x)t(x)e(x)\). Then \(c(x)=s_{0}(x)\) for all \(x\in Y\). So there is a closed set \(Z\subset X\) with \(Y\subset\operatorname{int}(Z)\subset Z\subset T\) and which is so small that for all \(x\in Z\), the expression \(s(x)=c(x)[c(x)^{*}c(x)]^{-1/2}\), with functional calculus in \(e(x)M_{n}e(x)\), is defined and satisfies \(s(x)s(x)^{*}=r(x)\) and \(s(x)^{*}s(x)=e(x)\). Take \(p=r|_{Z}\). **Lemma 2.3**.: Let \(X\) be a compact Hausdorff space, and let \(Y_{1},Y_{2}\subset X\) be disjoint closed sets. Let \(n\in\mathbb{Z}_{>0}\), and let \(p_{1},p_{2}\in C(X,M_{n})\) be rank one projections which are orthogonal and are both trivial in the sense of Convention 2.1. Then there exists a unitary \(w\in(p_{1}+p_{2})C(X,M_{n})(p_{1}+p_{2})\) such that \(w|_{Y_{1}}=(p_{1}+p_{2})|_{Y_{1}}\) and the projection \(p=wp_{1}w^{*}\) is trivial and satisfies \(p|_{Y_{j}}=p_{j}|_{Y_{j}}\) for \(j=1,2\). The proof works just as well whenever \(p_{1}\) and \(p_{2}\) are trivial projections of the same (constant) rank, not necessarily rank one. Proof of Lemma 2.3.: We can replace \(p_{1}\) and \(p_{2}\) with Murray-von Neumann equivalent projections, provided the replacements are still orthogonal. We can also replace \(C(X,M_{n})\) with \((p_{1}+p_{2})C(X,M_{n})(p_{1}+p_{2})\). Therefore we may assume that \(n=2\) and that \(p_{1}\) and \(p_{2}\) are the constant projections with values \(\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)\) and \(\left(\begin{smallmatrix}0&0\\ 0&1\end{smallmatrix}\right)\). Choose a continuous path \(\lambda\mapsto u(\lambda)\) of unitaries, for \(\lambda\in[0,1]\), such that \(u(0)=1\) and \(u(1)=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\). Choose a continuous function \(f\colon X\to[0,1]\) such that \(f(x)=0\) for \(x\in Y_{1}\) and \(f(x)=1\) for \(x\in Y_{2}\). Then define \(w(x)=u(f(x))\) for \(x\in X\). **Lemma 2.4**.: Let \(X\) be a compact Hausdorff space, and let \(Y\subset X\) be closed. Let \(n\in\mathbb{Z}_{>0}\), and let \(p_{0}\in C(Y,M_{n})\) be a rank one projection which is trivial in the sense of Convention 2.1. Further let \(q\in C(X,M_{n})\) be a rank one trivial projection such that \(q|_{Y}\perp p_{0}\). Then there exists a rank one trivial projection \(p\in C(X,M_{n})\) such that \(p|_{Y}=p_{0}\). As for Lemma 2.3, the proof works for any rank in place of \(1\). The key point is that we are not allowed to increase \(n\). If we embed \(C(Y,M_{n})\) in \(C(Y,M_{4n})\), then in the larger algebra \(p_{0}\) will be homotopic to a constant projection, and the existence of the extension will be automatic without using \(q\). Proof of Lemma 2.4.: Choose any rank one constant projection \(e\in C(X,M_{n})\), and choose \(s_{0}\in C(Y,M_{n})\) such that \(s_{0}s_{0}^{*}=p_{0}\) and \(s_{0}^{*}s_{0}=e|_{Y}\). Apply Lemma 2.2(2), getting \(Z\), a projection \(p_{1}\in C(Z,M_{n})\) (called \(p\) there) such that \(p_{1}|_{Y}=p_{0}\), and a partial isometry \(v\) (called \(s\) there) such that \(vv^{*}=p_{1}\) and \(v^{*}v=e|_{Z}\). Since \(e|_{Z}\) is trivial of rank one, so is \(p_{1}\).
Because \(Y\subset\operatorname{int}(Z)\) and \(p_{1}\perp q|_{Z}\), we can apply Lemma 2.3, with \(Z\) in place of \(X\), with \(Y_{1}=\partial Z\), with \(Y_{2}=Y\), with \(q|_{Z}\) in place of \(p_{1}\), and with \(p_{1}\) in place of \(p_{2}\), getting a unitary \(w\in(p_{1}+q|_{Z})C(Z,M_{n})(p_{1}+q|_{Z})\) such that \(w|_{\partial Z}=p_{1}|_{\partial Z}+q|_{\partial Z}\) and \((w|_{Y})(q|_{Y})(w^{*}|_{Y})=p_{1}|_{Y}\). Define \[p(x)=\begin{cases}q(x)&x\in X\setminus Z\\ w(x)q(x)w(x)^{*}&x\in Z.\end{cases}\] Then \(p\) is continuous. Since for \(x\in Y\) we have \(w(x)q(x)w(x)^{*}=p_{1}(x)=p_{0}(x)\), it follows that \(p|_{Y}=p_{0}\). It remains to prove that \(p\) is trivial. Since \(q\) is trivial, there is \(s\in C(X,M_{n})\) such that \(ss^{*}=e\) and \(s^{*}s=q\). Define \[t(x)=\begin{cases}s(x)&x\in X\setminus Z\\ s(x)w(x)^{*}&x\in Z.\end{cases}\] Then \(t\) is continuous because \(w(x)=p_{1}(x)+q(x)\) for \(x\in\partial Z\). For \(x\in X\setminus Z\) we have \[t(x)t(x)^{*}=s(x)s(x)^{*}=e(x)\qquad\text{and}\qquad t(x)^{*}t(x)=s(x)^{*}s(x) =q(x)=p(x).\] For \(x\in Z\) we have \[t(x)t(x)^{*}=s(x)w(x)^{*}w(x)s(x)^{*}=s(x)[p_{1}(x)+q(x)]s(x)^{*}=s(x)s(x)^{*}= e(x)\] and \[t(x)^{*}t(x)=w(x)s(x)^{*}s(x)w(x)^{*}=w(x)q(x)w(x)^{*}=p(x).\] So \(tt^{*}=e\) and \(t^{*}t=p\). Since \(e\) is constant, this shows that \(p\) is trivial. The proof of the following lemma in [1] has enough misprints that we give a full proof here. **Lemma 2.5** (Lemma 2.19 of [1]).: Let \(A\) be a C*-algebra, let \(a\in A_{+}\), and let \(p\in A\) be a projection. Suppose \(p\precsim a\). Then there exist \(\alpha\in(0,\infty)\), a projection \(q\in A\), and \(\delta>0\), such that \(q\) is Murray-von Neumann equivalent to \(p\), \(q\leq\alpha a\), and, with \[g(\lambda)=\begin{cases}0&0\leq\lambda<\delta\\ \delta^{-1}(\lambda-\delta)&\delta\leq\lambda<2\delta\\ 1&2\delta\leq\lambda,\end{cases}\] we have \(g(a)q=q\). Proof.: Set \(\varepsilon=\frac{1}{4}\). Use Proposition 2.17(iii) of [1] to choose \(\delta_{0}>0\) such that \((p-\varepsilon)_{+}\precsim(a-\delta_{0})_{+}\). Then there is \(v\in A\) such that \[\left\|v(a-\delta_{0})_{+}v^{*}-(p-\varepsilon)_{+}\right\|<\varepsilon.\] Theorem 2.13 of [1] provides \(x\in A\) such that \[\|x\|\leq 1\qquad\text{and}\qquad xv(a-\delta_{0})_{+}v^{*}x^{*}=(p-2 \varepsilon)_{+}.\] Set \[y=(1-2\varepsilon)^{-1/2}xv[(a-\delta_{0})_{+}]^{1/2}.\] Since \((p-2\varepsilon)_{+}=(1-2\varepsilon)p\), we have \(yy^{*}=p\). Therefore \[q=y^{*}y=(1-2\varepsilon)^{-1}[(a-\delta_{0})_{+}]^{1/2}v^{*}x^{*}xv[(a- \delta_{0})_{+}]^{1/2}\] is a projection. Using \(\|x\|\leq 1\) and setting \(\alpha=(1-2\varepsilon)^{-1}\|v\|^{2}\), we get \[q\leq(1-2\varepsilon)^{-1}\|v\|^{2}(a-\delta_{0})_{+}\leq\alpha a.\] Also, using \(\delta=\delta_{0}/2\) in the definition of \(g\), we get \(g(a)(a-\delta_{0})_{+}=(a-\delta_{0})_{+}\), so \(g(a)q=q\). The next lemma is a more precise version of Corollary 2.4 of [4], which says that \(\operatorname{rc}(C(X))\in\mathbb{Z}_{\geq 0}\cup\{\infty\}\). We give a full proof since it is easy to be off by 1. **Lemma 2.6**.: Let \(X\) be a compact Hausdorff space. Then \(\operatorname{rc}(C(X))\) is the least \(r\in\mathbb{Z}_{\geq 0}\cup\{\infty\}\) such that whenever \(n\in\mathbb{Z}_{>0}\) and \(a,b\in C(X,M_{n})_{+}\) satisfy \[\operatorname{rank}(a(x))+r+1\leq\operatorname{rank}(b(x))\] for all \(x\in X\), then \(a\precsim b\). Proof.: Let \(r\) be the integer in the statement. 
Lemma 2.3 of [4] implies that \(\operatorname{rc}(C(X))\) is the infimum of all \(\rho>0\) such that whenever \(n\in\mathbb{Z}_{>0}\) and \(a,b\in C(X,M_{n})_{+}\) satisfy \[\operatorname{rank}(a(x))+\rho<\operatorname{rank}(b(x))\] for all \(x\in X\), then \(a\precsim b\). It therefore suffices to show that this condition holds if \(\rho\geq r\) and fails if \(\rho<r\). For the first, suppose \(\rho\geq r\) and \(\operatorname{rank}(a(x))+\rho<\operatorname{rank}(b(x))\) for all \(x\in X\). Since \(r\), \(\operatorname{rank}(a(x))\), and \(\operatorname{rank}(b(x))\) are integers, we must have \(\operatorname{rank}(a(x))+r+1\leq\operatorname{rank}(b(x))\) for all \(x\in X\). So \(a\precsim b\) by the definition of \(r\). Now suppose \(\rho<r\). By the definition of \(r\), there are \(a,b\in C(X,M_{n})_{+}\) such that \(\operatorname{rank}(a(x))+r\leq\operatorname{rank}(b(x))\) for all \(x\in X\) but \(a\not\precsim b\). Then \(\operatorname{rank}(a(x))+\rho<\operatorname{rank}(b(x))\) for all \(x\in X\), and \(a\not\precsim b\). We state the next lemma separately for convenient reference. **Lemma 2.7**.: Let \(X\) be a compact Hausdorff space, let \(X_{0}\subset X\) be closed, let \(d\in\mathbb{Z}_{>0}\), and let \(\xi_{0}\colon X_{0}\to S^{d}\) be continuous. Then there are a closed subset \(X_{1}\subset X\) with \(X_{0}\subset\operatorname{int}(X_{1})\) and a continuous function \(\xi_{1}\colon X_{1}\to S^{d}\) such that \(\xi_{1}|_{X_{0}}=\xi_{0}\). Proof.: Identify \(S^{d}\) with the unit sphere in \(\mathbb{R}^{d+1}\). The Tietze Extension Theorem provides a continuous function \(\eta\colon X\to\mathbb{R}^{d+1}\) which extends \(\xi_{0}\). Set \(X_{1}=\big{\{}x\in X\colon\|\eta(x)\|\geq\frac{1}{2}\big{\}}\) and \(\xi_{1}(x)=\|\eta(x)\|^{-1}\eta(x)\). **Proposition 2.8**.: Let \(X\) be a compact Hausdorff space. Let \(r\in\mathbb{Z}_{\geq 0}\) satisfy \(r\geq\operatorname{rc}(C(X))\). Then for every closed subset \(X_{0}\subset X\) and every continuous function \(\xi_{0}\colon X_{0}\to S^{2r+5}\), there exist continuous functions \(\xi\colon X\to S^{2r+5}\) and \(\zeta\colon X_{0}\to S^{1}\) such that, identifying \(S^{2r+5}\) with the unit sphere in \(\mathbb{C}^{r+3}\), for all \(x\in X_{0}\) we have \(\xi(x)=\zeta(x)\xi_{0}(x)\). Proof.: We can assume \(X_{0}\neq\varnothing\). Let \(X_{1}\subset X\) and \(\xi_{1}\colon X_{1}\to S^{2r+5}\) be as in Lemma 2.7. We use the usual scalar product on \(\mathbb{C}^{r+3}\). For \(x\in X_{1}\) define a rank one projection \(p_{1}(x)\in M_{r+3}=L(\mathbb{C}^{r+3})\) by \(p_{1}(x)\eta=\langle\eta,\,\xi_{1}(x)\rangle\xi_{1}(x)\) for all \(\eta\in\mathbb{C}^{r+3}\). Then \(p_{1}\) is a projection in \(C(X_{1},M_{r+3})\). Set \(p_{0}=p_{1}|_{X_{0}}\). Fix \(x_{0}\in X_{0}\), and let \(e\in C(X,M_{r+3})\) be the constant projection with value \(p_{1}(x_{0})\). Set \(\eta_{1}=\xi_{1}(x_{0})\). We claim that \(p_{1}\) is trivial. To see this, define \(s\in C(X_{1},M_{r+3})\) by \(s(x)\eta=\langle\eta,\,\eta_{1}\rangle\xi_{1}(x)\) for \(x\in X_{1}\) and \(\eta\in\mathbb{C}^{r+3}\). Then check that \(s(x)^{*}\eta=\langle\eta,\,\xi_{1}(x)\rangle\eta_{1}\) for \(x\in X_{1}\) and \(\eta\in\mathbb{C}^{r+3}\), that \(ss^{*}=p_{1}\), and that \(s^{*}s=e|_{X_{1}}\). Choose a continuous function \(f\colon X\to[0,1]\) such that \(f(x)=1\) for all \(x\in X_{0}\) and \(\operatorname{supp}(f)\subset\operatorname{int}(X_{1})\).
Define \(b\in C(X,M_{r+3})\) by \[b(x)=\begin{cases}1-f(x)p_{1}(x)&x\in X_{1}\\ 1&x\not\in X_{1}.\end{cases}\] For \(x\in X\), we have \[\operatorname{rank}(b(x))\geq r+2=r+1+\operatorname{rank}(e(x)).\] Therefore \(e\precsim_{C(X)}b\) by Lemma 2.6. Lemma 2.5 gives a projection \(q\in C(X,M_{r+3})\) and \(\delta>0\) such that \(q\) is Murray-von Neumann equivalent to \(e\) and, with \(g\) as in the statement of that lemma, we have \(g(b)q=q\). In particular, \(q\) is a rank one trivial projection with \(q|_{X_{0}}\leq 1-p_{0}\). Since \(p_{0}\) is a rank one trivial projection on \(X_{0}\), Lemma 2.4 implies that there is a rank one trivial projection \(p\in C(X,M_{r+3})\) such that \(p|_{X_{0}}=p_{0}\). By triviality, there is \(t\in C(X,M_{r+3})\) such that \(tt^{*}=p\) and \(t^{*}t=e\). Define \(\xi\colon X\to\mathbb{C}^{r+3}\) by \(\xi(x)=t(x)\eta_{1}\). Then \(\|\xi(x)\|_{2}=1\) for all \(x\in X\), so \(\xi\) is a function from \(X\) to \(S^{2r+5}\). Moreover, for \(x\in X_{0}\), we have \(p(x)=p_{0}(x)\), so \(\xi(x)\in\mathbb{C}\xi_{0}(x)\), and \(\zeta(x)=\langle\xi(x),\xi_{0}(x)\rangle\) is a continuous function with \(|\zeta(x)|=1\) and such that \(\xi(x)=\zeta(x)\xi_{0}(x)\). **Remark 2.9**.: The conclusion of Proposition 2.8 can be restated in terms of projections, as follows. For every closed subset \(X_{0}\subset X\) and every rank one trivial projection \(p_{0}\in C(X_{0},M_{r+3})\), there exists a rank one trivial projection \(p\in C(X,M_{r+3})\) such that \(p(x)=p_{0}(x)\) for all \(x\in X_{0}\). ## 3. From extensions of projections to radius of comparison The following result and its proof were suggested by Thomas Nikolaus. **Proposition 3.1**.: Let \(d\in\mathbb{Z}_{>0}\) be even. Identify \(S^{2d-1}\) with the unit sphere in \(\mathbb{C}^{d}\). Define \(m,p\colon S^{1}\times S^{2d-1}\to S^{2d-1}\) by \(m(\lambda,\xi)=\lambda\xi\) and \(p(\lambda,\xi)=\xi\) for \(\lambda\in S^{1}\) and \(\xi\in S^{2d-1}\). Then \(m\) and \(p\) are homotopic. Proof.: Let \(\mathbb{H}\) be the quaternions. Recall that \(\mathbb{H}\), in its usual norm, is isometrically isomorphic, as a complex normed vector space, to \(\mathbb{C}^{2}\), and that the elements \(1,i,j,k\in\mathbb{H}\) form an orthonormal basis. Using this identification, choose an isometric isomorphism of \(\mathbb{C}^{d}\) with \(\mathbb{H}^{d/2}\). This gives a multiplication map \(\mathbb{H}\times\mathbb{C}^{d}\to\mathbb{C}^{d}\). We identify the quaternions of norm \(1\) with \(S^{3}\). This gives an inclusion \(g\colon S^{1}\to S^{3}\). Since multiplication by a quaternion of norm \(1\) is isometric, the multiplication map above restricts to \(n\colon S^{3}\times S^{2d-1}\to S^{2d-1}\), and \(n\circ(g\times\operatorname{id}_{S^{2d-1}})=m\). Since \(\pi_{1}(S^{3})\) is trivial, \(g\) is homotopic to the composition of the obvious map \(q\colon S^{1}\to\{1\}\) and the inclusion \(t\) of \(\{1\}\) in \(S^{3}\), that is, \(g\simeq t\circ q\). Therefore, using a direct computation of the map at the last step, \[m=n\circ(g\times\operatorname{id}_{S^{2d-1}})\simeq n\circ(t\times\operatorname{id}_{S^{2d-1}})\circ(q\times\operatorname{id}_{S^{2d-1}})=p.\] This completes the proof. Proposition 3.1 may well be false when \(d\) is odd. **Proposition 3.2**.: Let \(X\) be a compact Hausdorff space. Let \(r\in\mathbb{Z}_{\geq 0}\) be odd and satisfy \(\dim(X)\geq 2r+6\). Then \(\operatorname{rc}(C(X))\geq r+1\).
Proof.: It is equivalent to prove that if \(r\in\mathbb{Z}_{\geq 0}\) is odd and satisfies \(\operatorname{rc}(C(X))\leq r\), then \(\dim(X)\leq 2r+5\). We use the criterion of Theorem 1.1. Thus, let \(X_{0}\subset X\) be closed, and let \(\xi_{0}\colon X_{0}\to S^{2r+5}\) be continuous. Let \(X_{1}\subset X\) and the extension \(\xi_{1}\colon X_{1}\to S^{2r+5}\) of \(\xi_{0}\) be as in Lemma 2.7. Apply Proposition 2.8 with \(X_{1}\) in place of \(X_{0}\) and \(\xi_{1}\) in place of \(\xi_{0}\), getting continuous functions \(\eta\colon X\to S^{2r+5}\) (called \(\xi\) there) and \(\zeta\colon X_{1}\to S^{1}\) such that, identifying \(S^{2r+5}\) with the unit sphere in \(\mathbb{C}^{r+3}\), for all \(x\in X_{1}\) we have \(\eta(x)=\zeta(x)\xi_{1}(x)\). Also choose a continuous function \(f\colon X\to[0,1]\) such that \(f(x)=1\) for all \(x\in X_{0}\) and \(\operatorname{supp}(f)\subset\operatorname{int}(X_{1})\). By Proposition 3.1 (with \(d=r+3\)), there is a continuous function \[H\colon[0,1]\times S^{1}\times S^{2r+5}\to S^{2r+5}\] such that for all \(y\in S^{2r+5}\) and \(\lambda\in S^{1}\) we have \[H(0,\lambda,y)=y\qquad\text{and}\qquad H(1,\lambda,y)=\lambda y.\] Now for \(x\in X\) define \[\xi(x)=\begin{cases}H\big{(}f(x),\,\zeta(x)^{-1},\,\eta(x)\big{)}&x\in X_{1}\\ \eta(x)&x\not\in X_{1}.\end{cases}\] For \(x\in\partial X_{1}\) we have \(f(x)=0\), so \(H\big{(}f(x),\,\zeta(x)^{-1},\,\eta(x)\big{)}=\eta(x)\). Therefore \(\xi\) is a continuous function from \(X\) to \(S^{2r+5}\) whose restriction to \(X_{0}\) is \(\xi_{0}\). If we take apart the parity conditions in Proposition 3.2, we get the following estimates. **Theorem 3.3**.: Let \(X\) be a compact Hausdorff space. 1. If \(\dim(X)=\infty\), then \(\operatorname{rc}(C(X))=\infty\). 2. If \(\dim(X)\equiv 0\pmod{4}\), then \(\operatorname{rc}(C(X))\geq\frac{1}{2}\big{(}\dim(X)-4\big{)}\). 3. If \(\dim(X)\equiv 1\pmod{4}\), then \(\operatorname{rc}(C(X))\geq\frac{1}{2}\big{(}\dim(X)-5\big{)}\). 4. If \(\dim(X)\equiv 2\pmod{4}\), then \(\operatorname{rc}(C(X))\geq\frac{1}{2}\big{(}\dim(X)-6\big{)}\). 5. If \(\dim(X)\equiv 3\pmod{4}\), then \(\operatorname{rc}(C(X))\geq\frac{1}{2}\big{(}\dim(X)-7\big{)}\). Proof.: Part (1) is clear from Proposition 3.2. For (2), apply Proposition 3.2 with \(r=\frac{1}{2}\dim(X)-3\). For (3) use \(r=\frac{1}{2}\dim(X)-\frac{7}{2}\), for (4) use \(r=\frac{1}{2}\dim(X)-4\), and for (5) use \(r=\frac{1}{2}\dim(X)-\frac{9}{2}\). The best general statement is the following corollary. **Corollary 3.4**.: Let \(X\) be a compact Hausdorff space. Then \(\operatorname{rc}(C(X))\geq\frac{1}{2}\big{(}\dim(X)-7\big{)}\).
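As a worked instance of the parity bookkeeping in Theorem 3.3 (our example, not from the original): suppose \(\dim(X)=12\equiv 0\pmod{4}\). Taking \(r=\frac{1}{2}\dim(X)-3=3\), which is odd and satisfies \(\dim(X)\geq 2r+6=12\), Proposition 3.2 gives \[\operatorname{rc}(C(X))\geq r+1=4=\tfrac{1}{2}\bigl(\dim(X)-4\bigr),\] matching part (2), while the uniform bound of Corollary 3.4 only yields \(\operatorname{rc}(C(X))\geq\frac{5}{2}\), hence \(\operatorname{rc}(C(X))\geq 3\) after rounding up via Lemma 2.6.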
Let X be a compact Hausdorff space. Then the radius of comparison rc(C(X)) is related to the covering dimension dim(X) by rc(C(X)) ≥ [dim(X) - 7]/2. Except for the additive constant, this improves a result of Elliott and Niu, who proved that if X is metrizable then rc(C(X)) ≥ [dim_Q(X) - 4]/2. There are compact metric spaces X for which the estimate of Elliott and Niu gives no information, but for which rc(C(X)) is infinite or has arbitrarily large finite values.
2309.06973
DNNShifter: An Efficient DNN Pruning System for Edge Computing
Deep neural networks (DNNs) underpin many machine learning applications. Production quality DNN models achieve high inference accuracy by training millions of DNN parameters which has a significant resource footprint. This presents a challenge for resources operating at the extreme edge of the network, such as mobile and embedded devices that have limited computational and memory resources. To address this, models are pruned to create lightweight, more suitable variants for these devices. Existing pruning methods are unable to provide similar quality models compared to their unpruned counterparts without significant time costs and overheads or are limited to offline use cases. Our work rapidly derives suitable model variants while maintaining the accuracy of the original model. The model variants can be swapped quickly when system and network conditions change to match workload demand. This paper presents DNNShifter, an end-to-end DNN training, spatial pruning, and model switching system that addresses the challenges mentioned above. At the heart of DNNShifter is a novel methodology that prunes sparse models using structured pruning. The pruned model variants generated by DNNShifter are smaller in size and thus faster than dense and sparse model predecessors, making them suitable for inference at the edge while retaining near similar accuracy as of the original dense model. DNNShifter generates a portfolio of model variants that can be swiftly interchanged depending on operational conditions. DNNShifter produces pruned model variants up to 93x faster than conventional training methods. Compared to sparse models, the pruned model variants are up to 5.14x smaller and have a 1.67x inference latency speedup, with no compromise to sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead for switching models and up to 3.8x lower memory utilisation than existing approaches.
Bailey J. Eccles, Philip Rodgers, Peter Kilpatrick, Ivor Spence, Blesson Varghese
2023-09-13T14:05:50
http://arxiv.org/abs/2309.06973v1
# DNNShifter: An Efficient DNN Pruning System for Edge Computing ###### Abstract Deep neural networks (DNNs) underpin many machine learning applications. Production quality DNN models achieve high inference accuracy by training millions of DNN parameters which has a significant resource footprint. This presents a challenge for resources operating at the extreme edge of the network, such as mobile and embedded devices that have limited computational and memory resources. To address this, models are pruned to create lightweight, more suitable variants for these devices. Existing pruning methods are unable to provide similar quality models compared to their unpruned counterparts without significant time costs and overheads or are limited to offline use cases. Our work rapidly derives suitable model variants while maintaining the accuracy of the original model. The model variants can be swapped quickly when system and network conditions change to match workload demand. This paper presents DNNShifter, an end-to-end DNN training, spatial pruning, and model switching system that addresses the challenges mentioned above. At the heart of DNNShifter is a novel methodology that prunes sparse models using structured pruning - combining the accuracy-preserving benefits of unstructured pruning with runtime performance improvements of structured pruning. The pruned model variants generated by DNNShifter are smaller in size and thus faster than dense and sparse model predecessors, making them suitable for inference at the edge while retaining near similar accuracy as of the original dense model. DNNShifter generates a portfolio of model variants that can be swiftly interchanged depending on operational conditions. DNNShifter produces pruned model variants up to 93x faster than conventional training methods. Compared to sparse models, the pruned model variants are up to 5.14x smaller and have a 1.67x inference latency speedup, with no compromise to sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead for switching models and up to 3.8x lower memory utilisation than existing approaches. DNNShifter is available for public use from [https://github.com/blessonvar/DNNShifter](https://github.com/blessonvar/DNNShifter). Deep neural networks, Machine learning, Internet of things, Edge computing, Model compression, Model pruning ## I Introduction Deep neural networks (DNNs) are machine learning (ML) models comprising a sequence of layers, such as convolution and linear. Such models find application in object detection and image classification due to their high accuracy [1]. Production quality DNN models trained on standard datasets contain a large number of parameters. For example, VGG-16 [2] trained on the ImageNet [3] dataset contains 138M parameters. Such models have significant CPU or memory resource requirements and, consequently, high energy consumption for training and inference. Hence, they are suited for resource-rich environments like cloud or high-performance computing sites. These DNNs cannot be adopted for relatively resource-constrained environments, such as the (extreme) network edge dominated by mobile and embedded devices [4]. Edge environments cannot support production quality DNNs due to compute [4], memory [5] and energy [6] constraints. Therefore, approaches for deriving lightweight DNN model variants from production quality DNNs using (i) neural architecture search (NAS) [7] and (ii) pre-trained model compression [5] have been proposed. 
These approaches have a two-fold _limitation_. Firstly, they are time-consuming and costly [7]. For example, the NasNet [8] search requires four days of computation on 500 data centre-grade GPUs to find optimal model variants. Secondly, the model variants obtained from these approaches are static. The models are optimised against specific objectives, such as accuracy, inference latency, or model size [7]. Therefore, they cannot be used on the edge to meet the requirements of varying operational conditions, such as changing resource utilisation levels [9, 10]. Existing NAS and compression approaches cannot be used for rapidly producing models, and the models produced by these approaches cannot be adapted to suit changing operational conditions on the edge. The research reported in this paper is therefore focused towards addressing the above limitations and surmounts the following challenges: _Challenge 1 - Rapidly generating a range of DNNs suited for different operational conditions and heterogeneous edge resources:_ ML applications that run on the edge will need to execute pre-trained DNNs. Training a DNN tailored to the edge resource using approaches, such as NAS, is not suitable as they are time and energy-consuming [7]. Alternatively, compressing a pre-trained DNN using knowledge distillation [11] is based on trial and error, or quantisation [12] that requires specialised hardware or libraries that may not be available in edge environments. _Challenge 2 - Spatial compression of DNN models while maintaining accuracy:_ DNN compression methods, such as structured pruning [13] or re-parameterisation [14], can significantly reduce the size of a model. However, these methods remove parameters essential to model accuracy. For example, convolutional layers are sensitive to pruning, and even small degrees of pruning can negatively impact accuracy [13]. Consequently, the compressed model is fine-tuned after pruning using computationally expensive methods to regain accuracy, which can take up to 3 times the original training time [5]. _Challenge 3 - On-demand switching of compressed DNNs to adapt to dynamic operational environments:_ DNNs used at the edge will need to seamlessly adapt to changing conditions in real time by switching models on-demand to match model inference performance thresholds. However, existing approaches will incur a downtime in the order of minutes [7] to hours [15] for identifying and deploying a suitable model that meets the desired performance [10]. This paper presents DNNShifter, a framework that utilises production quality sparse models to generate a portfolio of spatially compressed model variants with high accuracy in real time. This is achieved by proposing a novel method that uses structured pruning of highly sparse models. This results in pruned models with a smaller resource footprint and the same model accuracy as the original sparse model. This method fundamentally differs from the commonly reported structured pruning methods that prune pre-trained dense models and negatively impact accuracy. The portfolio of models that are generated by our method can be used to adapt to match a range of operational runtime requirements. This is achieved by low overhead switching from one model to another in the portfolio at runtime. 
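To make the core pruning step concrete, the following is a minimal PyTorch sketch of ours (not the DNNShifter implementation; see the linked repository for that) of spatially removing convolution filters whose weights are entirely zero in a sparse model:

```python
import torch.nn as nn

def prune_zero_filters(conv: nn.Conv2d) -> nn.Conv2d:
    """Rebuild a Conv2d keeping only the output channels that are nonzero.

    In a highly sparse model, all-zero filters are inherently ranked as
    prunable by the sparsity pattern itself, so no ranking heuristic or
    accuracy-recovering fine-tuning is needed for this step.
    """
    w = conv.weight.data                            # (out, in, kH, kW)
    keep = w.abs().sum(dim=(1, 2, 3)) != 0          # nonzero-filter mask
    pruned = nn.Conv2d(conv.in_channels, int(keep.sum().item()),
                       conv.kernel_size, stride=conv.stride,
                       padding=conv.padding, bias=conv.bias is not None)
    pruned.weight.data = w[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(3, 8, kernel_size=3)
conv.weight.data[::2] = 0.0      # suppose unstructured pruning zeroed these
print(prune_zero_filters(conv))  # Conv2d(3, 4, kernel_size=(3, 3), ...)
```

A complete pipeline must also shrink the input channels of downstream layers (and any batch-normalisation parameters) to match; this sketch shows only the single-layer step.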
DNNShifter makes the following three research _contributions_: 1) A time and resource-efficient guided DNN model-training, pruning, and runtime switching pipeline that creates a portfolio of spatially pruned model variants comparable to a typical DNN model training routine using NAS. DNNShifter generates a portfolio of pruned model variants up to 93x faster than state-of-the-art methods. 2) A novel pruning method to compress highly sparse DNN models, resulting in accurate and spatially compact pruned model variants suited for edge resources with low inference latency. DNNShifter pruned model variants are up to 5.14x smaller and have up to 1.67x and 1.45x faster CPU and GPU inference latencies, respectively. In addition, the pruned model variants can be obtained orders of magnitude faster when compared to existing structured pruning methods and have higher accuracy for a given model size. 3) A low-overhead method that switches from one model variant to another on-demand at the edge to match a range of operational runtime requirements. DNNShifter has up to 11.9x lower overhead for switching model variants with up to 3.8x lower memory utilisation than existing approaches. The remainder of this paper is organised as follows. Section II discusses related work. Section III presents the DNNShifter framework. Section IV presents experimental results. Section V concludes the paper by discussing system limitations. ## II Related work Approaches for DNN compression aim to improve the resource efficiency of models by reducing their computational and memory utilisations while preserving the model's accuracy. Techniques such as model pruning, quantisation, and knowledge distillation leverage different properties of a DNN that inherently lend towards compression. As a result, a compressed model that is either smaller, faster or both compared to the original model, is produced. These approaches typically produce a single compressed model. Techniques such as NAS, on the other hand, generate a portfolio of compressed models from a search space that suits the requirements of constrained resources [8, 16, 17, 18]. However, NAS is computationally expensive because it trains and evaluates many candidate models (up to thousands) before identifying the optimal set of compressed models. _Our work is positioned at the intersection of DNN compression and NAS, where a more time and resource-efficient compression pipeline than NAS is developed to generate a portfolio of highly compressed models, which serves a range of resource requirements as seen in edge environments_. This section considers the key contributions of each DNN compression method by providing an overview of their strengths and weaknesses and comparing their features. More recent work is considered to highlight the novelty of the DNNShifter framework we propose in addition to presenting baseline methods to compare DNNShifter in the experiments presented in Section IV. ### _Unstructured pruning and sparse models_ Unstructured pruning masks selected individual parameters of the DNN by setting their weights to zero [19, 20, 21, 22]. Existing methods such as Lottery Ticket Hypothesis (LTH) [23] demonstrate that introducing sparsity via unstructured pruning early into training can lead to final accuracy similar to or higher than that of a dense model. In addition, with suitable hardware and sparse matrix libraries [24], sparse models can accelerate model training, thus reducing time and energy costs [25].
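For reference, the masking step described above can be sketched with PyTorch's built-in pruning utilities (an illustrative sketch of ours, not tied to any particular method from the cited works):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

# Zero the 90% smallest-magnitude weights per conv layer. The layers keep
# their dense shapes: the sparsity lives in the values, not the structure.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")   # bake the mask into the weights

zeros = sum(int((m.weight == 0).sum()) for m in model.modules()
            if isinstance(m, nn.Conv2d))
print("zeroed parameters:", zeros)
```

Note the pruned layers are no smaller in memory, which is exactly the limitation discussed next.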
The concept of LTH has motivated a large collection of unstructured pruning methods [20, 22, 26, 27]. However, resource-constrained environments usually do not support libraries to leverage any performance benefits of sparse models [28]. Furthermore, the zeroed weights within the sparse model do not reduce the memory footprint but create irregular memory accesses that will degrade inference performance on conventional CPUs. Unstructured pruning research typically focuses on improving sparse model accuracy rather than on compute performance [26]. _Sparse models are the starting point for our work._ The DNNShifter framework removes sparse data structures within these models in a disciplined manner. In other words, we spatially remove the zeroed weights, thereby reducing the model size and inference latency while maintaining the same accuracy as the original sparse model. ### _Structured pruning and re-parameterisation_ As shown in Figure 1, structured pruning spatially removes groups of parameters, such as convolutional filters [13, 15, 29, 30, 31]. A ranking algorithm is employed to identify filters that contribute the least to accuracy loss if removed. Structured pruning removes these filters, resulting in models with lower memory, energy, and inference footprint. However, structured pruning is time-consuming because: (i) the filters that can be removed need to be identified given the thousands of pruning combinations [15, 29], and (ii) the parameters that remain after pruning are fine-tuned to recover the accuracy lost while pruning [5, 20]. Thus, on-demand compression cannot be achieved using structured pruning. In contrast, DNNShifter achieves structured pruning in (near) real-time by leveraging the following observations of sparse models: (i) zeroed weights are inherently ranked as prunable, and (ii) pruning of zeroed weights does not reduce model accuracy. Furthermore, structured re-parameterisation can be combined with structured pruning to further optimise a model by modifying the underlying architecture for a target device. For example, RepVGG [14] restructures ResNets into a VGG-like architecture and improves GPU utilisation. ### _Dynamic DNNs_ Dynamic DNNs improve inference efficiency by adapting a DNN for different operational conditions [32]. Methods such as skipping layers [33] and connections [34] or early-exiting [32] decrease inference latency at the cost of inference accuracy. Although dynamic DNNs offer the advantage of using any sub-model from within a single model, there are no spatial benefits since the entire model runs in memory, even for a small sub-model [32]. Alternatively, DNNShifter provides both inference and spatial benefits and leverages in-memory compression of multiple sparse models to facilitate on-demand switching of models to suit runtime requirements. ### _Other compression methods_ Other compression methods, namely quantisation and knowledge distillation, are presented in the literature for DNN compression. Quantisation reduces the bit precision of parameters in DNNs to reduce the model size and to accelerate inference [12]. However, quantisation is usually applied to all parameters of the DNN, which incurs an accuracy loss. Furthermore, quantised models may require dedicated hardware to carry out inference at a lower precision. Knowledge distillation transfers training knowledge from a more extensive teacher to a smaller student model [11]. The student model achieves similar accuracy to the teacher model and is spatially smaller. 
However, knowledge distillation is not easily automated to serve various model architectures and produces only a single student model rather than a portfolio of models suited for different operational conditions, such as specific memory budgets. Therefore, knowledge distillation does not scale for the varying resource requirements of deployments seen in heterogeneous edge environments. ### _Addressing open gaps with our contribution_ Although existing compression methods have a range of benefits, they present one or more significant limitations that prohibit their use for on-demand deployment of production quality DNNs to edge devices. _DNNShifter leverages the accuracy-preserving benefits of unstructured pruning with the runtime performance improvements of structured pruning across various model sizes to suit different operational conditions seen in edge environments. This combination has not been previously explored in the literature_. DNNShifter creates an efficient training, pruning, and inference pipeline, which is highlighted in comparison to other DNN compression methods in Table I. The DNNShifter framework and the models generated by the framework meet the requirements for deploying DNNs in edge environments. However, existing methods have one or more limitations making them less suited for edge systems. The next section explores the underlying methodology and implementation of DNNShifter. ## III DNNShifter DNNShifter is a framework that can be employed for resource-constrained environments, such as at the network edge or the extreme edge that has relatively limited computational capabilities. The framework prunes production quality DNN models on-demand and provides automated runtime model switching for inference on constrained resources. DNNShifter can be employed by system administrators for managing the life cycle of ML application development, deployment, and simulation environments to address the following challenges: **Rapidly obtaining production quality models:** In real-time, DNNShifter offers structured pruning of large sparse DNN models that cannot be run on hardware-limited resources. The framework derives pruned model variants for the target resource without a significant accuracy loss while achieving this on a small monetary, computation, and energy budget. This contrasts existing approaches that employ NAS [7] or parameter fine-tuning [29]. **Hardware agnostic automated model pruning:** DNNShifter creates a portfolio of hardware-independent pruned model variants with different performance characteristics (e.g. model size, speed, and accuracy) to suit all deployment conditions. The model variants can be deployed across different resource tiers based on operational conditions, such as resource availability and performance targets. The approach adopted by DNNShifter is hardware agnostic and is not specific to specialised hardware such as a GPU [25].
Fig. 1: Obtaining sparse and pruned models from pruning a dense model.
**Real-time model switching at runtime:** Once a model portfolio has been deployed on the target hardware, DNNShifter utilises the portfolio of pruned model variants to select a model variant suited for a given operational condition. The framework facilitates the adaptation of the model to suit variations in the operational conditions with low overheads. 
The underlying method in DNNShifter switches the active model for inference from the portfolio via inflation (which activates the pruned model) and deflation (which further compresses and deactivates the pruned model) to match operational demand. DNNShifter is envisioned to be a holistic end-to-end solution for ML edge applications that reduces human administrator interventions or domain-specific knowledge for creating pruned models. DNNShifter can also benchmark different pruning algorithms on heterogeneous hardware and make informed decisions in the life cycle of edge-based ML applications. This section will further present the observations that motivated the design of DNNShifter and provides an overview of the framework. ### _Motivation_ A variety of model pruning methods have been presented in the literature for reducing the complexity of production models to suit resource-constrained environments while maintaining accuracy [20]. Traditional pruning methods are limited in multiple ways: (a) many require further time-consuming fine-tuning after the initial pruned model is obtained [29], (b) many rely on hardware accelerators [25], and (c) pruning often requires a costly trial and error process to determine the optimal pruned model for given target resources [32]. Current pruning methods are unsuitable for real-time execution in critical scenarios, such as on-device video analytics, that require sub-second latency to preserve optimal service quality [35]. DNNShifter was developed to address the above limitations by leveraging the following two observations: #### Iii-A1 Aggregating and pruning unstructured sparsity During unstructured pruning, aggregating parameters from sparse data structures will result in fully prunable data structures that directly impact the model size and inference latency. Figure 2 highlights this observation. During unstructured pruning, the parameters of a convolutional kernel are set to zero values. The data structures (matrices) representing the kernels may be sparse (not all values are zero) and, therefore, cannot be pruned without compromising accuracy (shown as unprunable data structure). Zero matrices are obtained as pruning progresses by using the parameter ranking algorithm used in unstructured pruning. A data structure that has all zero values is prunable and, by employing structured pruning, can be removed from the model. This results in reducing the model size and, thereby, inference latency. DNNShifter leverages this observation to prune sparse models using structured pruning without degrading model accuracy. Since model accuracy is preserved, DNNShifter does not require fine-tuning after pruning.
Fig. 2: Structured pruning zero-valued data structures obtained from unstructured pruning.
#### Iii-A2 Further compression of remaining model sparsity During runtime, inactive models from the portfolio can be further compressed to reduce overheads. After structured pruning, the remaining unprunable data structures contain sparse matrices (Figure 2). Sparse matrices have repeating and compressible data patterns (of zeroed weights). Therefore, the model can be encoded (deflated) into smaller representations while the model is inactive. For example, such deflation may be applied when downloading the model portfolio from a cloud/edge server to target device hardware or, during runtime, to models in a portfolio that are not actively inferring. 
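To illustrate the second observation, the sketch below deflates and inflates a sparse model using Python's zlib module, which implements the DEFLATE algorithm the framework relies on; the function names, the toy model, and the sparsity level are our own illustration rather than DNNShifter's actual API.

```
import io
import zlib
import torch
import torch.nn as nn

def deflate(model: nn.Module) -> bytes:
    # Serialise the model and DEFLATE-compress it for inactive in-memory storage.
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return zlib.compress(buf.getvalue(), 9)

def inflate(blob: bytes, model: nn.Module) -> nn.Module:
    # Decompress an inactive model and load it, ready for inference.
    model.load_state_dict(torch.load(io.BytesIO(zlib.decompress(blob))))
    return model.eval()

sparse = nn.Linear(512, 512)
with torch.no_grad():
    # Mimic a highly sparse model: roughly 90% of the weights are zeroed.
    sparse.weight[torch.rand_like(sparse.weight) < 0.9] = 0.0

blob = deflate(sparse)
raw_bytes = sum(p.numel() * p.element_size() for p in sparse.parameters())
print(f"raw: {raw_bytes} B, deflated: {len(blob)} B")

active = inflate(blob, nn.Linear(512, 512))  # near-instant activation
```

The runs of zeroed weights are what make the DEFLATE encoding effective here; the same call applied to a dense model would yield little compression, as noted later in Phase 3.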
DNNShifter uses this observation to load the entire portfolio of deflated models into memory during runtime. When a specific model is required for inference, it is decoded (inflated) in (near) real-time. This allows model switching in response to varying operational conditions on the edge and is significantly faster than existing methods. ### _Framework overview_ This section presents an overview of the DNNShifter framework. It operates in three phases, as shown in Figure 3: _Phase 1: Offline training of production quality DNNs with unstructured ranking -_ In this phase, model training and parameter ranking are combined into a single iterative training process. A production-quality DNN model is taken as input by an unstructured pruning method. Then, the insignificant parameters of the model are masked between each training iteration by an unstructured ranking method to produce a portfolio of model variants (one per iteration) with sparse data structures (referred to as sparse models). _Phase 2: On-demand conversion from sparse models to pruned models -_ The portfolio of sparse models is pruned via structured pruning to obtain pruned model variants that can be deployed on a target hardware resource. _Phase 3: Runtime model switching -_ The portfolio is deployed, monitored, and adapts to varying operational conditions by switching the active pruned model variant at runtime. Phase 1 builds on an existing technique and Phase 2 and Phase 3 comprise nine modules. The next sections discuss these. ### _Phase 1: Model training and parameter ranking_ This phase uses an unstructured ranking algorithm to produce highly sparse models from production-quality DNNs (dense models) while maintaining viable model accuracy. It is to be noted that the sparse models obtained from this phase will include zero values in the model parameters. However, since they are not removed from the data structures until the next phase, sparse models are not smaller in size than the dense model. Choosing an unstructured ranking algorithm over structured pruning eliminates the need for fine-tuning after training to recover model accuracy [13, 29]. In addition, such a ranking approach between training iterations has two advantages. Firstly, DNNShifter simplifies model parameter ranking so that a user does not require expert knowledge of ranking algorithms and no additional hyperparameters need be configured. Secondly, DNNShifter improves the model pruning pipeline efficiency. A conventional pruning pipeline consists of training the model, compressing using structured pruning methods, profiling, and iteratively fine-tuning the pruned model for the target hardware. While training and compression can occur offline on large-scale computational resources, fine-tuning will need to be carried out on the target hardware that may be resource-limited. The final accuracy of the model cannot be determined until this time-consuming pipeline is completed. If the desired accuracy is not obtained, then the entire sequence of the pruning pipeline must be repeated with a different set of pruning hyperparameters. In addition, only a single pruned model will be obtained at the end of the pipeline that meets specific operational conditions. If the operational condition changes, the entire pruning pipeline must be repeated. DNNShifter improves the efficiency of the pruning pipeline in three ways by integrating ranking within the training iterations: (i) The final model accuracy that can be achieved is known before the pipeline completes. 
Therefore, the pipeline can be reinitialised in advance with new hyperparameters if the target accuracy cannot be achieved. (ii) Fine-tuning, a computationally intensive task, is eliminated on the target hardware resource that may be relatively resource-constrained. Therefore, rapid and on-demand deployments of DNN models are feasible since fine-tuning does not need to be carried out. (iii) A portfolio of pruned models can be generated that will suit a range of operational conditions on the target hardware resource by running the pruning pipeline once. This allows for adapting a model that is deployed at runtime. DNNShifter implements a modified version of the Open Lottery Ticket Hypothesis (OpenLTH) framework. No modifications were made to the training process. Instead, DNNShifter adds the structured pruning and model switching phases (Phase 2 and Phase 3) which will be discussed later. The Lottery Ticket Hypothesis (LTH) articulates that highly accurate sparse models can be obtained from dense models [23] (shown in Figure 4) and is underpinned by the Iterative Magnitude Pruning (IMP) with weight rewinding method [36] that DNNShifter also employs. IMP with rewinding is chosen since it performs well across all available models and datasets. Alternatives, such as SynFlow [27], only perform well for specific models or datasets [26]. Footnote 1: [https://github.com/facebookresearch/open_lth](https://github.com/facebookresearch/open_lth) Figure 4 illustrates Phase 1 of DNNShifter to generate sparse models using LTH. The model is trained and then ranked by the IMP algorithm in each iteration. The resulting sparse model from each iteration is saved into an intermediate portfolio of sparse models that will be pruned in the next phase. The sparse model from each iteration is provided as input for the next iteration. Model sparsity increases with the number of iterations up to a user-defined limit. ### _Phase 2: Converting sparse models to pruned models_ In this phase, the \(n\) sparse models from the intermediate portfolio are converted into \(m\) pruned models using structured pruning. This phase consists of six processing modules that pre-process sparse models, identify prunable data structures within each sparse model, generate plans for pruning, and then use structured pruning to generate pruned models. Each sparse model is processed from the intermediate portfolio to produce a final portfolio. Each module of this phase is detailed below: _Module 1 - Model Pre-Processor:_ This pre-processing module simplifies the DNN model architecture by fusing the convolution and batch normalisation layers [37]. This fusion removes batch normalisation parameters, thereby reducing the complexity of generating a pruning plan for the model, as only convolutional layers and their dependants must be considered (further discussed in Module 3). _Module 2 - Sparsity Analyser:_ This module builds on the method illustrated in Figure 2 that identifies convolutional kernels with entirely zero values. However, these kernels cannot be removed without further planning since a DNN's architecture does not naturally lend itself to the removal of kernels alone. Instead, a kernel can be removed if all kernels in a channel can be removed. To this end, channels that have all kernels with zero values are further indexed to create prunable convolutional channels. _Module 3 - Prune Planner:_ In the existing literature, convolution channels are removed from a model iteratively to minimise accuracy loss. 
However, this is inefficient for two reasons. Firstly, pruning channels is computationally intensive since entire convolutional layers comprising large multi-dimensional parameter arrays will be rebuilt when prunable channels are removed. Secondly, each prunable channel depends on the channel of the next convolutional layer, which will also be rebuilt. Therefore, pruning sequentially incurs overheads. DNNShifter breaks this dependency and removes all prunable channels at the same time. This is achieved by the prune planner module, which creates a concurrent data structure of prunable channel indices. This module uses Algorithm 1, where each zero channel \(c_{zero}\) (a channel with all weights set to zero) in the set of all zero channels \(C_{Zero}\) (indexed in Module 2) is mapped to a convolutional layer \(L_{n}\), where \(0\leq n<D_{conv}\) (the model's convolutional layer depth). Each convolutional layer receives two sets of zero channels. The first set, \(C_{in}\), is the set of prunable _out_ channels from the previous convolutional layer \(L_{n-1}\): these indices correspond to the prunable _in_ channels of \(L_{n}\). The second set, \(C_{out}\), is the prunable _out_ channels of \(L_{n}\). When \(n=0\) (first convolutional layer), there is no \(C_{in}\). Therefore this layer receives an empty set. The returned prune plan \((C_{in},C_{out})\) contains all zero channels that are to be pruned in Module 4 for a given convolutional layer.
Fig. 3: Overview of the DNNShifter framework.
Fig. 4: The unstructured pruning method incorporated in DNNShifter uses the combined approach of repetitive training and model ranking between training iterations.
```
Data: Prunable channel indices \((C_{in},C_{out})\) in \(L_{n}\)
Result: Pruned convolutional layer \(L^{\prime}_{n}\)
1  \(|C^{\prime}_{in}|\leftarrow|L_{n}(C_{in})|-|C_{in}|\)
2  \(|C^{\prime}_{out}|\leftarrow|L_{n}(C_{out})|-|C_{out}|\)
3  \(L^{\prime}_{n}\leftarrow\) create new layer(\(|C^{\prime}_{in}|\), \(|C^{\prime}_{out}|\))
4  \(L^{\prime}_{n}(C_{out})\gets L_{n}(C_{out})\setminus C_{out}\)
5  \(L^{\prime}_{n}(bias)\gets L_{n}(bias)\setminus C_{out}\)
6  if \(n>0\) then
7    \(L^{\prime}_{n}(C_{in})\gets L_{n}(C_{in})\setminus C_{in}\)
   return \(L^{\prime}_{n}\)
```
**Algorithm 2** DNNShifter Model Pruning
**Module 4** - Model Pruner: The pruning plan from Module 3 is used to prune a sparse model from the intermediate portfolio in real time. This module executes the pruning plan by rebuilding each convolutional layer without the prunable channels and the biases of the channels. As all prunable channels are made available from Module 3, this module prunes all in/out channels in a single batch operation, significantly reducing computational overhead and enabling real-time pruning. After prune planning, this module is executed in parallel to concurrently prune each convolutional layer, forming a series of pruned layers \(L^{\prime}\) that replaces the original unpruned layers \(L\). This module is detailed in Algorithm 2, where a pruned layer \(L^{\prime}_{n}\) is created with the smaller channel sizes \(|C^{\prime}_{in}|\) and \(|C^{\prime}_{out}|\) (Lines 1-3). Afterwards, the pruned set of remaining channels and biases are transferred from \(L_{n}\) to \(L^{\prime}_{n}\) (Lines 4-7). **Module 5** - Model Profiler: This module benchmarks the pruned model to obtain metrics: accuracy, inference latency, model size, and the maximum memory required to run the model. This is achieved using a test dataset. 
The metrics relevant to each model are stored as metadata for the next module to select a suitable pruned model from a portfolio. **Module 6** - Portfolio Post-Processor: A portfolio of \(n\) pruned models is generated. This module refines the portfolio to eventually include only \(m\) pruned models (\(m\leq n\)) with distinct performance characteristics (pruned models with similar characteristics are removed). ### _Phase 3: Further compression and model switching_ A portfolio of production-quality DNN models is trained in the first phase and then compressed via structured pruning in the second phase. In this third phase, models are further compressed while not being used (inactive). DNNShifter encodes the portfolio of pruned models into a significantly smaller package before deploying it to the storage of the target device using the lossless DEFLATE [38] compression algorithm. On application initialisation, DNNShifter loads the entire portfolio into memory, and then one model is activated (inflated) to enable inference. Encoding models in this manner (deflating) is possible since zero weights repeat in highly sparse DNN models obtained from training. However, out-of-the-box production quality models are dense (most of their weights are not set to zero). Therefore, applying this compression algorithm to dense models will not provide any benefit. Each module of this phase is detailed below: **Module 7** - Model Deflater: This module sorts the model portfolio by model size (a proxy for performance characteristics), then shrinks the entire model portfolio into a smaller, sorted, and easily deployable package using DEFLATE before it is transferred to the target devices. **Module 8** - Application Initialiser: This module loads the entire portfolio of deflated models into device memory. First, it selects a model with the median model size. Then, this model is decompressed in memory to enable application inference (we refer to this as inflation). Note that the inflated model is a pruned model variant from Phase 2. It is smaller and faster for inference than an equivalent dense model (Figure 6). **Module 9** - Model Switcher: The available memory and CPU load may vary due to the number of running applications and the workload of each application on the device. For example, inference performance metrics, such as queries per second (QPS), may vary over time for an application [39]. During a low load on the edge device, a higher QPS can be achieved, during which time a larger model from the portfolio can be decompressed in the device memory (inflation); the larger model will improve inference accuracy. Conversely, a decreasing QPS suggests a high load, and a smaller model from the portfolio is inflated to improve inference latency. DNNShifter does not require searching the entire portfolio to switch between models. Instead, this module selects the next or previous model from the portfolio depending on the QPS trend. Therefore, model switching can be rapidly obtained with minimum overheads. ## IV Experiments This section first presents the experimental testbed and baseline models in Section IV-A and then considers three key aspects of DNNShifter: (1) The time to generate a portfolio of models for addressing Challenge 1 (Phase 1 of DNNShifter). We will compare against state-of-the-art NAS methods, namely DARTS [16], RepVGG [14], and PreNAS [17], for evaluating this. We will demonstrate in Section IV-B that DNNShifter will generate a portfolio faster and more efficiently. 
(2) The accuracy achieved and the time taken for inference by the models for addressing Challenge 2 (Phase 2 of DNNShifter). Two categories of pruning algorithms, unstructured and structured, are considered here. The unstructured pruning algorithms considered are random pruning, magnitude pruning [19], SynFlow [27], and NTK-SAP [22]. The structured pruning algorithms considered include similarities-aware [40], \(l^{1}\) norm [40], EasiEdge [15], and ProsPr [30]. We will demonstrate in Section IV-C1 and Section IV-C2 that DNNShifter obtains better accuracy and an improved inference speedup for the pruned models compared to unstructured pruning methods. It will also be demonstrated in Section IV-C3 that when compared to structured pruning methods, the pruned models obtained from DNNShifter have better accuracy for a desired model size. We will show in Section IV-C4 that DNNShifter has overheads that are multiple magnitudes lower than structured pruning methods. (3) The overheads for dynamically switching a model in memory for addressing Challenge 3 (Phase 3 of DNNShifter). We will demonstrate in Section IV-D that compared to model switching approaches, such as Model Ensemble [41] and Dynamic once-for-all (Dynamic-OFA) [10], DNNShifter has lower model switching overheads and memory utilisation. ### _Experimental setup_ Two production DNN models trained on the CIFAR-10 [42] and Tiny ImageNet [43] datasets are considered. First is VGG-16 [2] trained on CIFAR-10, which represents a feedforward DNN model. Second is ResNet-50 [44] trained on Tiny ImageNet, which is a more complex branching DNN model. Table II presents the baseline results and hyperparameters: **Models, Datasets, and Hyperparameters -** VGG-16 is the OpenLTH configuration that has one linear layer [23], and ResNet-50 is the default ImageNet configuration [44]. CIFAR-10 consists of 50,000 training images and 10,000 test images divided equally across 10 classes. Tiny ImageNet is a subset of ImageNet consisting of 100,000 training images and 10,000 test images divided equally across 200 classes. Tiny ImageNet results are reported for both Top-1 and Top-5 as recommended by the model pruning literature [20]. The baseline results were obtained using the training routine from OpenLTH as defined in Section III-B. Footnote 3: Using Python 3.8.10, torch 1.13.0+cu116, and torchvision 0.14.0+cu116. **Testbed -** We use an AMD EPYC 7713P 64-core CPU and an Nvidia RTX A6000 GPU to train the models, as such resources are representative of those in a cloud data centre. Model inference and runtime switching are carried out with an Intel i7-9750H 6-core CPU and an Nvidia RTX 2080 (Max-Q) GPU comparable to an edge server that may be used in a production setting. **Trial Counts and Reporting Methods -** All DNN training methodologies and experiments were conducted a minimum of three times, except for those in Section IV-B. In Section IV-B, each NAS approach was executed only once due to computational and time constraints. Unless otherwise specified, model performance indicators like accuracy, memory usage, and latency are presented in tables and figures as the mean from all trials accompanied by confidence intervals spanning one standard deviation. In addition, where possible, experiments are carried out across 8 different compression ratios (2, 4, 8, 16, 32, 64, 128, 256). 
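As an aside, the per-variant metrics reported in the following studies (serialised model size and mean inference latency) can be collected with a simple harness of the kind below; this is an illustrative sketch rather than the paper's actual benchmarking code, and the toy model and iteration counts are our own choices.

```
import io
import time
import torch
import torch.nn as nn

@torch.no_grad()
def profile(model: nn.Module, input_shape=(1, 3, 32, 32), runs: int = 100):
    # Measure mean CPU inference latency and serialised model size.
    model.eval()
    x = torch.randn(*input_shape)
    for _ in range(10):  # warm-up iterations
        model(x)
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    latency_ms = (time.perf_counter() - start) / runs * 1e3
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)  # serialised size as a spatial proxy
    return {"latency_ms": latency_ms, "size_mb": buf.getbuffer().nbytes / 2**20}

toy = nn.Sequential(nn.Conv2d(3, 16, 3), nn.Flatten(), nn.Linear(16 * 30 * 30, 10))
print(profile(toy))
```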
### _Model training and portfolio generation (Phase 1)_ This study will demonstrate that DNNShifter generates a portfolio of models from a base architecture with a higher search efficiency than comparable NAS methods. Search efficiency is the percentage of optimal model variants in the portfolio over the total number of searched variants. An optimal model variant is one that is not outperformed on all performance metrics by another variant and is obtained using Pareto optimality. The performance metrics considered in this article are model size, inference latency, and model accuracy. For example, training a single model that reaches an adequate accuracy has a search efficiency of 100%. However, if training occurs \(N\) times, then the model with the highest accuracy from the \(N\) training rounds is optimal, but search efficiency drops to \(100/N\)%. DNNShifter creates a model portfolio by iteratively pruning the largest model variant into progressively smaller variants (discussed in Section III-C). Each pruning iteration is equivalent to searching the model architecture once for a variant. We compare DNNShifter against three different NAS-based methods, which search a model architecture for optimal model variants. The first is DARTS, an accelerated NAS method that generates model variants from a continuous search space via gradient descent. Compared to older NAS methods such as NASNet, DARTS is 500x faster. In addition, DARTS is a NAS approach that automatically generates a portfolio of models. The second is RepVGG, which employs a family of VGG-like model variants from a set of discrete hyperparameters that scale various model architecture properties. In total, 228 model variants are individually trained to identify the optimal set of model variants based on hyperparameters presented in the RepVGG literature [14]. The third is PreNAS, a modern NAS that generates models using the emerging vision transformer model architecture [45]. PreNAS is a one-shot NAS that decides on a set of optimal model variants and only trains those candidates, significantly reducing computational requirements. DNNShifter has one hyperparameter, \(n\), which specifies how many model variants should be generated, where each variant is twice as compressed as the previous. For example, \(n=8\) generates a portfolio of model variants up to the compression ratio 256 (\(2^{n}\)). The first variant is the original dense model with no sparsity. Table III contrasts DNNShifter against DARTS, RepVGG, and PreNAS. DNNShifter generates 4 optimal model variants out of a portfolio of 9, resulting in a search efficiency of 44.44%. This is more efficient than the NAS-based methods. The number of parameters trained using DARTS is divided across a more extensive portfolio of models. These models are not sufficiently diverse, resulting in a low search efficiency for DARTS. The DARTS search method requires over 6 hours to create the portfolio. Then, each variant is individually trained and evaluated, totalling a training time that is 6x longer than that of DNNShifter. RepVGG and PreNAS achieve a higher model accuracy than DNNShifter, but each model variant is up to one order of magnitude larger in parameter count. As this study evaluates training time as a function of parameter count, the trend that is seen in Table III generalises for all model architectures of different sizes. **Observation 1:** The DNNShifter method for generating a portfolio of models via iterative pruning is more resource and time-efficient than NAS-based methods. 
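Since search efficiency rests on Pareto optimality over model size, inference latency, and accuracy, the following sketch shows how the optimal subset of a portfolio can be identified; the variants and metric values are made up for illustration.

```
# Identify Pareto-optimal model variants over (size, latency, accuracy).
# The variants and metric values below are illustrative only.
variants = [
    {"name": "v1", "size_mb": 60.0, "latency_ms": 9.0, "acc": 0.937},
    {"name": "v2", "size_mb": 30.0, "latency_ms": 6.5, "acc": 0.931},
    {"name": "v3", "size_mb": 35.0, "latency_ms": 7.0, "acc": 0.930},  # dominated by v2
    {"name": "v4", "size_mb": 12.0, "latency_ms": 5.0, "acc": 0.916},
]

def dominates(a, b):
    # a dominates b if it is no worse on every metric and strictly better on one.
    no_worse = (a["size_mb"] <= b["size_mb"] and a["latency_ms"] <= b["latency_ms"]
                and a["acc"] >= b["acc"])
    better = (a["size_mb"] < b["size_mb"] or a["latency_ms"] < b["latency_ms"]
              or a["acc"] > b["acc"])
    return no_worse and better

optimal = [v for v in variants if not any(dominates(o, v) for o in variants)]
print([v["name"] for v in optimal],
      f"search efficiency: {len(optimal) / len(variants):.0%}")
```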
### _Performance of sparse and pruned models (Phase 2)_ This study will demonstrate that DNNShifter produces pruned models of the same or better accuracy than other unstructured (Section IV-C1) and structured (Section IV-C3) pruning methods. In addition, it is demonstrated that the pruned models generated by DNNShifter are smaller and faster than sparse models, which is quantified for various compression ratios (Section IV-C2). #### Iv-C1 Comparing accuracy against sparse models We will first contrast the choice of the unstructured pruning method in DNNShifter against other unstructured pruning methods. Unstructured pruning methods produce a sparse model variant where parameters are set to zero. In this paper, a sparse model with a compression ratio of \(C\) has, for every \(C\) parameters, \(C-1\) parameters set to zero; this convention is also used in the literature [27, 20]. As seen in Section III-C, DNNShifter utilises IMP with rewinding as its unstructured pruning method. This study evaluates DNNShifter against random pruning, which is a naive method, magnitude pruning, SynFlow, and NTK-SAP. For each method, the baseline models in Table II are iteratively pruned up to eight times, achieving a compression ratio of \(2^{n}\) at iteration \(n\), as described in Section IV-B. Figure 5 shows the change in test accuracy as the compression ratio increases for VGG-16 and ResNet-50. For all methods, accuracy decreases as the compression ratio increases. However, the rate of decline varies per method, whereas DNNShifter maintains the highest accuracy in all cases. For example, for VGG-16 on CIFAR-10 at a compression ratio of 256, DNNShifter accuracy dropped by 1.57%, whereas SynFlow, NTK-SAP, and random pruning dropped by 6.21%, 5.15%, and 31.09%, respectively. Magnitude pruning compromises almost all of its accuracy by a compression ratio of 256. Magnitude pruning causes the model to become unusable at high compression ratios due to layer collapse (when an entire layer is pruned) [27]. However, SynFlow was designed to avoid layer collapse and maintain usable accuracy at high compression ratios. DNNShifter also does not encounter layer collapse as it uses the rewinding approach from IMP that stabilises the sparse model [36]. For ResNet-50 on Tiny ImageNet, for top-1 and top-5 test accuracy, the DNNShifter accuracy reduction is less than half that of SynFlow at higher compression ratios. In contrast, magnitude pruning and NTK-SAP undergo layer collapse after compression ratios of 32 and 64, respectively. **Observation 2:** DNNShifter preserves the highest accuracy for sparse models compared to existing methods. Furthermore, this enables DNNShifter to generate sparse models at extreme compression ratios, providing the opportunity for structured pruning. The benefits of using structured pruning at extreme model sparsities are explored in the next subsection. #### Iv-C2 Comparing runtime performance against sparse models Unstructured pruning methods do not provide inference acceleration or spatially reduce the model size at runtime. This is because the parameters are not spatially pruned but rather are merely made zero values in the model. DNNShifter removes pruned parameters via structured pruning. This study highlights the performance benefits of using pruned models from DNNShifter compared to the sparse models generated by unstructured pruning methods. Figure 6 shows the CPU and GPU inference speed-up and spatial compression achieved for increasing compression ratios. 
For DNNShifter, inference speed-up is defined as the inference latency of the baseline model over the pruned model for a given compression ratio. For the other unstructured pruning methods, we consider inference speed-up as the inference latency of the baseline model over the sparse model for a given compression ratio. Similarly, spatial compression is the in-memory size of the baseline model over the in-memory size of the pruned model (for DNNShifter) or of the sparse model (for unstructured pruning methods) for a given compression ratio. For both metrics, DNNShifter provides improvements for all compression ratios, whereas unstructured pruning methods alone provide little or, in some cases, worse performance than the baseline model. DNNShifter achieves speed-ups of up to 1.67x and 1.32x for VGG-16 and ResNet-50 on CPU and GPU, respectively, at no cost to the accuracy of the sparse model. The other unstructured pruning methods achieve a small speedup. However, the speedup varies due to irregular memory access in sparse models [46]. DNNShifter spatially prunes the sparse parameters and thus is not affected by structural irregularity. As the compression ratio increases, more sparse parameters are removed, resulting in a smaller model with lower CPU and GPU inference times. DNNShifter at a compression ratio of 256 achieves a spatial compression on the sparse model of 5.14x and 1.87x for VGG-16 and ResNet-50, respectively. ResNet-50 has a lower spatial compression ratio as DNNShifter only removes linear connections using structured pruning. As such, any skip connections or downsampling layers in ResNet-50 are not pruned, as doing so would impact model accuracy [47]. **Observation 3:** DNNShifter reduces inference latency and sparse model sizes in memory without losing accuracy. This is in contrast to sparse models obtained from unstructured pruning that are unsuitable for edge environments since they have poor inference performance and no spatial compression. #### Iv-C3 Comparing accuracy against pruned models This study contrasts DNNShifter against other structured pruning methods. Specifically, DNNShifter is demonstrated to have comparable accuracy to the original model while producing similarly sized or smaller pruned models than other structured pruning methods. DNNShifter is compared against RepVGG, EasiEdge, ProsPr, and two classic structured pruning methods: similarities-aware and \(l^{1}\) norm. RepVGG, as described in Section IV-B, creates VGG-like architectures by re-parameterising ResNet. In this study, RepVGG-A0 is the pruned VGG-16 model obtained from the baseline RepVGG-B0 [14]. EasiEdge is a recent structured pruning method that creates pruned models for edge deployment. ProsPr is another modern pruning method that learns which weights to prune within the first few steps of optimisation [30]. In this study, we use the structured pruning variation that prunes channels. Pruning using \(l^{1}\) norm is a classic structured pruning method that ranks the importance of each channel using the \(l^{1}\) norm and then prunes the lowest-value channels [40]. Similarities-aware is another classic structured pruning method that removes channels with similar outputs [40]. Table IV shows the accuracy change of the pruned model and the total parameter count after pruning the baseline VGG-16 on CIFAR-10 using different structured pruning methods. The table is organised in descending order of parameter count, where the baseline VGG-16 models are considered first and increasingly pruned models towards the bottom. 
EasiEdge and ProsPr models are denoted using a prune degree as a percentage. Prune degree is the percentage of pruned parameters from the baseline. For example, EasiEdge-25% prunes VGG-16 by 25%. DNNShifter models are denoted using the compression ratio. For example, DNNShifter-2x has a compression ratio of 2, equivalent to a prune degree of 50%. Both classic structured pruning methods showed more than 5% accuracy reduction with a pruning degree of 35% or less. Combining the two methods allows for a similar accuracy loss but up to a pruning degree of 45%. RepVGG-A0 achieves the same pruning degree as the combined classic methods while only dropping 0.4% model accuracy. However, RepVGG does not have a smaller model variant than RepVGG-A0. DNNShifter and EasiEdge produce smaller models with better accuracy than the baseline model. DNNShifter-16x has the best accuracy improvement with a 0.4% gain, where a similarly sized EasiEdge-80% lost 0.22% accuracy. The smallest EasiEdge variant, namely EasiEdge-85%, has 0.46M parameters and a 0.51% loss in accuracy, whereas DNNShifter-64x is over twice as small, with 0.21M parameters, and gains 0.33% accuracy. ProsPr maintains a positive accuracy change up until models of size 2M parameters; however, its accuracy remains lower than DNNShifter's at all model sizes.
Fig. 5: Accuracy of unstructured pruning in DNNShifter against other methods as compression ratio increases; dashed line is baseline model accuracy.
**Observation 4:** DNNShifter produces smaller and more accurate pruned models than other structured pruning methods. #### Iv-C4 Pruning time against structured pruning methods Figure 7 shows the pruning time in seconds of various structured pruning methods to prune a ResNet model. \(l^{1}\) norm requires 3,923 seconds to prune and fine-tune the model. EasiEdge does not require fine-tuning, but the ranking process it employs using Taylor approximations is exhaustive, thus requiring 4,740 seconds. RepVGG does not require ranking. Instead, it re-parameterises the model, which only requires 8 seconds. Although this is a relatively small cost, an 8-second overhead per training round may equate to a substantial overhead for certain use cases. For example, consider pruning during the rounds of federated learning [48]. NTK-SAP requires 20 epochs of pre-training to generate an unstructured pruning mask, resulting in 544 seconds of overhead. DNNShifter prunes a model in a sub-second time frame. For ResNet, it is on average 120 ms, or less than 3 frames for a 30 frames/second real-time edge video analytics application [35], as opposed to tens of seconds to minutes of downtime with existing approaches. **Observation 5:** DNNShifter enables (near) real-time structured pruning of DNN models and is at least one order of magnitude faster than other structured pruning methods. ### _Performance of model switching (Phase 3)_ Model switching enables an application to respond to changing runtime conditions by selecting a suitable model for inference from a pruned model portfolio. This study compares the in-memory compression and model switching method of DNNShifter against the classic methods, namely model ensemble and Dynamic-OFA.
Fig. 6: Performance of DNNShifter against other unstructured pruning methods as compression ratio increases; dashed line is baseline model performance. Each plot is the mean of five runs with confidence intervals of one standard deviation.
The model ensemble method hosts simultaneous models in memory, and Dynamic-OFA uses a smaller sub-network within a single DNN to match operational demands. Table V compares runtime switching of DNNShifter against model ensemble and Dynamic-OFA. DNNShifter has a portfolio of four pruned VGG-16 models obtained in Section IV-B with an accuracy range of 91.64-93.71%, a model portfolio size of 30.4-66.1MB, and a CPU inference speedup of 1.20-1.67x. A model ensemble of the same four VGG-16 models and a Dynamic-OFA model are also noted. Memory utilisation is the size of the model portfolio in memory. DNNShifter's memory utilisation is variable since inactive models are further compressed in memory (Section III). However, in the model ensemble method, all models are uncompressed and hosted in memory. Similarly, Dynamic-OFA maintains the entire model in memory, even though only a smaller sub-network may be used during inference. DNNShifter utilises up to 3.8x less memory for its model portfolio compared to the model ensemble method. Decision overhead is the wall clock time for a model-switching method to select a model from the portfolio. For example, the model ensemble method runs inference on all models in the portfolio and then chooses the output with the highest confidence. On the other hand, Dynamic-OFA selects one DNN configuration to run inference and then reconfigures the DNN to that selection. DNNShifter inflates the appropriate model from memory and has an average decision overhead of 43 ms, which is up to 11.9x faster than Dynamic-OFA. ## V Discussion and Conclusion Deploying production-quality DNNs in resource-constrained environments is essential for facilitating edge machine learning. Model compression offers techniques to derive model variants from a production-quality model suited for resource-constrained environments. However, obtaining model variants that preserve accuracy and can be compressed to reduce the resource footprint and achieve low inference latencies is challenging. Moreover, existing research has limited focus on adapting model variants to changing runtime conditions. DNNShifter addresses the above concerns by developing an end-to-end framework that incorporates a novel pruning method and a time and resource-efficient pipeline for model training, compression, and runtime switching. DNNShifter prepares model variants orders of magnitude faster than state-of-the-art neural architecture search, thus facilitating rapid and on-demand model deployments at the edge. The pruned model variants maintain the same accuracy as their production quality counterparts. They are suited for edge deployments since they are lightweight and adaptable to runtime conditions. DNNShifter was designed to accommodate existing ML training and inference pipelines. DNNShifter does not introduce any extra hyperparameters or dependencies other than requiring a user-specified maximum portfolio size. The structured pruning method of DNNShifter can easily be used: (1) for one-time optimisation of pre-existing pre-trained DNN models, (2) in conjunction with the other phases to create a training and inference pipeline from scratch, or (3) in any combination of DNNShifter's phases. Thus, DNNShifter is easily transferable to existing ML applications and products. DNNShifter is primarily limited by the high computation cost of training sparse models. There is potential for structured pruning to be conducted at the initialisation of the model (before training) with minimal accuracy loss [49, 50]. 
This will be explored in the future. ## Acknowledgements This research is funded by Rakuten Mobile, Japan. Fig. 7: Average pruning time of a ResNet model using structured pruning.
Deep neural networks (DNNs) underpin many machine learning applications. Production-quality DNN models achieve high inference accuracy by training millions of DNN parameters, which imposes significant resource demands on devices with limited computational and memory resources operating at the extreme edge of the network, such as mobile and embedded devices. To address this challenge, models are pruned to create lightweight models suited to these devices. Existing pruning methods cannot deliver models of a quality comparable to the original, incur large time costs and overheads, or are limited to offline use. In this work, suitable model variants are rapidly derived while the accuracy of the original model is maintained. These model variants can be switched rapidly to match workload requirements when system or network conditions change.
2308.16685
On the nature of the energy-dependent morphology of the composite multi-TeV gamma-ray source HESS J1702-420
HESS J1702-420 is a multi-TeV gamma-ray source with an unusual energy-dependent morphology. The recent H.E.S.S. observations suggest that the emission is well described by a combination of point-like HESS J1702-420A (dominating at highest energies, $\gtrsim$ 30 TeV) and diffuse ($\sim$ 0.3$^\circ$) HESS J1702-420B (dominating below $\lesssim$ 5 TeV) sources with very hard (${\Gamma} \sim 1.5$) and soft (${\Gamma} \sim 2.6$) power-law spectra, respectively. Here we propose a model which postulates that the proton accelerator is located at the position of HESS J1702-420A and is embedded into a dense molecular cloud that coincides with HESS J1702-420B. In the proposed model, the VHE radiation of HESS J1702-420 is explained by the pion-decay emission from the continuously injected relativistic protons propagating through a dense cloud. The energy-dependent morphology is defined by the diffusive nature of the low-energy protons propagation, transiting sharply to (quasi) ballistic propagation at higher energies. Adopting strong energy dependence of the diffusion coefficient, $D \propto E^\beta$ with $\beta \geq 1$, we argue that HESS J1702-420 as the system of two gamma-ray sources is the result of the propagation effect. Protons injected by a single accelerator at the rate $Q_0 \simeq 10^{38} \, (n_0/100 \, \rm cm^{-3})^{-1}\, (d/0.25\,\rm kpc)^{-1} \, \rm erg/s$ can reasonably reproduce the morphology and fluxes of two gamma-ray components.
Felix Aharonian, Denys Malyshev, Maria Chernyakova
2023-08-31T12:38:58
http://arxiv.org/abs/2308.16685v1
On the nature of the energy-dependent morphology of the composite multi-TeV gamma-ray source HESS J1702-420 ###### Abstract HESS J1702-420 is a multi-TeV gamma-ray source with an unusual energy-dependent morphology. The recent H.E.S.S. observations suggest that the emission is well described by a combination of point-like HESS J1702-420A (dominating at highest energies, \(\gtrsim 30\) TeV) and diffuse (\(\sim 0.3^{\circ}\)) HESS J1702-420B (dominating below \(\lesssim 5\) TeV) sources with very hard (\(\Gamma\sim 1.5\)) and soft (\(\Gamma\sim 2.6\)) power-law spectra, respectively. Here we propose a model which postulates that the proton accelerator is located at the position of HESS J1702-420A and is embedded into a dense molecular cloud that coincides with HESS J1702-420B. In the proposed model, the VHE radiation of HESS J1702-420 is explained by the pion-decay emission from the continuously injected relativistic protons propagating through a dense cloud. The energy-dependent morphology is defined by the diffusive nature of the low-energy protons propagation, transiting _sharply_ to (quasi) ballistic propagation at higher energies. Adopting strong energy dependence of the diffusion coefficient, \(D\propto E^{\beta}\) with \(\beta\geq 1\), we argue that HESS J1702-420 as the system of two gamma-ray sources is the result of the propagation effect. Protons injected by a single accelerator at the rate \(Q_{0}\simeq 10^{38}\,(n_{0}/100\,{\rm cm}^{-3})^{-1}\,(d/0.25\,{\rm kpc})^{-1}\,{\rm erg/s}\) can reasonably reproduce the morphology and fluxes of two gamma-ray components. gamma rays: stars -- stars: individual (HESS J1702-420) Footnote †: journal: ApJ Felix Aharonian, Denys Malyshev, Maria Chernyakova ## 1 Introduction HESS J1702-420 is a gamma-ray source discovered in the TeV band by the High Energy Spectroscopic System (H.E.S.S.) during the first Galactic plane survey campaign (Aharonian et al., 2006a). Later, Aharonian et al. (2008) reported the extended morphology of the source and the first measurements of its spectral characteristics. Recent H.E.S.S. observations of HESS J1702-420 demonstrated that the morphology of the source is consistent with the superposition of emissions from a point-like central source HESS J1702-420A and an extended source HESS J1702-420B (Abdalla et al., 2021). The point-like source is characterised by a power-law \(\gamma\)-ray spectrum with photon index \(\Gamma_{A}=1.53\pm 0.2\) extending without indication of steepening up to \(\sim 100\) TeV. At low energies, below \(\sim 5\) TeV, HESS J1702-420A is outshone by HESS J1702-420B. The latter is characterised by a significantly softer spectrum with \(\Gamma_{B}=2.62\pm 0.2\) and elliptical morphology with semi-axes of \(0.32^{\circ}\pm 0.02^{\circ}_{stat}\pm 0.03^{\circ}_{syst}\) (major) and \(0.20^{\circ}\pm 0.02^{\circ}_{stat}\pm 0.03^{\circ}_{syst}\) (minor). The origin of the gamma-ray emission from this source is unknown. Despite several dedicated deep X-ray observations with Suzaku (Fujinaga et al., 2011) and XMM-Newton (Giunti et al., 2022), no clear counterparts were found for either the point-like HESS J1702-420A or the diffuse HESS J1702-420B TeV source. In the absence of clear spatially coincident counterpart sources at lower energies, several misplaced sources were invoked to explain the emission from HESS J1702-420. 
These include cosmic-ray diffusion from the nearby supernova remnant SNR G344.7-0.1 and the pulsar PSR J1702-412 (both \(\sim 0.5^{\circ}\) away from the centroid of the TeV emission), see e.g. the discussion in Abdalla et al. (2021). In this paper, we propose a model that can explain the morphological and spectral characteristics of the emission coming from the HESS J1702-420 region in a self-consistent way. We propose the point-like source HESS J1702-420A to be a proton accelerator embedded into a dense molecular cloud. The diffuse source HESS J1702-420B corresponds to the pion-decay emission from the continuously injected relativistic protons propagating through the cloud. The energy-dependent morphology of HESS J1702-420 (diffuse at \(\geq 0.1\) TeV and point-like at \(\gtrsim 10\) TeV) is explained by the diffusive nature of the low-energy protons propagation, which transits to almost rectilinear propagation of higher-energy protons. A similar scenario has been invoked to explain the spectrum of the gamma-ray emission coming from the central region of our Galaxy at high and very high energies (Aharonian and Neronov, 2005; Chernyakova et al., 2011). We describe the model in Section 2, discuss the results in Section 3, and summarize the conclusions in Section 4. ## 2 Modelling The model postulates the presence of a high-energy proton accelerator embedded into a dense medium (molecular cloud). To estimate the characteristic size and density of the ambient gas, we consider the molecular HII clouds reported by Lau et al. (2019) in the direction of HESS J1702-420. Several clouds were detected at distances from 0.25 kpc to \(\sim 6\) kpc and with characteristic number densities \(10^{2}-10^{3}\) cm\({}^{-3}\). Below we will discuss the closest (\(d=0.25\) kpc) cloud from Lau et al. (2019), characterized by a density \(n_{0}=180\,\rm{cm}^{-3}\) within \(0.32^{\circ}\). It corresponds to the cloud's radius \(R=1.4\) pc and a rather modest mass of about 100 \(M_{\odot}\). Later, we will discuss how the derived results could be rescaled for more distant and heavier clouds. The stationary distribution function \(f(r,\mu)\) of relativistic protons injected by a point-like source and propagating through the ambient medium is given by Prosekin et al. (2015): \[\begin{split} f(r,\mu)=\frac{Q}{8\pi^{2}cZ}\left(\frac{1}{r^{2}}+ \frac{c}{rD}\right)\ \exp\left(-\frac{3D(1-\mu)}{rc}\right);\\ Z(x)=\frac{x}{3}\left(1-e^{-6/x}\right).\end{split} \tag{1}\] Here \(r\) is the radial coordinate; the source of the relativistic protons with the energy-dependent injection rate \(Q(E)\) is assumed to be located at \(r=0\); \(\mu=\cos\theta\) is the cosine of the angle between proton propagation and radial direction. The transport of protons is described by the energy-dependent diffusion coefficient \(D=D(E)\). Eq. (1) provides the radial distribution of relativistic protons in both diffusion and ballistic regimes, including the transition between these two propagation modes. Below we parametrize the injection power of relativistic protons \(Q(E)\) and the diffusion coefficient \(D(E)\) as \[\begin{split} Q(E_{p})\equiv dN_{p}/dE_{p}=N_{0}\cdot\left(E_{p} /1\,\rm{TeV}\right)^{-\alpha}e^{-E_{p}/E_{cut}}\\ \\ D(E_{p})=D_{0}\cdot\left(E_{p}/1\,\rm{TeV}\right)^{\beta}\,, \end{split} \tag{2}\] with the total energetics in accelerated protons \(Q_{0}=\int\limits_{m_{p}}^{\infty}E_{p}\cdot Q(E_{p})dE_{p}\). The relativistic protons, during their propagation through the ambient gas, emit gamma rays in \(pp\) collisions. 
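As an illustration, Eq. (1) can be evaluated numerically with the short sketch below. One assumption should be flagged: we take the argument of \(Z\) to be \(x=rc/D\), which makes \(Z\) the normalisation of the angular factor \(\exp(-3D(1-\mu)/rc)\) over \(\mu\in[-1,1]\); the diffusion coefficient used for the numbers is the best-fit one reported in Section 3.

```
import numpy as np

# A numerical sketch of the stationary proton distribution of Eq. (1).
# Assumption: the argument of Z is x = r*c/D, which normalises the angular
# factor exp(-3D(1-mu)/(r*c)) over mu in [-1, 1].
C = 2.998e10  # speed of light, cm/s

def Z(x):
    return x / 3.0 * (1.0 - np.exp(-6.0 / x))

def f(r, mu, Q, D):
    # r in cm, mu = cos(theta), Q the injection rate, D in cm^2/s.
    x = r * C / D
    return (Q / (8.0 * np.pi**2 * C * Z(x))
            * (1.0 / r**2 + C / (r * D))
            * np.exp(-3.0 * D * (1.0 - mu) / (r * C)))

# Diffusive vs (quasi-)ballistic regimes on the cloud scale R = 1.4 pc,
# using the best-fit D(E) = 1.6e25 (E/1 TeV)^1.45 cm^2/s from Section 3:
pc = 3.086e18
R = 1.4 * pc
for E_TeV in (1.0, 100.0, 1000.0):
    D = 1.6e25 * E_TeV**1.45
    print(f"E = {E_TeV:7.1f} TeV: D/(R*c) = {D / (R * C):.2e}")
```

The printed ratio \(D/Rc\) stays far below unity at TeV energies (deep diffusion) and exceeds unity around PeV energies, which is the transition to (quasi) ballistic propagation discussed below.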
The gamma-ray spectra were calculated with the help of the naima v0.9.1 Python module (Zabalza, 2015), which for the pion-decay emission channel implements the parametrisation from Kafexhiu et al. (2014). The results for the intensity and morphology of the produced emission were obtained by integrating the gamma-ray emission over the production region. Based on the derived intensity profile, we extracted the model spectrum of the diffuse source from the \(0.15^{\circ}-0.3^{\circ}\) annulus. The spectrum of the point source was extracted from a central \(0.15^{\circ}\)-radius circle, which roughly corresponds to the 95% H.E.S.S. PSF (assumed to be a Gaussian with \(0.07^{\circ}\) dispersion) containment. We note that within the performed modelling, the diffuse source presents a natural background for the point-like source. In order to minimize the effects of this background, we additionally subtracted the scaled (according to the extraction area) diffuse source spectrum from the point source spectrum. Following the described procedure, we derived the spectra of the point-like and diffuse sources for a set of model parameters: the diffusion coefficient index \(\beta\), the diffusion coefficient normalisation \(D_{0}\), and the power-law index of the injected proton spectrum, \(\alpha\). In the absence of firm indications for a high-energy cutoff in the observed spectra of the point-like and diffuse gamma-ray sources up to \(E\sim 100\) TeV, we set \(E_{\rm cut}=1\) PeV, implying that we deal with a _Proton PeVatron_. The observed gamma-ray spectrum of HESS J1702-420A suggests a hard power-law spectrum of injected protons with \(\alpha<2\). Thus the total energy in protons is determined essentially by the upper limit, while the lower limit of integration doesn't have a significant impact on the proton injection power: \(Q_{0}=\int\limits_{1\,{\rm TeV}}^{1\,{\rm PeV}}E_{p}Q(E_{p})\,dE_{p}\). Because of the lack of evidence of a high-energy cutoff in the gamma-ray spectrum, this estimate should be considered a lower limit. Within the proposed model, the fluxes of both HESS J1702-420A and HESS J1702-420B are proportional to the product of the proton injection power \(Q_{0}\) and the target gas density \(n_{0}\). Thus, \(Q_{0}n_{0}\) can be derived through the joint fitting of the model fluxes of the two sources to the observed ones. ## 3 Results and Discussion ### Modelling results The gamma-ray brightness profiles at different energies, convolved with the H.E.S.S. PSF, are shown in Fig. 1 (right panel). They show a tendency for the source's angular size to decrease from low to high energies. This trend is clearly seen in Fig. 2, which demonstrates a gradual transformation from the flux dominance of the diffuse source HESS J1702-420B at low energies to the flux dominance of the point-like source HESS J1702-420A at the highest energies. For the parameter set (\(\beta\), \(\alpha\)), the \(\chi^{2}\) map of the joint model fit of the point-like and diffuse source spectra is shown in the left panel of Fig. 1. The areas within grey contours are consistent with the data at \(2\sigma\) (light grey, inner contour) and \(3\sigma\) (dark grey, outer contour) levels. The numbers on the contours indicate the corresponding \(\chi^{2}\) values (best-fit \(\chi^{2}_{0}=10.8\)). The cyan diamond point corresponds to the formal best-fit value of all parameters found during the fit. 
The best-fit parameters provide a good fit to the data for the diffusion coefficient \(D=1.6\cdot 10^{25}(E/1\,\mathrm{TeV})^{1.45}\) cm\({}^{2}\)/s, the product \(Q_{0}n_{0}=0.66\cdot 10^{40}\) erg/s/cm\({}^{3}\) and the power-law index of the proton spectrum \(\alpha=1.4\). However, the strong correlation of the \(\beta\) and \(\alpha\) parameters (see Fig. 1) doesn't allow us to derive the \(\beta\) and \(\alpha\) indices separately. Indeed, the spectral index of relativistic protons modulated by diffusion is \(\approx\alpha+\beta\) (Aharonian and Atoyan, 1996). Correspondingly, the gamma-ray photon index, which roughly mimics the slope of the proton spectrum (due to the almost energy-independent \(pp\) interaction cross-section), contains information about the sum of the two indices, \(\alpha+\beta\). We should also note that the results rely only on the statistical uncertainties of the data, which are at the level of 6-30% of the measured fluxes. The systematic uncertainty of 10-20% typical for H.E.S.S. data (Aharonian et al., 2006) can further broaden the allowed parameter space shown in Fig. 1. For example, one cannot exclude the combination (\(\beta=1,\alpha=1.7\)), which gives an acceptable fit to the data; see the right panel of Fig. 2. Nevertheless, despite these uncertainties, the calculations show that the diffusion coefficient should have a sharp energy dependence, namely \(\beta\geq 1\). Generally, the index of the diffusion coefficient in a variety of standard astronomical environments, e.g. in the interstellar medium or in supernova remnants, is small, \(\beta\leq 1\). For example, in the Kolmogorov and Kraichnan turbulence modes, \(\beta=1/3\) and \(1/2\), respectively, with \(\beta=1\) reached in the Bohm diffusion regime. In this regard, the sharp energy dependence of the diffusion coefficient, which is a strongly preferred option in our model, can be considered somewhat unusual and suspicious. However, recent studies of particle diffusion in highly turbulent environments not only allow but, in some cases, give preference to a sharp energy dependence of the diffusion coefficient with \(\beta\geq 1\) (Giacinti et al., 2018; Reichherzer et al., 2020, 2022). The discussion of this non-trivial theoretical issue is outside the scope of this paper; here, we limit it by noticing that \(\beta\geq 1\) characterizes the tendency of a faster transition from the diffusive to the ballistic propagation regime and thus can be considered a natural consequence of our phenomenological model. We also note that for both parameter sets shown in Fig. 2, the diffusion coefficient at low (TeV) energies is by orders of magnitude smaller than in the interstellar medium (ISM) (\(\sim 10^{29-30}\) cm\({}^{2}\)/s at 1 TeV, see e.g. Strong et al. (2007) for a review). This is another example that in different gamma-ray source populations, e.g. in Pulsar Halos (Abeysekara et al., 2017) and Stellar Clusters (Aharonian et al., 2019), CR diffusion may proceed in a very slow regime.

Figure 1: _Left:_ \(1\sigma\) and \(2\sigma\) contours on the parameter set \((\beta,\alpha)\) (\(D\propto E^{\beta}\), \(dN_{p}/dE_{p}\propto E_{p}^{-\alpha}\)). The cyan diamond corresponds to the overall best-fit combination of \(\beta\) and \(\alpha\). _Right:_ Brightness profiles above the specified energies, smoothed with the H.E.S.S. PSF (adopted as a Gaussian with dispersion \(0.07^{\circ}\)).
At UHE energies, thanks to the strong energy dependence, the diffusion coefficient quickly recovers, although it still remains below the level characteristic of the ISM. But it appears to be sufficient to deviate at these energies from the nominal diffusion, namely to move, on the pc scales of the cloud, (quasi) ballistically. As a result, despite the large angular size (\(\approx 0.3^{\circ}\)) of the cloud where the gamma rays are produced, the apparent size of the gamma-ray image at multi-TeV energies is less than the H.E.S.S. PSF of about \(0.1^{\circ}\). At lower energies, because of the diffusive character of the propagation, the angular size of the gamma-ray image coincides with the cloud's angular size. The energy dependencies of the two diffusion coefficients used in Fig. 2 are shown in Fig. 3 with solid (red) and dot-dashed (blue) lines. The horizontal black line presents the margin of applicability of the diffusive propagation regime, defined as \(D_{\max}=Rc\), as it follows from Eq. (1). Above that line, the propagation proceeds in the ballistic regime. For comparison, in Fig. 3, we show the range of the diffusion coefficient commonly adopted for galactic cosmic rays (Strong et al., 2007; Vladimirov et al., 2012). ### Estimates for an arbitrary cloud The proposed model successfully explains the spectra and morphology of the HESS J1702-420 VHE sources, assuming a distance to the source of 0.25 kpc. For the given angular scales, the distance to the source determines the geometrical size of the source, which is one of the principal model parameters for the description of the CR transport, including the transition from the diffusive to the ballistic propagation regime. The impact of the ambient gas density on the results is simpler; the gamma-ray flux is proportional to \(n_{0}\) and, consequently, the proton injection power \(Q_{0}\propto 1/n_{0}\). If the gamma-ray source coincides with the cloud reported in Lau et al. (2019) at a distance \(d\approx 0.25\) kpc with density \(n_{0}\approx 180\) cm\({}^{-3}\), the required injection power is \(Q_{0}\approx 3.6\times 10^{37}\) erg/s for \(\beta=1.4\) and twice larger for \(\beta=1\). Since one cannot exclude the association of the gamma-ray source with other clouds reported by Lau et al. (2019), below we briefly discuss how the main parameters could be rescaled for a cloud located at an arbitrary distance. Diffusion softens the protons' energy distribution and, consequently, the resulting gamma-ray spectrum, steepening the spectral index by \(\beta\). Thus the index \(\beta\) of the diffusion coefficient can be estimated from the difference of the spectral slopes of the HESS J1702-420A (\(\Gamma_{A}=1.53\)) and HESS J1702-420B (\(\Gamma_{B}=2.62\)) sources: \(\beta\simeq\Gamma_{B}-\Gamma_{A}\sim 1\). The transition from the diffusive to (quasi) rectilinear propagation of protons occurs at the energy \(E_{\rm tr}\) defined from the condition \(R^{2}/\left(2D(E_{\rm tr})\right)\sim R/c\); see Eq. (1): \[\frac{E_{\rm tr}}{1\,{\rm TeV}}\lesssim\left(\frac{cR}{2D_{0}}\right)^{1/\beta}\simeq\left(2100\,\frac{L}{0.25\,{\rm kpc}}\,\frac{10^{26}\,{\rm cm}^{2}{\rm s}^{-1}}{D_{0}}\right)^{1/\beta} \tag{3}\]

Figure 2: The observed and calculated spectral energy distributions (SEDs) of the point-like and diffuse sources for two sets of model parameters \((\beta,\alpha)\). _Left:_ the best-fit parameters corresponding to the cyan diamond point shown in Fig. 1: \(\beta=1.45\) and \(\alpha=1.4\). _Right:_ the model spectra calculated for not the best but still an acceptable set of parameters, with the Bohm-type diffusion coefficient index \(\beta=1\) and the proton spectral index \(\alpha=1.7\). The corresponding values of the product \(Q_{0}n_{0}\) are \(0.66\times 10^{40}\) erg/cm\({}^{3}\)/s and \(1.31\times 10^{40}\) erg/cm\({}^{3}\)/s, respectively.
Protons with higher energies propagate almost ballistically, therefore above \(E_{\rm tr}\) the proton spectrum is not modified. Correspondingly, we should expect a noticeable change in the gamma-ray spectrum around \(E_{\rm tr,\gamma}\sim 0.1E_{\rm tr}\). For the same reason, we should expect different gamma-ray images at high and low energies, namely, an extended source below \(E_{\rm tr,\gamma}\) and a point-like source above \(E_{\rm tr,\gamma}\). These predictions describe quite well the observed energy-dependent morphology of HESS J1702-420 and allow us to estimate the absolute value of the diffusion coefficient \(D_{0}\) and \(E_{\rm tr,\gamma}\), the energy of the transition from a point-like to a diffuse morphology. For the diffusion coefficients derived from the numerical modelling, one gets \(E_{\rm tr}\sim 700\) TeV. For a hard proton spectrum (\(\alpha\lesssim 2\)), the ratio of the energies of the primary proton and the secondary (\(\pi^{0}\)-decay) photon is about 20 (Kelner et al., 2006; Celli et al., 2020), thus the transition between the point-like and diffuse morphologies should occur at \(E_{\rm tr,\gamma}\sim 35\) TeV, in good agreement with the numerical calculations shown in Fig. 2. #### 3.2.1 Proton's injection rate The flux, angular size and distance to HESS J1702-420A determine the required proton injection rate. The efficiency of conversion of the energy of CR protons to gamma rays in inelastic \(pp\) interactions is determined by the ratio of the confinement time \(t_{\rm esc}\) of protons inside the emitter to their radiative cooling time through the production and decay of \(\pi^{0}\) mesons: \[t_{pp\rightarrow\pi^{0}}\simeq 5\times 10^{15}\left(n_{0}/1\,{\rm cm}^{-3}\right)^{-1}{\rm s}.\] In general, the energy-dependent confinement time is determined by the diffusion coefficient, but for the estimates of the gamma-ray flux from the point-like source, we should use \(t_{\rm esc}\approx R/c\), given that this flux is the result of radiation by protons moving (quasi) ballistically inside the source of size \(R\approx\theta d\), where \(\theta\approx 0.32^{\circ}\) is the angular size of the emitter (the diffuse source coinciding with the cloud in which the Proton PeVatron is embedded) and \(d\) is the distance to the system. Then, the flux expected from the point-like source is: \[F_{\gamma}=\frac{Q_{0}}{4\pi d^{2}}\frac{t_{\rm esc}}{t_{pp\to\pi^{0}}}\simeq 4\cdot 10^{-12}\,\frac{Q_{0}n_{0}}{10^{40}\,{\rm erg/s/cm}^{3}}\left(\frac{d}{0.25\,{\rm kpc}}\right)^{-1}\left(\frac{\theta}{0.32^{\circ}}\right)\frac{{\rm erg}}{{\rm cm}^{2}\,{\rm s}}. \tag{4}\] Note that in this equation, the gamma-ray flux scales with distance as \(\propto d^{-1}\). ### X-ray Emission from Secondary Electrons In \(pp\) collisions, about the same number of gamma rays and electrons are produced. Propagating through the ambient magnetic field (in the cloud or in the ISM), the secondary electrons radiate potentially detectable synchrotron emission.
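As a quick numerical cross-check of Eqs. (3) and (4), a sketch is given below. It implements the printed analytic scalings only; the "\(\sim\)" in the transition condition hides order-unity factors, so the analytic estimate of \(E_{\rm tr}\) agrees with the full numerical result (\(\sim 700\) TeV, quoted above) only to within a factor of a few.

```python
import numpy as np

C, PC = 3.0e10, 3.086e18  # speed of light [cm/s], parsec [cm]

def E_tr_TeV(R_pc, D0=1.6e25, beta=1.45):
    """Diffusive-to-ballistic transition energy from R^2/(2D) ~ R/c, Eq. (3).
    An order-of-magnitude estimate only."""
    return (C * R_pc * PC / (2.0 * D0)) ** (1.0 / beta)

def F_point(Q0n0, d_kpc=0.25, theta_deg=0.32):
    """Point-source gamma-ray flux, Eq. (4), in erg/cm^2/s."""
    return 4.0e-12 * (Q0n0 / 1e40) / (d_kpc / 0.25) * (theta_deg / 0.32)

print(E_tr_TeV(1.4))      # a few hundred TeV for the best-fit D(E)
print(F_point(0.66e40))   # ~2.6e-12 erg/cm^2/s
```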
For a characteristic magnetic field \(B_{0}=10\,\mu\)G (Cox, 2005; Jansson and Farrar, 2012), the electrons with energy \(E_{e}\) are cooled on a timescale \[t_{\rm cool}=6.6\cdot 10^{2}\left(\frac{B}{10\,\mu{\rm G}}\right)^{-2}\left(\frac{E_{e}}{100\,{\rm TeV}}\right)^{-1}{\rm yr}\,. \tag{5}\] The Larmor radius \(R_{L}\) and the characteristic propagation distance \(s\) of the electrons are \[R_{L}\simeq 10^{-2}\left(\frac{B}{10\,\mu{\rm G}}\right)^{-1}\left(\frac{E_{e}}{100\,{\rm TeV}}\right)\,{\rm pc} \tag{6}\] \[s(t)=\sqrt{D\,t_{\rm cool}}\simeq 4.7\sqrt{\frac{D_{0}}{10^{26}{\rm cm}^{2}/{\rm s}}}\left(\frac{E_{e}}{1\,{\rm TeV}}\right)^{(\beta-1)/2}\left(\frac{B}{10\,\mu{\rm G}}\right)^{-1}{\rm pc}\] An electron of energy \(E_{e}\) emits synchrotron emission at \[\varepsilon_{s}=5\,\frac{B}{10\,\mu{\rm G}}\left(\frac{E_{e}}{100\,{\rm TeV}}\right)^{2}\,{\rm keV}, \tag{7}\] thus the propagation distance can be written as \[s(t)=4.7\sqrt{\frac{D_{0}}{10^{26}\rm{cm^{2}/s}}}\left(\frac{\varepsilon_{s}}{0.5\,\rm{eV}}\right)^{\frac{\beta-1}{4}}\left(\frac{B}{10\,\rm{\mu G}}\right)^{\frac{-\beta-3}{4}}\rm{pc}. \tag{8}\] In the case of \(\beta=1\) and assuming \(D_{0}=10^{26.5}\) cm\({}^{2}\)/s, this equation reduces to \[s\simeq 8\left(\frac{B}{10\,\rm{\mu G}}\right)^{-1}\rm{pc}\,. \tag{9}\] Thus, for the distance to the cloud \(d=0.25\) kpc, the angular size of the X-ray image, \(\theta=s/d\sim 2^{\circ}\), significantly exceeds the cloud's angular size. As follows from Eq. (9), only in the case of a much stronger magnetic field, namely \(B\gtrsim 50\,\rm{\mu G}\), could the X-ray image be smaller than the size of the molecular cloud.

Figure 3: The energy dependencies of the two diffusion coefficients used to describe the H.E.S.S. data (see Fig. 2). The value of the diffusion coefficient corresponding to the transition from diffusive to (quasi)ballistic propagation is shown with the horizontal black line. The shaded region corresponds to the range of the diffusion coefficient adopted for galactic cosmic rays (Strong et al., 2007; Vladimirov et al., 2012).

To calculate the synchrotron flux of the secondary electrons, we used the aafragpy v.1.12 package (Koldobskiy et al., 2021) for the electron production in \(pp\) interactions, and the naima v.0.9.1 (Zabalza, 2015) Python module for the synchrotron radiation in a random magnetic field. The fluxes of the synchrotron emission of the secondary electrons integrated over the entire X-ray source for three values of the magnetic field are shown in Fig. 4. The absolute fluxes are obtained from the normalization to the flux of the \(\pi^{0}\)-decay \(\gamma\)-rays. The X-ray spectrum is hard, with a photon index slightly exceeding 2 and a flux of \(\sim 2-2.5\times 10^{-13}\) erg/cm\({}^{2}\)/s at 1 keV. Since the entire energy of the secondary electrons is emitted via synchrotron radiation, the X-ray flux only slightly depends on the strength of the magnetic field; an increase of the latter results only in a shift of the synchrotron photon energies proportional to \(B\). Giunti et al. (2022) carried out a deep X-ray observation of HESS J1702-420 with XMM-Newton, which resulted in a non-detection of a counterpart of the point-like source HESS J1702-420A. This result is consistent with our model, as the \(\sim 100\) TeV secondary electrons producing the keV emission propagate through the medium in the diffusive regime (as follows, e.g., from Eq. (5) and Eq. (6)).
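The secondary-electron estimates of Eqs. (5)-(9) are simple enough to script. A minimal sketch, using the paper's scalings with \(\beta=1\) and \(D_{0}=10^{26.5}\) cm\({}^{2}\)/s as in Eq. (9):

```python
import numpy as np

def t_cool_yr(Ee_100TeV, B_10uG=1.0):
    """Synchrotron cooling time, Eq. (5), in years."""
    return 6.6e2 / B_10uG**2 / Ee_100TeV

def eps_sync_keV(Ee_100TeV, B_10uG=1.0):
    """Characteristic synchrotron photon energy, Eq. (7), in keV."""
    return 5.0 * B_10uG * Ee_100TeV**2

def s_pc(eps_keV, D0=10**26.5, beta=1.0, B_10uG=1.0):
    """Propagation distance of cooling electrons, Eq. (8), in pc."""
    eps_in_half_eV = eps_keV * 1.0e3 / 0.5   # photon energy in units of 0.5 eV
    return (4.7 * np.sqrt(D0 / 1.0e26) * eps_in_half_eV ** ((beta - 1.0) / 4.0)
            * B_10uG ** (-(beta + 3.0) / 4.0))

# Angular size of the keV image for d = 0.25 kpc (the text quotes ~2 degrees)
s = s_pc(1.0)                  # ~8 pc for beta = 1, cf. Eq. (9)
print(np.degrees(s / 250.0))   # s/d converted to degrees, with d = 250 pc
```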
The consequent emission corresponds to the extended morphology of \(\sim 2^{\circ}\) angular size discussed above, which significantly exceeds the field of view (FoV) of XMM-Newton, making the detection of the extended emission with pointed observations problematic. Larger-scale mosaic XMM-Newton observations, or surveys with broad-FoV missions such as eROSITA, could potentially detect the predicted X-ray emission. Note that in the soft gamma-ray (MeV) band, the source becomes point-like, as the corresponding secondary electrons propagate in the ballistic regime. Formally, this makes detection easier. However, the expected flux level of \(\sim 10^{-13}\) erg/cm\({}^{2}\)/s at 1 MeV is still two orders of magnitude below the sensitivities of the current and future missions. ## 4 Summary HESS J1702-420 is a TeV gamma-ray source of particular interest because of its hard energy spectrum extending up to 100 TeV and its peculiar energy-dependent morphology. This paper proposes a model that addresses the spatial distribution and energy spectra of the extended and point-like components of HESS J1702-420, originating from a single accelerator (a proton PeVatron) embedded in a dense gas cloud. The observed emission from HESS J1702-420 is explained by \(\pi^{0}\)-decay gamma rays arising from the interactions of relativistic protons continuously injected into and propagating through the cloud. The energy-dependent gamma-ray morphology is caused by the diffusive nature of the propagation of low-energy protons, which transits to an almost ballistic propagation regime at the highest energies. For a reasonable set of model parameters, both the energy spectrum and the morphology can be well described by a diffusion coefficient which is strongly suppressed at low energies in comparison to the interstellar medium (\(D_{0}\sim 10^{26}\) cm\({}^{2}\)/s at 1 TeV) but has a strong energy dependence (\(\beta\gtrsim 1\)) that results in the propagation of the highest-energy protons (\(E\geq 100\) TeV) in the ballistic regime. The detected gamma-ray fluxes require a powerful proton accelerator with an injection rate at the level of \(Q_{0}\sim 10^{38}(n_{0}/100\,\rm{cm^{-3}})^{-1}\) erg/s. We argue that the proposed scenario can be typical for a broad class of multi-TeV gamma-ray sources.

Figure 4: Modelled X-ray spectra of the secondary synchrotron emission integrated over the X-ray emitting region for different values of the magnetic field.

## Acknowledgements We thank the anonymous referee for his/her thoughtful comments, which helped us to improve the manuscript. The work of DM was supported by DLR through grant 50OR2104 and by DFG through grant MA 7807/2-1. The authors acknowledge support by the state of Baden-Württemberg through bwHPC.
HESS J1702-420 is a TeV gamma-ray source with a peculiar energy-dependent morphology. Recent H.E.S.S. observations provide evidence that the emission can be explained as a combination of the point-like source HESS J1702-420A (dominant at the highest energies, \(\gtrsim 30\) TeV) and the diffuse (\(\sim 0.3^{\circ}\)) source HESS J1702-420B (dominant below 5 TeV). Their photon spectra are very hard (\(\Gamma\sim 1.5\)) and soft (\(\Gamma\sim 2.6\)), respectively. In this paper, we propose a model in which a proton accelerator, located at the position of HESS J1702-420A, is embedded in a dense molecular cloud.
2309.08792
An entropy-based approach for a robust least squares spline approximation
We consider the weighted least squares spline approximation of a noisy dataset. By interpreting the weights as a probability distribution, we maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Acting on this error yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure, by assigning them a very small weight. We discuss the use of both spline functions and spline curves. A number of numerical illustrations have been included to disclose the potentialities of the maximal-entropy approach in different application fields.
Luigi Brugnano, Domenico Giordano, Felice Iavernaro, Giorgia Rubino
2023-09-15T22:20:48
http://arxiv.org/abs/2309.08792v1
# An entropy-based approach for a robust least squares spline approximation ###### Abstract We consider the weighted least squares spline approximation of a noisy dataset. By interpreting the weights as a probability distribution, we maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Acting on this error yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure, by assigning them a very small weight. We discuss the use of both spline functions and spline curves. A number of numerical illustrations have been included to disclose the potentialities of the maximal-entropy approach in different application fields. keywords: Weighted least squares approximation, B-splines, Entropy MSC: [2010] 65D10, 94A17 + Footnote †: journal: Journal of Computational and Applied Mathematics ## 1 Introduction With the advent of computer-aided modern technology, sheer volumes of data need to be pre-processed in order to make them suitable for the subsequent data-driven tasks they are intended for. Real data are often affected by various imperfections, including noise, poor sampling, missing values and outliers. The automatic identification and removal of these inconsistencies has become of paramount importance during the preprocessing phase of data, since they may significantly affect the predictive accuracy and efficiency of models such as those based upon single and multivariate regression, as well as of pattern recognition procedures resulting from machine learning and deep learning processes [1; 2; 3; 4]. Identification of corrupted data also plays a fundamental role in automatic anomaly detection, understood as the appearance of events or observations which are inconsistent with the pattern underlying a given dataset. Anomaly detection has become increasingly important in many application areas, including statistics, cyber security, medicine, event detection in sensor networks, financial fraud detection and machine learning [5]. Outliers may be thought of as extreme values that deviate significantly from the trend defined by the majority of the data points, possibly due to errors or rare events, and that can consequently worsen the performance of many data analysis algorithms. Classical outlier detection methods often rely on specific assumptions about the data's distribution. However, in many real-world scenarios, estimating such a distribution beforehand can be challenging due to the data's dependence on various unknown or complex factors and the presence of highly noisy sources. This limitation becomes apparent in vast collections of time series data, especially within environmental investigations. Such a topic has recently garnered extensive research attention, especially in understanding the correlation between climate changes and the increasing severity of natural disasters [6, 7]. Extending the study addressed in [8] for the polynomial case, the present paper introduces a robust regression technique for spline approximation of both univariate and multivariate time series, considering scenarios where observations exhibit varying degrees of reliability (see [9] for a related study). In statistics, robust regression tries to overcome the limitations of the ordinary least squares when its underlying assumptions are violated, for example, due to the presence of outliers [10, 11, 12, 13].
The proposed procedure tackles the challenges posed by outliers and noise by formulating a weighted least squares problem that leverages the statistical concept of entropy. To this end, we adopt the normalization condition that the weights sum to one, which allows us to interpret them as a probability distribution. In more detail, to mitigate the negative influence of outliers and noise on the resulting approximating curve, the procedure maximizes the entropy \(H\) associated with the weights distribution, under the constraint that the resulting weighted mean squared error takes a prescribed value lower than the one corresponding to a uniform weights distribution. Such a value may be either provided by the user, on the basis of what he would expect in the absence of corrupted data, or automatically detected during the implementation of the procedure. To better elucidate the role played here by entropy, we quote Jaynes [14, page 97]: _...the distribution that maximizes \(H\), subject to constraints which represent whatever information we have, provides the most honest description of what we know. The probability is, by this process, spread out as widely as possible without contradicting the available information._ Translating Jaynes' words into our context, we may stress that the proposed approach ensures that as many data points as possible carry non-negligible weights, which results in maximizing the inlier set while adhering to the mean squared error constraint. To achieve this, the strategy assigns smaller weights to points that are more likely to be considered outliers, effectively minimizing their influence on defining the final shape of the approximating spline curve. It is important to note that this weighting task is seamlessly integrated into the fitting procedure, resulting in a unified methodology that eliminates the need for a preprocessing phase. Similarly to the RANSAC algorithm [15], the entropy-based approach proves particularly effective in handling situations where a substantial portion of the data is corrupted. However, unlike the RANSAC algorithm, it boasts the advantage of being deterministic in nature. Furthermore, by reinterpreting the weights as probabilities, we can readily justify the use of entropy as a mathematical tool for effectively handling corrupted data points. The paper is structured as follows: In Section 2, we review the fundamental concepts related to weighted least squares spline approximation and introduce the corresponding notations. Section 3 presents a formal definition of the approximation problem using the entropy-based tool and proposes a simple algorithm to obtain the optimal solution for the constrained optimization problem. To demonstrate the functionality of the entropy tool, a few numerical illustrations are provided in Section 4. In Section 5, three examples involving real-world data are considered. Finally, in Section 6, we draw conclusions based on the findings. ## 2 Background Consider a parametrized sequence of points \(\{(t_{i},y_{i})\}_{i=1}^{m}\), where \(t=(t_{1},\ldots,t_{m})^{\top}\) is a non-decreasing sequence of real parameters and \(y_{i}\in\mathbb{R}^{s}\) are the corresponding data points. In statistics parlance, the sequence \(\{(t_{i},y_{i})\}\) is often referred to as a multivariate time series. As is usual in this context, we introduce a change of variable that normalizes the data in \([0,1]\times[0,1]^{s}\):1 Footnote 1: In the sequel, all the operations and function evaluations involving vectors are meant componentwise.
For example, for a given vector \(z=(z_{1},\ldots,z_{k})^{\top}\) and a function \(g:\mathbb{R}\to\mathbb{R}\), we have \(g(z)=(g(z_{1}),\ldots,g(z_{k}))^{\top}\). \[t_{i}\to\frac{t_{i}-t_{\min}}{t_{\max}-t_{\min}},\qquad y_{i}\to\frac{y_{i}-y_{\min}}{y_{\max}-y_{\min}},\] where \[t_{\min}=\min_{1\leq i\leq m}t_{i},\quad t_{\max}=\max_{1\leq i\leq m}t_{i}\] and, denoting by \(y_{i}(j)\) the \(j\)th entry of the vector \(y_{i}\), \[y_{\min}(j)=\min_{1\leq i\leq m}y_{i}(j),\quad y_{\max}(j)=\max_{1\leq i\leq m}y_{i}(j),\quad j=1,\ldots,s.\] Of course, one can revert to the original coordinates by employing the inverse transformations. We wish to fit the given data set by means of a spline curve \(f\) of degree \(d\) expanded along a B-spline basis \(\{B_{j,d}(x)\}_{j=1}^{n}\), namely \[f(x,c)=\sum_{j=1}^{n}c_{j}B_{j,d}(x). \tag{1}\] Here, \(c=(c_{1}^{\top},\ldots,c_{n}^{\top})^{\top}\in\mathbb{R}^{sn}\) is a set of \(n\) control points, each of length \(s\), and the B-splines \(B_{j,d}(x)\) are defined on a non-decreasing sequence of \((d+1)\)-regular knots \[0=x_{1}=\cdots=x_{d+1}<x_{d+2}\leq\ldots\leq x_{n}<x_{n+1}=\cdots=x_{n+d+1}=1, \tag{2}\] via the three-term recurrence relation2 Footnote 2: If a division by zero occurs, the related term is neglected. \[B_{j,d}(x)=\frac{x-x_{j}}{x_{j+d}-x_{j}}B_{j,d-1}(x)+\frac{x_{j+d+1}-x}{x_{j+d+1}-x_{j+1}}B_{j+1,d-1}(x),\] with \[B_{j,0}(x)=\left\{\begin{array}{ll}1,&\mbox{if }x_{j}\leq x<x_{j+1},\\ 0,&\mbox{otherwise}.\end{array}\right.\] Besides the conditions at the end points in (2), the \((d+1)\)-regularity of the knot vector also imposes that \(n\geq d+1\) and \(x_{j}<x_{j+d+1}\), for \(j=1,\ldots,n\), which are relevant assumptions for the linear independence of the B-splines [16]. In the sequel, for the sake of simplicity, we will omit the second subscript in \(B_{j,d}(x)\). Now, for a given vector \(w=(w_{1},\ldots,w_{m})^{\top}\) of (positive) weights satisfying the normalization condition \[\sum_{i=1}^{m}w_{i}=1, \tag{3}\] we consider the weighted mean squared error \[\overline{\mathrm{E}^{2}}=\sum_{i=1}^{m}w_{i}||f(t_{i},c)-y_{i}||_{2}^{2} \tag{4}\] as an estimate of the approximation accuracy of a spline function \(f(x,c)\) in the form (1) to the given data set. Denoting by \(I_{s}\) the identity matrix of dimension \(s\) and introducing the generalized Vandermonde matrix \[A=\begin{pmatrix}B_{1}(t_{1})&\cdots&B_{n}(t_{1})\\ \vdots&&\vdots\\ B_{1}(t_{m})&\cdots&B_{n}(t_{m})\end{pmatrix}\in\mathbb{R}^{m\times n},\] the vector \(y=(y_{1}^{\top},\ldots,y_{m}^{\top})^{\top}\in\mathbb{R}^{sm}\) and the diagonal matrix \(W=\mathrm{diag}(w_{1},\ldots,w_{m})\), (4) may be cast in two equivalent forms that will be conveniently exploited for calculation and implementation purposes: \[\begin{array}{rcl}\overline{\mathrm{E}^{2}}&=&(f(t,c)-y)^{\top}(W\otimes I_{s})(f(t,c)-y)\\ &=&\|(\sqrt{W}\otimes I_{s})(f(t,c)-y)\|_{2}^{2}\\ &=&\|(\sqrt{W}\otimes I_{s})((A\otimes I_{s})c-y)\|_{2}^{2}\end{array} \tag{5}\] and, denoting by \(e_{s}=(1,\ldots,1)^{\top}\) the unit vector of length \(s\), \[\begin{array}{rcl}\overline{\mathrm{E}^{2}}&=&(w\otimes e_{s})^{\top}(f(t,c)-y)^{2}\\ &=&(w\otimes e_{s})^{\top}((A\otimes I_{s})c-y)^{2}.\end{array} \tag{6}\] For a prescribed choice of weights, the _least squares approximation problem_ consists in finding the (vector) coefficients \(c_{j}\) such that the corresponding weighted mean squared error (5) is minimized.
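As an illustration of the ingredients above, the recursion defining \(B_{j,d}\) and the generalized Vandermonde matrix \(A\) can be coded directly. This is a bare-bones sketch of the paper's own definitions (in practice one would use an optimized B-spline library, and the right endpoint \(x=1\) needs the usual closed-interval convention):

```python
import numpy as np

def bspline(j, d, x, knots):
    """B_{j,d}(x) via the recurrence of Section 2 (j is 0-based here).
    Terms with a zero denominator are neglected, as in footnote 2."""
    if d == 0:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    val = 0.0
    den1 = knots[j + d] - knots[j]
    if den1 > 0:
        val += (x - knots[j]) / den1 * bspline(j, d - 1, x, knots)
    den2 = knots[j + d + 1] - knots[j + 1]
    if den2 > 0:
        val += (knots[j + d + 1] - x) / den2 * bspline(j + 1, d - 1, x, knots)
    return val

def vandermonde(t, knots, d):
    """Generalized Vandermonde matrix A with entries A[i, j] = B_{j,d}(t_i)."""
    n = len(knots) - d - 1
    return np.array([[bspline(j, d, ti, knots) for j in range(n)] for ti in t])
```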
As is well known, differentiating (5) with respect to \(c\), this requirement leads to the normal system \[(A^{\top}WA\otimes I_{s})c=(A^{\top}W\otimes I_{s})y, \tag{7}\] which results from computing the stationary points of \(\overline{\mathrm{E}^{2}}\) regarded as a function of \(c\). Under the assumption that for any \(j=1,\ldots,n\) a \(t_{i_{j}}\) exists such that \(B_{j}(t_{i_{j}})\neq 0\), the matrix \(A^{\top}WA\) is positive definite and the Cholesky factorization may be employed to transform (7) into a couple of triangular systems. More generally, and also to prevent a worsening of the conditioning, one avoids the left multiplication by the matrix \(A^{\top}\) and directly deals with the least squares solution of the overdetermined system \[(\sqrt{W}A\otimes I_{s})c=(\sqrt{W}\otimes I_{s})y. \tag{8}\] In such a case, application of the \(QR\) factorization algorithm with column pivoting, or of the SVD decomposition, to the rectangular matrix \(\sqrt{W}A\) may be considered to solve the associated least squares problem. **Remark 1**.: In the event that the components \(y_{i}(j)\), \(j=1,\ldots,s\), are affected by sources of noise of different size depending on \(j\), one could improve (4) by allowing a different weight for each component of the error \(f(t_{i},c)-y_{i}\). This is tantamount to considering a vector of weights \(w\) of length \(ms\) and the related mean squared error defined as \[\overline{\mathrm{E}^{2}}=w^{\top}(f(t,c)-y)^{2}\equiv\|\sqrt{W}(f(t,c)-y)\|_{2}^{2}, \tag{9}\] with \(W=\mathrm{diag}(w)\). In the numerical tests discussed in Sections 4 and 5 both approaches showed very similar results, so we only included those relying on (4). In the sequel, \(\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\) will denote the mean squared error resulting from the ordinary least squares (OLS) approximation defined on the uniform weights distribution \(w_{i}=1/m\), namely \[\overline{\mathrm{E}^{2}}_{\mathrm{uw}}=\frac{1}{m}\sum_{i=1}^{m}(f(t_{i},\bar{c})-y_{i})^{2}, \tag{10}\] where \(\bar{c}\) satisfies the normal linear system (7) with \(W=I_{m}/m\), \(I_{m}\) being the identity matrix of dimension \(m\). ## 3 Maximum entropy weighted least squares spline approximation The use of a weighted mean squared error is helpful when the data exhibit different levels of accuracy, due to the presence of noise and/or outliers. In such a case, it would be appropriate to attach large weights to very accurate data points and small weights to data points which are most likely affected by a high level of inaccuracy. In fact, a weight \(w_{i}\) approaching zero makes the corresponding data point \(y_{i}\) irrelevant for the purpose of the fitting procedure. On the other hand, increasing the size of \(w_{i}\) will make \(f(t_{i},c)\) closer to \(y_{i}\). It turns out that, under the normalization condition (3), the WLS approximation will mimic the OLS one applied to the subset of data carrying relatively large weights. By exploiting an entropy-based argument, the _maximum entropy weighted least squares_ (MEWLS) approximation tries to devise an automatic, easy-to-understand and effective procedure for assigning the correct weight to each data point during the fitting procedure.
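For scalar data (\(s=1\)), solving (8) reduces to a one-liner on top of the matrix \(A\). A sketch (np.linalg.lstsq uses the SVD internally, in line with the conditioning remarks above):

```python
import numpy as np

def wls_fit(A, y, w):
    """Weighted least squares spline coefficients, Eq. (8), for s = 1:
    minimize || sqrt(W) (A c - y) ||_2 over c."""
    sqw = np.sqrt(w)
    c, *_ = np.linalg.lstsq(sqw[:, None] * A, sqw * y, rcond=None)
    return c
```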
The MEWLS approach based on spline approximating functions in the form (1) is defined by the following set of equations (\(e_{m}\) stands for the unit vector of length \(m\)): \[\begin{array}{ll}\mbox{maximize}&-w^{\top}\log w,\\ \mbox{subject to:}&w^{\top}e_{m}=1,\\ &(w\otimes e_{s})^{\top}(f(t,c)-y)^{2}=\overline{\mathrm{E}^{2}}\,.\end{array} \tag{11}\] In other words, we wish to maximize the entropy function \[H(w)=-w^{\top}\log w=-\sum_{i=1}^{m}w_{i}\log w_{i} \tag{12}\] associated with a weights distribution \(w\) satisfying the normalization condition \(\sum_{i}w_{i}=1\), subject to the constraint that the corresponding mean squared error attains a prescribed value \(\overline{\mathrm{E}^{2}}\). As is well known, problem (11), deprived of the second constraint, admits the solution \(w_{i}=1/m\), which leads us back to the ordinary least squares problem with uniform weights and associated mean squared error \(\overline{\rm E^{2}}_{\rm uw}\). Clearly, the very same solution is obtained when solving the complete set of equations in (11) under the choice \(\overline{\rm E^{2}}=\overline{\rm E^{2}}_{\rm uw}\), so (11) contains the ordinary least squares problem as a special instance. By setting \(\overline{\rm E^{2}}\) to a suitable value lower than the mean squared error \(\overline{\rm E^{2}}_{\rm uw}\), the weights selection technique based upon the maximal-entropy argument epitomized by (11) is aimed at mitigating the effect of outliers and noise in the data while solving the weighted least squares problem. To highlight the relation between \(\overline{\rm E^{2}}\) and \(\overline{\rm E^{2}}_{\rm uw}\), we assume in the sequel \[\overline{\rm E^{2}}=\frac{1}{r}\,\overline{\rm E^{2}}_{\rm uw} \tag{13}\] where \(r>1\) is a suitable reduction factor. According to the Lagrange multiplier theorem, we compute the stationary points of the Lagrangian function \[{\cal L}(w,c,\lambda_{1},\lambda_{2})=w^{\top}\log w+\lambda_{1}(w^{\top}e_{m}-1)+\lambda_{2}\left((w\otimes e_{s})^{\top}(f(t,c)-y)^{2}-\overline{\rm E^{2}}\right). \tag{14}\] Differentiating, we get: \[\frac{\partial{\cal L}}{\partial w} = e_{m}+\log w+\lambda_{1}e_{m} \tag{15}\] \[+\lambda_{2}\left((I_{m}\otimes e_{s})^{\top}(f(t,c)-y)^{2}\right),\] \[\frac{\partial{\cal L}}{\partial c} = 2\lambda_{2}\left((A^{\top}WA\otimes I_{s})c-(A^{\top}W\otimes I_{s})y\right), \tag{16}\] \[\frac{\partial{\cal L}}{\partial\lambda_{1}} = w^{\top}e_{m}-1,\] \[\frac{\partial{\cal L}}{\partial\lambda_{2}} = (w\otimes e_{s})^{\top}(f(t,c)-y)^{2}-\overline{\rm E^{2}}\,.\] The last term in (15) is the vector of length \(m\) \[\lambda_{2}\left(||f(t_{1},c)-y_{1}||_{2}^{2},\ldots,||f(t_{m},c)-y_{m}||_{2}^{2}\right)^{\top}, \tag{17}\] while (16) comes from the equivalence of formulae (5) and (6), after observing that the first two terms in the Lagrangian (14) do not depend on the spline coefficients \(c_{i}\). The stationary points of \({\cal L}\) are the solutions of the following set of \(sn+m+2\) equations in as many unknowns \(c\in\mathbb{R}^{sn}\), \(w\in\mathbb{R}^{m}\), \(\lambda_{1}\) and \(\lambda_{2}\): \[(A^{\top}WA\otimes I_{s})c-(A^{\top}W\otimes I_{s})y = 0, \tag{18}\] \[(w\otimes e_{s})^{\top}(f(t,c)-y)^{2}-\overline{\rm E^{2}} = 0,\] (19) \[e_{m}+\log w+\lambda_{1}e_{m}+\lambda_{2}\left((I_{m}\otimes e_{s})^{\top}(f(t,c)-y)^{2}\right) = 0,\] (20) \[w^{\top}e_{m}-1 = 0. \tag{21}\] By exploiting the weights normalization condition (21), we can easily remove the unknown \(\lambda_{1}\).
To this end, we first recast equation (20) as \[w=\exp(-(1+\lambda_{1}))\cdot\exp\left(-\lambda_{2}\left((I_{m}\otimes e_{s})^{ \top}(f(t,c)-y)^{2}\right)\right).\] Multiplying both sides by \(e_{m}^{\top}\) and taking into account (21) and (17) yields \[1=\exp(-(1+\lambda_{1}))\cdot Q(c,\lambda_{2}),\qquad\mbox{with }Q(c,\lambda_{ 2})=\sum_{i=1}^{m}\exp\left(-\lambda_{2}||f(t_{i},c)-y_{i}||_{2}^{2}\right)\] and hence \[w=\frac{1}{Q(c,\lambda_{2})}\cdot\exp\left(-\lambda_{2}\left((I_{m}\otimes e_ {s})^{\top}(f(t,c)-y)^{2}\right)\right) \tag{22}\] that will replace (20) and (21). Plugging (22) into (19) we arrive at the final shape of the system to be solved: \[(A^{\top}WA\otimes I_{s})c-(A^{\top}W\otimes I_{s})y = 0, \tag{23}\] \[\sum_{i=1}^{m}||f(t_{i},c)-y_{i}||_{2}^{2}\cdot\exp\left(- \lambda_{2}||f(t_{i},c)-y_{i}||_{2}^{2}\right)-\sum_{i=1}^{m}\exp\left(- \lambda_{2}||f(t_{i},c)-y_{i}||_{2}^{2}\right)\overline{\mathrm{E}^{2}} = 0,\] (24) \[w-\frac{1}{Q(c,\lambda_{2})}\cdot\exp\left(-\lambda_{2}\left((I_{m}\otimes e _{s})^{\top}(f(t,c)-y)^{2}\right)\right) = 0. \tag{25}\] Before facing the question of how to solve the system numerically, a few remarks are in order: * (23) is nothing but the normal linear system one would get when handling the least squares problem with constant weights (see (7)). It can be therefore expressed as the overdetermined system (8) which has to be solved in the least squares sense; * (24) is a scalar equation that, for a given vector \(c\), may be easily solved with respect to the Lagrange multiplier \(\lambda_{2}\) via a Newton or Newton-like iteration; * equation (25) is explicit with respect to the unknown \(w\), for given \(\lambda_{2}\) and \(c\). Therefore, a quite natural technique to solve the nonlinear system (23)-(25) is yielded by the hybrid iteration summarized in Algorithm 1 (_tol_ is an input tolerance for the stopping criterion). In order to improve the convergence properties of the nonlinear scheme, we employ a continuation technique on \(\overline{\mathrm{E}^{2}}\). In more detail, we define a sequence of increasing reduction factors \[1=r_{0}<r_{1}<r_{2}<\cdots<r_{N}=\frac{\overline{\mathrm{E}^{2}}_{\mathrm{uw}} }{\overline{\mathrm{E}^{2}}}\] and the corresponding sequence of mean squared errors \[\overline{\mathrm{E}^{2}_{j}}=\frac{1}{r_{j}}\,\overline{\mathrm{E}^{2}}_{ \mathrm{uw}},\quad j=0,\ldots,N, \tag{26}\] so that \(\overline{\mathrm{E}^{2}_{0}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\) and \(\overline{\mathrm{E}^{2}_{N}}=\overline{\mathrm{E}^{2}}\). Then, for \(j=0,\ldots,N\), we perform lines 2-5 of Algorithm 1 taking care that the output quantities \(c^{(k)}\), \(\lambda_{2}^{(k)}\), \(W^{(k)}\) obtained at step \(j\) are used as input parameters for the subsequent step \(j+1\). A further relevant motivation for employing such a continuation technique is that it generates a discrete family of homotopic curves, parametrized by \(\overline{\mathrm{E}_{\mathrm{j}}^{2}}\), admitting the OLS and the MEWLS solutions as initial and final configurations respectively. Each element in this family brings a specific weights distribution (and entropy value) and acts as a starting guess for the subsequent approximation curve. Therefore, the overall procedure can be interpreted as an improvement on the OLS approximation in that, by reducing the mean squared error progressively, it smoothly deforms the initial shape of the spline curve to get rid of outliers. An illustration is provided in the first example of the next section. 
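Since Algorithm 1 itself is not reproduced here, the following sketch shows one possible realization of the hybrid iteration for (23)-(25) with the continuation (26), for scalar data (\(s=1\)). It is a schematic reconstruction, not the authors' Matlab implementation; in particular, the bracket passed to the scalar root solver for (24) is assumed to contain a sign change.

```python
import numpy as np
from scipy.optimize import brentq

def _wls(A, y, w):
    # weighted least squares solve of (8), as in the sketch of Section 2
    sqw = np.sqrt(w)
    return np.linalg.lstsq(sqw[:, None] * A, sqw * y, rcond=None)[0]

def mewls(A, y, r_factors, tol=1e-8, max_iter=100):
    """Maximum-entropy weighted least squares fit: a sketch of the hybrid
    iteration for (23)-(25), with continuation over the reduction factors."""
    m = len(y)
    w = np.full(m, 1.0 / m)              # uniform weights: the OLS starting point
    c = _wls(A, y, w)
    E2_uw = np.mean((A @ c - y) ** 2)    # OLS mean squared error, Eq. (10)
    for r in r_factors:                  # increasing sequence, Eq. (26)
        E2 = E2_uw / r
        for _ in range(max_iter):
            d2 = (A @ c - y) ** 2        # squared residuals
            # Eq. (24): scalar equation for lambda_2 (residuals shifted by
            # d2.min() for numerical stability; the roots are unchanged)
            g = lambda lam: np.sum((d2 - E2) * np.exp(-lam * (d2 - d2.min())))
            lam = brentq(g, 0.0, 1e12)
            w_new = np.exp(-lam * (d2 - d2.min()))
            w_new /= w_new.sum()         # Eq. (25), i.e. the weights (22)
            c = _wls(A, y, w_new)        # Eq. (23), solved as in (8)
            done = np.abs(w_new - w).sum() < tol
            w = w_new
            if done:
                break
    return c, w
```

The weight vector returned by this iteration can then be thresholded, as in (27) below, to split the dataset into inliers and outliers.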
Finally, it is worth noticing that the resulting weights may be exploited for classification purposes. Indeed, the original data set \(D\) may be split in two disjoint subsets: \(D=D_{1}\cup D_{2}\), where \(D_{1}\) contains the inliers while \(D_{2}\) identifies the outliers. To this end, given a small enough tolerance \(tol\), one can set, for example, \[D_{2}=\{(t_{i},y_{i})\in D\ |\ w_{i}<tol\cdot\max_{j}w_{j}\},\qquad D_{1}=D-D_{2}. \tag{27}\] ## 4 Numerical illustrations To showcase the potential of the MEWLS spline approximation, we present three numerical experiments using synthetic data points. The first experiment focuses on a spline function fitting problem, aiming at elucidating the continuation technique (26) and the use of (27) for the automatic detection of outliers. The second and third examples involve approximating a set of data points with a spline curve in the plane and in 3D space, respectively. All the numerical tests have been implemented in Matlab (R2023a) on a 3.6 GHz Intel Core i9 computer with 32 GB of memory. References to colors have been included for the online version of the manuscript. ### Example 1 We consider a dataset comprising 44 points in the square \([0,1]\times[0,1]\), out of which 32 closely follow a given profile, while the remaining 12 consistently deviate from it. To fit the data, we employ a spline of degree \(d=2\), defined on a regular and uniform knot sequence consisting of 20 nodes covering the interval \([0,1]\). In the top-left picture of Figure 1, we observe the data set along with the ordinary least squares approximation. We see that the OLS spline approximation fails to accurately reproduce the correct profile due to the strong influence of the \(12\) anomalous points. Therefore, we aim to improve the approximation by decreasing the weighted mean squared error while utilizing the maximal-entropy argument to make an optimal weights selection. To this end, we consider a sequence of reduction factors distributed over the interval \([1,500]\). For graphical clarity, we set \(N=50\) in (26) to mimic the behavior of formula (13), where the variable \(r\) continuously varies within the specified interval. Algorithm 1 generates a sequence of \(50\) homotopic functions with parameter \(r\in[1,500]\). The top-right picture of Figure 1 displays two such functions, one corresponding to \(r=2\) (\(\overline{\mathrm{E}^{2}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\,/2\), solid line), and the other to \(r=4\) (\(\overline{\mathrm{E}^{2}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\,/4\), dashed line). As the reduction factor \(r\) increases, the maximum entropy principle deforms the shape of the original OLS solution by adjusting the weights to ensure that the maximum number of points contribute while still adhering to the mean squared error constraint. In the bottom-left picture of Figure 1 we can see the final shape of the approximating spline, corresponding to \(r=500\) (\(\overline{\mathrm{E}^{2}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\,/500\)). We can see that it nicely conforms to the profile underlying the given data set. The use of formula (27) with \(tol=10^{-4}\) correctly detects the \(12\) outliers, which are surrounded by small circles in the picture. Finally, the bottom-right picture of Figure 1 illustrates the behavior of the entropy (12) as a function of the scaling factor \(r\). As expected, reducing \(\overline{\mathrm{E}^{2}}\) results in a decrease of the entropy associated with the weights distribution.
The appropriate choice of \(\overline{\mathrm{E}^{2}}\) depends on the context and, in particular, on the expected accuracy of the model in the absence of outliers. A suitable value for \(\overline{\mathrm{E}^{2}}\) may be identified automatically by examining the rate of change in the spline approximations as the scaling factor \(r\) increases, which is closely related to the behavior of the entropy function \(H\) as a function of \(r\). This aspect will be the subject of future research. ### Example 2 We address the problem of approximating the arithmetic spiral defined by the equations \[\left\{\begin{array}{rcl}x(t)&=&(a+bt)\cos(t),\\ y(t)&=&(a+bt)\sin(t),\end{array}\right.\] with \(a=1\), \(b=4\), \(t\in[-a/b,4\pi]\), which ensures that the spiral originates at the origin. To this end, we create a data set consisting of \(N=200\) points sampled along the spiral and then introduce random noise to \(100\) of them, specifically targeting the odd-numbered ones. In more detail, after setting \(h=4\pi/(N-1)\), our data set is defined as follows: \[\left\{\begin{array}{rcll}t_{i}&=&(i-1)h,&i=1,\ldots,N,\\ (x_{i},y_{i})&=&(x(t_{i}),y(t_{i})),&\text{if $i$ is even},\\ (x_{i},y_{i})&=&(x(t_{i})+\delta_{x}^{(i)},y(t_{i})+\delta_{y}^{(i)}),&\text{if $i$ is odd},\end{array}\right.\] where \(\delta_{x}^{(i)},\delta_{y}^{(i)}\in\mathcal{N}(0,\sigma^{2})\) are random variables distributed normally with mean \(0\) and variance \(\sigma^{2}=30\). Since, for the specified range of \(t\), the spiral is entirely enclosed in the square \(S=[-60,60]^{2}\), for visualization clarity, we iterate the generation of the values \(\delta_{x}^{(i)},\delta_{y}^{(i)}\) until \((x_{i},y_{i})\) falls within \(S\), for each odd index \(i\). The left picture of Figure 2 portrays the dataset \(\left(x_{i},y_{i}\right)_{i=1}^{N}\) along with the spline approximations using ordinary least squares (dashed line) and maximum entropy weighted least squares (solid line). Notably, while the OLS approximation struggles to capture the true spiral due to the presence of outliers, the MEWLS spline curve faithfully reproduces the unperturbed spiral \((x(t),y(t))\). ### Example 3 We replicate a procedure akin to the one executed in the prior spiral example but turn our attention to a circular helix defined by the equations \[\left\{\begin{array}{rcl}x(t)&=&r\cos(2\pi t),\\ y(t)&=&r\sin(2\pi t),\\ z(t)&=&ct,\end{array}\right.\] with \(r=2,c=1\) and \(t\in[-4,4]\), so the helix is enclosed in the cube \([-4,4]^{3}\). We begin with a data set \(\left(x_{i},y_{i},z_{i}\right)_{i=1}^{N}\) consisting of \(N=400\) points sampled along the helix but, differently from what was done in Example 4.2, we now introduce random noise to a randomly chosen subset of these points. More precisely, we first compute a subset \(\Omega\) obtained by randomly extracting \(M\) points from the set of indices \(\{1,2,\ldots,N\}\). Then we define \[\left\{\begin{array}{rcl}t_{i}&=&(i-1)h,&i=1,\ldots,N,\\ (x_{i},y_{i},z_{i})&=&(x(t_{i}),y(t_{i}),z(t_{i})),&\text{if $i\not\in\Omega$},\\ (x_{i},y_{i},z_{i})&=&(x(t_{i})+\delta_{x}^{(i)},y(t_{i})+\delta_{y}^{(i)},z(t_{i})+\delta_{z}^{(i)}),&\text{if $i\in\Omega$}.\end{array}\right.\] Here, \(\delta_{x}^{(i)},\delta_{y}^{(i)},\delta_{z}^{(i)}\in\mathcal{N}(0,20)\) represent random variables drawn from a normal distribution with mean \(0\) and variance \(\sigma^{2}=20\).
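The perturbation scheme shared by Examples 2 and 3 (add Gaussian noise to selected points, resampling until the perturbed point stays inside the bounding box) is straightforward to script. A sketch for the Example 2 spiral, with an arbitrary random seed:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, N, sigma = 1.0, 4.0, 200, np.sqrt(30.0)
h = 4.0 * np.pi / (N - 1)
t = np.arange(N) * h
x = (a + b * t) * np.cos(t)
y = (a + b * t) * np.sin(t)
for i in range(0, N, 2):        # odd-numbered points in the paper's 1-based indexing
    while True:                 # resample until the point stays inside S = [-60, 60]^2
        dx, dy = rng.normal(0.0, sigma, size=2)
        if max(abs(x[i] + dx), abs(y[i] + dy)) <= 60.0:
            x[i] += dx
            y[i] += dy
            break
```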
Again, for visualization clarity, for each index \(i\in\Omega\) we iterate the generation of the perturbation values \(\delta_{x}^{(i)},\delta_{y}^{(i)},\delta_{z}^{(i)}\) until \(\left(x_{i},y_{i},z_{i}\right)\) falls within the cube \(S=[-4,4]^{3}\). The right picture of Figure 2 displays the dataset \(\left(x_{i},y_{i},z_{i}\right)_{i=1}^{N}\) along with the spline approximations using ordinary least squares (irregular solid line) and maximum entropy weighted least squares (helix-shaped solid line). Again, the MEWLS spline curve faithfully reproduces the shape of the original helix. The results obtained in both this example and the previous one underscore the effectiveness of MEWLS in successfully detecting and eliminating outliers from highly noisy datasets. Further instances based on real data are illustrated in the next section. ## 5 A few applications to real data ### Approximating the main sequence in a Hertzsprung-Russell diagram The Hertzsprung-Russell (HR) diagram is a graphical representation of stars, mapping the correlation between their absolute magnitudes or luminosities and their color indices or temperatures, allowing astronomers to discern distinct patterns in stellar evolution [17; 18]. The absolute magnitude of a star is a measure of its intrinsic brightness or luminosity, unaltered by its distance from Earth. It is the apparent magnitude (brightness as seen from Earth) that a star would have if it were located at a standard distance of 10 parsecs (about 32.6 light-years) away. Essentially, the absolute magnitude allows astronomers to compare the luminosities of stars irrespective of their varying distances from us. The B-V color index is a parameter that characterizes a star's color and temperature. It is the difference between the star's apparent magnitudes in the blue (B) and visual (V) parts of the electromagnetic spectrum. Blue stars have negative B-V values, while redder stars have positive values. This index is crucial in categorizing stars by their spectral types, indicating whether a star is hotter (blue) or cooler (red). Together, the absolute magnitude and B-V color index are vital tools in understanding stars' properties, evolutionary stages, and positions within the Hertzsprung-Russell diagram. As an example, the left picture of Figure 3 shows the HR diagram for the Yale Trigonometric Parallax Dataset [19], comprising more than 6000 catalogued stars. This astronomical resource provides measurements of stellar distances using the trigonometric parallax method, a technique employed to determine the distance to a star by measuring its apparent shift in position against more distant background stars as the Earth orbits the Sun. Besides observed parallaxes (in arcsec), the Yale catalogue also includes the B-V color index and the apparent V magnitude. The absolute magnitude is then obtained by means of the formula \[\mathrm{absolute\ magnitude}=\mathrm{apparent\ V\ magnitude}+5(\log_{10}(\mathrm{observed\ parallax})+1).\] At its core, the diagram features a continuous and well-defined band known as the main sequence. This band comprises the vast majority of genuine stars in the cosmos, including our own Sun, with an absolute magnitude of 4.8 and a B-V color index of 0.66. Located in the lower-left portion of the diagram are the white dwarfs, while the upper part accommodates the subgiants, giants, and supergiants. This layout visually captures the diverse stages of stellar evolution, with the white dwarfs representing stars in the final stage of their evolution.
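The magnitude conversion above is a one-liner; a sketch follows (the argument names are illustrative, not those of the Yale catalogue files):

```python
import numpy as np

def absolute_magnitude(apparent_V, parallax_arcsec):
    """M = m_V + 5 (log10(parallax) + 1), with the parallax in arcsec."""
    return apparent_V + 5.0 * (np.log10(parallax_arcsec) + 1.0)

# Sanity check: a star at 10 pc (parallax 0.1'') has M equal to m_V
print(absolute_magnitude(5.0, 0.1))   # -> 5.0
```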
One of the diagram's remarkable applications is in determining the distance between Earth and distant celestial objects like star clusters or galaxies. In this example, our aim is to accurately approximate the main sequence's shape using an appropriate spline curve and further categorize stars through color assignments. To achieve this, we employed a spline of degree \(d=3\) along with a regular knot sequence \[t=[0,0,0,0,0.286,0.397,0.658,0.757,1,1,1,1].\] The left picture in Figure 3 displays the outcome of the ordinary least squares approximation (indicated by the blue line in the color image). This method evidently fails to accurately replicate the main sequence's distinctive form due to the presence of giants and white dwarfs. Conversely, the maximal-entropy least squares approximation successfully captures the main sequence's true shape. By determining the distribution of weights based on the entropy-driven procedure, we assigned distinct color gradients to each star. This color differentiation effectively highlights the discrepancies between these stars and those belonging to the main sequence. As the corresponding weights decrease, the intensity of magenta and yellow pixels progressively intensifies. This approach not only improves the accuracy of the main sequence representation but also facilitates the identification of stars that deviate from its expected characteristics. ### Detecting train rails in a railway infrastructure and surrounding environment In the present example, we delve into a segmentation task performed on a point cloud that portrays a railway environment, captured using a terrestrial laser scanning system. An instance of such a scenario is presented in Figure 4, which will serve as the subject of our examination. Here, we observe a curved railway emerging from a tunnel, enveloped by dense vegetation. Our aim revolves around identifying the train rails within this scenario and approximating their shape using a suitable spline curve. Conducting such an analysis can yield valuable insights into the transportation system and aid in identifying potential issues that could impact its operational effectiveness (see [20] and reference therein). It is worth underscoring that the essence of this example lies in testing the entropy-based approach on a highly noisy dataset, where the set \(D_{1}\) of inliers is significantly dwarfed by the set \(D_{2}\) of outliers. As a result, the technique showcased in this example serves as a proof of concept rather than a definitive solution for the intended problem (for a more effective identification of the rails, refer to works such as [21; 22; 23]). A point cloud is a data set that realizes a digital representation of a physical environment or object in a three-dimensional space. It is arranged in a structured array housing fields that store various attributes for each point within the cloud. These attributes encompass 3D coordinates, distance ranges, color information, intensity measurements, and potentially other geometric or spectral data. We will utilize the intensity parameter, a measure of the reflectivity of the material of the object containing the sample point, to identify reflective elements like train rails. Within the segmentation procedure, the intensity field frequently comes into play for the purpose of condensing the initial array of data points into a more fitting subset of points pertinent to the analysis. 
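The intensity-based reduction mentioned here amounts to a threshold and a planar projection. A minimal sketch, assuming a structured array with hypothetical field names x, y, z and intensity:

```python
import numpy as np

def filter_rails(cloud, max_intensity=65):
    """Keep low-intensity (rail-like) returns and project onto the ground plane.
    'cloud' is a structured array with fields 'x', 'y', 'z', 'intensity'
    (hypothetical names)."""
    mask = cloud['intensity'] <= max_intensity
    return np.column_stack((cloud['x'][mask], cloud['y'][mask]))
```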
In fact, noteworthy structures, including train rails and overhead wires, exhibit resemblances in their intensity attributes. This correspondence arises from the inherent connection between a surface's reflective characteristics and its constituent material. For instance, train rails are predominantly composed of steel, leading to nearly uniform intensity readings from the laser sensor along the rail's length. By relying on the intensity parameter as a filtering criterion, we can effectively discern the majority of points situated on the rails. Building upon the analysis conducted in [23] for a point cloud of similar nature, our approach to reduce the size of the original point cloud, while retaining the majority of rail points, involves extracting those with intensity values not exceeding 65. Additionally, due to the level nature of the terrain under consideration, we omit the vertical component of the points and instead focus on a two-dimensional projection of the filtered point cloud. This projection is illustrated in the leftmost image of Figure 5 and forms a data set comprising 304911 points. The lower-right section of the image corresponds to the segment of the rails situated within the tunnel. This region exhibits a much cleaner appearance compared to the area outside the tunnel. Indeed, in the external environment, a considerable number of points associated with vegetation are regrettably retained even after the filtering procedure. This introduces a notable degree of noise into the data. The right image in Figure 5 displays the ordinary least squares spline approximation curve (solid blue line). By referring to equations (1)-(2), this curve is obtained through a spline of degree \(d=2\) and \(n=15\), utilizing a uniform \((d+1)\)-regular knot distribution. Evidently, the OLS approximation does not deviate that much from the shape traced by the rail tracks, making it a suitable initial estimate within Algorithm 1 for computing the maximal-entropy weighted least squares spline approximation curve. This MEWLS curve is depicted in the same graph as a dashed red line. It is clear that the MEWLS spline closely captures the profile of the upper rail, demonstrating a very high accuracy. An inspection of the weights through formula (27) reveals that, in this specific example, the number of outliers exceeds the number of inliers by more than six times. A comparable process can be subsequently applied to acquire the approximation for the lower rail. This involves eliminating the points related to the upper rail from the dataset and then performing Algorithm 1 again (we omit displaying this latter approximation for visual clarity). In conclusion, the MEWLS approach effectively enhances the accuracy of the initial OLS approximation and leads to a precise parametric representation of the rails. ### Detecting and scoring outliers in an environmental data set The final test case is drawn from a study in [6] and explores an environmental dataset accessible through the R-package _openair_ [24]. This dataset encompasses hourly readings of wind speed, wind direction, and concentrations of pollutants such as NO\({}_{x}\), NO\({}_{2}\), O\({}_{3}\), PM\({}_{10}\), SO\({}_{2}\), CO, and PM\({}_{2.5}\) recorded at Marylebone (London) spanning from January 1, 1998, to June 23, 2005. For comparison purposes, we conform to the choice in [6] and focus on a specific subset of this dataset, only comprising the O\({}_{3}\) concentrations during December 2002.
This particular segment encompasses a total of 744 observations, while also featuring several instances of missing data points. The dots depicted in Figure 6 provide a visual representation of the O\({}_{3}\) concentrations, measured in parts per billion (ppb), over the specified time frame. To approximate this univariate time series, we employ a spline function of degree \(d=3\), defined on a uniform \((d+1)\)-regular knot distribution. In order to capture the erratic nature of the data, we opt for a number of coefficients \(n\) equal to half the data points' count. Figure 6 only displays the MEWLS approximation (red solid line). In contrast to the approach adopted in prior examples, our strategy for obtaining the approximating spline varies here. Rather than predefining the reduction factor, we pursue a distinct perspective. Specifically, we establish the number of outlier candidates, denoted as \(N\), and iteratively reduce the \(\overline{\mathrm{E}^{2}}\) value until \(N\) data points are encompassed within the outlier set \(D_{2}\). This methodology introduces a natural ranking within \(D_{2}\), assigning scores to each prospective outlier. This is readily accomplished using (27), where the \(i\)th point entering \(D_{2}\) receives a score of \(i\). In Figure 6, outliers are denoted by points enclosed in green circles, each indicating the corresponding score. The outcomes obtained align with those presented in [6], particularly those based upon the extreme value theory. This systematic scoring approach has the potential to streamline the decision-making process, aiding specialists in identifying the data points that merit closer investigation or intervention. ## 6 Conclusions In real-world scenarios, data quality directly impacts the performance of subsequent analytical processes, so that the importance of effective preprocessing techniques and robust fitting procedures has become increasingly evident. In this context, we have introduced an entropy-based weighting methodology for determining spline approximations of multivariate time series. In contrast to the ordinary least squares approach, which displays sensitivity to corrupted data, the MEWLS spline approximation effectively mitigates the impact of outliers and noise even when handling large and highly noisy datasets. Its ability to accurately extract meaningful information from noisy backgrounds has been illustrated through various synthetic and real-world examples. One limitation when compared to the OLS approach is that, even for linear models, the resulting algebraic system becomes nonlinear and its solution requires the implementation of an appropriate iterative scheme. In this regard, the OLS solution can serve as an initial estimate. The numerical illustrations underscore that the MEWLS solution significantly outperforms the classical OLS procedure. Nonetheless, the efficient resolution of this nonlinear system warrants dedicated investigation and will be a focus of future research. ## Acknowledgements Felice Iavernaro acknowledges the contribution of the National Recovery and Resilience Plan, Mission 4 Component 2 - Investment 1.4 - NATIONAL CENTER FOR HPC, BIG DATA AND QUANTUM COMPUTING - Spoke 5 - Environmental and Natural Disasters, under the NRRP MUR program funded by the European Union - NextGenerationEU - (CUP H93C22000450007). Luigi Brugnano and Felice Iavernaro thank the GNCS for its valuable support under the INDAM-GNCS project CUP_E55F22000270001.
Weighted least squares spline approximation of a noisy dataset. We maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Acting on this error yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure by assigning them a very small weight. We discuss the use of both spline functions and spline curves. A number of numerical illustrations have been included to disclose the potentialities of the maximal-entropy approach in different application fields.
2309.06967
On motives of parabolic Higgs bundles and parabolic connections
Let $X$ be a compact Riemann surface of genus $g \geq 2$ and let $D\subset X$ be a fixed finite subset. We consider the moduli spaces of parabolic Higgs bundles and of parabolic connections over $X$ with parabolic structure over $D$. For generic weights, we show that these two moduli spaces have equal Grothendieck motivic classes and that their $E$-polynomials are the same. We also show that the Voevodsky and Chow motives of these two moduli spaces are equal. Moreover, we show that the Grothendieck motivic classes and the $E$-polynomials of the parabolic Higgs moduli and of the parabolic Hodge moduli are closely related. Finally, we consider the moduli spaces with fixed determinants and show that the above results also hold in the fixed determinant case.
Sumit Roy
2023-09-13T14:03:08
http://arxiv.org/abs/2309.06967v2
# On motives of parabolic Higgs bundles and parabolic connections

###### Abstract.

Let \(X\) be a compact Riemann surface of genus \(g\geq 2\) and let \(D\subset X\) be a fixed finite subset. We consider the moduli spaces of parabolic Higgs bundles and of parabolic connections over \(X\) with parabolic structure over \(D\). For generic weights, we show that these two moduli spaces have equal Grothendieck motivic classes and that their \(E\)-polynomials are the same. We also show that the Voevodsky and Chow motives of these two moduli spaces are equal. Moreover, we show that the Grothendieck motivic classes and the \(E\)-polynomials of the parabolic Higgs moduli and of the parabolic Hodge moduli are closely related. Finally, we consider the moduli spaces with fixed determinants and show that the above results also hold in the fixed determinant case.

Key words and phrases: Motives, Grothendieck motives, Voevodsky motives, Chow motives, Higgs bundles, Parabolic connections, Hodge moduli, \(E\)-polynomial.
2020 Mathematics Subject Classification: 14C15, 14C30, 14D20, 14D23, 70G45.
E-mail: sumit@ibs.re.kr
Address: Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang 37673, Korea.

## 1. Introduction

In this paper we consider the moduli spaces of parabolic Higgs bundles ([4], [6], [7], [8], [9]) and parabolic connections ([9], [17]) over a compact Riemann surface \(X\) of genus \(g\geq 2\). These moduli spaces carry several geometric structures and play a role in many areas, including algebraic geometry, differential geometry, mirror symmetry, mathematical physics, and Langlands duality. We prove equalities of several motivic classes (namely Grothendieck motives, Voevodsky motives and Chow motives) for these two moduli spaces. We also prove that their \(E\)-polynomials are equal. A _parabolic bundle_ \(E_{*}\) over \(X\) is a holomorphic vector bundle \(E\) over \(X\) together with a weighted flag over a fixed finite set \(D\subset X\), called the _parabolic structure_. These weights are real numbers between \(0\) and \(1\). A _parabolic Higgs bundle_ is a pair \((E_{*},\Phi)\), where \(E_{*}\) is a parabolic bundle and \(\Phi:E_{*}\to E_{*}\otimes K(D)\) is a parabolic Higgs field, where \(K\) is the canonical bundle over \(X\). On the other hand, a _parabolic connection_ is a pair \((E_{*},\mathcal{D})\) where \(\mathcal{D}:E\to E\otimes K(D)\) is a logarithmic connection on the underlying vector bundle satisfying certain compatibility conditions. In [9], Simpson considered an algebraic family over \(\mathbb{C}\), which he called the Hodge moduli space, such that the fibres over \(0\) and \(1\) are exactly the moduli spaces of Higgs bundles and of connections respectively. This family also gives rise to a homeomorphism between these two moduli spaces, famously known as the non-abelian Hodge correspondence (see [8], [9], [10]). These two moduli spaces have singularities in general. However, if the rank and degree are coprime, then these moduli spaces are smooth. For coprime rank and degree, Hausel and Thaddeus in [20, Theorem 6.2] proved that the \(E\)-polynomials of these two moduli spaces are equal and that they have pure Hodge structures. There is a natural \(\mathbb{C}^{*}\)-action on the Hodge moduli space, which makes it a semiprojective variety [27]. Using the smoothness and semiprojectivity of the Hodge moduli space, Hoskins and Lehalleur recently established in [31] a motivic version of the non-abelian Hodge correspondence.
In that paper, they proved that the moduli of Higgs bundles and of connections have equal motivic classes in various setups. Later, Fedorov, A. Soibelman and Y. Soibelman in [30] obtained motivic Donaldson-Thomas invariants of these two moduli spaces in the parabolic setting. In this paper, we consider three types of motives, namely Grothendieck motives, Voevodsky motives and Chow motives. Let \(\mathcal{V}_{\mathbb{C}}\) denote the category of complex quasi-projective varieties. Let \(K(\mathcal{V}_{\mathbb{C}})\) denote the _Grothendieck ring of varieties_ and let \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) be its dimensional completion. Let \(Z\) be a quasi-projective variety. Then \([Z]\in\hat{K}(\mathcal{V}_{\mathbb{C}})\) is called the _motive_ of \(Z\). If \(Z\) is \(n\)-dimensional with pure Hodge structure, then the corresponding \(E\)-polynomial is defined by \[E(Z)=E(Z)(u,v)=\sum_{p,q=0}^{n}(-1)^{p+q}h_{c}^{p,q}(Z)u^{p}v^{q}.\] Let \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\), \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) and \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) be the moduli space of parabolic Higgs bundles, the moduli space of parabolic connections and the parabolic Hodge moduli space of rank \(r\), degree \(d\) and generic weights \(\alpha\) over \(X\), respectively. We prove the following theorem.

**Theorem 1.1**.: _In \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) we have the following motivic equalities_ \[[\mathcal{M}_{\rm Higgs}(r,d,\alpha)]=[\mathcal{M}_{\rm pc}(r,d,\alpha)]\,\,\,{\rm and}\,\,\,[\mathcal{M}_{\rm Hod}(r,d,\alpha)]=\mathbb{L}[\mathcal{M}_{\rm Higgs}(r,d,\alpha)].\] _Therefore, we have the following equalities of the \(E\)-polynomials_ \[E(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=E(\mathcal{M}_{\rm pc}(r,d,\alpha))\,\,\,{\rm and}\,\,\,E(\mathcal{M}_{\rm Hod}(r,d,\alpha))=uvE(\mathcal{M}_{\rm Higgs}(r,d,\alpha)).\]

Here \(\mathbb{L}\) is the Lefschetz motive. To prove this theorem, we first prove that the moduli spaces \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) and \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) are semiprojective. For details of the proof see Theorem 4.3. We then consider Voevodsky's category of geometric motives \(DM_{\rm gm}(\mathbb{C};R)\) over \(\mathbb{C}\) with coefficients in a ring \(R\). We denote by \(M_{\rm gm}(X)_{R}\) the geometric motive of the smooth variety \(X\) with coefficients in \(R\). Also, we consider the category of Chow motives \(\mathbf{Chow}^{\rm eff}(\mathbb{C};R)\), which has a fully faithful embedding into Voevodsky's category of effective geometric motives \(DM_{\rm gm}^{\rm eff}(\mathbb{C};R)\subset DM_{\rm gm}(\mathbb{C};R)\). We denote by \(C(X)_{R}\) the Chow motive of a smooth variety \(X\) with coefficients in \(R\). Then, using the semiprojectivity, we prove the following motivic equalities.

**Theorem 1.2**.: _For any ring \(R\), we have the following isomorphisms of motives,_ \[M_{\rm gm}\big{(}\mathcal{M}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong M_{\rm gm}\big{(}\mathcal{M}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in DM_{\rm gm}(\mathbb{C};R)\] _and_ \[C\big{(}\mathcal{M}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong C\big{(}\mathcal{M}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in\textbf{Chow}^{\rm eff}(\mathbb{C};R).\]

For details of the proof, see Theorems 5.1 and 5.2. Finally, in the last section we consider the moduli spaces with fixed determinants and we prove the above two theorems in the fixed determinant setup. See Theorems 6.1, 6.2 and 6.3.

## 2. Preliminaries
### Parabolic bundles

Let \(X\) be a compact Riemann surface of genus \(g\geq 2\) and let \(D=\{p_{1},\dots,p_{n}\}\subset X\) be a fixed subset of \(n\geq 1\) distinct marked points of \(X\).

**Definition 2.1**.: A _parabolic bundle_ \(E_{*}\) of rank \(r\) (assuming \(r\geq 2\)) over \(X\) is a holomorphic vector bundle \(E\) of rank \(r\) over \(X\) endowed with a parabolic structure along the divisor \(D\), i.e. for every point \(p\in D\), we have 1. a filtration of subspaces \[E_{p}\eqqcolon E_{p,1}\supsetneq E_{p,2}\supsetneq\cdots\supsetneq E_{p,r_{p}}\supsetneq E_{p,r_{p}+1}=\{0\},\] 2. a sequence of real numbers satisfying \[0\leq\alpha_{1}(p)<\alpha_{2}(p)<\cdots<\alpha_{r_{p}}(p)<1,\] where \(r_{p}\) is a natural number between \(1\) and \(r\).

For all \(i=1,\ldots,r_{p}\), the real number \(\alpha_{i}(p)\) is called the _parabolic weight_ associated to the subspace \(E_{p,i}\). For a fixed parabolic structure we denote the collection of all parabolic weights by \(\alpha=\{(\alpha_{1}(p),\alpha_{2}(p),\ldots,\alpha_{r_{p}}(p))\}_{p\in D}\). The parabolic structure is said to have _full flags_ if \[\dim(E_{p,i}/E_{p,i+1})=1\] for all \(i=1,\ldots,r_{p}\) and for all \(p\in D\), or equivalently \(r_{p}=r\) for all \(p\in D\). The _parabolic degree_ of a parabolic bundle \(E_{*}\) is defined as \[\operatorname{pardeg}(E_{*})\coloneqq\deg(E)+\sum_{p\in D}\sum_{i=1}^{r_{p}}\alpha_{i}(p)\cdot\dim(E_{p,i}/E_{p,i+1})\] and the _parabolic slope_ of \(E_{*}\) is defined as \[\mu_{\operatorname{par}}(E_{*})\coloneqq\frac{\operatorname{pardeg}(E_{*})}{r}.\] In [11], Maruyama and Yokogawa gave an alternative definition of parabolic bundles in terms of coherent sheaves, which is useful for defining the notions of parabolic tensor product and parabolic dual.

**Definition 2.2**.: A _parabolic homomorphism_ \(\phi:E_{*}\to E_{*}^{\prime}\) between two parabolic bundles is a homomorphism of underlying vector bundles that satisfies the following: at each \(p\in D\) we have \[\alpha_{i}(p)>\alpha_{j}^{\prime}(p)\implies\phi(E_{p,i})\subseteq E_{p,j+1}^{\prime}.\] Furthermore, we call such a homomorphism _strongly parabolic_ if \[\alpha_{i}(p)\geq\alpha_{j}^{\prime}(p)\implies\phi(E_{p,i})\subseteq E_{p,j+1}^{\prime}\] for every \(p\in D\).

A parabolic subbundle \(F_{*}\subset E_{*}\) is a holomorphic subbundle \(F\subset E\) of the underlying vector bundle together with the induced parabolic structure, i.e. the one obtained by taking the appropriate intersections. A parabolic bundle \(E_{*}\) is called _stable_ (resp. _semistable_) if every nonzero proper subbundle \(F_{*}\subset E_{*}\) satisfies \[\mu_{\operatorname{par}}(F_{*})<\mu_{\operatorname{par}}(E_{*})\;\;(\text{resp. }\;\;\leq).\] The moduli space \(\mathcal{M}(r,d,\alpha)\) of semistable parabolic bundles over \(X\) of fixed rank \(r\), degree \(d\) and parabolic structure \(\alpha\) was constructed by Mehta and Seshadri in [4]. It is a normal projective complex variety of dimension \[\dim\mathcal{M}(r,d,\alpha)=r^{2}(g-1)+1+\frac{n(r^{2}-r)}{2},\] where the last summand comes from the assumption that the parabolic structure has full flags at each point \(p\in D\). The stable locus of \(\mathcal{M}(r,d,\alpha)\) is exactly the smooth locus of the moduli space. If the weights are generic, then semistability of a parabolic bundle implies stability, and therefore the moduli space \(\mathcal{M}(r,d,\alpha)\) is a smooth variety.

### Parabolic Higgs bundles

Let \(K\) be the canonical bundle on \(X\). We write \(K(D)\coloneqq K\otimes\mathcal{O}(D)\).
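Before introducing Higgs fields, here is a quick numerical illustration of the formulas above (not taken from the paper): let \(r=2\) with full flags at every \(p\in D\), so that every quotient \(E_{p,i}/E_{p,i+1}\) is one-dimensional. Then \[\operatorname{pardeg}(E_{*})=\deg(E)+\sum_{p\in D}\bigl(\alpha_{1}(p)+\alpha_{2}(p)\bigr),\qquad\mu_{\operatorname{par}}(E_{*})=\tfrac{1}{2}\operatorname{pardeg}(E_{*}),\] and the dimension formula specializes to \(\dim\mathcal{M}(2,d,\alpha)=4(g-1)+1+n\).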
**Definition 2.3**.: A _(strongly) parabolic Higgs bundle_ on \(X\) is a parabolic bundle \(E_{*}\) on \(X\) together with a Higgs field \(\Phi:E_{*}\to E_{*}\otimes K(D)\) such that \(\Phi\) is strongly parabolic, i.e. for all \(p\in D\) we have \(\Phi(E_{p,i})\subset E_{p,i+1}\otimes\left.K(D)\right|_{p}\).

There is also a notion of (non-strongly) parabolic Higgs bundle, where the Higgs field \(\Phi\) is only required to be a parabolic morphism; in this paper, however, the Higgs field is always assumed to be strongly parabolic. For a parabolic Higgs bundle \((E_{*},\Phi)\), a subbundle \(F_{*}\subset E_{*}\) is called \(\Phi\)_-invariant_ if \(\Phi\) preserves \(F_{*}\), i.e. \(\Phi(F_{*})\subseteq F_{*}\otimes K(D)\).

**Definition 2.4**.: A parabolic Higgs bundle \((E_{*},\Phi)\) is said to be _stable_ (resp. _semistable_) if every nonzero proper \(\Phi\)-invariant subbundle \(F_{*}\subset E_{*}\) satisfies \[\mu_{\rm par}(F_{*})<\mu_{\rm par}(E_{*})\;\;({\rm resp.}\;\;\leq).\]

Let \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) denote the moduli space of semistable parabolic Higgs bundles over \(X\) of rank \(r\), degree \(d\) and full flag parabolic structure \(\alpha\). It is a normal quasi-projective complex variety (see [12], [14]). The stable locus is exactly the smooth locus of the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\). Therefore, as before, if the weights are generic, then \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is a smooth variety. Notice that the moduli space \(\mathcal{M}(r,d,\alpha)\) is embedded in the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) as the locus of zero Higgs fields. By the parabolic version of Serre duality (see [12], [13]), the cotangent bundle of \(\mathcal{M}(r,d,\alpha)\) \[T^{*}\mathcal{M}(r,d,\alpha)\subset\mathcal{M}_{\rm Higgs}(r,d,\alpha)\] is an open dense subset of \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\). Thus the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) has dimension \[\dim\mathcal{M}_{\rm Higgs}(r,d,\alpha)=2\dim\mathcal{M}(r,d,\alpha)=2r^{2}(g-1)+2+n(r^{2}-r).\] Let \(t\in\mathbb{C}^{*}\) be a nonzero complex number. It can be checked that if \((E_{*},\Phi)\in\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) then so is \((E_{*},t\Phi)\), i.e. \((E_{*},t\Phi)\in\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is also a semistable parabolic Higgs bundle. Therefore, we have a standard \(\mathbb{C}^{*}\)-action on the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\), which is given by \[t\cdot(E_{*},\Phi)=(E_{*},t\Phi).\]

#### 2.2.1. Hitchin fibration

Let \((E_{*},\Phi)\) be a parabolic Higgs bundle of rank \(r\). Consider the characteristic polynomial of the Higgs field \(\Phi\), \[\det(x\cdot I-\Phi)=x^{r}+s_{1}x^{r-1}+\cdots+s_{r},\] where \(s_{i}={\rm tr}(\wedge^{i}\Phi)\in H^{0}(X,K(D)^{i})\) and \(K(D)^{i}\) denotes the tensor product of \(K^{\otimes i}\) and the \(i\)-th power of the line bundle corresponding to the divisor \(D\). Since \(\Phi\) is strongly parabolic, the residue of the parabolic Higgs field \(\Phi\) at each marked point \(p\in D\) is nilpotent with respect to the filtration. So the eigenvalues of \(\Phi\) vanish along the divisor \(D\), i.e. \(s_{i}\in H^{0}(X,K^{i}(D^{i-1}))\subset H^{0}(X,K(D)^{i})\). Hence, we have the _parabolic Hitchin fibration_ \[h:\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)\longrightarrow\mathcal{H}\coloneqq\bigoplus_{i=1}^{r}H^{0}(X,K^{i}(D^{i-1})),\] sending \((E_{*},\Phi)\) to the coefficients of the characteristic polynomial of the Higgs field \(\Phi\). Here the vector space \(\mathcal{H}\) is called the _Hitchin base_.
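For example (an illustrative computation, not from the paper), when \(r=2\) the fibration records \(s_{1}\in H^{0}(X,K)\) and \(s_{2}\in H^{0}(X,K^{2}(D))\), so \[\mathcal{H}=H^{0}(X,K)\oplus H^{0}(X,K^{2}(D)).\] Since \(g\geq 2\) and \(n\geq 1\), Riemann-Roch gives \(\dim H^{0}(X,K)=g\) and \(\dim H^{0}(X,K^{2}(D))=3g-3+n\), so \(\dim\mathcal{H}=4g-3+n\), which is exactly half of \(\dim\mathcal{M}_{\rm Higgs}(2,d,\alpha)=8g-6+2n\), as expected for a Hitchin-type fibration.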
Notice that the Hitchin fibration \(h\) does not depend on the parabolic structure, as it only depends on the Higgs field \(\Phi\) and the line bundle \(K(D)\). Also, \(h\) is a proper surjective morphism (see [15]). The Hitchin base \(\mathcal{H}\) also admits a natural \(\mathbb{C}^{*}\)-action, which is given by \[\mathbb{C}^{*}\times\mathcal{H} \longrightarrow\mathcal{H}\] \[(t,(s_{1},s_{2},\dots,s_{r})) \mapsto(ts_{1},t^{2}s_{2},\dots,t^{r}s_{r}).\]

### Parabolic connections

A _logarithmic connection_ on a vector bundle \(E\) over \(X\), singular along the divisor \(D\), is a \(\mathbb{C}\)-linear morphism \[\mathcal{D}:E\to E\otimes K(D)\] satisfying the Leibniz identity \[\mathcal{D}(fs)=f\mathcal{D}(s)+df\otimes s,\] where \(f\) is a locally defined holomorphic function on \(X\) and \(s\) is a locally defined holomorphic section of \(E\). For more details about logarithmic connections, see [2] and [5].

**Definition 2.5**.: A _parabolic connection_ on a parabolic bundle \(E_{*}\) over \(X\) is a logarithmic connection \(\mathcal{D}\) on the underlying vector bundle \(E\) satisfying the following conditions: 1. On every fibre \(E_{p}\) over each marked point \(p\in D\), the logarithmic connection \(\mathcal{D}\) satisfies \[\mathcal{D}(E_{p,i})\subseteq E_{p,i}\otimes K(D)|_{p}\] for all \(i=1,2,\dots,r\). 2. For all \(p\in D\) and for all \(i\in\{1,\dots,r\}\), the action of the residue \(Res(\mathcal{D},p)\in\mathrm{End}(E_{p})\) on the quotient \(E_{p,i}/E_{p,i+1}\) is multiplication by \(\alpha_{i}(p)\), where the \(\alpha_{i}(p)\) are the parabolic weights over the point \(p\).

Since the residue \(Res(\mathcal{D},p)\) preserves the filtration over \(p\), it acts on every quotient. A parabolic connection will be denoted by \((E_{*},\mathcal{D})\), and a parabolic subbundle \(F_{*}\subset E_{*}\) is called \(\mathcal{D}\)_-invariant_ if \(\mathcal{D}(F)\subseteq F\otimes K(D)\).

**Definition 2.6**.: A parabolic connection \((E_{*},\mathcal{D})\) is called _stable_ (resp. _semistable_) if every non-zero proper \(\mathcal{D}\)-invariant subbundle \(F_{*}\subset E_{*}\) satisfies \[\mu_{\mathrm{par}}(F_{*})<\mu_{\mathrm{par}}(E_{*})\ \ (\text{resp. }\leq).\]

The moduli space \(\mathcal{M}_{\mathrm{pc}}(r,d,\alpha)\) of semistable parabolic connections of fixed rank \(r\), degree \(d\) and generic weight type \(\alpha\) (assuming full flag structure) is a smooth quasi-projective irreducible complex variety of dimension \[\dim\mathcal{M}_{\mathrm{pc}}(r,d,\alpha)=2r^{2}(g-1)+2+n(r^{2}-r)\] (see [26, Theorem 2.1]).

### Parabolic \(\lambda\)-connections

Let \(\lambda\in\mathbb{C}\) be a complex number.

**Definition 2.7**.: A _parabolic \(\lambda\)-connection_ over \(X\) is a triple \((E_{*},\lambda,\nabla)\) where \(E_{*}\) is a parabolic bundle over \(X\) and \(\nabla:E\longrightarrow E\otimes K(D)\) is a \(\mathbb{C}\)-linear morphism between the underlying vector bundles satisfying 1. \(\nabla(fs)=f\nabla(s)+\lambda\cdot df\otimes s\), where \(f\) is a locally defined holomorphic function on \(X\) and \(s\) is a locally defined holomorphic section of \(E\). 2. On every fibre \(E_{p}\), the connection \(\nabla\) satisfies \[\nabla(E_{p,i})\subseteq E_{p,i}\otimes\left.K(D)\right|_{p}\] for all \(i=1,2,\ldots,r\). 3. For all \(p\in D\) and for all \(i\in\{1,\ldots,r\}\), the action of the residue \(Res(\nabla,p)\) on the quotient \(E_{p,i}/E_{p,i+1}\) is multiplication by \(\lambda\alpha_{i}(p)\).
A parabolic subbundle \(F_{*}\subset E_{*}\) is called \(\nabla\)_-invariant_ if \(\nabla(F)\subseteq F\otimes K(D)\).

**Definition 2.8**.: A parabolic \(\lambda\)-connection \((E_{*},\lambda,\nabla)\) is _stable_ (resp. _semistable_) if every non-zero proper \(\nabla\)-invariant subbundle \(F_{*}\subset E_{*}\) satisfies \[\mu_{\mathrm{par}}(F_{*})<\mu_{\mathrm{par}}(E_{*})\ \ (\text{resp. }\leq).\]

We denote by \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\) the moduli space of semistable parabolic \(\lambda\)-connections over \(X\) of fixed rank \(r\), degree \(d\) and weight type \(\alpha\). For generic weights, the moduli space \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\) is a smooth quasiprojective complex variety (see [28]). This moduli space is also called the parabolic Hodge moduli space. There is a canonical surjective algebraic map \[\text{pr}\coloneqq\text{pr}_{\lambda}:\mathcal{M}_{\text{Hod}}(r,d,\alpha)\longrightarrow\mathbb{C} \tag{2.1}\] defined by \(\text{pr}(E_{*},\lambda,\nabla)=\lambda\). Let us consider the case \(\lambda=0\), i.e. the moduli space of parabolic \(0\)-connections \((E_{*},0,\nabla)\). In this case the residue \(Res(\nabla,p)\) of the morphism \(\nabla:E\longrightarrow E\otimes K(D)\) at every \(p\in D\) acts as the zero map on the quotient \(E_{p,i}/E_{p,i+1}\). Therefore, for every \(p\in D\), we have \(\nabla(E_{p,i})\subseteq\left.E_{p,i+1}\otimes\left.K(D)\right|_{p}\right.\). Thus, a parabolic \(0\)-connection is equivalent to a strongly parabolic Higgs bundle. Hence, \[\text{pr}^{-1}(0)=\mathcal{M}_{\text{Higgs}}(r,d,\alpha)\subset\mathcal{M}_{\text{Hod}}(r,d,\alpha).\] The natural \(\mathbb{C}^{*}\)-action on the moduli space \(\mathcal{M}_{\text{Higgs}}(r,d,\alpha)\) extends to a \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\) defined by \[t\cdot(E_{*},\lambda,\nabla)=(E_{*},t\lambda,t\nabla). \tag{2.2}\] Similarly, if we consider the case \(\lambda=1\), then we get \[\text{pr}^{-1}(1)=\mathcal{M}_{\text{pc}}(r,d,\alpha)\subset\mathcal{M}_{\text{Hod}}(r,d,\alpha).\]

## 3. Semiprojectivity of the moduli spaces

In this section, we will prove the semiprojectivity of the moduli spaces \(\mathcal{M}_{\text{Higgs}}(r,d,\alpha)\) and \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\).

**Definition 3.1** (Semiprojective varieties).: Let \(V\) be a quasi-projective complex variety equipped with a \(\mathbb{C}^{*}\)-action \(z\mapsto t\cdot z\), \(z\in V,t\in\mathbb{C}^{*}\). We say that \(V\) is _semiprojective_ if it satisfies the following conditions: 1. for every \(x\in V\), the limit \(\lim_{t\to 0}t\cdot x\) exists in \(V\), 2. the fixed point locus \(V^{\mathbb{C}^{*}}\) under the \(\mathbb{C}^{*}\)-action is proper.

### Semiprojectivity of the moduli space of parabolic Higgs bundles

**Lemma 3.1**.: _The Hitchin map \(h:\mathcal{M}_{\rm Higgs}(r,d,\alpha)\to\mathcal{H}\) is \(\mathbb{C}^{*}\)-equivariant._

Proof.: Recall that the \(\mathbb{C}^{*}\)-action on the Hitchin base \(\mathcal{H}=\bigoplus_{i=1}^{r}H^{0}(X,K^{i}(D^{i-1}))\) is given by \[t\cdot(s_{1},s_{2},\dots,s_{r})=(ts_{1},t^{2}s_{2},\dots,t^{r}s_{r}).\] Let \(h\big{(}(E_{*},\Phi)\big{)}=(s_{1},s_{2},\dots,s_{r})\), i.e. \(s_{i}=\operatorname{tr}(\wedge^{i}\Phi)\) are the coefficients of the characteristic polynomial of \(\Phi\).
Then the characteristic polynomial of \(t\Phi\) is given by \[\det(x\cdot I-t\Phi)=x^{r}+ts_{1}x^{r-1}+t^{2}s_{2}x^{r-2}+\dots+t^{r}s_{r}.\] Therefore, \[h\big{(}(t\cdot(E_{*},\Phi))\big{)}=h\big{(}(E_{*},t\Phi)\big{)}=(ts_{1},t^{2}s_{2},\dots,t^{r}s_{r})=t\cdot(s_{1},s_{2},\dots,s_{r})=t\cdot h\big{(}(E_{*},\Phi)\big{)}.\] Hence, \(h\) is \(\mathbb{C}^{*}\)-equivariant.

To show the semiprojectivity of the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\), we have to show that the natural \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) satisfies the two conditions given in Definition 3.1.

**Lemma 3.2**.: _Let \((E_{*},\Phi)\) be a semistable parabolic Higgs bundle. Then the limit \(\lim_{t\to 0}(E_{*},t\Phi)\) exists in \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\)._

Proof.: Consider the map \[f:\mathbb{C}^{*}\longrightarrow\mathcal{M}_{\rm Higgs}(r,d,\alpha)\] given by \(t\mapsto(E_{*},t\Phi)\). By Lemma 3.1, we know that \(h\) is \(\mathbb{C}^{*}\)-equivariant. Therefore, we have \[\lim_{t\to 0}h\big{(}(E_{*},t\Phi)\big{)}=\lim_{t\to 0}t\cdot h\big{(}(E_{*},\Phi)\big{)}=0.\] Thus, the composition map \(F\coloneqq h\circ f:\mathbb{C}^{*}\longrightarrow\mathcal{H}\) extends to a map \(\hat{F}:\mathbb{C}\longrightarrow\mathcal{H}\). Since \(h\) is proper, by the valuative criterion of properness \(f\) also extends to a map \[\hat{f}:\mathbb{C}\longrightarrow\mathcal{M}_{\rm Higgs}(r,d,\alpha).\] Hence, the limit \(\lim_{t\to 0}(E_{*},t\Phi)\) exists in \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\).

**Lemma 3.3**.: _The fixed point locus under the \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is a proper subvariety of \(h^{-1}(0)\subset\mathcal{M}_{\rm Higgs}(r,d,\alpha)\)._

Proof.: Note that the only element fixed under the \(\mathbb{C}^{*}\)-action on the Hitchin base \(\mathcal{H}\) is the zero point, so the fixed point locus \(\mathcal{H}^{\mathbb{C}^{*}}\) is exactly the set \(\{0\}\). Since the Hitchin fibration \(h\) is \(\mathbb{C}^{*}\)-equivariant, the fixed point locus \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)^{\mathbb{C}^{*}}\) must be closed in \(h^{-1}(\mathcal{H}^{\mathbb{C}^{*}})=h^{-1}(0)\). Since \(h\) is proper, the fibre \(h^{-1}(0)\) is proper. Hence, \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)^{\mathbb{C}^{*}}\), being closed in a proper variety, is also proper.

**Proposition 3.4**.: _The moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is a smooth semiprojective complex variety._

Proof.: We know that for generic weights, the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is a smooth quasiprojective complex variety. Therefore the semiprojectivity of \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) follows from Lemmas 3.2 and 3.3.

### Semiprojectivity of parabolic Hodge moduli space

Recall that the \(\mathbb{C}^{*}\)-action on the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is given by \[t\cdot(E_{*},\lambda,\nabla)=(E_{*},t\lambda,t\nabla).\] To prove the semiprojectivity of \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) we need to check that this \(\mathbb{C}^{*}\)-action satisfies the two properties given in Definition 3.1.

**Lemma 3.5**.: _Let \((E_{*},\lambda,\nabla)\in\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) be a semistable parabolic \(\lambda\)-connection. Then the limit \(\lim_{t\to 0}(E_{*},t\lambda,t\nabla)\) exists in \(\mathrm{pr}^{-1}(0)\subset\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\)._

Proof.: The proof is similar to [18, Corollary 10.2].
Consider the following two projections \[\pi_{1}:X\times\mathbb{C}^{*}\longrightarrow X\ \ \text{and}\ \ \pi_{2}:X\times\mathbb{C}\longrightarrow\mathbb{C}.\] Now consider the \(\mathbb{C}^{*}\)-flat family given by \[(\mathcal{E},t\lambda,\nabla_{\pi_{2}})\coloneqq(\pi_{1}^{*}E_{*},t\lambda,t\pi_{1}^{*}\nabla).\] For any \(t\neq 0\), we know that a parabolic \(t\lambda\)-connection \((E_{*},t\lambda,t\nabla)\) is semistable if and only if the parabolic \(\lambda\)-connection \((E_{*},\lambda,\nabla)\) is semistable. Therefore, the fibres of the above family are semistable for \(t\neq 0\). Following [18, Theorem 10.1], there exists a \(\mathbb{C}\)-flat family \((\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\) over \(\pi_{2}:X\times\mathbb{C}\longrightarrow\mathbb{C}\) such that \[(\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\big{|}_{X\times\mathbb{C}^{*}}\cong(\pi_{1}^{*}E_{*},t\lambda,t\pi_{1}^{*}\nabla)\] and \((\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\big{|}_{X\times\{0\}}\) is semistable. Therefore, \[(\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\big{|}_{X\times\{0\}}\in\mathrm{pr}^{-1}(0)\] is the limit of the \(\mathbb{C}^{*}\)-orbit of \((E_{*},\lambda,\nabla)\) at \(t=0\) in the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\).

**Lemma 3.6**.: _The fixed point locus under the \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is proper in \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\)._

Proof.: The fixed point locus under the \(\mathbb{C}^{*}\)-action \(t\cdot(E_{*},\lambda,\nabla)=(E_{*},t\lambda,t\nabla)\) corresponds exactly to the fixed point locus under the \(\mathbb{C}^{*}\)-action on \(\mathrm{pr}^{-1}(0)=\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)\). Therefore, by Lemma 3.3, the fixed point locus \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)^{\mathbb{C}^{*}}\) is proper.

**Proposition 3.7**.: _The moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is a smooth semiprojective complex variety._

_Moreover, the algebraic map \(\mathrm{pr}:\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\to\mathbb{C}\) given in (2.1) is a \(\mathbb{C}^{*}\)-equivariant surjective submersion covering the scaling action on \(\mathbb{C}\)._

Proof.: Since the weights are generic, the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is a smooth quasiprojective complex variety. Therefore, Lemmas 3.5 and 3.6 imply that the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is semiprojective. The second part follows immediately from the smoothness of the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\).

## 4. Grothendieck motives of semiprojective varieties

In this section we recall the Grothendieck ring of varieties and some of its basic properties. We also define what we mean by the Grothendieck motive.

### Grothendieck ring of varieties

Let \(\mathcal{V}_{\mathbb{C}}\) denote the category of quasiprojective complex varieties. We denote by \([Z]\) the isomorphism class corresponding to an element \(Z\in\mathcal{V}_{\mathbb{C}}\). Let \(Z^{\prime}\subset Z\) be a Zariski-closed subset of \(Z\).
Let \(G\) be the quotient of the free abelian group generated by the isomorphism classes \([Z]\), modulo the relation \[[Z]=[Z^{\prime}]+[Z\setminus Z^{\prime}].\] In this group \(G\), the additive structure is given by \[[Z_{1}]+[Z_{2}]\coloneqq[Z_{1}\sqcup Z_{2}],\] where \(\sqcup\) denotes the disjoint union, and the multiplicative structure is defined by \[[Z_{1}]\cdot[Z_{2}]\coloneqq[Z_{1}\times Z_{2}].\] Therefore we get a commutative ring \((G,+,\cdot)\), called the _Grothendieck ring of varieties_. We will denote this ring by \(K(\mathcal{V}_{\mathbb{C}})\). The additive and multiplicative units of \(K(\mathcal{V}_{\mathbb{C}})\) are \(0=[\emptyset]\) and \(1=[\operatorname{Spec}(\mathbb{C})]\) respectively. Consider the affine line \(\mathbb{A}^{1}\). The class of \(\mathbb{A}^{1}\) is called the _Lefschetz motive_, denoted by \[\mathbb{L}\coloneqq[\mathbb{A}^{1}]=[\mathbb{C}].\] Therefore, \[\mathbb{L}^{n}=[\mathbb{A}^{n}]=[\mathbb{C}^{n}].\] Let \(K(\mathcal{V}_{\mathbb{C}})[\mathbb{L}^{-1}]\) be the localization of \(K(\mathcal{V}_{\mathbb{C}})\) and let \[\hat{K}(\mathcal{V}_{\mathbb{C}})=\Bigg{\{}\sum_{k\geq 0}[Z_{k}]\mathbb{L}^{-k}\ \Bigg{|}\ \left[Z_{k}\right]\in K(\mathcal{V}_{\mathbb{C}})\text{ with }\dim Z_{k}-k\longrightarrow-\infty\Bigg{\}}\] be the dimensional completion of \(K(\mathcal{V}_{\mathbb{C}})\). Throughout this paper, the Grothendieck motive is understood in the following sense.

**Definition 4.1**.: Let \(Z\) be a quasiprojective complex variety. The class \([Z]\in K(\mathcal{V}_{\mathbb{C}})\), or its image in \(\hat{K}(\mathcal{V}_{\mathbb{C}})\), is called the _Grothendieck motive_, or just the _motive_ of \(Z\).

### Mixed Hodge structure and \(E\)-polynomial

Let \(d=\dim(Z)\) be the dimension of a quasiprojective complex variety \(Z\). In [3], Deligne proved that the compactly supported \(k\)-th cohomology \(H^{k}_{c}(Z)\coloneqq H^{k}_{c}(Z,\mathbb{C})\) is equipped with a mixed Hodge structure for all \(k\in\{0,\dots,2d\}\). In particular, \(H^{k}_{c}(Z)\) is endowed with two filtrations \(W_{\bullet}\) and \(F^{\bullet}\), which allow us to define the corresponding Hodge numbers \[h^{k,p,q}(Z)\coloneqq\dim H^{p,q}(H^{k}_{c}(Z,\mathbb{C}))=\dim\mathrm{Gr}^{p}_{F}\mathrm{Gr}^{W}_{p+q}(H^{k}_{c}(Z,\mathbb{C})),\] where \(p,q\in\{0,\dots,k\}\). If \(h^{k,p,q}(Z)\neq 0\), then we say that \((p,q)\) are \(k\)-weights of \(Z\). It is easy to verify that the mixed Hodge numbers satisfy \(h^{k,p,q}(Z)=h^{k,q,p}(Z)\) and \(\dim H^{k}_{c}(Z)=\sum_{p,q=0}^{d}h^{k,p,q}(Z)\). Define \[\mathcal{X}^{p,q}(Z)\coloneqq\sum_{k}(-1)^{k}h^{k,p,q}(Z).\] Then the _\(E\)-polynomial_ of \(Z\) is defined by \[E(Z)=E(Z;u,v)=\sum_{p,q=0}^{d}\mathcal{X}^{p,q}(Z)u^{p}v^{q}\in\mathbb{Z}[u,v].\] Notice that \(E(Z;1,1)=\chi(Z)\) is the Euler characteristic of \(Z\), so the \(E\)-polynomial is a generalization of the Euler characteristic. The \(E\)-polynomial satisfies the following properties: 1. _(scissor relation)_ \(E(Z)=E(V)+E(Z\setminus V)\) for a closed subvariety \(V\subset Z\), 2. _(multiplicativity)_ \(E(Y\times Z)=E(Y)\cdot E(Z)\) where \(Y\times Z\) is the cartesian product, 3. if \(Z\to Y\) is an algebraic fibre bundle with fibre \(B\), then \(E(Z)=E(Y)\cdot E(B)\).

**Examples 4.1**.:

* \(E(\mathbb{C})=E(\mathbb{A}^{1})=E(\mathbb{P}^{1})-E(\mathrm{pt})=uv=:x\),
* \(E(\mathbb{P}^{n})=E(\mathbb{A}^{n})+E(\mathbb{A}^{n-1})+\cdots+E(\mathbb{A}^{1})+E(\mathbb{A}^{0})=x^{n}+x^{n-1}+\cdots+x+1\).
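In the same vein, a standard computation (included here for illustration): if \(X\) is a smooth projective curve of genus \(g\), then \(h^{0,0}(X)=h^{1,1}(X)=1\) and \(h^{1,0}(X)=h^{0,1}(X)=g\), so \[E(X)=1-gu-gv+uv,\] and indeed \(E(X;1,1)=2-2g=\chi(X)\).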
Now assume that \(Z\) has pure Hodge structure; then its \(E\)-polynomial is given by \[E(Z)=\sum_{p,q=0}^{d}(-1)^{p+q}h^{p,q}(Z)u^{p}v^{q} \tag{4.1}\] where \(d=\dim Z\) and \(h^{p,q}(Z)=\dim H^{p,q}_{c}(Z)\).

**Remark**.: _The \(E\)-polynomial can be realized as a ring homomorphism_ \[E:K(\mathcal{V}_{\mathbb{C}})\longrightarrow\mathbb{Z}[u,v]\] _from the Grothendieck ring of varieties to \(\mathbb{Z}[u,v]\). This map extends to the completion_ \[E:\hat{K}(\mathcal{V}_{\mathbb{C}})\longrightarrow\mathbb{Z}[u,v]\left[\left[\frac{1}{uv}\right]\right]\] _(also denoted by \(E\)), taking values in the Laurent series in \(uv\). Hence if two quasiprojective varieties have the same motive then their \(E\)-polynomials are the same._

We will apply the following result for a smooth semiprojective complex variety in our setup.

**Proposition 4.2** ([29], Theorem 5.6).: _Let \(Z\) be a smooth semiprojective complex variety endowed with a \(\mathbb{C}^{*}\)-equivariant surjective submersion \(\pi:Z\to\mathbb{C}\) covering the standard scaling action on \(\mathbb{C}\). Then the following motivic equalities hold in the Grothendieck ring \(\hat{K}(\mathcal{V}_{\mathbb{C}})\),_ \[[\pi^{-1}(0)]=[\pi^{-1}(1)]\;\;\mathrm{and}\;\;[Z]=\mathbb{L}[\pi^{-1}(0)],\] _where \(\mathbb{L}\) is the Lefschetz motive._

Proof.: See [29, Theorem 5.6] for details.

**Theorem 4.3**.: _In \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) the following equalities hold,_ \[[\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)]=[\mathcal{M}_{\mathrm{pc}}(r,d,\alpha)]\;\;\mathrm{and}\;\;[\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)]=\mathbb{L}[\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)].\] _Therefore, we have the following equalities of the \(E\)-polynomials_ \[E(\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha))=E(\mathcal{M}_{\mathrm{pc}}(r,d,\alpha))\;\;\mathrm{and}\;\;E(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha))=uvE(\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)).\]

Proof.: By Proposition 3.7, the parabolic Hodge moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is a smooth semiprojective complex variety with the \(\mathbb{C}^{*}\)-action given in (2.2). Also, from Proposition 3.7 it follows that the surjective map \(\mathrm{pr}:\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\to\mathbb{C}\) given in (2.1) is a \(\mathbb{C}^{*}\)-equivariant submersion covering the natural \(\mathbb{C}^{*}\)-action on \(\mathbb{C}\). Thus by Proposition 4.2, we have \[[\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)]=[\mathrm{pr}^{-1}(0)]=[\mathrm{pr}^{-1}(1)]=[\mathcal{M}_{\mathrm{pc}}(r,d,\alpha)]\] and \[[\mathcal{M}_{\rm Hod}(r,d,\alpha)]=\mathbb{L}[{\rm pr}^{-1}(0)]=\mathbb{L}[\mathcal{M}_{\rm Higgs}(r,d,\alpha)].\] Therefore, by the Remark above, the \(E\)-polynomials of the moduli spaces \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) and \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) are equal, i.e.
\[E(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=E(\mathcal{M}_{\rm pc}(r,d,\alpha))\] and by the multiplicative property of the \(E\)-polynomial, we have \[E(\mathcal{M}_{\rm Hod}(r,d,\alpha))=E(\mathbb{C})E(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=uvE(\mathcal{M}_{\rm Higgs}(r,d,\alpha)).\]

**Theorem 4.4**.: _The Hodge structures of the moduli spaces \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) and \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) are isomorphic, i.e._ \[H^{\bullet}(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=H^{\bullet}(\mathcal{M}_{\rm pc}(r,d,\alpha)).\] _Also, the moduli spaces \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) and \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) have pure mixed Hodge structures._

Proof.: Following [27, Corollary 1.3.3], the cohomologies of the fibres \({\rm pr}^{-1}(0)=\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) and \({\rm pr}^{-1}(1)=\mathcal{M}_{\rm pc}(r,d,\alpha)\) are isomorphic and have pure mixed Hodge structures. Again by [27, Corollary 1.3.2], since the moduli space \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) is smooth semiprojective for generic weights, it has pure cohomology.

## 5. Voevodsky motives and Chow motives

In this section, we briefly describe Voevodsky's category of geometric motives over a field \(k\) with coefficients in a commutative ring \(R\). This is a tensor triangulated category. For more details, see [19], [21], [23] and [24].

### The category of finite correspondences

**Definition 5.1**.: Let \(Y\) and \(Z\) be varieties over \(k\). Let \(c(Y,Z)\) denote the group generated by the integral closed subvarieties \(W\subset Y\times_{k}Z\) such that 1. the first projection \(\pi_{1}:W\longrightarrow Y\) is finite and 2. the image \(\pi_{1}(W)\) is an irreducible component of \(Y\). The elements of the group \(c(Y,Z)\) are called the _finite correspondences_ between the varieties \(Y\) and \(Z\).

Let \(X,Y\) and \(Z\) be varieties over \(k\), and let \(W_{1}\in c(X,Y)\) and \(W_{2}\in c(Y,Z)\) be two finite correspondences. If \(X\) and \(Y\) are irreducible, then every irreducible component \(P\) of \(X\times|W_{2}|\cap|W_{1}|\times Z\) is finite over \(X\) and satisfies \(\pi_{X}(P)=X\). Therefore, we have a bilinear composition rule \[\circ:c(Y,Z)\times c(X,Y) \longrightarrow c(X,Z)\] \[(W_{2},W_{1}) \mapsto W_{2}\circ W_{1}\coloneqq\pi_{(X\times Z)_{*}}\bigg{(}\pi_{(X\times Y)}^{*}(W_{1})\cdot\pi_{(Y\times Z)}^{*}(W_{2})\bigg{)}.\] Consider the category of smooth \(k\)-varieties \({\bf Sm}/k\). The objects of _the category of finite correspondences_ \({\bf Corr}_{fin}/k\) are the same as those of \({\bf Sm}/k\), with \[{\rm Hom}_{{\bf Corr}_{fin}/k}(Y,Z)\coloneqq c(Y,Z)\] and the composition law given as above.

**Remark**.: _The operation \(\times_{k}\) on \({\bf Sm}/k\) and on cycles gives the category \({\bf Corr}_{fin}/k\) the structure of a tensor category. Therefore, the corresponding bounded homotopy category \(K^{b}({\bf Corr}_{fin}/k)\) is a tensor triangulated category._

### The category of effective geometric motives

Consider the category \(\widehat{DM}_{\rm gm}^{\rm eff}(k)\), which is the localization of the tensor triangulated category \(K^{b}({\bf Corr}_{fin}/k)\) killing all complexes of the form \([X\times\mathbb{A}^{1}]\to[X]\) (homotopy invariance) and \([U\cap V]\to[U]\oplus[V]\to[X]\) for any open covering \(U\cup V=X\) (Mayer-Vietoris).

**Definition 5.2**.: The category \(DM_{\rm gm}^{\rm eff}(k)\) of _effective geometric motives_ over \(k\) is the pseudo-abelian envelope of the quotient category \(\widehat{DM}_{\rm gm}^{\rm eff}(k)\).
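To illustrate the notion of finite correspondence in the simplest case (an aside, not from the paper): a finite correspondence from \(\operatorname{Spec}(k)\) to a smooth variety \(X\) is an integral closed subvariety of \(\operatorname{Spec}(k)\times_{k}X=X\) that is finite over \(\operatorname{Spec}(k)\), i.e. a closed point of \(X\). Hence \[c(\operatorname{Spec}(k),X)\cong Z_{0}(X),\] the free abelian group on the closed points of \(X\), i.e. the group of zero-cycles on \(X\).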
We now consider the functor \[{\bf Sm}/k\longrightarrow{\bf Corr}_{fin}/k\] sending a morphism \(f:X\to Y\) in \({\bf Sm}/k\) to its graph \(\Gamma_{f}\subset X\times_{k}Y\). We will denote the object in \({\bf Corr}_{fin}/k\) corresponding to \(X\in{\bf Sm}/k\) by \([X]\). This induces the following covariant functor \[M_{\rm gm}^{\rm eff}:{\bf Sm}/k\longrightarrow DM_{\rm gm}^{\rm eff}(k)\] where \(M_{\rm gm}^{\rm eff}(X)\) is the image of \([X]\) in \(DM_{\rm gm}^{\rm eff}(k)\), and it sends a morphism \(f:X\to Y\) to \(M_{\rm gm}^{\rm eff}(f)\coloneqq[\Gamma_{f}]\). We note that the category \(DM_{\rm gm}^{\rm eff}(k)\) is in fact a closed monoidal triangulated category. Therefore, we can consider cones of morphisms and tensor products. The functor \(M_{\rm gm}^{\rm eff}\) satisfies the following properties \[M_{\rm gm}^{\rm eff}(X\sqcup Y) =M_{\rm gm}^{\rm eff}(X)\oplus M_{\rm gm}^{\rm eff}(Y)\] \[M_{\rm gm}^{\rm eff}(X\times Y) =M_{\rm gm}^{\rm eff}(X)\otimes M_{\rm gm}^{\rm eff}(Y).\]

**Definition 5.3**.: \(M_{\rm gm}^{\rm eff}(X)\) is said to be the _effective geometric motive_ of a smooth \(k\)-variety \(X\).

#### 5.2.1. Tate motives

Let \(X\in{\bf Sm}/k\) be a smooth variety with a \(k\)-point \(0\in X(k)\). Then the corresponding motive in \(K^{b}({\bf Corr}_{fin}/k)\) is defined by \[\widehat{[X]}\coloneqq{\rm Cone}\bigg{(}{i_{0}}_{*}:[{\rm Spec}(k)]\longrightarrow[X]\bigg{)}.\] We denote the image of \(\widehat{[X]}\) in \(DM_{\rm gm}^{\rm eff}(k)\) by \(\widehat{M_{\rm gm}^{\rm eff}(X)}\). We set \[\Lambda(1)\coloneqq\widehat{M_{\rm gm}^{\rm eff}(\mathbb{P}^{1})}[-2].\] One can think of \(\Lambda(1)\) as the reduced homology of \(\mathbb{P}^{1}\). It is an invertible object with respect to the tensor product and its inverse is exactly its dual \(\underline{\rm Hom}(\Lambda(1),\Lambda(0))\). We denote its inverse by \(\Lambda(-1)\). For \(r\in\mathbb{Z}\), we set \[\Lambda(r)=\begin{cases}\Lambda(1)^{\otimes r}&\text{if }r\geq 0\\ \Lambda(-1)^{\otimes-r}&\text{if }r<0.\end{cases}\] These objects are called _pure Tate motives_. For an object \(M\in DM_{\rm gm}^{\rm eff}(k)\), the twists \[M(r)\coloneqq M\otimes\Lambda(r)\] are called the _Tate twists_.

### The category of geometric motives

To define the category of geometric motives we need to consider the motive \(\Lambda(1)[2]\), which plays the role of the Lefschetz motive \(\mathbb{L}\).

**Definition 5.4**.: The category \(DM_{\rm gm}(k)\) of _geometric motives_ is defined by inverting the functor \(\otimes_{\Lambda(1)}\) on \(DM_{\rm gm}^{\rm eff}(k)\), i.e. for \(n,m\in\mathbb{Z}\) and \(A,B\in DM_{\rm gm}^{\rm eff}(k)\), \[{\rm Hom}_{DM_{\rm gm}(k)}(A(n),B(m))\coloneqq\lim_{\longrightarrow_{r}}{\rm Hom}_{DM_{\rm gm}^{\rm eff}(k)}\big{(}A\otimes\Lambda(r+n),B\otimes\Lambda(r+m)\big{)}.\]

The category of geometric motives \(DM_{\rm gm}(k)\) is also a triangulated category. By Voevodsky's cancellation theorem, the embedding \[i:DM_{\rm gm}^{\rm eff}(k)\longrightarrow DM_{\rm gm}(k)\] is a fully faithful functor. Consider the composition \[M_{\rm gm}\coloneqq i\circ M_{\rm gm}^{\rm eff}:{\bf Sm}/k\longrightarrow DM_{\rm gm}(k).\]

**Definition 5.5**.: \(M_{\rm gm}(X)\) is called the _geometric motive_ of the smooth \(k\)-variety \(X\).

Let \(R\) be a ring and let \(DM_{\rm gm}^{\rm eff}(k;R)\coloneqq DM_{\rm gm}^{\rm eff}(k)\otimes R\) denote the category of effective geometric motives with coefficients in \(R\).
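As a concrete illustration of Tate twists (a standard fact, stated here as an aside), the projective bundle formula gives \[M_{\rm gm}(\mathbb{P}^{n})\cong\bigoplus_{i=0}^{n}\Lambda(i)[2i],\] so in particular \(M_{\rm gm}(\mathbb{P}^{1})\cong\Lambda(0)\oplus\Lambda(1)[2]\), in accordance with the defining relation \(\Lambda(1)=\widehat{M_{\rm gm}^{\rm eff}(\mathbb{P}^{1})}[-2]\). This is the motivic counterpart of the computation \(E(\mathbb{P}^{n})=x^{n}+\cdots+x+1\) in Examples 4.1.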
We denote by \(M_{\rm gm}^{\rm eff}(X)_{R}\) the effective geometric motive of \(X\) in the category \(DM_{\rm gm}^{\rm eff}(k;R)\). Similarly, we denote by \(M_{\rm gm}(X)_{R}\) the geometric motive of \(X\) in the category \(DM_{\rm gm}(k;R)=DM_{\rm gm}(k)\otimes R\).

### The category of effective Chow motives

Let \({\bf Chow}^{\rm eff}(k;R)\) denote the category of effective Chow motives over a field \(k\) with coefficients in \(R\). There exists a functor \[{\bf Chow}^{\rm eff}(k;R)\longrightarrow DM_{\rm gm}^{\rm eff}(k;R)\] which is a fully faithful embedding. This functor is compatible with the tensor structure, and the category \({\bf Chow}^{\rm eff}(k;R)\) contains the Lefschetz motive \(\mathbb{L}\). We can think of the category \(DM_{\rm gm}^{\rm eff}(k;R)\) as a "triangulated envelope" of the category \({\bf Chow}^{\rm eff}(k;R)\). We can consider the motive of a smooth \(k\)-variety either in \({\bf Chow}^{\rm eff}(k;R)\) or in the category \(DM_{\rm gm}^{\rm eff}(k;R)\). See [1], [16] and [22] for more details. Let \(C(X)_{R}\in{\bf Chow}^{\rm eff}(k;R)\) denote the _Chow motive_ of \(X\) with coefficients in \(R\).

**Theorem 5.1**.: _Let \(X\) be a compact Riemann surface of genus \(g\geq 2\). Then for any ring \(R\), we have the following isomorphism of Voevodsky motives,_ \[M_{\rm gm}\big{(}{\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong M_{\rm gm}\big{(}{\mathcal{M}}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in DM_{\rm gm}(\mathbb{C};R).\]

Proof.: We know that the moduli space \({\mathcal{M}}_{\rm Hod}(r,d,\alpha)\) is a smooth semiprojective variety equipped with a \(\mathbb{C}^{*}\)-equivariant surjective submersion \({\rm pr}:{\mathcal{M}}_{\rm Hod}(r,d,\alpha)\to\mathbb{C}\) such that \({\rm pr}^{-1}(0)={\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\) and \({\rm pr}^{-1}(1)={\mathcal{M}}_{\rm pc}(r,d,\alpha)\) (see (2.1)). Therefore by [31, Theorem B.1], we have the following isomorphism in Voevodsky's category \(DM_{\rm gm}(\mathbb{C};R)\) \[M_{\rm gm}\big{(}{\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}=M_{\rm gm}\big{(}{\rm pr}^{-1}(0)\big{)}_{R}\cong M_{\rm gm}\big{(}{\rm pr}^{-1}(1)\big{)}_{R}=M_{\rm gm}\big{(}{\mathcal{M}}_{\rm pc}(r,d,\alpha)\big{)}_{R}.\] This implies the following isomorphism of Chow motives.

**Theorem 5.2**.: _For any ring \(R\) we have the following isomorphism of Chow motives,_ \[C\big{(}{\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong C\big{(}{\mathcal{M}}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in{\textbf{Chow}^{\rm eff}(\mathbb{C};R)}.\]

Proof.: Since the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is smooth semiprojective, its Voevodsky motive is pure by [31, Corollary A.5]. Similarly, the motive of the moduli space \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) is pure. Since their Voevodsky motives are isomorphic by the above Theorem 5.1, their Chow motives are also isomorphic.

## 6. Motives of moduli spaces with fixed determinant

In the earlier sections we worked with the moduli space of parabolic bundles of fixed rank \(r\) and degree \(d\) over a curve \(X\), which is the same as the moduli space of parabolic \(\operatorname{GL}(r,\mathbb{C})\)-bundles of degree \(d\) over \(X\). In this final section, we consider the moduli space of parabolic \(\operatorname{SL}(r,\mathbb{C})\)-bundles over \(X\), i.e. the moduli space of parabolic bundles over \(X\) with fixed determinant.
By a _parabolic \(\operatorname{SL}(r,\mathbb{C})\)-Higgs bundle_ \((E_{*},\Phi)\), we mean a parabolic bundle \(E_{*}\) of rank \(r\) with determinant \(\xi\) and traceless Higgs field \(\Phi\). Let \(\operatorname{Jac}^{d}(X)\) denote the space of degree \(d\) line bundles over \(X\). Consider the determinant map \[\det:\mathcal{M}_{\rm Higgs}(r,d,\alpha) \longrightarrow\operatorname{Jac}^{d}(X)\times H^{0}(X,K)\] \[(E_{*},\Phi) \longmapsto(\wedge^{r}E,\operatorname{trace}(\Phi)).\] Since \(\Phi\) is strongly parabolic, \(\operatorname{trace}(\Phi)\in H^{0}(X,K)\subset H^{0}(X,K(D))\). The moduli space \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) of semistable parabolic Higgs bundles with fixed determinant \(\xi\) is defined as the fibre \(\det^{-1}(\xi,0)\), i.e. \[\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\coloneqq\det^{-1}(\xi,0).\] As before, if the weights are generic then the moduli space \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) is a smooth quasi-projective complex variety of dimension \[\dim\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)=2(g-1)(r^{2}-1)+n(r^{2}-r).\] In this case, as \(\operatorname{trace}(\Phi)=0\), the Hitchin map is given by \[h^{\xi}:\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\longrightarrow\mathcal{H}^{\xi}\coloneqq\bigoplus_{i=2}^{r}H^{0}(X,K^{i}(D^{i-1})).\] Following [25, Theorem 1.2] and arguing as in Section 3.1, we can similarly prove that the Hitchin map \(h^{\xi}\) is \(\mathbb{C}^{*}\)-equivariant and that the moduli space \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) is a smooth \(\mathbb{C}^{*}\)-invariant closed subvariety of \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\). Therefore, \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) is smooth semiprojective. By a _parabolic \(\lambda\)-connection with fixed determinant_ \((\xi,\delta)\) (i.e. for the group \(\operatorname{SL}(r,\mathbb{C})\)), we mean a parabolic \(\lambda\)-connection \((E_{*},\lambda,\nabla)\) such that \(\wedge^{r}E_{*}\cong\xi\) and \(\operatorname{trace}(\nabla)=\delta\) (see [28, Definition 8.1] for more details). It can be verified that \(\operatorname{trace}(\nabla)\) gives a \(\lambda\)-connection on the line bundle \(\xi\). Consider the determinant map \[\det:\mathcal{M}_{\rm Hod}(r,d,\alpha) \longrightarrow\mathcal{M}_{\rm Hod}(1,d,\alpha)\] \[(E_{*},\lambda,\nabla) \longmapsto(\wedge^{r}E_{*},\lambda,\operatorname{trace}(\nabla)).\] Then the moduli space \(\mathcal{M}_{\rm Hod}^{\xi}(r,d,\alpha)\) of semistable parabolic \(\lambda\)-connections with fixed determinant \((\xi,\delta)\) is defined by \[\mathcal{M}_{\rm Hod}^{\xi}(r,d,\alpha)\coloneqq\det^{-1}(\xi,\lambda,\delta).\] The moduli space \(\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha)\) is clearly a smooth \(\mathbb{C}^{*}\)-invariant closed subvariety of \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\). Therefore, following Section 3.2, we can similarly prove that \(\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha)\) is in fact semiprojective. By considering the restriction of the morphism (2.1), we get a \(\mathbb{C}^{*}\)-equivariant surjective submersion \[\mathrm{pr}:\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha)\longrightarrow\mathbb{C}.\] Let \(\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\) denote the moduli space of semistable parabolic connections with fixed determinant \((\xi,\delta)\). Then we have the following isomorphisms:

1. \(\mathrm{pr}^{-1}(0)\cong\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)\),
2. \(\mathrm{pr}^{-1}(1)\cong\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\),
3. \(\mathrm{pr}^{-1}(\mathbb{C}^{*})\cong\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\times\mathbb{C}^{*}.\)

Then we have the following motivic invariance theorems.

**Theorem 6.1** (Grothendieck motive).: _In the Grothendieck ring of varieties \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) the following equalities hold,_ \[[\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)]=[\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)]\;\;\mathrm{and}\;\;[\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha)]=\mathbb{L}[\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)].\] _Therefore, we have the following equalities of the \(E\)-polynomials_ \[E(\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha))=E(\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha))\;\;\mathrm{and}\;\;E(\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha))=uvE(\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)).\]

Proof.: The proof is totally analogous to the proof of Theorem 4.3. We just need to adapt the objects to the fixed determinant setting.

**Theorem 6.2** (Voevodsky motive).: _For any ring \(R\), we have the following isomorphism of Voevodsky motives,_ \[M_{\mathrm{gm}}\big{(}\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)\big{)}_{R}\cong M_{\mathrm{gm}}\big{(}\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\big{)}_{R}\in DM_{\mathrm{gm}}(\mathbb{C};R).\]

Proof.: The proof is the same as in Theorem 5.1.

**Theorem 6.3** (Chow motive).: _For any ring \(R\) we have the following isomorphism of Chow motives,_ \[C\big{(}\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)\big{)}_{R}\cong C\big{(}\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\big{)}_{R}\in\textbf{Chow}^{\mathrm{eff}}(\mathbb{C};R).\]

Proof.: The proof is the same as in Theorem 5.2.

## Acknowledgement

This work was supported by the Institute for Basic Science (IBS-R003-D1).
compact Riemann surface, genus, finite subset, moduli spaces, parabolic Higgs bundles, parabolic connections, generic weights, Grothendieck motivic classes, $E$-polynomials, Voevodsky and Chow motives, parabolic Higgs moduli, parabolic Hodge moduli, fixed determinants
2309.12912
Symmetric Exponential Time Requires Near-Maximum Circuit Size
We show that there is a language in $\mathsf{S}_2\mathsf{E}/_1$ (symmetric exponential time with one bit of advice) with circuit complexity at least $2^n/n$. In particular, the above also implies the same near-maximum circuit lower bounds for the classes $\Sigma_2\mathsf{E}$, $(\Sigma_2\mathsf{E}\cap\Pi_2\mathsf{E})/_1$, and $\mathsf{ZPE}^{\mathsf{NP}}/_1$. Previously, only "half-exponential" circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was $\Delta_3\mathsf{E} = \mathsf{E}^{\Sigma_2\mathsf{P}}$ (Miltersen, Vinodchandran, and Watanabe COCOON'99). Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an $\mathsf{NP}$ oracle and one bit of advice ($\mathsf{FZPP}^{\mathsf{NP}}/_1$) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic $\mathsf{FZPP}^{\mathsf{NP}}/_1$ constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and $\mathrm{K}^{\mathrm{poly}}$-random strings with nearly optimal parameters. Our proofs relativize. The two main technical ingredients are (1) Korten's $\mathsf{P}^{\mathsf{NP}}$ reduction from the range avoidance problem to constructing hard truth tables (FOCS'21), which was in turn inspired by a result of Je\v{r}\'abek on provability in Bounded Arithmetic (Ann. Pure Appl. Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS'23).
Lijie Chen, Shuichi Hirahara, Hanlin Ren
2023-09-22T14:56:59
http://arxiv.org/abs/2309.12912v1
# Symmetric Exponential Time Requires Near-Maximum Circuit Size

###### Abstract

We show that there is a language in \(\mathsf{S}_{2}\mathsf{E}/_{1}\) (symmetric exponential time with one bit of advice) with circuit complexity at least \(2^{n}/n\). In particular, the above also implies the same near-maximum circuit lower bounds for the classes \(\Sigma_{2}\mathsf{E}\), \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\), and \(\mathsf{ZPE}^{\mathsf{NP}}/_{1}\). Previously, only "half-exponential" circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was \(\Delta_{3}\mathsf{E}=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\) (Miltersen, Vinodchandran, and Watanabe COCOON'99). Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an \(\mathsf{NP}\) oracle and one bit of advice \((\mathsf{FZPP}^{\mathsf{NP}}/_{1})\) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic \(\mathsf{FZPP}^{\mathsf{NP}}/_{1}\) constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and \(\mathsf{K}^{\mathrm{poly}}\)-random strings with nearly optimal parameters. Our proofs relativize. The two main technical ingredients are (1) Korten's \(\mathsf{P}^{\mathsf{NP}}\) reduction from the range avoidance problem to constructing hard truth tables (FOCS'21), which was in turn inspired by a result of Jeřábek on provability in Bounded Arithmetic (Ann. Pure Appl. Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS'23).

###### Contents

* 1 Introduction
  * 1.1 Our Results
  * 1.2 Intuitions
  * 1.3 Proof Overview
  * 1.4 Discussions
* 2 Preliminaries
  * 2.1 Complexity Classes
  * 2.2 Single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{F}\mathsf{S}_{2}\mathsf{P}\) Algorithms
  * 2.3 The Range Avoidance Problem
* 3 Korten's Reduction
  * 3.1 GGM Tree and the Reduction
  * 3.2 \(\Pi_{1}\) Verification of the History of \(\mathsf{Korten}(C,f)\)
* 4 Circuit Lower Bounds for \(\Sigma_{2}\mathsf{E}\)
* 5 Circuit Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)
  * 5.1 Reed-Muller Codes
  * 5.2 Encoded History and \(\mathsf{S}_{2}\mathsf{BPP}\) Verification
  * 5.3 Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)
  * 5.4 Infinitely Often Single-Valued \(\mathsf{F}\mathsf{S}_{2}\mathsf{P}\) Algorithm for Arbitrary Input Range Avoidance

## 1 Introduction

Proving lower bounds against non-uniform computation (i.e., circuit lower bounds) is one of the most important challenges in theoretical computer science. From Shannon's counting argument [1, 13], we know that almost all \(n\)-bit Boolean functions have _near-maximum_ \((2^{n}/n)\) circuit complexity.1 Therefore, the task of proving circuit lower bounds is simply to _pinpoint_ one such hard function. More formally, one fundamental question is:

What is the smallest complexity class that contains a language of exponential (\(2^{\Omega(n)}\)) circuit complexity?

Footnote 1: All \(n\)-input Boolean functions can be computed by a circuit of size \((1+\frac{3\log n}{n}+O(\frac{1}{n}))2^{n}/n\) [14, 15], while most Boolean functions require circuits of size \((1+\frac{\log n}{n}-O(\frac{1}{n}))2^{n}/n\) [16, 17]. Hence, in this paper, we say an \(n\)-bit Boolean function has _near-maximum_ circuit complexity if its circuit complexity is at least \(2^{n}/n\).
Compared with super-polynomial lower bounds, exponential lower bounds are interesting in their own right for the following reasons. First, an exponential lower bound would make Shannon's argument _fully constructive_. Second, exponential lower bounds have more applications than super-polynomial lower bounds: For example, if one can show that \(\mathsf{E}\) has no \(2^{o(n)}\)-size circuits, then we would have \(\mathrm{pr}\mathsf{P}=\mathrm{pr}\mathsf{BPP}\) [18, 19], while super-polynomial lower bounds such as \(\mathsf{EXP}\not\subset\mathsf{P}/_{\mathrm{poly}}\) only imply sub-exponential time derandomization of \(\mathrm{pr}\mathsf{BPP}\).2

Footnote 2: \(\mathsf{E}=\mathsf{DTIME}[2^{O(n)}]\) denotes _single-exponential_ time and \(\mathsf{EXP}=\mathsf{DTIME}[2^{n^{O(1)}}]\) denotes _exponential_ time; classes such as \(\mathsf{E}^{\mathsf{NP}}\) and \(\mathsf{EXP}^{\mathsf{NP}}\) are defined analogously. Exponential time and single-exponential time are basically interchangeable in the context of super-polynomial lower bounds (by a padding argument); the exponential lower bounds proven in this paper will be stated for single-exponential time classes since this makes our results stronger. Below, \(\Sigma_{3}\mathsf{E}\) and \(\Pi_{3}\mathsf{E}\) denote the exponential-time versions of \(\Sigma_{3}\mathsf{P}=\mathsf{NP}^{\mathsf{NP}^{\mathsf{NP}}}\) and \(\Pi_{3}\mathsf{P}=\mathsf{coNP}^{\mathsf{NP}^{\mathsf{NP}}}\), respectively.

Unfortunately, despite its importance, our knowledge about exponential lower bounds is quite limited. Kannan [12] showed that there is a function in \(\Sigma_{3}\mathsf{E}\cap\Pi_{3}\mathsf{E}\) that requires maximum circuit complexity; the complexity of the hard function was later improved to \(\Delta_{3}\mathsf{E}=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\) by Miltersen, Vinodchandran, and Watanabe [19], via a simple binary search argument. This is **essentially all we know** regarding exponential circuit lower bounds.3

Footnote 3: We also mention that Hirahara, Lu, and Ren [10] recently proved that for every constant \(\varepsilon>0\), \(\mathsf{BPE}^{\mathsf{MCSP}}/_{2^{\varepsilon n}}\) requires near-maximum circuit complexity, where \(\mathsf{MCSP}\) is the Minimum Circuit Size Problem [13]. However, the hard function they constructed requires subexponentially (\(2^{\varepsilon n}\)) many advice bits to describe.

We remark that Kannan [12, Theorem 4] claimed that \(\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E}\) requires exponential circuit complexity, but [19] pointed out a gap in Kannan's proof, and suggested that exponential lower bounds for \(\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E}\) were "reopened and considered an open problem." Recently, Vyas and Williams [18] emphasized our lack of knowledge regarding the circuit complexity of \(\Sigma_{2}\mathsf{EXP}\), even with respect to _relativizing_ proof techniques. In particular, the following question has been open for at least 20 years (indeed, if we count from [12], it would be at least 40 years):

**Open Problem 1.1**.: _Can we prove that \(\Sigma_{2}\mathsf{EXP}\not\subset\mathsf{SIZE}[2^{\varepsilon n}]\) for some absolute constant \(\varepsilon>0\), or at least show a relativization barrier for proving such a lower bound?_

The half-exponential barrier. There is a richer literature regarding super-polynomial lower bounds than exponential lower bounds. Kannan [12] proved that the class \(\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E}\) does not have polynomial-size circuits.
Subsequent works proved super-polynomial circuit lower bounds for exponential-time complexity classes such as \(\mathsf{ZPEXP}^{\mathsf{NP}}\) [18, 1], \(\mathsf{S}_{2}\mathsf{EXP}\) [19, 10], \(\mathsf{PEXP}\) [19, 10], and \(\mathsf{MAEXP}\) [15, 16]. Unfortunately, all these works fail to prove exponential lower bounds. All of their proofs go through certain _Karp-Lipton_ collapses [13]; such a proof strategy runs into a so-called "half-exponential barrier", preventing us from getting exponential lower bounds. See Section 1.4.1 for a detailed discussion.

### Our Results

#### 1.1.1 New near-maximum circuit lower bounds

In this work, we _overcome_ the half-exponential barrier mentioned above and resolve Open Problem 1.1 by showing that both \(\Sigma_{2}\mathsf{E}\) and \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\) require near-maximum \((2^{n}/n)\) circuit complexity. Moreover, our proof indeed _relativizes_:

**Theorem 1.2**.: \(\Sigma_{2}\mathsf{E}\not\subset\mathsf{SIZE}[2^{n}/n]\) _and \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\). Moreover, they hold in every relativized world._

Up to one bit of advice, we finally provide a proof of Kannan's original claim in [13, Theorem 4]. Moreover, with some more work, we extend our lower bounds to the smaller complexity class \(\mathsf{S}_{2}\mathsf{E}/_{1}\) (see Definition 2.1 for a formal definition), again with a relativizing proof:

**Theorem 1.3**.: \(\mathsf{S}_{2}\mathsf{E}/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\)_. Moreover, this holds in every relativized world._

**The symmetric time class \(\mathsf{S}_{2}\mathsf{E}\).** \(\mathsf{S}_{2}\mathsf{E}\) can be seen as a "randomized" version of \(\mathsf{E}^{\mathsf{NP}}\), since it is sandwiched between \(\mathsf{E}^{\mathsf{NP}}\) and \(\mathsf{ZPE}^{\mathsf{NP}}\): it is easy to show that \(\mathsf{E}^{\mathsf{NP}}\subseteq\mathsf{S}_{2}\mathsf{E}\) [12], and it is also known that \(\mathsf{S}_{2}\mathsf{E}\subseteq\mathsf{ZPE}^{\mathsf{NP}}\) [14]. We also note that under plausible derandomization assumptions (e.g., \(\mathsf{E}^{\mathsf{NP}}\) requires \(2^{\Omega(n)}\)-size \(\mathsf{SAT}\)-oracle circuits), all three classes simply collapse to \(\mathsf{E}^{\mathsf{NP}}\) [15]. Hence, our results also imply a near-maximum circuit lower bound for the class \(\mathsf{ZPE}^{\mathsf{NP}}/_{1}\subseteq(\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\). This vastly improves the previous lower bound for \(\Delta_{3}\mathsf{E}=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\).

**Corollary 1.4**.: \(\mathsf{ZPE}^{\mathsf{NP}}/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\)_. Moreover, this holds in every relativized world._

#### 1.1.2 New algorithms for the range avoidance problem

**Background on Avoid.** Actually, our circuit lower bounds are implied by our new algorithms for solving the range avoidance problem (Avoid) [13, 14, 15], which is defined as follows: given a circuit \(C:\{0,1\}^{n}\to\{0,1\}^{n+1}\) as input, find a string outside the range of \(C\) (we define \(\text{Range}(C)\coloneqq\{C(z):z\in\{0,1\}^{n}\}\)). That is, output any string \(y\in\{0,1\}^{n+1}\) such that for every \(x\in\{0,1\}^{n}\), \(C(x)\neq y\). There is a trivial \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm solving Avoid: randomly generate strings \(y\in\{0,1\}^{n+1}\) and output the first \(y\) that is outside the range of \(C\) (note that we need an \(\mathsf{NP}\) oracle to verify if \(y\notin\text{Range}(C)\)).
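To make the trivial algorithm concrete, here is a minimal Python sketch (our own illustration, not from the paper): the circuit is modeled as an arbitrary function on bit-tuples, and the \(\mathsf{NP}\)-oracle test \(y\notin\text{Range}(C)\) is replaced by brute-force enumeration, which is of course only feasible for toy values of \(n\).

```python
import random
from itertools import product

def trivial_avoid(C, n, trials=100):
    """Trivial zero-error randomized algorithm for Avoid: sample
    y in {0,1}^(n+1) and return the first y outside Range(C).
    The range-membership test stands in for the NP oracle."""
    def in_range(y):
        return any(C(x) == y for x in product((0, 1), repeat=n))

    for _ in range(trials):
        y = tuple(random.randint(0, 1) for _ in range(n + 1))
        if not in_range(y):
            return y  # certified non-output; hence zero-error
    return None  # "don't know"; happens with probability <= 2**-trials

# toy circuit C: {0,1}^4 -> {0,1}^5 that duplicates the last bit
C = lambda x: x + (x[-1],)
print(trivial_avoid(C, n=4))
```

Since \(|\text{Range}(C)|\leq 2^{n}\) is at most half of \(\{0,1\}^{n+1}\), each trial succeeds with probability at least \(1/2\). Note that the output is _not_ canonical across runs; this is exactly the gap between the trivial algorithm and the pseudodeterministic algorithms discussed next.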
The class \(\mathsf{APEPP}\) (Abundant Polynomial Empty Pigeonhole Principle) [13] is the class of total search problems reducible to Avoid. As demonstrated by Korten [14, Section 3], \(\mathsf{APEPP}\) captures the complexity of explicit construction problems whose solutions are guaranteed to exist by the probabilistic method (more precisely, the dual weak pigeonhole principle [12, 13]), in the sense that constructing such objects reduces to the range avoidance problem. This includes many important objects in mathematics and theoretical computer science, including Ramsey graphs [14], rigid matrices [15, 16, 17], two-source extractors [18, 19], linear codes [15], hard truth tables [14], and strings with maximum time-bounded Kolmogorov complexity (i.e., \(\mathrm{K}^{\mathrm{poly}}\)-random strings) [15]. Hence, derandomizing the trivial \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm for Avoid would imply explicit constructions for all these important objects.

**Our results: new pseudodeterministic algorithms for Avoid.** We show that, _unconditionally_, the trivial \(\mathsf{FZPP^{NP}}\) algorithm for Avoid can be made _pseudodeterministic_ on infinitely many input lengths. A _pseudodeterministic_ algorithm [11] is a randomized algorithm that outputs the same _canonical_ answer on most computational paths. In particular, we have:

**Theorem 1.5**.: _For every constant \(d\geq 1\), there is a randomized algorithm \(\mathcal{A}\) with an \(\mathsf{NP}\) oracle such that the following holds for infinitely many integers \(n\). For every circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) of size at most \(n^{d}\), there is a string \(y_{C}\in\{0,1\}^{n+1}\setminus\operatorname{Range}(C)\) such that \(\mathcal{A}(C)\) either outputs \(y_{C}\) or \(\bot\), and the probability (over the internal randomness of \(\mathcal{A}\)) that \(\mathcal{A}(C)\) outputs \(y_{C}\) is at least \(2/3\). Moreover, this theorem holds in every relativized world._

As a corollary, for every problem in \(\mathsf{APEPP}\), we obtain zero-error pseudodeterministic constructions with an \(\mathsf{NP}\) oracle and one bit of advice (\(\mathsf{FZPP^{NP}}/_{1}\)) that work infinitely often4:

Footnote 4: The one-bit advice encodes whether our algorithm succeeds on a given input length; it is needed since on bad input lengths, our algorithm might not be pseudodeterministic (i.e., there may not be a canonical answer that is outputted with high probability).

**Corollary 1.6** (Informal).: _There are infinitely-often zero-error pseudodeterministic constructions for the following objects with an \(\mathsf{NP}\) oracle and one bit of advice: Ramsey graphs, rigid matrices, two-source extractors, linear codes, hard truth tables, and \(\mathrm{K}^{\mathrm{poly}}\)-random strings._

Actually, we obtain single-valued \(\mathsf{FS_{2}P}/_{1}\) algorithms for the explicit construction problems above (see Definition 2.2), and the pseudodeterministic \(\mathsf{FZPP^{NP}}/_{1}\) algorithms follow from Cai's theorem that \(\mathsf{S_{2}P}\subseteq\mathsf{ZPP^{NP}}\) [10]. We stated them as pseudodeterministic \(\mathsf{FZPP^{NP}}/_{1}\) algorithms since this notion is better known than the notion of single-valued \(\mathsf{FS_{2}P}/_{1}\) algorithms.

Theorem 1.5 is tantalizingly close to an infinitely-often \(\mathsf{FP^{NP}}\) algorithm for Avoid (with the only caveat of being _zero-error_ instead of being completely _deterministic_).
However, since an \(\mathsf{FP^{NP}}\) algorithm for range avoidance would imply near-maximum circuit lower bounds for \(\mathsf{E^{NP}}\), we expect that it would require fundamentally new ideas to completely derandomize our algorithm.

Previously, Hirahara, Lu, and Ren [14, Theorem 36] presented an infinitely-often pseudodeterministic \(\mathsf{FZPP^{NP}}\) algorithm for the range avoidance problem using \(n^{\varepsilon}\) bits of advice, for any small constant \(\varepsilon>0\). Our result improves the above in two aspects: first, we reduce the number of advice bits to \(1\); second, our techniques relativize but their techniques do not.

**Lower bounds against non-uniform computation with maximum advice length.** Finally, our results also imply lower bounds against non-uniform computation with maximum advice length. We mention this corollary because it is a stronger statement than circuit lower bounds, and similar lower bounds appeared recently in the literature of super-fast derandomization [17].

**Corollary 1.7**.: _For every \(\alpha(n)\geq\omega(1)\) and any constant \(k\geq 1\), \(\mathsf{S_{2}E}/_{1}\not\subset\mathsf{TIME}[2^{kn}]/_{2^{n}-\alpha(n)}\). The same holds for \(\Sigma_{2}\mathsf{E}\), \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\), and \(\mathsf{ZPE^{NP}}/_{1}\) in place of \(\mathsf{S_{2}E}/_{1}\). Moreover, this holds in every relativized world._

### Intuitions

In the following, we present some high-level intuitions for our new circuit lower bounds.

#### 1.2.1 Perspective: single-valued constructions

A key perspective in this paper is to view circuit lower bounds (for exponential-time classes) as _single-valued_ constructions of hard truth tables. This perspective is folklore; it was also emphasized in recent papers on the range avoidance problem [13, 14].

Let \(\Pi\subseteq\{0,1\}^{\star}\) be an _\(\varepsilon\)-dense_ property, i.e., for every integer \(N\in\mathbb{N}\), \(|\Pi_{N}|\geq\varepsilon\cdot 2^{N}\). (In what follows, we use \(\Pi_{N}:=\Pi\cap\{0,1\}^{N}\) to denote the length-\(N\) slice of \(\Pi\).) As a concrete example, let \(\Pi_{\text{hard}}\) be the set of hard truth tables, i.e., a string \(tt\in\Pi_{\text{hard}}\) if and only if it is the truth table of a function \(f:\{0,1\}^{n}\to\{0,1\}\) whose circuit complexity is at least \(2^{n}/n\), where \(n:=\log N\). (We assume that \(n:=\log N\) is an integer.) Shannon's argument [15, 16] shows that \(\Pi_{\text{hard}}\) is a \(1/2\)-dense property. We are interested in the following question:

What is the complexity of _single-valued_ constructions for any string in \(\Pi_{\text{hard}}\)?

Here, informally speaking, a computation is _single-valued_ if each of its computational paths either fails or outputs the _same_ value. For example, an \(\mathsf{NP}\) machine \(M\) is a single-valued construction for \(\Pi\) if there is a "canonical" string \(y\in\Pi\) such that (1) \(M\) outputs \(y\) on every accepting computational path; (2) \(M\) has at least one accepting computational path. (That is, it is an \(\mathsf{NPSV}\) construction in the sense of [13, 12, 14, 15].) Similarly, a \(\mathsf{BPP}\) machine \(M\) is a single-valued construction for \(\Pi\) if there is a "canonical" string \(y\in\Pi\) such that \(M\) outputs \(y\) on most (say \(\geq 2/3\) fraction of) computational paths.
(In other words, single-valued \(\mathsf{ZPP}\) and \(\mathsf{BPP}\) constructions are another name for _pseudodeterministic constructions_ [11].)5

Footnote 5: Note that the trivial construction algorithms are not single-valued in general. For example, a trivial \(\Sigma_{2}\mathsf{P}=\mathsf{NP}^{\mathsf{NP}}\) construction algorithm for \(\Pi_{\text{hard}}\) is to guess a hard truth table \(tt\) and use the \(\mathsf{NP}\) oracle to verify that \(tt\) does not have size-\(N/\log N\) circuits; however, different accepting computational paths of this computation would output different hard truth tables. Similarly, a trivial \(\mathsf{BPP}\) construction algorithm for every dense property \(\Pi\) is to output a random string, but there is no _canonical_ answer that is outputted with high probability. In other words, these construction algorithms do not _define_ anything; instead, a single-valued construction algorithm should _define_ some particular string in \(\Pi\).

Hence, the task of proving circuit lower bounds is equivalent to the task of _defining_, i.e., single-value constructing, a hard function, in the smallest possible complexity class. For example, a single-valued \(\mathsf{BPP}\) construction (i.e., pseudodeterministic construction) for \(\Pi_{\text{hard}}\) is equivalent to the circuit lower bound \(\mathsf{BPE}\not\subset\text{i.o.-SIZE}[2^{n}/n]\).6 In this regard, the previous near-maximum circuit lower bound for \(\Delta_{3}\mathsf{E}:=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\) [16] can be summarized in one sentence:

The lexicographically first string in \(\Pi_{\text{hard}}\) can be constructed in \(\Delta_{3}\mathsf{P}:=\mathsf{P}^{\Sigma_{2}\mathsf{P}}\) (which is necessarily single-valued).

Footnote 6: To see this, note that (1) \(\mathsf{BPE}\not\subset\text{i.o.-SIZE}[2^{n}/n]\) implies a simple single-valued \(\mathsf{BPP}\) construction for \(\Pi_{\text{hard}}\): given \(N=2^{n}\), output the truth table of \(L_{n}\) (\(L\) restricted to \(n\)-bit inputs), where \(L\in\mathsf{BPE}\) is the hard language not in \(\mathsf{SIZE}[2^{n}/n]\); and (2) assuming a single-valued \(\mathsf{BPP}\) construction \(A\) for \(\Pi_{\text{hard}}\), one can define a hard language \(L\) such that the truth table of \(L_{n}\) is the output of \(A(1^{2^{n}})\), and observe that \(L\in\mathsf{BPE}\).

**Reduction to Avoid.** It was observed in [13, 14] that explicit construction of elements from \(\Pi_{\text{hard}}\) is a special case of range avoidance: Let \(\mathsf{TT}\colon\{0,1\}^{N-1}\to\{0,1\}^{N}\) (here \(N=2^{n}\)) be a circuit that maps the description of a \(2^{n}/n\)-size circuit into its \(2^{n}\)-length truth table (by [14], such a description can be encoded in \(N-1\) bits). Hence, a single-valued algorithm solving Avoid for \(\mathsf{TT}\) is equivalent to a single-valued construction for \(\Pi_{\text{hard}}\). This explains how our new range avoidance algorithms imply our new circuit lower bounds (as mentioned in Section 1.1.2).

In the rest of Section 1.2, we will only consider the special case of Avoid where the input circuit for range avoidance is a \(\mathsf{P}\)-uniform circuit family.
Specifically, let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits, where \(|C_{n}|\leq\text{poly}(n)\). Our goal is to find an algorithm \(A\) such that for infinitely many \(n\), \(A(1^{n})\in\{0,1\}^{2n}\setminus\text{Range}(C_{n})\); see Section 5.3 and Section 5.4 for how to turn this into an algorithm that works for an arbitrary input circuit with a single bit of stretch. Also, since from now on we will not talk about truth tables anymore, we will use \(n\) instead of \(N\) to denote the input length of Avoid instances.

#### 1.2.2 The iterative win-win paradigm of [12]

Chen, Lu, Oliveira, Ren, and Santhanam [12] introduced an _iterative win-win_ paradigm for explicit constructions, and used it to obtain a polynomial-time pseudodeterministic construction of primes that works infinitely often. Since our construction algorithm closely follows their paradigm, it is instructive to take a detour and give a high-level overview of how the construction from [12] works.8

Footnote 8: Indeed, for every \(1/\mathrm{poly}(n)\)-dense property \(\Pi\in\mathsf{P}\), they obtained a polynomial-time algorithm \(A\) such that for infinitely many \(n\in\mathbb{N}\), there exists \(y_{n}\in\Pi_{n}\) such that \(A(1^{n})\) outputs \(y_{n}\) with probability at least \(2/3\). By [1] and the prime number theorem, the set of \(n\)-bit primes is such a property.

In this paradigm, for a (starting) input length \(n_{0}\) and some \(t=O(\log n_{0})\), we will consider an increasing sequence of input lengths \(n_{0},n_{1},\ldots,n_{t}\) (jumping ahead, we will set \(n_{i+1}=n_{i}^{\beta}\) for a large constant \(\beta\)), and show that our construction algorithm succeeds on at least one of the input lengths. By varying \(n_{0}\), we can construct infinitely many such sequences of input lengths that are pairwise disjoint, and therefore our algorithm succeeds on infinitely many input lengths.

In more detail, fixing a sequence of input lengths \(n_{0},n_{1},\ldots,n_{t}\) and letting \(\Pi\) be an \(\varepsilon\)-dense property, for each \(i\in\{0,1,\ldots,t\}\), we specify a (deterministic) algorithm \(\mathsf{ALG}_{i}\) that takes \(1^{n_{i}}\) as input and aims to construct an explicit element from \(\Pi_{n_{i}}\). We let \(\mathsf{ALG}_{0}\) be the simple brute-force algorithm that enumerates all length-\(n_{0}\) strings and finds the lexicographically first string in \(\Pi_{n_{0}}\); it is easy to see that \(\mathsf{ALG}_{0}\) runs in \(T_{0}:=2^{O(n_{0})}\) time.

**The win-or-improve mechanism.** The core of [12] is a novel _win-or-improve mechanism_, which is described by a (randomized) algorithm \(R\). Roughly speaking, for input lengths \(n_{i}\) and \(n_{i+1}\), \(R(1^{n_{i}})\) attempts to _simulate \(\mathsf{ALG}_{i}\) faster_ by using the oracle \(\Pi_{n_{i+1}}\) (hence it runs in \(\mathrm{poly}(n_{i+1})\) time). The crucial property is the following win-win argument:

* Either \(R(1^{n_{i}})\) outputs \(\mathsf{ALG}_{i}(1^{n_{i}})\) with probability at least \(2/3\) over its internal randomness,
* or, from the failure of \(R(1^{n_{i}})\), we can construct an algorithm \(\mathsf{ALG}_{i+1}\) that outputs an explicit element from \(\Pi_{n_{i+1}}\) and runs in \(T_{i+1}=\mathrm{poly}(T_{i})\) time.
We call the above (Win-or-Improve), since either we have a pseudodeterministic algorithm \(R(1^{n_{i}})\) that constructs an explicit element from \(\Pi_{n_{i}}\) in \(\mathrm{poly}(n_{i+1})\leq\mathrm{poly}(n_{i})\) time (since it simulates \(\mathsf{ALG}_{i}\)), or we have an _improved_ algorithm \(\mathsf{ALG}_{i+1}\) at the input length \(n_{i+1}\) (for example, on input length \(n_{1}\), the running time of \(\mathsf{ALG}_{1}\) is \(2^{O\left(n_{1}^{1/\beta}\right)}\ll 2^{O(n_{1})}\)). The (Win-or-Improve) part in [12] is implemented via the Chen-Tell targeted hitting set generator [13] (we omit the details here). Jumping ahead, in this paper, we will implement a similar mechanism using Korten's \(\mathsf{P}^{\mathsf{NP}}\) reduction from the range avoidance problem to constructing hard truth tables [14].

**Getting polynomial time.** Now we briefly explain why (Win-or-Improve) implies a _polynomial-time_ construction algorithm. Let \(\alpha\) be an absolute constant such that we always have \(T_{i+1}\leq T_{i}^{\alpha}\); we now set \(\beta:=2\alpha\). Recall that \(n_{i}=n_{i-1}^{\beta}\) for every \(i\). The crucial observation is the following: Although \(T_{0}\) is much larger than \(n_{0}\), the sequence \(\{T_{i}\}\) grows slower than \(\{n_{i}\}\). Indeed, a simple calculation shows that when \(t=O(\log n_{0})\), we will have \(T_{t}\leq\operatorname{poly}(n_{t})\); see [13, Section 1.3.1]. For each \(0\leq i<t\), if \(R(1^{n_{i}})\) successfully simulates \(\mathsf{ALG}_{i}\), then we obtain an algorithm for input length \(n_{i}\) running in \(\operatorname{poly}(n_{i+1})\leq\operatorname{poly}(n_{i})\) time. Otherwise, we have an algorithm \(\mathsf{ALG}_{i+1}\) running in \(T_{i+1}\) time on input length \(n_{i+1}\). Eventually, we will hit \(t\) such that \(T_{t}\leq\operatorname{poly}(n_{t})\), in which case \(\mathsf{ALG}_{t}\) itself gives a polynomial-time construction on input length \(n_{t}\). Therefore, we obtain a polynomial-time algorithm on at least one of the input lengths \(n_{0},n_{1},\ldots,n_{t}\).

#### 1.2.3 Algorithms for range-avoidance via Korten's reduction

Now we are ready to describe our new algorithms for Avoid. Roughly speaking, our new algorithm makes use of the iterative win-win argument introduced above, together with an easy-witness style argument [14] and Korten's reduction [15].9 In the following, we introduce the latter two ingredients and show how to chain them together via the iterative win-win argument.

Footnote 9: Korten’s result was inspired by [13], which proved that the dual weak pigeonhole principle is equivalent to the statement asserting the existence of Boolean functions with exponential circuit complexity in a certain fragment of Bounded Arithmetic.

**An easy-witness style argument.** Let \(\mathsf{BF}\) be the \(2^{O(n)}\)-time brute-force algorithm outputting the lexicographically first non-output of \(C_{n}\). Our first idea is to consider its _computational history_, a unique \(2^{O(n)}\)-length string \(h_{\mathsf{BF}}\) (that can be computed in \(2^{O(n)}\) time), and _branch on whether \(h_{\mathsf{BF}}\) has a small circuit or not_. Suppose \(h_{\mathsf{BF}}\) admits a, say, \(n^{\alpha}\)-size circuit for some large \(\alpha\); then we apply an _easy-witness-style_ argument [14] to simulate \(\mathsf{BF}\) by a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm running in \(\operatorname{poly}(n^{\alpha})=\operatorname{poly}(n)\) time (see Section 1.3.2).
Hence, we obtained the desired algorithm when \(h_{\mathsf{BF}}\) is easy. However, it is less clear how to deal with the other case (when \(h_{\mathsf{BF}}\) is hard) directly. The crucial observation is that we have gained the following ability: we can generate a string \(h_{\mathsf{BF}}\in\{0,1\}^{2^{O(n)}}\) that has circuit complexity at least \(n^{\alpha}\), in only \(2^{O(n)}\) time.

**Korten's reduction.** We will apply Korten's recent work [14] to make use of the "gain" above. So it is worth taking a detour to review the main result of [14]. Roughly speaking, [14] gives **an algorithm that uses a hard truth table \(f\) to solve a derandomization task: finding a non-output of the given circuit (that has more output bits than input bits).**10

Footnote 10: This is very similar to the classical hardness-vs-randomness connection [13, 14], which can be understood as an algorithm that uses a hard truth table \(f\) (i.e., a truth table without small circuits) to solve another derandomization task: estimating the acceptance probability of the given circuit. This explains why one may want to use Korten’s algorithm to replace the Chen–Tell targeted generator construction [13] from [13], as they are both hardness-vs-randomness connections.

Formally, [14] gives a \(\mathsf{P}^{\mathsf{NP}}\)-computable algorithm \(\mathsf{Korten}(C,f)\) that takes as inputs a circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) and a string \(f\in\{0,1\}^{T}\) (think of \(n\ll T\)), and outputs a string \(y\in\{0,1\}^{2n}\). The guarantee is that if the circuit complexity of \(f\) is sufficiently larger than the size of \(C\), then the output \(y\) is not in the range of \(C\). This fits perfectly with our "gain" above: for \(\beta\ll\alpha\) and \(m=n^{\beta}\), \(\mathsf{Korten}(C_{m},h_{\mathsf{BF}})\) solves Avoid for \(C_{m}\), since the circuit complexity of \(h_{\mathsf{BF}}\), \(n^{\alpha}\), is sufficiently larger than the size of \(C_{m}\). Moreover, \(\mathsf{Korten}(C_{m},h_{\mathsf{BF}})\) runs in only \(2^{O(n)}\) time, which is much less than the brute-force running time \(2^{O(m)}\). Therefore, we obtain an improved algorithm for Avoid on input length \(m\).

**The iterative win-win argument.** What we described above is essentially the first stage of a _win-or-improve mechanism_ similar to that from Section 1.2.2. Therefore, we only need to iterate the argument above to obtain a polynomial-time algorithm. For this purpose, we need to consider the computational history of not only \(\mathsf{BF}\), but also algorithms of the form \(\mathsf{Korten}(C,f)\).11 For any circuit \(C\) and "hard" truth table \(f\), there is a _unique_ "computational history" \(h\) of \(\mathsf{Korten}(C,f)\), and the length of \(h\) is upper bounded by \(\operatorname{poly}(|f|)\). We are able to prove the following statement akin to the _easy witness lemma_ [13]: if \(h\) admits a size-\(s\) circuit (think of \(s\ll T\)), then \(\mathsf{Korten}(C,f)\) can be simulated by a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm in time \(\operatorname{poly}(s)\); see Section 1.3.2 for details on this argument.12

Footnote 11: Actually, we need to consider all algorithms \(\mathsf{ALG}_{i}\) defined below and prove the properties of computational history for these algorithms. It turns out that all of the \(\mathsf{ALG}_{i}\) are of the form \(\mathsf{Korten}(C,f)\) (including \(\mathsf{ALG}_{0}\)), so in what follows we only consider the computational history of \(\mathsf{Korten}(C,f)\).
Footnote 12: With an “encoded” version of history and more effort, we are able to simulate \(\mathsf{Korten}(C,f)\) by a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm in time \(\operatorname{poly}(s)\), and that is how our \(\mathsf{S}_{2}\mathsf{E}\) lower bound is proved; see Section 1.3.3 for details.

Now, following the iterative win-win paradigm of [14], for a (starting) input length \(n_{0}\) and some \(t=O(\log n_{0})\), we consider an increasing sequence of input lengths \(n_{0},n_{1},\ldots,n_{t}\), and show that our algorithm \(A\) succeeds on at least one of the input lengths (i.e., \(A(1^{n_{i}})\in\{0,1\}^{2n_{i}}\setminus\operatorname{Range}(C_{n_{i}})\) for some \(i\in\{0,1,\ldots,t\}\)). For each \(i\in\{0,1,\ldots,t\}\), we specify an algorithm \(\mathsf{ALG}_{i}\) of the form \(\mathsf{Korten}(C_{n_{i}},-)\) that aims to solve Avoid for \(C_{n_{i}}\); in other words, we specify a string \(f_{i}\in\{0,1\}^{T_{i}}\) for some \(T_{i}\) and let \(\mathsf{ALG}_{i}:=\mathsf{Korten}(C_{n_{i}},f_{i})\). The algorithm \(\mathsf{ALG}_{0}\) is simply the brute-force algorithm \(\mathsf{BF}\) at input length \(n_{0}\). (A convenient observation is that we can specify an exponentially long string \(f_{0}\in\{0,1\}^{2^{O(n_{0})}}\) so that \(\mathsf{Korten}(C_{n_{0}},f_{0})\) is equivalent to \(\mathsf{BF}=\mathsf{ALG}_{0}\); see Fact 3.4.) For each \(0\leq i<t\), to specify \(\mathsf{ALG}_{i+1}\), let \(f_{i+1}\) denote the history of the algorithm \(\mathsf{ALG}_{i}\), and consider the following win-or-improve mechanism.

* If \(f_{i+1}\) admits an \(n_{i}^{\alpha}\)-size circuit (for some large constant \(\alpha\)), then by our easy-witness argument, we can simulate \(\mathsf{ALG}_{i}\) by a \(\operatorname{poly}(n_{i})\)-time single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm.
* Otherwise, \(f_{i+1}\) has circuit complexity at least \(n_{i}^{\alpha}\), and we plug it into Korten's reduction to solve Avoid for \(C_{n_{i+1}}\). That is, we take \(\mathsf{ALG}_{i+1}\coloneqq\mathsf{Korten}(C_{n_{i+1}},f_{i+1})\) as our new algorithm on input length \(n_{i+1}\).

Let \(T_{i}=|f_{i}|\); then \(T_{i+1}\leq\operatorname{poly}(T_{i})\). By setting \(n_{i+1}=n_{i}^{\beta}\) for a sufficiently large \(\beta\), an analysis similar to that of [14] shows that for some \(t=O(\log n_{0})\) we would have \(T_{t}\leq\operatorname{poly}(n_{t})\), meaning that \(\mathsf{ALG}_{t}\) would be a \(\operatorname{poly}(n_{t})\)-time \(\mathsf{FP}^{\mathsf{NP}}\) algorithm (thus also a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm) solving Avoid for \(C_{n_{t}}\). Putting everything together, we obtain a polynomial-time single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm that solves Avoid for at least one of the \(C_{n_{i}}\).

**The hardness condenser perspective.** Below we present another perspective on the construction above which may help the reader understand it better. In the following, we fix \(C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) to be the truth table generator \(\mathsf{TT}_{n,2n}\) that maps an \(n\)-bit description of a \(\log(2n)\)-input circuit into its length-\(2n\) truth table. Hence, instead of solving Avoid in general, our goal here is simply _constructing hard truth tables_ (or equivalently, proving circuit lower bounds).
We note that \(\mathsf{Korten}(\mathsf{TT}_{n,2n},f)\) can then be interpreted as a _hardness condenser_ [1]: given a truth table \(f\in\{0,1\}^{T}\) whose circuit complexity is sufficiently larger than \(n\), it outputs a length-\(2n\) truth table that is maximally hard (i.e., without \(n/\log n\)-size circuits). The win-or-improve mechanism can be interpreted as an iterative application of this hardness condenser. At stage \(i\), we consider the algorithm \(\mathsf{ALG}_{i}\coloneqq\mathsf{Korten}(\mathsf{TT}_{n_{i},2n_{i}},f_{i})\), which runs in \(T_{i}\approx|f_{i}|\) time and creates (roughly) \(n_{i}\) bits of hardness. (That is, the circuit complexity of the output of \(\mathsf{ALG}_{i}\) is roughly \(n_{i}\).) In the (**Win**) case above, \(\mathsf{ALG}_{i}\) admits an \(n_{i}^{\alpha}\)-size history \(f_{i+1}\) (with length approximately \(|f_{i}|\)) and can therefore be simulated in \(\mathsf{F}\Sigma_{2}\mathsf{P}\). The magic is that in the (**Improve**) case, we actually have access to _much more hardness than \(n_{i}\)_: the history string \(f_{i+1}\) has \(n_{i}^{\alpha}\gg n_{i}\) bits of hardness. So we can _distill_ this hardness by applying the condenser to \(f_{i+1}\) to obtain a maximally hard truth table of length \(2n_{i+1}=2n_{i}^{\beta}\), establish the next algorithm \(\mathsf{ALG}_{i+1}\coloneqq\mathsf{Korten}(\mathsf{TT}_{n_{i+1},2n_{i+1}},f_{i+1})\), and keep iterating.

Observe that the string \(f_{i+1}\) above has \(n_{i}^{\alpha}>n_{i}^{\beta}=n_{i+1}\) bits of hardness. Since \(|f_{i+1}|\approx|f_{i}|\) and \(n_{i+1}=n_{i}^{\beta}\), the process above creates _harder and harder_ strings, until \(|f_{i+1}|\leq n_{i+1}\leq n_{i}^{\alpha}\), so the (**Win**) case must happen at some point.

### Proof Overview

In this section, we elaborate on the computational history of \(\mathsf{Korten}\) and how the easy-witness-style argument gives us \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) algorithms.

#### 1.3.1 Korten's reduction

We first review the key concepts and results from [10] that are needed for us. Given a circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) and a parameter \(T\geq 2n\), Korten builds another circuit \(\mathsf{GGM}_{T}[C]\) stretching \(n\) bits to \(T\) bits as follows:14

Footnote 14: We use the name \(\mathsf{GGM}\) because the construction is similar to the pseudorandom function generator of Goldreich, Goldwasser, and Micali [11].

* On input \(x\in\{0,1\}^{n}\), we set \(v_{0,0}=x\). For simplicity, we assume that \(T/n=2^{k}\) for some \(k\in\mathbb{N}\). We build a full binary tree with \(k+1\) layers; see Figure 1 for an example with \(k=3\).
* For every \(i\in\{0,1,\ldots,k-1\}\) and \(j\in\{0,1,\ldots,2^{i}-1\}\), we set \(v_{i+1,2j}\) and \(v_{i+1,2j+1}\) to be the first \(n\) bits and the last \(n\) bits of \(C(v_{i,j})\), respectively.
* The output of \(\mathsf{GGM}_{T}[C](x)\) is defined to be the concatenation of \(v_{k,0},v_{k,1},\ldots,v_{k,2^{k}-1}\).

Figure 1: An illustration of the GGM Tree, in which, for instance, it holds that \((v_{3,4},v_{3,5})=C(v_{2,2})\).

The following two properties of \(\mathsf{GGM}_{T}[C]\) are established in [10], which will be useful for us:

1. Given \(i\in[T]\), \(C\), and \(x\in\{0,1\}^{n}\), by traversing the tree from the root towards the leaf with the \(i\)-th bit, one can compute the \(i\)-th bit of \(\mathsf{GGM}_{T}[C](x)\) in \(\operatorname{poly}(\mathsf{SIZE}(C),\log T)\) time.
Consequently, for every \(x\), \(\mathsf{GGM}_{T}[C](x)\) has circuit complexity at most \(\operatorname{poly}(\mathsf{SIZE}(C),\log T)\).

2. There is a \(\mathsf{P}^{\mathsf{NP}}\) algorithm \(\mathsf{Korten}(C,f)\) that takes an input \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\) and outputs a string \(u\in\{0,1\}^{2n}\setminus\operatorname{Range}(C)\). Note that this is a reduction from solving Avoid for \(C\) to solving Avoid for \(\mathsf{GGM}_{T}[C]\).

In particular, letting \(f\) be a truth table whose circuit complexity is sufficiently larger than \(\mathsf{SIZE}(C)\), by the first property above, it is not in \(\operatorname{Range}(\mathsf{GGM}_{T}[C])\), and therefore \(\mathsf{Korten}(C,f)\) solves Avoid for \(C\). This confirms our description of Korten in Section 1.1.2.

#### 1.3.2 Computational history of \(\mathsf{Korten}\) and an easy-witness argument for \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms

The algorithm \(\mathsf{Korten}(C,f)\) works as follows: we first view \(f\) as the labels of the last layer of the binary tree, and try to reconstruct the whole binary tree, layer by layer (starting from the bottom layer to the top layer; within each layer, starting from the rightmost node to the leftmost one), by filling in the labels of the intermediate nodes. To fill \(v_{i,j}\), we use an \(\mathsf{NP}\) oracle to find the lexicographically first string \(u\in\{0,1\}^{n}\) such that \(C(u)=v_{i+1,2j}\circ v_{i+1,2j+1}\), and set \(v_{i,j}=u\). If no such \(u\) exists, the algorithm stops and reports \(v_{i+1,2j}\circ v_{i+1,2j+1}\) as the solution to Avoid for \(C\). Observe that this reconstruction procedure must stop somewhere, since if it successfully reproduced all the labels in the binary tree, we would have \(f=\mathsf{GGM}_{T}[C](v_{0,0})\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\), contradicting the assumption. See Section 3.3 for details.

**The computational history of Korten.** The algorithm described above induces a natural description of the computational history of Korten, denoted by \(\mathsf{History}(C,f)\), as follows: the index \((i_{\star},j_{\star})\) at which the algorithm stops (i.e., the algorithm fails to fill in \(v_{i_{\star},j_{\star}}\)), concatenated with the labels of all the nodes generated by \(\mathsf{Korten}(C,f)\) (for the intermediate nodes with no label assigned, we set their labels to a special symbol \(\bot\)); see Figure 2 for an illustration. This history has length at most \(5T\), and for convenience, we pad additional zeros at the end of it so that its length is exactly \(5T\).

**A local characterization of \(\mathsf{History}(C,f)\).** The crucial observation we make on \(\mathsf{History}(C,f)\) is that it admits a local characterization in the following sense: there is a family of local constraints \(\{\psi_{x}\}_{x\in\{0,1\}^{\mathrm{poly}(n)}}\), where each \(\psi_{x}\colon\{0,1\}^{5T}\times\{0,1\}^{T}\to\{0,1\}\) reads only \(\mathrm{poly}(n)\) many bits of its input (we think of it as a local constraint since usually \(n\ll T\)), such that for fixed \(f\), \(\mathsf{History}(C,f)\circ f\) is the unique string making all the \(\psi_{x}\) output \(1\).
The constraints are as follows: (1) for every leaf node \(v_{k,i}\), its content is consistent with the corresponding block in \(f\); (2) all labels at or before node \((i_{\star},j_{\star})\) are \(\bot\);15 (3) for every \(z\in\{0,1\}^{n}\), \(C(z)\neq v_{i_{\star}+1,2j_{\star}}\circ v_{i_{\star}+1,2j_{\star}+1}\) (meaning the algorithm fails at \(v_{i_{\star},j_{\star}}\)); (4) for every \((i,j)\) after \((i_{\star},j_{\star})\), \(C(v_{i,j})=v_{i+1,2j}\circ v_{i+1,2j+1}\) (\(v_{i,j}\) is the correct label); (5) for every \((i,j)\) after \((i_{\star},j_{\star})\) and for every \(v^{\prime}<v_{i,j}\), \(C(v^{\prime})\neq v_{i+1,2j}\circ v_{i+1,2j+1}\) (\(v_{i,j}\) is the lexicographically first correct label). It is clear that each of the constraints above only reads \(\mathrm{poly}(n)\) many bits from the input, and a careful examination shows that they precisely **define** the string \(\mathsf{History}(C,f)\).

Footnote 15: We say that \((i,j)\) is before (after) \((i_{\star},j_{\star})\) if the pair \((i,j)\) is lexicographically smaller (greater) than \((i_{\star},j_{\star})\).

A more intuitive way to look at these local constraints is to treat them as a \(\mathrm{poly}(n)\)-time oracle algorithm \(V_{\mathsf{History}}\) that takes a string \(x\in\{0,1\}^{\mathrm{poly}(n)}\) as input and two strings \(h\in\{0,1\}^{5T}\) and \(f\in\{0,1\}^{T}\) as oracles, and we simply let \(V_{\mathsf{History}}^{h,f}(x)=\psi_{x}(h\circ f)\). Since the constraints above are all very simple and only read \(\mathrm{poly}(n)\) bits of \(h\circ f\), \(V_{\mathsf{History}}\) runs in \(\mathrm{poly}(n)\) time. In some sense, \(V_{\mathsf{History}}\) is a local \(\Pi_{1}\) verifier: it is local in the sense that it only queries \(\mathrm{poly}(n)\) bits from its oracles, and it is \(\Pi_{1}\) since it needs a universal quantifier over \(x\in\{0,1\}^{\mathrm{poly}(n)}\) to perform all the checks.

**\(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms.** Before we proceed, we give a formal definition of a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\). Here \(A\) is implemented by an algorithm \(V_{A}\) taking an input \(x\) and two \(\mathrm{poly}(|x|)\)-length witnesses \(\pi_{1}\) and \(\pi_{2}\). We say \(A(x)\) outputs a string \(z\in\{0,1\}^{\ell}\) (we assume \(\ell=\ell(x)\) can be computed in polynomial time from \(x\)) if \(z\) is the _unique_ length-\(\ell\) string such that the following holds:

* there exists \(\pi_{1}\) such that for every \(\pi_{2}\), \(V_{A}(x,\pi_{1},\pi_{2},z)=1\).16

Footnote 16: Note that our definition here is different from the formal definition we used in Definition 2.2. But from this definition, it is easier to see why \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms for constructing hard truth tables imply circuit lower bounds for \(\Sigma_{2}\mathsf{E}\).

We can view \(V_{A}\) as a verifier that checks whether \(z\) is the desired output using another universal quantifier: given a proof \(\pi_{1}\) and a string \(z\in\{0,1\}^{\ell}\), \(A\) accepts \(z\) if and only if _for every_ \(\pi_{2}\), \(V_{A}(x,\pi_{1},\pi_{2},z)=1\). That is, \(A\) can perform exponentially many checks on \(\pi_{1}\) and \(z\), each taking \(\mathrm{poly}(|x|)\) time.

**The easy-witness argument.** Now we are ready to elaborate on the easy-witness argument mentioned in Section 1.1.2. Recall that at stage \(i\), we have \(\mathsf{ALG}_{i}=\mathsf{Korten}(C_{n_{i}},f_{i})\) and \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) (the history of \(\mathsf{ALG}_{i}\)).
Assuming that \(f_{i+1}\) admits a \(\mathrm{poly}(n_{i})\)-size circuit, we want to show that \(\mathsf{Korten}(C_{n_{i}},f_{i})\) can be simulated by a \(\mathrm{poly}(n_{i})\)-time single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm. Observe that for every \(t\in[i+1]\), \(f_{t-1}\) is simply a substring of \(f_{t}\), since \(f_{t}=\mathsf{History}(C_{n_{t-1}},f_{t-1})\). Therefore, \(f_{i+1}\) admitting a \(\mathrm{poly}(n_{i})\)-size circuit implies that all \(f_{t}\) admit \(\mathrm{poly}(n_{i})\)-size circuits for \(t\in[i]\).

We can then implement \(A\) as follows: the proof \(\pi_{1}\) is a \(\mathrm{poly}(n_{i})\)-size circuit \(C_{i+1}\) supposed to compute \(f_{i+1}\), from which one can obtain in polynomial time a sequence of circuits \(C_{1},\ldots,C_{i}\) that are supposed to compute \(f_{1},\ldots,f_{i}\), respectively. (Also, from Fact 3.4, one can easily construct a \(\mathrm{poly}(n_{0})\)-size circuit \(C_{0}\) computing \(f_{0}\).) Next, for every \(t\in\{0,1,\ldots,i\}\), \(A\) checks whether \(\mathsf{tt}(C_{t+1})\circ\mathsf{tt}(C_{t})\) satisfies all the local constraints \(\psi_{x}\) from the characterization of \(\mathsf{History}(C_{n_{t}},f_{t})\). In other words, \(A\) checks whether \(V_{\mathsf{History}}^{C_{t+1},C_{t}}(x)=1\) for all \(x\in\{0,1\}^{\mathrm{poly}(n_{t})}\). The crucial observation is that since all the \(C_{t}\) have size \(\operatorname{poly}(n_{i})\), each check above can be implemented in \(\operatorname{poly}(n_{i})\) time, as it only reads at most \(\operatorname{poly}(n_{i})\) bits from its input, despite the fact that \(\mathsf{tt}(C_{t+1})\circ\mathsf{tt}(C_{t})\) itself can be much longer than \(\operatorname{poly}(n_{i})\). Assuming that all the checks of \(A\) above pass, by induction we know that \(f_{t+1}=\mathsf{History}(C_{n_{t}},f_{t})\) for every \(t\in\{0,1,\ldots,i\}\). Finally, \(A\) checks whether \(z\) corresponds to the answer described in \(\mathsf{tt}(C_{i+1})=f_{i+1}\).

#### 1.3.3 Selectors and an easy-witness argument for \(\mathsf{FS}_{2}\mathsf{P}\) algorithms

Finally, we discuss how to implement the easy-witness argument above with a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm. It is known that any single-valued \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithm can be converted into an equivalent single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm outputting the same string [13, 14] (see also the proof of Theorem 5.7 for a self-contained argument). Therefore, in the following we aim to give a single-valued \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithm for solving range avoidance, which is easier to achieve.

**\(\mathsf{FS}_{2}\mathsf{BPP}\) algorithms and randomized selectors.** Before we proceed, we give a formal definition of a single-valued \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithm \(A\). We implement \(A\) by a randomized algorithm \(V_{A}\) that takes an input \(x\) and two \(\operatorname{poly}(|x|)\)-length witnesses \(\pi_{1}\) and \(\pi_{2}\).17 We say that \(A(x)\) outputs a string \(z\in\{0,1\}^{\ell}\) (we assume \(\ell=\ell(x)\) can be computed in polynomial time from \(x\)) if the following holds:

Footnote 17: \(\mathsf{FS}_{2}\mathsf{P}\) algorithms are the special case of \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithms where the algorithm \(V_{A}\) is _deterministic_.

* there exists a string \(h\) such that for every \(\pi\), both \(V_{A}(x,h,\pi)\) and \(V_{A}(x,\pi,h)\) output \(z\) with probability at least \(2/3\). (Note that such a \(z\) must be unique if it exists.)
Actually, our algorithm \(A\) will be implemented as a randomized _selector_: given two potential proofs \(\pi_{1}\) and \(\pi_{2}\), it first selects the correct one and then outputs the string \(z\) induced by the correct proof.18

Footnote 18: If both proofs are correct or neither proof is correct, it can select an arbitrary one. The condition only applies when exactly one of the proofs is correct.

**Recap.** Revisiting the algorithm in Section 1.2.3, our goal now is to give an \(\mathsf{FS}_{2}\mathsf{BPP}\) simulation of \(\mathsf{Korten}(C_{n_{i}},f_{i})\), assuming that \(\mathsf{History}(C_{n_{i}},f_{i})\) admits a small circuit. Similar to the local \(\Pi_{1}\) verifier used in the case of \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms, now we consider a local randomized selector \(V_{\mathsf{select}}\) which takes oracles \(\pi_{1},\pi_{2}\in\{0,1\}^{5T}\) and \(f\in\{0,1\}^{T}\) such that if exactly one of \(\pi_{1}\) and \(\pi_{2}\) is \(\mathsf{History}(C,f)\), \(V_{\mathsf{select}}\) outputs its index with high probability. Assuming that \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) admits a small circuit, one can similarly turn \(V_{\mathsf{select}}\) into a single-valued \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithm \(A\) computing \(\mathsf{Korten}(C_{n_{i}},f_{i})\): treat the two proofs \(\pi_{1}\) and \(\pi_{2}\) as two small circuits \(C\) and \(D\), both supposed to compute \(f_{i+1}\); from \(C\) and \(D\) we can obtain sequences of circuits \(\{C_{t}\}\) and \(\{D_{t}\}\) supposed to compute the \(f_{t}\) for \(t\in[i]\). Then we can use the selector \(V_{\mathsf{select}}\) to decide, for each \(t\in[i+1]\), which of \(C_{t}\) and \(D_{t}\) is the correct circuit for \(f_{t}\). Finally, we output the answer encoded in the selected circuit for \(f_{i+1}\); see the proof of Theorem 5.7 for details.19

Footnote 19: However, for the reasons to be explained below, we will actually work with the encoded history instead of the history, which entails a lot of technical challenges in the actual proof.

**Observation: it suffices to find the first differing node label.** Ignore the \((i_{\star},j_{\star})\) part of the history for now. Let \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) be the node labels encoded in \(\pi_{1}\) and \(\pi_{2}\), respectively. We also assume that exactly one of them corresponds to the correct node labels in \(\mathsf{History}(C,f)\). The crucial observation here is that, since the correct node labels are generated by a deterministic procedure _node by node_ (from bottom to top and from rightmost to leftmost), it is possible to tell which of \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) is correct given the largest \((i^{\prime},j^{\prime})\) such that \(v^{1}_{i^{\prime},j^{\prime}}\neq v^{2}_{i^{\prime},j^{\prime}}\). (Note that since all \((i,j)\) are processed by \(\mathsf{Korten}(C,f)\) in reverse lexicographic order, this \((i^{\prime},j^{\prime})\) corresponds to the first node label at which the wrong process differs from the correct process, so we call this the first differing point.)

In more detail, assuming we know this \((i^{\prime},j^{\prime})\), we proceed by discussing several cases. First of all, if \((i^{\prime},j^{\prime})\) corresponds to a leaf, then one can query \(f\) to figure out which of \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) is consistent with the corresponding block in \(f\). Now we can assume \((i^{\prime},j^{\prime})\) corresponds to an intermediate node.
Since \((i^{\prime},j^{\prime})\) is the first differing point, we know that \(v^{1}_{i^{\prime}+1,2j^{\prime}}\circ v^{1}_{i^{\prime}+1,2j^{\prime}+1}=v^{2}_{i^{\prime}+1,2j^{\prime}}\circ v^{2}_{i^{\prime}+1,2j^{\prime}+1}\) (we let this string be \(\alpha\) for convenience). By the definition of \(\mathsf{History}(C,f)\), it follows that the correct \(v_{i^{\prime},j^{\prime}}\) should be uniquely determined by \(\alpha\), which means the selector only needs to read \(\alpha\), \(v^{1}_{i^{\prime},j^{\prime}}\), and \(v^{2}_{i^{\prime},j^{\prime}}\), and can then be implemented by a somewhat tedious case analysis (so it is local). We refer readers to the proof of Lemma 5.5 for the details and only highlight the most illuminating case here: if both \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) are good (we say a string \(\gamma\) is good if \(\gamma\neq\bot\) and \(C(\gamma)=\alpha\)), we select the lexicographically smaller one. To handle the \((i_{\star},j_{\star})\) part, one needs some additional case analysis. We omit the details here and refer the reader to the proof of Lemma 5.5. The takeaway here is that if we can find the first differing label \((i^{\prime},j^{\prime})\), then we can construct the selector \(V_{\mathsf{select}}\) and hence the desired single-valued \(\mathsf{FS_{2}BPP}\) algorithm.

**Encoded history.** However, the above assumes the knowledge of \((i^{\prime},j^{\prime})\). In general, if one is only given oracle access to \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\), there is no \(\operatorname{poly}(n)\)-time oracle algorithm computing \((i^{\prime},j^{\prime})\), because there might be exponentially many nodes. To resolve this issue, we will encode \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) via Reed-Muller codes. Formally, recall that \(\mathsf{History}(C,f)\) is the concatenation of \((i_{\star},j_{\star})\) and the string \(S\), where \(S\) is the concatenation of all the labels on the binary tree. We now define the encoded history, denoted by \(\widetilde{\mathsf{History}}(C,f)\), as the concatenation of \((i_{\star},j_{\star})\) and _a Reed-Muller encoding_ of \(S\). The new selector is given oracle access to two candidate encoded histories together with \(f\). By applying low-degree tests and self-correction of polynomials, we can assume that the Reed-Muller parts of the two candidates are indeed low-degree polynomials. Then we can use a reduction to polynomial identity testing to compute the first differing point between \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) in randomized polynomial time. See the proof of Lemma 5.3 for the details. This part is similar to the selector construction from [10].

### Discussions

We conclude the introduction by discussing some related works.

#### 1.4.1 Previous approach: Karp-Lipton collapses and the half-exponential barrier

In the following, we elaborate on the half-exponential barrier mentioned earlier in the introduction.20 Let \(\mathcal{C}\) be a "typical" uniform complexity class containing \(\mathsf{P}\). A _Karp-Lipton collapse_ to \(\mathcal{C}\) states that if a large class (say \(\mathsf{EXP}\)) has polynomial-size circuits, then this class collapses to \(\mathcal{C}\). For example, there is a Karp-Lipton collapse to \(\mathcal{C}=\Sigma_{2}\mathsf{P}\):

Footnote 20: A function \(f\colon\mathbb{N}\to\mathbb{N}\) is _sub-half-exponential_ if \(f(f(n)^{c})=2^{o(n)}\) for every constant \(c\geq 1\), i.e., composing \(f\) twice yields a sub-exponential function.
For example, for constants \(c\geq 1\) and \(\varepsilon>0\), the functions \(f(n)=n^{c}\) and \(f(n)=2^{\log^{c}n}\) are sub-half-exponential, but the functions \(f(n)=2^{n^{\varepsilon}}\) and \(f(n)=2^{\varepsilon n}\) are not.

Suppose \(\mathsf{EXP}\subseteq\mathsf{P}/_{\operatorname{poly}}\); then \(\mathsf{EXP}=\Sigma_{2}\mathsf{P}\). ([11], attributed to Albert Meyer)

Now, assuming that \(\mathsf{EXP}\subseteq\mathsf{P}/_{\operatorname{poly}}\implies\mathsf{EXP}=\mathcal{C}\), the following win-win analysis implies that \(\mathcal{C}\)-\(\mathsf{EXP}\), the exponential-time version of \(\mathcal{C}\), is not in \(\mathsf{P}/_{\operatorname{poly}}\): (1) if \(\mathsf{EXP}\not\subset\mathsf{P}/_{\operatorname{poly}}\), then of course \(\mathcal{C}\text{-}\mathsf{EXP}\supseteq\mathsf{EXP}\) does not have polynomial-size circuits; (2) otherwise \(\mathsf{EXP}\subseteq\mathsf{P}/_{\mathrm{poly}}\). We have \(\mathsf{EXP}=\mathcal{C}\), and by padding, \(\mathsf{EEXP}=\mathcal{C}\text{-}\mathsf{EXP}\). Since \(\mathsf{EEXP}\) contains a function of maximum circuit complexity by direct diagonalization, it follows that \(\mathcal{C}\text{-}\mathsf{EXP}\) does not have polynomial-size circuits.

Karp-Lipton collapses are known for the classes \(\Sigma_{2}\mathsf{P}\) [11], \(\mathsf{ZPP}^{\mathsf{NP}}\) [12], \(\mathsf{S}_{2}\mathsf{P}\) [13] (attributed to Samik Sengupta), \(\mathsf{PP}\), \(\mathsf{MA}\) [14, 15], and \(\mathsf{ZPP}^{\mathsf{MCSP}}\) [16]. All the aforementioned super-polynomial circuit lower bounds for \(\Sigma_{2}\mathsf{EXP}\), \(\mathsf{ZPEXP}^{\mathsf{NP}}\), \(\mathsf{S}_{2}\mathsf{EXP}\), \(\mathsf{PEXP}\), \(\mathsf{MAEXP}\), and \(\mathsf{ZPEXP}^{\mathsf{MCSP}}\) are proven in this way.21

Footnote 21: There is some evidence that Karp–Lipton collapses are essential for proving circuit lower bounds [10].

**The half-exponential barrier.** The above argument is very successful at proving various super-polynomial lower bounds. However, a closer look shows that it is only capable of proving _sub-half-exponential_ circuit lower bounds. Indeed, suppose we want to show that \(\mathcal{C}\text{-}\mathsf{EXP}\) does not have circuits of size \(f(n)\). We will have to perform the following win-win analysis:

* if \(\mathsf{EXP}\not\subset\mathsf{SIZE}[f(n)]\), then of course \(\mathcal{C}\text{-}\mathsf{EXP}\supseteq\mathsf{EXP}\) does not have circuits of size \(f(n)\);
* if \(\mathsf{EXP}\subseteq\mathsf{SIZE}[f(n)]\), then (a scaled-up version of) the Karp-Lipton collapse implies that \(\mathsf{EXP}\) can be computed by a \(\mathcal{C}\) machine in \(\mathrm{poly}(f(n))\) time. Note that \(\mathsf{TIME}[2^{\mathrm{poly}(f(n))}]\) does not have circuits of size \(f(n)\) by direct diagonalization. By padding, \(\mathsf{TIME}[2^{\mathrm{poly}(f(n))}]\) can be computed by a \(\mathcal{C}\) machine in \(\mathrm{poly}(f(\mathrm{poly}(f(n))))\) time. Therefore, if \(f\) is sub-half-exponential (meaning \(f(\mathrm{poly}(f(n)))=2^{o(n)}\)), then \(\mathcal{C}\text{-}\mathsf{EXP}\) does not have circuits of size \(f(n)\).

Intuitively speaking, the two cases above are _competing with each other_: we cannot get exponential lower bounds in both cases.

#### 1.4.2 Implications for the Missing-String problem?

In the Missing-String problem, we are given a list of \(m\) strings \(x_{1},x_{2},\ldots,x_{m}\in\{0,1\}^{n}\) where \(m<2^{n}\), and the goal is to output any length-\(n\) string \(y\) that does not appear in \(\{x_{1},x_{2},\ldots,x_{m}\}\).
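As a warm-up, Missing-String is easy for _polynomial-time_ algorithms; the point of the theorem below is the far more restrictive regime of uniform depth-\(3\) \(\mathsf{AC}^{0}\) circuits. Here is a simple recursive Python sketch (our own illustration): since \(m<2^{n}\), at least one half of the cube \(\{0,1\}^{n}\) contains fewer than \(2^{n-1}\) of the given strings, and we recurse into that half.

```python
def missing_string(xs, n):
    """Return an n-bit string not in xs, where len(xs) < 2**n."""
    if n == 0:
        return ''  # len(xs) < 1 forces xs == [], so '' is missing
    zeros = [x[1:] for x in xs if x[0] == '0']  # suffixes in the 0-half
    ones = [x[1:] for x in xs if x[0] == '1']   # suffixes in the 1-half
    if len(zeros) < 2 ** (n - 1):               # the 0-half is not full
        return '0' + missing_string(zeros, n - 1)
    return '1' + missing_string(ones, n - 1)    # else the 1-half is not full

print(missing_string(['000', '001', '010', '011', '100'], 3))  # -> '101'
```

The recursion maintains the invariant that the current list is shorter than the current cube (if the \(0\)-half holds at least \(2^{n-1}\) strings, the \(1\)-half holds fewer than \(2^{n-1}\)), and it runs in \(O(mn)\) time.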
Vyas and Williams [21] connected the circuit complexity of Missing-String with the (relativized) circuit complexity of \(\Sigma_{2}\mathsf{E}\):

**Theorem 1.8** ([21, Theorem 32], Informal).: _The following are equivalent:_

* \(\Sigma_{2}\mathsf{E}^{A}\not\subset\mathrm{i.o.\text{-}SIZE}^{A}[2^{\Omega(n)}]\) _for every oracle_ \(A\)_;_
* _for_ \(m=2^{\Omega(n)}\)_, the Missing-String problem can be solved by a uniform family of size-\(2^{O(n)}\) depth-\(3\) \(\mathsf{AC}^{0}\) circuits._

The intuition behind Theorem 1.8 is roughly as follows. For every oracle \(A\), the set of truth tables with low \(A\)-oracle circuit complexity induces an instance of Missing-String, and solving this instance gives us a hard truth table relative to \(A\). If the algorithm for Missing-String is a uniform \(\mathsf{AC}^{0}\) circuit of depth \(3\), then the hard function is inside \(\Sigma_{2}\mathsf{E}^{A}\).

However, despite our Theorem 1.2 being completely relativizing, it does not seem to imply any non-trivial depth-\(3\) \(\mathsf{AC}^{0}\) circuit for Missing-String. The reason is the heavy win-win analysis _across multiple input lengths_: for each \(0\leq i<t\), we have a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) construction algorithm for hard truth tables relative to oracle \(A\) on input length \(n_{i}\), but this algorithm needs access to \(A_{n_{i+1}}\), _a higher input length of \(A\)_. Translating this into the language of Missing-String, we obtain a weird-looking depth-\(3\) \(\mathsf{AC}^{0}\) circuit that takes as input a _sequence_ of Missing-String instances \(\mathcal{I}_{n_{0}},\mathcal{I}_{n_{1}},\ldots,\mathcal{I}_{n_{t}}\) (where _each_ \(\mathcal{I}_{n_{i}}\subseteq\{0,1\}^{n_{i}}\) is a set of strings), looks at all of the instances (or, at least \(\mathcal{I}_{n_{i}}\) and \(\mathcal{I}_{n_{i+1}}\)), and outputs a purportedly missing string of \(\mathcal{I}_{n_{i}}\). It is guaranteed that for at least one input length \(i\), the output string is indeed a missing string of \(\mathcal{I}_{n_{i}}\). However, if our algorithm is only given one instance \(\mathcal{I}\subseteq\{0,1\}^{n}\), without assistance from a larger input length, it does not know how to find any missing string of \(\mathcal{I}\).

It remains an intriguing open problem whether the bullets in Theorem 1.8 are true or not. In other words, is there an oracle \(A\) relative to which \(\Sigma_{2}\mathsf{E}\) has small circuits _on infinitely many input lengths_?

### Organization

In Section 2, we introduce the necessary technical preliminaries for this paper. In Section 3, we review Korten's reduction from solving range avoidance to generating hard truth tables [13], together with some new properties required by our new results. In Section 4, we prove the near-maximum circuit lower bound for \(\Sigma_{2}\mathsf{E}\); although this lower bound is superseded by the later \(\mathsf{S}_{2}\mathsf{E}/_{1}\) lower bound, we nonetheless include it in the paper since its proof is much more elementary. In Section 5, we extend the near-maximum circuit lower bound to \(\mathsf{S}_{2}\mathsf{E}/_{1}\), and also present our new algorithms for solving the range avoidance problem.

## 2 Preliminaries

**Notation.** We use \([n]\) to denote \(\{1,2,\ldots,n\}\). A search problem \(\Pi\) maps every input \(x\in\{0,1\}^{*}\) into a solution set \(\Pi_{x}\subseteq\{0,1\}^{*}\). We say an algorithm \(A\) solves the search problem \(\Pi\) on input \(x\) if \(A(x)\in\Pi_{x}\).
### Complexity Classes

We assume basic familiarity with computational complexity theory (see, e.g., [1, 1] for references). Below we recall the definition of \(\mathsf{S}_{2}\mathsf{TIME}[T(n)]\) [14, 15].

**Definition 2.1**.: Let \(T\colon\mathbb{N}\to\mathbb{N}\). We say a language \(L\in\mathsf{S}_{2}\mathsf{TIME}[T(n)]\) if there exists an \(O(T(n))\)-time verifier \(V(x,\pi_{1},\pi_{2})\) that takes \(x\in\{0,1\}^{n}\) and \(\pi_{1},\pi_{2}\in\{0,1\}^{T(n)}\) as input, satisfying that

* if \(x\in L\), then there exists \(\pi_{1}\) such that for every \(\pi_{2}\), \(V(x,\pi_{1},\pi_{2})=1\), and
* if \(x\not\in L\), then there exists \(\pi_{2}\) such that for every \(\pi_{1}\), \(V(x,\pi_{1},\pi_{2})=0\).

Moreover, we say \(L\in\mathsf{S}_{2}\mathsf{E}\) if \(L\in\mathsf{S}_{2}\mathsf{TIME}[T(n)]\) for some \(T(n)\leq 2^{O(n)}\), and \(L\in\mathsf{S}_{2}\mathsf{P}\) if \(L\in\mathsf{S}_{2}\mathsf{TIME}[p(n)]\) for some polynomial \(p\).

It is known that \(\mathsf{S}_{2}\mathsf{P}\) contains \(\mathsf{MA}\) and \(\mathsf{P}^{\mathsf{NP}}\) [14], and \(\mathsf{S}_{2}\mathsf{P}\) is contained in \(\mathsf{ZPP}^{\mathsf{NP}}\) [1]. From its definition, it is also clear that \(\mathsf{S}_{2}\mathsf{P}\subseteq\Sigma_{2}\mathsf{P}\cap\Pi_{2}\mathsf{P}\).

### Single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) Algorithms

We consider the following definitions of single-valued algorithms, which correspond to circuit lower bounds for \(\Sigma_{2}\mathsf{E}\) and \(\mathsf{S}_{2}\mathsf{E}\).

**Definition 2.2** (Single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) algorithms).: A single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) is specified by a polynomial \(\ell(\cdot)\) together with a polynomial-time algorithm \(V_{A}(x,\pi_{1},\pi_{2})\). On an input \(x\in\{0,1\}^{*}\), we say that \(A\) outputs \(y_{x}\in\{0,1\}^{*}\) if the following hold:

1. There is a \(\pi_{1}\in\{0,1\}^{\ell(|x|)}\) such that for every \(\pi_{2}\in\{0,1\}^{\ell(|x|)}\), \(V_{A}(x,\pi_{1},\pi_{2})\) outputs \(y_{x}\).
2. For every \(\pi_{1}\in\{0,1\}^{\ell(|x|)}\), there is a \(\pi_{2}\in\{0,1\}^{\ell(|x|)}\) such that the output of \(V_{A}(x,\pi_{1},\pi_{2})\) is either \(y_{x}\) or \(\bot\) (where \(\bot\) indicates "I don't know").

A single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) is specified similarly, except that we replace the second condition above with the following:

2'. There is a \(\pi_{2}\in\{0,1\}^{\ell(|x|)}\) such that for every \(\pi_{1}\in\{0,1\}^{\ell(|x|)}\), \(V_{A}(x,\pi_{1},\pi_{2})\) outputs \(y_{x}\).

Now, we say that a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) (resp. \(\mathsf{FS}_{2}\mathsf{P}\)) algorithm \(A\) solves a search problem \(\Pi\) on input \(x\) if it outputs a string \(y_{x}\) and \(y_{x}\in\Pi_{x}\). Note that from Definition 2.2, if \(A\) outputs a string \(y_{x}\), then \(y_{x}\) is unique. For convenience, we mostly only consider single-valued algorithms \(A(x)\) with fixed output lengths, meaning that the output length \(|A(x)|\) only depends on \(|x|\) and can be computed in polynomial time given \(1^{|x|}\).22

Footnote 22: If \(A\) takes multiple inputs like \(x,y,z\), then the output length \(|A(x,y,z)|\) only depends on \(|x|,|y|,|z|\) and can be computed in polynomial time given \(1^{|x|}\), \(1^{|y|}\), and \(1^{|z|}\).
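To unpack the quantifier structure of Definition 2.2, the following Python sketch (our own illustration) recovers the output \(y_{x}\) of a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm by brute force over all witness pairs, with `None` playing the role of \(\bot\); this is only meaningful for toy witness lengths, since the actual definition concerns polynomial-time verifiers.

```python
from itertools import product

def sv_output(V, x, ell):
    """Return the unique y_x defined by verifier V on input x per
    Definition 2.2, or None if V defines no output on x."""
    proofs = list(product((0, 1), repeat=ell))
    for pi1 in proofs:
        outs = {V(x, pi1, pi2) for pi2 in proofs}
        if len(outs) == 1 and None not in outs:  # condition 1: pi1 forces y_x
            (y,) = outs
            # condition 2: every pi1' yields only y_x or bottom for some pi2
            if all(any(V(x, p1, p2) in (y, None) for p2 in proofs)
                   for p1 in proofs):
                return y
    return None

# toy verifier that ignores its proofs: trivially single-valued
V = lambda x, p1, p2: sum(x) % 2
print(sv_output(V, (1, 0, 1), ell=2))  # -> 0
```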
#### 2.1 Single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) algorithms with \(\mathsf{FP}^{\mathsf{NP}}\) post-processing

We also need the fact that single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) or \(\mathsf{FS}_{2}\mathsf{P}\) algorithms with \(\mathsf{FP}^{\mathsf{NP}}\) post-processing can still be implemented by single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) or \(\mathsf{FS}_{2}\mathsf{P}\) algorithms, respectively. More specifically, we have:

**Theorem 2.3**.: _Let \(A(x)\) be a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) (resp. \(\mathsf{FS}_{2}\mathsf{P}\)) algorithm and \(B(x,y)\) be an \(\mathsf{FP}^{\mathsf{NP}}\) algorithm, both with fixed output length. The function \(f(x)\coloneqq B(x,A(x))\) also admits a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) (resp. \(\mathsf{FS}_{2}\mathsf{P}\)) algorithm._

Proof.: We only provide a proof for the case of single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithms. Recall that the Lexicographically Maximum Satisfying Assignment problem (\(\mathsf{LMSAP}\)) is defined as follows: given an \(n\)-variable formula \(\phi\) together with an integer \(k\in[n]\), one needs to decide whether \(a_{k}=1\), where \(a=(a_{1},\ldots,a_{n})\in\{0,1\}^{n}\) is the lexicographically largest assignment satisfying \(\phi\). By [10], \(\mathsf{LMSAP}\) is \(\mathsf{P}^{\mathsf{NP}}\)-complete.

Let \(V_{A}(x,\pi_{1},\pi_{2})\) be the corresponding verifier for the single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\). Let \(L(x,y,i)\) be the \(\mathsf{P}^{\mathsf{NP}}\) language such that \(L(x,y,i)=1\) if and only if \(B(x,y)_{i}=1\), and let \(\ell=|B(x,y)|\) be the output length of \(B\). We now define a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(\widetilde{A}\) by defining the following verifier \(V_{\widetilde{A}}\), and argue that \(\widetilde{A}\) computes \(f\).

The verifier \(V_{\widetilde{A}}\) takes an input \(x\) and two proofs \(\vec{\pi}_{1}\) and \(\vec{\pi}_{2}\), where \(\vec{\pi}_{1}\) consists of \(\omega_{1}\), acting as the second argument to \(V_{A}\), and \(\ell\) assignments \(z_{1}^{1},z_{2}^{1},\ldots,z_{\ell}^{1}\in\{0,1\}^{m}\). Similarly, \(\vec{\pi}_{2}\) consists of \(\omega_{2}\) and \(z_{1}^{2},z_{2}^{2},\ldots,z_{\ell}^{2}\in\{0,1\}^{m}\). First, \(V_{\widetilde{A}}\) runs \(V_{A}(x,\omega_{1},\omega_{2})\) to get \(y\in\{0,1\}^{|A(x)|}\). Then it runs the reduction from \(L(x,y,i)\) to \(\mathsf{LMSAP}\) for every \(i\in[\ell]\) to obtain \(\ell\) instances \(\{(\phi_{i},k_{i})\}_{i\in[\ell]}\), where \(\phi_{i}\) is an \(m\)-variable formula and \(k_{i}\in[m]\). (Without loss of generality, by padding dummy variables, we may assume that the number of variables in \(\phi_{i}\) is the same \(m\) for each \(i\), and that \(m\) only depends on \(|x|\) and \(|y|\).)

Now, for every \(\mu\in[2]\), we can define an answer \(w_{\mu}\in\{0,1\}^{\ell}\) by \((w_{\mu})_{i}=(z_{i}^{\mu})_{k_{i}}\) (i.e., the value of \(B(x,y)\), assuming that \(\vec{\pi}_{\mu}\) consists of the lexicographically largest assignments for all the \(\mathsf{LMSAP}\) instances). In what follows, when we say that \(V_{\widetilde{A}}\) _selects_ the proof \(\mu\in[2]\), we mean that \(V_{\widetilde{A}}\) outputs \(w_{\mu}\) and terminates. Then \(V_{\widetilde{A}}\) works as follows:

1. For each \(\mu\in[2]\), it first checks whether for every \(i\in[\ell]\), \(z_{i}^{\mu}\) satisfies \(\phi_{i}\). If only one of the \(\mu\) passes all the checks, \(V_{\widetilde{A}}\) selects that \(\mu\).
   If none of them passes all the checks, \(V_{\widetilde{A}}\) selects \(1\). Otherwise, it continues to the next step.
2. Now, let \(Z^{\mu}=z_{1}^{\mu}\circ z_{2}^{\mu}\circ\cdots\circ z_{\ell}^{\mu}\) for each \(\mu\in[2]\). \(V_{\widetilde{A}}\) selects the \(\mu\) with the lexicographically larger \(Z^{\mu}\); if \(Z^{1}=Z^{2}\), then \(V_{\widetilde{A}}\) selects \(1\).

Now we claim that \(\widetilde{A}\) computes \(f(x)\), which can be established by setting \(\vec{\pi}_{1}\) or \(\vec{\pi}_{2}\) to be the corresponding proof for \(V_{A}\) concatenated with all lexicographically largest assignments for the \(\{\phi_{i}\}_{i\in[\ell]}\).

### The Range Avoidance Problem

The _range avoidance_ problem [11, 12, 13] is the following problem: given as input a circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{\ell}\) where \(\ell>n\), find any string \(y\in\{0,1\}^{\ell}\setminus\mathrm{Range}(C)\).

Proving circuit lower bounds (for exponential-time classes) is equivalent to solving the range avoidance problem on the _truth table generator_ \(\mathsf{TT}_{n,s}\), defined as follows. It was shown in [13] that for \(n,s\in\mathbb{N}\), any \(s\)-size \(n\)-input circuit \(C\) can be encoded as a _stack program_ with description size \(L_{n,s}:=(s+1)(7+\log(n+s))\). The precise definition of stack programs does not matter (see [13] for a formal definition); the only property we need is that given \(s\) and \(n\) such that \(n\leq s\leq 2^{n}\), in \(\mathrm{poly}(2^{n})\) time one can construct a circuit \(\mathsf{TT}_{n,s}\colon\{0,1\}^{L_{n,s}}\to\{0,1\}^{2^{n}}\) mapping the description of a stack program to its truth table. By the equivalence between stack programs and circuits, it follows that any \(f\in\{0,1\}^{2^{n}}\setminus\mathrm{Range}(\mathsf{TT}_{n,s})\) satisfies \(\mathsf{SIZE}(f)>s\). Also, we note that for large enough \(n\in\mathbb{N}\) and \(s=2^{n}/n\), we have \(L_{n,s}<2^{n}\).

**Fact 2.4**.: _Let \(s\colon\mathbb{N}\to\mathbb{N}\). Suppose that there is a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) such that for infinitely many \(n\in\mathbb{N}\), \(A(1^{2^{n}})\) takes \(\alpha(n)\) bits of advice and outputs a string \(f_{n}\in\{0,1\}^{2^{n}}\setminus\mathrm{Range}(\mathsf{TT}_{n,s(n)})\). Then \(\mathsf{S}_{2}\mathsf{E}/_{\alpha(n)}\not\subset\mathsf{SIZE}[s(n)]\)._

Proof sketch.: We define a language \(L\) such that the truth table of the characteristic function of \(L\cap\{0,1\}^{n}\) is \(A(1^{2^{n}})\). It is easy to see that \(L\notin\mathsf{SIZE}[s(n)]\) and \(L\in\mathsf{S}_{2}\mathsf{E}/_{\alpha(n)}\).

## 3 Korten's Reduction

Our results crucially rely on a reduction in [12] showing that proving circuit lower bounds is "the hardest explicit construction" under \(\mathsf{P}^{\mathsf{NP}}\) reductions.

**Notation.** Let \(s\) be a string of length \(n\). We will always use \(0\)-indexing (i.e., the first bit of \(s\) is \(s_{0}\) and the last bit of \(s\) is \(s_{n-1}\)). For \(i<j\), we use \(s_{[i,j]}\) to denote the substring of \(s\) from the \(i\)-th bit to the \(j\)-th bit, and \(s_{[i,j)}\) to denote the substring of \(s\) from the \(i\)-th bit to the \((j-1)\)-th bit. (We will actually use the notation \(s_{[i,j)}\) more often than \(s_{[i,j]}\), as it is convenient when we describe the GGM tree.) We also use \(s_{1}\circ s_{2}\circ\cdots\circ s_{k}\) to denote the concatenation of \(k\) strings.

### GGM Tree and the Reduction

We first recall the GGM tree construction from [1], which is used in a crucial way by [12].
**Definition 3.1** (The GGM tree construction [14]).: Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit. Let \(n,T\in\mathbb{N}\) be such that \(T\geq 4n\), and let \(k\) be the smallest integer such that \(2^{k}n\geq T\). The function \(\mathsf{GGM}_{T}[C]\colon\{0,1\}^{n}\to\{0,1\}^{T}\) is defined as follows.

Consider a perfect binary tree with \(2^{k}\) leaves, where the root is on level \(0\) and the leaves are on level \(k\). Each node is assigned a binary string of length \(n\); for \(0\leq j<2^{i}\), denote by \(v_{i,j}\in\{0,1\}^{n}\) the value assigned to the \(j\)-th node on level \(i\). Let \(x\in\{0,1\}^{n}\). We perform the following computation to obtain \(\mathsf{GGM}_{T}[C](x)\): we set \(v_{0,0}:=x\), and for each \(0\leq i<k\) and \(0\leq j<2^{i}\), we set \(v_{i+1,2j}:=C(v_{i,j})_{[0,n)}\) (i.e., the first half of \(C(v_{i,j})\)) and \(v_{i+1,2j+1}:=C(v_{i,j})_{[n,2n)}\) (i.e., the second half of \(C(v_{i,j})\)). (We say the nodes \((i+1,2j)\) and \((i+1,2j+1)\) are the "children" of \((i,j)\).) Finally, we concatenate all values of the leaves and take the first \(T\) bits as the output:

\[\mathsf{GGM}_{T}[C](x):=(v_{k,0}\circ v_{k,1}\circ\dots\circ v_{k,2^{k}-1})_{[0,T)}.\]

**Lemma 3.2** (The output of the GGM tree has a small circuit).: _Let \(\mathsf{GGMEval}(C,T,x,i)\) denote the \(i\)-th bit of \(\mathsf{GGM}_{T}[C](x)\). There is an algorithm running in \(\widetilde{O}(|C|\cdot\log T)\) time that, given \(C,T,x,i\), outputs \(\mathsf{GGMEval}(C,T,x,i)\)._

Proof Sketch.: We first note that to compute the \(i\)-th bit of \(\mathsf{GGM}_{T}[C](x):=(v_{k,0}\circ v_{k,1}\circ\dots\circ v_{k,2^{k}-1})_{[0,T)}\), it suffices to compute \(v_{k,\lfloor i/n\rfloor}\). Computing \(v_{k,\lfloor i/n\rfloor}\) can be done by descending from the root of the GGM tree to the leaf \((k,\lfloor i/n\rfloor)\), which takes \(\widetilde{O}(|C|\cdot\log T)\) time.

It is shown in [15] that the range avoidance problem for \(C\) reduces to the range avoidance problem for \(\mathsf{GGM}_{T}[C]\). In what follows, we review this proof, during which we also define the _computational history_ of "solving range avoidance of \(C\) from \(\mathsf{GGM}_{T}[C]\)", which will be crucial in our main proof.

```
Input:  C: {0,1}^n -> {0,1}^{2n}, the input circuit, and
        f ∈ {0,1}^T \ Range(GGM_T[C]), the input "hard" truth table
Output: a non-output of C
Data:   the computational history of Korten(C, f): a pair (i*, j*) and an
        array {v_{i,j}} where i ∈ {0,1,...,k} and j ∈ {0,1,...,2^i − 1}
```
```
 1  k ← ⌈log₂(T/n)⌉
 2  append f with 2^k·n − |f| zeros at the end
 3  for j ← 0 to 2^k − 1 do
 4      v_{k,j} ← f_{[jn,(j+1)n)}            // the j-th "block" of f
 5  for i ← k−1 downto 0 do
 6      for j ← 2^i − 1 downto 0 do
 7          v_{i,j} ← the lexicographically smallest string
                      in C⁻¹(v_{i+1,2j} ∘ v_{i+1,2j+1})
                      // note: this step invokes the NP oracle
 8          if v_{i,j} does not exist then
 9              for every (i′,j′) with v_{i′,j′} not set yet, set v_{i′,j′} ← ⊥
10              set i* := i and j* := j
11              return v_{i+1,2j} ∘ v_{i+1,2j+1}
12  return ⊥
```

**Algorithm 3.1** \(\mathsf{Korten}(C,f)\): Korten's reduction

**Lemma 3.3** (Reduction from solving range avoidance of \(C\) to solving range avoidance of \(\mathsf{GGM}_{T}[C]\)).: _Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit. Let \(f\) be a non-output of \(\mathsf{GGM}_{T}[C]\), i.e., \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Then \(\mathsf{Korten}(C,f)\) (as defined in Algorithm 3.1) outputs a non-output of \(C\) in deterministic \(\operatorname{poly}(T,n)\) time with an \(\mathsf{NP}\) oracle._

Proof Sketch.: The running time of \(\mathsf{Korten}(C,f)\) follows directly from its description. Also, note that whenever \(\mathsf{Korten}(C,f)\) outputs a string \(v_{i+1,2j}\circ v_{i+1,2j+1}\in\{0,1\}^{2n}\), this string is not in the range of \(C\). Therefore, it suffices to show that when \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\), \(\mathsf{Korten}(C,f)\) does not return \(\bot\). Assume, towards a contradiction, that \(\mathsf{Korten}(C,f)\) returns \(\bot\). This means that all the values \(\{v_{i,j}\}_{i,j}\) are set. It follows from the algorithm description that \(f=\mathsf{GGM}_{T}[C](v_{0,0})\), which contradicts the assumption that \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\).

In addition, we observe the following trivial fact:

**Fact 3.4**.: _Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit, \(T:=2^{2n}\cdot 2n\), and let \(f\) be the concatenation of all length-\(2n\) strings (which has length \(T\)). Then \(f\not\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\)._

(To see this, note that every aligned length-\(2n\) block of a string in \(\operatorname{Range}(\mathsf{GGM}_{T}[C])\) is the value \(C(v_{k-1,j})\) of two sibling leaves, and hence lies in \(\operatorname{Range}(C)\); but the blocks of \(f\) enumerate all \(2^{2n}\) strings of length \(2n\), while \(|\operatorname{Range}(C)|\leq 2^{n}\).)

One can combine Fact 3.4 with Lemma 3.3 to obtain a brute-force algorithm that solves the range avoidance problem in \(2^{O(n)}\) time with an \(\mathsf{NP}\) oracle. Essentially, this brute-force algorithm tests every possible length-\(2n\) string against the range of the circuit. It will be the basis of our win-win analysis in Section 4.

Finally, we give the following remark, showing that Korten's reduction relativizes.

_Remark 3.5_.: Algorithm 3.1 and Lemma 3.3 _relativize_, in the sense that if the input is actually an oracle circuit \(C^{O}\) for some arbitrary oracle, the algorithm still works, except that now it needs to call an \(\mathsf{NP}^{O}\) oracle to find the lexicographically smallest string in \(C^{-1}(v_{i+1,2j}\circ v_{i+1,2j+1})\).

### \(\Pi_{1}\) Verification of the History of \(\mathsf{Korten}(C,f)\)

In what follows, we say that \((i,j)<(i^{\prime},j^{\prime})\) if either \(i<i^{\prime}\), or \(i=i^{\prime}\) and \(j<j^{\prime}\) (that is, we consider the lexicographical order on pairs). Observe that Algorithm 3.1 processes all the pairs \((i,j)\) in the reverse lexicographic order.
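Before turning to the verification of this history, it may help to see Definition 3.1 and Algorithm 3.1 run end to end. The sketch below is our illustration only: the circuit \(C\) is modeled as a Python function on bit-strings, the \(\mathsf{NP}\)-oracle step of Algorithm 3.1 is replaced by brute-force preimage search (feasible only for toy \(n\)), and the bookkeeping of \((i_{\star},j_{\star})\) is omitted.

```python
from math import ceil, log2

def ggm(C, n: int, T: int, x: str) -> str:
    """GGM_T[C](x): expand seed x down a perfect binary tree (Def. 3.1)."""
    k = ceil(log2(T / n))
    level = [x]
    for _ in range(k):
        level = [half for v in level for half in (C(v)[:n], C(v)[n:])]
    return "".join(level)[:T]

def korten(C, n: int, f: str):
    """Korten(C, f): recover a non-output of C from f outside Range(GGM_T[C]).

    Brute-force preimage search stands in for the NP oracle of Algorithm 3.1.
    """
    k = ceil(log2(len(f) / n))
    f = f.ljust(2 ** k * n, "0")
    v = {(k, j): f[j * n:(j + 1) * n] for j in range(2 ** k)}  # leaf values
    for i in range(k - 1, -1, -1):           # levels, bottom-up
        for j in range(2 ** i - 1, -1, -1):  # reverse lexicographic order
            target = v[(i + 1, 2 * j)] + v[(i + 1, 2 * j + 1)]
            pre = [format(x, f"0{n}b") for x in range(2 ** n)
                   if C(format(x, f"0{n}b")) == target]
            if not pre:
                return target          # a string outside Range(C)
            v[(i, j)] = min(pre)       # lexicographically smallest preimage
    return None                        # f was in Range(GGM_T[C]) after all

# Toy stretching "circuit": duplicate the seed (n = 2, so Range(C) = {xx}).
C = lambda x: x + x
f0 = "".join(format(x, "04b") for x in range(16))  # all length-4 strings
print(korten(C, 2, f0))  # some y in {0,1}^4 \ Range(C), here "1110"
```

Combined with Fact 3.4, calling `korten` on the concatenation of all length-\(2n\) strings (as in the last two lines) is exactly the brute-force \(2^{O(n)}\)-time range-avoidance algorithm mentioned above.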
**Definition 3.6** (The computational history of \(\mathsf{Korten}(C,f)\)).: Let \(n,T\in\mathbb{N}\) be such that \(\log T\leq n\leq T\). Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit, and \(f\in\{0,1\}^{T}\) be a "hard truth table" in the sense that \(f\not\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\). The _computational history_ of \(\mathsf{Korten}(C,f)\), denoted as

\[\mathsf{History}(C,f),\]

consists of \((i_{\star},j_{\star})\), as well as the concatenation of \(v_{i,j}\) for every \(0\leq i\leq k\) and \(0\leq j<2^{i}\), in the lexicographical order of \((i,j)\) (here \((i_{\star},j_{\star})\) and the \(v_{i,j}\) are defined in Algorithm 3.1). Each \(v_{i,j}\) is encoded by \(n+1\) bits \(\mathsf{enc}(v_{i,j})\), where if \(v_{i,j}\in\{0,1\}^{n}\) then \(\mathsf{enc}(v_{i,j})=0\circ v_{i,j}\), and if \(v_{i,j}=\bot\) then \(\mathsf{enc}(v_{i,j})=1^{n+1}\). The length of this history is at most \((2^{k+1}-1)(n+1)+2\log T\leq 5T\), and for convenience we always pad zeros at the end so that its length becomes exactly \(5T\).

The following lemma summarizes the properties of the computational history construction above required for the \(\Sigma_{2}\mathsf{E}\) lower bound in the next section.

**Lemma 3.7**.: _Let \(n,T\in\mathbb{N}\) be such that \(\log T\leq n\leq T\). Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit and \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Let \(h\coloneqq\mathsf{History}(C,f)\) and \(z\coloneqq\mathsf{Korten}(C,f)\)._

1. **(history contains input/output)** _There is a \(\operatorname{poly}(\log T)\)-time one-query oracle algorithm \(\mathsf{Input}\) and an \(O(n)\)-time oracle algorithm \(\mathsf{Output}\), both having input parameters \(T,n\) and taking a string \(\tilde{h}\in\{0,1\}^{5T}\) as oracle, such that the following hold:_
   1. _When given \(h\) as the oracle, \(\mathsf{Input}_{T,n}\) takes an additional input \(i\in\{0,1,\ldots,T-1\}\) and outputs \(f_{i}\)._
   2. _When given \(h\) as the oracle, \(\mathsf{Output}_{T,n}\) outputs \(z=\mathsf{Korten}(C,f)\)._
2. **(\(\Pi_{1}\) verification of the history)** _There is an oracle algorithm \(V\) with input parameters \(T,n\) such that the following hold:_
   1. \(V\) _takes \(\tilde{f}\in\{0,1\}^{T}\) and \(\tilde{h}\in\{0,1\}^{5T}\) as oracles, and \(C\) and \(w\in\{0,1\}^{5\cdot(\log T+n)}\) as inputs. It runs in \(\operatorname{poly}(n)\) time._
   2. \(h=\mathsf{History}(C,f)\) _is the unique string \(\tilde{h}\in\{0,1\}^{5T}\) satisfying the following:_ \[V^{f,\tilde{h}}(C,w)=1\qquad\text{for every }w\in\{0,1\}^{5\cdot(\log T+n)}.\]

Proof.: From the definition of \(\mathsf{History}(C,f)\), the constructions of \(\mathsf{Input}_{T,n}\) and \(\mathsf{Output}_{T,n}\) are straightforward. Now we describe the verifier \(V^{f,\tilde{h}}\), where \(f\in\{0,1\}^{T}\) and \(\tilde{h}\in\{0,1\}^{5T}\). Note that here we fix the first oracle of \(V\) to be the input truth table \(f\), while the second oracle \(\tilde{h}\) can be any string from \(\{0,1\}^{5T}\).

First, \(V\) reads \((i_{\star},j_{\star})\) from \(\tilde{h}\). Note that the rest of \(\tilde{h}\) can be parsed as an array \(\{v_{i,j}\}_{i,j}\), where \(i\in\{0,1,\ldots,k\}\) and \(j\in\{0,1,\ldots,2^{i}-1\}\). We will think of \(V\) as performing at most \(2^{|w|}\) checks, each of which _passes_ or _fails_. To show the second item of the lemma, we need to show that (1) if a string \(\tilde{h}\) passes all the checks, then it must be the case that \(\tilde{h}=h\); and (2) \(h\) passes all the checks.
Specifically, \(V\) checks \(\tilde{h}\) as follows:

* The values written on the leaves of \(\{v_{i,j}\}\) are indeed \(f\). That is, for every \(j\in\{0,1,\ldots,2^{k}-1\}\), check that \(v_{k,j}\) is consistent with the corresponding block in \(f\).
* For every \((i,j)>(i_{\star},j_{\star})\) such that \(i<k\), \(C(v_{i,j})=v_{i+1,2j}\circ v_{i+1,2j+1}\). (That is, the value \(v_{i,j}\) is consistent with its two children.)
* For every \((i,j)>(i_{\star},j_{\star})\) such that \(i<k\), and for every \(x\in\{0,1\}^{n}\) that is lexicographically smaller than \(v_{i,j}\), \(C(x)\neq v_{i+1,2j}\circ v_{i+1,2j+1}\). (That is, the value \(v_{i,j}\) is the lexicographically first preimage of its two children.)
* For every \(x\in\{0,1\}^{n}\), \(C(x)\neq v_{i_{\star}+1,2j_{\star}}\circ v_{i_{\star}+1,2j_{\star}+1}\). (That is, the two children of \((i_{\star},j_{\star})\) form a non-output of \(C\); by the previous checks, \((i_{\star},j_{\star})\) is the lexicographically largest such pair.)
* For every \((i,j)\leq(i_{\star},j_{\star})\), \(v_{i,j}=\bot\).

Note that the above can be implemented with a universal (\(\forall\)) quantification over at most \(5\cdot(\log T+n)\) bits. First, one can see that by the definition of the correct history \(h\) (Definition 3.6), \(h\) passes all the checks above. Second, one can indeed see that all the conditions above _uniquely determine_ \(h\), and therefore any \(\tilde{h}\) passing all the checks must equal \(h\).

Again, it is easy to observe that Definition 3.6 and Lemma 3.7 relativize.

_Remark 3.8_.: Definition 3.6 and Lemma 3.7 _relativize_, in the sense that if \(C\) is an oracle circuit \(C^{O}\) for some arbitrary oracle, then Definition 3.6 needs no modification since Algorithm 3.1 relativizes, and Lemma 3.7 holds with the only modification that \(V\) now also needs to take \(O\) as an oracle (since it needs to evaluate \(C\)).

## 4 Circuit Lower Bounds for \(\Sigma_{2}\mathsf{E}\)

In this section, we prove our near-maximum circuit lower bounds for \(\Sigma_{2}\mathsf{E}\) by providing a new single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm for Avoid. Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. We show that there is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) that, on input \(1^{n}\), outputs a canonical string that is outside the range of \(C_{n}\) for infinitely many \(n\in\mathbb{N}\).

**Theorem 4.1**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. There is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(n\in\mathbb{N}\), \(A(1^{n})\) outputs \(y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C_{n})\)._

Proof.: We begin with some notation.

**Notation.** Let \(n^{(1)}\) be a large enough power of \(2\), and let \(n^{(\ell)}=2^{2^{n^{(\ell-1)}}}\) for each integer \(\ell>1\). Let \(n_{0}^{(\ell)}=n^{(\ell)}\), and let \(t^{(\ell)}=O(\log n_{0}^{(\ell)})\) be a parameter that we set later. For each \(1\leq i\leq t^{(\ell)}\), let \(n_{i}^{(\ell)}:=(n_{i-1}^{(\ell)})^{10}\). To show that our algorithm \(A\) works on infinitely many input lengths, we will show that for every \(\ell\in\mathbb{N}\), there is an input length \(n_{i}^{(\ell)}\) for some \(i\in\{0,1,\ldots,t^{(\ell)}\}\) on which \(A\) works. Fix \(\ell\in\mathbb{N}\).
From now on, for convenience, we will use \(n_{i}\) and \(t\) to denote \(n_{i}^{(\ell)}\) and \(t^{(\ell)}\), respectively.

**Specifying \(T_{i}\) and \(f_{i}\).** For each input length \(n_{i}\), we will specify a parameter \(T_{i}\in\mathbb{N}\) and a string \(f_{i}\in\{0,1\}^{T_{i}}\). Our win-win analysis is based on whether \(f_{i}\in\operatorname{Range}(\mathsf{GGM}_{T_{i}}[C_{n_{i}}])\) for each \(i\in\{0,1,\ldots,t\}\). Let \(T_{0}:=2^{2n_{0}}\cdot 2n_{0}\) and let \(f_{0}\) be the concatenation of all length-\(2n_{0}\) strings (which has length \(T_{0}\)). From Fact 3.4, we have \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\). For every \(i\in[t]\), we define

\[f_{i}:=\mathsf{History}(C_{n_{i-1}},f_{i-1}).\]

From Definition 3.6, this also means that we have set \(T_{i}=5\cdot T_{i-1}\) for every \(i\in[t]\). Let \(t\) be the first integer such that \(T_{t+1}\leq 4n_{t+1}\). Note that we have \(T_{i}=5^{i}\cdot T_{0}\leq 2^{3n_{0}+i\cdot\log 5}\) and \(n_{i}=(n_{0})^{10^{i}}=2^{\log n_{0}\cdot 10^{i}}\). Hence, we have \(t\leq O(\log n_{0})\). (Also note that \(n_{t}^{(\ell)}<n_{0}^{(\ell+1)}\).)

**Description of our \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\).** Now, let \(k\in\{0,1,\ldots,t\}\) be the largest integer such that \(f_{k}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{k}}[C_{n_{k}}])\). Since \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\), such a \(k\) must exist. Let \(z:=\mathsf{Korten}(C_{n_{k}},f_{k})\). It follows from Lemma 3.3 that \(z\) is not in the range of \(C_{n_{k}}\). Our single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) computes \(z\) on input \(1^{n_{k}}\) (see Definition 2.2). That is, for some \(\ell_{1},\ell_{2}\leq\operatorname{poly}(n_{k})\):

* there exists \(\pi_{1}\in\{0,1\}^{\ell_{1}}\) such that for every \(\pi_{2}\in\{0,1\}^{\ell_{2}}\), \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) prints \(z\), and
* for every \(\pi_{1}\in\{0,1\}^{\ell_{1}}\), there exists some \(\pi_{2}\in\{0,1\}^{\ell_{2}}\) such that \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) prints either \(z\) or \(\bot\).

In more detail, if \(k<t\), then \(V_{A}\) treats \(\pi_{1}\) as an input to the circuit \(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}]\), and lets

\[\hat{f}_{k+1}:=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{1}).\]

Here, the length of \(\pi_{1}\) is \(\ell_{1}:=n_{k+1}\leq\operatorname{poly}(n_{k})\). If \(k=t\), then \(V_{A}\) defines \(\hat{f}_{k+1}:=\pi_{1}\) and \(\ell_{1}:=T_{t+1}\leq\operatorname{poly}(n_{k})\). It is intended that \(\hat{f}_{k+1}=f_{k+1}=\mathsf{History}(C_{n_{k}},f_{k})\) (which \(V_{A}\) needs to verify). Note that in the case where \(k<t\), since \(f_{k+1}\in\operatorname{Range}(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}])\), there indeed exists some \(\pi_{1}\) such that \(\hat{f}_{k+1}=f_{k+1}\).

We note that Lemma 3.2 provides us "random access" to the (potentially very long) string \(\hat{f}_{k+1}\): given \(\pi_{1}\) and \(j\in[T_{k+1}]\), one can compute the \(j\)-th bit of \(\hat{f}_{k+1}\) in \(\operatorname{poly}(n_{k})\) time. Also recall from Lemma 3.7 that for each \(i\), \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) contains the string \(f_{i}\), which can be retrieved by the oracle algorithm \(\mathsf{Input}\) described in Item 1 of Lemma 3.7. Therefore, for each \(i\) from \(k\) downto \(1\), we can recursively define \(\hat{f}_{i}\) such that \((\hat{f}_{i})_{j}=\mathsf{Input}_{T_{i},n_{i}}^{\hat{f}_{i+1}}(j)\).
We define \(\hat{f}_{0}\) to be the concatenation of all length-\(2n_{0}\) strings in the lexicographical order, so \(\hat{f}_{0}=f_{0}\). Applying the algorithm \(\mathsf{Input}\) recursively, we obtain an algorithm that, given \(i\in\{0,1,\ldots,k\}\) and \(j\in\{0,1,\ldots,T_{i}-1\}\), outputs the \(j\)-th bit of \(\hat{f}_{i}\). Since \(\mathsf{Input}\) only makes one oracle query, this algorithm runs in \(\operatorname{poly}(n_{k})\) time.23

Footnote 23: Note that the definition of \(f_{0}\) is so simple that one can directly compute the \(j\)-th bit of \(f_{0}\) in \(\operatorname{poly}(n_{0})\) time.

Then, \(V_{A}\) parses the second proof \(\pi_{2}\) into \(\pi_{2}=(i,w)\), where \(i\in\{0,1,\ldots,k\}\) and \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\). Clearly, the length of \(\pi_{2}\) is at most \(\ell_{2}:=\log(k+1)+5(\log T_{k}+n_{k})\leq\operatorname{poly}(n_{k})\). Now, letting \(V_{\mathsf{History}}\) be the oracle algorithm in Item 2 of Lemma 3.7, we let \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) check whether the following holds:

\[V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1. \tag{1}\]

(Here \(V_{\mathsf{History}}\) also takes input parameters \(T_{i}\) and \(n_{i}\); we omit them in the subscript for notational convenience.) If this is true, then \(V_{A}\) outputs the string \(z:=\mathsf{Output}_{T_{k},n_{k}}^{\hat{f}_{k+1}}\), where \(\mathsf{Output}\) is the output oracle algorithm defined in Item 1 of Lemma 3.7. Otherwise, \(V_{A}\) outputs \(\bot\).

**The correctness of \(A\).** Before establishing the correctness of \(A\), we need the following claim:

**Claim 4.2**.: \(f_{k+1}=\hat{f}_{k+1}\) _if and only if the following holds:_

* \(V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1\) _for every_ \(i\in\{0,1,\ldots,k\}\) _and for every_ \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\)_._

Proof.: First, assume that \(f_{k+1}=\hat{f}_{k+1}\). By Item 1 of Lemma 3.7, we have that \(\hat{f}_{i}=f_{i}\) for every \(i\in\{0,1,\ldots,k+1\}\). Recall that by definition, \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) for every \(i\in\{0,1,\ldots,k\}\). Hence, by Item 2 of Lemma 3.7, we have that for every \(i\in\{0,1,\ldots,k\}\) and for every \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\), \(V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1\) holds.

For the other direction, suppose that for every \(i\in\{0,1,\ldots,k\}\) and \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\), we have that \(V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1\) holds. First recall that \(f_{0}=\hat{f}_{0}\) by definition. By an induction on \(i\in[k+1]\) and (the uniqueness part of) Item 2 of Lemma 3.7, it follows that \(f_{i}=\hat{f}_{i}\) for every \(i\in\{0,1,\ldots,k+1\}\). In particular, \(f_{k+1}=\hat{f}_{k+1}\). \(\diamond\)

Now we are ready to establish that \(A\) is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm computing \(z\) on input \(1^{n_{k}}\). We first prove the completeness of \(A\); i.e., there is a proof \(\pi_{1}\) such that for every \(\pi_{2}\), \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) outputs \(z=\mathsf{Korten}(C_{n_{k}},f_{k})\). We set \(\pi_{1}\) to be the following proof: if \(k<t\), then \(f_{k+1}\in\operatorname{Range}(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}])\), and we can set \(\pi_{1}\in\{0,1\}^{n_{k+1}}\) to be the input such that \(f_{k+1}=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{1})\); if \(k=t\), then we simply set \(\pi_{1}=f_{k+1}\).
Then, we have \(f_{k+1}=\hat{f}_{k+1}\), and by Claim 4.2, we know that \(V_{A}\) will output \(z=\mathsf{Korten}(C_{n_{k}},f_{k})\) on every proof \(\pi_{2}\).

Next, we show that for every \(\pi_{1}\), there is some \(\pi_{2}\) that makes \(V_{A}\) output either \(z\) or \(\bot\). It suffices to consider \(\pi_{1}\) such that for every \(\pi_{2}\), \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\neq\bot\). In this case, every invocation of Equation (1) holds, and thus by Claim 4.2 we know that \(f_{k+1}=\hat{f}_{k+1}\). It follows that \(\mathsf{Korten}(C_{n_{k}},f_{k})=z\) and \(V_{A}\) will output \(z\) regardless of \(\pi_{2}\).

Finally, we generalize \(A\) and \(V_{A}\) to work on all inputs \(1^{n}\). On input \(1^{n}\), \(V_{A}\) calculates the largest \(\ell\) such that \(n^{(\ell)}\leq n\), and also the largest \(k^{\prime}\) such that \(n^{(\ell)}_{k^{\prime}}\leq n\). If \(n^{(\ell)}_{k^{\prime}}\neq n\), then \(V_{A}\) immediately outputs \(\bot\) and halts. Otherwise, \(V_{A}\) receives an advice bit indicating whether \(k^{\prime}=k^{(\ell)}\), where \(k^{(\ell)}\) is the largest integer such that \(f^{(\ell)}_{k^{(\ell)}}\not\in\operatorname{Range}(\mathsf{GGM}_{T^{(\ell)}_{k^{(\ell)}}}[C_{n^{(\ell)}_{k^{(\ell)}}}])\). If this is the case, then \(V_{A}\) runs the verification procedure above; otherwise, it immediately outputs \(\bot\) and halts. It is easy to see that \(V_{A}\) runs in \(\operatorname{poly}(n)\) time, and that \(A\) is an infinitely-often single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm solving the range avoidance problem of \(\{C_{n}\}_{n\in\mathbb{N}}\).

From Remark 3.5 and Remark 3.8, one can observe that the proof above also relativizes. Hence we have the following as well.

**Theorem 4.3** (Relativized version of Theorem 4.1).: _Let \(\mathcal{O}\colon\{0,1\}^{*}\to\{0,1\}\) be any oracle. Let \(\{C^{\mathcal{O}}_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of \(\mathcal{O}\)-oracle circuits. There is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}^{\mathcal{O}}\) algorithm \(A^{\mathcal{O}}\) with one bit of advice such that for infinitely many \(n\in\mathbb{N}\), \(A^{\mathcal{O}}(1^{n})\) outputs \(y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C^{\mathcal{O}}_{n})\)._

We omit the proof of the following corollary since it is superseded by the results in the next section.

**Corollary 4.4**.: \(\Sigma_{2}\mathsf{E}\not\subseteq\mathsf{SIZE}[2^{n}/n]\) _and \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\not\subseteq\mathsf{SIZE}[2^{n}/n]\). Moreover, these results relativize: for every oracle \(\mathcal{O}\), \(\Sigma_{2}\mathsf{E}^{\mathcal{O}}\not\subseteq\mathsf{SIZE}^{\mathcal{O}}[2^{n}/n]\) and \((\Sigma_{2}\mathsf{E}^{\mathcal{O}}\cap\Pi_{2}\mathsf{E}^{\mathcal{O}})/_{1}\not\subseteq\mathsf{SIZE}^{\mathcal{O}}[2^{n}/n]\)._

## 5 Circuit Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)

In this section, we prove our near-maximum circuit lower bounds for \(\mathsf{S}_{2}\mathsf{E}/_{1}\) by giving a new single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm for Avoid.

### Reed-Muller Codes

To prove near-maximum circuit lower bounds for \(\mathsf{S}_{2}\mathsf{E}/_{1}\), we will need several standard tools for manipulating Reed-Muller (RM) codes (i.e., low-degree multivariate polynomials). For a polynomial \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\), where \(\mathbb{F}_{p}\) is the finite field of \(p\) elements, we use \(\deg_{\max}(P)\) to denote the maximum individual degree of variables in \(P\).
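Since this section uses two different degree notions (the maximum individual degree \(\deg_{\max}\) defining the RM code below, and a total degree bound passed to the self-corrector later on), a tiny sketch on an explicit monomial representation may help keep them apart. The dict-of-exponent-tuples format is a hypothetical representation of our own choosing, used purely for illustration.

```python
# A polynomial in m variables, represented as a dict mapping exponent
# tuples to nonzero coefficients.
# Example (m = 2): 3*x0^2*x1^2 + x1^3.
P = {(2, 2): 3, (0, 3): 1}

def deg_max(poly) -> int:
    """Maximum individual degree: largest exponent of any single variable."""
    return max(e for exps in poly for e in exps)

def total_deg(poly) -> int:
    """Total degree: max over monomials of the sum of exponents."""
    return max(sum(exps) for exps in poly)

print(deg_max(P), total_deg(P))  # -> 3 4
```

In particular, an \(m\)-variate polynomial with \(\deg_{\max}\leq\Delta-1\) has total degree at most \(m\cdot(\Delta-1)\), which is exactly the degree bound fed to the self-corrector \(\mathsf{PCorr}\) later in this section.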
Let \(p\) be a prime and \(\Delta,m\in\mathbb{N}\). For a string \(S\in\{0,1\}^{\Delta^{m}}\), we use \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) to denote its Reed-Muller encoding by extension: letting \(H=\{0,1,\ldots,\Delta-1\}\) and \(w_{1},\ldots,w_{\Delta^{m}}\in H^{m}\) be the enumeration of all elements of \(H^{m}\) in the lexicographical order, \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) is the unique polynomial \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) such that (1) \(P(w_{i})=S_{i}\) for every \(i\in[\Delta^{m}]\) and (2) \(\deg_{\max}(P)\leq\Delta-1\).25

Footnote 25: To see the uniqueness of \(P\), note that for every \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) with \(\deg_{\max}(P)\leq\Delta-1\), the restriction of \(P\) to \(H^{m}\) uniquely determines the polynomial \(P\). Also, such a \(P\) can be constructed by standard interpolation.

We also fix a Boolean encoding of \(\mathbb{F}_{p}\), denoted \(\mathsf{Enc}_{\mathbb{F}_{p}}\colon\mathbb{F}_{p}\to\{0,1\}^{\lceil\log p\rceil}\). For simplicity, we can just map \(z\in\{0,1,\ldots,p-1\}\) to its binary encoding. In particular, \(\mathsf{Enc}_{\mathbb{F}_{p}}(0)=0^{\lceil\log p\rceil}\) and \(\mathsf{Enc}_{\mathbb{F}_{p}}(1)=0^{\lceil\log p\rceil-1}\circ 1\).26 Now we further define \(\mathsf{BRM}_{\mathbb{F}_{p},\Delta,m}(S)\) by composing \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) with \(\mathsf{Enc}_{\mathbb{F}_{p}}\), thus obtaining a Boolean encoding again. Formally, letting \(P=\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) and \(w_{1},\ldots,w_{p^{m}}\in\mathbb{F}_{p}^{m}\) be the enumeration of all elements of \(\mathbb{F}_{p}^{m}\) in the lexicographic order, we define \(\mathsf{BRM}_{\mathbb{F}_{p},\Delta,m}(S)=\mathsf{Enc}_{\mathbb{F}_{p}}(P(w_{1}))\circ\mathsf{Enc}_{\mathbb{F}_{p}}(P(w_{2}))\circ\ldots\circ\mathsf{Enc}_{\mathbb{F}_{p}}(P(w_{p^{m}}))\). We remark that for every \(i\in[\Delta^{m}]\), in \(\operatorname{poly}(m,\log p)\) time one can compute an index \(i^{\prime}\in[p^{m}\cdot\lceil\log p\rceil]\) such that \(\mathsf{BRM}_{\mathbb{F}_{p},\Delta,m}(S)_{i^{\prime}}=S_{i}\).

Footnote 26: This fact is useful because if we know that a string \(w\in\{0,1\}^{\lceil\log p\rceil}\) encodes either \(0\) or \(1\), then we can decode it by querying only the last bit of \(w\).

We need three properties of Reed-Muller codes, which we explain below.

**Self-correction for polynomials.** We first need the following self-corrector for polynomials, which efficiently computes the value of \(P\) on any input, given an oracle that is close to a low-degree polynomial \(P\). (In other words, it is a _local decoder_ for the Reed-Muller code.)

**Lemma 5.1** (A self-corrector for polynomials, cf. [12, 13]).: _There is a probabilistic oracle algorithm \(\mathsf{PCorr}\) such that the following holds. Let \(p\) be a prime and \(m,\Delta\in\mathbb{N}\) be such that \(\Delta<p/3\)._
_Let \(g\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) be a function such that, for some polynomial \(P\) of total degree at most \(\Delta\),_

\[\Pr_{\vec{x}\leftarrow\mathbb{F}_{p}^{m}}[g(\vec{x})\neq P(\vec{x})]\leq 1/4.\]

_Then for all \(\vec{x}\in\mathbb{F}_{p}^{m}\), \(\mathsf{PCorr}^{g}(p,m,\Delta,\vec{x})\) runs in time \(\operatorname{poly}(\Delta,\log p,m)\) and outputs \(P(\vec{x})\) with probability at least \(2/3\)._

**Low-max-degree test.** We also need the following efficient tester, which checks whether a given function has maximum individual degree at most \(\Delta\) or is far from every such polynomial.27

Footnote 27: To obtain the theorem below, we set the parameters \(\delta\) and \(\varepsilon\) from [1, Remark 5.15] to be \(\min\bigl(\frac{1}{200n^{2}(\Delta+1)},1/2p\bigr)\) and \(\min\bigl(\frac{1}{400n^{3}(\Delta+1)},1/2p\bigr)\), respectively.

**Lemma 5.2** (Low-max-degree tester [1, Remark 5.15]).: _Let \(n,\Delta,p\in\mathbb{N}\) be such that \(p\geq 20\cdot(\Delta+1)^{2}\cdot n^{2}\) and \(p\) is a prime. There is a probabilistic non-adaptive oracle machine \(\mathsf{LDT}\) such that the following holds. Let \(g\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\). Then for \(\delta=3n^{2}\cdot(\Delta+1)/p\), it holds that_

1. _if_ \(\deg_{\max}(g)\leq\Delta\)_, then_ \(\mathsf{LDT}^{g}(p,n,\Delta)\) _accepts with probability_ \(1\)_,_
2. _if_ \(g\) _is at least_ \(\delta\)_-far from every polynomial with maximum individual degree at most_ \(\Delta\)_, then_ \(\mathsf{LDT}^{g}(p,n,\Delta)\) _rejects with probability at least_ \(2/3\)_, and_
3. \(\mathsf{LDT}\) _runs in_ \(\operatorname{poly}(p)\) _time._

**Comparing two RM codewords.** Lastly, we show an efficient algorithm that, given oracle access to two codewords of \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}\), computes the lexicographically first differing point between the respective messages of the two codewords.

**Lemma 5.3** (Comparing two RM codewords).: _Let \(p\) be a prime, and let \(m,\Delta\in\mathbb{N}\) be such that \(m\cdot\Delta<p/2\). There is a probabilistic oracle algorithm \(\mathsf{Comp}\) that takes two polynomials \(f,g\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) as oracles, such that if both \(\deg_{\max}(f)\) and \(\deg_{\max}(g)\) are at most \(\Delta\), then the following holds with probability at least \(9/10\):_

* _If_ \(f\neq g\)_, then_ \(\mathsf{Comp}^{f,g}(p,m,\Delta)\) _outputs the lexicographically smallest element_ \(w\in H^{m}\) _such that_ \(f(w)\neq g(w)\)_, where_ \(H=\{0,1,\ldots,\Delta-1\}\)_._28
* _If_ \(f=g\)_, then_ \(\mathsf{Comp}^{f,g}(p,m,\Delta)\) _outputs_ \(\bot\)_._
* \(\mathsf{Comp}\) _makes at most_ \(\operatorname{poly}(m\cdot\Delta)\) _queries to both_ \(f\) _and_ \(g\)_, and runs in_ \(\operatorname{poly}(m\cdot\Delta\cdot\log p)\) _time._

Footnote 28: Since both \(f\) and \(g\) have max degree at most \(\Delta\), their values are completely determined by their restrictions to \(H^{m}\). Hence, if \(f\neq g\), then such a \(w\) must exist.

Proof.: Our proof is similar to a proof from [11], which only considers multilinear polynomials. Our algorithm \(\mathsf{Comp}^{f,g}(p,m,\Delta)\) works as follows:

1. The algorithm has \(m\) stages, where the \(i\)-th stage aims to find the \(i\)-th entry of \(w\). At the end of the \(i\)-th stage, the algorithm has obtained a length-\(i\) prefix of \(w\).
2. For every \(i\in[m]\):
   1. Let \(w_{<i}\in H^{i-1}\) be the current prefix.
      For every \(h\in\{0,1,\ldots,\Delta-1\}\), we run a randomized polynomial identity test to check whether the restricted polynomials \(f(w_{<i},h,\cdot)\) and \(g(w_{<i},h,\cdot)\) are the same, with error at most \(\frac{1}{10m|H|}\).29
   2. We set \(w_{i}\) to be the smallest \(h\) such that our test above reports that \(f(w_{<i},h,\cdot)\) and \(g(w_{<i},h,\cdot)\) are distinct. If there is no such \(h\), we immediately return \(\bot\).

Footnote 29: Note that these two polynomials have total degree at most \(m\cdot\Delta<p/2\). Hence, if they are different, their values on a random element from \(\mathbb{F}_{p}^{m-i}\) are different with probability at least \(1/2\). Hence the desired error level can be achieved by sampling \(O(\log m+\log\Delta)\) random points from \(\mathbb{F}_{p}^{m-i}\) and checking whether \(f(w_{<i},h,\cdot)\) and \(g(w_{<i},h,\cdot)\) have the same values.

By a union bound, all \(m|H|\) polynomial identity tests are correct with probability at least \(9/10\). In this case, if \(f=g\), then the algorithm outputs \(\bot\) in the first stage. If \(f\neq g\), by induction on \(i\), we can show that for every \(i\in[m]\), \(w_{\leq i}\) is the lexicographically smallest element of \(H^{i}\) such that \(f(w_{\leq i},\cdot)\) and \(g(w_{\leq i},\cdot)\) are distinct, which implies that the output \(w\) is also the lexicographically smallest element \(w\in H^{m}\) such that \(f(w)\neq g(w)\).

### Encoded History and \(\mathsf{S}_{2}\mathsf{BPP}\) Verification

Next, we define the following encoded history.

**Definition 5.4**.: Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit, and \(f\in\{0,1\}^{T}\) be a "hard truth table" in the sense that \(f\not\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Let \(k\), \((i_{\star},j_{\star})\), and \(\{v_{i,j}\}_{i,j}\) be defined as in Algorithm 3.1. Let \(S\) be the concatenation of \(\mathsf{enc}(v_{i,j})\) for every \(i\in\{0,1,\ldots,k\}\), \(j\in\{0,1,\ldots,2^{i}-1\}\), in the reverse lexicographical order of \((i,j)\), padded with zeros at the end to length exactly \(5T\). (Recall that \(\mathsf{enc}(v_{i,j})\) was defined in Definition 3.6.)

Let \(p\) be the smallest prime that is at least \(20\cdot\log^{5}T\), and let \(m\) be the smallest integer such that \((\log T)^{m}\geq 5\cdot T\). The _encoded computational history_ of \(\mathsf{Korten}(C,f)\), denoted as

\[\widetilde{\mathsf{History}}(C,f),\]

consists of \((i_{\star},j_{\star})\), concatenated with \(\mathsf{BRM}_{\mathbb{F}_{p},\log T,m}(S)\). The length of the encoded history is at most

\[\left\lceil\log(40\cdot\log^{5}T)\right\rceil\cdot(40\cdot\log^{5}T)^{\log(5T)/\log\log T+1}+2\log T\leq T^{6}\]

for all sufficiently large \(T\), and for convenience we always pad zeros at the end so that its length becomes exactly \(T^{6}\).30

Footnote 30: For simplicity, even for \(T\) such that the length of the encoded history is longer than \(T^{6}\), we will pretend its length is exactly \(T^{6}\) throughout this section. This does not affect the analysis in our main theorem (Theorem 5.7), since there we only care about sufficiently large \(T\).

Recall that the original computational history \(\mathsf{History}(C,f)\) is simply the concatenation of \((i_{\star},j_{\star})\) and \(S\). In the encoded version, we encode the \(S\) part by the Reed-Muller code instead. In the rest of this section, when we say history, we always mean the encoded history \(\widetilde{\mathsf{History}}(C,f)\) instead of the vanilla history \(\mathsf{History}(C,f)\). We need the following lemma.
**Lemma 5.5**.: _Let \(n,T\in\mathbb{N}\) be such that \(\log T\leq n\leq T\). Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit and \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Let \(h\coloneqq\widetilde{\mathsf{History}}(C,f)\) and \(z\coloneqq\mathsf{Korten}(C,f)\)._

1. **(history contains input/output)** _There is a \(\operatorname{poly}(\log T)\)-time oracle algorithm \(\mathsf{Input}\) and an \(O(n)\)-time oracle algorithm \(\mathsf{Output}\), both of which have input parameters \(T,n\) and take a string \(\tilde{h}\in\{0,1\}^{T^{6}}\) as oracle, such that the following hold:_
   1. \(\mathsf{Input}_{T,n}\) _makes a single query to its oracle; when given \(h\) as the oracle, \(\mathsf{Input}_{T,n}\) takes an additional input \(i\in\{0,1,\ldots,T-1\}\) and outputs \(f_{i}\)._
   2. \(\mathsf{Output}_{T,n}\) _makes at most \(4n\) queries to its oracle; when given \(h\) as the oracle, \(\mathsf{Output}_{T,n}\) outputs \(z=\mathsf{Korten}(C,f)\)._
2. **(\(\mathsf{S}_{2}\mathsf{BPP}\) verification of the history)** _There is a randomized oracle algorithm \(V\) with input parameters \(T,n\) such that the following hold:_
   1. \(V\) _takes strings \(\tilde{f}\in\{0,1\}^{T}\) and \(\pi_{1},\pi_{2}\in\{0,1\}^{T^{6}}\) as oracles, and the circuit \(C\), an integer \(i\in\{0,1,\ldots,T^{6}-1\}\), and \(\varepsilon\in(0,1)\) as input. It runs in \(\operatorname{poly}(n,\log\varepsilon^{-1})\) time._
   2. _For every \(\pi\in\{0,1\}^{T^{6}}\) and every \(i\in\{0,1,\ldots,T^{6}-1\}\), we have_ \[\Pr\bigl[V_{T,n}^{f,\pi,h}(C,i,\varepsilon)=h_{i}\bigr]\geq 1-\varepsilon\quad\text{and}\quad\Pr\bigl[V_{T,n}^{f,h,\pi}(C,i,\varepsilon)=h_{i}\bigr]\geq 1-\varepsilon.\]

Proof.: Again, the algorithms \(\mathsf{Input}_{T,n}\) and \(\mathsf{Output}_{T,n}\) can be constructed in a straightforward way.31 So we focus on the construction of \(V\). Let \(p,m,k\in\mathbb{N}\) be as in Definition 5.4. We also set \(\mathbb{F}=\mathbb{F}_{p}\) and \(\Delta=\log T\) in the rest of the proof.

Footnote 31: To see that \(\mathsf{Output}_{T,n}\) makes at most \(4n\) queries: note that \(\mathsf{Output}\) first reads the pair \((i_{\star},j_{\star})\) from \(h\), and then reads the two corresponding blocks of \(v_{i,j}\) encoded in \(h\). In total, it reads at most \(2\log T+2n\leq 4n\) bits from \(h\).

Our \(V\) always first _selects_ one of the oracles \(\pi_{1}\) and \(\pi_{2}\) (say \(\pi_{\mu}\) for \(\mu\in\{1,2\}\)), and then outputs \(\pi_{\mu}(i)\). Hence, in the following, we say that \(V\) selects \(\pi_{\mu}\) to mean that \(V\) outputs \(\pi_{\mu}(i)\) and terminates. Given \(\pi_{1}\) and \(\pi_{2}\), let \(g_{1},g_{2}\colon\mathbb{F}^{m}\to\mathbb{F}\) be the (potential) RM codewords encoded in \(\pi_{1}\) and \(\pi_{2}\), respectively.32 From now on, we will assume that \(i\) points to an entry in the encoded history \(g_{1}\) or \(g_{2}\), rather than to the encoded pair of integers \((i_{\star},j_{\star})\). We will discuss the other case at the end of the proof.

Footnote 32: Technically, \(\pi_{1}\) and \(\pi_{2}\) are supposed to contain the RM codewords concatenated with \(\mathsf{Enc}_{\mathbb{F}_{p}}\colon\mathbb{F}_{p}\to\{0,1\}^{\lceil\log p\rceil}\).

**Low-max-degree test and self-correction.** We first run \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) and \(\mathsf{LDT}^{g_{2}}(p,m,\Delta-1)\) \(c_{1}\) times each, where \(c_{1}\) is a sufficiently large constant. Recall that \(p\geq 20\cdot\log^{5}T\), \(m=\lceil\log(5T)/\log\log T\rceil\), and \(\Delta=\log T\).
It follows that \(p\geq 20\cdot((\Delta-1)+1)^{2}\cdot m^{2}\), which satisfies the condition of Lemma 5.2. We also note that \(3m^{2}\cdot((\Delta-1)+1)/p<1/4\). Hence, by Lemma 5.2, if \(g_{1}\) is \(1/4\)-far from all polynomials with maximum individual degree at most \(\Delta-1\), then \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) rejects with probability at least \(2/3\), and similarly for \(g_{2}\). Now, if any of the runs of \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) rejects, \(V\) selects \(\pi_{2}\), and if any of the runs of \(\mathsf{LDT}^{g_{2}}(p,m,\Delta-1)\) rejects, \(V\) selects \(\pi_{1}\).33 In other words, \(V\) first _disqualifies_ the oracles that do not pass the low-max-degree test. We set \(c_{1}\) to be large enough so that, conditioned on the event that \(V\) has not terminated yet, with probability at least \(0.99\), both \(g_{1}\) and \(g_{2}\) are \(1/4\)-close to polynomials \(\widetilde{g}_{1}\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) and \(\widetilde{g}_{2}\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\), respectively, where \(\deg_{\max}(\widetilde{g}_{1})\) and \(\deg_{\max}(\widetilde{g}_{2})\) are at most \(\Delta-1\).

Footnote 33: As a minor detail, if both \(g_{1}\) and \(g_{2}\) are rejected by some runs, \(V\) selects \(\pi_{2}\).

We can then use \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\) and \(\mathsf{PCorr}^{g_{2}}(p,m,m\cdot(\Delta-1),\cdot)\) to access the polynomials \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\). (Note that \(m\cdot(\Delta-1)<p/3\), which satisfies the condition of Lemma 5.1.) We repeat each of them \(O(\log T+\log m)\) times to ensure that on a single invocation, they return the correct values of \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\), respectively, with probability at least \(1-1/(mT)^{c_{2}}\) for a sufficiently large constant \(c_{2}\). By Lemma 5.1, each call to \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\) or \(\mathsf{PCorr}^{g_{2}}(p,m,m\cdot(\Delta-1),\cdot)\) takes \(\operatorname{polylog}(T)\) time.

**Selecting the better polynomial.** From now on, we **refine** what it means when \(V\) selects \(\pi_{\mu}\): it now means that \(V\) outputs the bit corresponding to \(i\) in \(\widetilde{g}_{\mu}\) (recall that we are assuming that \(i\) points to an entry in the encoded history \(g_{1}\) or \(g_{2}\)). Let \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) be the histories encoded in \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\). Then \(V\) uses \(\mathsf{Comp}^{\widetilde{g}_{1},\widetilde{g}_{2}}(p,m,\Delta-1)\) to find the lexicographically largest \((i^{\prime},j^{\prime})\) such that \(v^{1}_{i^{\prime},j^{\prime}}\neq v^{2}_{i^{\prime},j^{\prime}}\).34 Note that \(\mathsf{Comp}^{\widetilde{g}_{1},\widetilde{g}_{2}}(p,m,\Delta-1)\) makes at most \(\operatorname{poly}(m\cdot\Delta)\) queries to both \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\). By making \(c_{2}\) large enough, we know that \(\mathsf{Comp}\) operates correctly with probability at least \(0.8\). By operating correctly, we mean that (1) if \(\widetilde{g}_{1}\neq\widetilde{g}_{2}\), then \(\mathsf{Comp}\) finds the correct \((i^{\prime},j^{\prime})\), and (2) if \(\widetilde{g}_{1}=\widetilde{g}_{2}\), then \(\mathsf{Comp}\) returns \(\bot\).35

Footnote 34: Recall that the \(\{v_{i,j}\}\) are encoded in the reverse lexicographic order (Definition 5.4), so the lexicographically smallest differing point of the codewords corresponds to the lexicographically largest differing pair \((i^{\prime},j^{\prime})\).

Footnote 35: From Lemma 5.3, \(\mathsf{Comp}^{\widetilde{g}_{1},\widetilde{g}_{2}}(p,m,\Delta-1)\) itself operates correctly with probability at least \(0.9\).
But the access to \(\widetilde{g}_{1}\) (similarly to \(\widetilde{g}_{2}\)) is provided by \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\), which may err with probability at most \(1/(mT)^{c_{2}}\) per invocation. So we also need to take a union bound over all the bad events that a query from \(\mathsf{Comp}\) to \(\widetilde{g}_{1}\) or \(\widetilde{g}_{2}\) is answered incorrectly. In what follows, we assume that \(\mathsf{Comp}\) operates correctly.

If \(\mathsf{Comp}\) returns \(\bot\), then \(V\) simply selects \(\pi_{1}\). Otherwise, there are several cases:

1. \(i^{\prime}=k\). In this case, \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\) disagree on their leaf values, which are intended to encode \(f\). \(V\) queries \(f\) to figure out which one has the correct value, and selects the corresponding oracle. (Note that at most one of them can be consistent with \(f\). If neither is consistent, then \(V\) selects \(\pi_{1}\).)

From now on, assume \(i^{\prime}<k\) and set \(\alpha=v^{1}_{i^{\prime}+1,2j^{\prime}}\circ v^{1}_{i^{\prime}+1,2j^{\prime}+1}\). Note that by the definition of \((i^{\prime},j^{\prime})\), it holds that \(\alpha=v^{2}_{i^{\prime}+1,2j^{\prime}}\circ v^{2}_{i^{\prime}+1,2j^{\prime}+1}\) as well.

2. \(i^{\prime}<k\), and both \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) are not \(\bot\). In this case, \(V\) first checks whether both of them are in \(C^{-1}(\alpha)\) (this can be checked by testing whether \(C(v^{1}_{i^{\prime},j^{\prime}})=\alpha\) and \(C(v^{2}_{i^{\prime},j^{\prime}})=\alpha\)). If only one of them is contained in \(C^{-1}(\alpha)\), \(V\) selects the corresponding oracle. If neither is contained, \(V\) selects \(\pi_{1}\). Finally, if both are contained in \(C^{-1}(\alpha)\), \(V\) checks which one is lexicographically smaller, and selects the corresponding oracle.

3. \(i^{\prime}<k\), and one of \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) is \(\bot\). Say that \(v^{b}_{i^{\prime},j^{\prime}}=\bot\) for some \(b\in\{1,2\}\), and denote by \(\bar{b}:=3-b\) the index of the other proof. In this case, let \((i_{\diamond},j_{\diamond})\) denote the predecessor of \((i^{\prime},j^{\prime})\) in the reverse lexicographical order (that is, the smallest pair that is lexicographically greater than \((i^{\prime},j^{\prime})\)). Since \(\mathsf{Comp}\) operates correctly, we have \(v^{1}_{i_{\diamond},j_{\diamond}}=v^{2}_{i_{\diamond},j_{\diamond}}\). If \(v^{1}_{i_{\diamond},j_{\diamond}}=\bot\), then \(\pi_{\bar{b}}\) has to be incorrect (since by Definition 3.6, the \(\bot\)'s form a contiguous suffix of the history), and \(V\) selects \(\pi_{b}\). Otherwise, if \(v^{\bar{b}}_{i^{\prime},j^{\prime}}\in C^{-1}(\alpha)\), then \(\pi_{b}\) is incorrect (as it claims that \(C^{-1}(\alpha)=\varnothing\)), and \(V\) selects \(\pi_{\bar{b}}\). Otherwise, \(V\) selects \(\pi_{b}\).

**Analysis.** Now we show that \(\Pr\bigl[V^{f,h,\pi}_{T,n}(i)=h_{i}\bigr]\geq 2/3\). (The proof for \(\Pr\bigl[V^{f,\pi,h}_{T,n}(i)=h_{i}\bigr]\geq 2/3\) is symmetric.) To get the desired error probability \(\varepsilon\), one can simply repeat the above procedure \(O(\log 1/\varepsilon)\) times and output the majority answer. First, by Lemma 5.2, \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) passes with probability \(1\). If some run of \(\mathsf{LDT}^{g_{2}}(p,m,\Delta-1)\) rejects, then \(V\) selects \(h\).
Otherwise, we know that with probability at least \(0.99\), \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\) and \(\mathsf{PCorr}^{g_{2}}(p,m,m\cdot(\Delta-1),\cdot)\) provide access to polynomials \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\) with maximum individual degree at most \(\Delta-1\), where \(\widetilde{g}_{1}\) encodes the correct history values \(\{v_{i,j}\}_{i,j}\) of \(\mathsf{Korten}(C,f)\). Then, assuming \(\mathsf{Comp}\) operates correctly (which happens with probability at least \(0.8\)), if \(\widetilde{g}_{1}=\widetilde{g}_{2}\), then the selection of \(V\) does not matter. Now we assume \(\widetilde{g}_{1}\neq\widetilde{g}_{2}\). We will verify that in all three cases above, \(h\) (as the first oracle) is selected by \(V\).

In the first case, by definition, all leaf values in \(h\) are consistent with \(f\), and hence \(h\) is selected. In the second case, since \(h\) contains the correct history values, we know that \(v^{1}_{i^{\prime},j^{\prime}}\) must be the smallest element of \(C^{-1}(\alpha)\), so again \(h\) is selected. In the last case: (1) if \(v^{1}_{i_{\diamond},j_{\diamond}}=\bot\), then \(v^{1}_{i^{\prime},j^{\prime}}\) has to be \(\bot\) as well, thus \(h\) is selected; (2) if \(v^{1}_{i_{\diamond},j_{\diamond}}\neq\bot\) and \(v^{1}_{i^{\prime},j^{\prime}}=\bot\), then \(C^{-1}(\alpha)=\varnothing\), and since the other proof \(\pi\) claims some element \(v^{2}_{i^{\prime},j^{\prime}}\in C^{-1}(\alpha)\), \(h\) is selected; and (3) if \(v^{1}_{i_{\diamond},j_{\diamond}}\neq\bot\) and \(v^{1}_{i^{\prime},j^{\prime}}\neq\bot\), then \(\pi\) claims that \(C^{-1}(\alpha)=\varnothing\) while we can check that \(v^{1}_{i^{\prime},j^{\prime}}\in C^{-1}(\alpha)\), therefore \(h\) is selected as well.

**The remaining case: \(i\) points to the location of \((i_{\star},j_{\star})\).** In this case, \(V\) still runs the algorithm described above to make a selection. Indeed, if \(\mathsf{Comp}\) does not return \(\bot\), \(V\) operates exactly as before. But when \(\mathsf{Comp}\) returns \(\bot\), \(V\) cannot simply select \(\pi_{1}\), since we need to make sure that \(V\) selects the oracle corresponding to \(h\) (which can be either \(\pi_{1}\) or \(\pi_{2}\)). Hence, in this case, \(V\) first reads \((i^{1}_{\star},j^{1}_{\star})\) and \((i^{2}_{\star},j^{2}_{\star})\) from \(\pi_{1}\) and \(\pi_{2}\). If they are the same, \(V\) simply selects \(\pi_{1}\). Otherwise, for each \(b\in[2]\), \(V\) checks whether \(v^{b}_{i^{b}_{\star},j^{b}_{\star}}=\bot\), and selects the one that satisfies this condition. (If neither of the \(v^{b}_{i^{b}_{\star},j^{b}_{\star}}\) is \(\bot\), then \(V\) selects \(\pi_{1}\).) If both of the \(v^{b}_{i^{b}_{\star},j^{b}_{\star}}\) are \(\bot\), \(V\) selects the \(\mu\in[2]\) such that \((i^{\mu}_{\star},j^{\mu}_{\star})\) is larger. Now, we can verify that \(V^{f,h,\pi}_{T,n}\) selects \(h\) with high probability as well. (To see this, note that in the correct history, \((i_{\star},j_{\star})\) is the lexicographically largest pair whose value \(v_{i_{\star},j_{\star}}\) is \(\bot\).)

Finally, the running time bound follows directly from the description of \(V\).

#### 5.2.1 A remark on relativization

Perhaps surprisingly, although Lemma 5.5 heavily relies on arithmetization tools such as Reed-Muller encoding and low-degree tests, it in fact also relativizes.
To see this, the crucial observation is that, similarly to Lemma 3.7, the verifier \(V\) from Lemma 5.5 only needs _black-box access_ to the input circuit \(C\), meaning that it only needs to evaluate \(C\) on certain chosen inputs. Hence, when \(C\) is actually an oracle circuit \(C^{\mathcal{O}}\) for some arbitrary oracle \(\mathcal{O}\), the only modification we need is that \(V\) now also takes \(\mathcal{O}\) as an oracle.

_Remark 5.6_.: Definition 5.4 and Lemma 5.5 _relativize_, in the sense that if \(C\) is an oracle circuit \(C^{\mathcal{O}}\) for some arbitrary oracle, then Definition 5.4 needs no modification since Definition 3.6 relativizes, and Lemma 5.5 holds with the only modification that \(V\) now also needs to take \(\mathcal{O}\) as an oracle (since it needs to evaluate \(C\)).

Indeed, the remark above might sound strange at first glance: arguments that involve PCPs often do not _relativize_, and the encoded history \(\widetilde{\mathsf{History}}(C,f)\) looks similar to a PCP, since it enables \(V\) to perform a probabilistic local verification. However, a closer inspection reveals a key difference: the circuit \(C\) is always treated as a black box, both in the construction of the history (Definition 3.6) and in the construction of the encoded history (Definition 5.4). That is, the arithmetization in the encoded history _does not arithmetize_ the circuit \(C\) itself.

### Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)

Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. We show that there is a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) such that for infinitely many \(n\in\mathbb{N}\), on input \(1^{n}\), \(A(1^{n})\) outputs a canonical string that is outside the range of \(C_{n}\).

**Theorem 5.7**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. There is a sequence of valid outputs \(\{y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C_{n})\}_{n\in\mathbb{N}}\) and a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) with one bit of advice, such that for infinitely many \(n\in\mathbb{N}\), \(A(1^{n})\) outputs \(y_{n}\)._

Proof.: Our proof proceeds similarly to the proof of Theorem 4.1, and we follow the same notation.

**Notation.** Let \(n^{(1)}\) be a large enough power of \(2\), and let \(n^{(\ell)}=2^{2^{n^{(\ell-1)}}}\) for each integer \(\ell>1\). Let \(n_{0}^{(\ell)}=n^{(\ell)}\), and let \(t^{(\ell)}=O(\log n_{0}^{(\ell)})\) be a parameter that we set later. For each \(1\leq i\leq t^{(\ell)}\), let \(n_{i}^{(\ell)}:=(n_{i-1}^{(\ell)})^{10}\). To show that our algorithm \(A\) works on infinitely many input lengths, we will show that for every \(\ell\in\mathbb{N}\), there is an input length \(n_{i}^{(\ell)}\) for some \(i\in\{0,1,\ldots,t^{(\ell)}\}\) on which \(A\) works. Fix \(\ell\in\mathbb{N}\). From now on, for convenience, we will use \(n_{i}\) and \(t\) to denote \(n_{i}^{(\ell)}\) and \(t^{(\ell)}\), respectively.

**Specifying \(T_{i}\) and \(f_{i}\).** For each input length \(n_{i}\), we will specify a parameter \(T_{i}\in\mathbb{N}\) and a string \(f_{i}\in\{0,1\}^{T_{i}}\). Our win-win analysis is based on whether \(f_{i}\in\operatorname{Range}(\mathsf{GGM}_{T_{i}}[C_{n_{i}}])\) for each \(i\in\{0,1,\ldots,t\}\). Let \(T_{0}:=2^{2n_{0}}\cdot 2n_{0}\) and let \(f_{0}\) be the concatenation of all length-\(2n_{0}\) strings (which has length \(T_{0}\)). From Fact 3.4, we have \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\).
For every \(i\in[t]\), we define

\[f_{i}=\widetilde{\operatorname{History}}(C_{n_{i-1}},f_{i-1}).\]

From Definition 5.4, this also means that we have set \(T_{i}=T_{i-1}^{6}\) for every \(i\in[t]\). Let \(t\) be the first integer such that \(T_{t+1}\leq n_{t+1}\). Note that we have \(T_{i}=(T_{0})^{6^{i}}\leq 2^{3n_{0}\cdot 6^{i}}\) and \(n_{i}=(n_{0})^{10^{i}}=2^{\log n_{0}\cdot 10^{i}}\). Hence, we have that \(t\leq O(\log n_{0})\). (Also note that \(n_{t}^{(\ell)}<n_{0}^{(\ell+1)}\).)

Description of our \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\). Now, let \(k\in\{0,1,\ldots,t\}\) be the largest integer such that \(f_{k}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{k}}[C_{n_{k}}])\). Since \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\), such a \(k\) must exist. Let \(z:=\mathsf{Korten}(C_{n_{k}},f_{k})\); it follows from 3.3 that \(z\) is not in the range of \(C_{n_{k}}\) (i.e., \(z\in\{0,1\}^{2n_{k}}\setminus\operatorname{Range}(C_{n_{k}})\)). Our single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) computes \(z\) on input \(1^{n_{k}}\) (see 2.2). We will first construct an \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\) that computes \(z\) in polynomial time on input \(1^{n_{k}}\), and then use the fact that all \(\mathsf{S}_{2}\mathsf{BPP}\) verifiers can be turned into equivalent \(\mathsf{S}_{2}\mathsf{P}\) verifiers with a polynomial-time blow-up [1, 10], from which we can obtain the desired verifier \(V_{A}\) for \(A\).

Description of an \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\) computing \(z\). Formally, \(V\) is a randomized polynomial-time algorithm that takes \(1^{n_{k}}\) and two witnesses \(\pi_{1},\pi_{2}\in\{0,1\}^{n_{k+1}}\) as input, and we aim to establish the following: There exists \(\omega\in\{0,1\}^{n_{k+1}}\) such that for every \(\pi\in\{0,1\}^{n_{k+1}}\), we have

\[\Pr[V(1^{n_{k}},\omega,\pi)=z]\geq 2/3\qquad\text{and}\qquad\Pr[V(1^{n_{k}},\pi,\omega)=z]\geq 2/3,\]

where the probabilities are over the internal randomness of \(V\).

In more detail, if \(k<t\), then \(V\) treats \(\pi_{1}\) and \(\pi_{2}\) as inputs to the circuit \(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}]\), and lets

\[\hat{f}_{k+1}:=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{1})\quad\text{and}\quad\hat{g}_{k+1}:=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{2}).\]

Here, the lengths of \(\pi_{1}\) and \(\pi_{2}\) are \(\ell:=n_{k+1}\leq\operatorname{poly}(n_{k})\). If \(k=t\), then \(V\) defines \(\hat{f}_{k+1}:=\pi_{1}\), \(\hat{g}_{k+1}:=\pi_{2}\), and their lengths are \(\ell:=T_{t+1}\leq n_{k+1}\leq\operatorname{poly}(n_{k})\). It is intended that one of \(\hat{f}_{k+1}\) and \(\hat{g}_{k+1}\) is \(f_{k+1}=\widetilde{\operatorname{History}}(C_{n_{k}},f_{k})\) (\(V\) needs to figure out which one). We now specify the intended proof \(\omega\in\{0,1\}^{n_{k+1}}\). When \(k<t\), since \(f_{k+1}\in\operatorname{Range}(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}])\), we can set \(\omega\) so that \(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\omega)=f_{k+1}\). When \(k=t\), we simply set \(\omega=f_{k+1}\).

Note that Lemma 3.2 provides us with "random access" to the (potentially very long) strings \(\hat{f}_{k+1}\) and \(\hat{g}_{k+1}\): (take \(\hat{f}_{k+1}\) as an example) given \(\pi_{1}\) and \(j\in\{0,1,\ldots,T_{k+1}-1\}\), one can compute the \(j\)-th bit of \(\hat{f}_{k+1}\) in \(\operatorname{poly}(n_{k})\) time.
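To make the parameter bookkeeping above concrete, the following minimal sketch (in Python, with an illustrative choice of \(n_{0}\); the variable names are ours, not the paper's) tracks the schedule \(n_{i}=n_{i-1}^{10}\), \(T_{0}=2^{2n_{0}}\cdot 2n_{0}\), \(T_{i}=T_{i-1}^{6}\) by its base-2 exponents, and finds the first \(t\) with \(T_{t+1}\leq n_{t+1}\):

```python
import math

def schedule(n0: int) -> int:
    """Return the t of the proof: the first integer with T_{t+1} <= n_{t+1}.

    n_i = n_{i-1}^10          =>  log2(n_i) = 10^i * log2(n0)
    T_0 = 2^(2*n0) * 2*n0     =>  log2(T_0) = 2*n0 + 1 + log2(n0)
    T_i = T_{i-1}^6           =>  log2(T_i) = 6^i * log2(T_0)
    Comparing exponents avoids materializing the astronomically
    large integers n_i and T_i.
    """
    log_n0 = math.log2(n0)          # n0 is a power of 2, so this is exact
    log_T0 = 2 * n0 + 1 + log_n0
    i = 0
    while True:
        log_n = (10 ** (i + 1)) * log_n0   # log2(n_{i+1})
        log_T = (6 ** (i + 1)) * log_T0    # log2(T_{i+1})
        if log_T <= log_n:                 # T_{i+1} <= n_{i+1}
            return i
        i += 1

# t stays O(log n0), as claimed:
for n0 in (16, 64, 256):
    print(f"n0 = {n0:4d}: t = {schedule(n0)}  (log2(n0) = {math.log2(n0):.0f})")
```

For \(n_{0}=16\) this prints \(t=4\), in line with the bound \(t\leq O(\log n_{0})\) derived above.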
Also recall from Lemma 5.5 that for each \(i\), \(f_{i+1}=\widetilde{\mathsf{History}}(C_{n_{i}},f_{i})\) contains the string \(f_{i}\), which can be retrieved by the oracle algorithm \(\mathsf{Input}\) described in Item 1 of Lemma 5.5. Therefore, for each \(i\) from \(k\) down to \(1\), we can recursively define \(\hat{f}_{i}\) such that \((\hat{f}_{i})_{j}=\mathsf{Input}_{T_{i},n_{i}}^{\hat{f}_{i+1}}(j)\) (similarly for \(\hat{g}_{i}\)). We also define \(\hat{f}_{0}\) and \(\hat{g}_{0}\) to be the concatenation of all length-\((2n_{0})\) strings in the lexicographical order, so \(\hat{f}_{0}=\hat{g}_{0}=f_{0}\). Applying the algorithm \(\mathsf{Input}\) recursively, we obtain two algorithms \(F\) and \(G\) (depending on \(\pi_{1}\) and \(\pi_{2}\), respectively) that, given \(i\in\{0,1,\ldots,k+1\}\) and \(j\in\{0,1,\ldots,T_{i}-1\}\), output the \(j\)-th bit of \(\hat{f}_{i}\) or \(\hat{g}_{i}\), respectively. Since \(\mathsf{Input}\) only makes one oracle query, these algorithms run in \(\operatorname{poly}(n_{k})\) time.

We are now ready to formally construct \(V\). We first recursively define a series of procedures \(V_{0},\ldots,V_{k+1}\), where each \(V_{i}\) takes an input \(j\) and outputs (with high probability) the \(j\)-th bit of \(f_{i}\). Let \(V_{0}\) be the simple algorithm that, on input \(j\), computes the \(j\)-th bit of \(f_{0}\). For every \(i\in[k+1]\), we define

\[V_{i}(\alpha)\coloneqq\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\]

for some \(\varepsilon_{i}\in[0,1)\) to be specified later, where \(\mathsf{Select}\) is the algorithm in Item 2 of Lemma 5.5. We note that since \(V_{i-1}\) is a randomized algorithm, when \(V_{i}\) calls \(V_{i-1}\), it also draws _independent_ random coins used by the execution of \(V_{i-1}\). Moreover, all calls to \(\hat{f}_{i}\) and \(\hat{g}_{i}\) in \(V_{i}\) can be simulated by calling our algorithms \(F\) and \(G\). Jumping ahead, we remark that \(V_{i}\) is supposed to compute \(f_{i}\) when at least one of \(\hat{f}_{i}\) or \(\hat{g}_{i}\) is \(f_{i}\). We then set

\[V(1^{n_{k}},\pi_{1},\pi_{2})\coloneqq\mathsf{Output}_{T_{k},n_{k}}^{V_{k+1}}\]

(note that \(V_{k+1}\) is defined from \(\hat{f}_{k+1}\) and \(\hat{g}_{k+1}\), which are in turn constructed from \(\pi_{1}\) and \(\pi_{2}\)), where \(\mathsf{Output}_{T_{k},n_{k}}\) is the algorithm from Item 1 of Lemma 5.5.

Correctness of \(V\). Let \(\tau\in\mathbb{N}\) be a large constant such that \(\mathsf{Select}_{T,n}\) runs in \((n\cdot\log 1/\varepsilon)^{\tau}\) time. In particular, on any input \(\alpha\), \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes at most \((n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\) many queries to \(V_{i-1}\). We say \(\mathsf{Select}_{T,n}^{f,\pi_{1},\pi_{2}}(C,\alpha,\varepsilon)\) makes an error if the following condition holds (here \(h=\widetilde{\mathsf{History}}(C,f)\) from Lemma 5.5): Footnote 36: The condition below only applies when at least one of \(\pi_{1}\) and \(\pi_{2}\) is \(h\). If neither of them is \(h\), then \(\mathsf{Select}\) by definition never errs.
\[[\pi_{1}=h\quad\mathsf{OR}\quad\pi_{2}=h]\quad\mathsf{AND}\quad\Big[\mathsf{Select}_{T,n}^{f,\pi_{1},\pi_{2}}(C,\alpha,\varepsilon)\neq h_{\alpha}\Big].\]

Similarly, we say that \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes an error if either (1) one of the queries to \(V_{i-1}\) is incorrectly answered (i.e., the answer is not consistent with \(f_{i-1}\)) or (2) all queries are correctly answered but \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{f_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes an error. Note that (2) happens with probability at most \(\varepsilon_{i}\) by Item 2 of Lemma 5.5.

Now we are ready to specify the parameter \(\varepsilon_{i}\). We set \(\varepsilon_{k+1}=1/(100\cdot n_{k+1})\), and for every \(i\in\{0,1,\ldots,k\}\), we set

\[\varepsilon_{i}=\frac{\varepsilon_{i+1}}{4\cdot(n_{i}\cdot\log 1/\varepsilon_{i+1})^{\tau}}.\]

To show the correctness of \(V\), we prove the following claim by induction.

**Claim 5.8**.: _Assume either \(\hat{f}_{k+1}=f_{k+1}\) or \(\hat{g}_{k+1}=f_{k+1}\). For every \(i\in\{0,1,\ldots,k+1\}\) and \(\alpha\in[|f_{i}|]\), \(V_{i}(\alpha)\) outputs \(f_{i}(\alpha)\) with probability at least \(1-2\varepsilon_{i}\)._

Proof.: The claim certainly holds for \(V_{0}\). Now, for \(i\in[k+1]\), assuming it holds for \(V_{i-1}\), it follows that \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes an error with probability at most

\[\varepsilon_{i}+(n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\cdot 2\varepsilon_{i-1}\leq 2\varepsilon_{i}.\]

By the definition of making an error and our assumption that either \(\hat{f}_{k+1}=f_{k+1}\) or \(\hat{g}_{k+1}=f_{k+1}\) (from which we know either \(\hat{f}_{i}=f_{i}\) or \(\hat{g}_{i}=f_{i}\)), it follows that \(V_{i}(\alpha)\) outputs \(f_{i}(\alpha)\) with probability at least \(1-2\varepsilon_{i}\). \(\diamond\)

Note that \(\mathsf{Output}_{T_{k},n_{k}}^{V_{k+1}}\) makes at most \(4n_{k}\) queries to \(V_{k+1}\). It follows from Claim 5.8 that when either \(\hat{f}_{k+1}=f_{k+1}\) or \(\hat{g}_{k+1}=f_{k+1}\), we have that \(V(1^{n_{k}},\pi_{1},\pi_{2})\) outputs \(z\) with probability at least \(1-(4n_{k})\cdot 1/(100n_{k+1})\geq 2/3\). The correctness of \(V\) then follows from our choice of \(\omega\).

Running time of \(V\). Finally, we analyze the running time of \(V\), for which we first need to bound \(\log\varepsilon_{i}^{-1}\). First, we have

\[\log\varepsilon_{k+1}^{-1}=\log n_{k+1}+\log 100.\]

By our definition of \(\varepsilon_{i}\) and the fact that \(\tau\) is a constant, we have

\[\log\varepsilon_{i}^{-1}=\log\varepsilon_{i+1}^{-1}+\log 4+\tau\cdot\big(\log n_{i}+\log\log\varepsilon_{i+1}^{-1}\big)\leq 2\log\varepsilon_{i+1}^{-1}+O(\log n_{i}).\]

Expanding the above and noting that \(k\leq t\leq O(\log n_{0})\), for every \(i\in[k+1]\) we have that

\[\log\varepsilon_{i}^{-1}\leq 2^{k}\cdot O\Bigg(\sum_{\ell=0}^{k}\log n_{\ell}\Bigg)\leq\operatorname{poly}(n_{0})\cdot\log n_{k}.\]

Now we are ready to bound the running times of the \(V_{i}\); to avoid a clash with the string lengths \(T_{i}\), we write \(\mathcal{T}_{i}\) for the running time of \(V_{i}\). First, \(V_{0}\) runs in time \(\mathcal{T}_{0}=\operatorname{poly}(n_{0})\).
For every \(i\in[k+1]\), by the definition of \(V_{i}\), we know that \(V_{i}\) runs in time

\[\mathcal{T}_{i}=O\big((n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\big)\cdot(\mathcal{T}_{i-1}+n_{k}^{\beta}+1),\]

where \(\beta\) is a sufficiently large constant and \(n_{k}^{\beta}\) bounds the running time of answering each query \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes to \(\hat{f}_{i}\) or \(\hat{g}_{i}\), by running \(F\) or \(G\), respectively. Expanding out the bound for \(\mathcal{T}_{k+1}\), we know that \(V_{k+1}\) runs in time

\[2^{O(k)}\cdot(\operatorname{poly}(n_{0})\cdot\log n_{k})^{O(k\cdot\tau)}\cdot n_{k}^{\beta}\cdot\prod_{i=1}^{k+1}n_{i-1}^{\tau}.\]

Since \(n_{k}=n_{0}^{10^{k}}\) and \(k\leq O(\log n_{0})\), the above can be bounded by \(\operatorname{poly}(n_{k})\). This also implies that \(V\) runs in \(\operatorname{poly}(n_{k})\) time as well, which completes the analysis of the \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\).

Derandomization of the \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\) into the desired \(\mathsf{S}_{2}\mathsf{P}\) verifier \(V_{A}\). Finally, we use the underlying proof technique of \(\mathsf{S}_{2}\mathsf{BPP}=\mathsf{S}_{2}\mathsf{P}\) [10, 11] to derandomize \(V\) into a deterministic \(\mathsf{S}_{2}\mathsf{P}\) verifier \(V_{A}\) that outputs \(z\). By repeating \(V\) \(\operatorname{poly}(n_{k})\) times and outputting the majority of the outputs, we can obtain a new \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(\widetilde{V}\) such that

* There exists \(\omega\in\{0,1\}^{n_{k+1}}\) such that for every \(\pi\in\{0,1\}^{n_{k+1}}\), we have \[\Pr[\widetilde{V}(1^{n_{k}},\omega,\pi)=z]\geq 1-2^{-n_{k}}\qquad\text{and}\qquad\Pr[\widetilde{V}(1^{n_{k}},\pi,\omega)=z]\geq 1-2^{-n_{k}}.\] (2)

Let \(\ell=\operatorname{poly}(n_{k})\) be an upper bound on the number of random coins used by \(\widetilde{V}\). We also let \(m:=\operatorname{poly}(\ell,n_{k+1})\leq\operatorname{poly}(n_{k})\) and use \(\widetilde{V}(1^{n_{k}},\pi_{1},\pi_{2};r)\) to denote the output of \(\widetilde{V}\) given randomness \(r\). Now, we define \(V_{A}\) as follows: It takes two vectors \(\vec{\pi}_{1},\vec{\pi}_{2}\in\{0,1\}^{n_{k+1}}\times\big(\{0,1\}^{\ell}\big)^{m}\) as proofs. For \(\vec{\pi}_{1}=(\alpha,u_{1},u_{2},\ldots,u_{m})\) and \(\vec{\pi}_{2}=(\beta,v_{1},v_{2},\ldots,v_{m})\), \(V_{A}\) outputs the majority of the multi-set

\[\{\widetilde{V}(1^{n_{k}},\alpha,\beta;u_{i}\oplus v_{j})\}_{(i,j)\in[m]^{2}},\]

where \(u_{i}\oplus v_{j}\) denotes the bit-wise XOR of \(u_{i}\) and \(v_{j}\) (if no string occurs more than \(m^{2}/2\) times in the multi-set above, then \(V_{A}\) simply outputs \(\bot\)). We will show there exists \(\vec{\omega}=(\gamma,r_{1},\ldots,r_{m})\) such that for every \(\vec{\pi}\in\{0,1\}^{n_{k+1}}\times\big(\{0,1\}^{\ell}\big)^{m}\),

\[V_{A}(1^{n_{k}},\vec{\omega},\vec{\pi})=z\quad\text{and}\quad V_{A}(1^{n_{k}},\vec{\pi},\vec{\omega})=z.\]

We first claim that there exist \(r_{1},\ldots,r_{m}\in\{0,1\}^{\ell}\) such that for every \(u\in\{0,1\}^{\ell}\) and for every \(\pi\in\{0,1\}^{n_{k+1}}\), it holds that (1) for at least a \(2/3\) fraction of \(i\in[m]\), we have \(\widetilde{V}(1^{n_{k}},\omega,\pi;r_{i}\oplus u)=z\) and (2) for at least a \(2/3\) fraction of \(i\in[m]\), we have \(\widetilde{V}(1^{n_{k}},\pi,\omega;r_{i}\oplus u)=z\).
To see this, for every fixed \(u\in\{0,1\}^{\ell}\) and \(\pi\in\{0,1\}^{n_{k+1}}\), by a simple Chernoff bound, the probability, over \(m\) independently and uniformly drawn \(r_{1},\ldots,r_{m}\), that more than a \(1/3\) fraction of \(i\in[m]\) satisfies \(\widetilde{V}(1^{n_{k}},\omega,\pi;r_{i}\oplus u)\neq z\) is at most \(2^{-\Omega(m)}\), and the same probability upper bound holds for the corresponding case of \(\widetilde{V}(1^{n_{k}},\pi,\omega;r_{i}\oplus u)\neq z\) as well. Our claim then just follows from a simple union bound over all \(u\in\{0,1\}^{\ell}\) and \(\pi\in\{0,1\}^{n_{k+1}}\).

Now, let \(\gamma\) be the proof \(\omega\) such that condition (2) holds. We simply set \(\vec{\omega}=(\gamma,r_{1},\ldots,r_{m})\). From our choice of \(\gamma\) and \(r_{1},\ldots,r_{m}\), it then follows that for every \(v_{1},\ldots,v_{m}\in\{0,1\}^{\ell}\) and \(\pi\in\{0,1\}^{n_{k+1}}\), at least a \(2/3\) fraction of \(\widetilde{V}(1^{n_{k}},\gamma,\pi;r_{i}\oplus v_{j})\) equal \(z\), and similarly for \(\widetilde{V}(1^{n_{k}},\pi,\gamma;r_{i}\oplus v_{j})\). This completes the proof.

Wrapping up. Finally, we generalize \(A\) and \(V_{A}\) to work on all inputs \(1^{n}\). On input \(1^{n}\), \(V_{A}\) calculates the largest \(\ell\) such that \(n^{(\ell)}\leq n\), and also calculates the largest \(k^{\prime}\) such that \(n^{(\ell)}_{k^{\prime}}\leq n\). If \(n^{(\ell)}_{k^{\prime}}\neq n\), then \(V_{A}\) immediately outputs \(\bot\) and halts. Otherwise, \(V_{A}\) receives an advice bit indicating whether \(k^{\prime}=k^{(\ell)}\), where \(k^{(\ell)}\) is the largest integer such that \(f^{(\ell)}_{k^{(\ell)}}\not\in\operatorname{Range}(\mathsf{GGM}_{T^{(\ell)}_{k^{(\ell)}}}[C_{n^{(\ell)}_{k^{(\ell)}}}])\). If this is the case, then \(V_{A}\) runs the verification procedure above; otherwise, it immediately outputs \(\bot\) and halts. It is easy to see that \(V_{A}\) runs in \(\operatorname{poly}(n)\) time, and is an infinitely-often single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm solving the range avoidance problem of \(\{C_{n}\}\).

Moreover, observe that in the proof above, all considered input lengths (the \(n^{(\ell)}_{i}\)) are indeed powers of \(2\). So we indeed have the following slightly stronger result.

**Corollary 5.9**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. There is a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(r\in\mathbb{N}\), letting \(n=2^{r}\), \(A(1^{n})\) outputs \(y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C_{n})\)._

We need the following reduction from Korten [12], which reduces solving range avoidance with one-bit stretch to solving range avoidance with doubling stretch.

**Lemma 5.10** ([12, Lemma 3]).: _Let \(n\in\mathbb{N}\). There is a polynomial time algorithm \(A\) and an \(\mathsf{FP^{NP}}\) algorithm \(B\) such that the following hold:_

1. _Given a circuit_ \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\)_,_ \(A(C)\) _outputs a circuit_ \(D\colon\{0,1\}^{n}\to\{0,1\}^{2n}\)_._
2. _Given any_ \(y\in\{0,1\}^{2n}\setminus\mathrm{Range}(D)\)_,_ \(B(C,y)\) _outputs a string_ \(z\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C)\)_._

The following corollary then follows by combining Corollary 5.9, Lemma 5.10, and Theorem 2.3.

**Corollary 5.11**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. 
There is a single-valued \(\mathsf{FS_{2}P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(r\in\mathbb{N}\), letting \(n=2^{r}\), \(A(1^{n})\) outputs \(y_{n}\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C_{n})\)._

The following corollary follows from 2.4 and Corollary 5.11.

**Corollary 5.12**.: \(\mathsf{S_{2}E}/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\)_._

Finally, we also note that by letting \(C_{n}\) be a universal Turing machine mapping \(n\) bits to \(n+1\) bits in \(\mathrm{poly}(n)\) time, we have the following strong lower bounds for \(\mathsf{S_{2}E}/_{1}\) against non-uniform time complexity classes with maximum advice.

**Corollary 5.13**.: _For every \(\alpha(n)\geq\omega(1)\) and any constant \(k\geq 1\), \(\mathsf{S_{2}E}/_{1}\not\subset\mathsf{TIME}[2^{kn}]/_{2^{n}-\alpha(n)}\)._

From Remark 5.6, and noting that the derandomization of the \(\mathsf{S_{2}BPP}\) verifier \(V\) into the \(\mathsf{S_{2}P}\) verifier \(V_{A}\) also relativizes, we can see that all the results above relativize as well.

### Infinitely Often Single-Valued \(\mathsf{FS_{2}P}\) Algorithm for Arbitrary Input Range Avoidance

Theorem 5.7 and Corollary 5.11 only give single-valued \(\mathsf{FS_{2}P}\) algorithms for solving range avoidance for \(\mathsf{P}\)-uniform families of circuits. Applying Korten's reduction [12], we show that they can be strengthened into a single-valued infinitely-often \(\mathsf{FS_{2}P}\) algorithm solving range avoidance given an arbitrary input circuit. We need the following reduction from [12].

**Lemma 5.14** ([12, Theorem 7]).: _There is an \(\mathsf{FP^{NP}}\) algorithm \(A_{\mathsf{Korten}}\) satisfying the following:_

1. \(A_{\mathsf{Korten}}\) _takes an_ \(s\)_-size circuit_ \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) _and a truth table_ \(f\in\{0,1\}^{2^{m}}\) _such that_ \(2^{m}\geq s^{3}\) _and_ \(n\leq s\) _as input._
2. _If the circuit complexity of_ \(f\) _is at least_ \(c_{1}\cdot m\cdot s\) _for a sufficiently large universal constant_ \(c_{1}\in\mathbb{N}\)_, then_ \(A_{\mathsf{Korten}}(C,f)\) _outputs a string_ \(y\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C)\)_._

**Theorem 5.15**.: _There is a single-valued \(\mathsf{FS_{2}P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(s\in\mathbb{N}\), for all \(s\)-size circuits \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) where \(n\leq s\), \(A(C)\) outputs \(y_{C}\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C)\)._

Proof Sketch.: By Corollary 5.11, there is a single-valued \(\mathsf{FS_{2}P}\) algorithm \(W\) with one bit of advice such that for infinitely many \(n\in\mathbb{N}\), \(W(1^{2^{n}})\) outputs a string \(f_{n}\in\{0,1\}^{2^{n}}\) with \(\mathsf{SIZE}(f_{n})\geq 2^{n}/n\). Now we construct our single-valued \(\mathsf{FS_{2}P}\) algorithm \(A\) with one bit of advice as follows: given an \(s\)-size circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) with \(n\leq s\) as input, let \(m=\lceil\log s^{3}\rceil\) and \(f_{m}=W(1^{2^{m}})\), and output \(A_{\mathsf{Korten}}(C,f_{m})\). It follows from Theorem 2.3 that \(A\) is a single-valued \(\mathsf{FS_{2}P}\) algorithm with one bit of advice (the advice of \(A\) is given to \(W\)). Finally, \(\mathsf{S}_{2}\mathsf{P}\subseteq\mathsf{ZPP}^{\mathsf{NP}}\) [11] implies that every single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm can also be implemented as a single-valued \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm with polynomial overhead.
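As a quick numeric sanity check on the parameter choices in the proof sketch above (this illustration is ours, not part of the proof, and the constant `c1 = 10` is an arbitrary stand-in for the unspecified universal constant \(c_{1}\)): with \(m=\lceil\log s^{3}\rceil\) we have \(2^{m}\geq s^{3}\), and a truth table of circuit complexity at least \(2^{m}/m\) clears the hardness threshold \(c_{1}\cdot m\cdot s\) of Lemma 5.14 for all sufficiently large \(s\).

```python
import math

def korten_params_ok(s: int, c1: int = 10) -> bool:
    """Check the parameter constraints used in the proof of Theorem 5.15.

    m = ceil(log2(s^3)) guarantees 2^m >= s^3 (Lemma 5.14, item 1), and a
    truth table of circuit complexity >= 2^m / m must clear the hardness
    threshold c1 * m * s (Lemma 5.14, item 2).  c1 = 10 is an arbitrary
    stand-in for the universal constant.
    """
    m = math.ceil(math.log2(s ** 3))
    hardness = (2 ** m) / m           # lower bound on SIZE(f_m) provided by W
    return 2 ** m >= s ** 3 and hardness >= c1 * m * s

# Both constraints hold for every sufficiently large circuit size s:
for s in (2 ** 8, 2 ** 10, 2 ** 12):
    print(s, korten_params_ok(s))
```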
Therefore, the above theorem also implies an infinitely often \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm for range avoidance. **Reminder of Theorem 1.5**.: _There is a single-valued \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm \(A\) with one bit of advice such that for infinitely many \(s\in\mathbb{N}\), for all \(s\)-size circuits \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) where \(n\leq s\), \(A(C)\) outputs \(y_{C}\in\{0,1\}^{n+1}\setminus\operatorname{Range}(C)\). That is, for all those \(s\), there is a string \(y_{C}\in\{0,1\}^{n+1}\setminus\operatorname{Range}(C)\) such that \(A(C)\) either outputs \(y_{C}\) or \(\bot\), and the probability (over the inner randomness of \(A\)) that \(A(C)\) outputs \(y_{C}\) is at least \(2/3\)._ ## Acknowledgments Part of the work was done when all authors were participating in the Meta-Complexity program at the Simons Institute. Lijie Chen is supported by a Miller Research Fellowship. Shuichi Hirahara is supported by JST, PRESTO Grant Number JPMJPR2024, Japan. Hanlin Ren received support from DIMACS through grant number CCF-1836666 from the National Science Foundation. We thank Oliver Korten, Zhenjian Lu, Igor C. Oliveira, Rahul Santhanam, Roei Tell, and Ryan Williams for helpful discussions. We also want to thank Jiatu Li, Igor C. Oliveira, and Roei Tell for comments on an early draft of the paper.
We show that there is a language in S₂E/₁ (symmetric exponential time with one bit of advice) whose circuit complexity is at least 2^n/n. In particular, this also yields near-maximum circuit lower bounds for the classes Σ₂E, (Σ₂E∩Π₂E)/₁, and ZPE^(NP)/₁. Previously, only "half-exponential" circuit lower bounds were known for these complexity classes, and the smallest complexity class known to require exponential circuit complexity was Δ₃E=E^(Σ₂P) (Miltersen, Vinodchandran, and Watanabe, COCOON'99). Our circuit lower bound is a corollary of an unconditional zero-error pseudodeterministic algorithm with an NP oracle and one bit of advice (FZPP^(NP)/₁) that solves the range avoidance problem infinitely often.
2309.04816
Non-LTE Monte Carlo Radiative Transfer. III. The thermal properties of Tilted and Warped Be Star Discs
We use the three-dimensional Monte Carlo radiative transfer code HDUST to model Be stars where the disc is tilted from the equatorial plane of the star. We compute 128 models across 4 spectral types, B0, B2, B5 and B8, tilting the disc by $0^o$, $10^o$, $20^o$, and $40^o$, while varying disc density according to spectral type. We also compute every model for an average and high stellar rotation rate. We first discuss non-tilted disc temperatures and show its non-linear dependence on stellar and disc parameters. We find that tilting the disc minimally affects the density-weighted average disc temperature, but tilting does create a temperature asymmetry in disc cross sections, which is more pronounced for a faster rotation rate. We also investigate the effect tilting has on $V$-band magnitude, polarization, and the H$\alpha$ line. Tilting the disc does affect these observables, but the changes are entirely dependent on the position of the observer relative to the direction of tilt. We find the observables that distinguish tilting from a change in density or geometry are the H$\alpha$ line shape, where it can transition between single-peaked and double-peaked, and the polarization position angle, whose value is dependent on the projected major elongation axis of the disc on the sky. We also present one early and one late-type model with warped discs. We find their temperature structure varies a small amount from the uniformly tilted models, and the different observables correspond to different tilt angles, consistent with their expected volume of origin within the disc.
M. W. Suffak, C. E. Jones, A. C. Carciofi, T. H. de Amorim
2023-09-09T14:55:19
http://arxiv.org/abs/2309.04816v1
Non-LTE Monte Carlo Radiative Transfer. III. The thermal properties of Tilted and Warped Be Star Discs

###### Abstract

We use the three-dimensional Monte Carlo radiative transfer code _hdust_ to model Be stars where the disc is tilted from the equatorial plane of the star. We compute 128 models across 4 spectral types, B0, B2, B5 and B8, tilting the disc by \(0^{\circ}\), \(10^{\circ}\), \(20^{\circ}\), and \(40^{\circ}\), while varying disc density according to spectral type. We also compute every model for an average and high stellar rotation rate. We first discuss non-tilted disc temperatures and show its non-linear dependence on stellar and disc parameters. We find that tilting the disc minimally affects the density-weighted average disc temperature, but tilting does create a temperature asymmetry in disc cross sections, which is more pronounced for a faster rotation rate. We also investigate the effect tilting has on \(V\)-band magnitude, polarization, and the H\(\alpha\) line. Tilting the disc does affect these observables, but the changes are entirely dependent on the position of the observer relative to the direction of tilt. We find the observables that distinguish tilting from a change in density or geometry are the H\(\alpha\) line shape, where it can transition between single-peaked and double-peaked, and the polarization position angle, whose value is dependent on the projected major elongation axis of the disc on the sky. We also present one early and one late-type model with warped discs. We find their temperature structure varies a small amount from the uniformly tilted models, and the different observables correspond to different tilt angles, consistent with their expected volume of origin within the disc.

keywords: binaries: general - circumstellar matter - radiative transfer - stars: emission-line, Be

## 1 Introduction

Classical Be stars are defined as non-supergiant B-type stars that have, or have had, Balmer lines in emission (Collins, 1987). These emission lines are known to form in a gaseous circumstellar disc that has developed around the equator of the star. The exact process which leads to the formation of these discs is uncertain, but coupling rapid rotation with non-radial pulsations (Baade et al., 2016) is thought to be the most likely mechanism for stellar mass-loss. In addition to Balmer line emission, Be star discs are also characterised by excess continuum emission, particularly at infrared (IR) and radio wavelengths, and by linear polarization (for recent examples, see Ghoreyshi et al., 2021; Marr et al., 2022). The most recent comprehensive review of classical Be stars is given by Rivinius et al. (2013). Observables seen from Be star discs are not only highly dependent on the density structure of the disc, but also on the disc temperature, as the temperature ultimately is what sets the state of the gas through its level populations and ionization state (Carciofi & Bjorkman, 2008). Until the late twentieth century, the temperature of Be star discs was assumed to be constant, or to simply fall off with a radial power law (Waters, 1986). The first attempt to self-consistently determine the disc temperature was made by Millar & Marlborough (1998), who determined the temperature by equating the rates of energy gain and loss at each point in the disc.
They applied this technique to various case studies of both early and late-type Be stars in subsequent publications (Millar & Marlborough, 1999, 2000; Millar et al., 2000) and found temperature differences of thousands of Kelvin between the midplane and upper edge of the disc. Jones et al. (2004) added to this method by accounting for metals with the inclusion of iron. They found that for the early-type star \(\gamma\) Cas, the inclusion of metals led to an overall cooling of the disc, and a slight heating at the innermost disc, within 3 stellar radii. However, for the late-type star 1 Del, the greatest heating occurred on the outer edges of the disc, which was illuminated by light from the poles, and the greatest cooling happened in the middle portion of the disc, not near the dense equatorial plane. Carciofi & Bjorkman (2006) investigated the temperature structure of early-type Be star discs with their 3-dimensional (3D) non-local thermodynamic equilibrium (non-LTE) Monte Carlo code _hdust_. In their models, they found the temperature at the dense midplane of the disc initially drops within 3-5 stellar radii before rising back to the optically thin equilibrium temperature, while the thin upper layers of the disc were approximately isothermal, consistent with Millar & Marlborough (1998). They also found the disc to be almost completely ionized, except for a small portion in the midplane near the minimum temperature. Carciofi & Bjorkman (2008) further investigated this non-isothermal structure by presenting a self-consistent solution for the viscous decretion disc (VDD) scenario. They determined that the varying temperature affects the density structure in two ways: 1) the radial temperature gradient changes the radial fall-off of the density, and 2) the reduction in temperature within the midplane results in the collapse of the disc onto itself, thereby causing a decrease in its scale height. They conclude that a non-isothermal disc density model must be used for detailed modelling of Be star disc observables. However, many successful modelling efforts of Be stars using _hdust_ have utilized the simpler isothermal density formula while solving for non-isothermal disc temperatures (Silaj et al., 2016; Ghoreyshi et al., 2018; Suffak et al., 2020; Marr et al., 2021).

Over the past decade, the possibility of Be star discs warping, tilting, and precessing has gained a lot of attention (see Martin et al., 2011; Brown et al., 2019; Suffak et al., 2022, for example). There have been a number of studies using hydrodynamical simulations to predict the nature of warping, tilting, and oscillations of Be star discs in situations where a binary companion's orbit is misaligned to the initial plane of the disc (Martin et al., 2014; Cyr et al., 2017; Suffak et al., 2022), many of which focus on Be/X-ray binary system parameters (Martin et al., 2014; Brown et al., 2019, for example). The simulations of Suffak et al. (2022) showed that, under the influence of a misaligned binary companion, a Be star disc can undergo episodes of disc tearing, as well as develop eccentric gaps near the primary star during disc dissipation, in addition to tilting, warping, and precessing. The phenomena of disc precession and disc tearing are the best current explanation for the behaviour of the observables in the Be star Pleione (Marr et al., 2022; Martin & Lepp, 2022).
So far, none of the studies that investigated dynamically simulated disc tilting, warping, precession, etc., have investigated the effects this would have on the disc temperature structure, or on its observables, in a systematic way. In addition, late-type Be stars have been dramatically understudied compared to their early-type counterparts. In this paper, we first provide results of static 3D radiative transfer models, showing the temperature structure of non-tilted Be star discs, ranging in spectral type from B0 to B8 (Section 2). We then show the same discs, uniformly tilted from the equatorial plane, and discuss their temperature structure (Section 3), before we present two scenarios where the continuum, Balmer line, and polarization signatures could allow a tilted disc to be detected (Section 4). We also briefly discuss how a warped disc may differ from a flat-tilted disc in Section 5. Our discussion and conclusions are presented in Section 6.

## 2 Non-tilted disc temperatures

We chose a computational grid of four spectral types from B0 to B8, to capture both early and late-type Be star behaviour. The stellar parameters for each spectral type were taken from Cox (2000) and Silaj et al. (2010), who interpolated their parameters from Cox (2000). We model our disc density based on the widely-used equation

\[\rho(r,z)=\rho_{0}\left(\frac{R_{*}}{r}\right)^{n}\exp\left(-\frac{z^{2}}{2H^{2}}\right), \tag{1}\]

where \(\rho_{0}\) is the base density, \(R_{*}\) is the equatorial radius of the star, \(H\) is the disc scale height, and \(r\) and \(z\) are respectively the radial and vertical coordinates in the disc. Equation 1 is physically motivated by the viscous decretion disc (VDD) model of Lee et al. (1991), which in its simplest form predicts \(n=3.5\) for an isothermal and geometrically thin disc, and has been used in many studies, such as Silaj et al. (2010); Jones et al. (2008); Suffak et al. (2022); Marr et al. (2021). The scale height is calculated by

\[H(r)=\frac{a}{\Omega}\left(\frac{r}{R_{*}}\right)^{1.5}, \tag{2}\]

where \(a\) is the sound speed, calculated assuming a disc temperature of \(60\%\) of the star's effective temperature (Carciofi & Bjorkman, 2006), and \(\Omega\) is the Keplerian orbital frequency at the equator of the star. We selected two base density (\(\rho_{0}\)) values for each spectral type, based on the limits of base density versus stellar effective temperature shown in figure 8a of Vieira et al. (2017). Figure 8b of Vieira et al. (2017) also shows there are no bounds on \(n\) with effective temperature, so we choose to use values of \(n\) of 2 and 3.5 for every spectral type, as these are approximately the lower and upper limits of \(n\) for the majority of stars studied in Vieira et al. (2017). Finally, we compute each model for two different stellar rotation rates, setting the critical fraction, \(W\) (defined in equation 6 of Rivinius et al., 2013, as the ratio of the rotational velocity at the equator to the Keplerian circular orbital velocity at the equator), to 0.7 or 0.95. Figure 9 of Rivinius et al. (2013) shows 0.7 to be about the average \(W\) for Be stars, while 0.95 is on the extreme upper end, nearing the critical rotation rate where the outward centrifugal force at the equator would be equal to the inward pull of gravity. The disc size is held constant for each spectral type at 50 equatorial radii (\(R_{\rm eq}\)).
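As an illustration of Equations 1 and 2 (a minimal sketch in Python with our own variable names; the cgs constants and the mean molecular weight are our assumptions, not values from the paper, and the B2 parameters are taken from Table 1 below; the relation for the equatorial radius anticipates Equation 3, introduced just after this sketch):

```python
import numpy as np

# Physical constants (cgs); mu = 0.6 is a typical ionized-gas mean
# molecular weight -- an assumption on our part, not a value from the paper.
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, R_sun = 1.989e33, 6.957e10
mu = 0.6

# Illustrative B2 parameters from Table 1: M = 9.11 M_sun, R_p = 5.33 R_sun,
# T_eff = 21000 K, W = 0.7, rho_0 = 5e-11 g/cm^3, n = 3.5.
M, R_p, T_eff, W, rho0, n = 9.11 * M_sun, 5.33 * R_sun, 21000.0, 0.7, 5e-11, 3.5
R_eq = R_p * (1.0 + W**2 / 2.0)       # inverted form of Equation 3 below

a = np.sqrt(k_B * (0.6 * T_eff) / (mu * m_H))   # sound speed at 0.6 T_eff
Omega = np.sqrt(G * M / R_eq**3)                # Keplerian frequency at R_eq

def H(r):
    """Scale height, Equation 2 (r in cm)."""
    return (a / Omega) * (r / R_eq) ** 1.5

def rho(r, z):
    """Disc density, Equation 1 (r, z in cm)."""
    return rho0 * (R_eq / r) ** n * np.exp(-z**2 / (2.0 * H(r) ** 2))

r = 10.0 * R_eq
print(f"H(10 R_eq) = {H(r) / R_eq:.2f} R_eq, "
      f"rho(10 R_eq, 0) = {rho(r, 0.0):.2e} g/cm^3")
```

For these assumed values the sketch gives a sound speed of roughly 13 km/s and a scale height of order one equatorial radius at \(r=10\,R_{\rm eq}\), illustrating the thin, flared geometry the density model encodes.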
The equatorial radius was scaled to be consistent with the chosen value of \(W\), satisfying the formula (Rimulo et al., 2018)

\[W=\sqrt{2\left(\frac{R_{\rm eq}}{R_{\rm p}}-1\right)}, \tag{3}\]

where \(R_{\rm p}\) is the stellar polar radius. Table 1 presents stellar and disc parameters in our models.

### Azimuthally-Averaged Temperature Slice

Across all of our models, regardless of spectral type, rotation rate, or disc density parameters, we find the following common traits: (1) the tenuous upper layers of the disc (i.e., far from the midplane) are fully ionized and approximately isothermal; (2) the very dense disc midplane contains the diversity between the models: it can be cooler or hotter than the upper layers, depending on model parameters, and is only partially ionized; (3) between these two regions, at the boundary between the fully ionized outer disc and the partly ionized inner disc, lies a transition layer where relatively thin, hot sheaths arise. However, when examining the temperature structure in more detail, the behaviour seen from one model to another is very non-linear and is coupled to the disc density structure, spectral type, and stellar rotation rate. We describe these features in detail below.

#### 2.1.1 Early Spectral Types

Figure 1 shows the azimuthally-averaged cross sections of the non-tilted disc models for the B0 and B2 models of our grid. We see that the models where \(n=2\), whose discs have a slow density fall-off and thus are much more dense than \(n=3.5\), have a very large cool, partially ionized region surrounding the midplane of the disc, while the outer regions are much hotter and fully ionized. The inner cool regions are due to the disc being optically thick, while in the outer regions, the density drops and the temperature can reach much higher values. In the models where \(n=3.5\), we see that the midplane has a much smaller cool region and then transitions to hotter temperatures with increasing radius as the disc becomes optically thin. The upper hot layers are also larger than in the \(n=2\) case due to the densities falling off faster and more of the disc being optically thin. These inner and outer regions are separated by a hot thin sheath, which has also been seen in other publications (Carciofi & Bjorkman, 2006; Sigut & Jones, 2007). In Figure 2, we have plotted the disc temperature and ionization fraction (fraction of hydrogen in the disc that is ionized) for a column at a radial position of 30 \(R_{*}\) for model 11, which prominently displays these hot sheaths. We can see that the spike in temperature (i.e., the hot sheath) occurs right at the boundary between the cooler inner portion where the disc is partially ionized, and the hotter outer layers where the disc is fully ionized. This can be explained by the inner cool region being optically thick and locally trapping the UV radiation: since the vertical direction offers the largest escape probability due to its lower opacity, the UV radiation travels vertically and further heats the gas directly above and below the inner cool region. When we compute the bound-free and bound-bound optical depths, as well as the hydrogen level populations, they show inverse profiles to the ionization fraction in Figure 2, being highest around the midplane of the disc and trending towards zero as height in the disc increases. These same trends occur for all models that have these hot sheaths in their cross sections, including both early- and late-spectral types.
The position of these hot sheaths also noticeably changes between models, as they move closer together as the \(n\) value rises, or as the disc base density decreases. Both of these changes to the density structure make the upper disc layers more tenuous, which allows the disc to be fully ionized to a greater vertical depth, making the inner partially ionized disc portion thinner, and thus the transition regions closer to the midplane. This can easily be seen by comparing the cross sections of model 1 to those of models 2 and 3, respectively. There are also cases where the disc is so tenuous that it is nearly entirely ionized, even in the midplane, seen for example in model 4. Here there is no cooler section in the midplane. Instead the midplane is hotter than the upper disc layers, due to the denser midplane being able to reprocess UV radiation and increase the role of diffuse radiation in that area. In models 5 to 8 and 13 to 16, we see that increasing the stellar rotation rate to \(W=0.95\) does not change the qualitative temperature patterns in the disc. However, the temperatures in the upper disc are notably higher than in the slower-rotating case, and the hot sheaths and disc midplane can be slightly warmer than with slower rotation. This indicates that the hotter stellar poles, caused by increased gravity darkening at this high rotation, are able to "carve" deeper into the disc, penetrating the disc midplane with more UV radiation and raising its temperature.

#### 2.1.2 Late Spectral Types

The cross sections for our B5 and B8 non-tilted models are shown in Figure 3. In these later spectral types, we see some different behaviour in the temperature structure than in the early B0 and B2 type stars. The highest density, lowest \(n\) model for the B5 spectral type is similar to the analogues for B0 and B2 stars. However, at lower densities, for \(n=2\) in both B5 and B8 type stars, we see the midplane is hotter than the outer disc, which is the opposite of the early-type stars. This is the same as Millar & Marlborough (1999b) found in their work for the late-type star 1 Del. They explain that this temperature inversion is due to collisions populating the upper levels, and thus photoionization from these upper levels is able to heat the gas, while the disc remains optically thick to Lyman continuum photons. However, as these hot midplane sections are radially extended in our discs when the density has already fallen off exponentially, collisions are not going to be a major factor, and thus this hotter midplane would be due to the disc's ability to reprocess the UV radiation (as mentioned for model 4 in Section 2.1.1) and locally heat the denser midplane through diffuse radiation. Conversely, we see in models where \(n=3.5\) that the midplane temperature drops off in the outermost disc. Due to the much faster drop-off of density compared to the \(n=2\) model, there is not enough diffuse radiation contributed from the disc itself to make up for the lack of UV radiation reaching the outer midplane of the disc from the late-type star. With a higher rotation rate, we see the B5 models largely retain the same structure as their slower-rotating counterparts; however, the rapidly rotating B8 models, particularly models 30, 31, and 32, display dramatically hotter disc midplanes, as well as hot sheaths which did not appear in the slower-rotating case. We interpret this again as the hotter poles being able to "carve" farther into the disc and cause greater heating in the midplane.
Here the high stellar rotation gives the star qualities of both an early-type star (the hot poles) and a late-type star (the very cool equator), causing this large change in the temperature cross section.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Sp. Type & M (M\({}_{\odot}\)) & R\({}_{p}\) (R\({}_{\odot}\)) & W & T\({}_{\rm eff}\) (K) & \(L\) (L\({}_{\odot}\)) & \(\rho_{0}\) (g cm\({}^{-3}\)) & n & Model \# \\ \hline B0 & 17.5 & 7.4 & 0.7 & 30000 & 39740 & \(1\times 10^{-10}\) & 2/3.5 & 1/2 \\ & & & 0.7 & & & \(1\times 10^{-11}\) & 2/3.5 & 3/4 \\ & & & 0.95 & & & \(1\times 10^{-10}\) & 2/3.5 & 5/6 \\ & & & 0.95 & & & \(1\times 10^{-11}\) & 2/3.5 & 7/8 \\ B2 & 9.11 & 5.33 & 0.7 & 21000 & 4950 & \(5\times 10^{-11}\) & 2/3.5 & 9/10 \\ & & & 0.7 & & & \(5\times 10^{-12}\) & 2/3.5 & 11/12 \\ & & & 0.95 & & & \(5\times 10^{-11}\) & 2/3.5 & 13/14 \\ & & & 0.95 & & & \(5\times 10^{-12}\) & 2/3.5 & 15/16 \\ B5 & 5.9 & 3.9 & 0.7 & 15000 & 690 & \(5\times 10^{-12}\) & 2/3.5 & 17/18 \\ & & & 0.7 & & & \(5\times 10^{-13}\) & 2/3.5 & 19/20 \\ & & & 0.95 & & & \(5\times 10^{-12}\) & 2/3.5 & 21/22 \\ & & & 0.95 & & & \(5\times 10^{-13}\) & 2/3.5 & 23/24 \\ B8 & 3.8 & 3.0 & 0.7 & 12000 & 167 & \(1\times 10^{-12}\) & 2/3.5 & 25/26 \\ & & & 0.7 & & & \(1\times 10^{-13}\) & 2/3.5 & 27/28 \\ & & & 0.95 & & & \(1\times 10^{-12}\) & 2/3.5 & 29/30 \\ & & & 0.95 & & & \(1\times 10^{-13}\) & 2/3.5 & 31/32 \\ \hline \end{tabular} \end{table}

Table 1: Stellar and disc parameters used in our _hdust_ grid of models. Left to right is the spectral type, stellar mass, polar radius, fraction of critical rotation velocity, effective temperature, luminosity, disc base density, disc density slope, and model number.

## 3 Tilted-Disc Temperatures

To expand our work on our non-tilted grid, we tilted all of our models listed in Table 1 by \(\alpha=10^{\circ}\), \(20^{\circ}\), and \(40^{\circ}\) away from the equatorial plane. Figure 4 shows a schematic of the orientation of our tilted disc models in Cartesian coordinates, where the star lies at the origin. The disc is tilted about the \(x\)-axis, so the disc tilt angle is measured from the \(y\)-axis. When referring to azimuthal angles (\(\phi\)), we denote the positive \(x\)-axis as having \(\phi=0^{\circ}\), and the positive \(y\)-axis having \(\phi=90^{\circ}\). Thus \(\phi\) values of \(180^{\circ}\) and \(270^{\circ}\) correspond to the negative \(x\) and \(y\)-axes, respectively. The height of the disc midplane can thus be given as

\[Z(r_{i},\phi_{j})=-r_{i}\sin\left\{\,\arctan\left[\,\sin(\phi_{j})\tan\left(\alpha\right)\,\right]\right\}, \tag{4}\]

where \(r_{i}\) and \(\phi_{j}\) are the radial and azimuthal coordinates on the midplane, and \(\alpha\) is the disc tilt angle about the \(x\)-axis (either \(10^{\circ}\), \(20^{\circ}\), or \(40^{\circ}\)). To assess any global changes in disc temperature due to disc tilting, we calculate the mass-averaged disc temperature, \(\tilde{T}_{M}\), using the formula

\[\tilde{T}_{M}=\frac{1}{M_{\rm disc}}\sum_{i=1}^{N}T_{i}\rho_{i}V_{i}, \tag{5}\]

where \(M_{\rm disc}\) is the total mass of the disc, and \(T_{i}\), \(\rho_{i}\) and \(V_{i}\) are the temperature, density, and volume of the \(i\)-th cell. The sum is performed over all \(N\) cells in the disc. This formula was adapted from the equation for density-weighted average temperature of McGill et al. (2013). The results of these calculations are presented in Figure 5.
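The following minimal sketch (Python; the array and function names are ours) evaluates the tilted-midplane height of Equation 4 and the density-weighted average temperature of Equation 5 for an arbitrary set of grid cells:

```python
import numpy as np

def midplane_height(r, phi, alpha_deg):
    """Equation 4: height of the tilted disc midplane at (r, phi).

    r may be in any length unit, phi is in radians, and alpha_deg is
    the tilt about the x-axis in degrees.
    """
    alpha = np.deg2rad(alpha_deg)
    return -r * np.sin(np.arctan(np.sin(phi) * np.tan(alpha)))

def mass_averaged_T(T, rho, V):
    """Equation 5: density-weighted (mass-averaged) disc temperature.

    T, rho, V are flat arrays of cell temperatures, densities, and volumes.
    """
    M_disc = np.sum(rho * V)
    return np.sum(T * rho * V) / M_disc

# The midplane reaches its full depth -r*sin(alpha) at phi = 90 deg and
# stays in the equatorial plane along the tilt axis (phi = 0 or 180 deg):
r, alpha = 10.0, 40.0
for phi_deg in (0.0, 45.0, 90.0):
    z = midplane_height(r, np.deg2rad(phi_deg), alpha)
    print(f"phi = {phi_deg:5.1f} deg: Z = {z:+.3f} (same units as r)")

# Toy demonstration of Equation 5 on randomly generated cells:
rng = np.random.default_rng(0)
T = rng.uniform(8000.0, 15000.0, 100)
rho = rng.uniform(1e-13, 1e-11, 100)
V = np.ones(100)
print(f"mass-averaged T of the toy grid: {mass_averaged_T(T, rho, V):.0f} K")
```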
Figure 1: Disc cross sections of the azimuthally-averaged temperatures of models 1-16, as noted in the subplot titles. The colour of each cell in the cross section corresponds to temperature as shown in the colour bar. The axes are in stellar equatorial radii, which is different for each spectral type, consistent with the polar radius listed in Table 1.

Figure 2: Ionization fraction (blue, left \(y\)-axis), and temperature (orange, right \(y\)-axis) versus \(z\) height in the non-tilted disc of model 11. The radial distance of the measurements from the central star is about 30 \(R_{*}\).

We find that there is a clear relationship between \(\tilde{T}_{M}\) and disc tilt angle: the greater the tilt angle, the greater \(\tilde{T}_{M}\). While this trend is clear, the changes are not large, with most discs varying in temperature by less than 1000 K. The change is especially small in our densest models, where the change in average temperature caused by the tilt cannot be distinguished from the statistical noise inherent to Monte Carlo simulations. We also note that most of these density-weighted average temperatures are below 60% of \(T_{\rm eff}\), which Carciofi & Bjorkman (2006) found to be a good isothermal approximation of Be star discs. Since the density-weighted average weights the densest, and hence most optically thick, regions the highest, it is not surprising that our least dense models come closest to, or sometimes end up greater than, the 60% \(T_{\rm eff}\) mark. This is consistent with the results of our non-tilted discs seen in Figures 1 and 3, where the lower density discs have a higher temperature. It is worth mentioning that, by definition, our tilted models are not anchored at the equator of the star; thus, the innermost disc may have slightly inflated temperatures due to directly seeing a hotter part of the star than if it were aligned at the equator.

### Detailed Temperature Structure

Although the density-weighted average disc temperature does not change appreciably with tilt angle, we find that a tilted disc can have significant changes in the temperature structure of certain areas of the disc. An example is shown in Figure 6, where we show cross sections of discs with parameters of model 1, tilted by \(10^{\circ}\), \(20^{\circ}\), and \(40^{\circ}\). In the line of nodes cross section of the disc (\(\phi=0^{\circ}\), left column) we see that changes in the temperature structure from the non-tilted model (top left panel of Figure 1) are hard to detect. However, in the cross section farthest from the equator (\(\phi=90^{\circ}\), right column of Figure 6), we see a noticeable temperature difference between the top and bottom of the disc, with the top of the disc becoming cooler and the bottom of the disc becoming hotter. This is due to the effect of gravity darkening arising from rapid rotation, resulting in the stellar equator being cooler than the pole. Thus, the part of the disc that moves closer to the equator when tilted ends up being cooler than the non-tilted solution, and the part that moves closer to the stellar pole ends up hotter than the non-tilted case. We note that the disc midplane does not significantly change temperature when tilted because its high density compared to the rest of the disc makes it insensitive to the change in radiation input caused by the tilting of the disc. In Figure 7, we show the same plots as Figure 6, but for one of our late-type models.
We see that the temperature trends of the \(\phi=90^{\circ}\) cross section of the disc are the same, with the top of the disc cooling and the bottom heating as it is oriented closer to the pole of the star. In the line of nodes of the disc, however, we see a change in the temperature structure: the hot bands that were on either side of the midplane in the non-tilted models greatly lessen in temperature as the overall disc tilt angle increases. Since the central star does not change, this change in temperature structure is due to the diffuse radiation field of the disc affecting the disc temperature significantly.

Figure 3: Same as Figure 1, but for models 17-32. Note the change in the maximum temperature of the colour bar scale.

Plots similar to Figures 6 and 7 for all other models are presented in Appendix A. Overall we see that the average cell temperature of the \(\phi=90^{\circ}\) cross sections of the disc can differ by as much as 30% when compared to the non-tilted models. The large majority of this difference, however, is in the optically thin outer disc. As shown in Figure 8, while the midplane temperature does differ in the \(\phi=90^{\circ}\) cross sections of the disc from the non-tilted model, the difference is quite small. Appendix B contains similar plots of the midplane for all models. As the tilt angle increases, we see the temperature of the innermost disc increase as well. This is due to the midplane being oriented closer to the pole, and therefore directly seeing a hotter area of the star. We also see that the midplane temperature profile of the tilted disc still has the same structure as in past publications (Millar & Marlborough, 1998; Carciofi & Bjorkman, 2006), with the temperature reaching a minimum within the first few stellar radii before increasing to an approximately isothermal temperature in the outer disc. It is important to note that for some of our densest models, the midplane temperature does not increase from its minimum due to the high density of the disc. It is also noteworthy that this structure becomes less pronounced as one moves towards the top and bottom of the disc, away from the midplane.

## 4 Tilted Disc Observables

Due to the 3-dimensional nature of our simulations, we specify our observer position by two spherical coordinates, the polar angle \(\theta\) and the azimuthal angle \(\phi\). \(\theta\) ranges from \(0^{\circ}\) when looking pole-on with the star, to \(90^{\circ}\) when looking at the equatorial plane, while \(\phi\) is defined in the range [\(0^{\circ}\),\(360^{\circ}\)) in the same manner as shown in Figure 4. Observables of Be stars are highly dependent on the orientation of the disc with respect to the observer. It is well known that a non-tilted disc seen edge-on will have dimmer photometric magnitudes, higher polarization levels, and shell emission lines, compared to a disc viewed pole-on, which will have brighter photometric magnitudes, close to zero polarization, and single-peaked emission lines. However, in a tilted disc scenario, looking edge-on with the disc may not imply one is looking equator-on with the star, and a face-on view of the disc would not be pole-on with the star. Thus, the changes in observables that occur due to disc tilting would be expected to be highly dependent on the orientation of the tilt with respect to the observer. In their hydrodynamic simulations, Suffak et al. (2022) showed that a Be star disc can tilt and then precess under the influence of a misaligned binary companion.
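As a geometric aside (our own illustration, not a calculation from the paper), the flat-tilted midplane of Equation 4 is the plane \(z=-y\tan\alpha\), whose unit normal is \((0,\sin\alpha,\cos\alpha)\), so the effective inclination between an observer at \((\theta,\phi)\) and the tilted disc follows from a dot product:

```python
import numpy as np

def effective_inclination(theta_deg, phi_deg, alpha_deg):
    """Angle between the observer direction and the tilted-disc normal.

    The flat-tilted midplane of Equation 4 is the plane z = -y*tan(alpha),
    so its unit normal is n = (0, sin(alpha), cos(alpha)).  An observer at
    polar angle theta and azimuth phi looks along
    o = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)).
    """
    th, ph, al = map(np.deg2rad, (theta_deg, phi_deg, alpha_deg))
    o = np.array([np.sin(th) * np.cos(ph),
                  np.sin(th) * np.sin(ph),
                  np.cos(th)])
    n = np.array([0.0, np.sin(al), np.cos(al)])
    return np.rad2deg(np.arccos(np.clip(np.dot(o, n), -1.0, 1.0)))

# For theta = 80 deg and a 40-deg tilt: phi = 90 deg brings the disc much
# closer to face-on (40 deg), while along phi = 270 deg the line of sight
# has swept through edge-on (90 deg, reached at a smaller tilt) to 120 deg,
# i.e. the disc is now viewed 30 deg from the other side.
for phi in (0, 90, 180, 270):
    print(phi, round(effective_inclination(80, phi, 40), 1))
```

This is consistent with the trends described below: observables barely change along the tilt axis (\(\phi=0^{\circ}\) or \(180^{\circ}\)), while the \(\phi=90^{\circ}\) and \(270^{\circ}\) directions see the strongest face-on/edge-on changes.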
We now examine these two scenarios where a tilted disc may be able to be detected through a change in their observables.

### Case I: Viewing a Disc with Varying Tilt Angle

In this first scenario, a misaligned binary companion torques the disc above and below the equatorial plane of the primary star, meaning the axis of the disc's tilt is aligned with the companion's line of nodes. Here, the orientation between the disc tilt axis and the stationary observer is constant; thus, the observer merely sees the disc tilt in a constant direction over time. Since our observing coordinates are defined with respect to the central star and tilt axis of the disc, we can simulate a disc tilting over time by plotting any observable viewed from the same (\(\theta\), \(\phi\)) observer position over four simulations where a model disc is tilted by \(0^{\circ}\), \(10^{\circ}\), \(20^{\circ}\), and \(40^{\circ}\). An example of this is shown in Figure 9, which shows, for a constant \(\theta\) of \(80^{\circ}\) and \(\phi\) of \(90^{\circ}\), \(180^{\circ}\), and \(315^{\circ}\), \(V\)-band magnitude, H\(\alpha\) equivalent width (EW), H\(\alpha\) violet-to-red (V/R) ratio, polarization percentage and position angle (PA) in the \(V\)-band, versus disc tilt angle for systems with parameters of model 1. Here we see that the change in an observable as the disc tilts depends strongly on the observer's position. For example, at \(\phi=0^{\circ}\) or \(180^{\circ}\), the observer will be aligned with the tilt axis of the disc, thus the \(V\) magnitude, EW, and percent polarization do not vary as much as at other \(\phi\) angles; however, the polarization position angle will vary greatly with increasing tilt angle. This is in contrast to \(\phi=90^{\circ}\), where the disc is tilted to be more face-on with the observer. As the tilt angle increases, we see \(V\) magnitude increase, while EW and polarization decrease and the position angle stays constant. Not plotted is \(\phi=270^{\circ}\), where the disc tilts to be more edge-on with the observer, and the trends would be generally reversed from the \(\phi=90^{\circ}\) case. Finally, for an intermediate azimuthal angle of \(315^{\circ}\), all of the observables vary greatly, with the only clear trend being the polarization position angle. Note that these results are degenerate with \(\phi=225^{\circ}\), while the degenerate pair of \(45^{\circ}\) and \(135^{\circ}\) will give similar results. The polarization position angle is thus the one observable which shows constant change as the disc tilts (except when the observer is exactly perpendicular to the tilt axis). We also see, for this simulation, large V/R ratios of about 10% at certain azimuthal observing angles when the disc is tilted by \(40^{\circ}\). This is an extreme example, however, since most of our models do not reach a V/R ratio of more than 2 to 5%.

Figure 4: Two schematics showing the orientation of the tilted disc with respect to the \(x\), \(y\), and \(z\) axes. The azimuthal angle (\(\phi\)) is defined with \(\phi=0^{\circ}\) along the positive \(x\)-axis as shown. The disc is tilted about the \(x\)-axis, and thus the tilt angle \(\alpha\) is defined from the \(y\)-axis. The central star is represented by the blue circle at the origin of both diagrams. The disc and star sizes are not to scale.

Figure 10 shows the same plot as Figure 9, but for a late-type B8 star, model 29.
We see another example of the variability of the change of observables depending on observer position, and that the magnitudes of these changes for late-type stars are much smaller than those of the early-type example shown previously, particularly the \(V\) magnitude and polarization, which is expected due to the lower disc density. To briefly investigate disc obscuration effects, we took two models, 10 and 25, at a tilt angle of \(40^{\circ}\) and computed them with a disc radius of 20 \(R_{*}\) instead of 50. These models presented no difference in observables aside from a slightly lower H\(\alpha\) EW. This is expected, as the H\(\alpha\) emission comes from a large portion of the disc, while visible continuum emission and polarization come from the inner few stellar radii. This shows that the material in the outer disc is largely optically thin, and thus the size of a tilted disc would not affect observables other than H\(\alpha\) until the disc radius is reduced to a few stellar radii.

#### 4.1.1 H\(\alpha\) Line Shapes

The shape of the H\(\alpha\) line in Be stars is largely due to Doppler shift caused by the relative velocity between the disc material and the observer. In a non-tilted disc, the H\(\alpha\) line in Be stars is seen as single-peaked when viewed pole-on with the star and face-on with the disc (\(\theta=0^{\circ}\)), and is a double-peaked line when observed at other inclination angles due to disc material moving both toward and away from the observer. As the H\(\alpha\) emission line is a defining characteristic of Be stars, we also looked at the shapes of the emission lines in our simulations to see if the changing shape might show indications of disc tilting. Overall we find there are three different patterns in which the tilt of the disc can affect the H\(\alpha\) line.

Figure 5: Density weighted average disc temperatures for our early-type (a) and late-type (b) models. The model number, listed in Table 1, is on the x-axis. The tilt angle of each disc model is indicated by the legend. The grey lines indicate 60% of the star's \(T_{\rm eff}\) for each model.

Figure 6: Temperature structure of the line of nodes cross section (left column) and the cross section farthest from the equator (right column) of the tilted discs with parameters of model 1 (see Table 1). The top row is for a \(10^{\circ}\) tilt, middle row for \(20^{\circ}\) tilt, and bottom row for a \(40^{\circ}\) tilt, as indicated by the \(\alpha\) value in the leftmost plot of each row. The colour corresponds to the temperature of a given cell, as indicated by the colour bar on the right.

Figure 7: Same as Figure 6, but for the parameters of model 30. Note the change in the maximum temperature of the colour bar scale.

The first is shown in Figure 11, where \(\phi=180^{\circ}\) and the observer is aligned with the tilt axis of the disc, so the disc is seen as rotating either clockwise or counterclockwise from the observer's perspective. In this case, we see the line strength stay approximately the same, but the V/R ratio increases with increasing tilt angle. This effect is particularly noticeable in the early-type stars at high (\(\theta\geq 60^{\circ}\)) inclinations, where the projected area of the disc does not change very much with increasing tilt angle. The lines plotted here show some of the most extreme V/R ratios that we have obtained from our models. The second pattern is shown in Figure 12, where \(\phi=90^{\circ}\) and the disc tilts to be more face-on or edge-on with the observer.
Here we see that the line starts out as double-peaked for \(0^{\circ}\) tilt, and transitions to a single-peaked line when the tilt is \(40^{\circ}\). This behaviour occurs for all spectral types and densities. The reverse of this process is also seen in our models, with the lines shifting from single-peaked to double-peaked with increasing tilt angle for pole-on observing angles. It is worth noting that the equivalent width of the line is approximately constant in these scenarios as well.

Finally, when \(\phi=315^{\circ}\), the motion of the disc relative to the observer is a combination of a rotation clockwise or counterclockwise in the plane of the sky, and a tilt to be more edge-on or face-on. Figure 13 shows this case, where the line shape and peak separation stay the same, but the line decreases in strength as tilt angle increases for the B0 and B2 type stars, and slightly increases in strength for the late-type stars. The reverse is also seen for some observing angles. These changes in the normalized lines are largely due to a change in continuum level rather than a change in the emission itself.

Figure 8: Plots of temperature vs. radius of model 4 at the midplane of the disc in two different azimuthal directions; \(\phi=0^{\circ}\) (a), and \(\phi=90^{\circ}\) (b). The four different lines are for different disc tilt angles as indicated by the legend.

Figure 9: Plots showing (top to bottom) \(V\)-band magnitude, H\(\alpha\) equivalent width, H\(\alpha\) V/R ratio, polarization percentage, and polarization position angle (PA) in the \(V\)-band, versus disc tilt angle, for systems with parameters of model 1 (Table 1). All points are for a \(\theta\) observing angle of \(80^{\circ}\), and the different coloured lines indicate different \(\phi\) observing angles as indicated by the legend. Some \(\phi\) directions may be degenerate and thus not every line will show on every plot.

Figure 10: Same format as Figure 9, but for parameters of model 29.

### Case II: Watching a Tilted Disc Precess

The second scenario where a tilted disc may be detected is where an already tilted disc precesses under the influence of a misaligned binary companion. This occurs in many simulations of Suffak et al. (2022): particularly after mass-loss from the disc is turned off, the disc is no longer anchored to the equator of the star, and the line of nodes of the disc is free to rotate about the primary star. By holding the polar viewing angle (\(\theta\)) constant, and moving around the star and disc in \(\phi\), we can see what observational signature a disc may present if it were precessing about the pole of the star. This is shown in Figure 14, where we have plotted the same quantities as Figure 9 versus \(\phi\), for model 1 with a disc tilt of 40\({}^{\circ}\). From this Figure we can see that, as the observational viewpoint moves around the star/disc system at a constant \(\theta\), the observables oscillate quite significantly as the disc moves from being edge-on with the observer at 0\({}^{\circ}\) and 180\({}^{\circ}\), to being more face-on at 90\({}^{\circ}\) and 270\({}^{\circ}\). Moving the observer like this is exactly the same as if the disc were rigidly precessing about the pole of the star and the observer remained stationary, but this is accomplished here without the need for computationally expensive hydrodynamical simulations. Unlike the case of tilting a disc, precession shows signatures in all observables.
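Stated as a simple relation (our paraphrase of the equivalence above, not an equation from the original): for a disc rigidly precessing with period \(P_{\rm prec}\), the observables \(O\) recorded by a fixed observer are equivalent to those of a static disc sampled by an observer moving in azimuth,

\[O(t)=O\left(\theta,\ \phi_{0}+360^{\circ}\,\frac{t}{P_{\rm prec}}\right),\]

which is why sweeping \(\phi\) at constant \(\theta\) reproduces the precession signal without new hydrodynamical simulations.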
The percent polarization and \(V\) magnitude will oscillate at half the precession period as more or less of the inner disc becomes visible as the disc precesses about the star. The H\(\alpha\) EW also oscillates at half the precession period and can increase with \(V\) magnitude as more of the disc is visible, or decrease as \(V\) magnitude increases due to the changing continuum level. The position angle will oscillate about zero at the same period as the precession. These period/half-period trends can be seen easily with the help of the dashed line in Figure 14, which is shifted by 180\({}^{\circ}\). We note here that the half-period trends of \(V\) magnitude, H\(\alpha\) EW, and percent polarization are not perfectly symmetric because the \(\theta\) of 80\({}^{\circ}\) means the observer will see slightly more or less of the central star from opposite sides of the disc. If \(\theta\) were 90\({}^{\circ}\), the half-period trends would ideally have perfect symmetry. The V/R trend is more complex, as the trend from \(\phi=90^{\circ}\) to 270\({}^{\circ}\) is the reverse of the trend from 270\({}^{\circ}\) to 90\({}^{\circ}\). This is due to the relative velocities "reversing" as the observer moves to the opposite side of the disc; thus the trend in this case is antisymmetric, with 90\({}^{\circ}\) and 270\({}^{\circ}\) being the nodes of the oscillation.

Figure 11: Simulated H\(\alpha\) lines of models 1 (top left), 9 (top right), 17 (bottom left) and 25 (bottom right), for four different disc tilt angles as indicated by the legend. The model spectra are seen from an observer at position \(\phi=180^{\circ}\), \(\theta=80^{\circ}\).

Figure 12: Simulated H\(\alpha\) lines of models 4 (top left), 12 (top right), 20 (bottom left) and 28 (bottom right), for four different disc tilt angles as indicated by the legend. The model spectra are seen from an observer at position \(\phi=90^{\circ}\), \(\theta=40^{\circ}\).

Figure 13: Simulated H\(\alpha\) lines of models 6 (top left), 14 (top right), 22 (bottom left) and 30 (bottom right), for four different disc tilt angles as indicated by the legend. The model spectra are seen from an observer at position \(\phi=315^{\circ}\), \(\theta=90^{\circ}\).

## 5 Warped vs. Tilted Discs

We recognize our flat tilted models are limited, particularly at the star-disc boundary, where we have the inner disc tilting the same amount as the outer disc. In reality, it is more likely that the Be star disc would be anchored to the equator of the star, and the rest of the disc would be warped away from the equator by some degree. To test the difference this may cause, we chose to apply a warp to models 10 and 25, instead of a flat tilt. To warp the computational grid, we fix the first radial bin at the equator, and then linearly increment the degree of tilt with each subsequent radial bin, up to a maximum tilt of 40\({}^{\circ}\) at the furthest radius. The height of the disc midplane is then given by

\[Z(r_{i},\phi_{j})=-r_{i}\,\sin\left\{\arctan\left[\,\sin(\phi_{j})\tan\left(\alpha\frac{i}{49}\right)\right]\right\}, \tag{6}\]

where the only difference from Equation 4 is that now we have altered the disc tilt angle \(\alpha\), such that it increases with radius (\(i\) denotes the radial cell index, which runs from 0 to 49 in our models), so the disc becomes warped instead of having a flat tilt.
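To make the geometry concrete, the minimal Python sketch below (ours, not part of hdust) evaluates Equation 6 on an illustrative radial/azimuthal grid; the 50-bin radial grid matches the 0-49 cell indexing quoted above, while the function and variable names are our own.

```python
import numpy as np

def warped_midplane_height(r, phi, alpha_max_deg=40.0, n_rad=50):
    """Height of the warped disc midplane (Equation 6).

    The tilt grows linearly with the radial bin index i, from 0 at the
    stellar equator (i = 0) to alpha_max at the outermost bin (i = n_rad - 1).
    `r` is the radial grid (one value per bin, in R_*); `phi` is in radians.
    """
    i = np.arange(n_rad)                                   # radial cell index, 0..49
    alpha_i = np.radians(alpha_max_deg) * i / (n_rad - 1)  # per-bin tilt angle
    r_i, phi_j = np.meshgrid(r, phi, indexing="ij")        # (n_rad, n_phi) grids
    alpha_grid = alpha_i[:, None]                          # broadcast along radius
    return -r_i * np.sin(np.arctan(np.sin(phi_j) * np.tan(alpha_grid)))

# Example: a 50-bin radial grid out to 50 stellar radii, 8 azimuths.
r = np.linspace(0.0, 50.0, 50)                       # in units of R_*
phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
Z = warped_midplane_height(r, phi)
print(Z.shape)  # (50, 8): midplane height of each (r_i, phi_j) cell in R_*
```

Along the line of nodes (\(\phi_{j}=0\) or \(\pi\)) the height is zero at every radius, while the warp reaches its full \(\pm 40^{\circ}\) displacement a quarter turn away, exactly as described in the text.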
These models also have a disc radius of 50 \(R_{*}\), and thus the most highly inclined outer parts of the disc do not contribute much to the observables in question here, as they are known to originate in the inner disc, which is only moderately tilted.

Figure 15 shows temperature cross sections of the warped disc compared to its flat tilted counterpart. We see that in the inner warped disc, the upper and lower edges of the disc are slightly cooler than in the flat tilted model. This is due to the inner disc still being anchored at the equator in the warped case, and thus the inner disc sees less radiation than when it is tilted at 40\({}^{\circ}\) with the rest of the disc. This effect is seen even in the non-warped slice of the disc at \(\phi=0^{\circ}\), due to less diffuse radiation within the disc being able to heat this cross section. In the outer warped disc, the upper and lower edges are respectively cooler and warmer than in the flat-tilted case, due to the upper edge being shielded by the inner disc at the equator and the lower edge being freely exposed to radiation from higher stellar latitudes. Figure 15 also shows how, not surprisingly, the temperature structure warps with the warped density of the disc, and that the interior structure is the same as in the tilted model aside from this warp.

With respect to observables, comparison of the warped disc to the different tilted simulations highlights how different areas of the disc are responsible for different observables. To compare these observables between the tilted models and warped models, we held the \(\theta\) observing angle constant, and calculated a chi-squared value over all \(\phi\) observing angles between each tilted model and the warped model. For the early-type warped model, we find that the \(V\) magnitude, percent polarization, and polarization PA of the warped model are best matched by the non-tilted and 10\({}^{\circ}\) tilted models, and occasionally the 20\({}^{\circ}\) tilted model. This is expected, as these observables originate in the inner disc, which is the least tilted part of the warped disc. On the other hand, the H\(\alpha\) EW is best matched by the 40\({}^{\circ}\) tilt for near pole-on \(\theta\) angles, and by the 0\({}^{\circ}\) and 10\({}^{\circ}\) tilt angles for \(\theta\) values greater than 50\({}^{\circ}\). This is due to a large increase in the continuum emission at certain \(\phi\) angles for the 40\({}^{\circ}\) flat tilted model, which causes the H\(\alpha\) EW to drop significantly. The V/R ratio of the H\(\alpha\) line also follows the same trends as the EW. The same trends are seen for the late-type (model 25) warped disc, except that the H\(\alpha\) line best matches the flat 40\({}^{\circ}\) tilted disc for all viewing inclinations due to the optically thin disc.

For comparison purposes, in Figure 16 we show the observables of a precessing warped disc, similar to what was shown in Figure 14 for the flat tilted model. The stellar and disc parameters of the two figures are different, so they cannot be directly compared; however, we do see that a warped disc, if it were precessing, produces the same period and half-period trends as the flat tilted model, with some asymmetry in the photometry, polarization, and H\(\alpha\) line.

## 6 Discussion and Conclusions

In this paper, we have shown how tilting a Be star disc out of the equatorial plane of the primary star can affect the disc's temperature structure, as well as its observables.
We modelled B0, B2, B5, and B8 type stars, with different densities, two different rotation rates, and disc tilt angles of 0\({}^{\circ}\), 10\({}^{\circ}\), 20\({}^{\circ}\) and 40\({}^{\circ}\). We find that the temperature structure between non-tilted early- and late-type stars can differ greatly, and the behaviour from model to model is highly non-linear. The exact temperature structure is dependent on the disc density configuration, the spectral type, and the stellar rotation rate, which means that, depending on the model parameters, we see particular trends in the temperature behaviour.

In our non-tilted models we see that all discs have an inner cool region, the extent of which depends dramatically on the density exponent \(n\). This can be of significance, as low excitation lines, particularly Fe ii, are known to originate in these cool inner disc volumes (Carciofi & Bjorkman, 2006). Fe ii emission lines have been well documented in Be stars (Hanuschik et al., 1996), and their line-cooling effects have also been explored (Jones et al., 2004). Since Fe ii emission lines originate in these cool areas, their shape could be used as a tracer of the radial extent of these regions, assuming the width of the line is largely due to Doppler broadening. In this sense, if the Fe ii line had large peak separation and a sharp drop-off in the wings, the central cool region would be relatively small; however, a large cool region would mean a large formation locus for Fe ii, and its line shape could be similar to Balmer emission lines, albeit with lower peak intensity. Thus, Fe ii lines may be a valuable constraint on the value of \(n\) in Be star disc models, and this shows the great importance of having a non-isothermal disc model (which was attempted but not conclusively shown by Klement et al., 2015).

We find the presence of hot bands above and below the midplane in nearly all disc density configurations, consistent with findings in other studies (Millar & Marlborough, 1998; Carciofi & Bjorkman, 2006; Sigut & Jones, 2007; McGill et al., 2011). We offer the first concrete explanation of these sheaths, showing in Figure 2 that the sheaths occur right at the boundary between where the disc is partially and fully ionized. This strongly indicates that these sheaths are the result of UV radiation that has been trapped in the optically thick, cold inner disc, escaping vertically through the disc and adding excess heat to the optically thin outer disc right at the boundary of this partially ionized region. We also predict that if the disc is dense enough, diffuse radiation near the midplane of the disc can play a large role in heating the disc midplane, sometimes causing the midplane to be warmer than the upper disc layers despite not being fully ionized, as particularly seen in our models with late-type stars.

Figure 14: Top to bottom, the \(V\)-band magnitude, H\(\alpha\) equivalent width, H\(\alpha\) V/R ratio, polarization percentage in the \(V\)-band, and position angle, versus azimuthal viewing angle \(\phi\), for model 1 with the disc tilted 40\({}^{\circ}\). The system is viewed at a \(\theta\) of 80\({}^{\circ}\). The dashed line is shifted by 180\({}^{\circ}\) to facilitate comparison between periods.

We also investigated the difference between the star having an average rotation rate of 70% of the critical velocity versus a high rotation rate of 95% of the critical velocity. In our B0, B2, and B5 models, this increase in rotation had marginal effects, only heating the outer disc
and midplane slightly, but keeping the overall temperature structure the same. This is different in our B8 models, where an increase in rotation caused a large increase in midplane temperature, as well as the appearance of prominent hot sheaths. In this case, the combination of the higher rotation giving hotter poles, along with the lower densities used for our B8 models, allows the radiation from the hot poles to "carve" farther into the disc, causing substantial heating in and around the midplane.

Overall the temperature structure of our non-tilted models is remarkably similar to the works of Millar & Marlborough (1998, 1999a,b,c) and Millar et al. (2000), who did detailed work on both the early-type star \(\gamma\) Cas and the late-type star 1 Del, using an escape probability method and balancing the energy contributions to calculate the disc temperature structures of these stars. This similarity comes despite their code using a different density prescription and including only five hydrogen energy levels. hdust, used here, uses a Monte Carlo technique to solve the radiative transfer, has 12 non-LTE and 25 LTE hydrogen energy levels, and also accounts for line radiation and bound-bound transitions. We argue that the strong agreement of the temperature distributions between previous work, including Carciofi & Bjorkman (2006) for B3 spectral types also using hdust, and the new work presented here implies that the temperature and ionization levels are primarily controlled by photoionization-recombination equilibrium. This agreement also provides strong evidence of the broad applicability of our work to gaseous discs.

In tilting the disc, we see modest large-scale changes to the disc's average temperature, with it increasing slightly as the disc tilt angle increases. On a smaller scale, we find that with increasing tilt angle, the part of the disc that moves towards the equator becomes cooler, while the portion of the disc moving towards the stellar pole becomes hotter. These changes are explained by gravity darkening of the rapidly rotating star, making the stellar poles hotter than the stellar equator. This anti-symmetric change of disc temperatures is why the overall average disc temperature does not change appreciably when the disc is tilted. The temperature in the midplane of the disc is largely unchanged by the disc tilting, due to its higher density than the rest of the disc, although for already optically thin discs the midplane can vary in temperature as well. This behaviour is seen across all spectral types.

Figure 15: Each panel is similar to Figure 6; (a) and (b) are for model 10, while (c) and (d) are for model 25. The top row in each subplot is for the model with a flat 40\({}^{\circ}\) tilt, while the bottom row is for the model warped to a maximum of 40\({}^{\circ}\).

Examining the observables of our tilted disc simulations, we offer two scenarios where a disc tilt may be detected. The first case is where the disc is actively observed to be tilting. In this scenario, the change in observables is entirely dependent on the direction of tilt relative to the observer. A disc may appear completely different with a \(90^{\circ}\) or \(180^{\circ}\) change of relative orientation, as the disc either moves to be more face-on or more edge-on with the observer as it tilts. This variability would make it difficult to interpret whether changes in the observables of a Be star are due to a disc tilting or to simple changes in disc density or size.
The strongest evidence of disc tilting would appear in the polarization PA: if one were looking along the axis of the disc's tilt, the position angle should exactly match the tilt angle of the disc; more generally, the position angle will change by some amount as long as the observer's line of sight is not exactly perpendicular to the tilt axis. No other change in geometry would cause a change of tens of degrees in the position angle, making it a key measurement to look at for proof of disc tilting. The shape of the H\(\alpha\) line would also be a clear indication of disc tilting, as it changes from single-peaked to double-peaked and vice-versa. This change could not be brought about by a simple change of density structure or a larger/smaller disc, and could only occur with a major change of disc geometry such as tilting of the H\(\alpha\) emitting region. This would be seen in other emission lines as well, not just H\(\alpha\). The advantage of these two observables being the leading indicators of tilting is that one of them should appear no matter the orientation of the disc, given a large enough disc tilt. If the disc is tilting to be more face-on or edge-on with the observer, the H\(\alpha\) line would change shape while the position angle would not. On the other hand, if the observer were looking more along the tilt axis of the disc, the polarization position angle would change, while the H\(\alpha\) line shape would be approximately constant. Thus, both the emission line shape and the polarization position angle are key signatures of disc tilting.

Another difference that could set a tilted disc apart from a non-tilted disc is the V/R ratio of the H\(\alpha\) emission lines; however, this ratio is not particularly strong in our models apart from a few cases of early-type stars where the disc density and tilt angle are highest. There is no clear pattern to why those few models show stronger V/R ratios than others, so it would be difficult to discern, without further constraints from other observables, whether V/R ratios in actual observations of Be stars are due to a tilted disc or to a density enhancement in the disc, like those produced by spiral enhancements in \(\zeta\) Tau (Stefl et al., 2009; Escolano et al., 2015) and 48 Lib (Silaj et al., 2016).

The second case is where the disc is already tilted and undergoing precession due to the influence of a misaligned binary companion. We are able to simulate the disc precessing about the stellar pole by holding the \(\theta\) observing angle constant and changing \(\phi\) only. Here we find that the percent polarization, \(V\) magnitude, and H\(\alpha\) EW oscillate at half the precession period, although the oscillation will not be perfectly symmetric unless the observer is directly aligned with the stellar equator. The position angle, on the other hand, will oscillate in sync with the disc precession. The V/R ratio undergoes an antisymmetric half-period oscillation, with nodes at \(\phi=90^{\circ}\) and \(270^{\circ}\), due to the violet and red sides of the disc reversing when the observer moves to the other side of the disc.

We then investigated two other scenarios. First, we computed two truncated disc models out to a radius of 20 \(R_{*}\), to see any possible obscuring effects compared to a disc 50 \(R_{*}\) in size. These simulations revealed that the outer disc from 20 to 50 stellar radii only marginally increased the H\(\alpha\) emission, while not changing the other examined observables.
The temperature structure was also unchanged out to 20 stellar radii, as expected. Second, and more importantly, we computed two models that were linearly warped up to a maximum angle of \(40^{\circ}\). This model revealed a cooler outer disc temperature versus its flat tilted counterpart, and an inner temperature structure that followed the warp of the disc. The observables of this model are essentially a mix of all the tilted models together, with some observables better matching the non-tilted or \(10^{\circ}\) models, while others matched the higher-tilt models. This shows how important it is to recognize that Be star discs emit some wavelengths from dense inner volumes while other wavelengths come from larger radial positions in the disc. These warped models are merely an initial test to see what effects a warped disc may introduce. A proper warped disc study is beyond the scope of this paper, though it certainly merits its own study.

These simulations are a vital step towards simulations of more complex disc configurations, such as ones containing warped discs, or those presented by Suffak et al. (2022), which contain holes and tearing of the disc. The flat-tilted models here will be a good benchmark for analysis of these discs that are tilted, warped, and have asymmetric density distributions. With the fundamentals presented here, we will be able to tackle more complicated Be star systems such as Pleione, which is suspected to have a periodically tearing disc (Marr et al., 2022; Martin & Lepp, 2022).

## Acknowledgements

We thank the anonymous referee for their thorough comments which improved the paper. We gratefully acknowledge the work of Marr (2022), whose preliminary work on the temperature of tilted discs inspired and aided this work. C.E.J. acknowledges support through the Natural Sciences and Engineering Research Council of Canada. A.C.C. acknowledges support from CNPq (grant 311446/2019-1) and FAPESP (grants 2018/04055-8 and 2019/13354-1). T.H.A. acknowledges support from FAPESP (grant 2021/01891-2). This work was made possible through the use of the Shared Hierarchical Academic Research Computing Network (SHARCNET).

Figure 16: Same as Figure 14, but for the warped disc of model 10. The system is viewed at \(\theta=80^{\circ}\).

## Data Availability

Although there is no observational data, the hdust models computed for this work can be made available upon request.

## References

* Baade et al. (2016) Baade D., et al., 2016, A&A, 588, A56
* Brown et al. (2019) Brown R. O., Coe M. J., Ho W. C. G., Okazaki A. T., 2019, MNRAS, 488, 387
* Carciofi & Bjorkman (2006) Carciofi A. C., Bjorkman J. E., 2006, ApJ, 639, 1081
* Carciofi & Bjorkman (2008) Carciofi A. C., Bjorkman J. E., 2008, ApJ, 684, 1374
* Collins (1987) Collins George W. I., 1987, in Slettebak A., Snow T. P., eds, IAU Colloq. 92: Physics of Be Stars, p. 3
* Cox (2000) Cox A. N., 2000, Allen's astrophysical quantities
* Cyr et al. (2017) Cyr I. J., Jones C. E., Panoglou D., Carciofi A. C., Okazaki A. T., 2017, MNRAS, 471, 596
* Escolano et al. (2015) Escolano C., Carciofi A. C., Okazaki A. T., Rivinius T., Baade D., Stefl S., 2015, A&A, 576, A112
* Ghoreyshi et al. (2018) Ghoreyshi M. R., et al., 2018, MNRAS, 479, 2214
* Ghoreyshi et al. (2021) Ghoreyshi M. R., Carciofi A. C., Jones C. E., Faes D. M., Baade D., Rivinius T., 2021, ApJ, 909, 149
* Hanuschik et al. (1996) Hanuschik R. W., Hummel W., Sutorius E., Dietle O., Thimm G., 1996, A&AS, 116, 309
* Jones et al. (2004) Jones C. E., Sigut T. A. A., Marlborough J.
M., 2004, MNRAS, 352, 841
* Jones et al. (2008) Jones C. E., Tycner C., Sigut T. A. A., Benson J. A., Hutter D. J., 2008, ApJ, 687, 598
* Klement et al. (2015) Klement R., et al., 2015, A&A, 584, A85
* Lee et al. (1991) Lee U., Osaki Y., Saio H., 1991, MNRAS, 250, 432
* Marr (2022) Marr K., 2022, PhD thesis, University of Western Ontario, London, ON, CA, [https://ir.lib.uwo.ca/etd/8376](https://ir.lib.uwo.ca/etd/8376)
* Marr et al. (2021) Marr K. C., Jones C. E., Carciofi A. C., Rubio A. C., Mota B. C., Ghoreyshi M. R., Hatfield D. W., Rimulo L. R., 2021, ApJ, 912, 76
* Marr et al. (2022) Marr K. C., Jones C. E., Tycner C., Carciofi A. C., Silva A. C. F., 2022, ApJ, 928, 145
* Martin & Lepp (2022) Martin R. G., Lepp S., 2022, MNRAS, 516, L86
* Martin et al. (2011) Martin R. G., Pringle J. E., Tout C. A., Lubow S. H., 2011, MNRAS, 416, 2827
* Martin et al. (2014a) Martin R. G., Nixon C., Armitage P. J., Lubow S. H., Price D. J., 2014a, ApJ, 790, L34
* Martin et al. (2014b) Martin R. G., Nixon C., Lubow S. H., Armitage P. J., Price D. J., Dogan S., King A., 2014b, ApJ, 792, L33
* McGill et al. (2011) McGill M. A., Sigut T. A. A., Jones C. E., 2011, ApJ, 743, 111
* McGill et al. (2013) McGill M. A., Sigut T. A. A., Jones C. E., 2013, ApJS, 204, 2
* Millar & Marlborough (1998) Millar C. E., Marlborough J. M., 1998, ApJ, 494, 715
* Millar & Marlborough (1999a) Millar C. E., Marlborough J. M., 1999a, ApJ, 516, 276
* Millar & Marlborough (1999b) Millar C. E., Marlborough J. M., 1999b, ApJ, 516, 280
* Millar & Marlborough (1999c) Millar C. E., Marlborough J. M., 1999c, ApJ, 526, 400
* Millar et al. (2000) Millar C. E., Sigut T. A. A., Marlborough J. M., 2000, MNRAS, 312, 465
* Rimulo et al. (2018) Rimulo L. R., et al., 2018, MNRAS, 476, 3555
* Rivinius et al. (2013) Rivinius T., Carciofi A. C., Martayan C., 2013, A&ARv, 21, 69
* Sigut & Jones (2007) Sigut T. A. A., Jones C. E., 2007, ApJ, 668, 481
* Silaj et al. (2010) Silaj J., Jones C. E., Tycner C., Sigut T. A. A., Smith A. D., 2010, ApJS, 187, 228
* Silaj et al. (2016) Silaj J., et al., 2016, ApJ, 826, 81
* Suffak et al. (2020) Suffak M. W., Jones C. E., Tycner C., Henry G. W., Carciofi A. C., Mota B. C., Rubio A. C., 2020, ApJ, 890, 86
* Suffak et al. (2022) Suffak M. W., Jones C. E., Carciofi A. C., 2022, MNRAS, 509, 931
* Vieira et al. (2017) Vieira R. G., Carciofi A. C., Bjorkman J. E., Rivinius T., Baade D., Rimulo L. R., 2017, MNRAS, 464, 3071
* Waters (1986) Waters L. B. F. M., 1986, A&A, 162, 121
* Stefl et al. (2009) Stefl S., et al., 2009, A&A, 504, 929

## Appendix A Tilted Temperature Cross Sections

Figure 17: Same as Figure 6, but for models 2-7.

Figure 18: Same as Figure 6, but for models 8-13.

Figure 19: Same as Figure 6, but for models 14-19.

Figure 20: Same as Figure 6, but for models 20-25.

Figure 21: Same as Figure 6, but for models 26-29, 31 and 32.

## Appendix B Midplane Temperatures of Tilted vs Non-Tilted Discs

Figure 22: Temperature vs. radius at the disc midplane for all models with a B0 or B2 central star, in the direction \(\phi=90^{\circ}\). The four lines are for four different tilt angles as indicated by the legend.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
Using the three-dimensional Monte Carlo radiative transfer code HDUST, we create Be star models in which the disc is tilted away from the stellar equatorial plane. Across four spectral types (B0, B2, B5, and B8), we compute discs tilted by 0°, 10°, 20°, and 40°, with densities varied according to spectral type. We also compute models with average and high stellar rotation rates. We first discuss the temperatures of the non-tilted discs and show the relationships between these temperatures and the stellar and disc parameters. Tilting the disc has a minimal effect on the average disc temperature, while introducing non-linear temperature dependencies across the disc cross sections that become more pronounced at higher rotation rates. We also discuss the effects of disc tilting on the V-band magnitude, polarization, and the Hα line.
2306.17751
Confirming Resonance in Three Transiting Systems
Although resonant planets have orbital periods near commensurability, resonance is also dictated by other factors, such as the planets' eccentricities and masses, and therefore must be confirmed through a study of the system's dynamics. Here, we perform such a study for five multi-planet systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. For each system, we run a suite of N-body simulations that span the full parameter-space that is consistent with the constrained orbital and planetary properties. We study the stability of each system and look for resonances based on the libration of the critical resonant angles. We find strong evidence for a two-body resonance in each system; we confirm a 3:2 resonance between Kepler-226c and Kepler-226d, confirm a 3:2 resonance between Kepler-254c and Kepler-254d, and confirm a three-body 1:2:3 resonant chain between the three planets of Kepler-363. We explore the dynamical history of two of these systems and find that these resonances most likely formed without migration. Migration leads to the libration of the three-body resonant angle, but these angles circulate in both Kepler-254 and Kepler-363. Applying our methods to additional near-resonant systems could help us identify which systems are truly resonant or non-resonant and which systems require additional follow-up analysis.
Tyler Quinn, Mariah MacDonald
2023-06-30T16:01:37
http://arxiv.org/abs/2306.17751v1
# Confirming Resonance in Three Transiting Systems

###### Abstract

Although resonant planets have orbital periods near commensurability, resonance is also dictated by other factors, such as the planets' eccentricities and masses, and therefore must be confirmed through a study of the system's dynamics. Here, we perform such a study for five multi-planet systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. For each system, we run a suite of _N_-body simulations that span the full parameter-space that is consistent with the constrained orbital and planetary properties. We study the stability of each system and look for resonances based on the libration of the critical resonant angles. We find strong evidence for a two-body resonance in each system; we confirm a 3:2 resonance between Kepler-226c and Kepler-226d, confirm a 3:2 resonance between Kepler-254c and Kepler-254d, and confirm a three-body 1:2:3 resonant chain between the three planets of Kepler-363. We explore the dynamical history of two of these systems and find that these resonances most likely formed without migration. Migration leads to the libration of the three-body resonant angle, but these angles circulate in both Kepler-254 and Kepler-363. Applying our methods to additional near-resonant systems could help us identify which systems are truly resonant or non-resonant and which systems require additional follow-up analysis.

Exoplanet dynamics (490), Exoplanet migration (2205), Exoplanet structure (495)

Tyler Quinn (ORCID 0000-0002-4061-8088), Mariah G. MacDonald (ORCID 0000-0002-1882-7885)

## 1 Introduction

While in operation, the Kepler space telescope discovered over 4,500 planet candidates during both the Kepler and K2 missions. Today, many of these candidates have been confirmed, and Kepler-era exoplanets have contributed to the growth of the confirmed exoplanet catalog to over 5,000 and the catalog of candidate planets to over 8,500. This large sample size has led to many investigations into planetary composition, formation, dynamics, and evolution, as well as astrobiological studies. One intriguing phenomenon raised by these studies is mean-motion resonance (MMR).

MMR occurs when two or more orbiting bodies periodically exert gravitational perturbations on each other, leading to a repeated exchange of energy and angular momentum. We can predict MMR by observing the orbital frequencies of neighboring planets. If in resonance, the ratio of neighboring planets' periods will reduce to a ratio of small integers, such as 2:1 or 12:5. However, determining resonance requires a deeper study into the system's dynamics, since a period ratio of small integers does not necessarily mean the system is in resonance. Such in-depth studies have been conducted and have confirmed resonance in a handful of Kepler systems such as Kepler-80 (MacDonald et al., 2016), Kepler-223 (Mills et al., 2016), and K2-138 (MacDonald et al., 2022).

Mean-motion resonance can form in systems with two or more orbiting bodies. The simplest form of MMR is the two-body resonance. Mathematically, this is defined as the oscillation or libration of the two-body critical angle:

\[\Theta_{b,c}=j_{1}\lambda_{b}+j_{2}\lambda_{c}+j_{3}\omega_{b}+j_{4}\omega_{c}+j_{5}\Omega_{b}+j_{6}\Omega_{c} \tag{1}\]

where \(\lambda_{p}\) is the mean longitude of planet \(p\), \(\omega_{p}\) is the argument of periapsis, \(\Omega_{p}\) is the longitude of the ascending node, \(j_{i}\) are coefficients which sum to zero, and planet \(b\) orbits closer to the host star than planet \(c\).
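To make this concrete, the short sketch below (ours, not from the paper) evaluates a first-order two-body angle such as \(\Theta=3\lambda_{\rm out}-2\lambda_{\rm in}-\omega\) and applies a crude libration test to a time series of angles; the 3:2 defaults, function names, and synthetic data are illustrative only. The three-body angle introduced next is simply the difference of two such angles.

```python
import numpy as np

def wrap180(angle_deg):
    """Wrap angles (degrees) into [-180, 180)."""
    return (np.asarray(angle_deg) + 180.0) % 360.0 - 180.0

def two_body_angle(lam_in, lam_out, omega, p=3, q=1):
    """First-order critical angle of a p:(p-q) MMR, in degrees.
    For the 3:2 case (p=3, q=1): Theta = 3*lam_out - 2*lam_in - omega."""
    return p * np.asarray(lam_out) - (p - q) * np.asarray(lam_in) - np.asarray(omega)

def libration_stats(theta_deg, center_deg):
    """Crude test: the angle librates about center_deg if its deviations never
    reach +/-180 degrees; the maximum deviation serves as the amplitude."""
    dev = wrap180(np.asarray(theta_deg) - center_deg)
    amp = float(np.max(np.abs(dev)))
    return amp < 180.0, amp

# Toy example: a 3:2 angle librating about 0 degrees with a 60-degree amplitude.
t = np.linspace(0.0, 1.0, 1000)
theta = 60.0 * np.sin(2.0 * np.pi * 25.0 * t)   # synthetic time series
print(libration_stats(theta, 0.0))               # (True, ~60.0)
```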
In systems with three or more orbiting bodies, numerous bodies may be in resonance, either in a chain of two-body resonances or in a three-or-more-body resonance. A zeroth-order three-body MMR is defined by the difference of the two-body resonant angles:

\[\phi_{b,c,d}=\Theta_{c,d}-\Theta_{b,c}=m\lambda_{d}-(m+n)\lambda_{c}+n\lambda_{b} \tag{2}\]

where \(\lambda_{p}\) is the mean longitude of planet \(p\), and \(m\) and \(n\) are integers. This angle is independent of all longitudes of periapsis (\(\bar{\omega}=\Omega+\omega\)), making it ideal for resonant study in systems with poorly constrained orbital angles and eccentricities.

Traditionally, such resonances are confirmed if all solutions to the system's RV or TTV forward modeling lead to librating angles. Unfortunately, few systems produce perturbations large enough to be detected with a typical survey cadence (a 30-minute cadence from photometry and a cadence of a few days from radial velocities). Due to a lack of high-precision measurements of these systems, we must model all solutions to a system--across all potential parameters that are consistent with the data--to confirm resonance. In the case that all solutions result in the planets locked in MMRs, we are able to confirm resonance in the system.

MacDonald et al. (2022) were the first to confirm a resonance without forward modeling either transit times or the radial velocity signal of the planets. They found that three of the planets of K2-138 are locked in a resonant chain in 99% of \(N\)-body simulations that spanned the entirety of the parameter space previously constrained by both photometry and radial velocity measurements, providing a method of MMR confirmation in the absence of high-cadence, high-precision data. Such a method, if applied on a larger scale to more systems, would enable us to confirm more resonances. Since resonances allow for the constraint of planetary properties, the system's formation history, and the planets' long-term stability, a significant number of confirmed resonant systems would allow us to start leveraging these dynamics to better understand planet formation and evolution.

Here, we perform such an analysis on five systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. Each of these systems was suggested to be a "broken," full-system 3:2 resonant chain, where the discovery of an additional planet would complete the chain (Christiansen et al., 2018). However, the period ratios of adjacent known planets suggest the presence of resonant chains. Very few known systems with similar architecture exist (Livingston et al., 2018), and confirmation of such a chain can provide valuable insight into the dynamics, history, and composition of systems of this architecture.

In Section 2, we briefly describe the five systems we study and discuss the initial conditions and parameters of our \(N\)-body simulations. We then present our results and analyze the resonant configurations of each system in Section 3. For two of the systems in which we confirm resonance, we use the resonances to constrain the planetary masses and orbital periods and discuss forming the chain in Section 4, before summarizing and concluding our work in Section 5.

## 2 Methods

Kepler-226 is a G-type star hosting a super-Earth and two Earth-sized planets with orbital periods between 4 and 8 days. These three planets could be locked in a 2:3:4 resonant chain.
Since their initial confirmation (Rowe et al., 2014), the anti-correlated TTVs of planets b and c have constrained their masses to \(M_{b}=24.0^{+11.8}_{-10.1}\ M_{\oplus}\) and \(M_{c}=45.2^{+22.5}_{-19.1}\ M_{\oplus}\), although the radii of these two planets (\(R_{b}=1.64\ R_{\oplus}\) and \(R_{c}=2.47\ R_{\oplus}\), Berger et al., 2018) suggest these values to be overestimates. Although the TTVs and period ratios of the system suggest this chain of resonances, the specific dynamics of the system have yet to be explored.

Kepler-254 is a relatively dim (\(V=16.012\)) G-type star, hosting three confirmed exoplanets with orbital periods ranging from 5.8 days to 18.7 days. The period ratios of adjacent planets suggest the system could be locked in a 1:2:3 resonant chain. Jontof-Hutter et al. (2021) suggest that Kepler-254d and Kepler-254c could be locked in a 3:2 resonance. However, the orbital dynamics of Kepler-254 have yet to be included in an in-depth study to confirm MMRs.

Kepler-363 is a relatively bright (\(V=13.472\)) G-type star, hosting three confirmed exoplanets. These planets orbit their star fairly rapidly, with orbital periods ranging from 3.6 days to 11.9 days. The period ratios of adjacent planets suggest the system could be locked into a 1:2:3 resonant chain. The orbital dynamics of Kepler-363 have yet to be included in any in-depth study to confirm resonance in the system.

Kepler-1542 is a G-type star that hosts four transiting planets and one planetary candidate, all smaller than Earth and orbiting within 8 days. The orbital periods of the planets suggest a chain of resonances of 4:3, 5:4, 7:6, and 6:5 if we include the candidate. Validated by Morton et al. (2016), the four planets have never been included in an in-depth study of the system.

K2-32 is a G-type star in a binary system, hosting four transiting planets. The innermost planet, K2-32e, was most recently discovered and validated by Heller et al. (2019), suggesting that these four planets are in a 1:2:5:7 chain of mean motion resonances. Although the orbital periods suggest this resonance, as do many follow-up studies (e.g., Lillo-Box et al., 2020), the dynamics of this system have yet to be explored.

Following the methods of MacDonald et al. (2022), we seek to understand the dynamics of these systems by running _N_-body simulations using the Python module REBOUND (Rein and Liu, 2012). We run a suite of 1000 simulations, drawing initial values for planetary masses, inclinations, and orbital periods from independent, normal distributions that are centered on values constrained by current photometry. For Kepler-226, Kepler-254, and Kepler-363, we use the results from Thompson et al. (2018) for all parameters except planetary radii, for which we use the updated stellar, and therefore planetary, radii from Berger et al. (2018). For Kepler-1542, we use parameters from Morton et al. (2016), and for K2-32 we use the values from Heller et al. (2019). For planets without mass constraints, we draw masses from the mass-radius relationship described in Weiss and Marcy (2014)\({}^{1}\). Each simulation therefore initializes with a set of parameters that is unique from other simulations but consistent with current data. Using the WHFast integrator (Rein and Tamayo, 2015), we integrate the modeled systems for 10 Myr with a timestep of 5% of the innermost planet's period. We summarize the initial conditions for our simulations in Table 1.
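As a concrete sketch of this setup (ours, not the authors' code), the snippet below draws one realization of Kepler-254 from the values in Table 1 and integrates it with WHFast; the unit choices and the conversion from sky-plane inclination to orbital inclination are our assumptions.

```python
import numpy as np
import rebound

rng = np.random.default_rng(2023)
MEARTH = 3.003e-6        # Earth mass in solar masses
DAYS_PER_YEAR = 365.25

# Nominal Kepler-254 values from Table 1: period [d], sky-plane inclination
# [deg], and mass [M_Earth], each with a 1-sigma width for the normal draw.
planets = [
    dict(P=5.82666,  dP=1e-5, inc=89.88, m=8.84, dm=2.02),
    dict(P=12.41218, dP=8e-5, inc=89.95, m=5.75, dm=2.00),
    dict(P=18.7464,  dP=1e-4, inc=89.11, m=6.72, dm=2.03),
]

sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")
sim.add(m=0.943)                       # stellar mass (Berger et al. 2018)
for p in planets:
    sim.add(m=max(rng.normal(p["m"], p["dm"]), 0.1) * MEARTH,  # clip bad draws
            P=rng.normal(p["P"], p["dP"]) / DAYS_PER_YEAR,
            # assumption: convert sky-plane inclination to tilt from face-on
            inc=np.radians(90.0 - rng.normal(p["inc"], 0.2)),
            e=0.0)                     # all planets start on circular orbits
sim.move_to_com()

sim.integrator = "whfast"
sim.dt = 0.05 * planets[0]["P"] / DAYS_PER_YEAR   # 5% of innermost period
sim.integrate(1e7)                                 # 10 Myr
```

In a full suite, this draw-and-integrate step would be repeated 1000 times with a fresh random seed, recording the resonant angles along the way.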
Footnote 1: We explore a large range of masses for each planet and use the resulting resonances to constrain the planet masses. We are therefore not sensitive to any specific mass-radius relationship.

## 3 Results

For each of our five systems of interest, we run a suite of 1000 _N_-body simulations for 10 Myr and analyze the results of each suite for two-body and three-body resonances. We stop integrations when any planet experiences a close encounter, defined by a distance of less than three Hill radii. To confirm a chain of resonances, we search for simulations where the three-body angle is librating or where both of the two-body angles are librating.

We find it unlikely that Kepler-1542 and K2-32 contain any resonant chains; for each of these systems, no three-body angle librated in our simulations, regardless of planetary mass. In Kepler-1542, the resonant angle \(\Theta_{e,d}=7\lambda_{d}-6\lambda_{e}-\omega_{e}\) librated in 82% of simulations, and in K2-32 the resonant angles \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) and \(\Theta_{e,b}=2\lambda_{b}-\lambda_{e}-\omega_{e}\) librated in 70% and 68% of simulations, respectively. Because not all solutions to our current data lead to these angles librating, we cannot claim the planets are in resonance.

In Kepler-226, we find that the two-body angle \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) librates about \(180^{\circ}\) in 99.8% of our simulations, but with large libration amplitudes of \(90.5^{+23.19}_{-15.22}\) degrees. The two-body angle \(4\lambda_{c}-3\lambda_{b}-\omega_{c}\) librates in 42% of our simulations, and the three-body angle circulates in all simulations. While we are therefore able to confirm the 3:2 resonance between Kepler-226c and Kepler-226d, we are not able to confirm a resonant chain.

We focus the rest of this work on the two remaining systems, Kepler-254 and Kepler-363. We summarize the results of the resonance analysis for all systems in Table 2.

### 3.1 Kepler-254

Through our analysis, we find that nearly all (99.6%) simulations of Kepler-254 remained stable during the 10 Myr integrations, i.e. no planets experienced a close encounter or were ejected, regardless of initial parameter values. Of these simulations, 42.4% result in a 1:2:3 three-body resonant chain. The two-body angle \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) librates in 42.4% of the simulations, and the two-body angle \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) librates in 99.2% of the simulations. The three-body angle \(\phi=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) circulated in all of the simulations. We show the evolution of one of the _N_-body simulations in Figure 1.

Given these results, we are therefore able to confirm a two-body resonance between Kepler-254c and Kepler-254d, where the angle \(\Theta_{c,d}\) librates around \(0^{\circ}\) with an amplitude of \(65.1^{+4.6}_{-5.0}\) degrees. A three-body resonant chain is probable but requires further analysis and more precise orbits to confirm. The system could therefore benefit from follow-up observation and analysis.

### 3.2 Kepler-363

Regardless of the initial parameters, nearly all 1000 simulations of Kepler-363 remained stable for the 10 Myr integration. We find the 2:1 resonant angle \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) librates in 99.2% of simulations, and the 3:2 resonant angle \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) librates in 92.6% of simulations. Of all 1000 simulations, 92.4% result in a three-body 1:2:3 resonant chain.
The two-body angles \(\Theta_{b,c}\) and \(\Theta_{c,d}\) librate about \(0^{\circ}\) with moderate amplitudes of \(35.1^{+30.0}_{-17.8}\) and \(55.1^{+13.9}_{-13.7}\) degrees, respectively, and the two-body angle \(\Theta^{\prime}_{c,d}\) librates around \(180^{\circ}\) with large amplitudes of \(96.98^{+34.82}_{-35.54}\) degrees. Curiously, the three-body angle \(\phi=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) does not librate in any of our simulations. We discuss the implications of this circulating angle in more detail in Section 4. We show the evolution of one of the _N_-body simulations in Figure 2.

## 4 Discussion

With the confirmation of resonance, we are able to study additional information about a system and its planets. In particular, resonances allow us to constrain planetary masses and orbits and to explore the formation and subsequent dynamical history of the planets.

### 4.1 Using Resonance to Constrain Masses and Orbits

We explore the differences in planetary parameters between simulations that resulted in resonance and those that did not. We perform a two-sample Kolmogorov-Smirnov test, exploring the null hypotheses that the masses, eccentricities, and orbital periods of the planets in resonance and the planets not in resonance are drawn from the same distribution (see the short code sketch below). As an example, we take the distribution of masses of Kepler-363b from simulations where \(\Theta_{b,c}\) librates as one sample for the K-S test, and the distribution of that planet's mass from simulations where the same angle circulates as the second sample. For all parameters except the eccentricity of Kepler-363c, we recover large _p_-values (p \(>\) 0.05) and fail to reject the null hypothesis. For Kepler-363c's eccentricity, we recover a _p_-value of 0.018, suggesting that the two distributions are statistically different. We find that the resulting eccentricity for simulations with a librating \(\Theta_{c,d}\) is smaller than for those with a circulating \(\Theta_{c,d}\) (\(2.3^{+1.8}_{-1.4}\times 10^{-4}\) and \(3.0^{+1.5}_{-1.7}\times 10^{-4}\), respectively). Although we are not able to use the system's resonances to constrain the planets' masses, we do find that this system's resonant state is not very dependent on the planetary masses, confirming that more precise mass measurements are not necessary to confirm these resonances.

### 4.2 Constraining dynamical history

With confirmed resonances, we are now able to study each system's formation and evolution. Although resonant chains are typically seen as the hallmark of disk-driven migration, two additional pathways exist to form resonant chains that are each consistent with in situ formation (MacDonald and Dawson, 2018).
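As flagged in Section 4.1, the two-sample Kolmogorov-Smirnov comparison can be run directly with scipy's ks_2samp; in this minimal sketch the arrays are synthetic stand-ins that merely mimic the quoted Kepler-363c eccentricity distributions, not the paper's actual simulation output.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-ins for Kepler-363c's eccentricity in simulations where
# Theta_cd librated vs. circulated (values only mimic the quoted medians).
ecc_librating = np.abs(rng.normal(2.3e-4, 1.6e-4, size=500))
ecc_circulating = np.abs(rng.normal(3.0e-4, 1.6e-4, size=500))

# Null hypothesis: both samples are drawn from the same distribution.
stat, p_value = ks_2samp(ecc_librating, ecc_circulating)
print(f"K-S statistic = {stat:.3f}, p-value = {p_value:.3g}")
# A p-value below 0.05 would lead us to reject the null hypothesis.
```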
Following the prescription of MacDonald and Dawson (2018), the three chain formation pathways are long-scale migration (**LM**; hypothesizes the planets were formed both further from their star and from each other when compared to current observations), short-scale migration (**SM**; planets formed near current observations, just outside of resonance, where small shifts in the planets' semi-major axes will lead to resonance), and eccentricity dampening (**ECC**; planets formed near current observations, just outside of resonance, where damping of the planets' eccentricities will lead to resonance).

To study the formation of the resonances in Kepler-254 and Kepler-363, we follow the methods of MacDonald and Dawson (2018), which we briefly describe here. For each formation pathway, we run a suite of 500 \(N\)-body simulations with the same initial conditions shown in Table 1 except with inflated orbital periods. We use the modify_orbits_forces routine in the REBOUNDx library (Tamayo et al., 2020) and the WHFast integrator (Rein and Tamayo, 2015). For the **LM** simulations, we initialize the inner planet at 1 au from its host star and start the other planets just wide of the observed resonances\({}^{2}\). For the **SM** and **ECC** simulations, we initialize the planets a small percentage wide of their observed orbits, where we draw this percentage for each planet and each simulation from a normal distribution of \(N[5,3]\)%. All simulations start with the planets out of resonance. We then form the resonant chains by damping the semi-major axes and/or eccentricities of the planets, following the prescription in Papaloizou and Larwood (2000).

\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Kepler-226} & b & c & d \\ \hline \(P\) [d] & \(3.940997\pm 0.000020\) & \(5.34955\pm 0.000014\) & \(8.109044\pm 0.000094\) \\ \(t_{0}\) [d] & \(69.09337\) & \(104.80599\) & \(65.80333\) \\ \(i\) [\({}^{\circ}\)] & \(88.88\pm 0.2\) & \(89.62\pm 0.2\) & \(89.92\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(4.271^{+1.933*}_{-1.828}\) & \(6.237^{+2.071*}_{-1.852}\) & \(2.440^{+1.984*}_{-1.243}\) \\ \hline \hline Kepler-254 & b & c & d \\ \hline \(P\) [d] & \(5.82666\pm 0.00001\) & \(12.41218\pm 0.00008\) & \(18.7464\pm 0.0001\) \\ \(t_{0}\) [d] & \(106.01\) & \(75.54\) & \(80.13\) \\ \(i\) [\({}^{\circ}\)] & \(89.88\pm 0.2\) & \(89.95\pm 0.2\) & \(89.11\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(8.84^{+2.02*}_{-1.94}\) & \(5.75^{+1.99*}_{-2.00}\) & \(6.72^{+2.03*}_{-1.98}\) \\ \hline \hline Kepler-363 & b & c & d \\ \hline \(P\) [d] & \(3.61460279\pm 0.00003\) & \(7.54235832\pm 0.00004\) & \(11.93205399\pm 0.00005\) \\ \(t_{0}\) [d] & \(67.695\) & \(245965.961\) & \(245975.106\) \\ \(i\) [\({}^{\circ}\)] & \(86.02\pm 0.2\) & \(88.44\pm 0.2\) & \(89.52\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(3.05^{+1.83*}_{-1.65}\) & \(4.67^{+2.12*}_{-1.90}\) & \(5.34^{+2.06*}_{-1.94}\) \\ \hline \hline Kepler-1542 & c & b & e & d \\ \hline \(P\) [d] & \(2.8922302\pm 1.472\times 10^{-5}\) & \(3.95116882\pm 1.633\times 10^{-5}\) & \(5.10115756\pm 2.409\times 10^{-5}\) & \(5.99273738\pm 2.26\times 10^{-5}\) \\ \(t_{0}\) [d] & \(65.86465\) & \(67.22178\) & \(65.42378\) & \(64.74864\) \\ \(i\) [\({}^{\circ}\)] & \(89.89\pm 0.2\) & \(88.05\pm 0.2\) & \(89.68\pm 0.2\) & \(88.08\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(0.429^{+0.386*}_{-0.228}\) & \(0.803^{+0.823*}_{-0.420}\) & \(0.805^{+0.801*}_{-0.445}\) & \(1.083^{+0.979*}_{-0.570}\) \\ \hline \hline K2-32 & e & b & c & d \\ \hline \(P\) [d] & \(4.34882^{+0.000096}_{-0.00075}\) & \(8.99182^{+0.000088}_{-0.000084}\) & \(20.66186^{+0.00102}_{-0.00008}\) & \(31.7142^{+0.0011}_{-0.0010}\) \\ \(t_{0}\) [d] & \(1998.886\) & \(2000.92713\) & \(1999.42271\) & \(2003.7913\) \\ \(i\) [\({}^{\circ}\)] & \(90.0^{**}_{-0.8}\) & \(89.1\pm 0.7\) & \(89.3\pm 0.9\) & \(89.3\pm 0.9\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(1.095^{+2.248*}_{-0.625}\) & \(16.5^{+2.7}_{-2.7}\) & \(<12.1\) & \(10.3^{+4.8}_{-4.3}\) \\ \hline \end{tabular} Note. – Initial conditions used for the simulations, including orbital period \(P\), mid-transit time \(t_{0}\), sky-plane inclination \(i\), and planetary mass \(M_{p}\). We initialize all planets on circular orbits. We use the values published by Rowe et al. (2014) for all parameters of Kepler-226, Kepler-254, and Kepler-363, except for planetary radii, where we use the updated stellar and therefore planetary radii from Berger et al. (2018). For Kepler-1542, we use parameters from Morton et al. (2016), and for K2-32 we use the values from Heller et al. (2019). We assume stellar masses of \(0.831~{}M_{\odot}\) (Thompson et al., 2018), \(0.943~{}M_{\odot}\) (Berger et al., 2018), \(1.173~{}M_{\odot}\) (Thompson et al., 2018), \(0.933~{}M_{\odot}\) (Thompson et al., 2018), and \(0.856~{}M_{\odot}\) (Heller et al., 2019) for the stars as ordered in the table. All parameters were drawn from independent, normal distributions, centered on the nominal values with widths equal to the value's uncertainty; for parameters with unequal upper and lower uncertainties, we take the larger uncertainty as the width. \({}^{*}\) Planetary masses were drawn from the mass-radius relation of Weiss and Marcy (2014). \({}^{**}\) At the time of this work, no estimate existed for this value, so we fix the parameter and do not draw it from a normal distribution. \end{table} Table 1: Planetary Properties for Determining Resonance
For the **LM** and **SM** simulations, we damp only the outer planet's eccentricity and semi-major axis\({}^{3}\), and for the **ECC** simulations, we damp the eccentricity of all planets. We draw the timescales for the semi-major axis damping (\(\tau_{a}\)) and eccentricity damping (\(\tau_{e}\)) from independent, log-uniform distributions of log \(\tau_{a}\) = U[7, 9] yr and log \(\tau_{e}\) = U[4, 6] yr; log \(\tau_{a}\) = U[6, 9] yr and log \(\tau_{e}\) = U[4, 7] yr; and log \(\tau_{e}\) = U[5, 7] yr for the **LM**, **SM**, and **ECC** suites, respectively. We explore a wide range of damping timescales, representing a wide range of disk conditions, to avoid fine-tuning our simulations. We integrate each system forward with a timestep of 5% of the innermost planet's observed orbital period. After \(5\times 10^{6}\) years, we "turn off" the damping effects and integrate for another 0.25 Myr to ensure stability after the gas disk would dissipate. We then study each resulting simulation for librating two- and three-body resonant angles.

We find we are able to produce a full three-body resonant chain in systems like Kepler-254 and Kepler-363 through all three formation pathways. However, each formation pathway yields unique results, which we discuss in turn below. We summarize the centers and amplitudes of librating angles resulting from each formation pathway in Table 3, and we compare examples from each of these formation pathways in Figures 3 and 4.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Angle} & \% librating & Center [\({}^{\circ}\)] & Amplitude [\({}^{\circ}\)] \\ \hline K2-32 & stable = 984 & resonant = 664 & \\ \(\Theta_{e,b}=2\lambda_{b}-\lambda_{e}-\omega_{e}\) & 67.58\% & \(-0.005^{+0.349}_{-0.315}\) & \(48.4^{+23.8}_{-20.2}\) \\ \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) & 14.43\% & \(0.036^{+0.501}_{-0.513}\) & \(58.3^{+14.0}_{-31.8}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 69.92\% & \(0.015^{+2.073}_{-2.065}\) & \(64.5^{+9.7}_{-18.9}\) \\ \(\Theta^{\prime}_{e,b}=2\lambda_{b}-\lambda_{e}-\omega_{b}\) & 0.90\% & \(-5.38^{+12.70}_{-14.09}\) & \(134.64^{+6.23}_{-4.81}\) \\ \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) & 0.00\% & \(-1.14^{+0.81}_{-1.09}\) & \\ \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) & 6.80\% & \(-0.06^{+15.61}_{-1.71}\) & \\ \hline Kepler-226 & stable = 998 & resonant = 457 & \\ \(\Theta_{b,c}=4\lambda_{c}-3\lambda_{b}-\omega_{c}\) & 42.00\% & \(-0.052^{+0.571}_{-0.504}\) & \(119.1^{+22.2}_{-20.9}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 45.80\% & \(-0.05^{+0.91}_{-0.88}\) & \(135.03^{+11.18}_{-30.3}\) \\ \(\Theta^{\prime}_{b,c}=4\lambda_{c}-3\lambda_{b}-\omega_{b}\) & 41.60\% & \(179.9^{+0.577}_{-0.428}\) & \(137.25^{+9.21}_{-11.85}\) \\ \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) & 99.8\% & \(179.9^{+13.88}_{-1.11}\) & \(90.5^{+22.19}_{-15.62}\) \\ \hline Kepler-254 & stable = 996 & resonant = 422 & \\ \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) & 42.40\% & \(0.021^{+0.35}_{-0.29}\) & \(118.6^{+21.48}_{-49.9}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 99.20\% & \(-0.15^{+2.29}_{-2.22}\) & \(65.1^{+4.6}_{-5.0}\) \\ \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) & 0.00\% & \(-1.7^{+1.7}_{-1.81}\) & \(87.1^{+12.34}_{-14.21}\) \\ \hline Kepler-363 & stable = 998 & resonant = 924 & \\ \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) & 99.2\% & \(0.0029^{+0.224}_{-0.243}\) & \(35.1^{+30.0}_{-17.78}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 92.95\% & \(-0.02^{+0.54}_{-0.44}\) & \(55.1^{+13.9}_{-13.7}\) \\ \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) & 0.0\% & - & - \\ \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) & 98.8\% & \(179.97^{+0.41}_{-0.37}\) & \(96.98^{+34.82}_{-35.54}\) \\ \hline Kepler-1542 & stable = 897 & resonant = 0 & \\ \(\Theta_{c,b}=4\lambda_{b}-3\lambda_{c}-\omega_{b}\) & 9.81\% & \(0.16^{+0.66}_{-0.60}\) & \(64.5^{+8.7}_{-24.9}\) \\ \(\Theta_{b,e}=5\lambda_{e}-4\lambda_{b}-\omega_{b}\) & 5.13\% & \(-0.17^{+0.88}_{-0.88}\) & \(74.1^{+2.7}_{-2.4}\) \\ \(\Theta_{e,d}=7\lambda_{d}-6\lambda_{e}-\omega_{e}\) & 81.94\% & \(-0.05^{+0.88}_{-0.81}\) & \(61.2^{+10.8}_{-16.5}\) \\ \(\Theta^{\prime}_{c,b}=4\lambda_{b}-3\lambda_{c}-\omega_{c}\) & 0.50\% & \(-5.19^{+3.04}_{-0.85}\) & \(132.50^{+3.34}_{-1.50}\) \\ \(\Theta^{\prime}_{b,e}=5\lambda_{e}-4\lambda_{b}-\omega_{e}\) & 2.80\% & \(-2.47^{+12.97}_{-8.74}\) & \(127.21^{+2.25}_{-2.12}\) \\ \(\Theta^{\prime}_{e,d}=7\lambda_{d}-6\lambda_{e}-\omega_{d}\) & 29.2\% & \(1.08^{+11.11}_{-13.93}\) & \(131.72^{+7.9}_{-4.75}\) \\ \hline \end{tabular} Note. – For each system, the number of simulations out of 1000 that survived 10 Myr, the number of simulations where all planets participate in the chain, and then, for each angle, the percentage of simulations where the angle librates and the center and amplitude of the libration. For each system, all three-body angles were circulating. \end{table} Table 2: Resonance Results

Figure 1: Example evolution of the orbital periods, eccentricities, inclinations, all four two-body resonant angles, and the three-body resonant angle of the three planets of Kepler-254. We find that the two-body angle \(\Theta_{c,d}\) librates in nearly all of our simulations, the two-body angle \(\Theta_{b,c}\) only librates in approximately 40%, and the corresponding three-body angle circulates in each one. The initial values for this simulation were drawn from independent, normal distributions, as described in Section 2 and summarized in Table 1. We integrate this simulation beyond 10 Myr for visualization purposes.

Figure 2: Example evolution of the orbital periods, eccentricities, inclinations, all four two-body resonant angles, and the three-body resonant angle of the three planets of Kepler-363. We find that the two-body angles \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\) librate in nearly all of our simulations, but the corresponding three-body angle circulates in each one. The initial values for this simulation were drawn from independent, normal distributions, as described in Section 2 and summarized in Table 1. We integrate this simulation beyond 10 Myr for visualization purposes.
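As an illustration of the damping prescription above, the minimal sketch below (ours, not the authors' code) attaches REBOUNDx's modify_orbits_forces to a toy two-planet system started just wide of a 3:2 commensurability and damps the outer planet, SM-style; the toy masses, periods, and the infinite-timescale trick for switching the disk off are our assumptions.

```python
import numpy as np
import rebound
import reboundx

rng = np.random.default_rng(1)

# Toy system just wide of 3:2 (period ratio ~1.54), in yr/AU/Msun units.
sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")
sim.add(m=1.0)                           # star
sim.add(m=3e-5, P=6.5 / 365.25)          # inner planet
sim.add(m=3e-5, P=10.0 / 365.25)         # outer planet
sim.move_to_com()
sim.integrator = "whfast"
sim.dt = 0.05 * 6.5 / 365.25             # 5% of the inner period

# SM-suite draws: log tau_a = U[6, 9] yr, log tau_e = U[4, 7] yr.
tau_a = 10.0 ** rng.uniform(6.0, 9.0)
tau_e = 10.0 ** rng.uniform(4.0, 7.0)

rebx = reboundx.Extras(sim)
mof = rebx.load_force("modify_orbits_forces")
rebx.add_force(mof)
outer = sim.particles[-1]                # damp only the outer planet
outer.params["tau_a"] = -tau_a           # negative -> inward migration
outer.params["tau_e"] = -tau_e           # negative -> eccentricity damping

sim.integrate(5e6)                       # 5 Myr with the "disk" active

# "Turn off" the disk: infinite timescales disable the damping.
outer.params["tau_a"] = -np.inf
outer.params["tau_e"] = -np.inf
sim.integrate(sim.t + 0.25e6)            # final 0.25 Myr stability check
```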
_Long-scale migration:_ Since very few of the **LM** simulations for Kepler-254 remained stable for the full integration time, and only one simulation was in resonance, we are unable to perform any meaningful statistical analysis on this suite. Long-scale migration for Kepler-363 resulted in very few simulations in which \(\phi\) librates, and only 27% of the stable simulations formed a three-body resonant chain.

_Eccentricity-damping:_ We find that eccentricity-damping results in the libration of the two-body angles \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\) for both Kepler-254 and Kepler-363 in about half of the simulations, but very rarely results in the libration of \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) or of the three-body angle \(\phi\). For Kepler-254, \(\Theta_{b,c}\) and \(\Theta_{c,d}\) each librate about \(0^{\circ}\) with small amplitudes of \(5.96^{+5.23}_{-0.62}\) and \(5.76^{+9.26}_{-1.21}\), respectively, similar to the centers we recover in Section 3.1 but with significantly smaller amplitudes. For Kepler-363, \(\Theta_{b,c}\) and \(\Theta_{c,d}\) each librate about \(0^{\circ}\) with amplitudes of \(4.22^{+2.45}_{-0.47}\) and \(28.32^{+4.05}_{-2.13}\), respectively, similar to the centers we recover in Section 3.2 but, again, with significantly smaller amplitudes.

In Section 3.1, we confirmed the two-body resonance between Kepler-254c and Kepler-254d, but we were unable to confirm a resonance between the inner planet pair in the system. Since each of the formation pathways resulted in the libration of this angle, each pathway is possible given our current data, and we cannot select one pathway over another as more probable. We find it likely that the resonant chain of Kepler-363 formed through eccentricity-damping, which we discuss in more detail below in Section 4.3.

### Unique Dynamical Configuration

The three planets of Kepler-363 are locked in a three-body resonance, where both two-body angles librate and the three-body angle \(\phi=\Theta_{c,d}-\Theta_{b,c}=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) circulates; the three-body angle even circulates in most of our chain-formation simulations (see Table 3). Typically, the three-body angle will librate if the associated two-body angles librate4, and so we must ask: how could this resonant chain form _without_ the libration of this three-body angle? We also find that the angle \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) always circulates in our simulations and the angle \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) always librates in our simulations. We can use all five resonant angles (\(\Theta_{b,c}\), \(\Theta_{c,d}\), \(\Theta^{\prime}_{b,c}\), \(\Theta^{\prime}_{c,d}\), and \(\phi\)) to study the possible formation history of Kepler-363; a likely formation pathway would result in systems with dynamics similar to those we observe: \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\) librating but \(\Theta^{\prime}_{b,c}\) and \(\phi\) circulating.

Footnote 4: Although the opposite is not true in the case of pure three-body resonance.

_Short-scale migration:_ The angle \(\Theta^{\prime}_{b,c}\) librates in 38% of our **SM** simulations, and \(\Theta^{\prime}_{c,d}\) librates in 82.9% of our **SM** simulations. In addition, the three-body angle \(\phi\) librates in 33% of our simulations.
If \(\Theta^{\prime}_{b,c}\) and \(\phi\) are indeed circulating, we find it unlikely that the resonant chain formed through short-scale migration.

_Long-scale migration:_ As discussed above, it is challenging to form this chain through long-scale migration, as the system becomes unstable without large eccentricity damping. However, we still find numerous sets of initial parameters that result in \(\phi\) librating. It is therefore possible that this resonant chain formed through long-scale migration, but this requires more fine-tuning of parameters.

_Eccentricity damping:_ From our 500 simulations, only seven (1.4%) result in the libration of \(\Theta^{\prime}_{b,c}\), and only three (0.6%) result in the libration of \(\phi\). Of the seven simulations resulting in the libration of \(\Theta^{\prime}_{b,c}\), one simulation has only this angle librating and all other angles circulating, one simulation does not result in \(\Theta_{c,d}\) librating, one simulation results in all angles librating, and the remaining four simulations result in all two-body angles librating. The angle \(\phi\) librates in one simulation where all angles librate and in two simulations where all other angles circulate. We therefore find that it is challenging for \(\Theta^{\prime}_{b,c}\) and \(\phi\) to librate if this chain was formed without any change in the planets' semi-major axes.

Since we are only able to simulate the formation of resonant chains in systems _similar_ to Kepler-363, we caution against claims of one formation mechanism; however, we find that the angles \(\Theta^{\prime}_{b,c}\) and \(\phi\) do not librate in chains formed with eccentricity-damping when the angles \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\) _do_ librate, resulting in the dynamics we observe. Resonant chains formed through short-scale and long-scale migration both result in the libration of \(\Theta^{\prime}_{b,c}\) and \(\phi\) in the majority of simulations where the other angles librate.
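As a concrete illustration of the libration test applied throughout this section, the following minimal sketch classifies one resonant-angle time series; the 5-degree center grid, the circulation margin, and the helper names are illustrative choices rather than the exact procedure behind Tables 2 and 3.

```python
import numpy as np

def wrap(theta, center):
    """Wrap angles (degrees) into [center - 180, center + 180)."""
    return (theta - center + 180.0) % 360.0 + center - 180.0

def libration(theta):
    """Return (librates, center, amplitude) for an angle series in degrees.

    The angle is deemed librating if, for some candidate center, the wrapped
    series never spans the full circle; amplitude is half the peak-to-peak
    excursion. The 90% criterion of the text is then applied across the
    suite of simulations, not within a single run.
    """
    best_c, best_ptp = 0.0, 360.0
    for c in np.arange(0.0, 360.0, 5.0):       # candidate libration centers
        ptp = np.ptp(wrap(theta, c))
        if ptp < best_ptp:
            best_c, best_ptp = c, ptp
    librates = best_ptp < 355.0                # margin below full circulation
    return librates, np.mean(wrap(theta, best_c)), 0.5 * best_ptp

# For Kepler-363, built from mean longitudes lam_* and pericenter longitudes pom_*:
#   theta_bc = wrap(2*lam_c - lam_b - pom_c, 0)
#   theta_cd = wrap(3*lam_d - 2*lam_c - pom_c, 0)
#   phi      = wrap(theta_cd - theta_bc, 180)  # = 3*lam_d - 4*lam_c + lam_b
```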
## 5 Conclusion

Planets in mean motion resonance with one another periodically exchange energy and angular momentum, enabling us to constrain the formation history of individual systems and identify indicators of formation history in other systems. Because the confirmation of resonance requires an in-depth study of a system's dynamics, most resonances have not been confirmed. Here, we perform such a dynamical study of five multi-planet systems, the main targets of this study, whose period ratios suggest they could be in resonance. For each system, we run a suite of \(N\)-body simulations, exploring the full range of possible planetary and orbital parameters as constrained by available data.

Figure 3: Example evolution of systems like Kepler-363, forming the resonant chain through three formation pathways: eccentricity damping only, short-scale migration, and long-scale migration. The period ratio marked as black dots is the ratio between planets b and c, the period ratio marked as green dots is the ratio between planets c and d, and the vertical red line indicates when we "turn off" the damping effects. Although each pathway is able to lock the planets into both two-body resonances, both short-scale migration and long-scale migration result in the libration of the three-body angle \(3\lambda_{d}-4\lambda_{c}+\lambda_{b}\), which we find to be circulating.

Figure 4: Example evolution of systems like Kepler-254, forming the resonant chain through three formation pathways: eccentricity damping only, short-scale migration, and long-scale migration. The period ratio marked as black dots is the ratio between planets b and c, the period ratio marked as green dots is the ratio between planets c and d, and the vertical red line indicates when we "turn off" the damping effects. Both eccentricity damping and short-scale migration are able to lock the planets into both two-body resonances, while only short-scale migration results in the libration of the three-body angle \(3\lambda_{d}-4\lambda_{c}+\lambda_{b}\), which we find to be circulating. Long-scale migration did not lead to enough simulations remaining stable to yield statistically significant results.

We confirm that two planets are in resonance if their critical resonant angle librates in at least 90% of our simulations. Kepler-1542 and K2-32 each contain at least one planet pair that is likely in resonance, but the uncertainties on the planet masses and orbits prohibit us from confirming these resonances. We confirm the 3:2 resonance between Kepler-226c and Kepler-226d, confirm the 3:2 resonance between Kepler-254c and Kepler-254d, and confirm the 1:2:3 resonant chain between the three planets of Kepler-363. For each of these systems, we find that the three-body critical angle \(\phi=\Theta_{c,d}-\Theta_{b,c}=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) circulates in all of our simulations, even when both \(\Theta_{c,d}\) and \(\Theta_{b,c}\) librate. All five of these systems could benefit from additional data and certainly additional analysis, as their proximity to resonance likely results in measurable TTVs.

We explore the dynamical history of Kepler-254 and Kepler-363, integrating the systems through three potential resonant chain formation pathways: long-scale migration, short-scale migration, and eccentricity damping only. Under our simple migration model, both migration pathways lead to the libration of the three-body angle, suggesting that the resonances in these two systems are more likely to have formed in the absence of migration. Our methods to confirm or constrain resonances within systems in the absence of high-precision data can be applied to other systems with near-resonant planets and would provide a list of potential new resonances that require further analysis. With the confirmation of new resonances and particularly new resonant chains, we are able to fully leverage the benefits of resonances and constrain the formation history of exoplanetary systems.

We thank the anonymous referee for the constructive review that improved this work.
The authors acknowledge use of the ELSA high performance computing cluster at The College of New Jersey for conducting the research reported in this paper. This cluster is funded in part by the National Science Foundation under grant numbers OAC-1826915 and OAC-1828163.

\begin{table}
\begin{tabular}{l l l l l l l l}
\hline \hline
\multicolumn{1}{c}{Angle} & \multicolumn{1}{c}{\% librating} & \multicolumn{1}{c}{Center [\({}^{\circ}\)]} & \multicolumn{1}{c}{Amplitude [\({}^{\circ}\)]} & \multicolumn{1}{c}{Angle} & \multicolumn{1}{c}{\% librating} & \multicolumn{1}{c}{Center [\({}^{\circ}\)]} & \multicolumn{1}{c}{Amplitude [\({}^{\circ}\)]} \\
\hline
Kepler-254 & **SM** & stable = 481/500 & res = 368/500 & Kepler-363 & **SM** & stable = 474/500 & res = 356/500 \\
\(\phi_{1}\) & 8.11 & \(180.32^{+2.43}_{-2.20}\) & \(15.33^{+18.7}_{-7.98}\) & \(\phi_{1}\) & 11.6 & \(90.99^{+19.23}_{-17.78}\) & \(13.64^{+17.28}_{-9.18}\) \\
 & 17.0 & \(81.75^{+28.74}_{-11.12}\) & \(14.07^{+19.00}_{-8.12}\) & & 6.3 & \(180.37^{+10.54}_{-2.58}\) & \(20.86^{+30.08}_{-10.24}\) \\
 & 9.1 & \(287.56^{+7.94}_{-6.50}\) & \(22.48^{+19.64}_{-12.91}\) & & 7.4 & \(285.27^{+5.85}_{-12.72}\) & \(20.93^{+11.22}_{-12.03}\) \\
\(\Theta_{b,c}\) & 49.9 & \(0.11^{+3.38}_{-0.85}\) & \(7.95^{+37.17}_{-4.04}\) & \(\Theta_{b,c}\) & 8.6 & \(-48.88^{+14.25}_{-8.56}\) & \(14.05^{+9.50}_{-7.08}\) \\
 & 10.8 & \(-48.62^{+11.01}_{-8.02}\) & \(10.64^{+8.53}_{-5.50}\) & & 68.6 & \(0.17^{+16.90}_{-1.06}\) & \(12.09^{+28.53}_{-7.99}\) \\
 & 16.2 & \(45.53^{+9.70}_{-10.71}\) & \(11.48^{+8.50}_{-6.27}\) & \(\Theta_{c,d}\) & 82.1 & \(0.02^{+2.63}_{-1.71}\) & \(15.72^{+14.21}_{-15.57}\) \\
\(\Theta_{c,d}\) & 84.0 & \(0.04^{+3.40}_{-1.09}\) & \(7.44^{+20.09}_{-5.34}\) & \(\Theta^{\prime}_{b,c}\) & 13.1 & \(279.32^{+16.34}_{-26.93}\) & \(8.53^{+9.41}_{-4.82}\) \\
\(\Theta^{\prime}_{b,c}\) & 21.2 & \(287.45^{+6.09}_{-35.36}\) & \(7.24^{+8.55}_{-3.98}\) & & 8.4 & \(65.87^{+10.33}_{-6.30}\) & \(11.62^{+6.35}_{-5.09}\) \\
 & 11.6 & \(66.59^{+8.83}_{-4.17}\) & \(10.12^{+8.16}_{-5.96}\) & & 7.2 & \(179.04^{+2.10}_{-21.20}\) & \(31.02^{+43.83}_{-21.05}\) \\
 & 6.7 & \(179.70^{+2.10}_{-2.93}\) & \(15.55^{+36.56}_{-8.64}\) & \(\Theta^{\prime}_{c,d}\) & 82.9 & \(179.97^{+1.42}_{-1.58}\) & \(11.30^{+18.78}_{-8.21}\) \\
\(\Theta^{\prime}_{c,d}\) & 81.3 & \(179.99^{+14.76}_{-1.02}\) & \(7.78^{+18.20}_{-6.24}\) & & & & \\
\hline
Kepler-254 & **ECC** & stable = 500/500 & res = 238/500 & Kepler-363 & **ECC** & stable = 500/500 & res = 229/500 \\
\(\phi_{1}\) & 0.0 & & & \(\phi_{1}\) & 0.6 & & \\
\(\Theta_{b,c}\) & 47.6 & \(-0.017^{+0.587}_{-0.541}\) & \(5.96^{+5.23}_{-0.62}\) & \(\Theta_{b,c}\) & 46.8 & \(0.019^{+0.392}_{-0.412}\) & \(4.22^{+2.45}_{-0.47}\) \\
\(\Theta_{c,d}\) & 62.0 & \(-0.013^{+0.478}_{-0.522}\) & \(5.76^{+9.26}_{-1.21}\) & \(\Theta_{c,d}\) & 47.0 & \(-0.031^{+2.242}_{-1.735}\) & \(28.32^{+4.05}_{-2.13}\) \\
\(\Theta^{\prime}_{b,c}\) & 0.0 & & & \(\Theta^{\prime}_{b,c}\) & 1.4 & & \\
\(\Theta^{\prime}_{c,d}\) & 62.2 & \(180.00^{+0.31}_{-0.32}\) & \(3.69^{+0.61}_{-0.83}\) & \(\Theta^{\prime}_{c,d}\) & 47.2 & \(179.99^{+0.84}_{-0.79}\) & \(15.57^{+14.46}_{-0.89}\) \\
\hline
Kepler-254 & **LM** & stable = 13/500 & res = 1/500 & Kepler-363 & **LM** & stable = 73/500 & res = 21/500 \\
\(\phi_{1}\) & 0.2 & & & \(\phi_{1}\) & 1.2 & & \\
\(\Theta_{b,c}\) & 0.2 & & & \(\Theta_{b,c}\) & 6.4 & \(0.28^{+6.81}_{-2.38}\) & \(9.16^{+50.69}_{-5.05}\) \\
\(\Theta_{c,d}\) & 0.2 & & & \(\Theta_{c,d}\) & 6.8 & \(0.19^{+5.94}_{-1.98}\) & \(40.42^{+38.66}_{-38.84}\) \\
\(\Theta^{\prime}_{b,c}\) & 0.2 & & & \(\Theta^{\prime}_{b,c}\) & 1.2 & & \\
\(\Theta^{\prime}_{c,d}\) & 0.2 & & & \(\Theta^{\prime}_{c,d}\) & 8.8 & \(179.99^{+2.97}_{-3.84}\) & \(26.93^{+20.19}_{-23.70}\) \\
\hline
\end{tabular}
Note. – For each system, the number of simulations that survived the full integration, the number of simulations where all planets participate in a 1:2:3 chain, then, for each angle, the percentage of simulations where the angle librates and the center and amplitude of the libration. We do not include center or amplitude data for angles librating in fewer than 5% of simulations.
\end{table}
Resonant planets can have orbital periods that are near integer ratios of one another, but resonance is determined by other factors, such as the planets' orbital orientations and masses, and must therefore be confirmed by investigating the system's dynamics. Here, we carry out such an investigation for five multi-planet systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. For each system, we run N-body simulations spanning the range of orbital and planetary properties consistent with the data. We investigate the stability of these systems and search for resonances based on the libration of critical resonant angles. We find strong evidence for two-body resonances in each system; we confirm the 3:2 resonance between Kepler-226c and Kepler-226d, and Kepler-25
2307.00104
Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems
This research paper addresses the challenge of detecting obscured wildfires (when the fire flames are covered by trees, smoke, clouds, and other natural barriers) in real time using drones equipped only with RGB cameras. We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences. Our approach utilizes a deep convolutional encoder-decoder architecture with a pre-trained CNN encoder and 3D convolutions for decoding, using sequential stacking of features to exploit temporal variations. The predicted fire locations can assist drones in effectively combating forest fires and pinpointing fire-retardant chemical drops on exact flame locations. We applied our method to a curated dataset derived from the FLAME2 dataset that includes RGB video along with IR video to determine the ground truth. Our proposed method has the unique property of detecting obscured fire and achieves a Dice score of 85.88%, a high precision of 92.47%, and a classification accuracy of 90.67% on test data, showing promising results when inspected visually. Indeed, our method outperforms other methods by a significant margin in terms of video-level fire classification, as we obtained about 100% accuracy using MobileNet+CBAM as the encoder backbone.
Uma Meleti, Abolfazl Razi
2023-06-30T19:45:43
http://arxiv.org/abs/2307.00104v1
Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems

###### Abstract

This research paper addresses the challenge of detecting obscured wildfires (when the fire flames are covered by trees, smoke, clouds, and other natural barriers) in real time using drones equipped only with RGB cameras. We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences. Our approach utilizes a deep convolutional encoder-decoder architecture with a pre-trained CNN encoder and 3D convolutions for decoding, using sequential stacking of features to exploit temporal variations. The predicted fire locations can assist drones in effectively combating forest fires and pinpointing fire-retardant chemical drops on exact flame locations. We applied our method to a curated dataset derived from the FLAME2 dataset that includes RGB video along with IR video to determine the ground truth. Our proposed method has the unique property of detecting obscured fire and achieves a Dice score of 85.88%, a high precision of 92.47%, and a classification accuracy of 90.67% on test data, showing promising results when inspected visually. Indeed, our method outperforms other methods by a significant margin in terms of video-level fire classification, as we obtained about 100% accuracy using MobileNet+CBAM as the encoder backbone.

Wildfire Monitoring, Obscured Fire Detection, Unmanned Aerial Vehicles, Temporal Video Analysis

## I Introduction

Wildfires have become prevalent and destructive in many parts of the world. Regardless of the cause, wildfires can have severe consequences, including the loss of human lives, destruction of property, disruption of wildlife, food production, and crop supply chains, as well as significant environmental damage. Once a wildfire starts, there are various ways to monitor and control it, including observation towers, direct human intervention, satellite imaging, and manned aircraft. Using drones is one of the most efficient ways of fire monitoring, owing to their low operation cost, customizable sensing and imaging, flexible operation, and ease of use in harsh environments enabled by advanced flight-control features and partial autonomy (e.g., safe auto-landing). One of the most significant advantages of using drones in wildfire fighting is the ability to gather real-time data about the fire's behavior. Equipped with cameras and other sensors, drones can fly over the fire and capture relevant information. Drones can also be equipped with water tanks or fire extinguishers containing fire-retardant chemicals, such as carbon dioxide (CO2) and potassium bicarbonate (KHCO3), or evaporating agents such as bromochlorodifluoromethane (CF2ClBr), to be dropped on hot spots to create fire breaks. The efficient utilization of drones will significantly advance the control of these fires. Targeting places where actual burning is happening, instead of blindly spraying fire-retardant gases everywhere, will help put these fires out quickly and efficiently. Locating these burning spots in real time is very difficult, especially when fire flames are obscured by thick smoke. An infrared camera can help find these hidden fires, but IR cameras are expensive. We devised a methodology that uses an RGB camera feed, analyzes the video frames sequentially, and detects the obscured fires using temporal features, such as smoke patterns, extracted from video frames.
We show that such features can be indicative of the fire's exact location and temporal behavior. We have structured the problem as semantic segmentation of the obscured fire by analyzing the sequence of frames in the video. A deep convolutional neural network (CNN) is designed that uses a pre-trained CNN architecture as an encoder to extract features of video frames and passes sequentially stacked features to the decoding stage, which uses 3D convolutions [1] to analyze these features and predict the burning location. UAVs and drones can use this predicted information for more guided and informed fire monitoring and control. Specifically, we propose a novel method for obscured fire detection that can be used for other applications beyond forest fire management. To this end, we curated a dataset from the existing FLAME2 [2] dataset by selecting the video frames from the original data where the drone is stationary and there is high synchronization with the IR images, to avoid misalignment errors. We also use the corresponding IR videos to extract ground truth for our task by performing a series of image processing operations. We have visually verified the correspondence of the RGB video with the processed IR video for synchronization. We highlight the unique features of our method compared to previous methods, then proceed by elaborating on the details of the generated dataset and its preparation method. We then elucidate the details of the proposed deep learning (DL) architecture and analyze the obtained results.

## II Related work

Previous works on fire detection are mainly based on image-based techniques such as classification, object detection, and semantic segmentation of visible fire imagery. Wonjae Lee et al. proposed a wildfire detection system that classifies the presence of fire in images and evaluated the performance of AlexNet, GoogLeNet, and VGG, along with their modified variants [3]. Zhentian et al. have trained YOLOv3 for fire detection and reported a recognition rate of 91% [4]. An ensemble-based method is proposed in [5], using YOLOv5 and EfficientDet for object detection and EfficientNet to capture global information about the fire. Their study showed a 51.3% decrease in false positive rate on three public datasets. Yo Zhao et al. have proposed a deep learning architecture, called Fire-Net, built by stacking convolutional and pooling layers for fast localization and segmentation of fire in aerial images, with an accuracy of 98% on the standard 'UAV_Fire' dataset [6]. A similar method is proposed in [7] and [8], using a deep learning architecture to classify frames into "fire with smoke", "fire with no smoke", and "no fire" on the FLAME datasets [2, 9]. However, most of these methods are limited to images, whereas real-time data for fire detection mostly comes as a video feed. Further, video feeds contain temporal patterns of smoke that facilitate locating the origin of the smoke, which is the fire location, and distinguishing it from clouds and other white patterns. This concept is used as a key idea in our method to detect obscured fire positions. A few works take a slightly different approach and analyze fire images patch by patch instead of a one-shot analysis of the entire image. For instance, a CNN-based deep learning method is proposed in [10] which classifies the image first and then performs a patch-level analysis to offer more detailed information.
They applied their method to video frames to perform patch-wise detection and reported a 97% detection accuracy on their own dataset. Still, this method does not consider the temporal relationship for fire classification since it treats video frames as still images. In a similar work, Gwangsu Kim et al. proposed an algorithm that collects features of video frames using a pre-trained VGG and stacks them together to be passed through a series of fully connected layers to classify the presence of fire in video clips. However, this method restricts fire classification to periods of visible fire and is therefore inefficient in capturing obscured fire. Anshuman et al. [11] have proposed SmokeyNet, a deep learning architecture that stacks a CNN, an LSTM, and a Vision Transformer to detect smoke in video feeds captured by stand-alone cameras. However, this method is not directly applicable to aerial imagery. Some other research works take advantage of infrared (IR) cameras for more accurate fire positioning. For instance, Chi Yuan et al. proposed an algorithm that uses brightness and motion cues with histogram-based segmentation and optical flow to segment fire in IR images [12]. Another example is Norsuzial et al.'s work, an image-processing approach that converts IR images to the YCbCr color space and uses a wavelet analyzer to detect fire [13]. Also, a DL architecture is proposed in [14] that analyzes dual-feed imagery captured by side-by-side RGB and IR cameras for precise fire positioning. Although these methods yield high accuracy by taking advantage of the thermal information captured by IR cameras, they incur an extra monitoring cost due to their reliance on pricey IR cameras. Also, they are not suitable for processing existing drone-based and satellite-based datasets that include only RGB imagery. In contrast to all the above, our method uses only RGB videos for detecting both visible and obscured fire flames in an economical way.

## III Data Preparation

In this study, we have used the publicly available FLAME2 dataset [9], which consists of 7 video pairs of RGB and corresponding infrared heat maps. Out of those, we have employed five relevant videos in our simulations because these videos contain both visible fire and obscured fire, appropriate for our test. The videos were taken over a prescribed-burn region and capture forest burning with smoke. The drone moves around the area, covering different parts of the woods. For experimentation purposes, we have carefully cropped the parts of the video where the camera is relatively stationary and there is high alignment between the RGB and IR camera viewpoints. The selected video segments are split into clips of 20 consecutive frames to train our deep neural network, where each clip is considered a training sample.

## IV Method

### _Data Pre-Processing: Using IR to Label RGB images_

The IR images consist of heat maps corresponding to the temperature of different regions in the image. The place where fire is present generally has a high temperature, and the corresponding pixel values in the heat map are close to the maximum. We have extracted the ground truth for the training data by processing the IR image with a series of hand-crafted image processing methods. The set of operations performed on the IR image is shown below.
IR Image \(\rightarrow\) Smooth Image \((5\times 5)\rightarrow\) Hard Thresholding \(\rightarrow\) Dilation \((5\times 5\), 2 times) \(\rightarrow\) Fill (flood fill) \(\rightarrow\) Erosion \((5\times 5\), 1 time) \(\rightarrow\) Remove small objects \((200\) px\()\)\(\rightarrow\) Ground Truth.

We initially smooth the image using a low-pass filter to remove noise, then use hard thresholding to select the regions of high temperature values corresponding to fire. The resulting image is dilated to fill the small spaces and to make the fire boundary smooth; the spaces not filled by the previous operations are filled using flood fill, which results in a complete blob at the fire location. The image is then eroded to reverse the effect of the dilation applied earlier. Finally, small blobs that are likely to represent noise are removed. Note that we use IR images to identify ground-truth fire locations and train the model, but at runtime (new monitoring tasks) we only use RGB images, so our method does not require expensive IR cameras on site.

### _Ground Truth Approximation_

The main goal is to take a sequence of input frames and predict where the fire hides. Since we obtain ground truth from the IR camera feed, every video frame has a pixel-wise map representing the ground truth. However, we need to define a single ground truth for each sample (i.e., 20-frame clip). To this end, we have approximated the ground truth by applying majority voting to the pixel labels obtained from the 20 IR video frames. More specifically, we have \[\text{Final Label}(L^{*}_{i,j})=\text{majority\_class}(L^{1}_{i,j},L^{2}_{i,j}, \dots,L^{\text{seq\_len}}_{i,j}), \tag{1}\] where \((i,j)\) determines the pixel location, and the superscript is the frame number within the clip. In our case, the label is binary, so the class is either 0 or 1.
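The following is a minimal sketch of the labeling pipeline and majority vote above, assuming OpenCV and NumPy; the threshold value, the border-is-background assumption for the flood fill, and the function names are illustrative choices rather than the authors' code.

```python
import cv2
import numpy as np

def ir_to_mask(ir_frame, thresh=200):
    """Approximate the fire mask from one 8-bit grayscale IR heat-map frame."""
    smooth = cv2.GaussianBlur(ir_frame, (5, 5), 0)                 # 5x5 low-pass smoothing
    _, mask = cv2.threshold(smooth, thresh, 1, cv2.THRESH_BINARY)  # hard thresholding
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=2)                  # close small gaps
    # Flood fill the background from the border (assumed non-fire); pixels the
    # fill cannot reach are interior holes, which we then set to foreground.
    filled = mask.copy()
    ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
    cv2.floodFill(filled, ff_mask, (0, 0), 1)
    mask = mask | (1 - filled)                                     # fill interior holes
    mask = cv2.erode(mask, kernel, iterations=1)                   # undo the dilation growth
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    for i in range(1, n):                                          # drop blobs under 200 px
        if stats[i, cv2.CC_STAT_AREA] < 200:
            mask[labels == i] = 0
    return mask

def clip_label(ir_clip, thresh=200):
    """Per-pixel majority vote over a 20-frame clip, as in Eq. (1)."""
    masks = np.stack([ir_to_mask(f, thresh) for f in ir_clip])     # (seq_len, H, W)
    return (masks.mean(axis=0) > 0.5).astype(np.uint8)
```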
### _Network Architecture_

We present the overview of our architecture in Fig. 3. The architecture consists of a pre-trained encoder that encodes features from the video frames, along with a 3D decoder that decodes information from the volume of features. We use VGG16 [15] as the encoder and pass a sequence of video frames, then collect each frame's features at different resolutions of the encoder and stack them to pass to the 3D decoder, which processes these volumes of features to predict the segmentation map of the hidden fire. The decoder consists of two parts; the first part decodes information along both the image and time axes, but more emphasis is put on summarizing the semantic information of the image, which is then used by the second part. This encourages the second part to focus on capturing the relationships between the semantic features of the frames along the time axis.

#### Decoder: Part 1

The design of the first part of the decoder is inspired by the U-Net architecture [16], where features from multiple resolutions are merged into the decoder. We extract features of each frame at resolutions of (HxW)/2, (HxW)/4, (HxW)/8, (HxW)/16, and (HxW)/32, where H and W are the height and width of the input frame. The extracted features are stacked for a sequence of frames. At each resolution, the volume of features is processed by a 3x3x3 convolution block followed by an attention block, which learns the most informative feature representations while reducing the feature space. This architecture retains the dimension of the input in both the feature domain and the time axis. A deconvolution layer is applied to the bottleneck of the encoder with a (1x2x2) transposed convolution, which upsamples the feature map and increases the resolution by a factor of 2. The upsampled feature map is then concatenated and fed into the convolution, attention, and deconvolution layers. This process is repeated until the output resolution becomes exactly equal to the input resolution. The ultimate output dimension of this block is (batch_size, n_classes, seq_length, H x W).

#### Decoder: Part 2

The Decoder2 consists of a series of Time blocks and a final convolution layer. The Time block consists of consecutive operations of 3D convolution, batch normalization [17], and ReLU activation. We have chosen a kernel size of (4x1x1) for the convolution so that every block captures information from 4 consecutive frames. Here we have used a 1x1 kernel size in the feature space and a size of 4 along the time axis; the idea is to put more emphasis on the time axis than on the feature space. In our experiments, we consider a sequence length of 20 frames as input to predict the output; we add 6 Time blocks that reduce the time dimension and reach a resolution of (2 x classes x H x W), and a final convolution layer is added to reduce this to the final resolution of 1 x classes x H x W.
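To make the shape bookkeeping above concrete, here is a minimal PyTorch sketch of Decoder Part 2 consistent with the stated dimensions; the channel count of 2 is an illustrative assumption, not necessarily the value used in the paper.

```python
import torch
import torch.nn as nn

class TimeBlock(nn.Module):
    """One Time block: a (4,1,1) 3D convolution over the time axis, followed by
    BatchNorm and ReLU; each block shrinks the sequence length by 3."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=(4, 1, 1)),  # mixes 4 consecutive frames
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):                 # x: (batch, channels, seq_len, H, W)
        return self.block(x)

# Six Time blocks take seq_len 20 -> 17 -> 14 -> 11 -> 8 -> 5 -> 2, and a final
# (2,1,1) convolution reduces the time axis to 1, matching the text.
decoder2 = nn.Sequential(*[TimeBlock(2) for _ in range(6)],
                         nn.Conv3d(2, 2, kernel_size=(2, 1, 1)))
x = torch.randn(1, 2, 20, 64, 64)
print(decoder2(x).shape)                  # torch.Size([1, 2, 1, 64, 64])
```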
### _Loss Function_

We use the Dice loss to measure the alignment between the ground truth and the fire regions detected by the architecture in order to train the network. The Dice coefficient measures the alignment (similarity) between two corresponding segments by computing the overlap coefficient, which ranges from 0 to 1, with 1 indicating a perfect match between the two segments. The Dice loss is defined as one minus the Dice coefficient, with the objective of minimizing the loss during training. More specifically, we have \[DiceLoss =1-\frac{2\times\text{Intersection}}{\text{Predicted}+\text{ Ground Truth}}=1-\frac{2\sum_{i=1}^{n}p_{i}g_{i}+\epsilon}{\sum_{i=1}^{n}p_{i}^ {2}+\sum_{i=1}^{n}g_{i}^{2}+\epsilon}, \tag{2}\] where \(p_{i}\) and \(g_{i}\) are the predicted and ground truth segmentation masks, respectively, for the \(i\)-th pixel in the image. The summations are taken over all \(n\) pixels in the image. The \(\epsilon\) term is a small constant added to the denominator to prevent division by zero.

Fig. 1: Obtaining ground-truth fire locations from IR images.

Fig. 2: Ground Truth Approximation.

## V Experiments

In this section, we present the simulation results using the FLAME2 dataset, which consists of RGB and IR images; we train with pre-processed videos as explained in the Data Preparation step. We use 354 videos for training and 155 videos for testing. All the training and testing videos are independent, with non-overlapping frames.

### _Training_

Our model is implemented in PyTorch [16] using a Linux machine with a Tesla A-100 40 GB GPU. The models were tuned for the best hyperparameters. We used a step learning-rate schedule with an initial learning rate of 1e-2 and the Adam optimizer [18]. The models were trained for 300 epochs with a batch size of 5.

### _Inference_

Inference in real time, where videos are lengthy, is performed by taking frames with a window size of 20 and sliding the window over the video with a stride of one.

### _Evaluation Metrics_

We use a set of metrics to evaluate the quality of fire detection. We use the Dice score (presented in (2)) to assess the alignment quality between the ground truth and the detected fire region. This assessment is particularly important when part of the fire is obscured by thick smoke, as it demonstrates to what extent our model is capable of detecting such fire regions. Another metric we use is blob-wise precision, to ensure the correctness of our predictions; it is calculated by taking each blob in the ground truth and prediction and, if the intersection area is greater than 30%, counting it as a true positive, else a false positive. Precision is calculated using the formula \[\text{Precision}=\frac{\text{True~{}Positives}}{\text{True~{}Positives}+ \text{False~{}Positives}}.\] We also calculate clip-level classification accuracy to evaluate whether a video containing fire is classified as fire or not. This is calculated by counting the number of fire spots in the video and comparing it with the prediction; if more than 30% of the spots are predicted, we classify the video as fire, else non-fire.
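A minimal sketch of the Dice loss of Eq. (2) and one plausible reading of the blob-wise precision rule follows, assuming PyTorch, NumPy, and OpenCV; measuring the 30% overlap against each predicted blob's own area is our interpretation of the matching rule, not a detail confirmed by the paper.

```python
import numpy as np
import torch
import cv2

def dice_loss(pred, target, eps=1e-6):
    """Dice loss of Eq. (2); pred holds per-pixel probabilities in [0, 1]."""
    p, g = pred.flatten(), target.flatten()
    return 1.0 - (2.0 * (p * g).sum() + eps) / ((p * p).sum() + (g * g).sum() + eps)

def blob_precision(gt_mask, pred_mask, overlap=0.30):
    """Blob-wise precision: a predicted blob is a true positive if more than
    `overlap` of its area intersects the ground-truth fire mask."""
    n, labels = cv2.connectedComponents(pred_mask.astype(np.uint8))
    tp = fp = 0
    for i in range(1, n):                      # label 0 is the background
        blob = labels == i
        if (gt_mask[blob] > 0).mean() > overlap:
            tp += 1
        else:
            fp += 1
    return tp / max(tp + fp, 1)
```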
### _Quantitative Results_

The quantitative results are shown in Table I. We examined four types of backbones (VGG16 [15], ResNet [19], EfficientNet [20], MobileNet [21]) and two types of attention modules: Spatial and Channel Squeeze & Excitation (scSE) [22] and the Convolutional Block Attention Module (CBAM) [23]. Across all backbones, VGG16 shows the highest performance in terms of fire-region detection alignment (Dice), but EfficientNet-b0 with scSE shows a similar Dice score and superior blob-wise precision. ResNet18+CBAM and MobileNet+CBAM achieve 100% classification accuracy. This shows that our architecture is flexible and that different types of pre-trained architectures can be employed as the encoder backbone.

Fig. 3: The overall architecture of the proposed deep learning network for obscured fire detection.

Fig. 4: Inference method: each output map is the result of sequential processing of 20 preceding input frames.

### _Qualitative Results_

Fig. 5 shows a sample output of our model applied to consecutive video frames (left to right); the top row presents the IR images from which our ground truth is extracted (annotated in white), the middle row corresponds to the prediction (annotated in red), and the last row is zoomed in on the annotated region (yellow). At T=1, fire is slowly starting under the tree, and the volume of smoke grows gradually in the next frames. Initially, the model does not detect the obscured fire. However, as time passes, the temporal analysis of the growing fire flames enables the model to detect the obscured flame (shown in red). Fig. 6 shows the output at a particular frame: the left image is the RGB input, the middle is the IR image, and the right includes the ground-truth fire region (green line) and the predicted fire region (red line). This image demonstrates that our model detects both visible fire and obscured fire with near-accurate boundaries.

Fig. 5: Sample output of the model applied on consecutive frames from left to right; top: IR with ground truth (white boundary); middle: prediction on RGB image; bottom: zoomed RGB (red: prediction boundary).

Fig. 6: Left: RGB; middle: IR; right: ground truth (green) and prediction (red) annotated.

## VI Discussion

The quantitative and qualitative performance of our model yields promising results with various backbones, showing that the proposed architecture is flexible in adopting existing and future pre-trained backbones. We used the IR images offered in the FLAME2 dataset to determine the ground-truth fire regions. It is noteworthy that the temperature values of the IR images are calibrated within each frame and do not reflect absolute temperature, which should be taken into account in the labeling process. This study can trigger multiple future works. For instance, further research can focus on refining and expanding our methodology, considering other environmental factors that may affect fire behavior. The application of our approach in practical scenarios, the development of onboard processing software, and integration with existing wildfire management systems can provide valuable insights for future developments.

## VII Conclusion

In this paper, a novel approach is proposed for detecting obscured fires in real time using video feeds captured by drones equipped only with RGB cameras. The key idea was training a model that treats a video clip as a single sample and processes its video frames sequentially to identify temporal smoke patterns that can be indicative of obscured fires. To this end, we introduced a new deep-learning architecture that leverages pre-trained CNN architectures and 3D convolutions to create a temporal feature map and uses attention modules to predict fire regions through the sequential analysis of video frames. We evaluated our method on a curated FLAME2 dataset where the IR videos are used to discover the ground-truth fire regions and showed that our method not only improves fire detection accuracy compared to the state of the art (achieving near 100% accuracy), but also demonstrates great success in detecting invisible and covered fire-region borders (about 85% Dice score) even when they are obscured by trees and smoke patterns. This methodology allows firefighting drones to combat wildfires more efficiently by targeting visible and invisible fire hotspots. Also, our method helps detect fire regions precisely without the need for IR cameras (in the test phase), which significantly reduces fire monitoring costs.
This research paper addresses the challenge of detecting obscured wildfires (when the flames are covered by trees, smoke, clouds, and other natural barriers) in real time using drones equipped only with RGB cameras. We propose a semantic segmentation method based on the temporal analysis of smoke patterns in video sequences. The approach uses a deep convolutional encoder-decoder architecture, with a pre-trained CNN encoder, 3D convolutions for decoding, and sequential stacking of features to exploit temporal variations. The predicted fire locations can help drones combat forest fires effectively and drop fire-retardant chemicals on the exact flame locations. We applied our method to a curated dataset derived from the FLAME2 dataset, which
2301.13590
Universal frequency-preserving KAM persistence via modulus of continuity
In this paper, we study the persistence and remaining regularity of a KAM invariant torus under sufficiently small perturbations of a Hamiltonian function together with its derivatives, in the sense of finite smoothness with a modulus of continuity, as a generalization of classical H\"{o}lder continuous circumstances. To achieve this goal, we extend the Jackson approximation theorem to the case of modulus of continuity, and establish a corresponding regularity theorem adapted to the new iterative scheme. Via these tools, we establish a KAM theorem with sharp differentiability hypotheses, which asserts that the persistent torus keeps the prescribed universal Diophantine frequency unchanged and reaches regularity of the persistent KAM torus beyond H\"{o}lder's type.
Zhicheng Tong, Yong Li
2023-01-31T12:43:28
http://arxiv.org/abs/2301.13590v1
# Universal frequency-preserving KAM persistence via modulus of continuity

###### Abstract

In this paper, we study the persistence and remaining regularity of a KAM invariant torus under sufficiently small perturbations of a Hamiltonian function together with its derivatives, in the sense of finite smoothness with a modulus of continuity, as a generalization of classical Hölder continuous circumstances. To achieve this goal, we extend the Jackson approximation theorem to the case of modulus of continuity, and establish a corresponding regularity theorem adapted to the new iterative scheme. Via these tools, we establish a KAM theorem with sharp differentiability hypotheses, which asserts that the persistent torus keeps the prescribed universal Diophantine frequency unchanged and reaches regularity of the persistent KAM torus beyond Hölder's type.

keywords: Hamiltonian system, KAM torus, frequency-preserving, modulus of continuity, Jackson approximation theorem. Msc: [2020] 37J40, 70K60

## 1 Introduction

The KAM theory mainly concerns the preservation of invariant tori of a Hamiltonian function \(H(y)\) under small perturbations (i.e., \(H(y)\to H\left(x,y,\varepsilon\right)\), with \(n\in\mathbb{N}^{+}\) degrees of freedom and \(\varepsilon>0\) sufficiently small), and has a history of more than sixty years. See, for instance, Kolmogorov and Arnold [2; 3; 4], Moser [13; 12], Pöschel [16; 17], etc. As is well known, the frequency \(\omega=H_{y}\left(y\right)\) of the unperturbed system is often required to satisfy the following classical Diophantine condition (or to be of Diophantine class \(\tau\)) \[\left|\left\langle\tilde{k},\omega\right\rangle\right|\geq\alpha_{*}|\tilde{k }|^{-\tau},\ \ \forall 0\neq\tilde{k}\in\mathbb{Z}^{n} \tag{1.1}\] with respect to \(\tau\geq n-1\) and some \(\alpha_{*}>0\), where \(|\tilde{k}|:=\sum_{j=1}^{n}|\tilde{k}_{j}|\). Otherwise, the torus may break no matter how small the perturbation is. Furthermore, to ensure the KAM persistence one is also interested in the minimal order of derivatives required for \(H\left(x,y,\varepsilon\right)\). Much effort has been devoted to this problem in terms of Hölder continuity, including constructing counterexamples and reducing the differentiability hypotheses. For some classic foundational work, see Moser [14], Jacobowitz [9], Zehnder [22; 23], Mather [11], Herman [7; 8], Salamon [19], etc. It is worth mentioning that very recently, Pöschel [18] obtained a KAM theorem on the \(n\)-dimensional torus (without action variables) based on a frequency of Diophantine class \(\tau=n-1\) in (1.1). Specifically, he pointed out that the derivatives of order \(n\) need not be continuous, but rather \(L^{2}\) in a certain strong sense. Returning to our concern, Hamiltonian systems with action-angle variables, it has long been conjectured that the minimum regularity requirement for the Hamiltonian function \(H\) is at least \(C^{2n}\). Following the idea of Moser, the best known Hölder case \(C^{\ell}\) with \(\ell>2\tau+2>2n\) was established by Salamon in [19], where the prescribed frequency is of Diophantine class \(\tau>n-1\) in (1.1) (such frequencies have full Lebesgue measure, which reveals the universality of the KAM persistence), and the remaining regularity of the KAM torus is also shown to be of Hölder type. More precisely, the resulting solutions are of class \(C^{m}\) with \(0<m<2\ell-2\tau-2\), and the function whose graph is the invariant torus is of class \(C^{m+\tau+1}\).
Besides, the differentiability hypothesis is sharp due to the counterexample work of Herman [7, 8] et al., which will be explained later in section 3.2.1. For some new developments of Hölder type, see Bounemoura [5] and Koudjnan [10]. In a setting strictly weaker than Hölder continuity, Albrecht [1] proved a KAM theorem via a strong Diophantine frequency of class \(\tau=n-1\) in (1.1), which claimed that \(C^{2n}\) plus a certain modulus of continuity \(\varpi\) satisfying the classical Dini condition \[\int_{0}^{1}\frac{\varpi\left(x\right)}{x}dx<+\infty \tag{1.2}\] is enough for the KAM persistence. There are continuum many such strong Diophantine frequencies, but they form a set of zero Lebesgue measure (see [15] for details); therefore the corresponding KAM preservation is usually said to be non-universal. To the best of our knowledge, there is no other work on KAM via only modulus of continuity except for [1]. As to universal KAM persistence, the concern of this paper, the best result so far still requires \(C^{2n}\) plus certain Hölder continuity depending on the Diophantine nonresonance. It is therefore natural that one should consider the following questions:

* _Can the Hölder smoothness in Salamon's KAM be further weakened into a general form of modulus of continuity?_
* _If the invariant KAM torus persists, then what kind of smoothness does the torus have (Hölder continuity, or a more general modulus of continuity)?_
* _Can the prescribed universal Diophantine frequency be kept unchanged?_
* _Does there exist a Dini type integrability condition similar to (1.2) that reveals the explicit relation between nonresonance and regularity?_

To answer the above questions, there are at least four difficulties to overcome. Firstly, note that the Jackson approximation theorem for classical Hölder continuity is no longer applicable here; hence it must be extended to approximate the perturbed Hamiltonian function \(H\left(x,y,\varepsilon\right)\) in the sense of modulus of continuity, as a crucial step. Secondly, one must establish a corresponding regularity iteration lemma to study the regularity of the invariant torus and the solution beyond Hölder's type. Thirdly, we need to set up a new KAM iterative scheme and prove its uniform convergence via these tools. Fourthly, it is somewhat difficult to extract from the KAM iteration an integrability condition balancing nonresonance and regularity, as well as to further determine the remaining regularity. Indeed, to achieve the main result theorem 2, we apply theorem 1 to construct a series of analytic approximations to \(H\left(x,y,\varepsilon\right)\) with modulus of continuity, and prove the persistence and regularity of the invariant torus via a modified KAM iteration as well as a generalized Dini type condition. It should be pointed out that our results still admit sharpness at differentiability \(C^{2n}\) due to Herman's work [7, 8], where he considered the nonexistence of an invariant curve for an annulus mapping of Hölder regularity \(C^{3-\varepsilon}\) with any \(\varepsilon\) close to \(0^{+}\); i.e., for \(n=2\), \(C^{2n}=C^{4}\) minus arbitrary Hölder continuity cannot admit KAM persistence. As a further contribution, our theorem 2 applies to a wide range of settings, including non-universal and universal KAM persistence, and reveals the integral relation between regularity and nonresonance.
Apart from the above, it is well known that small divisors must lead to a loss of regularity, and our approach gives general estimates of the KAM remaining regularity without Hölder continuity for the first time. Particularly, as a direct application, our theorem 2 can deal with the case of a general modulus of continuity for \(H\left(x,y,\varepsilon\right)\), such as the Logarithmic Hölder continuity case, i.e., for all \(0<\left|x-\xi\right|+\left|y-\eta\right|\leq 1/2\), \[\left|\partial^{\alpha}H\left(x,y,\varepsilon\right)-\partial^{\alpha}H\left( \xi,\eta,\varepsilon\right)\right|\leq\frac{c}{\left(-\ln\left(\left|x-\xi \right|+\left|y-\eta\right|\right)\right)^{\lambda}}\] with respect to all \(\alpha\in\mathbb{N}^{2n}\) with \(\left|\alpha\right|=2n\), where \(n\geq 2\), \(\lambda>1\), \(c,\varepsilon>0\) are sufficiently small, \(\left(x,y\right)\in\mathbb{T}^{n}\times G\) with \(\mathbb{T}^{n}:=\mathbb{R}^{n}/\mathbb{Z}^{n}\), and \(G\subset\mathbb{R}^{n}\) is a connected closed set with interior points. See section 3 for more details.

This paper is organized as follows. In section 2, we first introduce some notions and properties for modulus of continuity, and establish a Jackson type approximation theorem based on them (the proof is postponed to appendix B). Then we state our main results. Namely, considering that the higher-order derivatives of the Hamiltonian function \(H\) with respect to the action-angle variables are only continuous, we present a KAM theorem (theorem 2) with sharp differentiability hypotheses under certain assumptions, involving a generalized Dini type integrability condition (**H1**). The applications of this theorem are given in section 3, including non-universal (theorem 4) and universal (theorems 5 and 6) KAM persistence. For the former, we reach a conclusion similar to that in [1]. As to the latter, we provide Hölder and Hölder-plus-Logarithmic-Hölder circumstances, aiming to show the importance and universality of theorem 2. In particular, an explicit Hamiltonian function \(H\) is constructed which cannot be studied by KAM theorems for finite smoothness via classical Hölder continuity, but to which the generalized results of this paper apply. Section 4 provides the proof of theorem 2 and is mainly divided into two parts: the first part deals with the modified KAM steps via only modulus of continuity, while the second part is devoted to an iteration theorem (theorem 7) on regularity, which is used to analyze the remaining smoothness of the persistent invariant torus. Sections 5 to 7 present the proofs of theorems 4 to 6 in section 3, respectively.
## 2 Statement of results

We first give some notions, including the modulus of continuity along with the norm based on it, the semi separability which will be used in theorem 1, and the weak homogeneity which will appear in theorem 2. Denote by \(\left|\cdot\right|\) the sup-norm in \(\mathbb{R}^{d}\); the dimension \(d\in\mathbb{N}^{+}\) may vary throughout this paper. We stipulate that, in a limit process, \(f_{1}(x)=\mathcal{O}^{\#}\left(f_{2}(x)\right)\) means there are absolute positive constants \(\ell_{1}\) and \(\ell_{2}\) such that \(\ell_{1}f_{2}\left(x\right)\leq f_{1}\left(x\right)\leq\ell_{2}f_{2}\left(x\right)\); \(f_{1}(x)=\mathcal{O}\left(f_{2}(x)\right)\) means that there exists an absolute positive constant \(\ell_{3}\) such that \(\left|f_{1}(x)\right|\leq\ell_{3}f_{2}(x)\); and finally \(f_{1}(x)\sim f_{2}(x)\) indicates that \(f_{1}(x)\) and \(f_{2}(x)\) are equivalent.

**Definition 2.1**.: _Let \(\varpi(t)>0\) be a nondecreasing continuous function on the interval \((0,\delta]\), for some \(\delta>0\), such that \(\lim\limits_{x\to 0^{+}}\varpi\left(x\right)=0\) and \(\overline{\lim\limits_{x\to 0^{+}}}\,x/\varpi\left(x\right)<+\infty\). Next, we define the following semi-norm and norm for a continuous function \(f\) on \(\mathbb{R}^{n}\) (\(f\in\mathcal{C}^{0}\), for short):_ \[\left[f\right]_{\varpi}:=\sup_{x,y\in\mathbb{R}^{n},\ 0<\left|x-y\right|\leq \delta}\frac{\left|f\left(x\right)-f\left(y\right)\right|}{\varpi\left(\left|x -y\right|\right)},\ \ \left|f\right|_{\mathcal{C}^{0}}:=\sup_{x\in\mathbb{R }^{n}}\left|f\left(x\right)\right|.\] _We say that \(f\) is \(C_{k,\varpi}\) continuous if \(f\) has partial derivatives \(\partial^{\alpha}f\) for \(\left|\alpha\right|\leq k\in\mathbb{N}\) and satisfies_ \[\left\|f\right\|_{\varpi}:=\sum_{\left|\alpha\right|\leq k}\left(\left| \partial^{\alpha}f\right|_{\mathcal{C}^{0}}+\left[\partial^{\alpha}f\right]_{ \varpi}\right)<+\infty. \tag{2.3}\] _Denote by \(C_{k,\varpi}\left(\mathbb{R}^{n}\right)\) the space composed of all functions \(f\) satisfying (2.3)._

Such a function \(\varpi\) is usually referred to as the modulus of continuity of \(f\). It can be seen that the well-known Lipschitz continuity and Hölder continuity are special cases of the above definition. In particular, for \(0<\ell\notin\mathbb{N}^{+}\), we denote by \(f\in C^{\ell}\left(\mathbb{R}^{n}\right)\) the function space in which the highest derivatives in \(\mathbb{R}^{n}\) are Hölder continuous, i.e., the modulus of continuity is of the form \(\varpi_{\mathrm{H}}^{\{\ell\}}(x)\sim x^{\{\ell\}}\), where \(\{\ell\}\in(0,1)\) denotes the fractional part of \(\ell\). As a generalization of classical Hölder continuity, we define the Logarithmic Hölder continuity with index \(\lambda>0\), where \(\varpi_{\mathrm{LH}}^{\lambda}\left(x\right)\sim 1/(-\ln x)^{\lambda}\), and we omit the range \(0<x\ll 1\) without causing ambiguity.

**Remark 2.1**.: _For \(f:\mathbb{R}^{n}\rightarrow\Omega\subset\mathbb{R}^{d}\) with a modulus of continuity \(\varpi\), we modify the above designation to \(C_{k,\varpi}\left(\mathbb{R}^{n},\Omega\right)\)._

**Remark 2.2**.: _It is well known that a mapping defined on a bounded connected closed set in a finite dimensional space must have a modulus of continuity, see [6]. For example, for a function \(f(x)\) defined on \([0,1]\subset\mathbb{R}^{1}\), it automatically admits the modulus of continuity_ \[\omega_{f,\delta}\left(x\right):=\sup_{y\in[0,1],0<\left|x-y\right|\leq\delta} \left|f\left(x\right)-f\left(y\right)\right|.\]
**Definition 2.2**.: _Let \(\varpi_{1}\) and \(\varpi_{2}\) be moduli of continuity on the interval \((0,\delta]\). We say that \(\varpi_{1}\) is weaker (strictly weaker) than \(\varpi_{2}\) if \(\overline{\lim\limits_{x\to 0^{+}}}\,\varpi_{2}\left(x\right)/\varpi_{1} \left(x\right)<+\infty\) (\(=0\))._

**Remark 2.3**.: _Obviously any modulus of continuity is weaker than Lipschitz's type, and the Logarithmic Hölder type \(\varpi_{\mathrm{LH}}^{\lambda}\left(x\right)\sim 1/(-\ln x)^{\lambda}\) with any \(\lambda>0\) is strictly weaker than arbitrary Hölder type \(\varpi_{\mathrm{H}}^{\alpha}\left(x\right)\sim x^{\alpha}\) with any \(0<\alpha<1\)._

**Definition 2.3** (Semi separability).: _We say that \(\varpi\) in definition 2.1 is semi separable if, for \(x\geq 1\), there holds_ \[\psi\left(x\right):=\sup_{0<r<\delta/x}\frac{\varpi\left(rx\right)}{\varpi\left( r\right)}=\mathcal{O}\left(x\right),\ \ x\rightarrow+\infty. \tag{2.4}\]

**Remark 2.4**.: _Semi separability directly leads to \(\varpi\left(rx\right)\leq\varpi\left(r\right)\psi\left(x\right)\) for \(0<rx\leq\delta\), which will be used in the proof of the Jackson type theorem 1 via only modulus of continuity._

**Definition 2.4** (Weak homogeneity).: _A modulus of continuity \(\varpi\) is said to admit weak homogeneity if, for fixed \(0<a<1\), there holds_ \[\overline{\lim_{x\to 0^{+}}\frac{\varpi\left(x\right)}{\varpi\left(ax \right)}}<+\infty. \tag{2.5}\]

It should be emphasized that semi separability and weak homogeneity are universal hypotheses. The Hölder and Lipschitz types automatically admit them. Many moduli of continuity weaker than the Hölder one are semi separable and also admit weak homogeneity; e.g., for the Logarithmic Hölder type \(\varpi_{\mathrm{LH}}^{\lambda}\left(x\right)\sim 1/(-\ln x)^{\lambda}\) with any \(\lambda>0\), one verifies that \(\psi\left(x\right)\sim\left(\ln x\right)^{\lambda}=\mathcal{O}\left(x\right)\) as \(x\rightarrow+\infty\) in (2.4), and \(\overline{\lim_{x\to 0^{+}}}\,\varpi_{\mathrm{LH}}^{\lambda}\left(x \right)/\varpi_{\mathrm{LH}}^{\lambda}\left(ax\right)=1<+\infty\) for all \(0<a<1\) in (2.5). See more implicit examples in lemmas A.1 and A.2; in particular, _it is pointed out that a convex modulus of continuity naturally possesses these two properties._
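For the reader's convenience, we include a short worked verification (added here; it is not part of lemmas A.1 and A.2) of these two properties for the Logarithmic Hölder modulus, assuming \(0<\delta<1\):

```latex
% Semi separability: for x >= 1 and 0 < r < \delta/x, set u = -\ln r >= \ln x - \ln\delta > 0;
% the ratio u/(u - \ln x) is decreasing in u, so it is maximal at u = \ln x - \ln\delta:
\frac{\varpi_{\mathrm{LH}}^{\lambda}(rx)}{\varpi_{\mathrm{LH}}^{\lambda}(r)}
  = \Big(\frac{u}{u-\ln x}\Big)^{\lambda}
  \le \Big(\frac{\ln x-\ln\delta}{-\ln\delta}\Big)^{\lambda}
  = \mathcal{O}\big((\ln x)^{\lambda}\big)
  = \mathcal{O}(x), \quad x\to+\infty.
% Weak homogeneity: for fixed 0 < a < 1,
\frac{\varpi_{\mathrm{LH}}^{\lambda}(x)}{\varpi_{\mathrm{LH}}^{\lambda}(ax)}
  = \Big(\frac{-\ln x-\ln a}{-\ln x}\Big)^{\lambda}
  \longrightarrow 1 < +\infty, \quad x\to 0^{+}.
```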
Next, we give a Jackson type approximation theorem beyond Hölder's type and some related corollaries based on definitions 2.1 and 2.3; their proofs are postponed to appendices B to D, respectively.

**Theorem 1**.: _There is a family of convolution operators_ \[S_{r}f\left(x\right)=r^{-n}\int_{\mathbb{R}^{n}}K\left(r^{-1}\left(x-y\right) \right)f\left(y\right)dy,\ \ 0<r\leq 1,\] _from \(C^{0}\left(\mathbb{R}^{n}\right)\) into the space of entire functions on \(\mathbb{C}^{n}\) with the following property. For every \(k\in\mathbb{N}\), there exists a constant \(c\left(n,k\right)>0\) such that, for every \(f\in C_{k,\varpi}\left(\mathbb{R}^{n}\right)\) with a semi separable modulus of continuity \(\varpi\), every multi-index \(\alpha\in\mathbb{N}^{n}\) with \(\left|\alpha\right|\leq k\), and every \(x\in\mathbb{C}^{n}\) with \(\left|\mathrm{Im}\,x\right|\leq r\), we have_ \[\left|\partial^{\alpha}S_{r}f\left(x\right)-P_{\partial^{\alpha}f,k-\left| \alpha\right|}\left(\mathrm{Re}\,x;\mathrm{i}\,\mathrm{Im}\,x\right)\right| \leq c\left(n,k\right)\left\|f\right\|_{\varpi}r^{k-\left|\alpha\right|} \varpi(r), \tag{2.6}\] _where the Taylor polynomial \(P\) is defined as follows_ \[P_{f,k}\left(x;y\right):=\sum_{\left|\beta\right|\leq k}\frac{1}{\beta!} \partial^{\beta}f\left(x\right)y^{\beta}.\] _Moreover, \(S_{r}f\) is real analytic whenever \(f\) is real valued._

As a direct consequence of theorem 1, we give the following corollaries 2.1 and 2.2. These results have been widely used in the Hölder case; see, for instance, [10, 19].

**Corollary 2.1**.: _The approximation function \(S_{r}f\left(x\right)\) in theorem 1 satisfies_ \[\left|\partial^{\alpha}\left(S_{r}f\left(x\right)-f\left(x\right)\right)\right| \leq c_{*}\left\|f\right\|_{\varpi}r^{k-\left|\alpha\right|}\varpi(r)\] _and_ \[\left|\partial^{\alpha}S_{r}f\left(x\right)\right|\leq c^{*}\left\|f\right\|_ {\varpi}\] _for \(x\in\mathbb{C}^{n}\) with \(\left|\mathrm{Im}\,x\right|\leq r\), \(\left|\alpha\right|\leq k\), where \(c_{*}=c_{*}\left(n,k\right)>0\) and \(c^{*}=c^{*}\left(n,k,\varpi\right)>0\) are universal constants._

**Corollary 2.2**.: _If the function \(f\left(x\right)\) in theorem 1 is also of period \(1\) in each of the variables \(x_{1},\ldots,x_{n}\) and has zero integral over \(\mathbb{T}^{n}\), then the approximation function \(S_{r}f\left(x\right)\) shares these properties._

We are now in a position to give the frequency-preserving KAM theorem via only modulus of continuity. Before this, let us fix our parameter settings. Let \(n\geq 2\) (degrees of freedom), \(\tau\geq n-1\) (Diophantine index), \(2\tau+2\leq k\in\mathbb{N}^{+}\) (differentiability order) and a sufficiently large number \(M>0\) be given. Consider a Hamiltonian function \(H(x,y):\mathbb{T}^{n}\times G\rightarrow\mathbb{R}\) with \(\mathbb{T}^{n}:=\mathbb{R}^{n}/\mathbb{Z}^{n}\), where \(G\subset\mathbb{R}^{n}\) is a connected closed set with interior points. It follows from remark 2.2 that \(H\) automatically has a modulus of continuity \(\varpi\). In view of the comments below definition 2.4, we assume without loss of generality that \(\varpi\) admits semi separability (definition 2.3) and weak homogeneity (definition 2.4). Besides, we make the following assumptions:

**(H1)**: Integrability condition for modulus of continuity: Assume that \(H\in C_{k,\varpi}\left(\mathbb{T}^{n}\times G\right)\) with the above modulus of continuity \(\varpi\). In other words, \(H\) has derivatives at least up to order \(k\), and the highest derivatives admit the regularity of \(\varpi\). Moreover, \(\varpi\) satisfies the Dini type integrability condition \[\int_{0}^{1}\frac{\varpi\left(x\right)}{x^{2\tau+3-k}}dx<+\infty. \tag{2.7}\]
\tag{2.7}\] **(H2)**: Boundedness and nondegeneracy: \[\|H\|_{\varpi}\leq M,\ \ \left|\left(\int_{\mathbb{T}^{n}}H_{yy}\left(\xi,0\right)d\xi\right)^{-1}\right|\leq M.\] **(H3)**: Diophantine condition: For some \(\alpha_{*}>0\), the frequency \(\omega\in\mathbb{R}^{n}\) satisfies \[|\langle\tilde{k},\omega\rangle|\geq\alpha_{*}|\tilde{k}|^{-\tau},\ \ \forall 0\neq\tilde{k}\in\mathbb{Z}^{n},\ \ |\tilde{k}|:=\sum_{j=1}^{n}|\tilde{k}_{j}|.\] **(H4)**: KAM smallness: There holds \[\sum_{|\alpha|\leq k}\left|\partial^{\alpha}\Big{(}H\left(x,0\right)-\int_{\mathbb{T}^{n}}H\left(\xi,0\right)d\xi\Big{)}\right|\varepsilon^{|\alpha|}+\sum_{|\alpha|\leq k-1}\left|\partial^{\alpha}\left(H_{y}\left(x,0\right)-\omega\right)\right|\varepsilon^{|\alpha|+\tau+1}\leq M\varepsilon^{k}\varpi\left(\varepsilon\right) \tag{2.8}\] for every \(x\in\mathbb{R}^{n}\) and some constant \(0<\varepsilon\leq\varepsilon^{*}\). **(H5)**: Criticality: For \(\varphi_{i}(x):=x^{k-(3-i)\tau-1}\varpi(x)\) with \(i=1,2\), there exist critical \(k_{i}^{*}\in\mathbb{N}^{+}\) such that \[\int_{0}^{1}\frac{\varphi_{i}\left(x\right)}{x^{k_{i}^{*}+1}}dx<+\infty,\ \ \int_{0}^{1}\frac{\varphi_{i}\left(x\right)}{x^{k_{i}^{*}+2}}dx=+\infty.\] Let us make some comments. **(C1)**: There seems to be a large number of assumptions above, but they are important conditions abstracted from the Holder continuous case, and we have to impose them in order to give the KAM theorem in the case of only modulus of continuity. However, some of such conditions, e.g. **(H2)**-**(H3)**, are classical, while the others are mild. **(C2)**: In view of remark 2.2, \(H\) automatically admits a modulus of continuity. The Dini type integrability condition (2.7) in **(H1)** is a direct generalization of Holder's type, which can be seen in theorem 5. Interestingly, it becomes the classical Dini condition (1.2) if \(\tau=n-1\) and \(k=2\tau+2=2n\). **(C3)**: There is a large family of moduli of continuity satisfying the classical Dini condition (1.2), such as the Logarithmic Holder's type \(\varpi_{\rm LH}^{\lambda}\left(x\right)\sim 1/(-\ln x)^{\lambda}\) with \(\lambda>1\), and even the more complicated case of the generalized Logarithmic Holder's type \[\varpi_{\rm GLH}^{\varrho,\lambda}\left(x\right)\sim\frac{1}{(\ln(1/x))(\ln\ln(1/x))\cdots\Big{(}\underbrace{\ln\cdots\ln(1/x)}_{\varrho}\Big{)}^{\lambda}} \tag{2.9}\] with any \(\varrho\in\mathbb{N}^{+}\) and \(\lambda>1\); a short verification of (1.2) for this family is recorded after these comments. In particular, \(\varpi_{\mathrm{LH}}^{\lambda}(x)\sim\varpi_{\mathrm{GLH}}^{1,\lambda}(x)\). Note that the above \(\lambda>1\) cannot degenerate to \(1\), otherwise the Dini integral (1.2) diverges. **(C4)**: According to the properties of Banach algebra, for the Holder's type, it is assumed that **(H4)** only needs the term of \(\left|\alpha\right|=0\), and does not need higher-order derivatives to satisfy the condition. However, for a general modulus of continuity, it seems not easy to establish the corresponding Banach algebraic properties, and we thus add higher-order derivatives in **(H4)**. Sometimes they can be removed correspondingly. **(C5)**: The existence of \(k_{i}^{*}\) in **(H5)** is directly guaranteed by **(H1)**; actually this assumption is proposed to investigate the higher regularity of the persistent KAM torus, that is, the regularity to \(C^{k_{i}^{*}}\) plus certain modulus of continuity. In general, given an explicit modulus of continuity \(\varpi\), such \(k_{i}^{*}\) in **(H5)** are automatically determined by using asymptotic analysis, see section 3.
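To make the family in **(C3)** concrete, we record an elementary verification of the classical Dini condition (1.2) for \(\varpi_{\rm GLH}^{\varrho,\lambda}\); it is a standard substitution argument and is not used elsewhere. For \(x_{0}>0\) small enough that all the iterated logarithms below are positive, let \(u:=\underbrace{\ln\cdots\ln}_{\varrho}(1/x)\), so that \(du=-\frac{dx}{x\left(\ln(1/x)\right)\cdots\big{(}\underbrace{\ln\cdots\ln}_{\varrho-1}(1/x)\big{)}}\) and \[\int_{0}^{x_{0}}\frac{\varpi_{\rm GLH}^{\varrho,\lambda}\left(x\right)}{x}dx=\mathcal{O}^{\#}\left(\int_{0}^{x_{0}}\frac{dx}{x\left(\ln(1/x)\right)\cdots\big{(}\underbrace{\ln\cdots\ln}_{\varrho}(1/x)\big{)}^{\lambda}}\right)=\mathcal{O}^{\#}\left(\int_{u\left(x_{0}\right)}^{+\infty}\frac{du}{u^{\lambda}}\right),\] which is finite if and only if \(\lambda>1\); this is exactly why \(\lambda\) cannot degenerate to \(1\).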
Finally, we state the following frequency-preserving KAM theorem under sharp differentiability via only modulus of continuity: **Theorem 2** (Main Theorem).: _Assume **(H1)**-**(H4)**. Then there is a solution_ \[x=u\left(\xi\right),\ \ y=v\left(\xi\right)\] _of the following equation with the operator \(D:=\sum\limits_{\nu=1}^{n}\omega_{\nu}\frac{\partial}{\partial\xi_{\nu}}\)_ \[Du=H_{y}\left(u,v\right),\ \ Dv=-H_{x}\left(u,v\right),\] _such that \(u\left(\xi\right)-\xi\) and \(v\left(\xi\right)\) are of period \(1\) in all variables, where \(u\) and \(v\) are at least \(C^{1}\)._ _In addition, assume **(H5)**, then there exist \(\varpi_{i}\) (\(i=1,2\)) such that \(u\in C_{k_{1}^{*},\varpi_{1}}\left(\mathbb{R}^{n},\mathbb{R}^{n}\right)\) and \(v\circ u^{-1}\in C_{k_{2}^{*},\varpi_{2}}\left(\mathbb{R}^{n},G\right)\). Particularly, \(\varpi_{i}\) can be determined as follows_ \[\varpi_{i}\left(\gamma\right)\sim\gamma\int_{L_{i}\left(\gamma\right)}^{\varepsilon}\frac{\varphi_{i}\left(t\right)}{t^{k_{i}^{*}+2}}dt=\mathcal{O}^{\#}\left(\int_{0}^{L_{i}\left(\gamma\right)}\frac{\varphi_{i}\left(t\right)}{t^{k_{i}^{*}+1}}dt\right),\ \ \gamma\to 0^{+}, \tag{2.10}\] _where \(L_{i}\left(\gamma\right)\to 0^{+}\) are some functions such that the second relation in (2.10) holds for \(i=1,2\)._ **Remark 2.5**.: _We call such a solution \(x=u(\xi),y=v(\xi)\) the KAM one._ **Remark 2.6**.: _As in [19], the unperturbed systems under consideration might be non-integrable (e.g., \(H=\left\langle\omega,y\right\rangle+\left\langle A\left(x\right)y,y\right\rangle+\cdots\)), and the KAM persistence is of frequency-preserving. The main difference from [19] is that the regularity of the high-order derivatives and the derived smoothness of the persistent torus are weakened to only modulus of continuity from the Holder's type._ **Remark 2.7**.: _Actually theorem 2 provides a method for determining \(\varpi_{i}\) with \(i=1,2\), see (2.10). For a prescribed modulus of continuity of the Hamiltonian, such as the Holder and Logarithmic Holder types, we have to use asymptotic analysis to derive the concrete continuity of the KAM torus in section 3._ As mentioned before, the Holder's type \(H\in C^{\ell}(\mathbb{T}^{n},G)\) with \(\ell>2\tau+2\) (where \(\tau>n-1\) is the Diophantine exponent) is always regarded as the critical case. Let \(k=\left[\ell\right]\). Then \(k=2\tau+2=2n\) (\(\tau=n-1\) at present) seems to be the critical case in our setting, and our Dini type integrability condition (2.7) becomes the classical Dini condition (1.2)! But it should be noted that such Diophantine frequencies with \(\tau=n-1\) can only form a set of zero Lebesgue measure and are therefore not enough to represent almost all frequencies. In other words, for universal KAM persistence, we may have to require the generalized Dini condition in **(H1)**, which reveals the deep relationship between the _irrationality_ of the frequency \(\omega\), and the _order_ and _continuity_ of the highest derivatives of the Hamiltonian \(H\). Obviously, if the highest differentiable order \(k\) of \(H\) satisfies \(k\geq 2\tau+3\) or is even larger, then **(H1)** becomes trivial because the integrand in (2.7) no longer has a singularity at \(0\). But our KAM theorem still makes sense, because the regularity of the persistent torus will also increase. ## 3 Applications In this section, we show certain detailed regularity of the KAM torus, such as the Holder and Logarithmic Holder ones, etc.
Denote by \(\{a\}\) and \(\left[a\right]\) the fractional part and the integer part of \(a\geq 0\), respectively. It should be emphasized that the Dini type integrability condition (2.7) in **(H1)** is easy to verify, that is, the KAM persistence is easy to obtain. However, some techniques of asymptotic analysis are needed to investigate the specific regularity of the KAM torus, which is mainly reflected in the selection of the functions \(L_{i}(\gamma)\) (\(i=1,2\)) in (2.10). In particular, we will explicitly see the degree of regularity loss caused by small divisors, see for instance, theorems 4 to 6 and the example shown in section 3.3. We apply our theorem 2 from two different perspectives. In section 3.1, for the minimum regularity \(C^{2n}\) that is critical under our approach, we investigate KAM persistence in the sense of zero Lebesgue measure (corresponding to the non-universal case), i.e., we first let \(k=2n\), then determine the Diophantine nonresonance \(\tau=n-1\); while in section 3.2, for the given Diophantine nonresonance \(\tau>n-1\) of full Lebesgue measure in advance (corresponding to the universal case), we study the minimum regularity requirement under our method. In what follows, the moduli of continuity under consideration are always convex near \(0^{+}\) and therefore automatically admit semi separability as well as weak homogeneity, as noted above. ### Non-universal KAM persistence Focusing on non-universal KAM persistence for Hamiltonian systems with action-angle variables and \(n\) degrees of freedom, Albrecht [1] proved that \(C^{2n}\) regularity plus a certain modulus of continuity satisfying the classical Dini condition (1.2) is enough. The frequencies he used are of Diophantine class \(\tau=n-1\) in **(H3)**, i.e., of zero Lebesgue measure. However, it is still interesting to study the remaining regularity of the KAM torus, which is still unknown so far. By applying theorem 2 we directly obtain the following theorem 3 similar to that in [1], therefore the proof is omitted here. To illustrate our results, we provide an explicit example in theorem 4, and the proof will be postponed to section 5. **Theorem 3.**_Let \(k=2n\) and \(\tau=n-1\) be given. Assume that **(H1)**, **(H2)**, **(H3)** and **(H4)** hold with a convex modulus of continuity \(\varpi\). That is, the Hamiltonian \(H\) only has derivatives of order \(2n\), the prescribed frequency is of Diophantine class \(n-1\), and **(H1)** turns to the classical Dini condition (1.2). Then the KAM persistence in theorem 2 holds._ **Theorem 4.**_In view of Comment (C3), let the modulus of continuity in theorem 3 be of the generalized Logarithmic Holder's type in (2.9), i.e.,_ \[\varpi_{\mathrm{GLH}}^{\varrho,\lambda}\left(x\right)\sim\frac{1}{\left(\ln(1/x)\right)\left(\ln\ln(1/x)\right)\cdots\left(\underbrace{\ln\cdots\ln(1/x)}_{\varrho}\right)^{\lambda}} \tag{3.11}\] _with \(\varrho\in\mathbb{N}^{+}\) and \(\lambda>1\). Then the remaining regularity in theorem 2 is \(u\in C_{1,\varpi_{1}}\left(\mathbb{R}^{n},\mathbb{R}^{n}\right)\) and \(v\circ u^{-1}\in C_{n,\varpi_{2}}\left(\mathbb{R}^{n},G\right)\), where_ \[\varpi_{1}\left(x\right)\sim\varpi_{2}\left(x\right)\sim\frac{1}{\left(\underbrace{\ln\cdots\ln(1/x)}_{\varrho}\right)^{\lambda-1}}. \tag{3.12}\] **Remark 3.1.**_Particularly (3.11) reduces to the Logarithmic Holder's type \(\varpi_{\mathrm{LH}}^{\lambda}(x)\sim 1/(-\ln x)^{\lambda}\) with \(\lambda>1\) as long as \(\varrho=1\).
As can be seen, the remaining regularity in (3.12) is much weaker than that in (3.11), and it is indeed very weak if \(\lambda>1\) is sufficiently close to \(1\) (but cannot degenerate to \(1\), see Comment **(C3)**), because the explicit modulus of continuity in (3.12) tends to \(0\) quite slowly as \(x\to 0^{+}\)._ ### Universal KAM persistence In this subsection, we always assume that the prescribed Diophantine frequencies \(\omega\) are of full Lebesgue measure, that is, \(\tau>n-1\) in **(H3)**. Note that for fixed \(n\), the parameter \(\tau\) might be very large, and the frequencies of Diophantine class \(\tau\) are at least continuum many. Under such a setting, the known minimum regularity requirement for the Hamiltonian \(H\) is Holder's type \(C^{\ell}\) with \(\ell>2\tau+2\), see Salamon [19] and theorem 5 below. Interestingly, if one considers weaker moduli of continuity, such as \(C^{2\tau+2}\) plus Logarithmic Holder's type, the above regularity could be weakened, see our new theorem 6. #### 3.2.1 Holder continuous case **Theorem 5.** Let \(H\in C^{\ell}(\mathbb{T}^{n},G)\) with \(\ell>2\tau+2\), where \(\ell\notin\mathbb{N}^{+}\), \(\ell-\tau\notin\mathbb{N}^{+}\) and \(\ell-2\tau\notin\mathbb{N}^{+}\). That is, \(H\) is of \(C_{k,\varpi}\) with \(k=[\ell]\) and \(\varpi(x)\sim\varpi_{\mathrm{H}}^{\ell}(x)\sim x^{\{\ell\}}\). Assume **(H2)**, **(H3)** and **(H4)**. Then there is a solution \(x=u\left(\xi\right),y=v\left(\xi\right)\) of the following equation with the operator \(D:=\sum\limits_{\nu=1}^{n}\omega_{\nu}\frac{\partial}{\partial\xi_{\nu}}\)_ \[Du=H_{y}\left(u,v\right),\;\;Dv=-H_{x}\left(u,v\right)\] _such that \(u\left(\xi\right)-\xi\) and \(v\left(\xi\right)\) are of period \(1\) in all variables. In addition, \(u\in C^{\ell-2\tau-1}\left(\mathbb{R}^{n},\mathbb{R}^{n}\right)\) and \(v\circ u^{-1}\in C^{\ell-\tau-1}\left(\mathbb{R}^{n},G\right)\)._ Theorem 5 has been completely proved in [19]. Significantly, the differentiability hypothesis under consideration is sharp, i.e., it is close to the optimal one as in [7; 8], where Herman gave a counterexample on the nonexistence of an invariant curve for an annulus mapping of class \(C^{3-\varepsilon}\) with \(0<\varepsilon\ll 1\); this corresponds to the case \(n=2,\ell=4-\varepsilon\) in our setting, which implies the sharpness of theorem 5. See more from [11; 21]. #### 3.2.2 Holder plus Logarithmic Holder continuous case To show a modulus of continuity weaker than Holder's type, we establish the following theorem 6. One will see later that theorem 6 employs more complicated asymptotic analysis than theorem 4, and interestingly, the remaining regularities \(\varpi_{1}\) and \(\varpi_{2}\) admit different forms. In fact, theorem 6 can completely cover the case of theorem 4, that is, \(\tau=n-1\) and \(\varrho=1\) in (3.11). However, in order to distinguish the full Lebesgue measure and zero Lebesgue measure cases of Diophantine nonresonance, we present them separately. **Theorem 6.** Let \(\tau>n-1\) be given and let \(H\in C_{[2\tau+2],\varpi}\), where \(\varpi\left(x\right)\sim x^{\{2\tau+2\}}/{\left(-\ln x\right)^{\lambda}}\) with \(\lambda>1\). Assume **(H2)**, **(H3)** and **(H4)**. That is, \(H\) is of \(C^{k}\) plus the above \(\varpi\) with \(k=[2\tau+2]\).
Then there is a solution \(x=u\left(\xi\right),y=v\left(\xi\right)\) of the following equation with the operator \(D:=\sum\limits_{\nu=1}^{n}\omega_{\nu}\frac{\partial}{\partial\xi_{\nu}}\)_ \[Du=H_{y}\left(u,v\right),\;\;Dv=-H_{x}\left(u,v\right)\] _such that \(u\left(\xi\right)-\xi\) and \(v\left(\xi\right)\) are of period \(1\) in all variables. In addition, letting_ \[\varpi_{1}\left(x\right)\sim\frac{1}{\left(-\ln x\right)^{\lambda-1}}\sim\varpi_{\mathrm{LH}}^{\lambda-1}\left(x\right),\] _and_ \[\varpi_{2}\left(x\right)\sim\begin{cases}\frac{1}{\left(-\ln x\right)^{\lambda-1}}\sim\varpi_{\mathrm{LH}}^{\lambda-1}\left(x\right),&n-1<\tau\in\mathbb{N}^{+},\\ \frac{x^{\{\tau\}}}{\left(-\ln x\right)^{\lambda}}\sim x^{\{\tau\}}\varpi_{\mathrm{LH}}^{\lambda}\left(x\right),&n-1<\tau\notin\mathbb{N}^{+},\end{cases}\] _one has that \(u\in C_{1,\varpi_{1}}\left(\mathbb{R}^{n},\mathbb{R}^{n}\right)\) and \(v\circ u^{-1}\in C_{[\tau+1],\varpi_{2}}\left(\mathbb{R}^{n},G\right)\)._ **Remark 3.2.**_Similar to theorem 4, one can also consider the generalized Logarithmic Holder's type (3.11) instead of the Logarithmic Holder one. Only the latter is presented here for simplicity._ ### An explicit example of Logarithmic Holder's type To illustrate the wider applicability of our theorems, we shall present an explicit example strictly beyond Holder's type. Note that the Holder plus Logarithmic Holder regularity for \(H\) in theorem 6 becomes the simpler Logarithmic Holder's type for \(2n<2\tau+2\in\mathbb{N}^{+}\) (because \(\{2\tau+2\}=0\)); we therefore consider the following setting. Recall theorem 6. Let \(n=2\), \(\tau=2\), \(k=6=[2\tau+2],\alpha_{*}>0\), \(\lambda>1\) and \(M>0\) be given. Assume that \((x,y)\in\mathbb{T}^{2}\times G\) with \(G:=\{y\in\mathbb{R}^{2}:|y|\leq 1\}\), and the frequency \(\omega=(\omega_{1},\omega_{2})^{T}\in\mathbb{R}^{2}\) satisfies \[|\langle\tilde{k},\omega\rangle|\geq\alpha_{*}|\tilde{k}|^{-2},\;\;\forall 0\neq\tilde{k}\in\mathbb{Z}^{2},\;\;|\tilde{k}|:=|\tilde{k}_{1}|+|\tilde{k}_{2}|,\] i.e., with full Lebesgue measure. Now we shall construct a function as the finite smooth perturbation, whose regularity is \(C^{6}\) plus Logarithmic Holder's type \(\varpi_{\mathrm{LH}}^{\lambda}(r)\sim 1/(-\ln r)^{\lambda}\) with index \(\lambda>1\). Namely, define \[P(r):=\begin{cases}\int_{0}^{r}\cdots\int_{0}^{s_{2}}\frac{1}{(1-\ln|s_{1}|)^{\lambda}}ds_{1}\cdots ds_{6},&0<|r|\leq 1,\\ 0,&r=0.\end{cases}\] Obviously \(P(r)\in C_{6,\varpi_{\mathrm{LH}}^{\lambda}}([-1,1])\). Let us consider the perturbed Hamiltonian function below with some constant \(0<\varepsilon<\varepsilon^{*}\) sufficiently small (\(\varepsilon^{*}\) depends on the constants given above): \[H(x,y,\varepsilon)=\omega_{1}y_{1}+\omega_{2}y_{2}+\frac{1}{M}(y_{1}^{2}+y_{2}^{2})+\varepsilon\left(\sin(2\pi x_{1})+\sin(2\pi x_{2})+P(y_{1})+P\left(y_{2}\right)\right). \tag{3.13}\] At this point, we have \[\left|\left(\int_{\mathbb{T}^{2}}H_{yy}\left(\xi,0\right)d\xi\right)^{-1}\right|=\left|\left(\int_{\mathbb{T}^{2}}\left(\begin{array}{cc}2M^{-1}&0\\ 0&2M^{-1}\end{array}\right)d\xi\right)^{-1}\right|=\left|\left(\begin{array}{cc}2^{-1}M&0\\ 0&2^{-1}M\end{array}\right)\right|\leq M<+\infty.\] In addition, one can verify that \(H\in C_{6,\varpi_{\mathrm{LH}}^{\lambda}}(\mathbb{T}^{2}\times G)\) with \(\varpi_{\mathrm{LH}}^{\lambda}(r)\sim 1/(-\ln r)^{\lambda}\).
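In the verification above, the regularity of \(P\) can be seen directly; we record the standard reduction for clarity. By the Cauchy formula for repeated integration, the sixfold iterated integral defining \(P\) collapses to \[P\left(r\right)=\frac{1}{5!}\int_{0}^{r}\left(r-s\right)^{5}\frac{1}{\left(1-\ln\left|s\right|\right)^{\lambda}}ds,\ \ 0<\left|r\right|\leq 1,\] so that \(P^{(6)}\left(r\right)=\left(1-\ln\left|r\right|\right)^{-\lambda}\) for \(0<\left|r\right|\leq 1\) and \(P^{(6)}\left(0\right)=0\). Hence \(P^{(6)}\) is continuous on \([-1,1]\) and its modulus of continuity at the origin is exactly of the type \(\varpi_{\mathrm{LH}}^{\lambda}\), which is the claim \(P\in C_{6,\varpi_{\mathrm{LH}}^{\lambda}}([-1,1])\).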
However, for \(\tilde{\alpha}=(0,0,6,0)^{T}\) with \(|\tilde{\alpha}|=6=k\), we have \[\left|\partial^{\tilde{\alpha}}H\left((0,0)^{T},(y_{1},0)^{T},\varepsilon\right)-\partial^{\tilde{\alpha}}H\left((0,0)^{T},(0,0)^{T},\varepsilon\right)\right|=\frac{\varepsilon}{(1-\ln|y_{1}|)^{\lambda}}\geq\varepsilon c_{\lambda,\ell}|y_{1}|^{\ell}\] for any \(0<\ell\leq 1\), where \(c_{\lambda,\ell}>0\) is a constant that only depends on \(\lambda\) and \(\ell\). This implies that \(H\not\in C_{6,\varpi_{\mathrm{H}}^{\ell}}(\mathbb{T}^{2}\times G)\) with \(\varpi_{\mathrm{H}}^{\ell}(r)\sim r^{\ell}\), i.e., \(H\not\in C^{6+\ell}(\mathbb{T}^{2}\times G)\) with any \(0<\ell\leq 1\), because \(\varpi_{\mathrm{LH}}^{\lambda}\) is strictly weaker than \(\varpi_{\mathrm{H}}^{\ell}\), see also remark 2.3. In other words, the highest derivatives (of order \(k=6\)) of \(H\) in (3.13) can be rigorously proved to be Logarithmic Holder continuous with index \(\lambda>1\), but not of any Holder's type. Therefore, the finite smooth KAM theorems via classical Holder continuity cannot be applied. But all the assumptions of theorem 6 can be verified to be satisfied, then the invariant torus persists, and the frequency \(\omega=(\omega_{1},\omega_{2})^{T}\) of the unperturbed system can remain unchanged. Moreover, the remaining regularity of the mappings \(u\) and \(v\circ u^{-1}\) in theorem 6 could also be determined as \(u\in C_{1,\varpi_{\mathrm{LH}}^{\lambda-1}}(\mathbb{R}^{n},\mathbb{R}^{n})\) and \(v\circ u^{-1}\in C_{3,\varpi_{\mathrm{LH}}^{\lambda-1}}(\mathbb{R}^{n},G)\), where \(\varpi_{\mathrm{LH}}^{\lambda-1}(r)\sim 1/(-\ln r)^{\lambda-1}\). More precisely, \(u\) is at least \(C^{1}\), while \(v\circ u^{-1}\) is at least \(C^{3}\), and the higher regularity for them is still not of any Holder's type, but of the Logarithmic Holder one with index \(\lambda-1\), i.e., lower than the original index \(\lambda>1\); this is because the small divisors cause the loss of regularity. ## 4 Proof of theorem 2 Now let us prove theorem 2 in two subsections, namely frequency-preserving KAM persistence (section 4.1) and further regularity (section 4.2) of the KAM torus. For the former, the overall process is similar to that in [19], but the key points in weakening the Holder regularity to only modulus of continuity are the use of theorem 1 and the proof of the uniform convergence of the transformation mapping, that is, the convergence of the upper bound series (see (4.24) and (4.26)). As we will see later, the Dini type integrability condition (2.7) in **(H1)** guarantees this. As to the latter, we have to establish a more general regularity iteration theorem (theorem 7), which is not trivial since the resulting regularity might be somewhat complicated due to asymptotic analysis. ### Frequency-preserving KAM persistence The proof of the frequency-preserving KAM persistence is organized as follows. Firstly, we construct a sequence of analytic approximation functions \(H^{\nu}\) of \(H\) by using theorem 1 and considering **(H1)** and **(H2)**. Secondly, we shall construct a sequence of frequency-preserving analytic and symplectic transformations \(\psi^{\nu}\) by induction. According to **(H2)**, **(H3)** and **(H4)**, the first step of the induction is established by applying theorem 8 in appendix F (or Theorem 1 in [19]). Then, combining weak homogeneity with certain specific estimates we complete the proof of the induction and obtain the uniform convergence of the composite transformations.
Finally, in the light of **(H5)**, the regularity of the KAM torus is guaranteed by theorem 7. **Step1:** In view of theorem 1 (we have assumed that the modulus of continuity \(\varpi\) admits semi separability and thus theorem 1 could be applied here), one could approximate \(H(x,y)\) by a sequence of real analytic functions \(H^{\nu}(x,y)\) for \(\nu\geq 0\) in the strips \[|\mathrm{Im}\,x|\leq r_{\nu},\ \ |\mathrm{Im}\,y|\leq r_{\nu},\ \ \ r_{\nu}:=2^{-\nu}\varepsilon\] around \(\mathrm{Re}\,x\in\mathbb{T}^{n},|\mathrm{Re}\,y|\leq\rho\), such that \[\left|H^{\nu}\left(z\right)-\sum_{|\alpha|\leq k}\partial^{\alpha}H\left(\mathrm{Re}\,z\right)\frac{\left(\mathrm{i}\,\mathrm{Im}\,z\right)^{\alpha}}{\alpha!}\right|\leq c_{1}\|H\|_{\varpi}r_{\nu}^{k}\varpi\left(r_{\nu}\right),\ \ \left|H_{y}^{\nu}\left(z\right)-\sum_{|\alpha|\leq k-1}\partial^{\alpha}H_{y}\left(\mathrm{Re}\,z\right)\frac{\left(\mathrm{i}\,\mathrm{Im}\,z\right)^{\alpha}}{\alpha!}\right|\leq c_{1}\|H\|_{\varpi}r_{\nu}^{k-1}\varpi\left(r_{\nu}\right),\ \ \left|H_{yy}^{\nu}\left(z\right)-\sum_{|\alpha|\leq k-2}\partial^{\alpha}H_{yy}\left(\mathrm{Re}\,z\right)\frac{\left(\mathrm{i}\,\mathrm{Im}\,z\right)^{\alpha}}{\alpha!}\right|\leq c_{1}\|H\|_{\varpi}r_{\nu}^{k-2}\varpi\left(r_{\nu}\right) \tag{4.14}\] for \(|\mathrm{Im}\,x|\leq r_{\nu},\ |\mathrm{Im}\,y|\leq r_{\nu}\), where \(c_{1}=c(n,k)\) is the constant provided in (2.6). Fix \(\theta=1/\sqrt{2}\). In what follows, we will construct a sequence of real analytic symplectic transformations \(z=(x,y),\zeta=(\xi,\eta),z=\phi^{\nu}\left(\zeta\right)\) of the form \[x=u^{\nu}\left(\xi\right),\ \ y=v^{\nu}\left(\xi\right)+\left(u_{\xi}^{\nu}(\xi)^{-1}\right)^{T}\eta \tag{4.15}\] by induction, such that \(u^{\nu}\left(\xi\right)-\xi\) and \(v^{\nu}\left(\xi\right)\) are of period \(1\) in all variables, and \(\phi^{\nu}\) maps the strip \(|\mathrm{Im}\,\xi|\,,|\eta|\leq\theta r_{\nu+1}\) into \(|\mathrm{Im}\,x|\,,|y|\leq r_{\nu},|\mathrm{Re}\,y|\leq\rho\), and the transformed Hamiltonian function \(K^{\nu}:=H^{\nu}\circ\phi^{\nu}\) satisfies \[K_{\xi}^{\nu}\left(\xi,0\right)=0,\ \ K_{\eta}^{\nu}\left(\xi,0\right)=\omega, \tag{4.16}\] i.e., with prescribed frequency-preserving. Namely, by verifying certain conditions we obtain \(z=\psi^{\nu}(\zeta)\) of the form (4.15) from theorem 8 by induction, mapping \(|\mathrm{Im}\,\xi|,|\eta|\leq r_{\nu+1}\) into \(|\mathrm{Im}\,x|\,,|y|\leq\theta r_{\nu}\), such that \(\psi^{\nu}\left(\xi,0\right)-\left(\xi,0\right)\) is of period \(1\), and (4.16) holds. Here we denote \(\phi^{\nu}:=\phi^{\nu-1}\circ\psi^{\nu}\) with \(\phi^{-1}:=\mathrm{id}\) (where \(\mathrm{id}\) denotes the \(2n\)-dimensional identity mapping and therefore \(\phi^{0}=\psi^{0}\)).
Furthermore, theorem 8 will lead to \[|\psi^{\nu}\left(\zeta\right)-\zeta|\leq c\left(1-\theta\right)r_{\nu}^{k-2\tau-1}\varpi\left(r_{\nu}\right), \tag{4.17}\] \[\left|\phi_{\zeta}^{\nu}\left(\zeta\right)-\mathbb{I}\right|\leq cr_{\nu}^{k-2\tau-2}\varpi\left(r_{\nu}\right), \tag{4.18}\] \[\left|K_{\eta\eta}^{\nu}\left(\zeta\right)-Q^{\nu}\left(\zeta\right)\right|\leq cr_{\nu}^{k-2\tau-2}\varpi\left(r_{\nu}\right)/2M, \tag{4.19}\] \[\left|U_{x}^{\nu}\left(x\right)\right|\leq cr_{\nu}^{k-\tau-1}\varpi\left(r_{\nu}\right), \tag{4.20}\] on \(|\mathrm{Im}\,\xi|,|\eta|\,,|\mathrm{Im}\,x|\leq r_{\nu+1}\), where \(S^{\nu}\left(x,\eta\right)=U^{\nu}\left(x\right)+\left\langle V^{\nu}\left(x\right),\eta\right\rangle\) is the generating function of \(\psi^{\nu}\), and \(Q^{\nu}:=K_{\eta\eta}^{\nu-1}\), and \(\mathbb{I}\) denotes the \(2n\times 2n\)-dimensional identity mapping, and \[Q^{0}\left(z\right):=\sum_{|\alpha|\leq k-2}\partial^{\alpha}H_{yy}\left(\mathrm{Re}\,z\right)\frac{\left(\mathrm{i}\,\mathrm{Im}\,z\right)^{\alpha}}{\alpha!}. \tag{4.21}\] **Step2:** Here we show that \(\psi^{0}=\phi^{0}\) exists, and it admits the properties mentioned in Step 1. Denote \[h(x):=H\left(x,0\right)-\int_{\mathbb{T}^{n}}H\left(\xi,0\right)d\xi,\ \ x\in\mathbb{R}^{n}.\] Then by the first term in (2.8), we have \[\sum_{|\alpha|\leq k}|\partial^{\alpha}h|\,\varepsilon^{|\alpha|}\leq M\varepsilon^{k}\varpi\left(\varepsilon\right). \tag{4.22}\] Note that \[H^{0}\left(x,0\right)-\int_{\mathbb{T}^{n}}H^{0}\left(\xi,0\right)d\xi=H^{0}\left(x,0\right)-\sum_{|\alpha|\leq k}\partial_{x}^{\alpha}H\left(\operatorname{Re}x,0\right)\frac{\left(\operatorname{i}\operatorname{Im}x\right)^{\alpha}}{\alpha!}+\int_{\mathbb{T}^{n}}\left(H\left(\xi,0\right)-H^{0}\left(\xi,0\right)\right)d\xi+\sum_{|\alpha|\leq k}\partial^{\alpha}h\left(\operatorname{Re}x\right)\frac{\left(\operatorname{i}\operatorname{Im}x\right)^{\alpha}}{\alpha!}.\] Hence, for \(\left|\operatorname{Im}x\right|\leq\theta r_{0}=\theta\varepsilon\), by using theorem 1, corollary 2.1 and (4.22) we arrive at \[\left|H^{0}\left(x,0\right)-\int_{\mathbb{T}^{n}}H^{0}\left(\xi,0\right)d\xi\right|\leq 2c_{1}\|H\|_{\varpi}\varepsilon^{k}\varpi\left(\varepsilon\right)+M\varepsilon^{k}\varpi\left(\varepsilon\right)\leq c\varepsilon^{k}\varpi\left(\varepsilon\right)\leq c\varepsilon^{k-2\tau-2}\varpi\left(\varepsilon\right)\cdot\left(\theta\varepsilon\right)^{2\tau+2}.\] Now consider the vector valued function \(f\left(x\right):=H_{y}\left(x,0\right)-\omega\) for \(x\in\mathbb{R}^{n}\). In view of the second term in (2.8), we have \[\sum_{|\alpha|\leq k-1}|\partial^{\alpha}f|\,\varepsilon^{|\alpha|}\leq M\varepsilon^{k-\tau-1}\varpi\left(\varepsilon\right).
\tag{4.23}\] Note that \[H_{y}^{0}\left(x,0\right)-\omega=H_{y}^{0}\left(x,0\right)-\sum_{|\alpha|\leq k-1}\partial_{x}^{\alpha}H_{y}\left(\operatorname{Re}x,0\right)\frac{\left(\operatorname{i}\operatorname{Im}x\right)^{\alpha}}{\alpha!}+\sum_{|\alpha|\leq k-1}\partial^{\alpha}f\left(\operatorname{Re}x\right)\frac{\left(\operatorname{i}\operatorname{Im}x\right)^{\alpha}}{\alpha!}.\] Therefore, for \(\left|\operatorname{Im}x\right|\leq\theta\varepsilon\), by using (4.14) and (4.23) we obtain that \[\left|H_{y}^{0}\left(x,0\right)-\omega\right|\leq c_{1}\|H\|_{\varpi}\varepsilon^{k-1}\varpi\left(\varepsilon\right)+M\varepsilon^{k-\tau-1}\varpi\left(\varepsilon\right)\leq c\varepsilon^{k-\tau-1}\varpi\left(\varepsilon\right)\leq c\varepsilon^{k-2\tau-2}\varpi\left(\varepsilon\right)\cdot\left(\theta\varepsilon\right)^{\tau+1}.\] Recall (4.21). Then it follows from (4.14) that \[\left|H_{yy}^{0}\left(z\right)-Q^{0}\left(z\right)\right|\leq c_{1}\|H\|_{\varpi}\varepsilon^{k-2}\varpi\left(\varepsilon\right)\leq\frac{c}{4M}\varepsilon^{k-2}\varpi\left(\varepsilon\right)\leq\frac{c}{4M}\varepsilon^{k-2\tau-2}\varpi\left(\varepsilon\right),\ \ \left|\operatorname{Im}x\right|,\left|y\right|\leq\theta\varepsilon,\] and \[\left|Q^{0}\left(z\right)\right|\leq\sum_{|\alpha|\leq k-2}\|H\|_{\varpi}\frac{\varepsilon^{|\alpha|}}{\alpha!}\leq\|H\|_{\varpi}\sum_{\alpha\in\mathbb{N}^{2n}}\frac{\varepsilon^{|\alpha|}}{\alpha!}=\|H\|_{\varpi}e^{2n\varepsilon}\leq 2M,\ \ \left|\operatorname{Im}z\right|\leq\varepsilon,\] where we have used \(\sum_{\alpha\in\mathbb{N}^{2n}}\varepsilon^{|\alpha|}/\alpha!=\prod_{j=1}^{2n}\sum_{m\geq 0}\varepsilon^{m}/m!=e^{2n\varepsilon}\leq 2\) for \(\varepsilon>0\) sufficiently small. Now, by taking \(r^{*}=\theta\varepsilon,\delta^{*}=\varepsilon^{k-2\tau-2}\varpi\left(\varepsilon\right)\) and using theorem 8, there exists a real analytic symplectic transformation \(z=\phi^{0}\left(\zeta\right)\) of the form (4.15) (with \(\nu=0\)) mapping the strip \(\left|\operatorname{Im}\xi\right|,\left|\eta\right|\leq r_{1}=r_{0}/2\) into \(\left|\operatorname{Im}x\right|,\left|y\right|\leq\theta r_{0}=r_{0}/\sqrt{2}\), such that \(u^{0}\left(\xi\right)-\xi\) and \(v^{0}\left(\xi\right)\) are of period \(1\) in all variables and the Hamiltonian function \(K^{0}:=H^{0}\circ\phi^{0}\) satisfies (4.16) (with \(\nu=0\)). Moreover, (4.17)-(4.19) (with \(\nu=0\)) hold. Also assume that \[\left|K_{\eta\eta}^{\nu-1}\left(\zeta\right)\right|\leq M_{\nu-1},\ \ \left|\left(\int_{\mathbb{T}^{n}}K_{\eta\eta}^{\nu-1}\left(\xi,0\right)d\xi\right)^{-1}\right|\leq M_{\nu-1},\ \ M_{\nu}\leq M\] for \(\left|\operatorname{Im}x\right|,\left|y\right|\leq r_{\nu}\). Finally, define \[\tilde{H}\left(x,y\right):=H^{\nu}\circ\phi^{\nu-1}\left(x,y\right)\] for \(\left|\operatorname{Im}x\right|,\left|y\right|\leq r_{\nu}\). One can verify that \(\tilde{H}\) is well defined. Next we assume that the transformation \(z=\phi^{\nu-1}\left(\zeta\right)\) of the form (4.15) has been constructed, mapping \(\left|\operatorname{Im}\xi\right|,\left|\eta\right|\leq\theta r_{\nu}\) into \(\left|\operatorname{Im}x\right|,\left|\operatorname{Im}y\right|\leq r_{\nu-1},\left|\operatorname{Re}y\right|\leq\rho\), such that \(u^{\nu-1}\left(\xi\right)-\xi,v^{\nu-1}\left(\xi\right)\) are of period \(1\) in all variables, and \(K_{\xi}^{\nu-1}\left(\xi,0\right)=0,K_{\eta}^{\nu-1}\left(\xi,0\right)=\omega\). In addition, we also assume that (4.17)-(4.20) hold for \(0,\ldots,\nu-1\). In the next Step 3, we will verify that the above still hold for \(\nu\), which establishes a complete induction.
**Step3:** We will prove the existence of the transformation \(\phi^{\nu}\) in each step according to the specific estimates below and theorem 8. Let \(\left|\operatorname{Im}x\right|\leq\theta r_{\nu}\). Then \(\phi^{\nu-1}(x,0)\) lies in the region where the estimates in (4.14) hold for both \(H^{\nu}\) and \(H^{\nu-1}\). Note that \(x\mapsto H^{\nu-1}(\phi^{\nu-1}(x,0))\) is constant by (4.16). Then by (4.14), we arrive at the following for \(\left|\operatorname{Im}x\right|\leq\theta r_{\nu}\) \[\left|\tilde{H}\left(x,0\right)-\int_{\mathbb{T}^{n}}\tilde{H}\left(\xi,0\right)d\xi\right|\leq 2\sup_{\left|\operatorname{Im}\xi\right|\leq\theta r_{\nu}}\left|H^{\nu}\left(\phi^{\nu-1}\left(\xi,0\right)\right)-H^{\nu-1}\left(\phi^{\nu-1}\left(\xi,0\right)\right)\right|\leq 2c_{1}\left\|H\right\|_{\varpi}r_{\nu}^{k}\varpi\left(r_{\nu}\right)+2c_{1}\left\|H\right\|_{\varpi}r_{\nu-1}^{k}\varpi\left(r_{\nu-1}\right)\leq cr_{\nu}^{k-2\tau-2}\varpi\left(r_{\nu}\right)\cdot r_{\nu}^{2\tau+2},\] where the weak homogeneity of \(\varpi\) with respect to \(a=1/2\) (see definition 2.4) has been used in the last inequality, because \(\varpi(r_{\nu-1})=\varpi(2r_{\nu})\leq c\varpi(r_{\nu})\) (thus \(c\) is independent of \(\nu\)). For convenience we may therefore not mention it in the following. Taking \(\eta=0\) in (4.18) we have \[\left|u_{\xi}^{\nu-1}\left(\xi\right)-\mathbb{I}\right|\leq\sum_{\mu=0}^{\nu-1}\left|u_{\xi}^{\mu}\left(\xi\right)-u_{\xi}^{\mu-1}\left(\xi\right)\right|\leq c\sum_{\mu=0}^{\nu-1}r_{\mu}^{k-2\tau-2}\varpi\left(r_{\mu}\right)\leq c\sum_{\mu=0}^{\infty}\left(\frac{\varepsilon}{2^{\mu}}\right)^{k-2\tau-2}\varpi\left(\frac{\varepsilon}{2^{\mu}}\right)\leq c\sum_{\mu=0}^{\infty}\left(\frac{\varepsilon}{2^{\mu-1}}-\frac{\varepsilon}{2^{\mu}}\right)\left(\frac{\varepsilon}{2^{\mu}}\right)^{k-2\tau-3}\varpi\left(\frac{\varepsilon}{2^{\mu}}\right)\leq c\sum_{\mu=0}^{\infty}\int_{\varepsilon/2^{\mu}}^{\varepsilon/2^{\mu-1}}\frac{\varpi\left(x\right)}{x^{2\tau+3-k}}dx\leq c\int_{0}^{2\varepsilon}\frac{\varpi\left(x\right)}{x^{2\tau+3-k}}dx\leq 1-\theta \tag{4.24}\] for \(\left|\operatorname{Im}\xi\right|\leq\theta r_{\nu}\), where the Dini type condition (2.7) in **(H1)** together with the comparison between series and integrals are used, since \(\varepsilon>0\) is sufficiently small. Then, by the Neumann series \(u_{\xi}^{\nu-1}(\xi)^{-1}=\sum_{j\geq 0}(\mathbb{I}-u_{\xi}^{\nu-1}(\xi))^{j}\), it leads to \[\left|u_{\xi}^{\nu-1}(\xi)^{-1}\right|\leq\theta^{-1},\ \ \left|\operatorname{Im}\xi\right|\leq\theta r_{\nu}.
\tag{4.25}\] Finally, by (4.25) and (4.14) we obtain that \[\left|\tilde{H}_{y}\left(x,0\right)-\omega\right|=\left|u_{\xi}^{\nu-1}(x)^{-1}\left(H_{y}^{\nu}\left(\phi^{\nu-1}\left(x,0\right)\right)-H_{y}^{\nu-1}\left(\phi^{\nu-1}\left(x,0\right)\right)\right)\right|\leq\theta^{-1}\left|H_{y}^{\nu}\left(\phi^{\nu-1}\left(x,0\right)\right)-H_{y}^{\nu-1}\left(\phi^{\nu-1}\left(x,0\right)\right)\right|\leq\theta^{-1}\left(c_{1}\left\|H\right\|_{\varpi}r_{\nu}^{k-1}\varpi\left(r_{\nu}\right)+c_{1}\left\|H\right\|_{\varpi}r_{\nu-1}^{k-1}\varpi\left(r_{\nu-1}\right)\right)\leq cr_{\nu}^{k-1}\varpi\left(r_{\nu}\right)\leq cr_{\nu}^{k-\tau-2}\varpi\left(r_{\nu}\right)\cdot r_{\nu}^{\tau+1},\] and \[\left|\tilde{H}_{yy}\left(z\right)-Q^{\nu}\left(z\right)\right|=\left|u_{\xi}^{\nu-1}(x)^{-1}\left(H_{yy}^{\nu}\left(\phi^{\nu-1}\left(z\right)\right)-H_{yy}^{\nu-1}\left(\phi^{\nu-1}\left(z\right)\right)\right)\left(u_{\xi}^{\nu-1}(x)^{-1}\right)^{T}\right|\leq\theta^{-2}\left|H_{yy}^{\nu}\left(\phi^{\nu-1}\left(z\right)\right)-H_{yy}^{\nu-1}\left(\phi^{\nu-1}\left(z\right)\right)\right|\leq\theta^{-2}\left(c_{1}\left\|H\right\|_{\varpi}r_{\nu}^{k-2}\varpi\left(r_{\nu}\right)+c_{1}\left\|H\right\|_{\varpi}r_{\nu-1}^{k-2}\varpi\left(r_{\nu-1}\right)\right)\leq cr_{\nu}^{k-2\tau-2}\varpi\left(r_{\nu}\right)/2M\] for \(\left|\operatorname{Im}x\right|,\left|y\right|\leq\theta r_{\nu}\). Then, denoting \(r^{\ast}:=r_{\nu}\) and \(\delta^{\ast}:=cr_{\nu}^{k-2\tau-2}\varpi\left(r_{\nu}\right)\) in theorem 8, we obtain the analytic symplectic transformation \(\phi^{\nu}\) of each step, mapping the strip \(\left|\operatorname{Im}\xi\right|\leq\theta r_{\nu},\left|\eta\right|\leq\theta r_{\nu}\) into \(\left|\operatorname{Im}x\right|\leq r_{\nu},\left|y\right|\leq r_{\nu}\), such that \(u^{\nu}\left(\xi\right)-\xi\) and \(v^{\nu}\left(\xi\right)\) are of period \(1\) in all variables, and the transformed Hamiltonian function \(K^{\nu}=H^{\nu}\circ\phi^{\nu}\) satisfies \[K^{\nu}_{\xi}\left(\xi,0\right)=0,\ \ K^{\nu}_{\eta}\left(\xi,0\right)=\omega.\] Moreover, (4.17)-(4.20) are valid for \(\left|\operatorname{Im}\xi\right|,\left|\eta\right|,\left|\operatorname{Im}x\right|\leq\theta r_{\nu}\). **Step4:** By (4.18) for \(0,\ldots,\nu-1\) and the arguments in (4.24), there holds \[\left|\phi^{\nu-1}_{\zeta}\left(\zeta\right)\right|\leq 1+\sum_{\mu=0}^{\nu-1}\left|\phi^{\mu}_{\zeta}\left(\zeta\right)-\phi^{\mu-1}_{\zeta}\left(\zeta\right)\right|\leq 1+\sum_{\mu=0}^{\nu-1}\left(\left|\phi^{\mu}_{\zeta}\left(\zeta\right)-\mathbb{I}\right|+\left|\phi^{\mu-1}_{\zeta}\left(\zeta\right)-\mathbb{I}\right|\right)\leq 1+c\sum_{\mu=0}^{\infty}\left(\frac{\varepsilon}{2^{\mu}}\right)^{k-2\tau-2}\varpi\left(\frac{\varepsilon}{2^{\mu}}\right)\leq 1+c\int_{0}^{2\varepsilon}\frac{\varpi\left(x\right)}{x^{2\tau+3-k}}dx\leq 2 \tag{4.26}\] for \(\left|\operatorname{Im}\xi\right|,\left|\eta\right|\leq\theta r_{\nu}\) as long as \(\varepsilon>0\) is sufficiently small, which leads to \[\left|\phi^{\nu}\left(\zeta\right)-\phi^{\nu-1}\left(\zeta\right)\right|=\left|\phi^{\nu-1}\left(\psi^{\nu}\left(\zeta\right)\right)-\phi^{\nu-1}\left(\zeta\right)\right|\leq 2\left|\psi^{\nu}\left(\zeta\right)-\zeta\right|\leq c\left(1-\theta\right)r_{\nu}^{k-2\tau-1}\varpi\left(r_{\nu}\right)\] for \(\left|\operatorname{Im}\xi\right|,\left|\eta\right|\leq r_{\nu+1}\).
Then by Cauchy's estimate, we obtain that \[\left|\phi^{\nu}_{\zeta}\left(\zeta\right)-\phi^{\nu-1}_{\zeta}\left(\zeta\right)\right|\leq cr_{\nu}^{k-2\tau-2}\varpi\left(r_{\nu}\right),\ \ \left|\operatorname{Im}\xi\right|,\left|\eta\right|\leq r_{\nu+1}.\] It can be proved in the same way that \(\left|\phi^{\nu}_{\zeta}\left(\zeta\right)\right|\leq 2\) for \(\left|\operatorname{Im}\xi\right|,\left|\eta\right|\leq\theta r_{\nu+1}\), which implies \[\left|\operatorname{Im}z\right|\leq 2\left|\operatorname{Im}\zeta\right|\leq 2\sqrt{\left|\operatorname{Im}\xi\right|^{2}+\left|\operatorname{Im}\eta\right|^{2}}\leq 2\sqrt{\theta^{2}r_{\nu+1}^{2}+\theta^{2}r_{\nu+1}^{2}}=2r_{\nu+1}=r_{\nu}.\] Besides, we have \(\left|\operatorname{Re}y\right|\leq\rho\). Note that \[v^{\nu}\circ\left(u^{\nu}\right)^{-1}\left(x\right)-v^{\nu-1}\circ\left(u^{\nu-1}\right)^{-1}\left(x\right)=\left(u_{\xi}^{\nu-1}(\xi)^{-1}\right)^{T}U_{x}^{\nu}\left(\xi\right),\ \ x:=u^{\nu-1}\left(\xi\right).\] Recalling (4.24) and employing the contraction mapping principle, we have \(\left|\operatorname{Im}\xi\right|\leq r_{\nu+1}\) if \(\left|\operatorname{Im}x\right|\leq\theta r_{\nu+1}\) with respect to \(x\) defined above. Then from (4.20) and (4.25) one can verify that \[\left|\left(u_{\xi}^{\nu-1}(\xi)^{-1}\right)^{T}U_{x}^{\nu}\left(\xi\right)\right|\leq cr_{\nu}^{k-\tau-1}\varpi\left(r_{\nu}\right). \tag{4.27}\] **Step5:** Finally, we are in a position to prove the convergence of \(u^{\nu}\) and \(v^{\nu}\), and the regularity of their limit functions. Note (4.27). Then we have the following analytic iterative scheme \[\left|u^{\nu}\left(\xi\right)-u^{\nu-1}\left(\xi\right)\right|\leq cr_{\nu}^{k-2\tau-1}\varpi\left(r_{\nu}\right),\ \ \left|\operatorname{Im}\xi\right|\leq r_{\nu+1}, \tag{4.28}\] and \[\left|v^{\nu}\circ\left(u^{\nu}\right)^{-1}\left(x\right)-v^{\nu-1}\circ\left(u^{\nu-1}\right)^{-1}\left(x\right)\right|\leq cr_{\nu}^{k-\tau-1}\varpi\left(r_{\nu}\right),\ \ \left|\operatorname{Im}x\right|\leq\theta r_{\nu+1}. \tag{4.29}\] In particular, (4.28) and (4.29) hold when \(\nu=0\) since \(u^{-1}=\operatorname{id}\) and \(v^{-1}=0\). Moreover, by the same dyadic comparison between series and integrals as in (4.24), the upper bound series in (4.28) and (4.29) converge, namely \(\sum_{\nu=0}^{\infty}r_{\nu}^{k-2\tau-1}\varpi(r_{\nu})\leq\varepsilon\sum_{\nu=0}^{\infty}r_{\nu}^{k-2\tau-2}\varpi(r_{\nu})<+\infty\) and similarly \(\sum_{\nu=0}^{\infty}r_{\nu}^{k-\tau-1}\varpi(r_{\nu})<+\infty\) by **(H1)**. Hence the uniform limits \(u\) and \(v\circ u^{-1}\) of \(u^{\nu}\) and \(v^{\nu}\circ\left(u^{\nu}\right)^{-1}\) exist and are at least \(C^{1}\) (in fact, this is implied by the higher regularity studied later in section 4.2). In addition, the persistent invariant torus possesses the same frequency \(\omega\) as the unperturbed torus by (4.16). ### Iteration theorem on regularity without Holder's type To obtain accurate regularity of \(u\) and \(v\circ u^{-1}\) from the analytic iterative scheme (4.28) and (4.29), we shall follow the idea of Moser and Salamon to establish an abstract iteration theorem, which provides a modulus of continuity of integral form. **Theorem 7.**_Let \(n\in\mathbb{N}^{+},\varepsilon>0\) and \(\{r_{\nu}\}_{\nu\in\mathbb{N}}=\{\varepsilon 2^{-\nu}\}_{\nu\in\mathbb{N}}\) be given, and denote by \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) the limit of a sequence of real analytic functions \(f_{\nu}\left(x\right)\) in the strips \(\left|\mathrm{Im}\,x\right|\leq r_{\nu}\) such that_ \[f_{0}=0,\ \ \left|f_{\nu}\left(x\right)-f_{\nu-1}\left(x\right)\right|\leq\varphi\left(r_{\nu}\right),\ \ \nu\geq 1, \tag{4.30}\] _where \(\varphi\) is a nondecreasing continuous function satisfying \(\varphi\left(0\right)=0\).
Assume that there is a critical \(k_{*}\in\mathbb{N}\) such that_ \[\int_{0}^{1}\frac{\varphi\left(x\right)}{x^{k_{*}+1}}dx<+\infty,\ \ \int_{0}^{1}\frac{\varphi\left(x\right)}{x^{k_{*}+2}}dx=+\infty. \tag{4.31}\] _Then there exists a modulus of continuity \(\varpi_{*}\) such that \(f\in C_{k_{*},\varpi_{*}}\left(\mathbb{R}^{n}\right)\). In other words, the regularity of \(f\) is at least of \(C^{k_{*}}\) plus \(\varpi_{*}\). In particular, \(\varpi_{*}\) could be determined as_ \[\varpi_{*}\left(\gamma\right)\sim\gamma\int_{L\left(\gamma\right)}^{\varepsilon}\frac{\varphi\left(t\right)}{t^{k_{*}+2}}dt=\mathcal{O}^{\#}\left(\int_{0}^{L\left(\gamma\right)}\frac{\varphi\left(t\right)}{t^{k_{*}+1}}dt\right),\ \ \gamma\to 0^{+}, \tag{4.32}\] _where \(L(\gamma)\to 0^{+}\) is some function such that the second relation in (4.32) holds._ Proof.: Define \(g_{\nu}(x):=f_{\nu}\left(x\right)-f_{\nu-1}\left(x\right)\) for \(\nu\in\mathbb{N}^{+}\). Determine an integer valued function \(\widetilde{N}(\gamma):\left[0,1\right]\rightarrow\mathbb{N}^{+}\) (note that \(\widetilde{N}(\gamma)\) can be extended to \(\mathbb{R}^{+}\) due to the arguments below; we thus assume that it is a continuous function). Then for the given critical \(k_{*}\in\mathbb{N}\) and \(x,y\in\mathbb{R}^{n}\), we obtain the following for all multi-indices \(\alpha=\left(\alpha_{1},\ldots,\alpha_{n}\right)\in\mathbb{N}^{n}\) with \(\left|\alpha\right|=k_{*}\): \[\sum_{\nu=1}^{\widetilde{N}\left(\left|x-y\right|\right)-1}\left|\partial^{\alpha}g_{\nu}\left(x\right)-\partial^{\alpha}g_{\nu}\left(y\right)\right|\leq\left|x-y\right|\sum_{\nu=1}^{\widetilde{N}\left(\left|x-y\right|\right)-1}\left\|\nabla\partial^{\alpha}g_{\nu}\right\|_{C^{0}\left(\mathbb{R}^{n}\right)}\leq\left|x-y\right|\sum_{\nu=1}^{\widetilde{N}\left(\left|x-y\right|\right)-1}\frac{\varphi\left(r_{\nu}\right)}{r_{\nu}^{k_{*}+1}}=2\left|x-y\right|\sum_{\nu=1}^{\widetilde{N}\left(\left|x-y\right|\right)-1}\left(\frac{\varepsilon}{2^{\nu}}-\frac{\varepsilon}{2^{\nu+1}}\right)\left(\frac{2^{\nu}}{\varepsilon}\right)^{k_{*}+2}\varphi\left(\frac{\varepsilon}{2^{\nu}}\right)\leq c\left|x-y\right|\int_{\varepsilon 2^{-\widetilde{N}\left(\left|x-y\right|\right)}}^{\varepsilon}\frac{\varphi\left(t\right)}{t^{k_{*}+2}}dt, \tag{4.33}\] where Cauchy's estimate and (4.30) are used in the second inequality, and arguments similar to (4.24) are employed in (4.33); here \(c>0\) is a universal constant. Besides, we similarly get \[\sum_{\nu=\widetilde{N}\left(\left|x-y\right|\right)}^{\infty}\left|\partial^{\alpha}g_{\nu}\left(x\right)-\partial^{\alpha}g_{\nu}\left(y\right)\right|\leq\sum_{\nu=\widetilde{N}\left(\left|x-y\right|\right)}^{\infty}2\left|\partial^{\alpha}g_{\nu}\right|_{C^{0}\left(\mathbb{R}^{n}\right)}\leq 2\sum_{\nu=\widetilde{N}\left(\left|x-y\right|\right)}^{\infty}\frac{\varphi\left(r_{\nu}\right)}{r_{\nu}^{k_{*}}}=4\sum_{\nu=\widetilde{N}\left(\left|x-y\right|\right)}^{\infty}\left(\frac{\varepsilon}{2^{\nu}}-\frac{\varepsilon}{2^{\nu+1}}\right)\left(\frac{2^{\nu}}{\varepsilon}\right)^{k_{*}+1}\varphi\left(\frac{\varepsilon}{2^{\nu}}\right)\leq c\int_{0}^{\varepsilon 2^{-\widetilde{N}\left(\left|x-y\right|\right)}}\frac{\varphi\left(t\right)}{t^{k_{*}+1}}dt.
\tag{4.34}\] Now choose \(\widetilde{N}(\gamma)\rightarrow+\infty\) as \(\gamma\to 0^{+}\) such that \[\gamma\int_{L\left(\gamma\right)}^{\varepsilon}\frac{\varphi\left(t\right)}{t^{k_{*}+2}}dt=\mathcal{O}^{\#}\left(\int_{0}^{L\left(\gamma\right)}\frac{\varphi\left(t\right)}{t^{k_{*}+1}}dt\right):=\varpi_{*}\left(\gamma\right),\ \ \gamma\to 0^{+}, \tag{4.35}\] where \(\varepsilon 2^{-\widetilde{N}\left(\gamma\right)-1}:=L\left(\gamma\right)\to 0^{+}\). This is achievable due to assumption (4.31), the comparison between series and integrals, and the intermediate value theorem. Note that the choice of \(L(\gamma)\) (i.e., \(\widetilde{N}\)) and \(\varpi_{*}\) is not unique (up to constants), and \(\varpi_{*}\) could be continuously extended to some given interval (e.g., \([0,1]\)), but this does not affect the qualitative result. Combining (4.33), (4.34) and (4.35) we finally arrive at \(f\in C_{k_{*},\varpi_{*}}\left(\mathbb{R}^{n}\right)\) because \[\left|\partial^{\alpha}f\left(x\right)-\partial^{\alpha}f\left(y\right)\right|\leq\left(\sum_{\nu=1}^{\widetilde{N}\left(\left|x-y\right|\right)-1}+\sum_{\nu=\widetilde{N}\left(\left|x-y\right|\right)}^{\infty}\right)\left|\partial^{\alpha}g_{\nu}\left(x\right)-\partial^{\alpha}g_{\nu}\left(y\right)\right|\leq c\left(\left|x-y\right|\int_{\varepsilon 2^{-\widetilde{N}\left(\left|x-y\right|\right)}}^{\varepsilon}\frac{\varphi\left(t\right)}{t^{k_{*}+2}}dt+\int_{0}^{\varepsilon 2^{-\widetilde{N}\left(\left|x-y\right|\right)}}\frac{\varphi\left(t\right)}{t^{k_{*}+1}}dt\right)\leq c\varpi_{*}\left(\left|x-y\right|\right).\] Theorem 7 can be extended to the case \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) with \(n,m\in\mathbb{N}^{+}\) since the analysis is completely the same, and the strip \(\left|\mathrm{Im}\,x\right|\leq r_{\nu}\) can also be replaced by \(\left|\mathrm{Im}\,x\right|\leq r_{\nu+1}\) (or \(\leq\theta r_{\nu+1}\)). Theorem 7 can also be used to estimate the regularity of solutions of finite smooth homological equations, thus KAM uniqueness theorems in some cases might be derived, see Section 4 in [19] for instance. However, in order to avoid too much content in this paper, it is omitted here. Recall (4.28) and (4.29). Then one can apply theorem 7 to \(\{u^{\nu}-\mathrm{id}\}_{\nu}\) (because theorem 7 requires that the initial value vanishes) and \(\{v^{\nu}\circ(u^{\nu})^{-1}\}_{\nu}\) to directly analyze the regularity of the KAM torus according to **(H5)**, i.e., there exist \(\varpi_{i}\) (\(i=1,2\)) such that \(u\in C_{k_{1}^{*},\varpi_{1}}\left(\mathbb{R}^{n},\mathbb{R}^{n}\right)\) and \(v\circ u^{-1}\in C_{k_{2}^{*},\varpi_{2}}\left(\mathbb{R}^{n},G\right)\). This completes the proof of theorem 2. ## 5 Proof of theorem 4 We only need to determine \(k_{i}^{*}\) in **(H5)** and choose functions \(L_{i}(\gamma)\to 0^{+}\) (as \(\gamma\to 0^{+}\)) to obtain the moduli of continuity \(\varpi_{i}\) in (2.10) for \(i=1,2\).
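As a preliminary remark (a direct computation from the definitions in **(H5)**, recorded for the reader's convenience), for the data of theorem 4, namely \(k=2n\), \(\tau=n-1\) and \(\varpi=\varpi_{\mathrm{GLH}}^{\varrho,\lambda}\), the auxiliary functions reduce to \[\varphi_{1}\left(x\right)=x^{2n-2\left(n-1\right)-1}\varpi\left(x\right)=x\varpi\left(x\right),\ \ \varphi_{2}\left(x\right)=x^{2n-\left(n-1\right)-1}\varpi\left(x\right)=x^{n}\varpi\left(x\right),\] so that \(\varphi_{1}\left(x\right)/x^{2}=\varphi_{2}\left(x\right)/x^{n+1}=\varpi\left(x\right)/x\), which explains the two integrals displayed below.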
Obviously \(k_{1}^{*}=1\) and \(k_{2}^{*}=n\) because \[\int_{0}^{1}\frac{\varpi_{\text{GLH}}^{\varrho,\lambda}\left(x\right)}{x}dx<+\infty,\ \ \int_{0}^{1}\frac{\varpi_{\text{GLH}}^{\varrho,\lambda}\left(x\right)}{x^{2}}dx=+\infty.\] In view of \(\varphi_{i}(x)\) in **(H5)**, by applying lemma E.1 we get \[\gamma\int_{L_{i}(\gamma)}^{\varepsilon}\frac{\varphi_{i}\left(t\right)}{t^{k_{i}^{*}+2}}dt=\mathcal{O}^{\#}\Bigg{(}\gamma\int_{L_{i}(\gamma)}^{\varepsilon}\frac{1}{t^{2}(\ln(1/t))\cdots\Big{(}\underbrace{\ln\cdots\ln(1/t)}_{\varrho}\Big{)}^{\lambda}}dt\Bigg{)}=\mathcal{O}^{\#}\Bigg{(}\gamma\int_{1/\varepsilon}^{1/L_{i}(\gamma)}\frac{1}{\left(\ln z\right)\cdots\Big{(}\underbrace{\ln\cdots\ln z}_{\varrho}\Big{)}^{\lambda}}dz\Bigg{)}=\mathcal{O}^{\#}\Bigg{(}\frac{\gamma}{L_{i}\left(\gamma\right)\left(\ln(1/L_{i}\left(\gamma\right))\right)\cdots\Big{(}\underbrace{\ln\cdots\ln(1/L_{i}\left(\gamma\right))}_{\varrho}\Big{)}^{\lambda}}\Bigg{)}, \tag{5.36}\] and by direct calculation one arrives at \[\int_{0}^{L_{i}(\gamma)}\frac{\varphi_{i}\left(t\right)}{t^{k_{i}^{*}+1}}dt=\mathcal{O}^{\#}\Bigg{(}\int_{0}^{L_{i}(\gamma)}\frac{1}{t(\ln(1/t))\cdots\Big{(}\underbrace{\ln\cdots\ln(1/t)}_{\varrho}\Big{)}^{\lambda}}dt\Bigg{)}=\mathcal{O}^{\#}\Bigg{(}\frac{1}{\Big{(}\underbrace{\ln\cdots\ln(1/L_{i}\left(\gamma\right))}_{\varrho}\Big{)}^{\lambda-1}}\Bigg{)}. \tag{5.37}\] Finally, choosing \[L_{i}\left(\gamma\right)\sim\frac{\gamma}{\left(\ln(1/\gamma)\right)\cdots\Big{(}\underbrace{\ln\cdots\ln(1/\gamma)}_{\varrho}\Big{)}}\to 0^{+},\ \ \gamma\to 0^{+} \tag{5.38}\] will lead to the second relation in (2.10) for \(i=1,2\), and substituting \(L_{i}(\gamma)\) into (5.36) or (5.37) yields \[\varpi_{1}\left(\gamma\right)\sim\varpi_{2}\left(\gamma\right)\sim\frac{1}{\Big{(}\underbrace{\ln\cdots\ln(1/\gamma)}_{\varrho}\Big{)}^{\lambda-1}} \tag{5.39}\] which determines \(\varpi_{i}\) in theorem 2, see (2.10). This proves theorem 4. ## 6 Proof of theorem 5 Note that \(\ell\notin\mathbb{N}^{+}\) implies \(\left\{\ell\right\}\in\left(0,1\right)\). Then \(k=\left[\ell\right]\) and \(\varpi(x)\sim\varpi_{\text{H}}^{\ell}(x)\sim x^{\{\ell\}}\), i.e., the modulus of continuity is of Holder's type. Consequently, **(H1)** can be directly verified because of \(\ell>2\tau+2\): \[\int_{0}^{1}\frac{\varpi\left(x\right)}{x^{2\tau+3-k}}dx=\int_{0}^{1}\frac{x^{\{\ell\}}}{x^{2\tau+3-[\ell]}}dx=\int_{0}^{1}\frac{1}{x^{1-\left(\ell-2\tau-2\right)}}dx<+\infty.\] Here and below, let \(i\) be \(1\) or \(2\) for simplicity. Recall that \(\varphi_{i}\left(x\right)=x^{k-\left(3-i\right)\tau-1}\varpi\left(x\right)=x^{[\ell]-\left(3-i\right)\tau-1}\cdot x^{\{\ell\}}=x^{\ell-\left(3-i\right)\tau-1}\), and note that \[\int_{0}^{1}\frac{\varphi_{i}\left(x\right)}{x^{k_{i}^{*}+1}}dx=\int_{0}^{1}\frac{1}{x^{k_{i}^{*}-\left(\ell-\left(3-i\right)\tau-2\right)}}dx<+\infty, \tag{6.40}\] \[\int_{0}^{1}\frac{\varphi_{i}\left(x\right)}{x^{k_{i}^{*}+2}}dx=\int_{0}^{1}\frac{1}{x^{k_{i}^{*}-\left(\ell-\left(3-i\right)\tau-2\right)+1}}dx=+\infty. \tag{6.41}\] Then the critical \(k_{i}^{*}\) in **(H5)** could be uniquely chosen as \(k_{i}^{*}:=\left[\ell-\left(3-i\right)\tau-1\right]\in\mathbb{N}^{+}\) since \(\ell-\left(3-i\right)\tau-1\notin\mathbb{N}^{+}\).
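Indeed, with this choice the exponents can be checked at once: writing \(a:=\ell-\left(3-i\right)\tau-1\notin\mathbb{N}^{+}\), one has \[k_{i}^{*}-\left(\ell-\left(3-i\right)\tau-2\right)=\left[a\right]-a+1=1-\left\{a\right\}\in\left(0,1\right),\] so the integral in (6.40) converges, while the exponent in (6.41) equals \(2-\left\{a\right\}>1\), so that integral diverges; this is precisely the criticality required in **(H5)**.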
Further, letting \(L_{i}\left(\gamma\right)=\gamma\to 0^{+}\) yields that \[\int_{0}^{L_{i}\left(\gamma\right)}\frac{\varphi_{i}\left(t\right)}{t^{k_{i}^{*}+1}}dt=\mathcal{O}^{\#}\left(\int_{0}^{\gamma}\frac{1}{t^{1-\left\{\ell-\left(3-i\right)\tau-2\right\}}}dt\right)=\mathcal{O}^{\#}\left(\gamma^{\left\{\ell-\left(3-i\right)\tau-2\right\}}\right)\] and \[\gamma\int_{L_{i}\left(\gamma\right)}^{\varepsilon}\frac{\varphi_{i}\left(t\right)}{t^{k_{i}^{*}+2}}dt=\mathcal{O}^{\#}\left(\gamma\int_{\gamma}^{\varepsilon}\frac{1}{t^{2-\left\{\ell-\left(3-i\right)\tau-2\right\}}}dt\right)=\mathcal{O}^{\#}\left(\gamma^{\left\{\ell-\left(3-i\right)\tau-2\right\}}\right).\] This leads to Holder's type \[\varpi_{i}\left(\gamma\right)\sim\left(L_{i}\left(\gamma\right)\right)^{\left\{\ell-\left(3-i\right)\tau-2\right\}}\sim\gamma^{\left\{\ell-\left(3-i\right)\tau-2\right\}}\sim\varpi_{\text{H}}^{\left\{\ell-\left(3-i\right)\tau-2\right\}}(\gamma)\] due to (2.10) in theorem 2. By observing \(k_{i}^{*}+\left\{\ell-\left(3-i\right)\tau-2\right\}=\ell-\left(3-i\right)\tau-1\) we finally arrive at \(u\in C^{\ell-2\tau-1}\left(\mathbb{R}^{n},\mathbb{R}^{n}\right)\) and \(v\circ u^{-1}\in C^{\ell-\tau-1}\left(\mathbb{R}^{n},G\right)\). This proves theorem 5. ## 7 Proof of theorem 6 Firstly, note that \(k=\left[2\tau+2\right]\) and \(\varpi\left(x\right)\sim x^{\left\{2\tau+2\right\}}/\left(-\ln x\right)^{\lambda}\) with \(\lambda>1\), then **(H1)** holds since \[\int_{0}^{1}\frac{\varpi\left(x\right)}{x^{2\tau+3-k}}dx=\mathcal{O}^{\#}\left(\int_{0}^{1/2}\frac{x^{\left\{2\tau+2\right\}}}{x^{2\tau+3-\left[2\tau+2\right]}\left(-\ln x\right)^{\lambda}}dx\right)=\mathcal{O}^{\#}\left(\int_{0}^{1/2}\frac{1}{x(-\ln x)^{\lambda}}dx\right)<+\infty.\] Secondly, in view of \(\varphi_{i}(x)\) in **(H5)**, we have \[\int_{0}^{1}\frac{\varphi_{i}\left(x\right)}{x^{k_{i}^{*}+1}}dx=\mathcal{O}^{\#}\left(\int_{0}^{1/2}\frac{1}{x^{k_{i}^{*}-(i-1)\tau}(-\ln x)^{\lambda}}dx\right),\ \ i=1,2.\] This leads to the critical \(k_{1}^{*}=1\) and \(k_{2}^{*}=\left[\tau+1\right]\) in **(H5)**. Here one uses the following fact: for given \(\lambda>1\), \[\int_{0}^{1/2}\frac{1}{x^{\iota}(-\ln x)^{\lambda}}dx<+\infty,\ \ \int_{0}^{1/2}\frac{1}{x^{\iota+1}(-\ln x)^{\lambda}}dx=+\infty\] if and only if \(\iota\in(0,1]\). Next, we investigate the KAM remaining regularity through certain complicated asymptotic analysis. One notices that the analysis of \(\varpi_{1}\) with all \(\tau>n-1\) and of \(\varpi_{2}\) with \(n-1<\tau\in\mathbb{N}^{+}\) is the same as that for \(\varrho=1\) in theorem 4, i.e., \(L_{1}(\gamma)\) and \(L_{2}(\gamma)\) could be chosen as \(\gamma/(-\ln\gamma)\to 0^{+}\), see (5.38) with \(\varrho=1\). Therefore, in view of (5.39), we arrive at \[\varpi_{1}\left(\gamma\right)\sim\frac{1}{\left(-\ln\gamma\right)^{\lambda-1}}\sim\varpi_{\rm LH}^{\lambda-1}\left(\gamma\right),\ \ \tau>n-1,\] and \[\varpi_{2}\left(\gamma\right)\sim\frac{1}{\left(-\ln\gamma\right)^{\lambda-1}}\sim\varpi_{\rm LH}^{\lambda-1}\left(\gamma\right),\ \ n-1<\tau\in\mathbb{N}^{+}.\] However, the asymptotic analysis for \(\varpi_{2}\) becomes different when \(n-1<\tau\notin\mathbb{N}^{+}\). Note that \(\left\{\tau\right\}\in(0,1)\) and \(\left[\tau+1\right]-\tau=\left[\tau\right]+1-\tau=1-\left\{\tau\right\}\) at present.
Hence, by applying (E.1) in lemma E.2 we get \[\int_{0}^{L_{2}(\gamma)}\frac{\varphi_{2}\left(t\right)}{t^{k_{2}^{*}+1}}dt=\mathcal{O}^{\#}\left(\int_{0}^{L_{2}(\gamma)}\frac{1}{t^{\left[\tau+1\right]-\tau}(-\ln t)^{\lambda}}dt\right)=\mathcal{O}^{\#}\left(\int_{0}^{L_{2}(\gamma)}\frac{1}{t^{1-\left\{\tau\right\}}(-\ln t)^{\lambda}}dt\right)=\mathcal{O}^{\#}\left(\int_{1/L_{2}(\gamma)}^{+\infty}\frac{1}{z^{1+\left\{\tau\right\}}(\ln z)^{\lambda}}dz\right)=\mathcal{O}^{\#}\left(\frac{\left(L_{2}\left(\gamma\right)\right)^{\left\{\tau\right\}}}{\left(\ln\left(1/L_{2}\left(\gamma\right)\right)\right)^{\lambda}}\right), \tag{7.42}\] and similarly, according to (E.2) in lemma E.2 we have \[\gamma\int_{L_{2}(\gamma)}^{\varepsilon}\frac{\varphi_{2}\left(t\right)}{t^{k_{2}^{*}+2}}dt=\mathcal{O}^{\#}\left(\gamma\int_{1/\varepsilon}^{1/L_{2}(\gamma)}\frac{1}{z^{\left\{\tau\right\}}(\ln z)^{\lambda}}dz\right)=\mathcal{O}^{\#}\left(\frac{\gamma\left(L_{2}\left(\gamma\right)\right)^{\left\{\tau\right\}-1}}{\left(\ln\left(1/L_{2}\left(\gamma\right)\right)\right)^{\lambda}}\right). \tag{7.43}\] Now let us choose \(L_{2}(\gamma)\sim\gamma\to 0^{+}\), i.e., different from the choice when \(n-1<\tau\in\mathbb{N}^{+}\). One verifies that the second relation in (2.10) holds for \(i=2\), and substituting \(L_{2}(\gamma)\) into (7.42) or (7.43) yields that \[\varpi_{2}\left(\gamma\right)\sim\frac{\gamma^{\left\{\tau\right\}}}{\left(-\ln\gamma\right)^{\lambda}}\sim\gamma^{\left\{\tau\right\}}\varpi_{\rm LH}^{\lambda}\left(\gamma\right),\ \ n-1<\tau\notin\mathbb{N}^{+}\] due to (2.10) in theorem 2. This proves theorem 6. ## Appendix A Semi separability and weak homogeneity for modulus of continuity **Lemma A.1**.: _Let a modulus of continuity \(\varpi\) be given. If \(\varpi\) is piecewise continuously differentiable and \(\varpi^{\prime}\geq 0\) is nonincreasing, then \(\varpi\) admits semi separability in definition 2.3. As a consequence, if \(\varpi\) is convex near \(0^{+}\), then it is semi separable._ Proof.: Assume that \(\varpi\) is continuously differentiable without loss of generality. Then we obtain semi separability due to \[\sup_{0<r<\delta/x}\frac{\varpi\left(rx\right)}{\varpi\left(r\right)}=\sup_{0<r<\delta/x}\frac{\varpi\left(rx\right)-\varpi\left(0+\right)}{\varpi\left(r\right)}\leq\sup_{0<r<\delta/x}\frac{1}{\varpi\left(r\right)}\sum_{j=0}^{\left[x\right]}\int_{jr}^{\left(j+1\right)r}\varpi^{\prime}\left(t\right)dt\leq\sup_{0<r<\delta/x}\frac{1}{\varpi\left(r\right)}\sum_{j=0}^{\left[x\right]}\int_{0}^{r}\varpi^{\prime}\left(t\right)dt=\left[x\right]+1=\mathcal{O}\left(x\right),\ \ x\rightarrow+\infty.\] **Lemma A.2**.: _Let a modulus of continuity \(\varpi\) be given. If \(\varpi\) is convex near \(0^{+}\), then it admits weak homogeneity in definition 2.4._ Proof.: For \(x>0\) sufficiently small, one verifies that \[\varpi\left(x\right)=x\cdot\frac{\varpi\left(x\right)-\varpi\left(0+\right)}{x-0}\leq x\cdot\frac{\varpi\left(ax\right)-\varpi\left(0+\right)}{ax-0}=a^{-1}\varpi\left(ax\right)\] for \(0<a<1\), which leads to the weak homogeneity \[\overline{\lim\limits_{x\to 0^{+}}}\frac{\varpi\left(x\right)}{\varpi\left(ax\right)}\leq a^{-1}<+\infty.\] ## Appendix B Proof of theorem 1 Proof.: For the completeness of the analysis we give a very detailed proof.
An outline of the strategy of the proof is provided: we firstly construct an approximation integral operator by the Fourier transform of a compactly supported function, and then present certain properties of the operator (note that these preparations are classical); finally, we estimate the approximation error in the sense of modulus of continuity. Let \[K\left(x\right)=\frac{1}{\left(2\pi\right)^{n}}\int_{\mathbb{R}^{n}}\widetilde{K}\left(\xi\right)e^{\mathrm{i}\left\langle x,\xi\right\rangle}d\xi,\ \ x\in\mathbb{C}^{n}\] be an entire function whose Fourier transform \[\widetilde{K}\left(\xi\right)=\int_{\mathbb{R}^{n}}K\left(x\right)e^{-\mathrm{i}\left\langle x,\xi\right\rangle}dx,\ \ \xi\in\mathbb{R}^{n}\] is a smooth function with compact support, contained in the ball \(\left|\xi\right|\leq 1\), that satisfies \(\widetilde{K}\left(\xi\right)=\widetilde{K}\left(-\xi\right)\) and \[\partial^{\alpha}\widetilde{K}\left(0\right)=\begin{cases}1,\ \ \alpha=0,\\ 0,\ \ \alpha\neq 0.\end{cases}\] (B.1) Next, we assert that \[\left|\partial^{\beta}\mathcal{F}\left(\widetilde{K}\left(\xi\right)\right)\left(z\right)\right|\leq\frac{c\left(\beta,p\right)}{\left(1+\left|\mathrm{Re}\,z\right|\right)^{p}}e^{\left|\mathrm{Im}\,z\right|},\ \ \max\left\{1,\left|\beta\right|\right\}\leq p\in\mathbb{R}.\] (B.2) Note that we assume \(\widetilde{K}\in C_{0}^{\infty}\left(\mathbb{R}^{n}\right)\) and \(\mathrm{supp}\,\widetilde{K}\subseteq B\left(0,1\right)\), thus \[\left|\left(1+\left|z\right|\right)^{k}\partial^{\beta}\mathcal{F}\left(\widetilde{K}\left(\xi\right)\right)\left(z\right)\right|\leq c\sum_{\left|\gamma\right|\leq k}\left|z^{\gamma}\partial^{\beta}\mathcal{F}\left(\widetilde{K}\left(\xi\right)\right)\left(z\right)\right|=c\sum_{\left|\gamma\right|\leq k}\left|\mathcal{F}\left(\partial^{\gamma}\left(\xi^{\beta}\widetilde{K}\left(\xi\right)\right)\right)\left(z\right)\right|,\] (B.3) where \(\mathcal{F}\) represents the Fourier transform. Since \(\partial^{\gamma}\left(\xi^{\beta}\widetilde{K}\left(\xi\right)\right)\in C_{0}^{\infty}\left(\overline{B\left(0,1\right)}\right)\) is of the same type as \(\widetilde{K}\), we only need to prove that \[\left|\mathcal{F}\left(\widetilde{K}\left(\xi\right)\right)\left(z\right)\right|\leq c_{k}e^{\left|\mathrm{Im}\,z\right|}.\] Obviously \[\left|\mathcal{F}\left(\widetilde{K}\left(\xi\right)\right)\left(z\right)\right|\leq\frac{1}{\left(2\pi\right)^{n}}\int_{\mathbb{R}^{n}}\left|\widetilde{K}\left(\xi\right)\right|e^{-\left\langle\mathrm{Im}\,z,\xi\right\rangle}d\xi\leq\frac{c}{\left(2\pi\right)^{n}}\int_{B\left(0,1\right)}e^{\left|\mathrm{Im}\,z\right|\left|\xi\right|}d\xi\leq ce^{\left|\mathrm{Im}\,z\right|},\] where \(c>0\) is independent of \(z\). Then assertion (B.2) is proved by recalling (B.3). The inequality in (B.2) is usually called the Paley-Wiener Theorem, see also Chapter III in [20]. As we will see later, it plays an important role in the subsequent verification of definitions, integration by parts, and the translational feasibility according to Cauchy's integral formula. Next we assert that \(K\) is real analytic on \(\mathbb{R}^{n}\) and admits the following property \[\int_{\mathbb{R}^{n}}\left(u+\mathrm{i}v\right)^{\alpha}\partial^{\beta}K\left(u+\mathrm{i}v\right)du=\begin{cases}(-1)^{|\alpha|}\alpha!,&\alpha=\beta,\\ 0,&\alpha\neq\beta,\end{cases}\] (B.4) for \(u,v\in\mathbb{R}^{n}\) and multi-indices \(\alpha,\beta\in\mathbb{N}^{n}\).
In order to prove assertion (B.4), we first consider proving the following for \(x\in\mathbb{R}^{n}\):
\[\int_{\mathbb{R}^{n}}x^{\alpha}\partial^{\beta}K\left(x\right)dx=\begin{cases}(-1)^{|\alpha|}\alpha!,&\alpha=\beta,\\ 0,&\alpha\neq\beta.\end{cases}\tag{B.5}\]

**Case 1:** If \(\alpha=\beta\), then
\[\int_{\mathbb{R}^{n}}x^{\alpha}\partial^{\beta}K\left(x\right)dx=\int_{\mathbb{R}^{n}}\left(\prod_{j=1}^{n}x_{j}^{\alpha_{j}}\right)\left(\prod_{j=1}^{n}\partial_{x_{j}}^{\alpha_{j}}\right)K\left(x\right)dx\]
\[=(-1)^{\alpha_{1}}\alpha_{1}!\cdot\int_{\mathbb{R}^{n-1}}\left(\prod_{j=2}^{n}x_{j}^{\alpha_{j}}\right)\left(\prod_{j=2}^{n}\partial_{x_{j}}^{\alpha_{j}}\right)K\left(x_{2},\cdots,x_{n}\right)dx_{2}\cdots dx_{n}\]
\[=\cdots=(-1)^{\alpha_{1}+\cdots+\alpha_{n}}\alpha_{1}!\cdots\alpha_{n}!\int_{\mathbb{R}}K\left(x_{n}\right)dx_{n}=(-1)^{|\alpha|}\alpha!\,\widetilde{K}\left(0\right)=(-1)^{|\alpha|}\alpha!\,.\]

**Case 2:** There exists some \(\alpha_{j}\leq\beta_{j}-1\); let \(j=1\) without loss of generality. Then
\[\int_{\mathbb{R}^{n}}x^{\alpha}\partial^{\beta}K\left(x\right)dx=\int_{\mathbb{R}^{n}}\left(\prod_{j=1}^{n}x_{j}^{\alpha_{j}}\right)\left(\prod_{j=1}^{n}\partial_{x_{j}}^{\beta_{j}}\right)K\left(x\right)dx\]
\[=(-1)^{\beta_{1}-\alpha_{1}}\int_{\mathbb{R}^{n}}\left(\prod_{j=2}^{n}x_{j}^{\alpha_{j}}\right)\left(\prod_{j=2}^{n}\partial_{x_{j}}^{\beta_{j}}\right)\partial_{x_{1}}^{\beta_{1}-\alpha_{1}}K\left(x\right)dx=0.\]

**Case 3:** Now we have \(\alpha_{j}\geq\beta_{j}\) for all \(j\), and some \(\alpha_{j}\geq\beta_{j}+1\) (otherwise \(\alpha=\beta\)); let \(j=1\) without loss of generality. At this point we first prove a conclusion according to (B.1). Since
\[\partial^{\alpha}\widetilde{K}\left(0\right)=(-\mathrm{i})^{|\alpha|}\int_{\mathbb{R}^{n}}x^{\alpha}K\left(x\right)dx=0,\ \ \alpha\neq 0,\]
it follows that
\[\int_{\mathbb{R}^{n}}x^{\alpha}K\left(x\right)dx=0,\ \ \alpha\neq 0.\]
Hence, we arrive at
\[\int_{\mathbb{R}^{n}}x^{\alpha}\partial^{\beta}K\left(x\right)dx=(-1)^{\sum_{j=1}^{n}\left(\beta_{j}-\alpha_{j}\right)}\int_{\mathbb{R}^{n}}\left(x_{1}^{\alpha_{1}-\beta_{1}}\right)\left(\prod_{j=2}^{n}x_{j}^{\alpha_{j}-\beta_{j}}\right)K\left(x\right)dx=0.\]
This proves (B.5). Next, we will consider a complex translation of (B.5) and prove that
\[\int_{\mathbb{R}^{n}}(u+\mathrm{i}v)^{\alpha}\partial^{\beta}K\left(u+\mathrm{i}v\right)du=\begin{cases}(-1)^{|\alpha|}\alpha!,&\alpha=\beta,\\ 0,&\alpha\neq\beta.\end{cases}\]
Actually one only needs to recall (B.2), and the proof is completed according to Cauchy's integral formula.

Finally, we prove that
\[S_{r}p=p\ \ \text{for every real polynomial}\ p:\mathbb{R}^{n}\to\mathbb{R}.\tag{B.6}\]
In fact, it suffices to consider the monomials
\[p=x^{\alpha}=\prod_{j=1}^{n}x_{j}^{\alpha_{j}}.\]
A straightforward calculation gives
\[S_{r}p=r^{-n}\int_{\mathbb{R}^{n}}K\left(\frac{x-y}{r}\right)\prod_{j=1}^{n}y_{j}^{\alpha_{j}}dy=\int_{\mathbb{R}^{n}}K\left(z\right)\prod_{j=1}^{n}\left(rz_{j}+x_{j}\right)^{\alpha_{j}}dz\]
\[=\left(\prod_{j=1}^{n}x_{j}^{\alpha_{j}}\right)\int_{\mathbb{R}^{n}}K\left(z\right)dz+\sum_{\gamma\neq 0}\varphi_{\gamma}\left(r,x\right)\int_{\mathbb{R}^{n}}z^{\gamma}K\left(z\right)dz=\prod_{j=1}^{n}x_{j}^{\alpha_{j}}=p,\]
where \(\varphi_{\gamma}\left(r,x\right)\) are coefficients independent of \(z\).
As to the complex case, one only needs to perform a complex translation to obtain
\[p\left(u+\mathrm{i}v\right)=S_{r}p\left(u+\mathrm{i}v\right)=\int_{\mathbb{R}^{n}}K\left(\mathrm{i}r^{-1}v-\eta\right)p\left(u+r\eta\right)d\eta.\]
The above preparations are classical; see also [19]. Next we prove the Jackson type approximation theorem via only the modulus of continuity. We will make use of (B.6) in the case of the Taylor polynomial
\[p_{k}\left(x;y\right):=P_{f,k}\left(x;y\right)=\sum_{|\alpha|\leq k}\frac{1}{\alpha!}\partial^{\alpha}f\left(x\right)y^{\alpha}\]
of \(f\) with \(k\in\mathbb{N}\). Note that
\[\left|f\left(x+y\right)-p_{k}\left(x;y\right)\right|=\left|\int_{0}^{1}k(1-t)^{k-1}\sum_{|\alpha|=k}\frac{1}{\alpha!}\left(\partial^{\alpha}f\left(x+ty\right)-\partial^{\alpha}f\left(x\right)\right)y^{\alpha}dt\right|\]
for every \(x,y\in\mathbb{R}^{n}\). Define the following domains to partition \(\mathbb{R}^{n}\):
\[\Omega_{1}:=\left\{\eta\in\mathbb{R}^{n}:|\eta|<\delta r^{-1}\right\},\ \ \Omega_{2}:=\left\{\eta\in\mathbb{R}^{n}:|\eta|\geq\delta r^{-1}\right\}.\]
We have to use different estimates in these two domains, as follows. If \(0<|y|<\delta\), we obtain that
\[\left|f\left(x+y\right)-p_{k}\left(x;y\right)\right|\leq\int_{0}^{1}k(1-t)^{k-1}\sum_{|\alpha|=k}\frac{1}{\alpha!}\cdot\left[\partial^{\alpha}f\right]_{\varpi}\varpi\left(t\left|y\right|\right)\cdot\left|y^{\alpha}\right|dt\]
\[\leq c\left(n,k\right)\left\|f\right\|_{\varpi}\int_{0}^{1}\varpi\left(t\left|y\right|\right)dt\cdot\left|y^{\alpha}\right|\leq c\left(n,k\right)\left\|f\right\|_{\varpi}\varpi\left(\left|y\right|\right)\left|y^{\alpha}\right|\leq c\left(n,k\right)\left\|f\right\|_{\varpi}\varpi\left(\left|y\right|\right)\left|y\right|^{k}.\tag{B.7}\]
If \(|y|\geq\delta\), one easily arrives at
\[\left|f\left(x+y\right)-p_{k}\left(x;y\right)\right|\leq\int_{0}^{1}k(1-t)^{k-1}\sum_{|\alpha|=k}\frac{1}{\alpha!}\cdot 2\left|\partial^{\alpha}f\right|_{C^{0}}\cdot\left|y^{\alpha}\right|dt\leq c\left(n,k\right)\left\|f\right\|_{\varpi}\left|y^{\alpha}\right|\leq c\left(n,k\right)\left\|f\right\|_{\varpi}\left|y\right|^{k}.\tag{B.8}\]
The Hölder inequality has been used in (B.7) and (B.8), with \(k\geq 1,\alpha_{i}\geq 1,\mu_{i}=k/\alpha_{i}\geq 1\) without loss of generality:
\[\left|y^{\alpha}\right|=\prod_{i=1}^{n}\left|y_{i}\right|^{\alpha_{i}}\leq\sum_{i=1}^{n}\frac{1}{\mu_{i}}\left|y_{i}\right|^{\alpha_{i}\mu_{i}}\leq\sum_{i=1}^{n}\left|y_{i}\right|^{k}\leq\sum_{i=1}^{n}\left|y\right|^{k}=n|y|^{k}.\]
Now let \(x=u+\mathrm{i}v\) with \(u,v\in\mathbb{R}^{n}\) and \(\left|v\right|\leq r\).
Fix \(p=n+k+2\), and let \(c=c\left(n,k\right)>0\) be a universal constant. Then it follows that
\[\left|S_{r}f\left(u+\mathrm{i}v\right)-p_{k}\left(u;\mathrm{i}v\right)\right|\leq\int_{\mathbb{R}^{n}}\left|K\left(\mathrm{i}r^{-1}v-\eta\right)\right|\left|f\left(u+r\eta\right)-p_{k}\left(u;r\eta\right)\right|d\eta\]
\[\leq c\int_{\mathbb{R}^{n}}\frac{e^{r^{-1}\left|v\right|}}{\left(1+\left|\eta\right|\right)^{p}}\left|f\left(u+r\eta\right)-p_{k}\left(u;r\eta\right)\right|d\eta\leq c\int_{\mathbb{R}^{n}}\frac{1}{\left(1+\left|\eta\right|\right)^{p}}\left|f\left(u+r\eta\right)-p_{k}\left(u;r\eta\right)\right|d\eta\]
\[=c\left(\int_{\Omega_{1}}+\int_{\Omega_{2}}\right)\frac{1}{\left(1+\left|\eta\right|\right)^{p}}\left|f\left(u+r\eta\right)-p_{k}\left(u;r\eta\right)\right|d\eta:=c\left(I_{1}+I_{2}\right),\]
where \(e^{r^{-1}\left|v\right|}\leq e\) has been absorbed into \(c\) since \(\left|v\right|\leq r\). As will be seen, \(I_{1}\) is the main part while \(I_{2}\) is the remainder. Recall remark 2.4 and (2.4). Hence the following holds due to (B.7):
\[I_{1}=\int_{\Omega_{1}}\frac{1}{\left(1+\left|\eta\right|\right)^{p}}\left|f\left(u+r\eta\right)-p_{k}\left(u;r\eta\right)\right|d\eta\leq\int_{\left|\eta\right|\leq\delta r^{-1}}\frac{1}{\left(1+\left|\eta\right|\right)^{p}}\cdot c\left\|f\right\|_{\varpi}\varpi\left(r\right)\psi(\left|\eta\right|)|r\eta|^{k}d\eta\]
\[\leq c\left\|f\right\|_{\varpi}r^{k}\varpi(r)\int_{0}^{\delta r^{-1}}\frac{w^{k+n}}{\left(1+w\right)^{p}}dw\leq c\left\|f\right\|_{\varpi}r^{k}\varpi(r)\int_{0}^{+\infty}\frac{w^{k+n}}{\left(1+w\right)^{p}}dw\leq c\left\|f\right\|_{\varpi}r^{k}\varpi(r).\tag{B.9}\]
In view of (B.8), we have
\[I_{2}=\int_{\Omega_{2}}\frac{1}{\left(1+\left|\eta\right|\right)^{p}}\left|f\left(u+r\eta\right)-p_{k}\left(u;r\eta\right)\right|d\eta\leq\int_{\left|\eta\right|\geq\delta r^{-1}}\frac{1}{\left(1+\left|\eta\right|\right)^{p}}\cdot c\left\|f\right\|_{\varpi}\left|r\eta\right|^{k}d\eta\]
\[\leq c\left\|f\right\|_{\varpi}r^{k}\int_{\delta r^{-1}}^{+\infty}\frac{w^{k+n-1}}{\left(1+w\right)^{p}}dw\leq c\left\|f\right\|_{\varpi}r^{k}\int_{\delta r^{-1}}^{+\infty}\frac{1}{w^{p-k-n+1}}dw\leq c\left\|f\right\|_{\varpi}r^{k+2}.\tag{B.10}\]
By (B.9) and (B.10), we finally arrive at
\[\left|S_{r}f\left(u+\mathrm{i}v\right)-p_{k}\left(u;\mathrm{i}v\right)\right|\leq c\left\|f\right\|_{\varpi}r^{k}\varpi(r)\]
due to \(\varlimsup_{r\to 0^{+}}r/\varpi\left(r\right)<+\infty\) in definition 2.1. This proves theorem 1 for \(\left|\alpha\right|=0\). As to \(\left|\alpha\right|\neq 0\), the result follows from the fact that \(S_{r}\) commutes with \(\partial^{\alpha}\). We therefore finish the proof of theorem 1.

## Appendix C Proof of corollary 2.1

Proof.: Only the analysis of the case \(\left|\alpha\right|=0\) is given. In view of theorem 1 and (B.7), we obtain that
\[\left|S_{r}f\left(x\right)-f\left(x\right)\right|\leq\left|S_{r}f\left(x\right)-P_{f,k}\left(\operatorname{Re}x;\mathrm{i}\operatorname{Im}x\right)\right|+\left|P_{f,k}\left(\operatorname{Re}x;\mathrm{i}\operatorname{Im}x\right)-f\left(x\right)\right|\leq c_{*}\|f\|_{\varpi}r^{k}\varpi(r),\tag{C.1}\]
where the constant \(c_{*}>0\) depends on \(n\) and \(k\). Further, by (C.1) we have
\[\left|S_{r}f\left(x\right)\right|\leq\left|S_{r}f\left(x\right)-f\left(x\right)\right|+\left|f\left(x\right)\right|\leq c_{*}\|f\|_{\varpi}r^{k}\varpi(r)+\|f\|_{\varpi}\leq c^{*}\|f\|_{\varpi},\]
provided a constant \(c^{*}>0\) depending on \(n,k\) and \(\varpi\). This completes the proof.
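To illustrate the estimate just proved, consider the classical Hölder case (this specialization is our added example and is not part of the original argument): take \(\varpi\left(r\right)=r^{\beta}\) with \(0<\beta\leq 1\), which is a modulus of continuity satisfying \(\varlimsup_{r\to 0^{+}}r/\varpi\left(r\right)<+\infty\). Then theorem 1 and corollary 2.1, together with the commutation of \(S_{r}\) and \(\partial^{\alpha}\), give
\[\left|\partial^{\alpha}S_{r}f\left(x\right)-\partial^{\alpha}f\left(x\right)\right|\leq c\left\|f\right\|_{C^{k,\beta}}r^{k+\beta-\left|\alpha\right|},\ \ \left|\alpha\right|\leq k,\]
which is the familiar form of the smoothing estimate used in the finite-smoothness (Hölder) KAM literature; cf. [19]. The general statements above replace the factor \(r^{\beta}\) by an arbitrary modulus \(\varpi\left(r\right)\).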
## Appendix D Proof of corollary 2.2

Proof.: It is easy to verify that
\[S_{r}f\left(x+1\right)=\frac{1}{r^{n}}\int_{\mathbb{R}^{n}}K\left(\frac{x-\left(y-1\right)}{r}\right)f\left(y\right)dy=\frac{1}{r^{n}}\int_{\mathbb{R}^{n}}K\left(\frac{x-u}{r}\right)f\left(u+1\right)du\]
\[=\frac{1}{r^{n}}\int_{\mathbb{R}^{n}}K\left(\frac{x-u}{r}\right)f\left(u\right)du=S_{r}f\left(x\right).\]
According to Fubini's theorem, we obtain
\[\int_{\mathbb{T}^{n}}S_{r}f\left(x\right)dx=\frac{1}{r^{n}}\int_{\mathbb{T}^{n}}\int_{\mathbb{R}^{n}}K\left(\frac{x-y}{r}\right)f\left(y\right)dy\,dx=\frac{1}{r^{n}}\int_{\mathbb{R}^{n}}K\left(\frac{m}{r}\right)\left(\int_{\mathbb{T}^{n}}f\left(x+m\right)dx\right)dm=0.\]
This completes the proof.

## Appendix E Asymptotic analysis in estimates

Here we provide some useful asymptotic results, all of which can be proved by L'Hôpital's rule or by integration by parts; the proofs are therefore omitted.

**Lemma E.1**.: _Let \(\varrho\in\mathbb{N}^{+}\), \(\lambda>1\) and some \(M>0\) sufficiently large be fixed. Then for \(X\to+\infty\), there holds_
\[\int_{M}^{X}\frac{1}{\left(\ln z\right)\cdots\left(\underbrace{\ln\cdots\ln z}_{\varrho}\right)^{\lambda}}dz=\mathcal{O}^{\#}\left(\frac{X}{\left(\ln X\right)\cdots\left(\underbrace{\ln\cdots\ln X}_{\varrho}\right)^{\lambda}}\right).\]

**Lemma E.2**.: _Let \(0<\sigma<1\), \(\lambda>1\) and some \(M>0\) sufficiently large be fixed. Then for \(X\to+\infty\), we have_
\[\int_{M}^{X}\frac{1}{z^{\sigma}(\ln z)^{\lambda}}dz=\mathcal{O}^{\#}\left(\frac{X^{1-\sigma}}{\left(\ln X\right)^{\lambda}}\right),\tag{E.1}\]
_and_
\[\int_{X}^{+\infty}\frac{1}{z^{1+\sigma}(\ln z)^{\lambda}}dz=\mathcal{O}^{\#}\left(\frac{1}{X^{\sigma}(\ln X)^{\lambda}}\right).\tag{E.2}\]

## Appendix F KAM theorem for quantitative estimates

Here we give a KAM theorem for quantitative estimates, which is used in theorem 2 of this paper. See Theorem 1 in Salamon's paper [19] for the case \(\tau>n-1\); as to \(\tau=n-1\), the proof is relatively trivial (in fact, one just slightly modifies Lemma 2 in [19]).

**Theorem 8**.: _Let \(n\geq 2,\tau\geq n-1,0<\theta<1\), and \(M\geq 1\) be given._
_Then there are positive constants \(\delta_{*}\) and \(c\) such that \(c\delta_{*}\leq 1/2\) and the following holds for every \(0<r^{*}\leq 1\) and every \(\omega\in\mathbb{R}^{n}\) that satisfies (1.1)._

_Suppose that \(H(x,y)\) is a real analytic Hamiltonian function defined in the strip \(|\mathrm{Im}\,x|\leq r^{*},|y|\leq r^{*}\), which is of period \(1\) in the variables \(x_{1},\ldots,x_{n}\) and satisfies_
\[\left|H\left(x,0\right)-\int_{\mathbb{T}^{n}}H\left(\xi,0\right)d\xi\right|\leq\delta^{*}{r^{*}}^{2\tau+2},\]
\[\left|H_{y}\left(x,0\right)-\omega\right|\leq\delta^{*}{r^{*}}^{\tau+1},\]
\[\left|H_{yy}\left(x,y\right)-Q\left(x,y\right)\right|\leq\frac{c\delta^{*}}{2M},\]
_for \(|\mathrm{Im}\,x|\leq r^{*},|y|\leq r^{*}\), where \(0<\delta^{*}\leq\delta_{*}\), and \(Q\left(x,y\right)\in\mathbb{C}^{n\times n}\) is a symmetric (not necessarily analytic) matrix valued function in the strip \(|\mathrm{Im}\,x|\leq r^{*},|y|\leq r^{*}\) and satisfies in this domain_
\[|Q\left(z\right)|\leq M,\ \ \left|\left(\int_{\mathbb{T}^{n}}Q\left(x,0\right)dx\right)^{-1}\right|\leq M.\]
_Then there exists a real analytic symplectic transformation \(z=\phi\left(\zeta\right)\) of the form_
\[z=\left(x,y\right),\ \ \zeta=\left(\xi,\eta\right),\ \ x=u\left(\xi\right),\ \ y=v\left(\xi\right)+\left(u_{\xi}^{T}\left(\xi\right)\right)^{-1}\eta,\]
_mapping the strip \(|\mathrm{Im}\,\xi|\leq\theta r^{*},|\eta|\leq\theta r^{*}\) into \(|\mathrm{Im}\,x|\leq r^{*},|y|\leq r^{*}\), such that \(u\left(\xi\right)-\xi\) and \(v\left(\xi\right)\) are of period \(1\) in all variables and the Hamiltonian function \(K:=H\circ\phi\) satisfies_
\[K_{\xi}\left(\xi,0\right)=0,\ \ K_{\eta}\left(\xi,0\right)=\omega.\]
_Moreover, \(\phi\) and \(K\) satisfy the estimates_
\[\left|\phi\left(\zeta\right)-\zeta\right|\leq c\delta^{*}\left(1-\theta\right)r^{*},\ \ \left|\phi_{\zeta}\left(\zeta\right)-\mathbb{I}\right|\leq c\delta^{*},\]
\[\left|K_{\eta\eta}\left(\zeta\right)-Q\left(\zeta\right)\right|\leq\frac{c\delta^{*}}{M},\]
\[\left|v\circ u^{-1}\left(x\right)\right|\leq c\delta^{*}{r^{*}}^{\tau+1},\]
_for \(|\mathrm{Im}\,\xi|\leq\theta r^{*},|\eta|\leq\theta r^{*}\), and \(|\mathrm{Im}\,x|\leq\theta r^{*}\)._

## Acknowledgments

This work was supported in part by National Basic Research Program of China (Grant No. 2013CB834100), National Natural Science Foundation of China (Grant No. 12071175, Grant No. 11171132, Grant No. 11571065), Project of Science and Technology Development of Jilin Province (Grant No. 2017C028-1, Grant No. 20190201302JC), and Natural Science Foundation of Jilin Province (Grant No. 20200201253JC).
In this paper, the persistence of KAM invariant tori under sufficiently small perturbations of the Hamiltonian, together with the corresponding regularity, is extended from the finitely smooth setting to regularity characterized by a modulus of continuity. This generalizes the classical Hölder-continuity setting required for applying the KAM theorem. To this end, the Jackson approximation theorem is extended to the case of a modulus of continuity, and a regularity theorem adapted to a new iteration scheme is established. With these tools, a KAM theorem is established under sharp differentiability assumptions, thereby preserving the persistent tori with universal Diophantine frequencies as well as the regularity of the KAM tori.
2309.10863
Extrinsic and intrinsic effects setting viscosity in complex fluids and life processes: the role of fundamental physical constants
Understanding the values and origin of fundamental physical constants, one of the grandest challenges in modern science, has been discussed in particle physics, astronomy and cosmology. More recently, it was realised that fundamental constants have a bio-friendly window set by life processes involving motion and flow. This window is related to intrinsic fluid properties such as energy and length scales in condensed matter set by fundamental constants. Here, we discuss important extrinsic factors governing the viscosity of complex fluids operating in life processes due to collective effects. We show that both extrinsic and intrinsic factors affecting viscosity need to be taken into account when estimating the bio-friendly range of fundamental constants from life processes, and our discussion provides a straightforward recipe for doing this. We also find that the relative role of extrinsic and intrinsic factors depends on the range of variability of these intrinsic and extrinsic factors. Remarkably, the viscosity of a complex fluid such as blood with significant extrinsic effects is not far from the intrinsic viscosity calculated using the fundamental constants only, and we discuss the reason for this in terms of dynamics of contact points between cells.
K. Trachenko, P. G. Tello, S. A. Kauffman, S. Succi
2023-09-19T18:18:43
http://arxiv.org/abs/2309.10863v2
Extrinsic and intrinsic effects setting viscosity in life processes: implications for fundamental physical constants

###### Abstract

Understanding the values and origin of fundamental physical constants, one of the grandest challenges in modern science, has been discussed in particle physics, astronomy and cosmology. More recently, it was realised that fundamental constants have a bio-friendly window set by life processes involving motion and flow. This window is related to intrinsic fluid properties such as energy and length scales in condensed matter set by fundamental constants. Here, we discuss important extrinsic factors governing the viscosity of complex fluids operating in life processes due to collective effects. We show that both extrinsic and intrinsic factors affecting viscosity need to be taken into account when estimating the bio-friendly range of fundamental constants from life processes, and our discussion provides a straightforward recipe for doing this. We also find that the relative role of extrinsic and intrinsic factors depends on the range of variability of these intrinsic and extrinsic factors. Remarkably, the viscosity of a complex fluid such as blood with significant extrinsic effects is not far from the intrinsic viscosity calculated using the fundamental constants only, and we discuss the reason for this in terms of dynamics of contact points between cells.

## I Introduction

Our fundamental theories describing matter and fields contain mathematical structures and about 20 fundamental physical constants such as the Planck constant \(\hbar\), the speed of light in vacuum \(c\), the electron mass \(m_{e}\) and charge \(e\), and other parameters. The values of fundamental physical constants are often listed in reviews and textbooks (see, e.g., Refs. [1; 2; 3; 4]), and their recommended values are maintained and updated in the National Institute for Standards and Technology database [5]. These constants give the Universe its observed character and differentiate it from others we might imagine [1; 2; 6; 7; 8; 9; 10; 11; 12; 13; 14]. The values of fundamental constants are considered arbitrary [12], for we do not know what kind of theories we need to explain the values of fundamental constants and their origins [15]. For this and other reasons, understanding fundamental constants is considered to be among the grandest questions in modern science [16].

Fundamental constants govern a wide range of high-energy processes, starting from cosmology and inflation to nuclear reactions and nuclear synthesis in stars producing carbon, oxygen and other elements which can then form molecular structures essential to life. An interesting observation is that the values of some fundamental physical constants are finely tuned and balanced to give our observable world. More recently, it was proposed that condensed matter physics, and liquid physics in particular, gives new insight into the fundamental constants based on life processes. We observe these life processes and can therefore discuss factors that enable them. Life processes need motion and flow, and dynamic viscosity, \(\eta\), is the central property setting this flow [17]. The minimal viscosity was previously shown to be set by fundamental physical constants [18; 19]. If this minimum is increased as a result of different fundamental constants, liquid viscosity becomes larger at all conditions of pressure and temperature.
Its value corresponding to disabling a life process, \(\eta_{d}\), then puts a limit on bio-friendly values of fundamental constants. A detailed consideration of this process implies that there is a bio-friendly _window_ where fundamental constants can vary to enable life processes in and between living cells [17].

There are several viscosity effects involved in potentially disabling a life process. Examples include an arresting transition corresponding to the explosive increase of the coagulation rate in biological fluids such as protein solutions and blood. This takes place at the critical value of the Péclet number, which depends on viscosity [20]. Next, chemical reaction rates of vital biological processes involving, for example, dynamics of proteins and enzymes, \(k\), vary as \(k\propto\frac{1}{\eta^{n}}\), where \(n\) varies in quite a large range: from 0.3 [21] to 2.4 depending on the reaction (see, e.g., Ref. [22] for a review). Therefore, a viscosity increase affects different reaction rates differently and disrupts the required balance between products of different reactions and important interactions between those products. Depending on the degree and nature of this disruption, the result can either be finding a new functioning and sustainable balance during life development (and hence a type of life different from ours) or not finding a sustainable living state at all.

The large increase of the coagulation rate is only one example of extrinsic effects related to fluid flow. More generally, both equilibrium and transport phenomena in soft condensed matter are governed by effective equations of state and transport coefficients, reflecting collective interactions with the environment. This comes in addition to intrinsic molecular values and involves strong nonlinearities [23; 24; 25]. An important question is then how intrinsic and extrinsic effects compare under different conditions and how they can be combined to understand (a) viscosity at the fundamental level and (b) implications for fundamental constants.

In this paper, we discuss these extrinsic effects and find that both extrinsic and intrinsic factors affecting viscosity need to be taken into account when estimating the bio-friendly range of fundamental constants from life processes. Our discussion provides a straightforward recipe for doing this. We also find that the relative role of extrinsic and intrinsic factors depends on the range in which these intrinsic and extrinsic factors vary. Remarkably, the viscosity of a complex fluid such as blood with significant extrinsic effects is not far from the intrinsic viscosity calculated using the fundamental constants only, and we discuss the reason for this in terms of dynamics of contact points between cells.

## II Intrinsic effects

We briefly recall the intrinsic effects involved in setting the minimal viscosity of fluids [17; 18; 19]. The basis for discussing this minimal viscosity has been theoretical. This is a recent result, which may come across as surprising in view of the long history of research into viscosity and, more generally, the theory of liquids. For this reason, we briefly expand on this point and recall the fundamental nature of problems involved in liquid theory. As discussed by Landau, Lifshitz, Pitaevskii and others, the arresting problems involved in liquid theory combine (a) strong interatomic interactions coupled with dynamical disorder and (b) the absence of a small parameter [26; 27; 28].
For this reason, no general theory of the most basic thermodynamic properties of liquids, such as energy and heat capacity, was thought to be possible (according to Peierls [29], Landau had always maintained that a theory of liquids is impossible). Whereas calculating generally-applicable thermodynamic properties such as energy and heat capacity and their temperature dependence is an essential part of theories of solids and gases, deriving such general relations was considered impossible in liquids [26; 27; 28; 29]. This also presented a persisting problem for teaching liquids to students [30].

At the fundamental level, the physics of an interacting system is set by its excitations or quasiparticles [26; 31]. In solids, these are phonons. The nature of phonons and their properties in liquids remained unknown for a long time. Remarkably, Sommerfeld [32] and Brillouin [33; 34; 35; 36; 37] considered that the liquid energy and thermodynamic properties are fundamentally related to phonons as in solids, and aimed to develop a liquid theory on the basis of a modified Debye theory of solids. This was around the time when the basis for the modern solid state theory was set: the first Sommerfeld paper was published 1 year after the Debye theory of solids [38] and 6 years after Einstein's paper [39]. The nature and operation of phonons in liquids was not clear at the time and turned out to be a formidable problem that continues to be actively researched today [40].

A combination of experiments, theory and modelling in the last 15 years or so led to an understanding of phonon propagation in liquids with an important attribute: the phase space available to these phonons is not fixed as in solids but is instead _variable_ [41; 42; 43; 40; 44]. This is a non-perturbative effect. In particular, the phase space reduces with temperature, and this has a general implication for most important liquid properties. For example, the calculated specific heat of classical liquids universally decreases with temperature, in quantitative parameter-free agreement with a wide range of experimental data [41; 42; 43; 40].

This recent new understanding of liquids has brought about the concept of the minimal quantum viscosity and its relation to fundamental physical constants [17; 18; 19]. The minimal kinematic viscosity, \(\nu_{min}\), is set by two parameters characterising a condensed matter phase, the interatomic separation \(a\) and the Debye vibration frequency \(\omega_{\rm D}\), as:
\[\nu_{min}=\frac{1}{2\pi}\omega_{\rm D}a^{2} \tag{1}\]
Relating \(a\) to the Bohr radius
\[a_{\rm B}=\frac{4\pi\epsilon_{0}\hbar^{2}}{m_{e}e^{2}} \tag{2}\]
where \(e\) and \(m_{e}\) are electron charge and mass, and \(\omega_{\rm D}\) to the characteristic cohesive energy set by the Rydberg energy
\[E_{\rm R}=\frac{m_{e}e^{4}}{32\pi^{2}\epsilon_{0}^{2}\hbar^{2}} \tag{3}\]
gives
\[\nu_{min}=\frac{1}{4\pi}\frac{\hbar}{\sqrt{m_{e}m}} \tag{4}\]
where \(m\) is the molecule mass [18]. \(\nu\) and its minimal value in Eq. (4) govern the time-dependent non-equilibrium flow. \(\nu\) also sets the Reynolds number and the Kolmogorov scale of turbulence. The steady flow is set by the dynamic viscosity \(\eta\). The minimum of \(\eta\), \(\eta_{min}\), can be evaluated as \(\eta_{min}=\nu_{min}\rho\), where \(\rho\) is the density, \(\rho\approx\frac{m}{a_{\rm B}^{3}}\).
Assuming \(m=Am_{p}\), where \(A\) is the atomic mass number, and setting \(A=1\) for the purpose of the following discussion, this gives
\[\eta_{min}\propto\frac{e^{6}}{\hbar^{5}}\sqrt{m_{p}m_{e}^{5}} \tag{5}\]
Another useful property is the diffusion constant \(D\). Using Eq. (5) and the Stokes-Einstein relation \(D=\frac{k_{B}T}{6\pi r\eta}\), where \(r\) is the radius of a moving particle, gives
\[D_{max}\propto\frac{1}{\eta_{min}}\propto\frac{\hbar^{5}}{e^{6}}\frac{1}{\sqrt{m_{p}m_{e}^{5}}} \tag{6}\]
We see that the intrinsic effects affecting liquid flow, the minimal values of kinematic and dynamic viscosity and the maximal diffusion constant, are related to the length and energy scales involved in breaking chemical bonds in the liquid. These scales are relatable to fundamental constants, and so are \(\nu_{min}\), \(\eta_{min}\) and \(D_{max}\).

## III Extrinsic effects: blood flow as a case study

The extrinsic effects affecting liquid flow include those operating in complex fluids such as, for example, blood. Blood flow delivers nutrients in any organism and is therefore related to metabolism, the essence of life [45], along with genetics. Blood is not a simple fluid with a given viscosity, but a dense suspension of red cells, platelets and plasma particles [46; 47; 48; 49] whose _effective_ viscosity is affected by collective many-body effects. As a result, blood is a non-Newtonian fluid, which does not deform in linear proportion to the stress acting upon it. Stated differently, the effective viscosity is a property of the _flow process_, not of the fluid substance.

We briefly recall some of the basic extrinsic properties. A Newtonian fluid under the Couette flow (flow between two oppositely moving flat plates) obeys the following relation:
\[F_{x}=\mu_{0}A\frac{\partial u_{x}}{\partial y} \tag{7}\]
where \(F_{x}\) is the force along the mainstream direction \(x\), \(A\) is the cross section perpendicular to the flow, \(u_{x}\) is the mainstream flow speed, \(\mu_{0}=\rho\nu_{0}\) is the bare dynamic viscosity and \(\rho\) is the density. For a simple fluid, \(\mu_{0}\) is a numerical coefficient (at a given temperature), whose value can be traced to the quantum diffusivity \(D_{q}\equiv\hbar/m\).

The volume fraction of red cells in the blood (hematocrit) is about \(\phi\sim 0.45\), which qualifies blood as a dense suspension in which the average gap between two cells is much smaller than their diameter. Consider a sphere of diameter \(D\) located in the center of a cubic box of side \(L\). The volume fraction of the sphere in this simple-cubic configuration is \(\phi=\frac{\pi}{6}\frac{D^{3}}{L^{3}}\). The gap between two spheres is \(h=L-D\), hence \(h/D=L/D-1=\left(\pi/\left(6\phi\right)\right)^{1/3}-1\). With \(\phi\sim 0.45\), this gives \(h/D\sim 0.05\), implying that cells are constantly in near-touch (\(h\sim 500\) nm for the case in point) so that their motion is strongly affected by many-body effects. In particular, at a low shear rate, say \(S=0.1\) 1/s, red cells tend to aggregate, forming clusters which hinder fluidity. Upon increasing the shear rate, clusters break up and blood flows more easily: blood is a so-called shear-thinning fluid. This means that viscosity no longer depends on intrinsic effects and fundamental constants only, but acquires an additional nonlinear dependence on the shear \(S=\partial_{y}u_{x}\).
This nonlinearity is typically expressed by a power-law relation of the form:
\[\eta(S)=\frac{\eta_{0}}{(1+S/S_{0})^{\alpha}} \tag{8}\]
where \(S_{0}\) is the threshold above which the nonlinear behavior is exposed, \(\alpha\) is a characteristic exponent (typically around 1/3) and \(\eta_{0}\) is related to the intrinsic viscosity corresponding to no shear effects. As an example of the magnitude of this effect, an excursion of three decades in the value of \(S\) leads to a factor-of-ten change in the effective viscosity. The same blood, showing a viscosity of 60 cP at \(S=0.1\) 1/s, can lower its viscosity down to 6 cP at \(S=200\) 1/s. In Figure 1, we show the best fit of actual physiological data from reference [46] using expression (8) with the following numerical values: \(\eta_{0}=75\) cP, \(S_{0}=0.1\) and \(\alpha=0.35\). The fit reproduces the experimental data quite accurately.

Figure 1: Best fit to blood viscosity (at rest) using expression (8), with \(\eta_{0}=75\) cP, \(S_{0}=0.1\) and \(\alpha=0.35\).

In addition to the above example, collective effects in dense suspensions can lead to a host of complex rheological behaviours, including yield-stress (no flow below a minimum stress threshold), nonlocal effects in space and time, hysteresis, jamming and many others, including crucial effects of deformability for many metabolic functions. For example, local accumulation of red blood cells due to physiological imperfections can drive an untamed rise of viscosity, a situation known as clogging or jamming, as mentioned in the Introduction. In this case, the effective viscosity obeys a similar law:
\[\eta=\frac{\eta_{0}}{(1-\rho/\rho_{c})^{\beta}} \tag{9}\]
where \(\rho_{c}\) is the critical density at which the viscosity formally diverges, \(\beta\) is the corresponding critical exponent and \(\eta_{0}\) is related to the intrinsic viscosity in the absence of jamming effects.

## IV Intra-cellular and anomalous diffusion

Similar effects operate for diffusion inside cells. Here, the direct analogue of the Newtonian fluid is Fick's law, which is a statement of linearity between the density gradient and the resulting mass flux \(J_{x}\):
\[J_{x}=-D_{0}\frac{\partial\rho}{\partial x} \tag{10}\]
Fick's law holds as long as the gradients are sufficiently small to silence nonlinear effects and the density is sufficiently below the jamming transition. The cell is a complex and crowded environment, in which molecular motion can hardly abide by the simplicity of Fick's law [50]. Molecules crawl in the cytoplasm, collide with obstacles and eventually get trapped in metastable states until they are released again. Mechanisms of diffusion and transport in cells include molecular motors [51]. In these diffusive processes, the bare diffusivity \(D_{0}\) is replaced by a density-dependent effective diffusivity \(D=D(\rho)\), typically a strongly nonlinear function, eventually vanishing at a critical value, \(D(\rho\rightarrow\rho_{c})\to 0\) (jamming), consistent with the Stokes-Einstein relation:
\[D=\frac{k_{B}T}{6\pi R\eta} \tag{11}\]
where \(R\) is the radius of the sphere floating in a solvent of viscosity \(\eta\).

Collective effects can also extend beyond the density-dependence of the diffusion coefficient and alter the nature of the diffusion process itself, turning it into so-called anomalous diffusion. The distinctive trait of standard diffusion is a square-root dependence on time of the mean displacement in space, \(\langle\delta x^{2}\rangle=D\delta t\).
Anomalous diffusion generalizes this relation to a generic exponent \(p\), namely \(\langle\delta x^{2}\rangle=D_{p}\delta t^{p}\). Standard diffusion is recovered for \(p=1\), while \(1<p<2\) corresponds to super-diffusion and \(0<p<1\) to sub-diffusion, respectively. The anomalous diffusion exponent \(p\) encapsulates complex transport phenomena, resulting in both faster (super) and slower (sub) dynamics than standard diffusion. Generally speaking, this is the result of cross-correlations between the moving molecules and their environment; constructive/destructive correlations promote super-/sub-diffusion, respectively. On the one hand, cytoskeleton density hinders the free displacement of the particle, leading to subdiffusion. On the other hand, the cytoskeleton elasticity combined with thermal bending contributes to superdiffusion. We note that in this case the diffusion coefficient as we know it from random walk theory, the limit of the ratio \(\delta x^{2}/\delta t\) as \(\delta t\to 0\), is no longer a well-posed physical quantity. Indeed, this limit returns zero and infinity for super- and sub-diffusion, respectively. The correct limit is instead \(D_{p}=\delta x^{2}/\delta t^{p}\), which no longer has the dimensions of a diffusion coefficient, length squared over time. We see that anomalous transport, a hallmark of (intra)cellular transport, represents another type of extrinsic effect setting transport properties in life processes.

## V Sensitivity and fine-tuning

Fine-tuning of fundamental constants refers to the relatively small variation of constants above which an essential physical process (e.g., stability of protons and neutrons, stellar processes needed for synthesis of heavy elements, carbon production as a result of the Hoyle resonance and so on) is disabled [1; 2; 6; 7; 8; 9; 10; 11; 12; 13; 14]. For some processes and relevant fundamental constants, these variations range from a few per cent down to fractions of a per cent. This fine-tuning originates from our physical models where a property is a fast-varying function of fundamental constants or their combinations, so that small changes of fundamental constants imply large property changes [6].

We can now compare how sensitive viscosity is to variations of intrinsic and extrinsic effects. The minimal viscosity in Eq. (5) and the intrinsic effects are set by
\[\eta_{min}\propto e^{6}\hbar^{-5}m_{p}^{1/2}m_{e}^{5/2} \tag{12}\]
and
\[\eta(S)\propto\eta_{0}(S/S_{0})^{-1/3} \tag{13}\]
By way of illustration, a factor of 10 change in the minimal viscosity requires a \(10^{3}\) change in the shear rate, a change of \(10^{1/6}\sim 1.5\) in the electron charge, \(10^{-1/5}\sim 0.6\) in the Planck constant, \(10^{2}\) in the proton mass and \(10^{2/5}\sim 3\) in the electron mass. Conversely, a small change of, say, 0.01 of the electron charge would cause a small, \((1+0.01)^{6}-1\approx 0.06\), change of viscosity. The same change corresponds to a change \(x\) of the shear rate given by \((1+x)^{-1/3}=1.06\), \(x\sim-0.18\). This is 18 times higher but still a comparatively small change and is within the physiological range of variation of the shear rate.

The condition for extrinsic and intrinsic effects to be comparable is
\[(1+\epsilon_{f})^{\alpha_{f}}=(1+\epsilon_{e})^{\alpha_{e}} \tag{14}\]
where \(\epsilon_{f,e}\) is the relative change of the intrinsic fundamental and effective couplings, respectively, and \(\alpha_{f,e}\) are the corresponding exponents.
If both \(\epsilon\)'s are well below 1, the above relation simplifies to:
\[\epsilon_{e}\sim\epsilon_{f}\frac{\alpha_{f}}{\alpha_{e}} \tag{15}\]
showing that the ratio of the changes is dictated by the ratio of the exponents. This implies that even a large ratio such as 18 still keeps \(\epsilon_{e}\) sufficiently small to be easily realizable via environmental changes.

As an interim summary, we saw that both extrinsic and intrinsic factors affecting viscosity need to be taken into account when estimating the bio-friendly range of fundamental constants from life processes. We also saw that the relative role of extrinsic and intrinsic factors depends on the range in which these intrinsic and extrinsic factors vary. This range can be different in different life processes involving flow and hence needs to be addressed separately.

## VI Viscosity of complex fluids and memory of fundamental constants

Blood is particularly significant in our discussion because, compared with water (a complex but still molecular fluid), it involves a much higher level of physiological organization: red blood cells (RBC) are not molecules but highly organized microscale biological structures. In this respect, blood might be expected to be largely governed by extrinsic effects. It is then instructive to consider the value of blood viscosity at rest, which we take to be of the order of 1 P = 1 g/(cm \(\cdot\) s). Given the blood density of about 1 gram per cc, this gives a kinematic viscosity \(\nu_{0}=10^{-4}\) m\({}^{2}\)/s. This is 15 orders of magnitude above what might be expected from evaluating viscosity using the molecular mass [17] as \(\hbar/M_{RBC}\sim 10^{-19}\) m\({}^{2}\)/s, where \(M_{RBC}\) is the mass of the cell, but only about two orders of magnitude above the fundamental kinematic viscosity \(\nu_{f}=\hbar/\sqrt{m_{e}m_{p}}\sim 10^{-6}\) m\({}^{2}\)/s. By taking the viscosity to be 10 cP instead of 1 P, the mismatch is just one order of magnitude. We note that the fundamental kinematic viscosity corresponds to the viscosity minimum [18], whereas the observed viscosity can be higher depending on temperature and pressure. This is consistent with the observed blood viscosity being higher than the fundamental viscosity.

The closeness between the observed blood viscosity and the theoretical fundamental viscosity is remarkable because it shows that even in a highly complex mesoscale structure such as a red blood cell, which contains about one trillion protons organized across many layers of biological and physiological complexity far above the quantum level, the bare kinematic viscosity still carries a memory of fundamental physical constants, regardless of the large mass of the red blood cell, which is clearly a classical macroscopic body from the standpoint of quantum mechanics.

A tentative explanation can be discussed as follows. The kinematic viscosity is a collective property emerging from underlying molecular interactions, as reflected by the relation:
\[\nu=\lambda v_{th} \tag{16}\]
where \(v_{th}=\sqrt{k_{\rm B}T/m}\) is the thermal speed, \(\lambda=v_{th}\tau\) is the mean free path (scattering length) and \(\tau\) is the mean collision time. Classical physics emerges from quantum in the limit where a characteristic distance, for example the mean free path, is much larger than the De Broglie length:
\[\lambda_{B}=\frac{\hbar}{mv_{th}}\ll\lambda \tag{17}\]
If the two scales are comparable, quantum mechanics is expected to be relevant to the fluid viscosity.
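These order-of-magnitude comparisons are straightforward to reproduce. The short script below is our own numerical cross-check, not part of the paper: it uses CODATA values for the constants, takes \(M_{RBC}\) as the text's figure of about one trillion proton masses, and assumes a typical red-blood-cell diameter of about 8 \(\mu\)m to convert the relative gap of Section III into an absolute one.

```python
import math

# CODATA-recommended values (SI units)
hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
m_p = 1.67262192369e-27   # proton mass, kg

# Fundamental kinematic viscosity nu_f = hbar / sqrt(m_e m_p)
nu_f = hbar / math.sqrt(m_e * m_p)
print(f"nu_f        = {nu_f:.2e} m^2/s")      # ~2.7e-6, i.e. the ~1e-6 quoted above

# Blood at rest: eta ~ 1 P = 0.1 Pa s, density ~ 1e3 kg/m^3
nu_blood = 0.1 / 1.0e3
print(f"nu_blood    = {nu_blood:.1e} m^2/s")  # 1e-4: ~2 decades above nu_f

# 'Molecular' estimate hbar/M_RBC, with M_RBC taken as ~1e12 proton masses
# (the text's 'about one trillion protons')
M_RBC = 1e12 * m_p
print(f"hbar/M_RBC  = {hbar / M_RBC:.1e} m^2/s")  # ~6e-20: ~15 decades below nu_blood

# Length scale nu_f / v_th at the thermal speed v_th ~ 1e3 m/s
v_th = 1.0e3
print(f"nu_f / v_th = {nu_f / v_th:.1e} m")   # ~1e-9 m, nanometre scale

# Intercellular gap at hematocrit phi = 0.45 in the simple-cubic picture of Sec. III
phi = 0.45
h_over_D = (math.pi / (6.0 * phi)) ** (1.0 / 3.0) - 1.0
D_rbc = 8.0e-6  # assumed typical RBC diameter, m
print(f"h/D = {h_over_D:.3f}, h = {h_over_D * D_rbc * 1e9:.0f} nm")  # ~0.05, ~400-500 nm
```

The printed values reproduce the decades quoted in this section and the near-touch gap estimate of Section III.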
Let us consider the fundamental De Broglie wavelength defined as:
\[\lambda_{f}=D_{f}/v_{th} \tag{18}\]
At standard temperature, \(v_{th}\sim 10^{3}\) m/s, and \(\lambda_{f}\sim 10^{-9}\) m, which is comparable to the mean scattering distance (mean free path) in water. As far as blood is concerned, the notion of molecular collisions must be extended in such a way as to account for body contacts, whose frequency is governed by the intercellular gap \(h\) discussed earlier in this paper. Let us rewrite the intercellular gap in general terms as
\[h/D=\left(\frac{\phi_{pack}}{\phi}\right)^{1/3}-1=(1+\xi)^{1/3}-1\sim\xi/3 \tag{19}\]
where \(\xi\equiv(\phi_{pack}-\phi)/\phi\) and \(\phi_{pack}\) is the packing fraction of the blood configurations. As discussed earlier, \(\phi\sim 0.45\) on average, while \(\phi_{pack}\) may change depending on the local configuration, as well as on the shape of the RBC's (ellipsoids, discoids). We observe that \(\phi\) and \(\phi_{pack}\) are close enough to develop intercellular gaps of the order of a few nanometers, hence comparable with \(\lambda_{f}\). These "near-contact" interactions are well known to play a crucial role in shaping up the mechanical and rheological properties of a large variety of soft materials [52; 53], including the ones relevant to the human body.

The fact that mesoscale objects like cells interact on nanometric scales, which act as an effective mean free path in a statistical mechanics description of their transport properties, offers a plausible reason why their kinematic viscosity is affected by intrinsic effects and quantum mechanics in particular. This is consistent with our earlier discussion in Section II: the intrinsic viscosity is set by fundamental condensed matter properties, the length scale \(a_{\rm B}\) in Eq. (2) and the energy scale \(E_{\rm R}\) in Eq. (3). These two properties are essentially quantum-mechanical and do not have a sensible limit \(\hbar\to 0\). The motion of a complex fluid such as blood involves breaking and forming bonds at the contacts between cells. Hence, the viscosity of a complex fluid with significant extrinsic effects such as blood is nevertheless affected by intrinsic effects and fundamental constants acting at the contact points.

## VII Summary

We discussed both intrinsic and extrinsic contributions to viscosity which are at operation in life processes. A direct inspection of the physiological values of blood viscosity at rest reveals a remarkable near-match with the fundamental kinematic viscosity. This implies that, notwithstanding the four decades of separation in spatial scale and the many layers of biological and physiological complexity of red blood cells, the kinematic viscosity of blood still keeps a clear memory of the values of fundamental physical constants. A possible explanation for this remarkable property is that the near-contact interactions between RBC's occur at scales comparable with the fundamental De Broglie wavelength.

Our results give the following recipe to calculate the constraints on fundamental constants from life processes. We can identify several most important life processes where viscosity sets the motion central to each process. Let \(\eta_{d}\) be the upper value of viscosity above which a life process is disabled. Mechanisms for such a disabling can vary and include, for example, a transition corresponding to the explosive increase of the coagulation rate in biological fluids such as protein solutions and blood. We can then use the equations discussed here, such as Eq.
(8), to account for the extrinsic effects. This will result in constraints on the intrinsic (bare) viscosity \(\eta_{0}\) in Eq. (8). The constraints on fundamental constants will then follow from Eqs. such as (4) and (5) and, more specifically, from the accompanying inequalities setting the bio-friendly window for fundamental constants [17].

We are grateful to L. Noirez, U. Windberger and A. Zaccone for discussions and to EPSRC for support. S. S.'s research was supported by the ERC-PoC grant Droptrack (Fast and automated droplet tracking tool for microfluidics, contract n. 101081171).
Understanding the values and origin of fundamental physical constants is one of the grandest challenges in modern science and has been discussed in particle physics, astronomy and cosmology. More recently, it was found that fundamental constants have a bio-friendly window related to the motion and flow involved in life processes. This window is related to intrinsic properties such as the energy and length scales in condensed matter. Here, we discuss important extrinsic factors governing the viscosity of the complex fluids operating in life processes, and we clarify how extrinsic and intrinsic factors act in setting the viscosity of complex fluids. We show that both extrinsic and intrinsic factors need to be included when constraining fundamental constants from life processes, and this discussion provides a straightforward recipe for doing so. We also find that the relative role of extrinsic and intrinsic factors depends on the range over which these factors vary.
2310.20633
Defining a New NLP Playground
The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history. This has resulted in concerns that the field will become homogenized and resource-intensive. The new status quo has put many academic researchers, especially PhD students, at a disadvantage. This paper aims to define a new NLP playground by proposing 20+ PhD-dissertation-worthy research directions, covering theoretical analysis, new and challenging problems, learning paradigms, and interdisciplinary applications.
Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi R. Fung, Charles Yu, Joel R. Tetreault, Eduard H. Hovy, Heng Ji
2023-10-31T17:02:33
http://arxiv.org/abs/2310.20633v1
# Defining a New NLP Playground

###### Abstract

The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history. This has resulted in concerns that the field will become homogenized and resource-intensive. The new status quo has put many academic researchers, especially PhD students, at a disadvantage. This paper aims to define a new NLP playground by proposing 20+ PhD-dissertation-worthy research directions, covering theoretical analysis, new and challenging problems, learning paradigms, and interdisciplinary applications.

## 1 Introduction

It is the best of times. It is the worst of times. We are living in an incredibly exciting yet strange era of Natural Language Processing (NLP) research due to the recent advancements of large language models (LLMs) on various data modalities, from natural language Brown et al. (2020) and programming language Chen et al. (2021); Wang et al. (2023) to vision Radford et al. (2021); Li et al. (2022); Wang et al. (2022) and molecules Edwards et al. (2022); Zeng et al. (2022); Su et al. (2022). At the core, LLMs produce text sequences word-by-word by computing conditional probabilities based on context. At a sufficiently large scale, they can answer questions, generate arguments, write poetry, impersonate characters, negotiate contracts and achieve competitive results across a wide variety of standard NLP tasks including entity typing, sentiment analysis, and textual entailment, showcasing "emergent behavior" such as in-context learning Wei et al. (2022).

However, this "moment of breakthrough" received a polarized response in the NLP research community: while some welcomed the progress, others felt lost. Why is NLP so vulnerable to a single advancement? In retrospect, when NLP adopted the machine learning paradigm in the early 1990s, it started along a journey that led to increased homogeneity. The dominant methodology became: (1) Identify a challenge problem or task; (2) Create a dataset of desired input-output instances; (3) Select or define one or more evaluation metrics; and (4) Develop, apply, and refine machine learning models and algorithms to improve performance. If a challenge did not support the creation of a dataset (e.g., text styles of people in different professions) or metric (e.g., summaries of novels or movies), or worse yet if it was not amenable to a machine learning solution, then mainstream NLP simply did not address it. For a long time, NLG was in this position because its starting points --semantic representations-- were neither standardized, nor easy to produce at scale, nor amenable to direct evaluation. No dataset, no metric -- little attention. Yet multi-sentence NLG starting with deep semantic input, and with output tailored to different audiences, is arguably the most complex task in NLP, since it involves so many aspects of linguistic communication together. As such, it surely deserved the concentrated effort that NLP has bestowed on MT, Speech Recognition, QA, and other major challenges in the past.

Suddenly, within the space of a few months, the landscape changed. NLP encountered an engine that seemingly could do everything the field had worked on for decades. Many subtasks in NLP seemed to become irrelevant overnight: Which grammar formalism to parse into? Which rhetorical structure and focus control model for multi-sentence coherence?
Which neural architecture is optimal for information extraction or summarization? None of that matters if the magical engine can do the entire end-to-end language-to-language task seamlessly Sanh et al. (2022); OpenAI (2023). Dozens of Ph.D. theses lost their point, because their point was a small step in the process that no longer seemed needed. The dominant paradigm is also challenged: instead of setting up benchmarks and then developing models accordingly, people started discovering new abilities of such models (Bubeck et al., 2023) (who knew that LLMs could draw unicorns using TikZ?).

An important constraint is the practicality of the goal. This newer generation of LLMs is beyond the practical reach of all but a small number of NLP researchers. Unless one of the organizations building LLMs provides free access for research --an unlikely occurrence given the estimated six-figure monthly expense to run one-- or a procedure is developed to construct university-sized ones cheaply, the academic NLP community will have to be quite creative in identifying things that either generative LLMs cannot do _in principle_ or applications that can be built without re-training them and at the same time are important and doable _in practice_.

Inspired by the efforts of a group of PhD students (Ignat et al., 2023), we believe it would be a valuable exercise to define new research roadmaps. We believe that while LLMs seemingly close research avenues, they also open up new ones. Current LLMs remain somewhat monolithic, expensive, amnesic, delusional, uncreative, static, assertive, stubborn, and biased black boxes. They still have a surprising deficiency (near-random performance) in acquiring certain types of knowledge (Wang et al., 2023f), in knowledge reasoning, and in prediction. In this paper, we aim to define a new NLP playground by proposing a wide range of PhD-dissertation-worthy research directions to democratize NLP research again. In particular, we cover observations and suggestions along the perspectives of LLM theory (Section 2), challenging new tasks (Section 3), important but understudied learning paradigms (Section 4), proper evaluation (Section 5), and interdisciplinary applications (Section 6).

## 2 Theoretical Analysis of LLMs

There is a growing necessity to open the black box of machine learning models through theoretical analysis. In this section, we advocate for both **mathematical** theories of LLMs (obtained by mathematical analysis) and **experimental** ones (obtained by inducing rules and laws, such as those of Ghorbani et al. (2021); Hoffmann et al. (2022), from extensive experimental observations).

### Mechanism Behind Emergent Abilities

LLMs have displayed impressive emergent capabilities such as instruction following, chain-of-thought reasoning, and in-context learning (Brown et al., 2020; Wei et al., 2022; Min et al., 2022; Logan IV et al., 2022; Wei et al., 2021). For example, the ability of **instruction following** enables models to follow novel instructions. For guidance on prompting beyond heuristics, we need a comprehensive understanding of how instructions work. Some initial theories suggest an explanation through Bayesian inference (Jiang, 2023), which relies on strong assumptions and offers few practical insights. Here we advocate for theories on the feasibility of constraining or measuring models' deviation from instructions.
A multi-player setting is also important, where one user's prompt is composed with another player's prompt (such as OpenAI's hidden meta instruction) before being fed into LLMs, in which case additional security issues might arise for the first user.

**Chain-of-thought (CoT)** reasoning is where LLMs tackle complex tasks by generating solutions in a sequential, step-by-step manner. CoT theoretically enhances the computational capacity of Transformer-based models to solve problems exceeding \(\mathcal{O}(n^{2})\) complexity. While some constructive explanations have been suggested (Feng et al., 2023a), they are not fully validated as the underlying mechanism. Importantly, it is worth investigating the verifiability problem of the reasoning chain (whether CoT can be trusted as a valid logic chain) and its calibration (whether LLMs formulate ad-hoc CoTs for arbitrary conclusions).

**In-context learning (ICL)**, where LLMs learn from demonstration examples in-context without parameter updates, has seen explanations based on gradient descent (Akyurek et al., 2022; von Oswald et al., 2022), kernel regression (Han et al., 2023a) or Bayesian inference (Xie et al., 2023; Jiang, 2023). Important challenges remain and necessitate more comprehensive explanations, such as sensitivity to example order and robustness to perturbed input-output mapping. We hypothesize that a deeper understanding of how LLMs balance algorithmic solutions with implicit language inference can help clarify these questions, which might be approachable by exploring how LLMs disentangle semantic and functional information.

**Model-specific vs. Model-agnostic** is a persistent gap among explanations, raising the question of whether the emergent abilities depend on the Transformer architecture or arise simply from fitting the pre-training data. With some recent work suggesting that other architectures achieve comparable performance in some domains Peng et al. (2023); Zhai et al. (2021), this open question is important for prioritizing among model design (including other architectures), prompt engineering, and simply carefully collecting larger datasets. To bridge this gap, we also advocate for theoretical frameworks beyond (mixtures of) HMMs to better model the properties of language data.

### Theoretical Robustness and Transparency

**Robustness** means ensuring that no backdoor designs or adversarial usages can be easily implemented in the model. Although not a novel problem by definition, this issue has new implications and formulations in the LLM era. In a situation where most users do not have access to the pre-training and model-editing details, we call for research into robustness diagnosis _for an arbitrary given LLM_. Despite negative evidence suggesting it may be nearly impossible to prevent adversarial prompting under certain conditions Wolf et al. (2023), we maintain a positive outlook and hope that this result can potentially be overturned under more realistic conditions, such as high computational complexity in searching for adversarial prompts.

**Transparency** in LLMs is concerned with alignment between the model's self-explanations and its internal computational rationale. With empirical studies suggesting that LLMs may not always accurately express their "thoughts" Turpin et al. (2023), computational modeling of LLM intentions becomes essential. The quest for transparency is important for preventing LLMs from generating misleading rationales to humans.
We advocate for establishing both positive and negative theorems on counteracting false rationales under different conditions, along with examining associations between "faithfulness" modes and neuron activities in specific architectures.

## 3 New and Challenging Tasks

### Knowledge Acquisition and Reasoning

**Knowledge inside LLMs:** The black-box property of LLMs poses a significant challenge when it comes to evaluating implicit knowledge within the model. Initial studies have been conducted to elicit/identify Cohen et al. (2023); Shin et al. (2020); Petroni et al. (2019, 2020); Fung et al. (2023); Gudibande et al. (2023); Li et al. (2023) and localize/edit knowledge Dai et al. (2021); Meng et al. (2022); Zhu et al. (2020); Mitchell et al. (2022); De Cao et al. (2021); Hase et al. (2023); Meng et al. (2022); Mitchell et al. (2022). However, our understanding of the knowledge organization within language models (_where_ and _how_ knowledge is stored) is still limited, and it remains uncertain whether full comprehension is achievable. Moreover, existing studies primarily focus on factual or commonsense knowledge, overlooking more complex knowledge such as rules of inference Boolos et al. (2002).

**Large-Scale Knowledge Reasoning:** LLMs have demonstrated promising performance across various reasoning tasks Dua et al. (2019); Miao et al. (2020); Cobbe et al. (2021); Yu et al. (2020); Bhagavatula et al. (2020); Talmor et al. (2019) when appropriately prompted, such as through the use of Chain-of-Thought Wei et al. (2022); Chowdhery et al. (2022); Xue et al. (2023); Diao et al. (2023); Wang et al. (2023); Paul et al. (2023) or Program-of-Thought Chen et al. (2022). However, current reasoning benchmarks Cobbe et al. (2021); Ling et al. (2017); Patel et al. (2021); Hosseini et al. (2014); Miao et al. (2020); Koncel-Kedziorski et al. (2016); Talmor et al. (2019); Geva et al. (2021) focus on reasoning with small-scale context, typically consisting of hundreds of words. This level of reasoning falls short when tackling complex tasks, such as scientific research, which demands knowledge from extensive volumes of related literature and domain-specific knowledge bases. Retrieval augmentation Guu et al. (2020); Khandelwal et al. (2020); Borgeaud et al. (2022); Izacard et al. (2022); Lai et al. (2023) serves as a powerful tool for integrating large-scale contextual knowledge into language models. However, current retrieval methods predominantly rely on semantic similarities, while humans possess the _accommodative_ learning Illeris (2018) ability to draw inspiration from semantically dissimilar knowledge and transfer it to the target task. To achieve this, we not only need to extend the input context length, but also to understand how models organize knowledge and to develop more effective knowledge representations and evaluation metrics (Section 5).

**Faithfulness and Factuality:** Ensuring the truthfulness of generation output requires optimal utilization of internal knowledge within the model and external knowledge, which includes the input context, knowledge bases, and open web resources. Access to external knowledge typically relies on the success of information retrieval Lewis et al. (2020); He et al. (2023); Yu et al. (2023, 2023), information extraction Wen et al. (2021); Huang et al. (2023), grounded generation Li et al. (2021, 2022); Gao et al. (2023); Weller et al. (2023); Lai et al. (2023) and knowledge-augmented generation Petroni et al. (2020); Geva et al. (2023).
Internal knowledge involves the implicit parametric knowledge stored within the model, whose correction and refinement are limited to the inference stage (Lee et al., 2022; Meng et al., 2022a,b; Chen et al., 2023). To effectively minimize hallucination and correct factual errors, it is crucial not only to decipher how knowledge is interpreted through model parameter patterns, but also to understand how the model pieces knowledge together and governs the underlying logic during generation. A significant challenge in knowledge-guided generation is defining an appropriate knowledge representation that supports both complex structures and distributed representations. We believe this representation should combine the strength of symbolic reasoning, to minimize unwarranted inferences, with the flexibility of distributed representations, to encode any semantic granularity. Drawing insights from misinformation detection and knowledge-comparative reasoning systems could also provide a useful dimension of signals for improving faithfulness and factuality (Liu et al., 2021; Fung et al., 2021; Wu et al., 2022, 2023).

### Creative Generation

Although people have long envisioned using models for creative writing, this has only recently become a reality, now that language generation models can reliably produce fluent text. Compared to the previous sections, where generated text is a vehicle for knowledge, creative use cases focus more on the style or form of language and encourage open-ended output.1

Footnote 1: In this section we limit our scope to applications of text generation; however, we fully acknowledge the potential of multi-modal creative generation, such as generating personal avatars, movie clips, and 3D scenes.

**Creative Writing Assistance.** Since language models offer conditional generation ability out of the box, they have been adopted by many people in the creative industry as brainstorming or research tools (Kato and Goto, 2023; Gero et al., 2023; Halperin and Lukin, 2023). One key challenge for such tools is promoting creative generation instead of generating the most probable continuation, which is what language models were trained for. Writers have observed that current LMs over-rely on clichés or tropes and produce overly moralistic and predictable endings (Chakrabarty et al., 2024). While the plot should be unexpected, details in the story should not go against common sense (unless that is part of the setting) and should maintain consistency within the story. This requires a model that enables controllability over the level of creativity in its output. Do we need to train a more creative model, or can we fix the problem at the inference stage (see the sampling sketch at the end of this passage)? On the other hand, the focus on detoxifying LMs through RLHF (reinforcement learning from human feedback) might have led to the model's inability to navigate deeper and morally challenging themes. Another direction for exploration is how to build better writing tools that work together with humans. Some attempts have been made to let users interact through instructions (Chakrabarty et al., 2022) or to use editing sequences to improve writing quality (Schick et al., 2022). These could serve as critical building blocks toward the goal of developing a model that supports different types of input and can improve itself and personalize through interaction. In addition, models can assist in different stages of writing, such as world-building and reviewing drafts. It remains to be explored where the model is most effective and where human writers should step in and make decisions.
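As one example of an inference-stage lever for creativity, consider the sampling distribution itself. The sketch below shows temperature and nucleus (top-p) sampling over a toy next-token distribution; the vocabulary and logits are invented, and decoding knobs capture only one narrow notion of "creativity".

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=0.9, rng=None):
    """Temperature + nucleus (top-p) sampling: an inference-time knob for
    trading off predictable continuations against more surprising ones."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                   # most- to least-probable
    cdf = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cdf, top_p) + 1]   # smallest set covering top_p
    kept = probs[keep] / probs[keep].sum()
    return rng.choice(keep, p=kept)

# Toy vocabulary: a cliché continuation dominates the raw distribution
vocab = ["happily", "suddenly", "inexplicably", "quietly"]
logits = np.array([3.0, 1.0, 0.2, 0.5])
for t in (0.5, 1.5):  # low vs. high temperature
    idx = sample_next_token(logits, temperature=t, rng=np.random.default_rng(0))
    print(f"T={t}: {vocab[idx]}")
```

Raising the temperature flattens the distribution so less clichéd continuations are sampled more often, while top-p bounds how far into the tail the model may wander.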
**Interactive Experiences.** Text generation models can not only be assistants for writing static scripts but also open up an opportunity to create dynamic and personalized experiences for the user by conditioning on their input. These interactive experiences can be used for education, therapy, game design, or filmmaking. More recently, there have been attempts to connect conversational models with other components such as speech recognition, text-to-speech, and audio-to-face rendering to create an end-to-end immersive experience of interacting with non-playable characters. Another related open area for exploration is creating emotion-oriented experiences, which is one of the key goals of storytelling (Lugmayr et al., 2017). We should consider creating narratives based on the desired emotional response and the reader's feedback (Brahman and Chaturvedi, 2020; Ziems et al., 2022; Mori et al., 2022).

## 4 New and Challenging Learning Paradigms

### Multimodal Learning

In light of the remarkable progress of the language world, we are now poised to venture into a multitude of modalities that were previously beyond consideration. Some learning signals stem from reading static data, such as images, videos, and speech, which will be discussed in this section, while other signals require interacting with the physical world, which will be detailed in Section 4.2.2.

Multimodal encoding, at its core, involves learning the "correspondence" or "alignment" among various modalities, which always faces the challenge of **granularity differences** across modalities. This is a new and growing area with several solutions proposed to align across modalities: (1) hard alignment that enables granularity-aware fusion (Tan and Bansal, 2020; Li et al., 2022; Momeni et al., 2023; Wang et al., 2022, 2023f); (2) soft alignment that projects the text space onto the vision space (Zhou et al., 2023; Li et al., 2023b; Zhu et al., 2023; Lin et al., 2023); a minimal sketch of such an alignment objective appears after this passage. Beyond these semantic alignment challenges, there are further difficulties when it comes to non-semantic abstractions:

**Geometric Reasoning:** Recognizing spatial relationships, such as "_left_", "_right_", "_beside_", "_above_", or "_behind_", requires comprehensive geometric mental simulation, at which existing models consistently make errors (Kamath et al., 2023). Maintaining transformation invariance, regardless of position, rotation, or scale, remains a core challenge. Moreover, current models, predominantly trained on 2D images, inherently miss the intricacies of 3D spatial configurations, inhibiting their understanding of depth and of relative object sizes based on distance. To address these challenges, existing efforts augment large models with an agent view to infer spatial layouts, predicting possible navigations from visual and textual cues (Liu et al., 2022; Berrios et al., 2023; Feng et al., 2023b). However, we believe the underlying challenge lies in the missing objective of geometric reasoning: existing pretraining paradigms predominantly focus on semantic alignment between image/video-language pairs, while geometric features (e.g., low-level edges and lines) are largely omitted from the encoded image representation.
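Returning to the soft-alignment approaches cited above: at their core is a contrastive objective that pulls matched image-text pairs together in a shared space. Below is a minimal NumPy sketch of a CLIP-style symmetric InfoNCE loss over random stand-in embeddings; it illustrates the objective only, not any cited system, and a real implementation would use a framework with automatic differentiation.

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss: matched image-text pairs are
    pulled together, mismatched pairs pushed apart, yielding a shared space."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (batch, batch) similarities
    labels = np.arange(len(logits))               # i-th image matches i-th text
    log_sm_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_t2i = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(log_sm_i2t[labels, labels].mean() + log_sm_t2i[labels, labels].mean()) / 2

rng = np.random.default_rng(0)
print(clip_style_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8))))
```

Notably, nothing in this objective rewards preserving low-level geometry (edges, lines, depth cues), which is consistent with the missing-objective argument made above.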
**Context Ambiguity:** Accurate understanding should factor in a wide context of temporal, social, and emotional dynamics, among others. The temporal dimension presents a unique challenge in understanding vision and speech. Existing methods focus only on temporal ordering (Zellers et al., 2021, 2022) and forward/backward generation (Seo et al., 2022; Yang et al., 2023; Cheng et al., 2023). However, temporal dynamics are much more complicated: for instance, a gesture in video (like a nod) may correspond to a later affirmation in the speech (Li et al., 2019). Such ambiguity requires reasoning over a wider context with various constraints. Emotion, another under-explored abstract dimension, is conveyed through tone, pitch, and speed in speech, and through expressions or body language in vision. Moreover, understanding social norms is challenging because the same word or facial expression can convey different emotions depending on the context. Potential solutions therefore need to take various contexts into account, including preceding conversations or events, along with causal reasoning.

**Hierarchical Perception:** Human cognition is inherently hierarchical. When processing visual signals, our attention is not uniformly distributed across every pixel but focuses on salient regions that carry the most information, allowing us to quickly identify key features and make sense of our surroundings (Hochstein and Ahissar, 2002; Eickenberg et al., 2017). However, existing models overlook such attention hierarchy and tend to lose focus when asked about visual details (Gao et al., 2023b). To address this challenge, interpreting natural scenes requires hierarchical recognition, from broader contexts down to detailed attribute abstraction. Aligning visual hierarchies with linguistic structures is likewise important. Furthermore, this requires the ability to perform abstraction over details; balancing abstracted scene understanding against intricate recognition remains an ongoing challenge.

### Online Learning

Trained on static corpora, existing models are incapable of keeping themselves updated with new information or of learning from interaction history for self-improvement. To alleviate these issues, this section discusses the need for next-generation models to learn in an _online_ setting.

#### 4.2.1 Updating Information Within Models

A straightforward approach to updating models is to continue training on new data. This is, however, neither efficient, since we only care about new information, which accounts for a small fraction of the data, nor effective, as fine-tuning on new data might interfere with information already learned by the model. To achieve efficient updates, we would like the model to automatically identify notable information in new data (Yu and Ji, 2023) instead of relying on heavy human selection or preprocessing, as in knowledge-editing tasks (Dai et al., 2021; Meng et al., 2022a,b; Zhu et al., 2020; De Cao et al., 2021; Hase et al., 2023; Mitchell et al., 2022b). Effectively updating the model requires overcoming the bias toward previously learned information (Yu and Ji, 2023; Wei et al., 2023) as well as avoiding catastrophic forgetting (McCloskey and Cohen, 1989; Ratcliff, 1990) of prior knowledge. This might be achieved by changing the training paradigm to increase model capacity over time (e.g., progressive training (Gong et al., 2019) or mixture-of-experts models (Shen et al., 2023); see the sketch below) or by a better understanding of knowledge organization within models (as detailed in Section 3.1), so that edits can be performed with minimal interference.
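To illustrate why expert-style capacity growth can localize updates, here is a minimal top-1 mixture-of-experts layer in NumPy. It is a didactic sketch, not the routing used in any cited system; all shapes and weights are random placeholders.

```python
import numpy as np

def moe_layer(x, expert_weights, gate_weights):
    """Minimal top-1 mixture-of-experts: a learned gate routes each input to
    one expert; adding experts grows capacity without retouching old ones."""
    gate_logits = x @ gate_weights                  # (batch, n_experts)
    chosen = gate_logits.argmax(axis=1)             # top-1 routing decision
    out = np.stack([x[i] @ expert_weights[e] for i, e in enumerate(chosen)])
    return out, chosen

rng = np.random.default_rng(0)
d, n_experts = 8, 3
experts = rng.normal(size=(n_experts, d, d))        # one weight matrix per expert
gate = rng.normal(size=(d, n_experts))
y, routing = moe_layer(rng.normal(size=(5, d)), experts, gate)
print(routing)  # which expert handled each of the 5 inputs
```

Because each input touches only its routed expert, new experts can in principle absorb new information while the parameters serving old inputs remain untouched, which is the interference-minimizing property sought above.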
#### 4.2.2 Learning from Continuous Interactions

Interaction is essential in human learning (Jarvis, 2006). Humans learn how best to tackle different tasks by interacting with the **environment**, and they learn social norms from their interactions with other **humans**. Moreover, such interactions are **multi-turn** in nature, allowing humans to iteratively refine their actions for the task at hand _and_ continuously improve their mental model's capability of performing similar tasks in the future.

**Interaction with Environments.** We consider environments a broad category of systems that provide feedback upon actions. The world we live in can be regarded as a typical environment: the laws of physics decide how the world state changes and provide sensory stimuli to the actor (e.g., Ahn et al. (2022)). Training a model (i.e., embodied AI) that can interact with the physical world through multi-modal input (Driess et al., 2023; Jiang et al., 2023) poses challenges related to multi-modal learning (Section 4.1), as well as unique challenges stemming from long-horizon planning requirements and dynamic environments. The concept of environments also extends to human-crafted environments (e.g., programming-language interpreters (Wang et al., 2023b) and embodied simulators (Shridhar et al., 2020)) that provide automated, rule-based feedback for any input. Such artificial environments allow easy collection of automatic feedback, which could prepare models for deployment in the physical world.

**Interaction with Humans.** Beyond learning from generic human preferences toward building generalist agents (Ouyang et al., 2022), real-world applications typically require customizable solutions (e.g., personalized agents) that can be created efficiently. We advocate for a new learning paradigm in which models can be taught through (multi-modal) interactions with humans, including natural language feedback (Padmakumar et al., 2022; Wang et al., 2023c) and physical demonstration (Lee, 2017). The complexity of this problem may also call for customized retrieval from a large toolset of specialized models and for effective action planning (Qin et al., 2023; Yuan et al., 2023).

## 5 Evaluation

As models become increasingly powerful and multi-purpose, their evaluation has become a growing bottleneck for advancing NLP. We first discuss the question of "what should be evaluated," followed by "how should we measure performance."

### Benchmarks

Language models are known to be multi-task learners, and the new generation of LLMs can achieve impressive performance under few-shot or even zero-shot conditions. This has led to the creation of many general benchmarks such as GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), MMLU (Hendrycks et al., 2021), Super-NaturalInstructions (Wang et al., 2022a), HELM (Liang et al., 2022), and AGIEval (Zhong et al., 2023). While setting up comprehensive benchmarks is useful, current benchmarks still have the following limitations: (1) they lack diverse and difficult tasks that are important for real-world applications; (2) they contain only static data sets, which are insufficient for applications that require multi-turn, context-dependent input, such as situation-grounded dialog; (3) they have robustness deficiencies; and (4) they lack support for performance analysis. Although some benchmarks extend to thousands of NLP tasks, most of these are variants of sentence-level tasks, ignoring more challenging tasks such as structured prediction and cross-document reasoning. For example, Li et al.
(2023) reported that LLM-based methods obtained 25.2%-68.5% lower performance than state-of-the-art methods based on much smaller models for nearly all Information Extraction tasks. Task design should also aim to assist human users with their daily tasks, as exemplified by ShareGPT,4 where the most popular tasks among ChatGPT users relate to planning and seeking advice. Another issue is that benchmarks quickly saturate as newer models are developed, and thus "live" benchmarks that can be updated over time (Kiela et al., 2021) might be worth pursuing.

Footnote 4: https://sharegpt.com/

To move beyond static data, we believe that simulated environments, such as large-scale multi-player game environments, can serve as an efficient solution. Games have been used to benchmark the progress of reinforcement learning algorithms (Silver et al., 2018; Guss et al., 2021) and to collect static datasets in NLP (Urbanek et al., 2019; Bara et al., 2021; Lai et al., 2022). Game worlds provide a cheap way to explore different environments and situations, which is necessary for grounded language learning and learning through interaction. Humans can interact with models playing as characters in the game to evaluate their performance, or we can let models interact with each other (Park et al., 2023) and evaluate their interaction behavior as a whole. Finally, we advocate for work on model diagnosis beyond the current brittle paradigm of case study through manual inspection: methods that help identify which parts of the input the model underperforms on (Liu et al., 2021), what the model's behavior patterns are, and what data this performance can be attributed to (Ilyas et al., 2022).

### Metrics

Automatic evaluation metrics have been an accelerant for NLP progress over the last 20 years. Heuristic-based metrics (Papineni et al., 2002; Lin, 2004; Lavie and Agarwal, 2007) have been found to correlate weakly with human preferences (Liu et al., 2016). As a result, the field has pivoted to model-based metrics, which show better alignment with human judgment (Lowe et al., 2017; Zhang et al., 2020; Sellam et al., 2020; Yuan et al., 2021; Zhong et al., 2022). However, such metrics might allow for shortcut solutions or carry biases embedded in the scoring model (Sun et al., 2022). Automatic metrics struggle with open-ended natural language generation problems, such as conversation and creative writing tasks, due to the absence of ground truth. LLMs present an opportunity to tackle this problem (Zheng et al., 2023; Fu et al., 2023; Liu et al., 2023), but they also suffer from certain biases, including position, verbosity, and self-enhancement biases (models prefer their own outputs), about which users should be cautious. We need to develop metrics beyond accuracy and evaluate aspects such as robustness (Chen et al., 2023), bias, consistency (Chan et al., 2023), informativeness, truthfulness, and efficiency. On the other hand, human evaluation has traditionally been perceived as the more trustworthy evaluation method and a better indicator of model utility. However, as models improve, it is questionable whether crowdworkers are adequate assessors (or annotators), particularly in fields such as science, healthcare, or law. Annotator bias (Geva et al., 2019; Sap et al., 2022) and disagreement (Fornaciari et al., 2021) should also be taken into consideration. If we design our models to be "assistants," a more useful human evaluation might be to identify not which output is more correct, but which output helps the human complete the task more efficiently.
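As a concrete handle on the position bias mentioned above, an order-swap test checks whether an LLM judge's verdict flips when the two candidate answers trade places. The `judge` function below is a hypothetical stand-in for a real judging prompt or API call.

```python
def judge(answer_a: str, answer_b: str) -> str:
    """Hypothetical LLM judge: returns "A" or "B". This toy version is
    deliberately position-biased; replace it with a real model call."""
    return "A"  # always prefers whichever answer is shown first

def position_bias_rate(pairs):
    """Order-swap test: a position-consistent judge should mirror its
    verdict when the presentation order of the two answers is swapped."""
    inconsistent = 0
    for a, b in pairs:
        v_original, v_swapped = judge(a, b), judge(b, a)
        # After swapping, "A" should become "B" and vice versa
        if v_original == v_swapped:
            inconsistent += 1
    return inconsistent / len(pairs)

pairs = [("first candidate answer", "second candidate answer")] * 10
print(f"position-bias rate: {position_bias_rate(pairs):.0%}")  # 100% here
```

Analogous swap or paraphrase probes can be written for verbosity and self-enhancement biases; none of them require access to model internals.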
## 6 NLP+X Interdisciplinary Applications

### Human-Centered NLP

As LLMs become ubiquitous in both the research and public spheres, mitigating their potential harms to social groups, both allocational and representational (Blodgett et al., 2020), must be a core consideration. Social bias and stereotypes are a common way for LLMs to materialize these internal defects, so debiasing these models is important for fairness and robustness. Furthermore, LLMs must be aware of the extra-contextual requirement of abiding by the sociocultural norms expected by the user (Fung et al., 2023), especially when used as chatbots directly interacting with humans. Post-hoc debiasing and improving the social awareness of pretrained LLMs are important to this end.

Though modern approaches have made great advances in democratizing LLM training, most builders have no need to pretrain their own LLMs, opting, at most, to fine-tune them. Rather than hoping that an LLM is unbiased after pretraining, many researchers have discussed the utility of a separate, general debiasing step to account for any unintended associations stemming from pretraining (Yu et al., 2023; Omrani et al., 2023; Yang et al., 2023). Relatively less explored is the complementary requirement of augmenting LLMs with the awareness of and ability to abide by sociocultural norms. The crux of the problem is training the model to recognize _what_ behaviors in its training data are the results of sociocultural norms, to discover _why_ and _when_ those norms should be followed, and to determine _how_ they can be followed (i.e., only in a specific way, or as a behavior that generalizes across situations).

Another important direction is personalization based on the user, particularly for chatbots. LLMs have an amazing ability to multiplex behavior based on the language context provided in the prompt (Section 2.1), but they cannot account for the audience beyond what is inferred from the text. This poses a problem for personalization, because the same context or conversation can have differing levels of appropriateness depending on the audience (e.g., something one person finds relatively harmless may be incredibly offensive to someone else). Thus, we must improve LLMs' ability to infer the personal norms and appropriate behaviors of each individual context independently and to act accordingly. This may, in part, involve bridging the gap between distant users who share similar beliefs in order to decode latent representations (Sun et al., 2023). In parallel, we can also provide users with multi-dimensional controls for generation (Han et al., 2023), covering, for example, sentiment, political stance, and moral values, so that they can directly influence the model's language usage.

### NLP for Science

One of the areas where NLP holds the greatest potential impact is science (Hope et al., 2022; Zhang et al., 2023). Although researchers have long been interested in extracting actionable information from the literature (Hersh and Bhupatiraju, 2003; Griffiths and Steyvers, 2004; Li et al., 2016; Wang et al., 2021), this has been challenging due to the variety and complexity of scientific language.
With the growing capabilities of NLP techniques, this area now deserves intensified focus, given both its potential impact and the challenges that must be overcome. One exciting emerging direction is jointly learning natural language and other data modalities in the scientific domain (Edwards et al., 2021; Zeng et al., 2022; Edwards et al., 2022; Taylor et al., 2022); here, one of the largest problems of current LLMs, hallucination, becomes a strength for discovering new molecules (Edwards et al., 2022), proteins (Liu et al., 2023), and materials (Xie et al., 2023). Another noteworthy application is NLP for medicine. As a motivating example, there are an estimated \(10^{33}\) realistic drug-like molecules (Polishchuk et al., 2013). Among these, certain substructures confer beneficial drug properties, and knowledge about these properties is reported in millions of scientific papers. However, existing LLMs are pretrained only on unstructured text and fail to capture this knowledge, in part due to inconsistencies in the literature. Recent solutions for domain-knowledge-empowered LLMs include a lightweight adapter framework that selects and integrates structured domain knowledge into LLMs (Lai et al., 2023), data augmentation for distilling knowledge from general-domain LLMs into the scientific domain (Wang et al., 2023), and tool-learning frameworks that leverage foundation models for more complicated sequential-action problem solving (Qin et al., 2023; Qian et al., 2023). Overall, future research can explore bespoke architectures, data acquisition techniques, and training methodologies for comprehending the diverse modalities, domain-specific knowledge, and applications within science.

### NLP for Education

LLMs readily capture vast knowledge of many subjects, and augmenting LLMs with external knowledge naturally improves their ability to elicit that knowledge when generating lesson plans and materials. However, there are also applications in education that are distinct from general NLP tasks. In particular, personalizing education and the educational experience with LLMs would allow educators to focus on the more general efforts of high-level teaching. The utility of using language models to educate then comes not from the model's ability to "learn" the appropriate knowledge but from its ability to find associations. One facet of this challenge is identifying and analyzing gaps in a student's understanding or learning. For example, beyond simply scoring essays or responses along discrete dimensions such as fluency or sentence structure, or identifying key spans (Mathias and Bhattacharyya, 2020; Takano and Ichikawa, 2022; Fiacco et al., 2022), one could use LLMs to determine which parts of a freeform submission indicate a gap and associate it with a learning goal provided by the teacher, without using specific (and costly to create) gold-labeled responses, so that the student receives actionable feedback and can work on self-improvement (a naive baseline is sketched below). As part of this work, we need to accurately identify which portions of the response were written by the student rather than copied from an AI assistant; this would ensure that gaps are not hidden, but it would require a longitudinal view of the student's ability. We must also ensure that the LLM's recommendations are based on actual details of the student and the text, rather than being general predictions with high priors or based on hallucinations.
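As a deliberately naive baseline for the gap-identification idea above, one can flag response sentences whose best similarity to any teacher-provided learning goal falls below a threshold. Everything here, the goals, the student sentences, the 0.2 threshold, and the TF-IDF stand-in for an LLM, is illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical teacher-provided learning goals and a student response
goals = [
    "Explain how photosynthesis converts light energy into chemical energy.",
    "Describe the role of chlorophyll in absorbing light.",
]
response_sentences = [
    "Photosynthesis turns sunlight into sugars the plant can use.",
    "Plants are green and grow in soil.",  # weakly tied to either goal
]

vec = TfidfVectorizer().fit(goals + response_sentences)
sims = cosine_similarity(vec.transform(response_sentences), vec.transform(goals))

# Flag sentences whose best goal similarity falls below a chosen threshold
for sentence, best in zip(response_sentences, sims.max(axis=1)):
    flag = "GAP?" if best < 0.2 else "ok  "
    print(f"[{flag}] {best:.2f}  {sentence}")
```

An LLM-based version would replace the similarity score with a judgment prompt per (sentence, goal) pair, but the thresholding and flagging scaffold stays the same.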
Furthermore, rather than merely simplifying original lesson materials (Mallinson et al., 2022; Omelianchuk et al., 2021), we should invest in using LLMs to generate or retrieve materials or scaffolding that _help_ advance students' learning rate.

## 7 What We Need

Our overall aim is to combat the stultification of NLP as a mere evaluation-optimization endeavor and to dispel fears that LLMs and generative AI will shut down the field. As an old saying goes, frequent moves kill a tree but make a person prosper. Just as NLP researchers in the 1980s had to learn about machine learning and then embrace it as a core technique of the field, so we now must explore and embrace LLMs and their capabilities. Machine learning did not 'solve' the challenges of NLP: it did not produce an engine that could learn languages, translate, answer questions, create poetry, and do all the things a child can do. Some people claim that LLMs can do all this, and more. But we are in the first flush of engagement and have not yet had time to discover all their shortcomings. Central is the challenge of scale. No child needs to read or hear more than half of the internet's English text in order to use language. What reasoning and sensory capabilities do people have that LLMs lack? How can NLP research evolve to model and encompass them? We urgently need global infrastructures to dramatically scale up computing resources, because open-source models still cannot achieve performance comparable to the GPT variants (Gudibande et al., 2023). But we also urgently need deeper thinking about the foundational conceptual models driving our field. During this unique period, when NLP researchers feel uncertain about which research problems to pursue, we as a community need a collective effort to systematically change and refine our paper review system and measures of academic success, in order to establish a more inclusive research environment and to encourage researchers (particularly those in junior positions) to explore long-term, high-risk topics that are crucial for the entire field. The new challenges also require us to be more open to close collaboration with researchers from other fields, including social science, natural science, computer vision, knowledge representation and reasoning, and human-computer interaction.

### Limitations

In this paper we describe new or under-explored NLP research directions that remain dissertation-worthy. We propose a wider and more exciting vision of NLP that encourages people to tackle a broader range of challenging problems with potential impacts for social good. These problems may not always admit easy datasets and pure machine-learning solutions. Our list is not meant to be exhaustive; we chose these directions as examples. It is up to NLP researchers to uncover the problems and develop novel solutions.

### Ethical Considerations

The research areas listed in this document are a few of the main areas ripe for exploration; others exist. We do not intend our proposed positions to be forcefully pedagogical, and we encourage diverse and deeper investigation of worthy research areas. Among the proposed directions, we acknowledge that some require access to users' personal information (e.g., chatbot personalization in Section 6.1), and some applications might have a high impact on users (e.g., using models to assess a student's grasp of knowledge for targeted education in Section 6.3).
The use of LLMs for creative work has also led to concerns about copyright and about regulations over whether an AI can be credited as an author. We do not support the use of LLMs for screening or resource-allocation purposes without safeguarding measures. Even for lower-risk use cases, we advocate for more research on the robustness, transparency, and fairness of such systems. Finally, we must evaluate the compliance of prompting LLMs with laws and regulations. For instance, in education applications, if we require information about the student, we must refer to laws such as FERPA, DPA, and GDPR, especially in online learning settings.

## Acknowledgements

This work is based upon work supported by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA CCU Program No. HR001122C0034, U.S. DARPA ECOLE Program No. HR00112390060, U.S. DARPA ITM Program No. FA8650-23-C-7316, U.S. DARPA SemaFor Program No. HR001120C0123, and U.S. DARPA INCAS Program No. HR001121C0165. The opinions, views, and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
The dramatic improvement in the performance of large language models has transformed the field of natural language processing more abruptly and more profoundly than any change in other fields over the past 80 years. This shift has raised concerns that the field will become homogenized and excessively resource-intensive, and the new situation has put many academic researchers, especially PhD students, at a disadvantage. This paper aims to define a new playground for NLP by proposing theoretical analyses, new and challenging tasks, learning paradigms, and applications across diverse fields.