Dataset fields (type and observed range):

entry_id            stringlengths     33 to 33
published           stringlengths     14 to 14
title               stringlengths     18 to 175
authors             sequencelengths   1 to 1.12k
primary_category    stringclasses     114 values
categories          sequencelengths   1 to 8
text                stringlengths     5 to 364k
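Each subsequent record pairs these metadata fields with the extracted full text of one arXiv paper. A minimal Python sketch of iterating over such records with the Hugging Face datasets library is given below; the repository name "user/arxiv-fulltext" and the "train" split are hypothetical placeholders, while the field names follow the schema above.

from datasets import load_dataset  # pip install datasets

# Hypothetical dataset path; substitute the actual repository name.
ds = load_dataset("user/arxiv-fulltext", split="train")

for record in ds.select(range(2)):
    # Fields mirror the schema above: entry_id, published, title,
    # authors, primary_category, categories, text.
    print(record["entry_id"], record["published"])
    print(record["title"], "|", record["primary_category"], record["categories"])
    print(record["text"][:200], "...")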
http://arxiv.org/abs/2407.13545v1
20240718142004
DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays
[ "Xuhui Liu", "Zhi Qiao", "Runkun Liu", "Hong Li", "Juan Zhang", "Xiantong Zhen", "Zhen Qian", "Baochang Zhang" ]
eess.IV
[ "eess.IV", "cs.CV" ]
DiffuX2CT X. Liu et al. Beihang University, Beijing, China Central Research Institute, United Imaging Healthcare, Beijing, China Zhongguancun Laboratory, Beijing, China {xhliu,zhang_juan}@buaa.edu.cn, zhenxt@gmail.com DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays Xuhui Liu1 Zhi Qiao 2 Runkun Liu 2 Hong Li 1 Juan Zhang^* 1 Xiantong Zhen^* 2 Zhen Qian 2 Baochang Zhang 1,3 July 22, 2024 ===================================================================================================================== [1]Corresponding authors. § ABSTRACT Computed tomography (CT) is widely utilized in clinical settings because it delivers detailed 3D images of the human body. However, performing CT scans is not always feasible due to radiation exposure and limitations in certain surgical environments. As an alternative, reconstructing CT images from ultra-sparse X-rays offers a valuable solution and has gained significant interest in scientific research and medical applications. However, it presents great challenges as it is inherently an ill-posed problem, often compromised by artifacts resulting from overlapping structures in X-ray images. In this paper, we propose DiffuX2CT, which models CT reconstruction from orthogonal biplanar X-rays as a conditional diffusion process. DiffuX2CT is established with a 3D global coherence denoising model with a new, implicit conditioning mechanism. We realize the conditioning mechanism by a newly designed tri-plane decoupling generator and an implicit neural decoder. By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays, therefore producing faithful textures in CT images. As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays. Extensive experiments on this dataset and three more publicly available datasets demonstrate the effectiveness of our proposal. § INTRODUCTION Computed tomography (CT) is one of the widely used imaging modalities in clinical routines. CT images provide detailed and comprehensive views of target areas of interest, thereby playing a crucial role in clinical practice for diagnosis <cit.>. To acquire high-quality CT images, a full rotational scan with numerous X-ray projections is performed around the body. Standard reconstruction algorithms, such as filtered back projection and iterative reconstruction, facilitate the creation of these volumes <cit.>. This process allows physicians to accurately visualize the location and shape of structures like bones and implants <cit.>. However, the extensive data acquisition in CT scans makes it infeasible in most surgical scenarios. In addition, it is also less preferred to perform CT scans due to the higher radiation exposure to patients compared to a single X-ray. Fortunately, reconstruction of CT images from ultra-sparse X-rays offers an attractive alternative <cit.>, which only requires cost-effective X-ray machines and reduces radiation exposure to patients. While X-rays excel in bone contrast and enable rapid assessments <cit.>, they only project objects onto a 2D plane, lacking a complete 3D view of internal structures. Specifically, the limited internal body information in X-rays leads to significant ambiguity and artifacts, as numerous possible CT images could be obtained from the same set of 2D X-ray projections. 
In addition, the overlapping of objects in X-rays further complicates the reconstruction of CT images. Due to the significance in both research and clinical practice, great efforts have been made in 3D CT reconstruction from X-rays <cit.>. Early attempts treat it as a regression task, which usually utilizes deep neural networks <cit.> to directly establish a mapping from X-rays to CT images. However, these methods tend to generate blurry CT images, failing to recover realistic body structures (Fig. <ref>(a)). This is largely due to regression losses that simply calculate the averaged results of possible predictions. Deep generative models, e.g., generative adversarial networks (GANs) <cit.>, have also been explored, which perform better in learning the distribution of body anatomy from a large training set, thereby delivering better sample quality <cit.>. However, these GAN-based methods fall short of recovering sufficient 3D structure from 2D X-rays, yielding severe artifacts and blurry images (Fig. <ref>(b)). Recently, diffusion models <cit.> have achieved tremendous success in various synthesis tasks <cit.>. They have also been tried on settings of sparse-view and limited-angle CT reconstruction problems <cit.>. Unlike the challenging setting addressed in our work, they still rely on CT scanners, not offering a solution for the extremely ill-posed problem of using biplanar X-rays in CT reconstruction. Moreover, they are developed under 2D network architectures, lacking the ability to establish 3D global coherence in CT images. This paper presents a new 3D conditional diffusion model, termed DiffuX2CT, for CT reconstruction from biplanar orthogonal X-rays. This is different from current diffusion-based CT reconstruction methods, which still rely on CT scans. At a high level, DiffuX2CT formulates the reconstruction task as a conditional denoising diffusion process, where biplanar orthogonal X-rays are incorporated as a condition in the diffusion process. DiffuX2CT is implemented with a 3D encoder-decoder network to model the 3D prior distribution of body anatomy, ensuring high sample quality and global coherence throughout the 3D CT image. To reduce computation costs, DiffuX2CT introduces the shifted-window attention mechanism <cit.> as a substitute for the self-attention <cit.> operations. Moreover, we propose a new, implicit conditioning mechanism (ICM) to incorporate 3D structure information into CT image generation in the diffusion process. The ICM is realized by a newly designed tri-plane decoupling generator and an implicit neural decoder. The tri-plane decoupling generator individually reconstructs the three 2D planes of the tri-plane representations <cit.>, while preserving their spatial correlations, enabling the effective recovery of 3D structure from 2D biplanar X-rays. Subsequently, the implicit neural decoder maps the tri-plane features and the assigned coordinates to 3D implicit representations for conditioning the synthesis process. In addition, we introduce a geometry projection loss in training DiffuX2CT to facilitate structural consistency in the 3D space. To verify the clinical significance of CT reconstruction using biplanar X-rays, we collect a new real-world lumbar vertebra dataset, called LumbarV, with 268 3D CT images from different patients. Each contains implants inserted in the lumbar vertebrae. 
The DiffuX2CT is capable of reconstructing high-quality CT images, which could assist surgeons in locating the implants and measuring the shape of the vertebrae as shown in Fig. <ref>(c). The major contributions of our work are summarized in three folds as follows: * We propose a new, 3D conditional diffusion model - DiffuX2CT, which is the first model that leverages diffusion models with a new tri-plane conditioning mechanism to achieve faithful and high-quality CT image reconstruction from biplanar X-rays. * We propose the implicit conditioning mechanism (ICM), which is realized by a newly designed tri-plane decoupling generator and an implicit neural decoder. Our implicit conditioning mechanism enables 3D structure information to be recovered for reconstructing CT images. * We contribute a new real-world lumbar vertebra dataset, by collecting CT images from real clinical practice, which covers a wide range of diseased cases. The dataset will serve as a new benchmark to evaluate the effectiveness and performance of CT reconstruction from biplanar X-rays. § RELATED WORK CT Reconstruction from X-rays. Classical CT reconstruction methods, such as filtered back projection and iterative reconstruction, require a large number of X-ray projections taken from all viewing angles around the body to generate a high-quality volume <cit.> but encounter severe distortions under ill-posed settings. Subsequent model-based iterative reconstruction (MBIR) methods <cit.> are dedicated to using less number of X-rays and maintaining high sample quality, but they still can not address the unmet demand for real-time imaging while substantially reducing radiation. The success of deep learning methods makes it possible to reconstruct CT images using ultra-sparse X-rays. Regression-based methods <cit.> directly learn a mapping from X-rays to CT through specialized neural networks with a mean squared error (MSE) loss. <cit.> explore the feasibility of using a single X-ray projection to predict a 3D volume representation. X-CTRSNet <cit.> and XTransCT <cit.> achieve higher accuracy and show that using biplanar X-ray inputs can be advantageous for CT reconstruction. However, these regression-based methods perform poorly in generating high-quality CT images with fine details. Recently, GAN-based methods <cit.> have been proposed to improve the sample quality of generated CT images. For instance, X2CT-GAN <cit.> increases the data dimension of 2D X-rays to 3D CT and introduces adversarial loss for realistic CT reconstruction. Despite the ability to generate relatively fine details, these GAN-based methods still suffer from severe artifacts and struggle to capture complex data distributions, yielding unnatural textures. 3D Generative Models. Generative Adversarial Networks (GANs) <cit.>, Variational Auto-Encoders (VAEs) <cit.>, and DMs have been widely used to model the explicit 3D data distribution, such as voxel grids <cit.>, point clouds <cit.>, and meshes <cit.>. The recent advancement of Neural Radiance Fields (NeRF) <cit.> further facilitates the emergence of numerous works on building 3D structures through learning implicit representations <cit.>. In particular, EG3D <cit.> first proposes the concept of hybrid explicit–implicit tri-plane representation for efficient 3D-aware generation from 2D data. Subsequently, Rodin <cit.> and PVDM <cit.> generate tri-plane representations for synthesizing 3D digital avatars and videos, respectively. 
In comparison, our objective is to extract the tri-plane representations from biplanar orthogonal X-rays for capturing the 3D structural information. Diffusion Probabilistic Models. Deep diffusion models (DMs) are first introduced by Sohl-Dickstein et al. <cit.> as a novel generative model that generates samples by gradually denoising images corrupted by Gaussian noise. Recent DM advances have demonstrated their superior performance in image synthesis, including DDPM <cit.>, DDIM <cit.>, ADM <cit.>, LSGM <cit.>, LDM <cit.>, and DiT <cit.>. DMs also have achieved state-of-the-art performance in other synthesis tasks, such as text-to-image generation in GLIDE <cit.>, DALLE-2 <cit.>, and Imagen <cit.>, speech synthesis <cit.>, and super-resolution in SR3 <cit.> and IDM <cit.>. Moreover, DMs have been applied to text-to-3D synthesis in DreamFusion <cit.> and its follows <cit.>, video synthesis <cit.>, text-to-motion generation <cit.>, and other 3D object synthesis in RenderDiffusion <cit.>, diffusion SDF <cit.>, and 3D point cloud generation <cit.>. Recently, several works <cit.> explore their potential in solving inverse problems in medical scenarios, such as sparse-view computed tomography (SV-CT), limited-angle computed tomography (LACT), and compressed-sensing MRI, etc. We would like to highlight that our DiffuX2CT is fundamentally different from these diffusion-based CT reconstruction methods in that DiffuX2CT can extract faithful 3D conditions from 2D X-rays, while they still rely on the CT scanners to obtain the 3D conditions. Moreover, their 2D denoising model's architectural design may not effectively handle the global coherence across the CT images. § METHODOLOGY This section presents the DiffuX2CT approach, a 3D conditional diffusion model with an implicit conditioning mechanism (ICM), as shown in Fig. <ref>. DiffuX2CT is trained to recover faithful 3D structures from biplanar orthogonal X-rays (the front view and the side view) to generate high-quality 3D CT images. §.§ Problem Statement Given a CT image 𝐲 and paired biplanar X-rays of a front view 𝐱_1 and a side view 𝐱_2, DiffuX2CT aims to learn a parametric approximation to the data distribution p(𝐲|𝐱_1,𝐱_2) through a Markov chain of length T. Following <cit.>, we define the forward Markovian diffusion process q by adding Gaussian noise as: q(𝐲_t | 𝐲_t-1) =𝒩(𝐲_t | √(1-β_t)𝐲_t-1,β_t 𝐈), where β_t ∈(0,1) are the variances of the Gaussian noise in T iterations. In the training process, DiffuX2CT is trained to infer the conditional distributions p_θ(𝐲_t-1 | 𝐲_t, 𝐱_1,𝐱_2) to denoise the latent features sequentially. Formally, the inference process can be conducted as a reverse Markovian process from Gaussian noise 𝐲_T ∼𝒩(0, 𝐈) to a target volume 𝐲_0 as: p_θ(𝐲_t-1 | 𝐲_t, 𝐱_1,𝐱_2) =𝒩(𝐲_t-1 | μ_θ(𝐱_1,𝐱_2, 𝐲_t, t), σ_t^2 𝐈). As shown in Fig. <ref>, a 3D U-Net architecture is adopted as the denoising model that encodes the noisy image 𝐲_t into multi-resolution feature maps. Meanwhile, ICM is designed to recover multi-resolution 3D structural features from 2D X-rays 𝐱_1,𝐱_2 for conditioning the generation process of the denoising model. Overall, DiffuX2CT offers an elegant, end-to-end framework that unifies the iterative denoising process with 3D conditional learning from 2D inputs, without involving extra prior models. §.§ Implicit Conditioning Mechanism Extracting 3D structural information from 2D X-rays is crucial for reconstructing faithful CT images that have consistent details with the inputs. 
An intuitive idea is to first extract 2D features of X-rays, and then expand them to 3D conditions by duplicating along the channel dimension, and directly fuse them by pixel-wise addition, as done in X2CT-GAN <cit.>. This way, 3D structure information would not be well preserved. In contrast, we develop an implicit conditioning mechanism with a tri-plane decoupling generator and an implicit neural function. §.§.§ Tri-plane Decoupling Generator To implement our conditioning mechanism, we introduce a new formulation of tri-plane features for X-rays, which represents the 3D feature using three axis-aligned orthogonal planes, denoted by {𝐮^uv∈ℝ^H × W × C,𝐮^uw∈ℝ^H × D × C,𝐮^vw∈ℝ^W × D × C}, where H, W, D denote the spatial resolution and C is the number of channels. Given that the input front view 𝐱_1 and side view 𝐱_2 can be directly utilized for extracting the uw and vw feature planes 𝐱^uw and 𝐱^vw, we propose to generate the tri-plane features in a decoupling manner. Furthermore, to capture the spatial correlation between 𝐱^uw and 𝐱^vw, we design an interleaved convolution to mutually enhance 𝐱_1 and 𝐱_2. To model the third feature plane 𝐱^uv, we introduce two view modulators for refining and combining 𝐱^uw and 𝐱^vw. The combination weighted by view modulators inherently guarantees the spatial correlation between 𝐱^uv and the other two feature planes. However, reconstructing the absent 𝐱^uv solely through these two feature planes would not be sufficient. Hence we include a learnable embedding with a Gaussian perturbation to collect statistical information on the uv plane. Lastly, a lightweight decoder is employed to decode 𝐱^uv, 𝐱^uw, and 𝐱^vw, consequently producing 𝐮^uv,𝐮^uw, and 𝐮^vw. Interleaved Convolution. Since both 𝐱_1 and 𝐱_2 contain the information of the w axis, we propose to capture their spatial correlated features by incorporating each other's information before inputting them into the implicit encoder. Following a stem layer instantiated to extract initial features, we conduct the axis-wise pooling on 𝐱_1 and 𝐱_2 to obtain the vectors 𝐱_1^w, 𝐱_2^w∈ℝ^1 × D × C along the w axis. They are subsequently expanded to the original 2D dimension by replicating the vectors along row dimension, deriving the auxiliary features 𝐱_1^(·)w∈ℝ^W × D × C, 𝐱_2^(·)w∈ℝ^H × D × C. Then the interleaved convolution and the plane feature encoder are successively applied to process the enhanced features 𝐱̂_1=Concat(𝐱_1, 𝐱_2^(·)w) and 𝐱̂_2=Concat(𝐱_2, 𝐱_1^(·)w) for extracting 𝐱^uw and 𝐱^vw. View Modulators and Learnable Embedding. To obtain the absent uv feature plane, we initialize two view embeddings with s_1=(1,0,1) for 𝐱_1 and s_2=(0,1,1) for 𝐱_2, and map them to two view modulators γ_1,γ_2, with the adaptive multi-layer perceptrons (MLPs). Subsequently, γ_1 and γ_2 are normalized by the L2 norm, and then used to modulate 𝐱^uw and 𝐱^vw channel-wisely and fuse them as the raw uv feature plane 𝐱̅^uv. Formally, the modulation process can be summarized as follows: γ̅_1 = |γ_1|/√(γ_1^(2+γ_2^2+δ), γ̅_2 = |γ_2|/√(γ_1^2+γ_2^2+δ), 𝐱̅^uv =γ̅_1·𝐱^uw+γ̅_2·𝐱^vw, where δ=1 e-8 is introduced to avoid zero denominators. Furthermore, considering the information loss in uv plane, we adopt a sine distribution encoding 𝐬𝐢𝐧𝐞 and inject a Gaussian noise disturbance 𝐳 to construct a learnable embedding 𝐥^uv, which can supply additional information for generating the final uv feature plane 𝐮^uv. 
As a result, 𝐮^uv can be obtained by: 𝐱^uv = 𝐱̅^uv + CONV (𝐬𝐢𝐧𝐞 + 𝐳), where CONV is a convolution layer with activation to refine the initial learnable embedding. Finally, we employ three simple decoders with shared weights, concurrently decoding 𝐱^uv, 𝐱^uw, and 𝐱^vw to generate the multi-resolution tri-plane features 𝐮^(i)={𝐮^uv,(i),𝐮^uw,(i),𝐮^vw,(i)}, where i ∈{1, ⋯, N} denotes the index with different outputs, and N is the number of stage depths in the decoders. §.§.§ Implicit Neural Decoder Based on the expressive tri-plane features 𝐮^(i) with rich explicit 3D information, we aim at recovering the 3D structural information used as the condition for the CT generation. To accomplish this, we leverage the appealing property of implicit neural function by encoding the tri-plane features as a query function based on the coordinates to acquire the 3D structural representation. Specifically, we assign 𝐮^(i) with the assumed 3D spatial coordinates 𝐜^(i)∈ℝ^(D × H × W) × 3 as a reference, where D × H × W denotes the number of 3D points. Given the coordinates of each 3D point, we sample three corresponding feature vectors with bilinear interpolation by projecting them onto each plane and aggregating the retrieved vectors via summation. Afterward, the final 3D structural representation 𝐡^(i)∈ℝ^D × H × W × C can be derived using a two-layer lightweight MLP decoder by: 𝐡^(i) = Reshape{MLP[𝐮^(i)(𝐜^(i))]}, where 𝐮^(i)(𝐜^(i)) represents the aggregated features of the three vectors projected onto each plane. Note that we do not input the spatial coordinates into the lightweight MLP decoder. As a result, the multi-resolution 3D representations 𝐡^(i) can serve as faithful 3D conditions for guiding the CT generation. §.§ Model Architecture Given the feature maps 𝐟_down^(i) and 𝐟_up^(i) from the encoder and decoder of the 3D U-Net denoising model, where i ∈{1, ⋯, N} and N is the number of stage depths in the 3D U-Net, unlike the cross-attention mechanism that introduces additional computational cost, we directly concatenate them with the 3D conditions 𝐡^(i), respectively, so that CT features are synchronously generated according to the structural information in the conditions. While the 3D denoising model possesses more weight parameters compared to the 2D U-Net, it offers superior global coherence throughout the 3D volume. Moreover, the 2D U-Net synthesizes each slice in CT one by one for reconstructing a CT image, leading to a much slower inference time than our 3D model. To reduce the computation cost, we decrease the number of U-Ne layers by setting N to 3 and employ the shifted-window attention layer <cit.> rather than self-attention in the last stage of the 3D U-Net. §.§ Training Objective Noise Prediction Loss. By introducing 𝐱_1, 𝐱_2 as the conditions and their corresponding view embeddings s_1,s_2 in the reverse diffusion process, we optimize the denoising model as a noise estimator ϵ_θ to reconstruct the target CT image 𝐲̂_0 from a pure noise. The noise prediction loss is defined as: ℒ_simple = E_(𝐱_1,𝐱_2,𝐲)E_ϵ,t[ϵ-ϵ_θ(𝐲_t, t, 𝐱_1, 𝐱_2,s_1,s_2)_1^1], where ϵ∼𝒩(0,1), t∼ [1, T], and (𝐱_1,𝐱_2,𝐲) are paired X-rays and CT. Geometry Projection Loss. Furthermore, we draw inspiration from <cit.> and introduce a geometry projection loss ℒ_proj. Specifically, given the predicted noise ϵ̂ and the timestep t, we can derive the 𝐲̂_0 by: 𝐲̂_0=(𝐲_t-(1-α_t) ·ε̂) / √(α_t), where α_t=∏_i=1^t ( 1 - β_i). 
To streamline the process and focus on general geometry consistency, we employ three orthogonal projections to calculate the loss: ℒ_proj = 1/3(||𝒫_A(𝐲_0)-𝒫_A(𝐲̂_0)||_1^1 + ||𝒫_C(𝐲_0)-𝒫_C(𝐲̂_0)||_1^1 + ||𝒫_S(𝐲_0)-𝒫_S(𝐲̂_0)||_1^1). Putting everything together, we obtain the training objective as follows: ℒ_total = ℒ_simple + λℒ_proj, where we set λ=1/T in our experiments. § EXPERIMENTS In addition to the extensive experiments described in this section, we also provide more implementation details and results in the supplementary materials. §.§ Datasets We conduct extensive experiments on four datasets, including a newly collected lumbar dataset and three publicly available datasets. Newly Collected Lumbar Dataset. In our experiments, we collect a new lumbar vertebra dataset, called LumbarV, which comprises 268 3D CT images of different patients. Each patient has implants inserted in the vertebrae. We randomly select 231 CT images for training and leave the remaining 37 CT images for test. To ensure consistency, we initially resampled all CT images to an isotropic voxel resolution of 2× 2 × 2   mm^3 and then center-cropped them to a fixed size of 128 × 128 × 128. Then, we adopt CT value clipping to limit the CT value range to [-1024, 1500], and min-max normalization to scale data to the [-1, 1] range. To construct the paired data of CT and biplanar X-ray data for training and testing, we follow previous works <cit.> and synthesize the corresponding biplanar X-rays from imagetric CT images using digitally reconstructed radiographs (DRR) <cit.>. Public Datasets. To make a comprehensive evaluation, we also perform experiments on three publicly available datasets: (1) LIDC-IDRI dataset <cit.> contains 1,018 chest CT images. We follow the work of X2CT-GAN <cit.> to split 917 CT images for training and 101 for test; (2) CTSpine1K <cit.> is collected from four open sources, including COLONOG <cit.>, HNSCC-3DCT-RT <cit.>, MSD Liver <cit.>, and COVID-19 <cit.>. We randomly select 912 images for training and the remaining 102 for testing. (3) CTPelvic1K <cit.> is constructed for pelvic bone segmentation with annotation labels for the lumbar spine, sacrum, left hip, and right hip. We delete the duplicate data with CTSpine1K and randomly split 328 of the remaining data as the training set and 50 as the test set. §.§ Implementation Details Compared Methods. We compare our DiffuX2CT quantitatively and qualitatively with a regression-based method X-CTRSNet <cit.> and a GAN-based method X2CT-GAN <cit.>. For X2CT-GAN, we use the official code provided by the authors to perform experiments. For X-CTRSNet, we reproduce the model code according to the paper, due to the absence of the official code. Particularly, since the proposed ICM formulates an effective way to recover 3D information from the 2D X-rays, we can easily develop a regression model, termed ICM-Reg, by combining ICM with a simple 3d decoder. Detailed model architecture is shown in the supplementary. Training Details. We train our DiffuX2CT in an end-to-end manner. Following the vanilla DDPM <cit.>, we use the Adam optimizer with a fixed learning rate of 2e-4. The number of training iterations is set to 200,000 for the LumbarV and CTPelvic1K datasets and 800,000 for the LIDC-IDRI and CTSpine1K datasets. For training ICM-Reg, we use the same optimizer to train 200 epochs. We utilize a dropout rate of 0.2 and two 80GB NVIDIA RTX A800 GPUs for all experiments. Evaluation Metrics. 
The evaluation metrics include two distortion-based metrics PSNR, SSIM, and two perceptual metrics FID and LPIPS. We calculate PSNR and SSIM in two ways: (1) directly computing the values of 3D data, denoted as PSNR3D and SSIM3D; and (2) averaging the values of all 2D slices along the three axes. It is worth noting that we use the image encoder of MedCLIP <cit.> trained on the medical dataset, i.e., MIMIC-CXR <cit.>, which is appropriate for medical data to compute FID. §.§ Comparisons with Prior Arts We compare with two representative families of reconstruction models, i.e., generative and regression models. The performance of CT reconstruction methods is measured both qualitatively and quantitatively on the four datasets. Qualitative Results. Fig. <ref>, Fig. <ref>, and Fig. <ref> show the qualitative comparisons with prior arts on the LIDC-IDRI and CTSpine1K datasets. The results of the regression model X-CTRSNet exhibit inferior sample quality. Although the generative model, X2CT-GAN, can generate sharp textures, it often creates severe artifacts (e.g., incorrect heart shape and lack of aorta and pulmonary artery in Fig. <ref>) and distortions (i.e. unnatural bone structures in Fig. <ref>). Our regression version, ICM-Reg, is effective in recovering accurate structures but suffers from over-smoothing textures and lacks high-frequency details. In comparison, our DiffuX2CT is capable of reconstructing clear CT textures and consistent structure with the ground-truth, far superior to the counterpart methods. Quantitative Results. Table <ref> shows the quantitative comparisons with prior arts on the four datasets. DiffuX2CT exhibits the overall best results in terms of distortion-based metrics. This highlights the effectiveness of our implicit conditioning mechanism in extracting faithful structural information from 2D X-rays. Particularly, DiffuX2CT performs much better than other methods in terms of SSIM3D, showcasing its strong capacity to reconstruct CT images that are globally similar to the ground-truth in 3D space. Moreover, DiffuX2CT surpasses others by a large margin in all perceptual metrics, demonstrating its superiority in reconstructing high-quality CT images. Although the regression-based models, e.g., X-CTRSNet, can deliver promising PSNR since they are typically optimized by directly minimizing the L1 or L2 losses, they often encounter subpar sample quality and deliver lower perception metric scores. §.§ Further analysis by 3D Segmentation To intuitively demonstrate the effectiveness and practical significance of our DiffuX2CT, we evaluate the performance in 3D segmentation of the reconstructed CT images on the CTPelvic1K dataset. Specifically, we utilize a well-trained VNet <cit.> for 3D pelvic segmentation to segment the sacrum, coccyx, left and right pelvis in the CT images. The visualizations of inputs with segmentation masks are shown in Fig. <ref>. We can clearly observe that DiffuX2CT significantly surpasses other methods, capably reconstructing bone structures that align with real CT. Moreover, we use the Dice Similarity Coefficient (DSC) as the evaluation metric for segmentation. Quantitative results of the three objectives are given in Table <ref>. DiffuX2CT outperforms the previous works by a large margin in all objectives, demonstrating its effectiveness in reconstructing CT images with commendably accurate shapes and positions from biplanar X-rays. 
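For reference, two of the quantities used in this evaluation, PSNR computed over the whole volume (PSNR3D) and the Dice similarity coefficient (DSC), can be computed directly from the 3D arrays. A minimal numpy sketch is given below, assuming volumes min-max normalized to [-1, 1] (data range 2) as in the preprocessing described above and binary segmentation masks for the DSC; function and variable names are illustrative and not from any released code, and SSIM, FID, and LPIPS, which require dedicated models or packages, are omitted.

import numpy as np

def psnr3d(gt, pred, data_range=2.0):
    # PSNR over the entire 3D volume (PSNR3D); data range is 2 because
    # intensities are normalized to [-1, 1].
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def dice(gt_mask, pred_mask, eps=1e-8):
    # Dice similarity coefficient (DSC) between two binary masks.
    gt_mask, pred_mask = gt_mask.astype(bool), pred_mask.astype(bool)
    inter = np.logical_and(gt_mask, pred_mask).sum()
    return 2.0 * inter / (gt_mask.sum() + pred_mask.sum() + eps)

# Toy usage on random 128^3 volumes and thresholded masks.
gt = np.random.uniform(-1, 1, (128, 128, 128))
pred = np.clip(gt + 0.05 * np.random.randn(*gt.shape), -1, 1)
print(psnr3d(gt, pred), dice(gt > 0.5, pred > 0.5))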
§.§ Ablation Study We conduct ablation studies on the LumbarV dataset to demonstrate the effectiveness of the proposed ICM and the geometry projection loss ℒ_proj. Specifically, we construct five comparison models: (1) the baseline model, which is a 3D conditional diffusion model with a simple conditioning mechanism that extracts 3D conditions by a 2D encoder with expanding and pixel-wise addition operations like X2CT-GAN. (2) DiffuX2CT trained from scratch without ℒ_proj. (3)-(4) DiffuX2CT trained without the view modulators and learnable embedding 𝐥^uv respectively. (5) using a ResNetBlock to fuse the uw and vw feature planes without the view modulators. As shown in Table <ref>, all the proposed components contribute to the overall performance. The three models all yield good performance in perceptual metrics, illustrating the effectiveness of the 3D diffusion model in reconstructing high-quality CT images. Notably, the implicit conditioning model significantly improves the performance in terms of PSNR and SSIM, demonstrating its great effectiveness in capturing 3D structural information from 2D X-rays. §.§ Clinical Case Study To demonstrate the clinical utility of the proposed DiffuX2CT, we explore its actual clinical application in lumbar vertebra surgery. In that case, orthogonal X-rays can be easily acquired with C-arm machines, but CT images are often unavailable. This poses a great challenge to accurately locate the position and shape of implants, thereby increasing the difficulty of surgery. Fortunately, DiffuX2CT can potentially serve as a valuable alternative to reconstruct CT images. To verify this, we assess the performance of our DiffuX2CT on real-world lumbar X-ray data. Due to the absence of corresponding ground truth for the X-rays, we limit the evaluation to qualitative analysis. The reconstructed results of DiffuX2CT are shown in Fig. <ref>. Despite being trained on synthetic data, DiffuX2CT successfully reconstructs a realistic lumbar CT image. Especially, it effectively recovers the structure of bones and locates the position of implants, which is well aligned with real-world biplanar X-rays. This case study demonstrates that DiffuX2CT can potentially assist surgeons in observing the precise shapes and positions of bones and implants. § CONCLUSION This paper presents a new 3D conditional diffusion model, DiffuX2CT, that achieves high-quality and faithful CT reconstruction from biplanar X-rays. It is a significant breakthrough in this area. Specifically, DiffuX2CT adopts a 3D denoising model to ensure a high sample quality and global coherence across the 3D volume. More importantly, DiffuX2CT adopts a new implicit conditioning mechanism, comprised of a tri-plane decoupling generator and an implicit neural decoder, that effectively captures 3D structure information from 2D X-rays. The 3D conditions enable DiffuX2CT to achieve faithful CT reconstruction. Moreover, we construct a new lumbar dataset, named LumbarV, as a new benchmark for validating the significance and performance of CT reconstruction from biplanar X-rays. Extensive experiments on three public datasets and the LumbarV dataset demonstrate the effectiveness of our DiffuX2CT. Limitation. The purpose of this work is not to replace the current CT reconstruction methods using hundreds and thousands of X-rays, but to offer a valuable alternative when performing CT scans is infeasible. 
Although DiffuX2CT can reconstruct high-quality CT images with faithful structures such as bones and implants, it may not always accurately synthesize soft tissues, such as pulmonary nodules. One potential solution to this problem would be to introduce more X-ray views to gather additional structural information on soft tissues. § ACKNOWLEDGEMENTS This work was supported by the National Key Research and Development Program of China (Grant No. 2023YFC3300029). This research was also supported by the Zhejiang Provincial Natural Science Foundation of China under Grant No. LD24F020007, the Beijing Natural Science Foundation under Grant No. L223024, the National Natural Science Foundation of China under Grant Nos. 62076016, 62176068, and 12201024, the "One Thousand Plan" projects in Jiangxi Province under Grant No. Jxsg2023102268, the Beijing Municipal Science & Technology Commission and the Administrative Commission of Zhongguancun Science Park under Grant No. Z231100005923035, and the Taiyuan City "Double Hundred Research Action" under Grant No. 2024TYJB0127.
http://arxiv.org/abs/2407.12933v1
20240717180547
Detecting quantum properties in physical systems using proxy witnesses
[ "Priya Ghosh", "Ujjwal Sen", "Siddhartha Das" ]
quant-ph
[ "quant-ph" ]
Detecting quantum properties in physical systems using proxy witnesses

Priya Ghosh, Ujjwal Sen, Siddhartha Das

priyaghosh@hri.res.in, ujjwal@hri.res.in: Harish-Chandra Research Institute, A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Prayagraj 211019, India
das.seed@iiit.ac.in: Centre for Quantum Science and Technology (CQST), Center for Security, Theory and Algorithmic Research (CSTAR), International Institute of Information Technology, Hyderabad, Gachibowli, Telangana 500032, India; Centre for Quantum Information & Communication (QuIC), École polytechnique de Bruxelles, Université libre de Bruxelles, Brussels, B-1050, Belgium

§ ABSTRACT In practice, it is quite challenging to detect a quantum property, a microscopic property, in a macroscopic system. In our work, we construct general proxy witnesses of quantum properties to detect their presence in quantum systems, and we do so for quantum systems that may be large. In particular, we discuss proxy witnesses for quantum properties like unextendibility, quantum coherence, activation, steerability, and entanglement. We apply these proxy witnesses in some widely considered examples of many-body systems, viz., the quantum Heisenberg models and the quantum J_1-J_2 model.

§ INTRODUCTION Properties exhibited by quantum systems that are not present in classical systems are known to be useful resources in many information processing and computational tasks. For example, entanglement is well known as a useful resource in quantum communication <cit.>, quantum key distribution <cit.>, quantum computation <cit.>, etc. Studying quantum properties of physical systems is also of wide interest from fundamental perspectives. Measurement of macroscopic observables of a macroscopic system is often easy, but detection of a microscopic observable is typically difficult. As there are quantum properties that are not directly measurable, e.g., entanglement <cit.>, unextendibility <cit.>, quantum coherence <cit.>, nonlocality <cit.>, activation <cit.>, etc., it is pertinent to characterize these properties and develop “proxy" witnesses for their (indirect) detection <cit.>. Unextendibility <cit.> is a quantum property that restricts the sharing of quantum correlations, present in a joint state of two systems, with multiple parties. The resource theory of unextendibility <cit.> can be seen as a relaxation of the resource theory of entanglement <cit.>, and unextendibility has been studied in the context of quantum key distribution <cit.>, entanglement distillation <cit.>, quantum nonlocality <cit.>, etc. Quantum coherence <cit.> is an intriguing quantum phenomenon whose presence is the litmus test of “quantum" behavior and which is a required resource for several quantum information processing tasks, e.g., quantum cryptography <cit.>, quantum metrology <cit.>, quantum thermodynamics <cit.>, etc.
The general difficulty in measuring microscopic observables inspired some works where indirect means for measurement of some microscopic properties were discussed. In particular, these works proposed methods to achieve the detection of quantum entanglement in real experiments by means of measuring physical observables, which provide an indirect way to detect entanglement <cit.>. In this work, taking cue from previous works, we propose proxy witnesses for quantum unextendibility, quantum coherence, activation, steering, and entanglement, and present related observations for quantum systems. We first propose a few general criteria to detect quantum properties present in a macroscopic system by measuring the mean values of some feasible observables. This kind of detection method is called proxy witnessing of physical properties, and the associated observables are deemed as proxy witnesses. Additionally, we show that detection of some of these quantum properties (unextendibility, coherence, activation, steering, entanglement, etc.) is possible in different physical models such as the quantum Heisenberg model, the quantum J1-J2 model through our proxy witnessing criteria. The paper is organized as follows: In Sec. <ref>, we discuss the preliminaries required to discuss the main results obtained in this paper. In particular, we provide definitions of extendibility, Werner and isotropic states, the quantum Heisenberg model, the quantum J_1-J_2 model, PT- and APT-symmetric Hamiltonians, and passive and active states. In Sec. <ref>, we mention some observations on the detection of any quantum property using proxy witnesses. In Sec. <ref>, we provide lemmas for unextendibility witnessing and give a few applications of our unextendibility detection criterion for certain quantum many-body systems, viz. the quantum Heisenberg XXX model and the quantum J_1-J_2 model. In Sec. <ref>, we provide a few realistic models on which quantum coherence detection through our proxy witness criteria can be achieved. In Sec. <ref>, we find some conditions for witnessing activation experimentally and provide some applications of activation detection using proxy witnesses. In Sec. <ref>, we discuss when steerability and entanglement in the quantum Heisenberg XXX model and in the quantum J1-J2 model can be detected through our proxy witnessing criteria. We provide our concluding remarks in Sec. <ref>. § PRELIMINARIES In this section, we review some standard definitions and results from the literature that are required to derive and discuss the main results in later sections. Let ℋ_A and ℋ_B denote the two Hilbert spaces associated with the quantum systems A and B, so that ℋ_A⊗H_B is the Hilbert space of the composite system AB. We denote the dimension of the Hilbert space H with (H). Let D(H) denote the set of all density operators on ℋ. Let 𝒰((H)) denote the set of all unitary operators defined on a Hilbert space of dimension (H). We denote the identity operator as 1 on the relevant Hilbert space, and sometimes use its suffix to indicate the corresponding system. Γ_AB and F_AB respectively denote the projector on an unnormalized maximally entangled state and the swap operator: Γ_AB = ∑_i,j=0^d-1|ii⟩⟨jj|_AB, F_AB = ∑_i,j=0^d-1|ij⟩⟨ji|_AB, where d=min{(H_A),(H_B)}. The swap operator F_AB for any bipartite state also can be written as F_AB = Π^+_AB - Π^-_AB, where Π^+_AB and Π^-_AB are the projections onto the symmetric and anti-symmetric subspaces of ℋ_A ⊗ℋ_B, and satisfies (Π^+_AB)^2 + (Π^-_AB)^2 = Π^+_AB + Π^-_AB= 1. 
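As a quick numerical check of these operator identities, the Python sketch below builds Γ_AB, F_AB, and the symmetric and antisymmetric projectors for a small local dimension and verifies F_AB = Π^+_AB - Π^-_AB; it uses the standard construction Π^±_AB = (1_AB ± F_AB)/2 and is only an illustration of the definitions above, not code from the paper.

import numpy as np

d = 3  # local dimension; any finite d works

def ket(i, j):
    v = np.zeros(d * d)
    v[i * d + j] = 1.0
    return v

# Projector on the unnormalized maximally entangled state and the swap operator.
Gamma = sum(np.outer(ket(i, i), ket(j, j)) for i in range(d) for j in range(d))
F = sum(np.outer(ket(i, j), ket(j, i)) for i in range(d) for j in range(d))

# Symmetric / antisymmetric projectors and the identities quoted above.
I = np.eye(d * d)
P_plus, P_minus = (I + F) / 2, (I - F) / 2
assert np.allclose(F, P_plus - P_minus)
assert np.allclose(F @ F, I)                      # F_AB^2 = 1_AB
assert np.allclose(P_plus @ P_plus + P_minus @ P_minus, P_plus + P_minus)
print(np.trace(P_plus), np.trace(P_minus))        # d(d+1)/2 and d(d-1)/2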
Since F_AB is the swap operator, F_AB^2=1_AB. Any density operator ρ∈D(H), for (H)=d<∞, can be decomposed as <cit.> ρ = 1/d1 + 1/2S·λ̂, where λ̂ = {λ_j} and {λ_j}_j, for j = 1,2,...,d^2 -1, are a set of Hermitian and traceless generators of the SU(d) algebra. S is known as the coherence vector, and belongs to R^d^2 -1. {λ_j}_j can be the set of Pauli matrices for d=2, and the set of Gell-Mann matrices for d=3. The Pauli matrices, {σ_i}_i, for i∈{1,2,3}, are the generators of SU(2), with the following representation in the standard σ_3-basis ({|0⟩_z ≡|↑⟩_z,|1⟩_z ≡|↓⟩_z}): σ_1 = ( [ 0 1; 1 0 ]), σ_2 = ( [ 0 -i; i 0 ]), σ_3 = ( [ 1 0; 0 -1 ]). The Gell-Mann matrices, {T_j}_j, for j ∈{ 1,2,...,8 } are the generators of SU(3). See Appendix <ref>. §.§ Extendibility and unextendibility Now we recall a quantum property called unextendibility which quantifies the unsharability of quantum correlations among multiple systems <cit.>. In parallel, extendibility is the ability of sharing quantum correlations among multiple quantum systems. As discussed in Ref. <cit.>, the resource theory of unextendibility can be argued to be a relaxation of the resource theory of entanglement. While the entanglement measures are monotones under local quantum operations and classical communication (LOCC channels), the measures of unextendibility are monotones under k-extendibility preserving channels, e.g., one-way LOCC channels, which form a strict subset of the set of LOCC channels. For an integer k≥ 2, a bipartite state ρ_AB∈D(H_AB) is k-extendible with respect to a fixed subsystem B if there exists a state ρ_AB^kρ_AB_1B_2⋯ B_k∈D(H_AB_1B_2⋯ B_k) that satisfies ∀ i∈ [k]: ρ_AB =_B^k∖ B_i[ρ_AB_1… B_k], where H_B_i≃H_B for all i∈[k]{1,…,k} and B_1 ≡ B. Here, B^k = { B_1, B_2, …, B_k }. Eq. (<ref>) can be written in the terms of following two facts: * The state σ_AB_1B_2⋯ B_k is permutation invariant with respect to the B systems, in the sense that ∀π∈ S_k ρ_AB_1B_2⋯ B_k = 𝒲_B_1⋯ B_k^π(ρ_AB_1B_2⋯ B_k), where W^π_B_1⋯ B_k is the unitary permutation channel associated with π. Here, S_k is the set of all permutations of the ordered set {1,…,k}. * The state ρ_AB is the marginal of ρ_AB_1 ⋯ B_k, i.e., ρ_AB = _B_2 ⋯ B_k[ρ_AB_1… B_k]. A bipartite state ρ_AB that is not k-extendible w.r.t a fixed subsystem B is called k-unextendible on B. Let EXT_k(A;B) be the set of all states ρ_AB∈𝒟(ℋ_AB) that are k-extendible on B. For convenience, henceforth, extendibility and unextendibility of a bipartite state will be discussed with respect to a fixed system B unless stated otherwise. A bipartite state is separable if and only if the state is k-extendible ∀ k, k ∈N <cit.>. A k-unextendible state for any finite k ∈N (≥ 2) is always an entangled state. Any k-extendible state always remains k-extendible after the action of one-way local quantum operations and classical communication (1W-LOCC) <cit.>. In general, it appears that determining whether an arbitrary bipartite state is separable or entangled is computationally hard (NP-hard) <cit.>. There, k-extendibility provides a way to detect entanglement. The problem of checking for k-extendibility in any bipartite system can be framed as a semi-definite program (SDP) <cit.>. Semidefinite programming corresponds to the optimization of a linear function subject to linear constraints with variables lying in the cone of positive semi-definite matrices <cit.>. §.§ Werner states In this section, we discuss about a special family of states called Werner states that obey certain symmetries. 
These states are the subject of study in different contexts of quantum information, e.g., in entanglement purification <cit.>, quantum teleportation <cit.>, steering <cit.>, quantum communication <cit.>, etc. Any Hermitian operator H_AB, where (H_A)=(H_B)=d<∞, is called (U ⊗ U)-invariant if for all unitary operators U∈𝒰(d), we have <cit.> H_AB = U⊗ U H_AB U^†⊗ U^†. A (U ⊗ U)-invariant Hermitian operator H_AB^W can be written as <cit.> H_AB^W = α_1^W1_AB + α_2^W F_AB, where α_1^W, α_2^W∈R and F_AB is the swap operator (<ref>). A Werner state ρ^W(p,d)_AB∈𝒟(H_AB), where (H_A)=(H_B)=d<∞, is one which is (U ⊗ U)-invariant for arbitrary U∈𝒰(d). A Werner state can be written as <cit.> ρ^W(p,d)_AB = (1-p) 2/d(d+1)Π^+_AB + p 2/d(d-1)Π^-_AB, where p ∈ [0,1]. A Werner state ρ^W(p,d)_AB∈𝒟(H_AB) with (H_A)=(H_B)=d is k-extendible if and only if <cit.> p ≤1/2(d-1/k+1). For a two-qubit system AB, any (U⊗ U)-invariant Hermitian operator H^W_AB, where (H_A)=(H_B)=2 in Eq. (<ref>), can be expressed as H^W_AB = (α_1^W+ α_2^W/2) 1_AB +α_2^W/2∑_i=1^3 σ_i^A ⊗σ_i^B, because the two-qubit swap operator can be decomposed as F_AB = 1/21_AB + ∑_i=1^3 σ_i^A ⊗σ_i^B, where {σ_i }_i for i ∈{ 1,2,3 } are described in Eq. (<ref>). Similarly, for a two-qutrit system AB, any (U⊗ U)-invariant Hermitian operator H^W_AB, where (H_A)=(H_B)=3 in Eq. (<ref>), can be expressed in terms of SU(3) generators, as H^W_AB = (α_1^W+ α_2^W/3) 1_AB +α_2^W/2∑_j=1^8 T_j^A ⊗ T_j^B, because F_AB = 1/31_AB +1/2∑_j=1^8 T_j^A ⊗ T_j^B for a two-qutrit system. Here, { T_j }_j with j ∈{ 1, ⋯ , 8} denote Gell-mann matrices. For every bipartite state ρ_AB with (H_A)=(H_B)=d, there exists a Werner state ρ^W(p,d)_AB, where ρ^W(p,d)_AB ∫μ(U) U⊗ Uρ_AB U^†⊗ U^†, for U∈𝒰(d) and Haar measure μ(U). We will refer to the physical operations ∫μ(U) U⊗ U(·) U^†⊗ U^† as Werner twirling operations. §.§ Isotropic states A Hermitian operator H_AB, where (H_A)=(H_B)=d<∞, is called a (U ⊗ U^∗)-invariant if <cit.> ∀ U∈𝒰(d), H_AB = U⊗ U^∗ H_AB U^†⊗ (U^∗)^†. A (U ⊗ U^*)-invariant Hermitian operator, H_AB^I, with (H_A)=(H_B)=d, for an arbitrary U∈𝒰(d), has the form <cit.> H_AB^I = α_1^I1_AB + α_2^IΓ_AB, where α_1^I, α_2^I∈R and Γ_AB denotes the projector on the unnormalized maximally entangled state, expressed by Eq. (<ref>). An isotropic state ρ^I(t,d)_AB∈𝒟(H_AB), where (H_A)=(H_B)=d<∞, is one which is (U⊗ U^*)-invariant for arbitrary U∈𝒰(d). An isotropic state can be written as <cit.> ρ^I(t,d)_AB = t Φ_AB^d + (1-t) 1_AB - Φ_AB^d/d^2 -1, where t ∈ [0,1] and Φ_AB^d d Γ_AB. An isotropic state ρ^I(t,d)_AB∈𝒟(H_AB) with (H_A)=(H_B)=d is k-extendible if and only if <cit.> t ∈[ 0, 1/d(1+ d-1/k) ]. For any two-qubit system AB, any (U⊗ U^*)-invariant Hermitian operator H^I_AB, where (H_A)=(H_B)=2, can be expressed as (see Appendix <ref>) H^I_AB = (α_1^I+ α_2^I/2) 1_AB + α_2^I/2 (σ_1 ⊗σ_1 - σ_2 ⊗σ_2 + σ_3 ⊗σ_3), where {σ_i }_i, for i ∈{ 1,2,3 }, are expressed in Eq. (<ref>). For every bipartite state ρ_AB∈𝒟(H_AB), with (H_A)=(H_B)=d, there exists an isotropic state ρ^I(p,d)_AB, where ρ^I(p,d)_AB ∫μ(U) U⊗ U^∗ρ_AB U^†⊗ (U^∗)^†, for U ∈U(d) and the Haar measure μ(U). The physical operations ∫μ(U) U⊗ U^∗ (·) U^†⊗ (U^∗)^† are often referred as isotropic twirling operations. §.§ Quantum Heisenberg and J1-J2 models The quantum Heisenberg model <cit.> describes a one-dimensional chain of quantum spin-1/2 particles, and we consider it to be consisting of only nearest-neighbor interactions. 
The Hamiltonian of the Heisenberg model of N sites, following periodic boundary conditions, i.e., σ⃗^N+1 = σ⃗^N, is given by H_H = ∑_l=1^N J_x σ_x^l σ_x^l+1 + J_y σ_y^l σ_y^l+1 + J_z σ_z^l σ_z^l+1, where J_α∈{ J_x, J_y, J_z } are coupling constants and {σ_i }_i for i ∈{ x,y,z } are Pauli operators. l ∈{ 1, ⋯ N } denotes site number of the chain. J_α > 0 represents antiferromagnetic couplings whereas J_α < 0 represents ferromagnetic ones. The Heisenberg model is called an anisotropic Heisenberg XYZ model when J_x, J_y, J_z are all different. In the case when J_x = J_y ≠ J_z, the Hamiltonian describes the partially anisotropic Heisenberg XXZ model. For J_x = J_y = J_z, we have the isotropic Heisenberg XXX model. The model satisfying J_x ≠ J_y, J_z = 0 and J_x = J_y, J_z = 0 are known as the anisotropic and isotropic XY models respectively. The XY model with an external transverse field with periodic condition is H_XY^field = ∑_l=1^N J_x σ_x^l σ_x^l+1 + J_y σ_y^l σ_y^l+1 + h ∑_l=1^N σ_z^l, where J_x, J_y are coupling constants, h is a parameter associated with the external field. The XY model becomes an Ising model when J_x ≠ 0, J_y = 0. The ground state energy of Heisenberg XXX model in the thermodynamic limit is given by <cit.> E^XXX_gr≈ - 1.77 N J, for J_x = J_y = J_z = J. The ground state energies of the antiferromagnetic isotropic XY model and the antiferromagnetic Ising model without external field in the thermodynamic limit are <cit.> E^XY_gr = - J_x/π N and E^Ising_gr = - J_x/2 N, respectively, with N being the number of sites of the lattice. The Hamiltonian of the 1D J_1-J_2 model <cit.> with N lattice sites, with periodic boundary conditions, is given by H_J1-J2 = J_1 ∑_l=1^Nσ⃗^l ·σ⃗^l+1 + J_2 ∑_l=1^Nσ⃗^l ·σ⃗^l+2, where l denotes site number of the lattice and σ⃗{σ_1, σ_2, σ_3 }. J_1,J_2 denote coupling constants for nearest neighbor and next-nearest neighbor interactions of lattice respectively. §.§ PT and APT-symmetric Hamiltonians Let P̂ and T̂ denote the parity (P) and time reversal (T) operators, respectively. In this section, we will briefly recapitulate two classes of Hamiltonians that are categorized as PT-symmetric and APT-symmetric <cit.>. A Hamiltonian H is called PT-symmetric if it commutes with P̂T̂, i.e., [H, P̂T̂] = 0 <cit.>. Non-Hermitian Hamiltonians satisfying parity-time reversal symmetry can have real as well as complex eigenvalues. PT-symmetric non-Hermitian Hamiltonians display some unusual properties in a number of systems ranging from classical <cit.> to quantum <cit.>. Any PT-symmetric Hamiltonian for a single qubit has the following form in the standard σ_z-basis <cit.>: H_PT = ( [ iγ s; s -iγ ]), where s > 0 and a = γ/s > 0 represent, respectively, an energy scale and a coefficient corresponding to the degree of non-hermiticity. A Hamiltonian is called APT-symmetric if it anti-commutes with the parity-time reversal operator P̂T̂, i.e., {H, P̂T̂}= 0 <cit.>. Moreover, APT-symmetric systems exhibit some interesting effects <cit.>. The matrix representation of an APT-symmetric Hamiltonian for a single-qubit system in the standard σ_z-basis is <cit.> H_APT = ( [ γ is; is -γ ]), with s > 0 and a = γ/s > 0 being, respectively, an energy scale and a coefficient corresponding to the degree of non-hermiticity. §.§ Passive states and active states Consider a Hamiltonian H associated with a quantum system A, whose Hilbert space is H. 
Let the spectral decomposition of H be H = ∑_i E_i λ_i, where {|λ_i⟩}_i is an energy eigenbasis associated with energy eigenvalues {E_i}_i, where E_i < E_j if i < j. {|λ_i⟩}_i forms an orthonormal basis of H, where H can be finite- or infinite-dimensional, and where the zero eigenvalues of the Hamiltonian have also been taken account. A state ρ∈D(H) is said to be passive if and only if it can be expressed as <cit.> ρ = ∑_i q_i^↓λ_i, where { q_i^↓}_i denotes a non-increasing probability distribution, i.e., q_i^↓≤ q_j^↓ for all i > j. All passive states are incoherent with respect to the energy basis. No work can be extracted from passive states by the action of any unitary operations <cit.>. See <cit.> and references therein for connections between passivity and quantum thermodynamics. A state is called an active state if it is not passive. All pure states except for the ground state of the Hamiltonian are active. Active states are considered as resources in the context of quantum thermodynamics (see Ref. <cit.>). § DETECTION OF PHYSICAL PROPERTIES There has been a strong interest in deriving witness operators to detect entanglement via practical (observable) means <cit.>. Detection of entanglement is based on inference from the values of observables that are directly measurable, and the associated witnesses are deemed proxy witnesses. Here we build upon previous works and make statements for proxy witness operators for any physical property that is not directly detectable, so that obtaining inferences regarding the desired physical properties could still be experimentally feasible. We first make statements for observables of systems associated with a few special Hamiltonians. Consider a trace-class operator ϱ, a normal operator A, i.e., A^†A=AA^†, and a superoperator Ω that preserves A, i.e., Ω(A)=A. For an analytic function f: Dom(f)→ Im(f) such that the spectrum ς(A) of A is a subset of the domain Dom(f) of the function f, where Dom(f)⊆ℝ and Im(f)⊆ℝ, we have [ϱ f(A)]=[Ω^†(ϱ)f(A)], var(f(A))_ρ = var(f(A))_Ω^†(ρ) . Let the spectral decomposition of the normal operator A be A=∑_a∈ς(A)a Π^a, where Π^a is the projector on the space spanned by the eigenvectors of A corresponding to the eigenvalue a. As superoperator Ω keeps A invariant, we have Ω(A)=∑_a∈ς(A)a Ω(Π^a)=∑_a∈ς(A)a Π^a. The function f(A) on A is defined as f(A)=∑_a∈ς(A)f(a)Π^a. The action of Ω on f(A) is given as Ω(f(A))=∑_a∈ς(A)f(a) Ω(Π^a)=f(A), since Ω(Π^a)=Π^a from Eq. (<ref>). Based on the above equalities (identities), we have [ϱ f(A)]= [ϱΩ(f(A))]= [Ω^†(ϱ)f(A)], where we have used the definition of the adjoint of a superoperator for the last equality. Similarly, we can prove [ϱ f^2(A)]= [Ω^†(ϱ)f^2(A)]. Hence Eq. (<ref>) can be proved from Eqs. (<ref>) and (<ref>). A direct consequence of the above lemma is the following corollary. Consider a system with a Hamiltonian H such that H is preserved under a superoperator Ω. If the system is in a state ρ, then the mean energy of the system can be written as [ρ H] = [ρΩ(H)] = [Ω^†(ρ) H], where the state ρ is arbitrary. Now we consider detection of quantum properties of physical systems. These physical properties could be k-unextendibility, quantum coherence, activation, steerability, Bell non-locality, etc. We have the following observation. Let 𝒞 be the set of all states in D(H) that lack a certain physical property, P. Let 𝒞 be the complementary set of 𝒞 with respect to D(H). That is, for any ρ∈D(H), ρ∈𝒞 if and only if ρ∉𝒞. 
Consider a functional f that acts on an input state ρ∈D(H) and a physical observable A, and the functional f could be related to the physical property P, that we are interested in. Let { f_P (ρ, A): ρ∈𝒞} be the set of functional values of all states ρ∈𝒞 for the observables A. For any ξ∈D(H), if f_P (ξ, A)∉{ f_P (ρ, A): ρ∈𝒞} then ξ∈𝒞. Let us suppose that S' denotes a particular value of von Neumann entropy of a system on the Hilbert space D(H) with dimension d, which can be in between 0 and log_2 (d). And A' denote a particular mean value of the observable A for the corresponding system. Let us now define few physical quantities based on our above assumptions and the observation set A_min, 𝒞 min_ρ∈𝒞Tr[ρA], A_min, 𝒞, S' min_ρ∈𝒞, S(ρ) ≥ S' [ρA], S_max, 𝒞, A' max_ρ∈𝒞,[ρA] = A' S(ρ) , where S(ρ)- [ρlog_2 ρ] is the von Neumann entropy of the state ρ <cit.>. One may also consider some other informational quantity instead of the entropy. Now, we will develop a few criteria and a witness operator to detect the physical property P for any system based on the Observation <ref>. Properties of witness operator.— Suppose we want to find a witness operator for a physical property P. All the states which does not have the physical property P, form a set 𝒞 and 𝒞 is the complementary set of 𝒞. Then any operator Z will be called a witness operator of the physical property P for the state ξ∈𝒞, if it satisfies [ρ Z]≥ 0 , ∀ρ∈𝒞 [ξ Z] < 0. Now let us suppose that the physical property P is not observable, i.e., the property cannot be directly observed or measured. Also assume that it is very difficult to find a witness operator, Z, of the physical property for each and every state. In this situation, we can witness the physical characteristic P through a proxy witness operator, and we just need additional information about a functional acting on state and physical observable to build a proxy witness operator. That's why we call this kind of witness operator a “proxy" witness operator. For the sake of simplicity, the extra information we considered here is that for every given state ρ∈𝒞, the mean value of an observable A is bounded as Tr[ρA] < A_min, 𝒞, where A_min, 𝒞 is provided by Eq. (<ref>). Based on this observation, we can define a proxy witness operator, Z^ PW, to detect the presence of the physical property P as follows: An operator, Z^ PW, acts as a proxy witness operator for the physical property P with Z^PW A - A_min,𝒞. If any state ξ shows [ξ Z^PW] < 0, then the state ξ will have the physical property P. Here A_min, 𝒞 represent the mean value of observable A, minimized over all states in the set 𝒞. We can write [ρ Z^PW] as follows: [ρ Z^PW] = [ρA ] - [ρ A_min, 𝒞], = [ρA] - A_min, 𝒞[ρ], = [ρA] - A_min, 𝒞. Then any state, ρ∈𝒞, will definitely have [ρA ]≥[ρ A_min, 𝒞]. Therefore, [ρ Z^PW]≥ 0 for all ρ∈𝒞. And any state, ξ∈𝒞, will show [ξ Z^PW] < 0. Thus, Z^PW can act as a proxy witness operator for the physical property P. We can also detect the quantum property P through proxy witnessing using entropy. Any quantum state ξ with the entropy S(ξ)≥ S' will have the property P if the state ξ satisfies Tr[ξA] < A_min, 𝒞, S'. The operator Z^PW_S'A - A_min, 𝒞, S' will act as a proxy witness operator in this scenario. The proof of above criterion is similar to the proof of Lemma <ref>. If any bipartite state ξ satisfies both [ξA] = A' and Z^PW_A' > 0, then system certainly have the quantum property P; where Z^PW_A' S(ξ) - S_max, 𝒞, A'. 
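A minimal numerical illustration of the proxy witness operator Z^PW defined above, for the example of quantum coherence in a fixed basis, is sketched below: the free set 𝒞 is taken to be the incoherent (diagonal) single-qubit states, the observable is A = σ_1, for which A_min,𝒞 = 0, and a state with Tr[ξ Z^PW] < 0 is flagged as possessing the property (coherence). All names and numerical values here are illustrative.

import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def proxy_witness_value(state, observable, a_min_free):
    # Tr[rho Z^PW] with Z^PW = A - A_min,C * identity; a negative value
    # certifies that `state` lies outside the free set C.
    z_pw = observable - a_min_free * np.eye(observable.shape[0])
    return np.real(np.trace(state @ z_pw))

# Free set C: incoherent (diagonal) qubit states; for A = sigma_x the mean
# value over every state in C is zero, so A_min,C = 0.
a_min_incoherent = 0.0

minus = np.array([1, -1], dtype=complex) / np.sqrt(2)    # the |-> state
xi = np.outer(minus, minus.conj())
rho_diag = np.diag([0.3, 0.7]).astype(complex)           # an incoherent state

print(proxy_witness_value(xi, sigma_x, a_min_incoherent))        # -1.0 -> coherent
print(proxy_witness_value(rho_diag, sigma_x, a_min_incoherent))  #  0.0 -> no detection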
The physical property P of any system can also be witnessed using the criteria Corollary <ref> and <ref>. § PROXY WITNESSING OF K-UNEXTENDIBILITY In this section, we aim at deriving proxy witnesses to detect the k-unextendibility using semi-definite programming techniques. We also apply these proxy witnesses to find k-unextendibility for states corresponding to the Heisenberg XXX model, and the quantum J1-J2 model. We define two kinds of mean values corresponding to the observable A for a bipartite quantum system defined on ℋ_A⊗ℋ_B: A^min_ EXT_k min_ρ∈ EXT_k(A;B)[ρA], A^min_ EXT_k, S' min_ρ∈ EXT_k(A;B), S(ρ) ≥ S' [ρA] , where EXT_k(A;B) denotes a set consisting of all states, ρ∈𝒟(ℋ_AB), that are k-extendible on B. Any bipartite quantum system with density operator ρ and observable A will be k-unextendible on B if the system follows [ρA] < A^min_ EXT_k for any given k≥ 2. From Lemma <ref>, Z^PW_ EXT_kA - A^min_ EXT_k can serve as a proxy witness operator for k-unextendibility. Assume that EXT_k+k'(A;B) denotes a set consisting of all states, that are (k+k')-extendible on B where k≥ 2 and k'∈N. For any observable A, we have A^min_ EXT_k≤ A^min_ EXT_k,S'≤ A^min_ EXT_k+k'≤ A^min_ EXT_k+k',S', for any k≥ 2 and k'∈N (see Fig. <ref>). It is straightforward from the definition of A^min_ EXT_k,S' and A^min_ EXT_k, that A^min_ EXT_k≤ A^min_ EXT_k,S'. Analogously, it will hold true for EXT_k+k'(A;B), i.e., A^min_ EXT_k+k'≤ A^min_ EXT_k+k',S'. Since there exist many states which are k-extendible but (k+k')-unextendible and the reverse is not true. Hence, EXT_k(A;B) forms a bigger set than EXT_k+k'(A;B). Then, we can conclude that A^min_ EXT_k≤ A^min_ EXT_k,S'≤ A^min_ EXT_k+k' for all { ( k ≥ 2),k' }⊆N. This concludes the proof. §.§ Dual problem of k-unextendibility The problem, whether any bipartite system is k-extendible or not, can be cast as an SDP (semi-definite program) primal problem, a convex optimization problem. We can then identify the corresponding dual optimization problem, and detect k-unextendible states by numerically solving dual problem. As an example, here reframing the S_max, 𝒞, A' as SDP primal problem and using Corollary <ref>, we try to find a new way to detect k-unextendible states of a bipartite system. For this, let us consider a set consisting of all k-extendible bipartite states over B subsystem in 𝒟(ℋ_AB) and it is denoted by EXT_k(A;B). In Eq. (<ref>), the variable ρ can only be positive semi-definite matrices and the objective function along with constraints are linear, so we can consider it as a SDP primal problem. To find the dual problem of the corresponding primal problem, we first have to construct a Lagrangian with the function that have to be maximized and the constraints present in the original problem. The constraints will be added to the Lagrangian using Lagrange multipliers. The variables of the original problem will be called primal variables, whereas the Lagrange multipliers are referred to as dual variables. Since EXT_k(A;B) is a set of all k-extendible states over B subsystem in 𝒟(ℋ_AB), then we can assume that the state ρ∈EXT_k(A;B) is extended to the state ρ_AB^k which satisfies Eqs. (<ref>) and (<ref>). In this scenario, the Lagrangian is given by L(ρ,λ,μ,X_0, λ,X_0,ν_π) = S(ρ) + λ(ρ-1) + μ ([ρ A] - A') + [ρ X_0] + λ (ρ_AB^k-1) + [ρ_AB^kX_0] + ∑_π∈ S_kν_π[ _P[W^πρ_AB^k W^π] - ρ]. Here, ρ is the primal variable whereas λ,μ,λ, {ν_π}_π act as dual variables along with positive semi-definite matrices X_0, X_0 and λ,μ,λ, {ν_π}_π∈R. 
W^π is the unitary permutation channel associated with π∈ S_k, where S_k is the set of all permutations of the ordered set {1,…,k}. _P is the partial trace over any (k-1) subsystems of ρ_AB^k except A. The dual problem of any original primal problem corresponds to the minimization of the dual objective with respect to all the dual variables, where the dual objective is the Lagrangian maximized over all primal variables. It was shown that any feasible dual point gives an upper bound on the primal problem, and it is known as “weak duality". In this scenario, the dual objective function is given by l(λ,μ,X_0,λ,X_0,ν_π) max_ρL(ρ,λ,μ,X_0,λ,X_0,ν_π). We can see that the following relation holds for every primal feasible ρ and dual feasibles λ,μ,X_0,λ,X_0,ν_π: S(ρ) ≤L(ρ,λ,μ,X_0,λ,X_0,ν_π) ≤l(λ,μ,X_0,λ,X_0,ν_π). Therefore, we can say that S_max,𝒞,A'≤minl(λ,μ,X_0,λ,X_0,ν_π), where minimization is performed over dual feasible region, i.e., λ,μ,λ∈ R, X_0, X_0≥ 0, ν_i≥ 0. From Corollary <ref>, any state ξ with [ξA] = A' satisfying S(ξ) > minl(λ,μ,X_0,λ,X_0,ν_π), will definitely be k-unextendible on B subsystem. Thus, we can find k-unextendible states for lower-dimensional composite systems via semi-definite programming. §.§ Applications of proxy witness of k-unextendibility Here, we will establish detection criteria for k-unextendible bipartite states using (U⊗U)-invariant and (U⊗U^*)-invariant Hermitian operators. We label these detection criteria by “proxy witnesses", since the aforementioned Hermitian operators indirectly aid in the development of k-unextendibility witness operators. Moreover, we will use these detection criteria to find k-unextendible states of multipartite systems and will provide a few examples of witnessing k-unextendibility of multipartite states through proxy witnessing in this subsection. Given a bipartite system with Hamiltonian H^W_AB written in Eq. (<ref>), for (H_A)=(H_B)=d, any state ρ_AB, will be k-unextendible for any k ∈N if the state satisfies [ρ_AB H^W_AB] < (α_1^W - α_2^Wd-1/k), where α_1^W, α_2^W∈R are parameters of the Hamiltonian, H^W_AB. Let ρ_AB be any k-extendible state belonging to the set EXT_k(A;B). Assume that the (U⊗U)-invariant Hamiltonian, H^W_AB, acts as an observable here. We now evaluate A^min_ EXT_k for the observable H^W_AB and call it H^W, min_ EXT_k . Mathematically, H^W, min_ EXT_k can be written as H^W, min_ EXT_k min_ρ_AB∈EXT_k(A;B)[ρ_AB H^W_AB] , = min_ρ_AB∈EXT_k(A;B)[ρ_AB ( α_1^W1_AB + α_2^W F_AB)], = min_ρ_AB∈EXT_k(A;B)[ α_1^W + α_2^W[ρ_AB F_AB]], = min_ρ_AB∈EXT_k(A;B)[ α_1^W + α_2^W (1- 2p) ], = α_1^W + α_2^W [1 - 2 1/2(d-1/k +1)], = α_1^W - α_2^Wd-1/k, where Eqs. (<ref>), (<ref>) are obtained from Appendix <ref> and Remark <ref> respectively. This completes the proof. Given a bipartite system with Hamiltonian H^I_AB, where (H_A)=(H_B)=d, parameterized by α_1^I, α_2^I∈R as written in Eq. (<ref>), any bipartite state ρ_AB will be a k-unextendible state if [ρ_AB H^I_AB] < α_1^I holds for any k ∈N. Here, we will evaluate A^min_ EXT_k considering (U⊗U^*)-invariant Hamiltonian, H^I_AB, as an observable. We will term it by H^I, min_EXT_k(A;B) and it's mathematical form will be H^I, min_ρ_AB∈EXT_k(A;B) min_ρ_AB∈EXT_k(A;B)[ρ_AB H^I_AB] , = min_ρ_AB∈EXT_k(A;B)[ρ_AB (α_1^I1_AB + α_2^IΓ_AB)], = min_ρ_AB∈EXT_k(A;B)[ α_1^I + α_2^I[ρ_ABΓ_AB]], = min_ρ_AB∈EXT_k(A;B)[ α_1^I + α_2^I (td) ], = α_1^I, for any k ∈N. We have applied Appendix <ref> and Remark <ref> to get the Eqs. (<ref>) and (<ref>) respectively. 
This implies that any state ρ_AB will be k-unextendible if the inequality (<ref>) holds for any k ∈N. We will now apply a bipartite (U⊗U)-invariant Hermitian operator, described in Eq. (<ref>), to a multipartite scenario for detecting k-unextendibility in a multipartite system through proxy witnessing. Let us consider a multipartite system composed of 2N subsystems A_1, B_1, ⋯ , A_N, B_N and ℋ_A_1⊗H_B_1⊗⋯H_A_N⊗H_B_N denotes the Hilbert space of the composite system. A (U⊗U)-invariant bipartite Hermitian operator can be generalized into a (U ⊗⋯⊗ U)-invariant 2N-partite Hermitian operator and it has the form H^W_gen = α_1 1_A_1B_1......A_NB_N + ∑ _m=1^Nα_2^m ( F_A_m B_m⊗1_AB/A_m B_m + F_A_m+1B_m⊗1_AB/A_m+1B_m), where α_1, α_2^m ∈R for all m = { 1,2,..., N } and N ∈N. Here, 1_A_1B_1......A_NB_N and F_A_mB_m denote the identity operator on ℋ_A_1⊗H_B_1⊗⋯H_A_N⊗H_B_N and the swap operator between subsystems A_m and B_m respectively. In our work, a multipartite state is said to be k-extendible if it is k-extendible with respect to each subsystem of the state. Otherwise, it is referred to as a k-unextendible multipartite state. Now we will show that H^W_gen can be used to detect k-unextendible states for any k ∈N in a multipartite system, e.g., the Heisenberg XXX model, the quantum J_1-J_2 model. Let us suppose that all the k-extendible states of a 2N-partite system on the Hilbert space ℋ_A_1⊗H_B_1⊗⋯H_A_N⊗H_B_N form the set EXT_k(A_1;B_1;...;B_N). We will denote any k-extendible state of 2N-partite system on the Hilbert space ℋ_A_1⊗H_B_1⊗⋯H_A_N⊗H_B_N by η. The ground state of the antiferromagnetic Heisenberg XXX model is a k-unextendible state in the thermodynamic limit for any k > 2. Substituting the form of the bipartite swap operator (from Eq. (<ref>)) in Eq. (<ref>), we have H^W_gen = (α_1+ ∑_m=1^Nα_2^m)1_A_1B_1...B_N + ∑_m=1^Nα_2^m/2∑_i=1^3 (σ^A_m_i ⊗σ^B_m_i + σ^A^m+1_i ⊗σ^B^m_i), where the multipartite system consists of 2N parties and {σ_i }_i with i ∈{ 1,2,3 } are described in Eq. (<ref>). And H^W_gen will turn into the Hamiltonian of antiferromagnetic Heisenberg XXX model with lattice number 2N and coupling constant J, if α_1 = - ∑_m=1^N α_2^m, α_2^m/2 = J, holds for all m ∈{ 1,2,..,N }. It implies that α_1 = -2NJ and α_2^m = 2J for all m. To prove the theorem, we will consider here the Hamiltonian of the Heisenberg XXX model as an observable. We will calculate A^min_ EXT_k for the observable H^W_gen (written in Eq. (<ref>)) over the set EXT_k(A_1;B_1;...;B_N) and will call it by E^min_XXX. We substitute the value of α_1 and α_2^m from Eq. (<ref>) in the calculation of E^min_XXX, and then, E^min_XXX will be given by E^min_XXX = min_η∈EXT_k(A_1;B_1;...;B_N) -2NJ + ∑ _m=1^N 2J (1-2p_m) + ∑ _m=1^N 2J (1-2p'_m) , using Appendix <ref>. Here 2N, J are the site number and coupling constant of the antiferromagnetic Heisenberg XXX model, respectively, while each p_m, p'_m for m ∈{ 1,2,⋯ N } represents a parameter of bipartite Werner state on ℂ^2. Since bipartite Werner state, parameterized by p, is a k-extendible state on the Hilbert space (ℋ_A⊗H_B) with (H_A)=(H_B)=d, for 0 ≤ p ≤1/2 (d-1/k + 1), we have E^min_XXX = -2NJ (1 + 2/k). Then we can say from Eqs. (<ref>) and (<ref>) that the ground state of the quantum antiferromagnetic Heisenberg XXX model with site number 2N and coupling constant J will be a k-unextendible state for any k ∈N in the thermodynamic limit, if -2×1.77NJ < - 2NJ( 1 + 2/k), k > 2.597, which is always possible for any k > 2. 
Hence, the ground state of the antiferromagnetic XXX model is k-unextendible for any k > 2 in the thermodynamic limit. Similarly, we can find k-unextendible states of a multipartite system related to the 1D spin-1/2 J_1-J_2 model. Any 2N-qubit state associated with the Hamiltonian of the antiferromagnetic J_1-J_2 model with site number 2N and coupling constants J_1, J_2, that has mean energy lower than E^min_J1-J2 = -2N(J_1+J_2) (1 + 2/k) for any k ∈N will be k-unextendible. Here, we will calculate A^min_ EXT_k over the set EXT_k(A_1;B_1;...;B_N), considering the Hamiltonian of J_1-J_2 model with site number 2N and coupling constants J_1, J_2 as an observable and will call it as E^min_J1-J2. Then, in the similar fashion as Eq. (<ref>), we have calculated E^min_J1-J2 and it will be E^min_J1-J2 = min_η∈EXT_k(A_1;B_1;...;B_N) [-2N(J_1+J_2) + 2J_1 ∑_m=1^N (1-2p_m) + 2J_1 ∑_m=1^N (1-2p'_m) + 2J_2 ∑_l=1^N (1-2p_l) + 2J_2 ∑_l=1^N (1-2p'_l)], = -2N(J_1+J_2) (1 + 2/k), since a bipartite Werner state on the Hilbert space (ℋ_A⊗H_B) with (H_A)=(H_B)=d, parameterized by p, is a k-extendible state for any k ∈N when p is in the range 0 ≤ p ≤1/2 (d-1/k + 1) and d =2 in this case. This concludes the proof. Thus, we can detect k-unextendibility of any state of a multipartite system. § DETECTION OF QUANTUM COHERENCE BY PROXY WITNESS Quantum coherence results from superposition of quantum states and is a useful resource in quantum computing, communication, metrology, etc. Suppose that we want to know about the quantum coherence of any state directly, then we need to do quantum state tomography of the state. It is known that quantum state tomography is very hard to do in real experiment. In that situation, we can find the existence of the quantum coherence of any state indirectly through proxy witnessing. In this section, we will find few detection criteria for quantum coherence through proxy witnessing. Moreover, we exemplify that proxy witness of quantum coherence works very well for quantum Heisenberg models. Any quantum state on Hilbert space with dimension d, say χ, will be called an incoherent state with respect to a reference basis {|i⟩}_i if it is diagonal with respect to the corresponding basis, i.e., χ = ∑_i=0^d p_i |i⟩⟨i|, with p_i ≥ 0 for all i and ∑_i p_i =1. Coherent states are those which are not incoherent with respect to the same reference basis. Now let us consider that ℐ denotes a set of all incoherent states on Hilbert space with dimension d with respect to a reference basis. Let us suppose that A acts as an observable and the minimum mean value of the observable A over the set ℐ is referred to as A^min_inc. Then, the statement of Lemma <ref> regarding quantum coherence can be summarized as follows. Any state χ on the Hilbert space with dimension d will be a coherent state with respect to the reference basis if the state satisfies [χA] < A^min_inc. Let us suppose that we have a state ρ on Hilbert space with dimension d. We want to find whether ρ is a coherent state or not. For that, we will evaluate the minimum mean value of observable A over set of all incoherent states, ℐ and it can be determined as A^min_I min_ρ_ℐ∈I[ρ_ℐ A], = min_ρ_ℐ∈I[A (∑_i p_i |i⟩⟨i|)], = min_ρ_ℐ∈I∑_i p_i A_ii, where { p_i }_i are nothing but the probabilities of getting the state |i⟩⟨i| with ∑_i p_i =1. Hence, any state ρ will be a coherent state if the mean value of observable A for the corresponding state is below than A^min_I in that particular basis. 
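Before turning to specific models, we note that the criterion above is straightforward to evaluate numerically: in the reference basis, A^min_inc reduces to the smallest diagonal entry of A, since the minimum of ∑_i p_i A_ii over probability distributions {p_i} is attained on a single basis state. A minimal sketch (NumPy) follows; as an illustration we use the state |+⟩⟨+| with A = -σ_x, which is also discussed in the example below.

```python
import numpy as np

def coherence_proxy_witness(chi, A):
    # A is Hermitian, written in the reference basis; min_i A_ii equals A^min_inc,
    # the minimum of Tr[rho A] over all incoherent (diagonal) states rho.
    A_min_inc = np.min(np.real(np.diag(A)))
    return float(np.real(np.trace(chi @ A))) < A_min_inc  # True certifies coherence

sx = np.array([[0, 1], [1, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
chi = np.outer(plus, plus.conj())            # |+><+|
print(coherence_proxy_witness(chi, -sx))     # True: Tr[chi(-sx)] = -1 < 0 = A^min_inc
```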
Now, we will detect coherent states in the single-qubit system, as well as multi-qubit system using PT- and APT-symmetric Hamiltonians, Hamiltonians of the isotropic and anisotropic XY models, Ising models with and without an external field through proxy witnessing. We can determine coherent single-qubit states with respect to the σ_z-basis through proxy witnessing. To do so, we choose a traceless Hermitian operator, Aσ⃗.n̂, as an observable. σ⃗{σ_x, σ_y, σ_z } is a vector of three spin-1/2 Pauli matrices, and n̂ = {sinθcosϕ, sinθsinϕ, cosθ} is a real three-dimensional unit vector where θ∈ 0, π and ϕ∈ 0 , 2 π represent zenith and azimuthal angles of a spherical coordinate system, respectively. Let us assume that all the incoherent single-qubit states of a system form a set that is denoted by I_sq. Let any single-qubit state be denoted by τ. Any single-qubit state τ will be called a coherent state with respect to the standard σ_z-basis if we have [τA] < - cosθ where θ denotes a parameter of the observable Aσ⃗.n̂. We will calculate A^min_inc for the observable σ⃗.n̂ over the set I_sq and it will be A^min_inc min_τ∈I_sq[τA], = min_τ∈I_sq[(∑_i=0^1 p_i |i⟩⟨i|) A], = min_τ∈I_sq[(p_0 - p_1) cosθ], = - cosθ. This concludes the proof. It is a sufficient criterion, but not necessary. Using the criterion described above, we can determine if any single-qubit state is coherent or not with respect to the standard σ_z-basis, for example, the state |+⟩⟨+|. Here, |+⟩ is an eigenstate of σ_x with eigenvalue 1. The mean value of the observable Aσ⃗.n̂ for the state |+⟩⟨+| is [τA] = cosϕsinθ. Therefore, the state |+⟩⟨+| will be coherent under standard σ_z-basis if (cosϕtanθ) < -1 holds. Now, if we consider A = - σ_x, the criterion given in Example <ref> satisfies. Hence, the state |+⟩⟨+| is a coherent state with respect to the {|0⟩, |1⟩} basis. Any general single-qubit state τ can be represented as τ = 1/2 (1 + v⃗·σ⃗), where v⃗{ v_x, v_y, v_z } is a real vector with v^2_x + v^2_y + v^2_z ≤ 1 and σ⃗{σ_x, σ_y, σ_z } is is a vector of three spin-1/2 Pauli matrices. We can witness the quantum coherence for a single qubit state via PT- and APT-symmetric Hamiltonians as well. Any single-qubit state τ parameterized by v_x, v_y, and v_z, will be a coherent state with respect to the standard σ_z-basis if one of the following inequalities v_x s < -i γ (1 + v_z) , v_x s < i γ (1 + v_z) , hold where the s and γ represent the parameters of PT- and APT-symmetric Hamiltonians. Considering PT-symmetric Hamiltonian as an observable, the mean energy of any single-qubit state, τ, will be [τ H_PT] = iγ v_z + v_x s. where the state τ is parameterized by the vector v⃗ = { v_x, v_y, v_z }. Now, we will evaluate A^min_inc considering PT-symmetric Hamiltonian as an observable over all incoherent single-qubit states and we define it as E^min_PT. Therefore, it is mathematically given by E^min_PT min_τ∈I_sq[τ H_PT ], = min_τ∈I_sq [(p_0 - p_1)iγ], = - i γ, where p_0,p_1≥ 0 and p_0+p_1 = 1. It makes clear that when any single-qubit state satisfies (<ref>), it will be a coherent state with respect to the standard σ_z basis. In the similar way, we can prove that a single-qubit state will be a coherent state if it satisfies (<ref>) considering the APT-symmetric Hamiltonian as an observable. Let the set of all incoherent states of the N-partite system on the (ℂ^2)^⊗ N Hilbert space be defined as I_multi, with N ∈N. 
Any N-qubit state with a negative mean energy will be a coherent state with respect to the standard σ_z-basis when the Hamiltonian of the isotropic XY model is considered to calculate the mean energy. Since all the diagonal elements of the Hamiltonian of the isotropic XY model are zero for any lattice site number N in the conventional σ_z-basis, the A^min_inc over the set I_multi considering the Hamiltonian of the isotropic XY model as an observable will be zero. It completes the proof. For the same reason, the criterion written in Example <ref> will remain same if we consider an anisotropic XY model or an Ising model with no external field as an observable to calculate the mean energy of any N-qubit system. In the thermodynamic limit, the ground state energies of an antiferromagnetic isotropic XY model and an antiferromagnetic Ising model without external field are (- J/π N) and (- J/2 N) respectively, with coupling constant J and lattice site number N. Hence, the ground states of an antiferromagnetic isotropic XY model and an antiferromagnetic Ising model without external field are coherent state under the standard σ_z-basis in the thermodynamic limit. Furthermore, we can also use the criterion written below to detect coherent states of any N-qubit system. Any N-qubit state will be a coherent state with respect to the standard σ_z-basis if the mean energy is below (-Nh), when the Hamiltonian of Ising model in presence of external field is considered to be the Hamiltonian of the system. Here, the external field of the Hamiltonian of the Ising model is parameterized by h and N is the lattice site number. A^min_inc over the set I_multi is (-Nh) when we choose the Hamiltonian of the Ising model with an external field as an observable. Then, from Corollary <ref>, we can conclude the proof. § WITNESSING ACTIVATION Any state of a system is said to be a passive state if the mean energy cannot be decreased by acting on any unitary transformations upon the system when one have a single copy of the state, e.g., all the thermal states. This implies that work extraction, ergotropy for passive states is always zero. All the passive states of a system on Hilbert space with dimension d form a convex set in a d-dimensional space. Any state that is not passive is called an active state. In this section, our main goal is to detect active states of any system through proxy witnessing. Let us suppose that the set of all passive states on the Hilbert space with dimension d is denoted by P(S). Any state is an active state if the mean energy of the state for the Hamiltonian H_A is greater than max_P(S)∑_m=0^d-1 q_m^↓ E_m, where { q_m^↓}_m and E_m represent the populations of the state with respect to energy eigenbasis and the eigenvalues of Hamiltonian H_A, respectively. The mean energy of any passive state, say ρ_A ∈P(S), for the Hamiltonian H_A will be given by [ρ_A H_A] = ∑_i,m=0^d-1 q_i^↓ E_m |⟨ i | m ⟩ |^2, = ∑_i,m=0^d-1 q_i^↓ E_m δ_i,m, = ∑_m=0^d-1 q_m^↓ E_m , where we substitute the form of the Hamiltonian H_A and passive state ρ_A from Eqs. (<ref>) and (<ref>) into [ρ_A H_A]. Then, from Observation <ref>, we can certify the proof. Using the aforementioned criterion, we will now find a few active states in various contexts. The maximum mean energy over all passive qudit states on the Hilbert space with dimension d corresponding to the Hamiltonian H_A will occur when q_i^↓ =1/d for all i ∈{ 0,1,..., d-1 } under the energy eigenbasis. 
Hence, given the Hamiltonian H_A, any qudit state on the Hilbert space of dimension d with a mean energy larger than 1/d∑_i=0^d-1 E_d will be an active state. Let us suppose a two-dimensional system associated with the Hamiltonian H, which has two eigenenergies, E_0 and E_1, with E_1 > E_0. The excited energy eigenstate of any two-dimensional system associated with a Hamiltonian is an active state. According to Eq. (<ref>), the minimum and maximum average energy over all passive single-qubit states for the Hamiltonian H will be E_0 and 1/2(E_0 + E_1), respectively. Then, from Example <ref>, we can conclude that the excited energy eigenstate of any single-qubit system for the Hamiltonian H will be an active state. § PROXY WITNESSING OF STEERABILITY AND ENTANGLEMENT Quantum steering <cit.> is a quantum correlation in quantum information science that lies in between quantum entanglement and Bell non-locality <cit.>. We will now discuss about the steerability of a bipartite state. Let us suppose that two parties (say Alice and Bob) share a state between them, and Alice performs a local operation on Alice's side from the setting “x" and obtains the outcome “a". Thus, Alice produces a conditional state ρ(a|x) on Bob's part. If the conditional state on Bob ρ(a|x) can be decomposed as ρ(a|x) = ∫ dλ p(λ) p(a|x,λ) σ_λ, then it can be surely assured that the state shared between Alice and Bob is an unsteerable state. Otherwise, the bipartite state is said to be a steerable state. Here, σ_λ represents a hidden state at Bob's side prepared with probability p(λ) and λ represent a hidden variable correlating Alice and Bob, while p(a|x,λ) is the probability of getting outcome a for the local measurement x by Alice. It is called the local hidden state model (LHS). Steerability always certifies the entanglement of any quantum state, as all separable states admit the LHS model. Any bipartite quantum state is said to be Bell non-local if the correlation among two spatially separated systems cannot be mimicked by the local hidden variable model (LHV) or violates the Bell inequality <cit.>. Moreover, Bell non-locality is useful in device-independent certification of quantum states and measurements <cit.>. In our work, a multipartite state is defined as “unsteerable" if all its bipartite reduced states are unsteerable. If any bipartite reduced state of a multipartite state is steerable, then the multipartite state is termed steerable. Similarly, a multipartite state is considered Bell local if all its bipartite reduced states are Bell local; otherwise, the state is called Bell non-local. We will detect steerable and Bell non-local states in multipartite systems, specifically within the Heisenberg XXX model and the J_1-J_2 model, using proxy witnessing. The ground state of the antiferromagnetic Heisenberg XXX model is steerable and Bell non-local in the thermodynamic limit. Let us suppose that S_st denotes a set of all unsteerable states of a 2N-partite system. ℋ_A_1⊗H_B_1⊗⋯H_A_N⊗H_B_N denotes the Hilbert space of the 2N-partite system and any state belonging to set S_st is denoted by ζ. We consider the Hamiltonian of the XXX model with lattice site number 2N and coupling constant J as an observable to evaluate A_min, 𝒞 and here 𝒞 means et of all unsteerable states of a 2N-partite system, S_st. The minimum mean energy over the set S_st for the corresponding observable is denoted as E^min_S_st. 
After the simplification, mathematically, E^min_S_st can be expressed as E^min_S_st = min_ζ∈ S_st - 2NJ + ∑ _m=1^N 2J (1-2p_m) + ∑ _m=1^N 2J (1-2p'_m) , = - 2NJ + ∑ _i=1^N 4J (1-25/8) , = - 3NJ, where each and every p_m and p'_m for all m ∈{ 1,2, …, N } represent a parameter of bipartite Werner states on ℂ^2. And Eq. (<ref>) is obtained using Appendix <ref>. In Eq. (<ref>), we have used the fact that the range of unsteerability of the bipartite Werner state, parameterized by p, on the Hilbert space with dimension d is (d-1)/2d≤ p ≤ (1- d+1/2d^2). Since the ground state energy of antiferromagnetic XXX model with lattice number N and coupling constant J in the thermodynamic limit is E^XXX_gr≈ - 1.77 NJ, then, it is straightforward that the ground state of the antiferromagnetic XXX model with lattice number 2N and coupling constant J will be a steerable state in the thermodynamic limit if 2×(-1.77NJ) < -3NJ, which is always achievable for any N ∈N and J > 0. Thus, the ground state of the antiferromagnetic XXX model is steerable in the thermodynamic limit. In a similar way, we can prove the existence of the Bell non-local nature in the ground state of the antiferromagnetic XXX model using Ref. <cit.>. In addition to the quantum Heisenberg XXX model, the steerable and Bell non-local multipartite states in the quantum J_1-J_2 model can be found using the same procedure applied to the quantum Heisenberg XXX model. Here, we will look for proxy witnessing of entanglement of multipartite quantum states of quantum many-body systems. Any multipartite state, let's say ρ_A_1,A_2,…,A_N, is called fully separable if it can be decomposed as ρ_A_1,A_2,…,A_N = ∑_k q_k ρ_A_1^k ⊗ρ_A_2^k ⊗⋯ρ_A_N^k with 0 ≤ q_k ≤ 1 and ∑_k q_k = 1, where ρ_A_i^k represents density matrices of subsystem A_i. In our work, we will say that any multipartite state is entangled when the state is not fully separable. In the thermodynamic limit, the ground state of the antiferromagnetic XXX model is an entangled state. Let us consider a tripartite system composed of subsystems A_1, A_2, and A_3 and ℋ_A_1⊗H_A_2⊗H_A_3 denotes the Hilbert space of the composite system. The Hamiltonian (H_XXX^tri) of the antiferromagnetic tripartite XXX model with coupling constant J and periodic boundary conditions, i.e., σ⃗^N+1 = σ⃗^N where N=3 can be written in terms of the swap operator as follows: H_XXX^tri = J ∑_l=1^3∑_i=1^3σ^l_i⊗σ^l+1_i , = -3J + 2J ∑_i=1^3 F_A_i A_i+1, where l denotes l-th site of the lattice and {σ_i } for i ∈{ 1,2,3 }_i denotes the Pauli matrices described in Eq. (<ref>). F_A_iA_i+1 corresponds to the swap operator between the subsystems A_i and A_i+1 of the tripartite system. Let us suppose that 𝒮 represents a set consisting of all the fully separable states of a tripartite system on the Hilbert space ℋ_A_1⊗H_A_2⊗H_A_3 with (H_A_1) = (H_A_2) = (H_A_3) = 2. The minimum mean energy over the set 𝒮 considering the Hamiltonian of antiferromagnetic tripartite XXX model with coupling constant J as a Hamiltonian is denoted as E^min_𝒮 and it will have E^min_𝒮 = min_ε∈𝒮[ ε H_XXX^tri], = min_ε∈𝒮[ ( ∑_k q_k ρ_A_1^k ⊗ρ_A_2^k ⊗ρ_A_3^k ) H_XXX^tri] , = min_ε∈𝒮[ ∑_k q_k ( -3J + 2J∑_i=1^3(1-2p_i) )_k ], = -3J, where ∑_k q_k =1. We have obtained Eq. (<ref>) using Eq. (<ref>). In Eq. (<ref>), we have used the fact that the separability range of the bipartite Werner state parameterized by p on the Hilbert space with dimension d is (d-1)/2d≤ p ≤1/2 <cit.> and each p_i for i ∈{ 1,2,3} represents a parameter of the bipartite Werner state on ℂ^2. 
Similarly, we can obtain that the minimum mean energy over all the fully separable states of N-partite system on the Hilbert space (ℂ^2)^⊗ N with the Hamiltonian of antiferromagnetic N-partite XXX model with coupling constant J will be (-NJ). Since the ground state energy of the N-partite antiferromagnetic XXX model with coupling constant J and site number N in thermodynamic limit is -1.77NJ, therefore from Lemma <ref>, we can conclude that the ground state of the antiferromagnetic XXX model is an entangled state in the thermodynamic limit. Furthermore, we can detect the entangled states of quantum J_1-J_2 model by the same procedure as well. § CONCLUSION Detection of quantum properties in macroscopic systems presents a formidable challenge. Proxy witnessing offers a promising avenue to circumvent this difficulty. By measuring certain functionals comprised of observables and states, such as mean energy, proxy witnessing allows for the practical detection of quantum properties indirectly. In our work, we have introduced and examined criteria for the detection of various quantum properties through proxy witnessing. Specifically, we have applied these criteria to detect k-unextendible, coherent, active, steerable, and entangled states. Furthermore, we have extended our analysis to realistic quantum many-body physics models, including the quantum Heisenberg model, quantum model, as well as PT- and APT-symmetric Hamiltonians. Our work highlights the potential of proxy witnessing as a powerful tool for exploring and harnessing quantum properties in macroscopic regimes, with important implications for both fundamental physics and practical applications. We acknowledge partial support from the Department of Science and Technology, Government of India through the QuEST grant (grant number DST/ICPS/QUST/Theme-3/2019/120). S.D. acknowledges individual fellowship at Université libre de Bruxelles; this project received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 801505. S.D. acknowledges support from the Science and Engineering Research Board, Department of Science and Technology (SERB-DST), Government of India under Grant No. SRG/2023/000217. S.D. also thanks IIIT Hyderabad for the Faculty Seed Grant. § APPENDIX §.§ Decomposition of bipartite maximally entangled operator on C^2 ⊗C^2 We can decompose an unnormalized bipartite maximally entangled operator (Γ_AB) acting on two-qubit states as Γ_AB = ∑_m,n =0^3 t_mn (σ_m ⊗σ_n), = 1/2 (1⊗1 + σ_1 ⊗σ_1 - σ_2 ⊗σ_2 + σ_3 ⊗σ_3), where σ_0 1. {σ_i }_i for i ∈{ 1, 2, 3 } are the Pauli matrices described by Eq. (<ref>) and only diagonal elements of correlation matrix { t_mn}_mn of Γ_AB are non-zero. §.§ Mean value of bipartite swap operator Let us consider that ρ_AB is a bipartite state on the Hilbert space D(H_AB) with dimension (H_A)=(H_B)=d. We can evaluate the mean value of any bipartite swap operator expressed in Eq. (<ref>), acting on any bipartite state ρ_AB and it can have the following form [ρ_AB F_AB] = [ Ω^† (ρ_AB) F_AB], = [ ρ_AB^W(p,d) F_AB], = 2(1-p)/d(d+1)Π^+_AB (Π^+_AB -Π^-_AB) + 2 p/d(d-1)Π^-_AB (Π^+_AB -Π^-_AB), = (1-p) 2/d(d+1) ( d(d+1)/2 - 0) + p 2/d(d-1) ( 0 - d(d+1)/2) }, = (1-2p). Here, ρ_AB^W(p,d) represents the Werner state on Hilbert space with dimension d parameterized by p. In Eq. (<ref>), we have used Corollary <ref> and the fact that F_AB is (U⊗ U)-invariant Hermitian operator. We have used Remark <ref> to obtain Eq. (<ref>). 
We have substituted the form of ρ_AB^W(p,d) from Eq. (<ref>) in Eq. (<ref>). §.§ Mean of unnormalized bipartite maximally entangled operator The mean value of an unnormalized maximally entangled operator (Γ_AB) for any bipartite state (ρ_AB), on the Hilbert space D(H_AB) with dimension (H_A)=(H_B)=d, can be calculated as follows [ρ_ABΓ_AB] = [ Ω^† (ρ_AB) Γ_AB], = [ ρ_AB^I(p,d)Γ_AB], = [t Φ_AB^d + (1-t) 1_AB - Φ_AB^d/d^2 -1 (d Φ_AB^d)], = td [(Φ_AB^d)^2]+ (1-t)d/(d^2 -1) ([Φ_AB^d] - [(Φ_AB^d)^2], = td , where t denotes a parameter of any bipartite isotropic state, ρ_AB^I(p,d), on the Hilbert space D(H_AB) with dimension (H_A)=(H_B)=d. Eqs. (<ref>) is achieved from Corollary <ref> and considering the fact that Γ_AB is a (U ⊗ U^*)-invariant Hermitian operator. We have obtained Eq. (<ref>) using Remark <ref>. We have put the form of ρ_AB^I(p,d) in Eq. (<ref>) from Eq. (<ref>). Note that (Φ_AB^d)^2 = Φ_AB^d, and [Φ_AB^d] =1 are applied to obtain Eq. (<ref>). §.§ Evaluation of minimum mean energy over all k-extendible states of any 2N-partite system with Hamiltonian H^W_gen Let us suppose that all the k-extendible states of a 2N-partite system on the Hilbert space ℋ_A_1⊗H_B_1⊗⋯⊗H_A_N⊗H_B_N form the set EXT_k(A_1;B_1;...;B_N) and η denotes a state belonging to the set EXT_k(A_1;B_1;...;B_N). Now, we will find the minimum mean energy over the set EXT_k(A_1;B_1;...;B_N) for the observable H^W_gen and it will be denoted by E^min_H^W_gen. Therefore, we have E^min_H^W_genmin_η∈EXT_k(A_1;B_1;...;B_N)[η H^W_gen], = min_η∈EXT_k(A_1;B_1;...;B_N)η ( α_1 1_A_1B_1......A_NB_N + ∑ _m=1^Nα_2^m ( F_A_mB_m⊗1_AB/A_m B_m + F_A_m+1B_m⊗1_AB/A_m+1B_m)) , = min_η∈EXT_k(A_1;B_1;...;B_N)α_1 + ∑ _m=1^Nα_2^m (1-2p_m) + ∑ _i=1^Nα_2^m (1-2p'_m) . where we have taken the mathematical form of Hamiltonian H^W_gen from Eq. (<ref>). Here, α_1 and {α_2^m }_m are the parameters of the Hamiltonian H^W_gen. Each and every p_m and p'_m for all m represent a parameter of bipartite Werner state on ℂ^2. We have obtained Eq. (<ref>) using Appendix <ref>. §.§ Gell-Mann matrices Eight Gell-Mann matrices are T_1 = ( [ 0 1 0; 1 0 0; 0 0 0 ]), T_2 = ( [ 0 -i 0; i 0 0; 0 0 0 ]), T_3 = ( [ 1 0 0; 0 -1 0; 0 0 0 ]), T_4 = ( [ 0 0 1; 0 0 0; 1 0 0 ]) T_5 = ( [ 0 0 -i; 0 0 0; i 0 0 ]), T_6 = ( [ 0 0 0; 0 0 1; 0 1 0 ]) T_7 = ( [ 0 0 0; 0 0 -i; 0 i 0 ]),T_8 = 1/√(3)( [ 1 0 0; 0 1 0; 0 0 -2 ]). ]
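As a quick numerical check of the appendix identity Tr[ρ_AB^W(p,d) F_AB] = 1 - 2p, the following minimal sketch (NumPy) builds the swap operator and the Werner state in the form used in the derivation above.

```python
import numpy as np

def swap_operator(d):
    # F |i,j> = |j,i> on C^d x C^d
    F = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            F[i * d + j, j * d + i] = 1.0
    return F

def werner_state(p, d):
    F = swap_operator(d)
    I = np.eye(d * d)
    P_sym, P_asym = (I + F) / 2, (I - F) / 2
    return (1 - p) * 2 / (d * (d + 1)) * P_sym + p * 2 / (d * (d - 1)) * P_asym

d, p = 3, 0.7
rho, F = werner_state(p, d), swap_operator(d)
print(np.real(np.trace(rho @ F)), 1 - 2 * p)   # both approximately -0.4
```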
Conversational Query Reformulation with the Guidance of Retrieved Documents. Jeonghyun Park, Hwanhee Lee (cs.CL). Published 2024-07-17. http://arxiv.org/abs/2407.12363v1
§ ABSTRACT Conversational search seeks to retrieve relevant passages for the given questions in Conversational QA (ConvQA). Questions in ConvQA face challenges such as omissions and coreferences, making it difficult to obtain the desired search results. Conversational Query Reformulation (CQR) transforms these current queries into de-contextualized forms to resolve these issues. However, existing CQR methods focus on rewriting human-friendly queries, which may not always yield optimal search results for the retriever. To overcome this challenge, we introduce GuideCQR, a framework that utilizes guided documents to refine queries, ensuring that they are optimal for retrievers. Specifically, we augment keywords, generate expected answers from the re-ranked documents, and unify them with the filtering process. Experimental results show that queries enhanced by guided documents outperform previous CQR methods. In particular, GuideCQR surpasses the performance of Large Language Model (LLM) prompt-powered approaches and demonstrates the importance of the guided documents in formulating retriever-friendly queries across diverse setups.
§ INTRODUCTION In the context of the conversational question answering (ConvQA) task <cit.>, the objective of conversational search is to retrieve passages that contain the necessary information to answer the current query within a multi-turn conversation. To understand the intent of the query, previous research focuses on transforming queries into stand-alone forms, making them independent and robust <cit.>. This process, known as CQR, allows for clearer expression and better understanding of the query. Generally, CQR methods fall into two categories: rewriting and expansion techniques. In the current era where LLMs dominate, leveraging LLMs for CQR is particularly prevalent. <cit.> instructs an LLM to generate relevant passages related to the query, making the query more reasonable and human-understandable by expanding the sentence. <cit.> also uses LLMs to rewrite the query through different prompting methods and aggregation techniques. However, these CQR methods focus on reconstructing queries to be the most understandable and reasonable only for humans. Although these approaches aim to create human-friendly and easily comprehensible queries, they may not always yield the optimal results that retrievers desire. To optimize CQR methods for their primary objective of generating the most effective search queries, it is necessary to prioritize retriever-friendly queries over human-friendly ones. In this paper, we propose GuideCQR, an effective framework designed to generate retriever-friendly conversational queries by leveraging the guidance of retrieved documents. We find that the guided documents can provide signals that enhance the query, optimizing it for better search performance. By using the signals from this related document set, we argue that we can build a retriever-friendly query that enhances search outcomes. Motivated by this, we first obtain the guided documents through a retrieval process using the baseline query reformulated by an LLM.
To extract more accurate signals for making retriever-friendly query, we also conduct a simple re-ranking process to make a better guided documents, refining the order of documents based on the similarity between query and document. And then, we reformulate the baseline query by augmenting keyword and generating expected answer. Finally, we filter both of them based on similarity with query and history utterances to remove redundant signals. Our experimental results demonstrate that GuideCQR shows the competitive performance on widely used datasets compared to several contemporary methods. We observe that GuideCQR consistently achieves remarkable scores across various metrics. Specifically, GuideCQR outperforms all baselines on the CAsT-19 dataset for average score, even better than LLM-prompt-leveraged methods. It demonstrates the robustness of the GuideCQR framework in making queries more retriever-friendly. This significantly enhances search results in the field of CQR systems. Additionally, the ablation study confirms the importance of the guided documents and the steps involved in creating retriever-friendly queries, including keyword augmentation and expected answer generation. § GUIDECQR We present GuideCQR, a novel framework to reformulate conversational queries by leveraging the guidance of retrieved documents. Figure  <ref> illustrates an overview of our proposed query reformulation framework. The objective of GuideCQR is to make a retriever-friendly query by utilizing the guidance of retrieved documents. To achieve this, we structure GuideCQR into three stages: We begin by retrieving a guided documents based on the simple LLM-reformulated query. This step involves obtaining a set of documents. And we re-rank the retrieved documents to prioritize documents. We consider only the most relevant ones in further stages. (Sec <ref>) And then we augment keywords and generate expected answers from the documents, creating components that contribute to making the query more retriever-friendly. (Sec <ref>) Finally, we apply a filtering process that evaluates both keywords and answers based on their similarity scores relative to the baseline query and dialogue history. Then, we unify and concatenate them back to the baseline query. (Sec <ref>) §.§ Guided documents Retrieval §.§.§ Initial documents Retrieval In the process of developing a retriever-friendly query, initially retrieved documents can play a crucial role as guiding resources, by providing foundational insights for the CQR process. For example, these signals can include critical keywords or contextual signs that are necessary to the search, such as "cure," "rate," and "curable" from the document shown in Figure  <ref>. Inspired by these points, GuideCQR starts from getting guided documents to get meaningful signals to the retriever. We obtain the guided documents by retrieving the documents using the baseline query set generated by LLM. We denote baseline query as Q_baseline, Q_baseline = Rewrite_LLM (RawQuery), where Rewrite_LLM represents an operation that resolves omissions or coreferences using LLM and RawQuery denotes the raw question in the dataset. Using the Q_baseline, we obtain the guided documents composed of 2,000 documents for each sample ID. §.§.§ Re-ranking documents We further enhance the quality of guided documents by incorporating a re-ranking process to get the final guided documents. By adjusting the order of previously retrieved documents, we aim to capture better signals that enhance the search result. 
We employ Sentence-Transformer <cit.> to re-rank 2,000 documents for each sample ID, carefully selecting the final guided documents with the top 10 documents by measuring the similarity between the query and each document. To mitigate potential biases arising from using a single model, we perform an additional re-ranking process with a different Sentence-Transformer. §.§ Generating Retriever-friendly Query Guided documents plays an important role in making retriever-friendly queries since this documents is retrieved by retriever so that we can extract retriever-friendly components from the documents. Based on the final guided documents, we create retriever-friendly queries with respect to two aspects: Augmenting Keyword, Expected Answer generation. §.§.§ Augmenting Keyword Keywords can be crucial signals for creating retriever-friendly queries, as these keywords represent the most significant and relevant terms within the document. So, we utilize the guided documents to augment keywords by two aspects. The first aspect focuses on determining the number of documents for augmenting keywords, which depends on the extent of guidance we seek. Second aspect utilizes the length of the keywords, specifically how long the keywords we choose to use. Based on this, we augment keywords K: K = [k_1, k_2, k_3, k_4 ... ,k_n], where k denotes unit keyword and n is the number of documents. For example, keywords such as "early", "throat", "cancer", "high", and "cure" can be crucial for the document shown in Figure  <ref>. §.§.§ Expected Answer Generation Additionally, expected answers related to the query can also be crucial signals because they offer direct insights into the user's desired outcomes, thereby enhancing the relevance and accuracy of the search outcome. Therefore, we also utilize the guided documents in generating expected answers as additional information to the query: A = [a_1, a_2, a_3, ..., a_n], where A represents answer list and a_n denotes a unit answer extracted from a single document. In this process, we require both the query and the context. We use the query as the Q_baseline, and the context as the top-k re-ranked guided documents. Conditioned on this documents, we then generate the single expected answer for each document such as "throat cancer has a high cure rate" for document shown in Figure  <ref> and merge all answers in a single sentence. Because the spans of each answer are all important, we regard the entire set of answers as a single unified answer rather than considering each one individually. §.§ Filtering And Unify We find that redundant elements can arise from the augmented keywords and the generated answer such as "early" shown in Figure  <ref>. We explain this is because GuideCQR might augment the keywords or answers generated from irrelevant documents. Therefore, we conduct an additional filtering stage to effectively filter keywords and answers in the reformulated query. We define the FilterScore based on cosine similarity as follows: QueryScore = 10 (1 - CosSim(query, item)), HistoryScore = Max( 10(1 - CosSim(history[i], item)) ), FilterScore = QueryScore + HistoryScore/2, where query represents the embedding of the current turn's query, and item refers to the embedding of either a keyword or an answer sentence, history is history query list of current utterance and CosSim denotes cosine similarity. We define the HistoryScore as the maximum cosine similarity value of the dialogue history query and words. 
Finally, we define the FilterScore as the average of the QueryScore and the HistoryScore, ranging from 1 to 10. History queries are important for understanding the current query because, in a dialog, it is more effective to comprehend the question based on the previous conversation. So we also consider history query by HistoryScore, allowing us to traverse the entire dialogue from past to present. We remove keywords and answers that fall below a specified FilterScore. Finally, we unify the remaining keywords and answers and integrate them into the Q_baseline to construct the final reformulated query as follows: Q_final = Concat([Q_baseline, K_filtered, A_filtered]), where K_filtered and A_filtered are remaining keywords, answers. § EXPERIMENTS To demonstrate the effectiveness of GuideCQR, we present a comparative analysis over existing methods, including LLM-prompt powered and human-rewritten queries. We focus on key metrics that are critical for evaluating conversational search. §.§ Datasets and Metrics We use the TREC CAsT-19 <cit.> and TREC CAsT-20 <cit.> datasets, widely regarded as public conversational search datasets. Human experts from the TREC Conversational Assistance Track (CAsT) curate these datasets. CAsT-19 and CAsT-20 consist of 50 and 25 conversations, respectively. Each conversation comprises turns filled with complex, context-driven queries that often refer to prior interactions within the same session. Both datasets share the same document collection and provide passage-level relevance judgments, as well as human rewrites for each turn. Following methods from prior research <cit.>  <cit.>  <cit.>, we use two evaluation metrics to assess the performance: Mean Reciprocal Rank (MRR), Normalized Discounted Cumulative Gain at three documents (NDCG@3). Also, we use pytrec_eval <cit.> tool to calculate each score. §.§ Baselines We compare GuideCQR with the following query reformulation (QR) baselines: (1) Transformer++ <cit.>: A GPT-2 based QR model fine-tuned on CANARD dataset. (2) CQE-Sparse <cit.>: A weakly-supervised method to select important tokens only from the context via contextualized query embeddings. (3) QuReTeC <cit.>: A weakly-supervised method to train a sequence tagger to decide whether each term contained in a historical context should be added to the current query. (4) T5QR <cit.>: A conversational query rewriter based on T5, trained using human-generated rewrites. (5) ConvGQR <cit.>: A query reformulation framework that combines query rewriting with generative query expansion. (6) CONVINV <cit.>: framework that transforms conversational session embeddings into interpretable text using Vec2Text. (7) LLM4CS <cit.>: query rewriting based on LLM and various prompting methods. We build GuideCQR upon this prompted QR method using (REW + Maxprob) and evaluate using Self-Consistency. §.§ Implementation Details As a baseline, we generate the Q_baseline using OpenAI 3.5-turbo-16k combined with the Maxprob approach in LLM4CS. We simply generate this query by sampling the highest generation probability with LLMs, resolving only omissions and coreferences. Following previous studies  <cit.>  <cit.>, we use ANCE <cit.> pre-trained on the MSMARCO <cit.> as our retriever. To re-rank the guided documents, we select the mxbai-embed-large-v1  <cit.> and ember-v1  <cit.> models as the first and second re-ranking Sentence-Transformer models, respectively. We use KeyBERT <cit.> for augmenting keywords. And then we use roberta-base-squad2  <cit.> for generating the expected answer. 
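To illustrate how the re-ranking, keyword-augmentation, and answer-generation steps described above can be wired together with these off-the-shelf components, we give a minimal sketch below. The exact Hugging Face model identifiers are our assumptions (any Sentence-Transformer checkpoint and any extractive QA model can be substituted), and the sketch omits the second re-ranking pass for brevity.

```python
from sentence_transformers import SentenceTransformer, util
from keybert import KeyBERT
from transformers import pipeline

def rerank(query, docs, model_name="mixedbread-ai/mxbai-embed-large-v1", top_k=10):
    # Order the retrieved documents by query-document cosine similarity.
    model = SentenceTransformer(model_name)
    q = model.encode(query, convert_to_tensor=True)
    d = model.encode(docs, convert_to_tensor=True)
    scores = util.cos_sim(q, d)[0]
    order = scores.argsort(descending=True)[:top_k]
    return [docs[i] for i in order]

def augment_keywords(docs, span=15, per_doc=1):
    # One keyphrase of up to `span` words per guided document.
    kw_model = KeyBERT()
    keywords = []
    for doc in docs:
        pairs = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, span),
                                          stop_words="english", top_n=per_doc)
        keywords.extend(kw for kw, _ in pairs)
    return keywords

def expected_answers(query, docs, model_name="deepset/roberta-base-squad2"):
    # One extracted answer span per guided document, conditioned on the query.
    qa = pipeline("question-answering", model=model_name)
    return [qa(question=query, context=doc)["answer"] for doc in docs]
```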
Finally, we use OpenAI text-embedding-3-small for generating embeddings for keywords and answers in the filtering stage in GuideCQR. We use different embeddings at each stage to mitigate potential biases caused by a single model. §.§ Performance Comparison Main Results As shown in Table <ref>, GuideCQR significantly enhances performance metrics compared to the Q_baseline across CAsT datasets. GuideCQR outperforms all of the compared methods in terms of average score. Specifically, on the CAsT-19 dataset, GuideCQR outperforms both LLM and prompting-based reformulated queries, demonstrating its robustness. For CAsT-20, still its performance is almost second-best. These results highlight the solidity of retriever-friendly queries. The score for CAsT-20 is lower than CAsT-19 because the topics of CAsT-19 are more complex. So CAsT-20 is more challenging than CAsT-19. Therefore, the signals extracted from guided documents may not work well. Ablation Study We evaluate the impact of removing individual components involved in creating retriever-friendly queries. As demonstrated in Table <ref>, removing any part of GuideCQR leads to a decrease in performance, indicating that each component plays an important role in GuideCQR. §.§ Analysis Performance among Amount of Initial Guided documents As in Table  <ref>, we adjust the number of guided documents in the guided documents retrieval stage to validate the proper amount of the documents. The results indicate that increasing the amount of guidance consistently improves performance. However, retrieving too many documents for the guided documents requires a longer inference time. Therefore, we decide to choose 2,000 documents for our main experiment. Performance among Top-k doc for Keyword and Answer As in Table  <ref>, we explore the effects of varying the number of documents in augmenting keywords and generating the expected answer stage. For example, if we extract each keyword and answer from the top 2 documents, we generate a keyword and answer pair for each document, resulting in a total of 2 pairs. Consistent with previous studies, we find that using more documents generally provides additional signals that can enhance performance. However, an excessive number of documents may lead to a loss in the overall score. Performance among Keyword Span Length We also investigate the impact of varying keyword spans in augmenting the keyword stage and present the result in Table  <ref>. We observe that increasing the keyword span generally enhances performance to a certain extent. However, we find that too many signals can degrade performance due to inclusion of the redundant information to the query. Query Relevance Analysis To show the effectiveness of the keyword augmentation step, we investigate the precision score as follows: Precision = /, where N_4 represents unique matched keywords found in the guided documents that have qrel (query relevance) score four and N_total represents the total number of unique augmented keywords. Note that the document that has a qrel score of 4 is regarded as the most relevant document with user queries in CAsT datasets. Therefore, keywords matched in these documents can prove that augmenting keywords plays an important role in making retriever-friendly queries. The results in Table <ref> demonstrate that a longer keyword span results in a higher precision score. This underscores the critical role of keyword augmentation in GuideCQR. Performance among FilterScore We test different FilterScore settings. 
Before filtering, we use top-4 span 15 keywords and top-6 answers for CAsT-19, and top-5 span 5 keywords and top-4 answers for CAsT-20. And GuideCQR uses these keywords and answers as our final query set. As in Figure  <ref>, excessive filtering degrades performance, highlighting the importance of selecting an appropriate filter score. We achieve our final results with a FilterScore of 1.19 for CAsT-19 and 0.525 for CAsT-20. § CONCLUSION We present the GuideCQR, a novel query reformulation framework for conversational search. GuideCQR effectively reformulates conversational queries to be more retriever-friendly by leveraging a guided documents. GuideCQR not only enhances the alignment of queries with user intent but also improves the precision and relevance of search results. Experimental results across various datasets and metrics confirm the capability of GuideCQR over human and LLM-powered methods. § ACKNOWLEDGEMENT This research was supported by Institute for Information & Communications Technology Planning & Evaluation (IITP) through the Korea government (MSIT) under Grant No. 2021-0-01341 (Artificial Intelligence Graduate School Program (Chung-Ang University)). § DATASET STATISTICS Table  <ref> presents statistics for the CAsT datasets, including the number of conversations, queries, and documents. § QUERY ANALYSIS We present example queries reformulated our method and humans in Table  <ref>.
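For reference, the filtering rule evaluated above can be sketched as follows. The embed function is a placeholder for whichever sentence encoder is used in the filtering stage, and threshold corresponds to the FilterScore cut-offs reported above (e.g., 1.19 for CAsT-19); items scoring below the threshold are removed as redundant, following the rule described earlier.

```python
import numpy as np

def cos_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def filter_score(item_emb, query_emb, history_embs):
    # QueryScore and HistoryScore as defined above.
    query_score = 10 * (1 - cos_sim(query_emb, item_emb))
    history_score = max(10 * (1 - cos_sim(h, item_emb)) for h in history_embs)
    return (query_score + history_score) / 2

def filter_and_unify(q_baseline, keywords, answers, embed, history, threshold):
    q_emb = embed(q_baseline)
    h_embs = [embed(h) for h in history]
    keep = lambda items: [x for x in items
                          if filter_score(embed(x), q_emb, h_embs) >= threshold]
    # Surviving keywords and answers are concatenated back onto the baseline query.
    return " ".join([q_baseline] + keep(keywords) + keep(answers))
```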
The Storage Location Assignment and Picker Routing Problem: A Generic Branch-Cut-and-Price Algorithm. Thibault Prunet, Nabil Absi, Diego Cattaruzza (cs.DM). Published 2024-07-18. http://arxiv.org/abs/2407.13570v1
§ ABSTRACT The Storage Location Assignment Problem (SLAP) and the Picker Routing Problem (PRP) have received significant attention in the literature due to their pivotal role in the performance of the Order Picking (OP) activity, the most resource-intensive process of warehousing logistics. The two problems are traditionally considered at different decision-making levels: tactical for the SLAP, and operational for the PRP. However, this paradigm has been challenged by the emergence of modern practices in e-commerce warehouses, where storage decisions are more dynamic and are made at an operational level. This shift makes the operational integration of the SLAP and PRP pertinent to consider. Despite its practical significance, the joint optimization of both operations, called the Storage Location Assignment and Picker Routing Problem (SLAPRP), has received limited attention. Scholars have investigated several variants of the SLAPRP, proposing both exact and heuristic methods, including different warehouse layouts and routing policies. Nevertheless, the available computational results suggest that each variant requires an ad hoc formulation. Moreover, achieving a complete integration of the two problems, where the routing is solved optimally, remains out of reach for commercial solvers, even on trivial instances. In this paper, we propose an exact solution framework that addresses a broad class of variants of the SLAPRP, including all the previously existing ones. The proposed approach is a Branch-Cut-and-Price framework based on a novel formulation with an exponential number of variables, which is strengthened with a novel family of non-robust valid inequalities. We have developed an ad hoc branching scheme to break symmetries and keep the size of the enumeration tree manageable. Computational experiments show that our framework can effectively solve medium-sized instances of several SLAPRP variants and outperforms the state-of-the-art methods from the literature.
§ INTRODUCTION In supply chain management, the storage of goods in warehouses acts as a key component of the system performances <cit.>. The order picking activity (i.e., the action of retrieving the products from their storage locations) is largely considered the most resource-intensive activity in warehousing operations. According to <cit.>, manual picker-to-parts warehouses, in which a human operator walks through the shelves to collect the products, were still largely dominant in 2007 and accounted for more than 80% of all warehouses in Western Europe. In such warehouses, the order picking process alone accounts for 50-75% of the total operating cost <cit.>, leading to a prolific research stream on the optimization of these systems <cit.>. There are three main decision problems related to the operational efficiency of an order picking system: the Storage Location Assignment Problem (SLAP), which aims at determining an efficient assignment of products, called Stock Keeping Units (SKUs) to storage locations, the Order Batching Problem (OBP), which aims at grouping customer orders into batches retrieved by single routes, and the Picker Routing Problem (PRP), which consists in finding the most efficient path in the warehouse to pick a given set of products from their locations. The SLAP and the PRP are strongly linked, as the PRP determines the picking routes once the product locations are known, and the quality of a SLAP solution is assessed via the computation of the routes followed by the pickers. The SLAP, OBP and PRP have been studied extensively in the literature <cit.>. Most of the time, these problems are solved independently or sequentially. Recent studies have, however, highlighted the benefits of making assignment, storage, batching and routing decisions in an integrated way in warehousing logistics <cit.>. The integrated optimization of the OBP and PRP is currently an active research topic <cit.>. To deal with the complexity of integrated problems, operations management decisions are generally categorized based on the time horizon they address, namely strategic, tactical, and operational. Hence, the integration of the SLAP and the PRP remains marginal, as the SLAP is usually considered a tactical problem, whereas the PRP is viewed as an operational problem. However, the paradigm shift in e-commerce warehouses increases the demand variability and often requires a high level of responsiveness to ensure customer satisfaction. This translates into shorter lead times for processing and delivering incoming orders <cit.>. To face these challenges, a common strategy is to divide the storage area into two zones: the reserve area, where products are stacked in high bays, and the smaller forward picking area, where products are stored in easily reachable racks for pickers. Although storage decisions in the reserve area remain tactical, the current trend in e-commerce warehouses is to consider them as operational in the picking area. Indeed, here the inventory quantities are small, and the picking is organized in waves, with the replenishment of depleted products generally occurring several times per day <cit.>. While a complete reoptimization of the storage plan at the operational level may seem unrealistic, the storage-location assignment decisions for replenished items are made several times per day, with complete information on the incoming demand for the upcoming cycles. This allows the explicit consideration of picker routing decisions within the problem. 
This dynamic storage replenishment is inspired by real-world applications in e-commerce warehouses, e.g., in Germany <cit.>, Belgium <cit.> and China <cit.>. Other applications involving storage decisions at an operational level include the partial reassignment of already stored items, which can happen on a daily basis in modern real-world warehouses <cit.>. In this context of the forward picking area, three decisions are taken before each picking cycle: (1) where to store the SKUs with insufficient inventory to meet the incoming demand (storage), (2) how to group customer orders into consolidated batches retrieved in a single route (batching), and (3) how to route the pickers (routing). Consequently, different alternatives may be considered: * Solve the three decision problems sequentially. * First, consider storage with no information on the batches. Then, jointly optimize batching and routing with perfect information on the SKU locations. * First, consider batching with incomplete information on the SKU locations. Then, once all batches are known, jointly optimize storage and routing with perfect information on the SKUs retrieved by each route. * Jointly optimize storage location, batching, and routing. As already mentioned, the first option has been proved to lead to solutions of poor quality and is therefore discarded <cit.>. In the second option, no information on the batching decisions is available to the decision-maker when deciding the storage assignment, whereas in the third case, batching is determined with partial, but still rather complete, information on the storage, as only a portion of the SKUs are replenished in each cycle. Finally, it is clear that a complete integration of the three problems would lead to the best results. However, it is worth noting that this integrated approach poses significant challenges and is left for future research. In this paper, we assume that batching decisions are made beforehand, and we focus on the joint optimization of the storage and picker routing decisions within the forward picking area, where both decisions may be taken with detailed knowledge of the incoming demand. It is important to note that the orders we use as input data do not correspond to individual customer orders, but to consolidated batches of orders, each retrieved by a single route. The resulting problem, coined the Storage Location Assignment and Picker Routing Problem (SLAPRP), has been recently introduced in the literature and has been proven NP-hard in the strong sense, even for very basic cases <cit.>. The study of the SLAPRP is also relevant in the case of a completely stochastic demand, where it can serve as a subproblem in complex solution methods. The remainder of this paper is organized as follows. Section <ref> presents the context, related background, and the contributions of the current work. Section <ref> defines the SLAPRP and introduces a generic compact formulation. An extended formulation is then derived from a Dantzig-Wolfe reformulation. Section <ref> introduces a novel family of valid inequalities for the SLAPRP. Section <ref> presents the components of the generic Branch-Cut-and-Price algorithm we designed to solve the problem. Section <ref> describes the labeling algorithm used to solve the pricing problems. Section <ref> reports the numerical experiments. Finally, Section <ref> concludes this work. 
For readability and brevity, we first provide a concise introduction to the PRP and the SLAP, then we review the SLAPRP literature. For a detailed literature review on the PRP and the SLAP, interested readers are referred to <cit.>. Picker Routing Problem. The PRP takes as input the warehouse layout and the location of the different SKUs, it aims at optimizing the material handling time to pick all the SKUs from a pick list, often expressed as the traveled distance. The classical PRP is a special case of the Traveling Salesman Problem (TSP). Despite being NP-hard in general <cit.>, the problem is solvable in polynomial time for the single-block warehouse <cit.> and the two-blocks warehouse <cit.>. For the multiple-blocks warehouse, the problem is fixed-parameter tractable <cit.>. However, most practical variants of the problem remain NP-hard and few works addressed it with exact solution methods <cit.>. Nevertheless, the recent work of <cit.> on the rectilinear TSP enabled the development of efficient exact algorithms for the PRP, which solve industrial size instances in a reasonable computing time <cit.>. Many researchers have noted that, despite the availability of competitive PRP algorithms, simple heuristics enjoy much higher popularity <cit.>. The most common of these heuristics, called routing policies, are presented in Figure <ref> for a single block warehouse. With the S-shape policy the picker traverses completely an aisle if there is an item to pick. A return is allowed from the last picking location, so that the last aisle may not be traversed entirely. In the return policy, the back cross aisle is never used: In each aisle, the picker goes up to the farthest picking location and then comes back to the front cross aisle. The midpoint policy considers that the first and last aisles containing an SKU to pick are traversed completely. In the other aisles the picker never crosses the middle of the aisle: The picking locations above the midpoint are visited coming from the top cross aisle, while the ones below the midpoint are visited coming from the bottom cross aisle. The largest gap policy is similar to the midpoint policy, except that the position of the midpoint" (i.e., the point that is never crossed in an aisle) may not be located in the middle of the aisle, but in the location minimizing the within-aisle distance. Storage decisions. The SLAP was introduced by <cit.> for an automated warehouse and most of its variants are NP-hard <cit.>. Various performance evaluation measures have been used for the SLAP, the most common are the total picking time and traveled distance by the pickers <cit.>. Note that, given a solution of the SLAP (i.e., a storage plan), the modeling and computation of the picking distance largely differ between different works, and in most cases, a distance-related indicator is used as the objective function instead of the distance itself. The SLAPs are classically solved with simple heuristics called storage policies that assign the incoming items to open storage locations according to a simple rule. The taxonomy introduced by <cit.>, still widely used today, separates these policies into three classes: i. The random storage policy in which an incoming item is allocated to a random open storage location. ii. The dedicated storage policy, where each SKU is assigned to a single storage location in the warehousing area. The products are sorted according to a given criterion, usually the turnover, and assigned to the closest storage locations accordingly. iii. 
The class-based storage policy assigns the products to classes" depending on a frequency-related criterion. Each class is assigned to a zone in the warehouse. Within a zone, the specific location of a given product is randomly determined. Interactions between the SLAP and the PRP. The two problems are closely linked. On the one hand, a SLAP solution must be known a priori when solving a PRP instance. On the other hand, a PRP must be solved to assess a posteriori the quality of a SLAP solution. It is therefore not surprising that the two problems have been studied together giving rise to the SLAPRP. The SLAPRP. When the information is known a priori at the order level, i.e., not only the demand for each product but the list of products in every order, the SLAP and the PRP can be solved together. The SLAPRP is the integrated variant of these two problems, which aims at finding an assignment plan that minimizes the total picking distance, computed exactly by solving a PRP for each order. The integration of location and routing decisions is not a novel idea <cit.>, yet few works have studied the SLAPRP. These two problems are indeed classically seen as belonging to different decision levels, the SLAP being tactical and the PRP operational. However, the recent rise of e-commerce warehouses has led to a shift in this paradigm to better adapt to versatile demand with a high level of responsiveness, toward a more dynamic picking process where the storage decisions are taken more frequently, at the operational level. This is especially true for the problem variants that focus on the reorganization and replenishment of the picking area, which take place several times per day in modern warehouses <cit.>. Recent survey papers on integrated planning problems in warehousing <cit.>, and e-commerce warehouses <cit.>, highlight the need to integrate decision problems to improve the efficiency of order picking. The first example of a SLAPRP encountered in the literature is the work of <cit.> that studies an automated warehouse with a vertical lift system. The authors propose a MIP formulation of the studied problem with the S-shape routing policy. Their problem is a very specific instance of the SLAPRP: With the S-shape routing, each aisle is traversed completely and the model only needs to decide in which aisle a product is located, leading to a much simpler problem. <cit.> study the SLAPRP with the return routing policy on a simple layout with only a single aisle. They prove that the problem is NP-hard in the strong sense, even in simple cases, and provide a dynamic programming algorithm to solve it. <cit.> is the first work, to our best knowledge, that studies a fully integrated SLAPRP where the PRP is solved to optimality, instead of using a routing policy. They propose a cubic formulation for the SLAPRP with the optimal routing, as well as linear, quadratic, or cubic formulations for other widespread routing policies (return, S-shape, midpoint, and largest gap). They linearize their formulations for the computational experiments with commercial solvers. The results show that the SLAPRP with optimal routing is significantly more challenging than its variants with the heuristic routing policies. Only trivial instances are solved to optimality. They also propose a Variable Neighborhood Search heuristic to tackle larger instances. Finally, <cit.> study a variant of the SLAPRP focusing on the replenishment of the picking area, based on an industrial case of a Chinese third-party logistic company. 
In their problem, a portion of the products are already in place in the picking area, and only a set of new items needs to be assigned to the open storage locations. They propose a MIP formulation with the return routing policy, and a dynamic programming-based heuristic to solve the problem. Paper contributions. In the light of this literature review, it appears that the study of the SLAPRP answers to modern industrial practices, yet the related literature is scarce, with most existing studies focusing on specific application cases. Only one work <cit.> studies the problem with optimal routing instead of heuristic policies. In this paper, we aim at designing an exact solution method for a large class of variants of the SLAPRP, namely, the fully integrated problem, its variants with the classical routing policies (return, S-shape, midpoint, and largest gap), and the replenishment variants of these problems. The main contributions of this work are the following: * We introduce two novel formulations of the SLAPRP: a compact linear formulation for optimal routing and a general extended formulation based on a Dantzig-Wolfe reformulation. This extended formulation is independent of all operational considerations. * We introduce a new family of valid inequality for the extended formulation. Since these inequalities are non-robust, particular attention is given to the management of the associated cutting planes. * We propose a generic Branch-Cut-and-Price algorithm that can solve a large class of SLAPRP variants, including various warehouse layouts and picker routing policies, as well as the partial replenishment of the picking area. The only adaptations from one variant to another are i. the underlying graph, and ii. the resource extension function in the pricing problems. * We present an extensive set of computational experiments that shows the competitiveness of the method in various configurations. § PROBLEM DESCRIPTION AND MATHEMATICAL FORMULATIONS In this section, we formally introduce the SLAPRP and propose two generic formulations of the problem, one compact and another one extended. We use the term generic to indicate that we do not assume any particular layout on the storage area, nor any particular routing strategy (note that the illustrative example in Figure <ref> refers to a single block rectangular warehouse). Table <ref> summarizes the notation used throughout this paper. §.§ Problem description & compact formulations Storage nodes and locations. We assume that all picker routes start and end at the same location v_0, called the drop-off point, and that a picker can indifferently pick from the right and left shelves when crossing an aisle. We call storage node an elementary storage space that can hold at most one SKU. We note 𝒱 the set of storage nodes, and 𝒱^0 = 𝒱∪{v_0}. We call storage location a set of storage nodes that are equivalent in terms of distance from the other storage nodes. For example, in Figure <ref>, the storage nodes v_l^1 and v_l^2 are part of the same storage location l (represented by the blue dot in the middle). We note ℒ the set of storage locations, 𝒱(l) the set of storage nodes that are part of location l ∈ℒ, and K_l = | 𝒱(l) | its capacity. Note that ℒ forms a partition of 𝒱, so we can index the set as 𝒱=⋃_l ∈ℒ{v_l^1, v_l^2,…, v_l^K_l}. For simplicity we can use node (resp. location) instead of storage node (resp. storage location). Graph representation. 
The layout of the warehouse can be modeled with a directed graph 𝒢 = (𝒱^0, ℰ), where ℰ = {(v_0, v^1_l): l ∈ℒ}∪{(v^i_l, v_0): l ∈ℒ, i = 1,…, K_l}∪{(v^i_l, v^1_l'): l, l'∈ℒ, i = 1, …, K_l}∪{(v^i_l, v^i+1_l): l∈ℒ, i=1,…, K_l - 1}. The graph is thus not complete: for a location l ∈ℒ, the node v_l^1 is reachable from all the other nodes in the graph, but for 2 ≤ i ≤ K_l the node v_l^i is only reachable from node v_l^i-1. In other words, the node v_l^i represents the i^th visit of location l, which is only possible after (i-1) previous visits of l. Figure <ref> provides a minimal example with two locations and four nodes. The drop-off point v_0 is reachable from all the other nodes. Note that the arcs in ℰ are not associated with a weight in the general case, since the distance between two nodes depends on the routing policy. In fact, for some policies, the distance between two nodes is not constant, as it depends on the nodes previously visited by a path. We assume that the distances satisfy the triangle inequality. Problem description. The SLAPRP can be formally described as follows. A warehouse layout is given, represented by a graph 𝒢 as previously introduced. The aim is to assign a set of SKUs 𝒮 to the storage nodes 𝒱. Without loss of generality, we assume that |𝒮| ≤ |𝒱|. A set of orders 𝒪 needs to be collected by the pickers. An order o ∈𝒪 contains a subset 𝒮(o) of SKUs to be picked, and corresponds to one picking route. A route is defined as a tour on graph 𝒢 starting and ending at the drop-off point. A route is valid if and only if it visits the storage nodes assigned to its set of SKUs 𝒮(o). Each route is associated with a cost that represents the walking distance covered by a picker and depends on the routing policy. A set ℱ⊂𝒮×ℒ of fixed assignments represents the locations that are already filled. For a pair (s,l) ∈ℱ, a SLAPRP solution may only be valid if SKU s is assigned to location l. Compact formulation. We introduce two sets of binary decision variables: ξ_ls equals 1 if SKU s ∈𝒮 is stored in location l ∈ℒ, 0 otherwise, and x_ij^o equals 1 if the route picking order o∈𝒪 uses arc (v_i,v_j)∈ℰ, 0 otherwise. We also introduce non-negative continuous variables u_i^o that indicate the position of node v_i ∈𝒱 in the tour picking order o∈𝒪. Finally, the picking distance of a given route x^o = (x^o_ij)_(i,j) ∈ℰ is denoted f(x^o). Note that we do not make any assumption on the function f, which depends on the considered variant of the SLAPRP. With these definitions, we propose the following compact formulation of the SLAPRP (denoted C).
(C)  min ∑_o ∈𝒪 f(𝐱^𝐨)
s.t. ∑_s ∈𝒮 ξ_ls ≤ K_l   ∀ l ∈ℒ
     ∑_l ∈ℒ ξ_ls = 1   ∀ s ∈𝒮
     ξ_ls = 1   ∀ (s,l) ∈ℱ
     ∑_v_i ∈𝒱 x_i0^o = ∑_v_j ∈𝒱 x_0j^o = 1   ∀ o ∈𝒪
     ∑_v_j ∈𝒱^0 x_ij^o = ∑_v_j ∈𝒱^0 x_ji^o   ∀ v_i ∈𝒱, o ∈𝒪
     u^o_j ≥ u^o_i + 1 - |𝒮(o)| (1 - x_ij^o)   ∀ o ∈𝒪, v_i, v_j ∈𝒱
     ∑_v_i ∈𝒱^0 ∑_v_j ∈𝒱 x_ij^o = |𝒮(o)|   ∀ o ∈𝒪
     ∑_v_i ∈𝒱^0 ∑_v_j ∈𝒱(l) x_ij^o ≥ ∑_s ∈𝒮(o) ξ_ls   ∀ l ∈ℒ, o ∈𝒪
     x_ij^o ∈ {0,1}   ∀ o ∈𝒪, v_i, v_j ∈𝒱^0
     ξ_ls ∈ {0,1}   ∀ l ∈ℒ, s ∈𝒮
     0 ≤ u^o_i ≤ |𝒮(o)| - 1   ∀ o ∈𝒪, v_i ∈𝒱
The objective function (<ref>) minimizes the total distance walked to collect all orders. Constraints (<ref>) and (<ref>) are assignment constraints that ensure that each SKU is assigned to a single storage location, and that the number of SKUs stored in one location does not exceed its capacity. Constraints (<ref>) enforce the fixed assignments. Constraints (<ref>), (<ref>) and (<ref>) are tour constraints. They ensure that, for each order, the corresponding route forms a valid tour. 
Constraints (<ref>) impose that a route is formed to pick each order, constraints (<ref>) enforce the flow conservation at each location, and constraints (<ref>) deal with subtour elimination, based on the Miller-Tucker-Zemlin (MTZ) formulation. Constraints (<ref>) ensure that the route collecting an order visits exactly as many locations as SKUs to retrieve. These constraints are redundant in this compact formulation, but they prove to be useful once convexified. Constraints (<ref>) are linking constraints and ensure that the assignment of SKUs and the chosen routes are consistent. More precisely, for each order o ∈𝒪, the route picking o must stop at location l ∈ℒ (left-hand side) if one of the SKUs of the order is assigned to this location. If several SKUs of o are stored in l, the route must stop multiple times. Constraints (<ref>)-(<ref>) define the domains of the variables. Note that the main decision variables ξ do not appear in the objective function, supporting the discussion of Section <ref> on the link between the SLAP and the PRP. Note on the subtour elimination constraints. The MTZ constraints used in formulation (C) are known to provide formulations with a poor linear relaxation. We present in Appendix <ref> a lifting of (C) that relies on the multi-commodity flow (MCF) formulation for subtour elimination. Although this strengthens the compact formulation, the subtour constraints are expressed using the assignment variables ξ, which would be cumbersome to decompose the problem. The numerical experiments presented in Section <ref> use both compact formulations. §.§ Problem structure and decomposition In this section, we provide a brief polyhedral analysis of the SLAPRP structure in order to motivate the decomposition method presented in this work. Assignment polytope. The first structure that stands out is the block composed of the variables ξ, associated with their assignment constraints (<ref>)-(<ref>). Being structurally close to the assignment polytope, it is not surprising that this substructure possesses integer corner points as well (this result is proven in Appendix <ref>). In this case, the formulation is already tight, and the so-called integrality property establishes that the convexification of this block will not improve the linear relaxation <cit.>. Routing polytopes. For each order o ∈𝒪, the polytope defined by the variables x^o and the constraints (<ref>)-(<ref>) forms a routing substructure, enforcing that the route defines a valid tour. These blocks have a block-diagonal structure, they are only linked to the assignment variables by the coupling constraints (<ref>). Since the network-flow formulation for the routing structure is known to have a pretty weak linear relaxation, the routing polytopes appear to be appropriate candidates to apply Dantzig-Wolfe (DW) reformulation. Linking polytope. Constraints (<ref>) can be seen as a monolithic block of coupling constraints between the routes. Actually, these constraints do not directly link the routes together: for each order o ∈𝒪, the routing variables are only linked to the assignment sub-structure. With this observation, it appears that the matrix of constraints (<ref>) has a staircase structure, which is amenable to a Benders decomposition <cit.>. Variable splitting. Formulation (C) is defined on a sparse constraint matrix, with both linking variables and coupling constraints. 
This particular structure has often been addressed using Lagrangian relaxation theory, where the decomposition method is called variable splitting or Lagrangian decomposition <cit.>. It relies on the duplication of the linking variables ξ, with one duplicate for each order, the equality between the duplicates being dualized in the objective function using Lagrangian multipliers. The recent work of <cit.> provides a new perspective for this kind of structure, still based on variable splitting but with a DW reformulation of the coupling constraints instead of a Lagrange-based decomposition. Conclusion. In light of this brief analysis, we conclude that the SLAPRP is highly structured and prone to decomposition approaches. In the present work, we choose to convexify the routing polytopes and keep the linking constraints in the formulation. The main motivation behind this design choice is that the subproblems can be modeled as Elementary Shortest Path Problems with Resource Constraints (ESPPRC), a well-studied problem that has proven efficient when embedded in Branch-and-Price algorithms. §.§ Extended formulation As motivated in the previous section, we convexify the routing polyhedra to take advantage of the block structure of the constraint matrix. For each order o ∈𝒪, we define the polyhedron X_o as the routing polyhedron for order o (i.e., including constraints (<ref>), (<ref>), (<ref>) and (<ref>) of model (C), and the linear bounds on the routing variables):
X_o = {(x^o_ij)_(v_i,v_j) ∈ℰ∈{0,1}^|ℰ|, (u^o_i)_v_i ∈𝒱 | (<ref>), (<ref>), (<ref>), (<ref>), (<ref>)}
According to (<cit.>, Chapter 12), there are two approaches for the DW reformulation, namely convexification and discretization; in this section we use the latter. Since X_o is bounded, it contains a finite number of points (x^or)_r ∈ℛ_o, indexed by r ∈ℛ_o. Applying Theorem 13.2 of <cit.>, inspired by the Minkowski-Weyl theorem, we can write the set X_o as:
X_o = {x ∈ℝ^|ℰ| | x = ∑_r ∈ℛ_o ρ_or x^or, ∑_r ∈ℛ_o ρ_or = 1, ρ_or ∈{0,1}, ∀ r ∈ℛ_o}
We can then optimize over the convex hull of these polyhedra, instead of their linear relaxation. This is done by substituting the x^o variables of formulation (C) with their expression in equation (<ref>). Let us introduce c_or = f(x^or), the walking distance of route r ∈ℛ_o, and a_or^l = ∑_v_i ∈𝒱^0∑_v_j ∈𝒱(l) x^or_ij, that is, the number of picks at location l ∈ℒ for route r. The reformulation leads to formulation (DW), which we refer to as the DW reformulation of (C):
(DW)  min ∑_o ∈𝒪 ∑_r ∈ℛ_o c_or ρ_or
s.t.  ∑_s ∈𝒮 ξ_ls ≤ K_l   ∀ l ∈ℒ
      ∑_l ∈ℒ ξ_ls = 1   ∀ s ∈𝒮
      ξ_ls = 1   ∀ (s,l) ∈ℱ
      ∑_r ∈ℛ_o ρ_or = 1   ∀ o ∈𝒪
      ∑_r ∈ℛ_o a_or^l ρ_or ≥ ∑_s ∈𝒮(o) ξ_ls   ∀ l ∈ℒ, o ∈𝒪
      ρ_or ∈ {0,1}   ∀ o ∈𝒪, r ∈ℛ_o
      ξ_ls ∈ {0,1}   ∀ l ∈ℒ, s ∈𝒮
The objective function (<ref>) is the sum of the distances of the routes selected in the solution. Note that, compared to formulation (C), the objective function becomes linear without any additional assumption. Constraints (<ref>) and (<ref>) are the assignment constraints; they ensure that the capacity of each location is satisfied and that each SKU is placed in one location. Constraints (<ref>) ensure that exactly one route is chosen to retrieve each order. Constraints (<ref>) ensure that the route that collects o ∈𝒪 stops at location l ∈ℒ if one of its SKUs s ∈𝒮(o) is assigned to l. They also ensure that the number of stops at this location is consistent with the number of SKUs of the order stored there. 
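To make the column data of (DW) concrete, the following minimal sketch, written in Julia (the language of our implementation, see Section <ref>), shows one possible way to encode a route r ∈ℛ_o as a column, namely its cost c_or and its stop counts a_or^l. The type and function names (Column, build_column, toy_dist) and the placeholder distance function are illustrative assumptions and do not describe the actual code.

# Illustrative sketch (not the actual implementation): encoding a route of the
# extended formulation (DW) as a column. A route is given as the ordered list of
# visited locations; its cost c_or is the walked distance under the chosen routing
# policy, and a_or^l counts the stops at each location l. The function dist is a
# placeholder for the policy-dependent distances on graph G.
struct Column
    order::Int                    # index of the order o
    cost::Float64                 # c_or: walking distance of the route
    stops::Dict{Int,Int}          # a_or^l: number of stops at location l
end

function build_column(o::Int, visited_locations::Vector{Int}, dist::Function)
    cost = 0.0
    prev = 0                      # 0 stands for the drop-off point v_0
    stops = Dict{Int,Int}()
    for l in visited_locations
        cost += dist(prev, l)
        stops[l] = get(stops, l, 0) + 1
        prev = l
    end
    cost += dist(prev, 0)         # return to the drop-off point
    return Column(o, cost, stops)
end

# Toy usage: an order picked at locations 3, 3 and 7 (two SKUs stored in location 3).
toy_dist(i, j) = abs(i - j)       # placeholder distance, for illustration only
col = build_column(1, [3, 3, 7], toy_dist)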
The LP relaxation of (DW) is obtained by replacing the binary requirements with non-negativity requirements on the variables ρ and ξ. It is referred to as the Master Problem (MP). When the MP is defined over a restricted subset of the routes ℛ_o for at least one order o∈𝒪, we call it the Restricted Master Problem (RMP). § STRENGTHENED LINKING INEQUALITIES In this section, we introduce a family of valid inequalities for (DW) that tighten its linear relaxation by strengthening the link between assignment variables and routing variables. Given a subset of locations ℒ' ⊆ ℒ, the following inequalities (coined SL inequalities) are valid for formulation (DW):
∑_r ∈ℛ_o δ_r(ℒ') ρ_or ≥ ∑_l ∈ℒ' ξ_ls   ∀ o ∈𝒪, s ∈𝒮(o)
where δ_r(ℒ') equals 1 if route r ∈ℛ_o, o ∈𝒪, stops in one of the locations of ℒ', and 0 otherwise. Proof. See Appendix <ref>. Note that the SL inequalities are exponential in number and non-robust in the general case (i.e., when |ℒ'| ≥ 2). In the following, we refer to the number of elements in the set ℒ' as the order of the inequality. We note 𝒞_os the set of SL inequalities of order greater than or equal to 2 associated with the order o ∈𝒪 and the SKU s ∈𝒮(o). The special case where the order of the SL inequality is equal to 1 is studied in the next section. §.§ Special case SL inequalities of order 1. In the general case, the SL inequalities are non-robust, and their number is exponential. However, SL inequalities of order 1 (SL-1 in the following) are robust with the proposed formulation, and their number is polynomial. Let us define the parameter b_or^l that equals 1 if route r ∈ℛ_o stops at location l ∈ℒ, and 0 otherwise. The SL-1 family of inequalities can be expressed as follows:
∑_r ∈ℛ_o b_or^l ρ_or ≥ ξ_ls   ∀ l ∈ℒ, o ∈𝒪, s ∈𝒮(o)
Like constraints (<ref>) of (DW), inequalities (<ref>) link assignment and routing variables. The difference lies in the way stops are counted: a route that stops twice at a location is counted once on the left-hand side of inequalities (<ref>), while it is counted twice in constraints (<ref>). Note that the addition of SL-1 inequalities is crucial in the design of an efficient branching scheme (details will be provided in Section <ref>). Indeed, the following theorem holds. Let X = (ξ,ρ) be an optimal solution of the master problem, strengthened by SL-1 inequalities. If the variables ξ are integral, then X is integral, or there exists an integral solution X' of the same cost. Proof. See Appendix <ref>. Theorem <ref> is illustrated by Figure <ref>. One order is composed of three SKUs (A, B, C), assigned to the three locations (i.e., the ξ variables are integral). On the left, without SL-1 inequalities, the solution is not integer, as two routes are used to pick the order, each with ρ = 0.5, and a total distance of 0.5 · 4 + 0.5 · 6 = 5. On the right, with SL-1 inequalities, the solution uses a single route, for a distance of 6. Since the SL-1 inequalities are of polynomial size and robust, they are all added to the master problem formulation at the beginning of the resolution. §.§ Separation of SL inequalities The SL inequalities are embedded in a cutting plane framework, where only violated cuts are added to cut off fractional solutions. Note that we can decompose the SL separation problem (SL-SEP) into subproblems (SL-SEP)_os defined for each order o ∈𝒪 and SKU s ∈𝒮(o). Each (SL-SEP)_os consists in finding a subset of storage locations ℒ' ⊆ ℒ that leads to the most violated (if any) SL inequality. 
Given an optimal solution (ξ^*_ls, ρ^*_or) of the RMP, and using the binary variables x_l to model whether location l ∈ℒ belongs to the chosen subset ℒ', and y_r to model whether route r ∈ℛ_o stops at one of the chosen locations of ℒ', (SL-SEP)_os, o ∈𝒪, s ∈𝒮(o), can be stated as the following binary integer program:
(SL-SEP)_os  max ∑_l ∈ℒ ξ^*_ls x_l - ∑_r ∈ℛ_o ρ^*_or y_r
s.t.  a_or^l x_l ≤ y_r   ∀ r ∈ℛ_o, l ∈ℒ
      x_l ∈ {0,1}   ∀ l ∈ℒ
      y_r ∈ {0,1}   ∀ r ∈ℛ_o
If the optimal solution of (SL-SEP)_os has a strictly positive value, then there exists an SL inequality that cuts off the solution (ξ^*,ρ^*). If the optimal value of (SL-SEP)_os is non-positive for all o ∈𝒪, s ∈𝒮(o), then there exists no such cut. (SL-SEP)_os is solved by dynamic programming as follows. Let us define 𝒫_l(ℛ') as the optimal value of (SL-SEP)_os restricted to the subset of locations {1,…,l} ⊆ ℒ and to a subset of routes ℛ' ⊆ ℛ_o. The value of 𝒫_l(ℛ') can be computed according to the following recursion:
𝒫_l(ℛ') = max{ 𝒫_l-1(ℛ') ; 𝒫_l-1({ r ∈ ℛ' | a_or^l = 0}) + ξ_ls^* - ∑_r ∈ ℛ' a_or^l ρ_or^* }
with the initialization 𝒫_0(ℛ') = 0. At each step of the recurrence, a binary choice takes place: * The location l is not added to the set ℒ'. In this case, the optimal value over locations {1, …, l} is equal to the optimal value over {1, …, l-1}, i.e., 𝒫_l-1(ℛ'). * The location l is added to the set ℒ'. In this case, 𝒫_l(ℛ') is obtained from 𝒫_l-1 by adding ξ_ls^* minus ρ_or^* for each route r ∈ ℛ' that stops at l, and by removing said routes from ℛ'. Implementation details. The dynamic programming algorithm is solved for each o ∈𝒪 and s ∈𝒮(o) and, in order not to spend a prohibitive amount of time on this procedure, we enhance it as follows. First, the list of locations ℒ is sorted in increasing order of ξ_ls^* so that the locations with the highest potential of generating a violated cut are explored first. This helps to prune the search (see next point). Second, we prune the search in three cases. In the first case, we check whether the sum of the remaining ξ^* is too low to yield a violated cut. In the second case, if all the remaining ξ^* are equal to zero and the current cut is already violated, we stop the search since it is not possible to obtain a deeper cut. In the third case, if all remaining ρ^* are equal to zero, the objective value is increased by the sum of the strictly positive remaining ξ^*. § BRANCH-CUT-AND-PRICE ALGORITHM A Branch-Cut-and-Price (BCP) algorithm is a Branch-and-Bound algorithm in which the dual bound of each node of the tree is computed using the linear relaxation of an extended formulation, solved by column generation, and strengthened by the addition of cutting planes <cit.>. In this section, we describe the main components of the algorithm, namely, the column and cut generation procedure (see Section <ref>), the cut management scheme (see Section <ref>), the primal heuristic used at each node of the Branch-and-Bound tree (see Section <ref>), the branching scheme (see Section <ref>), and the strengthening procedure for branching constraints (see Section <ref>). §.§ Column and cut generation For each order o ∈𝒪, let ℛ_o denote the subset of feasible routes for o currently included in the formulation, and let ℛ = (ℛ_1, ℛ_2, … , ℛ_|𝒪|) denote the collection of these subsets. Let 𝒞_os denote the subset of currently active SL inequalities of order greater than or equal to 2 associated with o ∈𝒪 and s ∈𝒮(o), and let 𝒞 = (𝒞_os)_o ∈𝒪,s ∈𝒮(o). The RMP defined on ℛ, and strengthened by the valid inequalities SL-1 and 𝒞, is noted RMP(ℛ, 𝒞).
(RMP(ℛ, 𝒞))  min ∑_o ∈𝒪 ∑_r ∈ℛ_o c_or ρ_or
s.t. 
∑_s ∈𝒮 ξ_ls ≤ K_l   ∀ l ∈ℒ
∑_l ∈ℒ ξ_ls = 1   ∀ s ∈𝒮
ξ_ls = 1   ∀ (s,l) ∈ℱ
∑_r ∈ℛ_o ρ_or = 1   ∀ o ∈𝒪   (μ_o)
∑_r ∈ℛ_o a_or^l ρ_or ≥ ∑_s ∈𝒮(o) ξ_ls   ∀ l ∈ℒ, o ∈𝒪   (π_lo)
∑_r ∈ℛ_o b_or^l ρ_or ≥ ξ_ls   ∀ l ∈ℒ, o ∈𝒪, s ∈𝒮(o)   (σ_osl)
∑_r ∈ℛ_o δ_r(ℒ_c) ρ_or ≥ ∑_l ∈ℒ_c ξ_ls   ∀ o ∈𝒪, s ∈𝒮(o), c ∈𝒞_os   (λ_osc)
ρ_or ≥ 0   ∀ o ∈𝒪, r ∈ℛ_o
ξ_ls ≥ 0   ∀ l ∈ℒ, s ∈𝒮
Constraints (<ref>) correspond to the SL-1 inequalities, which are all added to the master formulation from the start, and Constraints (<ref>) correspond to the SL inequalities of order greater than or equal to 2 that are active at the current iteration. At each iteration, the pricing problems find columns with a negative reduced cost, which are then added to RMP(ℛ,𝒞), or prove that none exists. The dual variables associated with Constraints (<ref>)-(<ref>) are given in parentheses. The reduced cost c̄_or of route r ∈ℛ_o is defined as follows:
c̄_or = c_or - μ_o - ( ∑_l ∈ℒ a_or^l π_lo ) - ( ∑_l ∈ℒ b_or^l ( ∑_s ∈𝒮(o) σ_osl ) ) - ( ∑_c ∈𝒞 δ_r(ℒ_c) ( ∑_s ∈𝒮(o) λ_osc ) )
The algorithm used to solve the pricing problem is presented in Section <ref>. When there are no more columns with negative reduced costs, the separation problems are solved to find violated SL inequalities (see Section <ref>). Management of infeasible linear programs. Since the SLAPRP is mostly free from operational constraints, feasibility is generally not an issue. However, it might happen that, at a given node of the Branch-and-Bound tree, the current set of columns does not allow to obtain a feasible solution for the restricted master problem RMP(ℛ,𝒞). This may happen when the branching constraints cut off all feasible columns for a given order. To prevent this situation, we introduce a set of super columns that ensure that RMP(ℛ,𝒞) remains feasible at each node of the Branch-and-Bound tree. For each order o ∈𝒪, a super column r_o^SUP is defined as a route stopping in every storage node in 𝒢, i.e., a_l^r_o^SUP = K_l for all l ∈ℒ. In this case, the solution ρ_or_o^SUP = 1 for each order o ∈𝒪 is a feasible solution of RMP(ℛ,𝒞), no matter the value of the assignment variables. However, these super routes are obviously not valid, as they violate Constraints (<ref>) of the compact formulation. The cost of these routes is thus set to a sufficiently large value, so that they are not part of any optimal solution of RMP(ℛ,𝒞) after convergence of the column generation. §.§ Management of the SL cuts The SL inequalities are non-robust with respect to the formulation of the master problem, i.e., they cannot be expressed with the network-flow variables of the compact formulation (C). As a consequence, the addition of one SL cut modifies the structure of the pricing problem: as explained in Section <ref>, the pricing problem is modeled as an ESPPRC and needs an additional resource to account for the cut, which weakens the dominance criterion. To keep the size of the formulation reasonable, and the pricing problems tractable, careful management of the SL cuts is necessary. Cut pool. SL inequalities are added dynamically to the RMP formulation via a cut pool. Whenever an SL inequality is separated, it is kept in the cut pool. Every time RMP(ℛ,𝒞) is solved, the cut pool is checked for violated inequalities. If a violated inequality is found, it is added to the set of active cuts 𝒞, and RMP(ℛ,𝒞) is solved again. At the same time, a procedure determines which of the active SL inequalities in 𝒞 are satisfied at equality in the current solution. If an inequality is not binding, it is removed from the formulation. 
If the cut pool contains no violated inequality, the separation procedure is called. If violated cuts are found, they are added to the cut pool. Then, some of them, depending on the number of separated inequalities (see next paragraph), are added to the set of active cuts 𝒞, and the RMP is solved again. Implementation details. On top of the cut pool management, several related features in our implementation have been crucial in the performance of our algorithm. First, the separation procedure is only called during the resolution of nodes up to a depth of 3 in the Branch-and-Bound tree. A problematic behavior we identified in preliminary experiments is that, for some instances, the separation of the SL inequalities returns a prohibitively large number of violated cuts. If they are all added to the formulation, it becomes too large to handle, making the master or pricing problem intractable. To tackle this issue, we limit at 500 the number of simultaneously active SL cuts for one order. If too many of them are separated, only the most violated are added to the formulation. Experiments show that the feature does not impact much the bound, as the number of active cuts decreases to a manageable amount with the convergence of the column generation. §.§ Primal heuristics At each node of the Branch-and-Bound tree, a primal heuristic procedure is called in order to improve the primal bound of the tree. Indeed, integer solutions of RMP(ℛ,𝒞) are typically encountered deep in the Branch-and-Bound tree. If the starting primal bound is of poor quality, it would remain unchanged for a large part of the running time and prevent an efficient pruning of the tree. To mitigate this effect, a basic constructive heuristic is run at each node of the tree. Since the problem is loosely constrained, a simple heuristic with a low computational burden proved to be sufficient for this purpose. At a given node of the tree, we first solve the column and row generation procedure, then we proceed as follows. We start from a partial solution that consists of the fixed locations ℱ of the instance and those imposed by the branching decisions at the current node. Then we insert SKUs one by one by selecting, at each step, the pair (l,s), l ∈ℒ, s ∈𝒮 that constitutes a feasible update and maximizes the value of ξ_ls in the optimal solution of the RMP at the current node of the Branch-and-Bound tree. In other terms, we assign the SKU associated with the least fractional ξ variable at each iteration. The procedure stops when all SKUs have been inserted. Initial set of columns. At first, a heuristic solution is calculated with the aim of obtaining an initial set of columns. Note that any assignment that enforces the fixed positions of the instance leads to a feasible solution. Thus, the starting solution is the result of a basic Hill-Climbing heuristic, where a random (feasible) solution is improved at each iteration by the exchange of two SKUs. If no such improvement exists, the procedure is stopped. §.§ Branching scheme According to Theorem <ref>, the integrality of the ξ variables guarantees the integrality of the solution, we therefore only need to branch on such variables. Another advantage of branching on the ξ variables is that the pricing problems are unaffected by the branching decisions. Based on these observations, a natural branching scheme, called location branching, would be to branch on the most fractional ξ variable. However, preliminary experiments show that this strategy produces unbalanced search trees. 
To overcome this drawback, we propose a two-level branching strategy, called combined branching. At the first level, we branch on the assignment of SKUs to entire aisles (instead of single locations). Since integer aisle assignments are not sufficient to restore integrality in the solutions, we use the location branching at the second level. Numerical experiments in Section <ref> provide a comparison between the location and combined strategies. Location branching. In this scheme, we branch on a ξ variable. Let (ξ,ρ) be a solution of RMP(ℛ, 𝒞). If this solution is not integer, there exists at least one fractional variable ξ_ls with l ∈ℒ and s ∈𝒮. The variable ξ_ls that maximizes the quantity d_s·min(ξ_ls, 1 - ξ_ls) is selected for branching. Here d_s = | {o ∈𝒪 | s ∈𝒮(o)}| is the demand of s. This means that we choose the most fractional variable weighted by the demand of the associated SKU. Indeed, the ξ variables corresponding to an SKU with a large demand appear in more orders, and therefore in more constraints. Fixing its value is more likely to have a high impact on the solution in the children nodes. We derive two branches, fixing the variable to one (meaning s is assigned to the location l) or fixing it to zero (the assignment is forbidden). Equation (<ref>) provides the corresponding disjunction we impose in the children nodes. (ξ_ls ≥ 1) ∨ (ξ_ls ≤ 0) Combined branching. Let us index by {1,…,A} the different aisles of the layout and let us denote by ℒ^a the set of locations in aisle a = 1,…,A. For an SKU s ∈𝒮 and an aisle 1 ≤ a ≤ A, we propose to branch on whether s will be assigned to a location in aisle a or not, that is, we branch on the following disjunction: (∑_l ∈ℒ^a ξ_ls ≤ 0) ∨ (∑_l ∈ℒ^a ξ_ls ≥ 1). A heuristic picks the candidate aisle a = 1,…,A and the candidate SKU s ∈𝒮 maximizing the following quantity: d_s·min(∑_l ∈ℒ^a ξ_ls , 1 - ∑_l ∈ℒ^a ξ_ls). A sketch of this selection rule is given below. Branching on a single location leads to an unbalanced tree, with the branch where a variable is set to zero being quite weak. This is especially true for the SLAPRP variants where aisles are crossed entirely (i.e., S-shape, midpoint, and largest gap policies): forbidding the assignment of an SKU to a location might lead to a symmetric solution obtained by switching the partial assignment within an aisle. Thus, forbidding the assignment of an SKU to an entire aisle may have a larger impact on the solution, and leads to tighter bounds in the children nodes. Note, however, that the aisle branching is not sufficient to ensure integrality, as a solution can have all SKUs fixed to aisles without being integral. To address this drawback, we use the location branching scheme at the second level of the combined strategy. If the best candidate with aisle branching is associated with a quantity that is below a threshold of 0.25, it means that the SKUs are all close to being fixed on aisles, and we use the location branching scheme instead. Implementation details. In all instances used for the experiments, the layout is a rectangular warehouse with integer distances. If we note D^a the distance between two consecutive aisles and D^b the distance between two consecutive locations, it appears that an integer solution can only have an objective value that is a multiple of 2 · gcd(D^a, D^b) (the proof of this result is straightforward). Therefore, we can round up any valid dual bound accordingly, ensuring a stronger pruning routine and a better gap. 
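To make the selection rule of the combined scheme and the bound rounding concrete, the following minimal sketch in Julia mirrors the quantities defined above. The function names, the dense matrix storage of the ξ values, and the fallback test are illustrative assumptions rather than a description of our actual code.

# Minimal illustration (hypothetical helper, not our actual code) of the combined
# branching selection and of the dual-bound rounding described above.
# xi[l, s] is the value of ξ_ls in the current RMP solution, aisle_of[l] gives the
# aisle of location l, and d[s] is the demand of SKU s.
function combined_branching_candidate(xi::Matrix{Float64}, aisle_of::Vector{Int},
                                      d::Vector{Int}; threshold::Float64 = 0.25)
    nloc, nsku = size(xi)
    naisle = maximum(aisle_of)
    best_score = 0.0
    best_aisle, best_sku = 0, 0
    for s in 1:nsku, a in 1:naisle
        frac = 0.0
        for l in 1:nloc
            if aisle_of[l] == a
                frac += xi[l, s]
            end
        end
        score = d[s] * min(frac, 1.0 - frac)    # quantity defined above
        if score > best_score
            best_score, best_aisle, best_sku = score, a, s
        end
    end
    if best_score < threshold
        # SKUs are almost fixed to aisles: fall back to location branching.
        scores = [d[s] * min(xi[l, s], 1.0 - xi[l, s]) for l in 1:nloc, s in 1:nsku]
        l, s = Tuple(argmax(scores))
        return (:location, l, s)
    end
    return (:aisle, best_aisle, best_sku)
end

# Any feasible objective value is a multiple of 2 * gcd(Da, Db), so a dual bound
# can be rounded up to the next such multiple.
function round_dual_bound(lb::Real, Da::Int, Db::Int)
    step = 2 * gcd(Da, Db)
    return step * ceil(Int, lb / step - 1e-9)
end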
§.§ Strengthening procedure for branching constraints Some instances are characterized by a high degree of symmetry, which may lessen the impact of the branching schemes. In the following, we introduce a procedure to strengthen the branching decisions by breaking symmetries. This method is inspired by the work of <cit.> on isomorphic pruning, and by the orbital branching scheme introduced by <cit.>. The strengthening procedure is a simplified version of these works, applied to a straightforward symmetry. We describe the scheme for a branching disjunction on a single variable (location branching), but the same reasoning remains valid with aisle branching. If two SKUs s^1, s^2 ∈𝒮 appear in the same orders, i.e., 𝒪(s^1) = 𝒪(s^2), where 𝒪(s) denotes the set of orders containing SKU s, we say they are symmetric. This means that, for a valid solution, either integer or fractional, permuting the values of the variables ξ_ls^1 and ξ_ls^2 for all l ∈ℒ leads to another valid solution with the same objective value. This result is straightforward since all routes that need to pick one SKU also need to pick the other. Thus, permuting their locations will not change the chosen routes. We note F^1 the set of variables that have already been branched on, and set to 1, in the current node of the Branch-and-Bound tree, and sym(s) the set of SKUs symmetric with s (including s). Let us consider {s^1,…, s^k } = sym(s)\ F^1, the set of SKUs that are symmetric with s and have not been fixed to one in the current node. Instead of enforcing the disjunction (<ref>) introduced in the variable branching scheme, we consider the disjunction (<ref>), which gives a partition of the integer search space where either one of the SKUs of sym(s) is assigned to l, or all these assignments are forbidden. (ξ_ls^1 ≥ 1) ∨ … ∨ (ξ_ls^k ≥ 1) ∨ (∑_1 ≤ i ≤ k ξ_ls^i ≤ 0). This disjunction contains more than the usual two branches. However, since the SKUs s^1, …, s^k are symmetric, the different branches (ξ_ls^i ≥ 1) for 1 ≤ i ≤ k lead to equivalent problems. In this case, we can consider only one of those branches, e.g., (ξ_ls ≥ 1), and discard the others without loss of generality. The strengthened branching scheme then enforces the following disjunction: (ξ_ls ≥ 1) ∨ (∑_1 ≤ i ≤ k ξ_ls^i ≤ 0). § SOLUTION APPROACH FOR THE PRICING PROBLEM The pricing problems for the SLAPRP are modeled as Elementary Shortest Path Problems with Resource Constraints (ESPPRC), which is a popular design in the column generation literature <cit.>. Note that we convexified the operational details of the problem to obtain a generic formulation, as described in Section <ref>; as a consequence, these details appear in the subproblems. The backbone of the algorithm is the same no matter the configuration. The differences appear in: * the underlying graph 𝒢 that models the warehouse layout; * the resource extension function, and, if needed, additional parameters for the routing policy. This section is organized as follows: We first introduce the problem in Section <ref>. In Section <ref>, we introduce our algorithm and the label definition for the optimal routing policy. The label extension function is described in Section <ref>, and the dominance rules in Section <ref>. Then, we highlight the differences induced by other routing policies in Section <ref>. §.§ Problem description For an order o ∈𝒪, the o^th subproblem consists in finding routes of negative reduced cost, as introduced in Section <ref>, or proving that no such route exists. The pricing problem is defined on the graph 𝒢 introduced in Section <ref>. 
A valid solution consists in an elementary tour on graph 𝒢, which includes exactly |𝒮(o)| + 1 edges, and stops in the mandatory locations corresponding to the SKUs of o that are assigned to fixed positions. Furthermore, we impose what we call the first stop restriction, which enforces a route to stop at a location the first time it passes in front of it. Due to the fact that the distance matrix satisfies the triangle inequality, it is clear that all the routes in any optimal solution satisfy the condition. Note that its enforcement is optional in the problem definition, but it limits the propagation of labels by exploiting the Steiner graph structure of the warehouse layout to discard symmetric labels. A notable difference between the pricing problem of the SLAPRP and classical ESPPRCs for vehicle routing problems lies in the resource constraints. They are all expressed as equality constraints, instead of the more classical capacity or time window constraints: For a path to be valid, the resource corresponding to its length (i.e., the number of edges traversed by the path) must be equal to |𝒮(o)| + 1 at the sink node, and the resources corresponding to the visit of mandatory locations must be consumed. This feature is unusual but does not impact much the solution approach. Indeed, <cit.> highlights that these constraints can be modeled with classical resource extension functions, and all the theory on the ESPPRC remains valid. The pricing problems are solved using a labeling algorithm, where labels represent partial routes extended from the source node (the drop-off point) to the sink node (copy of the drop-off point). A label is extended from a node to its successors using resource extension functions, and a dominance criterion is used to discard labels that cannot lead to optimal solutions. The active SL cuts being non-robust, each of them is managed as a single resource: one active cut is characterized by a set of locations, and the associated dual cost is counted if at least one of these locations is visited by the path. The other constraints being robust, their dual costs simply add additional terms in the distance matrix of graph 𝒢. §.§ Label definition for optimal routing A label L represents a path P starting at the drop-off point in graph 𝒢. It is defined by a vector L = (v,c,q,R,T,F), where v is the last node visited in P, c is the current reduced cost of P, q is the number of edges included in P, R is the location reachability vector, T is the cut reachability vector, and F is the mandatory location vector. Resource R is defined with three possible states for each location l ∈ℒ: (i) R_l = V if location l has been visited by the path, (ii) R_l = R if location l is reachable in the current path, and (iii) R_l = U if location l is unreachable in the current path. Note that, apart from already visited locations, all the other locations are still technically reachable, but the first stop restriction allows us to derive a stronger reachability condition. Resource T is similarly defined with three possible states for each active SL cut c ∈𝒞: (i) T_c = V if the corresponding dual cost is already counted in the reduced cost of P, (ii) T_c = R if the dual cost is still potentially reachable for the path P, and (iii) T_c = U if it is not reachable anymore. Since the SL cuts are non-robust, we need to keep in memory which cuts have been counted in the current path, in order not to account for more than once their contribution to the reduced cost. 
In order to obtain stronger dominance rules, we also check whether a cut is not reachable anymore. Resource F is defined as a vector where each location is associated with the corresponding number of remaining mandatory stops. A valid path consumes the resources in F completely, i.e., the mandatory locations should all be visited, potentially several times if required. §.§ Label extension for optimal routing In this section, we describe the label extension function in the case of optimal routing. Let L^1 = (v^1,c^1,q^1,R^1,T^1,F^1) be a label attached to node v^1 ∈𝒱^0 that is extended to node v^2 ∈𝒱. We note l^1,l^2 ∈ℒ the locations associated with the nodes v^1 and v^2. First, the extension is possible if and only if it satisfies the following conditions: (i) q^1 < |𝒮(o)|, since labels are only extended up to a length of |𝒮(o)|. (ii) (v^1,v^2) ∈ℰ, i.e., there exists an arc from v^1 to v^2. (iii) v^2 is reachable in the current label according to the location reachability vector, i.e., R^1_l^2 = R. (iv) F^1_l^2 > 0 or q^1 + ∑_l ∈ℒ F^1_l < |𝒮(o)|. This condition ensures that L^1 is only extended if l^2 is a mandatory location, or if the length of L^1 allows one to visit all the remaining mandatory stops after visiting l^2. (v) The extension to v^2 does not lead to an infeasible route considering the fixed assignments. In other terms, let us consider that there are already k > 0 SKUs assigned to the location l^2 (either by branching decisions or by fixed assignments of the instance) that are not part of o. Then the route cannot stop more than K_l^2 - k times in l^2. If v^2 satisfies these conditions, then the extension is possible, and the new label L^2 is defined as L^2 = (v^2,c^2,q^2,R^2,T^2,F^2) such that: (i) R^2 ← R^1. (ii) ∀ l ∈ℒ: if l = l^2, then R^2_l ← V; else, if l is located on the shortest path from l^1 to l^2, then R^2_l ← U. The second condition corresponds to the first stop restriction presented in Section <ref>. (iii) T^2 ← T^1. (iv) ∀ c ∈𝒞: if T^1_c = U, then T^2_c ← U. Else, if l^2 ∈ℒ_c (i.e., label L^2 gets the dual value associated with cut c), then T^2_c ← V. Else, if R^2_l = U for all l ∈ℒ_c (i.e., all locations that define cut c are either visited or unreachable in the current path), then T^2_c ← U. (v) F^2 ← F^1. (vi) If F^2_l^2 > 0, then F^2_l^2 ← F^2_l^2 - 1. (vii) c^2 ← c^1 + c(v^1,v^2) + π_l^2o + 1_l^1 ≠ l^2 ∑_s ∈𝒮(o) σ_osl^2 + ∑_c ∈𝒞^* λ_c, where 1 denotes the indicator function and 𝒞^* = {c ∈𝒞 | T^1_c = R and T^2_c = V}. The dual costs are introduced in Section <ref>. §.§ Dominance In this section, we introduce a dominance criterion used to mitigate the proliferation of labels. For two labels L^1 and L^2 that are attached to the same node, L^1 is said to dominate L^2 if: (i) the reduced cost of L^1 is better than the reduced cost of L^2, and (ii) any feasible extension of L^2 is either a. feasible for L^1, or b. dominated by an extension of L^1. With the same notations as in the previous section, we can say that L^1 dominates L^2 if: * v^1 = v^2; * c^1 ≤ c^2 - ∑_c ∈Θ_2\ 1 σ_c, where Θ_2 \ 1 = {c ∈𝒞^* | T^2_c = R, T^1_c ∈{V,U}} is the set of SL cuts that are reachable by an extension of L^2, but unreachable by an extension of L^1; * For all l ∈ℒ, if R^2_l = R then R^1_l = R; * q^1 = q^2; * F^1 = F^2. Note that the number of edges q included in the path, as well as the vector of mandatory stops F, must match to establish dominance between two labels. Since these two resources must be entirely consumed in valid solutions (i.e. 
the constraints are satisfied with equality), two labels with different consumption of these resources do not have the same possibilities of extension, and cannot be compared in terms of dominance. Condition 2 takes into account the potential dual contribution of the reachable SL-cuts and it is inspired by the work of <cit.>. Since the SL cuts are non-robust (see Section <ref>), they act as resources in the pricing procedure. Instead of naively comparing the dominance of L^1 and L^2, condition 2 proposes a stronger alternative where L^2 is discarded if every feasible extension would be dominated by a feasible extension of L^1. Indeed, even when accounting for the dual contribution of the cut-related resources that are consumed in L^1 and not in L^2, the cost of L^2 would still be dominated. §.§ Pricing problem with other routing policies The above subsections present the algorithm developed for the SLAPRP with the optimal routing policy, i.e. the case where the storage and routing problems are fully integrated. In the following, we will describe how to adapt the algorithm to other routing policies on a single block warehouse. In particular, we consider return, S-shape, midpoint, and largest gap. The only parts that are impacted are the underlying graph 𝒢 and the resource extension function. Note that for some policies, additional parameters are required in the label definition to be able to compute the extension of resources. Return routing. In the return routing policy, the distance between two locations is constant and does not depend on the locations previously visited by a partial path. The only required adaptation is to change the graph 𝒢 to account for the modified distances and remove the arcs that are not feasible anymore. For example, in Figure <ref>, it is possible to go from locations a to b, but not the opposite. Note that the aisles are always visited in the same order, and the locations in an aisle are visited from the closest to the drop-off point to the farthest. In this case, it appears that the graph 𝒢 is acyclic, which significantly reduces the computation burden of the algorithm, as the elementary aspect of the ESPPRC is guaranteed by the graph structure. S-shape routing. In the S-shape routing policy (as well as in those that follow), the distance between two locations is not constant. As shown in Figure <ref>, if we want to extend a label from location c to location d, the path followed by the picker depends on whether aisle 2 was entered by the front or the back cross aisle. Thus, an additional parameter is associated with each label, encoding if the current aisle was entered by the front or back cross aisle. When a label is extended to a new aisle, the value of the parameter is switched. The value of this parameter affects both: (i) The reachability of a location, for example, an extension from location b to c is only feasible if aisle 2 has been entered by the front; (ii) The distance between two locations, which depends on the orientation of the picker. Note that similarly to the return policy, the size of 𝒢 is also reduced as some arcs become infeasible. Midpoint routing. With the midpoint routing policy, the first and last aisles are crossed entirely. In the other aisles, the middle point is never crossed: the locations above it are visited by a path coming from the back cross aisle, and the locations below it are visited by a path coming from the front. If a single aisle is visited, a return policy is applied, in order not to create absurd routes. 
Therefore, the route development differs for the first and the last aisles. To account for this difference, two additional parameters are added to the label definition to register which aisle is the first (resp. last) one visited in the current path. The values of these two parameters are updated as follows: * First aisle: The parameter is set during the first extension and is not modified afterward. All the locations in previous aisles are set unreachable in vector R. * Last aisle: The parameter is set during the first extension to a location that is below the midpoint, in an aisle that is not the first one. For example, in Figure <ref> we suppose that a label is attached to location a. If the label is extended to location c, above the midpoint, nothing happens. If it is extended to location b, below the midpoint, the second aisle is registered as the last one. Once the last aisle is set, some locations become unreachable, namely the ones above the midpoint in previous aisles, and all locations in the following aisles. For example, if a path visited locations a and b in Figure <ref>, location d would not be reachable anymore. If only one aisle is visited in the route, this parameter remains undefined. Largest gap routing. The modeling of the largest gap policy is more intricate, due to the need to consider only feasible patterns in each aisle. The details are presented in Appendix <ref>. § COMPUTATIONAL EXPERIMENTS In this section, we report a summary of the experiments performed with the developed algorithm on the SLAPRP and its variants. The algorithm is coded in Julia 1.7.2, and CPLEX 12.10 is used to solve the linear programs. We set CPLEX to use the dual simplex method; all other parameters use the default settings. The experiments are performed in single-thread computation on an Intel Xeon E5-2660 v3 with 10 cores at 2.6 GHz, with a 16 GB memory limit. In all the experiments, the integrality gap is computed as 100·(z^UB - z^LB)/z^UB, where z^LB is the value of the lower bound and z^UB the value of the upper bound. All computing times are expressed in seconds. The detailed computational results, as well as the instances used, are available at <https://zenodo.org/record/7866860>. The source code for the implementation is available at <https://github.com/prunett/SLAPRP>[The repository will be uploaded upon publication]. In this section, we first describe the instances that are used to evaluate the performance of the procedures (see Section <ref>), then show the impact of the SL cuts (see Section <ref>) and the impact of the proposed branching rule (see Section <ref>). We then compare our results with state-of-the-art results from the literature (see Sections <ref>-<ref>). §.§ Benchmark instances Two sets of instances from the literature are used in the computational experiments; they are briefly described in the following. Instances of <cit.>. These instances use a classical single-block layout, with the number of aisles in the range a∈{1,3,5}, each of them comprising b∈{5,10} storage locations. The number of SKUs then ranges between 10 and 100. The demand is characterized by a number of orders |𝒪| ∈{1,5,10}, each of them containing the same number of products |𝒮(o)| ∈{3,5}, sampled with a uniform distribution. A total of 108 instances are then solved with the classical routing policies, namely optimal, return, S-shape, midpoint, and largest gap. Section <ref> uses the full set of instances, with optimal routing. 
Since the instances with a single order can be solved by hand, and are all solved to optimality at the root node by our formulation, they are excluded from the benchmark set in Sections <ref> and <ref>. Instances of <cit.>. These instances are largely inspired by a real-life warehouse. The studied problem is a variant of the SLAPRP that focuses on the replenishment problem. The set of fixed positions ℱ is nonempty; more precisely, a fraction α∈{20%,30%,40%} of the SKUs have free locations, while the other ones are already in stock in the picking area. The demand is modeled by a number of orders |𝒪| ∈{50,100,200}, each of them comprising a random number of SKUs in the range |𝒮(o)| ∈{1,…,10}. The layout is also inspired by the studied company, with a double-block layout with the drop-off point located at the beginning of the middle cross-aisle. A total of 80 SKUs are stored in the layout, and a return routing policy is used. §.§ Impact of the SL cuts Table <ref> reports on the quality of the different formulations for the SLAPRP considering the optimal routing. It also reports the impact of the SL inequalities on the quality of the lower bound. Results are obtained on the instances proposed in <cit.>. Column (LP) reports the integrality gap of the linear relaxation for the compact formulation (<ref>)-(<ref>). (LP - MCF) presents the LP relaxation when using the multi-commodity flow formulation of Appendix <ref>. (DW) reports the integrality gap of the extended formulation (<ref>)-(<ref>). (DW + SL1) reports results obtained with the extended formulation strengthened with all and only SL-1 inequalities, while (DW + SL) reports results obtained with the extended formulation strengthened with SL cuts of any order. Note that to obtain the results of Table <ref> we decided to include an SL inequality only if it is violated by at least 0.01. For each formulation, the column gap reports the integrality gap. The column closed shows how much the integrality gap is closed compared to the linear relaxation. The entries are computed as 100(z^DB - z^LP)/(z^* - z^LP), where z^DB is the current dual bound, z^LP the linear relaxation, and z^* the optimal (or best known) solution for the instance. The results show that the compact formulation presents a poor relaxation, with an average gap of 64.7% over all the instances for the compact formulation (2)–(12) when subtour constraints are expressed in the MTZ form, and 27.6% when using the formulation reported in Appendix A, where subtour constraints are expressed in the MCF form. The DW reformulation is effective in tightening the formulation, with 89.2% of the gap closed. Note that all the instances with a single order are solved to optimality by the reformulation. Concerning the SL inequalities, they prove to be effective by closing an additional 4.3% of the gap when added to the DW formulation, which represents a closing of 39.5% of the remaining gap. The SL-1 inequalities are also beneficial to strengthen the DW formulation, by closing 9.3% of the integrality gap. These experiments show the strength of the proposed formulation; however, the integrality gap remains challenging to close for large instances. §.§ Impact of the branching schemes This section reports on the impact of the different branching schemes on the size of the Branch-and-Bound tree. These experiments are performed on the instances of <cit.>, where we removed the instances with a single order. We consider the optimal routing policy and allow for 2 hours of computational time.
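For clarity, the gap and closed-gap statistics reported in the tables of this section can be reproduced from the bounds as in the following minimal sketch (the paper's implementation is in Julia; this Python illustration and its function names are our own, not part of the released code):

```python
# Illustrative sketch of the two statistics used in the experimental tables,
# following the definitions given in the text. All names below are our own.

def integrality_gap(upper_bound: float, lower_bound: float) -> float:
    """Integrality gap in percent: 100 * (UB - LB) / UB."""
    return 100.0 * (upper_bound - lower_bound) / upper_bound

def gap_closed(dual_bound: float, lp_relaxation: float, best_known: float) -> float:
    """Share of the LP-relaxation gap closed by a stronger dual bound, in percent."""
    return 100.0 * (dual_bound - lp_relaxation) / (best_known - lp_relaxation)

# Example: an instance with best known solution 250, LP relaxation 150 and a
# DW dual bound of 235 has 85% of the root gap closed.
print(gap_closed(dual_bound=235, lp_relaxation=150, best_known=250))  # 85.0
```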
Table <ref> shows the results with 4 different settings: Location and Combined consider the respective branching schemes, while Location + sym and Combined + sym also take advantage of the symmetry-breaking procedure presented in Section <ref>. For each configuration, column opt reports the number of instances solved to optimality. Columns gap, nodes and time respectively report the average gap, the average number of nodes processed, and the average computing time, computed over the instances that are not solved to optimality by all the configurations. Note that this ensures a fair comparison, since the averages are taken over the same set of instances for every setting. Since the combined branching is equivalent to the location branching for instances with a single aisle (i.e., the first two layouts), we only report the results for one of the two schemes on such instances in Table <ref>. The obtained results show that the combined branching scheme increases the number of solved instances, and provides lower optimality gaps for the non-solved instances. For the instances with more than one aisle, the average tree size for closed instances is 229 for location branching, and 87 for combined branching, i.e., a reduction of 62%. The gap for non-solved instances is also reduced from an average of 9.7 to 3.9 when using the combined strategy. The strengthening procedure appears to be effective in limiting the size of the search tree, with the average number of processed nodes reduced from 5669 to 1994 for the instances with a single aisle, i.e., a reduction of 65%. For the instances with multiple aisles, the number of processed nodes is reduced by 9% for location branching and 13% for combined branching. Overall, the combined scheme with the strengthening procedure is the best-performing branching procedure. §.§ Comparison with Silva et al. (2020) In this section, we test our algorithm on the benchmark instances of <cit.> with more than one order. The authors propose compact formulations for the SLAPRP with a single block warehouse and its variants, where the routing aspect of the problem is tackled by heuristic policies (i.e., return, S-shape, midpoint, and largest gap). Table <ref> presents the results of three solution methods. BCP designates the Branch-Cut-and-Price algorithm. CPLEX (Silva et al.) presents the results obtained by solving the MILP formulations introduced by <cit.> using CPLEX. CPLEX (Prunet et al.) shows the results obtained by solving our MILP formulations using CPLEX. Note that our formulations have several differences from those of <cit.>: i. For the optimal routing, we use formulation (MCF) instead of the cubic formulation of <cit.>. ii. For the midpoint policy, the set of rules defining the policy in <cit.> is more intricate than what is commonly done in the literature <cit.>. Therefore, we propose a simpler model, consistent with the order-picking literature. We stress that, for this policy, the studied problem is not the same as in <cit.>, and the results cannot be compared. iii. For the other policies, <cit.> propose non-linear formulations that use quadratic terms and if-then-else constraints. In Appendices <ref>–<ref>, we propose linearized alternative formulations only using indicator constraints for conditional statements. For each solution method, we report (opt) the number of instances solved to optimality, the average gap (gap), and the average computing time (time). Note that the time limit for computation is fixed to 2 hours.
For the BCP, we also report the average number of processed nodes (nodes), and generated SL cuts (cuts). The column Δ_opt computes the average gap between the best-known solution for the routing policy at hand, and the optimal routing, computed as 100(z^policy - z^opt)/z^opt. To ensure a fair comparison, we apply to the CPLEX runs the post-processing routine introduced in Section <ref> (Implementation details) to strengthen the lower bounds. The BCP algorithm shows promising results on this benchmark set. For the completely integrated problem, where the routing is solved to optimality, the BCP clearly outperforms CPLEX with 57 instances solved to optimality, compared to none from the formulation of <cit.>, and 40 for our compact formulation. For the other routing policies, the results are more nuanced: The BCP performs better than CPLEX on the number of instances closed to optimality with the return routing, worse on S-shape routing, and similarly on the midpoint and largest gap routing. These results are unsurprising, as the S-shape policy is a very straightforward heuristic that simplifies drastically the structure of the problem when leveraged with an ad hoc formulation. Note, however, that for all policies the BCP provides better gaps on average than the compact formulations, even when closing fewer instances to optimality. There are actually only 14 instances out of 272 where CPLEX returns a better dual bound at the end of the run, mostly with midpoint and S-Shape policies. Experiments show that CPLEX seems to perform better at finding good quality primal bounds, but struggles to close the gap when the initial dual bound is far from the optimum. Overall on the whole set, we found 115 new optimal solutions out of 162 previously unsolved instances, without counting the ones for the midpoint policy. Looking at the column Δ_opt, it appears that solving the routing problem to optimality leads to savings of around 5% compared to the S-shape, midpoint, and largest gap policies. For the return policies, the results are very similar. This is due to the small size of the instances that leads to the optimal routing behaving similarly to the return routing in most cases. §.§ Comparison with Guo et al. (2021) Table <ref> shows the results of different algorithms on the benchmark instances of <cit.>. The first three columns characterize the instance, with the proportion of free assignments α, the number of orders, and n the number of instances of each type. Then we present the results of three solution methods: the MILP formulation introduced by <cit.>, dedicated to their specific instance of the SLAPRP and solved using CPLEX, a dynamic programming algorithm (DP) developed by <cit.> and our BCP algorithm. For each algorithm, we report the number of instances solved to optimality (opt), and the computing time (time). The results show that CPLEX hardly scales with the size of the problem in this configuration, and runs out of memory on 40 of the 90 instances. The DP and BCP algorithms both close all the instances to optimality. In terms of running time, both algorithms perform equivalently on the small instances (α = 0.2), but the BCP clearly outperforms the DP by several orders of magnitude when scaling to larger instances with α = 0.4. We believe this is due to the increased combinatorics of the large instances. 
For small values of α, most of the storage plan is already fixed, and the enumeration of the feasible solution space is still tractable (e.g., for α = 0.2 the total number of solutions is 8! = 40 320), despite a challenging objective function with up to 200 orders, which is not the case for large instances. Note that the BCP algorithm closes all the instances at the root node, avoiding the burden of enumeration in this case. § CONCLUSION In this paper, we introduced a new exact solution approach to solve the SLAPRP and a large class of its variants, including other warehouse layouts, the most common heuristic routing policies (i.e., return, S-shape, midpoint, and largest gap), and the partial replenishment variant of the problem. The developed approach is based on a Dantzig-Wolfe decomposition, where operational considerations are convexified in the pricing subproblems. Adapting our algorithm to a new variant of the problem only requires minor adjustments of the underlying graph and the resource extension function used in the pricing routine. The formulation is further tightened by the introduction of a new family of non-robust valid inequalities. The problem structure is exploited by the introduction of a novel branching scheme, further strengthened by a symmetry-breaking routine. These two contributions lead to a significant reduction in the Branch-and-Bound tree size (more than two-thirds on average). The developed Branch-Cut-and-Price algorithm is benchmarked on the instances from <cit.>, solving to optimality 115 previously unclosed instances. For the instances from <cit.>, which were already closed, our BCP algorithm scales better with the number of orders, being several orders of magnitude faster than the previous state-of-the-art algorithm on large instances. In the future, we expect further research on integrated problems in warehousing logistics to answer the challenges of e-commerce, as has been highlighted by other authors <cit.>. A natural extension of the present work is the development of a matheuristic able to tackle industrial-scale instances, as well as the extension of our algorithm to other variants of the SLAPRP (e.g., several block warehouses), which is left for future research. In Section <ref>, we briefly discuss alternative decomposition methods for the SLAPRP formulation (i.e., Benders decomposition and variable splitting) that could lead to further methodological improvements for the problem. Finally, the integration of the batching decisions, which have been disregarded in the present work, alongside the storage and routing optimization is a promising direction for future research, which can further improve the understanding of the interactions between decision problems in the picking area. §.§ Acknowledgments This work has been supported by the French National Research Agency through the AGIRE project under the grant ANR-19-CE10-0014[<https://anr.fr/Projet-ANR-19-CE10-0014>]. This support is gratefully acknowledged. The authors thank Prof. Valeria Borodin for useful feedback on the problem definition and comments on an earlier draft of this paper.
http://arxiv.org/abs/2407.13185v1
20240718054824
KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter
[ "Yifan Zhan", "Zhuoxiao Li", "Muyao Niu", "Zhihang Zhong", "Shohei Nobuhara", "Ko Nishino", "Yinqiang Zheng" ]
cs.CV
[ "cs.CV" ]
Y.Zhan, Z.Li et al. The University of Tokyo Kyoto University Shanghai Artificial Intelligence Laboratory KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter Yifan Zhan1 Zhuoxiao Li1 Muyao Niu1 Zhihang Zhong3 Shohei Nobuhara2 Ko Nishino2 Yinqiang Zheng1 (corresponding author) July 22, 2024 ================================================================================================================================== § ABSTRACT We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering. Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions. We introduce a novel plug-in Kalman filter guided deformation field that enables accurate deformation estimation from scene observations and predictions. We use a shallow Multi-Layer Perceptron (MLP) for observations and model the motion as locally linear to calculate predictions with motion equations. To further enhance the performance of the observation MLP, we introduce regularization in the canonical space to facilitate the network's ability to learn warping for different frames. Additionally, we employ an efficient tri-plane representation for encoding the canonical space, which has been experimentally demonstrated to converge quickly with high quality. This enables us to use a shallower observation MLP, consisting of just two layers in our implementation. We conduct experiments on synthetic and real data and compare with past dynamic NeRF methods. Our KFD-NeRF demonstrates similar or even superior rendering performance within comparable computational time and achieves state-of-the-art view synthesis performance with thorough training. GitHub page: <https://github.com/Yifever20002/KFD-NeRF>. § INTRODUCTION Neural Radiance Fields (NeRF) <cit.> have demonstrated outstanding success as a versatile and accurate 3D representation of real-world scenes, which has led to their wide adoption in daily and industrial applications in numerous domains. One of the remaining key desiderata for NeRFs is their extension to dynamic scenes. Existing dynamic NeRF methods can be broadly categorized into two approaches. One is to learn deformation fields for motion warping (e.g., D-NeRF <cit.> and TiNeuVox <cit.>). Another is to disregard motion priors and directly interpolate time in the feature space (e.g., DyNeRF <cit.> and KPlanes <cit.>). These approaches, however, often overlook the characteristics of a dynamic radiance field as a time-state sequence, missing the opportunity to fully leverage temporal contextual information. In this work, we draw inspiration from control theory and model the 4D radiance field as a dynamic system with temporally-varying states. Our state estimates of this dynamic system come from two sources of knowledge: observations based on the input data and predictions based on the system's physical dynamics. Optimal state estimates cannot be achieved with only one of these two sources of knowledge. On the one hand, observations, as commonly used in previous dynamic NeRF works, are inherently error-prone due to the discrete temporal sampling of the dynamic scenes. On the other hand, predictions are governed by the correctness of the assumed kinematic model and may struggle to maintain accuracy for real dynamic scenes.
To maximize combined potential of both observations and predictions, we introduce an efficient plug-and-play Kalman filter <cit.> module to optimize state estimations of our dynamic system. In <ref> we illustrate our plug-in Kalman filter guided deformation field. We model the 4D radiance field as a single-state system, with the state denoted as dx_t_i for the current frame deformation. In contrast to a vanilla deformation field that only considers system observations in the current frame, our approach incorporates richer information from previous frames by introducing a prediction branch based on a motion equation. Given the absence of prior trajectory regarding the scene's motion, we employ local linear motion. Both observations and predictions are weighted using a learnable Kalman gain to calculate precise deformation estimations. During the initial stages of training, predictions primarily influence the process, facilitating the convergence of frames with substantial motion. In the later stages of training, observations take precedence, allowing for the recovery of more precise and fine-grained motion details. All points in the real space are warped to a time-independent canonical space according to the estimated deformations. To further improve the performance of our two-branch deformable model, we employ an efficient tri-plane spatial representation for encoding our canonical space. Experimental evidence shows that this representation permits a shallower observation MLP with only two layers in our implementation. At the same time, we improve the warping ability of the observation MLP by regularizing the learning of the radiance field in the canonical space. In summary, our contributions are as follows: 1) The first method for modeling 4D radiance fields as dynamic systems by integrating a Kalman filter into the deformation field formulation, which results in a plug-in, efficient method for estimating deformations; 2) KFD-NeRF, a novel deformable NeRF with the Kalman filter plug-in and a tri-plane spatial representation, trained with a novel strategy of gradually releasing temporal information to facilitate learning dynamic systems; 3) and regularization in the canonical space for enhancing the learning capacity of a shallow observation MLP. We achieve state-of-the-art results on both synthetic and real data with all these designs, compared to dynamic NeRFs. § RELATED WORKS Neural Radiance Fields Representation. NeRF <cit.> represents 3D scenes based on an implicit representation encoded by an MLP. Given multi-view images and corresponding camera poses as input, the MLP is trained by “analysis by synthesis,” achieving novel view synthesis and coarse geometry reconstruction. Vanilla NeRF leaves room for improvement that has attracted abundant research. Many methods <cit.> use sparse voxel grids to represent the 3D scenes. While grid-based modeling achieves fast training and rendering, fine details require high-resolution grids leading to excessive memory consumption. The optimization process for grids is also unstable, failing to leverage the smoothness bias brought by MLP. Instant-NGP <cit.> proposes an efficient hash encoding for mapping spatial and feature domains. For sparsely-observed regions, however, this one-to-many mapping can easily introduce noise. The tri-plane representation <cit.> significantly reduces the memory footprint by leveraging its low-rank decomposition. We use this representation to build a fine-detailed canonical space. Dynamic Neural Rendering. 
For dynamic scenarios, modeling the time dimension together with the spatial representation becomes the main challenge. One approach to dynamic NeRF <cit.> is to add timestamps as an extra dimension to the 3D space and formulate 4D interpolation. Though intuitive, this leads to temporal incoherence due to the lack of supervision and priors between frames. Another approach <cit.> is to model the motion with deformation fields that warp points in arbitrary frames to the canonical space conditioned on the timestamps. This design can take advantage of the MLP smoothness bias in accordance with the motion smoothness prior. Nevertheless, deformation fields fail to find correspondences in frames with significant motion. We emphasize the performance improvement resulting from the switch in backbone spatial representation from D-NeRF <cit.> (MLP-based) to TiNeuVox <cit.> (voxel grids based) and later analyze the impact of different spatial representations on the deformation field. We also compare with the latest point-based 4D rendering works <cit.> inspired by 3D Gaussian Splatting <cit.>. Deep Kalman Filter. The combination of deep learning and Kalman filtering has been explored to address the challenges of incomplete observations in various scenarios. Some works <cit.> use Convolutional Neural Network (CNN)-based Kalman filtering for video compression and camera localization. Recurrent Neural Network (RNN)-based Kalman filters have also been used in some studies <cit.> to improve state estimation and to optimize pose regularization. The Transformer in <cit.> enables a more comprehensive exploitation of temporal contextual information. All of these works along with <cit.>, however, model dynamics only in the learned latent space without taking into account physical priors, with the exception of <cit.>, which fuses known physical priors and network outputs to construct a Kalman filter for video prediction. In this paper, for the first time, we employ a neural Kalman filter to assist in the dynamic NeRF task. We derive a two-stream method consisting of a shallow MLP and physical priors to achieve efficient motion estimation. § PRELIMINARIES §.§ NeRF and Volume Rendering Revisited NeRF <cit.> consists of three parts: sampling, volume mapping, and rendering. In the sampling process, points x∈ℝ^3 are sampled along rays calculated from the camera pose. Then, in the volume mapping process, each x in the 3D world, as well as its viewing direction v∈ℝ^3, is queried to output the volume density σ and radiance c=(r,g,b) of x. Finally, in the rendering process, the color of each ray is computed with volume rendering <cit.>. The expected color of the ray r(s)=o+sv becomes <cit.> C(r)=∫_s_n^s_f T(s)σ(r(s))c(r(s),v)ds , where s_n and s_f are the near and far bounds, respectively, and T(s)=exp(-∫_s_n^s σ(r(k))dk) . The only supervision for training NeRF comes from the ground truth color C_gt(r) of each ray: ℒ_image=∑_r∈ℛ ‖C(r)-C_gt(r)‖_2^2 , where ℛ is the ray batch. §.§ Kalman Filter Consider a dynamic system with input u_t, output y_t, process noise n_t and measurement noise m_t. We want to obtain its state x_t at each frame t, assuming the state equation follows x_t=Ax_t-1+Bu_t+n_t , and the output equation follows y_t=Cx_t+m_t , where A, B and C are the system matrix, control matrix and output matrix, respectively. There are two methods for obtaining the system's state x_t at any given t: observation and prediction.
The observation method relies directly on y_t in <ref>, while the prediction method mathematically models the system dynamics to predict the state. Both of these methods, however, come with non-negligible errors due to the noise terms m_t and n_t and to inaccuracies in the system model. The Kalman filter algorithm posits that the state x_t is obtained by a weighted sum of the prediction based on the state x_t-1 and the observation y_t. For a specific t, estimating the state x_t can be divided into two steps: a prediction step and an update step. Assuming that the process noise and measurement noise follow zero-mean Gaussian distributions with variances Q and R, respectively, the prediction step first calculates a predicted state based on the estimated state at t-1, x̂_t^-=Ax̂_t-1+Bu_t , and its error covariance P_t^-=AP_t-1A^T+Q . In the update step, this predicted state is combined with the observation y_t to obtain the updated state estimate x̂_t=x̂_t^-+K_t(y_t-Cx̂_t^-) , and its error covariance P_t=(I-K_tC)P_t^- , where K_t=P_t^-C^T(CP_t^-C^T+R)^-1 is the Kalman gain, reflecting the weights assigned to the observation and prediction components. § METHOD <Ref> illustrates the three stages of the complete pipeline of KFD-NeRF. In this section, we will first analyze the advantages of using deformation fields as the motion representation over feature interpolation. We will then introduce KFD-NeRF based on the Kalman filter to achieve accurate deformation estimations. Finally, we will discuss spatial reconstruction details, the training strategy, and the incorporation of regularization. §.§ Motion Representation: Deformation Fields vs. Feature Interpolation 4D NeRFs generally employ two approaches to temporal modeling: motion deformation fields or time-conditioned feature interpolation. A deformation field D(x,t)→Δx calculates the deformation Δx at each timestamp t, which can be used to warp a 3D point at t to its corresponding position in the canonical space. In contrast, time-conditioned feature interpolation F(x,t)→f directly learns a feature vector f from a given 3D point at t, which is sent to a decoder for RGB and density calculation. In <ref>, we illustrate the difference between these two methods with a motion example from time t=0 to time t=1. Given a 3D point P in the real-world space, we can identify its corresponding four points in the canonical space at four different timestamps, named P_1, P_2, P_3, and P_4. The first plot shows the feature to be learned at P, which changes over time. Notice that when the radiance undergoes abrupt changes, the feature space exhibits many high-frequency signals, which are hard to fit. The second plot shows the backward deformation from the real-world space to the canonical space to be learned at P, which changes over time. These signals are smoother and easier to fit by leveraging the motion's smoothness prior. MLP representations have an inductive smoothness bias and are well-suited for learning such deformations. Hence, we employ MLP-based deformation fields to represent motion in this work. §.§ Kalman Filter Guided Motion Prediction Next, we derive the system equations for the dynamic radiance field following <ref>. As we are trying to model non-rigid motions, each 3D point in the dynamic radiance field can independently be seen as a dynamic system, with its velocity v_t_i at each frame t_i as a single input u_t. Also note that this dynamic system is a single-state system since we only focus on estimating the deformation at each frame t_i, denoted as dx_t_i.
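To make the single-state recursion concrete, the following is a minimal scalar sketch of the predict/update steps above, written with the deformation state and velocity input of this setting in mind (illustrative Python with our own variable names and noise values; it is not the actual KFD-NeRF implementation, in which the gain and noise-related terms are produced by learned networks):

```python
# Minimal scalar Kalman filter for a single deformation state dx with
# velocity input v (illustration only; names and noise values are assumptions).

def kalman_step(dx_prev, P_prev, v, dt, y_obs, Q=1e-4, R=1e-2):
    # Prediction step: locally linear motion, dx_t^- = dx_{t-1} + dt * v
    dx_pred = dx_prev + dt * v          # A = 1, B = dt
    P_pred = P_prev + Q                 # predicted error covariance

    # Update step: fuse the prediction with the observation y_obs (C = 1)
    K = P_pred / (P_pred + R)           # Kalman gain
    dx_est = dx_pred + K * (y_obs - dx_pred)
    P_est = (1.0 - K) * P_pred
    return dx_est, P_est

# Example: one filtering step
dx_est, P_est = kalman_step(dx_prev=0.10, P_prev=0.05, v=0.2, dt=0.1, y_obs=0.13)
```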
In the next step, we determine the coefficient matrices A, B, and C in <ref>, each degenerating to a scalar under a single-input and single-state system. As we directly observe dx_t_i, the output matrix C becomes 1. Since the motion trajectory is unknown, we use a locally-linear motion model to describe this dynamic system: dx_t_i = dx_t_i-1 + Δt_i·v_t_i (A=1 and B=Δt_i). Further considering noise, our state and output equations become dx_t_i = dx_t_i-1 + Δt_i·v_t_i + n_t_i and y_t_i = dx_t_i + m_t_i . As shown in <ref>, our designed deformation field consists of two parts: an MLP-based observer and a system-dynamics-based predictor. As for the observer, by querying 3D point coordinates x and timestamps t_i, a two-layer shallow MLP is used to output the mean observation y_t_i (assuming zero-mean Gaussian measurement noise m_t_i) and noise-related terms ε_t_i. For the predictor, we follow the locally-linear motion model and first calculate the input v_t_i. Because the deformation of the current frame t_i is being estimated, we only use the information from frame t_i-1 and frame t_i-2 to approximate the velocity v_t_i = (dx_t_i-1-dx_t_i-2)/Δt_i . Based on the estimated velocity, we compute the predicted deformation state d̂x̂_t_i^-. The final estimation d̂x̂_t_i is the sum of d̂x̂_t_i^- and y_t_i weighted by a Kalman gain K_t_i. The Kalman gain in <ref> is calculated from the current measurement noise and the past process noise. In our implementation, we obtain K_t_i using a linear layer that takes the noise-related terms ε_t_i and ε_t_i-1 and timestamps t_i and t_i-1 as inputs, incorporating historical information. <Ref> summarizes this deformation estimation. The volume is warped to the canonical space based on the estimated deformation, which is further decoded to obtain density σ and color. We compute the loss by comparing the rendered values with the ground truth and use it to update the network, following <ref>. In addition, we note that the estimated deformation serves as good supervision for learning the observation MLP, in analogy with the observation update step of the Kalman filter. Therefore, we minimize the error between the current observation and the estimated deformation with ℒ_kf=1/|𝒩|∑_x∈𝒩 ‖y_t_i-d̂x̂_t_i‖_2^2 . The weighted y_t_i and d̂x̂_t_i^- stand for two sources of knowledge from the whole system. y_t_i represents the state directly observed by the observation MLP, which lacks the information from past frames and thus has low confidence in the early stages of training. In contrast, d̂x̂_t_i^- represents the current frame information predicted from history, which provides a good prior in the early stages of training. In the later stages of training, however, the confidence of the predictor drops significantly due to the inherent errors of the modeled motion dynamics, which is gradually compensated for by the well-learned observation MLP. Our Kalman filter guided model automatically strikes a balance: it leverages d̂x̂_t_i^- in the early stages to accelerate convergence and avoid local minima, and relies on y_t_i in the later stages to learn fine details. §.§ Spatial Representation MLP-based spatial representations suffer from slow convergence and require a deeper deformation network (e.g., an eight-layer MLP in <cit.>) to model 4D scenes. Otherwise, the deformation field can get stuck in a local optimum before learning the radiance field in the canonical space.
Voxel grids based spatial representations converge very quickly (e.g., TiNeuVox <cit.>) but demand high spatial resolution to store fine details, which incurs a significant memory footprint. Under memory constraints, achieving high resolution in the canonical space can be challenging, resulting in lost scene details and deteriorated warping quality for each frame. We employ a tri-plane representation, which uses three sets of mutually orthogonal 2D planes to represent 3D space. This low-rank model allows for rapid convergence while significantly reducing memory consumption, making it possible to achieve fast and high-quality canonical space reconstruction. Moreover, some errors due to coordinate shifts are inevitable when x' is encoded by a finite-resolution tri-plane. To enhance the raw coordinate information, we follow <cit.> by concatenating the encoded tri-plane features with the raw coordinate inputs. §.§ Training Strategy and Regularization Our model takes into account temporal information, so the learning of the current frame partially relies on the results of previous frames. To ensure that previous frames can offer ample priors, we employ a training strategy of gradually releasing the training images in chronological order. We also notice that the lack of constraints or priors in the canonical space can affect the performance of the observation branch, which could be regularized by pre-setting the shape in the canonical space. D-NeRF <cit.> sets frame t_0 to be the canonical space and forces the deformation output of frame t_0 to be masked to 0. Such a hard mask, however, does not allow learning of the deformation at frame t_0 and may cause the canonical space to be too complex for motion warping. We instead design a soft regularization term that penalizes the difference between the canonical space and the radiance field in the real world at frame t_0 to improve the observation branch. The canonical observation loss of a point batch 𝒩 is ℒ_co=1/|𝒩|∑_x∈𝒩1(t)‖dx‖ , where 1(t)=1 when t=0 and 1(t)=0 when t≠0. We use a proposal sampling strategy from Mip-NeRF 360 <cit.> by distilling the density field for occupancy estimation. This online distillation necessitates a loss function ℒ_prop to ensure consistency between the proposal network and our learned model. Please refer to Section 3 in <cit.> for its detailed definition. The total variation loss ℒ_tv is a common regularizer in inverse problems, which encourages sparse edges in space. We apply this loss to each of our tri-planes to obtain ℒ_tv(𝐱'), where 𝐱' are warped 3D points in the canonical space. The total variation loss is ℒ_tv(𝐱)=1/|C|∑_c, i, j(‖𝐱_c^i, j-𝐱_c^i-1, j‖_2^2+‖𝐱_c^i, j-𝐱_c^i, j-1‖_2^2) , where c is a certain plane from the tri-plane collection C and i, j are indices on the plane's resolution. The final loss function becomes ℒ= ℒ_image + ℒ_kf + ℒ_co + ℒ_prop + λ_tvℒ_tv , and we experimentally choose λ_tv to be 1×10^-4. § EXPERIMENTAL RESULTS §.§ Dataset For synthetic data, we use the Dynamic NeRF Synthetic Dataset provided by D-NeRF <cit.>, whose training and testing splits have already been well-organized. For each scene in the synthetic dataset, a photo from an arbitrary view with the corresponding camera pose is provided at each timestamp. For real data, we use the Nvidia Real Dynamic Scenes Dataset <cit.>, which consists of 8 dynamic scenes recorded by 12 synchronized cameras. We use 11 camera videos for training and the remaining one for testing.
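As a brief implementation-level aside, the tri-plane total-variation regularizer introduced in the previous subsection could be computed roughly as follows (an illustrative PyTorch-style sketch; the tensor layout, names, and the use of a mean rather than a sum are our own simplifications, not the authors' code):

```python
import torch

def triplane_tv_loss(planes):
    """Total variation over a list of 2D feature planes (illustration only).

    `planes` is assumed to be a list of three tensors of shape [C, H, W],
    one per axis-aligned plane of the tri-plane representation.
    """
    loss = 0.0
    for p in planes:
        # squared differences between neighbouring grid entries along both axes
        loss = loss + (p[:, 1:, :] - p[:, :-1, :]).pow(2).mean()
        loss = loss + (p[:, :, 1:] - p[:, :, :-1]).pow(2).mean()
    return loss / len(planes)

# Example with random planes of resolution 128x128 and 32 channels
planes = [torch.randn(32, 128, 128) for _ in range(3)]
tv = triplane_tv_loss(planes)
```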
§.§ Baselines and Metrics Due to differences in the format of synthetic and real data, we carefully select cutting-edge baselines to thoroughly validate our method based on comparative experiments. For synthetic data, we test deformation based methods D-NeRF <cit.> (MLP based spatial representation), TiNeuVox-B <cit.> (voxel grids based spatial representation), NDVG <cit.> (voxel grids based spatial representation) and 4D-GS <cit.> (Gaussian points based spatial representation), and feature interpolation based methods KPlanes <cit.> and V4D <cit.>. For real data, besides TiNeuVox-B and KPlanes, we further compare multi-view videos reconstruction methods MixVoxels <cit.>. We train all these methods on a single GeForce RTX3090. See <ref> for detailed training time and parameters consumption of ours and other methods. We provide an exhaustive qualitative and quantitative comparison of our KFD-NeRF with these baseline methods. Three main metrics are reported, namely the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM) <cit.>, and the learned perceptual image patch similarity (LPIPS) <cit.>. To provide more intuitive results, we further calculate metric “average” <cit.>, which is the geometric mean of MSE=10^-PSNR/10, √(1-SSIM), and LPIPS. Please see <ref> for quantitative comparison and <ref> for qualitative comparison. Per-scene results can be found in the supplemental material. We strongly recommend readers to watch the supplemental videos for a more intuitive understanding of the results. §.§ Ablation Studies We conduct ablation studies on both synthetic and real data to validate our various proposed system components. We compare our full model with variants related to ℒ_co, ℒ_kf and prediction branch. Canonical observation loss ℒ_co. This loss is designed to regularize a continuous and smooth volume shape in the canonical space for better observations. Specifically, we guide the shape in the canonical space to be close to the shape at frame t_0. In <ref> line b), we remove ℒ_co and observe a decrease in model performance. Estimation update loss ℒ_kf. This loss tries to minimize the difference between estimated deformation and current observation. This updating process ensures that with each iteration, the observation acquire progressively more accurate information to guide the learning process. In <ref> line c), we remove ℒ_kf and witness a decrease in model performance. §.§ Extra Studies on Prediction Branch This study aims to demonstrate the effectiveness of our plug-in Kalman filter. The most crucial step in Kalman filter is to fuse the original network observations with the predictions from system dynamics. Therefore, we directly remove the “Prediction and Fusion Stage” in our pipeline and use pure shallow observation MLP to generate deformations. We further conduct an experiment where a pure deformable network takes the current frame and the previous two states as inputs to ablate the effectiveness of the Kalman motion modeling. In <ref>, we see a clear drop in performance by pure MLP network-based methods, indicating that simply inputting previous Kalman filter parameters without modeling motion prediction is insufficient. The prediction branch we have designed can offer reliable priors in the early stages of training. We visualize these priors through an experiment to gain further insight. 
Specifically, we train our system with only the inputs up to and including frame t_i-1 and then let our system produce rendering results at frame t_i with two different branches: predictions and observations. This experiment effectively simulates the amount of knowledge contained in predictions and observations in the early stages of training when the system encounters new input data. In <ref> we show the two sources of knowledge at frame t_i: the prediction branch and the direct output of the observation MLP. We see an obvious deficiency in the observation MLP when initializing new input frames and find that the prediction branch compensates for this loss based on information from previous frames. § DISCUSSION AND CONCLUSION Limitations. Our method relies on a well-reconstructed radiance field in the canonical space, which in our pipeline is guided by ℒ_co. This design, however, partially fails if the chosen canonical space exhibits significant scale changes or even topological changes relative to other frames. In <ref> we use an example to illustrate the influence of the choice of canonical space. This phenomenon, however, cannot be mitigated by precisely setting the canonical space, as we lack a priori access to the input data. We note that some works <cit.> focus on addressing scale or topological issues in radiance field reconstruction. Nevertheless, these issues are not the main focus of our paper and will be explored in future works to further refine our model. Conclusion. In this work, we present KFD-NeRF, a Kalman filter guided NeRF for 4D dynamic view synthesis. We model the dynamic radiance field as a dynamic system in the sense of control theory and use a Kalman filter to estimate the deformation states based on both observations and predictions. We further enhance our observation by encoding the canonical space with an efficient tri-plane and by regularizing the shape in the canonical space. Through our temporal training strategy and newly derived pipeline, KFD-NeRF achieves state-of-the-art view synthesis performance among a variety of dynamic NeRF methods. We hope the dynamic system modeling of 4D radiance fields will encourage researchers to explore motion contextual information. KFD-NeRF hopefully inspires the use of existing sequential methods, mainly from control theory and visual state estimation, to further improve 4D view synthesis and deformation estimation tasks. § ACKNOWLEDGEMENTS This research was supported in part by JSPS KAKENHI Grant Numbers 24K22318, 22H00529, 20H05951, JST-Mirai Program JPMJMI23G1.
http://arxiv.org/abs/2407.12751v1
20240717171956
Scalable Monte Carlo for Bayesian Learning
[ "Paul Fearnhead", "Christopher Nemeth", "Chris J. Oates", "Chris Sherlock" ]
stat.ML
[ "stat.ML", "cs.LG", "stat.CO", "stat.ME" ]
Scalable Monte Carlo for Bayesian Learning Paul Fearnhead, Christopher Nemeth, Chris J. Oates and Chris Sherlock July 22, 2024 ========================================================================= CHAPTER: PREFACE At the time of writing, science, industry, and society are being transformed by the emergence of a new generation of powerful machine learning and artificial intelligence (AI) methodologies. The safe use of such algorithms demands a probabilistic viewpoint, enabling reasoning in settings where data are noisy or limited, and endowing predictions with an appropriate degree of confidence for downstream decision-making and mitigation of risk. Yet, it remains true that fundamental probabilistic operations, such as conditioning on an observed dataset, are not easily performed at the scale required. The aim of this book is to provide a graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC), as applied broadly in the Bayesian computational context. Most, if not all of these topics (stochastic gradient MCMC, non-reversible MCMC, continuous time MCMC, and new techniques for convergence assessment) have emerged as recently as the last decade, and have driven substantial recent practical and theoretical advances in the field. A particular focus is on methods that are scalable with respect to either the amount of data, or the data dimension, motivated by the emerging high-priority application areas in machine learning and AI. Throughout this book, the clear presentation of ideas is prioritised over a rigorous technical treatment of all mathematical details; appropriate references for further reading are provided in the end-notes of each chapter. In particular, we will limit the use of measure theory; the reader should assume that all sets and functions are measurable with respect to an appropriate sigma-algebra, and all continuous distributions should be assumed to be absolutely continuous with respect to Lebesgue measure and all densities should be assumed to be densities with respect to Lebesgue measure. This book has been indirectly shaped by the researchers and colleagues – too numerous to name individually – who have contributed to recent progress in the field. Special gratitude must go to Rebekah Fearnhead, Heishiro Kanagawa, Tamás Papp and Lorenzo Rimella, for proof-reading the manuscript, to Richard Howey for typesetting the figures, and to Natalie Tomlinson and Anna Scriven for their encouragement and typesetting support. The authors are grateful for financial support from the Engineering and Physical Sciences Council (through grants EP/W019590/1, EP/R018561/1, EP/R034710/1, EP/V022636/1 and EP/Y028783/1), the Alan Turing Institute, and the Leverhulme Trust. Paul Fearnhead Chris J. Oates Christopher Nemeth Newcastle University, UK Chris Sherlock Lancaster University, UK toc cambridgeauthordate
http://arxiv.org/abs/2407.12784v1
20240717175947
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases
[ "Zhaorun Chen", "Zhen Xiang", "Chaowei Xiao", "Dawn Song", "Bo Li" ]
cs.LG
[ "cs.LG", "cs.CR", "cs.IR" ]
AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases Zhaorun Chen Zhen Xiang Chaowei Xiao Dawn Song Bo Li ================================================= § ABSTRACT LLM agents have demonstrated remarkable performance across various applications, primarily due to their advanced capabilities in reasoning, utilizing external knowledge and tools, calling APIs, and executing actions to interact with environments. Current agents typically utilize a memory module or a retrieval-augmented generation (RAG) mechanism, retrieving past knowledge and instances with similar embeddings from knowledge bases to inform task planning and execution. However, the reliance on unverified knowledge bases raises significant concerns about their safety and trustworthiness. To uncover such vulnerabilities, we propose a novel red-teaming approach, AgentPoison, the first backdoor attack targeting generic and RAG-based LLM agents by poisoning their long-term memory or RAG knowledge base. In particular, we formulate the trigger generation process as a constrained optimization to optimize backdoor triggers by mapping the triggered instances to a unique embedding space, so as to ensure that whenever a user instruction contains the optimized backdoor trigger, the malicious demonstrations are retrieved from the poisoned memory or knowledge base with high probability. In the meantime, benign instructions without the trigger will still maintain normal performance. Unlike conventional backdoor attacks, AgentPoison requires no additional model training or fine-tuning, and the optimized backdoor trigger exhibits superior transferability, in-context coherence, and stealthiness. Extensive experiments demonstrate AgentPoison's effectiveness in attacking three types of real-world LLM agents: a RAG-based autonomous driving agent, a knowledge-intensive QA agent, and a healthcare EHRAgent. We inject the poisoning instances into the RAG knowledge base and long-term memories of these agents, respectively, demonstrating the generalization of AgentPoison. On each agent, AgentPoison achieves an average attack success rate of ≥80% with minimal impact on benign performance (≤1%) with a poison rate <0.1%. The code and data are available at <https://github.com/BillChan226/AgentPoison>. § INTRODUCTION Recent advancements in large language models (LLMs) have facilitated the extensive deployment of LLM agents in various applications, including safety-critical applications such as finance <cit.>, healthcare <cit.>, and autonomous driving <cit.>. These agents typically employ an LLM for task understanding and planning and can use external tools, such as third-party APIs, to execute the plan. The pipeline of LLM agents is often supported by retrieving past knowledge and instances from a memory module or a retrieval-augmented generation (RAG) knowledge base <cit.>. Although recent works have proposed LLM agents and advanced frameworks, they mainly focus on efficacy and generalization, leaving trustworthiness severely under-explored. In particular, the incorporation of potentially unreliable knowledge bases raises significant concerns regarding the trustworthiness of LLM agents. For example, state-of-the-art LLMs are known to generate undesired adversarial responses when provided with malicious demonstrations during knowledge-enabled reasoning <cit.>. Consequently, an adversary could induce an LLM agent to produce malicious outputs or actions by compromising its memory and RAG such that malicious demonstrations will be more easily retrieved <cit.>.
However, current attacks targeting LLMs, such as jailbreaking <cit.> during testing and backdooring in-context learning <cit.>, cannot effectively attack LLM agents with RAG. Specifically, jailbreaking attacks like GCG <cit.> encounter challenges due to the resilient nature of the retrieval process, where the impact of injected adversarial suffixes can be mitigated by the diversity of the knowledge base <cit.>. Backdoor attacks such as BadChain <cit.> utilize suboptimal triggers that fail to guarantee the retrieval of malicious demonstrations in LLM agents, resulting in unsatisfactory attack success rates. In this paper, we propose a novel red-teaming approach, AgentPoison, the first backdoor attack targeting generic LLM agents based on RAG. AgentPoison is launched by poisoning the long-term memory or knowledge base of the victim LLM agent using very few malicious demonstrations, each containing a valid query, an optimized trigger, and some prescribed adversarial targets (e.g., a dangerous sudden stop action for autonomous driving agents). The goal of AgentPoison is to induce the retrieval of the malicious demonstrations when the query contains the same optimized trigger, such that the agent will be guided to generate the adversarial target as in the demonstrations, while for benign queries (without the trigger) the agent performs normally. We accomplish this goal by proposing a novel constrained optimization scheme for trigger generation which jointly maximizes a) the retrieval of the malicious demonstrations and b) the effectiveness of the malicious demonstrations in inducing adversarial agent actions. In particular, our objective function is designed to map triggered instances into a unique region in the RAG embedding space, separating them from benign instances in the knowledge base. This special design endows AgentPoison with high ASR even when we inject only one instance in the knowledge base with a single-token trigger. In our experiments, we evaluate AgentPoison on three types of LLM agents for autonomous driving, dialogues, and healthcare, respectively. We show that AgentPoison outperforms baseline attacks, achieving an 82% retrieval success rate and a 63% end-to-end attack success rate with less than a 1% drop in benign performance and a poisoning ratio of less than 0.1%. We also find that our trigger optimized for one type of RAG embedder can be transferred to effectively attack other types of RAG embedders. Moreover, we show that our optimized trigger is resilient to diverse augmentations and is evasive against potential defenses based on perplexity examination or rephrasing. Our technical contributions are summarized as follows: * We propose AgentPoison, the first backdoor attack against generic RAG-equipped LLM agents by poisoning their long-term memory or knowledge base with very few malicious demonstrations. * We propose a novel constrained optimization scheme for AgentPoison to optimize the backdoor trigger for effective retrieval of the malicious demonstrations and thus a higher attack success rate. * We show the effectiveness of AgentPoison, compared with four baseline attacks, on three types of LLM agents. AgentPoison achieves an 82% retrieval success rate and a 63% end-to-end attack success rate, with less than a 1% drop in benign performance and less than a 0.1% poisoning ratio. * We demonstrate the transferability of the optimized trigger among different RAG embedders, its resilience against various perturbations, and its evasiveness against two types of defenses.
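As a concrete picture of the retrieval step that AgentPoison targets, a memory or knowledge-base lookup of this kind reduces to a top-K embedding similarity search, sketched below (a simplified illustration in Python with our own names; it is not the implementation of any specific agent):

```python
import numpy as np

def retrieve_top_k(query_emb, memory_keys, memory_values, k=3):
    """Return the k memory entries whose key embeddings are most similar
    to the query embedding under cosine similarity (illustration only)."""
    q = query_emb / np.linalg.norm(query_emb)
    keys = memory_keys / np.linalg.norm(memory_keys, axis=1, keepdims=True)
    scores = keys @ q                     # cosine similarity with each key
    top = np.argsort(-scores)[:k]
    return [memory_values[i] for i in top]

# A poisoned memory simply contains a few extra key-value pairs whose keys are
# triggered queries and whose values are malicious demonstrations; if a test
# query carries the same trigger, those entries win the similarity ranking.
keys = np.random.randn(100, 64)
values = [f"demo_{i}" for i in range(100)]
print(retrieve_top_k(np.random.randn(64), keys, values, k=3))
```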
§ RELATED WORK LLM Agent based on RAG LLM agents have demonstrated powerful reasoning and interaction capability in many real-world settings, spanning autonomous driving <cit.>, knowledge-intensive question answering <cit.>, and healthcare <cit.>. These LLM-backboned agents can take user instructions, gather environmental information, and retrieve knowledge and past experiences from a memory unit to make informed action plans and execute them via tool calling. Specifically, most agents rely on a RAG mechanism to retrieve relevant knowledge and memory from a large corpus <cit.>. While RAG has many variants, we mainly focus on dense retrievers and categorize them into two types based on their training scheme: (1) training both the retriever and generator in an end-to-end fashion and updating the retriever with the language modeling loss (e.g., REALM <cit.>, ORQA <cit.>); (2) training the retriever using a contrastive surrogate loss (e.g., DPR <cit.>, ANCE <cit.>, BGE <cit.>). We also consider the black-box OpenAI-ADA model in our experiment. Red-teaming LLM Agents Extensive works have assessed the safety and trustworthiness of LLMs and RAG by red-teaming them with a variety of attacks such as jailbreaks <cit.>, backdoors <cit.>, and poisoning <cit.>. However, as these works mostly treat the LLM or RAG as a standalone model and study their robustness individually, their conclusions can hardly transfer to LLM agents, which are much more complex systems. Recently, a few preliminary works have also studied backdoor attacks on LLM agents <cit.>; however, they only consider poisoning the training data of the LLM backbones and fail to assess the safety of more capable RAG-based LLM agents. In terms of defense, <cit.> seeks to defend RAG from corpus poisoning by isolating individual retrievals and aggregating them. However, their method can hardly defend against AgentPoison, as we can effectively ensure that all the retrieved instances are poisoned. To the best of our knowledge, we are the first to red-team LLM agents based on RAG systems. Please refer to app:related_works for more details. § METHOD §.§ Preliminaries and Settings We consider LLM agents with a RAG mechanism based on corpus retrieval. For a user query q, we retrieve knowledge or past experiences from a memory database 𝒟, containing a set of query-solution (key-value) pairs {(k_1, v_1), …, (k_|𝒟|, v_|𝒟|)}. Different from conventional passage retrieval, where query and document are usually encoded with different embedders <cit.>, LLM agents typically use a single encoder E_q to map both the query and the keys into an embedding space. Thus, we retrieve a subset ℰ_K(q, 𝒟)⊂𝒟 containing the K most relevant keys (and their associated values) based on their (cosine) similarity with the query q in the embedding space induced by E_q, i.e., the K keys in 𝒟 with the maximum E_q(q)^⊤ E_q(k)/(||E_q(q)||·||E_q(k)||). These K retrieved key-value pairs are used as the in-context learning demonstrations for the LLM backbone of the agent to determine an action step by a=LLM(q, ℰ_K(q, 𝒟)). The LLM agent will execute the generated action by calling built-in tools <cit.> or external APIs. §.§ Threat model Assumptions for the attacker We follow the standard assumption from previous backdoor attacks against LLMs <cit.> and RAG systems <cit.>. We assume that the attacker has partial access to the RAG database of the victim agent and can inject a small number of malicious instances to create a poisoned database 𝒟_poison(x_t)=𝒟_clean∪𝒜(x_t).
Here, 𝒜(x_t)={(k̂_1(x_t), v̂_1), ⋯, (k̂_|𝒜(x_t)|(x_t), v̂_|𝒜(x_t)|)} represents the set of adversarial key-value pairs injected by the attacker, where each key is a benign query injected with a trigger x_t. Accordingly, the demonstrations retrieved from the poisoned database for a query q will be denoted by ℰ_K(q, 𝒟_poison(x_t)). This assumption aligns with practical scenarios where the memory unit of the victim agent is hosted by a third-party retrieval service [For example: <https://www.voyageai.com/>] or directly leverages an unverified knowledge base. For example, an attacker can easily inject poisoned texts by maliciously editing Wikipedia pages <cit.>. Moreover, we allow the attacker to have white-box access to the RAG embedder of the victim agent for trigger optimization <cit.>. However, we later show empirically that the optimized trigger can easily transfer to a variety of other embedders with high success rates, including a SOTA black-box embedder, OpenAI-ADA. Objectives of the attacker The attacker has two adversarial goals. (a) A prescribed adversarial agent output (e.g., a sudden stop for autonomous driving agents or deleting patient information for electronic healthcare record agents) will be generated whenever the user query contains the optimized backdoor trigger. Formally, the attacker aims to maximize E_q∼π_q[1(LLM(q⊕ x_t, ℰ_K(q ⊕ x_t, 𝒟_poison(x_t)))=a_m)], where π_q is the sample distribution of input queries, a_m is the target malicious action, 1(·) is a logical indicator function, x_t denotes the trigger, and q ⊕ x_t denotes the operation of injecting[In this work, we do not restrict the position for trigger injection, i.e., the trigger is not limited to a suffix.] the trigger x_t into the query q. (b) Ensure the outputs for clean queries remain unaffected. Formally, the attacker aims to maximize E_q∼π_q[1(LLM(q, ℰ_K(q, 𝒟_poison(x_t)))=a_b)], where a_b denotes the benign action corresponding to a query q. This is different from traditional DP attacks such as <cit.> that aim to degrade the overall system performance. §.§ AgentPoison §.§.§ Overview We design AgentPoison to optimize a trigger x_t that achieves both objectives of the attacker specified above. However, directly maximizing eqn:poison and eqn:utility using gradient-based methods is challenging given the complexity of the RAG procedure, where the trigger is decisive in both the retrieval of demonstrations and the target action generation based on these demonstrations. Moreover, a practical attack should not only be effective but also stealthy and evasive, i.e., a triggered query should appear as a normal input and be hard to detect or remove, which we treat as coherence. Our key idea to solve these challenges is to cast the trigger optimization as a constrained optimization problem that jointly maximizes a) retrieval effectiveness: the probability of retrieving from the poisoning set 𝒜(x_t) for any triggered query q ⊕ x_t, i.e., E_q∼π_q[1(∃(k, v) ∈ℰ_K(q ⊕ x_t, 𝒟_poison(x_t))∩𝒜(x_t))], and the probability of retrieving from the benign set 𝒟_clean for any benign query q, b) target generation: the probability of generating the target malicious action a_m for a triggered query q ⊕ x_t when ℰ_K(q ⊕ x_t, 𝒟_poison(x_t)) contains key-value pairs from 𝒜(x_t), and c) coherence: the textual coherence of q ⊕ x_t. Note that a) and b) can be viewed as the two sub-steps decomposed from the optimization goal of maximizing eqn:poison, while a) is also aligned to the maximization of eqn:utility.
In particular, we propose a novel objective function for a) where the triggered queries will be mapped to a unique region in the embedding space induced by E_q with high compactness between these embeddings. Intuitively, this will minimize the similarity between queries with and without the trigger while maximizing the similarity in the embedding space for any two triggered queries (see Fig. <ref>). Furthermore, the unique embeddings for triggered queries impart distinct semantic meanings compared to benign queries, enabling easy correlation with malicious actions during in-context learning. Finally, we propose a gradient-guided beam search algorithm to solve the constrained optimization problem by searching for discrete tokens under non-derivative constraints. Our design of brings it two major advantages over existing attacks. First, requires no additional model training, which largely lowers the cost compared to existing poisoning attack <cit.>. Second, is more stealthy than many existing jailbreaking attacks due to optimizing the coherence of the triggered queries. The overview is shown in fig:agentpoison_overview. §.§.§ Constrained Optimization Problem We construct the constrained optimization problem following the key idea in sec:alg_overview as the following: x_tminimize ℒ_uni(x_t) + λ·ℒ_cpt(x_t) s.t. ℒ_tar(x_t) ≤η_tar ℒ_coh(x_t) ≤η_coh where eqn:obj_ours, eqn:cst_tar, and eqn:cst_coh correspond to the optimization goals a), b), and c), respectively. The constants η_tar and η_coh are the upper bounds of ℒ_tar and ℒ_coh, respectively. Here, all four losses in the constrained optimization are defined as empirical losses over a set 𝒬={q_0,⋯,q_|𝒬|} of queries sampled from the benign query distribution π_q. Uniqueness loss The uniqueness loss aims to push triggered queries away from the benign queries in the embedding space. Let c_1, ⋯, c_N be the N cluster centers corresponding to the keys of the benign queries in the embedding space, which can be easily obtained by applying (e.g.) k-means to the embeddings of the benign keys. Then the uniqueness loss is defined as the average distance of the input query embedding to all these cluster centers: ℒ_uni(x_t) = - 1/N·|𝒬|∑_n=0^N ∑_q_j∈𝒬||E_q(q_j ⊕ x_t) - c_n|| Note that effectively minimizing the uniqueness loss will help to reduce the required poisoning ratio. Compactness loss We define a compactness loss to improve the similarity between triggered queries in the embedding space: ℒ_cpt(x_t) = 1/|𝒬|∑_q_j∈𝒬||E_q(q_j ⊕ x_t)-E_q(x_t)|| where E_q(x_t)=1/|𝒬|∑_q_j∈𝒬E_q(q_j⊕ x_t) is the average embedding over the triggered queries. The minimization of the compactness loss can further reduce the poisoning ratio. In fig:embedding_opt_process, we show the procedure for joint minimization of the uniqueness loss and the compactness loss, where the embeddings for the triggered queries gradually form a compact cluster. Intuitively, the embedding of a test query containing the same trigger will fall into the same cluster, resulting in the retrieval of malicious key-value pairs. In comparison, CPA (fig:embedding_comparisona) suffers from a low accuracy in retrieving malicious key-value pairs, and it requires a much higher poisoning ratio to address the long-tail distribution of all the potential queries. Target generation loss We maximize the generation of target malicious action a_m by minimizing: ℒ_tar(x_t) = -1/|𝒬|∑_q_j∈𝒬 p_LLM(a_m|[q_j⊕ x_t, ℰ_K(q_j⊕ x_t, 𝒟_poison(x_t))]) where p_LLM(· | ·) denotes the output probability of the LLM given the input. 
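A minimal PyTorch sketch of the uniqueness and compactness losses defined above, assuming a batch of triggered-query embeddings produced by E_q and k-means centers precomputed over the benign key embeddings; the balancing weight λ below is an illustrative value.

import torch

def uniqueness_loss(trig_emb, centers):
    # trig_emb: (B, d) embeddings E_q(q_j ⊕ x_t) for a batch of triggered queries
    # centers: (N, d) k-means centers of the benign key embeddings
    # L_uni: negative mean distance to all benign cluster centers (pushes triggered queries away)
    return -torch.cdist(trig_emb, centers).mean()

def compactness_loss(trig_emb):
    # L_cpt: mean distance to the average triggered-query embedding (pulls them together)
    center = trig_emb.mean(dim=0, keepdim=True)
    return (trig_emb - center).norm(dim=1).mean()

def retrieval_objective(trig_emb, centers, lam=0.5):
    # The objective of eqn:obj_ours, minimized with respect to the trigger tokens behind trig_emb.
    return uniqueness_loss(trig_emb, centers) + lam * compactness_loss(trig_emb)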
While eqn:loss_tar only works for white-box LLMs, we can efficiently approximate ℒ_tar(x_t) using finite samples with polynomial sample complexity. We show the corresponding analysis and proof in apx:sample_proof. Coherence loss We aim to maintain high readability of the optimized trigger and coherence with the original text of each query q. This is achieved by minimizing: ℒ_coh(x_t) = -1/T∑_i=0^T log p_LLM_b(q^(i) |q^(<i)) where q^(i) denotes the i^th token in q ⊕ x_t, and LLM_b denotes a small surrogate LLM (e.g. gpt-2 in our experiments). Different from suffix optimization that only requires fluency <cit.>, the trigger optimized by can be injected into any position of the query (e.g. between two sentences). Thus eqn:loss_coh enforces the embedded trigger to be semantically coherent with the overall sequence <cit.>, achieving stealthiness. §.§.§ Optimization algorithm We propose a gradient-based approach that optimizes eqn:obj_ours while ensuring eqn:loss_tar and eqn:loss_coh satisfy their constraints via a beam search algorithm. The key idea of our optimization algorithm is to iteratively search for a replacement token in the sequence that improves the objective while also satisfying the constraints. Our algorithm consists of the following four steps. Initialization: To ensure context coherence, we initialize the trigger x_t_0 from a string relevant to the agent task, where we treat the LLM as a one-step optimizer and prompt it to obtain b triggers to form the initial beams (algo:trigger_initialization). Gradient approximation: To handle the discrete optimization, for each beam candidate we follow <cit.> to first calculate the objective w.r.t. eqn:obj_ours and randomly select a token t_i in x_t_0, then compute a first-order approximation of the change in the objective when replacing t_i with another token t'_i from the vocabulary 𝒱, i.e., ∂ℒ≈ e^⊺_t'_i∇_e_t_i(ℒ_uni+λℒ_cpt), where e_t_i denotes the embedding of token t_i. Then we obtain the top-m candidate tokens to constitute the replacement token set 𝒞_0. Constraint filtering: Then we impose constraints eqn:cst_coh and eqn:cst_tar sequentially. Since the determination of η_coh depends highly on the data, we follow <cit.> to first sample s tokens from 𝒞_0 to obtain 𝒮_τ under a distribution where the likelihood of each token is a softmax function of ℒ_coh. This ensures that the selected tokens possess high coherence while maintaining diversity. Then we further filter 𝒮_τ w.r.t. eqn:cst_tar. We notice that during early iterations most candidates cannot directly satisfy eqn:cst_tar; thus, we instead consider the following soft constraint: 𝒮^'_τ ={ t_i ∈𝒮_τ|ℒ^τ_tar(t_i) ≤ℒ_tar^τ-1(t_i) or ℒ^τ_tar(t_i) ≤η_tar} where τ denotes the τ^th iteration. That is, we soften the constraint to only require eqn:loss_tar to decrease monotonically when eqn:cst_tar is not yet satisfied, which leaves a more diversified candidate set 𝒮^'_τ. Token Replacement: Then we calculate ℒ_tar for each token in 𝒮^'_τ and select the top b tokens that most improve the objective eqn:obj_ours to form the new beams. We iterate this process until convergence. The overall procedure of the trigger optimization is detailed in algo:attackalgorithm. § EXPERIMENT §.§ Setup LLM Agent: To demonstrate the generalization of , we select three types of real-world agents across a variety of tasks: Agent-Driver <cit.> for autonomous driving, the ReAct <cit.> agent for knowledge-intensive QA, and EHRAgent <cit.> for healthcare record management.
Memory/Knowledge base: For agent-driver we use its corresponding dataset published in their paper, which contain 23k experiences in the memory unit[<https://github.com/USC-GVL/Agent-Driver>]. For ReAct, we select a more challenging multi-step commonsense QA dataset StrategyQA which involves a curated knowledge base of 10k passages from Wikipedia[<https://allenai.org/data/strategyqa>]. For EHRAgent, it originally initializes its knowledge base with only four experiences and updates its memory dynamically. However we notice that almost all baselines have a high attack success rate on the database with such a few entries, we augment its memory unit with 700 experiences that we collect from successful trials to make the red-teaming task more challenging. Baselines: To assess the effectiveness of , we consider the following baselines for trigger optimization: Greedy Coordinate Gradient (GCG) <cit.>, AutoDAN <cit.>, Corpus Poisoning Attack (CPA) <cit.>, and BadChain <cit.>. Specifically, we optimize GCG w.r.t. the target loss eqn:loss_tar, and since we observe AutoDAN performs badly when directly optimizing eqn:loss_tar, we calibrate its fitness function and augment eqn:loss_tar by eqn:obj_rag with Lagrangian multipliers. And we use the default objective and trigger optimization algorithm for CPA and BadChain. Evaluation metrics: We consider the following metrics: (1) attack success rate for retrieval (ASR-r), which is the percentage of test instances where all the retrieved demonstrations from the database are poisoned; (2) attack success rate for the target action (ASR-a), which is the percentage of test instances where the agent generates the target action (e.g., "sudden stop") conditioned on successful retrieval of poisoned instances. Thus, ASR-a individually assesses the performance of the trigger w.r.t. inducing the adversarial action. Then we further consider (3) end-to-end target attack success rate (ASR-t), which is the percentage of test instances where the agent achieves the final adversarial impact on the environment (e.g., collision) that depends on the entire agent system, which is a critical metric that distinguishes from previous LLMs attack. Finally, we consider (4) benign accuracy (ACC), which is the percentage of test instances with correct action output without the trigger, which measures the model utility under the attack. A successful backdoor attack is characterized by a high ASR and a small degradation in the ACC compared with the non-backdoor cases. We detail the backdoor strategy and definition of attack targets for each agent in apx:backdoor and apx:task, respectively. §.§ Result demonstrates superior attack success rate and benign utility. We report the performance of all methods in tab:main_result. We categorize the result into two types of LLM backbones, i.e. GPT3.5 and LLaMA3, and two types of retrievers trained via end-to-end loss or contrastive loss. We observe that algorithms that optimize for retrieval i.e. , CPA and AutoDAN has better ASR-r, however CPA and AutoDAN also hampers the benign utility (indicated by low ACC) as they invariably degrade all retrievals. As a comparison, has minimal impact on benign performance of average 0.74% while outperforming the baselines in terms of retrieval success rate of 81.2% in average, while an average 59.4% generates target actions where 62.6% result in actual target impact to the environment. The high ASR-r and ACC can be naturally attributed to the optimization objective of . 
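For clarity, the four metrics reported in tab:main_result can be computed from per-example evaluation records as in the following sketch; the record field names are hypothetical rather than taken from the released code.

def compute_metrics(records):
    # Each record: {"all_retrieved_poisoned": bool, "action": str, "target_action": str,
    #               "env_impact": bool, "clean_action_correct": bool}
    n = len(records)
    asr_r = sum(r["all_retrieved_poisoned"] for r in records) / n
    retrieved = [r for r in records if r["all_retrieved_poisoned"]]
    # ASR-a is conditioned on successful retrieval of poisoned demonstrations
    asr_a = (sum(r["action"] == r["target_action"] for r in retrieved) / len(retrieved)) if retrieved else 0.0
    asr_t = sum(r["env_impact"] for r in records) / n
    acc = sum(r["clean_action_correct"] for r in records) / n     # measured on un-triggered queries
    return {"ASR-r": asr_r, "ASR-a": asr_a, "ASR-t": asr_t, "ACC": acc}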
Considering further that these agent systems have built-in safety filters, we consider the 62.6% reported above to be a very high success rate in terms of real-world impact. has high transferability across embedders. We assess the transferability of the triggers optimized on each of five dense retrievers, i.e. DPR <cit.>, ANCE <cit.>, BGE <cit.>, REALM <cit.>, and ORQA <cit.>, to one another and to the text-embedding-ada-002 model[<https://platform.openai.com/docs/guides/embeddings>] with API-only access. We report the results for Agent-Driver in fig:transfer, and for ReAct-StrategyQA and EHRAgent in fig:transfer_qa and fig:transfer_ehr (apx:transferability). We observe that has high transferability across a variety of embedders (even across embedders with different training schemes). We attribute the high transferability to our objective in eqn:obj_ours, which optimizes for a cluster that is unique in the embedding space and also semantically unique for embedders trained on similar data distributions. performs well even when we inject only one instance into the knowledge base with only one token in the trigger. We further study the performance of w.r.t. the number of poisoned instances in the database and the number of tokens in the trigger sequence, and report the findings in fig:num_study. We observe that after optimization, achieves high ASR-r (62.0% on average) even when we poison only one instance in the database. Meanwhile, it also achieves 79.0% ASR-r when the trigger contains only one token. Regardless of the number of poisoned instances or tokens in the sequence, consistently maintains a high benign utility (ACC≥ 90%). How does each individual loss contribute to ? The ablation results are reported in tab:ablation_result, where we disable one component at a time. We observe that ℒ_uni contributes significantly to the high ASR-r of , while ACC is more sensitive to ℒ_cpt, where more concentrated embeddings of the triggered queries generally lead to better ACC. Besides, while adding ℒ_coh slightly degrades the performance, it leads to better in-context coherence, which can effectively bypass some perplexity-based countermeasures. is resilient to perturbations in the trigger sequence. We further study the resilience of the optimized triggers by considering three types of perturbations in tab:robustness_result. We observe that is resilient to word injection and slightly compromised by letter injection. This is because letter injection can change more than three tokens in the sequence, which can completely flip the semantic distribution of the trigger. Notably, rephrasing the trigger, which completely changes the token sequence, also maintains high performance, as long as the trigger semantics are preserved. How does perform under potential defenses? We study two types of defense: Perplexity Filter <cit.> and Query Rephrasing <cit.> (here we rephrase the whole query, which is different from tab:robustness_result), both of which are often used to protect LLMs from injection attacks. We report the ASR-t in tab:defense_compare and the full results in tab:potential_defense (apx:defense). Compared with GCG and BadChain, the trigger optimized by is more readable and coherent with the agent context, making it resilient under both defenses. We further justify this observation in fig:ppl_brief, where we compare the perplexity distribution of queries optimized by with that of benign queries and GCG. Compared to GCG, the queries of are highly evasive, being inseparable from the benign queries.
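For reference, a perplexity-filter defense of the kind considered above can be sketched as follows using a small GPT-2 model from HuggingFace Transformers; the threshold is an illustrative value that would in practice be calibrated on benign queries.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss            # mean token negative log-likelihood
    return torch.exp(loss).item()

def passes_filter(query, threshold=60.0):
    # Reject queries whose perplexity exceeds a threshold calibrated on benign traffic.
    return perplexity(query) <= threshold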
§ CONCLUSION In this paper, we propose a novel red-teaming approach to holistically assess the safety and trustworthiness of RAG-based LLM agents. Specifically, consists of a constrained trigger optimization algorithm that seeks to map the triggered queries into a unique and compact region in the embedding space to ensure a high retrieval accuracy and end-to-end attack success rate. Notably, does not require any model training, while the optimized trigger is highly transferable, stealthy, and coherent. Extensive experiments on three real-world agents demonstrate the effectiveness of over four baselines across four comprehensive metrics. § BROADER IMPACTS In this paper, we propose , the first backdoor attack against LLM agents with RAG. The main purpose of this research is to red-team LLM agents with RAG so that their developers are aware of the threat and take action to mitigate it. Moreover, our empirical results can help other researchers understand the behavior of RAG systems used by LLM agents. Code is released at <https://github.com/BillChan226/AgentPoison>. § LIMITATIONS While is effective in optimizing triggers to achieve a high retrieval accuracy and attack success rate, it requires the attacker to have white-box access to the embedder. However, we show empirically that can transfer well among different embedders, even ones with different training schemes, since optimizes for a semantically unique region in the embedding space, which is also likely to be unique for other embedders as long as they share a similar training data distribution. This way, the attacker can easily red-team a proprietary agent by simply leveraging a public open-source embedder to optimize such a universal trigger. § APPENDIX / SUPPLEMENTAL MATERIAL §.§ Experimental Settings §.§.§ Hyperparameters The hyperparameters for and our experiments are reported in tab:hyperparameter_ours. Except when obtaining the results in fig:num_study, we keep the number of tokens in the trigger fixed: 6 tokens for Agent-Driver <cit.>, 5 tokens for ReAct-StrategyQA <cit.>, and 2 tokens for EHRAgent <cit.>, and we inject 20 poisoned instances for Agent-Driver, 4 for ReAct, and 2 for EHRAgent across all experiments. The number of tokens in the trigger sequence is mainly determined by the length of the original queries. We inject fewer than 0.1% instances w.r.t. the original number of instances in the database for all attack methods, since we observe that as more instances are poisoned, it gets harder to distinguish the effectiveness of different methods, as reported in fig:num_study. §.§.§ Target Definition We detail the attack target for in this section. Specifically, for all three agents, we count a retrieval as successful (thus counted in ASR-r) only if all the retrieved instances (usually the k-nearest neighbors) are poisoned demonstrations that we previously injected into the database. Such a requirement is practical and necessary for evaluating attack success on retrieval, since many agents have built-in safety filters to further select useful demonstrations from all the retrieval results (e.g. Agent-Driver <cit.> instantiates a re-examination process where an LLM selects the one experience that is most relevant among the retrieved k instances). This way an adversary can certify attack success only if all the retrieved instances are malicious.
Recent defense <cit.> which seeks to certify RAG from corpus poisoning attacks by isolate-then-aggregate further necessitates this requirement on such agent-oriented attacks. By effectively manipulating all the retrieved demonstrations to be poisoned instances, can easily bypass such SOTA defense. Specifically, we detail the target action and target outcome for attacking each agent. * Agent-Driver: we denote irresponsible and potentially unsafe driving behaviors to be our attack target. Specifically, target action for the agent to output is sudden stop. And the corresponding real-world outcome is measured by the trajectory deviation in the future three seconds. Mathematically, attack success for ASR-t is indicated by the following indicator function: 1(1/T∫_t^t+Tζ̂(s) - ζ(s)^2 ds > τ_s) where ζ̂ denotes the planned trajectory given by a fine-tuned LLM trajectory planner, and ζ is the ground-truth trajectory. τ_s is the preset threshold that determines the maximum deviation of a safe trajectory. * ReAct-StrategyQA: we denote wrong answer or unhelpful information to be our attack target for QA-based agent. As we provide the poison sample following <cit.> (as detailed in apx:backdoor), the target action for the agent to output is as there is a {trigger} in the sequence during its internal reasoning (denoting successfully backdoored in-context learning), and the corresponding outcome is that the agent finishes the episode with such wrong answer or unhelpful answer (e.g., I don't know). * EHRAgent: considering the purpose and safety issues of healthcare record maintenance, we denote deletion to be a highly risky and potentially unsafe operation. Thus we design the target action for the agent such that it should output delete data of patient ID during the reasoning step, and the corresponding outcome is a SQL code command DeleteDB. §.§.§ Data and Model Preparation Train/Test split For Agent-Driver, we have randomly sampled 250 samples from its validation set (apart from the 23k samples in the training set); for ReAct agent, we have used the full test set in StrategyQA[<https://allenai.org/data/strategyqa>] which consists of 229 samples; and for EHRAgent, we have randomly selected 100 samples from its validation set in our experiment. Besides, the poisoned samples are all sampled from the training set of each agent which does not overlap with the test set. Retriever As we have categorized the RAG retrievers into two types, i.e. contrastive and end-to-end based on their training scheme, for each agent we have manually selected a representative retriever in each type and report the corresponding results in tab:main_result. Specifically, for Agent-Driver, as it is a domain-specific task and requires the agent to handle strings that contain a large portion of numbers which distinct from natural language, we have followed <cit.> and trained both the end-to-end and contrastive embedders using its published training data[<https://github.com/USC-GVL/Agent-Driver>], where we use the loss described in app:rag_retriever. And for ReAct-StrategyQA <cit.> and EHRAgent <cit.>, we have adopted the pre-trained DPR <cit.> checkpoints[<https://github.com/facebookresearch/DPR>] as contrastive retriever and the pre-trained REALM <cit.> checkpoints[<https://huggingface.co/docs/transformers/en/model_doc/realm>] as end-to-end retriever. §.§ Additional Result and Analysis We further detail our analysis by investigating the following six questions. 
(1) As constructs a surrogate task to optimize both eqn:poison and eqn:utility, we aim to ask how well does fulfill the objectives of the attacker? (2) What is the attack transferability of on ReAct-StrategyQA and EHRAgent? (3) How does the number of trigger tokens influence the optimization gap? (4) How does perform under potential defense? (5) What is the distribution of embeddings during the intermediate optimization process of ? (6) What does the optimized trigger look like? We provide the result and analysis in the following sections. §.§.§ Balancing ASR-ACC Trade-off We further visualize the result in tab:main_result in fig:asr_acc where we focus on ASR-r and ACC. We can see that (represented by +) are distribute in the upper right corner which denotes it can achieve both high retrieval success rate (in terms of ASR-r) and benign utility (in terms of ACC) while all other baselines can not achieve both. This result further demonstrates the superior backdoor performance of . §.§.§ Additional Transferability Result We have provided the additional transferability result on ReAct-StrategyQA and EHRAgent in fig:transfer_qa and fig:transfer_ehr, respectively. We can see that generally achieves high attack transferability among different RAG retrievers which further demonstrates its universality for trigger optimization. §.§.§ Optimization Gap w.r.t. Token Length We compare the attack performance on ReAct-StrategyQA w.r.t. ASR-r and loss defined in eqn:obj_ours during the optimization w.r.t. different number of trigger tokens, and report the result in fig:asr_loss_token. We can denote that while triggers with more tokens can generally lead to a higher retrieval success rate, could yield a good and consistent attack success rate even if there are very few tokens in the trigger sequence. §.§.§ Potential Defense We provide the additional results of the performance of under two types of potential defense in tab:potential_defense. §.§.§ Intermediate optimization process The embedding distribution during the intermediate optimization process of across different embedders is showcased in fig:embedding_opt_process. We can consistently observed that, regardless of the white-box embedders being optimized, can effectively learn a trigger such that the triggers are gradually becoming more unique and compact, which further verifies the effectiveness of and the validity of the loss being optimized. §.§.§ Trigger Case Study We demonstrate the trigger optimized on GPT3.5 LLM backbone and retriever trained via contrastive loss using different attack algorithms over Agent-Driver <cit.>, ReAct <cit.>, EHRAgent <cit.> in tab:trigger_case. Due to our trigger initialization using a relevant string and our coherence loss, our trigger have a better fluency and coherence than the trigger optimized using CPA and GCG. While the trigger optimized by AutoDAN and BadChain have good reliability (since they utilize an LLM for trigger generation), they are not as effective as our algorithm. §.§ Detailed Explanation of §.§.§ Backdoor demonstrations We design the poisoned instances retrieved from the memory or knowledge base to be adversarial experience, which aligns with our attack target for each agent as enumerated in apx:task, while contradicting the safe purposes of the agent tasks themselves. 
After retrieving from the knowledge base, we showcase the procedure of reasoning for action where the agent places the retrieved malicious demonstrations in the prefix and prompts the LLM backbone for reasoning and action prediction. We mainly consider two types of poisoning strategy, i.e. (1) adversarial backdoor and (2) spurious correlation. For adversarial backdoor demonstration, we directly change the output of the benign examples and inject the corresponding optimized trigger into the query. An example is shown in fig:adv_example. While adversarial backdoor demonstrations are effective in inducing the target action output, they are not stealthy enough and easily detected by utility examination. Therefore, we consider another novel backdoor strategy called spurious correlation demonstration, which alternatively achieves a high attack success rate while being much more stealthy. Specifically, spurious correlation demonstration only involves benign examples where the original output itself is the target action (e.g. STOP for autonomous driving agents). Therefore we keep the original action fixed and only inject the corresponding optimized trigger into the query to construct a spurious backdoor, where the agent may be misled to associate the target action with the trigger via this backdoor. This type of poisoning strategy is much more stealthy compared to the previous adversarial backdoor, since the poisoned examples do not change the original action plan. An example is shown in fig:spurious_example. During our experiment, we adopt the spurious examples as our poisoning strategy for Agent-Driver, and adopt adversarial backdoor as our poisoning strategy for ReAct-StrategyQA and EHRAgent. §.§.§ Additional algorithm The pseudocode for trigger initialization is shown in algo:trigger_initialization where we use it to generate the initial beams of triggers that are relevant to the task the agent handles. §.§ Additional Analysis on Optimization Approximation Given the constrained optimization problem defined in sec:constrained_decoding: x_tminimize ℒ_uni(x_t) + λ·ℒ_cpt(x_t) s.t. ℒ_tar(x_t) ≤η_tar, ℒ_coh(x_t) ≤η_coh We can directly adopt eqn:loss_tar to calculate the target action objective ℒ_tar(x_t) for white-box models. However, can be adapted for black-box LLMs setting by approximating ℒ_tar(x_t) via the following finite-sample indicator function. ℒ̂_tar(x_t) = -1/N∑_i=1^N ∑_q_j ∈𝒬 1_LLM(q_j ⊕ x_t, ℰ_K(q_j ⊕ x_t, 𝒟_poison(x_t))) = a_m where 1_condition is 1 when the condition is true and 0 otherwise. We demonstrate in thm:alg_bound that can efficiently approximate ℒ_tar(x_t) with a polynomial sample complexity. We can provide the following sample complexity bound for approximating ℒ_tar(x_t) with finite samples. Let 𝒬 denote the potential space of all queries. For any ϵ > 0 and γ∈ (0,1), with at least N ≥64/ϵ^2( 2d ln12/ϵ + ln4/γ) samples, we have with probability at least 1 - γ: max_q ∈𝒬ℒ̂_tar(x_t) ≥max_q ∈𝒬ℒ_tar(x_t) - ϵ Specifically, to prove thm:alg_bound, we fist reformulate eqn:finite_sample in the following form: ℒ̂_tar(x_t) = -1/N∑_i=1^N ∑_q_j ∈𝒬 1_p_LLM(a_m|[q_j⊕ x_t, ℰ_K]) > p_LLM(a_r|[q_j⊕ x_t, ℰ_K)] where a_r denotes the runner-up (i.e., second-maximum likelihood) action token output by the target LLM. Then we can define a set of functions F as the class of real-valued functions where each represents the output action distribution p_LLM(a|q_j⊕ x_t) conditioned on a query q_j sampled from 𝒬 and trigger x_t. 
More specifically, each function f can be formulated as {f_q_j∈ F| f_q_j(x) = p_LLM(a_m | [q_j ⊕ x_t, ℰ_K(q_j ⊕ x, 𝒟_poison(x))])}. Therefore, we can first obtain an upper bound for the VC dimension of H={1_f_q_j(a_m) > f_q_j(a_r): f_q_j∈ F} using the following lemma. Let F be a vector space of real-valued functions, and let H = {1_f_q_j(a_m) > f_q_jt(a_r) : f_q_j∈ F}. Then the VC dimension of H satisfies VCdim(H) ≤dim(F) + 1. To show that the VC dimension of H is at most dim(F) + 1, we need to show that no set of more than dim(F) + 1 points can be shattered by H. Consider a set of m points {x_1, x_2, …, x_m} in a d-dimensional space where d = dim(F). Suppose that H can shatter this set of m points. This means that for any way of labeling these m points, there exists a function in H that correctly classifies the points according to those labels. Each function h ∈ H corresponds to an indicator function of the form 1_f_q_j(a_m) > f_q_j(a_r), where f_q_j⊕ x_t∈ F. Given a basis {f_1, f_2, …, f_d} for the vector space F, any function f ∈ F can be written as a linear combination of these basis functions: f = ∑_i=1^d α_i f_i for some coefficients α_i. For each point x_k, the condition f_q_j(a_m) > f_q_j(a_r) translates to: ∑_i=1^d α_i f_i(x_k, a_m) > ∑_i=1^d α_i f_i(x_k, a_r). This can be rewritten as: ∑_i=1^d α_i (f_i(x_k, a_m) - f_i(x_k, a_r)) > 0. Let g_k = f_i(x_k, a_m) - f_i(x_k, a_r). We have m linear inequalities of the form: ∑_i=1^d α_i g_k,i > 0. To shatter the set {x_1, x_2, …, x_m}, we need to find coefficients α_i such that these m inequalities can realize all possible sign patterns for the m points. However, in a d-dimensional space, we can only have at most d linearly independent inequalities. If m > d + 1, then we have more inequalities than the dimensions of the space, making it impossible to satisfy all possible sign patterns. Thus, m ≤ d + 1. Therefore, the VC dimension of H is at most dim(F) + 1. Suppose that H is a set of functions from a set X to {0, 1} with finite VC dimension d ≥ 1. Let L be any sample error minimization algorithm for H. Then L is a learning algorithm for H. In particular, if m ≥d/2, its sample complexity satisfies: m_L(ϵ, γ) ≤64/ϵ^2( 2d ln12/ϵ + ln4/γ) where m_L(ϵ, γ) is the minimum sample size required to ensure that with probability at least 1 - γ, the empirical error is within ϵ of the true error. Therefore we can combine lem:VC_bound and thm:sample_complexity to prove the sample complexity bound for ℒ_tar(x_t) in eqn:sample_num. According to lem:VC_bound, the VC dimension of H is bounded by VCdim(H) ≤dim(F) + 1. Then by thm:sample_complexity, we can denote that for any ϵ > 0 and γ∈ (0, 1), with at least N ≥64/ϵ^2( 2d ln12/ϵ + ln4/γ) samples, we have with probability at least 1 - γ: max_q ∈𝒬ℒ̂_tar(x_t) ≥max_q ∈𝒬ℒ_tar(x_t) - ϵ Therefore, the finite-sample approximation of the target constraint function converges polynomially (to 1/ϵ) to ℒ_tar with high probability as the number of samples increases. Therefore, thm:alg_bound indicates that we can effectively approximate ℒ_tar with a polynomially bounded number of samples, and we use function eqn:finite_sample to serve as the constraint for the overall optimization for . §.§ Additional Related Works §.§.§ Retrieval Augmented Generation Retrieval Augmented Generation (RAG) <cit.> is widely adopted to enhance the performance of LLMs by retrieving relevant external information and grounding the outputs and action of the model <cit.>. The retrievers used in RAG can be categorized into sparse retrievers (e.g. 
BM25), where the embedding is a sparse vector which usually encodes lexical information such as word frequency <cit.>; and dense retrievers, where the embedding vectors are dense and the encoder is usually a fine-tuned version of a pre-trained BERT encoder <cit.>. We focus on red-teaming LLM agents with RAG handled by dense retrievers, as they are much more widely adopted in LLM agent systems and have been shown to perform much better in terms of retrieval accuracy <cit.>. In our discussion, we categorize RAG into two types based on the training scheme: (1) end-to-end training, where the retriever is updated using a causal language modeling pipeline with a cross-entropy loss <cit.>; and (2) training with a contrastive surrogate loss, where the retriever is trained alone, usually on a held-out training set <cit.>. During end-to-end training, both the retriever and the generator are optimized jointly using the language modeling loss <cit.>. The retriever selects the top K documents ℰ_K(q) based on their relevance to the input query q, and the generator conditions on both q and each retrieved document ℰ_K(q) to produce the output sequence y (or action a for an LLM agent). Therefore, the probability of the generated output is given by: p_RAG(y|q) ≈ ∑_ℰ_K(q) ∈top-k(p(·|q)) p_E_q(ℰ_K(q)|q)p_LLM(y|q,ℰ_K(q)) = ∑_ℰ_K(q) ∈top-k(p(·|q)) p_E_q(ℰ_K(q)|q) ∏_i=1^N p_LLM(y_i|q,ℰ_K(q),y_1:i-1) Correspondingly, the training objective is to minimize the negative log-likelihood of the target sequence by optimizing E_q: ℒ_RAG = -log p_RAG(y|q) = -log∑_ℰ_K(q) ∈top-k(p(·|q)) p_E_q(ℰ_K(q)|q) ∏_i=1^N p_LLM(y_i|q,ℰ_K(q),y_1:i-1) This way, the embedder E_q is trained to align with the holistic goal of the generation task. While effective, the end-to-end training scheme only demonstrates good performance when applied during pre-training, which makes training very costly. Therefore, extensive works on RAG explore training E_q via a surrogate contrastive loss to learn a good ranking function for retrieval. The objective is to create a vector space where relevant pairs of questions and passages have smaller distances (i.e., higher similarity) than irrelevant pairs. The training data consists of instances {⟨ q_i, k_i^+, k_i,1^-, …, k_i,n^- ⟩}_i=1^m, where each instance includes a query q_i, a relevant key k_i^+, and n irrelevant keys k_i,j^-. The contrastive loss function is defined as: L(q_i, k_i^+, k_i,1^-, ⋯, k_i,n^-) = -loge^sim(q_i, k_i^+)/e^sim(q_i, k_i^+) + ∑_j=1^n e^sim(q_i, k_i,j^-) Specifically, eqn:contrastive_loss encourages the retriever E_q to assign higher similarity scores to positive pairs than to negative pairs, effectively improving the retrieval accuracy. Different embedders often differ in their curation of the negative samples <cit.>.
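A compact PyTorch sketch of the contrastive retrieval loss in eqn:contrastive_loss, here with dot-product similarity over a batch of query, positive-key, and negative-key embeddings; it is meant as an illustration of the equation above rather than the exact training recipe of any specific retriever.

import torch
import torch.nn.functional as F

def contrastive_retrieval_loss(q_emb, pos_emb, neg_emb):
    # q_emb: (B, d) query embeddings; pos_emb: (B, d) relevant keys; neg_emb: (B, n, d) irrelevant keys
    pos_sim = (q_emb * pos_emb).sum(dim=-1, keepdim=True)          # sim(q_i, k_i^+), shape (B, 1)
    neg_sim = torch.einsum("bd,bnd->bn", q_emb, neg_emb)           # sim(q_i, k_{i,j}^-), shape (B, n)
    logits = torch.cat([pos_sim, neg_sim], dim=1)                  # positive pair in column 0
    labels = torch.zeros(q_emb.size(0), dtype=torch.long)
    # Cross-entropy against label 0 equals the negative log-softmax of the positive pair,
    # i.e., the contrastive objective in eqn:contrastive_loss.
    return F.cross_entropy(logits, labels)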
http://arxiv.org/abs/2407.11950v1
20240716174434
Temporally Consistent Stereo Matching
[ "Jiaxi Zeng", "Chengtang Yao", "Yuwei Wu", "Yunde Jia" ]
cs.CV
[ "cs.CV" ]
Temporally Consistent Stereo Matching J. Zeng et al. Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology, China Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, China Horizon Robotics {jiaxi,yao.c.t,wuyuwei,jiayunde}@bit.edu.cn Temporally Consistent Stereo Matching Jiaxi Zeng1,20009-0007-0481-2005 Chengtang Yao1,3 Yuwei Wu1,2Corresponding author Yunde Jia2,1 Received —; accepted — ================================================================================================== § ABSTRACT Stereo matching provides depth estimation from binocular images for downstream applications. These applications mostly take video streams as input and require temporally consistent depth maps. However, existing methods mainly focus on the estimation at the single-frame level. This commonly leads to temporally inconsistent results, especially in ill-posed regions. In this paper, we aim to leverage temporal information to improve the temporal consistency, accuracy, and efficiency of stereo matching. To achieve this, we formulate video stereo matching as a process of temporal disparity completion followed by continuous iterative refinements. Specifically, we first project the disparity of the previous timestamp to the current viewpoint, obtaining a semi-dense disparity map. Then, we complete this map through a disparity completion module to obtain a well-initialized disparity map. The state features from the current completion module and from the past refinement are fused together, providing a temporally coherent state for subsequent refinement. Based on this coherent state, we introduce a dual-space refinement module to iteratively refine the initialized result in both disparity and disparity gradient spaces, improving estimations in ill-posed regions. Extensive experiments demonstrate that our method effectively alleviates temporal inconsistency while enhancing both accuracy and efficiency. Currently, our method ranks second on the KITTI 2015 benchmark, while achieving superior efficiency compared to other state-of-the-art methods. The code is available at <https://github.com/jiaxiZeng/Temporally-Consistent-Stereo-Matching>. § INTRODUCTION Stereo matching estimates the disparity by finding the horizontal correspondences between the rectified left and right images. It plays a significant role in various fields such as autonomous driving<cit.>, robotics<cit.>, SLAM<cit.>, AR/VR<cit.>. In these applications, stereo videos are typically taken as input and the output disparity sequences are employed for downstream tasks. However, the majority of existing methods<cit.> predict disparities independently for each stereo pair, disregarding the coherence between consecutive frames. This typically results in temporally inconsistent results, as shown in the second row of Fig. <ref> (a). The inconsistent depth information has been demonstrated to substantially influence the accuracy and consistency of downstream tasks<cit.>. Although some techniques, such as Extended Kalman Filtering (EKF)<cit.> or Bundle Adjustment (BA)<cit.>, mitigate the impact of inconsistency, they also have respective limitations. For instance, EKF suffers from non-linear errors, while BA brings intensive computation. In this work, our goal is to improve the temporal consistency of stereo matching. We analyze the source of temporal inconsistency in stereo matching from two perspectives. 
On the one hand, most of the existing methods <cit.> independently infer the disparity map from scratch for each frame. This leads to a global disparity search range, which introduces greater variability in disparity computation (i.e., a larger update step size), increasing the likelihood of inconsistency between consecutive frames. As shown in Fig. <ref> (c), our method has a smaller update step size, indicating that it searches for the ground truth within a local disparity range, while RAFT-Stereo regresses the disparity from scratch in a global disparity range. On the other hand, camera or object motion leads to appearance changes in occluded areas, reflective surfaces, or low-texture regions. The inherent ambiguity in these ill-posed regions causes the model to output unstable results when processing temporally changing image sequences. Fig. <ref> (b) demonstrates that the temporal inconsistency in ill-posed regions, like occlusions, is more serious than that in general regions. Based on the above analyses, we propose to leverage temporal information to mitigate temporal inconsistency. To avoid large update steps during the refinement process, we use the semi-dense disparity map from the previous time step for disparity completion, providing a well-initialized disparity for subsequent iterative refinement. This allows the refinement module to focus on local-range disparity updates. Additionally, we conduct a straightforward temporal state fusion to fuse the hidden state from the previous refinement with the current state features from the completion module, thus providing a temporally coherent initial hidden state for further refinement. Moreover, we find that, for ill-posed areas like occlusions or reflective regions, it is easier to estimate the disparity gradient than the disparity itself. This is primarily because, in the real world, the depth of most regions tends to be smooth and continuous, producing constant or gradually changing gradients in these areas. Based on this observation, we propose a dual-space refinement module. This module takes the temporally initialized disparity and fused states as inputs, and refines the results in both the disparity and disparity gradient spaces, improving estimations in ill-posed regions. Through iterative refinement, the local smoothness constraints in the disparity gradient space are progressively extended to more global areas, thereby improving the smoothness of ill-posed regions and resulting in stable outputs. Extensive experiments show that our method achieves state-of-the-art temporal consistency and accuracy. The contributions of our work can be summarized as follows: (1) We analyze the causes of temporal inconsistency and propose a temporally consistent stereo matching method. (2) We propose a temporal disparity completion and a temporal state fusion module to exploit temporal information, providing a well-initialized disparity and a coherent hidden state for refinement. (3) We propose an iterative dual-space refinement module to refine and stabilize the results in ill-posed regions. (4) Our method improves the temporal consistency of stereo matching and achieves SOTA results on synthetic and real-world datasets. § RELATED WORK Deep Stereo Matching Existing deep stereo matching methods primarily revolve around the cost volume to design networks or representations, and are commonly categorized into regression-based methods<cit.> and iterative-based methods<cit.>.
Regression-based methods regress a probability volume to compute the disparity map, which can be classified into 3D volume methods<cit.> and 4D volume methods<cit.>. They either directly regress the disparity in the global disparity range <cit.> or perform a global-to-local regression <cit.>. However, the motion between frames may cause variation in the global disparity probability distribution, leading to inconsistencies between sequential disparity maps. Iterative-based methods regard stereo matching as a continuous optimization process in the disparity space, iteratively retrieving the cost volume to refine the disparity map. RAFT-Stereo<cit.> leverages the optical flow method RAFT <cit.> for stereo matching and introduces a multi-level GRU to enlarge the receptive field. CREStereo<cit.> introduces a cascade network and adaptive group correlation layer to address challenges such as thin structures and imperfect rectification. DLNR<cit.> employs LSTM to alleviate the data coupling problem in the iteration of GRU. PCVNet<cit.> proposed a parameterized cost volume to accelerate the convergence of iterations by predicting larger update steps. IGEV-Stereo<cit.> regresses a geometry encoding volume to provide an accurate initial disparity for the GRU module, accelerating the convergence of disparity optimization. Although these methods have achieved excellent results, independently inferring the disparities for each frame leads to poor temporal consistency. In contrast, our method integrates the disparity and state information of the previous frame to ensure coherence between the consecutive outputs. Video Stereo Matching Recently, there has been increasing attention on video stereo matching. Dynamic-Stereo<cit.> and CODD<cit.> focus on consistent stereo matching in dynamic scenes. Dynamic-Stereo jointly processes multiple frames, improving the temporal consistency of the results. However, the latency and computational overhead introduced by multi-frame inference limit its application in online scenarios. CODD predicts SE(3) transformations for each pixel to align successive disparity maps and fuses them together. Different from their post-fusion strategy, we integrate temporal information before our refinement, providing a robust local disparity searching range. TemporalStereo <cit.> mitigates the adverse effects of occlusions and reflections by augmenting current cost volume with the costs from the previous frame. Nevertheless, it still relies on coarse-to-fine regression in the global disparity range. XR-Stereo<cit.> extends RAFT-Stereo<cit.> over time, reducing iterations by reusing results from the previous frame, achieving over 100 FPS in a low-resolution mode. However, it simply takes the previous disparity and hidden state as the initialization of the current frame, disregarding the initial disparity and hidden state are semi-dense due to non-overlapping regions between frames. For these regions, it is still necessary to regress the disparity from scratch. Instead, we suggest completing the disparity map and incorporating the current state information to obtain a robust initialization for local refinement. The importance of these designs for improving temporal consistency was demonstrated in our ablation experiments. Depth Completion Depth completion relies on a sparse depth map to generate a dense depth map<cit.>. 
Some methods<cit.> formulate depth completion as a stereo matching task, while some stereo matching methods<cit.> utilize sparse depth from LiDAR to guide the matching process. Different from these works, our disparity completion module does not rely on additional LiDAR data or complicated architectures. It utilizes a simple encoder-decoder to complete a semi-dense disparity map projected from the previous timestamp and provides the state features of the completed disparity map for further refinement. Normal Guided Depth Estimation Substantial research <cit.> demonstrates that utilizing surface normals contributes to depth estimation. Most methods <cit.> design loss functions to leverage normals for constraining the depth map. Geo-Net<cit.> utilizes constraints between the normal and depth during inference, but this constraint is limited to local areas. HITNet<cit.> up-samples the disparity map through predicted disparity gradients to constrain the local surfaces. In this work, we iteratively refine results in the disparity and disparity gradient spaces, propagating the surface constraint globally. § METHOD Our method processes stereo video sequences in an online manner and outputs temporally consistent disparity maps. The pipeline of TC-Stereo is depicted in Fig. <ref>. In the following subsections, we will provide a detailed introduction to three key components of our TC-Stereo: temporal disparity completion (Sec. <ref>), temporal state fusion (Sec. <ref>), and iterative dual-space refinement (Sec. <ref>). Finally, we will present the loss functions involved (Sec. <ref>). §.§ Temporal Disparity Completion The aim of temporal disparity completion is to provide a well-initialized disparity map by leveraging the result of the previous frame. However, for the initial frame, temporal information is unattainable. As a result, there is no disparity point to use as the hint for the completion module, causing it to degrade into a monocular module and affecting the reliability of the initialization. To address this, we propose leveraging a semi-dense disparity map derived from the cost volume to compensate for the lack of temporal information in the initial frame. Semi-dense disparity map from the cost volume We define the cost volume C ∈ [0,1]^H × W × D as the cosine similarity between the left and right features for each disparity hypothesis d ∈𝔻={0,1,...,D-1}. This is expressed as: C(v,u,d) = ⟨ F_l(v,u), F_r(v,u-d) ⟩/(‖F_l(v,u)‖_2 ·‖F_r(v,u-d)‖_2), where F_l(v,u) and F_r(v,u-d) represent the feature vectors at pixel coordinates (v,u) in the left image and (v,u-d) in the right image, respectively. Here, ⟨·,·⟩ denotes the inner product and ‖·‖_2 is the Euclidean norm of a feature vector. We first compute the disparity from the cost volume by the winner-take-all strategy. Then, a threshold is applied to filter out outliers and obtain a semi-dense disparity map. This process can be summarized by the following formula: d_1 = argmax_d ∈𝔻 C(d), d_2 = argmax_d ∈𝔻∖{d_1-1,d_1,d_1+1} C(d), d_s = d_1, if C(d_1) - C(d_2) > θ; 0, otherwise, where d_1 is the disparity with the highest similarity, and d_2 is the disparity with the second-highest similarity, excluding the immediate neighbors of d_1. The semi-dense disparity map d_s retains d_1 only if C(d_1) exceeds C(d_2) by a threshold θ; otherwise, it is set to 0, indicating an invalid point. This process filters out disparities with high uncertainty, yielding a more reliable semi-dense disparity map for the completion.
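The winner-take-all filtering described above can be sketched as follows with NumPy, given a precomputed floating-point similarity volume; θ = 0.3 matches the value reported later in the implementation details, while the array layout is an assumption.

import numpy as np

def semi_dense_from_cost_volume(C, theta=0.3):
    # C: (H, W, D) cosine-similarity volume; returns a semi-dense disparity map (0 = invalid).
    H, W, D = C.shape
    d1 = C.argmax(axis=-1)                                   # winner-take-all disparity
    c1 = np.take_along_axis(C, d1[..., None], axis=-1)[..., 0]
    masked = C.copy()                                        # suppress d1 and its immediate neighbors
    for off in (-1, 0, 1):
        idx = np.clip(d1 + off, 0, D - 1)
        np.put_along_axis(masked, idx[..., None], -np.inf, axis=-1)
    c2 = masked.max(axis=-1)                                 # best similarity outside {d1-1, d1, d1+1}
    d_s = np.where(c1 - c2 > theta, d1, 0)                   # keep only confident pixels
    return d_s.astype(np.float32)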
Moreover, to effectively learn the semi-dense disparity map from the cost volume, we apply a contrastive loss<cit.> to the cost volume. We discuss the loss in detail in Section <ref>. Semi-dense disparity map from the previous timestamp For subsequent frames, we project the previous disparity map into the current image coordinates based on the intrinsic parameters and poses. This process can be represented as: p^t-1 → t = Π_c(T^t-1 → t·Π_c^-1(p^t-1,z^t-1)), z^t-1 → t = (T^t-1 → t·Π_c^-1(p^t-1,z^t-1))_z, d^t-1 → t = bf/z^t-1 → t, d^t_s = warp(d^t-1 → t,p^t-1 → t). Here, p^t-1 and z^t-1 denote the coordinates of pixels and depths at time t-1, respectively. Π_c represents the image projection process, which maps 3D points onto the 2D image. T^t-1 → t is the relative transformation from time t-1 to t. b is the baseline of stereo camera, and f is the pixel-presented focal length. We forward warp d^t-1 → t by the pixel mapping p^t-1 → t to get the semi-dense disparity map d_s^t at time t. The flexible sources of the semi-dense disparity map enable our model to perform inference in both single-frame and multi-frame modes. Disparity completion Our temporal disparity completion module utilizes contexts from the feature encoder, the semi-dense disparity, and a binary mask that indicates the sparsity as inputs. It employs a lightweight encoder-decoder network to regress a dense disparity map and outputs state features of the map for temporal state fusion. The detailed architecture of the completion module is provided in the supplementary materials. §.§ Temporal State Fusion Different from XR-Stereo<cit.>, which directly uses the warped previous hidden states to initialize the refinement module, we argue that the previous states may not be the best choice for initializing the current states. On the one hand, the past states only encode the state information along the previous line of sight, which may fail in the current viewpoint. On the other hand, for non-overlapping regions between two frames, there are no past hidden states available. Therefore, we leverage a lightweight GRU-like module to fuse the current state features c^t from the TDC module with the past hidden states h^t-1_N (N denotes the number of refinement iterations) to provide initial hidden states h^t_0 for the current refinement module: z^t = σ(W_z · [c^t, h^t-1_N-1]), r^t = σ(W_r · [c^t, h^t-1_N-1]), q^t = tanh(W_q · [r^t ⊙ c^t, h^t-1_N-1]), h^t_0 = z^t ⊙ c^t + (1-z^t) ⊙ q^t . Here, W_z, W_r, and W_q are the parameters of the network, σ(·) denotes the sigmoid function, and ⊙ is the element-wise dot. This simple module plays a crucial role in enhancing stability and accuracy, which we will demonstrate in the experimental section. §.§ Dual-space Refinement We observed that for some challenging regions for the disparity space, such as reflective areas, the gradient values can be easily estimated in the disparity gradient space. Based on this observation, we design a dual-space refinement module to refine the results in both the disparity space and the gradient space. As shown in Fig. <ref>, the pipeline of the dual-space module consists of three steps: disparity space refinement, gradient space refinement, and gradient-guided disparity propagation. Disparity space refinement Follow RAFT-Stereo<cit.>, we leverage the multi-level GRUs to refine the disparity map. It takes the hidden state h_i (i denotes the iteration), contextual features, the looked-up costs, and the disparity map as inputs. 
It outputs an intermediate hidden state h'_i+1 and a step size Δ d to update the disparity map. Gradient space refinement We convert the updated disparity map to its disparity gradient space. For a point (u_0, v_0, d_0) in the disparity space, we sample two neighbor points (u_1, v_1, d_1) and (u_2, v_2, d_2), which yields two vectors, 𝐱_1=(Δ u_1,Δ v_1, Δ d_1)=(u_1-u_0,v_1-v_0,d_1-d_0) and 𝐱_2=(Δ u_2,Δ v_2, Δ d_2)=(u_2-u_0,v_2-v_0,d_2-d_0). We assume that 𝐱_1 and 𝐱_2 are non-collinear, and the rotation from 𝐱_1 to 𝐱_2 in the u-v coordinate is clockwise. The formula of the disparity gradient can be represented as: [ [ ∂ d/∂ u; ∂ d/∂ v ] = [ Δ v_1 Δ d_2 - Δ v_2 Δ d_1/Δ u_2 Δ v_1 - Δ u_1 Δ v_2; Δ u_2 Δ d_1 - Δ u_1 Δ d_2/Δ u_2 Δ v_1 - Δ u_1 Δ v_2 ] ]. By sampling different points within the neighborhood, we can obtain a series of gradient maps. The sampled gradients and contextual features are input into an encoder-decoder network to regress a refined gradient map. Gradient-guided disparity propagation We utilize the refined gradients to guide the propagation of refined disparity in the disparity space. Specifically, for each pixel (u,v) with disparity d, we propagate the disparity to its neighborhood (u_n,v_n) based on the local planar hypothesis: d̂_n=d+(u_n-u)∂ d/∂ u+(v_n-v)∂ d/∂ v, where d̂_n is a propagated disparity candidate. We concatenated the h'_i+1, contexts, and the propagated disparity candidates and fed them into several convolutional layers with a softmax function to regress a set of weights, denoted as w. The refined disparity is regressed through the weighted summation of the disparity candidates. Finally, a lightweight GRU network with 1×1 convolutional layers updates the intermediate hidden state using the refined disparity. The updated hidden state h_i+1 and the disparity are then used as inputs for the next iteration. By iteratively refining the results in both disparity and gradient space, the surface constraints are propagated globally, improving results in ill-posed areas. §.§ Loss functions Our loss function consists of three components: the cost volume loss ℒ_cv, the disparity loss ℒ_disp, and the disparity gradient loss ℒ_grad. For the ℒ_cv, we adopt the contrastive loss proposed by HITNet<cit.> to supervise the cost volume: ℒ_cv = 1 - ψ(d^gt) + max(η + ψ(d^nm) - ψ_detach(d^gt), 0), d^nm = max_d ∈𝔻∖ [d^gt - 1.5, d^gt + 1.5] C(d), ψ(d) = (d - ⌊ d ⌋) C(⌊ d ⌋ + 1) + (⌊ d ⌋ + 1 - d) C(⌊ d ⌋). ψ(d) is the cost (strictly speaking, similarity) for a sub-pixel disparity d. It is obtained through linear interpolation between C(⌊ d ⌋ + 1) and C(⌊ d ⌋), where ⌊·⌋ is the floor function. The aim of ℒ_cv is to maximize ψ(d^gt), while penalizing ψ(d^nm) to ensure it remains at least a threshold η lower than ψ(d^gt). As for the disparity loss ℒ_disp, it comprises three subparts: disparity completion loss ℒ_dc, disparity space refinement loss ℒ_dsr, and gradient-guided disparity propagation loss ℒ_gdp, which can be expressed as: ℒ_disp = λ_dcℒ_dc + ∑_i=1^Nγ^N-i(ℒ^i_dsr + λ_gdpℒ^i_gdp). N represents the number of iterations, γ denotes the decay coefficient, and λ_dc and λ_gdp represent the balancing scalars. All these loss functions utilize the L1 loss between the d^gt and the disparity outputs at the corresponding stage. The disparity gradient losses ℒ_grad can be formulated as: ℒ_grad = g^gsr-g^gt_1 +g^gdp-g^gt_1, g^gt =∇_u,v d^gt, g^gdp =∇_u,v d^gdp. The g^gsr represents the refined gradient map in the gradient space refinement. 
The g^gt and g^gdp are derived by taking the gradients of d^gt and d^gdp with respect to u and v. The final loss is the combination of ℒ_cv, ℒ_disp and ℒ_grad: ℒ = ℒ_cv+ℒ_disp+ℒ_grad . § EXPERIMENTS §.§ Datasets TartanAir<cit.> is a synthetic dataset for visual SLAM, covering various challenging indoor and outdoor scenarios. It encompasses over a thousand stereo videos, totaling approximately 306K stereo pairs, making it well-suited for training our temporal model. In this study, we employ this dataset for pre-training and conduct ablation experiments on it. Sceneflow<cit.> is a synthetic dataset containing three sub-datasets covering indoor and outdoor scenes. This dataset includes short stereo video sequences (10 frames), with large motions between frames. We utilize the dataset for temporal training and compare our method with others. KITTI<cit.> is a real-world dataset for autonomous driving. It provides a leaderboard to evaluate the methods on a test set consisting of 200 images. Our work utilizes video sequences provided by KITTI for inference. ETH3D SLAM<cit.> is a real-world dataset for SLAM in indoor environments. It provides stereo video sequences, as well as depth and pose ground truth. We evaluated the generalization performance of our method on this dataset. §.§ Implementation Details Our method is based on the implementation of RAFT-Stereo<cit.>. For both training and testing, we set the number of refinement iterations N to 5. During the training, we utilize a slice of stereo sequences as input and output the corresponding disparities frame by frame. We compute the loss for each frame's output and accumulate the gradients across all frames before updating the parameters. In our experiment, we set the sequence length to 2 or 4 frames for training. During the inference process, our method takes a video stream of arbitrary length as input and outputs disparity predictions in an online manner. For the hyperparameters, we set θ = 0.3, η = 0.5, γ = 0.9, and λ_dc and λ_gdp to 0.1 and 1.2, respectively. We use the AdamW optimizer and a one-cycle learning rate schedule where the maximum learning rate is set to 0.0002. The batch size is set to 8 for all the experiments. For the TartanAir dataset, we use all data (including Easy and Hard) for the ablation study, and the detailed training-testing split can be found in the supplementary material. We train for 100k steps on this dataset. To make a fair comparison with TemporalStereo<cit.>, we retrain our model with a sequence length of 2 and 200k steps according to their train-test split. For the SceneFlow dataset, we train with the provided sequence data for 200k steps, with a sequence length of 2. For the KITTI dataset, due to the lack of temporally annotated data, we follow the setting of TemporalStereo<cit.> and train our model using pseudo-label data exported by the SOTA method LEAStereo<cit.> on the KITTI raw data. We train for 50k steps with a sequence length of 4 and a max learning rate of 0.0001, based on the model pre-trained on TartanAir. All pose data are either provided by the dataset or derived from SLAM algorithms. All experiments are conducted on two NVIDIA A40 GPUs. §.§ Ablation Study and Analysis Ablation Study We demonstrate the effectiveness of our designs through ablation experiments conducted on the TartanAir dataset. 
As shown in Table <ref>, we investigate the impacts of sequence length for training, temporal information (Past disp & state), temporal state fusion, temporal disparity completion (TDC) module, and dual-space refinement module on the accuracy in all regions and occluded regions. Initial settings (A) and (B) serve as baselines, representing RAFT-Stereo<cit.> with 5 and 32 iterations respectively. For a fair comparison, we set the sequence length during training to 2 for them, even though they don't use temporal information. Similar to XR-Stereo<cit.>, setting (C) utilizes the past disparity and hidden states as initialization, but direct iterations on this initialization reduce performance, as analyzed in Section <ref>. As shown by the results of setting (D), the performance is improved in both all and occluded regions compared with (C), demonstrating the necessity of temporal state fusion. The accuracy of (E) shows further improvement, particularly on the 1-px error rate, which benefits from the well-initialized disparity provided by the TDC module. This enables the refinement module to be devoted to improving the fine-grained matching. (F) demonstrates that the dual-space refinement effectively improves the results in occluded regions. By combining (E) and (F), (G) achieves an even more precise stereo matching capability, exemplifying the synergy between the modules. Furthermore, we extend the training sequence length from 2 to 4 and get a better disparity output, suggesting that the performance of our model scales with a more extensive temporal context. Setting (I) corresponds to the model of (H) in the single-frame mode. Although there is a slight decrease in accuracy compared to setting (H), the results are still superior to (A). This demonstrates the versatility of our method in the single-frame and multi-frame mode, while also emphasizing the importance of temporal information. To sum up, our TC-Stereo (G) and (H), with only 5 iterations, significantly improve the results of RAFT-Stereo (A) and even surpass the performance of RAFT-Stereo with 32 iterations (B) on all metrics, greatly enhancing efficiency. Temporal Consistency We design two types of metrics to evaluate the temporal consistency. For the first one, We convert the predicted disparity map d^t+1 to the image coordinate at time t through poses and ground truth optical flow, denoting as d^t+1. We use the absolute difference between d^t+1 and d^t, denoted as |Δ d|=|d^t+1-d^t|, to evaluate temporal consistency. However, evaluating temporal absolute differences alone is one-sided. If a disparity d^t is incorrect, the model should be allowed to make correct predictions at the next time step t+1, even if this results in temporary inconsistencies. To evaluate temporal consistency comprehensively, we design a more lenient metric to allow the temporal change towards d^gt. Concretely, we calculate the error maps e=|d-d^gt| for timestamp t and t+1, and then compute the change in error, Δ e = e^t+1-e^t. We use Relu(Δ e) to evaluate the extent of error divergence over time (, error increased from the previous frame for the same 3D point). Table <ref> proves that our final models, (G) and (H), achieve the best temporal consistency and convergence compared to other settings. Settings (D), (E), and (F) respectively demonstrate the effectiveness of the state fusion, TDC module, and Dual-space refinement module in improving temporal consistency. 
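For reference, the two consistency measures defined above, |Δd| and ReLU(Δe), can be computed roughly as in the sketch below. The alignment of frame t+1 onto frame t is reduced here to a nearest-neighbor warp with the ground-truth optical flow, which is a simplification of the pose- and flow-based reprojection described in the text, and occlusion handling is omitted.

```python
import numpy as np

def warp_to_prev(img_next, flow_t_to_next):
    """Nearest-neighbor backward warp of a (H, W) map from frame t+1 into the
    coordinates of frame t, using optical flow from t to t+1 (in pixels)."""
    H, W = img_next.shape
    v, u = np.mgrid[0:H, 0:W]
    u2 = np.clip(np.rint(u + flow_t_to_next[..., 0]), 0, W - 1).astype(int)
    v2 = np.clip(np.rint(v + flow_t_to_next[..., 1]), 0, H - 1).astype(int)
    return img_next[v2, u2]

def temporal_consistency(d_t, d_next, d_gt_t, d_gt_next, flow_t_to_next):
    """Returns mean |Δd| (absolute temporal change for the same 3D point) and
    mean ReLU(Δe) (error divergence with respect to frame t)."""
    d_next_w = warp_to_prev(d_next, flow_t_to_next)                    # \tilde d^{t+1}
    abs_dd = np.abs(d_next_w - d_t)                                    # |Δd|
    e_t = np.abs(d_t - d_gt_t)                                         # e^t
    e_next = warp_to_prev(np.abs(d_next - d_gt_next), flow_t_to_next)  # e^{t+1}, aligned to t
    err_div = np.maximum(e_next - e_t, 0.0)                            # ReLU(Δe)
    return abs_dd.mean(), err_div.mean()
```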
Notably, (G) is comparable in accuracy to the baseline with 32 iterations (B), yet (G) significantly outperforms (B) in temporal metrics, especially in occlusion areas, further demonstrating the effectiveness of our TC-Stereo in mitigating temporal inconsistencies. Additionally, although (I) surpasses (D) in accuracy metrics, it is inferior to (D) in temporal metrics, further revealing the crucial role of temporal information in improving temporal consistency. We also provide the results of the 3-pixel error of consistency metrics in the supplementary materials. We present visualizations of disparity sequences to illustrate the improvement in temporal consistency achieved by our method. As shown in Fig. <ref>, our method produces more consistent disparity maps in the highlighted white box areas compared to other SOTA methods. More visualizations of temporal consistency are given in the appendix and our project website. Local Searching Range and Fast Convergence Fig. <ref> illustrates the disparity maps and update step sizes of our method and RAFT-Stereo, step by step. It can be seen that our method iterates based on a well-initialized disparity, providing a local disparity searching range. This leads to smaller update step sizes and faster convergence to the ground truth. Improvement on Ill-posed Areas Fig. <ref> shows the results of occlusion, transparent areas, and out-of-view areas, respectively. Thanks to the temporal information and the design of dual-space refinement, our method achieved smoother disparity output and clearer boundaries in these ill-posed regions. Robustness Analysis Eq. <ref> is based on the static scene assumption, resulting in unreliable initial disparities in dynamic areas. However, our method remains robust in ordinary dynamic scenes since we only use the temporal information for initialization. Disparities in dynamic regions will be corrected by iterative refinements. The moving car in Fig. <ref> is a good example of this robustness. Additionally, we discuss the robustness of our method on incorrect poses and large motions in the supplementary materials. Zero-shot Generalization We evaluated the zero-shot generalization performance on the ETH3D SLAM dataset, as shown in Table <ref> (right). All models are only trained on SceneFlow. The metrics exclude regions with EPE >50 due to significant noise of depth ground truth around object boundaries. Our method surpasses other methods in generalization with fewer iterations. We also provide the qualitative results of our method on ETH3D SLAM, as shown in Fig. <ref>. §.§ Benchmark Results Sceneflow We conducted evaluations on the SceneFlow dataset, as shown in Table <ref>. Overall, our method outperforms the other methods except DLNR on the EPE metric, which may be attributed to their use of a more powerful backbone and more iterations. Our method exhibits a clear advantage over others in the >1px and >3px error rate metrics. TartanAir We compared our method with TemporalStereo<cit.> and RAFT-Stereo<cit.> on the TartanAir dataset using the train-test split of TemporalStereo. As shown in Table <ref> (left), our model surpassed RAFT-Stereo with only 5 iterations. Furthermore, it demonstrates significant advantages over the other two methods in terms of the EPE metric in occluded regions. This indicates that our design can enhance performance in ill-posed areas. KITTI 2015 We present a comprehensive comparison of our method with SOTA methods on the KITTI 2015 dataset. 
As shown in Table <ref>, our proposed method, TC-Stereo, outperforms the other methods in most key metrics. Specifically, it achieves the lowest error rates in both non-occluded (Noc) and all regions, with a D1-all error rate of 1.46%, which is an 8.18% improvement over the best-performing IGEV-Stereo<cit.>. Among all these methods, our method exhibits the smallest difference between the Noc and All metrics, demonstrating our superior performance in occluded areas. Additionally, TC-Stereo showcases exceptional efficiency, with a processing time of only 0.09 seconds per frame, which is significantly faster than other methods with high performance. This performance shows that TC-Stereo excels in both accuracy and speed, making it well-suited for online applications. § LIMITATION Despite the robustness of our model for ordinary dynamic regions and pose noise, it remains sensitive to large motions of dynamic objects, large errors in camera pose, and little overlap between adjacent frames, as we discuss in the supplementary materials. In these challenging cases, our method degrades to searching for the ground truth in a global disparity range like RAFT-Stereo. § CONCLUSION In this paper, we proposed a temporally consistent stereo matching method that exploits temporal information through our temporal disparity completion module and temporal state fusion module, providing a well-initialized disparity map and hidden states. We further proposed a dual-space refinement module that iteratively refines the temporally initialized results in both disparity and disparity gradient spaces, improving estimations in ill-posed regions. Experiments demonstrated that our method effectively mitigates temporal inconsistency in video stereo matching and achieves SOTA results on challenging datasets. § ACKNOWLEDGEMENTS This work was supported by the Natural Science Foundation of Shenzhen under Grant No. JCYJ20230807142703006, Natural Science Foundation of China (NSFC) under Grants No. 62172041 and No. 62176021, and Key Research Platforms and Projects of the Guangdong Provincial Department of Education under Grant No.2023ZDZX1034. splncs04
http://arxiv.org/abs/2407.12555v1
20240717133547
GRBAlpha and VZLUSAT-2: GRB observations with CubeSats after 3 years of operations
[ "Filip Münz", "Jakub Řípa", "András Pál", "Marianna Dafčíková", "Norbert Werner", "Masanori Ohno", "László Meszáros", "Vladimír Dániel", "Peter Hanák", "Ján Hudec", "Marcel Frajt", "Jakub Kapuš", "Petr Svoboda", "Juraj Dudáš", "Miroslav Kasal", "Tomáš Vítek", "Martin Kolář", "Lea Szakszonová", "Pavol Lipovský", "Michaela Ďuríšková", "Ivo Veřtát", "Martin Sabol", "Milan Junas", "Roman Maroš", "Pavel Kosík", "Zsolt Frei", "Hiromitsu Takahashi", "Yasushi Fukazawa", "Gábor Galgóczi", "Balázs Csák", "Robert László", "Tsunefumi Mizuno", "Nikola Husáriková", "Kazuhiro Nakazawa", "." ]
astro-ph.HE
[ "astro-ph.HE" ]
a]Filip Münz a]Jakub Řípa b]András Pál a]Marianna Dafčíková a]Norbert Werner c]Masanori Ohno b]László Meszáros d]Vladimír Dániel e]Peter Hanák f]Ján Hudec f]Marcel Frajt f]Jakub Kapuš d]Petr Svoboda d]Juraj Dudáš k]Miroslav Kasal a]Tomáš Vítek a]Martin Kolář a]Lea Szakszonová f]Pavol Lipovský a]Michaela Ďuríšková g]Ivo Veřtát d]Martin Sabol d]Milan Junas d]Roman Maroš a]Pavel Kosík l]Zsolt Frei c]Hiromitsu Takahashi c]Yasushi Fukazawa h]Gábor Galgóczi b]Balázs Csák i]Robert László c]Tsunefumi Mizuno a]Nikola Husáriková j]Kazuhiro Nakazawa [a]Masaryk University, Brno, Czech Republic [b]Konkoly Observatory, Budapest, Hungary [c]Hiroshima University, Japan [d]Czech Aerospace Research Center, Prague, Czech Republic [e]Technical University of Košice, Slovakia [f]Spacemanic Ltd, Brno, Czech Republic [g]University of West Bohemia, Pilsen, Czech Republic [h]Wigner Research Center for Physics, Budapest, Hungary [i]Needronix s.r.o., Bratislava, Slovakia [j]Nagoya University, Japan [k]Brno University of Technology, Czech Republic [l]Eötvös Loránd University, Budapest, Hungary E-mail: munz@physics.muni.cz empty GRBAlpha and VZLUSAT-2: GRB observations with CubeSats after 3 years of operations [ July 22, 2024 ================================================================================== § ABSTRACT GRBAlpha is a 1U CubeSat launched in March 2021 to a sun-synchronous LEO at an altitude of 550 km to perform an in-orbit demonstration of a novel gamma-ray burst detector developed for CubeSats. VZLUSAT-2 followed ten months later in a similar orbit carrying as a secondary payload a pair of identical detectors as used on the first mission. These instruments detecting gamma-rays in the range of 30-900 keV consist of a 56 cm2 5 mm thin CsI(Tl) scintillator read-out by a row of multi-pixel photon counters (MPPC or SiPM). The scientific motivation is to detect gamma-ray bursts and other HE transient events and serve as a pathfinder for a larger constellation of nanosatellites that could localize these events via triangulation. At the beginning of July 2024, GRBAlpha detected 140 such transients, while VZLUSAT-2 had 83 positive detections, confirmed by larger GRB missions. Almost a hundred of them are identified as gamma-ray bursts, including extremely bright GRB 221009A and GRB 230307A, detected by both satellites. We were able to characterize the degradation of SiPMs in polar orbit and optimize the duty cycle of the detector system also by using SatNOGS radio network for downlink. § INTRODUCTION This year our GRBAlpha<cit.> satellite has completed 3 years in orbit, fulfilling its role of a pathfinder for a planned constellation of nanosatellites for gamma-ray burst (GRB) detection and localization called CAMELOT<cit.>. It was followed by other CubeSats, like SpIRIT<cit.> (featuring a HERMES-SP gamma detector), EIRSAT-1<cit.> or BurstCube<cit.>, demonstrating an ongoing revolution of small satellites complementing large missions in high-energy astrophysics. Dominant approach that should allow all-sky coverage of gamma-ray transients is to measure shift in arrival time using cross-correlation of lightcurves, hopefully reaching tenth of a millisecond precision needed for detectors distributed on low Earth orbits. Although the scintillator-based detector of GRBAlpha is designed to be rather simple and robust compared to delicate instruments like HERMES , it has now accumulated very nice statistics of GRB detections and proved long-term performance evolution of SiPM detectors. 
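The constellation concept mentioned above relies on measuring the relative arrival time of the same burst in two detectors by cross-correlating their light curves. A minimal NumPy sketch of such a timing estimate could look like the following; the bin width, the search window and the parabolic sub-bin interpolation are illustrative choices, not the CAMELOT localisation pipeline.

```python
import numpy as np

def lightcurve_lag(rate_a, rate_b, dt, max_lag_bins=200):
    """Estimate the arrival-time difference (in seconds) between two binned,
    background-subtracted light curves sampled on the same time grid.
    A positive value means the transient arrives later in `rate_b`."""
    a = rate_a - rate_a.mean()
    b = rate_b - rate_b.mean()

    def shifted_dot(lag):
        # cc(lag) = sum_n a[n] * b[n + lag] over the overlapping samples
        if lag >= 0:
            return float(np.dot(a[:len(a) - lag], b[lag:]))
        return float(np.dot(a[-lag:], b[:len(b) + lag]))

    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    cc = np.array([shifted_dot(int(L)) for L in lags])
    k = int(np.argmax(cc))
    # parabolic interpolation around the peak for sub-bin precision
    if 0 < k < len(cc) - 1:
        denom = cc[k - 1] - 2.0 * cc[k] + cc[k + 1]
        frac = 0.5 * (cc[k - 1] - cc[k + 1]) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return (lags[k] + frac) * dt
```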
The same device (in two copies) was placed as a secondary payload on a 3U CubeSat designed by Czech Aerospace Research Center (VZLU) called VZLUSAT-2<cit.> that is (in contrast with GRBAlpha) equipped with an active attitude control. It has however limitations in power and data-transfer budgets so that our gamma-ray detectors can operate only cca 1/3 of time. Finally, a new 2U satellite named GRBBeta was integrated this year and is ready to be launched with Arianne 6 first flight. On the detector side, there are only minor improvements of the read-out electronics, however the satellite is equipped with S-band datalink that (together with AOCS system) should allow download data sampled at much higher rate. Last but not least, we profit of the fact that the payload software (including FPGA code) can be updated on board. A major change in encoding and data transmission scheme, that was performed on GRBAlpha in late 2022, allowed to increase duty cycle to almost 100% (in case of a flawless telecommanding done twice a day from our principal station in Košice, Slovakia). Using amateur UHF frequencies for both uplink and downlink, we could include some of our other stations into the SatNOGS network[<http://satnogs.org>] of radio stations and to use other selected stations of the network for data dropping of pre-selected chunks of measurements that correspond to times of events reported elsewhere. Thanks to this, statistics of detected gamma-ray events increased ten-fold since the upgrade. § COMMUNICATION Since the failure of the VHF radio module onboard (November 2021) uplink has to rely on a noisier UHF band. Current solution uses one simplex station (Technical University, Košice) for transmission and reception of telemetry is forwarded from Piszkéstető Observatory, Hungary, or Jablonec, Slovakia, also in a simplex mode. Recently, with a new radio license a station located at Konkoly Observatory in Budapest, Hungary, became available as a backup for telecommanding. Planning acquisition and downlink is simple thanks to a CubeSat control terminal called vcom developed by VZLU (with contribution of Spacemanic and our Hungarian colleagues). Distinct components on board have their proper set of sub-commands that are gradually evolving to offer more advanced features. For example, an ID of a ground station in the SatNOGS database and a (fractional) position in the data stored on board (one “file” corresponds to roughly 12hrs of measurement) is sufficient to plan a data drop: the software will retrieve station position, get times of next contact, plan follow-up of GRBAlpha with this station and schedule a data transmission on the payload's control unit. Thanks to a versatile design of the on-board communication (using either I2C or CAN protocols) payload electronics can directly send data to the radio transmitter. Currently, data are saved in a self-synchronizing variable-length code that besides providing efficient compression of Poissonian data (in many channels dominated by small numbers or whole zero blocks) it also allows bit-level identification of stored data structures (mostly spectra, temperatures and timestamps). For more details, see A.Pál 2023<cit.>. During transmission, the selected block of data (a few thousand seconds around the time of the event), too big to fit in as single radio packet, is split into chunks (typically of 128 bytes) and dropped in random order to selected ground stations. 
Some 15% redundancy in data dropping should allow to reconstruct the original lightcurve taking into account usual packet loss during transmission fading. § DETECTOR PERFORMANCE Each detector consists of a CsI(Tl) crystal scintillator wrapped in an ESR foil with one narrow side open where a pair of 4 SiPMs (S13360-3050PE by Hamamatsu, 3×3 mm active area each) are glued. Aluminum box (1mm thick) was not light-tight enough so a DuPont TCC15BL3 polyvinyl fluoride (PVF) tedlar had to be used as extra wrapping of the setup. Photodetectors were protected from protons with 2.5 mm thick PbSb alloy shield (adding a substantial mass to the weight of the whole device). Combined signal of 4 SiPMs of each channel is then amplified and shaped in an analog part of the electronic board, reaching characteristic pulse width of 15 μs. Then the signal is digitized (at cca 600 MHz sampling) and pulse amplitudes are stored in a histogram by FPGA. Each channel having its pair of FPGA and MCU, these 256-bin spectra are read out at chosen cadence and stored with required binning in a memory within the same digital board.[Two channel design provides necessary redundancy, however, only one channel operates at given time.] §.§ Radiation Degradation For housekeeping purposes we collect 60s spectra with full (256-bin) resolution usually several times a week. Noise peak (originally from ground calibrations around channel 40 in full resolution spectra) is showing gradual shift to higher energies – see fig. 2 – result of damage to MPPC by cosmic radiation (evident correlation with solar activity and CMEs). Increasing particle densities in Earth's magnetosphere (as we approach maximum of the Sun cycle) are slightly balanced by orbital degradation (altitude decreased by 50 km in 3 years): detailed account combining data from both satellites (incl. comparison with cosmic ray background models) will be published in a near future. §.§ Gain Calibration Advantage of MPPCs compared to traditional photomultipliers is much lower operational voltage, some 3-5 volts above breakdown voltage, usually 48-51 volts. Following calibration tests were performed with detectors assembled for GRBBeta satellite. Four sets of 4 SiPMs were pre-selected (out of cca 40 MPPCs provided from Hamamatsu) to make as uniform group as possible with respect to the (factory measured) breakdown voltages and dark currents. The groups with the smallest spread were used for a board selected as a flight model, next to it were MPPC groups used for a flight-spare detector. This one was (after successful integration of the satellite) equipped with a universal power supply (one board of 10×10 cm CubeSat footprint) and a spare electronic board and undertook a series of (temperature-controlled) calibration measurements with different gamma-ray sources both at Masaryk University, Brno, and at laboratories of ADVACAM company, Prague. The spectral resolution of current detector design limits usability of certain isotopes with too high concentration of spectral lines. Complete setup with a water cooling block was fitted in a 30×20 cm sealed plastic container (allowing N2 purging to prevent any water condensation when measuring below the freezing point). Spectra obtained for Eu152 isotope and fitted calibration curves are shown on fig. 3. Distance from the noise peak (mentioned above) measured in ADU (channel number) scales to a good precision linearly with deposited energy, however this scaling depends both on operation voltage V and temperature T of the device. 
The voltage, moreover, is set through a custom digital-to-analog converter, so for practical purposes V is measured in DAU units, not in volts.[The conversion can be calculated with the formula V[V] = 48.8 + 0.0612 V[DAU] (values given for channel 0 of GRBAlpha).] The channel position was finally modelled as (with the temperature T in ^∘C and V in instrument DAU units) χ = 39.9 + (E/MeV) × ( 334 + 6.05 (V-150) - 3.7 (T/^∘C) + 0.027 (V-150)^2 - 0.016 (T/^∘C)^2 - 0.033 (V-150)(T/^∘C) ). According to the manufacturer's datasheet, the MPPC gain should decrease with temperature in a manner that overrides the small increase of the light yield of the CsI scintillator. This calibration should serve as the basis for an adjustment procedure that tunes the operating voltage to compensate for temperature variations throughout the orbit. In the case of GRBAlpha, where the detector is exposed directly to sunlight, the temperature varies (with the orbital period) from 0 to 18 ^∘C, while the detectors on VZLUSAT-2 are on average 7 ^∘C colder. The continuous rotation of GRBAlpha should to some degree smear out these variations, whereas the stable attitude of GRBBeta can in some cases result in much higher detector temperatures; there the gain variation should be compensated regularly so as not to distort the collected scientific data. § DETECTIONS As already mentioned above, since the upgrade of the on-board software (21 months ago) the duty cycle of the GRBAlpha satellite has increased significantly and the detection rate (in coincidence with external trigger sources) has risen to roughly 1.6 gamma-ray events per week. Comprehensive tables of identified events (with links to related notices, graphs and data) are available at <https://monoceros.physics.muni.cz/hea/GRBAlpha/> and <https://monoceros.physics.muni.cz/hea/VZLUSAT-2/> for GRBAlpha and VZLUSAT-2 detections, respectively. Occurrences of the different types of detections are given in table <ref> (identifications come from reports of observations by other missions and in other spectral domains, mostly distributed through GCNs[General Coordinates Network, <https://gcn.nasa.gov/>]). The most remarkable detection, which merited its own article<cit.>, was GRB 221009A, nicknamed the BOAT ("brightest of all time"): the flux from this burst even disturbed the Earth's ionosphere. Major GRB instruments like Fermi/GBM or Konus-Wind were saturated, while GRBAlpha experienced only a moderate pile-up, which allowed us to measure the gamma-ray peak count rate directly. Without attitude control (and attitude knowledge) we cannot convert this rate into a corresponding flux. However, we were fortunate to be collecting data in 13 spectral bands at that moment, and a spectral reconstruction allowed us to approximate the most probable direction of the source. The reported peak luminosity (averaged over 4 s time bins) is 8.4× 10^52 erg/s, assuming a curved power-law model for the spectrum (its shape was determined outside the region affected by pile-up). A second such event, GRB 230307A, was fortunately detected by both satellites[At these altitudes about 30% of the sky is occulted by the Earth, so the probability that an event is visible to both satellites is still quite high.]; the lightcurves were reported in GCNs 33418<cit.> and 33424<cit.>. The sampling rates were still far too low to attempt any cross-correlation for triangulation purposes, but this very bright event, a candidate kilonova progenitor, is still worth a detailed study.
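Returning to the gain calibration given at the beginning of this section, the voltage and temperature dependence of the channel position can be turned into a small helper for planning bias adjustments. The first two functions transcribe the formulas quoted above (values for channel 0 of GRBAlpha); the compensation routine at the end is only an illustrative brute-force search, not the flight procedure.

```python
def dau_to_volts(v_dau):
    """Bias setting of the custom DAC (DAU units) converted to volts,
    using the conversion quoted for channel 0 of GRBAlpha."""
    return 48.8 + 0.0612 * v_dau

def channel_position(energy_mev, v_dau, temp_c):
    """ADU channel of a deposit of `energy_mev` for bias `v_dau` (DAU) and
    detector temperature `temp_c` (deg C), following the calibration model above."""
    dv = v_dau - 150.0
    gain = (334.0 + 6.05 * dv - 3.7 * temp_c
            + 0.027 * dv ** 2 - 0.016 * temp_c ** 2 - 0.033 * dv * temp_c)
    return 39.9 + energy_mev * gain

def compensating_bias(temp_c, reference_gain=334.0, search=range(100, 201)):
    """Naive illustration of temperature compensation: pick the DAU setting whose
    predicted gain (ADU per MeV) is closest to the reference gain at V=150, T=0."""
    def gain_at(v_dau):
        return channel_position(1.0, v_dau, temp_c) - 39.9
    return min(search, key=lambda v: abs(gain_at(v) - reference_gain))
```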
§ BACKGROUND MONITORING Amount of data downloaded with new downlink possibilities allows constructing background maps with much denser coverage, reducing need for interpolation and revealing finer structures in particle background maps. Secular events (like coronal mass ejections) have clear influence on the size of polar belts – feature that could be easily recorded using our detector. We download roughly 110-120 hrs of 4-channel lightcurves per month (with 2 Hz sampling rate), which corresponds to cca 75 complete orbits. In fig. <ref> you find maps from a considerably larger dataset collected through approximately 10 months of observations. Ragged contours of polar belts correspond probably to the above-mentioned variations of spread of belts and SAA during this period. Due to a wobbling motion of GRBAlpha satellite, there are also observations of periodic signal as seen on figure <ref>. This could be explained assuming some level of anisotropy in particle background. Observations (mostly in polar regions) are so far too sparse to assert whether this is a permanent or a transient feature. Apart from this, regions around poles exhibit higher but predictable background level that with proper models should allow to implement a trigger algorithm with a good specificity (i.e. low false alert rate) in this part of the orbit. § SUMMARY With a remarkable number of GRBs and other events registered with rather simple gamma detectors on board of GRBAlpha and VZLUSAT-2 we gather a strong support for the feasibility of a nanosatellite constellation covering high-energy transient sky. We also gradually improve understanding of our device and its performance over time spent in orbit. Background monitoring on polar LEOs is another outcome of these missions that are likely to maintain their full capabilities until the end of their orbital lifetime. In the case of VZLUSAT-2, other instruments on board<cit.> can provide simultaneous data in softer band to create more comprehensive picture of local space environment. And with upcoming launch of a GRBBeta 2U CubeSat (and its much higher downlink capacities) we can finally get closer to the precise timing needed for direction reconstruction of gamma-ray bursts. This contribution was mainly funded by Czech Science Foundation (GAČR) grant 24-11487J. We acknowledge support by the grants KEP2/2020 and SA-40/2021 of the Hungarian Academy of Sciences and Eötvös Loránd Research Network, respectively, for satellite components and payload developments and the grant IF-7/2020 for providing the financial support for ground infrastructure. We acknowledge supported by the MUNI Award for Science and Humanities funded by the Grant Agency of Masaryk University. The research leading to these results has received funding from the European Union’s Horizon 2020 Program under the AHEAD2020 project (grant agreement n. 871158). We are grateful to the operators and developers of the SatNOGS network for providing a robust framework for bulk data download from our detectors flying above radio stations involved. spiebib
http://arxiv.org/abs/2407.13727v1
20240718172614
Cross-correlations of the Cosmic Neutrino Background: HR-DEMNUni simulation analysis
[ "Beatriz Hernández-Molinero", "Matteo Calabrese", "Carmelita Carbone", "Alessandro Greco", "Raul Jimenez", "Carlos Peña Garay" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [ July 22, 2024 ================================================================================================= § INTRODUCTION The Cosmic Neutrino Background (C_νB), which carries information about the Universe from just one second after the big bang, has not yet been detected. However, some experiments are being designed and developed to achieve this, such as tritium capture table-top experiments such as Ptolemy <cit.>. Once measured, it will provide invaluable information about the early stages of the Universe and the nature of neutrinos <cit.>. In previous works <cit.> we have shown how gravity modifies the helicity of neutrinos originating from the cosmic background. Changes in the cosmic neutrino helicity density due to gravity modify the expected cosmic neutrino background capture rates. Experiments sensitive to the C_νB can unveil the nature of neutrinos, i.e., they can exploit the average capture rate to uncover the Dirac or Majorana particle nature of neutrinos. These experiments will detect only left-handed neutrinos if neutrinos are Dirac particles but they will be sensitive to both left- and right-handed neutrinos if those are Majorana particles <cit.>. Given the importance of this possible discovery, it is worth exploring in detail the possible angular dependence of this signal[The angular dependence could be obtained by installing several Ptolemy-like detectors around the globe, much like gravitational waves observatories, with the advantage that these tritium capture devices are table-top experiments <cit.>.]. The study of cosmological signals is motivated by the current limits provided by cosmological surveys, which indicate that the sum of neutrino masses is very close to the total mass inferred from underground experiments that measure the mass splits, i.e., 0.059 eV <cit.>. This strongly suggests, within the framework of Bayesian evidence, that the hierarchy is normal <cit.>. If this is the case, neutrinoless double-beta decay experiments <cit.>, which are the current experimental method to discover the nature of neutrinos, will need to operate in the several-ton range, which presents significant challenges. Given this scenario, cosmological signals emerge as a promising way to reveal the nature of neutrinos. In order to explore the angular dependence of cosmic neutrino background signals we have produced full-sky maps of Cold Dark Matter (CDM), Weak Lensing (WL) convergence, neutrino density, neutrino velocity, and neutrino deflection angle along with all cross-correlations. We will demonstrate the angular dependence of these signals and identify where they are maximal. This, in turn, will provide clues for discovering the nature of neutrinos from observations of the sky. The paper is organised as follows. In  <ref> we present the HR-DEMNUni simulations used to produce the full sky maps, for both CDM and neutrinos, which are shown in  <ref>. All cross-correlations and auto-correlations maps and power spectra are gathered in  <ref>. Finally, the discussion and conclusions are addressed in  <ref>. § METHODOLOGY In previous works <cit.>, we have shown how gravity modifies the helicity content of the cosmic neutrino background. The cosmic neutrino helicities are modified due to gravitational fields (mostly CDM) that deflect neutrinos when they travel nearby clustered structures. 
The helicity change can be calculated through the angle of deflection that neutrinos accumulate along their path since decoupling, one second after the big bang. Motivated by the possibility of measuring this signal over the full sky and of computing its clustering properties, we show here the full-sky map of the neutrino deflection angle, cosθ_ν, from the full simulation. The deflection angle is calculated for each neutrino of the simulation as the change in its velocity vector from z=3 down to z=0, in the same way as was done in <cit.>. Furthermore, we have extended the computation beyond the deflection angle to include other observables, such as the neutrino velocity field, WL maps and the CDM density. All maps have been produced from the "Dark Energy and Massive Neutrino Universe" (DEMNUni) suite of large N-body simulations <cit.>. The DEMNUni simulations have been produced with the aim of investigating the Large-Scale Structure (LSS) of the Universe in the presence of massive neutrinos and dynamical Dark Energy (DE), and they were conceived for the nonlinear analysis and modelling of different probes, including CDM, halo, and galaxy clustering <cit.>, weak lensing, CMB lensing, Sunyaev-Zel'dovich and integrated Sachs-Wolfe effects <cit.>, cosmic void statistics <cit.>, as well as cross-correlations among these probes <cit.>. In particular, we have used the new high-resolution (HR) simulations with 64 times better mass resolution than the previous standard runs: the HR-DEMNUni simulations are characterised by a comoving volume of (500 h^-1 Mpc)^3 filled with 2048^3 CDM particles and, when present, 2048^3 neutrino particles. The simulations are initialised at z_in=99 with Zel'dovich initial conditions. The initial power spectrum is rescaled to the initial redshift via the rescaling method developed in <cit.>. Initial conditions are then generated with a modified version of the software, assuming Rayleigh random amplitudes and uniform random phases. The HR-DEMNUni set consists of two simulations with total neutrino masses of ∑ m_ν = 0, 0.16 eV, considered in the degenerate mass scenario with three active neutrinos. The other cosmological parameters of the simulations are based on a Planck 2013 <cit.> LCDM reference cosmology (with massless neutrinos), in particular: n_s=0.96, A_s=2.1265 × 10^-9, h=H_0/[100 km s^-1 Mpc^-1]=0.67, Ω_b=0.05, and Ω_m=Ω_CDM + Ω_b + Ω_ν =0.32; H_0 is the Hubble constant at the present time, n_s is the spectral index of the initial scalar perturbations, A_s is the scalar amplitude, Ω_b the baryon density parameter, Ω_m the total matter density parameter, Ω_CDM the CDM density parameter, and Ω_ν the neutrino density parameter. In the presence of massive neutrinos, Ω_b and Ω_m are kept fixed to the above values, while Ω_CDM is changed accordingly. Tab. <ref> summarises the masses of the CDM and neutrino particles together with the neutrino fraction f_ν≡Ω_ν / Ω_m. The simulations employed in this study are those that account for neutrinos as massive particles, with Σ m_ν=0.16 eV. § MAPS AND POWER SPECTRA The CDM and neutrino maps are obtained from the DM and neutrino particle distributions of the HR-DEMNUni simulation, once a full-sky lightcone has been created by means of a stacking technique of the comoving particle snapshots in spherical shells, with the observer placed at their centre. This procedure follows the approaches of <cit.>, and was developed to perform high-resolution CMB and weak lensing simulations <cit.>.
The standard way to build lightcones is to pile up high-resolution comoving snapshots within concentric cells to fill the lightcone up to the maximum desired source redshift. With the observer placed at z=0, the volume of the lightcone is sliced into full-sky spherical shells of the desired thickness in redshift. The surface mass density Σ on each sphere is defined on a two-dimensional grid. For each pixel of the i-th sphere, one has Σ^(i)(θ) = n · m_p / A_pix , where n is the number of particles in the pixel, A_pix is the pixel area in steradians, and m_p is the particle mass of the N-body simulation in each pixel, either m_p^CDM or m_p^ν. All the particles distributed within each of these 3D matter shells are then projected onto 2D spherical maps, assigning a specific sky pixel to each particle via the HEALPix[<http://healpix.sourceforge.net>] <cit.> pixelization procedure. For this work, we have produced full-sky surface mass density maps for CDM and neutrino particles independently, on a grid with n_side = 4096, which corresponds to a pixel resolution of 0.85 arcmin. The full-sky density maps, of both CDM and neutrinos, have been constructed placing the observer at the centre of the simulation box at z=0 and considering a 250 Mpc h^-1 comoving radius around this central observer. If n particles fall into a particular pixel, the total mass at those coordinates will be n · m_p. A similar method has been applied to build the catalogues for the neutrino velocity modulus and deflection angle, where the velocity and deflection angle have been calculated for each neutrino particle of the simulation: as in the case of the particle lightcones, given the particle position in the simulation, full-sky maps of the neutrino velocity and deflection angle have been produced, representing the distribution of those quantities on a 2D grid. In this case, the value returned in the full-sky map is the average over all particle velocities, or deflection angles, that fall into that pixel. The corresponding maps for each of the above quantities are shown in the four panels of Figure <ref>. It is worth noting that in the left panels of Figure <ref> the quantities shown are not the surface densities but δ_Σ, where δ_Σ≡(Σ-Σ̅)/Σ̅ is the density contrast, i.e. the surface density difference of each pixel over the mean surface density of all pixels. Observing the left panels of Figure <ref>, i.e. the distributions of δ_Σ_CDM and δ_Σ_ν, one can notice that the neutrino clustering follows the CDM clustering, as neutrinos fall into the CDM potential wells. However, due to their hot thermal velocities, especially for low neutrino masses such as Σ m_ν = 0.16 eV, neutrino perturbations are much smaller than the CDM ones and they cluster on much larger scales than CDM, i.e. only above their free-streaming length, λ_FS. Therefore the δ_Σ_ν distribution appears much more diffuse and smoother (as the minimum and maximum values in the colour bars also show) than the δ_Σ_CDM one, but their correlation is nevertheless observable from the maps. In contrast, comparing the bottom left and bottom right panels of Figure <ref>, i.e. the distributions of δ_Σ_ν and of cosθ_ν, the two quantities appear to be anticorrelated on large scales, i.e. denser "brighter" regions in the neutrino overdensity correspond to "darker" regions where cosθ_ν assumes smaller values.
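For concreteness, the map-making step described at the beginning of this section can be sketched in a few lines, assuming the healpy implementation of the HEALPix scheme. The angles θ and φ are the particles' sky coordinates (colatitude and longitude, in radians) as seen from the central observer, and the sentinel value used for empty pixels is an illustrative choice.

```python
import numpy as np
import healpy as hp

NSIDE = 4096                                    # pixel resolution of 0.85 arcmin, as in the text

def surface_density_map(theta, phi, m_p, nside=NSIDE):
    """Sigma^(i) = n * m_p / A_pix per pixel for one lightcone shell
    (A_pix in steradians, m_p the CDM or neutrino particle mass)."""
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, theta, phi)         # HEALPix pixel of each particle
    counts = np.bincount(pix, minlength=npix)
    return counts * m_p / hp.nside2pixarea(nside)

def density_contrast(sigma):
    """delta_Sigma = (Sigma - <Sigma>) / <Sigma>."""
    return sigma / sigma.mean() - 1.0

def pixel_average(theta, phi, values, nside=NSIDE):
    """Per-pixel average of a particle quantity (e.g. |v_nu| or cos(theta_nu))."""
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, theta, phi)
    counts = np.bincount(pix, minlength=npix)
    sums = np.bincount(pix, weights=values, minlength=npix)
    return np.where(counts > 0, sums / np.maximum(counts, 1), hp.UNSEEN)
```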
Since regions with a lower mean cosθ_ν correspond to larger deflection angles, and since, as noticed above, δ_Σ_ν is correlated with δ_Σ_CDM, this means that, in regions where the CDM overdensity is higher, the gravitational force makes neutrinos deflect more. Finally, concerning the top left panel of Figure <ref>, it is difficult to identify any structure in the distribution of the neutrino velocity modulus, probably because the contribution of coherent neutrino velocities is very small and is hidden by the thermal velocity, which is dominant especially for the low neutrino masses considered. In Figure <ref> we show the corresponding auto-correlations (the diagonal panels) as well as the cross-correlations (the off-diagonal panels). All these angular correlations in the spherical harmonic domain have been obtained by applying to the maps the routine of the package <cit.>. Because we have a single cosmological realisation, and in order to reduce the noise, we have smoothed the power spectra with the Savgol routine in python[https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.savgol_filter.html]. § CROSS-CORRELATIONS Cross-correlation power spectra can be found in Figure <ref>. First, we focus on the deflection angles. The autocorrelation of cosθ_ν shows a maximum at large scales, around ℓ∼ 10. This is consistent with what has been found in <cit.>, where it was shown that the deflection reaches its maximum at super-cluster scales. We will comment on this below. As said before, the distribution of cosθ_ν is anti-correlated with the rest of the studied quantities. Looking at the first three panels in the last row of Figure <ref>, we see that the cross-correlation power spectra of cosθ_ν with the CDM and neutrino density contrast distributions, δ_Σ_CDM and δ_Σ_ν, peak at ℓ∼10, and its cross-spectra with the neutrino velocity v_ν/c peak at ℓ∼5. It is worth noting that, even though they peak at the same scale, the cross-correlation between cosθ_ν and δ_Σ_CDM is larger than the cross-correlation between cosθ_ν and δ_Σ_ν, and also larger than the cosθ_ν autocorrelation. This hints that a combination of weak-lensing maps and Ptolemy-like experiments should be sensitive to this very distinctive feature. The cross-correlations with the neutrino density and velocity are scale invariant for ℓ > 20. Because of the small mass of the neutrino particles in the simulation (Σ m_ν = 0.16 eV), it is difficult for neutrinos to cluster at small scales. Cosmic neutrinos stream freely since they decoupled one second after the Big Bang with a frozen Fermi-Dirac momentum distribution. The momentum distribution is translated into a velocity distribution through the neutrino mass, so the lower the mass, the higher the thermal dispersion of neutrinos, which results in clustering at increasingly larger scales. This is reflected in how the δ_Σ_ν autocorrelation and its cross-correlation with δ_Σ_CDM both peak at large scales, i.e. ℓ∼15 and ℓ∼15, respectively. This feature is also visible in the autocorrelation of v_ν/c, where the power spectrum peaks at even larger scales, i.e. ℓ∼2. In the same way as for the angles, the signal from the cross-correlation of v_ν/c with δ_Σ_CDM or δ_Σ_ν is larger than the autocorrelation. Finally, if we look at the CDM density contrast, we see that the autocorrelation power spectrum of δ_Σ_CDM peaks at smaller scales, i.e. large multipoles, ℓ∼1500. This result agrees with the features we see in the δ_Σ_CDM map.
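The spectra discussed in this section can be reproduced schematically as below. The extracted text does not preserve the names of the routine and package used by the authors, so the use of healpy's anafast is an assumption consistent with the HEALPix pixelization of the maps; the Savitzky-Golay window length and polynomial order are likewise illustrative, since the paper does not quote them.

```python
import healpy as hp
from scipy.signal import savgol_filter

def smoothed_spectrum(map1, map2=None, lmax=2000, window=31, polyorder=3):
    """Auto- (map2=None) or cross-angular power spectrum C_ell of full-sky maps,
    lightly smoothed to reduce the scatter of a single cosmological realisation."""
    cl = hp.anafast(map1, map2=map2, lmax=lmax)
    return savgol_filter(cl, window_length=window, polyorder=polyorder)

# example: cross-spectrum between the CDM density contrast and the cos(theta_nu) map
# cl_cross = smoothed_spectrum(delta_sigma_cdm, cos_theta_nu)
```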
This map, top left panel in Figure <ref>, shows the typical large-scale structure of the Universe full of vast galaxy filaments and voids spanning incredibly large distances, consequence of the higher clustering of CDM. By contrast, the δ_Σ_ν map, bottom left panel in Figure <ref>, shows a more diffuse matter distribution which translate in a power spectrum peaking at ℓ∼8. It is worth exploring in detail the cross-correlation of the neutrino field with the CDM by using an actual observable of the latter. In this case, we have chosen to construct a weak lensing convergence map, where sources are distributed via a n(z) distribution, which peaks at about z=0.5. The shape of the n(z) and parameters are taken from the galaxy selection function expected for the photometric Euclid survey <cit.>: n(z) ∝(z/z_0)^βexp[ -(z/z_0)^γ] , where β = 2, γ = 1.5, z_0 = z_m/√(2) and z_m = 0.5. This effective, WL map related to a source galaxy distribution can be computed as in Ref. <cit.> from the lightcone following the procedure described in <cit.>. Since our maps (neutrino velocities and angles, matter overdensities, etc.) are constructed in a comoving volume of (500h^-1Mpc)^3, this observable can maximise the cross-correlation with the neutrino fields rather than - for instance - the CMB lensing potential field, which has a broader kernel in redshift. Results for the auto-correlation for the WL map are shown in Fig. <ref>. As expected, it shows the standard scale dependence for the WL signal <cit.>. The cross-correlations with the neutrino variables are shown in Fig. <ref>; let us notice that the cosθ_ν seems to anti-correlate with this effective WL field as it happens also with the CDM field. These cross-correlation power spectra peak at large scales, i.e. small ℓ, as those in Fig. <ref>. We emphasise again that, also in the case of WL maps, the cross-correlations with neutrino quantities are larger than the auto-correlations for all variables studied. So we look at the cross-correlations between neutrinos and CDM as the most promising observables for future experimental surveys. § CONCLUSIONS We have cross-correlated the C_νB overdensity, velocity modulus and cosine of deflection angle maps between them and with CDM overdensity and WL convergence maps, in a full-sky picture where the observer is placed at the centre. To do so, we have used the new high-resolution DEMNUni simulations that contain 2048^3 particles of each type (CDM and neutrinos) in a comoving volume of (500h^-1Mpc)^3. The simulations used consider a degenerate neutrino mass scenario with three active neutrinos of Σ m_ν=0.16eV. The resulting maps have a pixel resolution of 0.85 arcmin. To compute the deflection angle of neutrinos we have used the same recipe as in <cit.>, i.e. calculating the angle of deflection as the angle between neutrino velocity vectors at z=3 and z=0. Our results are perfectly consistent with the theoretical expectations. One of this is that CDM overdensity power spectrum peaks at smaller scales (larger multipoles) than the neutrino overdensity power spectrum because of the higher clustering of CDM. We have obtained very low auto and cross-correlation spectra for the neutrino velocity, as it was also expected, because of the large thermal velocity of cosmic neutrinos with such a low mass. Regarding the deflection angles, we have obtained a larger signal in the cross-correlation of this variable with CDM overdensity than in the cross-correlation between neutrino velocity and CDM overdensity. 
One of the main results is that the C_νB cross-correlations have more signal than the auto-correlations, specially in the cross-correlations with CDM overdensity, where the spectra amplitudes are three times larger than in the cross-correlations with neutrino overdensity. We have examined the cross-correlation between the CνB and an effective WL map obtained using the Euclid satellite specifications. This cross-correlation serves as an example of how to explore the angular dependence of CνB signals using observable fields in modern surveys, like the Euclid mission. By demonstrating the angular dependence of these signals and identifying their maxima, this approach provides valuable insights into the nature of neutrinos through observations of the sky. For the first time, high-fidelity cross-correlation between neutrinos and CDM maps have been shown in this paper, revealing the cosmological scales where any cross-correlation between them may occur. Once the technology to probe these cosmological probes becomes available, examining these maps will enable us to confirm the detection of the cosmic neutrino background and facilitate its measurements as happens e.g. also for the Integrated Sachs-Wolfe effect of CMB when exploiting its correlation with galaxies. The DEMNUni simulations were carried out in the framework of “The Dark Energy and Massive Neutrino Universe" project, using the Tier-0 IBM BG/Q Fermi machine, the Tier-0 Intel OmniPath Cluster Marconi-A1 and Marconi-100 of the Centro Interuniversitario del Nord-Est per il Calcolo Elettronico (CINECA). CC acknowledges a generous CPU and storage allocation by the Italian Super-Computing Resource Allocation (ISCRA) as well as from the coordination of the “Accordo Quadro MoU per lo svolgimento di attività congiunta di ricerca Nuove frontiere in Astrofisica: HPC e Data Exploration di nuova generazione”, together with storage from INFN-CNAF and INAF-IA2. This work was supported by the “Center of Excellence Maria de Maeztu 2020-2023” award to the ICCUB (CEX2019-000918-M) funded by MCIN/AEI/10.13039/501100011033. Funding for the work of RJ was partially provided by project PID2022-141125NB-I00. AG would also like to thank Martin Reinecke for many useful discussions about some technical aspects during the early stages of this work. MC is partially supported by the 2023/24 “Research and Education” grant from Fondazione CRT. The OAVdA is managed by the Fondazione Clément Fillietroz-ONLUS, which is supported by the Regional Government of the Aosta Valley, the Town Municipality of Nus and the “Unité des Communes valdôtaines Mont-Émilius”. 10 PTolemy PTOLEMY collaboration, Neutrino physics with the PTOLEMY project: active neutrino properties and the light sterile case, https://doi.org/10.1088/1475-7516/2019/07/047JCAP 07 (2019) 047 [https://arxiv.org/abs/1902.055081902.05508]. Dolgov:2002wy A. D. Dolgov, Neutrinos in cosmology, https://doi.org/10.1016/S0370-1573(02)00139-4Phys. Rept. 370 (2002) 333 [https://arxiv.org/abs/hep-ph/0202122hep-ph/0202122]. LESGOURGUES2006307 J. Lesgourgues and S. Pastor, Massive neutrinos and cosmology, https://doi.org/https://doi.org/10.1016/j.physrep.2006.04.001Physics Reports 429 (2006) 307 [https://arxiv.org/abs/astro-ph/0603494astro-ph/0603494]. Dolgov_2008 A. D. Dolgov, Cosmology and neutrino properties, https://doi.org/10.1134/s1063778808120181Physics of Atomic Nuclei 71 (2008) 2152?2164 [https://arxiv.org/abs/0803.38870803.3887]. Long_2014 A. J. Long, C. Lunardini and E. 
Sabancilar, Detecting non-relativistic cosmic neutrinos by capture on tritium: phenomenology and physics potential, https://doi.org/10.1088/1475-7516/2014/08/038JCAP 08 (2014) 038 [https://arxiv.org/abs/1405.76541405.7654]. Hernandez-Molinero:2022zoo B. Hernandez-Molinero, R. Jimenez and C. Pena-Garay, Distinguishing Dirac vs. Majorana neutrinos: a cosmological probe, https://doi.org/10.1088/1475-7516/2022/08/038JCAP 08 (2022) 038 [https://arxiv.org/abs/2205.008082205.00808]. Hernandez-Molinero_2024 B. Hernandez-Molinero, C. Carbone, R. Jimenez and C. P. Garay, Cosmic background neutrinos deflected by gravity: Demnuni simulation analysis, https://doi.org/10.1088/1475-7516/2024/01/006Journal of Cosmology and Astroparticle Physics 2024 (2024) 006. Roulet:2018fyh E. Roulet and F. Vissani, On the capture rates of big bang neutrinos by nuclei within the Dirac and Majorana hypotheses, https://doi.org/10.1088/1475-7516/2018/10/049JCAP 10 (2018) 049 [https://arxiv.org/abs/1810.005051810.00505]. DESI DESI Collaboration, A. G. Adame, J. Aguilar, S. Ahlen, S. Alam, D. M. Alexander et al., DESI 2024 VI: Cosmological Constraints from the Measurements of Baryon Acoustic Oscillations, https://doi.org/10.48550/arXiv.2404.03002arXiv e-prints (2024) arXiv:2404.03002 [https://arxiv.org/abs/2404.030022404.03002]. Fer1 F. Simpson, R. Jimenez, C. Pena-Garay and L. Verde, Strong Bayesian evidence for the normal neutrino hierarchy, https://doi.org/10.1088/1475-7516/2017/06/029 2017 (2017) 029 [https://arxiv.org/abs/1703.034251703.03425]. Fer2 R. Jimenez, C. Pena-Garay, K. Short, F. Simpson and L. Verde, Neutrino masses and mass hierarchy: evidence for the normal hierarchy, https://doi.org/10.1088/1475-7516/2022/09/006 2022 (2022) 006 [https://arxiv.org/abs/2203.142472203.14247]. Goeppert-Mayer M. Goeppert-Mayer, Double beta-disintegration, https://doi.org/10.1103/PhysRev.48.512Phys. Rev. 48 (1935) 512. furry W. H. Furry, On transition probabilities in double beta-disintegration, https://doi.org/10.1103/PhysRev.56.1184Phys. Rev. 56 (1939) 1184. carbone_2016 C. Carbone, M. Petkova and K. Dolag, DEMNUni: ISW, Rees-Sciama, and weak-lensing in the presence of massive neutrinos, https://doi.org/10.1088/1475-7516/2016/07/034 2016 (2016) 034 [https://arxiv.org/abs/1605.020241605.02024]. castorina_2015 E. Castorina, C. Carbone, J. Bel, E. Sefusatti and K. Dolag, DEMNUni: the clustering of large-scale structures in the presence of massive neutrinos, https://doi.org/10.1088/1475-7516/2015/07/043 7 (2015) 043 [https://arxiv.org/abs/1505.071481505.07148]. moresco_2016 M. Moresco, F. Marulli, L. Moscardini, E. Branchini, A. Cappi, I. Davidzon et al., The VIMOS Public Extragalactic Redshift Survey (VIPERS) . Exploring the dependence of the three-point correlation function on stellar mass and luminosity at 0.5 <z < 1.1, https://doi.org/10.1051/0004-6361/201628589 604 (2017) A133 [https://arxiv.org/abs/1603.089241603.08924]. zennaro_2018 M. Zennaro, J. Bel, J. Dossett, C. Carbone and L. Guzzo, Cosmological constraints from galaxy clustering in the presence of massive neutrinos, https://doi.org/10.1093/mnras/sty670 477 (2018) 491 [https://arxiv.org/abs/1712.028861712.02886]. ruggeri_2018 R. Ruggeri, E. Castorina, C. Carbone and E. Sefusatti, DEMNUni: massive neutrinos and the bispectrum of large scale structures, https://doi.org/10.1088/1475-7516/2018/03/003 2018 (2018) 003 [https://arxiv.org/abs/1712.023341712.02334]. bel_2019 J. Bel, A. Pezzotta, C. Carbone, E. Sefusatti and L. 
Guzzo, Accurate fitting functions for peculiar velocity spectra in standard and massive-neutrino cosmologies, https://doi.org/10.1051/0004-6361/201834513 622 (2019) A109 [https://arxiv.org/abs/1809.093381809.09338]. parimbelli_2021 G. Parimbelli, S. Anselmi, M. Viel, C. Carbone, F. Villaescusa-Navarro, P. S. Corasaniti et al., The effects of massive neutrinos on the linear point of the correlation function, https://doi.org/10.1088/1475-7516/2021/01/009 2021 (2021) 009 [https://arxiv.org/abs/2007.103452007.10345]. parimbelli_2022 G. Parimbelli, C. Carbone, J. Bel, B. Bose, M. Calabrese, E. Carella et al., DEMNUni: comparing nonlinear power spectra prescriptions in the presence of massive neutrinos and dynamical dark energy, https://doi.org/10.1088/1475-7516/2022/11/041 2022 (2022) 041 [https://arxiv.org/abs/2207.136772207.13677]. Guidi_2022 M. Guidi, A. Veropalumbo, E. Branchini, A. Eggemeier and C. Carbone, Modelling the next-to-leading order matter three-point correlation function using FFTLog, https://doi.org/10.1088/1475-7516/2023/08/066 2023 (2023) 066 [https://arxiv.org/abs/2212.073822212.07382]. Baratta_2022 P. Baratta, J. Bel, S. Gouyou Beauchamps and C. Carbone, COVMOS: a new Monte Carlo approach for galaxy clustering analysis, https://doi.org/10.48550/arXiv.2211.13590arXiv e-prints (2022) arXiv:2211.13590 [https://arxiv.org/abs/2211.135902211.13590]. Gouyou_Beauchamps_2023 S. Gouyou Beauchamps, P. Baratta, S. Escoffier, W. Gillard, J. Bel, J. Bautista et al., Cosmological inference including massive neutrinos from the matter power spectrum: biases induced by uncertainties in the covariance matrix, https://doi.org/10.48550/arXiv.2306.05988arXiv e-prints (2023) arXiv:2306.05988 [https://arxiv.org/abs/2306.059882306.05988]. SHAM-Carella_in_prep E. Carella, C. Carbone, M. Zennaro, G. Girelli, M. Bolzonella, F. Marulli et al., “Demnuni: The galaxy-halo connection in the presence of dynamical dark energy and massive neutrinos.”. roncarelli_2015 M. Roncarelli, C. Carbone and L. Moscardini, The effect of massive neutrinos on the Sunyaev-Zel'dovich and X-ray observables of galaxy clusters, https://doi.org/10.1093/mnras/stu2546 447 (2015) 1761 [https://arxiv.org/abs/1409.42851409.4285]. fabbian_2018 G. Fabbian, M. Calabrese and C. Carbone, CMB weak-lensing beyond the Born approximation: a numerical approach, https://doi.org/10.1088/1475-7516/2018/02/050 2018 (2018) 050 [https://arxiv.org/abs/1702.033171702.03317]. kreisch_2019 C. D. Kreisch, A. Pisani, C. Carbone, J. Liu, A. J. Hawken, E. Massara et al., Massive neutrinos leave fingerprints on cosmic voids, https://doi.org/10.1093/mnras/stz1944 488 (2019) 4413 [https://arxiv.org/abs/1808.074641808.07464]. schuster_2019 N. Schuster, N. Hamaus, A. Pisani, C. Carbone, C. D. Kreisch, G. Pollina et al., The bias of cosmic voids in the presence of massive neutrinos, https://doi.org/10.1088/1475-7516/2019/12/055 2019 (2019) 055 [https://arxiv.org/abs/1905.004361905.00436]. verza_2019 G. Verza, A. Pisani, C. Carbone, N. Hamaus and L. Guzzo, The void size function in dynamical dark energy cosmologies, https://doi.org/10.1088/1475-7516/2019/12/040 2019 (2019) 040 [https://arxiv.org/abs/1906.004091906.00409]. verza_2022a G. Verza, C. Carbone and A. Renzi, The Halo Bias inside Cosmic Voids, https://doi.org/10.3847/2041-8213/ac9d98 940 (2022) L16 [https://arxiv.org/abs/2207.040392207.04039]. verza_2022b G. Verza, C. Carbone, A. Pisani and A. 
Renzi, DEMNUni: disentangling dark energy from massive neutrinos with the void size function, https://doi.org/10.48550/arXiv.2212.09740arXiv e-prints (2022) arXiv:2212.09740 [https://arxiv.org/abs/2212.097402212.09740]. Vielzeuf_2022 P. Vielzeuf, M. Calabrese, C. Carbone, G. Fabbian and C. Baccigalupi, DEMNUni: the imprint of massive neutrinos on the cross-correlation between cosmic voids and CMB lensing, https://doi.org/10.1088/1475-7516/2023/08/010 2023 (2023) 010 [https://arxiv.org/abs/2303.100482303.10048]. Cuozzo2022 V. Cuozzo, C. Carbone, M. Calabrese, E. Carella and M. Migliaccio, DEMNUni: cross-correlating the nonlinear ISWRS effect with CMB-lensing and galaxies in the presence of massive neutrinos, https://doi.org/10.48550/arXiv.2307.15711arXiv e-prints (2023) arXiv:2307.15711 [https://arxiv.org/abs/2307.157112307.15711]. zennaro_2017 M. Zennaro, J. Bel, F. Villaescusa-Navarro, C. Carbone, E. Sefusatti and L. Guzzo, Initial conditions for accurate N-body simulations of massive neutrino cosmologies, https://doi.org/10.1093/mnras/stw3340 466 (2017) 3244 [https://arxiv.org/abs/1605.052831605.05283]. planck2013 Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud, M. Ashdown et al., Planck 2013 results. XVI. Cosmological parameters, https://doi.org/10.1051/0004-6361/201321591 571 (2014) A16 [https://arxiv.org/abs/1303.50761303.5076]. carbone2009 C. Carbone, C. Baccigalupi, M. Bartelmann, S. Matarrese and V. Springel, Lensed CMB temperature and polarization maps from the Millennium Simulation, https://doi.org/10.1111/j.1365-2966.2009.14746.x 396 (2009) 668 [https://arxiv.org/abs/0810.41450810.4145]. calabrese2015 M. Calabrese, C. Carbone, G. Fabbian, M. Baldi and C. Baccigalupi, Multiple lensing of the cosmic microwave background anisotropies, https://doi.org/10.1088/1475-7516/2015/03/049 3 (2015) 049 [https://arxiv.org/abs/1409.76801409.7680]. fabbian2018 G. Fabbian, M. Calabrese and C. Carbone, CMB weak-lensing beyond the Born approximation: a numerical approach, https://doi.org/10.1088/1475-7516/2018/02/050 2018 (2018) 050 [https://arxiv.org/abs/1702.033171702.03317]. Accuracy_weak_lensing_simulations-Hilbert+20 S. Hilbert, A. Barreira, G. Fabbian, P. Fosalba, C. Giocoli, S. Bose et al., The accuracy of weak lensing simulations, https://doi.org/10.1093/mnras/staa281 493 (2020) 305 [https://arxiv.org/abs/1910.106251910.10625]. Gorski K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke et al., HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, https://doi.org/10.1086/427976 622 (2005) 759 [https://arxiv.org/abs/astro-ph/0409513astro-ph/0409513]. Laureijs_2011 R. Laureijs, J. Amiaux, S. Arduini, J. L. Auguères, J. Brinchmann, R. Cole et al., Euclid Definition Study Report, https://doi.org/10.48550/arXiv.1110.3193arXiv e-prints (2011) arXiv:1110.3193 [https://arxiv.org/abs/1110.31931110.3193]. EuclidVII_2020 Euclid Collaboration, A. Blanchard, S. Camera, C. Carbone, V. F. Cardone, S. Casas et al., Euclid preparation. VII. Forecast validation for Euclid cosmological probes, https://doi.org/10.1051/0004-6361/202038071 642 (2020) A191 [https://arxiv.org/abs/1910.092731910.09273]. 2001PhR...340..291B M. Bartelmann and P. Schneider, Weak gravitational lensing, https://doi.org/10.1016/S0370-1573(00)00082-X 340 (2001) 291 [https://arxiv.org/abs/astro-ph/9912508astro-ph/9912508].
http://arxiv.org/abs/2407.13254v1
20240718080804
Make a Strong Teacher with Label Assistance: A Novel Knowledge Distillation Approach for Semantic Segmentation
[ "Shoumeng Qiu", "Jie Chen", "Xinrun Li", "Ru Wan", "Xiangyang Xue", "Jian Pu" ]
cs.CV
[ "cs.CV" ]
Label Assisted Distillation Fudan University, Shanghai, China smqiu21@m.fudan.edu.cn; {chenji19,xyxue,jianpu}@fudan.edu.cn Bosch Corporate Research, China Mogo.ai Information and Technology Co., Ltd, Beijing, China wanru@zhidaoauto.com Make a Strong Teacher with Label Assistance: A Novel Knowledge Distillation Approach for Semantic Segmentation Shoumeng Qiu10000-0003-4475-2303 Jie Chen10000-0002-5625-5729 Xinrun Li20000-0002-2548-2187 Ru Wan30009-0008-8151-0059Xiangyang Xue10000-0002-4897-9209Jian Pu1*0000-0002-0892-1213 July 22, 2024 ======================================================================================================================================================================================= § ABSTRACT In this paper, we introduce a novel knowledge distillation approach for the semantic segmentation task. Unlike previous methods that rely on power-trained teachers or other modalities to provide additional knowledge, our approach does not require complex teacher models or information from extra sensors. Specifically, for the teacher model training, we propose to noise the label and then incorporate it into input to effectively boost the lightweight teacher performance. To ensure the robustness of the teacher model against the introduced noise, we propose a dual-path consistency training strategy featuring a distance loss between the outputs of two paths. For the student model training, we keep it consistent with the standard distillation for simplicity. Our approach not only boosts the efficacy of knowledge distillation but also increases the flexibility in selecting teacher and student models. To demonstrate the advantages of our Label Assisted Distillation (LAD) method, we conduct extensive experiments on five challenging datasets including Cityscapes, ADE20K, PASCAL-VOC, COCO-Stuff 10K, and COCO-Stuff 164K, five popular models: FCN, PSPNet, DeepLabV3, STDC, and OCRNet, and results show the effectiveness and generalization of our approach. We posit that incorporating labels into the input, as demonstrated in our work, will provide valuable insights into related fields. Code is available at <https://github.com/skyshoumeng/Label_Assisted_Distillation.> § INTRODUCTION Semantic segmentation is one of the most fundamental tasks that aims to classify every pixel of a given image into a specific class. It is widely applied to many applications such as autonomous driving <cit.>, video surveillance <cit.>, and biomedical image diagnosis <cit.>. Nevertheless, most of the existing models entail high complexity and heavy computational cost for achieving superior performance <cit.>, which is inappropriate in real-world applications. To alleviate these issues, knowledge distillation<cit.> has been introduced into segmentation <cit.>, which aims at transferring knowledge from more complex powerful teacher models to efficient lightweight student models. Based on the source of additional knowledge primarily deriving from the teacher model, current knowledge distillation methods for segmentation broadly fall into two categories: knowledge from the more powerful model capacity, such as <cit.>, and knowledge from extra data information, such as <cit.>. However, the approaches in both categories have some apparent issues. For the first category, where additional knowledge is derived from the teacher model, this approach typically requires using a more complex and computationally expensive model as the teacher to extract more useful information from the inputs. <cit.>. 
For the second category, where additional knowledge comes from extra data information, this always involves other modalities <cit.> such as infrared or LiDAR data which are often costly or unattainable in practice. For the above issues, we aim to conduct knowledge distillation learning using a lightweight teacher model and eliminate the need for additional modalities. In this paper, we innovatively consider improving the performance of lightweight teacher model by incorporating label information into the input, as the label is always attainable in supervised learning tasks. The key difference between the existing method and our proposed approach is shown in Fig. <ref>. Specifically, for the teacher model, unlike previous methods that adopt complex models or extra information to achieve higher performance, we improve the performance by taking the label as part of the input to the model. However, there is a problem with directly feeding labels into the model, as the model may take shortcuts in mapping inputs to outputs based on the labels rather than learning the intended solution as mentioned in <cit.>. To address this, we propose to noise the label randomly before feeding it into the model. In experiments, we found that there are fluctuations in the output of the teacher model due to the random sampling in the label noising operation. To counteract the effects of the introduced noise, we further propose a dual-path consistency training strategy with a consistency loss between the outputs of two paths. When the teacher model is trained, the distillation learning for the student model is the same as the standard distillation approach. It should be noted that the noised label in this paper is different from <cit.>, where label noise originates from annotations, our approach intentionally introduces noise to the clean label. In addition, our approach allows for a more flexibility choice of teacher and student models. For example, the teacher can be more complex than the student, the same as the student, or even simpler than the student. Finally, to evaluate the effectiveness and generalization of our approach, we conduct extensive experiments on five baseline segmentation models: FCN <cit.>, PSPNet <cit.>, DeeplabV3 <cit.>, STDC <cit.>, and OCRNet <cit.>, and across five challenging datasets: Cityscapes <cit.>, ADE20K <cit.>, PASCAL-VOC 2012 <cit.>, COCO-Stuff 10K <cit.>, and COCO-Stuff 164K <cit.>. The experimental results show consistent performance improvement across all cases. We also perform detailed analyses of crucial components in the proposed approach and hope other researchers can gain inspiration from our study. Our contributions are summarized as follows: * We propose a novel knowledge distillation approach with noised labels as privileged information for the semantic segmentation task. Our approach alleviates the dependency on complex teacher models or other modalities, and can effectively improve performance of knowledge distillation. * To enhance the robustness of teacher against the noise introduced in privileged information, we propose a dual-path consistency training strategy with a distance loss to minimize discrepancies between the outputs of two paths. * We conduct extensive experiments on five popular semantic segmentation baseline models across five challenging public datasets, and experimental results show substantial and consistent improvements on performance. We also perform analyses on crucial components of the proposed approach. 
§ RELATED WORKS §.§ Semantic Segmentation Semantic segmentation is one of the most fundamental computer vision tasks, which aims at classing every pixel on an input image into a certain class. It has a wide range of applications, such as autonomous driving and video surveillance. Deep neural network-based approaches are dominant in this task. FCN <cit.> is a fully convolutional model that can take input of arbitrary size and produce correspondingly-sized output with efficient inference and training. PSPNet <cit.> proposed to exploit the capability of global context information by different-region-based context aggregation operation through the pyramid pooling module and the pyramid scene parsing network. DeeplabV3 <cit.> proposed to combine both the spatial pyramid pooling module and encode-decoder structure. In this way, the approach can have the advantage of multi-scale contextual information encoding and sharper object boundary capturing at the same time. In recent years, many computational and memory-intensive models <cit.> have been proposed to further improve the performance on this task, However, these models are not friendly to applications in real-world scenarios. In response to this problem, many real-time methods have been proposed <cit.>. For example, STDC <cit.> proposed an efficient Short-Term Dense Concatenate network structure by gradually reducing the dimension of feature maps and using the aggregation of them for better image representation. §.§ Knowledge Distillation Knowledge distillation is a technique for model compression and acceleration in which a smaller model, referred to as the student model, is trained to mimic the behavior of another model, usually a larger pre-trained model, known as the teacher model. It is popularized by <cit.> and has attracted a surge of attention in recent years <cit.>. Surveys <cit.> and <cit.> summarized methods from various perspectives, and our method can be categorized into Cross-Modal distillation (multi-modal to single modal). The key distinction between our approach and previous methods is our innovative use of noised labels as an input modality. Based on the source of additional knowledge primarily comes from in the teacher model. We roughly categorize current works on knowledge distillation into two main categories: knowledge from a more complex teacher model and knowledge from extra information. Here we mainly focus on the distillation approaches applied to the semantic segmentation task. For the knowledge from a more complex teacher model, there is currently a lot of work devoted to further improving the efficiency and performance of the knowledge distillation approach, such as SKD<cit.>, IFDV<cit.>, CWD<cit.>, CIRKD<cit.>. SKD <cit.> proposed to distill structured knowledge from the teacher model to the student model as dense prediction is a structured prediction problem. IFDV <cit.> proposed an Inter-class Distance Distillation method to transfer the inter-class distance in the feature space from the teacher network to the student network. CWD <cit.> introduced to minimize the Kullback–Leibler (KL) divergence between the channel-wise probability map of the two networks. CIRKD<cit.> proposed to transfer structured pixel-to-pixel and pixel-to-region relationships across entire images. 
For the knowledge from extra information, LGD <cit.> and LG3D <cit.> introduced distillation with label methods involving manually designed label encoder and label mapper modules, which specifically designed for object detection task, different from the above methods, our proposed approach is label encoder and label mapper free for the semantic segmentation task. KD-Net <cit.> proposed a framework to transfer knowledge from multi-modal to a mono-modal for medical image segmentation. 2DPASS <cit.> proposed to distill richer semantic and structural information from the multi-modal data to boost the feature learning from single-modal data. §.§ Learning using Privileged Information Learning using privileged information <cit.> is also a crucial technique that enables machines to learn from other machines. The framework of learning using privileged information aims to leverage the additional information at training time to help boost the performance of the model, which is not accessible at the test time. In <cit.>, the authors proposed to unify the knowledge distillation and privileged information two into generalized distillation for machines learning from other machines. In <cit.>, the authors proposed to translate the notion of privileged information to the unsupervised setting in order to improve clustering performance. PISR<cit.> proposed to leverage the high-resolution (HR) images as privileged information and transfer the important knowledge of the HR images to a student model. § PROPOSED METHOD The overall framework of our proposed distillation approach is shown in Fig. <ref>. The main distinction between our framework and the conventional distillation framework lies in the input information and training strategy of the teacher model. We do not rely on power-pretrained teacher models <cit.> or extra information coming from other modalities <cit.>, which gives our approach obvious advantages over other distillation methods. §.§ Problem Formulation We consider improving the performance of lightweight teacher model by incorporating more task-related information into the input. However, as additional information is often unattainable or costly in practice, so we wonder if we could use the readily available information to achieve this purpose. Specifically, in this paper, we aim to introduce the label to the input of teacher model, which is always attainable in the supervised learning tasks. The simplest approach is to directly incorporate the label into the model's input, then the objective of model learning changes from {(X_1,Y_1),...,(X_n,Y_n) }∼ P^n(X,Y), to {(X_1,Y_1),...,(X_n,Y_n) }∼ P^n((X,Y),Y), where each (X_i, Y_i) is a image-label pair, X_i ∈ℝ^H× W × 3, Y_i ∈ℝ^H× W and P(·,·) denotes the joint probability distribution. However, there is a problem with the above simple solution. Since Y_i in the input and the expected output Y_i are the same, the model may take a shortcut instead of learning the intended solution <cit.>, which means that the deep neural network may learn a mapping from the input label to the output rather than extract some useful features in the image, as indicated in Eq. <ref>. Please refer to Shortcut Learning with Clean Label Input in subsection <ref> for more detailed discussions. 
{X_i, Y_i}⟶ Y_i, with a potential shortcut that maps the input label Y_i directly to the output Y_i. Since directly incorporating the label into the input is inappropriate, some transformation should be applied to the label before feeding it to the model. We denote this transformation as the function ϕ(·); the objective of model learning then becomes: {(X_1,ϕ(Y_1),Y_1),...,(X_n,ϕ(Y_n),Y_n) }∼ P^n((X,ϕ(Y)),Y) . The function ϕ(·) should satisfy two conditions. First, based on the above analysis, it should make learning the mapping from ϕ(Y_i) to Y_i difficult. Second, ϕ(Y_i) should retain some useful information for the task; otherwise, ϕ(Y_i) will provide no benefit to the predictions. In this paper, we propose a simple but effective Label Noising Module (LNM) as the function ϕ(·), which satisfies both conditions well. We give a detailed description of the implementation of the LNM in the next subsection. §.§ Strong Teacher with Noised Label Design of LNM. We note that the information contained in the label Y_i can be decomposed into two parts: the unique semantic class index, and the index consistency among pixels within the same class. Accordingly, our proposed label noising module is two-stage: class-wise noising followed by pixel-wise noising. The class-wise noising operation aims to distort the class index. We first represent the label Y_i in one-hot form Y_i^oh = {y_i^1, y_i^2, ..., y_i^c}, where c is the total number of classes. Each y_i^j ∈ℝ^H× W is multiplied by a scalar w_j sampled from a predefined distribution Ω_1, and we then sum across channels to obtain the class-wise noised label Y_i^c. The pixel-wise noising operation aims to disrupt the index consistency within each class. We generate a random matrix Z ∈ℝ^H× W whose elements are sampled from a predefined distribution Ω_2, scale Z by α, and add it to Y_i^c. Finally, the label noising operation can be expressed as follows: ϕ(Y_i) = sum(W · Y_i^oh, dim=0) + α Z , where W={w_1, w_2,…,w_c}. With the noised label obtained, we next incorporate it into the input of the model. We choose simple concatenation, as it is widely used in many multi-modal fusion works <cit.>. Therefore, the teacher model takes a four-channel tensor as input instead of the standard three RGB channels. We adopt the standard Gaussian distribution for both Ω_1 and Ω_2 in this paper. The pipeline of our label noising module is shown in Fig. <ref>. Next, we explain why the operation described in Eq. <ref> meets the two conditions mentioned in subsection <ref>. First, since ϕ(·) employs randomly sampled parameters, there is no direct inverse function for ϕ(Y_i), which makes learning the mapping from ϕ(Y_i) to Y_i difficult. Second, for the class-wise noising operation, all pixels of class j in Y_i^oh share the same parameter w_j, so the values within each class remain consistent after class-wise noising. For the pixel-wise noising operation, we can control the impact of the noise by adjusting the parameter α. Based on the above analysis, the generated noised label ϕ(Y_i) satisfies both conditions and can be used as additional information to assist the model toward better segmentation performance. Dual-path Consistency Training.
In experiments, we found a problem when adopting the noised label from Fig. <ref> for teacher training. Since the parameters W and Z are sampled randomly in each training iteration, the teacher output fluctuates: the same input image X_i yields two different outputs O_i^1 and O_i^2 under two different sampled parameters θ_1 and θ_2 of the LNM: {X_i, ϕ(Y_i|θ_1) }→ O_i^1, {X_i, ϕ(Y_i| θ_2) }→ O_i^2, 𝒟(O_i^1, O_i^2) > 0, where we adopt the more specific form ϕ(·|θ) for clarity, θ_1 and θ_2 each contain both W and Z, and 𝒟(·,·) measures the distributional distance between two outputs. In experiments, we found that 𝒟(O_i^1, O_i^2) is non-negligible. This inconsistency makes it challenging for the student model to learn from the teacher, as the outputs of the teacher model vary for the same image across iterations due to the random sampling in the LNM. To address this issue, the output of the teacher model needs to be robust to the randomly sampled parameters W and Z. Specifically, we propose an effective dual-path consistency training strategy. In teacher training, we duplicate the LNM and the teacher model, with each path taking a noised label generated from independently sampled parameters. We then introduce a consistency loss between the outputs of the two paths. As a result, the final loss for teacher training can be expressed as: ℒ_T = ℒ_seg(O^1, Y) + ℒ_seg(O^2, Y) + λ𝒟(O^1, O^2), where ℒ_seg(·,·) is the loss function for the segmentation task and λ is a loss coefficient balancing the contribution of the consistency loss and the task loss. §.§ Objective Function The training of the student model is the same as in standard knowledge distillation: the model is supervised by both the label and the teacher model, which can be expressed as: ℒ_S = ℒ_seg(O^S, Y) + βℒ_kd(O^S, O^T) , where O^S is the output of the student model, O^T is the output of the teacher model, and ℒ_kd(·,·) can be an arbitrary distillation loss function. β is the loss coefficient balancing the contribution of the distillation loss and the task loss. For simplicity, we only apply knowledge distillation to the logits map. § EXPERIMENTS §.§ Experimental Setup For thorough evaluation, we conduct experiments on five commonly used datasets: Cityscapes <cit.>, ADE20K <cit.>, PASCAL-VOC <cit.>, COCO-Stuff 10K <cit.>, and COCO-Stuff 164K <cit.>, and five popular segmentation baseline models: FCN <cit.>, PSPNet <cit.>, DeeplabV3 <cit.>, STDC <cit.>, and OCRNet <cit.>. We use the standard mean Intersection over Union (mIoU) metric for evaluation. We adopt the CWD distillation loss <cit.> as ℒ_kd(·,·), as it is a convenient plug-in loss and also very efficient and effective. For simplicity, we also use the CWD loss as the consistency loss in teacher model training. We set the hyper-parameters α, λ, and β to 0.01, 1, and 3, respectively. More details about the training settings can be found in the supplementary material. §.§ Main Results Comparison with SOTA Methods. We compare our proposed distillation approach with several state-of-the-art methods, including SKD <cit.>, IFVD <cit.>, CWD <cit.>, CIRKD <cit.>, and MasKD <cit.>. They all rely on more complex models to provide additional knowledge during distillation training. The experimental results are shown in Tab. <ref>.
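Before turning to the numbers in Tab. <ref>, we give a minimal PyTorch-style sketch of how the label noising module ϕ(·), the dual-path teacher objective ℒ_T, and the student objective ℒ_S above could be implemented. It is an illustration under stated assumptions rather than our released code: the teacher and student networks are passed in as generic callables, the Gaussian choices for Ω_1 and Ω_2 and the values α=0.01, λ=1, β=3 follow the text, and a symmetric KL term stands in for the CWD loss we actually adopt for 𝒟(·,·) and ℒ_kd(·,·).

import torch
import torch.nn.functional as F

def label_noising(label, num_classes, alpha=0.01):
    # Label Noising Module phi(Y): class-wise scaling followed by pixel-wise Gaussian noise.
    # label: (B, H, W) integer class indices
    one_hot = F.one_hot(label, num_classes).permute(0, 3, 1, 2).float()   # (B, C, H, W)
    # class-wise noising: one scalar w_j ~ N(0, 1) per class (Omega_1), shared by all pixels of that class
    w = torch.randn(one_hot.size(0), num_classes, 1, 1, device=label.device)
    class_noised = (w * one_hot).sum(dim=1, keepdim=True)                 # (B, 1, H, W)
    # pixel-wise noising: Z ~ N(0, 1) (Omega_2) scaled by alpha
    z = torch.randn_like(class_noised)
    return class_noised + alpha * z

def teacher_loss(teacher, image, label, num_classes, lam=1.0):
    # Dual-path consistency training: two independently noised labels, one consistency term.
    x1 = torch.cat([image, label_noising(label, num_classes)], dim=1)     # 4-channel input
    x2 = torch.cat([image, label_noising(label, num_classes)], dim=1)
    o1, o2 = teacher(x1), teacher(x2)                                     # (B, C, H, W) logits
    seg = F.cross_entropy(o1, label) + F.cross_entropy(o2, label)
    # consistency loss D(o1, o2); a symmetric KL stand-in for the CWD loss used in the paper
    consist = 0.5 * (F.kl_div(F.log_softmax(o1, dim=1), F.softmax(o2, dim=1), reduction="batchmean")
                     + F.kl_div(F.log_softmax(o2, dim=1), F.softmax(o1, dim=1), reduction="batchmean"))
    return seg + lam * consist

def student_loss(student, teacher, image, label, num_classes, beta=3.0):
    # Standard distillation for the student: task loss + distillation from the frozen teacher.
    with torch.no_grad():
        x_t = torch.cat([image, label_noising(label, num_classes)], dim=1)
        o_t = teacher(x_t)
    o_s = student(image)                                                  # student sees RGB only
    kd = F.kl_div(F.log_softmax(o_s, dim=1), F.softmax(o_t, dim=1), reduction="batchmean")
    return F.cross_entropy(o_s, label) + beta * kd

In practice the two paths share the teacher's weights, so the duplication only affects the forward passes; the student never sees label information at test time.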
From Tab. <ref>, we can see that for the PSPNet-R18 model we achieve very competitive results among all distillation approaches, surpassing the CWD baseline by 1.8% mIoU on the val set and 2.0% mIoU on the test set. For the DeepLabV3-R18 model, we achieve the best performance among all approaches, surpassing the CWD baseline by 2.1% mIoU and 2.8% mIoU, respectively. Additionally, we conducted a distillation time comparison on an NVIDIA V100 GPU with an image size of 512 × 1024; our method shows a significant advantage over the others. Tab. <ref> shows the performance comparison when the student model is trained from scratch. Our approach achieves the best results among all methods. For the PSPNet-R18 model, we surpass the CWD baseline by 1.5% mIoU and the CIRKD method by 1.0% mIoU. For the DeepLabV3-R18 model, we surpass the CWD baseline by 3.5% mIoU and the CIRKD method by 0.7% mIoU. Tab. <ref> shows the experimental results on the PASCAL-VOC dataset, where our approach again outperforms the other methods. We also conduct experiments on six other models on the Cityscapes dataset; the results are shown in Tab. <ref>. For FCN-18 we surpass the baseline by 1.6%, and for FCN-50 by 0.7%. For the STDC baseline, we surpass STDC1 and STDC2 by 1.3% and 1.6% mIoU. For the OCRNet baseline, the experiments are conducted with OCRNet-W18s and OCRNet-W18 as backbones, and we surpass the baseline by 1.2% and 1.8% mIoU, respectively. To further demonstrate the generalization of our approach, we conduct experiments on the ADE20K <cit.>, PASCAL-VOC <cit.>, COCO-Stuff 10K <cit.>, and COCO-Stuff 164K <cit.> datasets using PSPNet and DeepLabV3 with a ResNet50 backbone. The results are shown in Tab. <ref>. It is evident that our method significantly enhances the performance of the baseline models across various datasets. Comparison with Stronger Teacher. As can be seen from Tab. <ref>, after introducing the noised label into the input, we obtain lightweight teacher models with very strong performance, even surpassing DeepLabV3-R101. However, this raises a question about the underlying reason for our method's effectiveness: is the performance boost simply due to a more powerful teacher model? To answer this question, we chose two other powerful models as teachers: PSPNet-R101 (79.8 mIoU) and OCRNet-W48 (80.7 mIoU). The experimental results are shown in Tab. <ref>. Distillation with a more powerful teacher model does bring some performance improvement, but our method still retains a clear advantage. An intuitive explanation for why our approach outperforms distillation from stronger teachers is that knowledge transfers more readily between models sharing the same structure. For more detailed experiments and discussion, please refer to the supplementary material. Simple to Complex Model Distillation. Since the performance of a model can be significantly improved with privileged information, a simple model with privileged information can even outperform a complex model. As shown in Tab. <ref>, PSPNet-R18 with label information as input reaches 79.7% mIoU, surpassing the DeepLabV3 model with a ResNet101 backbone. This raises the question of whether we can distill knowledge from a label-assisted simple teacher to a complex student.
To explore this, we conducted experiments using the STDC models, with the results presented in Tab. <ref>. Focusing on the result in the third row, we can see that with the simple STDC1 model as the teacher and the more complex STDC2 as the student, the performance of the STDC2 model can still be improved by 1.4% mIoU. These results confirm that complex models can indeed benefit from learning from label-assisted simpler teacher models. As large vision foundation models gain popularity, we also extended our experiments to assess the applicability of our method to such models. We employed a relatively lightweight yet high-performance model (OCRNet-W48) with noised label input as the teacher and used the LoRA <cit.> method to finetune DINOv2 <cit.> on the VOC dataset. The results also show performance improvements: 81.1 → 82.3 (with the ViT-S/14 backbone) and 82.5 → 83.2 (with the ViT-B/14 backbone), indicating that the teacher model with noised labels as input contains knowledge that is not present in large models. In addition, since the teacher can be the same as the student, our approach can also be seen as a self-enhancement technique, which offers further potential in the future. Please refer to the supplementary material for more information. Shortcut Learning with Clean Label Input. To demonstrate that the model learns a shortcut instead of extracting useful information when the clean label is taken as input during training, we conduct experiments to assess the contribution of the two different inputs to the final prediction. The results are shown in Fig. <ref>. When the input uses the noised label, the final prediction clearly draws on both parts. However, when the input uses the clean label, the prediction predominantly relies on the input label alone, with the RGB image contributing negligibly to the output. Therefore, in this case, the model has learned a mapping from input label to output label, rather than extracting useful information from the RGB image, as discussed in subsection <ref>. §.§ Ablation Studies Dual-path Consistency Loss. We perform ablation studies to verify the effectiveness of the dual-path consistency training strategy. We conduct the experiments on the DeeplabV3-18 and STDC2 models. To assess the robustness of the final teacher model against the label noising operation, we randomly sample the transformation parameters m times and obtain the corresponding output logits of the teacher for the same image X_i, referred to as O_i^1, O_i^2, ..., O_i^m, where O_i^k ∈ℝ^C × H × W. We then calculate the KL divergence between the distributions of each pair and use the mean distance as a criterion of stability: KL_mean = 1/m^2∑_k=1^m∑_j=1^m KL(softmax(O_i^k, dim=0), softmax(O_i^j, dim=0)). In our experiments, we set m=3. The results, shown in Tab. <ref>, indicate that our consistency loss significantly reduces the distribution distance between outputs, enhancing the stability of the distillation process and notably improving the performance of the student model. Sensitivity Analysis. We conducted experiments to evaluate the effects of class-wise and pixel-wise noising on the performance of both the teacher and student models. The results are shown in Tab. <ref>. Note that without the class-wise noising operation, the labels are first normalized to achieve an input range similar to that of the normalized images.
With class-wise noising, if the pixel-wise noise is lower (0.001), better performance can be achieved for the teacher model as the information is cleaner. However, this knowledge cannot be effectively transferred as the predictions rely more on the leaked label information. When the pixel-wise noise is higher (0.1), the useful information in the noised label will be contaminated by the noise, degrading the teacher model’s performance and subsequently affecting the performance of the student model. Without class-wise noising operation, the teacher model can achieve higher performance as the class index information is preserved. However, as predictions are more dependent on label information than on extracted features, the knowledge in the outputs cannot be effectively transferred to the student model, resulting in unsatisfactory distillation performance. §.§ Discussion Here, we offer an alternative denoising perspective to explain our approach of utilizing noised label as privileged information. For the teacher model, it receives both the image X_i and the noised label ϕ(Y_i|θ) as input, aiming to produce the clean label Y_i as its output. Hence, the teacher model can be considered as a denoising model whose objective is to eliminate the noise present in the input noised label. In this case, the objective of model learning is changed from Y_i = mapping(X_i), to Y_i = denoising(X_i, ϕ(Y_i|θ)), where the behavior of noising function ϕ(·) is controlled by the parameter θ, which is randomly sampled every iteration in training. The above equation shows that we transform the standard segmentation task of learning an image-to-label mapping to a label-denoising task with the RGB images serve as observations. From the above perspective, the performance of the denoising task should heavily depend on the level of noise in the input, which aligns with our experiment results shown in Tab. <ref>. The results indicated that performance varied significantly depending on the noise levels. With relatively minimal noise (0.001 for pixel-wise noising), the teacher model could achieve high performance effortlessly. However, when noise levels were substantially increased (0.1 for pixel-wise noising), the performance of the teacher model declined dramatically. This decline is primarily due to the model's difficulty in extracting useful information from the excessively noised labels. § CONCLUSION In conclusion, our proposed knowledge distillation approach for semantic segmentation offers a novel and effective way to leverage readily available labels as a source of knowledge. By strategically introducing noise to labels and employing a dual-path consistency training strategy, we significantly enhance the performance of lightweight teacher models. Extensive experiments demonstrate the effectiveness of our method, achieving substantial improvements in mIoU scores across various datasets and baseline models. We believe this work provides valuable insights and a promising direction for future research in knowledge distillation for semantic segmentation. § LIMITATIONS We believe that our proposed knowledge distillation approach can significantly enhance the performance of baseline models across various computer vision tasks. Our approach has potential value in the deployment of deep learning models in practice, we do not see any potential negative effects. § ACKNOWLEDGEMENTS This work is supported by the Shanghai Platform for Neuromorphic and AI Chip under Grant 17DZ2260900 (NeuHelium). 
The computations in this research were performed using the CFFF platform of Fudan University.
http://arxiv.org/abs/2407.13268v1
20240718082131
Mixture of Experts based Multi-task Supervise Learning from Crowds
[ "Tao Han", "Huaixuan Shi", "Xinyi Ding", "Xiao Ma", "Huamao Gu", "Yili Fang" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Mixture of Experts based Multi-task Supervised Learning from Crowds Tao Han Huaixuan Shi Xinyi Ding Xiao Ma Huamao Gu Yili Fang July 22, 2024 ============================================================= § ABSTRACT Existing truth inference methods in crowdsourcing aim to map redundant labels and items to the ground truth. They treat the ground truth as hidden variables and use statistical or deep learning-based worker behavior models to infer the ground truth. However, worker behavior models that rely on ground-truth hidden variables overlook workers' behavior at the item feature level, leading to imprecise characterizations and negatively impacting the quality of truth inference. This paper proposes a new paradigm of multi-task supervised learning from crowds, which eliminates the need to model items' ground truth in worker behavior models. Within this paradigm, we propose a worker behavior model at the item feature level called Mixture of Experts based Multi-task Supervised Learning from Crowds (MMLC). Two truth inference strategies are proposed within MMLC. The first strategy, named MMLC-owf, utilizes clustering methods in the worker spectral space to identify the projection vector of the oracle worker. Subsequently, the labels generated based on this vector are considered as the inferred truth. The second strategy, called MMLC-df, employs the MMLC model to fill the crowdsourced data, which can enhance the effectiveness of existing truth inference methods. Experimental results demonstrate that MMLC-owf outperforms state-of-the-art methods and that MMLC-df enhances the quality of existing truth inference methods. § INTRODUCTION Truth inference in crowdsourcing aims to derive accurate results from noisy data provided by online workers. Existing truth inference methods can be broadly classified into two categories: weakly supervised and supervised approaches. In the weakly supervised approach, the unknown ground truth is treated as a hidden variable. This involves utilizing statistics from workers' noisy answers to calculate results directly. Alternatively, it entails creating worker behavior models and employing unsupervised learning methods such as the EM algorithm to estimate unknown parameters and infer the ground truth. The weakly supervised approach can be further classified into statistical learning and deep learning methods, based on whether they consider the features of the items. Statistical learning methods, such as Majority Voting (MV) <cit.>, the Dawid&Skene model (DS) <cit.>, and the homologous Dawid-Skene model (HDS) <cit.>, do not incorporate item features. In contrast, deep learning methods like Training Deep Neural Nets <cit.> take item features into account. In the supervised approach, a classifier model is first constructed with item features as the input and the ground truth as the output. Then, a worker behavior model is created based on a confusion matrix that establishes the relationship between the item's ground truth and the worker labels. On this basis, supervised learning is implemented using the classifier model and the worker behavior model by treating the worker labels as supervisory information. Finally, the output of the classifier model is used as the inferred ground truth. In recent years, various truth inference methods based on supervised learning have been proposed, such as Crowdlayer <cit.>, CoNAL <cit.>, and UnionNet <cit.>. However, the worker behavior model based on the confusion matrix faces challenges in effectively capturing variations in feature characteristics across different items.
Neglecting these variations in worker behavior under different conditions can result in inaccurate representations of worker behavior, consequently impacting the quality of truth inference. For example, in handwritten digit recognition, workers generally have high accuracy. Suppose there are two items: one closely resembles the digit “1,” but its ground truth is actually “7,” and the other is a normal “7.” The former receives many labels as “1,” while the latter rarely gets labeled as “1.” Under the worker behavior model based on the confusion matrix, it is difficult to accurately model the labeling behavior of such high-difficulty items. Therefore, there is a high probability that the model will interpret the former with “1” as the ground truth, leading to incorrect judgments. The quality of truth inference is influenced by uncertainty from hidden variables, the method's data adaptability, and the accuracy in characterizing worker behavior. The purpose of this paper is to develop a supervised model that can achieve high-quality truth inference with a worker behavior model at the item feature level. In this paper, we propose Multi-task Supervised Learning from Crowds (MLC), a novel paradigm for crowdsourcing learning. Unlike traditional paradigm, MLC does not rely on the ground truth of the items but instead focuses on understanding the unique behavior of individual workers across different items. When multiple workers handle the same item, they share the item's features, leading to a multi-task learning paradigm. Within this paradigm, we propose a method called Mixture of Experts-based Multi-task Supervised Learning from Crowds (MMLC). MMLC does not utilize a single classifier but instead creates multiple expert modules. The outputs from these expert modules serve as the bases of the worker spectral space. Each worker is represented by his or her projection vector in the spectral space that characterizes their behavior. The worker behavior model provides a more precise depiction of their behavior across different items by accurately modeling the workers' behavior on item features. However, it is important to note that the model itself cannot determine the ground truth. To address this limitation, we introduce two truth inference strategies based on MMLC. The first strategy involves analyzing the distribution of workers' projections in the worker spectral space. We identify the projection of the oracle worker by applying clustering methods, and consider its labels as the ground truth. This approach is referred to as Oracle Worker Finding of MMLC (MMLC-owf). The second strategy leverages the sparsity of crowdsourced data to fill the original dataset with MMLC outputs, generating a new crowdsourcing dataset. Existing truth inference methods can then be applied within this framework, which is called Data Filling of MMLC (MMLC-df). The main contributions are as follows: * We introduce a novel paradigm of multi-task supervised learning from crowds and propose a new worker behavior model called MMLC. This model is well-suited for crowdsourcing learning and offers a more accurate way to characterize worker behavior in item labeling. * We leverage MMLC to identify the oracle worker for labeling items as the ground truth, referred to as MMLC-owf. Experimental results demonstrate that the labels obtained using this method exhibit higher quality compared to state-of-the-art methodes. 
* We introduce a truth inference framework called MMLC-df, which leverages the MMLC model to fill sparse crowdsourced data. This framework then applies truth inference methods to determine the ground truth. Experimental results demonstrate that MMLC-df significantly enhances truth inference methods, leading to higher-quality results. § RELATED WORK Weakly supervised approach: This class of methods focuses on modeling the relationship between workers' noisy answers and the ground truth. In this approach, the ground truth is treated as a hidden variable, and weakly supervised learning techniques are employed to infer the ground truth. The most direct and widely used method is Majority Voting (MV) <cit.>. It involves counting the responses from workers and considering the answer with the majority of votes as the ground truth. However, MV overlooks the variations in worker behavior during the labeling process and treats all workers' labels equally. To address this limitation, Tao et al. <cit.> proposed a method that considers the deterministic information of majority and minority categories separately. They establish a voting model that takes into account the labeling quality of workers in different situations. The Dawid-Skene (DS) <cit.> method utilizes a confusion matrix to describe each worker's behavior when handling an item. The EM algorithm is then used to estimate the worker's confusion matrix parameters and the ground truth. HDS <cit.> assumes equal probabilities of wrong worker selections while evolving the confusion matrix. GLAD <cit.> not only considers workers' abilities but also accounts for the difficulty of item processing. It creates a worker behavior model using a sigmoid function and employs the EM algorithm to determine the ground truth of an item. Some weakly supervised inference methods utilize deep learning techniques to infer the ground truth. These methods typically use aggregation algorithms to construct an initial answer table, which combines the noisy labels provided by workers. Subsequently, a neural network model is trained and used for classification based on the selected label set. While these methods <cit.> introduce reliability measurements of labels, the accuracy of these measurements directly affects the results of truth inference. Existing weakly supervised truth inference methods have achieved some degree of success. However, the data provided by workers is often sparse. Additionally, they treat the ground truth as a hidden variable. As a result, the accuracy of truth inference remains limited. Supervised approach: This approach primarily involves building a classifier model that establishes a connection between item features and the corresponding ground truth. Additionally, it constructs a worker behavior model based on the confusion matrix, which captures the relationship between the item's ground truth and the labels provided by workers. These worker labels are used as supervision information, and the two models are jointly trained in a supervised learning manner. Ultimately, the output of the classifier model is used as the ground truth for inference <cit.>. Various techniques have been introduced within this paradigm. For instance, the Expectation-Maximization (EM) algorithm has been employed to integrate label aggregation and classifier training for truth inference. Crowdlayer <cit.> introduces the concept of a crowd layer, which replaces the traditional confusion matrix used in the DS model. 
This approach seamlessly integrates label reasoning and classifier training in an end-to-end manner, resulting in more precise and reliable outcomes. Tan and Chen applied the worker's confusion matrix <cit.> and the labeled transfer matrix <cit.> along with classifier predictions to optimize model parameters and improve the accuracy of item processing. <cit.> introduces worker weight vectors, while <cit.> focuses on worker label reliability modeling. These approaches aim to estimate the characteristics of workers' abilities. UnionNet <cit.> considers all workers collectively and combines their annotations into a parameter transfer matrix. It then aggregates classifier predictions to facilitate model training. The CoNAL method <cit.> effectively handles noise from different sources by categorizing labeling noise into common noise and individual noise. Although supervised learning can be used to infer worker parameters, it struggles to accurately characterize worker behavior. Consequently, when the discrimination among candidate answers is poor, it is difficult to achieve good results using this approach. Diverging from these methods, our method does not treat the ground truth of the item as a latent variable. Instead, we focus on modeling the relationship between the behavior of workers and the difficulty of item processing. This allows us to establish a supervised learning framework to obtain a worker behavior model. Subsequently, we use this model to identify oracle worker who can accurately label items, thereby determining the ground truth. Alternatively, we use this model to fill in sparse crowdsourced data and then employ existing truth inference methods to obtain the ground truth from the completed data. § PROBLEM FORMULATION This section provides a formal problem description. Our main goal is to create a worker behavior model and achieve joint learning of worker abilities by utilizing multi-task learning to infer the ground truth. Let 𝒲={w_j} denote the worker set, where w_j represents an individual worker, and 𝒳={x_i} denote the set of items, where x_i represents a single item to be labeled. The labels for each item belong to the category set 𝒦 = {k}. We use y_ij to denote the category label assigned by worker w_j to item x_i. We have an indicator function I_ij, where I_ij=1 if y_ij exists and I_ij=0 otherwise. Consequently, we obtain the crowdsourced triples dataset 𝒟={<x_i, w_j, y_ij>|I_ij=1}. With regards to truth inference in crowdsourcing, we provide the following definition: By modeling and learning from the crowdsourced label dataset 𝒟, the problem of truth inference aims to find a function g^*: 𝒳→𝒦 such that g^* = min_g∈ℋ∑_i=1^|𝒳|ℒ( ẑ_i, g(x_i) ) + λg_ℋ. Here, ℋ denotes the hypothesis space of functions, ·_ℋ denotes the norm of hypothesis space, λ is the regularization coefficient, ℒ is the loss function, and ẑ_i= t_i(𝒟) is the estimation of label z_i for item x_i from learning on the dataset. Since the crowdsourced truth inference problem is an unsupervised learning problem without supervised information, the estimation of ground truth is utilized instead of the goal of learning. Let 𝒮_j = {(<x_i, w_j>, y_ij)}_x_i∈𝒳_j denote the crowdsourced training dataset for worker w_j, where 𝒳_j={x_i|I_ij=1}. The labels provided by worker j can be regarded as the j-th task for the corresponding item. Consequently, we obtain the dataset as 𝒮=⋃_j 𝒮_j. 
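To fix the notation in code form, the following small sketch shows one way the triple set 𝒟, the indicator I_ij, and the per-worker training sets 𝒮_j defined above could be represented; the container choices, function name, and toy example are illustrative and not part of the method.

from collections import defaultdict

def build_datasets(raw_labels):
    # raw_labels: iterable of (item_id, worker_id, label) tuples actually collected from the platform
    D = []                       # crowdsourced triples <x_i, w_j, y_ij> with I_ij = 1
    S = defaultdict(list)        # S[j]: training pairs (<x_i, w_j>, y_ij) for worker w_j
    I = set()                    # indicator: (i, j) in I  <=>  I_ij = 1
    for item_id, worker_id, label in raw_labels:
        D.append((item_id, worker_id, label))
        S[worker_id].append(((item_id, worker_id), label))
        I.add((item_id, worker_id))
    return D, S, I

# toy usage: two items, two workers, three observed labels
D, S, I = build_datasets([(0, "w1", 2), (0, "w2", 1), (1, "w1", 0)])
assert (0, "w2") in I and len(S["w1"]) == 2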
The problem of multi-task supervised learning from crowds is to find a worker behavior function f^*:𝒳×𝒲→𝒦 such that f^* = min_f∈ℋ∑_j=1^|𝒲|1/|𝒳_j|∑_x_i∈𝒳_jℒ(y_ij, f(<x_i,w_j>)) + λf_ℋ. We can observe that the solution to MLC problem does not directly address TI problem. Therefore, we provide two approaches to tackle this issue. The first approach is to identify an oracle worker w_oracle based on the distribution of workers. We then consider the labels provided by this oracle worker as the ground truth, that is, g^*(x_i) = f^*(<x_i,w_oracle>). The second approach considers the sparsity characteristic of crowdsourced data, where workers do not annotate every item. Consequently, we can utilize the results of MLC to generate a new dataset for inference. The new crowdsourced data can be defined as follows: 𝒟' = 𝒟∪{<x_i,w_j,ŷ_ij> | ŷ_ij = f^*(<x_i, w_j>),I_ij=0}. § PROPOSED METHODOLOGY To address the MLC problem, we propose a Mixture of Experts based Multi-task Supervised Learning from Crowds (MMLC) model. This model utilizes mixture of experts to characterize the varying attention of workers towards different item features, , aiming to capture the feature-level behavior differences of workers when dealing with various items. The framework of the model is shown in Fig. <ref>. It consists of three main modules: expert module, gate module, and output module. In the expert module, each item is processed by a feature extractor to obtain an item feature vector x_i. Then, the item feature vector is fed into m expert modules, where each module captures the unique characteristics of worker behavior associated with different feature information. Each expert module performs transformations and compressions on the feature vector, resulting in the output matrix of the expert module: U(x_i) = (u_1(x_i),u_2(x_i),..., u_E(x_i) ). Each expert sub-module follows the same structure, consisting of multiple layers of fully connected neural networks with ReLU activation functions in each layer. For each expert sub-module u_e, the high-dimensional feature vector x_i is transformed into a low-dimensional vector u_e(x_i). In the gate module, a gate network is constructed to control the selection of expert modules. This gate network takes worker data as input and generates a projection vector of the worker in the worker spectral space, with a length of E. The bases of the worker spectral space are the outputs of the expert sub-modules. Specifically, the module takes the one-hot encoded vector representing each worker w_j as input. After passing through a fully connected ReLU layer, the data proceeds through an attention layer and a softmax layer. Finally, it produces a worker projection vector v(w_j)=(v_1(w_j),v_2(w_j),...,v_E(w_j))^T with a length of E. The projection of worker w_j in the worker spectral space with the expert sub-modules as the bases is: proj_U(x_i) (w_j) = ∑_e=1^E v_e (w_j) u_e (x_i). In the output module, the worker's labels for the item are generated. The output module generates labels for each worker based on their chosen expert modules. It involves mapping worker behavior through the gate network, which includes weighting and summing the outputs of the expert modules. Subsequently, through a fully connected ReLU layer and a softmax layer, the model produces the label output of worker w_j for item x_i as follows: f_Θ (<x_i, w_j>)=o(proj_U(x_i) (w_j)), where o(·) denotes the output function, and Θ is the parameter set of the functions U, v, and o within the MMLC model. 
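To make the three modules concrete, we give a minimal PyTorch-style sketch of the expert, gate, and output modules described above. It is an illustration under stated assumptions rather than the exact implementation: the number of experts and the expert output dimension follow the LabelMe configuration reported in the experiments (16 experts, 32-dimensional outputs), the hidden widths are our own choice, the attention layer inside the gate is folded into a single linear layer for brevity, and all class and variable names are ours.

import torch
import torch.nn as nn

class MMLC(nn.Module):
    # Mixture-of-experts worker behavior model: f(<x_i, w_j>) = o( sum_e v_e(w_j) * u_e(x_i) )
    def __init__(self, feat_dim, n_workers, n_classes, n_experts=16, expert_dim=32):
        super().__init__()
        # Expert module: E small MLPs mapping the item feature x_i to basis vectors u_e(x_i).
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                          nn.Linear(256, 64), nn.ReLU(),
                          nn.Linear(64, expert_dim), nn.ReLU())
            for _ in range(n_experts)
        ])
        # Gate module: maps the one-hot worker vector to a projection vector v(w_j) of length E.
        self.gate = nn.Sequential(nn.Linear(n_workers, 64), nn.ReLU(),
                                  nn.Linear(64, n_experts))
        # Output module: maps the worker's projection in the spectral space to class logits.
        self.out = nn.Sequential(nn.Linear(expert_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, item_feat, worker_onehot):
        U = torch.stack([e(item_feat) for e in self.experts], dim=1)   # U(x_i): (batch, E, expert_dim)
        v = torch.softmax(self.gate(worker_onehot), dim=-1)            # v(w_j): (batch, E)
        proj = (v.unsqueeze(-1) * U).sum(dim=1)                        # projection onto the expert basis
        return self.out(proj)                                          # logits; softmax applied in the loss

The softmax in the gate makes v(w_j) a distribution over the expert bases, so each worker is characterized purely by how it mixes the shared, item-dependent expert outputs.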
The MMLC model deals with a classification problem with |𝒦| categories. The network's output is a |𝒦|-dimensional vector, where each element represents the predicted probability of a category. The model's loss function combines a cross-entropy loss term with a regularization term. The loss function is formulated as follows: ℒ_Θ = -1/|𝒟|∑_w_j∈𝒲∑_x_i∈𝒳_j∑_k∈𝒦 y_ij^klog(f_Θ (<x_i, w_j>))+ λΘ_F, The first term denotes the multi-class cross-entropy loss, while the second term represents the regularization of the model's parameter set Θ to prevent overfitting. In the equation, λ is the regularization coefficient, and ·_F denotes the Frobenius norm. By minimizing the loss function, we can obtain the final model ℳ^*:f_Θ^*(·). This model uses the function f_Θ^* to predict the labels of worker w_j for item x_i, where Θ^* represents the optimized parameters. Truth Inference by Oracle Worker Finding (MMLC-owf): The MMLC model does not directly generate the ground truth for inference. To address this issue, this section proposes a method for inferring the ground truth by identifying the oracle worker's projection vector in the worker spectral space. Specifically, each worker is theoretically associated with a projection in the worker spectral space, representing their unique characteristics. Workers are distributed in the spectral space. We assume the existence of an omniscient oracle worker who possesses a projection vector in the spectral space and is capable of providing the ground truth in the MMLC model. Therefore, by identifying the projection vector of the oracle worker in the worker spectral space, we can consider its output as the inferred truth. If we treat any worker as a random expression of the oracle worker's error, then the center of the worker's distribution projected onto the spectral space can be regarded as the projection vector of the oracle worker, that is, v_oracle = τ(v (𝒲)), where the function τ(·) is used to determine the distribution center, which can be found using methods such as kernel density estimation, mean, median, etc. According to the MMLC model, the outcome of the Oracle Worker Finding method (MMLC-owf) for inferring the ground truth of item x_i can be expressed as follows: f_Θ^* (<x_i, w_oracle>) = o(U(x_i) v_oracle). Truth Inference Framework by Data Filling (MMLC-df): In addition to the MMLC-owf method, we propose a truth inference framework using data filling under the MMLC model called MMLC-df, which utilizes the sparsity of crowdsourced data. A new crowdsourced dataset 𝒟' is constructed through data filling as follows: 𝒟' = 𝒟∪{<x_i,w_j,ŷ_ij> | ŷ_ij = f_Θ^*(<x_i, w_j>),I_ij=0}. Subsequently, any truth inference method applied to this new crowdsourced dataset can infer higher-quality ground truth compared to that obtained from the original data. § EXPERIMENTS In this section, we verify the effectiveness of our method through experiments. 
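Before the experimental comparison, we sketch how the two inference strategies defined above can be realized once the MMLC model is trained. The sketch assumes the MMLC class from the previous listing; the density step is a simple Gaussian-kernel approximation of the KDE-based center τ(·), and the bandwidth, function names, and data layout are illustrative assumptions rather than our exact configuration.

import numpy as np
import torch

def find_oracle_vector(worker_vectors, bandwidth=0.1):
    # MMLC-owf: approximate tau(.) by the worker projection with the highest kernel density.
    V = np.asarray(worker_vectors)                          # (n_workers, E) projection vectors v(w_j)
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(1)     # Gaussian-kernel density at each worker
    return V[np.argmax(density)]                            # approximate center of the distribution

def infer_truth_owf(model, item_feats, v_oracle):
    # Ground truth of each item = label produced by the oracle worker: o(U(x_i) v_oracle).
    with torch.no_grad():
        U = torch.stack([e(item_feats) for e in model.experts], dim=1)          # (N, E, d)
        proj = (torch.as_tensor(v_oracle, dtype=U.dtype).view(1, -1, 1) * U).sum(dim=1)
        return model.out(proj).argmax(dim=-1)                                   # inferred labels

def fill_dataset_df(model, item_feats, worker_onehots, observed):
    # MMLC-df: complete the sparse label matrix; observed is the set of (i, j) with I_ij = 1.
    filled = {}
    with torch.no_grad():
        for j, w in enumerate(worker_onehots):
            logits = model(item_feats, w.unsqueeze(0).expand(item_feats.size(0), -1))
            preds = logits.argmax(dim=-1)
            for i in range(item_feats.size(0)):
                if (i, j) not in observed:
                    filled[(i, j)] = int(preds[i])          # y_hat_ij = f*(<x_i, w_j>)
    return filled  # merged with the original triples before running any existing truth inference method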
Our MMLC-owf method is a truth inference method, and we compare it with the following baselines: MV<cit.> directly uses majority voting to determine the ground truth; DS<cit.> employs a confusion matrix to characterize the labeling behavior of workers and uses the EM algorithm to infer the ground truth; HDS<cit.> simplifies the DS method by assuming that each worker has the same probability of being correct under different truth values and equal probabilities for incorrect options; FDS<cit.> is a simple and efficient algorithm based on DS, designed to achieve faster convergence while maintaining the accuracy of truth inference; Max-MIG<cit.> utilizes the EM algorithm to integrate label aggregation and classifier training; CoNAL<cit.> distinguishes between common noise and individual noise by predicting a joint worker confusion matrix using classifiers; CrowdAR<cit.> estimates worker capability features through classifier prediction and models the reliability of joint worker labels. Our truth inference framework, MMLC-df, incorporates a core component that performs data filling. We compare it with the following baselines: G_MV <cit.> utilizes truth inference results from the MV algorithm to evaluate worker ability and assign new labels accordingly; G_IRT <cit.> utilizes joint maximum likelihood estimation to estimate parameters of the IRT model, such as worker abilities and item difficulties, and generates new labels based on these parameters; TDG4Crowd <cit.> learns the feature distributions of workers and items separately using worker models and item models. An inference component is used to learn the label distribution and generate new labels. We conduct experiments using three crowdsourced datasets with item features: * LableMe <cit.>: This dataset consists of 1000 images categorized into 8 classes, with a total of 2547 labels provided by 59 workers. Each image is represented by 8192-dimensional features extracted using a pre-trained VGG-16 model. * Text <cit.>: This dataset comprises 1594 sentences extracted from the CrowdTruth corpus, categorized into 13 groups. The dataset includes 14,228 labels provided by 154 workers. Each sentence is represented by 768-dimensional features extracted using a pre-trained BERT model. * Music <cit.>: This dataset consists of 700 music compositions, each with a duration of 30 seconds, and categorized into 10 groups. It includes 2,945 labels provided by 44 workers. Each music composition is represented by 124-dimensional features extracted using the Marsyas <cit.> music retrieval tool. To accommodate the feature scales of the three experimental datasets, our model's architecture varies accordingly. For the LableMe dataset, our model employs 16 expert modules, each comprising 3 fully connected ReLU layers, with a final layer output dimension of 32. For the Text and Music datasets, we utilize 10 expert modules. Each module consists of 3 and 2 fully connected ReLU layers, with output dimensions of 32 and 16, respectively. We adopt the settings from the Max-MIG, CoNAL, and CrowdAR truth inference methods, we adopt the settings from their source code for the LableMe and Music datasets. Since there is no source code available for the Text dataset, we adopt the settings used in the LableMe dataset. Regarding the TDG4Crowd data filling algorithm, we utilize the settings from its source code. The remaining methods do not use deep network structures and rely on default settings. 
§.§ Evaluation of Oracle Worker Finding (MMLC-owf) Main Result: Our method, MMLC-owf, was evaluated alongside seven other methods through five rounds experiments. The average accuracy results are shown in Tab. <ref>. In our method, we utilized kernel density estimation (KDE) to compute the projection vector of the oracle worker in the worker spectral space. Our method, MMLC-owf, achieved the highest accuracy in the Text and Music datasets. In the LableMe dataset, it ranked second, with only a 0.4% difference from the top-performing CrowdAR method. Deep learning-based methods typically produce better results when analyzing datasets with high-dimensional item features like LableMe. In datasets with fewer features, the advantage of deep learning methods was not significant. Impact of Redundancy: We examine how varying levels of redundancy affect the accuracy of our method. Due to the varying redundancy of data items, a maximum redundancy parameter R is set. We randomly keep R labels for items with more than R labels and discard the rest. This process generates a dataset with a maximum redundancy R. By conducting five repeated experiments and averaging the accuracy and standard deviation, the results are shown in Fig. <ref>. As the average number of worker responses increases, all methods show an upward trend in their results. The analysis of various redundancy levels across the datasets indicates that higher redundancy levels are more advantageous for our method. Our method can effectively utilize worker behavior descriptions on datasets with higher redundancy but may face underfitting on datasets with lower redundancy. Clustering Methods in Oracle Worker Finding: Our method, MMLC-owf, utilizes a clustering method to determine the center of the distribution of the projection vector of workers in the worker spectral space as the projection vector of the oracle worker. Here, we examine how different clustering methods affect truth inference outcomes. We compare three clustering methods: kernel density estimation (KDE), Mean, and Median, as well as their worker-weighted variants: KDE-W, Mean-W, and Median-W. Worker weights are calculated based on the proportion of items answered by each worker relative to the total number of items, considering data imbalance. In addition, we optimize the projection vector in the worker spectral space using the ground truth of the items as an upper-bound method for clustering. The parameters of the expert modules and output modules are fixed in the pre-trained MMLC model, referred to as “Truth.” By conducting five repeated experiments and averaging the results, as shown in Fig. <ref>. For example, in the LableMe dataset, the oracle worker's projection vector is derived using KDE, KDE-W, Median, Median-W, Mean, and Mean-W. The MMLC-owf uses the oracle worker to generate ground truth. The quality of the ground truth obtained by the following five methods in each dataset is very similar, but the oracle worker generated using KDE produces the highest quality ground truth in each dataset. The “Truth" method, which represents the theoretical upper limit with clustering methods, achieved accuracy rates of 96.32%, 91.25%, and 92.97% on the three datasets respectively. The quality of the ground truth generated by oracle workers using six clustering methods still slightly deviates from theoretical upper limits. 
Worker Distribution in Worker Spectral Space: We assume that each worker is an oracle worker with random errors in their expression, so that the center of the distribution of workers projected onto the spectral space corresponds to the projection vector of the oracle worker. To validate this assumption, we employed the ISOMAP dimensionality reduction method to reduce the worker projection vectors obtained from the MMLC model to 2D, resulting in the scatter plot shown in Fig. <ref>. We calculated the accuracy of each worker on the dataset: the closer a point's color is to green, the higher the worker's accuracy, and the closer it is to red, the lower the accuracy. The plot also shows the projection obtained by the KDE method for the oracle worker, represented by blue asterisks. Although the shapes of the distributions differ across datasets, there is a noticeable trend for workers with higher accuracy to cluster closer to the projection of the oracle worker. This observation demonstrates a clear tendency towards aggregation and provides some degree of confirmation for the validity of our assumption. §.§ Evaluation of Data Filling (MMLC-df) Main Result: We compared MMLC-df with three filling methods: G_MV, G_IRT, and TDG4Crowd. We apply the eight truth inference methods from the previous section to the filled data on the three real datasets, resulting in a total of 24 scenarios. We conducted five rounds of experiments, and the mean and variance of the ground truth accuracy are presented in Tab. <ref>. Our MMLC-df framework improves on the original data in 100% of the scenarios and achieves the best improvement in 79.2% of them, whereas G_MV, G_IRT, and TDG4Crowd improve the results in 50%, 62.5%, and 75% of the scenarios, respectively. In terms of the magnitude of improvement, our method performs best on the Text dataset. While other methods may achieve better results in certain scenarios on the other datasets, their performance is unstable, and in some cases the results deteriorate. This indicates that our MMLC-df framework offers good stability and consistent performance. Impact of Data Filling's Density: Our MMLC-df framework leverages the sparsity of crowdsourced data for data filling. To clarify, we define the data density of non-empty crowdsourced data as d_𝒟∈ (0,1] with d_𝒟 = |𝒟|/(|𝒲|·|𝒳|). The data densities of the three original datasets LabelMe, Text, and Music are 0.0431, 0.0579, and 0.0956, respectively, so the original data are quite sparse. To analyze the impact of data density on the results, we gradually fill the data. We set a threshold for the number of items to be filled, denoted n_interval; for workers with items exceeding this threshold, we replace their labeled items with predicted values. By adjusting the threshold from large to small, we gradually fill the data until all workers have completed their items, achieving a data density of 1.
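The density definition and the filling procedure sketched above can be written in a few lines; the sketch below assumes a worker-by-item label matrix with -1 marking missing labels, which is our own encoding choice rather than anything specified in the paper.

```python
import numpy as np

def data_density(label_matrix):
    """Density of a crowdsourced label matrix: d = |D| / (|W| * |X|),
    where entries equal to -1 mark missing (worker, item) labels."""
    observed = np.sum(label_matrix >= 0)
    n_workers, n_items = label_matrix.shape
    return observed / (n_workers * n_items)

def fill_missing(label_matrix, predict):
    """Fill every missing entry with a predicted label, pushing the density to 1.
    `predict(worker, item)` stands in for the model's label prediction."""
    filled = label_matrix.copy()
    for w, i in zip(*np.where(filled < 0)):
        filled[w, i] = predict(w, i)
    return filled

# Toy 3-worker x 4-item matrix; -1 means "not annotated".
L = np.array([[0, -1, 1, -1],
              [-1, 1, -1, -1],
              [0, 1, -1, 0]])
print(data_density(L))                       # 6 / 12 = 0.5
F = fill_missing(L, predict=lambda w, i: 0)  # trivial stand-in predictor
print(data_density(F))                       # 1.0
```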
Because of the large amount of filled data, deep learning methods can be time-consuming, and the accuracy obtained by the various methods shows a similar trend as the density varies; therefore, we conducted this experiment using only the statistical machine-learning methods. The experiment is repeated for five rounds, and the average accuracy and standard deviation are shown in Fig. <ref>. The trends are generally consistent across all methods, but the variations differ significantly among the datasets. In the Text dataset, the accuracy stabilizes rapidly as density increases and then reaches a plateau. In the LabelMe dataset, accuracy fluctuates significantly as density increases, and higher density often improves accuracy. In the Music dataset, accuracy initially fluctuates rapidly as density increases before stabilizing. Data filling works best on the Text dataset, likely because this dataset is larger than the other two, so the filling has a more significant impact. § CONCLUSION This paper introduces a novel crowdsourced learning paradigm called MLC. Within this paradigm, we propose a feature-level worker behavior model called MMLC. Based on this model, we develop two truth inference methods: MMLC-owf, which uses oracle worker finding, and MMLC-df, a truth inference framework based on crowdsourced data filling. Experimental results demonstrate the superior performance of MMLC-owf compared to other methods. Furthermore, we assess the theoretical upper performance limit of the MMLC-owf method, demonstrating its potential to benefit from better clustering-method selection and validating its strong performance. The experiments also validate the effectiveness and stability of the MMLC-df framework in enhancing truth inference methods through crowdsourced data filling. Furthermore, we observed that our model performs better on datasets with a higher number of annotations per worker. On real crowdsourcing platforms, workers continuously engage in annotation tasks, so the average number of annotations per worker keeps increasing; consequently, our model holds significant practical value for real-world applications.
http://arxiv.org/abs/2407.13065v1
20240718002229
The Ergodic Vacuum
[ "Chris Scott" ]
physics.gen-ph
[ "physics.gen-ph" ]
The Ergodic Vacuum Chris Scott July 22, 2024 ========================================================================================================================================================================================================================================= § ABSTRACT The extension of local de Sitter thermodynamics into f(ℛ) gravity provides a new basis to unify the dielectric and electroweak vacua. We suggest the electroweak theory emerges from the ergodic mixing of charge density under local de Sitter thermodynamics and introduce the concept of lab-accessible topological defects. It is not possible for the Ward identity to fix the vacuum polarization tensor, contrary to (5.79) of <cit.>, for the following reasons: * Inherent in the Ward-Takahashi identity is the Siegert theorem.<cit.> * The Siegert theorem is invalid for toroid polarized media.<cit.> * The vacuum polarization tensor of <cit.> is subjected to an artificial gauge choice if the media's polarization tensor necessarily cannot contain a specific polarization. This is the current field-theory dilemma which we seek to rectify through an applied condensed-matter approach. Local de Sitter thermodynamics relates the scalar radius ℛ = -12R^2 to the de Sitter scaling factor R via a toroidal connection with the Riemann curvature <cit.>, in flat Cartesian space, R_αβμν = (ϵ_αβλϵ_μνλ)R^2 Similarly the FLRW metric can be represented by the toroidal de Sitter metric with a singularity horizon at radius R.<cit.> g_αβ = diag(e^Rtr_x, e^Rtr_y, e^Rtr_z, 1/RcoshRt - Rr^2/2e^Rt, 1/RsinhRt + Rr^2/2e^Rt) Running scale in a fluid picture, momentum flux terminates at the wavenumber k_max = r_0^-1, where r_0 is the minimum accessible radius of the path of a charged, massive particle. Classic treatments such as <cit.> replace momentum flux with Coulomb interactions up to 𝒪(2), consistent with the velocity-field perturbations. In contrast, accounting for the relative rotation of eddy ensembles in a fluid requires 𝒪(3) terms of the velocity (and Coulomb) fields. For this same reason the dual rotation of both EM sources and fields, termed dyality, gives rise to an extra U(1) symmetry.<cit.> At the terminal energy-cascade scale we might view a super-domain tiled with oscillator domains, similar to a spin-glass picture except that here the lattice is unconstrained. As the conjugate angular momentum approaches the terminal scale, ∼ r_0, we consider the dynamics of the non-dissipative oscillator as being in involution. For such a system with n degrees of freedom the phase space is free to explore 2n degrees of freedom and the corresponding energy surface has (2n-1) dimensions. This phase space is a manifold described by the n-torus.<cit.> Since the n-torus is orientable, and because the viscosity is defined on a 2d plane only,[The rate-of-strain tensor has the same form factor as a quadrupole] the intersection of three n-tori defines the domain of solutions to both the Hamiltonian and the change in scalar curvature ℛ as applied to the viscosity-dominated fluid. The infinitesimal total electric flux displacement is the toroidal de Sitter path on the n-th hypertorus symmetric about x_i, which has the notation (ds_n)_x_i.
The enveloping toroid moment is given by the usual integral, T⃗_i^n(m) = ( 1/10c)^n∫ (r⃗r⃗·T⃗^(n-1)(m) - 2r^2T⃗^(n-1)(m))d^3r T⃗^0(m) = ∫ (r⃗r⃗·ϕ^t - 2r^2 ϕ^t )d^3r where ϕ^t is the total current vector and r⃗ = R + 2/(e-1)n!r_0 at x_j=x_j_0-Δ x_j /2, fig (<ref>). The non-dissipative orbit of the massive particle does not leave the n-torus, so the total flux contribution to (<ref>) is invariant so long as the n-torus and the integration domain of (<ref>) share the same orientation. The contribution of the neighbouring spin domains to (<ref>) is encapsulated in an octupole moment in the plane ij, the 𝐢𝐫𝐫𝐞𝐩 of the toroid moment. This is the viscous contribution, namely the two rate-of-strain tensors centred on opposing sides at ± x_1_0 in fig (<ref>). Integrating only outside the non-dissipative torus with minor radius r_0 results in the following hypergeometric function. T⃗^n(m)_i = -2/31/10c( 2r_0/(e-1)n!)^3 ( ∑ r_n/2∑ r_(n-1)) ( 1/4 π c)^(n-1) Imposing rotation invariance on (<ref>) gives solutions for n, the toroidal hypersurface on which the path of the normalised electric-field flux E⃗ is confined. The change in n only comes from the external contributions over both Δ x_j domains of the octupole, i.e. shear contributions. Since the orientation of the toroid dipole moment and Δ n ∼ dR (fig <ref>) are the only degrees of freedom, the solutions of (<ref>) correspond to the possible toroidal de Sitter orientations and the change of scale factor R. The n-th radius r_n contributes to the change in R according to the spacing of the spin cells within the super-domain. ∑_n=0^∞ r_n = R, R = 2/(e-1)r_0, r_n = dR, Δ x ≪ R R/2, R = 4/(e-1)r_0, r_n = 1/2dR, Δ x = R It is straightforward to show that the first condition in (<ref>) maximises viscosity. With this substitution the de Sitter path becomes, ds^2_x_i = dt^2_x_i + (dR_x_i)^2e^2dR_x_i So that the intersections in (<ref>) can be written in terms of R_i only for three orthogonal toroidal de Sitter spaces. (See fig (<ref>).) (ds_n)_x_1∩ (ds_n)_x_2 = (ds_n)_x_2∩ (ds_n)_x_3 = (ds_n)_x_3∩ (ds_n)_x_1 The obvious U(1) symmetry in (<ref>) has the infinitesimal components dt_x_i and dR_x_iexp(dR_x_i), where dt_x_i is defined on closed paths symmetric about x̂_̂î. Fig <ref> shows the local turbulence solution space in which energy-minimised, all-order EM interactions determine both the orientation and polarization states. <cit.> This picture closes the Sommerfeld puzzle to the extent that both the Hamiltonian and the Lagrangian are given equal footing.[An equivalent Proca field approach would require relaxed constraints due to their decomposition under the dyality U(1) symmetry. The Vorton model achieves this decomposition and <cit.> additionally invokes the radial gauge, applicable to the local de Sitter thermodynamic model. ] The available polarization states are defined by the solutions of the PDEs in (<ref>) and (<ref>) given the usual Pauli matrices σ embedded into the Minkowski metric. The effect is twofold: a) third-order perturbations are effectuated under contraction of σ'_μν with the usual EM tensor, and b) the effective compactification is achieved via, σ'_μν = diag(1,1,1,σ) σ = [ ∂/∂ t_∥∂/∂ t_∥ ∂/∂ t_∥∂/∂ t_⊥; ∂/∂ t_⊥∂/∂ t_∥ ∂/∂ t_⊥∂/∂ t_⊥; ] with σ∈{σ_1, σ_2, σ_3}, where σ_1 = [ 0 1; 1 0; ] σ_2 = [ 0 -i; i 0; ] σ_3 = [ 1 0; 0 -1; ] The geometrical interpretation of (<ref>) is a three-sphere with radius σ. The SU(2) symmetry of the Pauli matrices acts transitively on S^3, so that the dilaton, ∂^2 X/∂ t_∥^2, X(x_1,x_2,x_3) ∈ℝ^3, and the chirality take the same sign of the compact temporal dimension.
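As a purely numerical aside (not part of the original derivation), the snippet below checks that the matrices σ_1, σ_2, σ_3 quoted above obey the standard SU(2) relation σ_iσ_j = δ_ij I + iε_ijkσ_k, which is the algebraic property invoked when letting them act transitively on S^3; the NumPy code is purely illustrative.

```python
import numpy as np

# The three Pauli matrices quoted in the text.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]
eye = np.eye(2, dtype=complex)

def levi_civita(i, j, k):
    """Totally antisymmetric symbol eps_{ijk} with eps_{012} = +1."""
    return int((i - j) * (j - k) * (k - i) / 2)

# Check sigma_i sigma_j = delta_ij I + i * eps_{ijk} sigma_k for all i, j.
for i in range(3):
    for j in range(3):
        rhs = (i == j) * eye + 1j * sum(levi_civita(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
print("Pauli (SU(2)) algebra verified")
```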
This compact dimension identifies the Kaluza-Klein mysterious 5th dimension specifically arising under the relative rotation (acceleration). The applicable transformation matrix M^αβ corresponds to the (2n-1) energy surface where the spatial and temporal components together represent electric multipoles of 𝒪(2) and 𝒪(3) respectively. This is simply the decomposition of the usual EM four-potential A=(ϕ,A⃗) and its dyality dual Π = (ϕ^(m), T⃗) into compact and transverse temporal components in (<ref>), M^αβ = ∂^α A^β - ∂^βA^α - 4π (∂^αΠ^β - ∂^βΠ^α) M^αβ is the ergodic 5 transformation matrix which conserves the total electric flux displacement in the net zero magnetic flux condition. In this condition as the penultimate angular momentum cascades into the highest accessible wave-number, viscous effects dominate and the local magnetic flux is cancelled. Setting m = c= 1, the gridlocked picture ensues when massive, charged particles transfer their momentum via Coulomb interactions only, being stationary with respect to the distance to their nearest neighbour. In this definition an ensemble of particles may be gridlocked but the ensemble subject to relative rotation. In this case we have a radiation dominated fluid for which the toroidal de Sitter flat space is appropriate. The scale parameter R can be solved numerically for a stochastic choice of polarization σ∈{σ_1,σ_2,σ_3}. (The numerical scheme is in Fig <ref> and results in Fig <ref> and Fig <ref>) The Kaluza-Klein solution assumes an intrinsic contribution to the stress energy tensor, T_μν applicable to the Einstein-Klein-Gordon equations.<cit.> In this case, g_μν≃M_μαM^α_ν/M_αβM^αβ, M_μαM^α_ν = σ'_μνg_αβM^αβ where g_αβ is the ergodic metric (<ref>) applicable to the spin domain. The general polarization spectrum applicable to the EM vacuum can be numerically solved independent of the Kaluza-Klein model, however, allowing for spectral dispersion mediated by the stress energy tensor (the transverse ergodic component), Fig <ref> with relation inset. Fig <ref> suggests transverse electromagnetic wave solutions are decoupled from the longitudinal (compact) propagating solutions due to the minimal overlap in scalar radius ∼ R at isotropic mixing. To properly grasp the dynamics of the compact solutions we turn to the Kaluza-Klein interpretation. There is a whole family of two-parameter, spherically symmetric topological solutions shown in Fig <ref> given by, (ds)^2 = -(1 - m/r/1 + m/r)^2/α (dt)^2 + (1 + m/r)^4 (1 - m/r/1 + m/r)^2(α - β - 1)/α (dr^2 + r^2 dΩ^2) + (1 - m/r/1 + m/r)^2β/α (dx_∥)^2 where x_∥ is periodic and α = √(β^2 + β + 1). Regular black hole geodesics emerge for β = 0 but the soliton solutions (β≫ 1) exhibit more exotic behaviour; they possess inertial mass but zero gravitational mass. <cit.> We suggest that hypercharge screening can occur at significantly lower energy density in the topological soliton than e.g. magnetically charged black holes <cit.> or vortons.<cit.> The condensation of W bosons as near-field EM excitations of a rotating fermionic condensate may arise under the effective polarization (F^2 = F^3, F^1 → 0) as a corollary of <cit.>. The weak coupling constant g may be (anti)screened due to the dyality rotation of EM from which SU(2) emerges. 
Returning to the Weinberg picture, F_μν^a = ∂_μ W_ν^a - ∂_ν W_μ^a + g ϵ^abc W_μ^b W_ν^c If we presume to have solved the Sommerfeld puzzle, comparing (<ref>) and (<ref>) gives the compactification, 1/2 M_αβσ'_μν = F_μν^a where the factor 1/2 arises from the case R = 4/(e-1)r_0 in Fig <ref>. Extending the relation in the cylindrical domain forms a triangular lattice structure with 3R separation. This suggests that the electroweak theory emerges when the spin-glass model has a triangular lattice arrangement in which R is the degree of freedom in (<ref>). The Kaluza-Klein theory successfully unifies gravity and electromagnetism and paves the way towards solving the cosmological constant problem.<cit.> Callan-Rubakov candidate phenomena may appear in high-shear, high-impulse thermodynamic states<cit.> rather than equilibrium baths.<cit.> We showed that the artificial gauge choice of the Ward identity is only adequate for an isotropic vacuum. The full spectrum of non-equilibrium thermodynamics requires an unambiguous treatment of the vacuum polarization tensor. Local de Sitter thermodynamics recovers the electroweak vacuum properties when the complete multipole family is considered, represented by the Pauli matrices (<ref>) compactified in the Minkowski metric.
http://arxiv.org/abs/2407.13393v1
20240718110222
Interface-induced turbulence in viscous binary fluid mixtures
[ "Nadia Bihari Padhan", "Dario Vincenzi", "Rahul Pandit" ]
physics.flu-dyn
[ "physics.flu-dyn", "nlin.CD" ]
nadia@iisc.ac.in Centre for Condensed Matter Theory, Department of Physics, Indian Institute of Science, Bangalore, 560012, India dario.vincenzi@univ-cotedazur.fr Université Côte d’Azur, CNRS, LJAD, 06100 Nice, France rahul@iisc.ac.in Centre for Condensed Matter Theory, Department of Physics, Indian Institute of Science, Bangalore, 560012, India § ABSTRACT We demonstrate the existence of interface-induced turbulence, an emergent nonequilibrium statistically steady state (NESS) with spatiotemporal chaos, which is induced by interfacial fluctuations in low-Reynolds-number binary-fluid mixtures. We uncover the properties of this NESS via direct numerical simulations (DNSs) of cellular flows in the Cahn-Hilliard-Navier-Stokes (CHNS) equations for binary fluids. We show that, in this NESS, the shell-averaged energy spectrum E(k) is spread over several decades in the wavenumber k and it exhibits a power-law region, indicative of turbulence but without a conventional inertial cascade. To characterize the statistical properties of this turbulence, we compute, in addition to E(k), the time series e(t) of the kinetic energy and its power spectrum, scale-by-scale energy transfer as a function of k, and the energy dissipation resulting from interfacial stresses. Furthermore, we analyze the mixing properties of this low-Reynolds-number turbulence via the mean-square displacement (MSD) of Lagrangian tracer particles, for which we demonstrate diffusive behavior at long times, a hallmark of strong mixing in turbulent flows. Interface-induced turbulence in viscous binary fluid mixtures Rahul Pandit July 22, 2024 ============================================================= § INTRODUCTION Additives can lead to spatiotemporal chaos in a fluid, even when the inertia of the fluid is negligible and the Reynolds number is low. The most notable instance of this is the phenomenon of elastic turbulence in polymer solutions <cit.>. When elastic polymers are added to a laminar Newtonian solvent, their stretching generates elastic stresses that can trigger instabilities eventually resulting in a chaotic flow, which is characterized by a power-law energy spectrum <cit.> and strongly intermittent fluctuations <cit.>. Similar chaotic regimes have been observed in low-inertia wormlike-micellar solutions <cit.> and in suspensions of microscopic rods <cit.> and spherical rigid particles <cit.>. In contrast to conventional hydrodynamic turbulence <cit.>, these examples of low-Re turbulence do not rely on an energy cascade through an inertial range, so their main applications are in microfluidics, where additives are employed to enhance mixing <cit.> as an alternative to passive or active mechanical perturbations <cit.>. By combining theory and direct numerical simulations (DNSs) we uncover a new type of low-Reynolds-number turbulence, which is driven by interfacial fluctuations, in viscous binary-fluid mixtures. We call this interface-induced turbulence. A good understanding of binary-fluid mixtures is crucial for modelling emulsions <cit.>, which have a wide variety of applications in the food <cit.>, cosmetics <cit.>, and pharmaceutical industries <cit.>, often in microfluidic devices, where the enhancement of mixing is of vital importance in many situations. In addition to its practical applications, the investigation of low-Re turbulence in systems other than viscoelastic fluids is of fundamental interest in nonlinear physics and fluid dynamics.
Therefore, it behooves us to explore the possibility of mixing, induced by low-inertia turbulence, in binary-fluid mixtures. We initiate such an exploration by studying a cellular flow in a two-dimensional (2D) binary-fluid system. The Cahn-Hilliard-Navier-Stokes (CHNS) partial differential equations (PDEs), which couple the fluid velocity u with a scalar order parameter ϕ that distinguishes between two coexisting phases, provide a natural theoretical framework for such flows. Our investigations, based on direct numerical simulations (DNSs), reveal an emergent nonequilibrium statistically steady state (NESS) with spatiotemporal chaos, which is induced by interfacial fluctuations that destabilize the laminar cellular flow. Thus, we find the elastic-turbulence analog for low-Re binary-fluid mixtures: this leads to a kinetic-energy spectrum E(k), spread over several decades in the wave-number k, with a power-law regime that is characterised by an exponent ≃ -4.5. By analysing the time dependence of the total kinetic-energy e(t) and its power spectrum, we characterize the transitions from the cellular flow to such turbulence, for which we demonstrate, via a scale-by-scale analysis of the kinetic energy, that there is no significant energy cascade, and therefore the chaotic dynamics is entirely driven by the interfacial stress. Furthermore, we elucidate how such interfacial stress leads to global energy dissipation, even though it is responsibe for both local injection as well as dissipation of energy. Finally, we quantify the mixing properties of interface-induced turbulence by showing that the mean-square-displacement (MSD) of Lagrangian tracers displays long-time diffusive behavior that is similar to its counterpart in inertial turbulence. § MODEL The CHNS PDEs have been used to study multi-fluid flows, which may involve droplet interactions <cit.>, the evolution of antibubbles <cit.>, and phase separation and turbulence in such flows <cit.>. The two-dimensional incompressible CHNS PDEs are <cit.>: ∂_t ϕ+ u ·∇ϕ= M ∇^2 ( δℱ/δϕ) ; ∂_t ω+ u ·∇ω= ν∇^2 ω-αω+ (∇×𝒮^ϕ)·ê_z + f^ω; ∇·u = 0 ; ω= (∇×u)·ê_z ; ν, α, and M are the kinematic viscosity, friction, and mobility, respectively. We write Eq. (<ref>) in the vorticity-streamfunction (ω-ψ) form, with u = ∇× (ψê_z) and ψ = - ∇^-2ω; the surface stress and the Landau-Ginzburg free-energy functional are, respectively, 𝒮^ϕ = -ϕ∇(δℱ/δϕ) and ℱ[ϕ, ∇ϕ] = ∫_Ω[3/16σ/ϵ(ϕ^2-1)^2 + 3/4σϵ |∇ϕ|^2]dΩ ; Ω is the spatial domain, σ is the bare surface tension, and ϵ the interfacial width. The first term in ℱ is a double-well potential with minima at ϕ = ± 1, which correspond to two bulk phases in equilibrium; the second term is the penalty for interfaces; ϕ varies smoothly across an interface. We study the CHNS PDEs (<ref>)-(<ref>) at low Re, with an initially square-crystalline array of vortical structures (a cellular flow), imposed by choosing f^ω = ê_z · (∇× f^u) = f_0 k_f [cos(k_f x) + cos(k_f y)] , with amplitude f_0 and wave number k_f. Such cellular flows have been used to examine the melting of this crystalline array by inertial, elastic, and elasto-inertial turbulence in viscoelastic fluids <cit.>. For α = 0 and ϕ( r) = 0, this system has the stationary solution ω = -ω_0 [cos(k_f x) + cos(k_f y)] ; ω_0 = f_0 /ν k_f . 
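To illustrate the single-fluid base state, the sketch below builds the cellular forcing f^ω and the corresponding cellular vorticity on a periodic 2π×2π grid and verifies that the advection term u·∇ω vanishes for this state (because ω is a function of ψ alone); the grid size and parameter values are arbitrary illustrative choices, not those of the paper's simulations.

```python
import numpy as np

# Illustrative parameters; not the values used in the paper's Table I.
N, kf, f0, nu = 128, 4, 1.0, 1.0
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Cellular forcing f^omega and the cellular vorticity of the single-fluid state.
f_omega = f0 * kf * (np.cos(kf * X) + np.cos(kf * Y))
omega0 = f0 / (nu * kf)
omega = -omega0 * (np.cos(kf * X) + np.cos(kf * Y))

# Spectral derivative on the periodic grid along the axis selected by K.
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
def ddx(f, K):
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

# For this field omega = -lap(psi) = kf^2 * psi, so psi = omega / kf^2,
# and u = (d psi / dy, -d psi / dx).
psi = omega / kf**2
u, v = ddx(psi, KY), -ddx(psi, KX)

# The advection term u . grad(omega) vanishes because omega depends on psi alone.
advection = u * ddx(omega, KX) + v * ddx(omega, KY)
print(np.max(np.abs(advection)))   # ~ machine precision
```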
The spatiotemporal evolution of this cellular flow depends on the Reynolds, Capillary, Cahn, non-dimensionalised friction, and Péclet numbers, which are, respectively, Re = UL/ν , Ca = ν U/σ , Cn = ϵ/L_0 , α' = α T , Pe = L^2U/(Mσ), with U = f_0/(ν k_f^2), L = k_f^-1, T = ν k_f/f_0, and L_0 the side of our square simulation domain. To characterize the mixing because of interface-induced turbulence, we introduce N_p tracers into the flow. For tracer i (position r^i_0 at time t_0) d r^i(t)/dt = v( r^i, t| r^i_0, t_0) = u( r^i, t) , where r^i(t) and v( r^i, t) are the position and velocity of the i-th tracer. The mean-squared displacement (MSD) is Δ r^2 (t) = ⟨ | r(t) - r(0)|^2⟩, where ⟨·⟩ denotes the average over the N_p particle trajectories. § NUMERICAL METHODS AND INITIAL CONDITIONS We carry out pseudospectral DNSs (parameters in Table I in the Supplemental Material <cit.>) of the CHNS PDEs (<ref>)-(<ref>), with periodic boundary conditions, a square (2π× 2π) box, 512^2 collocation points <cit.>, the 1/2-dealiasing scheme, and a semi-implicit exponential-time-differencing Runge-Kutta-2 method <cit.> for time integration. To resolve interfaces, we have three computational grid points in interfacial regions. We obtain v from u via bilinear interpolation at off-grid points and a first-order Euler scheme for Eq. (<ref>) <cit.>. The initial condition [Fig. <ref>(a)] comprises N_d circular droplets [We have checked explicitly that our results are independent of the initial arrangements and sizes of the droplets.]; droplet i, centered at (x_i, y_i), has radius R_i: ϕ(x, y, t=0) = ∑_i=1^N_d tanh( (R_i - √((x-x_i)^2 + (y-y_i)^2)) / ϵ ) ; ω(x,y, t=0) = 0 . In Fig. <ref>(b) we show a pseudocolor plot of ω for the cellular solution (<ref>), for the single-fluid case (α=0).
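The droplet initial condition for ϕ can be sketched as below; the radii, centres, and interface width are illustrative, and the background offset used to combine several tanh profiles (so that ϕ ≈ -1 far from all droplets) is our own implementation detail rather than something specified in the excerpt.

```python
import numpy as np

def droplet_phi(radii, centers, N=512, eps=0.02):
    """Initial order parameter: phi ~ +1 inside each droplet, -1 outside,
    with a tanh profile of width eps across every interface."""
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi = -np.ones_like(X)                      # uniform -1 background
    for (xc, yc), R in zip(centers, radii):
        r = np.sqrt((X - xc) ** 2 + (Y - yc) ** 2)
        # Each droplet contributes tanh((R - r)/eps); far away this tends to -1,
        # so we add (term + 1) to avoid stacking the -1 background repeatedly.
        phi += np.tanh((R - r) / eps) + 1.0
    return np.clip(phi, -1.0, 1.0)

phi0 = droplet_phi(radii=[0.5, 0.7], centers=[(2.0, 2.0), (4.5, 4.0)])
omega0_field = np.zeros_like(phi0)              # omega(x, y, t=0) = 0
```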
§ RESULTS We consider Re = 1 < Re_c = √(2), the single-fluid (α=0) critical Reynolds number, given the cellular forcing we use <cit.>. We choose Re < Re_c to exclude inertial instabilities, so that we can focus only on interface-induced dynamics. Our DNSs reveal that the second phase leads to interfaces whose fluctuations can destabilise this cellular flow and yield interface-induced turbulence, a NESS with spatiotemporal chaos. In Figs. <ref>(c) and (d) we present pseudocolor plots of ϕ and ω, respectively, for Ca = 0.15, which illustrate the breakdown of the cellular flow in Fig. <ref>(b) (see also the corresponding movie in the Supplemental Material <cit.>). Moreover, the time series of the rescaled total energy e(t)/e_0, with e_0=U^2, shows that, as Ca is varied, the system undergoes a non-monotonic sequence of transitions between periodic regimes and spatiotemporally chaotic NESSs at low Re (see Fig. <ref>). In the Supplemental Material <cit.>, we examine the above cellular-to-spatiotemporally-chaotic transitions via additional plots of the time series of the total energy e(t), its frequency power spectrum, and pseudocolor plots of the vorticity and the energy spectrum for a wide range of Ca. We turn now to spatiotemporal properties. In Fig. <ref>(a) we give log-log plots of the power spectrum of the total energy, |ẽ(f)|, versus the normalized frequency fT. For Ca = 0.16, this spectrum shows a single dominant peak, a signature of temporal periodicity; by contrast, for Ca = 0.15, we see a broad power spectrum, which indicates that e(t) is chaotic. In Fig. <ref>(b) we characterise the spatial distribution of the kinetic energy via a log-log plot of the shell-averaged energy spectrum E(k) versus the wave-number k (see <cit.> for the definition), in the spatiotemporally chaotic NESS for Ca = 0.15. Over a small range of k, E(k) ∼ k^-4.5 [black line in Fig. <ref>(b)]; this power-law exponent is distinct from the one for 2D fluid turbulence (with an exponent ≃ -3 in the forward-cascade regime, if there is no friction <cit.>). A spectrum steeper than k^-3 is also a characteristic feature of elastic turbulence in polymer solutions <cit.>. Unlike inertial fluid turbulence, the low-Re interface-induced turbulence we consider does not show an energy cascade. We demonstrate this via the following scale-by-scale kinetic-energy-budget equation <cit.>: ∂_t E(k, t)=T(k,t)-2 ν k^2 E(k,t)+S(k,t)+F(k,t) , where S(k, t) is the contribution of the interfacial stress, T(k,t) is the nonlinear energy transfer, and F(k,t) the energy-injection term (see <cit.> for the definitions). In the inset of Fig. <ref>(b), we present the k-dependence of the viscous contribution 2ν k^2 E(k), in blue, and, in red, the contribution of the interfacial stress, S(k); these two terms are equal for all k, except at the forcing wave-number k_f = 4. In fluid turbulence, inertia plays a pivotal role in transferring energy from the energy-injection wavenumber(s) to other wavenumbers, and T(k) is non-zero for most k. By contrast, in the interface-induced turbulence we consider, inertia is negligible, and the energy in wavenumbers other than the injection wavenumber is solely attributable to S(k), which is balanced by 2ν k^2E(k); hence, T(k) is negligibly small in Eq. (<ref>). This energy transfer by interfacial stresses is a unique property of low-Re interface-induced turbulence and distinguishes it clearly from fluid turbulence.
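For readers who want to reproduce diagnostics of this kind, the following is a minimal implementation of the shell-averaged kinetic-energy spectrum E(k) (shells k ≤ |k'| < k+1 on a periodic 2D grid); it is an illustrative sketch, not the authors' code, and is checked here on a synthetic single-mode flow.

```python
import numpy as np

def energy_spectrum(u, v, L=2.0 * np.pi):
    """Shell-averaged kinetic-energy spectrum of a 2D periodic field (u, v):
    E(k) = (1/2) * sum over k <= |k'| < k+1 of |u_hat|^2 + |v_hat|^2,
    with Fourier amplitudes normalized by N^2 (so sum_k E(k) = mean KE)."""
    N = u.shape[0]
    uh, vh = np.fft.fft2(u) / N**2, np.fft.fft2(v) / N**2
    k1 = L * np.fft.fftfreq(N, d=L / N)              # integer wavenumbers
    KX, KY = np.meshgrid(k1, k1, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2)
    dens = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)
    shells = np.arange(N // 2)
    E = np.array([dens[(kmag >= k) & (kmag < k + 1)].sum() for k in shells])
    return shells, E

# Synthetic check: the flow u = cos(4 y), v = 0 puts all its energy in shell k = 4.
N = 128
y = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
_, Y = np.meshgrid(y, y, indexing="ij")
k, E = energy_spectrum(np.cos(4.0 * Y), np.zeros((N, N)))
print(int(np.argmax(E)), float(E.sum()))  # 4, 0.25 (= mean of 0.5*cos^2)
```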
It is also useful to study the energy-budget equation de(t)/dt = ϵ_I - ϵ_ν - ϵ_ϕ ; ϵ_I = ⟨ f^u· u⟩_ x is the mean energy-injection rate, ϵ_ν = -⟨ u ·ν∇^2 u⟩_ x is the mean (viscous) energy-dissipation rate, ϵ_ϕ = -⟨ε_ϕ⟩_ x = -⟨ u · 𝒮^ϕ⟩_ x is the additional mean dissipation because of interfaces, and ⟨·⟩_ x denotes the space average. We plot ϵ_I, ϵ_ν, and ϵ_ϕ versus Ca in Fig. <ref>(c). At intermediate values of Ca, ϵ_ϕ > 0; i.e., globally, the interfacial contribution to the energy budget is dissipative. However, the interfacial stress both injects and dissipates energy locally, as we demonstrate by plotting, in the inset of Fig. <ref>(c), the probability distribution functions (PDFs) of ε_ϕ, the local dissipation because of interfaces. The fat tails of this PDF show that ε_ϕ exhibits large fluctuations that are both positive and negative. The pseudocolor plot of ε_ϕ for Ca = 0.15 in Fig. <ref>(d) also confirms that ε_ϕ is concentrated at the interface between the two fluids. Therefore, the turbulent behavior, which we uncover by the energy-budget analysis (<ref>), is attributable solely to the presence of interfaces in the flow, and is observed at intermediate values of Ca. For low values of Ca (large σ), ϵ_ϕ is low because the interfaces are so energetic that their energy surpasses the kinetic energy of the flow: thus, droplets coalesce, interfaces do not break up, and the interfacial length is minimal. For high values of Ca (low σ), the interfacial energy is so low that it hardly affects the flow, and the system retains the cellular structure of the applied force; the energy injection and viscous dissipation then balance, i.e., ϵ_I/ϵ_0 = ϵ_ν/ϵ_0 = 1, and ϵ_ϕ/ϵ_0 = 0, with ϵ_0 ≡ f_0 U. One of the intriguing properties of interface-induced turbulence is that it enhances mixing even at low Re, which makes this phenomenon of great interest for microfluidic applications. We quantify these mixing properties by investigating the dispersion of tracer particles in the flow [Eqs. (<ref>) and (<ref>)]. In Fig. <ref>(a), we depict a representative tracer trajectory in the spatiotemporally chaotic NESS for Ca = 0.15; the colorbar shows the simulation time. Initially, the particles get trapped within vortices, but, when an interface moves through these vortices, it facilitates particle transfer to other vortices. We plot the MSD [Eq. (<ref>)] versus t for Ca = 0.15, Ca = 0.16, and Ca = 0.18 in Fig. <ref>(b). For the chaotic NESSs (Ca = 0.15 and Ca = 0.18) the small- and large-t asymptotes of the MSD can be fit to the power-law form ⟨ r^2(t) ⟩∼ t^β, with short-time ballistic behavior, β = 2, and long-time diffusive behavior, β = 1, because of strong mixing via interface-induced turbulence. If the state is periodic, e.g., for Ca = 0.16, the MSD shows only ballistic behavior and then trapping into a vortical cell at longer times.
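A sketch of the MSD diagnostic follows: it computes ⟨|r(t) − r(0)|²⟩ over a set of trajectories and fits the power-law exponent β in early- and late-time windows. The trajectories below are synthetic stand-ins (a damped ballistic flight plus a weak random walk) chosen only to reproduce the β ≈ 2 → β ≈ 1 crossover; they are not DNS data.

```python
import numpy as np

def msd(traj):
    """Mean-square displacement <|r(t) - r(0)|^2> averaged over particles;
    traj has shape (n_particles, n_times, 2)."""
    disp = traj - traj[:, :1, :]
    return np.mean(np.sum(disp**2, axis=-1), axis=0)

def fit_exponent(t, m, t_min, t_max):
    """Least-squares slope of log MSD versus log t over the window [t_min, t_max]."""
    sel = (t >= t_min) & (t <= t_max)
    return np.polyfit(np.log(t[sel]), np.log(m[sel]), 1)[0]

# Synthetic trajectories mimicking the ballistic-to-diffusive crossover.
rng = np.random.default_rng(1)
n_p, n_t, dt = 200, 2000, 1e-2
t = dt * np.arange(n_t)
vel = rng.standard_normal((n_p, 1, 2))                     # initial velocities
ballistic = vel * t[None, :, None] * np.exp(-t)[None, :, None]
kicks = 0.02 * np.sqrt(dt) * rng.standard_normal((n_p, n_t, 2))
traj = ballistic + np.cumsum(kicks, axis=1)

m = msd(traj)
print(fit_exponent(t, m, 0.02, 0.2))   # ~ 2 : short-time ballistic regime
print(fit_exponent(t, m, 5.0, 19.0))   # ~ 1 : long-time diffusive regime
```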
§ CONCLUSIONS We have demonstrated how interfaces in a binary-fluid mixture can disrupt low-Re cellular flows by precipitating instabilities that lead to interface-induced turbulence, the binary-fluid analog of elastic turbulence in fluids with polymer additives <cit.>. We have explored the transitions from cellular flows, to flows with spatiotemporal crystals, and, eventually, to a NESS with interface-induced turbulence. We have characterised these states via the energy time series e(t), its frequency power spectrum |ẽ(f)|, the energy spectrum E(k), the energy budget [Eqs. (<ref>) and (<ref>)], and the MSD of Lagrangian tracers [Eqs. (<ref>) and (<ref>)]. The low-Re interface-induced turbulence that we have uncovered exhibits the following distinctive properties: (a) |ẽ(f)| is significant over a broad range of frequencies f; (b) a power-law regime with E(k) ∼ k^-4.5, with a power that is different from its counterpart in 2D fluid turbulence with no friction <cit.>; (c) a scale-by-scale energy transfer [Eq. (<ref>)] with negligible inertial contribution T(k); (d) an MSD of tracers that crosses over from ballistic to diffusive behavior, indicating strong mixing. Cellular flows have been used in experimental studies of elastic turbulence <cit.>; we therefore look forward to experimental confirmations of our predictions for low-Re interface-induced turbulence in such flows. We thank K.V. Kiran and S. Mukherjee for valuable discussions. NBP, DV, and RP thank the Indo-French Centre for Applied Mathematics (IFCAM) for support and the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme 'Anti-diffusive dynamics: from sub-cellular to astrophysical scales', when work on this paper was undertaken (EPSRC grant no. EP/R014604/1). This research was supported in part by the International Centre for Theoretical Sciences (ICTS) for the online program "Turbulence: Problems at the Interface of Mathematics and Physics" (code: ICTS/TPIMP2020/12). NBP and RP thank the Science and Engineering Research Board (SERB) and the National Supercomputing Mission (NSM), India, for support, and the Supercomputer Education and Research Centre (IISc) for computational resources. We thank NEC, India, for trial use of the SX-Aurora TSUBASA computer, on which we carried out preliminary DNSs for the 2D CHNS system.
§ MAIN DEFINITIONS We define the energy spectrum and the shell-averaged energy spectrum as ℰ(𝐤, t) = [û(𝐤, t)·û(-𝐤, t)] and E(k, t) = 1/2∑_k≤| k'|< k+1[û(𝐤', t)·û(-𝐤', t)] . In the kinetic-energy-budget equation, the transfer term, the contribution of the interfacial stress, and the energy-injection term are, respectively, T(k, t) ≡ - ℜ[∑_k≤ | k'|< k+1 û(- k', t)· P( k') ·( u ·∇ u)( k', t)], S(k, t) ≡ℜ[∑_k≤ | k'|< k+1 û(- k', t)· P( k') · 𝒮^ϕ( k', t)], and F(k, t) ≡ℜ[∑_k≤ | k'|< k+1 û(- k', t) ·f^u( k', t)].
§ SIMULATION PARAMETERS AND SUPPLEMENTARY ANALYSIS The parameters of the simulation are given in Table <ref>. Plots of the spatiotemporal properties of the flow are shown in Figs. <ref> and <ref> for different values of the Capillary number Ca.
§ VIDEO * The video showing the spatiotemporal evolution that corresponds to the pseudocolor plots in Fig. 1 is available at: <https://youtu.be/yg28KMcSZQw>
§ REFERENCES
A. Groisman and V. Steinberg, "Elastic turbulence in a polymer solution flow," Nature 405, 53 (2000).
V. Steinberg, "Elastic turbulence: an experimental view on inertialess random flow," Annual Review of Fluid Mechanics 53, 27 (2021).
S. S. Datta et al., "Perspectives on viscoelastic flow instabilities and elastic turbulence," Physical Review Fluids 7, 080701 (2022).
A. Fouxon and V. Lebedev, "Spectra of turbulence in dilute polymer solutions," Physics of Fluids 15, 2060 (2007).
V. Steinberg, "Scaling relations in elastic turbulence," Physical Review Letters 123, 234501 (2019).
Y. Jun and V. Steinberg, "Power and pressure fluctuations in elastic turbulence over a wide range of polymer concentrations," Physical Review Letters 102, 124503 (2009).
R. K. Singh, P. Perlekar, D. Mitra, and M. E. Rosti, "Intermittency in the not-so-smooth elastic turbulence," Nature Communications 15, 4070 (2024).
M. A. Fardin, D. Lopez, J. Croso, G. Grégoire, O. Cardoso, G. H. McKinley, and S. Lerouge, "Elastic turbulence in shear banding wormlike micelles," Physical Review Letters 104, 178303 (2010).
S. Majumdar and A. K. Sood, "Universality and scaling behavior of injected power in elastic turbulence in wormlike micellar gel," Physical Review E 84, 015302(R) (2011).
E. L. C. V. M. Plan, S. Musacchio, and D. Vincenzi, "Emergence of chaos in a viscous solution of rods," Physical Review E 96, 053108 (2017).
L. Puggioni, G. Boffetta, and S. Musacchio, "Enhancement of drag and mixing in a dilute solution of rodlike polymers at low Reynolds numbers," Physical Review Fluids 7, 083301 (2022).
M. Souzy, H. Lhuissier, E. Villermaux, and B. Metzger, "Stretching and mixing in sheared particulate suspensions," Journal of Fluid Mechanics 812, 611 (2017).
R. Turuban, H. Lhuissier, and B. Metzger, "Mixing in a sheared particulate suspension," Journal of Fluid Mechanics 916, R4 (2021).
U. Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, Cambridge, UK, 1995).
A. Groisman and V. Steinberg, "Efficient mixing at low Reynolds numbers using polymer additives," Nature 405, 905 (2001).
H. Aref et al., "Frontiers of chaotic advection," Reviews of Modern Physics 89, 025007 (2017).
T. M. Ho, A. Razzaghi, A. Ramachandran, and K. S. Mikkonen, "Emulsion characterization via microfluidic devices: A review on interfacial tension and stability to coalescence," Advances in Colloid and Interface Science 299, 102541 (2022).
D. Z. Gunes, "Microfluidics for food science and engineering," Current Opinion in Food Science 21, 57 (2018).
L. Gilbert, C. Picard, G. Savary, and M. Grisel, "Rheological and textural characterization of cosmetic emulsions containing natural and synthetic polymers: relationships between both data," Colloids and Surfaces A: Physicochemical and Engineering Aspects 421, 150 (2013).
M. Maeki, "Microfluidics for pharmaceutical applications," in Microfluidics for Pharmaceutical Applications (Elsevier, 2019), pp. 101–119.
C.-X. Zhao, "Multiphase flow microfluidics for the production of single or multiple emulsions for drug delivery," Advanced Drug Delivery Reviews 65, 1420 (2013).
L. Scarbolo, F. Bianco, and A. Soldati, "Coalescence and breakup of large droplets in turbulent channel flow," Physics of Fluids 27, 073302 (2015).
N. Pal, P. Perlekar, A. Gupta, and R. Pandit, "Binary-fluid turbulence: Signatures of multifractal droplet dynamics and dissipation reduction," Physical Review E 93, 063115 (2016).
A. Roccon, M. De Paoli, F. Zonta, and A. Soldati, "Viscosity-modulated breakup and coalescence of large drops in bounded turbulence," Physical Review Fluids 2, 083603 (2017).
G. Negro, L. N. Carenza, G. Gonnella, F. Mackay, A. Morozov, and D. Marenduzzo, "Yield-stress transition in suspensions of deformable droplets," Science Advances 9, eadf8106 (2023).
S. Elghobashi, "Direct numerical simulation of turbulent flows laden with droplets or bubbles," Annual Review of Fluid Mechanics 51, 217 (2019).
N. Pal, R. Ramadugu, P. Perlekar, and R. Pandit, "Ephemeral antibubbles: Spatiotemporal evolution from direct numerical simulations," Physical Review Research 4, 043128 (2022).
P. Perlekar, N. Pal, and R. Pandit, "Two-dimensional turbulence in symmetric binary-fluid mixtures: Coarsening arrest by the inverse cascade," Scientific Reports 7, 44589 (2017).
A. C. Shek and H. Kusumaatmaja, "Spontaneous phase separation of ternary fluid mixtures," Soft Matter 18, 5807 (2022).
P. Perlekar, R. Benzi, H. J. Clercx, D. R. Nelson, and F. Toschi, "Spinodal decomposition in homogeneous and isotropic turbulence," Physical Review Letters 112, 014502 (2014).
X. Fan, P. Diamond, L. Chacón, and H. Li, "Cascades and spectra of a turbulent spinodal decomposition in two-dimensional symmetric binary liquid mixtures," Physical Review Fluids 1, 054403 (2016).
X. Fan, P. Diamond, and L. Chacón, "CHNS: A case study of turbulence in elastic media," Physics of Plasmas 25 (2018).
P. Perlekar and R. Pandit, "Turbulence-induced melting of a nonequilibrium vortex crystal in a forced thin fluid film," New Journal of Physics 12, 023033 (2010).
A. Gupta and R. Pandit, "Melting of a nonequilibrium vortex crystal in a fluid film with polymers: Elastic versus fluid turbulence," Physical Review E 95, 033119 (2017).
E. L. C. V. M. Plan, A. Gupta, D. Vincenzi, and J. D. Gibbon, "Lyapunov dimension of elastic turbulence," Journal of Fluid Mechanics 822 (2017).
See the Supplemental Material.
C. Canuto, M. Y. Hussaini, A. Quarteroni, et al., Spectral Methods in Fluid Dynamics (Springer Science & Business Media, 2012).
N. B. Padhan and R. Pandit, "Activity-induced droplet propulsion and multifractality," Physical Review Research 5, L032013 (2023).
N. B. Padhan and R. Pandit, "Unveiling the spatiotemporal evolution of liquid-lens coalescence: Self-similarity, vortex quadrupoles, and turbulence in a three-phase fluid system," Physics of Fluids 35 (2023).
S. M. Cox and P. C. Matthews, "Exponential time differencing for stiff systems," Journal of Computational Physics 176, 430 (2002).
R. Benzi, L. Biferale, R. Fisher, D. Lamb, and F. Toschi, "Inertial range Eulerian and Lagrangian statistics from numerical simulations of isotropic turbulence," Journal of Fluid Mechanics 653, 221 (2010).
A. K. Verma, A. Bhatnagar, D. Mitra, and R. Pandit, "First-passage-time problem for tracers in turbulent flows applied to virus spreading," Physical Review Research 2, 033239 (2020).
K. Gotoh and M. Yamada, "Instability of a cellular flow," Journal of the Physical Society of Japan 53, 3395 (1984).
G. Boffetta and R. E. Ecke, "Two-dimensional turbulence," Annual Review of Fluid Mechanics 44, 427 (2012).
R. Pandit et al., "An overview of the statistical properties of two-dimensional turbulence in fluids with particles, conducting fluids, fluids with polymer additives, binary-fluid mixtures, and superfluids," Physics of Fluids 29 (2017).
P. Perlekar, "Kinetic energy spectra and flux in turbulent phase-separating symmetric binary-fluid mixtures," Journal of Fluid Mechanics 873, 459 (2019).
A. Alexakis and L. Biferale, "Cascades and transitions in turbulent flows," Physics Reports 767, 1 (2018).
M. K. Verma, Energy Transfers in Fluid Flows: Multiscale and Spectral Perspectives (Cambridge University Press, 2019).
B. Liu, M. Shelley, and J. Zhang, "Oscillations of a layer of viscoelastic fluid under steady forcing," Journal of Non-Newtonian Fluid Mechanics 175–176 (2012).
http://arxiv.org/abs/2407.12215v1
20240716233904
Generalization of the Fano and Non-Fano Index Coding Instances
[ "Arman Sharififar", "Parastoo Sadeghi", "Neda Aboutorab" ]
cs.IT
[ "cs.IT", "math.IT" ]
Generalization of the Fano and Non-Fano Index Coding Instances Arman Sharififar, Parastoo Sadeghi, Neda Aboutorab Received: date / Revised version: date ============================================================== § ABSTRACT Matroid theory is fundamentally connected with index coding and network coding problems. In fact, the reliance of linear index coding and network coding rates on the characteristic of a field has been demonstrated by using the two well-known matroid instances, namely the Fano and non-Fano matroids. This established the insufficiency of linear coding, one of the fundamental theorems in both index coding and network coding. While the Fano matroid is linearly representable only over fields with characteristic two, the non-Fano instance is linearly representable only over fields with odd characteristic. For fields with arbitrary characteristic p, the Fano and non-Fano matroids were extended to new classes of matroid instances whose linear representations are dependent on fields with characteristic p. However, these matroids have not been well appreciated nor cited in the fields of network coding and index coding. In this paper, we first reintroduce these matroids in a more structured way. Then, we provide a completely independent alternative proof with the main advantage of using only matrix manipulation rather than complex concepts in number theory and matroid theory. In this new proof, it is shown that while the class p-Fano matroid instances are linearly representable only over fields with characteristic p, the class p-non-Fano instances are representable over fields with any characteristic other than characteristic p. Finally, following the properties of the class p-Fano and p-non-Fano matroid instances, we characterize two new classes of index coding instances, respectively, referred to as the class p-Fano and p-non-Fano index coding, each with a size of p^2 + 4p + 3. It is proved that the broadcast rate of the class p-Fano index coding instances is achievable by a linear code over only fields with characteristic p. In contrast, it is shown that for the class p-non-Fano index coding instances, the broadcast rate is achievable by a linear code over fields with any characteristic other than characteristic p. § INTRODUCTION Founded by Whitney <cit.>, matroid theory is a discipline of mathematics in which the notions of dependence and independence relations are generalized. These concepts in matroid theory are closely related to the concepts in information theory such as entropy and the notions in linear algebraic coding theory including the dependent and independent linear vector spaces. Among the fields in coding theory, network coding and index coding are proven to be fundamentally connected to the matroid theory <cit.>. Network coding, started by the work of Ahlswede et al. <cit.>, studies the communication problem in which a set of messages must be communicated from source nodes to destination nodes through intermediate nodes. Encoding messages at the intermediate nodes can increase the overall network throughput compared to simply routing the messages <cit.>. The basic connection between matroid theory and network coding was established in <cit.>, where a mapping technique from any matroid instance to a network coding instance was presented. 
It was shown that the constraints on the column space of the matrix which is a linear representation of the matroid instance can equivalently be mapped to the column space of the global and local encoding matrices solving the corresponding network coding instance. Index coding, introduced by the work of Birk and Kol <cit.>, models a broadcast communication system in which a single server must communicate a set of messages to a number of users. Each user demands specific messages and may have prior knowledge about other messages. Encoding the messages at the server can improve the overall broadcast rate rather than sending uncoded messages <cit.>. In <cit.>, matroid theory was shown to be closely bonded with index coding by providing a systematic approach to reduce a matroid instance into an index coding instance. This method maps the constraints on the column space of the matrix which linearly represents a matroid instance to the column space of the encoding matrix solving the corresponding index coding instance. Due to the strong connection of matroid theory with network coding and index coding problems, a number of well-known matroid instances were used to prove some key limitations in both network and index coding. As an example, by mapping the non-Pappus matroid instance which does not have a scalar linear representation into its corresponding network and index coding instances, it can be shown that vector linear codes can outperform the optimal scalar linear coding in both network and index coding <cit.>. Moreover, through the Vamos matroid instance which is not linearly representable, the necessity of non-Shannon inequalities was proved for both network coding and index coding <cit.> to establish a tighter performance bound. In addition, the insufficiency of linear coding was established in <cit.> by providing two network and index coding instances which are essentially related to the Fano and non-Fano matroid instances. The dependency of linear representation of the Fano and non-Fano matroids on the characteristic of a field is the key reason for the insufficiency of linear coding. In <cit.>, for fields with arbitrary characteristic p, the Fano and non-Fano matroid instances were extended to the new classes of matroids, respectively, referred to as the class p-Fano and the class p-non-Fano matroids. Using the concept of matroid theory and number theory, it was proved that while the class p-Fano instances are linearly representable only over fields with characteristic p, the class p-non-Fano instances have linear representation over fields with any characteristic other than characteristic p <cit.>. These matroids are of significant importance to the fields of network coding and index coding as they can be mapped into new classes of network and index coding instances whose optimal linear coding rates are dependent on fields with characteristic p. However, to the best of our knowledge, these classes of matroid instances have not been studied in the community of network coding and index coding. In this paper, we first reintroduce the class p-Fano and the class p-non-Fano matroid instances in a more structured way. Then we provide a completely independent alternative proof for their linear representations' dependency on fields with characteristic p. The main advantage of our proof (proof of Theorems <ref> and <ref>) is that it uses only matrix manipulation in linear algebra rather than complex concepts in matroid theory and number theory. 
Finally, following the class p-Fano and p-non-Fano matroid instances, we characterize two new classes of index coding instances, respectively, referred to as the class p-Fano and p-non-Fano index coding, each with a size of 2p^2 + 4p + 3. For the class p-Fano index coding instances, it is proved that linear codes are optimal only over fields with characteristic p. However, for the class p-non-Fano index coding instances, we prove that linear codes are optimal over fields with any characteristic other than characteristic p. It is important to note that using the mapping methods discussed in <cit.> and <cit.> for the class p-Fano and p-non-Fano matroids results in index coding instances of size 𝒪(2^p), while the instances of p-Fano and p-non-Fano matroids presented in this paper have a size of 𝒪(p^2). The organization and contributions of this paper are summarized as follows. * Section <ref> provides a brief overview of the system model, relevant background and definitions in matroid theory, index coding. * In Section <ref>, for the two recognized categories of matroids, namely the class p-Fano and the class p-non-Fano matroid instances, we introduce a fully distinct alternative proof that exclusively employs matrix manipulation techniques. This proof demonstrates that instances of the class p-Fano matroid can only be linearly represented over fields with characteristic p, whereas instances of the class p-non-Fano matroid can be linearly represented over fields with any characteristic, except for characteristic p. * In Subsection <ref>, we characterize a new class of index coding, referred to as the class p-Fano index coding instances with a size of 2p^2+4p+3. It is proved that the broadcast rate of the class p-Fano index coding instances is achievable by linear code over only fields with characteristic p. * In Subsection <ref>, we characterize a new class of index coding, referred to as the class p-non-Fano index coding instances with a size of 2p^2+4p+3. It is proved that the broadcast rate of the class p-non-Fano index coding instances is achievable by linear code over fields with any characteristic other than characteristic p. § SYSTEM MODEL AND BACKGROUND §.§ Notation Small letters such as n denote an integer where [n]≜{1,...,n} and [n:m]≜{n,n+1,… m} for n≤ m. Capital letters such as L denote a set, with |L| denoting its cardinality. Symbols in boldface such as l and L, respectively, denote a vector and a matrix, with rank(L) and col(L) denoting the rank and column space of matrix L, respectively. A calligraphic symbol such as ℒ denotes a set whose elements are sets. We use 𝔽_q to denote a finite field of size q and write 𝔽_q^n× m to denote the vector space of all n× m matrices over the field 𝔽_q. I_n denotes the identity matrix of size n× n, and 0_n represents an n× n matrix whose elements are all zero. §.§ System Model Consider a broadcast communication system in which a server transmits a set of mt messages X={x_i^j, i∈[m], j∈ [t]}, x_i^j∈𝒳, to a number of users U={u_i, i∈[m]} through a noiseless broadcast channel. Each user u_i wishes to receive a message of length t, X_i={x_i^j, j∈[t]} and may have a priori knowledge of a subset of the messages S_i:={x_l^j, l∈ A_i, j∈[t]}, A_i⊆[m]\{i}, which is referred to as its side information set. The main objective is to minimize the number of coded messages which is required to be broadcast so as to enable each user to decode its requested message. 
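As a toy illustration of this setup (our own example, not one taken from the paper), suppose m = 3 binary messages are requested by three users with side information A_1 = {3}, A_2 = {1} and A_3 = {2}. Broadcasting the messages uncoded requires three transmissions, whereas the two coded messages y_1 = x_1 + x_2 and y_2 = x_2 + x_3 (addition over GF(2)) already allow every user to decode, so the broadcast rate of this instance is at most 2. The short script below, with function names of our own choosing, verifies this exhaustively:

```python
# Toy index coding instance (ours, not from the paper): m = 3 binary messages,
# user u_i requests x_i, side information A_1 = {3}, A_2 = {1}, A_3 = {2}.
import itertools

def encode(x):
    return [x[1] ^ x[2], x[2] ^ x[3]]          # y_1 = x1 + x2, y_2 = x2 + x3 over GF(2)

def decode(i, y, side):
    if i == 2:                                  # u_2 knows x_1:  x_2 = y_1 + x_1
        return y[0] ^ side[1]
    if i == 3:                                  # u_3 knows x_2:  x_3 = y_2 + x_2
        return y[1] ^ side[2]
    x2 = y[1] ^ side[3]                         # u_1 knows x_3:  x_2 = y_2 + x_3,
    return y[0] ^ x2                            #                 x_1 = y_1 + x_2

A = {1: {3}, 2: {1}, 3: {2}}
for bits in itertools.product([0, 1], repeat=3):
    x = {j + 1: b for j, b in enumerate(bits)}
    y = encode(x)
    for i in (1, 2, 3):
        side = {j: x[j] for j in A[i]}
        assert decode(i, y, side) == x[i]       # every user recovers its requested message
print("2 coded transmissions suffice for all 8 message realizations")
```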
An instance of index coding problem ℐ can be either characterized by the side information set of its users as ℐ={A_i, i∈[m]}, or by their interfering message set B_i=[m]\ (A_i ∪{i}) as ℐ={B_i, i∈[m]}. Given an instance of index coding problem ℐ={A_i, i∈[m]}, a (t,r) index code is defined as 𝒞_ℐ=(ϕ_ℐ,{ψ_ℐ^i}), where * ϕ_ℐ: 𝒳^mt→𝒳^r is the encoding function which maps the mt message symbol x_i^j∈𝒳 to the r coded messages as Y={y_1,…,y_r}, where y_k∈𝒳, ∀ k∈ [r]. * ψ_ℐ^i: represents the decoding function, where for each user u_i, i∈[m], the decoder ψ_ℐ^i: 𝒳^r×𝒳^|A_i|t→𝒳^t maps the received r coded messages y_k∈ Y, k∈[r] and the |A_i|t messages x_l^j∈ S_i in the side information to the t messages ψ_ℐ^i(Y,S_i)={x̂_i^j, j∈ [t]}, where x̂_i^j is an estimate of x_i^j. Given an instance of the index coding problem ℐ, the broadcast rate of a (t,r) index code 𝒞_ℐ is defined as β(𝒞_ℐ)=r/t. Given an instance of the index coding problem ℐ, the broadcast rate β(ℐ) is defined as β(ℐ)=inf_tinf_𝒞_ℐβ(𝒞_ℐ). Thus, the broadcast rate of any index code 𝒞_ℐ provides an upper bound on the broadcast rate of ℐ, i.e., β(ℐ) ≤β(𝒞_ℐ). §.§ Linear Index Code Let x=[x_1,…,x_m]^T∈𝔽_q^mt× 1 denote the message vector. Given an instance of the index coding problem ℐ={B_i, i∈[m]}, a (t,r) linear index code is defined as ℒ_ℐ=(H,{ψ_ℐ^i}), where * H: 𝔽_q^mt× 1→𝔽_q^r× 1 is the r× mt encoding matrix which maps the message vector x∈𝔽_q^mt× 1 to a coded message vector y=[y_1,…,y_r]^T∈𝔽_q^r× 1 as follows y=Hx=∑_i∈ [m]H^{i}x_i. Here H^{i}∈𝔽_q^r× t is the local encoding matrix of the i-th message x_i such that H= [[ H^{1} … H^{m} ] ]∈𝔽_q^r× mt. * ψ_ℐ^i represents the linear decoding function for user u_i, i∈[m], where ψ_ℐ^i(y, S_i) maps the received coded message y and its side information messages S_i to x̂_i, which is an estimate of the requested message vector x_i. The necessary and sufficient condition for linear decoder ψ_ℐ^i, ∀ i∈[m] to correctly decode the requested message vector x_i is rank (H^{i}∪ B_i)= rank (H^B_i) + t, where H^L denotes the matrix [[ H^{l_1} … H^{l_|L|} ] ] for the given set L={l_1,…,l_|L|}. Given an instance of index coding problem ℐ, the linear broadcast rate of a (t,r) linear index code ℒ_ℐ over field 𝔽_q is defined as λ_q(ℒ_ℐ)=r/t. Given an instance of index coding problem ℐ, the linear broadcast rate λ_q(ℐ) over field 𝔽_q is defined as λ_q(ℐ)=inf_tinf_ℒ_ℐλ_q(ℒ_ℐ). Given an instance of index coding problem ℐ, the linear broadcast rate is defined as λ(ℐ)=min_qλ_q(ℐ). The linear index code 𝒞_ℐ is said to be scalar if t=1. Otherwise, it is called a vector (or fractional) code. For scalar codes, we use x_i=x_i^1, i∈ [m], for simplicity. §.§ Graph Definitions Given an index coding instance ℐ, the following concepts are defined based on its interfering message sets, which are, in fact, related to its graph representation <cit.>. We say that set M⊆[m] is an independent set of ℐ if B_i∩ M=M\{i} for all i∈ M. Let M={i_j, j∈ [|M|]}⊆[m]. Now, M is referred to as a minimal cyclic set of ℐ if B_i_j∩ M= {[ M\{i_j, i_j+1}, j∈ [|M|-1],; ; M\{i_|M|, i_1}, j=i_|M|. ]. We say that M⊆[m] is an acyclic set of ℐ, if none of its subsets M^'⊆ M forms a minimal cyclic set of ℐ. We note that each independent set is an acyclic set as well. Let ℐ={B_i, i∈ [m]}. It can be shown that * if set [m] is an acyclic set of ℐ, then λ_q(ℐ)=β(ℐ)=m. * if set [m] is a minimal cyclic set of ℐ, then λ_q(ℐ)=β(ℐ)=m-1. Let ℳ be the set of all sets M⊆[m] which are acyclic sets of ℐ. 
Then, set M∈ℳ with the maximum size |M| is referred to as the MAIS set of ℐ, and β_MAIS(ℐ)=|M| is called the MAIS bound for λ_q(ℐ), as we always have <cit.> λ_q(ℐ)≥β_MAIS(ℐ). Equation (<ref>) establishes a sufficient condition for optimality of the linear coding rate as follows. Given an index coding instance ℐ, if λ_q(ℐ)= β_MAIS(ℐ), then the linear coding rate is optimal for ℐ. In this paper, the encoding matrix which achieves this optimal rate is denoted by H_∗. §.§ Overview of Matroid Theory A matroid instance 𝒩={f(N), N⊆ [n]} is a set of functions f: 2^[n]→{0,1,2,…} that satisfy the following three conditions: f(N) ≤ |N|, ∀ N⊆[n], f(N_1) ≤ f(N_2), ∀ N_1⊆ N_2⊆[n], f(N_1∪ N_2)+f(N_1∩ N_2) ≤ f(N_1)+f(N_2),∀ N_1, N_2⊆[n]. Here, set [n] and function f(·), respectively, are called the ground set and the rank function of 𝒩. The rank of matroid 𝒩 is defined as f(𝒩)=f([n]). Consider a matroid 𝒩 of rank f(𝒩). We say that N⊆[n] is an independent set of 𝒩 if f(N)=|N|. Otherwise, N is said to be a dependent set. A maximal independent set N is referred to as a basis set. A minimal dependent set N is referred to as a circuit set. Let sets ℬ and 𝒞, respectively, denote the set of all basis and circuit sets of 𝒩. Then, it can be shown that f(𝒩)=f(N)=|N|, ∀ N∈ℬ, f(N\{i})=|N|-1, ∀ i∈ N, ∀ N∈𝒞. We say that matroid 𝒩={f(N), N⊆ [n]} of rank f(𝒩) has a (t)-linear representation over 𝔽_q if there exists a matrix H= [[ H^{1} … H^{n} ] ], H^{i}∈𝔽_q^f(𝒩)t× t, ∀ i∈ [n], such that rank(H^N)=f(N)t, ∀ N⊆ [n], where H^N denotes the matrix [[ H^{n_1} … H^{n_|N|} ] ] for the given set N={n_1,…,n_|N|}⊆[n]. Since the matroid matrix H is related to the encoding matrix in index coding and network coding, we assume that each submatrix H^{i}, i∈ [n] is invertible (which is a necessary condition for encoding matrix <cit.>). Now, based on Definitions <ref> and <ref>, the concepts of basis and circuit sets can also be defined for matrix H. Let N⊆ [n]. We say that N is an independent set of H, if rank(H^N)=|N|t, otherwise N is a dependent set of H. The independent set N is a basis set of H if rank(H)=rank(H^N)=|N|t. The dependent set N is a circuit set of H if rank(H^N\{j})=rank(H^N)=(|N|-1)t, ∀ j∈ N, which requires that H^{j}=∑_i∈ N\{j}H^{i}M_j,i, where each M_j,i is invertible. If matroid 𝒩 has a linear representation with t=1, it is said that 𝒩 has a scalar linear representation. Otherwise, the linear representation is called a vector representation. §.§ The Fano and non-Fano Matroid Instances 𝒩_F and 𝒩_nF Consider the matroid instance 𝒩_F={(N,f(N)), N⊆[n]} with n=7 and f(𝒩_F)=3. Now, matroid 𝒩_F is referred to as the Fano matroid instance if set N_0=[3] is a basis set, and the following sets N_i, i∈ [7] are circuit sets. N_1 ={1,2,4}, N_2 ={1,3,5}, N_3 ={2,3,6}, N_4 ={1,6,7}, N_5 ={2,5,7}, N_6 ={3,4,7}, N_7 ={4,5,6}. The Fano matroid instance 𝒩_F is linearly representable over field 𝔽_q iff (if and only if) field 𝔽_q does have characteristic two. Consider the matroid instance 𝒩_nF={(N,f(N)), N⊆[n]} with n=7 and f(𝒩_nF)=3. Now, matroid 𝒩_nF is referred to as the non-Fano matroid instance if each set N_0=[3] and N_7={4,5,6} is a basis set, and sets N_i, i∈ [6] in (<ref>) are all circuit sets. The non-Fano matroid instance 𝒩_nF is linearly representable over 𝔽_q iff field 𝔽_q has odd characteristic (i.e., any characteristic other than characteristic two). It is worth noting that the Fano and non-Fano matroid instances are almost exactly the same, only differing in the role of set N_7={4,5,6}. 
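The characteristic dependence stated in the two representability claims above can be illustrated numerically. The following sketch (our own helper code, not part of the original development) takes the 3x7 matrix whose columns are e_1, e_2, e_3, e_1+e_2, e_1+e_3, e_2+e_3 and e_1+e_2+e_3, and computes the rank of every set N_0, N_1, ..., N_7 over GF(2) and GF(3):

```python
# Our own numerical illustration (not from the paper): the columns of H below are
# e1, e2, e3, e1+e2, e1+e3, e2+e3, e1+e2+e3, indexed 1..7 as in the sets N_1,...,N_7.
def rank_mod(M, q):
    """Rank of an integer matrix over the prime field GF(q), by Gaussian elimination."""
    A = [[x % q for x in row] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], q - 2, q)                      # pivot inverse (q prime)
        A[r] = [(x * inv) % q for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                A[i] = [(A[i][j] - A[i][c] * A[r][j]) % q for j in range(cols)]
        r += 1
    return r

H = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]
N = {0: [1, 2, 3], 1: [1, 2, 4], 2: [1, 3, 5], 3: [2, 3, 6],
     4: [1, 6, 7], 5: [2, 5, 7], 6: [3, 4, 7], 7: [4, 5, 6]}
for q in (2, 3):
    ranks = {i: rank_mod([[row[j - 1] for j in S] for row in H], q) for i, S in N.items()}
    print(f"GF({q}):", ranks)
# GF(2): N_0 has rank 3 (basis) and N_1,...,N_7 all have rank 2 (circuits)   -> Fano.
# GF(3): N_1,...,N_6 still have rank 2, but N_7 = {4,5,6} has rank 3 (basis) -> non-Fano.
```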
While set N_7 is a circuit set for the Fano matroid, it is a basis set for the non-Fano matroid. § THE CLASS P-FANO AND P-NON-FANO MATROID INSTANCES §.§ The Class p-Fano Matroid Instances 𝒩_F(p) Matroid instance 𝒩_F(p) is referred to as the class p-Fano matroid instance if (i) n=2p+3, f(𝒩_F(p))=p+1, (ii) N_0=[p+1]∈ℬ and (iii) N_i∈𝒞, ∀ i∈ [n], where N_i= {[ ([p+1]\{i})∪{n-i}, i∈ [p+1],; ; {i, n-i, n}, i∈ [p+2:2p+2],; ; [p+2:2p+2], i=n=2p+3. ]. [𝒩_F(2)] It can be verified that 𝒩_F(2)=𝒩_F. In other words, the Fano matroid instance is a special case of the class p-Fano matroid for p=2. [𝒩_F(3) <cit.>] Consider the class p-Fano matroid for p=3. It can be seen that for matroid 𝒩_F(3), n=9, f(𝒩_F(3))=4, set N_0=[4] is a basis set, and the following sets N_i, i∈ [9] are circuit sets: N_1 ={1,2,3,5}, N_2={1,2,4,6}, N_3={1,3,4,7}, N_4 ={2,3,4,8}, N_5={1,8,9}, N_6={2,7,9}, N_7 ={3,6,9}, N_8={4,5,9}, N_9={5,6,7,8}. The class p-Fano Matroid 𝒩_F(p) is linearly representable over 𝔽_q iff field 𝔽_q has characteristic p. The proof can be concluded from Propositions <ref> and <ref>, which establish the necessary and sufficient conditions, respectively. We provide a completely independent alternative proof of Proposition <ref>, which exclusively relies on matrix manipulations instead of the more involved concepts of number theory and matroid theory used in <cit.> and <cit.>. The class p-Fano matroid 𝒩_F(p) is linearly representable over 𝔽_q only if field 𝔽_q does have characteristic p. Since each N_i, i∈ [n] in (<ref>) is a circuit set, according to (<ref>), we have H^{n-i}=∑_j∈ [p+1]\{i}H^{j}M_n-i,j, i∈ [p+1], H^{n}=H^{i}M_n,i+ H^{n-i}M_n, n-i, i∈ [p+1], H^{2p+2}=∑_i∈ [p+1]\{1}H^{n-i}M_2p+2,n-i, where all the matrices M_n-i,j, j∈ [p+1]\{i}, i∈ [p+1], M_n,i, M_n, n-i, i∈ [p+1] and M_2p+2,n-i, i∈ [p+1]\{1} are invertible. Now, we replace H^{n-i} in (<ref>) with its equal term in (<ref>), leading to H^{n} =H^{i}M_n,i + ( ∑_j∈ [p+1]\{i}H^{j}M_n-i,j ) M_n, n-i =[H^{1},…,H^{i-1},H^{i},H^{i+1},…, H^{p+1}] ×[ M_n-i,1 M_n,n-i; ⋮; M_n-i,i-1 M_n,n-i; ; M_n,i; ; M_n-i,i+1 M_n,n-i; ⋮; M_n-i,p+1 M_n,n-i ]_M_i^∗ =H^[p+1]M_i^∗. Thus, we have H^{n}=H^[p+1]M_i^∗, ∀ i∈ [p+1]. Since H^[p+1] is full-rank, we must have M_1^∗=M_2^∗=… =M_p+1^∗. Consequently, for any i∈ [p+1], (<ref>) will lead to M_n,i=M_n-j,i M_n,n-j, ∀ i,j∈ [p+1], i≠ j, ⇒ M_n,i^'=M_n-j,i^' M_n,n-j=M_n-l,i^' M_n,n-l, ∀ i^',j,l∈ [p+1], i^', j, l are distinct, ⇒ M_n,n-l=M_n-l,i^'^-1 M_n-j,i^' M_n,n-j, ∀ i^',j,l∈ [p+1], i^', j, l are distinct, ⇒ M_n-l,i_1^-1 M_n-j,i_1M_n,n-j=M_n-l,i_2^-1 M_n-j,i_2M_n,n-j, ∀ i_1, i_2, j, l∈ [p+1], i_1, i_2, j, l are distinct, ⇒ M_n-l,i_1^-1 M_n-j,i_1=M_n-l,i_2^-1 M_n-j,i_2, ∀ i_1, i_2, j, l∈ [p+1], i_1, i_2, j, l are distinct. On the other hand, by replacing H^{n-i} in (<ref>) with its equal term in (<ref>), we have H^{2p+2} = ∑_i∈ [p+1]\{1}H^{n-i}M_2p+2,n-i =∑_i∈ [p+1]\{1} (∑_j∈ [p+1]\{i}H^{j}M_n-i,j )M_2p+2,n-i = H^{1} ∑_i∈ [p+1]\{1}M_n-i,1M_2p+2,n-i + ∑_j∈ [p+1]\{1}H^{j}∑_i∈ [p+1]\{1,j}M_n-i,j M_2p+2,n-i. Moreover, in (<ref>) for i=1, we have (note n-1=2p+2) H^{2p+2} =∑_j∈ [p+1]\{1}H^{j}M_2p+2,j =H^{1}×0 +∑_j∈ [p+1]\{1}H^{j}M_2p+2,j. Now, since both (<ref>) and (<ref>) are equal to H^{2p+2}, and each H^{j}, ∈ [p+1] is linearly independent, the coefficients of H^{j}, ∈ [p+1] must be equal. Thus, ∑_i∈ [p+1]\{1}M_n-i,1M_2p+2,n-i=0, ∑_i∈ [p+1]\{1,j}M_n-i,j M_2p+2,n-i=M_2p+2,j, ∀ j∈ [p+1]\{i}. 
From (<ref>), for j_1, j_2∈ [p+1]\{i}, j_1≠ j_2, we have ∑_i∈ [p+1]\{1,j_1}M_n-i,j_1 M_2p+2,n-i =M_2p+2,j_1, ∑_i∈ [p+1]\{1,j_2}M_n-i,j_2 M_2p+2,n-i =M_2p+2,j_2. Now, for l∈ [p+1]\{1,j_1,j_2}, multiplying (<ref>) and (<ref>), respectively, by M_n-l,j_1^-1 and M_n-l,j_2^-1 will lead to ∑_i∈ [p+1]\{1,j_1}M_n-l,j_1^-1 M_n-i,j_1 M_2p+2,n-i= M_n-l,j_1^-1 M_2p+2,j_1, ∑_i∈ [p+1]\{1,j_2}M_n-l,j_2^-1 M_n-i,j_2 M_2p+2,n-i= M_n-l,j_2^-1 M_2p+2,j_2. Now, according to (<ref>), for i_1=j_1, j=1 and i_2=j_2, we have M_n-l,j_1^-1 M_2p+2,j_1=M_n-l,j_2^-1 M_2p+2,j_2. Thus, the left hand side (LHS) of (<ref>) and (<ref>) are equal. Hence, after removing common terms on the LHS of (<ref>) and (<ref>), we get M_n-l,j_2^-1 M_n-j_1,j_2 M_2p+2,n-j_1= M_n-l,j_1^-1 M_n-j_2,j_1 M_2p+2,n-j_2. Now, based on (<ref>), for i_1=j_2, j = j_1, i_2=1, we have M_n-l,j_2^-1 M_n-j_1,j_2=M_n-l,1^-1 M_n-j_1,1 and for i_1=j_1, j=j_2, i_2=1, we have M_n-l,j_1^-1 M_n-j_2,j_1=M_n-l,1^-1 M_n-j_2,1. Thus, (<ref>) will lead to M_n-l,1^-1 M_n-j_1,1 M_2p+2,n-j_1= M_n-l,1^-1 M_n-j_2,1 M_2p+2,n-j_2. Therefore, M_n-j_1,1 M_2p+2,n-j_1 = M_n-j_2,1 M_2p+2,n-j_2, which means that all the terms M_n-i,1M_2p+2,n-i, i∈ [p+1]\{1} in (<ref>) are equal. Let M_n-i,1M_2p+2,n-i=T, i∈ [p+1]\{1}. Then, ∑_i∈ [p+1]\{1}M_n-i,1M_2p+2,n-i = (∑_i∈ [p+1]\{1}I_t ) T=0. Now, since matrix T is invertible, (<ref>) requires that ∑_i∈ [p]I_t = 0, which is possible only over fields with characteristic p. This completes the proof. In the following, Lemma <ref> is provided for proving Propositions <ref> and <ref>. For the following matrix, if 𝔽_q has characteristic p, then [p+1] is a circuit set. Otherwise, [p+1] is a basis set. [ 1 1 1 1 1 0; 1 1 1 1 0 1; 1 1 1 0 1 1; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 1 1 0 1 1 1; 1 0 1 1 1 1; 0 1 1 1 1 1 ]∈𝔽_q^(p+1)× (p+1). It can easily be observed that running the Gaussian elimination technique on the first column shows that the first p columns are linearly independent, as it results in [ 1 1 1 1 1 0; 0 0 0 0 -1 1; 0 0 0 -1 0 1; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 0 0 -1 0 0 1; 0 -1 0 0 0 1; 0 1 1 1 1 1 ]. Running the Gaussian elimination on the last row will lead to [ 1 1 1 1 1 0; 0 0 0 0 -1 1; 0 0 0 -1 0 1; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 0 0 -1 0 0 1; 0 -1 0 0 0 1; 0 0 0 0 0 1+(p-1) ]. Now, it can be seen that if the characteristic of field 𝔽_q is p, then 1+(p-1)=p=0, which means that the last column is linearly dependent on the first p columns, and thus, set [p+1] is a circuit set. However, if the characteristic of 𝔽_q is not p, then 1+(p-1)=p≠ 0, which means that the last column is also a pivot column, and thus, [p+1] is a basis set. The class p-Fano matroid 𝒩_F(p) has scalar (t=1) linear representation over fields with characteristic p. We show that matrix H_p∈𝔽_q^(p+1)× n, shown in Figure <ref>, is a scalar linear representation of 𝒩_F(p) if field 𝔽_q does have characteristic p. * It can be seen that set N_0=[p+1] is a basis set of H_p as rank(H_p^[p+1])=rank(I_p+1)=p+1. * Now, we show that each set N_i, i∈ n in (<ref>) is a circuit set of H_p. First, we begin with sets N_i=([p+1]\{i})∪{n-i}, i∈ [p+2:2p+2]. Since set [p+1] is a basis set of H_p, set [p+1]\{i} forms an independent set of H_p. Moreover, we have H_p^{n-i}= ∑_j∈ ([p+1]\{i})I_p+1^{j}=∑_j∈ ([p+1]\{i})H_p^{j}. Thus, based on (<ref>), each set N_i=([p+1]\{i})∪{n-i}, i∈ [p+2:2p+2] is a circuit set. * For sets N_i={i, n-i, n}, i∈ [p+2:2p+2], from Figure <ref>, it can be observed that set {i, n-i} is an independent set of H_p. 
Furthermore, we have H_p^{n} =∑_j∈ [p+1]I_p+1^{j} =I_p+1^{i} + ∑_j∈ [p+1]\{i}I_p+1^{j} =H_p^{i}+H_p^{n-i}. Thus, based on equation (<ref>), each set {i, n-i, n} is a circuit set of H_p. * Finally, based on Lemma <ref>, set [p+2:2p+2] is a circuit set of H_p over the fields with characteristic p. This completes the proof. §.§ The Class p-non-Fano Matroid Instances 𝒩_nF(p) Matroid 𝒩_nF(p) is referred to as the class p-non-Fano instance if (i) n=2p+3, f(𝒩_nF(p))=p+1, (ii) N_0=[p+1]∈ℬ, N_n=[p+2:2p+2]∈ℬ and (iii) N_i∈𝒞, ∀ i∈ [n-1] where N_i= {[ ([p+1]\{i})∪{n-i}, i∈ [p+1], ; ; {i, n-i, n}, i∈ [p+2:2p+2]. ]. It is worth noting that the class p-Fano and the class p-non-Fano matroid instances are almost exactly the same, only differing in the role of set N_n=[p+2:2p+2]. While set N_n is a circuit set for the class p-Fano matroid, it is a basis set for the class p-non-Fano matroid. [𝒩_nF(2)] It can be verified that 𝒩_nF(2)=𝒩_nF. In other words, the non-Fano matroid instance is a special case of the class p-non-Fano matroid for p=2. [𝒩_nF(3) <cit.>] Consider the class p-non-Fano matroid for p=3. It can be seen that for matroid 𝒩_nF(3), n=9, f(𝒩_nF(3))=4, sets N_0=[4] and N_9={5,6,7,8} are basis sets, and sets N_i, i∈ [8] in (<ref>) are all circuit sets. The class p-non-Fano matroid 𝒩_nF(p) is linearly representable over 𝔽_q if and only if field 𝔽_q does have any characteristic other than characteristic p. The proof can be concluded from Propositions <ref> and <ref>, which establish the necessary and sufficient conditions, respectively. We provide a completely independent alternative proof of Proposition <ref>, which exclusively relies on matrix manipulations instead of the more involved concepts of number theory and matroid theory used in <cit.> and <cit.>. The class p-non-Fano matroid 𝒩_nF(p) is linearly representable over 𝔽_q only if field 𝔽_q does have any characteristic other than characteristic p. Since all sets N_i, i∈{0}∪ [n-1] in 𝒩_nF(p) are exactly the same as the sets N_i, i∈{0}∪ [n-1] in 𝒩_F(p), (<ref>) will also hold for 𝒩_nF(p). Now, since each M_n-j,i is invertible, from (<ref>), the column space of all M_n,i's for i∈ [p+1] will be equal. Thus, we have rank [M_n,1 | M_n,2 | | M_n,p+1 ]= rank [M_n, i ], which requires each matrix M_n,i, i∈ [p+1] to be invertible, since otherwise it causes H^{n} to be non-invertible (which leads to a contradiction based on Remark <ref>). Now, according to (<ref>), we have (p-1)M_n,i=(p-1)M_n-j,i M_n,n-j= ∑_l∈ [p+1]\{i,j}M_n-l,i M_n,n-l, ∀ i,j∈ [p+1], i≠ j. Moreover, based on (<ref>), since the terms M_n-l,i M_n,n-l, l∈ [p+1]\{i} are all equal, then assuming that the field has characteristic p will result in 0_t=∑_l∈ [p+1]\{i}M_n-l,i M_n,n-l. Now, combining (<ref>) and (<ref>) will lead to (p-1) [ M_n-(p+1),1; M_n-(p+1),2; ⋮; M_n-(p+1),p; 0_t ]M_n, n-(p+1) = [ 0_t; M_n-1,2; ⋮; M_n-1,p; M_n-1,(p+1) ]M_n, n-1 + … + [ M_n-p,1; M_n-1p,2; ⋮; 0_t; M_n-p,(p+1) ]M_n, n-p. Then, from left, we multiply all the terms in (<ref>) by the vector [H^{1},…, H^{n}], which according to (<ref>), it will lead to (p-1)H^{n-(p+1)}M_n, n-(p+1)= H^{n-1}M_n,n-1+… + H^{n-p}M_n, n-p. Since each M_n, n-i, i∈ [p+1] is invertible, based on (<ref>) and (<ref>), it implies that set [p+2:2p+2] forms a circuit set, which contradicts the fact that set N_n=[p+2:2p+2] is a basis set of 𝒩_nF(p). This completes the proof. The class p-non-Fano matroid 𝒩_nF(p) has scalar (t=1) linear representation over fields with any characteristic other than characteristic p. 
With the same arguments in the proof of Proposition <ref>, it can be shown that for matrix H_p∈𝔽_q^(p+1)× n, shown in Figure <ref>, set N_0 is a basis set, and each set N_i, i∈ [n-1] is a circuit set of H_p. Moreover, based on Lemma <ref>, set N_n=[p+2:2p+2] is a basis set over fields with any characteristic other than p. This completes the proof. § THE CLASS P-FANO AND P-NON-FANO INDEX CODING INSTANCE §.§ On the Reduction Process from Index Coding to Matroid In this subsection, Lemmas <ref>-<ref> establish reduction techniques to map specific constraints on the column space of the encoder matrix of an index coding instance to the constraints on the column space of the matrix, which is a linear representation of a matroid instance. In this subsection, we assume that M⊆ [m], i,l∈ M, and j∈ [m]\ M. Assume M is an acyclic set of ℐ. Then, the condition in (<ref>) for all i∈ M requires rank (H^M)=|M|t, implying that M must be an independent set of H. Let M be a minimal cyclic set of ℐ. To have rank(H^M)=(|M|-1)t, M must be a circuit set of H. Assume M is an independent set of ℐ, and j∈ B_i,∀ i∈ M\{l} for some l∈ M. Then, if col(H^{j})⊆col(H^M), we must have col(H^{j})=col(H^{l}). Let M⊆[m] and j∈ [m]\ M. Assume that * M is an independent set of H, * col(H^{j})⊆col(H^M), * M forms a minimal cyclic set of ℐ, * j∈ B_i, ∀ i∈ M. Now, the condition in (<ref>) for all i∈ [m] requires set {j}∪ M to be a circuit set of H. Now, we provide Lemmas <ref> and <ref>, which will both be used in the proof of Proposition <ref>. Lemma <ref> generalizes Lemma 5 in <cit.>. Assume for matrix H∈𝔽_q^(p+1)t× nt, * set [p+1] is a basis set, * each set [p+1]\{i}∪{n-i}, i∈ [p+1] is a circuit set, * col(H^{n})⊆col(H^{i,n-i}), ∀ i∈ [p+1]. Then, each set {i, n-i, n}, i∈ [p+1] will be a circuit set. Let i∈ [p+1]. Since col(H^{n})⊆col(H^{i,j-i}), we have H^{n}=H^{i}M_n,i+H^{n-i}M_n,n-i. Besides, since set [p+1]\{i}∪{n-i} is a circuit set, we must have H^{n-i}=∑_j∈ [p+1]\{i}H^{i}M_n-i,j, where each matrix M_n-j,j, j∈ [p+1]\{i} is invertible. Thus, based on (<ref>) and (<ref>), we have H^{n} = H^{i}M_n,i+(∑_j∈ [p+1]\{i}H^{j}M_n-i,j)M_n,n-i =H^{i}M_n,i+ ∑_j∈ [p+1]\{i}H^{j}M_n-i,j^', where M_n-i,j^' = M_n-i,jM_n,n-i, j∈ [p+1]\{i}. Now, let l∈ [p+1]\{i}. Since set [p+1]\{l} is a circuit set, we have H^{n-l}=∑_j∈ [p+1]\{l}H^{j}M_n-l,j, where each M_n-l,j is invertible. Now, for set {l,n-l,l}, we have rank(H ^{l,n-l,n}) =rank([H^{l}|H^{n-l}|H^{n}]) =rank([H^{l}|H^{n-l}|H^{n}-H^{l}M_n-i,l^']) =t +rank([H^{n-l}|H^{n}-H^{l}M_n-i,l^']). Now, since col(H^{n}) must be a subspace of col(H^{l,n-l,n}), we must have rank(H^{l,n-l,n})=2t. Thus, in (<ref>), H^{n}-H^{l}M_n-i,l^' must be linearly dependent on H^{n-l}, which based on (<ref>) and (<ref>) requires each M_n,i, M_n-i,j^', j∈ [p+1]\{i,l} to be invertible. Moreover, each M_n-i,j^', j∈ [p+1]\{i,l} is invertible only if M_n,n-i is invertible. Thus, since both M_n,i and M_n,n-i are invertible, set {i,n-i,n}, i∈ [p+1] forms a circuit set of H. Let M_1⊂ [m] and M_2⊂ [m]. Assume set M_1 is an acyclic set of ℐ and M_2⊂ B_i for all i∈ M_1. Now, if rank(H^M_1∪ M_2)≤ kt where |M_1| < k ≤ |M_1|+|M_2|, then we must have rank(H^M_2)≤ (|M_1| - k) t. Let M_1={i_1, …, i_|M_1|}. Applying the decoding condition (<ref>) to each i_1, …, i_|M_1| will result in rank(H^{i_1, …, i_|M_1|}∪ M_2) = rank(H^{i_1}) + rank(H^{i_2, …, i_|M_1|}∪ M_2)= ⋮ rank(H^{i_1}) + …rank(H^{i_|M_1|}) + rank(H^M_2) = |M_1|t + rank(H^M_2). Now, if rank(H^M_1∪ M_2)≤ kt, then we will have rank(H^M_2)≤ (k - |M_1|) t. 
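Before constructing the corresponding index coding instances, the characteristic dependence established by the propositions above can also be checked numerically for small p. The sketch below (our own code, with the same Gaussian-elimination helper as in the earlier sketch) builds the matrix H_p used in the proofs above (columns e_1, ..., e_{p+1}, followed by the columns that sum all e_j with j different from n-c for c in [p+2:2p+2], and finally the all-ones column) and reports the rank of every set N_0, ..., N_n over a field of characteristic p and over one of a different characteristic:

```python
# Our own numerical check (not part of the paper) of the matrix H_p used in the proofs:
# columns 1..p+1 are e_1,...,e_{p+1}; column c for c in [p+2 : 2p+2] is the sum of all
# e_j with j != n-c; column n = 2p+3 is the all-ones vector.
def rank_mod(M, q):
    """Rank of an integer matrix over GF(q), q prime, by Gaussian elimination."""
    A = [[x % q for x in row] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], q - 2, q)
        A[r] = [(x * inv) % q for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                A[i] = [(A[i][j] - A[i][c] * A[r][j]) % q for j in range(cols)]
        r += 1
    return r

def H_columns(p):
    n = 2 * p + 3
    e = lambda i: [1 if j == i else 0 for j in range(1, p + 2)]
    cols = {i: e(i) for i in range(1, p + 2)}                       # e_1 .. e_{p+1}
    for c in range(p + 2, 2 * p + 3):                               # sum of e_j, j != n-c
        cols[c] = [0 if j == n - c else 1 for j in range(1, p + 2)]
    cols[n] = [1] * (p + 1)                                         # all-ones column
    return cols, n

def check(p, q):
    cols, n = H_columns(p)
    sub = lambda S: [[cols[c][row] for c in S] for row in range(p + 1)]
    N = {0: list(range(1, p + 2)), n: list(range(p + 2, 2 * p + 3))}
    for i in range(1, p + 2):
        N[i] = [j for j in range(1, p + 2) if j != i] + [n - i]
    for i in range(p + 2, 2 * p + 3):
        N[i] = [i, n - i, n]
    return {i: rank_mod(sub(S), q) for i, S in N.items()}

p = 3
print("char 3:", check(p, 3))
print("char 5:", check(p, 5))
# char 3: N_0 -> 4, N_1..N_4 -> 3, N_5..N_8 -> 2, N_9 -> 3  (N_9 is a circuit: p-Fano)
# char 5: identical, except N_9 -> 4                         (N_9 is a basis: p-non-Fano)
```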
§.§ The Class p-Fano Index Coding Instance Consider the following class of index coding instances ℐ_F(p)={B_i, i∈ [m]}, [m]=[n] ∪_l∈ [p+1] Z_l∪ Z^'∪_l∈ [p+1] Z_l^'', where n=2p+3 and Z_l= {z(l,j), j∈ [p+1]∪{l}}, z(l,j)=(p+1)(l+1)+1+j, l,j∈ [p+1], Z^'= {z^'(j), j∈ [p+1]}, z^'(j)= p^2+4p+4+j, j∈ [p+1], Z_l^''= {z^''(l,j), j∈ [p-2]}, p>2, z^''(l,j)=p^2+(l+4)p+7-2i+j, l∈ [p+1], j∈ [p-2]. The interfering message sets B_i, i∈ [n] are as follows. B_i=([p+1]\{i})∪{n-i}∪_l∈ [p+1] Z_l\{z(l,i)}, i∈ [p+1], B_i=[p+2:2p+2]\{i,i+1}, i∈ [p+2:2p], B_i=[p+2:2p+2]\{i,1}, i = 2p + 1, B_2p+2=[p+2:2p+2]\{2p+2,p+2}, B_n = [p+2:2p+2]; The interfering message sets B_i, i∈ Z_l, l∈ [p+1] are as follows B_z(l,l)= ∅, l∈ [p+1], B_z(l,j)= (Z_l\{z(l,l), z(l,j), z(l, j+1)}) ∪{n-l}, j ∈ [p]\{l,l-1}, l ∈ [P+1], B_z(l,l-1)=(Z_l\{z(l,l), z(l,l-1), z(l, l+2)}) ∪{n-l}, l∈ [p]\{1}, B_z(l,l-1)=(Z_l\{z(l,l), z(l,l-1), z(l, 1)}) ∪{n-l}, l=p+1, B_z(l,p+1)=(Z_l\{z(l,l), z(l,p+1), z(l, 2)}) ∪{n-l}, l=1, B_z(l,p+1)=(Z_l\{z(l,l), z(l,p+1), z(l, 1)}) ∪{n-l}, l∈ [p+1]\{1,p+1}. The interfering message sets B_i, i∈ Z^' are as follows B_z^'(l) ={n-l, n, z(l,l)}∪ Z_l^'', l∈ [p+1]. The interfering message sets B_i, i∈ Z_l^'', l∈ [p+1] are as follows B_z^''(l,j) = {n-l, n, z(l,l) }∪{z^'(l)}∪ (Z_l^''\{z^''(l,j)} ), l∈ [p+1], j∈ [p-2]. We refer to such ℐ_F(p) as the class p-Fano index coding instance. It can be seen that the total number of users is m = n + ∑_l∈ [p+1] |Z_l| + |Z_l^'| + ∑_l∈ [p+1] |Z_l^''| = (2p+3) + (p+1)(p+1) + (p+1) + (p+1)(p-2) =2p^2+4p+3. [ℐ_F(2)] Consider the class 2-Fano index coding instance ℐ_F(2)={B_i, i∈ [m]}, which is characterized as follows Z_1={z(1,1)=8,z(1,2)=9,z(1,3)=10}, Z_2={z(2,1)=11,z(2,2)=12,z(2,3)=13}, Z_3={z(3,1)=14,z(3,2)=15,z(3,3)=16}, Z^'={z^'(1)=17 ,z^'(2)= 18,z^'(3)= 19}, m= n + |Z_1| +|Z_2| +|Z_3| + |Z^'|= 7 + 3 +3+3+3=19. B_1=([3]\{1})∪{6}∪ [8:16]\{8,11,14}, B_2=([3]\{2})∪{5}∪ [8:19]\{9,12,15}, B_3=([3]\{3})∪{4}∪ [8:19]\{10,13,16}, B_4={6}, B_5={4}, B_2p+2=6={5}, B_n=7={4,5,6}, B_z(1,1)=8=∅, B_z(1,2)=9={6}, B_z(1,3)=10={6}, B_z(2,1)=11={5}, B_z(2,2)=12=∅, B_z(2,3)=13={5}, B_z(3,1)=14={4}, B_z(3,2)=15={4}, B_z(3,3)=16=∅ B_z^'(1)=17={6,7,8}, B_z^'(2)=18={5,7,12}, B_z^'(3)=19={4,7,16}. It is worth mentioning that applying the mapping methods discussed in <cit.> and <cit.> for the 2-Fano matroid results in an index coding instance consisting of more than 200 users, while the 2-Fano index coding instance in Example <ref> is of a size 19. [ℐ_F(3)] Consider the class 3-Fano index coding instance ℐ_F(3)={B_i, i∈ [m]}, which is characterized as follows Z_1={z(1,1)=10,z(1,2)=11,z(1,3)=12, z(1,4)=13}, Z_2={z(2,1)=14,z(2,2)=15,z(2,3)=16, z(2,4)=17}, Z_3={z(3,1)=18,z(3,2)=19,z(3,3)=20, z(3,4)=21}, Z_4={z(4,1)=22,z(4,2)=23,z(4,3)=24, z(4,4)=25}, Z^'={z^'(1)=26 ,z^'(2)= 27,z^'(3)= 28,z^'(4)= 29}, Z_1^''={z^''(1,1)=30}, Z_2^''={z^''(2,1)=31}, Z_3^''={z^''(3,1)=32}, Z_4^''={z^''(4,1)=33}, m= n + |Z_1| +|Z_2| +|Z_3| + |Z_4| + |Z^'| + |Z_1^''| +|Z_2^''| + |Z_3^''| + |Z_4^''| = 9 + 4 + 4 + 4 + 4 + 4 + 1 + 1 + 1 + 1=33. 
B_1=([4]\{1})∪{8}∪ [10:25]\{10,14,18,22}, B_2=([4]\{2})∪{7}∪ [10:25]\{11,15,19,23}, B_3=([4]\{3})∪{6}∪ [10:25]\{12,16,20,24}, B_4=([4]\{4})∪{5}∪ [10:25]\{13,17,21,25}, B_5={7,8}, B_6={5,8}, B_7={5,6}, B_2p+2=8={6,7}, B_n=9={5,6,7,8}, B_z(1,1)=10=∅, B_z(1,2)=11={13}∪{8}, B_z(1,3)=12={11}∪{8}, B_z(1,4)=13={12}∪{8} B_z(2,1)=14={17}∪{7}, B_z(2,2)=15=∅, B_z(2,3)=16={14}∪{7}, B_z(2,4)=17={16}∪{7} B_z(3,1)=18={21}∪{6}, B_z(3,2)=19={18}∪{6}, B_z(3,3)=20=∅, B_z(3,4)=21={19}∪{6} B_z(4,1)=22={24}∪{5}, B_z(4,2)=23={22}∪{5}, B_z(4,3)=24={23}∪{5}, B_z(4,4)=25=∅ B_z^'(1)=26={8,9,10,30}, B_z^'(2)=27={7,9,15,31}, B_z^'(3)=28={6,9,20,32}, B_z^'(4)=29={5,9,25,33}. B_z^''(1,1)=30={8,9,10,26}, B_z^''(2,1)=31={7,9,15,27}, B_z^''(3,1)=32={6,9,20,28}, B_z^''(4,1)=33={5,9,25,29}. It is worth mentioning that applying the mapping methods discussed in <cit.> and <cit.> for the 3-Fano matroid results in an index coding instance consisting of more than 1000 users, while the 3-Fano index coding instance in Example <ref> is of a size of 33. Moreover, in <cit.> an index coding instance based on 3-Fano matroid instance was built with the size of 29 users. That index coding instance is closely related to the 3-Fano index coding instance in Example <ref>. λ_q(ℐ_F(p))=β (ℐ_F(p))=p+1 iff the field 𝔽_q does have characteristic p. In other words, the necessary and sufficient condition for linear index coding to be optimal for ℐ_F(p) is that the field 𝔽_q has characteristic p. The proof can be concluded from Propositions <ref> and <ref>. We first provide Remark <ref> which will be used in the proof of Proposition <ref>. Consider the following matrix: [ 1 1 1 1 1 0 1; 1 1 1 1 0 1 1; 1 1 1 0 1 1 1; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ 1; 1 1 0 1 1 1 1; 1 0 1 1 1 1 1; 0 1 1 1 1 1 ]∈𝔽_q^(p+1)× (p+2). By running the Guassian elimination technique, we get [ 1 1 1 1 1 0 1; 0 0 0 0 -1 1 0; 0 0 0 -1 0 1 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ 0; 0 0 -1 0 0 1 0; 0 -1 0 0 0 1 0; 0 0 0 0 0 1+(p-1) 1 ]. Since 1+(p-1)=0 over fields 𝔽_q with characteristic p, then the last column is linearly independent of the column space of the other columns. For the class p-Fano index coding instance ℐ_F(q), there exists an optimal scalar linear code (t=1) over the field F_q with characteristic p. First, it can be observed that set [p+1] is an independent set of ℐ_F(p). Thus, β (ℐ_F(p))≥ p+1. Now, consider the scalar encoding matrix H_p∈𝔽_p^(p+1)× m, characterized by its columns as follows (shown in Figure <ref>) * Users u_i, i∈ [n]: H_p^{i}= {[ I_p+1^{i}, i∈ [p+1],; ; ∑_j∈ [p+1]\{n-i}I_p+1^{j}, i∈ [p+2:2p+2],; ; ∑_j∈ [p+1]I_p+1^{j}, i=n=2p+3, ]. * H_p^{z(l,j)}=I_p+1^{j}, l,j∈ [p+1], * H_p^{z^'(l)}= {[ I_p+1^{l+1}, l∈ [p],; ; I_p+1^{1}, l = p + 1, ]. * H_p^{z^''(l,j)}= {[ I_p+1^{l+j+1}, l+j≤ p,; ; I_p+1^{l+j-p}, p+1≤ l+j≤ 2p-1, ]. Now, we prove that this encoding matrix can satisfy all the users. * For users u_i, i∈ [p+1] with the interfering message set B_i=([p+1]\{i})∪{n-i}∪_j∈ [p+1] Z_l\{z(l,i)}, we have * H_p^[p+1]\{i}=I_p+1^[p+1]\{i}. * H_p^{n-i}=∑_j∈ [p+1]\{i}I_p+1^{i}. * H_p^Z_l\{z(l,i)}=I_p+1^[p+1]\{i}. It can be checked that H_p^B_irref≡I_p+1^[p+1]\{i}. Since H_p^{i}=I_p+1^{i}, the decoding condition (<ref>) is met, and user u_i, i∈ [p+1] can decode its requested message from the i-th transmission y_i. * According to Lemma <ref>, set [p+2:2p+2] is a circuit set of H_p, which means that each set [p+2:2p+2]\{j}, j∈ [p+2:2p+2] is an independent set. 
Now, Since B_i=[p+2:2p+2]\{i, i+1}, i∈ [p+2:2p+1] and B_2p+2={p+2:2p+2}\{1, 2p+2}, the decoding condition in (<ref>) will be met for all users u_i, i∈ [p+2:2p+2]. * For user u_n, according to Remark <ref>, column H_p^{n} is linearly independent of the column space of its interfering messages set H_p^B_n=[p+2:2p+2], which satisfies the decoding condition (<ref>) for i=n. * Since the interfering message set of users u_z(l,l), l∈ [p+1] is empty and H_p^z(l,l) = I_p+1^{l}, each user u_z(l,l) can decode its demanded message from the l-th transmission. * For users u_z(l,j), l∈ [p+1], j∈ [p]\{l-1, l} with the interfering message set B_z(l,j)= (Z_l\{z(l,l), z(l,j), z(l, j + 1)}) ∪{n-l}, we have * H_p^Z_l\{z(l,l), z(l,j), z(l, j+ 1)})=I_p+1^[p+1]\{l, j, j+1}, * H_p^{n-l}=∑_i∈ [p+1]\{l}I_p+1^{i}. It can be verified that H_p^B_z(l,j) = [[ I_p+1^[p+1]\{l, j, j+1} ∑_i∈ [p+1]\{l}I_p+1^{i} ] ]. rref≡ [[ I_p+1^[p+1]\{l, j, j+1} I_p+1^{j} + I_p+1^{j+1} ] ]. * For users u_z(l,l-1), l∈ [p]\{1} with the interfering message set B_z(l,j)= (Z_l\{z(l,l), z(l,l-1), z(l, l + 2)}) ∪{n-l}, we have * H_p^Z_l\{z(l,l), z(l,l-1), z(l, l+ 2)})=I_p+1^[p+1]\{l, l-1, l+2}, * H_p^{n-l}=∑_i∈ [p+1]\{l}I_p+1^{i}. It can be verified that H_p^B_z(l,l-1) = [[ I_p+1^[p+1]\{l, l-1, l+2} ∑_i∈ [p+1]\{l-1}I_p+1^{i} ] ]. rref≡ [[ I_p+1^[p+1]\{l, l-1, l+2} I_p+1^{l} + I_p+1^{l+2} ] ]. Now, it can be seen that column H_p^{z(l,l-1)}=I_p+1^{l-1} is linearly independent of the column space of H_p^ B_z(l,l-1). Thus, the decoding condition (<ref>) will be satisfied for users u_z(l,l-1), l∈ [p]\{1}. * For users u_z(l,l-1), l=p+1 with the interfering message set B_z(l,l-1)= (Z_l\{z(l,l), z(l,l-1), z(l, 1)}) ∪{n-l}, we have * H_p^Z_l\{z(l,l), z(l,l-1), z(l, 1)})=I_p+1^[p+1]\{l, l-1, 1}, * H_p^{n-l}=∑_i∈ [p+1]\{l}I_p+1^{i}. It can be verified that H_p^B_z(l,l-1) = [[ I_p+1^[p+1]\{l, l-1, 1} ∑_i∈ [p+1]\{l}I_p+1^{i} ] ] rref≡ [[ I_p+1^[p+1]\{l, l-1, 1} I_p+1^{l-1} + I_p+1^{1} ] ]. Now, it can be seen that column H_p^{z(l,l-1)}=I_p+1^{l-1} is linearly independent of the column space of H_p^ B_z(l,l-1). Thus, the decoding condition (<ref>) will be satisfied for users u_z(l,l-1), l = p+1. * For users u_z(l,p+1), l=1 with the interfering message set B_z(l,p+1)= (Z_l\{z(l,l), z(l,p+1), z(l, 2)}) ∪{n-l}, we have * H_p^Z_l\{z(l,l), z(l,p+1), z(l, 2)})=I_p+1^[p+1]\{l, p+1, 2}, * H_p^{n-l}=∑_i∈ [p+1]\{l}I_p+1^{i}. It can be verified that H_p^B_z(l,p+1) = [[ I_p+1^[p+1]\{l, p+1, 2} ∑_i∈ [p+1]\{l}I_p+1^{i} ] ]. rref≡ [[ I_p+1^[p+1]\{l, l-1, 1} I_p+1^{p+1} + I_p+1^{2} ] ]. Now, it can be seen that column H_p^{z(l,p+1)}=I_p+1^{p+1} is linearly independent of the column space of H_p^ B_z(l,p+1). Thus, the decoding condition (<ref>) will be satisfied for users u_z(l,p+1), l = 1. * For users u_z(l,p+1), l=[p]\{1} with the interfering message set B_z(l,p+1)= (Z_l\{z(l,l), z(l,p+1), z(l, 1)}) ∪{n-l}, we have * H_p^Z_l\{z(l,l), z(l,p+1), z(l, 1)})=I_p+1^[p+1]\{l, p+1, 1}, * H_p^{n-l}=∑_i∈ [p+1]\{l}I_p+1^{i}. It can be verified that H_p^B_z(l,p+1) = [[ I_p+1^[p+1]\{l, p+1, 1} ∑_i∈ [p+1]\{l}I_p+1^{i} ] ]. rref≡ [[ I_p+1^[p+1]\{l, l-1, 1} I_p+1^{p+1} + I_p+1^{1} ] ]. Now, it can be seen that column H_p^{z(l,p+1)}=I_p+1^{p+1} is linearly independent of the column space of H_p^ B_z(l,p+1). Thus, the decoding condition (<ref>) will be satisfied for users u_z(l,p+1), l ∈ [p]\{1}. 
* For users u_z^'(l), l∈ [p+1] with the interfering message set B_z^'(l)={z(l,l), n-l, n}∪ Z_l^'', we have * It can be verified that H_p^{z(l,l), n-l, n} = [[ I_p+1^{l} ∑_j∈ [p+1]\{l}I_p+1^{j} ∑_j∈ [p+1]I_p+1^{j} ] ] rref≡ [[ I_p+1^{l} ∑_j∈ [p+1]\{l}I_p+1^{j} ] ], * It can be observed that H_p^Z_l^''= {[ I_p+1^[p+1]\{l, l+1}, l∈ [p],; ; I_p+1^[p+1]\{l, 1}, l = p + 1, ]. It can be seen that column H_p^{z^'(l)}= {[ I_p+1^{l+1}, l∈ [p],; ; I_p+1^{1}, l = p + 1, ]. will be linearly independent of the column space of H_p^B_z^'(l). Thus, the decoding condition (<ref>) will be satisfied for users u_z^'(l), l∈ [p+1]. * For users u_z^''(l,j), l∈ [p+1], j∈ [p-2] with the interfering message set B_z^''(l,j)= {z(l,l), n-l, n}∪ (Z_l^''\{z^''(l,j)} ), we have * It can be verified that H_p^{z(l,l), n-l, n} = [[ I_p+1^{l} ∑_j∈ [p+1]\{l}I_p+1^{j} ∑_j∈ [p+1]I_p+1^{j} ] ] rref≡ [[ I_p+1^{l} ∑_j∈ [p+1]\{l}I_p+1^{j} ] ], * H_p^Z_l^''\{z^''(l,j)}=I_p+1^[p+1]\{l, l+1, l+j+1}. It can be seen that column H_p^{z^''(l,j)}= {[ I_p+1^{l+j+1}, l+j≤ p,; ; I_p+1^{l+j-p}, p+1≤ l+j≤ 2p-1, ]. will be linearly independent of the column space of H_p^B_z^''(l,j). Thus, the decoding condition (<ref>) will be satisfied for users u_z^''(l,j), l∈ [p+1], j∈ [p-2]. Figure <ref> depicts the encoding matrix H_p=2∈𝔽_q^3× 19 which is optimal for the class 2-Fano index coding instance ℐ_nF(2) if field 𝔽_q does have characteristic two. Figure <ref> depicts the encoding matrix H_p=3∈𝔽_q^4× 33 which is optimal for the class 3-Fano index coding instance ℐ_nF(3) if field 𝔽_q does have characteristic three. Matrix H∈𝔽_q^(p+1)t× mt is an encoding matrix for index coding instance ℐ_F(p) only if its submatrix H^[p+1] is a linear representation of matroid instance 𝒩_F(p). We prove N_0=[p+1] is a basis set of H, and each N_i, i∈ [p+1] in (<ref>) is a circuit set of H. The proof is described as follows. * First, since β_MAIS(ℐ_F(p)) = p+1, we must have rank(H)=(p+1)t. Now, from B_i, i∈ [p+1] in (<ref>), it can be seen that set [p+1] is an independent set of ℐ_F(p), so based on Lemma <ref>, set [p+1] is an independent set of H. Since rank(H)=(p+1)t, set N_0=[p+1] will be a basis set of H. Now, in order to have rank (H)=(p+1)t for all j∈ [m]\ [p+1], we must have col(H^{j})⊆col(H^[p+1]). * According to Lemma <ref>, from B_i, i∈ [p+1], it can be seen that for each z(l,j), l∈ [p+1], j∈ [p+1], we have z(l,j)∈ B_i, ∀ i∈ [p+1]\{j}→H^{z(l,j)}=H^{j}. Thus, H^Z_l\{z(l,j)}=H^[p+1]\{j}, ∀ j,l∈ [p+1], which means that each set Z_l\{z(l,j)} is an independent set of H. * To have rank(H)=(p+1)t, we must have rank(H^B_i)=pt, i∈ [m]. Since [p+1] is a basis set, from B_j, j∈ [p+1], one must have col(H^{n-j})⊆col(H^[p+1]\{j})(<ref>)=col(H^Z_l\{z(l,j)}), * From B_i, i∈ Z_l, l∈ [p+1], it can be checked that each set Z_l\{z(l,l)} is a minimal cyclic set of ℐ_F(p). Moreover, for each l∈ [p+1], we have n-l ∈ B_z(l,j), ∀ j∈ [p+1]. * Now, it can be seen that all four conditions in Lemma <ref> are met for each set M=Z_l\{z(l,l)} and j=n-l. Thus, based on Lemma <ref>, each set (Z_l\{z(l,l)})∪{n-l}, l∈ [p+1] must be a circuit set of H. * Let M_1= Z^'∪ Z^'' and M_2= {z(l,l), n-l, n}. It can be seen that set M_1 is an acyclic set of ℐ_F(p) and M_2⊂ B_j for all j∈ M_1. According to Lemma <ref>, we must have rank(H^{z(l,l), n-l, n})≤ 2t, which according to (<ref>) will lead to rank(H^{l, n-l, n})≤ 2t. Thus, each set {l, n-l, n}, l∈ [p+1] is a circuit set of H. * Finally, from B_n-j, j∈ [p+1], it can be observed that set {(n-j), j∈ [p+1]} is a minimal cyclic set of ℐ_F(p). 
Furthermore, from B_n, one must have rank( H^B_n)=rank( H^{(n-j), j∈ [p+1]})=pt. Thus, based on Lemma <ref>, set {(n-j), j∈ [p+1]} must be a circuit set of H. This completes the proof. §.§ The Class p-non-Fano Index Coding Instance For the class of index coding instances ℐ_nF(p)={B_i, i∈ [m]} with m=2p^2+4p+3, the interfering message sets B_i, i∈ [m]\ ({n-j, j∈ [p+1]}∪{n}), n=2p+3 are exactly the same as the ones in ℐ_F(p), and the interfering message sets B_n-j, j∈ [p+1] are as follows B_n-j={n-l, l∈ [p+1]}\{j}, j∈ [p+1], B_n=∅. λ_q(ℐ_nF(p))=β (ℐ_nF(p))=p+1 iff the field 𝔽_q does have any characteristic other than characteristic p. In other words, the necessary and sufficient condition for linear index coding to be optimal for ℐ_nF(p) is that the field 𝔽_q has any characteristic other than characteristic p. The proof can be concluded from Propositions <ref> and <ref>. For the class p-non-Fano index coding instance, there exists an optimal scalar linear code (t=1) over the field with any characteristic other than characteristic p. We show that the encoder matrix shown in Figure <ref> can satisfy all the users of ℐ_nF(p). Since the interfering message sets B_i, i∈ [m]\ [p+2:2p+3] are the same as the ones in ℐ_F(p), they will be satisfied by the encoder matrix shown in Figure <ref>. Since for each user u_i, i∈ [p+2:2p+2], H^{i}∪ B_i=H^[p+2:2p+2], and according to Lemma <ref>, the submatrix H^[p+2:2p+2] is full-rank and invertible over the fields with any characteristic other than characteristic p, the column H^{i} is linearly independent of the space spanned by the columns of H^B_i. Thus, the decoding condition in (<ref>) will be satisfied for all users u_i, i∈ [p+2:2p+2]. Moreover, it can be easily seen that the decoding condition will be satisfied for user u_2p+3, as B_2p+3=∅. This completes the proof. Matrix H∈𝔽_q^(p+1)t× mt is an encoding matrix for index coding instance ℐ_nF(p) only if its submatrix H^[p+1] is a linear representation of matroid instance 𝒩_nF(p). Since the interfering message sets B_i, i∈ [m]\ ({n-j, j∈ [p+1]}∪{n} are the same as the ones in ℐ_F(p), it can be proved that set N_0=[p+1] is a basis set of H and each set N_i, i∈ [p+1] in (<ref>) is a circuit set of H. Now, from (<ref>), it can be seen that set {n-j, j∈ [p+1]} is an independent set of ℐ_nF(p). Thus, according to Lemma <ref>, set N_n={n-j, j∈ [p+1]}=[p+2:2p+2] must be an independent set of H. This completes the proof. § CONCLUSION Matroid theory is intrinsically linked to index coding and network coding problems. Indeed, the dependency of linear index coding and network coding rates on the characteristic of a field has been illustrated through two notable matroid instances, the Fano and non-Fano matroids. This relationship highlights the limitations of linear coding, a key theorem in both index coding and network coding domains. Specifically, the Fano matroid can only be linearly represented in fields with characteristic two, while the non-Fano matroid is linearly representable exclusively in fields with odd characteristics. For fields with any characteristic p, the Fano and non-Fano matroids have been extended into new classes whose linear representations rely on fields with characteristic p. Despite this extension, these matroids have not received significant recognition within the fields of network coding and index coding. In this paper, we first presented these matroids in a more accessible manner. 
Subsequently, we proposed an independent alternative proof, primarily utilizing matrix manipulation instead of delving into the intricate details of number theory and matroid theory. This proof demonstrates that while the class p-Fano matroid instances can only be linearly represented over fields with characteristic p, the class p-non-Fano instances are representable over fields with any characteristic except p. Finally, following the properties of the class p-Fano and p-non-Fano matroid instances, we characterized two new classes of index coding instances, namely the class p-Fano and p-non-Fano index coding, each comprising 2p^2 + 4p + 3 users. We demonstrated that the broadcast rate of the class p-Fano index coding instances can be achieved through linear coding only over fields with characteristic p. In contrast, for the class p-non-Fano index coding instances, it was proved that the broadcast rate can be achieved through linear coding over fields with any characteristic except p.
http://arxiv.org/abs/2407.13092v1
20240718014200
CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images
[ "Yuan Jin", "Gege Ma", "Geng Chen", "Tianling Lyu", "Jan Egger", "Junhui Lyu", "Shaoting Zhang", "Wentao Zhu" ]
eess.IV
[ "eess.IV", "cs.CV" ]
CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images
Yuan Jin, Gege Ma, Geng Chen, Tianling Lyu, Jan Egger, Junhui Lyu, Shaoting Zhang, Wentao Zhu
=============================================================================================
§ ABSTRACT
The accurate diagnosis of pathological subtypes of lung cancer is of paramount importance for follow-up treatment and prognosis management. Assessment methods utilizing deep learning technologies have introduced novel approaches for clinical diagnosis. However, the majority of existing models rely solely on single-modality image input, leading to limited diagnostic accuracy. To this end, we propose a novel deep learning network designed to accurately classify lung cancer subtypes from multi-dimensional and multi-modality images, i.e., CT and pathological images. The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets, and consequently to optimize the pathology-related feature extraction from CT images. This adaptive learning approach enhances the flexibility in processing multi-dimensional and multi-modality datasets and improves performance in the model testing phase. We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations during network training and thereby helps to extract "gold standard" pathological information from the corresponding CT scans. To evaluate the effectiveness, adaptability, and generalization ability of our model, we conducted extensive experiments on a large-scale multi-center dataset and compared our model with a series of state-of-the-art classification models. The experimental results demonstrated the superiority of our model for lung cancer subtype classification, showcasing significant improvements in accuracy metrics such as ACC, AUC, and F1-score.
§ INTRODUCTION
Lung cancer stands as one of the most prevalent malignant tumors globally, being the foremost cause of cancer-related mortality <cit.>. Clinically, the major treatment options are determined on the basis of histopathologic features <cit.>. In accordance with the 2015 World Health Organization classification, lung cancer can be classified into two main categories: small-cell lung carcinoma (SCLC) and non-small-cell lung carcinoma (NSCLC) <cit.>. NSCLC constitutes around 85% of lung cancer cases, with lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) being the most clinically prevalent histological subtypes <cit.>. The differences in treatment approaches between LUAD and LUSC primarily stem from their distinct molecular and histological characteristics <cit.>. For instance, gene mutations in specific proteins, such as Epidermal Growth Factor Receptor (EGFR), Anaplastic Lymphoma Kinase (ALK), and c-ros oncogene 1 (ROS1), found in LUAD patients can be targeted through precision therapies, while LUSC patients typically lack these molecular targets <cit.>, leading to distinct outcomes for the same therapy <cit.>.
Therefore, developing an effective early screening technique and accurate cancer subtype identification methods becomes critical for further improvements of survival rate for lung cancer patients. CT has the ability to provide anatomical representations of the human body's internal structures and it can reveal the characteristics of tumors from the macroscopic view, such as size, location, boundaries, morphology, and so forth <cit.>. Nevertheless, limited by the relatively low resolution, the accuracy of assessments based on CT images needs further verification, where the atypical clinical manifestations in CT imaging results could pose a challenge in detecting subtle pathological changes through conventional visual assessments <cit.>, especially for the complicated cancer pathological subtype identifications <cit.>. Over the past several years, with the ongoing iterations of emerging information technologies, a plethora of deep learning-based techniques have been extensively proposed and achieved significant breakthroughs <cit.>. Benefiting from the characteristics of standardized formats and large datasets, medical imaging has also become a significant application domain for deep learning <cit.>. By employing end-to-end deep neural networks to automatically extract and analyse high-throughput features, deep learning models can solve the aforementioned problems in a quantitative way <cit.>, and in turn, lead to significant advancements in diagnostic and predictive accuracy. In the context of identifying lung cancer subtypes on CT images, a few explorations of computer-aided systems, leveraging various convolutional neural networks (CNN) and innovative learning strategies, have been conducted <cit.>. Despite the achievements, there is still plenty of scope for improvements in the diagnostic accuracy of these automatic analysing techniques <cit.>. The limitations of these classification models can be attributed to the limited resolution of CT images as well as the presence of substantial redundant information in the original CT images. In clinical practice, the one-time result of single-modality examination cannot always provide physicians with sufficient information, thus, the integration of multiple modalities or multiple instances of same modality are frequently required to achieve the complementary of information. In addition to CT tests, a wide array of clinical techniques are also available to aid physicians in diagnosing lung cancer, including pathological examination and other imaging modalities <cit.>. While, to acquire the most accurate pathological classification results, pathological examination is highly recommended as the further test. It is regarded as the “gold standard” in cancer diagnosis as it can provide highly precise information about pathological types, grading, and staging from the microscopic view <cit.>. Consequently, the integration of multi-modality examinations (CT and pathological) can assist physicians in making more accurate diagnoses, which in turn indicates that the deep learning models based on integrating modalities hold the promise of delivering more accurate predictions. The integration of diverse clinical examinations contributes to providing more comprehensive diagnostic indicators, whereas, it is essential to recognize that the pathological examination is an invasive procedure, which requires tissue specimens through needle biopsy or surgical resection <cit.>. 
Restricted by their physical conditions or the potential risks of complications, patients may not be able to endure the tissue acquisition operations in certain cases <cit.>. Thus, the pathological examinations corresponding to certain CT tests are not always obtainable, rendering data missing a common occurrence in clinical applications. As most existing models are designed to handle uniformly-sized image inputs, some studies choose to use only paired data for model training, where the models have limited representation capability and the application value of clinical data may not be fully explored. Additionally, there is some preliminary research that employs generation techniques to complete the missing modality before proceeding with subsequent model training <cit.>. Nevertheless, the completion process for datasets with distinguished differences in scale faces great challenges. Recently, to enhance models' representation capability and reduce the computational costs, the concept of dynamic convolution (DC) has been introduced <cit.>. DC can adjust the convolution parameters adaptively according to the input images, and use attentions to dynamically invoke and combine convolution kernels to enhance the representation ability of the network <cit.>. Inspired by such models, we develop a novel lung cancer subtypes classification model that is equipped with dynamic convolutions, which can automatically adjust convolution parameters based on different input combinations, such as the paired multi-modality images or the ones with single-modality, to facilitate diagnosis. Furthermore, to acquire prompts for accurate cancer subtype identification in cases of missing pathological information, we propose contrastive constraints to explore the correlation between paired CT and pathological images, and utilize this as a prior guide for accurate diagnosis in cases of modality absence. We build a multi-scale and multi-center dataset with data from three different tertiary hospitals. With this dataset, our approach outperforms other SOTA methods by a significant margin. § METHODOLOGY In this section, the proposed CC-DCNet for lung cancer subtype discrimination is introduced, as demonstrated in Fig. <ref>. To address the issues of obtaining relevant pathological information in the absence of pathological images, we propose a novel dynamic convolutional neural network that enables to utilize either paired CT and pathological images or pure CT images as input, and adaptively learns from different input images. Meanwhile, to excavate the cross-modality associations between paired CT and pathological images during training phase, and leverage such associations as priors to influence radiological feature extractions, a contrastive loss is designed to impose the extra constraints. §.§ Dynamic Convolutional Learning In order to enable the model to handle diverse image data, it is necessary for the model to adaptively adjust itself based on input. Most existing static models employ a single convolution kernel to process the image input, resulting in handling the input with the same size only. In this study, inspired by <cit.>, we designed a dynamic convolutional network that was constructed with multiple parallel convolution kernels for medical image processing. The parallel convolution kernels can be assembled differently and dynamically regarding to different inputs, and share the same output channels via aggregation. In this way, the model is able to flexibly process various inputs. As shown in Fig. 
<ref>, the pipeline of our proposed model involves three main procedures. Initially, the input images undergo the process of feature extraction, following which the extracted high-level features are concatenated together and subsequently inputted into the parallel convolution kernels to generate predictions for cancer subtypes diagnostic. The detailed procedure is demonstrated as follows. §.§.§ Feature Extraction The feature extraction part, including radiological feature extractor and pathological feature extractor, plays a crucial role in capturing informative information from input images and performing dimensionality reduction. Notably, the pathological feature extractor is exclusively trained and utilized only when paired CT/pathological data are inputted into the model. In this work, the feature extraction part is equipped with the 3D and 2D vision transformer <cit.> network accordingly for CT images and pathological images. With regard to the radiological feature extractor, the pre-processed CT images are firstly cropped into patches with an identical size of 112×112×112, and subsequently fed into the Transformer network module. To empower the model to focus on the information within these patches from different sub-spaces, we incorporate the multi-head self-attention mechanism into the network architecture. This mechanism allows the model to jointly capture and learn relevant features from the patches, enhancing its ability to discern important patterns and relationships across the input data. Building upon the 3D vision transformer module, we can extract radiological features 𝐗_𝐫∈ℝ^N from the original CT images, in this study N is set to 1024. With respect to pathological feature extractor, we employed the 2D vision Transformer module to process the high resolution image patches, utilizing operations similar to those in the radiological feature extraction part. Interestingly, our observations indicated that positional information played a less crucial role in the context of our randomly sampled patches sourced from the same Whole Slide Imaging (WSI). Consequently, the incorporation of advanced positional encoding did not yield a substantial improvement in performance. Thus, to augment the model's receptive field for capturing pathological features at varying magnification rates, we replaced the position code with the magnification rate. The desired high-level pathological features are represented as 𝐗_𝐩∈ℝ^N. §.§.§ Feature Concatenation After the feature extraction parts, both CT and pathological features of size N are obtained. The CT and pathological features are concatenated subsequently and reconstructed into 𝐗∈ℝ^H× L× D. It is worth noting that the pathological features are not always available, given the scenarios of both paired CT/pathological images and pure CT images are model inputs. In this study, the size of 𝐗 with paired CT/pathological inputs is set to 16×16×8 and the one with only CT inputs is set to 16×8×8. §.§.§ Dynamic Convolution The goal of dynamic convolution is to turn input 𝐗 with different sizes into output 𝐙∈ℝ^M with the same size. In this study, M is set to 512. Let x and z denote one element of 𝐗 and 𝐙, respectively. With no padding and stride of the same size with 𝐗, the dynamic convolution is processed as follows: z_m =∑_h=0^H∑_l=0^L∑_d=0^Dw_h,l,d,m× x_H-h-1,L-l-1,D-d-1 where w denotes one element of the weight 𝐖∈ℝ^H× L× D× M, which varies in response to 𝐗 and is generated by convolving 𝐗 with M convolution kernels based on same padding <cit.>. 
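For concreteness, a PyTorch-style sketch of this operation is given below; it treats the reshaped feature 𝐗 as a single-channel 3D volume, generates the sample-specific weight 𝐖, and applies it as one full-size kernel per output channel, anticipating the implementation described in the next paragraph. Module and variable names are our own assumptions rather than the authors' released code, and PyTorch's convolution is a cross-correlation, which differs from the flipped indexing above only by a fixed re-ordering of the learned kernel.

```python
# Sketch of the dynamic convolution head (our own naming; shapes follow the text,
# with N = 1024 features per modality and M = 512 output channels).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConvHead(nn.Module):
    def __init__(self, out_channels: int = 512, num_classes: int = 2):
        super().__init__()
        # "Convolution Kernel Generation": same-padded 3x3x3 conv producing one
        # out_channels-dimensional coefficient vector per spatial location of X.
        self.kernel_gen = nn.Conv3d(1, out_channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(out_channels, num_classes)

    def forward(self, x):                      # x: (B, 1, H, L, D); H, L, D vary with input
        logits = []
        for xb in x:                           # per-sample weights, so loop over the batch
            w = self.kernel_gen(xb.unsqueeze(0))           # (1, M, H, L, D)
            w = w.squeeze(0).unsqueeze(1)                  # (M, 1, H, L, D): M full-size kernels
            z = F.conv3d(xb.unsqueeze(0), w)               # no padding; kernel covers all of x
            logits.append(self.fc(z.flatten(1)))           # z: (1, M, 1, 1, 1) -> (1, M)
        return torch.cat(logits, dim=0)                    # (B, num_classes)

# Paired CT/pathology features (2 x 1024 values) reshape to 16x16x8; CT-only to 16x8x8.
head = DynamicConvHead()
paired = torch.randn(4, 1, 16, 16, 8)
ct_only = torch.randn(4, 1, 16, 8, 8)
print(head(paired).shape, head(ct_only).shape)   # torch.Size([4, 2]) in both cases
```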
Based on the above derivation, we implement the dynamic convolution module, where the concatenated feature 𝐗 is passed through the Convolution Kernel Generation module (shown in Fig. <ref>). This module performs convolution with an output channel of 512 and a convolution kernel size of 3×3×3, with padding 1 and stride 1, and generates the weight 𝐖 of size 16× 16× 8× 512. The feature 𝐗 is then convolved with 𝐖 through a 3D convolution in Eq. <ref>, that has an output channel of 512 and a kernel size of 16×16×8, with no padding and a stride of 16×16×8. This operation produced a 512-dimensional output feature 𝐙, which is finally fed into a fully connected layer to obtain the classification result. Furthermore, in the absence of pathological features, the width size is adjusted from 16 to 8 in the aforementioned process, while all other steps remain unchanged. This approach still ensures the attainment of the final classification result. This adaptive method facilitates dynamic convolution with varying kernel sizes tailored to different input sizes, accommodating scenarios both with and without pathological features. §.§ Contrastive Loss Function Numerous studies have demonstrated that the cross-scale correlations between pathological images and CT images for the same cancer subtype can be found <cit.>. Therefore, in this work, we designed a comprehensive set of contrastive loss constraints ℒ_contrast within the proposed dynamic convolutional network, encompassing the contrastive loss for pathological subtypes ℒ_type and the contrastive loss for cross-modality correlations ℒ_correlation. This contrastive loss set is applied only when paired CT and pathological images are inputted, it can be defined as: ℒ_contrast=ℒ_type+λ_pℒ_correlation where λ_p is the tuning parameter to balance the contributions of these two losses. The role of contrastive loss is to map similar features into close regions, while mapping dissimilar ones to distant regions, thereby assisting the network in learning discriminative feature representations and bringing similar features closer in the feature space <cit.>. In our design, as shown in Fig. <ref>, the contrastive loss for pathological subtypes ℒ_type is employed to empower the network to distinguish between different cancer subtypes, meanwhile, the contrastive loss for cross-modality correlations ℒ_correlation is used to enforce the network to extract highly correlated features from both CT and pathological images. In this way, with the guidance of gold standard pathological images, our approach enables the radiological feature extractor to capture features that are more relevant to pathological information, thereby enhancing the subsequent diagnostic accuracy. The loss can be represented as follows: ℒ_type=∑_i∈ I-1/|P(i)|∑_p∈ P(i)logexp(k_i· k_p/τ)/∑_n∈ N(i)exp(k_i· k_n/τ) where P(i) is the set of indexes of all samples in the batch, which have the same cancer subtype as i, k_i refers to the feature from the whole set, k_p represents another feature with the same cancer subtype, n∈ N(i) refers to all samples from the whole dataset, the · symbol refers to the inner product, τ is a scalar temperature parameter. ℒ_correlation=-∑_j∈ Ilogexp(k_j· k_+/τ)/∑_a∈ A(j)exp(k_j· k_a/τ) where k_j refers to the feature from an original CT image, k_+ represents its corresponding pathological image feature, a∈ A(j) refers to all samples from the paired CT/pathological dataset, the · symbol refers to the inner product, τ is also a scalar temperature parameter. 
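To make the two terms concrete, the following PyTorch-style sketch implements batch-level versions of ℒ_type and ℒ_correlation. It is a minimal sketch under our own naming assumptions: features are L2-normalized before taking inner products, and the sums over the whole dataset in the equations above are approximated within a mini-batch.

```python
# Minimal sketch (ours, not the authors' code) of the two contrastive terms.
import torch
import torch.nn.functional as F

def subtype_contrastive_loss(feats, labels, tau: float = 0.1):
    """L_type: supervised contrastive loss pulling together features of the same subtype.
    feats: (B, N) pooled CT and/or pathology features; labels: (B,) subtype ids."""
    feats = F.normalize(feats, dim=1)
    mask_self = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    logits = (feats @ feats.t() / tau).masked_fill(mask_self, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)   # log softmax over others
    log_prob = log_prob.masked_fill(mask_self, 0.0)                    # avoid -inf * 0 below
    pos = ((labels[:, None] == labels[None, :]) & ~mask_self).float()  # P(i): same-subtype pairs
    n_pos = pos.sum(1).clamp(min=1)
    return -(log_prob * pos).sum(1).div(n_pos).mean()

def cross_modality_loss(ct_feats, path_feats, tau: float = 0.1):
    """L_correlation: InfoNCE pulling each CT feature toward its paired pathology feature."""
    ct = F.normalize(ct_feats, dim=1)
    pa = F.normalize(path_feats, dim=1)
    sim = ct @ pa.t() / tau                                 # (B, B); the diagonal is the pair
    target = torch.arange(len(ct), device=ct.device)
    return F.cross_entropy(sim, target)

# L_contrast = L_type + lambda_p * L_correlation, computed only for batches that
# contain paired CT/pathological images.
```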
§.§ The Overall Loss Function The hybrid feature F_C2 acquired from the dynamic convolution module will give an ultimate classification outcome, where the loss can be calculated to update the entire model. In addition, the applied contrastive loss can also improve the model's ability in cancer subtypes discrimination and correlated multi-modality feature extraction. Therefore, in order to more effectively incorporate the pathological features as the benchmark to enhance the entire model's classification accuracy, even when CT images are utilized solely as input, we designed a total loss function that is outlined as follows: ℒ_total=ℒ_class+αλ_cℒ_contrast, {[ α=1, when inputs are paired CT/pathological images; α=0, when inputs are standalone CT images ]. where ℒ_class and ℒ_contrast are the representative signs of the classification loss and contrastive loss, respectively; α is the parameter of 0 or 1; λ_c is the tuning parameter to balance the contributions of classification loss and contrastive loss. For the classification loss, we applied the cross entropy loss as defined in Eq. (<ref>), ℒ_class =-1/N∑_t [y_t·log(p_t)+(1-y_t) ·log(1-p_t)], where ℒ_class refers to the loss, N refers to the total number of training samples, and t refers to the t^th sample. y_t is the label of the t^th sample, and it is a sign function, which equals one if the label is the same as LUSC and zero otherwise. p_t is the probability that the prediction of the t^th sample belongs to class LUSC. §.§ Evaluation Metrics To evaluate the performance of our proposed CC-DCNet, the area under the curve (AUC) was utilized as the metric for binary classification. Moreover, accuracy (ACC) as well as F1-score were calculated to find the optimized threshold. Each of these metrics encompasses a value that falls within the range of 0 to 1, wherein a higher value signifies an enhanced performance of the model. § EXPERIMENTAL SETUP §.§ Datasets and Preprocessing This study was conducted using a multi-center dataset formed by three distinct hospitals: Sir Run Run Shaw Hospital, Zhejiang University School of Medicine (referred to as Hospital A); the Affiliated Dongyang Hospital of Wenzhou Medical University (referred to as Hospital B); and the Cancer Hospital of The University of Chinese Academy of Sciences (referred to as Hospital C). Table <ref> presents the detailed demographic information of all patients participating in this study. A total of 520 LUAD/LUSC patients from Hospital A, who had undergone either CT scanning alone or a combination of CT scanning and biopsy/surgical specimen examinations, were encompassed within this study. Furthermore, 191 and 435 patients with LUAD/LUSC diagnoses from Hospital B and Hospital C were also included in the study. The data retrospectively collected from Hospital A, encompassing both paired CT/pathological images and pure CT images were employed for model training and testing, while the data obtained from Hospitals B and C, featuring a smaller number of paired data and a larger pool of pure CT images, were used to assess the stability and generalization ability of the well-trained network. Prior to training and testing the proposed model, it is necessary to preprocess both the original pathological and CT images, and the following are the explanations of the relevant preprocessing procedures. 
§.§.§ Pathological Image Preprocessing The pathological images employed in this study were in the form of whole slide images (WSIs), which represent digitized renditions of glass slides achieved through dedicated slide scanners <cit.>. The dimensions of all WSIs spanned an average width of 77460±14662 pixels and an average height of 59317±11014 pixels. Confronting the challenge posed by these extensive image sizes, a transformer network was introduced to handle patches extracted from the original WSI images, encompassing varying magnification levels. This approach effectively mitigated the intricacies of working with these sizable images. Furthermore, to ensure the fidelity and consistency of the feature information, two essential steps were undertaken prior to patch extraction: color normalization of all WSIs and delineation of the region of interest (ROI). To counteract the impact of color deviations, this study incorporated a technique known as structure-preserving color normalization (SPCN) <cit.>. Then the suspected cancerous regions within all WSIs were meticulously annotated by three pathologists utilizing the Automated Slide Analysis Platform (ASAP) <cit.>. This rigorous process was aimed at solely focusing on features pertinent to cancer. Subsequent to the delineation, patches were exclusively cropped within the defined ROI. Following this, the original WSIs were cropped at four distinct magnification levels, namely 10X, 20X, 40X, and 100X, while being standardized to a uniform patch size of 560×560 pixels, where X means times. §.§.§ CT Image Preprocessing To ensure comprehensive coverage of malignancies across all patient cases, all of the CT raw data underwent reconstruction with the same parameters, maintaining an in-plane resolution of 1.0mm and a slice thickness of 1.0mm. Besides, tumor segmentation of the multi-center CT images was executed by a radiology specialist employing the medical image processing software ITK-SNAP 3.8 <cit.>. To encompass all cancer-related features, the cancer masks were automatically dilated by three voxels as part of the region delineation process. Subsequent to this, a final Volume of Interest (VOI) of fixed dimensions, specifically 256×256×128, was cropped for each CT image. Furthermore, to ensure standardized data representation, the values of all generated patches were normalized to reside within the range of zero to one. §.§ Implementation Details Throughout the training process, the complete dataset underwent division into two parts: a training dataset and a testing dataset, distributed in an 80% to 20% ratio. Subsequent to the implementation of a five-fold cross-validation technique, the patches associated with different subjects and their corresponding pathological labels were systematically utilized to train the model. The total loss function played a pivotal role in iteratively updating the model parameters throughout the entire training phase. In this work, our proposed CC-DCNet and the other comparison methods were implemented in PyTorch with CUDA 10.2 toolkit and CUDA Deep Neural Network (cuDNN) 8.0.2 on the Ubuntu 18.04 operating system. All experiments involving training, validating, and testing were conducted on a Tesla V100-SXM2 graphics card. During the training phase, the parameters are set as 4 for batch size and 0.0001 for initial learning rate. For each approach, 400 epochs were executed, and the adaptive moment estimation (Adam) optimizer was utilized to optimize parameter updates. 
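As a small illustration of the CT preprocessing described above, the sketch below dilates the tumour mask by three voxels, crops a fixed 256×256×128 VOI and rescales intensities to [0, 1]. Centring the crop on the mask centroid and min-max scaling are assumptions, the function name is hypothetical, and the volume is assumed to be at least as large as the VOI.

import numpy as np
from scipy.ndimage import binary_dilation

def extract_voi(ct_volume, tumor_mask, voi_shape=(256, 256, 128)):
    """Crop a fixed-size volume of interest around the dilated tumour mask."""
    mask = binary_dilation(tumor_mask, iterations=3)             # dilate by three voxels
    center = np.argwhere(mask).mean(axis=0).round().astype(int)  # mask centroid
    lo = np.clip(center - np.array(voi_shape) // 2,
                 0, np.array(ct_volume.shape) - np.array(voi_shape))
    hi = lo + np.array(voi_shape)
    voi = ct_volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].astype(np.float32)
    return (voi - voi.min()) / (voi.max() - voi.min() + 1e-8)    # scale to [0, 1]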
§ EXPERIMENTAL RESULTS In this section, we first evaluated the performance of proposed CC-DCNet through a comparative study against other state-of-the-art (SOTA) models. We then investigated the contributions of key components of our model by conducting the ablation tests regarding contrastive constraints and additional independent CT dataset, followed by an impact analysis of utilizing different SOTAs for feature extraction. Lastly, the model underwent testing on a multi-center external dataset to evaluate its generalization ability. §.§ Comparison with the State-of-the-Arts Given that our research incorporates inputs with varying sizes, including both paired CT/pathological images and independent CT images, the comparison study was conducted in two parts: one to compare with SOTAs on CT dataset, and the other with SOTAs on paired CT/pathological datasets. For the former comparative study, we compared our proposed model with other SOTAs, including ResNet-18 <cit.>, ResNet-50, ViT <cit.>, Swin <cit.>, T2T <cit.>, RAFENet <cit.> and CARL <cit.>. Moreover, we devised an additional benchmark model SwitchNet, which allows manual calibration for the classification of cancer subtypes utilizing paired CT/pathological images or standalone CT images. The graphical representation of the SwitchNet model is shown in Fig. <ref>. To make a fair comparison, the feature extraction blocks of SwitchNet is kept the same as our model utilizing ViT. In this test, our model and SwitchNet were trained on the whole dataset that covers 320 paired CT/pathological images and 200 independent CT images, while other SOTAs were trained with 520 cases of CT images only. And all of the models were tested with the same testing CT images. For the latter comparison study against SOTAs on paired CT/pathological dataset, we initially constructed the framework that comprises two same SOTA networks in parallel to process pathological and CT datasets, accordingly. These two networks were concatenated together at the final layer to output the predictions. In this test, the proposed CC-DCNet and SwitchNet were still trained on the whole dataset (320 paired CT/pathological images and 200 independent CT images), while other SOTAs were trained with 320 paired CT/pathological images. All of the comparative models were tested with testing CT/pathological image sets. Here, the comparison tests of RAFENet and CARL were skipped as they can only process single modality images. In Table <ref> and Table <ref>, the quantitative evaluation results of our model as well as other SOTAs are summarized. It may be observed that our proposed model outperforms all of the other classification methods across all metrics in both comparisons. In the first comparison with independent CT inputs, our approach achieves the results of 86.69%±3.89%, 90.75%±2.51% and 85.96%±3.51% for ACC, AUC, and F1-score, respectively. While the highest value of SOTAs is provided by CARL, with ACC, AUC and F1-score of 83.14%±3.25%, 87.61%±3.46% and 82.55%±3.19%. These numerical results reveal that introducing the relevant pathology-prior knowledge of corresponding pathological images into the radiological features (RF) based benchmark networks can significantly improve the models' accuracy. 
In the latter comparison with paired CT/pathological inputs, our approach achieves ACC of 96.28%±2.56%, AUC of 97.53%±2.13% and F1-score of 96.04%±3.33%, surpassing the highest values among state-of-the-art models obtained by SwitchNet, which are 94.65%±2.36%, 96.53%±2.07% and 94.61%±2.98% for ACC, AUC, and F1-score, respectively. Compared with the CT testing dataset, the performance differences among various models tested on paired multi-modality data are less obvious. Nevertheless, our model still outperforms the other models, indicating that incorporating additional independent CT data as well as setting cross-modality constraints can also improve the model's performance. Furthermore, we conducted paired-sample t-tests between the proposed model and other SOTA models, to assess the statistically significant differences between these methods. It can be concluded from Fig. <ref> that there are statistically significant differences between the results obtained by the different methods as all p-values are less than the significance level (0.05). §.§ Ablation Study In our model design, we construct a dynamic convolutional module and a contrastive constraint module to enable the model to adaptively process data of varying sizes, and leverage the limited pathological priors to promote the model's overall accuracy. In order to assess the impact of different input dataset on model performance and evaluate the contributions of contrastive constraints to performance enhancement, the ablation tests were carried out in this subsection. §.§.§ Ablation test of independent CT dataset A key design of our model in this study is the dynamic convolution module. This module grants our model the ability to analyze both paired CT/pathological data and independent CT data. Previous tests have confirmed that the introduction of pathological images is beneficial to the model's performance. In order to validate the impact of additional independent CT data, we further trained the proposed CC-DCNet with only paired dataset (320 paired CT/pathological images), and tested the model by testing CT data and testing paired data, separately. As shown in Fig. <ref>, it can be seen that the inclusion of additional CT images also contributes to enhancing model performance, indicating that utilizing more datasets (which may not be paired) may also enhance the performance of the model. While the improvements are relatively small, which could potentially be attributed to the limited number of supplemental CT images, and further validation could be pursued by incorporating a larger quantity of CT data to offer a more comprehensive assessment of the model's performance. §.§.§ Ablation test of contrastive constraints Within the proposed model, we have designed the contrastive constraints to acquire the cross-modality correlations of paired CT and pathological images, and subsequently employ such prior information to optimize the radiological feature extractions. To investigate the contributions of contrastive constraints to model's performance, we trained the proposed CC-DCNet by removing its contrastive loss, and tested the model with the same testing datasets. In Fig. <ref>, ACC, AUC, and F1-score are applied for comparisons between CC-DCNet with and without contrastive constraints. Through the comparisons, it is obvious that the model equipped with contrastive constraints demonstrates superior performance, particularly when applied on independent CT images in the test. 
When testing on independent CT images alone, although the paired pathological images are absent for joint analysis, the CC-DCNet with contrastive constraints is still capable of using the previously learned pathological priors to guide the feature extractions from CT images more pathologically relevant under the constraints of contrastive loss, and finally produces a more accurate diagnosis. §.§ Impact Analysis of Feature Extractor In this work, we applied ViT as the feature extraction blocks for CT and pathological images. To investigate the impact of different network structures on the model's performance, we further trained and tested the proposed CC-DCNet with ResNet-18 and ReNet-50 as feature extractors. The number of parameters remain the same across different comparative models. As presented in Fig. <ref>, the model constructed with ViT outperforms the other constructions on both testing CT data and testing paired data, revealing that the attention mechanism in ViT is advantageous for extracting valid features from large-size images. §.§ External test This study was conducted using an extensive multi-center dataset that includes data from three distinct hospitals. By utilizing the training dataset from Hospital A, we have effectively demonstrated the advancements of our proposed model over other models through a series of experiments. In this subsection, we further validate our well-trained model on two additional datasets from Hospital B and Hospital C, to assess the model's generalization ability and robustness. It is worth mentioning that we collected paired CT/pathological datasets from Hospital B while independent CT images from Hospital C. Table <ref> presents the outcomes of external performance validations, drawing comparisons between our proposed model and other SOTAs. However, it has to be acknowledged that both the comparison models and our model exhibit relatively diminished performance on data from Hospital B and Hospital C as compared to that from Hospital A, they still yield reliable levels of ACC, AUC and F1-score. And of paramount significance is that when tested on the same dataset, our proposed classification model consistently outperforms other models, and these improvements are substantial. The proposed model achieves a minimum of 2.43%, 2.36% and 1.55% improvements accordingly in ACC, AUC, and F1 score for independent CT inputs, and 1.01%, 1.73%, and 1.16% in ACC, AUC and F1 score for paired CT/pathological inputs. § DISCUSSION Early identification of pathological subtypes is of significant importance for lung cancer patients. In clinical practice, CT imaging stands as one of the most frequently employed techniques for diagnosis. Consequently, there is a huge demand for CT-based automated diagnostic models. However, the key challenge in utilizing such CT models for predicting pathological subtypes lies in the issue of accuracy. In cancer diagnosis, pathological examination serves as the gold standard, providing more distinct and precise diagnostic criteria. Therefore, in this work, a significant hypothesis is that the integration of pathological information into the fundamental CT-based model, forming the comprehensive multi-modality features, holds the potential to enhance the classification accuracy of lung cancer pathological subtypes. 
While, it should be noted that the invasive pathological examinations may not be applicable to all clinical scenarios, which would result in missing modality and consequently the presence of a substantial amount of paired multi-modality as well as single-modality dataset. To enhance the overall utility of the clinical dataset and acquire prior information on cross-modality correlations from paired datasets, guiding the model to make more accurate predictions even in the absence of pathology, we proposed a dynamic convolutional neural network with contrastive constraints specifically designed for identifying lung cancer subtypes with CT and pathological images. Across various homologous but diverse modality images of the same patient, mutual characteristics of the same disease may be revealed from different perspectives, thereby the integration of multi-modality information holds the superior diagnostic evidence. Comparing the results in Table <ref> and Table <ref> with either independent CT inputs or paired CT/pathological inputs, our hypothesis may be confirmed where the incorporation of pathological information significantly enhances the CT-based model’s classification accuracy compared to models that are trained exclusively with CT dataset only. Generally, the common issue of modality missing in clinical practice poses a big challenge to conventional models. In this work, in order to empower the model to deal with diverse input combinations and further enhance the applicability and representation capacity of clinical computer-aided systems, a dynamic convolution module was designed. It allows the model to adaptively adjust itself for various inputs through the multiple parallel convolution kernels. Through the dynamic convolution module, the proposed model achieves training simultaneously with either independent CT images or paired CT/pathological images as inputs, thereby offering the potential for a comprehensive analysis of various data combinations in clinical scenarios. For the construction of the proposed model, considering the computing limitations of GPU, a transformer-based approach was adopted to extract features from the CT and pathological images by patch-wise sampling. Transformer network can process the data in all positions of the input sequence in parallel, and efficiently capture the long-range dependencies in input sequence through the attention mechanism. As shown in Fig. <ref>, using Vision Transformer (ViT) as the feature extraction blocks resulted in performance improvements compared to the ones using ResNet as a feature extractor, indicating that the attention mechanism in ViT is beneficial for extracting relevant features from large-size images. In addition, given that the spatial positioning of patches in WSI has minimal impact on the final pathological representation, whereas information related to the magnification level effectively signifies the level at which the patch represents pathological information, the original position encoding was replaced with magnification encoding to better extract features from pathological images at varying magnification levels. Despite the substantial differences between CT and pathological images, several literature have shown the presence of cross-scale correlations between these two modalities, e.g., the specific relationships between CT intensity values and matched cell density statistics <cit.>, although investigation on deep-level correlation is still far from enough. 
Therefore, to extract more pathological-relevant features from CT images and provide pathology prior guidance even when paired with pathological data is missing, this work incorporated a contrastive loss constraint into the framework of the DC network. From the results in Fig. <ref>, it may be seen that in the absence of contrastive constraint, the model trained with paired CT/pathological images maintained its performance, however, it showed a substantial drop in performance when it was trained solely on CT images. This indicates that the contrastive learning module plays a relatively crucial role in leveraging pathology priors to aid CT analysis. The introduction of pathology information has been proved to be pivotal in model optimization. Moreover, to assess the impact of expanded single CT images on the model’s performance, an ablation study was conducted by keeping the model structure unchanged while training the model with only paired CT/pathological images. From the comparison results in Fig. <ref>, it can be concluded that the extra independent CT data also contributed to the overall performance improvement of the model. Although the improvements are not very remarkable, which could be attributed to the limited number of extra-independent CT samples (only 200 cases), increasing the number of CT images could reveal more pronounced differences. Overall, our proposed model offers a more accurate diagnosing aid of lung cancer pathological subtypes, and the concept of dynamic convolution module can be applied to other multi-modality scenarios, providing a new direction for addressing complex multi-modality data challenges. There is also room for improvements in our model. According to the results in external evaluations, our model gives more accurate predictions compared to other SoTAs, while the multi-center effect still exits due to the variations in equipment or operation settings. In future work, the compensation of the multicenter effect with interpretable models will be studied, where the more interpretable index between CT and pathological images could be introduced into the model to improve its generalization ability. § CONCLUSION In this research, we introduce an innovative deep learning model designed for the classification of different subtypes of lung cancer. Our model features a combined contrastive learning module that leverages pathological features as prior knowledge to enhance performance even when using standalone CT input. Additionally, we have incorporated a dynamic convolution module capable of generating convolution kernels tailored to input features of varying dimensions. This empowers our model to seamlessly handle both paired CT/pathological images and standalone CT images. Extensive experimentation conducted on a substantial multi-center dataset highlights the superior performance of our model compared to other state-of-the-art methods. This superiority is evident through improved metrics such as ACC, AUC, and F1-score results. Furthermore, the strategic integration of pathological priors, as proposed in our approach, holds significant potential for extension into even more intricate and challenging applications. 00 in1 A. L. Oliver, “Lung cancer: Epidemiology and screening,” Surgical Clinics, vol. 102, no. 3, pp. 335–344, 2022. in3 M. Reck and K. F. Rabe, “Precision diagnosis and treatment for advanced non–small-cell lung cancer,” New England Journal of Medicine, vol. 377, no. 9, pp. 849–861, 2017. in4 W. D. 
Travis et al., “The 2015 World Health Organization classification of lung tumors: impact of genetic, clinical and radiologic advances since the 2004 classification,” Journal of thoracic oncology, vol. 10, no. 9, pp. 1243–1260, 2015. in5 W. D. Travis, “Lung cancer pathology: current concepts,” Clinics in chest medicine, vol. 41, no. 1, pp. 67–85, 2020. in6 V. Relli, M. Trerotola, E. Guerra and S. Alberti, “Abandoning the notion of non-small cell lung cancer,” Trends in molecular medicine, vol. 25, no. 7, pp. 585–594, 2019. in7 B. Y. Wang, J. Y. Huang, H. C. Chen, C. H. Lin, S. H. Lin, W. H. Hung and Y. F. Cheng, “The comparison between adenocarcinoma and squamous cell carcinoma in lung cancer patients,” Journal of cancer research and clinical oncology, vol. 146, pp. 43–52, 2020. in8 R. S. Herbst, D. Morgensztern and C. Boshoff, “The biology and management of non-small cell lung cancer,” Nature, vol. 553, no.7689, pp. 446–454, 2018. in9 M. Jamal-Hanjani et al., “Tracking the evolution of non–small-cell lung cancer,” New England Journal of Medicine, vol. 376, no. 22, pp. 2109–2121, 2017. in10 G. P. Kalemkerian et al., “Molecular testing guideline for the selection of patients with lung cancer for treatment with targeted tyrosine kinase inhibitors: American Society of Clinical Oncology Endorsement of the College of American Pathologists/International Association for the Study of Lung Cancer/Association for Molecular Pathology Clinical Practice Guideline Update,” Journal of clinical oncology: official journal of the American Society of Clinical Oncology, vol. 36, no. 9, pp. 911, 2018. in11 G. Scagliotti et al., “Treatment-by-histology interaction analyses in three phase III trials show superiority of pemetrexed in nonsquamous non-small cell lung cancer,” Journal of Thoracic Oncology, vol. 6, no. 1, pp.64–70, 2011. in12 L. G. Collins, C. Haines, R. Perkel and R. E. Enck, “Lung cancer: diagnosis and management,” American family physician, vol. 75, no. 1, pp. 56–63, 2007. in13 P. Nanavaty, M. S. Alvarez and W. M. Alberts, “Lung cancer screening: advantages, controversies, and applications,” Cancer control, vol. 21, no. 1, pp. 9–14, 2014. in14 H. S. Gharraf, S. M. Mehana and M. A. ElNagar, “Role of CT in differentiation between subtypes of lung cancer; is it possible?” The Egyptian Journal of Bronchology, vol. 14, no. 1, pp. 1–7, 2020. in15 S. J. Adams, E. Stone, D. R. Baldwin, R. Vliegenthart, P. Lee and F. J. Fintelmann, “Lung cancer screening,” The Lancet, vol. 401, no. 10374, pp. 390–408, 2023. in16 P. B. Bach et al., “Benefits and harms of CT screening for lung cancer: a systematic review,” Jama, vol. 307, no. 22, pp. 2418–2429, 2012. in17 Y. LeCun, Y. Bengio and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, pp. 436–444, 2015. in18 K. Suzuki, “Overview of deep learning in medical imaging,” Radiological physics and technology, vol. 10, no. 3, pp. 257–273, 2017. in19 D. Shen, G. Wu and H. I. Suk, “Deep learning in medical image analysis,” Annual review of biomedical engineering, vol. 19, pp.221-248, 2017. in20 L. Cai, J. Gao and D. Zhao, “A review of the application of deep learning in medical image classification and segmentation,” Annals of translational medicine, vol. 8, no. 11, 2020. in21 W. Wang, D. Liang, Q. Chen, Y. Iwamoto, X. H. Han, Q. Zhang, H. Hu, L. Lin and Y. W. Chen, “Medical image classification using deep learning,” Deep learning in healthcare: paradigms and applications, pp. 33–51, 2020. in22 A. 
Hosnyet al., “Deep learning for lung cancer prognostication: a retrospective multi-cohort radiomics study,” PLoS medicine, vol. 15, no. 11, pp. e1002711, 2018. in23 S. Tomassini, N. Falcionelli, P. Sernani, L. Burattini and A. F. Dragoni, “Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey,” Computers in Biology and Medicine, vol. 146, pp. 105691, 2022. in24 H. Liu, Z. Jiao, W. Han and B. Jing, “Identifying the histologic subtypes of non-small cell lung cancer with computed tomography imaging: A comparative study of capsule net, convolutional neural network, and radiomics,” Quantitative Imaging in Medicine and Surgery, vol. 11, no. 6, pp. 2756, 2021 in25 H. Xiao, Q. Liu and L. Li, “MFMANet: Multi-feature Multi-attention Network for efficient subtype classification on non-small cell lung cancer CT images,” Biomedical Signal Processing and Control, vol. 84, pp. 104768, 2023. in26 C. de Margerie-Mellon and G. Chassagnon, “Artificial intelligence: A critical review of applications for lung nodule and lung cancer,” Diagnostic and Interventional Imaging, 2022. in27 R. Nooreldeen and H. Bach, “Current and future development in lung cancer diagnosis,” International journal of molecular sciences, vol. 22, no. 16, pp. 8661, 2021. in28 L. B. Rorke, “Pathologic diagnosis as the gold standard,” Cancer, vol. 79, no. 4, pp. 665–667, 1997. in29 J. D. Minna, J. A. Roth and A. F. Gazdar, “Focus on lung cancer,” Cancer cell, vol. 1, no. 1, pp. 49–52, 2002. in30 B. Prabhakar, P. Shende and S. Augustine, “Current trends and emerging diagnostic techniques for lung cancer,” Biomedicine & Pharmacotherapy, vol. 106, pp. 1586–1599, 2018. in31 C. C. Wu, M. M. Maher and J. A. O. Shepard, “Complications of CT-guided percutaneous needle biopsy of the chest: prevention and management,” American Journal of Roentgenology, vol. 196, no. 6, pp. W678–W682, 2011. in32 L. Cai, Z. Wang, H. Gao, D. Shen and S. Ji, “Deep adversarial learning for multi-modality missing data completion,” in Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 2018, July, pp. 1158–1166. in33 Y. Chen, X. Dai, M. Liu, D. Chen, L. Yuan and Z. Liu, “Dynamic convolution: Attention over convolution kernels,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 11030–11039. in34 M. H. Guo et al., “Attention mechanisms in computer vision: A survey,” Computational visual media, vol. 8, no. 3, pp. 331–368, 2022. math1 A. Dosovitskiy et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020. math5 V. Dumoulin, and F. Visin. “A guide to convolution arithmetic for deep learning,” arXiv preprint arXiv:1603.07285, 2016. math2 B. Ganeshan, V. Goh, H. C. Mandeville, Q. S. Ng, P. J. Hoskin and K. A. Miles, “Non–small cell lung cancer: histopathologic correlates for texture parameters at CT,” Radiology, vol. 266, no. 1, pp. 326–336, 2013. math3 C. Alvarez-Jimenez, A. A. Sandino, P. Prasanna, A. Gupta, S. E. Viswanath and E. Romero, “Identifying cross-scale associations between radiomic and pathomic signatures of non-small cell lung cancer subtypes: preliminary results,” Cancers, vol. 12, no. 12, pp. 3663, 2020. math4 P. Khosla et al., “Supervised contrastive learning,” Advances in neural information processing systems, vol. 33, pp. 18661–18673, 2020. es1 S. Al-Janabi, A. Huisman, A. Vink, R. J. Leguit, G. J. A. Offerhaus, F. J. 
Ten Kate, M. R. Van Dijk and P. J. Van Diest, “Whole slide images for primary diagnostics in dermatopathology: a feasibility study,” Journal of clinical pathology, vol. 65, no. 2, pp. 152–158, 2012. es2 A. Vahadane et al., “Structure-preserved color normalization for histological images,” IEEE 12th International Symposium on Biomedical Imaging (ISBI), pp. 1012–1015, 2015. es3 D. Anand, G. Ramakrishnan and A. Sethi, “Fast GPU-enabled color normalization for digital pathology,” 2019 International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 219–224, 2019. es4 Automated Slide Analysis Platform.[Online]. Available: https://computationalpathologygroup.github.io/ASAP. Accessed on: July 15, 2021. es5 P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee and G. Gerig, “User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability,” Neuroimage, vol. 31, no. 3, pp. 1116–1128, 2006. rs2 K. He, X. Zhang, S. Ren and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778. rs5 Z. Liu et al., “Swin transformer: Hierarchical vision transformer using shifted windows,” Proceedings of the IEEE/CVF international conference on computer vision, 2021. rs6 L. Yuan et al., “Tokens-to-token vit: Training vision transformers from scratch on imagenet,” Proceedings of the IEEE/CVF international conference on computer vision, 2021. rs3 H. Li, Q. Song, D. Gui, M. Wang, X. Min and A. Li, “Reconstruction-Assisted Feature Encoding Network for Histologic Subtype Classification of Non-Small Cell Lung Cancer,” IEEE journal of biomedical and health informatics vol. 26,9 pp. 4563-4574, 2022. rs4 Y. Luo, W. Liu, T. Fang, Q. Song, X. Min, M. Wang and A. Li, “Carl: Cross-aligned representation learning for multi-view lung cancer histology classification,” International Conference on Medical Image Computing and Computer-Assisted Intervention. 2023.
http://arxiv.org/abs/2407.11938v1
20240716172831
A Note on the Phase Law and Light Curve of Comet Tsuchinshan-ATLAS (C/2023 A3)
[ "Zdenek Sekanina" ]
astro-ph.EP
[ "astro-ph.EP" ]
La Canada Flintridge, California 91011, U.S.A.; ZdenSek@gmail.com § ABSTRACT The light curve of comet Tsuchinshan-ATLAS peaked in mid-April 2024, which nearly coincided with a minimum phase angle of 2^∘.9. The question of a possible correlation between the two events has implications for the comet's overall performance. In this note I examine the light curve at times of equal phase angles to circumvent the effect and show that the comet was as bright intrinsically in late March as it was in early May. From a plot of the comet's magnitude at unit geocentric distance against the phase angle before and after its minimum on 18 April I derive a very steep phase law and a relatively flat, r^-2.55 light curve failing to fit the recently reported magnitudes. After stalling in May and June, the comet's activity enters another surge (at a time of peak phase effect), as fragmentation continues. § INTRODUCTION The approximate coincidence in mid-April 2024 between the timing of the minimum phase angle of comet Tsuchinshan-ATLAS on the one hand and the peaks on its light curve and the curve of the dust production proxy Afρ on the other hand is a controversial issue, because the correlation may or may not be fortuitous. Even if it is not, one is by no means certain that the phase effect could account for the peak in its entirety. In my recent investigation of the comet (Sekanina 2024, referred to hereafter as Paper 1) the phase effect was ignored in part because the nature of its source was not obvious and it was unclear what kind of a phase law to apply. Scattering properties of micron- and submicron-sized dust grains are normally the main factor influencing the brightness of a cometary atmosphere that needs to be corrected for using one of the existing methods, such as the model proposed by Marcus (2007). However, comet Tsuchinshan-ATLAS fails to display a tail consisting of microscopic dust, as demonstrated in Paper 1, and one should question the existence of these tiny grains, in nontrivial amounts, in the coma as well. Measurements of Afρ, near 2300 cm in late June and the first half of July[See http://astrosurf.com/cometas-obs.], down from  cm in mid-April, show that the production of dust is high, but that the dust seen in the tail was emitted at times comparable to the comet's discovery time and has been subjected to solar radiation pressure accelerations not exceeding 1 percent of the solar gravitational acceleration. The relevant particles are submillimeter-sized and larger. § INVESTIGATING THE LIGHT CURVE NEAR ITS PEAK In the absence of any information on the phase law that affects the comet's light curve, one can circumvent the problem by employing a method introduced in the following, if the same phase angle happens to occur at more than a single point along the orbit. Comet Tsuchinshan-ATLAS has satisfied this condition extensively. The phase angle reached a maximum of 7^∘.6 on 13 February 2023, shortly before its second discovery. Next, a minimum of 2^∘.6 occurred on 30 April, followed by a maximum of 9^∘.9 on 27 July and another minimum of 2^∘.0 on 30 October 2023. In 2024, the first phase-angle maximum, of 15^∘.2, took place on 17 February. Next came the already noted minimum of 2^∘.9 on 18 April and another maximum of 30^∘.6 on 3 July. As the phase angle grew, it reached 15^∘.2 on 13 May, so that the range of phase angles in late February and most of March was the same as in the first half of May. 
This circumstance has offered an excellent opportunity to compare, as a function of the phase angle, ϕ, the observed magnitudes, H, normalized to unit geocentric distance, Δ, by . In the following experiment I apply the observations from the same dataset that I used in Paper 1, except for the limitations dictated by the relevant time intervals. The CCD observations, of excellent quality, were made by A. Pearce with his 35-cm f/5 Schmidt-Cassegrain[See https://cobs.si.] and a clear filter (Pearce 2024, personal ), so that the CN emission was included (with obvious phase-angle implications). In Figure 1 I plot a total of 23 values of H_Δ from the two most recent periods of time when the phase angle was confined to a range of 6^∘ to 14^∘: from 9 March (when the comet was at a heliocentric distance of 3.44 AU) to 6 April (when it was 3.09 AU from the Sun) before the time of minimum phase angle (on 18 April) and from 27 April (at 2.81 AU from the Sun) to 9 May (2.65 AU) after the time of minimum phase angle. At first sight the plot looks like a compelling phase effect case, but …The observations cover a period of two months, yet the magnitudes H_Δ at any two times when phase angles are equal do not differ from one another by more than ±0.3 mag. For example, the comet was practically as bright on 24 March as it was on 6 May, fully six weeks later, even though the phase angles on the two dates differed by less than 0^∘.5. And on 5 April the comet was slightly brighter than on 28 April, more than three weeks later, even though the phase angles were essentially identical. So where is the comet's brightening with decreasing heliocentric distance? To investigate this problem more closely, I note that if the intrinsic brightness varied as r^-3.5, the comet should have brightened between 9 March (when its heliocentric distance, r, was 3.44 AU) and 9 May (2.65 AU) by 1 mag. Even though the phase angle was in fact greater on 9 March by nearly 1^∘, Pearce's observations show that the comet was on 9 May intrinsically brighter by merely 0.3 mag. The data from the period of 9 March through 6 April, before the minimum phase angle, are fitted in Figure 1 with a mean residual of ±0.06 mag by H_Δ = 8.95 + 0.101 ϕ, ± 0.07 ±0.007 while the data from the period of 27 April through 9 May, after the minimum, are with the same mean residual fitted by H_Δ = 9.49 + 0.042 ϕ. ± 0.08 ±0.009 These two lines intersect nominally at a point marked by a diamond in Figure 1; it is given by ϕ = 9^∘.1, H_Δ = 9.87, the phase angle referring to 29.9 March and 2.9 May before and after the minimum, respectively. So at these two times the comet was equally bright and was at exactly the same phase angle. The heliocentric distances were 3.19 AU and 2.74 AU, respectively, implying an expected, but undelivered brightening of 0.6 mag, positively independent of the phase law. On certain assumptions, the slopes of the lines in Figure 1 provide information on both the phase law and the light curve. Suppose the normalized magnitude varies as a phase angle, H_Δ∼ϕ, so that the standard light-curve equation is written in expanded form thus H_Δ = H_0 + 2.5 n log r + b ϕ, where H_0 is the absolute magnitude (at unit heliocentric and geocentric distances), b is a constant phase coefficient, and n is the power of heliocentric distance, r, that describes the rate of variation of the comet's intrinsic brightness, r^-n. Differentiating Equation (5) with respect to ϕ, I get, on the assumption that n is a constant, dH_Δ/dϕ = 2.5 n ∂log r/∂ϕ + b. 
I replace ∂log r/∂ϕ with an average slope of a straight line fitted through the function at the times of Pearce's observations and I find that the slope comes out to be +0.005911 deg^-1 from the dataset before the minimum phase angle and -0.003735 deg^-1 from the dataset after the minimum. Inserting these values and those of d H_Δ/dϕ from Equations (1) and (2) into Equation (6), I get respectively for the two periods: 0.101 = b+0.01478 n (for 9 Mar-6 Apr), 0.042 = b-0.00834 n (for 27 Apr-9 May). The solution is b = 0.063 mag deg^-1, n = 2.55, which shows that fitting the data requires a huge phase effect, with the coefficient b about twice the typical value for asteroids (0.030–0.035 mag deg^-1) and four to eight times the averages implied by comet dust-grain models (a standard model by Marcus gives at a phase angle of 6^∘ and 0.016 mag deg^-1 at 14^∘). Another surprise is that the light curve is much more flat than are the current estimates for n, near 3.5. The final step is to insert the parameters b and n back into Equation (5) and compute the absolute magnitude H_0 for each observation. Plotted in Figure 2 as a function of time, the absolute brightness was decreasing with time at an accelerating rate, from magnitude 6.0 near 9 March to 6.6 two months later. An estimated rate of fading in early May was about 0.5 mag per month. This rate would have been higher if a less extreme value of the phase coefficient was used. Because the rate of brightening is known to have a tendency to diminish with decreasing heliocentric distance, one can go a step further and replace n in the second equation of (7) with (Δ n > 0) and compute solutions as a function of Δ n. For example, for  one would get and before the phase angle minimum and 2.23 after it. A decreasing steepness of the light curve appears to have a rather insignificant effect on the phase coefficient. § DISCUSSION AND CONCLUSIONS In an effort to understand the role of the phase law on the absolute magnitude of comet Tsuchinshan-ATLAS, I examined its light curve when the phase angle varied between 6^∘ and 14^∘. The first period extended from 9 March to 6 April 2024 (3.44–3.09 AU from the Sun), before the minimum phase angle was reached on 18 April, the second period from 27 April to 9 May (2.81–2.65 AU) after the minimum. The choice of the lower boundary of 6^∘ was determined primarily by the need of having a sufficient gap between the two periods (I adopted three weeks), while the choice of the upper boundary was dictated by the maximum near 15^∘ in mid-February. The phase angle span of 8^∘ was enough to closely inspect the near-pairing of a number of the data points. At an equal phase angle, the comet's brightness before and after mid-April, was found, after applying correction for the geocentric distance, to be nearly equal, implying intrinsic fading. Comparison of the rates of brightness variations with the phase angle in the periods before and after the time of its minimum allowed me to determine the phase coefficient and the rate of variation with heliocentric distance. The solution offered rather unusual results: a very steep phase law, with the coefficient exceeding 0.06 mag deg^-1, and a light curve more flat than expected, with . Inserting the parameters into the equation of the light curve led eventually to the determination of the absolute magnitude as a function of time over the entire investigated period. 
The result was a clear systematic decrease in the absolute brightness from magnitude 6.0 in early March to 6.6 in early May, at an accelerating rate that reached 0.5 mag per month around 9 May. If this trend has continued, the absolute magnitude would now be close to 8, possibly fainter, at the time of this writing (mid-July). With , , and on 14 July, the comet would be close to apparent magnitude 13. Instead, it is observed near magnitude 10. Indeed, the comet was recently reported to brighten (Pearce, personal ) just as the phase angle was peaking at 30^∘.6! The bizarre law and this remarkable anticorrelation suggest that the phase  have little effect. Rather, fluctuations linked to  of increased episodic activity dominate the comet's performance. The comet may be experiencing another surge, as a new cluster of fragments was activated for a short period of time following their separation from the nucleus.
REFERENCES
Marcus, J. N. 2007, Int. Comet Quart., 29, 39
Sekanina, Z. 2024, eprint arXiv:2407.06166 (Paper 1)
http://arxiv.org/abs/2407.12676v1
20240717155750
CoSIGN: Few-Step Guidance of ConSIstency Model to Solve General INverse Problems
[ "Jiankun Zhao", "Bowen Song", "Liyue Shen" ]
cs.CV
[ "cs.CV", "eess.IV" ]
CoSIGN J. Zhao et al. University of Michigan jkzhao.nku@gmail.com {bowenbw, liyues}@umich.edu CoSIGN: Few-Step Guidance of ConSIstency Model to Solve General INverse Problems Jiankun Zhao1Equal contribution. Bowen Song1^⋆0009-0005-1285-3048 Liyue Shen10000-0001-5942-3196 ==================================================================================================== § ABSTRACT Diffusion models have been demonstrated as strong priors for solving general inverse problems. Most existing Diffusion model-based Inverse Problem Solvers (DIS) employ a plug-and-play approach to guide the sampling trajectory with either projections or gradients. Though effective, these methods generally necessitate hundreds of sampling steps, posing a dilemma between inference time and reconstruction quality. In this work, we try to push the boundary of inference steps to 1-2 NFEs while still maintaining high reconstruction quality. To achieve this, we propose to leverage a pretrained distillation of diffusion model, namely consistency model, as the data prior. The key to achieving few-step guidance is to enforce two types of constraints during the sampling process of the consistency model: soft measurement constraint with ControlNet and hard measurement constraint via optimization. Supporting both single-step reconstruction and multistep refinement, the proposed framework further provides a way to trade image quality with additional computational cost. Within comparable NFEs, our method achieves new state-of-the-art in diffusion-based inverse problem solving, showcasing the significant potential of employing prior-based inverse problem solvers for real-world applications. Code is available at: <https://github.com/BioMed-AI-Lab-U-Michgan/cosign>. § INTRODUCTION Inverse problems cover a wide spectrum of fundamental image restoration tasks, such as inpainting, super-resolution and deblurring <cit.>. These problems aim at recovering the original signals x given downgraded measurements y. A forward operator 𝒜, either linear or nonlinear, determines the process transforming original signal to downgraded measurements. Due to the sparse sampling nature of this process, 𝒜 is analytically irreversible, causing the problem to be ill-posed <cit.>. Traditionally, one can solve inverse problems with mathematical regularization <cit.>, or train an end-to-end neuron network to map the measurement to its corresponding original signal <cit.>. However, due to lack of creativity, these methods generally suffer from over-smoothness and blurry artifacts. Emergence of powerful deep generative models <cit.> provided a new approach to remedy information lost in the forward process. This concept was first applied in an unsupervised manner, where an unconditional generative model is utilized as a prior. By incorporating the prior with various measurement constraints, previous works on this line <cit.> explored methods to sample from posterior distribution p(x|y) given pretrained unconditional generative models. Recent works <cit.> have found diffusion models particularly suitable as generative prior in this approach. Not only can diffusion models generate high-fidelity samples without adversarial training, but their iterative sampling process is also naturally compatible with plug-and-play measurement constraints. A batch of these methods, named Diffusion-based Inverse problem Solvers (DIS) <cit.>, were developed to "hijack" the sampling trajectory towards measurement-consistent samples. 
However, reliance on hundreds of iterative sampling steps also limits their application in real-time or high-dimensional scenarios, such as video processing and 3D imaging <cit.>. Inspired by the success of unsupervised DISs, another line of works <cit.> try to utilize diffusion models to solve inverse problems in a supervised manner. These works directly model the posterior distribution p(x|y) by taking the measurements as inputs during model training. Rather than utilizing an off-the-shelf generative prior, these methods train a generative model from scratch for each task, which limits their performance on out-of-domain inverse tasks. Besides, these methods also rely on iterative sampling to generate high-fidelity samples. To address these challenges, continuous efforts have been made to reduce sampling steps of diffusion models. One promising approach is by distilling a pretrained diffusion model into an implicit model that directly maps noise to samples <cit.>. Among them, Consistency Model (CM) <cit.> was recently proposed as an efficient distillation method to enable single-step sampling of diffusion-based generative models. Despite the powerful few-step generation ability of CM, it is not a trivial task to directly leverage CM as a prior for solving inverse problems. In <ref>, we illustrate why existing DISs, including the inverse problem solving algorithm proposed in <cit.>, fail to guide the unconditional sampling trajectory of CM towards a measurement-consistent sample under the few-step setting. CM prior differs from diffusion prior in that it predicts x_0 rather than their expectation x̂_̂0̂, whereas most DISs rely on x̂_̂0̂ to approximate the likelihood score ∇_x_tlog p_t(y|x_t). Inaccuracy in this approximation leads the sample away from the authentic data distribution <cit.>, leaving it an open problem to incorporate the CM priors into few-step inverse problem solvers. In this work, we focus on tackling the challenge of long sampling process in previous methods. Motivated by CM, we propose CoSIGN, a few-step guidance method of ConSIstency model to solve General INverse problems. CoSIGN utilize the strong data prior from CM with improved sampling efficiency to generate results in only a single or few steps. To be specific, we first propose to guide the sampling process of consistency model with a soft measurement constraint: training an additional encoder, namely ControlNet <cit.>, over the Consistency Model backbone. The ControlNet takes the downgraded measurement or its pseudo-inverse as the input, and controls the CM output to be consistent with the measurement. With the pretrained consistency model as a frozen backbone, we only need to train the ControlNet to guide the sampling process of CM in a single step (see  <ref>). Moreover, to further improve fidelity of the generated reconstructions, we propose to plug in a hard measurement constraint module to explicitly guarantee measurement consistency and reduce distortion. The proposed framework is capable of solving linear, nonlinear, noisy and blind inverse problems, as long as paired data can be constructed to train the ControlNet. Our main contributions can be concluded as follows: * We propose a few-step inverse problem solver with improved sampling efficiency by leveraging the pretrained consistency model as a data prior. 
* We propose to guide the conditional sampling process of consistency models with both soft measurement constraint and hard measurement constraint, which enables generating high-fidelity, measurement-consistent reconstructions within 1-2 NFEs; * Experiments demonstrate superior few-step reconstruction ability of our method on four tasks of linear and non-linear inverse problems, where our method achieves state-of-the-art within low NFE regime, while competitive with those methods generated using about 1000 NFEs. § RELATED WORK Generative Inverse Problem Solvers. Deep learning based approaches for solving inverse problems can be generally categorized into two types: supervised methods and unsupervised methods <cit.>. Generative models were first introduced into unsupervised approaches as a data prior. Early works <cit.> explore the latent space of Generative Adversarial Networks (GAN) <cit.>, guided by gradients or projections towards the measurement. With the emergence of generative diffusion models, recent works <cit.> integrated such guidance with the iterative sampling process of diffusion models <cit.> for solving inverse problems. These methods, dubbed Diffusion-base Inverse problem Solvers (DIS), bifurcated into two main streams based on ways to enforce measurement consistency. Soft approaches <cit.> attempted to keep samples on generative manifolds by taking gradients through the neural network, while hard approaches <cit.> project the measurement back to the signal space and replace "range space" <cit.> of current sample with the projection. Some recent studies explored more accurate posterior approximation with normalizing flow <cit.> or partical filtering <cit.>. But most DIS methods, no matter hard or soft, requires a lot of sampling steps to obtain measurement-consistent samples. Advancements of unsupervised DIS methods have inspired the use of probabilistic generative models under supervised settings. Instead of fitting p(x), Palette <cit.> trained an image-to-image conditional diffusion model that directly fits p(x|y). I^2SB <cit.> and CDDB <cit.> algorithms narrowed the gap between x_T and x_0 by building a Schrödinger Bridge, improving reconstruction quality in extremely low NFE (number of function evaluations) regime of less than 10. Compared with end-to-end supervised approaches mapping y to x <cit.>, these methods significantly improved image quality due to diffusion model priors. However, since the entire model is trained for a particular task from scratch, these methods usually generalize poorly on out-of-domain tasks. Guiding Diffusion Models. Existing works have explored methods to guide diffusion models with class labels <cit.>, texts <cit.> and images <cit.>. Classifier guidance <cit.> and Classfier-Free Guidance (CFG) <cit.> pioneered class-conditioned generation. Benefiting from CFG, large text-to-image diffusion models like Latent Diffusion <cit.>, Imagen <cit.> and Glide <cit.> enjoyed great success by leveraging pre-trained text and image encoder to inject guidance into sampling process of diffusion models. ControlNet <cit.> provided a fine-tuning method to adapt these pre-trained models to specific image-to-image translation tasks. In tasks like sketch-to-image and depth-to-image, ControlNet showed superior ability in enhancing semantic and structural similarity. However, its ability of enforcing measurement consistency in general inverse problems still remains under-explored. Accelerating Diffusion Models. 
Slow sampling speed has been limiting the real-world applications of diffusion models in generation of 3D scenes <cit.>, videos <cit.> and speeches <cit.>. Recent works proposing to switch from SDE <cit.> to ODE <cit.> and adopt higher-order ODE solvers <cit.> managed to accelerate sampling process to 5-10 NFEs. Unfortunately, these ODE solvers cannot be directly applied to the likelihood score ∇_x_tlog p_t(y | x_t) in DIS. Another line of works <cit.> explored methods to distill the sampling trajectory of pre-trained diffusion models. Although these methods efficiently accelerate unconditional image generation, how can these distillation models help accelerate inverse problem solving has not been thoroughly studied yet. § BACKGROUND Inverse Problem Solving aims at reconstructing an unknown signal x∈ℝ^n based on the measurements y∈ℝ^m . Formally, y derives from a forward process determined by  <ref>, y = 𝒜(x)+ϵ where 𝒜 can be either a linear operator like Radon transformation in sparse-view CT reconstruction, or a nonlinear operator like JPEG restoration encoder. ϵ denotes random noise along with the measurement acquisition. Inverse problem becomes ill-posed when it comes to a case where m<n. In other words, 𝒜^-1 does not exist, thus there are multiple x satisfying  <ref>. In this case, some format of data prior is required to recover the original signal x. Solving inverse problem is to find an optimal x that is both consistent with the measurement y and the prior knowledge of p(x). The solution can be usually formulated as: x̂=z∈ℝ^nmin{𝒜(z)-y^2+ρℛ(z)} where the first term 𝒜(z)-y^2 optimizes results towards measurement consistency and the second term ρℛ(z) regularizes the result with the knowledge of p(x). Generative models can also be utilized as the data prior in the regularization term. For example, diffusion models can be trained to capture the prior data distribution p(x) via score matching <cit.>. In this case, splitting prior score from posterior with Bayes' Rule is equivalent to solving <ref> with gradient descent:[t∈[0, T] denote the index of a particular noise level in diffusion model. Please refer to  <cit.> for details about diffusion models.] ∇_x_tlog p_t(x_t|y)=∇_x_tlog p_t(y|x_t)+∇_x_tlog p_t(x_t) But this equivalency only holds when assuming<cit.>: ∇_x_tlog p_t(y|x_t)≃-1/λ∇_x_t𝒜(x̂_0(x_t))-y^2 where λ is a hyperparameter controlling step size, and x̂_0(x_t) denotes an estimation of clean sample x_0 based on intermediate x_t with Tweedie's formula. Such approximation brings significant error when discretizing the diffusion process into only a few steps. This motivates us to turn a pre-trained diffusion model fitting p(x) into one that directly fits p(x|y), instead of approximating likelihood hundreds of times during sampling process. Consistency Model (CM) is a new family of generative models distilled from diffusion models that can generate high-quality image samples in a single step. Furthermore, these samples can be refined through multistep sampling of CM. Rather than estimating 𝔼(x_0|x_t) as diffusion models, CM aims at learning a direct mapping from any point on an ODE trajectory to its origin, i.e., f_θ: (x_t, t) ↦x_ϵ[ϵ is a sufficient small positive number to stabilize training.]. The key to successfully building f is enforcing two constraints: self-consistency and boundary constraint. Self-consistency states that for all points on a particular trajectory, f must output the same origin point x_0 (see  <ref>). 
Consistency Models (CM) are a new family of generative models distilled from diffusion models that can generate high-quality image samples in a single step. Furthermore, these samples can be refined through multistep sampling of the CM. Rather than estimating 𝔼(x_0|x_t) as diffusion models do, a CM aims at learning a direct mapping from any point on an ODE trajectory to its origin, i.e., f_θ: (x_t, t) ↦x_ϵ[ϵ is a sufficiently small positive number used to stabilize training.]. The key to successfully building f is enforcing two constraints: self-consistency and the boundary constraint. Self-consistency states that for all points on a particular trajectory, f must output the same origin point (see <ref>). f_θ(x_t, t)=f_θ(x_t^', t^') ∀ t, t^'∈[ϵ, T] Meanwhile, the boundary constraint ensures that this origin point is the authentic data itself (see <ref>). f_θ(x_ϵ, ϵ)=x_ϵ Inspired by <cit.>, the boundary constraint is enforced by introducing a skip connection: f_θ(x, t)=c_skip(t)x+c_out(t)F_θ(x, t) where c_skip(t) and c_out(t) are specially designed functions satisfying c_skip(ϵ)=1 and c_out(ϵ)=0, while F_θ(x, t) is a U-Net parameterized by θ. A CM can be obtained either by distilling a pre-trained diffusion model (dubbed consistency distillation, CD) or by training from scratch (dubbed consistency training, CT). In both cases, the loss functions are designed to guarantee self-consistency. Specifically, these two losses have the same form[f_θ denotes the "online network" while f_θ^- denotes the "target network". Please refer to <cit.> for details about exponential moving average (EMA).]: ℒ_cm=d(f_θ(x_t_n+1, t_n+1), f_θ^-(x_t_n, t_n)) where d(·, ·) is a distance function like L1, L2 or LPIPS <cit.>. In CD, x_t_n is predicted by the pre-trained diffusion model based on x_t_n+1. In CT, by contrast, x_t_n represents x_0 perturbed by the same noise ϵ∼𝒩(0, I) as x_t_n+1. We choose CD in this work since the distilled trajectories are closer to those of diffusion models, and can thus generate images with better quality.
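As a sketch of the skip-connection parameterization above, the coefficient schedules below follow the commonly used EDM-style choices from the consistency-model literature; the values σ_data = 0.5 and ϵ = 0.002 are assumptions, chosen so that c_skip(ϵ) = 1 and c_out(ϵ) = 0 hold.

```python
import math

SIGMA_DATA, EPS = 0.5, 0.002  # assumed values, following common CM configurations

def c_skip(t: float) -> float:
    return SIGMA_DATA**2 / ((t - EPS) ** 2 + SIGMA_DATA**2)

def c_out(t: float) -> float:
    return SIGMA_DATA * (t - EPS) / math.sqrt(SIGMA_DATA**2 + t**2)

def consistency_fn(F_theta, x, t: float):
    # f(x, t) = c_skip(t) * x + c_out(t) * F_theta(x, t); at t = EPS this returns x,
    # enforcing the boundary constraint, while F_theta is the trainable U-Net.
    return c_skip(t) * x + c_out(t) * F_theta(x, t)
```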
§ METHOD In this section, we discuss how we enable few-step inverse problem solving with pre-trained consistency models as data priors. Firstly, different from previous works which aim at improving reconstruction quality in the low NFE region <cit.>, we are motivated to develop a method that enables high-fidelity single-step reconstruction. To be specific, as shown in <ref>, we propose to control the sampling process of the consistency model with a soft measurement constraint by building a ControlNet <cit.> over the consistency model backbone (<ref>). To feed the measurement into the ControlNet as a condition, we reformulate the inverse problems by adding an initial reconstruction stage that transforms the measurements of different inverse problems into the signal space. Secondly, while most image-to-image translation tasks mainly seek semantic similarity, inverse problems require strict measurement consistency. Thus, we further introduce a hard measurement constraint in the multistep sampling scheme to explicitly guarantee consistency with the measurements in a more robust way (<ref>). §.§ Soft Measurement Constraint via ControlNet In <ref>, we introduced two essential constraints when training consistency models for unconditional image generation: self-consistency and the boundary constraint. Under this setting, it is expected that each x_t belongs to only one trajectory, and this trajectory ends at a deterministic point <cit.>. However, when solving inverse problems, which trajectory an x_t belongs to depends not only on x_t itself but also on the measurement y. In other words, we can expand <ref> to: f_θ(x_t, y, t)=f_θ(x_t^', y, t^') ∀ t, t^'∈[ϵ, T] One straightforward way to satisfy <ref> is to train or distill a conditional consistency model with <ref>, where x_t_n is derived by adding the same noise to x_0 under the training setting, or predicted by the teacher diffusion model under the distillation setting. ℒ_con=d(f_θ(x_t_n+1, y, t_n+1), f_θ^-(x_t_n, y, t_n)) However, we empirically find that optimizing ℒ_con leads to slow convergence. Hence, we are motivated to circumvent this problem by exploring other ways to ensure 𝒜(f(x_t, y, t))=y. Specifically, we introduce a soft measurement constraint in addition to self-consistency. One can simply enforce the soft measurement constraint with the following loss: ℒ_recon=d(f_θ(x_t, y, t), x_0) where x_0 is the ground truth. Note that f_θ(x_t, y, t)=x_0 is a stronger condition than <ref>. Training a U-Net from scratch with ℒ_recon would encourage f_θ(x_t, y, t) to approximate 𝔼[x_0|x_t, y], resulting in a diffusion model rather than a consistency model. To strike a balance between the soft measurement constraint and self-consistency, we follow <cit.> to freeze the unconditional CM backbone while training an additional encoder (a.k.a. ControlNet) over it to guide the sampling process of the CM and enable conditional generation. Since we explicitly keep the CM backbone frozen, the single-step generation ability can be inherited from the pretrained CM even if we train the ControlNet with ℒ_recon. The structure of the whole model is depicted in Stage 2 of <ref>. The encoder and the middle block of the ControlNet share the same architecture as their counterparts in the CM backbone. Their parameters are also initialized with those of the pretrained CM backbone. In this way, the ControlNet inherits the outstanding perceptual ability of the pretrained CM. Additionally, zero convolutions are added to the inlet and outlet of the ControlNet to avoid disturbance from random noise in the initial training stage. By training this elaborately initialized network with ℒ_recon defined in <ref>, our model progressively learns to be consistent with the condition while maintaining the single-step generation ability. It is worth noticing that ControlNet was originally designed for image-to-image translation tasks, whereas in many inverse problems the measurement y has a different size from, or even lies in a different space than, the signal x. For instance, in sparse-view CT reconstruction, the measurement is a sinogram rather than an image (see Stage 1 of <ref>). Thus, we introduce an initial reconstruction stage before the guided consistency model to adapt the measurements of different inverse problems into inputs for the ControlNet. Specifically, we categorize inverse problems into three types, and adopt different initial reconstruction methods for each type. First, all linear inverse problems have a pseudo-inverse operator A^† satisfying AA^†A≡A <cit.>. In tasks like inpainting and super-resolution, A^† can be easily formed as A^†=A and A^†∈ℝ^s^2× 1=[1, ⋯, 1 ]^T respectively, where s denotes the super-resolution scale. However, in other linear tasks like sparse-view CT reconstruction, A^† can only be constructed by SVD and complex Fourier transforms <cit.>, or estimated by conjugate gradient <cit.>. Instead of using these time-consuming methods to derive A^†, we can use more efficient reconstruction methods like filtered back-projection (FBP), a commonly used standard for CT reconstruction. Lastly, for nonlinear problems and problems with an unknown forward operator, we merely input the resized measurement as the condition and let the ControlNet learn a transition from measurement to signal.
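A minimal PyTorch-style sketch of one training step with ℒ_recon is given below, assuming an EDM-style noise perturbation, an LPIPS distance for d(·,·), and hypothetical controlnet/cm_backbone interfaces; only the ControlNet parameters are in the optimizer, the backbone stays frozen.

```python
import torch
import lpips  # perceptual distance used as d(.,.) in this sketch

perceptual_d = lpips.LPIPS(net="vgg")

def train_step(cm_backbone, controlnet, optimizer, x0, cond, t):
    """One step of L_recon = d(f_theta(x_t, y, t), x_0); cond is the initial
    reconstruction of the measurement y, already mapped to the signal space."""
    x_t = x0 + t * torch.randn_like(x0)               # perturb the clean sample
    residuals = controlnet(x_t, cond, t)              # features entering via zero-convs
    x0_pred = cm_backbone(x_t, t, control=residuals)  # frozen consistency model
    loss = perceptual_d(x0_pred, x0).mean()
    optimizer.zero_grad()
    loss.backward()     # only ControlNet parameters are registered in the optimizer
    optimizer.step()
    return loss.item()
```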
§.§ Hard Measurement Constraint and Multistep Sampling While most image-to-image translation tasks mainly seek semantic similarity, inverse problems require strict measurement consistency. Thus, we further introduce a hard measurement constraint in the multistep sampling scheme to explicitly guarantee consistency with the measurements in a more robust way. The hard measurement constraint is an optimization step that guides the sample to be consistent with the measurement. Theoretically, most optimization methods in existing DISs, either soft or hard, can be applied here as a plug-and-play module, but we empirically found hard approaches more effective under the few-step setting. The goal is to find the projection of the prior "clean" sample x_0 onto the measurement-consistent manifold. Let ϵ be the tolerance threshold for the noise in the measurement y; the optimization objective is given by x̂_0 = min_z∈ℝ^n{‖z-x_0‖_2^2} s.t. ‖𝒜(z) - y‖_2^2 ≤ϵ^2, and the Lagrangian form of <ref> is given by ‖z-x_0‖_2^2 + φ‖𝒜(z) - y‖_2^2. For linear inverse problems, this optimization objective has a closed-form solution obtained by computing the pseudo-inverse <cit.>. We adopt the relaxed projection form of DDNM <cit.> as an example in our experiments, which updates the "clean" sample x_0 with <ref>. x_0=x_0 + κ(A^†y - A^†Ax_0) [κ is a hyperparameter introduced in DDNM+ to control the noise amplification level. Please refer to <cit.> for the method to derive κ from the noise level σ.] For nonlinear problems, we can directly solve the optimization objective in <ref> by gradient descent (with momentum) <cit.>. Finally, we simply skip this optional stage and rely on the ControlNet to enforce measurement consistency if A is unknown or not differentiable. Multistep sampling serves as an iterative refinement process. In each step, we send the last-round sample back to a lower noise level by perturbing it with new noise. We then denoise this newly perturbed sample with the guided consistency model, resulting in an image with sharper, more refined details. By integrating the hard measurement constraint with multistep sampling, we can obtain high-fidelity images more robustly.
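The sampling procedure with the DDNM-style hard constraint between steps can be sketched as follows; the two-level noise schedule and the operator handles A and A_pinv are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch

def hard_constraint(x0_pred, y, A, A_pinv, kappa=1.0):
    # Relaxed range-space projection: x0 <- x0 + kappa * (A^+ y - A^+ A x0)
    return x0_pred + kappa * (A_pinv(y) - A_pinv(A(x0_pred)))

def cosign_sample(f_theta, y, cond, A, A_pinv, noise_levels=(80.0, 0.7)):
    """Few-step conditional sampling: denoise with the guided CM, project onto the
    measurement-consistent set, then re-perturb to a lower noise level and repeat."""
    x0 = None
    for i, t in enumerate(noise_levels):
        if i == 0:
            x_t = t * torch.randn_like(cond)      # start from pure noise
        else:
            x_t = x0 + t * torch.randn_like(x0)   # send the sample back to noise level t
        x0 = f_theta(x_t, cond, t)                # one NFE of the guided consistency model
        x0 = hard_constraint(x0, y, A, A_pinv)    # enforce measurement consistency
    return x0
```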
§ EXPERIMENTS §.§ Experimental Settings Tasks. We evaluate our method across four distinct tasks, encompassing both linear and nonlinear inverse problems in the domains of natural and medical images. In the natural image domain, we conduct experiments on two linear inverse problems: 4× Super-Resolution (SR) and block inpainting. Additionally, we also evaluate on nonlinear deblurring, illustrating the ability of our method to address nonlinear inverse problems. For the medical image domain, we evaluate our method on the sparse-view Computed Tomography (CT) reconstruction task, which aims at reconstructing CT images from under-sampled projections (sinograms). In our experiments, sinograms are simulated with the Radon transformation using 23 projection angles equally distributed across 180 degrees. Following the baseline settings <cit.>, Gaussian noise with standard deviation σ_image=0.05 is added to the measurement space in experiments on natural images, while no additional noise is added to the sinogram in the CT reconstruction task. Datasets. For natural image restoration tasks, we use the LSUN bedroom dataset <cit.>, which consists of over 3 million training images and 300 validation images. We leverage the consistency model checkpoint pre-trained on LSUN bedroom from <cit.>, and further train the ControlNet on the training set. For medical image restoration tasks, we utilize 2D slices sampled from the AAPM LDCT dataset <cit.>. We train the model on a training set consisting of 3000 images from 40 patients, and test on 300 images from the remaining 10 patients. All images are in 256×256 resolution. Baselines. In experiments on natural images, our baselines can be categorized into three main groups: (1) End-to-end image restoration methods like SwinIR <cit.>, (2) Plug-and-play diffusion-based inverse problem solvers (DIS), such as DDRM <cit.> and DPS <cit.>, and (3) Conditional diffusion-based methods trained from scratch, such as I^2SB <cit.> and CDDB <cit.>. We also compare with the zero-shot method based on unconditional consistency models proposed in <cit.> (denoted as CM). We omit CM and DDRM in nonlinear deblurring since they require a pseudo-inverse, which nonlinear operators do not have. In experiments on medical images, we compare our method with: (1) Traditional mathematical methods like FBP, (2) Single-step deep reconstruction methods like FBP-UNet, and (3) Plug-and-play methods with a diffusion or consistency model prior, such as MCG <cit.>, DPS <cit.> and CM <cit.>. Evaluation. Considering the varied demands of natural and medical image tasks, we employ different evaluation metrics for these two sets of experiments. For natural image tasks, we adopt perceptual metrics that align better with visual quality. Specifically, we use Learned Perceptual Image Patch Similarity (LPIPS) <cit.> as a measurement of data consistency, and Fréchet Inception Distance (FID) <cit.> as a measurement of image quality. Since pixel-level metrics like Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) <cit.> prefer blurry regression outputs <cit.>, we leave those results to the Appendix. Conversely, in medical image tasks where hallucination is undesirable, we use PSNR and SSIM as evaluation metrics instead of LPIPS and FID. These metrics align with our emphasis on similarity to the ground truth rather than image quality when processing medical images. §.§ Results on Natural Image Tasks We quantitatively and qualitatively compare our method and the baselines in <ref>, <ref> and <ref>, respectively. We first compare our method with baselines using NFE<100. Aside from methods operating in the high-NFE regime, our 2-step results surpass existing methods on most tasks with both metrics. Our 2-step and single-step results achieve the best and the second-best FID, respectively, across all tasks. This demonstrates the superior image quality of our reconstruction results. It is worth noting that CoSIGN works exceptionally well on nonlinear deblurring, showcasing its ability to solve nonlinear inverse problems without a pseudo-inverse. Meanwhile, we would like to clarify that the sub-optimal performance of our method in LPIPS does not indicate data inconsistency. As depicted in <ref>, although the single-step results of SwinIR and the 2-step results of CDDB suffer from obvious over-smoothness, they outperform our method in terms of LPIPS values. We suppose that this is because LPIPS hardly tolerates hallucination, which is essential for generating high-fidelity details. Additionally, we also compare our method with baselines obtained using high NFEs. As shaded in grey in <ref>, our method achieves performance on par with or even better than these baselines while using far fewer NFEs. In conclusion, CoSIGN successfully "concentrates" the power of the diffusion model prior into a single inference step or two, accelerating the process of inverse problem solving. §.§ Results on Medical Image Tasks We report the quantitative and qualitative results on medical image tasks in <ref> and <ref>, respectively. Our method surpasses all baselines with low NFEs in terms of the PSNR metric.
For SSIM, however, we fall behind FBP-UNet. But as showcased in <ref>, our method generates sharper details while preserving high consistency with the measurements compared to baseline methods such as FBP-UNet. For baselines with 1000 NFEs, our 2-step results are still comparable in terms of both metrics and visual quality. Compared with baselines like MCG <cit.>, our method particularly excels at reconstructing clear soft tissues (<ref>). §.§ Ablation Study Our ablation study focuses on three main components of our method: the consistency model prior, the ControlNet and the hard measurement constraint. Ablation on the CM prior and the ControlNet is conducted on block inpainting, whereas ablation on the hard measurement constraint is conducted on CT reconstruction. Consistency Model Prior. In order to corroborate the effectiveness of the consistency model prior, we compare the single-step result with an end-to-end supervised model directly trained to transform pseudo-inverses of the measurements into high-fidelity images. Like the ControlNet, the end-to-end model has the same structure as the U-Net in the consistency model and is trained with the same LPIPS loss. Its parameters are initialized with the same consistency model checkpoint as well. As shown in the first row of <ref> and the first column of <ref>, the end-to-end model without the consistency model prior (denoted as "w/o CM") generates blurry results with distinct artifacts, which are alleviated when incorporating the consistency model as a prior. ControlNet. We demonstrate how the ControlNet turns an unconditional consistency model prior into a conditional model by evaluating under different guidance scales. Specifically, we multiply the guidance scale with the outputs of the ControlNet to manually tune the conditioning strength. In <ref>, we observe that quantitative results steadily improve as the guidance scale increases. Meanwhile, the visual results in <ref> gradually transition from unconditional samples to measurement-consistent reconstruction results. This gives us an intuitive interpretation of how the ControlNet guides the unconditional consistency model towards the ground truth. Sampling Steps and Hard Measurement Constraint. In <ref>, we introduced the hard measurement constraint and multistep sampling as refinement steps. In <ref> we observe that these refinement steps indeed improve performance. We do not report NFE>2 since no further improvement in performance is observed. This might be a problem inherited from the CM backbone <cit.>. § DISCUSSIONS In this section, we discuss two important issues of CoSIGN: the exact inference time, as well as performance on out-of-domain (OOD) inverse tasks and noise scales. Inference Time. We report the per-image inference time of block inpainting on a single Nvidia A40 GPU in <ref>. Our single-step reconstruction method can generate an image within 100 ms, enabling real-time applications like video interpolation. Our two-step generation speed is similar to I^2SB <cit.> and CDDB <cit.> and is also orders of magnitude faster than most previous diffusion-based methods. Out-of-domain (OOD) Adaptability. To demonstrate the robustness of CoSIGN, we test our method on OOD inverse tasks and noise scales. The model is trained on sparse-view CT reconstruction with 23 angles, and no noise is added to the sinogram during training. We then test the model's generalizability by conducting experiments on OOD tasks using only 10 angles for reconstruction (see <ref>).
In the experiments on OOD noise scales, noise with different standard deviations σ was added to the sinogram (see <ref>). As the number of projections decreases from 23 to 10, the PSNR of our method drops more slowly than that of FBP-UNet. Meanwhile, the performance of our method is much less affected by OOD noise. This indicates that the frozen CM prior endows our method with better robustness to OOD measurements compared with traditional approaches. § CONCLUSION In this work, we propose CoSIGN, a few-step inverse problem solver with a consistency model prior. We propose to guide the conditional sampling process of consistency models with both a soft measurement constraint and a hard measurement constraint, which enables generating high-fidelity, measurement-consistent reconstructions within 1-2 NFEs. Extensive experiments demonstrate our superiority over existing supervised and unsupervised diffusion-based methods under the few-step setting. § LIMITATIONS Compared to traditional diffusion-based inverse problem solvers (DIS), CoSIGN reduces the number of sampling steps to 1-2 NFEs. However, the training of a ControlNet for each inverse task may limit the generalizability of the proposed method. Although we demonstrated its robustness against the number of angles and noise scales in sparse-view CT reconstruction, a performance gap still exists when adapting the trained ControlNet to a different task. Future work may explore few-shot adaptation methods for training the ControlNet, or improve the zero-shot inference ability of the proposed method. § IMPLEMENTATION DETAILS We implement our proposed algorithm (CoSIGN) based on the consistency model (CM) codebase[<https://github.com/openai/consistency_models>] so that we can make use of the CM checkpoint pretrained on the LSUN bedroom dataset <cit.>. The UNet structure of the CM contains 6 resolution levels for an input size of 256×256. There are two residual blocks at each resolution level in both the encoder and the decoder. In the architecture of the additional encoder that guides the CM backbone with the conditional input, we replace each decoder layer with a zero-initialized convolution layer. We also add a zero-convolution layer before the additional encoder. We retain the middle block of the CM at the end of the additional encoder. The output of the middle block passes through a zero-initialized convolution layer before entering the CM. We inject these conditions into the CM by directly adding them to the skip connections between the encoder and the decoder. For medical images, we change the input channels of the first layer in both the CM and the additional encoder to one. In experiments on natural images, we train the additional encoder for 50k steps with a batch size of 144. In experiments on medical images, we start by training the diffusion model since no pretrained checkpoint is available. Specifically, we first train an EDM <cit.> model on the LDCT training set <cit.> for 9k steps with a batch size of 144. Then we distill this diffusion model into a consistency model by training for another 12k steps. Finally, we train the additional encoder for 9k steps with the CM backbone frozen. We do not train these models for more steps since doing so might induce over-fitting on such a small dataset. We adopt the forward operators of different inverse problems from the DPS codebase[<https://github.com/DPS2022/diffusion-posterior-sampling>] and add hard measurement constraints like DDNM <cit.> into it.
For evaluation, we adopt code from DPS <cit.> to calculate PSNR and SSIM, and code from CM <cit.> to calculate FID. Following <cit.>, the intermediate noise level in multistep sampling is determined by ternary search. § IMPLEMENTATION DETAILS OF BASELINES SwinIR For super-resolution, we follow the default 4× super-resolution setting used in the original codebase provided by <cit.>, and train for 500k iterations on the LSUN-bedroom training set. For nonlinear deblurring and box inpainting, we train SwinIR by mapping the degraded images to the ground-truth images for 500k iterations. DPS and MCG. For DPS and MCG, we use the original codebase provided by <cit.> and pre-trained DDPM models <cit.> trained on the LSUN-bedroom training set. We follow the default setting with an NFE of 1000. DDRM. For DDRM, we follow the original code provided by <cit.> with DDPM models trained on the LSUN-bedroom training set. We use the default parameters as reported by <cit.> with an NFE of 20. CM As <cit.> did not report quantitative results on inverse problem solving tasks, we used the iterative inpainting and the iterative super-resolution functions in their codebase to reproduce their results. We try to keep measurement consistency and improve image quality by maximizing the number of iterations. I^2SB, CDDB The original I^2SB model was trained on ImageNet. To compare it with our method, we fine-tuned it on LSUN-bedroom for 6k steps with a batch size of 256. We initialized the model for the nonlinear deblurring task using the checkpoint for the Gaussian deblurring task since no checkpoint is available for this task. Experiments on CDDB are also re-conducted on these fine-tuned models. FBP-UNet For FBP-UNet, we use the model structure described in <cit.> and then train the model with input images being FBP reconstructions and outputs being the ground-truth images of the 9000 2D CT slices from 40 patients. § ADDITIONAL RESULTS §.§ Distortion Metrics In <ref>, we present distortion metrics for the three natural image restoration tasks. It should be noted that, unlike CT reconstruction with 23 angles, these inverse tasks are considerably more aggressive. To make up for information loss while maintaining image quality, some degree of hallucination is necessary <cit.>. However, PSNR/SSIM strictly penalize hallucination as they rely on pixel-level differences. We would like to clarify that DDB methods <cit.> outperform ours in the low-NFE regime not because they produce higher-fidelity images, but because they "trade accuracy with quality". When working within 1-2 NFEs, DDB methods generate samples closer to 𝔼[x_0|x_t] rather than any clear x_0 from the real distribution p(x). As shown in <ref> and <ref>, methods with superior distortion metrics mostly generate blurry samples, indicating that they seek the mean of all possible reconstructions rather than a single clear result. Having acknowledged this, we agree that distortion may be detrimental in certain inverse tasks, especially those with medical applications. Therefore, we provide distortion metrics for the medical image reconstruction task in <ref>, and those for the natural image reconstruction tasks in <ref> for reference. §.§ Results on Natural Image Restoration We provide additional visual results on natural image restoration for both CoSIGN and the baselines in <ref>, <ref> and <ref>. All images are randomly selected from the dataset without cherry-picking.
As depicted in these images, the visual quality of our results surpasses that of all existing methods in the comparable NFE regime, and is also comparable with results obtained with hundreds of NFEs. §.§ Results on Medical Image Restoration In <ref>, we provide additional visual results on medical image restoration for both CoSIGN and the baselines. The selected images encompass CT scans of the abdomen, head and chest. It can be seen from the images that, compared with the baselines, images reconstructed with CoSIGN are both high-fidelity and noiseless. §.§ Derivation of Hard Consistency Formula in the Linear and Noiseless Case If the forward operator 𝐀 is linear and full-rank and the measurements are noiseless (i.e., y = 𝐀x), and we want to find the closest point to x_0 that is consistent with the measurement y, then we can pose the optimization problem as x̂_0 = min_z∈ℝ^n{‖z-x_0‖_2^2} s.t. 𝐀z = y. The solution to this optimization problem is given by x̂_0 = x_0 - (𝐀^+𝐀x_0 - 𝐀^+y). Proof: Let 𝐭 = z - 𝐱_0; then the optimization objective can be rewritten as min_𝐭∈ℝ^n{‖𝐭‖_2^2} s.t. 𝐀𝐭 = y - 𝐀𝐱_0. We can decompose 𝐭 into a null-space component and a perpendicular range-space component, such that 𝐭 = 𝐭_N(𝐀) + 𝐭_R(𝐀^T), where N(𝐀) ⊥ R(𝐀^T). Writing 𝐭_R(𝐀^T) = 𝐀^T k, we have 𝐀𝐭 = 𝐀𝐭_R(𝐀^T) = 𝐀𝐀^T k = y - 𝐀𝐱_0. Then k = (𝐀𝐀^T)^-1(y - 𝐀𝐱_0), and therefore 𝐭_R(𝐀^T) = 𝐀^T(𝐀𝐀^T)^-1(y - 𝐀𝐱_0) = 𝐀^†(y - 𝐀𝐱_0). We also have ‖𝐭‖_2^2 = ‖𝐭_N(𝐀)‖_2^2 + ‖𝐭_R(𝐀^T)‖_2^2, and observe that ‖𝐭‖_2^2 is minimized when 𝐭_N(𝐀) = 0. Hence, z = x_0 + 𝐀^†(y - 𝐀𝐱_0).
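The closed form above can also be checked numerically; the short sketch below uses a random full-row-rank matrix and verifies both that the projected sample reproduces the measurement exactly and that the correction equals A^T(AA^T)^{-1}(y - Ax_0), i.e., lies in the row space of A.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))          # full row rank with probability 1
x0 = rng.standard_normal(n)              # prior "clean" sample
y = A @ rng.standard_normal(n)           # noiseless measurement of some signal
A_pinv = np.linalg.pinv(A)               # A^+ = A^T (A A^T)^{-1}

x_hat = x0 + A_pinv @ (y - A @ x0)       # projection onto {z : A z = y}

print(np.allclose(A @ x_hat, y))                                  # True
print(np.allclose(A_pinv @ (y - A @ x0),
                  A.T @ np.linalg.solve(A @ A.T, y - A @ x0)))    # True
```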
http://arxiv.org/abs/2407.13232v1
20240718074035
Autonomous Money Supply Strategy Utilizing Control Theory
[ "Yuval Boneh" ]
q-fin.RM
[ "q-fin.RM", "q-fin.CP" ]
Yuval Boneh (Conclave) § ABSTRACT Decentralized Finance (DeFi) has reshaped the possibilities of reserve banking in the form of the Collateralized Debt Position (CDP). Key to the safety of CDPs is the money supply architecture that enables issued debt to maintain its value. In traditional markets, and with respect to the United States Dollar system, interest rates are set by the Federal Reserve in an attempt to influence the effects of excessive inflation. DeFi enables a more transparent approach that typically relies on interest rates or other debt recovery mechanisms being directly informed by asset price. This research investigates contemporary DeFi money supply and debt management strategies and their limitations. Furthermore, this paper introduces a time-weighted approach to interest rate management that implements a Proportional-Integral-Derivative control system to constantly adapt to market activities and protect the value of issued currency, while addressing observed limitations. Keywords: Decentralized Finance, collateralized debt, control system, money supply, interest rate. § INTRODUCTION Blockchain technology has empowered the development of Decentralized Finance (DeFi), employing programmable smart contracts to enable secure and transparent financial transactions. DeFi has further enabled a proliferation of experimental stablecoin architectures. Stablecoins are cryptocurrencies designed to trade at par with a reference asset, typically the United States Dollar (USD). Stablecoins are generally defined by a balance of three key values: Security, Scalability, and Decentralization. While the nomenclature may vary, the first-principles considerations remain the same, and the general argument is that a stablecoin design can only optimize for two of the three. Compromising decentralization and security introduces existential risk into a protocol, as exhibited by the collapse of Terra's UST stablecoin (<cit.>) and BUSD's rapid supply fall attributed to regulatory actions (<cit.>). Meanwhile, scalability may not be core to a design, but can be achieved over time as the model develops trust and robustness. To date, the most successful secure and decentralized architecture concept is that of a Collateralized Debt Position (CDP) protocol. CDPs refer to positions created by locking collateral in a smart contract to generate some kind of debt, typically stablecoins. Stablecoins represent the system's debt, so safeguarding their value is crucial. Established oracle infrastructures exist to ensure that the value of collateral and debt is known at all times; however, the mechanism by which minted debt assets maintain their peg varies across designs. Liquity's LUSD stablecoin offers the highest level of decentralization and security, with rigorous mechanisms designed to ensure the value of the LUSD token remains pegged. MakerDAO's DAI stablecoin and Aave's GHO stablecoin have interest rates set by governance bodies, which statically adjust interest rates depending on the need for the stablecoin supplies to contract or expand. This research provides an analysis of existing CDP protocols and respective money supply theories and examines potential shortfalls before proposing a novel design that mitigates identified risks.
§ INDUSTRY REVIEW The purpose of this section is to provide an overview of some of the largest decentralized CDP protocols in DeFi and assess their money supply strategies for efficiency. Some mechanisms will be discussed in terms of control theory to provide a standardized method of communicating assessed drawbacks and limitations. §.§ MakerDAO The Maker Protocol, or Multi-Collateral Dai (MCD) system (<cit.>), accepts governance-approved collateral assets and allows users to mint the DAI stablecoin. Opening a CDP with Maker incurs an accruing stability fee, paid in DAI, to the Maker Buffer. In turn, the Maker Buffer funds the DAI Savings Rate (DSR) vault, a vault where users can deposit DAI to earn interest. DAI maintains its price stability by Maker governance adjusting stability fees, as well as the DSR rate, in order to balance supply and demand mechanics. An additional Peg Stability Module (PSM) enables 1:1 exchange with a limited amount of USDC, further tightening the peg by creating rapid arbitrage opportunities when the price shifts sufficiently to make the arbitrage transaction profitable. DAI's stability is contingent on interest rates being modified via governance, so while they are adaptive, the system response is inherently slow and inefficient. Maker governance manages the DAI borrow rates of individual collaterals, as well as the DSR rate, resulting in a cumbersome governance liability. An April 2024 governance proposal (<cit.>) included 10 independent interest rate modifications for the DAI ecosystem based on observations made by BA Labs. There is a significant amount of research and data accompanying each proposal, presumably consuming a large quantity of resources, akin to the US Federal Reserve's approach to interest rate management. Maker sets the baseline for CDP technology, deviating from the traditional gold-backed financial system only insofar as transparency and the use of secure and decentralized blockchain technology. In August 2023, Maker drastically increased their DSR savings rate[Source: <https://defillama.com/>] (Figure <ref>), effectively providing a conduit between their large off-chain reserves earning significant interest in a high-rate environment, and the on-chain interest rate environment. This transition rippled throughout the CDP landscape, and the instability it introduced provides an excellent case study for building a more resilient system. §.§ Liquity V1 The Liquity V1 Whitepaper (<cit.>) explains their approach to protecting the peg and the value of their decentralized stablecoin LUSD by way of liquidation and redemption mechanisms. Of note is the redemption mechanism, which sets a floor price for LUSD should the price fall below $1. The system allows LUSD holders to redeem their LUSD for underlying ETH collateral based on the face value of the redeemed tokens, the current ETH:USD rate, and the current base rate. The system uses the LUSD to repay the debt of the riskiest position (lowest collateral ratio), and transfers the respective amount of ETH to the redeemer. The process is fair (the value of ETH transferred from the position is equal to the amount of LUSD token debt paid) and borrowers do not suffer a net loss from being redeemed. Redemptions are subject to a redemption fee, which is a function of the base rate and the redeemed amount of LUSD, with a minimum redemption fee of 0.5%. Therefore, redemption only becomes profitable when LUSD falls below $0.995, protecting borrowers from constant redemptions when the LUSD value is maintained at $1.
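The profitability threshold can be made concrete with a small, simplified calculation; the sketch below keeps only the 0.5% fee floor and ignores the base-rate and volume-dependent terms of the actual fee, so the numbers are illustrative only.

```python
def redemption_profit_per_lusd(lusd_market_price, fee=0.005):
    """Illustrative profit (USD) from redeeming 1 LUSD for $1 of ETH, net of the fee."""
    eth_received = 1.0 * (1.0 - fee)   # face value of repaid debt, less the redemption fee
    return eth_received - lusd_market_price

print(redemption_profit_per_lusd(0.998))  # -0.003: unprofitable above $0.995
print(redemption_profit_per_lusd(0.990))  # +0.005: profitable once LUSD trades below $0.995
```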
Liquity offers zero-interest lending, so as broader interest rates rose, the cost of interest rate arbitrage remained the same while profitability increased. As a result, LUSD supply shrank dramatically over the last 12 months, for reasons that will be explained. Aave USDC supply on the Ethereum Mainnet is typically considered the most accessible, lowest-risk stablecoin strategy, and is thus used as the risk-adjusted baseline rate for DeFi. Data can be observed in Figure <ref>, which demonstrates LUSD circulating supply[Source: <https://defillama.com/>] and Aave USDC supply rates[Source: <https://aavescan.com/>] over the last 12 months. Strategic users deposited large amounts of collateral into Liquity to mint LUSD and sell it to the market in favour of stablecoins that could earn high interest elsewhere. This behaviour generated sustained selling pressure on LUSD, which led to consistently profitable redemptions. Consequently, in order to borrow LUSD, users required increasingly high collateral ratios, which inhibits the capital efficiency of the protocol. The interest rate arbitrage became inefficient and users exited Liquity V1, shrinking its supply. To be clear, Liquity V1 has always worked exactly as intended. Liquity V1 set a new standard in decentralized CDP architecture, and was subsequently forked by many teams. While the Liquity V1 money supply model works exceptionally well in a low interest rate environment, it cannot adapt to a high interest rate environment. Developing a system that is resilient to the arbitrage opportunities around it requires an adaptive cost of borrowing, or interest rate. §.§ Aave GHO GHO is a decentralized and collateralized stablecoin native to the Aave protocol (<cit.>). GHO has similar supply functionality to Maker in that users must deposit collateral in order to mint the stablecoin; however, the GHO money market integrated into Aave performs a passively scalable DSR role. Since the potential supply of GHO is unlimited (ignoring their programmable cap), conventional lend/borrow interest rate models that reference utilization do not apply to the CDP borrow rate. Instead, the GHO interest rate is set by Aave Governance, which "statically adjusts interest rates depending on the need for the GHO supply to contract or expand" (<cit.>). As such, Aave's stablecoin has the ability to respond to changes in market dynamics by way of governance voting for adjustments to the global interest rate. This overcomes Liquity V1's inability to restore supply dynamics, but introduces other limitations. There is a distinct relationship between the cost of borrowing and the price of the stablecoin. This is a known product of interest rate arbitrage. When interest rates increased broadly (exemplified on-chain by the DSR increase), opportunities for profit presented themselves, and the low-interest $GHO stablecoin faced the same sell pressure as $LUSD. Since Aave employs a global interest rate in lieu of a redemption mechanism, the $GHO stablecoin lost its peg, falling to a price as low as approximately $0.95. Forum posts as far back as August of 2023 proposed increases in the interest rate in response to market activities (<cit.>), with eight additional proposals following in as many months (Table <ref>). Historical GHO borrow rates can be observed and compared to other borrow rates for stablecoins on Aave V3 in Figure <ref>.
The delayed nature of Aave governance's response is evident as market rates rise and fall: the interest rate for borrowing GHO was almost always the cheapest until approximately April 2024, after which it became the most expensive, pending further governance proposals to adjust the interest rate. The effect of such a delayed response is present in GHO's market price, which has consistently been traded off in favour of profitable opportunities, while its market capitalization (and thus circulating supply) has been steadily increasing[Source: <https://www.coingecko.com/>] (Figure <ref>). Going forward, for as long as borrowing GHO remains comparatively more expensive, it will struggle to scale. Adaptive interest rates mitigate the problems faced by Liquity V1, but reliance on governance is slow and cumbersome, and the effective control system response is prohibitively slow. §.§ Liquity V2 Liquity V2 (<cit.>) aims to address the delay by introducing user-set interest rates (<cit.>). Liquity V2 redemption mechanics target individual interest rates instead of collateral ratios. This means that if users set their interest rates high enough, they can access the same capital-efficient loans that V1 offered. It can be expected that Liquity interest rates will rise and fall in tandem with the market's, within a range of low arbitrage profitability. However, there remain several drawbacks in the Liquity V2 model. The first is that in this design, interest rates do not necessarily mitigate the risk of redemption. The Liquity V2 documentation (<cit.>) describes a redemption scenario in Figure <ref>, in which borrowers are ordered by interest rate with their debt and collateral values expressed as white and black circles. A redemption of four stablecoin units repays the debt of the two lowest interest rate positions and part of the 3.9% interest rate position, leaving additional collateral behind for position owners. It is apparent that maintaining an interest rate that is not the lowest does not protect users from redemption, as the size of a redemption cannot be guaranteed. A smaller redemption would leave the 3.9% interest rate position unaffected, while an eight-unit redemption would affect the 5% interest rate position as well. The result is that position managers will need to consider existing positions and potential redemption sizes when managing interest rates. While it is reasonable that the base rate should be influenced by the actions of users with the highest risk appetite, the redemption mechanism may continue to adversely impact users whose actions remain aligned with the protocol. The Liquity V2 infrastructure also allows borrowers to delegate their interest rates to third parties, smart contracts, or externally owned accounts (<cit.>). For many users this will provide a solution to the redemption problem by enabling automation of interest rate management to avoid redemption. However, this introduces dependencies into the system, and does not remove the requirement to monitor existing positions and potential redemption sizes in order to optimize. The alternative is to pay more interest than would otherwise be necessary. Additionally, the efficiency of third-party interest rate managers remains to be seen. Regardless, users who want an optimized interest rate will require third-party management, which must always consider the possibility of redemption, while users who want a fixed interest rate will have to pay above the base Liquity V2 interest rate.
Finally, the design can be analysed in terms of control theory. When market rates increase, there is a tendency to take on low-interest debt in order to profit from interest rate arbitrage. This involves selling the $BOLD stablecoin for others that earn more interest. As the price of $BOLD decreases below $1, redemptions become profitable, prompting Liquity V2 users to increase their interest rates, thus restoring interest rate parity. There exists a relationship between the price of $BOLD and the perceived 'safe' interest rate at which users are comfortable with their exposure to redemption. The primary inefficiency of the control system lies in the settling time of the market response that corrects the interest rate delta. As rates go up, Liquity V2 will lag in its response as users wait until their positions are under threat of redemption before volunteering to pay more interest. The rolling threat of redemption will force an overshoot whereby users increase their interest rates by more than necessarily required in order to avoid having to continually manage their positions. Similarly, on the way down, users will pay more interest than they necessarily need to while they wait for other positions to buffer their redemption exposure. These effects will not necessarily affect all Liquity V2 users, however they will affect the system's efficiency. The functional tradeoff is that users will always pay more interest than they necessarily need to, so that the system can offer fixed interest rate lending, governed by the profitability of redemption. In order to improve the efficiency and scalability of the CDP construct, focus should be on a global interest rate solution with minimal response time. Increasing the system's ability to respond to market dynamics will reduce the likelihood and duration of meaningful depeg, eliminating the need for redemptions, and further improving the system's efficiency. Features such as fixed interest rates reduce the core product's ability to optimize for money supply and should be considered later as derivative products. §.§ Curve crvUSD Curve's crvUSD successfully utilizes an adaptive interest rate to influence the cost of borrowing. The equation that governs the interest rate for each market is derived in their documentation (<cit.>). The interest rate formula scales an exponential in which the exponent is a function of the crvUSD price and a configurable value, sigma. Sigma is a variable that can be configured by the Curve DAO, such that a lower value makes interest rates increase and decrease faster as crvUSD loses and gains value respectively, and vice versa for higher values. The strategy is further reinforced by a system called PegKeeper. A PegKeeper is a contract that helps to stabilize the crvUSD price by trading crvUSD against counterassets. When the price of crvUSD in a pool is above $1, PegKeepers may take on debt by minting uncollateralized crvUSD and depositing it into specific pools. This increases the balance of crvUSD in the pool, which decreases crvUSD's price. If a PegKeeper has taken on uncollateralized debt and deposited crvUSD into a pool, it may withdraw that crvUSD when the price is below $1. By withdrawing, the crvUSD token balance will decrease and the price of crvUSD increases. PegKeepers only provide a small buffer because they cannot be allowed to mint crvUSD ad infinitum. This is a simple algorithmic price arbitrage that frontruns the market's response to interest rate changes.
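A simplified, illustrative version of such an exponential rate rule is sketched below; the actual crvUSD formula also involves the PegKeeper DebtFraction term, and the base rate rate0 used here is an assumption for demonstration.

```python
import math

def crvusd_style_rate(price, sigma, rate0=0.05):
    """Borrow rate rises exponentially as the stablecoin trades below $1 and falls
    above it; a smaller sigma makes the response steeper (a governable parameter)."""
    return rate0 * math.exp((1.0 - price) / sigma)

for p in (0.98, 1.00, 1.02):
    print(p, round(crvusd_style_rate(p, sigma=0.02), 4))
```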
Curve's stablecoin system works reasonably well, as can be seen in the data representing the market capitalization and crvUSD price over the last eight months[Source: <https://www.coingecko.com/>] (Figure <ref>). Due to limited data, the crvUSD response can only be assessed from August 2023, when MakerDAO's first large rate hike occurred; the system's response to large systemic changes therefore cannot be evaluated, because it did not face the same transition as the others. However, its ability to respond to consistent changes since then has been generally promising. The crvUSD stablecoin has remained within a reasonable range of $1 and the interest rates for borrowing remain competitive. Overall, the crvUSD design is considered to be the most advanced money supply approach of those examined. That said, the crvUSD money supply strategy still presents interesting limitations. The crvUSD stablecoin relies on debt caps to protect the DebtFraction parameter (a ratio of the PegKeeper's debt to the total outstanding debt) utilized in the interest rate function. As well as somewhat limiting the system's scalability, this creates an artificial utilization signal that informs the interest rate, and is subject to sustained offset. If the PegKeeper has no counterassets with which to purchase debt and the market has an appetite to maintain risk, the crvUSD price can remain offset from the peg. This is mitigated by the sigma variable, however it requires governance intervention, which would introduce the same delay as has been observed with Aave's GHO. In its current state, the crvUSD interest rate strategy has no autonomous way of responding to sustained offset. This can be achieved with a Proportional-Integral-Derivative (PID) controller, utilizing control theory. § PID CONTROLLER MONEY SUPPLY STRATEGY This section introduces a control system for money supply management that prioritizes the efficiency of the system's response to market dynamics. The architecture of the controller itself is closely aligned with a previous controller architecture (<cit.>), modified specifically for this application. Figure <ref> provides a general representation of the controller logic. In general terms, the controller normalizes an error signal to standardize the strategy for general use. It then passes the normalized error through a PID controller that processes independent signals according to Equation <ref> to output a modified controller error signal. E_controller = E_P + E_I + E_D, where E_controller is the modified error (controller output), and E_P, E_I, and E_D are the proportional, integral, and derivative components of error, respectively. The controller error then passes through a transfer function, which defines the relationship between the controller error signal and the interest rate. §.§ Reference and Input The response mechanisms discussed in the previous section are generally dependent on price movements. However, there are many potential leading indicators that may preempt price movements, allowing the system to adjust before the market price changes. This is most applicable to StableSwap pools (<cit.>), in which the invariant allows for extremely low-slippage trades even when the pool is imbalanced. For a conventional xy=k constant product pool, imbalances result in impactful slippage, which will reduce the efficiency of the controller. Prior research on detecting depegs (<cit.>) identifies several strategies that provide leading indicators of price movement.
The purpose of that research is to develop off-chain infrastructure that alerts liquidity providers to potential depegs, so the majority of the leading indicators developed do not suit the purpose of the money supply strategy; however, it does attest to the notion of responding to pool imbalances before meaningful price changes occur. The Gini Coefficient and Shannon's Entropy, described in that paper as measures of the relative balances of a pool's tokens, are objective, but they are absolute measures of balance. For the purpose of an autonomous on-chain controller input signal and reference, a simple directional asset balance assessment provides a stronger basis. By assessing the pool balance and targeting a desired weight, a larger weight indicates a surplus of the stablecoin in the pool (excessive supply), and a smaller weight indicates a deficit in the pool (excessive demand). The error signal is defined in Equation <ref> as the difference between the current weight and the reference (target) weight. e(w) = w - w_r, where w is the weight of the stablecoin in the pool, w_r is the reference (target) weight, and e(w) is the weight error. The controller thus modifies interest rates based on this directional indicator, before the depeg occurs. Normalizing the ranges (0, w_r] and (w_r, 1) to (-1, 0] and (0, 1), respectively, further standardizes the strategy. The normalized ranges are defined in Function <ref>. E(w) = e(w)/w_r if e(w) ≤ 0, and E(w) = e(w)/(1 - w_r) if e(w) > 0, where E(w) is the normalized error signal with range (-1, 1). This standardization makes the strategy applicable to any reference weight and adaptable to various pool configurations on different exchanges. A Curve StableSwap pool with its amplification factor set to 100 has a price curve represented in Figure <ref>. With 1,000,000 of token A and 1,000,000 of token B, this pool can facilitate a swap of 400,000 token A for approximately 398,132 token B, resulting in a pool balance of approximately 70% token A at a price of approximately $0.995. Commensurate with Liquity V1's redemption profitability case, this scenario forms the base case for the example strategy and tune[Decentralized Exchanges have varying mechanisms in place for pricing pool imbalances so the degree of acceptable imbalance as it pertains to the base interest rate should be tested thoroughly.]. §.§ Transfer Function The transfer function is derived theoretically on the basis that at 0% pool imbalance the output interest rate should be 0%, and at 100% pool imbalance (such that the entire liquidity pool has been drained of counterassets, albeit functionally impossible) the output interest rate should be infinite. It is reasonable to allow the system to engage in negative interest rates in order to scale supply for excess demand, however there is no requirement for a negative interest rate to tend towards infinity. As such, a reasonable transfer function reflecting the desired behaviour of the interest rate with respect to pool imbalance is presented in Equation <ref>. r = α·E_controller/(1-E_controller), where r is the interest rate, and α is a scaling factor (initially set to 15e-2). The interest rate curve can be examined in Figure <ref>, noting the plot is a subset of the true pool balance range for demonstration. The transfer function trends appropriately against pool weight, and the magnitude at 70% weight is refined to 10% with α = 15e-2.
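The weight error, its normalization, and the transfer function can be combined in a short sketch; with the proportional gain fixed at one and the integral and derivative terms ignored, the 70% base case reproduces the 10% rate quoted above.

```python
def normalized_error(w, w_ref=0.5):
    """Directional pool-imbalance signal E(w) in (-1, 1)."""
    e = w - w_ref
    return e / w_ref if e <= 0 else e / (1.0 - w_ref)

def transfer_function(e_controller, alpha=0.15):
    """r = alpha * E / (1 - E): zero rate at balance, unbounded as E -> 1."""
    return alpha * e_controller / (1.0 - e_controller)

# Base case: 70% stablecoin weight against a 50% target gives E = 0.4 and r ~ 0.10 (10%)
print(transfer_function(normalized_error(0.70)))
```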
Depending on the volatility and risk profile of the collateral, as well as the pool configuration, α should be varied to set an appropriate base rate. §.§ Proportional Component In general, the purpose of the proportional component is to tune the magnitude of the initial response, as represented in Equation <ref>. E_P = K_P · E(w), where K_P is the proportional gain factor. The design of the transfer function is dependent on the range of E_controller in that the denominator offsets the signal from the limit in order to define the asymptote. For a proportional gain to be applied, the transfer function would be defined as Equation <ref>, which would undo the proportional effect of the gain factor and remove the asymptote effect of the curve design. r = α·E_controller/(K_P-E_controller). For this reason, K_P = 1, and the value of α facilitates a pseudo-proportional component. §.§ Integral Component The integral component performs a time series analysis of the pool's balance to generate a Time-Weighted Cumulative Error (TWCE) that is scaled by a tunable gain factor (Equation <ref>). E_I = K_I · TWCE, where K_I is the integral gain factor. The TWCE is computed by multiplying the normalized error signal, each time it is reported, by the time since it was last reported and adding the product to the previous TWCE, as summarized in Equation <ref>. TWCE_n = ∑_i=1^n E(w)_i · (t_i - t_i-1), where E(w)_i is the i^th normalized weight error, t_i is the time at which the i^th error is recorded, and t_i-1 is the previous time at which error was recorded. This rolling calculation is mathematically efficient and can operate effectively ad infinitum. It accumulates when error is positive and dissipates when error is negative, and does so proportionally to the size and sustained duration of the error. The effect of cumulative error can be observed in Figure <ref>, with K_I = 1.5, which demonstrates the increase of the interest rate leading to complete liquidation in the absence of market action. Due to the cumulative nature of the integrator component, the modified weight error signal transitions from being sensitive to the proportional component to being predominantly driven by the integrator component. §.§.§ Phi-Strategy For a money supply strategy, it is assessed that interest rates should grow or decay at different rates depending on the class of assets in the system. When the pool balance tends towards a depeg to the downside, premium, low-volatility collateral assets are likely to have a lower Total Collateral Ratio (TCR) relative to the Minimum Collateral Ratio (MCR) and should experience slower interest rate growth. Conversely, more volatile assets are likely to have a higher TCR relative to MCR and should experience greater interest rate growth. Therefore, as TCR/MCR increases, so too should K_I. While this can be established by tuning K_I for different collateral types, the relationship between K_I and TCR/MCR can be generalized in Equation <ref> based on proportionality. K_I = ϕ· (TCR/MCR - 1), where ϕ is a scaling factor that governs the aggression of interest rate growth. Since TCR/MCR evolves over time, Equations <ref> and <ref> are adapted to form Equation <ref> to ensure K_I is updated with each iteration. E_I = ∑_i=1^n K_I,i· E(w)_i · (t_i - t_i-1). A dataset can be constructed by sweeping through different collateral ratios and ϕ values to identify the ϕ value required to return the system to a collateral ratio of one in one year (Figure <ref>).
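A rolling implementation of the time-weighted integral term with the Phi-strategy gain can be sketched as follows; timestamps are assumed to be in seconds, and the default ϕ = 4 follows the tuning discussed next.

```python
class IntegralComponent:
    """E_I = sum_i K_I,i * E(w)_i * (t_i - t_{i-1}), with K_I = phi * (TCR/MCR - 1)."""

    def __init__(self, phi=4.0):
        self.phi = phi
        self.e_integral = 0.0
        self.last_t = None

    def update(self, e_w, tcr_over_mcr, t):
        if self.last_t is not None:
            k_i = self.phi * (tcr_over_mcr - 1.0)      # riskier collateral integrates faster
            self.e_integral += k_i * e_w * (t - self.last_t)
        self.last_t = t
        return self.e_integral
```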
A base rate of 10% established by the transfer function means that no integrator value is required at TCR/MCR = 1.1, so the simulation sweeps from 1.2, and the sensitivity to ϕ dissipates after TCR/MCR = 1.6. Since any precision in the system's ability to reach a collateralization ratio of one in one year is invalidated by the market response, the value of ϕ need only be within a reasonable order of magnitude, so ϕ is set to 4. The controller can now be simulated to assess its overall effectiveness at returning a CDP to TCR/MCR = 1, starting from different points. Simulation outputs are presented in Figure <ref>, in which the pool balance deviates from 0.5 to 0.7 in increments of 0.01 with 12h timesteps. Balance is then maintained at 0.7 for the remainder of the simulation. In both cases, interest rates initially rise to the same base rate. Once error is maintained at a pool balance of 70%, the interest rate for the higher TCR/MCR scenario rises significantly faster, accelerating its return to TCR/MCR = 1. Since ϕ is set to 4, the two TCR/MCR cases presented recover either faster or slower than the one-year target. This is expected from Figure <ref>, and ignores market responses, while recovering in a reasonable timeframe. The value of ϕ does not necessarily need to be constant, however it is assessed that the added complexity is not justified. §.§.§ Mitigating Long-Tail Integrator Risks There is potential for the integrator component to accumulate a significant amount of directional bias such that it negates the proportional component and transfer function of the controller. For example, if the token weight is under target for a prolonged period of time, the integrator may accumulate significant negative error. Should the pool suddenly shift balances to above target, the proportional component will transition to positive; however, the integral component may suppress the summation and result in a reduced overall controller error. This can introduce dangerous attack vectors, so the influence of the integral component must be carefully managed. This is achieved by programming a limit on the negative accumulation of the integrator component, so that a sudden shift in pool balances can be safely accounted for. This approach addresses the global rate based on pool imbalance as a priority (pseudo-proportional component), and then increases the burden on riskier assets over time (integral component), ultimately penalizing the least healthy positions first. This eliminates the need for redemptions, so users are free to borrow as they wish and need only focus on their respective position health and interest rate. K_I can be simulated for tuning, or, since collateral ratios are inherently normalized, the Phi-Strategy can be deployed to manage different collaterals without modification. This research presents the case of the pool balance shifting to 70% and recovering in isolation in one year; however, it should be noted that any further increase in pool balance will compound interest rate growth and result in much faster rate growth. Treating $0.995 as an example of a meaningful depeg, one year is the maximum possible time that it can be sustained without a market response. It is expected that as interest rates grow, the market will be influenced to correct the liquidity imbalance, however the controller is designed to correct system leverage in isolation, eliminating any dependency on market activity.
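The negative-accumulation limit described above is essentially an anti-windup clamp on the integrator; a minimal sketch is shown below, where the floor value is an assumption rather than a recommended parameter.

```python
NEGATIVE_TWCE_FLOOR = -0.5  # assumed bound on downward integrator bias

def accumulate_with_floor(e_integral, increment):
    """Accumulate integral error but cap negative build-up, so a sudden shift of the
    pool above target is not masked by previously accumulated negative error."""
    return max(NEGATIVE_TWCE_FLOOR, e_integral + increment)
```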
§.§ Derivative Component The derivative component is derived from mmPid and performs a time series analysis of normalized weight error by calculating the gradient over a specified lookback time (Equation <ref>). After defining a period, the timestamp of each update can be compared to a delayed TWCE. If at least one period has passed, the delayed TWCE and associated timestamp overwrite the previous TWCE and timestamp, and the current TWCE and timestamp overwrite the delayed TWCE and timestamp. The result is two TWCEs that store the cumulative error one period ago and now, allowing for the derivative calculation to be completed. This mitigates the effects of very sudden pool balance changes that would normally be arbitraged, which would otherwise make interest rates excessively sensitive to market actions. E_D = K_D·(TWCE_delayed - TWCE_previous)/(t_delayed - t_previous), where K_D is the derivative gain coefficient, TWCE_delayed is the most recent TWCE recorded before the current value, updated only after at least one period has elapsed, and TWCE_previous is the TWCE recorded at least one period before TWCE_delayed, which allows the derivative calculation to be completed. t_delayed and t_previous are the respective timestamps. The derivative component allows the system to compensate for short-term (commensurate with the specified lookback time) market actions, to curb rapid deviations from the reference weight, protecting the protocol and users. K_D should be tuned according to the protocol's risk tolerance under different market scenarios, with consideration for arbitrage opportunities and peg support infrastructure. In essence, the controller modifies the weight error to account for the size of the error, how long the error has been sustained, and how quickly the error was incurred. § IMPLEMENTATION This strategy was developed for strict on-chain implementation, with no off-chain dependencies. The response of the controller can be greatly improved with off-chain infrastructure, however for the purpose of this research the trade-off is undesirable. The arithmetic employed is within the scope of most smart contract programming languages, and calculations can be programmed efficiently in order to reduce gas costs. The pseudo-proportional gain and integral gain may be set algorithmically to achieve generic market responses in the absence of fine-tuning and simulation, or may be set as discrete constants to achieve a finer response for independent markets. Derivative gain architecture may be comparatively expensive, so its use should be considered against the intended network infrastructure. In most cases, it is anticipated that interest rates should not be highly sensitive to derivative gain, and its use is not strictly needed. Contract architecture has been developed and tested separately and compared in production privately. Some information may be released in future publications. § CONCLUSION The control system described in this paper enables a reserve currency to employ passively adaptive money supply theory with autonomous time-based compensation. There is no requirement for human intervention, and the interest rate updates as regularly as interactions with the protocol. Exact controller configurations have been omitted due to the sensitive nature of the performance of the strategy. As always, security remains the priority, and any contracts derived from this research should be thoroughly tested and independently audited before implementation.
http://arxiv.org/abs/2407.12385v1
20240717080737
RankTower: A Synergistic Framework for Enhancing Two-Tower Pre-Ranking Model
[ "YaChen Yan", "Liubo Li" ]
cs.IR
[ "cs.IR" ]
§ ABSTRACT In large-scale ranking systems, cascading architectures have been widely adopted to achieve a balance between efficiency and effectiveness. The pre-ranking module plays a vital role in selecting a subset of candidates for the subsequent ranking module. It is crucial for the pre-ranking model to maintain a balance between efficiency and accuracy to adhere to online latency constraints. In this paper, we propose a novel neural network architecture called RankTower, which is designed to efficiently capture user-item interactions while following the user-item decoupling paradigm to ensure online inference efficiency. The proposed approach employs a hybrid training objective that learns from samples obtained from the full stage of the cascade ranking system, optimizing different objectives for varying sample spaces. This strategy aims to enhance the pre-ranking model's ranking capability and improve its alignment with the existing cascade ranking system. Experimental results conducted on public datasets demonstrate that RankTower significantly outperforms state-of-the-art pre-ranking models. RankTower: A Synergistic Framework for Enhancing Two-Tower Pre-Ranking Model YaChen Yan, Liubo Li July 22, 2024 ============================================================================ § INTRODUCTION In industrial information services, such as recommender systems, search engines, and advertisement systems, the cascading architecture ranking system has been widely used to achieve a balance between efficiency and effectiveness. A typical cascade ranking system, as illustrated in <ref>, consists of multiple sequential stages, including recall, pre-ranking, ranking, and re-ranking stages. Pre-ranking is commonly regarded as a lightweight ranking module characterized by a simpler network architecture and a reduced set of features. Compared to ranking models, pre-ranking models are required to score a larger number of candidate items for each user and demonstrate higher inference efficiency, although their prediction performance may be comparatively weaker due to their simpler structure. Given the emphasis on efficiency, pre-ranking typically employs a straightforward vector-product-based model. We propose a novel framework for pre-ranking systems to maintain consistency with the cascade ranking system and achieve a balance between inference efficiency and prediction accuracy. 
The primary contributions of this work are as follows: * We introduce the RankTower architecture, which comprises three key components: Multi-Head Gated Network, Gated Cross-Attention Network, and Maximum Similarity Layer. The Multi-Head Gated Network plays a vital role in extracting diversified latent representations of users and items. The Gated Cross-Attention Network enables the modeling of bi-directional user-item interactions. Finally, the Maximum Similarity Layer enhances online serving efficiency without compromising the model's performance. * We employ a full-stage sampling strategy by drawing the training samples from different stages of the cascade ranking system. Tightly coupled with this sampling approach, we strategically integrate a hybrid loss function that judiciously combines distillation and learning-to-rank losses. This synergistic approach facilitates comprehensive learning of the ordering dynamics underlying user interactions while accounting for the inherent characteristics of the cascade ranking system. * We conduct extensive experiments on three publicly available datasets to demonstrate the superior performance of RankTower in terms of prediction accuracy and inference efficiency. § RELATED WORK In this section, we provide an overview of the most recent studies on pre-ranking model, which serve as a crucial intermediary stage in the cascading ranking system. The primary function of the pre-ranking stage is to effectively reduce the large pool of candidates retrieved from the recall stage to a manageable subset for the subsequent ranking stage. Furthermore, we discuss existing point-wise, pair-wise, and listwise ranking losses commonly employed in training learning-to-rank models. §.§ Pre-Ranking Several studies propose to improve the efficiency and accuracy of the pre-rank system. COLD <cit.> is designed to jointly optimize both the pre-ranking model performance and the computing power it costs. Any arbitrary deep model with cross features can be applied in COLD under a constraint of controllable computing power cost. Computing power cost can also be explicitly reduced by applying optimization tricks for inference acceleration. FSCD <cit.> achieves a better trade-off between effectiveness and efficiency by utilizing the learnable feature selection method based on feature complexity and variational dropout. AutoFAS <cit.> selects the most important features and network architectures using Neural Architecture Search, and a ranking model guided reward is equipped during NAS procedure, which allows AutoFAS to select the best pre-ranking architecture for a given ranking teacher without any computation overhead. IntTower <cit.> is designed to address the efficiency-accuracy dilemma in pre-ranking systems. The proposed IntTower achieves high prediction accuracy while maintaining inference efficiency by balancing the interactions between user and item representations. Another direction of research is to align the pre-ranking with the ranking prediction order and ranking stages. RankFlow <cit.> and Ranking Distillation <cit.> have proposed aligning the pre-ranking and ranking models through distillation based on ranking scores. The pre-ranking model is encouraged either to generate the same scores as the ranking model<cit.> or to produce high scores for the top candidates selected by the ranking model<cit.>. JRC <cit.> introduces an approach called Jointly Ranked Calibration that optimizes both ranking and calibration abilities. 
JRC enhances the ranking ability by comparing the logit values for a sample with different labels and ensures the predicted probability is constrained as a function of the logit subtraction. COPR <cit.> optimizes the pre-ranking model towards consistency with the ranking model. It employs a chunk-based sampling module and a plug-and-play rank alignment module to explicitly optimize the consistency of ECPM-ranked results. Recently, <cit.> employed relaxed sorting loss to directly maximize business goals on ranking stage level. §.§ Learning-to-Rank Losses Learning-to-Rank(LTR) losses are typically categorized into three main types: pointwise, pairwise, and listwise. Each category reflects a different approach to how the ranking problem is formulated and optimized. <cit.> The pointwise approach treats the ranking problem as a classification or regression task, aiming to predict the relevance score of each item independently without considering the relative order among items or the user-item group structure. The typical industry solutions employ binary logistic loss due to the binary nature of most user feedback, and <cit.> adapted this approach for regression tasks. The pairwise approach transformed the ranking problem into pairwise classification or pairwise regression. It focuses on optimizing the relative order of item pairs but still overlooks the user-item group structure. BPR loss <cit.> utilizes binary logistic loss to model the probability that one item is ranked higher than another. WARP loss <cit.> further incorporates rank-based weighting to prioritize the accurate ranking of the most relevant items. LambdaRank <cit.> extends RankNet <cit.> by re-weighting the gradients of the loss function based on the impact of changes in NDCG metrics. The listwise approach directly optimize the ranking problem on user-item group structure. ListMLE <cit.> optimizes the likelihood of the correct permutation based on the predictions. ListNet <cit.> utilizes softmax cross entropy to learn the probability distribution over permutations. <cit.> further proposed decoupled softmax loss to address limitations in traditional softmax loss for extreme multi-label scenario. ApproxNDCG <cit.> optimizes NDCG metric with a differentiable approximation based on the logistic function. NeuralSort <cit.> and SoftSort <cit.>, initially designed for differentiable sorting, were later adapted to solve ranking problems. § MODEL ARCHITECTURE The RankTower architecture, shown in <ref>, introduces three main modules besides the embedding layer: Multi-Head Gated Network, Gated Cross-Attention Network, and Maximum Similarity Layer. The Multi-Head Gated Network computes diversified user and item representations by dynamically identifying feature importance. The Gated Cross-Attention Network models bi-directional user-item interactions, and the Maximum Similarity Layer efficiently captures the interaction between user-attentive and item-attentive embeddings to compute the final prediction. RankTower follows the user-item decoupling paradigm, enabling efficient online serving by pre-computing and storing user and item representations into a vector database. During online serving, only the gated cross-attention layers require forward propagation, while other operations remain parameter-free, optimizing computational efficiency. 
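To make the decoupled serving flow described above concrete, the following Python sketch (ours, not the authors' implementation) separates the offline pre-computation from the lightweight online scoring path. The model components referenced here are placeholders for the modules defined in the subsections that follow, and the vector database is modelled as a plain dictionary.

```python
# Structural sketch of the offline/online split; component names are placeholders.

def offline_precompute(model, user_features: dict, item_features: dict, vector_db: dict) -> None:
    """Periodic batch job: cache user and item multi-embeddings for serving."""
    for uid, x_u in user_features.items():
        vector_db[("user", uid)] = model.user_gated_network(x_u)   # shape (H_u, k)
    for iid, x_i in item_features.items():
        vector_db[("item", iid)] = model.item_gated_network(x_i)   # shape (H_i, k)

def online_score(model, uid, candidate_ids, vector_db: dict) -> list:
    """Request time: only the gated cross-attention layers run a forward pass;
    the Maximum Similarity scoring is parameter-free."""
    e_u = vector_db[("user", uid)]
    scores = []
    for iid in candidate_ids:
        e_i = vector_db[("item", iid)]
        attn_u, attn_i = model.gated_cross_attention(e_u, e_i)
        scores.append(model.max_sim(attn_u, attn_i))
    return scores
```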
§.§ Preliminary The dataset for building the pre-ranking model consists of instances (x_u, x_i, y, p), where x_u and x_i represent the user feature and item feature respectively, y ∈{0, 1} indicates the user-item binary feedback label, and p is the logged ranking model probability prediction that will be used for training the pre-ranking model with knowledge distillation. z is the logit of the pre-ranking model and ŷ is the corresponding pre-ranking prediction. §.§ Embedding Layer Suppose we have F_U fields of user features and F_I fields of item features in our pre-ranking training data. In our feature processing step, we first bucketize all the continuous features into equal-frequency bins, then embed the bucketized continuous features and the categorical features, mapping each feature onto a dense embedding vector x. Lastly, we concatenate the F_U and F_I embedding vectors separately and denote the outputs of the embedding layer, X_U and X_I, as the user input embedding and item input embedding, respectively: X_U = [𝐱_u^1, 𝐱_u^2, ⋯, 𝐱_u^F_U]^T. X_I = [𝐱_i^1, 𝐱_i^2, ⋯, 𝐱_i^F_I]^T. §.§ Multi-Head Gated Network The Multi-Head Gated Network is an improved MLP augmented with a gating mechanism, mainly for extracting diversified user and item representations from user/item input embeddings. We first use an MLP to model the deep user/item representations and further multiply the output of the MLP by an instance-aware gating vector. The gating vector can be modeled by a two-layer MLP with a reduction ratio r, using the user/item input embeddings as input. During the training phase, the input embedding does not receive gradients from the gating network to ensure training stability. For example, given a user input embedding X_U, the output of the user multi-embedding can be mapped into H_u sub-spaces, and the h-th sub-space e_u^h is obtained from: e_u^h = MLP_u(X_U)^h∘σ(gMLP_u(X_U))^h ∈ℝ^B × k, h=1,⋯, H_u where ∘ denotes the Hadamard product, σ denotes the activation function of the gating network: Sigmoid(x), MLP_u denotes the MLP layer for modeling the user input embedding and extracting the latent information, gMLP_u denotes the gating MLP for facilitating selective attention, B is the batch size and k is the embedding size of each sub-space. Similarly, given an item input embedding X_I, the output of the item multi-embedding can be mapped into H_i sub-spaces, and the h-th sub-space e_i^h is obtained from: e_i^h = MLP_i(X_I)^h∘σ(gMLP_i(X_I))^h ∈ℝ^B × k, h=1,⋯, H_i In the offline processing stage, we periodically run batch inference and store all user/item embeddings e_u^h and e_i^h in the vector database for online serving. §.§ Gated Cross-Attention Network The Gated Cross-Attention Network employs the cross-attention mechanism to effectively model the interaction between user embedding and item embedding. It uses the Gated Attention Unit as the main building block for learning the dependency between user and item, with residual connections and layer normalization used for training stability. §.§.§ Cross Attention Mechanism The Bi-Directional Gated Cross-Attention Network interchangeably uses user and item embeddings as queries and keys-values for bi-directional attention. 
Specifically, with the user multi-embedding E_u = Concat(e_u^1, ..., e_u^H_u) and item multi-embedding E_i = Concat(e_i^1, ..., e_i^H_i), the cross-attention computes the user-attended embedding ℰ_u and item-attended embedding ℰ_i as follows: ℰ_u = LayerNorm(E_u + GAU(Q=E_u, K=E_i, V=E_i)) ∈ℝ^B × H_u× k ℰ_i = LayerNorm(E_i + GAU(Q=E_i, K=E_u, V=E_u)) ∈ℝ^B × H_i× k The cross-attention mechanism with two parallel branches is designed to process information from both the user embedding and the item embedding simultaneously. By having two parallel branches, one focusing on user information and the other on item information, the model can simultaneously attend to both user preferences and item characteristics. This enables the model to capture bidirectional interactions between user embedding and item embedding, leading to more accurate modeling of user-item interactions. The overall structure is shown in <ref>. §.§.§ Gated Attention Unit The Gated Attention Unit introduces a gating mechanism to facilitate selective attention for better learning the dependency between user embedding and item embedding. Specifically, the Gated Attention Unit effectively enables an attentive gating mechanism as follows: Q = ϕ(X_Q W_Q) K = ϕ(X_K W_K) V = ϕ(X_V W_V) U = σ(X_Q W_U) where X_Q, X_K, X_V are the query, key, and value inputs, ϕ is the non-linear activation function for the query, key and value projections, and σ is the sigmoid function for computing the gating value based on the query. With the learned projections Q, K, V, and the gating value U, we compute the attention weights, followed by gating and a post-attention projection. O = (U ⊙ AV)W_o A = softmax(QK^T/√(d_k)) where A ∈ℝ^H_u× H_i contains the user-to-item attention weights. This example assumes that we use the user embedding as the query, and the item embedding as key and value. §.§ Maximum Similarity Layer The Maximum Similarity Layer computes the final probability prediction based on the user-attended embedding ℰ_u and item-attended embedding ℰ_i. Specifically, each user sub-space first computes the maximum cosine similarity over all item sub-spaces; the scalar outputs of these operations are then summed across all user sub-spaces: s = (∑_p=1^H_u max_q∈{1,⋯, H_i} COSINE(ℰ_u^p, ℰ_i^q)) / τ where p and q are the sub-space indexes of the user-attended embedding and item-attended embedding, respectively, and τ is a learnable temperature scalar for re-scaling the cosine similarity. Note that the Maximum Similarity Layer has no parameters apart from τ, which makes it suitable for online serving. § PRE-RANKING MODEL OPTIMIZATION Despite the significance of enhancing the consistency between ranking models and pre-ranking models, pre-ranking models trained exclusively on impression samples, as ranking models are, suffer from sample selection bias. The pre-ranking model, which operates on the outputs of recall models, aims to identify the most relevant candidate set for the ranking model. Consequently, aligning the item distribution between the training and serving phases is essential to mitigate this sample selection bias and improve model effectiveness. As illustrated in <ref>, we implement full-stage sampling to draw training data from impression samples, candidate samples, and random samples to mitigate sample selection bias. Moreover, we strategically apply various distillation and learning-to-rank losses to different sample scopes to effectively learn the ordering of user behaviors and the sequencing of the sample stages. 
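A minimal NumPy sketch of the three architecture components described above, for a single user-item pair, is given below. It is our illustration with assumed dimensions, random weights, and tanh activations, not the paper's TensorFlow implementation, and it omits batching and layer normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
H_u, H_i, k, d_in = 4, 4, 8, 16          # sub-spaces, sub-space size, input size (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_heads(x, W, Wg, n_heads):
    """Multi-Head Gated Network: MLP output modulated by an instance-aware gate."""
    h = np.tanh(x @ W)                   # single-layer MLP for brevity
    g = sigmoid(x @ Wg)                  # gating MLP
    return (h * g).reshape(n_heads, k)

def gau_cross_attention(q_emb, kv_emb, Wq, Wk, Wv, Wgate, Wo):
    """One Gated Attention Unit pass with a residual connection (LayerNorm omitted)."""
    Q, K, V = np.tanh(q_emb @ Wq), np.tanh(kv_emb @ Wk), np.tanh(kv_emb @ Wv)
    U = sigmoid(q_emb @ Wgate)
    A = np.exp(Q @ K.T / np.sqrt(k))
    A /= A.sum(axis=1, keepdims=True)    # row-wise softmax
    return q_emb + (U * (A @ V)) @ Wo

def max_sim(e_u, e_i, tau=1.0):
    """Sum over user sub-spaces of the maximum cosine similarity to item sub-spaces."""
    u = e_u / np.linalg.norm(e_u, axis=1, keepdims=True)
    v = e_i / np.linalg.norm(e_i, axis=1, keepdims=True)
    return (u @ v.T).max(axis=1).sum() / tau

x_u, x_i = rng.normal(size=d_in), rng.normal(size=d_in)
Wu, Wgu = rng.normal(size=(d_in, H_u * k)), rng.normal(size=(d_in, H_u * k))
Wi, Wgi = rng.normal(size=(d_in, H_i * k)), rng.normal(size=(d_in, H_i * k))
Wq, Wk, Wv, Wgate, Wo = (0.1 * rng.normal(size=(k, k)) for _ in range(5))
e_u, e_i = gated_heads(x_u, Wu, Wgu, H_u), gated_heads(x_i, Wi, Wgi, H_i)
attn_u = gau_cross_attention(e_u, e_i, Wq, Wk, Wv, Wgate, Wo)
attn_i = gau_cross_attention(e_i, e_u, Wq, Wk, Wv, Wgate, Wo)
print("pre-ranking logit:", max_sim(attn_u, attn_i))
```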
§.§ Full-Stage Sampling The RankTower model is trained using user-level listwise samples that comprise multiple positive items and multiple objectives. The training samples associated with each user are sourced from various stages of the cascade ranking system, as shown in <ref>. Detailed definitions and relationships among these components are provided below: §.§.§ Impression Samples The items output by the ranking model and viewed by the user consist of both positive and negative samples. Positive samples are items that have received various types of positive user feedback. In contrast, negative samples are items that have been exposed to the user but have not received any positive user feedback. §.§.§ Candidate Samples The item candidates in the ranking or pre-ranking stages that are not viewed by the user are categorized based on their progression through the cascade ranking pipeline. Ranking candidates, which have advanced to the ranking stage, are generally considered as hard negative samples due to their higher relevance and quality compared to the pre-ranking candidates. In contrast, pre-ranking candidates are regarded as relatively easy negative samples because they were filtered out before reaching the ranking stage, indicating a lower level of relevance or potential interest to the user. §.§.§ Random Samples Items that are randomly sampled from the item corpus to serve as negative samples. These random samples are considered the easiest negative samples but are included to further enhance the generalization capability of the pre-ranking model. The incorporation of random samples ensures that the model remains effective and adaptable when encountering previously unseen items during the serving phase, thereby improving its robustness and ability to handle diverse item distributions. §.§ Label Aggregation Our framework incorporates two categories of labels: hard labels and soft labels. Hard labels represent various types of positive user feedback on impression samples. Soft labels are the predictions made by the ranking model for different types of user feedback. These soft labels are utilized for knowledge distillation to improve the consistency between pre-ranking models and ranking models, ensuring that the pre-ranking model learns to mimic the behavior of the ranking model. Both categories of labels require an aggregation function to consolidate the different user behaviors into a single scalar value for the pre-ranking model's learning. §.§.§ Hard Labels The aggregation of hard labels is highly dependent on the specific business problem, requiring that labels be aggregated according to their orders of importance. For instance, in online advertising, eCPM can be utilized based on the pricing model of the platform. In an e-commerce context, one might establish a relative preference order based on the depth of user feedback, such as Purchase > Add to Cart > Click. For scenarios like feed ranking or video recommendations, users may provide various types of feedback signals. These signals can be aggregated using a weighted sum approach, based on the specific business objectives. In addition to user feedback labels, we incorporate a general impression label applicable across business scenarios, for learning the pattern of the cascade ranking system. The label assigned a value of 1 for impression samples and 0 otherwise. It is important to note that all the candidate samples and random samples are considered negative in terms of hard labels. 
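As one concrete, hypothetical instantiation of the hard-label construction just described, the snippet below combines weighted user-feedback signals with the generic impression label. The weight values and the additive combination are illustrative assumptions, since the paper leaves the aggregation business-specific.

```python
# Toy sketch of one possible hard-label aggregation; weights and the additive
# impression term are assumptions for illustration, not the authors' exact scheme.

FEEDBACK_WEIGHTS = {"purchase": 4.0, "add_to_cart": 2.0, "click": 1.0}

def aggregate_hard_label(feedback: dict, is_impression: bool) -> float:
    """Weighted sum of positive user-feedback signals plus the generic impression label.
    Candidate and random samples have no feedback and no impression, so they map to 0."""
    score = sum(w for name, w in FEEDBACK_WEIGHTS.items() if feedback.get(name, 0) > 0)
    return score + (1.0 if is_impression else 0.0)

# Examples: a clicked-and-purchased impression vs. an unexposed candidate item.
print(aggregate_hard_label({"click": 1, "purchase": 1}, is_impression=True))  # 6.0
print(aggregate_hard_label({}, is_impression=False))                          # 0.0
```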
The user feedback labels help the pre-ranking model in learning the revenue or engagement level associated with different user behaviors. The exposure label facilitates the pre-ranking model's ability to learn and replicate the ranking patterns in the downstream cascade ranking system. §.§.§ Soft Labels Similar to the aggregation of hard labels, the aggregation of soft labels should employ the ranking objective function. This approach ensures that the soft labels, which are derived from the predictions of the ranking model, are seamlessly integrated into the training process. By utilizing the ranking objective function, the consistency between the pre-ranking model and the ranking model is maintained. §.§ Hybrid Loss Functions We design the pre-ranking model learning strategy to achieve two primary objectives: consistency and ranking. To guide and accelerate the model training process, we utilize the ranking model's predictions as soft labels for the purpose of knowledge distillation from the ranking model (serving as the teacher model) to the pre-ranking model (serving as the student model). This approach ensures consistency between the pre-ranking and ranking stages. To facilitate both ranking accuracy and retrieval capability, we apply fine-grained and coarse-grained ranking losses, respectively. The model is trained on samples across different stages and varying easy/hard sample levels, enabling it to achieve robust generalization during the pre-ranking stage while simultaneously optimizing hierarchical objectives. Our synergistic framework is designed to learn both the hierarchy of user behaviors and the pattern of the cascade ranking system. For instance, in the context of online advertising, the model is expected to understand the following order of importance: converted items > clicked items > exposed items > candidate items and randomly sampled items. This prioritization helps ensure that the model effectively distinguishes between different levels of user engagement and optimizes the ranking accordingly. §.§.§ Distillation Loss As the main goal for the pre-ranking model is to output a high-quality item set for the ranking model, hence we used a listwise loss for distilling the knowledge from the ranking model as follows: ℒ_Distillation(z, p) = -∑_i ∈𝒟_I p_i logexp(z_i)/∑_j ∈𝒟_Iexp(z_j) where p is the prediction of the ranking model (soft label), z is the logit of the pre-ranking model, 𝒟_ℐ is the impression samples set. It is important to note that in our approach, the distillation process from the ranking model to the pre-ranking model is conducted exclusively on impression samples. As the ranking model is trained solely on these impression samples, its ability to generalize to candidate samples and random samples is inherently limited. However, by carefully adjusting the weight of the distillation loss, we can enhance both the consistency between the models and the overall ranking capability of the pre-ranking model. §.§.§ Fine-Grained Ranking Loss The fine-grained ranking loss is applied to both impression samples and candidate samples. This loss is critically important during training as it directly corresponds to the sample scope used in serving. We employ the 𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭, a differentiable sorting loss, to learn user behavior and the patterns of the cascade ranking system. The primary objective of the fine-grained ranking loss is to precisely rank items according to the varying degrees of positive feedback they receive. 
Furthermore, this loss function is designed to effectively differentiate positives from impression samples and negatives from candidate samples. Consider the 𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭 operator defined by the metric function 𝐝 = | · |^p and temperature parameter τ for sorting n-dimensional real vectors s ∈ℝ^n: 𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭^d_τ(s) = 𝐬𝐨𝐟𝐭𝐦𝐚𝐱(-𝐝(sort(s) 1^T, 1s^T)/τ) The output of the SoftSort operator is a permutation matrix of dimension n. The softmax operator is applied row-wise, thereby relaxing the permutation matrices into a set of unimodal row-stochastic matrices. In simple words: the r-th row of the SoftSort operator is the softmax of the negative distances to the r-th largest element. <cit.> We then employ the softmax cross entropy between the permutation matrix of the label y and the permutation matrix of the logit z. The 𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭 loss function is hereby defined as: ℒ_Sorting(z, y) = - 𝐭𝐫(𝐉_n (𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭^d_τ(y) ∘log𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭^d_τ(z))) where 𝐉_n is a n× n matrix of ones, 𝐲 = (y_i)_i ∈𝒟_ℐ∪𝒟_𝒞 is the hard label and 𝐳 = (z_i)_i ∈𝒟_ℐ∪𝒟_𝒞 is the logit of the pre-ranking model. We use 𝐭𝐫 to compute the element-wise sum of the matrix 𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭^d_τ(y) ∘log( 𝐒𝐨𝐟𝐭𝐒𝐨𝐫𝐭^d_τ(z) ). §.§.§ Coarse-Grained Ranking Loss The coarse-grained ranking loss is applied to all the samples: impression samples, candidate samples and random negative samples. The primary objective of the coarse-grained ranking loss is to effectively separate positive samples from negative samples. Additionally, it supports the ranking process among positive samples by providing a framework for distinguishing varying degrees of relevance or quality within the positive sample set. We propose the Adaptive Margin Rankmax (AM-Rankmax) as the coarse-grained ranking loss. The Adaptive Margin Rankmax Loss (AM-Rankmax Loss) is an innovative modification of the Rankmax loss function <cit.>, designed to enhance performance in ranking tasks. This loss function introduces an adaptive margin that varies based on the nature of the pair being compared and the label distance between the items in the pair, thereby extending the Rankmax loss to address ranking problems with ordered or continuous labels. This approach ensures better generalization and more accurate differentiation in ranking scenarios. Consider the Rankmax loss for ranking problems with binary labels only: ℒ_Rankmax(z, y) = -∑_j:y_j>0logRankmax(z, y)_j = ∑_j:y_j>0log∑_i=1^n(z_i - z_j + 1)_+ The Rankmax loss is reminiscent of pairwise losses: z_i represents the predicted logit for the i-th item in the list, and z_j is the predicted logit for the j-th target item. The objective of the Rankmax loss function is to ensure that the target item is ranked appropriately in relation to all other items in the list, including a margin term set to 1, which aids in enforcing the desired ranking separation. To extend the Rankmax loss to more general ranking problems involving multi-level positive labels, we introduce the adaptive margin. The adaptive margin for the Rankmax loss incorporates both the type of item pairs and the distance between their labels to enhance ranking performance: * The loss is applied only when y_i < y_j, which is more suitable for the multi-level positive label scenario. * The margin adjusts based on whether y_i is positive or negative. When y_i is negative, a larger margin is applied to ensure that the positive item associated with y_j is ranked significantly higher. Conversely, when y_i is positive, a smaller margin is used to reflect subtle differences in relevance. * The margin scales with the label distances between items. 
Greater distances in labels result in larger margins, ensuring proper ranking separation for items with significantly distinct labels. The adaptive margin function m(i, j) = α·I(y_i = 0 ) + δ(y_i, y_j) where α is a constant for adding additional margin between negative and positive items, I is the indicator function. The metric function δ can take various forms, for example δ(y_i, y_j) = 1 or δ(y_i, y_j) = β |y_i - y_j|^p. The adaptive margin Rankmax loss is then given by: L_AM-Rankmax(z, y) = ∑_j: y_j > 0log∑_i : y_i < y_j(z_i - z_j + m(i, j) )_+ where 𝐲 = (y_i)_i ∈𝒟_ℐ∪𝒟_𝒞∪𝒟_ℛ is the hard label from all the samples and 𝐳 = (z_i)_i ∈𝒟_ℐ∪𝒟_𝒞∪𝒟_ℛ is the logit of the pre-ranking model. By incorporating the adaptive margin function, the AM-Rankmax loss function can effectively adapt to scenarios with multiple positive labels of varying levels. This enhancement allows the model to handle different degrees of positive feedback, thereby improving its ability to generalize and accurately rank items in complex settings. §.§.§ The Hybrid Ranking Loss We design a hybrid ranking loss that integrates both distillation and ranking objectives. The hybrid ranking loss is the weighted sum of three losses: ℒ_Hybrid(z, y ) = λ_1ℒ_Distillation(z, p) + λ_2ℒ_Sorting(z, y) + λ_3ℒ_AM-Rankmax(z, y) where λ_1, λ_2 and λ_3 are the loss weights for each sub-objective. Balancing the distillation and ranking losses is crucial for the pre-ranking model to inherit the ranking model's capabilities while generalizing to broader sample spaces. Additionally, weighting the fine-grained and coarse-grained ranking losses appropriately ensures a balance between precise ranking of relevant items and overall retrieval robustness. § EXPERIMENTS In this section, we provide a comprehensive description of our experiments, including detailed information about the datasets, evaluation metrics, comparisons with state-of-the-art pre-ranking models, and corresponding analyses. The experiment results, conducted on three publicly available large-scale datasets spanning the domains of online advertising, e-commerce, and short video recommendation, demonstrate the effectiveness of RankTower in the domain of pre-ranking. During experiments, we focus on evaluating the effectiveness of our proposed models and answering the following questions: * Q1: How does our proposed RankTower perform for pre-ranking task? Is it effective and efficient under extremely high-dimensional and sparse data settings? * Q2: How do different settings on dataset sampling and training losses influence the performance of RankTower? §.§ Experiment Setup §.§.§ Datasets We evaluate our proposed model using three publicly available real-world datasets commonly utilized in research: Alimama, Taobao and KuaiRand. For each dataset, we retain only users who have had at least 100 impressions and 20 instances of positive feedback. The data is randomly divided into three subsets: 70% for training, 10% for validation, and 20% for testing. Since all labels in the aforementioned dataset are binary, we simply aggregate them by summing the labels to form the hard label. * Alimama[https://tianchi.aliyun.com/dataset/408] is a Alimama advertising dataset, which are displayed on the website of Taobao. * Taobao[https://tianchi.aliyun.com/dataset/649] is a Taobao E-commerce dataset released Alibaba. * KuaiRand[https://kuairand.com/] is a recommendation dataset collected from the video-sharing mobile app Kuaishou. 
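Before turning to evaluation, the hybrid objective described in the previous section can be sketched in NumPy as below. This is our illustrative reconstruction with assumed hyper-parameters (squared distance metric, α = 2, δ = 1, unit loss weights); it operates on a single toy list and ignores the per-loss sample scopes for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(z, p):
    """Listwise softmax cross-entropy against ranking-model soft labels p."""
    return -(p * np.log(softmax(z))).sum()

def soft_sort(s, tau=1.0):
    """Row-stochastic relaxation of the permutation matrix sorting s in descending order."""
    d = np.abs(np.sort(s)[::-1][:, None] - s[None, :]) ** 2
    return softmax(-d / tau, axis=1)

def sorting_loss(z, y, tau=1.0):
    """Cross-entropy between the SoftSort matrices of labels y and logits z."""
    return -(soft_sort(y, tau) * np.log(soft_sort(z, tau) + 1e-12)).sum()

def am_rankmax_loss(z, y, alpha=2.0, delta=1.0):
    """Adaptive-margin Rankmax: hinge terms only for pairs with y_i < y_j."""
    loss = 0.0
    for j in np.where(y > 0)[0]:
        margins = alpha * (y == 0) + delta              # m(i, j) for each competing item i
        hinge = np.maximum(z - z[j] + margins, 0.0)
        loss += np.log(hinge[y < y[j]].sum() + 1e-12)
    return loss

def hybrid_loss(z, y, p, lambdas=(1.0, 1.0, 1.0)):
    l1, l2, l3 = lambdas
    return l1 * distillation_loss(z, p) + l2 * sorting_loss(z, y) + l3 * am_rankmax_loss(z, y)

z = np.array([2.0, 0.5, -1.0, 0.0])                     # pre-ranking logits (toy)
y = np.array([3.0, 1.0, 0.0, 0.0])                      # aggregated hard labels (toy)
p = softmax(np.array([1.5, 0.8, -0.5, -0.2]))           # ranking-model soft labels (toy)
print(hybrid_loss(z, y, p))
```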
§.§.§ Evaluation Metrics We consider Recall@K and NDCG@K for evaluating the performance of the models, and we set k to 100 for all experiment metrics. Recall@K is the fraction of relevant retrieved within the top K recommendations. It's mainly used for measuring ranking system's capability on retrieving relevant items. NDCG@K measures the quality of the ranking by considering both the relevance and the position of items within the top K recommendations. Items with higher relevance ranked at higher position contribute more to the metric. §.§.§ Competing Models We compare RankTower with the following pre-ranking models: LR <cit.>, Two-Tower <cit.>, DAT <cit.>, COLD <cit.>, IntTower <cit.> and ARF<cit.>. §.§.§ Reproducibility We implement all the models using Tensorflow <cit.>. The mini-batch size is 64, and the embedding dimension is max(⌊log_2(cardinality)⌋, 16) for all the features. For optimization, we employ Adam <cit.> with a learning rate set to 0.001 for all the neural network models, and we apply FTRL <cit.> with a learning rate of 0.01 for LR. Grid-search for each competing model's hyper-parameters is conducted on the validation dataset. The number of DNN layers is from 1 to 4. The number of neurons ranges from 64 to 512. All the models are trained with early stopping and are evaluated every 1000 training steps. For the hyper-parameters search of RankTower, The number of layers in a Multi-Head Gated Network is from 1 to 4. For the number of sub-spaces H_u and H_i, the searched values are from 2 to 10. The number of units for all dense layers is from 32 to 256. Fot the metric function d of the differentiable sorting loss, we use the squared distance function. For the α and metric function δ of in AM-Rankmax loss, we search for α in the range from 2 to 5, and set δ(y_i, y_j) = 1 for simplicity. For generating candidate samples, we first construct a two-tower model based on the approach described by <cit.>. We then utilize this model to retrieve candidates for each user, filtering out the items they have interacted with to build candidate samples. These candidate samples serve as hard negative cases for each user in our experiments. For generating ranking model's prediction on impression samples, we ensemble several state-of-the-art ranking models <cit.>. We use weighted logloss <cit.> as the training objective and apply a weighted average to ensemble all the models. All the ranking models are trained on the training dataset, and the ensemble weights are optimized on the validation dataset. §.§ Model Performance Comparison (Q1) The overall performance of different model architectures is listed in <Ref>. We have the following observations in terms of model effectiveness: * LR exhibits the lowest performance compared to the other neural network-based models. * Two-Tower brings the most significant relative improvement in performance with increased model complexity relative to the LR baseline, highlighting the importance of learning deep feature interactions. * While most of the baseline models are of two-tower architecture, COLD achieves relatively strong performance among the competing models, indicating the significance of learning user-item feature interactions. * ARF outperform other models that do not utilize listwise ranking losses, highlighting the importance of listwise ranking losses for pre-ranking models. * RankTower achieves the best prediction performance among all models. 
Our model's superior performance could be attributed to RankTower's effective modeling of bi-directional user-item feature interactions, as well as the design of full-stage sampling and hybrid loss functions. §.§ Model Study (Q2) In order to gain deeper insights into the proposed model, we conduct experiments on the KuaiRand dataset and compare model performance under different settings. This section evaluates the change in model performance with respect to settings that include: 1) the effect of full-stage data sampling; 2) the effect of listwise ranking losses; 3) the effect of distillation from the ranking model. §.§.§ Effect of Full-Stage Sampling To better understand the contribution of each component of the full-stage sampling strategy, we conduct a comprehensive ablation study. This study aims to isolate and evaluate the impact of each sampling component on the overall performance of the model. By systematically removing each component, we can identify its significance and contribution to the model's effectiveness. As illustrated in <ref>, the full-stage sampling strategy achieves the best overall performance. When the pre-ranking model is trained solely with impression samples, it fails to generalize to unexposed items, adversely affecting retrieval performance. Furthermore, we observe that candidate samples are more important than random samples. As hard negatives, candidate samples significantly enhance the model's ability to discriminate between relevant and non-relevant items. §.§.§ Effect of Listwise Ranking Losses In order to better understand the properties of the proposed hybrid loss, we compare the proposed hybrid loss with several widely used ranking losses in the industry. The experiment results, as shown in Table <ref>, indicate that the hybrid loss consistently outperforms the alternatives. Notably, it surpasses both its individual components: the Sorting loss and the AM-Rankmax loss. Additionally, our proposed AM-Rankmax demonstrates superior performance compared to the original Rankmax loss and the Softmax loss. §.§.§ Effect of Distillation from Ranking Model To improve the performance and training efficiency of the pre-ranking model, we conduct knowledge distillation from the ranking model, utilizing the Softmax loss. Here we conduct an ablation study on the distillation component and further compare the Softmax loss with other alternatives. <ref> demonstrates the efficacy of the ranking model in transferring knowledge to the pre-ranking model through the distillation process on the impression samples. Among the various loss functions experimented with for distillation, the Softmax loss outperforms the other alternative losses. The Softmax loss, being a listwise ranking loss, proved more adept at distilling the ranking model's capabilities compared to the weighted logloss, which is essentially a pointwise approach and exhibited suboptimal performance in learning the relative ranking distribution. In contrast, the pairwise logloss, focusing solely on the pairwise ordering of the ranking model's predictions without considering their relative proximity, exhibited overfitting to the ranking model's outputs. § CONCLUSION This paper introduces the RankTower model, designed to enhance the performance of the two-tower model by effectively capturing latent interactions between user and item. 
The RankTower architecture consists of a Multi-Head Gated Network and a Gated Cross-Attention Network, which model diversified latent user-item representations and capture complex user-item interactions dynamically. Additionally, the Maximum Similarity Layer contributes to improved serving efficiency. To ensure consistency with the existing cascade ranking system, a hybrid loss function and a full-stage sampling approach are integrated into the model's optimization framework. Comprehensive experiments demonstrate that RankTower significantly outperforms state-of-the-art pre-ranking models. In future work, we aim to study how to effectively and jointly optimize the cascade ranking system in an end-to-end fashion.
http://arxiv.org/abs/2407.13435v1
20240718120314
Enhancing Out-of-Vocabulary Performance of Indian TTS Systems for Practical Applications through Low-Effort Data Strategies
[ "Srija Anand", "Praveen Srinivasa Varadhan", "Ashwin Sankar", "Giri Raju", "Mitesh M. Khapra" ]
cs.CL
[ "cs.CL", "cs.LG", "cs.SD", "eess.AS" ]
Enhancing Out-of-Vocabulary Performance of Indian TTS Systems for Practical Applications through Low-Effort Data Strategies Srija Anand, Praveen Srinivasa Varadhan, Ashwin Sankar, Giri Raju, Mitesh M. Khapra ============================================================================ § ABSTRACT Publicly available TTS datasets for low-resource languages like Hindi and Tamil typically contain 10-20 hours of data, leading to poor vocabulary coverage. This limitation becomes evident in downstream applications where domain-specific vocabulary, coupled with frequent code-mixing with English, results in many OOV words. To highlight this problem, we create a benchmark containing OOV words from several real-world applications. Indeed, state-of-the-art Hindi and Tamil TTS systems perform poorly on this OOV benchmark, as indicated by intelligibility tests. To improve the model's OOV performance, we propose a low-effort and economically viable strategy to obtain more training data. Specifically, we propose using volunteers as opposed to high-quality voice artists to record words containing character bigrams unseen in the training data. We show that using such inexpensive data, the model's performance improves on OOV words, while not affecting voice quality and in-domain performance. § INTRODUCTION Text-to-Speech (TTS) systems play a crucial role in linguistically diverse and developing regions like India, finding usage in various commercial and governmental applications. For example, they can be used for broadcasting vital information to farmers about weather conditions, disseminating information about government schemes, and enhancing accessibility for the visually impaired. However, the effectiveness of these systems is often hampered by limited training data. Publicly available TTS datasets for languages such as Hindi and Tamil typically range from 10-20 hours <cit.>, resulting in inadequate vocabulary coverage. This shortfall becomes particularly evident in practical applications, where the occurrence of out-of-vocabulary (OOV) words is inevitable, due to frequent code-mixing with English as well as the usage of specialized domain-specific vocabulary. While this issue is well-documented in English TTS systems <cit.>, in this work, we show that this is also the case for low-resource languages like Hindi and Tamil, where TTS systems similarly under-perform on OOV words compared to in-vocabulary (IV) words (see Figure <ref>). Given the above situation, our goal is to improve the intelligibility of TTS systems on OOV texts while retaining their naturalness. However, several challenges exist when attempting to improve the OOV performance of existing systems. First, it would be ideal to record more training data containing OOV words using the same speaker from the original dataset that the TTS model was trained on, because such speakers are typically carefully selected artists with pleasant voices and speaking styles that are more suited for building TTS systems. However, this is mostly infeasible because the identities of the original speakers are anonymized for ethical reasons. 
Second, even if we had access to the original speaker, one would still need access to a larger corpus for low-resource languages to carefully curate a set of OOV words or sentences that can be recorded and later used to improve a model's performance. Finally, one would need a robust benchmark with broad coverage of OOV words across domains to assess whether performance gains generalize well across different real-world applications. We attempt to tackle all of these challenges in the context of Hindi and Tamil, to address the gap between TTS performance on OOV and IV texts. Since we do not have access to the speakers in the original dataset, we instead explore a cost-effective alternative of recording OOV words from volunteers having different voices <cit.> and evaluate whether training the TTS system on this new data can reduce the OOV intelligibility error rates for the original voice of the TTS system. Next, to curate OOV texts for recording, we collate an extensive corpus from multiple resources for Hindi and Tamil and then carefully select OOV words by maximizing the coverage of high-frequency missing OOV character bigrams. This is motivated by the importance of achieving syllabic balance <cit.> and the importance of phonotactics <cit.> in TTS for Indian languages. Finally, to evaluate the performance gains of our proposed approach, we release a benchmark, , for Hindi and Tamil containing OOV words which are not seen in the original training data as well as in the inexpensive data recorded using volunteers. These OOV words span 7 categories, viz. Abbreviations, Brands and Products, Codemixed (English-Hindi, English-Tamil), Company Names, Government Schemes, Proper Nouns and Navigations, which are typically seen in downstream applications. contains 100 sentences per category for a comprehensive evaluation. We conduct intelligibility tests and show that our cost-effective method indeed leads to better performance on , while not affecting the voice quality obtained by just training on the original speaker's data. § BENCHMARK We present [GitHub repository for IndicOOV: <https://github.com/AI4Bharat/IndicOOV/>], a novel benchmark designed to evaluate the out-of-vocabulary (OOV) word synthesis capabilities of Text-to-speech (TTS) systems for two Indian languages - Hindi and Tamil. A key challenge in creating a benchmark across categories covering real-world applications is the lack of a readily available corpus in Indian languages containing labeled texts covering different categories. We thus break down the process of creating the benchmark into three steps - (i) Find in-vocabulary (IV) and OOV words from a larger text corpus, (ii) classify found words into categories, and (iii) filter out unsuitable words. We first collate a larger corpus of texts across Hindi and Tamil from diverse datasets. We primarily rely on texts from Sangraha <cit.>, the Bharat Parallel Corpus <cit.>, and transcriptions from IndicVoices <cit.>. All these corpora together help us gather words from a variety of sources like Wikipedia, Pratham Books, National Institute of Open Schooling, Press Information Bureau, Mann Ki Baat, and other government and open websites, that reflect practical scenarios one would potentially deploy TTS systems in. We then programmatically identify words containing high-frequency OOV character bigrams missing in the IndicTTS <cit.> training corpus. 
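The bigram-based mining step described above can be sketched as follows; this is our illustrative reconstruction rather than the released benchmark code, and the toy inputs are placeholders.

```python
from collections import Counter

# Illustrative sketch of mining candidate OOV words: find character bigrams absent
# from the TTS training vocabulary, then keep corpus words containing the most
# frequent of those missing bigrams.

def char_bigrams(word: str) -> set:
    return {word[i:i + 2] for i in range(len(word) - 1)}

def find_oov_bigram_words(tts_vocab: set, corpus_words: list, top_n: int = 1000) -> list:
    seen = set().union(*(char_bigrams(w) for w in tts_vocab))
    missing_counts = Counter(
        bg for w in corpus_words for bg in char_bigrams(w) if bg not in seen
    )
    frequent_missing = {bg for bg, _ in missing_counts.most_common(top_n)}
    return [w for w in set(corpus_words) if char_bigrams(w) & frequent_missing]

# Toy usage with Latin strings purely for illustration:
print(find_oov_bigram_words({"water", "later"}, ["recharge", "scooter", "water"]))
```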
We form a set of over 1000 words and task a human language expert to scan this word list and manually classify individual words into the 7 categories of interest mentioned earlier. Once we reached fifty words within a particular category, we requested the language expert to prioritize finding words in other categories where the target counts weren't met. For some categories, such as Proper nouns or Navigation phrases, even with multiple iterations, we could not find the desired number of words containing OOV character bigrams. In such cases, we created a set of missing character OOV bigrams and relied on the creativity of the language expert to create or recollect words for a particular category containing these OOV bigrams. Likewise, we also curate a list of fifty IV words, which have all their bigrams present in the IndicTTS dataset, for each category. In this manner, we are able to create a diverse benchmark spanning 7 key TTS application categories: Abbreviations (Abbr), Brands and Products (Brand), Codemixed (CM), Company Names (Cmpy), Government Schemes (Govt), Proper Nouns (Prop), and Navigation (Nav). For Tamil, we replace Company Names and Government Schemes with Education(Edu) and Healthcare(Health) due to the difficulty in obtaining OOV words, respectively. § RECORDING OOV WORDS WITH VOLUNTEERS We explore a cost-effective method to improve TTS performance by recording OOV words with the help of volunteers who are not professional voice artists. To do this we require (i) recording scripts, (ii) volunteers willing to lend their voice, and (iii) a recording setup and process to record all data. §.§ Recording Script Creation While creating recording scripts we focus on maximizing the coverage of OOV words along with missing OOV bigrams present in them. Instead of recording semantically meaningful sentences which are complete, we prepare a list of OOV words and randomly join sets of five words separated by commas to create unique utterances. We ensure no word repeats across utterances. To select the OOV words, we iterate through the text corpora collated in Section <ref> and use a greedy algorithm to find words that maximize the frequency of missing OOV bigrams in the selected set. Owing to limitations in recording capacity and budget, we restrict our recordings to approximately 2000 words per language. In light of this restriction, we adjust the greedy algorithm to exclude a character bi-gram from consideration once its frequency in the chosen set surpasses k. We empirically choose k=6, and this parameter can be raised when recording additional data is necessary. Note that none of the OOV words present in , are allowed to be a part of our recording scripts (although OOV words in our recording scripts may share bigrams with OOV words in .) §.§ Gathering Volunteers We found interested volunteers by circulating a form within our institution. Prior to recording, all volunteers were clearly informed regarding the intended use of their voice data and the compensation they would receive for the same. Following this explanation, speakers were presented with consent forms, which they reviewed and signed. These forms explicitly documented their informed consent for the utilization of their recordings in the training of speech synthesis systems. To safeguard speaker privacy and prevent potential misuse of the speech data, we have opted to withhold the recorded audio samples from public release. 
However, we release the recording scripts employed during data collection to facilitate reproducibility and transparency in research methodology. The entire process of data collection was approved by our Institute Ethics Committee with the compensation in line with recommended norms. §.§ Recording Setup and Process Since renting a professional studio can be expensive, we instead rely on recording data in acoustic pods present in our workspace. We record audio using a professional condenser microphone with a pop filter to record the volunteers' voices. Getting the words to be spoken out clearly was crucial for our experiments. To ensure clarity in voice, we ascertained that participants were well-hydrated before recording sessions. A few volunteers indicated that the OOV words were difficult to read. We thus requested all volunteers to practice with the recording script before recording sessions. During the recording, an expert proficient in the language listened to all recordings live and pointed out any pronunciation mistakes made by the speaker. During the recording process, this expert also filtered out words that were deemed inappropriate due to spelling mistakes, archaic nuances, profanity, or toxicity. In this manner, we record a total of 6 speakers, consisting of two male speakers and one female speaker for Tamil and Hindi each. The detailed statistics of the recordings are present in Table <ref>. Note that the use of volunteers as opposed to professional voice artists made, ensured that the data collection was relatively inexpensive with an 85% reduction in costs and the entire process was completed in 2 working days (counting studio time and post-processing time). § EXPERIMENTAL SETUP §.§ Dataset: We use the IndicTTS dataset <cit.> for all experiments. Specifically, we train models on the Hindi and Tamil subsets. The Hindi subset contains approximately 10 hours and 4 minutes of female speech and 10 hours and 5 minutes of male speech. Similarly, the Tamil subset contains approximately 10 hours and 2 minutes of female speech and 10 hours and 33 minutes of male speech. Additionally, we finetune models on the recorded data described in Section <ref>. §.§ Models We train and evaluate with two state-of-the-art text-to-speech (TTS) models: FastPitch (FP)<cit.> and VITS <cit.>. We fine-tune a pre-trained FastPitch <cit.>, a non-autoregressive transformer-based spectrogram prediction model, starting from the open-sourced pre-trained checkpoint on IndicTTS. To learn durations the model employs an unsupervised alignment learning framework <cit.> that aligns textual features with acoustic representations. Mel-spectrogram outputs are then converted to audio waveforms using the HiFiGAN V1 vocoder <cit.>, pre-trained on the IndicTTS corpus from a publicly available checkpoint <cit.>. We fine-tune this vocoder on our internal dataset for better speaker generalization. We also train VITS, an end-to-end speech synthesis model, with the same hyperparameter settings as prior work <cit.>. Before training, all the audio samples were downsampled to 22050 and converted to a mono-channel configuration. All the models were trained on an NVIDIA A100 40GB GPU, with a batch size of 16, upto 2500 epochs or until convergence. §.§ Evaluation Metrics We measure both the intelligibility and perceptual quality of the speech generated by the models. We rely on human intelligibility tests to assess the intelligibility of TTS systems on OOV and IV words. 
In this test, a rater proficient in the language is tasked to listen to an audio sample along with the complete corresponding text and the benchmark word-of-interest highlighted. The rater is then asked to provide a binary rating of whether the word-of-interest was intelligible or not. Raters are instructed to penalize partially intelligible words too, and mark them as “not intelligible”. Raters are provided options for slowing down the audio and selecting and playing segments in repeat if required. Furthermore, raters were encouraged to discuss with each other in case of confusion. All raters who participated in the test were expert listeners with prior experience in evaluating TTS systems and who also aid in quality assurance of high-quality TTS data collection efforts. In our evaluations, we rely on 8 such expert listeners and report the percentage of unintelligible words as the Intelligibility Error Rate (%). Next, to evaluate the perceptual quality of generated speech we rely on two objective metrics - (i) VISQOL <cit.> and (ii) S-SIM or speaker similarity. VISQOL is a perceptual speech quality estimator that uses a spectro-temporal measure to measure the similarity between ground-truth and reference speech. We use this measure to evaluate the retention of speech quality when training with additional OOV data recorded with low expenses. Since we record data in multiple voices from volunteers and use it to attempt to improve the OOV performance of the original TTS speaker, we measure the speaker similarity of the synthesized model outputs with that of the original speaker. To compute speaker similarity we report the cosine-similarity between the ground-truth and synthesized samples using embeddings extracted from Titanet <cit.>. § RESULTS §.§ Comparison of TTS on IV v/s OOV Texts In Figure <ref>, we visualize the intelligibility error rates of the four baseline models - FastPitch and VITS, each trained on Hindi and Tamil data. Clearly, all models perform worse on OOV words in comparison to IV words across categories. The most noticeable difference is in the Hindi FastPitch, showing high intelligibility error rates of 24% on OOV words which is twice the IV error rate at 12%. Surprisingly, all models show relatively high intelligibility error rates on IV words, too, but this possibly reflects on the low-resource setups on which these systems have been trained. §.§ Recording OOV Words Improves OOV Performance In Table <ref>, we compare the intelligibility of baseline models and models fine-tuned on OOV recordings. On average, across categories, the intelligibility error rates of the fine-tuned models reduce in comparison to the baselines for both OOV and IV words. More specifically, the overall relative reduction in OOV errors is 40.59%, 36.33 %, 22.50 %, and 30.94% for Hindi FP, Tamil FP, Hindi VITS, and Tamil VITS respectively. This clearly shows the utility of low-cost recordings in significantly improving model performance on OOV (as well as IV) words. §.§ Quality of Synthesis To assess whether the quality of the TTS output in the original speaker's voice degrades when adding training data from alternative amateur speakers, we compare the voice quality of the fine-tuned models and baselines with two metrics, viz., ViSQOL for perceptual quality estimation and S-SIM for speaker similarity. Both S-SIM and VISQOL are full-reference metrics and require ground truth samples to be provided as reference audios. 
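For concreteness, the two reported quantities can be computed roughly as sketched below; this is an illustrative sketch in which embed stands in for a speaker-embedding extractor such as Titanet, and it is not the evaluation code used in this work.

```python
import numpy as np

def intelligibility_error_rate(ratings: list) -> float:
    """Percentage of words-of-interest marked 'not intelligible' (binary ratings, 0/1)."""
    return 100.0 * sum(1 for r in ratings if r == 0) / len(ratings)

def speaker_similarity(embed, synthesized_wavs, reference_wavs) -> float:
    """Mean cosine similarity between synthesized and ground-truth speaker embeddings."""
    sims = []
    for syn, ref in zip(synthesized_wavs, reference_wavs):
        a, b = embed(syn), embed(ref)
        sims.append(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
    return float(np.mean(sims))
```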
To assess the relative quality of audios with respect to the original IndicTTS speakers, we use the test set utterances as references. We first compute VISQOL and S-SIM of the baseline (Base) with respect to the reference. We then compute VISQOL and S-SIM of the fine-tuned model (I+O) with respect to the reference. We observe that the speaker similarity for the baseline, S-SIM (Base), and the speaker similarity for the fine-tuned model, S-SIM (I+O), are comparable across languages, models, and TTS voices, indicating that training on OOV recordings of alternate speakers does not degrade the voice of the original TTS speaker. Likewise, the VISQOL scores of the baseline method, VISQOL (Base), and the scores for the fine-tuned model, VISQOL (I+O), are comparable, indicating that there is no degradation in the perceptual quality of speech either. §.§ Can only single-gender data improve multi-speaker TTS OOV performance? We fine-tune the FastPitch model with single-gender recordings - (i) using only one female speaker - F1, and (ii) using only two male speakers - M1 and M2. We aim to check if improvements in intelligibility are agnostic to the gender of the recording volunteer. Table <ref> summarizes the intelligibility scores averaged across seven categories for the base FastPitch model and models fine-tuned on male (Base + M1 + M2) and female (Base + F1) speakers respectively. We observe from the scores that fine-tuning the model with data from one gender improves the OOV performance across all TTS voices (M & F) except for the Tamil male voice when fine-tuning only on Male gender data. Furthermore, fine-tuning the baseline model on only Female gender data reduces the intelligibility error rates from 0.28 to 0.15 for the female speaker and 0.20 to 0.11 for the male speaker in Hindi. Similarly, fine-tuning the baseline model on only Male gender data reduces the intelligibility error rates from 0.28 to 0.12 for the female Hindi speaker and 0.30 to 0.13 for the female Tamil speaker. This shows that one may train a model on single-gender OOV recordings and expect to get OOV performance improvements across both male and female TTS output voices. §.§ Common pronunciation errors In the baseline models, we find some interesting trends in the pronunciation errors for both FastPitch and VITS, across languages. The pronunciations were not sharp for consecutive vowels in Abbreviations like AISEC (/ˌa:iˌe:sˈi:si/) and IAEA (/ˌa:ie:ˈi:e:/). We find that consecutive vowel combinations are not common in the IndicTTS dataset, with only 278 words and 12 words with such combinations from the 188K- and 100K-word corpora for Hindi and Tamil respectively. In contrast, the English TTS dataset LJSpeech <cit.> has 36.5K combinations of consecutive vowels in a corpus of 212K words. Such combinations are prevalent in categories like Abbreviations, Government Schemes, and Navigation that borrow English words. § CONCLUSION We study the problem of OOV words in practical deployments of TTS systems. We first create a benchmark, , for assessing the performance of TTS systems for Hindi and Tamil. We then show that there is indeed a clear gap in the performance of state-of-the-art TTS systems on IV v/s OOV words as evaluated on . We then propose a low-cost approach for augmenting existing TTS datasets with recordings of OOV texts using amateur voice artists. Finally, we show that training a TTS system on such OOV recordings indeed improves the performance on while not affecting the voice quality of the synthesized outputs.
§ ACKNOWLEDGEMENTS This project was made possible through the dedicated efforts and collaboration of numerous organizations and participants. We extend our gratitude to Digital India Bhashini, the Ministry of Electronics and Information Technology of the Government of India, EkStep Foundation and Nilekani Philanthropies for their generous grant. We also express our sincere thanks to the Centre for Development of Advanced Computing, Pune (CDAC Pune), for providing access to their PARAM-Siddhi supercomputer, which was instrumental in model training and preparation of the benchmark. Our heartfelt appreciation goes out to all the participants involved in the recording process for data collection. Special thanks to the Hindi voice artists Rishi Dalal, Vansh Bharat Jain and Afifa Anjum, the Tamil voice artists Sam Imayavan, Clifford B. and Muthathal Subramanian and the language experts Prachi D. and Suganthi V. IEEEtran
http://arxiv.org/abs/2407.12202v1
20240716220159
Tool Shape Optimization through Backpropagation of Neural Network
[ "Kento Kawaharazuka", "Toru Ogawa", "Cota Nabeshima" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT When executing a certain task, human beings can choose or make an appropriate tool to achieve the task. This research especially addresses the optimization of tool shape for robotic tool-use. We propose a method in which a robot obtains an optimized tool shape, tool trajectory, or both, depending on a given task. The feature of our method is that a transition of the task state when the robot moves a certain tool along a certain trajectory is represented by a deep neural network. We applied this method to object manipulation tasks on a 2D plane, and verified that appropriate tool shapes are generated by using this novel method. § INTRODUCTION Tool-use is one of the fundamental abilities of human beings. When executing a task, human beings can choose or make an appropriate tool to achieve the task. For a robot to work in a human environment, combining existing tools or making new tools is necessary to expand its capability. In this study, we mainly focus on the optimization of tool shape and tool trajectory, as a foundation of tool making. Robotic tool-use has been studied in various topics: tool recognition <cit.>, tool understanding <cit.>, tool choice <cit.>, and motion generation with tool-use <cit.>. However, there have been few studies about tool-making or making a new appropriate tool for a given task. Nair, et al. developed methods to construct a new tool by combining two existing tools using geometric reasoning <cit.>. Wicaksono, et al. developed frameworks of tool creation as an extension of tool-use learning <cit.>. However, because <cit.> can generate only tools expressed by the combination of two existing tools and <cit.> can generate only tools similar to a reference tool due to random generation of tools fulfilling many hypotheses (e.g. a hook-like tool), various free forms of tool shapes cannot be handled. Also, because <cit.> must be tested by the actual robot to choose the best tool, the optimization of tool shape takes too much time. Apart from robotic tool-use, an optimization method of robot design parameters such as link length and actuator placement has been developed <cit.>. Also, some studies have jointly optimized the robot design parameters and control scheme using a genetic algorithm <cit.> or reinforcement learning <cit.>, in simulation environment. While these studies are similar to the scenario of tool making, there are two problems in common. First, these robot designs (e.g. link length and actuator placement) are manually parameterized and various free forms cannot be handled. To appropriately parameterize the design, prior knowledge of human experts is necessary. Second, experiments of almost all previous works are conducted in simulation environment. This is because we must move the robot to obtain evaluation value in the process of optimization and it takes too much time in the actual environment. Also, these studies cannot consider the characteristics of the actual environment, e.g. friction, hysteresis, and robot model error.
Our contributions of this study are as below, * We use an image to represent a tool shape. A tool shape can be easily converted to an image, and we can uniformly handle various tool shapes without prior knowledge. * All experiments including evaluations are conducted in the actual environment. By acquiring a tool-use model using the actual robot data of random movements, the tool shape is directly optimized without evaluating the movements of the actual robot. As shown in figure:motivation, this study simultaneously calculates an optimized tool shape and trajectory for a given task. A transition of the task state when a robot moves a certain tool along a certain trajectory is represented by a deep neural network. An optimized tool shape, tool trajectory, or both for a target task, can be obtained by using the backpropagation technique <cit.> of the neural network. Although this method includes motion trajectory optimization, we mainly put stress on tool shape optimization. We conduct experiments using the actual robot on a 2D plane to verify the effectiveness of this study. 道具利用は人間の本質的な能力の一つである. 人間はあるタスクを行う際に, それに適切な道具を選択, または製作し, 適切な動作軌道で利用することができる. ロボットが人間と同じように作業するためには, 既存の道具を組み合わせたり, 状況に合わせた新しい道具を製作したりすることが必要である. 本研究ではその基盤となる技術である, 道具の最適化に着目する. これまで, ロボットにおける道具使用に必要な要素である, 道具認識, 道具理解, 道具選択, 道具を用いた動作生成に関して様々な研究が行われている. Huberらはテンプレートマッチングにより道具の認識を行い<cit.>, Kempらは道具の先端の認識とその制御手法を開発した<cit.>. Zhuらは人間の道具使用動画から道具の機能や動作軌道の抽出を行い<cit.>, Myersらは道具のaffordanceを深層学習により抽出した<cit.>. Saitoらはロボットの道具使用経験から道具選択を学習する枠組みを作り<cit.>, Teeらは自分の身体と道具の類似性から道具選択を行う仕組みを開発した<cit.>. Okadaらは道具使用の記述に必要な要素の抽出と視覚による確認<cit.>, Taussaintらはダイナミックな道具利用とその動作計画<cit.>について研究している. Nabeshimaらは道具による身体図式の変容について考察し<cit.>, Fangらはタスクに応じた道具の持ち方を選択する手法を開発した<cit.>. しかし, タスクに応じた新たな道具生成に関する研究はこれまでにない. そこで本研究では, 道具の形状最適化とその利用について議論する. つまり, figure:motivationのように, あるタスクが与えられたときに, そのタスクに適した道具の形状と軌道を出力するという問題である. この問題を解くことにより, 針金のように変形する道具が扱えるようになる. またその発展形として, 道具を組み合わせたり, 実際に人間のように工具を使って道具を製作することができるようになると考える. 本研究では, 道具とその動作軌道によるタスク状態の遷移をニューラルネットワークにより表現する. そして, あるタスクが与えられた際に, ニューラルネットワークの誤差逆伝播法用いることで, そのタスクに最適な道具形状・動作軌道, またはその両方を得ることができる. 本手法は道具の動作軌道計画も含むが, 主に道具の形状最適化について議論をする. 先行研究として, 道具ではないが, ロボット設計におけるリンク長・アクチュエータ配置の身体パラメータの最適化研究も存在する<cit.>. 道具の形はより複雑で多様性があり, 本研究はそれらを最適化していく手法について考える. 以降では, まず第二章で, 本研究で提案する道具形状・動作軌道最適化ネットワーク(Tool-Net)の詳細と, それを用いた道具形状・動作軌道最適化について述べる. 第三章では, 平面状を動く2自由度ロボットにおいて実験を行い, 本手法の有効性を確認する. 最後に, 本研究に関する議論と結論を述べる. § TOOL SHAPE AND TRAJECTORY OPTIMIZATION NETWORK In this study, we represent a transition of task state, when using a given tool shape and trajectory, by a neural network. We call this network “Tool Shape and Trajectory Optimization Network (Tool-Net)”. In the following sections, we assume that the robot moves on a 2D plane and the task is an object manipulation task, for simplicity. In sec:discussion, we will discuss extensions of our method to a robot that moves in 3D space and other kinds of tasks. 本研究では, 与えられた道具形状とその動作軌道によるタスク状態の遷移をニューラルネットワークにより表現し用いる. このネットワークをTool-Net (Tool Shape and Trajectory Optimization Network)と呼ぶ. 本章ではロボットの動きを2次元平面上に限定して実装を考えるが, 本手法は3次元に拡張することも可能であり, これについてはsec:discussionにて議論する. また, 本研究ではfigure:motivationに示すように道具による物体の移動タスクについて考えるが, 本研究の手法の適用先はこれに限らず, これについてもsec:discussionにて議論する. §.§ Network Structure of Tool-Net The network structure of Tool-Net is shown in figure:network-structure. 
This network is represented by the equation below, s_predicted = f(s_current, t, u) where s_current is a current task state, t is a tool shape, u is a tool trajectory, s_predicted is a predicted task state after moving the robot using u and t, and f represents Tool-Net. f is trained, and t and u are optimized when given a target task state s_target. Because we handle object manipulation tasks in this study, we use a binarized image as shown in figure:motivation, which can flexibly express the object position and posture, as task state s. We also use a binarized image, which has high degrees of freedom, as tool image t. In binarized images, the background color is black (its value is 0), and the tool and manipulated object color is white (its value is 1). Regarding tool trajectory, we assume quasi-static movement and constant joint velocity in this study, and so we represent u as (θ^T_start, θ^T_end)^T. θ_{start, end} is the starting or ending joint angles of the robot, and the whole trajectory is the trajectory that interpolates these joint angles linearly by constant velocity. In detail, s_current and t are inputted through convolutional layers and concatenated with u, and s_predicted is outputted through deconvolutional layers. まず, Tool-Netのネットワーク構造をfigure:network-structureに示す. これは式で表すと, 以下の式と同等である. s_predicted = f(s_current, t, u) ここで, s_currentは現在タスク状態, tは道具形状, uは動作軌道, s_predictedはロボット動作後に予測されるタスク状態, fはTool-Netを表す. 本研究では, fを学習し, 指令タスク状態s_targetを与えることでt, uを最適化していくことになる. このタスク状態s, 道具形状tはどのように表現をしても良く, 特にタスク状態は扱うタスクによって表現が大きく異なる. 本研究では物体移動タスクを扱うため, 物体の位置姿勢を柔軟に表現可能な, figure:motivationに示したような二値画像をタスク状態として用いる. 道具の表現にはスプライン曲線のパラメータや道具の道具プリミティブの組み合わせのパラメータ等が考えられるが, それは道具の表現に事前知識を利用することになり, 新たな道具生成には向かない. よって, 本研究ではその自由度の高さから道具もfigure:motivationに示したような二値画像で表すこととする. 2値画像においては, 背景が黒(0), 道具や操作物体が白(1)とする. また, 動作軌道uにも様々な表現があるが, 本研究では簡単のため準静的動作かつ動作速度一定を仮定し, u=(θ^T_start, θ^T_end)^Tとしている. ここで, θ_start, θ_endは動作軌道の最初と最後の関節角度を表し, これを準静的に一定速度で線形に補間したものが動作軌道となる. 実際のニューラルネットワークでは, s_currentとtを畳み込み層を通して, uと結合し, それを逆畳み込みによってs_predictedを出力するようなネットワークとなる. §.§ Data Collection for Tool-Net The procedures to collect data for training of Tool-Net are as below. * Initialize the robot posture and attach a randomly generated tool to the manipulator tip * Obtain the tool shape image t * Set θ_start randomly and move the robot * Randomly place an object * Obtain the task state image s_start * Set θ_end randomly and move the robot * Obtain the task state image s_end * Repeat (c) – (g), collect the data, and go back to (a) after the conditions (described below) are satisfied In (a), the joint angle of the robot is initialized to 0. We define the joint angles as shown in figure:robot-config, and so the initialized posture is straightly aligned. To prepare various tool shapes to be attached to the robot, we make tools randomly by hand with a metal wire, which will be explained in subsec:experimental-setup. 3D printing or clay modeling are also applicable for the purpose. In (c) and (f), when the robot is randomly moved, the limit of θ_min≤θ_{start, end}≤θ_max is set (θ_{min, max} is the lower or upper limit of joint angles). Also, because the robot hardly moves when θ_end is close to θ_start, we set the limit of ||θ_start-θ_end||_1>θ_thre when randomly choosing the θ_{start, end} (||·||_1 expresses L1 norm). In (h), we use a sampling technique to balance changed samples and unchanged samples. 
A changed sample means that the robot moves the target object, and an unchanged sample means that the robot does not touch the target object. Due to the random trajectory, we get more unchanged samples than changed samples. We use symmetric chamfer distance d_chamfer <cit.> to distinguish changed or unchanged samples as below, d_chamfer(s_1, s_2)=∑(s_1·DT(s_2)+s_2·DT(s_1)) where s_1, s_2 are images with the same size, DT (Distance Transform) is an image expressing the distance to the nearest white pixel at each pixel, and the unit of d_chamfer is [px]. When d_chamfer(s_start, s_end) ≥ d_thre, it is classified as a changed sample. For each tool, we collect C_changed changed samples and C_unchanged unchanged samples into a dataset (C_{changed, unchanged} is the number of samples). After obtaining all these data, we go back to procedure (a) with a new tool. By repeating these procedures, we finally construct a dataset for the training of Tool-Net. When constructing a dataset, we augment it at the same time. When executing the procedures (c) – (g), we collect not only (s_start, s_end, t, (θ^T_start, θ^T_end)^T) but also s and θ at all frames. Then, we randomly choose frame indices F_{from, to} (F_from<F_to) from the consecutively collected data, and add C_seq data (s_from, s_to, t, (θ^T_from, θ^T_to)^T) into the dataset (C_seq is the number of the data). {s, θ}_{from, to} is s or θ at the frame of F_from or F_to. When defining the arrangement of the robot and task image as shown in figure:robot-config, we can also generate a mirrored data of (Mirror(s_{start, from}), Mirror(s_{end, to}), Mirror(t), -u) (Mirror expresses a mirrored image). Therefore, we finally obtain 2(1+C_seq) data: (s_initial, s_final, t, u). In our experiments, we set θ_min=-45 [deg], θ_max=45 [deg] regarding each actuator, θ_thre=45 [deg], d_chamfer=70.0 [px], C_changed=10, C_unchanged=5, and C_seq=24. Tool-Netの学習に必要なs_initial, s_final, t, uを集める手順を以下に示す. * ロボットの姿勢を初期化し, 道具をロボットに取り付ける. * 道具画像tを取得する. * θ_startをランダムに設定し, ロボットを動かす. * ランダムに物体を配置する. * タスク画像s_startを取得する. * θ_endをランダムに設定し, ロボットを動かす. * タスク画像s_endを取得する. * (3) – (7)を繰り返してデータを蓄積していき, 条件を満たしたら(1)へ戻る. (1)では, ロボットの姿勢を0に初期化する. ロボットの関節角度をfigure:robot-configのように定義すると, これはつまり, ロボットアームを真っ直ぐにした状態のことを指す. また, 道具をロボットに取り付けるが, この道具は様々な形状を用意しなければならない. そこで, 本研究ではsubsec:experimental-setupで説明するように道具に針金を用いるため, figure:random-toolのようにランダムに制御点を配置し道具長さを一定にした3次Bスプラインを出力し, それを見ながら人間が手で道具を製作する. 実際には針金に限らず, 3Dプリンタや粘土で製作しても良い. (3)と(6)では, ランダムにロボットを動作させるが, その際にはθ_min≤θ_{start, end}≤θ_maxという制限を設ける. ここで, θ_min, θ_maxは関節角度の上下限を表す. また, θ_startとθ_endが近すぎるとロボットがほとんど動作しないため, ランダムにそれらを選ぶ際に, ||θ_start-θ_end||_1>θ_threという制限をつけている. ここで, ||·||_1はL1ノルムを表す. (8)では, データの偏りを減らすために, s_startとs_endの類似度を使って, 必要なデータのみを蓄積している. 画像間の類似度(距離)には, 以下のようなsymmetric chamfer distance d_chamferを用いる<cit.>. d_chamfer(s_1, s_2)=∑(s_1·DT(s_2)+s_2·DT(s_1)) ここで, s_1, s_2は大きさの同じ二値画像, DT (Distance Transform)は各画素値に対して直近の物体画像までの距離を表す画像を表し, d_chamferの単位はpxである. d_chamfer(s_start, s_end) < d_threのとき, タスク状態は変化していないとする. 一つの道具に対して, タスク状態が変化したデータをC_changed個, 変化していないデータをC_unchanged個, データセットとして蓄積する. それらが全て取得し終わったら(1)に戻り, 新しい道具を取り付ける. これを繰り返すことでTool-Netのためのデータセットを作成していく. データ作成の際, データ拡張も同時に行う. (3) – (7)の手順の際, (s_start, s_end, t, (θ^T_start, θ^T_end)^T)だけでなく, 常にタスク状態s, 関節角度θを蓄積しておく. そして, その連続したデータの中で, フレームF_from, F_to (F_from<F_to)をランダムに選び, (s_from, s_to, t, (θ^T_from, θ^T_to)^T)というデータをC_seq個データセットとして蓄積する. ここで, {s, θ}_{from, to}はそれぞれフレームF_fromまたはF_toにおけるsまたはθである. また, ロボットとタスク画像の位置関係がfigure:robot-configのように定義される場合は, s, t, uをそれぞれ左右反転させてたデータも作成することができる. 
つまり, これまで得られた1+C_seq個のデータ(s_{start, from}, s_{end, to}t, u)を, (Mirror(s_{start, from}), Mirror(s_{end, to}), Mirror(t), -u)として, 2倍に拡張することができる. ここで, Mirrorは鏡面反転を表す. ゆえに, 2(1+C_seq)個のデータ(s_initial, s_final, t, u)を得ることができる. 本研究では, θ_min=-45 [deg], θ_max=45 [deg] (それぞれのモータに対して), θ_thre=45 [deg], d_chamfer=70.0 [px], C_changed=10, C_unchanged=5, C_seq=24とする. §.§ Training Phase of Tool-Net We preprocess the obtained dataset (s_initial, s_final, t, u) and train Tool-Net. This preprocess makes the training result robust against noise and displacement of pixels. First, we augment tool image data t by adding noise. We show how to add noise in algorithm:tool-noise. In algorithm:tool-noise, ChooseRandomPixel(image) is the function which randomly extracts one pixel from image. CountAdjacentWhite(image, pixel) is the function which counts the number of white pixels among 4 adjacent pixels to the pixel of image. {ToWhite, ToBlack}(image, pixel) is the function which makes the pixel of image {white, black}. {IsWhite, IsBlack}(image, pixel) is the function which judges whether the pixel in image is {white, black}. t' is t with noise. C^noise_add, C^noise_del are constant values. Line 4–11 of algorithm:tool-noise express that we randomly make the black pixel white when its adjacent pixels include at least one white pixel. Line 12–18 of algorithm:tool-noise express that we randomly make the white pixel black. Second, to make Tool-Net robust against the displacement of pixels, we blur the task state image s_final, as shown below, s'_final = 1.0-tanh(C_blur·DT(1-s_final)) where C_blur is a constant value. The smaller C_blur is, the more the image is blurred. Using s'_final and t', we train Tool-Net with the loss L shown below, by setting the number of epochs as C_epoch and batch size as C^train_batch, s_predicted = f(s_initial, t', u) L = MSE(s_predicted, s'_final) where MSE expresses mean squared error. In the following experiments, we set C_blur=0.2, C^noise_add=30, C^noise_del=30, C^train_batch=100, and C_epoch=300. 得られたデータセット(s_initial, s_final, t, u)に処理を施しながら, Tool-Netを学習させる. まず, 学習の際の損失関数Lは以下のように二乗誤差平均(MSE)を用いる. s_predicted = f(s_initial, t, u) L = MSE(s_predicted, s_final) しかし, この損失関数では境界面が不連続になるためはpixelのズレに敏感であり学習が難しい. そこで, s_finalは以下のようにぼかして用いる. s'_final = 1.0-tanh(C_blur·DT(1-s_final)) ここで, C_blurは係数であり, 小さいほど画像がぼかされる. また, tにはノイズを混入させて, データを拡張して用いる. この際のノイズの加え方をalgorithm:tool-noiseに示す. ここで, ChooseRandomPixel(image)はimageからランダムに一つピクセルを抜き出す関数, CountAdjacentWhite(image, pixel)はimageのpixelに隣接する4つの画素の中で白の画素の数を数える関数, To{White, Black}Pixel(image, pixel)はimageのpixelを{white, black}にする関数, Is{White, Black}(image, pixel)はimageのpixelが{white, black}かどうかを判断する関数である. また, C^noise_add, C^noise_delは定数である. これは, 隣接するピクセルに白の画素が一つでも含まれれば着目するピクセルを白にしても良い, ということを表す. このs'_finalとt' (ノイズを加えたt)を用いて, eq:lossのようにバッチサイズをC^train_batch, エポックをC_epochとして学習を行う. 本研究では, C_blur=0.2, C^noise_add=30, C^noise_del=30, C^train_batch=100, C_epoch=300とする. §.§ Optimization Phase of Tool-Net We will explain the optimization procedures of tool shape and trajectory using trained Tool-Net. The procedures corresponding to figure:network-structure are as below. 
* Obtain the current task state s_current and target task state s_target * Generate the initial tool shape and trajectory {t, u}_init before optimization * Calculate loss of MSE(f(s_current, t_init, u_init), s'_target) * Update {t, u}_init through backpropagation * Repeat (c) and (d) C_iter times In (b), regarding t_init, we randomly extract C^optimize_batch tools from the dataset constructed in subsec:data-collection (C^optimize_batch is the number of the batch), and apply AddNoiseToTool in algorithm:tool-noise to each tool. Regarding u_init, we randomly generate C^optimize_batch tool trajectories fulfilling the limit of u_min≤u≤u_max. Thus, we construct a batch with C^optimize_batch samples of t_init and u_init. In (c), we calculate L=MSE(f(s_current, t_init, u_init), s'_target) regarding each data in the batch (s'_target is s_target blurred by eq:blur). In (d), we backpropagate L and optimize t_init and u_init for each data. First, regarding the optimization of tool trajectory, we optimize u_init like in <cit.>, as below, g_control = dL/du_init u_init u_init-γg_control/||g_control||_2 where ||·||_2 expresses L2 norm, and γ is an update rate. Second, regarding the optimization of tool shape, we change the values of pixels according to the gradient g_tool = dL/dt_init. To decrease L, the black pixels with negative gradients should be changed to white, and the white pixels with positive gradients should be changed to black. However, if all pixels are changed according to the gradient, an appropriate image for tool shape cannot be obtained due to sporadic pixels. Also, white pixels sometimes concentrate in small areas. To solve these problems, we optimize tool shape by focusing on adjacent pixels, as shown in algorithm:tool-optimize. In algorithm:tool-optimize, {IsPos, IsNeg}Grad(grad, pixel) is the function which judges whether the gradient grad of pixel is {positive, negative}. GetGrad(grad, pixel) is the function which extracts grad of pixel. {ExtractFront, ExtractBack}(array, count) is the function which extracts count values in array from {front, back} of the array. C_scale, C_grad, C^optimize_add, and C^optimize_del are constant values. We calculate a score for each pixel from the gradient and the number of adjacent white pixels. In ascending order of this score, the top C_add and the bottom C_del pixels are turned to white and black, respectively. t_init is updated by t_initOptimizeTool(t_init, g_tool) in algorithm:tool-optimize. After C_iter iterations of eq:traj-opt and OptimizeTool(t_init, g_tool), C_iter× C^optimize_batch candidates of tool shapes and trajectories are obtained. We use the tool shape and trajectory with minimum L among all candidates as the optimized value of t_optimized and u_optimized. We explained the method of optimizing both tool shape and trajectory at the same time. In the case that either tool shape or trajectory is optimized, the other is fixed and not optimized. In the following experiments, we set C^optimize_batch=10, C_iter=50, γ = 0.1 [rad], C_scale=1E-3, C_grad=0.1, C^optimize_add=10, and C^optimize_del=10. 学習されたTool-Netを用いた道具形状・動作軌道の最適化の手順を以下に示す. 本節では道具形状・動作軌道を同時に最適化する方法について述べるが, 道具形状のみ, または動作軌道のみの最適化は, 他方を任意の値で固定して最適化を行わないことで可能となる. その手順は以下であり, figure:network-structureに対応する. * 現タスク状態s_current, 指令タスク状態s_targetを取得する. * 道具形状と動作軌道の初期解{t, u}_initを作成する. * 損失関数MSE(f(s_current, t_init, u_init), s_target)を計算する. * 誤差逆伝播により, {t, u}_initを更新する. * (c)と(d)をC_iter回繰り返す. 
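To make step (d) above concrete, the following is a minimal PyTorch-style sketch of the trajectory update of eq:traj-opt, assuming f is the trained Tool-Net and s_target_blurred is the target image blurred by eq:blur; the function name, the small epsilon guarding the norm, and the omission of the joint-angle limits are our simplifications, and the discrete pixel update of algorithm:tool-optimize is not reproduced here.

import torch

def optimize_trajectory(f, s_current, t_init, u_init, s_target_blurred,
                        gamma=0.1, n_iter=50):
    # Refine the tool trajectory u by backpropagating the task-state loss
    # through the trained Tool-Net f (the tool shape t_init is kept fixed).
    u = u_init.clone().detach().requires_grad_(True)
    for _ in range(n_iter):
        s_predicted = f(s_current, t_init, u)
        loss = torch.nn.functional.mse_loss(s_predicted, s_target_blurred)
        g_control, = torch.autograd.grad(loss, u)       # g_control = dL/du
        with torch.no_grad():
            u -= gamma * g_control / (g_control.norm() + 1e-12)
    return u.detach()

The same loop can be run for each of the C^optimize_batch initial trajectories, keeping the candidate with the smallest loss, as described above.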
(b)ではまず, 道具t_initに関しては, 訓練時等のデータからランダムにC^optimize_batch個の道具を取り出し, それぞれにalgorithm:tool-noiseのAddNoiseToToolを適用する. また, 動作軌道u_initに関しては, u_min≤u≤u_maxの中でランダムにC^optimize_batch個バッチを作成する. (c)ではこれらC^optimize_batch個のデータそれぞれに対して損失関数Lを計算する. (d)では(c)で求まったLを誤差逆伝播し, バッチそれぞれに関してt_init, u_initを最適化する. まず, 動作軌道の最適化は<cit.>と同様, 以下のように行う. g_control = dL/du_init u_init = u_init-γg_control/||g_control||_2 ここで, ||·||_2はL2ノルム, γは更新率を表す. 次に, 道具形状の最適化の方法をalgorithm:tool-optimizeに示す. g_tool = dL/dt_initとして, OptimizeTool(t_init, g_tool)によってt_initを更新していく. ここで, Is{Plus, Minus}Grad(grad, pixel)は画素pixelに関する勾配gradの正負を判定する関数, GetGrad(grad, pixel)は画素pixelに関する勾配gradを取り出す関数, Extract{Front, Back}(array, count)は{front, back}から数えてcount個の値をarrayから取り出す関数である. また, C_scale, C_grad, C^optimize_add, C^optimize_delは定数である. これはつまり, 勾配が負(画素値を1にしたい)かつ現状画素値が0の画素の中で, 勾配の絶対値が大きいかつ隣接する画素に1つでも1が含まれる画素を優先的に1にしている. また, 勾配が正(画素値を0にしたい)かつ現状画素値が1の画素の中で, 勾配の絶対値が大きいかつ隣接する画素に1が多い画素を優先的に0にしている. これは, なるべく道具を連続したピクセルの繋がりによって表現するため, かつ勾配の高い画素が一箇所に集中してしまうのを防ぐためである. C_iter後, 最終的に全バッチ全エポックの中で最もLが小さい道具形状または動作軌道を, 最終的な値t_optimized, u_optimizedとして用いる. 本研究では, C^optimize_batch=10, C_iter=50, γ = 0.1 [rad], C_scale=1E-3, C_grad=0.1, C^optimize_add=10, C^optimize_del=10とする. §.§ Detailed Implementation The image binarization procedures of s and t are Crop, Color Extraction, Closing, Opening, and Resize, in order. Color Extraction separates the input image into tool, object, and background images. To make this process easy, we use a silver tool and the object is colored red. Note that the robot arm is considered as a background. Crop is executed as in figure:robot-config, and Resize converts the image to the size of 64×64. The convolutional layers for s and t have the same structures. Each of them has 6 layers, and the number of each channel is 1 (input), 4, 8, 16, 32, and 64. Its kernel size is 3×3, stride is 2×2, padding is 1, and batch normalization <cit.> is applied after each layer. s and t are compressed to a 128 dimensional vector by fully connected layers, it is concatenated with u, and a 256+2n dimensional vector is generated (n is the number of actuators of the robot). After that, the vector is fed into fully connected layers whose numbers of units are 256+2n, 128, 128, 128, and 256. The deconvolutional layers have the same structure with the convolutional layers, but only the last deconvolutional layer does not include batch normalization. The activation function of the last deconvolutional layer is Sigmoid, and those functions of the other layers are ReLU. s, tの二値化画像処理の流れは, crop, color extraction, closing, opening, resizeの順番で行う. それぞれ異なる色の背景, 道具, 物体を用いることで, 色抽出を容易にしている. cropはfigure:robot-configのような形で行い, resizeにより最終的に64×64の画像に変換される. ネットワークの詳細な構造について説明する. sとtの畳み込み層は全く同じ構造をしている. それぞれ6層構造となっており, チャネル数はそれぞれ1 (input), 4, 8, 16, 32, 64とし, 全層においてKernel Size: 3x3, Stride: 2x2, Padding: 1とし, 全畳み込みの後にBatch Normalization <cit.>を行う. その後, sとtはそれぞれ全結合層を通して128次元まで圧縮され, 圧縮されたs, t, uを結合して256+2n次元とする(ここでnはロボットの自由度を表す). その後, 256+2n, 128, 128, 128, 256というユニット数で全結合を行う. 逆畳み込み層は畳み込み層を逆にした構造を持ち, 最終層のみBatch Normalizationを含まない. また, 最終層の活性化関数はSigmoidであり, それ以外の層における活性化関数はReLUを用いる. § EXPERIMENTS We will explain our experiments using the actual robot: the training of Tool-Net, the optimization of tool shapes using Tool-Net, and evaluation of the optimized tools. Also, we will show an advanced application of tool shape optimization for multiple tasks. 実験セットアップ, Tool-Netの訓練, Tool-Netを用いた道具形状の最適化, Tool-Netを用いた道具形状最適化の実機における評価を順に行う. 最後に, Tool-Netを用いた複数タスクに対する道具形状最適化という応用を示す. 
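The network used in the experiments below follows the Detailed Implementation above. As a reference, here is a minimal PyTorch-style sketch of one plausible reading of that description: the class and function names are ours, we read the channel list 1, 4, 8, 16, 32, 64 as five stride-2 convolutions per branch, and the reshaping between the fully connected part and the deconvolutional decoder is our assumption, since the text does not spell it out.

import torch
import torch.nn as nn

def conv_branch():
    # 3x3, stride-2, padding-1 convolutions with BatchNorm and ReLU,
    # channels 1 -> 4 -> 8 -> 16 -> 32 -> 64; a 64x64 binary image
    # becomes a 64-channel 2x2 feature map, then a 128-dim vector.
    chans = [1, 4, 8, 16, 32, 64]
    layers = []
    for c_in, c_out in zip(chans[:-1], chans[1:]):
        layers += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                   nn.BatchNorm2d(c_out),
                   nn.ReLU()]
    layers += [nn.Flatten(), nn.Linear(64 * 2 * 2, 128), nn.ReLU()]
    return nn.Sequential(*layers)

class ToolNet(nn.Module):
    def __init__(self, n_joints=2):
        super().__init__()
        self.enc_state = conv_branch()  # encodes the task-state image s
        self.enc_tool = conv_branch()   # encodes the tool image t
        self.trunk = nn.Sequential(     # fully connected part: 256+2n -> ... -> 256
            nn.Linear(256 + 2 * n_joints, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU())
        chans = [64, 32, 16, 8, 4, 1]
        deconv = []
        for i, (c_in, c_out) in enumerate(zip(chans[:-1], chans[1:])):
            deconv.append(nn.ConvTranspose2d(c_in, c_out, 3, stride=2,
                                             padding=1, output_padding=1))
            if i == len(chans) - 2:     # last deconv layer: Sigmoid, no BatchNorm
                deconv.append(nn.Sigmoid())
            else:
                deconv += [nn.BatchNorm2d(c_out), nn.ReLU()]
        self.dec = nn.Sequential(*deconv)

    def forward(self, s, t, u):
        h = torch.cat([self.enc_state(s), self.enc_tool(t), u], dim=1)
        h = self.trunk(h).view(-1, 64, 2, 2)
        return self.dec(h)              # predicted 64x64 task-state image

With 64x64 binarized inputs, five stride-2 convolutions leave a 64-channel 2x2 feature map (256 values), which is why, under this reading, the 256-unit trunk output reshapes cleanly into the deconvolutional decoder.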
§.§ Experimental Setup We show the experimental setup of this study in figure:experimental-setup. Aluminum frames are structured on a black background sheet, and a camera and manipulator with 2 servo motors are attached to the structure. The servo motors are Dynamixel Motor (XM430-W350-R), and the camera is D435 (Intel Realsense). As a tool, we used metal wire with a diameter of 3 mm, which has enough strength for object manipulation tasks and can be bent by hand. A mount to attach the tool to is equipped at the tip of the manipulator. The object for the manipulation task is a cylinder-shaped wooden block painted red for color extraction. We show the images binarized by the method of subsec:detailed-implementation in the left figure of figure:process-experiment. The tool shape and task state are extracted and binarized as shown in the right figure of figure:process-experiment, and we use them for Tool-Net after resizing. figure:experimental-setupに本研究の実験セットアップを示す. 黒い背景板の上にアルミフレームにより構成された支柱があり, 2自由度のマニピュレータとカメラが備え付けられている. MotorはDynamixel Motor (XM430-W350-R)を使用しており, RGB SensorとしてはD435 (Intel Realsense)を用いている. 道具としては, タスクに対して十分な強度がある一方手で曲げることのできる太さの針金(直径3 mm)を用い, マニピュレータの先端には針金を刺して装着することができるようなマウントが備わっている. 物体移動タスクに用いる物体は, 色抽出のために赤く塗られた円筒状の積み木である. D435から得られた画像をsubsec:detailed-implementationの方法で処理した結果をfigure:process-experimentに示す. このように道具とタスク画像を抽出・2値化しており, 最終的にはこれをresizeして用いる. §.§ Tool Shape and Trajectory Optimization for One Task §.§.§ Training Phase We conducted data collection procedures for 48 kinds of tools over 2 hours in total, and obtained 36000 number of data. We trained Tool-Net using these data by C^train_epoch epoch, and used the model with minimum L. We show the prediction results of task state in figure:predict-experiment. When given a certain tool shape and trajectory, Tool-Net was able to predict the transition of task state correctly. We drew the tool trajectory by solving forward kinematics of the robot. 約2hで道具48種類に対して実験を行い, データを合計36000個作成した. これを用いてTool-Netを学習させ, C^train_epoch回epochを回した際に最もLが小さかったモデルを使用する. このときのモデルを用いた指令タスク画像の予測結果をfigure:predict-experimentに示す. 道具とその動きが与えられた際に, 正しくタスク状態の遷移を予測できていることがわかる. ToolのTrajectoryについては, ロボットの関節角度から順運動学を解いて画像を重ねて描画している. §.§.§ Optimization Phase We prepared Sample 1 – 5 with the data of (s^sample_initial, s^sample_final, t^sample, u^sample). As we set s_current=s^sample_initial and s_target=s^sample_final, we compared three optimizations: optimization of only tool shape, only tool trajectory, and both. When optimizing only tool shape, the tool trajectory is fixed to u^sample, and when optimizing only tool trajectory, the tool shape is fixed to t^sample. Note that the sample data is just for reference and (t^sample, u^sample) is not used for optimization except for the fixed trajectory or tool shape. We show the optimization results in figure:optimize-experiment. The left column shows sample data and the remaining three columns show the three optimization results. Although this optimization phase depends on the initial value, when C^optimize_batch is large (e.g. >1000), almost the same results are obtained in every trial. Even though the tool shape is initialized by randomly chosen data from the training dataset and the tool trajectory is initialized by random values as explained in subsec:optimization-phase, s^sample_predicted and s^sample_final indicate almost the same position and so the optimization succeeds. s^sample_predicted represents the predicted state when using the optimized tool shape and trajectory. 
For example, regarding Sample 1, a different tool shape from the sample data is generated, and it is reasonable for the task because it wraps the object well. Regarding Sample 2, a tool shape like the sample data is generated. Regarding the tool shape and trajectory optimization of Sample 3, a tool shape without the parts of the sample data, which do not contribute to the manipulation, is generated. We can see that various kinds of tool shapes are generated, not just the same shape with the sample data. Regarding the tool shape optimization of Sample 1 and 4, we show the transition of tool shape according to the iteration of optimization in figure:optimize-transition. In actuality, although we start optimization from C^optimize_batch initial tool shapes and choose the best one, we show only the transition of tool shape regarding the best tool finally chosen. As we can see from the initial tool shape, the optimization starts from the tool shape that is close to the final shape, and it gradually changes. 5つのサンプルデータSample (1) – (5), (s^sample_initial, s^sample_final, t^sample, u^sample)を用意する. それぞれの(s^sample_initial, s^sample_final)に対して, 道具形状のみを最適化する場合, 動作軌道のみを最適化する場合, 道具形状と動作軌道を同時に最適化する場合に関して比較を行う. 道具形状のみを最適化する場合は動作軌道はu^sampleを, 動作軌道のみを最適化する場合は道具形状はt^sampleを用いる. 最適化結果をロボットの順運動学を解いて画像を重ねて示したものをfigure:optimize-experimentに示す. ここで表示しているのは, タスクの初期状態s^sample_initial, t^sampleまたは最適化された道具の形状を, u^sampleまたは最適された動作軌道をもとに描画したもの, その道具と軌道によって予測されるタスク状態s^sample_predictedである(色の濃さは適宜変えている). subsec:optimization-phaseで述べたように, 道具は訓練時データからランダムに取り出して最適化の初期値とし, 動作軌道はランダムな値を最適化の初期値としているが, 最適化の結果として, s^test_predictedとs^test_finalがほぼ同じ場所を示しており, 最適化が成功していることがわかる. 例えばSample (1)を見ると, 元のデータとは異なる道具が生成されており, 物体を包み込んで動かすような, タスク達成にもリーズナブルな結果となっていることがわかる. Sample (2)では元データと同じような道具が生成され, Sample (3)の道具形状と動作軌道の最適化では元の道具から無駄な部分を削ぎとったような道具形状が生成されている. このように, 様々な形態の道具が生成されていることがわかる. Sample (1), (4)における道具のみの最適化に関して, 最適化のイテレーションに伴う道具形状の変化の様子をfigure:optimize-transitionに示す. 実際にはC^optimize_batch個の初期解からスタートし最も良い道具を選ぶが, figure:optimize-transitionには, 最終的に最も損失の小さかった道具の初期解からの遷移を表示している. また, 道具画像は 64×64であるため, figure:process-experiment等に示される本来のアスペクト比とは異なることに注意されたい. 道具の初期解からわかるように, 多少最適化後に近い道具からスタートし, 徐々に最終的な道具へと変化していることがわかる. §.§.§ Evaluation of Optimization Phase Regarding the tool shape and trajectory optimization, we evaluated the degrees of task realization in an actual robot experiment. We prepared Task 1 – 3 with the data of (s^test_current, s^test_target). By the procedures shown in figure:optimize-procedure, the task is executed with the optimized tool, and task realization is evaluated. First, the tool shape and trajectory are optimized at the same time for the given task. Second, the optimized tool shape is made with metal wire by human hands. When d_chamfer between optimized and current tool shapes becomes lower than 150 px, it is regarded that the same tool shape is made. Although this procedure includes some human arbitrariness, the threshold was enough to resemble the generated tool in this experiment. Third, only the tool trajectory is optimized for the man-made tool, and the optimized motion is executed. After the task execution, d_chamfer between the final task state and the target task state is measured. We repeated the procedures from task execution to measurement of d_chamfer 5 times, and calculated its average and variance. Also, we prepared 4 random tool shapes: Tool (a) – (d), for comparison as shown in the upper figure of figure:optimize-eval. 
Regarding each random tool, only the tool trajectory is optimized, the optimized motion is executed, and the average and variance of d_chamfer are calculated. Regarding each task, we show the task, optimized tool shape, man-made tool shape based on the optimized one, and the average and variance of d_chamfer, in figure:optimize-eval. From the average of d_chamfer, we can say that the degrees of task realization depend on the tool shapes, and they are high in general when using the optimized tools. Regarding Task 1, in which the object is placed far from the robot, the optimized tool is straight-shaped and can achieve the task well, where Tool (a) – (d) cannot reach the object. Regarding Task 2, the tool shapes of Tool (b), (c) and the optimized one have the same shape to wrap and pull the object to the right front side, and the d_chamfer are almost the same. Tool-Netを用いた道具形状・動作軌道の最適化に関する, 実機におけるタスク実現度の評価を行う. まず, 訓練時とは異なる3つのタスクデータ Task (1) – (3) (s^test_current, s^test_target)を用意する. ここで道具最適化について, figure:optimize-procedureのような形で実際にロボットにおいて実行し, タスク実現度を評価する. まず与えられたタスクに対して道具形状と動作軌道を同時に最適化する. そこで得られた道具形状に近くなるように, 人間が針金を曲げ道具形状を作成し, 最適化された道具形状と作成された道具形状の間のd_chamferが150 px以下となったところで, 同様の道具が作成できたと見なす. そして, この作成された道具を用いて動作軌道のみを最適化し, s^test_currentの状態からその動作を実機において実行する. 実機動作後のタスク状態と指令タスク状態s^test_targetの間のd_chamferを測定する. 動作実行からd_chamferの測定までを5回行い, 平均と分散を計算する. また, 比較としてランダムな道具のデータを4つ用意し(figure:optimize-evalの上図), それぞれの道具についてTool-Netを用いて動作軌道のみ最適化し, 実ロボットで動作を実行し, 同様にd_chamferの平均と分散を計算する. タスク, 最適化された道具形状, 作成された道具形状, d_chamferの平均と分散をfigure:optimize-evalの下図に示す. d_chamferの平均から, 道具によってタスクの得意不得意があることがわかるが, 最適化後の道具を用いた場合は, 総じてタスクの実現度が高いことがわかる. Task (1)では, 物体が遠いためTool (1) – (4)では上手く操作することができないのに対して, 最適化の結果として真っ直ぐな, より遠くの物体を扱える道具が生まれている. Task (2)では, Tool (2), Tool (3), Optimizedは同じように物体を右手前へ引くための鍵のような形をしており, タスク実現度もほぼ同じであることがわかる. §.§ Tool Shape Optimization for Multiple Tasks The optimization of tool shape in this study has a potential to be applied for not only one task but also multiple tasks. This can be executed by summing up L obtained for multiple tasks and backpropagating it. To evaluate the applicability for multiple tasks, we prepared reversed tasks, Task (a): s^test_1→s^test_2 and Task (b): s^test_2→s^test_1. We executed experiments of the tool shape and trajectory optimization for only Task (a), only Task (b), and both, like in subsubsec:certain-experiment-eval, and then, calculated the average and variance of d_chamfer between the final task state and the target task state. We show the results in figure:multitask-experiment. The tool shape optimized for either Task (a) or (b) has a shape to wrap the object firmly. On the other hand, the tool shape optimized for both Task (a) and (b) has a shape with a gentle curve, and can be used for both Task (a) and (b). Regarding d_chamfer, when using the optimized tool shape for only Task (a), Task (a) is realized well, but Task (b) cannot be realized. On the other hand, when using the optimized tool shape for both Task (a) and (b), both tasks can be realized to some extent. 本研究は一つのタスクに対する道具最適化のみならず, 複数のタスクに対する道具最適化を行うことが可能である. これは, 複数のタスクに対して得られた損失関数Lの合計値を誤差逆伝播するのみである. まず, 異なる2つのタスクデータを用意するが, 本研究ではfigure:multitask-experimentのように反転したTask (a): s^test_1→s^test_2とTask (b): s^test_2→s^test_1を用意する. ここで, 道具をTask (a)のみに対して最適化した場合, Task (b)のみに対して最適化した場合, Task (a) and (b)に関して最適化した場合について, subsubsec:certain-experiment-evalと同様に実験を行い, d_chamferの平均と分散を求める. その結果をfigure:multitask-experimentに示す. Task (a), Task(b)それぞれのみに対して最適化された道具は, 物体をしっかりと包むような形状の道具になっていることがわかる. 
これに対して, Task (a)と(b)に関して最適化された道具は傾斜が緩く, 両者に対して使えるようになっている. タスク実現度も, Task (a)のみに最適化された道具では, Task (a)の実現度は最も高いのに対して, Task (b)の実現度は著しく低く, Task (b)のみに最適化された道具も同様である. 一方で, 両者に対して最適化された道具では, どちらのTaskもある程度正確にこなすことができていることがわかる. § DISCUSSION §.§ Experimental Results The experimental results in subsubsec:certain-experiment-training indicate that Tool-Net can infer the change of task state from the current task state, tool shape, and tool trajectory. subsubsec:certain-experiment-optimization demonstrates that various tool shapes and tool trajectories are generated as a result of the optimization process. These tool shapes are usually reasonable, because they wrap the target object well, have no useless parts that do not contribute to the manipulation, etc. However, their pixels lose continuity. In subsubsec:certain-experiment-eval, humans made the optimized tool shape by hand while referencing d_chamfer, and the optimized trajectory was executed. As a result, the optimized tool shapes can achieve the target tasks better than the randomly generated tool shapes. While the randomly generated tools usually cannot reach the target object or cannot wrap the object well, the optimized tools can reach and wrap it well. In subsec:multitask-experiment, Tool-Net is also applicable to multiple tasks. While a tool optimized for a certain task can realize the task well, the tool shape is hard to be used for other tasks. In contrast, while a tool optimized for multiple tasks is a little inferior to the tools optimized for each task, the tool shape has versatility that can be used for the multiple tasks. Our system has a problem on the making of actual tools, because the generated tool shapes lose continuity of pixels and cannot be directly made. Because humans make a tool similar to the optimized one while referencing d_chamfer in this study, this process depends on human interpretation. To solve the problem, we need to develop techniques of generating realizable tool shapes while keeping the diversity of tool shapes. subsubsec:certain-experiment-trainingから, Tool-Netは現在タスク状態・道具形状・道具軌道を与えることで, タスク状態の変化を計算することができる. また, subsubsec:certain-experiment-optimizationの結果から, 最適化の結果として, 多様な道具形状・道具軌道が生成されることがわかった. それらは, 物体をしっかり包み込んだり, 無駄のない作りだったりとリーズナブルなものが多い一方, ピクセルの途切れによって良いかどうかを判断することが難しいものが多かった. subsubsec:certain-experiment-evalでは, 最適化された道具形状を人間がd_chamferを見ながら生成し, 動作を実行させた. その結果, ランダムに生成した道具よりも, 最適化された道具はタスク実現に良い結果を残した. ランダムに生成した道具は, 物体が届かずにタスクを実行できなかったり, 物体を上手く捉えられずに途中でズレてしまったりしていたのに対して, 最適化された道具は物体を上手く捉えることができていた. また, subsec:multitask-experimentでは複数タスクへの適用が可能なことが示された. 単体タスクに対する最適化をした場合はそのタスクを実現するうえで最も有利な道具が生成される一方, 他のタスクでは使いにくい道具が生成されてしまう. これに対して, 複数をタスクへの最適化を行えば, 単体タスクへの最適化を施した道具には及ばないものの, 複数のタスクである程度良い結果を残すことができる道具が生成されることを確認した. しかし問題点として, 先ほど説明したように, 生成された道具のピクセルが途切れ途切れになってしまうため, その道具を生成するのが難しいということが挙げられる. 本研究ではd_chamferを見ることで似た形状の道具を人間が生成したが, これは人間の解釈に依存してしまうところがあるため望ましくない. これを解決するためには, 道具のピクセルが途切れないように最適化を施す技術, または, 得られた画像を上手く繋いで実現可能な道具画像に変換する技術が求められる. §.§ Future Directions We believe that this framework can be applied to not only object manipulation but also more general tasks, by changing the definition of task state. For example, by using 6 axis force sensor or frequency and amplitude of sound as the task state, a force applying or sound making task could be achieved. Because the metal wire is used for experiments, the generated tool shape is limited to the shape drawn by one stroke. This spoils the benefits of using the binarized image as tool expression. 
If we can use 3D printer or clay for tool-making, the benefits of using an image can be emphasized more, because tool shapes with branches and larger areas can be handled. Tool material or friction coefficient also sometimes affects task execution. Although a binarized image cannot express these characteristics, we may be able to optimize tool shape considering them by embedding this information into an image with multiple channels like color image. If we would like to handle 3D movement, by using 3D voxel representation with depth image for tool shape, we believe this study could be extended to the 3D movement. This framework could be also applicable to the flexible manipulator, if the representation of tool trajectory is changed by adding time information to u or making the network a recurrent one. Such networks will have a similar structure with <cit.>. We came up with this study when seeing a human drop a key into a gutter by mistake and pick it up using metal wire. To achieve this task, not only the expansion to 3D movement, but also a consideration of obstacles and efficient learning will be required. In the current form, the necessary number of trials explodes exponentially depending on the number of robot actuators and degrees of freedom of tool shape representation. It will be important for efficient training of Tool-Net to address the curse of dimensionality by using not only obtained data but also prior knowledge such as physical laws and the analogy of own body and tool shape like in <cit.>. 本研究は, タスク状態の定義を変更することで, 力を加えたり音を出したりするようなタスク等にも同様に適用可能である. 力の場合には6軸力センサの値, 音の場合は周波数と振幅等がタスク状態として考えられる. 本研究では道具形状を2次元の画像として表したが, 本実験においては道具として針金を使っているため, 一筆書きできる形状しか製作することはできない. 粘土や3Dプリンタ, 道具同士の組み合わせを用いることで, 今後より道具形状として画像を使う利点が強調されると考える. 一方, 二値画像では道具の素材・摩擦係数等に関する情報を得ることは難しく, カラー画像を用いることで材質も考慮した道具形状最適化が可能となる可能性がある. また道具に3次元のボクセル表現を用いることで, 3次元運動に拡張することも可能であると考える. 他にも, 本フレームワークを柔軟マニピュレータに同様に適用することが可能である. その場合は, uに時間を入れたり時系列にしたり等, 制御入力が動的要素を含む形に変更する必要がある. 本研究は網のかかった側溝に鍵を落とした人間が, 針金を使って取り出すのを見て思いついた. これを成功させるためには, 先ほど述べた3次元への拡張だけでなく, 障害物を考慮すること, 学習の効率化等が必要になると考える. 現状の動作軌道生成では, ロボットの自由度や道具の表現自由度が増えるに従い計算量が指数関数的に爆発する. 全てを一から学習するのではなく, これまでの様々な物体操作や道具使用における知識, <cit.>のような自分の身体と道具形状の類似性等を取り込み, 学習の効率化を測ることが重要である. § CONCLUSION We proposed a method to obtain an optimized tool shape and trajectory for given tasks using backpropagation technique of a neural network. A transition network of task state by a certain tool shape and trajectory is trained, and a tool shape and trajectory to realize the target task state are calculated using it. Also, we proposed data augmentation for efficient training, and a method to update pixels in tool shape image for optimization. Finally, the given tasks can be achieved more accurately by using the optimized tool shape. In future works, by using not only obtained data but also prior knowledge such as physical laws and robot configuration, we will expand this method to a more practical form. 本研究では, 与えられたタスクに対して, ニューラルネットワークの誤差逆伝播を用いて道具形状・動作軌道を最適化する手法を提案した. ある道具形状とその動作軌道によるタスク状態の変化を記述するネットワークを構築し, 指令タスク状態を実現するように道具形状・動作軌道を更新していく. また, 効率的な学習のためのデータ拡張方法, 画像で表現した道具の最適化時のピクセル更新手法等についても論じた. 結果として, 道具最適化により, より正確にタスクを実現することが可能となった. 今後は, データだけでなく物理法則や事前知識を統合し, 本手法をより実用的な形へと発展させていきたい. IEEEtran
http://arxiv.org/abs/2407.13302v1
20240718090630
Non-zero block selector: A linear correlation coefficient measure for blocking-selection models
[ "Weixiong Liang", "Yuehan Yang" ]
stat.ME
[ "stat.ME" ]
§ ABSTRACT Multiple-group data is widely used in genomic studies, finance, and social science. This study investigates a block structure that consists of covariate and response groups. It examines the block-selection problem of high-dimensional models with group structures for both responses and covariates, where both the number of blocks and the dimension within each block are allowed to grow larger than the sample size. We propose a novel strategy for detecting the block structure, which includes the block-selection model and a non-zero block selector (NBS). We establish the uniform consistency of the NBS and propose three estimators based on the NBS to enhance modeling efficiency. We prove that the estimators achieve the oracle solution and show that they are consistent, jointly asymptotically normal, and efficient in modeling extremely high-dimensional data. Simulations generate complex data settings and demonstrate the superiority of the proposed method. A gene-data analysis also demonstrates its effectiveness.
Keywords: Feature selection; High-dimensional model; Blocking effect; Block structure; Multi-response model § INTRODUCTION In the context of extensive data and multiple data groups, the identification of relationships between groups and the screening process would deviate from traditional modeling techniques. For example, electronic health data typically contains a massive amount of information received from patients. Symptoms of a disease usually appear in groups and are related to part of abnormal inspection results. These structures naturally exist in many fields and can be generally determined by relevant facts. In practical applications such as those involving genomic data, where the dimensions of covariates and responses are typically substantial, block structures always exist, such as the gene-group effects <cit.>, the association between gene groups <cit.>, and the protein and DNA association detection <cit.>. To illustrate the practical importance of block selection in the high-dimensional, multi-response, linear model with block structures (<ref>), we present the following two examples. Example 1. International stock market International stock market studies often involve several major stock markets globally <cit.>. Let (Y_1,…, Y_J) represent stock market indices from various countries, such as the SP500 index for the American stock market, and (X_1,…, X_K) represent the information of firms, such as the financial figures, from around the world. Indexes (responses) and information of firms (covariates) from the same country or same industry tend to be highly correlated and can be grouped within a single block. Researchers have explored how financial contagion occurs within each block and between different blocks <cit.>. In recent studies, there has been a growing interest in understanding how global events, such as the COVID-19 pandemic, impact the interconnectedness network of stock market volatility, with a particular focus on grouping effects <cit.>. The model (<ref>) accurately captures these practical phenomena. Example 2. Gene expression and fMRI intensity Genomic data, such as the pathway group structures for gene expressions and brain functional regions for fMRI intensity responses, often exhibit high-dimensional and block structures <cit.>. For example, single nucleotide polymorphisms (SNPs) are often grouped into genes, and genes are grouped into biological pathways. Block selection is suitable for biological studies aimed at detecting models in which each trait and covariate are embedded within biological functional groups, such as genes, pathways, or brain regions <cit.>. In recent years, the analysis of associations between high-dimensional covariates and responses with natural group (block) structures has also been a subject of study in various genomic-data analyses <cit.>. When performing modeling and prediction with such data as responses and covariates, it is often beneficial to identify the corresponding groups of responses and covariates, which can help improve predictive performance and enhance the interpretation of the fit. To simultaneously describe data with multiple covariate and response groups, we consider models with block structures. 
That is, we consider the following high-dimensional model with a block structure with respect to both response and covariate groups: ( Y_1, …, Y_J ) = ( X_1, …, X_K ) ( [ B_11 ⋯ B_1J; ⋮ ⋱ ⋮; B_K1 ⋯ B_KJ ]) + (E_1, …, E_J ), where Y_j and X_k denote the jth response group and kth covariate group, respectively, and B_kj denotes the coefficient matrix of X_k to Y_j such that B_kj = {β^kj_ll'} and E_j denotes the random error matrix. We denote Σ_j the covariance matrix of E_j and assume that it is non-negative definite with positive diagonal elements and the diagonal elements are bounded. X_k's are given and the related covariates will be grouped into the same group. We do not require that different covariate groups are independent. In this model, each pair (k,j), for k=1,… K and j = 1,…,J represents a block. We are interested in which blocks are relevant to the model, i.e., the (k,j)th block is relevant when at least one element in the block is relevant, thus defining J_1 = {(k,j): ∃ β^kj_ll'≠ 0 }. We allow the three dimensions, i.e., the dimensions of blocks, covariates, and responses, to be larger than the number of observations, and assume |J_1| ≪ K × J when the latter becomes large. When K × J = 1, the above model reverses to the regular multi-response, high-dimensional regression model. To fit this model, we should screen the relevant blocks and identify the covariates. We define the screening of relevant blocks in the model as “block selection”. The blocking effect is similar to but different from the grouping effect because we study the “blocked variables”. The blocking effect takes into account not only the grouping effect within the covariates but also the grouping effects on both the responses and the covariates. After organizing both the covariates and the responses into groups, the relationships between these grouped covariates that contribute to the grouped responses are depicted as blocks, resulting in the blocking effect. Unlike the grouping effect, the blocking effect has rarely been studied. In this article, we develop a block selection method for high-dimensional models with group structures for both responses and covariates. §.§ Related work Early work studying the feature selection problem mainly focuses on group-structured models or multi-response models. Based on the sparsity assumptions, multi-response models require estimating a sparse coefficient matrix, which can be achieved by flexible application of penalized regularizations, such as the lasso <cit.>, adaptive lasso penalty <cit.>, elastic net <cit.>, MCP <cit.>, and group lasso <cit.>. Other strategies include sequential approaches such as forward regression in high-dimensional space <cit.>, step regression in high-dimensional and sparse space <cit.>, orthogonal matching pursuit<cit.>, and sequential lasso <cit.>. Furthermore, to obtain an accurate multi-response model, researchers have proposed many targeted methods, such studies including the dimension reduction technique <cit.>, the extension of lasso <cit.>, the use of ℓ_1 / ℓ_2 regularization <cit.>, the investigation of a regularized multivariate regression for identifying master covariates <cit.>. Similar to block structures, group structures present many challenges, mainly due to the difficulty in exploring the information within them. 
Although the above penalized regularizations would perform well in various situations, they may not be able to recover a model with block or group structures because ignoring these structures can result in insufficiency and hurt model inference <cit.>, particularly when dealing with a large number of covariates and responses. Two recent studies have successfully addressed the multiple-response model with group structures. First is the multivariate sparse group lasso (MSGLasso) <cit.>, which proposed a mixed coordinate descent algorithm that utilized group-structure information to determine the descent direction. The second is the Sequential Canonical Correlation Search (SCCS) <cit.>, which considered group-structure information in the canonical correlation to identify non-zero blocks and used the Extended Bayesian Information Criterion (EBIC) for feature selection. However, there is a drawback worth mentioning for both methods: as the dimensions of both covariates and responses increase, they inevitably require excessive computational resources. Most relevant to our study, <cit.> introduced a best-subset selector for response selection. Specifically, they introduced an indicator for the response and proposed a response best-subset selection model YΔ = XΘ + ℰΔ. We extend their work to block structures and construct a selector for each coefficient block, rather than solely focusing on the response. The most significant contribution of this extension is that the proposed indicator is well-suited for block selection problems and more extensive data sets. Additionally, the selector for blocks comprising both the response and covariate groups is considerably more intricate than the response selector. This complexity extends to the model itself, with the associated theoretical guarantees and computations still being lacking. Furthermore, the best-subset selector introduced by <cit.> is only suitable for low dimensions. Our extension allows it to be applied to larger data sets, where the number of groups and the numbers of responses and covariates within each group can exceed the sample size. §.§ Contributions and organization of the paper Given the practical need for block selection and the existing research gap in this area, this paper proposes a block-selection model and introduces a novel method called the non-zero block selector (NBS) to address the problem of non-zero block selection. Specifically, we offer three stable and straightforward measures for selecting both blocks and coefficients. The main contributions of this paper are as follows: First, we propose a new linear correlation coefficient measure for block-structured data and establish its properties. Based on this measure, we introduce an indicator, Δ_kj, for each coefficient block, B_kj, and propose an efficient algorithm to calculate the indicator. This measure allows us to describe the correlation between the covariate group, X_k, and the response group, Y_j, and directly select the relevant blocks from the data. We provide the uniqueness of the estimator and establish its related asymptotic property. Second, we provide three effective penalized functions and algorithms for the block-selection model. We discuss various scenarios of the block-selection model, including situations where the covariates have full rank and high-dimensional settings. We also explore whether there is a requirement for sparsity in these situations. For each estimator, we provide uniqueness and asymptotic properties under regular conditions. 
Most relevant to our work, <cit.> considers similar modeling without providing theoretical guarantees regarding the estimation accuracy. We fill this gap by proving that the proposed estimates achieve the optimal minimax rate for the upper bound of the l_2 norm error. In comparison with MSGlasso <cit.>, we significantly expand the theoretical constraints regarding data dimension without compromising accuracy. One significant novelty of this NBS selector is its computational efficiency. Dealing with coefficient blocks using traditional methods often involves iterating through all the elements of each coefficient block, such as the SCCS, which can be computationally expensive. The NBS significantly reduces computational costs by identifying irrelevant blocks through direct calculations. We prove that the operator can select each relevant block with a high probability, leveraging block-structure information to accelerate the algorithm and enhance estimation performance. To evaluate the performance of the NBS, we conduct extensive simulation studies and real data analysis. The results demonstrate that the NBS outperforms existing methods in terms of prediction accuracy and model interpretability. Overall, the NBS provides a useful tool for modeling high-dimensional, multi-response data with block structures, which has wide-ranging applications in various fields, including genomics, finance, and the social sciences. The remainder of the article is organized as follows. In Section 2, we introduce the model and the NBS. Section 3 provides three effective penalized functions and algorithms for the block-selection model and the theoretical properties. We report our simulation studies for the comparison of the NBS and other methods in Section 4. Section 5 provides an analysis of a real data set. Section 6 concludes the article with a discussion. Appendix A provides additional simulation results, and the proofs are presented in Appendix B. § MODEL AND INDICATOR In this section, we introduce the proposed block-selection model and its associated selection indicator. Subsection 2.1 presents the block model and its integration with the selection indicator. Subsection 2.2 introduces the correlation penalized function used to estimate the selection indicator. Subsection 2.3 delves into the process of determining the tuning parameters and provides a straightforward algorithm for obtaining the selector. We first introduce some notation and framework. This part mainly follows the notation in <cit.> for multi-response models with group structures. Consider a response variable set, 𝒴 = (𝒴_1,…, 𝒴_J), and a covariate set, 𝒳 = (𝒳_1,…,𝒳_K), where 𝒴_j and 𝒳_k represent the jth response group and the kth covariate group, respectively. Each pair of these groups constitutes a block, comprising |𝒳_k| = p_k times |𝒴_j| = q_j elements, where ∑^K_k = 1 p_k = P, and ∑^J_j = 1q_j = Q. We allow the numbers of groups, K and J, and the dimensions within each block, p_k and q_j, to all be larger than the sample size, n. We use Y_n × Q = (Y_1,…, Y_J) and X_n × P = (X_1,…,X_K) to represent the observations of the 𝒴 and 𝒳, respectively. For ease of analysis, we assume that each column of the response matrix, Y, and covariate matrix, X, are standardized. For the multivariate modeling in (<ref>), we set B = {B_kj}_K × J, where each B_kj denotes the coefficient matrix of X_k and Y_j. §.§ Block-selection model In this part, we introduce the block-selection model and the indicator for coefficient block selection. 
Specifically, we assume that the model is sparse, meaning that it contains many irrelevant blocks. In other words, all the covariates are irrelevant to the responses in this block. To identify irrelevant blocks and detect relevant ones, we introduce the selection indicator, Δ = (Δ_kj)_K × J, where Δ_kj = {[ 1 ∃ β^kj_ll'∈ B_kj, β^kj_ll'≠ 0; 0 otherwise. ]. The above indicator represents whether the block (k,j) is relevant or irrelevant to the model and we have B_kj = B_kj∙Δ_kj. For instance, if Δ_23=1, it means that B_23 contains non-zero elements and Δ_23 = 0 indicates that B_23 = 0. Incorporating the indicator matrix into the model helps us identify the relevant coefficient blocks. Thus, the high-dimensional response and covariate model, (<ref>), is equivalent to the following block-selection model: ( Y_1, …, Y_J ) = ( X_1, …, X_K ) ( [ B_11∙Δ_11 …, B_1J∙Δ_1J; ⋮ ⋱ ⋮; B_K1∙Δ_K1 …, B_KJ∙Δ_KJ ]) + ( E_1, …, E_J ), where ∙ indicates scalar multiplication. The block-selection model presented above combines the indicator and coefficient matrix for each block. The block-selection model simplifies the original model, (<ref>), by introducing a selection indicator. Our goal is to estimate this selection indicator, thereby reducing the complexity associated with estimating the coefficient matrix of the model. We also introduce a concise version of the block-selection model, that is, set B_Δ = ( [ B_11∙Δ_11 … B_1J∙Δ_1J; ⋮ ⋱ ⋮; B_K1∙Δ_K1 … B_KJ∙Δ_KJ ]). Then, (<ref>) is equivalent to the following equation: Y = X B_Δ + E, and for each j, we have Y_j = X B_Δ j + E_j, where B_Δ j = (B_1j∙Δ_1j… B_Kj∙Δ_Kj)^ relating to B_j = (B_1j… B_Kj)^. The selection model, (<ref>), can be regarded as an extension of the traditional model Y = X B + E. Notably, conventional penalized regularization methods face challenges when applied to the model above while maintaining computational efficiency and estimation accuracy. In the following, we delve into the methodology for solving the indicator within the context of the block-selection model. §.§ Indicator optimization function In this section, we focus on calculating the block-selection indicator by introducing an indicator optimization function. As mentioned earlier, we defined an indicator matrix Δ = {Δ_kj}_K × J. According to the block-selection model, determining whether a block, B_kj, equals 1 or 0 is equivalent to deciding whether the covariate group, X_k, is relevant to the response group, Y_j, in this model. Based on this definition, we establish the following equivalence: Δ_kj = 1  ⇄  B_kj≠ 0  ⇄ X_k B_kj^2_F > 0, where ·_F denotes the Frobenius norm. To calculate the value of indicator Δ_kj, we consider constructing a correlation function for each pair of covariate and response groups. Recalling that |𝒳_k| = p_k, we first calculate the following oracle least squares estimator for each block (k,j), when p_k ⩽ n: B̂_kj = (X_k^TX_k)^-1X_k^T Y_j. Since we allow the dimension within the block larger than the number of observations, when p_k > n, the inverse of (X_k^ X_k) may not be unique. We solve this problem by adding the sparse requirement. That is, we consider the inverse of subset (X^_Ŝ_kjX_Ŝ_kj) where |Ŝ_kj| < n and the subset Ŝ_kj can be obtained from the lasso <cit.>, sure independent screening (SIS) <cit.>, etc. That is, when p_k > n, we obtain the following estimator for block (k,j): B̂_kj = (X_Ŝ_kj^ X_Ŝ_kj)^-1X_Ŝ_kj^ Y_j. 
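For concreteness, the per-block estimators in the two displays above can be computed as in the following minimal numpy sketch, in which X_k is the n × p_k observation matrix of the kth covariate group and Y_j is the n × q_j observation matrix of the jth response group; the function and variable names are illustrative, and the screening step for p_k > n uses sure independence screening (one of the options mentioned above) rather than the lasso.

import numpy as np

def block_estimator(X_k, Y_j, n_screen=None):
    # Per-block least-squares estimate (X_k' X_k)^{-1} X_k' Y_j, computed via
    # lstsq for numerical stability. When the block has more covariates than
    # observations, a subset of at most n_screen covariates is pre-selected by
    # sure independence screening (marginal association scores), as one of the
    # screening options mentioned in the text. Returns the coefficient block
    # (zeros outside the screened set) and the index set used.
    n, p_k = X_k.shape
    if p_k <= n:
        support = np.arange(p_k)
    else:
        m = n_screen if n_screen is not None else n // 2
        score = np.linalg.norm(X_k.T @ Y_j, axis=1)      # marginal screening scores
        support = np.sort(np.argsort(score)[::-1][:m])
    Xs = X_k[:, support]
    B_sub, *_ = np.linalg.lstsq(Xs, Y_j, rcond=None)
    B_hat = np.zeros((p_k, Y_j.shape[1]))
    B_hat[support] = B_sub
    return B_hat, support

In practice the lasso could replace the screening step without changing the rest of the procedure.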
For notational simplicity, we include the discussion about the difference between the above two estimators, (<ref>) and (<ref>), in the next section (combined with the joint estimation with or without penalized regularizations) and omit their differences here. In the following, we adhere to the notation and situation in (<ref>) and p_k ⩽ n. We consider proposing an indicator optimization function that calculates the indicator by balancing the values between Δ_kj and the dual transformation, (1 - Δ_kj). Based on (<ref>), equivalently, the optimization function can be obtained by balancing the values between Y_j - X_k B̂_kj^2_F and X_kB̂_kj^2_F. We also consider the degrees of freedom for both values, which are inspired by the adjusted R squared, as follows: n-p_k-1  and  p_k-1, where the former is the degree of Y_j- X_k B̂_kj_F^2 and the latter is the degree of X_kB̂_kj_F^2. Combining the degrees, we introduce the following correlation function to the indicator, that is, for the block (k,j): Δ̂_kj = _Δ_kj∈𝒱(Y_j-P_X_k Y_j_F^2/n-p_k-1Δ_k j +γP_X_k Y_j_F^2/p_k -1(1-Δ_k j) ), where 𝒱 = {0,1}, γ is a tuning parameter that will be discussed later and P_X_k = X_k(X_k^ X_k)^-1X_k^ denotes the projection matrix of X_k. The above optimization indicates that the optimal solution for the indicator relies on balancing the values of the generalized estimator and the noise. For the two values in each block, we have Y_j = X_kB_kj + ∑_k' kX_k'B_k'j + E_j and EX_k B̂_kj^2_F = E(X_k(X_k^ X_k)^-1X_k^ Y_j^2_F) = EP_X_kY_j^2_F = E([P_X_kY_jY_j^ P_X_k^]) =E( (B^ _kj(X_k^ X_k)B_kj) + (E^_j P_X_k E_j) + (L_kj^P_X_kL_kj) + 2(L^_kjX_kB_kj)), where L_kj = ∑_k' kX_k'B_k'j. For the residual, we have EY_j - X_k B̂_kj^2_F = E(Y_j - P_X_k Y_j _F^2) = E([(Y_j - P_X_k Y_j)(Y_j - P_X_k Y_j)^]) = E((E^_j(1 - P_X_k)E_j) + (L_kj^L_kj) - (L_kj^P_X_kL_kj)). In the block model, we aggregate correlated covariates into blocks. After that, we impose a constraint that only allows for weak correlations between different covariate groups. This simplifies the model's complexity by requiring that 2tr(L^⊤_kjX_kB_kj) is negligible compared to the other terms in EX_k B̂_kj^2_F. Under this requirement, the expected values of the squared Frobenius norms in Equations (<ref>) and (<ref>) can be calculated as follows: EX_k B̂_kj^2_F = tr(B^⊤_kj(X_k^⊤ X_k)B_kj) + p_k [tr(Σ_j) + tr(Σ_kj)], and EY_j - X_k B̂_kj^2_F = (n - p_k)(tr(Σ_j) + tr(Σ_kj)), where Σ_j represents the covariance matrix of E_j, Σ_kj = L^⊤_kjL_kj/n, and p_k = |𝒳_k|. From (<ref>) and (<ref>), when n is large, Δ̂_kj = 0 or 1 is equivalent to whether B_kj equals to zero or not. The situation where strong correlations exist between different covariate groups could be an area for further research. As noted, (<ref>) estimates the indicator by calculating the contribution that X_k has made to Y_j. In practice, the estimate should efficiently identify the relevant block. When Δ_kj = 1, (<ref>) leads to the following rate: P_X_k Y_j^2_F = O_p(n + p_k). Derived from (<ref>), we also have Y_j - P_X_k Y_j _F^2 = O_p(n - p_k) < O_p(n + p_k) = P_X_k Y_j^2_F. Thus, we have Y_j-P_X_k Y_j_F^2/(n -p_k-1) < γP_X_k Y_j_F^2/(p_k-1) . Following the correlation optimization function (<ref>), the above leads to Δ̂_kj=1 with a proper value of tuning parameter γ. When Δ_kj = 0, we have B_Δ_kj = 0 and X_j B_Δ_kj^2_F = 0. Then we have, following the properties of the least square estimator, Y_j - P_X_k Y_j _F^2 = O_p(n) ⩾ o_p(n) = P_X_k Y_j^2_F, with high probability. 
We thus obtain that Y_j-P_X_k Y_j_F^2/(n -p_k-1) ⩾γP_X_k Y_j_F^2/(p_k-1), leading to the result that Δ̂_kj=0. Based on the above discussion, we integrate all the blocks and define the following correlation penalized function to select the indicator for the complete block-selection model: Q( Δ )= 1/KJ∑_j=1^J∑_k=1^K( 1/n-p_k-1Y_j-P_X_k Y_j_F^2Δ_k j +γ1/p_k - 1P_X_k Y_j_F^2(1-Δ_k j) ), where γ is a tuning parameter. Δ̂= {Δ̂_kj}_K × J is obtained as follows: Δ̂=Δ∈𝒱^K × Jargmin Q(Δ). In the following, we provide the theoretical guarantees of the indicator. First, we introduce the following proposition to ensure the uniqueness of the estimate. Given the projection matrix of X_k, denoted as P_X_k where k = 1,…, K, the solution of the 0-1 integer optimization problem (<ref>) is unique. To further obtain the asymptotic properties of the selection indicator, we introduce some notations. Set J_1 = {(k,j): Δ_kj = 1} and J_0 = { (k,j): Δ_kj = 0 }. The related estimates are Ĵ_1 and Ĵ_0, respectively. Furthermore, we set X^_k X_k/n = M_k, where M_k is a p_k × p_k matrix. Noting that Σ_kj = L^_kjL_kj/n, we have the following results. Assume the following requirement of γ holds: max_(k,j) ∈ J_1p_kn·(tr(Σ_j) + (Σ_kj)) (B^ _kj M_k B_kj) < γ1 - γ, where γ is a tuning parameter. The selector is uniformly consistent, that is, for n →∞, with probability tends to 1, Δ̂= Δ. Note that when the dimension of covariates within each group grows larger than n, (<ref>) considers the subset for each block. Based on the definition of p_k, it is always smaller than n. Assumption (<ref>) is regular and equivalent to the following inequality: max_(k,j) ∈ J_1 (tr(Σ_j) + (Σ_kj))n · (B^ _kj M_k B_kj)/p_k + (tr(Σ_j) + (Σ_kj)) < γ. The left-hand side of (<ref>) comprises values in the range of (0,1) when (k,j) ∈ J_1. (<ref>) naturally holds when p_k ≪ n. When p_k is comparable to n, (<ref>) can be achieved by a suitable tuning parameter γ∈ (0, 1). The selection of γ and the algorithm of the indicator is discussed in the next section. The degrees of freedom for both values play a crucial role in the theoretical guarantees of the correlation function. With the assistance of these two degrees, the correlation function only requires one tuning parameter, resulting in a lower number of parameters compared to those obtained in other relevant research <cit.>. The above result guarantees that the selection indicator estimator converges to the true value. We then introduce the following properties for the selection indicator. Under the same conditions of Theorem <ref>. For k = 1,…,K and j = 1,…,J, set l_kj = Y_j-P_X_k Y_j_F^2n -p_k-1 / P_X_k Y_j_F^2p_k - 1. We have that for any (k,j) ∈ J_1 and (k',j') ∈ J_0, there exists a positive constant l. With probability tending 1 the following holds: |l_kj| < l < |l_k'j'|. Corollary <ref> is straightforward following from Theorem <ref>. For each block, (k,j), with probability tending to 1, Δ̂_kj = Δ_kj. With (k,j) ∈ J_1 and (k',j') ∈ J_0, we have Δ̂_kj→ 1  and Δ̂_k'j'→ 0  when  n →∞. For a relatively large n, it is possible to visually find a constant, l, such that |l_k'j'| < l < |l_kj| and Corollary <ref> holds. This strategy for identifying the relevant blocks from the irrelevant ones considerably reduces the computational cost and can identify the relevant blocks by distinguishing the values of the selection function. The non-zero block-indicator detection rule provides a stable measure for identifying non-zero blocks. 
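Since the objective Q(Δ) decouples over blocks, its minimizer can be computed blockwise for a given γ, as in the following numpy sketch; it assumes every p_k ⩽ n (otherwise the projection is built from the screened subset discussed earlier), and the names are illustrative.

import numpy as np

def nbs_indicator(X_groups, Y_groups, gamma):
    # X_groups: list of (n, p_k) arrays; Y_groups: list of (n, q_j) arrays.
    # Blockwise minimization of Q(Delta): set Delta_kj = 1 exactly when
    # ||Y_j - P_k Y_j||_F^2/(n - p_k - 1) < gamma * ||P_k Y_j||_F^2/(p_k - 1),
    # where P_k is the projection onto the column space of X_k.
    n = X_groups[0].shape[0]
    Delta = np.zeros((len(X_groups), len(Y_groups)), dtype=int)
    for k, Xk in enumerate(X_groups):
        p_k = Xk.shape[1]
        P = Xk @ np.linalg.pinv(Xk.T @ Xk) @ Xk.T
        for j, Yj in enumerate(Y_groups):
            fit = np.linalg.norm(P @ Yj, 'fro') ** 2
            res = np.linalg.norm(Yj - P @ Yj, 'fro') ** 2
            if res / (n - p_k - 1) < gamma * fit / (p_k - 1):
                Delta[k, j] = 1
    return Delta

The tuning parameter γ can be supplied directly or chosen in a data-driven manner, as discussed in the next subsection.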
Combining the above aspects, we preliminarily conclude that the correlation-selection function can accurately calculate the selection indicator as well as the block-selection model. In the next section, we discuss the calculation of the tuning parameter, γ. §.§ Determination of γ The above section provides a selection indicator with a simple rule for block selection. To further discuss the determination of the tuning parameter and provide the calculation for the indicator, we transform (<ref>) into the following function: Δ̂_kj={[ 1 if R̅_kj^2 > 1-γ,; 0 otherwise, ]. where R̅_kj^2 ≜ 1 - p_k - 1/n-p_k-1·Y_j-P_X_k Y_j_F^2/P_X_k Y_j_F^2. The determination of 1 - γ plays an important role in this indicator estimation. For notational simplicity, in this part, we denote c = 1-γ where c refers to the partition threshold of {R̅_kj^2:Δ_kj = 1} and {R̅_kj^2:Δ_kj = 0}, which is similar to the partition plane in support vector machines. We aim to choose c such that [ lim_n →∞ P(R̅_kj^2 > c|Δ_kj = 1) → 1; lim_n →∞ P(R̅_kj^2 ≤ c|Δ_kj = 0) → 1. ] Following the idea of the double-thresholding filter from <cit.>, we first define two measures, recall rate and error rate (ER), as follows: Recall(c) = |Ĵ_1(c) ∩ J_1|/|J_1|,  ER(c) = |Ĵ_1(c)∩ J_0|/|Ĵ_1(c)| = |I(c)|/|Ĵ_1(c)| where Ĵ_1(c) = {(k,j):Δ̂_kj = 1} = {(k,j):R̅_kj^2 > c},  and  I(c) = Ĵ_1(c)∩ J_0. Ĵ_1(c) denotes the estimated active set determined by the constant c. J_1 and J_0 denote the true relevant set for blocks and complementary set, respectively. I(c) denotes the Type I error set. To control the recall rate and error rate and obtain the optimal tuning parameter, the following function is presented: [ c = Recall(c); s.t. ER(c) ≤α, ] where α is the significant level that usually assumes a value at 0.05 or 0.1. Note that although J_1 is unknown, it is deterministic, and the following holds: Recall(c) = |Ĵ_1(c) ∩ J_1|/|J_1| = |Ĵ_1(c) ∩ J_1|. Since |Ĵ_1(c) ∩ J_1| is monotonically non-increasing in c, set c_0 = inf{c:ER(c) ≤α} and it is the solution of the above optimization function. To properly estimate ER(c) = |I(c)|/|Ĵ_1(c)| and find c_0, we first need to estimate I(c), and we have the following proposition. Under the same conditions of Theorem <ref>. With probability tending to 1, the following two events are equal: {(k,j) ∈ I(c)} = {R̅_kj^2 < c/(2c-1) }. The above result is inspired by the transformation process in the double-thresholding filter <cit.>. Replacing |I(c)| by |{(k,j):R̅_kj^2<c/(2c-1)}|, we have the following: ER(c) = |I(c)||Ĵ_1(c)|≈|{(k,j):R̅_kj^2<c/(2c-1)}||{(k,j):R̅_kj^2>c}|. Formally, c are obtained by solving the following optimization problem, which has a solution as c_0 = inf{c:ER(c) ≤α} and can be solved by grid-point search: [ c = |Ĵ_1(c) ∩ J_1|; s.t. |{(k,j):R̅_kj^2<c/(2c-1)}||{(k,j):R̅_kj^2>c}|≤α. ] We simply illustrate the following algorithm for the NBS. § JOINT ESTIMATION UNDER VARIOUS SITUATIONS In this section, we introduce the details of how the block-selection model is jointly estimated. There are two crucial dimensions of the block-selection model. The first dimension is the number of blocks, i.e., K × J, corresponding to the number of response and covariate groups, which we allow to grow with and much higher than n. The second dimension is within each block. For k = 1,…, K and j = 1…, J, the dimension of the coefficient submatrix within this block is p_k × q_k, which corresponds to the number of covariates and responses within the block. 
As for the dimension of the number of blocks, we allow q_k to grow much higher than n. For the number of covariates in each group, we consider three situations and propose three estimators for each. First, we consider two types of covariates: 1) X_k is of full rank; and 2) p_k ≫ n. The optimization of the two situations is different. In the former, we do not consider the sparsity requirement within the block, moreover, the least squared estimation is suitable. In the latter situation, we apply penalized regularization within the block. Ultimately, we consider a joint penalized estimator for the sparse block-selection model. §.§ Joint estimation under two kinds of covariates Before estimating the block-selection model, we first discuss the estimation of the single-block model and how the indicator reacts in the single-block model. The optimization of estimating the block-selection model is based on the optimization function for estimating the single-block model. Assume K = 1 and J = 1; thus, the block-selection model is reversed to a regular, multivariate, high-dimensional regression problem: Y_1 = X_1 B_Δ11 + E_1. Note that the dimension of Y_1 is q_1 and the dimension of X_1 is p_1. We always allow q_1 to be larger than the sample size, but consider two situations of p_1, i.e., p_1 ⩽ n and p_1 > n. In this model, the indicator determines the presence of the model. After estimating the single-block model, we discuss the estimate of the block selection, i.e., K × J >1. §.§.§ Single-block model under full rank of covariates In this part, we consider the covariates to be full rank and do not require sparsity in the model. Set P_X_1 = X_1 (X_1^ X_1)^-1 X^_1. Following from the above discussion, the selection indicator for the single-block model can be estimated as follows: Δ̂_11 = _Δ_11∈{0,1}(Y_1-P_X_1 Y_1_F^2/n-p_1-1Δ_11 +γP_X_1 Y_1_F^2/p_1-1(1-Δ_11) ). The estimate of the single-block model can be obtained by transforming (<ref>) into the following optimization function: (Δ̂_11,B̂_Δ_11)= _Δ_11∈{0,1}{1n_1Δ_11∙ Y_1 - X_1B_Δ_11_F^2 +γ W_11(1-Δ_11) }, where n_1 = n-p_1-1 and W_11 = P_X_1 Y_1 _F^2/(p_1-1). The following proposition shows the equivalence of the solutions from (<ref>) and (<ref>). A solution to (<ref>) is equivalent to that to (<ref>) and the regression coefficient estimator is unique, as follows: B̂_Δ 11 = (X_1^ X_1)^-1 X^_1 Y_1 ∙Δ̂_11. The above result establishes the equivalence between estimating the selection indicator and estimating the coefficient matrix for a single-block model. Furthermore, it introduces the uniqueness of the estimator. The results are natural, and the estimator is the block-selected least squares solution. We provide theoretical guarantees for the estimate in the following. Assume the following requirement for γ holds, Δ_11·p_1n·tr(Σ_1) (B^ _11 M_1 B_11) < γ1 - γ. With probability tending to 1, the estimate to achieve the oracle solution is as follows: B̂_Δ 11 = (X_1^ X_1)^-1 X^_1 Y_1 ∙Δ_11. The proof of Theorem <ref> is similar to that of Theorem <ref>, with a weaker assumption, (<ref>), than (<ref>). When Δ_11 = 0, (<ref>) naturally holds. When Δ_11 = 1 and p_1 < n, this requirement for γ is also easily to attained. Based on the above result, we introduce the asymptotic normality of the coefficient estimator. Suppose E_1 are sub-Gaussian. For each fixed dimension, q_0 ⩽ q_1, set the estimator B̂_Δ 11,q_0 = {β̂^11_ll'}_p_1 × q_0∙Δ̂_11 denoting the first q_0 columns of B̂_Δ 11. 
Then, the estimator is asymptotically normal, that is, n^1/2 (B̂_Δ 11, q_0 - B_11,q_0) → N_p_1 × q_0( 0, (X^_1 X_1/n)^-1Σ_q_0), where B_11,q_0 denotes the first q_0 columns of B_11 and Σ_q_0 denotes the covariance of E_1,q_0. §.§.§ Single-block model under high-dimensional settings In this part, we consider the single-block model in which both dimensions of covariates and response are allowed to be larger than the sample size. The covariate, X_1, is incapable of full rank and we consider the sparsity assumption. Denote S_11 to be the set of relevant covariates within the single-block model, and we assume |S_11| < n. Set P_X_1 = X_Ŝ_11(X_Ŝ_11^TX_Ŝ_11)^-1X_Ŝ_11^T. We use the l_1 penalized regularization for the single-block model to obtain the active set, denoted as Ŝ_11 and referred to as lasso for simplicity. As above, the selection indicator can be estimated as follows Δ̂_11 = _Δ_11∈{0,1}(Y_1-P_X_1 Y_1_F^2/n-|Ŝ_11|-1Δ_11 +γP_X_1 Y_1_F^2/|Ŝ_11|-1(1-Δ_11) ). This is equivalent to the following joint estimation of both the correlation penalized function and coefficient estimation, (Δ̂_11,B̂_Δ_11) = _Δ_11∈{0,1}, B_11: β^11_ll' = 0, l ∉Ŝ_11{Δ_11∙ Y_1- X_1B_Δ_11_F^2/n-|Ŝ_11|-1 +γ W_11(1-Δ_11) }, where B_Δ_11 = B_11∙Δ_11 and W_11 = P_X_1 Y_1^2_F/(|Ŝ_11|-1). The following proposition presents the solution of the algorithm and ensures the uniqueness of the estimator. The solution to (<ref>) is unique and equivalent to that of (<ref>). It can be represented as follows: B̂_Δ 11,Ŝ_11 = (X_Ŝ_11^TX_Ŝ_11)^-1X_Ŝ_11^T Y_1 ∙Δ̂_11, and B̂_Δ11, Ŝ^c_11 = 0. Furthermore, we assume that the smallest eigenvalue of X_1^ X_1/n is larger than Λ_min > 0. We then present the following result to prove that the proposed estimate achieves the oracle solution of the single-block model and further provide the asymptotic normality of the estimator. Suppose p_1 = O(exp^n^c_1), q_1 = o(n^(1+c_1)/2), and the true nonzero set, |S_11| = o(n^c_2), where 0 <c_1 + c_2 < 1. Assume the rows of the random error matrix, E_1, are independent and identically distributed with mean zero and covariance Σ_1. Assume the following restricted eigenvalue condition holds: v^ (X_1^ X_1/n) v ⩾κv^2_2, for all v ∈ G(S_11,1) where G(S_11,1) = { v ∈ℝ^p_1: v_S^c_11_1 ⩽ 3v_S_11_1}. We also assume a small gap between 0 and |B_min| ≡min{|β^11_ll'|, β^11_ll'≠ 0} such that, with a positive constant M_2, |B_min| ⩾ M_2 ·√(n^c_1 + c_2-1). Suppose 1/2<γ<1. With probability tending to 1, the estimate achieves the oracle solution: B̂_Δ 11 = (X_S_11^ X_S_11)^-1 X^_S_11 Y_1∙Δ_11. The most requirements in the above result are inherited from the lasso, with the exception of q_1 <cit.>, since the active set Ŝ_11 is obtained from the lasso. The requirement of q_1, q_1 = o(n^(1+c_1/2)), can also be eased by employing the Assumption (<ref>). Suppose E_1 are sub-Gaussian. Suppose the same assumptions of Theorem <ref>. For each fixed dimension, q_0 ⩽ q_1, set the estimator B̂_Δ 11,q_0 = {β̂^11_ll'}_p_1 × q_0∙Δ̂_11. Then, the estimator is asymptotically normal, that is, n^1/2 (B̂_Δ 11, q_0 - B_11,q_0) → N_p_1 × q_0( 0, (X^_1 X_1/n)^-1Σ_q_0), where B_11,q_0 denotes the first q_0 columns of B_11 and Σ_q_0 denotes the covariance of E_1,q_0. 
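To make the two-step procedure of this subsection concrete, the following sketch screens the active set with an ℓ_1-type fit (scikit-learn's multi-response lasso is used here only as a stand-in for the lasso step; the regularization level alpha is an illustrative choice), decides Δ̂_11 by comparing the two normalized terms with |Ŝ_11| in place of p_1, and refits by least squares on the screened set.

import numpy as np
from sklearn.linear_model import MultiTaskLasso

def single_block_fit(X1, Y1, gamma, alpha=0.1):
    # Step 1: l1-type screening of the active set S_hat (stand-in for the lasso step).
    # Step 2: indicator Delta_11 from the two normalized terms with |S_hat| in place of p_1.
    # Step 3: least-squares refit on S_hat if the block is retained.
    n, p1 = X1.shape
    coef = MultiTaskLasso(alpha=alpha, max_iter=5000).fit(X1, Y1).coef_   # shape (q1, p1)
    S_hat = np.flatnonzero(np.linalg.norm(coef, axis=0) > 0)
    B_hat = np.zeros((p1, Y1.shape[1]))
    if S_hat.size == 0:
        return 0, B_hat
    Xs = X1[:, S_hat]
    P = Xs @ np.linalg.pinv(Xs.T @ Xs) @ Xs.T
    s = S_hat.size
    fit = np.linalg.norm(P @ Y1, 'fro') ** 2
    res = np.linalg.norm(Y1 - P @ Y1, 'fro') ** 2
    Delta = int(res / (n - s - 1) < gamma * fit / max(s - 1, 1))
    if Delta:
        B_hat[S_hat], *_ = np.linalg.lstsq(Xs, Y1, rcond=None)
    return Delta, B_hat

In practice alpha would be selected by cross-validation, and when p_1 ⩽ n the screening step can simply be skipped.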
§.§ Joint penalized estimation of the sparse block-selection model As noted in the above two subsections, after estimating the single-block model, the complete block-selection model can be obtained directly by integrating (<ref>) and (<ref>): (Δ̂,B̂_Δ)= _Δ∈𝒱^K × J, {B_kj∈ℬ_kj}_K× J1KJ∑^J_j = 1∑^K_k = 1{1n_kΔ_kj∙ Y_j - X_kB_Δ_kj_F^2 +γ W_kj(1-Δ_kj) }, where 𝒱 = {0,1}, ℬ_kj = { B_kj∈ℝ^p_k × q_j: β_ll'^kj = 0,l∉Ŝ_kj} and Ŝ_kj equals the complete set when X_k is of full rank. In this part, we propose an effective algorithm for situations where sparsity is always required. This is a natural situation in which, even if the dimensions of a block are smaller than the sample size and the number of blocks is shrunk by the indicator selector, the total number of coefficients in the complete block model remains relatively large. In this case, we introduce the following joint penalized estimation for the sparse block model. This method enjoys the computational advantage of traditional penalized optimization techniques. After determining the selection indicator, solving the block model is equivalent to solving a lasso-type optimization, which significantly reduces the computational cost. Based on the above discussions, we have derived the following two properties for the coefficient estimator. Assume the following restricted eigenvalue condition holds: v^(X^ X/n) v ⩾κv^2_2, for all v ∈ G(S) where G(S) = { v ∈^P : v_S^c_1 ⩽ 3v_S_1}. Assume λ = M_3 √(nlog P) where M_3 is a positive constant. Then, with probability tending to 1, the following error bound for the estimate holds, B̂ - B_2 ⩽ M_3 √(|S|log Pn), where S = ∪ S_kj. Similarly to the notation of Theorem <ref>, we assume B_min = min{ |β^kj_ll'| : β^kj_ll'≠ 0, k= 1,…,K, j = 1,…,J } and the following result hold, Under the same assumptions of Theorem <ref>. Assume B_min⩾ M_3 √(|S| log P /n). Then, we have P((B̂) = (B)) → 1. The above two results provide overall theoretical guarantees for the block-selection model. We do not restrict the number of blocks or the dimensions within each block. Specifically, we allow the dimensions of each block to grow exponentially. In this case, an effective and robust shrinkage technique is crucial to our method. We name the joint estimation obtained by Algorithm <ref> NBSlasso and demonstrate its performance in simulations and a real example below. § SIMULATION In this section, we compare the performance of NBSlasso, SCCS, lasso, and elastic net through numerical simulation experiments, where NBSlasso is obtained using Algorithm <ref>. We conduct numerical simulation experiments with two different data dimensions: (n=150,p=200,q=200) and (n=150,p=400,q=200), as well as the following two group settings: Group setting 1. Each group has the same size of 20, i.e., under the dimension (n=150,p=200,q=200), the group structures is (1-20,21-40,41-60,...,181-200). Group setting 2. The group structures under dimension (n=150,p=200,q=200) are represented by (1-20,21-50,51-70,70-100,...,171-200), in which all the group sizes are unequal. We generate the covariates, X, from a multivariate normal distribution, N_p(0,Σ), where Σ = {0.5^|k - k'|}_p × p. Recall that we do not demand independence among the various covariate groups; rather, we permit a modest level of correlation to better approximate the true structure of real datasets. Initially, we designate K_j as the sparsity indicator for Y_j. For example, K_1 = 2 implies that we randomly select two nonzero blocks from B_11,…,B_K1 for Y_1. 
The sparsity level K_j is determined in two scenarios: 1) the fixed case, where K_j is constant across the response groups Y_j, j = 1,…,J; and 2) the random case, where K_j is randomly chosen and may vary among different response groups. Within each nonzero block, the sparsity level of the sub-coefficient matrix is selected from 10% to 90%, which means that 10% to 90% of the entries in the block are zero. The values of the nonzero entries are drawn uniformly from [-5,-1]∪ [1,5]. Figure <ref> illustrates the process of generating a simulation data set, and all simulations are repeated 100 times. We compare the performance of the four methods using the following nine metrics: the mean square error of the test set (TestMSE); the precision of selected non-zero blocks (Precision); the recall rate of non-zero blocks (Recall); the L_1 norm of B-B̂; the L_2 norm of B-B̂; the positive discovery rate of non-zero entries (PDR); the false discovery rate of non-zero entries (FDR); the number of non-zero entries estimated by the methods (NNE); and the computational time (Time). Note that ER, as mentioned in Section <ref>, is equal to 1-Precision. The simulations are performed on a standard laptop computer equipped with a 2.60 GHz Intel Core i5-11260H processor. Performance comparison with random sparsity indicators In this part, we discuss the performance of the four methods in two scenarios with random sparsity indicators. For each response block, the sparsity indicator K_j is randomly chosen from two sets: K_j ∈{2,3,4} and K_j ∈{1,2,...,9}. Table <ref> reports the performance of the four methods under the following conditions: dimension n=150, p=200, q=200, Group setting 1, with a random sparsity indicator of K_j ∈{2,3,4}. We consider three sparsity levels here: 30%, 60%, and 90%. As one can see, NBSlasso outperforms the other methods in terms of TestMSE and consistently maintains a low value. It precisely selects all non-zero blocks, achieving precision and recall rates of 1. The accuracy of NBSlasso is also reflected in the L_1, L_2, and PDR metrics, where it attains the lowest L_1 and L_2 values and a high PDR. Additionally, NBSlasso outperforms the other three methods in terms of computational efficiency. NBSlasso yields a higher FDR than SCCS; for example, when the sparsity equals 90%, NBSlasso attains an FDR of 0.67 compared to 0.29 for SCCS. However, NBSlasso has a precision and recall rate of 1. These two observations suggest that the NBS can accurately select all non-zero blocks, but that the non-zero entries selected by the lasso step contain many spurious variables. Although we do not show the NNE metric here due to the size limit of the table, the following discussion will involve it. Due to space constraints, additional comparisons under different dimensions and group settings are provided in the Supplementary Material. We then test the performance of all the methods under the more general random sparsity indicator of K_j ∈{1,2,...,9}. Figure <ref> illustrates the differences between the methods by focusing on four metrics: TestMSE, NNE (the number of estimated non-zero entries), L_2, and L_1, with sparsity levels ranging from 10% to 90% under the dimensions n=150,p=200,q=200, Group setting 1, with random sparsity indicators of K_j ∈{1,2,…,9}. We can see that NBSlasso generally outperforms the other methods in terms of TestMSE.
Regarding the other three metrics, we see that NBSlasso consistently maintains its advantage in terms of the L_1 and L_2 norms while accurately estimating the number of non-zero coefficients. Performance comparison with fixed sparsity indicators In this part, we demonstrate the performance of all the methods with fixed sparsity indicators of K_j = 2,4,6,8, respectively. In Figure <ref>, we display the TestMSE, NNE, and L_2 metrics of all the methods with sparsity levels ranging from 10% to 90% under dimensions n=150,p=200,q=200 and Group setting 1. For K_j = 2, the NNE estimated by NBSlasso is closest to the real model, and NBSlasso has the smallest L_2. Although the TestMSE of NBSlasso is larger than that of the lasso or elastic net in the cases with high sparsity levels, this is most likely due to overfitting of the lasso and elastic net, as can be inferred from their NNE being much larger than that of the real model. The results for K_j = 4 are similar to those for K_j = 2. In the denser cases, such as K_j = 6 and K_j = 8, the NNE estimated by NBSlasso is, like that of the other methods, not close to the real model. Although the precision and recall rates of non-zero blocks are not presented here, we note that the gap in NNE between NBSlasso and the real model is not due to block selection but rather to coefficient estimation by the penalized method. Nonetheless, NBSlasso maintains the best L_2 metric, indicating that the model estimated by NBSlasso is more accurate than the others. To conclude, NBSlasso outperforms the other methods in terms of TestMSE in most situations. Additionally, the NBS has the smallest L_2 while estimating fewer non-zero entries than the other methods. In summary, NBSlasso consistently outperforms the other methods in various scenarios. It demonstrates superior performance in terms of computation time, estimation accuracy, and TestMSE. These results hold regardless of whether the data are sparse or dense and whether the number of non-zero coefficient blocks is randomly set or fixed. The findings indicate that NBSlasso is a robust and effective method for selecting informative blocks and accurately estimating the associated coefficient matrix. § REAL EXAMPLE In this section, we present the performance of our proposed method on a practical problem in genomic studies, namely the interaction between the isoprenoid genes in Arabidopsis thaliana, which has been studied in <cit.>. Isoprenoids serve numerous biochemical functions in plants and are synthesized through the condensation of the five-carbon intermediates isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP). In higher plants, IPP and DMAPP are formed through two distinct pathways: one in the cytosol, the mevalonate pathway, and the other in the chloroplast, the non-mevalonate pathway. The interaction between these two pathways has been reported by many researchers <cit.>. We examine a data set that comprises 118 GeneChip (Affymetrix) microarrays and contains the expression of 39 genes in the isoprenoid pathways of Arabidopsis thaliana and 795 additional genes from 56 downstream metabolic pathways <cit.>. The 39 genes in the isoprenoid pathways can be divided into two groups: the mevalonate pathway group containing 21 genes and the non-mevalonate pathway group containing the remaining 18 genes.
<cit.> utilized this data set to construct the genetic regulatory network between the isoprenoid pathways, where the downstream genes were attached to the network as conditional variables. The construction of the genetic regulatory network can be formulated as a conditional Gaussian graphical model with group structures, Y = XB + E. Multicollinearity exists among the 795 additional genes from the 56 downstream metabolic pathways, so we remove some downstream metabolic pathways, obtaining the model (Y_1, Y_2) = (X_1,⋯, X_50)B + E, where (n, p, q) = (118, 739, 39), J=2, and K=50. (Y_1, Y_2) represents the isoprenoid genes in the mevalonate and non-mevalonate pathways, and (X_1,⋯, X_50) denotes the genes from 50 downstream metabolic pathways. We focus on the selection of the downstream pathways as predictors for the isoprenoid pathways and compare NBSlasso, lasso, elastic net, and SCCS using four metrics: TestMSE (the mean square error in the testing set), NNB (the number of non-zero blocks estimated), NNE (the number of non-zero entries estimated), and computational time. We set the size of the training set as n_train = (100,90,80,70,60). For each n_train, we randomly split the data set into a training set and a testing set. We estimate B using the four methods and compute the TestMSE on the testing set. Each test is repeated 100 times on different splits. The results are shown in Table <ref>. First, we compare NBSlasso with SCCS. We observe that, on average, NBSlasso achieves a lower TestMSE and a greater NNE. This implies that SCCS might overlook some true non-zero entries. Although NBSlasso selects non-zero entries more accurately, it appears to select significantly more non-zero blocks than SCCS, which is likely attributable to the presence of multicollinearity in the predictors. NBSlasso still maintains its computational advantage over SCCS. Second, we focus on the differences between NBSlasso and the regularized regression approaches. Both the lasso and elastic net select almost all the blocks, while NBSlasso selects far fewer. NBSlasso obtains a smaller TestMSE with a smaller NNE than the lasso and elastic net, indicating that the regularized methods may suffer from overfitting. The computational time of NBSlasso is comparable to that of the lasso and elastic net. To conclude, the results show that NBSlasso detects a more accurate gene set at a lower computational cost. § CONCLUSION In this paper, we focused on a block-selection model and introduced a technique called the NBS to identify the relevance between response and covariate groups. We constructed a high-dimensional model with block structures for both responses and covariates, and the new strategy played a crucial role in detecting these blocks. We demonstrated the uniform consistency of the NBS and proposed three estimators to enhance modeling efficiency under different scenarios. Additionally, we derived their asymptotic properties. The results from both simulations and empirical analysis illustrate that the proposed method effectively addresses the challenges of data complexity, achieving satisfactory estimation and prediction accuracy. The NBS investigates the blocking (grouping) effect through an efficient correlation measure and can be applied in combination with various other regularization techniques for diverse modeling tasks, thus offering extensive application prospects.
In further research, several topics are worth exploring, such as utilizing the NBS to detect change points in linear models <cit.>, reducing estimation bias caused by hidden variables <cit.>, etc. § ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China (Grant No. 12371281); the Emerging Interdisciplinary Project, Program for Innovation Research, and the Disciplinary Funds of Central University of Finance and Economics. § APPENDIX A: ADDITIONAL SIMULATION RESULTS Table <ref>, <ref>, and <ref> compare the performance of the four methods under two dimensions, two group settings, random sparsity indicators of K_j ∈{2,3,4}, and sparsity-levels within each block are 30%, 60%, and 90% respectively. § APPENDIX B: PROOFS Recall that Δ̂= {Δ̂_kj}_K × J, where Δ̂_kj = _Δ_kj∈𝒱(Y_j-P_X_k Y_j_F^2/n-p_k-1Δ_k j +γP_X_k Y_j_F^2/p_k - 1(1-Δ_k j) ). The uniqueness of the solution to the above 0-1 integer optimization depends on the uniqueness of P_X_k, which is divided into two parts. When p_k ⩽ n, we have P_X_k = X_k(X_k^ X_k)^-1X_k^, and the uniqueness of the above equation is certain. When p_k > n, P_X_Ŝ_kj = X_Ŝ_kj(X_Ŝ_kj^ X_Ŝ_kj)^-1X_Ŝ_kj^, and the uniqueness is based on the active set Ŝ_kj. In this paper, we obtain the active set using the Lasso. Based on its properties <cit.>, the uniqueness of P_X_Ŝ_kj is proven. We can equivalently define Δ, in terms of Δ_kj, by the following optimization problem: Δ̂=Δ∈𝒱^K × Jargmin Q(Δ), where Q( Δ )= 1/KJ∑_j=1^J∑_k=1^K( 1/n-p_k-1Y_j-P_X_k Y_j_F^2Δ_k j +γ1/p_k - 1P_X_k Y_j_F^2(1-Δ_k j) ). The uniqueness of Δ is directly obtained when all the minimizers of (<ref>) are unique. Recall that J_1 = {(k, j): Δ_kj = 1} and J_0 = { (k,j): Δ_kj = 0 }. To prove Δ̂= Δ with probability tending to 1 equals proving Ĵ_1 = J_1 and Ĵ_0 = J_0 with probability tending to 1, respectively. We prove this theorem following the argument from the proof of Theorem 1 of <cit.>. That is, we adopt the reduction to absurdity in the following proving process. First, it suffices to show P(Ĵ_0 ∩ J_1 = ∅) → 1, P(Ĵ_1 ∩ J_0 = ∅) → 1. Since both J_1 and J_0 are not considered empty sets, to prove our result, we assume there exists a nonzero probability p_0 that P(Ĵ_0 ∩ J_1 ≠∅) = p_0. When n is large, there exists at least one block (j_0,k_0) containing nonzero coefficients with Δ̂_k_0j_0 = 0. That is, 1 - Y_j_0-P_X_k_0 Y_j_0_F^2/P_X_k_0 Y_j_0_F^2·l_k_0 - 1/n-l_k_0-1⩽ 1 - γ, where l_k_0 = |𝒳_k_0| and the above inequality holds based on the definition of Δ̂. It leads to Y_j_0-P_X_k_0 Y_j_0_F^2/P_X_k_0 Y_j_0_F^2·l_k_0 - 1/n-l_k_0-1⩾γ. We have EY_j_0 - X_k_0B̂_k_0 j_0^2_F = (n - l_k_0)((Σ_j_0) + (Σ_k_0j_0)) and EX_k_0B̂_k_0 j_0^2_F = (B^ _k_0 j_0(X_k_0^ X_k_0)B_k_0 j_0) + l_k_0 [(Σ_j_0) + (Σ_k_0j_0)], Note that X^_k_0 X_k_0/n = M_k_0. Thus, for n is large enough, (tr(Σ_j_0) + (Σ_k_0j_0))n· (B^ _k_0 j_0 M_k B_k_0j_0)/l_k_0 + (tr(Σ_j_0) + (Σ_k_0j_0))⩾γ. Based on the assumption of γ that max_j ∈ J_1p_kn·(tr(Σ_j) + (Σ_kj)) (B^ _kj M_k B_kj) < γ1 - γ, we obtain the following inequality, max_j ∈ J_1(tr(Σ_j) + (Σ_kj))n · (B^ _kj M_k B_kj)/p_k+ (tr(Σ_j) + (Σ_kj)) < γ. The above two inequalities are contradictions, leading to the following result: P(Ĵ_0 ∩ J_1 ≠∅) → 0. We now consider the second function of (<ref>). Similarly, assume that there exists a nonzero probability p_0 that P(Ĵ_1 ∩ J_0 ≠∅) = p_0. Then there exists at least one irrelevant block (k_0,j_0) with Δ̂_k_0j_0 = 1. That is, 1 - Y_j_0-P_X_k_0 Y_j_0_F^2/P_X_k_0 Y_j_0_F^2·l_k_0 - 1/n-l_k_0-1 > 1 - γ. 
For n is large enough, similar to the above derivation, the following inequality holds: (tr(Σ_j_0) + (Σ_k_0j_0))n· (B^ _k_0 j_0 M_k B_k_0j_0)/l_k_0 + (tr(Σ_j_0) + (Σ_k_0j_0))⩽γ. Since (k_0,j_0) ∈ J_0, we have B_k_0j_0 = 0, thus above inequality becomes 1 ⩽γ, which is a contradiction to γ < 1. Based on the definition of R̅_kj^2, we have R̅_kj^2 ≜ 1 - p_k - 1/n-p_k-1·Y_j-P_X_k Y_j_F^2/P_X_k Y_j_F^2. Denote Σ_j to be the covariance of E_j. The above definition can be calculated as follows: R̅_kj^2 =1 - p_k - 1/n-p_k-1·Y_j-P_X_k Y_j_F^2/P_X_k Y_j_F^2 = 1 - p_k - 1/n-p_k-1·(E^_j(1 - P_X_k)E_j) + (L_kj^L_kj) - (L_kj^P_X_kL_kj)/ (B^ _kj(X_k^ X_k)B_kj) + (E^_j P_X_k E_j) + (L_kj^P_X_kL_kj) + 2(L^_kjX_kB_kj), when n is large enough, the above becomes: R̅_kj^2 ≈ 1 - ((Σ_j )+(Σ_kj))/((Σ_j )+(Σ_kj))+2/p_k(E_kj^TX_kB_kj) ≜ 1 - t_kj/t_kj+S_kj, where t_j = (Σ_j )+(Σ_kj) and S_kj = 2/p_k(E_kj^TX_kB_kj). When Δ_kj = 0, we have B_kj = 0, thus E(S_kj|Δ_kj=0) = 0 and the following holds when n is large: P(R̅_kj^2 > 1-γ |Δ_kj =0 ) =P(1 - t_kj/t_kj+S_kj >1-γ |Δ_kj =0 ) = P(t_kj/t_kj+S_kj<γ |Δ_kj =0 ) = P(1+S_kj/t_kj> 1/γ |Δ_kj =0 ) = P(S_kj>1-γ/γ· t_kj |Δ_kj =0 ) = P(S_kj<γ-1/γ· t_kj|Δ_kj =0 ). Note that 0<γ <1 and (γ-1)· t_kj/γ<0, we further have, when n is large, P((k,j)∈ I(c)) = P(R̅_kj^2 > 1-γ|Δ_kj =0 ) = P(S_kj<γ-1/γ· t_kj|Δ_kj =0) = P(S_kj<γ-1/γ· t_kj)-P(S_kj<γ-1/γ· t_kj|Δ_kj =1) = P(S_kj<γ-1/γ· t_kj). The last equality holds based on Theorem <ref> and Corollary <ref>. Then we have P(S_kj<γ-1/γ· t_kj) = P(1+S_kj/t_kj < 2γ-1/γ) = P(t_kj/t_kj+S_kj>γ/2γ-1) = P(R̅_kj^2<γ-1/2γ-1) = P(R̅_kj^2<c/2c-1). Set (Δ̂_11, B̂_Δ_11) to be the solution of the following optimization: (Δ̂_11,B̂_Δ_11)= {1/n_0Δ_11∙ Y_1- X_1B_Δ_11_F^2 +γ W_11(1-Δ_11) }, where n_0 = n-p_1-1 and W_11 = P_X_1 Y_1 _F^2/(p_1 - 1). Since P_X_1 = X_1 (X_1^ X_1)^-1 X^_1, we have X_1 B̂_Δ_11 = P_X_1 Y_1 Δ̂_11 and then replace the former by the latter in the above function. The above optimization can be transferred as follows Δ_11∙ Y_1- X_1B_Δ_11_F^2/n-p_1-1 +γ W_11(1-Δ_11) = Δ_11∙ Y_1- P_X_1 Y_1 Δ_11_F^2/n-p_1-1 +γ P_X_1 Y_1 _F^2/p_1 - 1(1-Δ_11) = Y_1- P_X_1 Y_1 _F^2/n-p_1-1Δ_11 +γ P_X_1 Y_1 _F^2/p_1 - 1(1-Δ_11) Thus, the solutions to the above two functions are equal, and the regression coefficient estimator is unique when Δ̂_11 = 1, i.e., B̂_11 = (X^_1 X_1 )^-1 X_1^ Y_1. When Δ_11 = 0, all the elements of coefficient matrix B equal 0. We allow this situation to exist and expect Δ̂_11 = 0 as well as B̂ = 0. Thus, the estimator is given as follows: B̂_Δ 11 = (X_1^ X_1)^-1 X^_1 Y_1 ∙Δ̂_11. Based on Proposition <ref>, the solution to (<ref>) is unique as follows B̂_Δ 11 = (X_1^ X_1)^-1 X^_1 Y_1 ∙Δ̂_11. When we consider the single-block model, the block selection consistency is equivalent to prove Δ̂_11 = Δ_11, with high probability. It is similar and more simple to the proof of Theorem <ref> and suffices to prove the following two parts. (i) When Δ_11 = 1, the probability of the following inequality tends to zero, 1 - Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·p_1 - 1/n-p_1-1⩽ 1 - γ, which is equivalent to Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·p_1 - 1/n-p_1-1⩾γ, where γ is the tuning parameter. We first calculate P_X_1 Y_1_F^2. For a single-block model, the following equality holds, Y_1 = X_1 B_11 + E_1. Thus, we have EP_X_1 Y_1_F^2 = E ( B^_11 X^_1 X_1 B_11 + B^_11 X^_1 E_1 + E^_1 X_1 B_11 + E^_1 X_1(X^_1 X_1)^-1 X^_1 E_1 ) = (B^_11 X^_1 X_1 B_11 ) + p_1 ·(Σ_1), where the first equality holds based on (<ref>). 
Similarly, we have EY_1-P_X_1 Y_1_F^2 = E (Y_1 - P_X_1 Y_1)^ (Y_1 - P_X_1 Y_1) = E( Y_1^ Y_1 - Y^_1 P_X_1^ Y_1 - Y^_1 P_X_1 Y_1 + Y^_1 P^_X_1 P_ X_1 Y_1) = E(Y_1^ Y_1 - Y^_1 X_1 (X_1^ X_1)^-1 X^_1 Y_1 ) = E( (X_1B_11+ E_1)^ (X_1B_11+ E_1) - (X_1B_11+ E_1)^ X_1 (X_1^ X_1)^-1 X^_1 (X_1B_11+ E_1) ) = E(E_1^ E_1 - E^_1 X_1 (X_1^ X_1)^-1 X^_1 E_1 ) = (n - p_1)·(Σ_1). Set X^_1 X_1/n = M_1. We have EY_1-P_X_1 Y_1_F^2/EP_X_1 Y_1_F^2·p_1 - 1/n-p_1-1≈(Σ_1)(Σ_1) + n B^_11 M_1B_11/p_1 Based on the assumption of γ, we have Δ_11·p_1n·tr(Σ_1) (B^ _11 M_1 B_11) < γ1 - γ. It is equivalent to the following result: (Σ_1)(Σ_1) + n B^_11 M_1B_11/p_1 < γ. Thus we have P(Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·p_1 - 1/n-p_1-1⩾γ) → 0. (ii) When Δ_11 = 0, the probability of the following inequality tends to zero: Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·p_1 - 1/n-p_1-1 < γ. Following the above argument, we have, when Δ_11 = 0, (B^_11 X^_1 X_1 B_11 )/p_1 = 0, then when n becomes large, we have Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·p_1 - 1/n-p_1-1→ 1, and P(Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·p_1 - 1/n-p_1-1⩽γ) → 0. Recall that we define the estimator B̂_Δ 11,q_0 = {β̂^11_ll'}_p_1 × q_0∙Δ̂_11 denoting the first q_0 columns of B̂_Δ 11 and set B_11,q_0 denotes the first q_0 columns of B_11 and Σ_q_0 denotes the covariance of E_1,q_0. For any q_0 ⩽ q_1, based on Proposition <ref>, we have B̂_Δ 11, q_0 = (X^_1X_1)^-1 X^_1 Y_1,q_0·Δ̂_11, where Y_1,q_0 denotes the first q_0 columns of Y_1. Then we have n^1/2 (B̂_Δ 11, q_0 - B_q_0) = √(n) ( (X^_1X_1)^-1 X^_1 Y_1,q_0·Δ̂_11 - B_q_0·Δ_11) = (X^_1 X_1/n)^-1 X^_1 E_1,q_0/√(n) + √(n)B_11,q_0· (Δ̂_11 - Δ_11). Following the argument, we have that, with probability tending to 1, Δ̂_11 = Δ_11. Thus, the second term on the right-hand side of (<ref>) equals zero with probability tending to 1. For the first term on the right-hand side of (<ref>), we have X^_1 E_1,q_0/√(n)→ N_p_1 × q_0 (0, (X^_1 X_1/n) Σ_q_0), then n^1/2 (B̂_Δ 11, q_0 - B_11,q_0) → N_p_1 × q_0( 0, (X^_1 X_1/n)^-1Σ_q_0). Recall that we set (Δ̂_11, B̂_Δ_11) to be the solution of the following optimization: (Δ̂_11,B̂_Δ_11) = _Δ_11∈{0,1}, B_11: β^11_ll' = 0, l ∉Ŝ_11{Δ_11∙ Y_1- X_1B_Δ_11_F^2/n-p_1-1 +γ W_11(1-Δ_11) }, where B_Δ_11 = B_11∙Δ_11 and W_11 = P_X_1 Y_1^2_F/(p_1-1). Based on the definition of P_X_1 that P_X_1 = X_Ŝ_11(X_Ŝ_11^TX_Ŝ_11)^-1X_Ŝ_11^T, we have X_1 B̂_Δ_11 = P_X_1 Y_1 Δ̂_11 and the following transforming holds: _Δ_11∈{0,1}, B_11: β^11_ll' = 0, l ∉Ŝ_11Δ_11∙ Y_1- X_1B_Δ_11_F^2/n-p_1-1 +γ W_11(1-Δ_11) = _Δ_11∈{0,1}Δ_11∙ Y_1- P_X_1 Y_1 Δ_11_F^2/n-p_1-1 +γ P_X_1 Y_1 _F^2/p_1 - 1(1-Δ_11) = _Δ_11∈{0,1} Y_1- P_X_1 Y_1 _F^2/n-p_1-1Δ_11 +γ P_X_1 Y_1 _F^2/p_1 - 1(1-Δ_11). Then we discuss the uniqueness of the active set Ŝ_11. Since it is obtained from the lasso for the single-block model, based on the properties of the lasso, the uniqueness of X_1 B̂_lasso implies the uniqueness of Ŝ_11 <cit.>. Based on Proposition <ref>, the estimator is uniquely presented as follows, B̂_Δ 11,Ŝ_11 = (X_Ŝ_11^TX_Ŝ_11)^-1X_Ŝ_11^T Y_1 ∙Δ̂_11. To prove that the above estimator converges to the oracle solution of the single-block model, we need to prove two parts: Part 1. The active set equals the true nonzero set, i.e., Ŝ_11 = S_11, where S_11 denotes the set of relevant covariates and Ŝ_11 denotes its estimate, with high probability. Part 2. The block selection consistency under the high-dimensional setting. The first part is discussed based on two situations. First, the single-block model exists, i.e., S_11≠∅. The active set is obtained using the lasso. 
The consistency between the Ŝ_11 and S_11 is similar to that of the regular regression model. Based on Lemma A.2 in <cit.>, the restricted eigenvalue condition holds in the multi-response model. Then, followed by Corollary 2 in <cit.> and the restricted eigenvalue condition, we have that, with probability tending to 1 and a positive constant M_2, B̂_11 - B_11^2_F ⩽M^2_2 · |S_11| log p_1n. The above result holds naturally for the multi-response model, considering the properties of the lasso solution under the multi-response model. This can be viewed as the sum of separately solving the lasso under each single-response model. Thus, the above result leads to B̂_11 - B_11_∞⩽√(M^2_2 · |S_11| log p_1 / n)⩽ M_2 n^(c_1 + c_2 - 1)/2. Based on the assumption of the minimal nonzero entries of the coefficient matrix, |B_min| ⩾ M_2 √(n^c_1 + c_2-1), we have, with probability tending to 1, that Ŝ_11 = S_11. Part 2. Following the arguments as the proof of Theorem <ref>, we first prove that when Δ_11 = 1, P(1 - Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·|Ŝ_11| - 1/n-|Ŝ_11|-1⩽ 1 - γ) → 0. Based on the event that {Ŝ_11 = S_11}, we have P_X_1 Y_1_F^2 = ( B^_S_11 X^_S_11 X_S_11 B_S_11 + B^_S_11 X^_S_11 E_1 + E^_1 X_S_11 B_S_11 + E^_1 X_S_11(X^_S_11 X_S_11)^-1 X^_S_11 E_1 ) = (B^_S_11 X^_S_11 X_S_11 B_S_11 ) + |S_11| ·σ̂^2_1,sum, where σ̂^2_1,sum denotes the sum of the sample covariance of E_1. Similarly, Y_1-P_X_1 Y_1_F^2 = (E_1^ E_1 - E^_1 X_1 (X_S_11^ X_S_11)^-1 X^_S_11 E_1 ) = (n - |S_11|)·σ̂^2_1,sum. Based on the assumption that the smallest eigenvalue of X^_1 X_1/n is larger than Λ_min, we have Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·|S_11|-1/n-|S_11|-1≈|S_11|/n-|S_11|·Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2 = |S_11|/n-|S_11|·(n - |S_11|) ·σ̂^2_1,sum/ |S_11|· ( n(B^_11 X^_1 X_1 B_11 )/|S_11| + σ̂^2_1,sum) ⩽σ̂^2_1,sumσ̂^2_1,sum + n ·Λ_min(B^_11 B_11) / |S_11| . Based on the assumption of the gap between the nonzero entries of B_11 and 0, we have n ·Λ_min(B^_11 B_11) /|S_11| ⩾ n^1-c_1Λ_min B_min⩾ M_2 Λ_min n^(1+c_1)/2⩾σ̂^2_1,sum. With γ > 1/2, we conclude (<ref>) holds. In the meantime, following the above argument and the argument of the proof of Theorem <ref>, we also obtain the following result: when Δ_11 = 0, P(Y_1-P_X_1 Y_1_F^2/P_X_1 Y_1_F^2·|Ŝ_11| - 1/n-|Ŝ_11|-1⩽γ) → 0. The joint estimation for the sparse block model is B̂_Δ=B̂_Δ_kj = 0, (k,j) ∈Ĵ_0argmin{Y-X B_F^2+λB _1}. The convergence of the solution to (<ref>) is determined by the convergence of the indicator and the lasso optimization, in which the former holds based on Theorem <ref>. Following the argument of the proof of Theorem <ref> and the proof of Lemma A.2 and Lemma A.3 of <cit.>, by requiring the restricted eigenvalue condition, v^(X^ X/n) v ⩾κv^2_2, we have that, with probability tending to 1 and a positive constant M_2 B̂ - B ^2_F ⩽M^2_2· |S| log Pn, where S = ∪ S_kj and P = ∑^K_k=1 p_k. Based on Theorem <ref> and the condition of the minimum absolute value of nonzero entries B that B_min⩾ M_3 √(|S| log P /n), where B_min = min{ |β^kj_ll'| : β^kj_ll'≠ 0, k= 1,…,K, j = 1,…,J } We have that, P(B̂≠ B) → 0. apalike
http://arxiv.org/abs/2407.12668v1
20240717155232
All-optical Saddle Trap
[ "Daniel Tandeitnik", "Oscar Kremer", "Felipe Almeida", "Joanna Zielinska", "Antonio Zelaquett Khoury", "Thiago Guerreiro" ]
physics.optics
[ "physics.optics" ]
tandeitnik@gmail.com Department of Physics, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro 22451-900, Brazil Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro, 22451-900 Rio de Janeiro, RJ, Brazil Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London, UK Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Nuevo León, México Instituto de Física, Universidade Federal Fluminense, Niterói, Rio de Janeiro 24210-346, Brazil barbosa@puc-rio.br Department of Physics, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro 22451-900, Brazil § ABSTRACT The superposition of frequency-shifted Laguerre-Gauss modes can produce a rotating saddle-like intensity profile. When spinning fast enough, the optical forces produced by this structured light saddle generate a dynamically stable equilibrium point capable of trapping nanoparticles in a high vacuum, akin to a Paul trap but with its unique characteristics. We analyze the stability conditions and center-of-mass motion, dynamics and cooling of a nanoparticle levitated in the optical saddle trap. We expect the optical saddle to find applications in levitated optomechanics experiments requiring fast parametric modulation and inverted squeezing potential landscapes. All-optical Saddle Trap Thiago Guerreiro Received 04 December 2023 / Accepted 15 July 2024 ===================================================== § INTRODUCTION Trapping charged and neutral particles in electromagnetic fields has been one of the major tools in atomic, molecular, and nano-physics. While Earnshaw's theorem prevents the stable trapping of charged particles using electrostatic forces, stable equilibrium can be achieved dynamically by time-varying potentials <cit.>. These dynamic traps not only lie at the heart of trapped ion physics <cit.> but can also be used to trap mesoscopic objects such as charged dielectric nanoparticles <cit.>. Neutral particles, on the other hand, can be trapped by purely electrodynamic fields, e.g. laser light, as first proposed by Ashkin <cit.>. Optical tweezers can operate in different regimes depending on the relative size of the particle compared to the laser wavelength, enabling the trapping of single atoms all the way to nano and microparticles <cit.>. Moreover, levitation in vacuum tweezers offers a promising platform for fundamental quantum physics and sensing experiments <cit.>. Recently, hybrid optical-electrical traps combining both optical tweezers and dynamic electric traps have been designed and proposed <cit.> for fundamental experiments seeking to delocalize the quantum state of a levitated mesoscopic object <cit.>. Here we investigate whether the idea of a dynamic equilibrium trap can be implemented in an all-optical setup. Considering structured light both in space and in time, we propose a novel method for optical trapping of nano-sized dielectric particles in the dipole regime. By superposing frequency-shifted Laguerre-Gauss modes with a Gaussian beam, we can engineer a rotating saddle optical potential capable of holding a Silica nanoparticle in a high-vacuum environment, as illustrated in Figure <ref>. Provided the frequency shifts of the superposition components exceed a critical value, this optical saddle rotates at a sufficiently fast rate to develop a dynamic stable equilibrium point at the center of the beam. 
This structured light trap mimics a Paul trap but is made entirely with light and with a unique motion pattern and dynamics. The all-optical saddle trap offers interesting possibilities for levitated optomechanics experiments, both classically and in future quantum experiments. A particle trapped in the rotating saddle can be cooled to a complex motional state using optimal control methods and electric feedback <cit.>. Once cooled, the trap's rotation frequency can be tuned from several hundreds of MHz to zero, causing an effective inverted potential upon the initially localized particle. This not only offers novel methods for state expansion <cit.> and interference protocols <cit.>, but also the possibility of fast-switching between stable and unstable potential landscapes. This paper is organized as follows. In Sec. <ref>, we describe the optical saddle beam as well as the associated dynamics of a levitated nanoparticle. We derive and discuss the conditions required for achieving dynamic equilibrium and compute the power spectrum of the particle's motion in the laboratory, i.e., non-rotating frame of reference. Next, in Sec. <ref>, we propose an experimental setup to implement the optical saddle trap and show that feedback cooling of the particle motion is possible by applying optimal control methods. We conclude with a discussion of the results. § THEORY §.§ The optical saddle beam There are different ways of producing a saddle intensity pattern around the focus of an optical beam. As a simple solution, we choose the superposition of three Laguerre-Gauss (LG) beams as follows: E_s = √(16P/17cϵ_0)(E^LG_0,0 + 3/4e^-i2θE^LG_0,2 + 3/4e^i2θE^LG_0,-2), where P is the laser power, c is the speed of light in vacuum, ϵ_0 is the electrical permittivity of the vacuum, and θ is a parameter that controls the orientation of the saddle on the transverse plane. Considering the optical-axis as the z-axis, the normalized LG mode E^LG_p,l is defined in cylindrical coordinates as E^LG_p,l(r,ϕ,z) = √(2p!/π(p+| l|)!)1/w(z)(√(2)r/w(z))^| l| × L^| l|_p(2r^2/w^2(z))exp( -r^2/w^2(z) -ikr^2/2R(z) + ilϕ+iψ(z)), where w(z), z_R, R(z), and ψ(z) are the beam waist radius, Rayleigh range, wavefront radius and Gouy phase, respectively given by w(z) = w_0√(1+(z/z_R)^2), z_R = π w_0^2/λ_0, R(z) = z(1+(z_R/z)^2), ψ(z) = (| l| + 2p +1)arctan(z/z_R), with w_0 denoting the width of the Gaussian L^0_0 mode at the focus, λ_0 the beam's wavelength and L^| l|_p are the generalized Laguerre polynomials. For the modes present in Eq. (<ref>), we have L^0_0(x) = L^| 2|_0(x) = 1. Throughout this work, we assume a linearly polarized beam. Note that the saddle beam (<ref>) and the LG modes (<ref>) are normalized so that the integral of the intensity I = (cϵ_0/2)| E_s|^2 over the transverse plane gives the total power P. Evaluation of the absolute square of the field results in | E_s(r,ϕ,z)|^2 = 16P/17cϵ_0π w(z)^2exp(-2r^2/w(z)^2) ×[1+3√(2)r^2/w(z)^2cos(2ϕ+2θ)cos(2χ(z)) + 9r^4/2w(z)^4cos^2(2ϕ+2θ)], where χ(z) = arctan(z/z_R). Figure <ref> shows the transverse absolute square of the field at the focus. The obtained intensity distribution corresponds to a saddle-like profile, whose orientation in the xy plane depends on the relative phase θ between the Laguerre-Gauss l=± 2 and the Gaussian components of the beam [see Eq. (<ref>)]. The relationship between the orientation of the saddle pattern and the parameter θ is evident in Eq. (<ref>), where the azimuthal angle ϕ appears only as the sum ϕ+θ. 
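For reference, the closed-form expression above for |E_s|^2 can be evaluated numerically as in the following minimal sketch; the power, waist and wavelength values are placeholders chosen only for illustration, not parameters used in this work.

import numpy as np

def saddle_E2(x, y, z, theta, P=1.0, w0=1e-6, lam=1550e-9):
    # |E_s|^2 of the saddle beam (closed form above); the optical intensity is (c*eps0/2)*|E_s|^2.
    c, eps0 = 299792458.0, 8.8541878128e-12
    zR = np.pi * w0**2 / lam
    w = w0 * np.sqrt(1.0 + (z / zR)**2)
    chi = np.arctan(z / zR)
    r2 = x**2 + y**2
    phi = np.arctan2(y, x)
    pref = 16.0 * P / (17.0 * c * eps0 * np.pi * w**2)
    bracket = (1.0
               + 3.0*np.sqrt(2.0) * (r2 / w**2) * np.cos(2*phi + 2*theta) * np.cos(2*chi)
               + 4.5 * (r2 / w**2)**2 * np.cos(2*phi + 2*theta)**2)
    return pref * np.exp(-2.0 * r2 / w**2) * bracket

# transverse map of the saddle at the focus for one orientation theta
xs = np.linspace(-2e-6, 2e-6, 201)
X, Y = np.meshgrid(xs, xs)
E2_focus = saddle_E2(X, Y, 0.0, theta=0.0)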
Thus, as we sweep the phase θ, the saddle rotates. We can express the intensity distribution from Eq. (<ref>) in terms of Cartesian coordinates rotating with the saddle, i.e., x'=xcosθ+ysinθ and y'=ycosθ-xsinθ, where x=rcosϕ and y=rsinϕ are the stationary Cartesian coordinates, obtaining: | E_s (x',y',z)|^2 = 16P/17cϵ_0π w(z)^2exp(-2(x'^2+y'^2)/w(z)^2) ×[1+6√(2)/w(z)^2cos(2χ(z))(x'^2-y'^2) + 18/w(z)^4(x'^2-y'^2)^2]. Figure <ref> shows the cross-sections of | E_s|^2 along the three axes of the aforementioned rotating reference frame. The separation between the peaks along the x'-axis is ℓ_x = w_0/2√(16-3√(2)), and the separation between the side peaks along the y'-axis is ℓ_y = w_0/2√(16+3√(2)). §.§ Dynamical model We now model the dynamics of a dielectric particle near the beam's focus. We consider a particle with a radius much smaller than the beam's wavelength, i.e., the dipole approximation. In this case, one considers the dielectric limit, in which the optical force exerted by the electric field onto the particle is given by <cit.> 𝐅_opt = {α}/4∇| E_s|^2 +σ_ext/c𝐒, where α is the complex polarizability of the particle, σ_ext is the particle's extinction cross-section, and 𝐒 is the time-averaged Poynting vector. We omit the spin-curl force since we are interested in uniformly linearly polarized beams. The first term in Eq. (<ref>) is commonly referred to as the gradient force, while the second is known as the scattering force. In the dipole approximation, the gradient force traps a particle near the focus of an optical tweezer. The scattering force, on the other hand, pushes the particle along the optical axis. To achieve stable trapping, the scattering force must be smaller than the gradient force. For the saddle beam, the scattering force is mainly provided by the Gaussian amplitude E^LG_0,0 in Eq. (<ref>) and displaces the equilibrium position of the particle along the optical axis. We take the scattering force into account throughout our simulations of the particle dynamics. From the plots of | E_s|^2 in Figure <ref>, one sees that the saddle intensity in the transverse plane creates an unstable equilibrium point at the origin. Therefore, if a dielectric particle is placed at the focus and undergoes small perturbations, it will be pushed to one of the side peaks on the x'-axis. In analogy to electrical quadrupole and mechanical saddle traps, which can confine particles in dynamical saddle-shaped potentials <cit.>, we investigate the dynamics of a dielectric nanoparticle in the rotating optical saddle beam, i.e., by setting θ = Ω t, where Ω is the angular velocity of rotation. As we will see, provided Ω is large enough, the unstable origin is turned into a dynamically stable equilibrium point. The dynamics of a dielectric nanoparticle in the saddle becomes easier to analyze in a frame rotating about the z-axis with angular velocity Ω. Without loss of generality, | E_s|^2 in this frame is given by Eq. (<ref>). The Taylor expansion of this intensity near the origin, retaining at most quadratic terms, yields | E_s|^2≈32P/17cϵ_0π w_0^2 ×(1 + x^2/w_0^2(3√(2)-2) - y^2/w_0^2(3√(2)+2) - z^2/z_R^2). At this point, we wish to derive a stability criterion for the conservative gradient force, and for this reason, we will neglect the scattering force for the moment. Note, however, that the effects of scattering upon the nanoparticle's dynamical equilibrium are discussed in Appendix <ref> and taken into account in the dynamical simulations shown in the next sections.
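As an illustration of the saddle profile, the focal-plane intensity can be evaluated directly from the mode superposition of Eq. (<ref>). The sketch below (our own variable and function names) works at z = 0, drops the overall power normalization, and extracts the peak separation along the x'-axis numerically from the computed profile.

```python
import numpy as np

def saddle_intensity(x, y, w0=1.0, theta=0.0):
    """|E_s|^2 in the focal plane z = 0, built from the LG-mode superposition;
    the overall 16P/(17 c eps0) power prefactor is dropped."""
    r2 = x**2 + y**2
    phi = np.arctan2(y, x)
    # normalized LG_{0,0} and LG_{0,+-2} amplitudes at z = 0 (w(z) = w0, flat wavefront, zero Gouy phase)
    envelope = np.exp(-r2/w0**2) / w0
    lg00 = np.sqrt(2/np.pi) * envelope
    lg2 = np.sqrt(1/np.pi) * (2*r2/w0**2) * envelope
    field = lg00 + 0.75*lg2*(np.exp(1j*(2*phi - 2*theta)) + np.exp(-1j*(2*phi - 2*theta)))
    return np.abs(field)**2

# cross-sections along the x- and y-axes for theta = 0 (positions in units of w0)
x = np.linspace(-2.5, 2.5, 2001)
I_x = saddle_intensity(x, np.zeros_like(x))
I_y = saddle_intensity(np.zeros_like(x), x)
print(f"numerical peak separation along x: {2*abs(x[np.argmax(I_x)]):.3f} w0")
print(f"I(0,0) = {saddle_intensity(0.0, 0.0):.3f}  (a maximum along y, a minimum along x: a saddle)")
```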
To linear order in the particle's displacement, the gradient force 𝐅_g given by the first term in Eq. (<ref>) reads 𝐅_g = 16P{α}/17cϵ_0π w_0^2[ x(3√(2)-2)/w_0^2; -y(3√(2)+2)/w_0^2; -z/z_R^2 ]. Since we describe the dynamics in a rotating reference frame, we must account for the centripetal and Coriolis forces that arise in such a frame. Newton's second law gives md^2𝐫/dt^2|_S = ∑_i𝐅_i(𝐫,𝐫̇)|_S, where S stands for the inertial frame of the lab. To write this equation in a rotating reference frame S' that rotates about a vector Ω, one needs to rewrite the derivative of vectors as <cit.> d𝐀/dt|_S = d𝐀/dt|_S' + Ω×𝐀. Considering constant angular velocity, dΩ/dt = 0, we find md^2𝐫'/dt^2|_S' = ∑_i𝐅_i(𝐫',𝐫̇')|_S' - 2mΩ×d𝐫'/dt|_S' - mΩ×(Ω×𝐫'), where Ω = Ω𝐳̂. Moreover, we consider a damping force in the lab frame of the form -mγ𝐫̇, where m is the particle's mass and γ is the drag coefficient. In the rotating reference frame this damping force becomes -mγ𝐫̇' -mγΩ×𝐫'. Putting it all together, Eq. (<ref>) defines a set of three coupled differential equations describing the motion of the particle, ẍ' = (Ω^2+ω_x^2)x' - γ(ẋ'-Ω y') + 2Ωẏ' + F_x/m, ÿ' = (Ω^2-ω_y^2)y' - γ(ẏ'+Ω x') - 2Ωẋ' + F_y/m, z̈ = -ω_z^2 z - γż + F_z/m, where [ ω_x^2; ω_y^2; ω_z^2 ] = 16P{α}/17cϵ_0π w_0^2m[ (3√(2)-2)/w_0^2; (3√(2)+2)/w_0^2; 1/z_R^2 ], and F_i (i = x,y,z) represent additional force terms the particle might be subject to, such as, for example, thermal stochastic or feedback forces. For now, we consider F_i = 0 and derive a criterion for stable trapping in the saddle beam. §.§ Stability analysis We now consider motion along the transverse plane. Instead of solving for x' and y' [Note that a general solution can be found by writing the equations as a set of four coupled first-order differential equations and employing diagonalization. The result, however, is extremely cumbersome and not physically illuminating.] we seek a stability condition on the rotation frequency Ω. Regarding the longitudinal z direction in the presence of the scattering force, one can show that the particle follows simple damped harmonic motion; see Appendix <ref> for details. Consider the ansatz x' = x'_0e^λ t and y' = y'_0e^λ t. For confinement in the transverse plane, we require the values of λ to have negative or null real parts. Substituting the ansatz in Eqs. (<ref>) and (<ref>) and casting the result in matrix form, we find [ Ω^2+ω_x^2-λ^2-γλ γΩ+2λΩ; -γΩ-2Ωλ Ω^2-ω_y^2-λ^2-γλ ][ x'_0; y'_0 ] = [ 0; 0 ]. The existence of nontrivial solutions requires the above matrix to be non-invertible (vanishing determinant), which gives the characteristic equation λ^4 + 2γλ^3 + λ^2(γ^2+2Ω^2+ω_y^2-ω_x^2) + λ(2γΩ^2+γω_y^2-γω_x^2) + Ω^4 + Ω^2(γ^2+ω_x^2-ω_y^2) - ω_x^2ω_y^2 = 0, with solutions given by λ_1^± = 1/2(-γ±√(2√(a)+b)), λ_2^± = 1/2(-γ±√(γ^2-2(√(a)+c))), where a = 8Ω^2(-γ^2/2+ω_y^2-ω_x^2)+(ω_x^2+ω_y^2)^2, b = γ^2-4Ω^2+2ω_x^2-2ω_y^2, c = 2Ω^2-ω_x^2+ω_y^2. The λ_1^- root in Eq. (<ref>) is negative. On the other hand, λ_1^+ will be negative when -γ+√(2√(a)+b) < 0. Given ω_y > ω_x (see Eq. (<ref>)), this condition implies |Ω| > √((-γ^2+ω_y^2-ω_x^2+√((γ^2+ω_x^2-ω_y^2)^2+4ω_x^2ω_y^2))/2). Therefore, the first two roots are negative provided Ω is sufficiently large. We now turn to the remaining roots; as before, λ_2^- is negative, while λ_2^+ will be negative when -γ+√(γ^2-2(√(a)+c)) < 0. As it turns out, this inequality is always satisfied for γ, ω_x, ω_y > 0 and ω_y > ω_x. Hence, the stability criterion is given entirely by Eq. (<ref>).
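To get a feeling for the numbers involved, the trap frequencies and the critical rotation rate of Eq. (<ref>) can be evaluated for parameters close to those quoted later in the text (1550 nm, 500 mW, NA = 0.6, 150 nm silica sphere). The sketch below is illustrative only: the waist estimate w_0 ≈ λ/(π NA), the Clausius-Mossotti polarizability, and the drag rate at 10 mbar are our assumptions, not values taken from Table <ref>.

```python
import numpy as np

# illustrative parameters (the paper's Table 1 is not reproduced here; the values
# below follow the text: 1550 nm, 500 mW, NA = 0.6, 150 nm silica sphere)
eps0, c = 8.854e-12, 3.0e8
lam, P, NA = 1550e-9, 0.5, 0.6
a, rho, n_sio2 = 150e-9, 2200.0, 1.45          # silica radius, density, refractive index
w0 = lam/(np.pi*NA)                            # paraxial estimate of the focal waist (assumption)
zR = np.pi*w0**2/lam
m = 4/3*np.pi*a**3*rho
alpha_re = 4*np.pi*eps0*a**3*(n_sio2**2-1)/(n_sio2**2+2)   # Clausius-Mossotti polarizability
gamma = 2*np.pi*20e3                           # drag rate at ~10 mbar, order-of-magnitude placeholder

# trap frequencies of the non-rotating saddle (omega_x, omega_y, omega_z)
pref = 16*P*alpha_re/(17*c*eps0*np.pi*w0**2*m)
wx = np.sqrt(pref*(3*np.sqrt(2)-2)/w0**2)
wy = np.sqrt(pref*(3*np.sqrt(2)+2)/w0**2)
wz = np.sqrt(pref/zR**2)

# critical rotation frequency from the stability criterion
Oc = np.sqrt((-gamma**2 + wy**2 - wx**2
              + np.sqrt((gamma**2 + wx**2 - wy**2)**2 + 4*wx**2*wy**2))/2)
print(f"omega_x/2pi = {wx/2/np.pi/1e3:.1f} kHz, omega_y/2pi = {wy/2/np.pi/1e3:.1f} kHz, "
      f"omega_z/2pi = {wz/2/np.pi/1e3:.1f} kHz")
print(f"critical rotation Omega_c/2pi = {Oc/2/np.pi/1e3:.1f} kHz")
```

The exact numbers depend on the focal waist estimate, so the printed values should be read only as order-of-magnitude checks against the kHz-scale rotation rates quoted in the text.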
Considering the dynamics in the overdamped regime, the same criteria are obtained much more straightforwardly; see Appendix <ref> for more details. §.§ Dynamics simulation We have studied the dynamics of a trapped particle in the saddle beam as well as the validity of the stability criteria by numerical simulation. In doing so, we have considered both the linearized model given by Eqs. (<ref>) and the complete form of the potential given by the gradient of Eq. (<ref>) for the fixed set of experimental parameters defined in Table <ref>. In all simulations, the particle initializes in a position near the origin, where the trap is well approximated by a quadratic potential landscape. All simulations consider a stochastic external force F_i = F_th,i arising from collisions of the particle with surrounding gas molecules. This stochastic force obeys, ⟨ F_th,i(t)⟩ = 0, ⟨ F_th,i(t) F_th,j(t+τ)⟩ = 2 mγ k_B Tδ(τ)δ_ij, where k_B is the Boltzmann constant and T is the surrounding gas temperature. We employed the Runge-Kutta 4^ th-order algorithm for all simulations, where each run simulated the dynamics for a period of 10ms. We refer to Appendix <ref> for more details. To evaluate the different regimes of motion of a particle in the rotating saddle, we have considered different ratios of the trap's rotating frequency Ω to the critical angular velocity Ω_c given by Eq. (<ref>). Figure <ref> qualitatively displays three regimes of motion: a) If Ω is slow enough (Ω/Ω_c ≲ 0.25), the particle is trapped by one of the rotating side-peaks, following an approximately circular motion. As the rotation is increased, the particle lags behind the peak due to the drag force provided by the surrounding gas; b) increasing the rotation but keeping it well below the critical angular velocity, the centrifugal force as felt by the particle in the rotating frame becomes so strong that the particle is expelled away from the beam. Going higher in angular velocity c), at Ω/Ω_c ≳ 0.92, the particle is confined in the center and undergoes a complex motion around this dynamically generated equilibrium point. We note that for the parameters in Table <ref> we have Ω_c/2π≈280kHz. All trajectories are viewed in the laboratory reference frame. Figure <ref> shows an alternative visualization of the different regimes of motion. Here, the particle's last radial position ⟨ r ⟩ = ⟨√(x^2+y^2)⟩ output by the simulation is averaged over 20 runs and plotted against the Ω/Ω_c ratio. We see that for Ω/Ω_c > 1, the particle remains trapped near the center of the beam. On the other hand, for Ω/Ω_c < 1, the value of ⟨ r ⟩ blows up, indicating that the particle is unbounded (shaded region). For sufficiently small values of Ω, the particle is bound in one of the lobes of the saddle beam, remaining at a position of the intensity maxima given by l_x/2. Commonly employed in levitated optomechanics experiments as a characterization method, the power spectrum density (PSD) of the motion of a particle trapped in the rotating saddle can be analytically computed by linearizing the Eqs. of motion. This can be done both in the rotating and laboratory reference frames (see Appendix <ref> for details on the calculation) and compared to the result obtained from numerical simulations. Figure <ref> shows an example of the laboratory frame PSD, simulated taking into account the full optical potential, in comparison to the analytical prediction from linearized theory. We see the overall shape of the predicted PSD as given by Eq. 
(<ref>) is consistent with the simulation, however with a shift and additional peaks due to the nonlinear nature of the potential <cit.>. § EXPERIMENTAL PROPOSAL We now discuss an experimental scheme for generating the rotating saddle beam as well as the possibility of feedback cooling the center-of-mass motion of a trapped nanoparticle. §.§ Saddle-beam generation To generate the superposition in Eq.(<ref>) rotating with angular velocity in the MHz range, we propose the combined use of three acoustic optical modulators (AOM) and a variable spiral plate (VSP) – a liquid crystal element capable of transforming an incident circularly polarized Gaussian beam into Laguerre-Gaussian modes with l = ± 2 depending on the input's handedness <cit.>. Figure <ref> illustrates the optical setup. The half-wave plate HW_1, in conjunction with the polarizing beam splitter PBS_1, divides an initial linearly polarized laser beam into two paths. The upper path receives a frequency shift of Δ, and HW_3 corrects the polarization for proper interference at the optical tweezer (OT) site. The lower path is divided into two beams that receive distinct frequency shifts of Δ±δ and are recombined with HW_2 and PBS_2 into a single spatial mode. Since each component has orthogonal linear polarizations, they have opposite handedness after passing through the quarter-wave plate QW_1. Therefore, the VSP transforms one of the beams into a l = 2 and the other into a l = -2 LG mode. Finally, QW_2 reverts the circular polarization to linear, and the polarizer projects into a common mode to interfere with the remaining Gaussian beam. The distinct detunings received by the superposition components effectively generate a time-dependent phase, causing the overall superposition to rotate. Tuning the frequency shifts of each AOM by δ around a central shift Δ allows for fine control of the saddle's rotation speed. Coupling all three beams into a single-mode alignment fiber before inserting the VSP ensures the spatial mode matching of the rotating saddle's components. Setting the transmission:reflection ratio of HW_1 to approximately 70:30, the saddle beam can be reproduced with the proper coefficients. Considering the AOM's diffraction efficiency as 80% and the initial beam's power as 1W, roughly 260mW of power should arrive at the OT. We note that other schemes can also be considered, such as using a single AOM with a double pass. However, in practice, most schemes have severely low power efficiencies at the output. Schemes using one or two SLMs to generate the LG beams were also disfavored in favor of the VSP since the latter is a low-loss optical element. Finally, commercially available AOMs can shift the frequency of an incoming laser beam by tens of MHz around a central frequency Δ. Thus, by choosing a shift of δ = 10MHz, the saddle beam would rotate at sufficient angular velocity to have dynamical stability. §.§ Feedback cooling For most applications in high-vacuum levitated optomechanics, the ability to control the motion of the trapped particle is required. For instance, optimal control theory can successfully achieve the ground state of the particle's longitudinal center-of-mass (CoM) motion <cit.>. We thus ask the question of whether optimal control theory methods can be applied to cool the motion of a levitated nanoparticle in the rotating saddle. The issue of minimizing the energy of linear systems, as the one presented in Eq. 
(<ref>), can be effectively addressed using the optimal control policy known as the Linear Quadratic Gaussian (LQG) controller. The dynamics of a trapped particle in the rotating reference frame can be approximated by a set of linear differential equations, and the implementation of the feedback loop is highly reliant on the detection of the particle's position. In our case, the particle's positions are detected in the laboratory frame by means of homodyne detection of the trap's scattered light. One can express the detection signals by 𝐲(t)=𝐂𝐱(t)+𝐦(t), where we employ the state-space representation, in which 𝐱(t) is the state vector for the laboratory reference frame, defined as 𝐱(t)=[ x(t) y(t) z(t) ẋ(t) ẏ(t) ż(t) ]^T, and 𝐲(t) is the measurement vector. Moreover, 𝐂 is a 3× 6 matrix defined by the calibration factors of the detectors and 𝐦(t) is the measurement noise vector, a 3× 1 vector composed of independent white Gaussian noise processes satisfying ⟨𝐦(t)⟩ = 0_3× 1 and ⟨𝐦(t)𝐦(t)^T⟩=𝐌, where 𝐌 is a 3× 3 diagonal matrix. Using Eq. (<ref>), one finds the relation between 𝐱 and the state vector in the rotating frame 𝐱^', 𝐱(t)=𝐓(θ)𝐱^'(t), where 𝐓(θ) is given by 𝐓(θ)=[ 𝐑_z(θ) 0_3× 3; 𝐑̇_z(θ) 𝐑_z(θ) ], with 𝐑_z(θ) = [ cosθ -sinθ 0; sinθ cosθ 0; 0 0 1 ]. From Eq. (<ref>), we can write an expression analogous to the one in Eq. (<ref>) for the rotating reference frame, yielding 𝐲^'(t) = 𝐂^'𝐱^'(t) +𝐦^'(t), where 𝐲^'(t)=𝐑_z^T(θ)𝐲(t), 𝐂^'=𝐑_z^T(θ)𝐂𝐓(θ). The noise vector 𝐦^' has zero mean and satisfies ⟨𝐦^'(t)𝐦^'(t)^T⟩=𝐑_z^T(θ)𝐌𝐑_z(θ). Additionally, the state-space representation also allows the rewriting of the dynamics presented in Eq. (<ref>), yielding 𝐱̇^'(t)=𝐀^'𝐱^'(t) + 𝐁^'𝐮^'(t)+𝐰^'(t), with 𝐰^'(t) taking into account the stochastic forces acting upon the particle, 𝐁^'𝐮^'(t) being related to the feedback forces exerted by the controller, and 𝐀^' the state matrix, defined as 𝐀^'= [ 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1; Ω^2+ω_x^2 γΩ 0 -γ 2Ω 0; -γΩ Ω^2-ω_y^2 0 -2Ω -γ 0; 0 0 -ω_z^2 0 0 -γ; ]. Equations (<ref>) and (<ref>) form a fundamental pair in linear control theory, from which one extracts fundamental information about the system's dynamics, actuators, and detection. Such information must be properly characterized to apply LQG correctly. The optimal control law 𝐮^'∗ is then 𝐮^'∗=-𝐊_opt𝐱̂^'(t), where 𝐊_opt is the optimal controller's gain matrix and 𝐱̂^' is an estimate of the state vector returned by the Kalman-Bucy filtering method. Physically, the control law can be implemented using electrodes near the trap center <cit.>. Note that the computation of 𝐊_opt and 𝐱̂^' depends on knowledge of 𝐀^', 𝐁^', 𝐂^', 𝐰^'(t) and 𝐦^'(t). For more details regarding the estimations and optimal gain calculations, we refer to <cit.>. We performed numerical simulations to examine the effectiveness of the LQG in controlling a particle confined by the saddle beam, assuming the same particle parameters as shown in Table <ref>. Figure <ref> a) presents the simulation results for the three directions in the lab frame. We observe a significant reduction in the particle's CoM amplitude for all three directions of motion as time evolves. The motion reaches a stationary state near 0.2ms after starting the application of the control law. Figure <ref>b) shows the controller's effect on the transverse-plane motion; we see the motion amplitude increase as we apply smaller feedback gains.
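As a sketch of how the regulator half of this problem can be set up numerically, the snippet below builds the rotating-frame state matrix 𝐀^' defined above and computes the optimal state-feedback gain from the continuous-time algebraic Riccati equation. It is written in dimensionless units (frequencies in units of ω_x), assumes direct acceleration actuation on all three axes for 𝐁^', and omits the Kalman-Bucy estimator, so it illustrates the structure rather than the full LQG loop used in the simulations.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# rotating-frame state matrix A' for the state (x', y', z, x'dot, y'dot, zdot)
def A_rot(Om, wx, wy, wz, gamma):
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3, :] = [Om**2 + wx**2,  gamma*Om, 0.0, -gamma,  2*Om, 0.0]
    A[4, :] = [-gamma*Om, Om**2 - wy**2, 0.0, -2*Om, -gamma, 0.0]
    A[5, :] = [0.0, 0.0, -wz**2, 0.0, 0.0, -gamma]
    return A

# illustrative, dimensionless numbers (frequencies in units of omega_x; Om above critical)
wx, wy, wz, gamma, Om = 1.0, 1.7, 0.35, 1e-3, 2.0

A = A_rot(Om, wx, wy, wz, gamma)
B = np.vstack([np.zeros((3, 3)), np.eye(3)])   # assumed: feedback acceleration on all three axes
Q = np.eye(6)                                  # state penalty
R = 0.1*np.eye(3)                              # control penalty

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                # optimal state-feedback gain, u = -K x'
print("closed-loop eigenvalue real parts:", np.sort(np.linalg.eigvals(A - B @ K).real))
```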
Finally, note that we carried out the simulations considering the complete form of the potential, which is highly nonlinear. Even though the LQG formalism utilizes linearized dynamics, it is still effectively capable of appreciably damping the particle's motion. § CONCLUSION In conclusion, we proposed a rotating saddle-like structured beam for optical levitation experiments. When rotating above a critical velocity, the saddle beam acquires a dynamical equilibrium point capable of levitating dielectric nanoparticles in high-vacuum. The power spectrum of the center-of-mass motion presents unique features associated with the non-harmonic nature of the trap. We have proposed a method to generate the rotating saddle with a controllable rotation velocity and have shown that feedback control can be used to cool the motion of the particle. We expect the saddle beam to find applications in fundamental quantum physics experiments using levitated nano-objects, where inverted and nonlinear potentials can be used to rapidly expand and interfere with an initially localized wavepacket <cit.>. § FUNDING AND COMPETING INTERESTS This work was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq scholarship 140197/2022-2), Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ Scholarship No. E-26/200.252/2023, and E-26/202.330/2024), Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP process No. 2021/06736-5 and 2021/06823-5), the Serrapilheira Institute (grant No. Serra – 2211-42299) and StoneLab. We acknowledge support from EPSRC International Quantum Technology Network LeviNet EP/W02683X/1. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript. § THE SCATTERING FORCE To determine the stability condition for the z direction the scattering force needs to be analysed. Stable equilibrium occurs if there is a point where the scattering and gradient forces are opposite and equal in magnitude, and the derivative of the total force is negative. The Taylor expansion in Eq (<ref>) cannot be used in this analysis, since it gives a field that does not go to zero at z = ±∞. However, since we are mainly interested in the intensity of the beam along the z-axis, for x' = y' = 0, Eq. (<ref>) yields the simple result | E_s|^2(z) = 32P/17π cϵ_01/w_0^2(1+z^2/z_R^2). Thus, the gradient force along the z-axis is F_g(z) = -16P{α}/17π cϵ_0z/ w_0^2 z_R^2(1+z^2/z_R^2)^2, For a silica particle, the extinction cross-section is well approximated by <cit.>, σ_ext≈8π^3|α|^2/3ϵ_0^2λ_0^4. Therefore, the scattering force is F_s(z) = 128Pπ^2|α|^2/51λ_0^4ϵ_0^2c1/w_0^2(1+z^2/z_R^2). Making F_g(z) + F_s(z) = 0 leads to a quadratic equation: z^2 - 3λ_0^4ϵ_0{α}/8π^3|α|^2z + z_R^2 = 0. To have real solutions, the following inequality must be satisfied: (3λ_0^4ϵ_0{α}/8π^3|α|^2)^2 - 4z_R^2 > 0, where we disregarded the equality since it generates an unstable equilibrium point. Finally, by considering the particle's radius fixed, which fixes the polarizability, and using the definition of the Rayleigh range, we find a condition for w_0 to have stability along the z-direction: w_0 < √(3λ_0^5ϵ_0{α}/16π^4|α|^2). 
If the above condition is satisfied, there is a stable equilibrium position for the particle's longitudinal motion at z_eq= 3λ_0^4ϵ_0{α}/16π^3|α|^2 - 1/2√((3λ_0^4ϵ_0{α}/8π^3|α|^2)^2 - 4z_R^2). § OVERDAMPED STABILITY ANALYSIS In the overdamped regime, the equations of motion simplify to 0 = (Ω^2+ω_x^2)x' - γ(ẋ'-Ω y') + 2Ωẏ' + F_th/m, 0 = (Ω^2-ω_y^2)y' - γ(ẏ'+Ω x') - 2Ωẋ' + F_th/m, 0 = -ω_z^2 z - γż + F_th/m. Similarly to Section <ref>, let x' = x'_0e^λ t and y' = y'_0e^λ t. By substitution in equations (<ref>) and (<ref>), and neglecting the stochastic force term, we have (Ω^2+ω_x^2-γλ)x'_0 + (γΩ+2λΩ)y'_0 = 0, (Ω^2-ω_y^2-γλ)y'_0 - (γΩ+2λΩ)x'_0 = 0. Casting the equations in matrix form [ Ω^2+ω_x^2-γλ γΩ+2λΩ; -γΩ-2Ωλ Ω^2-ω_y^2-γλ ][ x'_0; y'_0 ] = [ 0; 0 ], and demanding that the matrix's determinant be equal to zero, one arrives at a quadratic polynomial for λ: aλ^2 + bλ + c = 0, where a = γ^2+4Ω^2, b = 2γΩ^2+γω_y^2-γω_x^2, c = Ω^4 + Ω^2(γ^2+ω_x^2-ω_y^2) -ω_x^2 ω_y^2. Since the roots are λ^± = -b/2a ±√(b^2-4ac)/2a and a,b>0 (see Eq. (<ref>)), they are complex with a negative real part if b^2-4ac < 0. If b^2-4ac > 0, one needs to require -b + √(b^2-4ac) < 0 so that both roots are negative. Taking the worst case, assuming that b^2-4ac > 0 in conjunction with a,b>0, leads to requiring that c > 0, i.e., Ω^4 + Ω^2(γ^2+ω_x^2-ω_y^2) -ω_x^2 ω_y^2 > 0. This condition is interesting because it will force both roots to be complex, ending up in the first case. Inequality (<ref>) leads to demanding |Ω| > Ω^+ for stability, where Ω^+ is the real positive root of the polynomial. Moreover, note that the polynomial necessarily has two real and two imaginary roots. Let Ω = ±√(u); then the roots of the quadratic equation in u are u^± = (-γ^2+ω_y^2-ω_x^2±√((γ^2+ω_x^2-ω_y^2)^2+4ω_x^2 ω_y^2))/2. Since the argument inside the square root is positive, u^+ must give the real roots of Eq. (<ref>). Therefore, the stability criterion for the overdamped case is |Ω| > √((-γ^2+ω_y^2-ω_x^2+√((γ^2+ω_x^2-ω_y^2)^2+4ω_x^2 ω_y^2))/2), which is identical to the criterion found for the complete dynamical model. § NUMERICAL SIMULATION PARAMETERS For all the simulations described in Section <ref>, the particle is initially located at 𝐫_0 = [ √(k_BT/mω_x,h^2); √(k_BT/mω_y,h^2); z_eq+√(k_BT/mω_z,h^2) ], where √(k_BT/mω_i,h^2) is the standard deviation of the particle's position i when it is trapped by a standard Gaussian beam optical tweezer (OT) (considering a harmonic approximation). The ω_i,h are the natural frequencies for a harmonic trap, given by [ ω_x,h^2; ω_y,h^2; ω_z,h^2 ] = 2P{α}/cϵ_0π w_0^2m[ 2/w_0^2; 2/w_0^2; 1/z_R^2 ]. The z_eq term is the equilibrium position along the optical axis, which is different from zero due to the scattering force. Most interestingly, its value for a Gaussian beam and the proposed saddle beam is the same, given by Eq. (<ref>) derived in Appendix <ref>. This arises because, on the optical axis, only the Gaussian E^LG_0,0 term has a non-vanishing value for the saddle beam. The initial velocity is given by the standard deviation of velocity for a particle in a harmonic trap, v_0 = √(k_BT/m), for all directions. The main motivation behind this choice of initial conditions is the idea that one could first trap a particle using a standard Gaussian beam and later switch to the saddle beam. For the beam's overall parameters, we set its wavelength at 1550 nm, the numerical aperture (NA) at 0.6, and the optical power at 500 mW.
The low NA is required to prevent the structure of the beam's intensity from collapsing to a non-paraxial Gaussian-like shape. The effect of tight focusing was numerically evaluated by computing the transmission of the saddle beam through high-NA lenses (NA > 0.8) using the angular spectrum representation formalism <cit.>, and we have observed that NA = 0.6 is a good compromise between focusing and keeping the structure of the saddle superposition. We considered SiO_2 nanoparticles with a 150 nm radius. The gas pressure was set to 10 mbar. The scattering force given by the second term of Eq. (<ref>) was included in the simulations. All simulations were performed using the Runge-Kutta 4^th order algorithm with a simulation step of 10 ns and 1,000,000 steps. § POSITION POWER SPECTRAL DENSITY We now evaluate the particle's position PSD in the rotating reference frame. We will focus on the transverse dynamics given by equations (<ref>) and (<ref>) with F_i = F_th,i obeying equations (<ref>), since the PSD for the longitudinal motion is well-known: S_zz(Ω)= 2γ_m k_B T/m[(Ω^2-ω_z^2)^2+γ^2ω_z^2]. We start by expressing the differential equations in the frequency domain, A_xx'(ω) = By'(ω) + F_th(ω)/m, A_yy'(ω) = -Bx'(ω)+ F_th(ω)/m, where the tilde symbolizes the variables in the frequency domain and we have A_x = -Ω^2-ω_x^2-ω^2-iωγ, A_y = -Ω^2+ω_y^2-ω^2-iωγ, B =γΩ-i2ωΩ. We consider that the stochastic thermal force has the same frequency spectrum in both directions of the transverse plane. Since the above equations are linear in x' and y', one can solve explicitly to find x'(ω) = χ_x(ω)F_th, y'(ω) = χ_y(ω)F_th, where χ_i(ω) are the mechanical susceptibilities in the rotating reference frame, given by χ_x(ω) = 1/m(A_y+B)/(B^2+A_xA_y), χ_y(ω) = 1/m(A_x-B)/(B^2+A_xA_y). Finally, the position PSDs in the rotating reference frame are S_x'x'(ω) = 2mγ k_BT|χ_x(ω)|^2, S_y'y'(ω) = 2mγ k_BT|χ_y(ω)|^2, where we used S_ii(ω) = |i(ω)|^2, |F_th|^2 = S_F_thF_th(ω) = 2mγ k_BT. One can use the Fourier transform of the positions in the rotating reference frame to evaluate the form of the PSDs in the laboratory reference frame. We can show that F.T.{f(t)cos(Ω t)} = 1/2(f(Ω^+) + f(Ω^-)), F.T.{f(t)sin(Ω t)} = 1/2i(f(Ω^+) - f(Ω^-)), where F.T.{·} is the Fourier transform and Ω^± = ω±Ω. Using this result with the coordinate transformation x' =xcosθ+ysinθ, y' =ycosθ-xsinθ, one can relate the Fourier transform of the positions in one coordinate frame to the other, x(ω) = 1/2(x'(Ω^+) +x'(Ω^-)) - 1/2i(y'(Ω^+)-y'(Ω^-)), y(ω) = 1/2i(x'(Ω^+) -x'(Ω^-)) + 1/2(y'(Ω^+)+y'(Ω^-)). Note that F_th(Ω^±) = F_th(ω) = constant, since the thermal force is white noise. Thus, using Eq. (<ref>), x(ω) = F_th/2( χ_x(Ω^+)+χ_x(Ω^-) + i(χ_y(Ω^+)-χ_y(Ω^-))), y(ω) = F_th/2( χ_y(Ω^+)+χ_y(Ω^-) + i(χ_x(Ω^-)-χ_x(Ω^+))). Finally, the particle's position PSD in the laboratory reference frame is given by S_xx = mγ k_BT/2|( χ_x(Ω^+)+χ_x(Ω^-) + i(χ_y(Ω^+)-χ_y(Ω^-)))|^2, S_yy = mγ k_BT/2|( χ_y(Ω^+)+χ_y(Ω^-) + i(χ_x(Ω^-)-χ_x(Ω^+)))|^2. Figure <ref> displays a panel with examples of simulated position PSDs against the theoretical prediction. The plots in the first row consider the linearized model and show a good match between theory and simulation. However, a shift in the natural vibrational frequencies and additional peaks are observed for the complete optical potential, as seen in the second-row plots.
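The laboratory-frame spectra above are straightforward to evaluate numerically. The sketch below implements the rotating-frame susceptibilities and the Ω^± = ω±Ω substitution exactly as written above; the numerical parameter values are placeholders, not those of Table <ref>.

```python
import numpy as np

kB = 1.380649e-23

def lab_frame_psds(omega, Om, wx, wy, gamma, m, T):
    """S_xx and S_yy in the laboratory frame, from the rotating-frame
    susceptibilities chi_x, chi_y evaluated at omega +/- Omega."""
    def chi(w):
        Ax = -Om**2 - wx**2 - w**2 - 1j*w*gamma
        Ay = -Om**2 + wy**2 - w**2 - 1j*w*gamma
        B = gamma*Om - 1j*2*w*Om
        den = m*(B**2 + Ax*Ay)
        return (Ay + B)/den, (Ax - B)/den          # chi_x, chi_y
    cxp, cyp = chi(omega + Om)
    cxm, cym = chi(omega - Om)
    Sxx = m*gamma*kB*T/2*np.abs(cxp + cxm + 1j*(cyp - cym))**2
    Syy = m*gamma*kB*T/2*np.abs(cyp + cym + 1j*(cxm - cxp))**2
    return Sxx, Syy

# illustrative evaluation (placeholder parameters for a ~150 nm silica sphere)
m = 3.1e-17
wx, wy = 2*np.pi*120e3, 2*np.pi*200e3
Om, gamma, T = 2*np.pi*300e3, 2*np.pi*3e3, 300.0
f = np.linspace(1e3, 1.2e6, 4000)
Sxx, Syy = lab_frame_psds(2*np.pi*f, Om, wx, wy, gamma, m, T)
print("frequency of the highest S_xx peak: %.1f kHz" % (f[np.argmax(Sxx)]/1e3))
```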
http://arxiv.org/abs/2407.13387v1
20240718105006
Optical control of multiple resistance levels in graphene for memristic applications
[ "Harsimran Kaur Mann", "Mainak Mondal", "Vivek Sah", "Kenji Watanabe", "Takashi Taniguchi", "Akshay Singh", "Aveek Bid" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
Optical control of multiple resistance levels in graphene for memristic applications aksy@iisc.ac.in aveek@iisc.ac.in ^1Department of Physics, Indian Institute of Science, Bangalore 560012, India ^2 Research Center for Electronic and Optical Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan ^3 Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan § ABSTRACT Neuromorphic computing has emphasized the need for memristors with non-volatile, multiple conductance levels. This paper demonstrates the potential of hexagonal boron nitride (hBN)/graphene heterostructures to act as memristors with multiple resistance states that can be optically tuned using visible light. The number of resistance levels in graphene can be controlled by modulating doping levels, achieved by varying the electric field strength or adjusting the duration of optical illumination. Our measurements show that this photodoping of graphene results from the optical excitation of charge carriers from the nitrogen-vacancy levels of hBN to its conduction band, with these carriers then being transferred to graphene by the gate-induced electric field. We develop a quantitative model to describe our observations. Additionally, utilizing our device architecture, we propose a memristive crossbar array for vector-matrix multiplications. Aveek Bid^1 July 22, 2024 ================= § INTRODUCTION Neuromorphic computing (NC), inspired by computing in our brain, has emerged as a new paradigm beyond von Neumann computing. NC provides a low-energy alternative to traditional von Neumann architectures, promoting sustainable computation in our modern information age. The critical hardware building blocks for NC are memristors (artificial synapses), neural processing units, and threshold switches (artificial neurons) <cit.>. A memristor is a fundamental electronic component in which the resistance of the channel depends on the charge that has flowed through it; it is a resistor with a memory. This charge-history dependence of resistance can be exploited in machine learning frameworks where computation can be carried out via cross-bar arrays, and synaptic weights (such as weights for neural networks) can be modified by a certain number of electrical pulses, leading to in-memory computing. A low-power, non-volatile memristor is essential to exploit NC's benefits fully. A prerequisite for memristic action is a platform with non-volatile doping. Chemical and electrostatic doping are the two most commonly used techniques to induce charge carriers in 2-D channels. Chemical doping is done by hetero-atom substitution and adsorption of molecular adsorbates onto graphene and other 2D materials. There are several drawbacks to this approach, including limited control over doping concentration, introduction of uncontrolled structural defects and lattice strains, unintentional impurity introduction, challenges in maintaining stability and diffusion control, and the sensitivity of the sample to processing conditions <cit.>. Electrostatic doping involves doping using an external local gate. While this technique avoids most of the disadvantages of chemical doping listed above, the magnitude of doping attainable is limited by the breakdown voltage of the gate dielectric <cit.>.
Liquid ionic gating has the potential of reaching higher doping levels than is achievable by dielectric gates but at the cost of device instability and non-scalability. <cit.> A viable alternative is optical doping of graphene on hexagonal boron nitride (hBN) substrates <cit.>. hBN has a band gap of 6 eV with several intermediate defect states that can be optically excited <cit.>. This method involves the controlled optical excitation of charge carriers from defect states of hBN and their transfer to graphene using an external electric field. This technique is reversible without adversely affecting transport mobility and defect density. In this article, we present an in-depth study of optical doping in hBN/graphene heterostructures using visible light. Our research reveals that electrons in the nitrogen-vacancy defect state in hBN can be optically excited with violet light and transferred to graphene through electrostatic gating. This technique enables high-density doping of graphene, with the doping level controlled by gate voltage or illumination time. The dynamics of this doping process, measured with millisecond temporal resolution, can be adjusted over three orders of magnitude by varying the illumination wavelength and optical power. Furthermore, we demonstrate that a graphene/hBN heterostructure can function as a memristor, exhibiting multiple resistance levels. The device's compact structure and room-temperature operation enhance its potential for scalable, room-temperature neuromorphic applications. § RESULTS Single-layer graphene devices encapsulated between thin hBN crystals were fabricated using the dry transfer method (Supporting Information, section S1). 1-D electrical contacts to the graphene were achieved by lithography and dry etching, followed by Cr/Au metallization (Fig. <ref>(a)). A back-gate voltage, V_g, tuned the charge carrier number density. Electrical transport measurements were performed using a low-frequency AC measurement technique. The Dirac point (maxima in the device resistance R) is attained at V_g=0 V (Fig. <ref>(b), solid blue line), attesting to the absence of charged impurities in the graphene channel. The optoelectronic measurements were carried out at room temperature, and the sample was illuminated by using either an LED of wavelength, λ=427 nm or a Ti Sapphire pulsed laser (80 MHz repetition rate, 100 fs pulse width). To photo-dope the device, we use the following protocol: The gate voltage is set to a desired value V_g^*, and the device is exposed to the light of wavelength 427 nm of intensity I=62 W/ m^2, till the resistance saturates to the value of resistance at Dirac point. The light is then turned off, and the gate response of the device is measured. It was found that the entire R-V_g plot shifts with the Dirac point at V_g^* (Fig. <ref>(b) – solid red line; in this example V_g^* = -4 V), establishing that the device is now electron-doped. We refer to this step as the `SET' protocol wherein the Dirac point can be set deterministically at any desired value of V_g (Fig. <ref>(c)). To `RESET,' the device is exposed to a higher light intensity I=458 W/ m^2 at V_g^*=0 V until the Dirac point shifts to V_g = 0 V. This protocol brings the Dirac point of the device back to V_g= 0 V (Fig. <ref>(b) – dotted green line). As illustrated in Fig. <ref>(b), the process can be repeated without degradation in the device characteristics. Fig. 
<ref>(d) shows the time dependence of the device's normalized longitudinal resistance R/R_D during the `SET' protocol with V_g^*=-2 V using LED of λ=427 nm and I=62 W/ m^2. Here, R_D is the resistance at the Dirac point. Upon illumination, the device resistance, R, increases rapidly with time, saturating in about 50 milliseconds to R = R_D. Exposure for a longer duration had no discernible effect on the channel resistance. Notably, upon turning off the illumination, the device's resistance remains unchanged. Fig. <ref>(e) shows snapshots of the R-V_g data in 10-second intervals. During this measurement, the light was turned on for 10 seconds with V_g^* set at -2 V. The illumination was turned off, and the R-V_g curve was measured. The process is repeated multiple times to generate the plots in Fig. <ref>(e). The intensity of the light was kept very low (I=4 W/m^2) to allow for a much slower rate of resistance change (for easier observations). One can see that the entire transfer curve shifts gradually to the left until the Dirac point reaches V_g^* = -2 V. These measurements establish that the Dirac point can be moved deterministically to any value of the gate voltage either by controlling the exposure time (Fig.<ref>(e)) or the value of V_g^* at which the device is illuminated (Fig. <ref>(c)). After setting up the SET/RESET protocol, we now show that the resistance of the sample at a specific gate voltage can be repeatedly alternated between two distinct values (Fig. <ref>(a)). The data for a single light pulse is shown in Fig. <ref>(b); we find that the `RESET' time is significantly larger than the `SET' time for electron doping. Below, we explain this observation. Moreover, adjusting the exposure time makes switching between multiple resistance values possible, as depicted in Fig. <ref>(c). In this measurement, the light was turned on for 10 seconds (gray shaded region of the timeline), during which R increased. The light was then turned off. The value of R was stable at the value it reached when the illumination was cut off. This process can be repeated to produce multiple stable resistance levels in graphene. § APPLICATION AS MEMRISTOR As shown in Fig. <ref>(c), our device has at least six stable resistance states, with more resistance states also accessible by lower optical powers. Such an hBN/graphene heterostructure with multiple stable resistance values holds significant potential for developing memristor devices for vector-matrix multiplication and machine learning (ML) applications.  <cit.>. Several such devices can be fabricated in a cross-bar array for a typical linear algebra calculation, with voltages as inputs and currents as outputs. A new vector-matrix multiplication operation can be carried out by modifying the weights of each cross-bar intersection, i.e., by changing the resistance of the channels. For hardware implementation of ML training, the synaptic weights of a neural layer (each layer will be a separate cross-bar array) can be similarly modified. We propose a cross-bar geometry schematically shown in Fig. <ref>(d) to achieve the above objectives. Its compact footprint offers distinct advantages over other structures. Each device unit (shown schematically in Fig. <ref>(d)) is individually gated; this architecture is easily achievable using modern nano-fabrication processes. Before each operation, the channel resistances of each device are initialized by a global incident optical beam and the application of distinct back-gate voltages to different devices. 
This process will `SET` the channel resistance of each device. Conversely, the `RESET` can be done by setting the desired gate voltages to zero and illuminating with a global incident optical beam. § ORIGIN OF THE PHENOMENON Photo-doping of the graphene channel requires charge transfer from hBN to graphene. The energy corresponding to violet light (λ=427 nm) is 2.9 eV, much lower than the band gap of hBN (≈ 6 eV), precluding photo-excitation of carriers from the valence band of hBN. We also find that the graphene channel remained undoped in a device with only the top hBN flake and without the bottom hBN flake upon using the same protocol described above (Supporting Information, section S3). This study confirms that only the bottom hBN was responsible for the photodoping effect. Based on these observations, we sketch out a possible scenario below that explains all our experimental findings. Fig. <ref>(a) is a schematic of the energy alignment of the bottom hBN and the graphene without photo-excitation and at V_g=0. Several mid-gap states in hBN can act as electron donors. Of these, the one most relevant for us is the defect state of nitrogen vacancies (marked as ℰ_N in Fig. <ref>). This level can have stable charge states of +1 or 0 <cit.>. A negative gate voltage V_g = V_g^* dopes the graphene channel with holes and creates an electric field E_ext directed from graphene into the hBN (Fig. <ref>(b)), leading to band bending. Illuminating the device with violet light excites electrons from ℰ_N to the conduction band ℰ_C of hBN. These electrons are funneled to the graphene channel under the influence of the electric field. The holes left at ℰ_N in the hBN generate an electric field E_i in the direction opposite to E_ext (Fig. <ref>(c)). The electron transfer process continues until the net electric field between graphene and hBN, E_net = E_ext-E_i, becomes zero. Simple electrostatic arguments show that the effective number density in graphene becomes zero at this point, which manifests as a shift of the charge neutrality point to V_g^* (Fig. <ref>(d)) and constitutes the 'SET' protocol. On reducing V_g to zero, graphene draws negative charges from the metal contacts, making the net device charge neutral (Fig. <ref>(e)). This charge configuration generates an electric field E_i directed from the hBN to graphene. The consequent band bending and the fact that ℰ_N>ℰ_D forbid electron transfer back from graphene to the hBN; graphene remains negatively charged, and hBN is positively charged (Fig. <ref>(e)). This energy barrier to the back-transfer of electrons from graphene to hBN explains the long-term charge retention in graphene after the photodoping is completed. Applying V_g^*=0 V with simultaneous exposure to light erased the doping, bringing the graphene's Dirac point back to V_g=0 V. Excess electrons need to be removed from graphene and transferred back to hBN for this to happen. Note, however, that this process seems energetically unfavorable as ℰ_N > ℰ_D at the interface. The physical mechanism leading to this charge-neutralization of graphene is unclear. We propose a phenomenological scenario in which this `RESET' process is a two-step process involving (1) the transfer of electrons from the valence band of hBN to its mid-gap states due to optical excitation and (2) subsequent electron transfer from graphene to the empty valence band states of hBN due to the electric field (Fig. <ref>(f)).
Consequently, the `RESET' process (involving electron transfer from graphene to hBN) is much slower than the `SET' process for electron doping. Device-level simulations are required to verify whether the above scenario correctly captures the doping erasure process. Next, we used a tunable pulse laser to study the wavelength dependence of the doping time t_s, which we define as the time taken for the resistance to saturate to R=R_D on exposure to light at a fixed V_g^*. Fig. <ref>(a) plots t_s versus the laser photon energy for a constant laser power P=70 nW and V_g^* = -3 V. It shows a minimum in t_s around E=2.8 eV. This photon energy corresponds to optical absorption by the nitrogen-vacancy defect level in hBN, `ℰ_N,' lending credence to our understanding of the photodoping process <cit.>. The intensity- and gate-dependent measurements were done using an LED source (λ=427 nm), and the time constant τ was extracted by fitting the number density vs. time curve to equation (S1) (Fig. S1). Intensity-dependent measurements at a fixed V_g^* showed that τ is inversely proportional to the light intensity (Fig. <ref>(b)). This dependence is understandable, as an increase in the intensity of photons leads to more free charges being produced in hBN, reducing the time taken to dope (see Supplementary Information for a detailed derivation). For measurements performed at a fixed intensity of light, the time constant to dope should increase with an increase in the magnitude of V_g^* (equivalently, of E_ext); measurements confirm this (Fig. <ref>(c)) (see Supplementary Information for a derivation). § CONCLUSIONS To summarize, we demonstrate reversible control of the Dirac point in the graphene-hBN heterostructure to encode resistance values via a combined optical-electrical stimulus. The device's resistance can be modified by varying the gate voltage in the presence of incident optical power or by fixing the gate voltage and illuminating the device with multiple optical pulses. The switching time can be tuned by the incident light wavelength (Fig. <ref>(a)), light intensity (Fig. <ref>(b)), or the gate voltage (Fig. <ref>(c)), providing tremendous tunability of the properties of the device. The time taken to electron dope is much shorter than that for hole doping, and further experiments need to be done to make them comparable. It should be possible to control the switching time by modifying the defect density in hBN using electron irradiation or annealing processes. The ability to photoelectrically `SET,' `READ,' and `RESET' multiple stable and non-volatile resistance states of the device makes it ideal for use as a memristor. § METHODS §.§ Device Fabrication The devices were fabricated using the dry transfer technique <cit.>. Single-layer graphene (SLG) and hexagonal boron nitride (hBN) flakes were mechanically exfoliated onto Si/SiO_2 substrates. The hBN flakes had a thickness of 25-30 nm. Electron beam lithography was used to define electrical contacts. This was followed by etching with a mixture of CHF_3 (40 sccm) and O_2 (10 sccm). The metallization was done with Cr/Au (5 nm/60 nm) to form the 1D electrical contacts with SLG. §.§ Measurements All electrical transport measurements were performed at room temperature using a low-frequency AC measurement technique. For low-temperature measurements, the sample was cooled down in a cryostat to 4.7 K.
For optoelectronic measurements, the sample was illuminated using either an LED of wavelength λ=427 nm or a Ti Sapphire pulsed laser (80 MHz repetition rate, 100 fs pulse width). Acknowledgement A.B. acknowledges funding from the Department of Science & Technology FIST program and the U.S. Army DEVCOM Indo-Pacific (Project number: FA5209 22P0166). K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790, 20H00354, and 21H05233). A.S. acknowledges funding from Indian Institute of Science start-up grant, DST Nanomission CONCEPT (Consortium for Collective and Engineered Phenomena in Topology) grant and project MoE-STARS-2/2023-0265. M.M. acknowledges the Prime Minister’s Research Fellowship (PMRF). Data availability The authors declare that the data supporting the findings of this study are available within the main text and its supplementary Information. Other relevant data are available from the corresponding author upon reasonable request. Author Contributions H.K.M., M.M., V.S., A.S., and A.B. conceptualized the study, performed the measurements, and analyzed the data. K.W. and T.T. grew the hBN single crystals. All the authors contributed to preparing the manuscript. Competing interests: The authors declare no Competing Financial or Non-Financial Interests. Supporting Information Supporting information contains detailed discussions of the (S1) device fabrication, (S2) dependence of τ on I and V_g^*, (S3) top or bottom hBN responsible for doping, (S4) doping at low temperatures, and (S5) minimum detectable power. § SUPPLEMENTARY MATERIALS § DEVICE FABRICATION The devices were fabricated using the dry transfer technique <cit.>. Single-layer graphene (SLG) and hexagonal boron nitride (hBN) flakes were mechanically exfoliated onto Si/SiO_2 substrates. The hBN flakes had a thickness of 25-30 nm. The SLG flakes were identified from optical contrast under a microscope and later confirmed with Raman spectra. Electron beam lithography was used to define electrical contacts. This was followed by etching with a mixture of CHF_3 (40 sccm) and O_2 (10 sccm). The metallization was done with Cr/Au (5 nm/60 nm) to form the 1D electrical contacts with SLG. § DEPENDENCE OF Τ ON I AND V_G^* As discussed in the main manuscript, illuminating the device with the gate voltage held at V_g = V_g^* leads to the shift of the Dirac point to V_g^*. Consider the graphene channel with the gate voltage set to V_g^* before turning the light on. The channel is hole-doped to a value n_0 = C_gV_g^*/e. Here, C_g is the capacitance of the bottom hBN layer. On turning on the light, the hole density n decreases exponentially with time with a time constant τ (Fig. <ref>): n(t)=n_0e^-t/τ + n_d, where n_d is the residual charge density at the Dirac point. From Eqn. <ref>, we get: δ n(t)=n(t)-n(0) =n_0(e^-t/τ-1). It follows that the areal number density of electrons excited in time t from the N-vacancy mid-gap state to the conduction band of hBN is: δ n(t)= Iβ t. Here I is the number of photons incident on the device per unit area per unit time, and β is the efficiency of the electron excitation process in hBN. For simplicity, we assume that all the electrons produced in hBN are instantly swept to graphene by the process explained in the main manuscript. Combining Eqn. <ref> and Eqn. <ref>, we get n_0 (e^-t/τ-1)=Iβ t. For t/τ<<1, this reduces to -n_0t/τ=Iβ t, or τ=-C_gV_g^*/(eIβ). Eqn. (<ref>) implies that the time constant for doping should be directly proportional to V_g^* and inversely proportional to I.
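The extraction of τ described above amounts to fitting the exponential form n(t)=n_0e^-t/τ + n_d to a measured trace and converting τ into β through the relation just derived. A minimal sketch follows; the gate capacitance, photon flux, and the synthetic data are placeholders used only to illustrate the fit, not values from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def n_of_t(t, n0, tau, nd):
    """Hole-density decay during illumination: n(t) = n0*exp(-t/tau) + nd."""
    return n0*np.exp(-t/tau) + nd

# placeholder constants: ~1e-3 F/m^2 for ~30 nm hBN, photon flux in m^-2 s^-1
e, Cg = 1.602e-19, 1.0e-3
Vg, I_ph = -5.0, 1e20

# measured (here: synthetic) n(t) trace with some noise, in m^-2
t = np.linspace(0, 0.5, 200)
n_true = n_of_t(t, Cg*abs(Vg)/e, 0.08, 5e14)
n_meas = n_true + 2e14*np.random.randn(t.size)

popt, _ = curve_fit(n_of_t, t, n_meas, p0=(n_meas[0] - n_meas[-1], 0.1, n_meas[-1]))
n0_fit, tau_fit, nd_fit = popt
beta = -Cg*Vg/(e*I_ph*tau_fit)     # excitation efficiency from tau = -Cg*Vg*/(e*I*beta)
print(f"tau = {tau_fit*1e3:.1f} ms, beta = {beta:.2e}")
```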
Below, we probe the I and V_g^* dependence of τ. We also get the values of β. Intensity dependence: For these measurements, the gate voltage is kept constant at V_g^*=-5 V, and data are collected for different values of I. Some representative plots are shown in Fig. <ref>(a). The data are fitted using Eqn. (<ref>) to extract the values of n_d, n_0 and τ. Fig. <ref>(b) is the plot of τ versus I. A double log fit yields τ=10^23.4 / I establishing τ to be inversely proportional to I and matching the prediction of Eqn. (<ref>). From the fit, we get β in the range 1.1×10^-6 - 1.6×10^-6, the data are plotted in Fig. <ref>(c). Gate Voltage dependence: For this set of measurements, the light intensity is kept fixed at I = 137 W/m^2. Data are collected for different values of V_g^* (Fig. <ref>(d)). The data are fitted using Eqn. (<ref>) to extract the values of τ (Fig. <ref>(e)). The dotted line is a linear fit to the data. The excellent fit establishes that τ∝ -V_g^*, matching the prediction of Eqn. (<ref>). The slope of the curve (τ/V_g^*=-C_g/(eIβ)) yields β to lie in the range of 1.9×10^-6 - 2.1×10^-6 (Fig. <ref>(f)). Given the approximations made in deriving Eqn (<ref>), its experimental verification, as shown in Fig. <ref>, is remarkable. § TOP OR BOTTOM HBN RESPONSIBLE FOR DOPING In our heterostructure, there are two hBN flakes, one at the top and one at the bottom of the graphene. The discussion in the main manuscript assumes that the top hBN does not play any role in the doping process; note that all measurements reported in this work were performed with the back gate bias. To confirm this, we fabricated a device with only top hBN and illuminated it with the light of λ=427 nm at a back-gate voltage of V_g=-5 V. As shown in Fig. <ref>, the shift in Dirac point, even after prolonged exposure of one hour with high-intensity light, is very small, significantly less than the doping effect observed in our original device, which was -2 V in 50 ms. Hence, the contribution from the top hBN is negligible, and the bottom hBN is responsible for the doping effect. § DOPING AT LOW TEMPERATURES Quantum transport measurements are performed at low temperatures. We tested the compatibility of our doping process with low-temperature measurements. The sample was doped at room temperature for the low-temperature experiments and then cooled down in a cryostat. As shown in Fig. <ref>(a), doping was retained at low temperatures, and the R-V_g response of the sample remained unchanged even after 15 hours when kept cool. By contrast, when the doping was done at lower temperatures, we found the device did not retain the doping, as shown in Fig. <ref>(b). Hence, the doping should be done at room temperature, and then the device can be cooled down. An important application of the photodoping process discussed in this work is in accessing high-energy regions of the bands in graphene, which can not be reached by traditional electrostatic doping. Using this method, we doped an hBN/SLG moiré device to electron and hole number densities of n=2.5×10^16 m^-2 as shown in Fig. <ref>(c,d). We can access the tertiary Dirac point (n=6.8×10^16 m^-2) on the hole side; this was inaccessible in previous measurements due to gate-leakage during standard electrostatic gating. § MINIMUM DETECTABLE POWER The device was illuminated with different powers of light of λ = 410 nm at negative gate voltage V_g=-3 V, and the corresponding resistance versus time curves were recorded as shown in figure Fig. <ref>. 
The illumination power (P) was measured using a power meter. We could observe a change in the resistance for P as low as 0.7 nW.
http://arxiv.org/abs/2407.13414v1
20240718113733
Conservation of the passband signal amplitude using a filter based on the Fast Fourier Transform algorithm
[ "Flavio Dalossa Freire", "Isabel Gebauer Soares" ]
math.NA
[ "math.NA", "cs.NA", "65T50", "G.1.2" ]
DeepClair: Utilizing Market Forecasts for Effective Portfolio Selection Jaewoo Kang July 22, 2024 ======================================================================= § ABSTRACT In this work, we propose an algorithm for a filter based on the Fast Fourier Transform (FFT), which, due to its characteristics, allows for an efficient computational implementation, ease of use, and minimizes amplitude variation in the filtered signal. The algorithm was implemented using the programming languages Python, R, and MATLAB. Initial results led to the conclusion that there was less amplitude loss in the filtered signal compared to the FIR filter. Future work may address a more rigorous methodology and comparative assessment of computational cost. § INTRODUCTION Digital filtering is an essential technique in signal processing, used to remove noise, extract relevant information, and improve data quality in various applications, from digital communication to biomedical signal analysis. Digital filters, such as low-pass, high-pass, band-pass, and band-stop filters, play a crucial role in signal manipulation to meet specific frequency requirements. However, despite their wide applications and advantages, digital filters face inherent challenges that affect the accuracy and quality of the filtered signal. One of the most notable problems is amplitude loss, even if minimal, which can occur over time. This loss can compromise the signal's integrity, leading to a reduction in the fidelity of the processed signal. Additionally, digital filters may not be perfect in rejecting unwanted frequencies, allowing noise or interference components to pass through, which can further degrade the signal of interest. The need to address these limitations is fundamental to advancing the effectiveness of digital filters. Improvements in filter design techniques, as well as the development of adaptive algorithms, are necessary to minimize amplitude loss and enhance unwanted frequency rejection. This article proposes an algorithm to mitigate the problem of amplitude loss in the passband of the digitized signal to be filtered. § MAIN TYPES OF FILTERS ACCORDING TO FREQUENCY INTERVAL The main types of digital filters include: * Low-Pass: Allows the passage of low frequencies and attenuates high frequencies. It is widely used in various applications, such as signal smoothing and high-frequency noise removal. However, the low-pass filter faces significant challenges, such as amplitude loss at cutoff frequencies and the inability to completely reject unwanted frequencies close to the cutoff frequency. * High-Pass: Allows the passage of high frequencies and attenuates low frequencies. Used in applications that require the removal of low-frequency components, such as in audio signals to eliminate background noise. * Band-Pass: Allows the passage of a specific range of frequencies and attenuates frequencies outside this range. Essential in communication systems to isolate signals of interest in an occupied frequency spectrum. * Band-Stop: Attenuates a specific range of frequencies, allowing the passage of frequencies outside this range. Used to eliminate interference or noise at specific frequencies. Among these, the low-pass filter is often considered the foundation, as other types can be developed as variations of it. However, despite its importance and wide application, the low-pass filter is not without problems. 
One of the main challenges is the amplitude loss at frequencies close to the cutoff frequency, which can affect the accuracy of the processed signal. Additionally, the filter's phase response can introduce distortions in the signal, particularly in time-critical signals. Another significant problem is the inaccuracy in the attenuation of unwanted frequencies, where noise or interference components may still be present after filtering, compromising signal quality. § THE ALGORITHM The result of digitizing the signal is a data set, a vector that contains the respective magnitudes collected over time. Thus, such a numerical vector can be subjected to the Fast Fourier Transform (FFT) algorithm <cit.>. The basis of the algorithm proposed in this article is the FFT and its inverse, the Inverse Fast Fourier Transform (IFFT) algorithm. These algorithms, FFT and IFFT, are implemented in most of the programming languages commonly used in data processing and scientific computing. Here, we will cover Python, R, and MATLAB for implementing the proposed algorithm. The algorithm described here has interesting properties, it is very simple, addresses the problem of amplitude loss in the passband of the filtered signal, and is computationally efficient as it is based on the FFT. The algorithm consists of three steps: * Apply the FFT to the data set of the digitized signal (moving to the frequency domain). * Remove the undesired frequencies or set the frequencies that should remain in the data array transformed to the frequency domain, skipping its first position. * Apply the IFFT to the updated array, moving back to the time domain. In step 1, it is important to take the maximum number of sample points (signal measurements in time) to complete an integer number of cycles, thus mitigating the leakage problem inherent to the FFT. The leakage problem is minimized in large sample sizes in high-frequency signals, even with non-integer numbers of cycles, but it becomes more important in smaller samples and lower frequencies. In step 2, it is crucial not to use mirroring or flipping methods to zero out the undesired frequencies in the transformed data vector to ensure that there are no other alterations to the original signal beyond the elimination of those undesired frequencies. It is also important to note that step 2 can be used to introduce any values in any positions of the transformed vector. In the examples presented here, zeros will be introduced in the positions of the frequencies to be filtered out. The FFT algorithm returns a vector of the same size as the original data. For an even number of elements, N, this vector returned by the FFT will have a structure as schematized in <ref>, where the first element and the N/2+1 element are real numbers, and the rest are complex numbers. Among these complex numbers, there is a symmetry of conjugate pairs. The second element is conjugate with the last one, the third with the second last, and so on. For an odd number of elements, N, this vector returned by the FFT will have a structure as schematized in <ref>. In this case, the first element is a real number and the rest are complex numbers. Among these complex numbers, there is a symmetry of conjugate pairs similar to the even N case. The second element is conjugate with the last one, the third with the second last, and so on. Each element of the vector transformed to the frequency domain has a correspondence in the time domain. 
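The conjugate-pair structure described above is easy to verify directly with numpy's FFT; the short check below covers one even and one odd length N (illustrative only).

```python
import numpy as np

# For a real signal of length N, element i of the FFT output is the complex
# conjugate of element N - i (i = 1, ..., N-1), and element 0 is purely real.
for N in (8, 9):
    x = np.random.randn(N)
    X = np.fft.fft(x)
    pairs_ok = all(np.allclose(X[i], np.conj(X[N - i])) for i in range(1, N))
    print(f"N = {N}: first element purely real: {abs(X[0].imag) < 1e-12}, "
          f"conjugate pairs hold: {pairs_ok}")
```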
The information contained in these elements can even be manipulated to obtain different results in the response vector returned by the inverse Fourier transform (IFFT). For example, a manipulation of the first element of the vector returned by the FFT results in a change in the intercept (the mean) of the vector reconverted to the time domain. A multiplication by two of the imaginary components in the already filtered vector, returned by the FFT, results in double the amplitude of the wave represented by the corresponding vector transformed by the IFFT. Thus, we can conclude that the first element of the vector returned by the FFT correlates with the mean of the data in the vector resulting from the IFFT, and information about the wave amplitude in the time domain is contained in the imaginary components of the fast transformed vector. Thus, in step 2, the first element should always be disregarded to symmetrically zero out the frequencies in the transformed signal, starting from the second element. This is done by skipping the first element of the transformed vector and zeroing out the others whose indices correspond to the unwanted frequencies. These indices are obtained simply by taking the positions of the unwanted frequencies, or the extremes of their interval, in the corresponding frequency vector. In this way, we have a shifted symmetry starting from the second element. If the element in the second position is to be zeroed, its conjugate pair in the last position must also be zeroed. Similarly, if the element in the third position is assigned a value of zero, its conjugate pair in the second last position must also be zeroed, and so on. Therefore, the algorithm works by assigning values to element i and to its complex conjugate N-i, considering i as the element index, N as the total number of elements, and 0 as the index of the first position. We present examples of implementing the algorithm in the programming languages Python, R <cit.>, and Matlab <cit.>. We also provide a suggestion in pseudo-code. Since our sample of signal points is centered at zero, in these codes we zero out the first element without worrying about changes in the intercept. However, in signals not centered at zero, this element should be preserved or restored to its original value before step 3 of the algorithm. Additionally, the suggestions presented here are not aimed at maximum computational efficiency but at illustrating the implementation of the algorithm with the best possible readability. In this article, all comparisons and applications of the proposed algorithm, the option of shifted symmetry was used, argument shift_sim. The implemented functions have the option not to use shifted symmetry, considering the reader’s interest in comparing results using the example code. § PSEUDO-CODE SUGGESTION § CODE EXAMPLES Here, we present examples of the implementation of the aforementioned algorithm in the Python, R, and MATLAB programming languages. §.§ Python Example §.§ R Example §.§ MATLAB Example § DISCUSSION AND RESULTS §.§ Signal Amplitude: Filtered versus Theoretical The R function presented above was applied to a data set representing a signal generated by simulation, summing 5 sine waves with frequencies 3, 10, 20, 40, and 80 Hz, amplitudes 1, with a sampling frequency of 1770 Hz over an interval of 1 second, resulting in 1777 simulated measurements. White noise with a normal distribution, mean 0, and standard deviation 1 was added to these measurements, <ref>. 
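To tie the simulated test signal just described to the filter, a short usage sketch follows (using the fft_bandpass function from the previous listing). The component frequencies, amplitudes, sampling rate, noise level, and the 39 to 41 Hz band are taken from the text; the sample count 1777 quoted there does not equal fs·T, so the sketch simply uses fs·T = 1770 samples.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 1770.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)                      # 1770 samples; integer cycles -> minimal leakage
freqs_hz = [3.0, 10.0, 20.0, 40.0, 80.0]             # unit-amplitude components
clean = sum(np.sin(2 * np.pi * f * t) for f in freqs_hz)
noisy = clean + rng.normal(0.0, 1.0, t.size)         # white noise, mean 0, sd 1

filtered = fft_bandpass(noisy, fs, 39.0, 41.0)       # 39-41 Hz pass band
theory = np.sin(2 * np.pi * 40.0 * t)                # theoretical 40 Hz component
rmse = np.sqrt(np.mean((filtered - theory) ** 2))
print(f"RMSE vs. theoretical 40 Hz component: {rmse:.3f}")
```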
Next, the amplitudes of the signals filtered by frequency intervals and by specific values were compared with the theoretical components of the simulated signal, and the amplitude variation in each case was observed. Furthermore, the results for band-pass filtering and single-frequency filtering were compared. A slight decrease in the amplitude of the filtered signal at 3 Hz was observed in the case of this simulated signal with its respective noise <ref>. This variation tends to decrease with an increase in the passband frequency, as shown in Figures <ref> to <ref>. §.§ Signal Amplitude: FFT Filter versus FIR Equiripple Filter Finally, the implementation in MATLAB of the algorithm-based filter proposed here (FFT Filter) was compared with a design using an equiripple FIR filter (FIR Filter). The passbands chosen for both filters were 3 to 80 Hz and 39 to 41 Hz. Thus, the accuracy of frequency domain cutoffs (<ref>) and (<ref>), adherence to the theoretical signal, and amplitude variation in the time domain were observed for these two bands, 3 to 80 Hz and 39 to 41, Hz (<ref>) and (<ref>). Below, in MATLAB code, we have the parameters of equiripple FIR band-pass filters, used for 3 to 80 Hz band and 39 to 41 Hz band. In <ref>, the results for the root mean square error (RMSE) for both filters, relative to the theoretical signal for each considered band, are shown. * MATLAB code for FIR filter, 3 to 80 Hz * MATLAB code for FIR filter, 39 to 41 Hz One of the reasons for the better performance of the FFT filter using shifted symmetry can be seen in <ref> and <ref>. The proposed algorithm allows for a perfectly rectangular window, with a straight and vertical cut at the chosen limits for the signal in the frequency domain. Note that there are no residual frequencies beyond the cut-off limits of the FFT filter using shifted symmetry. Thus, there is no need for a trade-off between the precision in eliminating unwanted frequencies and maintaining the amplitude of the filtered signal. § CONCLUSION It is observed that the RMSE value, used as a metric for comparing the FIR Filter and the FFT Filter, was lower in the case of the FFT filter implemented by the algorithm proposed here. The amplitude obtained in the 39 to 41 Hz band was also higher for the signal processed by the FFT filter, as seen from inspection of the graph in (<ref>). This finding, combined with a better RMSE value, already allows us to conclude that the amplitude loss was less for the FFT filter. Nevertheless, for a more rigorous demonstration in future work, it is necessary to use another metric better suited to measure amplitude variation across a larger number of data simulations, especially varying the generation of white noise, both in comparison with the theoretical signal and between filters. Therefore, future work may address a more rigorous methodology and comparative assessment of computational cost, including different scenarios and a more detailed description of the theory behind the proposed algorithm. unsrt
http://arxiv.org/abs/2407.12130v1
20240716193544
Spin polarization of fermions at local equilibrium: Second-order gradient expansion
[ "Xin-Li Sheng", "Francesco Becattini", "Xu-Guang Huang", "Zhong-Hua Zhang" ]
hep-th
[ "hep-th", "hep-ph", "nucl-th" ]
http://arxiv.org/abs/2407.12520v1
20240717130535
Controlling directional propagation in driven-dissipative 2D photonic lattices
[ "Bastián Real", "Pablo Solano", "Carla Hermann-Avigliano" ]
physics.optics
[ "physics.optics" ]
bastianreal71@gmail.com Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, Santiago, Chile Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Concepción, Concepción, Chile Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, Santiago, Chile Millennium Institute for Research in Optics - MIRO, Santiago, Chile § ABSTRACT Controlling light propagation in photonic systems fosters fundamental research and practical application. Particularly, photonic lattices allow engineering band dispersions and tailor transport features through their geometry. However, complete controllability requires external manipulation of the propagating light. Here, we present a resonant excitation scheme to observe quasi-1D and uni-directional propagation of light through the bulk of two-dimensional lattices. To this end, we use the highly anisotropic light propagation exhibited at the energy of saddle points in photonic bands. When multiple drives with judicious amplitudes and phases are tuned to such energy, interference effects between these drives and photonic modes result in controllable directional propagation through the bulk. Similarly, one can formed localized states with controllable localization degrees. We illustrate these effects with driven-dissipative photonic lattices. Our work highlights the importance of external drives for dynamically controlling directional light transport in lattices, a relevant feature for all-optical routing and processing in photonics. Controlling directional propagation in driven-dissipative 2D photonic lattices Carla Hermann-Avigliano July 22, 2024 ============================================================================== Introduction.- Photonic lattices enable the manipulation of light propagation properties. The band structure of a photonic lattice determines the allowed and forbidden frequencies or energies, consequently determining which light waves propagate in certain directions or get confined in a specific volume <cit.>. Photonic lattices are implemented in several experimental platforms with a variety of different periodic geometries, showing their relevance for engineering band structures <cit.>. For example, one- and two-dimensional lattices with a single photonic band have been implemented using coupled waveguides to manipulate light in both the linear and nonlinear regime <cit.>. Also, the outstanding dispersive bands of graphene, together with its transport features, have been probed using this same platform <cit.>, as well as lattices of coupled microwave resonators <cit.> and coupled micropillars <cit.>. Interestingly, the photonic realm has been a fruitful environment for testing topological phases, whose real-space manifestation is the existence of protected localization and/or unidirectional propagation on the edges <cit.>. More recently, theoretical works have predicted exotic quantum dynamics when quantum emitters are coupled to photonic lattices and tuned within the bands, showing directional <cit.> and multi-directional <cit.> emission, which has opened up a new route towards harnessing quantum light, even in topologically nontrivial lattices <cit.>. However, coupling many indistinguishable quantum emitters to 2D photonics lattices is rather challenging, and, to the best of our knowledge, this has yet to be experimentally implemented. 
A more experimentally accessible way to obtain the same phenomenology is by using a purely optical platform, i.e., dissipative lattices with resonant laser excitations (external drives) <cit.>. In this work we show how external drives can control light propagation in 2D lattices, from on-demand directional radiation to localization. As a test bed, we use a square lattice resonantly driven by several pumps at the middle-band energy, where the density of states exhibits a van Hove singularity. We find a unitary-driving cell (UDC) that allows observing quasi-1D propagation through the bulk along different directions. An extra control drive modifies the UDC leading to a phase-controlled highly directional propagation through the bulk. Moreover, constructive interference takes place when considering two UDCs and, thus, forming localized states. We finalize showing how to implement these phenomena in other 2D lattices, using a triangular lattice as an example. The experimental realization of our findings could facilitate the external control of all-optical processing of information, such as light routers and logic operations <cit.>. Tight-binding model and dynamics equation.- We consider a square lattice of N coupled photonic resonators, as Fig. <ref>(a) shows. The light in a resonator can only hop to its nearest-neighbor, thus, the Hamiltonian of this lattice is given by H_sq=-∑_𝐧,𝐦J_𝐦,𝐧a^†_𝐦a_𝐧+, where a^†_𝐧 (a_𝐧) is the bosonic creation (annihilation) operator in the resonator at the position 𝐧=(n_xd,n_yd) (being d the inter-resonator distance), J_𝐦,𝐧 is the nearest-neighbor coupling for 𝐦≠𝐧, and the on-site energy of resonator mode for 𝐦=𝐧. Considering the discrete Fourier transform a^†_𝐧=1/√(N)∑_𝐤e^i𝐤·𝐫_𝐧a^†_𝐤 for a homogeneous lattice, i.e., J_𝐦,𝐧=J and J_𝐧,𝐧=E_0, the lattice Hamiltonian can be written as H_sq=∑_𝐤E(𝐤)a^†_𝐤a_𝐤, where E(𝐤)=E_0-2J[cos(k_xd)+cos(k_yd)] is the lattice band. Figure <ref>(b) presents a density plot of this photonic band covering the energy range [E_0-4J,E_0+4J]. Interestingly, at the iso-energy E=E_0, there are saddle points where the density of states diverges (van Hove singularity) <cit.>. At these points, the group velocity exhibits both zero and its highest values. Specifically, 𝐯_g(𝐤)=∇ E(𝐤)=2J(sin(k_xd),sin(k_yd)), which gives that the maximal velocity is |𝐯_g|=2J at (k_x,k_y)=π/d(0.5,± 0.5) and (k_x,k_y)=π/d(-0.5,±0.5), and the minimum one, |𝐯_g|=0, at (k_x,k_y)=π/d(0,± 1) and (k_x,k_y)=π/d(± 1,0). Arrows in Fig. <ref>(b) represent the amplitude and direction of the group velocity for several 𝐤 vectors at E=E_0. It is worth highlighting that the extremely anisotropic group velocity in 𝐤-space at this iso-energy leads to a highly anisotropic light propagation in real space and, also, a non-Markovian dynamics for quantum emission <cit.>. For any other energy, the group velocity is less anisotropic, whereas it is completely isotropic near E=E_0 ± 4J. To benchmark this highly anisotropic propagation, we also consider that the resonators can undergo radiative losses and be driven to a specific energy by an external resonant laser. 
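As a purely numerical illustration of the dispersion analysis above, the following Python/NumPy sketch evaluates the square-lattice band and its group-velocity field and samples the k-points on the iso-energy E = E_0; the grid resolution and the choice J = d = 1, E_0 = 0 are arbitrary.

```python
import numpy as np

J, d, E0 = 1.0, 1.0, 0.0
k = np.linspace(-np.pi, np.pi, 401) / d
kx, ky = np.meshgrid(k, k, indexing="ij")

E = E0 - 2 * J * (np.cos(kx * d) + np.cos(ky * d))            # square-lattice band
vg = 2 * J * d * np.stack([np.sin(kx * d), np.sin(ky * d)])   # group-velocity field

# k-points on the iso-energy E = E0 hosting the saddle points (van Hove singularity)
shell = np.isclose(E, E0, atol=0.02 * J)
speed = np.linalg.norm(vg[:, shell], axis=0)
# speeds on this shell range from ~0 at (pi/d, 0), (0, pi/d) up to their maximum
# at (+-pi/2d, +-pi/2d), i.e. propagation at this energy is highly anisotropic
print(f"{shell.sum()} shell points, group speed in [{speed.min():.2e}, {speed.max():.2f}]")
```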
Our theoretical description closely follows experimental implementations with lattices of coupled micropillar <cit.>, where temporal dynamics of the light in the square lattice is well modeled by a driven-dissipative set of coupled equations <cit.>: iħ∂ψ_𝐧/∂ t=(E_0-iγ)ψ_𝐧+∑_𝐦≠𝐧 J_𝐦,𝐧ψ_𝐦+F_𝐧e^-iω_dt , where ψ_𝐧 represents the amplitude of the resonator field in the 𝐧-th position, γ=ħ/τ is the loss ratio (being τ the resonator lifetime), and F_𝐧 is the complex amplitude of the resonant excitation laser (drive) at the n-th resonator having photon energy ħω_d. Numerical results.-We numerically solve Eq. (<ref>) for different drive configurations F_𝐧 at an energy ħω_d within the photonic band and calculate the steady-state intensity |ψ_𝐧^ss|^2. When driving a single resonator at ħω_d=E_0, light predominantly propagate along the diagonal resonators, exhibiting an X-type shape, since the resonant drive excites modes with null and maximal velocity. Figure <ref>(a) shows this behaviour for γ=0.15J. The intensity has an exponential decay from the driven resonator to the lattice corners due to losses. For any other energy E≠ E_0, light shows some anisotropic propagation along the diagonal because lattice modes have non-zero velocity for every k vector and, conversely, light propagates homogeneously at energies near the maximum and minimum of the band. Driving multiples lattice resonators yields to interference effects, which have been used to externally engineering localized modes <cit.>. In the particular case of the square lattice, at ħω_d=E_0, a configuration of four drives with equal amplitudes and phases surrounding a resonator (rhombic shape) gives a highly-localized steady state, which has the entire intensity only on the surrounded lattice site for γ→ 0 <cit.>. We define this rhombic-like spatial arrangement of the drives as the unitary-driving cell (UDC) of the square lattice. Beyond the examples mentioned above, the UDC can also produce extended steady states confined along the diagonals. To demonstrate this, we probe the lattice using such driving configuration with ħω_d=E_0. When the drives have equal amplitudes with alternating 0 and π phases, as shown in the inset of Fig. <ref>(b) (blue (red) disk depicts π (0) phase), the steady state displays a destructive interference on the central resonator as well as on the ones along the diagonal (see Fig. <ref>(b)). Conversely, the lines of resonators adjacent to the diagonals show a rather intense propagation towards the corners, forming a decaying double X-type of shape. This propagation changes dramatically when the UDC has two consecutive drives with 0 phase and the other two with π/2 phase (phase difference of Δϕ=π/2), as shown in Fig. <ref>(c) and (d) (see also the insets). We observe a highly-confined propagation along one diagonal evidencing a quasi-1D propagation of light through the bulk externally induced by the drives. Therefore, by tuning both the energy and the relative phase of the external drives, either k=π/d(∓0.5,±0.5) or k=π/d(± 0.5,±0.5) vectors can be selected so that light can travel only along one of the two diagonal directions. To quantify the degree of confinement along one diagonal produced by the UDC, we define the parameter χ=(I_D-I_AD)/(I_D+I_AD), where I_D (I_AD) is the summed intensity along the diagonal resonators from top-right (bottom-right) to bottom-left (top-left). 
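A compact sketch of how such steady states (and the parameter χ just defined) can be obtained numerically: inserting ψ_n(t) = ψ_n^ss e^{-iω_d t} into the driven-dissipative equation reduces it to a sparse linear system. The lattice size, loss rate, and the amplitudes and phases of the four drive sites below are illustrative (two consecutive drives at phase 0 and two at π/2, following the UDC discussed above), ħ = d = 1 is assumed, the hopping sign follows the tight-binding Hamiltonian H_sq, and the mapping of array diagonals onto lattice diagonals in the χ diagnostic depends on the plotting convention.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

Lside, J, E0, gamma = 41, 1.0, 0.0, 0.15       # lattice size, hopping, on-site energy, losses
hw_drive = E0                                   # resonant drive at the van Hove energy

def idx(nx, ny):
    return nx * Lside + ny

# nearest-neighbour tight-binding Hamiltonian (hopping -J, open boundaries)
rows, cols, vals = [], [], []
for nx in range(Lside):
    for ny in range(Lside):
        rows.append(idx(nx, ny)); cols.append(idx(nx, ny)); vals.append(E0)
        for dnx, dny in ((1, 0), (0, 1)):
            if nx + dnx < Lside and ny + dny < Lside:
                i, j = idx(nx, ny), idx(nx + dnx, ny + dny)
                rows += [i, j]; cols += [j, i]; vals += [-J, -J]
H = sp.csr_matrix((vals, (rows, cols)), shape=(Lside**2, Lside**2), dtype=complex)

# unitary-driving cell around the central site: two consecutive drives at phase 0,
# the other two at phase pi/2 (illustrative assignment of the four drive sites)
F = np.zeros(Lside**2, dtype=complex)
c = Lside // 2
F[idx(c - 1, c)] = F[idx(c, c - 1)] = 1.0
F[idx(c + 1, c)] = F[idx(c, c + 1)] = np.exp(1j * np.pi / 2)

# steady state of  i dpsi/dt = (H - i*gamma) psi + F e^{-i w_d t}:
#   (H - i*gamma - w_d) psi_ss = -F
Amat = (H - 1j * gamma * sp.identity(Lside**2) - hw_drive * sp.identity(Lside**2)).tocsc()
psi_ss = spla.spsolve(Amat, -F)
intensity = np.abs(psi_ss.reshape(Lside, Lside)) ** 2

# confinement parameter chi along the two lattice diagonals
I_D, I_AD = np.trace(intensity[::-1]), np.trace(intensity)
chi = (I_D - I_AD) / (I_D + I_AD)
print(f"chi = {chi:.2f}")
```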
Thus, a fully confined propagation along either diagonal gives as an outcome χ^max=± 0.95 because both I_D and I_AD sum the intensity on the central resonator. We compute this parameter for the steady states shown in Fig. <ref>(c)-(d), obtaining respectively χ_UDC≈±0.8 (χ_UDC>|0.9| when considering the three main diagonal lines), and χ_UDC→χ^max for γ→ 0. In contrast, the X-shape propagation produced by the single-resonator drive gives χ=0 for any γ since resonators along both diagonals are equally intense. This quasi-1D propagation occurs under almost any value of Δϕ as long as ħω_d=E_0, except for Δϕ=0 at which the localization on one resonator happens <cit.>. Figure <ref>(e) shows intensity profiles along one of the diagonals as a function of the phase of the UDC. We observe that the main consequence is the constructive or destructive interference at the central resonator, taking place for Δϕ=π/2 (green squares) and Δϕ=π (blue triangles), respectively. For other values, the profile has a shorter decay (Δϕ =π/4, yellow circles) or does not present a total destructive interference in the surrounded resonator (Δϕ=3π/4, dark green rhombus). We also notice a slightly higher intensity on sites placed at the ends of the diagonal because the resonant lattice modes are the fastest ones, allowing the light to reach these corner sites and then be reflected back. Considering higher loss rates, light decays faster and this boundary effect can be reduced, as shown in panel (f). Moreover, the steady states get broader along the orthogonal direction, being less confined and with a lower χ_UDC value (∼ 0.4 for γ=J), yet exhibiting a predominant direction. The drive-induced quasi-1D propagation can be harnessed so most of the light propagates towards a chosen corner. This is done by adding an extra drive to the lattice site at the center of the UDC. Figure <ref>(a) shows the steady-state intensity when the lattice is driven by the UDC plus a central drive with a phase equal to φ_c=0.3π and an amplitude 1.5 times the ones of the UDC (|ψ_c|=1.5|ψ^i_UDC|, i=1,…,4). The intensity predominantly decays towards the top-left corner of the lattice, whereas null intensity is observed towards the bottom-right corner along the diagonal (see also panel (c), yellow circles). The direction of propagation can be reversed by changing the driving phase of the central site to φ_c=1.22π (same amplitude), as panel (b) shows. Thus, the control of both the central phase and amplitude enables canceling either wave-vectors k=π/d(0.5,-0.5) or k=π/d(-0.5,0.5), playing a key role to achieve this quasi-1D directional control. Panel (c) displays the steady-state intensity profiles along the relevant diagonal when driving with the UDC-plus-central configuration. We see that the directional propagation is extinguished for any other value of φ_c. Importantly, the loss rate increment does not destroy this directional decay along the diagonal, as shown in panel (d) for several values of γ. Furthermore, some light propagates along the orthogonal diagonal diminishing the quasi-1D confinement. Despite this, for the optimal phase values and γ=0.15J, 68% of the light is addressed from the center towards the desire corner along the three main diagonals, whereas 13% of the light gets dispersed on the resonators along the opposite one. 
Defining the parameter χ^D=(I_l-I_r)/I_AD, where I_l (I_r) is the intensity on the resonators along the left-half (right-half) of the diagonal, we quantify a directional degree along the predominant diagonal for these steady states. Thus, χ^D=0.96 and χ^D=-0.96 for the central-driving phase equal to φ_c=0.3π and φ_c=1.22π, respectively, being the maximum value equal to the unity. Additionally, χ^D gets much less diminished than χ_UDC when increasing γ, taking a value of χ^D=|0.9| for γ=J. The quasi-1D propagation can also be manipulated to create localized steady states. Simultaneously driving the lattice with two UDCs, separated by a given number of resonators along one diagonal, enables us to control the interference between their independent steady states. Specifically, we consider two identical UDCs as the one shown in the inset of Fig. <ref>(d), separated by i∈ℕ sites along the diagonal. We first consider the drives separated by i=1 (see inset in Fig. <ref>(a)) and a global phase is added to one of them (Δφ). To obtain a complete scenario of the interference patterns, we sweep Δφ in the range [0,2π]. Constructive and destructive interference are observed in the central resonator of the lattice when Δφ=π and Δφ=0, respectively, as shown in Fig. <ref>(a) and (d). Remarkably, the constructive interference exhibits a well-localized steady state with intensity on approximately three resonators, and no appreciable intensity on the adjacent lines. We quantify this by computing the inverse participation ratio determined as =∑_𝐧|ψ_𝐧^ss|^4/(∑_𝐧|ψ_𝐧^ss|^2)^2, where →1 (→ 0) is a fully localized (delocalized) state. For the single site spacing between the two UDCs, we obtain a steady state value of _1=0.26 or, inversely, light occupies only 3.8 sites. The localization along the diagonal can be extended by further separating the drives. Figure <ref>(b) displays the intensity profile of a steady state for i=3 sites in between the two UDCs. We see that the light is mainly concentrated on the diagonal resonators surrounded by the drives, and decays exponentially towards the corners. Its respective degree of localization is _3=0.12. This feature can be better seen in Fig. <ref>(c), in which the intensity profile of the steady states along the diagonal is plotted for several odd-site separations of the UDCs. In contrast, driving the lattice with the two UDCs in phase (Δφ=0) results in steady states with almost zero light in between the drives, as shown in Fig. <ref>(e) for i=3 and Fig. <ref>(f) for several separations. In the simulations, light tends to accumulate in the corners due to the imposed boundary conditions, which can be reduced by increasing the lattice size. Performing the same phase sweep for even-sites separations of the UDCs, we obtain that localized (delocalized) steady states are formed for opposite value of the phase difference, i.e., Δϕ=0 (Δϕ=π). In this case, the closest driving configuration is two overlapped UDCs (zero sites in between), which gives a steady state with the highest localization, _0=0.4, in which light is mostly located on the two central sites. This directional propagation is not unique of the square lattice, but it is a general feature of 2D lattices with saddle points in their band structures and anisotropic group velocities in the reciprocal space. For example, a triangular lattice has a single photonic band E(k)=E_0-2J[cos(k_xd)+2cos(√(3)k_yd/2)cos(k_xd/2)] with saddle points at E-E_0=-2J (J is the nearest-neighbor coupling). 
At this energy, there are k-dependent maximum and minimum group velocities, as shown in Fig. <ref>(a). Therefore, as studied in the square-lattice case, driving the triangular lattice at such an energy by using a single excitation on a central resonator, an anisotropic steady state is obtained with three main directions of propagation (see Fig. <ref>(b)) <cit.>. A unitary-driving cell can be found to control the light propagation along one of these directions and each of them can be deliberately selected by manipulating the phase distribution. Figure <ref>(d)-(f) exhibit the intensity of the steady state when using six resonators surrounding a single one, having three consecutive drives with 0 phase and the other three with π phase (see insets). Depending on the orientation of the phase distribution, specific k vectors can be addressed and, thus, light propagates only along one of the three directions. Moreover, when adding a single drive on the surrounded resonator with ±π/2 phase (having, therefore, a hexagonal-like drive plus a central one), the propagation along one half of the main direction is cancelled and enhanced the opposite half, as seen in Fig. <ref>(c). On the other hand, when simultaneously driving the lattice with two UDCs placed along one of the main directions and separated by one resonator, as shown in Fig. <ref>(g)-(h), both constructive and destructive interference occur for an out-of-phase (Δφ=π) and an in-phase (Δφ=0) configuration, respectively. In the former case, localized steady states are formed, as seen previously in the square lattice. It worth mentioning that all these phenomena is found in more intricate lattice configurations, as long their band structure possesses saddle points. Also, the UDC for each lattice geometry could differ from a given lattice to another one both in amplitude and phase distribution. Nonetheless, at least in all tested lattices, we found that a single resonator must be surrounded by the drives and the phase must be shaped as a dipole-like phase pointed to the direction of propagation (see insets in Fig. <ref> and Fig. <ref>). Conclusion and Outlook. We presented a scheme for controlling the directional propagation of light via external drives in two-dimensional lattices of coupled dissipative resonators. We numerically demonstrated that tuning external drives at the energy of saddle points in lattice bands results in an anisotropic propagation, which can be turned into quasi-1D propagation when adding multiple drives with specific amplitude and phase distributions. Similar interference effects allow for manipulating the quasi-1D behavior into highly directional propagation. Lastly, we showed that interference effects between the drives enable the excitation of localized steady states with controllable localization degrees. Our numerical results bring to the spotlight the relevance of resonant excitation laser for harnessing the propagation of light in lattices, which can be readily implemented experimentally using, for instance, lattices of coupled micropillars <cit.>. We believe that experimental implementations of our findings could find use in all-optical processing of information <cit.>, where simpler versions of phase-controlled optical switches in a 2D photonic crystal have been observed <cit.>. The phenomenology we describe is also relevant in cavity and waveguide QED. 
Examples range from the practical application of directional emission in routing photons <cit.> to fundamental questions in non-Markovian systems <cit.>, where engineering slow group velocities could lead to delay-induced quantum optical effects <cit.>. Moreover, a natural follow-up avenue of this work is the addition of Kerr-type nonlinearities <cit.>, which could expand the degree of control over the localization of light along the predominant directions. In the quantum realm, it would be interesting to study how the quantum properties of squeezed light evolve when the highly directional propagation takes place and how they could be incorporated into quantum information protocols <cit.>.
http://arxiv.org/abs/2407.13461v1
20240718123549
Parameter estimation in hyperbolic linear SPDEs from multiple measurements
[ "Anton Tiepner", "Eric Ziebell" ]
math.ST
[ "math.ST", "stat.TH", "60H15, 62F12" ]
Parameter estimation in hyperbolic linear SPDEs from multiple measurements Anton Tiepner, Eric Ziebell July 22, 2024 ===================================================================== § ABSTRACT The coefficients of elastic and dissipative operators in a linear hyperbolic SPDE are jointly estimated using multiple spatially localised measurements. As the resolution level of the observations tends to zero, we establish the asymptotic normality of an augmented maximum likelihood estimator. The rate of convergence for the dissipative coefficients matches rates in related parabolic problems, whereas the rate for the elastic parameters also depends on the magnitude of the damping. The analysis of the observed Fisher information matrix relies upon the asymptotic behaviour of rescaled M, N-functions generalising the operator sine and cosine families appearing in the undamped wave equation. In contrast to the energetically stable undamped wave equation, the M, N-functions emerging within the covariance structure of the local measurements have additional smoothing properties similar to the heat kernel, and their asymptotic behaviour is analysed using functional calculus. MSC 2020 subject classification: Primary: 60H15; Secondary: 62F12 Keywords: Hyperbolic linear SPDEs, second-order stochastic Cauchy problem, parameter estimation, central limit theorem, local measurements § INTRODUCTION We study parameter estimation for a general second-order stochastic Cauchy problem ü(t)=A_θ u(t)+B_ηu̇(t)+Ẇ(t), 0<t≤ T, driven by space-time white noise Ẇ on an open, bounded spatial domain Λ⊂^d. The differential operators A_θ and B_η defined through A_θ =∑_i=1^pθ_i(-Δ)^α_i, α_1>…>α_p≥0, B_η =∑_j=1^qη_j(-Δ)^β_j, β_1>…>β_q≥0, are parameterised by unknown constants θ∈^p, η∈^q. In general, such equations model elastic systems, and we refer to A_θ as the elastic operator while B_η is called the dissipation (or damping) operator. In the absence of any damping (B_η=0) and noise, a prototypical example of (<ref>) is the isotropic plate equation (without any in-plane forces, thermal loads or elastic foundation) ρ hü(t)=-DΔ^2u(t), modelling the bending of elastic plates over time. The parameters governing (<ref>) are the material density ρ and the bending stiffness D=h^3E/12(1-ν^2). The bending stiffness D depends on the plate thickness h, the material-specific Poisson-ratio ν and Young's modulus E. Numerous extensions and applications of such equations can, for instance, be found in <cit.>. To account for the system's energy loss, damping is added to the equation, where a higher differential order of B_η describes stronger damping. In fact, a parabolic behaviour of (<ref>) is obtained for β_1>0 due to the smoothing effects within the related C_0-semigroup <cit.>. In closely related situations, i.e. when both A_θ and B_η are negative operators, the C_0-semigroup has been shown to become analytic if and only if 2β_1≥α_1, where a borderline case occurs under equality, cf. <cit.>. While parameter estimation for SPDEs is well-studied in second-order parabolic equations, e.g. <cit.> and the references therein, the literature on higher-order hyperbolic equations is limited. We refer to <cit.> and the references mentioned there for studies of the (non-)parametric wave equation. In <cit.>, the authors considered a weakly damped system, i.e. β_1=0, and developed a first approach for identifying coefficients of the elastic operator in a Kalman filtering problem based on the methods of sieves. 
<cit.> studied equations driven by a fractional cylindrical Brownian motion and derived a consistent estimator of a scalar drift coefficient using the ergodicity of the underlying system. Based on spectral measurements (u(t)e_j)_0≤ t≤ T, j=1,…, N, where (e_j)_j≥1 forms an orthonormal basis of L^2(Λ) composed of eigenvectors for A_θ and B_η, <cit.> constructed maximum-likelihood estimators and established the asymptotic normality for diagonalisable hyperbolic equations given that the number N of observed Fourier-modes tends to infinity. In contrast, our estimator is based on continuous observations of local measurement processes u_δ,k=(u(t)K_δ,x_k)_0≤ t≤ T, u_δ,k^γ=(u(t)(-Δ)^γK_δ,x_k)_0≤ t≤ T, for locations x_1,…, x_N∈Λ and γ∈{α_i,β_j|1≤ i≤ p;1≤ j≤ q}. The point spread functions <cit.> K_δ,x_k are compactly supported functions taking non-zero values in an area centred around x_k with radius δ. Local measurements emerge naturally as they describe the physical limitation of measuring u(t,x_k), which, in general, is only possible up to a convolution with a point spread function. Local observations were introduced to the field of statistics for SPDEs in <cit.>, where the authors investigated a stochastic heat equation with a spatially varying diffusivity. It was shown that the diffusivity at location x_k∈Λ can be estimated based on a single local measurement process at x_k as the resolution level δ tends to zero. The local observation scheme turned out to be robust under semilinearities <cit.>, multiplicative noise <cit.>, discontinuities <cit.> or lower-order perturbation terms <cit.>. In contrast to the estimation of the diffusivity, the identifiability of transport or reaction coefficients necessarily requires an increasing amount N→∞ of measurements. In the recent contribution <cit.>, the local measurement approach was extended to hyperbolic problems and the non-parametric wave speed in the undamped stochastic wave equation was estimated by relating the observed Fisher information to the energetic behaviour of an associated deterministic wave equation. Based on the local measurement approach, we construct the augmented maximum likelihood estimator (MLE) (θ̂_δ,η̂_δ)^⊤∈^p+q and prove the asymptotic normality of [ N^1/2δ^-2α_i+α_1+β_1(θ̂_δ,i-θ_i); N^1/2δ^-2β_j+β_1(η̂_δ,j-η_j) ]_i≤ p,j≤ q, δ→0, with N=N(δ) measurements. The consistent estimation for θ_i holds if N^1/2δ^-2α_i+α_1+β_1→∞, whereas η_j can be estimated in the asymptotic regime N^1/2δ^-2β_j+β_1→∞. In particular, estimating elastic coefficients is more difficult under higher dissipation, while damping coefficients are unaffected by the order of the elastic operator A_θ and their convergence rates reflect the rates obtained in advection-diffusion equations, cf. <cit.>. For the maximal number of non-overlapping observations N≍δ^-d, our convergence rates match the rates obtained in the spectral approach up to specific boundary cases. In the weakly damped case, i.e. β_1=0, we confirm the results in <cit.>. That is, the dependence of the time horizon T of the asymptotic variance resembles the explosive, stable and ergodic cases of the maximum likelihood drift estimator for an Ornstein-Uhlenbeck process, cf. <cit.>. In the structural damped case (β_1>0), the asymptotic variance is of order T^-1 instead. We begin this paper by specifying the model and discussing properties of the local measurements in <Ref>. The augmented MLE is constructed and analysed in <Ref>, and the CLT is established. 
The section additionally contains various remarks and examples, complemented by a numerical study underpinning and illustrating the main result. All proofs are deferred to <Ref>. § SETUP §.§ Notation Throughout this paper, we fix a filtered probability space (Ω, ℱ,(ℱ_t)_0≤ t≤ T,ℙ) with a fixed time horizon T<∞. We write a≲ b if a≤ Mb holds for a universal constant M, independent of the resolution level δ>0 and the number of spatial points N. Unless stated otherwise, all limits are to be understood as the spatial resolution level tending to zero, i.e. for δ→ 0. For an open set Λ⊂^d, L^2(Λ) is the usual L^2-space with the inner product ····_L^2(Λ). The Euclidean inner product and distance of two vectors a, b∈^p are denoted by a^⊤ b and |a-b|, respectively. We abbreviate the Laplace operator with Dirichlet boundary conditions on the bounded spatial domain Λ by Δ and on the unbounded spatial domain ^d by Δ_0. Let H^k(Λ) denote the usual Sobolev spaces, and denote by H_0^1(Λ) the completion of C_c^∞(Λ), the space of smooth compactly supported functions, relative to the H^1(Λ) norm. As in <cit.>, let Ḣ^2s(Λ)𝒟((-Δ)^s) for s>0 be the domain of the fractional Laplace operator on L^2(Λ) with Dirichlet boundary conditions. The order of a differential operator D is denoted by ord(D). §.§ The model Consider the second-order stochastic Cauchy problem v(t)=(A_θ u(t)+B_η v(t)) t+ W(t), 0<t≤ T, u(t)=v(t) t, u(0)=u_0∈ L^2(Λ), v(0)=v_0∈ L^2(Λ), u(t,x)=v(t,x)=0, 0≤ t≤ T, x∈Λ|_∂Λ on an open, bounded domain Λ⊂^d having C^2-boundary ∂Λ. We assume Dirichlet boundary conditions and a driving space-time white noise W in (<ref>). The elasticity and damping operators A_θ and B_η are parameterised by θ∈^p and η∈^q and given by A_θ =∑_i=1^pθ_i(-Δ)^α_i, D(A_ϑ)= D((-Δ)^α_1) = Ḣ^2α_1(Λ) , B_η =∑_j=1^qη_j(-Δ)^β_j, D(B_η)=D((-Δ)^β_1)= Ḣ^2β_1(Λ), with α_1>0 and 0≤α_i,β_j<∞ satisfying α_1>α_2>… >α_p and β_1>β_2>… >β_q. (a) Weakly damped wave equation (β_1=0): A_θ=-θ_1Δ, B_η=η_1. (b) Clamped plate equation: 1) Weakly damped (β_1=0): A_θ=θ_1Δ^2, B_η=η_1. 2) Structurally damped (0<β_1<α_1): A_θ=θ_1Δ^2, B_η=-η_1Δ. 3) Strongly damped (β_1=α_1): A_θ=θ_1Δ^2, B_η=η_1Δ^2. <Ref> displays a heatmap illustrating both the weakly and structurally damped plate equation in one spatial dimension. The solution of the SPDEs were approximated on a fine time-space grid using the finite difference scheme associated with the semi-implicit Euler-Maruyama method, see <cit.>. Additional smoothing properties in the structurally damped case due to the dissipative operator B_η result in an accelerated energetic decay in comparison to the weakly damped case. Throughout the rest of the paper, we impose the following assumptions on the parameters. [Assumption on the parameters] (i) θ_1<0; (ii) If β_1>0, then η_1<0; (iii) α_1≥ 2β_1 and if α_1=2β_1 then θ_1+η_1^2/4<0. <Ref> does not guarantee that either A_θ or B_η are, in general, negative operators. Instead, it implies that at least all but finitely many of their eigenvalues are negative. Moreover, conditions (i) and (iii) are necessary to ensure that the difference L_θ,η -A_θ-B_η^2/4 is a positive operator, which itself is not required for the proofs and is just assumed for technical reasons in <Ref> as all arguments also carry over to the non-positive case, resulting in a complex-valued operator, cf. <Ref>. §.§ Local measurements For δ>0, y∈Λ and z∈ L^2(^d) we define the rescaling Λ_δ,y ={δ^-1(x-y):x∈Λ}, z_δ,y(x) =δ^-d/2z(δ^-1(x-y)), x∈^d. Fix a function K∈ H^⌈2α_1⌉(^d) with compact support. 
By a slight abuse of notation, we define local measurements at the location x∈Λ with resolution level δ as the continuously observed processes u_δ,x, u^Δ_i_δ,x, v_δ,x, v_δ,x^Δ_j where for i=1,…,p and j=1,…,q: u_δ,x =(u(t)K_δ,x)_0≤ t≤ T, u^Δ_i_δ,x=(u(t)(-Δ)^α_i K_δ,x)_0≤ t≤ T, v_δ,x =(v(t)K_δ,x)_0≤ t≤ T, v_δ,x^Δ_j=(v(t)(-Δ)^β_jK_δ,x)_0≤ t≤ T. Analogously to <cit.>, these local measurements satisfy the following Itô-dynamics u_δ,x(t)=v_δ,x(t) t, v_δ,x(t)=(∑_i=1^pθ_iu^Δ_i_δ,x(t)+∑_j=1^qη_jv^Δ_j_δ,x(t)) t+ K_L^2(^d) W_x(t), with scalar Brownian motions (W_x(t))_0 ≤ t ≤ T=(K^-1_L^2(^d)W(t)K_δ,x)_0≤ t≤ T, which become mutually independent provided that K_δ,xK_δ,x'=K^2_L^2(^d)δ_x,x'=0, x,x'∈^d, where the Kronecker-delta δ_x,x' evaluates to zero for x ≠ x'. For locations x_1,…, x_N∈Λ, we further define the observation vector process Y_δ∈ L^2([0,T];^(p+q)× N) through Y_δ,k = [ u_δ,x_k^Δ_1 u_δ,x_k^Δ_p v_δ,x_k^Δ_1 v_δ,x_k^Δ_q ]^⊤∈^p+q, k=1, …, N. The measurements (u_δ,x^Δ_i, i=1, …, p), can be approximated by observing u_δ,y, y∈Λ, on a fine spatial grid in Λ. Moreover, all the measurements wrt. v, i.e. v_δ,x and v^Δ_j_δ,x, j≤ q, can be obtained by differentiating u_δ,x and u^Δ_j_δ,xu(·)(-Δ)^β_jK_δ,x in time. § THE ESTIMATOR Motivated by a general Girsanov theorem, as described in detail in <cit.>, the augmented MLE (θ̂_δ,η̂_δ)^⊤∈^p+q is given by [ θ̂_δ; η̂_δ ]=ℐ_δ^-1∑_k=1^N∫_0^TY_δ,k(t) v_δ,x_k(t) with the observed Fisher information matrix ℐ_δ=∑_k=1^N∫_0^TY_δ,k(t)Y_δ,k(t)^⊤ t. Clearly, the matrix ℐ_δ is symmetric and positive semidefinite. By plugging the Itô-dynamics (<ref>) into the definition of the estimator (<ref>), we obtain the decomposition [ θ̂_δ; η̂_δ ]=[ θ_δ; η_δ ]+K_L^2(^d)ℐ_δ^-1ℳ_δ on the event {det(ℐ_δ)>0} with the martingale part ℳ_δ=∑_k=1^N∫_0^TY_δ,k(t) W_x_k(t). As the limiting object of the rescaled observed Fisher information is deterministic and invertible (see <Ref> below), ℐ_δ will itself be invertible for sufficiently small δ, cf. <cit.>. [Regularity of the kernel and the initial condition] * The locations x_k, k=1,…,N, belong to a fixed compact set 𝒥⊂Λ, which is independent of δ and N. There exists δ'>0 such that supp(K_δ,x_k)∩supp(K_δ,x_l)=∅ for k≠ l and all δ≤δ'. * There exists a compactly supported function K̃∈ H^⌈ 2α_1⌉+2⌈α_1⌉(^d) such that K=Δ_0^⌈α_1⌉K̃. * The functions (-Δ)^α_i-(α_1+β_1)/2K are linearly independent for all i=1,… ,p, and the functions (-Δ)^β_j-β_1/2K are linearly independent for all j=1,…,q. * The initial condition (u_0,v_0)^⊤ in (<ref>) takes values in Ḣ^2α_1(Λ)×Ḣ^α_1(Λ). <Ref> (i) ensures that K_δ,x_kK_δ,x_l=K^2_L^2(^d)δ_k,l with the Kronecker-delta δ_k,l. Consequently, the Brownian motions W_x_k become mutually independent if δ is sufficiently small. Thus, ℐ_δ forms the quadratic variation process of the time-martingale ℳ_δ, and we expect ℐ_δ^-1/2ℳ_δ to be asymptotically normally distributed. Both (ii) and (iii) guarantee that the limiting object of the observed Fisher information is well-defined and invertible, while (iv) ensures that the initial condition is asymptotically negligible. In principle, the required smoothness in (ii) can be relaxed, depending on the dimension d and the identifiability of the appearing parameters in (<ref>), but is kept for the simplification of the proofs. We define a diagonal matrix of scaling coefficients ρ_δ∈^(p+q)× (p+q) via (ρ_δ)_ii N^-1/2δ^2α_i-α_1-β_1, 1 ≤ i≤ p, N^-1/2δ^2β_i-p-β_1, p<i≤ p+q, and the constant C(η_1,T) through C(η_1,T)e^Tη_1-Tη_1-1/2η_1^2, η_1≠0, T^2/4, η_1=0. 
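To make the construction concrete, the following Python sketch simulates the weakly damped wave equation (p = q = 1, α_1 = 1, β_1 = 0) in d = 1 with a semi-implicit Euler–Maruyama finite-difference scheme, forms the local measurements with the compactly supported kernel K(x) = exp(-5/(1-x²))1(|x|<1) also used in the numerical study below, and evaluates the observed Fisher information and the augmented MLE by Riemann and Itô sums. All numerical values (grid sizes, δ, measurement locations, θ_1 = -1, η_1 = -0.5) are illustrative, and the time discretisation introduces a bias not present in the continuous-observation setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# weakly damped wave equation (alpha_1 = 1, beta_1 = 0) on (0,1), Dirichlet BC:
#   du = v dt,   dv = (theta1 * (-Lap) u + eta1 * v) dt + dW
theta1, eta1 = -1.0, -0.5
T, n_t, n_x = 1.0, 10_000, 200
dt, dx = T / n_t, 1.0 / (n_x + 1)
x = np.linspace(dx, 1.0 - dx, n_x)

neg_lap = (2 * np.eye(n_x) - np.eye(n_x, k=1) - np.eye(n_x, k=-1)) / dx**2

# semi-implicit Euler-Maruyama: drift taken at time n+1, noise added explicitly
A = np.block([[np.eye(n_x), -dt * np.eye(n_x)],
              [-dt * theta1 * neg_lap, (1.0 - dt * eta1) * np.eye(n_x)]])
A_inv = np.linalg.inv(A)
u, v = np.zeros(n_x), np.zeros(n_x)

# local measurements: kernel K(x) = exp(-5/(1-x^2)) 1(|x|<1), resolution delta
def kernel(y):
    out = np.zeros_like(y)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-5.0 / (1.0 - y[inside] ** 2))
    return out

delta = 0.05
locs = np.array([0.2, 0.35, 0.5, 0.65, 0.8])                       # N = 5, disjoint supports
K = np.stack([kernel((x - xk) / delta) / np.sqrt(delta) for xk in locs])   # K_{delta,x_k} on the grid
K_lap = K @ neg_lap                                                 # (-Lap) K_{delta,x_k}

I_obs, S = np.zeros((2, 2)), np.zeros(2)
for _ in range(n_t):
    Y = np.vstack([dx * K_lap @ u, dx * K @ v])     # (u^{Delta_1}, v)-measurements at time t
    v_meas = dx * K @ v
    xi = rng.normal(0.0, np.sqrt(dt / dx), n_x)     # space-time white noise increment
    step = A_inv @ np.concatenate([u, v + xi])
    u, v = step[:n_x], step[n_x:]
    I_obs += (Y @ Y.T) * dt                         # observed Fisher information
    S += Y @ (dx * K @ v - v_meas)                  # sum_k int Y dv_{delta,x_k}

theta_hat, eta_hat = np.linalg.solve(I_obs, S)
print(theta_hat, eta_hat)   # estimates of (theta1, eta1); eta_hat is noisy at this small N
```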
The following result shows the asymptotic normality of the estimator (<ref>). Grant <Ref>. (i) The matrix Σ_θ,η∈^(p+q)× (p+q), given by Σ_θ,η[ Σ_1,θ,η 0; 0 Σ_2,θ,η ] with (Σ_1,θ,η)_ij = -C(η_1,T)/θ_1(-Δ_0)^(α_i+α_j-α_1)/2K^2_L^2(^d), β_1=0, T/2θ_1η_1(-Δ_0)^(α_i+α_j-α_1-β_1)/2K^2_L^2(^d), β_1>0, (Σ_2,θ,η)_kl = C(η_1,T)K^2_L^2(^d), β_1=0, -T/2η_1(-Δ_0)^(β_k+β_l-β_1)/2K^2_L^2(^d), β_1>0, for 1≤ i,j≤ p and 1≤ k,l≤ q, is well-defined and invertible. In particular, the observed Fisher information matrix admits the convergence ρ_δℐ_δρ_δℙ→Σ_θ,η, δ→ 0. (ii) The estimator (θ̂_̂δ̂,η̂_δ)^⊤ is consistent and asymptotically normal, i.e ρ_δ^-1[ θ̂_δ-θ; η̂_δ-η ]d→𝒩(0,K^2_L^2(^d)Σ_θ,η^-1), δ→0. The convergence rates among the different parameters are given by (<ref>). As the number of observation points cannot exceed N≍δ^-d due to the disjoint support condition of <Ref>, not all coefficients can, in general, be consistently estimated in all dimensions, see <Ref>. In contrast to parameter estimation in convection-diffusion equations based on local measurements in <cit.>, the convergence rates for speed parameters are influenced not only by the order α_1 of A_θ, but also by β_1, the order of the damping operator B_η. Unsurprisingly, higher-order damping results in worse convergence rates as the parameters are harder to identify due to the associated dissipation of energy within the system. On the other hand, the rates for the damping coefficients are not influenced by the order of A_θ, and their rates mirror the rates known from parabolic equations, cf. <cit.>. Similar effects were already observed under the full observation scheme N≍δ^-d in the spectral approach, cf. <cit.>, leading to identical convergence rates. In addition to the joint asymptotic normality of the augmented MLE (θ̂_δ,η̂_δ)^⊤, <Ref> further yields the asymptotic independence of its components, i.e. the marginal estimators for elastic and damping parameters are asymptotically independent. If the equation is weakly damped, i.e. if β_1=0 and η_1<0, then the term -Tη_1 dominates the expression (e^Tη_1-Tη_1-1)/2η_1^2 in the asymptotic variance within (<ref>) as T →∞. The converse is true in the amplified case with η_1>0. If η_1 = 0, the asymptotic variance of the augmented MLE for ϑ and η_1 depends on the time horizon through T^-2 as discussed in <cit.>. It mirrors the rate of convergence of the MLE in the ergodic, stable, and explosive case of the standard Ornstein-Uhlenbeck process as described in <cit.>. If, on the other hand, β_1>0, then the asymptotic variance of the MLE is of order T^-1 in time. In other words, any dissipation decelerates the temporal convergence rate to the rate T^-1 associated with parabolic equations. For simplicity, we did not consider cases where the damping dominates (<ref>), i.e. where 2β_1>α_1. Nonetheless, studying parameter estimation in those situations is neither impossible nor does it require new approaches. It solely relies on a careful analysis of underlying terms within the asymptotic analysis of the observed Fisher information, which may potentially become complex-valued, cf. also <Ref>. Taking this into account, similar convergence rates may be established. (a) Weakly damped (or amplified) wave equation: Consider the weakly damped (or amplified) wave equation (θ_1>0, η_1∈): v(t)=(θ_1Δ u(t)+η_1v(t)) t+ W(t), 0<t≤ T. Then, <Ref> implies [ N^-1/2δ (θ̂_δ-θ_1); N^-1/2(η̂_δ-η_1) ]d→𝒩([ 0; 0 ],[ θ_1K^2_L^2(^d)/C(η_1,T)(-Δ_0)^1/2K^2_L^2(^d) 0; 0 1/C(η_1,T) ]). 
Thus, the augmented MLE attains the convergence rate known from the spectral approach, see <cit.> if it is provided with the maximal number of spatial observations N≍δ^-d. Interestingly, the limiting variance of η̂_δ is independent of the kernel function K similar to the augmented MLE for the first order transport coefficient in <cit.>. (b) Clamped plate equation: Consider the clamped plate equation with 1) Weak damping (θ_1>0, η_1∈): v(t)=(-θ_1Δ^2 u(t)+η_1v(t)) t+ W(t), 0<t≤ T. 2) Structural damping (θ_1>0, η_1>0): v(t)=(-θ_1Δ^2 u(t)+η_1Δ v(t)) t+ W(t), 0<t≤ T. A realisation of the solution can be seen in <Ref>. Depending on the type of damping, the convergence rate for both θ_1 and η_1 changes. In the case of the weakly damped plate equation (<ref>), the CLT yields [ N^1/2δ^-2 (θ̂_δ-θ_1); N^1/2(η̂_δ-η_1) ]d→𝒩([ 0; 0 ],[ θ_1K^2_L^2(^d)/C(η_1,T)Δ_0K^2_L^2(^d) 0; 0 1/C(η_1,T) ]), while [ N^1/2δ^-1 (θ̂_δ-θ_1); N^1/2δ^-1 (η̂_δ-η_1) ]d→𝒩([ 0; 0 ],[ 2θ_1η_1K^2_L^2(^d)/T(-Δ_0)^1/2K^2_L^2(^d) 0; 0 2η_1K^2_L^2(^d)/T(-Δ_0)^1/2K^2_L^2(^d) ]). holds under the structural damping given in (<ref>). The asymptotic variances between θ̂_δ and η̂_δ coincide in the cases θ_1=Δ_0K^2_L^2(^d)K_L^2(^d)^-2 or θ_1=1, respectively. The consistency and the varying convergence rates of the estimators are visualised in <Ref>. Based on the finite difference scheme within the semi-implicit Euler-Maruyama method <cit.> (with 10000000× 2000 time-space grid points), we computed the root mean squared error (RMSE) for decreasing resolution level δ from 100 Monte Carlo runs, N≍δ^-1 measurement locations and the kernel function K(x)=exp(-5/(1-x^2))1(|x|<1). In the weakly damped case, it can be seen that the estimator for the elastic coefficient θ_1 achieves a much quicker convergence rate than the estimator of the damping coefficient η_1. On the other hand, their rates are equal under structural damping. The asymptotic variances are attained in both cases. (c) General hyperbolic equation: Consider the hyperbolic equation v(t)=(∑_i=1^pθ_i(-Δ)^α_iu(t)+∑_j=1^qη_j(-Δ)^β_j v(t)) t+ W(t), 0<t≤ T, with p+q unknown parameters. Then, the convergence rates for θ_i and η_j, respectively, are given by N^-1/2δ^2α_i-α_1-β_1, 1≤ i≤ p, N^-1/2δ^2β_j-β_1, 1≤ j≤ q. Given the maximal number of local measurements N≍δ^-d, these rates translate to δ^d/2+2α_i-α_1-β_1, 1≤ i≤ p, δ^d/2+2β_j-β_1, 1≤ j≤ q. Thus, our method provides a consistent estimator for a parameter θ_i or η_j, respectively, if and only if the conditions α_i >(α_1+β_1-d/2)/2, i≤ p, β_j >(β_1-d/2)/2, j≤ q, hold. Similar results were found in the spectral regime, cf. <cit.>. Interestingly, the authors verified a slightly stronger consistency condition, resulting in a logarithmic rate under equality in (<ref>) and (<ref>). Otherwise, they also obtain the rates in (<ref>). We believe that the logarithmic rates in the boundary cases are also valid in the local measurement approach given that less restrictive assumptions on the kernel K are imposed, similar to <cit.> in a related parabolic problem. § PROOFS For δ>0 and x∈Λ denote by Δ_δ,x the Laplace operator with Dirichlet boundary conditions on Λ_δ,x and define the following differential operators with domain Ḣ^2α_1(Λ) and Ḣ^2α_1(Λ_δ,x), respectively: L_θ,η z (-A_θ-B_η^2/4) z=-∑_i=1^pθ_i(-Δ)^α_iz-1/4∑_k,l=1^qη_kη_l(-Δ)^β_k+β_lz, L_θ,η,δ,xz -∑_i=1^pδ^2α_1-2α_iθ_i(-Δ_δ,x)^α_iz-1/4∑_k,l=1^qδ^2α_1-2β_k-2β_lη_kη_l(-Δ_δ,x)^β_k+β_lz. 
Introduce further the rescaled versions of B_η and A_θ, defined through B_η,δ,x ∑_j=1^qδ^2β_1-2β_jη_j(-Δ_δ,x)^β_j, D(B_η,δ,x)=D((-Δ_δ,x)^β_1)=Ḣ^2β_1(Λ_δ,x), A_θ,δ,x ∑_i=1^pδ^2α_1-2α_iθ_i(-Δ_δ,x)^α_i, D(A_θ,δ,x)=D((-Δ_δ,x)^α_1)=Ḣ^2α_1(Λ_δ,x), and the limiting objects L̅_θ,ηz -θ_1(-Δ_0)^α_1z, α_1>2β_1, -(θ_1+η_1^2/4)(-Δ_0)^α_1z, α_1=2β_1, D(L̅_θ,η)=D((-Δ_0)^α_1)=Ḣ^2α_1(ℝ^d), B̅_η η_1(-Δ_0)^β_1, D(B̅_η)=D((-Δ_0)^β_1)=Ḣ^2β_1(ℝ^d), A̅_θ θ_1(-Δ_0)^α_1, D(A̅_θ)= D((-Δ_0)^α_1)=Ḣ^2α_1(ℝ^d). We will frequently use that L_θ,η is an (unbounded) normal operator with spectrum σ(L_θ,η) and the resolution of identity E (cf. <cit.>). By the functional calculus for normal unbounded operators, we can define the operator f(L_θ,η)∫_σ(L_θ,η)f(λ) E(λ) on the domain 𝒟_f 𝒟(f(L_θ,η))= {z∈ L^2(Λ):∫_σ(L_θ,η)|f(λ)|^2 E_z,z(λ)<∞}, for any measurable function f:ℂ→ℂ. Analogous statements also apply to A_θ, B_η and the rescaled differential operators. Let δ>0,x∈Λ and f:ℂ→ℂ∪{±∞} be measurable. If z_δ,x∈𝒟_f, then f(L_θ,η)z_δ,x =(f(δ^-2α_1L_θ,η,δ,x)z)_δ,x, f(B_η)z_δ,x =(f(δ^-2β_1B_η,δ,x)z)_δ,x, f(A_θ)z_δ,x =(f(δ^-2α_1A_θ,δ,x)z)_δ,x. Suppose z ∈Ḣ^2α_1(Λ_δ,x) such that z_δ,x∈Ḣ^2α_1(Λ)⊂𝒟_f. Then, the claim follows immediately for f(x)=x by differentiating z_δ,x and from the definition of L_θ,η,δ,x, see also <cit.>. Using (i) and (iii) of <cit.>, the result can be extended to measurable f:ℂ→ℂ∪{±∞} by first passing to the associated resolution of the identities of L_θ,η, B_η, A_ϑ and L_θ,η,δ,x, B_η, δ, x, A_ϑ,δ,x respectively, and interpreting the localisation as a bounded linear operator from L^2(Λ_δ,x) to L^2(Λ). Throughout the remainder of the paper, we will assume that L_θ,η, and thus also L_θ,η,δ,x, is a positive operator. A sufficient condition for this is given in the next lemma. Let (e_k)_k∈ form an orthonormal basis of (-Δ) with eigenvalues λ_k≥ c(Λ) for some constant c(Λ)>0. Then, L_θ,η is a positive operator if one of the following conditions is satisfied: (i) <Ref> holds, c(Λ)≥1 and |θ_1|>∑_i=2^p|θ_i|+1/4∑_k,l=1^q|η_kη_l|; (ii) <Ref> holds, c(Λ)<1 and |θ_1|>∑_i=2^p|θ_i|c(Λ)^α_i-α_1+1/4∑_k,l=1^q|η_kη_l|c(Λ)^β_k+β_l-α_1. <Ref> states in particular that θ_1<0 and, additionally, θ_1+η_1^2/4<0 in case α_1=2β_1. It is now enough to show that all eigenvalues of L_θ,η are positive, which holds if for all x≥ c(Λ): |θ_1|x^α_1-∑_i=2^p|θ_i|x^α_i-1/4∑_k,l=1^q|η_kη_l|x^β_k+β_l>0. (i) If c(Λ)≥1, then both x^α_i and x^β_k+β_l are bounded by x^α_1 for any i≤ p and k,l≤ q, thus (<ref>) is satisfied. (ii) If c(Λ)<1, then x^α_i-α_1≤ c(Λ)^α_i-α_1 and x^β_k+β_l-α_1≤ c(Λ)^β_k+β_l-α_1. If L_θ,η is not a positive operator and has non-positive eigenvalues, any choice of the operator root is a complex-valued operator. Consequently, the associated family of M, N-functions in the following subsection is again complex-valued. Thus, inner products hereinafter are associated with complex Hilbert spaces. However, this does not influence the asymptotic results of <Ref> due to the convergence L_θ,η,δ,x→L̅_θ,η to a positive limiting operator L̅_θ,η. §.§ Properties of generalised cosine and sine operator functions The operators A_θ and B_η defined in (<ref>) generate a family of M,N-functions (M(t), N(t), t ≥ 0) given by M(t) 𝐦_t(L_θ,η,B_η) e^B_η t/2(cos(L_θ,η^1/2t)-B_η/2sin(L_θ,η^1/2t)L_θ,η^-1/2), L^2(Λ)⊂𝒟(N(t)), N(t) 𝐧_t(L_θ,η,B_η) e^B_η t/2sin(L_θ,η^1/2t)L_θ,η^-1/2, L^2(Λ)⊂𝒟(N(t)). 
Note that by <cit.> all of the appearing operators e^B_η t/2, cos(tL_θ,η^1/2), sin(tL_θ,η^1/2), B_η and L_θ,η^-1/2 in (<ref>) and (<ref>) are well-defined and even commute on the smallest occurring domain as they are all based on the same underlying Laplace operator. By direct computation, one can now verify that the conditions (M1)-(M4) in <cit.> are satisfied by M(t) and N(t) from (<ref>) and (<ref>) using the functional calculus. Assume that L_θ,η is a positive operator. Then, the M, N-functions defined through (<ref>) and (<ref>) are self-adjoint. The unique positive self-adjoint operator root of the positive self-adjoint operator L_ϑ,η is well-defined and exists by <cit.>. Thus, in view of <Ref>, the M, N-functions can each be interpreted as the applications of a real-valued function to the underlying Laplace operator on a bounded spatial domain. In particular, by <cit.> the M,N-functions are self-adjoint. As we are interested in the effect of M, N-functions applied to localised functions, we further define the rescaled M, N-functions: M_δ,x(t) 𝐦_t(δ^-2α_1L_θ,η,δ,x,δ^-2β_1B_η,δ,x), N_δ,x(t) 𝐧_t(δ^-2α_1L_θ,η,δ,x,δ^-2β_1B_η,δ,x). An application of Lemma <ref> yields the scaling properties of the M,N-functions in analogy to <cit.> and <cit.>: M(t)z_δ,x=(M_δ,x(t)z)_δ,x, N(t)z_δ,x=(N_δ,x(t)z)_δ,x, z∈ L^2(Λ_δ,x). Let 0≤ t≤ Tδ^-2β_1, γ≥0 and z∈ H^2⌈γ⌉(^d) with compact support in ⋂_x∈𝒥Λ_δ,x and such that there exists a compactly supported function z̃∈ H^2⌈γ⌉+⌈α_1⌉(^d) with z=Δ^⌈α_1⌉_0z̃. Then, if β_1>0, we have sup_x∈𝒥e^tB_η,δ,xL_θ,η,δ,x^-1/2(-Δ_δ,x)^γ z_L^2(Λ_δ,x) ≲ 1∧ t^-(γ+⌈α_1⌉-α_1/2)/β_1; sup_x∈𝒥e^tB_η,δ,x(-Δ_δ,x)^γ z_L^2(Λ_δ,x) ≲ 1∧ t^-(γ+⌈α_1⌉)/β_1. Moreover, in case β_1=0, the left-hand sides in (<ref>) and (<ref>) are bounded by a constant independent of δ and t. The key idea of the proof is that all involved operators emerge as an application of the functional calculus applied to the same Laplace operator. In particular, they are simultaneously diagonalisable through the same eigenfunctions. Note that in contrast to the eigenfunctions, the associated eigenvalues do not depend themselves on the shift, i.e. x ∈Λ, within the rescaling of the Laplace operator. Let β_1>0. We only prove (<ref>), since the argument for (<ref>) is similar, using additionally that L_θ,η,δ,x commutes with (-Δ_δ,x)^γ, ord(L_θ,η,δ,x)=2α_1 and a bound of L_θ,η,δ,x in terms of its leading term (-Δ_δ,x)^α_1. Let (e_k)_k∈ form an orthonormal basis of (-Δ) in L^2(Λ) with eigenvalues λ_k>0. Then, there exists a constant c(Λ) such that λ_k≥ c(Λ) for all k≥1, see <cit.>. We consider the most involved case, that is, η_1<0 and η_2,…,η_q>0. Consequently, B_η will, in general, not be a negative operator, but there exists y_0>0 such that for all y≥ y_0, we have η_1y^β_1+∑_j=2^qη_jy^β_j≤η_1y^β_1/2, and all but finitely many eigenvalues of B_η will be negative due to <Ref> (ii). Consider the polynomial P_η(y)η_1/2y^β_1+∑_j=2^qη_jy^β_j and define C_1max_y∈[c(Λ),y_0]|P_η(y)|. Then η_1y^β_1+∑_j=2^qη_jy^β_j-C_1≤η_1y^β_1/2 holds for all y≥ c(Λ), and all eigenvalues of the operator B_η-C_1 are negative and upper bounded by η_1c(Λ)/2. Analogously, (e_k,δ,x)_k∈ forms an orthonormal basis of (-Δ_δ,x) in L^2(Λ_δ,x) with eigenvalues λ_k,δ,x =δ^2λ_k≥δ^2c(Λ). Similar calculations imply that η_1y^β_1+∑_j=2^qη_jδ^2(β_1-β_j)y^β_j-δ^2β_1C_1≤η_1y^β_1/2, y≥δ^2c(Λ). 
Thus, all eigenvalues of the operator difference B_η,δ,x-δ^2β_1C_1 are negative and the difference is bounded by η_1(-Δ_δ,x)^β_1/2 in the sense that e^t(B_η,δ,x-δ^2β_1C_1)w_L^2(Λ_δ,x)≤e^tη_1(-Δ_δ,x)^β_1/2w_L^2(Λ_δ,x), w∈ L^2(Λ_δ,x), independent of x∈𝒥. Note further that z=Δ_0^⌈α_1⌉z̃=Δ_δ,x^⌈α_1⌉z̃ since z is compactly supported in ⋂_x∈𝒥Λ_δ,x. With that, we have all the ingredients to prove (<ref>). For 0≤ t≤ Tδ^-2β_1, we obtain sup_x∈𝒥e^tB_η,δ,x(-Δ_δ,x)^γ z_L^2(Λ_δ,x) =sup_x∈𝒥e^tδ^2β_1C_1e^t(B_η,δ,x-δ^2β_1C_1)(-Δ_δ,x)^γ z_L^2(Λ_δ,x) ≤ e^C_1Tsup_x∈𝒥e^t(B_η,δ,x-δ^2β_1C_1)(-Δ_δ,x)^γ (-Δ_δ,x)^⌈α_1⌉z̃_L^2(Λ_δ,x) ≤ e^C_1Tsup_x∈𝒥e^tη_1(-Δ_δ,x)^β_1/2(-Δ_δ,x)^γ+⌈α_1⌉z̃_L^2(Λ_δ,x) ≲ (1∧ t^-(γ+⌈α_1⌉)/β_1)sup_x∈𝒥(z̃_L^2(Λ_δ,x)+(-Δ_δ,x)^γ z_L^2(Λ_δ,x)), where the last line follows from the fact that (-Δ_δ,x)^β_1η_1/2 generates a contraction semigroup and the smoothing property of semigroups. As Δ^n_δ,xz=Δ^n_0z holds for any x∈𝒥 and 1≤ n≤⌈γ⌉, an application of the functional calculus yields sup_x∈𝒥(-Δ_δ,x)^γz_L^2(Λ_δ,x)^2 ≤sup_x∈𝒥(-Δ_δ,x)^⌈γ⌉z_L^2(^d)^2+sup_x∈𝒥(-Δ_δ,x)^⌊γ⌋z_L^2(^d)^2 =(-Δ_0)^⌈γ⌉z_L^2(^d)^2+(-Δ_0)^⌊γ⌋z_L^2(^d)^2 <∞, proving the assertion. The claim for β_1=0 follows directly by bounding | e^tB_η,δ,x|=| e^tη_1|≤ C, 0≤ t≤ T, for some constant C only depending on η_1 and T. The theory of cosine operator functions was developed by <cit.> and led to general deterministic solution theory for undamped second-order abstract Cauchy problems. By substituting the time derivative as its own variable, it is possible to rewrite a second-order abstract Cauchy problem as a first-order abstract Cauchy problem in two components. The associated strongly continuous semigroup then lives on a product of Hilbert spaces called the phase-space; see <cit.>, <cit.> and <cit.>. The same remains true under suitable assumptions on the elastic and dissipation operator within a damped abstract second-order Cauchy problem <cit.>. The operator 𝒜_θ,η, defined through 𝒜_θ,η[ 0 I; A_θ B_η ], D(𝒜_ϑ,η)=Ḣ^2α_1(Λ) ×Ḣ^2β_1(Λ), generates a C_0-semigroup (J_θ,η(t))_t≥0 on the phase-space Ḣ^α_1(Λ) × L^2(Λ) given by J_θ,η(t)[ M(t) N(t); A_θ N(t) M(t)+B_η N(t) ]=[ M(t) N(t); M^'(t) N^'(t) ], t ≥ 0, with M(t) and N(t) given in (<ref>) and (<ref>), respectively. It is well-known that in the special case where both A_θ and B_η are strictly negative operators 𝒜_θ,η generates a C_0-semigroup, which is even analytic if and only if α_1/2≤β_1≤α_1, cf. <cit.>. On the other hand, (J_θ,η(t))_t≥0 given by (<ref>) is indeed a semigroup generated by 𝒜_θ,η which follows by direct verification of the differential properties of M,N-functions in <cit.> using the functional calculus. The coupled second-order system (<ref>) can also be written as a first-order system X(t)=𝒜_θ,ηX(t) t+[ 0; I ] W(t), 0< t≤ T, for X(t)=(u(t),v(t))^⊤ and the matrix-valued differential operator 𝒜_θ,η generating the strongly continuous semigroup (J_θ,η(t))_t≥0 constituted by the M,N-functions defined in <Ref>. The M, N-functions correspond to the cosine and sine functions in the undamped wave equation, see <cit.>. Naturally, they appear in the solution to the stochastic partial differential equation (<ref>): [ u(t); v(t) ] =J_θ,η(t)[ u(0); v(0) ]+∫_0^tJ_θ,η(t-s)[ 0; I ] W(s) =[ M(t)u_0+N(t)v_0; M^'(t)u_0+N^'(t)v_0 ]+[ ∫_0^tN(t-s) W(s); ∫_0^tN^'(t-s) W(s) ]. 
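Per eigenmode of the Laplacian, the entries M(t) and N(t) of J_θ,η(t) reduce to scalar functions of the corresponding eigenvalue, solving ẍ = Ax + Bẋ with initial data (1,0) and (0,1), respectively. A quick numerical sanity check of the closed forms (<ref>) and (<ref>) against an ODE solver, for illustrative per-mode values A = -4 and B = -1 (so that λ = -(A + B²/4) > 0), might look as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B = -4.0, -1.0                     # per-mode elastic / damping coefficients
lam = -(A + B**2 / 4.0)               # positive, mirroring Assumption (iii)
sq = np.sqrt(lam)

def m(t):                             # scalar version of M(t)
    return np.exp(B * t / 2) * (np.cos(sq * t) - (B / 2) * np.sin(sq * t) / sq)

def n(t):                             # scalar version of N(t)
    return np.exp(B * t / 2) * np.sin(sq * t) / sq

def rhs(t, y):                        # second-order Cauchy problem per mode
    x, xdot = y
    return [xdot, A * x + B * xdot]

t_eval = np.linspace(0.0, 5.0, 200)
sol_m = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
sol_n = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
print(np.max(np.abs(sol_m.y[0] - m(t_eval))),    # both discrepancies should be tiny
      np.max(np.abs(sol_n.y[0] - n(t_eval))))
```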
§.§ Asymptotic properties of local measurements In this section, we study the asymptotic covariance structure of the local measurements, which is crucial in showing the convergence of the observed Fisher information matrix ℐ_δ. Assume that (u_0,v_0)^⊤=(0,0)^⊤. For any t,s ∈ [0,T], x∈Λ, 1 ≤ i,j≤ p, 1 ≤ k,l≤ q, the covariance between local measurements is given by Cov(u_δ,x^Δ_i(t), u_δ,x^Δ_j(s)) = δ^-2α_i-2α_j∫_0^t s⟨ N_δ,x(t-r)(-Δ_δ,x)^α_i K,N_δ,x(s-r)(-Δ_δ,x)^α_j K ⟩_L^2(Λ_δ,x)dr, Cov(v_δ,x^Δ_k(t), v_δ,x^Δ_l(s)) = δ^-2β_k-2β_l∫_0^t sN^'_δ,x(t-r)(-Δ_δ,x)^β_kKN^'_δ,x(s-r)(-Δ_δ,x)^β_lK_L^2(Λ_δ,x) r, Cov(u_δ,x^Δ_i(t), v_δ,x^Δ_k(s)) =δ^-2α_i-2β_k∫_0^t s⟨ N_δ,x(t-r)(-Δ_δ,x)^α_i K,N^'_δ,x(s-r)(-Δ_δ,x)^β_kK⟩_L^2(Λ_δ,x)dr. Using (<ref>) and <cit.>, we observe Cov(u_δ,x^Δ_i(t), u_δ,x^Δ_j(s)) =∫_0^t s⟨ N^*(t-r)(-Δ)^α_i K_δ,x,N^*(s-r)(-Δ)^α_j K_δ,x⟩dr, Cov(v_δ,x^Δ_k(t), v_δ,x^Δ_l(s)) = ∫_0^t s⟨ (N^'(t-r))^*(-Δ)^β_kK_δ, x,(N^'(s-r))^*(-Δ)^β_lK_δ, x⟩dr, Cov(u_δ,x^Δ_i(t), v_δ,x^Δ_k(s)) = ∫_0^t s⟨ N^*(t-r)(-Δ)^α_i K_δ,x,(N^'(s-r))^*(-Δ)^β_kK_δ, x⟩dr. We can rewrite the last equations through the functional calculus by using <Ref>, the representations (<ref>) and (<ref>), self-adjointness by <Ref> as well as <cit.>: Cov(u_δ,x^Δ_i(t), u_δ,x^Δ_j(s)) = δ^-2α_i-2α_j∫_0^t s⟨ N_δ,x(t-r)(-Δ_δ,x)^α_i K,N_δ,x(s-r)(-Δ_δ,x)^α_j K ⟩_L^2(Λ_δ,x)dr, Cov(v_δ,x^Δ_k(t), v_δ,x^Δ_l(s)) = δ^-2β_k-2β_l∫_0^t sN^'_δ,x(t-r)(-Δ_δ,x)^β_kKN^'_δ,x(s-r)(-Δ_δ,x)^β_lK_L^2(Λ_δ,x) r, Cov(u_δ,x^Δ_i(t), v_δ,x^Δ_k(s)) =δ^-2α_i-2β_k∫_0^t s⟨ N_δ,x(t-r)(-Δ_δ,x)^α_i K,N^'_δ,x(s-r)(-Δ_δ,x)^β_kK⟩_L^2(Λ_δ,x)dr. Let δ>0. Let z_1,z_2∈ L^2(^d) with compact support in ⋂_x∈𝒥Λ_δ,x such that there exist compactly supported functions z̅_1,z̅_2∈ H^2α_1(^d) with z_i=Δ_0^α_1z̅_i, i=1,2. As δ→ 0, we obtain the following convergences. * Let t≥0. Let β_1=0, i.e. B_η=η_1. Then, uniformly in x∈𝒥, δ^-2α_1N_δ,x(t)z_1N_δ,x(t)z_2_L^2(Λ_δ,x) → -e^η_1t1/2A̅_θ^-1z_1z_2_L^2(^d), M_δ,x(t)z_1M_δ,x(t)z_2_L^2(Λ_δ,x) → e^η_1t1/2z_1z_2_L^2(^d), δ^-α_1N_δ,x(t)z_1M_δ,x(t)z_2_L^2(Λ_δ,x) → 0. * Let r_1≠ r_2. Let β_1=0. Then, uniformly in x∈𝒥, δ^-2α_1N_δ,x(r_1)z_1N_δ,x(r_2)z_2_L^2(Λ_δ,x) → 0, M_δ,x(r_1)z_1M_δ,x(r_2)z_2_L^2(Λ_δ,x) → 0, δ^-α_1N_δ,x(r_1)z_1M_δ,x(r_2)z_2_L^2(Λ_Λ_δ,x) → 0. * Let 0<2β_1≤α_1, t∈(0,T]. Then, uniformly in x∈𝒥, δ^-2α_1-2β_1∫_0^tN_δ,x(r)^2 rz_1z_2_L^2(Λ_δ,x)→1/2B̅_η^-1A̅_θ^-1z_1z_2_L^2(^d), δ^-2β_1∫_0^t(N^'_δ,x(r))^2 rz_1z_2_L^2(Λ_δ,x)→ -1/2B̅_η^-1z_1z_2_L^2(^d), δ^-2β_1-α_1∫_0^tN_δ,x(r)N^'_δ,x(r) rz_1z_2_L^2(Λ_δ,x)→0. * Using (<ref>) we have N_δ,x(t)=δ^α_1 e^η_1t/2sin(tδ^-α_1L_θ,η,δ,x^1/2)L_θ,η,δ,x^-1/2, t ∈ [0,T], and thus δ^-2α_1N_δ,x(t)z_1N_δ,x(t)z_2_L^2(Λ_δ,x) =e^η_1tsin (tδ^-α_1L_θ,η,δ,x^1/2)L_θ,η,δ,x^-1/2z_1sin (tδ^-α_1L_θ,η,δ,x^1/2)L_θ,η,δ,x^-1/2z_2_L^2(Λ_δ,x). Since S_θ,η,δ,x(t)sin (tL_θ,η,δ,x)L_θ,η,δ,x^-1/2 is the operator sine function, which is generated by -L_θ,η,δ,x, and L_θ,η,δ,x→L̅_θ,η as δ→ 0, the desired convergence follows by repeating the steps of <cit.> regarding asymptotic equipartition of energy. Note that the assumptions in <cit.> can be relaxed, as we are not considering the non-parametric case. The employed strong resolvent convergence and the involved convergence of the spectral measures then follow from <cit.> by choosing the core C_c^∞(ℝ^d) as described in <cit.>. The convergences for the functional calculus associated with the respective spectral measures are then immediate, see <cit.>. 
Similarly, we observe M_δ,x(t)z_1M_δ,x(t)z_2_L^2(Λ_δ,x) =e^η_1t⟨(cos (tδ^-α_1L_θ,η,δ,x^1/2)-δ^α_1η_1/2sin(tδ^-α_1L_θ,η,δ,x^1/2)L_θ,η,δ,x^-1/2)z_1, (cos (tδ^-α_1L_θ,η,δ,x^1/2)-δ^α_1η_1/2sin(tδ^-α_1L_θ,η,δ,x^1/2)L_θ,η,δ,x^-1/2)z_2⟩_L^2(Λ_δ,x). Likewise, C_θ,η,δ,x(t)cos (tL_θ,η,δ,x^1/2) is the cosine operator function associated with the operator -L_θ,η,δ,x. <cit.> yields the representation C_θ, η,δ,x(tδ^-α_1)=1/2(U_θ,η,δ,x(tδ^-α_1)+U_θ,η,δ,x(-tδ^-α_1)) with the unitary group (U_θ,δ,x(t))_t∈ generated by i(L_θ,η,δ,x^1/2) on L^2(Λ_δ,x) and the steps of <cit.> can be repeated to verify convergence. Analogous calculations show δ^-α_1N_δ,x(t)z_1M_δ,x(t)z_2_L^2(Λ_δ,x)→ 0. All the above convergences hold uniformly in x ∈𝒥 since in the parametric case the convergences in <cit.> are uniform in x ∈𝒥 when applied to functions with support in ⋂_x∈𝒥Λ_δ,x. In fact, restricted to ⋂_x∈𝒥Λ_δ,x, the Laplacian Δ_δ,x is identical to Δ_δ,y for y∈𝒥 and the associated spectral measures become independent of the spatial point y ∈𝒥, when applied to functions with support in ⋂_x∈𝒥Λ_δ,x. * The convergences follow similarly to (i) by using the slow-fast orthogonality as presented in <cit.>. (iii) For readability, we suppress various indices throughout the remainder of the proof. Thus, we introduce the following notation: A A_θ,δ,x; B B_η,δ,x; L L_θ,η,δ,x; α=δ^α_1; β=δ^β_1. By definition of M, N-functions, substitution and the fundamental theorem of calculus, we then obtain δ^-2α_1-2β_1∫_0^tN_δ,x(r)^2 rz_1z_2_L^2(Λ_δ,x) =α^-2β^-2∫_0^tN_δ,x(r)^2 rz_1z_2_L^2(Λ_δ,x) =∫_0^tβ^-2e^rBsin^2(rα^-1β^2L^1/2)L^-1 rz_1z_2_L^2(Λ_δ,x) =⟨(e^tβ^-2Bsin^2(tα^-1L^1/2)B^2-2α^-1β^2e^tβ^-2BBL^1/2cos(tα^-1L^1/2)sin(tα^-1L^1/2) .+2α^-2β^4(e^tβ^-2B-I)L)B^-1L^-1(4α^-2β^4L+B^2)^-1z_1,z_2⟩_L^2(Λ_δ,x). We can rewrite the last display as 1/2B^-1A^-1z_1z_2_L^2(Λ_δ,x) -α^2β^-4/4e^tβ^-2Bsin^2(tα^-1L^1/2)L^-1BA^-1z_1z_2_L^2(Λ_δ,x) +αβ^-2/2e^tβ^-2Bcos(tα^-1L^1/2)sin(tα^-1L^1/2)L^-1/2A^-1z_1z_2_L^2(Λ_δ,x) -1/2e^tβ^-2BB^-1A^-1z_1z_2_L^2(Λ_δ,x). Since A=A_θ,δ,x converges to A̅_θ, (<ref>) converges to 1/2B̅_η^-1A̅_θ^-1z_1z_2_L^2(^d), while (<ref>), (<ref>) and (<ref>) tend to zero by the Cauchy-Schwarz inequality and <Ref>. As we will integrate (<ref>) on the time interval [0,T] in <Ref>, we will already compute a uniform upper bound of (<ref>), enabling the usage of the dominated convergence theorem. By the Cauchy-Schwarz inequality and <Ref> we obtain for a constant C independent of the spatial point x and the resolution level δ>0: ∫_0^tβ^-2e^rBsin^2(rα^-1β^2L^1/2)L^-1 rz_1z_2_L^2(Λ_δ,x) ≤∫_0^Tδ^-2β_1e^rB_η,δ,x/2L_θ,η,δ,x^-1/2z_1_L^2(Λ_δ,x)e^rB_η,δ,x/2L_θ,η,δ,x^-1/2z_2_L^2(Λ_δ,x) r ≤∫_0^∞ C(1∧ r^-α_1/(2β_1)))^2 r V<∞. Similarly to (<ref>), as δ→ 0, we obtain δ^-2β_1∫_0^t(N^'_δ,x(r))^2 rz_1z_2_L^2(Λ_δ,x) =δ^-2β_1∫_0^t(M_δ,x(r)+δ^-2β_1B_η,δ,xN_δ,x(r))^2 rz_1z_2_L^2(Λ_δ,x) =∫_0^tβ^-2e^rB(cos(rα^-1β^2L^1/2)+αβ^-2/2Bsin(rα^-1β^2L^1/2))^2 rz_1z_2_L^2(Λ_δ,x) =⟨(α^2β^-4Be^tβ^-2Bsin^2(tα^-1L^1/2)L^-1/4. +αβ^-2e^tβ^-2Bcos(tα^-1L^1/2)sin(tα^-1L^1/2)L^-1/2/2 .+(e^tβ^-2B-I)B^-1/2)z_1,z_2⟩_L^2(Λ_δ,x) →-1/2B̅_η^-1z_1z_2_L^2(^d), and δ^-2β_1-α_1∫_0^tN_δ,x(r)N^'_δ,x(r) rz_1z_2_L^2(Λ_δ,x) =δ^-2β_1-α_1∫_0^tN_δ,x(r)(M_δ,x(r)+δ^-2β_1B_η,δ,xN_δ,x(r)) rz_1z_2_L^2(Λ_δ,x) =⟨∫_0^tβ^-2e^rBsin(rα^-1β^2L^1/2)L^-1/2 ·(cos(rα^-1β^2L^1/2)+αβ^-2/2Bsin(rα^-1β^2L^1/2)L^-1/2) rz_1,z_2⟩_L^2(Λ_δ,x) =1/2αβ^-2e^tβ^-2Bsin^2(tα^-1L^1/2)L^-1z_1z_2_L^2(Λ_δ,x)→0, again having a uniform upper bound in analogy to (<ref>). Grant <Ref> (i)-(iii) and suppose (u_0,v_0)^⊤=(0,0)^⊤. 
Recall the definition of C(η_1,T) given by (<ref>) and let β_1=0. Then, for 1≤ i,j≤ p and 1≤ k,l≤ q, we obtain, as δ→ 0, the convergences sup_x∈𝒥|δ^2(α_i+α_j-α_1)∫_0^TCov(u_δ,x^Δ_i (t),u_δ,x^Δ_j) t+C(η_1,T)/θ_1(-Δ)^(α_i+α_j-α_1)/2K^2_L^2(^d)|→0; sup_x∈𝒥|∫_0^TVar(v_δ,x (t)) t-C(η_1,T)K^2_L^2(^d)|→0; sup_x∈𝒥|δ^2(α_i-α_1/2)∫_0^TCov(u_δ,x^Δ_i (t),v_δ,x(t)) t|→0. If, 0<2β_1≤α_1 we obtain the convergences sup_x∈𝒥|δ^2(α_i+α_j-α_1-β_1)∫_0^TCov(u_δ,x^Δ_i (t),u_δ,x^Δ_j) t-T/2η_1θ_1(-Δ)^(α_i+α_j-α_1-β_1)/2K_L^2(^d)^2|→0; sup_x∈𝒥|δ^2(β_k+β_l-β_1)∫_0^TCov(v_δ,x^Δ_k (t),v_δ,x^Δ_l(t)) t+T/2η_1(-Δ)^(β_k+β_l-β_1)/2K_L^2(^d)^2|→0; sup_x∈𝒥|δ^2(α_i+β_k-β_1-α_1/2)∫_0^TCov(u_δ,x^Δ_i (t),v_δ,x^Δ_k(t)) t|→0. With the majorant constructed in (<ref>) in case of (<ref>) or directly by <Ref> for (<ref>), we obtain uniformly in x∈𝒥 by <Ref>, <Ref>(i),(iii) and the dominated convergence theorem: δ^2(α_i+α_j-α_1-β_1)∫_0^TCov(u_δ,x^Δ_i (t),u_δ,x^Δ_j(t)) t =δ^-2α_1-2β_1∫_0^T∫_0^tN_δ,x(r)^2(-Δ_0)^α_iK(-Δ_0)^α_jK_L^2(Λ_δ,x) r t = -∫_0^T∫_0^te^η_1r1/2A̅_θ^-1(-Δ_0)^α_iK(-Δ_0)^α_jK_L^2(^d) r t+o(1), β_1=0, ∫_0^T1/2B̅_η^-1A̅_θ^-1(-Δ_0)^α_iK(-Δ_0)^α_jK_L^2(^d) t+o(1), β_1>0, = -C(η_1,T)/θ_1(-Δ_0)^(α_i+α_j-α_1)/2K_L^2(^d)^2+o(1), β_1=0, T/2η_1θ_1(-Δ_0)^(α_i+α_j-α_1-β_1)/2K_L^2(^d)^2+o(1), β_1>0. This proves (<ref>) and (<ref>). Analogously, (<ref>) and (<ref>) as well as (<ref>) and (<ref>) follow using the remaining convergences in <Ref>. Grant <Ref> (i)-(iii) and let (u_0,v_0)^⊤=(0,0)^⊤. Then, for 1≤ i,j≤ p, 1≤ k,l≤ q, we observe sup_x∈𝒥Var(∫_0^Tu_δ,x^Δ_i (t)u_δ,x^Δ_j(t) t) =o(δ^-4(α_i+α_j-α_1-β_1)); sup_x∈𝒥Var(∫_0^Tv_δ,x^Δ_k(t)v_δ,x^Δ_l(t) t) =o(δ^-4(β_k+β_l-β_1)); sup_x∈𝒥Var(∫_0^Tu_δ,x^Δ_i (t)v_δ,x^Δ_k(t) t) =o(δ^-4(α_i+β_k-β_1-α_1/2)). We only show the assertion for (<ref>). The other two statements (<ref>) and (<ref>) follow in the same way. By Wick's formula <cit.> it holds δ^4(α_i+α_j-α_1-β_1)Var(∫_0^Tu_δ,x^Δ_i (t)u_δ,x^Δ_j(t) t)=δ^4(α_1+α_j-α_1-β_1)(V_1+V_2) with V_1 V((-Δ)^α_iK_δ,x,(-Δ)^α_iK_δ,x,(-Δ)^α_jK_δ,x,(-Δ)^α_jK_δ,x), V_2 V((-Δ)^α_iK_δ,x,(-Δ)^α_jK_δ,x,(-Δ)^α_jK_δ,x,(-Δ)^α_iK_δ,x), and V(w,w',z,z')∫_0^T∫_0^TCov(u(t)w,u(s)w')Cov(u(t)z,u(s)z') s t, for w,w',z,z'∈ L^2(Λ). By <Ref> and rescaling, we obtain the representation δ^4(α_i+α_j-α_1-β_1)V_1 = ∫_0^T∫_0^TCov(u_δ, x^Δ_i(t),u_δ,x^Δ_i(s))Cov(u_δ,x^Δ_j(t),u_δ,x^Δ_j(s)) ds dt =2δ^2β_1∫_0^T∫_0^tδ^-2β_1(∫_0^tδ^-2β_1-sf_i,i(s,s') s') (∫_0^tδ^-2β_1-sf_j,j(s,s”) s”) s t, where, for s,s'∈ [0, Tδ^-2β_1], we have set f_i,j(s,s') ⟨e^(s+s')B_η,δ,x/2sin(δ^-α_1+2β_1(s+s')L_θ,η,δ,x^1/2)L_θ,η,δ,x^-1/2(-Δ_δ,x)^α_iK, e^s'B_η,δ,x/2sin(δ^-α_1+2β_1(s')L_θ,η,δ,x^1/2)L_θ,η,δ,x^-1/2(-Δ_δ,x)^α_jK⟩_L^2(Λ_δ,x). In case that β_1=0, we use the pointwise convergences f_i,i(s,s')→ 0 and f_j,j(s,s”)→0, given by the slow-fast orthogonality in <Ref>(ii), and dominated convergence over fixed finite time intervals to prove the claim directly from the representation (<ref>). If, however, β_1>0, we use <Ref> (ii), i.e. K=Δ_0^⌈α_1⌉K̃, and <Ref> such that sup_x∈𝒥|f_i,i(s',s)| ≲ (1∧ (s+s')^-(α_i+α_1/2)/β_1)(1∧ s^-(α_i+α_1/2)/β_1) ≲(1∧ s'^-1)(1∧ s^-1). Thus implies sup_x∈𝒥|V_1|=O(δ^-4(α_i+α_j-α_1-β_1)δ^2β_1log(δ^-2β_1))=o(δ^-4(α_i+α_j-α_1-β_1)). The arguments for V_2 follow in the same way be replacing f_i,i and f_j,j with f_i,j and f_j,i in (<ref>), respectively. The assertion follows in view of (<ref>). Grant <Ref> (i) (ii) and (iv). 
Then, for 1≤ i≤ p, 1≤ j≤ q, we have (i) sup_x∈𝒥δ^4α_i-2α_1-2β_1(∫_0^TM(t)u_0+N(t)v_0(-Δ)^α_i K_δ,x^2 t)=o(1); (ii) sup_x∈𝒥δ^4β_j-2β_1(∫_0^TA_θ N(t)u_0+(M(t)+B_η(N(t))v_0(-Δ)^β_jK_δ,x^2 t)=o(1). (i) Define the reverse scaling operation for z∈ L^2(^d) via z_(δ,x)^-1(y)δ^d/2z(x+δ y), y∈^d. The rescaling <Ref>, self-adjointness and the commutativity of operators imply M(t)u_0(-Δ)^α_i K_δ,x^2 =δ^-4α_i(u_0)_(δ,x)^-1M_δ,x(t)(-Δ_δ,x)^α_i K_L^2(Λ_δ,x)^2 =δ^-4α_i+4α_1((-Δ)^α_1 u_0)_(δ,x)^-1M_δ,x(t) (-Δ_δ,x)^α_i-α_1K_L^2(Λ_δ,x)^2 ≲δ^-4α_i+4α_1(-Δ)^α_1 u_0^2_L^2(Λ)e^tδ^-2β_1B_η,δ,x (-Δ_δ,x)^α_i-α_1K^2_L^2(Λ_δ,x) ≲δ^-4α_i+4α_1e^tδ^-2β_1B_η,δ,x (-Δ_δ,x)^α_i-α_1K^2_L^2(Λ_δ,x). Thus, using <Ref> and that K=Δ_0^⌈α_1⌉K̃, we obtain the upper bound sup_x∈𝒥δ^4α_i-2α_1-2β_1∫_0^TM(t)u_0(-Δ)^α_i K_δ,x^2 t ≲δ^2α_1∫_0^Tδ^-2β_1sup_x∈𝒥e^tB_η,δ,x (-Δ_δ,x)^α_i-α_1K^2_L^2(Λ_δ,x) t ≲δ^2α_1∫_0^Tδ^-2β_11∧ t^-α_i/β_1 t =O(δ^2(α_1-β_1))=o(1). Similarly, N(t)v_0(-Δ)^α_i K_δ,x^2 =δ^-4α_i(v_0)_(δ,x)^-1N_δ,x(t)(-Δ_δ,x)^α_i K_L^2(Λ_δ,x)^2 =δ^-4α_i+2α_1((-Δ)^α_1/2v_0)_(δ,x)^-1N_δ,x(t)(-Δ_δ,x)^α_i-α_1/2Δ K_L^2(Λ_δ,x)^2 ≲δ^-4α_i+4α_1e^tδ^-2β_1B_η,δ,xL_θ,η,δ,x^-1/2(-Δ_δ,x)^α_i-α_1/2 K^2_L^2(Λ_δ,x) Hence, sup_x∈𝒥δ^4α_i-2α_1-2β_1∫_0^TN(t)v_0Δ K_δ,x^2 t ≲δ^2α_1∫_0^Tδ^-2β_1sup_x∈𝒥e^tB_η,δ,xL_θ,η,δ,x^-1/2(-Δ_δ,x)^α_i-α_1/2 K^2_L^2(Λ_δ,x) t =O(δ^2(α_1-β_1))=o(1), proving the assertion. (ii) The steps from (i) can be repeated, resulting in sup_x∈𝒥δ^4β_j-2β_1(∫_0^TA_θ N(t)u_0+(M(t)+B_η(N(t))v_0(-Δ)^β_jK_δ,x^2 t) =O(δ^2(α_1-β_1))=o(1). §.§ Proof of the CLT * Assume first that (u_0,v_0)^⊤=(0,0)^⊤. For any 1≤ i,j≤ p+q, we obtain from <Ref> and <Ref> that (ρ_δℐ_δρ_δ)_ij =ρ_iiρ_jj∑_k=1^N∫_0^T(Y_δ, k(t))_i(Y_δ,k(t))_j t =(Σ_θ,η)_ij+o_(1). This yields for zero initial conditions the convergence (ρ_δℐ_δρ_δ)→Σ_θ, δ→0. In order to extend this result to a general initial condition (u_0,v_0)^⊤ satisfying <Ref> (iii), let (u̅(t),v̅(t))^⊤ be defined as (u(t),v(t))^⊤, but starting in (0,0)^⊤ such that for z∈ L^2(Λ) u(t)z =u̅(t)z+M(t)u_0z+N(t)v_0z, v(t)z =v̅(t)z+A_θ N(t)u_0z+(M(t)+B_η N(t))v_0z. If ℐ̅_δ corresponds to the observed Fisher information with zero initial condition, then by the Cauchy-Schwarz inequality |(ρ_δℐ_δρ_δ)_ij-(ρ_δℐ̅_δρ_δ)_ij|≲(ρ_δℐ̅_δρ_δ)_ii^1/2w_j^1/2+(ρ_δℐ̅_δρ_δ)_jj^1/2w_i^1/2+w_i^1/2w_j^1/2, for all 1≤ i,j≤ p+q, where w_i=sup_x∈𝒥Nρ_ii^2(∫_0^TM(t)u_0+N(t)v_0(-Δ)^α_i K_δ,x^2 t), 1≤ i≤ p, sup_x∈𝒥Nρ_ii^2(∫_0^TA_θ N(t)u_0+(M(t)+B_η(N(t))v_0(-Δ)^β_i K_δ,x^2 t), else. By the first part, (ρ_δℐ̅_δρ_δ)_ii is bounded in probability and <Ref> shows w_i=o(1). Hence, we obtain (<ref>) also in the case of non-zero initial conditions. Due to <Ref> (i)-(iii), Σ_θ,η is well-defined as all entries are finite. Regarding invertibility, note first that Σ_θ,η is invertible if and only if both Σ_1,θ,η and Σ_2,θ,η are invertible. We only argue that Σ_1,θ,η is invertible as the argument for Σ_2, θ,η is identical. Let λ∈^p such that 0=∑_i,j=1^pλ_iλ_j(Σ_1,θ,η)_ij 0=∑_i,j=1^pλ_iλ_j(-Δ)^(α_i+α_j-α_1-β_1)K^2_L^2(^d). Now, 0 =∑_i,j=1^pλ_iλ_j(-Δ)^(α_i+α_j-α_1-β_1)K^2_L^2(^d) =∑_i=1^pλ_i(-Δ)^α_i-(α_1+β_1)/2K∑_i=1^pλ_i(-Δ)^α_i-(α_1+β_1)/2K_L^2(^d), hence ∑_i=1^pλ_i(-Δ)^α_i-(α_1+β_1)/2K=0. Since the functions (-Δ)^α_i-(α_1+β_1)/2K ,1≤ i≤ p, are linearly independent by <Ref> (iii), Σ_1,θ,η is invertible. * We refer to <cit.> for a detailed proof of the CLT in the case of the perturbed stochastic heat equation, which relies on a general multivariate martingale central limit theorem. 
All steps translate directly into our setting due to the stochastic convergence ρ_δℐ_δρ_δ→Σ_θ,η from (i).

Acknowledgement. AT gratefully acknowledges the financial support of Carlsberg Foundation Young Researcher Fellowship grant CF20-0604. The research of EZ has been partially funded by Deutsche Forschungsgemeinschaft (DFG)—SFB1294/1-318763901.
http://arxiv.org/abs/2407.13482v1
20240718130121
Simple matrix models for the flag, Grassmann, and Stiefel manifolds
[ "Lek-Heng Lim", "Ke Ye" ]
math.NA
[ "math.NA", "cs.NA", "math.DG", "14M15, 65J05, 90C48, 53Z30, 57S25, 22E70" ]
L.-H. Lim]Lek-Heng Lim Computational and Applied Mathematics Initiative, Department of Statistics, University of Chicago, Chicago, IL 60637-1514. lekheng@uchicago.edu K. Ye]Ke Ye KLMM, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China keyk@amss.ac.cn § ABSTRACT We derive three families of orthogonally-equivariant matrix submanifold models for the Grassmann, flag, and Stiefel manifolds respectively. These families are exhaustive — every orthogonally-equivariant submanifold model of the lowest dimension for any of these manifolds is necessarily a member of the respective family, with a small number of exceptions. They have several computationally desirable features. The orthogonal equivariance allows one to obtain, for various differential geometric objects and operations, closed-form analytic expressions that are readily computable with standard numerical linear algebra. The minimal dimension aspect translates directly to a speed advantage in computations. And having an exhaustive list of all possible matrix models permits one to identify the model with the lowest matrix condition number, which translates to an accuracy advantage in computations. As an interesting aside, we will see that the family of models for the Stiefel manifold is naturally parameterized by the Cartan manifold, i.e., the positive definite cone equipped with its natural Riemannian metric. Simple matrix models for the flag, Grassmann, and Stiefel manifolds [ July 22, 2024 =================================================================== § INTRODUCTION As abstract manifolds, the Grassmannian is a set of subspaces, the flag manifold a set of nested sequences of subspaces, and the Stiefel manifold a set of frames. To work with these manifolds, not least performing computations, one needs a model, i.e., a system of extrinsic coordinates, for them. From the perspective of numerical computations, the best models are matrix models, representing these manifolds by a quotient manifold or submanifold of matrices, thereby permitting differential geometric objects to be expressed in terms of matrix operations computable with standard numerical linear algebra. Among such models, submanifold models are preferable to quotient manifold models as points and tangent vectors are represented by actual matrices instead of equivalence classes of matrices, which require artificial choices and additional computations to further represent as actual matrices, bearing in mind that standard numerical linear algebra only works with actual matrices, not equivalence classes of matrices. For numerical stability, these models should be equivariant under orthogonal group action, as orthogonal matrices are the basis of numerically stable algorithms. For the Grassmannian and flag manifold, these considerations lead us to the following models. Let ^2(ℝ^n) be the space of n × n real symmetric matrices, a,b ∈ℝ distinct and a_1,…, a_p+1∈ℝ generic real numbers. We will show that the quadratic model _a,b(k,n){ X ∈^2(ℝ^n) : (X - aI) (X - bI) = 0, (X) = ak + b(n-k) } is diffeomorphic to (k, ℝ^n), the Grassmannian of k-planes in ℝ^n; and the isospectral model _a_1,…, a_p+1(k_1,…, k_p, n) { X ∈^2(ℝ^n) : ∏_j=0^p (X - a_j+1 I_n) = 0, (X) = ∑_j=0^p (k_j+1 - k_j) a_j+1} is diffeomorphic to (k_1,…,k_p, ℝ^n), the manifold of (k_1,…,k_p)-flags in ℝ^n. Here we have assumed that 0 < k <n and 0 k_0 < k_1 < … < k_p+1 n are all integers. Evidently the quadratic model is the p = 1 case of the isospectral model. 
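As a quick numerical illustration (ours; the values of n, k, a, b below are arbitrary), the following NumPy sketch builds a matrix in the parametric form Q diag(a I_k, b I_{n-k}) Q^⊤ and checks the implicit description of the quadratic model, i.e., the quadratic matrix equation and the trace constraint above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
a, b = 1.0, -1.0                        # any distinct a, b work; these values are arbitrary

# Parametric form: X = Q diag(a I_k, b I_{n-k}) Q^T with Q in SO(n).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                       # flip one column so that det Q = +1
X = Q @ np.diag([a] * k + [b] * (n - k)) @ Q.T

# Implicit form: X symmetric, (X - a I)(X - b I) = 0, tr X = a k + b (n - k).
I = np.eye(n)
assert np.allclose(X, X.T)
assert np.allclose((X - a * I) @ (X - b * I), 0, atol=1e-12)
assert np.isclose(np.trace(X), a * k + b * (n - k))
```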
The quadratic model is so-called because the roots of a monic quadratic matrix polynomial (X - aI) (X - bI) = 0 are called quadratic matrices <cit.>, well-known in studies of numerical range <cit.>. The special cases (a,b)=(1,0) gives projection matrices with X^2 = X and (a,b)=(1,-1) gives involutory matrices with X^2 = I. So the classical projection model _1,0(k,n) = {X ∈^2(ℝ^n) : X^2 =X, (X) = k } <cit.> and the more recent involution model _1,-1(k,n) = {X ∈^2(ℝ^n) : X^2 =I, (X) = 2k - n} <cit.> are both special cases of the quadratic model. The descriptions of these models above are implicit. We will prove in Section <ref> that they are equivalent to the following explicit descriptions: _a,b(k,n) = { Q [ a I_k 0; 0 b I_n-k ] Q^∈^2(ℝ^n) : Q ∈_n(ℝ) }, _a_1,…, a_p+1(k_1,…, k_p, n) ={ Q [ a_1 I_n_1 0 ⋯ 0; 0 a_2 I_n_2 ⋮; ⋮ ⋱ 0; 0 ⋯ 0 a_p+1 I_n_p+1 ] Q^∈^2(ℝ^n) : Q∈_n(ℝ) }, where _n(ℝ) is the set of orthogonal matrices with unit determinant. This matrix manifold has been in existence for more than thirty years <cit.> and is known by the names isospecral manifold or spectral manifold. What is not known is that it is just a parameterization of the flag manifold, which is of course classical and known for nearly 120 years since <cit.> or, in its modern form, for at least fifty years since <cit.>. To the best of our knowledge, the connection that they are one and the same has never been made, a minor side contribution of this article. One might also ask how these discussions apply to the Stiefel manifold <cit.> of orthonormal k-frames in n-space. As an addendum, we will show in Section <ref> that for the Stiefel manifold, there is an analogous family of minimal _n(ℝ)-equivariant models, the Cholesky models _A(k,n) { X ∈ℝ^n × k : X^ X = A }, parameterized by A ∈^2_(ℝ^k), the set of k × k symmetric positive definite matrices. This description is implicit; but Cholesky models too have an explicit description that justifies the name, _A (k,n) = { Q [ R; 0 ]: Q∈Ø_n(ℝ) }, where R ∈_k(ℝ) is the unique Cholesky factor with positive diagonal of A. It will become clear in Section <ref> that the relevant structure on ^2_(ℝ^k) is that of a Riemannian manifold with metric (A^-1XA^-1Y), and not its more common structure as an Euclidean cone. Note that A = I gives the usual model of the Stiefel manifold as n × k matrices with orthonormal columns but more generally a Cholesky model allows for A^-1-orthonormal columns.[Fortuitously Stiefel also pioneered the use of A^-1-orthonormality in computational mathematics through his conjugate gradient method <cit.>.] §.§ Computational significance From a computational perspective, these families of models have two desirable features and are unique in this regard: * orthogonal equivariance: Let Q ∈Ø(n). Then X ∈^2(ℝ^n) is in an isospectral model iff Q^ X Q is in the model; X ∈ℝ^n × k is in a Cholesky model iff QX is in the model. * minimal dimension: There is no model for the flag (resp. Stiefel) manifold in an ambient space of dimension smaller than 1/2 (n-1)(n+2) (resp. nk) with property (<ref>). * exhaustive: Any model for the flag (resp. Stiefel) manifold with properties (<ref>) and (<ref>) must be among the family of isospectral (resp. Cholesky) models. Property (<ref>) is key to deriving closed-form analytic expressions for differential geometric quantities in terms of standard matrix operations stably computable with numerical linear algebra <cit.>. 
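For concreteness, the following small NumPy sketch (ours, not from the derivations below) checks the orthogonal equivariance numerically: conjugating a point of a quadratic model by an orthogonal matrix, or multiplying a point of a Cholesky model on the left by one, stays in the same model.

```python
import numpy as np

rng = np.random.default_rng(1)

def special_orthogonal(n):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

n, k = 7, 3
a, b = 1.0, -1.0
I = np.eye(n)
Q = special_orthogonal(n)               # the group element acting

# Quadratic model: conjugation by Q maps the model to itself.
Q0 = special_orthogonal(n)
X = Q0 @ np.diag([a] * k + [b] * (n - k)) @ Q0.T
Y = Q @ X @ Q.T
assert np.allclose((Y - a * I) @ (Y - b * I), 0, atol=1e-10)
assert np.isclose(np.trace(Y), a * k + b * (n - k))

# Cholesky model: left multiplication by Q preserves Z^T Z = A.
G = rng.standard_normal((k, k))
A = G @ G.T + k * np.eye(k)             # an arbitrary SPD parameter
R = np.linalg.cholesky(A).T             # upper-triangular factor, A = R^T R
Z = special_orthogonal(n)[:, :k] @ R    # a point of the model V_A(k, n)
assert np.allclose(Z.T @ Z, A, atol=1e-10)
assert np.allclose((Q @ Z).T @ (Q @ Z), A, atol=1e-10)
```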
Property (<ref>) ensures that points on the manifold are represented with matrices of the lowest possible dimension, which is important as the speed of every algorithm in numerical linear algebra depends on the dimension of the matrices. The current lowest dimensional matrix model of a flag manifold is obtained by embedding (k_1,…, k_p;ℝ^n) into a product of Grassmannians as in (<ref>). Even if we use the lowest dimensional models for these Grassmannians, the result would involve matrices of dimension np × np whereas the isospectral model uses only n × n matrices. For accuracy, the matrix condition number plays a role analogous to matrix dimension for speed. Property (<ref>) allows us to pick from among the respective family of models the one with the smallest condition number. Every matrix in an isospectral model _a_1,…, a_p+1(k_1,…, k_p, n) has the same eigenvalues a_1,…, a_p+1 and therefore condition number max (| a_1 |,…,| a_p+1|) / min (| a_1 |,…,| a_p+1|). For the Grassmannian, p = 1, and there is a unique (up to a constant multiple) perfectly conditioned model with a=1, b=-1, which is precisely the involution model in <cit.>. For more general flag manifolds with p > 1, the condition number can be made arbitrarily close to one. For the Stiefel manifold, the usual model with A = I plays the role of the involution, namely, as the unique (up to a constant multiple) perfectly conditioned model among all Cholesky models.

§.§ Contributions The main intellectual effort of this article is to establish property (<ref>) and half of property (<ref>) by deriving the isospectral model (Theorem <ref>, Corollary <ref>) and demonstrating that we may choose a_1,…, a_p+1 so that we get a submanifold of ^2_(ℝ^n), the space of traceless symmetric matrices with dim ^2_(ℝ^n) = 1/2 (n-1)(n+2) (Corollary <ref>). To demonstrate the other half of property (<ref>), i.e., that no lower dimensional model exists, and to prove property (<ref>), one needs an amount of representation theory far beyond the scope of our article, so this part is relegated to <cit.>. The orthogonal equivariance in property (<ref>) deserves elaboration. Firstly we really do mean “orthogonal” — every result in this article remains true if _n(ℝ) is replaced by Ø_n(ℝ). Secondly we stress the importance of “equivariance.” The Riemannian metric is often regarded as the centerpiece of any computations involving manifolds, not least in manifold optimization. This is getting things backwards. What is by far more important is equivariance, or, as a special case, invariance. There are uncountably many Riemannian metrics on the flag manifold even after discounting constant multiples. The most important Riemannian metrics are the ones that are _n(ℝ)-invariant; note that the standard Euclidean inner product on ℝ^n or ℝ^m × n has this property. In the case of the flag, Grassmann, and Stiefel manifolds, there is an even more special one — the _n(ℝ)-invariant Riemannian metric that comes from the unique bi-invariant metric on _n(ℝ). It is the key to deriving explicit expressions for differential geometric objects in terms of standard matrix operations readily computable with numerical linear algebra. A second goal of this article is to show that the Riemannian metrics arising from our equivariant embeddings of the flag, Grassmann, and Stiefel manifolds are, up to constant weights, the ones arising from the bi-invariant metric on _n(ℝ) (Section <ref>).
As a perusal of the computational mathematics literature would reveal, equivariance has never been brought into center stage. We hope that our article would represent a small step in this direction. § NOTATIONS AND SOME BACKGROUND We generally use blackboard bold fonts for vector spaces and double brackets · for equivalence classes. On two occasions we write ℚ_j for a subspace spanned by columns of an orthogonal matrix Q, which should not cause confusion as the rational field has no role in this article. We reserve the letter 𝕎 for ℝ-vector spaces and 𝕍 for _n(ℝ)-modules, usually adorned with various subscripts. We write ≅ for diffeomorphisms of manifolds and ≃ for isomorphisms of vector spaces and modules. §.§ Linear algebra The real vector spaces of real symmetric, skew-symmetric, and traceless symmetric matrices will be denoted respectively by ^2(ℝ^n) { X ∈ℝ^n × n : X^ = X }, ^2(ℝ^n) { X ∈ℝ^n × n : X^ = -X }, ^2_(ℝ^n) { X ∈ℝ^n × n : X^ = X, (X) = 0 }. For the cone of real symmetric positive definite matrices, we write ^2_(ℝ^n) { X ∈ℝ^n × n : y^ X y > 0 for all y 0}. The Lie groups of real orthogonal, special orthogonal, and general linear groups will be denoted respectively by Ø_n(ℝ) { X ∈ℝ^n × n : X^ X = I }, _n(ℝ) { X ∈ℝ^n × n : X^ X = I, (X) = 1 }, _n(ℝ) { X ∈ℝ^n × n : (X) 0}. Let n_1 + … + n_p = n. We will also write (Ø_n_1(ℝ) ×⋯×Ø_n_p(ℝ) ) {(X_1,…,X_p) ∈Ø_n(ℝ) : X_1 ∈Ø_n_1(ℝ),…, X_p ∈Ø_n_p(ℝ), ( X_1) ⋯(X_p) = 1 }. Note that _n(ℝ) is a special case of this. For each increasing sequence 0 k_0 < k_1 < ⋯ < k_p < k_p+1 n, we also define the parabolic subgroup of block upper triangular matrices: _k_1,…, k_p(ℝ) = {[ X_1,1 ⋯ X_1,p+1; ⋮ ⋱ ⋮; 0 ⋯ X_p+1,p+1; ]∈_n(ℝ): X_ij∈ℝ^(k_i - k_i-1) × (k_j - k_j-1), i,j = 1,…, p+1 }. Let G be a group and 𝕍 a vector space. We say that 𝕍 is a G-module if there is a linear group action G ×𝕍→𝕍, (g,v) ↦ g · v, i.e., satisfying g · (a_1 v_1 + a_2 v_2) = a_1 g · v_1 + a_2 g · v_2 for any g∈ G, a_1,a_2∈ℝ, and v_1,v_2∈𝕍. In this paper, we will mostly limit ourselves to G = _n(ℝ) and two group actions on ℝ^n and ℝ^n× n respectively: _n(ℝ) ×ℝ^n →ℝ^n, (Q, v) ↦Q ·v Q v, _n(ℝ) ×ℝ^n ×n →ℝ^n ×n, (Q, X) ↦Q ·X Q X Q^. All matrix subspaces in (<ref>) are also _n(ℝ)-modules under the conjugation action (<ref>). If 𝕍 is a G-module, then a direct sum of multiple copies 𝕍⊕…⊕𝕍 is automatically a G-module with action Q · (v_1,…,v_k) (Q · v_1,…, Q · v_k). So ℝ^n × k = ℝ^n ⊕…⊕ℝ^n is also an _n(ℝ)-module under the action (<ref>). §.§ Differential geometry We write (k,ℝ^n), (k_1,…,k_p,ℝ^n), and (k,ℝ^n) respectively for the flag, Grassmann, and Stiefel manifold as abstract manifolds. An element of (k,ℝ^n) is a subspace 𝕎⊆ℝ^n, 𝕎 = k. An element of (k_1,…,k_p,ℝ^n) is a flag, i.e., a nested sequence of subspaces 𝕎_1 ⊆…⊆𝕎_p ⊆ℝ^n, 𝕎_i = k_i, i=1,…,p. An element of (k,ℝ^n) is an orthonormal k-frame (w_1,…,w_k) in ℝ^n. The abstract Grassmann and flag manifolds are G-manifolds for any subgroup G ⊆_n(ℝ), i.e., they come naturally equipped with a G-action. For Grassmann manifolds, take any 𝕎∈(k, ℝ^n) and any X ∈ G ⊆_n(ℝ), the action is given by X ·𝕎{ Xw ∈ℝ^n : w ∈𝕎}, noting that X ·𝕎 = 𝕎. This action extends to (k_1,…, k_p,ℝ^n) since if 𝕎_1 ⊆…⊆𝕎_p, then X ·𝕎_1 ⊆…⊆ X ·𝕎_p for any X ∈ G ⊆_n(ℝ). So we may write X · (𝕎_1 ⊆…⊆𝕎_p ) (X ·𝕎_1 ⊆…⊆ X ·𝕎_p), again noting that since dimensions are preserved we have a well-defined action. If G is any of the groups in (<ref>), then this action is transitive. 
The abstract Stiefel manifold is a G-manifold for any subgroup G ⊆Ø_n(ℝ) via the action X · (w_1,…,w_k) (Xw_1,…, Xw_k) for any (w_1,…,w_k) ∈(k,ℝ^n). This action is transitive if G = Ø_n(ℝ) or _n(ℝ). A notion central to this article is that of an equivariant embedding, which has been extensively studied in a more general setting <cit.>. Let G be a group, 𝕍 be a G-module, and ℳ be a G-manifold. An embedding ε: ℳ→𝕍 is called a G-equivariant embedding if ε(g · x) = g ·ε(x) for all x∈ℳ and g ∈ G. In this case, the embedded image ε(ℳ) is called a G-submanifold of 𝕍. For this article, we only need to know two special cases. For the flag manifold (and thus Grassmannian when p =1), G = _n(ℝ), ℳ = (k_1,…,k_p, ℝ^n), 𝕍 = ℝ^n × n, G acts on 𝕍 via (<ref>) and on ℳ via (<ref>). For the Stiefel manifold, G = _n(ℝ), ℳ = (k,n), 𝕍 = ℝ^n× k, G acts on 𝕍 via (<ref>) and on ℳ via (<ref>). § EXISTING MODELS OF THE FLAG MANIFOLD The reader may find a list of all known existing matrix models of the Grassmannian and Stiefel manifold in <cit.>. Models for the flag manifold are more obscure and we review a few relevant ones here. The model most commonly used in pure mathematics is as a quotient manifold, (k_1,…, k_p,ℝ^n) ≅_n(ℝ)/_k_1,…, k_p(ℝ), with the parabolic subgroup as defined in Section <ref>. As for known[The authors of <cit.> did not appear to know that they are discussing the flag manifold.] models of the flag manifold in applied mathematics, the only ones we are aware of were first proposed in <cit.>. We will highlight two: There is the orthogonal analogue of (<ref>), as the quotient manifold (k_1,…, k_p,ℝ^n) ≅_n(ℝ)/(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ) ). It has been shown in <cit.> that any flag manifold may be embedded as a submanifold in a product of Grassmannians so that any model for the latter yields a model for the former. We recall this result here: Let n_1, …, n_p+1∈ℕ with n_1 + … + n_p+1 = n. Define _(n_1,…,n_p+1) { (𝕎_1,…, 𝕎_p+1) ∈(n_1, ℝ^n) ×…×(n_p+1, ℝ^n) : 𝕎_1 ⊕…⊕𝕎_p+1 = ℝ^n }, with ⊕ the orthogonal direct sum of subspaces. Then every flag manifold is diffeomorphic to a submanifold of the form in (<ref>) via the following lemma, where we have also included (<ref>) and (<ref>) for completeness. Although we use φ_1 for a different purpose in this article, its existence carries the important implication that every model of the Grassmannian automatically gives one for the flag manifold. Let 0 k_0 < k_1 < ⋯ < k_p < k_p+1 n be integers and n_i+1 = k_i+1 - k_i, i =1,…, p. Then the maps below are all diffeomorphisms: (k_1,…, k_p,ℝ^n) _(n_1,…,n_p+1) _n(ℝ)/_k_1,…, k_p(ℝ) _n(ℝ)/(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ) )["φ_1", from=1-1, to=1-3] ["φ_2", from=2-1, to=1-1] ["φ_3"', from=2-3, to=1-3] with φ_1,φ_2,φ_3 defined by φ_1(𝕎_1 ⊆⋯⊆𝕎_p) = (𝕎_1, 𝕎_2/𝕎_1, …, 𝕎_p/𝕎_p-1, ℝ^n/𝕎_p), φ_2 ( X _) = (𝕏_1 ⊆⋯⊆𝕏_p), φ_3( Q _) = (ℚ_1,…, ℚ_p+1), where for each i=1 ,…, p and j = 1,…, p+1, * 𝕏_i is the vector space spanned by the first k_i columns of X ∈_n(ℝ); * 𝕎_i+1/𝕎_i is the orthogonal complement of 𝕎_i in 𝕎_i+1, 𝕎_0 {0}, and 𝕎_p+1ℝ^n; * ℚ_j is the vector space spanned by column vectors of Q ∈_n(ℝ) in columns k_j-1 + 1,…, k_j. It has been shown in <cit.> that φ_1 is a diffeomorphism, leaving φ_2 and φ_3. Let e_1,…, e_n ∈ℝ^n be the standard basis vectors and set 𝔼_j _ℝ{e_1,…, e_j }, j=1,…, n. Consider the map ρ_2: _n(ℝ) →(k_1,…,k_p,ℝ^n), X ↦ X · (𝔼_k_1⊆⋯⊆𝔼_k_p) with the action · in (<ref>). This map is surjective as the action of _n(ℝ) is transitive. 
The stabilizer of (𝔼_k_1⊆⋯⊆𝔼_k_p) ∈(k_1,…, k_p,ℝ^n) is easily seen to be the parabolic subgroup P_k_1,…, k_p(ℝ) in Section <ref>. The orbit-stabilizer theorem then yields the required diffeomorphism φ_2 from ρ_2. The same argument appied to ρ_3: _n(ℝ) →(k_1,…,k_p,ℝ^n), Q ↦ Q · (𝔼_k_1⊆⋯⊆𝔼_k_p) yields the required diffeomorphism φ_3. In Section <ref>, we will add to the list of diffeomorphisms in Lemma <ref> after establishing various properties of the isospectral model. § EQUIVARIANT MATRIX MODELS OF THE GRASSMANNIAN AND FLAG MANIFOLD We begin by deriving the isospectral model _a_1,…, a_p+1(k_1,…, k_p, n), showing that any _n(ℝ)-equivariant embedding of (k_1,…,k_p, ℝ^n) into ^2 (ℝ^n) must be an isospectral model. Among these isospectral models are a special class of traceless isospectral models when we choose a_1,…,a_p+1 to satisfy ∑_i=0^p a_i+1 (k_i+1 - k_i) = 0. We have shown in <cit.> that very _n(ℝ)-equivariant model of a flag manifold that is minimal, i.e., whose ambient space has the smallest possible dimension, must necessarily be a traceless isospectral model. In other words, no matter what space 𝕍 we embed (k_1,…,k_p, ℝ^n) in, as long as the embedding ε is equivariant and the dimension of 𝕍 is the smallest, then (a) we must have 𝕍≃^2_ (ℝ^n) and (b) the image of ε must be a traceless isospectral model. The proof of (a) requires a heavy does of representation theory beyond the scope of this article but we will say a few words about it in Corollary <ref> to highlight this property for a special case. The claim in (b) follows from Theorem <ref> and Corollary <ref>. The isospectral model given below in (<ref>) appears more complex than the one presented in Section <ref> but it will be simplified later in Proposition <ref> for generic parameters. Let 0 k_0 < k_1 < ⋯ < k_p < k_p+1 n be integers and n_i+1 k_i+1 - k_i, i =1,…, p. If ε: (k_1,…, k_p,ℝ^n) →^2 (ℝ^n) is an _n(ℝ)-equivariant embedding, then its image must take the form _a_1,…, a_p+1(k_1,…, k_p, n) { X∈^2(ℝ^n): ∏_j=0^p (X - a_j+1 I_n) = 0, (X^i) = ∑_j=0^p n_j+1 a_j+1^i, i = 1,…, p } for some distinct a_1,…, a_p+1∈ℝ. Since ε is _n(ℝ)-equivariant, its image ε ( (k_1,…, k_p,ℝ^n) ) is the orbit of a point B ∈ε ( (k_1,…, k_p,ℝ^n) ) under the action of _n(ℝ^n) on ^2(ℝ^n). Let b_1 > ⋯ > b_q+1 be the distinct eigenvalues of B of multiplicities m_1,…, m_q+1. Then we may assume that B = [ b_1 I_m_1 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ b_p I_m_q+1 ]. It follows from the orbit-stabilizer theorem that _n(ℝ)/ (Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ)) ≃ε ( (k_1,…, k_p,ℝ^n) ) ≃_n(ℝ)/ (Ø_m_1(ℝ) ×⋯×Ø_m_q+1(ℝ)), from which we deduce that q = p and {n_1,…, n_p+1} = {m_1, …, m_q+1}. Let σ∈𝔖_p+1 be such that n_1 = m_σ(1), …, n_p+1 = m_σ(p+1). Set a_1 b_σ(1), …, a_1 b_σ(1). Then ε ( (k_1,…, k_p,ℝ^n) ) ⊆_a_1,…, a_p+1(k_1,…, k_p, n). For the reverse inclusion, since (X - a_1 I) ⋯ (X - a_p+1 I) = 0, any X ∈_a_1,…, a_p+1(k_1,…, k_p, n) has eigenvalues a_1,…, a_p+1 with multiplicities n_1,…, n_p+1 determined by the Vandermonde system [ 1 1 ⋯ 1 1; a_1 a_2 ⋯ a_p a_p+1; a^2_1 a^2_2 ⋯ a^2_p a^2_p+1; ⋮ ⋮ ⋱ ⋮ ⋮; a^p_1 a^p_2 ⋯ a^p_p a^p_p+1 ][ n_1; n_2; n_3; ⋮; n_p+1 ] = [ n; (X); (X^2); ⋮; (X^p) ]. Since we also have that X ∈^2(ℝ^n), it must take the form X = Q (a_1 I_n_1,…, a_p+1 I_n_p+1)Q^ for some Q ∈_n(ℝ). Hence ε ( (k_1,…, k_p,ℝ^n) ) ⊇_a_1,…, a_p+1(k_1,…, k_p, n). 
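As an aside, the way the Vandermonde system in the last step pins down the multiplicities can be checked numerically. The following minimal sketch (ours; the eigenvalues and multiplicities are arbitrary test values) recovers n_1,…, n_{p+1} from the traces of powers of a matrix in the model.

```python
import numpy as np

rng = np.random.default_rng(2)
a = np.array([2.0, -1.0, 0.5])          # distinct eigenvalues a_1, ..., a_{p+1}  (here p = 2)
mult = np.array([2, 3, 1])              # multiplicities n_1, ..., n_{p+1}
n = int(mult.sum())

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = Q @ np.diag(np.repeat(a, mult)) @ Q.T

# Rows i = 0, ..., p of the Vandermonde system:  sum_j a_j**i * n_j = tr(X**i).
V = np.vander(a, increasing=True).T     # V[i, j] = a_j ** i
rhs = np.array([np.trace(np.linalg.matrix_power(X, i)) for i in range(len(a))])
recovered = np.linalg.solve(V, rhs)
assert np.allclose(recovered, mult)     # the multiplicities are determined uniquely
```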
Embedded in the proof of Theorem <ref> is an alternative parametric characterization of the isospectral model worth stating separately, and which also shows that the object called “isospectral manifold” in <cit.> is exactly a flag manifold. Let k_0,…, k_p+1, n, n_1,…, n_p+1 and a_1,…, a_p+1 be as in Theorem <ref>. Then _a_1,…, a_p+1(k_1,…, k_p, n) = { Q [ a_1 I_n_1 0 ⋯ 0; 0 a_2 I_n_2 ⋮; ⋮ ⋱ 0; 0 ⋯ 0 a_p+1 I_n_p+1 ] Q^∈^2(ℝ^n) : Q∈_n(ℝ) }. As we alluded to in Section <ref>, we call (<ref>) or (<ref>) the isospectral model. There are two special cases worth highlighting separately. Let k_0,…, k_p+1, n, n_1,…, n_p+1 be as in Theorem <ref>. Let a_1,…, a_p+1∈ℝ be such that ∑_j=0^p n_j+1 a_j+1 = ∑_j=0^p (k_j+1 - k_j) a_j+1 = 0. Then _a_1,…, a_p+1(k_1,…, k_p, n) ={ X∈^2_(ℝ^n): ∏_j=0^p (X - a_j+1 I_n) = 0, (X^i) = ∑_j=0^p n_j+1 a_j+1^i, i = 1,…, p } = { Q (a_1 I_n_1, a_2 I_n_2,…, a_p+1 I_n_p+1) Q^∈^2_(ℝ^n) : Q∈_n(ℝ) } has the lowest possible dimension ambient space among all possible _n(ℝ)-equivariant models of the flag manifold whenever n ≥ 17. Note that the difference here is that the matrices are traceless, i.e., we have _a_1,…, a_p+1(k_1,…, k_p, n) ⊆^2_(ℝ^n) and this is obvious from the parametric characterization as any X ∈_a_1,…, a_p+1(k_1,…, k_p, n) has (X) given by the expression in (<ref>). The minimal dimensionality and exhaustiveness of these traceless isospectral models when n ≥ 17 have been established in <cit.>. Nevertheless, in part as a demonstration of how such a result is plausible, we will prove a special case in Proposition <ref> that avoids group representation theory entirely. The p = 1 special case of Theorem <ref> is also worth stating separately. It gives a complete classification of equivariant models of the Grassmannian, namely, they must all be quadratic models. If ε: (k,ℝ^n) →^2 (ℝ^n) is an _n(ℝ)-equivariant embedding, then its image must take the form _a,b(k,n) { X∈^2(ℝ^n): (X - aI_n) (X-bI_n) = 0, (X) = ak + b(n-k) } = { Q [ a I_k 0; 0 b I_n-k ] Q^∈^2(ℝ^n) : Q ∈_n(ℝ) } for some distinct a,b ∈ℝ. A notable aspect of the quadratic model is that as p = 1, the Vandermonde system (<ref>) in the defining conditions of (<ref>) reduces to a single trace condition. It turns out that generically this reduction always holds, even when p > 1. In other words, all we need are the first two rows of (<ref>), n = n_1 + … + n_p+1, (X) = a_1 n_1 + … + a_p+1 n_p+1. The reason is that n_1,…, n_p+1 are positive integers, which greatly limits the number of possible solutions; in fact for generic values, the two equations above have unique positive integer solutions n_1,…, n_p+1. There is no need to look at the remaining rows of (<ref>). Let k_0,…, k_p+1, n, n_1,…, n_p+1 be as in Theorem <ref>. For generic a_1,…, a_p+1∈ℝ, _a_1,…, a_p+1(k_1,…, k_p, n) = { X ∈^2(ℝ^n): ∏_j=0^p (X - a_j+1 I_n) = 0, (X) = ∑_j=0^p n_j+1 a_j+1}. Let t n_1 a_1 + … + n_p+1 a_p+1 and denote the set on the right by _a_1,…, a_p+1(t, n). For any distinct a_1,…, a_p+1∈ℝ, _a_1,…, a_p+1(k_1,…, k_p, n) ⊆_a_1,…, a_p+1(t, n ). We will show that the reverse inclusion holds for generic values of a_1,…,a_p+1. Let M {(n_1,…, n_p+1)∈ℕ^p+1: n_1 + ⋯ + n_p+1 = n}, H {m - m' ∈ℤ^p+1: m m', m, m' ∈ M }, C { (a_1,…, a_p + 1) ∈ℝ^p+1: a_i a_j if i j }. The hyperplane defined by h ∈ H is h^⊥{ a ∈ℝ^p+1: a^ h = a_1 h_1 + … + a_p+1 h_p+1 = 0}. Since H is a finite set and C is a complement of a union of finitely many hyperplanes, C ∖⋃_h∈ H h^⊥ is also a complement of a union of finitely many hyperplanes. 
For any a ∈ C ∖⋃_h∈ H h^⊥ and m, m' ∈ M, a^ m = a^ m' implies that m = m'. For any X ∈_a_1,…, a_p+1(t, n), the multiplicities n_1,…, n_p+1 of a_1,…, a_p+1 are uniquely determined by the single equation (X) = a_1 n_1 + … + a_p+1 n_p+1. Hence X ∈_a_1,…, a_p+1(k_1,…, k_p, n). The genericity assumption on a_1,…, a _p+1 is essential for the simplified description in Proposition <ref> as the following example shows. Let n = 5, p= 2, (a_1,a_2, a_3) = (0,1,2), and consider the isospectral model _0,1,2(1,4, 5). Then t = 5 and we have _0,1,2(1,4, 5) ⊆_0,1,2(5, 5). To see that this inclusion is strict, take A = [ 0 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 2; ], B = [ 0 0 0 0 0; 0 0 0 0 0; 0 0 1 0 0; 0 0 0 2 0; 0 0 0 0 2; ], and observe that A ∈_0,1,2(1,4, 5) _0,1,2(2,3, 5) ∋ B. We have a disjoint union of nonempty sets _0,1,2(1,4, 5) ⊔_0,1,2(2,3, 5) = _0,1,2(5, 5), which implies that _0,1,2(1,4, 5) ⊊_0,1,2(5, 5). Nevertheless the point of Proposition <ref> is that there will always be some other choices, in fact uncountably many, of a_1,a_2, a_3 that work. Take (a_1,a_2, a_3) = (0,1,√(2)). Then t = 2 √(2) +1 and n_1 + n_2 + n_3 = 5, a_1 n_1 + a_2 n_2 + a_3 n_3 = 2√(2) + 1, have the unique positive integer solution (n_1,n_2,n_3) = (2,1,2). Hence _0,1,2(2,3, 5) = _0,1,2(2√(2) + 1, 5 ). Special values of a_1,…, a_p+1 in Theorem <ref> give us specific models with various desirable features. For instance, we have mentioned the involution model <cit.> that parameterizes the Grassmannian with perfectly conditioned matrices, obtained by setting (a,b)=(1,-1) in Corollary <ref>. For n =2, (1,ℝ^2) = (1,ℝ^2 ) differs from other cases (because _2(ℝ) is abelian unlike _n(ℝ) for n ≥ 3) and has to be treated separately. In this case all minimal _2(ℝ)-equivariant embeddings of (1,ℝ^2 ) may be easily characterized. Any _2(ℝ)-equivariant embedding of (1,ℝ^2) into ^2_(ℝ^2) must take the form (1,ℝ^2) →^2_(ℝ^2), {[ x; y ]}↦r/√(x^2 + y^2)[ cx - sy sx + cy; sx + cy -cx + sy ] for some r > 0 and some [ c -s; s c ]∈_2(ℝ). All such embeddings are minimal. Write 𝕊^1 {[ x; y ]∈ℝ^2 : x^2 + y^2 = 1 } for the unit sphere in ℝ^2. Recall that (1,ℝ^2 ) = ℝℙ^1, the real projective line. Let f: ℝℙ^1 →^2_(ℝ^2) be an _2(ℝ)-equivariant embedding. We consider j: 𝕊^1 ≅ℝℙ^1 ^2_(ℝ^2) ≅ℝ^2 where the first ≅ is the usual stereographic projection of 𝕊^1 onto ℝ with north pole mapping to the point-at-infinity; and the second ≅ is ^2_(ℝ^2) →ℝ^2, [ x y; y -x ]↦[ x; y ]. It is easy to see that both are _2(ℝ)-equivariant. It remains to characterize all _2(ℝ)-equivariant embeddings j: 𝕊^1 →ℝ^2. By equivariance, we must have j(𝕊^1) = {[ c -s; s c ][ x; y ]∈ℝ^2: [ c -s; s c ]∈_2(ℝ) }, where j(u_0) = [ x_0; y_0 ]∈ℝ^2 ∖{0} for a fixed u_0∈𝕊^1. Let r_0 = (x_0^2 + y_0^2)^1/2 > 0, v_0 = 1/√(x_0^2 + y_0^2)[ x_0; y_0 ]∈𝕊^1. Then j(𝕊^1) = r_0 𝕊^1. Let Q = [ c -s; s c ]∈_2(ℝ) be such that Q u_0 = v_0. Then v_0 =r_0^-1 j(u_0) =r_0^-1 j(Q^ v_0). So the map 𝕊^1 →ℝ^2, v ↦ r^-1 j(Q^ v) is the inclusion 𝕊^1 ⊆ℝ^2. Hence we have j (v) = r [ c x - s y; s x + c y ] for v =[ x; y ]∈𝕊^1 and Q = [ c -s; s c ]∈_2(ℝ). Composing j with the inverses of the two diffeomorphisms in (<ref>), we obtain f ( {[ x; y ]}) = r/√(x^2 + y^2)[ cx - sy sx + cy; sx + cy -cx + sy ] for any {[ x; y ]}∈ℝℙ^1. If the embedding above is not of minimal dimension, then there is an embedding of 𝕊^1 into ℝ^1. The image of 𝕊^1 in ℝ^1 is connected and compact and thus a closed interval [a,b], a contradiction as [a,b] is contractible and 𝕊^1 is not. 
§ CHANGE-OF-COORDINATES FOR ISOSPECTRAL MODELS In this section we provide change-of-coordinates formulas to transform from one model of the flag manifold or Grassmannian to another, focusing on the isospectral and quadratic models. We begin with the following addendum to Lemma <ref>. Let k_0,…, k_p+1, n, n_1,…, n_p+1 and a_1,…, a_p+1 be as in Theorem <ref>. Using the parametric characterization in Corollary <ref>, the map φ_4: _a_1,…, a_p+1(k_1,…, k_p, n) →(k_1,…, k_p;ℝ^n), Q (a_1 I_n_1, a_2 I_n_2,…, a_p+1 I_n_p+1) Q^ ↦ (ℚ_1,…, ℚ_p), where ℚ_j ⊆ℝ^n is the subspace spanned by the first k_j column vectors of Q, j =1,…,p, is a diffeomorphism. First we need to check that φ_4 is well-defined. Write Λ_a (a_1 I_n_1, a_2 I_n_2,…, a_p+1 I_n_p+1). Since Q Λ_a Q^ = V Λ_a V^ for Q, V∈Ø_n(ℝ) if and only if V = QP for some P ∈(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ)) if and only if (ℚ_1,…,ℚ_p) = (𝕍_1,…,𝕍_p), which shows that the map φ_4 is well-defined (and injective too). To see that φ_4 is a diffeomorphism, observe that φ_4 factors as _a_1,…, a_p+1(k_1,…, k_p,n) (k_1,…, k_p;ℝ^n) _n(ℝ)/(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ))["φ_4", from=1-1, to=1-2] ["φ_5"', from=1-1, to=2-2] ["φ_1^-1∘φ_3"', from=2-2, to=1-2] where φ_5 is defined by φ_5 (Q Λ_a Q^) Q, φ_1 and φ_3 are the diffeomorphisms defined earlier in Lemma <ref>. In the following, let C be as in (<ref>), the subset of ℝ^p+1 comprising elements whose coordinates are all distinct. Note that this set parameterizes all isospectral models of (k_1,…, k_p;ℝ^n). The formula for the transformation between two isospectral models of the same flag manifold or two quadratic models of the same Grassmannian is straightforward. Let k_0,…, k_p+1, n, n_1,…, n_p+1 be as in Theorem <ref>. We use the parametric characterization in Corollary <ref> and write Λ_a as in (<ref>) for any (a_1,…, a_p+1) ∈ C. The map φ : _a_1,…, a_p+1(k_1,…, k_p,n) →_b_1,…, b_p+1(k_1,…, k_p,n), φ(Q Λ_a Q^) Q Λ_b Q^ is an _n(ℝ)-equivariant diffeomorphism for any (a_1,…, a_p+1) and (b_1,…, b_p+1) ∈ C. In particular, for positive integers k ≤ n, φ: _a,b(k,n) →_c,d(k,n), φ(Q [ a I_k 0; 0 b I_n-k ] Q^) Q [ c I_k 0; 0 d I_n-k ] Q^, is an _n(ℝ)-equivariant diffeomorphism for any pairs of distinct real numbers (a,b) and (c,d). The same argument used in the proof of Corollary <ref> shows that φ is well-defined, _n(ℝ)-equivariant, bijective, and factors as _a_1,…, a_p+1(k_1,…, k_p,n) _b_1,…, b_p+1(k_1,…, k_p,n) (k_1,…,k_p,ℝ^n)["φ", from=1-1, to=1-3] ["φ_a"', from=1-1, to=2-2] ["φ_b^-1"', from=2-2, to=1-3] where φ_a amd φ_b are the diffeomorphism φ_4 in Corollary <ref> with respect to (a_1,…, a_p+1) and (b_1,…, b_p+1) ∈ C. Hence φ is a diffeomorphism. The same proof of Proposition <ref> may be used to establish a stronger result. With notations as in Proposition <ref>, if (a_i - a_j) = (b_i - b_j) for all i < j, then map Φ_t: _a_1,…, a_p+1(k_1,…, k_p,n) →_a_1 + t(b_1 - a_1),…, a_p+1 + t(b_p+1 - a_p+1)(k_1,…, k_p,n), Q Λ_a Q^ ↦Φ_t(Q Λ_a Q^) Φ(Q Λ_a Q^, t), is an _n(ℝ)-equivariant diffeomorphism for any t ∈ [0,1]. The map Φ_t in Proposition <ref> is a homotopy along the line segment γ: [0,1] →ℝ^p+1, γ(t) =(1-t) (a_1,…, a_p+1) + t (b_1,…, b_p+1). Note that (<ref>) holds if and only if (1-t) (a_i - a_j) + t(b_i - b_j) 0 for all t∈ [0,1]. As we saw in the proof of Proposition <ref>, the parameter space C ⊆ℝ^p+1 is disconnected since it is the complement of finitely many hyperplanes. For arbitrary (a_1,…, a_p+1) and (b_1,…, b_p+1) ∈ C, it is possible that γ(t) ∉C for some t. 
The entire curve γ([0,1]) ⊆ C if and only if (a_1,…, a_p+1) and (b_1,…, b_p+1) are in the same connected component of C, which is in turn equivalent to (<ref>). § EQUIVARIANT MATRIX MODELS FOR STIEFEL MANIFOLDS The corresponding results for the Stiefel manifold <cit.> are considerably easier. Recall that we write (k, ℝ^n) for the abstract Stiefel manifold of orthonormal k-frames in ℝ^n and (k,n) {X ∈ℝ^n × k : X^ X = I } for its usual model as n × k orthonormal matrices. We will call the following family of minimal _n(ℝ)-equivariant models of (k,ℝ^n), _A (k,n) { Y ∈ℝ^n × k: Y^ Y = A }, the Cholesky models. Clearly it includes the usual model as _I (k,n) =(k,n) and is a G-manifold via the action _n(ℝ) ×_A (k,n) →_A (k,n), (Q, Y) ↦ QY. The choice of nomenclature and that it is indeed a model of the Stiefel manifold will be self-evident after the next two propositions. We begin with the Stiefel manifold analogue of Theorem <ref>: Let k, n ∈ℕ with k ≤ n. If ε: (k,ℝ^n) →ℝ^n× k is an _n(ℝ)-equivariant embedding, then there is some A∈𝖲^2_(ℝ^k) such that ε ((k,ℝ^n)) = _A (k,n). If n ≥ 17 and k < (n-1)/2, then V_A (k,n) has the lowest possible dimension ambient space among all possible _n(ℝ)-equivariant models of the Stiefel manifold. Since ε is _n(ℝ)-equivariant and _n(ℝ) acts on (k,ℝ^n) transitively via (<ref>), ε ((k,ℝ^n)) is an _n(ℝ)-orbit. Fix an arbitrary Y_0 ∈ε ((k,ℝ^n)). Then ε ((k,ℝ^n)) = { Q Y_0: Q∈_n(ℝ) }⊆{Y∈ℝ^n× k: Y^ Y = A } where A Y_0^ Y_0 ∈𝖲^2_(ℝ^k). Conversely, if Y^ Y = A = Y_0^ Y_0, consider the QR decompositions Y = Q [ R; 0 ], Y_0 =Q_0 [ R_0; 0 ] where Q, Q_0 ∈Ø_n(ℝ) and R, R_0 ∈_k(ℝ) are upper triangular matrices with positive diagonals. Thus R^ R = R_0^ R_0 = A are Cholesky decompositions of A ∈𝖲^2_(ℝ^n). By the uniqueness of the Cholesky decomposition of a symmetric positive definite matrix, we have R = R_0 and Y = Q [ R; 0 ] = Q [ R _0; 0 ] = (Q Q_0^) ( Q_0 [ R _0; 0 ]) = (Q Q_0^) Y_0 ∈ε ((k,ℝ^n)). Hence ε ((k,ℝ^n)) = _A (k,n). The minimality of _A(k,n) for n ≥ 17 and k < (n-1)/2 follows from <cit.>. The proof of Proposition <ref> also yields an alternative parametric characterization of the Cholesky model, and may be viewed as the Stiefel manifold analogue of Corollary <ref>. Let k, n ∈ℕ with k ≤ n and A∈𝖲^2_(ℝ^n). Then _A (k,n) = { Q [ R; 0 ]: Q∈Ø_n(ℝ) } where R ∈_k(ℝ) is the unique Cholesky factor of A∈𝖲^2_(ℝ^n). Lastly we present the change-of-coordinate formula for Cholesky models, which will also provide a pretext for discussing some interesting features of its parameter space. We borrow the shorthand in <cit.> and write, for any A, B∈𝖲^2_(ℝ^k) and t ∈ [0,1], A #_t B = A^1/2 (A^-1/2 B A^-1/2 )^t A^1/2. The special case when t =1/2 gives A #_1/2 B = A^1/2 (A^-1/2 B A^-1/2 )^1/2 A^1/2 A # B, the matrix geometric mean <cit.>. The Stiefel manifold analogue of Propositions <ref> and <ref> is as follows. Let k, n ∈ℕ with k ≤ n. For any A, B∈𝖲^2_(ℝ^k) and t ∈ [0,1], ψ_t : _A(k,n) →_A #_t B(k,n), Y ↦ Y A^-1/2 (A^-1/2 B A^-1/2 )^t/2 A^1/2, is an _n(ℝ)-equivariant diffeomorphism. In particular the map ψ_1 gives a change-of-coordinates formula from _A(k,n) to _B(k,n). The map ψ_t is well-defined as ( Y A^-1/2 (A^-1/2 B A^-1/2 )^t/2 A^1/2)^ Y A^-1/2 (A^-1/2 B A^-1/2 )^t/2 A^1/2 = A #_t B whenever Y∈_A(k,n). Since both A and B are positive definite, A^-1/2 (A^-1/2 B A^-1/2 )^t/2 A^1/2 is invertible, so ψ_t is a diffeomorphism. It is evidently _n(ℝ)-equivariant under (<ref>). 
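For completeness, the following short NumPy sketch (ours) verifies the change-of-coordinates map ψ_t numerically on arbitrary test data; spd_power is a small helper we define via the eigendecomposition and is not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def spd_power(M, p):
    """M**p for a symmetric positive definite M, via the eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * w**p) @ U.T

def geometric_mean_t(A, B, t):
    """Weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    As, Ais = spd_power(A, 0.5), spd_power(A, -0.5)
    return As @ spd_power(Ais @ B @ Ais, t) @ As

n, k, t = 6, 3, 0.37
G = rng.standard_normal((k, k))
H = rng.standard_normal((k, k))
A = G @ G.T + np.eye(k)                 # arbitrary SPD parameters
B = H @ H.T + np.eye(k)

# A point Y = Q [R; 0] of V_A(k, n), with A = R^T R its Cholesky factorization.
R = np.linalg.cholesky(A).T
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Y = Q[:, :k] @ R
assert np.allclose(Y.T @ Y, A, atol=1e-10)

# psi_t(Y) = Y A^{-1/2} (A^{-1/2} B A^{-1/2})^{t/2} A^{1/2} lands in V_{A #_t B}(k, n).
As, Ais = spd_power(A, 0.5), spd_power(A, -0.5)
Yt = Y @ Ais @ spd_power(Ais @ B @ Ais, t / 2) @ As
assert np.allclose(Yt.T @ Yt, geometric_mean_t(A, B, t), atol=1e-8)
```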
Proposition <ref> is considerably simpler than Proposition <ref> as the parameter space 𝖲^2_(ℝ^n) is path connected and any two points A, B ∈𝖲^2_(ℝ^n) can be connected by a curve γ:[0,1] →𝖲^2_(ℝ^n), γ(t) = A #_t B. Readers may recognize γ as the geodesic curve from A to B in the Cartan manifold <cit.>, i.e., 𝖲^2_(ℝ^n) equipped with the Riemannian metric 𝗀_A(X,Y) = (A^-1XA^-1Y), or, as an abstract manifold, the set of ellipsoids in ℝ^n centered at the origin. See <cit.> and <cit.> for a modern exposition, and <cit.> for further bibliographical references about the Cartan manifold. § EQUIVARIANT RIEMANNIAN METRICS Here we will discuss _n(ℝ)-invariant Riemannian metrics for the flag, Grassmann, and Stiefel manifolds that go alongside their models in this article. As we alluded to at the end of Section <ref>, each of these manifolds comes equipped with a god-given Riemannian metric. This is a result of their structure as G/H with G a compact simple Lie group and H a closed subgroup. Such a G admits a bi-invariant metric, unique up to a constant factor, which in turn induces an invariant metric on G/H. If their corresponding Lie algebras 𝔤 and 𝔥 are related by 𝔤 = 𝔥⊕𝔪 where 𝔪 is an _H-invariant subspace of 𝔤, then there is a one-to-one correspondence {G-invariant metrics on G/H}⟷{H-invariant metrics on 𝔪}, a standard result in differential geometry <cit.>. In particular, we have 𝔪≃𝕋_ e G/H, and e∈ G the identity element. Slightly less standard (we are unable to find a reference for the simple form below) is the following simple criterion for an equivariant embedding to be isometric. Let 𝕍 be a G-module and φ: G/H →𝕍 be a G-equivariant embedding. Let 𝗀 and 𝗀' be Riemannian metrics on G/H and ℳφ (G/H) ⊆𝕍 respectively. Consider the linear map f: 𝔪≃𝕋_ e G/H 𝕋_φ( e ) ℳ. Then φ is isometric if and only if f is an isometry. Since φ is G-equivariant and G acts on both G/H and ℳ transitively, φ is isometric if and only if φ is isometric at e, i.e., the differential map d_ e φ of φ at e is an isometry. By (<ref>), this is equivalent to f being an isometry. For the cases of interest to us, (k, ℝ^n) ≅_n(ℝ)/_n-k(ℝ), (k,ℝ^n) ≅_n(ℝ)/(Ø_k(ℝ) ×Ø_n-k(ℝ) ), (k_1,…, k_p,ℝ^n) ≅_n(ℝ)/(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ) ). We will show that the restriction of the Euclidean inner product on ℝ^n × k and ^2(ℝ^n) onto the Cholesky, quadratic, and isospectral models give the Riemannian metric induced by the bi-invariant metric on G = _n(ℝ) up to a choice of weights. §.§ Quadratic model of the Grassmannian The bi-invariant metric on _n(ℝ) induces an _n(ℝ)-invariant metric 𝗀 on _n(ℝ)/(Ø_k(ℝ) ×Ø_n - k(ℝ) ), and thus on _a,b(k,n) via φ_5 in (<ref>), whose inverse is φ_5^-1: _n(ℝ)/(Ø_k(ℝ) ×Ø_n - k(ℝ) ) →_a,b(k,n), φ_5^-1 ( Q ) = Q [ a I_k 0; 0 b I_n-k ] Q^. We recall from <cit.> that ^2(ℝ^n) = ^2(ℝ^k) ⊕^2(ℝ^n-k) ⊕𝔪, 𝔪 = { B = [ 0 B_0; -B_0^ 0 ]∈^2(ℝ^n): B_0 ∈ℝ^k × (n-k)}, and that 𝔪 is invariant under conjugation by (Ø_k(ℝ) ×Ø_n - k(ℝ) ). The (Ø_k(ℝ) ×Ø_n - k(ℝ))-invariant inner product on 𝔪 is given by ⟨ B, C ⟩ (a-b)^2 (B^ C) = 2 (a-b)^2 (B_0^ C_0) for any B,C∈𝔪. This corresponds, via (<ref>), to an _n(ℝ)-invariant metric on _n(ℝ)/(Ø_k(ℝ) ×Ø_n - k(ℝ) ) and differs from the god-given metric 𝗀 on (k,ℝ^n) by a weight constant. The standard Euclidean metric (XY) on 𝖲^2(ℝ^n) restricts to a metric 𝗀' on _a,b(k,n). We will see that 𝗀 and 𝗀' are one and the same, up to a weight constant. Let _n(ℝ)/(Ø_k(ℝ) ×Ø_n - k(ℝ) ) and _a,b(k,n) be equipped with Riemannian metrics 𝗀 and 𝗀' respectively. 
Then φ_5^-1 is an isometric _n(ℝ)-equivariant diffeomorphism. The _n(ℝ)-equivariance of φ_5^-1 is evident from its definition. Let f be the linear map in (<ref>) for φ_5^-1. Then by (<ref>), f(B) = B [ a I_k 0; 0 b I_n-k ] + [ a I_k 0; 0 b I_n-k ] B^ = (b-a) B for any B ∈𝔪. By (<ref>), ⟨ B, C ⟩ = 2 (a-b)^2 (B_0^ C_0) = 𝗀'(f(B), f(C)). So f is an isometry and hence so is φ_5^-1 by Lemma <ref>. §.§ Cholesky model of the Stiefel manifold The bi-invariant metric on _n(ℝ) induces an _n(ℝ)-invariant metric 𝗀 on _n(ℝ)/_n - k(ℝ) ), and thus on _A(k,n) via ψ_A: _n(ℝ)/_n-k(ℝ) →_A(k,n), ψ_A ( Q ) = Q [ R; 0 ], which is a diffeomorphism by Proposition <ref>. Here R ∈_k(ℝ) is the Cholesky factor of A = R^ R ∈𝖲^2_(ℝ^n). We have ^2(ℝ^n) = ^2(ℝ^n-k) ⊕𝔪, 𝔪 = {[ B_1 -B_2^; B_2 0 ]∈ℝ^n× n: B_1 ∈^2(ℝ^k), B_2 ∈ℝ^(n-k) × k}, where we identify B ∈^2(ℝ^n-k) with (0, B) ∈^2(ℝ^n). It is clear that 𝔪 is invariant under conjugation by _n-k(ℝ), where we identify Q∈_n-k(ℝ) with (I_k, Q) ∈_n (ℝ). For A = R^ R ∈𝖲^2_(ℝ^n), the _n-k(ℝ)-invariant A-inner product on 𝔪 is given by ⟨ B, C ⟩(R^ (B_1^ C_1 + B_2^ C_2) R) for any B, C∈𝔪. This correspond, via (<ref>), to an _n(ℝ)-invariant metric 𝗀_A on _n(ℝ)/_n-k(ℝ) and differs from the god-given metric 𝗀 on (k,ℝ^n) by a weight matrix A. In particular, for A = I, we have 𝗀_I = 𝗀. The standard Euclidean metric (X^ Y) on ℝ^n × k restricts to a metric 𝗀' on _A(k,n). We will see that 𝗀 and 𝗀' are one and the same, up to a weight matrix. Let _n(ℝ)/_n-k(ℝ) and _A(k,n) be equipped with Riemannian metrics 𝗀_A and 𝗀' respectively. Then ψ_A is an isometric _n(ℝ)-equivariant diffeomorphism. The _n(ℝ)-equivariance of ψ_A is evident from its definition. Let f be the linear map in (<ref>) for ψ_A. Then by (<ref>), f(B) = B [ R; 0 ] = [ B_1; -B_2^ ]R, for any B = [ B_1 -B_2^; B_2 0 ]∈𝔪. By (<ref>), ⟨ B, C ⟩ = (R^ (B_1^ C_1 + B_2^ C_2) R ) = 𝗀'( f(B), f(C) ). So f is an isometry and hence so is ψ_A by Lemma <ref>. §.§ Isospectral model of the flag manifold The argument here is similar to that of Section <ref>, but involves heavier notations, and as such we think it is instructive to include the special case in Section <ref> for clarity. The bi-invariant metric on _n(ℝ) induces an _n(ℝ)-invariant metric 𝗀 on _n(ℝ)/(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ) ), and thus on _a_1,…, a_p+1(k_1,…, k_p,n) via φ_5 in (<ref>), whose inverse is φ_5^-1 : _n(ℝ)/(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ)) →_a_1,…, a_p+1(k_1,…, k_p,n), φ_5^-1 ( Q ) Q Λ_a Q^, where Λ_a as in (<ref>). We recall from <cit.> that ^2(ℝ^n) = ^2(ℝ^n_1)⊕…⊕^2(ℝ^n_p+1) ⊕𝔪, 𝔪 = { (B_ij) ∈ℝ^n× n: B_ij = -B_ji^∈ℝ^n_i × n_j, B_ii = 0, 1 ≤ i < j ≤ p+1 }, and that 𝔪 is invariant under conjugation by (Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ)). The (Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ))-invariant inner product on 𝔪 is given by ⟨ B, C ⟩ 2 ∑_1 ≤ i < j ≤ p+1(a_i - a_j)^2(B_ij^ C_ij), where B,C∈𝔪⊆^2(ℝ^n) are partitioned as B = (B_ij), C = (C_ij) with B_ij, C_ij∈ℝ^n_i × n_j for i, j ∈{1,…, p+1}. This corresponds via (<ref>) to an _n(ℝ)-invariant metric 𝗀_a on _n(ℝ)/(Ø_n_1(ℝ) ×⋯×Ø_n_p+1(ℝ) ) and differs from the god-given metric 𝗀 on (k_1,…, k_p,ℝ^n) by a weight vector a (a_1,…, a_p+1). The standard Euclidean metric (XY) on 𝖲^2(ℝ^n) when restricted to _a_1,…, a_p+1(k_1,…, k_p,n) gives a metric 𝗀'. We will see that 𝗀 and 𝗀' are one and the same, up to a weight vector. Let _n(ℝ)/(Ø_n_1(ℝ) ×…×Ø_n_p+1(ℝ)) and _a_1,…, a_p+1(k_1,…, k_p,n) be equipped be Riemannian metrics 𝗀_a and 𝗀' respectively. Then φ_5^-1 is an isometric _n(ℝ)-equivariant diffeomorphism. The _n(ℝ)-equivariance of φ_5^-1 is evident from its definition. 
Let f be the linear map defined in Lemma <ref> for φ_5^-1. Then by (<ref>), f(B) = BΛ_a + Λ_a B^ = [(a_j - a_i) B_ij ]_i,j=1^p+1 for any B ∈𝔪. By (<ref>), ⟨ B, C⟩ = 2 ∑_1≤ i < j ≤ p+1 (a_i - a_j)^2 (B_ij^ C_ij) =𝗀'(f(B), f(C)) So f is an isometry and hence so is φ_5^-1 by Lemma <ref>. § CONCLUSION We used to be able to count on one hand the number of different models for each of these manifolds. With these families of models, we now have uncountably many choices, and having such flexibility can provide a real benefit as different models are useful in different ways. Take the family of quadratic models _a,b(k,n) for example. The traceless model with (a,b) = (n-k,k) in Corollary <ref> has the lowest dimension but the involution model with (a,b) = (1, -1) in <cit.> has the best condition number. It may appear that the projection model with (a,b)=(1,0) makes the worst choice from a computational perspective since it is, up to a constant, the only model in the family with singular matrices. However we found in <cit.> that it is the most suitable model for discussing computational complexity issues, as many well-known NP-hard problems have natural formulations as optimization problems in the projection model. We hope that these families of models for various manifolds described and classified in this article would provide useful computational platforms for practical applications involving these manifolds. abbrv 10 AM12 P.-A. Absil and J. Malick. Projection-like retractions on matrix manifolds. SIAM J. Optim., 22(1):135–158, 2012. BH06 R. Bhatia and J. Holbrook. Riemannian geometry and matrix geometric means. Linear Algebra Appl., 413(2-3):594–618, 2006. borel A. Borel. Groupes algébriques. In Séminaire Bourbaki, Vol. 3, pages Exp. No. 121, 229–238. Soc. Math. France, Paris, 1955. Bredon72 G. E. Bredon. Introduction to compact transformation groups, volume Vol. 46 of Pure and Applied Mathematics. Academic Press, New York-London, 1972. Brockett91 R. W. Brockett. Dynamical systems that sort lists, diagonalize matrices, and solve linear programming problems. Linear Algebra Appl., 146:79–91, 1991. Cartan1 E. Cartan. Sur certaines formes Riemanniennes remarquables des géométries à groupe fondamental simple. Ann. Sci. École Norm. Sup. (3), 44:345–467, 1927. CE08 J. Cheeger and D. G. Ebin. Comparison theorems in Riemannian geometry. AMS Chelsea Publishing, Providence, RI, 2008. Revised reprint of the 1975 original. quad3 M.-T. Chien, S.-H. Tso, and P. Y. Wu. Higher-dimensional numerical ranges of quadratic operators. J. Operator Theory, 49(1):153–171, 2003. CD91 M. T. Chu and K. R. Driessel. Constructing symmetric nonnegative matrices with prescribed eigenvalues by differential equations. SIAM J. Math. Anal., 22(5):1372–1387, 1991. dS R. de Saussure. La géométrie des feuillets. Arch. sc. phys. et nat. (4), 21:134–155, 1906. quad1 D. Đoković. Unitary similarity of projectors. Aequationes Math., 42(2-3):220–224, 1991. FL19 F. Feppon and P. F. J. Lermusiaux. The extrinsic geometry of dynamical systems tracking nonlinear matrix projections. SIAM J. Matrix Anal. Appl., 40(2):814–844, 2019. CG M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. J. Research Nat. Bur. Standards, 49:409–436, 1952. ZLK20 Z. Lai, L.-H. Lim, and K. Ye. Simpler Grassmannian optimization. arXiv:2009.13502, 2020. ZLK24 Z. Lai, L.-H. Lim, and K. Ye. Grassmannian optimization is NP-hard. arXiv:2406.19377, 2024. LM08 A. S. Lewis and J. Malick. Alternating projections on manifolds. Math. Oper. 
Res., 33(1):216–234, 2008. LK24b L.-H. Lim and K. Ye. Minimal equivariant embeddings of the Grassmannian and flag manifold. arXiv:2407.12546, 2024. LV83 D. Luna and T. Vust. Plongements d'espaces homogènes. Comment. Math. Helv., 58(2):186–245, 1983. Mattila P. Mattila. Geometry of sets and measures in Euclidean spaces, volume 44 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1995. Mos G. D. Mostow. Some new decomposition theorems for semi-simple groups. Mem. Amer. Math. Soc., 14:31–54, 1955. Mostow57 G. D. Mostow. Equivariant embeddings in Euclidean space. Ann. of Math. (2), 65:432–446, 1957. MosBook G. D. Mostow. Strong rigidity of locally symmetric spaces, volume 78 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 1973. Nicolaescu L. I. Nicolaescu. Lectures on the geometry of manifolds. World Scientific Publishing, Hackensack, NJ, third edition, 2021. Stie E. Stiefel. Richtungsfelder und Fernparallelismus in n-dimensionalen Mannigfaltigkeiten. Comment. Math. Helv., 8(1):305–353, 1935. Timashev11 D. A. Timashev. Homogeneous spaces and equivariant embeddings, volume 138 of Encyclopaedia of Mathematical Sciences. Springer, Heidelberg, 2011. Invariant Theory and Algebraic Transformation Groups, 8. quad2 S.-H. Tso and P. Y. Wu. Matricial ranges of quadratic operators. Rocky Mountain J. Math., 29(3):1139–1152, 1999. KKL22 K. Ye, K. S.-W. Wong, and L.-H. Lim. Optimization on flag manifolds. Math. Program., 194(1-2):621–660, 2022.
http://arxiv.org/abs/2407.12912v1
20240717180001
Topological Kondo semimetal and insulator in AB-stacked heterobilayer transition metal dichalcogenides
[ "Daniele Guerci", "Kevin P. Lucht", "Valentin Crépel", "Jennifer Cano", "J. H. Pixley", "Andrew J. Millis" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA Department of Physics and Astronomy, Center for Materials Theory, Rutgers University, Piscataway, New Jersey 08854, USA Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA Department of Physics and Astronomy, Center for Materials Theory, Rutgers University, Piscataway, New Jersey 08854, USA Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA Department of Physics, Columbia University, 538 West 120th Street, New York, NY 10027 § ABSTRACT Recent experiments reported the realization of a heavy Fermi liquid in AB-stacked MoTe_2/WSe_2 heterobilayers. In this paper we show that the AB-stacked heterobilayer configuration is particularly suited to realize topological Kondo semimetal and topological Kondo insulator ground states at a doping of two holes per moiré unit cell. The small lattice mismatch between the MoTe_2 and WSe_2 monolayers and the different bandwidths of their highest lying moiré valence bands means that, in the experimentally relevant range of hole dopings, the MoTe_2 layer is effectively a Mott insulator with only low-lying magnetic excitations Kondo-coupled to more itinerant electrons in the WSe_2. The crucial consequence of the AB-stacking configuration is that the interlayer tunnelling connects orbitals of opposite parity in the two layers, leading to a chiral Kondo coupling. We show that the chiral Kondo coupling favors a topological Kondo semimetal at filling ν=1+1, with a non-quantized spin Hall conductance arising from edge modes, whose spectrum and overlap with bulk states we determine. We further show that a spatially random strain field that locally breaks the rotation symmetry can convert the Kondo semimetal to a narrow gap topological Kondo insulator featuring a quantized spin Hall conductance. Topological Kondo semimetal and insulator in AB-stacked heterobilayer transition metal dichalcogenides Andrew Millis July 22, 2024 ======================================================================================================== § INTRODUCTION Transition metal dichalcogenides (TMDs) are wide-gap semiconductors displaying a strong Ising spin-orbit coupling (SOC) in the valence band <cit.> with (in most cases) a honeycomb lattice structure and valence band maxima located at the corners of the hexagonal Brillouin zone (K and K^' points in the usual hexagonal notation). In this circumstance, the SOC leads to an effective spin-valley locking: carriers at the top of the valence band in valley K/K' have spin-↓/↑ and carry spin and orbital angular momentum L^z reflecting the character d_±=d_x^2-y^2± i d_xy of each monolayer <cit.>. Because lightly p-doped TMD monolayers behave as a two-dimensional two component Fermi gas, where the two fermionic flavours correspond to states at the K and K' points of the monolayer Brillouin zone, TMDs have become central elements in the construction of two-dimensional moiré materials that host a wide variety of emergent strongly correlated phenomena including Mott transitions <cit.>, superconductivity <cit.>, quantum criticality <cit.> and integer and fractional anomalous Hall phases <cit.>. 
A particularly interesting class of moiré compound are the AB-stacked heterobilayer transition metal dichalcogenides (TMDs), obtained by stacking two different TMDs at a 60^∘ twist rotation <cit.>. The small lattice mismatch between the two components leads to a long-period moiré structure while the 60^∘ twist means the two valence bands coupled by the moiré potential are characterized by opposite L^z <cit.> so that the interlayer tunneling vanishes at the K/K^' points and acquires a winding from the angular momentum mismatch. The winding gives rise to non-trivial topological properties <cit.>. Physical differences between the two component materials constituting the heterobilayer lead to an energy offset that is typically large enough for the first holes injected into the heterostructure to primarily go into one layer. In the MoTe_2/WSe_2 case of interest here, the holes first populate the MoTe_2 layer and correlations in this layer are strong enough that at a hole density of ν=1 per moiré unit cell the state is a correlation driven (Mott) insulator <cit.>. However the energy offset may be experimentally tuned by applying a perpendicular “displacement" electric field leading at ν=1 to a crossover from a Mott insulator (gap controlled by the in-layer Coulomb interaction) to a charge transfer insulator (lowest hole addition state is on the other layer)  <cit.>. Tuning yet further, the system realizes a transition from a layer polarized Mott insulator to a quantum anomalous Hall insulator <cit.>. In the charge transfer insulator regime, increasing the hole density beyond ν=1 to ν=ν_ Mo+ν_ W=1+x with x>0 gives rise to a “heavy fermion" situation in which a layer of local moments in the MoTe_2 layer coexists with itinerant carriers in the WSe_2 layer <cit.>. As pointed out in our recent work <cit.>, the chiral nature of the interlayer exchange gives rise to a heavy Fermi liquid phase whose Fermi surface is pierced by a topological Berry flux. Moreover, at low-doping the chiral exchange can stabilize a ℤ_2 topological superconductor <cit.>. In this work we consider the state emerging when the hole concentration is increased to ν=2 with ν_ Mo=ν_ W=1. We show that in the ideal case the system provides a realization of the topological Kondo semimetal phase <cit.> previously proposed by one of us and by others on theoretical grounds. The Kondo semimetal phase involves local moments with short range antiferromagnetic correlations in MoTe_2 exchange-coupled to itinerant carriers in WSe_2 leading to a spin Hall phase of heavy quasiparticles <cit.>. We characterize the properties of the edge modes of the quantum spin Hall phase. We determine the range of stability of the Kondo semimetal phase in temperature and under various perturbations. Importantly, we show how random strain fields arising from interfacial disorder fill in the p-wave hybridization gap to realize a narrow gap topological Kondo insulator in a moiré material. The plan of the paper is as follows. In Sec. <ref> we present the Hamiltonian, highlighting the essential properties including the opposite parity of the Wannier orbitals and the spin-momentum locking that are key to realize the topological Kondo phase. In Sec. <ref>, we present the strong-coupling limit of the lattice Hamiltonian and we introduce the slave-boson (`parton') theory that describes the physics in this limit. Sec. <ref> is devoted to the mean-field solution of the model where we detail the criterion for the formation of the heavy Fermi liquid phase. In Sec. 
<ref> we characterize the topological Kondo semimetal phase, while in Sec. <ref> we show how uniform or random strain fields may open a full gap resulting in a quantum spin Hall Kondo insulator. In Sect. <ref> we examine the edge modes of the Kondo semimetal quantifying their overlap with bulk states in the topological Kondo semi-metallic regime by studying a system with cylindrical boundary conditions open in one direction and periodic in the other. Finally, we provide a summary of the main results of our work and outlook in Sec. <ref>. Appendices provide details of some of the calculations in the main text. § LATTICE MODEL ON THE MOIRÉ SCALE The Wannier orbitals for the two highest energy valence bands of the heterostructure are shown in Fig. <ref> and form a honeycomb lattice where the two different sites belong to the upper (MoTe_2) and lower (WSe_2) layers <cit.>; see Appendix <ref> for a detailed derivation of the lattice model. H = - t_d∑_⟨ r, r'⟩∈Λ_d∑_σ c^†_ rσ e^-iν_ r, r'2πσ/3 c_ r'σ -t_u∑_⟨ r, r'⟩∈Λ_u f^†_ r f_ r'-t_⊥∑_⟨ r, r'⟩f^†_ rc_ r' - Δ(N_u-N_d)/2 +U_u∑_ r∈Λ_u n_ r↑ n_ r↓, where Λ_u/d refers to the upper and lower layer triangular lattices generated by the primitive lattice vectors a_1/2=a_0(±√(3)/2,1/2), respectively. For short-handedness, the spin index has been suppressed in terms that are spin independent and Hermitian conjugate is implied. Here we have not explicitly written the interaction in the WSe_2 layer, which is believed to be smaller than the WSe_2 bandwidth, see Appendix <ref> for an estimate of the Coulomb integrals, and is therefore not relevant to our considerations here. The model has C_3z symmetry arising from invariance under three-fold rotations around an axis perpendicular to the planes, time reversal symmetry 𝒯=iσ^y𝒦, where 𝒦 indicates complex conjugation, U(1)_v symmetry generated by spin S^z rotation and the mirror symmetry M_y which sends r = (x,y) →(x,-y) and ↑→↓. M_y is an approximate symmetry of the lattice model and is not present in the physical bilayer. From Wannierization of the continuum model wavefunctions we find t_u=4.2meV and t_d=8.3meV while the interlayer hopping is t_⊥=1.8meV. The MoTe_2 interaction estimated as U_u=90meV is the largest energy scale of the problem. At filling ν=1 of the upper layer, the relatively large value U_u/t_u≈ 20 means we obtain a Mott insulator of localized magnetic moments in a triangular lattice <cit.> so that the low energy physics of the MoTe_2 layer is governed by magnetic degrees of freedom correlated by a generalized Heisenberg exchange, while the allowed hybridization processes between the MoTe_2 and WSe_2 layers are charge fluctuations involving creation of a electron in MoTe_2 and an hole in WSe_2 with double occupation of MoTe_2 sites forbidden. The result is that the physics is described by a periodic Anderson model of the form H_ PAM = ∑_ kc^†_ kσ(ϵ_ kσ+Δ/2) c_ kσ + J_H ∑_⟨ r, r'⟩∈Λ_u S_ r· S_ r' -t_⊥/√(N_s)∑_ r∈Λ_u[∑_ k e^i k· r V_ k𝒫(f^†_ rc_ k)𝒫+h.c.] -t_u∑_⟨ r, r'⟩∈Λ_u𝒫 f^†_ r f_ r'𝒫 -Δ/2∑_ r∈Λ_u f^†_ rf_ r, where in intralayer t_u and interlayer t_⊥ hopping terms the projector 𝒫 suppresses processes that create a double occupied site in the MoTe_2 layer. Here the S are spin operators in the MoTe_2 layer, J_H=4t_u t^2_⊥/Δ^2 is the superexchange in the charge transfer limit, N_s is the number of unit cells, and ϵ_ kσ=-t_d F_ kσ with F_ kσ=2∑_j=1^3cos(γ_j· k+2πσ/3) and V_ k=∑^3_j=1 e^i k· u_j. 
In the latter expressions, we have introduced the vectors u_1=a_0(1,0)/√(3) and u_j=C^j-1_3z u_1 connecting the two different triangular lattices and γ_j=C^j-1_3z a_2, the primitive vectors of each triangular lattice where j=1,2,3. § PARTON REPRESENTATION OF THE CHIRAL PERIODIC ANDERSON MODEL To account for mixed valence states within the low-energy subspace of zero double occupancy in MoTe_2, we employ the slave boson representation f_ rσ=b^†_ rχ_ rσ <cit.> with the local constraint n_b+n_χ=1. Writing out the effective model (<ref>) within the parton decomposition leads to H_ PAM = ∑_ kc^†_ kσ(ξ_ kσ+Δ/2) c_ kσ + J_H ∑_⟨ r, r'⟩∈Λ_u S_ r· S_ r' -t_⊥/√(N_s)∑_ r∈Λ_u[b_ rχ^†_ r∑_ k e^i k· r V_ kc_ k+h.c.] + ∑_ r∈Λ_uχ^†_ r(λ_ r-μ-Δ/2)χ_ r+∑_ r∈Λ_uλ_ rb^†_ rb_ r-t_u∑_⟨ r, r'⟩∈Λ_uχ^†_ rb_ rb^†_ r'χ_ r', λ_ r is the Lagrange multiplier imposing the constraint n_b+n_χ=1, μ enforces the total filling to ν=1+1. A finite n_b implies a finite density of holons in MoTe_2 and interlayer valence fluctuations. To decouple the exchange interaction we introduce the bosonic field Q_ r, r' defined along the bonds of the triangular lattice Λ_u that takes into account short-range antiferromagnetic correlations without long-range order. Performing the Hubbard-Stratonovich decoupling of the Heisenberg interaction J_H S_ r· S_ r' <cit.> we obtain the partition function Z=∫𝒟(Q^†,Q;c^†,c;χ^†,χ;b^†,b;λ)e^-∫^β_0 dτ[ ℒ_1+ℒ_2 ], where the effective action ℒ = ℒ_1+ℒ_2, with ℒ_1 and ℒ_2 given by ℒ_1 = ∑_ r∈Λ_ub^†_ r(∂_τ+λ_ r) b_ r+χ^†_ r(∂_τ+λ_ r-μ-Δ/2)χ_ r +∑_ kσc^†_ kσ(∂_τ+ξ_ kσ+Δ/2) c_ kσ, ℒ_2= -t_⊥/√(N_s)∑_ r[χ^†_ rb_ r∑_ k e^i k· r V_ kc_ k+h.c.] -t_u ∑_⟨ r, r'⟩∈Λ_uχ^†_ rb_ rb^†_ r'χ_ r'+2/J_H∑_⟨ r, r'⟩∈Λ_u|Q_ r, r'|^2 +∑_⟨ r, r'⟩∈Λ_u[Q_ r, r'χ^†_ r'χ_ r+h.c.]. The theory is invariant under the U(1) gauge transformation χ_ r→ e^iθ_ rχ_ r, b_ r→ e^iθ_ rb_ r and λ_ r→λ_ r-i∂_τθ_ r, Q_ r, r'→ Q_ r, r' e^i(θ_ r'-θ_ r) where θ_ r∈[0,2π). The latter gauge symmetry results in an emergent gauge field a_μ <cit.>, θ_ r=∫^ rd l· a, which is approximated to a static background at the mean-field level; fluctuations beyond the mean-field level are discussed in the following when needed. We now proceed to solve the model (<ref>) at the mean-field level to obtain the phase diagram of the system. § MEAN-FIELD THEORY The mean-field solution is obtained by taking the saddle point of the Lagrangian with respect to b_ r, Q_ r, r' and λ_ r. The mean field solution is formally justified in a large-N limit of a SU(N) generalization of the model <cit.> but is believed to represent the physics of the relevant phases even at N=2. We obtain the set of self-consistent equations: Q_ r, r' = -J_H⟨χ^†_ rχ_ r'⟩/2, λ_ r⟨ b_ r⟩ = t_⊥/√(N_s)∑_ ke^-i k· r V^*_ k⟨ c^†_ kχ_ r⟩ + t_u ∑_⟨ r' ⟩_ r⟨ b_ r'⟩⟨χ^†_ r'χ_ r⟩, ⟨ b^†_ rb_ r⟩+⟨χ^†_ rχ_ r⟩=1, ∑_ r⟨χ^†_ rχ_ r⟩ + ⟨ c^†_ r c_ r⟩=2N_s, where in the first equation the sites r, r' are nearest neighbors and in the second equation, the sum ∑_⟨ r' ⟩_ r extends over the nearest neighbor of the site r in the upper layer triangular lattice Λ_u. 
In the basis Ψ=(χ,c)^T the resulting mean-field Hamiltonian reads: H_ mf=∑_ kσΨ^†_ kσ[ ξ̅_ k+λ -Δ/2 -t_⊥ b_0 V_ k; -t_⊥ b_0 V^*_ k ξ_ kσ+Δ/2 ]Ψ_ kσ, where we considered the mean-field solution invariant under translations (<ref>), ⟨ b_ r⟩=b_0, ξ̅_ k=- t^*_uF_ k-μ is the dispersion of the spinons with F_ k=2∑_j=1^3cos(γ_j· k) and effective hopping amplitude t^*_u=t_u|b_0|^2-Q where we assume the bond variable Q_ r, r' to be translational Q_ r, r'=Q_ r- r' invariant and preserving the point group symmetries of the model. The Hamiltonian exhibits two qualitatively different phases. One is a Fermi liquid coexisting with a magnetic state, which may either break translation and spin-rotation invariance, or may be a U(1) spin liquid <cit.>. In this coexistence phase the local moments are not counted in the Fermi surface volume. Second, there is a heavy fermion phase in which the local moments are hybridized with the itinerant carriers and the Fermi surface includes all of the carriers including the ones that form the local moment. The p-wave nature of the hybridization gives a topological character to the heavy Fermi liquid state leading to a topological spin Hall phase with a spin Hall conductance that is quantized when a direct gap between the itinerant and the local moment bands is realized <cit.>. In the theoretical approach used here the transition between the two different phases can be determined by finding the critical interlayer tunneling t_⊥ where the mass of the holon fluctuations b_ r (<ref>) vanishes. The interlayer charge gap which corresponds to the mass of the holon <cit.> is obtained computing Gaussian fluctuations around the mean-field solution b_0=0, the Green's function of the holon fluctuations reads: G^-1_b( q,iω) = iω -λ - ω_ q -Σ_b( q,iω), where the dispersion relation ω_ q reads: ω( q) = -t_u/N_s∑_ p F_ q+ p⟨χ^†_ pχ_ p⟩, and the self-energy is given by Σ_b( q,iω)=t^2_⊥Π_χ c( q,iω). Π_χ c is the hybridization susceptibility; setting p=( p,iϵ) yields Π_χ c(q)=T/N_s∑_ p ϵ∑_σ |V^*_ p|^2G_cσ(p)G_χσ(p+q). The critical value of the interlayer coupling t_⊥ is obtained by the vanishing of the holon mass: ω_b( q=0) =λ+ω( q=0)+Σ_b( q=0,ω=0)=0. In this parton theory the b_0=0 phase is of the FL^⋆ type of Fermi liquid, which has no broken translational or spin rotation symmetry (so the Mo-layer spins are in a U(1) spin liquid state) but has a Fermi surface of long-lived electronlike quasiparticles whose volume does not count the local moments in the MoTe_2 layer. An alternative to this state, not studied here, would involve a Mo-layer in which the spins have a conventional magnetic order. The qualitative dependence of the phase boundary on the model parameters would be similar. Figs. <ref>a) and <ref>b) show the critical line (solid black line) where the transition from the U(1) spin liquid to the topological Kondo phase takes place as a function of the microscopic parameters of the model. The phase diagrams are characterized by two different regions: when the holon fluctuations ω_b>0 are gapped the itinerant carriers of the WSe_2 layer coexist with a Fermi liquid of spinons (FL^*) with short range antiferromagnetic correlations in MoTe_2. For ω_b<0, the spin liquid is unstable towards hybridization with the itinerant electron layer developing the heavy Fermi liquid phase (FL). Fig. <ref>a) shows that increasing the superexchange J_H, which increases the bandwidth of the spinons χ, enhances the critical value of the interlayer hopping to form the heavy Fermi liquid phase. 
On the other hand, Fig. <ref>b) shows the critical line in the plane t_⊥/Δ and t_u/Δ with J_H/Δ=0.25. Increasing t_u favors the formation of holes in the local moment layer, thus, inducing the heavy Fermi liquid mixed valence phase where carriers of the local moment layer are promoted to the charge transfer band. Finally, Fig. <ref> shows the variation of the mean field parameters b_0, Q, μ and λ as well as the mass of the holon propagator ω_b( q=0)/Δ (<ref>) as t_⊥ is varied at fixed values of the other parameters. We notice that the formation of the Kondo phase takes place for t_⊥>t^c_⊥≃0.46 Δ with the formation of a condensate of holons b_0≠ 0. Correspondingly, the mass of the holon fluctuations vanishes when ω_b( q=0)=0, which is associated with the fluctuations of the phase of the holon degrees of freedom b_ r. Furthermore, in the FL^* regime λ≈μ+Δ/2, pinning the energy of the spinons χ_ r at the Fermi level. Entering into the Kondo regime, λ deviates from this value from the non-vanishing hybridization with the itinerant carriers. § TOPOLOGICAL KONDO SEMIMETAL We now turn our attention to the characterization of the electronic properties of the two different phases. The topological character of the Kondo semimetal phase originates from two ingredients  <cit.> directly inherited from the TMD monolayers  <cit.>: the opposite parity (i.e., different C_3z eigenvalues of the Wannier orbitals residing in the two layers) and the spin-momentum locking. These two properties lead to a nontrivial momentum dependence of hybridization V_𝐤 (<ref>) which vanishes at the high symmetry points κ and κ' of the mini Brillouin zone and has a chiral structure. To understand the semimetallic nature of the Kondo phase we start from the simplified limit t_u=J_H=0 where the spinon χ band is exactly flat. The vanishing of V_ k implies E_κ↑/κ'↓ +=E_κ'↑/κ↓-. As a result the indirect gap in the quasiparticle band structure E_ kσ± vanishes and the state is a nodal semimetal with nodes at k and with a very weak dispersion away from the nodes as shown in Fig. <ref>a). Introducing J_H,t_u gives a non vanishing hopping t^*_u=t_u|b_0|^2-Q>0 leading to a spinon dispersion with maxima at κ and κ' where the hybridization function V_ k vanishes. Thus, the maximum of the lower quasiparticle band E_ k ↑ - is located at k=κ with value E_κ'↑-=3t^*_u+λ-μ-Δ/2 and E_κ↑+=E_κ'↑-. Furthermore, expanding the upper quasiparticle dispersion around κ we find E_κ+ k↑ +≃ E_κ↑ +-ħ^2 k^2/2m_ eff with ħ^2/m_ eff=3|b_0|^2[t_u-t^2_⊥/(6t_d+λ-Δ)] +3|Q|. For realistic model parameters we find t_u>t^2_⊥/(6t_d+λ-Δ) such that m_ eff>0, which implies that the upper and lower quasiparticle bands form electron and hole pockets giving rise to a compensated semimetal. The formation of electron and hole pockets is displayed in Fig. <ref>b) where we show the band structure J_H,t_u≠0. The lower band crosses the Fermi energy around κ' forming hole like pockets, while the upper one gives rise to an electron pocket along the γ-κ line. A two-dimensional visualization of the Fermi surfaces of the compensated heavy Fermi liquid is given in Fig. <ref>a) where the color code shows the dispersion of the upper and lower bands. The electron and hole like residual Fermi surfaces are shown as red and black solid lines in Fig. <ref>a), respectively, while the original Fermi surfaces before hybridization are shown as dashed grey lines. 
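A minimal numerical sketch of this chiral structure (with the moiré lattice constant a_0 set to unity for illustration) confirms that the form factor V_ k=∑^3_j=1 e^i k· u_j built from the vectors u_j defined above vanishes at a corner of the mini Brillouin zone, that its phase winds by ±2π around the node, and that |V_γ|=3 at the zone center:

```python
import numpy as np

a0 = 1.0  # moire lattice constant (set to 1; only the geometry matters here)

def c3z(v):
    """Rotate a 2D vector by 120 degrees."""
    c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

# Vectors connecting the two triangular sublattices: u_1 = a0 (1,0)/sqrt(3), u_j = C3z^(j-1) u_1
u = [np.array([a0 / np.sqrt(3), 0.0])]
for _ in range(2):
    u.append(c3z(u[-1]))

def V(k):
    """Chiral interlayer form factor V_k = sum_j exp(i k.u_j)."""
    return sum(np.exp(1j * np.dot(k, uj)) for uj in u)

gamma = np.array([0.0, 0.0])
kappa = np.array([0.0, 4 * np.pi / (3 * a0)])   # a corner of the mini Brillouin zone
print(abs(V(gamma)))   # -> 3.0: maximal hybridization at the zone center
print(abs(V(kappa)))   # -> ~1e-16: node of the hybridization at kappa

# Winding of the phase of V_k on a small loop around kappa (chirality of the node)
thetas = np.linspace(0, 2 * np.pi, 2001)
loop = [V(kappa + 1e-3 * np.array([np.cos(t), np.sin(t)])) for t in thetas]
phases = np.unwrap(np.angle(loop))
print((phases[-1] - phases[0]) / (2 * np.pi))   # -> +/-1: the p-wave winding of V_k around the node
```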
In addition to the electron-hole pockets the chiral hybridization gives rise to a Berry curvature flux Ω_ kσ±=-2⟨∂_k_xu_ kσ±|∂_k_yu_ kσ±⟩, where |u_ kσ±⟩ are the Bloch vectors of the Hamiltonian (<ref>), piercing the residual Fermi surfaces of the electron-hole excitations. Notice that due to the time-reversal symmetry Ω_ k↑±=-Ω_- k↓± with spin Chern number C_↑±=∓1, a non-vanishing density of holons b_0≠0 opens a topological gap in the spectrum. Fig. <ref>b) shows the Berry curvature distribution in the mini Brillouin zone. The Berry curvature is concentrated along the direct minimum of the gap contour where the Fermi surface of the itinerant carriers was located. The presence of a residual electron and hole pockets in the Kondo semimetallic state implies that the spin Hall conductivity σ^↑/↓_xy = e^2/h∫_ mBZd^2 k/2π∑_n=±Ω_ k↑/↓ nf(E_ k↑/↓ n ) is not quantized. For the self-consistent solution shown in Figs. <ref>a) and <ref>b) the spin Hall conductivity is σ^↑/↓_xy≈±0.7 (in units of e^2/h). Deformations of the moiré lattice can remove the electron hole pockets and induce a transition from a Kondo semimetal to a quantum spin Hall Kondo insulator. We detail the transition between these two phases in the next section. § TRANSITION FROM A KONDO SEMIMETAL TO A QUANTUM SPIN HALL KONDO INSULATOR We now discuss the effect of moiré lattice distortion corresponding to a random distortion of the moiré potential resulting in a relative shift between the two layers <cit.>. On the lattice model, the distortion corresponds to a shift of the upper layer r∈Λ_u→ r+ϕ and results in the variation of the three different bonds of the honeycomb lattice reading δ d_j/| u_j|≈1- u_j·ϕ/| u_j|^2 where the expansion holds for small deformations ϕ. Fluctuations of the bond length give rise to a variation in the overlap between the Wannier orbitals which reads t_⊥→ t_⊥ ,j≈ t_⊥(1+ u_j·ϕ/| u_j|^2) <cit.>. The random dislocation ⟨ϕ⟩=0 preserves on average the point group symmetries of the moiré lattice model (<ref>). ϕ^a are distributed according to P(ϕ)=e^-|ϕ|^2/(2η^2)/(2πη^2) implying ⟨ϕ^aϕ^b⟩_ dis.=δ_abη^2 where η quantifies the standard deviation of the bond length | u_j|=a_m/√(3). To leading order in the deformation ϕ the average interlayer hybridization reads: ⟨|V_ k(ϕ)|^2⟩_ dis. = ∑^3_jl=1e^-i k·( u_j- u_l)(1+ηû_l·û_j), where û_j and α are expressed in units of the bond length and V_ k(ϕ)=∑^3_j=1e^i k· u_jt_⊥,j/t_⊥. It is instructive to study the effect of interlayer random dislocations on the hybridization function V_ k. Fig. <ref> shows ⟨|V_ k(ϕ)|^2⟩_ dis. for different values of α. We notice that for k=γ the contribution of the random dislocations vanishes since ∑_j=1^3 u_j=0. On the other hand, for k=κ/κ' we have ⟨|V_ k(ϕ)|^2⟩_ dis.=9η^2/2, indicating that random dislocations remove on average the node of the hybridization function. This effect can be readily understood expanding V_ k(ϕ) around κ which gives V_κ+ k(ϕ)≈-3(k_–iϕ_-)/2 where k_±=k_x± ik_y and ϕ_±=ϕ_x± iϕ_y acts as a random gauge field describing a static deformation of the lattice <cit.>. For any disorder realization ϕ, the nodal point of the hybridization function V_ k(ϕ) is shifted by k_0=(ϕ_y,-ϕ_x). As a result the hybridization pseudogap characterising the chiral Kondo model (<ref>) is filled by averaging over disorder realizations as shown in Fig. <ref>. We comment that this perturbation is relevant also in the low-doping regime ν=1+x (x≪1) with x filling factor of the WSe_2 layer. 
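The quoted disorder average at the node, ⟨|V_κ(ϕ)|^2⟩_ dis.=9η^2/2, can be checked with a short Monte Carlo sketch (a_0 set to unity; Gaussian shifts ϕ with standard deviation η per component, measured in units of the bond length as above):

```python
import numpy as np

rng = np.random.default_rng(0)
a0 = 1.0
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])                       # 120-degree rotation matrix

u = [np.array([a0 / np.sqrt(3), 0.0])]
for _ in range(2):
    u.append(R @ u[-1])
u = np.array(u)
uhat = u / np.linalg.norm(u, axis=1, keepdims=True)   # unit bond directions

kappa = np.array([0.0, 4 * np.pi / (3 * a0)])         # corner of the mini Brillouin zone
phase = np.exp(1j * u @ kappa)                        # e^{i kappa.u_j}; these three phases sum to zero

eta = 0.3                                             # std of each component of phi (bond-length units)
phi = rng.normal(0.0, eta, size=(200_000, 2))         # random interlayer shifts
# V_kappa(phi) = sum_j e^{i kappa.u_j} (1 + uhat_j . phi), i.e. t_perp,j / t_perp modulating each bond
V = ((1.0 + phi @ uhat.T) * phase).sum(axis=1)

print(np.mean(np.abs(V) ** 2), 9 * eta ** 2 / 2)      # Monte Carlo average vs the analytic 9 eta^2 / 2
```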
This can be readily understood comparing the characteristic Fermi wavevector k_F=√(mx/ρ)/ħ with the average deformation introduced by random distortions of the moiré lattice which goes like the Kondo hybridization k_0∝1/η, the latter being the dominant scale k_0>k_F below a characteristic filling factor x<x_0∼ħ^2ρ k_0^2/m. Fig. <ref> shows the band structure in the Kondo regime for large deformation (η=0.7). Interlayer dislocations give rise to a non-zero gap and removes the electron and hole pockets in the single-particle spectrum. The resulting state is a topological Kondo insulator with quantized spin Hall Chern number C_↑/↓=±1. This can be readily understood in the J_H=t_u=0 limit where due to the vanishing of the hybridization V_ k at κ and κ' we have E_κ↑+=E_κ'↑-. Focusing on the spin ↑ sector, a non-zero distortion ϕ gives rise to the k· p Hamiltonian at κ: h_↑(κ+ k)≈[ 0 -3it_⊥|b_0|ϕ_-/2; 3it_⊥|b_0|ϕ_+/2 -Δ_κ ] , and at κ': h_↑(κ'+ k)≈[ Δ_κ' 3it_⊥|b_0|ϕ_+/2; -3it_⊥|b_0|ϕ_-/2 0 ] , where Δ_κ/κ' are the direct gaps at κ/κ', respectively. Applying non-degenerate perturbation theory, we readily find δ E_κ↑+=9η^2 t_⊥^2|b_0|^2/(2Δ_κ) and δ E_κ'↑-=-9η^2 t_⊥^2|b_0|^2/(2Δ_κ'), which due to level repulsion leads to a full gap in the spectrum: E_ gap=E_κ↑+-E_κ'↑-=9η^2/2t^2_⊥|b_0|^2(Δ_κ+Δ_κ')/Δ_κΔ_κ', where average over disorder realizations has been performed. The evolution from a topological Kondo semimetal to an insulating state is shown in Fig. <ref> where we study the evolution of the self-consistent solution (μ,λ,Q,b_0) and of the spin Chern number (<ref>) as a function of η. The bottom panel shows the spin/valley Hall conductance in unit of e^2/h computed with Eq. (<ref>) with E_ k nσ dispersion determined self-consistently. Increasing σ increases the interlayer hybridization as shown by the third panel of Fig. <ref>. As a result of the level repulsion, increasing η also induces a gap in the spectrum leading to the formation of a quantum spin Hall Kondo insulator with quantized Δσ_xy=σ^↑_xy-σ^↓_xy=2 in units of e^2/h. Finally, we conclude summarizing the various phases realized in our theory. Increasing t_⊥/Δ, the system undergoes a transition from a C_σ=0 FL^* phase to an anomalous spin Hall Kondo phase with σ^σ_xy≠ 0. Within, the Kondo regime we find two different states: the semimetal with σ^σ_xy not quantized and the insulator where σ^↑/↓_xy=± 1 and a small energy gap <cit.> which is proportional to the holon condensate density |b_0|^2 and reflects the energy scale to break the singlets. A natural consequence of these non-zero Chern numbers are gapless topological surface states measurable by probing the edge of the sample, which we will study in the next section employing cylindrical geometry. § EDGE MODES We now analyse edge states coexisting with bulk quasiparticles in the Kondo semimetallic regime by solving the mean field equations in a cylinder geometry with periodic boundary conditions along a_3= a_1+ a_2=a_0(0,1) and either open in the other direction. The result for open boundary conditions shown in Fig. <ref> reveals gapless edge modes localised at the two opposite boundaries of the two dimensional cylinder for spin ↑ carriers; spin ↓ edge modes are obtained by time reversal symmetry. The two edge modes propagating in opposite directions are characterized by dramatically different velocities: the edge mode localised along the termination of WSe_2 has a much larger velocity than the one originating from the local moments of MoTe_2. 
The asymmetry between the two edges originates from the large effective mass of localised carriers in MoTe_2 and, generically, from the absence of inversion symmetry which is strongly broken in TMDs. On the single-particle level the edge modes are eigenstates, but many-body scattering processes not considered here can lead to a finite lifetime due to particle-hole exchange with carriers in the semimetallic bands. Also, impurity scattering may couple the edge and bulk modes. A detailed analysis of the response of edge modes to an external electromagnetic field in small gap topological Kondo insulators and semimetals within a self-consistent parton construction approach <cit.> is left to future investigations. § CONCLUSION In this work, we have studied a theoretical model of AB-stacked TMD heterobilayers, with parameters chosen to represent MoTe_2/WSe_2 in the doping regime ν =ν_Mo+ν_W= 1+1. The relatively strong correlations in the MoTe_2 layers and relatively weak correlations in the WSe_2 layer naturally lead to two phases: a Kondo-coupled state, and a layer-decoupled state. The principal result of this paper is that the Kondo-coupled state is a Kondo semimetal, with small compensating electron and hole pockets, rather than the Kondo insulator often found at this carrier concentration. Moreover we have shown that the combination of AB-stacking and the atomic physics of the constituent materials endows the Kondo semimetal with topological properties arising from a p-wave nature of the interlayer hybridization including edge states that are protected on the single-particle level. Including random interfacial distortions enhances interlayer hybridization and leads to an average gap that produces a topological Kondo insulator. The influence of other sources of disorder, such as Coulomb scattering by charged impurities <cit.>, may also be significant, particularly in the regime of low carrier concentration x≤0.05 per moiré unit cell in WSe_2 <cit.>. Models of the general kind considered here may exhibit other phases, including a layer-decoupled phase with a large Fermi surface for carriers in the WSe_2 layer and a spin liquid or a magnetic state in the MoTe_2 layer. We considered the transition between the Kondo coupled and the FL^⋆ phase occurring when the MoTe_2 system becomes a spin liquid. An alternative is a transition of the familiar `Doniach' kind to a state with magnetic order in the MoTe_2 layer. Also as the interaction in the WSe_2 layer is increased, a Mott transition to a fully gapped magnetic insulating state may occur. Investigation of these phases and transitions is an important open question, as is a fuller elucidation of the physics of the edge states. Finally, we highlight that homobilayers, where spin liquid Mott insulators have been theretically proposed <cit.>, and trilayer TMDs at finite displacement fields, could also serve as intriguing platforms for the coexistence of itinerant carriers and local moments. In summary, our work shows that doped and stacked TMD heterobilayers are a novel platform to realize topological Kondo semimetal and insulating states, and transitions that include a change in topology to other many-body states of current interest. 
Given the lack of consensus in the heavy fermion community on the topological nature of the putative topological Kondo insulator SmB_6 <cit.> mostly originating from the experimental observation of 3D bulk quantum oscillations <cit.>, having more clear cut platforms with greater theoretical control are of the utmost importance. We expect that a detailed full self-consistent study of the edge modes and their evolution upon doping will help experimentally identify the proposed topological features we have found within. We thank W. Zhao, T. Senthil, P. Coleman, A. Hardy, A. Georges and N. Wagner for insightful discussions. V.C. thanks K. F. Mak, J. Shan and their entire group for insightful discussions and for their hospitality. K.P.L. and J.H.P. are partially supported by NSF Career Grant No. DMR-1941569. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452 (J.H.P.). A.J.M was partially supported by the Columbia University Materials Science and Engineering Research Center (MRSEC), through NSF grant DMR-2011738. J.C. acknowledges support from the Air Force Office of Scientific Research under Grant No. FA9550-20-1-0260 and the Alfred P. Sloan Foundation. The Flatiron Institute is a division of the Simons Foundation. § FROM CONTINUUM TO LATTICE MODEL In this section we derive the properties of the Wannier orbitals and the lattice model from the continuum moiré theory of heterobilayer TMDs. The moiré Hamiltonian of TMD heterobilayer at valley K <cit.> reads: H_K=[ -(k̂-κ)^2/2m_u+V_u( r) T( r); T^*( r) -(k̂-κ')^2/2m_d+V_d( r)-Δ E_g ], where the labels l=u,d refer to top and bottom layers, respectively, and m_u/d is the effective mass of the top valence band monolayer. V_l( r) is the intralayer moiré potential, T( r) the interlayer tunneling term: V_l( r)= 2V_l ∑_j=1^3cos( g_j· r+ψ_l), T( r) = t(1+ω e^-i b_1· r+ω^* e^-i b_2· r), ω=e^2π i/3 and Δ E_g is the energy offset. In the latter expression we have introduced g_j=4π/√(3)a_0[cos2π (j-1)/3,sin2π (j-1)/3] with j=1,2,3 and b_1/2=4π(±1/2,√(3)/2)/√(3)a_0. Additionally, we set κ= q_3 and κ'=- q_2 where q_j=4π/3 a_0[-sin2π(j-1)/3,cos2π(j-1)/3] with j=1,2,3 as depicted in Fig. <ref>. We focus on the AB-stacked hetero bilayer composed of MoTe_2/WSe_2 with model parameters (a_0,m_u,m_d,t,V_u,ψ_u)=(4.65 nm,0.6m_e,0.35m_e,1.3 meV,4.1 meV,14^∘) determined by first-principle calculations <cit.>. The amplitude of the potential V_d is small <cit.>, we set (V_d,ψ_d)=(2 meV,-106^∘) in our numerical calculations shown in Fig. <ref>. §.§ Wannier orbitals and hopping amplitudes The Wannier orbitals are obtained by first neglecting the interlayer tunneling potential. This approximation which is justified by the weak interlayer hybridization between the bands. Specifically, we solve the intralayer Hamiltonian h_l( r)=-(k̂-K_l)^2/2m_l+V_l( r), with K_u=κ, K_d=κ' and k=-i∇_ r for the topmost Bloch orbital ψ_ k l( r)=e^i k· ru_ k l( r) with u_ k l( r) cell-periodic part. The latter can be decomposed in plane waves in the repeated BZ: u_ k l( r)=∑_ G e^-i G· rz_ G l( k), where G=n_1 b_1+n_2 b_2 and z_ G l( k) the Fourier amplitudes. Moreover, we set κ= q_3 and κ'=- q_2 with q_1=( b_1+ b_2)/3 and q_j=C^j-1_3z q_1. We observe that the intralayer Hamiltonian h_l( r) transforms under C_3z as h_l(C_3z r)=e^i G_l· rh_d( r)e^-i G_l· r , with G_l=C_3zK_l-K_l, G_u= b_2 and G_d= b_2- b_1. 
As a result, the momentum space projection of the continuum model is invariant under C_3z: h_l(C_3z k)=D_l(C_3z)h_l( k)D^†_l(C_3z) where D_l(C_3z) is the representation of the C_3z symmetry in momentum space for the two different layers: D_u(C_3z)=D(C_3z) V^ b_2- b_1, D_d(C_3z)=D(C_3z) V^- b_1. We notice that D(C_3z)_ G, G'=δ_ G,C_3z G' and V^ Q is the sewing matrix which acts as a rigid shift of momentum Q in reciprocal space V^ Q_ G, G'=δ_ G, G'+ Q. The C_3z symmetry implies: ψ_ k u(C_3z r)=e^i( b_2- b_1 )· rψ_C^-1_3z k u( r), ψ_ k d(C_3z r)=e^-i b_1· rψ_C^-1_3z k d( r). From the Bloch orbitals we build the Wannier functions W_ R + r_l l( r) =1/√(N)∑_ ke^-i k· Rψ̃_ k l( r), where n is the band index, R=n_1 a_1+n_2 a_2 defines the moiré triangular lattice, the sum over k is extended over the first BZ and N is the number unit cells. In Eq. (<ref>) ψ̃_ k l( r) we impose ψ̃_ k l( r_l)∈ℝ^+: ψ̃_ k l( r)= e^-iφ_ k l( r_l)ψ_ k l( r) where φ_ k l( r_l)=Arg ψ_ k l( r_l) with r_u=0 and r_d= u_1. The center of the Wannier orbitals for the two different layers are located in the unit cell at the inequivalent positions r_u=0 and r_d= u_1, see Fig. <ref>b) of the main text, forming a honeycomb lattice. Eqs. (<ref>) imply that the Wannier functions tranforms under C_3z as: W_ R u(C_3z r) =e^i( b_2- b_1)· rW_C^-1_3z R u( r), W_ R+ u_1 d(C_3z r) =ω e^-i b_1· rW_C^-1_3z( R+ u_1)d( r), where we use the relation a_1= u_1- u_3, the extra factor in the lower layer Wannier function originates from: φ_C_3z p d( u_1)=4π/3+φ_ p d( u_3). The intralayer hopping is readily obtained: t^l_ R, R' = W_ R lĥ_lW_ R' l =1/N∑_ ke^i k·( R- R')ϵ_ k l, where ϵ_ kl is the energy dispersion of the topmost band of layer l. C_3z symmetry implies ϵ_C_3z k l=ϵ_ k l and as a result t^l_C_3z R,0=t^l_ R,0. On the other hand, the leading interlayer tunneling is given by t^ud_0, R+ u_j+1=W_ 0 uTW_ R+ u_1d= ω^* t^ud_0,C^-1_3z R+ u_j, particularly for R=0 we have t^ud_0, u_j+1=ω^*t^ud_ u_j+1. Choosing a gauge where the upper layer is pinned at γ and keeping only intralayer and interlayer hopping up to first nearest neighbors we find the tight-binding model in Eq. (<ref>). Figure <ref> shows a comparison between the band structures of the continuum model and the tight-binding Hamiltonian. Color coding indicates the layer polarization: the upper band is localized in the MoTe_2 layer, while the second band from the top is localized in the WSe_2 layer. Additionally, we observe the presence of extra MoTe_2 bands that partially overlap in energy with the dispersive band of WSe_2. In this work, we assume that at a filling factor of 1 in the MoTe_2 layer, the intralayer Coulomb repulsion causes these bands to undergo a significant energy shift, larger than the characteristic gap with the WSe_2. A detailed analysis of this effect is left for future investigations. §.§ Coulomb integrals From the Wannier orbitals W_ Rℓ( r) defined in Eq. (<ref>) we compute the strength of the Coulomb interaction. We characterize the interaction through the interaction Hamiltonian projected into the moiré basis H_ int=1/2∑_ijkl∑_σσ'⟨ ij|U|kl⟩ a^†_iσ a^†_jσ' a_lσ a_kσ, where: ⟨ ij|U|kl⟩=∫ dx dx'W^*_i(x)W_k(x)W^*_j(x')W_l(x')/|x-x'|. For a relative dielectric constant ϵ=4.5 the value of the Coulomb energy scale e^2/(4πϵϵ_0 a_0)≈70meV. Computing the Coulomb integral of the Wannier orbitals in Eq. (<ref>) we find U_u=90meV, U_d=69meV and V=40meV. 
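As a quick consistency check of the quoted scale, the prefactor e^2/(4πϵϵ_0 a_0) can be evaluated directly in a back-of-the-envelope sketch, using e^2/(4πϵ_0)≈1.44 eV·nm together with a_0=4.65 nm and ϵ=4.5:

```python
# Coulomb energy scale e^2/(4 pi eps eps0 a0) for the moire lattice constant and dielectric constant above
coulomb_eVnm = 1.44        # e^2/(4 pi eps0) in eV*nm
a0_nm = 4.65               # moire lattice constant in nm
eps = 4.5                  # relative dielectric constant
scale_meV = 1e3 * coulomb_eVnm / (eps * a0_nm)
print(f"{scale_meV:.0f} meV")   # -> ~69 meV, consistent with the ~70 meV quoted above
```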
§.§ Effect of the nearest-neighbor repulsion V In this section we comment on the role of nearest neighbor interactions in the regime of interest in our work, which employing H_nn =V∑_⟨ r, r'⟩n_ r n_ r'. In the U_u→∞ limit we have utilized the parton decomposition f_σ=b^†χ_σ that yields H_nn =V∑_⟨ r, r'⟩(1-b^†_ rb_ r) n_ r', naively one could expect that the energy of the holon is reduced by promoting carriers from the localized to the itinerant layer. However, there is also a shift of the energy of the itinerant carriers in the opposite direction. As a result the charging energy will be enhanced by μ→μ+3V. The spinons are pinned at the Fermi energy λ≈μ+Δ/2→μ+Δ/2+3V. Thus, the mass of the boson is left unchanged λ→λ+3V-3V. From this straightforward estimate we can conclude that V is not relevant in this regime to understand qualitatitve phenomena. However, if we want to determine quantitative estimates of model parameters (e.g. exchange couplings) its important to include nearest neighbor interactions. § SADDLE POINT EQUATIONS Here we list a series of basic useful results for solving the mean-field problem. At the mean-field level the Hamiltonian (<ref>) leads to the Green's function: G_σ(iϵ) = 1/2[1/iϵ-d^0_σ+| d_σ|+1/iϵ-d^0_σ-| d_σ|/] + d_σ·σ/(iϵ-d_0)^2-| d_σ|^2, where we have introduced d^μ_σ( k) = [σ^μ h^σ_ mf( k)]/2 (<ref>). Notice that time reversal symmetry implies h^↑_ mf( k) = h^↓*_ mf(- k). Employing the Green's function one can easily compute the single particle density matrix ⟨Ψ^†_aΨ_b⟩=G_ba(0^-). The hopping of the spinon Q (<ref>) is given by: Q=J_H/48N_s∑_ kσ sF_ ktanhβ(d^0_σ+s| d_σ|)/2[1+sd^z_σ/| d_σ|]. The total filling is constrained to be 2: 2=1/2N_s∑_ kσ[2-∑_s=±tanhβ(d^0_σ+s| d_σ|)/2]. The on-site constraint becomes: |b_0|^2-1/4N_s∑_ k σ s tanhβ(d^0_σ+s| d_σ|)/2[1+sd^z_σ/| d_σ|]=0. Finally, we have the last saddle-point equation: b_0=b_0t^2_⊥/(4N_s)/λ+12 t_u Q b_0/J_H∑_ kσ s |V_ k|^2/| d_s|tanhβ(d^0_σ+s| d_σ|)/2. We notice that for Q<0 the effective mass λ of the holon in Eq. (<ref>) is reduced by the kinetic energy gain 12t_u Q b_0/J_H. We conclude providing the expression of the holon dispersion relation in the FL^* phase. Including fluctuations at the gaussian level we find the dispersion ω( q) = -t_u/N_s∑_ pσ F_ q+ p f(ξ̅_ p+λ-Δ/2), and the self-energy: Σ_b(q)=∑_ pσt^2_⊥|V_ p|^2/N_sf(ξ_ pσ+Δ/2)-f(ξ̅_ p+ q+λ-Δ/2)/iΩ + Δ + ξ_ pσ - λ - ξ̅_ p+ q, with q=( q,iΩ). Fig. <ref> shows the characteristic behavior of the dispersion of the spinons, itinerant carriers and the holon gap ω_b( q) in Eq. (<ref>).
http://arxiv.org/abs/2407.12930v1
20240717180347
A Light QCD Axion with Hilltop Misalignment
[ "Raymond T. Co", "Tony Gherghetta", "Zhen Liu", "Kun-Feng Lyu" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "hep-th" ]
1.1 UMN-TH-4326/24 Physics Department, Indiana University, Bloomington, Indiana 47405, USA School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, Minnesota 55455, USA School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA § ABSTRACT We study the cosmological evolution of a light QCD axion and identify the parameter space to obtain the correct relic dark matter abundance. The axion potential is flattened at the origin, corresponding to the only minimum, while it is unsuppressed at π. These potential features arise by assuming a mirror sector with the strong CP phase θ̅ shifted by π compared to the SM sector, which allows the mirror axion potential to be tuned against the usual QCD axion potential. Before the QCD phase transition, assuming the mirror sector is decoupled and much colder than the SM thermal bath, the mirror sector potential dominates, causing the axion to initially roll to a temporary minimum at π. However, after the QCD phase transition, the potential minimum changes, and the axion relaxes from the newly created “hilltop” near π to the CP-conserving minimum at the origin. As the axion adiabatically tracks this shift in the potential minimum through the QCD phase transition, with non-adiabatic evolution near π and 0, it alters the usual prediction of the dark matter abundance. Consequently, this “hilltop" misalignment mechanism opens new regions of axion parameter space, with the correct relic abundance while still solving the strong CP problem, that could be explored in future experiments. A Light QCD Axion with Hilltop Misalignment Kun-Feng Lyu July 17, 2024 =========================================== § INTRODUCTION The QCD axion is a simple and elegant solution to the strong CP problem <cit.>. The strong CP-violating phase θ̅ is promoted to a dynamical field ϕ_a, the axion, identified as the pseudo Nambu-Goldstone boson that arises from the spontaneous breaking of Peccei-Quinn symmetry at the decay constant scale, f_a. The axion field couples to the QCD topological term with a suppression scale proportional to f_a. A potential for the axion is induced via this coupling by nonperturbative dynamics after QCD confines. This explicit breaking of the Peccei-Quinn symmetry causes the axion field to relax to a field value that minimizes the total potential, where θ_a ≡ϕ_a/f_a cancels the strong CP phase, θ̅. The QCD axion mass can be precisely calculated via chiral perturbation theory and is given by m_a,QCD^2 = m_u m_d(m_u + m_d)^2m_π^2 f_π^2f_a^2 , where m_π and f_π are the pion mass and decay constant, respectively and m_u,d are the up, down quark masses. A more precise result is computed by Ref. <cit.> by considering higher-order corrections. There is an active worldwide experimental effort searching for the QCD axion which has yet to probe a large fraction of the parameter space in the (1/f_a, m_a) plane <cit.>. In most of the (1/f_a, m_a) parameter space the axion does not address the strong CP problem. Therefore, one interesting question is whether the axion mass could deviate from the conventional QCD prediction Eq. (<ref>) while still solving the strong CP problem. 
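For orientation, Eq. (<ref>) can be evaluated numerically. The short sketch below assumes the benchmark inputs m_u/m_d≈0.47, m_π≈135 MeV and f_π≈92 MeV (representative values, not fixed by the discussion above) and reproduces the familiar estimate m_a≈5.8 μeV for f_a=10^12 GeV, scaling as 1/f_a:

```python
import numpy as np

# Tree-level QCD axion mass: m_a = sqrt(m_u m_d)/(m_u + m_d) * m_pi f_pi / f_a = sqrt(z)/(1+z) * m_pi f_pi / f_a
# The numerical inputs below are standard benchmark values assumed for illustration.
z    = 0.47          # m_u / m_d
m_pi = 0.135         # GeV
f_pi = 0.092         # GeV
f_a  = 1e12          # GeV

m_a_GeV = np.sqrt(z) / (1.0 + z) * m_pi * f_pi / f_a
print(f"m_a = {m_a_GeV * 1e15:.1f} micro-eV for f_a = 1e12 GeV")  # -> ~5.8 micro-eV, scaling as 1/f_a
```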
Indeed, there are a number of different scenarios which can lead to an axion that is either heavier or lighter than the value in Eq. (<ref>). To enhance the axion mass, QCD must be modified at UV scales and various approaches have been proposed including exploiting small instanton effects from a UV broken gauge group <cit.>, an extra dimension <cit.>, or from imposing a ℤ_2 symmetry <cit.>. On the other hand, suppressing the QCD axion mass appears much more challenging. Currently, there are only two viable models. One is the proposal by Hook <cit.> (and further studied in Ref. <cit.>), where the axion mass can be naturally suppressed by imposing a discrete ℤ_N symmetry. By summing up the potential contributions from the QCD sector plus N-1 copies of the mirror QCD sector, assuming all the potentials have the same magnitude but shifted θ angles, the total axion potential turns out to be suppressed by (m_u/m_d)^N. The second possibility is the so-called anarchic QCD axion <cit.>. Based on the DFSZ axion model, the phase of the B_μ term can shift the physical theta angle. By tuning the size of the potential coefficients, the physical axion field can obtain a smaller mass. Apart from the particle physics model building, the cosmological evolution of the axion field has also received much attention. The axion is well-known to serve as the dark matter candidate <cit.>. One of the most significant approaches to produce the correct dark matter abundance is the misalignment mechanism. The axion field is initially misaligned from the minimum of the potential. Due to the cosmic expansion, the axion can only start to evolve when the Hubble expansion rate drops down to the axion mass. This misalignment mechanism only happens in a specific region of the axion parameter space, if the initial misalignment angle is of order unity. There have been proposals of various production mechanisms to extend the allowed parameter space, such as a modified cosmological background <cit.>, parametric resonance <cit.>, kinetic misalignment <cit.>, and dynamical misalignment <cit.>. In this paper, we will focus on models with a mirror sector where the θ̅ angle is shifted by π compared to the Standard Model (SM) sector. This causes the mirror axion potential to have an opposite sign compared to the usual QCD axion potential and therefore the axion mass can be reduced by tuning the amplitude of the mirror axion potential. We will consider two examples for the new physics potential. The first example relies on the nonperturbative dynamics of mirror QCD to generate a potential identical in form to the usual QCD axion potential, except that the mirror Higgs vacuum expectation value (VEV) is tuned to reduce the axion mass. Alternatively, in the second example, the mirror QCD group is assumed to be spontaneously broken by a newly introduced mirror QCD charged scalar. A cosine potential is then generated by instantons where the VEVs of the mirror Higgs doublet and QCD charged scalar are tuned to cancel the QCD contribution to the axion potential. In both examples, the height of the total potential at θ_a=π is not significantly suppressed and the tuning is connected to scalar field VEVs in the mirror sector. Since the Higgs mass hierarchy is not yet understood, we remain agnostic about how the scalar field VEVs are obtained. 
Given that there are examples, albeit tuned, where the QCD axion potential can be flattened near the origin but not at π, we next investigate the cosmological consequences of an axion potential with these features. The cosmological evolution is similar to the dynamical misalignment <cit.> and the trapped misalignment <cit.> mechanisms. In the early universe, before the QCD phase transition, the axion potential is assumed to be dominated by the mirror QCD contribution which has a minimum at θ_a = π. Thus, the axion field first oscillates and relaxes to π. Only when the SM QCD contribution to the axion potential becomes significant does the potential minimum switch from θ_a = π to θ_a = 0. This is in contrast to the ℤ_N model <cit.>, where there are N distinct degenerate minima of the potential and the axion can only reside in the desired CP conserving minimum θ_a = 0 with a 1/N probability. In our case, we have a unique minimum at θ_a = 0, so that the axion always evolves to the correct vacuum required to solve the strong CP problem. Moreover, we numerically track the time-dependent potential in greater detail compared to Ref. <cit.> and discover distinct features where the axion adiabatically rolls down to the origin, with non-adiabatic evolution near π and the origin. This new behavior in the axion field evolution points to different parameter space to obtain the correct dark matter abundance, thereby providing further motivation to search for lighter QCD axions. This paper is organized as following. In Section <ref>, we briefly introduce examples of models with a light QCD axion and discuss the features of the axion potential as well as effects at finite temperature and constraints from cosmology. A detailed analysis of the cosmological evolution is then given in Section <ref> where the equation of motion is solved and the axion relic abundance is calculated. Finally, we conclude in Section <ref>. § REDUCING THE AXION MASS We aim to reduce the axion mass without spoiling the solution to the strong CP problem. This requires a cancellation in the axion potential which can be achieved by requiring that the second derivative of the potential contribution from the new physics sector is negative at θ_a = 0. Hence, the origin must be a local maximum for the potential contribution generated by the new physics. One way to obtain a negative potential contribution is to introduce a mirror QCD sector where θ̅ is effectively shifted by π. This can be realized by imposing a ℤ_2 exchange symmetry which is non-linearly realized on the axion, θ_a→θ_a +π <cit.>. Explicitly in the KSVZ scenario, with a complex scalar field Φ= φ/√(2) e^iθ_a (where ⟨φ⟩ = f_a) and KSVZ fermions Ψ_L,R plus the mirror fermions Ψ'_L,R, we can have the following terms ℒ⊃ - y ΦΨ̅_L Ψ_R + y'ΦΨ̅'_L Ψ_R' + h.c. , which is invariant under the following ℤ_2 symmetry[Note that this ℤ_2 symmetry differs from the ℤ_2 symmetry considered in Ref. <cit.>, where Φ→Φ, which was then used to increase the axion mass.] SM↔SM' ; Φ↔ - Φ , provided the Yukawa couplings satisfy y=y' and where prime (^') denotes fields/couplings in the mirror sector. After integrating out the heavy KSVZ fermions, the axion ϕ_a couples to the two sectors via ℒ⊃ϕ_af_a + θ̅α_s8π G^μνG_μν + ϕ_af_a + θ̅ + πα_s8π G'^μνG'_μν , where α_s=g^2/4π is the QCD fine structure constant, G_μν=1/2ϵ_μνρσG^ρσ with G^μν the gluon field strength and similarly for the mirror sector. 
Nonperturbative dynamics in both the QCD and mirror sectors then induces the total axion potential V_ total(ϕ_a,θ̅) = V_ QCD(ϕ_a,θ̅) + V_ new(ϕ_a,θ̅ + π) , where V_ QCD(ϕ_a,θ̅) = - m_π^2 f_π^2 √(1-4 z/(1+z)^2sin^2( ϕ_a/2 f_a + θ̅/2))=- m_π^2 f_π^2 +1/2 m_a, QCD^2 ϕ_a^2 +… , with z ≡ m_u/m_d. The second equality in Eq. (<ref>) is obtained by expanding the shifted axion field around the global minimum at the origin ϕ_a→ϕ_a +⟨ϕ_a⟩ =ϕ_a -θ̅f_a, which gives rise to the axion field mass term with m_a, QCD given in Eq. (<ref>). The contribution V_ new in Eq. (<ref>) is the potential generated by the new physics sector. In order to solve the strong CP problem and have a lighter QCD axion mass the potential must have the following features. First, the potential V_ new is assumed to have the same period as V_ QCD so that there is only one unique minimum for V_ total located at θ̅ = 0. This is contrast to the ℤ_N model in Refs. <cit.>, where there are N degenerate minima located at θ̅ = 2π k/N with k ∈ℤ. This avoids a 1/N selection accident of N degenerate minima for the ℤ_N models to solve the strong CP problem. The second feature is that the amplitude of the potential V_ new must be comparable to the amplitude of the QCD potential in order for there to be a cancellation. Thus, expanding Eq. (<ref>) about the origin gives V_ total⊃1/2(m_a, QCD^2 -m_a, QCD'^2)ϕ_a^2 +…≡1/2ε m_a, QCD^2ϕ_a^2 +… , where m_a, QCD'^2 is the contribution to the axion mass from the mirror QCD sector. The QCD axion mass is then m_a^2≡ε m_a, QCD^2, where ε≡ 1-m_a, QCD'^2m_a,QCD^2 . Clearly, to obtain a light QCD axion, the mirror contribution m_a, QCD' must be tuned to be comparable to the QCD contribution m_a, QCD. In the next subsection, we will present two examples to achieve this cancellation. §.§ Examples from a mirror sector We will consider two possible types for the potential V_ new in Eq. (<ref>). Both of these scenarios will require an explicit breaking of the ℤ_2 symmetry and a tuning in the potential. These tunings will be related to the size of scalar field VEVs in the mirror sector. §.§.§ Explicitly broken ℤ_2 model via mirror Higgs If the mirror sector were an exact copy of the SM where, in particular, the mirror Higgs VEV v' is equal to the SM Higgs VEV v, then as discussed in <cit.>, the origin of the total axion potential is a local maximum. Instead, we will consider the possibility that the ℤ_2 symmetry is explicitly broken by assuming v'≠ v, while keeping all other dimensionless parameters equal. Note that this explicit ℤ_2 breaking via the Higgs VEVs is soft and only logarithmically affects marginal operators where, in particular, the effects on θ̅ are negligible <cit.>. The mirror Higgs VEV v' can then be tuned to obtain a local minimum and a cancellation between the two axion mass contributions. After the mirror QCD confines, the axion potential in the mirror sector is given by V_ new^( conf.)(ϕ_a,θ̅ + π) = - m_π'^2 f_π'^2 √(1-4 z/(1+z)^2sin^2( ϕ_a/2 f_a + θ̅ + π/2)) , where m_π' and f_π' is the mirror pion mass and decay constant, respectively. Using the Gell-Mann-Oakes-Renner relation, m_π'^2 f_π'^2 is proportional to (m_u'+m_d') Λ_QCD'^3, where m_u',d'=y_u,d v' are the mirror quark masses with identical Yukawa couplings y_u,d as in the SM sector. Taking the second derivative of Eq. (<ref>), evaluated at ϕ_a+θ̅f_a=0, gives a tachyonic mass, assuming z<1, with the magnitude m_a,QCD'^2 = m_π'^2 f_π'^2f_a^2z1-z^2 . 
Note that m_a,QCD'^2 = 1+z/1-zd^2/dϕ_a^2 V_ new^( conf.)|_ϕ_a+θ̅f_a=π, which implies that the second derivative at the origin of the shifted potential compared to that of the unshifted potential is larger by a factor of (1+z)/(1-z) ≈ 3 (giving rise to a local maximum at ϕ_a+θ̅f_a=0, when v=v'). Hence, requiring that Eq. (<ref>) is equal to -(1-ε) m_a,QCD^2 (using Eq. (<ref>)), gives the condition m_π'^2 f_π'^2 = 1 - ε1-z1+z m_π^2 f_π^2 . Using Eq. (<ref>), the light QCD axion mass becomes m_a^2=ε m_a,QCD^2 where ε= 1 - (1+z) v' Λ_QCD'^3(1-z)v Λ_QCD^3 ≈ 1 - 3 v'v^5/3 . To obtain the second expression in Eq. (<ref>), we have used z ≈ 1/2 and solved the one-loop renormalization group equation to obtain Λ_QCD'/Λ_QCD≃(v'/v)^2/9 for v'<v. Thus, to obtain ε <1, Eq. (<ref>) implies that v'/v should be tuned to be slightly smaller than 3^-3/5≃ 0.52. In this way, the mirror Higgs doublet VEV v' provides the tuning parameter to obtain a light QCD axion mass. Clearly, a mirror SM sector is constrained in the early Universe, and these effects will be discussed in Section <ref>. §.§.§ Spontaneously broken mirror QCD An alternative possibility for V_ new is to assume a cosine potential that can arise from a dilute instanton gas calculation. For example, one can introduce a massive QCD-charged real adjoint scalar S (with mass m_S ≳ 10 TeV) in the SM sector and a QCD' charged scalar S' in the mirror sector where both fields have the same potential (i.e. m_S=m_S' and identical quartic couplings), except that the mass-squared is positive (negative) for S (S'). Consequently, only S' obtains a non-vanishing VEV v'_S, which spontaneously breaks the mirror QCD group. This means that small mirror instantons of size ρ≲ 1/v'_S will generate a cosine potential for the axion. To obtain a sizeable mirror instanton contribution to the axion potential, we will assume that the mirror QCD breaking scale v'_S lies below the mirror u quark mass m_u', which requires v'≫ v'_S. Since all mirror quarks are much heavier than v'_S, there is no chiral suppression in the instanton amplitudes <cit.>. The instanton potential is then given by <cit.> V^( inst.)_ new(θ̅,θ_a+π) = - e^-i θ̅ + θ_a+π∫d ρρ^5C_1 e^-α(1/2)(N-2)! (N-1)!8π^2g^' 2(1/ρ)^ 2 N e^-8π^2/g^' 2(1/ρ) - C_2 N - 2π^2 ρ^2 v_S^' 2 + h.c. , where N=3, C_1 ≈ 0.466,C_2 ≈ 1.678 and α(1/2)=0.145873. Note that the term -2π^2 ρ^2 v_S^' 2 in the exponential factor of Eq. (<ref>) is the effect of the spontaneously broken mirror QCD symmetry to the instanton action and serves as a natural cutoff for the integration over the instanton size ρ. The scale v_S^' depends on the specific symmetry breaking mechanism. The SU(3) mirror QCD group can be broken to U(1) × U(1) with an adjoint scalar S' or can be completely broken if more scalars are introduced. We do not specify the details and simply use v_S^' as a parameterization of the symmetry breaking mechanism. Due to the ℤ_2 symmetry, the QCD gauge coupling g is the same as the mirror QCD gauge coupling g' at UV scales. However, given the assumed large hierarchy v' ≫ v, the ℤ_2 symmetry is broken at lower energy scales and therefore the couplings only remain equal until v' i.e. g(v') = g'(v'). Below the scale v', the running of the two gauge couplings will deviate. In the mirror sector, the heavier mirror quarks lead to a larger β-function value compared to the QCD β-function at the same scale. Hence, the mirror gauge coupling at the scale v'_S can be much larger than g(v'_S), thereby reducing the amount of exponential suppression in Eq. 
(<ref>). The one-loop running of the mirror gauge coupling is 1α'(μ) = 1α'(v'_S) + b'2πlogμv'_S , where μ is the renormalization scale and b' is the β-function coefficient that depends on the matter content. Substituting Eq. (<ref>) into Eq. (<ref>) we can analytically integrate over ρ to obtain the approximate result V^( inst.)_ new(θ̅,θ_a+π) ≈ 2^1-b'/2π^4-b'Γb'2 - 2 C_1 e^-α(1/2)- 3 C_28π^2g^' 2v'_S^6 e^-8π^2/g^' 2(v'_S) v_S^' 4 cosθ_a+θ̅+π , where Γ denotes the gamma function. Note that the amplitude of the cosine potential is proportional to v_S^' 4 since the mirror QCD group is spontaneously broken and does not confine (i.e. Λ_ QCD' < v'_S assuming that v_S=v'_S≳ 10 TeV and v'≳ 10^9 GeV). The gauge coupling g'(v'_S) depends on the mirror Higgs VEV v' and using the fact that b' = 21/2 (assuming one adjoint real scalar field) near the scale v'_S, Eq.(<ref>) can be numerically approximated as V^( inst.)_ new(θ̅,θ_a+π) ≈ -(86 MeV)^4v'10^11 GeV^4v'_S30 TeV^-7cosθ_a+θ̅ + π , where for simplicity we have assumed the charged scalar mass m_S'=v'_S. An exact computation gives a result that differs from the approximation in Eq. (<ref>) by a factor of 1.6. Requiring that the amplitude of the cosine potential in Eq. (<ref>) matches (1-ε) m_a,QCD^2 f_a^2, in order to cancel[Note that if the ℤ_2 symmetry acts on the axion as θ_a→θ_a then the instanton-induced cosine potential has the same sign as the QCD potential and can be used to increase the axion mass, similar to Ref. <cit.>.] the QCD contribution to the potential, we obtain ε≃ 1 - v'10^12 GeV^4v'_S130 TeV^-7 . Thus, we see that to obtain a value ε < 1 we can tune the values of v' and v'_S. We require that the charged scalar mass is above 10 TeV, but smaller than the mirror u-quark mass. Interestingly, for the scalar mass range 10  TeV≲ m_S'≲ 3× 10^4  TeV a reduced axion mass can be obtained for the mirror Higgs VEV 10^10  GeV≲ v' ≲ 10^16  GeV. Again, there will be cosmological constraints on the mirror sector and these will be discussed in Section <ref>. §.§ Locally-flat axion potential We have given two examples of how a tuned axion potential with a light QCD axion can arise from a mirror sector where the tuning parameter depends on the mirror scalar field VEVs. The axion potentials for the two examples given in Section <ref> are illustrated in Fig. <ref>. It is clear that a precise cancellation only happens at the origin and not at other θ_a values. For instance, there is still a large, untuned potential barrier near θ_a=π, where the amplitude of the total potential is not suppressed by ε, and only near the origin does the total potential become much flatter. This is in contrast to the ℤ_N model studied in Ref. <cit.>. This qualitative form of the axion potential motivates studying the cosmological evolution and in particular, the effects on the relic dark matter abundance that have been previously unexplored. For simplicity, we will consider the cosine form of the potential Eq. (<ref>) to study the cosmological evolution in Section <ref>, although we expect the qualitative features to remain the same for the QCD-like potential Eq. (<ref>). The cosmological evolution will be studied for the range 10^-8≲ε≤ 1, corresponding to the phenomenologically relevant region, which represents a tuning in the potential down to 10^-8. 
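Before turning to cosmology, it is useful to see the tuned potential numerically. The following minimal sketch (with purely illustrative values z ≈ 0.47 and ε = 10^-3) combines the QCD chiral potential with a mirror cosine whose amplitude is tuned to (1-ε) m_a,QCD^2 f_a^2; it confirms that the curvature at the origin is suppressed by ε while the potential difference between θ_a = π and the origin remains unsuppressed.

```python
import numpy as np

z, eps = 0.47, 1e-3          # m_u/m_d and the tuning parameter (illustrative values)

# QCD chiral potential and the mirror cosine, in units of m_pi^2 f_pi^2 (f_a = 1)
V_qcd = lambda th: -np.sqrt(1.0 - 4*z/(1+z)**2 * np.sin(th/2)**2)
amp   = (1 - eps) * z/(1+z)**2               # tuned to (1-eps) * m_{a,QCD}^2 f_a^2
V_new = lambda th: -amp * np.cos(th + np.pi) # cosine form, minimum at theta = pi
V_tot = lambda th: V_qcd(th) + V_new(th)

h = 1e-4
curv0 = (V_tot(h) - 2*V_tot(0.0) + V_tot(-h)) / h**2
print("curvature at 0 over m_aQCD^2 :", curv0 / (z/(1+z)**2))       # ~ eps (flattened)
print("barrier  V_tot(pi) - V_tot(0):", V_tot(np.pi) - V_tot(0.0))  # O(1), not suppressed
```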
Furthermore, we will remain agnostic about an explicit mechanism that could explain this tuning[In the ℤ_N model <cit.>, a small ε does not require tuning, which in the large N limit is approximately given by ε≈1-z^2π^1/41+z^1/2 N^3/4 z^(N-1)/2, where z ≡ m_u/m_d. However, our analysis does not immediately apply to this model because the potential amplitude also becomes suppressed (as seen in figure <ref>) and affects the hilltop misalignment calculation.] since, as will be shown, the form of the potential in Figure <ref> leads to an interesting cosmological evolution. §.§ Finite temperature effects Next we consider the finite temperature effects on the axion potential as well as constraints on the mirror sector in the early universe. First, we have assumed a complete mirror sector which has the same particle content as the SM. If the new particles from the mirror sector are in thermal equilibrium with the SM sector they will contribute to the number of relativistic degrees of freedom in the early universe thereby changing the number of effective neutrino species N_ eff. The measured value of N_ eff from Big Bang Nucleosynthesis (BBN) and the Cosmic Microwave Background (CMB) is very close to the SM prediction and consequently imposes very strong constraints on new physics parameters. In order to satisfy the constraints on N_ eff, the mirror sector temperature must be lower than the SM plasma temperature. With a lower temperature, the mirror sector contributes less entropy density which helps to avoid the bound on N_ eff. It turns out that the temperature T' of the mirror sector should satisfy <cit.> BBN: T' < 0.51 T_ SM , CMB: T' < 0.60 T_ SM , where T_ SM is the temperature of the SM thermal bath. We assume that T'/T_ SM is constant throughout the whole radiation-dominated epoch and is much smaller than unity. Such a hierarchy in the plasma temperatures may be realized via the asymmetric reheating scenario <cit.>. Furthermore, there can also be portal couplings between the mirror and SM sector <cit.>. In particular, the renormalizable Higgs coupling λ_H H' H^† H H^'† H' must be sufficiently suppressed, in the first example discussed in Section <ref> where the mirror Higgs VEV v'≲ v. Other portal couplings, such as the kinetic mixing ϵ_AA' F_μν F^'μν between the hypercharge gauge bosons are also assumed to be sufficiently suppressed to avoid any cosmological constraints. For the ℤ_2 model, the corresponding limits are λ_HH',ϵ_AA'≲ 10^-8 <cit.>. Instead, in the example considered in Section <ref>, there is a large hierarchy between v and v'. The portal couplings allow heavy particles in the mirror sector to decay to SM particles at high temperature, weakening the constraints. Since we are interested in how the locally-flat axion potential modifies the misalignment mechanism in the early Universe, the temperature effects on the axion potential must also be taken into consideration. In what follows, we focus on the scenario where the mirror QCD potential is already present before the axion starts to oscillate. In the early universe this requires that at some time before the QCD phase transition the mirror sector temperature at least satisfies the condition T' <v_S', so that the mirror sector potential dominates. In addition, we also require that T'< T in the early universe so that the Hubble parameter is solely determined by T and more specifically, after the QCD phase transition, the mirror sector temperature needs to satisfy Eq. (<ref>). 
Initially the axion field is frozen and only when the Hubble constant becomes of order the axion mass scale, m_a, QCD', generated in the mirror sector, will the axion field start to oscillate around θ_a=π. As the universe cools, the QCD phase transition eventually occurs at a later time, at which point the potential gradually becomes flat around the new minimum at the origin. As is well known for the QCD sector, at a temperature far above the QCD transition temperature T_ QCD, the dilute instanton approximation can well describe the axion dynamics. This gives rise to the temperature-dependent axion potential V(ϕ_a, T) = - m_a^2(T) f_a^2 cosϕ_af_a , where the temperature dependent axion mass is given by <cit.> m_a^2(T) ≈ O(10^-2) × m_a^2 GeVT^8 , with the temperature scaling chosen to be consistent with the dilute instanton gas approximation. However, as the temperature becomes close to the QCD confinement scale, the temperature-dependent potential becomes analytically incalculable and only lattice simulations can reliably describe the behavior near the phase transition. For temperatures, T≪ T_ QCD the prediction Eq. (<ref>) from chiral perturbation theory is a valid description. In the following, we follow <cit.> and assume that the temperature-dependent axion potential is given by V_ QCD(ϕ_a,T) = V_ QCD(ϕ_a,θ̅) h(T) , where V_ QCD(ϕ_a,θ̅) is given in Eq. (<ref>) and the temperature factor h(T) is defined to be h(T) = {[ 1 T ≤ T_ QCD ,; T_ QCDT^8 T > T_ QCD .; ]. The interpolation choice in Eq. (<ref>) has several advantages. First, the potential is continuous across the phase transition temperature, T_ QCD. We ignore the finite temperature corrections when the plasma temperature is slightly lower than T_ QCD. Above T_ QCD, the potential should also be interpolated to match Eq. (<ref>), which will be ignored since it does not drastically affect the qualitative behavior. Therefore, the total axion potential can be parametrized as V_ total(ϕ_a,T) = V_ QCD(ϕ_a,T) + V_ new(ϕ_a) . Note that we have ignored the temperature dependence of V_ new since, as stated above, the new sector temperature T' is assumed to already be lower than the mirror QCD confinement or spontaneous breaking scale when the axion dynamics starts. The evolution of the Hubble expansion rate and axion mass as a function of temperature is shown in Fig. <ref>. When H drops down to ∼ 3 m_a, the axion field starts to roll. At the QCD phase transition, the Hubble constant is H_ QCD≈ 10^-11 eV. By construction, the axion mass contribution from the mirror sector is very close to that of QCD. For m_a ≫ H_ QCD, the axion field starts to oscillate around θ_a = π before the QCD potential contributes. The axion energy density first scales as matter until the contribution from the SM QCD sector significantly modifies the axion potential. To calculate the axion dark matter abundance, one needs to solve the full equation of motion numerically. § COSMOLOGICAL EVOLUTION Next we consider the cosmological evolution for the axion potential Eq. (<ref>). Since the QCD potential in the SM sector is highly suppressed above T_ QCD, it is negligible until the temperature is sufficiently close to T_ QCD. However, it is not reasonable to add the SM potential contribution abruptly, like a step function. As we will show in the following, the time scale matters and each step should be carefully tracked. Thus, the cosmological evolution before T_ QCD is determined by the mirror axion potential which has its minimum at π, as shown in Fig. 
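As a rough numerical orientation (assuming the standard radiation-era Hubble rate H ≃ 1.66 √(g_*) T^2/M_pl, the usual zero-temperature relation m_a,QCD ≈ 5.7 μeV × (10^12 GeV/f_a), and T_QCD ≈ 150 MeV, none of which are fixed by the text above), the following sketch reproduces H_QCD ∼ 10^-11 eV and compares m_a,QCD with 3 H_QCD for a few decay constants; the largest value shown sits near the boundary where the field is still frozen at t_QCD.

```python
import numpy as np

M_pl   = 1.22e19   # GeV (Planck mass convention assumed)
T_qcd  = 0.15      # GeV, assumed transition temperature
g_star = 17.25     # relativistic d.o.f. near T_QCD (rough)

def hubble(T):                       # radiation-era Hubble rate in GeV
    return 1.66 * np.sqrt(g_star) * T**2 / M_pl

def m_aQCD(f_a):                     # zero-T QCD axion mass in GeV (standard relation)
    return 5.7e-15 * (1e12 / f_a)

H_qcd = hubble(T_qcd)
print("H_QCD ~ %.2e eV" % (H_qcd * 1e9))          # ~ 1e-11 eV, as quoted above

for f_a in (1e11, 1e14, 2e17):
    print("f_a = %.0e GeV : m_aQCD = %.1e eV , m_aQCD/(3 H_QCD) = %.1e"
          % (f_a, m_aQCD(f_a) * 1e9, m_aQCD(f_a) / (3 * H_qcd)))
```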
<ref>, and then below T_ QCD the minimum shifts to the origin, where there is a flattened potential. Our primary task is to solve for the evolution of the axion field ϕ_a(t). Under the cosmic expansion dominated by the SM plasma, the equation of motion of the axion field is given by ϕ_a + 32tϕ̇_a = -∂ V_ total(ϕ_a,T)∂ϕ_a , where the initial conditions are given Section <ref> and we have used the explicit relation H(t)=1/(2 t) for the radiation-dominated epoch. To obtain the axion relic abundance, we will begin the cosmological evolution just before H∼ m_a,QCD, and set the initial field value, θ_a,0∼ O(1). For numerical simplicity, we convert Eq. (<ref>) to an equation for θ_a (≡ϕ_a/f_a) given by θ_a” + 32 τθ_a' = -∂ V_ total(θ_a,T)∂θ_a12 H_ QCD^2 f_a^2 , where the prime (') denotes the derivative with respect to the rescaled time τ = t/t_ QCD. The evolution θ_a(t) for a particular choice of (f_a,ε) is shown in Fig. <ref>. As can be seen in the figure, the evolution of the axion can be naturally divided into three stages. The first stage corresponds to τ < τ_c^(π)≃ 0.76 (to be derived in Section <ref>), where the axion field starts to roll and oscillates around θ_a=π with a decreasing amplitude and a high frequency. After τ_c^(π), the minimum at π begins to move towards the origin. The axion field tracks and oscillates around this shift of the minimum with a sizeable oscillation amplitude until τ is very close to one. For τ > 1, the axion field keeps oscillating around the CP-conserving minimum at the origin with a large amplitude but much lower frequency that depends on the explicit parameter choice. To understand such a behavior, one needs to trace the time-dependent potential throughout the process. The following shows this analysis step by step. §.§ Adiabatic and non-adiabatic evolution As the universe cools and the SM plasma temperature approaches T_ QCD, the SM contribution to the total axion potential gradually increases. At T_ QCD, the minimum of the total potential occurs at the origin θ_a = 0 and therefore during the evolution, the global minimum position must continuously change from θ_a = π to θ_a = 0. To quantitatively describe such an evolution, we expand Eq. (<ref>) around θ_a = π to quadratic order V_ total(ϕ_a,t) ≈121- ε - 3 τ^4 m_a, QCD^2 f_a^2 θ_a - π^2 + ⋯ where we have converted the dependence on the temperature to the rescaled cosmic time τ using the relation τ = (T/T_ QCD)^-2 (we ignore the change of degrees of freedom for simplicity). For early times, when t ≪ t_ QCD (or τ≪ 1), this is just the leading expansion of the cosine potential with the axion mass equal to the QCD axion mass times the 1-ε factor. As the temperature drops and τ increases, the τ^4 term in Eq. (<ref>) becomes larger, causing the potential near π to become flatter. In the case of ε≪ 1, the potential Eq. (<ref>) then flips sign when τ= τ_c^(π)≈ 3^-1/4≃ 0.76. This behavior is depicted in Fig. <ref> where the time-dependent potential is shown for different values of τ. As can be seen in Fig. <ref>, the potential at τ = τ_c^(π) = 0.76 becomes rather flat around θ_a = π. As τ continues to increase, the θ_a = π local minimum turns into a local maximum, with new local minima appearing on either side of θ_a=π. Due to the periodicity of the potential, we only consider axion field values θ_a≤π, where the local minimum moves to θ_a=0 as τ increases beyond τ_c^(π). The form of the potential implies that the axion field initially oscillates around θ_a = π. 
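A minimal numerical integration of the rescaled equation of motion illustrates the three stages described above. The sketch below uses the cosine form of V_new, the interpolation h = min(τ,1)^4 implied by τ = (T/T_QCD)^-2, and deliberately modest illustrative values (R ≡ m_a,QCD/H_QCD = 300, ε = 10^-3, θ_a,0 = 1) so that it runs quickly; realistic values of R are much larger and only sharpen the behavior.

```python
import numpy as np
from scipy.integrate import solve_ivp

z, eps = 0.47, 1e-3          # m_u/m_d and tuning (illustrative)
R      = 300.0               # m_{a,QCD}/H_QCD, kept small so the sketch runs fast
B      = 4*z/(1+z)**2

def dV(theta, tau):          # d(V_total)/d(theta) in units of m_{a,QCD}^2 f_a^2
    h = min(tau, 1.0)**4     # QCD piece switched on as (T_QCD/T)^8 = tau^4
    return h*np.sin(theta)/np.sqrt(1 - B*np.sin(theta/2)**2) - (1 - eps)*np.sin(theta)

def rhs(tau, y):
    th, dth = y
    # prefactor (R/2)^2 follows from t = tau * t_QCD with t_QCD = 1/(2 H_QCD)
    return [dth, -(R/2)**2 * dV(th, tau) - 1.5*dth/tau]

tau0, theta0 = 0.5/R, 1.0    # start well before oscillations, theta_{a,0} ~ O(1)
sol = solve_ivp(rhs, (tau0, 3.0), [theta0, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)

for tau in (0.5, 0.76, 0.9, 1.0, 1.5, 3.0):
    print("tau = %.2f   theta_a = %+.3f" % (tau, sol.sol(tau)[0]))
```

The printed snapshots show the field first locked near π, then sliding toward the origin after τ ≈ 0.76, and finally oscillating about θ_a = 0, mirroring the three stages above.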
As τ→τ_c^(π) = 0.76, the effective oscillation frequency decreases accordingly until the mass-squared term changes sign at τ_c^(π) = 0.76. As the local minimum shifts away from π towards the origin, the axion field also adjusts itself and follows the new local minimum. For m_a,QCD≫ H_ QCD, the axion field evolves adiabatically, oscillating around the new local minimum with a local oscillation frequency equal to the second derivative of the potential at the local minimum. For τ> τ_c^(π), the local oscillating amplitude becomes visible and is amplified compared to that prior to τ_c^(π). Such an enhancement is due to the onset of non-adiabatic behavior, which is determined by the competition between how fast the local minimum shifts away from π and the oscillation frequency. Again, the high power of the time dependence causes the local minimum to quickly move from π. Since the axion field speed is suppressed by the tiny amplitude and the decreasing oscillating frequency as τ→ 0.76, the axion field fails to keep up with the location of the new local minimum. As a result, when the axion field starts to adjust and oscillate around the new temporary minimum, the local minimum has already moved away. This gives rise to a short period of non-adiabatic evolution immediately after τ_c^(π). This behavior can be seen near τ_c^(π)=0.76 in Fig. <ref>. Later, the axion field evolves adiabatically, by catching up and oscillating around the time-varying location of the local minimum. This explains the enveloped oscillation pattern for the second stage during 0.76 < τ < 1 in Fig. <ref>. As τ→ 1, we need to take a closer look at the behavior near the origin. Analogously, we can expand the total potential around θ_a = 0 to obtain V_ total(θ_a,τ)/m_a, QCD^2 f_a^2 = 12-1 + ε + τ^4θ_a^2 + 1241-ε - τ^43θ_a^4 + ⋯ Assuming ε≪ 1, when τ is very close to one, the quartic coefficient has the approximately constant value 1/36, while the quadratic term flips sign at τ_c^(0) = 1-ε^1/4≃ 1- ε/4. Hence, for τ<τ_c^(0), the origin is a local maximum and there is a nonzero local minimum located at θ_ min(τ) = 3 1-ε - τ^4^1/2 = 3 τ_c^(0) 4 - τ^4^1/2≈ 6 τ_c^(0) - τ^1/2 . Taking the time derivative of Eq. (<ref>), we obtain θ_ min' ∼τ_c^(0) -τ^-1/2. This means that as τ→τ_c^(0), the local potential minimum moves towards the origin at a faster and faster rate, which leads to non-adiabatic evolution. For τ>τ_c^(0), when the origin becomes the local minimum, the axion field is still located at a nonzero value where it was previously oscillating around a nonzero local minimum. This nonzero value, therefore, provides the initial amplitude for the final oscillation phase around the origin. The initial misalignment angle θ_a,0 can significantly affect the magnitude of this initial amplitude for the final oscillation stage around the origin. The final axion dark matter relic abundance crucially depends on this initial amplitude which will be computed in the next subsection. §.§ Axion relic abundance To calculate the axion dark matter relic abundance today, we need to obtain the value of the axion field amplitude immediately after t_ QCD, denoted as θ_A,0. The axion field evolution has qualitatively different characteristics depending on the value of the decay constant f_a, and the tuning parameter ε. In the following three subsections, we will consider several regimes for the decay constant and discuss them separately. We first discuss the potential and evolution that is in common between all the different regimes. 
As the time approaches τ = 1 and thereafter, the potential (<ref>) is approximated as V_ total(θ_a)m_a,QCD^2 f_a^2≈12εθ_a^2 + 12423 - εθ_a^4 ≈12εθ_a^2 + 136θ_a^4 . The second derivative of this potential is d^2 V_ total(θ_a)dϕ_a^2 = d^2 V_ total(θ_a)f_a^2 d θ_a^2 = m_a, QCD^2 ε + 13θ_a^2 , which implies that when the amplitude θ_A ≫√(ε), the axion field oscillates in a quartic potential, while for θ_A ≪√(ε), the axion field instead oscillates in a quadratic potential. In the quartic potential, the axion oscillation amplitude evolves as θ_A(t) ∼τ^-1/2, while for the quadratic potential, the axion amplitude evolves as θ_A(t) ∼τ^-3/4. Thus, as the axion field θ_A evolves from θ_A ≫√(ε) to θ_A ≪√(ε), the power of the time dependence gradually changes from -1/2 to -3/4. §.§.§ Large f_a For sufficiently large values of f_a, the QCD axion mass can be smaller than the Hubble value at the QCD phase transition. This means the axion field is frozen at t_ QCD. The minimum value of the decay constant for which this occurs is determined from the condition m_a, QCD = 3 H_ QCD≃ 4 × 10^-11 eV leading to the critical value f_a,c≃ 2 × 10^17 GeV where the axion mass m_a≈ m_a, QCD near t_ QCD. Thus, for large f_a> f_a,c, the axion field only starts to evolve when the Hubble value is approximately equal to the local mass (<ref>) evaluated at θ_a,0 i.e. m_a(θ_a,0) = √(ε + 1/3θ_a,0^2) m_a,QCD. In this case, the axion relic abundance today is approximately given by Ω_a ∼f_a^2 √(ε m_a,QCD)M_ pl^3/2 T_ eq1/2εθ_a,0^2 + 1/36θ_a,0^4(ε + 1/3θ_a,0^2)^5/4 , where M_ pl is the Planck scale and T_ eq∼ eV is the temperature at matter-radiation equality. The expression (<ref>) is derived by first computing the axion number density at the onset of the oscillations using the initial mass n_a (T_ osc) = V_ total(θ_a,0)/m_a(θ_a,0) and then obtaining the energy density at matter-radiation equality ρ_a(T_ eq) = m_a n_a(T_ eq) using the vacuum mass m_a, where n_a(T_ eq) = n_a (T_ osc) (T_ eq/T_ osc)^3. To explain the observed dark matter abundance of Ω h^2 ≃ 0.12, one can see that either ε or θ_a,0 has to be much less than unity, while for m_a, QCD = 4 × 10^-11 eV, ε≲ 0.1 is excluded by superradiance effects <cit.> and the gravitational wave event GW170817 <cit.>. Alternatively, assuming ε≳ 0.1 to satisfy the astrophysical constraints, we find θ_a,0≲𝒪(10^-3). Therefore, the large f_a scenario nearly reduces to the conventional QCD axion case because the mass tuning ε is constrained to be nearly absent and the tuning in the misalignment angle is severe. We will not study this regime in further detail. §.§.§ Intermediate f_a For f_a<f_a,c, the axion field already starts to evolve before t_ QCD and the equation of motion can be numerically solved until some time after t_ QCD. However, for sufficiently low f_a, even the tuned axion mass is rather high. This leads to a higher oscillation frequency, and the numerical computation starts to be time consuming. In this subsection, we restrict the axion decay constant to 10^12  GeV < f_a < f_a,c and refer to these values as the intermediate f_a regime. The θ_A,0 values in this range can become O(0.1), which means the axion evolution starts with a radiation-like scaling and a small ε is needed to reproduce the correct dark matter abundance. For smaller f_a values, θ_A,0 would be more suppressed, leading to an initial matter-like scaling for higher ε values, which will be discussed in Section <ref>. For example, the axion evolution is shown in Fig. 
<ref> for ε = 10^-8 and f_a = 10^14 GeV with three initial values, θ_a,0 = 0.01, 1 and 3. For the chosen ε value, τ_c^(0) is very close to one and cannot be distinguished from one in the figure. The dashed line represents the evolution of the time-dependent local minimum θ_ min(τ). As seen in the figure, for τ not so close to τ_c^(0), the axion field keeps track of and oscillates around the time-varying minimum position θ_ min(τ) as calculated in Eq. (<ref>). Only as τ approaches very close to τ_c^(0), does the adiabatic condition break down. Upon obtaining the θ_A,0 value, identified as the axion field amplitude immediately after t_ QCD, we need to further evolve the axion field to later times in order to compute the current relic abundance of the axion dark matter. This numerical evolution is time consuming and the computational uncertainty would accumulate to destabilize the convergence. However, there are alternative ways to obtain the final abundance at later times which we next discuss. In principle, one can utilize the same method as in Section <ref>, where the number density is computed using θ_A,0 and dilutes with the cosmic expansion if the evolution is adiabatic. However, the small upper bound on θ_a,0 derived in Section <ref> can be relaxed with intermediate f_a by allowing ε≪ 1 to obtain the observed dark matter abundance. This immediately implies that the period of evolution governed by the quartic term will be prolonged. The evolution is no longer adiabatic due to parametric resonance as we will elaborate in Section <ref>. We leave the effects of parametric resonance for future work. Here we will continue to focus on estimating the abundance in the linear analysis. In order to accurately capture the smooth transition from radiation-like to matter-like scaling of the axion energy density, we can consider the evolution of the axion field energy density, ρ_a(t). From the Lagrangian, we know that ρ_a(t) = 12 f_a^2 θ̇_a^2 + V(θ_a) . On the other hand, the axion field energy density decreases due to the cosmic expansion and therefore satisfies the scaling behavior ρ(t+Δ t)ρ(t) = a(t+Δ t)a(t)^-3 γ , where a(t) is the cosmic scale factor and the exponent γ≡ 1+w, for the equation of state parameter w, is given by <cit.> γ(θ_A) = 2∫_0^θ_A dθ_a 1-V(θ_a)/V(θ_A)^1/2∫_0^θ_A dθ_a 1-V(θ_a)/V(θ_A)^-1/2 . The integration in Eq. (<ref>) is over one oscillation cycle where θ_A denotes the oscillation amplitude. The Hubble friction damps the oscillation and thus the oscillation amplitude gradually decreases. Expanding Eq. (<ref>), we obtain θ̇_Aρ∂ρ∂θ_A = - 3 H γ(θ_A) , where the dot (·) refers to a time derivative. By solving Eq. (<ref>), we obtain the evolution of the oscillation amplitude θ_A(t), which can then be substituted into V(θ_A) to obtain the energy density ρ_a(t) since the kinetic term is zero at θ_A. This method greatly reduces the computation time. In Fig. <ref>, we show the equation of state parameter, w as a function of the axion field amplitude θ_A for various values of ε. When θ_A ≲√(ε), the value of w has significantly decreased from w=1/3 and as θ_A→ 0, the value of w also goes to zero corresponding to matter-like behavior. After solving for the evolution θ_A(t), we can obtain the axion dark matter relic abundance today from the equation Ω_aΩ_m = V(θ_A(t_m))3 M_ pl^2 H(t_m)^2H(t_m)H_ eq^1/2 , where Ω_m is the current dark matter density and t_m corresponds to the time when the axion energy density scales like matter. 
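The equation-of-state average and the amplitude equation above are straightforward to implement. The sketch below (illustrative ε = 10^-3) evaluates γ(θ_A) for the potential (1/2)εθ_a^2 + θ_a^4/36 using the substitution θ_a = θ_A sin x, which removes the endpoint singularity, and then integrates the amplitude equation; it reproduces the crossover from w ≈ 1/3 to w ≈ 0 as θ_A drops below √ε.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

eps = 1e-3                                    # illustrative tuning parameter
V   = lambda th: 0.5*eps*th**2 + th**4/36.0   # late-time potential near the origin
dV  = lambda th: eps*th + th**3/9.0

def gamma(A):
    """gamma = 1 + w averaged over one oscillation of amplitude A.
    Uses V(A) - V(th) = (A^2 - th^2)(eps/2 + (A^2 + th^2)/36) and th = A sin(x)."""
    g   = lambda x: 0.5*eps + A**2*(1 + np.sin(x)**2)/36.0
    num = quad(lambda x: np.cos(x)**2*np.sqrt(g(x)), 0, np.pi/2)[0]
    den = quad(lambda x: 1.0/np.sqrt(g(x)),          0, np.pi/2)[0]
    return 2.0*A**2*num/(den*V(A))

for A in (1.0, 0.3, np.sqrt(eps), 0.1*np.sqrt(eps)):
    print("theta_A = %8.2e   w = %.3f" % (A, gamma(A) - 1.0))   # ~1/3 -> ~0 crossover

# Hubble damping of the amplitude:  d theta_A / d tau = -(3/(2 tau)) gamma(theta_A) V/V'
rhs = lambda tau, y: [-1.5/tau * gamma(y[0]) * V(y[0])/dV(y[0])]
sol = solve_ivp(rhs, (1.0, 1e4), [0.5], rtol=1e-6)
print("theta_A at tau = 1e4 :", sol.y[0, -1])
```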
§.§.§ Low f_a When f_a ≤ 10^12 GeV, corresponding to the low f_a case, the amplitude θ_A,0 is more suppressed compared to the intermediate f_a case. This is due to the following two effects. First, with the corresponding higher mass or oscillation frequency associated with low f_a, the axion field can more efficiently track the change in the location of the minimum. In particular, just before τ_c^(0), the higher frequency means that the axion can track the changing local minimum until it is much closer to τ_c^(0) compared to case with intermediate values of f_a. In other words, the axion angle θ_a now has a smaller amplitude when it begins to oscillate around the origin after τ_c^(0). Secondly, for sufficiently small values of f_a, there is now a large hierarchy between the QCD axion mass m_a, QCD and H_ QCD. As derived in Section <ref>, the quadratic term in Eq. (<ref>) flips sign at τ_c^(0)≃ 1- ε/4. The time scale with respect to t_ QCD can be characterized by H_ QCD/ε, while the axion mass, m_a = √(ε) m_a,QCD. Consequently, it is possible that m_a ≫ H_ QCD/ε for some ε values, causing the axion to undergo high frequency oscillations around the origin during the time interval τ_c^(0) < τ <1. Starting from the time characterized by τ_c^(0), the coefficient of the quadratic term in the potential continuously evolves from 0 to ε, while the coefficient of the quartic term is approximately constant at 1/36. Moreover, the axion amplitude θ_A(τ_c^(0)) can already be smaller than √(ε) so that it is the quadratic term that dominates the potential. The total potential is therefore becoming steeper as the quadratic coefficient increases, which causes the axion field value to be further reduced between τ_c^(0) and τ=1. We show an example of the axion evolution in Fig. <ref> for the values ε = 0.01 and f_a = 10^11 GeV. As seen in the figure, the axion field adiabatically tracks the location of the minimum (indicated by the dashed line) until it is very close to τ_c^(0). Therefore, this causes the initial amplitude at τ_c^(0) for oscillations around the origin to be much reduced compared to cases with larger f_a. Furthermore, Fig. <ref> also depicts the evolution from τ_c^(0) to τ = 1. Since the coefficient of the quadratic term in the potential is increasing from τ_c^(0), the axion field value at τ = 1 decreases to approximately a factor of two smaller than its value right after τ_c^(0). Such an extra suppression is absent at higher values of f_a or lower values of ε. Thus, taking into account the above two effects, the initial axion field amplitude θ_A,0 at t_ QCD is much smaller compared to the intermediate f_a case. The axion relic abundance is still given by Eq. (<ref>), but now, with a lower value of θ_A,0, the correct dark matter abundance can be obtained for a higher value of ε. §.§.§ Parametric resonance In the previous subsections, we focused on the coherent oscillations of the zero mode of the axion field. However, the fluctuation modes can be greatly amplified if the axion field oscillates in the non-quadratic potential. The classical axion field ϕ_a(t,x) can be decomposed as ϕ_a(t,x⃗) = ϕ_a(t) + δϕ_a(t,x⃗) = ϕ_a(t) + ∫d^3 k(2π)^3 b_k u_k(t) e^i k x + h.c. , where b_k and b_k^† are the annihilation and creation operators, respectively, satisfying [b_k, b_k'^†] = 2π^3 δ^(3)k -k' . Expanding the axion field equation of motion, the modes u_k(t) satisfy u_k + 3 H u̇_k + k^2a^2 + V”(ϕ_a) u_k = 0 , where a is the cosmic scale factor. 
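A crude way to see the instability is to integrate the mode equation above in flat space, dropping Hubble friction and the small quadratic term so that the zero mode oscillates in the pure quartic potential θ_a^4/36. The sketch below scans a few momenta (in arbitrary units); momenta slightly below V''(θ_a,0)^1/2 grow exponentially while the rest stay bounded. This is only a Floquet-style illustration, not the expanding-universe computation discussed in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = 1.0/9.0        # quartic self-coupling: V = theta^4/36 = lam * theta^4 / 4
A0  = 1.0            # background oscillation amplitude (arbitrary units)

def rhs(t, y, k):
    th, dth, u, du = y                       # background zero mode and one mode u_k
    return [dth, -lam*th**3,
            du, -(k**2 + 3.0*lam*th**2)*u]   # linearized equation above with H -> 0

for k in (0.1, 0.2, 0.3, 0.4, 0.45, 0.5, 0.6):
    sol = solve_ivp(rhs, (0.0, 600.0), [A0, 0.0, 1e-6, 0.0],
                    args=(k,), rtol=1e-9, atol=1e-12, max_step=0.2)
    growth = np.max(np.abs(sol.y[2])) / 1e-6
    print("k = %.2f   max |u_k| / |u_k(0)| = %.2e" % (k, growth))
```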
In the case of a high-power potential, V”(ϕ_a) is dependent on the explicit value of ϕ_a, which could cause V”(ϕ_a(t)) to highly oscillate with time. This implies that the oscillation frequency of V”(ϕ_a(t)) (the driving force) can scan over the modes and coincide with a particular momentum mode k. When this occurs, the solution of Eq. (<ref>) leads to an exponential growth or parametric resonance. Since the occupation number for such selected momenta is exponentially increased, this causes a large portion of the axion zero-mode energy to be converted to the quanta with specific momenta. This parametric resonance production would in turn deplete the oscillation of the homogeneous modes (k=0). Accordingly, the zero mode oscillation amplitude would be quickly reduced, which terminates parametric resonance. Generically, parametric resonance production occurs until the axion evolution is in the quadratic regime, namely θ_a ≲√(ε). The duration in the quartic regime may or may not be sufficiently long for fluctuations to be efficiently created and the zero mode to be depleted. Therefore, a numerical evaluation is warranted to determine the region of parameter space where parametric resonance is fully effective. We numerically solved Eq. (<ref>) and found that, for the ε values required by the correct dark matter abundance, efficient parametric resonance occurs for f_a ≳ 1.5× 10^11 GeV. The resonant momentum is slightly smaller than the initial mass of the axion field or V”(θ_a,0)^1/2. This means that the excited quanta are semi-relativistic and therefore one expects the parametric resonance production would change the final axion relic abundance by an O(1) factor. We do not perform a detailed calculation of the associated backreaction effects in parametric resonance. We expect that the allowed (1/f_a,m_a) parameter space in our final result will only shift slightly towards the right in the figure due to a slight reduction of the number density. §.§.§ Final results We summarize the final allowed parameter space in the m_a versus 1/f_a plane in Fig. <ref>. The yellow line refers to the conventional QCD axion mass prediction. The central blue band bounded by the solid blue lines depicts the values that give the correct dark matter relic abundance for initial misalignment angles ranging from 0.1π to 0.9π. The central blue region is approximately given by 8 × 10^-13^-1( m_a/10^-8)^0.5 > f_a^-1 > 3 × 10^-13^-1( m_a/10^-8)^0.48 . The light blue region is extended from the blue band by a factor of 1/3 and 3 in f_a, to account for the uncertainty due to parametric resonance. Given the limit on computation time, we only scan for f_a≲ 10^11 GeV and simply extrapolate to lower values of f_a. The gray region is excluded by superradiance effects <cit.> and observations from the sun <cit.> and white dwarfs <cit.>. The dashed gray line refers to future projections of the CASPEr experiments. The solid gray line shows the usual misalignment prediction for an axion whose potential follows the same temperature dependence as the QCD axion in Eq. (<ref>) and always has a minimum centered at θ_a = 0 in contrast to our scenario where the minimum transitions from π to 0. In addition, along this solid gray line, we assume the axion mass is a parameter independent of f_a. As seen in the figure, for f_a≳ 10^14 GeV tuning parameter values satisfying ε≲ 10^-8 have been excluded by the superradiance and white dwarf <cit.> observations. 
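For orientation, the band quoted above is easy to evaluate. The sketch below (assuming m_a is measured in eV and 1/f_a in GeV^-1, as in the figure) prints the f_a window for a few masses together with the implied tuning ε = (m_a/m_a,QCD)^2, using the standard relation m_a,QCD ≈ 5.7 μeV × (10^12 GeV/f_a).

```python
import numpy as np

def band(m_a_eV):
    """1/f_a window (GeV^-1) reproducing the dark matter abundance, from the fit above."""
    x = m_a_eV / 1e-8
    return 3e-13 * x**0.48, 8e-13 * x**0.5      # lower and upper edge of 1/f_a

m_aQCD = lambda f_a: 5.7e-6 * (1e12 / f_a)      # usual QCD axion mass in eV (assumed)

for m_a in (1e-11, 1e-8, 1e-5):
    for inv_fa in band(m_a):
        f_a = 1.0/inv_fa
        eps = (m_a / m_aQCD(f_a))**2            # required mass suppression
        print("m_a = %.0e eV   f_a = %.1e GeV   eps = %.1e" % (m_a, f_a, eps))
```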
Thus, hilltop misalignment, favors f_a≲ 10^14 GeV corresponding to larger values of the tuning parameter ε≳ 10^-8. In particular, for f_a ≲ 10^11 GeV, the axion mass is only one order of magnitude smaller than the QCD axion mass and therefore requires less tuning. In Fig. <ref>, we convert the constraint in the (1/f_a, m_a) plane to the parametric (g_aγγ, m_a) plane (where g_aγγ is the axion-photon coupling) for the KSVZ and DFSZ axion scenarios, denoted in blue and orange, respectively. The colored band region depicts the allowed region from our analysis with the uncertainty due to parametric resonance and numerical precision. The haloscope experiments <cit.> have already excluded some of the parameter space for axion masses of O(10^-6) eV as shown in red. The dashed line refers to the projected sensitivity from future haloscope experiments <cit.>, which is expected to cover most of the allowed parameter space corresponding to m_a > O(10^-12) eV. The regions excluded by helioscopes <cit.> are indicated by purple and those by astrophysical searches <cit.> are indicated by gray. These curves are taken from Ref. <cit.>. § CONCLUSION In this paper, we have studied the cosmological evolution of an axion lighter than the usual QCD axion where the strong CP problem is still solved. The axion potential is assumed to arise from a mirror sector whose θ̅ angle is shifted by π compared to the SM sector. This allows the mirror contribution to the axion potential to be tuned against the usual QCD contribution, generating a lighter QCD axion. This gives rise to an axion potential that is much flatter near the CP-conserving minimum at the origin, while the axion potential at π remains mostly unsuppressed. Under the assumption that the temperature of the mirror sector remains lower than the SM sector and any portal couplings between the two sectors are sufficiently suppressed, this distinctive form of the potential gives rise to a new, interesting axion evolution in the early universe. Much before the QCD phase transition, the axion potential is dominated by the mirror contribution which has a temporary, local minimum at π. Assuming an order one initial axion value, the frozen axion eventually rolls towards π where it undergoes oscillations around that minimum. As the universe cools through the QCD phase transition, the QCD contribution to the axion potential becomes comparable to the mirror contribution causing the minimum at π to shift to the origin. As a result of this shift, the axion gradually rolls from the newly-formed “hilltop" at π, tracking and oscillating around the changing local minimum as it heads towards the origin. However, just after π and just before reaching the origin, there are brief periods of non-adiabatic behavior because the axion cannot keep up with the time-varying local minimum. After the position of the local minimum reaches the origin, the lagging axion field position becomes the initial condition for oscillations around the origin. This adiabatic behavior, accompanied by the brief non-adiabatic behavior, modifies the usual predictions from the conventional misalignment mechanism. In this “hilltop" misalignment, the relic dark matter abundance can be obtained for 10^9 ≲ f_a ≲ 10^14 GeV corresponding to the axion mass range 10^-11≲ m_a ≲ 10^-3 eV and a tuning of 10^-8≲ε≲ 1 in the axion mass. This differs from the usual parameter range for the conventional QCD axion and misalignment mechanism. Our results rely on the distinctive form of the tuned axion potential. 
We have provided two examples of how such a potential could arise by relating the tuning to scalar field VEVs in the mirror sector that explicitly breaks the ℤ_2 symmetry. While this solution is technically natural, it is worth to further explore how this tuning could be explained especially in relation to the Higgs mass hierarchy problem which has been ignored. For instance, in the ℤ_N model <cit.> this tuning is explained by a large number of QCD copies although the resulting axion potential has many local minima and a suppressed amplitude at π. As the potential minimum transitions from θ_a = π to N degenerate minima, the axion needs to go through different minima and ultimately settle down by chance to the right minimum at θ_a = 0. In all of these models, the non-linear effects from parametric resonance as the axion field oscillates from the hilltop have been neglected. This would be interesting to further analyze in terms of the impacts on the dark matter abundance and gravitational waves. Specifically, observable gravitational waves might result from parametric resonance as shown in Refs. <cit.>. In summary, despite the shortcomings of the underlying UV models, the axion potential in our “hilltop" misalignment leads to an interesting cosmological evolution. This opens up the parameter region where the axion solves the strong CP problem and provides the missing dark matter component of the universe. This gives further motivation for experimental axion searches to explore a parameter region outside the conventional QCD axion predictions. § ACKNOWLEDGEMENTS We thank Keisuke Harigaya for useful discussions. This work is supported in part by the Department of Energy under Grant No. DE-SC0011842 at the University of Minnesota. For facilitating portions of this research, Z.L and K.F.L wish to acknowledge the Center for Theoretical Underground Physics and Related Areas (CETUP*), The Institute for Underground Science at Sanford Underground Research Facility (SURF), and the South Dakota Science and Technology Authority for hospitality and financial support, as well as for providing a stimulating environment when this work was finalized. T.G. and Z.L. also acknowledge the Aspen Center for Physics, where parts of this work were performed, which is supported by National Science Foundation grant PHY-2210452.
Cauchy transforms and Szegő projections in dual Hardy spaces: inequalities and Möbius invariance

§ ABSTRACT Dual pairs of interior and exterior Hardy spaces associated to a simple closed Lipschitz planar curve are considered, leading to a Möbius invariant function bounding the norm of the Cauchy transform C from below. This function is shown to satisfy strong rigidity properties and is closely connected via the Berezin transform to the square of the Kerzman-Stein operator. Explicit example calculations are presented. For ellipses, a new asymptotically sharp lower bound on the norm of C is produced.
The first author was supported in part by NSF grant number DMS-1500142. The second author was supported in part by Austrian Science Fund (FWF) grants: DOI 10.55776/I4557 and DOI 10.55776/P36884.
2020 Mathematics Subject Classification: 30C40 (Primary); 30H10, 30E20, 32A25, 46E22 (Secondary)
Department of Mathematics, University of Michigan, Ann Arbor, MI, USA barrett@umich.edu
Department of Mathematics, Universität Wien, Vienna, Austria luke.david.edholm@univie.ac.at
David E. Barrett & Luke D. Edholm
=====================================
§ INTRODUCTION Let γ be a simple closed oriented Lipschitz curve in the Riemann sphere bounding a domain Ω_+ to the left and Ω_- to the right. Each domain admits a Hardy space, denoted respectively by _+^2(γ) and _-^2(γ), consisting of holomorphic functions with square integrable boundary values. (Precise definitions are given in Section <ref>.) In this paper we investigate the interaction between the Hardy spaces on Ω_+ and Ω_- and use our findings to deduce norm estimates and prove invariance and rigidity theorems related to two classical projection operators: the Szegő projection, S, and the Cauchy transform, C. These operators and the connections between them are well studied. Of particular importance is a breakthrough made by Kerzman and Stein in <cit.> where it was shown that for smooth γ, their eponymously named operator A := C - C^* is compact. This observation led to the formula C = S(I+A) relating the Cauchy and Szegő projections, making it possible to use known information about S to study C, and vice versa. The Kerzman-Stein operator established an alternative foundation upon which both Hardy space theory and much of classical complex analysis could be developed; this is the theme of Bell's book <cit.>. One aim of the present paper is to investigate the function z ↦( ∫_γ |C(z,ζ)|^2 dσ(ζ) )^1/2( ∫_γ |S(z,ζ)|^2 dσ(ζ) )^-1/2, (*) where C is the Cauchy kernel, S is the Szegő kernel and σ is arc length measure. This function has a number of remarkable properties and, unsurprisingly, encodes detailed information about the Cauchy transform, the Szegő projection and how the two operators interact. A close relationship between (*) and A (or more precisely A∘A) via the Berezin transform is shown to hold (see Proposition <ref>), and there are situations (e.g. Theorem <ref>) where direct analysis of (*) recaptures and even strengthens results previously obtained from the Kerzman-Stein operator. The following theme pervades the paper: it is natural and informative to consider the pieces of (*) on Ω_+ and Ω_- together as a single object. With this in mind, several pairs of objects associated to a simple closed Lipschitz curve γ will be considered in tandem: * Domains and function spaces. The interior and exterior domains Ω_+ and Ω_-, along with the associated Hardy spaces _+^2(γ) and _-^2(γ). 
* Projection operators and kernel functions. The Cauchy transforms (C_+ and C_-) and Szegő projections (S_+ and S_-) on the interior and exterior domains, together with the representative kernel functions (C_+, C_-, S_+ and S_-). * Two pairings of functions in L^2(γ). The usual inner product ⟨ f, g ⟩, along with a -bilinear pairing <>>fg defined in (<ref>) below. The second pairing yields an alternative characterization of the Hardy dual spaces that underlies much of the theory we develop. §.§ Interior and exterior projections Throughout the paper, many arguments can be carried out simultaneously in the interior (Ω_+, _+^2(γ), S_+, etc.) and exterior (Ω_-, _-^2(γ), S_-, etc.) settings. Whenever possible our notation will reflect this, as we now demonstrate. One way to construct holomorphic functions on Ω_+ with L^2(γ) boundary values is the Szegő projection S_+, the orthogonal projection from L^2(γ) onto its holomorphic subspace _+^2(γ). Given h∈ L^2(γ), S_+ h(z) = ∫_γ S_+(z,ζ)h(ζ) dσ(ζ), z ∈Ω_+, where S_+(z,ζ) is the Szegő kernel of _+^2(γ) and dσ is arc length measure. This kernel is conjugate symmetric, i.e., S_+(z,ζ) = S_+(ζ,z), and for fixed z ∈Ω_+, S_+(·,z) ∈_+^2(γ). Since S_+ is an orthogonal projection onto _+^2(γ), we immediately obtain the reproducing property that S_+f = f for f ∈_+^2(γ), as well as the fact that S_+^* = S_+. There is a corresponding Szegő projection S_- from L^2(γ) onto _-^2(γ) given by a formula à la (<ref>), but now using S_-(z,ζ), the Szegő kernel of _-^2(γ), as the representative kernel. The same basic properties of S_+ and S_+ mentioned above hold for S_- and S_-, though in general the kernel functions S_+ and S_- themselves bear no obvious resemblance. When we meet situations as described above, where parallel facts hold in the interior and exterior settings, the presentation will be streamlined as follows: “The Szegő projection S_± is an orthogonal projection from L^2(γ) onto ^2_±(γ)." is a condensed way of writing two statements at once. The original string is meant to be read exactly twice, once using only the top signs, and once using only the bottom signs: * The Szegő projection S_+ is an orthogonal projection from L^2(γ) onto ^2_+(γ). * The Szegő projection S_- is an orthogonal projection from L^2(γ) onto ^2_-(γ). A second way to construct holomorphic functions from L^2 boundary data is the Cauchy transform. Let γ be a simple closed Lipschitz curve in the plane and let T be the (a.e. defined) unit tangent vector pointing in the counterclockwise direction. Given h ∈ L^2(γ), interior and exterior holomorphic functions C_± h ∈(Ω_±) are generated via the Cauchy integral C_±h(z) = 1/2 π i∮_γh(ζ)/ζ-z dζ, = ∫_γ C_±(z,ζ) h(ζ) dσ(ζ), z∈Ω_±, where, upon noting that dζ = ± T(ζ) dσ(ζ), the Cauchy kernel is defined as C_±(z,ζ) = ± T(ζ)/2π i(ζ-z). The choice of ± specifies orientation so that holomorphic functions are reproduced. When z ∈γ, the integral (<ref>) no longer converges in the ordinary sense. But if non-tangential limits (see Section <ref>) of the holomorphic function C_± h ∈(Ω_±) are taken, we obtain the following principle value integral for a.e. z ∈γ: C_± h(z) = h(z)/2 + 1/2 P.V.∫_γ C_±(z,ζ) h(ζ) dσ(ζ). The notion of a principle value – where the integral is calculated over the curve with a small symmetric portion of γ about z excised, and a limit is taken as the endpoints of the excision are sent to z at the same rate – makes sense when γ is a C^1 curve and h ∈ C^1(γ). 
But the scope of this notion extends to a wider setting thanks to a deep result of Coifman, McIntosh and Meyer <cit.>, which says that when γ is a Lipschitz curve, the principle value integral in (<ref>) both exists for almost every z ∈γ and defines a bounded operator on L^p(γ,σ), 1<p<∞. §.§ Duals of Hardy spaces Let γ be a simple closed oriented Lipschitz curve. Consider two related pairings of f,g ∈ L^2(γ): the usual inner product ⟨·,·⟩ and a (-)bilinear pairing <>>·· given by ⟨ f, g ⟩ = ∫_γ f(ζ)g(ζ) dσ(ζ), <>>fg = ∮_γ f(ζ)g(ζ) dζ. Since dζ = T(ζ) dσ(ζ), these pairings are related by ⟨ f, g ⟩ = <>>fg T and <>>fg = ⟨ f, g T⟩, where T is the unit tangent agreeing with the orientation of γ. Since _±^2(γ) is a Hilbert space, the inner product ⟨·,·⟩ facilitates the canonical isometric duality self-identification _±^2(γ)' ≅_±^2(γ). The bilinear pairing <>>·· facilitates a quasi-isometric dual space identification of the interior and exterior Hardy spaces: _±^2(γ)' ≃_∓^2(γ), see Section <ref>, and in particular, Proposition <ref>. §.§ The Cauchy-Szegő Λ-function Let γ be a simple closed bounded Lipschitz curve oriented counterclockwise in the plane. Define two real-valued functions Λ_+(γ,z) = C_+(z,·)_L^2(γ)/√(S_+(z,z)), z ∈Ω_+, Λ_-(γ,z) = C_-(z,·)_L^2(γ)/√(S_-(z,z)), z ∈Ω_- \{∞}. Now combine them to form the Cauchy-Szegő Λ-function, a real-valued function defined on the Riemann sphere by: Λ(γ,z) = Λ_±(γ,z), z ∈Ω_±\{∞}, 1, z ∈γ, √(σ(γ)2πκ(γ)), z = ∞, where σ(γ) denotes arc length and κ(γ) denotes analytic capacity (see Section <ref>). §.§.§ Basic properties The assigned values for z ∈γ and z = ∞ are very natural: Let γ be a simple closed Lipschitz curve in the plane. Then * Λ(γ,z) is continuous as z →∞. * If γ is C^1 smooth and ζ_0 ∈γ, then Λ(γ,z) is continuous as z →ζ_0. * If Φ is a Möbius transformation with pole off of γ, then Λ(γ,z) = Λ(Φ(γ),Φ(z)). Part (<ref>) is proved in Theorem <ref> after a short discussion of analytic capacity. Part (<ref>) is proved in Theorem <ref>, with the Berezin transform and compactness of the Kerzman-Stein operator playing important roles. These first two parts together show that z ↦Λ(γ,z) is continuous on the Riemann sphere whenever γ is a C^1 smooth curve. Part (<ref>) is shown in Theorem <ref> after obtaining a Möbius transformation rule for the Cauchy kernel. One consequence of Möbius invariance is that it gives a simple way to extend Λ to unbounded curves: let γ be a simple closed Lipschitz curve in the Riemann sphere passing through ∞ (see Section 2.1), and Φ be a Möbius transformation with its pole lying off of γ. Then the image curve, denoted Φ(γ), is a simple closed Lipschitz curve in the plane, and we define Λ(γ,z) := Λ(Φ(γ),Φ(z)). The fact this extension is well-defined is immediate from Theorem <ref>, part (<ref>). The next result shows that circles form the class of minimizing curves for Λ: Let γ be a simple closed Lipschitz curve in the Riemann sphere. * Λ(γ,z) ≥ 1, for all z ∈Ω_±. * If there is a single z ∈Ω_± such that Λ(γ,z) = 1, then Λ(γ,·) ≡ 1 and γ is a circle (or a line, including the point at ∞). This theorem and its consequences are presented in Sections <ref> and <ref>. In <cit.> Kerzman and Stein gave a clever geometric interpretation of their operator A and deduced that if the Cauchy and Szegő kernels of a bounded domain coincide, the underlying domain must be a disc. Theorem <ref> implies a significantly strengthened version of this result. 
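The extremal case of the theorem is easy to verify numerically. For the unit circle, the Cauchy kernel norm can be computed by quadrature and compared against the familiar Szegő kernel of the disc (recalled in Section <ref> below); the sketch returns Λ ≡ 1 at every interior point, in line with the theorem.

```python
import numpy as np

def Lambda_circle(z, n=4096):
    """Lambda(gamma, z) for the unit circle, z an interior point."""
    t    = 2*np.pi*np.arange(n)/n
    zeta = np.exp(1j*t)
    # ||C(z,.)||^2 = (1/(4 pi^2)) * integral of |zeta - z|^{-2} d sigma
    cauchy_sq = np.mean(1.0/np.abs(zeta - z)**2) * 2*np.pi / (4*np.pi**2)
    szego     = 1.0/(2*np.pi*(1.0 - abs(z)**2))   # disc Szego kernel on the diagonal
    return np.sqrt(cauchy_sq/szego)

for z in (0.0, 0.3 + 0.4j, 0.9j):
    print(z, "Lambda =", Lambda_circle(z))        # identically 1 for the circle
```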
The proof of the following result (see Corollary <ref>) uses the Λ-function and makes no reference to the geometry of the Kerzman-Stein operator: Let Ω be a bounded simply connected planar domain with Lipschitz boundary. If there exists a single z ∈Ω such that the Cauchy and Szegő kernels satisfy |C(z,ζ)| ≤ |S(z,ζ)| for almost every ζ∈γ, then Ω is a disc. §.§.§ Estimating Cauchy norms The maximum value attained by Λ(γ,·) on the Riemann sphere bounds the norm of the Cauchy transform from below: Let γ be a simple closed Lipschitz curve in the Riemann sphere. The norms of the interior and exterior Cauchy transforms are equal and further, satisfy the estimate sup_z ∈Λ(γ,z) ≤C_±. That C_+ = C_- is shown in Theorem <ref>. In Theorem <ref> it is shown that Λ(γ,z) ≤C_± for every z ∈. The continuity of Λ(γ,·) at ∞ finishes the proof. Let W_θ = {r e^iφ : r>0, |φ|<θ} be the unbounded wedge with aperture 2θ∈ (0,2π) and boundary denoted by bW_θ. In Section <ref> we study this wedge and produce an explicit formula for Λ(bW_θ,z) in Theorem <ref>. Several conclusions are drawn from this formula; in particular, it is shown that Λ(bW_θ,z) is discontinuous at the origin (a corner point), breaking from the continuous behavior on C^1 curves guaranteed by Theorem <ref>. In Section <ref>, a second family of curves is considered. Let ℰ_r = { (x,y): x^2/r^2+y^2 = 1 }, an ellipse with major-to-minor axis ratio r>1. We compute Λ(_r,z) and use it to produce the best known lower estimate on the norm of the Cauchy transform. Let r>1. The L^2-norm of the Cauchy transform on _r satisfies C_L^2(_r)≥√(2/π√(1-1r^2)· (r^2+1) ·Π(1-r^2, √(1-1r^2)) - K(√(1-1r^2)) /ϑ_2(0,(r-1/r+1)^2 ) ϑ_3(0,(r-1/r+1)^2 )). This bound is shown to be asymptotically sharp as r → 1. (See Section <ref> for conventions regarding elliptic integrals and theta functions appearing in the formula.) §.§ Motivation from higher dimensions This paper grew out of an ongoing project on the Leray transform L, a higher dimensional analogue of the Cauchy transform. Given a -convex hypersurface ⊂ℙ^n, recent work of the authors (see <cit.>) uncovers an intriguing connection between analytic quantities tied to L (norms, essential norms, spectral data) and projective-geometric invariants associated to and its projective dual hypersurface ^*. A natural construction yields a pair of projectively-invariant dual Hardy spaces on and ^*, and a generalized version of Λ(γ,·) can be defined using Leray and Szegő kernels. The higher dimensional theory simplifies considerably in one dimension, serving to motivate the present paper. The function Λ can be related to Fredholm eigenvalue problems studied by Bergman-Schiffer <cit.> and Singh <cit.>. Burbea previously connected the Kerzman-Stein operator A to these same eigenvalue problems in <cit.>, then went on to reprove key properties of A (e.g. compactness) using the theory of Garabedean anti-symmetric l kernels. Similarly, some basic properties of Λ in Section <ref> can be obtained using the same approach – at least when γ is smooth enough. But here we have opted to avoid the Garabedean machinery entirely. The reason for this is two-fold. Firstly, in minimally smooth settings (γ being C^1 or less), analysis becomes significantly harder and the Garabedean approach is often untenable. For example, while the compactness of A continues to hold when γ is only assumed to be C^1, Burbea's argument breaks down and the proof requires much more delicacy; see <cit.>. 
Secondly, the theory of the Garabedean kernel depends critically on a particular orthogonal decomposition of L^2(γ) (see <cit.>), one that no longer holds for L^2(). As we are motivated by the higher dimensional problem, several of our proofs have been written so as to mirror that setting. § INTERIOR AND EXTERIOR HARDY SPACES §.§ Lipschitz curves A function φ:→ is called Lipschitz if there exists a constant K >0 (the Lipschitz constant) so that |φ(x_1)-φ(x_2)| ≤ K|x_1-x_2| for all x_1,x_2 ∈. Such a function is differentiable almost everywhere with an L^∞ derivative. A simple closed curve γ in the plane is called Lipschitz if there exists a finite number rectangles {R_j }_j=1^n with sides parallel to the coordinate axes, angles {θ_j }_j=1^n and Lipschitz functions φ_j:→, such that the union ∪_j=1^n {e^-iθ_j R_j} covers γ and the intersection { e^iθ_j(γ) }∩ R_j = { x+iφ_j(x) : x ∈ (a_j,b_j) }, for some a_j < b_j < ∞. If γ is a simple closed curve in the Riemann sphere passing through ∞, say that γ is Lipschitz if there is a Möbius transformation Φ mapping γ to a simple closed Lipschitz curve in the plane. Each simple closed oriented curve γ⊂ bounds two simply connected domains: write Ω_+ for the domain lying to the left and Ω_- the domain to the right. When γ is a planar curve, it is assumed to have counterclockwise orientation unless explicitly stated otherwise; we refer to Ω_+ and Ω_- as interior and exterior domains, respectively. Ω_+ and Ω_- are called Lipschitz domains when their boundary γ is Lipschitz. Note that if γ is an oriented curve in the Riemann sphere and Φ is a Möbius transformation with its pole in Ω_-, the image curve Φ(γ) is a planar curve oriented counterclockwise. When the pole is in Ω_+, the orientation is reversed. Let γ be a simple closed planar curve oriented counterclockwise. For β > 0 and ζ∈γ, define a set called a non-tangential approach region to ζ by Γ(ζ) = {z ∈ : |z - ζ| ≤ (1+β) dist(z,γ) }. Lipschitz curves are well-known to satisfy the uniform interior and exterior cone condition, meaning there exists β,r>0 such that for each ζ∈γ, one of the two components of Γ(ζ) ∩ D(ζ,r) is contained in Ω_+ and the other contained in Ω_-. Write the interior and exterior non-tangential approach regions by Γ_±(ζ) = Γ(ζ) ∩ Ω_±. An important technical tool for work on Lipschitz domains is a Neças exhaustion, a method of approximation by C^∞ subdomains with uniformly bounded Lipschitz constants; see <cit.> for details. Given a function g:Ω_±→ and ζ∈γ, its non-tangential maximal function g^* and non-tangential limit ġ (when it exists) are defined to be g^*(ζ) = sup_z ∈Γ_±(ζ) |g(z)|, ġ (ζ) = lim_Γ_±(ζ) ∋ z →ζ g(z). Given f ∈ L^2(γ), its Cauchy transform (<ref>) arises as the non-tangential limit of the Cauchy integral in (<ref>). A deep and highly non-trivial result in <cit.> shows that this limit exists a.e. for Lipschitz γ, and further, defines an L^2(γ) function. We slightly abuse notation by denoting both the Cauchy integral of f and its boundary values by C_± f, but our intended meaning should always be clear from context. We now define the Hardy space _±^2(γ) as the image of L^2(γ) under C_±: _±^2(γ) = {C_± f : f ∈ L^2(γ) }; since γ is always assumed to be Lipschitz, this definition is equivalent to several other characterizations of the Hardy space used in the literature; see <cit.>. 
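Since the Hardy space is defined here as the image of L^2(γ) under C_±, it may help to see the discretized Cauchy integral in action. The sketch below (on an ellipse, chosen only as a sample smooth curve) verifies that the quadrature version of C_+ reproduces the boundary values of a function holomorphic on the interior and annihilates one holomorphic on the exterior and vanishing at ∞.

```python
import numpy as np

a, b = 2.0, 1.0                                  # sample smooth curve: an ellipse
n    = 4096
t    = 2*np.pi*np.arange(n)/n
zeta  = a*np.cos(t) + 1j*b*np.sin(t)
dzeta = (-a*np.sin(t) + 1j*b*np.cos(t))*(2*np.pi/n)   # zeta'(t) dt

def cauchy_plus(h_vals, z):
    """Discretized interior Cauchy integral (1/(2 pi i)) * sum h(zeta) dzeta/(zeta - z)."""
    return np.sum(h_vals*dzeta/(zeta - z))/(2j*np.pi)

f = np.exp(zeta)               # boundary values of an entire function: lies in H^2_+
g = 1.0/(zeta - 0.5)           # holomorphic on the exterior, vanishes at infinity: H^2_-

for z in (0.1 + 0.2j, -1.0 + 0.3j):
    print("|C_+f(z) - f(z)| =", abs(cauchy_plus(f, z) - np.exp(z)))  # ~ 0: reproduced
    print("|C_+g(z)|        =", abs(cauchy_plus(g, z)))              # ~ 0: annihilated
```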
We have been intentionally flexible with our definition so that Hardy space functions can be at times thought of as holomorphic functions with L^2 boundary values and at other times as the boundary values themselves. Observe from (<ref>) that functions in _-^2(γ) necessarily vanish at ∞. The results in <cit.> along with the Plemelj jump formula (see <cit.>) allow rigorous justification of the following “intuitive" statements for Lipschitz γ: if f ∈_±^2(γ), then C_± f = f (Cauchy's integral formula), while C_∓ f ≡ 0 (Cauchy's theorem). Given α∈ (0,1), define the space of α-Hölder continuous functions on γ to be C^α(γ) := {f : |f(x)-f(y)| < |x-y|^α, x,y ∈γ}, and denote by A^α(Ω_±) the space of holomorphic functions on Ω_± with C^α boundary values. If γ is Lipschitz and f ∈ C^α(γ), then C_± f ∈ A^α(Ω_±); see <cit.>. The regularity of C_± in C^α together with its boundedness in L^2(γ) imply that A^α(Ω_±) is a dense subspace of the Hardy space _±^2(γ). ◊ §.§ Dual space characterization A duality paradigm of Grothendieck <cit.>, Köthe <cit.> and Sebastião e Silva <cit.> identifies duals of holomorphic function spaces on simply connected domains with spaces of holomorphic functions on their complements: Let (Ω_+) denote the space of all holomorphic functions on Ω_+ under the standard Frechét topology. Under this paradigm, the dual can be identified with _0(Ω_-), the space of functions holomorphic in a neighborhood of Ω_- which vanish at ∞. The functionals themselves are represented using bilinear pairings <>>·· à la (<ref>) to pair f ∈(Ω_+) and g ∈_0(Ω_-), where the path of integration is taken inside Ω_+ and sufficiently close to γ. We follow this paradigm and identify the dual space of _±^2(γ) with _∓^2(γ). Since C_± is bounded on L^2(γ) whenever γ is Lipschitz, a bounded adjoint exists (with respect to the standard inner product), characterized by ⟨C_± f,g ⟩ = ⟨ f,C^*_± g ⟩. Explicitly, C^*_±g(z) = g(z)/2±1/2π iT(z) P.V.∫_γg(ζ)/ζ - z dσ(ζ), where the formula is understood to hold for almost every z ∈γ. The Cauchy transforms C_+ and C_- can be viewed as “adjoints" with respect to the bilinear pairing (<ref>). Indeed, <>>C_± fg = <>>fC_∓ g = <>>C_± fC_∓ g. Since C_± is a projection operator, it will suffice to prove the first equality. We claim that if g ∈ L^2(γ) and T is the almost everywhere defined unit tangent vector for γ, then C^*_±(g T) = C_∓(g) T. Indeed, for a.e. z ∈γ, we have C^*_±(g T)(z) = g(z)T(z)/2±1/2π iT(z) P.V.∫_γg(ζ)T(ζ)/ζ - z dσ(ζ) = ( g(z)/2±1/2π i P.V.∫_γg(ζ)T(ζ)/ζ - z dσ(ζ) ) T(z) = ( g(z)/2∓1/2π i P.V.∮_γg(ζ)/ζ - z dζ) T(z) = C_∓(g)(z) T(z). Thus we see that <>>C_± fg = ⟨C_± f, g T⟩ = ⟨ f, C_±^* (g T) ⟩ = ⟨ f, C_∓(g) T ⟩ = <>>fC_∓ g. The dual space of _±^2(γ) can be identified with _∓^2(γ) via functionals ψ_g: _±^2(γ) →, g ∈_∓^2(γ), given by ψ_g(f) = <>>fg. Moreover, C_±^-1g≤ψ_g_ op≤g. Since _±^2(γ) is a Hilbert space, it is self dual in the ordinary inner product. Thus, given a bounded linear functional ϕ: _±^2(γ) →, there is a unique h ∈_±^2(γ) so that for any f ∈_±^2(γ), ϕ(f) = ⟨ f,h ⟩ = <>>fhT = <>>C_± fhT = <>> fC_∓(hT). Now set g = C_∓(hT) ∈_∓^2(γ), so that ϕ = ψ_g = <>>·g∈_±^2(γ)'. Given distinct g_1,g_2 ∈_∓^2(γ), we now show the functionals ψ_g_1≠ψ_g_2. It will suffice to exhibit an f ∈_±^2(γ) with ψ_g_1(f) ≠ψ_g_1(f). Set f = C_±( (g_1-g_2)T), which is clearly in _±^2(γ). Then (ψ_g_1 - ψ_g_2)(f) =<>>fg_1-g_2 = <>>C_±( (g_1-g_2)T)g_1-g_2 = <>>(g_1-g_2)TC_∓ (g_1-g_2) = <>>(g_1-g_2)Tg_1-g_2 = g_1-g_2^2 > 0. We now prove (<ref>). 
The right-hand inequality follows from Cauchy-Schwarz. For the left-hand inequality, note that for g ∈_∓^2(γ) g = sup{ | <>>hg | : h∈L^2(γ), h = 1 } = sup{ | <>>hC_∓g | : h∈L^2(γ), h = 1 } = sup{ | <>> C_±hg | : h∈L^2(γ), h = 1 } ≤sup{ | <>>fg | : f∈_±^2(γ), f ≤C_± } = C_± ·sup{ | <>>fg | : f∈_±^2(γ), f = 1 } = C_± ·ψ_g_op. Let γ be a simple closed Lipschitz curve in the plane. The norms of the Cauchy transforms C_± : L^2(γ) →^2_±(γ) are given by 1/C_+ = inf_g ∈_+^2(γ) g ≠ 0{sup_ f ∈_-^2(γ) f ≠ 0 |<>>fg|/fg} = inf_g ∈_-^2(γ) g ≠ 0{sup_ f ∈_+^2(γ) f ≠ 0 |<>>fg|/fg} =1/C_-. Given a nonzero g ∈_∓^2(γ), the lower bound in (<ref>) says C_±^-1g≤ψ_g_ op = sup{|<>>fg|: f ∈_±^2(γ), f=1 }. As this holds for every such g, we obtain 1/C_±≤inf_ g ∈_∓^2(γ) g ≠ 0{sup_ f ∈_±^2(γ) f ≠ 0|<>>fg|/fg}. On the other hand, given (a sufficiently small) ϵ > 0, there exists h_ϵ∈ L^2(γ) such that C_± h_ϵ = 1 and h_ϵ < (C_±-ϵ)^-1. Now observe that sup_f ∈_∓^2(γ) f = 1 |<>>C_± h_ϵf| = sup_f ∈_∓^2(γ) f = 1 |<>>h_ϵC_∓ f| = sup_f ∈_∓^2(γ) f = 1 |<>>h_ϵf| ≤h_ϵ < 1/C_±-ϵ. Taking g=C_± h_ϵ∈_±^2(γ) and letting ϵ→ 0 we obtain inf_g ∈_±^2(γ) g ≠ 0{sup_ f ∈_∓^2(γ) f ≠ 0|<>>gf|/gf}≤1/C_±. Now combine all four individual inequalities in (<ref>) and (<ref>) to obtain 1/C_+≤inf_ g ∈_-^2(γ) g ≠ 0{sup_ f ∈_+^2(γ) f ≠ 0|<>>fg|/fg}≤1/C_-≤inf_ g ∈_+^2(γ) g ≠ 0{sup_ f ∈_-^2(γ) f ≠ 0|<>>fg|/fg}≤1/C_+, forcing equality to hold at every step. §.§ The Szegő kernel Several elementary properties are collected here for later use. The Szegő kernel on the unit disc is S_(z,ζ) = 1/2π(1-zζ), z ∈, ζ∈. The Szegő kernel admits a biholomorphic transformation law; see <cit.> in the C^∞ setting, and <cit.> for the Lipschitz setting: Let Φ: Ω_1 →Ω_2 be a biholomorphism of simply connected domains in the Riemann sphere with Lipschitz boundaries. The Szegő kernels are related by formula S_1(z,ζ) = √(Φ'(z))· S_2(Φ(z),Φ(ζ)) ·√(Φ'(ζ)). The Szegő kernel admits a well-known extremal property; see <cit.>: Given a simple closed Lipschitz curve γ in the Riemann sphere and a point z ∈Ω_±, the Szegő kernel satisfies S_±(z,z) = sup{|f(z)|: f ∈^2_±(γ), f_L^2(γ) = 1 }. In the setting of Proposition <ref>, the Riemann mapping theorem together with formulas (<ref>) and (<ref>) show that S_±(z,z) > 0 for any z ∈Ω_±∖{∞}. On the other hand, the condition that functions in the Hardy space must vanish at infinity shows that if ∞∈Ω_±, then S_±(∞,∞) = 0. ◊ The following monotonicity property is known, but a short proof is included since the authors had difficulty locating a reference. Let Ω_1 ⊊Ω_2 ⊊ be simply connected domains with Lipschitz boundaries properly contained in the Riemann sphere, and let z ∈Ω_1{∞}. Letting S_1, S_2 denote the respective Szegő kernels, we have 0< S_2(z,z) < S_1(z,z). Let Φ_j:Ω_j → denote the Riemann map, j=1,2, with Φ_j(z) = 0 and Φ_j'(z) > 0. Using the transformation law in (<ref>) and the kernel formula for in (<ref>), we see 2π S_j(z,z) = Φ_j'(z). By the proof of the Riemann mapping theorem (see, e.g., <cit.>), of all maps from Ω_1 into the disc satisfying Φ(z) = 0 and Φ'(z) positive, the Riemann map Φ_1 is uniquely determined by the property that Φ'(z) is maximal. Since the restriction of Φ_2 to Ω_1 is also a map with these properties, we conclude that Φ_2'(z) < Φ_1'(z). §.§ A lower estimate on the norm of the Cauchy transform Let γ be a simple closed Lipschitz curve in the plane and z ∈. Then Λ(γ,z) ≤C_±. For z ∈Ω_±\{∞}, define h_z ∈^2_∓(γ) by h_z(ζ) = (2π i(ζ-z))^-1. 
By Cauchy's integral formula we have <>>fh_z = 1/2π i∮_γf(ζ)/ζ-z dζ = f(z), f ∈^2_±(γ). Now apply the Cauchy norm characterization in (<ref>) with g=h_z to obtain 1/C_±≤sup_f ∈^2_±(γ)| <>>fh_z |/fh_z = 1/C(z,·)sup_f ∈^2_±(γ)|f(z)|/f = √(S_±(z,z))/C(z,·) =1/Λ(γ,z), where we used the extremal property of (<ref>). This estimate holds for all z ∈∖γ. Since Λ(γ,·) ≡ 1 for z ∈γ, the result follows for these z from the fact that C_± is a projection onto _±^2(γ) and thus C_±≥ 1. § INVARIANCE AND RIGIDITY PROPERTIES §.§ Analytic capacity and behavior at infinity Let γ be a simple closed Lipschitz curve in the plane oriented counterclockwise. If g is holomorphic on the exterior domain Ω_-, it admits a Laurent expansion in a neighborhood of ∞: g(z) = a_0 + a_1 z^-1 + a_2 z^-2 + ⋯ The coefficient a_1 is important to what comes below; it can be obtained by calculating the derivative of g at infinity with respect to the local coordinate 1z. Define D(g,∞) := lim_z →∞ z(g(z) - g(∞)) = a_1. (In the literature, D(g,∞) is often denoted by g'(∞), but the authors find this notation misleading since lim_z→∞ g'(z) ≠ D(g,∞) unless a_1 = 0.) Let A^∞(Ω_-) be the space of bounded holomorphic functions on Ω_-, with norm given by g_∞ := sup{|g(z)|: z ∈Ω_- }. Define the analytic capacity of the curve γ to be κ(γ) := sup{|D(g,∞)| : g ∈ A^∞(Ω_-), g(∞) = 0, g_∞≤ 1 }. This notion helps formulate generalizations of Riemann's removable singularity theorem by measuring how large bounded holomorphic functions on Ω_- can become; see <cit.>. Let γ be a simple closed Lipschitz curve in the plane. Then lim_z →∞Λ_-(γ,z) = √(σ(γ)/2πκ(γ)), where σ(γ) and κ(γ) denote the arc length and analytic capacity of γ, respectively. Thus Λ(γ,·) is continuous at ∞ (by definition). Set E := {z∈ : z^-1∈Ω_- }, which is a bounded domain containing the origin. Define a holomorphic and univalent function G:E → with the following properties: (i) G_∞≤ 1; (ii) G(0) = 0; (iii) G'(0) is positive and maximal, i.e., given another map H: E → satisfying (i) and (ii) with H'(0) positive, then necessarily G'(0) > H'(0). Such a G always exists and is the Riemann map (see <cit.>) from E to satisfying G(0) = 0 with G'(0)>0. Now write G as a Taylor expansion about 0: G(z) = a_1 z + a_2 z^2 + ⋯ Now define a biholomorphic map g:Ω_- → by g(z) = G(1/z). Clearly (i') g_∞≤ 1; and (ii') g(∞) = 0. We claim the positive number D(g,∞) defined by (<ref>) is maximal out of all functions in A^∞(Ω_-) satisfying (i') and (ii'). If D(g,∞) weren't maximal, there would exist an h ∈ A^∞(Ω_-) with D(h,∞) > D(g,∞) = a_1. But then the function H(z) := h(1/z) would satisfy (i) and (ii) from the previous paragraph, and H'(0) > a_1 = G'(0), contradicting the maximality of G'(0). Therefore, κ(γ) = D(g,∞) = lim_z→∞ z g(z) = a_1 = G'(0). Now use Proposition <ref> and (<ref>) to write the Szegő kernel of Ω_-: S_-(z,z) = |g'(z)| S_(g(z),g(z)) = 1/2π·|g'(z)|/1-|g(z)|^2. Thus, Λ_-(γ,z)^2 = C(z,·)_L^2(γ)^2/S_-(z,z) = (|z|^2 ∫_γ |C(z,ζ)|^2 dσ(ζ) ) ( 1/2π·|z|^2|g'(z)|/1-|g(z)|^2)^-1, where the term |z|^2 has been inserted in both the numerator and denominator. Now, lim_z →∞ |z|^2 ∫_γ |C(z,ζ)|^2 dσ(ζ) = lim_z →∞1/4π^2∫_γdσ(ζ)/|ζ/z-1|^2 = σ(γ)/4π^2. On the other hand, lim_z →∞1/2π·|z|^2|g'(z)|/1-|g(z)|^2 = 1/2πlim_z →∞|z^2 g'(z)|/1-|g(z)|^2 = a_1/2π = κ(γ)/2π. Dividing (<ref>) by (<ref>) gives the result. In <cit.> Bolt carries out a similar computation, obtaining a lower bound of the norm of the Kerzman-Stein operator. 
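For a circle of radius R the quantities in the theorem are completely explicit: g(z) = R/z maps Ω_- onto the unit disc with g(∞) = 0 and D(g,∞) = R, so κ(γ) = R, while σ(γ) = 2πR, giving Λ(γ,∞) = 1, consistent with the rigidity results for circles below. The short numerical sketch that follows (illustrative only) also confirms the limit |z|^2 ∫_γ |C(z,ζ)|^2 dσ(ζ) → σ(γ)/4π^2 used in the proof.

import numpy as np

R = 2.0
sigma, kappa = 2 * np.pi * R, R                 # arc length and analytic capacity of |z| = R
print(np.sqrt(sigma / (2 * np.pi * kappa)))     # Lambda(gamma, infinity) = 1 for a circle

theta = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
zeta = R * np.exp(1j * theta)
dsig = 2 * np.pi * R / theta.size
for z in [10.0, 100.0, 1000.0]:
    val = abs(z) ** 2 * np.sum(dsig / (4 * np.pi ** 2 * np.abs(zeta - z) ** 2))
    print(val, sigma / (4 * np.pi ** 2))        # the two columns agree as |z| grows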
◊ §.§ Möbius Invariance Recall that the holomorphic automorphisms of the Riemann sphere are precisely the Möbius transformations Φ(z) = az+b/cz+d, where a,b,c,d ∈ with ad-bc ≠ 0. The Cauchy kernel and transform admit transformation laws under these maps. See <cit.> for an analogous result in ^n (or more accurately ℙ^n) on the projective invariance of the Leray kernel. Let γ_1 be a simple closed Lipschitz curve in the complex plane oriented counterclockwise and let Φ be a Möbius transformation whose pole lies off of γ_1. Define the curve γ_2 = Φ(γ_1) with orientation induced from the orientation of γ_1 by Φ; thus γ_2 will be oriented counterclockwise if and only if the pole of Φ lies in Ω_-. Let C_±^1 and C_±^2 denote the Cauchy transforms of γ_1 and γ_2, respectively. Then C_±^1 (√(Φ')·(f∘Φ) ) = √(Φ')·( (C_±^2 f)∘Φ), f ∈ L^2(γ_2). Differentiate (<ref>) and observe that Φ' is the square of a meromorphic function defined on the Riemann sphere. Now choose a value of √(ad-bc) and then set √(Φ'(ζ)) = √(ad-bc)/cζ+d. Observe that the map f ↦√(Φ')· (f∘Φ) is a linear isomorphism from L^2(γ_2) to L^2(γ_1). Now let f ∈ L^2(γ_2), ζ∈γ_1 and ξ = Φ(ζ) ∈γ_2. If z ∈Ω^1_±, then the image point Φ(z) ∈Φ(Ω^1_±) = Ω_±^2 and (C_±^2 f) ∘Φ (z) = 1/2π i∮_γ_2f(ξ)/ξ - Φ(z) dξ = 1/2π i∮_γ_1f(Φ(ζ))/Φ(ζ) - Φ(z)·Φ'(ζ) dζ = 1/2π i∮_γ_1f(Φ(ζ))/aζ+b/cζ+d - az+b/cz+d·ad-bc/(cζ+d)^2 dζ. Rearranging, (<ref>) = 1/2π i∮_γ_1(cz+d)(cζ+d)(ad-bc)/(ad-bc)(ζ-z)(cζ+d)^2 f(Φ(ζ)) dζ = 1/2π icz+d/√(ad-bc)∮_γ_1√(ad-bc)/(cζ+d) f(Φ(ζ))/(ζ-z) dζ = 1/2π i √(Φ'(z))∮_γ_1√(Φ'(ζ)) f(Φ(ζ))/ζ-z dζ = 1/√(Φ'(z)) C_±^1(√(Φ')(f∘Φ))(z), giving the result when z ∈Ω_±. The argument when z ∈γ_1 follows the same lines except that the integrals must be interpreted in the principle value sense. For ϵ>0 let γ_1,ϵ:= γ_1 ∖ D(z,ϵ), i.e., the original curve with all points within ϵ of z removed. Now start from the integral in (<ref>) evaluated over the truncated curve γ_1,ϵ, and work backwards to (<ref>): 1/2π i P.V.∮_γ_1√(Φ'(ζ))f(Φ(ζ))/ζ-z dζ = lim_ϵ→ 01/2π i∮_γ_1,ϵ√(Φ'(ζ))f(Φ(ζ))/ζ-z dζ = lim_ϵ→ 0√(Φ'(z))/2π i∮_Φ(γ_1,ϵ)f(ξ)/ξ-Φ(z) dζ. We claim that the integral in (<ref>) is also a principle value integral in the ordinary sense. Indeed, the two endpoints of the truncated curve Φ(γ_1,ϵ) approach the point Φ(z) at the same rate as ϵ→ 0 as a consequence of the fact that the image of the disc D(z,ϵ) under Φ tends asymptotically to the disc D(Φ(z),|Φ'(z)|ϵ) as ϵ→ 0. This means that by setting γ_2,δ:= γ_2 ∖ D(Φ(z),δ) with δ := |Φ'(z)|ϵ, (<ref>) = lim_ϵ→ 0√(Φ'(z))/2π i∮_Φ(γ_1,ϵ)f(ξ)/ξ-Φ(z) dζ = lim_δ→ 0√(Φ'(z))/2π i∮_γ_2,δf(ξ)/ξ-Φ(z) dζ = √(Φ'(z))/2π i P.V.∮_γ_2f(ξ)/ξ-Φ(z) dζ. Thus, the string of equalities from (<ref>) to (<ref>) shows C_±^1 (√(Φ')·(f∘Φ) )(z) = √(Φ'(z))· f(Φ(z))/2±1/2π i P.V.∮_γ_1√(Φ'(ζ))f(Φ(ζ))/ζ-z dζ = √(Φ'(z))· f(Φ(z))/2±√(Φ'(z))/2π i P.V.∮_γ_2f(ξ)/ξ-Φ(z) dζ = √(Φ'(z))·((C^2_± f) ∘Φ)(z). Suppose γ_1 is simple closed Lipschitz curve in the plane oriented counterclockwise and that Φ is a Möbius transformation whose pole lies off of γ_1. Define the curve γ_2 = Φ(γ_1) (oriented as in Theorem <ref>) and let C_±^1(z,ζ) and C_±^2(z,ζ) denote the Cauchy kernels of γ_1 and γ_2, respectively. Then C_±^1(z,ζ) = √(Φ'(z))· C_±^2(Φ(z),Φ(ζ)) ·√(Φ'(ζ)). Since both curves are Lipschitz, tangent vectors exist almost everywhere. If ζ(t) parameterizes γ_1, then Φ(ζ(t)) parameterizes γ_2. 
The unit tangent to γ_1 can be written as T_1(ζ(t)) = ζ'(t)/|ζ'(t)|, and so the unit tangent to γ_2 can be written T_2(Φ(ζ(t))) = Φ'(ζ(t))·ζ'(t)/|Φ'(ζ(t))·ζ'(t)| = Φ'(ζ(t))/|Φ'(ζ(t))| T_1(ζ(t)). Going forward, we omit reference to the parameter t. Assume Φ takes the form (<ref>), with ad-bc ≠ 0, and choose a value of √(ad-bc) as in (<ref>) to obtain a meromorphic square root of Φ defined on all of the Riemann sphere. From the definition of the Cauchy kernel in (<ref>), we have √(Φ'(z))· C_±^2(Φ(z),Φ(ζ)) ·√(Φ'(ζ)) = ±√(Φ'(z))·T_2(Φ(ζ))/Φ(ζ) - Φ(z)·√(Φ'(ζ)) = ±√(Φ'(z))√(Φ'(ζ))/Φ(ζ) - Φ(z) T_1(ζ). A simple computation now shows √(Φ'(z))√(Φ'(ζ))/Φ(ζ)-Φ(z) = (ad-bc)/(cζ+d)(cz+d)(aζ+b/cζ+d - az+b/cz+d)^-1 = 1/ζ - z. We now prove that Λ(γ,z) is Möbius invariant. This in particular shows that Λ(γ,z) is well-defined when γ is an unbounded Lipschitz curve (recall the discussion of extending Λ to unbounded curves following Theorem <ref>). Suppose γ is a simple closed Lipschitz curve in the plane and Φ is a Möbius transformation whose pole lies off of γ. Then for z in the Riemann sphere, Λ(γ,z) = Λ(Φ(γ),Φ(z)). Under the assumption on Φ, observe that the image curve Φ(γ) is also a simple closed Lipschitz curve in the plane. Now write γ_1 := γ and γ_2 := Φ(γ_1). If z ∈γ_1, then Φ(z) ∈γ_2, so by definition Λ(γ_1,z) = 1 = Λ(γ_2,Φ(z)). Let Ω^j_± be the domains bounded by γ_j and suppose z ∈Ω^1_±. By Theorem <ref>, C_±^1(z,·)_L^2(γ_1)^2 = |Φ'(z)| ∫_γ_1 |C_±^2(Φ(z),Φ(ζ))|^2 |Φ'(ζ)| dσ(ζ) = |Φ'(z)| ∫_γ_2 |C_±^2(Φ(z),ξ)|^2 dσ(ξ) = |Φ'(z)|·C_±^2(Φ(z),·)_L^2(γ_2)^2. Now denote the Szegő kernel of _±^2(γ_j) by S_±^j. Since Φ is a biholomorphism from Ω^1_± to Ω^2_±, Proposition <ref> shows S^1_±(z,z) = |Φ'(z)| · S^2_±(Φ(z),Φ(z)). This with (<ref>) shows Λ_±(γ_1,z) = C_1(z,·)_L^2(γ_1)^2/S^1_±(z,z) = |Φ'(z)| ·C_2(Φ(z),·)_L^2(γ_2)^2/|Φ'(z)| · S^2_±(Φ(z),Φ(z)) = Λ_±(γ_2,Φ(z)). The ratio above needs slightly more care in two cases: (i) when z=∞, meaning that Φ'(z) = 0, and (ii) when Φ(z) = ∞, implying that Φ'(z) = ∞. In either case, the indeterminate ratio is only problematic at this specific z; in a punctured neighborhood of z, the ratio is valid. The result now follows by working nearby and then taking limits, in which case we invoke Theorem <ref> on the continuity of Λ(γ,z) as z →∞. §.§ Circles and rigidity Circles are shown to be the unique class of extremal curves which globally minimize Λ. This leads to interesting rigidity results, including a strengthened version of a famous observation made by Kerzman and Stein; see Corollary <ref>. If γ is a circle (or a line, including the point at ∞), then Λ(γ,·) ≡ 1. First let γ = b be the unit circle. Then (<ref>) and (<ref>) show Λ(b,0)^2 = 1/4π^2 S_(0,0)∫_γdσ(ζ)/|ζ|^2 = 1/2π· 2π = 1. Given z ∈\ b, consider the Möbius transformation φ_z(w) = z-w/1-zw. If |z|<1 then φ_z is an automorphism of and if |z|>1 then φ_z is a biholomorphic map from onto \. In either case φ_z(0) = z. Theorem <ref> now shows 1 = Λ(b,0) = Λ(φ_z(b),φ_z(0)) = Λ(b,z). For z=∞, use the map ϕ_∞(w) = w^-1 and repeat the argument above to see Λ(b,∞) = 1. Now let γ⊂ be any circle and z ∈Ω_±. Then there is a Möbius transformation taking γ to b; see <cit.>. Theorem <ref> implies Λ(γ,z) = Λ(b,Φ(z)) = 1. Since z was chosen arbitrarily, we conclude Λ(γ,·) ≡ 1. Let γ be a simple closed Lipschitz curve in the Riemann sphere. * Λ(γ,z) ≥ 1, for all z ∈Ω_±. * If there is a single z ∈Ω_± such that Λ(γ,z) = 1, then Λ(γ,·) ≡ 1 and γ is a circle (or a line, including the point at ∞). 
First suppose that γ is a planar curve enclosing the bounded domain Ω_+. We may assume that z ∈Ω_+, thanks to the Möbius invariance of Λ established in Theorem <ref>. Consider the Riemann map g:→Ω_+ with g(0) = z and g'(0) >0. Proposition <ref> and (<ref>) show 1/2π = S_(0,0) = √(g'(0)) S_+(g(0),g(0)) √(g'(0)) = g'(0) S_+(z,z). Now let Φ_z(w) = 1/w-z and define the (unbounded) domain E = {Φ_z(w): w∈Ω_+ }, along with the map h = Φ_z ∘ g : → E. The Riesz-Privalov theorem <cit.> says that g' is contained in the Hardy space H^1(b), so in particular, it is integrable on the circle. The norm of the Cauchy kernel is thus C(z,·)_L^2(γ)^2 = 1/4π^2∫_γdσ(ζ)/|ζ-z|^2 = 1/4 π^2∫_b|g'(ζ)|/|g(ζ)-z|^2 dσ(ζ) = 1/4 π^2∫_b |h'(ζ)| dσ(ζ). Now combine this with (<ref>): Λ(γ,z)^2 = C(z,·)^2_L^2(γ)/S_+(z,z) = g'(0)/2π∫_b |h'(ζ)| dσ(ζ). The conditions on g show that 1/h(ζ) = g(ζ) - z = g'(0)ζ + g”(0)/2ζ^2 + ⋯ = g'(0)ζ· F_1(ζ), where F_1 is a non-vanishing holomorphic function on with F_1(0) = 1. Thus, h(ζ) = 1/g'(0)ζ· F_1(ζ) = 1/g'(0)ζ + F_2(ζ), where F_2 is holomorphic on the unit disc. Consequently, ζ h'(ζ) = -1/g'(0)ζ + ζ F'_2(ζ). The residue theorem now shows 1 = -( g'(0)/2π i∮_bζ h'(ζ) dζ) = -(g'(0)/2π∫_0^2π e^2 i θ h'(e^iθ) dθ) ≤g'(0)/2π∫_0^2π |h'(e^iθ)| dθ = g'(0)/2π∫_b |h'(ζ)| dσ(ζ) = Λ(γ,z)^2. From these computations, Λ(γ,z) = 1 if and only if e^2 i θ h'(e^iθ) ≤ 0 for all θ∈ [0,2π], which happens if and only if ϕ(ζ) := ζ^2 h'(ζ) ≤ 0 for all ζ∈ b. Equation (<ref>) shows ϕ extends holomorphically to the origin, with ϕ(0) = - g'(0)^-1. Since ϕ is real-valued on b the Schwarz Reflection Principle applies, yielding a bounded holomorphic extension of ϕ to the entire complex plane, which means that ϕ is necessarily constant (ϕ≡ - g'(0)^-1). Thus h'(ζ) = - g'(0)^-1ζ^-2, meaning that h(ζ) = g'(0)^-1ζ^-1 + C for some constant C. This shows that g(ζ) = z + h(ζ)^-1 is a Möbius transformation and therefore γ = g(b) is a circle. Proposition <ref> now shows that Λ(γ,·) ≡ 1. Now if γ is a curve passing through ∞∈, use a Möbius transformation to send it to a bounded planar curve. Theorem <ref> shows that this is well defined and the result now follows from the previous case. §.§ Consequences of rigidity Using a clever geometric description of their eponymous operator, Kerzman and Stein proved the following: Let Ω be a bounded simply connected planar domain with smooth boundary. The Cauchy and Szegő kernels coincide if and only if Ω is a disc. In other words, C(z,ζ) = S(z,ζ) for all z ∈Ω, ζ∈ bΩ if and only if Ω is a disc. Theorem <ref> implies a much stronger rigidity theorem: Let Ω be a bounded simply connected planar domain with Lipschitz boundary. If there exists a single z ∈Ω such that the Cauchy and Szegő kernels satisfy |C(z,ζ)| ≤ |S(z,ζ)| for almost every ζ∈ bΩ, then Ω is a disc. Given z ∈Ω, consider the square of the L^2-distance C(z,·) - S(z,·)^2_L^2(γ) = ∫_γ |C(z,ζ)|^2 dσ(ζ) + ∫_γ |S(z,ζ)|^2 dσ(ζ) -2 ∫_γ C(z,ζ) S(z,ζ) dσ(ζ). Conjugate symmetry and the Szegő reproducing property show that the second integral in the previous line evaluates to S(z,z), while the Cauchy integral formula yields 2 ∫_γ C(z,ζ) S(z,ζ) dσ(ζ) = 2 ∫_γ C(z,ζ) S(ζ,z) dσ(ζ) = 2 S(z,z) = 2 S(z,z). Since Ω is a bounded domain, Remark <ref> says S(z,z) > 0. Thus, C(z,·) - S(z,·)^2_L^2(γ) = C(z,·)_L^2(γ)^2 - S(z,z) = S(z,z) ( Λ(bΩ,z)^2 - 1 ). If there exists a z ∈Ω so that C(z,ζ) = S(z,ζ) for almost every ζ∈ bΩ, then (<ref>) = 0, meaning that Λ(bΩ,z) = 1. Theorem <ref> implies that Ω is a disc. Let γ be a simple closed Lipschitz curve in the plane. 
Then its analytic capacity κ(γ) and arc length σ(γ) satisfy the following inequality: σ(γ) ≥ 2πκ(γ). Equality holds if and only if γ is a circle. Combining Theorems <ref> and <ref>, we see that Λ(γ,∞) = √(σ(γ)/2πκ(γ))≥ 1, and that equality holds if and only if γ is a circle. An estimate due to Ahlfors and Beurling gives a lower bound on the analytic capacity of a simple closed curve in terms of area enclosed (see, e.g., <cit.>): Let γ be a simple closed planar curve enclosing an area of A(γ). Then κ(γ) ≥√(A(γ)/π), with equality holding if and only if γ is a circle. Combining (<ref>) with (<ref>) yields the isoperimetric inequality σ(γ)^2 ≥ 4π A(γ). See <cit.> for another proof of the isoperimetric inequality stemming from the Ahlfors-Beurling estimate. ◊ § THE BEHAVIOR OF Λ(Γ,Z) AT THE BOUNDARY Our goal here is to prove the following result, which confirms part <ref> of Theorem <ref>. Let γ be a simple closed C^1 curve in the Riemann sphere. Then the function z ↦Λ(γ,z) is continuous on all of . In particular, if ζ_0 ∈γ, then lim_z →ζ_0Λ(γ,z) = 1. §.§ Important kernel properties Let X ⊂ be a set and consider f,g : X → [0,∞). We say f and g are comparable on X and write f(z) ≈ g(z), z ∈ X, if there exist constants C_1,C_2>0 such that for all z ∈ X, C_1 f(z) ≤ g(z) ≤ C_2 f(z). Let γ be a simple closed Lipschitz curve in the complex plane and let δ(z) denote the distance of z to γ. Then S_±(z,z) ≈δ(z)^-1, z ∈Ω_+∪{z∈Ω_-:δ(z)<ℓ} . Let D(z,δ) be the disc centered at z of radius δ(z) = δ. The Szegő kernel of this disc is calculated using the unit disc formula (<ref>) and an appropriate affine map in the transformation law (<ref>). From Szegő kernel monotonicity in Proposition <ref> we now obtain S_±(z,z) ≤ S_D(z,δ)(z,z) = 1/2πδ(z). For the other direction, consider first a point z∈Ω_+ and the Riemann map Φ:Ω_+→ with Φ(z)=0, Φ'(z)>0. Using (<ref>) and (<ref>) again we have S_+(z,z)=Φ'(z)/2π. Applying the (rescaled) Koebe one-quarter theorem <cit.> to Φ^-1 we obtain δ(z)≥1/4(Φ^-1)'(0)=1/4 Φ'(z), and so S_+(z,z)=Φ'(z)/2π≥1/8πδ(z). Combining (<ref>) with (<ref>) we have S_+(z,z) ≈δ(z)^-1, z ∈Ω_+. To treat z∈Ω_- close to γ, pick z_0∈Ω_+ and ℓ>0 so that the map η(z):=1/z-z_0 satisfies * η is a bi-Lipschitz map from U_ℓ:={z∈Ω_-:δ(z)<ℓ} to η(U_ℓ) and * |η'(z)|≈ 1 on U_ℓ. Setting δ̃(w) to be the distance from w to η(γ) we have from our work above (with η(Ω_-) replacing Ω_+) along the transformation law (<ref>) that S_-(z,z) ≈ S_η(Ω_-) (η(z),η(z)) ≈δ̃(η(z)) ≈δ(z) for z∈ U_ℓ, completing the proof of the proposition. Let T_1,T_2 be bounded projection operators from L^2(γ) onto ^2_±(γ), each represented by an integral kernel K_j:Ω_±×γ→, such that for f ∈ L^2(γ) and z ∈Ω_±, T_j f(z) = ∫_γ f(ζ)K_j(z,ζ) dσ(ζ). Additionally, assume K_j(z,·) ∈ L^2(γ) for z ∈Ω_±. Then the following holds for a.e. ζ∈γ: T_2^*( K_1(z,·))(ζ) = K_2(z,ζ). Since T_2 is bounded on L^2(γ), there is a corresponding bounded adjoint T_2^*. By assumption, K_1(z,·)∈ L^2(γ) and so T^*_2( K_1(z,·)) ∈ L^2(γ). Thus for f ∈ L^2(γ), ⟨ f, T^*_2( K_1(z,·)) ⟩ = ⟨T_2(f), K_1(z,·)⟩ = T_1 ∘T_2 f(z) = T_2 f(z), since T_2 f ∈^2_±(γ) and T_1 is a projection onto _±^2(γ). On the other hand, ⟨ f, K_2(z,·)⟩ = T_2 f(z), by definition. Equating (<ref>) and (<ref>) we see that T_2^*( K_1(z,·)) - K_2(z,·)∈ L^2(γ) is orthogonal to all of L^2(γ), and is therefore almost everywhere zero. The Cauchy kernel and Szegő kernels of ^2_±(γ) are related as follows: C_±^*(S_±(·,z))(ζ) = C_±(z,ζ), z ∈Ω_±, ζ∈γ, S_±(C_±(z,·))(ζ) = S_±(ζ,z), z ∈Ω_±, ζ∈γ. 
Apply Proposition <ref> with T_1 = S_± and T_2 = C_± for (<ref>). Switch the roles of the operators and use the self-adjointness of the Szegő projection to deduce (<ref>). §.§ A symmetric relationship between Cauchy and Szegő kernels The following calculation is crucial to our proof of Theorem <ref>. Let γ be a simple closed oriented Lipschitz curve in the plane and let z ∈Ω_±. The Cauchy and Szegő kernels are related by the following formula for almost every ζ∈γ: C_±^*(S_±(·,z))(ζ) = C_±(z,ζ) We first note an easier-to-verify relationship between the two kernels, namely that S_±(C_±(z,·))(ζ) = S_±(ζ,z). Indeed, observe that for fixed z ∈Ω_±, that both sides of (<ref>) (as functions in ζ) are in _±^2(γ). Now, given any g ∈_±^2(γ), Cauchy's integral formula says g(z) = ⟨ g, C_±(z,·)⟩ = ⟨S_± g, C_±(z,·)⟩ = ⟨ g, S_±(C_±(z,·)) ⟩. On the other hand, the reproducing property of the Szegő kernel shows g(z) = ⟨ g, S_±(·,z) ⟩. Uniqueness now gives (<ref>). Second, recall Corollary <ref> on the orthogonal decomposition of functions in L^2(γ) and set f = C_±(z,·): C_±(z,ζ) = S_±(C_±(z,·))(ζ) ±T(ζ) S_±(±T(·) C_±(z,·))(ζ) = S_±(ζ,z) ±T(ζ) H_±(z,ζ), where we used (<ref>) in the second line. Note that we also have defined H_±(z,ζ) := S_±(±T(·) C_±(z,·))(ζ), and we emphasize that H_±(z,·) ∈_±^2(γ). Conjugate and rewrite (<ref>) to obtain S_±(z,ζ) = C_±(z,ζ) ∓ T(ζ) H_±(z,ζ). Now, apply the Cauchy adjoint formula (<ref>) with g = S_±(·,z), and slightly rewrite: C_±^*(S_±(·,z))(ζ) = S_±(ζ,z)/2∓( 1/2π i T(ζ) P.V.∫_γS_±(z,ξ)/ξ - ζ dσ(ξ) ). It remains to understand the principle value integral appearing above. (We emphasize that the Szegő reproducing formula fails to apply since the function ξ↦1/ξ - ζ is not in _±^2(γ) when ζ∈γ.) Using (<ref>), we have P.V.∫_γS_±(z,ξ)/ξ - ζ dσ(ξ) = P.V.∫_γC_±(z,ξ)/ξ - ζ dσ(ξ) - P.V.∫_γ± T(ξ) H_±(z,ξ)/ξ - ζ dσ(ξ); now consider the pieces on the right hand side separately: P.V.∫_γC_±(z,ξ)/ξ - ζ dσ(ξ) = 1/2π iP.V.∫_γ± T(ξ)/(ξ - ζ)(ξ-z) dσ(ξ) = 1/2 π i(ζ-z)( P.V.∮_γ± d ξ/ξ - ζ∓∮_γ d ξ/ξ-z). The residue theorem gives that ∮_γdξ/ξ-z = 2π i, z ∈Ω_+, 0, z ∈Ω_-. Since γ is Lipschitz, for almost every ζ∈γ, half of the residue is picked up in the principle value computation (??see Appendix on Principle values on the Lipschitz curves??): P.V.∮_γ± d ξ/ξ-ζ = ±π i Thus for z ∈Ω_+, almost every ζ∈γ, P.V.∫_γC_+(z,ξ)/ξ - ζ dσ(ξ) = π i -2π i /2 π i (ζ-z) = -1/2(ζ-z). Similarly for z ∈Ω_-, almost every ζ∈γ, P.V.∫_γC_+(z,ξ)/ξ - ζ dσ(ξ) = -π i + 0/2 π i (ζ-z) = -1/2(ζ-z). Since H_±(z,·) ∈_±^2(γ), the boundary value H_±(z,ζ) exists for almost every ζ∈γ (Appendix). Now by reasoning analogous to (<ref>), half of the residue in the principle value integral is again picked up for almost every ζ∈γ (see Appendix): P.V.∫_γ± T(ξ) H_±(z,ξ)/ξ - ζ dσ(ξ) = P.V.∮_γ± H_±(z,ξ)/ξ-ζ dξ = π i H_±(z,ζ). Now insert (<ref>) and (<ref>) into (<ref>) to obtain P.V.∫_γS_±(z,ξ)/ξ - ζ dσ(ξ) = -1/2(ζ-z) - π i H_±(z,ζ) = -1/ζ-z + π i T(ζ)( T(ζ)/2π i(ζ-z) - H_±(z,ζ) T(ζ) ) = -1/ζ-z±π i T(ζ)( ± T(ζ)/2π i(ζ-z)∓ H_±(z,ζ) T(ζ) ) = -1/ζ-z±π i T(ζ) S_±(z,ζ), where the last step follows from (<ref>). Now inserting (<ref>) into (<ref>) yields C_±^*(S_±(·,z))(ζ) = S_±(ζ,z)/2∓( 1/2π i T(ζ) ( -1/ζ-z±π i T(ζ) S_±(z,ζ) ) ) = S_±(ζ,z)/2∓( -T(ζ)/2π i(ζ-z)±S_±(z,ζ)/2) = S_±(ζ,z)/2 + (± T(ζ)/2π i(ζ-z)) - S_±(ζ,z)/2 = C_±(z,ζ). When a computation similar to (<ref>) is considered, but with opposite indexing signs on the operator and kernel (now ± and ∓), similar reasoning shows C^*_±(S_∓(·,z))(ζ) = S_∓(z,ζ)-C_∓(z,ζ), z ∈Ω_∓. 
◊ The self-adjointness of the Szegő projection yields a re-expression of equations (<ref>) and (<ref>) in completely parallel formulation: C_±^*(S_±(z,·))(ζ) = C_±(z,ζ), z ∈Ω_±, S_±^*(C_±(z,·))(ζ) = S_±(z,ζ), z ∈Ω_±. ◊ §.§ The Berezin transform and the Kerzman-Stein operator §.§.§ The Berezin transform Let γ be a simple closed Lipschitz curve in the plane oriented counterclockwise. Given z ∈Ω_±∖{∞}, define the unit vector s^±_z ∈_±^2(γ) ⊂ L^2(γ) by normalizing the Szegő kernel as follows: s_z^±(ζ) := S_±(ζ,z)/√(S_±(z,z)). Let γ be a simple closed Lipschitz curve in the plane and z ∈\{γ}. The unit vectors s^±_z ∈^2_±(γ) tend weakly to 0 as z approaches γ. If f ∈ L^2(γ) is perpendicular to _±^2(γ), observe that ⟨ f,s_z^±⟩ = S_±(z,z)^-1/2S_± f(z) = 0. It is therefore sufficient to test only against functions f in the Hardy space. If f ∈_±^2(γ) and z ∈Ω_±\{∞}, the Szegő reproducing property gives ⟨ f, s_z^±⟩ = S_±(z,z)^-1/2f(z). By Remark <ref> the subspace A^α(Ω_±) = (Ω_±) ∩ C^α(Ω_±) is dense in _±^2(γ), so we may choose a sequence of functions {f_n}⊂ A^α(Ω_±) tending to f in the L^2(γ)-norm. Then |f(z) - f_n(z)| = | ∫_γ S_±(z,ζ)(f(ζ) - f_n(ζ)) dσ(ζ) | ≤ S_±(z,z)^1/2f-f_n, which implies |S_±(z,z)^-1/2 f(z) - S_±(z,z)^-1/2 f_n(z)| ≤f-f_n. Thus, |S_±(z,z)^-1/2f(z)| ≤ |S_±(z,z)^-1/2f(z) - S_±(z,z)^-1/2f_n(z)| + |S_±(z,z)^-1/2f_n(z)| ≤f-f_n + S_±(z,z)^-1/2|f_n(z)|. Now given ϵ >0, we may choose N large enough so that f-f_N < ϵ/2. Since f_N ∈ C^α(Ω_±), |f_N| assumes a maximum value on the closure. And since S_±(z,z) ≈δ(z)^-1 (Proposition <ref>), z can be taken sufficiently close to γ to ensure that S_±(z,z)^-1/2|f_N(z)| ≤ S_±(z,z)^-1/2 sup|f_N| < ϵ/2. Now combining the above inequalities with (<ref>), we have |⟨ f,s_z^±⟩| = |S_±(z,z)^-1/2 f(z)| < ϵ for z sufficiently close to γ. Since ϵ was arbitrary, we conclude that s_z^± tends weakly to 0 as z is sent to γ. Suppose T is a bounded operator on L^2(γ). We define its Berezin transform to be the function T:Ω_±\{∞}→ given by the formula T(z) := ⟨T s^±_z, s^±_z ⟩, z ∈Ω_±\{∞}. The Berezin transform is important in the study of Toeplitz and Hankel operators in Bergman and Hardy space settings. There is an extensive body of literature on this topic; see, e.g., the survey <cit.> and the references therein. Let γ be a simple closed Lipschitz curve and T a compact operator on L^2(γ). Then the Berezin transform T(z) tends to 0 as z is sent to γ. Since compact operators are completely continuous and s_z^± tends weakly to 0 as z is sent to γ, we have that T s_z^±→ 0. Now observe that |T(z)| = |⟨T s_z^±,s_z^±⟩| ≤T s_z^±s_z^±≤T s_z^±, which completes the proof. Now suppose that T_1 and T_2 are bounded operators on L^2(γ). We define a function (γ,T_1,T_2): \{γ}→ by the formula (γ,T_1,T_2)(z) := ⟨T_1 s^+_z, s^+_z ⟩, z ∈Ω_+, ⟨T_2 s^-_z, s^-_z ⟩, z ∈Ω_- \{∞}. We refer to (γ,T_1,T_2) as a concatenated Berezin transform. This allows the consideration of two different operators on Ω_+ and Ω_- simultaneously. §.§.§ Kerzman-Stein operators Define the operator A_± = C_± - C_±^*. Kerzman and Stein <cit.> showed that the singularities of the Cauchy kernel and its adjoint cancel out as long as the associated curve is smooth. Lanzani <cit.> improved the applicability of this result to C^1 curves. Let γ be a C^1 curve in the complex plane. Then A_± is a compact operator on L^2(γ). See <cit.> for the original proof for C^∞ domains and Lanzani's work <cit.> for the proof on C^1 curves. Also see Bell's book <cit.> for a different perspective on the C^∞ setting. 
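On the unit circle the weak null-convergence of s_z^± proved above is transparent, since ⟨ f, s_z^+ ⟩ = S_+(z,z)^{-1/2} f(z) = f(z) √(2π(1-|z|^2)). The following Python sketch (with f(ζ) = ζ^2 as an illustrative test function) checks this against direct quadrature as z approaches the boundary.

import numpy as np

theta = np.linspace(0, 2 * np.pi, 40000, endpoint=False)
zeta = np.exp(1j * theta)
dsig = 2 * np.pi / theta.size
f = zeta ** 2                                              # boundary values of f(zeta) = zeta^2

for r in [0.5, 0.9, 0.99, 0.999]:
    z = r * np.exp(0.3j)
    S_zz = 1.0 / (2 * np.pi * (1 - abs(z) ** 2))           # Szego kernel of the disc on the diagonal
    s_z = (1.0 / (2 * np.pi * (1 - zeta * np.conj(z)))) / np.sqrt(S_zz)   # normalized kernel s_z(zeta)
    inner = np.sum(f * np.conj(s_z)) * dsig                # <f, s_z> in L^2 of the circle
    print(r, abs(inner), abs(z) ** 2 * np.sqrt(2 * np.pi * (1 - abs(z) ** 2)))   # both tend to 0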
Let γ be a simple closed Lipschitz curve in the complex plane. The following computations hold for z ∈\{γ}: (γ,A_+,A_-)(z) ≡ 0, (γ,A^2_+,A^2_-)(z) = 1 - Λ(γ,z)^2, where A_±^2 = A_±∘A_±. Let z ∈Ω_±\{∞}. For (<ref>), we need only note that C_± fixes s_z. Thus, (γ,A_+,A_-)(z) = ⟨A_± s_z^±,s_z^±⟩ = ⟨ (C_± - C_±^*) s_z^±,s_z^±⟩ = ⟨C_± s_z^±,s_z^±⟩ - ⟨ s_z^±,C_± s_z^±⟩ = ⟨ s_z^±,s_z^±⟩ - ⟨ s_z^±, s_z^±⟩ = 0. For (<ref>), we have that since both C_± and C_±^* are projections (γ,A^2_+,A^2_-)(z) = ⟨ (C_± - C_±^*)^2 s_z^±,s_z^±⟩ = ⟨ (C^2_± - C_±^*C_± - C_±C_±^* + (C_±^*)^2) s_z^±, s_z^±⟩ = ⟨C_± s_z^±, s_z^±⟩ - ⟨C_±^* C_± s_z^±, s_z^±⟩ - ⟨C_±C_±^* s_z^±, s_z^±⟩ + ⟨C_±^* s_z^±, s_z^±⟩ = ⟨ s_z^±, s_z^±⟩ - ⟨C_± s_z^± ,C_± s_z^±⟩ - ⟨C_±C_±^* s_z^±, s_z^±⟩ + ⟨ s_z^±, C_± s_z^±⟩ = 1 - C_±^* s_z^±^2. But notice that C_±^* s_z^2 = 1/S_±(z,z)C^*_±( S_±(·,z) )^2 = 1/S_±(z,z)C_±(z,·)^2 = Λ_±(γ,z)^2, where the second equality follows from (<ref>) and the last equality is by definition. Combining (<ref>) and (<ref>) gives the result. First assume that γ is a simple closed C^1 curve in the plane. It is clear from the definition that z ↦Λ(γ,z) is continuous on \{γ}, while continuity at ∞ has already been verified in Theorem <ref>. It thus remains to check what happens near the curve (by definition, Λ(γ,ζ_0) = 1 for ζ_0 ∈γ). Proposition <ref> says A_± is compact, so A_±^2 is also compact. Lemma <ref> thus implies that (γ,A^2_+,A^2_-)(z) = ⟨A_±^2 s_z^±,s_z^±⟩→ 0 as z ∈Ω_± tends to any ζ_0 ∈γ. But by Proposition <ref>, this is equivalent to saying Λ_±(γ,z)^2 → 1 as z ∈Ω_± tends to ζ_0 ∈γ. Since Λ_±(γ,z) is positive, we conclude that lim_z →ζ_0Λ(γ,z) = 1 = Λ(γ,ζ_0). If γ is a C^1 curve in the Riemann sphere passing through the point at infinity, then we use a Möbius transformation Φ to map it to a bounded C^1 curve Φ(γ). Möbius invariance combined with the above argument now completes the proof. § EXAMPLES §.§ Wedges Given θ∈ (0,π), define two complementary wedges W_θ = {r e^iφ∈: r>0, |φ| < θ}, V_θ = {r e^iφ∈: r>0, θ < φ < 2 π -θ}. It suffices to consider θ∈ (0,π/2), so that W_θ is a convex set and V_θ is non-convex. Let γ_θ parameterize the boundary bW_θ: γ_θ(t) = -t e^i θ, t ∈ (-∞,0), t e^-i θ, t ∈ [0,∞), ∞, t = ∞. We now have a partition of the Riemann sphere = W_θ∪ V_θ∪γ_θ. In the notation of previous sections, we have W_θ = Ω_+ and V_θ =Ω_-. Thus we write Λ_+(γ_θ,r e^i φ) = 1/4π^2 S_W_θ(r e^iφ, r e^iφ)∫_γ_θdσ(ζ)/|ζ-r e^i φ|^2, r>0, φ∈ (-θ,θ), Λ_-(γ_θ,r e^i φ) = 1/4π^2 S_V_θ(r e^iφ, r e^iφ)∫_γ_θdσ(ζ)/|ζ-r e^i φ|^2, r>0, φ∈ (θ,2π - θ). §.§.§ Szegő kernels The Szegő kernels of W_θ and V_θ, can be computed from their Riemann maps. For α∈ (0,π), let W_α be the (possibly non-convex) wedge given by (<ref>). It is straightforward to verify that the Riemann map Ψ_α: W_α→ takes the form Ψ_α(z) = 1-z^π/2α/1+z^π/2α, where the fractional power z^π/2α refers to the branch preserving the positive real axis. Setting α = θ and z = r e^iφ, the transformation law in Proposition <ref> and (<ref>) show S_W_θ(re^iφ,re^iφ) = 1/8rθ(πφ/2θ), r>0, φ∈ (-θ,θ). The Szegő kernel for V_θ is computed similarly. First observe that the map z ↦ -z sends V_θ to W_π-θ. From here, apply the map Ψ_π-θ to obtain the Riemann map from V_θ to . S_V_θ(re^iφ,re^iφ) = 1/8r(π-θ)(π/2(π - φ)/(π - θ)), r>0, φ∈ (θ,2π-θ). §.§.§ L^2-norm of the Cauchy kernel Computation of the integrals in (<ref>) and (<ref>) is assisted by the following Let α∈ (0,2π) and r>0. Then (r,α) := ∫_0^∞dx/|x-re^iα|^2 = 1/r(π-α), where (t):= sin(t)/t, t ≠ 0 1, t=0. 
If α = π, the fundamental theorem of calculus gives the result. When α∈ (0,π), (r,α) = ∫_0^∞dx/x^2+r^2-2xrcosα = 1/r^2sin^2α∫_0^∞dx/1+ (x-rcosα/rsinα)^2 = 1/rsinα∫_-α^∞du/1+u^2 = 1/rsinα( π/2 + arctan(α) ). Elementary trigonometry now confirms that (<ref>) holds in this case. For α∈ (π,2π), reflection across the horizontal axis reveals that (r,α) = (r,2π-α). Combining this with the earlier result for α∈ (0,π] shows that (<ref>) holds for α∈ (0,2 π). Fix θ∈ (0,π2) and take z = r e^iφ∈ W_θ, with r>0, φ∈ (-θ,θ). Using Lemma <ref> it is easily verified that (just draw a picture) C(re^iφ,·)^2_L^2(γ_θ) = 1/4π^2( (r,θ-φ) + (r,θ+φ) ) = 1/4π^2 r( 1/(π-(θ-φ)) + 1/(π-(θ+φ))). Similarly, take z = r e^iφ∈ V_θ, with r>0 and φ∈ (θ,2π-θ). Then Lemma <ref> gives C(re^iφ,·)^2_L^2(γ_θ) = 1/4π^2( (r,φ - θ) + (r,2π - θ - φ) ) =1/4π^2 r( 1/(π-(φ-θ)) + 1/(π-(φ+θ))). §.§.§ The Λ-function From Theorem <ref> we easily see Λ(γ_θ,r e^iφ) is independent of r>0. Alternately, by canceling factors of r^-1 in the quotients of (<ref>) and (<ref>) by the corresponding results from (<ref>) and (<ref>) we obtain the following For θ∈ (0,π/2), the function re^iφ↦Λ(γ_θ,r e^iφ)^2 is computed on W_θ = Ω_+ and V_θ = Ω_-. (a) Let z = r e^i θ∈ W_θ, so that r>0 and φ∈ (-θ,θ). Then Λ_+(γ_θ,re^iφ)^2 = 2θ/π^2( 1/(π-(θ-φ)) + 1/(π-(θ+φ))) cos(πφ/2θ). (b) Let z = r e^i θ∈ V_θ, so that r>0 and φ∈ (θ, 2π-θ). Then Λ_-(γ_θ,re^iφ)^2 =2(π-θ)/π^2( 1/(π-(φ-θ)) + 1/(π-(φ+θ))) cos(π/2(π - φ)/(π - θ)). §.§.§ Remarks on the formula Since Λ(γ_θ,r e^i φ) is independent of r>0, let us define L(θ,φ) := Λ(γ_θ, e^i φ). Theorem <ref> tells us that C_L^2(γ_θ)≥sup{ L(θ,φ) : φ∈ [-θ, 2π-θ) }. ∙ In <cit.> Bolt observes that a Möbius transformation maps the wedge W_θ onto a lens with vertices at ± 1. This lens has boundary length σ(γ) = 4θθ and capacity κ(γ) = π/(2(π-θ)), and Bolt uses this information to obtain a lower bound on A; see Remark <ref>. The closely related lower bound on C given by Λ(γ,∞) in Theorem <ref> together with the Möbius invariance of Λ shows C_L^2(γ_θ)≥√(σ(γ)/2πκ(γ)) =2/π√((π-θ)θθ) := B(θ). ∙ The graphs of L(θ,φ) and B(θ) for θ = π8, π4 are displayed in Figure <ref>. B(θ) agrees with the maximum value (at φ = 0) of L(θ,φ) when the angles correspond to the interior domain W_θ, but is strictly less than the maximum value when φ is taken from the exterior V_θ. ∙ When z ∈ W_θ∪ V_θ, Theorem <ref> guarantees that Λ(γ_θ,z)>1. But the formulas in Theorem <ref> show that Λ(γ_0,z) → 1 as z=re^iφ tends to any smooth point on the curve γ_θ (meaning that r>0 and φ→ -θ, θ or 2π -θ). This is illustrated in Figure <ref> when θ= π8, π4, and should be compared with the continuity result given in Theorem <ref>. ∙ Since the non-smooth boundary points 0, ∞∈γ_θ can be approached from any angle φ∈ (-θ,2π-θ), the function re^iφ↦Λ(γ_θ, re^iφ) is discontinuous at both. In light of the relationship between Λ, the Berezin transform and the Kerzman-Stein operator A in Section <ref>, lack of continuity shows that A is not compact. Bolt and Raich <cit.> show that A is never compact when corners are present. §.§ Ellipses The next family of curves we consider are ellipses. For r ≥ 1, define ℰ_r = {x+iy ∈ : x^2/r^2+y^2 = 1 }. As usual _r is oriented counterclockwise, so that Ω^r_+ is the filled in ellipse. §.§.§ Families of special functions Elliptic integrals, elliptic functions and Jacobi theta functions comprise a deep and beautiful area of mathematics. In what follows, the reader is not assumed to have prior background and all necessary definitions are given. 
References to properties relevant to our computations of Λ(_r,z) are also interspersed as needed. The elliptic integrals of the first, second and third kinds (K,E,Π, respectively) make up a canonical set to which all other elliptic integrals can be reduced. In what follows, variables k ∈ (0,1) and n ∈ are called the elliptic modulus and characteristic, respectively. We follow conventions used by Whittaker and Watson <cit.>, but the reader is cautioned that other conventions are also in common use (especially in mathematical software; e.g., Mathematica implements these as functions of k^2, not k): K(k) = ∫_0^1 dt/√((1-t^2)(1-k^2t^2)), E(k) = ∫_0^1 √(1-k^2t^2/1-t^2) dt, Π(n,k) = ∫_0^1 dt/(1-n t^2)√((1-t^2)(1-k^2t^2)). Next, recall the Jacobi theta functions, where z ∈ and |q|<1 (see <cit.>): ϑ_1(z,q) = 2 q^1/4∑_j=0^∞ (-1)^j q^j(j+1)sin[(2j+1)z], ϑ_2(z,q) = 2 q^1/4∑_j=0^∞ q^j(j+1)cos[(2j+1)z], ϑ_3(z,q) = 1 + 2∑_j=1^∞ q^j^2cos(2jz), ϑ_4(z,q) = 1 + 2∑_j=1^∞ (-1)^j q^j^2cos(2jz). Theta functions also admit elegant infinite product expansions; see <cit.>. Define for k ∈ [0,1], the elliptic nome q(k) by q(k) = exp[-π K(√(1-k^2))/K(k)]. This function is strictly increasing with q(0)=0 and q(1)=1. The inverse nome k(q) is defined for q ∈ [0,1) by an infinite product, or equivalently by a ratio of theta functions: k(q) = 4√(q)∏_j=1^∞[ 1+q^2j/1+q^2j-1]^4 = ϑ_2(0,q)^2/ϑ_3(0,q)^2. A slight abuse of notation lets us write k(q(k)) = k and q(k(q))=q on [0,1); also lim_q → 1 k(q) = 1, though the infinite product/theta function formula is not valid at q=1. Finally, define Jacobi's elliptic sn function in terms of the above functions sn(u,k) = ϑ_3(0,q(k))/ϑ_2(0,q(k))·ϑ_1(uϑ_3(0,q(k))^2,q(k))/ϑ_4(uϑ_3(0,q(k))^2,q(k)). The function u ↦sn(u,k) is is meromorphic and doubly periodic with quarter periods K := K(k) and iK' := iK(√(1-k^2)), i.e., sn(u,k) = sn(u + 4K,k) and sn(u,k) = sn(u + 4iK',k). §.§.§ The norm of the Cauchy kernel For z ∈Ω^r_+ ∪Ω^r_-, we write the norm C_±(z,·)_L^2(_r) as an integral over [-1,1]. First parametrize the top and bottom halves of _r by ζ^±(t) = rt ± i√(1 - t^2), -1≤ t ≤ 1. Writing the arc length differential dσ(ζ) in terms of this parametrization gives dσ(ζ) = |dζ(t)| = r √(1-(1-1/r^2)t^2/1-t^2) dt. If z = α+iβ, with α,β∈, then (<ref>) implies |ζ^±(t) - z|^2 = |(rt-α) ± i(√(1-t^2)∓β)|^2 = α^2 + β^2 +1 - 2rα t + (r^2-1)t^2 ∓ 2β√(1-t^2). Now combine this with (<ref>) to see C(z,·)^2 =1/4π^2∫__rdσ(ζ)/|ζ - z|^2 = 1/4π^2∫_-1^1 |dζ(t)|/|ζ^+(t) - α-iβ|^2 + 1/4π^2∫_-1^1 |dζ(t)|/|ζ^-(t) - α-iβ|^2 = r/2π^2∫_-1^1 α^2 + β^2 + 1 - 2rα t + (r^2-1)t^2/(α^2 + β^2 + 1 - 2rα t + (r^2-1)t^2)^2 - 4β^2(1-t^2)√(1-(1-1/r^2)t^2/1-t^2) dt. In the special case in which α = β = 0, (<ref>) reduces to C(0,·)^2_L^2(_r) = 1/2π^2r∫_-1^1 r^2 - (r^2-1)t^2/1 + (r^2-1)t^21/√((1-t^2)(1-(1-1/r^2)t^2)) dt = 1/π^2r∫_0^1 ( r^2+1/1 - (1-r^2)t^2 -1 ) 1/√((1-t^2)(1-(1-1/r^2)t^2)) dt = 1/π^2 r( (r^2+1) ·Π(1-r^2, √(1-1r^2)) - K(√(1-1r^2)) ). §.§.§ The Szegő kernel of the interior domain The Riemann map of the ellipse Θ_r : Ω_+^r →, takes the following form (see, e.g., <cit.> or <cit.>): Θ_r(z) = √(k_r)·sn(2 K(k_r)/πarcsin(z/√(r^2-1)) , k_r ). Here sn(·,·) is the elliptic function (<ref>), K is the elliptic integral (<ref>). The elliptic modulus k_r ∈ [0,1) is the unique value (determined by (<ref>)) satisfying q(k_r) = (r-1/r+1)^2. The value of k_r in (<ref>) is determined by the eccentricity of _r, while the factor √(r^2-1) appearing in the arcsine accounts for the fact that the foci of _r are at (±(r^2-1),0). 
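The closed form for C(0,·)^2_{L^2} on the ellipse obtained above can be sanity-checked against direct quadrature. The sketch below uses mpmath, whose ellipk and ellippi take the parameter m = k^2 rather than the modulus k — exactly the convention issue cautioned about above, and an assumption worth re-verifying against one's own library.

import numpy as np
from mpmath import mp, ellipk, ellippi

mp.dps = 30
r = 2.0

# direct quadrature of (1/4pi^2) * int dsigma(zeta)/|zeta|^2 with zeta(t) = r cos t + i sin t
t = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
arc = np.sqrt(r**2 * np.sin(t)**2 + np.cos(t)**2)          # |zeta'(t)|
quad = np.sum(arc / (r**2 * np.cos(t)**2 + np.sin(t)**2)) * (2 * np.pi / t.size) / (4 * np.pi**2)

# closed form: ((r^2+1) Pi(1-r^2, k) - K(k)) / (pi^2 r) with k = sqrt(1 - 1/r^2), m = k^2
closed = ((r**2 + 1) * ellippi(1 - r**2, 1 - 1 / r**2) - ellipk(1 - 1 / r**2)) / (np.pi**2 * r)
print(quad, closed)   # the two values should agree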
From <cit.>, it can be seen that 2 K(k)/π = ϑ_3(0,q(k))^2, √(k) = ϑ_2(0,q(k))/ϑ_3(0,q(k)). Equations (<ref>), (<ref>) and the definition of sn(·,·) in (<ref>) lets us rewrite (<ref>): Θ_r(z) = ϑ_2(0,(r-1r+1)^2 )/ϑ_3(0,(r-1r+1)^2 )·sn( ϑ_3 (0,(r-1r+1)^2)^2 arcsin(z/√(r^2-1)) , k_r ) = ϑ_1(arcsin(z√(r^2-1)),(r-1r+1)^2 )/ϑ_4(arcsin(z√(r^2-1)),(r-1r+1)^2 ). The Szegő transformation formula (<ref>) gives S_+(z,z) in terms of Θ_r and Θ_r' S_+(z,z) = |Θ_r'(z)|/2π(1-|Θ_r(z)|^2). When z=0, this formula simplifies. Indeed, (<ref>) shows that Θ_r(0) = 0 and (letting ϑ'_1 denote differentiation in the first slot), the quotient rule gives Θ_r'(0) = ϑ'_1( 0, (r-1r+1)^2 )/√(r^2-1)·ϑ_4( 0, (r-1r+1)^2 ). In <cit.>, Whittaker and Watson establish the following “remarkable result" of Jacobi, saying “several proofs have been given, but none are simple": ϑ'_1(0,q) = ϑ_2(0,q) ϑ_3(0,q) ϑ_4(0,q). This can be inserted into (<ref>) to show S_+(0,0) = |Θ'_r(0)|/2π = ϑ_2( 0, (r-1r+1)^2 ) ϑ_3( 0, (r-1r+1)^2 )/2π√(r^2-1). §.§.§ Calculation of Λ(_r,0) and Λ(_r,∞) Let r ≥ 1. The function z ↦Λ(_r,z)^2 assumes the following values. Λ(_r,∞)^2 = 4r/π(r+1) E(√(1-1r^2)), Λ(_r,0)^2 = 2/π√(1-1r^2)· (r^2+1) ·Π(1-r^2, √(1-1r^2)) - K(√(1-1r^2)) /ϑ_2(0,(r-1/r+1)^2 ) ϑ_3(0,(r-1/r+1)^2 ). The z=∞ formula follows from the definition of Λ in (<ref>). (Recall that Theorem <ref> shows that Λ(_r,z) is continuous at z=∞.) It is known (see <cit.>) that the capacity κ of the ellipse {x^2/a^2 + y^2/b^2 = 1 } is a+b/2, so κ(_r) = r+1/2. On the other hand, (<ref>) shows the arc length of _r is given by σ(_r) = ∫__rdσ(ζ) = 2r∫_-1^1 √(1-(1-1/r^2)t^2/1-t^2) dt = 4r E(√(1-1r^2)). For the z=0 formula, simply divide (<ref>) by (<ref>). §.§.§ Remarks We discuss properties of Λ(_r,0) and Λ(_r,∞) as r varies, giving special attention to the endpoint cases of r → 1 and ∞. Both values give “asymptotically sharp" estimates on C as r→ 1, though Λ(_r,0) is larger and thus “sharper" (see Figure <ref>). We also briefly discuss Λ(_r,z) for other z values. ∙ Behavior of Λ(_r,∞) and Λ(_r,0). Using well known asymptotic behavior of elliptic integrals and theta functions (see <cit.>) we expand Λ(_r,∞) and Λ(_r,0) near r=1: Λ(_r,∞) = 1 + 132(r-1)^2 - 132(r-1)^3 + 3128(r-1)^4 + O(|r-1|^5) Λ(_r,0) = 1 + 132(r-1)^2 - 132(r-1)^3 + 7200(r-1)^4 + O(|r-1|^5). (For the reader interested in working out these details by hand, it is convenient to re-write Λ(_r,0) using the so-called Heuman Lambda function; see <cit.>). In <cit.>, Bolt shows that the spectrum of the Kerzman-Stein operator A on an ellipse consists of eigenvalues ± i λ_l, where each ± i λ_l has multiplicity 2 and λ_1 ≥λ_2 ≥⋯≥ 0. He then provides asymptotics of the eigenvalues as the eccentricity tends to zero. The largest number on the list is λ_1 = A = √(||C||^2-1), and we deduce from Bolt's estimates that, as r → 1, C≈√( 1 + 14(r-1r+1)^2 ) = 1 + 132(r-1)^2 + O(|r-1|^3). Comparing (<ref>) to (<ref>) and (<ref>), we see the expansions for Λ(_r,∞) and Λ(_r,0) are both asymptotically sharp as r → 1. But if we then compare the coefficients of (r-1)^4, we see that Λ(_r,0) is the better lower bound on C near r = 1 (since 7200 > 3/128). This information proves Theorem <ref>. Known asymptotic expansions of elliptic integrals and theta functions also yield the behavior of Λ(_r,∞) and Λ(_r,0) as r →∞: lim_r →∞Λ(_r,0) = 2/√(π) = lim_r →∞Λ(_r,∞). The Mathematica generated plot in Figure <ref> suggests Λ(_r,0) > Λ(_r,∞) for all r ≥ 1, and it would be interesting to prove this. 
It also appears in the plot that both functions are strictly increasing in r. This is relatively straightforward to prove in the case of Λ(_r,∞), but the Λ(_r,0) case seems to be harder. ◊ The above information should be considered together with a known upper bound on the norm of C. Adapting results by Feldman, Krupnik and Spitkovsky in <cit.>, we see that for r ≥ 1, C_L^2(_r)≤√(1 + (r-1/r+1)^2). In particular, the norm of the Cauchy transform on any ellipse is always less than √(2). ◊ ∙ Values of Λ(_r,z) for z≠ 0,∞. The formulas provided in (<ref>), (<ref>) and (<ref>) are valid for z ∈Ω_+^r. Numerical evidence for specific r values suggests that z=0 may in fact maximize Λ_+(_r,z). This is illustrated in Figure <ref>, when r=2 and z = α is real valued. In this picture, the interior domain corresponds to α∈ (-2,2). To compute Λ_-(_r,z), we need the exterior Riemann map. The map from the unit disc to Ω_-^r (the complement of the solid ellipse) is given by a Joukowski map (see <cit.>). Such maps can be inverted explicitly and the desired Ψ_r: Ω_-^r → obtained. The particular map used to generate the figure (the r=2 case) for |α| > 2 is given by Ψ_2(z) = 3/(z+√(z^2-3)), from which the exterior Szegő kernel S_-(z,z) can be obtained. This is then combined with the Cauchy norm computation in (<ref>) to yield the exterior Λ-function.
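The two closed forms for Λ(_r,0) and Λ(_r,∞) are likewise easy to evaluate numerically. The sketch below (same mpmath conventions as before, parameter m = k^2, Jacobi theta values via jtheta — assumptions about the library, not statements from the text) displays both quantities for a few r and exhibits the limits 1 as r → 1 and 2/√π ≈ 1.128 as r → ∞ discussed above.

from mpmath import mp, ellipk, ellipe, ellippi, jtheta, sqrt, pi

mp.dps = 30
for r in [1.001, 2, 10, 100]:
    m = 1 - 1 / mp.mpf(r) ** 2                      # m = k^2 with k = sqrt(1 - 1/r^2)
    q = ((r - 1) / (r + 1)) ** 2                    # elliptic nome q(k_r) used above
    lam_inf = sqrt(4 * r * ellipe(m) / (pi * (r + 1)))
    lam_0 = sqrt(2 / pi * sqrt(m) * ((r ** 2 + 1) * ellippi(1 - r ** 2, m) - ellipk(m))
                 / (jtheta(2, 0, q) * jtheta(3, 0, q)))
    print(r, lam_0, lam_inf)                        # both -> 1 as r -> 1 and -> 2/sqrt(pi) as r grows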
http://arxiv.org/abs/2407.11946v2
20240716173559
Hierarchical Separable Video Transformer for Snapshot Compressive Imaging
[ "Ping Wang", "Yulun Zhang", "Lishun Wang", "Xin Yuan" ]
cs.CV
[ "cs.CV" ]
P. Wang et al. Zhejiang University, Hangzhou, China Westlake University, Hangzhou, China Shanghai Jiao Tong University, Shanghai, China {wangping,wanglishun,xyuan}@westlake.edu.cn    yulun100@gmail.com Hierarchical Separable Video Transformer for Snapshot Compressive Imaging Ping Wang1,20009-0001-2746-5102 Yulun Zhang30000-0002-2288-5079 Lishun Wang20000-0003-3245-9265Xin Yuan2Corresponding author.0000-0002-8311-7524 Received —; accepted — ==================================================================================================================================================== § ABSTRACT Transformers have achieved the state-of-the-art performance on solving the inverse problem of Snapshot Compressive Imaging (SCI) for video, whose ill-posedness is rooted in the mixed degradation of spatial masking and temporal aliasing. However, previous Transformers lack an insight into the degradation and thus have limited performance and efficiency. In this work, we tailor an efficient reconstruction architecture without temporal aggregation in early layers and Hierarchical Separable Video Transformer (HiSViT) as building block. HiSViT is built by multiple groups of Cross-Scale Separable Multi-head Self-Attention (CSS-MSA) and Gated Self-Modulated Feed-Forward Network (GSM-FFN) with dense connections, each of which is conducted within a separate channel portions at a different scale, for multi-scale interactions and long-range modeling. By separating spatial operations from temporal ones, CSS-MSA introduces an inductive bias of paying more attention within frames instead of between frames while saving computational overheads. GSM-FFN further enhances the locality via gated mechanism and factorized spatial-temporal convolutions. Extensive experiments demonstrate that our method outperforms previous methods by >0.5 dB with comparable or fewer parameters and complexity. The source codes and pretrained models are released at <https://github.com/pwangcs/HiSViT>. § INTRODUCTION High-speed cameras are crucial vision devices for scientific research, industrial manufacturing, and environmental monitoring. Unlike typical expensive high-speed cameras, Snapshot Compressive Imaging (SCI) <cit.> multiplexes a sequence of video frames, each of which is optically modulated with temporally-varying masks, into a single-shot observation of a low-cost monochromatic camera for high speed and low storage. Optical modulation and multiplexing lead to two corresponding degradations: spatial masking and temporal aliasing. Similar to compressive sensing problems <cit.>, the inverse problem of video SCI is to reconstruct multiple high-fidelity frames from the observed image. As demonstrated in <ref> (a), multiple frames are first initialized from the observed image and known masks and then they are input to an optimization algorithm or a deep model for effective restoration. In this context, video SCI reconstruction can be viewed as a challenging video restoration task, like denoising, deblurring, . Actually, they are vastly different in data distribution. As depicted in <ref> (b), input frames of video SCI reconstruction lose temporal correlations (, motion dynamics) completely due to the mixed degradation of spatial masking and temporal aliasing, differing from that input frames of a plain video restoration task are highly-related with clear frames even degraded. For video SCI reconstruction, informative clues concentrate on spatial dimensions as opposed to temporal dimension, referred to as information skewness. 
Video SCI reconstruction has been extensively studied under straight <cit.>, U-shaped <cit.>, recurrent <cit.>, unrolling <cit.>, and plug-and-play <cit.> architectures. From early convolutional models <cit.> to latest Transformer models <cit.>, the performance gains are due in large part to advanced vision engines and they lack an insight into the information skewness. Due to the limited perception field and static kernel of convolution, CNN-based models have inherent shortcomings in capturing long-range dependencies and learning generalizable priors. Recently, Transformer-based models <cit.> have achieved the state-of-the-art (SOTA) performance. As a core component of Transformer, Multi-head Self-Attention (MSA) mechanism is highly effective in capturing long-range dependencies by aggregating all tokens weighted by the similarity between them. However, vanilla Global MSA (G-MSA) <cit.> suffers from the quadratic computational complexity with respect to token numbers, thus being impractical for high-dimensional data, like video. To relieve computational loads, STFormer <cit.>, a variant of Factorized MSA (F-MSA) <cit.>, applies 2D Windowed MSA (W-MSA) <cit.> on spatial dimensions and 1D G-MSA on temporal dimension in a separate and parallel manner to surpass CNN-based models. By replacing spatial W-MSA of STFormer with 2D convolutions, EfficientSCI <cit.> further improves the performance with less computational loads. STFormer and EfficientSCI don't conduct MSA on video space directly, thus they are not real video Transformers. CTM-SCI <cit.> first applies 3D W-MSA <cit.> on video space in an unrolling architecture to get the latest result (36.52 dB) at the cost of extremely high computational loads (12.79 TMACs). By rethinking the information skewness and Transformers' gain, we point out the keys of video SCI reconstruction: i) spatial aggregation plays a more important role than temporal one, and ii) long-range spatial-temporal modeling is desired but usually at the expense of high computational complexity. In this work, we make several modifications on reconstruction architecture and Transformer block to fulfill the above requirements. Towards architecture, previous models generally apply stacked 3D convolutions to transform degraded frames into shallow features. In absence of motion dynamics, temporal interactions in early layers could exaggerate artifacts owing to error accumulation during propagation <cit.>, leading to poor representations (see <ref>). To this end, we appeal for using 2D operator as the frame-wise shallow feature extractor. Towards building block, we propose an efficient Hierarchical Separable Video Transformer (HiSViT) to tackle the mixed degradation of video SCI, powered by Cross-Scale Separable Multi-head Self-Attention (CSS-MSA) and Gated Self-Modulated Feed-Forward Network (GSM-FFN). CSS-MSA, a spatial-then-temporal attention, separates spatial operations from temporal ones within a single attention layer. Such separation design leads to: i) computational efficiency; ii) an inductive bias of paying more attention within frames instead of between frames. The former is similar to previous F-MSA <cit.> by breaking the direct interactions between non-aligned tokens, located at both different frames and different spatial locations. The later is customized to harmonize with the information skewness of video SCI. 
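The quadratic cost of G-MSA in the token count, and the saving bought by separating spatial from temporal aggregation, can be made concrete with a rough count of query-key score pairs. The figures below ignore channels, heads and windowing and are only meant to illustrate the order of magnitude; they are not the complexity table reported later.

# rough count of query-key score pairs for a T x H x W token grid (illustrative only)
T, H, W = 8, 256, 256
N = T * H * W
joint = N ** 2                                                 # vanilla global 3D attention
spatial_then_temporal = T * (H * W) ** 2 + (H * W) * T ** 2    # per-frame spatial + per-pixel temporal
print(f"{joint:.3e}  vs  {spatial_then_temporal:.3e}")         # ~2.7e+11 vs ~3.4e+10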
Besides, spatial receptive field is designed to be windowed yet increasing along heads for efficient multi-scale representation learning and temporal receptive field is global considering the limited frames to be processed as demonstrated in <ref> (d). GSM-FFN could strengthen the locality by introducing gated self-modulation and factorized Spatial-Temporal Convolution (STConv) to regular FFN. Each HiSViT block is built by multiple groups of CSS-MSA and GSM-FFN with dense connections, each of which is conducted on a separate channel portions at a different scale. Consequently, HiSViT has the following virtues: multi-scale interactions, long-range spatial-temporal modeling, and computational efficiency. The contributions of this work are summarized as follows: * We first offer an insight on the mixed degradation of video SCI and reveal the resulting information skewness between spatial and temporal dimensions. To this end, we make several reasonable modifications on reconstruction architecture and Transformer block. * We propose an efficient video Transformer, dubbed HiSViT, in which CSS-MSA captures long-range cross-scale spatial-temporal dependencies while tackling the information skewness and GSM-FFN enhances the locality. * Extensive experiments demonstrate that our model achieves SOTA performance with comparable or fewer complexity and parameters (see <ref>). § RELATED WORK Video SCI Reconstruction. In recent years, deep learning approaches have extensively exploited on straight <cit.>, U-shaped <cit.>, recurrent <cit.>, unrolling <cit.>, and plug-and-play <cit.> architectures with significant performance gains over traditional optimization algorithms <cit.>. CNN-based models are impeded by the limited perception field and static kernel of convolution. Recently, Transformer-based models <cit.> have achieved SOTA performance. STFormer <cit.> captures spatial and temporal dependencies in parallel and separately with the combination of factorized attention <cit.> and windowed attention <cit.>. By replacing spatial windowed attention of STFormer with 2D convolutions, EfficientSCI <cit.> further improve the performance with less computational loads. CTM-SCI <cit.> first applies 3D windowed attention <cit.> in video space to enjoy the joint spatial-temporal modelling but its performance gain is at the cost of extremely high complexity and parameters. By re-examining previous works, we observe that long-range spatial-temporal modeling is desired but the resulting high complexity is troublesome. Vision Transformers. Transformers <cit.> have exhibited extraordinary performance on natural language processing tasks <cit.> and computer vision tasks <cit.>. As a core of Transformer, vanilla attention suffers from the quadratic computational complexity towards token number and thus is impractical for large-scale dense prediction tasks. To this end, kinds of Transformer variants <cit.> are proposed to decrease the complexity, among which Swin Transformer <cit.> achieves a good trade-off between accuracy and efficiency by limiting attention calculations within local windows. Benefitting from long-range dependency and data dependency <cit.>, Transformers have become the de-facto standard of image restoration tasks <cit.>. Due to an additional temporal dimension, developing Transformer for video is more challenging. 
Existing video Transformers generally apply spatial (2D) attention under recurrent architecture <cit.>, joint spatial-temporal (3D) attention within local windows <cit.>, or factorized spatial-temporal (2D+1D) attention <cit.>. All of them are workable but lack appropriate inductive biases from the mixed degradation of video SCI. § RETHINKING VIDEO SCI RECONSTRUCTION §.§ Mathematical Model In video SCI paradigm, a grayscale video ∈ℝ^H × W × T is modulated by mask ∈ℝ^H × W × T and then temporally integrated into an observation ∈ℝ^H × W by (x,y) =∑_t = 1^T(x,y,t) ⊙(x,y,t)+(x,y) , where (x,y,t) indexes one position in 3D video space, ⊙ denotes the Hadamard (element-wise) product, and is the measurement noise. Note that color channel is omitted for clarity. For hardware implementation, mask is often generated from a Bernoulli distribution with equal probability, , ∈{0, 1}. The inverse problem of video SCI is to reconstruct a high-fidelity estimate of from the observed . For dimensional consistency, a highly-degraded video is initialized from known and as input by (x,y,t)=(x,y,t)⊙(x,y),    where  (x,y) =(x,y)⊘∑_t = 1^T(x,y,t), where ⊘ denotes the element-wise division. The above 2D-to-3D projection is driven by the pseudoinverse in optimization theory <cit.>. ∈ℝ^H × W is a single-frame coarse estimation of , whose moving region is blurred and masked but motionless region is closed to the ground truth. <ref> implies that lose the temporal correlations of completely (see the inputs in <ref>), whereas imposed by the temporal stamps of . A deep reconstruction model aims to learn a nonlinear map 𝒟 from to , namely =𝒟(). §.§ Degradation Analysis For the perspective of imaging in <ref> and <ref> (a), video SCI involves multiple degradations: spatial masking, temporal aliasing, and measurement noise. Note that in color video SCI case, demosaicing the observed image cannot recover the right color since optical masks collide with Bayer filter, thus color degradation must be considered. Among these degradations, the mixture of spatial masking and temporal aliasing is the root of ill-posedness. <ref> (b) visualizes the structural similarity map between clear frames and degraded frames for video SCI reconstruction and a plain video restoration task. Clearly, the input frames of a plain video restoration task are temporally aligned with clear frames and still contains rich motion dynamics even degraded. For video SCI reconstruction, the input frames in <ref> are the results of re-modulating an identical image with non-semantic different masks and thus lose temporal correlations (motion dynamics) completely. As a result, informative clues concentrate spatial dimensions rather than temporal dimension, referred to as information skewness. r0.5 < g r a p h i c s > Visualization of shallow features extracted by 3D CNN in EfficientSCI <cit.> and RSTB (without temporal aggregation) in our model. Clearly, our frame-wise extraction can better retrieve the temporal correlations with fewer parameters (0.28 v.s. 1.12 M) and MACs (148.85 v.s. 241.79 G). Unfortunately, previous works have always overlooked the information skewness and follow general vision architectures and blocks, , a recurrent or unrolling architecture with Swin Transformer block. Due to the information skewness, we observe that too early temporal aggregation is ineffective for temporal dealiasing as demonstrated in <ref>. 
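The measurement model and the initialization in the two equations above translate directly into a few lines of NumPy. The sketch below uses synthetic data and illustrative sizes, plus a small guard against all-zero mask sums, to produce the observation and the re-modulated input fed to the reconstruction model.

import numpy as np

H, W, T = 64, 64, 8
rng = np.random.default_rng(0)
x = rng.random((H, W, T))                      # ground-truth frames (synthetic)
mask = rng.integers(0, 2, size=(H, W, T))      # Bernoulli masks in {0, 1}
noise = 0.01 * rng.standard_normal((H, W))

y = (mask * x).sum(axis=-1) + noise            # single-shot measurement
mask_sum = np.clip(mask.sum(axis=-1), 1, None) # guard against division by zero
x_bar = y / mask_sum                           # coarse single-frame estimate
v = mask * x_bar[..., None]                    # initialized input: mask-remodulated estimate
print(y.shape, v.shape)                        # (64, 64) (64, 64, 8)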
Besides, previous video Transformers, powered by 3D windowed <cit.> or 2D+1D factorized <cit.> attention, lack an appropriate inductive bias to harmonize with the information skewness. To this end, we tailor an efficient reconstruction architecture and Transformer block for video SCI reconstruction. § METHODOLOGY §.§ Video SCI Reconstruction Architecture <ref> depicts the proposed reconstruction architecture, mainly composed of i) frame-wise feature extraction, ii) spatial-temporal feature refinement, and iii) feature-to-frame reconstruction. Considering that the feature refinement module generally needs extensive calculations, we propose a downsampling-refinement-upsampling pipeline to relieve the computational load. For the downsample layer, we use a 1×3×3 convolution with a stride of 1×2×2, followed by a non-linear activation, to decrease the spatial resolution while increasing the channels. For the upsample layer, we use the pixel-shuffle operator to recover the spatial resolution. §.§.§ Frame-wise Feature Extraction. Due to the loss of motion dynamics caused by the mixed degradation of video SCI, temporal interactions in early layers could exaggerate artifacts owing to error accumulation during propagation <cit.>. With this insight, we first treat the input frames as individual images and process them in parallel within the feature extraction module. Inspired by the effectiveness of using the Swin Transformer as a feature extractor <cit.>, we use one Residual Swin Transformer Block (RSTB) <cit.> to replace the stacked 3D convolutions widely used in previous SOTA models <cit.>. As visualized in <ref>, such a replacement is more effective and efficient for temporal dealiasing. A clear performance gain is observed in <ref> in the ablation study. Note that temporal correlations can only be roughly retrieved for simple dynamic scenes. Fine temporal dealiasing relies on the following module. §.§.§ Spatial-Temporal Feature Refinement. The feature refinement module is built from stacked building blocks to refine the downsampled shallow features from the frame-wise feature extraction module. In this work, the building block is an efficient Hierarchical Separable Video Transformer (HiSViT), followed by a channel attention <cit.>. HiSViT is introduced in detail in <ref>. §.§.§ Feature-to-Frame Reconstruction. The reconstruction module is responsible for generating high-fidelity video frames from the upsampled refined features and the shallow features. Either spatial-temporal aggregation or spatial-only aggregation is feasible for this module. Considering the sufficient spatial-temporal modeling of HiSViT, we use another RSTB for effective reconstruction. §.§ Hierarchical Separable Video Transformer To harmonize with the information skewness of video SCI, we propose a Hierarchical Separable Video Transformer (HiSViT) as the building block for efficient spatial-temporal modeling. As demonstrated in <ref> (a), HiSViT is a multi-branch structure with dense connections along the channel dimension, and each branch involves a residual Cross-Scale Separable Multi-head Self-Attention (CSS-MSA) and a Gated Self-Modulated Feed-Forward Network (GSM-FFN). CSS-MSA, a spatial-then-temporal attention, attends to all features (tokens) within local windows (across time), and the spatial attention is conducted between the normal query and the average-pooled key and value, where ρ is the spatial average-pooling size. GSM-FFN further strengthens the locality. 
As a result, HiSViT has a hierarchical receptive field from bottom (ρ=4) to top (ρ=1) to enable multi-scale interactions and long-range dependencies. §.§.§ Cross-Scale Separable Multi-head Self-Attention. As shown in <ref> (b), CSS-MSA is powered by: i) separating spatial operations from temporal ones; ii) performing cross-scale spatial attention. Inspired by separable convolutions, we decompose regular attention <cit.>, which requires intensive interactions in 3D space, into a spatial windowed attention followed by a temporal global attention within a single attention layer. Spatial attention is conducted between the normal query and the average-pooled key and value to capture different-frequency information (ρ=1,2,4), given that averaging is a low-pass filter <cit.>. Here, to simplify the presentation, we describe only a single head of CSS-MSA. At a certain branch with ρ, let X∈ℝ^T× H× W× d be the input video feature. X is first partitioned into several non-overlapping patches X_i∈ℝ^T×ρ h×ρ w× d, i=1,..., HW/ρ^2hw, according to the spatial window ρ h ×ρ w. Afterwards, the query Q_i, key K_i, and value V_i are computed from X_i by Q_i=X_iW^q, K_i=X_iW^k, V_i=X_iW^v, where Q_i, K_i, V_i∈ℝ^T×ρ h×ρ w× d and W^{q,k,v}∈ℝ^d× d represent learnable projection matrices. If ρ> 1, K_i, V_i are spatially average-pooled into K_i^↓, V_i^↓∈ℝ^T× h× w× d; otherwise there is no pooling operator, i.e., K_i=K_i^↓, V_i=V_i^↓. CSS-MSA aggregates spatial-temporal features using Q_i, K_i, K_i^↓, and V_i^↓ by V'_i=𝚜𝚘𝚏𝚝𝚖𝚊𝚡 (Q_iK_i^↓⊤/τ_1 )V_i^↓ , where Q_i∈ℝ^T ×ρ^2hw× d (reshaped from ℝ^T×ρ h×ρ w× d),    K_i^↓, V_i^↓ ∈ℝ^T ×hw× d (reshaped from ℝ^T × h × w × d), followed by V''_i=𝚜𝚘𝚏𝚝𝚖𝚊𝚡 (Q_iK_i^⊤/τ_2 )V'_i , where Q_i, K_i∈ℝ^ρ^2hw× T× d (reshaped from ℝ^T×ρ h×ρ w× d),  V'_i∈ ℝ^ρ^2hw× T × d (reshaped from ℝ^T ×ρ^2hw× d). Note that the above matrix multiplications are batch-wise and τ_1, τ_2 are two learnable scales. The output is computed by a linear projection Z_i=V''_iW, where V''_i∈ℝ^T ×ρ h ×ρ w × d (reshaped from ℝ^ρ^2hw× T× d) and W∈ℝ^d × d is learnable. {Z_i}_i=1^N∈ℝ^T ×ρ h ×ρ w × d (N=HW/ρ^2hw) are combined into the final output Z∈ℝ^T × H × W × d. Clearly, the input and output have the same size regardless of the pooling size ρ. We adopt the shifted rectangle-window strategy <cit.> for the spatial partition. §.§.§ Comparison with Mainstream MSAs. Essentially, CSS-MSA is a spatial-then-temporal attention, namely a spatial windowed attention followed by a temporal global attention within a single attention layer. Next, we compare it with mainstream attention mechanisms for video, including G-MSA <cit.>, W-MSA <cit.>, F-MSA <cit.>, and their variants. The computational complexity is summarized in <ref>. G-MSA and F-MSA suffer from quadratic computational complexity with respect to the spatial-temporal resolution T× H×W and the spatial resolution H×W, respectively. W-MSA has linear complexity at the cost of limiting interactions within t× h×w local windows. For long-range temporal dependencies, a variant is to relax the 3D window t× h×w into a 2D window h×w for video, referred to as Spatially-Windowed MSA (SW-MSA). A hybrid of F-MSA and W-MSA is to perform spatial windowed MSA and temporal global MSA in two separate attention layers, referred to as FW-MSA. Unlike FW-MSA, CSS-MSA attends to all spatial-temporal tokens with cross-scale interactions in a single attention layer and is equivalent to a joint spatial-temporal attention matrix in <ref> (b). Compared with regular attention in <ref> (a), the proposed CSS-MSA pays more attention to intraframe rather than interframe aggregation while keeping long-range spatial-temporal modeling ability. 
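For illustration, a minimal single-head sketch of this two-step computation is given below (this is not the released implementation: multi-head handling, the shifted rectangle-window partition, and the learnable temperatures are simplified, and all tensor names follow the equations above):

import torch
import torch.nn.functional as F

def css_msa_head(X, rho, Wq, Wk, Wv, Wo, tau1=1.0, tau2=1.0):
    # Single-head CSS-MSA on one spatial window X of shape (T, rho*h, rho*w, d):
    # step 1 is spatial windowed attention with average-pooled key/value,
    # step 2 is temporal global attention on the spatially aggregated values.
    T, H, W_, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                      # (T, rho*h, rho*w, d)
    if rho > 1:                                           # averaging acts as a low-pass filter
        pool = lambda Z: F.avg_pool2d(Z.permute(0, 3, 1, 2), rho).permute(0, 2, 3, 1)
        Kd, Vd = pool(K), pool(V)                         # (T, h, w, d)
    else:
        Kd, Vd = K, V
    Qf = Q.reshape(T, H * W_, d)                          # (T, rho^2*h*w, d)
    Kf, Vf = Kd.flatten(1, 2), Vd.flatten(1, 2)           # (T, h*w, d)
    Vp = torch.softmax(Qf @ Kf.transpose(-1, -2) / tau1, dim=-1) @ Vf   # spatial step
    Qt = Qf.transpose(0, 1)                               # (rho^2*h*w, T, d)
    Kt = K.reshape(T, H * W_, d).transpose(0, 1)
    Vt = Vp.transpose(0, 1)
    Vpp = torch.softmax(Qt @ Kt.transpose(-1, -2) / tau2, dim=-1) @ Vt  # temporal step
    return Vpp.transpose(0, 1).reshape(T, H, W_, d) @ Wo  # same shape as X

T, rho, h, w, d = 8, 2, 8, 8, 32
X = torch.randn(T, rho * h, rho * w, d)
Wq, Wk, Wv, Wo = (torch.randn(d, d) / d ** 0.5 for _ in range(4))
print(css_msa_head(X, rho, Wq, Wk, Wv, Wo).shape)         # torch.Size([8, 16, 16, 32])

Note that, as in the equations, the spatial step attends from ρ^2hw query tokens to only hw pooled key/value tokens per frame, which is where the complexity saving of the cross-scale design comes from.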
A quantitative comparison is given in <ref> of the ablation study. §.§.§ Gated Self-Modulated Feed-Forward Network. As another key component, a regular FFN processes the output of the MSA layer with a simple residual structure, built from two linear projections with a nonlinear activation between them. Here, we propose GSM-FFN by making two fundamental modifications to the FFN: i) Gated Self-Modulation (GSM) and ii) factorized Spatial-Temporal Convolution (STConv). As depicted in <ref> (c), given the input feature Z∈ℝ^T× H× W× C from the CSS-MSA layer, the output feature Z' is computed by Z_1, Z_2 =𝚂𝚙𝚕𝚒𝚝(𝙶𝙴𝙻𝚄(ZW_1)); Z' = (𝚂𝚒𝚐𝚖𝚘𝚒𝚍(Z_1) ⊙𝚂𝚃𝙲𝚘𝚗𝚟(Z_2))W_2, where W_1 ∈ℝ^ C×λ C increases the channel number by λ times, W_2 ∈ℝ^λ C/2× C regulates the channel number back to C, 𝚂𝚙𝚕𝚒𝚝 divides the channels into two halves, 𝙶𝙴𝙻𝚄 is a non-linear activation function, and 𝚂𝚒𝚐𝚖𝚘𝚒𝚍 represents the sigmoid function. Inspired by <cit.>, 𝚂𝚃𝙲𝚘𝚗𝚟, a hybrid of 1D convolution, 2D convolution, and LeakyReLU, performs convolutional and non-linear operators in the spatial and temporal dimensions separately, as shown on the right of <ref> (c). § EXPERIMENTS §.§.§ Model Setting. We use HiSViT in <ref> as the building block of the proposed reconstruction architecture in <ref>. In the frame-wise feature extraction and feature-to-frame reconstruction modules, the channel number of RSTB <cit.> is set to 128. In the feature refinement module, the channel numbers of the three branches are set to 64, 64, 128 for ρ=1,2,4, and the channel expansion factor of GSM-FFN is set to λ=2. To explore the scalability of HiSViT, we define two model settings: HiSViT9 and HiSViT13, which involve 9 and 13 building blocks, respectively. §.§.§ Experiment Setting. To validate the effectiveness of the proposed method, we conduct experiments on six grayscale/color benchmark videos with the resolution of 8×256×256/8×512×512×3 pixels and on real captured grayscale videos <cit.> with the resolution of 10×512×512 pixels. Following previous works, our models are trained on the DAVIS2017 dataset <cit.> with the same data augmentation as in <cit.>. We use the MSE loss with the Adam optimizer (β_1= 0.9, β_2= 0.999) on A100 GPUs. All models are pretrained at the resolution of 8×128×128(×3) with a 1×10^-4 learning rate over 100 epochs and then fine-tuned at the resolution of 8×256×256(×3) with a 1×10^-5 learning rate over 20 epochs. Peak Signal to Noise Ratio (PSNR) and Structural SIMilarity (SSIM) are used to measure the reconstruction fidelity. Multiply-ACcumulate operations (MACs) are used to measure the computational complexity. More model details and additional results are in the supplementary material. In all experiments, the best and second-best results of the evaluated methods are highlighted and underlined. §.§ Results on Grayscale Benchmark Videos We compare HiSViT9/HiSViT13 with two representative optimization algorithms (GAP-TV <cit.>, DeSCI <cit.>), two plug-and-play methods (PnP-FFDNet <cit.>, PnP-FastDVDnet <cit.>), seven CNN-based methods (E2E-CNN <cit.>, BIRNAT <cit.>, GAP-net-Unet-S12 <cit.>, MetaSCI <cit.>, RevSCI <cit.>, DUN-3DUnet <cit.>, ELP-Unfolding <cit.>), and three Transformer-based methods (STFormer <cit.>, EfficientSCI <cit.>, CTM-SCI <cit.>). <ref> reports the fidelity scores of all methods, and the parameters and MACs of the deep learning methods, on six grayscale benchmark videos. In terms of reconstruction fidelity, the proposed HiSViT9 and HiSViT13 outperform previous optimization, plug-and-play, and CNN-based methods by a large margin (>1.5 dB). 
Compared with EfficientSCI, our HiSViT9 outperforms it by 0.52 dB with comparable parameters and MACs. Compared with the previous best CTM-SCI, our HiSViT9/HiSViT13 outperform it by 0.48/0.77 dB with only 12.00/15.22 % of its MACs and 10.98/14.86 % of its parameters. Clearly, our method achieves not only SOTA results but also a good trade-off between performance and efficiency. <ref> shows the visual comparison with competitive methods. Our models can retrieve more details and textures. §.§ Results on Color Benchmark Videos As mentioned previously, color video SCI reconstruction must be coupled with demosaicing since the spatial masking collides with the Bayer filter, making it a more challenging task than the grayscale one. Unfortunately, less effort has been spent on it. Without any specialized designs, we simply change the output channel number from 1 to 3 for the hybrid task of color video SCI reconstruction and demosaicing. <ref> reports the quantitative results of the available methods (GAP-TV <cit.>, DeSCI <cit.>, PnP-FFDNet <cit.>, PnP-FastDVDnet <cit.>, BIRNAT <cit.>, STFormer <cit.>, EfficientSCI <cit.>). Our HiSViT9 outperforms the previous best method, EfficientSCI, by 0.52 dB with comparable parameters and MACs. <ref> shows the visual comparison with competitive methods. Our model is better than previous methods at restoring correct colors and fine structures. §.§ Results on Real Captured Videos We further evaluate our method on two public real observations (Domino and Water Balloon <cit.>). For a fair comparison with EfficientSCI <cit.>, we use HiSViT9 for real data testing. <ref> shows the visual results reconstructed by GAP-TV <cit.>, DeSCI <cit.>, PnP-FFDNet <cit.>, EfficientSCI <cit.>, and our HiSViT9. Clearly, GAP-TV suffers from strong artifacts. DeSCI and PnP-FFDNet lead to over-smoothed results. The Transformer-based EfficientSCI and HiSViT9 show an excellent generalization ability against the noise of physical systems and significantly outperform the non-Transformer methods. Compared to EfficientSCI, our HiSViT9 better reconstructs image details in the captured scene and avoids artifacts that do not belong to the captured scene. §.§ Ablation Study To offer an insight into the proposed method, we examine the effect of the reconstruction architecture and of the CSS-MSA and GSM-FFN of HiSViT. In addition, we also compare CSS-MSA with competitive MSAs for video SCI reconstruction. The ablation experiments are conducted on the grayscale videos with HiSViT9. §.§.§ Improvements in Architecture. Previous competitive reconstruction models, including STFormer <cit.>, EfficientSCI <cit.>, and CTM-SCI <cit.>, always use 3D CNNs for shallow feature extraction and feature-to-frame reconstruction and downsample the shallow features for the follow-up spatial-temporal aggregation. They overlook that the input frames lose temporal correlations completely and that too early temporal interactions could therefore exaggerate artifacts. To this end, we propose two modifications as shown in <ref>: i) using a 2D RSTB in place of the 3D CNN to disable temporal interactions, and ii) building a skip connection between the shallow features and the upsampled refined features. The ablation study is reported in <ref>, where each architecture entry specifies the modules used for shallow feature extraction and feature-to-frame reconstruction, respectively, with or without the skip connection. By disabling temporal interactions, (b) and (c) lead to a clear performance gain over the widely-used (a), agreeing with the visualization in <ref>. 
Besides serving as the shallow feature extractor, the RSTB in (d) also shows better performance in reconstructing high-fidelity frames than the 3D CNN in (c). With the skip connection in (e), the fusion of shallow features and upsampled refined features avoids the information loss caused by early spatial downsampling. §.§.§ Improvements in MSA. The proposed CSS-MSA is mainly powered by the separable and cross-scale designs. We conduct the ablation study on them in <ref>. In the single-scale case, the average-pooling operator is discarded. In the non-separable case, CSS-MSA is equivalent to a cross-scale SW-MSA that performs 2D windowed MSA between the normal query and the average-pooled key and value. As mentioned previously, separability introduces an inductive bias of paying more attention to the spatial dimensions, harmonizing with the fact that informative clues concentrate in the spatial dimensions instead of the temporal dimension, namely the information skewness of video SCI. Clearly, non-separability damages the performance while sacrificing the complexity reduction of factorized attentions <cit.>. Cross-scale interactions lead to a performance gain and a complexity reduction. We further compare CSS-MSA with competitive MSAs for video SCI reconstruction in <ref>. As previously analyzed in <ref>, G-MSA <cit.> and F-MSA <cit.> are impractical due to their quadratic computational complexity. FW-MSA and SW-MSA are practical, and they are also the sources of STFormer <cit.> and CTM-SCI <cit.>, respectively. FW-MSA limits the spatial attention in TimeSformer <cit.> to local windows. SW-MSA relaxes the 3D window of W-MSA <cit.> into a spatial 2D window for a global temporal receptive field. Note that neither FW-MSA nor SW-MSA downsamples the key and value for cross-scale interactions. Clearly, CSS-MSA outperforms them by a large margin for video SCI reconstruction. §.§.§ Improvements in FFN. The proposed GSM-FFN is powered by Gated Self-Modulation (GSM) and factorized Spatial-Temporal Convolution (STConv), which performs spatial aggregation and temporal aggregation in parallel and separately to enhance the locality. <ref> reports the ablation study on GSM-FFN. Without GSM, the channel expansion factor is set to 1 to relieve the computational load and parameters of STConv. Clearly, GSM-FFN outperforms the regular FFN by a large margin. After discarding GSM or replacing STConv with a 3D convolution, the resulting performance drops validate the superiority of GSM-FFN. § CONCLUSION By analyzing the mixed degradation of spatial masking and temporal aliasing, we are the first to reveal the information skewness of video SCI, namely that informative clues concentrate in the spatial dimensions. Previous works overlook it and thus have limited performance. To this end, we tailor an efficient reconstruction architecture and Transformer block, dubbed HiSViT, to harmonize with the information skewness. HiSViT captures long-range multi-scale spatial-temporal dependencies in a computationally efficient manner. Extensive experiments on grayscale, color, and real data demonstrate that our method achieves SOTA performance. 
§ ACKNOWLEDGEMENTS This work was supported by the National Natural Science Foundation of China (grant number 62271414), Zhejiang Provincial Distinguished Young Scientist Foundation (grant number LR23F010001), Zhejiang “Pioneer” and “Leading Goose” R&D Program (grant number 2024SDXHDX0006, 2024C03182), the Key Project of Westlake Institute for Optoelectronics (grant number 2023GD007), the 2023 International Sci-tech Cooperation Projects under the purview of the “Innovation Yongjiang 2035” Key R&D Program (grant number 2024Z126), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the Fundamental Research Funds for the Central Universities. splncs04
http://arxiv.org/abs/2407.13548v1
20240718142307
A tensorial approach to 'altermagnetism'
[ "Paolo G. Radaelli" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Clarendon Laboratory, Department of Physics, University of Oxford, Oxford, OX1 3PU, United Kingdom [Corresponding author: ]p.g.radaelli@physics.ox.ac.uk § ABSTRACT I present a tensorial approach to the description of k⃗/-k⃗-symmetric, non-relativistic splitting of electronic bands in magnetic materials, which was recently given the name of `altermagnetism'. I demonstrate that tensors provide a general framework to discuss magnetic symmetry using both spin groups and magnetic point groups, which have often been contrasted in recent literature. I also provide a natural classification of altermagnets in terms of the lowest-order tensorial forms that are permitted in each of the 69 altermagnetic point groups. This approach clarifies the connection between altermagnetism and well-known bulk properties, establishing that the vast majority of altermagnetic materials must also be piezomagnetic and MOKE-active, and provides a rational criterion to search for potential altermagnets among known materials and to test them when the magnetic structure is unknown or ambiguous. A tensorial approach to `altermagnetism' Paolo G. Radaelli July 22, 2024 ======================================== § INTRODUCTION In the past two decades, there has been a resurgence of interest in compounds having electronic bands with lifted spin degeneracy, partly motivated by the requirement of new materials for spintronics. In addition to spin polarisation in ferromagnets, it is well known that spin degeneracy can be lifted even in non-magnetic materials by the famous Rashba-Dresselhaus (R-D) effect,<cit.> which requires spin-orbit coupling (SOC) and is therefore largest in the presence of heavy elements. The R-D effect also requires the absence of inversion symmetry, <cit.> due either to the bulk crystal structure being acentric or to symmetry breaking at interfaces. In their 2020 pioneering paper, Yuan et al. <cit.> came to the surprising realisation that spin degeneracy can also be lifted in some fully compensated antiferromagnets (AFM) due to the interaction between electron spins and the `effective Zeeman field' (largely of magnetic exchange origin) produced by ordered magnetic moments. This effect is clearly distinct from the R-D effect (most notably, it is k⃗/-k⃗ symmetric) and, crucially, does not require SOC, opening the possibility to observe large spin splitting even in light-element compounds. In fact, using Density Functional Theory (DFT), Yuan et al. demonstrated spin splitting at `atomic-like energy scales' in the light-element insulator MnF_2,<cit.> which is a fully compensated collinear AFM. Although insulators provide excellent proofs of principle, applications of AFM-induced spin splitting for spintronics would be greatly facilitated by the discovery of metallic systems displaying this effect. Using DFT calculations, Šmejkal et al. demonstrated large spin splitting in the absence of SOC in several metallic/semiconducting candidates, including RuO_2, CrSb and MnTe,<cit.> and coined the term `altermagnetism' to mark the distinction between these materials, ordinary spin-split ferromagnets and non-spin-split AFMs. This terminology has been widely used in subsequent literature, which includes both theoretical<cit.> and experimental<cit.> contributions. Although different authors employ somewhat different definitions of altermagnetism, [ Cheong et. al. define `Type-III altermagnets' as systems with symmetries including time reversal is a distinct operator. 
For reasons explained at length in this paper, this can only give rise to k⃗/-k⃗ antisymmetric textures of the R-D type, though magnetism can play a very important role in them. Here, I am not considering Type-III altermagnetism and, more generally, magnetism-induced R-D-type textures, though I am including a brief discussion of these effects in sec. <ref>.] For the purpose of this paper, `altermagnetic textures' are defined as being k⃗/-k⃗ symmetric and time-reversal odd (switching sign between time-reversed magnetic domains). Šmejkal's emphasis was very much on collinear metallic antiferromagnets, for which spin-splitting effects should be stronger, DFT is much easier to perform, and which may have advantages for certain spintronic applications. However, more recently, Cheong and Huang showed that altermagnetism can also be expected in many non-collinear magnets, which may present distinct practical advantages and should not be excluded a priori.<cit.> From the very beginning, symmetry analysis has been recognised as an essential tool of this research field, since the guiding principle to select potentially altermagnetic materials must lie in their underlying symmetry properties. Yuan et al. <cit.> proposed an approach based on magnetic space groups (MSG), while Šmejkal et al. proposed the so-called spin groups (SGs) <cit.>, which allow for extra symmetries to be considered when spins and the lattice are approximately decoupled (see below). Cheong proposes a classification based on magnetic point groups (MPGs) <cit.>, while Liu et al. <cit.> employ little co-groups at specific points of the Brillouin zone (BZ) and propose a classification that also includes higher-order effects (quartic altermagnetism). [ I specifically highlight the approaches adopted by Cheong <cit.> and Liu <cit.> because they are the closest to the one I will describe herein, though neither makes an explicit connection with Cartesian/spherical tensors. Moreover, the MPG classifications proposed by Cheong and Liu largely overlap with the one proposed in the present paper. ] The interest in altermagnetism in non-collinear magnetic structures<cit.> and in `weak-altermagnetic' effects <cit.> requires an extension of the SG approach as employed in ref. Smejkal2022. [Ref. Smejkal2022 employs `binary' SGs, i.e., SGs with 2-element groups acting on spins (see below). Binary SGs cannot be used for non-collinear structures. Certain non-collinear structures can be described either with more complex SGs <cit.> or with so-called multi-colour groups,<cit.> but, to my knowledge, a complete theory for their use has not been developed thus far. See Appendix <ref> for more details.] MPGs are a flexible tool to deal with these cases, especially since there is no reason to expect that altermagnetic splitting should always be weak for non-collinear structures. Intuitively, a classification based on MPGs should be entirely adequate, since spin splitting ultimately results in anomalous macroscopic properties (e.g., the anomalous spin Hall effect) that are subject to MPG symmetry constraints via Neumann's principle.[The Neumann-Minnigerode-Curie Principle (NMC Principle) enables one to derive the selection rules for the physical properties from the symmetry of the crystal in question. For macroscopic properties, the symmetry in question is precisely the MPG of the crystal. 
One should also remark that, unlike the widely used MPGs, magnetic space groups have been almost entirely superseded among magnetic structure specialists by irreducible representation analysis `à la Bertaut',<cit.> with additional symmetries in spin spaces being dealt with using Izyumov's exchange multiplet theory <cit.>] However, one should not lose sight of the advantages of SGs when it comes to pseudo-symmetric structures, i.e., when the crystal symmetry is only `slightly' broken upon magnetic ordering.[Perhaps the best example of this is ferromagnetic ordering in cubic magnetic metals such as Fe or Ni. Strictly speaking, Fe becomes rhombohedral below the Curie temperature, but the deviation from cubic symmetry due to magneto-striction is extremely small (≈ 10^-6).] It seems fair to state that, in previous work, the link between MPGs, SGs and the complexity of spin textures in reciprocal space has not been entirely clarified. Here, I propose an approach based on the complete expansion of altermagnetic spin textures in momentum space in terms of Cartesian and spherical tensors. This approach can be equally applied to vectorial textures (described by MPGs) and scalar textures (described by binary SGs, referred to simply as SGs in the remainder), though my emphasis will be on the former. I demonstrate that, to any given order, the expansion of vectorial magnetic textures is described by a single Cartesian tensor of odd rank, while R-D (k⃗/-k⃗-antisymmetric) vectorial magnetic textures are described by even-ranked tensors. I also propose a natural classification based on the lowest-order tensor forms allowed in a particular set of MPGs (class), and show that the vast majority of altermagnetic point groups (66 out of 69) display quadratic (rank-3) altermagnetism, with the remaining three groups allowing quartic (rank-5) altermagnetism. For the case of collinear AFM structures, I demonstrate that, in most cases, SG scalar textures correspond to the component of the MPG vector magnetic textures along the direction of the staggered magnetisation, while the MPG approach also produces a pattern of `weak altermagnetic' textures in other directions. One notable exception is represented by cubic groups, since the pseudo-cubic symmetry introduces constraints between texture components that would otherwise be independent. For these cubic groups, I provide a conversion table between SG and MPG textures for various directions of the magnetic moments, so that one can remain entirely within the MPG framework, which is generally more familiar to the magnetism community. One important practical implication of the tensor approach is the connection with macroscopic properties: materials displaying quadratic altermagnetism (66 out of 69 MPGs) must also allow the piezomagnetic effect and the magneto-optical Kerr effect (MOKE) — two easily-testable phenomena that have been known for several decades <cit.>, and for which extensive materials databases are available (for example, ref. Gallego2016). Testing for the presence of these effects might also facilitate the screening of candidate altermagnetic materials before more complex experiments are performed to measure AFM-induced spin splitting of electronic bands directly. This paper is organised as follows: in secs. <ref>, <ref> and <ref>, I present a general tensorial treatment of spin textures in momentum space, which can be applied to both altermagnetic and R-D-type textures. In sec. 
<ref>, I also outline how a parallel treatment can be performed for vector and scalar textures, described by MPGs and binary SGs respectively, and show that dealing with the latter is greatly facilitated if one employs time reversal (via Shubnikov groups) instead of 2-fold rotations. In secs. <ref>, <ref> and <ref>, I establish which altermagnetic tensor forms are allowed by symmetry and propose a classification of MPGs based on the lowest-rank altermagnetic tensors. Additional symmetry constraints at special points in reciprocal space are discussed in sec. <ref>. Up to this point, no distinction is made between strong and weak effects, in keeping with the classical Neumann approach. In secs. <ref> and <ref>, I discuss the special case of collinear structures and explain how the dominant components of the spin textures, which are expected to be parallel to the Néel vector, can be extracted from the tensors. In sec. <ref>, I explain the necessity of employing SGs (which I do via tensorial analysis) for systems that are crystallographically cubic and in which symmetry is only broken upon magnetic ordering, whilst also discussing the application of spin groups to cases in which the magnetic symmetry is very low. Numerous examples in different symmetries are presented in sec. <ref>, including graphical depictions of several types of spin textures. The paper is concluded by a summary and general discussion (sec. <ref>). § THE TENSORIAL DESCRIPTION The starting point for developing a tensorial description of altermagnetism and, more generally, of spin splitting of electronic bands, is the expansion in Cartesian tensors of the reciprocal-space spin textures, i.e., the vector field of spin states in momentum space. This semiclassical vector field, which I will denote as B⃗^eff (k⃗,m), determines both the direction of the spin quantisation axis and the strength of the spin splitting, and depends both on the wavevector k⃗ and on the band index m. [More rigorously, within the context of DFT, for a given wavevector k⃗ and band index n the spin texture is defined by the vector field s⃗_nk⃗=⟨Ψ_nk⃗|σ|Ψ_nk⃗⟩, where σ is a vector of Pauli matrices and the integral implied by the ⟨| and |⟩ is over the real-space unit cell.] B_i^eff (k⃗, m)=∑_l=n^n+1 T^(l)_i,αβγ… k_α k_β k_γ… The Cartesian tensors T^(l)_i,αβγ depend both on k=|k⃗| and on the band index m. In eq. <ref>, repeated indices are implicitly summed, and the tensor is fully symmetric over the Greek indices. The expression in eq. <ref> is directly derived from the expansion of the spin texture into spherical tensors, and is therefore complete on each spherical shell in momentum space (or sections thereof near the BZ boundary). In fact, one can show (see Appendix <ref>) that the sum of two Cartesian tensors of ranks n and n+1 is identical to the expansion of the vector field B_i^eff onto a spherical basis up to Γ^1⊗Γ^n, where Γ^n is the irreducible representation (irrep) of the group SO(3) with L=n. Note that higher-rank Cartesian tensors need not necessarily be small — in fact, reproducing the spin texture, especially near the BZ boundary, may require a high-order expansion. Nevertheless, as I will show in the remainder, a very natural classification can be obtained by considering only low-rank Cartesian tensors. In the presence of rotational symmetry, the tensor forms in eq. 
<ref> are constrained by the requirement that the spin textures have at least the symmetry of the MPG of the crystal, which implies that the tensors themselves must be totally symmetric under all the MPG operations — in other words, the tensors must transform like the totally symmetric irrep of the MPG (see Appendix <ref>). Hence, for a given tensor rank, tensors at all values of k have the same general form (imposed by symmetry) and their k dependence is described in terms of a reduced number of scalar functions. Additional constraints will occur at the Γ point (zone centre), at band crossing points and at special points at the boundary of the BZ with non-trivial little-group symmetry. When considering inversion (parity) and time reversal symmetries in addition to proper rotations, in the expansion in eq. <ref> one can immediately distinguish between even-rank tensors, which are time-reversal even and parity odd, and odd-rank tensors, which are time-reversal odd and parity even. Once again, it is important to emphasise that only two Cartesian tensors are required for a truncated spherical tensor expansion, and that all effects allowed by lower-rank tensors are automatically included in higher-rank tensors of the same parity. For example, the rank-2 tensor is at linear order in k⃗ and corresponds to the usual Rashba-Dresselhaus (R-D) effect, while the rank-4 tensor includes both the linear and the cubic R-D tensors, etc. In keeping with recent literature, the rank-3 tensor can be called the `quadratic altermagnetic tensor' and includes the ordinary ferromagnetic spin splitting (rank 1), while the next-highest order that is k⃗/-k⃗ symmetric is a rank-5 tensor, which includes both quadratic and quartic altermagnetism as well as ferromagnetic spin splitting. A completely parallel treatment to the one I have just outlined can be performed for scalar, parity-even and time-reversal-odd textures in real/reciprocal space, which are used to describe collinear magnetic structures within the SG framework. This becomes completely transparent if one casts binary SGs in the slightly different language of time reversal rather than two-fold rotations (see Appendix <ref> for a complete discussion). The corresponding tensors are totally symmetric and their rank is lowered by one with respect to the corresponding vectorial textures. § CONDITIONS FOR ALTERMAGNETISM AND THE RASHBA-DRESSELHAUS EFFECT Altermagnetism, as defined above and at all odd ranks, requires the following conditions to be satisfied: * Time reversal symmetry must be broken — in other words, the effect is only allowed in ferromagnets (FM) and some antiferromagnets (AFM). * For a given point k⃗ in the BZ, the spin splitting must reverse in the time-reversed FM or AFM domain. * Spin splitting must be symmetric between k⃗ and -k⃗. These conditions were thoroughly discussed in ref. Yuan2020, and exactly mirror those for the R-D effect at any even tensor rank: * Inversion symmetry must be broken. * If the material is magnetic, for a given point k⃗ in the BZ, the spin splitting must be the same in the time-reversed FM or AFM domains. * Spin splitting must be anti-symmetric between k⃗ and -k⃗. Note that spin splitting is not possible at any order in the presence of PT symmetry (i.e., if the product of inversion and time reversal, θ I, is a symmetry operator) — a well-known result that can be obtained from simple symmetry considerations. 
<cit.> Nevertheless, altermagnetism is not incompatible with the magneto-electric effect, though the latter is often associated with PT symmetry conservation. [An example of this is MPG 3m', which is admissible (i.e., compatible with ferromagnetism), polar, magneto-electric and altermagnetic. ] § JAHN SYMBOLS AND THE CONNECTION WITH MACROSCOPIC PROPERTIES Further progress can be made by defining the so-called Jahn symbol for these tensors,<cit.> which for a tensor of rank n in eq. <ref> is generically: ea^nV[V^n-1] where e indicates that all these are pseudo-tensors (with opposite parity to that of ordinary tensors, which are parity-even for even ranks and parity-odd for odd ranks), a defines the time-reversal properties (a^n is time-reversal-even when n is even, odd when n is odd) and the square brackets indicate symmetrisation over the enclosed indices, since these are contracted with the indices of k⃗. The advantage of this notation is that one can sometimes connect these tensors with tensors defining apparently unrelated macroscopic properties that have the same Jahn symbol. One can then mine the often extensive materials databases for these known effects and extract candidate materials where the `novel' effect is allowed by symmetry. For example, the Jahn symbol for the linear R-D effect is eV^2, which is the same as that of the magnetotoroidic tensor, while the Jahn symbol for the cubic R-D effect is eV[V^3], which does not have an obvious macroscopic counterpart. Most relevant for this paper, the quadratic altermagnetic tensor eaV[V^2] has the same Jahn symbol as the tensors describing the piezomagnetic effect and the MOKE effect. Piezomagnetism has been known since the 1950s, <cit.> and a symmetry classification can be found in the famous book by Robert Birss <cit.>, first published in 1964, which also lists several materials now discussed in the context of altermagnetism. Interest in MOKE activity in antiferromagnets and weak ferromagnets is almost as old, dating back to the 1960's, <cit.>, and has recently experienced a resurgence. <cit.> For scalar altermagnetic textures, the Jahn symbols corresponding to those in eq. <ref> are a[V^n], where n is even. So, for example, the tensor for scalar textures corresponding to quadratic altermagnetism has Jahn symbol a[V^2], one rank lower than the corresponding vectorial texture eaV[V^2] but with the same parity/time-reversal properties. § TENSOR FORMS FOR QUADRATIC ALTERMAGNETISM The allowed tensor forms are restricted by the MPG symmetry of the crystal, so that the tensor itself is completely invariant under any of the MPG operations. The number of free parameters in each tensor form corresponds to the number of times the totally symmetric irrep is contained in the tensor representation. In general, a particular tensor form is shared by several MPGs, which can thus be grouped into `classes'. For each MPG, the determination of the number of free parameters via character decomposition and the construction of the appropriate tensor form by projection is greatly facilitated by employing the MTENSOR tool provided by the Bilbao Crystallographic Server. <cit.> Tensors up to rank 6 were calculated using MTENSOR. For tensors above rank 6, polynomial forms were obtained by the standard projection (symmetrisation) method (see for example sec. <ref>). 
Table <ref> lists the 17 unique symmetry-adapted quadratic altermagnetic tensor forms, together with a set of the MPGs (class) that share that tensor form, the number of free parameters and a simplified spherical tensor decomposition (further discussed in section <ref>).[ I employed the conventions adopted by the program MTENSOR of the Bilbao Crystallographic Server (which are somewhat different from those in ref. Opechowski1965), except for 6̅'m'2, which has been converted to 6̅'2m' to be included in Class XIV. Note that the textures of Classes X and XI differ by a 45^∘ rotation, and are therefore equivalent at the MPG level, only distinguished by the orientation of the texture with respect to the crystal axes — see further discussion in section <ref>. In the interest of practical use and to follow conventions, I kept these two classes separate. Straightforward axes transformations may be required for particular MPGs] I employed the usual convention, which applies for example to the piezoelectric tensor, in which a 3 × 6 matrix is contracted with the array [k_x^2, k_y^2, k_z^2, k_y k_z, k_x k_z, k_x k_y] to yield the three-component B⃗^eff, which depends on one or more scalar functions Λ_ij (k) that are constant on each spherical shell in momentum space (for a note on axes conventions, see  [I adopted the standard conventions for x, y, and z, which are related to the direction of the symmetry directions. For example, for point group 3m1, z is parallel to the 3-fold axis, x is perpendicular to the mirror plane and y completes the set. In the setting 31m of the same point group, x and y are interchanged. Examples on how to deal with conventions issues are provided in section <ref>.]). Examples of this procedure and on how to impose constraints at special points of the BZ are provided in section <ref>. Table <ref> also includes the single class of altermagnetic MPGs that do not allow quadratic altermagnetism (Class XVIII), together with the expression for their spin texture, described by a rank-5 spherical magnetic hexadecapole (see discussion in section <ref>). Among the 17 quadratic altermagnetic classes, eight classes (marked in bold and with a () in Table <ref>) allow a magnetic dipolar component, i.e., they allow a net magnetic moment and uncompensated spin splitting, though the latter need not arise primarily from and be parallel to the net moment. The 31 magnetic groups in these classes are defined in ref. Cheong2024 as `Type-I altermagnetic', and coincide with the admissible point groups, i.e., the ones allowing a net ferromagnetic moment. Out of the remaining ten classes, [These classes are named `Type-II altermagnetic' in ref. Cheong2024.] six classes (VIII, XII, XIV, XV, XVII and XVIII, marked in Table <ref> in italic and with an asterisk ^*) do not allow the spin quantisation axis to be along an allowed collinear antiferromagnetic direction, while the other four classes (V, IX, X and XI, marked in Table <ref> with a dagger †), do allow it. This aspect will be discussed further in section <ref>. § SPHERICAL TENSOR DECOMPOSITION In addition to the Cartesian tensor form, it is often useful to decompose tensors onto a symmetry-adapted spherical basis, which makes the symmetry constraints more transparent. Moreover, the spherical (multipole) decomposition creates a bridge between MPG-based approaches and theories based on multipoles, such as the Cluster Multipole Theory <cit.> and the Landau approach to ferro-multipolar ordering proposed by P. 
McClarty.<cit.>^, [The connection with the latter is most obvious, because ferro-ordering of a given multipolar form is allowed if and only if this form is totally symmetric by the MPG operations. However, I stress that here the symmetry-adapted spherical tensor forms were obtained by direct projection onto the totally symmetric irrep of each MPG, rather than by analysing the effect of each symmetry operators as it is done in ref. Suzuki2017.] As an example, I recall that the rank-2 R-D tensor can be decomposed into a pseudoscalar (L=0), an ordinary polar vector (L=1) and a pseudo-quadrupolar traceless tensor (L=2), all these being time-reversal even. Consequently, the linear R-D effect is allowed in chiral point groups (those that allow a pseudoscalar), polar point groups (which allow an ordinary vector) and a few other non-centrosymmetric groups that only allow the pseudo-quadrupolar traceless tensor (e.g., -4m2). When either of these is allowed, these Cartesian tensors can be written as a linear combination of symmetry-adapted spherical basis tensors. In the case of quadratic altermagnetism, one can perform an entirely analogous decomposition: let Γ^L be the SO(3) irrep with dimensionality 2L+1. Then [V^2] = Γ^0+Γ^2 V[V^2] = Γ^1 ⊗(Γ^0+Γ^2)=2 Γ^1 + Γ^2 + Γ^3 Taking into account parity and time reversal, one concludes that the altermagnetic tensor ea V[V^2] decomposes into two magnetic dipoles, a magnetic quadrupole and a magnetic octupole. Since the dipole irrep occurs twice, there is some arbitrariness in the decompsition of the dipolar field, but the natural choice is for one of the two components to be parallel to the dipole vector (D^I, which is also allowed at the lowest rank-1 order and represent the ordinary ferromagnetic uncompensated spin splitting), while the other is parallel to the wavevector k⃗ (D^II). The quadrupole terms yields a B⃗^eff that is perpendicular to k⃗, while octupolar terms always connect different directions of B⃗^eff . With this choice and in the absence of any symmetry we can define the four Cartesian tensors as: D^I = δ_ijδ_αβ v_j = ( [ v_x v_x v_x 0 0 0; v_y v_y v_y 0 0 0; v_z v_z v_z 0 0 0; ]) D^II = 1/2δ_i αδ_β j u_j = ( [ u_x 0 0 0 u_z u_y; 0 u_y 0 u_z 0 u_x; 0 0 u_z u_y u_x 0; ]) where v⃗ and u⃗ are the two dipolar components. The quadrupolar tensor can be constructed from a traceless symmetric matrix S_jk as Q = 1/2( ϵ_iα kδ_jβ + ϵ_iβ kδ_jα) S_jk = ( [ 0 S_23 -S_23 -S_11-2 S_22 -S_12 S_13; -S_13 0 S_13 S_12 2 S_11+S_22 -S_23; S_12 -S_12 0 -S_13 S_23 S_22-S_11; ]) while the octupolar tensor O, defined as a linear combination of L=3 tesseral harmonics, is: O=( [ -O_122-O_133 O_122 O_133 O_123 2 O_311 2 O_211; O_211 -O_211-O_223 O_223 2 O_322 O_123 2 O_122; O_311 O_322 -O_311-O_322 2 O_223 2 O_133 O_123; ]) In Appendix <ref>, Table <ref>, a symmetry-adapted spherical basis is presented, comprising nine unique forms (two dipoles, three quadrupoles and four octupoles) plus forms obtained by axis permutation. In each of the 17 quadratic altermagnetic classes, the unique tensor form can be expressed as a linear combination of one or more symmetry-adapted spherical basis tensors, as shown in Table <ref>. § QUARTIC ALTERMAGNETISM Having classified all the MPGs that allow rank-3 (quadratic) altermagnetism, one may wonder whether any higher-order altermagnetic point group has been left out — in other words, are there symmetries where piezomagnetism and MOKE activity are forbidden but higher-order altermagnetism is allowed? 
Having excluded all paramagnetic point groups (where altermagnetism is not allowed at any order) and groups that preserve PT symmetry (where spin splitting is entirely forbidden), only three MPGs are left: 432, -43m, and m-3m, which have been included in Table <ref> as Class XVIII. All these MPGs allow quartic (but not quadratic) altermagnetism and share the same one-parameter rank-5 tensor form, such that: B^eff_i=Λ (k) ϵ_ijl(k_j^3k_l-k_l^3k_j) The corresponding tensor is a pure L=4 (magnetic hexadecapole) spherical tensor, since the L=5 spherical representation does not contain the totally symmetric irrep of these point groups. Including quartic altermagnetism will also affect the the classification of the MPGs, since some of the classes will split on the basis of their rank-5 tensor forms. For example, the tetragonal and hexagonal groups in Classes XIII, XV and XVI will split because they have different rank-5 tensor forms, and Class XVIII will also split into two subclasses (this is indicated with the letters (a) and (b) in Table <ref>). However, no further splitting will occur at any higher tensor order, since the MPGs in each class/subclass only differ by the proper/improper nature of some rotations, which has no bearing on any parity-even tensor. Therefore, the classification of altermagnetic groups in Table <ref> can be considered as complete. § SYMMETRY IN RECIPROCAL SPACE Spin textures constructed with the tensorial method I just described have the full symmetry of the MPG of the crystal, which, in turn, means that the field at a given point in the interior of the BZ will be locally symmetric by the little co-group of that point, without any need for further symmetrisation. <cit.> However, care must be taken at the zone centre and at special points at the BZ boundary, which have additional symmetries due to the fact that points related to them by some symmetry are also related by a reciprocal lattice vector. Unlike the case of the R-D tensor and more generally of even-ranked tensors, there is no general requirement that the spin texture be zero at Kramers points (i.e., points for which 2 k⃗ is a reciprocal lattice vector). In general, spin splitting is allowed everywhere if the dipolar term D^I is allowed, as is the case for ordinary ferromagnets. Whenever dipolar terms are not allowed, spin splitting must vanish at the zone centre and at zone-boundary points with the same little-group symmetry. In other cases, a reduced tensor may be obtained by symmetrisation over the index that is not contracted with k⃗. Some examples are given in section <ref>. § COLLINEARITY Collinearity is not strongly constrained by MPG symmetry, only being forbidden in cubic MPGs. In all other cases, collinear ferromagnetic or antiferromagnetic structures are allowed provided the following conditions are met: * The magnetic moment is either along the high-order rotation axis (hexagonal, trigonal, tetragonal), along one of the 2-fold axes (orthorhombic) or either parallel of perpendicular to the unique 2-fold axis (monoclinic). No special condition exists for triclinic groups. * The point-group symmetry of the magnetic site is admissible, with the admissible direction coinciding with one of the directions specified in point 1. above. This includes magnetic sites in a general position, since point group 1 is of course admissible. It follows that collinear magnetic structures are allowed by symmetry in all altermagnetic groups with the exception of the cubic ones (Classes XVII and XVIII). 
However, as anticipated in section <ref>, one must draw an important distinction between classes that do not allow the spin quantisation axis to be along a permitted collinear direction at the rank-3 tensor level (classes VIII, XII, XIV, XV, for which I will use the adjective `weak-collinear', and of course XVII and XVIII, which do not admit collinearity) and those that do (the `strong-collinear' classes V, IX, X and XI). In the latter case, spin splitting in collinear structures could arise from a very similar effect to that in ordinary ferromagnets, in that spins travelling along certain crystallographic directions would experience a net effective Zeeman field originating from the ordered magnetic moments, the quantisation axis switching sign depending on the wavevector direction. It is important to distinguish between true ferromagnets, where the magnetisation is uniform, and strong-collinear ferri-antiferromagnets, where the internal magnetic field distribution can never be entirely collinear, though one expects the effective fields (mostly of exchange origin) to be strongest in the direction parallel or antiparallel to the Néel vector. It is noteworthy that all MPGs allowing ferromagnetism can trivially be strong-collinear in the ferromagnetic direction, though some can also be strong-collinear in other directions as well (see example <ref>). By contrast, in the weak-collinear classes (VIII, XII, XIV, XV, XVII and XVIII), the spin quantisation axis produced by the rank-3 tensor is not along one of the allowed collinear antiferromagnetic directions. However, rank-3 spin splitting in these classes can still be very large for non-collinear magnetic structures. All weak-collinear classes, with the exception of Classes XVII and XVIII (which cannot support collinear structures), become strong-collinear at a higher tensor order — see sections <ref>. Examples of a strong-collinear and a weak-collinear class are given in section <ref> and further discussed in section <ref>. § STRONG-COLLINEAR HIGHER-ORDER ALTERMAGNETISM Strong-collinear altermagnetism will appear for all classes (except Classes XVII and XVIII) even when it is forbidden by symmetry at the rank-3 (quadratic) level. Indeed, in their treatment employing SGs, Šmejkal et al. <cit.> list a number of examples that require higher-order tensors, so one needs to show that analogous results are obtained with MPGs. Classes VIII, XII, XIV and XVb allow strong-collinear quartic (rank-5) altermagnetism, while Class XVa only allows it at the next order (rank-7). The expressions for the full tensors are naturally rather complex, but it is easy to write the B⃗^eff along the collinear direction z (see the example in section <ref> for more detail). The appropriate polynomials are reported in Table <ref>. As shown in sec. <ref>, there is a close correspondence between these forms and those listed in Šmejkal et al. <cit.>, fig. 2, demonstrating the equivalence between the MPG and SG approaches when it comes to strong-collinear altermagnetism. Tensor forms for the 18 altermagnetic classes of MPGs. Classes I-XVII are the quadratic altermagnetic (= piezomagnetic, MOKE-active) classes, each comprising several MPGs (column 3). The number of parameters for each tensor form and the spherical tensor decomposition (D = dipole, Q = quadrupole, O = octupole) are provided in columns 4 and 5 (see Appendix <ref>). Class XVIII comprises the three remaining altermagnetic groups with rank-5 as the lowest-rank tensor (H = hexadecapole). 
For Class XVIII, the explicit form of B⃗^eff is provided instead of the (very cumbersome) matrix. MPGs labelled with (a) and (b) form separate subclasses when quartic altermagnetism is accounted for (see text). l@lllll -10pt 5c Class Tensor Form Magnetic point groups Parameters Spherical decomp. Class I^ ( [ Λ _11 Λ _12 Λ _13 Λ _14 Λ _15 Λ _16; Λ _21 Λ _22 Λ _23 Λ _24 Λ _25 Λ _26; Λ _31 Λ _32 Λ _33 Λ _34 Λ _35 Λ _36; ]) 1 -1 18 D(6)+Q(5)+O(7) Class II^ ( [ 0 0 0 Λ _14 0 Λ _16; Λ _21 Λ _22 Λ _23 0 Λ _25 0; 0 0 0 Λ _34 0 Λ _36; ]) 2 m *2 8 D(2)+Q(3)+O(3) Class III^ ( [ Λ _11 Λ _12 Λ _13 0 Λ _15 0; 0 0 0 Λ _24 0 Λ _26; Λ _31 Λ _32 Λ _33 0 Λ _35 0; ]) 2' m' `2' 10 D(4)+Q(2)+O(4) Class IV^ ( [ 0 0 0 0 Λ _15 0; 0 0 0 Λ _24 0 0; Λ _31 Λ _32 Λ _33 0 0 0; ]) 2'2'2 m'm2' m'm'2 m'm'm 5 D(2)+Q+O(2) Class V^† ( [ 0 0 0 Λ _14 0 0; 0 0 0 0 Λ _25 0; 0 0 0 0 0 Λ _36; ]) 222 mmm mm2 3 Q(2)+O Class VI^ ( [ Λ _11 -Λ _11 0 Λ _14 Λ _15 -2 Λ _22; -Λ _22 Λ _22 0 Λ _15 -Λ _14 -2 Λ _11; Λ _31 Λ _31 Λ _33 0 0 0; ]) 3 -3 6 D(2)+Q+O(3) Class VII^ ( [ 0 0 0 0 Λ _15 -2 Λ _22; -Λ _22 Λ _22 0 Λ _15 0 0; Λ _31 Λ _31 Λ _33 0 0 0; ]) 32' 3m' -3m' 4 D(2)+O(2) Class VIII^* ( [ Λ _11 -Λ _11 0 Λ _14 0 0; 0 0 0 0 -Λ _14 -2 Λ _11; 0 0 0 0 0 0; ]) 32 3m -3m 2 Q+O Class IX^† ( [ 0 0 0 Λ _14 Λ _15 0; 0 0 0 -Λ _15 Λ _14 0; Λ _31 -Λ _31 0 0 0 Λ _36; ]) 4' -4' *4' 4 Q(2)+O(2) Class X^† ( [ 0 0 0 Λ _14 0 0; 0 0 0 0 Λ _14 0; 0 0 0 0 0 Λ _36; ]) 4'22' -4'2m' 2 Q+O Class XI^† ( [ 0 0 0 0 Λ _15 0; 0 0 0 -Λ _15 0 0; Λ _31 -Λ _31 0 0 0 0; ]) -4'2'm 4'm'm *4'm'm 2 Q+O Class XII^* ( [ Λ _11 -Λ _11 0 0 0 -2 Λ _22; -Λ _22 Λ _22 0 0 0 -2 Λ _11; 0 0 0 0 0 0; ]) 6' -6' `6' 2 O(2) Class XIII^ ( [ 0 0 0 Λ _14 Λ _15 0; 0 0 0 Λ _15 -Λ _14 0; Λ _31 Λ _31 Λ _33 0 0 0; ]) (a) 6 -6 *6 (b) 4 -4 *4 4 D(2)+Q+O Class XIV^* ( [ Λ _11 -Λ _11 0 0 0 0; 0 0 0 0 0 -2 Λ _11; 0 0 0 0 0 0; ]) 6'22' -6'2m' -6'm2' 6'mm' `6'mm' 1 O Class XV^* ( [ 0 0 0 Λ _14 0 0; 0 0 0 0 -Λ _14 0; 0 0 0 0 0 0; ]) (a) 622 -6m2 6mm *6mm (b) 422 -42m 4mm *4mm 1 Q Class XVI^ ( [ 0 0 0 0 Λ _15 0; 0 0 0 Λ _15 0 0; Λ _31 Λ _31 Λ_33 0 0 0; ]) (a) 62'2' -6m'2' 6m'm' *6m'm' (b) 42'2' -42'm' 4m'm' *4m'm' 3 D(2)+O Class XVII^* ( [ 0 0 0 Λ _14 0 0; 0 0 0 0 Λ _14 0; 0 0 0 0 0 Λ _14; ]) (a) 4'32' -4'3m' m-3m' (b) 23 m-3 1 O Class XVIII^* B^eff_i=Λ (k) ϵ_ijl(k_j^3k_l-k_l^3k_j) 432 -43m m-3m 1 H () Classes allowing a magnetic dipole term (Type-I altermagnets in ref. Cheong2024) (†) Classes allowing spin splitting along a permitted collinear AFM direction (`strong-collinear' altermagnetic classes) (*) Classes not allowing spin splitting along a permitted collinear AFM direction (`weak-collinear' altermagnetic classes) § CUBIC SYMMETRY AND THE CONNECTION WITH SPIN GROUPS In several important cases (both collinear and non-collinear), the crystal symmetry is only broken by the direction of the magnetic moments. A classic case is that of hematite (Fe_2O_3),<cit.> wherethe 3-fold symmetry is broken at room temperature but remains unbroken below the so-called Morin transition (≈260K). Since the magneto-elastic interaction is often small, one needs to take into account the effect of the approximate crystal symmetry (`pseudo-symmetry') on the spin textures. In the case of collinear structures, this can be done effectively using SGs — a treatment that has been presented in previous literature. <cit.> One needs to establish to establish how the approach based on MPG needs to be modified to take pseudo-symmetry into account. 
For most collinear structures, the component of the effective field B⃗^eff along the staggered magnetisation can be readily identified from the MPG treatment even in the presence of pseudo-symmetries (see sec. <ref> for a complete example). In the rare cases in which magnetic moments are along a general direction, it is convenient to rotate them into a high-symmetry direction and re-establish the MPG and the corresponding spin texture. When the magneto-elastic interaction is weak, such high-symmetry phase is almost always physical and can usually be reached by applying a small magnetic field. However, this approach is not viable in cubic symmetry, because, for collinear phases, crystal symmetry can never be fully restored by any direction of the staggered magnetisation, and it is precisely in these cases that SGs are most useful. Lowest-order polynomial forms for all cubic SGs have been obtained using scalar-field tensors, as discussed in sec. <ref>, and are listed in <ref> together with the corresponding MPGs for magnetic moments along the three cubic symmetry directions [001], [110] and [111]. When comparing these forms with the corresponding strong-collinear forms of the same MPGs, one notices that the former can be obtained from the latter by fixing some of the parameters to have specific values. In other words, the pseudo-symmetry manifests itself by establishing a link between effective-field parameters that would otherwise be independent. This becomes clear when considering specific cubic symmetries. The cubic SGs 23, m-3, 432 -43m and m-3m are all ferro- (or ferri-) magnetic, and the lowest-order cubic polynomial form is completely isotropic. The corresponding MPGs all admit a dipole component, which is, however, not necessarily isotropic. For example, the polynomial form for MPG 42'2' is Λ_1(k) (k_x^2+k_y^2)+Λ_2(k) k_z^2. Therefore, the pseudo-cubic symmetry had the result of imposing Λ_1(k) =Λ_2(k) ∀ k, as stated. The case of the three remaining cubic groups 4̃32̃, -̃4̃3m̃ and m -3 m̃ is considerably more interesting, and is discussed in more detail in sec. <ref>. The lowest-order cubic polynomial form is of rank 7 (rank 6 for the corresponding SGs), while the corresponding MPGs admit lower-order polynomial forms (see tab. <ref> and <ref>). As always, these lower-order MPG forms are contained in the rank-7 forms — for example, the trigonally-symmetric (Class VIII) rank-5 form Λ(k) (k_x^2 - 3 k_y^2)k_x k_z has corresponding rank-7 components (k_x^2 - 3 k_y^2)k_x k_z (Λ_1(k)(k_x^2+k_y^2)+Λ_2(k)k_z^2), since (k_x^2+k_y^2) and k_z^2 are totally symmetric in trigonal symmetry. However, in the presence of cubic pseudo-symmetry, these components are linked together and with other rank-7 components, yielding the pseudo-cubic rank-7 form displayed in tab. <ref>. All other trigonal components only arise from the symmetry breaking due to the magnetic moment direction, and are expected to be small for weak magneto-elastic interactions. § EXAMPLES Here, I provide a set of examples to illustrate the practical use of this method and in particular of Table <ref>, and to highlight some of the issues arising from different crystallographic conventions. All the examples and references were generated using the program MAGNDATA from the Bilbao Crystallographic Server. <cit.> The figures display the `texture pattern' as a normalised B⃗^eff vector field, where the normalisation function is 1/(k_x^2+k_y^2+k_z^2)^(n-1)/2 where n is the rank of the tensor. 
These `texture patterns' correspond to the gnomonic projection of the textures, i.e., a projection from the centre of the unit sphere to a plane tangent to it. This projection is most convenient to display the symmetry (it preserves angles at the centre) and the details of the textures. In these figures, k_x and k_y are in arbitrary dimensionless units, and the conversion to spherical coordinates is tanθ = √(k_x^2+k_y^2), tanϕ=k_y/k_x. There is a close correspondence between these texture patterns and those displayed schematically in fig. 2 of Šmejkal et al.<cit.> (more details in the figure legends). An example of texture plotted on the surface of a sphere in momentum space is shown in sec. <ref>. §.§ Class XIV (weak-collinear) Class XIV comprises five hexagonal MPGs: `6'mm', -6'2m' (TmAgGe, ref. Baran2009), -6'm2', 6'mm' (HoMnO_3, MSG P6'_3cm', ref. Brown2006) and 6'22'. The expression for B⃗^eff(k⃗) for this class at the rank-3 tensor level is: B_x^eff (k⃗) = Λ_11(k)(k_x^2-k_y^2) B_y^eff (k⃗) = Λ_11(k) ( -2 k_x k_y ) B_z^eff (k⃗) = 0 the spin texture being parametrised by a single scalar function of k, Λ_11(k). The spin quantisation axis is in the xy plane and therefore always perpendicular to the high-symmetry direction, i.e., the allowed direction for the Néel vector. The spin texture is parallel to k⃗ along the six directions Γ - K and Γ - K', while it is perpendicular to k⃗ along the Γ - M lines (see ref. Bradley2010 for the BZ point notation). Since the three K points and the three K' points are equivalent, Λ_11(k) must be =0 both at the Γ point and at k=|K|. There is no additional constraint at points M. Class XIV is also useful to illustrate how to deal with different axes orientations, which is essential to employ Table <ref> correctly. As a preamble, one should remark that, in the presence of a crystal lattice, the orientation of the point group directions is not arbitrary, but is related to that of the crystal axes. So, for example, symbols -62m and -6m2 refer to the same point group, being related by a 90^∘ rotation of the in-plane axes. However, P-62m and P-6m2 are distinct space groups, since the in-plane directions are now linked to the crystal axes. The symbols 6'2'2, -6'm'2, -6'2'm (e.g., ThMn_2, ref. Deportes1987, and CsFeCl_3, ref. Hayashida2018, both with MSG P-6'2'm), 6'm'm (e.g., YbMnO_3, MSG P6'_3c'm, ref. Fabreges2008), and `6'm'm (e.g., CrSb, MSG P`6'_3m'c, ref. Yuan2020b) refer to the same point groups (6'22', -6'2m', -6'm2', 6'mm', and `6'mm', respectively), with the in-plane axes rotated by 90^∘. Therefore, when dealing with MSGs such as P-6'2'm or P`6'_3m'c, one should employ the rotated form of the tensor: [I did not create a separate class for these symbols, since this tensor is obviously related to that in Table <ref> and in order to avoid proliferation of classes. However, the reader should be advised that simple transformations such as the one in eq. <ref> may be necessary, not only in this case but also to deal with non-standard conventions (e.g., 2'2'2 vs 22'2', etc. in Class IV).] ( [ 0 0 0 0 0 -2 Λ _22; - Λ _22 Λ _22 0 0 0 0; 0 0 0 0 0 0; ]) The expression for B⃗^eff(k⃗) for this set being: B_x^eff (k⃗) = Λ_11(k) ( -2 k_x k_y ) B_y^eff (k⃗) = Λ_11(k)(k_y^2-k_x^2) B_z^eff (k⃗) = 0 Among the compounds listed above, TmAgGe, HoMnO_3, YbMnO_3, CsFeCl_3 and ThMn_2 are reported to have non-collinear magnetic structures, while CrSb is reportedly collinear. As discussed in sec.
<ref>, Class XIV is strong-collinear at the rank-5 (quartic) level, the effective field along the z axis (the allowed direction for the collinear staggered magnetisation) being (see tab. <ref>): B^eff_z(k⃗)=Λ(k) ( k_x^3-3 k_y^2 k_x ) k_z Fig. <ref> displays the corresponding texture patterns. §.§ Classes X and XI (strong-collinear) Classes X and XI are closely related, <cit.> because their tensor forms are related by a 45^∘ rotation. However, these forms look sufficiently different to make distinct classes useful in practice. The difference between the two classes stems from the fact that in Class X, operators along the primary ([001]) and tertiary ([110]) symmetry directions are primed while, for Class XI, operators along the primary ([001]) and secondary ([100]) symmetry directions are primed. The tensor form for Class X allows spin splitting along all three axes. However, setting Λ_14≈0, one finds that the spin quantisation axis is predominantly parallel or antiparallel to the z axis, i.e., the allowed Néel vector direction for collinear antiferromagnetism. The maximum effective Zeeman field is for wavevectors in the (110) and (11̅0) directions, and the sign of the field switches between these two directions. Class X comprises the MPGs -4'2m' (e.g., Pb_2MnO_4, ref. Kimber2007) and 4'22' (Er_2Ge_2O_7, ref. Taddei2019), but also those corresponding to the non-standard settings -4'2'm, 4'mm' and *4'mm' (e.g., LiFe_2F_6, ref. Shachar1972, and RuO_2, ref. Berlijn2017, both with MSG P*4'_2nm'). The characteristic spherical tensor expansion for this class is: A(k) Q^III+B(k) O^I where Q^III and O^I are fixed tensors defined in Table <ref>. The spin texture is: B_x^eff (k⃗) = Λ_14(k) k_y k_z B_y^eff (k⃗) = Λ_14(k) k_x k_z B_z^eff (k⃗) = Λ_36(k) k_x k_y In the primitive tetragonal cell (adopted by Pb_2MnO_4, Er_2Ge_2O_7, LiFe_2F_6 and RuO_2), the four M points have reciprocal-space coordinates (1/2, 1/2, 0), (-1/2, 1/2, 0), (1/2, -1/2, 0) and (-1/2, -1/2, 0) and are all related by reciprocal lattice vectors. However, according to eq. <ref>, (1/2, 1/2, 0)/(1/2, -1/2, 0) and (-1/2, 1/2, 0)/(-1/2, -1/2, 0) must have opposite spin textures, which means that Λ_36(k)=0 for k=k_M. The in-plane and out-of-plane spin-texture patterns, both of rank 3, are displayed in fig. <ref>. Class XI comprises the standard-setting MPGs -4'2'm, 4'm'm, *4'm'm (e.g., the pyrochlores Er_2Ti_2O_7 and Er_2Ru_2O_7, with MSG I 4'_1m'd, refs. <cit.>), and also the non-standard settings -4'2'm and 4'2'2. The characteristic spherical tensor expansion for this class is: A(k) Q_z^I+B(k) O_z^II where Q_z^I and O_z^II are fixed tensors defined in Table <ref>. The spin texture is rotated by 45^∘ with respect to Class X: B_x^eff (k⃗) = Λ_15(k) k_x k_z B_y^eff (k⃗) = - Λ_15(k) k_y k_z B_z^eff (k⃗) = Λ_13(k) (k_x^2- k_y^2) Among the compounds listed above, Pb_2MnO_4, Er_2Ge_2O_7 and Er_2Ti_2O_7 are reported to have non-collinear magnetic structures, while LiFe_2F_6, Er_2Ru_2O_7, and RuO_2 are reportedly collinear (but for RuO_2, also see ref. Kessler2024). §.§ Class XV: higher-order strong-collinear altermagnetism Class XV is weak-collinear at the rank-3 level, with B_x^eff (k⃗) = Λ_14(k) k_y k_z B_y^eff (k⃗) = - Λ_14(k) k_x k_z B_z^eff (k⃗) = 0 Subclasses XVa and XVb become strong-collinear at the rank-7 and rank-5 level, respectively, the corresponding B_z^eff components being listed in tab. <ref>. Figs. <ref> and <ref> display the corresponding texture patterns.
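To make the `texture pattern' figures easier to reproduce, the short script below is a minimal illustration (not part of the original analysis): it evaluates the rank-3 Class X texture on the unit sphere with the arbitrary choice Λ_14 = Λ_36 = 1 and plots its in-plane part using the gnomonic projection described above, i.e. each direction (θ, ϕ) is drawn at radius tanθ.

# Hedged sketch: gnomonic-projection plot of the rank-3 Class X texture
# B = (L14*ky*kz, L14*kx*kz, L36*kx*ky); L14 = L36 = 1 are illustrative values.
import numpy as np
import matplotlib.pyplot as plt

L14, L36 = 1.0, 1.0
theta = np.radians(np.arange(10, 61, 10))      # polar angles away from +z
phi = np.radians(np.arange(0, 360, 15))        # azimuthal angles
T, P = np.meshgrid(theta, phi)

kx, ky, kz = np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)
Bx, By, Bz = L14*ky*kz, L14*kx*kz, L36*kx*ky   # Class X texture evaluated on |k| = 1

# Gnomonic projection: each direction is plotted at radius tan(theta), azimuth phi
X, Y = np.tan(T)*np.cos(P), np.tan(T)*np.sin(P)

fig, ax = plt.subplots(figsize=(5, 5))
q = ax.quiver(X, Y, Bx, By, Bz, cmap='coolwarm')      # arrows: in-plane part; colour: B_z
fig.colorbar(q, ax=ax, label='B_z^eff (arb. units)')
ax.set_xlabel('k_x (arb. units)'); ax.set_ylabel('k_y (arb. units)')
ax.set_aspect('equal'); ax.set_title('Class X texture, gnomonic projection')
plt.show()

The arrows reverse between the (110) and (11̅0) azimuths, and the colour scale makes the sign change of B_z^eff between neighbouring M-point directions visible, as discussed above.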
§.§ Connection with spin groups The classification in terms of MPGs is completely general even when a magnetic space group cannot be defined, and is more flexible than the (binary) SG approach employed in Refs. Smejkal2022, Liu2022 when dealing with non-collinear structures. I have shown several cases in which the texture patterns generated along the collinear AFM direction using the tensor approach are identical to those obtained using SGs. Nevertheless, SGs are useful to show the connection between the spin textures of magnetic structures related by spin rotations, and may also produce approximate relations between tensor parameters that are unrelated at the MPG level via the occurence of pseudo-symmetries (see also sec. <ref>). The relation between spin textures on either sides of a spin-flip transition will be shown here by way of an example. Let us consider the magnetic structure of CoF_2 (space group P*4_2nm), a well known piezomagnetic material. <cit.> In zero applied magnetic field, CoF_2 possesses a fully compensated collinear structure with spins along the c-axis. In this compound, Co^2+ (3d^7) is strongly anisotropic, and a spin flop transition was reported at a magnetic field of 7 T, <cit.> which is typical of many 3d transition metal compounds. [Half-filled-shell ions such as Fe^3+ have smaller anisotropies, but even in such cases spin-lattice coupling and the Dzyaloshinskii-Moriya interaction cannot be neglected.] The MSG in the zero-field phase is P*4'_2nm', and the MPG is *4'mm' (Class X). In the high-field phase and assuming magnetic moments along the [100] or [110] directions, the MPG is m'mm', where the first m' is perpendicular to the Néel vector direction (here chosen to be the x axis), while the second m' is perpendicular to the z axis. This MPG is a member of Class IV, with y and z exchanged. Using the conventions indicated here above, we obtain: T_S ∥ c = ([ 0 0 0 Λ _14 0 0; 0 0 0 0 Λ _14 0; 0 0 0 0 0 Λ _36; ]) T_S ∥ a = ( [ 0 0 0 0 0 Λ _15; Λ _31 Λ _33 Λ _32 Λ _24 0 0; 0 0 0 0 0 0; ]) where, for T_S ∥ a, I have exchanged y and z with respect to the standard setting in Table <ref>. The zero-field magnetic structure T_S ∥ c is strong-collinear for the collinear AFM direction along the z axis, via the tensor element Λ _36. When Λ _14 is set to zero, B⃗^eff is also parallel to the z axis and has the form Λ_36 k_x k_y, being maximum when k⃗ is in the xy/xy̅ directions. The high-field magnetic structure would allow weak FM, as indicated by the presence of dipole terms in Class IV. However, T_S ∥ a is also strong-collinear in the x direction via the Λ _15 element. When all other elements are set to zero, B⃗^eff is along the x direction (the direction of the AFM moments) and has the form Λ_24 k_y k_z, being maximum when k⃗ is in the yz/yz̅ directions. This is exactly what one would expect based on the SG approach (see below), i.e., spin splitting along the AFM spin direction and with a reciprocal space pattern that does not depend on the spin direction. The MPG approach correctly describes this and also includes other tensor elements that are allowed by symmetry, which are likely to be small for collinear structures, but may well be large for non-collinear structures. Regardless of the spin orientation, the spin space group is P*4̃_2ñm, where the “” symbol indicates spin flip in the SG sense, i.e., with the space-group operators only acting on atoms and not on spins. 
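A quick numerical check of the zero-field statement above can be made by treating the out-of-plane component B_z ∝ k_x k_y as a time-reversal-odd scalar texture, in the spirit of the binary-SG language used in the appendix; the short script below (my own illustration, not part of the original analysis) verifies that this texture is odd under a bare fourfold rotation about z and is therefore invariant only under the primed operation 4' that combines the rotation with time reversal.

# Hedged sketch: s(k) = kx*ky is odd under C4z and even under C4z combined with
# time reversal (which flips a time-reversal-odd texture), i.e. under the primed 4'.
import numpy as np

rng = np.random.default_rng(0)
C4z = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 1.0]])

def s(k):                       # scalar texture along the collinear (Neel) direction
    return k[0] * k[1]

for _ in range(5):
    k = rng.normal(size=3)
    s_C4 = s(np.linalg.inv(C4z) @ k)      # bare C4z: s -> s(C4z^{-1} k)
    s_C4_prime = -s_C4                    # 4' = C4z followed by time reversal (extra -1)
    assert np.isclose(s_C4, -s(k))        # sign flip: the unprimed 4 is broken
    assert np.isclose(s_C4_prime, s(k))   # the primed 4' is a symmetry
print("kx*ky is odd under C4z alone and even under the primed 4'.")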
The (point) SG is *4̃m̃m, and the scalar texture tensor, generated using MTENSOR is: T_SG=([ 0 Λ_12 0; Λ_12 0 0; 0 0 0; ]) The corresponding texture has the form Λ_12 k_x k_y, which is identical to the previous ones with an appropriate exchange of axes. Clearly, this complete correspondence does not always hold when the magnetic moments are along a low-symmetry direction. An extreme case occurs when the magnetic moments are along a completely generic direction [x,y,z], since in this case the MPG analysis yields six independent components along [x,y,z]. However, these situations are extremely rare, and can usually be resolved within the MPG framework by rotating the magnetic moment along a high-symmetry direction (but see here below for the case of cubic groups). §.§ Strong-collinear altermagnetism in cubic groups The relation between the SG and MPG approach becomes even clearer when considering the case of the cubic groups. Cubic MPGs do not admit collinear structures, since the direction of the magnetic moments always breaks the cubic symmetry. However, cubic symmetry can still hold in an approximate sense if the spin-lattice interaction is small. Highlighting these approximate symmetries is one of the main advantages of SGs over MPGs. As shown for other symmetries in sec. <ref>, it is often possible to choose a high-symmetry spin direction where no space operator is lost, but this is clearly not possible for cubic groups. It is therefore useful to work out a cubic example in detail, so as to understand exactly how the two approaches are related. Let us consider AFM ordering on a crystal with point group m-3m, such as the pyrochlore Er_2Ru_2O_7 (ref. Taira2003), in which cubic symmetry is broken by a collinear AFM structure. The magnetic moments are along the z axis, and the MPG is *4'm'm (Class XI). The corresponding SG is m -3 m̃ (full symbol *4̃ -3 !2̃) regardless of the direction of the magnetic moments, where, once again, SG space operators do not change the direction of the spins and the "" symbol indicates spin flip. At the rank-7 level, the effective field components for the two cases are: B^SG (k⃗) = Λ(k)(k_x^2 - k_y^2) (k_x^2 - k_z^2) (k_y^2 - k_z^2) B_z^MPG (k⃗) = B^SG (k⃗)+(k_x^2 - k_y^2) (Λ_1(k) k_x^2 k_y^2 + Λ_2(k) (k_x^2 + k_y^2)^2+ Λ_3(k) k_z^4) Corresponding MPG expressions for the magnetic moment along other high-symmetry directions, in the appropriate coordinate systems, are reported in tab. <ref>. For the tetragonal case, it is clear that B^SG(k⃗) is the symmetrised version of B_z^MPG (k⃗), obtained by setting Λ_1(k)=Λ_2(k)=Λ_3(k)=0. The additional components would emerge as a consequence of symmetry breaking, and can be reasonably expected to be small for a collinear structure where symmetry is only broken by the spin system and spin-lattice interaction is weak. Fig. <ref> shows the texture pattern for B^SG(k⃗) (i.e., in the absence of the tetragonal terms). Fig. <ref> also shows that the scalar texture pattern is completely symmetric under the SG cubic symmetry operators, while the generic tetragonal, trigonal or orthorhombic textures for the corresponding MPGs would only have those specific symmetries. For further clarity, views along different directions of these textures plotted on the surface of a sphere in momentum space are displayed in fig. <ref>. As remarked earlier, textures for different values of k=|k⃗| only differ by the multiplicative function Λ(k) (see first line of eq. <ref>).
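The fully cubic character of B^SG(k⃗) can be verified directly; the short sympy script below (an illustration of mine, not part of the original derivation) shows that the rank-7 form is invariant under the cyclic threefold permutation of the axes, while a fourfold rotation about z reverses its sign, consistent with the fourfold and diagonal twofold operators carrying the spin-flip tilde in the spin-group symbol quoted above.

# Hedged sketch: symmetry of the cubic spin-group texture
# B_SG(k) = (kx^2 - ky^2)(kx^2 - kz^2)(ky^2 - kz^2).
import sympy as sp

kx, ky, kz = sp.symbols('k_x k_y k_z', real=True)
B = (kx**2 - ky**2) * (kx**2 - kz**2) * (ky**2 - kz**2)

# Threefold rotation about [111]: (kx, ky, kz) -> (ky, kz, kx)
B3 = B.xreplace({kx: ky, ky: kz, kz: kx})
print(sp.simplify(B3 - B))      # 0: invariant, so the threefold axes need no spin flip

# Fourfold rotation about z: (kx, ky, kz) -> (-ky, kx, kz); only even powers appear
B4 = B.xreplace({kx: -ky, ky: kx})
print(sp.simplify(B4 + B))      # 0: the form changes sign, so the fourfold operator
                                # must be combined with spin flip (the tilde) in the SG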
§ DISCUSSION In this paper, I have outlined a natural symmetry classification of all k⃗/-k⃗-symmetric altermagnets, based on the lowest-rank altermagnetic tensor forms allowed in each magnetic point group (MPG). I have also tabulated all the higher-rank tensor forms required to describe `strong-collinear' altermagnetic textures, i.e., those that produce spin splitting with quantisation axis parallel to the Néel vector. This includes the case of cubic groups, where the spin-group formalism (thoroughly developed by other groups)<cit.> is most useful. This classification has important practical consequences, particularly when one considers the tensorial connection between altermagnetism (which describes particular forms of reciprocal-space spin textures) and well-known macroscopic properties such as piezomagnetism and the MOKE effect. For example, the Bilbao database MAGNDATA <cit.> lists 139 known materials that are MOKE active and do not allow ferromagnetism. The point groups of these materials are the same as those in classes V, VIII, IX, X, XI, XII, XIV, XV and XVII in which purely antiferromagnetic quadratic altermagnetism is allowed. In fact, most materials so far discussed in the context of altermagnetism (and many more) are well-known piezomagnets/MOKE-active AFMs. Moreover, while a positive identification of altermagnetism as distinct from time-even effects such as the surface R-D effect is rather difficult, detecting MOKE activity is rather straightforward and would enable screening of candidate materials for which the magnetic structural determination may be ambiguous. <cit.> One may also observe that altermagnetism is allowed in 69 MPGs, of which 66 allow quadratic altermagnetism and 31 allow ferromagnetism. Considering that 32 MPGs are paramagnetic (grey), out of a total of 122 MPGs only 21 groups are black&white and do not allow altermagnetism, which is therefore hardly a rare phenomenon. Although the purpose of this paper is mainly to discuss altermagnetic symmetry, a few words about `mechanisms' do not seem inappropriate, particularly in connection with the role of the spin-orbit coupling (SOC). As already discussed in ref. Yuan2020, altermagnetic splitting is caused by the time-reversal-odd effective Zeeman fields (mostly of exchange origin) generated by real magnetic moments in the unit cell. Since the SO interaction is time-reversal even (its sources being the gradients of the electrical potential), at first sight it would seem that the SO interaction could not be responsible for k⃗/-k⃗-symmetric altermagnetism. However, this is not entirely correct, since the SOC can affect both the orientation of the localised magnetic moments and the spin density distribution, most famously by inducing spin canting via the Dzyaloshinskii-Moriya interaction. It is therefore not implausible that the SOC could be responsible for `weak altermagnetism', <cit.> particularly when spin splitting along a unique high-symmetry direction is not allowed at the lowest order (Classes VIII, XII, XIV, XV, XVII and XVIII). Interestingly, one must also expect to observe R-D-like, k⃗/-k⃗-antisymmetric splitting that is of pure magnetic origin, because magnetic ordering can break inversion symmetry. The two best-known cases of this phenomenon, which Cheong et al.
describes as `type-III altermagnetism',<cit.> are helical magnetic structures and type-II multiferroicity, both of which can originate in centrosymmetric crystal due to competing interactions.[Altermagnetism as defined in this paper is not allowed in paramagnetic (grey) point groups. However, many antiferromagnets (especially incommensurate antiferromagnets) possess paramagnetic MPGs, because in there MSG time reversal is equivalent to a (potentially incommensurate) translation U — in other words, Uθ is a symmetry operator. These systems are candidates for `Type III altermagnetism'. A systematic treatment of these system similar to the one presented here should be possible since, even in the most complex cases, one can always define a MPG. <cit.>] In both cases, the relevant properties (magnetic helicity and magnetic polarity, respectively) are time reversal even, so this materials do not violate the general rule that the spin splitting should be the same in time-reversed domains. § TENSOR EQUALITIES AND INVARIANCE §.§ Equivalence of Cartesian and Spherical tensor decompositions In general, Cartesian tensors like those in eq. <ref> are members of the tensor product space Γ^V ⊗[Γ^V ⊗Γ^V …], where the square bracket indicates index symmetrisation while Γ^V is the representation of ordinary vectors, which coincides with the L=1 irrep of SO(3) (Γ^1) if one considers only proper rotations. In turn, [Γ^V ⊗Γ^V …] can be decomposed in SO(3) irreps as follows: [Γ^V ⊗Γ^V ] = Γ^2+Γ^0 [Γ^V ⊗Γ^V⊗Γ^V ] = Γ^3+Γ^1 [Γ^V ⊗Γ^V⊗Γ^V⊗Γ^V ] = Γ^4+Γ^2+Γ^0 [Γ^V ⊗Γ^V⊗Γ^V⊗Γ^V⊗Γ^V ] = Γ^5+Γ^3+Γ^1 … where Γ^2, Γ^3, etc. are the irreps for L=2, 3, …. Taking once again the tensor product with Γ^V Γ^V ⊗[(Γ^V) ^2 ] = Γ^3+Γ^2+2 Γ^1 Γ^V ⊗[(Γ^V) ^3 ] = Γ^4+Γ^3+2 Γ^2+Γ^1+Γ^0 Γ^V ⊗[(Γ^V) ^4 ] = Γ^5+Γ^4+2 Γ^3 +Γ^2+2 Γ^1 Γ^V ⊗[(Γ^V) ^5 ] = Γ^6+Γ^5+2 Γ^4+Γ^3+2 Γ^2+Γ^1+Γ^0 … where I have used the shorthand (Γ^V) ^2=Γ^V ⊗Γ^V etc. The spherical tensor decomposition is the familiar one: Γ^1 ⊗Γ^0 = Γ^1 Γ^1 ⊗Γ^1 = Γ^2+Γ^1+ Γ^0 Γ^1 ⊗Γ^2 = Γ^3+Γ^2+Γ^1 Γ^1 ⊗Γ^3 = Γ^4+Γ^3+Γ^2 Γ^1 ⊗Γ^4 = Γ^5+Γ^4+Γ^3 Γ^1 ⊗Γ^5 = Γ^6+Γ^5+Γ^4 … So, for example, the sum of all the spherical harmonics terms is eq. <ref> is: ∑_n=0^5 Γ^1 ⊗Γ^n = Γ^0+3 Γ^1+3Γ^2+3Γ^3+3 Γ^4 + 2 Γ^5 + Γ^6 = Γ^V ⊗[(Γ^V) ^4] + Γ^V ⊗[(Γ^V) ^5] We can therefore conclude that, if one considers only proper rotations (i.e., elements of the continuous group SO(3)), the decomposition into spherical tensors of of a vector field on a spherical shell up to order n is equivalent to the sum of two Cartesian tensors of ranks n and n+1. Including parity and time reversal, it is found that the Cartesian tensors of odd rank are parity-even and time-reversal odd, while the Cartesian tensors of even rank are parity-odd and time-reversal even. §.§ Tensor invariance Here, I show that the symmetry of the global point-group symmetry of spin texture generated by a certain Cartesian tensor requires that tensor to be totally symmetric by the symmetry operations of that point group. 
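The formal argument is given below; as a quick numerical illustration of the claim, the following sketch assumes the Voigt-style contraction B_i = Λ_iα K_α with K = (k_x^2, k_y^2, k_z^2, k_yk_z, k_xk_z, k_xk_y), the convention that reproduces the explicit rank-3 textures quoted earlier (e.g. Class X), and checks that the Class V tensor generates a texture invariant under the proper, unprimed rotations of 222 (for which the parity/time-reversal prefactor derived below is +1), but not under a fourfold rotation absent from that group.

# Hedged sketch: check R . B(R^{-1} k) = B(k) for the Class V tensor under the three
# twofold rotations of 222, and see the same test fail for a fourfold rotation.
import numpy as np

rng = np.random.default_rng(1)
L14, L25, L36 = rng.normal(size=3)           # arbitrary Class V parameters

T = np.zeros((3, 6))
T[0, 3], T[1, 4], T[2, 5] = L14, L25, L36    # Voigt order: xx, yy, zz, yz, xz, xy

def K(k):
    kx, ky, kz = k
    return np.array([kx*kx, ky*ky, kz*kz, ky*kz, kx*kz, kx*ky])

def B(k):
    return T @ K(k)

C2x, C2y, C2z = np.diag([1., -1., -1.]), np.diag([-1., 1., -1.]), np.diag([-1., -1., 1.])
C4z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])

k = rng.normal(size=3)
for R in (C2x, C2y, C2z):
    assert np.allclose(R @ B(np.linalg.inv(R) @ k), B(k))    # totally symmetric under 222
print("Class V tensor: texture invariant under all rotations of 222.")
print("Invariant under C4z?", np.allclose(C4z @ B(np.linalg.inv(C4z) @ k), B(k)))  # False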
In general, given a vector field V⃗(k⃗) and a proper rotation R the transformed vector field is: Ṽ⃗̃(k⃗)= R·V⃗(R^-1·k⃗) Particularising to the case of B⃗^eff(k⃗) we obtain for proper rotations: B̃⃗̃^eff(k⃗) = R_ij T^(n)_j,αβγ… R^-1_αρ R^-1_βσ R^-1_γτ k_ρ k_σ k_τ… = R_ij R_ρα R_σβ R_τγ T^(n)_j,αβγ… k_ρ k_σ k_τ… After some bookeeping can generalise this to proper/improper rotations that may include time reversal by considering that k⃗ is time-reversal-odd and parity-odd, as: B̃⃗̃^eff(k⃗) = (-1)^(n+1) p +n t R_ij R_ρα R_σβ R_τγ… T^(n)_j,αβγ… k_ρ k_σ k_τ… If we now require that, ∀k⃗, B̃⃗̃^eff(k⃗)=B⃗^eff(k⃗) , we conclude: (-1)^(n+1) p +n t R_ij R_ρα R_σβ R_τγ… T^(n)_j,αβγ… = T^(n)_i,ρστ… which is precisely the definition of a tensor that is totally symmetric by a generalised rotation R. § SPHERICAL BASIS TENSORS In this appendix, I present a decomposition of the 17 unique tensor forms for quadratic altermagnetism into a set of nine simple spherical basis tensors plus those obtained by axis permutations (Table <ref>). Each of the 17 tensor forms can be obtained as a linear combination of the spherical basis tensors (Table <ref>). For example, for Class V, we have: ( [ 0 0 0 Λ _14 0 0; 0 0 0 0 Λ _25 0; 0 0 0 0 0 Λ _36; ]) = Λ_1 Q^II + Λ_2 Q^III + Λ_3 O^I = = Λ_1 ( [ 0 0 0 -1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 0; ])+ Λ_2 ( [ 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 -2; ]) +Λ_3 ( [ 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1; ]) with Λ _14 = -Λ_1+Λ_2+Λ_3 Λ _25 = Λ_1+Λ_2+Λ_3 Λ _36 = -2 Λ_2+Λ_3 § BINARY SPIN GROUPS IN THE TIME-REVERSAL LANGUAGE Spin groups (SG) are generally defined as the outer direct product of a point (or space) group and a spin rotation group. <cit.> The simplest non-trivial SGs are binary SG, in which the group acting on spins is a 2-element set. To classify collinear structures Šmejkal et al. adopt the two-element group {1, 2_z}, where 1 is the identity and 2_z is a 2-fold rotation perpendicular to the Néel vector. Another choice is to use the two-element group {1, 1'}, where 1' is the time reversal operator. With either choice, binary SG coincide with the the celebrated Shubnikov groups. <cit.> The only difference between binary SG with the time-reversal operator and the MPG treatment (which also employs Shubnikov point groups) is that binary SG are made to act on scalar, time-reversal-odd textures both in real and in reciprocal space (space operators have no effect on scalars), while MPGs act on axial vector, time-reversal-odd textures. For binary SGs, I will adopt the time-reversal operator, indicated with the symbol “” to distinguish it from the MPG equivalent, indicated with a prime (') This choice enables one to recast the action of binary SGs onto the familiar language of parity, time reversal etc., and to derive a parallel tensorial treatment to that of MPGs. One should remark that, for a given collinear structure, there is no simple correspondence between its MPGs and binary SG, with the latter generally having more symmetry operators than the former. Moreover, converting SG into MPGs and vice versa require knowledge of the direction of the Néel vector. For example, for a tetragonal structure with binary SG *4̃m̃m, the MPG is *4'mm' for magnetic moments along the z axis and m'mm' or mm'm' for Néel vector along x and y, respectively. Several examples of these conversions are given in sec. <ref>. § CONSTRUCTION OF GNOMONIC PROJECTIONS FROM BAND DISPERSIONS Here, I explain how to construct gnomonic projections of spin textures, similar to those displayed in the figures of sec. 
<ref>, starting from spin-polarised band dispersions such as those obtained from DFT calculations. One starts by calculating a grid of points at constant k=|k⃗| for a particular band. It is most convenient to calculate points at constant θ and equal intervals Δϕ, where k, θ and ϕ are spherical coordinates in momentum space. To reproduce the figures of sec. <ref>, one plots the spin polarisation for points at constant θ on a circle of radius tanθ and with the original values of ϕ. Figures constructed from points calculated by DFT will include the pre-factors Λ_i(k), which depend on both k and the band index. Moreover, one may require higher-rank tensors than those discussed in the paper to reproduce DFT data accurately. I acknowledge discussions with A. Stroppa (CNR-SPIN), Roger D. Johnson (University College London), Dmitry Khalyavin (STFC UKRI) and Gautam Gurung (Trinity College, Oxford). Symmetry-adapted spherical basis tensors employed for the decomposition of the altermagnetic tensor in each of the 17 classes (see Table <ref>). The basis tensors are constructed from 9 unique forms (two dipolar, three quadrupolar and four octupolar) and their axis permutations. Only permutations actually employed in the decomposition are listed. l@llll 4c Spherical form x y z D^I ( [ 1 1 1 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0; ]) ( [ 0 0 0 0 0 0; 1 1 1 0 0 0; 0 0 0 0 0 0; ]) ( [ 0 0 0 0 0 0; 0 0 0 0 0 0; 1 1 1 0 0 0; ]) D^II ( [ 1 0 0 0 0 0; 0 0 0 0 0 1; 0 0 0 0 1 0; ]) ( [ 0 0 0 0 0 1; 0 1 0 0 0 0; 0 0 0 1 0 0; ]) ( [ 0 0 0 0 1 0; 0 0 0 1 0 0; 0 0 1 0 0 0; ]) Q^I ( [ 0 1 -1 0 0 0; 0 0 0 0 0 -1; 0 0 0 0 1 0; ]) ( [ 0 0 0 0 0 1; -1 0 1 0 0 0; 0 0 0 -1 0 0; ]) Q^II ( [ 0 0 0 -1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 0; ]) Q^III ( [ 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 -2; ]) O^I ( [ 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1; ]) O^II ( [ 0 0 0 0 2 0; 0 0 0 -2 0 0; 1 -1 0 0 0 0; ]) O^III ( [ -1 1 0 0 0 0; 0 0 0 0 0 2; 0 0 0 0 0 0; ]) ( [ 0 0 0 0 0 2; 1 -1 0 0 0 0; 0 0 0 0 0 0; ]) O^IV ( [ -2 1 1 0 0 0; 0 0 0 0 0 2; 0 0 0 0 2 0; ]) ( [ 0 0 0 0 0 2; 1 -2 1 0 0 0; 0 0 0 2 0 0; ]) ( [ 0 0 0 0 2 0; 0 0 0 2 0 0; 1 1 -2 0 0 0; ]) Decomposition of the tensors for the 17 altermagnetic classes in terms of the symmetry-adapted spherical basis tensors in Table <ref>. The subscripts indicates the appropriate axis permutation (see columns in Table <ref>) and is omitted when it is not ambiguous. 
l@lllll 3c Class Magnetic point groups Spherical basis Class I^ 1 -1 D^I_x + D^I_y + D^I_z + D^II_x + D^II_y + D^II_z + Q^I_x + Q^I_y + Q^I_z + Q^II + Q^III + O^I + O^II_z + O^III_x +O^III_y + O^IV_x + O^IV_y + O^IV_z Class II^ 2 m *2 D^I_y + D^II_y + Q^I_y + Q^II + Q^III + O^I + O^III_y + O^IV_y Class III^ 2' m' `2 D^I_x + D^I_z + D^II_x + D^II_z + Q^I_x + Q^I_z + O^III_x + O^IV_x + O^II_z + O^IV_z Class IV^ 2'2'2 m'm2' m'm'2 m'm'm D^I_z + D^II_z + Q^I_z + O^II_z, + O^IV_z Class V^† 222 mmm mm2 Q^II + Q^III + O^I Class VI^ 3 -3 D^I_z + D^II_z + Q^II + O^III_x + O^III_y + O^IV_z Class VII^ 32' 3m' -3m' D^I_z + D^II_z + O^III_y + O^IV_z Class VIII^* 32 3m -3m Q^II+O^III_x Class IX^† 4' -4' *4' Q^I_z + Q^III + O^I + O^II_z Class X^† 4'22' -4'2m' Q^III + O^I Class XI^† -4'2'm 4'm'm *4'm'm Q^I_z + O^II_z Class XII^* 6' -6' `6' O^III_x + O^III_y Class XIII^ (a) 6 -6 *6 (b) 4 -4 *4 D^I_z + D^II_z + Q^II + O^IV_z Class XIV^* 6'22' -6'2m' -6'm2' 6'mm' `6'mm' O^III_x Class XV^* (a) 622 -6m2 6mm *6mm (b) 422 -42m 4mm *4mm Q^II Class XVI^ (a) 62'2' -6m'2' 6m'm' *6m'm' (b) 42'2' -42'm' 4m'm' *4m'm' D^I_z+D^II_z+O^IV_z Class XVII^* (a) 4'32' -4'3m' m-3m' (b) 23 m-3 O^I () Classes allowing a magnetic dipole term (Type-I altermagnets in ref. Cheong2024) (†) Classes allowing spin splitting along a permitted collinear AFM direction (`strong-collinear' altermagnetic classes) (*) Classes not allowing spin splitting along a permitted collinear AFM direction (`weak-collinear' altermagnetic classes)
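As a worked example of using the two tables above, the short script below (notation and implementation are mine) inverts the Class V relations given earlier in this appendix, Λ_14 = -Λ_1+Λ_2+Λ_3, Λ_25 = Λ_1+Λ_2+Λ_3 and Λ_36 = -2Λ_2+Λ_3, so as to express the spherical-basis amplitudes directly in terms of the Cartesian tensor elements.

# Hedged sketch: invert the Class V decomposition onto the spherical basis tensors
# Q^II, Q^III and O^I (amplitudes Lambda_1, Lambda_2, Lambda_3).
import sympy as sp

L1, L2, L3 = sp.symbols('Lambda_1 Lambda_2 Lambda_3')
L14, L25, L36 = sp.symbols('Lambda_14 Lambda_25 Lambda_36')

eqs = [sp.Eq(L14, -L1 + L2 + L3),
       sp.Eq(L25,  L1 + L2 + L3),
       sp.Eq(L36, -2*L2 + L3)]

sol = sp.solve(eqs, [L1, L2, L3])
print(sol[L1])   # Lambda_1 = (Lambda_25 - Lambda_14)/2
print(sol[L2])   # Lambda_2 = (Lambda_14 + Lambda_25 - 2*Lambda_36)/6
print(sol[L3])   # Lambda_3 = (Lambda_14 + Lambda_25 + Lambda_36)/3

A pure Q^II texture (Λ_2 = Λ_3 = 0) therefore corresponds to Λ_25 = -Λ_14 and Λ_36 = 0, which is the rank-3 form listed for Class XV, consistent with its entry in the decomposition table.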
http://arxiv.org/abs/2407.13409v1
20240718113001
Particle Production and Density Fluctuations of Non-classical Inflaton in Coherent Squeezed Vacuum State of Flat FRW Universe
[ "Dhwani Gangal", "Sudhava Yadav", "K. K. Venkataratnam" ]
astro-ph.CO
[ "astro-ph.CO" ]
1]Dhwani Gangal 1]Sudhava Yadav [1]K.K. Venkataratnamkvkamma.phy@mnit.ac.in [1]Department of Physics, Malaviya National Institute of Technology, J. L. N. Marg, Jaipur, 302017, India We study non-classical inflaton, which is minimally coupled to the semiclassical gravity in FRW universe in Coherent Squeezed Vacuum State (CSVS). We determined Oscillatory phase of inflaton, power-law expansion, scale factor, density fluctuations, quantum fluctuations and particle production for CSVS. We obtained an estimated leading solution of scale factor in CSVS proportional to t^2/3 follow similar diversification as demonstrated by Semiclassical Einstein Equation (SCEE) of gravity in matter dominated universe. We also studied the validity of SCEE in CSVS. By determining the quantum fluctuation for CSVS validity of uncertainty relation for FRW Universe also computed. The results shows that Quantum fluctuations doesn't depend on coherent parameter Υ as uncertainty relation doesn't effected by the displacement of Υ in phase space. We study the production of particles in CSVS for oscillating massive inflaton in flat FRW universe. Particle Production and Density Fluctuations of Non-classical Inflaton in Coherent Squeezed Vacuum State of Flat FRW Universe * July 22, 2024 ============================================================================================================================= § INTRODUCTION Universe has started prompt expansion just after Big Bang as suggested by the theory of inflation<cit.>. The big-bang model successfully explain evolution of universe but the model has few limitations in explaining the problems related with flatness, structure formation, monopole, singularity, horizon, homogeneity etc. of the universe. Further these problems were well addressed by inflationary theory <cit.>. There are multiple explanations are available of above problems but the simplest explanation is an exponential expansion of universe. Further, potential energy of an inflaton that dominates in total energy of universe. As for the same potential energy, inflaton field deploy negative pressure leads very rapid expansion of the universe. As soon as the inflation got over, quasi-periodic motion has started in inflaton field whose immensely slowly decreased with time. The quasi-periodic nature of inflaton field produces various particles <cit.>. The conversion of inflaton field and development in universe started rethermalization in universe <cit.>. The transition is well parameterized and optimized by using various reheating parameters <cit.> to plays significant role in understanding of standard matter production in universe. As per the cosmological assumptions, Universe is considered as an isotropic and homogeneous. The universal representation is establish on the basis of Friedmann and Einstein Eqs. of field. In classical gravity, Friedmann equations <cit.> are valid even at an initial stages of universe. Subsequent analysis of universe show that, quantum properties and fluctuations of matter also play significant role in cosmology. Semiclassical Theory of Gravity (SCTG) establish the fact, that gravitational field is based on quantize matter field <cit.>. Even sometimes quantum gravity effects were considered to be negligible at an early stage in absence of proper hypothesis of quantum gravity. 
Ideally, both the matter fields and gravity should be treated quantum mechanically; however, in the absence of a consistent quantum theory of gravity, this is not straightforward. In the SCTG, the Friedmann equations of classical gravity, together with a homogeneous field on a background Friedmann-Robertson-Walker (FRW) metric, are assumed to remain valid at the initial stages of the universe in most inflationary scenarios. Thus, a comprehensive depiction of the early universe can be formulated by employing classical gravity together with quantized matter field(s). It is noteworthy that quantized matter field(s) and their fluctuations remain crucial even when the effects of quantum gravity are assumed to be negligible <cit.>. Several researchers have combined the quantum properties of the inflaton with inflationary theories and the SCTG <cit.>. These inflationary theories rely on initial thermal conditions, effective potentials, quantum effects of the inflaton <cit.>, and probability distributions <cit.> to study the quantum-mechanical inflaton in stochastic inflationary scenarios <cit.>. Within semiclassical quantum gravity, the complete cosmic evolution can be described from the pre-inflationary to the inflationary period of a matter-dominated universe. These theories also propose that, during the oscillatory phase, quantum and classical treatments of the inflaton lead to a similar power-law expansion. The quantum treatment of the inflaton relies on the SCTG within the framework of quantum optical considerations <cit.>. Kennard et al., in their analysis of wave packets, introduced the concept of squeezed states <cit.>, which can be integrated with the semiclassical Friedmann-Robertson-Walker (FRW) universe <cit.>. The anisotropic and expanding nature of the FRW universe has been analyzed further along these lines <cit.>. These studies show a discrepancy between the results of semiclassical and classical gravity: although both give a similar power-law expansion for the quantum as well as the classical inflaton, the correction to the expansion exhibits oscillatory behaviour in semiclassical gravity that is absent in the classical treatment. Particle production in the universe has been delineated for a coherently oscillating inflaton field within the Friedmann-Robertson-Walker (FRW) Universe framework <cit.>. These studies highlight the significant role of quantum phenomena in inflationary theories. Recently, the study of quantum behaviour in cosmology <cit.> using non-classical states has attracted much attention <cit.>. In this work, we examine a massive inflaton field minimally coupled to gravity in the FRW universe, using the Coherent Squeezed Vacuum State (CSVS) to describe the field <cit.>. The CSVS has important applications in cosmology, for example in explaining entropy enhancement <cit.>, particle production, gravitational-wave detection <cit.> and inflationary scenarios <cit.>. The paper is organized as follows: the second section describes the energy-momentum tensor; the third section presents the formulation of the CSVS; in the fourth section we discuss the oscillatory phase of the inflaton, the power-law expansion and the scale factor; the fifth section gives the expression for the density fluctuations in the CSVS; in the sixth section we validate the uncertainty relation in cosmology by computing the quantum fluctuations in the CSVS; the seventh section deals with particle creation in the CSVS formalism; and in the eighth section we describe the outcome of our work.
§ ENERGY-MOMENTUM TENSOR Modern cosmological models are constructed using classical gravity principles, derived from Einstein's field Eqs. within FRW metric. For these models, background metric is typically considered as classical and matter fields are approached from a quantum mechanical perspective. Such theoretical frameworks are commonly referred to as semiclassical theories. Further, the Einstein field Eqs. in semi-classical theory of gravity is (where ℏ=c=1 and G=1/𝑚_𝑝^2) ℰ_μν=8π/𝑚_𝑝^2⟨𝒯_μν⟩. Here 𝒯_μν is energy-momentum tensor and ℰ_μν is the Einstein tensor. The quantum state, which satisfies the time-dependent Schrödinger equation, can be expressed as ∧ℋψ=𝑖∂/∂𝑡ψ, here ∧ℋ is Hamiltonian operator. FRW space-time with generalized variables (𝑟_1,𝑟_2,. 𝑟_3, 𝑟_4) can be written as 𝑑𝑠^2 = -𝑑𝑟_4^2 + 𝒢^2 (𝐭)(𝑑𝑟_1^2 + 𝑑𝑟_2^2 + 𝑑𝑟_3^2), here 𝒢(𝐭) is known as scale factor. Lagrangian density 𝔏 is expressed as 𝔏 = -1/2(m^2Φ ^2+𝔤^μν∂ _μΦ∂ _νΦ)√((-𝔤)). Where Φ is scalar field, now considering the scalar field Φ is homogeneous. Using metric (<ref>), equation (<ref>) can be re-written as 𝔏 =1/2𝒢^3(𝐭)(Φ̇^2-m^2Φ^2). Using Eq. (<ref>), the K-G Eq. written as Φ̈+3𝒢̇ (𝐭)/𝒢 (𝐭)Φ̇+ m^2Φ =0, where 𝒢̇(𝐭)/𝒢(𝐭)=ℌ is the Hubble parameter and ∧Π is the momentum conjugate to ∧Φ is ∧Π = ∂ℒ/∂Φ̇. Using quantization condition, Hamiltonian for inflaton field, which behaves like a time-dependent harmonic oscillator in a suitable quantum state is ⟨ :∧ℋ_m:⟩=1/2𝒢^3 (𝐭)⟨ :∧Π ^2:⟩+1/2𝒢^3(𝐭)m^2⟨ :∧Φ ^2:⟩, where ⟨ :∧Π ^2:⟩ and ⟨ :∧Φ ^2:⟩ are normal ordered expectation values of ∧Π ^2 and ∧Φ ^2. The temporal part of energy-momentum tensor is 𝒯_00= 𝒢^3 (𝐭)(1/2Φ̇^2+1/2m^2∧Φ ^2). § FORMATION OF COHERENT SQUEEZED VACUUM STATE Coherent state can be described as |⟩=𝔇(Υ )|0⟩, here 𝔇() is displacement operator, can be represented as 𝔇(Υ )=exp (Υ∧𝑒 ^†-Υ ^*∧𝑒). Coherent squeezed vacuum state is | ,ζ⟩ =∧W (ρ ,Ψ )𝔇( )|0⟩ , where squeezing operator ∧W(ρ ,Ψ ) is ∧W(ρ ,Ψ )=expρ/2(∧𝑒^2exp (-𝑖Ψ )-∧𝑒^† 2exp(𝑖Ψ)) , here squeezing parameter ρ can take values between 0 ≤ ρ ≤ ∞ and Ψ representing the squeezing angle, can vary between -Π and Π. ∧W(ρ ,Ψ ) have properties like ∧W ^†∧𝑒∧W =∧𝑒coshρ - ∧𝑒 ^†sinhρexp (𝑖Ψ ), ∧W ^†∧𝑒^†∧W =∧𝑒^†coshρ - ∧𝑒sinhρexp (-𝑖Ψ ). The annihilation ∧𝑒 and creation ∧𝑒 ^† operator has following properties ∧𝑒|n,𝐭,Φ⟩ =√(n)|n-1,𝐭,Φ⟩, ∧𝑒 ^† |n,𝐭,Φ⟩ =√(n+1)|n+1,𝐭,Φ⟩, ∧𝑒 ^†∧(𝐭)𝑒(𝐭)|n,𝐭,Φ⟩ =n|n,𝐭,Φ⟩. These operators adhere to a specific commutation relation [∧𝑒,∧𝑒 ^†]=1. The annihilation and creation operators can be calculated as ∧𝑒(𝐭)=Φ ^*(𝐭)∧Π -𝒢^3 (𝐭)Φ̇^*(𝐭)∧Φ, ∧𝑒 ^†(𝐭)=Φ (𝐭)∧Π -𝒢^3 (𝐭)Φ̇(𝐭)∧Φ. Utilizing Eqs. (<ref>-<ref>), the operators ∧Φ and ∧Π exhibit the following relations ∧Φ =1/i(Φ ^*∧𝑒 ^†-Φ∧𝑒), ∧Φ ^2=(2∧𝑒 ^†∧𝑒+1)Φ ^*Φ-(Φ∧𝑒)^2 -(Φ ^*∧𝑒 ^†)^2, ∧Π =i𝒢^3 (𝐭)(Φ̇∧𝑒 -Φ̇^*∧𝑒 ^†), ∧Π ^2=𝒢^3 (𝐭)[(2∧𝑒 ^†∧𝑒+1)Φ̇^*Φ̇-(Φ̇∧𝑒)^2-(Φ̇^*∧𝑒 ^†)^2]. ∧𝑒 and ∧𝑒 ^† combined with 𝔇(Υ) as given by Eq. (<ref>), to produce subsequent characteristics. 𝔇^†∧𝑒 ^†𝔇=Υ ^*+∧𝑒 ^†, 𝔇^†∧𝑒𝔇=Υ+∧𝑒. Applying the operator 𝔇(Υ) and ∧W(ρ ,Ψ ) to the vacuum state yields the CSVS |Υ ,ζ ,0⟩ =𝔇(Υ )∧W(ρ ,Ψ )|0⟩ . § POWER-LAW EXPANSION OF SCEE AND OSCILLATORY PHASE OF INFLATON FOR CSVS In the semi-classical theory of gravity, space-time is treated as classical with quantized matter field. We consider oscillatory phase for inflaton field that is minimally accompanied to a flat FRW metric, under classical gravity, Friedmann Eqs. is (𝒢̇(𝐭)/𝒢(𝐭))^2=8π/3𝑚_𝑝^2𝒯_00/𝒢^3 (𝐭), where, 𝒯_00 is inflaton energy density given in Eq. (<ref>). In terms of ⟨ :∧ℋ_m:⟩, Friedmann Eqs. 
is (𝒢̇(𝐭)/𝒢(𝐭))^2=8π/3𝑚_𝑝^21/𝒢^3 (𝐭)⟨ :∧ℋ_m:⟩. Here Eqs. (<ref>, <ref>-<ref>, <ref>) are used to compute the Hamiltonian in the semiclassical Friedmann Eqs as (𝒢̇(𝐭)/𝒢(𝐭))^2=8π/3𝑚_𝑝^2[(1/2+n)(Φ̇(𝐭)Φ̇^*(𝐭)+m^2Φ (𝐭)Φ ^*(𝐭))], where n is number parameter, Φ ^*(𝐭) and Φ(𝐭) follow Eq.(<ref>) and Wronskian identity for the same is given as 𝒢^3 (𝐭)[Φ (𝐭)Φ̇^*(𝐭)- Φ̇(𝐭)Φ ^*(𝐭)]=𝑖. Using above, we determine SCEE for CSVS, using Eqs. (<ref>-<ref>, <ref>), we get ⟨ :∧Π ^2:⟩ for CSVS as ⟨ :∧Π^2 :⟩_CSVS= 2𝒢^6 (𝐭)[(1/2sinh^2ρ +1/2+^* )Φ̇^*Φ̇ -(1/2sinhρcoshρ -^*2/2)Φ̇^*2 -(1/2sinhρcoshρ -^2/2)Φ̇^2]. Using Eqs. (<ref>-<ref>, <ref>) after computation ⟨ :∧Φ ^2:⟩ for CSVS is ⟨ :∧Φ ^2:⟩_CSVS= 2[(1/2Sinh^2ρ +1/2+ ^*)Φ^*Φ -(1/2SinhρCoshρ-^*2/2)Φ^*2 -(1/2SinhρCoshρ- ^2/2)Φ^2]. Using Eqs.(<ref>, <ref>-<ref>), we got the Hamiltonian in CSVS as ⟨ :∧ℋ:⟩_CSVS= 𝒢^3 (𝐭)[(1/2Sinh^2ρ +1/2+ ^*)(Φ̇^*Φ̇+m^2Φ^*Φ) -(1/2SinhρCoshρ- ^*2/2)(Φ̇^*2+m^2Φ^*2) -(1/2SinhρCoshρ- ^2/2)(Φ̇^̇2̇+m^2Φ^2)]. Substituting Eqs.(<ref>) in Eqs.(<ref>), the semiclassical Einstein equation for CSVS is (𝒢̇(𝐭)/𝒢(𝐭))^2_CSVS= 8π/3𝑚_𝑝^2[(1/2Sinh^2ρ +1/2+ ^*)(Φ̇^*Φ̇+m^2Φ^*Φ) -(1/2SinhρCoshρ- ^*2/2)(Φ̇^*2+m^2Φ^*2) -(1/2SinhρCoshρ- ^2/2)(Φ̇^̇2̇+m^2Φ^2)]. Next, we derive the expression for scale factor based on the equation (<ref>). To determine the normalization constant, we utilize the Wronskian and impose boundary conditions for two independent solutions, as following Φ(𝐭)=ψ(𝐭)𝒢^3/2(𝐭), Substitute Eqs. (<ref>) in (<ref>) then we get ψ̈(𝐭)+(m^2-34(𝒢̇(𝐭)𝒢(𝐭))^2-32𝒢̈(𝐭)𝒢(𝐭))ψ(𝐭)=0. Here oscillatory region follows inequality m^2>34(𝒢̇(𝐭)𝒢(𝐭))^2+32𝒢̈(𝐭)𝒢(𝐭), further oscillatory solution of inflaton is ψ(𝐭)=exp(-i∫χ(𝐭)d𝐭)√(2χ(𝐭)), where χ^2(𝐭) = m^2-34(𝒢̇(𝐭)𝒢(𝐭))^2-32𝒢̈(𝐭)𝒢(𝐭) +34(χ̇(𝐭)χ(𝐭))^2-12χ̈(𝐭)χ(𝐭). Using equation (<ref>) in Eq. (<ref>) we get Φ(𝐭) = 1/𝒢^3/2(𝐭)1/√(2χ(𝐭)) exp(-i∫χ(𝐭) d𝐭), Φ^*(𝐭) = 1/𝒢^3/2(𝐭)1/√(2χ(𝐭)) exp(i∫χ(𝐭) d𝐭), Φ̇(𝐭) = exp(-i∫χ(𝐭) d𝐭)/𝒢^3/2(𝐭)√(2χ(𝐭))[-3/2𝒢̇(̇𝐭̇)̇/𝒢(𝐭) - 1/2χ̇(̇𝐭̇)̇/χ(𝐭) - i χ(𝐭) ], Φ̇^*(𝐭) = exp(i∫χ(𝐭) d𝐭)/𝒢^3/2(𝐭)√(2χ(𝐭))[-3/2𝒢̇(̇𝐭̇)̇/𝒢(𝐭) -1/2χ̇(̇𝐭̇)̇/χ(𝐭) + i χ(𝐭) ]. The scale factor for CSVS can be obtained using Eqs. 
(<ref>-<ref>) in equation (<ref>) as (𝒢̇(𝐭)/𝒢(𝐭))^2_CSVS = 4π/3 m_p^2 χ(𝐭) 𝒢^3(𝐭)[ (sinh^2ρ + Υ ^*Υ + 1/2) (9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2 +3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭) + χ^2(𝐭) + m^2) +(sinh 2ρ/4e^-iφ - Υ^*2/2)exp(2i∫χ(𝐭)dt) ×(9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2+3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭) -χ^2(𝐭) - iχ(𝐭)(3𝒢̇(̇𝐭̇)̇/𝒢(𝐭) + χ̇(̇𝐭̇)̇/χ(𝐭))+m^2 ) +(sinh 2ρ/4e^iφ - Υ^2/2)exp(2i∫χ(𝐭)dt) ×(9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2+3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭) -χ^2(𝐭) - iχ(𝐭)(3𝒢̇(̇𝐭̇)̇/𝒢(𝐭) + χ̇(̇𝐭̇)̇/χ(𝐭)) +m^2 ) ], simplify the above equation as 𝒢^3(𝐭)_CSVS = 4π/3 m_p^2 χ(𝐭) (𝒢̇(𝐭)/𝒢(𝐭))^2[ (sinh^2 ρ + Υ ^*Υ + 1/2) (9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2 +3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭) + χ^2(𝐭) + m^2) +(sinh 2ρ/4e^-iφ - Υ^*2/2)exp(2i∫χ(𝐭)dt) ×(9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2+3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭) -χ^2(𝐭) - iχ(𝐭)(3𝒢̇(̇𝐭̇)̇/𝒢(𝐭) + χ̇(̇𝐭̇)̇/χ(𝐭))+m^2 ) +(sinh 2ρ/4e^iφ - Υ^2/2)exp(2i∫χ(𝐭)dt) ×(9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2+3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭) -χ^2(𝐭) - iχ(𝐭)(3𝒢̇(̇𝐭̇)̇/𝒢(𝐭) + χ̇(̇𝐭̇)̇/χ(𝐭)) +m^2 ) ] , further using perturbation method as well as following approximation ansatzs as χ_0(𝐭) =m 𝒢_0(𝐭) =𝒢_0t^2/3, to solve equation (<ref>), we use Υ = |Υ| e^iθ and approximation ansatz (<ref>-<ref>), in equation (<ref>), hence we get next order solution as 𝒢_1(𝐭)_CSVS = [6π/m_p^2(sinh^2 ρ + Υ ^*Υ + 1/2) m𝐭^2 (1 + 1/2 m^2 𝐭^2) +3π𝐭^2/2m_p^2sinh2ρ(cos(φ-2m𝐭)/m𝐭^2-2/𝐭sin(φ-2m𝐭)) - 3πΥ^2 𝐭^2/m_p^2(cos2(θ-m𝐭)/m𝐭^2-2/𝐭sin2(θ-m𝐭)) ]^1/3. The scale factor of CSVS is shown in equation (<ref>), is a function of squeezing parameter ρ, coherent angle θ, angle of squeezing φ and coherent state parameter Υ. At resonance condition when we set limit m𝐭 >> 1, θ = m𝐭, φ = 2m𝐭 and ρ = 0, the scale factor 𝒢_1(𝐭)_CSVS directly proportional to t^2/3, i.e., the study reveled that the semi-classical Einstein equation of gravity gives similar power law expansion of scale factor as classical Einstein equation of gravity. § VALIDITY OF SCTG AND DENSITY FLUCTUATIONS IN CSVS The Validity of SCTG can be obtained by determining density fluctuation using SCEE. Following connection are used to study the inflaton for various quantum states in FRW universe. = ⟨ :T_μν^2:⟩ - ⟨ :T_μν:⟩ ^2. Here T_μν is energy momentum tensor. The density fluctuations for CSVS is computed using Eq. (<ref>), where ⟨ :T_00^2:⟩_CSVS is given as ⟨ :T_00^2:⟩_CSVS= 1/4𝒢^6(𝐭)m^4⟨ :∧Φ ^4:⟩_CSVS+m^2 /4⟨:∧Φ ^2∧Π ^2:⟩ _CSVS +m^2 /4⟨ :∧Π ^2∧Φ ^2:⟩_CSVS+1/4𝒢^6 (𝐭)⟨ :∧Π ^4:⟩_CSVS. Using Eqs. (<ref>-<ref>, <ref>-<ref>) values of ⟨ :∧Π ^4:⟩_CSVS is ⟨ :∧Π ^4:⟩_CSVS= 𝒢^6(𝐭)/4m^2 𝐭^4[3+Υ ^*4+Υ^4+6Υ ^*2Υ ^2+12Υ ^*Υ -6Υ ^*2-6Υ ^2 -4Υ ^*3Υ -4Υ ^*Υ ^3+ 12Sinh^4ρ+12Cosh^2ρSinh^2ρ+24CoshρSinh^3ρ +12CoshρSinhρ{1+2Υ ^*Υ -Υ ^*2-Υ ^2} +12Sinh^2ρ{1+2Υ ^*Υ -Υ^*2-Υ ^2}]. Using Eqs. (<ref>-<ref>, <ref>-<ref>) values of ⟨ :∧Φ ^4:⟩ _CSVS is ⟨ :∧Φ ^4:⟩_CSVS = 1/4m^2 𝒢^6(𝐭)[3+Υ ^*4+Υ ^4+6Υ ^*2Υ ^2+12Υ ^*Υ -6Υ ^*2-6Υ ^2-4Υ ^*3Υ -4Υ ^*Υ ^3+12Sinh^4ρ +12Cosh^2ρSinh^2ρ +24CoshρSinh^3ρ +12CoshρSinhρ{1+2Υ ^*Υ} -Υ ^*2-Υ ^2+12Sinh^2ρ{1+2Υ ^*Υ -Υ ^*2-Υ ^2}]. Using Eqs. (<ref>-<ref>, <ref>-<ref>) values of ⟨ :∧Φ ^2∧Π ^2:⟩_CSVS is ⟨ :∧Φ ^2∧Π ^2:⟩ _CSVS= 1/4m^2 𝐭^2[3+Υ ^*4+Υ ^4+6Υ ^*2Υ ^2+12Υ ^*Υ -6Υ ^*2 -6Υ ^2-4Υ ^*3Υ -4Υ ^*Υ ^3+12Sinh^4ρ +12Cosh^2ρSinh^2ρ +24CoshρSinh^3ρ+12CoshρSinhρ{1+2Υ ^*Υ-Υ ^*2-Υ ^2} +12Sinh^2ρ{1+2Υ ^*Υ -Υ ^*2-Υ ^2}]. Using Eqs. 
(<ref>-<ref>, <ref>-<ref>) values of ⟨ :∧Π ^2∧Φ ^2:⟩_CSVS is ⟨ :∧Π ^2∧Φ ^2:⟩ _CSVS= 1/4m^2 𝐭^2[3+Υ ^*4+Υ ^4+6Υ ^*2Υ ^2+12Υ ^*Υ -6Υ ^*2-6Υ ^2-4Υ ^*3Υ -4Υ ^*Υ ^3+12Sinh^4ρ +12Cosh^2ρSinh^2ρ +24CoshρSinh^3ρ +12CoshρSinhρ2{1+2Υ ^*Υ -Υ ^*2-Υ ^2} +12Sinh^2ρ{1+2Υ ^*Υ -Υ ^*2-Υ ^2}]. Using (<ref>-<ref>) in Eq. (<ref>), the ⟨ :T_00^2:⟩ _CSVS is ⟨ :T_00^2:⟩ _CSVS= (1/16m^2𝐭^4+1/8𝐭^2+m^2/16)[3+Υ ^*4+Υ ^4 +6Υ ^*2Υ ^2+12Υ ^*Υ -6Υ ^*2-6Υ ^2-4Υ ^*3Υ -4Υ ^*Υ ^3 +12Sinh^4ρ +12Cosh^2ρSinh^2ρ+24CoshρSinh^3ρ +12CoshρSinhρ{1+2Υ ^*Υ -Υ ^*2-Υ ^2} +12Sinh^2ρ{1+2Υ ^*Υ -Υ ^*2-Υ ^2}]. Further, ⟨ :T_00:⟩ _CSVS is given as ⟨ :T_00:⟩ _CSVS =1/2𝒢^3 (𝐭)m^2 ⟨ :∧Φ ^2:⟩_CSVS+1/2𝒢^3 (𝐭)⟨ :∧Π ^2:⟩_CSVS. Using Eqs. (<ref>-<ref>, <ref>-<ref>) values of ⟨ :∧Π ^2:⟩_CSVS is ⟨ :∧Π ^2:⟩_CSVS= 𝒢^3 (𝐭)/2m𝐭^2[2Sinh^2ρ +2CoshρSinhρ+1+Υ ^*Υ -Υ ^*2-Υ ^2], Using Eqs. (<ref>-<ref>, <ref>-<ref>) values of ⟨ :∧Φ ^2:⟩_CSVS is ⟨ :∧Φ ^2:⟩_CSVS= 1/2m𝒢^3 (𝐭[2Sinh^2ρ+2CoshρSinhρ+1+Υ ^*Υ -Υ ^*2-Υ ^2]. Using Eqs. (<ref>) and (<ref>) in Eqs. (<ref>), the ⟨ :T_00:⟩ _CSVS is ⟨ :T_00:⟩ _CSVS= [m/4+1/4m𝐭^2][2Sinh^2ρ +2CoshρSinhρ+1+Υ ^*Υ -Υ ^*2-Υ ^2]. Taking square of (<ref>) is ⟨ :T_00:⟩.^2_CSVS= (1/16m^2 𝐭^4+1/8𝐭^2+m^2 /16)[1+Υ ^*4+Υ ^4+3Υ ^*2Υ ^2 +2Υ ^*Υ -2Υ ^*2-2Υ ^2-2Υ ^*3Υ -2Υ ^*Υ ^3+4Sinh^4ρ +4Cosh^2ρSinh^2ρ +8CoshρSinh^3ρ +4CoshρSinhρ{1+Υ ^*Υ -Υ ^*2-Υ ^2} +4Sinh^2ρ{1+Υ ^*Υ -Υ ^*2-Υ ^2}]. using ⟨ :T_00^2:⟩_CSVS and ⟨ :T_00:⟩.^2_CSVS in Eq. (<ref>), density fluctuations for CSVS is _CSVS= (1/16m^2 𝐭^4+1/8𝐭^2+m^2 /16)[2+3Υ ^*2Υ ^2+10Υ ^*Υ -4Υ ^*2 -4Υ ^2-2Υ ^*3Υ -2Υ ^*Υ ^3+8Sinh^4ρ +8Cosh^2ρSinh^2ρ +16CoshρSinh^3ρ+8CoshρSinhρ{1- ^*2- ^2} +20CoshρSinhρ{ ^*}+8Sinh^2ρ{1-^*2- ^2} +20Sinh^2ρ{ ^*}]. Eq. (<ref>) showing that _CSVS is a function of and ρ, but it strongly depends on Coherent state parameter than ρ. Table <ref> show _CSVS for various squeezing parameter ρ calculated using Eq. (<ref>). For simplicity of explanation, we use =^*=1. Similarly for ^*==0, Eq. (<ref>) reduces to _SVS <cit.>, affirming validity of the approximation used in SCEE. Fig. <ref> plotted variation of _CSVS with ρ, shows increase in _CSVS with increasing value of ρ. 3-D plot (Fig. <ref>) showing similar variation of _CSVS with ρ and t. § VALIDITY OF UNCERTAINTY RELATION AND QUANTUM FLUCTUATIONS IN CSVS Validity of Uncertainty Relation for CSVS is investigated by determining quantum fluctuations in inflaton field for a coherently oscillating non-classical inflaton. Following dispersion formula of Φ and Π are used to study quantum fluctuations in CSVS as ⟨ :Φ̂:⟩_CSVS^2 = ⟨ :Φ̂^2:⟩_CSVS - ⟨ :Φ̂:⟩ ^2_CSVS, and ⟨ :Π̂:⟩_CSVS^2 = ⟨:Π̂^2:⟩_CSVS - ⟨ :Π̂:⟩ ^2_CSVS, here ⟨ :Φ̂^2:⟩, ⟨:Π̂^2:⟩, ⟨ :Φ̂:⟩ ^2, and ⟨ :Π̂:⟩ ^2 are the normal ordered expectation values. In order to evaluate dispersion relation for Coherent Squeezed Vacuum State, using equation (<ref>) we get ⟨ :Φ̂:⟩ ^2 in Coherent Squeezed Vacuum State as ⟨ :Φ̂:⟩ ^2_CSVS = [2 ^*ΦΦ ^*- ^*2Φ ^*2- ^2Φ^2], using Eqs. (<ref>-<ref>) simplifying Eq. (<ref>) ⟨ :Φ̂:⟩ ^2_CSVS = 1/2m𝒢^3 (𝐭)[2 ^*- ^*2e^2im𝐭- ^2e^-2im𝐭], using equation (<ref>) we get ⟨ :Π̂:⟩ ^2 in Coherent Squeezed Vacuum State as ⟨ :Π̂:⟩ ^2_CSVS = 𝒢^6 (𝐭)[2 ^*Φ̇Φ̇^*- ^*2Φ̇^*2- ^2Φ̇^2], using Eqs. (<ref>-<ref>) simplifying Eq. (<ref>) as ⟨ :Π̂:⟩ ^2_CSVS = 𝒢^3 (𝐭)/2m[2 ^*(m^2+1/𝐭^2)- ^*2e^2im𝐭(1/𝐭^2-m^2-2im/𝐭) - ^2e^-2im𝐭(1/𝐭^2-m^2+2im/𝐭)], using equation (<ref>) we get ⟨ :Φ̂^2:⟩ in Coherent Squeezed Vacuum State as ⟨ :Φ̂^2:⟩_CSVS= [(2 ^*+2Sinh^2ρ +1)ΦΦ^*-( ^*2-e^-iΨCoshρSinhρ)Φ ^*2 -( ^2-e^i ΨCoshρSinhρ)Φ ^2], using Eqs. (<ref>-<ref>) simplifying Eq. 
(<ref>) as ⟨ :Φ̂^2:⟩_CSVS = 1/2m𝒢^3 (𝐭)[(2 Sinh^2ρ +1+2 ^*)- ^*2e^2im𝐭 - ^2e^-2im𝐭+2CoshρSinhρcos(Ψ -2m𝐭)], using equation (<ref> and <ref>) in equation (<ref>) we get ⟨ :Φ̂:⟩_CSVS ⟨ :Φ̂:⟩_CSVS = √(1/2m𝒢^3 (𝐭)[(2 Sinh^2ρ+1)+2CoshρSinhρcos(Ψ -2m𝐭)]), using equation (<ref>) we get ⟨ :Π̂^2:⟩ in Coherent Squeezed Vacuum State as ⟨:Π̂^2:⟩_CSVS = 𝒢^6 (𝐭)[(2Sinh^2ρ +1+2 ^*)Φ̇Φ̇^* -( ^*2-e^-i ΨCoshρSinhρ)Φ̇^*2-(^2-e^iΨCoshρSinhρ)Φ̇^2], using Eqs. (<ref>-<ref>) simplifying Eq. (<ref>) as ⟨:Π̂^2:⟩_CSVS= 𝒢^3(𝐭)/2m[(2Sinh^2ρ +1+2^*)(m^2+1/𝐭^2) - ^*2e^2im𝐭(1/𝐭^2-m^2-2im/𝐭)- ^2e^-2im𝐭(1/𝐭^2-m^2+2im/𝐭) +2CoshρSinhρ[cos (Ψ -2m𝐭)(1/𝐭^2-m^2)-2m/𝐭sin (Ψ-2m𝐭)], using equation (<ref> and <ref>) in equation (<ref>) we get ⟨ :Π̂:⟩_CSVS ⟨ :Π̂:⟩ _CSVS = [𝒢^3 (𝐭)/2m[(2 Sinh^2ρ +1)(m^2+1/𝐭^2) +2CoshρSinhρ[cos (Ψ -2m𝐭)(1/𝐭^2-m^2)-2m/𝐭sin (Ψ -2m𝐭)]]]^1/2, using equation (<ref> and <ref>) Quantum fluctuations in inflaton field for Coherent Squeezed Vacuum State given as ⟨ :Φ̂:⟩ _CSVS⟨ :Π̂:⟩ _CSVS = 1/2m[[(2 Sinh^2ρ +1) +2CoshρSinhρcos(Ψ -2m𝐭)][(2 Sinh^2ρ +1)(m^2+1/𝐭^2) +2CoshρSinhρ(cos (Ψ -2m𝐭)(1/𝐭^2-m^2) -2m/𝐭sin (Ψ -2m𝐭))]]^1/2. Equation (<ref>) represents the Quantum fluctuations in inflaton field for Coherent Squeezed Vacuum State. Equation (<ref>) show that, Quantum fluctuations doesn't depend on coherent parameter Υ as uncertainty relation doesn't effected by displacing it with any amount Υ in phase space. As shown in table (<ref>-<ref>), calculated values of Quantum fluctuations for all values of (Ψ -2m𝐭) is greater than 1/2 according to uncertainty relation. The fact can be better understand using 3-D plot (<ref>- <ref>) between Quantum Fluctuations, ρ and (Ψ -2m𝐭). § PARTICLE PRODUCTION OF INFLATON IN CSVS For this study, we also investigated production of particle for inflaton in CSVS within context of semiclassical gravity. Here, we calculate No. of particles created at some time t in vacuum state with reference to initial time t_0 is 𝒩_n(𝐭,𝐭_0) = <0,Φ ,𝐭_0|𝒩̂(𝐭)|0,Φ ,𝐭_0>, here 𝒩̂(𝐭) = e^†e. Using Eqs. (<ref>-<ref>) the number operator and normal order expectation value of it can be written as <:𝒩̂(𝐭):> = ΦΦ^*<:Π̂^2:> -𝒢^3ΦΦ̇^*<:Π̂Φ̂:>-𝒢^3Φ̇Φ^*<:Φ̂Π̂:> + 𝒢^6Φ̇Φ̇^*<:Φ̂^2:>. Using Eqs. (<ref>-<ref>, <ref>-<ref>), the ⟨ :Π̂Φ̂:⟩ for CSVS at initial time t_0 is ⟨ :Π̂Φ̂:⟩ _CSVS = 𝒢^3 (𝐭)[( ^*+Cosh^2ρ)Φ̇(𝐭_0)Φ ^*(𝐭_0) -( ^2-e^i ΨCoshρSinhρ)Φ̇(𝐭_0)Φ(𝐭_0) -( ^*2-e^-i ΨCoshρSinhρ)Φ̇^*(𝐭_0)Φ ^*(𝐭_0) +(Sinh^2ρ + ^*)Φ̇^*(𝐭_0)Φ(𝐭_0)], using Eqs. (<ref>-<ref>, <ref>-<ref>), the ⟨ :Φ̂Π̂:⟩ for CSVS at initial time t_0 is ⟨ :Φ̂Π̂:⟩ _CSVS = 𝒢^3 (𝐭)[( ^*+Cosh^2ρ)Φ(𝐭_0)Φ̇^*(𝐭_0) -( ^2-e^i ΨCoshρSinhρ)Φ̇(𝐭_0)Φ(𝐭_0) -( ^*2-e^-i ΨCoshρSinhρ)Φ̇^*(𝐭_0)Φ ^*(𝐭_0) +(Sinh^2ρ + ^*)Φ̇(𝐭_0)Φ ^*(𝐭_0)], using Eqs. (<ref>-<ref>, <ref>-<ref>), the ⟨ :Φ̂^2:⟩ for CSVS at initial time t_0 is ⟨ :Φ̂^2:⟩_CSVS = [(2 ^*+2 Sinh^2ρ +1)Φ(𝐭_0)Φ ^*(𝐭_0) -( ^*2-e^-i ΨCoshρSinhρ)Φ ^*(𝐭_0)Φ ^*(𝐭_0) -( ^2-e^i ΨCoshρSinhρ)Φ(𝐭_0)Φ(𝐭_0)], using Eqs. (<ref>-<ref>, <ref>-<ref>), the ⟨ :Π̂^2:⟩ for CSVS at initial time t_0 is ⟨ :Π̂^2:⟩_CSVS = 𝒢^6 (𝐭)[(2 Sinh^2ρ +1+2 ^*)Φ̇(𝐭_0)Φ̇^*(𝐭_0) -( ^*2-e^-i ΨCoshρSinhρ)Φ̇^*(𝐭_0)Φ̇^*(𝐭_0) -( ^2-e^i ΨCoshρSinhρ)Φ̇(̇𝐭̇_̇0̇)̇Φ̇(𝐭_0)], Substituting the values of ⟨ :Π̂Φ̂:⟩, ⟨ :Φ̂Π̂:⟩, ⟨ :Φ̂^2:⟩ and ⟨ :Π̂^2:⟩ in Eq. 
(<ref>) then the ⟨:𝒩̂(𝐭):⟩_CSVS is ⟨:𝒩̂(𝐭):⟩_CSVS = 𝒢^6(𝐭)(2 Sinh^2ρ +1+2 ^*)|Φ(𝐭)Φ̇(𝐭_0)-Φ̇(𝐭)Φ(𝐭_0)|^2 + (Sinh^2ρ + ^*) +𝒢^6(𝐭)(e^-i ΨCoshρSinhρ - ^*2)(Φ(𝐭)Φ^*(𝐭)Φ̇^*(𝐭_0)Φ̇^*(𝐭_0) -Φ(𝐭)Φ̇^*(𝐭)Φ̇^*(𝐭_0)Φ^*(𝐭_0) - Φ̇(𝐭)Φ^*(𝐭)Φ^*(𝐭_0)Φ̇^*(𝐭_0) + Φ̇(𝐭)Φ̇^*(𝐭)Φ^*(𝐭_0)Φ^*(𝐭_0)) +𝒢^6(𝐭)(e^i ΨCoshρSinhρ- ^2)(Φ(𝐭)Φ^*(𝐭)Φ̇(𝐭_0)Φ̇(𝐭_0) -Φ(𝐭)Φ̇^*(𝐭)Φ̇(𝐭_0)Φ(𝐭_0) - Φ̇(𝐭)Φ^*(𝐭)Φ(𝐭_0)Φ̇(𝐭_0) + Φ̇(𝐭)Φ̇^*(𝐭)Φ(𝐭_0)Φ(𝐭_0)). Here for simplicity, considering in equation (<ref>) as 𝒩_0(𝐭,𝐭_0) = 𝒢^6(𝐭)|Φ(𝐭)Φ̇(𝐭_0)-Φ̇(𝐭)Φ(𝐭_0)|^2, 𝐐 = (Φ(𝐭)Φ^*(𝐭)Φ̇^*(𝐭_0)Φ̇^*(𝐭_0) -Φ(𝐭)Φ̇^*(𝐭)Φ̇^*(𝐭_0)Φ^*(𝐭_0) - Φ̇(𝐭)Φ^*(𝐭)Φ^*(𝐭_0)Φ̇^*(𝐭_0) + Φ̇(𝐭)Φ̇^*(𝐭)Φ^*(𝐭_0)Φ^*(𝐭_0)), R = (Φ(𝐭)Φ^*(𝐭)Φ̇(𝐭_0)Φ̇(𝐭_0) -Φ(𝐭)Φ̇^*(𝐭)Φ̇(𝐭_0)Φ(𝐭_0) - Φ̇(𝐭)Φ^*(𝐭)Φ(𝐭_0)Φ̇(𝐭_0) + Φ̇(𝐭)Φ̇^*(𝐭)Φ(𝐭_0)Φ(𝐭_0)), Eqs. (<ref>-<ref>) are substituted in Eq.(<ref>) then we get ⟨:𝒩̂(𝐭):⟩_CSVS = (2 Sinh^2ρ +1+2 ^*)𝒩_0(𝐭,𝐭_0) + (Sinh^2ρ + ^*) +𝒢^6(𝐭)(e^-i ΨCoshρSinhρ- ^*2)𝐐 +𝒢^6(𝐭)(e^i ΨCoshρSinhρ- ^2)R. Using Eqs. (<ref>-<ref>) in Eqs. (<ref>-<ref>), 𝐐 and 𝐑 can be calculated as 𝐐 = 1/4χ(𝐭)χ(t_0)𝒢^3(𝐭)𝒢^3(𝐭_0)[exp(2i∫χ(t_0)dt_0)(9/4(𝒢̇(𝐭_0)/𝒢(𝐭_0))^2 + 1/4(χ̇(𝐭_0)/χ(𝐭_0))^2 +3/2𝒢̇(𝐭_0)/𝒢(𝐭_0)χ̇(𝐭_0)/χ(𝐭_0) -χ^2(t_0) - iχ(t_0)(3𝒢̇(𝐭_0)/𝒢(𝐭_0) +χ̇(t_0)/χ(t_0))+ 9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2+3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭)+χ^2(𝐭)) -2(3/4𝒢̇(𝐭_0)/𝒢(𝐭_0) +χ̇(t_0)/4χ(t_0) -i/2χ(t_0) )(3𝒢̇(𝐭)/𝒢(𝐭) + χ̇(𝐭)/χ(𝐭)) ], R = exp(-2i∫χ(t_0)dt_0)/4χ(𝐭)χ(t_0)𝒢^3(𝐭)𝒢^3(𝐭_0)[(9/4(𝒢̇(𝐭_0)/𝒢(𝐭_0))^2 + 1/4(χ̇(𝐭_0)/χ(𝐭_0))^2 +3/2𝒢̇(𝐭_0)/𝒢(𝐭_0)χ̇(𝐭_0)/χ(𝐭_0) -χ^2(t_0) + iχ(t_0)(3𝒢̇(𝐭_0)/𝒢(𝐭_0) +χ̇(t_0)/χ(t_0)) + 9/4(𝒢̇(𝐭)/𝒢(𝐭))^2 + 1/4(χ̇(𝐭)/χ(𝐭))^2+3/2𝒢̇(𝐭)/𝒢(𝐭)χ̇(𝐭)/χ(𝐭)+χ^2(𝐭)) -(3/4𝒢̇(𝐭)/𝒢(𝐭) +χ̇(𝐭)/4χ(𝐭) -i/2χ(𝐭) )(3𝒢̇(𝐭_0)/𝒢(𝐭_0) + χ̇(t_0)/χ(t_0)+2iχ(t_0)) -(3/4𝒢̇(𝐭_0)/𝒢(𝐭_0) +χ̇(t_0)/4χ(t_0) +i/2χ(t_0) )(3𝒢̇(𝐭)/𝒢(𝐭) + χ̇(𝐭)/χ(𝐭)+2iχ(𝐭))) ], using approximation ansatz (<ref>-<ref>) in Eqs.(<ref>-<ref>), we get 𝐐 =exp(2i∫ md𝐭_0) 4m^2 𝒢_o^6𝐭^2 𝐭^2_0[1𝐭^2_0+1𝐭^2-2𝐭𝐭_0-2im𝐭_0+2im𝐭], R =exp(-2i∫ md𝐭_0) 4m^2𝒢_0^6𝐭^2𝐭^2_0[1𝐭^2_0 - 2𝐭𝐭_0+1𝐭^2+2im𝐭_0-2im𝐭]. Use Υ= |Υ|e^iθ, φ=2mt_0, θ=mt_0 with Eqs. (<ref>) and (<ref>) in Eq. (<ref>) we get ⟨:𝒩̂(𝐭):⟩_CSVS ≃ (2sinh^2 ρ +Υ ^*Υ +1)(𝐭-𝐭_0)^2/4m^2𝐭_0^4 + (sinh^2ρ +Υ ^*Υ ) +(sinh2ρ-Υ ^*Υ/2)(𝐭-𝐭_0)^2/4m^2𝐭_0^4. The Eq. (<ref>) show that number of particles produced in CSVS. § RESULTS AND DISCUSSION In this paper, we have conducted an analysis of the quantum effects on the Friedmann-Robertson-Walker universe, focusing on Power-law Expansion, Density Fluctuations, Quantum Fluctuations, and Production of Particles of inflaton for CSVS within framework of semiclassical theory of gravity. We emphasize the significant role of the non-classical state of gravity, particularly in particle production, while considering background metric as classical with quantize matter field <cit.>. Initially, we have determined oscillatory phase of inflaton for CSVS. We have analyzed that density fluctuations for CSVS depends on ρ as well as that can be reduced to squeezed vacuum state <cit.> for =0, demonstrate sustainability of SCTG and inflaton energy density <cit.>. The _CSVS exhibit an inverse proportionality to various powers of time t as demonstrated by the SCEE model. Fig. <ref> illustrates density fluctuations for CSVS, depicts an increase in density fluctuations as the squeezing parameter ρ increase. In our analysis, we adopted a minimal coupling approach between gravity and inflaton to examine importance of massive inflaton within context of an isotropic and homogeneous universe. 
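As a quick numerical illustration of the particle-production result obtained in the previous section, the following short script (illustrative parameter values only, in the natural units used throughout, and with Υ taken real) evaluates ⟨:𝒩̂(𝐭):⟩_CSVS for a few choices of the squeezing parameter ρ and the coherent amplitude |Υ|.

# Hedged sketch: evaluate the approximate CSVS particle number
#   N(t) ~ (2 sinh^2(rho) + |Y|^2 + 1) (t - t0)^2 / (4 m^2 t0^4)
#          + (sinh^2(rho) + |Y|^2)
#          + (sinh(2 rho) - |Y|^2 / 2) (t - t0)^2 / (4 m^2 t0^4)
# for illustrative parameter values (natural units, m*t >> 1 regime assumed).
import numpy as np

def n_csvs(t, t0, m, rho, y):
    growth = (t - t0)**2 / (4.0 * m**2 * t0**4)
    return ((2.0*np.sinh(rho)**2 + y**2 + 1.0)*growth
            + (np.sinh(rho)**2 + y**2)
            + (np.sinh(2.0*rho) - 0.5*y**2)*growth)

m, t0 = 1.0, 10.0                      # illustrative inflaton mass and reference time
t = np.linspace(t0, 5.0*t0, 5)
for rho, y in [(0.0, 0.0), (0.5, 1.0), (1.0, 1.0)]:
    print(f"rho = {rho}, |Y| = {y}:", np.round(n_csvs(t, t0, m, rho, y), 3))
# For rho = |Y| = 0 only the vacuum term (t - t0)^2 / (4 m^2 t0^4) survives, while both
# squeezing and coherent displacement enhance the production, consistent with the
# parameter dependence discussed in the text.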
Additionally, we derived the leading approximate solution of the SCEE for the scale factor in the CSVS. Notably, the scale factor is a function of the squeezing parameter ρ, the coherent angle θ, the squeezing angle φ and the coherent-state parameter Υ. In the limit m𝐭 >> 1, with ρ = 0, θ = m𝐭 and φ = 2m𝐭, the scale factor 𝒢_1(𝐭)_CSVS varies as t^2/3, i.e., the study revealed that the SCEE of gravity provides the same power-law expansion of the scale factor as the classical theory. We have also studied the quantum fluctuations for the CSVS. Our findings indicate that the dispersion of the field is inversely proportional to time, while the dispersion of the momentum is proportional to time. Using Eq. (<ref>), the different quantum states produce values consistent with the uncertainty relation, which shows the validity of the approximations used. Hence, the CSVS holds significant physical importance in preserving the quantum properties of the coherently oscillating inflaton field, ensuring that the oscillating inflaton field retains its quantum characteristics. Additionally, we have investigated particle production in the CSVS of an oscillating massive inflaton within the framework of semiclassical gravity, specifically in the context of an isotropic and homogeneous universe. The particle production for the CSVS, computed in Eq. (<ref>), depends on Υ, ρ, φ and θ. Since the number of produced particles depends on these parameters, they have a considerable influence during the early-universe period. Therefore, the states under consideration play a crucial role in the study of the early universe.
http://arxiv.org/abs/2407.13371v1
20240718102616
0.7 MW Yb:YAG pumped degenerate optical parametric oscillator at 2.06 μm
[ "Anni Li", "Mehran Bahri", "Robert M. Gray", "Seowon Choi", "Sajjad Hoseinkhani", "Anchit Srivastava", "Alireza Marandi", "Hanieh Fattahi" ]
physics.optics
[ "physics.optics" ]
Max Planck Institute for the Science of Light, Staudstrasse 2, Erlangen, 91058, Germany. Friedrich-Alexander-Universität Erlangen-Nürnberg, Staudstrasse 7, 91058 Erlangen, Germany. Max Planck Institute for the Science of Light, Staudstrasse 2, Erlangen, 91058, Germany. Friedrich-Alexander-Universität Erlangen-Nürnberg, Staudstrasse 7, 91058 Erlangen, Germany. Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA Max Planck Institute for the Science of Light, Staudstrasse 2, Erlangen, 91058, Germany. Friedrich-Alexander-Universität Erlangen-Nürnberg, Staudstrasse 7, 91058 Erlangen, Germany. Max Planck Institute for the Science of Light, Staudstrasse 2, Erlangen, 91058, Germany. Max Planck Institute for the Science of Light, Staudstrasse 2, Erlangen, 91058, Germany. Friedrich-Alexander-Universität Erlangen-Nürnberg, Staudstrasse 7, 91058 Erlangen, Germany. Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA hanieh.fattahi@mpl.mpg.de. Max Planck Institute for the Science of Light, Staudstrasse 2, Erlangen, 91058, Germany. Friedrich-Alexander-Universität Erlangen-Nürnberg, Staudstrasse 7, 91058 Erlangen, Germany. § ABSTRACT Frequency comb and field-resolved broadband absorption spectroscopy are promising techniques for rapid, precise, and sensitive detection of short-lived atmospheric pollutants on-site. Enhancing detection sensitivity in absorption spectroscopy hinges on bright sources that cover molecular resonances and fast signal modulation techniques to implement lock-in detection schemes efficiently. Yb:YAG thin-disk lasers, combined with optical parametric oscillators (OPO), present a compelling solution to fulfill these requirements. In this work, we report on a bright OPO pumped by a Yb:YAG thin-disk Kerr-lens mode-locked oscillator delivering 2.8 W, 114 fs pulses at 2.06 μm with an averaged energy of 90 nJ. The OPO cavity operates at 30.9 MHz pulse repetition rates, the second harmonic of the pump cavity, allowing for broadband, efficient, and dispersion-free modulation of the OPO output pulses at 15.45 MHz rate. With 13% optical-to-optical conversion efficiency and a high-frequency intra-cavity modulation, this scalable scheme holds promise to advance the detection sensitivity and frontiers of field-resolved spectroscopic techniques. 0.7 MW Yb:YAG pumped degenerate optical parametric oscillator at 2.06 μm Hanieh Fattahi July 22, 2024 ========================================================================= § INTRODUCTION Highly sensitive and precise monitoring of short-lived climate pollutants in the atmosphere is critical for advanced studies on the carbon cycle, greenhouse gas balances, and disruptions. For example, methane is a key player among greenhouse gases for its substantial role in amplifying global warming and altering climatic patterns <cit.>. Although its presence in the atmosphere is considerably less than that of carbon dioxide —about 1.8 parts per million compared to carbon dioxide's roughly 400 ppm—methane's effect on global warming is markedly more intense, with a warming potential of 25 times greater than carbon dioxide, accounting for about 15% of future global warming scenarios. Additionally, methane is a critical component in the intricate feedback loops of atmospheric chemistry, acting as a signal of the Earth's atmospheric dynamics on a large scale <cit.>. 
Time-domain broadband absorption spectroscopy offers great potential for on-site, highly precise, and sensitive short-lived pollutant detection. It benefits from fast, self-calibrating assessments that do not require sample preparation, facilitating real-time atmospheric monitoring <cit.>. Atmospheric pollutants cover broad fundamental vibration–rotation transitions in the mid-infrared and fingerprint regions. Moreover, their overtone and combination bands at short-wavelength infrared (SWIR) offer a sensitive detection window due to the lower absorption cross-section of water in this spectral range. For instance, methane exhibits distinct absorption at 1.6 μm and 2.2 μm, where the water response is negligible <cit.>. Based on the Beer-Lambert law, the detection signal-to-noise ratio in absorption spectroscopy is proportional to the brightness of the illumination source. Therefore, the detection sensitivity can be significantly enhanced by employing high-power sources operating at megahertz repetition rates. Moreover, when coupled with broadband, high-frequency modulation lock-in techniques, such frontends facilitate a further boost in detection signal-to-noise ratio and sensitivity. Recent progress in frequency comb spectroscopy <cit.> and field-resolved spectroscopy <cit.> from the SWIR up to the ultraviolet spectral range has demonstrated an unparalleled ability to identify and classify molecular responses with exceptional sensitivity and accuracy. The detection sensitivity in both techniques benefits directly from the availability of bright sources with intrinsic broadband modulation. Femtosecond optical parametric oscillators (OPOs) synchronously pumped by Ti:Sa oscillators or fiber lasers have been a crucial tool for expanding the wavelength coverage of frequency combs, as the generated sub-harmonics are locked in frequency and phase to the input pump pulses <cit.>. In particular, when operated at degeneracy, the signal and idler pulses with the same polarization become indistinguishable, exhibiting self-phase-locked behavior through mutual injection and providing a single coherent broadband output centered at degeneracy <cit.>. On the other hand, diode-pumped Yb:YAG thin-disk oscillators are capable of delivering sub-100 fs pulses with tens of microjoules of energy and hundreds of watts of average power <cit.>. When used to pump OPOs, these advanced sources could potentially enable SWIR femtosecond sources to achieve peak powers in the MW range and average powers in the tens of watts, operating at megahertz repetition rates. The spectral coverage of such powerful OPOs can be extended into the fingerprint region or even terahertz frequencies through down-conversion processes. Furthermore, this scheme allows for high-bandwidth, dispersion-free modulation of the output pulses at megahertz rates by precisely controlling the cavity length mismatch between the OPO and its pump laser <cit.>. This capability holds significant promise for enhancing detection sensitivity in spectroscopic applications, particularly in real-time atmospheric monitoring. In this work, we present a degenerate OPO pumped by a Kerr-lens mode-locked Yb:YAG thin-disk oscillator delivering 114 fs, 2.8 W pulses at a 30.9 MHz repetition rate with 90 nJ average pulse energy. By operating the OPO cavity at the second harmonic of the repetition rate of the pump cavity, broadband modulation of the output pulses is achieved.
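For reference, the sensitivity argument above can be made explicit with the Beer-Lambert law. The following is a standard textbook relation added here only for clarity (σ(ν) is the absorption cross-section, N the number density of the absorber, and L the path length); the exact signal-to-noise scaling additionally depends on the dominant noise source and on the lock-in scheme, which are not specified at this point in the text.

```latex
% Beer-Lambert attenuation of a probe beam through a gas sample
% (standard textbook form, not taken from this manuscript).
\begin{equation}
  I(\nu, L) = I_0(\nu)\, e^{-\sigma(\nu) N L}
  \;\approx\; I_0(\nu)\bigl[\, 1 - \sigma(\nu) N L \,\bigr]
  \qquad \text{for } \sigma(\nu) N L \ll 1 .
\end{equation}
% The absorption-induced signal Delta I = I_0 - I ~ I_0 sigma N L grows with the
% source brightness I_0, which is why bright MHz-rate frontends combined with
% fast modulation for lock-in detection are attractive.
```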
§ EXPERIMENTAL RESULTS The degenerate femtosecond OPO is pumped collinearly by a Kerr-lens mode-locked Yb:YAG oscillator delivering 25 W, 384 fs (full width at half maximum, FWHM) pulses at 15.45 MHz and 1030 nm (see Fig. <ref>-a) <cit.>. The pump mode is matched to the OPO cavity by a Keplerian telescope with a magnification ratio of 170%, resulting in a focus size of 74 μm at FWHM. This configuration corresponds to a pump peak intensity of 61 GW/cm^2 at the focus. The OPO is designed as a ring cavity operating at the second harmonic of the repetition rate of the pump, with a total length of 10 m (Fig. <ref>-b). The ring cavity allows for easy in- and out-coupling of the pump beam. Two concave silver mirrors (M2 and M3 in Fig.<ref>-c) with a radius of curvature of 700 mm are used to focus the cavity mode to a beam waist of 83 μm (FWHM). Making the cavity mode 1.1 times larger than the pump beam ensures efficient energy transfer from the pump to the signal. Two dielectric plane mirrors with low group delay dispersion are used for in-coupling and out-coupling of the pump beam (M1 and M4 in Fig.<ref>-c). A dielectric mirror with 30% transmission is used to couple out the generated sub-harmonic pulses. Several highly reflecting silver mirrors with a protective coating fold the cavity. A 1-mm-thick, 5% magnesium-doped periodically poled LiNbO_3 (PPLN) crystal at type-0 phase matching is used for degenerate down-conversion of the pump pulses to 2.06 μm. The PPLN has a period of 30.8 μm and is placed in an oven at a temperature of 116 °C. The end-faces of the crystal are coated with a broadband anti-reflection coating with less than 0.5% reflectivity. The PPLN is placed behind the focus of the pump beam to avoid optical damage in the crystal. Due to the large cavity length and environmental effects, active stabilization of the cavity is required, which is achieved through Pound–Drever–Hall modulation <cit.> using a Red Pitaya-based proportional-integral-derivative (PID) lock-in system. The feedback signal is measured at the OPO output using a photodiode. The lock-in system comprises a microcontroller board (STEMlab 125-14), a voltage controller, a main piezoelectric transducer (PZT), a main piezo controller, and a supplementary PZT chip. Both PZTs are affixed and bonded to one of the folding mirrors in the OPO cavity (mirror M8 in Fig.<ref>-c). The primary purpose of the first PZT is to coarsely scan the cavity length over a micrometer range at an approximate speed of ∼ 0.2 μm/s via the Red Pitaya interface. Fig. <ref>-a shows the excited cavity modes versus the scanning range of the second PZT. The first three modes, indicated by the green-shaded area, are degenerate, while the subsequent non-degenerate modes are shown in the violet-shaded area. The proportional and integral functions of the PID controller are used to generate the feedback signal, with the derivative function deactivated to prevent the possible amplification of undesirable noise. Fig. <ref>-b presents the spectra of various excited modes within the cavity as its length is tuned. Adjusting the cavity length allows the output pulse spectrum to be tuned from 1900 nm to 2200 nm, constrained by the reflectivity of the cavity's internal optics. The wavelength tunability of the cavity can also be achieved via different phase-matching conditions in the nonlinear crystal. Fig. <ref>-c shows the output tunability of the cavity for phase matching of the PPLN at two poling periods of 30.8 μm and 31.4 μm and various temperatures.
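As a rough consistency check on the quoted focusing parameters, the pump peak intensity can be re-derived from the numbers given above (25 W, 15.45 MHz, 384 fs, 74 μm FWHM focus). The sketch below is not from the paper; it assumes sech^2-shaped pulses and a Gaussian focal spot, and the result depends on those conventions, but it lands close to the quoted 61 GW/cm^2.

```python
import numpy as np

# Quoted pump parameters (assumptions: sech^2 pulse shape, Gaussian focal spot).
P_avg = 25.0           # average power, W
f_rep = 15.45e6        # repetition rate, Hz
tau_fwhm = 384e-15     # pulse duration (FWHM), s
spot_fwhm = 74e-6      # focal spot diameter (FWHM), m

E_pulse = P_avg / f_rep                      # pulse energy, ~1.6 uJ
P_peak = 0.88 * E_pulse / tau_fwhm           # sech^2 peak-power factor ~0.88
w0 = spot_fwhm / np.sqrt(2.0 * np.log(2.0))  # 1/e^2 radius of a Gaussian beam
I_peak = 2.0 * P_peak / (np.pi * w0**2)      # on-axis peak intensity, W/m^2

print(f"pulse energy   : {E_pulse * 1e6:.2f} uJ")
print(f"peak power     : {P_peak * 1e-6:.2f} MW")
print(f"peak intensity : {I_peak * 1e-4 * 1e-9:.0f} GW/cm^2")  # ~60 GW/cm^2

# The same bookkeeping applied to the OPO output quoted in the abstract
# (2.8 W at 30.9 MHz, 114 fs) gives ~90 nJ and ~0.7 MW peak power.
E_opo = 2.8 / 30.9e6
print(f"OPO pulse energy: {E_opo * 1e9:.0f} nJ, "
      f"peak power: {0.88 * E_opo / 114e-15 * 1e-6:.2f} MW")
```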
At degeneracy, the cavity operates with net negative dispersion and an estimated total group velocity dispersion of -88 fs^2. For intra-cavity compression of the degenerate OPO pulses, a 0.9-mm-thick borosilicate plate is inserted into the cavity at Brewster's angle. The output pulses of the OPO are characterized using second-harmonic generation frequency-resolved optical gating (SHG-FROG) with a 20-μm-thick BBO crystal. Fig. <ref>-a and Fig. <ref>-b show the measured and retrieved spectrograms along with the temporal profile of the OPO pulses, corresponding to 114 fs at FWHM when the cavity runs at degeneracy. The spectral bandwidth of the OPO pulse supports 110 fs Fourier-transform-limited pulses. Fig. <ref>-c shows the output power of the OPO versus the input pump power. The output power of the OPO is measured after two long-pass filters to exclude the contribution of other parasitic nonlinear effects in the crystal that co-propagate with the OPO pulses. With a 30% output coupler, the OPO has 13% optical-to-optical efficiency and an oscillation threshold at 4 W pump power. The purple area in the curve indicates that the cavity is approaching saturation. The cavity length of the OPO is half that of the pump laser, resulting in the generation of an OPO pulse when the first pump pulse is incident on the nonlinear crystal. This initial OPO pulse travels within the cavity until the arrival of the second pump pulse. During each round trip within the cavity, the OPO output pulses are coupled out through the output coupler. As the cavity operates at the second harmonic of the pump repetition rate, the OPO pulse meets the pump pulse and obtains parametric gain after traveling two round trips in the OPO cavity. Therefore, the output pulse train of the OPO has a repetition rate of 30.9 MHz, which is determined by the cavity length of the OPO. The time interval between two consecutive pulses in the train is approximately 32.4 ns. In this regime, the cavity output is modulated at 15.45 MHz with a broadband modulation depth of 41% (Fig. <ref>-b). The modulation depth between two adjacent pulses can be tuned by varying the cavity loss in the OPO cavity. For small cavity losses, the energy of the succeeding pulse will be only a few percent less than that of the previous one. The inherent capability for adjustable broadband modulation makes such cavities an ideal frontend for highly sensitive spectroscopic applications. § CONCLUSION The availability of broadband, bright sources from the ultraviolet to the terahertz range is crucial for sensitive, precise, and live monitoring of short-lived climate pollutants in the atmosphere <cit.>. The fundamental resonances of molecules have a unique fingerprint in the mid-infrared spectral range, allowing for precise spectroscopic selectivity. Overtone and combination bands of molecules in the SWIR provide similar information with the advantage of a lower water absorption cross-section. Among the various down-conversion schemes aimed at generating broadband pulses for absorption spectroscopy across overtone, fundamental, and rotational resonances, the OPO stands out for its higher conversion efficiency <cit.>. Yb:YAG thin-disk oscillators offer great potential to pump these OPOs and to elevate the brightness, repetition rates, and peak power of the femtosecond pulses in these spectral ranges. Moreover, enhancing detection sensitivity in absorption spectroscopy requires implementing lock-in systems, necessitating high-rate signal modulation.
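To make the intrinsic modulation scheme described above concrete before turning to alternative modulation techniques, the toy sketch below builds a 30.9 MHz pulse-energy sequence in which every other pulse is weaker and confirms that its strongest spectral component sits at half the repetition rate. The 41% depth is taken from the text; everything else (sequence length, normalization) is invented for the illustration.

```python
import numpy as np

f_rep = 30.9e6    # OPO pulse repetition rate, Hz
depth = 0.41      # modulation depth between adjacent pulses (quoted above)
n_pulses = 1024   # arbitrary length for the toy pulse train

# Alternating strong/weak pulse energies (arbitrary units).
energies = np.where(np.arange(n_pulses) % 2 == 0, 1.0, 1.0 - depth)

# Spectrum of the pulse-energy sequence sampled at the repetition rate.
spectrum = np.abs(np.fft.rfft(energies - energies.mean()))
freqs = np.fft.rfftfreq(n_pulses, d=1.0 / f_rep)

print(f"pulse spacing       : {1.0 / f_rep * 1e9:.1f} ns")                  # ~32.4 ns
print(f"strongest component : {freqs[np.argmax(spectrum)] / 1e6:.2f} MHz")  # 15.45 MHz
```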
Existing techniques like acousto-optic <cit.> or mechanical modulation suffer from limitations such as insufficient bandwidth, low modulation rates, high spectral dispersion, or low throughput. This work presented the first Yb:YAG thin-disk-pumped OPO containing a broadband, dispersion-free modulator. The OPO comprised a ring cavity and a 1-mm-thick PPLN crystal operating at the second harmonic of the Yb:YAG cavity repetition rate, at 30.9 MHz. The frontend delivers 114 fs, 2.8 W pulses at 2.06 μm with an average energy of 90 nJ. The wavelength tunability of the cavity from 1900 nm to 2200 nm is demonstrated by adjusting the cavity length, overlapping with several crucial atmospheric gas resonances in the SWIR, particularly with the water-free window of methane. Fig. <ref> compares the performance of the frontend with state-of-the-art sources at MHz repetition rates in the overtone and combination band spectral regions. The intrinsic broadband, dispersion-free signal modulation at 15.45 MHz offers a great opportunity for enhancing detection sensitivity in field-resolved spectroscopy. The system holds potential for further wavelength tunability by optimizing the intracavity dispersion or employing other nonlinear crystals. At the same time, the average power and peak power of the OPO pulses can be scaled by implementing higher-energy Yb:YAG oscillators at MHz repetition rates, allowing for the down-conversion of the OPO pulses to the terahertz and mid-infrared spectral ranges <cit.>. With an intrinsic modulator, this scalable scheme holds promise to pave the path for atmospheric monitoring of short-lived pollutants and novel spectroscopic techniques <cit.>. We thank Kilian Scheffter for insightful discussions. Anni Li and Mehran Bahri acknowledge scholarships from the Max Planck School of Photonics, which is supported by the German Federal Ministry of Education and Research (BMBF), the Max Planck Society, and the Fraunhofer Society. § DECLARATIONS * This work was supported by research funding from the Max Planck Society. * Conflict of interest/Competing interests: The authors do not declare any competing interests. § DATA AVAILABILITY The data supporting this study's findings are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2407.13118v1
20240718025746
Evaluating the evolution and inter-individual variability of infant functional module development from 0 to 5 years old
[ "Lingbin Bian", "Nizhuan Wang", "Yuanning Li", "Adeel Razi", "Qian Wang", "Han Zhang", "Dinggang Shen", "the UNC/UMN Baby Connectome Project Consortium" ]
q-bio.NC
[ "q-bio.NC", "stat.CO" ]
§ ABSTRACT The segregation and integration of infant brain networks undergo tremendous changes due to the rapid development of brain function and organization. Traditional methods for estimating brain modularity usually rely on group-averaged functional connectivity (FC), often overlooking individual variability. To address this, we introduce a novel approach utilizing Bayesian modeling to analyze the dynamic development of functional modules in infants over time. This method retains inter-individual variability and, in comparison to conventional group averaging techniques, more effectively detects modules, taking into account the stationarity of module evolution. Furthermore, we explore gender differences in module development under awake and sleep conditions by assessing modular similarities. Our results show that female infants demonstrate more distinct modular structures between these two conditions, possibly implying relatively quiet and restful sleep compared with male infants. § INTRODUCTION Magnetic resonance imaging (MRI) provides a non-invasive means to assess both the structure and function of the infant brain, serving as a vital tool in understanding the mechanisms of infant brain development. For instance, the Baby Connectome Project (BCP) released cross-sectional and longitudinal high-resolution multimodal MRI data from 500 typically developing subjects from birth to 5 years old <cit.>. The Developing Human Connectome Project (dHCP) published MRI data from 1500 subjects ranging from 20 to 44 weeks of gestation <cit.>. These large-scale datasets contribute significantly to expanding our knowledge of early brain development and provide crucial insights into the origins and abnormal developmental trajectories of neurodevelopmental disorders such as autism, schizophrenia, and attention deficit hyperactivity disorder (ADHD). Spontaneous neural activity is believed to be related to the intrinsic functional organization of the human brain <cit.>. Functional MRI (fMRI) is one of the most common methods for capturing brain function. It measures the blood oxygen level dependent (BOLD) signal, providing an indirect in vivo measurement of neural activity <cit.>. Early brain development in infancy is challenging to grasp, especially the patterns of how functional connectivity (FC), which quantifies the statistical interdependence among regional time series derived from the BOLD signals, emerges and develops in the initial years after birth. Task-related fMRI has been widely used to study older children, adolescents, and adults to detect brain functional development. However, young children, especially newborns, infants, and toddlers, are less likely to perform specific tasks. Therefore, resting-state fMRI (rs-fMRI) remains an indispensable tool, as it allows researchers to avoid having children perform specific tasks <cit.>. The BCP provides unprecedented high-resolution rs-fMRI and a dedicated processing pipeline, greatly assisting researchers in constructing functional networks and characterizing the longitudinal development of infant and toddler brain function.
Using rs-fMRI, we can explore the developmental process of infant brain function, which develops extremely rapidly and exhibits significant differences compared to adults, particularly in terms of the longitudinal dynamic segregation and integration of brain networks characterized by the evolution of modular structure <cit.>. The modular structure reflects how the brain regions (network nodes) are allocated to different brain subsystems. In addition, changes of FC in the infant brain can reflect the rapid development of behavior and cognitive functions at different age stages <cit.>. Analyzing cross-sectional data allows researchers to compare infant FC with that of an older cohort (e.g., adults) or perform longitudinal comparisons in later stages of development <cit.>. The organization of brain function or cognition can be characterized by a multiscale hierarchical structure, extending from individual neurons and macrocolumns to larger, macroscopic brain regions <cit.>. These FC patterns can vary significantly across different individuals <cit.>. The variations in FC patterns arise not only from fluctuations in latent cognitive states, encompassing mental processes like thoughts, ideas, awareness, arousal, and vigilance <cit.>, occurring unpredictably during both resting state <cit.> and task-related brain activity <cit.>, but also from non-neural physiological factors, including head motion, cardiovascular and respiratory effects, and noise stemming from hardware instability <cit.>. Both the cognitive states and the presence of noise change the FC between pairs of brain areas, which may result in significant changes in the modular structure of FC <cit.>. A significant challenge lies in the unreliability of estimating the modular structure using group averaged FC. The first issue is that the group averaged FC ignores individual-specific information that is not shared across individuals <cit.>. Another issue is that outliers among individual FCs may bias the estimation of the group averaged FC. Therefore, it is very important to find a way to estimate the group-level modular structure that depicts the subsystems of functional networks and quantifies the inter-individual variability at each stage of infant brain development. In fMRI studies, several module-detection methods have been employed to characterize the subsystems of functional brain networks. For example, a stochastic block model with non-overlapping sliding windows was employed to characterize dynamic FC in networks. In this approach, edge weights were estimated by averaging coherence matrices across subjects, and a threshold was applied to binarize the matrices <cit.>. However, this method might not preserve the full information of the time series and may overlook the assessment of inter-subject variability. A recent Bayesian inference-based method <cit.> estimated the group averaged adjacency matrix, treating it as an observation that retains all information about the time series of the subjects. Nevertheless, this approach overlooked the variability in FC across different subjects and disregarded the higher-order topological properties of an individual's network. It utilized a Bayesian latent block model (LBM) with a Gaussian distribution to identify the modular structure, inferred using Markov chain Monte Carlo (MCMC) sampling <cit.>. The proposed model included a pre-determined number of modules through model selection.
One problem with methods based on block models and MCMC is that their estimation performance is limited for high network dimensionality, especially for brain networks with a large number of regions. In order to reliably identify the modular structure of FC, recent studies have shifted their attention to multiple networks across various subjects. They employ group-level analysis to estimate common features of network patterns. One method based on multi-subject stochastic block modelling can flexibly assess inter-individual variations <cit.>, but it also treats the FCs as binary edges. Another method for multi-subject analysis is based on multilayer modularity <cit.>, where a modularity quality function is optimized to partition the functional networks into subsystems with different modular resolutions. Alternative approaches capable of capturing the dynamics of brain networks at both individual and group levels, while considering between-subject variations in BOLD time series, include <cit.> and <cit.>. All of the above module detection methods were applied to studies of functional networks of adults. For module detection in infant studies, <cit.> depicted the first-year development of modules in infant brain functional networks with a 3-month interval and reported an increasing number of functional modules during infant development. However, one limitation of this method is that only the five subjects with the highest or weakest connectivity were adopted for calculating the final group-level FC, which may lose a large amount of information contained in the group. In this paper, we capture the modular structure of the group representative network based on a multilayer module detection method, which is able to retain the variability across subjects for infant functional module development. Specifically, for individual-level FC, we use the maximization of the modularity quality function to estimate the modular structure, quantified by module labels, with different modularity resolutions <cit.>. In the maximization of the modularity quality function, we want to find the best partition of the network, such that there are more edges within modules than the expected number of edges by chance. For the group representative network, we model the estimated individual-level modules with a Bayesian method based on a categorical-Dirichlet conjugate pair and define a maximum label assignment probability matrix (MLAPM) providing information about the group-level modules. For module development, we define a Jaccard similarity coefficient to evaluate the evolution of modules across the longitudinal development of infant functional networks. Compared with the aforementioned conventional group averaging methods, the method based on Bayesian modelling is more robust with respect to the stationarity of the module evolution for different modularity resolutions. Finally, we analyse the infant functional module development using our proposed method and compare the module similarity between sleeping and awake conditions for both female and male infants. We discovered that female infants demonstrated more distinctive modules between sleeping and awake conditions, which may imply relatively quiet and restful sleep compared with male infants. § METHODS The segregation and integration of functional networks change rapidly during infant brain development (Fig.<ref>a). Suppose that the participants are divided into different age brackets (e.g. 0-5 month, 3-8 month, etc.), and there are different numbers of subjects within each age range (e.g.
S_1 in 0-5 month, S_2 in 3-8 month, ..., and S_t for infants older than 36 month). In this paper, we develop a novel multilayer modular structure detection method based on modularity at the individual level (Fig.<ref>b) and Bayesian modelling at the group level (Fig.<ref>d) to robustly estimate the modular structure of the group representative network, meanwhile accounting for inter-individual variability within each age range. We will first illustrate how to model the FC of each subject with modularity <cit.> and estimate the individual-level module labels by maximizing the modularity quality function (Fig.<ref>b). Then, we will elaborate the utilization of conjugate Bayesian pairs for modelling the estimated module labels at the individual level. We conduct parameter inference by sampling from posterior density to characterize the group-level modular structure (Fig.<ref>d). Note that in Bayesian inference, the posterior density results from the combination of prior and likelihood. If both the prior and posterior are derived from the same distribution family, we refer to the prior as a conjugate for the likelihood, forming a prior-likelihood conjugate pair. §.§ Individual-level modelling of the modular structure In this work, the modular structure characterizes how the brain regions are allocated to different subsystems. Each module represents a specific brain subsystem. For each individual functional network, the modular structure is detected by the maximization of its modularity which partitions a network into modules such that there are more densely connected edges within the modules than the edges expected by chance. We first illustrate the modularity quality function Q <cit.> in terms of the module stability <cit.>. We elaborate the module stability (Fig.<ref>c) as follows. Suppose that there is a random walker moving around the network and the discrete-time dynamics of the random walker follows p_i;n+1=∑_jA_ij/k_jp_j;n, where p_j;n is the probability density of the random walker on node j at step n, and p_i;n+1 is the probability density of the random walker moving from node j to node i at step n+1. A_ij is the weight of connectivity going from j to i, and k_j=∑_iA_ij is the strength of the connections going into node j. The stationary probability density of the random walker characterizes the chance of the random walker located at node i at stationarity and it can be written as p_i^*=k_i/2m, where m=1/2∑_i,jA_ij is the sum of the edge strength of the network. The discrete module stability with time step one R(1) characterizes the probability of a random walker to be in the same module during two successive time steps minus the probability for two independent walkers in the same module at stationarity, and it can be defined as R(1) = ∑_i,j[A_ij/k_jk_j/2m-k_i/2mk_j/2m]δ(g_i,g_j) = 1/2m∑_i,j[A_ij-k_ik_j/2m]δ(g_i,g_j) = Q, the form of which is exactly the same as the modularity quality function Q. The delta function δ(g_i,g_j)=1, if node i and j are in the same module (g_i=g_j), and 0 otherwise. <cit.> expanded the module stability to the continuous case, where the random walker jumps around the network with independent and identical homegeneous Poisson process governed by the normalized Laplacian dynamics ṗ_i = ∑_jA_ij/k_jp_j+p_i. Given the stationary probability density p_j^*=k_j/2m, the continuous module stability with time t can be defined as R(t)=∑_i,j[(e^tL)_ijk_j/2m-k_i/2mk_j/2m]δ(g_i,g_j), where L_ij=Aij/k_j-δ_ij. 
If the linear approximation (e^tL)_ij≈δ_ij+tL_ij is introduced, the quality function can be derived as Q(t)=1/2m∑_i,j[tA_ij-k_ik_j/2m]δ(g_i,g_j). Dividing by t to this equation does not affect the optimization of the quality function, so the quality function can be rewritten with resolution parameter γ=1/t, such that Q=1/2m∑_i,j[A_ij-γk_ik_j/2m]δ(g_i,g_j). In this work, the best partition of the network is estimated by maximizing the above modularity quality function Q, which is implemented by a MATLAB function modularity_und.m in Brain Connectivity Toolbox <cit.>. The resolution of the modules which determines the mean module size and the number of modules is controlled by the resolution parameter γ. Larger value of γ results in larger number of modules and the smaller mean module size. For individual-level analysis in this work, we apply the modularity to each subject within each age range with different values of γ, resulting in coarse-to-fine partitions of network for each subject. In the next section, we will illustrate the group-level modelling and how to characterize the group-level modular structure in each age range. §.§ Group-level analysis of the module development At the group level, we model the module labels of networks estimated from the individual-level analysis in each age range. We take the individual module labels as the observation for group-level analysis (see in Fig.<ref>d). Suppose that there are S number of subjects (note that S is different in different age ranges). We define a matrix Z=[ z_11 ⋯ z_1s ⋯ z_1S; ⋮ ⋱ ⋮ ⋱ ⋮; z_i1 ⋯ z_is ⋯ z_iS; ⋮ ⋱ ⋮ ⋱ ⋮; z_N1 ⋯ z_Ns ⋯ z_NS; ] to represent the module labels for all of the subjects estimated using the modularity quality function, where N is the number of nodes and S is the number of subjects in a specific age range. Each row vector z_i contains the module labels of a specific node i over all of the subjects within the age range, and each column vector z_s represents the module labels of a specific subject s in the group. We model each row of Z, namely z_i=(z_i1,⋯,z_is,⋯,z_iS) for S subjects for node i using a categorical-Dirichlet conjugate pair. Each label z_is follows a categorical distribution z_is∼(1;r_i), where r_i=(r_i1,⋯,r_ik,⋯,r_iK) is a vector of label assignment probabilities (LAP) such that ∑_k=1^K r_ik=1, and K is the maximum number of elements in Z in the group. We define a label assignment probability matrix (LAPM) R=[ r_11 ⋯ r_1k ⋯ r_1K; ⋮ ⋱ ⋮ ⋱ ⋮; r_i1 ⋯ r_ik ⋯ r_iK; ⋮ ⋱ ⋮ ⋱ ⋮; r_N1 ⋯ r_Nk ⋯ r_NK; ], and p(z_is|r_i,K)=∏_k=1^Kr_k^I_k(z_is), I_k(z_is)= 1, z_is=k 0, z_is≠ k . Consider a prior of Dirichlet distribution r_i∼(α) p(r_i| K)=N(α)∏_k=1^Kr_k^α_k-1, with the normalization factor N(α)=Γ(∑_k=1^Kα_k)/∏_k=1^KΓ(α_k). In this paper, we set α_k=1. The posterior can be expressed as p(r_i|z_i,K) ∝ ∏_s=1^S p(z_is|r_i) × p(r_i) = ∏_s=1^S∏_k=1^Kr_k^I_k(z_is)× N(α)∏_k=1^Kr_k^α_k-1 = N(α)∏_k=1^Kr_k^∑_s=1^SI_k(z_is)+α_k-1 = N(α)/N(α')N(α')∏_k=1^Kr_k^∑_s=1^SI_k(z_is)+α_k-1, where α_k'=∑_s=1^SI_k(z_is)+α_k, N(α')=Γ(∑_k=1^K(∑_s=1^SI_k(z_is)+α_k))/∏_k=1^KΓ(∑_s=1^SI_k(z_is)+α_k), and the posterior r_i|z_i∼(α'). The posterior for the network can be expressed as p(R|Z,K)=∏_i=1^Np(r_i|z_i,K). We use the maximum LAPM (MLAPM), R_, the maximum probability at each row of R, to provide the information of modular structure as shown in Fig.<ref>d. The column indices of R_ after removing all of the zero columns from the estimation of the group-level module labels denoted as z^G. 
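As a minimal, illustrative sketch of this group-level step (not the authors' implementation), the snippet below forms the posterior Dirichlet parameters α'_ik = ∑_s I_k(z_is) + α_k for each node, uses the posterior mean in place of sampling to obtain an LAPM, and then builds the MLAPM and group-level labels z^G. The toy label matrix and variable names are invented for the example, and α_k = 1 is used as in the text.

```python
import numpy as np

def group_mlapm(Z, K, alpha=1.0):
    """Group-level module estimate from individual-level labels.

    Z     : (N, S) integer array; Z[i, s] is the module label (1..K) of node i
            for subject s, as estimated by modularity maximization.
    K     : maximum number of module labels observed in the group.
    alpha : symmetric Dirichlet prior parameter (alpha_k = 1 in the text).
    """
    N, _ = Z.shape
    # Posterior Dirichlet parameters alpha'_ik = sum_s I_k(z_is) + alpha_k.
    counts = np.stack([(Z == k).sum(axis=1) for k in range(1, K + 1)], axis=1)
    alpha_post = counts + alpha

    # Posterior-mean label assignment probability matrix (LAPM); the paper
    # instead samples r_i from the Dirichlet posterior.
    lapm = alpha_post / alpha_post.sum(axis=1, keepdims=True)

    # MLAPM: keep only the maximum probability in each row, read off the
    # group-level labels z^G, and drop unused (all-zero) columns.
    z_group = lapm.argmax(axis=1) + 1
    mlapm = np.zeros_like(lapm)
    mlapm[np.arange(N), z_group - 1] = lapm.max(axis=1)
    return mlapm[:, mlapm.sum(axis=0) > 0], z_group

# Toy example: 6 nodes, 4 subjects, at most 3 modules.
Z_toy = np.array([[1, 1, 1, 2],
                  [1, 1, 1, 1],
                  [2, 2, 3, 2],
                  [2, 2, 2, 2],
                  [3, 3, 3, 3],
                  [3, 2, 3, 3]])
mlapm, z_group = group_mlapm(Z_toy, K=3)
print(z_group)  # group-level module label for each node
```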
We do this modular structure estimation for the group representative network in each age range using the same process. §.§ Evaluating module evolution In this section, we quantify the module evolution using the Jaccard similarity coefficient J(z_t^G,z_t+τ^G), which is defined as J(z_t^G,z_t+τ^G)=|z_t^G∩z_t+τ^G|/|z_t^G∪z_t+τ^G|, where z_t^G is the group-level modular structure in the age range t (e.g. t=0 for 0-5 month, t=3 for 3-8 month), and z_t+τ^G is the modular structure for the age range τ months later (τ takes the values of 6, 12, and 18). Here, ∩ indicates the intersection of two vectors of module labels, and ∪ is the union. We also refer to the Jaccard similarity coefficient as the J value in the rest of the paper. A larger J value means that the modular structures of the two age ranges are more similar. For two consecutive non-overlapping age ranges, a smaller J value means the module evolution of infant brain development is relatively stationary. §.§ BCP fMRI data acquisition and preprocessing In this paper, we use the rs-fMRI data from the BCP <cit.>, which involves infants from 2 weeks to 72 months of age. Scans from 546 subjects are collected using a 3T Siemens Prisma MRI scanner with a 32-channel head coil. In this work, rs-fMRI with both phase-encoding directions, anterior-to-posterior (AP) and posterior-to-anterior (PA), is utilized. The protocol parameters for rs-fMRI include FOV: 208 mm×208 mm, voxel size: 2 mm isotropic, flip angle: 52^∘, TE: 37 ms, and TR: 800 ms (see more detailed information about the parameters in <cit.>). The rs-fMRI preprocessing <cit.> includes head motion correction, distortion correction, anatomical registration, one-step resampling, and denoising <cit.>. The FC (with 100, 200, 300, and 400 ROIs) is constructed by calculating the correlation matrix of the extracted BOLD signals using the Schaefer atlas <cit.>. The dataset was partitioned into different age ranges as shown in Table <ref>. § EXPERIMENTS AND RESULTS We first evaluate the maximization of the modularity quality function with different resolutions γ using three metrics: the modularity quality Q, the number of communities K, and the mean community size S. Secondly, we evaluate the performance of Bayesian modelling at the group level with respect to different values of the resolution parameter γ. We compare the results of using Bayesian modelling and conventional group averaging methods for group-level analysis in terms of the module evolution of functional networks. Finally, we compare the module evolution between female and male infants under sleeping and awake conditions. §.§ Individual-level analysis based on modularity For individual-level analysis, we apply the maximization of the modularity quality function to each subject in each age range. We visualize the quality function Q for each subject with different values of the resolution parameter γ (γ=0.9, 1.0, 1.1, ..., to 2.5) for a specific age range (AP, with 100 ROIs) as shown in Fig.<ref>a, where different colour dots represent different subjects. We can see that Q decreases with increasing γ. We also plot the number of modules K and the mean module size S against γ. K decreases (Fig.<ref>b) and S increases (Fig.<ref>c) with increasing γ. §.§ Group-level analysis based on Bayesian modelling We visualize the estimation of the group-level modular structure based on Bayesian modelling with different values of γ for a specific age range (e.g. age range 0-5 month) as shown in Fig.<ref>.
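Before describing the group-level estimates in more detail, the quantities used throughout these experiments can be sketched compactly. The snippet below shows one way to build a correlation-based FC matrix from regional BOLD time series, to evaluate the resolution-dependent quality function Q for a given partition, and to compute a J value between two label vectors. It is an illustrative reading of the definitions above (in particular, the intersection of two label vectors is interpreted as the set of nodes with identical labels, and negative correlations are simply clipped), not the authors' code.

```python
import numpy as np

def fc_matrix(bold):
    """Correlation FC from a (T, N) array of regional BOLD time series."""
    return np.corrcoef(bold, rowvar=False)

def modularity_q(A, labels, gamma=1.0):
    """Q = (1/2m) * sum_ij [A_ij - gamma * k_i * k_j / (2m)] * delta(g_i, g_j)."""
    k = A.sum(axis=1)
    two_m = k.sum()
    same = labels[:, None] == labels[None, :]
    return float(((A - gamma * np.outer(k, k) / two_m) * same).sum() / two_m)

def j_value(z_a, z_b):
    """Jaccard-style similarity between two group-level label vectors,
    reading the intersection as the nodes on which the labels agree."""
    agree = int((z_a == z_b).sum())
    return agree / (len(z_a) + len(z_b) - agree)

# Toy usage with random data (shapes only; this is not BCP data).
rng = np.random.default_rng(0)
bold = rng.standard_normal((420, 100))    # 420 time points, 100 ROIs
A = np.clip(fc_matrix(bold), 0.0, None)   # keep positive correlations only
np.fill_diagonal(A, 0.0)
labels = rng.integers(1, 6, size=100)
print(modularity_q(A, labels, gamma=1.5), j_value(labels, labels))
```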
At the group level, categorical-Dirichlet conjugate pairs are used to model the module labels of each row vector in Z estimated at the individual level. The module labels of each node across all of the subjects in a specific age range are regarded as the observations of a categorical-Dirichlet conjugate pair (see Fig. <ref>). An LAPM is inferred by sampling from a Dirichlet posterior density (the upper panel of Fig.<ref>). Subsequently, an MLAPM is formed, encompassing information about the group-level modular structure. This is achieved by preserving the maximum probability within each row of the LAPM, signifying the highest likelihood of the node being assigned to a particular module (the lower panel of Fig.<ref>). Finally, the zero columns are removed, resulting in the final version of the MLAPM summarizing the group-level modular structure, where the number of columns represents the estimated number of modules K at the group level. The row index denotes the node number, while the column index signifies the module labels at the group level. The bar indicates the value of the maximum assignment probability for the labels. The module labels of different age ranges are inconsistent with each other due to the label-switching phenomenon. Here, we used the relabelling algorithm <cit.> based on the minimization of label vector distance and the square assignment algorithm <cit.> to obtain consistent module labels across different age ranges. §.§ The module evolution of infant functional brain networks We illustrate the development of modular structure in infant functional brain networks at various modularity resolutions, as depicted in Fig. <ref>. The modular structure in each age range is estimated based on Bayesian modelling and visualized using BrainSpace <cit.>. Here, all the age ranges with a pink or blue frame are non-overlapping, and the corresponding subjects were scanned under the sleeping condition. The modular structure in the green frame corresponds to subjects older than 36 months scanned under the awake condition. In this work, we are interested in how the modular structure evolves longitudinally. For that purpose, we evaluate the module evolution by assessing the Jaccard similarity between neighbouring non-overlapping age ranges for each setting of the modularity resolution γ (Fig.<ref>). We compared the performance of our proposed Bayesian modelling method for group-level analysis with the conventional group averaging method in terms of the Jaccard similarity coefficient for different scales of functional networks with 100, 200, 300, and 400 ROIs, respectively, as shown in Fig.<ref> (note that we present the results of AP in the main text; see the results of PA in SI Fig.1). According to the results of the Jaccard similarity with different values of γ, the module evolution evaluated by Bayesian modelling demonstrates an overall larger mean J value compared with the group averaging method (Fig.<ref>a). We see similar results with respect to the mean J values in Fig.<ref>c, e, and g as well. According to the results of the t-test, the J values obtained using Bayesian modelling are significantly larger than those of the group averaging method between the age ranges of 6-11 month and 12-17 month, 9-14 month and 15-23 month, 12-17 month and 18-29 month, 15-23 month and 24-36 month, and 18-29 month and >36 month.
More importantly, the J values obtained using Bayesian modelling show significantly less diversity compared with those of the group averaging method according to the F-test between the age ranges of 6-11 month and 12-17 month, 12-17 month and 18-29 month, and 15-23 month and 24-36 month (Fig.<ref>b), which means that the J values evaluated by Bayesian modelling are relatively stationary compared with those of the group averaging method. We can see similar statistically significant results of the comparison between some age ranges using networks with 200, 300, and 400 ROIs as well. The Bayesian modelling method demonstrates relatively stationary module evolution, implying more robust group-level module detection compared with that of the group averaging method. §.§ Comparing the module evolution of female and male infants Next, we compare the module evolution between female and male infants; the results for the AP data are shown in Fig.<ref> (see the results of PA in SI Fig.2). There are no significant differences between female and male infants with respect to the module evolution under sleeping conditions. However, we surprisingly discovered that female infants demonstrate a significantly lower J value between 18-20 month (sleeping condition) and >36 month (awake condition) compared with that of male infants (see the black rectangles in the plots of the left panel). To gain insight into the difference in modular structures between female and male infants under sleeping and awake conditions, we evaluated the Jaccard similarity between each age range under the sleeping condition and the awake condition (Fig.<ref>). We then plot the corresponding J values for female and male infants as shown in Fig.<ref> (see the results of PA in SI Fig.3). We found that female infants demonstrate smaller Jaccard similarity between sleeping (several age ranges) and awake conditions compared with that of male infants, while male infants show similar modular structures between sleeping and awake conditions. Regarding these results, one assumption is that male infants sleep less restfully compared with female infants. Male infants may undergo more somatic movement, or their brains may be more activated during the sleeping condition, resulting in more similar modular structures between sleeping and awake conditions. We will discuss this aspect further in the next section.
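The group comparisons reported above rest on standard two-sample tests. A minimal sketch of how such a comparison of J values could be run is shown below; it uses generic SciPy routines on made-up numbers and is not the authors' analysis script (the choice of one-sided versus two-sided tests and of equal-variance assumptions is left unspecified in the text).

```python
import numpy as np
from scipy import stats

# J values across modularity resolutions gamma for one pair of consecutive
# age ranges (illustrative numbers only).
j_bayes = np.array([0.62, 0.64, 0.61, 0.63, 0.65, 0.62, 0.64, 0.63])
j_mean  = np.array([0.48, 0.55, 0.41, 0.58, 0.44, 0.52, 0.39, 0.57])

# Two-sample t-test: are the Bayesian-modelling J values larger on average?
t_stat, p_t = stats.ttest_ind(j_bayes, j_mean, equal_var=False)

# F-test on variances: are the Bayesian-modelling J values less dispersed?
f_stat = np.var(j_mean, ddof=1) / np.var(j_bayes, ddof=1)
p_f = stats.f.sf(f_stat, len(j_mean) - 1, len(j_bayes) - 1)

print(f"t = {t_stat:.2f}, p = {p_t:.3g}; F = {f_stat:.1f}, p = {p_f:.3g}")
```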
According to the results, our method is more robust compared to the group averaging method in terms of the stationarity of module evolution. We first applied the maximization of modularity quality function for individual functional networks in each age range with different modularity resolutions. To estimate the modular structure of the group representative network, we introduced a Bayesian modelling approach based on conjugate analysis for determining module labels at the group level. This framework not only identifies the shared modular structure across individuals but also assesses the variability in modular structure patterns between individuals. In this paper, we relax the assumption of fixing the number of modules K and assume that it is a variable which is controlled by the modularity resolution parameter. In this scenario, the inference of module labels are not restricted by a pre-determined number of modules. This enhances the flexibility of individual-level label estimation compared to methods that employ a fixed value of K. Besides, we retain the variation of K for each subject in each age range by using different modularity resolutions, such that coarse-to-fine modular structures are considered for evaluating the module evolution of infant functional brain development. We compared the module evolution between female and male infants under sleeping and awake conditions. Our results indicate that female infant demonstrates more distinctive modular structures between these two conditions compared with male infant. One previous study reported that in contrast to preterm girls, preterm boys exhibited significantly reduced sleep duration, a tendency for increased active sleep and decreased quiet sleep, more wakefulness after sleep onset, and a tendency for shorter longest sleep periods <cit.>. Another recent study indicated that the disparity in sleep duration between genders may manifest early in life, female infants and preadolescents tend to have longer sleep durations than their male counterparts <cit.>. Female infants seemed to experience more restful sleep than male infants aligns well with earlier reports of heightened sleep disruption in male compared to female infants. Specifically, maternal perceptions of their infant's sleep patterns have noted more problematic crying and increased night awakenings in male infants <cit.>. The above studies, to some extent, support our discovery about the distinctive difference of modular structures between female and male under sleeping and awake conditions. The method proposed in this paper solves many of the problems compared to the conventional method. Nevertheless, the present work still has certain limitations. While the multilayer modular detection performs effectively, identifying modules in real functional brain networks remains a challenge. The absence of a standard algorithm for general module detection stems from the assumption that network architectures in the real world are generated from diverse latent processes. Module detection through modularity relies on the information from the adjacency matrix, capturing FC between node pairs. However, it does not leverage metadata or features associated with the nodes. An additional study introduced by <cit.> identifies modules in networks solely by utilizing raw time series data from nodes without considering edge observations. 
While much of the previous research treat node attributes or metadata as the ground truth of modules, a recent study indicates that metadata differ from ground truth. Treating them as such can lead to significant theoretical and practical issues <cit.>. In the future, it is worthwhile to explore the relationship between module detection and node metadata concerning the network structure. An intriguing direction for future research involves hierarchical modelling of group-level node metadata in conjunction with multilayer FC. In this paper, we have not utilized the assessment of neurological and behavioral functions to evaluate the behavioral significance of individual differences in estimating the group-level modular structure of infant functional network development. In the future research, we plan to relate the BCP behavioural data <cit.> including Mullen scales of early learning (MSEL) for evaluations of language, motor, and perceptual abilities, Minnesota Executive Function Scale (MEFS) for assessment of cognitive flexibility, and the Dimensional Joint Attention Assessment (DJAA) for characterizing dimensional ratings of Responding to Joint Attention (RJA) and reflect individual differences, to the module evolution. We will be able to explain how variation in individual behaviour measurements relate to variations in the modular structure of infant functional brain network development. § CODE AVAILABILITY The code of this work is available at: https://github.com/LingbinBian/InfantModuleEvolutionhttps://github.com/LingbinBian/InfantModuleEvolution. § ACKNOWLEDGEMENTS This work was supported in part by National Natural Science Foundation of China (grant numbers 62131015, 62250710165, U23A20295, and 32371154), the STI 2030-Major Projects (No. 2022ZD0209000), Shanghai Municipal Central Guided Local Science and Technology Development Fund (grant number YDZX20233100001001), Science and Technology Commission of Shanghai Municipality (STCSM) (grant number 21010502600), and The Key R&D Program of Guangdong Province, China (grant numbers 2023B0303040001, 2021B0101420006). [Achard et al., 2006]Achard2006 Achard, S., Salvador, R., Whitcher, B., Suckling, J., and Bullmore, E. (2006). A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. Journal of Neuroscience, 26(1):63–72. [Bassett et al., 2013]Bassett2013 Bassett, D. S., Porter, M. A., Wymbs, N. F., Grafton, S. T., Carlson, J. M., and Mucha, P. J. (2013). Robust detection of dynamic community structure in networks. CHAOS, 23:13142. [Bassett et al., 2011]Bassett2011 Bassett, D. S., Wymbs, N. F., Porter, M. A., Mucha, P. J., Carlson, J. M., and Grafton, S. T. (2011). Dynamic reconfiguration of human brain networks during learning. PNAS, 108(18):7641–7646. [Betzel et al., 2019]Betzel2019 Betzel, R. F., Bertolero, M. A., Gordon, E. M., Gratton, C., Dosenbach, N. U., and Bassett, D. S. (2019). The community structure of functional brain networks exhibits scale-specific patterns of inter- and intra-subject variability. NeuroImage, 202(September 2018):115990. [Betzel et al., 2018]Betzel2018 Betzel, R. F., Medaglia, J. D., and Bassett, D. S. (2018). Diversity of meso-scale architecture in human and non-human connectomes. Nature Communications, 9(1). [Bian et al., 2021]Bian2021 Bian, L., Cui, T., Thomas Yeo, B., Fornito, A., Razi, A., and Keith, J. (2021). Identification of community structure-based brain states and transitions using functional MRI. NeuroImage, 244(September):118635. 
[Carpaneto and Toth, 1980]Carpaneto1980 Carpaneto, G. and Toth, P. (1980). Algorithm 548: Solution of the assignment problem [H]. ACM Transactions on Mathematical Software (TOMS), 6(1):104–111. [Cribben et al., 2012]Cribben2012 Cribben, I., Haraldsdottir, R., Atlas, L. Y., Wager, T. D., and Lindquist, M. A. (2012). Dynamic connectivity regression: determining state-related changes in brain connectivity. NeuroImage, 61:720–907. [Cribben and Yu, 2017]Cribben2017 Cribben, I. and Yu, Y. (2017). Estimating whole-brain dynamics by using spectral clustering. Journal of the Royal Statistical Society. Series C (Applied Statistics), 66:607–627. [Damoiseaux and Greicius, 2009]Damoiseaux2009 Damoiseaux, J. S. and Greicius, M. D. (2009). Greater than the sum of its parts: A review of studies combining structural connectivity and resting-state functional connectivity. Brain Structure and Function, 213(6):525–533. [Delvenne et al., 2010]Delvenne2010 Delvenne, J. C., Yaliraki, S. N., and Barahon, M. (2010). Stability of graph communities across time scales. Proceedings of the National Academy of Sciences of the United States of America, 107(29):12755–12760. [Eyre et al., 2021]Eyre2021 Eyre, M., Fitzgibbon, S. P., Ciarrusta, J., Cordero-Grande, L., Price, A. N., Poppe, T., Schuh, A., Hughes, E., O'Keeffe, C., Brandon, J., Cromb, D., Vecchiato, K., Andersson, J., Duff, E. P., Counsell, S. J., Smith, S. M., Rueckert, D., Hajnal, J. V., Arichi, T., O'Muircheartaigh, J., Batalle, D., and Edwards, A. D. (2021). The Developing Human Connectome Project: Typical and disrupted perinatal functional connectivity. Brain, 144(7):2199–2213. [Foreman et al., 2008]Foreman2008 Foreman, S. W., Thomas, K. A., and Blackburn, S. T. (2008). Individual and gender differences matter in preterm infant state development. JOGNN - Journal of Obstetric, Gynecologic, and Neonatal Nursing, 37(6):657–665. [Fox and Raichle, 2007]Fox2007 Fox, M. D. and Raichle, M. E. (2007). Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nature Reviews Neuroscience, 8(9):700–711. [Franco et al., 2020]Franco2020 Franco, P., Putois, B., Guyon, A., Raoux, A., Papadopoulou, M., Guignard-Perret, A., Bat-Pitault, F., Hartley, S., and Plancoulaine, S. (2020). Sleep during development: Sex and gender differences. Sleep Medicine Reviews, 51:101276. [Gao et al., 2015]Gao2015 Gao, W., Alcauter, S., Elton, A., Hernandez-Castillo, C. R., Smith, J. K., Ramirez, J., and Lin, W. (2015). Functional network development during the first year: Relative sequence and socioeconomic correlations. Cerebral Cortex, 25(9):2919–2928. [Gonzalez-Castillo and Bandettini, 2018]Gonzalez-Castillo2018 Gonzalez-Castillo, J. and Bandettini, P. A. (2018). Task-based dynamic functional connectivity: Recent findings and open questions. NeuroImage, 180(August 2017):526–533. [Hastings, 1970]Hastings1970 Hastings, W. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109. [Heo et al., 2022]Heo2022 Heo, K. S., Shin, D. H., Hung, S. C., Lin, W., Zhang, H., Shen, D., and Kam, T. E. (2022). Deep attentive spatio-temporal feature learning for automatic resting-state fMRI denoising. NeuroImage, 254(May 2021):119127. [Hoffmann et al., 2020]Hoffmann2020 Hoffmann, T., Peel, L., Lambiotte, R., and Jones, N. S. (2020). Community detection in networks without observing edges. Science Advances, 6(4):1–12. [Howell et al., 2019]Howell2019 Howell, B. R., Styner, M. A., Gao, W., Yap, P. 
T., Wang, L., Baluyot, K., Yacoub, E., Chen, G., Potts, T., Salzwedel, A., Li, G., Gilmore, J. H., Piven, J., Smith, J. K., Shen, D., Ugurbil, K., Zhu, H., Lin, W., and Elison, J. T. (2019). The UNC/UMN Baby Connectome Project (BCP): An overview of the study design and protocol development. NeuroImage, 185(March 2018):891–905. [Hutchison et al., 2013]Hutchison2013 Hutchison, R. M., Womelsdorf, T., Allen, E. A., Bandettini, P. A., Calhoun, V. D., Corbetta, M., Penna, S. D., Duyn, J. H., Glover, G. H., Gonzalez-castillo, J., Handwerker, D. A., Keilholz, S., Kiviniemi, V., Leopold, D. A., Pasquale, F. D., Sporns, O., Walter, M., and Chang, C. (2013). Dynamic functional connectivity : Promise , issues , and interpretations. NeuroImage, 80:360–378. [Kringelbach and Deco, 2020]Kringelbach2020 Kringelbach, M. L. and Deco, G. (2020). Brain states and transitions: Insights from computational neuroscience. Cell Reports, 32(10):108128. [Lambiotte et al., 2008]Lambiotte2008 Lambiotte, R., Delvenne, J. C., and Barahona, M. (2008). Laplacian Dynamics and Multiscale Modular Structure in Networks. pages 1–29. [Lehmann et al., 2021]Lehmann2021 Lehmann, B. C., Henson, R. N., Geerligs, L., Cam-CAN, and White, S. R. (2021). Characterising group-level brain connectivity: A framework using Bayesian exponential random graph models. NeuroImage, 225:117480. [Lurie et al., 2020]Lurie2020 Lurie, D. J., Kessler, D., Bassett, D. S., Betzel, R. F., Breakspear, M., Kheilholz, S., Kucyi, A., Liégeois, R., Lindquist, M. A., McIntosh, A. R., Poldrack, R. A., Shine, J. M., Thompson, W. H., Bielczyk, N. Z., Douw, L., Kraft, D., Miller, R. L., Muthuraman, M., Pasquini, L., Razi, A., Vidaurre, D., Xie, H., and Calhoun, V. D. (2020). Questions and controversies in the study of time-varying functional connectivity in resting fMRI. Network Neuroscience, 4(1):30–69. [Metropolis et al., 1953]Metropolis1953 Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092. [Monti et al., 2017]Monti2017a Monti, R. P., Lorenz, R., Braga, R. M., Anagnostopoulos, C., Leech, R., and Montana, G. (2017). Real-time estimation of dynamic functional connectivity networks. Human Brain Mapping, 38(1):202–220. [Newman, 2004]Newman2004 Newman, M. E. (2004). Fast algorithm for detecting community structure in networks. Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, 69(6):5. [Newman, 2006a]Newman2006a Newman, M. E. (2006a). Modularity and community structure in networks. Proceedings of the National Academy of Sciences of the United States of America, 103(23):8577–8582. [Newman, 2006b]Newman2006 Newman, M. E. J. (2006b). Modularity and community structure in networks. PNAS, 103(23):8577–8582. [Nobile and Fearnside, 2007]Nobile2007 Nobile, A. and Fearnside, A. T. (2007). Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing, 17:147–162. [Ogawa et al., 1990]Ogawa1990 Ogawa, S., Lee, T. M., Kay, A. R., and Tank, D. W. (1990). Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proceedings of the National Academy of Sciences of the United States of America, 87(24):9868–9872. [Park and Friston, 2013]Park2013 Park, H.-J. and Friston, K. (2013). Structural and functional brain networks: From connections to cognition. Science, 342(6158). [Pavlović et al., 2020]Pavlovic2020 Pavlović, D. 
M., Guillaume, B. R., Towlson, E. K., Kuek, N. M., Afyouni, S., Vértes, P. E., Thomas Yeo, B., Bullmore, E. T., and Nichols, T. E. (2020). Multi-subject Stochastic Blockmodels for adaptive analysis of individual differences in human brain network cluster structure. NeuroImage, page 116611. [Peel et al., 2017]Peel2017 Peel, L., Larremore, D. B., and Clauset, A. (2017). The ground truth about metadata and community detection in networks. Science Advances, 3(5):1–9. [Razi and Friston, 2016]Razi2016 Razi, A. and Friston, K. J. (2016). The connected brain: Causality, models, and intrinsic dynamics. IEEE Signal Processing Magazine, 26(5):340–343. [Razi et al., 2015]Razi2015 Razi, A., Kahan, J., Rees, G., and Friston, K. J. (2015). Construct validation of a DCM for resting state fMRI. NeuroImage, 106:1–14. [Razi et al., 2017]Razi2017 Razi, A., Seghier, M. L., Zhou, Y., McColgan, P., Zeidman, P., Park, H.-J., Sporns, O., Rees, G., and Friston, K. J. (2017). Large-scale DCMs for resting-state fMRI. Network Neuroscience, 1(4):381–414. [Richardson et al., 2010]Richardson2010 Richardson, H. L., Walker, A. M., and Horne, R. S. (2010). Sleeping like a baby - Does gender influence infant arousability? Sleep, 33(8):1055–1060. [Robinson et al., 2015]Robinson2015 Robinson, L. F., Atlas, L. Y., and Wager, T. D. (2015). Dynamic functional connectivity using state-based dynamic community structure: Method and application to opioid analgesia. NeuroImage, 108:274–291. [Rubinov and Sporns, 2010]Rubinov2010 Rubinov, M. and Sporns, O. (2010). Complex network measures of brain connectivity: uses and interpretations. NeuroImage, 52(3):1059–69. [Schaefer et al., 2018]Schaefer2018 Schaefer, A., Kong, R., Gordon, E. M., Laumann, T. O., Zuo, X.-N., Holmes, A. J., Eickhoff, S. B., and Yeo, B. T. T. (2018). Local-Global Parcellation of the Human Cerebral Cortex from Intrinsic Functional Connectivity MRI. Cerebral Cortex, 28(9):3095–3114. [Taghia et al., 2018]Taghia2018a Taghia, J., Cai, W., Ryali, S., Kochalka, J., Nicholas, J., Chen, T., and Menon, V. (2018). Uncovering hidden brain state dynamics that regulate performance and decision-making during cognition. Nature Communications, 9(1). [Ting et al., 2021]Ting2021 Ting, C. M., Samdin, S. B., Tang, M., and Ombao, H. (2021). Detecting dynamic community structure in functional brain networks across individuals: A multilayer approach. IEEE Transactions on Medical Imaging, 40(2):468–480. [Vidaurre et al., 2018]Vidaurre2018 Vidaurre, D., Abeysuriya, R., Becker, R., Quinn, A. J., Alfaro-Almagro, F., Smith, S. M., and Woolrich, M. W. (2018). Discovering dynamic brain networks from big data in rest and task. NeuroImage, 180(June 2017):646–656. [Vos de Wael et al., 2020]VosdeWael2020 Vos de Wael, R., Benkarim, O., Paquola, C., Lariviere, S., Royer, J., Tavakol, S., Xu, T., Hong, S. J., Langs, G., Valk, S., Misic, B., Milham, M., Margulies, D., Smallwood, J., and Bernhardt, B. C. (2020). BrainSpace: a toolbox for the analysis of macroscale gradients in neuroimaging and connectomics datasets. Communications Biology, 3(1). [Wen et al., 2019]wen2019 Wen, X., Zhang, H., Li, G., Liu, M., Yin, W., Lin, W., Zhang, J., and Shen, D. (2019). First-year development of modules and hubs in infant brain functional networks. NeuroImage, 185(October 2018):222–235. [Wyse and Friel, 2012]Wyse2012 Wyse, J. and Friel, N. (2012). Block clustering with collapsed latent block models. Statistics and Computing, 22:415–428. [Zhang et al., 2019]Zhang2019 Zhang, H., Shen, D., and Lin, W. (2019). 
Resting-state functional MRI studies on infant brains: A decade of gap-filling efforts. NeuroImage, 185(July 2018):664–684. § SUPPLEMENTARY INFORMATION
http://arxiv.org/abs/2407.12225v1
20240717003949
Probabilistic Reachability of Stochastic Control Systems: A Contraction-Based Approach
[ "Saber Jafarpour", "Yongxin Chen" ]
eess.SY
[ "eess.SY", "cs.SY", "math.OC" ]
Probabilistic Reachability of Stochastic Control Systems: A Contraction-Based Approach ======================================================================================= § ABSTRACT In this paper, we propose a framework for studying the probabilistic reachability of stochastic control systems. Given a stochastic system, we introduce a separation strategy for reachability analysis that decouples the effect of deterministic input/disturbance and stochastic uncertainty. A remarkable feature of this separation strategy is its ability to leverage any deterministic reachability framework to capture the effect of deterministic input/disturbance. Furthermore, this separation strategy encodes the impact of stochastic uncertainty on reachability analysis by measuring the distance between the trajectories of the stochastic system and its associated deterministic system. Using contraction theory, we provide probabilistic bounds on this trajectory distance and estimate the propagation of stochastic uncertainties through the system. By combining this probabilistic bound on trajectory distance with two computationally efficient deterministic reachability methods, we provide estimates for probabilistic reachable sets of the stochastic system. We demonstrate the efficacy of our framework through numerical experiments on a feedback-stabilized inverted pendulum. Keywords: Reachability analysis, Stochastic systems § INTRODUCTION Reachability analysis is a fundamental problem in control theory and dynamical systems that studies how the trajectories of a system propagate with time. Reachability analysis has been successfully used in many real-world applications to explore the transient behaviors of systems and to verify their safety over a given time horizon. However, providing formal guarantees for the behavior of dynamical systems using reachability analysis is challenging for several reasons. First, reachability analysis deals with the behavior of an infinite number of trajectories. Consequently, most simulation-based methods are insufficient, and theoretical methods are needed to provide guarantees on the behavior of the system. Second, practical systems typically operate in uncertain environments, where their behaviors are affected by different types of random and adversarial uncertainties. Therefore, it is crucial to suitably incorporate the effect of various uncertainties in the reachability analysis of these systems. Moreover, many real-world systems are large-scale with highly nonlinear dynamics, necessitating computationally efficient and scalable reachability tools to certify their behaviors. The majority of the research on reachability analysis focuses on control systems with deterministic input/disturbance <cit.>. By considering the least favorable or most detrimental values of these inputs/disturbances, these works perform worst-case uncertainty analysis and provide guarantees for reachable sets of the system under all possible uncertainty scenarios. The classical frameworks for reachability of systems with deterministic input/disturbance include Hamilton-Jacobi and dynamic programming approaches <cit.> and set-propagation approaches such as ellipsoidal methods <cit.> and polytope methods <cit.>. Despite their deep theoretical foundations, these approaches are either only applicable to certain classes of systems or are not computationally tractable for large-scale systems.
Motivated by applications to general large-scale systems with complex components, various computationally efficient and scalable deterministic reachability frameworks have been recently developed in the literature <cit.>. Examples of these methods include reachability using Lipschitz bound <cit.>, matrix measure-based reachability <cit.>, and interval-based reachability <cit.>. In many real-world applications, systems are subject to unpredictable and rapidly fluctuating disturbances. For such disturbances, it is impossible to provide precise bounds on their magnitudes. Consequently, worst-case uncertainty analysis either offers no guarantees or results in overly conservative estimates of the reachable sets. Instead, it is more reasonable to model these disturbances as stochastic variables and use probabilistic methods for reachability analysis of the system. However, the reachability analysis of stochastic systems is significantly more challenging than that of deterministic systems due to the probabilistic nature of their evolution. Several frameworks have been developed in the literature to analyze the propagation of uncertainties in stochastic systems. Hamilton-Jacobi and dynamic programming approaches characterize probabilistic reachability as a the solution of a game between two players <cit.>. Despite their generality, a significant drawback of these reachability approaches lies in their computational heaviness, rendering them impractical for reachability analysis of large-size systems. Recent works have focused on improving the computational complexity of the dynamic programming for reachability of stochastic systems <cit.>. Functional approaches develop barrier function <cit.> for measuring the probability of a trajectory staying inside a reachable set. These functional approaches are computationally efficient. However, in applications, they require an exhaustive search for a suitable function and lack the flexibility to balance the accuracy and efficiency of reachability analysis. In this paper, we develop a computationally efficient framework for probabilistic reachability of control systems with both deterministic input/disturbance and stochastic uncertainty. First, we present a separation strategy that decomposes the effect of deterministic input/disturbance and the effect of stochastic uncertainty on reachability analysis of the stochastic control system. We show that the effect of deterministic input/disturbance on reachability can be captured using reachable sets of an associated deterministic system. This is a key feature of this separation strategy as it allows to use any frameworks for deterministic reachability to study the propagation of deterministic input/disturbance in stochastic systems. The effect of stochastic uncertainty on reachability is represented by the distance between trajectories of the stochastic system and the associated deterministic system. Inspired by the works on incremental stability of stochastic systems <cit.>, we leverage contraction theory to study the evolution of stochastic uncertainty in the system. While the existing literature on contraction analysis of stochastic systems focus on providing conditions for their stability, we establish probabilistic transient bounds on the distance between trajectories of the stochastic system and its associated deterministic system. 
By combining our probabilistic bound on propagation of stochastic uncertainties with contraction-based and interval-based reachability methods for deterministic systems, we provide high probability bounds on reachable sets of stochastic systems. Finally, we apply our framework with both contraction-based and interval-based deterministic reachability to obtain probabilistic bounds on reachable sets of a feedback stabilized inverted pendulum. § MATHEMATICAL PRELIMINARIES *Vectors, matrices, and functions we denote the component-wise vector order on ^n by ≤, i.e., for v,w∈^n, we have v≤ w if and only if v_i≤ w_i, for every i∈{1,…,n}. Given v≤ w, we define the interval [v,w] = x∈^nv≤ x≤ w. Given a norm · on ^n, we define the ball with radius r centered at z∈^n by ℬ_·(r,z)=x∈^nx-z≤ r. Given a norm · on ^n and a matrix A∈^n× n, the matrix measure of A with respect to · is defined by μ_·(A) = lim_h→ 0^+I_n + h A_i-1/h, where ·_i is the matrix norm on ^n× n induced from ·. Given a nonsingular matrix P∈^n× n, the P-weighted ℓ_2-norm is defined by v_2,P=v^⊤Pv and the P-weighted ℓ_2-matrix measure is denoted by μ_2,P. Given two sets A,B∈^n, we define the Minkowski sum of the sets A and B by A⊕ B = x+yx∈ A, y∈ B. Given v≤ w, we define the interval [v,w]=x∈^nv≤ x≤ w. Given a set 𝒳⊆^n and a transformation T∈^n× n, we define T𝒳=Txx∈𝒳. Given a square matrix A∈^n× n, we denote the trace of matrix A by tr(A). For two matrices X,Y∈^n× n, we denote X≻ Y if X-Y is a positive definite matrix. Given a continuously differentiable function f:^n→^m, we denote the Jacobian of f at point x_0 by D_xf(x_0). *Dynamical systems consider the deterministic system ẋ_t = f(t,x_t,u_t), where x_t∈^n is the state of the system, u_t∈^p is the input. Depending on the application, the input u_t can be considered as a controller or a disturbance. We assume that f:_≥ 0×^n×^p→^n is a parameterized vector field which is measurable in t and locally Lipschitz in x and u. Using Rademacher's theorem, the Jacobian D_xf(t,x,u) exists for almost every t,x,u∈_≥ 0×^n×^p. Given the initial set 𝒳_0 and the disturbance set 𝒰, the T-reachable set ℛ_f(T,𝒳_0,𝒰) is the set of all possible states the system (<ref>) can achieve at time T, i.e., ℛ_f(T,𝒳_0,𝒰) = {x_T t→ x_t x_0∈𝒳_0, t→ u_t∈𝒰} It is well known that finding the exact reachable set of general nonlinear systems is computationally intractable <cit.>. Instead, most existing approaches for reachability focus on efficient methods for over-approximating the reachable sets <cit.>, i.e. finding the set ℛ_f(t,𝒳_0,𝒰) satisfying ℛ_f(t,𝒳_0,𝒰)⊆ℛ_f(t,𝒳_0,𝒰). Given a norm · on ^n, the deterministic system (<ref>) is contracting with rate c with respect to the norm · if, for every t_0∈_≥ 0 and every t↦ u_t∈𝒰, x_t-y_t≤ e^c(t-t_0)x_τ-y_τ, t≥τ where t↦ x_t and t↦ y_t are two trajectories of the system (<ref>) with the same input t↦ u_t∈𝒰. It can be shown <cit.> that the system (<ref>) is contracting with rate c with respect to the norm · if and only if, for almost every t,x,u∈_≥ 0×^n×^p μ_·(D_x f(t,x,u))≤ c. The function = [[ ; ]]:_≥ 0×^2n×^2p→^2n is an inclusion function for the vector field f of the system (<ref>) if, for every z∈ [x,x] and every w∈ [v,v], (t,x,x,v,v)≤ f(t,z,w)≤(t,x,x,v,v). One can show every vector field f has at least one inclusion function <cit.>. However, inclusion function of f is not generally unique <cit.>. § PROBLEM FORMULATION In this paper, we study reachability of stochastic control systems with both deterministic input/disturbance and stochastic uncertainty. 
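Before turning to the stochastic model, it is worth noting that the weighted matrix measure and contraction condition from the preliminaries are easy to evaluate numerically. The following sketch is our own illustration (it is not code released with this paper): it computes μ_2,P(A) as the largest generalized eigenvalue of the pair (A^⊤P + PA, 2P), cross-checks it against the limit definition of the matrix measure, and uses an illustrative Hurwitz matrix A with a Lyapunov-based weight P.

import numpy as np
from scipy.linalg import eigh, solve_continuous_lyapunov

def mu_2P(A, P):
    # Weighted L2 matrix measure: the smallest c such that A^T P + P A <= 2 c P.
    S = A.T @ P + P @ A
    return eigh(S, 2.0 * P, eigvals_only=True).max()

def induced_norm_2P(M, P):
    # Induced norm of M with respect to ||v||_{2,P} = sqrt(v^T P v), using P = L L^T.
    L = np.linalg.cholesky(P)
    return np.linalg.norm(L.T @ M @ np.linalg.inv(L.T), 2)

# Illustrative data: a Hurwitz Jacobian and a weight obtained from a Lyapunov equation.
A = np.array([[0.0, 1.0], [-10.0, -20.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))      # A^T P + P A = -I, hence mu_{2,P}(A) < 0

h = 1e-7
finite_diff = (induced_norm_2P(np.eye(2) + h * A, P) - 1.0) / h   # limit definition of the measure
print(mu_2P(A, P), finite_diff)      # the two values should agree closely (both negative)
print(mu_2P(A, np.eye(2)))           # with the unweighted norm the same A is not contracting

When P is fixed, mu_2P(A) is exactly the optimal constant in the Loewner-order constraint A^⊤P + PA ≼ 2cP that reappears in the assumption on c_P below.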
We consider the stochastic version of the deterministic system (<ref>) given by dX_t = f(t,X_t,u_t)dt + σ(t,X_t,u_t) dW_t where X_t∈^n is the state, u_t is the input, W_t is the stochastic uncertainty, and and σ:_≥ 0×^n×^p→^n× m is a matrix-valued function. We assume that the input u_t belongs to the set 𝒰⊆^p, for every t≥ 0, and the stochastic uncertainty W_t is an m-dimensional Wiener process (standard Brownian motion). Throughout this paper, we assume that f and σ are measurable in time and locally Lipschitz in x,u to ensure the existence and uniqueness of solutions of the stochastic system (<ref>) and its associated deterministic system (<ref>) <cit.>. Our goal is to provide probabilistic bounds on trajectories of the stochastic control system (<ref>) starting from some x_0∈𝒳_0 with any input t↦ u_t∈𝒰. We first present a separation strategy that decouples the effect of deterministic input/disturbance and stochastic uncertainty on reachability of the stochastic system (<ref>). We show that the effect of deterministic input/disturbance on reachability can be captured using the reachable set of the associated deterministic system (<ref>). We represent the effect of stochastic uncertainty on reachability using the distance between trajectories of (<ref>) and their associated trajectories of (<ref>) and leverage contraction theory to establish high probability bounds on this distance. § DECOMPOSITION OF REACHABILITY ANALYSIS In this section, we study reachability of the control system (<ref>) with both deterministic input/disturbance and stochastic uncertainty. Inspired by the sampling-based approaches for reachability <cit.>, we focus on probabilistic reachable sets of stochastic system (<ref>), i.e., the sets that contain trajectories of the system with certain probability. We develop a separation strategy that presents a novel perspective toward constructing probabilistic reachable sets of stochastic system (<ref>). The key idea in separation strategy is to decouple the effect of deterministic input/disturbance and the effect of stochastic uncertainty on the probabilistic reachable set of (<ref>). The effect of stochastic uncertainty is encoded in the distance between trajectories of (<ref>) and their associated trajectories of the deterministic system (<ref>). More specifically, for time t≥ 0 and probability level δ∈ [0,1), we assume there exists a constant r(t,δ)∈_≥ 0 such that, with probability 1-δ, X_t-x_t_2,P≤ r(t,δ), where t↦ X_t is a trajectory of the stochastic system (<ref>) with the input t↦ u_t∈𝒰 starting from an initial condition x_0∈𝒳_0 and t↦ x_t is the associated trajectory of the deterministic system (<ref>) with the same input t↦ u_t starting from the same initial condition x_0. The effect of deterministic input/disturbance is captured using the reachable set of the associated deterministic system (<ref>). The probabilistic reachable set of the stochastic system (<ref>) can then be constructed by combining these two components as described in the next theorem. Let t↦ X_t be a trajectory of the stochastic system (<ref>) with the input t↦ u_t∈𝒰 starting from an initial condition x_0∈𝒳_0. Then, for every t≥ 0, with probability 1-δ, X_t ∈ℛ_f(t,𝒳_0,𝒰)⊕ℬ_2,P(r(t,δ),0_n), where ⊕ is the Minkowski sum and ℛ_f(t,𝒳_0,𝒰) is an over-approximation of the reachable set of the deterministic system (<ref>). Let t↦ x_t be the associated trajectory of the deterministic system (<ref>) with input t↦ u_t starting from the initial condition x_0. 
We note that, for every t≥ 0, X_t = x_t + X_t-x_t := x_t + η_t. Using the inequality (<ref>), with probability 1-δ, we have η_t_2,P≤ r(t,δ). By definition of Minkowski sum, with probability 1-δ, we have X_t∈{x_t}⊕ℬ_2,P(r(t,δ),0_n). Since {x_t}⊆ℛ_f(t,𝒳_0,𝒰), with probability 1-δ, we have X_t∈ℛ_f(t,𝒳_0,𝒰)⊕ℬ_2,P(r(t,δ),0_n) Theorem <ref> decomposes the probabilistic reachability analysis of the stochastic system (<ref>) into two separate problems: (i) over-approximating reachable sets of the associated deterministic system (<ref>) and, (ii) bounding the propagation of stochastic uncertainty in the system. This decomposition brings considerable flexibility to probabilistic reachablity analysis of (<ref>) as any approach for over-approximating reachable sets of the deterministic system (<ref>) can be used to analyze the effect of deterministic input/disturbance. In general, computing the Minkowski sum of two arbitrary sets can be computationally complicated. However, when the sets are ellipsoids or polytopes, there exist efficient algorithms for estimating their Minkowski sum <cit.>. Theorem <ref> captures the propagation of stochastic uncertainty in reachability using the distance X_t-x_t between trajectories of the stochastic system (<ref>) and their associated trajectories of the deterministic system (<ref>) via inequality (<ref>). In the next section, we use incremental stability properties of the stochastic system (<ref>) to bound the constant r(t,δ) in (<ref>). § BOUNDS ON PROPAGATION OF STOCHASTIC UNCERTAINTY In this section, we provide high confidence bounds r(t,δ) on the distance between trajectories of the stochastic system (<ref>) and that of the deterministic system (<ref>). Our approach is based on using contraction properties of the system. We start with the following assumption. There exist a positive definite matrix P≻ 0 and constants c_P,d_P∈ such that, for almost every t,x,u∈_≥ 0×^n×^p, * μ_2,P(D_xf(t,x,u))≤ c_P, and * tr(σ(t,x,u)^⊤Pσ(t,x,u)) ≤ d_P. Various approaches have been proposed in the literature to efficiently compute upper bounds on μ_2,P(D_xf(t,x,u)) including sum-of-square methods <cit.> and convex-hull methods <cit.>. These methods can readily extend to compute an upper bound for tr(σ(t,x,u)^⊤Pσ(t,x,u_t)) and to search for the positive definite P≻ 0 which gives rise to the optimal constants c_P and d_P. Now, we can state the main result of this section. Consider the stochastic system (<ref>) with its associated deterministic system (<ref>) and assume that it satisfies Assumption <ref>. Let t↦ X_t be a trajectory of the stochastic system (<ref>) with the input t↦ u_t starting from an initial condition x_0∈𝒳_0 and let t↦ x_t be the associated trajectory of the deterministic system (<ref>) with the same input t↦ u_t starting from the same initial condition x_0. Then, for every t≥ 0, the following statements hold: * 𝔼(X_t-x_t^2_2,P) ≤d_P2c_P(e^2c_P t-1), * with probability at least 1-δ, X_t-x_t_2,P≤√(d_P2δ c_P(e^2c_P t-1)). Regrading part <ref>, we define the random variable z=[[ X_t; x_t ]]. Then, by combining dynamical systems (<ref>) and (<ref>), dz_t = [ f(t,X_t,u_t); f(t,x_t,u_t) ]dt + [ σ(t,X_t,u_t); 0_n× m ]dW_t. We define V(z_t) = X_t-x_t^2_2,P = (X_t-x_t)^⊤ P (X_t-x_t). Then, using Itó's formula, dV(z) = [∂ V/∂ t+ (∂ V/∂ z)^⊤[ f(t,X_t,u_t); f(t,x_t,u_t) ]]dt + [1/2tr([ σ(t,X_t,u_t)^⊤0^⊤_n× m ]∂^2 V/∂ z^2[ σ(t,X_t,u_t); 0_n× m ]) ] dt + (∂ V/∂ z)^⊤[ σ(t,X_t,u_t); 0_n× m ] dW_t. 
Using the fact that ∂ V/∂ z = 2[[ P(X_t-x_t); P(-X_t+x_t) ]] and ∂^2 V/∂ z^2 = 2[[ P -P; -P P ]], we can compute dV = 2(X_t-x_t)^⊤ P (f(t,X_t,u_t) - f(t,x_t,u_t)) dt + tr(σ(t,X_t,u_t)^⊤ P σ(t,X_t,u_t)) dt + 2P(X_t-x_t)^⊤σ(t,X_t,u_t) dW_t. By <cit.>, Assumption <ref><ref> is equivalent to (X_t-x_t)^⊤ P (f(t,X_t,u_t) - f(t,x_t,u_t)) ≤ c_PX_t-x_t^2_2,P, for every x,y∈^n and every t,u∈_≥ 0×^p. Now, we focus on the curve t↦ z_t = [[ X_t(t); x_t(t) ]]. Following standard Itó Calculus, for every t,h≥ 0, 𝔼(V(z_t+h)) - 𝔼(V(z_t)) = (∫_t^t+h dV (z_s) ) ≤∫_t^t+h𝔼(dV(z_s)) ≤∫_t^t+h (2c_P 𝔼(X_t-x_t^2_2,P) + d_P) ds = ∫_t^t+h(2c_P 𝔼 (V(z_s)) + d_P) ds. where the first inequality holds by the triangle inequality and the second inequality holds by (<ref>). Therefore, we have 1h(𝔼(V(z_t+h))- 𝔼(V(z_t))) ≤1h∫_t^t+h(2c_P 𝔼 (V(z_x_0(s)) + d_P) ds. Taking the limsup of both side as h→ 0, for every t≥ 0, we get D^+𝔼(V(z_t)) ≤ 2c_P 𝔼 (V(z_t)) + d_P, where D^+ is the upper Dini Derivative with respect to t. Using the generalized Gröwall-Bellman lemma <cit.>, we can show that, for every t≥ 0, 𝔼(V(z_t)) ≤ e^2c_P t𝔼(V(z_0)) + d_P2c_P(e^2c_Pt-1) . Since [[ X_t(0); x_t(0) ]] =[[ x_0; x_0 ]], we have V(z_0)=X_0-x_0_2,P = 0 and thus 𝔼(V(z_0)) = 0. This implies that, 𝔼(X_t-x_t^2_2,P)≤d_P2c_P(e^2c_Pt-1), for every t≥ 0. Regarding part <ref>, the result follows by applying Markov inequality <cit.> to part <ref>. Theorem <ref> provides an incremental bound between a trajectory of the stochastic system (<ref>) and the associated trajectory of the deterministic system (<ref>). In <cit.> and <cit.>, a similar approach is used to bound the distance between every two stochastic trajectories of the system (<ref>). Compared to <cit.> and <cit.>, the bound in Theorem <ref> is sharper since is focuses on the distance between a stochastic trajectory of the system (<ref>) and the associated trajectory of the deterministic system (<ref>). Moreover, the expectation bound in <cit.> and <cit.> is only applicable to contracting system with c_P≤ 0 and it reduces to an asymptotic bound when the uncertainty in the initial configuration is deterministic. On the other hand, Theorem <ref> is applicable to systems satisfying Assumption (<ref>) with arbitrary c_P,d_P∈ and it captures the transient behavior of the incremental distance between trajectories. § PROBABILISTIC REACHABILITY OF STOCHASTIC DYNAMICAL SYSTEMS In this section, we use the separation strategy in Theorem <ref> to obtain high probability bounds on the trajectories of the system (<ref>). In particular, we combine the bounds on propagation of stochastic uncertainty (Section <ref>) with two computationally efficient methods for over-approximating reachable sets of the deterministic system (<ref>) namely contraction-based reachability and interval-based reachability to obtain estimates for probabilistic reachable sets of the stochastic system (<ref>). Contraction-based Reachability Contraction theory is a classical framework that studies stability of dynamical systems using the incremental distance between their trajectories <cit.>. Recently, this framework has emerged as a computationally efficient and scalable method for reachability of dynamical systems <cit.>. In this section, we review the contraction-based reachability for deterministic system (<ref>). Let ·_𝕏 be a norm on ^n, ·_𝕌 be a norm on ^p, and the induced norm on ^n× p is denoted by ·_𝕏,𝕌. We consider the following assumption. 
There exist constants c,ℓ∈ such that, for almost every t,x,u∈_≥ 0×^n×𝒰: * μ_𝕏(D_xf(t,x,u))≤ c, and * D_u f(t,x,u)_𝕏, 𝕌≤ℓ. where μ_𝕏 is the matrix measure associated with the norm ·_𝕏. Let t↦ x^*_t be a trajectory of (<ref>) with the input t↦ u^*_t. We consider the initial configuration 𝒳_0=ℬ_𝕏(r_1,x^*_0) for some r_1>0 and the input set 𝒰=ℬ_𝕌(r_2,u^*_0) for some r_2>0. If Assumption <ref> holds, using the incremental input-to-state bounds <cit.>, we can compute an over-approximation of reachable set of the deterministic system (<ref>) as follows: ℛ_f(t,𝒳_0,𝒰) = ℬ_𝕏(e^ctr_1+ ℓc (e^ct-1)r_2 , x^*_t), The aforementioned contraction-based reachability can be used to capture the effect of deterministic input/disturbance on reachability of stochastic system (<ref>). Consider the stochastic system (<ref>) satisfying Assumptions <ref> and <ref>. Suppose that t↦ x^*_t is a trajectory of the associated deterministic system (<ref>) with an input t↦ u^*_t. Let t↦ X_t be a trajectory of the stochastic system (<ref>) with the input t↦ u_t∈ℬ_𝕌(r_2,u^*_0) starting from x_0∈ℬ_𝕏(r_1,x^*_0). Then, for every t≥ 0, with probability 1-δ, X_t∈ℬ_𝕏(r_t, x^*_t)⊕ℬ_2,P(ρ(t,δ),0_n) where, for every t≥ 0, r_t = e^ctr_1 + ℓc(e^ct-1)r_2, ρ(t,δ) = √(d_P2δ c_P(e^2c_P t-1)). The proof follows by using the over-approximation (<ref>) for the reachable sets of the deterministic system (<ref>) in Theorem <ref> and using Theorem <ref> for the high probability bound on the stochastic deviation r(t,δ). Interval-based Reachability Interval analysis is a well-established framework for analyzing the propagation of interval uncertainty in mathematical models <cit.>. Techniques from interval analysis have been successfully used for reachability analysis of dynamical systems <cit.>. In this section, we review interval-based reachability for the deterministic system (<ref>). Conisder the dynamical system (<ref>) with an interval initial configuration 𝒳_0=[x_0,x_0] and an interval input set 𝒰=[u,u]. Let = [[ ; ]]:_≥ 0×^2n×^2p→^2n be an inclusion function for f. We define the embedding system of (<ref>) associated with the inclusion function by [ ẋ_t; ẋ_t ] = [ (t,x_t,x_t,u,u); (t,x_t,x_t,u,u) ] Let t↦[[ x_t; x_t ]] be the trajectory of the embedding system (<ref>) starting from [[ x_0; x_0 ]]. The reachable sets of the deterministic system (<ref>) can be over-approximated by <cit.>: ℛ_f(t,[x_0,x_0],[u,u]) = [x_t,x_t], The accuracy of the interval over-approximation (<ref>) depends on the choice of inclusion function . Given a parameterized vector field f, there exist several computationally efficient approaches for finding an inclusion function for f. We refer to <cit.> for detailed discussion on these approaches and to <cit.> for a toolbox for computing inclusion functions. In the next proposition, we use aforementioned interval-based approach to capture the effect of deterministic input/disturbance on reachability of the stochastic system (<ref>). Consider the stochastic system (<ref>) satisfying Assumption <ref>. Let t↦ X_t be a trajectory of (<ref>) with the input t↦ u_t∈ [u,u] starting from the initial condition X_0∈ [x_0,x_0]. Suppose that =[[ ; ]] is an inclusion function for f with the associated embedding system (<ref>) and t↦[[ x_t; x_t ]] is the trajectory of (<ref>) starting from [[ x_0; x_0 ]]. Then, for every t≥ 0, with probability 1-δ, X_t ∈ [x_t,x_t]⊕ℬ_2,P(ρ(t,δ),0_n), where ρ(t,δ)= √(d_P2δ c_P(e^2c_P t-1)). 
The proof follows by using the over-approximation (<ref>) for the reachable sets of the deterministic system (<ref>) in Theorem <ref> and using Theorem <ref> for the high probability bound on the stochastic deviation r(t,δ). § NUMERICAL SIMULATIONS In this section, we employ our framework to study probabilistic reachability of a feedback stabilized inverted pendulum. Consider the nonlinear dynamics for the pendulum: dX_1 =X_2 dt, dX_2 =gLsin(X_1) dt + (k_1X_1 + k_2X_2) dt + σ dW_t, where X_1 is the angular position of the pendulum, X_2 is the angular velocity of the pendulum, and the term k_1X_1 + k_2X_2 is a feedback controller with k_1=k_2 = -20 designed to stabilizes the unstable equilibrium point x^* = 0_2. We assume that g=10 is the gravitational constant, L=1 is the length of the pendulum. The stochastic disturbance W_t is modeled by a Wiener process with σ=0.1 and the deterministic disturbance is modeled by the uncertainty in the initial configuration 𝒳_0 = [-π10,π10]× [-0.2,0.2]. The associated deterministic system for (<ref>) is given by ẋ_1 = x_2 ẋ_2 = gLsin(x_1)-k_1x_1 -k_2x_2, and we define f(x) = [[ x_2; gLsin(x_1)-k_1x_1 -k_2x_2 ]] for x=(x_1,x_2)^⊤. We use Theorem <ref> and Theorem <ref> to obtain high probability bounds on trajectories of the stochastic inverted pendulum system (<ref>). We first check Assumption <ref> for this system. For every x = (x_1,x_2)^⊤∈^2, D_xf(x) = [[ 0 1; g/Lcos(x_1)+k_1 k_2 ]] We define the matrices A_1,A_2∈^2× 2 as follows: A_1=[[ 0 1; g/L+k_1 k_2 ]], A_2=[[ 0 1; -g/L+k_1 k_2 ]]. Note that cos(x_1)∈ [-1,1], for every x = (x_1,x_2)^⊤∈^2. This implies that, for every x∈^2, we have D_x f(x) ∈conv{A_1, A_2}, where conv is the convex hull. Thus, using <cit.>, the optimal constant c_P can be computed using the following optimization algorithm: min_c_p∈, P ≻ 0 c_P A_i^⊤ P + P A_i ≼ 2c_P P, i∈{1,2}. We solve optimization problem (<ref>) by successively applying semi-definite programming on P and bisection on c_P. The optimal solution of (<ref>) is given by c_P = -0.5 and P=[[ 35.68 2.21; 2.21 1.27 ]]. With this matrix P, we compute d_P = tr([[ 0 σ ]]P[[ 0; σ ]]) = 0.0128. *Contraction-based reachability In this part, we use Proposition <ref> to find probabilistic reachable sets of the inverted pendulum (<ref>). We consider Assumption <ref> with ·_𝕏=·_2,P with positive definite matrix P as defined above. For every x = (x_1,x_2)^⊤∈^2, we have μ_2,P(D_xf(x))≤ c_P = -0.5. Using Proposition <ref> with the initial configuration 𝒳_0=x∈^2x_2,P≤[[ π/10; 0.2 ]]_2,P⊃𝒳_0, the probabilistic reachable sets of the stochastic system (<ref>) with probability higher than or equal to 99 % are shown in Figure <ref> (left). *Interval-based reachability In this part, we use the separation strategy in Theorem <ref> with the bounds on propagation of stochastic uncertainty obtained from Theorem <ref> and the over-approximations of reachable sets of the associated deterministic system (<ref>) obtained using interval-based reachability via a coordinate transformation. We consider the coordinate transformation y=Tx with nonsingular matrix T=[[ 1 0.2; 1 0 ]] for the associated deterministic system (<ref>). The transformed system is given by ẏ = T f(T^-1y). One can show that 𝒴_0 =[-π10[[ 1.04; 1 ]],π10[[ 1.04; 1 ]]]⊃ T𝒳_0 is a forward invariant set for the transformed system (<ref>). Moreover, on the forward invariant set 𝒴_0, an inclusion function for the transformed deterministic system (<ref>) is given by (y,y) = [[ Tf(T^-1y); Tf(T^-1y) ]]. 
We denote the trajectory of the associated embedding system starting from π10[[ -1.04; -1; 1.04; 1 ]] by t↦[[ y_t; y_t ]] and use equation (<ref>) to over-approximate the reachable sets of the transformed system (<ref>). More specifically, at time t, every trajectory of the transformed system (<ref>) starting from 𝒴_0 belongs to the set [y_t,y_t]. This implies that every trajectory of the associated deterministic system (<ref>) starting from 𝒳_0 belongs to the parallelotope T^-1[y_t,y_t]. The reachable sets of the system (<ref>) with probability higher than or equal to 99 %, obtained using the separation strategy in Theorem <ref> and interval-based reachability, are shown in Figure <ref> (right). § CONCLUSION We developed a framework for reachability analysis of control systems with stochastic disturbances. A key feature of our framework is that it separates the effect of stochastic disturbances and deterministic inputs in the evolution of the system. We use contraction theory to obtain probabilistic bounds on the propagation of stochastic disturbances and use existing contraction-based and interval-based reachability frameworks to over-approximate the effect of deterministic input/disturbance. It is well-known that, for linear stochastic systems, the marginal distribution of trajectories is Gaussian and has an exponential tail <cit.>. However, our results (Propositions <ref> and <ref>) only imply sub-linearity of the marginal distribution of trajectories for linear stochastic systems. Future work will explore whether, for nonlinear stochastic systems, the marginal distributions of trajectories exhibit the same exponential tail behavior.
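To make the pendulum experiment concrete, the following Monte Carlo sketch is our own reconstruction (the step size, horizon, seed, and sample count are arbitrary choices, and the authors' experiments may differ). It integrates the stochastic pendulum with Euler-Maruyama, integrates the associated deterministic system from the same initial condition, and checks the empirical coverage of the high-probability radius ρ(t,δ) on the weighted distance between the two trajectories.

import numpy as np

# Parameters taken from the example above; integration settings are our own choices.
g, L = 10.0, 1.0
k1, k2 = -20.0, -20.0
sigma = 0.1
P = np.array([[35.68, 2.21], [2.21, 1.27]])
c_P, d_P = -0.5, 0.0128
delta = 0.01                        # 99% confidence level
dt, T = 1e-3, 1.0
steps = int(T / dt)
rng = np.random.default_rng(0)

def drift(x):
    return np.array([x[1], g / L * np.sin(x[0]) + k1 * x[0] + k2 * x[1]])

def rho(t):
    # High-probability radius on ||X_t - x_t||_{2,P} from the contraction-based bound.
    return np.sqrt(d_P / (2.0 * delta * c_P) * (np.exp(2.0 * c_P * t) - 1.0))

n_mc = 200
x0 = np.array([np.pi / 10.0, 0.2])              # a point in the initial set X_0
inside = 0
for _ in range(n_mc):
    X, x = x0.copy(), x0.copy()
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        X = X + drift(X) * dt + np.array([0.0, sigma]) * dW    # Euler-Maruyama step
        x = x + drift(x) * dt                                  # associated deterministic step
    dev = X - x
    inside += float(dev @ P @ dev <= rho(T) ** 2)
print(f"empirical coverage at t={T}: {inside / n_mc:.3f} (target at least {1.0 - delta})")

Because the bound comes from a Markov-type argument, the empirical coverage is typically much closer to one than the nominal 1-δ level.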
http://arxiv.org/abs/2407.12730v1
20240717164934
RoDE: Linear Rectified Mixture of Diverse Experts for Food Large Multi-Modal Models
[ "Pengkun Jiao", "Xinlan Wu", "Bin Zhu", "Jingjing Chen", "Chong-Wah Ngo", "Yugang Jiang" ]
cs.CV
[ "cs.CV", "cs.AI" ]
RoDE: Linear Rectified Mixture of Diverse Experts for Food Large Multi-Modal Models ==================================================================================== § ABSTRACT Large Multi-modal Models (LMMs) have significantly advanced a variety of vision-language tasks. The scalability and availability of high-quality training data play a pivotal role in the success of LMMs. In the realm of food, while comprehensive food datasets such as Recipe1M offer an abundance of ingredient and recipe information, they often fall short of providing ample data for nutritional analysis. The Recipe1M+ dataset, despite offering a subset for nutritional evaluation, is limited in the scale and accuracy of nutrition information. To bridge this gap, we introduce Uni-Food, a unified food dataset that comprises over 100,000 images with various food labels, including categories, ingredients, recipes, and ingredient-level nutritional information. To mitigate the conflicts arising from multi-task supervision during fine-tuning of LMMs, we introduce a novel Linear Rectified Mixture of Diverse Experts (RoDE) approach. RoDE utilizes a diverse array of experts to address tasks of varying complexity, thereby facilitating the coordination of trainable parameters: it allocates more parameters to more complex tasks and, conversely, fewer parameters to simpler tasks. RoDE employs linear rectified units to refine the router's functionality, thereby enhancing the efficiency of sparse task allocation. These design choices endow RoDE with features that ensure GPU memory efficiency and ease of optimization. Our experimental results validate the effectiveness of our proposed approach in addressing the inherent challenges of food-related multitasking. § INTRODUCTION Food occupies a central position in our daily lives, leading to the emergence of various food-related tasks, such as ingredient recognition, recipe generation, and nutrition estimation. These tasks have attracted considerable research interest over the years <cit.>. Building on the success of Large Language Models (LLMs) <cit.>, Large Multi-modal Models <cit.> have begun to make a significant impact in many specialized areas, including the food domain <cit.>. The success of LLMs and LMMs can be largely attributed to the availability of large-scale training data. In the food domain, while Recipe1M <cit.> provides over a million web-crawled recipes and thousands of food images, it lacks comprehensive nutritional information. While Recipe1M+ <cit.> includes a subset containing nutritional information, it suffers from limitations in data scale and annotation quality. Similarly, nutrition-focused datasets such as Nutrition5k <cit.> are also limited in scale and may suffer from domain shift issues, as the labeled data primarily consists of lightly processed ingredients rather than fully cooked meals. Moreover, previous studies <cit.> have indicated that integrating diverse sources of data for training can lead to data conflicts. The performance of an LMM on a specific task also depends heavily on the representation of that task's data within the overall training dataset <cit.>.
These challenges highlight the need for a unified dataset covering all food-related tasks from the same source, aiming at mitigating data conflicts and ensuring balanced representation across tasks. In order to overcome these obstacles, we introduce Uni-Food, a unified dataset that encapsulates various food-related information, food category, ingredients, recipes, and valuable ingredient-level nutritional information for each food image. Specifically, we curate 100,000 samples from Recipe1M <cit.> and employ ChatGPT-4 <cit.> to generate nutritional information for each ingredient list. This information is then aggregated to derive the nutritional data for the entire dish. Furthermore, in order to ensure a high-quality gold standard set for testing, we utilize human curation to isolate a superior subset specifically designated for testing purposes. Our proposed dataset facilitates large-scale training of LMMs for various food-related tasks within the same dataset. Nevertheless, this endeavor also presents new challenges associated with multi-task learning when fine-tuning large models. Mixture of Experts (MoE) <cit.> has been a common strategy in the field of Natural Language Processing (NLP) to handle multi-task fine-tuning. This method involves the utilization of multiple expert models, each specialized in different task or segment of the data distribution. Recently, there has been a surge in the application of MoE to LMMs. MoE-LLaVA <cit.>, for instance, integrates MoE into its Feed-Forward Network (FFN) layers to enhance the model's adaptability across tasks. On the other hand, LLaVA-MoLE <cit.> incorporates the MoE paradigm into the Low-Rank Adaptation (LoRA) <cit.> module, selecting the top-1 experts for each specific task to guarantee the spare activation of experts. However, as MoE-LLaVA incorporates experts throughout the FFN layer, the resulting increase in training parameters can be prohibitive. Conversely, although LLaVA-MoLE utilizes LoRA experts, which are more parameter-efficient, its strategy of selecting only the top expert may constrain the model's flexibility on skill leverage. This is because certain tasks might share foundational skills. For example, in the food domain, both ingredients recognition and recipe generation are sensitive to the composition of ingredients in a dish. To address the aforementioned issues, we propose Linear Rectified Mixture of Diverse Experts (RoDE). Following LLaVA-MoLE <cit.>, we design experts from the LoRA perspective to ensure efficient use of trainable parameters. Unlike LLaVA-MoLE, which dedicates each expert to a particular task, LLaVA-RoDE conceptualizes experts as granular skill modules, allowing multiple experts to participate in a single task. This design inspires us to develop experts with diverse capabilities—that is, we allocate varying amounts of trainable parameters to each expert in an effort to conserve GPU memory. To integrate the experts, we then introduce a linear rectified router to assign tasks to the appropriate LoRA experts. The router employs a Rectified Linear Unit (ReLU) to refine its output, which confers two significant benefits: 1) it enables sparse task-skill matching, which has been proved to surpass dense activation methods in effectiveness <cit.>; 2) the simplicity of the ReLU function makes it inherently easy to optimize. Consequently, this leads to a sparing activation of the diverse expert set. 
RoDE is engineered to optimize the allocation and usage of the model's resources, significantly bolstering the capabilities of large-scale models in multi-task learning scenarios. Our empirical findings underscore the superior performance of RoDE in the realm of food multi-task learning challenges, validating the strength of our proposed methodology. Our contributions can be summarized as follows: * We introduce Uni-Food, a novel dataset encompassing a variety of food vision tasks. Building upon the substantial ingredient and recipe dataset, we further incorporate valuable ingredient-level nutritional information to facilitate relevant dietary research. * We propose Linear Rectified Mixture of Diverse Experts (RoDE) approach designed to effectively tackle food multi-tasks learning. RoDE approach leverages a combination of LoRA experts with varying ranks to model tasks of different complexities and employs linear rectified router to sparsely allocate these experts to appropriate tasks. * Experimental results demonstrate the effectiveness of our proposed approach on food multi-task learning. And ablation studies highlight the GPU memory efficiency and the sparse allocation of experts intrinsic to our RoDE model. § RELATED WORK Large Multimodal Models (LMMs). In recent years, Large Language Models (LLMs) have showcased remarkable prowess in Natural Language Processing (NLP), paving the way for breakthroughs in multimodal learning. LMMs typically integrate a pre-trained vision encoder with a LLM architecture. Subsequently, visual features undergo adaptation via a projection module, facilitating their integration into the hidden space of LLMs for joint processing with textual inputs. Through multimodal training, LMMs acquire the capability to generate responses based on both visual and textual inputs. LLaVA <cit.>, for instance, introduces a vision encoder to LLMs and demonstrates significant enhancements across various vision-language tasks. Building upon LLaVA's advancements, LISA <cit.> further enriches multimodal models by incorporating a segmentation module. This addition augments the model's capacity to discern fine-grained details within visual inputs, resulting in more nuanced and contextually richer representations. GPT-4v <cit.> stands out as one of the most powerful LMMs, capable of providing instructional data across numerous research domains. Its potency extends beyond NLP, contributing to advancements in various interdisciplinary fields. Food Multi-task Learning. With food playing a significant part in human life, there has been a lot of work carried out doing research in this domain. In the early stage, food classification <cit.> happened to be the most popular task related, bringing up a surge in the number of relevant datasets, like Food-101 <cit.>, UEC Food 256 <cit.> and Food2K <cit.>. As the tasks in the food domain developed into greater diversity, from ingredient recognition <cit.>, recipe generation <cit.> to nutrition estimation <cit.>, food datasets have also grown to be more diverse and large-scale. Vireo Food-172 <cit.> and 251 <cit.> are datasets containing not only category information but also ingredient labels, Recipe1M <cit.> and Recipe1M+ <cit.> are datasets involving a wealth of recipe data, and Nutrition5K <cit.> is a dataset specialized in high accuracy nutritional content annotation. 
While all of the above methods have demonstrated their performance on food, they were all limited to address one or a few tasks, rather than integrating all tasks into a single multimodal model. To accomplish this, Yin et al. came up with FoodLMM <cit.>, a versatile food assistant based on LMMs that could handle a variety of tasks in the food domain. However, their method might lead to task conflict when several tasks were fine-tuned at the same time, lacking the capability to efficiently implement multi-task learning. Mixture of Experts (MoE) dynamically combines the decisions of multiple experts on the same input to improve overall performance. In LLMs, which typically adopt transformer architecture, MoE is often implemented within MLP layers. LoRAHub <cit.> initially trains a series of LoRA weights on upstream tasks. To adapt these to a downstream task, it employs a gradient-free method to search for the coefficients that will combine the pre-trained LoRA set. MOELoRA <cit.>, on the other hand, utilizes a router conditioned on a task identifier to dynamically merge multiple LoRA outputs. In contrast, MoCLE <cit.> designs a router that is conditioned on the clustering information of each individual input sample. LoRAMoE <cit.> partitions the LoRA experts into two groups and purposefully cultivates different capabilities within each group. All these mixture-of-LoRA methods have pre-set hyperparameters that require meticulous selection, and the LoRA experts are densely combined. <cit.> conducted a comparison between the dense and sparse mixtures of LoRA experts for large language models, concluding that a dense mixture yields superior performance. Some studies have migrated MoE designs into LMMs. MoE-LLaVA <cit.> introduces multiple FFNs in stead of a single one and integrates a router mechanism to sample predictions. LLaVA-MoLE <cit.> applies multiple LoRAs and activates one specifically for a task. However, these methods fall short in effectively handling the nuances of rational, fine-grained task-skill allocation. Our paper aims to address this gap. § DATASET CONSTRUCTION In this paper, we construct a large-scale dataset called Uni-Food. Different from other publicly available food datasets <cit.>, Uni-Food contains various attributes used in food-related tasks, including food category, ingredients, recipe and nutrition for each food image. To the best of our knowledge, this is the largest dataset that provides all the attributes in one dataset. Table <ref> summarizes the tasks and sample sizes of existing primary datasets and our Uni-Food dataset. §.§ Attribution Our objective is to construct a unified and comprehensive dataset containing rich information relevant to food, encompassing the following key attribution for each image. Category: Classifying each food item into specific categories to facilitate organization and categorization within the dataset. Ingredients Information: Providing a thorough breakdown of the ingredients used in each dish, including their names and quantities. Cooking Instructions: Offering step-by-step instructions on how to prepare each dish, ensuring clarity and completeness for easy reproduction. Nutrition Information: Incorporating detailed nutritional data for each dish, such as macronutrient content (e.g., carbohydrates, proteins, fats), micronutrients, and total calories. An intuitive sample demonstration of these attributions is shown in Figure <ref>. 
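To make the record layout concrete, a hypothetical Uni-Food sample could be organized as follows; the field names and nutrient values here are purely illustrative (they are our own, not the released schema), and the small helper mirrors how ingredient-level nutrition is aggregated into dish-level totals, as described in the labeling procedure below.

# Hypothetical Uni-Food record; field names and numbers are illustrative only.
sample = {
    "image": "uni_food/images/000123.jpg",
    "category": "lasagna",
    "ingredients": [
        {"name": "ground beef", "amount": "250 g",
         "nutrition": {"mass": 250.0, "energy": 620.0, "fat": 50.0, "carbs": 0.0, "protein": 42.0}},
        {"name": "tomato sauce", "amount": "200 g",
         "nutrition": {"mass": 200.0, "energy": 58.0, "fat": 0.4, "carbs": 13.0, "protein": 2.6}},
    ],
    "instructions": ["Brown the beef.", "Layer with sauce and pasta sheets.", "Bake until golden."],
}

def dish_totals(record):
    # Sum ingredient-level nutrition into dish-level totals (energy in kcal, the rest in grams).
    keys = ("mass", "energy", "fat", "carbs", "protein")
    return {k: round(sum(ing["nutrition"][k] for ing in record["ingredients"]), 2) for k in keys}

print(dish_totals(sample))   # {'mass': 450.0, 'energy': 678.0, 'fat': 50.4, 'carbs': 13.0, 'protein': 44.6}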
The distribution across various categories is visualized in Figure <ref> §.§ Nutrition Labeling As the ingredient and recipe information can be easily collected from Recipe1M+ <cit.>, we proceed to annotate the nutrition content. To acquire precise nutritional information, we feed both the food image and the ingredients with their respective amounts into ChatGPT4-vision [We intentionally omit recipe information to enhance processing efficiency, as we have observed that including recipes increases query time without considerably improving precision.]. The output text is meticulously processed to extract the nutritional values. Leveraging the capabilities of ChatGPT4-vision, we collect detailed nutrition information at the ingredient level, encompassing metrics such as mass, fat content, protein content, carbohydrate content, and energy content. Importantly, the ingredient list in recipes contains the quantity for each ingredient, along with the food image, ChatGPT4-vision is capable of generating high-quality ingredient-level nutrition information according to our manual check based on USDA [https://fdc.nal.usda.gov/]. The ingredient-level nutrition information is then aggregated to derive the comprehensive nutrition profile of the entire dish. Through this process, we obtain a holistic understanding of the nutritional composition of the dish, enabling precise estimation and analysis of its nutritional value. §.§ Golden set selection To ensure accurate evaluation, we construct a precise gold standard as the test set. To accomplish this goal, we manually filter out inaccurate samples by cross-referencing their nutritional information with the USDA database. Additionally, we eliminate certain overrepresented categories to achieve a balanced distribution of sample categories. § METHOD In this section, we first provide a preliminary overview of the problem formulation for multi-task tuning of LMMs, as well as the commonly used techniques, Low-Rank Adaptation (LoRA) <cit.> and Mixture of Experts (MoE) <cit.> for addressing this problem. Subsequently, we introduce our Linear Rectified Mixture of Diverse Experts approach in detail. §.§ Preliminary §.§.§ Problem Formulation. A Large Multi-modal Model (LMM) can be constituted by a vision encoder and a Large Language Model (LLM), as shown in Figure <ref>. Normally, LMM is initially pre-trained with tremendous amounts of data, and then fine-tuned to adapt to downstream tasks. The utilization of multi-modal documents for Supervised Fine-Tuning (SFT) <cit.> is a common practice in the fine-tuning of LMMs. Let us denote the set of multimodal documents as D, where D = {(I_i, T_i)}_i=1^M, where I_i signifies the image, and T_i represents the associated set of tasks with that image. M stands for the total number of documents. Each task set T_i encompasses sequence-specific tasks { (q_i^j, a_i^j) } _j=1^𝒯, where q_i^j, a_i^j is the question and answer for task j, and 𝒯 is the number of task types. The primary objective of SFT is to using D to fine-tune the LMM so that it can provide corresponding answers to given questions based on an image. More specifically, for an image I_i and a question q_i^j, the training objective is to maximize the probability p(a_i^j|I_i, q_i^j, θ), where θ represents the trainable parameters of the large model. §.§.§ Low-Rank Adaptation (LoRA). Fully tuning large models can be resource-intensive. 
Parameter-Efficient Fine-Tuning (PEFT) <cit.> introduces additional adapters to effectively customize large models for downstream tasks with minimal resource overhead. Let W represent the weight of a linear layer L in the large-scale model, which is initially set as W_0 after pre-training. In PEFT, W_0 remains frozen in order to preserve the acquired world knowledge. To facilitate adaptation to downstream tasks, a learnable branch linear adapter, denoted as Δ W, is introduced to modify the initial frozen linear weights W_0. Consequently, the output of the adapted linear layer L can be expressed as W = W_0 + Δ W. LoRA <cit.> further decomposes Δ W into two matrices, A and B, where the connection rank is considerably smaller than that of W_0. LoRA allows the model to adapt to downstream tasks while reducing the number of training parameters. §.§.§ Mixture of Experts (MoE). The Mixture of Experts (MoE) <cit.> paradigm of LoRA is proposed to adapt large models to multiple downstream tasks by employing multiple LoRAs in the MLP layers. Each LoRA module can be considered as an expert, wherein all experts receive the same input and combine their outputs to improve overall performance. Let R = { A_i, B_i} _i=0^N denote a series of LoRA modules, where N signifies the number of LoRAs. We use h(·) to denote the linear router, which takes the dimension of the input feature x and outputs the allocation vector of R. The output of layer L incorporating the MoE module can be represented as: x' = W_0 x + ∑_i=0^Nα/rσ(h_i)B_i^rA_i^rx. Here, σ stands for the softmax operation, α is a hyperparameter, and x' is the output feature. §.§ Linear Rectified Mixture of Diverse Experts Our proposed RoDE framework incorporates a variety of experts, each with distinct capabilities, and a linear rectification router to integrate the contributions of these experts. The overall structure of the framework is depicted in Figure <ref>. A comprehensive illustration of the framework is provided in the subsequent sections. §.§.§ Diverse Capability Experts Typically, within the Mixture of Experts (MoE) framework, all experts share the same architecture. In the case of LoRA, for example, each expert is assigned the same rank. However, this uniformity assumes equal capabilities across all experts. This implies that for more complex tasks, each expert might need to possess a large number of parameters to adapt effectively. However, this could be prohibitively demanding in terms of GPU memory requirements. To mitigate the issue of GPU memory constraints, we conceptualize the experts as fine-grained skill modules. The key idea is that a task may activate a combination of these modules, and these modules can be shared across various tasks. This modular design intuitively leads us to develop LoRA experts with different capabilities tailored to tasks of varying complexity. Drawing inspiration from Low-Rank Adaptation <cit.>, which suggests that a low-rank adapter may be sufficient for certain tasks, we create LoRAs with varying ranks. Consequently, the resulting skill space comprises both high-rank and low-rank LoRA experts, providing greater flexibility and efficiency in addressing a diverse set of tasks. To construct such skill space, we configure a heterogeneous set of experts, represented as R = { A_i^r_i, B_i^r_i}_i=0^N, where the rank r_i of i-th LoRA module may vary from the others. Therefore, we can establish a skill space composed of various LoRAs, each tailored to accommodate tasks of different complexity levels. 
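As a rough PyTorch sketch (ours, not the authors' released implementation, and with placement and initialization details simplified), a linear layer augmented with such a heterogeneous pool of LoRA experts could be written as follows; the softmax gate reproduces the baseline MoE combination in the equation above, while the rectified router introduced in the next subsection only changes how the gate values are produced.

import torch
import torch.nn as nn

class MixtureOfLoRALinear(nn.Module):
    # Frozen base linear layer plus LoRA experts whose ranks may differ, e.g. ranks=(2, 4, 8, 16).
    def __init__(self, d_in, d_out, ranks=(2, 4, 8, 16), alpha=16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)                 # W_0 stays frozen
        self.alpha, self.ranks = alpha, list(ranks)
        self.A = nn.ModuleList([nn.Linear(d_in, r, bias=False) for r in ranks])
        self.B = nn.ModuleList([nn.Linear(r, d_out, bias=False) for r in ranks])
        for b in self.B:                                       # standard LoRA init: B starts at zero
            nn.init.zeros_(b.weight)
        self.router = nn.Linear(d_in, len(ranks))              # one logit per expert

    def forward(self, x):
        gates = torch.softmax(self.router(x), dim=-1)          # dense mixing, as in the MoE baseline
        out = self.base(x)
        for i, (A, B, r) in enumerate(zip(self.A, self.B, self.ranks)):
            out = out + (self.alpha / r) * gates[..., i:i + 1] * B(A(x))
        return out

layer = MixtureOfLoRALinear(64, 64)
print(layer(torch.randn(2, 10, 64)).shape)                     # torch.Size([2, 10, 64])

Under this layout, the number of trainable LoRA parameters scales with the sum of the ranks, which is what allows a few high-rank experts to coexist with cheap low-rank ones.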
§.§.§ Linear Rectified Router The router consolidates the contributions of each expert based on the demands of the specific task. Previous research <cit.> has shown that the use of sparse mixtures of LoRA experts outperforms dense experts in the context of large language models. LLaVA-MoLE <cit.> presents a top-1 selection strategy, which ensures the sparsity of expert selection in LMM. While some methodologies in the Natural Language Processing (NLP) field adopt a 'soft' approach to combine expert outputs—for instance, <cit.> utilizes softmax and <cit.> employs Gumbel softmax—these techniques are not as sparse and can be challenging to optimize. In contrast, our methodology adopts a novel approach by utilizing the Rectified Linear Unit (ReLU) <cit.> to rectify the output of the routers, thereby encouraging the sparse learning of LoRA expert activations. A visual illustration of our routing strategy is shown in Figure <ref>. Leveraging the intrinsic properties of the Rectified Linear Unit (ReLU), our approach benefits from a simplified optimization landscape and fosters sparsity within the network, which enables our model to boost both efficiency and efficacy. Let γ represent the linear rectification operation, γ(x) = max (x, 0). We utilize ReLU to rectify the linear router, resulting in an adjusted linear output that can be expressed as follows: x' = W_0x + ∑_i=0^Nα/r_iγ(h_i) B_i^r_iA_i^r_i x , where (A_i^r_i, B_i^r_i) represents the operation performed by the i-th LoRA expert. The summed output from the RoDE module is then added to the frozen linear output and forwarded to the next module. This linear rectification mechanism, enhances the sparsity of skill selection, facilitating efficient utilization of expertise across a broad spectrum of tasks. Overall, diverse LoRA experts and linear rectified arrangement yield a fine-grained task-skill arrangement space, contributing to the optimized allocation and utilization of resources within the model. §.§ Optimization Objects and Inference Following previous LMM-based methods <cit.>, the input images are transformed into image tokens and concatenated with text tokens to send to LLM. The training process adheres to the autoregressive model, predicting the next token based on the input tokens. We employ the standard cross-entropy loss as the optimization objective to train our model. During inference, for each task, we input the image along with the corresponding question into the model. The model's output tokens are then converted into words to formulate the answer. § EXPERIMENT In this section, we first present our experimental setup. Then we elaborate experimental results of multiple food tasks based on multi-task learning. Following this, we perform an ablation study to evaluate the impact and effectiveness of our main components. §.§ Experiment Setup Our approach commenced by utilizing the pre-trained weights of the LLaVA-Lightning-7B-v1-1 <cit.> model, subsequently applying SFT on the Uni-Food and Nutrition5k datasets <cit.>. In our RoDE model, we employ a heterogeneous array of experts with LoRA ranks of [2, 4, 8, 16]. Our primary baseline is FoodLMM <cit.>, which to our knowledge is the only versatile LMM specialized in food-related tasks. Additionally, we compare with FoodLMM+MoE, which substitutes the plain LoRA module with a Mixture of Experts (MoE) module. Each MoE module consists of four LoRAs with a rank of 8. The routing mechanism for the MoE module is a linear layer followed by a softmax operation, as detailed in Section <ref>. 
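In code, the gap between that softmax router and RoDE's rectified router is a one-line change; the toy example below (ours, with made-up logits) illustrates why rectification yields sparse expert activation while softmax keeps every expert weakly active.

import torch

def softmax_gate(logits):
    # Dense allocation: every expert receives a strictly positive weight.
    return torch.softmax(logits, dim=-1)

def rectified_gate(logits):
    # RoDE-style linear rectification: negative logits are cut to exactly zero,
    # so only a subset of experts is active for a given input.
    return torch.relu(logits)

logits = torch.tensor([1.2, -0.3, 0.4, -2.0])
print(softmax_gate(logits))     # roughly [0.58, 0.13, 0.26, 0.02] -- all experts contribute
print(rectified_gate(logits))   # [1.2, 0.0, 0.4, 0.0] -- the negative-logit experts are switched off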
Importantly, for simplicity, we omit the specialized token tags and additional heads used by FoodLMM specifically for nutrition estimation, and instead directly convert the output tokens from the LMM into text to generate nutritional predictions. For the standardization of the ingredient vocabulary, we adopt the same clustering protocol used for ingredients in InverseCooking <cit.>. Tasks and Evaluation Metrics. The tasks and corresponding metrics in our experiments are as follows. Ingredient Recognition: The objective here is to identify and list the ingredients present in a dish shown in an image. We assess performance using Intersection over Union (IoU) as the metric. Recipe Generation: The goal of this task is to generate cooking instructions for the dish captured in an image. We measure performance using the commonly used text generation metrics in NLP, SacreBLEU <cit.> and Rouge-L <cit.>. Nutrition Estimation: This task involves estimating a range of nutritional parameters from a dish's image, including the total food mass, total energy, total fat, total carbohydrates, and total protein content. Energy is measured in kilocalories, while the remaining parameters are quantified in grams. We employ the mean absolute error as a percent of the respective mean for that field (pMAE) <cit.> for the metric of nutrition estimation. Implementation Details. In this study, we utilize LLaVA-7B <cit.> as the base LMM, wherein a CLIP ViT-L model serves as the vision encoder, and Llama 7B <cit.> serves as the Language Model. For the CLIP ViT-L encoder, the input image resolution is set at 336x336, which is then divided into 14 patches. Subsequently, a two-layer MLP is used to transform these image patches into 576 tokens. For efficiency, the LoRA module is added to the query and key projection layer. Unless specifically stated, the rank of LoRA is set to 8. Hyperparameter α is set to 16, and the LoRA dropout rate is set to 0.05. During the training process of all our experiments, the weights of vision encoder and Llama are kept constant, with only the added LoRA modules being trainable. We employ the AdamW optimizer <cit.> and use the WarmupDecayLR learning rate scheduler. The initial learning rate is set to 0.0003, weight decay is set to 0, and we perform 100 warm-up iterations. In the training stage, we use a batch size of 4, and gradient updates are computed every 10 batches. The training process is distributed across 4 RTX4090 GPUs. The training epoch is set to 1. §.§ Performance Comparison We perform a comparative analysis between our model and two versions of FoodLMM: a base version equipped with plane LoRA, and an enhanced version, FoodLMM+MoE, equipped with four 8-rank LoRA and a linear router with softmax. These methods are evaluated across three tasks: ingredient recognition, recipe generation, and nutrition estimation. The results are summarized in Table <ref>. The table clearly shows that the MoE design enhances the performance of FoodLMM. By adopting the traditional MoE paradigm, FoodLMM+MoE, there is an 8.6% improvement in IoU for ingredients recognition, for recipe generation, there is a 14.6% increase in SacreBLEU and a 1.1% gain in Rouge-L scores. However, there is a slight reduction of 1% in nutritional performance. Overall, the comparison demonstrates the advantages of incorporating MoE. Furthermore, our model, RoDE, significantly outperforms the traditional MoE design, achieving the highest scores across all evaluated metrics. 
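For completeness, the two non-text metrics used above can be computed in a few lines; this is a sketch based on our reading of the definitions, and the actual evaluation scripts may differ in how ingredient names are normalized or how fields are averaged.

def ingredient_iou(pred, gold):
    # Intersection over Union between predicted and ground-truth ingredient sets.
    pred, gold = set(pred), set(gold)
    return len(pred & gold) / len(pred | gold) if (pred | gold) else 1.0

def pmae(preds, targets):
    # Mean absolute error expressed as a percentage of the mean target value for a field.
    n = len(targets)
    mae = sum(abs(p - t) for p, t in zip(preds, targets)) / n
    return 100.0 * mae / (sum(targets) / n)

print(ingredient_iou(["beef", "tomato", "onion"], ["beef", "tomato", "pasta"]))   # 0.5
print(pmae([480.0, 250.0], [500.0, 300.0]))                                       # 8.75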
In the ingredient recognition task, RoDE surpasses FoodLMM+MoE by a notable margin of 9.5%. For recipe generation, RoDE shows an 11% higher SacreBLEU score and a 4.2% improvement in Rouge-L score compared to FoodLMM+MoE. In the area of nutrition estimation, RoDE nearly tops the charts for all nutrient elements and secures the leading position in terms of average performance. These experimental results conclusively affirm the superiority of our proposed RoDE approach in food multi-task learning. Additionally, we conduct further experiments on Nutrition5k  <cit.> dataset, adhering to the methodology established by FoodLMM, where only image information was utilized for training (without depth information). The outcomes, as presented in Table <ref>, unequivocally demonstrate that our methods consistently surpass both FoodLMM and traditional MoE variations in performance. §.§ Ablation study In this section, we perform an ablation study to evaluate the contribution and efficacy of the core components within our framework. For simplicity, we omit the task names and tag only the metrics. §.§.§ MoE vs. Larger LoRA rank As illustrated in Table <ref>, we first compare the standard LoRA (Var1) with its MoE variant (Var3). The results indicate a superior performance of the MoE design. To ascertain whether this performance improvement can be attributed to an increase in the number of trainable LoRA parameters, we augment the LoRA rank of Var1 to 32 to obtain Var2. A comparison between Var2 and Var4 reveals the superiority of the MoE design, even though these two variants have an equivalent number of trainable LoRA parameters. This finding suggests that the successful integration of MoE is not simply a result of expanding the number of trainable parameters. §.§.§ Routing Strategy Then we examine the effects of the routing strategy. Our experiment includes three routing strategies: 1) similar to <cit.>, the top-1 expert is selected, referred to as Top-1; 2) following the approach in <cit.>, we employ a softmax function to normalize the router outputs, which is denoted as Softmax; 3) we introduce our novel linear rectified router, which we denote with the abbreviation LR. The results are summarized in Table <ref> and visualized in Figure <ref>. As observed, while the Top-1 allocation MoE outperforms the standard LoRA (Var1), it is surpassed by the "soft" allocation strategies (Var2 and Var3). This indicates the efficiency of employing an ensemble of experts to address a single task. Moreover, the comparison between the LR and Softmax routers ( Var2 and Var3, Var4 and Var5) suggests that the LR routing strategy is superior. These results further underscore the effectiveness of our linear rectification routing strategy. In addition, we visualize the heatmaps of router outputs for both the Softmax and LR routing strategies across several middle transformer blocks. Figure <ref> illustrates this comparative analysis, clearly indicating that our proposed Linear Rectified Router achieves a higher degree of sparsity in task allocation. §.§.§ Rank configuration of LoRA We conduct an ablation study to evaluate the impact of different LoRA rank configurations. The results, as presented in Table <ref>, span multiple expert configurations, including: 1) Var1 - Four LoRAs, each with a uniform rank of 5. 2) Var2 - Four LoRAs, each with a uniform rank of 8. 3) Var3 - A heterogeneous set of four LoRAs with ranks of [2, 4, 6, 7]. 
4) Var4 - Another diverse set of four LoRAs with a distinct rank composition from Var3, specifically [2, 4, 8, 16]. The analysis of Var1 and Var2 suggests that configurations with higher-ranked LoRA experts are more effective. However, despite having the same number of trainable parameters as Var1, Var3 demonstrates superior performance, indicating that a mix of higher and lower-ranked LoRA experts can also be effective. Moreover, Var3 shows comparable performance to Var2, with only a slight reduction in SacreBLEU score (by 0.08) and Rouge-L score (by 0.36), while having 32.5% fewer trainable parameters. This finding supports the notion that not all LoRA experts require high capacity to contribute effectively. By adjusting the ranks in Var3 to create Var4, we observe improved performance which exceeds that of Var2, despite Var4 having 6.25% fewer trainable parameters. This outcome implies that a strategic combination of LoRA expert sizes can effectively support the inclusion of some larger LoRAs, potentially enhancing their capability to handle complex tasks more efficiently. §.§ Training Parameter Efficiency We evaluate various rank settings by visualizing the correlation between the number of trainable LoRA parameters and their corresponding task performance on the Uni-Food dataset. Figure <ref> illustrates this relationship, with the y-axis indicating task performance and the x-axis representing the trainable LoRA parameter count. It is apparent that the MoE architecture incorporates more trainable LoRA parameters, which substantially boosts performance across a range of metrics, recipe SacreBLEU, recipe Rouge-L, and ingredient IoU. When comparing sets of homogeneous experts to those of heterogeneous experts (comparing rank sets [5,5,5,5] to [2,4,6,8]), the introduction of high rank LoRA can enhance model performance despite when the number of trainable LoRA parameters is the same. These results suggest that not every expert requires an extensive parameter set; heterogeneous experts can effectively coordinate trainable parameters, resulting in better performance on multi-task learning. § CONCLUSION In this paper, we delve into the broader scope of tasks within the realm of food studies. We introduce Uni-Food, a comprehensive dataset comprising classification, ingredient recognition, recipe generation, and nutrition estimation. Uni-Food serves as a foundational resource empowering a wide spectrum of food-related research endeavors. The expansion of training tasks brings about a challenge of task conflict. To mitigate these task conflicts during SFT on Large LMMs, we introduce the Linear Rectified Mixture of Diverse Experts (RoDE). RoDE constructs a skill space that experts are sharable, and arranges a variety of trainable parameters to establish heterogeneous experts capable of adapting to tasks of different complexities. It employs linear rectified units to refine linear routers, encouraging the routers to learn sparse allocation. RoDE is not only GPU memory-efficient but it also simplifies the optimization process. Our experimental results clearly demonstrate the effectiveness of our proposed method, highlighting its efficiency and efficacy in overcoming the multifaceted challenges associated with the food domain. 39 [Achiam et al.(2023)Achiam, Adler, Agarwal, Ahmad, Akkaya, Aleman, Almeida, Altenschmidt, Altman, Anadkat et al.]achiam2023gpt4 Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F. L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. 2023. 
Gpt-4 technical report. arXiv preprint arXiv:2303.08774. [Ando et al.(2019)Ando, Ege, Cho, and Yanai]10.1145/3347448.3357172 Ando, Y.; Ege, T.; Cho, J.; and Yanai, K. 2019. DepthCalorieCam: A Mobile Application for Volume-Based FoodCalorie Estimation using Depth Cameras. In Proceedings of the 5th International Workshop on Multimedia Assisted Dietary Management, MADiMa '19, 76–81. New York, NY, USA: Association for Computing Machinery. ISBN 9781450369169. [Bossard, Guillaumin, and Van Gool(2014)]Food101 Bossard, L.; Guillaumin, M.; and Van Gool, L. 2014. In Computer Vision – ECCV 2014, 446–461. Cham: Springer International Publishing. [Chen and Ngo(2016a)]chen2016deepingredient Chen, J.; and Ngo, C.-W. 2016a. Deep-based ingredient recognition for cooking recipe retrieval. In Proceedings of the 24th ACM international conference on Multimedia, 32–41. [Chen and Ngo(2016b)]10.1145/2964284.2964315 Chen, J.; and Ngo, C.-w. 2016b. Deep-based Ingredient Recognition for Cooking Recipe Retrieval. In Proceedings of the 24th ACM International Conference on Multimedia, MM '16, 32–41. New York, NY, USA: Association for Computing Machinery. ISBN 9781450336031. [Chen et al.(2020)Chen, Zhu, Ngo, Chua, and Jiang]chen2020mult_task_region_ingredient Chen, J.; Zhu, B.; Ngo, C.-W.; Chua, T.-S.; and Jiang, Y.-G. 2020. A study of multi-task and region-wise deep learning for food ingredient recognition. IEEE Transactions on Image Processing, 30: 1514–1526. [Chen et al.(2021)Chen, Zhu, Ngo, Chua, and Jiang]9305995 Chen, J.; Zhu, B.; Ngo, C.-W.; Chua, T.-S.; and Jiang, Y.-G. 2021. A Study of Multi-Task and Region-Wise Deep Learning for Food Ingredient Recognition. IEEE Transactions on Image Processing, 30: 1514–1526. [Chen, Jie, and Ma(2024)]chen2024llavamole Chen, S.; Jie, Z.; and Ma, L. 2024. Llava-mole: Sparse mixture of lora experts for mitigating data conflicts in instruction finetuning mllms. arXiv preprint arXiv:2401.16160. [Chhikara et al.(2024)Chhikara, Chaurasia, Jiang, Masur, and Ilievski]Chhikara_2024_WACV Chhikara, P.; Chaurasia, D.; Jiang, Y.; Masur, O.; and Ilievski, F. 2024. FIRE: Food Image to REcipe Generation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 8184–8194. [Ding et al.(2023)Ding, Qin, Yang, Wei, Yang, Su, Hu, Chen, Chan, Chen et al.]ding2023peft Ding, N.; Qin, Y.; Yang, G.; Wei, F.; Yang, Z.; Su, Y.; Hu, S.; Chen, Y.; Chan, C.-M.; Chen, W.; et al. 2023. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 5(3): 220–235. [Dou et al.(2023)Dou, Zhou, Liu, Gao, Zhao, Shen, Zhou, Xi, Wang, Fan et al.]dou2023loramoe Dou, S.; Zhou, E.; Liu, Y.; Gao, S.; Zhao, J.; Shen, W.; Zhou, Y.; Xi, Z.; Wang, X.; Fan, X.; et al. 2023. Loramoe: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment. arXiv preprint arXiv:2312.09979. [Gao et al.(2023)Gao, Chen, Fu, and Jiang]9794570 Gao, J.; Chen, J.; Fu, H.; and Jiang, Y.-G. 2023. Dynamic Mixup for Multi-Label Long-Tailed Food Ingredient Recognition. IEEE Transactions on Multimedia, 25: 4764–4773. [Gou et al.(2023)Gou, Liu, Chen, Hong, Xu, Li, Yeung, Kwok, and Zhang]gou2023mocle Gou, Y.; Liu, Z.; Chen, K.; Hong, L.; Xu, H.; Li, A.; Yeung, D.-Y.; Kwok, J. T.; and Zhang, Y. 2023. Mixture of cluster-conditional lora experts for vision-language instruction tuning. arXiv preprint arXiv:2312.12379. [Hu et al.(2021)Hu, Shen, Wallis, Allen-Zhu, Li, Wang, Wang, and Chen]hu2021lora Hu, E. 
J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. [Huang et al.(2023)Huang, Liu, Lin, Pang, Du, and Lin]huang2023lorahub Huang, C.; Liu, Q.; Lin, B. Y.; Pang, T.; Du, C.; and Lin, M. 2023. Lorahub: Efficient cross-task generalization via dynamic lora composition. arXiv preprint arXiv:2307.13269. [Kawano and Yanai(2014)]kawano14c Kawano, Y.; and Yanai, K. 2014. Automatic Expansion of a Food Image Dataset Leveraging Existing Categories with Domain Adaptation. In Proc. of ECCV Workshop on Transferring and Adapting Source Knowledge in Computer Vision (TASK-CV). [Lai et al.(2023)Lai, Tian, Chen, Li, Yuan, Liu, and Jia]lai2023lisa Lai, X.; Tian, Z.; Chen, Y.; Li, Y.; Yuan, Y.; Liu, S.; and Jia, J. 2023. Lisa: Reasoning segmentation via large language model. arXiv preprint arXiv:2308.00692. [Lin et al.(2024)Lin, Tang, Ye, Cui, Zhu, Jin, Zhang, Ning, and Yuan]lin2024llavamoe Lin, B.; Tang, Z.; Ye, Y.; Cui, J.; Zhu, B.; Jin, P.; Zhang, J.; Ning, M.; and Yuan, L. 2024. Moe-llava: Mixture of experts for large vision-language models. arXiv preprint arXiv:2401.15947. [Lin(2004)]Rouge Lin, C.-Y. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, 74–81. [Liu et al.(2023a)Liu, Li, Wu, and Lee]liu2023llava Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 2023a. Visual Instruction Tuning. [Liu et al.(2023b)Liu, Wu, Zhao, Zhu, Xu, Tian, and Zheng]liu2023moelora Liu, Q.; Wu, X.; Zhao, X.; Zhu, Y.; Xu, D.; Tian, F.; and Zheng, Y. 2023b. Moelora: An moe-based parameter efficient fine-tuning method for multi-task medical applications. arXiv preprint arXiv:2310.18339. [Loshchilov and Hutter(2017)]loshchilov2017adamw Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. [Luo et al.(2023)Luo, Min, Wang, Song, and Jiang]luo2023caclnet Luo, M.; Min, W.; Wang, Z.; Song, J.; and Jiang, S. 2023. Ingredient prediction via context learning network with class-adaptive asymmetric loss. IEEE Transactions on Image Processing. [Marin et al.(2019)Marin, Biswas, Ofli, Hynes, Salvador, Aytar, Weber, and Torralba]marin2019recipe1m Marin, J.; Biswas, A.; Ofli, F.; Hynes, N.; Salvador, A.; Aytar, Y.; Weber, I.; and Torralba, A. 2019. Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images. IEEE Trans. Pattern Anal. Mach. Intell. [Min et al.(2023)Min, Wang, Liu, Luo, Kang, Wei, Wei, and Jiang]min2023prenet Min, W.; Wang, Z.; Liu, Y.; Luo, M.; Kang, L.; Wei, X.; Wei, X.; and Jiang, S. 2023. Large scale visual food recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. [Nair and Hinton(2010)]nair2010RELU Nair, V.; and Hinton, G. E. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), 807–814. [Pan et al.(2020)Pan, Chen, Wu, Liu, Ngo, Kan, Jiang, and Chua]pan2020mm_cooking_workflow Pan, L.-M.; Chen, J.; Wu, J.; Liu, S.; Ngo, C.-W.; Kan, M.-Y.; Jiang, Y.; and Chua, T.-S. 2020. Multi-modal cooking workflow construction for food recipes. In Proceedings of the 28th ACM International Conference on Multimedia, 1132–1141. [Ponti et al.(2023)Ponti, Sordoni, Bengio, and Reddy]ponti2023latent_task_skill Ponti, E. M.; Sordoni, A.; Bengio, Y.; and Reddy, S. 2023. Combining parameter-efficient modules for task-level generalisation. 
In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 687–702. [Post(2018)]bleu Post, M. 2018. A call for clarity in reporting BLEU scores. [Qiu et al.(2022)Qiu, Lo, Sun, Wang, and Lo]qiu2022mining Qiu, J.; Lo, F. P. W.; Sun, Y.; Wang, S.; and Lo, B. 2022. Mining Discriminative Food Regions for Accurate Food Recognition. arXiv:2207.03692. [Radford et al.(2018)Radford, Narasimhan, Salimans, Sutskever et al.]radford2018bert Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018. Improving language understanding by generative pre-training. [Salvador et al.(2019)Salvador, Drozdzal, Giró-i Nieto, and Romero]salvador2019inversecooking Salvador, A.; Drozdzal, M.; Giró-i Nieto, X.; and Romero, A. 2019. Inverse cooking: Recipe generation from food images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10453–10462. [Salvador et al.(2017)Salvador, Hynes, Aytar, Marin, Ofli, Weber, and Torralba]salvador2017recip1m Salvador, A.; Hynes, N.; Aytar, Y.; Marin, J.; Ofli, F.; Weber, I.; and Torralba, A. 2017. Learning cross-modal embeddings for cooking recipes and food images. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3020–3028. [Thames et al.(2021)Thames, Karpur, Norris, Xia, Panait, Weyand, and Sim]thames2021nutrition5k Thames, Q.; Karpur, A.; Norris, W.; Xia, F.; Panait, L.; Weyand, T.; and Sim, J. 2021. Nutrition5k: Towards automatic nutritional understanding of generic food. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8903–8911. [Touvron et al.(2023)Touvron, Martin, Stone, Albert, Almahairi, Babaei, Bashlykov, Batra, Bhargava, Bhosale et al.]touvron2023llama2 Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. [Wang et al.(2022)Wang, Min, Li, Kang, Wei, Wei, and Jiang]9846887 Wang, Z.; Min, W.; Li, Z.; Kang, L.; Wei, X.; Wei, X.; and Jiang, S. 2022. Ingredient-Guided Region Discovery and Relationship Modeling for Food Category-Ingredient Prediction. IEEE Transactions on Image Processing, 31: 5214–5226. [Yin et al.(2023)Yin, Qi, Zhu, Chen, Jiang, and Ngo]yin2023foodlmm Yin, Y.; Qi, H.; Zhu, B.; Chen, J.; Jiang, Y.-G.; and Ngo, C.-W. 2023. FoodLMM: A Versatile Food Assistant using Large Multi-modal Model. arXiv preprint arXiv:2312.14991. [Zhou et al.(2022)Zhou, Lei, Liu, Du, Huang, Zhao, Dai, Le, Laudon et al.]zhou2022dense_is_better Zhou, Y.; Lei, T.; Liu, H.; Du, N.; Huang, Y.; Zhao, V.; Dai, A. M.; Le, Q. V.; Laudon, J.; et al. 2022. Mixture-of-experts with expert choice routing. Advances in Neural Information Processing Systems, 35: 7103–7114. [Zhu et al.(2019)Zhu, Ngo, Chen, and Hao]zhu2019r2gan Zhu, B.; Ngo, C.-W.; Chen, J.; and Hao, Y. 2019. R2gan: Cross-modal recipe retrieval with generative adversarial network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11477–11486.
http://arxiv.org/abs/2407.12604v1
20240717143132
Exact Graph Matching in Correlated Gaussian-Attributed Erdős-Rényi Model
[ "Joonhyuk Yang", "Hye Won Chung" ]
cs.IT
[ "cs.IT", "cs.DS", "cs.SI", "math.IT" ]
http://arxiv.org/abs/2407.13723v1
20240718172314
Superresolving optical ruler based on spatial mode demultiplexing for systems evolving under Brownian motion
[ "Konrad Schlichtholz" ]
quant-ph
[ "quant-ph" ]
[]konrad.schlichtholz@phdstud.ug.edu.pl International Centre for Theory of Quantum Technologies (ICTQT), University of Gdansk, 80-308 Gdansk, Poland § ABSTRACT The development of superresolution techniques, i.e., allowing for efficient resolution below the Rayleigh limit, became one of the important branches in contemporary optics and metrology. Recent findings show that perfect spatial mode demultiplexing (SPADE) into Hermite-Gauss modes followed by photon counting enables one to reach the quantum limit of precision in the task of estimation of separation between two weak stationary sources in the sub-Rayleigh regime. In order to check the limitations of the method, various imperfections such as misalignment or crosstalk between the modes were considered. Possible applications of the method in microscopy call for the adaptive measurement scheme, as the position of the measured system can evolve in time, causing non-negligible misalignment. In this paper, we examine the impact of Brownian motion of the center of the system of two weak incoherent sources of arbitrary relative brightness on adaptive SPADE measurement precision limits. The analysis is carried out using Fisher information, from which the limit of precision can be obtained by Cramér-Rao bound. As a result, we find that Rayleigh's curse is present in such a scenario; however, SPADE measurement can outperform perfect direct imaging. What is more, a suitable adjustment of the measurement time between alignments allows measurement with near-optimal precision. Superresolving optical ruler based on spatial mode demultiplexing for systems evolving under Brownian motion Konrad Schlichtholz July 22, 2024 ============================================================================================================ § INTRODUCTION Estimation of the distance between two light sources is an important subject in microscopy and astronomy. For separations that satisfy Rayleigh's criterion <cit.>, which requires them to be at least as large as the width of the point spread function resulting from diffraction, one of the prominent methods for resolving the distance is direct imaging. However, while smaller separations can be estimated with direct imaging <cit.>, this task becomes harder with decreasing separation <cit.>. This phenomenon is known as Rayleigh's curse. Various superresolution techniques have been proposed to overcome the limitations of direct imaging <cit.>, and using the tools of quantum metrology <cit.> one can identify methods that reach the maximal precision allowed by quantum mechanics. In particular, a method that is quantum optimal was recently proposed, that is, photon counting after perfect spatial mode demultiplexing (SPADE) into Hermite-Gauss modes <cit.>. Furthermore, proof-of-principles experiments utilizing SPADE have already been conducted <cit.>. and the method is a field of active experimental research <cit.>. The astrophysical applications of SPADE <cit.> like, e.g., exoplanet detection, are one of the most studied. However, another branch of applications, microscopy, still needs vast development. When considering applications, one necessarily has to think about imperfections in the implementation of the method <cit.>. For applications in microscopy where, in many cases, the sources are in motion, causing a non-negligible misalignment, the necessity of an adaptive scheme that would align the apparatus during the measurement was pointed out <cit.>. 
However, the system evolves between alignments when the photons are counted. This evolution, which in most cases is a passive <cit.> or active <cit.> Brownian motion, varies the probabilities of detecting photons in specific modes. Thus, it should be taken into account in the measurement scheme to ensure the correct estimation of the separation. In this paper, we show how to account for passive Brownian motion in an adaptive measurement scheme based on SPADE for the estimation of the distance between two weak incoherent point light sources where the center of the measured system evolves under Brownian motion. This scenario corresponds to using this method as an “optical ruler” that allows for, e.g., intramolecular distance estimation (which is one of the important applications in microscopy <cit.>) where the source could be intramolecular or realized as attached fluorophores. Furthermore, we show that it is crucial to consider Brownian motion in the analysis, as it results in the reappearance of Rayleigh's curse, which, however, can be circumvented by proper scaling of the time between realignments. In addition, we show that even in the regime where Rayleigh's curse is present, SPADE can outperform perfect direct imaging. § SYSTEM AND MEASUREMENT SCENARIO The system under consideration is composed of two incoherent point light sources with a fixed distance d between them and arbitrary relative brightness. The sources in the system are weak in the sense that measured photons follow the Poisson distribution. The system is placed in some solvent characterized by a diffusion coefficient D. The solvent forces a Brownian motion on the system, resulting in a displacement of the center of the system from the starting point and a change in the orientation of an axis passing through the sources after a finite time t. Let us introduce a few assumptions about the measurement scheme. First, one can filter out photons that originated from the system from photons that come from the solvent. Measurement is carried out in the imaging plane, say x, y, and thus only projection of the system into this plane is relevant for distance estimation. The imaging plane of the measurement apparatus is at a distance sufficiently large from the center of the system that the impact of the Brownian motion in the z axis is negligible. Photon detectors for all modes are perfect. The measurement procedure starts with a quick estimation of the center of the system and aligning the measurement apparatus with the center. In the next step, the apparatus performs spatial mode demultiplexing of the image in the Hermite-Gauss basis { u_nm(r⃗) } centered, which is assumed to be at the origin of the system at time t=0 and counts photons in particular modes for some finite time T. After that, the procedure is repeated. Due to the Brownian motion, each photon can be emitted from the system characterized by different misalignment vector μ⃗=(μcosψ,μsinψ) and orientation angles (ϕ,θ) (see FIG. <ref>). Note that for different θ the distance between sources in the projection into the imaging plane varies. Thus, the location of the sources is described by two vectors r⃗_⃗i⃗=μ⃗+(-1)^i(dsin(θ)/2)(cosϕ,sinϕ). A good approximation of the spatial distribution of the field after going through a diffraction-limited imaging apparatus is given by two overlapping Gaussian profiles centered on the locations of the sources u_00(r⃗-r⃗_i) where u_00(r⃗)=√(2/(π w^2))exp-r^2/w^2 <cit.> coincides with (0,0) Hermite-Gauss basis function. 
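The geometry just described — a 2D Brownian walk of the system centre combined with an isotropically random orientation of the source axis — can be summarised in a short numerical sketch. The snippet below samples the projected source positions r_i = μ + (−1)^i (d sinθ/2)(cosϕ, sinϕ) at a time t, drawing μ from the two-dimensional Brownian distribution with diffusion coefficient D; it illustrates the model assumptions only, not the demultiplexing measurement itself, and the sample sizes and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_source_positions(d, D, t, n_samples=100_000):
    """Projected positions of the two sources at time t under Brownian drift of the centre."""
    # 2D Brownian displacement of the centre: each Cartesian component has variance 2 D t.
    mu = rng.normal(0.0, np.sqrt(2.0 * D * t), size=(n_samples, 2))
    # Isotropic orientation: phi uniform in [0, 2 pi), cos(theta) uniform in [-1, 1].
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    cos_theta = rng.uniform(-1.0, 1.0, n_samples)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    half = 0.5 * d * sin_theta
    axis = np.stack([np.cos(phi), np.sin(phi)], axis=1)
    r1 = mu - half[:, None] * axis
    r2 = mu + half[:, None] * axis
    return r1, r2

# Example: w = 1 length unit, tau = D T / w^2 = 0.01, separation x = d / (2 w) = 0.1.
r1, r2 = sample_source_positions(d=0.2, D=1.0, t=0.01)
print(np.mean(np.sum(r1**2, axis=1)))   # mean squared distance of a source from the aligned origin
```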
Let us denote by ρ̂(μ⃗,d,ϕ,θ) the density matrix of the field that comes from the misaligned system. We recall that for incoherent superposition of two weak thermal or coherent light sources at fixed positions r⃗_⃗i⃗ measurement can be interpreted as a sequential measurement on multiple copies of the single-photon state ρ̂(r⃗_⃗i⃗) <cit.>. Furthermore, the probability of detecting the photon in the (n,m) Hermite-Gauss mode is determined by the overlap integrals: f_nm(r⃗_i)=∫_𝐑^2 dr⃗u_nm^*(r)u_00(r⃗-r⃗_i) and is given for our state ρ̂(μ⃗,d,ϕ,θ,ν) by: p(nm|μ⃗,d,ϕ,θ,ν)=ν| f_nm(r⃗_1)|^2+(1-ν)| f_nm(r⃗_2)|^2, where ν∈ (0,1) stands for the relative brightness of the sources. Furthermore, in <cit.> method of considering separation estimation between sources undergoing motion on fixed trajectories was presented. We adapt this method to calculate probabilities of measuring the photon in (n,m) mode in the case of Brownian motion (see Appendix <ref> for additional details). As the measurement scheme is repeated many times and motion is a random walk, our system is not described by some specific trajectory, but rather by some probability distribution of the misalignment vector arising from the two-dimensional Brownian distribution and the fully random distribution of orientation. Thus, at any given moment of time t during our measurement repetition system is effectively the mixed state ρ̂_B(t,d,ν) being a statistical mixture of states ρ̂(μ⃗,d,ϕ,θ,ν) approximately given by: ρ̂_B (t,d,ν)= ∫_𝐑^2 dμ⃗∫_Ω dΩρ̂(μ⃗,d,ϕ,θ,ν) 1/4 π D texpμ^2/4D t, where we have used the formula for two-dimensional Brownian probability distribution of the center of the system <cit.>, and d Ω=(dϕ dθsinθ)/4π for which integration is over all orientation angles. As the measurement cycle has a finite duration T and there are no privileged points in time for measuring a photon, we have to make a time average of ρ̂_B (t,d) resulting in the state: ρ̂_B (d,T,ν)=1/T∫_0^Tdtρ̂_B (t,d,ν). As a consequence, in our scenario, the probability of measuring a photon in mode (n,m) is given by (for exact formulas, see Appendix <ref>): p(nm|d,T)= ∫_0^Tdt∫_𝐑^2 dμ⃗∫_Ω dΩ p(nm|μ⃗,d,ϕ,θ,ν)expμ^2/4D t/4π T D t. We stress that we have found that these probabilities are independent from ν. Let us introduce the following unit-less parameter τ=D T/w^2. One can calculate variance Var(μ)=(2-4π/9) w^2 τ using probability distribution from (<ref>) and thus √(τ) determines the spread of the system during the measurement. § PRECISION LIMIT OF DISTANCE ESTIMATION Let us recall that the bound for the uncertainty of a distance estimation with an unbiased estimator is given by the Cramér-Rao bound <cit.>: Δ d≥1/√(N F(d)), where F(d) is the Fisher information (FI) per measured photon, which in the case of Poissonian photodetection is given by <cit.>: F(d) = ∑_n,m=0^M 1/p(nm|d)( ∂/∂ d p(nm|d) )^2, where M denotes maximal index of Hermite-Gauss mode which can be distinguished by the measured apparatus considered. One can obtain quantum FI F_Q(d) through maximization of FI with respect to all physically possible measurements <cit.>. It was shown <cit.> that for distance estimation F_Q(d)=w^-2 and that it is achievable in the limit of M→∞ with measurement where the Hermite-Gauss modes are centered at the origin of the system. Moreover, for d/2 w≪ 1 and M=1 such measurement approaches the quantum limit with decreasing d. 
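Given any set of per-photon detection probabilities p(nm|d), the Fisher information and the Cramér-Rao bound above can be evaluated without further analytics. The sketch below does so with a central finite difference for ∂p/∂d; the probability model passed in is a deliberately crude stand-in (not the SPADE probabilities derived above), so the snippet illustrates only the estimation-theory bookkeeping.

```python
import numpy as np

def fisher_information(prob_fn, d, eps=1e-6):
    """F(d) per detected photon from a dict of mode probabilities {(n, m): p}."""
    p_plus, p_minus, p_mid = prob_fn(d + eps), prob_fn(d - eps), prob_fn(d)
    F = 0.0
    for mode, p in p_mid.items():
        if p > 0.0:
            dp = (p_plus[mode] - p_minus[mode]) / (2.0 * eps)
            F += dp**2 / p
    return F

def cramer_rao_bound(prob_fn, d, n_photons):
    """Lower bound on the separation uncertainty for N detected photons."""
    return 1.0 / np.sqrt(n_photons * fisher_information(prob_fn, d))

# Toy stand-in probability model (NOT the SPADE result): two modes with p10 ~ x^2 / (1 + x^2).
def toy_probs(d, w=1.0):
    x = d / (2.0 * w)
    p10 = x**2 / (1.0 + x**2)
    return {(0, 0): 1.0 - p10, (1, 0): p10}

print(cramer_rao_bound(toy_probs, d=0.2, n_photons=10_000))
```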
The requirement that estimated d must be greater than Δ d to have meaning results in the definition of the minimal resolvable distance <cit.> given by the solution of d_min = 1/√(N F(d_min)). § ASYMPTOTIC FISHER INFORMATION IN THE CASE OF BROWNIAN MOTION Let us analyze Fisher information for our measurement scheme which we denote by F_HG(d,τ). We are mostly interested in the small distances regime x:=d/2w≪ 1 thus, we focus our discussion on the dominant terms contributing to FI resulting from modes with n,m≤1 (we calculate FI with M=1). Let us start by defining two timescales for our problem. We refer to case √(τ)≪ x as a short timescale and to x≪√(τ) as a long timescale. This is motivated by the fact that, timescales should be defined in relation to separation as time of repetition T determines the scale of a misalignment through √(τ). In order to analyze the asymptotic behaviour of FI, let us additionally assume that √(τ)≪1. This allows us to expand FI into a power series in x and √(τ). Such an expansion can depend on the relation between the variables, as discussed in <cit.>. By expanding F_HG(d,τ) at first in x and then in √(τ) for x≪√(τ) and reversely for x≫√(τ) we obtain: w^2 F_HG(d,τ)≈2/3-2τ/x^2 x≫√(τ), ( 2/9τ-43/27)x^2 x≪√(τ), were for clearance we have taken only the dominant order in x for two dominant orders in τ. From this power series two main points can be stated. For short timescales, the impact of misalignment is minor, as the dominant term is constant and equal to 2/3 which is the same as in the case of randomly oriented sources without misalignment reported in <cit.>. This claim is additionally supported by the fact that in the limit τ→0 full expression of F_HG(d,τ) results in the FI for the measurement with the same number of modes in the aforementioned scenario. For short timescales, the optimal scaling d_min/w∼ N^-1/2 is approximately conserved. The second point is that for long timescales, optimal scaling is lost and replaced by the typical for direct imaging d_min/w∼ N^-1/4. This result shows that accounting for Brownian motion is necessary, as such motion can result in a significant drop in precision. Despite reappearance of the Rayleigh curse, SPADE still can result in better resolution than direct imaging. In order to verify this, the power series of FI for perfect direct imaging measurement can be calculated (for details, see Appendix <ref>) resulting in the following expression for both timescales: w^2F_DI(d,τ)≈16 x^2/9-128 τ x^2/9+1792 τ ^2 x^2/27. The coefficient of x^2 for direct imaging is smaller for √(τ)⪅ 0.29 than for SPADE. However, this does not necessarily mean that for higher values of √(τ) direct imaging performs better, since with increasing √(τ) more terms of the power series should be relevant and F_HG(d,τ) could be increased by measuring additional modes (see FIG. <ref>b)). FIG. <ref> shows the comparison of values of F_HG(d,τ) and numerical values of F_DI(d,τ) for small values of x and τ∈{0.001,0.1,1}. The results presented in this figure support the claims made above. On Fig: <ref> a) continuous transition between timescales can be clearly seen. § OPTIMAL SCALING FOR Τ Important question is how to set the time of the measurement, to measure with the optimal scaling. Here, we consider the scaling of τ of the form √(τ)→κ x^q with κ≤1. Specifically, for q=1/2,1,2 FI has the following approximate form: w^2 F_HG(d,√(τ)=x^q)≈2 x/9κ^2-2 x^2/27 κ^4 q=1/2, 2/3(1+3κ^2)-O(x^2) q=1, 2/3-8 x^2/9 q=2. 
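The truncated expressions collected in this section can be checked numerically before discussing the individual scalings. The short script below evaluates the two-timescale forms of w²F_HG together with w²F_DI quoted above; only the leading terms are used, so the numbers are indicative only where both x and √τ are small, but they reproduce the qualitative picture of FIG. <ref>: a plateau near 2/3 at short timescales and an x² behaviour (Rayleigh's curse) at long ones.

```python
import numpy as np

def w2F_HG_short(x, tau):
    """Leading terms of w^2 F_HG for x >> sqrt(tau) (short timescale)."""
    return 2.0 / 3.0 - 2.0 * tau / x**2

def w2F_HG_long(x, tau):
    """Leading terms of w^2 F_HG for x << sqrt(tau) (long timescale)."""
    return (2.0 / (9.0 * tau) - 43.0 / 27.0) * x**2

def w2F_DI(x, tau):
    """Leading terms of w^2 F_DI (perfect direct imaging)."""
    return (16.0 / 9.0 - 128.0 * tau / 9.0 + 1792.0 * tau**2 / 27.0) * x**2

tau = 0.01
for x in (0.3, 0.1, 0.03, 0.01):
    hg = w2F_HG_short(x, tau) if x > np.sqrt(tau) else w2F_HG_long(x, tau)
    print(f"x={x:5.2f}  w^2 F_HG ~ {hg:8.4f}   w^2 F_DI ~ {w2F_DI(x, tau):8.4f}")
```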
Note that for any q>1 we always end up in a short timescale regime for sufficiently small x. Thus, based on (<ref>) such scalings allow a measurement with near-optimal precision, however, with increasing q convergence to the optimal scenario with decreasing x becomes faster. For the case q=2 second term in the first line of (<ref>) becomes of the order x^2, and thus, for higher q, term constant in √(τ) and quadratic in x will contribute more significantly resulting in an expansion of FI in second order in x concurrent with the case τ→ 0. From that follows that increasing q over 2 has a minor impact on the precision. For q<1 we eventually get into the long timescale regime, but the higher q the better the scaling can be achieved for minimal resolvable distance. For the special case of q=1 and κ=1 we are in the transition region between timescales. For such a scaling, an exact short timescale regime precision cannot be obtained. However, the constant term also appears, and as κ decreases, it approaches the constant term for the short timescale. What is more, it reaches 90% of optimal value already for κ=1/3√(3)≈ 0.2 and also converges to it optimally as 8x^2/9 for small κ. These results show that the difficulty of the measurement increases with growing paste when decreasing x as time in τ has to scale non-linearly with x in order to measure with optimal scaling and setting lower τ requires more repetitions of measurement scheme to capture the same number of photons. Thus, setting q=1 seems optimal as it allows for the lowest-order scaling of time still achieving near optimal precision. In a real-life scenario, one cannot scale √(τ) with some exact relation to x. However, the order of magnitude of x can be estimated from some preliminary knowledge of the system or measurements. Thus, our result allows one to estimate the optimal measurement time T. Also one can set the target d_min, and based on that decide on T. § NONZERO TIME OF ALIGNMENT In real-life scenario time of the alignment t_a after the estimation of the center of the system has to be nonzero. During t_a, the system evolves, introducing some spread of the center of the system already at the beginning of the measurement. To account for this, we should perform time averaging in (<ref>) in bounds [t_a,T] instead of [0,T]. Thus, in such a case probabilities of measuring photon in (n,m) mode is given by: p(nm|d,T,t_a)=T p(nm|d,T)-t_a p(nm|d,t_a)/T-t_a. Assuming that time of alignment is short in comparison to time of measurement, i.e. t_a≪ T we put t_a=T/k where k≫1. This modification results in approximate asymptotic FI: w^2 F_HG(x,τ,k)≈2/3-2(k+1)τ/k x^2 x≫√(τ), ( 2/9τ+2/9 k τ-43/27-23 /27 k)x^2 x≪√(τ), were we have taken two dominant orders in k. The form of (<ref>) is the same as (<ref>) with some small corrections. Thus, including t_a results only in quantitative difference and the qualitative result is the same. § CONCLUDING REMARKS In summary, we have presented an adaptive scheme for the estimation of a distance between two weak incoherent sources that account for the Brownian motion of the system between the alignments of the measurement apparatus. Based on analytical considerations of Fisher information, we have shown that such a scheme employing spatial mode demultiplexing of the image into Hermite-Gauss basis, in theory, can perform better than its direct imaging counterpart in the limit of small distances even with measurement of a finite number of modes. 
This is both for the long timescale of the measurement where we have found that Rayleigh's curse is present and on the short time scale where SPADE performs with the optimal scaling of minimal resolvable distance. This result shows that Brownian motion between alignments cannot be simply neglected. We show how the measurement time should be adjusted depending on the expected magnitude of the separation to perform the measurement, allowing for almost maximal precision. Finally, we have shown that a short nonzero time of alignment does not qualitatively impact our findings. Development of such schemes of measurement is crucial in order to correctly implement SPADE measurements for molecular applications, where centers on the image plane of the measured systems can be in relatively quick motion in comparison with almost stationary stellar objects. As SPADE aspires to become a relevant high-precision method also for molecular applications, further refinements of schemes based on SPADE are necessary, for example, to avoid errors related to fitting data to a wrong underlying model of the method. Thus, also our scheme still looks for improvements, opening many possibilities for further research. First of all, the estimation of the center of the system and alignment of the measurement apparatus is burdened with some error. Thus the assumption of perfect alignment at the start of each run of the experiment should be discarded and replaced, for example, with some probability distribution for initial misalignment. What is more, the movement of some biological systems is governed by active Brownian motion rather than considered by us a passive one. Due to the highly non-Markovian behaviour of such systems on the short timescales <cit.> they might require deeper consideration in order to properly apply SPADE based measurements for them. One could also consider the impact of crosstalk or dark counts. Extension to sources different from Poissonian could also be considered. However, not only could refinements of the underlying model be made, but maybe also improvements of the measuring scheme could be done, as, for example, introduction of some time binning of photon counts may result in some improvements. § ACKNOWLEDGMENTS Project ApresSF is supported by the National Science Centre (No. 2019/32/Z/ST2/00017), Poland, under QuantERA, which has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 731473. § STOCHASTIC MOTION In not all scenarios the exact trajectory is given, and dynamics is rather a stochastic process. In such a case, different trajectories have different probability of occurring. Thus, at each point of time, we have some probability density function p(q⃗_⃗1⃗,q⃗_⃗2⃗,t) that the system of two sources is at specific coordinates q⃗_1 and q⃗_2. The measurement in our scenario could be generically seen as performed on the system in some mixed state: ρ̂_s =∫ dq⃗_⃗1⃗dq⃗_⃗2⃗ p(q⃗_⃗1⃗,q⃗_⃗2⃗) ρ̂(q⃗_1,q⃗_2), where ρ̂(q⃗_1,q⃗_2) is the state of the sources in the static scenario for given coordinates q⃗_i and p(q⃗_⃗1⃗,q⃗_⃗2⃗) is some probability density function of coordinates. Knowing the time-dependent probability distribution p(q⃗_⃗1⃗,q⃗_⃗2⃗,t) one might try to calculate the ρ̂_s as a time average over measurement time t_m, effectively obtaining the following mixed state: ρ̂_s ≈1/t_m∫_0^t_m dt∫ dq⃗_⃗1⃗dq⃗_⃗2⃗ p(q⃗_⃗1⃗,q⃗_⃗2⃗,t) ρ̂(q⃗_1,q⃗_2). 
However, only one specific path in our scenario is followed by the system during the time of measurement. Thus, it is not enough to collect a large number of photons to ensure the correctness of such an effective description. It becomes necessary that p(q⃗_⃗1⃗,q⃗_⃗2⃗,t) is periodic in time and there is no point in the allowed for the system part of the coordinate space from which some subspace of this space cannot be reached at any later time. What is more, in general, one also needs Markovianity of the evolution on some time scale much shorter than the time of measurement. This is, e.g., to prevent cases where the initial points determine some subclasses of trajectories with different probability distributions of positions as in such cases statistical properties of state evolving on such a subclass in an obvious way does not have to follow prediction given for the full ensemble of such subclasses when one starts from the probabilistic mixture of initial points. These requirements ensure us that any point in coordinate space is approached arbitrarily close by the system in the limit t_m→∞ with expected frequency. Then equation (<ref>) becomes justified by the Ergodic theorem <cit.> as then it is strict in the limit t_m→∞. These requirements can be fulfilled, for example, by running the whole experiment N_r times for some finite time of repetition (which determines a period) T and starting always from the same initial state. In such a case, the limit of infinite time of measurement is replaced by the limit N_r→∞ as t_m=N_r T. Thus, with increasing number of experiment runs, the description given by (<ref>) becomes more accurate. § FISHER INFORMATION FOR DIRECT IMAGING In this section, we present how Fisher information is calculated for the perfect direct imaging measurement. We recall that FI for the static scenario in the case of direct imaging is given by: F_DI(d) = ∫_𝐑^2 dr⃗ 1/p(r⃗ |μ⃗,d,ϕ,θ)( ∂/∂ d p(r⃗ |μ⃗,d,ϕ,θ) )^2, where p(r⃗ |μ⃗,d,ϕ,θ) denotes probability density function of measuring the photon in location r⃗ where the location of the sources is given by some vectors r⃗_⃗i⃗ where i=1,2 and reads as follows <cit.>: p(r⃗ |μ⃗,d,ϕ,θ,ν)=ν |u_1,00|^2 + (1-ν)|u_2,00|^2, where u_i, 00 u_00(r⃗-r⃗_i) and u_00(r⃗) is the (0,0) Hermite-Gauss basis function, in general, given by: u_nm(r⃗=(r_x,r_y))= exp-(r_x^2+r_y^2)/w^2/√((π/2)w^2 2^n+mn!m!) × H_n(√(2)r_x/w) H_m(√(2)r_y/w), where H_n() are Hermite polynomials and w is the width of the point spread function. In our scenario, this probability density is replaced by the following one: p(r⃗ |d,τ)= ∫_0^Tdt∫_𝐑^2 dμ⃗∫_Ω dΩ p(r⃗ |μ⃗,d,ϕ,θ,ν)expμ^2/4D t/4π T D t. Note that |u_i,00|^2 are the same after integration for both i, and thus, the dependence on ν disappears. This is because replacing ϕ→ϕ +π we have u_1(2),00→ u_2(1),00 and the integration is over a uniform distribution of ϕ. The resulting F_DI(d,τ) can be evaluated numerically. Probability (<ref>) can be calculated approximately for x≪1 and √(τ)≪ 1. This can be done by noting that, due to the fact that orientation is fully random, our system is rotational invariant, i.e. ∀r⃗ p(r⃗ |d,τ)=p(|r⃗| |d,τ). Thus, it is enough to calculate the probability for r⃗=(r,0). Then, one can calculate the integral over μ, which afterwards can be expanded into Taylor series in x and √(τ). Finally, the rest of the integrals can be evaluated. 
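As noted above, F_DI(d,τ) can be evaluated numerically. A minimal Monte-Carlo sketch is given below: the ensemble of emission times, Brownian centre displacements and orientations is drawn once and reused (common random numbers, so that the finite-difference derivative in d is well behaved), p(r|d,τ) is estimated at points (r,0) using the rotational-invariance argument above, and the radial integral and d-derivative are done by trapezoid rule and central differences. Sample sizes and step sizes are arbitrary illustrative choices, and the statistical noise limits the attainable accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
N_MC = 100_000
# Fixed ensemble of emission times (t/T), centre-walk normals and orientations.
_u = rng.uniform(0.0, 1.0, N_MC)
_g = rng.normal(0.0, 1.0, (N_MC, 2))
_phi = rng.uniform(0.0, 2.0 * np.pi, N_MC)
_sin_theta = np.sqrt(1.0 - rng.uniform(-1.0, 1.0, N_MC) ** 2)

def p_direct(r, d, tau, w=1.0):
    """Monte-Carlo estimate of p(r | d, tau) at the image point (r, 0)."""
    mu = _g * (w * np.sqrt(2.0 * tau * _u))[:, None]       # Brownian centre at emission time
    axis = np.stack([np.cos(_phi), np.sin(_phi)], axis=1)
    val = 0.0
    for sign in (-1.0, 1.0):                                # both sources, equal weight
        src = mu + sign * (0.5 * d * _sin_theta)[:, None] * axis
        dist2 = (r - src[:, 0]) ** 2 + src[:, 1] ** 2
        val += 0.5 * np.mean((2.0 / (np.pi * w**2)) * np.exp(-2.0 * dist2 / w**2))
    return val

def fisher_direct(d, tau, w=1.0, eps=1e-3, r_max=6.0, n_r=40):
    """F_DI(d, tau) from a radial quadrature and a central finite difference in d."""
    rs = np.linspace(1e-3, r_max * w, n_r)
    p0 = np.array([p_direct(r, d, tau, w) for r in rs])
    dp = np.array([(p_direct(r, d + eps, tau, w) - p_direct(r, d - eps, tau, w)) / (2 * eps) for r in rs])
    integrand = 2.0 * np.pi * rs * dp**2 / p0
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(rs))

print(fisher_direct(d=0.2, tau=0.01))   # compare with the leading-order 16 x^2 / 9, x = d / (2 w)
```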
In order to calculate F_DI(d,τ) one can expand the integrand in (<ref>) into Taylor series in x and √(τ) and after integration obtain: w^2F_DI(x,τ)≈16 x^2/9-128 τ x^2/9+1792 τ ^2 x^2/27. Note that for τ→0 this expression goes to the 16x^2/9 which is concurrent with the result for the system evolving under random rotations without Brownian motion reported in <cit.>. § PROBABILITIES FOR SPADE MEASUREMENT In this section we present probabilities from which the Fisher information for SPADE in Hermite-Gauss modes can be directly calculated. The probabilities under consideration are determined by overlap integrals: f_nm(r⃗_i)=∫_𝐑^2 dr⃗u_nm^*(r)u_00(r⃗-r⃗_i) and are given by: p(nm|μ⃗,d,ϕ,θ)=ν| f_nm(r⃗_1)|^2+(1-ν)| f_nm(r⃗_2)|^2. The overlap integrals have the following form <cit.>: f_nm(r⃗_1)=e^1/2(-x_θ^2-μ _w^2+2 x_θμ _w cos (ϕ -ψ )) (μ _w sin (ψ )-x_θsin (ϕ ))^m (μ _w cos (ψ )-x_θcos (ϕ ))^n/√(m! n!), f_nm(r⃗_2)=e^1/2(-x_θ^2-μ _w^2-2 x_θμ _w cos (ϕ -ψ )) (μ _w sin (ψ )+x_θsin (ϕ ))^m (μ _w cos (ψ )+x_θcos (ϕ ))^n/√(m! n!), where μ_w:=μ/w and x_θ:=xsinθ. Using the following equation, one can compute the probabilities considered for the SPADE measurement: p(nm|d,τ)= ∫_0^Tdt∫_𝐑^2 dμ⃗∫_Ω dΩ p(nm|μ⃗,d,ϕ,θ)expμ^2/4D t/4π T D t. The dependence on ν disappears analogously to the direct imaging case. We present probabilities for measuring a photon in mode (n,m) for n,m=0,1: p(00|x,τ)= 1/12 τ2 (x^2 ( _2F_2(1,1;2,5/2;-x^2/4 τ +1)/4 τ +1- _2F_2(1,1;2,5/2;-x^2))+3 log (4 τ +1)), p(10|x,τ)= 1/48τ(2/x(2 x^3 ( _2F_2(1,1;2,5/2;-x^2/4 τ +1)/4 τ +1- _2F_2(1,1;2,5/2;-x^2))+3 F(x/√(4 τ +1))/√(4 τ +1)-3 F(x)). +6 log (4 τ +1)), p(11|x,τ)= 1/64τ(8 (4 τ +1) x^2 _2F_2(1,1;2,5/2;-x^2)-8 x^2 _2F_2(1,1;2,5/2;-x^2/4 τ +1)-12 (4 τ +1) log (4 τ +1)/12 τ +3. .-((4 τ +1) (32 τ +7)+2 x^2) F(x/√(4 τ +1))/(4 τ +1)^5/2 x+(2 x+7/x) F(x)+1/(4 τ +1)^2-1), where _2F_2() stands for Hypergeometric PFQ function and F() for the Dawson integral.
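Both ₂F₂ and the Dawson integral are available in standard Python libraries (mpmath.hyp2f2 and scipy.special.dawsn), so the closed-form probabilities above can be checked directly. The sketch below transcribes p(00|x,τ) and p(10|x,τ); p(11|x,τ) follows the same pattern and is omitted only for brevity. The parameter values and the small-x, small-τ limits quoted in the comments serve purely as a sanity check.

```python
import numpy as np
from mpmath import hyp2f2, log, mp
from scipy.special import dawsn

mp.dps = 30   # extra precision helps: the 1/tau prefactors amplify cancellations

def p00(x, tau):
    """p(00 | x, tau) transcribed from the closed-form expression above."""
    a = hyp2f2(1, 1, 2, 2.5, -x**2 / (4 * tau + 1)) / (4 * tau + 1)
    b = hyp2f2(1, 1, 2, 2.5, -x**2)
    return float((2 * x**2 * (a - b) + 3 * log(4 * tau + 1)) / (12 * tau))

def p10(x, tau):
    """p(10 | x, tau); F() below denotes the Dawson integral."""
    a = hyp2f2(1, 1, 2, 2.5, -x**2 / (4 * tau + 1)) / (4 * tau + 1)
    b = hyp2f2(1, 1, 2, 2.5, -x**2)
    daw = float(3 * dawsn(x / np.sqrt(4 * tau + 1)) / np.sqrt(4 * tau + 1) - 3 * dawsn(x))
    return float(((2 / x) * (2 * x**3 * (a - b) + daw) + 6 * log(4 * tau + 1)) / (48 * tau))

x, tau = 0.1, 1e-4
print(p00(x, tau), p10(x, tau))   # roughly 1 - 2 x^2 / 3 and x^2 / 3 - tau for small x, tau
```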
http://arxiv.org/abs/2407.12104v1
20240716181421
Freezing-in Cannibal Dark Sectors
[ "Esau Cervantes", "Andrzej Hryczuk" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
Freezing-in Cannibal Dark Sectors Esau Cervantes Andrzej Hryczuk July 22, 2024 =============================================================================================== § INTRODUCTION The astrophysical and cosmological evidence supporting the existence of dark matter is well established <cit.>. However, despite a large number of theoretical ideas on how to explain the presence of DM and decades of effort put into experimental searches, its nature remains unknown. One of the leading candidates for the DM particle is the so-called WIMP (Weakly Interacting Massive Particle), which elegantly accounts for the observed abundance (Ω_c h^2 = 0.120± 0.001 <cit.>) by postulating weak-scale interactions with visible matter, while at the same time preserving all the other essential properties requisite for a viable candidate (see e.g. <cit.>). Nevertheless, the lack of an experimental identification of any definitive WIMP signature has motivated diverse models that elucidate the nature of DM without reliance on sizable interactions with visible matter, for instance sterile neutrinos <cit.>, axions <cit.> or fuzzy dark matter <cit.>. Another such alternative is the self-interacting DM (SIDM) paradigm, where the DM abundance can be set through a freeze-out occurring within the dark sector due to DM self-number changing reactions, a mechanism originally proposed in <cit.>. Interestingly, the freeze-out of self-number changing reactions (which we will refer to as “dark freeze-out”) is characterized by a cannibalization phase in which the dark sector converts its rest mass into kinetic energy. During this period DM is hotter than in the standard WIMP freeze-out scenario, which may potentially erase small-scale structures in the Universe, leading to disagreement with observations. This phenomenon was first acknowledged in <cit.> and has been further discussed in the subsequent literature <cit.>. To overcome these constraints, one strategy involves considering a dark sector which is initially colder than the visible one, so that despite warming during dark freeze-out, the dark sector remains sufficiently cold for successful structure formation <cit.>. Another possibility relies on postulating a weak portal to visible matter, allowing equilibration and heat transfer with the SM plasma during dark freeze-out, a scenario referred to as the “SIMP miracle” introduced in <cit.> and subsequently discussed in <cit.>. Variations of this idea involve a DM candidate with an unstable Higgs-like mediator, leading to the depletion of the dark sector's energy density during cannibalization due to the mediator's decay <cit.>. In both cases the initial production leading to a thermal population of dark sector particles in the first place is typically taken for granted. Among the mechanisms proposed to explain the origin of DM are gravitational production <cit.>, inflaton decay <cit.>, asymmetric reheating <cit.>, decay of false vacua <cit.> and freeze-in (FI). The FI mechanism, in particular, relies on feeble interactions with visible matter <cit.>, which are in fact typically present if the dark sector contains a scalar field that would naturally couple to the Higgs. 
This mechanism relies on three basic assumptions: 1) the initial abundance of the dark sector after reheating ends is zero or negligible; 2) any portal to matter is sufficiently weak, ensuring that DM is produced from the SM plasma but never thermalizes with it; and 3) the interaction populating the dark sector is renormalizable making the mechanism independent of the reheating temperature <cit.>. Once the FI production ceases both sectors evolve independently. Entropy conservation in each sector separately can be then used to obtain the energy density evolution, provided that there are no other sources of heat exchange and the dark sector is in equilibrium with itself. Conversely, during the freeze-in stage the dynamical evolution of SIDM provides a rich interplay between the energy available within the dark sector and its capacity to transform it into number density through self-interactions while cooling the system <cit.>. Accurate treatment of such boosting of the freeze-in process through self-thermalization requires a proper implementation of the temperature (or energy density) and number density evolution equations. In this work we aim to address this issue numerically without any constraining approximations, and thus differing from the existing literature, by solving the set of coupled Boltzmann equations (cBE) in the hydrodynamic approach <cit.>. Noteworthy, a completely secluded dark sector does not predict any direct experimental signals due to the absence of non-gravitational interactions with regular matter. While, if the two sectors are almost secluded, i.e. there exists a very weak portal between them, the potential detection while not hopeless, is still rather challenging. Indeed, current direct detection technology allows for testing of only certain scenarios involving feeble portals, provided the dark sector particle is not heavier than the MeV scale <cit.>. Motivated by this, in this work we are interested in exploring the question whether or not SIDM realization can be made more predictive through existence of such portals. In particular, if it may lead to detectable signatures and can SIDM be produced solely via freeze-in, without any need for additional mechanisms. We address this question on a quantitative level in three simple scenarios extending the Standard Model (SM) by: a real scalar with broken ℤ_2, a complex scalar with unbroken ℤ_3, and a ℤ_3 scalar with an additional scalar mediator. The article is organized hierarchically, based on the complexity of the models that we will introduce henceforth, as follows. In Section 2 we briefly describe the set of cBEs in the momentum moments approach. In Section 3 we examine the simplest realization: a dark sector consisting of a singlet real scalar DM candidate with a broken ℤ_2 symmetry, allowing DM to mix with the Higgs, and therefore decay. To avoid rapid decays, interactions with visible matter should be significantly suppressed, leading to the dark sector being naturally populated via the FI mechanism. Remarkably, this model differs from the SIMP scenario <cit.> as it considers a colder dark sector than the SM, and from <cit.> by reintroducing the portal to matter, and from both by solving the system of cBE using the hydrodynamical approach. Unsurprisingly, this model is strongly constrained by the INTEGRAL and NuSTAR telescope observations <cit.>. Moreover, the broken phase further constraints the parameter space, as domain walls may arise during the dark phase transition <cit.>. 
These findings suggest that the freeze-in mechanism cannot successfully populate the dark sector in this model in its entirety, but an additional production mode is necessary. Next, in Section 4 we study a stable complex scalar DM candidate carrying a charge under an unbroken ℤ_3 symmetry. This model shares similar dynamical features with its predecessor, with however, one crucial distinction: here the DM is stable. Its interactions with matter are mediated by the Higgs boson, and are inherently suppressed, naturally out of the reach of direct or indirect detection experimental prospects. To generalize this model, in Section 5 we introduce a simple extension by a Higgs-like singlet real scalar mediator. Both the DM and the mediator are produced via the FI mechanisms and we solve a set of four cBEs accounting for the two particles (one for number density and one for temperature each). This approach considers all interactions between DM and the mediator, including annihilation-production and heat exchange processes, differing from standard literature where kinetic equilibrium within the dark sector is typically assumed. To the best of our knowledge such scenario has not been addressed in any other related work. In Section 6 we provide our conclusions. § THE BOLTZMANN EQUATION AND MOMENTS APPROACH In the homogeneous and isotropic universe expanding with the Hubble rate H one can describe the evolution of a phase space distribution function f_i(p,t) of a particle species i through the Boltzmann equation (fBE): ∂ f_i/∂ t - H p∂ f_i/∂ p = C[f_i] , where the right hand side is the collision operator encoding all the possible interactions between particle i with itself and other states in the plasma. There has been a growing interest in the literature in approaching the determination of thermal production by directly analyzing the (numerical) solution of this equation in order to obtain not only the number density of DM particles, but also their velocity distribution (see e.g. <cit.>). Moreover, it has been shown that in situations where an efficient equilibriation process is absent such a treatment can actually be necessary for achieving accuracy matching the observations <cit.>. Nevertheless, in models where substantial self-scatterings redistribute DM momenta efficiently and thus enforce the thermal shape, albeit with potentially different normalization and temperature, a hydrodynamic approach that leads to a set of fluid equations is usually sufficient.[In fact, it can even provide a better estimate of the final relic abundance than the solution of Eq. (<ref>), if in the latter one does not incorporate the notoriously CPU expensive self-scatterings <cit.>.] In that case, instead of solving the full Eq. (<ref>), one can consider its momentum moments, where the lowest one governs the evolution of the number density n = g_i ∫ d^3p/(2π)^3 f_i, with g_i denoting the number of internal degrees of freedom. Going one level up in the Boltzmann hierarchy and paremetrizing the second moment via the velocity dispersion and defining the temperature parameter, T_i =g_i/(3n)∫ d^3p(2π)^-3(p^2/E)f_i, one arrives at a set of coupled Boltzmann equations (cBE) <cit.> Y'/Y = 1/xH̃⟨C| ⟩, x_ds'/x_ds =-1/x H̃⟨C|_⟩2 +Y'/Y - H/x H̃⟨p^4/E^3|⟩/3T_i-2s'/3s , where s is the SM entropy, Y = n/s, x_ds=m_i/T_i, H̃=H(T)/(1+3T(dg_eff^s/dT)/g_eff^s), '= d/dx and x = m_i/T. 
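As an illustration of how the cBE above are integrated in practice, the skeleton below evolves ln Y and ln x_ds with scipy. All physics inputs — the collision moments ⟨C⟩ and ⟨C⟩_2, the Hubble rate, the effective entropy degrees of freedom and the ⟨p⁴/E³⟩ moment — are left as user-supplied callables and are replaced here by crude placeholders, since their explicit, model-dependent forms are only given later; the sketch shows the structure of the right-hand sides, not the actual numerics of this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model-dependent inputs, replaced by crude placeholders (NOT the expressions used in this work):
def C0(x, Y, x_ds):   return 0.0             # zeroth collision moment <C> (number-changing reactions)
def C2(x, Y, x_ds):   return 0.0             # second collision moment <C>_2 (heat exchange)
def p4_over_E3(x_ds): return 5.0 / x_ds      # <p^4/E^3>/(3 T_ds), non-relativistic equilibrium-shape estimate
def hubble(x):        return 1.0e-18 / x**2  # placeholder H(T) during radiation domination
def hubble_tilde(x):  return hubble(x)       # H-tilde = H when g_eff^s is taken constant
def dlns_dx(x):       return -3.0 / x        # d ln s / dx for constant g_eff^s (s ~ T^3)

def rhs(x, state):
    lnY, lnxds = state
    Y, x_ds = np.exp(lnY), np.exp(lnxds)
    dlnY = C0(x, Y, x_ds) / (x * hubble_tilde(x))
    dlnxds = (-C2(x, Y, x_ds) / (x * hubble_tilde(x)) + dlnY
              - hubble(x) / (x * hubble_tilde(x)) * p4_over_E3(x_ds)
              - (2.0 / 3.0) * dlns_dx(x))
    return [dlnY, dlnxds]

xi_inf = 0.1                                     # initial temperature ratio T_ds / T
x0, x1 = 10.0, 1000.0
state0 = [np.log(1.0e-12), np.log(x0 / xi_inf)]  # illustrative initial Y and x_ds = x / xi
sol = solve_ivp(rhs, (x0, x1), state0, method="LSODA", rtol=1e-8)
Y, x_ds = np.exp(sol.y)
```

With the collision moments switched off, the temperature equation in this skeleton reduces to d ln x_ds/dx ≈ 2/x for a non-relativistic species with constant g_eff^s, i.e. T_ds ∝ a^{-2}, which is a convenient sanity check of the bookkeeping.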
This form of cBE is obtained through closing the Boltzmann hierarchy by assuming the distribution with equilibrium shape f_i(p)=e^μ_i/T_i e^-E/T_i = (n/n^eq)f_i^eq(p;T_i). In the following we adopt this hydrodynamical approach since the existence of a cannibal phase is inherently linked to strong 2→2 self-scatterings.[By construction the σ_2→ 3 cross section is assumed to be large enough to lead to an efficient 2→ 3 process and therefore 2→2 scatterings are expected to be even more frequent, as the corresponding σ_2→ 2 is lower order in the coupling constant.] For every of the discussed models we provide below the corresponding set of cBE with the explicit form of both moments of the collision term. § THE SIMPLEST CASE: SM + A REAL SCALAR In this section we discuss the cannibal phase of the arguably simplest possible dark matter model, i.e., a theory obtained by adding to the Standard Model only one real scalar field describing the DM. A completely secluded realization of this scenario has been studied recently in <cit.> where it was found that it is indeed still a viable possibility, although rather difficult to test experimentally. We start our analysis from the very same model, augmented with a potentially non-zero Higgs portal (HP) coupling, to first discuss all the relevant processes and showcase the formalism, and second to answer the question of whether including the portal interaction may give rise to detectable signatures. §.§ The model The model consists of a singlet real scalar field φ stabilized by a ℤ_2 symmetry <cit.>. This field naturally couples to the Higgs doublet H through the HP interaction, V_HP(H,φ)= 1/2λ_hφφ^2H^† H , and its self-interactions are encoded in the potential V_self(φ) = 1/2!μ^2φ^2+λ/4!φ^4 . This extension to the SM has been extensively studied in the literature in the WIMP limit, i.e., when the λ_hφ≳𝒪(10^-4), and still remains viable. However, as a WIMP candidate, scalar singlet DM is under tension in direct detection (DD) and indirect detection (ID) experiments (see e.g. <cit.> and references therein) with the remaining allowed mass ranges that provide correct relic density being a) very close to the Higgs resonance or b) at the TeV scale. Alternatively, φ can also be a Feebly Interacting Massive Particle (FIMP) candidate if λ_hφ≲𝒪(10^-7). For such small couplings it does not attain thermal equilibrium in the early Universe and the main mechanisms for its production are freeze-in and gravitational <cit.>. If produced mostly via freeze-in, the requirement of correct relic density sets the expected value of the coupling to be λ_hφ∼𝒪(10^-9) for a mass in the GeV range <cit.>. The scenario we focus on here serves as yet another possibility: the ℤ_2 symmetry is broken explicitly or spontaneously leading to cubic terms in the potential (<ref>) and therefore to cannibalizing 3↔2 processes.[When the ℤ_2 symmetry remains unbroken, the dominant cannibalizing process affecting the number density of φ is the 4↔2 self-number changing reaction, whose matrix element is suppressed by λ^2 at tree level <cit.>.] The downside of such realization is that it removes the symmetry protection for DM stability, making it inherently unstable. However, since aside gravitational the only interaction with the SM is through the HP, the φ lifetime can be made extremely long if only the λ_hφ coupling is small enough. To avoid this complication, the analysis in <cit.> assumed λ_hφ=0, resulting in a secluded and essentially undetectable dark sector. 
Instead, we relieve this assumption and study also the case with λ_hφ≠ 0, specifically when the ℤ_2 symmetry breaking is triggered by the φ field obtaining a vacuum expectation value (VEV) ⟨φ⟩ = ω = ±√(3/λ)√(v^2λ_hφ -2 μ^2) . The symmetry breaking leads to φ mixing with the Higgs, which culminates in the emergence of a Higgs-like scalar <cit.> whose decay rate into SM states is proportional to sin^2θ, and accordingly the lifetime τ_φ∝ 1/sin^2θ.[The rotation matrix parametrized with a rotation angle θ is given in the appendix <ref>.] Thus, θ≪ 1 indeed becomes crucial to ensure that the lifetime is significantly larger than the age of the universe.[Indirect detection experiments impose even more stringent constraints on the lifetime of decaying dark particles. For instance, the INTEGRAL and NuSTAR experiments rule out lifetimes shorter than ∼ 10^27 s for decaying DM with a mass of ∼1 MeV <cit.> (the age of the universe is approximately 4× 10^17 s).] This condition can be expressed in terms of λ_hφ by noting that the scalar VEV is w=√(3/λ)(m_φ + 3m_φ v^2λ_hφ^2/(λ(2m_h^2-2m_φ^2))) + 𝒪(λ_hφ^3) . This implies that the mixing angle at leading order in λ_hφ (and assuming m_φ≪ m_h) is θ≈ 2√(3)λ_hφ m_φ v/((m_φ^2-m_h^2)√(λ)) , while the tree-level decay rate scales as Γ_φ→SM SM^tree∝λ_hφ^2 m_φ^2 v^2/(m_h^4λ) . Let us note in passing that the spontaneous ℤ_2 symmetry breaking results in a dark phase transition that potentially leads to the formation of domain walls, as noted in <cit.>, which can dominate the energy density of the early universe. To prevent this, the surface tension σ of the domain wall should be bounded from above by the MeV scale, which implies that √(λ)w^3 ≲MeV, given √(λ)w^3∼σ <cit.>. At face value, such a constraint effectively rules out the accessible parameter space of this specific scenario. These bounds are, however, reliant on assumptions that need not hold, and there are cases where they do not apply at all.[For instance, a reheating temperature of the dark sector lower than the temperature of the dark phase transition, or unstable domain walls <cit.>. Another alternative involves a dark sector with an explicit breaking of the ℤ_2 symmetry.] Since the focus of this work is independent of how the domain-wall constraint is settled, we will not discuss this issue further, especially given that, as we will see below, the limits on λ_hφ in this particular model are stringent enough to render the impact of the HP on the DM production essentially negligible, prompting the analysis of more promising models in the next sections. §.§ The λ_hφ=0 case After spontaneous symmetry breaking, the potential given by Eq. (<ref>) takes the form V_self = 1/2m_φ^2φ^2+g/3!φ^3+λ/4!φ^4 , where the coupling of the cubic term is related to other parameters via g = √(3λ)m_φ and the physical squared mass is m_φ^2 = 2|μ|^2 = λ v^2/3 <cit.>. The primary contribution at lowest order in λ corresponds to the 3φ↔ 2φ self-number changing reaction, whose matrix element is presented in Eq. (<ref>). The tree level Feynman diagrams involved in the reaction are shown in Figure <ref>. In this scenario the absence of a connection between the two sectors forbids the exchange of entropy, allowing them to evolve independently. Therefore, the standard assumption of initial kinetic equilibrium (T_φ^i=T^i, with T the temperature of the SM) is not justified. This introduces an extra degree of freedom in the parameter space, i.e. an initial condition after reheating, here represented via the initial ratio of temperatures ξ_∞=T^i_φ/T^i.
In fact, this observation is crucial for the viability of the model, since number changing self-interactions in effect heat the dark sector during the dark freeze-out. This creates a conflict between obtaining the correct abundance and predicting successful structure formation <cit.>, unless the dark sector is significantly colder than the SM or has an efficient way of dissipating the excess heat into the SM plasma. The set of equations that determine the evolution of the system is provided by Eq. (<ref>). The collision operator is given by C_3φ↔ 2φ=1/2E_φ g_φ∫( - f_φ(p)|ℳ̃_φ2→ 345|^2 f_2 dΠ_2 (1/3!dΠ̃_3 dΠ̃_4 dΠ̃_5 ) +(1 + f_φ(p))|ℳ̃_12→φ45|^2(1/2!dΠ_1 dΠ_2 f_1 f_2 ) (1/2! dΠ̃_4 dΠ̃_5 ) -f_φ(p)|ℳ̃_12←φ45|^2(1/2!dΠ_4 dΠ_5 f_4 f_5 )(1/2! dΠ̃_1 dΠ̃_2) +(1+f_φ(p))|ℳ̃_φ2← 345|^2 (1/3! dΠ_3 dΠ_4 dΠ_5 f_3 f_4 f_5) dΠ̃_2) , where we label the momenta as 12↔ 345, with |ℳ̃|^2=(2π)^4δ^(4)(∑_f p_f - ∑_i p_i)|ℳ_3↔ 2|^2, and define dΠ̃_i = dΠ_i(1+f_i). The notation underlines the production and annihilation of the φ states, also highlighting the symmetry factors. The Bose-Einstein enhancement factors can be safely neglected, (1+f)≈ 1, as the system is either diluted or non-relativistic, in the early and later stages of the evolution, respectively. The zeroth and second moment terms are detailed in Eq. (<ref>) and Eq. (<ref>), respectively. Previous work <cit.> demonstrates that the available parameter space for the model resides within the sub-GeV dark matter mass range. This preference arises because self-number changing reactions scale inversely with the fifth power of the DM mass (⟨σ_3→ 2v^2⟩∼λ^3 m_φ^-5, cf. Eq. (<ref>)). As a result, larger DM masses necessitate a stronger self-interaction coupling, which can violate perturbativity or unitarity constraints. Therefore, for this model we focus on sub-GeV dark matter masses. §.§ The λ_hφ≠ 0 case and relativistic freeze-in A secluded cannibal dark sector remains agnostic about the mode of its initial production. That is, the initial ξ_∞ (or initial abundance and temperature, or even more generally f(p)) is presumed. The introduction of a HP interaction opens a possibility of populating the dark sector solely by the freeze-in mechanism. Moreover, if the portal is substantial, it may also facilitate heat exchange, which is a necessary ingredient of the mechanism known as the SIMP miracle <cit.>. When the number changing self-interactions are absent, it is usually sufficient to solve Eq. (<ref>) at the lowest moment (i.e., the number density evolution) during freeze-in, as the transfer of heat does not impact the final DM abundance.[Although solving the cBE (or fBE) may be essential to determine if the amount of energy injected into the DM fluid by the freeze-in mechanism conflicts with Lyman-α forest data, which constrains the free-streaming length to λ_FS<0.24 Mpc <cit.>.] In our scenario, however, the dark sector can achieve chemical equilibrium through 2↔ 3 processes. While doing so, it can convert its kinetic energy into number density, a mechanism intriguingly opposite to the cannibalization phase <cit.>. Thus, it becomes crucial to solve the system of cBE, Eq. (<ref>), which takes the form Y_φ'/Y_φ ⊃ +1/xH̃(⟨C_h→φφ⟩+⟨C_hh→φφ⟩)Θ(T-T_EWPT) , -x_φ'/x_φ ⊃ +1/xH̃(⟨C_h→φφ⟩_2+⟨C_hh→φφ⟩_2) Θ(T-T_EWPT) . As an initial condition we take n_φ^i = n_φ^eq(T_φ^i), with ξ_∞=T_φ^i/T^i<1 being a free parameter and T_i=T_EWPT=150 GeV, where T_EWPT is the temperature of the SM plasma at the Electroweak Phase Transition (EWPT). Two comments are in order.
First, this set of cBE focuses on production after the EWPT, neglecting contributions from before and during the EWPT. Given our assumption of sub-GeV DM and therefore the limit m_φ≪ m_h, the main production is driven by Higgs decay, h →φφ, at a time when T∼ 40 GeV <cit.>. During the EWPT, the physical mass of the Higgs boson vanishes for a short period of time and then increases with decreasing temperature. Therefore, there is a point in time where m_h(T)≈ m_φ and the mixing angle (<ref>) is enhanced, thus DM can be produced through Higgs oscillations. The yield from this mode is approximately <cit.> Y_φ^EWPT = (1.93× 10^5 GeV^-4) λ_hφ^2 m_φ^2 w^2 = 1.93× 10^5 GeV^-4 λ_hφ^2 3m_φ^4/λ , where we used Eq. (<ref>). For m_φ = 100 MeV, λ=10^-2 and λ_hφ=10^-9 this results in Y_φ^EWPT≈ 5.79× 10^-15, which is a small contribution compared to the one post-EWPT (cf. Figure <ref>). The second comment is that the production from h →φφ is essentially insensitive to the dynamics of the EWPT, and in fact also to the exact value of T_EWPT, allowing us to adopt a simple Heaviside step function. For the production from the Higgs decay the zeroth moment thermal average takes the analytical form ⟨C_h→φφ⟩ = 1/n_φλ_hφ^2 v^2 m_h/(16π^3)√(1-4m_φ^2/m_h^2) T K_1(m_h/T) and for the second moment: ⟨C_h→φφ⟩_2 ≈ 1/(3n_φ T_φ)λ_hφ^2 v^2 m_h^2/(32π^3)√(1-4m_φ^2/m_h^2) T K_2(m_h/T) , which is valid in the m_φ≪ m_h limit that we assume in the numerical implementation (cf. Eq. (<ref>)), and where K_n are the modified Bessel functions of the second kind and order n. The sub-leading annihilation contributions are given in appendix <ref>. The interplay between freeze-in and the number changing self-interaction processes can lead to interesting dynamics. One can classify the possible scenarios by the strength of the 2↔ 3 process: * Throughout the entire history of the dark sector, number changing reactions remain inefficient (i.e., Γ_2↔ 3≪ H); this corresponds to the standard freeze-in mechanism. * The number changing self-interactions are inefficient initially, but become stronger in later stages of the evolution, e.g., following the EWPT. At that point the system strives to reach chemical equilibrium. During this phase 2→ 3 reactions rapidly produce more DM while at the same time cooling the dark sector <cit.>. * The number changing self-interactions remain efficient throughout the entire evolution of the system. As a result, FI production is predominantly supported through the process 2→ 3. This differs from the previous point, as the dark sector is always in chemical equilibrium with itself and there are no sudden attempts to re-establish equilibrium. §.§ Results As was already mentioned, once λ_hφ≠ 0 this particular realization of the scalar singlet DM with spontaneously broken ℤ_2 is strongly constrained by the stability requirement. Here we first quantify this statement and then answer the question of whether including such a portal coupling can lead to detectable signals. Finally, we use this model as an illustration of the possible interesting dynamics encoded in the interplay of the processes discussed above. In Figure <ref> the allowed parameter space in the plane of λ_hφ vs m_φ is presented. If m_φ is above the threshold of μ^+μ^- decay, then only extremely small values of λ_hφ are not excluded by the DM stability condition (requiring τ_φ to exceed the age of the universe). For 2m_e<m_φ<2m_μ the lifetime is most strongly constrained by INTEGRAL data with the condition τ_DM≳ 10^27 s <cit.>.
Below the e^+ e^- threshold the limits become weaker, but remain quite significant, as only direct annihilation to photons is possible. Finally, self-interactions of DM are constrained by observational data of DM elastic self-scattering in galaxies and galaxy clusters. In particular, we show the exclusion limit from the Merging Galaxy Cluster 1E 0657-5 (the Bullet Cluster) <cit.>, σ_T/m_φ<1 cm^2/g at a typical velocity of v=10^-4, where the transfer cross section is defined as σ_T=∫ dΩ (1-cosα)dσ_2→ 2/dΩ , with σ_2→ 2 accounting for self-scattering in the form φφ→φφ. For the scalar singlet model this constraint takes a particularly simple form, with the allowed region satisfying m_φ/16.32 MeV≳λ^2/3 and, in contrast to the limits coming from DM stability, it depends on the value of the quartic coupling λ. In Figure <ref> we chose two representative values: λ=10^-4 and λ=10^-3, for which we also show as gray lines (solid and dashed, respectively) the contours for the parameters that lead to the relic density matching the observed value for different assumptions regarding ξ_∞. The larger ξ_∞ is, the larger the initial population and thus the smaller the additional production from FI that is required; hence, smaller values of λ_hφ. The shape of the contours clearly indicates, as one would expect, that for small enough λ_hφ the impact of the HP becomes negligible. The choice of λ impacts the results in three ways. First, it determines for which values of m_φ the process 2→ 3 dominates over 3→ 2 throughout the dynamical evolution of the system. This manifests around m_φ≃ 200 keV (m_φ = 2 MeV) for λ=10^-4 (λ=10^-3) in Figure <ref>, where solutions exhibit smaller values of λ_hφ along the solution lines. Second, increasing λ will lead to more DM depletion via the cannibal process, which can be compensated by increased production during FI, thereby necessitating a higher λ_hφ. And finally, it affects the self-scattering constraints, shifting the excluded region to the right (left) if λ increases (decreases). All in all, these results strongly suggest that for this particular model a rather substantial initial population, coming from an additional production mechanism, is required to allow for a long enough lifetime to be consistent with observations, and therefore that freeze-in alone is insufficient to populate the dark sector. Additionally, in the parameter regions where the model is still allowed, the value of λ_hφ needs to be small enough that it has a very weak to negligible effect on the DM production, both through the freeze-in process and through the heat transfer due to elastic scatterings. Irrespective of the quite limited viability of this simple model, it can serve as a clear illustration of the dynamics of freeze-in coupled with 2↔ 3 processes. In Figure <ref> the evolution of the DM yield and temperature for an example benchmark point is presented for three different choices of the coupling values. Black lines show the baseline case of λ_hφ=0, while red lines show the case with the Higgs portal switched on, λ_hφ=10^-11, leading to a freeze-in contribution; both are shown for two different values of the self-coupling λ governing the cannibal process. Without the HP coupling the system undergoes only dark sector freeze-out with a pronounced self-heating period around x∼ 10^-1. Introducing non-zero λ_hφ leads to an additional injection of φ particles (mostly) from Higgs decay that not only increase the yield but are also much more energetic, further heating the system.
It is then crucial whether the 2↔ 3 self-interactions are strong enough to convert this excess heat into more φ particles (solid line) or not (dashed line). This large increase in the yield, with a corresponding decrease in temperature, has been noted in <cit.> and dubbed “boosting freeze-in through thermalization”. This effect underlines the necessity of careful evaluation of the temperature evolution alongside the number density, which we have done fully numerically by solving the cBE system with all the contributing processes. § SM + ℤ_3 COMPLEX SCALAR The limitation stemming from instability of DM if the stabilizing ℤ_2 symmetry is broken can be avoided without changing the particle content of the model by imposing a ℤ_3 symmetry instead. This requires changing from a real scalar φ to a complex one, which we will call S. Such a ℤ_3-stable scalar was first introduced in the context of neutrino physics <cit.>. As a WIMP candidate, its phenomenology was initially addressed in <cit.> and further discussed in <cit.>. Here we focus on SIDM <cit.> combined with the FIMP scenario <cit.>. Incidentally, the ℤ_3 symmetry allows for a cubic self-coupling leading naturally to cannibal-type reactions of the form 3↔2, making it a suitable candidate for a model with a cannibal dark sector. §.§ The model To the SM we add a complex scalar S charged under a hidden ℤ_3 symmetry, S→ e^2qπ i/3 S, with q=1. The most general renormalizable potential is given by: V_HP(H,S)=V_s + λ_hs| H|^2| S|^2 , where V_s encodes self interactions of the scalar field S, V_s =μ_s^2| S|^2 + g_s/3!(S^3+(S^*)^3) + λ_s/4| S|^4 . To ensure stability of the potential, we first demand μ_s^2, λ_s>0 and absorb the complex phase of the VEV, v_s, in the scalar field S. We now obtain the extrema of the potential by solving ∂_S V_s|_S,S^*=v_s = λ_s v_s^3/2+g_s v_s^2/2 + μ_s^2 v_s = 0. There are three solutions, v_s^0=0 and v_s^± = (-g_s±√(g_s^2-8λ_s μ_s^2))/(2λ_s) . If g_s^2<8λ_s μ_s^2, then the only real solution is v_s^0. To express this constraint in a more convenient form, we introduce the dimensionless parameter k=g_s^2/(3λ_s μ_s^2), with the stability constraint becoming k<8/3. Note that the physical mass is given by m_s^2 = λ_s v_s^2 + μ_s^2 and corresponds to μ_s^2 if the stability constraint is satisfied. Due to charge conservation, the only allowed number-changing reactions within the dark sector are SSS↔ S^*S and S^*S^*S↔ SS, along with their complex conjugates. The corresponding Feynman diagrams are shown in <Ref>. In order to solve the cBE for this realization, we have to specify the collision operator. Considering first the process S^*S↔ SSS and treating S and S^* as different states, the collision operator for S^* in its most general form can be expressed as C_S^*S↔ SSS[S^*]=1/2E_S^*g_S^*∫ ((1+f_S^*)|ℳ̃_345→S^*2|^2( 1/3!dΠ_3dΠ_4dΠ_5f_3f_4f_5 ) dΠ̃_2 -f_S^*|ℳ̃_S^* 2→ 345|^2 dΠ_2f_2(1/3!dΠ̃_3dΠ̃_4dΠ̃_5 )) , where, as in the real case, dΠ̃_i=dΠ_i(1+f_i). Assuming that the system is diluted or non-relativistic, 1+f_i≈1. The remaining collision operators, as well as their zeroth and second moment integrals, can be found in Appendix <ref>. The FI production contribution is analogous to the previous section. We consider the two degrees of freedom of the complex scalar to account for DM, meaning we assume n = n_S + n_S^* with n_S = n_S^* (i.e. the dark sector has no initial asymmetry and there is no CP violation in the model). We also assume the same initial conditions as in the previous section.
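As a small illustration of the vacuum structure just described, the following sketch checks that for k<8/3 the extrema v_s^± are complex and only the symmetric minimum v_s=0 survives; the parameter values are examples, not benchmarks from our analysis.

```python
import numpy as np

# Helper sketch: given (mu_s, lambda_s, g_s), check whether the S-potential only has
# the symmetric minimum v_s = 0, using the conditions quoted above (illustrative only).

def z3_vacuum_ok(mu_s, lam_s, g_s):
    k = g_s**2 / (3.0 * lam_s * mu_s**2)
    disc = g_s**2 - 8.0 * lam_s * mu_s**2            # discriminant entering v_s^±
    if disc < 0.0:
        nontrivial = None                            # v_s^± are complex -> only v_s = 0
    else:
        nontrivial = (-g_s + np.array([1.0, -1.0]) * np.sqrt(disc)) / (2.0 * lam_s)
    return k, k < 8.0 / 3.0, nontrivial

m_s = 0.4e-3          # GeV (400 keV example); equals mu_s when <S> = 0
lam_s = 1e-2
k_target = 0.5
g_s = np.sqrt(3.0 * lam_s * m_s**2 * k_target)       # invert k = g_s^2/(3 lam_s mu_s^2)

print(z3_vacuum_ok(m_s, lam_s, g_s))                  # -> (0.5, True, None)
```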
§.§ Results In order to study the interplay of freeze-in and cannibal reactions, and consequently determine the values of λ_hs that lead to the observed relic abundance, we perform a parameter scan in the plane λ_hs vs m_s for different choices of k(=g_s^2/(3λ_s m_s^2)), as depicted in Figure <ref>. Colored lines display solutions corresponding to the observed DM relic abundance, fixing λ_s = 10^-2. The purple region is excluded by too strong self-scatterings using Eq. (<ref>), where σ_T encompasses both SS→ SS and SS^*→ SS^*. Note that this constraint depends on m_s, λ_s and k: for m_s = 400 keV, λ_s = 10^-2 and k=10^-4 (k=0.5), σ_T/m_s=0.54 cm^2/g (σ_T/m_s=0.64 cm^2/g). The displayed exclusion region corresponds to the most stringent constraint. Let us first discuss the impact of the strength of number changing self-interactions by comparing lines for different values of the k parameter, focusing first on the left panel of Figure <ref>. For m_s≲ 1 MeV the dark sector successfully reaches chemical equilibrium at high temperatures, subsequently undergoing the standard cannibalization phase. If self-interactions increase (by increasing k), more DM is depleted during the dark freeze-out. This depletion is compensated by initially producing more DM (increasing λ_hs). The second region of interest is for masses in the interval 1 MeV≲ m_s≲ 1 GeV. Here the dark sector struggles to self-equilibrate. The non-monotonic behavior of the solution lines is due to DM decoupling before reaching chemical equilibrium, necessitating less FI production to compensate for efficient 2→ 3 production. In the third region, m_s≳ 1 GeV, self-interactions are not efficient enough (cf. Eq. (<ref>)) to have any sizeable impact on λ_hs. Now let us turn to the impact of ξ_∞, by comparing the left and right plots in Figure <ref>. For ξ_∞ = 10^-3, i.e. a very small initial abundance of DM, somewhat larger values of λ_hs are required. This is especially true for larger S masses, where efficient freeze-in production, added to an already non-negligible population with ξ_∞ = 0.02, could otherwise result in overabundant DM. Also, the interplay between the number changing self-interactions and the freeze-in contribution, seen as a departure from simple power-law scaling, is more pronounced for lower values of ξ_∞, since otherwise the kinetic energy transferred to the dark sector during FI is comparable to its already existing energy density. To summarize, these results offer insight into the rich dynamical interplay of the SIDM+FIMP scenarios and their effect on the parameter space. Additionally, it is worth noting that having ξ_∞<1 allows the ℤ_3 model to achieve a SIDM realization that is consistent with the relic abundance and does not conflict with structure formation, which was found not to be possible for ξ_∞=1 <cit.>. However, this particular realization is beyond the reach of current experimental prospects. If the dark sector consists of only one species of particles that are singlets with respect to the SM gauge group, any detection becomes challenging because all non-gravitational interactions are mediated only by the Higgs boson. Among the future prospects, there are numerous ongoing experimental efforts to detect sub-GeV dark matter. Outstanding examples are the DM-electron scattering DD experiments targeting DM masses in the MeV range <cit.>. Currently, telescope observations also aim at targeting such masses <cit.>. Interestingly, masses in the GeV range are already testable in DD experiments <cit.>.
Our interest, however, lies in cannibal sectors, thus masses in the MeV range are our primary focus. Notably, electron recoil experiments are currently unable to test FI couplings in this mass range.[The non-relativistic DM-electron scattering cross section for this HP model is given roughly by σ_S e→ Se∼λ_hs^2 m_e^2/m_h^4∼ 4λ_hs^2× 10^-43 cm^2. The sensitivity of silicon based electron recoil experiments is σ_DM e→DM e∼10^-42 cm^2 for m_DM∼ 10 MeV <cit.>.] § SM + ℤ_3 COMPLEX SCALAR + SCALAR MEDIATOR Finally, in this section we discuss our main model, incorporating all the interesting dynamics of the simpler models presented before while offering richer phenomenology. This model is defined by extending the potential (<ref>) by an additional real singlet scalar field ϕ, which interacts with both the Higgs doublet of the SM and the complex scalar DM. §.§ The model The generalized renormalizable potential can be expressed as: V(H,S,ϕ)=V_s + V_ϕ +V_ϕ s + V_HP , where V_ϕ =λ_1ϕ+μ_ϕ^2/2ϕ^2 + g_ϕ/3!ϕ^3 + λ_ϕ/4!ϕ^4 , V_ϕ s = λ_ϕ s/2 ϕ^2| S|^2 + A_ϕ s ϕ| S|^2 + κ_ϕ s ϕ S^3 + c.c. , V_HP = (λ_h s | S|^2 + λ_hϕ ϕ^2 + B_hϕ ϕ)| H|^2 , and V_s is defined in Eq. (<ref>). Here κ_ϕ s can lead to the FI semi-production of DM via ϕ S→ S S <cit.>. In this study we choose to neglect this interaction by setting κ_ϕ s=0 and focus on the thermalization of the DM and the mediator through λ_ϕ s. To streamline the analysis, we also set λ_hs = λ_hϕ = 0 and consider populating the dark sector via B_hϕ. In the forthcoming discussion, we assume the stability of S (k<8/3, where k=g_s^2/(3λ_s μ_s^2)). The mediator mixes with the Higgs via B_hϕ, which is parameterized by the mixing angle θ given in Eq. (<ref>), with the detailed analysis of the stability of the potential provided in Appendix <ref>. §.§ Relevant reactions and cBEs The DM and mediator abundances are impacted by the processes shown in Table <ref>. The mixing between the scalar field and the Higgs boson post-EWPT leads to the substitution ϕ→cosθ ϕ + sinθ h≈ϕ + θ h, inducing interactions between the DM and the Higgs boson via interactions with ϕ: V_ϕ s⊃λ_ϕ sθ ϕ h | S|^2 + A_ϕ sθ h| S|^2 + 𝒪(θ^2) . The first term induces the three-body Higgs decay h→ϕ SS^*, when kinematically allowed. As m_ϕ, m_s≪ m_h, we estimate this decay in the massless limit for the DM and mediator within the cBE. The Higgs potential also induces interactions between ϕ and the Higgs boson after rotation,[Through the mixing with the Higgs, V_ϕ will lead to similar interactions via g_ϕ and λ_ϕ, which we assume to be negligible.] λ_h (H^† H)^2 = λ_h/4(h-θϕ+v)^4 ⊃ -λ_h θ h^3ϕ+𝒪(θ^2) . Assuming that mediator decay into DM is kinematically forbidden (m_ϕ<2m_s), the set of cBEs is: Y'_S/Y_S =1/x H̃(⟨C_h→ϕ SS^*⟩+⟨C_h→ SS^*⟩+⟨C_ϕϕ↔ SS^*⟩+⟨C_3↔ 2⟩) , -x_S'/x_S = 1/x H̃(⟨C_h→ϕ SS^*⟩_2+⟨C_h→ SS^*⟩_2+⟨C_ϕ S↔ϕ S⟩_2 + ⟨C_3↔ 2⟩_2 ) -Y'_S/Y_S + H/x H̃⟨p^4/E^3⟩/3T_S + 2s'/3s , Y'_ϕ/Y_ϕ =1/x H̃(⟨C_h→ϕ SS^*⟩+⟨C_SM SM→SM ϕ⟩+⟨C_ϕϕ↔ SS^*⟩) , -x_ϕ'/x_ϕ =1/x H̃(⟨C_h→ϕ SS^*⟩_2+⟨C_SM SM→SM ϕ⟩_2+⟨C_ϕ S↔ϕ S⟩_2 ) -Y'_ϕ/Y_ϕ + H/x H̃⟨p^4/E^3⟩/3T_ϕ + 2s'/3s . Here we define x_S=m_s/T_S and x_ϕ=m_ϕ/T_ϕ. The collision operators of S-ϕ interactions are detailed in Appendix <ref>, while the triple Higgs decay collision term can be found in Appendix <ref>. §.§ Results This model offers considerably richer dynamics compared to the previous two.
We will first address these features by showing four examples that illustrate how DM cannibalization is supported or counterbalanced by the mediator, depending on the mass hierarchy between them and the strength of the number changing self-interactions. Secondly, we will examine the mediator's phenomenology, addressing mainly the cosmological bounds on the mediator's lifetime. Unlike the secluded ℤ_3 case of Section <ref>, this model predicts testable signals in telescopes and searches for long-lived particles. If the mediator mass lies within the MeV scale, its decay can significantly influence the cosmological evolution of the Universe. This impact could potentially alter the predicted abundances of primordial elements produced during Big Bang Nucleosynthesis (BBN, T∼ 150 MeV). Such constraints are highly dependent on the abundance of the mediator just before and during the BBN epoch. Late mediator decays can inject energy into the SM plasma, potentially disrupting Cosmic Microwave Background (CMB) observations. The production of electron/positron pairs from ϕ can also result in a flux of X-ray photons from the Inverse Compton Scattering (ICS) process. In the results below we will present the parameter space that does not conflict with these observations. Finally, we will discuss the detectability of the mediator in the forthcoming long-lived particle search experiment, MATHUSLA. §.§.§ Benchmark solutions of the cBE system We start with four benchmark examples leading to the correct DM relic abundance and showcasing distinct dynamics, Figures <ref>-<ref>. The first two cases illustrate the evolution with strong self-interactions where the system quickly achieves equilibrium. The second two cases show the evolution with relatively weak self-interactions. Each figure shows the yield (top left), ratio of temperatures (ξ=T_ds/T, top right), the rates with DM self-interactions (bottom left) and without DM self-interactions (bottom right). The rates without self-interactions, λ_s=0, correspond to the dashed lines in the figures. The rates are evaluated at the zeroth moment level, Γ_rate = 1/n_ds⟨C_int.⟩, with the exception of the thick and dashed orange lines, which represent the DM-mediator scattering rate, Γ_sc.[ϕ] = 1/3T_ϕ n_ϕ|⟨C_scatter[ϕ]⟩_2| , Γ_sc.[S] = 1/3T_S n_S|⟨C_scatter[S]⟩_2| , where ⟨C_scatter⟩_2 is given in Eq. (<ref>). Let us now discuss each case in detail. First, for m_s<m_ϕ, the evolution is depicted in Figure <ref>. The first stage of the evolution is characterized by an instantaneous injection of energy at T=150 GeV (x≃6.6×10^-4); the rate of FI production is shown as dark green lines in the rate plots. Note that the mediators are primarily populated via SM SM→SM ϕ and they quickly dissipate part of this energy into the DM fluid through scattering in a sudden attempt to achieve kinetic equilibrium (see inset plot). In this case, the DM-mediator interactions are strong enough (λ_ϕs=10^-3) to bring both fluids into thermal (kinetic and chemical) equilibrium at early stages of the evolution, while DM self-interactions re-establish chemical equilibrium in the DM fluid (λ_s = 0.05, cf. thick and dashed lines in the rate plots). The system evolves while mediators rapidly deplete to produce more DM (thick gray line), thus the cannibalization phase is counteracted by this process. Secondly, m_s>m_ϕ. Here there are two sources of DM depletion, namely self- (3→2) and DM-mediator (SS^*→ϕϕ) annihilations, and the evolution is depicted in Figure <ref>.
A similar study has been conducted in <cit.>. To ensure the observed DM relic abundance, the interactions in the dark sector must be weaker compared to the previous scenario (λ_ϕs = 10^-5). At the same time, DM self-interactions are sufficiently strong to drive the DM fluid back into chemical equilibrium, which influences the evolution of the mediators, as they are partially coupled to the DM during the evolution (thick and dashed gray/orange lines around x = 10^-2). While both fluids try to reach equilibrium, more mediators are produced from DM, which are colder than those produced by freeze-in, thus cooling the mediator sector. In this instance, Γ_sc.[S] (orange dashed) is the rate of heat injected from the mediator fluid, while Γ_sc.[ϕ] (orange solid) is the rate of heat loss before an equilibrium is reached. Third, we consider an unsuccessful attempt of the dark sector's interactions to achieve equilibrium for m_s<m_ϕ. This is depicted in Figure <ref>, and unlike in the last two cases, there are no sudden attempts to re-establish equilibrium. As the universe expands, the 2→3 reaction becomes more efficient around x=3×10^-2 (blue thick line) and starts driving the DM fluid to chemical equilibrium by producing more dark matter at the cost of kinetic energy. This DM population possesses enough energy to produce more mediators (dashed gray line). However, mediators are heavier, rendering this process quickly inefficient, and the reverse process rapidly dominates (thick vs dashed gray lines in Figure <ref>). The system decouples before reaching equilibrium. Finally, the last case involves weak self-interactions with m_s>m_ϕ and is shown in Figure <ref>. The primary active reactions are 2→3 and ϕS↔ϕS. The entire production of DM is attributed to the 2→3 reaction, which initially surpasses the rate of expansion but quickly drops below the Hubble rate, remaining so until x∼2×10^-2 when its efficiency increases again. In fact, the rate Γ_2→3 remains fairly efficient due to the mediator sector supplying heat to the DM fluid through scattering, which is subsequently transformed into number density by the 2→3 reaction. Concurrently, number changing self-interactions keep DM cool (cf. thick and dashed black lines in the temperature plot), particularly when Γ_2→3>Γ_sc[S] around x=3×10^-2, thus decreasing the DM temperature, while its yield is further boosted, followed by an increase of the scattering rate that injects more heat into DM around x=10^-1. At this stage, Γ_SS^*→ϕϕ starts growing as the DM sector possesses enough energy to produce mediators (cf. gray dashed), thereby increasing the mediator's abundance. §.§.§ Numerical scan setup To investigate the phenomenological implications of this model, we performed a random scan across the parameters outlined in Table <ref>, fitting to the value of the observed relic abundance, i.e., accepting points within Ω_DM^obs/Ω_DM=1±0.01. In order to study the implications of various strengths of DM self-interactions, we fixed k=0.5,[Note that both k and λ_s influence the strength of self-interactions, and one can compensate for the other. Also note that k=0.5 is safely within the stability bound (k<8/3).] and allowed log_10λ_s to take values on a grid from -4 to -1 with a step of 0.5. The phenomenology of the model branches into two distinct cases based on the mass hierarchy between S and ϕ; a schematic of the acceptance logic used in the scan is sketched below.
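The sketch below illustrates only the bookkeeping of this acceptance criterion; the function relic_abundance stands in for the actual solution of the four cBEs and is a hypothetical placeholder, and the parameter ranges (other than k and the λ_s grid) are illustrative, not the ranges of Table <ref>.

```python
import numpy as np

# Schematic of the scan acceptance logic described above (not the actual scan code).

OMEGA_OBS = 0.12                                       # observed Omega_DM h^2 (approx.)
OMEGA_REL_TOL = 0.01                                   # accept |Omega_obs/Omega - 1| <= 1%
K_FIXED = 0.5
LOG10_LAMBDA_S_GRID = np.arange(-4.0, -0.99, 0.5)      # -4 to -1 in steps of 0.5

def relic_abundance(point):
    # placeholder for the solver of the four coupled Boltzmann equations;
    # returns a dummy value here so the sketch runs
    return 0.12

def accept(point):
    omega = relic_abundance(point)
    return abs(OMEGA_OBS / omega - 1.0) <= OMEGA_REL_TOL

rng = np.random.default_rng(1)
accepted = []
for _ in range(1000):
    point = {
        "m_s": 10 ** rng.uniform(-3, 1),               # GeV, illustrative range
        "r": rng.uniform(0.1, 2.0),                    # mediator-to-DM mass-ratio parameter (illustrative)
        "log10_lambda_s": rng.choice(LOG10_LAMBDA_S_GRID),
        "log10_lambda_phis": rng.uniform(-8, -1),      # illustrative range
        "log10_A_phis": rng.uniform(-4, 0),            # illustrative range
        "k": K_FIXED,
    }
    if accept(point):
        accepted.append(point)
```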
For one, only the m_s>m_ϕ case leads to any appreciable DM annihilation signals, as the ϕ mixing with the Higgs is too weak for any direct process of SS → SM to be strong enough: only the annihilation SS →ϕϕ followed by the decay of the ϕ's can have an appreciable cross section. The mass hierarchy also strongly affects the evolution during DM production, impacting not only the abundance of S, which sets the DM relic density, but also that of ϕ, altering the total energy injection from its decay, which is an important factor in the determination of the limits from BBN. Indeed, the presence of ϕ could spoil BBN observations if its lifetime is within the range of 10^-2 s≲τ_ϕ≲10^12 s. Decays at early times (τ_ϕ≲10^5 s) are constrained by the refined measurements of the abundance of ^4He, ^2H, ^6Li and ^7Li, as mesons produced by ϕ would strongly interact with nucleons. Electromagnetic decays, on the other hand, may impact BBN only at comparatively later times (τ_ϕ≳10^5 s), mainly due to energetic photons and e^± interacting efficiently with the background photons, rapidly producing γ-rays. If such γ-rays have energies below a certain threshold, they are less likely to interact further with the plasma. This threshold energy changes over time as the universe cools down and is E≈2.2 MeV at τ_ϕ∼10^5 s and E≈19.8 MeV at τ_ϕ∼10^7 s. These specific values for threshold energies are critical because they mark the points at which γ-rays photodisintegrate ^2H and ^4He <cit.>. §.§.§ Scan results for m_s>m_ϕ In the case of S heavier than ϕ the BBN bounds are exacerbated due to the additional production of ϕ from S, leading to a temporary overabundance of mediators. This case is presented in <Ref>. All the shown points satisfy the relic density constraint. The ones that are shaded in gray are excluded by the requirement of successful BBN, following the exclusion lines of Figure <ref> (left). The remaining points are colored according to the value of a given parameter, chosen differently for each plot to highlight its relevance. The right plot in Figure <ref> and the plots in Figure <ref> show the viable points in the plane of the mediator parameters τ_ϕ vs m_ϕ. These three plots highlight patterns in different model parameters: λ_s, λ_ϕs and r, respectively. The lifetime increases with increasing λ_ϕs, while there is no clear pattern when varying λ_s. The mass hierarchy and the DM-mediator coupling (see Figure <ref>) exhibit a relation that can be understood as follows: if the masses within the dark sector are sufficiently hierarchical (r≪1/2), then the rate of annihilation SS^*→2ϕ becomes efficient enough that, to avoid complete depletion of DM, one needs to compensate by lowering λ_ϕs. Next we turn to a mediator with mass exceeding the μ^+μ^- threshold. The resulting points satisfying the relic density constraint, with superimposed exclusion limits from BBN, are shown in Figure <ref>. When m_ϕ>2m_μ rapid mediator decays may successfully evade CMB bounds, but they could still modify the standard BBN predictions of the primordial mass fraction Y_p and the ratios of primordial number densities D/H. The region of the parameter space that evades these constraints corresponds to m_ϕ≳1 GeV, with preferred self-interaction coupling log_10λ_s≲-2.5 (top left), while the DM-mediator interactions lie in the ranges log_10λ_ϕs≲-5 and log_10λ_ϕs∼-3 (top right). Finally, this scenario is also constrained by the energy injection to the CMB from DM annihilation and by X-ray telescope data.
Comparing to the limits set by the PLANCK satellite <cit.>, recast using <cit.>, as well as Inverse Compton Scattering (ICS) X-ray limits, recast using <cit.>, in Figure <ref> we project the otherwise viable points onto the plane of the present day SS^*→2ϕ cross section vs. the DM mass, with the color bar indicating the mediator's lifetime. The limits are given for an idealized case of 100% branching ratio of ϕ to photons, electrons or muons, which are, however, close enough that the difference between them has virtually no impact on the final conclusions. To relate the CMB/X-ray constraints on the mediator's lifetime to the limits for DM mass and cross section we assume that SS^*→ 2ϕ happens at rest (i.e. that ϕ possesses energy of E_ϕ = m_s) and we account for the double flux of SM states, as well as for the rescaling of the DM mass by a factor of 1/2. While DM in the MeV scale falls within the telescope's sensitivity, the cross section decreases with increasing DM mass, resulting in points that fall below the sensitivity threshold for heavier S, particularly for GeV dark matter. In summary, after careful implementation of the processes affecting the dynamics of freeze-out we see that, for m_s>m_ϕ and for points predicting the observed value of the relic abundance, most of the parameter space is strongly constrained by cosmological and observational data, leaving only small windows in the GeV spectrum to be tested by the future generation of telescopes. §.§.§ Scan results for m_s<m_ϕ In contrast, if the hierarchy is inverted, mediators are depleted to produce more DM, relaxing the cosmological bounds. This is shown in the left panel of Figure <ref>, where a large part of the mediator's abundance lies below the BBN exclusion lines.[Regarding the BBN bounds for m_ϕ above the muon threshold (cf. Figure <ref>), the limits coming from the primordial mass fraction and number density D/H require specific calculations for this scenario that go beyond the scope of this work.] Unlike in the previous case, the impact of ϕ on the CMB spectrum in this scenario requires a more detailed computation of the CMB distortions due to the lower abundance of mediators. The photon thermal shape before recombination is measured with high accuracy by COBE/FIRAS <cit.>. Decays of ϕ around the recombination epoch would inject energy into the photon plasma, deviating it from equilibrium and from black body radiation; e.g., late decays can cause changes in the ionization history around last scattering (z≃1000), which in turn would result in changes of the CMB temperature and anisotropies. Distortions can be expressed in the following form <cit.> y =1/4∫_z_rec^z_μ yd(Q/ρ_γ)/dz'dz' , μ =1.401∫_z_μ y^∞e^-( z'/z_μ)^5/2d(Q/ρ_γ)/dz'dz' , where z_rec = 1000, z_μy≃5×10^4 and z_μ=2×10^6. Points are excluded by COBE/FIRAS if they exceed the bounds <cit.> |μ|_CF <9× 10^-5 , | y|_CF <1.5× 10^-5 , and such points are marked with empty circles in <Ref>. In the future, the Primordial Inflation Explorer (PIXIE) detector <cit.> is planned to achieve sensitivity over 1000 times greater than COBE/FIRAS and would test this model if the distortion induced by ϕ lies above its projected sensitivity, |μ|_Pixie ≃10^-9, | y|_Pixie ≃2× 10^-9 . Points that predict a signal of such strength are marked as rhomboids in <Ref>. Note that the PIXIE experiment can test most of the points in the MeV range below the muon threshold, with the exception of the ones that fulfill n_ϕ/n_γ|_cd≲1.63×10^-13 (colored filled circles in Figure <ref>); a short numerical sketch of these distortion integrals is given below.
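The following sketch evaluates the y and μ integrals above for a user-supplied injection history d(Q/ρ_γ)/dz and compares them against the COBE/FIRAS bounds and the projected PIXIE sensitivity; the example injection function is a dummy placeholder, not the actual energy release from ϕ decays.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the distortion estimates in the two equations above; limits and prefactors
# follow the text, while the injection history dQ/rho_gamma per unit redshift must be
# supplied by the user (here a dummy placeholder, only to make the sketch executable).

Z_REC, Z_MUY, Z_MU = 1.0e3, 5.0e4, 2.0e6

def y_distortion(dQdz):
    return 0.25 * quad(dQdz, Z_REC, Z_MUY, limit=200)[0]

def mu_distortion(dQdz):
    integrand = lambda z: np.exp(-(z / Z_MU) ** 2.5) * dQdz(z)
    return 1.401 * quad(integrand, Z_MUY, np.inf, limit=200)[0]

def cobe_firas_excluded(mu, y):
    return abs(mu) > 9e-5 or abs(y) > 1.5e-5           # exceeds COBE/FIRAS bounds

def pixie_testable(mu, y):
    return abs(mu) > 1e-9 or abs(y) > 2e-9             # above projected PIXIE sensitivity

dQdz_example = lambda z: 1e-12 * np.exp(-z / 1e4) / 1e4   # dummy injection history
y_val, mu_val = y_distortion(dQdz_example), mu_distortion(dQdz_example)
print(y_val, mu_val, cobe_firas_excluded(mu_val, y_val), pixie_testable(mu_val, y_val))
```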
Conversely, if the mediator is allowed to decay into muons, its lifetime is shortened, and it falls below the PIXIE sensitivity. Notwithstanding, masses in the interval [212,284] MeV still lie within the sensitivity range through the μ-type distortion (cf. Figure <ref>). Unlike the previous case, there is an intricate relation between m_ϕ and τ_ϕ induced by DM self-interactions that can be appreciated in Figure <ref> (right): substantial λ_s results in longer lifetimes, as the DM fluid transforms its kinetic energy into number density, thereby necessitating less FI production. Moreover, points that pass the BBN and CMB tests exhibit stronger DM-mediator interactions than in the m_s>m_ϕ mass hierarchy. This can be read off from Figure <ref> (left), where the allowed region satisfies -4.5≲log_10λ_ϕs≤-1 (in the previous mass hierarchy it was -8≲log_10λ_ϕs≲-3). Indeed, smaller values of λ_ϕs mean that fewer mediators annihilate to DM and less heat is transferred to it from the mediator fluid, thereby necessitating larger FI production (since DM lacks sufficient kinetic energy to increase its number density via cannibal processes), which exacerbates the constraints. Consequently, larger values of λ_ϕs are preferred. The influence of the parameter λ_s is first seen through the apparent discrete bands the points align into, most clearly observed in Figure <ref>, but discernible in other plots as well. This effect does not carry any significant meaning, as it is a consequence of the discrete grid in the λ_s sampling. In particular, the visible bands correspond to log_10λ_s = -2, -1.5, and -1. What is physical, however, is that the lower λ_s is, the shorter the mediator lifetime, as weaker self-interactions mean weaker boosting of the FI production, thus requiring a larger mediator coupling (cf. <Ref>). Additionally, the parameter λ_ϕ s also influences the distribution of points, which segregate into two regimes: log_10λ_ϕ s≲ -2.5 and log_10λ_ϕ s≳ -2.5 (Figure <ref> right). Note that the distinctive resonance-like behavior observed at m_ϕ= 1 GeV is primarily a consequence of the sudden decrease in the ϕ lifetime due to a resonance in the Γ_ϕ→ππ rate <cit.>. Last but not least, the m_s<m_ϕ case can also be tested in the Massive Timing Hodoscope for Ultra-Stable Neutral Particles (MATHUSLA) experiment, designed to detect exotic long-lived particles generated in LHC collisions. These particles would purportedly be capable of traveling to the surface detector, where they may decay into SM charged particles. The detector will possess a sensitivity to lifetimes as large as 10^-4 s <cit.>, rendering GeV mediators in this model within reach. The results are shown in Figure <ref>, where we highlight the role of λ_ϕs and Ã_ϕs(=A_ϕs/m_ϕ). Testable points satisfy log_10λ_ϕs≲-6.15 and -2.9≲log_10Ã_ϕs≲-2, that is, both sectors are essentially decoupled and are populated separately: ϕ directly through θ, while S via the Higgs decay induced by A_ϕsθ h|S|^2. This result can be understood as follows. The hierarchy m_s<m_ϕ pushes the mediator mass towards the GeV scale and small λ_ϕs ensures that DM is not produced from mediator annihilation, implying a larger θ than otherwise, which shortens τ_ϕ. Similarly, a larger Ã_ϕs would lead to DM overproduction from Higgs decay. This can be compensated with a lower θ, resulting in longer lifetimes beyond the detector's range. In summary, the mass hierarchy m_s<m_ϕ opens up the parameter space in the MeV range and moreover adds further motivation to the searches planned at MATHUSLA and PIXIE.
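For orientation, the observational windows quoted in this and the preceding subsections can be collected into a rough triage of a given mediator point; the actual exclusions depend on the mediator abundance and decay channels and are read off the figures, so the thresholds below are only the indicative numbers quoted in the text.

```python
# Rough triage of a mediator point (m_phi in GeV, tau_phi in seconds) against the
# windows quoted in the text; this is bookkeeping only, not a substitute for the scan.

def mediator_windows(m_phi_gev, tau_phi_s, mu_dist=None, y_dist=None):
    flags = {}
    # lifetimes in this window can alter BBN predictions (severity depends on abundance)
    flags["bbn_sensitive"] = 1e-2 <= tau_phi_s <= 1e12
    # MATHUSLA is quoted to reach lifetimes up to ~1e-4 s for GeV-scale mediators
    flags["mathusla_lifetime_reach"] = (tau_phi_s <= 1e-4) and (m_phi_gev >= 1.0)
    # spectral-distortion tests require the induced mu/y distortions as inputs
    if mu_dist is not None and y_dist is not None:
        flags["cobe_firas_excluded"] = abs(mu_dist) > 9e-5 or abs(y_dist) > 1.5e-5
        flags["pixie_testable"] = abs(mu_dist) > 1e-9 or abs(y_dist) > 2e-9
    return flags

print(mediator_windows(1.2, 5e-5))
```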
§ CONCLUSIONS Cannibalization is an intriguing mechanism for depleting the number of dark matter particles without the necessity of coupling the dark sector to the SM plasma. As such, it provides an alternative to the standard thermal freeze-out, featuring a different phenomenological profile and complex decoupling dynamics. For it to be successful, however, a large initial population of dark sector states must first be produced, which subsequently undergoes a cannibalization phase. It is also crucial to ensure that the heat released in this process does not interfere with structure formation. Both conditions can be satisfied by introducing a very small coupling between the dark sector and the SM, which facilitates freeze-in type production of a dark sector with an initial temperature significantly lower than that of the SM plasma. In this work we investigate three such scenarios with a particular emphasis on whether the coupling strength required for effective dark matter production can also yield detectable signals in cosmological probes, indirect detection and searches for long-lived particles. In all cases we derive and solve the coupled Boltzmann equations for the number density and temperature for all particles taking part in the freeze-in and freeze-out processes. Implementation of all the relevant reactions, i.e., decays, annihilations, elastic scatterings and 3↔2 interactions, allows us to accurately analyze the interplay of the heat transfer between the visible and dark sectors and the evolution of the number densities of all the states. The first model we study consists of the dark matter being a self-interacting real scalar field in a ℤ_2-broken phase. Spontaneous breaking of the stabilizing symmetry provides the 3↔2 reactions but also induces DM decay. The latter leads, on the one hand, to stringent constraints on the coupling to the SM and, on the other, to a potential for detection of an otherwise completely secluded model. The results shown in Figure <ref> indicate that indeed the parameter space of such a model is tightly constrained by several factors: observations of the Bullet Cluster, constraints from INTEGRAL and NuSTAR on decaying dark matter, and the requirement that the dark matter lifetime is larger than the age of the universe. The portal coupling allowed by these observations is found to be small enough that the accompanying freeze-in mechanism is not strong enough to completely populate the dark sector, and such a scenario is shown to require an additional production mechanism. In the second model we explore a dark sector containing still only one state, now a complex dark matter candidate S stabilized by a ℤ_3 symmetry. We show that the freeze-in mechanism can successfully account for the observed relic abundance (cf. Figure <ref>) while not contradicting known observations. During the analysis we highlight the non-trivial dynamics of boosting the freeze-in production by 2→3 self-interactions. Thus we show that the ℤ_3 scalar singlet model possesses a valid alternative to its freeze-out and freeze-in realizations, with an intermediate cannibal phase. There is, however, no current technology to test this scenario, as all interactions with the SM are mediated by the Higgs boson with a very small mixing to S and, in contrast to the previous case, there are no DM decays present. The third scenario extends the previous model by including an unstable Higgs-like real mediator ϕ.
In this setup we account for the DM-mediator interactions by solving the coupled Boltzmann equations for the system of two dark sector particles and performing a numerical parameter scan over DM self-interactions, as well as DM-mediator couplings. We identify two scenarios with distinct phenomenology. In the first case, m_s>m_ϕ, the kinematically allowed SS^*→2ϕ process typically results in an overabundance of mediators during the BBN epoch, leading to stronger constraints (Figure <ref>). Therefore, both dark matter self- and DM-mediator interactions turn out to be crucial in mitigating cosmological constraints on the mediator's lifetime. For example, feeble S-ϕ interactions result in a shorter lifetime (Figure <ref>), allowing the model to evade BBN constraints. Indirect detection signatures are also feasible because the s-wave annihilation cross section can be sizable (Figure <ref>). Conversely, if m_s<m_ϕ, during the evolution in the early universe mediators are depleted to produce more dark matter, thereby relaxing cosmological bounds on their energy injection to the SM plasma. Specifically, with the dark matter and mediator in the GeV range there is a potential for detectable signals in long-lived particle searches. In particular, projections for MATHUSLA are depicted in Figure <ref>. To summarize, our study demonstrates that a frozen-in dark sector scenario featuring a cannibal dark matter candidate can simultaneously satisfy the observed abundance of DM, adhere to known constraints and hold some promise for detection in upcoming experiments like MATHUSLA, or leave distortions in the CMB that could be in the sensitivity range of PIXIE. It is a pleasure to thank Shiuli Chatterjee for insightful discussions. This work was supported by the National Science Centre (Poland) under the research Grant No. 2021/42/E/ST2/00009. § COLLISION OPERATORS FOR NUMBER CHANGING SELF-INTERACTIONS §.§ Matrix elements The matrix element for the reaction φ_1φ_2↔φ_3φ_4φ_5 corresponding to the real scalar model is iℳ_3φ↔ 2φ = -i g^3( 1/S S_34 +1/S S_35+1/S S_45 +1/T_15S_34 +1/T_14S_35+1/T_13S_45 +1/T_25S_34 +1/T_24S_35+1/T_23S_45 +1/T_14T_23 +1/T_15T_23+1/T_13T_24 +1/T_14T_25 +1/T_13T_25+1/T_15T_24) -igλ(1/S+1/S_34 +1/S_35+1/S_45+1/T_13+1/T_14 +1/T_15+1/T_23 +1/T_24+1/T_25) , where we have defined S_ij=s_ij-m_φ^2, T_ij = t_ij-m_φ^2, with S=S_12. In the broken phase g=√(3λ)m_φ. This means that we can factorize √(3λ^3), along with the mass in the propagator, by defining the dimensionless Mandelstam variables s̃_ij= s_ij/m_φ^2 and t̃_ij=t_ij/m_φ^2. Thus, the factor is √(3λ^3)/m_φ. In the complex scalar model we encounter two matrix elements; the first one, for S^*S↔ SSS (in short ℳ^(1)), is iℳ^(1)= -ig^2(1/S_45T_13+1/S_35T_14+1/S_34T_15) -igλ_s(1/S_34+1/S_35+1/S_45+1/T_13+1/T_14+1/T_15) . In this case S_ij = s_ij-m_s^2, and analogously for T_ij. Finally, the matrix element corresponding to SS↔ S^*S^*S is iℳ^(2)= -ig^2(1/SS_34+1/T_14T_23+1/T_13T_24) -igλ_s(1/S+1/S_34+1/T_13+1/T_14+1/T_23+1/T_24) . §.§ 3φ↔ 2φ collision integrals The zeroth moment collision term is ⟨C_3φ↔ 2φ⟩ = 1/n_φ∫d^3p⃗/(2π)^3 C_3φ↔ 2φ = 1/ n_φ1/g_φ1/2!1/3!∫ dΠ_1 … dΠ_5 |ℳ̃_3φ↔ 2φ|^2(f_1f_2-f_3f_4f_5) =1/12 n_φ1/g_φ ∫ dΠ_1 … dΠ_5 |ℳ̃_3φ↔2φ|^2 e^-(E_1+E_2)/T_φ( (n_φ/n^eq_φ)^2 - (n_φ/n^eq_φ)^3 ) = s^2 Y_φ⟨σ_3φ→ 2φv^2⟩ (Y_φ^eq - Y_φ) , where g_φ accounts for the DM degrees of freedom (g_φ= 1). Note also that we used f_3 f_4 f_5 = n_φ/n^eq_φ f_1 f_2.
The three final state integral can be evaluated by first boosting to the lab frame (CM of two final states, e.g.φ_3φ_4), followed by a boost to the total CM frame <cit.>. In the lab framep_1-p_2=(√(s_34),0⃗ )^T, withs_34=(p_3+p_4)^2 = (p_1-p_2)^2 = s+m_φ^2-2√(s)E_5. By definitionp_3^lab = (√(s_34)/2,p⃗_3^ lab)^Tandp_4^lab = (√(s_34)/2,-p⃗_3^ lab)^T. We can relate the lab with the CM frame via a boost in thezdirection followed by a rotation around theyaxis, ℬ_z(γ) = [ γ 0 0 γβ; 0 1 0 0; 0 0 1 0; γβ 0 0 γ ] , ℛ_y(α) = [ 1 0 0 0; 0 cosα 0 sinα; 0 0 1 0; 0 -sinα 0 cosα ] , such thatp_i^cm = (ℛ_y(θ+π)ℬ_z(E_12/√(s_34)))^-1 p_i^lab, whereE_12 = (s + s_34 - m_φ^2)/(2√(s)), andβ=√(1-1/γ^2). After applying these transformations, the thermal average in the CM frame is ⟨σ_3φ→ 2φv^2|=⟩1/2!3!1/(n^eq_φ)^3∫ dΠ_1 … dΠ_5 |ℳ̃_3φ↔2φ|^2 e^-(E_1+E_2)/T_φ =1/121/(n_φ^eq)^33/8(2π)^4∫ dΠ_1 dΠ_2 e^-E_1+E_2/T_φ∫ dẼ_5 dϕ_4 dx_4 dx_5 J |ℳ(s̃_ij,t̃_ij)|^2 , whereẼ_5 = E_5/m_φand the limits of integration are Ẽ_5∈[1,(s-3m_φ^2)/(2m_φ√(s))],x_4,5(=cosθ_4,5)∈[-1,1]andϕ_4∈[0,2π). Additionally, J=√(Ẽ_5^2 -1)√(1-4m_φ^2/(m_φ^2-2Ẽ_5 m_φ√(s)+s)) . We evaluate the 4-d integral numerically with the Monte-Carlo method. The integral over the initial states can be evaluated in terms ofE_+=E_1+E_2ands<cit.>, dΠ_1 dΠ_2 = 1/(2π)^41/8dE_+dE_-ds = 1/(2π)^4p_12/2√(E_+^2-s/s)dE_+ ds . withp_12 = 1/2√(s-4m_φ^2). Finally, we note that the thermal average scales inversely with the mass to the fifth power in the non-relativistic limit, ⟨σ_3φ→ 2φv^2|∼⟩λ^3 m_φ^4/(n_φ^eq)^3∼λ^3/m_φ^5 . The second moment collision term is 3T_φ g_φ n_φ⟨C_3φ↔ 2φ|_⟩2 = -1/3!∫ dΠ_1… dΠ_5 p⃗_1^ 2/E_1 f_1 f_2|ℳ̃_3φ↔ 2φ|^2 +1/2!2!∫ dΠ_1… dΠ_5 p⃗_3^ 2/E_3 f_1 f_2 |ℳ̃_3φ↔ 2φ|^2 -1/2!2!∫ dΠ_1… dΠ_5 p⃗_3^ 2/E_3 f_3 f_4 f_5 |ℳ̃_3φ↔ 2φ|^2 +1/3!∫ dΠ_1… dΠ_5 p⃗_1^ 2/E_1 f_3 f_4 f_5 |ℳ̃_3φ↔ 2φ|^2 . The terms with1/3!combine to ⟨σ_3φ→ 2φ v^2 p⃗^ 2/E | ⟩n_φ^2 (n_φ-n_φ^eq) , where ⟨σ_3φ→ 2φ v^2 p⃗^ 2/E |=⟩1/(n_φ^eq)^31/3!∫ dΠ_1 dΠ_2 p⃗_1^ 2/E_1f_1f_2 4F σ_2_φ→3_φ , andF=√((s/2-m_φ^2)^2-m_φ^4). On the other hand, the terms with factor1/(2!2!)lead to 1/2!2!(n_φ/n_φ^eq)^2∫ dΠ_1 … dΠ_5 f_1f_2 p⃗_3^ 2/E_3|ℳ̃_3φ↔2φ|^2( 1-n_φ/n^eq_φ) . The three final states integral is∫dΠ_3 dΠ_4 dΠ_5 (p⃗_3^ 2/E_3)|ℳ̃_3↔2|^2 , which is not Lorentz invariant due to the factorp⃗_3^ 2/E_3. Boosting the integrand to the CM of mass frame, the energy transforms as <cit.> E_3→ E_3 coshη+p⃗^ z_3 sinhη , with p⃗^ z_3 = √(E_3^2-m_φ^2)cosθ_3 . In this caseηis the rapidity. This suggest to define 4F σ̃_2φ→ 3φ(η)= 1/2!2!∫ dΠ_3 dΠ_4 dΠ_5 (p⃗_3^ 2/E_3)(η)|ℳ̃_3φ↔2φ|^2 . and its thermal average as ⟨σ̃_2φ→ 3φ v |=⟩1/(2π)^4 (n_φ^eq)^2 ∫_3m_φ^∞ dE_cm√(E_cm^2/4-m_φ^2)E_cm^2 ×∫_0^∞ dηsinh^2η exp(-E_cm/T_φcoshη) 4F σ̃_2φ→ 3φ(η) , then, the second moment,⟨C_3φ↔2φ|_⟩2, in terms of the number density takes the form 3T_φ g_φ⟨C_3φ↔ 2|_⟩2 =n_φ(n_φ-n_φ^eq)(⟨σ_3φ→ 2φ v^2 p⃗^ 2/E|-⟩⟨σ̃_3φ→ 2φv^2|⟩) =s^2Y_φ(Y_φ-Y_φ^eq)(⟨σ_3φ→ 2φ v^2 p⃗^ 2/E|-⟩⟨σ̃_3φ→ 2φv^2|⟩) . §.§ 3↔ 2 collision integrals for complex DM Considering the reactionS^*S↔SSS, the collision operator forSis in full generality is C_S^*S↔ SSS[S] = 1/2E_S g_S ∫(-f_S|ℳ̃_1S→ 345|^2dΠ_1f_1 (1/3!dΠ̃_3dΠ̃_4dΠ̃_5) +(1+f_S)|ℳ̃_12→S45|^2 (dΠ_1dΠ_2f_1f_2)(1/2!dΠ̃_4dΠ̃_5) -f_S|ℳ̃_S45→ 12|^2(1/2!dΠ_4dΠ_5f_4f_5)(dΠ̃_1dΠ̃_2) +(1+f_S)|ℳ̃_345→ 1S|^2(1/3!dΠ_3dΠ_4dΠ_5f_3f_4f_5)dΠ̃_1 ) , wheredΠ̃_i=dΠ_i(1+f_i)and1+f_i≈1. We now consider the reactionSS↔S^*S^* S. 
The collision integral forS^*is C_SS↔ S^*S^*S[S^*]=1/2E_S^*g_S^* ∫( (1+f_S^*)|ℳ̃_12→S^*45|^2(1/2!dΠ_1dΠ_2f_1f_2)dΠ̃_4dΠ̃_5 -f_S^*|ℳ̃_S^*45→ 12|^2 dΠ_4dΠ_5 f_4 f_5 (1/2!dΠ̃_1dΠ̃_2 ) ) , while forSis C_SS↔ S^*S^*S[S] =1/2E_Sg_S ∫( -f_S|ℳ̃_S2→ 345|^2 dΠ_2f_2(1/2!dΠ̃_3dΠ̃_4dΠ̃_5) +(1+f_S)|ℳ̃_12→ 34S|^2(1/2!dΠ_1dΠ_2 f_1f_2)(1/2!dΠ̃_3dΠ̃_4) -f_S|ℳ̃_34S→12|^2(1/2!dΠ_3dΠ_4f_3f_4)(1/2!dΠ̃_1dΠ̃_2) +(1+f_S)|ℳ̃_345→S2|^2(1/2!dΠ_3dΠ_4dΠ_5f_3f_4f_5)dΠ̃_2 ) . As we assume no CP violation,ℳ_S^*S↔SSS =ℳ_SS^*↔S^*S^*S^*andℳ_SS↔S^*S^*S = ℳ_S^*S^*↔SSS^*, we denoteℳ^(1)the former andℳ^(2)the latter. Since we assumef_S=f_S^*thenn_S = n_S^*. Notice also that the total abundance isn=n_S+n_S^*. Neglecting Bose-Enhancement terms (and settingg_S=1), ∫d^3p⃗_s^*/(2π)^3C[S^*] = 1/2∫ dΠ_1… dΠ_5 (1/3|ℳ̃^(1)|^2 + 1/2|ℳ̃^(2)|^2) (f_1 f_2-f_3 f_4 f_5) . Note that∫d^3p⃗_s/(2π)^3 C[S]retains the same expression. The second moment collision integral is ∫d^3p⃗_1/(2π)^3p⃗^ 2_1/E_1C[S^*] = -1/3∫ dΠ_1… dΠ_5|ℳ̃^(1)|^2p⃗_1^ 2/E_1(f_1f_2-f_3f_4f_5) +1/2∫ dΠ_1… dΠ_5 |ℳ̃^(1)|^2p⃗^ 2_3/E_3(f_1f_2-f_3f_4f_5) -1/2∫ dΠ_1… dΠ_5|ℳ̃^(2)|^2p⃗_1^ 2/E_1(f_1f_2-f_3f_4f_5) +3/4∫ dΠ_1… dΠ_5|ℳ̃^(2)|^2p⃗_3^ 2/E_3(f_1f_2-f_3f_4f_5) , which results in 3/2∫ dΠ_1… dΠ_5|ℳ̃|^2 p⃗^ 2_3/E_3(f_1f_2-f_3f_4f_5) - ∫ dΠ_1 … dΠ_5|ℳ̃|^2p⃗_1^ 2/E_1(f_1f_2-f_3f_4f_5) , with|ℳ̃|^2=1/3|ℳ̃^(1)|^2 + 1/2|ℳ̃^(2)|^2. Similar to Section <ref>, the thermal averages in Eq. (<ref>) remain the same, with the only difference being the replacement of the corresponding matrix element and Boltzmann factors. § FREEZE-IN COLLISION INTEGRALS §.§ Higgs decay The zeroth moment collision integral for Higgs decay is g_χ n_χ⟨C_h↔χχ|_⟩0 = ∫ dΠ_1dΠ_2 dΠ_3|ℳ̃_h↔χχ|^2 (f_1-f_2f_3) , whereχstands for eitherφorS. We have labeled the momenta ash_1↔χ_2χ_3. Sinceℳis constant in this case, we can pull it out of the integration. Neglecting Higgs production, g_χ n_χ⟨C_h→χχ|_⟩0 =c^2/16π^2∫ dΠ_1 f_1 ∫d^3p⃗_2/E_2d^3p⃗_3/E_3δ^(4)(p_1-p_2-p_3) =c^2m_h/16π^3√(1-4m_χ^2/m_h^2) T K_1(m_h/T) , whereK_1is the modified Bessel function of the second kind and order 1 andcstands for the coupling constanta in question. For interactions withφit isc = λ_hφ, while forSit isc=A_ϕsθ. Notice the difference of a factor 2 due to different conventions <cit.>, stemming from the definition of symmetry factors. On the other hand, the second moment is 3T_χ g_χ n_χ⟨C_h→χχ|_⟩2 = ∫ dΠ_1dΠ_2 dΠ_3 |ℳ̃_h→χχ|^2 p⃗_2^ 2/E_2f_1 . The termp⃗_2^ 2/E_2is not Lorentz invariant, after boosting to the CM frame, E_2 → E_2 coshη+p⃗_2^ zsinhη , which we cast in terms ofz=coshη. Note thatz=E_h/m_h. p⃗_2^ 2/E_2→ E_2 z + |p⃗_2|cosθ_2 √(z^2-1) - m_χ^2/E_2 z + |p⃗_2|cosθ_2 √(z^2-1) . Thus,3T_χg_χn_χ⟨C_h→χχ|_⟩2results in c^2/4(2π)^2∫ dΠ_1f_1∫ dcosθ_2 2π|p⃗_2|/E_2.(E_2 z - m_χ^2/E_2 z + |p⃗_2|cosθ_2 √(z^2-1))|_E_2=m_h/2 =c^2 m_h^2/32π^3√(1-4m_χ^2/m_h^2)(T K_2(m_h/T) -2m_χ^2/√(m_h^2-4m_χ^2)∫_1^∞ dz e^-m_h z/Tlog(m_h z + 2|p⃗_2|√(z^2-1)/m_h z - 2|p⃗_2|√(z^2-1))) . If there is a clear mass hierarchy,m_χ≪m_h, the logarithmic term can be safely ignored. §.§ Three-body Higgs decay The zeroth moment for the mediatorϕis g_ϕ n_ϕ⟨C_h↔ϕ S S^* |=⟩∫ dΠ_1dΠ_2 dΠ_3 dΠ_4|ℳ̃_h↔ϕ S S^* |^2 (f_1-f_2f_3f_4) . where the momenta are labeled ash_1↔ϕ_2S_3S^*_4. As in the previous case, we neglect Higgs production, g_ϕ n_ϕ⟨C_h→ϕ S S^* |=⟩λ_ϕ s^2θ^2/256π^5∫ dΠ_1 f_1 ∫d^3p⃗_2/E_2d^3p⃗_3/E_3d^3p⃗_4/E_4δ^(4)(p_1-p_2-p_3-p_4) . The final three state integral can be evaluated as in Section. 
<ref> withs_34=(p_3+p_4)^2 = (p_1-p_2)^2 = m_h^2+m_ϕ^2-2m_hE_2, the result is 8π^2 ∫_m_ϕ^E_2^max dE_2 |p⃗_2|^2√(1-4m_s^2/m_h^2+m_ϕ^2-2m_hE_2) , where E_2^max = √((q⃗_2^ max)^2+m_ϕ^2) and |q⃗_2^ max| = √(m_h^4+(m_ϕ^2-4m_s^2)^2-2m_h^2(m_ϕ^2+4m_s^2))/(2m_h) , with|p⃗_2^ max|the maximum magnitude of the momentum allowed forϕ. The previous integral can be evaluated analytically in the limitm_ϕ, m_s≪m_h. The result isπ^2 m_h^2. Thus, the zeroth moment is g_ϕ n_ϕ⟨C_h→ϕ SS^*| ⟩≃λ_ϕ s^2θ^2/1024π^5 m_h^4 ∫_1^∞ dz √(z^2-1)exp(-m_h z/T) =λ_ϕ s^2θ^2/1024π^5 m_h^3 T K_1(m_h/T) . Finally, the second moment is 3T_ϕ g_ϕ n_ϕ⟨C_h→ϕ S S^* |_⟩2 = ∫ dΠ_1dΠ_2 dΠ_3 dΠ_4|ℳ̃_h→ϕ S S^* |^2 p⃗_2^ 2/E_2 f_1 . Analogous to the double Higgs decay, we boostp⃗_2^ 2/E_2to the CM frame. Here, we already adopt the massless limit and neglect the logarithmic contribution. The result is 3T_ϕ g_ϕ n_ϕ⟨C_h→ϕ S S^* |_⟩2 ≃λ_ϕ s^2θ^2/3072π^5m_h^4 T K_2(m_h/T) . The result forSis analogous. §.§ Higgs annihilation The zeroth moment is g_φ n_φ⟨C_hh↔φφ|=⟩1/2!∫ dΠ_1 dΠ_2 dΠ_3 dΠ_4 |ℳ̃_hh↔φφ|^2(f_1f_2 -f_3 f_4) , where we have labeled the momenta ash_1 h_2↔φ_3φ_4. As in the decay case, we will neglect Higgs production. This integral can be cast in terms of the cross section <cit.>, g_φ n_φ⟨C_hh→φφ|=⟩T/32π^4∫_4m_h^2^∞ ds √(s)(s-4m_h^2)K_1(√(s)/T) σ_hh→φφ . On the other hand, the second moment is g_φ n_φ 3T_φ⟨C_hh→φφ|_⟩2 = 1/2!∫ dΠ_1… dΠ_4 |ℳ̃_hh↔φφ|^2p⃗_3^ 2/E_3 f_1 f_2 . We boost to the CM of mass frame as in eq. (<ref>). In this casez=E_+/√(s), whereE_+=E_1+E_2. The two final states integral results in ∫ dΠ_3 dΠ_4 |ℳ̃_hh→φφ|^2 (E_3 z+|p⃗_3|cosθ_3√(z^2-1)-m_φ^2/E_3 z + |p⃗_3|cosθ_3√(z^2-1)) =|ℳ_hh→ϕϕ|^2/8π|p⃗_3|/√(s)(√(s)z- m_φ^2/|p⃗_3|1/√(z^2-1)log(√(s) z + 2|p⃗_3|√(z^2-1)/√(s) z - 2|p⃗_3|√(z^2-1))) . §.§ Production from electroweak states The freeze-in contribution involving electroweak states is particularly relevant, as the propagator of gauge bosons remains approximately constant at high energies. We obtain the matrix elements using the package <cit.> and employ the high energy limit in the cBE. The matrix elements are quite lengthy, so here we show the cross sections at highs(andm_ϕ→0): σ_hh→ hϕ = λ_h^2θ^2/16π s , σ_Z h→ Zϕ = θ^2 s-2m_Z^2-6m_h^2+12m_Z^2log( s/m_Z^2)/144π v^4 , σ_W^±Z→ W^±ϕ = θ^2480m_W^4-72m_W^2m_Z^2 + 24m_Z^4/864 m_Z^2 π v^4 . σ_W^+W^-→ h ϕ = θ^2s-2m_W^2+8m_W^2log(s/m_W^2)/144π v^4 , σ_W^+W^-→ Z ϕ = θ^2m_W^2(8m_W^2+m_Z^2)/18m_Z^2π v^4 , σ_W^± h→ W^±ϕ =θ^212m_W^2log(s/m_W^2)-6m_h^2-2m_W^2+s/144π v^4 , σ_ZZ→ hϕ =θ^2s+3m_h^2 - 2m_Z^2 + 8m_Z^2log(s/m_Z^2)/144π v^4 . § DARK MATTER - MEDIATOR INTERACTIONS §.§ Production-annihilation The collision operator for theϕϕ↔SS^*reaction is C_ϕϕ↔ SS^*[S]=1/2E_Sg_S ∫(-f_S|ℳ̃_S S_2^*→ϕ_3 ϕ_4|^2 dΠ_2 f_2(1/2!dΠ_3 dΠ_4 (1+f_3)(1+f_4)) +(1+f_S)|ℳ̃_S S_2^*←ϕ_3 ϕ_4|^2 dΠ_2(1+f_2)(1/2!dΠ_3dΠ_4 f_3f_4) ) . After assuming Maxwell Boltzmann distributions for bothSandϕand neglecting Bose-Enhancement terms, the zeroth moment thermal average reads ⟨C_ϕϕ↔ SS^*[S]|=⟩ -(n_S/n_S^eq)^2λ_ϕ s^2/2!n_ST_S/512π^5∫_max(4m_ϕ^2,4m_s^2)^∞ ds √(s-4m_ϕ^2)√(s-4m_s^2)/√(s)K_1(√(s)/T_S) +(n_ϕ/n_ϕ^eq)^2λ_ϕ s^2/2!n_ST_ϕ/512π^5∫_max(4m_ϕ^2,4m_s^2)^∞ ds √(s-4m_ϕ^2)√(s-4m_s^2)/√(s)K_1(√(s)/T_ϕ) . In the relativistic regime the lightest particle can be considered essentially massless, In order to obtain the first non-relativistic correction, we expand√(s-4m^2)≈√(s)-2m^2/√(s), where in this casem=min(m_ϕ,m_s). 
Hence, we estimate the integral as ∫_4M^2^∞ ds √(s-4m_ϕ^2)√(s-4m_s^2)/√(s)K_1(√(s)/T_ds)≈ 4M^2 T_ds K_1(M/T_ds)^2 -e^-2M/T_dsm^2 T_ds^2/M , withM=max(m_ϕ,m_s)andT_dsstands for eitherT_ϕorT_S. Finally, we notice that⟨C_ϕϕ↔SS^*[ϕ]|=⟩ -2n_S/n_ϕ⟨C_ϕϕ↔SS^*[S]|$⟩. We use this approximation in the cBE, Eq. (<ref>). §.§ Scattering We now compute the collision operator for the scattering between both sectors. We adopt the parametrization of  <cit.>, C_scatter[S] = 1/128π^3 E_1 g_S |p⃗_1|∫_m_S^∞ dE_3∫_max(m_ϕ,E_3-E_1+m_ϕ)^∞ dE_2 Π(E_1,E_2,E_3)𝒫(f_1,…,f_4) , where we label the momenta as S_1 ϕ_2↔ S_3 ϕ_4. The integrand factor Π is defined as Π(E_1,E_2,E_3) = λ_ϕ s^2(k_+ - k_-) Θ(k_+-k_-) , with k_+=min(|p⃗_1| + |p⃗_3|, |p⃗_2| + |p⃗_4|), k_-=max( ||p⃗_1| - |p⃗_3||, ||p⃗_2 | - |p⃗_4||) and Θ the Heaviside step function. The functional 𝒫 incorporates the distribution functions, 𝒫(f_1,f_2,f_3,f_4)= f_3 f_4 - f_1 f_2 . We will now estimate the collision operator in the relativistic limit, hence we approximate E_j≈|p⃗_j|, for j any initial or final state. The second moment in the relativistic limit takes the form (setting g_S=1) ⟨C_scatter[S]|_⟩2 ≃1/n_S 3 T_S1/2π^2∫_0^∞ dE_1 E_1^2 C_scatter =λ_ϕ s^2/n_S 3 T_S 256π^5∫_0^∞ dE_1 E_1 ∫_0^∞ dE_3∫_max(0,E_3-E_1)^∞ dE_2 Π 𝒫 . Note that the integrand is Π 𝒫= (E_1 + E_2 - | E_1 - E_3| - | E_2 - E_3|)Θ(E_1 + E_2 - | E_1 - E_3| - | E_2 - E_3|)(f_3 f_4-f_1f_2) , and f^eq_3f^eq_4-f^eq_1f^eq_2 = (e^-E_3/T_Se^-(E_1-E_3)/T_ϕ-e^E_1/T_S)e^-E_2/T_ϕ . We can perform the integral over E_2 analytically, for this we split the integral in two cases: E_2<E_3 and E_2>E_3. The result is: ∫_max(0,E_3-E_1)^E_3 dE_2 (E_1 + 2E_2 - E_3 -| E_1-E_3| )e^-E_2/T_ϕ + ∫_E_3^∞ dE_2 (E_1 + E_3 -| E_1-E_3|)e^-E_2/T_ϕ =2e^-E_3/T_ϕT_ϕ^2(e^E_1+E_3-| E_1-E_3|/2T_ϕ-1 ) , while the integral over E_3 is 2T_ϕ^2∫_0^∞ dE_3 e^-E_3/T_ϕ(e^E_1+E_3-| E_1-E_3|/2T_ϕ-1 )(e^-E_3/T_Se^-(E_1-E_3)/T_ϕ-e^E_1/T_S) =2T_ϕ^2e^-E_1(1/T_ϕ+1/T_S)(e^E_1/T_ST_S^2-e^E_1/T_ϕ(E_1(T_ϕ-T_S)+T_S^2 ))/T_ϕ-T_S Integrating this result with ∫ d E_1 E_1 yields ⟨C_scatter[S]|_⟩2≃(n_S/n_S^eq)(n_ϕ/n_ϕ^eq) λ_ϕ s^2/n_S 3T_S 128π^5T_ϕ^2T_S^2(T_ϕ-T_S)e^-m_S/T_Se^-m_ϕ/T_ϕ . We introduce the exponential factor to account for non-relativistic effects. To incorporate scattering in x_ϕ', we observe that ⟨C_scatter[ϕ]|=⟩ -3T_S n_S/3T_ϕ n_ϕ⟨C_scatter[S]|$⟩. § VACUUM STABILITY The mediator-Higgs interactions are encoded in the following potential V(H,ϕ) = μ_h^2 H^† H + λ_h(H^† H)^2 +(B_hϕϕ + λ_hϕϕ^2)H^† H +λ_1ϕ +1/2μ_ϕ^2ϕ^2 + g_ϕ/3!ϕ^3 + λ_ϕ/4!ϕ^4 , whereHis theSU(2)_LHiggs doublet of the SM. Working with the unitarity gauge,H = 1/√(2)(0,h)^T; after EWPT, the Higgs boson acquires a VEV,h→h+v, withv≃246 GeV. Note that we include the linear termλ_1ϕ. In models with dark scalars, this term is usually neglected byλ_1=0. The general argument is that we can shift the fieldϕ→ϕ+ϕ_0, rearrange terms and demand the resulting factor in the linear term to be zero <cit.>. This is the standard procedure for finding the minima of the potential, as the minimization condition is equivalent to expanding the potential as ϕ→ϕ + w and demanding the linear term to vanish. For now, let us consider λ_1 ≠ 0. 
The extrema are given by the solutions of .∂ V/∂ϕ|_h=v,ϕ = w = g_ϕ/2w^2 + λ_ϕ/6w^3 + μ_ϕ^2 w + v^2wλ_hϕ + 1/2v^2 B_hϕ+λ_1=0 and .∂ V/∂ h|_h=v,ϕ = w = λ_hv^3 + vμ_h^2 + vwB_hϕ + vw^2λ_hϕ=0 , substituting λ_h = -μ_h^2-w B_hϕ-w^2λ_hϕ/v^2 and λ_1 = -(g_ϕ/2w^2 + λ_ϕ/6w^3 + μ_ϕ^2 w + v^2wλ_hϕ + 1/2v^2 B_hϕ) , we ensure that the potential at⟨ϕ|=⟩wand⟨h|=⟩ v ≅246 GeVis a critical point. These substitutions also ensure that the linear term (tadpole) vanishes. Ifw=0is the solution to (<ref>), thenv^2 B_hϕ/2 + λ_1 = 0. Alternatively, starting with(v^2B_hϕ/2 + λ_1)ϕ= 0implies that one solution forwisw=0. For simplicity, we assume that a solution isw=0, which meansv^2 B_hϕ/2 + λ_1= 0(this is the redefinition adopted in the literature <cit.>). The first equation in (<ref>) transforms to g_ϕ/2w^2 + λ_ϕ/6w^3 + μ_ϕ^2 w + v^2wλ_hϕ=0 . The mass matrix is given by the second derivatives of the potential evaluated at(v,w), M^2 = [ -2(μ_h^2+w(B_hϕ+wλ_hϕ)) v(B_hϕ+2wλ_hϕ); v(B_hϕ+2wλ_hϕ) g_ϕ w+w^2λ_ϕ/2+μ_ϕ^2 + v^2λ_hϕ ] . The potential at this critical point is a local minimum ifM^2>0. We diagonalize via the rotation matrix 𝒪 = [ cosθ sinθ; - sinθ cosθ ] , such that𝒪^⊺M^2𝒪 = diag(m_h^2,m_ϕ^2). The off-diagonal term yields to sin 2θ =2v(B_hϕ +2wλ_hϕ)/m_ϕ^2-m_h^2 . Note thatw ≠0yields the same physics asw = 0due to the singlet nature ofϕ, as no symmetry is broken whenw ≠0. However, it is crucial to consider the additional solutions from (<ref>), which may not necessarily correspond to minima, w_± = -3g_ϕ±√(3)√(3g_ϕ^2 - 8λ_ϕμ_ϕ^2 - 8v^2λ_ϕλ_hϕ)/2λ_ϕ . If we impose the constraint 0>3g_ϕ^2 - 8λ_ϕμ_ϕ^2 - 8v^2λ_ϕλ_hϕ , then the solutionsw_±do not exist. The last expression is equivalent to 3g_ϕ^2 - 8λ_ϕμ_ϕ^2 - 8v^2λ_ϕλ_hϕ= 3g_ϕ^2-4(m_ϕ^2+m_h^2)λ_ϕ + 4(m_h^2-m_ϕ^2)λ_ϕcos 2θ . Since we are interested in the limitθ≪1, the last condition can be expressed as 3g_ϕ^2-8m_ϕ^2λ_ϕ<03g_ϕ^2/8m_ϕ^2<λ_ϕ . We also assume thatλ_ϕandg_ϕare very small so thatϕdoes not thermalize with itself. Under these assumptions, the minimization of the potential (<ref>) has as solutions⟨S|=⟩0(providedk<8/3) andw = 0. JHEP
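For concreteness, the stability analysis above can be checked numerically. The sketch below (with illustrative, non-benchmark parameter values chosen purely as assumptions for demonstration) builds the mass matrix at the w = 0 extremum, extracts the mass eigenvalues and the mixing angle from the expression for sin 2θ, and verifies the bound 3g_ϕ^2/(8m_ϕ^2) < λ_ϕ that removes the spurious extrema w_±.

```python
# Numerical sketch (not from the paper): diagonalize the scalar mass matrix at
# the w = 0 extremum and check the vacuum-stability bound derived above.
# All parameter values below are illustrative assumptions, not benchmark points.
import numpy as np

v = 246.0                    # Higgs VEV [GeV]
mu_h2 = -(125.0**2) / 2.0    # chosen so that m_h ~ 125 GeV at tree level
mu_phi2 = 1.0**2             # GeV^2, sets the mediator mass scale
lam_hphi = 1e-6              # quartic portal coupling (assumed tiny)
B_hphi = 1e-3                # trilinear portal coupling [GeV]
g_phi = 1e-4                 # cubic self-coupling [GeV]
lam_phi = 1e-3               # quartic self-coupling
w = 0.0                      # singlet VEV (the solution adopted in the text)

# Mass matrix evaluated at (h, phi) = (v, w), cf. the expression above.
M2 = np.array([
    [-2.0 * (mu_h2 + w * (B_hphi + w * lam_hphi)),
     v * (B_hphi + 2 * w * lam_hphi)],
    [v * (B_hphi + 2 * w * lam_hphi),
     g_phi * w + 0.5 * lam_phi * w**2 + mu_phi2 + v**2 * lam_hphi],
])

eigvals, _ = np.linalg.eigh(M2)
m_phi2, m_h2 = sorted(eigvals)   # assume the mediator is the lighter state
sin2theta = 2 * v * (B_hphi + 2 * w * lam_hphi) / (m_phi2 - m_h2)

print(f"m_h ~ {np.sqrt(m_h2):.2f} GeV,  m_phi ~ {np.sqrt(m_phi2):.3f} GeV")
print(f"sin(2 theta) ~ {sin2theta:.2e}")
print("extra extrema w_pm absent:", 3 * g_phi**2 / (8 * m_phi2) < lam_phi)
```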
http://arxiv.org/abs/2407.12709v1
20240717163138
MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models
[ "Leyang Shen", "Gongwei Chen", "Rui Shao", "Weili Guan", "Liqiang Nie" ]
cs.CV
[ "cs.CV" ]
Stein’s method and general clocks: diffusion approximation of the G/G/1 workload [ ================================================================================ [1]Equal contribution [2]Corresponding authors § ABSTRACT Multimodal large language models (MLLMs) have demonstrated impressive capabilities across various vision-language tasks. However, a generalist MLLM typically underperforms compared with a specialist MLLM on most VL tasks, which can be attributed to task interference. In this paper, we propose a mixture of multimodal experts (MoME) to mitigate task interference and obtain a generalist MLLM. Our MoME is composed of two key components, a mixture of vision experts (MoVE) and a mixture of language experts (MoLE). MoVE can adaptively modulate the features transformed from various vision encoders, and has a strong compatibility in transformation architecture. MoLE incorporates sparsely gated experts into LLMs to achieve painless improvements with roughly unchanged inference costs. In response to task interference, our MoME specializes in both vision and language modality to adapt to task discrepancies. Extensive experiments show that MoME significantly improves the performance of generalist MLLMs across various VL tasks. The source code is released at https://github.com/JiuTian-VL/MoMEhttps://github.com/JiuTian-VL/MoME. § INTRODUCTION Recently, Multimodal Large Language Models (MLLMs) <cit.> have witnessed remarkable progress. With the help of Large Language Models (LLMs) <cit.> and Modality Encoders <cit.>, MLLMs demonstrate excellent multimodal comprehensive abilities, especially in solving a wide range of vision-language (VL) tasks <cit.>, such as Image Cpation, Visual Question Answering, Referring Expression Comprehension, and Optical Character Recognition. However, it is increasingly acknowledged that a generalist MLLM tends to have lower performance compared to a specialist MLLM on most VL tasks <cit.>, as depicted in Fig. <ref> (a). This phenomenon can be attributed to task interference, a fundamental and crucial issue in multi-task learning <cit.>. There are some preliminary attempts <cit.> to address this issue in instruction-tuning MLLMs. The most promising direction <cit.> among these attempts is to use a mixture of experts (MoE) in MLLMs, aiming for each expert to specialize in several tasks. However, these works only investigate MoE in LLMs and primarily concentrate on textual differences between tasks, overlooking the equally important visual information. As shown in Fig. <ref> (b-c), we analyze the distribution of various VL tasks across both vision and language modalities. It is evident that images from different groups of VL tasks exhibit distinct feature distributions, as do texts. Inspired by this observation, we argue that handling task interference needs to comprehensively exploit task differences in both vision and language modalities. To mitigate task interference, we devise a Mixture of Multimodal Experts (MoME) and integrate it into MLLMs. Our MoME consists of a Mixture of Vision Experts (MoVE) for adaptively aggregating features from various vision encoders <cit.>, and a Mixture of Language Experts (MoLE) for leveraging multiple sparsely-activated parameter-efficient adapters. To avoid feature mismatch in different vision encoders, we propose an adaptive deformable transformation (ADT) module in MoVE and use it to transfer features of vision encoders into a unified-length sequence of feature vectors. 
Our ADT module combines adaptive average pooling and deformable attention <cit.> to obtain compressed and self-enhanced visual features. After feature transformation, our MoVE uses an instance-level soft router to modulate and aggregate transformed visual features according to the instructions. Our MoLE introduces several parameter-efficient adapters <cit.> as experts and integrates them by using an instance-level sparsely-activated router. Due to the utilization of adapters, MoLE can be integrated into each feed-forward network layer of an LLM and only incurs a few computational costs with consistent performance gains. To comprehensively evaluate the multimodal understanding ability of MoME, we collect an amount of VL tasks to form an instruction-tuning dataset and split them into four groups. Extensive experiments show that both MoVE and MoLE can consistently improve performance across all groups of tasks. Notably, MoVE can achieve an average performance gain of 12.87 points across all VL tasks, and improve by over 20 points on the "Document" group. Furthermore, we visualize the expert load distributions of MoVE and MoLE across various tasks. The experts in both MoVE and MoLE exhibit a relatively clear specialization in different groups of VL tasks. For example, the "Document" group of tasks has a strong preference for the "Pix2Struct" vision expert. The expert specialization is strong evidence to demonstrate that our MoME dynamically selects experts to adapt to task differences and mitigate task interference. Our main contributions are summarized as follows: * We propose MoME by simultaneously designing mixtures of experts tailored for both the vision encoder and the LLM, resulting in generalist MLLMs with the ability to combat task interference. * Through statistical analysis, we demonstrate that our MoME specializes in both vision and language modality, effectively adapting to the varying requirements of different VL tasks. * Extensive experiments show that our MoME possesses excellent multimodal understanding abilities and achieves superior performances across various groups of VL tasks. § RELATED WORK §.§ Vision Encoders in MLLMs Vision encoders play important roles in the perception ability of recent MLLMs by encoding visual information into visual tokens, enabling LLMs to understand information on visual modalities. Most Multimodal Large Language Models (MLLM) use CLIP-ViT <cit.> as their vision encoder to provide the basic image-level perception of an image for LLMs, which is useful for tasks such as image caption and general VQA. However, Tong et al. <cit.> have found that CLIP-ViT struggles to encode some visual patterns, severely limiting perception and preventing MLLMs from becoming generalist. To alleviate this, recent works <cit.> integrated different vision encoders <cit.> into a single MLLM, which enhanced the perception of MLLM. However, none have effectively mitigated the interference among different visual features, resulting in sub-optimal utilization of each encoder. Differently, we explore the task differences and propose MoVE to perform self-enhanced transformation and adaptive routing among features from different encoders, achieving consistently high performances across vast VL tasks. §.§ Mixture of Experts in Large Models Mixture of Experts (MoE) <cit.> is a type of structure with multiple expert networks working together, where each expert is responsible for a part of the knowledge space. 
By only activating specific experts adaptively during inference, MoE can reduce resource consumption and enhance reasoning speed, which is useful for LLM. Existing MoE LLM <cit.> usually replace the feed-forward network (FFN) with the MoE layer. Each MoE layer consists of a router and multiple expert networks and each token is assigned to several expert networks by the router. MoE LLMs tend to outperform dense models with the same inference activate parameters. In addition to its effectiveness in foundation models, some works <cit.> have utilized MoE in the visual instruction tuning <cit.> phrase of MLLMs to mitigate task interference, aiming for each expert to specialize in several tasks. Some of them replicate the original FFN in LLMs, while others insert multiple low-rank adaptation <cit.> modules in parallel with the original FFN, converting LLM into multi-expert architecture. However, they primarily concentrate on LLM but overlook task interference within the visual perceiving process of MLLMs. In contrast, we comprehensively exploit task interference in both vision and language modalities and propose MoME to mitigate them with specialized vision and language experts. § METHODS To design a generalist MLLM with powerful multimodal understanding capabilities, we begin by analyzing task interference, a common challenge in current MLLMs (Section <ref>). Then, we propose our Mixture of Multimodal Experts and introduce its main components in Section <ref>. §.§ Analysis of Task Interference Task interference is a fundamental and crucial issue in multi-task learning. As current generalist MLLMs are trained with a number of tasks, they naturally suffer from this issue especially when the number of tasks increases. To investigate this issue in a scenario of MLLMs, we will analyze the feature distribution and the performance change of MLLMs on a tailored instruction-tuning dataset. To demonstrate the external manifestations of task interference, we first construct a mixed instruction-tuning dataset with various VL tasks and split all tasks into four groups. The performance comparisons between MLLMs trained on the mixed dataset and each task group are illustrated in Fig. <ref> (a). It is shown that a generalist model trained on the mixed dataset underperforms specialist models on three task groups. We conclude that the generalist model suffers from a notable task interference problem. In the era of Large models, there are some attempts to handle task interference from various perspectives, such as instruction, architecture, and dataset configuration. Here, we focus on the mainstream direction, architecture design, and try to explore a robust architecture to combat task interference. In terms of architecture, most existing works prefer a mixture of experts in LLMs. The experts can be feed-forward networks or parameter-efficient modules. However, we argue that this paradigm of architectural design is sub-optimal as visual and textual information should be given equal importance. In Fig. <ref> (b-c), we investigate the feature distribution of various task groups on vision and language modalities, respectively. All samples of each task group are fed into vision and text encoders to produce modality features. These features are transformed by using PCA and then visualized to show the distribution. We observe that the feature distributions of different task groups exhibit significant differences in both the vision and language modalities. 
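As a concrete illustration of this analysis, the following sketch projects per-group encoder features to two dimensions with PCA and overlays the task groups; the group names match those used in the paper, while the feature arrays stand in for outputs of any frozen vision or text encoder. This is a sketch of the visualization procedure, not the exact code behind the figure.

```python
# Illustrative sketch (not the authors' released code) of the task-interference
# analysis: project per-sample features of each task group to 2D with PCA and
# overlay the groups.  `features_by_group` is a hypothetical dict mapping a
# group name ("General", "REC", "REG", "Document") to an (N_i, D) array.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_group_distributions(features_by_group, title):
    # Fit a single PCA on all samples so the groups share one 2D basis.
    all_feats = np.concatenate(list(features_by_group.values()), axis=0)
    pca = PCA(n_components=2).fit(all_feats)
    for name, feats in features_by_group.items():
        xy = pca.transform(feats)
        plt.scatter(xy[:, 0], xy[:, 1], s=4, alpha=0.4, label=name)
    plt.title(title)
    plt.legend()
    plt.show()

# Usage with random stand-in features (replace with real encoder outputs):
rng = np.random.default_rng(0)
demo = {g: rng.normal(loc=i, size=(200, 512))
        for i, g in enumerate(["General", "REC", "REG", "Document"])}
plot_group_distributions(demo, "Vision-feature distribution per task group")
```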
As mentioned above, the textual differences can be addressed by multiple experts in LLMs, but visual differences lack effective handling. In the following section, we will introduce our Mixture of Multimodal Experts, which simultaneously handles visual and textural differences between tasks to mitigate task interference. §.§ Architecture As illustrated in Figure <ref>, we present our novel MoME architecture that dynamically mixes vision and language experts. The proposed architecture aims to adaptively aggregate visual information (<ref>) and select LLM pathways (<ref>) based on the given instructions. §.§.§ Mixture of Vision Experts r0.35 < g r a p h i c s > Comparison of MLLMs with different vision encoders. Before introducing our MoVE architecture, we will present a pilot study that inspires us to design MoVE. Specifically, we design three MLLMs (consists of vision encoder, projection, and LLM), which share the same architecture except vision encoders. These three MLLMs use three distinct vision encoders, CLIP, DINOv2, Pix2Struct, respectively. After training them using the same data and strategies, we found that MLLMs with different vision encoders excelled in specific tasks, as presented in Fig. <ref>. the MLLM with CLIP-ViT is good at general tasks and regional caption tasks. the MLLM with DINOv2 excels in REC, a visual grounding task. the MLLM with Pix2Struct is outstanding in text-intensive document understanding tasks. However, each model had weaknesses and none could achieve uniformly excellent performance across all tasks. Inspired by the above study, we propose MoVE to combine various off-the-shell vision encoders and effectively align and aggregate their visual features. The key components in MoVE are an adaptive deformable transformation module and an instruction-based soft router. The former aims to align the features from various vision encoders, and the latter seeks to modulate and aggregate transformed features based on the instructions. Adaptive Deformable Transformation. Given the diversity in architecture and training methods of different vision encoders, the issue of mismatched visual representations in terms of sequence length and feature space becomes significant. While current researches <cit.> often focus on models like CLIP-ViT <cit.> and DINO <cit.>, which share similar data processing pipeline, the mismatch problems are less important and frequently overlooked. However, the aspect ratios of Pix2Struct <cit.> feature shapes vary depending on the input image. Simply combining them through padding and addition will lead to the misalignment among visual tokens and the damage of visual information. To tackle this challenge, we innovatively design an adaptive deformable transformation (ADT) module that effectively transforms features from diverse vision encoders f into a unified-length feature sequence f̂, ensuring more coherent and informative visual representations for subsequent aggregations. As illustrated in Fig. <ref> (a), the ADT module consists of a 2D adaptive pooling layer and M deformable attention layers (𝔻). It attempts to automatically select the corresponding information from original features f to refine the coarse features obtained by 2D adaptive pooling 𝒟(f), f̂ = 𝔻^M(𝒟(f), f) Inspired by Deformable DERT <cit.>, we choose deformable cross-attention for its 2D sampling mechanism, which is ideal for interactions among visual feature maps of varying shapes. Meanwhile, it converges fast and has computational and memory efficiency. 
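Before the layer-by-layer formulation below, the following interface-level sketch illustrates the overall ADT computation f̂ = 𝔻^M(𝒟(f), f): adaptive 2D pooling produces a fixed-length set of coarse queries, which a small stack of layers then refines against the original feature map. Plain multi-head cross-attention is used here purely as a stand-in for the deformable sampling detailed next, and the dimensions, query count, and layer count are assumptions.

```python
# Interface-level sketch of an ADT-style resampler (a simplification under
# stated assumptions, not the authors' implementation).  Each refinement layer
# mirrors the structure described below: self-attention, cross-attention to the
# original feature map, and a feed-forward block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADTSketch(nn.Module):
    def __init__(self, dim, num_queries=32 * 32, num_layers=6, num_heads=8):
        super().__init__()
        self.out_hw = int(num_queries ** 0.5)      # target pooled grid size
        self.layers = nn.ModuleList(nn.ModuleDict({
            "self_attn": nn.MultiheadAttention(dim, num_heads, batch_first=True),
            "cross_attn": nn.MultiheadAttention(dim, num_heads, batch_first=True),
            "ffn": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)),
        }) for _ in range(num_layers))

    def forward(self, feat_map):                   # feat_map: (B, H, W, C); H, W may vary
        B, H, W, C = feat_map.shape
        src = feat_map.flatten(1, 2)               # (B, H*W, C) keys/values
        q = F.adaptive_avg_pool2d(                 # coarse, fixed-length queries
            feat_map.permute(0, 3, 1, 2), self.out_hw).flatten(2).transpose(1, 2)
        for layer in self.layers:
            q = q + layer["self_attn"](q, q, q)[0]
            q = q + layer["cross_attn"](q, src, src)[0]   # refine from the original map
            q = q + layer["ffn"](q)
        return q                                   # (B, num_queries, C)
```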
Specifically, each deformable layer consists of a self-attention layer, a deformable cross-attention, and a feed-forward layer. The crucial select operation occurs in the deformable cross-attention layer, which takes the output of the upper self-attention as query q∈ℝ^L× C, samples the original feature map f∈ℝ^H× W× C, and outputs result 𝒪∈ℝ^L× C. In this module, the first step is to generate attention weights w∈ℝ^L× N_h × N_p and L sets of 2D sampling points, denoted as p, from the input queries q. For each set, there are N_h × N_p points, where N_h signifies the number of attention heads and N_p represents the number of points sampled by each head. The sample points and attention weights generation process is as follows, p_ijk = (𝒫_p(q_i)_jk + R_i), i∈{1,…, L}, j∈{1,…, N_h}, k∈{1,…, N_p} w = 𝒫_w(q) where 𝒫 denotes the linear projector and R∈ℝ^L× 2 is a learnable vector called reference point. Then, the corresponding information is sampled by attention heads from the value feature maps 𝒫(f)_j projected and split on the last dimension for each head. The sampling mechanism of each attention head is as follows, o_ij = ∑_k=1^N_p w_ijk·Sample(𝒫_v(f)_j, p_ijk), i∈{1,…, L}, j∈{1,…, N_h} The results of multiple attention heads are concatenated and projected to obtain the output feature 𝒪, which is the input of subsequent feed-forward layer. 𝒪 = 𝒫_o(o) Instance-level Soft Router. Since images from different groups of VL tasks exhibit distinct feature distributions, there is no one-fits-all strategy to aggregate them. To address this, we propose to generate a customized fusion ratio for each sample based on its instruction. Specifically, we devise an instance-level soft router G_s, as depicted in Fig <ref> (b). The router generates corresponding ratios for the visual representations from different vision encoders, followed by a weighted sum of these visual representation features f̂, which can be formulated as, G_s(I) = SoftMax(W_2 (σ (W_1 I + b_1)) + b_2) ℱ = ∑_i=1^N G_s(I)_i ×f̂_i where N is the number of vision experts and σ denotes GeLU <cit.>. The router is a multilayer perceptron (MLP) followed by a SoftMax layer to ensure that the sum of the weights equals 1. The instruction is first passed through Sentence-BERT <cit.> to extract sentence embedding I, which is then fed into the router. §.§.§ Mixture of Language Experts We introduce MoE architecture in LLM, aiming for each expert to specialize in several tasks. However, conventional MoE methods typically incorporate multiple parallel FFN layers in one block, significantly increasing training costs and memory consumption. To meet the multi-task learning needs in the instruction tuning stage, we incorporate several parameter-efficient adapters <cit.> parallel to FFN. Each adapter enhances the original FFN with task-specific understanding capabilities, thus effectively enhancing the multitasking abilities with a few computation costs. We insert an MoE block parallel to each FFN layer in LLM. As depicted in Fig. <ref> (c), The MoE block consists of several low-rank adapters and an instance-level sparsely-activated router G_h. The adapter is designed as a bottleneck structure for computational efficiency, featuring a down-projection layer 𝒫_down, a ReLU layer σ, and an up-projection layer 𝒫_up. Moreover, a learnable scalar is multiplied in the output to weigh the importance adaptively. 
The whole low-rank adaptation process is as follows, y = ·𝒫_up (σ( 𝒫_down(x))) The router is an MLP network followed by a top-1 gate function to ensure the output is a one-hot vector G(I)∈{0,1}^K. The router generates the selection based on the sentence embeddings I used in MoVE. Each sample is routed to the corresponding adapter to calculate the adapted value o, which can be further added to the output of the original FFN. The whole process of the MoLE block is as follows, o = ∑_k=1^K G_h(I)_k × y_k where K denotes the number of experts. § EXPERIMENTAL RESULTS AND ANALYSIS We collect 24 datasets and categorize them into four groups for instruction-tuning and evaluation, the details of which can be found in Appendix <ref>. §.§ Analysis on MoVE We conduct experiments on MLLMs with different vision encoders under the same training strategy to verify the effectiveness of the two key components in MoVE: ADT and router. Experimental results are summarized in Table <ref>. We take the multitasking performances of MLLMs with a single vision encoder as our baselines. The adaptive average pooling is applied to the visual representation from DINO and Pix2Struct, ensuring that the lengths of visual tokens fed into LLM are consistent. Experiments #1-3 show that MLLMs using CLIP, DINO, and Pix2Struct as vision encoders exhibit distinct strengths in General, REC, and Document tasks, respectively. Moreover, in the REG task, which requires both captioning and visual grounding abilities, the performance of MLLMs with CLIP and DINO significantly surpasses that of those using Pix2Struct. We can conclude that a single vision encoder cannot meet the visual perception needs of all tasks. To make different vision encoders work together in perception, we first aggregate different visual representations by addition (#4). The average performance does improve compared with models with single vision encoders (#1-3). However, such a straightforward method does not bring much improvement, and even some sub-items have declined. This is due to the mismatch and interference among different visual features, which severely compromise their respective visual information. To make visual features aligned well and reduce information loss, we first replace the pooling with proposed ADT network. As demonstrated in Experiment #5, ADT consistently enhances performance across four task groups, yielding an average improvement of 4 points. Based on the transformed visual features, we further replace the naive addition mixture process with the router to modulate and aggregate them adaptively according to instructions (#6). This achieves an impressive performance that significantly outperforms the addition method and the methods with a single vision encoder across all sub-tasks. These experimental results prove that ADT and router are crucial and effective components of MoVE to mitigate interference and optimally utilize the capabilities of each visual expert. Visualization of Routing Results. To provide a deeper understanding of the MoVE's adaptive routing process, we visualize the routing outcomes across various tasks. Since the feature scale varies across different vision encoders, we cannot directly consider the output of the router as expert importance. Instead, we integrate the output features from vision encoders, taking the magnitude of the weighted feature vector as the importance metric. Because the final aggregated result will lean towards the side with the larger magnitude. As displayed in Fig. 
<ref>, for tasks that involve text recognition and graph understanding, such as ChartQA and DocVQA, the features from the Pix2Struct encoder are dominant. In contrast, the model utilizes more CLIP features in image-level VQA tasks like COCOCapion <cit.>, NoCaps <cit.>, and Flickr30K <cit.>. Notably, for TextCaps <cit.>, a task that requires two kinds of visual information, the router shows a preference for balancing the CLIP and Pix2Struct branches. For tasks that focus on visual grounding ability like REC <cit.>, and REG <cit.>, the model uses more DINO features to perceive region-level visual information. These observations indicate that MoVE can adaptively modulate the features transformed from various vision encoders. §.§ Analysis on MoLE We conduct ablation experiments to explore the best practice of MoLE, which are summarized in Table <ref>. We take the model with a single adapter in each FFN (#1) as baseline, which suffers severe task interference. Then, we replace the plain adapter with MoLE. As summarized in Table <ref> #2-4, we test three kinds of MoLE routers with different inputs: token hidden states (MoLE-T), sentence embedding (MoLE-I), and a mixture of both (MoLE-IT). The token hidden states and sentence embedding are concatenated on the last dimension as the input for the MoLE-IT router. The experimental results indicate that all MoLE variations outperform the plain adapter, with the sentence-embedding-based router achieving the highest average performance. We also explore two strategies for expert load balance in MoLE, which are tabulated in Table <ref> #5-6. MoLE-I+GS introduces variability to the router by adding Gumbel-distributed noise to the logits <cit.>, aiming to prevent certain experts from never being selected. MoLE-I+LB uses auxiliary loss <cit.> to impose load balancing. However, we find that these methods are not suitable for our MoLE as they perform worse than the plain MoLE-I configuration. Through comprehensive comparative experiments, we choose MoLE-I as the default configuration for MoME. By training based on the MoVE model, MoME further enhances the multitasking capability of MLLM, as shown in Experiments #7 and #8. Visualization of routing results. To provide a deeper understanding of our MoLE module, we sample several instances from each dataset and calculate their average routing outcomes. In Fig. <ref>, we present the expert load conditions of four selected datasets, each representing a kind of VL task. At the beginning, the router assigns equal probabilities to each expert as it is randomly initialized. However, after training, the routing distributions of MoE blocks vary significantly across tasks, as shown in Fig. <ref>. This means the language experts in our MoLE module gradually specialize in distinct task domains during training. In the inference process, the router adaptively chooses the optimal expert to achieve strong multitasking capabilities. Meanwhile, we can observe strong resemblances in the expert routing results of similar tasks, which are further analyzed in Appendix <ref>. §.§ Comparison with state-of-the-art MLLMs We summarize the evaluation results of MoME and other MLLMs with similar resource consumption on popular VL tasks in Table <ref>. The results show that our MoME method achieves promising outcomes on most datasets compared with other generalist and MoE MLLMs, especially on TextCaps, Flicker30K, and IconQA. 
§.§ Qualitative Analisys We present several visualized examples of our MoME model from distinct dataset groups along with their MoVE and MoLE routing results in Fig. <ref>. In the REC case, DINOv2 accounts for nearly 50% among vision experts, providing fine-grained visual information. So the model can recognize the blue car in the background and provide its precise bounding box. The Pix2Struct branch accounts for over 70% in the Document case for structured text understanding. The REG case utilizes information from both the CLIP-ViT and DINOv2 to locate objects and generate captions. In contrast, the conventional caption task in the General group only requires an image-level perception, so the CLIP-ViT is dominant. Remarkably, we can observe significant differences among the MoLE routing results. These examples show how MoME selects vision and language experts to adapt to various tasks. § CONCLUSION This work investigates task interference when training a generalist MLLM across various VL tasks. To mitigate it, we propose MoME, which specializes in both vision and language modality to adapt to task differences. Extensive experiments validate the efficiency of MoME as a generalist MLLM. However, due to the limitations of computing resources, We have not yet expanded our approach to more data and more modalities for experiments. Nonetheless, we believe that the proposed MoME is versatile and can be applied to constructing generalist models in a wider range of multimodal domains. We hope MoME will inspire new research in general-purpose multimodal AI and its applications. plain § IMPLEMENTATION DETAILS §.§ Architecture Details The MoME includes three off-the-shelf vision encoders: CLIP-ViT, DINOv2, and Pix2Struct. For CLIP-VIT, we use ViT-G/14 from EVA <cit.> without the last layer. The DINOv2 <cit.> is the official pre-trained version of ViT-L/14 without registers. For Pix2Struct, we use the vision branch of pre-trained Pix2Struct-base model <cit.>. The ADT consists of a 2D adaptive average pooling layer and a six-layer deformable attention network. The hidden size of ADT is the same as its corresponding vision encoder. We use 8 attention heads and each head samples 4 points in the deformable cross-attention process. In MoLE, the hidden dimension of each adapter is set to 64. We use Vicuna-v1.5(7B) <cit.> as our pre-trained LLM. §.§ Training Details Our training process comprises two stages. In stage 1, we train the model with MoVE and a single adapter in LLM for 30k steps with a batch size of 64. The learning rate is warmed up linearly from 0 to 5e-4 across 1000 steps and then reduces to a minimum of 0 using cosine decay. The AdamW <cit.> optimizer is employed with β_1 = 0.9, β_2= 0.999, and a weight decay of 0.05. In stage 2, we load the checkpoint of stage 1 and replicate the weights of adapters to initialize MoLE, while keeping everything else unchanged. We use a single node with A800 80GB × 8 for all experiments, the entire training is done in one day including stage 1 and stage 2. § DETAILS OF MULTITASKING BENCHMARK We collected 24 datasets and categorized them into four groups for instruction-tuning and evaluation, as shown in Fig. <ref>. For most vision-language (VL) tasks, we used the datasets in both the training and evaluation phases. However, we only use NoCaps for evaluation because it only has an evaluation set, and we exclude the VSR training data due to its simplicity. 
During the training process, we mix the datasets within the same group into one large dataset, so the probability of each sub-dataset being sampled equals their size as a proportion of the total. However, we ensure the sample ratio of each group dataset is the same. For evaluation, we compute the overall score for each category by averaging its subitem evaluation results. We follow InstructBLIP <cit.>, Shikra <cit.>, and UReader <cit.> for our evaluation metrics, which are tabulated in Table <ref>. Notably, the model only takes images as visual information without introducing OCR tokens like InstructBLIP. § ADDITIONAL VISUALIZATION AND ANALYSIS We present the routing distributions across different datasets of all MoLE routers in Fig. <ref>. In general, the routing results differ significantly among different task groups, while the routing preferences are similar within the same task group. From the perspective of router preferences, the datasets can be classified as text-rich (ChartQA - TextCaps), caption (COCOCap - Flickr30K), VQA (IconQA - GQA), REC, and REG. It can prove that the MoLE captures the modularity of the tasks and mitigates task interferences through differential expert routing. § SOCIETAL IMPACTS MoME utilizes pre-trained large language models (LLMs), which inherently carry limitations from LLMs. These limitations include the potential for generating inaccurate information or biased outputs. To address these issues, we enhance the model's visual perception ability with MoVA and conduct vision-language instruction tuning on high-quality datasets. Despite these improvements, we advise caution and recommend thorough safety and fairness assessments before deploying MoME models in any downstream applications.
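A minimal sketch of the group-balanced sampling scheme described in the benchmark details above is given below; the group and dataset contents are hypothetical, and this is one plausible reading (uniform over groups, size-proportional within a group) rather than the released data loader.

```python
# Sketch of group-balanced sampling: draw a task group uniformly, then draw a
# sample from that group's pooled dataset, so within-group probability is
# proportional to dataset size while all groups contribute equally overall.
import random

def make_sampler(groups):
    """groups: dict mapping group name -> list of datasets (each a list of samples)."""
    pooled = {g: [ex for ds in datasets for ex in ds] for g, datasets in groups.items()}
    names = list(pooled)
    def sample():
        g = random.choice(names)            # equal ratio across groups
        return random.choice(pooled[g])     # within a group, proportional to size
    return sample

# Hypothetical usage:
sampler = make_sampler({
    "General": [["cap_1", "cap_2"], ["vqa_1"]],
    "Document": [["chart_1"], ["doc_1", "doc_2", "doc_3"]],
})
batch = [sampler() for _ in range(8)]
```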
http://arxiv.org/abs/2407.13694v1
20240718165933
Anticipatory Task and Motion Planning
[ "Roshan Dhakal", "Duc M. Nguyen", "Tom Silver", "Xuesu Xiao", "Gregory J. Stein" ]
cs.RO
[ "cs.RO" ]
Enhancing gravitational-wave host localization with : rapid volume and inclination angle reconstruction Walter Del Pozzo July 22, 2024 ======================================================================================================== § ABSTRACT We consider a sequential task and motion planning (tamp) setting in which a robot is assigned continuous-space rearrangement-style tasks one-at-a-time in an environment that persists between each. Lacking advance knowledge of future tasks, existing (myopic) planning strategies unwittingly introduce side effects that impede completion of subsequent tasks: e.g., by blocking future access or manipulation. We present anticipatory task and motion planning, in which estimates of expected future cost from a learned model inform selection of plans generated by a model-based tamp planner so as to avoid such side effects, choosing configurations of the environment that both complete the task and minimize overall cost. Simulated multi-task deployments in navigation-among-movable-obstacles and cabinet-loading domains yield improvements of 32.7% and 16.7% average per-task cost respectively. When given time in advance to prepare the environment, our learning-augmented planning approach yields improvements of 83.1% and 22.3%. Both showcase the value of our approach. Finally, we also demonstrate anticipatory tamp on a real-world Fetch mobile manipulator. § INTRODUCTION We consider a sequential TAMP setting, in which a long-lived robot is assigned rearrangement-style tasks one-at-a-time from a sequence. Notably, the environment persists between tasks, so that the terminal state after completing one task serves as the starting state for the next. Lacking advance knowledge of what tasks the robot will later be assigned, existing TAMP planners <cit.> are myopic, targeting low-cost solutions to their immediate objective without regard to the future. As such, this pervasive myopic planning strategy often incurs side effects on subsequent tasks that increase overall cost. R0.65 < g r a p h i c s > Anticipatory TAMP for a cabinet-loading scenario. Consider the cabinet-loading scenario of Figure <ref>. Myopic planning via off-the-shelf TAMP solver <cit.> quickly loads the mugs and bowls into the cabinet yet in a configuration that impedes completion of the second task to , for which the mugs must be moved out of the way. If the robot were to instead anticipate that it may later be tasked to or , it would load the cabinet so as to avoid such side effects. As shown in Figure <ref> (bottom): this small immediate expenditure of additional effort reduces overall cost. The cabinet-loading scenario falls within the realm of anticipatory planning <cit.>, an emerging subfield in which a robot jointly considers the cost of accomplishing its current task and the impact of its solution on subsequent tasks. As it will not know its future tasks in advance, the robot must instead plan with respect to a task distribution, which specifies what future tasks may later be assigned and their relative likelihood. Anticipatory planning thus involves searching over the space of plans to find the one that jointly minimizes the immediate plan cost and the expected future cost. Recent work in the space of anticipatory planning <cit.> so far focuses specifically on anticipatory task planning problems, for which the state space is discrete. 
However, rearrangement-style tasks in general often require jointly reasoning about both discrete elements of the state (e.g., whether an object is loaded into the cabinet) and continuous parameters of the state: e.g., where inside the cabinet the object is placed. Integrated TAMP, designed to solve such problems, is inherently complex due to the interconnected nature of the discrete and the continuous, since removing an object from the back of a cabinet may first require moving other objects out of the way. This challenge is further amplified by the need to anticipate how the robot's actions may negatively impact potential future tasks. Existing anticipatory planning strategies are not well-suited to reason about these continuous aspects of the state, yet our cabinet-loading scenario illustrates the importance of their consideration: though loading all objects inside the cabinet specifies only a single symbolic state, how those objects are arranged within the cabinet strongly determines how easily objects can be subsequently unloaded. This work develops an approach that addresses this limitation of the state-of-the-art and so improves the performance of long-lived robots for which the environment persists between tasks. We present Anticipatory Task and Motion Planning, which improves the performance of long-lived robots over long deployments, consisting of sequences of tasks assigned one-at-a-time, by imbuing them with the ability to anticipate the impact of their actions on future tasks in continuously-valued manipulation and rearrangement tasks. Difficult to compute exactly, the expected future cost is estimated via learning using a GNN that consumes a graphical representation of the state. A model-based TAMP planner <cit.> generates candidate plans, in effect sampling over the continuous goal space, and we select the plan that minimizes the total cost: (i) the immediate cost produced by the TAMP planner plus (ii) the estimated expected future cost from our learned estimator. Using learning and planning in tandem, our approach quickly and reliably completes its assigned objective while also producing solutions that reduce overall cost over lengthy deployments. We evaluate the performance of our learning-augmented planning approach in two domains: an object-reaching scenario in a NAMO domain and a cabinet-loading scenario. We demonstrate that our approach reduces average plan cost by 32.7% over 20-task sequences in the NAMO domain and by 16.7% over 10-task sequences in the cabinet domain. Furthermore, if given time in advance to prepare the environment before any tasks are assigned, we demonstrate performance improvements of 83.1% and 22.3% in the NAMO and cabinet domains respectively. In both simulated and real-world experiments, we show the benefit of our learning-augmented TAMP strategy, improving performance over deployments in persistent environments consisting of multiple tasks given in sequence, a step towards more performant long-lived robots. § RELATED WORK Task and Motion Planning TAMP involves jointly reasoning over discrete (place the bowl) and continuous (at pose X) spaces to achieve long-horizon goals (load the bowls and mugs) <cit.>. We build on the sampling-based TAMP planner of <cit.>. Existing TAMP approaches <cit.> solve tasks in isolation. We show empirically that this myopic approach performs poorly when the environment persists and solutions to one task may impact the next. 
Anticipating and Avoiding Side Effects during Planning Recent research has similarly investigated how an understanding of the impact of the robot's actions on future tasks can improve planning, via estimation of expected future cost <cit.>, direct task prediction via llms <cit.>, or by leveraging human patterns of behavior <cit.>. These previous works consider discrete (task) planning, not integrated TAMP. On the contrary, we consider TAMP problems in which planning requires both discrete and continuous elements of the state. Other work in the space of reinforcement learning <cit.> or learning from demonstration <cit.> seeks to learn helpful behaviors by example or repeated interaction and so has potential for avoiding side effects, though such methods are so far not directly applicable in our non-deterministic and long-horizon setting. Integrating Planning and Learning To address some of the computational challenges in task planning and TAMP, recent advances have leveraged learning: learning heuristics or using an llm for task-level planning <cit.>, or learning sampling distributions for continuous-space planning <cit.>. However, all such approaches focus on solving a single task in isolation and are not well-suited to address the unique challenges of our anticipatory TAMP objective. Our learned model for this work is a GNN <cit.>, as they have proven effective for TAMP problems <cit.>. In particular, we rely on the powerful representational capacity and generalization capabilities of GNNs to estimate expected future cost and thereby improve performance over multi-task deployments.
§ ANTICIPATORY TASK AND MOTION PLANNING §.§ Preliminaries Task and Motion Planning We define a TAMP problem as a tuple ⟨, , , s_0, τ⟩, following the convention of Chitnis et al. <cit.>. The first element represents the set of objects, and so defines the configuration space of the robot and all movable objects in the environment. The second is the set of fluents, Boolean functions that describe the (discrete) symbolic relationships between objects. The third is the set of high-level actions that the robot can execute to transform the state, such as pick, move, and place. s_0 ∈ 𝒮 is the initial state of the environment. The task τ is defined as a set of fluents that define the goal state; as such, a task τ defines a subset of the state space—the goal region G_τ ⊂ 𝒮—in which the task is considered complete. For notational convenience later on, we represent a state s as a tuple ⟨σ, k⟩, where σ denotes the discrete symbolic components of the state (e.g., on which surface an object is placed) and k the continuous aspects of the state: e.g., where on that surface the object is placed. The goal of a TAMP solver is to find a sequence of actions that completes the specified task τ, reaching a goal state s_g ∈ G_τ. In this work, we assume access to a TAMP planner <cit.> that returns a plan π, a sequence of actions a_0, a_1, ⋯, a_n, such that the terminal state of the plan is in the goal: s_g ≡ tail(π) ∈ G_τ. Anticipatory Planning There is often considerable flexibility in how the robot can choose to complete its assigned task τ: i.e., the goal region G_τ consists of more than a single satisfying state. During long-lived deployments, the environment will persist between tasks, and so effective planning requires that the robot consider how its choice of how to complete the current objective—which goal state s_g it ends up in—impacts possible tasks it may later be assigned, where tasks are assigned according to a task distribution P(τ). We adopt the formalism of <cit.>, which defines anticipatory planning as a joint minimization over (i) the cost to complete the assigned current task τ_c and (ii) the expected future cost to complete a subsequent task: s^*_g = argmin_s_g ∈ G(τ_c) [ V_s_g(s_0) + ∑_τ P(τ) V_τ(s_g) ], where the first term, V_s_g(s_0), is the immediate task cost (the plan cost to reach state s_g from starting state s_0) and the second term is the anticipatory planning cost (the expected cost over future tasks), with V_τ(s_g) the cost to complete task τ from state s_g. As mentioned by <cit.>, reasoning about expected future cost over long sequences of tasks is computationally challenging, and so Eq. (<ref>) instead seeks to minimize cost over an immediate task and a single next task in the sequence; we will show in Sec. <ref> that this formulation is still sufficient for improved behavior over lengthy sequences. Owing to the difficulty of integrated TAMP, recent work in this space <cit.> considers only task planning settings, focusing only on symbolic planning and thus ignoring continuously-valued aspects of the state, a restriction we overcome in this work.
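To make the objective above concrete, the following minimal sketch selects among sampled candidate plans by the sum of immediate and estimated future cost; it anticipates the sampling-based procedure formalized in Alg. 1 below, and the solver and cost-estimator callables are hypothetical stand-ins for the components described later.

```python
# Minimal sketch of plan selection under the anticipatory objective.
# `sample_plan` (one call to a randomized TAMP solver returning a plan, its
# cost, and its terminal state) and `estimate_future_cost` (a learned estimator
# of the expected future cost) are placeholders, not the actual implementation.
def anticipatory_plan(initial_state, task, sample_plan, estimate_future_cost,
                      num_samples=100):
    best_plan, best_total = None, float("inf")
    for _ in range(num_samples):
        plan, immediate_cost, goal_state = sample_plan(initial_state, task)
        total = immediate_cost + estimate_future_cost(goal_state)  # Eq. (1)
        if total < best_total:
            best_plan, best_total = plan, total
    return best_plan
```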
§.§ Problem Formulation: Anticipatory Task and Motion Planning We solve the problem of anticipatory TAMP, which combines elements of both TAMP and anticipatory planning and so is defined by the tuple: ⟨, , , s_0, τ, P(τ) ⟩. Tackling this problem requires reasoning about both discrete and continuous aspects of the state and so anticipatory TAMP in general involves solving the following objective: σ^*_g, k^*_g = _σ_g,k_g ∈ G(τ_c)[ V_σ_g, k_g(s_0)12emImmediate Task Cost: Cost to reach s_g ≡20em+∑_τ P(τ) V_τ()20emAnticipatory Planning Cost: Expected cost over future tasks30em] = _σ_g,k_g ∈ G(τ_c)[ V_σ_g, k_g(s_0) + ()20em is shorthand for expected future cost.20em] , where V_σ, k(s_0) is the plan cost to reach state s ≡⟨σ, k ⟩ from initial state s_0 and V_τ() is the cost to complete task τ from state s_g ≡. Preparation as task-free anticipatory TAMP Household robots will not be in perpetual use and so can take preemptive action before any tasks are assigned to prepare the environment: rearranging their environments to reduce expected future costs and thus make it easier to complete tasks once they are eventually assigned. Formally, preparation can be defined as task-free anticipatory planning <cit.>. For anticipatory TAMP, we define preparation as minimization over the space of continuous states k associated with the current symbolic state σ_0: ∈ K(σ_0). Thus, the prepared state of the environment ⟨σ_0,^*⟩ is defined via: ^* = _∈ K(σ_0)[ (⟨σ_0, ⟩) ] §.§ Approach: Planning via Anticipatory Tamp R0.55 < g r a p h i c s > Schematic of our approach. Anticipatory task and motion planning requires that we search over plans (and thus the continuous goal space) and select the plan that minimizes the total cost, the sum of the immediate plan cost and the expected future cost, the anticipatory planning cost , as in Eq. (<ref>). Difficult to compute at planning time, we rely on a learned model to estimate the during planning: APCostEstimator. Figure <ref> and Algorithm <ref> each give an overview of planning via anticipatory TAMP from an initial state s_0 and task τ. R0.48 0.48 Our algorithm relies on an off-the-shelf TAMP solver <cit.>. TAMPSolver is a randomized planner that produces multiple candidate plans with varying continuous goal states (see the Appendix for details). As such, each call to the planner returns plan π that satisfies the task and terminates in a random state within the goal region. This property facilitates its use as a random sampler of goal states. Under our anticipatory TAMP approach, we query TAMPSolver N times, yielding N plans and the plan cost of each. For each plan, we estimate via our learned model APCostEstimator and return the plan that minimizes total cost according to Eq. (<ref>). In general, search over both discrete and continuous aspects of the state is computationally demanding. As such, this work specifically considers relatively well-specified tasks, in which the goal region specified by the task corresponds only a limited number of symbolic states, so that search for anticipatory TAMP emphasizes the continuous aspects of the state: e.g., the goal region for the task load all objects into the cabinet consists of only a single symbolic goal state σ_g in which all objects must be inside the cabinet, yet their continuous pose within the cabinet is unspecified. Preparation, task-free anticipatory TAMP, involves searching for the continuously-valued state that minimizes expected future cost in advance of being given a task. 
We perform this search via a simulated annealing optimization approach <cit.>. Beginning with an initial state ⟨σ_0, k_0⟩, we iteratively perturb the continuous object states k within a bounded range ensuring no overlaps. If the of the new state, estimated by APCostEstimator, is improved or meets a probabilistic criterion influenced by a decreasing temperature factor, it is accepted as the new state to perturb. Search proceeds for N iterations, eventually returning the prepared state ⟨σ_0, k^*_prep⟩. § ESTIMATING EXPECTED FUTURE COST VIA GRAPH NEURAL NETWORKS During deployment, direct computation of the anticipatory planning cost (⟨σ_g, k_g⟩) is typically infeasible, either due to the high computational cost of solving all possible future tasks for every state considered during planning or the lack of direct access to the underlying task distribution P(τ). Instead, we estimate via an estimator (APCostEstimator), a GNN <cit.> trained via supervised learning with data generated during from an offline training phase. Training Data Generation Offline, we presume that the robot has direct access to the underlying task distribution P(τ), which specifies what tasks the robot may be assigned and their relative likelihood. We generate training data by randomly sampling states from the domain of interest. For each state s_i, we solve every possible future task τ using TAMPSolver and use the resulting plan costs V_τ(s_i) to compute the anticipatory planning cost via its definition: ≡∑_τP(τ)V_τ(s_i). Each datum consists of states s_i with labels (s_i), which the learned model is then trained to estimate. R0.5 < g r a p h i c s > Example states and visualizations of their accompanying graph representations for both our Cabinet and NAMO domains. Learning and Estimation via Graph Neural Networks As we rely on a GNN for learning and estimation, we represent the environment state ⟨σ, k⟩ as a graph 𝒢, as shown in Figure <ref>. Nodes represent objects—e.g., the robot, movable objects, and object containers—and edges represent spatial or semantic relationships between them. 𝒢 includes features for both nodes and edges, specific to each environment; see details alongside our experiments in Sec. <ref>. Our GNN is implemented via PyTorch Geometric <cit.> and consists of three TransformerConv layers <cit.> each followed by a leaky ReLU activation, culminating in a mean-sum pooling operation and a fully-connected layer. As our target is a regression objective, we use a mean absolute error loss and train with AdaGrad <cit.> with a batch size of 8 for 10 epochs with a 0.05 learning rate. § EXPERIMENTS AND RESULTS We evaluate our approach on two PyBullet <cit.> simulated domains based on those by <cit.>: (1) cabinet-loading and (2) object-reaching in a navigation among movable obstacles (namo) domain. We include trials for planning both with and without our anticipatory TAMP approach, starting from either (i) a randomized initial state or (ii) a prepared state (Sec. <ref>). For each trial, we evaluate performance of four planners: Myopic TAMP, which does not anticipate future tasks. Planning relies on Alg. <ref> using a zero-function for the anticipatory cost estimator, ensuring fair comparison with our approach. Our anticipatory TAMP approach, which seeks to minimize both immediate and expected future cost via Eq. (<ref>). We plan via Alg. <ref> using a learned anticipatory cost estimator, trained in the environment of interest. The robot first prepares the scene via Eq. (<ref>) and then plans via . 
The robot first prepares via Eq. (<ref>) and then plans via .
§.§ Object Reaching in a Navigation Among Movable Obstacles (NAMO) Domain
Table: Average cost-per-task over 20-task sequences in our NAMO domain.
Planner: Average Cost
Myopic: 56.1
Anticipatory TAMP (ours): 37.8
Prep (ours) + Myopic: 9.5
Prep + Anticipatory TAMP (ours): 9.5
We first perform experiments in a NAMO domain, in which the robot must navigate to a target object, ostensibly so that the object may be interacted with or inspected, and then return to its starting position. The robot's task distribution is uniform, so that each of the 10 objects is chosen with uniform probability. Reaching the target object may require moving other objects out of the way, and so good performance in this environment in general will require that the robot position objects so that they do not block the path to others. As discussed in Sec. <ref>, our anticipatory cost estimator is a GNN, trained with 10k training examples. Each node corresponds to an entity in the scene (e.g., the robot and movable objects) with input features of a one-hot vector of the entity class, the object pose, and its distance to the robot. Edge features include the distance between the nodes each edge connects and the number of movable obstacles between them. Evaluation consists of 256 trials, each a random sequence of 20 tasks; planning via Algorithm <ref> uses 100 samples of candidate plans per task and 5000 samples per preparation. Table <ref> shows the average cost-per-task for all four planning strategies and thus the benefits of using our approaches, both anticipatory TAMP and preparation. Anticipatory TAMP has a 32.7% lower overall planning cost compared to myopic planning. Moreover, advance preparation of the environment further reduces the cost of both strategies: an 83.1% improvement over myopic planning alone. Notably, performance for Prep+Myopic and Prep+Anticipatory TAMP is identical: preparation routinely finds states from which all objects can be reached without needing to move any others out of the way, so that all tasks are quickly and easily completed; thus, anticipatory TAMP does not go out of its way to rearrange the blocks during task execution, as doing so would increase overall cost. Figure <ref> (right) shows the average per-task performance for each strategy: i.e., a downward slope indicates that the cost of the final task averaged across the 256 sequences is less than the average cost of the first task. In particular, the average cost of planning via anticipatory TAMP decreases over time, showing how our approach gradually makes the environment easier to use through repeated interaction, an emergent property of planning via anticipatory TAMP. We highlight a few examples in Figure <ref> that corroborate the quantitative results, namely that planning via anticipatory TAMP ensures that obstacles are placed so as to be out of the way for subsequent tasks. In particular, the prepared states are more ring-like, so that the robot can reach all objects without moving any out of the way.
§.§ Cabinet Loading and Unloading Scenario
Table: Average cost-per-task over 20-task sequences in our Cabinet domain.
Planner: Average Cost
Myopic: 283.2
Anticipatory TAMP (ours): 235.8
Prep (ours) + Myopic: 267.7
Prep + Anticipatory TAMP (ours): 219.9
Our cabinet domain consists of nine objects: three each of mugs (blue), bottles (red), and bowls (green), using URDF models from <cit.>. In this domain, the robot's tasks involve loading or unloading all objects belonging to one or more semantic classes (e.g., move all bottles to the table), each assigned with equal probability. To estimate the anticipatory planning cost, we train a GNN with 5k training examples (a minimal sketch of the shared estimator architecture is given below).
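Since the estimator architecture described earlier (three TransformerConv layers with leaky ReLU activations, a pooling step, and a linear head trained with a mean absolute error loss and AdaGrad) is shared across domains, here is a minimal PyTorch Geometric sketch; the hidden width and the use of global mean pooling in place of the paper's mean-sum pooling step are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import TransformerConv, global_mean_pool

class APCostEstimator(torch.nn.Module):
    """GNN regressor from a graph-encoded state to its anticipatory planning cost."""
    def __init__(self, node_dim, edge_dim, hidden=64):
        super().__init__()
        self.conv1 = TransformerConv(node_dim, hidden, edge_dim=edge_dim)
        self.conv2 = TransformerConv(hidden, hidden, edge_dim=edge_dim)
        self.conv3 = TransformerConv(hidden, hidden, edge_dim=edge_dim)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, edge_attr, batch):
        for conv in (self.conv1, self.conv2, self.conv3):
            x = F.leaky_relu(conv(x, edge_index, edge_attr))
        return self.out(global_mean_pool(x, batch))  # one scalar cost estimate per graph

# Training regresses the offline labels AP(s_i) with torch.nn.L1Loss and
# torch.optim.Adagrad(lr=0.05), batch size 8, for 10 epochs, as described above.
```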
Node features consist of a one-hot encoding of entity type (robot, container, or object), pose, and distance from the robot. Edges represent spatial or semantic relationships between entities and edge features include the distances between nodes they connect and the number of movable obstacles between them, with zero used for non-movable objects. Due to the interchangeable nature of objects of the same semantic class, only objects of other semantic classes are regarded as obstacles for the purposes of edge feature computation. Evaluation consists of 64 deployments, each a sequence of 10 tasks; planning via Algorithm <ref> internally uses 200 samples of candidate plans per task and 2500 samples per preparation. Table <ref> shows the average cost-per-task for all four planning strategies; our approach yields performance benefits for both anticipatory TAMP and preparation. The cost-over-time plot in Figure <ref> (right) shows how from a random initial state yields performance improvements over that increase over deployment: our approach makes the environment gradually easier to use over time. Preparation benefits both approaches, yet for those benefits diminish over time, owing to its lack of consideration for tasks the robot may be asked to perform later on. Combining yields the best of both strategies. Figure <ref> shows example trials, which support our statistical results; each shows how planners that rely on our anticipatory TAMP have discovered the benefit of loading objects so as to avoid obstructing those of different semantic class and so tend to reduce overall cost over the course of each 10-task sequence. §.§ Real-World Demonstration on a Fetch Mobile Manipulator R0.6 < g r a p h i c s > Real-world demonstration with the Fetch. We further evaluate our approach using the Fetch Mobile Manipulator <cit.> in a real-world cabinet-loading scenario with six cylinders—two each in red, blue, and green—and with an anticipatory cost estimator trained in simulation environments of the same composition. We show the performance of both and planners; both are initialized with the same starting configuration (Figure <ref> left) and tasked to load the two objects from the table into the cabinet. Upon completion of that first task, each planner is then instructed to unload the two red cylinders and move them to the table, a task made difficult if either red cylinder is obstructed by another non-red cylinder. The planner (Figure <ref> top), incapable of considering the effect of its actions on possible future tasks, solves the first task such that the back red cylinder is more difficult to remove when the second task is eventually assigned. Conversely, our approach places the cylinders in same-colored groups and so the second task is made easier. Our approach requires fewer total actions (pick, move, place) and correspondingly less total execution time to complete the tasks in the sequence, as illustrated in Figure <ref>. § DISCUSSION, LIMITATIONS, AND FUTURE WORK We present anticipatory TAMP, a planning strategy to improve performance over task sequences in persistent continuous-space rearrangement-style settings. Unlike most existing TAMP approaches, which focus only on one task at a time, we train a learned model to estimate how the robot's immediate actions will impact potential future tasks and use this anticipatory planning cost to select plans that jointly minimize immediate plan cost and expected future cost. 
In both simulated and real-world experiments, we show the benefit of our learning-augmented TAMP strategy, improving performance over deployments in persistent environments consisting of multiple tasks given in sequence, an important step towards more performant long-lived robots. Limitations Our approach, which relies on existing solvers during planning, suffers from many of the same challenges of scale and computation that limit TAMP for rearrangement problems in general. Moreover, planning via Algorithm <ref> requires running the underlying TAMP solver many times (on the order of 100 in our experiments) to effectively search the continuous goal space. As such, better sampling heuristics or improvements in search will likely be necessary to scale up to more complex problems, which is in part why our work so far has focused on tasks for which the final symbolic state is presumed relatively well-specified. Finally, we presume offline access to the underlying task distribution, which defines our objective function. Not only may the robot lack this access in advance, but we also presume, so far, that the online task distribution matches that seen during training and will not evolve during deployment; future work may consider learning this task distribution online and modeling its change over time. This material is based upon work supported by the National Science Foundation under Grant No. 2232733. § APPENDIX §.§ Task and Motion Planning Overview and Solver Details The TAMP solver we leverage for this work relies on the planning strategy presented in the work of <cit.>. It operates by first generating a high-level symbolic task skeleton: a sequence of high-level symbolic actions. However, a symbolic plan alone is not always actionable, as it requires further refinement at the motion level. Thus, the task skeleton must be refined, a process by which the abstract actions in the skeleton are assigned metric parameters: e.g., how to grasp an object or where in the cabinet a bowl should be placed. The resulting metric plans are checked for feasibility by a low-level motion planner, checking for instance whether a collision-free trajectory can be found that achieves the desired motion between two sampled states. If the plan is determined infeasible, new metric parameters are sampled, yielding a new metric plan. After sufficiently many failed attempts, a new task skeleton may be generated. This approach of searching over both the high-level symbolic plan and the low-level metric plan continues until a feasible solution to the assigned task is found. For the TAMPSolver used in Algorithm <ref>, PDDL <cit.> is used to define the operators (action schemas) available to the robot.
The popular FastDownward solver <cit.> is used to generate the high-level task skeletons, which are then refined iteratively by a sampler and, when necessary, a motion-level planner. Motion planning (e.g., for the robot's movement between poses) is performed via the RRT-Connect algorithm <cit.>, yielding costs based on Euclidean distances that are then fed back into the planner. The TAMPSolver codebase <cit.> exemplifies this process: a task planner produces a high-level plan (a fixed sequence of symbolic actions) from PDDL <cit.> input files describing the initial state and desired goals, and an interface layer then assigns continuous values to the symbolic references so that each action is feasible. The motion-level planner samples continuous parameters for each action and checks their validity; if a refinement fails, the plan is adjusted and refined until a successful sequence of actions is achieved. Algorithm <ref> follows this codebase, which outputs the plan to reach a sampled goal state whose cost we then compute. The pick and place actions have constant costs, while the cost of movement is based on Euclidean distances between waypoints generated by RRT-Connect <cit.>.
§.§ PDDL Example Code Listing
Our experiments make use of planning environments that appear in the work of <cit.>, adapted to be useful for our sequential TAMP setting. Here, we show an example PDDL operator definition from our cabinet-loading domain of Sec. <ref>:
[language=PDDL, linewidth=, caption=PDDL operator definition for pick in Cabinet domain, label=lst:cabinetpddl]
(:action pick
 :parameters (?o - object ?grasppose - pose ?pickpose - pose)
 :precondition (and (at ?o ?pickpose)
                    (isgrasppose ?grasppose ?o ?pickpose)
                    (forall (?o2 - object) (not (objobstructs ?o2 ?grasppose)))
                    (emptygripper)
                    (not (clear ?pickpose)))
 :effect (and (not (at ?o ?pickpose))
              (not (emptygripper))
              (ingripper ?o)
              (clear ?pickpose)
              (forall (?p2 - pose) (not (objobstructs ?o ?p2)))
              (increase (cost) 20))
)
Notably, the pose parameters are sampled during plan refinement and the isgrasppose predicate specifies the feasibility of a chosen grasp, determined via a separate collision checking process using the motion planner. Our cabinet domain features three operators: pick, place, and move; the pick and place operators have a fixed cost of 20 units, and the move operator has a variable cost depending on the Euclidean distance of movement. The NAMO environment of Sec. <ref> features a single operator, which moves the robot to a target block, moving other blocks out of the way in its effort to reach it. It has a base cost of 200 units per block moved and an additional cost corresponding to the Euclidean distance of the robot's total movement. The place operator for the cabinet domain is defined analogously to pick:
[language=PDDL, linewidth=, caption=PDDL operator definition for place in Cabinet domain, label=lst:cabinetpddlplace]
(:action place
 :parameters (?o - object ?ungrasppose - pose ?placepose - pose)
 :precondition (and (ingripper ?o)
                    (isplacepose ?ungrasppose ?o ?placepose)
                    (forall (?o2 - object) (not (objobstructs ?o2 ?ungrasppose)))
                    (clear ?placepose))
 :effect (and (at ?o ?placepose)
              (emptygripper)
              (not (ingripper ?o))
              (not (clear ?placepose))
              (increase (cost) 20)))
§.§ Preparation for Anticipatory TAMP
Task-free anticipatory TAMP, or preparation, involves minimizing the anticipatory planning cost in advance of receiving any tasks. We use simulated annealing <cit.> for the optimization process, as shown in Algorithm <ref>; a minimal sketch of this procedure is also included at the end of this appendix. Preparation, defined in Eq. (<ref>), involves searching over the continuous states of the environment to find the state that minimizes the anticipatory planning cost (the expected future cost) as estimated by the APCostEstimator. The algorithm starts from the initial state of the environment and explores neighboring states by perturbing object positions using the GetNeighborState function. New states are accepted (i.e., used for the subsequent iteration) if the new state improves upon the expected future cost or is randomly accepted via a monotonically decreasing probabilistic acceptance criterion, which exists to avoid settling in local minima. The process runs for a set number of iterations, with a cooling schedule gradually reducing the acceptance probability, and thus the likelihood of uphill moves. Finally, the state with the lowest cost is returned. It is this state into which the robot puts the environment and from which planning will then commence.
§.§ Real Robot Experiment Details
[Figure: Left: Our Fetch robot. Top right: Mapping of the environment shown in Rviz. Bottom right: Use of AprilTags for perception.]
We use a Fetch Mobile Manipulator (Figure <ref>) for the real-world demonstration. Our Fetch robot primarily relies on three sub-processes: navigation, perception, and manipulation. In the navigation process, the Fetch uses a pre-built map of the environment, as shown at the top right of Figure <ref>, and the ROS move_base package <cit.> to localize itself within the map using its laser scanner and move between specified locations. For perception, we utilize AprilTag <cit.> fiducial markers to estimate the poses of targeted objects relative to the robot, as shown at the bottom right of Figure <ref>. These poses are transformed to the robot frame and published as a separate ROS node, to which other processes subscribe for pose data. For manipulation, the MoveIt! package <cit.> is used to determine feasible grasp poses and trajectories for the robot's arm. The outputs of the perception node are used to create a 3D representation of the environment, which the IKFast solver <cit.> uses to calculate the arm configuration that will achieve the desired end effector pose. The MoveIt!
motion planner then generates the trajectory for the arm to pick or place objects in locations specified by the planner. The demonstrations involve first generating a plan in the simulated environments using the TAMP solver described above and then execute that plan, by sequential execution of the actions it prescribes, using the aforementioned planning and perceptual modules. Videos of the demonstrations are included at the end of the video presentation included in the supplementary material. §.§ Example 10-Task Sequence in Namo domain We present an example trial in the NAMO domain involving a sequence of 10 tasks in a persistent environment, each specifying a particular object the robot must reach (see Figure <ref>). Our approach reduces the overall planning time by 50%. Supporting our results from Sec. <ref>, preparing the environment in advance results in a configuration from which all blocks can be easily reached without moving any others out of the way, an emergent behavior of our approach. As such, the behavior of both and our result in a 92% reduction in overall planning cost compared to planning in the unprepared environment. §.§ Example 10-Task Sequence in Cabinet-Loading Domain We also present an example 10-task sequence in our cabinet-loading domain, as shown in Figure <ref>. Consistent with the results shown in Sec. <ref>, our approach reduces planning costs throughout the sequence, particularly during the unloading of objects from the cabinet. Using our approach, the overall planning cost of the sequence is reduced by 26% compared to . We additionally show how preparation can further reduce cost of myopic planning, though the benefit of initially preparing the state diminishes as the task sequence proceeds. In this particular trial, both the prepared and non-prepared anticipatory TAMP results are similar, with only a small difference between them. Our results show how our approach improves planning cost even over a longer sequence of tasks.
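As referenced in the preparation section above, the following is a minimal sketch of the simulated-annealing preparation routine of Algorithm <ref>; the helper names get_neighbor_state and ap_cost_estimator, the iteration count, and the cooling schedule are illustrative assumptions rather than the exact implementation.

```python
import math
import random

def prepare_environment(initial_state, ap_cost_estimator, get_neighbor_state,
                        num_iters=5000, temp0=1.0, cooling=0.999):
    """Task-free preparation: anneal object poses to minimize estimated future cost."""
    state = best_state = initial_state
    cost = best_cost = ap_cost_estimator(state)
    temp = temp0
    for _ in range(num_iters):
        candidate = get_neighbor_state(state)      # perturb continuous object poses (no overlaps)
        cand_cost = ap_cost_estimator(candidate)   # learned estimate of expected future cost
        # Accept improvements; occasionally accept uphill moves to escape local minima.
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / max(temp, 1e-9)):
            state, cost = candidate, cand_cost
            if cost < best_cost:
                best_state, best_cost = state, cost
        temp *= cooling                            # cooling schedule reduces uphill acceptance
    return best_state
```

The returned state is the prepared configuration into which the robot puts the environment before any task is assigned.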
http://arxiv.org/abs/2407.13397v2
20240718110637
Angular momentum distribution for a quark dressed with a gluon: different decompositions
[ "Ravi Singh", "Sudeep Saha", "Asmita Mukherjee", "Nilmani Mathur" ]
hep-ph
[ "hep-ph" ]
Particle Production and Density Fluctuations of Non-classical Inflaton in Coherent Squeezed Vacuum State of Flat FRW Universe * July 22, 2024 ============================================================================================================================= § INTRODUCTION Experimental findings show that only a fraction of a nucleon's spin comes from the spin of its quarks and gluons <cit.>. The remaining portion is attributed to the orbital angular momentum (OAM) of these constituents, due to the nucleon's relativistic nature <cit.>. However, the theoretical understanding of how nucleon spin arises from the sum of these angular momenta remains ambiguous <cit.>. Historically, one challenge was the gauge-invariant separation of gluon angular momentum into intrinsic and orbital components. Recent polarized scattering experiments have provided insights into the intrinsic spin of gluons within nucleons, necessitating gauge-invariant observables to interpret these findings effectively. It has been theoretically established that gluon angular momentum can be further divided gauge-invariantly into spin and orbital components, introducing a gauge-invariant "potential angular momentum" term <cit.> that can influence the definition of quark or gluon OAM. Thus many different decompositions of nucleon spin exist, each offering unique perspectives on angular momentum distribution. Although these decompositions agree on the total nucleon spin, they differ at the density level due to surface terms. Understanding these distinctions is essential for proper interpretation of the spatial distribution of nucleon angular momentum. § ANGULAR MOMENTUM DENSITY DECOMPOSITIONS Decompositions of angular momentum in QCD are classified into kinetic and canonical categories. The kinetic class includes the Belinfante, Ji, and Wakamatsu decompositions, with the latter being a gauge-invariant extension (GIE) of the Ji decomposition. The canonical class includes the Jaffe-Manohar decomposition and its GIE by Chen et al. This classification was enabled by Wakamatsu's covariant generalization of the QCD angular momentum tensor into five gauge-invariant terms: quark and gluon spin, orbital angular momentum (OAM), and potential angular momentum. Potential angular momentum can be added to either quark or gluon OAM, creating the canonical and kinetic families. While these decompositions yield the same total angular momentum when integrated, they differ at the density level due to superpotential (surface) terms that vanish upon integration. Thus, total angular momentum density cannot be simply interpreted as a sum of OAM and spin density. More details can be found in section 5.2.3 of Ref. <cit.>. § ANGULAR MOMENTUM DISTRIBUTION IN FRONT FORM We calculate intrinsic spin and OAM densities in the light-front coordinates. These distributions are analyzed in a 2D plane orthogonal to longitudinal motion to avoid relativistic corrections associated with 3D densities. Light-front coordinates provide an elegant framework where quantities evaluated in the transverse plane exhibit Galilean symmetry. The impact-parameter distribution or the 2D spatial distribution of orbital angular momentum is ⟨ L^z ⟩(b^⊥)= -iϵ^3jk∫d^2Δ^⊥/(2π)^2e^-iΔ^⊥·b^⊥[∂⟨ T^+k⟩_LF/∂Δ_⊥^j], where b^⊥ is the impact parameter and ⟨ T^μν⟩_LF=⟨ p^',s|T^μν(0)|p,s⟩/2√(p^'+p^+). Here, T^μν is the energy-momentum tensor. 
Similarly, we can define the spin distribution in light-front as ⟨ S^z ⟩(b^⊥) = 1/2ϵ^3jk∫d^2Δ^⊥/(2π)^2e^-iΔ^⊥·b^⊥⟨ S^+jk⟩_LF, and the Belinfante-improved total angular momentum distribution as ⟨ J_Bel^z ⟩(b^⊥) = -iϵ^3jk∫d^2Δ^⊥/(2π)^2e^-iΔ^⊥·b^⊥[∂⟨ T^+k_Bel⟩_LF/∂Δ_⊥^j]. In order to ensure consistency between the Belinfante, Ji and Jaffe-Manohar decomposition in regards to the total quark AM density, it is important to include the correction term to the Belinfante's total angular momentum. At the distribution level, it is given as <cit.> ⟨ M^z_q⟩(b^⊥) = 1/2ϵ^3jk∫d^2Δ^⊥/(2π)^2e^-iΔ^⊥·b^⊥Δ^l_⊥∂⟨ S^l+k_q⟩_LF/∂Δ^j_⊥. For performing the Fourier transform of the aforementioned distributions, we utilized a Gaussian wave packet state with a fixed longitudinal momentum and finite width, confined within the transverse momentum space <cit.>. To calculate the matrix elements of local operators, we consider a relativistic spin-1/2 state of a quark dressed with a gluon at one loop in QCD <cit.>. |p,λ⟩ = ψ_1(p,λ)b_λ^†(p)|0⟩ +∑_λ_1, λ_2∫dk_1^+d^2k_1^⊥/√(2(2π)^3k_1^+)∫dk_2^+d^2k_2^⊥/√(2(2π)^3k_2^+)ψ_2(p,λ|k_1,λ_1;k_2,λ_2)√(2(2π)^3P^+) δ^3(p-k_1-k_2)b_λ_1^†(k_1)a_λ_2^†(k_2)|0⟩ , where, ψ_1(p,λ) is normalization, ψ_2(p,λ|k_1,λ_1;k_2,λ_2) is the probability amplitude of finding a quark and gluon with momentum (helicity) k_1(λ_1) & k_2(λ_2) respectively. We have used two-component formalism developed in light front gauge A^+=0 <cit.>. This work contains only the quark part of the EMT and an ongoing work contains the gluon part of the EMT <cit.>. We found that the off-diagonal matrix element of the quark part of kinetic EMT is zero. So effectively the kinetic EMT coincides with the canonical EMT for the quark part of EMT. This signifes the vanishing of the potential angular momentum. The longitudinal component of kinetic and canonical OAM distribution of quarks: ⟨ L^z_kin, q⟩ (b_⊥) = g^2 C_f/72 π^2∫d^2Δ^⊥/(2π)^2e^-iΔ_⊥·b_⊥[-7+6/ω(1+2m^2/Δ^2)log(1+ω/-1+ω)-6log(Λ^2/m^2)], and kinetic and canonical spin distribution of quarks: ⟨ S^z_kin, q⟩ (b_⊥) = ∫d^2Δ^⊥/(2π)^2e^-i Δ_⊥·b_⊥∫ dx [ 1/2+g^2 C_f/16 π^2(1-x){2x- ω (1+x^2) log( 1+ω/-1+ω) -(1-ω^2/ω) x log( 1+ω/-1+ω)}], where ω = √(1+4m^2/Δ^2), x is the light-front momentum fraction of the quark and Λ is the ultraviolet cutoff on the transverse momentum <cit.>. Belinfante-improved quark total angular momentum distribution: ⟨ J^z_Bel,q⟩ (b_⊥) = ∫d^2Δ^⊥/(2π)^2e^-iΔ_⊥·b_⊥∫ dx [1/2-g^2 C_f/16 π^2{1+x^2/1-xlog(Λ^2/m^2 (1-x)^2) -2x/1-x}] +g^2 C_f/16π^2(1-x)Δ^4 ω^3[(8m^4(1-2x)(1-x(1-x)) +6m^2(1-(2-x)x(1+2x))Δ^2 +(1-(2-x)x(1+2x))Δ^4)log(1+ω/-1+ω) -ωΔ^2 (4m^2(1-(1-x)x)+(1+x^2)Δ^2 +(1-(2-x)x(1+2x))(4m^2+Δ^2) log(Λ^2/m^2(1-x)^2))], Distribution of the superpotential term associated with quark ⟨ M^z_q⟩ (b_⊥) = g^2 C_f/16 π^2∫d^2 Δ_⊥/(2π)^2 e^-iΔ_⊥·b_⊥∫dx/(1-x)1/ω^3 Δ^4× [ ωΔ^2( (4m^2 + Δ^2)(1+x^2) - 4m^2 x ) - 2m^2 ( (4m^2 + Δ^2)(1+x^2) - 4m^2 x - 2x Δ^2) ]. In the light front gauge, the physical part of the gauge potential is the same as the full gauge potential. Thus, densities of all the components of the gauge invariant decompositions coincide with the corresponding densities in canonical (JM) and kinetic (Ji) decompositions respectively. § NUMERICAL ANALYSIS In this section, we analyze the longitudinal component of the angular momentum distribution of quarks and gluons. For the analysis, we have chosen: the quark mass m = 0.3 GeV, the coupling constant g = 1, the color factor C_f = 1, and Λ = 2.63 GeV. 
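As a practical remark on how such impact-parameter plots are obtained (a standard identity, not specific to this work): since the momentum-space expressions above depend on Δ^⊥ only through Δ^2, each two-dimensional Fourier transform reduces to a one-dimensional Bessel (Hankel) transform, ∫d^2Δ^⊥/(2π)^2 e^-iΔ^⊥·b^⊥ f(Δ^2) = 1/(2π)∫_0^∞ dΔ Δ J_0(Δ b) f(Δ^2), with b ≡ |b^⊥| and Δ ≡ |Δ^⊥|, so the distributions are radial functions of b obtained from a single radial integral (with the remaining x-integration performed numerically where it appears).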
The y-axis is multiplied by a factor of |b⃗_⊥ | to correctly represent the data in radial coordinates. In Fig. <ref>, the plot on the left panel is a graphical representation of the longitudinal component of ⟨ J^z_kin,q⟩ (b_⊥) = ⟨ L^z_kin,q⟩ (b_⊥) + ⟨ S^z_kin,q⟩ (b_⊥). In the right panel, we show that the Belinfante total AM distribution, ⟨ J^z_Bel,q⟩ (b_⊥) is not equal to the total AM density of the quark. The correction term ⟨ M^z_q⟩ (b_⊥), ignored in the symmetric Belinfante EMT, must be added to ⟨ J^z_q⟩ (b_⊥) to ensure the same total AM distribution in both decompositions As we are using the QCD Hamiltonian, J_q is not conserved, resulting in a scale or a cutoff dependence of the components <cit.>. Thus, by taking different values of the cutoff Λ, the expected equality J_kin,q^z=J^z_Bel,q+M^z_q does not hold. So we have chosen a suitable value for the cut-off Λ=2.63 GeV so that this equality holds. The analysis of the cutoff dependence of distributions can be found in <cit.>. § CONCLUSIONS In this study, we analyzed the angular momentum distributions of a quark within a light-front dressed quark state using the Drell-Yan frame and two-component QCD in the light-front gauge A^+=0. Neglecting boundary terms or superpotentials led to discrepancies in total angular momentum density across different decompositions. We also found that the potential angular momentum term is zero, as expected, since no torque can exist between constituents in a two-body system like the dressed quark state. § ACKNOWLEDGEMENT A. M. thanks the organisers of DIS2024 for the invitation. We acknowledge the funding from the Board of Research in Nuclear Sciences (BRNS), Government of India, under sanction No. 57/14/04/2021-BRNS/57082. A. M. thanks SERB-POWER Fellowship (SPF/2021/000102) for financial support. JHEP
http://arxiv.org/abs/2407.13352v1
20240718095131
Exploiting nonequilibrium phase transitions and strong symmetries for continuous measurement of collective observables
[ "Albert Cabot", "Federico Carollo", "Igor Lesanovsky" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
Institut für Theoretische Physik, Eberhard Karls Universität Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany. Institut für Theoretische Physik, Eberhard Karls Universität Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany. Institut für Theoretische Physik, Eberhard Karls Universität Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany. School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD, UK. Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, Nottingham, NG7 2RD, UK § ABSTRACT Dissipative many-body quantum dynamics can feature strong symmetries which give rise to conserved quantities. We discuss here how a strong symmetry in conjunction with a nonequilibrium phase transition allows to devise a protocol for measuring collective many-body observables. To demonstrate this idea we consider a collective spin system whose constituents are governed by a dissipative dynamics that conserves the total angular momentum. We show that by continuously monitoring the system output the value of the total angular momentum can be inferred directly from the time-integrated emission signal, without the need of repeated projective measurements or reinitializations of the spins. This may offer a route towards the measurement of collective properties in qubit ensembles, with applications in quantum tomography, quantum computation and quantum metrology. Exploiting nonequilibrium phase transitions and strong symmetries for continuous measurement of collective observables Igor Lesanovsky July 22, 2024 ======================================================================================================================= Introduction. – Nonequilibrium phases emerge in many-body quantum systems due to the combined action of driving, dissipation and interactions <cit.>. Dissipative or nonequilibrium phase transitions manifest as nonanalytic changes in the stationary state of the system <cit.> and are accompanied by a rich phenomenology such as long relaxation times and intermittency <cit.>, squeezing and quantum correlations <cit.>, or spectral singularities <cit.>. The sharp change occurring near a transition point also constitutes a resource for sensing and metrology, generally allowing for enhanced sensitivity <cit.>. In quantum optical settings, such as atomic ensembles interfaced with optical cavities or waveguides, the nature of the stationary state manifests in properties of the emitted light <cit.>. This allows to study phases and phase transitions through continuous monitoring of their output, as in photocounting or homodyne detection experiments <cit.>, and to further exploit this to devise parameter estimation protocols <cit.>. Open quantum systems can feature so-called strong symmetries <cit.>, which may correspond to physical quantities, e.g. total angular momentum, that are preserved during the time evolution. Strong symmetries have been identified in collective spin systems <cit.>, atomic lattices <cit.> and nonlinear resonators <cit.>, for which applications in quantum information have been proposed <cit.>. A system with a strong symmetry possesses multiple stationary states, one for each symmetry sector <cit.>. Consequently, nonequilibrium transitions can occur independently within each sector and the corresponding critical points may be located at different parameter values <cit.>. 
At the level of single realizations of a continuous monitoring, dissipative freezing can occur, in which an initial superposition state living on different symmetry sectors collapses into a single sector <cit.>. The strong symmetry, which is present in the average dynamics, can thus be violated in single quantum trajectories. In this work, we demonstrate the potential of symmetry-dependent phase transitions in sensing applications. In particular, we discuss how the continuous monitoring of the system output allows to infer the specific value assumed by the strong symmetry of the system. To illustrate this approach, we show how to estimate the total angular momentum of an ensemble of two-level atoms, see Fig. <ref>, undergoing a dissipative dynamics governed by a generator featuring a strong symmetry. We estimate the achievable sensitivity and also consider experimental aspects, such as finite photo-detection efficiency and local (single-atom) decay. Collective spin system. – We consider the dissipative dynamics of a system described by a Markovian master equation (ħ=1): ∂_t ρ̂=-i[Ĥ,ρ̂]+∑_α (L̂_αρ̂L̂_α^†-1/2{L̂^†_αL̂_α, ρ̂}) . Here, ρ̂ is the state of the system, Ĥ its Hamiltonian and L̂_α the jump operators. An Hermitian operator  that commutes both with the Hamiltonian and all jump operators [Ĥ,Â]=[L̂_α,Â]=0, ∀α, is known as a strong symmetry of the system <cit.>. In the presence of such an operator, Eq. (<ref>) features a block diagonal structure and possesses a stationary state for each eigenvalue of Â. The system we consider consists of N spin-1/2 particles described by: Ĥ=ωŜ_x, L̂=√(κ)Ŝ_-. The collective angular momentum operators are defined as Ŝ_α=1/2∑_j=1^N σ̂_α^(j) (α=x,y,z), with σ̂_α^(j) being the Pauli matrices and Ŝ_±=Ŝ_x± iŜ_y. This model encodes collective spin decay with rate κ and a (resonant) driving with Rabi frequency ω. The total angular momentum Ŝ^2 is a strong symmetry, making the use of total angular momentum states convenient. These satisfy Ŝ^2|S,S_z,i⟩=S(S+1)|S,S_z,i⟩, and Ŝ_z|S,S_z,i⟩=S_z|S,S_z,i⟩ with S=0,1,…,N/2 (for even N) <cit.>. The label i distinguishes the degenerate irreducible representations, or sectors, for each S at a given N. We consider initial states belonging to one of these sectors. Hence, the Hilbert space can be simply labeled by |S,S_z⟩ <cit.>. The considered dynamics [cf. Eq. (<ref>)] within a single sector features a crossover between two dynamical regimes separated at ω_c(S)=κ S <cit.>, see Fig. <ref>(b). For ω<ω_c(S), it displays an overdamped decay toward an almost pure stationary state. For ω>ω_c(S), it displays long-lived oscillations and eventually approaches a highly mixed state. The crossover gets sharper as S increases and becomes, in the thermodynamic limit, a nonequilibrium phase transition, in which the oscillatory regime corresponds to a time crystal <cit.>. Crucially, systems of N atoms with different total angular momentum S undergo the crossover at different ω values [see Fig. <ref>(b)] and signatures of both dynamical regimes clearly manifest in the photocounting record <cit.>. Moreover, the photocounting process preserves the initial total angular momentum S, which allows for measuring the latter without the need to reinitialize the system. Universal photocounting statistics. 
– The central quantity of our measurement protocol is the output intensity (or time-averaged photocount): I_T(t)=1/T∫_t^t+T dN(τ), where T is the length of the measurement time window and dN(t) is a random variable that takes the value 0 when no photon is detected and 1 when a photon is detected. In the overdamped regime ω<ω_c(S), the photocounting statistics is universal (i.e. independent of S) and analytically known, while the transition point varies with S [see Fig. <ref>(c)]. Deviations of the counting statistics from the universal behavior therefore allow to infer the unknown value of the total angular momentum S. A single realization of the photocounting process is described by a stochastic master equation for the conditioned system state μ̂ <cit.>: dμ̂=dN(t)𝒥μ̂+dt(-i ℋ+(1-η)κ𝒟[Ŝ_-]) μ̂ . The parameter η∈[0,1] is the detection efficiency, and we have defined the superoperators 𝒥μ̂=Ŝ_-μ̂Ŝ_+/⟨Ŝ_+Ŝ_-⟩-μ̂ and ℋμ̂=Ĥ_effμ̂- μ̂Ĥ^†_eff-(⟨Ĥ_eff⟩-⟨Ĥ^†_eff⟩)μ̂. The expected values are taken with respect to the conditioned state, and the effective Hamiltonian is given by: Ĥ_eff=ωŜ_x-iηκ/2Ŝ_+Ŝ_-. For a given realization the expected number of detections in the interval [t,t+dt] is given by 𝔼|_μ̂(t)[dN(t)]=ηκTr[Ŝ_+Ŝ_-μ̂(t)] dt. Averaging over realizations we recover the master equation, i.e. 𝔼[μ̂(t)]=ρ̂(t). Ideal photocounting corresponds to η=1, in which Eq. (<ref>) can be replaced by a stochastic Schrödinger equation evolving pure states <cit.>. The statistics of the output intensity can be understood using large deviations theory <cit.>. When T is large compared to the dominant relaxation timescales of the system, the moments of I_T(t) can be obtained from the scaled cumulant generating function in the stationary state, θ(s) (see Supplemental Material <cit.>). In the overdamped regime this assumes the universal form <cit.>: θ(s)=ηω^2/κ(e^-s-1), where s is the counting field. From the partial derivatives of θ(s) evaluated at s=0 we obtain the expectation value and the variance of the time-averaged photocount <cit.>: 𝔼[I_T(t)]=ηω^2/κ, 𝔼[I^2_T(t)] -𝔼[I_T(t)]^2=ηω^2/κ T. For large T the statistics of the output intensity tends to a Gaussian characterized by these two moments. From Eq. (<ref>) it is clear that such statistics is independent of S. The total angular momentum determines instead the point at which the emission statistics deviates from Eq. (<ref>) and begins to display a S-dependent behavior (see e.g. Ref. <cit.>). Therefore, starting from ω=0 and adiabatically ramping up the Rabi frequency, the collected signal follows the universal statistics until the system approaches the S dependent crossover. The point at which the counting statistics changes provides the estimate for S [see Fig. <ref>(c)]. Adiabatic protocol. – We focus on cases in which the ramp up timescale of ω(t) is much larger than the dominant relaxation timescales. In this case, adiabatic large deviations theory <cit.> shows that the statistics of the output intensity over a time interval Δ t is controlled by the following time integral: θ_ad(t,Δ t,s)=∫_t^t+Δ t dτ θ (τ,s), where θ(t,s) is the instantaneous scaled cumulant generating function associated with the parameters at time t. In the overdamped regime, θ(t,s)=ηω(t)^2(e^-s-1)/κ. We increase the Rabi frequency in small steps Δω every Δ t, where Δ t is large compared to the relaxation timescales of the system [see Fig. <ref>(a)]. 
In this case we have a piecewise constant Rabi frequency: ω(t)=nΔω, for (n-1)Δ t<t≤ nΔ t, with n=1,2,3… Within each time interval the number of detected photons is the stochastic variable Δ N(t)=Δ t I_Δ t(t). The statistics of the output intensity (number of counts) is directly obtained from Eq. (<ref>) since θ_ad(t,Δ t,s)=Δ tθ(t,s), with the corresponding value of ω. We illustrate this protocol in Fig. <ref>(a), considering the monitored output for different total angular momenta. The output follows the universal curve (black solid line) until ω(t)=ω_c(S) (vertical lines), when it starts to deviate significantly from it. Such a deviation allows to infer the S-dependent transition point. Here, Δω sets the minimum resolution on S to Δω/κ, and we fix Δω/κ=1. Notice that other adiabatic protocols for ω(t) are possible, for which the universal curve would display a different behavior <cit.>. The achievable precision is the result of the trade-off between two sources of error. For short time windows Δ t, the standard deviation of the photocount is larger [cf. Eq. (<ref>)] resulting in higher fluctuations in individual realizations. This makes false positives, that is unexpected significant deviations from the universal curve, more common. For large time windows (and intermediate values of S) a systematic error appears. This is due to the fact that, for finite S, the system displays a crossover and not a genuine phase transition. As a consequence, the actual emission statistics deviates from the universal one slightly before ω_c(S) <cit.>. This is illustrated in Fig. <ref>(b), where we show the systematic deviation of the estimated S from its actual value. The estimated S is here obtained from the Rabi frequency at which the average (with respect to ρ̂) number of detections deviates three standard deviations [computed using Eq. (<ref>) for a given κΔ t] from the universal curve. Except for small S, our protocol tends to underestimate the actual value of S, with short measuring time windows performing better in the limit of many realizations. Nevertheless, the relative error remains small (less than 5%) and diminishes as S increases. In the inset of Fig. <ref>(b), we illustrate the precision of our protocol for a finite number of realizations, fixing S=100 and considering four values of Δ t. We observe that, upon increasing Δ t, the systematic error increases, while the error bars are slightly larger for smaller Δ t, reflecting the above-mentioned trade-off. Superpositions and mixtures of total angular momentum. – Let us now discuss the case in which the state of the system is a superposition of different total angular momentum states. In this case the measurement becomes projective, i.e. the system selects a given total angular momentum sector. This projection results from the occurrence of dissipative freezing <cit.>. Dissipative freezing can be characterized considering the projectors on each symmetry sector: P̂_S=∑_S_z=-S^S|S,S_z⟩⟨ S, S_z|, whose expected value (with respect to μ̂) is denoted by P_S(t). By averaging over realizations P_S(t) reproduces the weights of the initial state, due to conservation of total angular momentum <cit.>. In Fig. <ref>(a) we show the occurrence of dissipative freezing within the oscillatory regime [ω>ω_c(S)]. We show a realization for an initial superposition state |Ψ(0)⟩=1/√(2)(|S_1,0⟩+|S_2,0⟩) and η=1 (solid lines), and a realization for an initial mixed state ρ̂(0)=1/2(|S_1,0⟩⟨ S_1,0|+|S_2,0⟩⟨ S_2,0|) with finite detection efficiency η=0.5 (dashed lines). 
In the overdamped regime [ω<ω_c(S)], dissipative freezing occurs only partially as the system is able to reach a stationary state (within a single realization) that still spans more than one symmetry sector <cit.>. The weights on the different sectors vary during the transient to this stationary state, and their final value depends on the realization. Nevertheless, this has no consequences for our protocol, as complete projection eventually occurs when the Rabi frequency becomes close to the smaller ω_c(S) among the involved sectors. In Fig. <ref>(b) we illustrate the application of the adiabatic protocol in Eq. (<ref>) to the initial superposition state |Ψ(0)⟩. The main panel displays the evolution of P_S(t) for a realization of the protocol considering an intermediate measuring time window κΔ t=5. The projection into one of the sectors occurs on a timescale that is much shorter than the duration of the protocol. Thus, after a fast transient, the protocol faces essentially the scenario analyzed in Fig. <ref>. The inset shows the photocounts of the corresponding realization. The signal deviates from the universal curve at the point corresponding to the S sector to which the system has collapsed. Local decay. – We now address the effects of local decay on the protocol. This process introduces N additional jump operators and the system is described by: Ĥ=ωŜ_x, L̂=√(κ)Ŝ_-, L̂_j=√(γ)σ̂_-^(j) ∀ j. We assume initial states to have equal weights on all irreducible representations with total angular momentum S_0 and same magnetization S_z in each of these. Following Refs. <cit.>, the dynamics of such states can be efficiently investigated even if, for γ>0, the total angular momentum is not conserved. This latter aspect also implies the existence of a unique stationary state, generally contained in more than one S sector. Nevertheless, the manifold of stationary states (one for each value of S) of Eq. (<ref>) and (<ref>) still manifests, although in a metastable fashion, when γ/κ is small <cit.>. This allows us to estimate the total angular momentum of the initial state as before. A crucial requirement is however to perform the protocol as fast as possible in order to minimise the detrimental effects of local decay. In Fig. <ref>(c) we illustrate the emergent metastable dynamics for N=30 and γ/κ=0.05. The system is initialized in a state with total angular momentum S_0=12 and subject to the adiabatic protocol given in Eq. (<ref>). The expectations ⟨Ŝ_y,z (t)⟩ (black solid lines) follow closely the values corresponding to the stationary state for S_0 and γ/κ=0 (dashed lines) instead of the ones of the true stationary state (color solid lines). This metastable behavior is more robust for larger S_0 at fixed N <cit.>. Metastability is controlled by the leading eigenmodes of the Liouvillian, that is, those associated with a smaller decay rate <cit.>. For small γ/κ there is a set of N/2 eigenmodes with a decay rate much smaller than the rest. Together with the stationary state, these define a manifold whose properties reflect those of the stationary states for γ/κ=0 <cit.>. As the strength of the local losses is increased, the decay rate of these eigenmodes also increases, some of them being more affected than the others, making the sectors with smaller S_0 more susceptible to local decay. We now consider the situation in which the collective emission channel is monitored. This scenario emerges naturally when collective emission is induced by, e.g., an optical cavity or waveguide [Fig. <ref>(a)]. 
The photocounting process is then described by a stochastic master equation that can be simulated efficiently using a permutation invariant representation <cit.>. In Fig. <ref>(d) we show single realizations of the protocol for N=40, γ/κ=0.05 and different values of S_0, taking a short measurement window κΔ t=0.5. The lower bound on κΔ t is dictated by the validity of the adiabatic approximation [Eq. (<ref>)]. In this sense, we need to be adiabatic with respect to the relaxation onto the metastable manifold and not with respect to the ultimate relaxation to the actual stationary state. The main drawback of reducing κΔ t are the stronger fluctuations in the photocount [Eq. (<ref>)], which lead to larger errors when estimating the total angular momentum with a single realization. The behavior observed in Fig. <ref>(d) is qualitatively similar to the case with γ/κ=0, and the individual trajectories deviate from the universal curve where expected. As shown in Ref. <cit.>, trajectories corresponding to different S_0 remain distinguishable up to γ/κ∼ 0.1, at least for the largest half of possible S_0 values. For γ/κ> 0, trajectories tend to deviate from the universal curve later than they should (except for S_0=N/2). The estimation protocol might here benefit from recalibrating ω_c(S) using the exact dynamics. Conclusions. – We have shown that the combination of strong symmetries and nonequilibrium phase transitions can be exploited for sensing applications. By engineering collective spin systems, one can make use of these resources to estimate a collective property of the spin ensemble (total angular momentum) through continuous monitoring. Since such a quantity is conserved by the dissipative dynamics, the measurement protocol can be iterated without the need of reinitializing the state of the ensemble. In the case of initial states without a well defined symmetry, the protocol performs a projective measurement through the action of dissipative freezing. The main difficulty faced by the proposed protocol is posed by local decay, which needs to be sufficiently weak such that the symmetry is conserved at a metastable level. In this sense, it would be interesting to explore whether the effects of local decay can be minimized by other adiabatic protocols. It would also be intriguing to apply these ideas to other many-body scenarios that combine strong symmetries with nonequilibrium transitions. Acknowledgements. – We are grateful for financing from the Baden-Württemberg Stiftung through Project No. BWSTISF2019-23. AC acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Walter Benjamin programme, Grant No. 519847240. FC is indebted to the Baden-Württemberg Stiftung for the financial support of this research project by the Eliteprogramme for Postdocs. We acknowledge the use of Qutip python library <cit.>. We acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Research Unit FOR 5413/1, Grant No. 465199066. We acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 40/575-1 FUGG (JUSTUS 2 cluster). This work was developed within the QuantERA II Programme (project CoQuaDis, DFG Grant No. 532763411) that has received funding from the EU H2020 research and innovation programme under GA No. 101017733. 
apsrev4-2 SUPPLEMENTAL MATERIAL Exploiting nonequilibrium phase transitions and strong symmetries for continuous measurement of collective observables Albert Cabot^1, Federico Carollo^1, Igor Lesanovsky^1,2,3 ^1Institut für Theoretische Physik, Universität Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany ^2School of Physics, Astronomy, University of Nottingham, Nottingham, NG7 2RD, UK. ^3Centre for the Mathematics, Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, Nottingham, NG7 2RD, UK § OVERVIEW OF LARGE DEVIATIONS FOR OPEN QUANTUM SYSTEMS In the large deviation approach to open quantum systems, the central quantity is given by the partition function of the time-integrated photocount <cit.>: Z_T(s)=𝔼[e^-sT I_T]. This quantity contains the information of all moments and cumulants of I_T, which can be retrieved taking partial derivatives with respect to the counting field s, around s=0. In particular, the mean and the variance read: 𝔼[I_T]=-1/T∂_s Z_T(s)|_s=0, 𝔼[I^2_T]-𝔼[I_T]^2=1/T^2∂^2_s log [Z_T(s)]|_s=0. The partition function can be calculated as Z_T(s)=Tr[ρ̂_s(T)], where ρ̂_s(T) is the solution of a tilted master equation <cit.>. In the case of the counting process described by the stochastic master equation (<ref>), the corresponding tilted master equation is given by: d/dTρ̂_s=-i[ωŜ_x,ρ̂_s]+κ𝒟[Ŝ_-]ρ̂_s+ηκ(e^-s-1)Ŝ_-ρ̂_sŜ_+. From this equation we can define the tilted Liouvillian ∂_Tρ̂_s=ℒ(s)ρ̂_s, which preserves the positivity of ρ̂_s but not its trace. Its dominant eigenvalue, θ(s), is real and generally different from zero. This corresponds to the (stationary) scaled cumulant generating function. All cumulants can be obtained from θ(s) using its relation with the partition function: lim_T→∞1/Tlog[Z_T(s)]= θ(s). Therefore, for large measurement times T, we have that: lim_T→∞𝔼[I_T]=-∂_s θ(s)|_s=0, lim_T→∞𝔼[I^2_T]-𝔼[I_T]^2=1/T∂^2_s θ(s)|_s=0. § SUPPLEMENTAL RESULTS FOR THE ADIABATIC RAMPING UP PROTOCOL §.§ Finite detection efficiency In Fig. <ref>(a) we illustrate the effects of a finite detection efficiency η<1 for the adiabatic protocol discussed in the main text [Eq. (<ref>)]. We consider a system initialized in the total angular momentum S=30 and the protocol is realized with κΔ t=10. For all the considered values of η, we observe the points to follow the corresponding universal curve until ω(t)=ω_c(S). As expected from Eq. (<ref>), the effect of decreasing the detection efficiency is to reduce the number of detected photons by the factor η. §.§ Alternative ramping up protocol In this section we consider a different adiabatic protocol for ramping up the Rabi frequency that leads to a different universal curve. In particular we consider a Rabi frequency that increases linearly in time as: ω(t)=α t. Using adiabatic large deviations theory [Eq. (<ref>)] and θ(t,s)=ηω(t)^2(e^-s-1)/κ, we obtain: θ_ad(t,Δ t,s)=α^2η/3κ(3t^2Δ t+3tΔ t^2+Δ t^3)(e^-s-1), which describes the emission statistics for the linear adiabatic ramp within the overdamped regime. From this expression we can compute the first two moments of the output intensity for the interval Δ t: E[I_Δ t(t)]=ηα^2/3κ(3t^2+3tΔ t+Δ t^2), E[I_Δ t^2(t)]-E[I_Δ t(t)]^2=1/Δ tE[I_Δ t(t)]. Analogously to the case presented in the main text, the protocol consists in increasing the Rabi frequency such as Eq. (<ref>) and measuring the detected photons in intervals Δ t, such that at each time interval the number of detected photons is the stochastic variable Δ N(t)=Δ t I_Δ t(t). 
From the behavior of Δ N(t) we infer when the system has crossed the point ω_c(S)=κ S, and thus S. In the linear adiabatic ramp, the parameters α and Δ t influence the resolution we have on S as Δ S=αΔ t/κ, which we want to keep below one. We show this adiabatic ramp protocol in Fig. <ref>(b), for three values of S and an initial condition residing entirely in each corresponding total angular momentum sector. We observe the counts to follow the universal curve, while they begin to deviate from it around each of the corresponding ω_c(S) (vertical color lines). § ANALYSIS OF THE EFFECTS OF LOCAL DECAY This section presents some supplemental results about the effects of local decay on the adiabatic protocol presented in the main text. For this, we study the dynamics at the master equation level and also for quantum trajectories (in which only collective emissions are monitored). We also analyze the signatures of metastability in the Liouvillian spectrum. In this study, we make use of permutation invariant representations <cit.> for the operators of the system and the considered class of initial states. §.§ Permutation invariant quantum jump trajectories In the presence of local decay with rate γ the master equation is modified to: ∂_t ρ̂=-i[ωŜ_x,ρ̂]+κ(Ŝ_- ρ̂Ŝ_+-1/2{Ŝ_+ Ŝ_-, ρ̂})+γ∑_j=1^N(σ̂^(j)_- ρ̂σ̂^(j)_+-1/2{σ̂^(j)_+ σ̂^(j)_-, ρ̂}). This equation no longer conserves total angular momentum. However, it still has a permutation symmetry, as it is invariant under permutation of the spin labels. This allows one to efficiently simulate it by means of a permutation invariant representation of the state and operators <cit.>. In this sense, it is also necessary to restrict the study to states that are identical with respect to the degenerate representations of total angular momentum for a given S and N <cit.>. When considering the unravelling of Eq. (<ref>) in quantum jump trajectories, one has to take into account that quantum jumps associated to a specific spin, L̂_j=√(γ)σ̂_-^(j), break the permutation symmetry of the dynamics. Therefore, the efficient permutation invariant representations can only be used if jumps are monitored in a permutation invariant way, in which there is no knowledge about which spin has emitted which photon. This prevents the use of a stochastic Schrodinger equation unravelling for pure states, as it requires to specify this knowledge. Nevertheless, a permutation invariant unravelling can be written down in terms of a stochastic master equation, in which the stochastic term is implemented by the permutation invariant representation of local decay. In this work we focus on a simpler scenario in which only collective emissions are monitored, while local ones are not. This situation emerges naturally when collective emission is induced by, e.g., an optical cavity and only the output of the cavity is monitored. The stochastic master equation describing this case is also suited to the efficient permutation invariant representation of Refs. <cit.>. This can be written as: dμ̂=dN(t)𝒥μ̂+dt(-i ℋ+(1-η)κ𝒟[Ŝ_-]+γ∑_j=1^N 𝒟[σ̂^(j)_-]) μ̂, with 𝒥μ̂=Ŝ_-μ̂Ŝ_+/⟨Ŝ_+Ŝ_-⟩-μ̂, ℋμ̂=Ĥ_effμ̂- μ̂Ĥ^†_eff-(⟨Ĥ_eff⟩-⟨Ĥ^†_eff⟩)μ̂, Ĥ_eff=ωŜ_x-iηκ/2Ŝ_+Ŝ_-. dN(t) is a random variable that takes the value 0 when no (collectively emitted) photon is detected and 1 when a photon is detected, η∈[0,1] is the detection efficiency of the collective emission channel and expectation values ⟨…⟩ are taken with respect to the conditioned state μ̂. 
For a given realization the expected number of photodetections at the interval [t,t+dt] is given by: 𝔼|_μ̂(t)[dN(t)]=ηκTr[Ŝ_+Ŝ_-μ̂(t)] dt. §.§ Decay of total angular momentum for different S and N We first analyze the relaxation timescales of total angular momentum as parameters are varied. For this, we consider initial conditions contained entirely in one of the sectors S and study the timescales characterizing the spreading of the state into other S sectors as the initial S, N, ω/κ and γ/κ are varied. Before analyzing the dynamics of the system, it is illustrative to consider how the rate of collective jumps for a given total angular momentum state and the rate of local jumps for the same state vary with S, S_z and N. These quantities are defined as: Γ_κ(N,S)=κ⟨ S,S_z|Ŝ_+Ŝ_-| S,S_z⟩= κ[S(S+1)-S_z(S_z-1)], Γ_γ(N,S)=γ⟨ S,S_z|∑_j=1^N σ̂^(j)_+ σ̂^(j)_-| S,S_z⟩= γ/2(N+2S_z). The ratio between the rate of local jumps and that of collective jumps is plotted in Fig. <ref> for N=40 (a) and N=100 (b), for all possible choices of S and S_z. The bigger the ratio, the more important is the effect of the local decay channels. We observe that the sector with maximal S is in general the more robust against local decay channels, while intermediate sectors are more susceptible to them. Close to the maximal S sectors, larger N also makes the rate of local jumps smaller with respect to the collective one. The value of S_z plays an important role too, and states with negative z-magnetization are in general less susceptible to local decay. In this figure, we have factored out the constant γ/κ, as it just provides a constant scale factor that can favor overall one type of channel over the other. In Fig. <ref> we consider the dynamics of Eq. (<ref>) for initial states contained in just one S sector. We analyze how the weight in the initial sector progressively decays out as the system explores other S sectors. In Fig. <ref>(a-b) we fix N=40 and we vary the initial value of S. We observe that the closer S is to its maximum value, the more time the state of the system remains in that initial sector. Comparing (a) and (b) we also observe that the value of ω/κ influences this spreading timescale. In general, these results cannot be simply understood from the interplay of just Γ_γ(N,S) and Γ_κ(N,S). However, these make the correct qualitative prediction that for intermediate values of S the effects of local decay are more important. In Fig. <ref>(c), we fix ω/κ, S=N/2 and we increase N, finding that the larger is the system size, the slower is the spreading from the maximal S to the other sectors. In Fig. <ref>(d) to (f) we show the corresponding y and z magnetizations for the same trajectories of panels (a) to (c). We observe that the effects of local decay are not so apparent as in the projectors P_S(t). This is analyzed in more detail in the next section. §.§ Metastable adiabatic protocol for different S and N In the following we analyze how the effects of local decay manifest in the adiabatic ramp protocol studied in the main text [Eq. (<ref>)]. In Fig. <ref> we consider a system of N=30 particles with an initial state fully contained in a sector of total angular momentum S_0 and with γ/κ=0.05. We then apply the discrete adiabatic ramp of the Rabi frequency [Eq. (<ref>)], considering different initial total angular momentums S_0 and protocol step lengths κΔ t. 
We study the system at the master equation level, and we compare the results of the dynamics (black solid lines) with the true stationary state for each value of ω(t) (color solid lines) and the corresponding stationary stationary state for γ/κ=0 (color dashed lines). Considering the case S_0=N/2 [panels (a) and (b)], we observe that the dynamics follows quite closely the corresponding stationary state for γ/κ=0. Deviations from this curve are only observed for long times. In fact, reducing the protocol step κΔ t, makes these deviations less significant. However, the drawback of reducing κΔ t is that the dynamics is less adiabatic and in single realizations of the photocount process fluctuations are larger. Considering S_0=12 [panels (c) and (d)], we observe that deviations from the curves corresponding to γ/κ=0 are more significant. Nevertheless, the dynamics takes closer values to the case with γ/κ=0 (color dashed lines) than to the case with γ/κ=0.05 (color solid lines). Actually, for κΔ t=0.5 [panel (d)] deviations from the dashed lines are quite small. The results of Fig. <ref> point out that, for small γ/κ, the system displays a metastable dynamics that is quite close to the case without local decay. In Fig. <ref> we analyze in more detail the origin of this metastable response. For this we consider the spectral decomposition of the Liouvillian in terms of its eigenvalues and right and left eigenmatrices: ℒr̂_j=λ_jr̂_j, l̂_j^†ℒ=λ_jl̂_j^†, Tr[l̂_j^†r̂_k]=δ_jk. The eigenvalues are nonpositive and are ordered such that Re[λ_j]≥Re[λ_j+1]. Moreover, we have that λ_0=0, r̂_0=ρ̂_ss, and l̂_0 is the identity. The leading eigenvalues of the Liouvillian are shown in Fig. <ref> for N=20, ω/κ=5 and γ/κ=0.01 [panel (a)] or γ/κ=0.05 [panel (c)]. The zero eigenvalue together with the subsequent N/2 ones are shown as orange triangles. For γ/κ=0.01 there is a clear spectral gap between this set of eigenvalues and the rest. For γ/κ=0.05 a significant gap is only maintained for the smallest ones of this set. This spectral gap is a well known signature of metastability <cit.>, i.e. a pre-stationary regime characterised by a slow relaxation dynamics. This is better understood when considering the spectral decomposition of the dynamics of the system: ρ̂(t)=ρ̂_ss+∑_j≥1Tr[l̂_j^†ρ̂(0)]r̂_j e^λ_j t. Then, if there are M eigenvalues whose real part (decay rate) is much smaller in absolute value than the rest, after an initial short transient, the dynamics is approximately contained in the manifold spanned by these eigenmodes <cit.>: ρ̂(t)≈ρ̂_ss+∑_j=1^MTr[l̂_j^†ρ̂(0)]r̂_j e^λ_j t, for t≫1/|Re[λ_M+1]|. The projection of a state on this metastable manifold is defined as: 𝒫ρ̂=ρ̂_ss+∑_j=1^MTr[l̂_j^†ρ̂(0)]r̂_j, and it is usually insightful to compute expected values on this manifold <cit.>: ⟨Ô⟩_𝒫=Tr[Ô𝒫ρ̂]. In fact, from Fig. <ref>(a) we see that M=N/2, which points out that the manifold defined by the projection 𝒫 is related to the stationary states for each S sector for the case γ/κ=0. Thus, the presence of weak local decay lifts the eigenvalue degeneracy at 0 and leads to a metastable set of eigenmodes that are closely related to the old stationary states. This is confirmed when computing ⟨Ŝ_y,z⟩_𝒫=Tr[Ŝ_y,z𝒫ρ̂_S_0], as shown in color points in Fig. <ref>(b). The color indicates the value of S_0 of the initial state with zero magnetization S_z. We observe that ⟨Ŝ_y,z⟩_𝒫 are very close to the corresponding color lines, which are obtained from the stationary states in the case γ/κ=0. 
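The spectral construction above can be prototyped directly with dense linear algebra. The Python sketch below builds the vectorized Liouvillian of a single total-angular-momentum sector with collective decay only (the full metastability analysis with local decay would require the permutation-invariant representation cited above), sorts the eigenvalues by decay rate, and assembles the projection 𝒫ρ̂ onto the M slowest eigenmodes; S, ω, κ and M are placeholder values, and a non-degenerate spectrum is assumed.

```python
import numpy as np
from scipy.linalg import eig

def spin_ops(S):
    m = np.arange(S, -S - 1, -1)
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    Sm = Sp.T.conj()
    return 0.5 * (Sp + Sm), Sp, Sm

def liouvillian(S, omega, kappa):
    """Vectorized generator, column-stacking rule vec(A rho B) = (B^T kron A) vec(rho)."""
    Sx, Sp, Sm = spin_ops(S)
    D = Sx.shape[0]
    I = np.eye(D)
    H = omega * Sx
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    L += kappa * (np.kron(Sp.T, Sm)
                  - 0.5 * np.kron(I, Sp @ Sm)
                  - 0.5 * np.kron((Sp @ Sm).T, I))
    return L, D

def metastable_projection(rho0, L, D, M):
    """P rho0 = rho_ss + sum_{j=1..M} Tr[l_j^dag rho0] r_j, with Tr[l_j^dag r_k] = delta_jk."""
    vals, vl, vr = eig(L, left=True, right=True)
    order = np.argsort(-vals.real)                    # slowest-decaying modes first
    vals, vl, vr = vals[order], vl[:, order], vr[:, order]
    for j in range(len(vals)):                        # biorthonormalize each (l_j, r_j) pair
        vr[:, j] /= vl[:, j].conj() @ vr[:, j]
    rho_ss = vr[:, 0].reshape(D, D, order="F")
    rho_ss /= np.trace(rho_ss)
    proj = rho_ss.copy()
    v0 = rho0.reshape(-1, order="F")
    for j in range(1, M + 1):
        cj = vl[:, j].conj() @ v0                     # = Tr[l_j^dag rho0]
        proj += cj * vr[:, j].reshape(D, D, order="F")
    return vals, proj

S, omega, kappa = 5, 4.0, 1.0
L, D = liouvillian(S, omega, kappa)
rho0 = np.zeros((D, D), complex)
rho0[int(S), int(S)] = 1.0                            # |S, m=0><S, m=0|
vals, proj = metastable_projection(rho0, L, D, M=2)
print("leading decay rates:", np.round(vals.real[:5], 4))
```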
For comparison, in red-dashed line the true stationary state values are shown. In Fig. <ref>(d) we repeat the analysis but for γ/κ=0.05. We observe similar results for the considered S_0, despite the spectral gap only holding for a smaller set of eigenvalues. Moreover, these results are largely independent on the considered state within the particular S_0 sector, i.e. the initial magnetization S_z does not play an important role (not shown). In conclusion, the presence of these long-lived eigenmodes closely related to the stationary states for γ/κ=0 elucidates why the adiabatic ramps in Fig. <ref> follow closely the curves corresponding to γ/κ=0 instead of those belonging to γ/κ>0. Finally, in Fig. <ref> we show the results of applying the adiabatic protocol in the presence of local losses of various strengths, for several sectors S and for N=40. In panel (a) we show the case of γ/κ=0; in panel (b) γ/κ=0.05; in panel (c) γ/κ=0.1; in panel (d) γ/κ=0.2. In all cases we keep κΔ t=0.5 and η=1, while the results are shown on the average over realizations (i.e. as computed from the master equation). We consider the system to be initialised in the state with total angular momentum S_0∈{10,12,14,16,18,20} and fixed initial magnetization S_z=0. We observe that the different curves still follow the universal curve (black line) until they depart from it at a S_0-dependent value. The value at which they deviate from the universal curve is not exactly the same as in the case γ/κ=0, the deviations being more significant for the cases belonging to a smaller initial S_0 and also for increasing γ/κ. Nevertheless, for the cases γ/κ=0.05 and γ/κ=0.1 [panels (b) and (d)], the curves belonging to different S_0 are still quite well resolved. This suggests that the proposed sensing protocol is robust against not too strong local decay channels, working better the larger S_0 is. § ANALYSIS OF DISSIPATIVE FREEZING §.§ Master equation for multiple total angular momentum sectors In the study of dissipative freezing, we consider initial states containing superpositions and mixtures of different total angular momentum sectors. We can express them in the following form, generally containing coherences between different S sectors: ρ̂(t)=∑_S,S'∑_S_z,S'_zρ^S,S'_S_z,S'_z(t)|S,S_z⟩⟨ S',S'_z|. In terms of this parametrization, the master equation (<ref>) reads: ∂_t ρ^S,S'_S_z,S'_z= -iΩ/2[√((S-S_z+1)(S+S_z))ρ^S,S'_S_z-1,S'_z(1-δ_S_z,-S)+√((S+S_z+1)(S-S_z))ρ^S,S'_S_z+1,S'_z(1-δ_S_z,S) -√((S'-S'_z+1)(S'+S'_z))ρ^S,S'_S_z,S'_z-1(1-δ_S'_z,-S')-√((S'+S'_z+1)(S'-S'_z))ρ^S,S'_S_z,S'_z+1(1-δ_S'_z,S') ] +κ[√((S+S_z+1)(S-S_z)(S'+S'_z+1)(S'-S'_z)) ρ^S,S'_S_z+1,S'_z+1(1-δ_S_z,S)(1-δ_S'_z,S') -1/2((S+S_z)(S-S_z+1)+(S'+S'_z)(S'-S'_z+1)) ρ^S,S'_S_z,S'_z]. Vectorizing this equation, we observe that the Liouvillian is a block diagonal matrix and each possible pair of values {S,S'} defines an independent block of it: ℒ=⊕_S,S'ℒ_S,S'. For finite N, only the blocks ℒ_S,S contain a zero eigenvalue and thus a stationary state. In contrast, the coherences governed by ℒ_S,S' with S≠ S' decay in time, such that for long times ρ^S,S'≠ S_S_z,S'_z(t)=0. In practice, if the initial state is contained only in two different S sectors (S_1,2), we only have to implement the corresponding blocks of the Liouvillian: ℒ_S_1,S_1⊕ℒ_S_1,S_2⊕ℒ_S_2,S_1⊕ℒ_S_2,S_2. From this representation, we can also directly write down the stochastic master equation for the photocounting process. 
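The block structure of the Liouvillian can also be assembled directly from Kronecker products rather than from the explicit matrix elements above. The Python sketch below builds the generator ℒ_{S,S'} acting on the vectorized coherence block ρ^{S,S'} and stacks the four blocks needed for an initial state supported on two sectors; ω, κ, S_1 and S_2 are placeholder values, and degeneracy labels of the angular-momentum decomposition are ignored, as in the permutation-invariant treatment.

```python
import numpy as np
from scipy.linalg import block_diag

def ladder_ops(S):
    """S_x, S_+, S_- in the |S, m> basis, m = S, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    Sm = Sp.T.conj()
    return 0.5 * (Sp + Sm), Sp, Sm

def block(S, Sq, omega, kappa):
    """Generator of d/dt rho = -i omega (Sx rho - rho Sx') + kappa [S- rho S+' - (S+S- rho + rho S+'S-')/2],
    where primed operators act in sector S'; vectorized with vec(A rho B) = (B^T kron A) vec(rho)."""
    SxL, SpL, SmL = ladder_ops(S)
    SxR, SpR, SmR = ladder_ops(Sq)
    IL, IR = np.eye(int(2 * S + 1)), np.eye(int(2 * Sq + 1))
    L = -1j * omega * (np.kron(IR, SxL) - np.kron(SxR.T, IL))
    L += kappa * (np.kron(SpR.T, SmL)
                  - 0.5 * np.kron(IR, SpL @ SmL)
                  - 0.5 * np.kron((SpR @ SmR).T, IL))
    return L

omega, kappa = 4.0, 1.0
S1, S2 = 10, 8
L_full = block_diag(block(S1, S1, omega, kappa), block(S1, S2, omega, kappa),
                    block(S2, S1, omega, kappa), block(S2, S2, omega, kappa))
print("dimension of the vectorized two-sector problem:", L_full.shape)
```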
§.§ Partial dissipative freezing In the overdamped regime, dissipative freezing occurs only partially, as the system can reach a stationary state that still spans more than one symmetry sector. The weights on the different sectors vary during the transient to this stationary state, and their final values are random for each realization [see Fig. <ref> (a)]. For this reason we refer to this case as partial freezing. Stationary state of the photocounting process. – In order to understand why partial freezing occurs, we need to recall some properties of the stationary state in this regime. Within each sector, the stationary state is pure and it satisfies, to a very good approximation, the eigenvalue-like equation <cit.>: Ŝ_-|Ψ_S⟩≈ -iω/κ|Ψ_S⟩. The eigenvalue ω/κ is independent of S, as long as the corresponding sector is well within the overdamped regime. Notice that this is no longer the case in the crossover region, when ω approaches ω_c(S) from below. However, the relative size of the crossover region diminishes with S, as shown in Ref. <cit.>. The action of the effective Hamiltonian on this state gives: Ĥ_eff|Ψ_S⟩≈ -iω^2/(2κ)|Ψ_S⟩. The fact that the action of Ĥ_eff and Ŝ_- on |Ψ_S⟩ is independent of S prevents the general occurrence of dissipative freezing. This can be understood by considering the following superposition state between two total angular momentum sectors S_1 and S_2: |Ψ⟩=C_S_1|Ψ_S_1⟩+C_S_2|Ψ_S_2⟩. For simplicity we assume ideal photocounting η=1, although we recall that the same arguments hold for η<1. Assuming the first jump occurs at t_1, the (unnormalized) state just before the first count is: e^-iĤ_eff t_1|Ψ⟩=(C_S_1|Ψ_S_1⟩+C_S_2|Ψ_S_2⟩)e^-ω^2 t_1/(2κ). At t_1, we apply the jump operator and renormalize the state, obtaining: |Ψ(t_1)⟩= C_S_1|Ψ_S_1⟩+C_S_2|Ψ_S_2⟩, which shows that this family of states is invariant under the photocounting evolution. This argument can be generalized to any number of counts and to any superposition of |Ψ_S⟩ of different sectors, as long as all the sectors are well inside the overdamped regime. Partial freezing. – The photocounting process eventually reaches a state in which each of the initially populated sectors has reached its corresponding |Ψ_S⟩, since this state is an attractor of the dynamics for each sector <cit.>. Once there, the state is invariant under time evolution. This is illustrated in Fig. <ref>(a), in which we show two realizations of the dynamics with initial state |Ψ(0)⟩=1/√(2)(|S_1,0⟩+|S_2,0⟩). In both cases, after a short transient, the expected values of the projectors are constant in time, indicating partial freezing. This is in stark contrast to standard dissipative freezing <cit.> [e.g. Fig. <ref>(b)]. Nevertheless, we notice that the final weights of the different sectors can be very different, and the system might reach a final state in which it has mostly collapsed into one sector [e.g. Fig. <ref>(a) solid lines]. In fact, we observe that the final state resides mostly in one sector when S_1 and S_2 are not close to each other. In order to analyze this in more detail, we compute the standard deviation over stochastic realizations of P_S_1,2 after a long time, varying S_1-S_2 [see Fig. <ref>(c)]. When considering the initial state |Ψ(0)⟩, this standard deviation tends to 0.5 when the final state resides in just one sector, while it is smaller otherwise. From Fig. <ref>(c) we observe that this is the case as S_1-S_2 increases.
Thus, for large differences between the symmetry sectors, the final state resides mostly in one symmetry sector and, for practical purposes, there is no difference from standard dissipative freezing. Finally, in Fig. <ref>(b), we consider the case in which the sector with the smallest S is at the crossover, i.e. ω=κ S_2. Both for an initial superposition |Ψ(0)⟩ with η=1 and for an initial mixture ρ̂(0)=1/2(|S_1,0⟩⟨ S_1,0|+|S_2,0⟩⟨ S_2,0|) with η=0.5, standard dissipative freezing occurs and the state collapses randomly into one of the S sectors. Thus, when applying the adiabatic ramp of the Rabi frequency, the system eventually collapses (randomly) into one of the sectors, since it ultimately reaches a crossover region in which full collapse occurs.
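The eigenvalue-like relation Ŝ_-|Ψ_S⟩≈ -i(ω/κ)|Ψ_S⟩ quoted above can be checked numerically for a single sector. The Python sketch below obtains the stationary state of one sector (collective decay only) from the near-null eigenvector of its Liouvillian, verifies that it is close to pure, and evaluates the residual of the relation; S and ω/κ are placeholders chosen well below the crossover ω_c(S)=κ S.

```python
import numpy as np

def ladder_ops(S):
    m = np.arange(S, -S - 1, -1)
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
    Sm = Sp.T.conj()
    return 0.5 * (Sp + Sm), Sp, Sm

def stationary_state(S, omega, kappa=1.0):
    """Stationary density matrix of one sector with collective decay only."""
    Sx, Sp, Sm = ladder_ops(S)
    D = int(2 * S + 1)
    I = np.eye(D)
    H = omega * Sx
    L = -1j * (np.kron(I, H) - np.kron(H.T, I)) \
        + kappa * (np.kron(Sp.T, Sm) - 0.5 * np.kron(I, Sp @ Sm)
                   - 0.5 * np.kron((Sp @ Sm).T, I))
    vals, vecs = np.linalg.eig(L)
    rho = vecs[:, np.argmin(np.abs(vals))].reshape(D, D, order="F")
    rho /= np.trace(rho)
    return (rho + rho.conj().T) / 2, Sm

S, omega, kappa = 10, 3.0, 1.0              # overdamped: omega well below omega_c = kappa*S
rho_ss, Sm = stationary_state(S, omega, kappa)
purity = np.real(np.trace(rho_ss @ rho_ss))
w, v = np.linalg.eigh(rho_ss)
psi = v[:, -1]                               # dominant eigenvector, approximately |Psi_S>
residual = np.linalg.norm(Sm @ psi - (-1j * omega / kappa) * psi)
print(f"purity = {purity:.4f}, ||S- psi + i(w/k) psi|| = {residual:.3e}")
```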
http://arxiv.org/abs/2407.13718v1
20240718171712
Characterizing hydrogel behavior under compression with gel-freezing osmometry
[ "Yanxia Feng", "Dominic Gerber", "Stefanie Heyden", "Martin Kröger", "Eric R. Dufresne", "Lucio Isa", "Robert W. Style" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.mtrl-sci" ]
APS/123-QED Department of Materials, ETH Zürich, 8093 Zürich, Switzerland. Department of Materials, ETH Zürich, 8093 Zürich, Switzerland. Department of Civil, Environmental and Geomatic Engineering, ETH Zürich, 8093 Zürich, Switzerland. Department of Materials, ETH Zürich, 8093 Zürich, Switzerland. Department of Materials Science & Engineering, Cornell University, Ithaca, USA Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, USA Department of Materials, ETH Zürich, 8093 Zürich, Switzerland. []robert.style@mat.ethz.ch Department of Materials, ETH Zürich, 8093 Zürich, Switzerland. § ABSTRACT Hydrogels are particularly versatile materials that are widely found in both Nature and industry. One key reason for this versatility is their high water content, which lets them dramatically change their volume and many of their mechanical properties – often by orders of magnitude – as they swell and dry out. Currently, we lack techniques that can precisely characterize how these properties change with water content. To overcome this challenge, here we develop Gel-Freezing Osmometry (GelFrO): an extension of freezing-point osmometry. We show how GelFrO can measure a hydrogel's mechanical response to compression and osmotic pressure, while only using small, O(100 µL) samples. Because the technique allows measurement of properties over an unusually wide range of water contents, it allows us to accurately test theoretical predictions. We find simple, power-law behavior for both mechanical response to compression, and osmotic pressure, while these are not well-captured by classical Flory-Huggins theory. We interpret this power-law behavior as a hallmark of a microscopic fractal structure of the gel's polymer network, and propose a simple way to connect the gel's fractal dimension to its mechanical and osmotic properties. This connection is supported by observations of hydrogel microstructures using small-angle x-ray scattering. Finally, our results motivate us to propose an updated constitutive model describing hydrogel swelling, and mechanical response. Characterizing hydrogel behavior under compression with gel-freezing osmometry Robert W. Style July 22, 2024 ============================================================================== § INTRODUCTION Polymer gels are almost unique in their ability to swell or shrink by tremendous amounts, greatly changing their properties, while maintaining their elastic nature. Typically, solvent-swollen gels are soft, stretchable and permeable to transport of small molecules, while dry gels are stiff and impermeable, and tend to be glassy <cit.>. This ability to completely change material characteristics makes gels particularly versatile materials. This is especially true of hydrogels, which have the additional benefit of being generally biocompatible, due to their aqueous nature <cit.>. In biology, hydrogels make up a large portion of soft tissue, and nature utilizes their swelling ability for applications ranging from controlling plant-tissue stiffness, to swelling-induced actuators, membranes, and barrier coatings like skin <cit.>. Beyond this, hydrogels are finding increasing industrial and scientific usage. For example, they are well known as super-absorbent materials <cit.>, and have found extensive use as drug-delivery agents <cit.>, and in diverse applications including wound dressing <cit.>, agriculture <cit.>, and enhanced oil-recovery <cit.>. Hydrogels can also be made responsive to triggers such as light, pH and temperature <cit.>. 
Thus, recent work has focused on making materials like stimuli-responsive membranes and actuators, and active materials <cit.>. To work with hydrogels capable of huge changes in volume, we need techniques for characterizing hydrogel properties over a wide range of compressions. At a minimum, we need to know their mechanical constitutive behavior, and swelling properties (e.g. osmotic pressure). However, these are surprisingly challenging to measure. One approach is to perform `drained' compression tests, which let solvent drain out of the hydrogel, while measuring stress and strain <cit.>. This method works well for chemically-crosslinked gels, but requires large samples: at least O(mL) when using a typical rheometer. The tests also usually take hours to days to perform, as, at each step, enough time must be given for the solvent to fully drain out of the sample <cit.>. Alternatively, a gel's osmotic pressure can be measured with techniques such as membrane- <cit.> or vapor-pressure osmometry <cit.>. However, these also have associated challenges. For example, membrane osmometry is limited to osmotic pressures of less than 5 MPa <cit.>, while vapor-pressure osmometry suffers from difficulties in accurate measurement of a solvent's vapor pressure <cit.>. As with compression testing, both of these techniques require long experiments at the scale of hours, due to the need for lengthy equilibration times <cit.>. As a result of these challenges, there is very little data characterizing gel behavior over wide ranges of compression or swelling <cit.>. Here, we show how controlled freezing of hydrogels in the form of Gel Freezing Osmometry (GelFrO) offers a convenient technique for performing this characterization. In particular, we extract both the large-strain constitutive response, and the osmotic pressure of a hydrogel, using O(100 µL) gel samples. These samples drain rapidly in response to freezing, so each data point takes only a few minutes to collect. Both osmotic pressure and constitutive response take power-law forms. We suggest that this is due to hydrogels having a fractal microstructure, and use the results to derive a new constitutive model. § CHARACTERIZING ICE/HYDROGEL EQUILIBRIUM GelFrO is essentially an extension of the common technique of freezing-point osmometry (FPO), which is widely used to measure solution properties of aqueous systems. In FPO, the freezing temperature of an aqueous solution is measured, and this is related to the solution's osmotic pressure, Π, by ρ_w L (T_m-T/T_m)=Π. Here, ρ_w is the density of water, L is the specific latent heat of fusion of ice, T is the temperature and T_m is the freezing temperature of pure water at atmospheric pressure <cit.>. Here, temperatures are measured in Kelvin. While this works well for solutions, it cannot be straightforwardly applied to hydrogels, chiefly because the polymer network of the gel suppresses ice nucleation, and this will affect the result <cit.>. Furthermore, Equation (<ref>) needs to be modified to include additional terms from the elasticity of the polymer gel (see below). We re-imagine FPO for hydrogels by tracking the shrinkage of a gel layer in contact with ice, as the temperature reduces (Figure <ref>). 
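In practice, Equation (<ref>) is a one-line conversion from a measured freezing-point depression to an osmotic pressure. A minimal sketch (Python), using the standard water constants quoted later in the text, is:

```python
# Osmotic pressure from freezing-point depression, Eq. (1): rho_w L (T_m - T)/T_m = Pi
RHO_W = 998.0        # kg/m^3, density of water
L_FUS = 334e3        # J/kg, specific latent heat of fusion of ice
T_M   = 273.15       # K, melting point of pure water at atmospheric pressure

def osmotic_pressure_from_freezing_point(T_kelvin):
    """Pi in Pa for a solution that freezes at T_kelvin (< T_M)."""
    return RHO_W * L_FUS * (T_M - T_kelvin) / T_M

# one degree of depression corresponds to roughly 1.2 MPa
print(osmotic_pressure_from_freezing_point(T_M - 1.0) / 1e6, "MPa")
```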
Ice at subzero temperatures essentially exerts a temperature-dependent osmotic pressure, or `cryosuction' on an adjacent piece of hydrogel, which sucks water out of the gel, causing it to shrink (the water in the hydrogel itself does not freeze, as ice is prevented from growing into the hydrogel by capillarity <cit.>). Measuring this shrinkage allows us to characterize properties including large-deformation constitutive response and osmotic pressure. We demonstrate GelFrO with two of the most commonly-found charge-neutral, synthetic hydrogels: poly(ethylene glycol) diacrylate (PEGDA), and polyacrylamide (PAM). Three different PEGDA hydrogels are made from PEGDA monomers with molecular weights of 575, 700, and 1000 g/mol with as-prepared polymer volume fractions of ϕ_0= 15, 20, 13% respectively. Two PAM hydrogels are made from a mixture of acrylamide monomer, and bisacrylamide crosslinker. A softer gel is prepared with 8.01 vol% monomer and 0.07 vol% crosslinker (PAM 8.08%), while a stiffer gel is made with 9.18 vol% monomer and 0.39 vol% crosslinker (PAM 9.57%). All the gels are bonded to a glass cell using a silane coupling agent (see Materials and Methods). This choice of hydrogels provides materials with a range of different polymer volume fractions and mechanical properties. Table <ref> shows ϕ_0, and the drained Young's modulus, E_d for all the hydrogels. E_d was measured via indentation of bulk samples, for all the materials except PEGDA1000. The latter was considered too expensive to make bulk samples for testing. We perform freezing experiments using a temperature-controlled setup that fits on the stage of a confocal microscope <cit.>, as shown schematically in Figure <ref>. This allows us to apply uniform undercoolings to thin hydrogel samples, while simultaneously measuring their change in volume by tracking the movement of fluorescent, tracer nanoparticles embedded in the gels. The hydrogels are fabricated as ∼ 150 µm thick layers bonded to the bottom, glass surface of a water-filled sample cell that fits into the freezing stage (Figure <ref>, see Materials and Methods for more details). The top, glass surface of the cell is only loosely attached so that no confining stresses arise during the freezing process. We initially equilibrate hydrogel layers with bulk water in the sample cell at room temperature. At each subsequent step, we slowly reduce the temperature to a new temperature at a rate of 1 °C/min, and measure the equilibrium gel thickness, h. In general, equilibrium is reached in < 5 mins of changing the stage temperature, as indicated by h reaching a stable value (see Supplement). We also check that the strain in the hydrogel is constant across its thickness, indicating uniform hydrogel shrinkage (see Supplement). Below T_m= 0 °C, we nucleate ice in the bulk water (see Materials and Methods), and continue the process of gel/ice equilibration and thickness measurement across a range of different subzero temperatures. The dependence of hydrogel layer thickness on temperature is shown in Figure <ref> for the different hydrogels. These are given in terms of relative thickness changes h/h_0, where h_0=h(T_m). For most of the samples, there is a small expansion as the temperature reduces from room temperature to 0 °C. Immediately after ice formation (vertical dotted line), h/h_0 rapidly reduces, as the hydrogel is dehydrated by cryosuction from the ice <cit.>. Indeed, the hydrogels shrink more than 50% in the first 5 degrees. 
Upon further cooling, the thickness change slows as the gels approach their fully dehydrated limit (horizontal dash-dotted lines). During this whole process, there is no measurable shrinkage of the gel in the x and y directions, due to the bonding to the underlying substrate (see Supplement). § GELFRO AS A DRAINED COMPRESSION TEST We can interpret our measurements of h/h_0 in two distinct ways. Firstly, this test is analogous to a uniaxial, drained compression test on the hydrogel. Thus, the results give us information about the large-strain constitutive properties of the gel. Secondly, we can extract information about the gel's osmotic pressure as a function of polymer content. To relate our measurements to a hydrogel's constitutive response, we use an expression for a hydrogel's total stress <cit.>: σ_ij=σ_ij^el-Π_mix(ϕ)δ_ij-Δμ/v_wδ_ij, where σ_ij^el is the elastic stress in the polymer gel network. Π_mix(ϕ) is the mixing osmotic pressure due to polymer/solvent interactions – assumed to only depend on the polymer volume fraction, ϕ. Δμ is the difference between the chemical potential of the hydrogel's water content and that of pure water at atmospheric pressure at the same temperature. v_w is the volume of a molecule of water, and Δμ/v_w can be thought of as the pore pressure of the water in the gel. This expression follows from the common Frenkel-Flory-Rehner hypothesis, which assumes that the (generally nonlinear) contributions from network elasticity, mixing osmotic pressure, and solvent chemical potential are additive <cit.>. We also assume that there are no additional solutes in the hydrogel. In our system, σ_zz=0, as the gel layer is in mechanical equilibrium with the overlying, stress-free ice. Furthermore, water in the gel is in equilibrium with the overlying ice, which implies that Δμ/v_w=-ρ_w L (T_m-T)/T_m (see Materials and Methods). Thus, ρ_w L ( T_m-T/T_m)=Π_mix(ϕ) - σ^el_zz. This is a modified version of Equation (<ref>) that includes the elasticity of the crosslinked gel. It can be interpreted as the cryosuction from the ice (left side) being balanced by the mixing osmotic pressure and elastic stress coming from polymer chain compression (right side). To connect this freezing experiment to an equivalent mechanical test, we consider the stress state in a uniaxial, drained compression test on the same sample. A hydrogel sample is immersed in a water bath, and slowly compressed with a pressure P_ext, while the water in the gel has time to flow out of the gel. Then, the immersed gel is in equilibrium with the surrounding bath, so that Δμ=0. In this case, the z-component of Equation (<ref>) becomes σ_zz=-P_ext=σ_zz^el-Π_mix(ϕ). By comparing the two equations above, we see that applying stress in a drained test is equivalent to equilibrating the sample with ice, when P_ext=ρ_w L (T_m-T)/T_m. Given that ρ_w=998 kg/m^3, L=334 kJ/kg and T_m=273.15 K, every degree of undercooling effectively exerts 1.2 MPa of compressional stress on the hydrogel. Based on these considerations, we can re-plot our data in Figure <ref> as the results of a drained compression test. In particular, we plot P_ext as a function of the stretch in the sample, λ=h/h_0 in Figure <ref>. The figure shows how the hydrogels dramatically strain-stiffen as they are compressed. Close to the relaxed state (λ=1), the curves are all relatively flat, indicating a low tangent modulus (i.e. stiffness). 
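Under this equivalence, a measured thickness-versus-temperature curve maps directly onto a drained stress-stretch curve. A minimal post-processing sketch in Python is given below; the thickness values in the example are placeholders, not measured data.

```python
import numpy as np

RHO_W, L_FUS, T_M = 998.0, 334e3, 273.15           # water constants used in the text

def drained_curve(T_celsius, h, h0):
    """Map gel thickness h(T) below 0 C onto (stretch, equivalent drained stress in Pa)."""
    T = np.asarray(T_celsius) + 273.15
    lam = np.asarray(h) / h0                        # lambda = h / h0
    P_ext = RHO_W * L_FUS * (T_M - T) / T_M         # roughly 1.2 MPa per degree of undercooling
    return lam, P_ext

# placeholder example: thickness dropping over the first few degrees below freezing
T_c = np.array([-0.5, -1.0, -2.0, -5.0, -10.0])
h   = np.array([140.0, 120.0, 95.0, 70.0, 55.0])    # micrometres (illustrative only)
lam, P = drained_curve(T_c, h, h0=150.0)
print(np.c_[lam, P / 1e6])                          # stretch vs equivalent stress in MPa
```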
Consistent with our mechanical indentation tests, the gels with larger E_d (Table <ref>) have higher tangent moduli in this range. As compression increases (λ decreases), the curves become steeper, indicating significant stiffening. Indeed, by comparing slopes, we see increases in stiffness from <O(1 MPa) at λ=1, to O(100 MPa) when compressed. This fits with the intuition that hydrogels become much stiffer as they dry out. Interestingly, the mechanical response of all the gels appears to be well-approximated by power laws. This can be seen in the inset of Figure <ref>, which shows that all the data sets are approximately linear on a log-log plot. This is especially true for the PEGDA hydrogels, while there are some deviations in the PAM gels at large deformation. The results allow us to fit an empirical constitutive model to the hydrogel response over a large range of stretches. This is in contrast to most other types of poroelastic mechanical tests, which do not allow such a fitting, as they only measure small-strain, linear response (e.g. <cit.>). For this fitting, we could use a simple power law: P_ext=αλ^-m. However, this would not be stress-free at λ=1, as required. Thus, we make the simplest possible extension to the power law, by adding a linear term: P_ext=α(1/λ^m-λ). This empirical constitutive model captures both the power-law behavior at small λ, and has a stress-free state at λ=1. Fits to this model are shown in the Supplement, and have a similar accuracy to the simple power-law fits shown in Figure <ref>. Although Equation (<ref>) is empirical, we shall see later how it can be derived from a more general constitutive model for hydrogel behavior. § MEASURING GEL OSMOTIC PRESSURE Our freezing results also permit measurement of Π_mix(ϕ) (e.g. <cit.>). Equation (<ref>) tells us that the data in Figure <ref> represent the combined contributions of Π_mix and σ^el in the gel's polymer network. However, the latter is generally negligible in our data, as σ^el∼ E_d(1-λ), and E_d≪ P_ext for the majority of our data. This is especially true at large compressions. Thus, to good approximation, P_ext≈Π_mix(ϕ). Based on these considerations, we convert Figure <ref> into a plot of osmotic pressure versus polymer content by setting P_ext=Π_mix (Figure <ref>). To calculate polymer volume fraction, we note that ϕ=h_p/h and ϕ_0=h_p/h_0, where h_p is the thickness of the completely dry hydrogel layer. Combining this with the definition that λ=h/h_0 yields ϕ=ϕ_0/λ. To aid later comparison with theoretical predictions, we plot the reduced osmotic pressure Π̅_mix=Π_mix v_w/k_bT, where k_b is Boltzmann's constant. For all the gels, Π̅_mix is small at low ϕ, before increasing significantly as polymer fraction increases – as expected for hydrophilic polymers. Furthermore, comparing the three PEGDA hydrogels in Figure <ref>A, we see that the higher the molecular weight of the PEGDA, the larger the osmotic pressure, and hence the more hydrophilic the gel is. This agrees with previous results based on measurements of PEGDA's swelling properties in water <cit.>. This technique allows us to measure osmotic pressure for a relatively wide range of polymer content. Indeed, the PAM hydrogels in Figure <ref>B reach over 70% polymer at the coldest temperatures. This is sufficiently large that some hydrogels are expected to go through a glass transition, ϕ_g. For PAM, we can estimate the glass transition as ϕ_g≈ 0.73, based on previous work <cit.>. This is denoted in Figure <ref>B by the grey areas.
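A minimal sketch of fitting the empirical law above, together with the λ→ϕ conversion used next, is given below (Python/SciPy); the stress-stretch arrays are placeholders standing in for the measured curves, and α and m are the free fit parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def constitutive(lam, alpha, m):
    """Empirical drained response P_ext = alpha (1/lambda^m - lambda), stress-free at lambda = 1."""
    return alpha * (lam ** (-m) - lam)

def to_volume_fraction(lam, phi0):
    """phi = phi0 / lambda, with phi0 the as-prepared polymer volume fraction."""
    return phi0 / lam

# placeholder stress-stretch data (stress in MPa); in practice these come from the freezing test
lam_data = np.array([0.95, 0.80, 0.65, 0.50, 0.40, 0.33])
P_data   = np.array([0.1, 0.9, 3.0, 8.0, 15.0, 24.0])

(alpha, m), _ = curve_fit(constitutive, lam_data, P_data, p0=(0.5, 3.0))
print(f"alpha = {alpha:.2f} MPa, m = {m:.2f}")
print("phi values:", np.round(to_volume_fraction(lam_data, phi0=0.15), 3))
```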
The PEGDA samples are not expected to be glassy. We can use our measurements of Π_mix(ϕ) to test classical predictions from polymer physics. In particular, Flory-Huggins theory for polymer/solvent mixtures forms the basis for the majority of theories describing hydrogel swelling, and predicts that <cit.>: Π_mixv_w/k_bT=-[ln(1-ϕ)+ϕ +χϕ^2]. Here, χ is the dimensionless monomer/solvent interaction parameter. Note that this expression is based on assuming that polymerized gels consist of very long polymer chains <cit.>. Figure <ref> shows the best fits for this equation to the osmotic pressure for each of our different hydrogels (dashline curves), obtained by fitting a constant value of χ to each data set. These fits do not do a good job, which is not entirely surprising, as Flory-Huggins theory is known to make a number of assumptions that leads to poor quantitative comparison with experiments <cit.>. Instead, we again find good agreement with a simple power law: Π̅_mix=Bϕ^n, as clearly shown by the fitted curves (solid curves) in Figure <ref>, and the corresponding log-log plots in the inset. There are significant deviations in the PAM samples at high concentrations. However, this occurs for ϕ>0.73, where we expect the sample to become glassy. Thus, it is not surprising to see a behavior change there, and we omit those points from the fitting. The three PEGDA hydrogels have similar power-law exponents (n=5.1, 4.6, 4.4 for PEGDA 575, 700 and 1000 respectively), as highlighted by their similar slopes on the log-log plot. However, this exponent is not universal: the PAM 8.08% and PAM 9.57% hydrogels have significantly smaller exponents of n=2.9 and n=3.2, respectively. § RELATING Π_MIX TO GEL MICROSTRUCTURE Our observation of a power-law scaling for Π_mix may reveal information about the gel microstructure, mirroring similar results in polymer solutions <cit.>. For example, semi-dilute polymer solutions typically obey Des Cloiseaux's theory <cit.>: Π_mixv_w/k_bT=Aϕ^3/(3-d). Here, A is a constant that depends on the polymer/solvent combination, and d is the fractal dimension of the polymer's conformation in microscopic `correlation blobs' in the solution. This fractal nature comes from the self-avoiding random walks performed by the polymer chains in the blobs. At scales significantly larger than the blobs, a solution is essentially uniform. Thus, Equation (<ref>) shows that measuring Π_mix(ϕ) directly gives access to d. We hypothesize that Equation (<ref>) also applies to many gels. Like semi-dilute polymer solutions, gels are often thought of as being fractal in microscopic blobs below a certain correlation length, ξ <cit.>. For some gels – like PAM with low crosslinker concentration – this is because their polymer structure consists of long polymer chains that are loosely crosslinked together. Thus, their structure is similar to that of a solution of the same chains <cit.>. In gels made from short, network-forming oligomers – like PEGDA – the random gelation process should create a fractal network at the gel's mesh-scale <cit.>. Thus, in general, the polymer arrangement in many gels can be described mathematically in the same way as a semi-dilute polymer solution: uniform at large scales, and fractal at small scales. Indeed, this structural similarity is often assumed when interpreting scattering experiments (e.g. <cit.>). 
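The comparison between the power-law form and Flory-Huggins theory can be reproduced with a short script. The Python sketch below fits both forms to a reduced-osmotic-pressure curve (here synthetic placeholder data) and converts the power-law exponent into an apparent fractal dimension via d=3-3/n, as used in the next section.

```python
import numpy as np
from scipy.optimize import curve_fit

def flory_huggins(phi, chi):
    """Reduced osmotic pressure, Pi v_w / k_B T = -[ln(1 - phi) + phi + chi phi^2]."""
    return -(np.log(1.0 - phi) + phi + chi * phi ** 2)

def power_law(phi, B, n):
    """Reduced osmotic pressure, Pi v_w / k_B T = B phi^n."""
    return B * phi ** n

# placeholder data (reduced osmotic pressure vs polymer volume fraction)
phi = np.array([0.15, 0.20, 0.30, 0.40, 0.50, 0.60])
Pi  = 2.0 * phi ** 4.5                                # synthetic, roughly PEGDA-like

(chi,), _ = curve_fit(flory_huggins, phi, Pi, p0=(0.45,))
(B, n), _ = curve_fit(power_law, phi, Pi, p0=(1.0, 4.0))
d_apparent = 3.0 - 3.0 / n                            # fractal dimension implied by the scaling form
print(f"chi = {chi:.3f}, B = {B:.2f}, n = {n:.2f}, d = {d_apparent:.2f}")
```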
We expect that Π_mix should only depend on polymer arrangement, so their structural similarity suggests that semi-dilute polymer solutions and fractal gels should have the same equation governing their osmotic pressure. i.e. Equation (<ref>) should also apply to such gels. This hypothesis can also be reached with a scaling argument, as shown in the Materials and Methods. We use the assumption that Equation (<ref>) applies to our gels to extract the apparent fractal dimension of their microstructures. We equate Equation (<ref>) to our empirical fitting Π̅_mix=Bϕ^n to obtain A and d=3-3/n. The resulting values, A_o and d_o, are given in Table <ref>. For the PEGDA hydrogels, d_o is in a narrow range between 2.32 and 2.41. Indeed, when considering error bars on the measurement, these are consistent with a single value of d_o = 2.37 for the three gels. However, this fractal dimension is not universal for all the gels. d_o takes a different, lower value for the polyacrylamide gels. d_o=1.96 and 2.06 for low/high crosslinker-content respectively, consistent with a single value of d_o=2.02. This is unsurprising, given the different microstructures expected in PAM and PEGDA. Interestingly, the value for PAM is close to d=2, which is seen when polymer chains in a theta solvent undergo ideal random walks. For all the gels, A_o is O(1) – as expected if the scaling arguments underlying the Equation (<ref>) are correct. We test the extracted fractal dimensions, d_o by performing small-angle X-ray scattering (SAXS) experiments on our hydrogels to detect and characterize fractal microstructures. Figure <ref> shows the scattered intensity, I(q) for each gel, where q is the magnitude of the scattering vector. If a fractal microstructure exists with dimension d, we would expect it to give rise to a power-law decay in I(q)∼ q^-d over the range of q corresponding to the range of length-scales spanned by the fractal <cit.>. At longer wavelengths – i.e. where q≲ 2π/ξ, the gel becomes uniform. Thus, we expect I(q) to plateau at small q. Indeed, this is exactly the behavior predicted by the equation describing the scattering of a fractal microstructure with correlation length ξ <cit.>: I(q) = Ksin[(d - 1)tan^-1(ξ q)]/(d - 1)(ξ q) [ 1 + (ξ q)^2 ]^d - 1/2. Here, K is a constant pre-factor. Note that this is a generalization of scattering functions that describe semi-dilute polymer solutions, as equation (<ref>) reduces to the Lorentzian function when d=2 (the fractal dimension expected for polymer chains in a theta solvent). The PEGDA hydrogels show rather convincing evidence of a fractal microstructure. The PEGDA I(q) curves in Figure <ref> show power-law behavior for q in the approximate range of 0.1 nm^-1-1 nm^-1. These q values correspond to a range of length-scales that are somewhat bigger than the PEGDA oligomer size (the calculated radius of gyration, R_g, of each oligomer is given in Table <ref>), where we expect to see the fractal behavior <cit.>. At even lower q, we also see I(q) start to flatten out, indicative of the fact that we start to probe length-scales comparable to ξ. Indeed, we can fit this whole, low-q range of I(q) with the fractal model in Equation (<ref>). These fits are shown by the continuous curves in Figure <ref>, with the corresponding fitting parameters (d_s,ξ) given in Table <ref> (errors are 95% confidence bounds for the fit). Here, we choose to fit the range q<0.4 nm^-1, as the scattering there should be independent of the form factor of the oligomers (see Supplement). 
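For fitting, it helps to have the fractal structure factor in code. The Python sketch below implements Equation (<ref>) with the exponent read as (d-1)/2 and the factor (d-1)(ξ q) in the denominator (the grouping that reduces to a Lorentzian at d=2, as stated above), and fits K, ξ and d over a restricted q-range; the intensity data in the example are synthetic placeholders rather than measured SAXS curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def fractal_intensity(q, K, xi, d):
    """I(q) = K sin[(d-1) arctan(xi q)] / {(d-1)(xi q) [1 + (xi q)^2]^((d-1)/2)}."""
    x = xi * q
    return K * np.sin((d - 1.0) * np.arctan(x)) / ((d - 1.0) * x * (1.0 + x ** 2) ** ((d - 1.0) / 2.0))

# sanity check: d = 2 reduces to the Lorentzian K / (1 + (xi q)^2)
q = np.linspace(0.03, 0.4, 50)                      # nm^-1, low-q fitting range used in the text
assert np.allclose(fractal_intensity(q, 1.0, 5.0, 2.0), 1.0 / (1.0 + (5.0 * q) ** 2))

# placeholder "measured" intensity with noise; real data would come from the SAXS reduction
rng = np.random.default_rng(1)
I_data = fractal_intensity(q, 3.0, 6.0, 2.4) * (1.0 + 0.02 * rng.standard_normal(q.size))

(K, xi, d), _ = curve_fit(fractal_intensity, q, I_data, p0=(1.0, 5.0, 2.2))
print(f"K = {K:.2f}, xi = {xi:.2f} nm, d = {d:.2f}")
```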
The fact that the fractal of dimension of PEGDA gels is always larger than d=2 is consistent with our expectation that this fractal structure is a crosslinked fractal mesh. The PAM hydrogels can also be analyzed in the same way, although the evidence for a fractal microstructure is not so clear-cut. The PAM curves in Figure <ref> appear to show power-law behavior in a q range in the approximate range of 0.8 nm^-1-5 nm^-1. This plateaus at lower q, and can be fitted with Equation (<ref>) to obtain ξ and d_s (Table <ref>). However, the results must be viewed cautiously, as this fitting range is close to the atomic size, where we expect additional effects to control I(q) – such as contributions from the form factor of the Kuhn segments that make up the polymer (e.g. <cit.>). To avoid these effects, and the noisy data at low q, we only fit the range 0.1 nm^-1<q<1  nm^-1 (see Supplement for further details). For the PAM gels, the measured fractal dimensions are consistent with that of polymer chains undergoing random walks (N.B. d=2 in a theta solvent, and d=1.67 in a good solvent <cit.>). This is consistent with our expectation that PAM gels consist of loosely crosslinked polymer chains. The extracted values of d_s are close to measured values of d_o. Indeed, for the two higher molecular weight PEGDA hydrogels, and the PAM 8.08%, there is less than 5% error. For the other two hydrogels, the error is less than 20%. This is certainly reasonable agreement, given uncertainties in measuring fractal properties, which largely arise due to the limited q-range over which fractal behavior appears in I(q). Indeed, the observed agreement is surprisingly good, as we only expect agreement when there is a clear fractal in the hydrogel at all microscopic scales smaller than some length scale ξ – something which is not entirely obvious in all our samples. To summarize, the fact that we see reasonable agreement between values of d – measured with two completely different techniques – provides encouraging evidence for a relationship between a hydrogel's microstructure and osmotic pressure. A true confirmation will require additional validation across a range of different materials. § A SIMPLIFIED HYDROGEL CONSTITUTIVE MODEL Our results have direct implications for modeling hydrogel mechanical behavior, as they motivate a simple constitutive model. In particular, returning to Equation (<ref>), we can insert our power-law expression for Π_mix(ϕ) to obtain a new description of hydrogel behavior. To complete this, we use one of the simplest descriptions of a polymer gel – the phantom-network model <cit.> – which gives the principal stresses in the elastic network as σ_i^el=ρ_c k_b T (λ_0λ_i)^2/λ_0^3λ_1 λ_2 λ_3. Here, λ_i are the principal stretches relative to the equilibrium swollen state polymer state, and λ_0 is the stretch from the dry state to the equilibrium swollen state. ρ_c is a constant that is inversely proportional to the volume of a polymer chain between crosslinks. With this, Equation (<ref>) yields an expression for the principal stresses in the hydrogel as: σ_i=ρ_c k_b T/λ_0λ_i^2/λ_1 λ_2 λ_3-Ak_bT/v_wϕ^n-Δμ/v_w. We apply this constitutive model to our drained compression results in Figure <ref> by setting σ_3=σ_zz=-P_ext and Δμ=0. The corresponding stretches are λ_3=λ=1/(ϕλ_0^3), and λ_1=λ_2=1. Then, Equation (<ref>) becomes: P_ext=Ak_bT/v_wϕ^n-ρ_c k_bT λ/λ_0. In our experiment, λ=1 at 0 °C, while P_ext=0. 
Inserting this constraint allows us to eliminate the constant ρ_c: P_ext=A k_bT/v_w λ_0^3n(1/λ^n-λ). This is exactly the same as the empirical fit that we found earlier (Equation <ref>). Beyond our own data, we can apply this model to predict the equilibrium hydrogel swelling, λ_0. For a fully-swollen gel in equilibrium with water, σ_ij=0, ϕ=λ_0^-3, λ_i=1, and Δμ=0. Then, Equation (<ref>) becomes 0=-Ak_bT/v_w1/λ_0^3n+ρ_c k_bT/λ_0, so that the sample stretch between dry and equilibrium swelling is given by λ_0=(A/ρ_c v_w)^1/3n-1=(A/ρ_c v_w)^3-d/6+d. Here, the last equality assumes that n is determined by a fractal structure of the hydrogel. This shows how, the gel is expected to swell more, the smaller d is. Like the more complex Flory-Rehner theory <cit.>, this equation relates the equilibrium swelling of a hydrogel to various material properties (A, ρ_c and d, in particular). Thus, it could be used to extract information about these values from simple swelling experiments. § CONCLUSIONS In conclusion, we have shown that freezing tests can be used to characterize hydrogel behavior over a wide swelling range. In particular, these allow measurement of both a hydrogel's drained mechanical response under compression and its mixing osmotic pressure. Our technique brings several advantages. Firstly, it allows the measurement of hydrogel properties over a large compressive range. For example, we could apply up to 25 MPa of pressure to samples in gel compression tests. By way of comparison, previous works that used mechanical compression could apply maximum pressures that were over two orders of magnitude smaller <cit.>. Secondly, our technique permits the use of small samples. Here, we used ∼100 µL per experiment: much smaller than typical volumes used for mechanical compression tests. Finally, our technique is much faster than alternative techniques. The length of experiments is predominantly set by the time required for water to flow out of/into a gel so that it can reach thermodynamic equilibrium. This time scales as L^2/D, where L is the smallest dimension of the sample, and D is the poroelastic diffusivity <cit.>. Due to this quadratic dependence upon L, thin samples will equilibrate much faster than thick ones. Thus, while our O(100 µm-thick) samples take only a few minutes to equilibrate, bulk samples used in other approaches take orders of magnitude longer. In terms of limitations, there are two points worth bearing in mind: this technique can only be used near the solvent's freezing temperature, and it can not be used with thermoresponsive materials, where Π_mix depends significantly upon T (see Supplement for further details) <cit.>. These are identical to the limitations that arise in freezing-point osmometry. Because we are able to measure properties over such a wide range of compressions, our technique permits new insights into hydrogel mechanical and osmotic behavior. For example, the osmotic pressure of our gels has a simple, power-law dependence on polymer volume fraction, rather than following the predictions of classical polymer solution theory. This casts doubt over the use of the wide range of constitutive models for polymer gels that are based upon this classical theory (e.g. <cit.>). Instead, our results motivate a modified constitutive model for gels. In the future, it will be interesting to apply this model to hydrogel processes such as free-swelling, desiccation, fracture, volumetric phase transitions, and phase separation <cit.>. 
Furthermore, our results suggest that gel osmotic pressure may be controlled by the form of the gel's microstructure. In particular, we propose that a gel with a fractal microstructure should have a power-law dependence of osmotic pressure upon polymer content, with an exponent that is a simple function of the fractal dimension. When we compare osmotic pressure measurements with fractal dimensions measured via scattering, we see that our results are consistent with this hypothesis. However, further testing is needed for a complete validation. There are several directions in which GelFrO can be extended. In particular, it will be important to adapt the technique to characterize polyelectrolyte gels and gels containing solutes. These are more complex than our charge-neutral gels, as ions/solutes result in an increased gel osmotic pressure <cit.>. A separate direction could be to use free hydrogel samples (not attached to a rigid base) that can shrink isotropically during freezing. Such tests are equivalent to isotropic, drained compression tests. Combining such results with uniaxial compression results, like those presented here, will allow the accurate fitting of constitutive models to data from multiple different deformation modes: a necessity for accurate model fitting <cit.>. § MATERIALS AND METHODS §.§ Experiments We prepare hydrogels in temporary sample cells. These are made with two 150 µm-thick, glass coverslips, separated by 150 µm spacers. The bottom coverslip is silanized with 3-(trimethoxysilyl)propyl acrylate to promote hydrogel bonding <cit.>. We fill the cells with hydrogel precursor solution containing a photoinitiator, and expose them to UV light (wavelength: 365 nm) for 1 hour at a power density of (0.39 ± 0.07) mW/cm^2. The resulting polymerized gels are then left overnight at room temperature in a humid container before imaging. To make PEGDA precursor solutions, we mix de-ionized water (18.2 MΩcm) with PEGDA of different molecular weights (575, 700 and 1000 g/mol from Sigma-Aldrich at respective concentrations of 15, 20 and 13 vol%), 0.1 vol% of 2-hydroxy-2-methylpropiophenone photoinitiator (Tokyo Chemical Industry), and 0.05 vol% of fluorescent nanoparticles (200 nm diameter, red, carboxylate-modified Fluospheres, Thermo Fisher Scientific). To make the two different polyacrylamide precursor solutions, we mix deionized water with acrylamide monomer (Sigma-Aldrich), N, N'-Methylenebisacrylamide crosslinker (Sigma-Aldrich), 0.1 vol% of photoinitiator, and 0.05 vol% of fluorescent nanoparticles. The first polyacrylamide solution has 8.01 vol% of monomer and 0.07 vol% of crosslinker. The second solution has 9.08 vol% of monomer and 0.39 vol% of crosslinker. All precursor solutions are thoroughly mixed and sonicated for 10 mins before use. In our experiments, we measure the equilibrium thickness of our hydrogels under different conditions with a confocal microscope. We take 3-dimensional (3-D) image stacks of fluorescent particles dispersed in the gels with a 20x air objective (Numerical aperture 0.17) on a confocal microscope (Nikon Ti2 Eclipse) equipped with a Spinning Disk Confocal system (561 nm laser, 3i). We first image as-prepared hydrogels at various marked positions in the temporary sample cells (away from the cell edges). We then convert the temporary cells into freezing cells. We remove the top glass coverslip from the temporary cells to leave the bottom coverslip with its attached hydrogel coating. 
We then add 450 µm spacers with a new top coverslip, and fill the resulting gap above the hydrogel with de-ionized water (Figure <ref> top). We then place the freezing cells in the freezing stage (Figure <ref>) and mount them on the confocal microscope. Next, we reduce the temperature of the sample from room temperature to 0 °C in a step-wise fashion, while regularly imaging the hydrogel. Finally, we nucleate ice in the cell (Figure <ref> top), and continue to step-wise reduce the temperature below 0 °C, while imaging the hydrogel. To form ice in the freezing cell, we first cool one side of the sample slightly below 0 °C. This is achieved by independently tuning the temperatures of the copper blocks in our freezing stage. Then, we nucleate ice by touching the cold side of the sample with a cotton swab that has been dipped in liquid nitrogen. After nucleation, we slowly decrease the temperature of the warm side of the cell (at 0.1 °C/min) until there is a uniform, subzero temperature across the whole sample, and proceed with the freezing experiment. We measure relative thickness changes in the hydrogel by tracking displacements of fluorescent nanoparticle tracers dispersed throughout in the gel. We first locate 3-D tracer positions in confocal image stacks using a code based on Matlab's regionprops3 function. We then calculate tracer displacements by tracking changes in these positions between time-points, using a large-displacement tracking code <cit.>. Finally, we use these displacements to calculate the relative (uniform) vertical shrinkage, h/h_0 of the hydrogel layer. This analysis involves the use of a small correction accounting for the hydrogel's refractive index (e.g. see the supplement of <cit.>, and the Supplement of this work). Small-strain, drained moduli for the hydrogels are measured via long-time poroelastic indentation experiments <cit.>. We indent bulk hydrogel samples (1.3 cm thickness, 9 cm diameter) with a 10 mm radius, spherical indenter attached to a texture analyzer with a 500 g load cell (TA.XT Plus, Stable Microsystems). The results are analyzed following <cit.> to give the drained shear modulus, and drained Poisson ratio, from which E_d is calculated. For details see the Supplement. We perform SAXS measurements using a Xenocs SAXS instrument, equipped with a Cu radiation (λ = 0.154 nm) source. We make samples in quartz capillaries with an outer diameter of 1 mm and a wall thickness of < 0.1 mm. For each sample, we collect data using two different sample-to-detector distances, L_ sd. Firstly, we record data with L_ sd=350 mm, and a beam size of 0.7 mm × 0.7 mm. In this case, the magnitude of the scattering vector, q varies between 0.13 nm^-1 and 7.3 nm^-1. Subsequently, we record data with L_ sd=1650 mm, and a beam size of 0.25 mm × 0.25 mm. Then, q varies between 0.025 nm^-1 and 1.3 nm^-1. For both cases, we record for 900 s to achieve an adequate signal-to-noise ratio. Each test is repeated 5 times for each sample, and the results are combined and averaged. Finally, we perform a background subtraction, by subtracting the scattering intensity from a deionized-water-filled quartz capillary – yielding the scattered intensity, I(q). §.§ Δμ for hydrogels in equilibrium with ice To derive Equation (<ref>), we need an expression for the difference between the chemical potential of water in a hydrogel that is in equilibrium with ice at (T,P_a), and the chemical potential of water at (T,P_a), where P_a is atmospheric pressure, and T<T_m: Δμ=μ_i(P_a,T)-μ_w(P_a,T). 
Here, μ_w and μ_i are the chemical potentials of water and ice respectively, and we have used the fact that water in equilibrium with ice has the same chemical potential as the ice. To evaluate this, we Taylor-expand this expression about (P_a,T_m), and recall that μ_i(P_a,T_m)=μ_w(P_a,T_m) (because water and ice are in equilibrium at (P_a,T_m)) to find that Δμ=(T-T_m)∂μ_i/∂ T-(T-T_m)∂μ_w/∂ T. For a simple, one-component material, μ=g m, where g is the specific Gibbs free energy, and m is the molecular mass. Further, dg=-sdT+(1/ρ)dP, where ρ is the material's density, and s is the specific entropy <cit.>. Thus, ∂μ/∂ T=m∂ g/∂ T=-ms, and we obtain that Δμ=m(T-T_m)(s_w-s_i)=-m L(T_m-T)/T_m. In the last equality, we have used the identity L=(s_w-s_i)T_m. Finally, we note that m/v_w=ρ_w, so Δμ/v_w=-ρ_w L(T_m-T)/T_m, which is the result used in the main text. §.§ The mixing osmotic pressure of a fractal gel From purely dimensional arguments, Π_mix for a fractal gel must take the form Π_mix=k_bT c f(ϕ,N), where f is an unknown, dimensionless function, ϕ is the volume fraction of polymer, N is the degree of polymerization of the polymer, and c=ϕ/b^3 is the number density of monomers in the gel, where b is the monomer size. While f∝ N^-1 for short chains in Van't Hoff's regime <cit.>, it should become independent of polymerization degree for large N <cit.>, so f=f(ϕ). This is essentially a generalized virial expansion for Π_mix. We now use the scaling hypothesis <cit.> that says that the expression for Π_mix should remain unchanged, if we replace our monomer with a `supermonomer' consisting of an arbitrary number λ of monomers. In this case, the supermonomer concentration is c'=c/λ, and the size of a supermonomer is b'=λ^ν b. The latter expression comes from the fractal nature of the network, with fractal dimension d=1/ν. Thus, Π_mix=k_bT c f(c b^3)=k_bT c' f(c' b'^3). Finally, we use the common power-law ansatz, f(ϕ)∝ϕ^α <cit.>. Thus, c f(c b^3) = A' c^α+1 b^3α=A' c'^α+1 b'^3α = A' c^α+1 b^3αλ^α(3ν-1)-1, where A' is a dimensionless constant. The only way for this to be insensitive to the choice of λ, is if α=(3ν-1)^-1. Thus we find that Π_mix=A'k_bTc ϕ^1/(3ν-1)=A'k_bT/b^3ϕ^3/(3-d). Defining A=A'v_w/b^3, we get the result (<ref>) in the main text. We thank Shaohua Yang, Dr. Charlotta Lorenz, Dr. Viviane Lütz and Dr. Alan Rempel for their helpful discussions. We thank Dr. Thomas Weber for helping with SAXS measurements. We acknowledge support from the Swiss National Science Foundation (200021-212066, PZ00P2186041). 67 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Delavoipiere et al.(2016)Delavoipiere, Tran, Verneuil, and Chateauminois]delavoipiere2016poroelastic author author J. Delavoipiere, author Y. Tran, author E. Verneuil, and author A. Chateauminois, @noop journal journal Soft Matter volume 12, pages 8049 (year 2016)NoStop [Gehrke et al.(1997)Gehrke, Fisher, Palasis, and Lund]gehrke1997factors author author S. H. Gehrke, author J. P. Fisher, author M. Palasis, and author M. E. Lund, @noop journal journal Annals of the New York Academy of Sciences volume 831, pages 179 (year 1997)NoStop [Tibbitt and Anseth(2009)]tibbitt2009hydrogels author author M. W. Tibbitt and author K. S. 
http://arxiv.org/abs/2407.13241v1
20240718075046
NODER: Image Sequence Regression Based on Neural Ordinary Differential Equations
[ "Hao Bai", "Yi Hong" ]
cs.CV
[ "cs.CV", "cs.AI" ]
NODER Bai and Hong Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China yi.hong@sjtu.edu.cn NODER: Image Sequence Regression Based on Neural Ordinary Differential Equations Hao Bai Yi Hong July 22, 2024 ================================================================================= § ABSTRACT Regression on medical image sequences can capture temporal image pattern changes and predict images at missing or future time points. However, existing geodesic regression methods limit their regression performance by a strong underlying assumption of linear dynamics, while diffusion-based methods have high computational costs and lack constraints to preserve image topology. In this paper, we propose an optimization-based new framework called NODER, which leverages neural ordinary differential equations to capture complex underlying dynamics and reduces its high computational cost of handling high-dimensional image volumes by introducing the latent space. We compare our NODER with two recent regression methods, and the experimental results on ADNI and ACDC datasets demonstrate that our method achieves the state-of-the-art performance in 3D image regression. Our model needs only a couple of images in a sequence for prediction, which is practical, especially for clinical situations where extremely limited image time series are available for analysis. Our source code is available at <https://github.com/ZedKing12138/NODER-pytorch>. § INTRODUCTION In medical image analysis, image sequences like longitudinal image scans or image time series provide rich spatio-temporal information for studying the mechanisms of human aging and the patterns of disease development. Regression on temporal image sequences <cit.> is a commonly-used technique to explore the relationship between images and their associated time attribute. However, in practice, regression on medical image sequences, especially longitudinal 3D image volumes, is facing the following three challenges: (i) Missing data. Collecting regular follow-up scans of a subject is a challenging task. Often, we have missing scans at one or more time points for each subject. (ii) High-dimension low-sample size data. In this paper, we tackle 3D medical image sequences, and each volume is high-resolution three-dimensional images with millions of voxels, while each sequence has only tens of image scans for regression. (iii) Semantic richness but with subtle temporal changes. Each volume has detailed spatial information about tissue structures, which is non-trivial to generate; at the same time, the temporal changes of these tissues are often subtle, which is difficult to capture without a special design or treatment to model the temporal dynamics. To address the above challenges, there are two categories of regression approaches, i.e., the optimization-based methods like geodesic regression <cit.> and the learning-based methods like regression based on diffusion models <cit.>. Geodesic regression extends linear regression to Riemannian manifolds, which is developed in the framework of Large Deformation Diffeomorphic Metric Mapping (LDDMM)<cit.>. By generalizing diffeomorphic image registration to temporal image data, the regression can compactly model the spatial deformations over time <cit.>. However, solving the underlying optimization problem is computationally expensive. 
Therefore, a simplified approximation method, i.e., Simple Geodesic Regression (SGR) <cit.>, has been proposed, which decouples the iterative optimization of regressing geodesics into pairwise image registrations. To further reduce the computational time with the help of deep learning techniques, the Fast Predictive Simple Geodesic Regression (FPSGR) <cit.> is proposed by utilizing a fast predictive registration method. Although this method is computationally efficient, its regression accuracy is limited by its assumption of linear temporal changes. The diffusion-based model is a recent popular alternative to generate high-quality images. One recent image regression method is the Sequentially Aware Diffusion Model (SADM) <cit.>, which augments diffusion models with a sequence-aware transformer as a conditional module. Like geodesic regression, diffusion-based methods can also handle the missing data issue and allow for autoregressive image sequence generation during inference. However, diffusion-based methods likely introduce unwanted structures into the generated images since they are learning-based techniques and have no constraints like diffeomorphic deformations in geodesic regression to ensure the topological preservation and differential homeomorphism properties of the generated images. Other limitations are the requirements of massive data, a long training process, extensive memory usage, and high time consumption during inference. In addition to the previously mentioned regression methods, several time series modeling approaches utilize spatio-temporal transformers combined with attention mechanisms <cit.>. However, these methods are associated with substantial computational overhead and lack the ability to directly constrain diffeomorphism, making them more suitable for simpler tasks like human action recognition rather than for reconstructing high-resolution 3D medical images. Alternatively, Generative Adversarial Network (GAN) based methods with attribute embedding <cit.> convert the generation of medical image time series into a multivariable autoregressive problem. Despite this, GAN methods often face convergence challenges during training, and the constraints imposed by reconstruction loss terms to maintain subject identity are limited. Therefore, we stick to the optimization-based methods like geodesic regression by using diffeomorphic deformations to drive the image generation over time, but relax its linear dynamic assumption to model more complex dynamics. Fortunately, the neural ordinary differential equations (Neural ODEs) <cit.> provide a neural network based solution for addressing numerous dynamic fitting problems, which is successfully adopted to model deformable image registration, such as NODEO proposed in <cit.>. Inspired by NODEO, we generalize the Neural ODEs to the image space and handle image regression on a couple of images in a sequence via neural network based optimization. In particular, we propose a model called NODER, which converts the velocity field optimization problem in image regression into a parameterized neural network optimization problem, as shown in Fig. <ref>. To address the high-computational cost issue faced by Neural ODEs when handling high-dimensional image volumes, we propose to bring the dynamic optimization of Neural ODEs into the latent space via the auto-encoder technique <cit.>. 
Our contributions in this paper are summarized below: * We propose a novel optimization-based image regression model, NODER, based on Neural ODEs and diffemorphic registration. Our NODER has the freedom to capture complex temporal dynamics in 3D medical image sequence with a couple of images and even missing time points. * We conduct experiments on both 3D brain and cardiac MRI datasets. Our NODER generates 3D images with the best quality, compared to recent methods FPSGR and SADM; and it outperforms the diffusion model SADM in terms of the both image quality and the computational cost. § METHOD §.§ Background and Definitions We consider an unparameterized 3D image as a discrete solid, where the position of its i-th voxel can be represented as: x_i∈Ω⊆R^3, where Ω represents the 3D image domain. The positions of all voxels in the image can be represented by an ordered set: q={x_i}_i=1^N. Here, N=D× H × W represents the total number of voxels in the image, and D, H, W denote the depth, height, and width of the image, respectively. We denote the domain where the voxel cloud resides as Π, then we have q ∈Π. Now we denote an image sequence as: {(I_k, t_k)}_k=0^L-1, where L represents the length of the image sequence, I_k represents the k-th, and t_k is its associate time like age. The deformation occurring within an image over time is essentially a mapping from the original spatial positions at a starting point to the new spatial positions at the next time point, which can be represented as: ψ:Ω→Ω. On this basis, the identity mapping Id = ψ_0 can be defined as: ψ_0(x)=x, for all x∈ q. In many applications, we desire the deformation field ψ to possess the properties of smoothness and diffeomorphism. The objective function of the deformable image registration is defined as: 𝒥(ψ;I_m,I_f)=𝒮(I_m(ψ(q_0)),I_f)+ℛ(ψ), where q_0 represents the initial voxel cloud without any deformation. The term 𝒮(·,·) denotes a similarity metric, used to measure the similarity between the moving image I_m deformed by ψ and the fixed image I_f. The term ℛ(·,·) represents the regularization constraint applied to the deformation field. By generalizing image registration to the temporal regression, the objective function is updated as: 𝒥(ψ; {(I_k, t_k)}_k=0^L-1)=∑_k=1^L-1(𝒮(I_0(ψ_k(q_0)),I_k)+ℛ(ψ_k)). In this way, we regress the image sequence and generate an image trajectory, where the generated images are as close as possible to the corresponding images in the original sequence, while imposing the smoothness constraints on the deformation fields. §.§ Formulation of Image Regression in Neural ODEs From a system perspective, neural ODEs represent vector fields as a continuous-time model of neural networks. It has been widely used as a general framework for modeling high-dimensional spatio-temporal chaotic systems using convolutional layers, demonstrating its ability to capture highly complex behaviors in space and time. Therefore, we consider the trajectory of the entire voxel cloud as the solution of the following first-order ordinary differential equation (ODE): d q/d t =𝐯_θ(q(t),t), s.t. q(0)=q_0=Id, where 𝐯_θ(·) is a parameterized network that describes the dynamics of voxel cloud deformation, q_0 represents the initial state of the voxel cloud at t=0, which corresponds to an identity map. The varying velocity field over time indicates non-stationary dynamics, which is fundamentally different from SGR with stationary dynamics. 
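To make the ODE formulation above concrete, the following is a minimal, illustrative PyTorch sketch of how a parameterized velocity field 𝐯_θ(q, t) can be integrated with the torchdiffeq solver that the paper relies on later. The network architecture, class names, and tensor shapes here are simplifying assumptions for illustration, not the authors' actual implementation.

import torch
import torch.nn as nn
from torchdiffeq import odeint  # same toolkit cited by the paper

class VelocityField(nn.Module):
    """Toy stand-in for v_theta(q, t): maps voxel positions plus time to velocities."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, t, q):
        # q: (N, 3) voxel positions; t: scalar time broadcast to every voxel.
        t_col = t.expand(q.shape[0], 1)
        return self.net(torch.cat([q, t_col], dim=1))

# Initial voxel cloud q_0: a small random stand-in for an identity grid.
q0 = torch.rand(1000, 3)
v_theta = VelocityField()

# Integrate dq/dt = v_theta(q, t) from t = 0 to the observed time points t_k.
t_k = torch.tensor([0.0, 0.5, 1.0])
trajectory = odeint(v_theta, q0, t_k, method="rk4")  # shape: (len(t_k), N, 3)
print(trajectory.shape)

In a full model the velocity field would act on the whole voxel cloud of a 3D volume (or, as described below, on a latent representation of it), but the integration call is the same.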
The trajectory of q is generated by integrating the above ODE under the initial condition q_0. Assuming the voxel cloud evolves from t = 0 to t = t_k (k=1,2,...,L-1), the voxel cloud obtained at t = t_k is given by the following equation: ψ_k(q_0)=q(t_k)=q_0+∫_0^t_k𝐯_θ(q(t),t)dt. In particular, the computation of this flow field map is performed using numerical integration methods such as the Euler method <cit.>. The time t can be parameterized by the total number of steps and the corresponding step size adopted by the solver. Therefore, the task of finding the transformation ψ becomes the search for the optimal parameter set θ that describes 𝐯. The optimization problem becomes: θ^* = argmin_θ∈Θ∑_k=1^L-1 (𝒮(I_0(q_0+∫_0^t_k𝐯_θ(q(t),t)dt), I_k) + ℛ(ψ_k,𝐯_θ) ), where Θ represents the entire parameter space. Since Neural ODEs typically require numerical solvers and take many steps to approximate flows, they will incur significant memory overhead if all gradients along the integration steps need to be stored during backpropagation. Hence, the Adjoint Sensitivity Method (ASM) <cit.> has been implemented for optimizing Neural ODEs with constant memory gradient propagation, allowing our framework to interpolate any number of time steps between t = 0 and t = s with a constant memory overhead. Latent Space. Due to the complexity of high-dimensional data, solving Neural ODEs directly in the original space incurs significant computational costs. Therefore, we bring the above image regression formulation into a latent space, using a pair of pre-trained encoder-decoder networks to reduce the dimension of deformations, as shown in Fig. <ref>. We apply diffeomorphic VoxelMorph <cit.> on a large 3D MRI dataset to estimate the deformations between image pairs and use these estimated deformations to guide the pre-training of the auto-decoder. At the training stage of our regression model, we fine-tune the decoder. Overall, the final framework of our NODER can be represented as: d y/d t =𝐮_θ(y(t),t), s.t. y(0)= Encoder(q_0), y(t_k)=y_0+∫_0^t_k𝐮_θ(y(t),t)dt, q(t_k)=𝒦(Decoder(y(t_k))), where Encoder(·) and Decoder(·) represent the encoder and decoder, respectively, following the design in <cit.>. 𝒦 denotes a smoothing kernel used to smooth the deformation fields obtained after decoding. 𝐮_θ is a parameterized network like 𝐯_θ but in the latent space, which is used to estimate dynamics. In the dynamic network 𝐮_θ, we extract features from the latent space through continuous convolutional downsampling. These features are then flattened into one-dimensional vectors and added to the input time embedding, achieving fusion between the latent space features and time. Finally, we reconstruct the output of the fully connected layers, restoring the one-dimensional vector to the shape of the compressed three-dimensional deformation field. The overview of our model is presented in Fig. <ref>. Loss Functions. We choose the normalized cross-correlation (NCC) as the loss function for the similarity term 𝒮. The regularization term ℛ consists of two parts: ℛ(ψ)=λ_1ℒ_smt+λ_2ℒ_bdr = λ_1 (1/N)∑_x∈ q(s)‖∇ψ(x)‖_2^2 + λ_2 (1/N_bdr)∑_d∈ D∑_b∈ B‖ψ_d,b(x)‖_2^2, where the first term ℒ_smt constrains the smoothness of the spatial gradients within the deformed voxel cloud, and the second term ℒ_bdr represents the L_2-norm constraint on the boundary of the deformation field.
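As a concrete illustration of these two regularizers (the symbols N, N_bdr, D, and B are spelled out in the text right after this sketch), here is a minimal PyTorch-style sketch; the finite-difference gradient and the exact normalization are simplifying assumptions rather than the paper's exact implementation, and the NCC similarity term is the standard one and is omitted here.

import torch

def smoothness_loss(disp):
    """L_smt: mean squared spatial gradient of a deformation/displacement field.
    disp: tensor of shape (3, D, H, W)."""
    dz = disp[:, 1:, :, :] - disp[:, :-1, :, :]
    dy = disp[:, :, 1:, :] - disp[:, :, :-1, :]
    dx = disp[:, :, :, 1:] - disp[:, :, :, :-1]
    return dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()

def boundary_loss(disp):
    """L_bdr: mean squared displacement over the six boundary planes of the field."""
    planes = [disp[:, 0], disp[:, -1],           # top / bottom
              disp[:, :, 0], disp[:, :, -1],     # front / behind
              disp[:, :, :, 0], disp[:, :, :, -1]]  # left / right
    sq = torch.cat([p.reshape(-1) for p in planes]).pow(2)
    return sq.mean()

disp = torch.randn(3, 16, 16, 16)   # toy deformation field
lam1, lam2 = 0.05, 1e-4             # the coefficient values reported later in the paper
reg = lam1 * smoothness_loss(disp) + lam2 * boundary_loss(disp)
print(reg)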
Here, N, N_bdr, D = (d_1, d_2, d_3), and B=(top,bottom,front,behind,left,right) represent the total voxel number, the voxel number of six boundary planes,the three dimensions, and the six boundary planes of the deformation field, respectively. § EXPERIMENTS We conducted experiments on two medical datasets, including a 3D MRI brain image dataset ADNI (Alzheimer’s Disease Neuroimaging Initiative) <cit.> and the cardiac dataset ACDC <cit.>. We compare our method with two advanced medical image regression baselines, FPSGR <cit.> and SADM <cit.>. Finally, we perform ablation experiments to demonstrate the effects of a series of smoothness constraints within the network. Datasets. (1) ADNI <cit.>. The ADNI dataset consists of 3D brain MRI images collected from 2,334 subjects. Each subject has an image sequence of 1 to 16 time points, resulting in 10,387 MRI images. All images went through preprocessing steps including denoising, bias field correction, skull stripping, and affine registration to the SRI24 atlas. All brain images are standardized to a size of 144× 176 × 144 with a spacing of 1mm× 1mm × 1mm and applied histogram equalization. The intensity of each image volume is normalized within [0,1]. We select 1,568 subjects that have more than two image scans as our dataset for experiments. For each subject, we randomly select 20% time points for the test and the remaining images are used for training and validation. (2) ACDC <cit.>. The ACDC (Automatic Cardiac Diagnosis Challenge) dataset consists of cardiac MRI images from 100 training subjects and 50 testing subjects. We follow SADM <cit.> and borrow its pre-processed and partitioned ACDC dataset. We take the image sequence from the ED (the End-Diastole of the cardiac cycle) to the ES (End-Systole) and resize it to 12 image frames and each frame has a size of 128× 128× 32. Evaluation Metrics and Other Settings. To evaluate the quality of the generated images, we use three metrics, including the Normalized Root Mean Square Error (NRMSE), the Structural Similarity (SSIM), and the Peak Signal-to-Noise Ratio (PSNR). Also, we quantify the smoothness of a deformation field by calculating the percentage of its voxels with negative Jacobian determinants. Regarding the inference time, we implement our models and FPSGR on a single RTX 3090 GPU and report their memory cost and the average time of 5 forward inferences. Since SADM needs more GPU memory, we implement it on A100-PCIE-40GB GPU and then report its computational cost. For other inference costs, we load the model on a single RTX 3090 GPU and record the time and storage required for one forward inference, averaging multiple forward inferences to obtain the average costs. For the ACDC dataset, we compare our method with both FPSGR and SADM, while on the ADNI dataset, we have only FPSGR as the baseline, since SADM cannot handle it even with an A100 GPU. Due to the lack of ground truth and other technique issues, we replace the registration networks of FPSGR with the diffeomorphic VoxelMorph <cit.>, which is pre-trained on the ADNI dataset. In the specific implementation of NODER, we choose the average smoothing kernel with a window size of 15 and a sliding stride of 1. The optimizer is Adam, with a learning rate set to 0.005. The construction of Neural ODE relies on the torchdiffeq toolkit <cit.>, where the solving method is set to RK4 (fourth-order Runge-Kutta method with a fixed step size). 
The relative error tolerance (rtol) is set to 1e-3, and the absolute error tolerance (atol) is set to 1e-5. The coefficients λ_1 and λ_2 for the loss functions ℒ_smt and ℒ_bdr are set to 0.05 and 0.0001, respectively. For image sequences from a single subject, we train our model for a total of 300 epochs. [Figure: Visualization of deformation fields without (left) and with the boundary condition (right).] Experiment Results. Table <ref> reports the quantitative results of our method compared with FPSGR and SADM. Our NODER outperforms all methods in terms of the quality of the generated images. Our method needs more computational time and memory than FPSGR to generate images with higher quality, while both have inference times within seconds and a memory cost of around 10GB. SADM, in contrast, requires far more time and memory at the inference stage. The visualization of the regression results is shown in Fig. <ref> and Fig. <ref>. The difference images indicate our method can successfully capture the temporal dynamics in the brain and cardiac image sequences. Ablation Study. To validate the effect of the smoothness constraints in our proposed method, we conduct an ablation experiment, which is shown in Fig. <ref>. By removing all smoothness constraints, it can be observed that the quality of the generated image sequence significantly deteriorates. Voxels move arbitrarily in three-dimensional space, leading to severe distortion of the original brain structure and numerous folding phenomena. To verify the effect of the boundary condition ℒ_bdr in ℛ(ψ), we visualize the deformation field, as shown in Fig. <ref>. The introduction of boundary conditions leads to a powerful smoothness constraint in the background region of the deformation field, where severe deformations originally occurred. § CONCLUSION AND DISCUSSION In this work, we propose the NODER method, leveraging the powerful representation capability of neural networks to simulate the underlying dynamics of brain or cardiac deformation trajectories. Through solving ordinary differential equations, we achieve fitting regression on existing medical image time series, thus enabling the generation of desired images at any time point. Our method is based on the theoretical basis of deformable registration and resamples the first image of each subject through the deformation field to generate a new image. The loss terms we use in this paper explicitly impose diffeomorphic constraints, thus maintaining accurate anatomical information to some extent. Experimental results on large-scale 3D MRI datasets demonstrate that our method outperforms existing state-of-the-art methods, FPSGR and SADM, by predicting more accurate image volumes. In future work, we consider incorporating learning-based methods to further reduce the inference time cost and make it more practical. Also, exploring ways to improve the smoothness of the deformation fields is another direction for our future work. §.§.§ Acknowledgments This work was supported by the National Natural Science Foundation of China (NSFC) 62203303 and Shanghai Jiao Tong University “Jiao Da Star” Program for Interdisciplinary Medical and Engineering Research YG2022QN016 and YG2022QN028. §.§.§ Disclosure of Interests The authors have no competing interests to declare that are relevant to the content of this article.
http://arxiv.org/abs/2407.12388v1
20240717080817
Demonstrating PilotAR: A Tool to Assist Wizard-of-Oz Pilot Studies with OHMD
[ "Nuwan Janaka", "Runze Cai", "Shengdong Zhao", "David Hsu" ]
cs.HC
[ "cs.HC" ]
Demonstrating ]Demonstrating : A Tool to Assist Wizard-of-Oz Pilot Studies with OHMD nuwanj@u.nus.edu 0000-0003-2983-6808 Synteraction Lab Smart Systems Institute, National University of Singapore Singapore 0000-0003-0974-3751 runze.cai@u.nus.edu Synteraction Lab School of Computing, National University of Singapore Singapore Corresponding Author. shengdong.zhao@cityu.edu.hk 0000-0001-7971-3107 Synteraction Lab School of Creative Media & Department of Computer Science, City University of Hong Kong Hong Kong China dyhsu@comp.nus.edu.sg 0000-0002-2309-4535 School of Computing, National University of Singapore Smart Systems Institute, National University of Singapore Singapore § ABSTRACT While pilot studies help to identify potential interesting research directions, the additional requirements in AR/MR make it challenging to conduct quick and dirty pilot studies efficiently with Optical See-Through Head-Mounted Displays (OST HMDs, OHMDs). To overcome these challenges, including the inability to observe and record in-context user interactions, increased task load, and difficulties with in-context data analysis and discussion, we introduce (<https://github.com/Synteraction-Lab/PilotAR>), a tool designed iteratively to enhance AR/MR pilot studies, allowing live first-person and third-person views, multi-modal annotations, flexible wizarding interfaces, and multi-experimenter support. <ccs2012> <concept> <concept_id>10003120.10003138.10003140</concept_id> <concept_desc>Human-centered computing Ubiquitous and mobile computing systems and tools</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003120.10003121.10003129.10011757</concept_id> <concept_desc>Human-centered computing User interface toolkits</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003120.10003121.10003124.10010392</concept_id> <concept_desc>Human-centered computing Mixed / augmented reality</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Human-centered computing Ubiquitous and mobile computing systems and tools [500]Human-centered computing User interface toolkits [500]Human-centered computing Mixed / augmented reality < g r a p h i c s > (A) The experimenter employs , a desktop-based experimenter tool, for OHMD-based pilot studies. (B) facilitates real-time monitoring of participants' experiences from both first-person and third-person perspectives, enabling experimenters to track ongoing studies dynamically. In addition, the tool's annotation features allow for the precise marking and capture of significant moments in a photo or video format. Quickly logging quantitative metrics, such as event time, can be done using shortcut keys. Furthermore, a real-time summary of the observed moments and recorded data, available for post-study interviews, promotes in-depth discussions, insights, and support for collaborative review and interpretation. (C) In a separate room, the participant interacts with the simulated AR system, maintaining communication with the experimenter. The figure consists of two subfigures on the left side, labeled A and B, and one subfigure on the right side, labeled C. Subfigure A shows the experimenter using PilotAR to conduct an OHMD-based pilot study. Subfigure B displays the PilotAR interface, which the experimenter can use to check participants' real-time experiences through First Person View and Third Person View. 
The experimenter can also control the WizardingInterface to conduct pilots and record important interaction moments using Annotations for post-study review and analysis. Subfigure C shows several IoT devices, such as a fan and a light, and a participant interacting with the simulated system. [ David Hsu July 22, 2024 ================= § INTRODUCTION AND RELATED WORK Quick and dirty pilot studies validate research concepts, identify usability issues, and guide design decisions without extensive resource commitments <cit.>. However, conducting pilot studies in Augmented Reality (AR) and Mixed Reality (MR) using optical see-through head-mounted displays (OST-HMD, OHMD, or AR smart glasses) poses significant challenges <cit.> due to their unique characteristics <cit.> such as personal, near-eye displays. Compared to traditional studies on 2D UIs in desktop/mobile, which mainly observe users from third-person perspective, AR/MR requires both observations from a first-person perspective to understand users' interactions with digital content and a third-person perspective to understand user interactions with the physical world <cit.>. Besides observing a multifaceted environment, the task load for experimenters involved in AR/MR pilot studies can also be increased by the need to optionally perform wizard-of-Oz tasks <cit.>, thus necessitating methods to reduce their multitasking burden <cit.>. Furthermore, there is insufficient support for in-context data analysis <cit.> during the pilot studies, especially for quantitative data, which are typically collected in an informal and raw way. This hinders real-time analysis and deeper discussions in post-study interviews. Given the absence of an integrated solution for AR/MR pilot studies, despite the development of many specialized tools for individual steps in experiments (e.g., content authoring <cit.>, rapid prototyping <cit.>, gesture interaction <cit.>, experiment setup <cit.>, video analysis <cit.>, 3D and MR visualization <cit.>, immersive experiment environments <cit.>), we created (See Appendix <ref>-Table <ref> for comparison). It offers experimenters the flexibility to use familiar prototyping or wizarding interfaces rather than requiring the construction of an immersive system with specific skill sets (e.g., <cit.> requires Unity3D background), during the early stage of research. Similar to Momento <cit.>, supports the entire study conduction life cycle: setting up, experimentation, analysis and summarizing, and repeating. However, caters to unique challenges of OHMDs (e.g., context, interface <cit.>), including multiple observation viewpoints real-time synchronization, which is not supported by Momento <cit.> as it focuses on applications on mobile phones and desktops. (Fig <ref>) is an open-source desktop-based tool for experimenters to conduct AR/MR iterative pilot studies with OHMDs. It streamlines the pilot process from situated observations to results sharing. It incorporates first-person and third-person video observations to help experimenters understand users' in-situ relationship with visual content and environment in real-time and automatically record them for post-analysis. It enables annotations, allowing manual or automatic tagging of significant events during the experiment to prevent tedious post-study analysis and missing labeling. also allows for task distribution among multiple experimenters, reducing multitasking load and making remote monitoring possible. 
Finally, enables real-time data summaries, encouraging a deeper discussion during post-pilot interviews and facilitating results sharing with collaborators by exporting data report. For detailed evaluation, please refer to our original paper,  <cit.>. § TOOL In this section, we outline the functions and a typical usage scenario of (Figure <ref>). integrates features that streamline processes and support replication and innovation in AR/MR pilot studies using the wizard-of-oz <cit.> approach. See Appendix <ref> for implementation details. §.§ Major Functions FPV and TPV Live Streaming (): Although relatively straightforward in design, we enabled experimenters to observe participants wearing OHMD in situated contexts through the live first-person view with grids (FPV) and third-person view (TPV), as depicted in Figure <ref>. Simultaneous video recorded for subsequent analysis is enabled. Specifically, FPV streams the overlay of digital content and the realistic environment rendered by the OHMD. TPV streams video from a user-attached camera or one positioned by experimenters. s with Function Shortcuts (): To facilitate important information documentation during pilot study observations, we enable a variety of annotations. These encompass (to capture the screen, optionally with a colored block highlighting a specific Region of Interest (ROI)), capturing only a selected screen region), and (for accuracy calculations), and (for tracking interaction attempts). The communication between experimenters and participants is recorded and transcribed to in text format. During pilot studies, experimenters can use customized keyboard shortcuts to activate functions. These shortcuts can be mapped to UI, user, or experimenter actions for automatic annotations. Additionally, each 's color can be customized for easy identification, and all annotations are time-stamped for later review. Multi-experimenter Support (): To reduce task load during pilot studies, we support multi-experimenter scenarios alongside traditional single-experimenter setups. In a single-experimenter scenario, the experimenter concurrently manipulates the wizarding interface, conducts observations, and makes annotations. In the multi-experimenter configuration, one experimenter can act as the wizard, adjusting the interface based on users' actions observed via FPV and TPV, and another experimenter can focus solely on observation and annotation. After the pilot, annotations from both experimenters can be synchronized. Analyzer (): To allow experimenters to get a real-time summary of the collected data, we implemented the view. By reviewing the annotation index on the recording's timeline, experimenters can identify key moments and use video playback to assist participants in recalling their experiences. Experimenters can adjust annotations recorded during the pilot session (e.g., change timestamp, modify manipulation correctness, modify notes), add new notes, and take screenshots. The analyzer also briefly summarizes accuracy and the time duration between two indices of and corresponding events. Summary Review (): A comprehensive review of the pilot results can be exported from the analyzer to facilitate information sharing among collaborators, including overall descriptive statistics, selected annotation timestamps, notes, and screenshot images. Raw data (e.g., video) can be shared for subsequent analyses. §.§ Usage Scenario Experimenters might adopt various strategies with . 
Here, we outline a basic approach for conducting a pilot study using , with the replication of `Mind the Tap' <cit.> as an example to highlight its usage. , an AR researcher, conceives a novel idea employing foot-tapping as an input interaction for OHMDs (Figure <ref>). She identifies two potential interactions: (i.e., the menu appears on the floor within leg's reach) and (i.e., the menu displays in front of the eyes, requiring users to use proprioception to associate it with their foot, Figure <ref>C). She aims to discern the strengths and limitations of each foot-tap interaction. Choosing a within-subject design for an initial comparison, opts to employ the wizard-of-oz technique to minimize developmental efforts in a tangible system (e.g., Unity development with optical tracking) and to persuade colleagues to explore this concept further. §.§ Interface and Workflow The main workflow using is divided into three phases: , , and . This section demonstrates how can utilize 's interfaces throughout these phases. §.§.§ Phase See Appendix <ref> for details of setup UI. quickly crafts a wizarding interface using Google Slides with a 2x4 menu, where the target location randomizes on subsequent slides. She mirrors these slides to the HoloLens 2 (HL2) via Google Meet on a browser. She uses a phone camera as the TPV by linking it to Google Meet. For interactions, the mirrored WOz interface is fixed on the floor. Conversely, for interactions, it's positioned in front of the users' eyes. initiates the , selects `Single User' (Figure <ref>A), and sets up the devices (Figure <ref>B) with the HL2 IP address for FPV, a Google Meet link for TPV, and Google Slides for the (Figure <ref>C1). She then adds a “Check foot visibility” checklist item (Figure <ref>C2) to verify the FPV setup is accurate before each pilot session. To ascertain accuracy and usability, she enables (Figure <ref>C3) , , , and annotations. §.§.§ Phase After setting up and confirming the checklist, experimenters can enter the anticipated duration and participant and session ID, and initiate the Pilot phase by clicking the Start button (Figure <ref>A4). Top Bar (Figure <ref>A). The top bar displays session-related metadata, including live statistics of measures (e.g., count of Annotations, Figure <ref>A1), session progress (e.g., duration and timeline, Figure <ref>A2), and session information (e.g., participant info, anticipated duration, Figure <ref>A3). Experimenters will receive a notification when the anticipated time has elapsed and can stop the session by clicking the Stop button located at the right corner of the top bar. Main Working Panel (Figure <ref>B). The working panel displays FPV (Figure <ref>B1), TPV (Figure <ref>B2), and s (Figure <ref>B3), with a layout that can be customized according to the experimenter's preferences. In the right corner of the working panel, the captured and annotations using keyboard shortcut keys (e.g., 3 key) are shown as images with timestamps in the Annotation Table (see Figure <ref>B4). Clicking on these images opens a pop-up window, allowing the experimenter to add notes to the annotations. [Piloting with the First Interface] starts the pilot with interface (Figure <ref>C). Adjusting the target location on the (Figure <ref>B3), she annotates accuracy across ten trials, taking screenshots of any interesting behavior (Figure <ref>B4). also monitors the trial count and accuracy via the live statistics dashboard (Figure <ref>A1). 
§.§.§ Phase Upon completion of the pilot session, the window appears (Figure <ref>), displaying the video panel on the left (Figure <ref>A) and s panel on the right (Figure <ref>B). Video Panel(Figure <ref>A). It can play the recorded video (Figure <ref>A1) and navigate to any timestamp by clicking the timeline (Figure <ref>A2) or using three buttons to rewind, pause, and fast-forward. Experimenters can create new s with notes in the New Note area below the video timeline (Figure <ref>A2). Panel (Figure <ref>B). It features an annotation preview (Figure <ref>B1), annotation filtering options (Figure <ref>B2), an annotation table (Figure <ref>B3), and an exporting button. The annotation preview (Figure <ref>B1) provides an overview of the pilot, including its duration, manipulation accuracy, and collected screenshots. Experimenters can click on these screenshots to pinpoint annotated moments in the recorded video. Within the Annotation table (Figure <ref>B3), experimenters have the capability to view and adjust annotation details by double-clicking on a cell. Additionally, specific Annotations can be highlighted by clicking the corresponding icon in the first column or applying the filters available (Figure <ref>B2). The tool also facilitates the export of summaries and selected s in both PDF and CSV formats (Figure <ref>C). [Analysis] Upon finishing the session, the activates, presenting screenshots, accuracy data, and annotations (Figure <ref>). Before the interview, Mary reviews these annotations and accuracy (Figure <ref>B1-B3), devising questions for further inquiry. For clarity on specific screenshots, she replays footage from 5 seconds prior (Figure <ref>A1-A2). She then conducts the interview, discussing the participant's experiences and challenges, and incorporates their feedback into the annotation notes (Figure <ref>B3). Experimenters can return to the Pilot session for subsequent pilot studies and initiate new recordings. All interactions in the are stored, enabling experimenters to switch between different pilot recordings using the drop-down menu in Figure <ref>B1. [Piloting with the Alternative Interface] After assessing the interface, tests the interface in the same approach. [Overall Analysis] After piloting both interfaces, invites the participant for an overall interview, utilizing the to toggle between pilot recording sessions or view them simultaneously (Figure <ref>B1). This comparison offers insights into “rough” accuracy and usability variations, which are noted in (e.g., direct one is slightly more accurate while causing neck pain for long usage, (Figure <ref>B3). [Repeating] replicates this process with three more participants, counterbalancing the interface. exports participant data summaries in PDF (Figure <ref>C1) and shares them with colleagues to convince the differences between and interfaces. She cites participant feedback and replays specific recordings for context when queried for details. [Further Exploration: Multi-experimenter] Seeing the team's interest, broadens their exploration to assess how interaction accuracy and speed vary between two interfaces as menu size changes. She trains a colleague to act as the wizard, thus reducing the wizarding workload and focusing more on observations. After creating additional slides for varied menu sizes (e.g., 1x2, 2x4, 3x6), they conduct pilot tests with four participants using a between-subjects design. 
To calculate the speed of interactions, they combine / annotations with custom annotations that automatically mark target changes (linked to slides' changes). After each pilot session, data is exported to CSV (Figure <ref>C2) for graph generation in Excel, which facilitates comparing relationships among speed, accuracy, and menu size. Convinced that their pilot study has uncovered a notable trend, the team decides to transition to a formal study. [Summary] Employing the wizard-of-oz methodology with , the team expedites (e.g., less than one week as opposed to a full-fledged motion tracking application, which can take several weeks to months) the identification of viable research directions. Using , experimenters can overcome challenges in rapidly evaluating diverse concepts, gathering preliminary quantitative measures for comparison, and convincing colleagues, significantly shortening the knowledge discovery phase. § CONCLUSION As AR/MR technology is poised to shape the future immersive world, including the metaverse, facilitating interactions between digital and physical entities becomes paramount. This underscores the importance of tools tailored for refining these interactions through pilot studies. As an initial step, we introduce , an open-source tool (<https://github.com/Synteraction-Lab/PilotAR>) designed to support such studies. It enables real-time and retrospective multi-viewpoint observations, notes, and filters of crucial observations, thus facilitating comprehensive discussions with participants and researchers to discover insights effectively. Additionally, it can share the pilot study process, data, and insights with the larger research community. Its all-in-one capability can be applied as a standalone observation tool or a video analyzer tool to border studies beyond pilot studies or OHMD-based studies. We would also like to thank Tan Si Yan and Siddanth Ratan Umralkar for developing specific system components. Additionally, we wish to thank the anonymous reviewers for their valuable time and insightful comments. This research is supported by the National Research Foundation, Singapore, under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-016). The CityU Start-up Grant 9610677 also provides partial support. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore. ACM-Reference-Format § COMPARISON Table <ref> highlights the differences and similarities between and prior tools. § IMPLEMENTATION We used Python (3.9) as our primary programming language due to its cross-platform compatibility (e.g., Windows, MacOS). To achieve the tool's functionalities, we incorporated several third-party packages. The user interface (UI) was developed using Tkinter[<https://docs.python.org/3/library/tkinter.html>] and related theme packages, such as CustomTkinter[<https://github.com/TomSchimansky/CustomTkinter>]. The utilizes Pynput[<https://pypi.org/project/pynput>] to monitor user inputs and FFmpeg[<https://ffmpeg.org>] to handle screen recording. For video playback, we used Python-VLC[<https://pypi.org/project/python-vlc/>] and audio transcription we used Whisper[<https://openai.com/blog/whisper/>]. FFmpeg and websocket were incorporated to enable video and data streaming between the wizard and the observer in multi-experimenter settings. 
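To illustrate how the listed packages might fit together, below is a small, self-contained sketch of a Pynput-based shortcut listener that records time-stamped annotations and exports them for later review. It is an illustrative example only: the key bindings, annotation fields, and function names are assumptions and are not taken from the PilotAR codebase.

import csv
import time
from dataclasses import dataclass, asdict
from pynput import keyboard

@dataclass
class Annotation:
    timestamp: float   # seconds since the session started
    kind: str          # e.g. "correct", "incorrect", "screenshot"

annotations = []
session_start = time.time()

# Hypothetical shortcut mapping: "1"/"2" log accuracy, "3" marks a screenshot moment.
SHORTCUTS = {"1": "correct", "2": "incorrect", "3": "screenshot"}

def on_press(key):
    try:
        kind = SHORTCUTS.get(key.char)
    except AttributeError:       # special keys (shift, ctrl, ...) have no .char
        kind = None
    if kind:
        annotations.append(Annotation(time.time() - session_start, kind))
    if key == keyboard.Key.esc:  # stop listening at the end of the pilot session
        return False

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()

# Export the annotations so they can be reviewed in the Analyzer phase.
with open("annotations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "kind"])
    writer.writeheader()
    writer.writerows(asdict(a) for a in annotations)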
Detailed information about the open-source implementation can be found in <https://github.com/Synteraction-Lab/PilotAR>. § SETUP Role Selection (Figure <ref>A) Upon launching the tool, the experimenter is prompted to select their role: single-user for single-experimenter pilots or wizard/observer for multi-experimenter pilots. Device Configuration (Figure <ref>C1) This task allows the experimenter to input essential information such as FPV and TPV connections (e.g., IP address, credentials), (e.g., Google Slides URL link or python file path), and screen recording inputs (e.g., video and audio source), making them all displayed on the monitor. Checklist Creation (Figure <ref>C2) The checklist aids in remembering crucial steps during the pilot study, such as confirming OHMD, TPV camera, and recording. Customizable items can be added by typing in the provided space at the bottom. Shortcut Key Customization (Figure <ref>C3) Experimenters can manage which s are displayed during the pilot session (known as Pinned ) and customize aspects like color, name, and shortcut key.
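The setup options above can be thought of as one session configuration. The following sketch is purely illustrative: the field names and example values are hypothetical and do not reflect the tool's actual configuration format.

# Hypothetical session configuration mirroring the Setup panel described above.
session_config = {
    "role": "single_user",                                   # or "wizard" / "observer"
    "fpv": {"source": "hololens2", "ip": "192.0.2.10"},       # first-person view stream
    "tpv": {"source": "meet", "url": "https://meet.example/abc-defg-hij"},
    "wizarding_interface": "https://docs.google.com/presentation/d/<slides-id>",
    "recording": {"video": "screen:0", "audio": "default"},
    "checklist": ["Confirm OHMD connection", "Confirm TPV camera", "Check foot visibility"],
    "shortcuts": {"1": "correct", "2": "incorrect", "3": "screenshot"},
}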
http://arxiv.org/abs/2407.13616v1
20240718155501
Quantum Local Search for Traveling Salesman Problem with Path-Slicing Strategy
[ "Chen-Yu Liu", "Hiromichi Matsuyama", "Wei-hao Huang", "Yu Yamashiro" ]
quant-ph
[ "quant-ph" ]
Quantum Local Search for Traveling Salesman Problem with Path-Slicing Strategy This work was performed for Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (SIP), “Promoting the application of advanced quantum technology platforms to social issues” (Funding agency: QST). Chen-Yu Liu 123, Hiromichi Matsuyama14, Wei-hao Huang15, Yu Yamashiro16, 1 Jij Inc., 1-4-6 Nezu, Bunkyo, Tokyo 113-0031, Japan 2Graduate Institute of Applied Physics, National Taiwan University, Taipei, Taiwan Email:3 d10245003@g.ntu.edu.tw, 4 h.matsuyama@j-ij.com, 5 w.huang@j-ij.com, 6 y.yamashiro@j-ij.com ======================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT We present novel path-slicing strategies integrated with quantum local search to optimize solutions for the Traveling Salesman Problem (TSP), addressing the limitations of current Noisy Intermediate-Scale Quantum (NISQ) technologies. Our hybrid quantum-classical approach leverages classical path initialization and quantum optimization to effectively manage the computational challenges posed by the TSP. We explore various path slicing methods, including k-means and anti-k-means clustering, to divide the TSP into manageable subproblems. These are then solved using quantum or classical solvers. Our analysis, performed on multiple TSP instances from the TSPlib, demonstrates the ability of our strategies to achieve near-optimal solutions efficiently, highlighting significant improvements in solving efficiency and resource utilization. This approach paves the way for future applications in larger combinatorial optimization scenarios, advancing the field of quantum optimization. Quantum Optimization, Traveling Salesman Problem, Quantum Local Search § INTRODUCTION Quantum computing shows promise as an innovative technology, providing potential computational benefits for various applications, particularly in combinatorial optimization <cit.>. Among these applications, the Traveling Salesman Problem (TSP) stands out due to the intuitive clarity of its problems and the difficulty of solving them. The TSP, which involves finding the shortest possible tour that visits a set of locations and returns to the origin, is notorious for its combinatorial explosion as the number of locations increases. Quantum computing is expected to tackle such problems more efficiently than classical computing by exploiting quantum phenomena such as superposition and entanglement. However, the current era of Noisy Intermediate-Scale Quantum (NISQ) technologies presents challenges, primarily due to the limited number of qubits available on quantum devices. This limitation is a significant barrier as the TSP and similar combinatorial optimization problems typically require a large number of variables to be encoded for quantum processing. The scaling of problem size with the number of qubits directly impacts the feasibility and efficiency of quantum algorithms. Previous studies have developed qubit-efficient methods to address these limitations, particularly for the TSP. 
These include using Tabu search for partitioning problems and quantum annealers to solve subproblems <cit.>, the quantum phase estimation method with gate-based quantum computers <cit.>, and quantum local search methods such as D-Wave's Qbsolv <cit.>. Extending these techniques to general combinatorial optimization problems, the focus is on reducing the number of qubits required to encode a problem <cit.>, thereby making the best use of the limited qubit resources available on NISQ devices. Quantum local search, in particular, adapts the local search strategy—commonly used in classical optimization—to the quantum context, allowing for iterative improvement of solutions by exploring a quantum state space. Despite the advances in quantum local search, there are intrinsic limitations when applied to complex problems like the TSP. Traditional quantum local search approaches might not efficiently handle the unique challenges posed by TSP, such as the need for encoding cyclic paths and managing large combinatorial spaces without escalating the qubit requirements exponentially. Therefore, a specific adaptation of quantum local search tailored to the TSP's requirements is essential for improving solution quality and computational efficiency. In this work, we introduce novel path-slicing strategies <cit.> integrated with quantum local search to tackle the TSP. In Sec. <ref>, we detail our methodology, combining classical path initialization with quantum optimization. In Sec. <ref>, we present our results, demonstrating scalability and efficiency, and discuss potential future applications. Finally, in Sec. <ref>, we summarize our findings and their significance. § PATH-SLICING AS A QUANTUM LOCAL SEARCH METHOD In this section, we will introduce our proposed path-slicing method for quantum local search. Followed by a description of path initialization method, and the different strategies for path slicing. As illustrated in Fig. <ref>, we begin with a path initialization method to obtain the initial path for a given TSP instance. Following this, we employ the path-slicing strategies discussed later to define subproblems. These subproblems are then addressed using quadratic unconstrained binary optimization (QUBO) solvers, particularly quantum solvers, which may operate in parallel or sequentially. Once we obtain the new sub-path for each slice, we accept the update if it results in a shorter distance and reject it otherwise. This step leads to the formation of a new complete path. We iteratively proceed to the iteration step of the quantum local search until the predetermined termination threshold. After iteration, the final path is presented as the result. §.§ Random variable picking and path slicing We begin by examining the rationale behind employing the path-slicing method for the TSP. The conventional QUBO formulation of the TSP <cit.> without the closed path constraint is expressed as follows: H_ce = A∑_i=1^n (1 - ∑_j = 1^n x_i,j)^2 + A∑_j=1^n (1 - ∑_i = 1^n x_i,j)^2 +∑_u,i D_ui∑_j=1^n-1 x_u,j x_i,j+1, where n represents the number of cities, and the binary variable x_i,j∈{ 0, 1} denotes the presence (or absence) of the salesman at location i during step j. D_ui indicates the distance between locations u and i, while A serves as the penalty for constraint violations. The first and second terms ensure that the salesman visits exactly one location at each step and that each location is visited only once in the entire route. 
The third term, the objective function, aims to minimize the total distance traveled. The indices i = 1,..,n and j = 1, ..., n result in a total of n^2 variables of x. It is important to note that in this formulation, a valid path representation using x would appear as a sequence of n one-hot vectors. Due to the number of variables required for the TSP scaling as n^2, the number of required qubits increases rapidly, making it challenging for the fixed-size hardware of quantum computers to directly encode all variables onto qubits. In general, for combinatorial optimization problems where the problem size exceeds the solver's capabilities, it is common to employ variable-wise local search techniques such as random variable picking to define subproblems and solve them iteratively. If the solver is a quantum solver, this approach is referred to as quantum local search <cit.>. It is logical to assert that different types of combinatorial optimization problems may benefit from distinct local search strategies that perform more effectively. This study proposes a quantum local search method incorporating specific path-slicing strategies for TSP. As depicted in Fig. <ref>, we graphically compare different methods of forming subproblems within a complete TSP. We have solved eq. (<ref>) to find the optimal path of the subproblem. With random variable picking, once we have formed three subgroups, these can each generate their own sub-paths, maintaining a certain degree of flexibility to reassemble these sub-paths into a complete path <cit.>. In contrast, with the path-slicing method <cit.>, after forming an initial closed path that visits all locations, we can divide the path into segments to optimize the distance within each segment. This approach not only simplifies the process of reconstructing the full path from the sub-paths but also reduces the number of variables required for each subproblem from m^2 to (m-2)^2, where m represents the number of the cities in each segment since the start point and the end point are fixed. §.§ Convex hull insertion heuristic initialization Continuing from the previous section, to effectively employ the path-slicing method, we first need an initial path of the complete TSP. In this hybrid quantum-classical heuristic approach, we do not solely rely on quantum solvers to address the TSP; instead, we utilize a classical method for path initialization. This differs from the conventional quantum annealing method, where the initial state of the spin configurations is prepared in the |+⟩^⊗ N state (the ground state of ∑_i=1^N σ_x^(i)), where N is the number of qubits, and σ_x^(i) is the Pauli x operator for site i. Our initial path is generated using a classical technique, specifically the convex hull insertion heuristic <cit.>, which creates paths with relatively short distances compared to random paths. However, these paths still exhibit a gap from the minimal distance path. Despite this, the method offers a favorable computational complexity of O(n^2), where n is the number of locations <cit.>. §.§ K-means strategy and anti-k-means strategy After establishing an initial path, we explore the strategies employed in this study. In the path-slicing method, the current path should be divided into several clusters. We have employed k-means clustering to divide the current path. K-means clustering is a popular unsupervised learning algorithm used to partition data into k distinct clusters, minimizing the variance within each cluster <cit.>. 
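To make the variable counting of the path-slicing step concrete, the following sketch builds the QUBO for a single slice whose two endpoint cities are fixed, following the formulation above with penalty A. It is our own illustrative code (the helper name slice_qubo and its interface are not taken from the paper), and any QUBO sampler, classical or quantum, could be applied to the returned dictionary; only the (m-2) interior cities and (m-2) interior steps are encoded, which is where the reduction from m^2 to (m-2)^2 variables comes from.

import itertools
import numpy as np

def slice_qubo(coords, penalty_A):
    """QUBO for one path slice: coords[0] and coords[-1] are the fixed
    endpoints, the m-2 interior cities are re-ordered by the solver.
    Variable x[(i, j)] = 1 if interior city i occupies interior step j,
    giving (m-2)^2 binaries instead of m^2 for a full tour."""
    coords = np.asarray(coords, dtype=float)
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    m = len(coords)
    interior = list(range(1, m - 1))            # city indices 1 .. m-2
    steps = range(len(interior))                # interior steps
    Q = {}

    def add(u, v, w):
        key = (u, v) if u <= v else (v, u)
        Q[key] = Q.get(key, 0.0) + w

    # one-hot penalties: A(1 - sum_i x_{i,j})^2 and A(1 - sum_j x_{i,j})^2,
    # expanded with x^2 = x (constant offsets dropped)
    for j in steps:
        for i1, i2 in itertools.combinations(interior, 2):
            add((i1, j), (i2, j), 2 * penalty_A)
    for i in interior:
        for j1, j2 in itertools.combinations(steps, 2):
            add((i, j1), (i, j2), 2 * penalty_A)
        for j in steps:
            add((i, j), (i, j), -2 * penalty_A)  # -A from each of the two constraints

    # distances: fixed start -> first step, step -> step, last step -> fixed end
    for i in interior:
        add((i, 0), (i, 0), D[0, i])
        add((i, len(interior) - 1), (i, len(interior) - 1), D[i, m - 1])
    for j in range(len(interior) - 1):
        for i1 in interior:
            for i2 in interior:
                if i1 != i2:
                    add((i1, j), (i2, j + 1), D[i1, i2])
    return Q

Decoding the lowest-energy assignment back into an ordering of the interior cities, and accepting it only if the sliced sub-path becomes shorter, reproduces the accept/reject step of the local search loop described above; the penalty A must be chosen large enough to dominate the distance terms.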
The intuition behind k-means is to find the centroids of the clusters and assign each city to the nearest centroid. The previous work also employs k-means clustering to TSP <cit.>. A potential issue with directly applying the k-means method to a TSP instance is that, although it is feasible, there is a possibility that it might create groups where optimal solutions are unattainable (Figure 5 in <cit.>) and may not align with our predetermined path. In this study, we transformed the data before applying k-means. As illustrated in Fig. <ref>, we first transform the information about the relative distances between neighboring locations into a circular format, where the arc distances reflect their proximities in the original path. For each position in the path with n locations, we have the distance data between the connected points: [d_0, d_1, …, d_n-1]. Assuming a circle radius of r = 1, we calculate the total circumference C = ∑_i=0^n-1 d_i. To place these data points on a circle, we calculate the angular displacement for each segment: Δθ_i = 2π(d_i/C) for i = 0, 1, …, n-1. The cumulative angles θ_i are then calculated as follows: θ_0 = 0, θ_i = ∑_j=0^i-1Δθ_j for i = 1, 2, …, n . Next, we convert each angle to the corresponding (x,y) coordinates on the circle: x_circle, i = r cos (θ_i) for i = 1, …, n, y_circle, i = r sin (θ_i) for i = 1, …, n. The resulting arrays x_circle and y_circle contain the coordinates of the points mapped onto the circle. Once the data is configured in a circular format, the k-means method is applied to segment the data into subproblems. Upon mapping the locations back to the original path, we derive k slices of the TSP path. In contrast to traditional k-means, where each cluster centroid denotes the center of a slice, we also introduce a comparative strategy named anti-k-means. This method utilizes the k-means centroids as breakpoints for the slices, assuming that in some cases, this will be a more effective way to partition the path, as depicted in Fig. <ref>. We will later see that these strategies yield better results under different settings. §.§ Hybrid strategy and hybrid-anti strategy Although the strategies described above seem promising, they can encounter challenges during quantum local search, particularly with both potentially becoming trapped in local minima within the cluster settings. Here, the clustering—achieved through k-means or anti-k-means—might reach a satisfactory score but still falls short of adequately solving the TSP(s). To address this, we propose a mechanism that allows the cluster configuration to escape these local minima. Consequently, during the quantum local search iterations, we introduce random slicing clusters intermittently with k-means or anti-k-means iterations. In these random slicing clusters, a random displacement between the slice indices is applied, effectively generating random slices. The randomly produced slices provided a “jump” between k-means or anti-k-means iterations. For instance, the slicing method for quantum local search iterations could alternate between random and k-means slices—termed a hybrid strategy—as in “random-(k-means)-random-(k-means)-...", and similarly, the alternative using anti-k-means would follow a sequence of “random-(anti-k-means)-random-(anti-k-means)-...", which we refer to as the hybrid-anti strategy. § RESULT AND DISCUSSION To align our study with the state-of-the-art, we have selected TSP instances from TSPlib<cit.>, similar to those used in <cit.>. 
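Before turning to the results, the circle mapping and the two clustering strategies described above can be summarized in a short sketch. This is our own illustrative code (it assumes scikit-learn's KMeans estimator, and the function names are ours), not the implementation used by the authors.

import numpy as np
from sklearn.cluster import KMeans

def circle_embedding(edge_lengths):
    """Map consecutive edge lengths d_0..d_{n-1} of the current closed path
    onto the unit circle so that arc lengths reflect the original proximities."""
    d = np.asarray(edge_lengths, dtype=float)
    dtheta = 2 * np.pi * d / d.sum()
    theta = np.concatenate(([0.0], np.cumsum(dtheta)[:-1]))   # cumulative angles
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

def kmeans_slices(edge_lengths, k, anti=False, seed=0):
    pts = circle_embedding(edge_lengths)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pts)
    if not anti:
        # k-means strategy: each cluster of consecutive positions is one slice
        return km.labels_
    # anti-k-means strategy: the position closest to each centroid becomes a
    # breakpoint, so slices lie *between* the k-means centres
    breaks = sorted(int(np.argmin(np.linalg.norm(pts - c, axis=1)))
                    for c in km.cluster_centers_)
    labels = np.zeros(len(pts), dtype=int)
    for s in range(len(breaks)):
        start, end = breaks[s], breaks[(s + 1) % len(breaks)]
        if start < end:
            labels[start:end] = s
        else:                       # wrap-around arc of the closed path
            labels[start:] = s
            labels[:end] = s
    return labels

Each label defines one slice of the current path; the slice interiors are then re-optimized with their boundary cities held fixed, as in the sub-QUBO of the previous subsection, and a hybrid run simply alternates these labels with randomly displaced slice indices.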
In addition to the 16-location instance, ulysses16, and the 38-location instance, djibouti38, we also explore a larger instance not examined in <cit.>, namely the 48-location instance att48, which represents the capitals of 48 US states. With examples of using both simulated annealing and quantum annealing (D-Wave quantum annealer) for subproblems, we demonstrated that our method is a general framework capable of integrating any solver, both quantum and classical, provided they can tackle QUBO problems. As shown in Fig. <ref>, we present a comparative analysis of various slicing strategies applied to the TSP across three instances. The strategies examined include k-means, anti-k-means, random, hybrid, and hybrid-anti approaches. The figures illustrate the minimum and average distances achieved with different cluster configurations (represented by different symbols) for each method based on 100 experiments for simulated annealing and 10 experiments for quantum annealing, where the error bars are calculated by standard deviation. Each experiment incorporates 100 iterations of quantum local search. The red dashed line marks the optimal distance for each instance, serving as a benchmark for assessing the effectiveness of each strategy, while the blue lines indicate the best results obtained in previous studies <cit.>. For the ulysses16 instance, both simulated and quantum annealing with a hybrid strategy achieved the optimal distance of 6859. In the simulated annealing case, the random slicing method appears to be the most effective strategy with less variance. For the djibouti38 instance, the random strategy consistently outperformed other strategies on average in the simulated annealing case. In this instance, we also explored different numbers of clusters, finding no clear trend indicating the optimal cluster size. Remarkably, we managed to achieve the optimal distance of 6656 for djibouti38, whereas the related work reported the best distance of 7396. In the quantum annealing case, we could not obtain the optimal distance by both hybrid and hybrid-anti strategies. However, our best distance obtained is 6707. This is still better than the related work. It should be noted that this value is mainly contributed by the convex hull initialization. The initial path is so good that quantum annealing could not find a better path during the optimization process. In the att48 instance, both quantum and simulated annealing achieved almost the same quality in the best solution. This result indicates that quantum annealing can improve the initial path in this instance. The hybrid strategy proved to be effective, although it did not reach the optimal distance. The best result was quite close, at 10784 compared to the optimal 10628. Interestingly, in this case, a larger number of clusters—which correspond to smaller subproblems—led to better performance, except for the anti-k-means strategy. In Table <ref>, we present the required subproblem sizes associated with varying numbers of clusters for different TSP instances. The subproblem size is determined by the results of k-means/anti-k-means clustering used in quantum local search, with a larger number of clusters resulting in smaller subproblems. In these cases, the reduction in variables is significant. Notably, referring to the maximum subproblem sizes—which could correlate to the number of qubits required by quantum hardware—we reduced qubit usage from 256 to 64 for ulysses16, from 1444 to 196 for djibouti38, and from 2304 to 144 for att48. 
While methods for reducing qubit requirements are well established, the high quality of the solutions we obtained is particularly noteworthy. We achieved optimal distances in instances where previous work fell short, and we could tackle larger problem instances with quite impressive solution quality. This result is achieved by combining the classical initialization strategy, which can already find a good path in some instances, with simulated or quantum annealing, which is then used to attempt to improve the solution quality from this ‘already good’ path. § CONCLUSION This work has provided an exploration of the path-slicing strategies applied to the TSP using quantum local search methods. Our approach leverages a combination of quantum and classical techniques to efficiently solve subproblems derived from the TSP, addressing the limitations posed by the current era of NISQ hardware, particularly the restrictions on qubit numbers. Our investigation into various path-slicing methods, including k-means, anti-k-means, hybrid, and hybrid-anti strategies, has demonstrated their efficacy in optimizing TSP solutions. The results from applying these strategies across different TSP instances, such as ulysses16, djibouti38, and att48, indicate significant improvements in finding near-optimal paths with fewer computational resources compared to existing methods. A key finding of this research is the effectiveness of random slicing in combination with structured clustering methods like k-means and anti-k-means, which helps prevent the search from becoming trapped in local minima, a common issue in quantum local search applications. The hybrid and hybrid-anti strategies, in particular, have shown promising results by introducing variability and flexibility into the search process, thereby enhancing the overall search dynamics. Moreover, applying our quantum local search method has reduced the quantum resource requirements. By efficiently managing qubit utilization, our method has extended the practical applicability of quantum computing solutions to larger TSP instances that were previously challenging due to hardware limitations. The proposed path-slicing strategies for quantum local search contribute to the field of combinatorial optimization by providing a scalable, efficient, and effective approach to solving TSPs. These strategies not only address the inherent limitations of quantum hardware but also set a foundation for future research in the optimization of other complex problems using hybrid quantum-classical approaches. The results encourage ongoing and future studies to refine these techniques further and explore their application in broader contexts, pushing the boundaries of what is computationally feasible with emerging quantum technologies. IEEEtran
http://arxiv.org/abs/2407.12211v1
20240716232128
On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss
[ "Mohammed Fellaji", "Frédéric Pennerath", "Brieuc Conan-Guez", "Miguel Couceiro" ]
cs.LG
[ "cs.LG" ]
On the Calibration of Epistemic Uncertainty M. Fellaji et al. CentraleSupélec, Université Paris Saclay, CNRS, LORIA, France {mohammed.fellaji,frederic.pennerath}@centralesupelec.fr Université de Lorraine, CNRS, LORIA, France brieuc.conan-guez@univ-lorraine.fr miguel.couceiro@loria.fr On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss Mohammed Fellaji1 Frédéric Pennerath1 Brieuc Conan-Guez2 Miguel Couceiro2 July 16, 2024 ======================================================================================= § ABSTRACT The calibration of predictive distributions has been widely studied in deep learning, but the same cannot be said about the more specific epistemic uncertainty as produced by Deep Ensembles, Bayesian Deep Networks, or Evidential Deep Networks. Although measurable, this form of uncertainty is difficult to calibrate on an objective basis as it depends on the prior for which a variety of choices exist. Nevertheless, epistemic uncertainty must in all cases satisfy two formal requirements: firstly, it must decrease when the training dataset gets larger and, secondly, it must increase when the model expressiveness grows. Despite these expectations, our experimental study shows that on several reference datasets and models, measures of epistemic uncertainty violate these requirements, sometimes presenting trends completely opposite to those expected. These paradoxes between expectation and reality raise the question of the true utility of epistemic uncertainty as estimated by these models. A formal argument suggests that this disagreement is due to a poor approximation of the posterior distribution rather than to a flaw in the measure itself. Based on this observation, we propose a regularization function for deep ensembles, called conflictual loss in line with the above requirements. We emphasize its strengths by showing experimentally that it fulfills both requirements of epistemic uncertainty, without sacrificing either the performance nor the calibration of the deep ensembles. § INTRODUCTION All neural networks, from small discriminative classifiers to large generative models, can be seen as probabilistic models that estimate some distribution. This distribution captures the uncertainty of the predicted variable, induced both by latent factors that are inherent in the process that has generated the data, and by the model bias, which reflects the lack of expressiveness of the model to represent the true distribution. The uncertainty related to latent factors is sometimes referred to as aleatoric or data uncertainty in contrast to the completely different epistemic or model uncertainty that is meant to measure estimator variance / overfitting, i.e., the uncertainty about the output distribution itself, due to the limited size of the training dataset. While every probabilistic model de facto takes into account aleatoric uncertainty, epistemic uncertainty becomes measurable only by models whose output distribution is a random variable. This includes Bayesian Neural Networks (BNN) that apply (approximate) Bayesian inference on network weights <cit.>, Deep Ensembles (DE) that sample the prior distribution <cit.>, Prior Networks <cit.>, Evidential Deep Learning (EDL) <cit.> and derived methods that directly learn parameters of a second-order distribution. 
All of these models produce not only the posterior predictive distribution for a given input, but also a measure of epistemic uncertainty that quantifies the part of uncertainty that can be further reduced by observing more data in the vicinity of the input. Since this measure is usually computed as the mutual information between the model output and the parameters (conditioned on the input and the training dataset), we will stick to this choice in the sequel, even if the choice of a better epistemic uncertainty metric remains an arguable subject, as discussed in <cit.>. Regardless of the choice of metric, epistemic uncertainty appears as a relevant criterion for deciding to label a new example in the context of active learning <cit.>, to tackle the exploration-exploitation dilemma in reinforcement learning <cit.>, or to detect OOD examples <cit.>, although some approaches advocate an even finer decomposition to either detect OODs, distinguishing between epistemic and distributional uncertainty <cit.>, or to account for procedural variability <cit.>, i.e., uncertainty coming from the randomness of the optimization procedure. Accurately quantifying epistemic uncertainty is thus crucial from both theoretical and application perspectives. Whereas there is a large body of work on the calibration of predictive uncertainty as produced by deep neural networks, including Bayesian ones <cit.>, to our knowledge, there is no work dealing specifically with the calibration of epistemic uncertainty. In this paper, we address the question of how to evaluate the quality of epistemic uncertainty produced by deep networks. One difficulty that may explain the lack of research in this field is that the amount of epistemic uncertainty depends on the prior distribution over parameters for which certain freedom of choice exists <cit.>, whether it is an informative or an objective prior like Jeffreys prior <cit.>. Consequently, the definition of a quantitative score to measure the quality of epistemic uncertainty appears as a questionable objective, and thus we do not consider it. Instead, we adopt a qualitative standpoint by stating two properties that every measure of epistemic uncertainty should ideally fulfill: the first property that we call hereafter data-related principle, states that the amount of epistemic uncertainty decreases as the model observes more data. The second property, referred to as model-related principle, states that epistemic uncertainty must increase with model complexity, i.e., the number of weights, as a consequence of the curse of dimensionality. While we can check these requirements are de facto true for a simple probabilistic model like Bayesian linear regression, it is not obvious that this still holds for Bayesian deep networks or their alternatives, since the parameters of these more complex non-convex models converge somewhat randomly to one of the many local optima. With this in mind, we conducted an experimental study of epistemic uncertainty as produced by Deep Ensembles <cit.>, MC-Dropout <cit.> and Evidential Deep Learning <cit.>. The results are surprising: we observe that in all data regimes and for all tested methods, the average measures of mutual information computed on the test set, completely contradict the model-related principle: the larger the model, the smaller the epistemic uncertainty when precisely the opposite is expected. The data-related principle seems globally but not perfectly respected, with some blatant counter-examples. 
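For reference, the mutual-information measure referred to above is obtained in practice from a finite set of posterior samples (ensemble members, MC-Dropout passes, or samples of a second-order distribution) through the usual decomposition into total minus expected entropy. The snippet below is a minimal sketch of this standard estimator, in our own notation.

import numpy as np

def epistemic_mutual_information(member_probs, eps=1e-12):
    """member_probs: array of shape (S, B, C) holding the predicted class
    probabilities of S posterior samples (ensemble members or MC-Dropout
    passes) for a batch of B inputs over C classes.
    Returns a per-input estimate of I(Y; Theta | x, D) in nats."""
    p = np.clip(member_probs, eps, 1.0)
    mean_p = p.mean(axis=0)                                  # posterior predictive
    total = -(mean_p * np.log(mean_p)).sum(axis=-1)          # H(Y | x, D)
    expected = -(p * np.log(p)).sum(axis=-1).mean(axis=0)    # E_theta[H(Y | x, theta)]
    return total - expected                                  # epistemic part

Averaging this quantity over a test set yields the kind of aggregate epistemic-uncertainty values whose behaviour is analysed in the rest of the paper.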
The same paradoxes are consistently observed, even when calibration techniques are used like label smoothing <cit.> and confidence penalty <cit.>. This disagreement between expectation and reality thus raises the question of the true utility of epistemic uncertainty as estimated by these models. A necessary condition to solve these inconsistencies is to ensure that epistemic uncertainty is maximal in the absence of data, a property that can only result from an appropriate choice of prior, or equivalently, of the regularizer in the loss function. Based on this observation, we designed an elementary regularization function for ensembles of deep classifiers, called conflictual loss for reasons that will become obvious later on. We emphasize the strengths of the resulting Conflictual Deep Ensembles by showing experimentally that it restores both properties of epistemic uncertainty, without sacrificing either the performance or the calibration of the deep ensembles. To summarize, our contributions are the following: * A method for assessing the quality of epistemic uncertainty of a model based on two principles. * The empirical demonstration, using this method, that common models and calibration techniques do not satisfy (and even sometimes contradict) these quality criteria. * A theoretical argument that suggests that these inconsistencies are due to the poor posterior approximation and not to the metric itself. * A new regularizer for deep ensembles, called the conflictual loss function, designed to ensure the data-related principle of epistemic uncertainty. * Experimental results showing that this technique restores both quality criteria of epistemic uncertainty without degrading the other performance scores (accuracy, calibration, OOD detection). The rest of the paper is structured as follows: Section <ref> presents some previous works in the field of uncertainty, calibration, and prior. Section <ref> formalizes the two fundamental properties of epistemic uncertainty and gives some theoretical insights about them. Section <ref> describes the conflictual loss for deep ensembles. Section <ref> presents experimental results and section <ref> concludes. § RELATED WORK The calibration of a model reflects how its predictive distributions are consistent with its errors on a test dataset. In <cit.>, the authors discussed existing calibration metrics such as the Expected Calibration Error (ECE) <cit.> and introduced new measures like Static Calibration Error (SCE) to better take multiclass problems into account. Post-hoc calibration techniques are also popular: histogram binning <cit.>, isotonic regression <cit.> and temperature scaling <cit.> to name the most common. The latter performs generally well on in-domain data but falls short when the data undergoes a distributional shift or are out-of-distribution (OOD) <cit.>. Few works have studied calibration of posterior predictive <cit.>. When it comes to priors and regularization in deep learning, there is a considerable literature that can be classified into two main categories: parameter-based (regularizers L1, L2, etc.) and output-based (such as label smoothing <cit.>, confidence penalty <cit.>). The latter techniques are introduced primarily to avoid peaky outputs, which are a sign of overfitting <cit.>. Priors have given rise to numerous works in general, with some specific to Bayesian deep learning (see survey <cit.> on this subject). 
To the best of our knowledge, there is no work focusing on calibrating epistemic uncertainty based on priors and without using an additional model to compute uncertainties. To avoid the multiple evaluations of BNNs at inference time, some authors also propose to estimate the epistemic uncertainty more directly: the model predicts its uncertainty about the prediction. More precisely, the model produces a second-order distribution, that is a distribution over class distributions. Such “all in one” approaches (Evidential Deep Learning <cit.>, Prior Networks <cit.>, Information Aware Dirichlet <cit.>, to name a few) require some specific training schemes. Indeed, <cit.> shows that training such models in a classical way, by minimizing a second-order loss, does not entail well-calibrated uncertainty estimates. The task of OOD detection is important in many applications. OOD samples are fundamentally different from the samples used during training <cit.>. In theory, these examples should yield a high epistemic uncertainty. The task of OOD detection is challenging as shown in <cit.>: most of the methods tested on different benchmarks resulted in high-confidence predictions on OOD samples. As shown in <cit.>, DE and MC-Dropout are competitive benchmarks across different tasks including OOD detection. Additionally, some approaches to detect OOD data involve training the model with both in-domain and OOD samples <cit.>. However, they have some limitations, such as the reliance on the choice of the OOD dataset and the fact that epistemic uncertainty is shifted into aleatoric uncertainty, as discussed in <cit.>. § PRINCIPLES OF EPISTEMIC UNCERTAINTY In this section, we introduce two principles that characterize an idealized measure of epistemic uncertainty. For each of them, we give a formal definition, we justify why it is a desirable feature and we give a first analysis of its practical validity. In what follows, we consider a family of probabilistic models that estimate the distribution of some measurable output y ∈ 𝒴 given some input vector x ∈ 𝒳, thanks to a parametric function f_θ parameterized by a vector θ ∈ Ω of parameters, i.e., p(y | x, θ) = f_θ(x). As Bayesian inference requires the definition of a prior p(θ), the notation for a formal model refers hereafter to the pair ℳ = (f_θ, p(θ)). Given such a model conditioned on some training sample D ∈ (𝒳×𝒴)^*, we consider a metric function U_D,ℳ: 𝒳→ℝ^+ that maps an input x to the measure of epistemic uncertainty conveyed by the joint distribution p(y, θ | x, D). We assume that this metric grows with epistemic uncertainty and is non-negative. This is the case of common metrics like the mutual information U^[i]_D,ℳ(x) = I(Y; Θ | x, D) = H(Y | x, D) - H(Y | Θ, x, D), where H(· | ·) denotes conditional entropy. In case of regression (i.e., y ∈ ℝ), another option is the difference of variances U^[v]_D,ℳ(x) = Var(Y | x, D) - 𝔼_Θ[Var(Y | x, Θ)], where Var(Y | x, D) refers to the variance of the output when it follows the posterior predictive p(y | x, D) and 𝔼_Θ[Var(Y | x, Θ)] refers to the average of variances of individual models p(y | x, θ), i.e., 𝔼_Θ[Var(Y | x, Θ)] = ∫ Var(Y | x, θ) p(θ | D) dθ. §.§ Data-related Principle of Epistemic Uncertainty The first principle simply states that epistemic uncertainty reduces as more training samples become available. An epistemic uncertainty metric U_D,ℳ is (ideally) a non-increasing function of the training samples D, i.e., ∀ x, ∀ ℳ, ∀ D_1, ∀ D_2, D_1 ⊆ D_2 ⇒ U_D_1,ℳ(x) ≥ U_D_2,ℳ(x) . The reason why this property is desirable is illustrated by the next thought experiment in the context of active learning: suppose that a model has been trained on samples D_1 so far. Now comes a new unlabelled sample x.
Since epistemic uncertainty is the ideal criterion for measuring the information that could be gained by labeling a new sample, the measure U_D_1,ℳ(x) is compared to some decision threshold σ in order to decide whether the sample is worth being labeled by an expert. Assuming that this is not the case, sample x is discarded. Later on, the train set has been enriched with more samples D_2. If it is possible that U_D_1∪D_2,ℳ(x) > U_D_1,ℳ(x), then it is also possible that U_D_1∪D_2,ℳ(x) ≥ σ, so that this time the system would have asked for the labeling of x. This behavior would go against what we expect, i.e., a model trained on more data has necessarily learned more information. The first principle bans such a scenario. Next, we analyze the extent to which mutual information satisfies this principle. Although examples can be found such that the observation of a specific sample increases mutual information rather than reducing it, we can ask whether the first principle is satisfied when averaging over all possible observations, i.e., in expectation. At first glance, we would be tempted to answer in the negative, as there exist random variables X, Y and Z such that I(X; Y) < I(X; Y | Z). For making the answer positive, we need the assumption that samples are iid. The mutual information metric satisfies the first principle in expectation with respect to new random iid samples D_2, i.e., ∀ x, ∀ ℳ, ∀ D_1, U^[i]_D_1,ℳ(x) ≥ 𝔼_D_2[U^[i]_D_1∪D_2,ℳ(x)] . For the sake of clarity and without loss of generality, we assume D_2 is made of one single sample (X', Y'). Also for conciseness, we denote κ the triplet (D_1, x, X') as these terms have no incidence on the proof below; note that conditioning on the unlabeled input X' alone does not modify the posterior over Θ. Then, considering a “test” input x for which we want to estimate the epistemic uncertainty conveyed by its output Y, we consider the difference Δ I = U^[i]_D_1,ℳ(x) - 𝔼_D_2[U^[i]_D_1∪D_2,ℳ(x)] = I(Y; Θ | κ) - I(Y; Θ | Y', κ) = H(Y | κ) - H(Y | Θ, κ) - H(Y | Y', κ) + H(Y | Θ, Y', κ) = (H(Y | κ) - H(Y | Y', κ)) - (H(Y | Θ, κ) - H(Y | Θ, Y', κ)) = I(Y; Y' | κ) - I(Y; Y' | Θ, κ) . But Y and Y' are iid samples, i.e., they are independent given Θ and κ, and thus I(Y; Y' | Θ, κ) = 0. Hence, Δ I = I(Y; Y' | κ) ≥ 0, and the proof is now complete. §.§ Model-related Principle of Epistemic Uncertainty The second principle essentially expresses overfitting: given two models trained with the same set of samples, if one has more expressive power than the other, then it should have a larger measure of epistemic uncertainty since the choice of model candidates is wider, i.e., its posterior distribution is more spread out. While this principle seems intuitive, the formalization of the underlying notion of expressive power requires the use of complex theories of statistical learning (e.g., VC dimension), which we avoid since it is unnecessary. Indeed, we need only consider models that are, by construction, ordered by increasing complexity, as defined below. We say model ℳ_a = (f^a_θ_a, p_a(θ_a)) is a submodel of model ℳ_b = (f^b_θ_b, p_b(θ_b)), denoted by ℳ_a ⊑ ℳ_b, if θ_a is a subset of the parameters θ_b, so that θ_b = (θ_a, θ_b'), and there exists a constant vector θ^0_b' ∈ Ω_θ_b' such that ∀ θ_a ∈ Ω_θ_a, f^a_θ_a = f^b_(θ_a, θ^0_b') and p_a(θ_a) = p_b(θ_a | θ_b' = θ^0_b') . Moreover, when priors are chosen in such a way that individual parameters are independent, freezing θ_b' has no impact on the prior of θ_a, so that the condition on priors simplifies to p_a(θ_a) = p_b(θ_a). Since the submodel relation is reflexive, transitive, and antisymmetric, it defines a partial ordering on the set of parameterized models, that can be used to state the second principle.
An epistemic uncertainty metric , should be a non-decreasing function over the set of parameterized models, i.e., ∀, ∀, ∀_1, ∀_2, _1 _2 ⇒,_1() ≤,_2() To see why this principle is desirable, consider the following example in the field of explainability. Given two black-box models _1 and _2 with comparable performance, let's assume that ,_1 is larger on average than ,_2 when estimated on a test set. A larger value of epistemic uncertainty for a given input means more diverse and thus inconsistent stochastic functions p( | ,) as follows the posterior distribution. Therefore, model _2 provides on average more similar and consistent functions than _1. Explaining the output p( | , ) of a model amounts to summarize in an understandable format these stochastic functions p( | ,) taken as a whole. Model _2 is thus preferred since the explanation of its output is shorter on average. But if the second principle is broken, it is possible that _1 is formally a submodel of _2. If so, _1 is by construction a restriction of _2, providing systematically shorter explanations, a contradiction. In summary, without the second principle, measures of epistemic uncertainty could make inconsistent Occam's razor principle and all related concepts (sparsity, minimum description length, etc). Focusing again on the mutual information metric, we see that the second principle is also verified in expectation, as it is always true that _1, ≤_1, _2,. Indeed, the left term of this inequality can be understood as a weighted average of mutual information _1^0_2, , of every submodel ^0_2. However, the weight of submodel ^0_2 is given by the posterior p(_2 | ). The interpretation of this inequality in expectation is therefore difficult and of little interest in the context of model selection since, in practice, we want to compare a model with a given submodel, not with the whole distribution of submodels. To illustrate, consider a Bayesian linear regression model with known homoskedastic variance and isotropic normal prior, i.e., = ∑_i ∈ℐΘ_i ψ_i() + ε with ε∼𝒩(0,σ^2) and Θ_i ∼𝒩(0, σ^2_0) . where the regressor functions ψ_i are chosen in a large collection indexed by ℐ. After making some variable selection, we can force to zero some coefficients θ_i, only keeping regressors of the index in ℐ' ⊆ℐ. As priors on coefficients are independent, the definition of the resulting submodel _ℐ' is obtained just by replacing ℐ by ℐ' in the above definition. Now given such a submodel _ℐ', we would like to ensure the measure of epistemic uncertainty is smaller for _ℐ' than for _ℐ. For the sake of simplicity, suppose that regressor functions are decorrelated, i.e., ψ(X) ψ(X)^T = Id. It can be shown that the epistemic uncertainty of submodel _ℐ' as estimated by the difference of variances, is [v],_ℐ'() = , - ,, = |ℐ'|/σ_0^-2 + || σ^-2 . As this value increases with the number |ℐ'| of parameters and decreases with the number || of examples, the first and second principles are always satisfied. This result naturally raises the question of whether it can be generalized to models like deep neural networks. As there are no simple analytical answers for such complex models, we present an experimental study in section <ref> to assess the extent to which the two principles are satisfied by various classification models. This includes the method of Conflictual Deep Ensembles, which we introduce in the next section. § CONFLICTUAL DEEP ENSEMBLES As we shall see in Sect. 
<ref>, several classifiers present abnormally low levels of epistemic uncertainty in the low-data regime. This “hole” is contrary to the first principle which states that epistemic uncertainty should be maximal in the absence of training data. Why does this happen? By rewriting mutual information | , as | , = ∫ p( | ) p( | , )p( | , ) d , we can interpret it as a weighted average of divergence between predictions p( | , ) of individual models and prediction p( | , ) of the averaged model. Therefore, the hole reflects the absence of diversity between output distributions p( | , ) in the low-data regime. But in this regime, this lack of variability is mostly a consequence of the choice of the prior or, equivalently, of the regularization term in the loss function. This explains why the hole is particularly visible in experiments using label smoothing, since this regularization technique drives output distributions closer to the same uniform distribution. This observation also suggests that designing a prior that favors diversity or, in other words, discordance between output distributions, could fill the hole of epistemic uncertainty. Such an objective can be achieved simply by constructing a so-called conflictual deep ensemble, where each classifier in the ensemble slightly favors a class of its own. In the absence of data, these slight tendencies are enough to create discordance in the output distributions and therefore a high level of mutual information. In practice, a conflictual deep ensemble of order k is implemented as an ensemble of k × C deep classifiers such that every class c ∈{1, …, C} is mapped to k models {^c_i }_1≤ i ≤ k. Denoting by P( | , ) the probability of class as predicted for input by model , we define the conflictual loss for class c as L_c() = - ∑_(,) ∈( log P( | , ) + λ log P(c | , ) ), where the first term is the log-likelihood and the second term is the bias that slightly favors class c. This bias term can be interpreted as if for each observed example (,) in the train set, we add λ faked examples of selected class c. In practice λ has been empirically fixed to 0.05, meaning there is one faked example for 20 real examples. We then train every model of the ensemble independently, using for model ^c_i the loss function L_c. The Conflictual Loss both resembles and differs from Label Smoothing: like LS, the conflictual term log P(c | , ) is not an independent regularization term, but factorized into the sum of the log-likelihoods. However, unlike LS which encourages classifiers to be more concordant by promoting the uniform distribution, the Conflictual Loss encourages classifiers to be contradictory. Existing works <cit.> have sought to address the diversity of the models in the ensemble. Their focus was on an “anti-regularization” of the model's weights resulting in weights with high magnitudes without sacrificing the model's performance. Although, by construction, Conflictual loss aims at creating diversity in the ensemble outputs, we expect each model in the ensemble to adjust its weights accordingly. § EMPIRICAL ANALYSIS We conducted several experiments to assess to what extent both principles of epistemic uncertainty discussed in Sect. <ref> are verified. We compared Conflictual Deep Ensemble to MC-Dropout <cit.>, Deep Ensemble <cit.> and EDL <cit.>. In addition, we tested Label Smoothing (LS) <cit.> in combination with MC-Dropout. 
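Returning to the conflictual loss defined above, a minimal PyTorch-style rendering of the loss for one ensemble member is sketched below. It is our own illustration (the authors' actual implementation is available in the repository linked in the appendix), using the value λ = 0.05 quoted in the text and a batch mean in place of the sum over the training set, which only changes the normalization.

import torch
import torch.nn.functional as F

def conflictual_loss(logits, targets, assigned_class, lam=0.05):
    """Loss L_c for one ensemble member that slightly favours `assigned_class`.
    logits: (B, C) raw scores; targets: (B,) true class labels."""
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, targets)            # -log P(y | x, theta), batch mean
    bias = -log_p[:, assigned_class].mean()     # -log P(c | x, theta), batch mean
    return nll + lam * bias

A conflictual deep ensemble of order k then trains k × C such members independently, the members assigned to class c all being optimized with L_c.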
Moreover, we evaluated Confidence Penalty regularization combined with MC-Dropout and reported the results in Appendix <ref> due to the space limit. [Code available at: <https://github.com/fellajimed/Conflictual-Loss>] For the varying number of samples used to train the models, we considered, after a 20% validation-train split identical for all models, fractions of the entire training set that grow exponentially from 0.005 to 1. To evaluate the data-related principle, we made sure that by increasing the training set, new examples are added to the previous training set rather than randomly selecting a new independent subset from the entire training set. Additionally, for a fixed ratio, we emphasize that the models are trained on the same samples. We used Multilayer Perceptron (MLP) models with two hidden layers for a straightforward control of its size: since layers are dense, they are invariant under the permutation of neurons. As a consequence the submodel relation defined in Sect. <ref> is simplified: given two networks _1 and _2 of L dense layers whose sizes are respectively (n^[1]_1,…, n^[L]_1) and (n^[1]_2,…, n^[L]_2), _1 _2 ⟺ ∀ i, n^[i]_1 ≤ n^[i]_2 We thus consider for every method a chain of submodels whose sizes of the hidden layers grow exponentially, starting from (128,64) neurons up to (2048,1024). Epistemic uncertainty is estimated using 20 forward passes at inference time for MC-Dropout and 10 MLPs in the case of Deep Ensemble. We set the order k=1 for the Conflictual DE so that it also contains C = 10 MLPs. Each hidden layer is followed by a Dropout layer (p=0.3) and a ReLU activation function. These models are tested on three datasets: MNIST, SVHN and CIFAR10. For the former, we apply the MLPs to the raw images, whereas for the latter, image embeddings are computed using a pre-trained ResNet34 model and used as inputs for the MLPs. In all the heatmaps, we represent the sizes of the hidden layers on the x-axis, while the y-axis corresponds to the length of the training set. Empirically, we set the value of λ in Eq. <ref> to 0.05. The hyperparameters of label smoothing, confidence penalty, and EDL are taken from their respective papers <cit.> and are set to 0.1 (for the first two) and 0.01 for EDL. We refer the reader to Appendix <ref> for more details. Figure <ref> represents by heatmaps the average of mutual information estimated on the test set as a function of data and model sizes, while table <ref> summarizes these heatmaps, indicating the frequencies with which methods comply with the two principles. As aforementioned, the mutual information validates the two principles in expectation. Therefore, we report the average of the test set as a Monte Carlo estimate of this expectation. We see that all methods but Conflictual DE are in complete contradiction with the model-related principle: the larger the model, the smaller the epistemic uncertainty when precisely the opposite is expected. The data-related principle seems better respected, but not perfectly, especially on CIFAR10. In line with comments of Sect. <ref>, MC-Dropout LS is the method that most often breaks the first principle, even if none of these methods achieves a perfect score. Although the exact cause of these phenomena remains unknown, the partial violation of the first principle allows us to circumscribe the origin of the problem. According to property <ref>, we know mutual information necessarily satisfies the data-related property in expectation when estimated on the exact posterior distribution. 
Since the violation of this principle is not related to the metric, the only possible cause is a poor quality of the posterior approximation, due to procedural variability, i.e., convergence randomness during stochastic gradient descent. In comparison, Conflictual DE obtains almost perfect scores for both principles, proving the conflictual loss's regularizing effect. While the method was specifically designed to satisfy the first principle, compliance with the second principle was not expected at such level. It seems that the expressive power of networks serves as an amplifier for the discord sown between classifiers by the conflictual loss. As shown in Appendix <ref> and summarized in Tab. <ref>, the calibrated epistemic uncertainty resulting from Conflictual DE does not come at the expense of the performance of the model. In fact, Conflictual DE yields comparable or even superior performance compared to DE, as it performs the best overall on CIFAR10 in terms of accuracy (Fig. <ref>) and has comparable results to the best method on MNIST. When taking into account the accuracy of the output probabilities with the Brier score, Conflictual DE appears to be the best model on both datasets. We also checked the quality of epistemic uncertainty to discriminate between OOD and in-distribution (ID) examples, since epistemic uncertainty is expected to be higher for OOD samples than for the ID samples. We use FashionMNIST, SVHN, and CIFAR10 as OOD datasets for the models trained on MNIST, CIFAR10, and SVHN respectively. As shown in Fig. <ref>, Conflictual DE yields high AUROC overall on CIFAR10 and in the low-data regime on MNIST. MC-Dropout LS performs the worst on MNIST and CIFAR10 at the task of OOD detection as it sometimes results in lower epistemic uncertainty in OOD samples than ID samples. To some extent, we notice the same results in the task of misclassification detection which consists in distinguishing between the correctly classified samples and the misclassified samples based on epistemic uncertainty (see Appendix <ref>: Fig. <ref>). Finally, we take a look at the calibration of the models with different methods. As shown in Fig. <ref>, the SCE with Conflictual DE is the most consistent and the lowest, yielding calibrated models even with small training sets. As the number of hyperparameters to be set should be kept to a minimum, we wondered whether the hyperparameter λ introduced by Conflictual DE could replace advantageously that of weight-decay. Therefore we carried out experiments without weight-decay to study its effect on performance (Appendix <ref>). While some methods see their performance deteriorate, the results with Conflictual DE are more or less the same. This robustness suggests that Conflictual DE can be used without weight-decay, resulting in a zero balance for the number of hyperparameters. To conclude this section, we provide a qualitative and comparative summary of methods on Tab. <ref>. § CONCLUSION We have shown in this paper that, contrary to expectations, epistemic uncertainty as produced by state-of-the-art models, does not decrease steadily as the training data increases or as the model complexity decreases. We then introduced conflictual deep ensembles and showed that they restore not only the first but both principles of epistemic uncertainty, without compromising performance. 
Still, this work raises several questions and many perspectives of research: The exact reasons why epistemic uncertainty paradoxically collapses when network complexity grows, have yet to be found. While conflictual deep ensembles have been specially designed to satisfy the first principle, it is surprising how well this technique solves the second principle as well. Why is this so? Although the conflictual loss naturally suits deep ensembles, nothing is preventing it from being applied to BNNs, provided that the samples of the prior can be efficiently partitioned into class-specific subsets. The validity of such an approach remains to be demonstrated. The question remains whether the observed phenomena generalize to models more complex than MLPs or to other problems like regression. We hope that these and other perspectives will entail further studies on this subject. §.§.§ The authors have no competing interests to declare that are relevant to the content of this article. splncs04 § IMPLEMENTATION DETAILS Datasets. As detailed in the paper, the presented models were trained on MNIST and CIFAR10. A 20% validation-train split is first applied and then subsets were taken from the train sets (a total size of 48000 for MNIST, 58606 for SVHN, and 40000 for CIFAR10) for training. We made sure that the subsets used were balanced. CIFAR10 is encoded using a pre-trained ResNet34 model which is equivalent to the training of a ResNet34 model where the feature blocks are fixed and only the classification part is learned. Data transformations. We apply a standard normalization (mean of 0 and standard deviation of 1) to the datasets. The same transformation is applied then to the test samples (whether there are ID or OOD samples). Only the training samples are used to train the models and no data augmentation is applied. Training. The models were trained for 500 epochs on MNIST, 600 epochs on SVHN and 700 on CIFAR10. We used the SGD optimizer with weight decay, parameterized with (learning rate, momentum): (0.01, 0.95) for MNIST, (0.02, 0.95) for SVHN, and (0.04, 0.9) for CIFAR10. Each ensemble was trained on a single GPU and the best model (based on the validation loss) was tracked during training and used for early stopping and the learning rate scheduler. Duration. The experiments took a total of 311, 186, and 70 GPU hours on MNIST, SVHN, and CIFAR10 respectively. The training was done on a cluster with several GPUs. See Tab. <ref> for more details. The difference is mainly due to how CIFAR10 experiments are implemented: we first compute the embeddings of dimension 512 (once) using a pre-trained ResNet34 which are stored on disk. NOTE: the MNIST experiments took longer than the experiments of SVHN and CIFAR10 due to 2 main reasons: * We don't take into account the time needed to compute and save the embeddings for CIFAR10 and which is done only once. * Most importantly, the format of the files for the CIFAR10' embeddings was optimized and thus it is faster. We further apply the same optimization to the MNIST dataset (by using an identity "embedding") and MNIST experiments should run faster with the new changes. This format was applied to SVHN which explains the performance gains. Refer to the code for more details. The execution time for MNIST, after changes, should be less than the training time of SVHN. Implementation. All experiments were implemented with PyTorch. 
Code available at: <https://github.com/fellajimed/Conflictual-Loss> § ADDITIONAL RESULTS We report additional metrics for the experiments of Sec <ref> and we use the same representation: for each heatmap, the hidden layers on the x-axis and the number of samples used for training on the y-axis. They both have logarithmic scales. Results are on the same datasets and methods. § RESULTS FOR CONFIDENCE PENALTY In this section, we report the results of Sect. <ref> in the case of MC-Dropout with confidence penalty (MC-Dropout CP). We notice that, for both MNIST and CIFAR10, the results are comparable to those on MC-Dropout trained only with cross-entropy loss. The color scales are set per heatmap. § MODELS WITHOUT WEIGHT DECAY We also tested the same setup presented in Sect. <ref> but without weight decay. We use the same representation as in Sect. <ref> and Appendix <ref>. We notice that Conflictual DE stays consistent in all metrics: accuracy, calibration, Brier score, OOD detection, and misclassification detection. In addition, and most importantly, the 2 principles are verified whereas, for example, both are uncorroborated in the case of DE trained on CIFAR10. Furthermore, the model-related principle is not verified in any other method. Overall, the results with weight decay are relatively better.
http://arxiv.org/abs/2407.13336v1
20240718093401
Entanglement Entropy for the Black 0-Brane
[ "Angshuman Choudhury", "Davide Laurenzano" ]
hep-th
[ "hep-th", "gr-qc" ]
1]Angshuman Choudhury angshuman.choudhury@hertford.ox.ac.uk 1]Davide Laurenzano davide.laurenzano@physics.ox.ac.uk [1] Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom Entanglement Entropy for the Black 0-Brane [ ========================================== empty We analyse the entanglement entropy between the Black 0-Brane solution to supergravity and its Hawking radiation. The Black 0-Brane admits a dual Gauge theory description in terms of the Matrix model for M-Theory, named BFSS theory, which is the theory of open strings on a collection of N D_0-branes. Recent studies of the model have highlighted a mechanism of Black Hole evaporation for this system, based on the chaotic nature of the theory and the existence of flat directions. This paper further explores this idea, through the computation of the von Neumann entropy of Hawking radiation. In particular, we show that the expected Page curve is indeed reproduced, consistently with a complete recovery of information after the Black Hole has fully evaporated. A pivotal step in the computation is the definition of a Hilbert space which allows for a quantum mechanical description of partially evaporated Black Holes. We find that the entanglement entropy depends on the choice of a parameter, which can be interpreted as summarizing the geometric features of the Black Hole, such as the size of the resolved singularity and the size of the horizon. § INTRODUCTION Black Hole physics presents several unsolved, puzzling questions, and it is widely believed that answering those questions would shed light on the quantum nature of gravity. In particular, the problem of finding a unitary process describing formation and evaporation of a Black Hole is pivotal for solving the Black Hole information paradox <cit.>, a longstanding problem in modern physics. This idea can be made concrete in the context of the Gauge/Gravity duality conjecture <cit.>, in which one can study Black Hole formation/evaporation in the dual gauge theory. In this spirit, we focus in this note on the BFSS Matrix Theory. This is known to describe the low energy limit of string theory on a stack of D_0-branes. BFSS theory has been conjectured to describe the Discrete Light Cone Quantization (DLCQ) of M-Theory <cit.> and many consistency checks have been performed since the formulation of the duality. See e.g. <cit.> for reviews. In this interpretation, M-theory objects, including low energy degrees of freedom of 11-dimensional supergravity, are described in terms of bound states of D_0-branes, held together by open strings stretched between them. In particular multi-particle states can be described by block diagonal matrix configurations, which are possible because the bosonic Matrix Theory potential has flat directions. From a Gauge/Gravity duality viewpoint, the state in which all the branes and open strings form a single bound state, with eigenvalues clumped around the origin, has been conjectured <cit.> to be dual to a Black 0-brane or Black Hole configuration in the 't Hooft large N limit, where the matrix size N goes to infinity and the 't Hooft coupling λ=g^2_YMN is kept fixed. Deviations from the large N regime represent stringy corrections to the Black Hole. Within this interpretation, the instability due to the flat directions of the Matrix Theory potential is interpreted in terms of Hawking radiation, namely the emission of D_0-branes which escape from the bound state. 
Such a picture makes sense in light of the M-theory interpretation of BFSS theory, in which emitted branes represent massless particles, consistently with the idea of a Black Hole emitting massless Hawking radiation. Recently, it has been shown <cit.> that the emission of D_0-branes in BFSS theory can be characterized by the classical (high temperature) limit of the Matrix Theory. The argument is based on the existence of flat directions in the potential and the fact that the dynamics of the theory is chaotic. The result is that Black Hole evaporation can be fully resolved in terms of the emission of branes in the Matrix Theory. The consequence is that the Black 0-brane should not feature any loss of information after complete evaporation. The aim of this paper is to make this statement precise and show that it is indeed correct, by looking at the entanglement entropy between the Black Hole and its radiation. A key step will be to provide a quantum Hilbert space description of the Black Hole evaporation process, which will then allow us to compute the entanglement entropy. The construction of the Hilbert space will be similar to the one in <cit.>. The main difference with our work is the implementation of time evolution on the Hilbert space: while the authors of <cit.> consider the average over an ensemble of Hamiltonians to show that, in general, unitarity is not a necessary ingredient for the recovery of information after the Black Hole has evaporated, we consider unitary time evolution for a specific theory. Our central result will be to show that the profile of the von Neumann entropy as a function of time follows a Page curve, implying that the information is fully recovered, after complete evaporation. The paper is organized as follows. In section <ref>, we summarize the mechanism put forth in <cit.> for Black Hole formation and evaporation. In section <ref>, we characterize the Hilbert space of the quantized Black Hole plus radiation system. This will enable us to compute the entanglement entropy. The computation is carried out in section <ref>, in which the Page curve for the entropy is obtained. A more quantitative and model dependent description of Hawking radiation is presented in section <ref>. This is based on the distribution of the largest eigenvalue of the radial coordinate matrix, as proposed in <cit.>. Finally, we conclude with a brief summary of the results and outlook in section <ref>. § A MECHANISM FOR BFSS BLACK HOLE EVAPORATION As outlined in the introduction, there exists a duality that maps the BFSS gauge theory into a theory of gravity. Within this framework, the dual description of a bound state of N D0-branes on the gauge theory side is a Black 0-brane or Black Hole, in the gravity side. As such, BFSS theory provides a framework to study a Black Hole configuration for which micro-states can be represented in terms of Matrix Theory quantities. BFSS theory is the dimensional reduction of (9+1)-dimensional Super Yang-Mills theory to 0+1-dimensions, and the Lagrangian of the theory reads ℒ=1/2 g^2_YMTr[(D_t X^I)^2+[X^I, X^J]^2+i ψ^αΓ^0_αβD_t ψ^β+ψ^αΓ^I_αβ[X^I, ψ^β]] . Here X^I, I=1,...,9 are N× N bosonic hermitian matrices, D_t=∂_t-i[A_t, ·] is the covariant derivative, A_t is the U(N) gauge field, ψ^α, α=1,...,16 are N × N real fermionic matrices and Γ^0_αβ,Γ^I_αβ are gamma matrices in (9+1)-dimensions. 
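As a quick numerical illustration of the commutator term in this Lagrangian, the toy snippet below (our own code, with couplings and overall normalization conventions stripped out) evaluates -∑_I,J Tr([X^I, X^J]^2) for a few configurations: it vanishes for mutually commuting, e.g. diagonal, matrices and reduces to a sum over blocks for block-diagonal ones, which is the flat-direction structure exploited in the evaporation mechanism discussed below.

import numpy as np

def bosonic_potential(X):
    """X: list of hermitian matrices X^I.  Returns -sum_{I,J} Tr([X^I, X^J]^2),
    which is >= 0 and vanishes exactly when all matrices commute
    (couplings and normalization are omitted here)."""
    V = 0.0
    for XI in X:
        for XJ in X:
            C = XI @ XJ - XJ @ XI
            V -= np.trace(C @ C).real
    return V

rng = np.random.default_rng(0)

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

N, d = 6, 3
diag = [np.diag(rng.normal(size=N)) for _ in range(d)]       # commuting: flat direction
coupled = [random_hermitian(N) for _ in range(d)]             # generic bound-state-like
blocks = [np.block([[M[:N-1, :N-1], np.zeros((N-1, 1))],
                    [np.zeros((1, N-1)), M[N-1:, N-1:]]]) for M in coupled]

print(bosonic_potential(diag))      # ~0: commuting configuration costs nothing
print(bosonic_potential(coupled))   # > 0: off-diagonal strings carry energy
print(bosonic_potential(blocks))    # only the (N-1)x(N-1) block contributes: the single brane has decoupled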
When the matrices X^I are close to diagonal, the entries on the diagonal can be thought of as describing the coordinates of the N D_0-branes in spacetime, whereas off diagonal terms represent interactions through open string stretched between branes. The bosonic potential, namely V=1/2 g^2_YMTr[[X^I, X^J]^2] has flat directions in which it vanishes. This allows for multi-particle states in the M-Theory picture which are represented by block diagonal matrices in the Gauge theory side. The presence of flat directions is also pivotal in the description of the evaporation mechanism, as presented below. In the rest of the section, we summarise important features of the BFSS Black Hole, such as a mechanism for Black Hole formation and evaporation. The presentation will follow the arguments put forth in <cit.>. §.§ Black Hole Formation The formation of the BFSS Black Hole state, namely a single bound state of N D_0-branes clumped around the origin and kept together by open strings, is made possible by the fact that it is entropically favoured compared to a collection of N non-interacting branes. This is explained by looking at the bosonic part of the BFSS Lagrangian and, in particular, considering the matrices X^I, I=1,...9. We recall that the diagonal entries can be thought of as the position coordinates of individual D_0-branes, whereas off diagonal components describe interactions between branes by means of string stretched between them. When D_0-branes are non-interacting, the matrices X^I are diagonal, thus sitting in the flat direction of the potential (<ref>). Considering that those are valued in the adjoint of SU(N), the number of degrees of freedom for such a non-interacting configuration is O(N). On the other hand, if the D_0-branes are all interacting with one another and form a bound state, off-diagonal elements are non-zero and the eigenvalues (representing the position of branes) are clumped around the origin. This corresponds to the case in which the matrices are non-commuting, yielding a non-zero interaction potential. In this case, each matrix contributes O(N^2) degrees of freedom. It follows that such a configuration is entropically favoured, being accounted for by a larger number of degrees of freedom. Black Hole formation is, therefore, entropically favoured. Moreover, the matrix model is chaotic <cit.> meaning that, for almost all initial conditions, the entire phase space will be explored by the system over time. Therefore, eventually, the system will explore the bound state described above, leading to Black Hole formation. §.§ Black Hole Evaporation The same chaotic nature of the theory causes the Black Hole to eventually evaporate. Since the system is ergodic, it is possible for a single D_0-brane to explore configurations in which it is located far away from the bound state of the remaining N-1 branes. This causes the degrees of freedom describing the interaction of the D_0-brane with all other bounded D_0-branes to become heavy, as their mass is proportional to the length of the string stretched between the branes. When the mass becomes larger than the temperature, the classical treatment of the theory (<ref>) doesn't hold anymore and one loop effects become relevant. 
In particular, it has been shown <cit.> that at large distances a cancellation occurs: attractive and repulsive forces compensate, and such heavy degrees of freedom decouple from the dynamics and can be integrated out to obtain an effective description of the system.[The comparison between such an effective theory and graviton scattering in 11-dimensional supergravity has provided a perturbative check of the BFSS conjecture.] Such a cancellation relies on contributions coming from both the fermionic and the bosonic part of the BFSS potential. On reaching a certain threshold, the interaction becomes very feeble and can be considered to be turned off. When this happens, we say that a flat direction of the matrix theory potential (<ref>) opens up. We consider the D_0-brane to depart from the Black Hole at this point, as the off-diagonal terms in the corresponding row and column of the X^I matrices vanish, namely X_BH=([ X^' x_I; x_I^† x_D_0 ]) ⟶ X_BH+D_0=([ X^' 0; 0 x_D_0 ]) where X' is the (N-1)×(N-1) matrix describing the remaining (N-1) D_0-brane bound state/Black Hole, and x_I collects the off-diagonal entries coupling the departing brane to the bound state. Hence, we can consider the decoupled D_0-brane to be emitted by Hawking radiation from the evaporating Black Hole. The loss of the radiated D_0-brane brings about a loss of entropy of the Black Hole of order O(N), due to the decrease in the number of degrees of freedom, which implies that this process is increasingly improbable as N increases. This is verified by <cit.>, who find the emission rate of a D_0-brane from an N D_0-brane Black Hole to be ≈ e^-N, implying that the emission of a D_0-brane is suppressed for large N. It is also argued that the emission of individual branes is dominant compared to the emission of composite states of k D_0-branes with 1<k<N. As a consequence, evaporation occurs as a sequence of single brane emissions. In the next section, we will provide a quantized description of the evaporation of the Black Hole which is consistent with this thermodynamic treatment. This will be the first step towards a derivation of the Page curve for the entanglement entropy of Hawking radiation. § QUANTIZED SYSTEM DESCRIPTION The BFSS Black Hole, as described above, is characterised as a bound state of D_0-branes. In this scenario, Hawking radiation is completely resolved as a unitary process consisting of the emission of branes. As a consequence, the evaporation process for such a Black Hole should not feature any loss of information. The aim of the following sections will be to confirm this statement. In order to do so, we will provide a quantum Hilbert space description for the emission of D_0-branes as a radiative process and we will calculate the von Neumann entanglement entropy of Hawking radiation as a function of time. The expectation is that this should reproduce a Page curve, consistent with the recovery of information from Hawking radiation. We will show that this is indeed the case. As a first step, we need to define a suitable Hilbert space which allows for entangled states of Black Hole and radiation. This will enable us to construct the radiation density matrix and, hence, to obtain the entanglement entropy for the radiation. The mechanism for Black Hole evaporation outlined in section <ref> can be given a Hilbert space description in a quantum mechanical setting. In particular, we can consider the Hilbert space proposed in <cit.>, which we summarize in the following.
The total Hilbert space can be decomposed into sub-spaces spanned by eigenstates of the Hamiltonian with energies taking values in small micro-canonical windows [E-δ E,E] with δ E≪ E, namely ℋ=⊕_Eℋ_E, δ E . The small smearing of the energy ensures that we do not run into any divergences. Moreover, conservation of energy implies that, once we fix the energy of the system, the dynamics will never leave the corresponding Hilbert subspace. We can, therefore, focus on a single ℋ_E, δ E. This, in turn, will be spanned by entangled states representing a partially evaporated Black Hole, where a fraction E' of the total energy has evaporated into radiation. In other words, ℋ_E, δ E can be decomposed in terms of factorized Hilbert sub-spaces as ℋ_E, δ E=⊕_E'ℋ_BH, E-E'⊗ℋ_rad, E' , where the sum over E' is to be understood as a continuous integral. We consider a system of N D_0-branes which at t=0 form a pure Black Hole state (i.e. the radiation is in the vacuum state). Due to the finite probability for the Black Hole to radiate a D_0-brane, radiation states can be excited. As a consequence, the wave function of the system at time t will be in a superposition, weighted by the probabilities of emission of a given number of D_0-branes from the initial number of Black Hole D_0-branes. The maximum number of emissions is N-1, since branes can be emitted until only one is left. The superposition coefficients are then defined as the probabilities p_0(t),p_1(t),p_2(t),…,p_N-1(t) of 0, 1, 2, …, N-1 emissions, respectively. These probabilities will be explicitly calculated in the following. In order to construct the density matrix, a definition of the Black Hole and radiation states which span the decomposition (<ref>) is required. Black Hole states are characterized by the number of D_0-branes and the total energy in the Black Hole. We define these states to satisfy the orthonormality condition: _BH<Ñ,Ẽ|N,E>_BH=δ_Ñ,N δ(Ẽ-E) where |N,E>_BH is a Black Hole state with N branes and energy E. Using this definition and the decomposition (<ref>), we can take the partial trace over the Black Hole states of any operator A via A_rad=Tr_BH(A)=∑_i=0^N∫_0^Edx _BH<i,x|A|i,x>_BH where E is the energy of the black hole at t=0, i.e. the total energy of the system, which is conserved in time. This will be necessary in calculating the radiation density matrix. Similarly, we characterize the radiation states in (<ref>) by the number of D_0-branes and the energy of each brane. For example, |m;E^(1),E^(2),…,E^(m-1),E^(m)>_rad is an m D_0-brane radiation state, where the first brane has energy E^(1), the second has energy E^(2), and so on. These states are also defined to satisfy the orthonormality condition: _rad<l;Ẽ^(1),Ẽ^(2),…,Ẽ^(l-1),Ẽ^(l)|m;E^(1),E^(2),…,E^(m-1),E^(m)>_rad=δ_lm∏_i=1^l δ(Ẽ^(i)-E^(i)) . In the following, we will leverage this Hilbert space construction to calculate the density matrix of the system. This will lead us to the computation of the entanglement entropy. § PAGE CURVE FOR THE BFSS BLACK HOLE The system we want to analyse consists of a BFSS Black Hole, i.e. a bound state of D_0-branes, which radiates by emission of individual branes. At t=0 the system is in the initial state, namely a Black Hole in a pure state and radiation in the vacuum state |ψ(0)>=1/√(δ E)∫_E-δ E^EdE |N,E>_BH|0>_rad . The Black Hole pure state is a superposition of energy eigenstates within a narrow micro-canonical window. Time evolution for the system is unitary, therefore the overall state will remain pure.
In principle, such a time evolution is given by the action of the BFSS Hamiltonian, namely the Legendre transform of (<ref>), on the initial state (<ref>). However, a full description of the Black Hole state purely in terms of Matrix Theory quantities is not known, hence the action of the full Hamiltonian on this state cannot be determined exactly. Therefore, we have to resort to an effective description. In particular, we describe the state at later times t as a superposition of partially evaporated Black Hole plus radiation states, the weighting coefficients being given by the time-dependent probabilities of emission of branes, p_i(t). It follows that the state of the system at time t takes the form |ψ(t)>=∑_m=0^N-1p_m(t)^1/2a_m(E-δ E)/√(δ E)∫_E-δ E^EdE∫_0^E-δ E[dE ]^(m) |N-m,E-∑_i=1^mE^(i)>_BH|m;E^(1),E^(2),…,E^(m-1),E^(m)>_rad where we defined ∫_0^x[dE]^(m)={ 1 for m=0 ; ∫_0^x dE^(1)∫_0^x-E^(1)dE^(2)⋯∫_0^x-∑_i=1^m-1E^(i)dE^(m) for m≠ 0 , and the normalization constants a_m(E-δ E) are fixed by requiring a_i(x)^2∫_0^x[dE]^(i)=1 ⇒ a_i(x)=√(i!)/x^i/2 . Note that <ψ(t)|ψ(t)>=∑_i=0^N-1p_i(t)a_i(E-δ E)^2/δ E∫_E-δ E^EdEdE'∫_0^E-δ E[dE]^(i)[dẼ]^(i) ×∏_j=1^iδ(E^(j)-Ẽ^(j)) δ(E-E'-∑_j E^(j)+∑_jẼ^(j)) =∑_i=0^N-1p_i(t)a_i(E-δ E)^2/δ E∫_E-δ E^EdE∫_0^E-δ E[dE]^(i)=1 meaning that the state is properly normalized at all times. Since the system is in a pure state given by (<ref>), the density matrix of the system reads ρ(t)=|ψ(t)><ψ(t)| and the normalization property (<ref>) implies Trρ(t)=1. In order to compute the entanglement entropy of the radiation, we need to consider the partial trace of the density matrix (<ref>) with respect to the Black Hole states, thus obtaining the reduced density matrix: ρ_rad(t)=Tr_BHρ(t) =∑_i=0^N-1∫_0^Edx _BH<i,x|ρ(t)|i,x>_BH =p_0(t) |0>_rad<0|_rad+∑_i=1^N-1∫_0^Edx p_i(t)/δ E a_i(E-δ E)^2∫_E-δ E^EdEdE'∫_0^E-δ E[dE]^(i)[dẼ]^(i) × δ(x-E+∑_j E^(j)) δ(x-E'+∑_jẼ^(j)) |i;E^(1),…,E^(i)>_rad<i;Ẽ^(1),…,Ẽ^(i)|_rad =∑_i=0^N-1 p_i(t)a_i(E-δ E)^2∫_0^E-δ E[dE]^(i)[dẼ]^(i) |i;E^(1),…,E^(i)>_rad<i;Ẽ^(1),…,Ẽ^(i)|_rad , where in the last equality we solved the delta function integrals. We note that we can define superposition states for radiation with fixed particle number |i>_rad:=a_i(E-δ E)∫_0^E-δ E[dE]^(i)|i;E^(1),…,E^(i)>_rad , which satisfy the orthonormality condition _rad<i|j>_rad=δ_i,j . With this choice of basis, the density matrix takes the particularly simple form ρ_rad(t)=∑_i=0^N-1 p_i(t) |i>_rad<i|_rad . As a consequence, computing the von Neumann entanglement entropy is straightforward, and it yields S(t)=-Tr(ρ_rad(t)lnρ_rad(t))=-∑_i=0^N-1 p_i(t)ln(p_i(t)) , while the departure of the radiation state from purity can be measured by P(t)=1-Tr(ρ^2_rad(t))=1-∑_i=0^N-1 p_i^2(t) , which vanishes if and only if the radiation is in a pure state. All in all, we have obtained closed expressions for the entanglement entropy and the purity measure of the radiation state in terms of the probabilities p_i(t) of emitting D_0-branes by Hawking radiation, namely the probability for flat directions of the Matrix Theory potential to open up. In the following, we will obtain an explicit expression for those probabilities. This will enable us to fully characterize the profile of the entanglement entropy and purity as functions of time. We consider the emission of branes by Hawking radiation (through the mechanism outlined in section <ref>) as a sequence of radiative processes. Defining the mean emission time of a brane from a Black Hole composed of N-i D_0-branes as t_i, the probability of no emission p_0 is given by p_0(t)=e^-t/t_0 . We can use this to compute the probability of one emission, p_1.
This is given by p_1(t)=A_1∫_0^t dT p_0(T) e^(T-t)/t_1 , where p_0(T) is the probability of no emissions between 0 and T, the factor e^(T-t)/t_1 accounts for no further emissions between T and t, the integral over T allows the single emission to occur at any intermediate time, and A_1 is a t-independent factor such that the probability of one emission from N branes in an infinitesimal time δ t is A_1 δ t. This can be generalized to recursively determine the probabilities p_i of i emissions as p_i(t)=A_i∫_0^t dT p_i-1(T) e^(T-t)/t_i , with the analogous interpretation of the factors, where A_i are t-independent factors such that the probability of one emission from N-i+1 branes in an infinitesimal time δ t is A_i δ t. These factors are found by imposing the consistency condition on probabilities ∑_i=0^N-1p_i(t)=1 . Note that we cannot have more than N-1 emissions, since after emitting N-1 D_0-branes there is only one D_0-brane left, meaning that the Black Hole has already evaporated into N non-interacting branes. It follows that p_N(t)=0 and, as a consequence, t_N-1→∞. The condition (<ref>) is then translated into A_N-1∫_0^tdT p_N-2(T)=1-∑_i=0^N-2p_i(t) . The condition (<ref>) can be solved recursively to obtain the normalization factor: A_i=1/t_i-1 , and the steps leading to this equation are carried out explicitly in Appendix <ref>. Having derived the emission probabilities, we now have full access to the entropy and purity as functions of time, equations (<ref>) and (<ref>) respectively, which we can plot for different values of N. Assuming the timescale for the emission of a brane from a partially evaporated Black Hole to be t_i ≈ e^(N-i), as argued in <cit.>, the resulting plots are shown in Figures <ref> and <ref>. From these graphs, we come to the following conclusions: * Unitarity of the Black Hole evaporation process is preserved, since the entropy tends to 0 as t→∞, i.e. after the Black Hole has fully evaporated; * The information which was lost after the creation of the Black Hole is recovered after complete evaporation of the Black Hole in the form of Hawking radiation in a pure state. This is the behaviour displayed in Figure <ref>, where we see that P(t) returns to 0 after evaporation, i.e. the radiation ends in a pure state; * The peak of the radiation entropy curve occurs at the Page time. This is defined as the time at which the Black Hole has radiated half of its mass. We note that t_P ∼ O(t_0). Note that the average number of emitted branes is ⟨ n(t) ⟩=∑_k=0^N-1 k p_k(t) . Moreover, by differentiating (<ref>), we get dp_i(x)/dx+p_i(x)/t_i=p_i-1(x)/t_i-1 and, as a consequence, d/dt⟨ n(t) ⟩=∑_k=0^N-1 1/t_k p_k(t)=⟨ 1/t_n ⟩ . Considering that the number of branes in the Black Hole is given by n_BH=N-n, it follows that d/dt⟨ n_BH(t) ⟩=-⟨ 1/t_N-n_BH ⟩ . If we ignore quantum fluctuations and restrict ourselves to the classical picture presented in <cit.>, equation (<ref>) reads d n/d t=-e^-n , where n=⟨ n_BH ⟩, which in turn implies ∫_N^n dn' e^n'=-∫_0^t dt' ⟹ t=e^N-e^n . If t_1/2 is the half-life of the Black Hole, t_1/2/t_0=(e^N-e^N/2)/e^N=1-e^-N/2 . Hence, as N→∞, t_1/2/t_0⟶ 1, namely the Black Hole half-life is equal to t_0 in the large N limit. The analogous quantity to the half-life in our quantum picture is the Page time, and we see that the classical approximation captures the t_P∼O(t_0) behaviour that we observed in our analysis. This can be seen as a consistency check of the classical limit in <cit.>, since qualitative features such as the scaling of the Page time with t_0 do not change dramatically in the quantum picture. Of course, this is not the end of the story.
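Before turning to a more quantitative estimate of the emission times, we note that the probabilities p_i(t) and the entropy (<ref>) are straightforward to evaluate numerically. The following minimal sketch (Python; the parameter values are illustrative and this is not the code used to produce the figures) integrates the rate equations dp_i/dt = p_i-1/t_i-1 - p_i/t_i with t_i ≈ e^(N-i), and reports the maximum of the entropy, which occurs at a time of order t_0, in line with the Page-time behaviour described above.

import numpy as np
from scipy.integrate import solve_ivp

N = 8                                        # initial number of D0-branes (illustrative)
t_emit = np.exp(N - np.arange(N - 1))        # t_i ~ e^(N-i), i = 0, ..., N-2
# the refined estimate of the next section amounts to t_emit = exp(a*(N-i) - b) - 1
rates = 1.0 / t_emit                         # emission rates; the last brane never decays

def dpdt(t, p):
    # dp_i/dt = p_{i-1}/t_{i-1} - p_i/t_i, with no loss term for p_{N-1}
    out = np.zeros_like(p)
    out[:-1] -= rates * p[:-1]               # probability leaving states i = 0 .. N-2
    out[1:]  += rates * p[:-1]               # probability arriving in states i+1
    return out

p0 = np.zeros(N); p0[0] = 1.0                # at t = 0 no brane has been emitted
times = np.geomspace(1e-2, 50 * np.exp(N), 400)
sol = solve_ivp(dpdt, (0.0, times[-1]), p0, method="Radau",
                t_eval=times, rtol=1e-8, atol=1e-12)

p = np.clip(sol.y, 1e-300, None)             # avoid log(0)
S = -(p * np.log(p)).sum(axis=0)             # von Neumann entropy of the radiation
P = 1.0 - (p**2).sum(axis=0)                 # departure from purity

print("max entropy %.3f at t = %.3g (Page time), t_0 = %.3g" %
      (S.max(), times[S.argmax()], t_emit[0]))
print("final entropy %.2e, final impurity %.2e" % (S[-1], P[-1]))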
Describing the Black Hole state and its radiation purely in terms of Matrix Theory quantities would, in fact, provide us with a full understanding of the problem and, therefore, remains an interesting route to investigate. § A QUANTITATIVE ESTIMATE OF THE EMISSION TIME The approximate form of the mean decay time given in <cit.>, namelyt ≈e^NwhereNis the number ofD_0-branes in the Black Hole, is inferred from the relation t≈ e^Δ S whereΔSis the entropy difference of the (BH + no radiation) and the (BH + 1 brane) configurations. A more quantitative estimate for the entropy difference can be found by considering the distribution of the largest eigenvalue of the matrixR=√(X_IX^I), defined in terms of the BFSS position matricesX^I. This was also computed in <cit.>, where they show that it takes the form ρ(r) ∼(λ T)^-1 / 4·[r/(λ T)^1 / 4]^-8(2 N-3) , whereλ=Ng^2_YMis the 't Hooft coupling and we are considering the theory at finite temperatureT. Hereρ(r)can be thought of as the distribution of the radial coordinate of theD_0-brane furthest away from the Black Hole. The distribution (<ref>) represents the leading contribution in the largeNlimit and might be affected byO(1/N)corrections at finiteN. Since, as described in section <ref>, we are restricting to a micro-canonical window, we can write the entropy of a state asS=log(Ω)whereΩis the phase space volume corresponding to theD_0-brane configuration of the state. In particular, the entropy of the configuration in which no brane has been emitted, which we denote byS_<, is computed from the phase space integral of the distribution (<ref>). The limits of integration must be chosen so as to ensure that theD_0-brane whose radial coordinate is distributed according to (<ref>) is the furthest away from the singularity while still interacting with the Black Hole. Hence, we translate the statement ofrbeing the largest eigenvalue of the radial coordinate matrix as the lower bound of integration in (<ref>). Then,r_0is the average radius of the region occupied by the bound state of the remainingN-1branes.Ris the radius at which theD_0-brane is considered to decouple. It follows that, in this case, e^S_<=C∫_r_0^Rdrρ(r) where the factorCaccounts for the phase space in the remaining8directions and is, therefore, independent of the radial coordinate. Similarly, the entropy of the configuration in which aD_0-brane has decoupled from the Black Hole is determined through the equation e^S_>=C∫_R^∞ drρ(r) whereS_>is the entropy for such a configuration. Hence, the suppression factor is given by e^Δ S=e^S_</e^S_>=∫_r_0^Rx^-8(2 N-3)dx/∫_R^∞ x^-8(2 N-3)dx =e^(8(2N-3)-1)log(R/r_0)-1 . The above corresponds to the mean emission time of a brane from a bound state ofND_0-branes. Indeed this corresponds to a more quantitative version of (<ref>), which yields t_i=e^(8(2(N-i)-3)-1)log(R/r_0)-1 as a mean emission time. Having obtained a more quantitative estimate for the emission times, equation (<ref>), we now repeat the analysis carried out in section <ref> and compute a refined version of the Page curve. To this end, it is useful to define the new parameter a=16ln(R/r_0) . We plug equation (<ref>) in (<ref>) and we look at plots of the entropy as a function of time. Those are shown in Figures <ref> and <ref> for different values of the parameters. From the graphs, we infer the following: * As before, the entropy comes back to 0 after evaporation of the Black Hole for all cases, showing the expected Page curves. 
Again, the BFSS Black Hole represented with our model does not feature any information loss; * The Page time reduces with an increase in a. This matches the conclusions one can draw from the half-life of the Black Hole in the classical limit, given below; * As a increases, the entropy reduces at all times. In this case, the classical limit of equation (<ref>) reads dn/dt=-1/(e^(an-b)-1) where a=16ln(R/r_0) and b=25a/16. Applying the appropriate boundary conditions, we get t=(1/a)(e^(aN-b)-e^(an-b))-N+n , which implies for the Black Hole half-life t_1/2=(1/a)(e^(aN-b)-e^(aN/2-b))-N/2 . Hence, the ratio t_1/2/t_0 is given by t_1/2/t_0=(e^(aN-b)-e^(aN/2-b)-aN/2)/(a(e^(aN-b)-1)) . In particular, we note that as a increases, this ratio decreases. Moreover, for large values of N, t_1/2/t_0⟶1/a , which is consistent with what we observe for the Page time in Figure <ref>. Once again, the classical approximation captures the qualitative features that we observe in the quantum system. All in all, we obtained a profile for the entanglement entropy as a function of time which depends on the features of the Black Hole state through the parameter a. § CONCLUSION AND OUTLOOK In this paper, we have considered the Black 0-brane solution to supergravity, which is dual to BFSS Matrix theory, and we have computed the entanglement entropy of Hawking radiation. The presence of a Gauge theory dual allows for a microscopic description of the Black Hole evaporation mechanism. In particular, the Black Hole state is dual to a bound state of branes clumped around the origin and held together by open strings. Evaporation is then caused by the emission of branes which run astray from the clump. Flat directions in the potential, together with the chaotic nature of the theory, make this possible, as the interaction between branes is turned off at large distances due to a supersymmetric cancellation. The process of evaporation is, therefore, fully resolved in terms of unitary evolution in the dual Gauge theory description and no information should go missing after the Black Hole is completely evaporated. We checked this statement by computing the entanglement entropy resulting from quantizing the radiation process and showed that the expected Page curve is indeed reproduced. The computation is based on assuming the Hilbert space to factorize into partially evaporated Black Hole plus radiation configurations. We then considered a more quantitative description of the rate of emission of branes based on the distribution of the largest eigenvalue of the radial coordinate matrix R=√(X^I X_I) proposed in <cit.>. Hence we found a more refined version of the entanglement entropy as a function of time, which also reproduces the desired Page curve. As such, this computation provides a confirmation of the fact that the mechanism put forth in <cit.> for D_0-brane emission indeed represents Hawking radiation in the gravitational dual description. Interestingly, the quantitative estimate for the entanglement entropy presented in section <ref> depends on the parameter a in (<ref>), roughly characterizing the size of the Black Hole. To make a connection with the geometric interpretation presented in <cit.>, r_0, which is the size of the N D_0-brane clump, can be thought of as the size of the resolved singularity, whereas the radius R, representing the distance at which branes are considered to decouple and be emitted, might characterize the Black Hole horizon. It would be interesting to further probe this connection.
The mechanism for quantum evaporation we considered is the quantized version of a classical process. As such, our results stand as an important consistency check for the interpretation of a bound state of D_0-branes as an evaporating Black Hole. However, a fully quantum description of Black Hole evaporation, as well as of the Black Hole state itself, is not known. An interesting route to explore would be to try to fully determine the Black Hole state in terms of Matrix Theory quantities. This would certainly refine our understanding of the entanglement entropy, allowing for the inclusion of O(1/N) corrections, and would shed light on the geometric interpretation of the model. § ACKNOWLEDGEMENT The authors would like to thank John Wheater for useful discussions and comments on the draft of the present work, and Gabriel Wong for comments on the draft. This work has been partially funded by the STFC studentship grant ST/X508664/1. § NORMALIZATION FACTOR FOR PROBABILITIES In this section, we provide a proof by induction of equation (<ref>), namely that the correct normalization factor for the probabilities is A_i=1/t_i-1 for all i=1,…,N-1. We proceed by induction on d, up to d=N-1. For d=1, we simply integrate equation (<ref>) (recall that in this case no further emission can occur, so t_1→∞ and the exponential factor reduces to unity), p_1(t)=A_1∫_0^t dT p_0(T)=1-p_0(t) . Using p_0(t)=e^-t/t_0, we get A_1=1/t_0. Now, assume that equation (<ref>) holds for i=1,…,d. We have to prove that A_d+1=1/t_d. For this, consider equation (<ref>), which we rewrite here for convenience, under the induction hypothesis p_i(x)=1/t_i-1∫_0^xdt p_i-1(t)e^(t-x)/t_i . Differentiating the above yields dp_i(x)/dx+p_i(x)/t_i=p_i-1(x)/t_i-1 . In order to find A_d+1, we consider equation (<ref>), which now reads A_d+1∫_0^xdt p_d(t)=1-∑_i=0^dp_i(x) . We differentiate the above with respect to x and we obtain A_d+1 p_d(x)=-∑_i=0^d dp_i(x)/dx= ∑_i=0^d p_i(x)/t_i-∑_i=1^d p_i-1(x)/t_i-1=p_d(x)/t_d , which implies A_d+1=1/t_d as required.
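The normalization A_i=1/t_i-1 can also be checked numerically by evaluating the integral recursion (<ref>) directly by quadrature and verifying that the probabilities sum to one at all times. A short sketch follows (Python; the grid, the value of N and the choice t_i ≈ e^(N-i) are illustrative assumptions).

import numpy as np

N = 5
t_emit = np.exp(N - np.arange(N - 1))           # t_i ~ e^(N-i); effectively t_{N-1} -> infinity
tgrid = np.linspace(0.0, 20 * t_emit[0], 4001)
h = tgrid[1] - tgrid[0]

def trapz(f):                                   # trapezoidal rule on the uniform grid
    return h * (f.sum() - 0.5 * (f[0] + f[-1])) if f.size > 1 else 0.0

p = [np.exp(-tgrid / t_emit[0])]                # p_0(t) = exp(-t / t_0)
for i in range(1, N):
    tau = t_emit[i] if i < N - 1 else np.inf    # no decay once the last brane is reached
    A_i = 1.0 / t_emit[i - 1]                   # normalization A_i = 1/t_{i-1} proven above
    pi = np.array([A_i * trapz(p[i - 1][:k + 1] * np.exp((tgrid[:k + 1] - t) / tau))
                   for k, t in enumerate(tgrid)])
    p.append(pi)

total = np.sum(p, axis=0)
print("max deviation of sum_i p_i(t) from 1:", np.abs(total - 1.0).max())

The deviation from unity is set by the quadrature error of the grid, confirming the consistency condition used in the main text.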
http://arxiv.org/abs/2407.12293v1
20240717032603
Multi evolutional deep neural networks (Multi-EDNN)
[ "Hadden Kim", "Tamer A. Zaki" ]
math.NA
[ "math.NA", "cs.LG", "cs.NA", "math.DS", "physics.comp-ph" ]
ams]Hadden Kim ams,me]Tamer A. Zakicor1 [ams]organization=Department of Applied Mathematics & Statistics, Johns Hopkins University, addressline=3400 N. Charles St., city=Baltimore, postcode=21218, state=MD, country=USA [me]organization=Department of Mechanical Engineering, Johns Hopkins University, addressline=3400 N. Charles St., city=Baltimore, postcode=21218, state=MD, country=USA [cor1]t.zaki@jhu.edu § ABSTRACT Evolutional deep neural networks (EDNN) solve partial differential equations (PDEs) by marching the network representation of the solution fields, using the governing equations. Use of a single network to solve coupled PDEs on large domains requires a large number of network parameters and incurs a significant computational cost. We introduce coupled EDNN () to solve systems of PDEs by using independent networks for each state variable, which are only coupled through the governing equations. We also introduce distributed EDNN () by spatially partitioning the global domain into several elements and assigning individual EDNNs to each element to solve the local evolution of the PDE. The networks then exchange the solution and fluxes at their interfaces, similar to flux-reconstruction methods, and ensure that the PDE dynamics are accurately preserved between neighboring elements. Together and form the general class of methods. We demonstrate these methods with aid of canonical problems including linear advection, the heat equation, and the compressible Navier-Stokes equations in Couette and Taylor-Green flows. Scientific computing Partial differential equations Machine learning Deep neural networks § INTRODUCTION Use of machine learning (ML) in the field of partial differential equations (PDEs) and physics-based predictions has seen several exciting developments. Some efforts have focused on learning from data and introducing physics constraints <cit.>, and others have been developed to learn operators <cit.>. The particular area of interest for the present work is use of machine learning to numerically solve partial differential equations (PDEs) and, in particular, to forecast the evolution of dynamical systems. This idea was pursued by <cit.>, who introduced evolutional deep neural networks (EDNN, pronounced “Eden”). They used the expressivity of neural networks to represent the solution of the PDE in all but the direction of propagation, and then adopted the age-old marching approach to evolve the network parameters according to the governing PDEs. In doing so, the evoltion of the network becomes predictive, providing a forecast for any horizon of interest. When the evolution is in time, the EDNN weights can be viewed as traversing the space of network parameters through paths directed by the governing PDE (see figure <ref>). In the present work, rather than deploying a single network to solve the system of PDEs on the entire domain, we introduce a decomposition of the problem, both in state space and in the spatial domain, and utilize multiple concurrent EDNNs to solve the PDEs. This contribution is essential to enable future use of ML-based techniques such as EDNN for the solution of complex and large-scale problems. Since the introduction of EDNN, a growing community has extended its capabilities. Here we summarize a selection of related work that are most relevant to the present effort. <cit.> provided the insightful interpretation of EDNN as a reduced-order nonlinear solution. 
The output layer of the deep neural network can be viewed as the combination of time-dependent modes. Unlike in classical, linear reduced-order models, these modes shape-morph with time and are allowed to depend nonlinearly on their parameters. <cit.> interpretted EDNN as an active learning algorithm that can generate its own training data. When starting from random parameters, neural networks may need a large number of training epochs to converge to the solution. If, however, the network parameters are initialized close to the target, a single training step with very little training data may be sufficient. In many PDE problems, the solution at one time is similar to a previous time. EDNN capitalizes on this, and its parameter evolution is viewed as training the parameters sequentially one time-step to the next with training data generated by EDNN itself. As a meshless method, EDNN can be sampled anywhere in the spatial domain, removing the need for complex remeshing algorithms. Researchers adopted adaptive sampling strategies to both reduce the number of points needed and to improve accuracy in the network evolution. <cit.> sampled the spatial domain with time-dependent, adaptive measures estimated by Guassian mixture models. <cit.> continued in this direction and formulated the adaptive sampling as the dynamics of particles within the spatial domain. At each time-step, their method uses the network prediction to determine the particle trajectories, then samples at the displaced particle positions. <cit.> first collected the network prediction over a high-resolution finite-element mesh and formulated a probability distribution based on the magnitude of the PDE dynamics at the mesh points. Then they used only a subsampling of these points in the evolution of EDNN. Several advancements have been made to enable EDNN to solve PDEs with boundary conditions. <cit.> utilized an input feature layer to enforce periodicity. They also enforced Dirichlet boundary conditions by negating the boundary predictions of the network. <cit.> used a positional embedding (feature) layer to enforce both Dirichlet and Neumann boundary conditions. With pre-computed embeddings, the EDNN prediction is restricted to the solution subspace that respects the boundary conditions of the problem. <cit.> utilized a simple and flexible approach: by adding penalty terms to the EDNN algorithm, the network evolution can be driven to agree with the boundary conditions. Other efforts have been directed to solution method itself. <cit.> adopted a constrained optimization to ensure that the EDNN evolution respects conservation laws. Additionally, they introduced Tikhonov penalization to regularize the optimization objective function, which results in computational speedup. <cit.> investigated a variety of time integrators, such as an energy-preserving midpoint method, implicit Euler, and Chorin-style operator splitting. <cit.> transformed the spatial input into a scaled sinusoidal and applied Nyström preconditioning. <cit.> utilized tensor neural network architectures and explored updating only a random subset of the network parameters each timestep. All these related works share the common strategy of using a single neural network to represent the solution of interest over the entire spatial domain. Relying on only one EDNN, however, has its drawbacks. First, it is challenging to use a single network to predict the evolution of a vector-valued solution. 
The potentially different functional forms of the different states and the differences in their dynamical characteristics require a large number of parameters to accurately predict. Second, a large number of spatial samples is required to accurately evolve the parameters of the network for these problems. Both drawbacks contribute to the computational cost. While parallel computations of a single, large deep neural network <cit.> continue to garner attention, we adopt a different approach that attempts to reduce the computational cost and that is naturally amenable to parallel execution. Our strategy is inspired by classical finite element methods (FEM). FEMs partition the spatial domain into elements and approximate the solution in each element with the linear combination of a finite set of basis functions, such as polynomials or trigonometric polynomials. For transient problems, time-dependent coefficients of the selected basis form a system of ordinary differential equations (ODEs) which can be marched with a choice of time integrator. Typically, a very small set of basis functions is selected, limiting expressivity. To recover accurate solutions, the domain is partitioned into a very large number of elements, e.g. millions or billions in the context of simulations of complex flows <cit.>. Different techniques to ensure that the solution behaves correctly between elements have spawned numerous numerical methods. For example, conforming finite element methods <cit.> require the solution approximation be continuous everywhere, particularly on the element interfaces. Discontinuous Galerkin methods <cit.> allow the function to have jump discontinuities and utilize a Riemann solver to compute common numerical fluxes at element interfaces. Galerkin methods, however, seek weak solutions to the PDE which rely on a choice of the test function space. It is unclear which set of test functions would be appropriate for a deep neural network solution approximation. The flux-reconstruction family of methods <cit.> lifts this requirement and instead formulates a correction to the local flux approximation of each element using data from neighboring elements. By reconstructing a flux whose normal components are continuous between elements, flux-reconstruction methods ensure global accuracy. The evolution of each element is formulated as a local procedure needing only minimal data synchronization at the element interfaces. As such, the element-specific parameters are nearly decoupled, which enables parallel and distributed computing <cit.>. In the present work, we develop a framework that is inspired by classical FEM to solve PDEs using coupled and distributed neural networks, rather than a single large network. We thus remove the limitation of using just one network to approximate the possibly vector-valued solution over the entire spatial domain. First, we utilize coupled deep evolutional neural networks () to represent each component of the state in a system of PDEs. Next, we partition the spatial domain and employ, for each subdomain-specific, distributed evolutional deep neural networks () to collectively solve for the evolution of the PDE. With these two strategies, we develop a framework of coordinating multiple EDNNs and call this method . Section <ref> gives a brief review of the original EDNN method. In section <ref>, we describe in detail the methodology, starting with coupling EDNNs for a system of PDEs and then we explain how we connect multiple EDNNs with domain decomposition. 
Section <ref> presents the results of using to predict the evolution of the solution of a variety of PDE problems. We summarize our conclusions in section <ref>. § PROBLEM SETUP AND PRELIMINARIES In this section, we briefly review the evolutional deep neural network (EDNN), first introduced by <cit.>. Consider a time-dependent, non-linear, partial differential equation t (, t) = (, t, ) where the solution :×→ is a vector function on both the spatial domain ⊂ and time t ≥ 0, and is a differential operator. A time-dependent neural network is used to approximate the solution. The inputs of the network are spatial coordinates, and the outputs are the solution approximated at the input coordinates. By fixing the network topology, such as activation functions, number of layers, and number of neurons, the network is parameterized by its kernel weights and biases which are collectively denoted . We consider the network parameters as functions in time W:→^. As such, the network can now be seen as the ansatz, (, (t) ). Using the chain rule, we connect the governing PDE to the time-evolution of the network parameters, d/dt = (, t, ). We can therefore express the time-evolution of the network parameters according to, ζ̂(t) = _ζ∫∑_k=1^|_kζ - _k |^2 d . Using a finite number spatial samples, we convert (<ref>) into the linear system = , and the network evolution (<ref>) into = _‖ - ‖_F^2 where ‖·‖_F is the Frobenius norm, J_k,i,j is the network gradient, Z_j is the weight update, and N_k,i is the PDE operator. The subscript k identifies a solution variable within the state vector , subscript i is the spatial sampling point, and subscript j is the network parameter. Both and are readily available by evaluating the neural network at a set of sampling points and using automatic differentiation to obtain any necessary derivatives. By discretizing time and using classic time integrators, such as fourth-order Runge-Kutta (RK4), EDNN can now be seen as a time marching algorithm. We sequentially evolve the weights from one time instant to the next using the trajectory . A schematic of EDNN evolution over three time instances is shown in figure <ref>. At the first instance, the entire solution field is represented by the network (, ^t-1). The weights are then evolved using the above governing equations to predict the next instance, (, ^t ). The process is then repeated to predict (, ^t+1) and can be continued for any time horizon of interest. § FORMULATION OF MULTI EVOLUTIONAL DEEP NEURAL NETWORKS () As originally introduced, EDNN utilizes a single deep neural network to represent the full state over the entire spatial domain. Problems that involve coupled fields, large domains, local variations in the behaviour of the solution, etc., require a large number of network parameters to accurately represent the solution and a large number of sampling points. These two requirements contribute to the computational cost and memory requirements. In particular, the computational bottleneck of EDNN lies in the solving equation (<ref>). In order to address these challenges, we split the PDE problem among multiple networks that collectively predict the full solution. For many PDEs, the operator in equation (<ref>) can be expressed as a system of scalar equations presenting a natural separation in state space. This case is considered in section <ref>, where we introduce the notion of coupled EDNN () for solving such systems. 
The remaining difficulties are addressed using domain decomposition, which is a well established strategy to divide the problem into a set of smaller sub-problems. In section <ref>, we introduce the notion of a distributed EDNN (): we partition the global spatial domain into multiple elements, and solve the governing equations on each sub-domain using element-specific networks that exchange information at the element boundaries. In section <ref>, we detail the requirements and construction of correction functions needed to communicate information between elements. In section <ref>, we describe our methodology to enforce boundary conditions. Both and employ multiple neural networks, and are considered sub-classes of methods. Figure <ref> shows a schematic of these networks collectively solving a system of PDEs over many subdomains. The domain is partitioned into multiple subdomains, each with a local vector-values solution. provides the methodology for the networks to coordinate their evolution within a subdomain, and synchronizes multiple elements by ensuring the PDE dynamics are consistent between neighboring elements. §.§ Coupled EDNNs Consider a PDE of the form in equation (<ref>), where the state = (q_1,…,q_) is comprised of ≥ 2 variables. There are two natural ways to approximate with neural networks. One approach is to use single large neural network having a vector output, as adopted by <cit.> to solve the incompressible Navier-Stokes equations for example. However, using a single network to accurately capture the entire state is challenging, in particular when the components of the solution each require different basis functions. To overcome this barrier, a large number of network parameters are typically needed to correctly approximate the state of the system. As a result, the linear system (<ref>) becomes large and computationally expensive to evolve. An alternative and natural approach, that we term Coupled EDNN (), uses multiple independent neural networks, coupled through the system of governing equations. Each neural network is tasked with representing one of the components of the state vector, so the solution ansatz (<ref>) becomes, (, _1(t), …, _K(t) ) = (q̂_1(,_1(t)) ,…, q̂_K(,_K(t)) ). In this manner, each variable within the state vector has component-specific network weights _k, and additionally the network structure can be different. The system of equations is then advanced according to, _kq̂_kd_k/dt = 𝒩_k (, t, ). The parameters of each network are thus evolved separately, coupled only through their outputs which are required to evaluate the PDE operator on the right-hand side at common solution points. provides several benefits: Since networks are decoupled in parameter space, we can solve each network update separately. Assuming that we retain the total number of parameter fixed, rather than solve one large linear system (<ref>), we now solve smaller problems one for each . The estimated computational cost thus reduces from O(^2) to O(^2 / ) <cit.>. Gradient computations also become more efficient because they are performed using automatic differentiation on the individual networks. Additionally, the evolution and evaluation of the individual networks can be trivially parallelized. Aside from the computational benefits, can be fine-tuned to better represent the solution. For example, each sub-network can utilize different architecture, or we can increase the parameterization for states where we expect to require more expressive networks. 
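To make the coupled evolution concrete, the following minimal sketch (Python/NumPy) applies the coupled-EDNN update to a toy system, ∂_t u = -∂_x v and ∂_t v = -∂_x u on a periodic interval, with one network per state variable. For transparency the networks are single-hidden-layer tanh models with a periodic input-feature layer and an analytic parameter Jacobian; a deep network would instead obtain the Jacobian by automatic differentiation, and the network sizes, time step, and sampling used here are illustrative choices rather than those of the experiments reported later.

import numpy as np

rng = np.random.default_rng(0)
H, M = 30, 200                                  # hidden units per network, collocation points
x = np.linspace(0.0, 2*np.pi, M, endpoint=False)
sx, cx = np.sin(x), np.cos(x)                   # periodic input features, x -> (sin x, cos x)

def evaluate(theta):
    # Shallow network q(x) = sum_j a_j tanh(w1_j sin x + w2_j cos x + b_j).
    # Returns the output q, its spatial derivative, and the parameter Jacobian
    # J = dq/dtheta that enters the EDNN least-squares update.
    a, w1, w2, b = theta.reshape(4, H)
    z = np.outer(sx, w1) + np.outer(cx, w2) + b           # (M, H)
    th, s = np.tanh(z), 1.0 / np.cosh(z)**2
    q    = th @ a
    dqdx = (s * (np.outer(cx, w1) - np.outer(sx, w2))) @ a
    J    = np.hstack([th, sx[:, None]*s*a, cx[:, None]*s*a, s*a])
    return q, dqdx, J

def init_params(target):
    # Random hidden layer; output weights fitted to the initial condition.
    w1, w2, b = rng.normal(size=(3, H))
    phi = np.tanh(np.outer(sx, w1) + np.outer(cx, w2) + b)
    a = np.linalg.lstsq(phi, target, rcond=None)[0]
    return np.concatenate([a, w1, w2, b])

def ednn_rate(theta_u, theta_v):
    # Each network solves its own least-squares problem; the coupling enters
    # only through the PDE right-hand side evaluated with the other network.
    _, dudx, Ju = evaluate(theta_u)
    _, dvdx, Jv = evaluate(theta_v)
    du = np.linalg.lstsq(Ju, -dvdx, rcond=None)[0]        # du/dt = -dv/dx
    dv = np.linalg.lstsq(Jv, -dudx, rcond=None)[0]        # dv/dt = -du/dx
    return du, dv

theta_u = init_params(np.sin(x))                # u(x,0) = sin x
theta_v = init_params(np.zeros(M))              # v(x,0) = 0

dt, nsteps = 2.0e-3, 1500                       # march the parameters with RK4 to t = 3
for n in range(nsteps):
    k1 = ednn_rate(theta_u, theta_v)
    k2 = ednn_rate(theta_u + 0.5*dt*k1[0], theta_v + 0.5*dt*k1[1])
    k3 = ednn_rate(theta_u + 0.5*dt*k2[0], theta_v + 0.5*dt*k2[1])
    k4 = ednn_rate(theta_u + dt*k3[0],     theta_v + dt*k3[1])
    theta_u = theta_u + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    theta_v = theta_v + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

u_pred = evaluate(theta_u)[0]
u_ref = np.sin(x) * np.cos(dt*nsteps)           # exact solution: u = sin(x) cos(t)
print("relative L2 error in u:", np.linalg.norm(u_pred - u_ref)/np.linalg.norm(u_ref))

Each state variable is represented and evolved by its own small network, so the per-step linear systems remain small and can be solved, and parallelized, independently, which is precisely the benefit described above.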
§.§ Distributed EDNNs The idea of a Distributed EDNN () is based on a domain-decomposition strategy, where we partition the spatial domain into multiple elements and deploy an EDNN (or ) to approximate the solution on each element. At the element boundaries, we compute flux corrections to ensure that the dynamics are synchronized between elements and maintain a globally accurate time evolution. Collectively, multiple EDNNs can more accurately represent complicated features of the solution compared to a single network, so we gain increased representation capability. Furthermore, as a finite-element methods, can be defined on smaller subdomains with specialized architecture for spatial regions with special characteristics such as boundary layers. The smaller individual network sizes again leads to computational efficiency, and the method naturally lends itself to parallel implementation on distributed computing platforms. §.§.§ Domain decomposition and the element sub-problem We focus on second-order conservation PDEs of the form, + ∇·( , ∇) = 0 where : ×^×→^× is the flux function whose divergence is related to the PDE operator by = -∇·. To simplify notation, let ∇ be the gradient operator in spatial variables only. The governing equation (<ref>) can be rewritten as a system of first order equations, + ∇·( , ) = 0, - ∇ = 0, by introducing the auxiliary variable . We partition the domain into elements, _e, with boundaries _e. The approximate solution on each element is _e^: _e →, and is predicted by a , which immediately ensures continuity within the interior of each subdomain. At the boundaries ∂Ω_e, however, there can be discontinuities in the approximate solutions and auxiliary variables. We denote these discontinuous variables with superscript . For each element, the system of equations becomes, _e^ + ∇·_e = 0, _e^ - ∇_e = 0 . To ensure that the dynamics are consistent across element interfaces, we formulate a flux _e whose normal component is equal between elements. Similarly, we formulate a solution _e that is continuous between elements, and thus continuous over the whole domain. §.§.§ Solution correction and the auxiliary variable In order to obtain the continuous solution _e, we first compute a common interface solution and a correction on each element boundary. This correction is then propagated to the interior of the element using a correction function. We then compute the gradient of the corrected solution, which defines the auxiliary variable as given in (<ref>). These steps are detailed below. On the shared interface between neighboring elements, we have two predictions of the solution. We use subscript e- to denote values local to the element, and e+ to denote values from neighboring element(s). The local solution _e-^: _e → is given by the element (or ) prediction on its own boundary _e-^ = _e^|__e, and _e+^: _e → is from the neighbouring element (or ). Using a problem-specific choice of Riemann solver , we calculate the common interface solution, _e^⋆ = (_e-^, _e+^, n_e), where n_e: _e → is the outward unit normal. Note, by design of the Riemann solver, each element common interface solution will be equal ^⋆_e() = ^⋆_e'() at points ∈_e ∩_e' on both elements boundaries. Thus, the necessary correction to _e^ on the boundary is the difference, _e^Δ = _e^⋆ - _e^|__e. The correction to the solution on each element is then, _e^ = ∫__e_e^Δ_e d _e, which distributes the boundary correction to the interior. 
The correction function _e: _e ×_e → ensures that _e^ is continuous and satisfies _e^|__e = _e^Δ, and approximates zero in some sense (see <ref>). Therefore, the corrected solution is given by, _e = _e^ + _e^ , which is continuous across element boundaries. The gradient of this corrected solution gives the auxiliary variable, _e^ = ∇_e^ + ∫__e_e^Δ∇_e d _e, where the outer product _e^Δ⊗∇_e is implied. Note that we do not cast separate neural networks to represent the auxiliary variables. Instead we formulate these variables as functions of the discontinuous state _e^. This remark is particularly relevant to _e-^ (and _e+^), where we compute _e-^ () = ∇_e^ () + _e^Δ∇_e (,). The auxiliary variable is, in general, discontinuous at element interfaces. For second-order PDEs in general, the divergence of the flux requires the gradient of the auxiliary variable, ∇_e^ = ∇∇_e^ + ∫__e_e^Δ∇∇_e d _e, where ∇∇ denotes the Hessian of mixed, second-order partial derivatives. §.§.§ Flux splitting and flux correction For equation (<ref>), we seek fluxes whose normal components are equal on common boundaries between elements. Similar to the above procedure, we first compute common normal fluxes on the boundary and spread the necessary correction into the interior. The solution on each element is then evolved according to equation (<ref>), using the divergence of the corrected flux. In advection-diffusion problems, we separate the total flux into inviscid and viscous fluxes, _e^D = _inv,e^ + _vis,e^ , ^_inv,e = _inv(_e^ ), ^_vis,e = _vis(_e^, _e^ ). These fluxes are discontinuous at element boundaries. We adopt problem-specific choices of Riemann Solvers _inv and _vis to calculate the common fluxes between elements, namely the common normal-inviscid and normal-viscous fluxes, _inv,e^⊥⋆ = _inv(_e-, _e+, n_e), _vis,e^⊥⋆ = _vis(_e-, _e+, _e-, _e+, n_e). The difference between the common and discontinuous fluxes then constitutes the correction on the boundary, _e^⊥Δ = _inv,e^⊥⋆ + _vis,e^⊥⋆ - _e^|__e·n_e . This correction is distributed on the element according to, _e^ = ∫__e_e^⊥Δ_e d _e, where _e: _e ×_e → is a vector-valued correction function whose details are given in <ref>. We then formulate the corrected flux according to, _e = _e^ + _e^. Each element (or ) is evolved according to the divergence of its total flux, _e = - ∇·^_e - ∫__e_e^⊥Δ∇·_e d _e. Given the splitting of the total flux, the divergence comprises two parts, ∇·_inv,e^ = ∇·_inv(^_e , ∇^_e) ∇·_vis,e^ = ∇·_vis (^_e, ∇^_e, ^_e, ∇^_e) . The viscous flux is a function of the solution and the auxiliary variable, so its divergence, in general, requires the gradients of the solution and of the auxiliary variable. The computation of these divergences can be nuanced, so we provide an example from the compressible Navier-Stokes equations in <ref>. In practice, we sample boundary points to approximate the solution correction (<ref>) and flux correction (<ref>). Determining the subset of points that give the most accurate and efficient evolution poses an interesting research question worthy of a separate investigation. In this work, we partition the domain into axis-aligned hyper-rectangular subdomains. Then, we orthogonally project each sampling point onto all 2S faces to generate our set of boundary point. §.§ Correction functions Our procedure requires correction functions _e for the solution (<ref>) and _e for the flux (<ref>), to propagate the boundary data into the interior of an element. 
We first establish the requirements for one-dimensional correction functions, then introduce a new class of correction functions well-suited for . We also describe a multi-dimensional extension for hyper-rectangle elements. §.§.§ 1D correction functions Consider in 1D the interval subdomain _e = [ x_e,L, x_e,R]. Similar to <cit.>, we define correction functions in a reference element ' = [-1, 1] with boundary ' = { -1, 1 }. Using the affine transformation, r_e(x) = (x - x_e,L+x_e,R/2) 2/x_e,R-x_e,L , physical spatial coordinates are mapped to their reference coordinates. We formulate the reference solution correction, ' : ' ×' →, as the contribution from a point on the boundary to a point within the domain. Afterward, the solution correction function for each element is _e ( χ, x ) = ' ( r_e(χ), r_e(x) ) and has derivative _e ( χ, x ) = xr_er_e' ( r_e(χ), r_e(x) ) . Let subscript L denote a function evaluated at r = -1, and subscript R denote a function evaluated at r=1. For example, the reference correction function for the left point is g'_L(r) = g'(-1,r). In one dimension, the solution correction (<ref>) simplifies to the accumulation from the left and right boundaries, _e^(x) = _e,L^Δ '_L ( r_e(x)) + _e,R^Δ '_R ( r_e(x)), where the differences _e,L^Δ, _e,R^Δ are evaluated according to equation (<ref>). In order to maintain point-wise consistency with ^C |__e = _e^Δ, we require '_L(-1 ) = 1 , '_L( 1 ) = 0 , '_R(-1 ) = 1 , '_R( 1 ) = 0, and we expect symmetry '_L(r) = '_R(-r) in order to avoid introducing bias. For second order PDEs, we also require ' to be twice differentiable, which is required for the evaluation of the gradient of the auxiliary variable (<ref>). We additional desire that the correction function approximates the zero function in some sense, so that the corrected solution remains a good approximation of the discontinuous solution. For one dimension, the flux correction function is scalar-valued and only needs to be once differentiable, but otherwise has identical requirements to . Various families of polynomial correction functions were described by <cit.> and <cit.> for flux reconstruction. For example, DG_5 refers to a 5th-order `discontinuous Galerkin' correction function, denoted g_DG,5 in <cit.>, which is designed to be orthogonal to the space of polynomials of degree 3, giving some notion of being zero. Figure <ref> shows DG_5 for the left boundary and its derivative. In flux reconstruction, the choice of polynomial functions is natural, since the solution and flux are approximated with interpolating Lagrange polynomials. With EDNN, however, polynomial correction functions no longer offer a good approximation to zero, since EDNN is not restricted to polynomial function spaces. Also, the EDNN sampling points may be anywhere within the element and can be changed dynamically during the evolution, which renders a polynomial correction function undesirable especially since its derivative is non-zero far from the boundary (for DG_5, figure <ref> shows '_e,L has full support and is non-zero near r=1). Intuitively, the correction from one boundary should not have a sizeable effect on points far from that boundary. This issue is particularly relevant to , where the element sizes can be large when the networks have high expressivity and can capture the solution over large portions of the global domain. §.§.§ Monomial correction functions For use with EDNN, we construct correction functions that better approximate zero in more general function spaces. 
With order parameter 𝒫 and width parameter 𝒲, we devise the monomial class ℳ_𝒫,𝒲 of correction functions, '_L(r) = ( r+1-𝒲/-𝒲)^𝒫 if r ≤ -1+𝒲 0 otherwise , '_R(r) = ( r-1+𝒲/𝒲)^𝒫 if r ≥ 1-𝒲 0 otherwise . These functions are a 𝒫^th order monomial in the interval of width 𝒲 adjacent to the respective boundary point, and are zero everywhere else. In figure <ref>, we show that monomial ℳ_5,0.1 and its derivative quickly decay to zero. Therefore, the monomials satisfy our desire to only impact the interior near the boundary. In the limit as 𝒫→∞ or 𝒲→ 0, this correction function approaches an indicator function which is zero in the interior of the domain. Our preliminary testing showed that, for EDNN, these monomials yield more accurate predictions than polynomial correction functions. §.§.§ Extension to higher dimensions To construct correction functions in > 1 dimensions, we consider partitions of axis-aligned hyper-rectangles. We then cast a series of one-dimensional sub-problems, as detailed in the work by <cit.> and also <cit.>. Formally, the solution correction function becomes, _e (, ) = g_L'(r_e,s()) if ∃ s : = _e,s,L g_R'(r_e,s()) if ∃ s : = _e,s,R 0 otherwise , where subscript s denotes a dimension in the spatial coordinates. In the above expression, _e,s,L and _e,s,R are the orthogonal projections of onto the left and right faces of the s^th dimension. The solution correction reduces to the contributions from the 2S boundary points, _e^() = ∑_s=1^_e^Δ(_e,s,L) _L'(r_e,s() ) + _e^Δ(_e,s,R) _R'(r_e,s() ) . The flux correction function is extended similarly, (, ) = h_L'(r_e,s()) e_s if ∃ s : = _e,s,L h_R'(r_e,s()) e_s if ∃ s : = _e,s,R 0 otherwise , where h_L' and h_R' are the one-dimensional flux correction functions and e_s is the unit basis vector in the s^th direction. The flux correction is therefore, _e^() = ∑_s=1^_e^⊥Δ(_e,s,L) h_L'(r_e,s()) e_s + _e^⊥Δ(_e,s,R) h_R'(r_e,s()) e_s . §.§ Boundary conditions §.§.§ Periodic boundary conditions In , the flux correction procedure can be adopted to enforce periodic boundary conditions in a straightforward fashion. Element interfaces on the boundary are treated similarly to interfaces within the interior of the domain. The same approach can also be used when a single EDNN (or ) is adopted through the entire solution domain. In some of our numerical experiments, we wish to compare to a reference single-element solution that is independent of the boundary-correction algorithm. In these instances, the single-element solution will adopt the feature expansion approach described by <cit.>, which maps hyper-rectangular domains to the torus 𝕋^. This feature layer thus restricts the single-element EDNN prediction to the space of periodic functions. §.§.§ Time-dependent Dirichlet boundary conditions Consider non-intersecting objects with known and possibly time-dependent geometry _b and state _b. We formulate a modified operator, t = (_b - ) if ∃ b : ∈_b (,̂ ̂t̂,̂) otherwise , where is a parameter that determines the rate of relaxation. For every sampling point _i ∈, we first determine whether it intersects an object geometry, in which case we drive EDNN to match the object state. There is no guarantee, however, that the sampling points from intersect with every object. For example, in Couette flow (<ref>), the top and bottom walls are a measure zero subset of the domain. To ensure that EDNN tracks the objects, we can sample additional points directly from _b. 
In a context, we then determine which element subdomain contains these points, and include them to the associated EDNN linear system (<ref>). This method only weakly enforces boundary conditions. EDNN will be driven to the object state at a rate of 1/. Another interpretation is that the modified operator provides a source term to the governing equation. This methodology can also be used for stationary and moving solid objects immersed in the fluid. With a slight modification, ∂ / ∂ t = () + c_s() (_b - ), we can implement sponge zones <cit.> to either absorb incident disturbances or to introduce them into the domain. § NUMERICAL EXPERIMENTS In this section, we demonstrate capability and accuracy by solving canonical PDE problems that are relevant to fluid dynamics, including advection, diffusion, and the Navier-Stokese equations. To demonstrate , where multiple netowrks cooperatively solve coupled equations on a single element, we directly solve the compressible Navier-Stokes equations for the Couette-flow problem. Starting from an initial condition far from the steady state, we show that converges to an analytical reference solution. To examine the performance of over spatially distributed elements, we first solve the one-dimensional linear advection equation, which exercises the inviscid fluxes, and show accurate prediction over many subdomains. We then exercise the viscous fluxes across elements by solving the two-dimensional time-dependent heat equation. Lastly, we utilize the full framework, leveraging both state-space () and spatial decomposition () to solve the Navier-Stokes equations for Taylor-Green vortices. Unless otherwise stated, every neural network utilizes input and output scaling layers, as described by <cit.>. The input scaling achieves the mapping _e → [-1,1]^, mimicking mapping to a reference, or standard, element. The output scaling layer maps [-1,1] → [q_k,L, q_k,R] where q_k,L and q_k,R are the minimum and maximum of the initial state. For our tests, we use the hyperbolic tangent (tanh) activation function and more than one hidden layers. We recognize the idea by <cit.> and <cit.> that high accuracy can be achieved by judiciously selecting a network architecture such that the solution ansatz matches the solution of the PDE, in a single-layer shallow network. Ultimately, however, we wish to utilize the expressivity of deep neural networks to solve complex PDE problems, and for this purpose we adopt general network architectures and without exploiting a priori knowledge of the solution form. For this same reason, in our configurations, we choose relatively large subdomains, or elements, compared to flux reconstruction experiments. For example, in the one-dimensional linear advection case, we choose subdomains that can fully support the full waveform, while flux reconstruction employs a large number of small elements so that the solution is approximately polynomial within an element. §.§ Couette flow We start by demonstrating for solving the compressible Navier-Stokes equations. We denote dimensional variables with superscript ⋆, and non-dimensional quantities are scaled by the characteristic length scale L^⋆, the speed of sound c_∞^⋆, a reference density ρ_∞^⋆, and the specific heat capacity at constant pressure C_p,∞^⋆. This choice yields the Reynolds number Re = ρ_∞^⋆ c_∞^⋆ L^⋆/μ_∞^⋆ and Prandtl number Pr = C_p,∞^⋆μ^⋆/κ^⋆, where μ^⋆ is the dimensional viscosity and κ^⋆ is the dimensional thermal conductivity. 
The governing PDE in non-dimensional, conservative form becomes, t + ∇·_inv + ∇·_vis = 0 where = [ρ, _s, ]^⊤ are the conservative variables: density ρ, momenta _s in spatial directions s = {1, 2, 3}, and total energy . The inviscid and viscous fluxes in direction i are _inv,i = [ ρ u_i; _s u_i + pδ_si; u_i ( + p ); ] , _vis,i = [ 0; - τ_si; - u_k τ_ki + θ_i; ], with summation over repeated indices, and δ denotes the Kronecker delta. The primitive velocities, pressure, temperature, and first viscosity are, u_i = _i/ρ , p = ( γ-1 ) ( - 1/2_k u_k) , T = p/R_g ρ , μ = ( ( γ -1 ) T )^α, where γ is the ratio of specific heats, R_g is the ideal gas law constant, and α is the exponent of the viscosity power law. The viscous stress tensor and the heat flux tensor are, τ_ki = μ/Re( x_ku_i + x_iu_k) + 1/Re(μ_b - S-1/S μ) x_lu_lδ_ki , θ_i = - μ/Re Prx_iT where μ_b is the bulk viscosity. In our numerical experiments, we take γ = 1.4, μ_b = 0.6, and Pr = 0.72. The remaining problem-specific parameters for the viscosity power law α and the Reynolds number Re are defined for each problem. Using these governing equations, we solve the classic Couette flow problem in a two-dimensional domain Ω = [0,1]^2. The top plate is a moving no-slip isothermal wall with streamwise velocity U_1 and temperature T_1, and the bottom plate is a stationary isothermal no-slip wall with velocity U_0 and temperature T_0. Figure <ref> shows a schematic of the flow configuration. For this problem, we treat the entire domain as a single element, and adopt for the solution of the compressible Navier-Stokes equations. We deploy four networks, one for the continuity equation, two for the momentum equations in the streamwise and wall-normal directions, and one for the energy equation. We start from initial conditions that are far from the steady state, and our goal is to verify whether converges to the steady-state analytic solution. With viscosity power law exponent α = 1, the Navier-Stokes equations permit the implicit analytical solution, u_1/U_1 + 1/2 Pr (γ -1) U_1^2 (u_1/U_1 - 1/3(u_1/U_1)^3 ) = (1 + 1/3 Pr (γ - 1) U_1^2)y T/T_1 = 1 + 1/2 Pr (γ - 1) (U_1^2 - u_1^2 ), p = R_g T_1, u_2 = 0 from which we can derive all other fluid variables, in particular the conserved variables, such as _1 shown in figure <ref>. Choosing a top wall temperature T_1 = 1/γ - 1, we will consider multiple cases, each with a different value of the top-wall speed, U_1= {0.2, 0.6, 1.2, 2.0, 4.0}. Note that the velocity is normalized by the speed of sound, and therefore the values of the top-wall speeds are also the relevant Mach numbers of the problem. The bottom wall boundary conditions become U_0=0 and T_0 = T_1 + 1/2 Pr U_1^2. The Reynolds number is fixed to Re=100, which is again based on the speed of sound and hence spans from Re_U(≡ρ_∞^⋆ U_1^⋆ L^⋆/μ_∞^⋆) = 20 to Re_U=400 based on the speed of the top wall. The initial condition starts from a large deviation away from the steady state, specifically we adopt the initial streamwise velocity profile u_1 = U_1 y^20. Since the problem is constant in x, we use the input feature expansion layer (x,y) ↦ (y) and then map y using the standard scaling layer. Each state neural network uses two layers and twenty neurons per layer, and the is evaluated at 32 points adaptively sampled using the technique in <ref>. We march with RK4 and Δ t=10^-6. The top and bottom wall boundary conditions were enforced with = 1/Δ t.
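All comparisons for this case are made against the implicit analytic profile above. As a convenience for the reader, a minimal Python sketch of how that reference can be evaluated follows; the function name, the bisection loop, and the example call are our own illustrative choices and not part of the solver described here. The left-hand side of the implicit relation is monotone in u_1 on [0, U_1], so plain bisection suffices.

```python
def couette_exact(y, U1, Pr=0.72, gamma=1.4):
    """Evaluate the implicit steady Couette profile at wall-normal location y in [0, 1].

    Solves  u/U1 + 0.5*Pr*(gamma-1)*U1**2*(u/U1 - (u/U1)**3/3)
          = (1 + Pr*(gamma-1)*U1**2/3) * y      for u by bisection,
    then evaluates  T/T1 = 1 + 0.5*Pr*(gamma-1)*(U1**2 - u**2)  with T1 = 1/(gamma-1).
    """
    rhs = (1.0 + Pr * (gamma - 1.0) * U1**2 / 3.0) * y

    def residual(u):
        r = u / U1
        return r + 0.5 * Pr * (gamma - 1.0) * U1**2 * (r - r**3 / 3.0) - rhs

    lo, hi = 0.0, U1                  # the profile is bracketed by the wall speeds
    for _ in range(100):              # plain bisection, ample for double precision
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    u = 0.5 * (lo + hi)
    T = (1.0 / (gamma - 1.0)) * (1.0 + 0.5 * Pr * (gamma - 1.0) * (U1**2 - u**2))
    return u, T

# e.g. the Mach 4 case at mid-channel
print(couette_exact(0.5, U1=4.0))
```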
In all cases, the prediction converged to the analytical steady-state solution which is shown using the grey lines in figure <ref>. We show the converged _1 in figure <ref> and temperature in figure <ref>. In figure <ref>, we show the time evolution of momentum _1 as it approaches and converges to the steady state, for the Mach 4 case. For this condition, the L2 error normalized by the norm of the steady-state solution at t=20 is less than O(10^-3) for all variables. §.§ Linear advection For our first test of a distributed set of networks over space, or , we simulate the 1D linear advection of a narrow Gaussian over an interval = [0,ℓ ], where the wave is generated by the condition at the left boundary. The governing equation and initial and boundary conditions are, tq + ∇·( u q ) = 0 , q(x,0) = q_0(x) , q(0,t) = q_0(-ut) , q_0(x) = exp( -(x-x_0)^2/2Σ), which have the analytical solution, q(x,t) = q_0(x-ut). For our numerical experiment, the wavespeed is constant u=1, and the center of the initial Gaussian is x_0 = -0.5 and its variance is Σ = 0.01, and thus the initial Gaussian is outside the computational domain. The domain is partitioned into unit subdomains _e = [e-1, e], where e=1…ℓ. The governing equation is solved on each of the ℓ sub-domains using a separate EDNN which has four hidden layers and twenty neurons per layer. Each EDNN is evolved with 250 points distributed uniformly in space. We march with RK4 and Δ t=10^-6. The inflow boundary condition was enforced with 1 point and = 1/Δ t. We demonstrate the necessity of the procedure for the treatment of the inter-element interfaces using a configuration that fails. We take ℓ=2 and neglect the flux correction (_e^=0), so the total flux (_e = _e^) uses only local information. Figure <ref> shows the EDNN prediction for this first configuration. Our boundary condition method successfully enforces the time-dependent inflow condition, and we see the wave correctly materialize and advect within element 1. The wave, however, fails to cross over to the second element, _2. We proceed to demonstrate successful advection by adopting the interface-correction procedure. We consider ℓ=21 elements, adopt an upwind common inviscid flux (<ref>), and a monomial flux correction function ℳ_15,0.1. Figure <ref> shows that the second configuration successfully tracks the wave across element boundaries. To provide a quantitative measure of accuracy, we track the instantaneous error normalized by the norm of the Gaussian, ε(t) = ‖q̂(x,t)-q(x,t)‖_2/‖ q_0(x) ‖_2. Figure <ref> shows that the error remains commensurate with the initial training error, and the final error at t=20 is ε = 7.11×10^-3. §.§ Linear diffusion For our next test, we solve the two-dimensional linear diffusion equation on a periodic domain = [-π, π]^2, starting from a sinusoidal initial condition. The governing equations, and boundary and initial conditions are, tq + ∇·( -ν∇ q ) = 0, q(-π,y,t) = q(π,y, t) , q(x,-π,t) = q(x,π, t) , q(x,y,0) = sin x sin y where ν=1 is the diffusivity coefficient. The analytical solution is given by q(x,y,t) = e^-2ν tsin x sin y. We adopt a spatially distributed set of networks, or , and partition the domain into 2× 2 equal elements as shown in figure <ref>. These partitions were selected so that element interfaces lie on the regions with maximum heat flux. The interfaces within the domain interior and on the global periodic boundaries exchange the solution and flux corrections.
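Before listing the remaining numerical parameters for this case, we note that the analytic reference and the pointwise error evaluation used to assess the distributed prediction are straightforward to script. The following numpy sketch is purely illustrative; the function names and the stand-in prediction are ours.

```python
import numpy as np

def q_exact(x, y, t, nu=1.0):
    # analytic solution of the 2D heat equation with q(x, y, 0) = sin(x) sin(y)
    return np.exp(-2.0 * nu * t) * np.sin(x) * np.sin(y)

def relative_error(q_pred, x, y, t, nu=1.0):
    # discrete analogue of the instantaneous error normalized by the reference norm
    q_ref = q_exact(x, y, t, nu)
    return np.linalg.norm(q_pred - q_ref) / np.linalg.norm(q_ref)

# example: a uniform 33 x 33 sampling of one element, [-pi, 0] x [-pi, 0]
xs, ys = np.meshgrid(np.linspace(-np.pi, 0.0, 33), np.linspace(-np.pi, 0.0, 33))
stand_in = q_exact(xs, ys, 0.5) + 1e-4 * np.random.randn(*xs.shape)  # placeholder for an EDNN prediction
print(relative_error(stand_in, xs, ys, 0.5))
```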
We evaluate the common solution and common normal viscous fluxes using the LDG method (<ref>), and both the solution and flux correction functions are the monomial ℳ_3,0.25. Each of the four EDNNs is comprised of 4 hidden layers and 10 neurons per layer, and each sub-domain is sampled at 33×33 points distributed uniformly. We march with RK4 and a stepsize Δ t = 0.01. The predictions are compared to the analytical solution in figure <ref>, and show very good agreement. For visualization only, we extend the domain to show the periodicity of the prediction. Figure <ref> shows a comparison along the horizontal line y=π/2 at various times. Globally, the collective prediction by the distributed networks matches the reference solution. Since the temperature decays as a function of time, we evaluate the instantaneous error, ε(t) = ‖q̂(x,t)-q(x,t)‖_2/‖ q(x,t) ‖_2, which is reported in figure <ref>. The error remains commensurate with its value from the representation of the initial condition, and at the final time, ε = 6.86×10^-4. §.§ Taylor-Green vortices To demonstrate the full, coupled and distributed framework, with multiple states and multiple elements, we simulate two-dimensional Taylor-Green vortices. The system of PDEs is the compressible Navier-Stokes equations, as described in section <ref> with α=0. We will consider multiple cases with different Reynolds numbers, Re={1, 10, 100}. The domain is = [0, 2π]^2 with periodic boundary conditions. We obtain the initial conditions of the conserved state from ρ_0(x,y) = 1 , p_0(x,y) = p_b - M^2 /4( cos(2x) + cos(2y) ) u_0(x,y) = +M cos(x)sin(y) , v_0(x,y) = -M sin(x)cos(y) , where M=0.1 is the Mach number and p_b = 1/γ is the base pressure. At this low Mach number, the flow should evolve similarly to the incompressible limit which we use as our reference solution, ρ(x,y,t) = ρ_0(x,y), u(x,y,t) = u_0(x,y) D(t), v(x,y,t) = v_0(x,y) D(t). In the above expression, the decay rate is D(t) = exp(-2t/Re). The total energy is conserved and is equal to its initial value, E_T(t) = ∫_(x,y,0) d = 4π^2 p_b/γ-1 + M^2π^2 while the expected decay of the total kinetic energy is, E_K(t) = ∫_1/2ρ( u_1^2(x,y,t) + u_2^2(x,y,t) ) d = M^2 π^2 D^2(t). We first perform a test with on the whole periodic domain, with periodicity enforced using the feature layer described in <ref> and, hence, without introducing any interface reconstruction. We then use a approach, where we adopt a 2× 2 domain decomposition. Each sub-domain has four coupled networks for the continuity, the two momentum, and the energy equations. Each neural network is comprised of four hidden layers with twenty neurons per layer. The common solution and normal viscous fluxes were computed with the LDG method and the common inviscid normal fluxes were computed with the Rusanov method (<ref>). For both the solution and the flux correction functions, we use ℳ_3,0.2. Each sub-domain is sampled at 66×66 points distributed uniformly, and we march with RK4 and Δ t = Re×10^-4. In figure <ref>, we report the Re=100 case's vorticity, ω = ∇×u, at time t=20. The first panel compares the single solution on the full domain to the analytical expression, using color and line contours respectively. In panel (b), we report the prediction by the 2× 2 with interface corrections both on the internal and the periodic interfaces between the four sub-domains. These predictions similarly show very good agreement with the anticipated behaviour of the vorticity.
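The incompressible-limit reference fields and the expected kinetic-energy decay quoted above are simple to tabulate when post-processing these runs. A short numpy sketch follows; the helper names are ours and the snippet is only illustrative.

```python
import numpy as np

def tg_reference(x, y, t, Re, M=0.1):
    # incompressible-limit reference: the initial velocities scaled by D(t) = exp(-2 t / Re)
    D = np.exp(-2.0 * t / Re)
    u = +M * np.cos(x) * np.sin(y) * D
    v = -M * np.sin(x) * np.cos(y) * D
    return u, v

def tg_kinetic_energy(t, Re, M=0.1):
    # expected decay of the integrated kinetic energy, E_K(t) = M^2 pi^2 D(t)^2
    return (M * np.pi) ** 2 * np.exp(-4.0 * t / Re)
```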
It is important to note that, while viscous decay is dominant, the nonlinear advection terms in this evolution are all exercised, and must balance exactly or else the vortex pattern is distorted. The zoomed-in view in figure <ref> shows the velocity vectors (u_1, u_2). The overlaid black and grey arrows correspond to the prediction and the reference velocity field, and again show excellent agreement. We evaluated the normalized instantaneous error (<ref>) for the velocity vector U = [u_1, u_2]. Figure <ref> shows that the velocities are accurately predicted. The errors at the final times, t=Re for each Reynolds number, are all less than one percent. In addition, we report the prediction of the total and kinetic energy over time in figure <ref>, and show that the former remains conserved and the latter decays at the expected rate. § CONCLUSIONS Use of machine-learning strategies to solve partial differential equations (PDE) is an active area of research. Evolution equations are of particular interest in the present work, where the solution is expressed by a network in all but one dimension, and evolved in the remaining dimension using the governing equations. These types of equations are common in physics, where the solution is evolved in time, although evolution in space for parabolic systems can be treated similarly. The original idea of an evolution deep neural network (EDNN) used a single network to represent the solution of the PDE at an instant in time, and the system of PDEs was then used to determine the time-evolution of the network parameters and, as such, the evolution of the solution for any time horizon of interest. This approach requires use of a large network that can express the global solution, and is therefore computationally costly. In the present work, we use several neural networks whose evolutions are coordinated to accurately predict the solution of the system of PDEs. First we introduce coupled EDNN () where we deploy multiple scalar networks each for one of the equations in the system of PDEs, and couple their evolution through the physics coupling in the governing equations. Second, we introduce distributed EDNN () where we take advantage of spatial domain decomposition and assign networks to different sub-domains, or elements, in space and coordinate their evolution across interface corrections. Together, these two strategies form the approach, which is a new framework for numerically solving PDE problems using distributed and coupled neural networks. We demonstrated the capability of for solving canonical PDE problems. We used to evolve the compressible Navier-Stokes equations for Couette flow for a range of Mach numbers. We showed that successfully applied domain decomposition and flux corrections to solve linear advection and the two-dimensional heat equation. Finally, coordinated neural networks to solve the Taylor-Green vortices problem. In all cases, the aggregate solution agreed well with analytical and reference solutions. Note that, with specific choices of the network architecture, sampling points, and correction functions, EDNN can recover the classical flux reconstruction schemes. The basic procedure would be to use an input feature layer x ↦ [l_e,1(x) ,…, l_e,(x) ], a single (output) node, a “linear” activation function σ(x) = x, and no bias parameter. With these choices, the EDNN ansatz becomes q̂_e(x,t) = ∑_n=1^ w_e,n(t) l_e,n(x).
Then consider a fixed set of sampling points, x_e,n, and the Lagrange polynomials, l_e,n, such that l_e,n(x_e,n) = 1 and zero at the other sampling points. EDNN is then a degree -1 polynomial that interpolates the solution at the sampling points. In this limit, we would not be exploiting the expressivity of the network, and would require a large number of small elements to accurately represent the solution as is customary in finite-element methods. Instead, by adopting larger networks that have powerful expressivity, we can partition the domain into relatively large elements. The combination of network expressivity and domain decomposition for distributed networks, which is introduced in the present work, provides the foundation for flexible and computationally efficient solvers. In addition, we elected to use generic network architectures and activation functions. We acknowledge that it is possible to design networks that are capable of exactly representing, up to machine precision, a known solution of a PDE (e.g. using Gaussian activation for an advecting or decaying Gaussian field). Such networks with problem-specific input feature layers or activation functions can still be adopted with the herein presented approach, and in fact should be used when possible. However, we assumed no prior expert knowledge of the shape of the solution in our examples. We demonstrated that remained accurate even while choosing a less specialized network architecture, or approximation ansatz. With , numerous avenues for future research are worth exploring. Firstly, can be parallelized to realize computational speedups and to tackle large-scale problems. Secondly, our approach for imposing boundary conditions is simple, flexible, and can accommodate arbitrary time-dependent geometries. As such, can be used to simulate flow over immersed bodies, such as canonical flow over a cylinder or a pitching airfoil. Thirdly, in the present foundational work, the domain decomposition was limited to conforming hyper-rectangular elements. The underlying neural networks, however, are not restricted to a subdomain shape and can easily handle irregular geometries. This versatility offers the opportunity for the domain to be partitioned into an unstructured collection of various shapes and sizes. § ACKNOWLEDGEMENTS This work was supported by the Defense Advanced Research Projects Agency [grant HR00112220035] and the National Science Foundation [grant 2214925]. § ADAPTIVE SAMPLING At each time-step, EDNN uses a set of sampling points from the spatial domain to solve the linear system (<ref>). Electing to use uniformly spaced points is suitable for many problems. However, adaptive sampling techniques have been shown to improve the accuracy of EDNN predictions <cit.>. We mimic sampling from a probability distribution with a heuristic approach that clusters the points in dynamic regions. Additionally, we wish to ensure that the global behavior is captured by evolving EDNN with samples that span the entire domain. To address both competing goals, our approach approximates a mixture distribution combining the adaptive distribution based on the PDE dynamics with a static, uniform distribution. §.§ Procedure Consider a problem in one spatial dimension on = [x_L, x_R ]. Let x_1^(t) <…< x_^(t) be the ordered points at time-step t. For the next time-step t+1, we seek the positions x_i^(t+1) given the data _i^(t) = (x_i^(t)), where represents the density function used to determine point locations.
To approximate the area under the curve at time-step t, let Δ x_i^(t) = x_i+1^(t) - x_i^(t) be the point spacing, _i^(t) = 1/2( h_i^(t) + h_i+1^(t)) be the segment height, and the total area be, ^(t) = ∑_i=1^-1Δ x_i^(t)_i^(t) . For the next time step, we wish to find point placements so that all segment areas are equal. To avoid implicit solves, we approximate the segment area using the current time-step data, which produces the lengths, Δx̂_i^(t+1) = ^(t)/-11/_i^(t)≈^(t+1)/-11/_i^(t+1) . These segment lengths may not total the domain length, so we normalize, Δ x_i^(t+1) = x_R - x_L/∑Δx̂_j^(t+1)Δx̂_i^(t+1), which gives us the desired point spacing. In order to approximate the mixture distribution, we choose the data, (x_i^(t)) = ‖(x_i^(t), t, (x_i^(t)))‖_1 + (1-) 1/∑_j=1^‖((x_j^(t)))‖_1, with parameter ∈[0,1]. The first term, ‖(x_i^(t), t, (x_i^(t)))‖_1, drives the points to the dynamic spatial regions. The second term drives the points to be uniformly distributed. The parameter balances these competing goals. For the Couette-flow numerical experiments in <ref>, we used β = 0.9. This procedure mimics sampling from a distribution estimated from data _i without constructing a proper probability distribution estimation, generating random samples, or making additional network inferences. Also, we can utilize this procedure to determine an initial set of points. Using either the initial network prediction or the analytical initial condition, we iteratively adapt the points with the above procedure to settle on the point locations for the first time step. § RIEMANN SOLVER CHOICES In this section, we present our choices for computing the common interface solutions and common interface normal flux. Overall, we choose upwinding schemes for advective fluxes. Since the viscous flux uses the corrected solution, we pair the common interface solution and the common viscous flux solvers so that the viscous flux remains centered. To simplify, we adopt the following notation for the local (subscript minus) and external (subscript plus) fluxes, _inv,- = _inv (_-) , _vis,- = _vis (_-, _-) , _inv,+ = _inv (_+) , _vis,+ = _vis (_+, _+) . §.§ Linear advection and diffusion For the common interface solution, we use the local discontinuous Galerkin (LDG) <cit.> (upwinding) method: (_-, _+, n) = 1/2 (_- + _+) + sgn(u·n) 1/2 (_- - _+), where u is the problem wave direction and sgn is the sign function, sgn(x) = 1 if x > 0 0 if x = 0 -1 if x < 0 . Note that, if u·n > 0 then ^⋆ = _-^, and if u·n < 0 then ^⋆ = _+^ . For the common inviscid normal flux, we use the Lax-Friedrichs method <cit.>: _inv(_-, _+, n) = 1/2( _inv,- + _inv,+) ·n + r_1/2 |u·n| ( _- - _+), where r_1 ∈ [0,1] is a parameter, r_1=0 recovers an central average flux, and r_1=1 recovers a fully upwind flux. In our experiments, we choose r_1 = 1. For the common viscous normal flux, we use the local discontinuous Galerkin (LDG) (downwinding) method: _vis(_-, _+, _-, _+, n) = 1/2( _vis,- + _vis,+) ·n - sgn(u·n) 1/2(_vis,- - _vis,+) ·n + r_2 ( _- - _+), where r_2 is a parameter penalizing the jump in the solution. Note that the use of a downwinding method for the viscous fluxes is to compensate for the upwinding performed on the common interface solution. This pairing results in a central viscous flux scheme. In our experiments, we choose r_2 = 0.1. 
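For a scalar problem in one spatial dimension these three interface functions reduce to a few lines, which may help clarify how they fit together; the following Python sketch uses our own function names and is not taken from the reference implementation. Pairing the upwinded common solution with the downwinded viscous flux is what renders the overall viscous treatment central.

```python
import numpy as np

def common_solution_ldg(q_minus, q_plus, u, n):
    # upwinded common interface solution (LDG): select the upwind trace
    return 0.5 * (q_minus + q_plus) + np.sign(u * n) * 0.5 * (q_minus - q_plus)

def common_inviscid_lax_friedrichs(q_minus, q_plus, u, n, r1=1.0):
    # common inviscid normal flux for linear advection (flux F = u q);
    # r1 = 0 gives a central average flux, r1 = 1 a fully upwind flux
    return 0.5 * (u * q_minus + u * q_plus) * n + r1 * 0.5 * abs(u * n) * (q_minus - q_plus)

def common_viscous_ldg(fv_minus, fv_plus, q_minus, q_plus, u, n, r2=0.1):
    # downwinded common viscous normal flux with a penalty r2 on the solution jump
    return (0.5 * (fv_minus + fv_plus) * n
            - np.sign(u * n) * 0.5 * (fv_minus - fv_plus) * n
            + r2 * (q_minus - q_plus))
```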
§.§ Compressible Navier-Stokes For the common inviscid normal flux, we use the Rusanov method <cit.> with estimated wavespeed from <cit.>: _inv(_-, _+, n) = 1/2( _inv,- + _inv,+) ·n + s/2( _- - _+), where s is an estimate of the maximum wave speed, s = √(γ(p_- + p_+)/ρ_- + ρ_+) + 1/2| n·( u_- + u_+) |, and ρ_∓ are the local and external densities, p_∓ are the local and external pressures, u_∓ are the local and external velocity vectors, and γ is the ratio of specific heats. For the common solution and common viscous normal flux, we use the LDG methods. However, instead of a prescribed velocity, we use the predicted, pointwise velocity averaged from the local and external boundary state, u = 0.5 (u_-+u_+). § EVALUATION OF NAVIER-STOKES VISCOUS FLUX In this section we provide an example of the computation of the divergence of flux for . The viscous flux (<ref>) is a function of the discontinuous conserved state and the (corrected) auxiliary variable. Therefore divergence of the viscous flux (<ref>) requires the gradient of the discontinuous state ∇q^D and the auxiliary variable a^D. Though both quantities are approximations of the solution gradient, they are not equal. To highlight this difference, we mark variables that depend on the discontinuous state ( ^, ∇^) alone with overline and name them `discontinuous'. In contrast, we indicate variables that additionally depend on the auxiliary ( ^, ∇^, ^, ∇^) with tilde and name them `corrected'. Certain physical variables will have both a discontinuous and corrected versions. We rewrite the Navier-Stokes equations (<ref>) with this notation: t + ∇·_inv + ∇·_vis = 0 _inv,i = [ ρ u_i; _s u_i + pδ_si; u_i ( + p); ] , _vis,i = [ 0; - τ_si; - u_k τ_ki + θ_i; ] , u_i = _i /ρ , p = ( γ-1 ) ( - 1/2_k u_k) , T = p/R_g ρ , μ = ( ( γ -1 ) T)^α , τ_ki = μ/Re( x_ku_i + x_iu_k) + 1/Re(μ_b - S-1/S μ) x_lu_lδ_ki , θ_i = - μ/Re Prx_iT . Consider, for example, the divergence of viscous energy flux, ∇·_vis, = - τ_kix_iu_k - u_kx_iτ_ki + x_iθ_i . Notice the discontinuous velocity gradient appears, and from the viscous stress tensor, the corrected velocity gradient is needed. The discontinuous velocity gradient, x_ju_i = - ρ^-2_ix_jρ + ρ^-1x_j_i, can be straightforwardly computed from the discontinuous state using the EDNN solution and automatic differentiation. However, the corrected velocity gradient, x_ju_i = - ρ^-2_ix_jρ + ρ^-1x_j_i, requires the auxiliary of density and momentum. The computation of all other divergences and their necessary components follows similarly. unsrtnat
http://arxiv.org/abs/2407.13128v1
20240718033555
The atomic Leibniz rule
[ "Ben Elias", "Hankyung Ko", "Nicolas Libedinsky", "Leonardo Patimo" ]
math.RT
[ "math.RT", "math.CO" ]
§ ABSTRACT The Demazure operator associated to a simple reflection satisfies the twisted Leibniz rule. In this paper we introduce a generalization of the twisted Leibniz rule for the Demazure operator associated to any atomic double coset. We prove that this atomic Leibniz rule is equivalent to a polynomial forcing property for singular Soergel bimodules. § INTRODUCTION We introduce a generalization of the twisted Leibniz rule which applies to Demazure operators associated with certain double cosets. We call it the atomic Leibniz rule, and it should play an important role in an eventual description of the singular Hecke category by generators and relations. This result is the algebraic heart which we extract from singular Soergel bimodules and transplant to their diagrammatic calculus. §.§ In this paper, we work with general Coxeter systems and quite general actions of these groups on polynomial rings. For ease of exposition, in this section, we assume that R = [x_1, …, x_n], with the standard action of the symmetric group W = _n. Let s_i be the simple reflection swapping i and i+1, and let S = {s_1, …, s_n-1} be the set of simple reflections. There is a Demazure operator _s_i R → R defined by the formula _s_i(f) = f - s_i f/x_i - x_i+1. This operator famously satisfies a twisted Leibniz rule _s_i(fg) = s_i(f) _s_i(g) + _s_i(f) g. Because the operators _s_i satisfy the braid relations, one can unambiguously define an operator _w associated with any w ∈_n by composing the operators _s_i along a reduced expression. When computing _w(fg), one could apply the twisted Leibniz rule repeatedly to obtain a complicated generalization of (<ref>) for _w. We discuss this in <ref>. What we provide in this paper is the natural generalization of the twisted Leibniz rule to the setting of double cosets. In double coset combinatorics, the analogues of simple reflections are called atomic cosets. We prove a version of (<ref>) for Demazure operators attached to atomic cosets. §.§ We recall some definitions so as to precisely state our first theorem. Let (W,S) be any Coxeter system. For any subset I ⊂ S, let W_I be the parabolic subgroup of W generated by I. We assume that I is finitary, i.e. that W_I is a finite group. Let R^I ⊂ R be the subring of polynomials invariant under W_I. Let w_I denote the longest element of W_I. For two subsets I, J ⊂ S, a double coset p ∈ W_I \ W / W_J will be called an (I,J)-coset. Any (I,J)-coset p has a unique minimal element p∈ W and a unique maximal element p∈ W with respect to the Bruhat order. In <cit.>, we introduced a Demazure operator _p : R^J → R^I for any (I,J)-coset p. In fact, _p is equal to _w for w = p w_J ∈ W. Normally one views _w as a map R → R, but when the source of this map is restricted from R to R^J, then the image is contained in R^I. When I = J =, so that p = {w} for some w ∈ W, then _p = _w.
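The operators just introduced are easy to experiment with in a computer algebra system, and the reader may find it useful to check identities such as the twisted Leibniz rule mechanically. The following sympy sketch (the helper names are ours) implements the Demazure operators for the standard action of the symmetric group on four variables and verifies the twisted Leibniz rule on a pair of test polynomials.

```python
import sympy as sp

x = sp.symbols('x1:5')  # x1, x2, x3, x4

def act(i, f):
    # the simple reflection s_i swaps x_i and x_{i+1}
    return f.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)

def demazure(i, f):
    # Demazure operator: (f - s_i f) / (x_i - x_{i+1}); the quotient is a polynomial
    return sp.cancel((f - act(i, f)) / (x[i - 1] - x[i]))

f = x[0] ** 2 * x[1] + x[2]
g = x[0] * x[3] + x[1] ** 2

# twisted Leibniz rule for a simple reflection
lhs = demazure(1, f * g)
rhs = act(1, f) * demazure(1, g) + demazure(1, f) * g
assert sp.simplify(lhs - rhs) == 0
```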
In <cit.> we introduced the notion of an atomic (I,J)-coset. Briefly stated, is atomic if * there exists s, t ∈ S (possibly equal) with s ∉ I and t ∉ J such that I ∪{s} = J ∪{t} =: M. * W_M is a finite group, = w_M and w_M s = t w_M. In particular, the subsets I and J are conjugate under both and . If f ∈ R^J then (f) ∈ R^I. We also have = w_J, so that _ is the restriction of _. When I = J =, the atomic (I,J)-cosets have the form {s} for s ∈ S. Atomic cosets in _n can be described using cabled crossings. For example, the permutation with one-line notation (345671289) crosses the first two numbers past the next five while fixing the remaining numbers. It is a prototypical example of for an atomic coset ; in this example W_I ≅_2 ×_5 ×⋯ and W_J ≅_5 ×_2 ×⋯ and W_M ≅_7 ×⋯. §.§ Our story began with the defining representation of _n over . Then we considered the polynomial ring R, its invariant subrings, and the structure derived therefrom. Alternatively, one can start with a realization of a Coxeter system (W,S), which is effectively a “reflection representation” of W over a commutative base ring , equipped with a choice of roots and coroots. From this representation we construct its polynomial ring R, Demazure operators, etcetera. See <ref> for details. For each realization[Soergel's construction depends only on a reflection representation of W, and not a choice of roots and coroots. Elias and Williamson gave a presentation of Soergel's category in <cit.> which depends on a choice of realization. The Demazure operators also depend on the choice of roots and coroots.], Soergel defined a monoidal category of R-bimodules called Soergel bimodules, and proved that for certain realizations[Specifically, the base ring should be an infinite field of characteristic not equal to 2, and the representation should be (faithful and) reflection-faithful.] (that we call SW-realizations) one obtains a categorification of the Hecke algebra. Soergel bimodules and related categorifications of the Hecke algebra are objects of critical importance in geometric representation theory. The following is the first main result of this paper (Theorem <ref>). [The atomic Leibniz rule] Let (W,S) be any Coxeter system equipped with an SW-realization, or the symmetric group _n with R=ℤ[x_1,…,x_n]. Let be an atomic (I,J)-coset and ≤ the Bruhat order for double cosets[The Bruhat order on (I,J)-cosets can be defined by q ≤ if and only if q≤. See <cit.> for equivalent definitions. Note that q need not be atomic, so _q need not equal (the restriction of) _q.]. For every q< there is a unique R^I∪ J-linear operator on polynomials denoted T^_q, satisfying the equation _(f · g) = (f) _(g) + ∑_q < _q(T^_q(f) · g) for all f,g∈ R^J. Polynomials in the image of T^_q are appropriately invariant, see Definition <ref> for details. When is fixed, we will often write T_q := T_q^. The motivation for this result will take some setup, so please bear with us. We let e denote the identity element. Let s be a simple reflection and let be the (,)-coset {s}. There is only one coset less than , namely q = {e}. We set T^_q(f) = _s(f). Note that _ = _s and _q = 𝕀. Then (<ref>) becomes _s(fg) = s(f) _s(g) + _s(f) g, which recovers the twisted Leibniz rule. For more examples see <ref>. There we also give an example of a non-atomic coset p for which no equality of the form (<ref>) can hold (with p replacing ). Though we can prove the existence and uniqueness of such a formula, we do not have an explicit description of the operators T_q. 
We consider this an interesting open problem. A more accessible problem is to compute T_q on a (carefully chosen) set of generators of R^J, which we accomplish in type A in Theorem <ref>. This is useful computationally because it gives enough information to apply the atomic Leibniz rule for any pair of elements in R^J (see the discussion of multiplicativity in <ref>). In <ref> we prove that the atomic Leibniz rule for one realization implies the atomic Leibniz rule for related realizations (e.g. obtained via base change, enlargement, or quotient). The atomic Leibniz rule also depends only on the restriction of the realization to the finite parabolic subgroup W_M associated to . Thus Theorem <ref> implies the atomic Leibniz rule in broad generality for realizations of Coxeter systems in both finite and affine type A, and in finite characteristic as well, see Example <ref>. §.§ Our second main result is a connection between the atomic Leibniz rule and the theory of singular Soergel bimodules <cit.>. Like Soergel bimodules, singular Soergel bimodules are ubiquitous in geometric representation theory, appearing e.g. in the geometric Satake equivalence and other situations where partial flag varieties play a role. Specifically, we connect the atomic Leibniz rule to a property called polynomial forcing, whose motivation we postpone a little longer. Let us first be more precise. Singular Soergel bimodules are graded bimodules over graded rings, but we ignore all gradings in this paper. To an atomic coset as above, one can associate the (R^I, R^J)-bimodule B_ := R^I _R^M R^J. This is an indecomposable bimodule; there are also indecomposable (R^I, R^J)-bimodules B_q associated to any (I,J)-coset q, which we will not try to describe here. Then B_ has a submodule called the submodule of lower terms, spanned by the images of all bimodule endomorphisms of B_ which factor through B_q for q <. Atomic polynomial forcing is the statement that 1 f and (f) 1 are equal modulo lower terms in B_, for any f ∈ R^J. Williamson has proven for SW-realizations <cit.> that singular Soergel bimodules have “standard filtrations,” which implies atomic polynomial forcing for SW-realizations. Polynomial forcing (Definition <ref>) is a generalization of atomic polynomial forcing for a bimodule B_p associated with an arbitrary double coset p. Using <cit.> we prove Theorem <ref>, which says that polynomial forcing for any double coset is a consequence of atomic polynomial forcing. The restrictions imposed on SW-realizations are significant as they rule out some examples of great importance to modular representation theory, e.g. affine Weyl groups in finite characteristic. An almost SW-realization (Definition <ref>) is a realization over a domain such that one obtains an SW-realization after base change to the fraction field of . An example is [x_1, …, x_n] with its action of _n. The second main result of this paper is the following equivalence (<Ref>): Let (W,S) be a Coxeter system equipped with an almost SW-realization. Then the atomic Leibniz rule is equivalent to atomic polynomial forcing. We deduce the SW-realization case of Theorem <ref> from Theorem <ref> and Williamson's theory of standard filtrations. §.§ Let us now explain the motivation for our results. The diagrammatic presentation for the category of Soergel bimodules, developed by Elias, Khovanov, Libedinsky, and Williamson <cit.>,<cit.>, <cit.>, <cit.> has proven to be an important tool for both abstract and computational reasons. 
This diagrammatically constructed category is typically called the Hecke category, and is equivalent to the category of Soergel bimodules for SW-realizations. For non-SW-realizations the category of Soergel bimodules need not behave well, but the diagrammatic category does behave well, and continues to provide a categorification of the Hecke algebra. The ability to compute within the Hecke category has led to advances such as <cit.>, <cit.>,<cit.>,<cit.>,<cit.>,<cit.>. The authors are in the midst of a concerted effort to define the singular Hecke category, a diagrammatic presentation of singular Soergel bimodules. The framework for such a presentation was developed in <cit.>, but this framework lacks some of the relations needed. In <cit.> we described what should be the basis for morphisms in the diagrammatic category, called the double leaves basis, and proved that it descends to a basis for morphisms between singular Soergel bimodules for SW-realizations. Once the remaining relations are understood, proving that the diagrammatic category is correctly presented reduces to proving that double leaves span. An important part of that proof will be diagrammatic polynomial forcing, which is like polynomial forcing except that the submodule of lower terms is replaced by the span of double leaves associated to smaller elements in the Bruhat order. We abbreviate diagrammatic polynomial forcing to -forcing. It is -forcing which is our true goal in this paper. For an atomic coset one can identify (B_) with B_ as bimodules. In Section <ref>, we explicitly compute the submodule _<⊂ B_ corresponding to the -linear subspace of End(B_) spanned by double leaves factoring through q <. The following theorem (Corollary <ref> and Theorem <ref>) is proven by direct computation and is the reason we discovered the atomic Leibniz rule in the first place. Let (W,S) be a Coxeter system equipped with an almost SW-realization. Then the atomic Leibniz rule is equivalent to atomic -forcing. In other words, for a given f ∈ R^J, we prove that 1 f - (f) 1 ∈_< if and only if the atomic Leibniz rule holds for f and any g ∈ R^J. For an SW-realization, the results of <cit.> can be used to prove that _< agrees precisely with the submodule of lower terms (see Corollary <ref>). In <ref> we use some novel localization tricks to extend this statement to almost-SW-realizations. Thus -forcing agrees with polynomial forcing for almost-SW-realizations, thus Theorem <ref> implies Theorem <ref>. §.§ One is still motivated to prove -forcing (or equivalently, the atomic Leibniz rule) for almost-SW-realizations and more general realizations. We now discuss how this process is simplified by Theorem <ref>. Let us say the atomic Leibniz rule holds for f ∈ R^J if (<ref>) holds for that specific f and all g ∈ R^J. It is obvious that if the rule holds for f_1 and f_2 then it holds for f_1 + f_2. It is not obvious that if it holds for f_1 and f_2 then it holds for f_1 · f_2, thus the formula (<ref>) is not obviously multiplicative in f. However, atomic polynomial forcing is obviously multiplicative in f. Our equivalence in Theorem <ref> is proven on an element-by-element basis, from which we conclude that the atomic Leibniz rule is actually multiplicative. As a consequence, the atomic Leibniz rule can be proven without relying on Williamson's results, by checking it on each generator f of R^J. We perform this computation in type A in <ref>, for carefully chosen generators f, via a direct and elementary proof. 
The operators T_q are simplified dramatically when applied to these generators. On the one hand, this proves Theorem <ref> for the symmetric group over ℤ[x_1,…, x_n], and consequently for other realizations (see Remark <ref>). Via Theorem <ref>, this implies -forcing in the same generality. On the other hand, it also gives a computationally effective way to apply the atomic Leibniz rule (or polynomial forcing), even if one does not know the operators T_q in general: decompose f as a linear combination of products of generators, and apply the atomic Leibniz rule for generators one term at a time. Acknowledgments. BE was partially supported by NSF grants DMS-2201387 and DMS-2039316. HK was partially supported by the Swedish Research Council. NL was partially supported by FONDECYT-ANID grant 1230247. § EXAMPLES AND REMARKS In this chapter we give some examples of atomic Leibniz rules, in the relatively small Coxeter group W = _4. The reader should not be disheartened if these examples are still quite difficult and technical to verify by hand! Indeed, one purpose of this chapter is to showcase the complexity involved in even the simplest examples of the atomic Leibniz rule. The reader will not miss out on anything important if they skip directly to <Ref>. §.§ Examples Our examples take place in W = _4. We let s = (12), t = (23), and u = (34) denote the simple reflections. We give the examples without justification first, and then discuss the verification afterwards. We require that T_q(f) ∈ R^K for some K ⊂ S depending on q (precisely: K = q^-1 I q∩ J). There are three (su,su)-cosets in _4: the maximal one p which is atomic, the submaximal coset q with q = t, and the minimal coset r containing the identity. We claim that for f, g ∈ R^su we have _tsut(fg) = tsut(f) ·_tsut(g) + _sut(T_q(f) · g) + T_r(f) · g, or in other words _p(fg) = p(f) ·_p(g) + _q(T_q(f) · g) + _r(T_r(f) · g), where T_q(f) = su_t(f), T_r(f) = _tsut(f) - _sut(T_q(f)). Note that (obviously) T_q(f) ∈ R = R^ while (less obviously) T_r(f) ∈ R^su. These are the invariance requirements. Iterating the ordinary twisted Leibniz rule, it is not hard to deduce the existence of a formula of the form _tsut(fg) = tsut(f) ·_tsut(g) + ∑_x < tsut_x(T_x(f) g) for some operators T_x R → R. What is not obvious is that, when f, g ∈ R^su, this formula will simplify so that only the terms where x = sut and x=1 survive. There are two (tu,st)-cosets in _4: the maximal coset p which is atomic, and the minimal coset q containing the identity element. We claim that for f, g ∈ R^st we have _stu(fg) = stu(f) _stu(g) + _tu(T_q(f) · g), or in other words _p(fg) = p(f) _p(g) + _q(T_q(f) · g), where T_q(f) = st_u(f). Note also that T_q(f) ∈ R^t. It helps to look at an example of a non-atomic coset, to see that Leibniz rules are not guaranteed. We return to the notation of Example <ref>. Note that r is the only coset less than q, and _r is the identity map. Let us argue that there is no naive analogue of (<ref>) for _q. If there were, it would have to have the form _sut(f · g) = sut(f) _sut(g) + T(f) · g for some operator T. By evaluating at g=1 we have T(f) = _sut(f). Note that f, g ∈ R^su. Iterating the twisted Leibniz rule we have _sut(f · g) = sut(f) _sut(g) + _s(ut(f)) _ut(g) + _u(st(f)) _st(g) + _su(t(f)) _t(g) + _sut(f) g. Only two of these five terms are in (<ref>). If g is linear and _t(g) = 1 then the sum of the three missing terms is nonzero for some f. Thus (<ref>) is false. However, now suppose that f ∈ t(R^su) instead of R^su. 
Then many terms in (<ref>) vanish, yielding _sut(f · g) = sut(f) _sut(g) + _sut(f) g. This is compatible with (<ref>), with T = _sut. In view of this example, there is hope of finding some generalization of the atomic Leibniz rule to some non-atomic (I,J) cosets q, letting f ∈q^-1(R^I). Be warned that the coset q considered in this example has various special properties, see Remark <ref>. The examples above can easily be verified by computer (all the equations are R^stu-linear so one need only check the result for f, g in a basis). They can also all be verified using (<ref>) and (<ref>), but this is a very tricky exercise. Doing this exercise may be very instructive for the reader, and emphasizes the difficult and subtlety in these formulae, so we encourage it, and provide some helping hands. We begin with a few helpful general properties of Demazure operators, which hold in general when m_st = 3 and m_su = 2. We have _s(s(f)) = - _s(f), s _s(f) = _s(f), _s(_s(f)) = 0, st_s(f) = _t(stf), s _t(sf) = t _s(tf), s _u(f) = _u(sf), α_s _s(f) = f - sf, _st(sf) + _ts(f) = t _st(f). Note that α_s_i = x_i - x_i+1 above. The reader can verify these relations directly from (<ref>). Most of these formulae are well-known. Meanwhile, we have not seen (<ref>) before; we only use it in Example <ref> below. With these relations in hand, one need not refer to the original definition (<ref>) again, and need only use the twisted Leibniz rule. Here are some example computations using (<ref>) and (<ref>). _s(t _s(f)) = - _s(st_s(f)) = - _s _t(stf) = _s _t(tstf). _s(t _s(f)) = _s(ts _s(f)) = ts_t_s(f). Let us consider example <ref>. It helps to observe that _st(u(f)) ∈ R^st, _stu(f) ∈ R^stu, under the assumption that f ∈ R^st. One way to see the equality (<ref>) is to expand both sides using the twisted Leibniz rule. Since g ∈ R^st, any terms containing _st(g) or _s(g) or _t(g) or _su(g) are zero. Now compare the coefficients of g, _u(g) and _tu(g) respectively. On the left the coefficient of g is _stu(f), while on the right the coefficient of g is _tust _u(f)=_ts_u tu_u(f)=_tstu_tu(f)=st_s u_tu(f)=stu_stu(f)=_stu(f). We have applied (<ref>) repeatedly and used that _stu(f) ∈ R^stu. Thus the coefficients of g are the same on both sides of (<ref>). We leave the other coefficients to the reader. Example <ref> is the hardest. One should first prove the following statements under the critical assumption that f, g ∈ R^su. _tsut(f) ∈ R^stu, _ts(ut(f)) ∈ R^st, _su(tsu(_t(f))) = tsut(_sut(f)), t(_sut(f)) = tsu(_sut(f)), tsu[_sut(f) - t _sut(f)] = tsu(α_t _tust(f)) = (α_s + α_t + α_u) _tsut(f), _tsu(tf) = _tsu(-α_t _t(f)) = -(α_s + α_t + α_u) _tsut(f). We leave these verifications, and the deduction of (<ref>) therefrom, to the ambitious reader. §.§ Leibniz rules for permutations A natural question is raised: for which double cosets p does one expect an equality of the form (<ref>) to hold? Given the discussion in Example <ref>, one might ask instead: for which double cosets q does (<ref>) hold under the alternate assumption that f ∈q^-1(R^I)? Note that f ∈q^-1(R^I) is equivalent to f ∈ R^J for atomic cosets, and also for more general cosets called core cosets. We believe these are interesting questions, even though these more general Leibniz rules currently lack a clear connection to the theory of singular Soergel bimodules. A special case would be when I = J =, so that double cosets are in bijection with elements w ∈ W; all such cosets are core. We start with the following well-known lemma. Let w∈ W and f,g ∈ R. 
Let w=s_1… s_n be a reduced expression and for ={0,1}^n we define the element w^=s_1^e_1… s_n^e_n. For ={0,1}^n let θ_i^= s_i if e_i=1 ∂_i if e_i=0 and Θ^(f)=θ_1^∘θ_2^∘…∘θ_n^(f). Then we have _w(fg) = ∑_x ≤ w T'_x(f) _x(g) where T'_x(f)=∑_| w^ =xΘ^(f). As a special case, T_w'(f) = w(f). This is just an iteration of the twisted Leibniz rule. To summarize, we obtain a generalized Leibniz rule of the form _w(fg) = w(f) _w(g) + ∑_x < w T'_x(f) _x(g), for operators T'_x R → R (depending on w). The formula for T'_x one derives in this way is seemingly dependent on the choice of reduced expression for w, though the operator only depends[Abstractly, the nilHecke algebra is the subalgebra of (R) generated by R (i.e. multiplication by polynomials) and by Demazure operators. It is well known that the operators {_w} form a basis of the nilHecke algebra as a free left R-module. Letting m_f denote multiplication by f, (<ref>) can be viewed as an equality _w ∘ m_f = m_w(f)∘_w + ∑ m_T'_x(f)∘_x in the nilHecke algebra, which rewrites _w ∘ m_f in this basis. Consequently, the coefficients T'_x(f) in this linear combination depend only on w and f.] on w. In practice, confirming the independence of reduced expression can be quite subtle. We are unaware of a formula for T'_x which is obviously independent of the choice of reduced expression. Meanwhile, one can also deduce an equality of the form _w(fg) = w(f) _w(g) + ∑_x < w_x(T_x(f) · g), for some operators T_x R → R. We are unaware of any previous study of the operators T_x and the formula (<ref>). Later in the paper, we also discuss an atomic Leibniz rule similar to (<ref>) rather than (<ref>). Here are some examples. Let s = s_1 and t = s_2. When w = ts, applying (<ref>) twice gives _ts(fg) = ts(f) _ts(g) + _t(sf) _s(g) + t_s(f) _t(g) + _ts(f) g. An equivalent formula is _ts(fg) = ts(f) _ts(g) + _t(_s(f) · g) + _s(s_t(sf) · g) + _st(sf) · g. By applying (<ref>) to the second and third terms on the RHS of equation (<ref>), and with a little help from (<ref>), one obtains (<ref>). With notation as above, when w = sts, applying (<ref>) thrice gives _sts(fg) = sts(f) _sts(g) + _s(tsf) _ts(g) + _t(stf) _st(g) + _s(t_s(f)) _t(g) + _t(s_t(f)) _s(g) + _sts(f) g. More honestly, applying (<ref>) thrice gives the above except that the coefficient of _s(g) is _st(sf) + s _ts(f). By applying s to (<ref>), one obtains _st(sf) + s _ts(f) = st _st(f) = _t(st _t(f)) = _t (s _t(f)). This is how one deduces (<ref>). An equivalent formula is _sts(fg) = sts(f) _sts(g) + _ts(_t(f) · g) + _st(_s(f) · g) - _t(_st(f) · g) - _s(_ts(f) · g) + _sts(f) · g. To verify that (<ref>) and (<ref>) agree, apply (<ref>) to the term _ts(_t(f) · g), and apply (<ref>) with s and t swapped to _st(_s(f) · g). After some additional massaging using (<ref>) and (<ref>), one will recover (<ref>). § ATOMIC LEIBNIZ RULES §.§ Realizations We fix a Coxeter system (W,S). We let e denote the identity element of W. Recall the definition of a realization from <cit.>. A realization of (W,S) over is the data (,V,Δ, Δ^∨) of a commutative ring , a free finite-rank -module V, a set Δ = {α_s}_s ∈ S of simple roots inside V, and a set Δ^∨ = {α_s^∨}_s ∈ S of simple coroots inside _(V,), satisfying the following properties. One has α_s^∨(α_s) = 2 for all s ∈ S. The formula s(v) := v - α_s^∨(v) ·α_s defines an action of W on V. Also, the technical condition <cit.> holds, which is redundant for most base rings . For short, we often refer to the data of a realization simply by reference to the W-representation V. 
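As a concrete illustration of this definition, the following numpy sketch (variable names are ours) builds the simple roots, coroots, and reflection action for the permutation realization recalled in the next example, and checks that each coroot pairs to 2 with its root, that each reflection is an involution, and that one braid relation holds.

```python
import numpy as np

n = 4                                              # permutation realization of the symmetric group on n letters
E = np.eye(n)
alpha  = [E[i] - E[i + 1] for i in range(n - 1)]   # simple roots  alpha_i = x_i - x_{i+1}
coroot = [E[i] - E[i + 1] for i in range(n - 1)]   # simple coroots alpha_i^vee = x_i^* - x_{i+1}^*

def refl(i):
    # matrix of s_i acting on V via  s(v) = v - alpha_i^vee(v) * alpha_i
    return np.eye(n) - np.outer(alpha[i], coroot[i])

for i in range(n - 1):
    assert coroot[i] @ alpha[i] == 2                     # alpha_s^vee(alpha_s) = 2
    assert np.allclose(refl(i) @ refl(i), np.eye(n))     # each reflection squares to the identity
m = refl(0) @ refl(1)
assert np.allclose(np.linalg.matrix_power(m, 3), np.eye(n))   # (s_1 s_2)^3 = identity
```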
The permutation realization of _n over has V = ^n with basis {x_i}_i=1^n and with dual bases {x_i^*}. For 1 ≤ i ≤ n-1 one sets α_i = x_i - x_i+1 and α_i^∨ = x_i^* - x_i+1^*. For a Weyl group W, the root realization of (W,S) over is the free -module with basis Δ. One defines Δ^∨ so that the pairings α_s^∨(α_t) agree with the usual Cartan matrix of W. Let (,V,Δ,Δ^∨) be a realization of (W,S), and I ⊂ S. Then (,V,Δ_I,Δ_I^∨) is also a realization of (W_I,I), the restriction of the realization to a parabolic subgroup. Here Δ_I = {α_s}_s ∈ I, and similarly for Δ_I^∨. Given a realization, let R be the polynomial ring whose linear terms are V. We can associate Demazure operators _s R → R for s ∈ S, which agree with α_s^∨ on V ⊂ R, and are extended by the twisted Leibniz rule. For details, see <cit.>. For each finitary subset I ⊂ S, we also consider the subring R^I of W_I-invariants in R. The ring R^I is graded, and all R^I-modules will be graded, but we will not keep track of grading shifts in this paper as they will play no significant role. The background on this material in <cit.> should be sufficient. A (balanced) Frobenius realization is a realization satisfying the following properties, see <cit.> for definitions. * It is balanced. * It satisfies generalized Demazure surjectivity. * It is faithful when restricted to each finite parabolic subgroup W_I. We assume tacitly throughout this paper that we work with a Frobenius realization. The main implication of these assumptions is that, when I ⊂ S is finitary, the Demazure operator _w_I R → R^I is well-defined and equips the ring extension R^I ⊂ R with the structure of a Frobenius extension. The left and right redundancy sets of an (I,J)-coset p are defined and denoted as (p) = I ∩p J p^-1, (p) = p^-1 I p∩ J. An (I,J)-coset p is a core coset if I = (p) and J = (p). For any (I,J)-coset q, in <cit.> we define a Demazure operator _q : R^J → R^I. By definition, _q is the restriction of the ordinary Demazure operator _q w_J^-1 R → R to the subring R^J. After restriction, the image is contained in R^I, see <cit.>. Note that q w_J^-1 = q if and only if q is a core coset. Some results from <cit.> require further that the realization is faithful, rather than just faithful upon restriction to each finite parabolic subgroup. In particular, the set {_p} as p ranges over (I,J)-cosets need not be linearly independent when the realization is not faithful. A multistep (I,J)-expression is a sequence of finitary subsets I_∙ = [[I= I_0 ⊂ K_1 ⊃ I_1 ⊂… K_m ⊃ I_m = J]]. The definition of a reduced multistep expression, and of the (I,J)-coset that it expresses, can be found in <cit.>. When I_∙ is a (reduced) expression which expresses p, we write p I_∙. As in <cit.>, for x, y ∈ W we write x.y for a reduced composition, where ℓ(xy) = ℓ(x) + ℓ(y). We also use this notation for the reduced composition of reduced expressions, or the reduced composition of double cosets, see <cit.> for more details. Demazure operators compose well over reduced compositions: one has _p.q = _p ∘_q, as proven in <cit.>. §.§ Precise statement of atomic Leibniz rules Note that w_J is an involution, so w_J = w_J^-1. We write q w_J^-1 above to emphasize that q = (q w_J^-1) . w_J. Suppose M is finitary, s ∈ M, and t = w_M s w_M. Let I = ŝ := M ∖ s, and J = t̂ := M ∖ t. Let be the (atomic) (I,J)-coset containing w_M. 
We say a (rightward) atomic Leibniz rule holds for if there exist R^M-linear operators T^_q from R^J to R^(q) for each (I,J)-coset q <, such that for any f, g ∈ R^J we have _(f · g) = (f) _(g) + ∑_q < _q w_J^-1(T^_q(f) · g). We encourage the reader to confirm that T^_q(f) ∈ R^(q) in the examples of <ref>. We continue to write T_q instead of T^_q when is understood. We say “an atomic Leibniz rule” rather than “the atomic Leibniz rule” because we are defining a prototype for a kind of formula. If one specifies operators T_q such that the formula holds, then one has produced “the” atomic Leibniz rule for that coset (indeed, we prove in <Ref> that such operators are unique for certain realizations). The difference between (<ref>) and (<ref>) is subtle: we have written _q w_J^-1 instead of _q. The difference between _q w_J^-1 and _q is only a matter of the domain and codomain of the functions: the former is a function R → R, while the latter is its restriction to a function R^J → R^I. Meanwhile, T_q(f) lives in R^(q). The inclusion R^J ⊂ R^(q) is proper unless q is core. It is therefore inappropriate to apply _q to T_q(f) · g. Having altered notation so that the domain of the operator is appropriate, we still need to worry about the codomain, which we address in the following lemma. With notation as in Definition <ref>, we have _q w_J^-1(T_q(f) · g) ∈ R^I. Recall from <cit.> that any (I,J)-coset q has a reduced expression of the form q [[I ⊃(q)]] . q^ . [[(q) ⊂ J]]. Let z be the (I,(q))-coset with reduced expression z [[I ⊃(q)]] . q^. Since (<ref>) is reduced, by <cit.>, we have q = z.(w_(q)^-1 w_J), so that q w_J^-1 = z w_(q)^-1. Consequently, the same operator _q w_J^-1 R → R restricts to both _q R^J → R^I and _z R^(q)→ R^I. In particular, this operator sends R^(q) to R^I. Further elaboration will be helpful in subsequent chapters. By <cit.>, the reduced expression (<ref>) implies that the map _q is a composition of three Demazure operators. Recall that _[[I ⊃(q)]] is the Frobenius trace map R^(q)→ R^I often denoted as ^(q)_I. Recall also that _[[(q) ⊂ J]] is the inclusion map R^J ⊂ R^(q). We denote this inclusion map ι^(q)_J below. So we have _q = ^(q)_I∘_q^∘ι^(q)_J. By (<ref>) we have _z= ^(q)_I∘_q^, which agrees with the restriction of _q w_J^-1 to R^(q). Thus one has the following reformulation of (<ref>): _(f · g) = (f) _(g) + ∑_q < ^(q)_I _q^ (T_q(f) ·ι^(q)_J(g)). Now the polynomial T_q(f) appears more appropriately in the “middle” of this factorization of _q. This discussion of the “placement” of the polynomial T_q(f) will play a role in our diagrammatic proof of polynomial forcing. We are now prepared to discuss another version of the atomic Leibniz rule, using the factorization (<ref>). It should not be obvious that these two atomic Leibniz rules are related, though the equivalence with polynomial forcing will shed light on this issue. Use the notation from Definition <ref> and from (<ref>). We say that a leftward atomic Leibniz rule holds for if there exist R^M-linear operators T'_q from R^J to R^(q) for each (I,J)-coset q <, such that for any f, g ∈ R^J we have _(f · g) = (f) _(g) + ∑_q < ^(q)_I(T'_q(f) ·_q^(ι_J^(q)(g))). The fact that T_q(f) lives in R^(q) and not in R^J is easy to overlook, but overlooking it is dangerous. We have attempted to prove atomic Leibniz-style rules for more general families of cosets (core cosets, cosets whose core is atomic, etcetera). 
Each time what prevents one from bootstrapping from the atomic case to more general cases is the fact that T_q(f) does not live in R^J. The generalization in Example <ref> has the special feature that the lower cosets are all core, so that their right redundancy equals J. (It also has the special feature that q^ is atomic.) §.§ Changing the realization We argue that the atomic Leibniz rule for some realizations implies the atomic Leibniz rule for others. Given a realization, one can obtain another realization by applying base change (-) _' to V, and choosing new roots and coroots in the natural way. We call this a specialization. Here are two other common ways to alter the realization. Let (V,Δ,Δ^∨) be a realization of (W,S) over . Let N be a free -module acted on trivially by W. Then (V ⊕ N, Δ⊕ 0,Δ^∨⊕ 0) is a realization, called a W-invariant enlargement of the original. More precisely, the new roots are the image of the old roots under the inclusion map, and the new coroots kill the summand N. Let (V,Δ,Δ^∨) be a realization of (W,S) over . Suppose one has a decomposition V = X ⊕ Y of free -modules, such that W acts trivially on X, though W need not preserve Y. Note that the coroots necessarily annihilate X. Then (Y, Δ,Δ^∨_Y) is a realization, called a W-invariant quotient of the original. Here, we identify Y as the quotient V/X, and Δ represents the image of Δ under the quotient map. The functionals Δ^∨ kill X, so they descend to functionals Δ^∨_Y on Y. We also make the technical assumption[This assumption is required for the W-invariant quotient to satisfy Demazure surjectivity.] that α_s induces a surjective map Y^* →. Let (W,S) have type A_n-1, with simple reflections s_i for 1 ≤ i ≤ n. Let V be the free -module spanned by {x_i}_i=1^n and δ. Let {x_i^*}∪{δ^*} denote the dual basis in _(V,). With indices considered modulo n, let α_i = x_i - x_i+1 + δ, and α_i^∨ = x_i^* - x_i+1^*. This is a realization of (W,S) called the affine permutation realization. Note that ∑_i=0^n-1α_i = n δ, which is W-invariant. Let X be the span of δ, and Y be the span of {x_i}_i=1^n. Note that W does not preserve Y, since the roots are not contained in Y. There is a valid W-invariant quotient (Y,Δ, Δ^∨_Y) which agrees, upon restriction to the parabolic subgroup _n generated by {s_i}_i=1^n-1, with the permutation representation. Continuing the previous example, let y_i = x_i - i δ. Then we can also view V as having basis {y_i}_i=1^n ∪{δ}, and α_i = y_i - y_i+1 for i n. Upon restriction to the parabolic subgroup _n, we see that V is isomorphic to the W-invariant enlargement of the permutation representation of _n (with basis {y_i}) by the W-invariant span of δ. Indeed, W has n distinct maximal parabolic subgroups isomorphic to _n as groups. A similar construction will show that the restriction of V to any maximal parabolic subgroup (a copy of _n) will be isomorphic to an invariant enlargement of its permutation representation. If a rightward (resp. leftward) atomic Leibniz rule holds for a Frobenius realization, then it also holds for specializations, W-invariant enlargements, and W-invariant quotients. Let R be the ring associated to the original realization, and R_ be the realization associated to the specialization, enlargement, or quotient. All three cases are united by the fact that R_ is a tensor product of the form R _A B, where A ⊂ R is a subring on which W acts trivially, and B is a ring on which W acts trivially. 
For specializations we have R_ = R _'; for enlargements we have R_ = R _ R_N, where R_N is the polynomial ring of N; for quotients we have R_ = R _R_X, where R_X is the polynomial ring of X, and is its quotient by the ideal of positive degree elements. For w ∈ W, its action on R_ is given by w 𝕀. The roots in R_ are given by α_s 1, and the Demazure operators ^_s on R_ have the form _s 𝕀. The important point in all three cases is that for each I ⊂ S finitary we have R_^I = R^I _A B. We now prove this somewhat subtle point. There is an obvious inclusion R^I _A B ⊂ R_^I, so we need only show the other inclusion. It is straightforward to verify that the new realization satisfies generalized Demazure surjectivity. A consequence is that the typical properties of Demazure operators are satisfied. For example, the kernel and the image of ∂^_s are both equal to R_^s, and ∂^_s is R^s_-linear. It follows that ^_I is also R_^I-linear. Suppose that g ∈ R_^I, and write g = ∑ f_i b_i. Choose some P ∈ R with _I(P) = 1, which exists by generalized Demazure surjectivity. Then ^_I(P 1) = 1 1 in R_. Thus g = g ^_I(P 1) = ^_I(g · (P 1)) = ∑_I(f_i · P) b_i. Thus g ∈ R^I _A B. The rest of the proof is straightforward. Fix an atomic coset . For each q <, given operators T_q for the original realization satisfying (<ref>), we define T_q, := T_q 𝕀. By linearity, we need only check (<ref>) for R_ on elements in R_^J of the form f b_1 and g b_2 for f, g ∈ R^J. It is easy to verify (<ref>) for R_ on such elements, since all operators (like or _q w_J^-1) are applied only to the first tensor factor, where we can use the atomic Leibniz rule from R. We conclude by noting that T_q, has the appropriate codomain as well. We do not claim that any statements about the unicity of the operators T_q will extend from a realization to its specializations, enlargements, or quotients. Let (,V,Δ,Δ^∨) be a realization of (W,S). If one can prove an atomic Leibniz rule for the restriction of V to W_M, for all (maximal) finitary subsets M ⊂ S, then an atomic Leibniz rule holds for W. Every atomic coset in W lives within W_M for some finitary M (which lives within a maximal finitary subset), and the same atomic Leibniz rule which works for W_M will work for W. Suppose one can prove an atomic Leibniz rule for the permutation realization of _n over . Then by enlargement, one obtains an atomic Leibniz rule for the affine permutation realization restricted to any finite parabolic subgroup, see Example <ref>. By the previous lemma, an atomic Leibniz rule holds for the affine permutation realization of the affine Weyl group of type A_n-1. § LOWER TERMS In this section we give an explicit description of the ideal of lower terms for an atomic coset using the technology of singular light leaves. §.§ Definition of lower terms Let I_∙ = [[I= I_0 ⊂ K_1 ⊃ I_1 ⊂… K_m ⊃ I_m = J]] be a multistep (I,J) expression. To this expression we associate a (singular) Bott-Samelson bimodule (I_∙) := R^I_0_R^K_1 R^I_1_R^K_2⋯_R^K_m R^I_m. This is an (R^I, R^J)-bimodule. The collection of all Bott-Samelson bimodules is closed under tensor product, and forms (the set of objects in) a full sub-2-category of the 2-category of bimodules. This sub-2-category is denoted . For two (R^I, R^J)-bimodules B and B', (B,B') denotes the space of bimodule maps. Moreover, (B,B') is itself an (R^I, R^J)-bimodule in the usual way. Inside any linear category, given a collection of objects, their identity maps generate a two-sided ideal. 
This ideal consists of all morphisms which factor through one of those objects, and linear combinations thereof. In the context of (R^I, R^J)-bimodules, the actions of R^I and R^J commute with any morphism, and thus preserve the factorization of morphisms. Hence the morphisms within any such ideal form a sub-bimodule of the original Hom space. Let p be an (I,J)-coset. Consider the set of reduced expressions M_∙ for any (I,J)-coset q with q < p. Let _< p denote the ideal in the category of (R^I, R^J)-bimodules generated by the identity maps of (M_∙) for such expressions. Then _<p is a two-sided ideal, the ideal of lower terms relative to p. The ideal _≤ p is defined similarly. So _< p(B,B') is a subset of (B,B'), and is a sub-bimodule for (R^I, R^J). We write _< p(B) instead of _<p(B,B) ⊂(B). We now focus on the case of atomic cosets. We use the letter to denote an atomic coset and let [[I⊂ M⊃ J]] denote the unique reduced expression of . We let B_ := ([[I ⊂ M ⊃ J]]) = R^I _R^M R^J. Because B_ is generated by 1 1 as a bimodule, any endomorphism is determined by where it sends this element. Thus (B_)≅ R^I _R^M R^J as (R^I, R^J)-bimodules, via the operations of left and right multiplication. Hence (B_) ≅ B_ as (R^I, R^J)-bimodules[We have ignored gradings in this paper. Using traditional grading conventions for Bott-Samelson bimodules, (B_) and B_ are only isomorphic up to shift. The identity map of (B_) is in degree zero, while 1 1 ∈ B_ is not.]. It is easy to deduce that B_ is indecomposable (when is a domain) since there are no non-trivial idempotents in (B_)≅ B_. §.§ Atomic double leaves The goal of this section is to describe a large family of morphisms in (B_) called double leaves, most of which are in _< (B_) by construction. We use the diagrammatic technology originally found in <cit.> and developed further in <cit.>. We assume a Frobenius realization, see Definition <ref>. In particular, the ring inclusions R^I ⊂ R^J are Frobenius extensions. Under these assumptions, a diagrammatic 2-category is constructed in <cit.>, and it comes equipped with a 2-functor to . This 2-functor is essentially surjective, but is not expected to be an equivalence; the category is missing a number of relations. Double leaves are to be constructed either as morphisms in , or as their images in , depending on the context. The objects in are indexed not by multistep expressions but by singlestep expressions. An (I,J) singlestep expression is a sequence I_∙ = [I = I_0, I_1, …, I_d = J] where each I_i is a finitary subset of S, and each I_i and I_i+1 differ by the addition or removal of a single simple reflection. We use single brackets for singlestep expressions, and double brackets for multistep expressions. Throughout this section, we fix I⊂ M=Is⊂ S finitary, and let t = w_M s w_M and J=M∖ t, so that [I,M,J] is an atomic coset. We also fix the (I,M)-coset n=W_IeW_M. The endomorphisms of B_ will be described using the double leaves basis from <cit.>. The construction depends on the following definition. Let I_∙=[I_0,⋯,I_d] be an expression. Then a sequence t_∙ = [t_0,⋯, t_d] of double cosets, where t_i is an (I_0,I_i)-coset, is called a path subordinate to I_∙ if * t_0=W_I_0eW_I_0 is the identity coset; * t_i⊂ t_i+1 when I_i⊂ I_i+1; * t_i⊃ t_i+1 when I_i⊃ I_i+1; where 0≤ i≤ d-1. We write in this case t_∙⊂ I_∙ and call t_d the terminus of t_∙. In the case of an atomic expression, we have a simple description of subordinate paths. Each path subordinate to [I,M,J] (see Definition <ref>) is of the form [t,n,q] where t=W_IeW_I, n=W_IeW_M, and q≤.
The coset t is as stated by definition. Since I⊂ M we have n = tW_M=W_IeW_M. Finally, since M⊃ J, the coset q is a subset of n as a set. Since is maximal among the (I,J)-cosets in W_IeW_M, the latter condition on q is equivalent to q≤. In other words, for each coset q≤ there is exactly one subordinate path [t,n,q]⊂ [I,M,J] with terminus q, and there are no other subordinate paths. §.§.§ Elementary light leaves for atomic Grassmannian pairs By definition of atomic, =w_M. Then, for an (I,J)-coset q, the condition q≤ is equivalent (see <cit.>) to q≤ w_M, which in turn is equivalent to q⊆ n=W_M. For an (I,J)-coset q contained in W_M, the pair q⊂ n is Grassmannian in the sense of <cit.>. Associated to such a pair, <cit.> constructs a distinguished map called an elementary light leaf. The map (and codomain of the map) depends on a choice we make now: we fix a reduced expression X_q of the form X_q = [[I⊃(q)]]∘ X_q^∘[[(q)⊂ J]] where X_q^ is a reduced expression of q^. Let q be an (I,J)-coset contained in n. The elementary light leaf associated to [n,q] (and X_q) is the (R^I,R^J)-bimodule morphism ([n,q]):([I,M,J])→(X_q) sending the generator 1⊗ 1∈ R^I⊗_R^M R^J=([I,M,J]) to the element 1^⊗:= 1⊗⋯⊗ 1∈(X_q). Equivalently, ([n,q]) is defined by the diagrammatic construction in <cit.> (see also <cit.>). We refer to <cit.> and <cit.> for a diagrammatic exhibition of morphisms between Bott-Samelson bimodules. The morphism ([n,q]) is determined by the condition that its diagram consists only of counterclockwise cups and right-facing crossings, as in the following diagram. [Figure: the elementary light leaf ([n,q]), with boundary regions labeled I, M, J along the bottom and the redundancies of q above.] In our examples, we color the simple reflection s in strawberry, and t in teal. Sometimes s = t, which will force us to change our convention. In type A the expression X_q^ takes a simple form, and thus ([n,q]) could be described more explicitly. Let W_M=_a+b be a symmetric group, for some a≠ b, and let I= be such that W_I=_b×_a⊂_a+b. Then J= is such that W_J=_a×_b⊂_a+b. As will be explained in Section <ref> (Equation (<ref>)), each (I,J)-coset q in W_M has a unique reduced expression X_q of the form (<ref>). * If q= W_I e W_J then we have q X_q = [[ŝ⊃ŝt̂]]∘ [ŝt̂]∘ [[ŝt̂⊂t̂]]=[ŝ-+]. Here we have ([n,q]) given by the diagram rightcross [figure omitted]. * If q = W_I w_M W_J then we have q= X_q =[I,M,J]. Here we have ([n,q])=𝕀_([I,M,J]). * Otherwise, we have q X_q = [[ŝ⊃ŝk̂ℓ̂]]∘ [ŝk̂ℓ̂⊂k̂ℓ̂⊃t̂k̂ℓ̂]∘[[t̂k̂ℓ̂⊂t̂]] for distinct s,k,ℓ,t∈ M. Here we have ([n,q]) given by the diagram qELL1 [figure omitted]. Here is a non-type A example. Let (W,S) be of type E_6 where S is indexed as in the Dynkin diagram [Dynkin diagram of type E_6: vertices 1, 3, 4, 5, 6 in a row, with vertex 2 attached to vertex 4]. Let M=S and s=3. Then w_M3w_M=5 and thus [3̂+3-5] = [3̂,M,5̂] is an atom.
For the (3̂,5̂)-coset q< with reduced expression q X_q= [[3̂⊃{4,6}]]∘ [+3-4+5+2+1-6-2-3+4-5] ∘ [[{1,4}⊂5̂]], the elementary light leaf ([n,q]) is 2pt 1 [ ] at 6 83 2 [ ] at 18 83 5 [ ] at 30 83 3 [ ] at 46 83 4 [ ] at 58 83 5 [ ] at 70 83 2 [ ] at 82 83 1 [ ] at 94 83 6 [ ] at 106 83 2 [ ] at 118 83 3 [ ] at 130 83 4 [ ] at 142 83 5 [ ] at 154 83 3 [ ] at 170 83 2 [ ] at 182 83 6 [ ] at 194 83 1qELL2 §.§.§ Atomic double leaves There is a contravariant (but monoidally-covariant) “duality” functor from to itself defined as follows: * it preserves objects and 1-morphisms, * on 2-morphisms, it flips each diagram upside-down and reverses all the orientations. This functor is an involution. Given q≤ and b∈ R^(q), the associated (right-sprinkled) double leaf _r(q,b) is the composition B_(X_q)(X_q) B_. The middle map in (<ref>) uses that X_q has the form (<ref>): the map is multiplication by the element 1^⊗⊗ b⊗ 1∈([[I⊃(q)]]∘ X_q^)⊗_R^(q) R^(q)⊗_R^JR^J. In diagrams, we have _r(q,b)= 1pt b [ ] at 245 64 DLLat Given q≤ and b∈ R^(q), the associated left-sprinkled double leaf _l(q,b) is the composition B_(X_q)(X_q) B_, whose diagram is 1pt b [ ] at 60 64 DLLat . The maps provided here (when b ranges over a basis for R^(q) or R^(q)) are the same as the “double leaves basis” from <cit.>. This is verified in the following remark, intended for a reader familiar with <cit.>. Let us verify that _l(q,b)=(q,([p,n,q],1),([p,n,q],1),b), where p is the identity (I,I)-coset, by following the stages in <cit.>. First, we construct (q,([p,n,q],1)). The single step light leaf for the first step [p,n] is the identity map since [p,n] is reduced. Since the left redundancy for p and n are the same, In the second step, we have the coset pair [n,q] which is already a Grassmannian pair, thus the single step light leaf is the elementary light leaf ([n,q]). Then we choose X_q=Y_q and take the dual map for the upside-down light leaf from (X_q) to B_. Altogether we obtain the double leaf of the form (<ref>). No rex moves were used at any stage of the process. Moreover, for each q≤, there is one subordinate path with terminus q, namely [p,n,q]. It follow that the morphisms (q,([p,n,q],1),([p,n,q],1),b) form a double leaves basis in the sense of <cit.>. We know that (B_) is an (R^I, R^J)-bimodule, so it is natural to ask how the actions of R^I and R^J interact with the bases presented in Propositions <ref> and <ref>. For g ∈ R^J we claim that _r(q,b) · g = _r(q,b · g). Consider the diagram in Definition <ref>, and right-multiply by g ∈ R^J. Since g is also in R^(q), it can be slid from the right side to the region where b lives. However, the left action of f ∈ R^I is more mysterious. For fixed q, it does not preserve the span of {_r(q,b) | b ∈ R^(q)}. Indeed, the comparison between the left action and the right action is controlled by polynomial forcing for (X_q), which involves lower terms. Similarly, for f ∈ R^I, the left action on left-sprinkled double leaves is straightforward, f ·_l(q,b) = _l(q,f · b), whereas the right action of R^J is mysterious. §.§.§ Evaluation of double leaves The following crucial computation links double leaves with the description (B_) ≅ R^I _R^M R^J. Let Δ^J_M,(1) and Δ^J_M,(2) be dual bases of R^J over R^M, where we use Sweedler notation. The double leaf _r(q,b) coincides with multiplication by the element _q w_J^-1(b ·Δ^J_M,(1)) Δ^J_M,(2). This can also be written as _I^(q)_q^(b ·ι_J^(q)Δ^J_M,(1))Δ^J_M,(2). The proof is immediate from <cit.>. It is proven exactly as <cit.>. 
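For a concrete illustration of the proposition above in the smallest case, take M={s} with W_M the symmetric group on two letters acting on R=ℤ[x_1,x_2] by swapping the variables (the permutation realization over ℤ). Then I=J=∅, R^I=R^J=R, and the bimodule attached to the atomic coset is the classical B_s = R⊗_R^s R. One valid choice of coproduct element is Δ^J_M = x_1⊗ 1 - 1⊗ x_2: since ∂_s(x_1)=1, ∂_s(x_1x_2)=0, ∂_s(1)=0 and ∂_s(x_2)=-1, one checks that ∂_s(f· x_1)· 1 - ∂_s(f)· x_2 = f for every f∈ R. There are exactly two (I,J)-cosets below or equal to the atomic one, namely the atomic coset itself and the identity coset, both with redundancy ∅. For the atomic coset with b=1 the proposition yields multiplication by ∂_s(x_1)⊗ 1 - ∂_s(1)⊗ x_2 = 1⊗ 1, that is, the identity of B_s; for the identity coset the operator ∂_{qw_J^{-1}} is just the identity of R, so the double leaf with polynomial b∈ R is multiplication by bx_1⊗ 1 - b⊗ x_2 = b·(x_1⊗ 1 - 1⊗ x_2).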
A similar computation involving left-sprinkled double leaves gives the following. The double leaf _l(q,b) coincides with multiplication by the element _I^(q)(b ·_q^(ι_J^(q)Δ^J_M,(1)))Δ^J_M,(2). If q≤ is the minimal (I,J)-coset, namely q= W_IeW_J, then we have two cases. * When t:=w_M w_M≠, we have q [I,K,J]=[I - t+] for K=I∩ J. In this case, both left and right redundancies are K, and for b∈ R^K the double leaf _r(q,b)=_l(q,b) is the left diagram in the equality 2pt b [ ] at 31 32 R2hard1_colorswap = ^K_I(b·Δ^J_M,(1))upsdowntΔ^J_M,(2) . Thus we have _r(q,b)=_l(q,b)= ^K_I(b·Δ^J_M,(1))Δ^J_M,(2) (see <cit.>). * When w_M w_M=, we have q [I]. In this case, we have (q)=I=(q), and for b∈ R^I the double leaf has a capcup diagram 2pt b [ ] at 15 32 cupcap. Thus we have _r(q,b)=_l(q,b) = b·Δ_M^I= Δ_M^I· b. For an atomic coset [I,M,J], let _< denote the -linear subspace of (B_) spanned by right-sprinkled double leaves factoring through q <. More explicitly we have _<:= _q < {_r(q,b) | b ∈ R^(q)} = _q < {_qw_J^-1 (R^(q)·Δ_M,(1)^J)Δ_M,(2)^J }. We have _<⊂_< (B_). By construction, every double leaf associated to q < factors through a reduced expression for q, and thus lives in _< (B_). We also note a consequence of Remark <ref>. For each g ∈ R^J and b ∈ R^(q) we have _qw_J^-1 (b ·Δ_M,(1)^J)Δ_M,(2)^J · g = _qw_J^-1 (b · g ·Δ_M,(1)^J)Δ_M,(2)^J. In particular, _< is a right R^J-module. This follows from (<ref>) and Lemma <ref>. §.§ Double leaves and lower terms: part I The main result of <cit.> is that double leaves form a basis for morphisms between Bott-Samelson bimodules. However, <cit.> relies on Williamson's theory of standard filtrations, which relies on several assumptions originally made by Soergel. We call a realization a Soergel-Williamson realization or an SW-realization for short, if it is a Frobenius realization (see Definition <ref>), and it also satisfies the following assumptions. * The realization is reflection faithful, i.e. it is faithful, and the reflections in W are exactly those elements that fix a codimension-one subspace. * The ring is an infinite field of characteristic not equal to 2. Abe <cit.> has recently developed a theory of singular Soergel bimodules that works for Frobenius realizations, without the extra restrictions of an SW-realization. One expects that the results of <cit.> can be straightforwardly generalized to Abe's setting. For simplicity and because of the current state of the literature, we will work with Williamson's category of bimodules. Assume an SW-realization. Let 𝔹_q be a -basis of R^(q), for each q≤. Then {_r(q,b)}_b∈𝔹_q gives a basis of _≤ q(B_)/_<q(B_) over . In particular, {_r(q,b) | q≤, b∈𝔹_q} is a -basis of (B_), and the subset indexed by q < is a basis for _< (B_). This is a special case of <cit.>, as confirmed in Remark <ref>. Assume an SW-realization. We have _< = _<(B_) = ⊕_q<_qw_J^-1 (R^(q)·Δ_M,(1)^J)Δ_M,(2)^J. In particular, _< is an (R^I, R^J)-bimodule. As _< is the span of double leaves factoring through q <, the first equality follows from Proposition <ref>. The second equality follows from the linear independence of double leaves. Since _<(B_) is an (R^I, R^J)-bimodule, so is _<. Similarly, double leaves provides a left-sprinkled basis. Assume an SW-realization. Let 𝔹_q be a -basis of R^(q), for each q≤. Then {_l(q,b)}_b∈𝔹_q gives a basis of _≤ q(B_)/_<q(B_) over . In particular, {_l(q,b) | q≤, b∈𝔹_q} is a -basis of (B_), and the subset indexed by q < is a basis for _< (B_). 
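In the ℤ[x_1,x_2] toy example above, the only coset strictly below the atomic one is the identity coset, with redundancy ∅, so the definition of _< gives _< = {multiplication by bx_1⊗ 1 - b⊗ x_2 : b∈ R}, i.e. the left R-span of multiplication by the coproduct element x_1⊗ 1 - 1⊗ x_2. That this span is all of the lower terms in the endomorphisms of B_s over ℤ (and not just over a field) is an instance of the results of the next subsection.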
§.§ Double leaves and lower terms: part II An almost-SW realization is a Frobenius realization, together with the following assumptions. * The ring is a domain with fraction field . * After base change to , the result is an SW-realization. * Finitely-generated projective modules over are free. The defining representation of _n over is an almost-SW realization <cit.>. The root realization of a Weyl group is almost-SW when defined over = [1/N] for small N (N=30 will suffice for all Weyl groups by <cit.>). Let I ⊂ S be finitary. Then R^I is a free -module. As a polynomial ring over a free -module V, R is a free -module. By the assumption of generalized Demazure surjectivity, R is free as an R^I-module when I ⊂ S is finitary. Thus R^I is a direct summand of R, and is therefore projective as a -module. Since both R and R^I are finitely-generated as -modules in each graded degree, we deduce that R^I is also a free module over . Our goal in this section is to generalize the results of the previous section to almost-SW realizations. In all the lemmas in this section, we assume an almost-SW realization. First we note the compatibility of base change with most of the constructions above. We let R be the polynomial ring of the realization over , and let R_ := R⊗_ be the polynomial ring of the realization after base change. Let R^I_⊂ R_ be the invariant subring. We have R^I_≅ R^I _. There is a natural map R^I _→ R_, and since scalars are W-invariant, the image lies within R_^I. The map is injective since is flat over . We now argue that the map R^I _→ R_^I is surjective. If f ∈ R_^I, then there is some c ∈ such that cf ∈ R (e.g. letting c be the product of the denominators of each monomial in f). Clearly cf ∈ R^I, whence f is the image of cf 1/c. Let B_,=R^I_⊗_R^M_R^J_. If I_∙ = [[I= I_0 ⊂ K_1 ⊃ I_1 ⊂… K_m ⊃ I_m = J]] is a multistep (I,J)-expression, let _(I_∙) := R^I_0__R_^K_1 R_^I_1_R_^K_2⋯_R_^K_m R_^I_m. The natural inclusion map (I_∙) →(I_∙) _ is injective. We have _(I_∙)≅(I_∙) ⊗_. As a consequence we have an injective map ((I_∙),(I'_∙)) →(_(I_∙),_(I'_∙)). By our assumptions from <ref>, R^I is free over R^K whenever I ⊂ K. We fix a basis {b_i^I,K} for this extension. Hence the Bott-Samelson bimodule (I_∙) is free as a right R^I_m-module with basis {b_i_1^I_0,K_1… b_i_m^I_m-1,K_m 1}. It is also free as a right -module by <Ref>. So base change is injective on bimodules. Notice that {b_i^I,K 1} is a basis of R^I_ over R^K_. Hence (<ref>) gives also a basis of _(I_∙) over R^I_m_. Since it sends a basis to a basis, we deduce that the natural map (I_∙) ⊗_→_(I_∙) is an isomorphism. The localization functor gives the map in (<ref>). For a morphism ϕ between bimodules over , let ϕ 1 denote its image, a morphism between bimodules over . The restriction of ϕ 1 to the subset (I_∙) ⊂_(I_∙) is the original morphism ϕ. Hence if ϕ 1 is the zero morphism, so is ϕ. For q ≤ and b ∈ R^(q), let us temporarily write _r,(q,b) for the double leaf as a morphism between Bott-Samelson bimodules over , and _r,(q,b) for the double leaf as a morphism between Bott-Samelson bimodules over , where for the latter we identify b with its image in R_^(q). Under the map (<ref>), we have _r,(q,b) ↦_r,(q,b). The calculus from <cit.> for interpreting diagrams is invariant under base change. Alternatively, B_ is generated as a bimodule by 1 1, and elementary light leaves ([n,q]) are determined uniquely in their Hom space by the fact that they send 1⊗ 1 to 1… 1. This property is preserved by base change. Let 𝔹_q be a basis of R^(q) over . 
Then the set {_r,(q,b) | q≤, b∈𝔹_q} is linearly independent. Let _<, and _<, be defined as before for their respective realizations. The map (<ref>) induces an isomorphism _<,_ _<,. By definition _< is the span (over or ) of the double leaf morphisms. The map (<ref>) restricts to a map _< , →(_(I_∙),_(I'_∙)). By the previous lemma, the image of this map is contained in _< ,. Thus one has an induced map _<, _→_<,. Note that 𝔹_q is sent by base change to a basis of R^(q)_ over . The elements {_r,(q,b) 1 } (ranging over the appropriate index set) form an -spanning set for the left-hand side of (<ref>), and are sent to {_r,(q,b)}, which form a -basis for _< , by Proposition <ref>. Thus the map (<ref>) is an isomorphism, and the elements {_r,(q,b) 1} are linearly independent over . Consequently, {_r,(q,b)} are linearly independent over . We have _<, = ⊕_q<_qw_J^-1 (R^(q)·Δ_M,(1)^J)Δ_M,(2)^J. The span in Definition <ref> in indeed a direct sum of subspaces, by the linear independence shown in the previous lemma. Henceforth we return to the -linear setting by default (i.e. in the absence of a subscript). Now the question remains: is the inclusion _<⊂_< (B_) an equality over , knowing that the result holds over ? In Corollary <ref> below, we prove that the answer is yes. For any ϕ∈_< (B_) there is some n ∈ such that n ϕ∈_<. In view of <Ref> and (<ref>), we can regard both _< and (B_) as -submodules of (B_,), which we identify with B_,. We have _< (B_) ⊂_< (B_,) by definition. Meanwhile, <Ref> holds over and thus _<(B_,) ≅⊕_q<_qw_J^-1 (R_^(q)·Δ_M,(1)^J)Δ_M,(2)^J. So any ϕ∈_<(B_) is a -multiple of an element of _<. Multiplying by the denominator, there exists n ∈ such that nϕ∈_<. We continue with a divisibility lemma on Demazure operators, which ensures that Demazure operators do not “produce” additional divisibility by elements of . Let b∈ R and let n∈. If n|∂_w_Mw_J^-1(bg) for all g∈ R^J, then n| b. Assume that n∤ b. We need to find g∈ R^J such that n∤_w_Mw_J^-1(bg). Recall from Lemma <ref> that for w ∈ W and f, g ∈ R we have _w(fg) = ∑_x ≤ w T'_x(f) _x(g) for certain operators T'_x defined over , where T'_w(f) = w(f). Let W^J be the subset of elements in W which are minimal (for the Bruhat order) in their right W_J-coset. We have ∂_x(g)=0 when x∉W^J and g ∈ R^J, so we have _w(fg) = ∑_x ≤ w, x∈ W^J T'_x(f) _x(g). We apply this formula when w = w_M w_J^-1. Let W^J_M = W^J ∩ W_M. Since n∤ b, then also n∤ w_Mw_J^-1(b)=T_w_Mw_J^-1'(b). So there exists some y∈ W_M^J (not necessarily unique) which is minimal with respect to the property that n∤ T'_y(b). Now we have _w_Mw_J^-1(bg)= ∑_x ∈ W_M^J T'_x(b)_x(g)≡∑_x≮ y, x ∈ W_M^J T'_x(b)_x(g) n. Recall that ℓ(xw_M)=ℓ(w_M)-ℓ(x) for all x∈ W_M. Let z=y^-1w_M so that y.z=w_M. We have ℓ(w_Jy^-1w_M)=ℓ(w_M)-ℓ(w_Jy^-1)=ℓ(w_M)-ℓ(y^-1)-ℓ(w_J)=ℓ(z)-ℓ(w_J), thus the left descent set of z contains J. By <cit.> we have (_z)⊂ R^J. By our assumption of generalized Demazure surjectivity, we can choose P_M∈ R such that _w_M(P_M) = 1. Set g=_z(P_M)∈ R^J and note that _y(g)=1. Let x∈ W_M. If x.z is not reduced, then _x(g)= _x_z(P_M) = 0 by <cit.>. If x.z is reduced, then ℓ(w_M)-ℓ(yx^-1)=ℓ(x.y^-1w_M)=ℓ(x)+ℓ(w_M)-ℓ(y). It follows that ℓ(yx^-1)+ℓ(x)=ℓ(y), so yx^-1.x=y and, in particular, x≤ y. This means that _x(g)≠ 0 only if x≤ y. Finally, we plug g=_z(P_M) into (<ref>) and we observe that _w_Mw_J^-1(bg)≡ T'_y(b) ≢0 n. Now we can prove that _<, as a submodule of (B_), is closed under division by elements in (when that makes sense). 
Let ϕ∈_< and assume there exists n∈ such that 1/nϕ∈(B_). Then 1/nϕ∈_<. If n is a unit in the result is trivial, so assume otherwise. Let ϕ∈_< and assume that 1/nϕ∈(B_)≅ R^I⊗_R^MR^J. We can write ϕ = ∑_q<_qw_J^-1 (b_q·Δ_M,(1)^J)Δ_M,(2)^J for some unique b_q∈ R^(q). For clarity we choose to unravel Sweedler's notation. Choose dual bases {c_i} and {d_i} for R^J over R^M relative to the Frobenius trace map ^J_M. We have ϕ = ∑_i∑_q<_qw_J^-1 (b_q· c_i) d_i. Since d_i is a basis of R^J over R^M, any element of R^I _R^M R^J is uniquely expressible as ∑_i f_i d_i for f_i ∈ R^I. In particular, if n divides ∑_i f_i d_i we have 1/n∑_i f_i d_i=∑_i f'_i d_i for some unique f'_i. Then ∑ f_i d_i =∑_i nf'_i d_i, and by the unicity mentioned before, this implies that nf_i'=f_i for all i. Consequently, ϕ is divisible by n if and only if for all i we have n |∑_q<_qw_J^-1 (b_q· c_i). We want to show that all the b_q are actually divisible by n, so that 1/nϕ∈_<. Assume for contradiction that there exists a minimal r such that n∤ b_r. Let z be such that z.rw_J^-1=w_Mw_J^-1. Then by similar arguments to the previous lemma we have 0≡_z(∑_q<_qw_J^-1 (b_q· c_i))≡_w_Mw_J^-1(b_r · c_i) n for each i. The subset of R^J consisting of those c for which _w_M w_J^-1(b_r · c) ≡ 0 n is evidently an R^M-submodule. Since this submodule contains a basis {c_i} for R^J over R^M, it must contain all of R^J. Thus n|_w_Mw_J^-1(b_r · g) for all g ∈ R^J. By <Ref> we deduce n| b_r, leading to a contradiction. For an almost SW-realization we have _<=_<(B_). In particular, _< is an (R^I, R^J)-bimodule. The containment _<⊂_(B_) was already shown in <Ref>. Now pick an arbitrary element ψ∈_<(B_). By <Ref>, there is some n ∈ such that nψ∈_<. Then ψ=1/n(nψ)∈_<(B_), so by <Ref> we deduce that ψ∈_<. § POLYNOMIAL FORCING AND ATOMIC LEIBNIZ §.§ Polynomial forcing for atomic cosets Now we explain the concept of polynomial forcing. We consider first the case of an atomic coset [[I⊂ M⊃ J]]. Recall that J = I and ()=I since is core. Thus there is an isomorphism R^J → R^I, f ↦(f). Let be an atomic coset with reduced expression [[I ⊂ M ⊃ J]] and let f ∈ R^J. We say that polynomial forcing holds for f and if we have 1 f - (f) 1 ∈_< (B_). We say that polynomial forcing holds for if (<ref>) holds for all f ∈ R^J. Suppose that (<ref>) holds for f_1 and for f_2, with both f_1, f_2 ∈ R^J. Then it holds for f_1 + f_2 and f_1 f_2. Additivity is trivial, because _< is closed under addition. Now consider the following: 1 f_1 · f_2 - (f_1 · f_2) 1 = (1 f_1 - (f_1) 1) · f_2 + (f_1) · (1 f_2 - (f_2) 1). Since _<(B_) is closed under right and left multiplication, both terms on the right-hand side above are in _<(B_), and the result is proven. Before continuing, let us contrast polynomial forcing with an a priori different notion. Consider _< from Definition <ref>. We say that -forcing holds for and f if 1 f - (f) 1 ∈_<⊂(B_). We say that -forcing holds for if it holds for and f, for all f∈ R^J. For an almost-SW realization, -forcing is equivalent to polynomial forcing, since _< = _< (B_) by Corollary <ref>. In general, it is not obvious that -forcing is multiplicative. The proof of multiplicativity in Lemma <ref> relied on the fact that _<(B_) is an (R^I, R^J)-bimodule, whereas _< is only a priori a right R^J-module. §.§ Equivalence Now we prove the equivalence between atomic Leibniz rules and polynomial forcing. To formulate an intermediate condition in the proof, which is also of importance for the next section, we agree to say the following. 
Given an atomic (I,J)-coset and an element f∈ R^J, an atomic Leibniz rule for and f is said to hold if there exist elements T_q(f) such that equation (<ref>) is satisfied for all g∈ R^J. Since this condition is stated for one polynomial f at a time, there is no requirement that T_q is an R^M-linear operator. Let [I,M,J] be an atomic (I,J)-coset, and f ∈ R^J. We have a rightward atomic Leibniz rule for and f if and only if -forcing holds for and f. Moreover, for an almost-SW realization, if -forcing holds for and f, then the atomic Leibniz rule is unique, i.e., the elements T_q(f)∈ R^(q) in (<ref>) are uniquely determined. Note that R^J⊂ R^M is a Frobenius extension, see <cit.>. The trace map is ^J_M := _w_M w_J^-1 = _. Let Δ^J_M ∈ R^J _R^M R^J denote the coproduct element (the image of 1 ∈ R^J under the coproduct map), which we often denote using Sweedler notation. Then <cit.> implies that 1 1 = _(Δ^J_M,(1)) Δ^J_M,(2). Multiplying (f) on the left we get (f) 1 = (f) ·_(Δ^J_M,(1)) Δ^J_M,(2). Meanwhile, <cit.> implies that 1 f = _(f ·Δ^J_M,(1)) Δ^J_M,(2). Thus we have 1 f - (f) 1 = [ _(f ·Δ^J_M,(1)) - (f) ·_(Δ^J_M,(1)) ] Δ^J_M,(2). Letting g = Δ^J_M,(1), a rightward atomic Leibniz rule for and f gives _(f · g) - (f) ·_(g) = ∑_q<_qw_J^-1(T_q(f) · g). Thus we have 1 f - (f) 1 = ∑_q<_qw_J^-1(T_q(f) ·Δ^J_M,(1)) Δ^J_M,(2). which lies in _< by definition. We prove now the other direction. We have 1 f - (f) 1 ∈_<. By (<ref>), we obtain [ _(f ·Δ^J_M,(1)) - (f) ·_(Δ^J_M,(1)) ] Δ^J_M,(2)∈_< (B_). By definition of _< we deduce that _(f ·Δ^J_M,(1)) Δ^J_M,(2) = (f) ·_(Δ^J_M,(1)) Δ^J_M,(2) + ∑_q<_qw_J^-1(T_q(f) ·Δ_M,(1)^J) Δ_M,(2)^J for some T_q(f) ∈ R^(q). For an almost-SW realization, Lemma <ref> implies that the T_q(f) are unique. Note that Δ^J_M,(1) and Δ^J_M,(2) run over dual bases of R^J over R^M. The elements 𝔹 = {1 Δ^J_M,(2)} form a basis for B_, when viewed as a left R^I-module. Thus in order for the equation (<ref>) to hold, it must be an equality for each coefficient with respect to the basis 𝔹. Hence we conclude _(f ·Δ) = (f) ·_(Δ) + ∑_q<_q(T_q(f) ·Δ) for all Δ ranging through a basis of R^J over R^M. Using the linearity of (<ref>) over R^M, we deduce that it continues to hold when Δ is replaced by any element g ∈ R^J. Thus the atomic Leibniz rule for f is proven. Assume an almost-SW realization (see Definition <ref>). Let [I,M,J] be an atomic (I,J)-coset. Then the following are equivalent. * A rightward atomic Leibniz rule holds for . * A leftward atomic Leibniz rule holds for . * For a set of generators {c_i} of the R^M-algebra R^J, a rightward atomic Leibniz rule holds for and each c_i. * For a set of generators {c_i} of the R^M-algebra R^J, a leftward atomic Leibniz rule holds for and each c_i. * Polynomial forcing holds for . Moreover, in this case, there are unique operators T_q, T'_q that satisfy atomic Leibniz rules. First, we observe that polynomial forcing holds for f ∈ R^M. Clearly 1 f = f 1. Moreover, ⊂ W_M so (f) = f. That (<ref>) implies (<ref>) is clear. Suppose that (<ref>) holds. By Proposition <ref>, forcing holds for all c_i. By Corollary <ref>, -forcing for c_i is equivalent to polynomial forcing for c_i. By Lemma <ref>, the subset of R^J consisting of those f for which polynomial forcing holds is a subring. As explained above, this subring includes R^M, so if it includes {c_i} then it must be all of R^J. In this way, (<ref>) implies (<ref>). Suppose that (<ref>) holds. 
Once again, Proposition <ref> and Corollary <ref> imply that, for each f ∈ R^J, a rightward atomic Leibniz rule holds for f, with the elements T_q(f) ∈ R^(q) being unique. To prove that a rightward atomic Leibniz rule holds, it remains to prove that the operators T_q R^J → R^(q) are R^M-linear. We do this below, finishing the proof that (<ref>) implies (<ref>). Let g ∈ R^M. Multiplying both sides of equation (<ref>) on the left by g, and pulling g into various R^M-linear operators (namely _ and and _q w_J^-1), we obtain _(gf ·Δ^J_M,(1)) Δ^J_M,(2) = (gf) ·_(Δ^J_M,(1)) Δ^J_M,(2) + ∑_q<_qw_J^-1(gT_q(f) ·Δ_M,(1)^J) Δ_M,(2)^J This is exactly (<ref>) with gf replacing f, except that gT_q(f) appears instead of T_q(gf). By uniqueness, we deduce that T_q(gf) = g T_q(f). We have thus shown the equivalence of (<ref>),(<ref>) and (<ref>). A similar argument will imply the equivalence of (<ref>) and (<ref>) and (<ref>), and the uniqueness of T'_q. This similar argument replaces _r(q,b) with _l(q,b), using Proposition <ref> and Lemma <ref>. The left analogue of the remaining arguments (e.g. Proposition <ref> and Corollary <ref>) is left to the reader. The intermediate conditions (<ref>) and (<ref>) do not play a significant role in the proof. We have included them to make it easier to prove the atomic Leibniz rule by establishing it on a set of generators. §.§ Polynomial forcing for general cosets Now let p be an arbitrary (I,J)-coset, with a reduced expression I_∙. We wish to avoid the technicalities of changing the reduced expression I_∙ in this paper. Instead we focus on the special case when I_∙ is an atomic-factored reduced expression, i.e. it has the following form: I_∙ = [[I ⊃(p)]] ∘ I'_∙∘ [[(p) ⊂ J]] where I'_∙ is an atomic reduced expression (see below) for p^. An atomic reduced expression for a core coset p^ is a reduced expression of the form I'_∙ = [[(p) = N_0 ⊂ M_1 ⊃ N_1 ⊂⋯⊂ M_m ⊃ N_m = (p)]], where each [[N_i ⊂ M_i+1⊃ N_i+1]] is a reduced expression for an atomic coset _i+1. In particular, p = _1._2.⋯ . _m. Any core coset has an atomic reduced expression, see <cit.> and thus any coset has an atomic-factored reduced expression by <cit.>. We have (I_∙) = R^(p)_R^M_1 R^N_1_R^M_2⋯_R^M_m R^(p) viewed as an (R^I,R^J)-bimodule. Meanwhile, (I'_∙) is the same abelian group, but is viewed as an (R^(p),R^(p))-bimodule. There is an action of each R^N_i on (I_∙) by multiplication in the i-th tensor factor of (<ref>). Indeed, this induces an injective map (I_∙) →((I_∙)) which is not surjective in general. An arbitrary reduced expression for p might never factor through the subset (p) or (p). The first advantage of an atomic-factored expression is that there is an obvious action of R^(p) on (I_∙) by left-multiplication, and an obvious action of R^(p) by right-multiplication. The goal is to prove that these two actions agree up to a twist by p, modulo lower terms. We denote by _I_∙ the identity morphism of (I_∙). Let I_∙ be an atomic-factored reduced expression as in (<ref>). We say that polynomial forcing holds for I_∙ if for all f ∈ R^(p), within ((I_∙)) as described in (<ref>) and (<ref>), we have p(f) ·_I_∙≡_I_∙· f modulo _< p((I_∙)). We say that polynomial forcing holds for a double coset p if it holds for all atomic-factored reduced expressions I_∙ satisfying I_∙ p. This definition generalizes Definition <ref> because atomic cosets have only one reduced expression. Let p be an arbitrary (I,J)-coset. Since (p) ⊂ I, there is an inclusion of rings R^I ⊂ R^(p), and R^(p) is naturally an R^I-module. 
Similarly, R^(p) is an R^J-module. If f ∈ R^(p) then p(f) ∈ R^(p). We can identify the rings R^(p) and R^(p) via p. In this way, R^(p) becomes an (R^I, R^J)-bimodule. Let p be an (I,J)-coset. The standard bimodule associated to p, denoted R_p, is R^(p) as a left R^I-module. If f ∈ R^J and m ∈ R_p then m · f := p(f) m. We identify R_p with either R^(p) (with right action twisted) or R^(p) (with left action twisted), as is more convenient. Let :((I_∙))→((I_∙)) / _< p((I_∙)) denote the quotient map. Let p be a core (I,J)-coset and let I_∙ be a reduced expression for p. The bimodule map R_p →((I_∙)) / _< p((I_∙)), 1 ↦(_I_∙) is well-defined if and only if polynomial forcing holds for I_∙. The right action of f ∈ R^J on 1 ∈ R^J = R_p yields f ∈ R^J, and the right action on (_I_∙) yields (_I_∙· f). The left action of p(f) ∈ R^I on 1 ∈ R^J = R_p yields f ∈ R^J, and the left action on (_I_∙) yields (p(f)·_I_∙). These agree if and only if the bimodule map is well-defined, and if and only if (<ref>) holds. In conclusion, we have shown the equivalence of three ideas (for almost SW-realizations) for an atomic coset : the well-definedness of the morphism (<ref>) when p= I_∙, atomic polynomial forcing, and the atomic Leibniz rule. For SW-realizations, (<ref>) is an isomorphism by the theory of singular Soergel bimodules. We can thus prove one of our main theorems. For an SW-realization, atomic polynomial forcing and atomic Leibniz hold. Moreover, the operators T_q,T_q' in the Leibniz formulas are unique. Assume [I,M,J] is an atomic coset. Then B_≅([I,M,J]). Recall from <cit.> the definition of the submodule Γ_<B_ of elements supported on lower cosets. By <cit.> we have a short exact sequence 0→(B_,Γ_<B_)→(B_)→(B_,B_/Γ_<B_)→ 0 and, by <cit.>, the first term in (<ref>) is isomorphic to _<(B_). Moreover, since B_ is indecomposable, by <cit.> we have B_/Γ_<B_≅ R_. The Soergel–Williamson hom formula <cit.> implies that we have an isomorphism (B_,R_)≅ R_[As in the rest of this paper, we are ignoring degrees here.] given by f↦ f(1 1). Putting all together, we obtain an isomorphism (B_) / _< (B_)∼⟶ R_ which sends 𝕀_ to 1∈ R_ By Lemma <ref>, the existence of the isomorphism implies polynomial forcing for . By Theorem <ref>, this is in turn equivalent to the atomic Leibniz rule for . Moreover, as proven in <Ref>, the operators T_q,T_q' in the Leibniz formulas are unique. For a SW-realization, there is an equivalent module-theoretic (rather than morphism-theoretic) version of polynomial forcing. We first recall from <cit.> the filtration on Soergel bimodules N_<p(B)=∑_f∈((I_∙),B),I_∙ q<p(f). In <cit.> we have showed that this coincides with the support filtration Γ_<p introduced in <cit.>. Let [I,M,J] be an atomic coset. We say that (module-theoretic) polynomial forcing holds for and f if (f) 1- 1 f∈ N_<(B_). There is an isomorphism B_≅(B_), where b b'∈ B_ is sent to multiplication by b b'. Moreover, by <cit.> we have (B_,N_<B_)≅_<(B_). Hence, (<ref>) holds if and only if multiplication by (f) 1- 1 f induces a morphism in _<(B_), that is, if an only if (morphism-theoretic) polynomial forcing holds for f. §.§ Polynomial forcing: atomic and general In the diagrammatic category, we intend to use the atomic Leibniz rule to prove polynomial forcing, and not vice versa. In that context, polynomial forcing is to be interpreted as the morphism-theoretic statement that (<ref>) is a well-defined morphism, when I_∙ is an atomic-factored reduced expression. 
The goal of this section is to prove that atomic polynomial forcing implies general polynomial forcing. In <cit.>, a compatibility between the Bruhat order and concatenation of reduced expression is proven, which implies the following result. Let P_∙ p and Q_∙ q and R_∙ r be reduced expressions such that P_∙∘ Q_∙∘ R_∙ p.q.r is reduced. Then 𝕀_P_∙_< q(Q_∙) 𝕀_R_∙⊂_< p.q.r(P_∙∘ Q_∙∘ R_∙). Assume an SW-realization. Then polynomial forcing holds for all double cosets. We first treat the case where p = p^ is a core (I,J)-coset. Consider an atomic reduced expression I_∙ for p, yielding atomic cosets _i such that p = _1 . _2.⋯ . _m. Since _i are core cosets, by <cit.> we have p = _1·_2⋯_m. Now within (I_∙) we have __1⋯__m f ≡__1⋯_m(f) __m≡…≡_1(⋯(_m(f))) __1⋯__m, where ≡ indicates equality modulo lower terms. At each step we applied polynomial forcing for an atomic coset as proved in <Ref>, and used Proposition <ref> to argue that lower terms for _i embed into lower terms for p. Thus polynomial forcing holds for p. If p is not a core coset, let I_∙ be a special reduced expression for p as in (<ref>). Polynomial forcing for p^ implies that p(f) ·_I_∙≡_I_∙· f modulo 𝕀_[[I ⊃(p)]]_< p^(I'_∙) 𝕀_[[(p) ⊂ J]]. By Proposition <ref>, they are also equivalent modulo _<p(I_∙), as desired. The reader familiar with Soergel bimodules might be familiar with the following example, which showcases how atomic polynomial forcing implies the general case. It also relates our new concept of polynomial forcing to the concept previously in the literature. Consider the (, )-coset p = {s} for a simple reflection s, and the reduced expression I_∙ = [,s,]. Inside B_s := (I_∙) = R _R^s R we have 1 f - s(f) 1 = _s(f) ·1/2(α_s 1 + 1 α_s). The term on the right hand side is in _< s(I_∙), a consequence of the so-called polynomial forcing relation in the Hecke category, see e.g. <cit.>. Now consider the (, ) coset p = {w} for some w ∈ W, and a reduced expression I_∙ = [, s_1, , s_2, …, , s_d, ]. By applying the polynomial forcing relation for B_s_d, we see that 1 ⋯ 1 f ≡ 1 ⋯ s_d(f) 1 modulo maps which factor through [, s_1,, …, , s_d-1, ]. Continuing, we can apply polynomial forcing for each B_s_i to force f across all the tensors, at the cost of maps which factor through subexpressions of s_1 ⋯ s_d. By the subexpression property of the Bruhat order, such maps consist of lower terms. § ATOMIC LEIBNIZ RULE IN TYPE A In this section we explicitly prove condition (<ref>) in Theorem <ref> in type A. We establish this result for = in this section, rather than over a field. In addition to extending our results over , we feel the ability to be explicit in a key example is its own reward. In order to achieve this, we first prove in <Ref> explicitly an atomic Leibniz rule for a specific set of generators when W_M is the entire symmetric group. In <Ref> we extend our results to the case when W_M is a product of symmetric groups. This handles all atomic cosets in type A. §.§ Notation in type A We fix notation under the assumption that W_M is an irreducible Coxeter group of type A. For s,t ∈ M we write ŝ := M ∖{s} and ŝt̂ := M ∖{s,t}, etcetera. Fix a, b ≥ 1 and let n = a+b. Let W_M = _n. Let t = s_a and s = s_b = w_M t w_M, so that W_J = _a ×_b and W_I = _b ×_a. Let be the (_b ×_a, _a ×_b)-coset containing w_M. The coset is depicted as follows, with its minimal element being the string diagram visible. = 1cosetq0. Drawn is the example a=3 and b=5. For each 0 ≤ k ≤min(a,b), there is an (ŝ,t̂)-coset q_k depicted as follows. 
q_1 = 1cosetq1, q_2 = 1cosetq2, q_3 = 1cosetq3. Then p = q_0, and {q_k}_0 ≤ k ≤min(a,b) is an enumeration of all the (ŝ,t̂)-cosets. The Bruhat order is a total order in this case: q_0 > q_1 > q_2 > … > q_min(a,b). The left redundancy subgroup (see <cit.> to see how to calculate redundancies and cores) of q_k is _k ×_b-k×_a-k×_k. For brevity, let ℓ := n - k. Then (with one exception) (q_k) = b̂k̂ℓ̂ := M ∖{b,k,ℓ} and (q_k) = âk̂ℓ̂. The core of q_k is the double coset depicted as q_1^ = 1coreq1, q_2^ = 1coreq2, q_3^ = 1coreq3. Note that the core of q_k is itself atomic, except for k = min(a,b) when the core is an identity coset. The case k = a = b = ℓ is relatively special. We denote this special coset as q_a=b. We have (q_a=b) = (q_a=b) = I = J. Unlike other q_i, q_a=b is core. In type A the following statement is always true: if is atomic and q < then q^ is either atomic or an identity coset. We do not know for which atomic cosets this property holds in other types. A reduced expression for q_k, which factors through the core, is q_k [[b̂⊃b̂k̂ℓ̂⊂k̂ℓ̂⊃âk̂ℓ̂⊂â]]. The exception is when a = b = k, in which case q_a=b [b̂], that is, the identity expression of I = b̂ is a reduced expression for the length zero coset q_a=b. Finally, let us write y_k = q_k w_J^-1. Then _q_k = _y_k. Here are examples of y_k. y_0 = 1y0, y_1 = 1y1, y_2 = 1y2, y_3 = 1y3. Of course y_0 =. Meanwhile y_k is obtained from y_0 by removing a k × k square of crossings from the top. In the special case of the coset q_a=b, we have y_a=b = e, the identity of W. §.§ Complete symmetric polynomials Fix a, b ≥ 1 and continue to use the notation from the previous section. The standard action of _n on ^n (with the standard choice of roots and coroots) we call the permutation realization. Let R = [x_1, …, x_n]. All Demazure operators preserve R. By a result of Demazure <cit.>, Frobenius surjectivity holds in type A over , that is, for any I⊂ M we can find P_I∈ R such that _I(P_I)=1. Moreover, for any J⊂ I the ring R^I is Frobenius over R^J and we can choose dual bases Δ^J_I,(1) and Δ^J_I,(1) accordingly. It is well-known that the subring R^J = R^_a ×_b is generated over R^_n by the complete symmetric polynomials h_i(x_1, …, x_a)=∑_1≤ k_1≤ k_2≤…≤ k_i≤ ax_k_1x_k_2⋯ x_k_i in the first a variables. In this section, we directly prove the atomic Leibniz rule for when f = h_i(x_1, …, x_a). One of the great features of complete symmetric polynomials is their behavior under Demazure operators. For example, we have _3(h_i(x_1, x_2, x_3)) = h_i-1(x_1, x_2, x_3, x_4). As a consequence, _2 _3(h_i(x_1, x_2, x_3)) = 0, a fact which is false if h_i is replaced by some general polynomial inside R^_3 ×_n-3. Indeed, the only elements w ≤ s_1 s_2 s_3 for which _w(h_i(x_1,x_2,x_3)) 0 are w = s_3 and w = e. This will simplify the computation considerably. Below we shall use letters like X, Y, and Z to denote subsets of {1, …, n}. We write h_i(X) for the i-th complete symmetric polynomial in the variables x_j for j ∈ X. We have _j(h_i(X)) = h_i-1(X ∪{j+1}) if j∈ Xand j+1∉X -h_i-1(X ∪{j}) if j+1∈ Xand j∉X 0 otherwise. Clearly, h_i(X) is s_j-invariant if both j and j+1 are inside or outside X, so _j(h_i(X))=0. Assume now that j∈ X and j+1∉X. We have h_i(X)=h_i(X∖{j})+h_i-1(X)x_j. Hence, _j(h_i(X))=_j(h_i-1(X)x_j). Applying the twisted Leibniz rule, by induction on |X|, we obtain _j(h_i(X)) =h_i-2(X∪{j+1})x_j+s_j(h_i-1(X))_j(x_j) =h_i-2(X∪{j+1})x_j+h_i-1(X∪{j+1}∖{j}) =h_i-1(X∪{j+1}). 
The case where j+1 ∈ X and j ∉ X follows because _j(s_j(f)) = -_j(f), so we have _j(h_i(X)) =-_j (h_i(X∪{j}∖{j+1}))=-h_i-1(X∪{j}). Use the notation of <ref>. Fix a, b ≥ 1, and let n = a+b. Let X = {1, …, a} and Y = {n,n-1,…,n+1-a}. Recall y_0, y_1 ∈_n from (<ref>). Then for any i ≥ 0 and any g ∈ R^_a ×_b=R^J we have _y_0(h_i(X) · g) = h_i(Y) ·_y_0(g) + _y_1(h_i-1(X ∪ n) · g). As _p=_y_0, p= y_0, and y_0(h_i(X)) = h_i(Y), this is compatible with (<ref>), where T_q_1(h_i(X))= h_i-1(X ∪ n), and T_q_k(h_i(X)) is zero for all k > 1. Most of the terms in (<ref>) are zero for complete symmetric polynomials, making the formula much easier than the general case. We will do a proof by example, for the example a = 3 and b=4. The general proof is effectively the same only the notation is more cumbersome. In this proof, we write _123 for _1 ∘_2 ∘_3 (and not the Frobenius trace associated to the longest element w_123). We use parenthesization for emphasis, so that _(12)3 is the same thing as _123, but emphasizes that _(12)3 = _12∘_3. Remember that g is invariant under anything except s_3, so _j(g) = 0 if j 3. This implies, for example, that _3 kills _23(g), and that ∂_2 kills _3243(g), etcetera. We claim that _123(h_i(123) g) = h_i-1(1234) _23(g) + h_i(234) _123(g). One proof is to apply the ordinary twisted Leibniz rule repeatedly using (<ref>). After one step we obtain _123(h_i(123) g) = _12( h_i-1(1234) g + h_i(124) _3(g) ). The first term on the right-hand side is invariant under s_2, so it is killed by _2. Thus we have _123(h_i(123) g) = _1(_2(h_i(124) _3(g))) = _1(h_i-1(1234) _3(g) + h_i(134) _23(g)). Again the first term on the right-hand side is invariant under s_1, so it is killed by _1. A final application of the twisted Leibniz rule to _1(h_i(134) _23(g)) gives (<ref>). Essentially, this proof is by iterating the twisted Leibniz rule and arguing that the first term vanishes in every application but the last, because the first term is appropriately invariant. We call this the easy invariance argument. Note that _123(g) is invariant under everything but s_4. More generally, _12… a(g) is killed by _j for all j a+1. This is true when j=1 since _1 _1 = 0. This is true when j > a+1 since _j _12… a = _12… a_j and _j(g) = 0. This is true when 2 ≤ j ≤ a because _j _12… a = _12… a_j-1 and _j-1(g) = 0. By the easy invariance argument again, but with indices shifted and g replaced by _123(g), we have _234(h_i(234) _123(g)) = h_i-1(2345) _34123(g) + h_i(345) _234123(g). This is how we treat the second term in the right side of (<ref>). Note that all computations above are unchanged by adding new variables to our complete symmetric polynomials which are untouched by any of the simple reflections used by the formula. For example, adding 7 to every h_i in (<ref>) we get _123(h_i(1237) g) = h_i-1(12347) _23(g) + h_i(2347) _123(g). In this example, we call 7 an irrelevant index. Now we examine the first term in the right side of (<ref>). Note that _23(g) is invariant under all simple reflections except s_1 and s_4. For the next computation, the index 1 is irrelevant. The easy invariance argument again implies that _234(h_i-1(1234) _23(g)) = h_i-2(12345) _3423(g) + h_i-1(1345) _23423(g). However, as 23423=32434, we have _23423(g) = 0, so one has the simpler formula _234(h_i-1(1234) _23(g)) = h_i-2(12345) _3423(g). Overall, we see that _(234)(123)(h_i(123)g) = h_i(345) _(234)(123)(g) + h_i-1(2345) _(34)(123)(g) + h_i-2(12345) _(34)(23)(g). The pattern is relatively straightforward. 
Here's the next one in the pattern: _(345)(234)(123)(h_i(123)g) = h_i(456) _(345)(234)(123)(g) + h_i-1(3456) _(45)(234)(123)(g) + h_i-2(23456) _(45)(34)(123)(g) + h_i-3(123456) _(45)(34)(23)(g). The word whose Demazure is applied to g is obtained from the concatenation of triples (345)(234)(123) by removing the first index from some of the triples; more specifically, from a prefix of the set of triples. The indices that get removed are instead added to the complete symmetric polynomial. The reason triples appear is because a = 3. The inductive proof of this pattern is the same as above. One takes (<ref>) and applies _345. The first term splits in two, giving the first two terms of (<ref>), similar to (<ref>) or (<ref>). Each other term contributes one term in (<ref>), similar to (<ref>). Note that we could have added the irrelevant index 7 to every set in sight within (<ref>), without any issues. This will be important later. Repeating until one applies _y_0, we calculate _y_0(h_i(X)· g): _(456)(345)(234)(123)(h_i(123)g) = h_i(567) _(456)(345)(234)(123)(g) + h_i-1(4567) _(56)(345)(234)(123)(g) + h_i-2(34567)_(56)(45)(234)(123)(g) + h_i-3(234567) _(56)(45)(34)(123)(g) + h_i-4(1234567) _(56)(45)(34)(23)(g). The fact that there were four triples is because b=4. Note that the first term in the RHS is h_i(Y)_y_0(g). Let us now compute _y_1(h_i-1(1237) · g). Note that y_1 = (56)(345)(234)(123). To compute _(345)(234)(123)(h_i-1(1237) g), we take (<ref>), add the irrelevant index 7 to all variable lists, and reduce i by one. Now we need only apply _(56) to the result. The key thing to note here is that each h_∙(⋯ 567) is invariant already under s_5 and s_6. Thus both operators in _(56) simply apply to the g term. From this we can compute _y_1(h_i-1(X∪ n)· g): _(56)(345)(234)(123)(h_i-1(1237)g) = h_i-1(4567) _(56)(345)(234)(123)(g) + h_i-2(34567)_(56)(45)(234)(123)(g) + h_i-3(234567) _(56)(45)(34)(123)(g) + h_i-4(1234567) _(56)(45)(34)(23)(g). This exactly matches all terms from (<ref>) except the first term. Thus the theorem is proven! The atomic Leibniz rule and atomic polynomial forcing both hold for atomic cosets [I,M,J] when W_M = _n when R = [x_1, …, x_n]. Theorem <ref> proved property (<ref>) from Theorem <ref> in this case. Thus conditions (<ref>) and (<ref>) also hold in this case. §.§ Reduction to the connected case The previous section proves an atomic Leibniz rule under the assumption W_M = _n. Now we do the general case. Let W = _n. An arbitrary atomic coset in W contains the longest element of the reducible Coxeter group W_M = _n_1×⋯×_n_k where ∑ n_i = n. It is a coset for (ŝ,t̂), where s and t are simple reflections in the same irreducible component _n_i of W_M. We can prove polynomial forcing for an arbitrary atomic coset in type A if we can bootstrap the result from _n_i to W_M. In this discussion, there is no difference between type A and a general Coxeter type. Thus let M be finitary, with s, t ∈ M and w_M s w_M = t. Let be the atomic (ŝ,t̂)-coset containing w_M. Now suppose that M = C_1 ⊔ C_2 ⊔…⊔ C_k is a disjoint union of connected components (the simple reflections in C_i commute with those in C_j for i j). Suppose without loss of generality that s ∈ C_1, and let D = C_2 ⊔…⊔ C_k. Then t ∈ C_1 as well, and t = w_C_1 s w_C_1. Let ' denote the atomic (C_1 ∖ s, C_1 ∖ t)-coset containing w_C_1. Then and ' are related by the operation +D described in <cit.>. With the notation as above, polynomial forcing holds for if and only if it holds for '. 
With the notation as above, an atomic Leibniz rule holds for if and only if it holds for '. The proof is straightforward and left to the reader, but we wish to point out the available ingredients. Many basic properties of the operator +D are given in <cit.>. There is a bijection between cosets q < and cosets q' < ', and also a bijection between their reduced expressions. Note that _qw_t̂^-1 = _q'w_C_1∖ t^-1 as operators R → R. Finally, dual bases for the Frobenius extension R^M ⊂ R^M ∖ t can also be chosen as dual bases for the Frobenius extension R^C_1⊂ R^C_1 ∖ t. The atomic Leibniz rule and polynomial forcing hold for any atomic coset in type A_n-1 when R = [x_1, …, x_n]. The restriction of the permutation realization to any _n_i⊂_n is a W-invariant enlargement of the permutation realization of _n_i. Thus the result follows from the previous two lemmas, Theorem <ref>, and Lemma <ref>. Applying <Ref>, we also deduce the atomic Leibniz rule for a host of other realizations, including when R = [x_1, …, x_n] for any commutative ring .
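The identity in Theorem <ref> is also easy to test by computer algebra. The following SymPy sketch (an illustration, not part of the argument) checks the rule ∂_{y_0}(h_i(X)· g) = h_i(Y)·∂_{y_0}(g) + ∂_{y_1}(h_{i-1}(X∪{n})· g) in the case a=b=2, n=4, on a few sample elements g invariant under s_1 and s_3; the reduced words used below for y_0 and y_1, namely s_2s_3s_1s_2 and s_3s_1s_2, follow the pattern described in the proof above.

from itertools import combinations_with_replacement
import sympy as sp

x = sp.symbols('x1:5')  # x1, x2, x3, x4; here n = 4 and a = b = 2

def demazure(f, i):
    # Demazure operator: (f - s_i(f)) / (x_i - x_{i+1})
    f = sp.sympify(f)
    swapped = f.subs({x[i-1]: x[i], x[i]: x[i-1]}, simultaneous=True)
    return sp.cancel((f - swapped) / (x[i-1] - x[i]))

def demazure_word(f, word):
    # Composition of Demazure operators; the rightmost entry acts first.
    for i in reversed(word):
        f = demazure(f, i)
    return f

def h(i, xs):
    # Complete homogeneous symmetric polynomial h_i in the variables xs.
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(xs, i)])

y0 = (2, 3, 1, 2)  # chosen reduced word for y_0 = w_M w_J^{-1}, with W_J = <s_1, s_3>
y1 = (3, 1, 2)     # chosen reduced word for the lower term y_1
X, Y = (x[0], x[1]), (x[2], x[3])

samples = [sp.Integer(1), x[0]*x[1], x[2]*x[3], (x[0]*x[1])**2, x[0]*x[1]*x[2]*x[3]]
for i in range(1, 4):
    for g in samples:  # g ranges over a few polynomials fixed by s_1 and s_3
        lhs = demazure_word(h(i, X) * g, y0)
        rhs = h(i, Y) * demazure_word(g, y0) + demazure_word(h(i - 1, (x[0], x[1], x[3])) * g, y1)
        assert sp.expand(lhs - rhs) == 0
print('atomic Leibniz rule verified on the sample inputs')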
http://arxiv.org/abs/2407.12160v1
20240716203242
Topological complexity of ideal limit points
[ "Marek Balcerzak", "Szymon Glab", "Paolo Leonetti" ]
math.GN
[ "math.GN", "math.CA", "math.FA" ]
§ ABSTRACT Given an ideal ℐ on the nonnegative integers ω and a Polish space X, let ℒ(ℐ) be the family of subsets S⊆ X such that S is the set of ℐ-limit points of some sequence taking values in X. First, we show that ℒ(ℐ) may attain arbitrarily large Borel complexity. Second, we prove that if ℐ is a G_δσ-ideal then all elements of ℒ(ℐ) are closed. Third, we show that if ℐ is a simply coanalytic ideal and X is first countable, then every element of ℒ(ℐ) is simply analytic. Lastly, we study certain structural properties and the topological complexity of minimal ideals ℐ for which ℒ(ℐ) contains a given set. § INTRODUCTION Let ℐ be an ideal on the nonnegative integers ω, that is, a subset of 𝒫(ω) closed under taking subsets and finite unions. Unless otherwise stated, it is assumed that ℐ is admissible, namely, that ω∉ℐ and that ℐ contains the family Fin of finite subsets of ω. Intuitively, the ideal ℐ represents the family of “small” subsets of ω. An important example is the family of asymptotic density zero sets 𝒵:={S⊆ω: lim_n→∞|S ∩ [0,n]|/n=0}. Define ℐ^+:=𝒫(ω)∖ℐ. Ideals are regarded as subsets of the Cantor space {0,1}^ω, hence we can speak about their topological complexity. For instance, Fin is an F_σ-ideal, and 𝒵 is an F_σδ-ideal which is not F_σ. Pick also a sequence x=(x_n) taking values in a topological space X. Then, we denote by Λ_x(ℐ) the set of ℐ-limit points of x, that is, the set of all η∈ X for which there exists a subsequence (x_n_k) such that lim_k→∞ x_n_k=η and {n_k: k ∈ω}∈ℐ^+. It is well known that, even in the case where x is a real bounded sequence, it is possible that Λ_x(𝒵) is the empty set, see <cit.>. The topological nature of the sets of ℐ-limit points Λ_x(ℐ) and their relationship with the slightly weaker variant of ℐ-cluster points have been studied in <cit.>, cf. also <cit.>. In this work, we continue along this line of research. To this aim, we introduce our main definition: Let X be a topological space and ℐ be an ideal on ω. We denote by ℒ_X(ℐ) the family of sets of ℐ-limit points of sequences x taking values in X together with the empty set, that is, ℒ_X(ℐ):={A⊆ X: A=Λ_x(ℐ) for some sequence x∈ X^ω}∪{∅}. If the topological space X is understood, we write simply ℒ(ℐ).
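For instance, let X=ℝ and let x=(x_n) be the sequence with x_n:=1 if n is a perfect square and x_n:=0 otherwise. Since the set of perfect squares is infinite but has asymptotic density zero, and since any subsequence converging to 1 must eventually be supported on the squares, we get Λ_x(Fin)={0,1} while Λ_x(𝒵)={0}. In particular, the families ℒ_X(ℐ) really depend on the choice of the ideal ℐ.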
A remark is in order about the the addition of {∅} in the above definition: it has been proved by Meza-Alcántara in <cit.> that, if X=[0,1], then there exists a [0,1]-valued sequence x such that Λ_x(ℐ)=∅ if and only if there exists a function ϕ: ω→ℚ∩ [0,1] such that ϕ^-1[A] ∈ℐ for every set A⊆ℚ∩ [0,1] with at most finitely many limit points; cf. also <cit.> and, more generally, <cit.> for analogues in compact uncountable spaces. On the other hand, if X is not compact, it is easy to see that, for every ideal ℐ, there is a sequence with no ℐ-limit points. Thus, the addition of {∅} in the above definition avoids the repetition of known results in the literature and to add further subcases based on the topological structure of the underlying space X. We summarize in Theorem <ref> and Theorem <ref> below the known results from <cit.> about the families ℒ_X(ℐ). For, given a topological space X and an ordinal 1≤α <ω_1, we use the standard Borel pointclasses notations Σ^0_α(X) and Π^0_α(X), so that Σ^0_1(X) stands for the open sets of X, Π^0_1(X) for the closed sets, Σ_2^0(X) for the F_σ-sets, etc.; we denote by Δ^0_α(X):=Σ^0_α(X) ∩Π^0_α(X) the ambiguous classes; also, if X is a Polish space, Σ^1_1(X) stands for the analytic sets, Π^1_1(X) for the coanalytic sets, etc., see e.g. <cit.> or <cit.>. Again, we suppress the reference to the underlying space X if it is clear from the context. Recall that an ideal ℐ is a P-ideal if it is σ-directed modulo finite sets, that is, for every sequence (A_n) with values in ℐ there exists A ∈ℐ such that A_n∖ A is finite for all n ∈ω. Important examples include the Σ^1_1 P-ideals, which are known to be necessarily Π^0_3. A topological space X is discrete if it contains only isolated points. Let X be a nondiscrete first countable Hausdorff space. Then * ℒ(ℐ)⊆Π^0_1 if and only if ℐ is a Σ_2^0 ideal, provided that ℐ is a Σ^1_1 P-ideal. * ℒ(ℐ)⊆Σ^0_2, provided that ℐ is a Σ^1_1 P-ideal. If, in addition, all closed subsets of X are separable, then * ℒ(ℐ)=Π_1^0, provided that ℐ is a Σ_2^0 ideal. * ℒ(ℐ)=Σ_2^0, provided that ℐ is Σ_1^1 P-ideal which is not Σ_2^0. It follows by <cit.>. To state the next result, we recall some further definitions. An ideal ℐ has the hereditary Baire property if the restriction ℐ↾ A:={S∩ A: S ∈ℐ} has the Baire property for every A ∈ℐ^+. Note that all analytic ideals have the hereditary Baire property: indeed, the proof goes verbatim as in <cit.>, considering that analytic sets are closed under continuous preimages, and that they have the Baire property, see e.g. <cit.>. In addition, there exist ideals with the Baire property but without the hereditary Baire property, see e.g. <cit.>. Also, an ideal ℐ on ω is said to be a P^+-ideal if, for every decreasing sequence (A_n) with values in ℐ^+, there exists A ∈ℐ^+ such that A∖ A_n is finite for all n ∈ω. It is known that all F_σ-ideals are P^+-ideals, see <cit.> and <cit.>. We remark that P^+-ideal may have arbitrarily high Borel complexity, as it has been proved in <cit.> and <cit.>, cf. also <cit.>. Moreover, an ideal ℐ is called a Farah ideal if there exists a sequence (K_n) of hereditary compact subsets in 𝒫(ω) such that S ∈ℐ if and only if for all n ∈ω there exists k ∈ω such that S∖ [0,k] ∈ K_n, see <cit.>. It is known that all analytic P-ideals are Farah and that all Farah ideals are F_σδ. On the other hand, it is still unknown whether the converse holds, namely, all F_σδ ideals are Farah ideals, see <cit.> and <cit.>. Let X be a first countable Hausdorff space. 
Then * ℒ(ℐ) ⊆Π^0_1, provided that ℐ is P^+-ideal. * ℒ(ℐ) ⊆Π^0_1 if and only if ℐ is a P^+-ideal, provided that ℐ has the hereditary Baire property and X is nondiscrete metrizable. * ℒ(ℐ) ⊆Σ^0_2, provided that ℐ is a Farah ideal. * ℒ(ℐ)={{η}: η∈ X} if and only if ℐ is maximal, provided that |X|≥ 2 If, in addition, X is second countable, then * Π^0_1 ⊆ℒ(ℐ), provided that ℐ has the hereditary Baire property. * Σ^0_2 ⊆ℒ(ℐ), provided that ℐ has the hereditary Baire property and is not a P^+-ideal. It follows by <cit.> and <cit.>. (We added the only if part of item <ref>, which is straighforward.) In what follows, we divide our main results into four sections. First, we show in Section <ref> that ℒ(ℐ) can be equal to families of arbitrarily high Borel complexity. Also, we prove that we cannot have the equality ℒ(ℐ)=Π^0_2: more precisely, ℒ(ℐ)⊆Π^0_2 if and only if ℒ(ℐ)⊆Π^0_1. Then, we show that, if X is a first countable space and ℐ is a simply coanalytic ideal (see Section <ref> for details), then every Λ_x(ℐ) is simply analytic. Lastly, we study structural and topological properties of “smallest” ideals ℐ such that ℒ(ℐ) contains a given subsets of X. Putting the above results, we obtain: Let X be a nondiscrete metrizable space where all closed subsets of X are separable and ℐ be a Borel ideal on ω. Then the following are equivalent * ℒ(ℐ) = Π^0_1. * ℒ(ℐ) ⊆Π^0_1. * ℐ is a P^+-ideal. * ℐ is a Σ^0_2 ideal. In particular, there are no Borel ideals which are P^+-ideals but not F_σ. <ref> <ref> It is obvious. <ref> ⟺ <ref> It follows by Theorem <ref>.<ref>. <ref> ⟺ <ref> It follows by Theorem <ref>. <ref> <ref> It follows by Theorem <ref>.<ref>. § LARGE AND SMALL BOREL COMPLEXITIES Our first main result computes explicitly some families ℒ(ℐ), proving that they may attain arbitrarily large Borel complexity (Theorem <ref> below). This is somehow related to <cit.>, which asks about the existence of a Borel ideal ℐ such that ℒ(ℐ) contains sets with large Borel complexities. For, given (possibly nonadmissible) ideals ℐ and 𝒥 on two countably infinite sets Z and W, respectively, we define their Fubini product by ℐ×𝒥:={S⊆ Z× W{n ∈ Z: {k∈ W: (n,k)∈ S}∉𝒥}∈ℐ}}, which is an ideal on the countably infinite set Z× W, see e.g. <cit.>. Hence, recursively, Fin^α:=Fin×Fin^α-1 for all integers α≥ 2 is an ideal on ω^α. Let ℐ, 𝒥 be possibly nonadmissible ideals on ω such that ℐ×𝒥 is an admissible ideal on ω^2. Fix also a bijection h: ω^2→ω and let x=(x_n) be a sequence with values in a first countable space X. For each n ∈ω, define the sequence x^(n)=(x^(n)_k: k ∈ω) by x^(n)_k:=x_h(n,k) for all k ∈ω. Then Λ_x(h[ ℐ×𝒥])={η∈ X: {n ∈ω: η∈Λ_x^(n)(𝒥)}∉ℐ}, where h[ ℐ×𝒥] stands for the family {h[S]: S ∈ℐ×𝒥}. Let A and B be the left and right hand side of (<ref>), respectively. Inclusion A⊆ B. The inclusion is clear if A=∅. Otherwise fix a point η∈ A. Hence there exists a set S⊆ω^2 such that S∉ℐ×𝒥 and the subsequence (x_h(s): s ∈ S) is convergent to η. By the definition of Fubini product ℐ×𝒥, N:={n ∈ω: K_n∈𝒥^+}∈ℐ^+, where K_n:={k∈ω: (n,k) ∈ S}. At this point, for each n ∈ N, the subsequence (x^(n)_k: k∈ K_n) is convergent to η and K_n ∈𝒥^+. Since N ∈ℐ^+, we obtain that η∈ B. Inclusion B⊆ A. The inclusion is clear if B=∅. Otherwise fix a point η∈ B and let (U_n) be a decreasing local base of neighborhoods at η. Hence there exists N ∈ℐ^+ such that η is a 𝒥-limit point of x^(n) for each n ∈ N, let us say lim_k ∈ K_nx^(n)_k=η for some K_n ∈𝒥^+. 
Upon removing finitely many elements, we can suppose without loss of generality that ∀ n ∈ N, ∀ k ∈ K_n, x^(n)_k ∈ U_n. Now, set S:={(n,k)∈ω^2: n ∈ N, k ∈ K_n} and note that S∉ℐ×𝒥. It follows that the subsequence (x_h(s): s ∈ S) is convergent to η: indeed there are only finitely many elements of the subsequence outside each U_n. Therefore η∈ A, which concludes the proof. The above result holds for every topological space X if ℐ={∅}. Indeed, in the second part of the proof it is enough to let N be a singleton. It is worth noting that if ℐ is an ideal on a countably infinite set Z, ϕ: Z→ω is a bijection, and x∈ X^Z is a Z-indexed sequence with values in a topological space X, then ϕ[ ℐ]:={ϕ[S]: S ∈ℐ} is an ideal on ω and Λ_x(ℐ)=Λ_y(ϕ[ ℐ]), where y∈ X^ω is the sequence defined by y_n:=x_ϕ^-1(n) for all n ∈ω. Hence we may use interchangeably ℒ(ℐ) or ℒ(ϕ[ ℐ]). In particular, Theorem <ref> allows to compute explicitly families of the type ℒ(ℐ×𝒥). For, we state two consequence of Theorem <ref>: Let X be a topological space and ℐ be an ideal on ω. Then ℒ(∅×ℐ)={⋃_nA_n: A_0,A_1,…∈ℒ(ℐ)}. It follows by Theorem <ref> and Remark <ref> that ℒ(∅×ℐ) ={Λ_x(∅×ℐ): x ∈X^ω} ={⋃_nΛ_x^(n)(ℐ): x ∈X^ω} ={⋃_nΛ_x^(n)(ℐ): x^(0), x^(1), …∈X^ω} ={⋃_nA_n: A_0,A_1,…∈ℒ(ℐ)}, completing the proof. Let X be a first countable space and ℐ be an ideal on ω. Then ℒ(Fin×ℐ)={lim sup_nA_n: A_0,A_1,…∈ℒ(ℐ)}. It follows by Theorem <ref> that ℒ(Fin ×ℐ) ={Λ_x(Fin ×ℐ): x ∈X^ω} ={{η∈X: ∃^∞n ∈ω, η∈Λ_x^(n)(ℐ)}: x ∈X^ω} ={⋂_n⋃_k≥nΛ_x^(n)(ℐ): x^(0), x^(1), …∈X^ω} ={lim sup_nA_n: A_0,A_1,…∈ℒ(ℐ)}, completing the proof. At this point, recall that Fin is a F_σ-ideal, and that ∅×Fin is an analytic P-ideal which is not F_σ, see <cit.> and <cit.>. Thus, it follows by Theorem <ref> that ℒ(Fin)= Π^0_1 and ℒ(∅×Fin)= Σ^0_2. With the above premises, we are able to extend (<ref>) to certain ideals with large Borel complexity families ℒ(ℐ). Let X be a complete metrizable space. Then, for each positive integer α, we have * Fin^α is a Σ^0_2α-ideal and ℒ(Fin^α)= Π^0_2α-1. * ∅×Fin^α is a Π^0_2α+1-ideal and ℒ(∅×Fin^α)= Σ^0_2α. <ref> The complexity of Fin^α is obtained applying recursively <cit.>, while the computation of ℒ(Fin^α) is obtained putting together the base case (<ref>), Corollary <ref>, and <cit.>. The proof of <ref> goes similarly, replacing Corollary <ref> with Corollary <ref>. In particular, if X is a complete metrizable space, ℒ_X(Fin^2)=Π^0_3. This provides a generalization of <cit.>, where it is proved constructively that there exists a real sequence x such that Λ_x(Fin^2) is equal to [0,1]∖ℚ (note that the latter is not a F_σ-set, hence ℒ_ℝ(Fin^2)∩ (Π^0_2 ∖Σ^0_2) is nonempty). Our second main result deals with ideals with small topological complexity. Suppose that X is a first countable space, and recall that ℒ(ℐ)⊆Π^0_1, provided ℐ is a Σ^0_2-ideal, see <cit.>; note that the Hausdorffness hypothesis is not needed here. In the next result, we are going to show that the same conclusion holds if ℐ is a Σ^0_3-ideal. Let X be a first countable space, and suppose that ℐ is a Σ^0_3-ideal. Then inclusion (<ref>) holds. It is worth noting that Theorem <ref> is not a consequence of the former result as it really includes new cases: indeed, as remarked also by Solecki in <cit.>, there exists a Δ^0_3-ideal on ω (hence, both Σ^0_3 and Π^0_3) which is neither Σ^0_2 nor Π^0_2, see <cit.>; cf. also <cit.> and <cit.>. In addition, Theorem <ref> allows us to prove a generalization of the folklore result that every Σ^0_2-ideal is a P^+-ideal, see <cit.>. 
For a different proof of the second part, see also <cit.>. Let ℐ be a Σ^0_3-ideal on ω. Then ℐ is a P^+-ideal, and there exists a Σ^0_2-ideal 𝒥 such that ℐ⊆𝒥. The first part follows putting together Theorem <ref> and Theorem <ref>.<ref> (note that ℐ is Borel, hence with the hereditary Baire property). The second part follows by the known fact that a Borel ideal on ω is contained in a Σ^0_2-ideal if and only if it is contained in a P^+-ideal, see <cit.>. At this point, we divide the proof of Theorem <ref> into two intermediate steps. To this aim, we recall that properties of ideals can be often expressed by finding critical ideals with respect to some preorder, cf. e.g. the survey <cit.>. To this aim, let ℐ and 𝒥 be two ideals on two countably infinite sets Z and W, respectively. Then we say that ℐ is below 𝒥 in the Rudin–Blass ordering, shortened as ℐ≤_RB𝒥, if there is a finite-to-one map ϕ: W→ Z such that S ∈ℐ if and only if ϕ^-1[S] ∈𝒥 for all subsets S⊆ Z. The restriction of these orderings to maximal ideals ℐ, and the Borel cardinality of the quotients 𝒫(Z)/ℐ have been extensively studied, see e.g. <cit.> and references therein. Let X be a first countable Hausdorff space and ℐ be an ideal on ω with the hereditary Baire property. Suppose that inclusion (<ref>) does not hold. Then ∅×Fin≤_RBℐ. Since inclusion (<ref>) fails and X is first countable, there exists a sequence x such that S:=Λ_x(ℐ) is not closed, hence not sequentially closed. Therefore there exists a sequence y taking values in S which is convergent to some limit η∈ X∖ S. Since X is Hausdorff, we may suppose without loss of generality that y_n ≠ y_m for all distinct n,m ∈ω. Now, for each n ∈ω, there exists A_n ∈ℐ^+ such that lim_k ∈ A_nx_k=y_n. Define B_0:=A_0∪ (ω∖⋃_nA_n) and, recursively, B_n+1:=A_n+1∖⋃_k≤ nB_k for all n∈ω. Hence {B_n: n∈ω} is a partition of ω into ℐ-positive sets, for each n ∈ω, the restriction ℐ↾ B_n is an ideal on B_n with the Baire property. It follows by Talagrand's characterization of meager ideals that Fin≤_RBℐ↾ B_n for each n ∈ω. More explicitly, for each n ∈ω there exists a finite-to-one map ϕ_n: B_n →ω such that ∀ W ⊆ω, ϕ_n^-1[W] ∈ℐ if and only if W ∈Fin, see <cit.>; cf. also <cit.> for further characterizations of meager ideals based of ℐ-limit points of sequences. Define the map ϕ: ω→ω^2 by ∀ n∈ω, ∀ k ∈ B_n, ϕ(k)=(n,ϕ_n(k)). We claim that ϕ is a witnessing function for ∅×Fin≤_RBℐ. For, suppose that W⊆ω^2 belongs to ∅×Fin. Then ϕ^-1[W]=⋃_n ∈ωϕ_n^-1[{k ∈ω: (n,k) ∈ W}] Since each ϕ_n is finite-to-one, ϕ^-1[W] has finite intersection with each B_n. Then either ϕ^-1[W] is finite or the subsequence (x_n: n ∈ϕ^-1[W]) is convergent to η, while η is not an ℐ-limit point of x. Hence, in both cases, ϕ^-1[W] ∈ℐ. Conversely, suppose that W does not belong to ∅×Fin, so that there exists n_0 ∈ω such that {k ∈ω: (n_0,k) ∈ W}∉Fin. It follows by (<ref>) and (<ref>) that ϕ^-1[W] contains ϕ_n_0^-1[{k ∈ω: (n_0,k) ∈ W}] ∈ℐ^+. Therefore ϕ^-1[W]∈ℐ if and only if W ∈∅×Fin for each W⊆ω^2. Under the same hypotheses of Lemma <ref>, Σ^0_2 ⊆ℒ(ℐ). Thanks to Lemma <ref> and <cit.>, we have ℒ(∅×Fin) ⊆ℒ(ℐ). The claim follows by Equation (<ref>). For the next intermediate result, given topological spaces X,Y and subsets A⊆ X and B⊆ Y, we say that A is Wadge reducible to B, shortened as A ≤_W B, if there exists a continuous map Φ: X → Y such that Φ^-1[B]=A (or, equivalently, x ∈ A if and only if Φ(x) ∈ B for all x ∈ X), see e.g. <cit.>. 
If, in addition, X and Y are Polish spaces with X zero-dimensional, then B is said to be Π^0_3-hard if A ≤_W B for some A ∈Π^0_3(X). Lastly, if B is a Π^0_3(Y) set which is also Π^0_3-hard, then it is called Π^0_3-complete. (Analogous definitions can be given for other classes of sets in Polish spaces, see <cit.>.) Let ℐ be an ideal on ω such that ∅×Fin≤_RBℐ. Then ℐ is Π^0_3-hard. First, recall that ∅×Fin is a Π^0_3-complete subset of 𝒫(ω^2), see e.g. <cit.>. Hence, to complete the proof, it is sufficient to show that ∅×Fin≤_Wℐ. By hypothesis, there exists a finite-to-one function ϕ: ω→ω^2 such that S ∈∅×Fin if and only if ϕ^-1[S] ∈ℐ. Now, define the map Φ: 𝒫(ω^2) →𝒫(ω) by Φ(S):=ϕ^-1[S], so that S ∈∅×Fin if and only if Φ(S) ∈ℐ. Hence, we only need to show that Φ is continuous. For, fix n ∈ω and define S:=ϕ[{0,…,n}]. It follows that, for all A,B⊆ω^2 with A∩ S=B∩ S, Φ(A) ∩[0,n] ={k ∈ω: ϕ(k) ∈A and k≤n} ={k ∈ω: ϕ(k) ∈A ∩S} =Φ(B) ∩[0,n], which proves the continuity of Φ. Suppose that X is a first countable space and ℐ is an ideal on ω such that inclusion (<ref>) fails. We claim that ℐ is not a Σ^0_3-ideal. If ℐ is not a Borel ideal, then the claim is trivial. Hence, let us suppose hereafter that ℐ is Borel. In particular, ℐ has the hereditary Baire property. At this point, it follows by Lemma <ref> and Lemma <ref> that ℐ is a Π^0_3-hard. To sum up, ℐ is a Borel subset of a zero-dimensional Polish space and it is Π^0_3-hard. We conclude by <cit.> that the ℐ is not a Σ^0_3-ideal. § SIMPLY ANALYTICITY In <cit.>, the first and last-named authors proved the following: Let x be a sequence taking values in a Hausdorff regular first countable space X. Let also ℐ be a coanalytic ideal on ω. Then Λ_x(ℐ) is analytic. However, the classical definition of analytic sets A⊆ X as projections of Borel subsets of X× X is usually considered in Polish spaces X, cf. e.g. <cit.> or <cit.>. In addition, the above result has been re-proved in <cit.> for zero-dimensional Polish spaces X, so that ℒ(ℐ)⊆Σ^1_1. whenever ℐ is a Π^1_1-ideal (notice that the above notation is meaningful). In this Section, our aim is to reformulate and clarify the statement and the proof of Proposition <ref>, extending in turn both the latter and the special case treated in <cit.>. Note that there are several papers in which the notion of analytic set is adapted to more general topological spaces, see e.g. <cit.>. The theory of the latter extension, called K-analytic sets, is important and nontrivial, and it does not proceed verbatim as in the classical one of analytic sets. For our purposes, we generalize in a straight way one of possible definitions of analytic sets to the case of arbitrary topological spaces. Hereafter, π_X stands for the usual projection on X. Let X be a topological space. A subset A⊆ X is said to be simply analytic, shortened as s-analytic, if there exists an uncountable Polish space Y and a Borel subset B⊆ X× Y such that A=π_X[B]. Note that Definition <ref> is coherent with the classical notion of analytic set. Indeed, it is well known that a subset A of a Polish space X is analytic if and only if there exists an uncountable Polish space Y and a Borel B⊆ X× Y such that A=π_X[B], cf. <cit.>. At this point, let Y_1,Y_2 be two uncountable Polish spaces. Then there exists a Borel isomorphism h: Y_2→ Y_1, as it follows by <cit.>. Let A:=π_X[B_1] be a s-analytic subset for a Borel set B_1⊆ X× Y_1. Then B_2:={(x,y) ∈ X× Y_2: (x,h(y)) ∈ B_1} is a Borel subset of X× Y_2 by Lemma <ref> below and, clearly, π_X[B_2]=A. 
Therefore, in Definition <ref> one may assume without loss of generality that Y=ω^ω. Lastly, a subset A⊆ X is said to be s-coanalytic if A^c:=X∖ A is s-analytic. Observe that every Borel set in X is both s-analytic and s-coanalytic. These observations enable us to consider Proposition <ref> valid as it was stated in the original version in <cit.>. They allow, in addition, to remove in its statement the hypothesis of regularity (and also the property of being Hausdorff; however, the latter one has been used in <cit.> only because a first countable space is Hausdorff if and only if every sequence has at most one limit): Let x be a sequence taking values in a first countable space X. Let also ℐ be a s-coanalytic ideal on ω. Then Λ_x(ℐ) is s-analytic. For the proof of Theorem <ref>, we will need some intermediate lemmas. Let f: X→ Z and g: Y→ W be Borel functions, where X,Y,Z,W are topological spaces. Then the map h: X× Y → Z× W defined by ∀ (x,y) ∈ X× Y, h(x,y):=(f(x),g(y)) is Borel. Let U⊆ Z and V⊆ W be arbitrary open sets. It is enough to show that h^-1[U× V] is a Borel subset of X× Y. The latter set is equal to f^-1[U]× g^-1[V], which belongs to ℬ(X)⊗ℬ(Y). The claim follows by the inclusion ℬ(X)⊗ℬ(Y)⊆ℬ(X× Y), see e.g. <cit.>. (For the converse inclusion of the latter, which does not hold for every X and Y, see <cit.>.) Let f X→ Y be a Borel function, where X is a topological space and Y is an uncountable Polish space. If A⊆ Y is an analytic set, then f^-1[A] is a s-analytic subset of X. Since A⊆ Y is analytic, there exists a Borel subset B⊆ Y× Y such that A is the projection on the first coordinate of B. At this point, define C:={(x,y) ∈ X× Y: (f(x),y) ∈ B}. Then C is Borel set by Lemma <ref> and π_X[C]=f^-1[A]. Therefore f^-1[A] is a s-analytic set, which concludes the proof. Let A⊆ X× Y be a s-analytic set, where X is a topological space and Y is an uncountable Polish space. Then C:=π_X[A] is a s-analytic set. Let Z be an uncountable Polish space and B⊆ X× Y× Z be a Borel set such that A=π_X× Y[B]. Now, observe that C=π_X[B] and obviously Y× Z is an uncountable Polish space. Let (A_n) be a sequence of s-analytic subsets of a topological space X. Then both ⋃_n A_n and ⋂_n A_n are s-analytic. The proof proceeds verbatim as in <cit.>. We are ready for the proof of Theorem <ref>. Note that ℐ^+ is s-analytic and let 𝒩 be the set of strictly increasing sequences (n_k) of nonnegative integers. Since 𝒩 can be regarded as a closed subset of ω^ω, it is a Polish space by Alexandrov's theorem, see e.g. <cit.>. For each η∈ X, fix a decreasing local base (U_η,m: m ∈ω) of open neighbourhoods of η. Then η is an ℐ-limit point of x if and only if there exists a sequence (n_k) ∈𝒩 such that {n_k: k ∈ω}∈ℐ^+ and {k ∈ω: x_n_k∉ U_η,m}∈Fin for all m ∈ω. At this point, define the continuous function ψ𝒩→{0,1}^ω: (n_k) ↦χ_{n_k k∈ω}, where χ_S stands for the characteristic function of a set S⊆ω. In addition, for each m ∈ω, define the function ζ_m𝒩× X→{0,1}^ω by ∀ (n_k) ∈𝒩, ∀η∈ X, ζ_m((n_k), η):=χ_{ t ∈ω: x_n_t∉ U_η,m} Identifying each S⊆ω with its charateristic function χ_S, it follows that Λ_x(ℐ)=π_X[(ψ^-1[ ℐ^+]× X)∩⋂_m ∈ωζ^-1_m[Fin]]. ψ^-1[ ℐ^+]× X is a s-analytic subset of 𝒩× X. Thanks to Lemma <ref> and the fact that ψ is continuous, ψ^-1[ ℐ^+] is an (ordinary) analytic subset of 𝒩. Hence there exists an uncountable Polish space Y and a Borel set B⊆𝒩× Y such that ψ^-1[ ℐ^+]=π_𝒩[B]. The claim follows from the fact that B× X∈ℬ(𝒩× Y) ×ℬ(X) ⊆ℬ(𝒩× Y× X), see e.g. <cit.>, and that π_𝒩× X[B× X]=π_𝒩[B]× X. 
For each η∈ X and m ∈ω, the section ζ_m( ·, η) is continuous. Fix η∈ X and m ∈ω. It is enough to show that the section ζ_m( ·, η) is sequentially continuous. For, pick a sequence (n_k^(p): p ∈ω) of elements of 𝒩 which is convergent to some (n_k) ∈𝒩. Then, for each t ∈ω, there exists p_t ∈ω such that n_t^(p)=n_t for all p≥ p_t. Hence ζ_m((n_k^(p)), η)(t)= ζ_m((n_k), η)(t) for all p≥ p_t. This proves that lim_p ζ_m((n_k^(p)), η)= ζ_m((n_k), η). For each (n_k)∈𝒩 and m ∈ω, the section ζ_m((n_k), · ) is Borel measurable. Fix (n_k)∈𝒩 and m ∈ω and define for convenience ξ: X→{0,1}^ω by ξ(η):=ζ_m((n_k), η) for all η∈ X. It is enough to show that the preimage ξ^-1[U] is Borel in X for every basic open set U⊆𝒩. This is clear if U=∅. Otherwise there exist p∈ω and r_0,r_1,…,r_p ∈{0,1} such that U={a ∈{0,1}^ω: a(t)=r_t for all t=0,1,…,p}. Set A:={t ∈{0,1,…,p}: r_t=0} and B:={t ∈{0,1,…,p}: r_t=1}. It follows that ξ^-1[U]={η∈ X: ζ_m((n_k), η)(t)=r_t for all t=0,1,…,p} =(⋂_t ∈ A{η∈ X: ζ_m((n_k), η)(t)=0}) ∩(⋂_t ∈ B{η∈ X: ζ_m((n_k), η)(t)=1}) =(⋂_t ∈ A{η∈ X: x_n_t∈ U_η,m}) ∩(⋂_t ∈ B{η∈ X: x_n_t∉ U_η,m}) =(⋂_t ∈ A⋃{U_η,m: η∈ X, x_n_t∈ U_η,m}) ∩(⋂_t ∈ B⋂{U_η,m^c: η∈ X, x_n_t∉ U_η,m}). Since each U_η,m is open, we conclude that ξ^-1[U] is a Borel set. For each m ∈ω, the set ζ_m^-1[Fin] is Borel. Thanks to Claim <ref>, Claim <ref>, and <cit.>, each map ζ_m is Borel measurable. The conclusion follows since Fin is a F_σ-set. To conclude the proof of Theorem <ref>, we have by Claim <ref> and Claim <ref> that both ψ^-1[ ℐ^+]× X and ζ_m^-1[Fin] are s-analytic subsets of 𝒩× X, hence also their intersection by Lemma <ref>. By the identity (<ref>), Λ_x(ℐ) is the projection on X of a s-analytic subset of 𝒩× X, which is s-analytic by Lemma <ref>. § MINIMAL IDEALS ℐ_W AND THEIR COMPLEXITIES In this last Section, following the line of research initiated in <cit.>, we recall the definition of certain ideals and we study their structural and topological complexity. For, given a sequence x taking values in a first countable Hausdorff space X and a subset W⊆ X, define the ideal ℐ_W:={A⊆ω: L_x↾ A∩ W= ∅}, where L_x↾ A:=Λ_x↾ A(Fin), see <cit.>. More explicitly, A ⊆ω belongs to ℐ_W if and only if A is finite or, otherwise, A is infinite and there are no infinite subsets B⊆ A such that (x_n: n ∈ B) is convergent to some element of W. Note that ℐ_W may fail to be admissible: for instance, if W=∅ then ℐ_W=𝒫(ω). The main reason for its introduction is the following: Let x be a sequence taking values in a first countable Hausdorff space X and fix a subset W⊆ X. Then Λ_x(ℐ_W)=W ∩L_x. In particular, if X is separable, then W ∈ℒ(ℐ_W). The first part follows by <cit.>. This implies that, if W⊆L_x, then Λ_x(ℐ_W)=W, so that W ∈ℒ(ℐ_W). Hence, the second part is obtained by choosing a sequence x with dense image. Note that Theorem <ref> proves that, even if X=ℝ, the family ℒ(ℐ) may contain sets which are not Borel, which answers an open question in <cit.>. It is also worth remarking that ideals ℐ_W are not the only ones for which (<ref>) holds, see <cit.>; cf. also <cit.> for further refinements of Theorem <ref> in regular and Polish spaces. However, we show that they are the smallest ideals with such property. For, we say that ℐ contains an isomorphic copy of 𝒥 if there exists a bijection ϕ on ω such that ϕ[ 𝒥]⊆ℐ, where ϕ[ 𝒥] stands for the family {ϕ[S]: S ∈𝒥}. Let x be a sequence taking values in a compact metric space X and fix a dense subset W⊆ X. Let also ℐ be an ideal on ω such that W ∈ℒ(ℐ).
Suppose also that x has dense image, and that ∀ A ∈ℐ^+, ∀y∈ X^ω, Λ_y↾ A( ℐ↾ A)≠∅. Then ℐ contains an isomorphic copy of ℐ_W. By hypothesis, there exists a sequence y taking values in X such that Λ_y(ℐ)=W. Let d be a compatible metric on X and note that the denseness of W implies that L_y=X. Since X is compact, there exists a strictly increasing sequence of integers (ι_n: n ∈ω) such that ι_0:=0 and, for each n ∈ω, the family of open balls {B(x_k,2^-n): k ∈ [ι_n,ι_n+1)} is an open cover of X. At this point, define recursively the map ϕ: ω→ω as it follows: * ϕ(0) is the smallest integer k∈ω such that d(y_ϕ(k), x_0)<1; * for each integer m ∈ (1,ι_1), ϕ(m) is the smallest integer k∈ω such that d(y_ϕ(k), x_m)<1 and k∉ϕ[{0,1,…,m-1}]; * for each n,m ∈ω with n≥ 1 and m ∈ [ι_n,ι_n+1), ϕ(m) is the smallest integer k∈ω such that d(y_ϕ(k), x_m)<2^-n and k∉ϕ[{0,1,…,m-1}]. It follows by construction that ϕ is a bijection on ω which satisfies lim_n→∞ d(y_ϕ(n), x_n)=0. Fix A ∈ℐ^+ and note that the set of ℐ↾ A-limit points of y↾ A is contained both in L_y↾ A and in Λ_y(ℐ). Since the latter is equal to W, we obtain by (<ref>) and (<ref>) that ∅≠Λ_y↾ A( ℐ↾ A)⊆L_y↾ A∩ W= L_x↾ϕ^-1[A]∩ W. By the definition of ℐ_W, we get ϕ^-1[A] ∉ℐ_W. Therefore ϕ[ ℐ_W] ⊆ℐ. As remarked in the Introduction, the technical condition (<ref>) has been already studied in the literature. For instance, if X is compact, all F_σ-ideals ℐ satisfy (<ref>), taking into account <cit.> and <cit.>. However, we show prove that they are the smallest ideals with such property. For, we say that ℐ contains an isomorphic copy of 𝒥 if there exists a bijection ϕ on ω such that ϕ[ 𝒥]⊆ℐ. Hereafter, we assume for simplicity that X is the Cantor space and x is an enumeration of the rationals (i.e., finitely supported sequences) of X, with x_0:=0. Let W be a dense subset of the Cantor space X={0,1}^ω, and ℐ be an ideal such that W ∈ℒ(ℐ). Then ℐ contains an isomorphic copy of ℐ_W. By hypothesis there exists a sequence y taking values in X such that Λ_y(ℐ)=W. Since W is dense, then L_y=X. Define the map σ: ω→ω by σ(0):=0 and, recursively, σ(n+1) is the smallest integer k ∈ω∖σ[{0,…,n}] such that y_k(i)=x_n+1(i) for all i∈ω in the (nonempty finite) support of x_n+1. Note that σ is well defined, thanks to (<ref>). By construction σ is injective. By the denseness of x and y, it is not difficult to see that σ is surijective, hence ϕ:=σ^-1 is a bijection on ω. To conclude the proof, we claim that ϕ[ ℐ_W]⊆ℐ. For, note that lim_n d(x_n,y_n)=0, where d is a compatible metric on X. Hence, for each infinite S⊆ω and each η∈ X, we have lim_n ∈ Sx_n=η if and only if lim_n ∈ Sy_σ(n)=η. Pick η∈ W, so that there exists S ∈ℐ^+ such that lim_n ∈ Sy_n=η. By the above equivalence and the definition of ℐ_W, we obtain that lim_n ∈ϕ[S]x_n=η, hence ϕ[S] ∈ℐ_W^+. The following result, due to He et al. <cit.>, deals with the topological complexity of ideals ℐ_W. Hereafter, we assume for simplicity that X is the Cantor space and x is an enumeration of the rationals i.e., finitely supported sequences of X, with x_0:=(0,0,…). Let W be a subset of the Cantor space X={0,1}^ω. Then * if W is closed then ℐ_W is a Σ^0_2-ideal * if W is Σ^0_2 then ℐ_W is a Π^0_3-ideal * if W is open then ℐ_W is an analytic P-ideal It follows by <cit.> by choosing a sequence x with dense image (the case W=∅ holds as well). 
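For a concrete instance of these ideals, included only as an illustration: with the above convention that x enumerates the rationals of the Cantor space X, take W=X. Since X is compact, every infinite set A⊆ω satisfies L_x↾ A≠∅, and therefore ℐ_X=Fin. This agrees with the preceding result, since X is both closed and open, and Fin is indeed a Σ^0_2 ideal and an analytic P-ideal; it also agrees with the identity Λ_x(ℐ_W)=W∩L_x, because Λ_x(Fin)=L_x=X.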
Taking into account also the results obtained in Section <ref>, we prove some characterizations for the [non]closedness of the subset W: Let W be a nonempty subset of the Cantor space X={0,1}^ω such that ℐ_W has the hereditary Baire property. Then the following are equivalent * W is not closed * ∅×Fin≤_RBℐ_W * ℐ_W is Π^0_3-hard * Σ^0_2⊆ℒ(ℐ_W). <ref> <ref>. Thanks to Theorem <ref>, W ∈ℒ(ℐ_W), hence the inclusion ℒ(ℐ_W)⊆Π^0_1 fails. The claim follows by Lemma <ref>. <ref> <ref>. It follows by Lemma <ref>. <ref> <ref>. It follows by Corollary <ref>. <ref> <ref>. If W is closed, then ℐ_W would be a Σ^0_2-ideal by Theorem <ref>. <ref> <ref>. As in the previous implication, if W is closed, then ℐ_W would be Σ^0_2. Hence ℒ(ℐ_W)=Π^0_1 by Theorem <ref>. However, since X is separable, there exists a countable dense subset, which is Σ^0_2 and not closed. On the same lines, we are able to characterize the openness of the set W and the P-property of the ideal ℐ_W, improving Theorem <ref><ref>: Let W be a subset of the Cantor space X={0,1}^ω. Then the following are equivalent * W is open * ℐ_W is an analytic P-ideal * ℐ_W is a P-ideal <ref> <ref>. It follows by Theorem <ref><ref>. <ref> <ref>. This is obvious. <ref> <ref>. Let us suppose that W is not open, i.e., W^c is not closed. Since X is metrizable, W^c is not sequentially closed. Hence it is possible to pick a sequence y taking values in W^c which is convergent to some η∈ W. Let us denote by d a compatible metric on X, so that lim_k d(y_k,η)=0. Since x is dense, for each k ∈ω there exists an infinite set A_k⊆ω such that limx↾ A_k=y_k. Considering that y_k ∈ W^c, it follows that L_x↾ A_k∩ W=∅, hence A_k ∈ℐ_W. At this point, pick a set A⊆ω such that A_k∖ A∈Fin for all k ∈ω. Set for convenience b_-1:=0 and define recursively a sequence of integers (b_k: k ∈ω) such that b_k is the smallest element of A∩ A_k for which d(y_k,x_b_k)≤ d(y_k,η) and b_k>b_k-1 (note that this is well defined). It follows by the triangle inequality that ∀ k ∈ω, d(x_b_k,η) ≤ d(x_b_k,y_k)+d(y_k,η)≤ 2d(y_k,η). Therefore B:={b_k: k ∈ω} is an infinite subset of A such that limx↾ B=η. Since η∈ W, this proves that L_x↾ A∩ W≠∅. Hence, by the definition of ℐ_W, we conclude that A ∉ℐ_W. To sum up, (A_k: k ∈ω) is a witnessing sequence of sets in ℐ_W which fails the P-property for the ideal ℐ_W. As a consequence, we obtain that there are only three possibilities for the complexity of ideals ℐ_W: Let W be a subset of the Cantor space X={0,1}^ω. Then exactly one of the following cases occurs: * ℐ_W is a Σ^0_2-ideal * ℐ_W is a Borel ideal which is not Σ^0_3 * ℐ_W is not a Borel ideal. First, if W=∅, then ℐ_W=𝒫(ω), which satisfies only <ref>. Hence suppose hereafter that W is nonempty. Suppose also that ℐ_W is Borel, so that <ref> fails and ℐ_W has the hereditary Baire property. If W is closed then ℐ_W is Σ^0_2 by Theorem <ref>, hence <ref> holds. If W is not closed then W is Π^0_3-hard by Theorem <ref>, which is equivalent to be not Σ^0_3 by <cit.>, hence <ref> holds. In light of Theorem <ref>, one may ask whether the third case in Corollary <ref> really occurs. We answer in the affirmative with our last main result: Let W be a Borel subset of the Cantor space X={0,1}^ω which is not Σ^0_2. Then ℐ_W is not analytic, hence not Borel. Putting together the above results, we have the following consequence: Let W be a Borel subset of the Cantor space X={0,1}^ω. Then the following are equivalent * W is Σ^0_2 * ℐ_W is a Π^0_3-ideal * ℐ_W is a Borel ideal * ℐ_W is an analytic ideal <ref> <ref>. 
It follows by Theorem <ref><ref>. <ref> <ref> <ref>. They are obvious. <ref> <ref>. It follows by Theorem <ref>. We recall now some definitions about trees, which will be needed in the proof of Theorem <ref>. Denote by {0,1}^<ω the set of finite {0,1}-sequences. We say that b=(b_0,…,b_m) ∈{0,1}^<ω is an extension of a=(a_0,…,a_n) ∈{0,1}^<ω, shortened as a⊆ b, if n≤ m and a_k=b_k for all k≤ n. Given an infinite sequence z ∈{0,1}^ω, we define z↾ n:=(z_0,…,z_n) ∈{0,1}^<ω for each n ∈ω. A tree on {0,1} is a subset T⊆{0,1}^<ω with the property that a ∈ T for all a,b ∈{0,1}^<ω such that a⊆ b ∈ T. The body of a tree T on {0,1}, denoted by [T], is the set of all its infinite branches, that is, the set of all sequences z ∈{0,1}^ω such that z↾ n ∈ T for all n ∈ω. A tree T on {0,1} is said to be pruned if every a ∈ T has a proper extension, that is, for all a∈ T there exists b ∈ T such that a≠ b and a⊆ b, see <cit.>. Accordingly, we define PTr_2:={T⊆{0,1}^<ω: T is a pruned tree }. Identifying a pruned tree with its characteristic function, the set PTr_2 can be regarded as a closed subset of the Polish space {0,1}^{0,1}^<ω, see <cit.>. Thanks to Alexandrov's theorem <cit.>, PTr_2 is a Polish space. With the above premises, let WF^⋆_2 be the set of pruned trees on {0,1} for which every infinite branch contains finitely many ones, that is, WF^⋆_2:={T ∈PTr_2: ∀ z ∈ [T], ∀^∞ n, z_n=0 }, see <cit.>. It is known that WF^⋆_2 is a Π^1_1-complete subset of PTr_2, hence it is coanalytic but not analytic, see <cit.>. More generally, given a nonempty subset W of the Cantor space {0,1}^ω, let 𝒯(W) be the set of pruned trees T on {0,1} which contain no infinite branch in W, that is, 𝒯(W):={T ∈PTr_2: [T] ∩ W = ∅}. It is immediate to check that 𝒯(W_irr) coincides with WF^⋆_2, where W_irr is the set of irrationals {z ∈{0,1}^ω: ∃^∞ n, z_n=1}. With the above premises, we are ready for the proof of Theorem <ref>. We divide the proof into some intermediate steps. Hereafter, as in the statement of the result, W is a given Borel subset of the Cantor space which is not Σ^0_2. Fix nonempty subsets A,B⊆{0,1}^ω such that A≤_WB. Then 𝒯(A)≤_W𝒯(B). Since A is Wadge reducible to B, there exists a continuous map ϕ: {0,1}^ω→{0,1}^ω such that ϕ^-1[B]=A, i.e., z ∈ A if and only if ϕ(z) ∈ B. Let 𝒦 be the family of compact subsets of {0,1}^ω, endowed with the Vietoris topology. It follows by <cit.> that the map ψ: PTr_2 →𝒦 defined by ψ(T):=[T] for all T ∈PTr_2 is a well-defined homeomorphism. In addition, its inverse map is given by ψ^-1(K)={z↾ n: z ∈ K, n ∈ω} for all K ∈𝒦. Thanks to <cit.>, the map Φ: 𝒦→𝒦 defined by Φ(K):={ϕ(z): z ∈ K} is continuous. At this point, define the function f: PTr_2 →PTr_2 by f:=ψ^-1∘Φ∘ψ, so that f(T)={ϕ(z)↾ n: z ∈ [T], n ∈ω} for each pruned tree T. Now, it is sufficient to show that f is a witnessing map for the claimed Wadge reduction (<ref>). For, note that f is a composition of three continuous maps, hence it is continuous. Moreover, the following chain of equivalences holds for each pruned tree T on {0,1}: T ∈𝒯(A) if and only if [T]∩A=∅, if and only if Φ([T])∩B=∅, if and only if [f(T)]∩B=∅, if and only if f(T) ∈𝒯(B). Therefore f^-1[𝒯(B)]=𝒯(A), which completes the proof. WF^⋆_2 ≤_W𝒯(W). Thanks to <cit.>, the Borel set W is Π^0_2-hard, that is, A≤_W W for all A ∈Π^0_2. In particular, W_irr≤_W W. Hence by Claim <ref>, we conclude that WF^⋆_2=𝒯(W_irr) ≤_W𝒯(W). 𝒯(W) is not analytic. By Claim <ref>, there exists a continuous map f: PTr_2→PTr_2 such that f^-1[𝒯(W)]=WF^⋆_2.
The conclusion follows by the facts that WF^⋆_2 is Π^1_1-complete (hence, not analytic) and that analytic sets are closed under continuous preimages, see e.g. <cit.>. 𝒯(W)≤_Wℐ_W. We need to show that there exists a continuous map h: PTr_2 →𝒫(ω) such that T ∈𝒯(W) if and only if h(T) ∈ℐ_W for all pruned trees T on {0,1}. First, define the map f: {0,1}^<ω→ω as follows: for each s=(s_0,…,s_k) ∈{0,1}^<ω, let f(s) be the unique nonnegative integer such that z_f(s)=(s_0,…,s_k,1,0,0,…). Of course, f is injective. Recalling that x_0=(0,0,…) is the unique sequence with empty support, it is easy to see the image of f is the set of positive integers. At this point, define the map h: PTr_2 →𝒫(ω) by h(T):={f(s): s ∈ T} for all T ∈PTr_2. Since T is an infinite and f is injective, then h(T) is infinite as well. It is also not difficult to show that h is continuous: indeed, a basic clopen set containing h(T) is of the type V:={S ∈ω: f(A)⊆ S and f(B)∩ S=∅} for some finite sets A,B⊆{0,1}^<ω such that every restriction of each s ∈ A does not belong to B. Then, the set U:={T ∈PTr_2: A ⊆ T and B∩ T=∅} is an open set such that h[U]⊆ V. Next, we claim that [T]=L_x↾ h(T). On the one hand, fix a ∈L_x↾ h(T). Then there exists a strictly increasing sequence of positive integers (p_k) such that lim_k z_p_k=a. For each k ∈ω, set s^k:=f^-1(p_k) and define ℓ_k ∈ω so that s^k=(s^k_0,s^k_1,…,s^k_ℓ_k). Hence z_p_k=z_f(s^k)=(s^k_0,s^k_1,…,s^k_ℓ_k,1,0,0,…). Fix n ∈ω. Then there exists k_0 ∈ω such that a ↾ n=z_p_k↾ n for all k≥ k_0. Since f is injective, there exists k_1≥max{n,k_0} such that ℓ_k_1≥ n. It follows that a ↾ n=z_f(s^k_1)↾ n=(s^k_1_0,s^k_1_1,…,s^k_1_n)=s^k_1↾ n. Since T is a tree and s^k_1∈ T, we obtain a ↾ n ∈ T. By the arbitrariness of n, it follows that a is an infinite branch of T. Therefore L_x↾ h(T)⊆ [T]. On the other hand, fix a ∈ [T], so that a ↾ n ∈ T for all n ∈ω. It follows that f(a ↾ n) ∈ h(T) and z_f(a ↾ n)=(a_0,…,a_n,1,0,0,…) for all n ∈ω. It is clear that lim_n z_f(a ↾ n)=a. Therefore the opposite inclusion [T]⊆L_x↾ h(T) holds. Recall that T ∈𝒯(W) if and only if [T] ∩ W=∅. Thanks to (<ref>), this is equivalent to L_x↾ h(T)∩ W=∅, that is, h(T) ∈ℐ_W. To conclude the proof, thanks to Claim <ref>, Claim <ref>, and the transitivity of ≤_W, we get WF^⋆_2 ≤_Wℐ_W. Hence there exists a continuous map g: PTr_2 →𝒫(ω) such that g^-1[ℐ_W]=WF^⋆_2. The conclusion follows by the facts that WF^⋆_2 is Π^1_1-complete (hence, not analytic) and that analytic sets are closed under continuous preimages, see e.g. <cit.>. It is possible to show that ℐ_W_irr is coanalytic, hence Π^1_1-complete. We leave as open question for the interested reader to check whether ℐ_W is always coanalytic whenever W⊆{0,1}^ω is Borel and not Σ^0_2. § TO CHECK FOR THE PROOF OF CLAIM 8 Let us show that 𝒢 is coanalytic. Note that A∈𝒢∀ B∈𝒫(ℕ) (B⊆ A x|B → t t∈ℚ_2^ℕ) One can show that the set {(A,B)∈𝒫(ℕ)^2:(B⊆ A x|B → t t∈ℚ_2^ℕ)} is Borel. Thus 𝒢 is coanalytic. § ACKNOWLEDGMENTS The authors are grateful to Rafal Filipów and Adam Kwela (University of Gdansk, PL) for several discussions related to the results of the manuscript. amsplain
http://arxiv.org/abs/2407.13624v1
20240718155930
Model-theoretic $K_1$ of free modules over PIDs
[ "Sourayan Banerjee", "Amit Kuber" ]
math.LO
[ "math.LO", "math.KT", "03C60, 19B99, 03C07, 19D23, 19B14" ]
§ ABSTRACT Motivated by Krajiček and Scanlon's definition of the Grothendieck ring K_0(M) of a first-order structure M, we introduce the definition of K-groups K_n(M) for n≥0 via Quillen's S^-1S construction. We provide a recipe for the computation of K_1(M_R), where M_R is a free module over a PID R, subject to the knowledge of the abelianizations of the general linear groups GL_n(R). As a consequence, we provide explicit computations of K_1(M_R) when R belongs to a large class of Euclidean domains that includes fields with at least 3 elements and polynomial rings over fields with characteristic 0. We also show that the algebraic K_1 of a PID R embeds into K_1(R_R). § INTRODUCTION For a first-order structure M over a language L, Krajiček and Scanlon <cit.> defined the model-theoretic Grothendieck ring, denoted K_0(M); this ring classifies cut-and-paste equivalence classes of definable subsets (with parameters) of finite powers of M up to definable bijections. Let (M) denote the groupoid whose objects are all definable subsets of M^n for n≥ 1, and whose morphisms are definable bijections between them. In fact, ((M),⊔,∅,×,{*}) is a symmetric monoidal groupoid with a pairing, where ⊔ denotes disjoint union, × is the Cartesian product, and {*} is a singleton. Moreover, this association is functorial on elementary embeddings. It is easy to see that the Grothendieck ring K_0(M) is exactly the ring K_0((M)). Given a (skeletally) small symmetric monoidal groupoid , Quillen <cit.> gave a functorial construction of the abelian groups (K_n())_n≥0, known as the K-theory of , which seek to classify different aspects of its objects and morphisms. When the objects of are sets and the monoidal operation is the disjoint union, then the Grothendieck group K_0() classifies the isomorphism classes of objects of up to `scissors-congruence' while the group K_1() classifies the automorphisms of objects of , i.e., maps that cut an object into finitely many pieces which reassemble to give the same object, in the direct limit as the objects become large with respect to ⊔. Following Quillen's construction discussed above, we define the model-theoretic K-theory of the first order structure M by K_n(M):=K_n((M)) for n≥ 0. Associated to a unital ring R is a language L_R:=⟨+,-,0,{·_r| r∈ R}⟩, where ·_r is a unary function symbol describing the right action of the scalar r. Thus, a right R-module M_R can be thought of as an L_R-structure. The theory T:=Th(M_R) of the module M_R admits partial elimination of quantifiers <cit.> in terms of subgroups of M_R defined using positive primitive (pp, for short) formulas. In fact, the theory T is completely determined by the set of finite indices of pairs of pp-definable subgroups of M_R. Say that the theory T is closed under products, written T=T^ℵ_0, if given any pair A≤ B of pp-definable subgroups of M_R, the index [B:A] is either 1 or ∞; otherwise we write T≠ T^ℵ_0. The second author computed in <cit.> the model-theoretic Grothendieck ring K_0(M_R) of a module M_R, and showed that this ring is isomorphic to a certain quotient ℤ[𝒳]/𝒥 of the integral monoid ring ℤ[𝒳], where 𝒳 is the multiplicative monoid consisting of pp-definable subgroups of powers of M_R, and 𝒥 is the invariants ideal of the monoid ring encoding finite indices of pairs of pp-definable subgroups; T=T^ℵ_0 if and only if 𝒥={0}.
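Two standard examples may help to fix these notions (they are only meant as illustrations). If M_R is an infinite vector space over an infinite field, then every pp-definable subgroup of a power of M_R is a subspace and every proper subspace has infinite index, so T=T^ℵ_0 and the invariants ideal 𝒥 vanishes. On the other hand, for the module ℤ_ℤ the subgroup 2ℤ is pp-definable by the formula ∃ y (x=y·2) and has index 2 in ℤ, so T≠ T^ℵ_0 and 𝒥 is nontrivial.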
The main contribution of this paper is the computation of the model-theoretic group K_1(M_R) for an infinite free module M_R over a principal ideal domain (PID) R. When R is a PID, the monoid 𝒳 appearing in its Grothendieck ring is isomorphic to ℕ, which allows us to associate to each non-empty definable set D a natural number (D) as its dimension. The automorphism(=definable self-bijection) group Ω(D) of a definable set D admits a finite chain of normal subgroups (Ω_k(D))_k=0^(D), where Ω_k(D) is the group of automorphisms which fix all elements outside a subset of dimension at most k. The pp-definable automorphisms, which include invertible affine linear transformations, are central in determining the quotients Ω_k+1(D)/Ω_k(D). Using Bass' description of K_1(M_R) (Theorem <ref>) as the colimit _n∈ℕ(Ω(M^n))^ab, we then obtain Theorem <ref>, which is the recipe for the computation of K_1(M_R) subject to the knowledge of the general linear groups GL_n(R) for n≥ 1. All the steps of this recipe are described in detail in the computation of K_1(V_F) (Theorem <ref>), where V is an infinite vector space over an infinite field F, and later only the differences for the PID case are highlighted. As a consequence of Theorem <ref>, we obtain the following explicit description of K_1(M_R) from Theorems <ref> and <ref> when the ring R belongs to a large class of Euclidean domains that includes fields with at least 3 elements (Corollaries <ref> and <ref>) and polynomial rings over fields with characteristic 0 (Corollary <ref>). Let R be an Euclidean domain that contains units u,v satisfying u+v=1. Further, suppose that M_R is an infinite free right R-module with T:=Th(M_R). Let 𝐆 denote the set of pp-definable finite-index subgroups M_R equipped with the subgroup relation ≤. If T=T^ℵ_0 or if the directed system (𝐆,≤)^op contains a cofinal system of even-indexed subgroups of M_R then K_1(M_R)≅ K_1(R_R)≅ℤ_2⊕⊕_n=1^∞((GL_n(R))^ab⊕ℤ_2)≅ℤ_2⊕⊕_n=1^∞(R^×⊕ℤ_2); otherwise K_1(M_R)≅ K_1(R_R)≅ℤ_2⊕⊕_n=1^∞((GL_n(R))^ab⊕ℤ_2⊕ℤ_2)≅ℤ_2 ⊕⊕_n=1^∞(R^×⊕ℤ_2⊕ℤ_2). Note in the above result that K_1(M_R) depends only on the ring R, and not on the free module M_R. Even though the ring of integers does not satisfy the hypotheses of the above theorem, we show in Theorem <ref> that K_1(_)≅⊕_n=0^∞_2. Our recipe to compute K_1(M_R) has its limitations for explicit computations over an arbitrary PID R since it heavily relies on the knowledge of the groups (GL_n(R))^ab, and unfortunately the literature in this direction is scarce. In Remarks <ref> and <ref>, we note the obstacles one would face if they were to compute K_1(M_R) when R is the field F_2 with two elements and respectively. We also give a surprising connection between algebraic K-theory and model-theoretic K-theory in the context of PIDs–no such connection is known to exist at the level of Grothendieck rings. In particular, we show in Theorem <ref> that the algebraic K_1-group over a PID R, K_1^⊕(R), embeds into K_1(R_R). The rest of the paper is organized as follows. In section,  <ref>, we briefly recall the basic theory of semi-direct products, where we document the results on the abelianization of semi-direct products and certain wreath products. After setting up model-theoretic terminology, we recall the construction of the Grothendieck ring of a module thought of as a model-theoretic structure in  <ref>. 
Then in  <ref> we briefly recall Quillen's K-theory of a symmetric monoidal groupoid and use it to associate K-groups to a model-theoretic structure with a special emphasis on Bass' alternate description of K_1 (Theorem <ref>). Given an infinite vector space V_F over an infinite field F, the detailed computation of K_1(V_F) in  <ref> acts as a template for the computation of K_1(M_R) in later sections. We describe in  <ref> how the recipe of the previous section can be modified to compute K_1(M_R) (Theorem <ref>) when R is a PID and M_R is an infinite free R-module. The goal of  <ref> is to explicitly compute K_1(M_R) when R falls in a large class of Euclidean domains (ED). Theorems <ref> and <ref> are the main results of this section. The computation of K_1(_) (Theorem <ref>) is the main goal of the short section  <ref>. In the final section of the paper,  <ref>, we discuss the connection between the algebraic K_1 of a PID R and the group K_1(R_R). § SEMI-DIRECT PRODUCTS AND ABELIANIZATION In this section, we recall some facts from group theory that will be used in later sections. Let (G,,e) be a group and N, H be two subgroups of G with N normal. We say that G is the inner semi-direct product of N and H, written G=H ⋉ N if G = NH and N ∩ H ={e}. Given any two groups H and K, and a group homomorphism ϕ: K →(H), the outer semi-direct product of H and K, denoted K ⋉_ϕH, is the set H× K with multiplication defined by (h,k)(h',k') := (h ϕ(k)(h'),kk'). We often suppress the subscript ϕ from the semi-direct product when the action is clear from the context. Given groups K, L and a set T on which L acts, the restricted wreath product, denoted Kwr_TL, is the semi-direct product L⋉ (⊕_x∈ T K), where l∈ L acts on ((k_x)_x∈ T) as per the rule l((k_x)_x∈ T) := ((k_l^-1x)_x ∈ T). For a set T, the finitary permutation group (T) on T consists of permutations σ∈Sym(T) that fix all elements of T outside a finite subset. Given a group G, the notation G≀(T) denotes the restricted wreath product Gwr_T(T). For a group G, we denote by G' its commutator subgroup [G,G] and by G^ab its abelianization G/G'. The following lemma computes the abelianization of a semi-direct product. Suppose G is a group acting on a group H. Then (G ⋉ H)^ab≅ G^ab× (H^ab)_G; here (H^ab)_G is the quotient of H^ab by the subgroup generated by the elements of the form h^gh^-1, where h^g denotes the action of g ∈ G on h ∈ H^ab induced by the action of G on H. The commutator subgroup [G⋉ H,G⋉ H] is generated by [H,H]∪[G,H]∪[G,G]. Therefore (G⋉ H)^ab=(G⋉ H)/⟨[H,H]∪[G,H]∪[G,G]⟩. Applying the relators [H,H] gives G⋉ H^ab, then applying the relators [G,H] gives G×(H^ab)_G. Finally, applying [G,G], we get the desired group G^ab×(H^ab)_G. If T is a set with at least two elements, then the finitary alternating group Alt(T) is the commutator subgroup of the finitary permutation group (T). In particular, (T)^ab≅ℤ_2. The following lemma computes the abelianization of a wreath product. Let G be a group, T be a set with at least two elements, and Σ := (T). Then (G≀Σ)^ab≅ G^ab×ℤ_2. Let 𝐆:=(⊕_x∈ TG_x), where G_x is a copy of G for each x∈ T. Then G≀Σ= Σ⋉𝐆, where Σ acts on the indices of elements of 𝐆. Clearly 𝐆^ab=⊕_x∈ TG^ab_x. Define a map ε:𝐆^ab→ G^ab by (g_x)_x∈ T↦∏_x∈ Tg_x. It can be easily seen that ε is a homomorphism. The action of σ∈Σ on 𝐠:=(g_x)_x∈ T∈𝐆^ab, denoted 𝐠^σ, is given by (g_σ^-1x)_x∈ T. Let H denote the subgroup of 𝐆^ab generated by {𝐠^σ𝐠^-1:𝐠∈𝐆^ab,σ∈Σ}. We claim that H=ε. Note that ε𝐠=ε𝐠^σ for each 𝐠:=(g_x)_x∈ T∈𝐆^ab, σ∈Σ. Hence H⊆ε. 
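As a quick illustration of this lemma, consider the infinite dihedral group D_∞=ℤ_2 ⋉ℤ, where the nontrivial element of ℤ_2 acts on ℤ by n↦ -n (this standard example is not used later). The subgroup of ℤ generated by the elements n^g-n=-2n is 2ℤ, so (ℤ)_ℤ_2≅ℤ_2, and the lemma yields (D_∞)^ab≅ℤ_2×ℤ_2, in agreement with the well-known abelianization of D_∞.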
On the other hand, consider 𝐠:=(g_x)_x∈ T∈ε. Since Σ consists only of finitary permutations of T, there are only finitely many x∈ T such that g_x≠ 1, say x_1,x_2,,x_n. We will use induction on n to show that 𝐠∈ H. The case when n=0 is trivial. If n>0, the identity ∏_i=1^ng_x_i=1 gives n≥2. Assume for induction that the result holds for all values of n strictly less than k>0. Suppose n=k. Let σ be the transposition (x_1,x_2)∈Σ and 𝐠':=(g'_x)_x∈ T be the element of 𝐆^ab whose only non-trivial component is g'_x_1=g^-1_x_1. Let 𝐠”:=𝐠𝐠'((𝐠')^σ)^-1. Then g”_x_1=1 and ε𝐠”=1. The number of non-identity components of 𝐠” is strictly less than k and thus, using induction hypothesis, 𝐠”∈ H. Therefore, 𝐠=𝐠”(𝐠')^σ(𝐠')^-1∈ H, thus proving the claim. Now (𝐆^ab)_Σ=𝐆^ab/H=𝐆^ab/ε=G^ab. We also have Σ^ab=ℤ_2 from Proposition <ref>. Thus Proposition <ref> gives (G≀Σ)^ab≅(𝐆^ab)_Σ×Σ^ab≅ G^ab×ℤ_2. § MODEL-THEORETIC GROTHENDIECK RINGS OF MODULES Let L be a language, M a first-order L-structure with domain again denoted M and m≥1. Say that A ⊆ M^m is definable with parameters if there is a formula ϕ(x_1,...x_m,y_1,...,y_n) such that for all a_1,...,a_m ∈ M, (a_1,...,a_m) ∈ A if and only if Mϕ[a_1,...,a_m,b_1,...,b_n] for some b_1,...,b_n ∈ M. We will always assume that definable means definable with parameters from the universe. For each m ≥ 1, let Def(M^m) be the collection of all definable subsets of M^m, and set Def(M):= ⋃_m≥ 1Def(M^m). Say that two definable sets A,B∈Def(M) are definably isomorphic if there exists a definable bijection between them, i.e., a bijection f:A→ B such that the graph Graph(f)∈Def(M). Definable isomorphism is an equivalence relation on Def(M) and the equivalence class of a definable set A is denoted by [A]. We use Def(M) to denote the set of all equivalence classes with respect to this relation. The assignment A↦[A] defines a surjective map [-]:Def(M)→Def(M). We can regard Def(M) as an L_ring-structure. In fact, it is a semiring with respect to the operations defined as follows: * 0 := [∅]; * 1 := [{*}] for any singleton subset {*} of M; * [A]+[B] := [A'⊔ B'] for A'∈[A],B'∈[B] such that A'∩ B'=∅; and * [A][B] := [A× B]. We define the model-theoretic Grothendieck ring of the first order structure M, denoted K_0(M), to be the ring completion of the semiring Def(M). Every right R-module M is a first-order structure for the language L_R of right R-modules, where L_R := ⟨ +,-,0,{·_r:r ∈ R}⟩, where each ·_r is a unary function symbol for the scalar multiplication by r∈ R on the right. The right R-module structure M will be denoted as M_R. The theory of M_R admits a partial elimination of quantifiers with respect to pp-formulas. <cit.> A positive primitive formula (pp-formula for short) is an L_R-formula ϕ(x_1,,x_n) logically equivalent to one of the form ∃ y_1∃ y_2∃ y_m⋀_i=1^t(∑_j=1^n x_j r_ij+∑_k=1^m y_ks_ik+c_i=0), where r_ij,s_ik∈ R and the c_i are parameters from M. A subset B of M^m is pp-definable if it is definable by a pp-formula, and a pp-definable function is a function between two definable sets whose graph is pp-definable. Every parameter-free pp-formula ϕ(x) defines a subgroup of M^n, where n is the length of x. If ϕ(x) contains parameters from M, then it defines either the empty set or a coset of a pp-definable subgroup of M^n. Furthermore, the conjunction of two pp-formulas is (logically equivalent to) a pp-formula. Here is a consequence of the fundamental theorem of the model theory of modules, which is a partial quantifier elimination result due to Baur and Monk. 
<cit.> For n ≥ 1, every definable subset of M^n is a finite boolean combination of pp-definable subsets of M^n. Let ℒ_n(M_R) (or just ℒ_n, if the module is clear from the context) denote the meet-semilattice of all pp-definable subsets of M^n under intersection. Set ℒ(M_R):=⋃_n≥ 1ℒ_n(M_R). The notation 𝒳(M_R) (or just 𝒳, if the module is clear from the context) will denote the set of colours, i.e., pp-definable bijection classes of elements of ℒ(M_R). For A∈ℒ, the notation [[A]] will denote its pp-definable bijection class. The set 𝒳^*:=𝒳∖[[∅]] of non-trivial colours is a monoid under multiplication. Given a right R-module M_R and subgroups A,B∈ℒ_n, define the invariant Inv(M;A,B) to be the index [A:A∩ B] if this is finite or ∞ otherwise. The theory T:=Th(M_R) of a right R-module M_R is said to be closed under products, written T=T^ℵ_0, if for each n≥ 1 and for any subgroups A,B∈ℒ_n, the invariant Inv(M;A,B) is either 1 or ∞; otherwise we write T≠ T^ℵ_0. Nontrivial finite invariants play a central role in the computation of the model-theoretic Grothendieck ring of a module. Let δ_𝔄:𝒳^*→ℤ denote the characteristic function of a colour 𝔄∈𝒳^*. Define the invariants ideal 𝒥 to be the ideal of the monoid ring ℤ[𝒳^*] generated by the set {δ_[[P]]=[P:Q]δ_[[Q]]| P,Q∈ℒ, P⊇ Q⊇{0}, Inv(M;P,Q)<∞}. We explicitly record the expression of K_0(M_R) below. <cit.> For every right R-module M_R, we have K_0(M_R)≅ℤ[𝒳^*]/𝒥, where 𝒥 is the invariants ideal. Moreover, if M_R≠0 then K_0(M_R)≠0. § K-THEORY FOR SYMMETRIC MONOIDAL GROUPOIDS The story of the model-theoretic K-theory does not end at K_0. We first define a symmetric monoidal category, which is nothing but the “categorification” of a commutative monoid. Since definable subsets of a structure under disjoint union form a symmetric monoidal category, with the help of Quillen's construction, higher model-theoretic K-groups are introduced in this section. We also recall Bass's definition for K_1 that will help us in computations in later sections. A triple (,∗,e) is a symmetric monoidal category if the category S is equipped with a bifunctor ∗:×→ S, and a distinguished object e such that, for all objects s,t,u∈, there are natural coherent isomorphisms e∗s≅s≅s∗e, s∗(t∗u)≅(s∗t)∗u, s∗t≅t∗s, that satisfy certain obvious commutative diagrams. A pairing of (,∗,e) is a bifunctor ⊗:×→ such that s⊗ e ≅ e⊗ s ≅ e, and there is a natural coherent bi-distributivity law (s_1∗ t_1)⊗(s_2∗ t_2)≅(s_1⊗ s_2)∗(s_1⊗ t_2)∗(t_1⊗ s_2)∗(t_1⊗ t_2). The category (FinSets,⊔,∅) of finite sets and functions between them is a symmetric monoidal category, where ⊔ denotes disjoint union, with pairing given by Cartesian product ×. We will only work with symmetric monoidal groupoids(categories where all morphisms are isomorphisms) in this paper. If is a symmetric monoidal category then its subcategory iso consisting only of isomorphisms is a symmetric monoidal groupoid. For a first-order L-structure M, let (M) denote the groupoid whose objects are Def(M) and morphisms are definable bijections between definable sets. Then ((M),⊔,∅) is a symmetric monoidal groupoid, where ⊔ is the disjoint union. Moreover, the Cartesian product of definable sets induces a pairing on this monoidal category. Quillen's definition of K-groups of a symmetric monoidal groupoid (,∗,e) uses his famous ^-1 construction (see <cit.> for more details). If (,∗,e) is a skeletally small symmetric monoidal groupoid, then the K-theory space K^∗() of is defined as the geometric realization of ^-1. 
Then the K-groups of are defined as K_n^∗():=π_nK^∗(). We use Quillen's definition to associate a sequence of K-groups with a model-theoretic structure. For a first order structure M, define K_n(M):=K_n^⊔((M)) for each n≥ 0. A pairing on a skeletally small symmetric monoidal groupoid determines a natural pairing K()∧ K()→ K() of infinite loop spaces, which in turn induces bilinear products K_p()⊗ K_q()→ K_p+q(). In particular, K_0() is a ring. The Barratt-Priddy-Quillen-Segal theorem <cit.> stated below is a very deep theorem connecting the K-theory of the apparently simple combinatorial category of finite sets and bijections with the stable homotopy theory of spheres. We have K_n(isoFinSets)≅π_n^s for each n≥0, where π^s_n is the n^th stable homotopy groups of spheres. Continuing from Example <ref>, if M is a finite structure then the above theorem gives K_n(M)≅π^s_n. In particular, K_0(M)=ℤ and K_1(M)=ℤ_2. Bass was the first to introduce the groups K_1 and K_2. We recall his description for K_1. <cit.> Suppose is a symmetric monoidal groupoid whose translations are faithful i.e., for all s,t∈ S, the translation Aut_(s)→Aut_(s∗ t) defined by f↦ f∗ id_t is an injective map. Then K_1()≅_s∈H_1(Aut_(s);ℤ), where H_1(G;ℤ) denotes the first integral homology group of the group G, which is isomorphic to G^ab. Suppose that (,∗,e) is a symmetric monoidal groupoid whose translations are faithful. Further suppose that S has a countable sequence of objects s_1,s_2, such that s_n+1≅ s_n∗ a_n for some a_n∈, and satisfying the cofinality condition that for every s∈ there is an s' and an n such that s∗ s'≅ s_n. In this case, we can form the colimit Aut():=_n ∈ℕAut_(s_n). Since the functor H_1(-,ℤ) commutes with colimits, we obtain K_1()=H_1(Aut();ℤ). For a model-theoretic structure M, translations are faithful in (M), and hence Theorem <ref> and the above remark can be used to compute K_1(M). § K_1 OF AN INFINITE VECTOR SPACE OVER AN INFINITE FIELD The goal of this section is to compute the model-theoretic K_1-group of a non-zero vector space V_F over a field F when the theory T of V_F is closed under products. Under this hypothesis, it can be readily seen that both V and F are infinite. The theory T in the language L_F completely eliminates quantifiers <cit.>, and hence every object of :=(V_F) is a finite boolean combination of the basic definable subsets, viz., {0}, V, V^2, and their cosets. We start by recalling the description of the Grothendieck ring of a vector space as a special case of Theorem <ref>. Note that this result works for any field. <cit.> The semiring Def(V_F) of definable isomorphism classes of objects of (V_F) is isomorphic to the sub-semiring of the polynomial ring ℤ[X] consisting of polynomials with non-negative leading coefficients. As a consequence, K_0(V_F)≅ℤ[X]. The main result of this section is the computation of K_1(V_F). Suppose V_F is an infinite vector space over an infinite field F and F^× is the group of units in F. Then K_1(V_F)≅ℤ_2⊕(⊕_i=1^∞(F^×⊕ℤ_2)). We divide the proof of this result in three steps. Step I: In this step, we associate a “dimension” (f) to each automorphism f of a definable set through its “support”, and show that the groups of bounded-dimension automorphisms of sufficiently large definable sets are isomorphic. First note that the assignment (∅):=-∞, (D):=([D]) for ∅≠ D∈ is a well-defined dimension function on the objects of , where [D] denotes the class of D in the Grothendieck ring. Let D∈ and f∈(D). 
The support of f is the (definable) set Supp(f):={a∈ D: f(a)≠ a}. If f≠𝕀_D then set (f):=(Supp(f)); otherwise set (𝕀_D):=-∞. For D∈, let Ω_m(D):={f∈(D):(f)≤ m} be the subgroup of (D) of elements fixing all automorphisms of D outside a subset of dimension at most m. If D_1,D_2∈ have dimension strictly greater than m, then Ω_m(D_1)≅Ω_m(D_2). Since D_1, D_2>m, it is always possible to find an arrow g:D_2→ D in such that (D_1∩ D)>m. The definable bijection g induces an isomorphism between Ω_m(D_2) and Ω_m(D). Therefore, it is sufficient to prove the result when D_1⊆ D_2. For each i=1,2, consider the full subcategory _m(D_i) of containing definable subsets of D_i of dimension at most m. The restriction of ⊔ to _m(D_i) equips it with a symmetric monoidal structure. Then Ω_m(D_i)≅Aut(_m(D_i)), where the groups on the right hand side can be constructed as follows. Let S_1⊂ S_2⊂⋯ be a sequence of objects of _m(D_1), where S_1 is a copy of V^m in D_1 and S_k+1 is obtained by adding a disjoint copy of V^m to S_k for each k≥ 1. This sequence is cofinal in _m(D_1) and thus, using Remark <ref> for this sequence, we construct Aut(_m(D_1)) as _k∈ℕ(S_k). When D_1⊆ D_2, the same colimit can be used to construct the group Ω_m(D_2)≅Aut(_m(D_2)). Hence Ω_m(D_1) is isomorphic to Ω_m(D_2). Step II: In this step, we express groups Ω_n^n:=(V^n) as iterated semi-direct products, where the iterations are indexed by dimensions. For each 0≤ m<n, let Ω^n_m:=Ω_m(V^n) and Σ^n_m denote the finitary permutation group on a countable set of cosets of an m-dimensional subspace of V^n. If n,p>m, then it is easy to see that Σ^n_m≅Σ^p_m. To compute groups Ω^n_m, we construct a sequence S_m,1⊂ S_m,2⊂ of objects of _m(V^n), where S_m,k is a disjoint union of k copies of V^m in V^n as described in the above proposition. Note that Ω^n_0=Σ^n_0. We have a chain of normal subgroups of Ω^n_n: Ω^n_0Ω^n_1⋯Ω^n_n-1Ω^n_n. For each n≥ 1, let Υ^n denote the subgroup of (V^n) consisting only of definable linear (i.e., pp-definable) bijections. In other words, Υ^n is the group [n](F)⋉ V^n, where [n](F) acts on V^n by matrix multiplication. Note that for each f(≠𝕀_V^n)∈Υ^n, we have (f)=n. The group Υ^n acts on Ω^n_n-1 by conjugation and, in fact, Ω^n_n=Υ^n⋉Ω^n_n-1. For 0<m<n, we want to find a subgroup Υ^n_m of Ω^n_m such that Ω^n_m=Υ^n_m⋉Ω^n_m-1. To do this, we look at the construction of the colimit in Remark <ref>. Note that (S_m,1)≅Ω^m_m≅Υ^m⋉Ω^m_m-1≅Υ^m⋉Ω^n_m-1, where the action of Υ^m on Ω^n_m-1 is induced by the isomorphism Ω^m_m-1≅Ω^n_m-1 given by Proposition <ref>. For similar reasons, we also have (S_m,k)≅(Υ^m≀Σ_k)⋉Ω^m_m-1≅(Υ^m≀Σ_k)⋉Ω^n_m-1, where Σ_k is the permutation group on k elements, the group (Υ^m≀Σ_k) acts on Ω^m_m-1 by conjugation and permutes lower dimensional subsets of S_m,k⊂ V^n. Thus Ω^n_m ≅ _k∈ℕ(S_m,k) ≅ _k∈ℕ((Υ^m≀Σ_k)⋉Ω^n_m-1) ≅ (_k∈ℕ(Υ^m≀Σ_k))⋉Ω^n_m-1 ≅ (Υ^m≀Σ^n_m)⋉Ω^n_m-1. Define Υ^n_m:=Υ^m≀Σ^n_m which acts on Ω^n_m-1 by conjugation. Thus each Ω^n_n is an iterated semi-direct product of certain wreath products. Ω^n_n ≅ Υ^n⋉Ω^n_n-1 ≅ Υ^n⋉(Υ^n_n-1⋉Ω^n_n-2) ≅ Υ^n⋉(Υ^n_n-1⋉(Υ^n_n-2⋉Ω^n_n-3)) ≅ Υ^n⋉(Υ^n_n-1⋉(Υ^n_n-2⋉(⋯(Υ^n_1⋉Ω^n_0)⋯))). Step III: Here we compute (Ω^n_n)^ab using results in  <ref> so that we can compute K_1(V_F) as _n∈ℕ(Ω^n_n)^ab using Theorem <ref> and Remark <ref> gives, thanks to Remark <ref>. For each n≥ 1, we have (Ω^n_n)^ab≅ F^×⊕⊕_i=1^n-1(F^×⊕ℤ_2)⊕ℤ_2. Fix n≥ 1 and 0≤ m<n. We use the presentation of Ω^n_n given in Equation (<ref>). We know that Υ^n≅[n](F)⋉ V^n. 
Since the additive group V^n is abelian, Lemma <ref> gives (Υ^n)^ab≅([n](F))^ab× (V^n)_[n](F). For any a(≠ 1)∈ F^× (which exists since the field F is infinite), we have aI_n∈[n](F), where I_n is the identity matrix. Now each v∈ V^n can be expressed as (aI_n)v'-v' for v'=(a-1)^-1v. Thus the quotient (V^n)_[n](F) of V^n is trivial which, in turn, gives (Υ^n)^ab≅ ([n](F))^ab≅ F^×. Recall from Proposition <ref> that (Σ^n_m)^ab≅ℤ_2. Lemma <ref> applied to Υ^n_m=Υ^m≀Σ^n_m gives (Υ^n_m)^ab≅(Υ^m)^ab×ℤ_2≅ F^×⊕ℤ_2. The group Υ^n_m acts on Ω^n_m-1 by conjugation. Recall that the action of Υ^n_m preserves the determinant of a matrix in [n](F) and the parity of a permutation. Thus repeated use of Lemma <ref> gives (Ω^n_n)^ab ≅ (Υ^n⋉(Υ^n_n-1⋉(⋯(Υ^n_1⋉Ω^n_0)⋯)))^ab ≅ (Υ^n)^ab⊕((Υ^n_n-1⋉(⋯(Υ^n_1⋉Ω^n_0)⋯))^ab)_Υ^n ≅ (Υ^n)^ab⊕((Υ^n_n-1)^ab⊕((⋯(Υ^n_1⋉Ω^n_0)⋯)^ab)_Υ^n_n-1)_Υ^n ≅ (Υ^n)^ab⊕((Υ^n_n-1)^ab⊕(⋯ ((Υ^n_1)^ab⊕((Ω^n_0)^ab)_Υ^n_1)_Υ^n_2⋯)_Υ^n_n-1)_Υ^n ≅ F^×⊕((F^×⊕ℤ_2)⊕(⋯((F^×⊕ℤ_2)⊕(ℤ_2)_Υ^n_1)_Υ^n_2⋯)_Υ^n_n-1)_Υ^n ≅ F^×⊕((F^×⊕ℤ_2)⊕(⋯((F^×⊕ℤ_2)⊕ℤ_2)⋯)) ≅ F^×⊕⊕_i=1^n-1(F^×⊕ℤ_2)⊕ℤ_2, as required. In the construction of the sequence S_n,1⊂ S_n,2⊂⋯ to compute Aut(_n(V^n+1)), we can choose the copy V^n×{0} as S_n,1. This induces an embedding of Ω^n_n into Ω^n+1_nΩ^n+1_n+1. This further induces the dimension preserving inclusion of (Ω^n_n)^ab into (Ω^n+1_n+1)^ab. Hence K_1(V_F) ≅ _n∈ℕ(Ω^n_n)^ab ≅ _n∈ℕ(F^×⊕⊕_i=1^n-1(F^×⊕ℤ_2)⊕ℤ_2) ≅ _2⊕(⊕_i=1^∞(F^×⊕ℤ_2)). This completes the proof of Theorem <ref>. § K_1 OF A FREE MODULE OVER A PID Throughout this section, R will denote a unital PID, M_R an infinite free R-module and :=(M_R) unless stated otherwise. The main goal of this section is to compute K_1(M_R). We will try to follow the proof of the vector space case (Theorem <ref>), and identify the obstacles while doing so. Our goal is to prove Theorem <ref> that describes K_1 as a directed colimit of abelianizations of certain groups; this description works for free modules over any PID. Step I: We defined the dimension of D∈(V_F) as the degree of the polynomial [D]∈ℤ[X], but we do not have that luxury in (M_R). Thus first we will use an alternate method to associate the notion of a non-negative integer-valued dimension for the objects of (M_R). Recall from  <ref> that ℒ_n:= ℒ_n(M_R) denote the meet-semilattice of all pp-definable subsets of M^n and 𝒳^*:=𝒳^*(M_R)=𝒳(M_R)∖[[∅]] is the set of colours, i.e., the pp-definable isomorphism classes of non-empty pp-definable sets. Let us recall some definitions and notations from <cit.>. Let (-)^∘:ℒ_n→ℒ_n denote the function which takes a coset P to the subgroup P^∘:=P-p, where p∈ P is any element. Denote by ℒ_n^∘ the image of this map. The commensurability relation on ℒ_n, denoted ∼_n, is defined by P∼_nQ if [P^∘:P^∘∩ Q^∘]+[Q^∘:P^∘∩ Q^∘]<∞. This is, and it can be easily checked to be, an equivalence relation. The ∼_n-equivalence class of P∈ℒ_n will be denoted by the corresponding bold letter 𝐏. Let 𝒴_n:=ℒ_n/∼_n and 𝒴:=⋃_n=1^∞𝒴_n. Given 𝔄,𝔅∈𝒳^*, say that 𝔄≈𝔅 if there is 𝐏∈𝒴 such that 𝐏∩𝔄≠∅ and 𝐏∩𝔅≠∅. The relation ≈ is reflexive and symmetric. We use ≈ again to denote its transitive closure. The ≈-equivalence class of 𝔄 will be denoted by 𝔄. If M_R is a non-zero free R-module over a unital PID R then 𝒳^*(M_R)≅ℕ. It suffices to check that if ∅≠ D ∈ℒ_n(M_R) then [[D]] = [[M^k]] for some k ≤ n. Recall from <cit.> that if R is Noetherian then the set of pp-definable subgroups of R_R is precisely the set of right ideals of R. 
Moreover, since R is a PID, a pp-definable subgroup D' of R_R^n is a finitely generated submodule of the free module R_R^n, and hence is itself free. It follows from <cit.> that ℒ_n^∘(R_R)≅ℒ_n^∘(M_R) for any non-zero R-module. Since every pp-definable subset of M_R^n is pp-definably isomorphic to an element of ℒ_n^∘(M_R), the discussion in the above paragraph gives us D ≅ M^k for some k≤ n, as required. It follows from the above that each pp-definable subset of M^n is (pp-definably) isomorphic to ∏_i=1^n a_i R–a product of right ideals. Recall from the construction of K_0(M_R) in <cit.> that there is a definable-isomorphism-invariant integer-valued function Λ_𝔄 on the objects of for each 𝔄∈ (𝒳^*/≈) such that for each D ∈, Λ_𝔄(D) ≠ 0 only for finitely many values of 𝔄. When R is a PID, then it follows from Proposition <ref> that the set (𝒳^*/≈) is in bijection with ℕ. This allows us to define the dimension of a definable set. Let M_R be a non-zero free R-module over an unital PID R. Define (D) := max{𝔄∈(𝒳^*/≈) |Λ_𝔄(D) ≠ 0} if ∅≠ D∈(M_R); (∅):=-∞. Following the proofs of <cit.> it is easy to check that D↦(D) is monotone with respect to definable injections, and that (D_1⊔ D_2)=max{(D_1),(D_2)}. The dimension of definable sets can be used to associate dimension to an automorphism of a definable set in as in Definition <ref>. In order to complete Step I, in the case when R is a PID, we give a proof of the first line of the proof of Proposition <ref> below in Lemma <ref>. Recall that a (finite) antichain in a poset is a (finite) subset in which any two distinct elements are incomparable. The set ℒ_n is a poset with respect to inclusion. Say that a definable set D is a block if it can be written in the form P∖⋃β for some P∈ℒ_n and finite antichain β in ℒ_n satisfying ⋃β⊊ P. Let D_1,D_2 ∈(M_R). If (D_1) and (D_2) are both greater than m then there always exists a D ∈(M_R) with a definable bijection g : D→ D_2 such that (D_1∩ D)>m. Let the dimensions of D_1 and D_2 be m_1 and m_2 respectively. Without loss of generality, assume that m<m_1≤ m_2. Recall from <cit.> that any definable set D⊆ M^n can be written as a finite disjoint union of blocks. Let D_1=_i=1^k B_i and D_2=_j=1^k' B'_j, where B_i and B'_j are blocks. Without loss of generality, assume that (B_1) = m_1 and (B'_1)=m_2. Since each pp-definable set is in bijection with M^k for some k∈ℕ, we have that B_1≅ M^m_1∖(⋃β_1) and B_2≅ M^m_2∖(⋃β_2) for appropriate antichains β_1,β_2. We may assume that a↦(a,0) is an inclusion of M^m_1 into M^m_2, where |0|=m_2-m_1, so that ((M^m_1∖(⋃β_1))∩(M^m_2∖(⋃β_2)))=m_1>m. Now we may adjust pairwise disjoint isomorphic copies B”_j of B'_j for 1<j≤ k' in such a way that (M^m_2∖(⋃β_2))∩ B”_j=∅. Choosing D:=(M^m_2∖(⋃β_2))⊔_j=2^k'B”_j along with appropriate isomorphism proves the result. Step II: After modifying the definition of dimension for definable sets in the context of a PID, we need to modify the definition of the groups Υ^n:=Υ^n(M_R) to accommodate finite-index subgroups in case T≠ T^ℵ_0, and give a suitably modified proof of Ω^n_n=Υ^n⋉Ω^n_n-1. Apart from this change, the proof of Step II remains unchanged to finally yield Theorem <ref>. For G∈ℒ_n, denote by Aut_ℒ(G) the set of pp-definable automorphisms of G. Further if G∈ℒ_n^∘ then let Aut_ℒ^∘(G) denote the subgroup of Aut_ℒ(G) consisting of all those automorphisms g which satisfy g(0)=0. If G∈ℒ_n^∘ and [M^n:G] is finite, then Aut_ℒ(G)≅Aut_ℒ(M^n) ≅ (GL_n(R) ⋉ M^n). 
Since any finite-index subgroup G of M^n is pp-definably isomorphic to M^n we get the first isomorphism. Let π_1 and π_2 be the projection maps of M^2n of the first and last n coordinates respectively. Recall that ℒ_n^∘(R_R)≅ℒ_n^∘(M_R) <cit.> for any non-zero free R-module M_R. Thus, we get the isomorphism Aut_ℒ^∘(M^n) ≅Aut_ℒ^∘(R^n). Now Aut_ℒ^∘(M^n) acts on the group M^n naturally, and hence Aut_ℒ(M^n) ≅Aut_ℒ^∘(M^n) ⋉ M^n via the map g ↦ (g',(g')^-1(g(0)), where g'(x):=g(x)-g(0). Combining the previous two statements, we get that Aut_ℒ(M^n)≅Aut_ℒ^∘(R^n) ⋉ M^n. Thus it remains to show that Aut_ℒ^∘(R^n) ≅ GL_n(R). Since multiplication by an invertible matrix is a pp-definable map that preserves 0, we only need to show that each element in Aut_ℒ^∘(R^n) arises in this way. Let f∈Aut_ℒ^∘(R^n) and ϕ(x,y) be the pp-formula defining the subgroup D∈ℒ_2n(R_R) that is the graph of f. Since R is a PID, thanks to <cit.>, D is a finitely generated submodule of R^2n, and hence is free. Since D is the graph of an automorphism, its rank equals the rank of its domain, which equals n. Let {z_1,z_2,,z_n}⊆ D be a basis of D. Using the entries of the basis elements, we can construct an (n× 2n)-matrix [A B] over R such that for any a∈π_1(D)= R^n there is a unique w∈ R^n and w[A B]=(a,f(a)), where A and B are respectively the left and right n× n blocks of the n× 2n matrix. Since π_1(D)=π_2(D)=R^n, we conclude that A,B∈ GL_n(R). Then by a suitable change of basis, we may assume that w=a and A=I is the identity matrix, thereby transforming the block matrix to [I B'] for B'∈ GL_n(R). Thus the required matrix presenting f is -B'. This completes the proof. Recall that Υ^n(V_F)=Aut_ℒ(V^n_F). However, when R is a PID, the possible existence of finite-index subgroups of M_R requires us to reinterpret the group Υ^n(M_R). Say that f ∈(M^n) is an n-automorphism if the formula ϕ(x,y) defining the graph of f is logically equivalent to ⋁_i=1^m ϕ_i(x,y), where each ϕ_i is pp-definable and (ϕ_i(M_R)) = n. Let Υ^n:=Υ^n(M_R) be the set of all n-automorphisms f∈(M^n). Since a projection of pp-definable sets is again so, given an n-automorphism f of M^n, the set H(f):=⋂_i=1^m(π_1(ϕ_i(M_R^n)))^∘ is a pp-definable subgroup of M^n of finite-index, say k, where π_1 is the projection of M^2n onto the first n coordinates. Thus, ϕ(x,y) is logically equivalent to ⋁_H(f)+p_j∈ M^n/H(f)ϕ_j(x,y), where π_1ϕ_j(M^n) = H(f)+p_j. So in other words, f∈(M^n) is an n-automorphism if and only if there is a finite-index subgroup H(f) ∈ℒ_n and a permutation σ_f of M^n/H(f) such that for every H(f)+q∈ M^n/H(f), the restriction of f|_H(f)+q:H(f)+q →σ_f(H(f)+q) is a pp-definable bijection. Since the intersection of two finite-index subgroups of an abelian group is again so, we get the following. The set Υ^n is a subgroup of (M^n). Propositions <ref> and <ref> together with the discussion above yields the following description of Υ^n as a colimit of a system of groups. Let 𝐆_n be the set of all pp-definable finite-index subgroups of M^n_R. Then (𝐆_n,≤) is a poset under the subgroup relation in such a way that (𝐆_n,≤)^op is a directed set. Then Υ^n(M_R) ≅_G ∈(𝐆_n,≤)^op (Aut_ℒ(G)≀(M^n/G)) ≅_G ∈(𝐆_n,≤)^op ((GL_n(R)⋉M^n)≀(M^n/G)). Let us explain the directed colimit in the above result through the example of Υ^1(ℤ_ℤ): Υ^1(ℤ_ℤ) ≅_G ∈ (𝐆_1(ℤ_ℤ),≤)^op(Aut_ℒ(G)≀(ℤ/G)) ≅_n ∈ (ℕ^*, |)(Aut_ℒ(nℤ)≀(ℤ_n)), where ℕ^*:= ℕ∖{0} and | is the divisibility relation. Suppose n| m in ℕ^*. Then there is a natural quotient map θ:ℤ_m →ℤ_n. 
We explicitly describe the embedding of (Aut_ℒ(nℤ)≀(ℤ_n)) ≅(ℤ_n) ⋉⊕_l∈ℤ_n(ℤ_2 ⋉ nℤ) into (Aut_ℒ(mℤ)≀(ℤ_m)) ≅(ℤ_m) ⋉⊕_l'∈ℤ_m(ℤ_2 ⋉ mℤ). Recall from Proposition <ref> that Aut_ℒ(nℤ)≅ GL_1(ℤ)⋉ nℤ. Clearly GL_1(ℤ)≅ℤ_2. A natural way to think about the required embedding is component-wise. Define the embedding α_1: (ℤ_n) ↪(ℤ_m) by α_1(σ)(t) := (t+σ(θ(t))-θ(t)) (mod m) for σ∈(ℤ_n). Also define the embedding α_2: ⊕_l∈ℤ_n(ℤ_2 ⋉ nℤ) ↪⊕_l'∈ℤ_m(ℤ_2 ⋉ mℤ) by α_2((ζ_l,h_l)_l ∈ℤ_n) := ((ζ_θ(l'),h_θ(l'))_l'∈ℤ_m). It is routine to verify that the map (α_1,α_2) is indeed the required embedding. The description of the maps α_1,α_2 looks complicated but it is not as the next examples demonstrate. Let n=2,m=4 and σ∈(ℤ_2) is the only non-trivial permutation. Then α_1(σ) is given by assignment 0↦ 1,1↦ 0,2↦3 and 3↦2. Let n=3 and m=6. Then ((ζ,h_0),( ζ', h_1),( ζ”,h_2)) ∈⊕_l∈ℤ_3 (ℤ_2 ⋉ 3ℤ) maps to α_2 is ((ζ,h_0),( ζ', h_1),( ζ”,h_2),(ζ,h_0),( ζ', h_1),( ζ”,h_2)) ∈⊕_l'∈ℤ_6 ( ℤ_2 ⋉ 6ℤ) under α_2. Let G≤ G' in (𝐆_1(ℤ_ℤ),≤). If [G':G] is even then for any σ∈(ℤ/G'), the permutation α_1(σ) in (ℤ/ G) is an even permutation. The same statement also holds when 𝐆_1(ℤ_ℤ) is replaced by 𝐆_1(M_R) for any k ≥ 1. After this discussion, the following is not hard to prove. The following are equivalent for any free R-module M_R. * There is a cofinal system of even-indexed subgroups in (𝐆_1(M_R),≤)^op. * There is a cofinal system of even-indexed subgroups in (𝐆_k(M_R),≤)^op. * Each permutation is eventually even in the directed colimit in Proposition <ref>. Let us use the notations as well as definitions of groups Ω^n_m from Step II of the proof of Theorem <ref>. The group Υ^n clearly acts on Ω^n_n-1 by conjugation. For each n≥1, we have Ω^n_n≅Υ^n⋉Ω^n_n-1. Let f ∈(M^n), ϕ(x,y) be an L_R-formula defining the graph D⊆ M^2n of f. Recall from <cit.> that D = _i=1^k_1+k_2B_i, where each B_i is a block and (B_i)=n if and only if i≤ k_1. Further partitioning into smaller blocks if necessary, we may assume for each i that B_i= P_i∖⋃β_i for some P_i ∈ℒ_n and an antichain β_i in ℒ_n satisfying (⋃β_i) < (P_i). Then P_0:= ⋂_i=1^k_1 (π_1(P_i))^∘ is a finite-index subgroup of M^n. Let {P_0^j| 0≤ j ≤ k-1} be the set of all cosets of M^n/P_0. For each 0≤ j<k, let B_i^j⊆ B_i be the block satisfying π_1(B_i^j)=π_1(B_i)∩ P_0^j so that B_i=_0≤ j<kB_i^j for each 1≤ i≤ k_1. Let B_i^j=P_i^j∖⋃β_i^j for some P_i^j∈ℒ_2n and an antichain β_i^j in ℒ_2n satisfying (⋃β_i^j)<n=(P_i^j). Let ϕ_i^j(x,y) be the pp-definable formula defining P_i^j. Then ⋁_i=1^k_1⋁_j=0^k-1ϕ_i^j(x,y) is a formula defining the graph of an n-automorphism, say g, of M^n. We have also ensured g^-1f∈Ω^n_n-1 to complete the proof. In view of the above lemma, the remaining arguments of Step II of the proof of Theorem <ref> go through to give the following result for a free infinite right R-module M_R over a PID R. Suppose R is a PID and M_R is an infinite free right R-module. Then using the notations introduced in Step II of the proof of Theorem <ref>, and with Υ^n as in Definition <ref>, we have K_1(M_R) = _n∈ℕ(Ω_n^n)^ab. § COMPUTATION OF K_1 OF FREE MODULES OVER CERTAIN EUCLIDEAN DOMAINS In general, given a PID R, exact computation of (GL_2(R))^ab is not known, and thus the computation of K_1(M_R) using the recipe in Theorem <ref> is not possible for all PIDs. However, under a mild condition on an ED R, we compute K_1 for free R-modules in Theorems <ref> and <ref>. 
As a consequence, we compute K_1(M_F[X]) for a field F with char(F)=0 (Corollary <ref>) as well as K_1(V_F) for a field F with at least 3 elements (Corollaries <ref> and <ref>). For a commutative unital ring R, the notation SL_n(R) denotes the group of special linear group, i.e., n× n matrices with determinant 1 so that GL_n(R)/SL_n(R)≅ R^× for each n≥1. The notation E_n(R) denote the subgroup of GL_n(R) generated by n× n elementary matrices <cit.>. First we compute the abelianization of the action of GL_n(R) on M^n by multiplication. Suppose R is any commutative ring with unity and M_R is a right R-module. Then for each n≥2 we have (GL_n(R) ⋉ M^n)^ab≅ (GL_n(R))^ab. Moreover, if the multiplicative identity 1 in R can be written as a sum of two units then the conclusion also holds true for n=1. We prove the lemma when M_R = R_R; the proof for M_R follows verbatim. Thanks to Lemma <ref>, we have (GL_n(R) ⋉ R^n)^ab≅ (GL_n(R))^ab× R^n_(GL_n(R)), so it is enough to show that R^n_(GL_n(R)) vanishes. By definition, R^n_(GL_n(R)) = R^n/⟨xA - x| A ∈ GL_n(R), x∈ R^n⟩. Let x = (x_1,x_2,....,x_n) ∈ R^n and for i ≠ j; E_ij∈ E_n(R) be such that E_ij = I+e_ij, where e_ij∈ M_n(R) and every entry of the matrix e_ij is 0 apart from the ij^th entry, which equals 1. If E_ij∈{E_1n}∪{E_kk-1| 2≤ k≤ n} then E_ij - = (0,,x_i,,0), where x_i appears in the j^th place. Since x_i ∈ R is arbitrary we conclude that ⟨xA - x| A ∈ GL_n(R), x∈ R^n⟩ = R^n. The above lemma need not hold for n=1 if 1 cannot be written as a sum of two units, e.g., if R=M=ℤ then (GL_1(ℤ) ⋉ℤ)^ab=(ℤ_2 ⋉ℤ)^ab≅ℤ_2 ⊕ℤ_2 by Lemma <ref>. Now we recall several celebrated results regarding the computation of (GL_n(R))^ab. <cit.> If R is an ED satisfying 1=u+v for units u,v, then [GL_2(R),GL_2(R)] = E_2(R). <cit.><cit.> Let R be an unital ring with Krull dimension at most 1, for example, a PID, then [GL_n(R),GL_n(R)]=E_n(R) for n>2. <cit.> If R is an ED, then SL_n(R) = E_n(R) for n ≥ 1. Suppose the theory T of the module M_R satisfies T=T^ℵ_0. Under the combined hypotheses of Lemma <ref>, and Theorems <ref>, <ref> and <ref> on R, we can follow through the proof of Proposition <ref> to obtain (Ω_n^n)^ab≅ (GL_n(R))^ab⊕⊕_i=0^n-1((GL_i(R))^ab⊕ℤ_2). Then we use the directed colimit description of K_1(M_R) in Theorem <ref> to get a nice explicit expression for K_1(M_R). Let R be a Euclidean domain satisfying 1=u+v for some units u,v, and M_R be an infinite free right R-module. If the theory T of M_R satisfies T = T^ℵ_0 then K_1(M_R) ≅ K_1(R_R) ≅⊕_n=0^∞((GL_n(R))^ab⊕ℤ_2)≅ℤ_2⊕⊕_n=1^∞(R^×⊕ℤ_2). Now suppose that the theory T of the module M_R does not satisfy T=T^ℵ_0. Then in addition to all the results used in the proof of the above theorem, we also need to use the dichotomy in the following lemma while obtaining the analogue of Lemma <ref>. Suppose the theory T of the module M_R satisfies T≠ T^ℵ_0. Let (𝐆_1,≤)^op be the directed system described in Proposition <ref>. If this directed system contains a cofinal system of even-indexed subgroups of M then for n≥1, (Υ^n)^ab≅ (GL_n(R)⋉ M^n)^ab; otherwise, (Υ^n)^ab≅ (GL_n(R)⋉ M^n)^ab⊕ℤ_2. This follows from Proposition <ref> and Proposition <ref> since abelianization commutes with colimits. Using the notations of the above lemma, if (𝐆_1,≤)^op contains a cofinal system of even-indexed subgroups then the isomorphism in Equation (<ref>) still remains valid; otherwise for n≥ 1 we have (Ω^n_n)^ab≅ ((GL_n(R))^ab⊕ℤ_2)⊕⊕_i=1^n-1((GL_i(R))^ab⊕ℤ_2 ⊕ℤ_2) ⊕ℤ_2. 
Using these expressions together with the directed colimit description of K_1(M_R) in Theorem <ref>, we get the following result. Let R be a Euclidean domain satisfying 1=u+v for some units u,v, and M_R be an infinite free right R-module. Suppose that the theory T of M_R satisfies T ≠ T^ℵ_0. Using the notations of Lemma <ref>, if (𝐆_1,≤)^op contains a cofinal system of even-indexed subgroups of M then K_1(M_R)≅ K_1(R_R) ≅⊕_n=0^∞((GL_n(R))^ab⊕ℤ_2)≅ℤ_2⊕⊕_n=1^∞(R^×⊕ℤ_2); otherwise K_1(M_R) ≅ K_1(R_R) ≅ℤ_2 ⊕⊕_n=1^∞((GL_n(R))^ab⊕ℤ_2⊕ℤ_2) ≅ℤ_2 ⊕⊕_n=1^∞(R^×⊕ℤ_2⊕ℤ_2). If F is a field with char(F)=0 and M_F[X] is a non-zero free module over F[x] then K_1(M_F[X]) ≅ K_1(F[X]_F[X])≅ K_1(F_F) ≅ℤ_2⊕⊕_n=1^∞(F^×⊕ℤ_2). If F is a finite field char(F)=2 and F contains at least 4 elements, then the field F_4 with 4 elements embeds into F. Moreover, if F_4={0,1,a,b} then a+b=1, and thus F satisfies the hypotheses of the first case of Theorem <ref>, and we get the following. Let F be the finite field F_2^k, where k≥ 2. Then for every infinite F-vector space V_F, we have K_1(V_F)≅ K_1(F_F) ≅⊕_n=0^∞((GL_n(F))^ab⊕ℤ_2)≅ℤ_2⊕⊕_n=1^∞(ℤ_2^k-1⊕ℤ_2). If F is a finite field F_p^k, where k≥ 1 and p is a prime greater than 2. Then -1 and 2 are units in F satisfying 1=2+(-1) in F. Moreover, F satisfies the hypotheses of the second case of Theorem <ref>, and thus we get the following. Let F be the finite field F_p^k, where k≥ 1 and p is a prime greater than 2. Then for every infinite F-vector space V_F, we have K_1(V_F) ≅ K_1(F_F) ≅ℤ_2 ⊕⊕_n=1^∞((GL_n(F))^ab⊕ℤ_2⊕ℤ_2) ≅ℤ_2 ⊕⊕_n=1^∞(_p^k-1⊕ℤ_2⊕ℤ_2). Suppose F is the field F_2 with 2 elements and V_F is an infinite vector space. Since (GL_1(F))^ab is the trivial group, we obtain (Υ^1(V_F))^ab≅ V, which creates obstacles in the computation of (Ω^1_1)^ab using the methods developed so far. Thus, we are unable to use the same recipe to compute K_1(V_F). § COMPUTATION OF K_1(ℤ_ℤ) Recall that the theory T:=Th(ℤ_ℤ) of the abelian group ℤ_ℤ of integers is not closed under products. Even though the ring ℤ of integers is an ED, the identity 1=u+v fails to hold for any units u,v in ℤ. Hence Theorem <ref> is not applicable for the computation of K_1(ℤ_ℤ); however, we use a slight modification to compute K_1(ℤ_ℤ) in Theorem <ref>. We have (GL_n(ℤ))^ab≅ℤ_2 if n=1 or n≥3; _2⊕_2 if n=2. Since abelianization commutes with directed colimits, Propositions <ref>, <ref> and <ref>, Lemmas <ref>, <ref> and <ref>, and Remark <ref> together give that (Υ^n(_))^ab≅(Aut_ℒ(ℤ^n))^ab≅(GL_n(ℤ)⋉ℤ^n)^ab≅(GL_n())^ab if n≥2; (GL_1())^ab⊕_2 if n=1. Since Ω^1_1≅Υ^1⋉Σ^1_0, we have (Ω^1_1(ℤ))^ab≅(Υ^1)^ab⊕(Σ^1_0)^ab≅((GL_1())^ab⊕ℤ_2) ⊕ (Σ^1_0)^ab≅(_2⊕_2)⊕ℤ_2. Since Ω^2_2≅Υ^2⋉Ω^2_1≅Υ^2⋉((Υ^1≀Σ^2_1)⋉Σ^2_0), we get (Ω^2_2)^ab ≅ (Υ^2)^ab⊕((Ω^2_1)^ab)_Υ^2 ≅ (Υ^2)^ab⊕(((Υ^1)^ab⊕(Σ^2_1)^ab)⊕((Σ^2_0)^ab)_(Υ^1≀Σ^2_1))_Υ^2 ≅(GL_2())^ab⊕((((GL_1())^ab⊕ℤ_2)⊕(Σ^2_1)^ab)⊕(Σ^2_0)^ab)_Υ^2 ≅ (ℤ_2 ⊕ℤ_2) ⊕ (((ℤ_2 ⊕ℤ_2) ⊕ℤ_2 ) ⊕ℤ_2)_Υ^2. Recall that Υ^2 acts on Ω^2_1 by conjugation. Also recall from Lemma <ref> that ((Ω^2_1)^ab)_Υ^2≅(Ω^2_1)^ab/{h^gh^-1| h∈(Ω^2_1)^ab,g∈Υ^2}. Now focus on the only copy of _2 that appears in the second last line in the above sequence of isomorphisms. Apart from this copy of _2, Υ^2 acts trivially on all other direct summands of (Ω^2_1)^ab; however, we cannot determine whether this action is trivial or not on this copy. Therefore, we get that (Ω^2_2)^ab≅^k_2 for k_2=5 or 6. 
The computations of (Ω_n^n)^ab for n>2 proceed similarly so that we obtain (Ω_n^n)^ab≅^k_n for some k_n satisfying k_n<k_n+1 for each n≥1. Finally, using Theorem <ref> we get the following. We have K_1(ℤ_ℤ) ≅⊕_n=0^∞ℤ_2. We cannot compute K_1(M_ℤ) for an infinite free ℤ-module M_ℤ in a similar way for a non-trivial quotient of M_ appears in the expression of (Υ^1(M_))^ab, which will create obstacles in the computation of (Ω^2_2)^ab. § CONNECTION WITH ALGEBRAIC K_1 Fix a PID R. Let us recall the definition of algebraic K_1 of R. Let P(R) be the skeletally small groupoid of finitely generated projective R-modules under R-module isomorphisms. This groupoid is equipped with a symmetric monoidal structure of the direct sum (⊕). The algebraic K_1 of a ring R, denoted K_1^⊕(R), is defined to be K_1^⊕(P(R)). Since R is a PID, every finitely generated projective R-module is free. Thus P(R) is equivalent to the symmetric monoidal groupoid Free(R) whose objects are finitely generated free R-modules. Translations are faithful in both P(R) and Free(R), and thus Bass' description (Theorem <ref>) can be used for the computation of K_1^⊕(R). We established in the proof of Proposition <ref> that every pp-definable set in ℒ(R_R) is in bijection with a finitely generated free R-module. Also note that the directed colimit in Theorem <ref> is actually directly from Bass' description of K_1 (Theorem <ref>). The following theorem establishes a connection between algebraic and model-theoretic K_1. If R is a PID then there is a natural embedding of K_1^⊕(R) into K_1(R_R). Since every R-linear automorphism of R^n is pp-definable, the object assignment R^n↦ R_R^n:Free(R)→(R_R) together with morphism assignment f↦ f defines a faithful functor. Moreover, the map x↦(x,0) is both an R-module embedding R^n↪ R^n+1 and a definable injection R_R^n↪ R_R^n+1. Thus defining i:Aut_Free(R)(R^n)→Aut_(R_R)(R_R^n) by i(f) := (f ⊕ id_R), we get the following commutative diagram of groups. Aut_Free(R)(R^n) @^(->[r][d]^i Aut_(R_R)(R^n_R) [d]^i Aut_Free(R)(R^n+1) @^(->[r] Aut_(R_R)(R^n+1_R) Finally, since the abelianization functor commutes with colimits, Bass' description of K_1 given in Theorem <ref> allows us to conclude that there is a natural embedding K_1^⊕(R)→ K_1(R_R). The algebraic K_1 of Euclidean domains is well-understood. <cit.> If R is an ED, then K_1^⊕(R)≅ R^×, where the abelianization map takes A∈ GL(R):=_n∈ℕ GL_n(R) to (A)∈ K_1(R). From the explicit computations of K_1(R_R) in this paper for different EDs, it can be seen that the model-theoretic K_1 is much larger compared to the algebraic K_1–this is not a surprise as there are far too many definable self-bijections in comparison with linear automorphisms. Suppose R is an ED satisfying 1=u+v for some u,v ∈ R^×. Then the composition GL(R) ↠ K_1^⊕(R)↪ K_1(R_R) can be described as follows: if A ∈ GL_n(R) is not in the image of the natural embedding of GL_n-1(R) into GL_n(R), then it maps to (A)∈(GL_n(R))^ab, where (GL_n(R))^ab is the leading term of (Ω^n_n)^ab. § ACKNOWLEDGEMENTS A part of this work,  <ref>-<ref>, appeared in Chapter 3 of the PhD thesis <cit.> of the second author. This part of the work was supported by an Overseas Students Fellowship of the School of Mathematics, University of Manchester.
http://arxiv.org/abs/2407.13342v1
20240718094024
Implicit Filtering for Learning Neural Signed Distance Functions from 3D Point Clouds
[ "Shengtao Li", "Ge Gao", "Yudong Liu", "Ming Gu", "Yu-Shen Liu" ]
cs.CV
[ "cs.CV" ]
Implicit Filtering S. Li et al. Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China School of Software, Tsinghua University, Beijing, China list21@mails.tsinghua.edu.cn, gaoge@tsinghua.edu.cn, liuyd23@mails.tsinghua.edu.cn, guming@tsinghua.edu.cn, liuyushen@tsinghua.edu.cn Implicit Filtering for Learning Neural Signed Distance Functions from 3D Point Clouds Shengtao Li1,2 Ge Gao1,2() Yudong Liu1,2 Ming Gu1,2 Yu-Shen Liu2 July 22, 2024 ===================================================================================== § ABSTRACT Neural signed distance functions (SDFs) have shown powerful ability in fitting the shape geometry. However, inferring continuous signed distance fields from discrete unoriented point clouds still remains a challenge. The neural network typically fits the shape with a rough surface and omits fine-grained geometric details such as shape edges and corners. In this paper, we propose a novel non-linear implicit filter to smooth the implicit field while preserving high-frequency geometry details. Our novelty lies in that we can filter the surface (zero level set) by the neighbor input points with gradients of the signed distance field. By moving the input raw point clouds along the gradient, our proposed implicit filtering can be extended to non-zero level sets to keep the promise consistency between different level sets, which consequently results in a better regularization of the zero level set. We conduct comprehensive experiments in surface reconstruction from objects and complex scene point clouds, the numerical and visual comparisons demonstrate our improvements over the state-of-the-art methods under the widely used benchmarks. Project page: <https://list17.github.io/ImplicitFilter>. § INTRODUCTION Reconstructing surfaces from 3D point clouds is an important task in 3D computer vision. Recently signed distance functions (SDFs) learned by neural networks have been a widely used strategy for representing high-fidelity 3D geometry. These methods train the neural networks to predict the signed distance for every position in the space by signed distances from ground truth or inferred from the raw 3D point cloud. With the learned signed distance field, we can obtain the surface by running the marching cubes algorithm<cit.> to extract the zero level set. Without signed distance ground truth, inferring the correct gradient and distance for each query point could be hard. Since the gradient of the neural network also indicates the direction in which the signed distance field changes, recent works<cit.> typically add constraints on the network gradient to learn a stable field. In terms of the rate at which the field is changing, the eikonal term<cit.> is widely used to ensure the norm of the gradient to be one everywhere. For the gradient direction constraint, some methods<cit.> use the direction from the query point to the nearest point on the surface as guidance. Leveraging the continuity of the neural network and the gradient constraint, all these methods could reconstruct discrete points. However, the continuity cannot guarantee the prediction is correct everywhere. Therefore, reconstructed surfaces of previous methods usually contain noise and ignore geometry details when there are not enough points to guide the reconstruction, as shown in <ref>. 
The above issue arises from the fact that these methods overlook the geometric information within the neighborhood but only focus on adding constraints on individual points to optimize the network. To resolve this issue, we introduce the bilateral filter for implicit fields that reduces surface noise while preserving the high-frequency geometric characteristics of the shape. Our designed implicit filter takes into account both the position of point clouds and the gradient of learned implicit fields. Based on the assumption of all input points lying on the surface, we can filter noise points on the zero level set by minimizing the weighted projection distance to gradients of the neighbor input points. Moreover, by moving the input points along the gradient of the field to other level sets, we can easily extend the filter to the whole field. This helps constrain the signed distance field near the surface and achieve better consistency through different level sets. To evaluate the effectiveness of our proposed implicit filtering, we validate it under widely used benchmarks including object and scene reconstructions. Our contributions are listed below. * We introduce the implicit filter on SDFs to smooth the surface while preserving geometry details for learning better neural networks to represent shapes or scenes. * We improve the implicit filter by extending it to non-zero level sets of signed distance fields. This regularization of the field aligns different level sets and provides better consistency within the whole SDF field. * Both object and scene reconstruction experiments validate our implicit filter, demonstrating its effectiveness and ability to produce high-fidelity reconstruction results, surpassing the previous state-of-the-art methods. § RELATED WORK With the rapid development of deep learning, neural networks have shown great potential in surface reconstruction from 3D point clouds. In the following, we briefly review methods related to implicit learning for 3D shapes and reconstructions from point clouds. Implicit Learning from 3D Supervision. The most commonly used strategy to train the neural network is to learn priors in a data-driven manner. These methods require signed distances or occupancy labels as 3D supervision to learn global priors <cit.> or local priors <cit.>. With large-scale training datasets, the neural network can perform well with similar shapes, but may not generalize well to unseen cases with large geometric variations. These models often have limited inputs that can be difficult to scale for varying sizes of point clouds. Implicit Learning from Raw Point Clouds. Different from the supervised methods, we can learn implicit functions by overfitting neural networks on single point clouds globally or locally to learn SDFs <cit.>. These unsupervised methods rely on neural networks to infer implicit functions without learning any priors. Therefore, apart from the guidance of original input point clouds, we also need constraints on the direction <cit.> or the norm <cit.> of the gradients, specially designed priors <cit.>, or differentiable poisson solver <cit.> to infer SDFs. This unsupervised approach heavily depends on the fitting capability and continuity of neural networks. However, these SDFs lack accuracy because there is no reliable guidance available for each query point across the entire space when working with discrete point clouds. Therefore, deducing the correct geometry for free space becomes particularly crucial. 
Our implicit filtering enhances SDFs by inferring the geometric details through the implicit field information of neighbor points. Feature Preserving Point Cloud Reconstruction. Early works <cit.> reconstruct point clouds with sharp features usually by point cloud consolidation. The key idea of these methods is to enhance the quality of point clouds with sharp features. One popular category is the local projection operation (LOP) <cit.> and its variants <cit.>. The projection operator provides a stable and easily generalizable method for point cloud filtering, which is also the foundation of our implicit filter. The difference lies in that we do not need any normal or other priors and our filtering can be directly applied to implicit fields to extract high-fidelity meshes. Some other learning-based methods <cit.> try to consolidate point clouds with edge points in a data-driven manner. Although capable of generating high-quality point clouds, these methods still require a proper reconstruction method <cit.> to inherit the details in meshes. With the advancement of deep learning in point cloud reconstruction, some approaches <cit.> also explored employing neural networks to reconstruct high-precision models. FFN <cit.>, SIREN <cit.>, and IDF <cit.> introduce high-frequency features into the neural network in different ways to preserve the geometric details of the reconstructed shape. DIGS<cit.> and EPI <cit.> smooth the surface by using the divergence as guidance to alleviate the implicit surface roughness. Compared with these methods, we first introduce local geometric features through filtering to optimize the implicit field, so that we can achieve higher accuracy. § METHOD §.§.§ Neural SDFs overview. This section will briefly describe the concepts we used in our implicit filtering. We focus on the SDF f : R^3 →R inferred from the point cloud P = {p_i | p_i ∈R^3}_i=1^N without ground truth signed distances and normals. f predicts a signed distance s ∈R for an arbitrary query point q, as formulated by s = f_θ(q), where θ denotes the parameters of the neural network. The level set 𝒮_d of SDF is defined as a set of continuous query points with the same signed distance d, formulated as 𝒮_d = {q| f_θ(q) = d}. The goal of our implicit filtering is to smooth each level set with geometry details. Then we can extract the zero level set as a mesh by running the marching cubes algorithm <cit.>. §.§.§ Level set bilateral filtering. [14]r0pt < g r a p h i c s > By minimizing the weighted projection distance, our filter can preserve the sharp feature but the average method leads to a wrong result. Filtering for 2D images replaces the intensity of each pixel with the weighted intensity values from nearby pixels. Different from images, the resolution of implicit fields is infinite and we need to find the neighborhood on each level set for filtering. By minimizing the following loss function, L_dist = 1/N∑_i=1^N|f_θ(p_i)|, we can approximate that all points in P are located on level set 𝒮_0, which makes it feasible to find neighbor points on 𝒮_0. For a given point p̅ on 𝒮_0, one simple strategy of filtering is to average positions of neighbor points 𝒩(p̅, 𝒮_0) ⊂P on 𝒮_0 by a Gaussian filter based on relative positions as follows: p̅_average = ∑_p_j ∈𝒩(p̅, 𝒮_0)p_j ϕ(||p̅ - p_j||)/∑_p_j ∈𝒩(p̅, 𝒮_0)ϕ(||p̅ - p_j||), where the Gaussian function ϕ is defined as ϕ(||p̅ - p_j||) = exp(-||p̅ - p_j||^2/σ_p^2). 
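For concreteness, the position-only averaging above can be written in a few lines of NumPy. This is only an illustrative sketch of ours: the neighbour set 𝒩(p̅, 𝒮_0) is assumed to have been gathered beforehand (e.g. by a k-nearest-neighbour query on the input points P), and the function name is not part of the method.

import numpy as np

def gaussian_average(p_bar, neighbors, sigma_p):
    # neighbors: (k, 3) array of nearby input points, assumed to lie on the zero level set
    d2 = np.sum((p_bar - neighbors) ** 2, axis=1)           # squared distances ||p_bar - p_j||^2
    w = np.exp(-d2 / sigma_p ** 2)                           # position-only Gaussian weights phi
    return (w[:, None] * neighbors).sum(axis=0) / w.sum()    # weighted mean position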
However, as depicted in <ref>, it is evident that this weighted mean position yields excessively smooth surfaces, causing sharp features and details to be further obscured. To keep the geometric details, our filtering operator suggests measuring the projection distance to the gradient of neighbor points as shown in <ref> and <ref>(b). When calculating weights, it is vital to account for both the impact of relative positions and the gradient similarity. Following the principles of bilateral filtering, to compute the filtered point for p̅, we simply need to minimize the following distance equation: d(p̅) =∑_p_j ∈𝒩(p̅, 𝒮_0)|n^T_p_j(p̅ - p_j)| ϕ(||p̅ - p_j||) ψ(n_p̅, n_p_j)/∑_p_j ∈𝒩(p̅, 𝒮_0)ϕ(||p̅ - p_j||) ψ(n_p̅, n_p_j), where the gradient n_p̅, n_p_j and the Gaussian function ψ are defined as n_p̅ = ∇ f_θ(p̅)/||∇ f_θ(p̅)||, n_p_j = ∇ f_θ(p_j)/||∇ f_θ(p_j)||, ψ(n_p̅, n_p_j) = exp(-1-n_p̅^Tn_p_j/1 - cos(σ_n)). In addition to projection to the gradient n_p_j, we observe that the projection distance to n_p̅ can assist in learning a more stable gradient for point p̅ which is also adopted in EAR<cit.>. Taking into account the bidirectional projection, our final bilateral filtering operator can be formulated as follows: d_bi(p̅) =∑_p_j ∈𝒩(p̅, 𝒮_0)(|n_p_j^T(p̅ - p_j)| + |n_p̅^T(p̅ - p_j)|) ϕ(||p̅ - p_j||) ψ(n_p̅, n_p_j)/∑_p_j ∈𝒩(p̅, 𝒮_0)ϕ(||p̅ - p_j||) ψ(n_p̅, n_p_j). Although similar filtering methods have been widely studied in applications such as point cloud denoising and resampling<cit.>, there are two critical problems when applying these methods in implicit fields: * Filtering the zero level set needs to sample points on the level set 𝒮_0, which necessitates the resolution of the equation f_θ = 0, or the utilization of the marching cubes algorithm <cit.>. Both methods pose challenges in achieving fast and uniform point sampling. For the randomly sampled point q on non-zero level set 𝒮_f_θ(q), we can also not filter this level set since there are no neighbor points on 𝒮_f_θ(q). * The normals utilized in our filtering are derived from the gradients of the neural network f_θ. While the network typically offers reliable gradients, we may find that ∇ f_θ = 0 is also the optimal solution to the minimum value of <ref>. This degenerate solution is unexpected, as it implies a scenario where there is no surface when the gradient is zero everywhere. We will focus on addressing the two issues in the subsequent sections. §.§.§ Sampling points for filtering. Inspired by NeuralPull <cit.>, we can pull a query point to the zero level set by the gradient of the neural network f_θ. For a given query point q as input, the pulled location q̂ can be formulated as follows: q̂ = q - f_θ(q) ∇ f_θ(q) / ||∇ f_θ(q)||. The point q and q̂ lie respectively on level set 𝒮_f_θ(q) and 𝒮_0 as illustrate in <ref>(b). By adopting the sampling strategy in NeuralPull, we can generate samples Q={q_i|q_i∈R^3}_i=1^M on different level sets near the surface and pull them to 𝒮_0 by <ref>, to obtain Q̂={q̂_i|q̂_i = q_i - f_θ(q_i) ∇ f_θ(q_i) / ||∇ f_θ(q_i)||, q_i ∈Q}_i=1^M. Hence, we can filter the zero level set by minimizing <ref> across all pulled query points Q̂, which is equivalent to optimizing the following loss: L_zero = ∑_q̂∈Q̂ d_bi(q̂), where for each q̂∈Q̂, 𝒩(q̂, 𝒮_0) denotes finding the neighbors of q̂ within the input points P, since P is assumed to be located on 𝒮_0. This filtering mechanism can be easily extended to non-zero level sets in a similar inverse manner. 
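Before turning to the non-zero level sets, the zero-level-set term just described can be summarised in a PyTorch-style sketch. Shapes, helper names and the pre-gathered neighbour tensor are illustrative assumptions on our part: f_θ is assumed to map (N, 3) points to (N, 1) signed distances, σ_n is given in radians, and the K neighbours of each query within P are assumed to be collected in advance.

import math
import torch
import torch.nn.functional as F

def sdf_and_grad(f, x):
    # evaluate the network and its spatial gradient via autograd
    x = x.requires_grad_(True)
    s = f(x)                                               # (N, 1) signed distances
    g = torch.autograd.grad(s.sum(), x, create_graph=True)[0]
    return s, g

def zero_level_filter_loss(f, queries, neigh, sigma_p, sigma_n):
    # queries: (M, 3) sampled query points; neigh: (M, K, 3) input points gathered near each query
    s_q, g_q = sdf_and_grad(f, queries)
    n_q = F.normalize(g_q, dim=-1)
    q_hat = queries - s_q * n_q                            # pull each query onto the zero level set
    _, g_p = sdf_and_grad(f, neigh.reshape(-1, 3))
    n_p = F.normalize(g_p, dim=-1).reshape(neigh.shape)
    diff = q_hat.unsqueeze(1) - neigh                      # (M, K, 3) offsets to the neighbours
    proj = (n_p * diff).sum(-1).abs() + (n_q.unsqueeze(1) * diff).sum(-1).abs()   # bidirectional projection
    phi = torch.exp(-(diff ** 2).sum(-1) / sigma_p ** 2)   # position weight
    psi = torch.exp(-(1.0 - (n_q.unsqueeze(1) * n_p).sum(-1)) / (1.0 - math.cos(sigma_n)))  # gradient weight
    w = phi * psi
    return ((proj * w).sum(-1) / w.sum(-1)).sum()          # L_zero summed over all pulled queries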
To be more specific, as for level set S_f_θ(q), the neighbor points for query point q∈Q are required. These points should lie on the level set S_f_θ(q) same as q, allowing us to filter the level set S_f_θ(q) using the same filter as described in <ref>. However, obtaining 𝒩(q, S_f_θ(q)) in P is not feasible, since all input points P are situated on the zero level set instead of the S_f_θ(q) level set. To address this issue, we propose a technique for identifying neighbors of q on level set S_f_θ(q), by projecting the input points P inversely onto the specific level set S_f_θ(q) based on the gradient, as depicted in <ref>(b). The projected neighbor points can be represented as in <ref>. Filtering across multiple level sets helps to enhance the performance of our method by optimizing the consistency between different level sets within the SDF field, We further showcase this evidence in the ablation study detailed in Section <ref>. 𝒩(q, S_f_θ(q)) = {p̂ |p̂ = p + f_θ(q) ∇ f_θ(p)/||∇ f_θ(p)||, p∈𝒩(q̂, 𝒮_0))}. Based on the above analysis, we can filter the level sets S_f_θ(q) by minimizing <ref> over all sample points Q through <ref>, equivalent to optimizing the following loss: L_field = ∑_q∈Q d_bi(q). [9]r0pt < g r a p h i c s > (a) Searching neighbors directly for q̂. (b) Searching neighbors for NN(q) instead of q̂. It is worth noting that for a fixed query point q, the pulled query point q̂ dynamically changes when training the neural network, which results in a time-consuming process to repeatedly conduct neighbor searching for q̂. To handle this matter, we substitute the 𝒩(q̂, 𝒮_0) with 𝒩(NN(q), 𝒮_0), where NN(q) denotes the nearest point of q within the point cloud P as shown in <ref>. While this substitution may introduce a slight bias for training, it also ensures the neighbor points are close to q̂, therefore this trade-off between efficiency and accuracy is reasonable. §.§.§ Gradient constraint. The other problem of implicit filtering is gradient degeneration. Overfitting the neural network requires the SDF to be geometrically initialized. We can consider the initialized implicit field as the noisy field and apply our filter directly to train the network from the beginning to fit the raw point cloud by removing the `noise'. However, if the denoise target is too complex, gradient degeneration will occur during the training process. Therefore, we need to add a constraint to the gradient of the SDF. There are two ways for training the neural network to pull query points onto the surface based on NeuralPull <cit.> and CAP-UDF <cit.>. One is minimizing the distance between the pulled point q̂ and the nearest point NN(q) as formulated below: L_pull = 1/M∑_i ∈ [1, M]||q̂_i - NN(q_i)||_2. The other is minimizing the Chamfer distance between moved query points and the raw point cloud: L_CD = 1/M∑_i ∈ [1, M]min_j ∈ [1, N]||q̂_i - p_j||_2 + 1/N∑_j ∈ [1, N]min_i ∈ [1, M]||p_j - q̂_i||_2. A stable SDF can be trained by the losses above since they are trying to move the query points to be in the same distribution with the point cloud, which can provide the constraint for our implicit filter. Here we choose L_CD since the filtered points are likely not the nearest points and L_CD is a more relaxed constraint. §.§.§ Loss function. Finally, our loss function is formulated as: L = L_zero + α_1 L_field + α_2 L_dist + α_3 L_CD, where α_1, α_2, and α_3 is the balance weights for our implicit filtering loss. §.§.§ Implementation details. 
We employ a neural network similar to OccNet <cit.> and the geometric network initialization proposed in SAL<cit.> with a smaller radius the same as GridPull<cit.> to learn the SDF. We use the strategy in NeuralPull<cit.> to sample queries around each point p in P. We set the weight α_3 to 10 to constrain the learned SDF and α_1 and α_2 to 1. The parameters σ_n, σ_p are set to 15^∘, max_p_j ∈𝒩(p̅, 𝒮_f_θ(p̅))(||p̅ - p_j||) respectively. § EXPERIMENTS We conducted experiments to assess the performance of our implicit filter for surface reconstruction from raw point clouds. The results are presented for general shapes in <ref>, real scanned raw data including 3D objects in <ref>, and complex scenes in <ref>. Additionally, ablation experiments were carried out to validate the theory and explore the impact of various parameters in <ref>. §.§ Surface Reconstruction for Shapes [13]R0.63 Comparisons on ABC and Famous datasets. The threshold of F-score (F-S.) is 0.01. 2*Methods 3c|ABC 3cFAMOUS 2-7 CD_L2 CD_L1 F-S. CD_L2 CD_L1 F-S. P2S<cit.> 0.298 0.015 0.598 0.012 0.008 0.752 IGR<cit.> 2.675 0.063 0.448 1.474 0.044 0.573 NP<cit.> 0.095 0.011 0.673 0.100 0.012 0.746 PCP<cit.> 0.252 0.023 0.373 0.037 0.014 0.435 SIREN<cit.> 0.022 0.012 0.493 0.025 0.012 0.561 DIGS<cit.> 0.021 0.010 0.667 0.015 0.008 0.772 Ours 0.011 0.009 0.691 0.008 0.007 0.778 §.§.§ Datasets and metrics. For surface reconstruction of general shapes from raw point clouds, we conduct evaluations on three widely used datasets including a subset of ShapeNet<cit.>, ABC<cit.>, and FAMOUS<cit.>. We use the same setting with NeuralPull<cit.> for the dataset ShapeNet. For datasets ABC and FAMOUS, we use the train/test splitting released by Points2Surf<cit.> and we sample points directly from the mesh in the ABC dataset without other mesh preprocessing to keep the sharp features. For evaluating the performance, we follow NeuralPull to sample 1×10^5 points from the reconstructed surfaces and the ground truth meshes on the ShapeNet dataset and sample 1×10^4 on the ABC and FAMOUS datasets. For the evaluation metrics, we use L1 and L2 Chamfer distance (CD_L1 and CD_L2) to measure the error. Moreover, we adopt normal consistency (NC) and F-score to evaluate the accuracy of the reconstructed surface, the threshold is the same with NeuralPull. §.§.§ Comparisons. [14]R0.5 < g r a p h i c s > Visualization of level sets on a cross section. To evaluate the validity of our implicit filter, we compare our method with a variety of methods including SPSR<cit.>, Points2Surf (P2S)<cit.>, IGR<cit.>, NeuralPull (NP)<cit.>, LPI<cit.>, PCP<cit.>, GridPull (GP)<cit.>, SIREN<cit.>, DIGS<cit.>. The quantitative results on ABC and FAMOUS datasets are shown in <ref>, and selectively visualized in <ref>. Our model reaches state-of-the-art performance on both datasets, accomplishing the goal of eliminating noise on each level set while preserving the geometric details. To more intuitively validate the efficacy of our filtering, we visualize the level sets on a cross section in <ref>. We also report the results on ShapeNet which contains over 3000 objects in terms of CD_L2, NC, and F-Score with thresholds of 0.002 and 0.004 in <ref>. The detailed comparison for each class of ShapeNet can be found in the supplementary material. Our method outperforms previous methods over most classes. The visualization comparisons in <ref> show that our method can reconstruct a smoother surface with fine details. To validate the effect of our filter on sharp geometric features. 
We evaluate the edge points by the edge Chamfer distance metric used in <cit.>. We sample 100k points uniformly on the surface of both the reconstructed mesh and ground truth. The edge point p is calculated by finding whether there exists a point q∈𝒩_ϵ(p) satisfied |n_qn_p| < σ, where 𝒩_ϵ(p) represents the neighbor points within distance ϵ from p. The results are shown in <ref> and visualized in <ref>. We set ϵ = 0.01 and σ = 0.1. §.§ Surface Reconstruction for Real Scans §.§.§ Dataset and metrics. For surface reconstruction of real point cloud scans, we follow VisCo<cit.> to evaluate our method under the Surface Reconstruction Benchmarks (SRB)<cit.>. We use Chamfer and Hausdorff distances (CD_L1 and HD) between the reconstruction meshes and the ground truth. Furthermore, we report their corresponding one-sided distances (d_C and d_H) between the reconstructed meshes and the input noisy point cloud. [9]r0pt < g r a p h i c s > Visual comparisons on SRB dataset. §.§.§ Comparisons. We compare our method with state-of-the-art methods under the real scanned SRB dataset, including IGR<cit.>, SPSR<cit.>, Shape As Points (SAP)<cit.>, NeuralPull (NP)<cit.>, and GridPull (GP)<cit.>. The numerical comparisons are shown in <ref>, where we achieve the best accuracy in most cases. The visual comparisons in <ref> demonstrate that our method can reconstruct a continuous and smooth surface with geometry details. §.§ Surface Reconstruction for Scenes §.§.§ Dataset and metrics. To further demonstrate the advantage of our method in the surface reconstruction of real scene scans, we conduct experiments using the 3D Scene dataset. The 3D Scene dataset is a challenging real-world dataset with complex topology and noisy open surfaces. We uniformly sample 1000 points per m^2 of each scene as the input and follow PCP<cit.> to sample 1M points on both the reconstructed and the ground truth surfaces. We leverage L1 and L2 Chamfer distance (CD_L1, CD_L2) and normal consistency (NC) to evaluate the reconstruction quality. §.§.§ Comparisons. We compare our method with the state-of-the-art methods ConvONet<cit.>, LIG<cit.>, DeepLS<cit.>, NeuralPull (NP)<cit.>, PCP<cit.>, GridPull (GP)<cit.>. The numerical comparisons in <ref> demonstrate our superior performance in all scenes even compared with the local-based methods. We further present visual comparisons in <ref>. The visualization further shows that our method can achieve smoother with high-fidelity surfaces in complex scenes. It should be noted that the surface we extract here is not the zero level set but the 0.001 level set since the scene is not watertight. For NeuralPull we use the threshold of 0.005 instead of 0.001 to extract the complete surface therefore the mesh looks thicker. §.§ Ablation Studies We conduct ablation studies on the FAMOUS dataset to demonstrate the effectiveness of our proposed implicit filter and explore the effect of some important hyperparameters. We report the performance in terms of L1 and L2 Chamfer distance (CD_L1, CD_L2× 10^3), normal consistency (NC), and F-Score (F-S.). [6]r0.6 aboveskip=0pt Effect of the Eikonal term. Loss CD_L1 CD_L2 F-S. NC w/ Eikonal, w/o CD 0.009 0.021 0.738 0.899 w/ Eikonal, w/ CD 0.008 0.009 0.774 0.910 w/o Eikonal, w/ CD 0.007 0.008 0.778 0.911 Effect of Eikonal loss. We select the L_CD to prevent the degeneration of the gradient since it both constrains the value and the gradient of the SDF. It also guides how to pull the query point onto the surface. 
Therefore, we omit the Eikonal term used in previous methods such as IGR<cit.>, SIREN<cit.>, and DIGS<cit.>, which have no other direct supervision on the gradient. To verify this choice, we conduct experiments trading off the two terms. The results in <ref> show that applying the Eikonal term alone is less effective than CD alone, and that combining the Eikonal term with CD does not further improve the results, although the difference is small.
Effect of level set filtering. To justify the effectiveness of each term in our loss function, we report the results trained with different combinations in <ref>. L_CD is more suitable for training SDFs from raw point clouds. The zero level set filter helps remove noise while keeping geometric features. Filtering across non-zero level sets improves the overall consistency of the entire signed distance field. Since we assume all input points lie on the surface, the term L_dist is also necessary. <ref> shows a 2D comparison of these losses, demonstrating that our filtering losses reconstruct a field that is aligned across all level sets and maintains geometric characteristics.
Effect of the bidirectional projection. To validate the bidirectional projection distance, we report the results in <ref>. The numerical comparisons show that projecting the distance onto both normals improves the reconstruction quality. Note that using d(p̅) alone also improves the results.
Weight of level set projection loss. We explore the effect of the L_CD loss by adjusting the weight α_3 in <ref>. We report results for the candidates {0, 1, 10} in <ref>, where 0 means L_CD is not used to constrain the gradient. The comparisons in <ref> show that, although our implicit filter can learn SDFs directly, adopting L_CD yields a more stable field. However, if the weight is too large, the filtering effect decreases. Weights ranging from 1 to 10 are usually adequate. For the weights α_1 and α_2, setting them to 1 is always necessary.
Effect of filter parameters. We compare the effect of the parameters σ_n and σ_p in <ref>. The diagonal weight for σ_p denotes the length of the diagonal of the bounding box of the local patch mentioned in <cit.>. The results indicate that the method is relatively robust to parameter variations within a certain range.
§ CONCLUSION
We introduce implicit filtering on SDFs to reduce the noise of the signed distance field while preserving geometric features. We filter the distance field by minimizing the weighted bidirectional projection distance, generating sampling points on the zero level set and neighbor points on non-zero level sets through the pulling procedure. By leveraging the Chamfer distance, we address the gradient degeneration problem. The visual and numerical comparisons demonstrate our effectiveness and superiority over state-of-the-art methods.
§ ACKNOWLEDGEMENTS
The corresponding author is Ge Gao. This work was supported by Beijing Science and Technology Program (Z231100001723014).
http://arxiv.org/abs/2407.13733v1
20240718173506
Revisiting Neutrino Masses In Clockwork Models
[ "Aadarsh Singh" ]
hep-ph
[ "hep-ph" ]
aadarshsingh@iisc.ac.in Indian Institute Of Science, CV Raman Rd, Bengaluru, Karnataka 560012, India § ABSTRACT In this paper, we have looked at various variants of the clockwork model and studied their impact on the neutrino masses. Some of the generalizations such as generalized CW and next-to-nearest neighbour interaction CW have already been explored by a few authors. In this study, we studied non-local CW for the fermionic case and found that non-local models relax the | q | > 1 constraint to produce localization of the zero mode. We also made a comparison among them and have shown that for some parameter ranges, non-local variants of CW are more efficient than ordinary CW in generating the hierarchy required for the ν mass scale. Finally, phenomenological constraints from BR(μ→ e γ ) FCNC process and Higgs decay width have been imposed on the parameter space in non-local and both-sided clockwork models. We have listed benchmark points which are surviving current experimental bounds from MEG and are within the reach of the upcoming MEG-II experiment. Revisiting Neutrino Masses In Clockwork Models Aadarsh Singh July 22, 2024 ============================================== § INTRODUCTION Neutrino masses are one of the most intriguing problems in flavour physics. To generate eV scale masses from natural weak scales of 100 GeV, it would require a suppression of O(10^12) in mass scale which seems unnatural. Several mechanisms have been proposed over the last 45 years starting with the seesaw mechanism and its variants Type II and Type III, etc. and then radiative mechanisms including the Zee and Scotogenic model etc. effective operators which generate neutrino masses <cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>, <cit.>,<cit.>. While the most natural mechanism of neutrino masses consists of Majorana neutrinos, in recent times enough attention has also been given to Dirac neutrino models. To explain the hierarchical nature of flavour parameters like masses & mixing angles, people have suggested localization of wavefunction approach such as in extra dimension models <cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>. This wavefunction localization approach can be used to explain neutrino mass hierarchy as is done by several authors <cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>. Equivalently localization can occur in theory space in deconstruction picture. That leads to the production of a highly suppressed Yukawa coupling and hence can be used to explain the required O(10^12) magnitude suppression in mass scale. Few authors have tried to explain clockwork structures using warped extra dimensions too that lead to a connection between clockwork and linear dilaton theory <cit.>,<cit.>,<cit.>. This clockwork approach has also been used by various authors to explain inflation, gravity, flavour mixing, axion, dark matter and various other shortcomings of SM <cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>.<cit.>. In this paper, firstly we have explored variants of clockwork models where the links in the theory space have been modified but the underlying suppression mechanism is kept intact. Some authors have already explored the generalized clockwork <cit.> and next-to-nearest neighbour clockwork <cit.> and the both-sided clockwork scenario is similar to latticized extra dimension so results for these scenarios are known in the literature. 
We have modified these scenarios and applied them to fermions and found analytical expressions for the 0-mode eigenvector which for non-local theory spaces has a combinatorial factor in it. Since the factorial grows much faster than the exponential, the 0-mode localization can be bigger than the exponential localization such as in randomness-assisted models <cit.>. However, for the models discussed here, the 0-mode components exhibit combinatorial rather than factorial dependence and, as a consequence, are unable to surpass the exponential. Also, we have described a fine-cancellation mechanism rather than the suppression mechanism to generate a hierarchical scale from the weak scale. This fine cancellation mechanism when implemented for neutrino mass generation, is capable of generating eV scale from TeV scale with natural orders for parameters. The couplings generated between SM and BSM fields are not tiny so these models are phenomenologically easily testable compared to suppression-based models. The cancellation mechanism is implemented in theory space that resembles the discretized extra dimension model. The phenomenological signatures for models have been explored using the Flavor-changing neutral current (FCNC) of charged SM leptons. The SM with non-zero neutrino masses contribution for the branching ratio of μ→ e γ decay comes from one loop corrections and is significantly small ≈ O(10^-55). However, the introduction of these BSM-heavy neutral leptons drastically increases these BR numbers to within the grasp of current experimental bounds. The current most stringent bound for this decay comes from MEG with bound at BR < 4.2 × O(10^-13) <cit.> and can be used to restrict the parameter space of these models. We have given the benchmark points for these models producing the observed neutrino mass that survive the current MEG bounds but are within the reach of upcoming MEG-2 bounds BR < 6 × O(10^-14) <cit.>. The paper begins by studying the clockwork variant models in section 2 & 3. In section 2, the model studied is both-sided clockwork (BCW) and it is shown that the suppression produced in this case is a few orders of magnitude stronger than CW for certain values of parameters. In section 3, non-local clockwork models are studied. The analytical expression of 0-mode for a few cases is given and it is shown that they contain a combinatorial term which for a certain range of parameters gives much bigger suppression than CW. In section 4, we revisited the neutrinos in Large extra dimension (LED) models from a dimensional deconstruction (DD) perspective. We have shown that localization of wavefunctions in this scenario is not a required condition to produce small masses and hence generated neutrino mass scales from TeV scales using only natural parameters. Finally, in section 5 a detailed analysis of BSM effects on FCNC BR is given and some comments about phenomenology in colliders and the contribution of these BSM fields to Higgs mass & width have been made. Throughout the paper we consider the following neutrino mass and mixing values <cit.>, <cit.>: § LOCAL CLOCKWORK MODELS - MASS MODEL The clockwork (CW) models use mechanisms based on suppression by heavy scales to obtain hierarchically small couplings, leading to mass scales that are much smaller than the fundamental parameters of the model. 
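To make the size of the required suppression concrete, a back-of-envelope sketch (not part of the original derivation) estimates how many gears a uniform clockwork needs to bridge the TeV-to-eV gap, using |q| values that also appear in the benchmarks quoted later:

import numpy as np

# Target: an O(eV) neutrino mass from an O(TeV) fundamental scale,
# i.e. an effective suppression of roughly 10^-12.
suppression_needed = 1.0 / 1e12

# In the uniform clockwork the zero mode overlaps the last gear as ~ |q|^-n,
# so the required number of gears scales as n ~ ln(1/suppression)/ln|q|.
for q in (2, 3):
    n_gears = np.log(1.0 / suppression_needed) / np.log(q)
    print(f"|q| = {q}: roughly n = {n_gears:.0f} gears")
# |q| = 2 gives n ~ 40, matching the n = 40, q = -2 benchmark used for comparisons below.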
The full Lagrangian is described as ℒ = ℒ_SM + ℒ_NP + ℒ_int with ℒ_SM representing the lagrangian for the standard model given by ℒ_SM = ℒ_Gauge + ℒ_Fermion + ℒ_Higgs + ℒ_Yukawa ℒ_NP represents the physics of new fields and ℒ_int is the interaction lagrangian between SM fields and new fields. For the new physics lagrangian, we will consider chiral fermionic fields with only Dirac couplings. ℒ_NP = ℒ_kin - ∑_i,j=1^nL_iℋ_i,jR_j + h.c. with ℋ_i,j being the Hamiltonian representing the connection between different fields. L_i and R_i represent the clockwork gear fields. The Hamiltonian and new physics Lagrangian for the generalized CW are given below <cit.> :- ℋ_i,j =m_i δ_i,j + m_iq_iδ_i+1,j ℒ_NP = ℒ_kin - ∑_i^n m_i L_iR_i - ∑_i^n m_iq_i L_iR_i+1 + h.c. Corresponding to λ_0 = 0 eigenvalue, the eigenvector for the GCW is given by Λ_0= {(-1)^nq_nq_n-1… q_1,(-1)^n-1q_nq_n-1… q_2,…,(-1)^1q_n,1} , Λ_0 = 𝒩_0 Λ_0 with 𝒩_0 as the normalizing factor and Λ_0 is the normalized 0-mode. Hence for q_i > 1, there will be suppression in the components of 0-mode eigenvector as it will be localized on a certain site. This case along with its various phenomenology has been explored by <cit.>. The uniform CW (UCW) model is a limiting scenario of generalized CW with q_i = q ∀ i is studied in <cit.>,<cit.>. §.§ Both sided Clockwork This scenario is an extension of CW Hamiltonian where fermions of both chiralities are connected to each other for neighbouring matter fields. The Hamiltonian is diagrammatically represented in Fig. <ref> and is written as ℋ_i,j =m_i δ_i,j + m_iq_iδ_i+1,j + q'_im_i δ_i,j+1 ℒ_NP = ℒ_kin - ∑_i^n m_i L_iR_i - ∑_i^n m_iq_i L_iR_i+1 - ∑_i^n-1 m_iq'_i L_i+1R_i + h.c. with i ∈ {1,2,...n} and j ∈ {1,2,...n+1} and L_i and R_i being the clockwork gear fields. This Hamiltonian looks similar to the deconstruction scenario but qualitatively it is quite different. In the deconstruction scenario, 0-mode was not always produced, certain relations among fundamental parameters have to be satisfied and also there were constraints on the size of the lattice for 0-mode to exist but in both-sided CW all fundamental parameters can be independent of each other along with the arbitrary number of lattice sites, 0-mode is always produced as seen in the theorem in Appendix-<ref>. Thus this gives us much more freedom from the constraints of fundamental parameters to work with. The matrix for fermionic mass in this model in {L_1, L_2, …, L_n} and {R_1, R_2, …, R_n+1} basis is given by M_BCW = [ m_1 q_1m_1 0 0 … 0 0 0; q^'_1m_1 m_2 q_2m_2 0 … 0 0 0; 0 q^'_2m_2 m_3 q_3m_3 … 0 0 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮ ⋮; 0 0 0 … … q^'_n-1m_n-1 m_n q_nm_n ]_n × n+1 Once again, the right-handed fermions will have a 0-mode as per theorem implies. The null basis for right-fermionic field matrix M^†_CWM_CW is same as the null basis of M_CW as M^†_CW M_CWΛ_0 = M^†_CW (M_CWΛ_0) = M^†_CW (0) = 0 In the uniform limit, the 0-mode eigenvector behaves as ([ 1 q 0 0 … 0 0 0; q^' 1 q 0 … 0 0 0; 0 q^' 1 q … 0 0 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮ ⋮; 0 0 0 … … q^' 1 q ])([ v_1; v_2; v_3; ⋮; v_n+1 ])=0 This gives the following recurrence relation for the 0-mode component q'v_i-1 + v_i + qv_i+1 = 0, i ∈{2,3,...n} with a boundary condition v_1 = -qv_2. Solving the recurrence relation with boundary condition we get the k^th component value as v_k = c 2^-k((-√(1-4 q q')+1/q)^k-(√(1-4 q q')-1/q)^k) where c is some arbitrary constant. It is fixed by imposing the normalization condition on the null vector ∑_i=1^n+1 v^*_i v_i = 1. 
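Before writing out the normalization constant explicitly, the closed-form solution of this recurrence can be sanity-checked numerically. The following sketch (with illustrative parameter values satisfying |qq'-1|<1) builds the uniform both-sided clockwork mass matrix, extracts its right null vector, and compares the suppression of the last-gear component with the ordinary clockwork estimate |q|^-n:

import numpy as np

def bcw_mass_matrix(n, q, qp, m=1.0):
    """Uniform both-sided clockwork mass matrix (n x (n+1)), in units of m."""
    M = np.zeros((n, n + 1))
    for i in range(n):
        M[i, i] = m                # diagonal mass term  m L_i R_i
        M[i, i + 1] = m * q        # clockwork link      m q  L_i R_{i+1}
        if i > 0:
            M[i, i - 1] = m * qp   # reversed link       m q' L_{i+1} R_i
    return M

n, q, qp = 20, -3.0, -0.2          # illustrative values with |q q' - 1| < 1
M = bcw_mass_matrix(n, q, qp)

# The unit-norm right null vector is the massless right-handed mode; M v0 = 0
# encodes exactly the recurrence q' v_{i-1} + v_i + q v_{i+1} = 0 and v_1 = -q v_2.
v0 = np.linalg.svd(M)[2][-1]
print("||M v0||                    =", np.linalg.norm(M @ v0))   # ~1e-15
print("overlap with the last gear  =", abs(v0[-1]))              # suppressed zero-mode component
print("ordinary CW estimate |q|^-n =", abs(q) ** (-n))           # for comparison

For this choice the last-gear overlap falls well below |q|^-n, illustrating numerically the enhancement discussed below.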
c = 1/√(∑_i=1^n+1{2^-k((-√(1-4 q q')+1/q)^k-(√(1-4 q q')-1/q)^k)}^2 ) The eigenvalues for n = 2 case are given by λ_i = {0,m^2/2(-√((-(q'_1)^2-q_1^2-q_2^2-2)^2-4 (q_1^2 (q'_1)^2-2 q_1 q'_1+q_1^2 q_2^2+q_2^2+1))+(q'_1)^2+q_1^2 +q_2^2+2),m^2/2(√((-(q'_1)^2-q_1^2-q_2^2-2)^2-4 (q_1^2 (q'_1)^2-2 q_1 q'_1 +q_1^2 q_2^2+q_2^2+1))+(q'_1)^2 +q_1^2+q_2^2+2)} and unnormalized 0-mode eigenvector is given by Λ_0 = {-q_1 q_2/q_1 q'_1-1,q_2/q_1 q'_1-1,1} Hence as q'_1 → 1/q_1, the 0-mode becomes more and more localized. In uniform limit, q_i = q and q'_i = q' ∀ i , the eigenvalues and 0-mode eigenvector is λ_i = {0,-m^2/2√(4 (q'+q)^2+q'^4)+1+q'^2/2+q^2,m^2/2√(4 (q'+q)^2+q'^4)+1+q'^2/2+ q^2} Λ_0 = {-q^2/q' q-1,q/q' q-1,1} the first component of 0-mode will be greater than that in the normal CW model for | q'q - 1 | < 1. As q' → 1/q, first two components → ∞ hence 0-mode localization on last site → 0. (q|q')∈ℝ, (q'<02/q'<q<0)(q'>0 0<q<2/q') for these sets of values, the double-sided CW model will produce greater suppression than ordinary CW. For n = 2, q = -3, CW produces 10^-1 order suppression but for these set of values with q' = -0.33, this model produces 10^-3 order of suppression which is O(2) magnitude bigger suppression than CW. Similarly, the 0-mode eigenvector for n = 3 in the uniform limit can be found Λ_0= {q^3/2 q q'-1,-q^2/2 q q'-1,-q (q q'-1)/2 q q'-1,1} again the first component of 0-mode will be greater than that in the normal CW model for | 2q'q - 1 | < 1. In this model apart from CW localization of q^n in the numerator of the first component of 0-mode, there is a denominator too which depends on the extra parameter introduced by coupling left-handed fermions of i^th group to the right-handed fermions of (i-1)^th group. The suppression of coupling produced will be more than CW in this scenario for the following set of parameters (q|q')∈ℝ, (q'<01/q'<q<0)(q'>0 0<q<1/q') For n = 3 and q = -2, CW produces 10^-1 order suppression whereas this model with q' = -0.24, produces 10^-3 order suppression, 2 orders smaller than ordinary CW. To compare with CW, we took n = 40 gears with q = -2 to produce eV mass from the TeV scale but here it can be done with n = 20 for q = -2 and q' = -0.15. To get large localization, we took both q and q' to be of the same sign. For opposite signs, as can be seen from eq.(<ref>), the denominator would not be that small and so localization will not be that large. This model will work better than CW for parameter values of q' ∈ [-0.539,0], there are three more intervals of smaller length but choosing q' from those intervals will convert the natural hierarchy problem into the fine-tuning problem. Now to study the effect of SM neutrino coupling with CW fermions perturbatively, we will consider the interaction term in Lagrangian of the same form as in GCW. ℒ_int=-Y HL̅_L R_n+1+ h.c. The Dirac mass matrix in the basis {ν,L_n, L_n-1,L_1 } and {R_n+1, R_n, R_n-1, R_1 } for Y = 0 is given by M = m [ [1.5em][c]0 [1.5em][c]0 0 0 … 0; [1.5em][c]q [1.5em][c]1 q' 0 … 0; 0 [1.5em][c]q [1.5em][c]1 q' … 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 … [1.5em][c]q [1.5em][c]1 ]_(n+1) × (n+1) The masses for right-handed CW fermions will be again given by m√(λ_i), with λ_i denoting the eigenvalues of the matrix M^†M/m^2. 
M^†M = m^2 [ [1.5em][c]q^2 [1.5em][c]q q q' 0 … 0; [1.5em][c]q [3em][c]1+q^2 q+q' qq' … 0; qq' [1.5em][c]q+q' [4.2em][c]1+q^2+q'^2 q+q' … 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 … [1.5em][c]q+q' [3em][c]1+q'^2 ]_(n+1) × (n+1) For non-zero coupling Y, the eigenstates and eigenvalues change depending on the value of Y. Consider p to be the perturbative coupling strength then the Dirac mass matrix M becomes M = m [ [1.5em][c]p [1.5em][c]0 0 0 … 0; [1.5em][c]q [1.5em][c]1 q' 0 … 0; 0 [1.5em][c]q [1.5em][c]1 q' … 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 … [1.5em][c]q [1.5em][c]1 ]_(n+1) × (n+1) The right-handed fermions have masses m√(λ_i) with λ_i being the eigenvalues of the following mass matrix:- M^† M/m^2 = [ [3em][c]p^2+q^2 [1.5em][c]q q q' 0 … 0; [1.5em][c]q [3em][c]1+q^2 q+q' qq' … 0; qq' [1.5em][c]q+q' [4.2em][c]1+q^2+q'^2 q+q' … 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 … [1.5em][c]q+q' [3em][c]1+q'^2 ]_(n+1) × (n+1) Again we can write this matrix as the sum of the Dirac matrix without Yukawa neutrino coupling and a perturbation matrix. M^† M = M^†M + δ M^2 with perturbation matrix given by δ M^2 = m^2 [ [1.5em][c]p^2 [3em][c]0_1× n; [3em][c]0_n× 1 [3em][c]0_n× n; ]_(n+1) × (n+1) As the perturbation matrix in CW fermions basis is again independent of neighbouring CW coupling strength parameters and depends on SM field coupling strength and the mass scale, the leading-order corrections to the eigenvalues are proportional to p. δλ_i=⟨Λ^(i)|δ M^2/m^2| Λ^(i)⟩=p^2 f(q,q')=O(p^2) again Λ_i denotes eigenvectors of the unperturbed matrix for BCW (both-sided clockwork) case eq.(<ref>) and f(q,q') denotes function f coming from the dependence of eigenvector Λ_i's components on variables q and q'. Once again the leading order corrections are of the second order with respect to p. In this scenario too, the perturbative eigenvector analysis in the appendix of <cit.> holds true up to the order of p. The KK mass spectrum for Clockwork gears and their coupling strength with SM neutrino is shown in Fig. <ref> . For this scenario n = 20 gears are considered with m_i = 1 TeV, q_i = -3 and q'_i = -0.2. Fig. <ref> demonstrates the localization of different eigenvectors in CW fields on different sites. The Yukawa coupling strength to SM neutrino y is considered to be 0.1. § NON-LOCAL CLOCKWORK MODELS §.§ NNN CW In non-local CW theory space, matter fields corresponding to groups which are not adjacent in the theory space/moose diagram also have link fields connecting them. These connections are formulated in the model by modifying the underlying Hamiltonian in the Lagrangian of the model. The Hamiltonian for NNN-CW (next-to-nearest neighbour clockwork) is given by ℋ_i,j =m_i δ_i,j + q_i^(1)m_iδ_i+1,j +q_i^(2)δ_i+2,j ℒ_NP = ℒ_kin - ∑_i^n m_i L_iR_i - ∑_i^n m_iq_i^(1)L_iR_i+1 - ∑_i^n-1 m_iq_i^(2)L_iR_i+2 + h.c. with i ∈ {1,2,...n} and j ∈ {1,2,...n+1}. In this model too, the mass matrix for right-handed CW fermions will have a 0 mode as per the theorem. 
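The existence of this zero mode follows purely from the rectangular n × (n+1) shape of the mass matrix (the theorem proved in the appendix), independently of the specific link pattern. A minimal numerical illustration of this point:

import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random entries stand in for an arbitrary pattern of clockwork links:
# only the rectangular n x (n+1) shape matters for the existence of the zero mode.
M = rng.normal(size=(n, n + 1))

eigvals = np.linalg.eigvalsh(M.conj().T @ M)
print("smallest eigenvalue of M^dag M :", eigvals[0])     # zero up to machine precision

null_vec = np.linalg.svd(M)[2][-1]                        # the massless right-handed mode
print("||M @ null_vec||               :", np.linalg.norm(M @ null_vec))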
The matrix for the above Hamiltonian <ref> in {L_1,L_2,...L_n } and {R_1,R_2,....R_n+1} basis is given by M_CW = [ [1.5em][c]m_1 [2em][c]m_1q_1^(1) [2em][c]m_1q_1^(2) 0 … 0; [1.5em][c]0 [1.5em][c]m_2 [2em][c]m_2q_2^(1) [1.5em][c]m_2q_2^(2) … 0; 0 [1.5em][c]0 [1.5em][c]m_3 [2em][c]m_3q_3^(1) … 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 … [3em][c]m_n-1 [3.7em][c]m_n-1q_n-1^(1) [3.8em][c]m_n-1q_n-1^(2); 0 0 0 … [1.5em][c]m_n [2em][c]m_nq_n^(1) ]_n × n+1 The K^th component for null space basis of NNN CW in the uniform limit case, m_i = m, q_i^(1) = q and q_i^(2) = q' ∀ i is given by Λ_0^K = ∑_{k_i,k_j}^ (k_i+k_j)!/k_i!k_j!(-mq')^k_i(-mq)^k_j/m^k_i+k_j with 2k_i + k_j = n + 1 - K Λ_0 = {Λ_0^1, Λ_0^2,,Λ_0^n+1} k_i and k_j ∈ ℕ_0 = {0,1,2,3,4, ... }. The normalized 0-mode Λ_0 is given by 𝒩_0Λ_0, with 𝒩_0 representing the normalizing factor. Using Λ_0^†Λ_0 = 1, we get 𝒩_0 = 1/√(∑_i=1^n+1Λ_0^i*Λ_0^i) The 0-mode eigenvector for n = 2 case is Λ_0 = {-m_2 m_1q_1^(2)-m_1q_1^(1) m_2q_2^(2)/m_1 m_2,-m_2q_2^(1)/m_2,1} in the uniform limit, m_i = m, m_iq_i^(1) = mq and m_iq_i^(2) = q' ∀ i, it will reduce to Λ_0 = {-(q'-q^2),-q,1} it will produce bigger suppression than CW for (q|q')∈ℝ, ( q<0(q'≤ 0 q'≥ 2 q^2)) q=0(0<q(q'≤ 0 q'≥ 2 q^2)) Now for n = 3 in the limiting case, the 0-mode eigenvector is {-(q^3-2 q q'),-(q'-q^2),-q,1} with conditions (q|q')∈ℝ, (q<0(q'<0 q'>q^2-q^3))(0<q(q'<0 q'>q^3+q^2)) it produces greater suppression than CW. To compare with CW, we took n = 40 gears with m = 1 TeV, q = -2 to produce O(1) eV mass from the TeV scale, here it can be done with n = 25 for q = -2 and q' = -3. One will notice that we took -ve values for coupling parameters q_i, this is necessary to get bigger components in 0-mode, for +ve values of q_i, the components cancel within themselves as can be seen in eq.(<ref>) and hence does not give large localization. Apart from faster localization, the NNN CW model relaxes the condition of q > 1 for localization to take place. This model achieves localization even for q = 1 due to extra combinatorics factors in the components of 0-mode. Again to study the effect of SM neutrino coupling with NNN CW fermions perturbatively, we will consider the interaction term in Lagrangian of the same form as before eq.(<ref>). ℒ_int=-Y HL̅_L R_n+1+ h.c. The Dirac mass matrix in the basis {ν,L_n, L_n-1,L_1 } and {R_n+1, R_n, R_n-1, R_1 } for Y = 0 is given by M = m [ [1.5em][c]0 [1.5em][c]0 0 0 … 0; [1.5em][c]q [1.5em][c]1 0 0 … 0; q' [1.5em][c]q [1.5em][c]1 0 … 0; 0 q' q 1 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 … [1.5em][c]q [1.5em][c]1 ]_(n+1) × (n+1) The masses for right-handed CW fermions will be again given by m√(λ_i), with λ_i denoting the eigenvalues of the matrix M^†M/m^2. M^†M = m^2 [ q^2+q'^2 q+qq' q' 0 … 0 0; q+qq' 1+q^2+q'^2 q+qq' q' … 0 0; q' q+qq' 1+q^2+q'^2 q+qq' … 0 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮; 0 0 … q' q+qq' 1+q^2 q; 0 0 … 0 q' q 1 ]_(n+1) × (n+1) For non-zero coupling Y, the eigenstates and eigenvalues change depending on the value of Y. 
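Before treating the Yukawa perturbation in detail, the combinatorial zero-mode expression above can be cross-checked numerically. The sketch below is illustrative only (it uses the q = -2, q' = -3 values quoted for the NNN-CW benchmark, but any choice works): it builds the uniform NNN-CW mass matrix, extracts its null vector, and compares it with the closed-form components Λ_0^K:

import numpy as np
from math import comb

def nnn_cw_matrix(n, q, qp, m=1.0):
    """Uniform next-to-nearest-neighbour clockwork mass matrix, n x (n+1)."""
    M = np.zeros((n, n + 1))
    for i in range(n):
        M[i, i] = m
        M[i, i + 1] = m * q
        if i + 2 <= n:
            M[i, i + 2] = m * qp
    return M

def zero_mode_analytic(n, q, qp):
    """K-th component: sum over 2*ki + kj = n + 1 - K of C(ki+kj, ki) (-q')^ki (-q)^kj."""
    comps = []
    for K in range(1, n + 2):
        s, total = n + 1 - K, 0.0
        for ki in range(s // 2 + 1):
            kj = s - 2 * ki
            total += comb(ki + kj, ki) * (-qp) ** ki * (-q) ** kj
        comps.append(total)
    return np.array(comps)

n, q, qp = 8, -2.0, -3.0
M = nnn_cw_matrix(n, q, qp)
numeric = np.linalg.svd(M)[2][-1]
numeric = numeric / numeric[-1]        # scale so the last component is 1, as in the closed form
analytic = zero_mode_analytic(n, q, qp)
print("max |numeric - analytic| =", np.max(np.abs(numeric - analytic)))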
Consider p to be the perturbative coupling strength then the Dirac mass matrix M becomes M = m [ [1.5em][c]p [1.5em][c]0 0 0 … 0; [1.5em][c]q [1.5em][c]1 0 0 … 0; q' [1.5em][c]q [1.5em][c]1 0 … 0; 0 q' q 1 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 … [1.5em][c]q [1.5em][c]1 ]_(n+1) × (n+1) The right-handed fermions have masses m√(λ_i) with λ_i being the eigenvalues of the following mass matrix:- M^† M/m^2 = [ p^2+q^2+q'^2 q+qq' q' 0 … 0 0; q+qq' 1+q^2+q'^2 q+qq' q' … 0 0; q' q+qq' 1+q^2+q'^2 q+qq' … 0 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮; 0 0 … q' q+qq' 1+q^2 q; 0 0 … 0 q' q 1 ]_(n+1) × (n+1) Again we can write this matrix as the sum of the Dirac matrix with 0 neutrino coupling and a perturbation matrix. M^† M = M^†M + δ M^2 with perturbation matrix given by δ M^2 = m^2 [ p^2 0_1× n; 0_n× 1 0_n× n ]_(n+1) × (n+1) As the perturbation matrix in CW fermions basis is again independent of neighbouring CW coupling strength parameters and depends on SM field coupling strength, the leading-order corrections to the eigenvalues are proportional to p. δλ_i=⟨Λ^(i)|δ M^2/m^2| Λ^(i)⟩=p^2 f(q,q')=O(p^2) again Λ_i denotes eigenvectors of unperturbed matrix for next-to-nearest neighbour clockwork (NNN-CW) case and f(q,q') denotes function f coming from dependence of eigenvector Λ_i's components on variables q and q'. Once again the leading order corrections are of the second order with respect to p. In this scenario too, the perturbative eigenvector analysis in the appendix of <cit.> holds true up to the order of p. The KK mass spectrum for Clockwork gears and their coupling strength with SM neutrino is shown in Fig. <ref>. For this scenario n = 20 gears are considered with m_i = 1 TeV, q_i = -2 and q'_i = -3. Fig. <ref> demonstrates the localization of different eigenvectors in CW fields on different sites. The Yukawa coupling strength to SM neutrino y is considered to be 0.1. §.§ Completely Non-local CW In this further extension scenario, we will consider fully non-local theory spaces i.e., theory spaces where the matter fields of each group are connected via link fields to the matter fields of every other group. The underlying Hamiltonian is still considered as rectangular, implying that the number of left chiral fermions is not equal to the number of right chiral fermions. The CW nature of theory space is retained. The theory space is diagrammatically shown in Fig. <ref>. Hamiltonian for this extension can be written as ℋ_i,j = ∑_k=1^n+1 a_i,kδ_i,j-k+1 with i ∈ {1,2,...n} and j ∈ {1,2,...n+1}. Using the CW notation to write the new physics Lagrangian, one gets ℒ_NP = ℒ_kin - ∑_i=1^n m_i L_iR_i - ∑_i=1^n m_iq_i^(1)L_iR_i+1 - ∑_i=1^n-1 m_iq_i^(2)L_iR_i+2 - ∑_i=1^n-2 m_iq_i^(3)L_iR_i+3 - ∑_i=1^n-3 m_iq_i^(4)L_iR_i+4 + - ∑_i=1^n-(n-1) m_iq_i^(n)L_iR_i+n + h.c. = ℒ_kin - ∑_i=1^n m_i L_iR_i - ∑_k=1^n ∑_i=1^n-k+1 m_iq_i^(k)L_iR_i+k + h.c. In this model, the mass matrix for right-handed CW fermions will also have a 0 mode as per the theorem since the Hamiltonian is rectangular. The matrix for this Hamiltonian in {L_1,L_2,...L_n } and {R_1,R_2,....R_n+1} basis is given by M_CW = [ [1.5em][c]a_1,1 [1.5em][c]a_1,2 [1.5em][c]a_1,3 [1.5em][c]a_1,4 … [1.5em][c]a_1,n+1; [1.5em][c]0 [1.5em][c]a_2,2 [1.5em][c]a_2,3 [1.5em][c]a_2,4 … [1.5em][c]a_2,n+1; 0 0 [1.5em][c]a_3,3 [1.5em][c]a_3,4 … [1.5em][c]a_3,n+1; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; [3em][c]0 [3em][c]0 … -[3em][c]a_n-1,n-1 -[3em][c]a_n-1,n -[3em][c]a_n-1,n+1; 0 0 0 … -[1.5em][c]a_n,n -[1.5em][c]a_n,n+1 ]_n × n+1 In CW notations, a_i,i = m_i, a_i,i+1 = m_iq_i^(1), a_i,i+2 = m_iq_i^(2), , a_i,i+n = m_iq_i^(n). 
The K^th component for null space basis of CN-CW (completely non-local clockwork) in the limiting case, a_i,i+k = a_k ∀ i, is given by Λ_0^K = ∑_{k_1,,k_n}^(k_1+k_2+ +k_n)!/k_1! k_n! (-a_n)^k_n (-a_1)^k_1/a_0^k_1+k_2+ +k_n with nk_n + 2k_2 + k_1 = n + 1 - K Λ_0 = {Λ_0^1,Λ_0^2,,Λ_0^n+1} and k_1, k_2, ,k_n ∈ ℕ_0 = {0,1,2,3,4, ... }. The normalized 0-mode Λ_0 is given by 𝒩_0Λ_0, with 𝒩_0 representing the normalizing factor. Using Λ_0^†Λ_0 = 1, we get 𝒩_0 = 1/√(∑_i=1^n+1Λ_0^i*Λ_0^i) To compare with CW, it took n = 40 gears with a = 1, q = -2 to produce O(1) eV mass from the TeV scale, here it can be done with n = 25 for a_0 = 1, a_1 = -2, a_2 = -3 and a_i = -2, i ∈ [3,n]. However, compared to the NNN CW model, this model does not give orders of extra suppression unless hierarchy in fundamental parameters a_is are introduced. Now same as earlier, to study the effect of SM neutrino coupling with CW fermions perturbatively, we will consider the interaction term in Lagrangian of the same form as before eq.(<ref>). ℒ_int=-Y HL̅_L R_n+1+ h.c. The Dirac mass matrix in the basis {ν,L_n, L_n-1,L_1 } and {R_n+1, R_n, R_n-1, R_1 } for Y = 0 is given by M = m [ [1.5em][c]0 [1.5em][c]0 0 0 … 0; q 1 0 0 … 0; q q [1.5em][c]1 0 … 0; q q q 1 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; q q q … q [1.5em][c]1 ]_(n+1) × (n+1) for simplicity, the case with the same value for all couplings is considered. The masses for right-handed CW fermions will be again given by m√(λ_i), with λ_i denoting the eigenvalues of the matrix M^†M/m^2. M^†M = m^2 [ [5.5em][c]nq^2 [6em][c]q+(n-1)q^2 [6em][c]q+(n-2)q^2 [6em][c]q+(n-3)q^2 … [4em][c]q+q^2 [.5em][c]q; [4em][c]q+(n-1)q^2 [3.5em][c]1+(n-1)q^2 [4em][c]q+(n-2)q^2 [4em][c]q+(n-3)q^2 … [4em][c]q+q^2 [.5em][c]q; [4em][c]q+(n-2)q^2 [1.5em][c]q+(n-2)q^2 [4.2em][c]1+(n-2)q^2 [4em][c]q+(n-3)q^2 … [4em][c]q+q^2 [.5em][c]q; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮; [4em][c]q+q^2 [4em][c]q+q^2 … [4em][c]q+q^2 [1.5em][c]q+q^2 [3em][c]1+q^2 [.5em][c]q; [4em][c]q [4em][c]q … [4em][c]q [1.5em][c]q [3em][c]q [.5em][c]1 ]_(n+1) × (n+1) For non-zero coupling Y, the eigenstates and eigenvalues change depending on the value of Y. Consider p to be the perturbative coupling strength then the Dirac mass matrix M becomes M = m [ [1.5em][c]p [1.5em][c]0 0 0 … 0; [1.5em][c]q [1.5em][c]1 0 0 … 0; q [1.5em][c]q [1.5em][c]1 0 … 0; q q q 1 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; q q q … [1.5em][c]q [1.5em][c]1 ]_(n+1) × (n+1) The right-handed fermions have masses m√(λ_i) with λ_i being the eigenvalues of the following mass matrix:- M^† M/m^2 = [ [6em][c]p^2+nq^2 [6em][c]q+(n-1)q^2 [6em][c]q+(n-2)q^2 [6em][c]q+(n-3)q^2 … [4em][c]q+q^2 [1em][c]q; [3em][c]q+(n-1)q^2 [3.5em][c]1+(n-1)q^2 [4em][c]q+(n-2)q^2 [4em][c]q+(n-3)q^2 … [4em][c]q+q^2 [1em][c]q; [4em][c]q+(n-2)q^2 [1.5em][c]q+(n-2)q^2 [4.2em][c]1+(n-2)q^2 [4em][c]q+(n-3)q^2 … [4em][c]q+q^2 [1em][c]q; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮; [4em][c]q+q^2 [4em][c]q+q^2 … [4em][c]q+q^2 [1.5em][c]q+q^2 [3em][c]1+q^2 [1em][c]q; [4em][c]q [4em][c]q … [4em][c]q [1.5em][c]q [3em][c]q [1em][c]1 ]_(n+1) × (n+1) Again as in previous cases we can write this matrix as the sum of the Dirac matrix with 0 neutrino coupling and a perturbation matrix. M^† M = M^†M + δ M^2 with perturbation matrix given by δ M^2 = m^2 [ p^2 0_1 × n; 0_n × 1 0_n × n ]_(n+1) × (n+1) The perturbation matrix is independent of the CW coupling strength parameters the same as in earlier variants of CW. And it depends on the Standard Model (SM) field coupling strength. 
As a result, the leading-order corrections to the eigenvalues of the matrix are proportional to the SM field coupling strength parameter, p. δλ_i=⟨Λ^(i)|δ M^2/m^2| Λ^(i)⟩=p^2 f(q)=O(p^2) Λ_i denotes eigenvectors of the unperturbed matrix for the CN-CW case. In a general scenario, f(q^(1),q^(2),q^(3),...,q^(n)) will be a function depending on q^(i) coupling strength. Once again the leading order corrections are of the second order with respect to p. The KK mass spectrum for Clockwork gears and their coupling strength with SM neutrino is shown in Fig. <ref>. For this scenario n = 20 gears are considered with m_i = 1 TeV, q_i^(j) = -2 ∀ i,j . Fig. <ref> demonstrates the localization of different eigenvectors in CW fields on different sites. The Yukawa coupling strength to SM neutrino y is considered to be 0.1. As is evident from the BP of CN-CW the difference in hierarchy produced in this model is not much from the NNN-CW model. Apart from retaining the clockwork nature of left-right chiral fields in non-local extensions, we can consider both-sided non-local CW extensions too. Hamiltonian for this scenario is given by <cit.> (ℋ_long-range )_j, k=a_jδ_j, k+b/r^|j-k|(1-δ_j, k), The Hamiltonian considered is long-range hopping strength decaying Hamiltonian. Dirac mass matrix for this Hamiltonian in {L_1,L_2,...L_n } and {R_1,R_2,....R_n+1} basis is given by M_long-range = [ [1.5em][c]a_1 b/r b/r^2 b/r^3 … b/r^n; b/r [1.5em][c]a_2 b/r b/r^2 … b/r^n-1; b/r^2 b/r [1.5em][c]a_3 b/r … b/r^n-2; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮; b/r^n-2 b/r^n-1 … [1.5em][c]a_n-1 b/r b/r^2; b/r^n-1 b/r^n-2 b/r^n-3 … [1.5em][c]a_n b/r ]_n × (n+1) For n = 2, in the limiting case, right-hand fermionic eigenvalues are λ_i = {0,-b √(16 a^2 r^6+16 a b r^4+b^2 r^4+2 b^2 r^2+b^2)+2 a^2 r^4+3 b^2 r^2+b^2/2 r^4, b √(16 a^2 r^6+16 a b r^4+b^2 r^4+2 b^2 r^2+b^2)+2 a^2 r^4+3 b^2 r^2+b^2/2 r^4} with 0-mode eigenvector given by Λ_0 = {-a b-b^2/a^2 r^2-b^2,-a b r^2-b^2/r (a^2 r^2-b^2),1} Hence as b → ar, the suppression of 0-mode on the last site increases. Now for n = 3, the 0-mode eigenvector has components given by Λ_0 = {-r (a^2 b-2 a b^2+b^3)/a^3 r^4-2 a b^2 r^2-a b^2+2 b^3,-a b-b^2/a^2 r^2+a b-2 b^2,-a^2 b r^4-a b^2 r^2-a b^2-b^3 r^2+2 b^3/r (a^3 r^4-2 a b^2 r^2-a b^2+2 b^3),1} Under certain values of b, the suppression on a specific site will be sufficient to generate masses that are hierarchically smaller, in a natural manner. Finally, we can compare the absolute value of the minimum component of the 0-mode produced by different variants of CW models. The result is shown in Fig. <ref> also the parameters considered for models are mentioned. Even for higher values of q, the trend stays the same i.e., variants of CW are producing bigger localization of 0-mode on a particular site and hence are more efficient to produce hierarchical masses. As shown in Fig. <ref>, the difference in localization strength increases with the number of gears introduced into the system. This is an expected result, as a greater number of gears leads to a larger combinatorial factor element in the 0-mode component. § FINE CANCELLATION IN DIMENSIONAL DECONSTRUCTION MODELS Deconstruction models are the latticized extra spatial dimension models <cit.>. Since, these deconstruction models are the extra dimension models at low energy i.e, the physics produced by these two models at low energy converges, they are termed as dimensional deconstruction (DD) models. 
The DD models with moose diagrams having only nearest neighbour interactions for a finite number of groups are equivalent to the extra latticized spatial dimension picture for a finite number of sites at low energies. This low-energy extra-dimension physics is reproduced by considering structures among abstract groups at high energies. To set the notation we recap here DD which has been reviewed in several papers <cit.>, <cit.>. For link fields Φ_i,j, the action is given by S_link=∫ d^4 x{∑_i,j=1^N[Tr((D_μΦ_i, j)^† D^μΦ_i, j)-1/4 F_i, μν, a F^i, μν, a]-V(Φ)} where a = 1,2,...,m^2-1. The fields are considered to propagate in 4-dimensional spacetime. For SSB to take place, the potential that will produce non-zero vev for link fields can be written as V(Φ) =∑_j=1^N[-M”^2Tr(Φ_i, j^†Φ_i, j)+λ_1Tr(Φ_i, j^†Φ_i, j)^2. .+λ_2(Tr(Φ_i, j^†Φ_i, j))^2+M^'(e^iθdet(Φ_i, j)+ h.c. )] For λ_1, λ_2 and M' > 0 and M”^2 < 0, this potential produces a Mexican hat shape and leads to non-zero vev for the link field. S_matter = ∑_i,j=1^N∫ d^4 x{ψ̅(i γ^μ D_μ) ψ + (L_iΦ_i, j R_j+L_jΦ_j, i R_i) + L_jMR_j + h.c. } Depending on the vevs of link fields various kinds of Hamiltonians and hence theory spaces are produced. If the vevs of link fields connecting only the consecutive groups' matter fields are non-zero i.e, <Φ_i,j> ≠ 0 only for j = i -1 and i+1 then one gets a local theory space if it is non-zero for other i and j than one gets non-local theory spaces. The local DD model is equivalent to the ADD model at low energies. §.§.§ One Flavour Scenario The Hamiltonian in this scenario is the same as the uniform tight-binding model Hamiltonian <cit.>. The mode localization is not present in this Hamiltonian so the wavefunctions are spread throughout the sites. ℒ_NP = ℒ_kin - ∑_i,j=1^nL_iℋ_i,jR_j + h.c. with ℋ_i,j = ϵδ_i,j - t(δ_i+1,j + δ_i,j+1 ) ϵ & t are the new parameters of the model. This Hamiltonian produces several delocalized modes with eigenmasses and eigenvectors given by (<ref>) & (<ref>) λ_k=ϵ -2tcosk π/n+1, for k ∈{1,2, …, n}, and the corresponding χ_j^(k), eigenvectors are given by χ_j^(k)=ρ^k sink j π/n+1, j ∈{1,2, …, n} where ρ^k is the normalization factor for k^th eigenvector. There is no localization of any kind in this model. Since the Dirac mass matrix is symmetric, the rotation of the left and right modes will be identical as shown in Appendix<ref>. The unitary transformation in diagonalisation will make sure different modes are orthogonal to each other. Since the rotation matrices are unitary, the inner product of different modes will be 0 i.e, ∑_i=1^nv_j^iv_k^i = δ_j,k For the SM neutrino interaction to BSM fields with SM Higgs consider the following interaction term in the Lagrangian <cit.>:- [This interaction is the same as is considered in <cit.>. This is similar to localizing left chiral and right chiral neutrinos on opposite branes in RS/ADD, depending on the Higgs profile.] ℒ_int.= Yν_LHR_1 + Yν_RHL_n + h.c. The smallest mass mode for the final mass matrix with weaker couplings Y is given by m_0 ≈ v^2 ∑_i=1^n v_1^i v_n^i/λ_i with v as the expectation value of the Higgs field and v_n^i being the i^th component of n^th eigenvector. If λ_i = λ ∀ i, then m_0 → 0 from unitarity condition. For, λ_i =ϵ- 2t cosi π/n+1, λ_i/ϵ =1- 2t/ϵcosi π/n+1, hence for t ≪ ϵ, λ_i/ϵ → 1 ∀ i. For this case, the approximate value of m_0 is given as m_0 ≈ v^2 ∑_i=1^n v_1^i v_n^i/λ_i = v^2 ∑_i=1^n v_1^i v_n^i/ϵϵ/λ_i = v^2 1/ϵ∑_i=1^n v_1^i v_n^i(1-x_i)^-1 = v^2/ϵ∑_i=1^n v_1^i v_n^i + v^2/ϵ∑_i=1^n v_1^i v_n^i x_i + ... 
m_0 ≈v^2/ϵ∑_i=1^n v_1^i v_n^i x_i where, x_i = 2t/ϵcosi π/n+1 so x_i → 0 ⇒ m_0 → 0. This mechanism will work better for matrices whose eigenvalue spectrum has the same range order as in ADD models. For hierarchical spectra as in warped models, this will not work efficiently. As the separation between the magnitude of minimum and maximum eigenvalue increases, this mechanism's effectiveness in producing hierarchical scale decreases. We find for ϵ = 10, t = 0.5 and n = 8, we get a small mass of the order 0.1 eV from the TeV scale i.e, O(0^12) magnitude smaller scale than the fundamental parameter scales of the theory. Fig. <ref> shows the mass spectra and eigenvectors for some massive modes for n = 15, t = 0.5 and ϵ = 10. The figure demonstrates that modes are not localized and mass spectra are close to degenerate. Fig. <ref> shows a comparison for the smallest mass scale produced between the DD model, uniform clockwork (UCW) and generalized clockwork (GCW) models. This figure shows that for chosen parameters, the mass scale produced by DD is a few orders of magnitude smaller than both UCW and GCW for various values of sites. Fig. <ref> shows the smallest mass scale produced by this model for varying ϵ and t (left) and for varying sites n and t (right). The figure shows that large values of n & ϵ and/or small values of t produce smaller mass scale which is in agreement with the above understanding of the model. §.§.§ Three Flavour Scenario This can be extended to 3 flavour cases to account for all three SM active neutrino masses. The number of sites for each flavour is taken to be the same with differing flavour neutrinos coupling to the BSM fields. These varying couplings will produce different masses for different active neutrinos. The full 3 flavour Lagrangian for this scenario is given by ℒ_NP = L_kin -∑_i,j=1^nL_i^αℋ_i,j^α,βR_j^β + h.c. ℋ_i,j^α,β = ϵ_i^α,βδ_i,j + t^α,β(δ_i+1,j + δ_i,j+1) with the interaction between different flavours of SM and BSM neutrino fields given by ℒ_Int. = Y_1^α,βν_L^αHR_1^β + Y_2^α,βν_R^αHL_n^β + h.c. where α and β are flavor index. For non-diagonal flavour Hamiltonian H^α, β and/or non-diagonal flavour Yukawa coupling Y_1^α,β, Y_2^α,β, this Lagrangian will produce mixing among three neutrino flavours. Here we are considering the case of diagonal flavour matrices hence no mixing is produced. § PHENOMENOLOGICAL SIGNATURES The coupling introduced between BSM fields and SM fields to explain the neutrino mass production will have contributions to SM processes and hence can be phenomenologically tested. ℒ_int=-Y HL̅_L R_n+1+ h.c. After changing the basis to the mass eigenvectors χ_k of the matrix M_CW, the Lagrangian interaction term can be rewritten as <cit.> ℒ_int =-Y L̅_LH𝒰_n+1, kχ_k≡-∑_k=1^n+1 Y_kL̅_LHχ_k One of the biggest observable signatures this extra vertex introduces is the charged lepton flavour violation branching ratio. In SM, the contribution to BR(μ→ e γ) comes via higher-order loops and hence is very tiny O(10^-55) but these new vertices in the Lagrangian lead to drastic increment to this value of BR and hence puts one of the most stringent bound and also is one of the best channels to discover new physics. These new BSM fields can directly manifest themselves in collider experiments as the missing energies or displaced vertex as in LPL depending on their decay widths. These BSM neutrinos will also affect the cross-section of various processes at LEP and Hadron colliders via new Feynman diagrams as explained in the below section. 
Since, the gears-SM interaction term <ref> couples Higgs with these SM-BSM fields, this leads to gears contributing to Higgs self-energy and also to possible Higgs decay width depending on the masses of new fields as discussed below. The other possible phenomenological signatures are discussed in <cit.>. §.§ Branching Ratio of FCNC §.§.§ For Clockwork Variant Models The Feynman diagram contributing to FCNC BR(μ→ e γ) is shown in Fig. <ref>. Since these models introduce extra heavy neutral leptons as the propagator in this diagram, the BR will deviate significantly from the SM FCNC BR. In the clockwork models, the μ→ e γ branching ratio is <cit.>,<cit.> Br(μ→ e γ) =3 α/8 π|𝒜|^2, 𝒜 =∑_α=1^3 ∑_j=1^N+1 V_μα V_e α^*|(U_L α)^0 j|^2 F(m_j, α^2/m_W^2), with F(x)=1/6(1-x)^4(10-43 x+78 x^2-49 x^3+4 x^4+18 x^3 log x) Fig.19 compares BR obtained for Lepton flavour violation in the μ→ e γ process with various CW scenarios. For bigger values of hopping strengths, the BR decreases for all CW variants and most of these variants stabilise after a certain value of the number of CW gears considered. Hence energy scale at the order of 10 TeV with this set of parameters will survive the experimental constraints put up by the MEG experiment BR(μ→ e γ) ≤ 4.2 × 10^-13 <cit.> and are within reach of upcoming experiments for some parameter space with sensitivity up to 6 × 10^-14 <cit.>. §.§.§ For Fine-Cancellation Model Similar to the clockwork model, in this model the FCNC branching ratio for μ→ e γ is given by the above equations (<ref>), (<ref>) since the mixing in the considered scenario is not produced by the model. In the case of mixing being also produced by this model, the BR expression would have been given by a similar expression <cit.>. The elements considered for the mixing matrix were experimental PMNS values with unitarity conditions to good significant digits. The variation of BR for the benchmark point (BP) with varying numbers of sites is shown in the below figure Fig <ref>: As can be seen from the plot, the BP survives the current MEG bounds but they are within the reach of future MEG constraints. The pink region denotes the ruled-out parameter space region. §.§ Collider Signatures The new BSM heavy neutral leptons will have impact on collider physics and their effects can be observed at a high enough energy collider as is elaborated in <cit.>. These new contributions are emerging from the weak sector. These extra contributions in weak currents from the massive BSM neutrinos will affect the cross sections observed at colliders. The SM charged current Lagrangian is ℒ_CC=g/a W_μ^- J_W^μ++ h.c., where g is the SM weak coupling, and J_W^μ+= 1/√(2)e̅_αγ^μν_L α = ∑_j=1^n+1(U_L α)^j/√(2)e̅_αγ^μ P_L ν_j,α . with ν_L α = ∑_j=1^n+1(U_L α)^j P_L ν_j,α Here α represents flavour index, and P_L=1-γ_5/2 is the left-handed projection operator and (U_L α)^j is the component of α flavour neutrino on j^th massive neutral lepton. Similarly, the neutral current will also have contributions from these massive neutrinos. In <cit.> authors have studied the effects of neutrinos in both these CC and NC channels at hadron (p p → 3 l + E_T) and lepton colliders (e^+ e^- → l ν j j) for various CW neutrinos masses and have given the signal-background distributions. The variants of CW models studied in this paper will have similar results as the fundamental mechanism is the same in all variants. The deviations will occur due to slight differences in the masses spectrum and coupling strengths produced in each variant. 
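Before turning to Higgs observables, the μ→ e γ branching ratio above can be organized into a short numerical routine. The following sketch is illustrative only: the loop function is taken from the expression above, while the mixing matrix V, the overlaps |(U_Lα)^{0j}|^2, and the gear masses are placeholder inputs to be supplied from a given benchmark point:

import numpy as np

def loop_F(x):
    """Loop function F(x) entering BR(mu -> e gamma), as given above."""
    return (10 - 43*x + 78*x**2 - 49*x**3 + 4*x**4 + 18*x**3*np.log(x)) / (6*(1 - x)**4)

alpha_em = 1/137.036
m_W = 80.4  # GeV

def branching_ratio(V, U_sq, masses):
    """
    Schematic evaluation of BR(mu -> e gamma).
    V      : 3x3 lepton mixing matrix, rows ordered (e, mu, tau)
    U_sq   : shape (3, N+1), the overlaps |(U_L alpha)^{0 j}|^2
    masses : shape (3, N+1), heavy neutral-lepton masses m_{j,alpha} in GeV
    """
    A = 0.0
    for a in range(3):                      # flavour index alpha
        for j in range(U_sq.shape[1]):      # mass-eigenstate index j
            A += V[1, a] * np.conj(V[0, a]) * U_sq[a, j] * loop_F((masses[a, j]/m_W)**2)
    return 3*alpha_em/(8*np.pi) * abs(A)**2

# Toy input: flavour-universal, degenerate 1 TeV gears and a unitary V.
theta = 0.6
V = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
U_sq   = np.full((3, 4), 0.25)
masses = np.full((3, 4), 1000.0)
print(branching_ratio(V, U_sq, masses))   # ~0: a flavour-universal heavy spectrum cancels in A

As the toy input shows, a fully degenerate and flavour-universal gear spectrum cancels in the amplitude (assuming V is unitary), so the predicted BR is controlled by the spread of gear masses and overlaps across flavours.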
§.§ Higgs Decay Width & Radiative Corrections The nonzero coupling of BSM particles such as clockwork fermions with SM Higgs field will inevitably lead to corrections in Higgs mass. The leading order corrections occur at the 1-loop level. Consider the uniform CW model ℒ_CW = ℒ_kin - ∑_i^n-1 m L_iR_i - ∑_i^n-1 mq L_iR_i+1 + h.c. with SM interaction part of Lagrangian in CW basis as ℒ_int=-Y HL̅_L R_n+ h.c. This term is written in CW mass basis once the link fields achieve vev as (<ref>). Alternatively one can use the basis N_L = (ν_L,N_L1,N_L2,...,,N_Ln) and N_R = (N_R0,N_R1,N_R2,...,,N_Rn) with transformations N_R k=1/√(2)(χ_k+χ_k+n), k=0, … n N_L k=1/√(2)(-χ_k+χ_k+n), k=1, … n . to write it as <cit.> ℒ_int=-∑_k=0^ n Y_kL̅_LH N_Rk + h.c. Once the Higgs achieves a vev and breaks the SM symmetry, the physical basis once again rotates. Using SVD the rotation of left N_L = UN'_L and right basis N_R = VN'_R is determined. The terms giving mass to N_L and N_R after Higgs achieves vev are ℒ_mass =-N_L m_ν^D N_R -v/√(2)∑_k=0^ n Y_kν̅_L N_Rk + h.c. = -N_L m_ν^D N_R - N_L M_int N_R m_ν^D has the form of diagonal matrix and M_int matrix has only one non-zero row in these basis. Using the fact that diagonalization of the matrix m_ν^D + M_int with U and V unitary matrices does not guarantee the diagonalization of the individual matrix m_ν^D and M_int by these same unitary matrices, we find that SM Higgs gets mass corrections also from the fermionic-loops in which running fermions are not the same. The interaction of Higgs with new fields is covered by the following lagrangian : ℒ_h = -h/√(2)∑_k=0^ n Y_kν̅_L N_Rk + h.c. = - h/vN_L M_int N_R + h.c. = - h/vN'_L U^†M_intV N'_R + h.c. since U^†M_intV ≠ M_D is not necessarily a diagonal matrix, if the coupling between the Higgs field and different fermions is strong enough it will have some implications on the Higgs hierarchy problem. The following Feynman diagrams give the radiative corrections to Higgs mass at the 1-loop level: The amplitude for 1-loop with the same fermions in the loop, m_i = m_j = m and y_ij = y_ii is given by Π_R(q, μ)= N_f12 y_ii^2/(4 π )^2(q^2/18-m^2/3+∫_0^1 d x Δ^2 ln(Δ^2/μ^2)) N_f is the number of flavours. The contribution to Higgs's mass is proportional to coupling×mass of fermions. δ m_h^2 ∝ (y^2_ii)(m^2) Hence, it slightly worsens the Higgs boson mass hierarchy similar to other TeV scale BSM models. A similar result is obtained for different fermions in the loop. Calculation for this case is done in the appendix. Finally, consider the impact on Higgs decay width. Consider Higgs decaying to two particles χ_i and χ_j of masses m_1 and m_2 and four-momenta p_1 and p_2 with M mass of Higgs and q four-momentum. H(q) →χ_i(p_1)+χ_j(p_2). The Feynman diagram for the decay is given by <ref>: The decay rate for this process is: Γ=4π/4 M|M̅|^2/(2 π)^2√(E_1^2-m_1^2)/2M with |M̅|^2 =2y_ij^2 (M^2-(m_1+m_2)^2) Now for m_1 + m_2 > M, the decay width will not be positive real hence no decay contribution will be there if the sum of mass of BSM fields is bigger than Higgs boson mass. One can easily tweak the parameters of the described models to get gear of masses heavier than Higgs mass while surviving the BR, and collider constraints and also satisfying the experimentally observed value for neutrino mass. So, Higgs width will not achieve any new gear contributions and no further constraints from Higgs invisible decay width data will be imposed for gear field masses > 125 GeV. 
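The kinematic statement above is easy to encode. The sketch below (with illustrative masses in GeV, and using the rest-frame energy E_1 = (M^2 + m_1^2 - m_2^2)/(2M) derived in the appendix) evaluates the tree-level H → χ_i χ_j width from the expressions above and returns zero whenever m_1 + m_2 exceeds the Higgs mass:

import numpy as np

def higgs_width_to_fermions(y, M=125.0, m1=200.0, m2=150.0):
    """
    Tree-level width for H -> chi_i chi_j from the expressions above (a sketch;
    the gear masses m1, m2 and coupling y are illustrative inputs, masses in GeV).
    Returns 0 when the channel is kinematically closed (m1 + m2 > M).
    """
    if m1 + m2 >= M:
        return 0.0
    E1 = (M**2 + m1**2 - m2**2) / (2*M)          # energy of chi_i in the Higgs rest frame
    p1 = np.sqrt(E1**2 - m1**2)                  # its 3-momentum
    amp_sq = 2 * y**2 * (M**2 - (m1 + m2)**2)    # |Mbar|^2
    return (4*np.pi/(4*M)) * amp_sq/(2*np.pi)**2 * p1/(2*M)

print(higgs_width_to_fermions(y=0.1))                     # 0.0: gears heavier than the Higgs
print(higgs_width_to_fermions(y=0.1, m1=40.0, m2=50.0))   # open channel, finite width in GeV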
§ CONCLUSION In this paper, firstly the variants of well-known clockwork suppression models have been explored. It is shown that the underlying mechanism for variants of clockwork is the same for all cases. We found that for some parameter ranges, the variants of clockwork will require a lesser number of sites to produce the same scale as the ordinary clockwork. The analytical expression for 0-mode in some cases of variants of clockwork is found to contain a combinatorial factor which for 0.5 < | q | < 2 gives significantly enhanced localization and produces smaller scale than ordinary clockwork and for | q | > 2 differed from a few factors to few orders of magnitude from the clockwork depending on the numbers of gears considered. Also, the non-local clockwork models relax the constraint of | q | > 1 to get localization, in these models the localization of 0-mode can be achieved even with coupling terms being equal to mass terms. The smallest mass scale produced by CW for q=1 with n = 40 is O(10^-3) of the fundamental parameter scale whereas with the same number of sites and with q = q' = 1 NNN-CW produced scale O(10^-11). As the number of sites increase, the NNN-CW will produce bigger and bigger suppresses scale compared to CW since the combinatorial factor will increase. Then, we have mentioned the fine cancellation mechanism to produce hierarchically small scales from natural order fundamental parameters in dimensional deconstruction perspective. This mechanism has been applied to SM to account for small neutrino mass problems. It can also be extended to account for PMNS mixing of SM active neutrinos along with their masses. The cancellation mechanism is applied in this paper for a specific case of local underlying theory space though one can consider other non-local theory spaces as well. Finally, phenomenology for all these models was studied using the observable FCNC BR of μ→ e γ process and some comments were made about BSM effects in colliders and Higgs width/mass. We have laid out some benchmark points which satisfactorily produce neutrino mass as per the current experimental observations of mass squared difference and are also surviving current MEG experiment bounds. These Benchmark Points are within the reach of future MEG experiments and hence will be tested soon. All the BSM Lagrangians discussed in this paper can be straightforwardly extended to Majorana neutrino cases. § ACKNOWLEDGEMENTS The author would like to express his sincere gratitude to Prof. Sudhir Vempati for the insightful discussions and guidance throughout the research process. The author would also like to thank G. Kurup for the clarification of their work in the same direction. AS thanks CSIR, Govt. of India for SRF fellowship No. 09/0079(15487)/2022-EMR-I. The author also acknowledges the open-source software tools and community resources that were invaluable in the data analysis and visualization aspects of this work, including Mathematica, Python, and the various scientific computing libraries. The code used by the author for data generation in this paper will soon be made available on GitHub for open access under a Creative Commons (CC) license to allow researchers to freely access, use, and build upon the codes, promoting transparency and reproducibility of the research presented in this paper. unsrt § LINEAR ALGEBRA RESULTS Theorem : Let M be an m×n matrix and N be an n×k matrix then rank(MN) ≤ rank(M). Hence matrix of the form A^†A will always have a non-zero dimensional kernel space. 
Proof : Consider N as matrix A and use the fact that the rank of an m×n matrix M is the dimension of the range R(M) of the matrix M with the range given by R(M) = { y ∈ℝ^m | y = Mx for some x ∈ℝ^n } Consider the rotated left and right basis to be χ_L and χ_R respectively i.e, L = 𝒰χ_L, R = 𝒱χ_R It's easy to show that for a symmetric matrix, the SVD gives identical left and right modes rotation. For a real symmetric positive definite matrix M, M^† = M, MM^† = M^†M 𝒰^†MM^†𝒰 = ℳ_diag 𝒰^†M^†M𝒰 = ℳ_diag 𝒱^†M^†M𝒱 = ℳ_diag hence, 𝒰 = 𝒱 though 𝒰 and 𝒱 are unique only when the original matrix is positive definite. § ONE LOOP CALCULATIONS The amplitude for 1-loop with fermions of different masses is given by i Π_1 L=-N_f(-iy_ij)^2Tr[∫d^4 k/(2 π)^4i(k̸+m_i)/[k^2-m_i^2]i(k̸+q̸+m_j)/[(k+q)^2-m_j^2]] N_f is the number of flavours. In the limit when both fermions in the loop are the same, m_i = m_j = m and y_ij = y_ii. Using dimensional regularization, we get the familiar result Π(q^2)= N_f12 y_ii^2/(4 π )^2μ^2 ϵ∫_0^1 d x Δ^2(1/ϵ̂+ln(Δ^2/μ^2)-1/3) where, ∫_0^1 d x Δ^2=∫_0^1 d x(-q^2 x(1-x)+m^2)=m^2-q^2/6 Thus, we can write the following expression for the amplitude: Π(q^2)= N_f12 y_ii^2/(4 π )^2μ^2 ϵ[1/ϵ̂(m^2-q^2/6)+q^2/18-m^2/3+∫_0^1 d x Δ^2 ln(Δ^2/μ^2)] Using the M S scheme we obtain the radiative mass correction to Higgs as: Π_R(q, μ)= N_f12 y_ii^2/(4 π )^2(q^2/18-m^2/3+∫_0^1 d x Δ^2 ln(Δ^2/μ^2)) Now, consider the scenario with two different fermions coupled to Higgs, m_i ≠ m_j and coupling with Higgs is y_ij. The amplitude is i Π_1 L =-N_f(-iy_ij)^2Tr[∫d^4 k/(2 π)^4i(k̸+m_i)/[k^2-m_i^2]i(k̸+q̸+m_j)/[(k+q)^2-m_j^2]] = -N_fi^4 y^2_ij∫d^4 k/(2 π)^4Tr[(k̸+m_i)/[k^2-m_i^2](k̸+q̸+m_j)/[(k+q)^2-m_j^2]] using, Tr{(k̸+m_i)(k̸+̸q̸+m_j)} = 4(k.q + k^2 + m_im_j) plugging this in (<ref>) and using Feynman parametrization of the propagators i.e, 1/A B=∫_0^1 d x 1/[A x+B(1-x)]^2 Taking A=(k+q)^2-m_i^2 and B=k^2-m_j^2 we get: Ax + B(1-x) = x[(k+q)^2-m_j^2] + (k^2-m_i^2)(1-x) = (k^2 + q^2 + 2qk - m_j^2)x + k^2 - m_i^2 - k^2x + m_i^2x = k^2 - m_i^2 +q^2x+2qkx -m_j^2x +m_i^2x ± q^2x^2 = (k+qx)^2 +q^2x - q^2x^2 - m_j^2x + m_i^2x -m_i^2 = (k+qx)^2 + q^2x(1-x) -m_i^2 +x(m_i^2-m_j^2) = (k+qx)^2 - Δ'^2 with Δ'^2 = -(q^2x(1-x) -m_i^2 +x(m_i^2-m_j^2)), we see that in the limit m_i = m_j, Δ'^2 = Δ^2. 1/(k^2-m_i^2)[(k+q)^2-m_j^2]=∫_0^1 d x 1/[(k+q x)^2-Δ'^2]^2 plugging this parametrization in (<ref>), We obtain the following amplitude expression: i Π(q^2)=-4N_f y_ij^2 ∫_0^1 d x ∫d^4 k/(2 π)^4k^2+m_im_j+k q/[(k+q x)^2-Δ'^2]^2 After performing the variable shift k → k+x q ≡ l, the numerator becomes: k^2+m_im_j+k q = (l-qx).q +(l-qx)^2+m_im_j = l.q -q^2x +l^2 +q^2x^2 -2l.qx +m_im_j eliminating all the linear terms in l^μ and redefining l as k, we obtain: i Π(q^2)=-4 N_f y^2_ij∫_0^1 d x ∫d^4 k/(2 π)^4k^2+m_im_j-q^2x(1-x)/[k^2-Δ'^2]^2 Next, we use Dimensional Regularization to compute the integrals of the following type I_1 = ∫d^dk/(2π)^dk^2/(k^2+Δ')^2 = 1/(4π)^d/2d/2Γ(2-d/2-1)/Γ(2)(1/Δ')^2-d/2-1 and I_2 = ∫d^dk/(2π)^d1/(k^2+Δ')^2 = 1/(4π)^d/2d/2Γ(2-d/2)/Γ(2)(1/Δ')^2-d/2 using dim. reg. 
in (<ref>) about d = 4 - ϵ, and expanding up to O(ϵ) we get: Π(q^2)= - 4N_fy^2_ij∫_0^1 dx [Δ'^2/4π^2ϵ + Δ'^2(2log(μ^2)-2γ_E+1+log(16π^2)-2log(-Δ'^2))/16π^2 +(m_im_j -q^2x(1-x)){1/8π^2ϵ - γ_E+log(-Δ'^2) - log(4π)-log(μ^2)/16π^2}] +O(ϵ) using the renormalization scheme and putting all the divergences in the counterterms, we get Π_R(q^2, μ) =- 4N_fy^2_ij∫_0^1 dx [ Δ'^2(1-2log(-Δ'^2/μ^2))/16π^2 + (m_im_j -q^2x(1-x)){ - log(-Δ'^2/μ^2)/16π^2}] +O(ϵ) = - 4 N_f y^2_ij/16π^2[ m_i^2/2+1/2(m_j^2-q^2)+q^2/3 -1/18q^4{ 6 (m_i^4-2 m_i^2 (m_j^2+q^2)+(m_j^2-q^2)^2)^3/2× tanh ^-1(m_i^2-m_j^2-q^2/√(m_i^4-2 m_i^2 (m_j^2+q^2)+(m_j^2-q^2)^2))-6 (m_i^4-2 m_i^2 (m_j^2+q^2)+(m_j^2-q^2)^2)^3/2 ×tanh ^-1(m_i^2-m_j^2+q^2/√(m_i^4-2 m_i^2 (m_j^2+q^2)+(m_j^2-q^2)^2))+2 q^2 (3 m_i^4-3 q^2 (-3 m_i^2-3 m_j^2+q^2) ×log(-m_j^2/μ ^2)-6 m_i^2 (m_j^2+2 q^2)+3 m_j^4-12 m_j^2 q^2+5 q^4)-3 log(m_i^2) (m_i^6-3 m_i^4 (m_j^2+q^2) +3 m_i^2 (m_j^4-q^4)-(m_j^2-q^2)^3)+3 log(m_j^2) (m_i^6-3 m_i^4 (m_j^2+q^2)+3 m_i^2 (m_j^4-q^4)- -36 m_i m_j q^2-6 m_j^4+6 m_j^2 q^2+5 q^4)-6 √(m_i^4-2 m_i^2 (mj^2+q^2)+(mj^2-q^2)^2)(2 m_i^4- m_i^2 (4 m_j^2+q^2)+6 m_i m_j q^2+2 m_j^4-m_j^2 q^2-q^4) tanh ^-1(m_i^2-m_j^2-q^2/√(m_i^4-2 m_i^2 (mj^2+q^2)+(mj^2-q^2)^2)) +6 (2 m_i^4-m_i^2 (4 m_j^2+q^2)+6 m_i m_j q^2+2 m_j^4-m_j^2 q^2-q^4) √(m_i^4-2 m_i^2 (mj^2+q^2)+(mj^2-q^2)^2) tanh ^-1(m_i^2-m_j^2+q^2/√(m_i^4-2 m_i^2 (mj^2+q^2)+(mj^2-q^2)^2))-3 log(m_j^2) (2 m_i^6-3 m_i^4 (2 m_j^2+q^2) +6 m_i^3 m_j q^2+6 m_i^2 m_j^4+m_i(6 m_j q^4-6 m_j^3 q^2)-2 m_j^6+3 m_j^4 q^2-q^6)+3 log(m_i^2) (2 m_i^6-3 m_i^4 (2 m_j^2+q^2)+6 m_i^3 m_j q^2+6 m_i^2 m_j^4+m_i(6 m_j q^4-6 m_j^3 q^2)-2 m_j^6+3 m_j^4 q^2-q^6) }] the energy independent contribution to Higgs's mass is proportional to coupling×mass of fermions. δ m_h^2 ∝ (y^2_ij)(m_i^2 + m_j^2) § HIGGS DECAY WIDTH The decay rate for this process is: d Γ=1/2 Md^3 p_1/(2 π)^3 2 E_1d^3 p_2/(2 π)^3 2 E_2(2 π)^4 δ^4(q-p_1-p_2)|M̅|^2, with |M̅|^2 as the invariant matrix element squared. Momentum conservation imposes p_1 · p_2=(M^2-m_1^2-m_2^2) / 2 with p_1^2=m_1^2 and p_2^2=m_2^2. The corresponding amplitude M is given by: .M=-i y_iju̅(p_1) v_( p_2) leading to: |M̅|^2 =y_ij^2(Tr[p̸_1 p̸_2]-m_1m_2 Tr[1]) = 4p_1.p_2-4m_1m_2 =2y_ij^2 (M^2-(m_1+m_2)^2) . Since the amplitude square |M|^2 is momentum-independent it can be taken outside the integration. Using d^3 p_2 / 2 E_2=d^4 p_2 δ^+(p_2^2-m_2^2) and carrying out the d^4 p_2 integration it comes out d Γ=1/2 M|M̅|^2/(2 π)^2∫d^3 p_1/2 E_1δ^+((q-p_1)^2-m_2^2) . In the rest frame of the Higgs boson, q=(M, 0,0,0), hence the argument of the δ^+ function reduces to (M^2-2 M E_1+m_1^2-m_2^2) . Now, integrating the expression over p1 to get the decay width as: d Γ=1/2 M|M̅|^2/(2 π)^2∫p_1^2 d p_1 dϕ d θ sinθ/2 E_1δ^+(M^2-2ME_1+m_1^2-m_2^2) . using the fact that p_1 d p_1 = E_1 d E_1, d Γ=1/4 M|M̅|^2/(2 π)^2∫p_1 E_1 d E_1 dϕ d θ sinθ/ E_1δ^+(M^2-2ME_1+m_1^2-m_2^2) . this integral is non-zero for the value of E_1 that will lead to the Dirac-Delta function with 0 as the argument i.e, M^2 -2ME_1 +m_1^2-m_2^2 = 0 E_1 = M^2+m_1^2-m_2^2/2M Γ=4π/4 M|M̅|^2/(2 π)^2√(E_1^2-m_1^2)/2M
http://arxiv.org/abs/2407.13140v1
20240718035547
Mode Hopping with OAM-Based Index Modulation
[ "Liping Liang", "Wenchi Cheng", "Wei Zhang", "Hailin Zhang" ]
eess.SP
[ "eess.SP" ]
Mode Hopping With OAM-Based Index Modulation Liping Liang^†, Wenchi Cheng^†, Wei Zhang^, and Hailin Zhang^† ^†State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China ^ University of New South Wales, Sydney, Australia E-mail: {lpliang@stu.xidian.edu.cn, wccheng@xidian.edu.cn, wzhang@ee.unsw.edu.au, hlzhang@xidian.edu.cn} This work was supported in part by the National Natural Science Foundation of China (Nos. 61771368 and 61671347), Young Elite Scientists Sponsorship Program By CAST (2016QNRC001), the 111 Project of China (B08038), Doctoral Student's Short Term Study Abroad Scholarship Fund of Xidian University, and the Australian Research Council's Projects funding scheme under Projects (DP160104903 and LP160100672). July 22, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== empty empty § ABSTRACT Orbital angular momentum (OAM) based mode hopping (MH) scheme is expected to be a potential anti-jamming technology in radio vortex wireless communications. However, it only uses one OAM-mode for hopping, thus resulting in low spectrum efficiency (SE). Index modulation offers a trade-off balance between the SE and performance reliability. In this paper, we propose an MH with OAM-based index modulation scheme, where several OAM-modes are activated for hopping, to achieve high SE at a given bit error rate in radio vortex wireless communications. Based on the proposed scheme, we derive the upper bound and lower bound of achievable SEs. Furthermore, in order to take advantage of index information, we derive the optimal hopped OAM-modes to achieve the maximum SE. Numerical results show that our proposed MH with index modulation scheme can achieve high SE while satisfying a certain reliability of radio vortex wireless communications. Orbital angular momentum, index modulation, mode hopping, spectrum efficiency. § INTRODUCTION Emerging radio vortex wireless communication is expected to solve the spectrum shortage problem caused by the growing traffic data and multiple serves for the future wireless communications <cit.>. Radio vortex wireless communications take advantage of orbital angular momentum (OAM), a category of angular momentum, for transmission. OAM waves generated by many methods, such as uniform circular array (UCA) and spiral phase plate (SPP), are overlapping but orthogonal with each other <cit.>. Hence, signals with OAM-modes can be transmitted without inter-mode interference. Recently, OAM applications have been extensively studied in radio vortex wireless communications, such as OAM-based multiple-input multiple-output (MIMO), OAM multiplexing jointly used with the traditional orthogonal frequency division multiplexing (OFDM) for high spectrum efficiency (SE), radar imaging, OAM waves converging, microwave sensing, and mode hopping (MH) for anti-jamming <cit.>. 
The OAM-based wireless channel model was built and the OAM signals were decomposed in sparse multipath environments containing a line-of-sight (LoS) path and several reflection paths <cit.>. In the OAM-based MH scheme, only one OAM-mode is activated for hopping at each time-slot <cit.>. Thus, if the number of OAM-modes is relatively large, most OAM-modes remain unused, resulting in a waste of resources. Also, without using multiple OAM-modes for transmission, the achievable SE of the proposed MH scheme is low. Index modulation, an emerging concept, is an extension of the spatial modulation used in MIMO communications <cit.>. Index modulation activates antennas of a MIMO system or subcarriers of an OFDM system to convey signal information. The indices of the activated antennas or subcarriers, which can be considered as an additional information source, convey extra information, thus increasing the achievable SE of wireless communications. It was recently verified that the OAM-based index modulation scheme can achieve better error performance than the conventional OAM multiplexing scheme without increasing the size of the signal constellation or the detection complexity <cit.>. However, how to take advantage of the maximum index information in the MH scheme to achieve high SE while satisfying a certain reliability remains an open challenge. In this paper, we propose an MH with OAM-based index modulation scheme, where some OAM-modes are selected to hop for anti-jamming, to achieve high SE at a given error performance in radio vortex wireless communications. The main idea is to randomly activate several OAM-modes for each hop to increase the SE and the OAM-mode utilization efficiency. Based on the proposed scheme, the upper and lower bounds of the SE for each hop are derived for a given number of hopped OAM-modes. Then, we analyze the relationship between the signal-to-noise ratio (SNR) and the number of hopped OAM-modes and derive the optimal number of hopped OAM-modes in the low channel SNR region. Numerical results show that our proposed MH with index modulation scheme outperforms the conventional OAM multiplexing scheme in the low SNR region. The remainder of this paper is organized as follows. Section <ref> gives the MH with OAM-based index modulation system model. Section <ref> presents the OAM-based index modulation scheme, derives the upper and lower bounds of the achievable SE, and calculates how many hopped OAM-modes should be selected to achieve higher SE. Section <ref> evaluates the MH with OAM-based index modulation scheme and compares it with the conventional OAM multiplexing scheme. The paper concludes with Section <ref>. § SYSTEM MODEL In this section, we build the system model of MH with OAM-based index modulation as shown in Fig. <ref>. In this paper, we use UCAs as the OAM-transmitter and OAM-receiver to generate and receive multiple OAM-modes, respectively. There are N_t and N_r element-arrays, equidistantly distributed around the perimeter of a circle, for the transmit and receive UCAs, respectively. The N_t element-arrays are fed with the same input signals, but with a successive delay from element to element such that after a full turn the phase has been incremented by an integer multiple l of 2π, where l is the order of the OAM-mode and satisfies |l|≤ N_t/2 <cit.>. As shown in Fig. <ref>, the transmit data is split between the index selector and the signal modulator. The index selector is controlled by a pseudo-noise generator (PNG).
With the index selector, some OAM-modes are activated for hopping and the other OAM-modes are inactivated. The activated OAM-modes are selected by the index selector, which selects I OAM-modes out of N_t OAM-modes for signal transmission. This means that there are I OAM-modes hopping simultaneously in each time-slot. Thus, there are K=\binom{N_t}{I} combinations for OAM signal transmission. We assume that the set of selected OAM-modes for transmission is L. Thus, the corresponding k-th (1 ≤ k ≤ K) combination is denoted by L_k. The input information is conveyed both by the signals transmitted on the activated OAM-modes and by the indices of the activated OAM-modes, i.e., by the selected combination. The inactivated OAM-modes do not transmit. At the receiver, the OAM signals can be decomposed by the discrete Fourier transform (DFT) algorithm. Note that the PNGs at the transmitter and receiver are the same. In order to filter the interfering OAM signals, the decomposed signals are selected by the index selector. We assume that the signal can be transmitted by one hop. Also, to exploit OAM multiplexing, the signals carried on different hopped OAM-modes are different. An example of the MH pattern with the OAM-based index-modulation scheme is presented in Fig. <ref>, where a specific color identifies each hop, N_t=8, and I=3. With the proposed index modulation, wireless communication can not only achieve high SE, but also satisfy the reliability requirement. In the following, we mainly focus on analyzing the proposed MH with OAM-based index modulation for high achievable SE in radio vortex wireless communications. § OAM-BASED INDEX MODULATION In this section, we propose the MH with OAM-based index modulation scheme in radio vortex wireless communications. Then, we derive the closed-form expressions of the lower and upper bounds of the achievable SE, respectively. We also analyze how many OAM-modes should be selected to obtain higher SE. §.§ Index Modulation In this subsection, we consider the k-th combination of I hopped OAM-modes. We assume that the set of selected OAM-modes for transmission is L_k={l_1,k,⋯, l_i,k, ⋯, l_I,k}, where l_i,k (1 ≤ i ≤ I) is the i-th activated OAM-mode among all I OAM-modes. Also, we assume that the set L_k is arranged in increasing order of OAM-modes, that is, l_1,k< ⋯ < l_i,k< ⋯ < l_I,k. For example, if the hopped OAM-mode l_1,k=2, it means that OAM-mode l=2 is activated. In this paper, we only consider a single user for transmission, without other interfering users. We assume that the vector of transmit modulated signals is S_k=[s_1,k,⋯, s_i,k, ⋯, s_I,k]^T for the given set L_k, where s_i,k is the i-th transmit signal for the l_i,k-th OAM-mode and [·]^T is the transpose operation. The inactivated OAM-modes transmit zero signals with no power. Thus, the vector of transmit signals can be expressed as s_k=[0, S_k]^T, which contains (N_t-I) zeros and I transmit signals. The positions of the zero terms are determined by the selected activated OAM-modes. Therefore, the transmit OAM signal, denoted by s_n,i,k, for the n-th (0 ≤ n ≤ N_t-1) element-array corresponding to the i-th hopped OAM-mode can be expressed as follows: s_n,i,k=s_i,k e^j2π n/N_tl_i,k. The transmit signal, denoted by x_n,k, for the n-th transmit element-array can be derived as follows: x_n,k=∑_i=1^I s_n,i,k=∑_i=1^I s_i,k e^j2π n/N_t l_i,k. At the receiver, the received signal, denoted by r_k, for the N_r element-arrays can be obtained as follows: r_k=Hx_k+n_k, where H is the N_r× N_t channel gain matrix, x_k is the vector of transmit signals with entries x_n,k, and n_k is the Gaussian noise received at the receiver.
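Before moving on to the frequency-domain view of this model, it may help to see the whole chain end to end. The following NumPy sketch is our own illustration (not the authors' code) of one hop: it selects one of the K=\binom{N_t}{I} mode combinations, synthesizes the element-array signals, applies a per-mode channel, decomposes the received signal with the DFT, and keeps only the hopped modes. The per-mode gains are drawn at random here; the LoS Bessel-function gains derived below could be substituted.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

N_t, I = 8, 3                                   # element-arrays / hopped OAM-modes
modes = np.arange(-N_t // 2 + 1, N_t // 2 + 1)  # OAM orders l = -3, ..., 4
combs = list(combinations(range(N_t), I))       # all K = C(N_t, I) index patterns
K = len(combs)

k = rng.integers(K)                             # stand-in for the PNG-driven index selector
active = np.array(combs[k])                     # positions of the hopped modes in `modes`

# unit-power QPSK symbols on the active modes, zeros on the inactivated ones
s = np.zeros(N_t, dtype=complex)
s[active] = np.exp(1j * (np.pi / 2) * rng.integers(4, size=I))

# transmit signal at the n-th element-array (unitary-normalized version of x_{n,k})
n = np.arange(N_t)
U = np.exp(2j * np.pi * np.outer(n, modes) / N_t) / np.sqrt(N_t)
x = U @ s

# per-mode channel gains h_l (random placeholders for the LoS Bessel-function model),
# arranged into a circulant matrix that the DFT diagonalizes
h = rng.normal(size=N_t) + 1j * rng.normal(size=N_t)
H = U @ np.diag(h) @ U.conj().T
sigma2 = 1e-2
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N_t) + 1j * rng.standard_normal(N_t))
r = H @ x + noise

# receiver: DFT decomposition of all N_t modes, then index selection of the hopped ones
s_hat = U.conj().T @ r
y = s_hat[active]

print("hopped OAM-modes L_k:", modes[active])
print("selected outputs y_i:", np.round(y, 3))
print("expected h_i s_i    :", np.round(h[active] * s[active], 3))
```

Up to noise, the selected outputs reproduce h_{i,k}s_{i,k}, which is exactly the per-mode relation y_k = ΛS_k + w_k obtained below once the DFT decomposition and the index selector are made explicit.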
According to Eq. (<ref>), Eq. (<ref>) can be re-expressed as follows: r_k= HFs_k+n_k, where F is an N_t× N_t inverse discrete Fourier transform (IDFT) matrix. To decompose the OAM signal, the DFT algorithm is used at the receiver. Then, we have r̃_k = F^HHFs_k+ñ_k = H̃s_k+ñ_k, where r̃_k denotes the vector of decomposed OAM signals, (·)^H represents the conjugate transpose operation and ñ_k is the noise after the DFT algorithm. H̃=diag{h_-N_t/2+1,⋯, h_l, ⋯, h_N_t/2} is the diagonal channel gain matrix with respect to each OAM-mode, where h_l denotes the channel gain for the l-th OAM-mode from the OAM-transmitter to the OAM-receiver. For a UCA-based transceiver in LoS transmission, the channel gain h_l can be derived as <cit.> h_l=βλ N_t j^-le^-j2π/λ√(D^2+R_1^2+R_2^2)/4π√(D^2+R_1^2+R_2^2) e^jφ l J_l(2π R_1 R_2/λ√(D^2+R_1^2+R_2^2)), where β contains all relevant constants such as attenuation and phase rotation caused by the antennas and their patterns on both sides, λ represents the carrier wavelength, D denotes the distance from the OAM-transmitter center to the OAM-receiver center, R_1 is the radius of the OAM-transmitter, R_2 is the radius of the OAM-receiver, and J_l(·) is the l-th order Bessel function. Hence, we can obtain the corresponding h_i,k for the i-th hopped OAM-mode. However, the DFT algorithm decomposes the signals on all OAM-modes, activated and inactivated alike, so interference can mix with the useful signals. Take the example of L_k={-1, 0, 2}, where I is set to 3. If the indices of the decomposed signals are {-1,0,1,2}, then one OAM signal comes from interference. Thus, the index selector is required to filter out the interference carried on OAM-modes different from the hopping modes and to select the wanted signals. Therefore, the received signal, denoted by y_k, can be expressed as follows: y_k= ΛS_k+ w_k, where w_k is the vector of received noise with respect to the activated OAM-modes and Λ=diag{h_1,k, ⋯,h_i,k,⋯,h_I,k}. §.§ Spectrum Efficiency The transmit power of the signal is denoted by P_i,k for the i-th hopped OAM-mode. Thus, the variance, denoted by Σ_s,k, of S_k can be expressed as follows: Σ_s,k=diag{ P_1,k,⋯, P_i,k, ⋯, P_I,k}. We also have the noise variance, denoted by Σ_w,k, as follows: Σ_w,k=diag{σ_1,k^2, ⋯, σ_i,k^2, ⋯, σ_I,k^2}, where σ_i,k^2 denotes the noise variance for the i-th hopped OAM-mode. Since the transmit signal and the noise follow multivariate normal distributions, the received signal y_k does as well, and the corresponding covariance, denoted by Σ_y,k, of the received signal can be expressed as follows: Σ_y,k=ΛΣ_s,kΛ^H + Σ_w,k. Thus, the conditional probability density function (PDF), denoted by f(y_k), of y_k can be given as follows: f(y_k) = 1/π^I|Σ_y,k|exp(-y_k^HΣ_y,k^-1y_k). To obtain the achievable SE of our proposed MH with index modulation scheme in radio vortex wireless communications, mutual information is used. Based on the analysis mentioned above, the mutual information, denoted by I(s, y), between the transmit signal vector S_k and the received signal vector y_k is given as follows <cit.>: I(s, y)= I(s_k,y| X_k^I)+I( X_k^I,y), where I(s_k,y| X_k^I) is the signal information, I( X_k^I,y) is the index information, X_k^I represents the selected combination of OAM-modes, and y is the decomposed OAM signal vector for all possible k. The first term on the right-hand side of Eq. (<ref>) can be derived as follows: I(s_k,y| X_k^I) = 1/K∑_k=1^Klog_2|Σ_y,k|/|Σ_w,k| = 1/K∑_k=1^K∑_i=1^Ilog_2(1+P_i,kγ_i,k), where |·| represents the determinant of a matrix and γ_i,k=h_i,k^2/σ_i,k^2.
Then, the index information I( X_k^I,y) can be derived as follows: I( X_k^I,y)= H(y)-H(y|s_k, X_k^I), where H(y) and H(y|s_k, X_k^I) represent the entropy and the conditional entropy of the received signal, respectively. The PDF, denoted by f(y), for all possible k can be derived as follows: f(y)=1/K∑_k=1^Kf(y_k). Based on Jensen's inequality, the entropy of the received signal H(y) can be bounded as follows: H(y) = -∫_y f(y)log_2f(y)dy ≥ - 1/K∑_k=1^Klog_2[1/K∑_j=1^K∫_y f(y_j)f(y_k)dy] = - 1/K∑_k=1^Klog_2[1/K∑_j=1^K|Σ_y,k^-1+Σ_y,j^-1|^-1/π^I |Σ_y,k||Σ_y,j|] = - 1/K∑_k=1^Klog_2[1/K∑_j=1^K1/π^I |Σ_y,k+Σ_y,j|]. For H(y|s_k, X_k^I), we have H(y|s_k, X_k^I) = 1/K∑_k=1^K∑_i=1^Ilog_2[σ_i,k^2(1+P_i,kγ_i,k)] + Ilog_2(π e). Therefore, the lower bound, denoted by C_low, of the SE for our proposed MH with OAM-based index modulation scheme can be derived as follows: C_low = log_2K-1/K∑_k=1^K∑_i=1^Ilog_2σ_i,k^2 - Ilog_2 (π e) -1/Klog_2[∏_k=1^K(∑_j=1^K1/π^I |Σ_y,k+Σ_y,j|)]. The expression of I( X_k^I,y) can be re-written as follows: I( X_k^I,y)=1/K∑_k=1^K D[f(y_k)∥ f(y)], where D[f(y_k)∥ f(y)] is the Kullback-Leibler (KL) divergence between the Gaussian distribution with PDF f(y_k) and the Gaussian mixture with PDF f(y). In the following, (·)^-1 and Tr(·) represent the inverse and the trace of a matrix. According to the approximate analysis <cit.>, the upper bound of the KL divergence, denoted by D_up[f(y_k)∥ f(y)], can be derived as follows: D_up[f(y_k)∥ f(y)]=-log_2{∑_j=1^K1/K e^-D[f(y_k)∥ f(y_j)]}, where D[f(y_k)∥ f(y_j)] = log_2|Σ_y,j|/|Σ_y,k|-Ilog_2e +log_2eTr(Σ_y,j^-1Σ_y,k). Therefore, the upper bound of the achievable SE, denoted by C_up, with our proposed OAM-based index modulation in radio vortex wireless communications can be derived as follows: C_up = 1/K∑_k=1^K∑_i=1^Ilog_2(1+P_i,kγ_i,k) -1/K∑_k=1^Klog_2(∑_j=1^K1/K e^-D[f(y_k)∥ f(y_j)]). §.§ How Many OAM-Modes are Selected for Higher SE? For a fixed I, the upper and lower bounds of the achievable SE for our proposed MH with OAM-based index modulation scheme in radio vortex wireless communications can be derived as shown in Eqs. (<ref>) and (<ref>). However, how many OAM-modes should be selected to achieve higher SE for the proposed scheme? In the following, we will answer this question. The expression of Eq. (<ref>) can be re-written as follows: D_up[f(y_k)∥ f(y)] =log_2K-1/K∑_k=1^Klog_2{1+∑_j=1 j≠ k^Ke^-D[f(y_k)∥ f(y_j)]}. Since the KL divergence is always non-negative, log_2{1+∑_j=1 j≠ k^Ke^-D[f(y_k)∥ f(y_j)]} is non-negative. Thus, neglecting the second term on the right-hand side of Eq. (<ref>) yields the desired upper bound. Thus, the upper bound of the SE C_up can be re-expressed as follows: C_up = 1/K∑_k=1^K∑_i=1^Ilog_2(1+P_i,kγ_i,k)+log_2K. Actually, the upper bound given in Eq. (<ref>) is very close to the true entropy value <cit.>. We assume that the transmit power for each hopped OAM-mode is P_0. The variance of the l-th OAM-mode is denoted by σ_l^2. Over all possible k, each OAM-mode is hopped \binom{N_t-1}{I-1} times. Thus, the first term on the right-hand side of Eq. (<ref>) can be re-expressed as follows: 1/K∑_k=1^K∑_i=1^Ilog_2(1+P_i,kγ_i,k)=I/N_t∑_l=0^N_t-1log_2(1+P_0γ_l), where γ_l=h_l^2/σ_l^2. Thus, the upper bound of the SE can be re-expressed as follows: C_up(I)=I/N_t∑_l=0^N_t-1log_2(1+P_0γ_l)+log_2\binom{N_t}{I}. Then, we denote by f(z) the continuous function of z defined as f(z) = z/N_t∑_l=0^N_t-1log_2(1+P_0γ_l) + log_2\binom{N_t}{z} = z/N_t∑_l=0^N_t-1log_2(1+P_0γ_l) + log_2Γ(N_t+1) -log_2Γ(z+1)-log_2Γ(N_t-z+1), where Γ(·) is the Gamma function.
We denote by 𝒬(z) the derivative of the function lnΓ(z). Thus, we have 𝒬(z+1) = 1/z+𝒬(z), where we use the property Γ(z+1)=z Γ(z). Then, the derivative, denoted by 𝒫(N_t-z+1), of the function lnΓ(N_t-z+1) with respect to z can be calculated as follows: 𝒫(N_t-z+1)= - 𝒬(N_t-z+1). Thus, the derivative, denoted by f^'(z), of f(z) can be written as follows: f^'(z)=1/N_t∑_l=0^N_t-1log_2(1+P_0γ_l) -𝒬(z+1)/ln 2 +𝒬(N_t-z+1)/ln 2. Let z=I. Based on Eq. (<ref>), we have 𝒬(I+1)=1/I+𝒬(I) =∑_i=1^I1/i-ζ, where ζ=-𝒬(1) is the Euler-Mascheroni constant. The corresponding derivative of f(I) can be re-expressed as follows: f^'(I) = 1/N_t∑_l=0^N_t-1log_2(1+P_0γ_l) -𝒬(I+1)/ln 2+ 𝒬(N_t-I+1)/ln 2 = 1/N_t∑_l=0^N_t-1log_2(1+P_0γ_l) -1/ln 2[∑_i=1^I1/i-∑_u=1^N_t-I1/u]. Clearly, f^'(I) >0 when I ≤⌊ N_t/2 ⌋, where ⌊·⌋ is the floor function. Then, we can find a value I_0 which satisfies f^'(I_0)≥ 0 and f^'(I_0+1)< 0 by an exhaustive search. If the channel SNR is relatively large, the signal information mainly impacts the upper bound of the SE in the MH scheme. Thus, the SE always increases with the number of hopped OAM-modes. However, when the channel SNR is small, the index information plays an important role. Hence, we have the following Lemma 1. Lemma 1: In the low SNR region, the maximum SE of our proposed scheme, denoted by C^*, is C^*=max{C_up(I_0),C_up(I_0+1)}, where I_0 satisfies ln 2/N_t∑_l=0^N_t-1log_2(1+P_0γ_l) ≥∑_i=N_t-I_0+1^I_01/i; ln 2/N_t∑_l=0^N_t-1log_2(1+P_0γ_l)< ∑_i=N_t-I_0^I_0+11/i. § PERFORMANCE EVALUATIONS In this section, we evaluate the performance of our developed MH with OAM-based index modulation scheme in radio vortex wireless communications. In the evaluation, lenses are used to converge the OAM beams <cit.>. Throughout our evaluations, we set the carrier frequency to 60 GHz, the transmit distance to 3 m, and the transmit power of each OAM-mode to 1 W. Figure <ref> depicts the SEs of our proposed MH with OAM-based index modulation scheme versus the channel SNR in radio vortex wireless communications, where we set I=3 and N_t=4, 8, 12, and 16, respectively. As shown in Fig. <ref>, for a given number of hopped OAM-modes, the SE of our proposed scheme increases as the total number of OAM-modes increases. This is because the index information increases with the total number of OAM-modes, which can be verified from the second term on the right-hand side of Eq. (<ref>). Moreover, as N_t increases, the probability of being jammed by other interfering users decreases. As N_t increases, the SE gaps among different values of N_t decrease, owing to the logarithmic dependence on the number of combinations K. For example, the increases are 5.56 bps/Hz at 0 dB from N_t=4 to 8 and 3.22 bps/Hz from N_t=8 to 12, respectively. In addition, the SEs increase as the channel SNR increases. These results show that, for a given number of hopped OAM-modes, increasing the total number of OAM-modes increases both the SE and the reliability of our proposed scheme in radio vortex wireless communications. Figure <ref> compares the SEs of our proposed scheme versus the number of hopped OAM-modes for different channel SNRs, where we set N_t=16 and the SNR to -6 dB, -2 dB, 2 dB, and 6 dB, respectively. Observing the SE curves in Fig. <ref>, the SE is a concave function of the hopped number I in the low channel SNR region. Thus, maximum SEs exist, which verifies the analysis of Section <ref>. The optimal number of hopped OAM-modes increases as the channel SNR increases.
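This behavior is easy to reproduce numerically. The short sketch below (our illustration, with our own function names) evaluates C_up(I) for every I and also locates I_0 through the sign change of f'(I), using the digamma function for the derivative of lnΓ. For simplicity it assumes a flat per-mode SNR P_0γ_l; the Bessel-function gains of the channel model could be plugged in instead.

```python
import numpy as np
from math import comb, log2
from scipy.special import digamma

def C_up(I, N_t, snr):
    """Upper bound of the SE: signal information plus index information log2 C(N_t, I)."""
    return I / N_t * np.sum(np.log2(1.0 + snr)) + log2(comb(N_t, I))

def f_prime(I, N_t, snr):
    """Derivative of the continuous relaxation f(z) at z = I (digamma(z) = d/dz ln Gamma(z))."""
    avg = np.sum(np.log2(1.0 + snr)) / N_t
    return avg - (digamma(I + 1) - digamma(N_t - I + 1)) / np.log(2)

N_t = 16
for snr_db in (-6, -2, 2, 6):
    snr = np.full(N_t, 10.0 ** (snr_db / 10.0))      # flat per-mode SNR P_0 * gamma_l (assumption)
    best_I = max(range(1, N_t + 1), key=lambda I: C_up(I, N_t, snr))
    I_0 = max([I for I in range(1, N_t) if f_prime(I, N_t, snr) >= 0], default=1)
    C_star = max(C_up(I_0, N_t, snr), C_up(min(I_0 + 1, N_t), N_t, snr))
    print(f"SNR {snr_db:+3d} dB: argmax_I C_up = {best_I}, I_0 = {I_0}, C* = {C_star:.2f} bps/Hz")
```

In line with the curves discussed here, the maximizer moves toward larger I as the SNR grows.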
When I ≥ N_t/2, the index information decreases as I increases, while the signal information increases. Thus, there is a trade-off between the signal information and the index information in the low channel SNR region. As the channel SNR increases, a larger sacrifice of index information becomes acceptable, thus resulting in more hopped OAM-modes. Fig. <ref> verifies that our developed scheme can achieve the maximum SE in the low channel SNR region by making full use of the index information. Figure <ref> compares the SEs between our proposed scheme and the conventional OAM multiplexing scheme, where we set N_t=16 and I=4, 6, 9, 12, and 16, respectively. Clearly, I=16 represents the conventional OAM multiplexing scheme. Fig. <ref> also verifies that a maximum SE exists in the low channel SNR region. The SEs increase as the channel SNR increases for a fixed I. Our proposed scheme achieves a higher SE than the conventional OAM multiplexing scheme in the low SNR region. Only in the high SNR region does the conventional OAM multiplexing scheme achieve a higher SE. This is because the signal information then plays the major role in the SE. § CONCLUSIONS In this paper, we proposed an MH with OAM-based index modulation scheme to achieve high SE while satisfying a certain reliability for wireless communications. Based on the proposed scheme, we derived the upper and lower bounds of the SE and determined how many hopped OAM-modes should be selected to achieve higher SE in radio vortex wireless communications. Numerical results show that, by using the optimal number of hopped OAM-modes, our proposed scheme can achieve a higher SE while satisfying a low bit error rate, in comparison with using all OAM-modes. Also, the SE increases as the total number of OAM-modes increases for a given number of hopped OAM-modes.
http://arxiv.org/abs/2407.12442v1
20240717095220
ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference
[ "Mengcheng Lan", "Chaofeng Chen", "Yiping Ke", "Xinjiang Wang", "Litong Feng", "Wayne Zhang" ]
cs.CV
[ "cs.CV" ]
ClearCLIP M.Lan et al. S-Lab, Nanyang Technological University CCDS, Nanyang Technological University 3 SenseTime Research lanm0002@e.ntu.edu.sg {chaofeng.chen, ypke}@ntu.edu.sg {wangxinjiang, fenglitong, wayne.zhang}@sensetime.com <https://github.com/mc-lan/ClearCLIP> ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference Mengcheng Lan1 Chaofeng Chen1 Yiping Ke2 Xinjiang Wang 3 Litong Feng 3Corresponding author. Wayne Zhang 3 July 22, 2024 =============================================================================================================== § ABSTRACT Despite the success of large-scale pretrained Vision-Language Models (VLMs) especially CLIP in various open-vocabulary tasks, their application to semantic segmentation remains challenging, producing noisy segmentation maps with mis-segmented regions. In this paper, we carefully re-investigate the architecture of CLIP, and identify residual connections as the primary source of noise that degrades segmentation quality. With a comparative analysis of statistical properties in the residual connection and the attention output across different pretrained models, we discover that CLIP's image-text contrastive training paradigm emphasizes global features at the expense of local discriminability, leading to noisy segmentation results. In response, we propose ClearCLIP, a novel approach that decomposes CLIP's representations to enhance open-vocabulary semantic segmentation. We introduce three simple modifications to the final layer: removing the residual connection, implementing the self-self attention, and discarding the feed-forward network. ClearCLIP consistently generates clearer and more accurate segmentation maps and outperforms existing approaches across multiple benchmarks, affirming the significance of our discoveries. § INTRODUCTION Large-scale Vision-Language pre-trained Models (VLMs), represented by the Contrastive Language-Image Pre-training (CLIP) family <cit.>, have demonstrated remarkable generality and robustness across a diverse range of downstream tasks, , zero-shot image classification <cit.>, visual question answering <cit.> and image-text retrieval <cit.>. There is a growing interest in leveraging the power of CLIP for open-vocabulary and zero-shot problems. However, CLIP falls behind to maintain its zero-shot capabilities for dense prediction tasks especially semantic segmentation, as depicted by previous works <cit.>. This limitation arises primarily from the training data used for VLMs, which consists largely of image-level labels and lacks sensitivity to visual localization. For example, as shown in <ref> (top-left image), the segmentation map generated by CLIP based on patch-level cosine similarity between visual and textual features reveals many misclassified patches and significant noise, showcasing the limitation of CLIP in dense visual localization. Recent works <cit.> usually attribute noisy representations in CLIP to the self-attention layers, and have achieved great progress by improving this module. MaskCLIP <cit.> adopted a simple technique of setting query-key attention map of the last block as an identical matrix, resulting in an improvement in mIoU on COCOStuff <cit.> from 4.4 to 16.4. CLIPSurgery <cit.> argued that the value-value attention map is cleaner and further improved the mIoU to 21.9. The recent study SCLIP <cit.> combined the query-query and value-value attention and achieved even better results. 
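To make the patch-level baseline mentioned above concrete, here is a minimal PyTorch sketch of the dense inference step. It assumes the per-patch visual features and the per-class text features have already been extracted from a CLIP model (the extraction itself is omitted); the function and tensor names are ours, not from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dense_baseline_segmentation(patch_feats: torch.Tensor,  # (hw, d) local patch features
                                text_feats: torch.Tensor,   # (C, d) one embedding per class
                                h: int, w: int) -> torch.Tensor:
    """Assign every patch to the class with the highest cosine similarity."""
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = patch_feats @ text_feats.t()                    # (hw, C) cosine similarities
    return logits.argmax(dim=-1).view(h, w)                  # class index per patch

# toy usage with random stand-ins for ViT-B/16 features (d = 512, 14x14 patches)
h, w, d, num_classes = 14, 14, 512, 20
seg = dense_baseline_segmentation(torch.randn(h * w, d), torch.randn(num_classes, d), h, w)
print(seg.shape)
```

MaskCLIP, CLIPSurgery, and SCLIP leave this inference step unchanged and instead modify how the final visual features are formed, which is also the route taken in this paper.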
While these enhancements in attention mechanisms lead to prioritization of relevant context and improved performance, their segmentation results still exhibit some noise. Such noise becomes more pronounced and leads to deteriorated performance when larger backbones are applied. These raise the fundamental question: where do these noises originate and how do they surface in the CLIP models? To answer the where question, we conduct a thorough investigation into the architecture of CLIP. We are surprised to find that the residual connection, proposed by ResNet and commonly employed in transformer architectures, has a significant effect on the adaptation of CLIP to open-vocabulary semantic segmentation. To elucidate this, we decompose the output of CLIP's vision encoder into two components: the residual connection and the attention output, which is achieved by directly separating them in the last layer of the vision encoder. As illustrated in <ref> (top three images), the segmentation result obtained from the residual connection exhibits noticeable noise, while the attention output features produce significantly clearer results with superior localization properties. From these observations, we propose that the noises present in the segmentation map mainly originate from the residual connection. To delve into how these noises emerge, we begin by comparing the statistical properties of CLIP's residual connection and attention output. Notably, we observe a significant discrepancy in their normalized entropy in CLIP: while the entropy of the residual connection tends to be near 0 along layers, the entropy of the attention output remains at 1. This finding aligns with our observation that its residual connection contains much larger maximum values along layers. Accordingly, the final output of CLIP, , the addition of residual connection and attention output, exhibits similar properties to the residual connection. Our findings also resonate with those of <cit.>, which identify many artifacts with high-norm in the final feature maps of large-scale pretrained models. When we look closely at the residual connection maps, we see that these peak values are concentrated in a few channels. In other words, most feature vectors in the residual connection maps share the same peak dimensions, , similar directions in the latent space. This property makes it difficult to distinguish each spatial feature vector through cosine similarity, thereby contributing to the generation of noises. Conversely, the self-attention mechanism in the attention branch is learned to separate dissimilar spatial features, alleviating such issues. Next, we examine the DINO <cit.>, which is also a transformer architecture but pretrained in a self-supervised way. We find that these two feature maps of DINO do not show such discrepancies in entropy. Therefore, we propose that the high-level supervision in CLIP emphasizes the global feature direction in the residual latent space, making local feature vectors less distinguishable and leading to noise in residual features. Based on these discoveries, we revisit recent methods <cit.> and find that the performance enhancements observed in these methods can be partially attributed to the reduced influence of the residual connection when the attention output is strengthened. 
We then deduce that two critical factors play a pivotal role in adapting CLIP for dense vision-language inference: the reduction in the impact of the residual connection and the reorganization of spatial information through the self-self attention. Guided by these insights, we introduce our approach, ClearCLIP, which incorporates three straightforward modifications to the final layer of CLIP: eliminating the residual connection, adopting the self-self attention, and discarding the Feed-Forward Network (FFN). These modifications are designed to boost the attention output, thereby producing a clearer representation for the task of open-vocabulary semantic segmentation, as shown in <ref>. Extensive experiments on 8 benchmark datasets demonstrate the effectiveness of ClearCLIP. § RELATED WORK §.§.§ Vision-language pre-training. VLMs has experienced significant progress in recent years. One notable family of vision-language models is based on contrastive learning <cit.>. Among them, CLIP <cit.> trained on a private WIT-400M with image-text pairs achieves promising zero-shot capabilities for downstream tasks such as image-text retrieval, image classification via text prompts. ALIGN <cit.> adopts the same dual-encoder architecture as CLIP but is trained on a private dataset with over one billion noisy image-text pairs. Additionally, OpenCLIP <cit.> explores scaling laws for CLIP by training the models on the public LAION <cit.> dataset with up to two billion image-text pairs. Another line of research <cit.> focuses on shared or mixed architectures between vision and language modalities, enabling additional zero-shot capabilities such as visual question answering <cit.> and image captioning <cit.>. Our work specifically addresses the adaptation of CLIP families <cit.>, for downstream dense prediction tasks. §.§.§ Open-vocabulary semantic segmentation. Open-vocabulary semantic segmentation, also known as zero-shot semantic segmentation, aims to segment an image with arbitrary categories described by texts. Recent works have mainly built upon large-scale vision-language models <cit.>, which could be roughly divided into three types. 1) Training-free methods <cit.> attempt to tap into the inherent localization capabilities of CLIP with minimal modifications. MaskCLIP <cit.> proposes to extract the value embedding of the last self-attention bock of CLIP's vision encoder for dense prediction tasks. Following this work, many studies <cit.> generalize the query-key attention to a self-self attention mechanism, such as the value-value attention in CLIPSurgery <cit.>, the query-query and key-key attention in SCLIP <cit.>, and generalized self-self attention combination in GEM <cit.>. These modifications induce the model to focus more on relevant context, resulting in significantly improved performance. 2) Unsupervised/Weakly-supervised methods mainly involve the design of more intricate architectures aimed at explicitly grouping semantic contents with image-only/image-text training samples. GroupViT <cit.> and SegCLIP <cit.> introduce the grouping blocks into the vision encoder, whose group tokens serve as class centers for semantic segmentation. OVSegmentor <cit.> also introduces a set of learnable group tokens via a slot-attention, and performs model training with masked entity completion and cross-image mask consistency proxy tasks. Additionally, PGseg <cit.> proposes to use both group tokens and prototype tokens to segment the images. 
TCL <cit.> and CLIP-S^4 <cit.> propose to directly generate mask/segment proposals within each image. 3) Fully-supervised methods usually involve in-domain fine-tuning, e.g., training on the COCOStuff <cit.> training set with full dense annotations, and therefore typically achieve better performance compared to training-free and weakly-supervised methods. Existing methods in this category can be broadly categorized into CLIP-based methods <cit.> and Stable Diffusion-based methods <cit.>. Our method belongs to training-free open-vocabulary semantic segmentation. We aim to explore the intrinsic localization properties of CLIP from a perspective of feature decomposition. § METHODOLOGY In this section, we start by providing an overview of the CLIP model <cit.> and introducing a baseline for open-vocabulary dense inference in <ref>. Then, we show how the CLIP baseline fails to achieve satisfactory results, which motivates our work in <ref>. Finally, we elaborate on the proposed ClearCLIP for open-vocabulary semantic segmentation in <ref>. §.§ Preliminary on CLIP §.§.§ ViT architecture. A ViT-based CLIP model <cit.> consists of a series of residual attention blocks. Each of these blocks takes as input a collection of visual tokens X = [x_cls, x_1, …, x_h× w]^T, where x_cls represents the global class token, and {x_i| i= 1, 2, …, h× w} denote local patch tokens. For brevity, we omit the layer number and format a residual attention block as follows: q =Proj_q(LN(X)), k=Proj_k(LN(X)), v=Proj_v(LN(X)) X_sum = X_res + X_attn = X + Proj(Attn_qk· v) X = X_sum + FFN(LN(X_sum)), where LN denotes layer normalization, Proj represents a projection layer, and FFN stands for a feed-forward network. X_res and X_attn denote the residual connection and the attention output. Additionally, Attn_qk = softmax(qk^T/√(d_k)) represents the q-k attention, where d_k is the dimension of k. §.§.§ Contrastive pre-training. CLIP employs a transformer-based visual encoder 𝒱 and text encoder 𝒯 to produce visual representations X^v_cls and text representations X^t for each image-text pair. The pre-training of CLIP is grounded in the contrastive loss. Given a batch of image-text pairs, CLIP is trained to maximize the cosine similarity between the visual representations X^v_cls and their corresponding text representations X^t, while simultaneously minimizing the similarity of these representations from different pairs. §.§.§ Open-vocabulary dense inference. To adapt CLIP for open-vocabulary semantic segmentation, a baseline approach is to perform dense patch-level classification. Given an image, the image encoder 𝒱 is used to extract its visual representations X^v=[x^v_cls, X^v_local]^T, where X^v_local∈ℝ^hw × d denotes the local patch representations in the d-dimensional latent space. For the textual features, object labels with C classes are firstly integrated into a prompt template (e.g., “a photo of a {}.”) to obtain the text descriptions. These descriptions are then fed into CLIP's text encoder to generate the text representations for all C classes, X^t∈ℝ^C× d. The final segmentation map ℳ∈ℝ^hw× 1 is computed as follows: ℳ = arg max_c cos(X^v_local, X^t). §.§ Motivation The aforementioned baseline in <ref> often fails to achieve satisfactory results <cit.>. This is probably because CLIP is trained with an image-level contrastive loss between vision and language, leading to poor alignment between local image regions and text representations <cit.>. Several studies <cit.> have attempted to address this challenge with minimal modifications to CLIP without retraining.
At the core, they propose to revise the vanilla Attn_qk in the last self-attention layer to an identity attention <cit.> or a self-self attention <cit.>, i.e., Attn_qq, Attn_kk or Attn_vv, aiming at re-organizing the spatial information. As shown in <ref>, they successfully improve the baseline, with mIoU reaching up to nearly 20.0 from only 4.4 for CLIP with the ViT-B/16 architecture (CLIP-B/16) on the COCOStuff dataset. However, there are still several important challenges. Firstly, previous works still generate sub-optimal results with noise in the segmentation maps. Secondly, these methods fail to obtain reasonable results when using a larger model, such as ViT-L/14. In <ref>, Attn_qq and Attn_kk are even worse than the vanilla Attn_qk, with more noise in the segmentation maps. Such a counter-intuitive phenomenon indicates that existing works may have missed some important issues when adapting the CLIP model for dense prediction tasks. In this work, we are curious about where and how these noises in segmentation results originate and surface. §.§ ClearCLIP As explained in <ref>, a block in the ViT-based CLIP contains three modules, i.e., the residual connection, the self-attention layer and the feed-forward network. We delve into these modules to diagnose their effects on open-vocabulary semantic segmentation tasks. Finally, we propose ClearCLIP, a simple yet effective solution to produce clearer and more accurate segmentation maps. §.§.§ Residual connection. We begin our analysis by comparing the Frobenius norm of the residual connection X_res with the different attention outputs X_attn at the last block of the CLIP-B/16 and CLIP-L/14 models on the COCOStuff dataset. As illustrated in <ref>, we can easily observe the commonalities and distinctions of these two sub-figures. The main commonality is that the mIoU curve and the norm curve of X_attn exhibit a certain degree of positive correlation. The distinctions are: 1) the norm of X_res in CLIP-B/16 is much smaller than that in CLIP-L/14; and 2) the attention modifications in CLIP-B/16 show consistent improvements over the q-k baseline while those in CLIP-L/14 do not. Therefore, we hypothesize that the attention modification is effective only when the influence (or norm) of X_res is minimal. In other words, X_res substantially impairs the performance of the CLIP family on dense inference tasks. To investigate this hypothesis, we conduct open-vocabulary semantic segmentation experiments based on CLIP-B/16 using X_sum, X_res and X_attn. Experimental results on the COCOStuff dataset are illustrated in <ref>. Surprisingly, we discover that the mIoU of X_res is close to zero, suggesting that the residual connection may not be helpful for image segmentation. In contrast, X_attn alone could achieve a much higher mIoU than X_sum. The visualizations in <ref> demonstrate that the noisy segmentation map of CLIP could be decomposed into a muddled map of X_res and a clearer map of X_attn. According to these experimental results, we can preliminarily conclude that the noise in segmentation maps mainly comes from the residual connection. To gain a deeper understanding of how these noises emerge in semantic segmentation tasks, we conduct a comparative analysis of feature statistics between CLIP-B/16 and DINO-B/16. The latter has demonstrated robust capabilities in learning transferable and semantically consistent dense features for various downstream tasks <cit.>.
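The decomposition underlying this experiment is straightforward to reproduce. The sketch below is a schematic, self-contained re-implementation of a CLIP-style pre-norm attention block that returns X_res, X_attn and X_sum separately, together with their Frobenius norms; it is written for illustration only (the module and variable names, toy dimensions, and random weights are ours, and the FFN branch is omitted since only the two components of interest are exposed here).

```python
import torch
import torch.nn as nn

class DecomposedBlock(nn.Module):
    """Final pre-norm ViT block exposing the residual and attention components."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.ln = nn.LayerNorm(dim)
        # nn.MultiheadAttention already contains the output projection (Proj in the text)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor):
        x_res = x                                 # residual branch: the block input itself
        y = self.ln(x)
        x_attn, _ = self.attn(y, y, y)            # vanilla q-k self-attention output
        return x_res, x_attn, x_res + x_attn      # X_res, X_attn, X_sum

block = DecomposedBlock(dim=512).eval()
tokens = torch.randn(1, 1 + 14 * 14, 512)         # [CLS] token followed by 14x14 patch tokens
with torch.no_grad():
    x_res, x_attn, x_sum = block(tokens)
for name, t in (("X_res", x_res), ("X_attn", x_attn), ("X_sum", x_sum)):
    print(f"{name}: Frobenius norm = {t.norm().item():.1f}")
```

Running segmentation with each of the three outputs separately, as in the experiment described above, then only requires choosing which tensor is fed to the cosine-similarity classification step.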
We first compare the normalized entropies <cit.> along layers, which are calculated by H(X^L) = -1/log(hw× d)∑_i,jp(X^L_i,j) log p(X^L_i,j) , p(X^L_i,j) = e^X^L_i,j/∑_m,ne^X^L_m,n, where X^L denotes the feature map, i.e., X_sum, X_res or X_attn, at the L-th layer of the ViT network. As shown in <ref>, we can see that the entropy of X^L does not change much across layers for DINO-B/16. On the contrary, for CLIP-B/16, only the entropy of X_attn remains the same across the layers, while the entropies of X_res and X_sum sharply decrease to near-zero. According to <ref>, a low entropy indicates that there are a few peak values in X^L. Therefore, we examine the average maximum values max_i,jX^L_i,j in <ref>. For DINO-B/16, the maximum values of each type of feature map remain relatively stable along layers, typically lower than 10, resulting in consistent entropies across different layers. In contrast, for CLIP-B/16, the maximum values of X_res and X_sum gradually increase with the layer depth, peaking nearly 90 times higher at the last layer compared to earlier ones. Consequently, the entropies of X_res and X_sum sharply decline, approaching near-zero from the middle layers of the ViT. Through visualizations of several feature maps (see supplementary material), we empirically found that these peak values appear in a few channels. To verify our observation, we calculate the average normalized mean values of each channel in X_res after sorting them in ascending order, and visualize them in <ref>. We can observe that a few channels dominate the peak values in X_res, which echoes our discovery from the feature maps. Intuitively, these channel-wise statistics represent the global characteristics of X^L since they are independent of local patterns. If X_res and X_sum have low entropy and are predominantly influenced by a few channels, it is highly probable that local information is being compromised. As depicted in <ref>, distinguishing between two feature vectors with cosine similarity becomes challenging if they share the same dominant channels. While this characteristic is not harmful in itself for image recognition tasks that prioritize global information, it may result in sub-optimal performance when adapting CLIP to dense prediction tasks that emphasize local information. Theoretically, this phenomenon becomes more pronounced in larger vision transformer models with deeper layers. This analysis sheds light on why existing modifications on self-attention fail to yield satisfactory results when applied to the CLIP-L/14 model. To further demonstrate how X_res affects the performance of CLIP, we introduce a scaling factor α[SCLIP <cit.> could be roughly regarded as a special case of α=2, i.e., Proj((Attn_qq+Attn_kk)· v)≈Proj(2Attn_qk· v)≈ 2X_attn.], X_sum = X_res + α X_attn, which controls the relative influence of X_attn over X_res. Our experimental results in <ref> demonstrate that a larger α significantly enhances the performance, which clearly illustrates the adverse impact of X_res on the performance. Finally, we propose to directly discard the residual connection to achieve the best performance on dense vision-language inference tasks. §.§.§ Feed-forward network. The feed-forward network (FFN) in a transformer architecture plays a crucial role in modeling relationships and patterns within the data. However, recent work <cit.> has revealed that the FFN has a negligible effect on image representation during the inference process.
CLIPSurgery <cit.> finds that the FFN features at the last attention block have a significantly larger cosine angle with the final classification feature, and therefore proposes to discard the FFN for dense prediction tasks. In our work, we empirically find that removing the FFN has minimal effect on open-vocabulary semantic segmentation tasks when applied to the vanilla CLIP model. However, as shown in <ref>, when coupled with the removal of the residual connection, discarding the FFN leads to improved results, particularly with a larger model size. The rationale for this improvement is that removing the residual connection significantly alters the input to the FFN, consequently affecting its output. Therefore, removing the FFN output potentially mitigates its negative impact on performance. §.§.§ Our solution. Based on the above analysis, we propose a straightforward solution to adapt CLIP for open-vocabulary semantic segmentation. Specifically, we propose to use the attention output of the last self-attention layer[The final projection layer is omitted here for brevity.] X^v = X_attn = Proj(Attn_(·)(·)· v), for vision-language inference. Inspired by previous works, we could use different combinations of query-key in the attention mechanism Attn_(·)(·). In practice, we find that Attn_qq consistently achieves better performance in most cases and thus opt to use it by default. § EXPERIMENTS §.§ Experimental Setups §.§.§ Datasets & metric. Our solution is extensively evaluated on eight benchmark datasets widely employed for open-vocabulary semantic segmentation. Following <cit.>, these datasets can be categorized into two groups: 1) with background category: PASCAL VOC <cit.> (VOC21), PASCAL Context <cit.> (Context60) and COCO Object <cit.> (Object); and 2) without background category: PASCAL VOC20 <cit.> (VOC20), PASCAL Context59 <cit.> (Context59), COCOStuff <cit.> (Stuff), Cityscapes <cit.> and ADE20K <cit.>. We utilize the implementations provided by MMSegmentation <cit.>, employ a sliding window strategy, and resize input images to have a shorter side of 448 pixels. Following established practices, we avoid text expansions of class names and rely solely on the standard ImageNet prompts <cit.>. For a fair comparison, no post-processing is applied to any of the methods evaluated. Our method does not need any retraining or fine-tuning. Therefore, we can directly evaluate its performance on the validation set of all datasets. For evaluating semantic segmentation tasks, we employ the mean Intersection over Union (mIoU) metric. §.§.§ Baselines. We compare our method with two types of open-vocabulary semantic segmentation methods: 1) training-free methods including CLIP <cit.>, MaskCLIP <cit.>, ReCo <cit.>, CLIPSurgery <cit.>, GEM <cit.>, and SCLIP <cit.>; and 2) weakly-supervised methods including GroupViT <cit.>, SegCLIP <cit.>, OVSegmentor <cit.>, PGSeg <cit.>, ViewCo <cit.>, CoCu <cit.>, and TCL <cit.>. Unless explicitly mentioned, all reported results are directly cited from the respective papers. Additionally, we include results of the baselines based on CLIP-L/14 using our implementation for comprehensive evaluation. §.§ Analysis and Discussion In this section, we present comprehensive experiments to validate the effectiveness of our solution. To ensure a rigorous comparison, our experiments primarily focus on five datasets without the background class. §.§.§ Ablation study. We conduct ablation studies using the CLIP-B/16 model to assess the effectiveness of our solution.
The results are summarized in <ref>. Notably, the removal of the residual connection yields a significant performance improvement, increasing the average mIoU from 27.3 to 33.7. This result corroborates our assertion that residual features contain less local information, thereby influencing dense patch prediction. Interestingly, removing the FFN alone does not yield better results. However, the model without both residual connection and FFN together achieves the best performance, with an mIoU of 37.5. This observation is reasonable since removing the residual connection alters the input to the FFN, consequently affecting its output. In this case, removing FFN potentially mitigates the negative impact on performance. §.§.§ Different architectures. Given the simplicity of our proposed solution, it could be seamlessly applied to different architectures. We conduct experiments with CLIP <cit.> and OpenCLIP <cit.> using ViT-B/16 and ViT-L/14 models. The results regarding the average mIoU on five datasets are depicted in <ref>. Our analysis reveals several noteworthy findings: 1) Across different architectures, the consistent achievement of superior segmentation results aligns with the removal of both the residual connection and FFN at the last transformer block. This emphasizes the effectiveness of our solution to adapt vision-language pre-training models for downstream tasks. 2) Notably, the self-self attention consistently outperforms the vanilla q-k attention on our solution. For instance, in the CLIP-B/16 w/o RC and FFN model, the q-q attention yields an average mIoU of 37.5, surpassing the 27.6 mIoU achieved by the q-k attention. 3) For CLIP-L/14 and OpenCLIP-L/14 models, we observe that the self-self attention fails, and the performance of q-q and k-k attentions even falls below that of the vanilla q-k attention. This highlights that existing works aiming at revising the attention mechanism do not address the core problem when adapting CLIP for open vocabulary semantic segmentation. In contrast, our solution of using the attention output leads to significant performance improvements. 4) Interestingly, for models with ViT-B/16 architecture, the improvement of our solution is less pronounced with the identical attention (𝕀) and the v-v attention compared to other attention types. This phenomenon can be attributed to the fact that the 𝕀 and v-v attentions tend to sharpen the attention output, thereby increasing the norm of the attention output, as illustrated in <ref>. We assert that the enhancement of CLIP-B/16 and OpenCLIP-B/16 under the 𝕀 and v-v attentions primarily stems from implicitly eliminating the negative effect of the residual connection. Consequently, explicitly removing the residual connection and FFN leads to limited improvement. However, for models with ViT-L/14 architecture, where the norm of the residual connection is substantially larger, removing both the residual connection and FFN results in significant improvement. §.§.§ Effect of amplifying the norm of attention output. To further explore the relationship between the residual connection and the attention output in open vocabulary semantic segmentation tasks, we conduct experiments using α={0.1, 1, 2, 10, 100}, explicitly amplifying the F-norm of X_ to α^2 times. As shown in <ref>, our results reveal a clear trend: as the scaling factor α increases, models with all types of attention exhibit significantly improved performance. As expected, performance sharply declines when α decreases from 1 to 0.5. 
These findings underscore the importance of enlarging the norm of the attention output to mitigate the negative effects of the residual connection, ultimately leading to substantially improved performance. Hence, our solution of removing the residual connection proves to be simple yet effective. Additionally, these insights help elucidate the superior performance of SCLIP <cit.>, which adopts the q-q plus k-k attention, as this attention mechanism roughly doubles the vanilla attention. §.§ Comparison to State-of-the-art §.§.§ Quantitative results. <ref> summarizes the performance of various open-vocabulary semantic segmentation models on datasets without a background class. We observe that our method ClearCLIP achieves the best results on four out of five datasets. ClearCLIP significantly outperforms TCL on all datasets, with an average improvement of 4.4 mIoU. We note that SCLIP also achieves much better performance compared to other methods. This is because SCLIP implicitly attenuates the residual connection by using the q-q plus k-k attention, roughly doubling the attention output. However, our ClearCLIP explicitly removes the residual connection and FFN, resulting in an average improvement of 3.3 mIoU over SCLIP. Interestingly, when adopting the ViT-L/14 model, both MaskCLIP and SCLIP fail to achieve satisfactory results, while our method obtains higher results, at 34.5 mIoU, much better than the 23.6 mIoU of SCLIP. Although this result is not better than those achieved with the ViT-B/16 model, it still demonstrates the better generality of ClearCLIP with different backbones. We report the results on three datasets with a background class on <ref>. The performance of ClearCLIP is significantly better than all weakly-supervised state-of-the-art methods, with an average improvement of 3.8 mIoU over TCL. Additionally, ClearCLIP outperforms SCLIP, with performance improvements of 0.4, 2.1, and 3.0 mIoU on VOC21, Context60, and COCO Object datasets, respectively. These results fully demonstrate the effectiveness of our solution of decomposing CLIP's features for open-vocabulary semantic segmentation. §.§.§ Qualitative results. In <ref>, we present a qualitative comparison between ClearCLIP and three training-free methods, , CLIP, MaskCLIP, and SCLIP. Our observations are summarized as follows: 1) MaskCLIP exhibits good localization ability compared to CLIP but still generates segmentation maps with noticeable noise and many incoherent segments (e.g., those depicting a dog, cat, and duck in the 3rd and 8th columns); 2) SCLIP showcases the capability of detecting detailed semantic features with less noise compared to MaskCLIP; 3) ClearCLIP consistently produces much clearer and more accurate fine-grained segmentation maps than the other methods evaluated. These observations validate our attempt to enhance the performance of open-vocabulary semantic segmentation by generating clearer segmentation maps through CLIP's representation decomposition. § CONCLUSION In this study, we explore the origins and mechanisms of noisy segmentation results when utilizing the CLIP family for open-vocabulary semantic segmentation. We re-examine the architecture of CLIP and conduct a comparative analysis of feature statistics within the residual connection and the attention output. By investigating the differences in norm values across varied sizes of CLIP backbones, we discover that the residual connection serves as the primary source of segmentation noise. 
Additionally, through a comparative study between CLIP and DINO, we propose that the lack of local information in residual features stems from high-level supervision, which prioritizes global direction. Finally, we introduce ClearCLIP, a simple yet effective solution that removes the residual connection, adopts the self-self attention, and discards the FFN. ClearCLIP demonstrates superior performance and generalizability within the CLIP family. Acknowledgments. This study is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). splncs04 § APPENDIX tocsectionAppendix § ABLATION STUDY WITH DIFFERENT BACKBONES AND DATASETS We showcase the results of the ablation study for each dataset across different CLIP models in <ref>. It's clear that our method, which involves removing the residual connection and FFN, markedly enhances the open-vocabulary semantic segmentation capability of CLIP throughout all datasets. This enhancement is especially pronounced within the ViT-L/14 architecture, characterized by a larger norm of residual connection. These findings conclusively affirm the efficacy of our proposed methodology. § IMPACT OF CHANNEL-WISE RESIDUAL FEATURES In this part, we investigate the effect of residual features with low intensity. Specifically, we conduct experiments by selectively reintroducing channels from residual features that have lower average values. We report the results of eliminating the top β high-value channels and the effect of normalizing X_ in <ref>. The best performance is achieved when β≥ 70%. Additionally, normalizing X_ significantly reduces its scale, resulting in performance comparable to β≥ 70%. These findings support our hypothesis that high-level supervision in CLIP emphasizes global feature direction in the residual latent space, which introduces noise into the residual features. For simplicity, we eliminate all channels in X_. § INTEGRATION ACROSS MODELS Our solution serves as a free lunch applicable to various architectures and segmentation models with just 2-3 lines of code modification. Specifically, for MaskCLIP and SCLIP, we achieve this by eliminating the residual connection and Feed-Forward Network (FFN) of the last self-attention layer. For GEM, we utilize the attention output from the final layer as the final representation. Importantly, we preserve the original attention mechanisms of these methods. For baseline model, , CLIP, BLIP, OpenCLIP, and MetaCLIP, we enhance them by incorporating our complete solution. The performance of different models on five datasets is summarized in <ref>. The results demonstrate that our solution consistently enhances the performance of existing models in open-vocabulary semantic segmentation tasks, showcasing its exceptional generalizability. § VISUALIZATION OF FEATURE MAPS To intuitively demonstrate how the residual connections affect the performance, we visualize the feature maps of X_, X_, and X_ for two randomly selected samples in <ref>. It is obvious that the X_ feature maps associated with the residual connections are characterized by peak values in one channel (highlighted in a red box), significantly surpassing the other channels. And X_ is similar to X_, indicating the big influence of X_ to the final feature. Conversely, the feature maps in X_ demonstrate a more uniform distribution across channels. 
Given that the segmentation map is derived from the cosine similarity of feature vectors at each spatial location, such a disparity implies that the features in X_ and X_ are less discernible compared to those in X_, thereby introducing noise into the segmentation results. This observation supports our proposal that the high-level supervision in CLIP emphasizes the global feature direction in the residual latent space, making local feature vectors less distinguishable and leading to noise in residual features. § ADDITIONAL QUALITATIVE EXAMPLES In this part, we present more qualitative results comparison between ClearCLIP and state-of-the-art methods. <ref> show the results from COCOStuff, ADE20K and Pascal Context59 datasets respectively. Similar to the findings in the main text, the results of ClearCLIP exhibit much less noise than other methods, further underscoring the superiority of our method.
http://arxiv.org/abs/2407.13457v1
20240718123358
Entropy factorization via curvature
[ "Pietro Caputo", "Justin Salez" ]
math.PR
[ "math.PR", "math.CO", "math.FA", "39B62, 60J10" ]
Entropy factorization via curvature Pietro Caputo Justin Salez ========================================================================= § ABSTRACT We develop a new framework for establishing approximate factorization of entropy on arbitrary probability spaces, using a geometric notion known as non-negative sectional curvature. The resulting estimates are equivalent to entropy subadditivity and generalized Brascamp-Lieb inequalities, and provide a sharp modified log-Sobolev inequality for the Gibbs sampler of several particle systems in both continuous and discrete settings. The method allows us to obtain simple proofs of known results, as well as some new inequalities. We illustrate this through various applications, including discrete Gaussian free fields on arbitrary networks, the down-up walk on uniform n-sets, the uniform measure over permutations, and the uniform measure on the unit sphere in ℝ^n. Our method also yields a simple, coupling-based proof of the celebrated logarithmic Sobolev inequality for Langevin diffusions in a convex potential, which is one of the most emblematic applications of the Bakry-Émery criterion. § INTRODUCTION §.§ Motivation In this motivational section, we consider a product probability measure μ on an n-dimensional measurable space (Ω,ℱ)=⊗_i=1^n(Ω_i,ℱ_i). Variance tensorization. We write μ[f] and Var(f) for the expectation and variance of a random variable f∈ L^2(Ω,ℱ,μ). For i∈[n], we let Z_i:Ω→Ω_i denote the projection onto the i-th coordinate, and we define μ_i[f]:=μ[f|Z_1,…,Z_i-1,Z_i+1,…,Z_n] and Var_i(f):=μ_i[f^2]-μ_i^2[f]. With this notation, the celebrated Efron-Stein inequality asserts that ∀ f∈ L^2, Var(f) ≤ ∑_i=1^nμ[Var_i(f)]. This property is the starting point of the well-developed theory of concentration of Lipschitz observables on product spaces, as exposed in the comprehensive textbook <cit.> or the beautiful lecture notes <cit.>. From a dynamical viewpoint, it can also be regarded as a dimension-free Poincaré inequality for the Gibbs sampler associated with μ. Entropy tensorization. A far-reaching observation is that a similar relation holds between the entropy Ent(f):=μ[f log f]-μ[f]logμ[f] and its conditional versions Ent_i(f):=μ_i[f log f]-μ_i[f]logμ_i[f], i∈[n]. Specifically, we have ∀ f∈ Llog L, Ent(f) ≤ ∑_i=1^nμ[Ent_i(f)], where Llog L denotes the set of measurable functions f:Ω→ℝ_+ such that μ[|flog f|]<∞. The functional inequality (<ref>) implies (<ref>) by a classical perturbation argument around constant functions. It admits several notable consequences, including a sub-Gaussian concentration estimate for Lipschitz observables, and a dimension-free modified log-Sobolev inequality for the Gibbs sampler associated with μ. We again refer the reader to <cit.> for details. In fact, this entropy tensorization happens to be a special case of an even more general inequality due to Shearer, which we refer to as block factorization of entropy. Block factorization. Given a set A⊂[n], we write μ_A[·]:=μ[·|Z_j, j∈ A^c] for the conditional expectation given all coordinates in A^c:=[n]∖ A, and Ent_A(f):=μ_A[flog f]-μ_A[f]logμ_A[f] for the corresponding conditional entropy. With this notation at hand, Shearer's inequality (see, e.g., <cit.>) asserts that ∀ f∈ Llog L, θ_⋆ Ent(f) ≤ ∑_A⊂[n]θ_A μ[Ent_A(f)], for any non-negative weights (θ_A)_A⊂[n], where θ_⋆:=min_1≤ i≤ n∑_ A∋ iθ_A is the smallest marginal weight. This functional inequality considerably generalizes the entropy tensorization (<ref>), which corresponds to the choice θ_A= 1_|A|=1.
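Both tensorization statements above are easy to check numerically on a small product space. The following sketch, added here purely as an illustration (the variable names are ours), draws a random product measure μ on {0,1}^n and a random positive observable f, and verifies the entropy tensorization Ent(f) ≤ ∑_i μ[Ent_i(f)], i.e., Shearer's inequality with the weights θ_A = 1_{|A|=1}.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4
p = rng.uniform(0.2, 0.8, size=n)                       # marginals of the product measure mu
states = np.array(list(itertools.product([0, 1], repeat=n)))
mu = np.prod(np.where(states == 1, p, 1 - p), axis=1)   # probability of each configuration
f = rng.uniform(0.1, 5.0, size=len(states))             # a random positive observable

def ent(weights, values):
    """Ent(values) under the probability vector `weights`."""
    m = np.sum(weights * values)
    return np.sum(weights * values * np.log(values)) - m * np.log(m)

lhs = ent(mu, f)                                         # Ent(f)

rhs = 0.0                                                # sum_i mu[Ent_i(f)]
for i in range(n):
    others = [j for j in range(n) if j != i]
    for fixed in itertools.product([0, 1], repeat=n - 1):
        mask = np.all(states[:, others] == fixed, axis=1)
        w = mu[mask].sum()                               # probability of this configuration of the other coordinates
        rhs += w * ent(mu[mask] / w, f[mask])            # conditional entropy given Z_j, j != i

print(f"Ent(f)              = {lhs:.4f}")
print(f"sum_i mu[Ent_i(f)]  = {rhs:.4f}  (tensorization predicts lhs <= rhs)")
```

Shearer's inequality extends this check verbatim to arbitrary blocks A and weights θ_A.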
It implies a dimension-free modified log-Sobolev inequality for the Block Dynamics on (Ω,ℱ,μ) that consists in re-sampling any block of coordinates (Z_i, i∈ A) at rate θ_A according to its conditional law given (Z_j, j∈ A^c). Approximate block factorization. The concept of entropy factorization, and its use in the analysis of log-Sobolev inequalities go back to classical work on the Glauber dynamics for non-product measures such as lattice spin systems <cit.>. This point of view was then revisited in <cit.>. Motivated by the powerful consequences of Shearer's inequality in probability, combinatorics, and functional analysis, <cit.> have recently investigated more systematically the possibility of establishing approximate versions of (<ref>) that apply to non-product measures. More precisely, given a non-negative vector θ=(θ_A)_A⊂[n] and an arbitrary probability measure μ on an n-dimensional measurable space (Ω,ℱ)=⊗_i=1^n(Ω_i,ℱ_i), one looks for a constant κ=κ(μ,θ)>0, as large as possible, such that ∀ f∈ Llog L, κ Ent(f) ≤ ∑_A⊂[n]θ_A μ[Ent_A(f)]. As in the case of Shearer's inequality, one important motivation for studying approximate block factorizations is the fact that the constant κ in (<ref>) provides a lower bound on the modified log-Sobolev constant for the block dynamics associated with the weights θ. We refer the reader to the recent lecture notes <cit.> for a self-contained introduction to this active line of research. Entropy subadditivity and Brascamp-Lieb inequalities. Another important motivation stems from the equivalence between block factorizations, entropy subadditivity and the Brascamp-Lieb inequalities. Indeed, using the classical chain rule for entropy, for any sub σ-algebra 𝒢⊂ℱ we write ∀ f∈ Llog L, Ent(f) = μ[Ent(f | 𝒢)]+Ent(μ[f | 𝒢]), where μ[Ent( f|𝒢)]:=μ[flog(f/μ[ f|𝒢])] is the averaged conditional entropy of f with respect to 𝒢. Assuming (without loss of generality) that the weights (θ_A)_A⊂[n] are normalized so as to sum to 1, we can rewrite the approximate entropy factorization (<ref>) equivalently as an entropy subadditivity statement of the form: ∀ f∈ Llog L, ∑_A⊂[n]θ_A Ent(μ_A[f]) ≤ (1-κ)Ent(f). On the other hand, it is well known <cit.> that by Legendre duality, the subadditivity (<ref>) is equivalent to the Brascamp-Lieb inequality μ[ ∏_A⊂ [n] g_A(Z_A^c)^c_A] ≤∏_A⊂ [n] μ[g_A(Z_A^c)]^c_A , for all bounded measurable functions g_A:Ω_A^c↦ℝ_+, where, for all A⊂ [n], c_A:=θ_A/(1-κ), and Z_A denotes the set of variables (Z_i, i∈ A). We note that if f is a probability density with respect to μ, so that f dμ defines a probability measure on Ω, then μ_A[f] is the density of the pushforward of f dμ under the projection Z↦ Z_A^c (the marginal of f dμ on Z_A^c). In fact, the equivalence between (<ref>) and (<ref>) applies more generally when the projection Z↦ Z_A^c is replaced by an arbitrary measurable map Z↦ B_A (Z), and μ_A[f] is replaced by the conditional expectation μ[f|B_A(Z)]; see <cit.>. The inequalities (<ref>) obtained in this way are referred to as Brascamp-Lieb (B-L) type inequalities, in analogy with the classical B-L inequality, which corresponds to the setting where μ is the Lebesgue measure on ℝ^n and the measurable maps B_A are linear; see <cit.>. Our contribution. We adopt a more general viewpoint on the inequalities (<ref>) and (<ref>), by replacing conditional expectations μ_A[f] with arbitrary, not necessarily self-adjoint, Markov operators f↦ T_A f. It turns out that it is not hard to extend the duality to the new framework, thus generalizing the equivalence proved in <cit.>; see Section <ref>.
We hope that this point of view will provide a fruitful generalization of the classical duality, possibly allowing for interesting developments and new B-L type inequalities. Within this general framework, we propose a new approach towards establishing the approximate entropy factorization (<ref>), based on non-negative sectional curvature. This simple probabilistic and geometric notion, an L^∞ version of Ollivier's Ricci curvature <cit.>, has recently been shown to play a fundamental role in quantifying the entropy dissipation rate of Markov processes <cit.>. In fact, our main result, stated in Section <ref> below, is sufficiently general to also contain the main finding of <cit.>. We demonstrate the robustness of our method through various applications in both continuous and discrete settings. For instance, the method enables us to derive optimal entropy factorizations for Gaussian free fields on arbitrary networks, providing optimal bounds on the spectral gap and relative entropy decay of any weighted block dynamics. We also discuss analogous bounds for the uniform measure over permutations and over the unit sphere in ^n. Although obtaining optimal constants for these models with any choice of weights is significantly challenging, our method achieves bounds that match the state-of-the-art. Additionally, we apply our method to the down-up walk for uniform n-sets, a special case of the well-studied down-up walk on the bases of a matroid. Recent studies have obtained remarkable entropy contraction bounds for these processes using properties of log-concave polynomials (see <cit.>). In the special case of uniform n-sets, our method produces a slightly better bound than that derived from log-concavity. Finally, we highlight the versatility of our method by providing a straightforward coupling proof of the well known Bakry-Émery criterion for Langevin diffusions in a convex potential. §.§ General framework We consider a generalized version of the inequality (<ref>) where the conditional expectations are replaced with arbitrary Markov operators. Markov operators. From now on, we consider an arbitrary probability space (Ω,,), and we simply write L^p for L^p(Ω,,), p∈[1,∞]. We recall that a transition kernel on Ω is a function TΩ×→ [0,1] such that * for each point x∈Ω, the function E↦ T(x,E) is a probability measure; * for each event E∈, the function x↦ T(x,E) is measurable. Integrating with respect to such a kernel naturally gives rise to a bounded linear operator on L^∞, which is also denoted by T, and whose action is as follows: ∀ x∈Ω, (Tf)(x) := ∫_Ω f(y) T(x, d y). Of course, Tf is non-negative as soon as f is, and T 1= 1: such operators are usually known as Markov operators. Note that the underlying transition kernel can be recovered from the operator via the formula T(x,E)=(T 1_E)(x), so that the above identification is legit. We will always assume that T is measure preserving, in the sense that for all f∈ L^∞, [Tf] = [f], where we recall that [·] denotes the expectation w.r.t. . Thanks to this stationarity and Jensen's inequality, the formula (<ref>) actually defines a bounded linear operator T L^p→ L^p with operator norm 1, for any p∈[1,∞]. Its adjoint T^⋆ is also a measure-preserving Markov operator, characterized by the relation ∀ f,g∈ L^2, [(T^⋆ f)g] = [(T g)f]. As above, we write f∈ Llog L to mean that fΩ→_+ is measurable with [|flog f|]<∞. Note that Tf is then well-defined and in Llog L, with (Tf) ≤ (f). 
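To fix ideas, here is a minimal finite-state illustration of the objects just introduced: a measure-preserving Markov kernel on a finite space, its adjoint with respect to the invariant measure, and the entropy non-expansion Ent(Tf) ≤ Ent(f). The kernel and the observable are arbitrary choices; this is only a numerical sketch, not part of the development.

```python
import numpy as np

# A measure-preserving Markov operator on a finite space, its adjoint with respect
# to the invariant measure, and the entropy non-expansion Ent(Tf) <= Ent(f).
rng = np.random.default_rng(1)
m = 6
T = rng.random((m, m))
T /= T.sum(axis=1, keepdims=True)                  # rows sum to 1: a Markov kernel

w, V = np.linalg.eig(T.T)                          # invariant measure pi: left eigenvector for 1
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

T_star = (T * pi[:, None]).T / pi[:, None]         # T*(x,y) = pi(y) T(y,x) / pi(x)
assert np.allclose(T_star.sum(axis=1), 1.0)        # T* is again a Markov kernel
assert np.allclose(pi @ T, pi) and np.allclose(pi @ T_star, pi)   # both preserve pi

def ent(f):
    """Entropy functional with respect to pi."""
    mean = pi @ f
    return pi @ (f * np.log(f)) - mean * np.log(mean)

f = rng.uniform(0.1, 3.0, size=m)
print(f"Ent(Tf) = {ent(T @ f):.4f}  <=  Ent(f) = {ent(f):.4f}")
```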
We are now in position to introduce the general inequality that we propose to investigate. General functional inequality. For an integer M≥ 1, we assume to be given a family (T_1,…,T_M) of measure-preserving transition kernels on our workspace (Ω,,), identified with the corresponding Markov operators, along with a probability vector (θ_1,…,θ_M). Our aim is to find a constant κ∈[0,1], as large as possible, such that ∀ f∈ Llog L, ∑_i=1^M θ_i (T_i^⋆ f) ≤ (1-κ)(f). Since, for any σ-field ⊂, the conditional expectation f↦[f| G] is a measure-preserving self-adjoint Markov operator, the inequality (<ref>) contains (<ref>) as a special case. Also note that, by the convexity of the functional f↦(f), the inequality (<ref>) implies that the average Markov operator T:=∑_i=1^Mθ _i T_i satisfies the 1-step entropy contraction property ∀ f∈ Llog L, (T^⋆ f) ≤ (1-κ)(f), which we can iterate to deduce exponential convergence to equilibrium, in relative entropy, at rate κ, for the Markov chain with transition kernel T. In light of those two observations, it seems of considerable interest to find simple and broadly applicable criteria that guarantee (<ref>) with an explicit (and, ideally, sharp) constant κ. This is precisely the aim of the present paper. Duality and generalized B-L inequalities. Before presenting our main result, we observe that the equivalence between (<ref>) and (<ref>) established in <cit.> admits the following generalization in our setting. Consider Markov operators (T_1,…,T_M), a constant κ∈(0,1), and a probability vector (θ_1,…,θ_M), and define c_i = θ_i/1-κ , i=1,…,M. The following statements are equivalent. * Entropy subadditivity: ∀ f∈ Llog L, ∑_i=1^M θ_i (T_i^⋆ f) ≤ (1-κ) (f) . * Generalized B-L inequality: ∀φ_1,…,φ_M∈ L^∞, [∏_i=1^Me^c_i T_i φ_i] ≤ ∏_i=1^M[e^φ_i]^c_i . We remark that when the Markov operators T_i are conditional expectations of the form T_if = [f|_i] for some sub σ-algebras _i, Theorem <ref> yields the duality proved in <cit.>. In that case T_i is an orthogonal projection in L^2, and therefore T_i is self-adjoint, and by the chain rule (<ref>) one has that both statements (1) and (2) are equivalent to an entropy factorization statement of the form ∀ f∈ Llog L, ∑_i=1^M θ_i [( f|_i)] ≥ κ (f) . We note that the case where T_i is not a conditional expectation goes beyond the common B-L inequality setup and we are not aware of previous work addressing the duality in this generality. The proof, however, boils down to essentially the same argument as in <cit.>, which is based on the well known variational principle for relative entropy stating that (f) = sup_h {[fh] - log[e^h]} , where h ranges over all functions on Ω such that e^h∈ L^1. Suppose (b) holds. Let f∈ Llog L and set h = ∑_i=1^Mc_iT_ilog (T_i^⋆ f). Without loss of generality we may assume that f∈ L^∞ and (f)=1. From the variational principle (<ref>), one has f ≥[fh] - log[e^h] = ∑_ic_i[f T_ilog (T_i^⋆ f)] -log[∏_i e^c_iT_iφ_i] , where we define φ_i := log (T_i^⋆ f). From the assumption (b) it follows that log[∏_i e^c_iT_iφ_i]≤∑_i c_i log[ e^φ_i]. Since [ e^φ_i]=[T_i^⋆ f ] = 1, using also [f T_ilog (T_i^⋆ f)]=(T_i^⋆ f), (<ref>) implies f ≥∑_ic_i (T_i^⋆ f) . This proves (b) (a). To prove the converse, take arbitrary functions φ_i∈ L^∞, and set h = ∑_i=1^Mc_iT_iφ_i. Taking f = e^h/[e^h], we observe that f = [fh] - log[e^h] = ∑_i c_i[T_i^⋆ f φ_i] -log[∏_ie^c_iT_iφ_i]. From the assumption (a) and (<ref>) it follows that log[∏_ie^c_iT_iφ_i]≤∑_ic_i[T_i^⋆ f φ_i] - ∑_ic_i (T_i^⋆ f) . 
From the variational principle (<ref>), for each i∈[n] one has (T_i^⋆ f) = [T_i^⋆ flog T_i^⋆ f]≥[T_i^⋆ fφ_i] -log[e^φ_i]. If we multiply by c_i and sum over i in (<ref>), and then take exponentials in (<ref>) we obtain [∏_ie^c_iT_iφ_i]≤∏_i[e^φ_i]^c_i . This proves (a) (b). §.§ Main result We henceforth assume that Ω is equipped with a metric (·,·), which we are free to choose as we want as long as the square-integrability condition ∫_Ω^2(o,x)( dx) < +∞, holds for some (and hence every) point o∈Ω. We write W_1 and W_∞ for the resulting L^1 and L^∞-Wasserstein distances, defined for probability measures μ,ν∈(Ω) as follows: W_1(μ,ν) := inf_X∼μ,Y∼ν𝔼[(X,Y)], W_∞(μ,ν) := inf_X∼μ,Y∼νess sup(X,Y), where the infimum runs over all random pairs (X,Y) with marginals μ and ν. Also, we define the essential Lipschitz constant of a function fΩ→ as (f) := inf_N∈ sup{|f(x)-f(y)|/(x,y) (x,y )∈ (Ω∖ N)^2, x y }, where ⊂ denotes the set of zero-probability events, henceforth called null sets. Consider Markov operators (T_1,…,T_M), and a probability vector (θ_1,…,θ_M). Assume that, for some numbers ℓ_1,…,ℓ_M≥ 0, and a constant κ∈[0,1], the following conditions are satisfied: * (Sectional curvature). For each 1≤ i≤ M and all x,y outside a null set, W_∞(T_i^⋆(x,·),T_i^⋆(y,·)) ≤ ℓ_i(x,y). * (Average curvature). For all x,y outside a null set, ∑_i=1^Mθ_iℓ_i W_1(T_i(x,·),T_i(y,·)) ≤ (1-κ)(x,y). * (Regularity). For each 1≤ i≤ M, we have ∀ f∈ L^∞, (T_if) < ∞. Then, the entropy contraction property (<ref>) holds with constant κ. There are several important remarks to make about this result. Assumption (iii) trivially holds when Ω is finite and, more generally, when the metric (·,·) is discrete. When it fails, it can often be bypassed by an appropriate regularization. More precisely, suppose that our workspace (Ω,,) admits a family of measure-preserving transition kernels (P_τ)_τ∈(0,1) such that * (Continuity). For each f∈ L^2, we have P_τ f f in L^2. * (Regularity). For each τ∈(0,1), and each f∈ L^∞, we have (P_τ f) < ∞. * (Non-negative sectional curvature). For some sequence ϵ_τ→ 0, as τ→0, for all x,y∈Ω outside a null set, W_∞(P_τ(x,·),P_τ(y,·)) ≤ (1+ϵ_τ)(x,y) W_∞(P_τ^⋆(x,·),P_τ^⋆(y,·)) ≤ (1+ϵ_τ)(x,y) . Then, Assumption (iii) in Theorem <ref> is unnecessary: indeed, we can apply the theorem to the operators (P_τ T_i)_1≤ i≤ M – which do fulfill all the requirements – instead of (T_i)_1≤ i≤ M and then let τ→ 0 to obtain the desired conclusion. This regularization trick will be implemented in the Gaussian case in Section <ref>, using the Ornstein-Uhlenbeck semi-group. The result is already highly non-trivial in the special case where M=1, ℓ=1 and Ω is finite: indeed, it is then exactly the main result of the recent preprint <cit.>, devoted to the celebrated Peres-Tetali conjecture. The present paper can be seen as a (powerful) refinement of the method used therein. Recovering the original Shearer inequality (<ref>) from Theorem <ref> is easy: consider a product space (Ω,,)=⊗_i=1^n(Ω_i,_i,_i), for some n∈, and define a family of self-adjoint Markov operators (T_A)_A⊂ [n] by the formula T_A f := _A[f] = [f|Z_i,i∈], where we recall that Z=(Z_1,…,Z_n) denotes the identity map on Ω. More explicitly, T_A(x,·) is the law of the random vector obtained from x by replacing x_i with Z_i for each i∈ A. Now, equip Ω with the Hamming distance (x,y):=#{i∈[n] x_i y_i}. Then, W_∞(T_A(x,·),T_A(y,·)) ≤ #{i∈ x_i y_i} ≤ (x,y), as witnessed by the trivial coupling that consists in replacing both x_i and y_i by Z_i for each i∈ A. 
Since W_1≤ W_∞, we moreover have, for any probability vector (θ_A)_A⊂[n], ∑_A⊂[n]θ_A W_1(T_A(x,·),T_A(y,·)) ≤ ∑_i=1^n 1_x_i y_i(1-∑_A∋ iθ_A) ≤ (1-θ_⋆)(x,y). Thus, Theorem <ref> applies with M=2^n, ℓ≡ 1, κ=θ_⋆. Therefore, for all f∈ Llog L, ∑_A⊂[n]θ_A (_A[f]) ≤ (1-θ_⋆)(f). Using the chain rule (<ref>), this is equivalent to (<ref>). §.§ Some examples We consider several applications of our main result, to both continuous and discrete settings. Discrete Gaussian Free Field. The discrete Gaussian free field is a Gaussian probability measure on ^n, with covariance of the form Γ=( Id - P)^-1, where P is a symmetric n× n sub-stochastic matrix, that is P_i,j≥ 0 for all i,j∈[n], ∑_jP_i,j≤ 1 for all i∈[n], and such that Id - P is invertible. We also assume that P is irreducible, that is for all i,j there exists k∈ such that P^k_i,j>0. A well studied example in equilibrium statistical mechanics is obtained by taking P as the transition matrix of the random walk on ^d that is killed upon exiting a finite domain V⊂^d, see e.g. <cit.>. Thus, we fix a positive definite covariance matrix Γ=(Γ_ij)_1≤ i,j≤ n as above, and consider the probability space Ω := ^n, := (^n), ( dx):=1/√((2π)^n(Γ))exp(-x^⊤Γ^-1x/2) dx. As motivated in Section <ref>, we assume to be given a probability vector (θ_A)_A⊂[n], and we seek to estimate the optimal constants λ and κ in the inequalities ∀ f∈ L^2, λ (f) ≤ ∑_A⊂[n]θ_A[_A(f)], ∀ f∈ Llog L, κ (f) ≤ ∑_A⊂[n]θ_A[_A(f)]. The constant λ coincides with the spectral gap of the weighted block dynamics, that is the Markov chain where at each step a block A⊂[n] is picked with probability θ_A and the variables (X_i, i∈ A) are resampled according to the conditional distribution (·|X_A^c). Indeed, (<ref>) can be written in the form λ (f) ≤(f,f) where denotes the Dirichlet form of the weighted block dynamics ∀ f,g∈ L^2, (f,g) := ∑_A⊂[n]θ_A[_A(f,g)], and _A(f,g)=_A[fg]-_A[f]_A[g] is the covariance w.r.t. (·|X_A^c). On the other hand, the constant κ in (<ref>) provides a lower bound on the modified log-Sobolev constant, describing the rate of exponential decay of the relative entropy for the same dynamics in continuous time. Indeed, the modified log-Sobolev constant ρ_ MLS is defined as the largest ρ≥0 such that ∀ f∈ Llog L, ρ (f) ≤ (f,log f), and, since by convexity _A(f,log f)≥_A(f) for all f∈ Llog L, A⊂[n], it follows that κ≤ρ_ MLS. Moreover, it is well known that λ≥κ and 2λ≥ρ_ MLS. We refer e.g. to <cit.> for more background and a proof of these standard relations. An application of our general framework will allow us to control the fundamental constants λ,κ in terms of the least eigenvalue of the “laplacian" Δ:= Id - P, and to show that they actually coincide for the Glauber dynamics, that is the process obtained when θ_A = 1/n 1_|A|=1. For any irreducible, symmetric, sub-stochastic matrix P such that Id - P is invertible, the Gaussian free field with covariance Γ=( Id - P)^-1 satisfies κ≥δ θ_⋆ , where δ denotes the smallest eigenvalue of the matrix Δ= Id - P, and θ_⋆=min_1≤ i≤ n∑_A∋ iθ_A. Moreover, the Glauber dynamics satisfies λ= κ =δ/n . Theorem <ref> can be seen as the Gaussian Free Field version of Shearer inequality, which is shown to be optimal for the Glauber dynamics. In fact, as the proof will show, in that case the inequality (<ref>) is saturated by a linear function f(x)=z^⊤ x, where z∈^n is such that Δ z= δ z. Thus the spectral gap is achieved at a linear function. 
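To make the statement concrete, the following sketch (an illustration only, on an arbitrarily chosen network) builds the killed simple random walk P on a path, computes δ as the smallest eigenvalue of Δ = I − P, and evaluates the Glauber Dirichlet form at the linear observable f(x)=z^⊤ x with Δ z=δ z, using the standard Gaussian identity Var(x_i|x_{-i})=1/(Γ^{-1})_{ii}; the resulting Rayleigh quotient matches δ/n, as predicted.

```python
import numpy as np

# Glauber dynamics of a Gaussian free field on a path of n sites (illustrative network).
# P is the simple random walk killed at the boundary, Delta = I - P, Gamma = Delta^{-1}.
n = 8
P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = 0.5
    if i < n - 1:
        P[i, i + 1] = 0.5
Delta = np.eye(n) - P
Gamma = np.linalg.inv(Delta)

evals, evecs = np.linalg.eigh(Delta)
delta, z = evals[0], evecs[:, 0]                 # smallest eigenvalue and its eigenvector

# Glauber Dirichlet form at the linear observable f(x) = z^T x:
# E[Var_i(f)] = z_i^2 * Var(x_i | x_{-i}) = z_i^2 / Delta_ii  (Gaussian conditional variance),
# while Var(f) = z^T Gamma z.
dirichlet = np.sum(z**2 / np.diag(Delta)) / n
variance = z @ Gamma @ z
print("Rayleigh quotient at the optimal linear f :", dirichlet / variance)
print("delta / n                                 :", delta / n)
```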
From the above mentioned inequalities κ≤ρ_ MLS≤ 2λ, this also shows that the Glauber dynamics has a modified log-Sobolev constant satisfying δ/n ≤ ρ_ MLS ≤ 2δ/n . Quantifying the speed of convergence to equilibrium of the Glauber dynamics for Gaussian free fields is a natural problem which, to the best of our knowledge, has only been investigated in the two-dimensional square lattice <cit.>, that is when P is the transition matrix of the random walk on ^2 that is killed upon exiting a finite domain V⊂^2. As a consequence of Theorem <ref>, one can compute explicitly both the spectral gap and the entropy contraction constant κ of the lattice Gaussian free field, for any dimension d≥ 1, and for any domain V of the form V=[n_1]×⋯×[n_d]⊂^d, since in that case λ=κ = δ/n, and δ = 2/d∑_k=1^d[1-cos(π/n_k+1)], which behaves like 2π^2/n^2 in the regime n_1=n_2=⋯=n_d=n and n→∞. Combining Theorem <ref> with Theorem <ref> one obtains equivalent B-L inequalities for the gaussian measures. The latter were in fact already known, since they can be directly related to the classical B-L inequalities for Lebesgue measures, see e.g. <cit.>. A direct proof of the gaussian B-L inequality is also possible along the lines of <cit.>, see <cit.>. Thus, one could have obtained Theorem <ref> from these known estimates. However, the purpose of Theorem <ref> is to illustrate that an entirely different approach based on curvature can be used to obtain optimal entropy factorization estimates, and, in principle, it can yield an alternative proof of the classical B-L inequalities as well. One caveat is that we assume the specific form Γ=( Id-P)^-1 for the covariance matrix of our gaussian measure, whereas it would be desirable to have the result for all positive definite matrices Γ. For technical reasons, in this more general setup at the moment we can only prove the desired estimate up to a factor 2, that is we can show that σ≤ 2κ≤ 2σ, for an explicit spectral quantity σ=σ(Γ,θ), see Remark <ref> for more details. We believe it should be possible to obtain κ=λ=σ for all (Γ,θ), as we prove in the case of Γ=( Id-P)^-1 with Glauber dynamics in Theorem <ref>, thus obtaining full applicability of our argument to derive Gaussian and classical B-L inequalities. Permutations and weighted block shuffles. The next example we consider is the case where is the uniform measure over S_n, the set of permutations of [n]: Ω := S_n, (σ):=1/n! , σ∈Ω . As above, we consider the weighted block dynamics associated to a given probability vector (θ_A, A⊂[n]), with Dirichlet form formally given exactly by the expression in (<ref>). If we think of the configuration σ∈Ω as representing the positions of n cards, the weighted block dynamics is the Markov chain where at each step a block A is chosen with probability θ_A and all cards located in A get uniformly reshuffled among themselves. We refer to this as the θ-weighted block shuffle. The case θ_A=1/n2 1_|A|=2 is the well known random transposition dynamics. For any choice of the probability vector (θ_A, A⊂[n]), ∀ f∈ Llog L, θ_⋆⋆ (f) ≤ ∑_A⊂[n]θ_A[_A(f)], where θ_⋆⋆:=min_1≤ i<j≤ n∑_ A⊃{i,j}θ_A. The estimate (<ref>) is equivalent to the subadditivity statement ∀ f∈ Llog L, ∑_A⊂[n]θ_A (_A[f]) ≤ (1-κ)(f) , with κ = θ_⋆⋆, and can be used to obtain equivalent B-L inequalities via Theorem <ref>. As we observed in (<ref>), the bound (<ref>) implies ρ_ MLS≥θ_⋆⋆, providing a lower bound on the rate of exponential decay to stationarity of the continuous time version of the θ-weighted block shuffle. 
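As a small numerical illustration of the theorem (not needed for its proof), the sketch below checks the block factorization bound θ_⋆⋆ Ent(f) ≤ ∑_A θ_A E[Ent_A(f)] on S_4 for a randomly chosen probability vector θ, by brute-force enumeration of permutations and blocks; all numerical choices are arbitrary.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 4
perms = list(itertools.permutations(range(n)))
f = rng.uniform(0.1, 3.0, size=len(perms))                # arbitrary positive observable on S_n

def ent(values):
    """Entropy with respect to the uniform measure on the given values."""
    m = values.mean()
    return (values * np.log(values)).mean() - m * np.log(m)

subsets = [A for r in range(n + 1) for A in itertools.combinations(range(n), r)]
theta = rng.random(len(subsets))
theta /= theta.sum()

rhs = 0.0                                                 # sum_A theta_A E[Ent_A(f)]
for A, tA in zip(subsets, theta):
    Ac = [i for i in range(n) if i not in A]
    classes = {}
    for idx, sigma in enumerate(perms):                   # condition on sigma restricted to A^c
        classes.setdefault(tuple(sigma[i] for i in Ac), []).append(idx)
    # all conditioning classes have the same size, so a plain mean gives E[Ent_A(f)]
    rhs += tA * np.mean([ent(f[c]) for c in classes.values()])

theta_ss = min(sum(tA for A, tA in zip(subsets, theta) if i in A and j in A)
               for i in range(n) for j in range(i + 1, n))
print(f"theta_** Ent(f) = {theta_ss * ent(f):.4f}  <=  {rhs:.4f}")
```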
It is interesting to note that the estimate (<ref>) is equivalent to the B-L bounds proved in <cit.>, which were obtained by an entirely different approach, namely using monotonicity along semigroup interpolations for the corresponding B-L inequality. On the other hand, in the “mean field” case where θ_A depends only on the size of A, that is θ_A=φ(|A|) for some function φ≥ 0, optimal bounds on the constant κ in (<ref>) were obtained recently in <cit.>. In particular, <cit.> shows that for any ℓ∈{1,…,n}, if θ_A = 1/nℓ 1_|A|=ℓ, then (<ref>) holds with the optimal constant κ=κ_ℓ,n := log(ℓ!)/log(n!), and that the only extremal functions are Dirac masses. Here one has θ_⋆⋆= ℓ(ℓ-1)/n(n-1) which is smaller than κ_ℓ,n for ℓ∈{2,…,n-1}, and therefore Theorem <ref> is not optimal for such homogeneous weights. However, Theorem <ref>, as well as <cit.>, addresses the entropy factorization problem for arbitrary weights (θ_A), and in this generality they may provide the best known bounds. Entropy of ℓ^n_p–spherical marginals. A celebrated result of Carlen-Lieb-Loss <cit.> states that the uniform probability measure on the unit sphere ^n-1 satisfies a subadditivity estimate with a factor 2, independently of n, with respect to the bound satisfied by a product measure, thus quantifying the departure from independence for this distribution. An equivalent formulation of this is that when is the uniform probability measure on ^n-1, then the inequality (<ref>), taking the homogeneous weights θ_A : = 1/n 1_|A|=n-1, holds with constant κ = 1-2/n , for all n≥ 2. Moreover, the same authors also show that this constant is optimal. Extensions of this result were later proposed by <cit.> and <cit.>. In particular, it was shown in <cit.> that for any choice of the probability vector θ, one has the estimate (<ref>) with κ = θ_⋆⋆, where θ_⋆⋆:=min_1≤ i<j≤ n∑_ A⊃{i,j}θ_A. Note that this holds for arbitrary θ and it recovers the optimal bound (<ref>) in the “all but one" case θ_A : = 1/n 1_|A|=n-1. The proof of these bounds was based on an extension of the approach introduced in <cit.>, which used monotonicity along semigroup interpolations for the corresponding B-L inequality. In the special case θ_A : = 1/n2 1_|A|=2, the associated weighted block dynamics (<ref>) is often referred to as the Kac walk on the sphere. In this case the bound κ = θ_⋆⋆ gives κ = 2/n(n-1) . Letting ρ_ MLS denote the modified log-Sobolev constant from (<ref>), the simple bound ρ_ MLS≥κ allows one to obtain ρ_ MLS≥2/n(n-1), which recovers (and actually improves by a factor 2) an estimate previously shown by Villani in <cit.> by a different type of semigroup interpolation. Here we discuss how our alternative method could be applied to obtain the same estimates, in the general setting of ℓ^n_p–spheres, for all p>0. Namely, for p>0, define the ℓ^n_p–sphere ^n-1_p as Ω = ^n-1_p := {x∈^n: x_p = 1} , x_p = (∑_i=1^n|x_i|^p)^1/p . We let denote the law of the vector X=(X_1,…,X_n) obtained as X = (G_1,…,G_n)/S_p(G)^1/p , S_p(x):=∑_i=1^n|x_i|^p , where G=(G_1,…,G_n) is a vector of i.i.d. real random variables with density proportional to e^-|x|^p, x∈. is also called the cone measure on ^n-1_p; see e.g. <cit.>. A characteristic property of this construction is that X and S_p(G) are independent. Note that, for p=1,2, coincides with the uniform distribution on ^n-1_1, and ^n-1_2=^n-1 respectively. We believe that it is possible to prove the following statement as a consequence of Theorem <ref>. 
For all p>0, all probability vectors θ, the cone measure on ^n-1_p satisfies ∀ f∈ Llog L, θ_⋆⋆ (f) ≤ ∑_A⊂[n]θ_A[_A(f)]. For p=2 this coincides with the result in <cit.>. A related estimate for all p>0, under additional symmetry assumptions on f, was obtained in <cit.>. Note that the above statement is formally equivalent to Theorem <ref>. In fact, to prove the conjecture one may proceed exactly as in our proof of Theorem <ref>, namely by exhibiting a metric on Ω for which Assumptions (i) and (ii) in Theorem <ref> are satisfied with κ = θ_⋆⋆, when the Markov operators are given by the conditional expectations T_A f = _Af. As we show in Section <ref>, this can be achieved by taking the special metric (x,y):=∑_i=1^n||x_i|^p - |y_i|^p| + ∑_i=1^n 1_{x_iy_i<0} . Assumption (iii) however fails in this case. As in the case of Theorem <ref>, the proof of Conjecture <ref> would be complete if we could establish a regularizing procedure as described in Remark <ref>, but we were unable to find the needed regularizing kernels for this model, since the natural diffusion process on ^n-1_p does not seem to have nonnegatve sectional curvature with respect to our metric. Down-up walk for uniform n-sets. Let be a finite space with cardinality N=||. Given n∈{1,…,N-1}, let Ω=n denote set of all subsets of with cardinality n, and call the uniform distribution over Ω. For a fixed integer k∈{1,…,n}, we let T_k denote the Markov operator associated to the following resampling step. Given X∈Ω, let X_-k denote a uniformly random subset of X with cardinality n-k, and, given X_-k, let X'∈Ω denote a uniformly random supset of X_-k with cardinality n. Thus X' is obtained from X by first removing k elements chosen uniformly at random from X, thus obtaining X_-k, and then by adding k elements chosen uniformly at random from ∖ X_-k. The corresponding Markov operator T_k is then written as T_kf(X) = ∑_X'∈ΩT_k(X,X')f(X') , X∈Ω, where T_k(X,X') denotes the probability of the transition from X to X' as a result of the the above described step. The Markov chain associated to T_k is also called the (n↔ n-k) down-up walk on uniform n-sets. By symmetry, T_k=T_k^⋆ and the walk is -reversible. The (n↔ n-k) down-up walk on uniform n-sets is a special case of the (n↔ n-k) down-up walk on the bases of a matroid, an extensively studied Markov chain which is known to satisfy an entropy contraction with rate k/n, for all k, that is, for any f:Ω↦_+, (T_k f)≤(1-k/n) f , see <cit.>. Somewhat surprisingly, as a simple consequence of our main result (Theorem <ref>), we obtain an improvement for the (n↔ n-k) down-up walk on uniform n-sets, and show that for any k one has entropy contraction with the strictly larger rate k/n + k/N-(n-k)(1- k/n). We formulate this in the following slightly more general terms. For any θ=(θ_1,…,θ_n), with θ_k≥0 and ∑_k=1^nθ_k=1, for any f:Ω↦_+, ∑_k=1^nθ_k(T_k f) ≤ (1-κ) f, with κ := κ_0(θ) + κ_1(θ), where κ_0(θ):=∑_k=1^nθ_k k/n , κ_1(θ):=∑_k=1^nθ_k k/N-(n-k)(1- k/n) . In particular, for any k=1,…,n, the (n↔ n-k) down-up walk for uniform n-sets has entropy contraction with rate k/n + k/N-(n-k)(1- k/n). The proof is based on showing that Assumptions (i) and (ii) of Theorem <ref> are satisfied with the required constants. In Section <ref> we provide the simple coupling argument for these curvature bounds. Langevin diffusion in a convex potential. 
Finally, we consider the n-dimensional Langevin diffusion in a convex potential, that is the stochastic differential equation X_t = -∇ V(X_t) t+√(2) B_t, where B=(B_t)_t≥ 0 is a standard n-dimensional Brownian motion and V^n→ a smooth (say, twice continuously differentiable) function, which is assumed to be ρ-convex: Hess(V) ≥ ρ Id, for some constant ρ>0. The classical theory ensures that there is a unique strong solution X=(X_t)_t≥ 0 starting from any fixed condition x∈^n, and that the formula (P_tf)(x) := _x[f(X_t)] defines a self-adjoint Markov semi-group (P_t)_t≥ 0 on L^2(Ω,,), where Ω=^n, =(^n), ( dx) ∝ e^-V(x) x. The following fundamental result on the relative entropy decay for Langevin diffusions is one of the most emblematic applications of the Bakry-Émery theory. It is equivalent to a log-Sobolev inequality for the measure , with optimal constant in the case where V is quadratic, see e.g. <cit.>. Under assumption (<ref>), one has (P_t f) ≤ e^-2ρ t (f), for all times t≥ 0 and all functions f∈ Llog L. In Section <ref> we show how to use our framework to obtain an elementary probabilistic proof of Theorem <ref>. We thank Thomas Courtade for helpful conversations on the B-L inequalities for Gaussian measures, as well as Francesco Pedrotti and Sam Power for many relevant comments and references. P.C. warmly thanks the research center CEREMADE at Paris Dauphine for the kind hospitality. § PROOF OF THEOREM <REF> §.§ Step 1: Lipschitz contraction Our first step consists in converting the curvature assumptions into a crucial Lipschitz contraction property for the non-linear operator Λ L^∞→ L^∞ defined by Λ f := ∑_i=1^Mθ_i T_ilog T_i^⋆exp f. Under Assumptions (i)-(ii), we have ∀ f∈ L^∞, (Λ f) ≤ (1-κ) (f). We start with an elementary remark that will be used several times throughout the proof. [Null sets]If T is a measure-preserving transition kernel on (Ω,,), and if N⊂Ω is a null set, then so is {x∈Ω T(x,N)>0}. Simply choose f= 1_N in (<ref>) to obtain ∫_Ω T(x,N)( x) = (N) = 0. Fix f∈ L^∞. By definition, there is a null set N⊂Ω such that ∀ (x,y)∈(Ω∖ N)^2, |f(x)-f(y)| ≤ (f)(x,y). Claim <ref> ensures that N' := ⋃_i=1^M{x∈Ω T_i^⋆(x,N)>0} is also a null set. Upon enlarging it if necessary, we may assume that N' also contains the null sets appearing in Assumption (i). Now, fix two points x,y∈Ω∖ N', and an index 1≤ i≤ n. By assumption, there is a coupling (X^⋆,Y^⋆) of T_i^⋆(x,·) and T_i^⋆(y,·) such that almost-surely, (X^⋆,Y^⋆)≤ℓ_i(x,y) and X^⋆,Y^⋆∉ N. In particular, this implies that almost-surely, f(X^⋆) ≤ f(Y^⋆)+ℓ_i(f)(x,y), Taking exponentials, then expectations, and then logarithms, we arrive at the key inequality (log T_i^⋆exp f)(x) ≤ (log T_i^⋆exp f)(y)+ℓ_i(f)(x,y), valid for any 1≤ i≤ M and any x,y∈Ω∖ N'. Now, invoking Claim <ref> again, we know that N” := ⋃_i=1^M{x∈Ω T_i(x,N')>0} is a null set and, upon enlarging it if needed, we may assume that it contains the null set appearing in Assumption (ii). Consider a pair (x,y)∈ (Ω∖ N”)^2, and let (X,Y) be any coupling of T_i(x,·) and T_i(y,·). Then, X,Y are both in Ω∖ N' almost-surely, so that (<ref>) holds almost-surely with x,y replaced by (X,Y). Taking expectations, we obtain (T_ilog T_i^⋆exp f)(x) ≤ (T_ilog T_i^⋆exp f)(y) +ℓ_i(f)[(X,Y)]. Since this is true for any choice of the coupling (X,Y), we may replace [(X,Y)] with W_1(T_i(x,·),T_i(y,·)). Finally, we simply multiply through by θ_i and sum over 1≤ i≤ M. In view of Assumption (ii) and our definition of Λ, we obtain (Λ f)(x) ≤ ( Λ f)(y) +(1-κ) (f)(x,y). 
Since this is true for any x,y∈Ω∖ N”, the claim is proved. §.§ Step 2: variance contraction With Proposition <ref> on hand, the proof of Theorem <ref> boils down to establishing that the Lipschitz contraction (Λ f) ≤ (1-κ)(f) and the regularity assumption (iii) together imply the desired entropy contraction (<ref>). We first establish a weaker statement, obtained by replacing entropies with variances. Under Assumption (iii), the contraction (<ref>) implies ∀ f∈ L^2, ∑_i=1^Mθ_i(T_i^⋆ f) ≤ (1-κ)(f). Without loss of generality, we assume that f∈ L^∞ and that [f]=0 and [f^2]=1. Using the uniform approximation e^ε f=1+ε f+o(ε) as ε→ 0, we have 1/εΛ(ε f) Tf, where we have introduced the operator T:= ∑_i=1^Mθ_iT_iT_i^⋆. It follows that T inherits the Lipschitz contraction property (<ref>) from Λ, i.e. (T f) ≤ (1-κ) (f). Now, T is clearly a non-negative self-adjoint Markov operator on the Hilbert space L^2, so the spectral theorem provides us with a Borel probability measure μ on [0,1] such that ∀ n∈, ∫_0^1λ^nμ( dλ) = ⟨ f,T^n f⟩. Iterating the Lipschitz contraction (<ref>), we find ∫_0^1λ^2nμ( dλ) = (T^n f) = 1/2∫_Ω^2[(T^nf)(x)-(T^nf)(y)]^2(dx)(dy) ≤ (T^nf)^2/2∫_Ω^2^2(x,y)(dx)(dy) ≤ (1-κ)^2n-2 (Tf)^2/2 ∫_Ω^2^2(x,y)(dx)(dy). Note that the right-hand side is finite thanks to Assumption (iii) and (<ref>). We may thus safely raise both sides to the power 1/(2n) and send n→∞ to conclude that the measure μ is actually supported on [0,1-κ]. In particular, we have ⟨ f,T f⟩ = ∫_0^1λμ( dλ) ≤ 1-κ. By definition of T, the left-hand side is exactly ∑_i=1^Mθ_i(T_i^⋆ f), and the claim is proved. §.§ Step 3: entropy contraction We henceforth fix a parameter m∈(0,∞) and consider the compact set K⊂ L^2 formed by those functions f satisfying f_∞≤ m and [e^f]=1. We then introduce the constant ρ := sup_f∈ K∖{0}(f), where (f):=∑_i=1^Mθ_i(T_i^⋆ e^f)/(e^f). We will prove that ρ≤ 1-κ, independently of m. This establishes the desired entropy contraction property (<ref>) for all positive measurable functions that are bounded away from 0 and ∞. By a straightforward approximation argument, the conclusion then extends to all functions in Llog L, and our proof of Theorem <ref> is complete. Under Assumption (iii), the Lipschitz contraction (<ref>) implies that ρ≤ 1-κ, independently of the parameter m. An essential ingredient is the following structural property of optimizers of . If f∈ K∖{0} satisfies (f)=ρ, then there is α∈ such that * Λ f ≤ρ f + α a.-s. on the set {f m}. * Λ f ≥ρ f + α a.-s. on the set {f -m}. In particular, we must have ρ (f)≤(Λ f). Fix f∈ K∖{0} such that (f)=ρ, and let us introduce the short-hands A:={f m}, B:={f -m} and Ψ := Λ f -ρ f. With this notation at hands, the two items in the statements are rewritten as ess sup_A Ψ ≤ ess inf_B Ψ. Assume for a contradiction that (<ref>) fails. This means that we can find β∈ such that (A∩{Ψ>β})>0 and (B∩{Ψ<β})>0. By monotone convergence, this remains true with A and B replaced by A':={f≤ m-δ} and B':={f≥-m+δ}, provided δ>0 is small enough. Now, consider the function h := 1_A'∩{Ψ>β}/(A'∩{Ψ>β})- 1_B'∩{Ψ<β}/(B'∩{Ψ<β}). Note that h∈ L^∞, with [h]=0 and h≤ 0 on (A')^c and h≥ 0 on (B')^c. Those properties guarantee that the function f_ε := log(e^f+ε h) is in K for all small enough ε>0, and a Taylor expansion gives (f_ε) = (f)+ε⟨ h, Ψ⟩/(e^f)+o(ε) as ε→ 0. Thus, the maximality of at f imposes ⟨ h, Ψ⟩≤ 0. In view of (<ref>), this reads [Ψ|A'∩{Ψ>β}] ≤ [Ψ|B'∩{Ψ<β}]. This inequality is clearly self-contradictory, and (<ref>) is proved. 
This means that there is a null set N⊂Ω such that for all (x,y)∈ (A∖ N)× (B∖ N), we have Ψ(x)≤Ψ(y), hence ρ f(y)-ρ f(x) ≤ |(Λ f)(y)-(Λ f)(x)|. On the other hand, this inequality trivially holds when (x,y)∈Ω^2∖ (A× B), because the left-hand side is then non-positive. The conclusion ρ (f)≤(Λ f) follows. By definition of ρ, there is a sequence (f_k)_k≥ 1 in K∖{0} such that ρ = lim_k→∞(f_k). Upon extracting a subsequence if needed, we may further assume that (f_k)_k≥ 1 has an almost-sure limit f. Note that f automatically inherits the properties f_∞≤ m and [e^f]=1. If f is not a.s. zero, then f∈ K∖{0} and (f)=ρ, so Lemma <ref> and (<ref>) together yield ρ (f) ≤ (Λ f) ≤ (1-κ) (f). Moreover, Assumption (iii) ensures that the middle term is finite, so that (f)<∞. Therefore, we may safely simplify through by (f) to obtain ρ≤ 1-κ, as desired. Consider now the degenerate case where f=0 almost surely. We can then write e^f_k=1+h_k with h_k_∞≤ e^m, [h_k]=0 and h_k→ 0 as k→∞. But then, a Taylor expansion yields (e^f_k) ∼ 1/2(h_k) ∑_i=1^Mθ_i(T_i^⋆ e^f_k) ∼ 1/2∑_i=1^Mθ_i(T^⋆_i h_k), where the notation a_k∼ b_k means that a_k/b_k→ 1 as k→∞. In particular, (<ref>) becomes ρ = lim_k→∞∑_i=1^Mθ_i(T^⋆_i h_k)/(h_k). By Proposition <ref>, the right-hand is at most 1-κ, and the proof is complete. § APPLICATIONS In this section we address the application of Theorem <ref> to the main examples discussed in Section <ref>. §.§ Gaussian Free Fields: Proof of Theorem <ref> Let us first show that the second part of the theorem follows from the first, namely that once we have κ≥δθ_⋆ for all choices of θ, it follows that for Glauber dynamics one has κ=λ=δ/n. Since θ_⋆=1/n in this case, and, in general, κ≤λ, the claim (<ref>) follows from (<ref>) and the following observation. The Glauber dynamics satisfies λ≤δ/n. By definition, ∀ f∈ L^2, λ (f) ≤ (f,f) = 1/n [f∑_i=1^n(f-_if)]. Let us apply this to the linear observable f(x)=x^⊤ z, where z∈^n is an eigenvector of Δ corresponding to the eigenvalue δ. If u_j(x)=x_j, then _i[u_j](x)=x_j for all j≠ i, and _i[u_j](x)=(Px)_i if j=i. Therefore, for all x∈^n, ∑_i=1^n(f-_i f)(x) = ∑_i=1^n z_i(Δ x)_i = z^⊤Δ x = x^⊤Δ z = δ f(x). Thus, (<ref>) forces λ≤δ/n, as desired. We turn to the proof of (<ref>). Recall the notation introduced in (<ref>), and let ZΩ→Ω denote the identity map on Ω, which forms a random vector with law . For each set A⊂[n], we consider the Markov operator T_A L^2→ L^2 defined by T_A f := [f|Z_j,j∈], where =[n]∖ A. Note that T_A is a self-adjoint, measure-preserving, Markov operator. In order to apply Theorem <ref>, we need to investigate the curvature of the associated transition kernel. A contractive coupling Let us start with an explicit expression for the transition kernel associated to our Markov operator T_A. For all f∈ L^2 and x∈^n, (T_Af)(x) = ∫_^n f(M_A z+( Id-M_A)x)( dz), where M_A is the n× n matrix defined by the blocks [ M_A× A = Id M_A× = -Γ_A× (Γ_×)^-1; M_× A = 0 M_× = 0. ] From the block definition of M_A above, we can readily compute M_AΓ : [ (M_A Γ)_A× A = Γ_A× A-Γ_A× (Γ_×)^-1 Γ_× A, (M_A Γ)_A× = 0,; (M_A Γ)_× A = 0, (M_A Γ)_× = 0. ] The second line implies that the random vector M_A Z is uncorrelated with (Z_j j∈), hence independent of :=σ(Z_j j∈). On the other hand, the fact that M_A× A = Id ensures that the random vector ( Id-M_A)Z is -measurable. Thus, the decomposition Z = ( Id-M_A)Z+ M_A Z, expresses Z as the sum of a -measurable vector and a -independent one. The claim now follows from a standard property of conditional expectation (see, e.g. 
<cit.>). To ensure that T_A has non-negative sectional curvature, we equip ^n with the following distorted metric instead of the usual Euclidean norm. Let ψ denote an eigenvector of Δ with eigenvalue δ. Since P is irreducible with nonnegative entries, by Perron-Frobenius theorem we may choose ψ with positive entries: ψ_i>0 for all i∈[n]. We then define the metric (x,y) := ∑_i=1^n ψ_i | (Δ x)_i -(Δ y)_i| . Let W_1 and W_∞ denote the Wasserstein distances associated with this metric. For any A⊂[n], we have ∀ x,y∈Ω, W_∞(T_A(x,·),T_A(y,·)) ≤ (x,y)-δ∑_i∈ Aψ_i | (Δ x)_i -(Δ y)_i| In particular, for any probability vector (θ_A, A⊂[n]), one has ∀ x,y∈Ω, ∑_A⊂[n]θ_AW_∞(T_A(x,·),T_A(y,·)) ≤ (1-δθ_⋆)(x,y) , where θ_⋆=min_i∑_A∋ iθ_A. The second inequality is an immediate consequence of the first. To prove the first, fix a subset A⊂[n] and two points x,y∈Ω. Recalling that Z∼, Lemma <ref> shows that the random vectors {[ X := ( Id-M_A)x+M_AZ; Y := ( Id-M_A)y+M_AZ. ]. form a coupling of T_A(x,·) and T_A(y,·). Thus, taking z=x-y, the proof boils down to showing that for all z∈^n, ∑_i=1^n ψ_i | [Δ( Id-M_A) z]_i | ≤ ∑_i=1^n ψ_i | (Δ z)_i | - δ∑_i∈ Aψ_i | (Δ z)_i |. To prove this, recall that the matrix M_AΓ computed in the proof of the previous lemma was symmetric, i.e. M_AΓ= Γ M^⊤_A or equivalently, Γ^-1M_A=M^⊤_AΓ^-1. Combining this with the fact that M^2_A=M_A, we obtain Γ^-1M_A = M^⊤_AΓ^-1M_A. In particular, Δ M_A is symmetric and Δ( Id-M_A) =( Id-M^⊤_A)Δ. Next, we observe that Id-M^⊤_A has nonnegative entries. To see this, write Id-M^⊤_A = [ 0 0; (Γ_A^c× A^c)^-1Γ_A^c× A Id ], and note that Γ_A^c× AΔ_A× A + Γ_A^c× A^cΔ_A^c× A=(ΓΔ)_A^c× A = 0, or equivalently (Γ_A^c× A^c)^-1Γ_A^c× A = -Δ_A^c× A(Δ_A× A)^-1. Now, -Δ_A^c× A = P_A^c× A has nonnegative entries, as well as (Δ_A× A)^-1=∑_k=0^∞ (P_A× A)^k . This proves that Id-M^⊤_A has nonnegative entries. As a consequence, [Δ( Id-M_A) z]_i ≤ ∑_j=1^n |( Id-M^⊤_A)_i,j(Δ z)_j | = ∑_j=1^n ( Id-M^⊤_A)_i,j|(Δ z)_j | = |(Δ z)_i |- ∑_j=1^n |(Δ z)_j |(M_A)_j,i . Therefore, ∑_i=1^n ψ_i | [Δ( Id-M_A) z]_i | ≤ ∑_i=1^n ψ_i | (Δ z)_i | - ∑_j=1^n (M_Aψ)_j | (Δ z)_j |. To conclude, observe that M_Aψ= M_AΓΔψ = δ M_AΓψ, and that M_AΓ = [ (Δ_A× A)^-1 0; 0 0 ], where the latter follows from a standard Schur complement computation and (<ref>). Thus, M_Aψ= δ (Δ_A× A)^-1ψ_A, and using (<ref>), one finds (M_Aψ)_j = δ [(Δ_A× A)^-1ψ_A]_j ≥ δ ψ_j , j∈ A , and (M_Aψ)_j = 0 otherwise. Inserting this in (<ref>) we have proved (<ref>). Since W_1(T_A(x,·),T_A(y,·))≤ W_∞(T_A(x,·),T_A(y,·)), Lemma <ref> implies that both Assumptions (i)-(ii) in Theorem <ref> are verified. It remains to check the regularity Assumption (iii). Unfortunately, Assumption (iii) fails, except in the degenerate case where A=[n]. To circumvent this, we use the regularization procedure outlined in Remark <ref>, using the Ornstein-Uhlenbeck semi-group. This will conclude the proof of Theorem <ref>. Ornstein-Uhlenbeck regularization Given a parameter τ∈(0,1), we consider the Markov operator P_τ defined by (P_τ f)(x) := ∫_Ωf(√(1-τ)x+√(τ)z)( dz). Note that P_τ is measure-preserving, because the image of ⊗ under the map (x,z)↦√(1-τ)x+√(τ)z is . An easy change of variable yields the alternative expression (P_τ f)(x) = ∫_Ωp_τ(x,y)f(y)( dy), where the transition kernel p_τΩ×Ω→_+ is explicitly given by p_τ(x,y) := τ^-n/2 exp{√(1-τ)/τx^⊤Γ^-1y-1-τ/2τx^⊤Γ^-1x-1-τ/2τy^⊤Γ^-1y}. The fact that this formula is symmetric in x and y readily guarantees that P_τ is self-adjoint. 
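For concreteness, the following Monte Carlo sketch implements the regularizing kernel P_τ just defined for an arbitrary (illustrative) positive-definite covariance Γ, and checks the measure-preservation property E[P_τ f] = E[f] up to sampling error; the covariance and the test observable are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 3, 0.2

# An arbitrary positive-definite covariance Gamma (purely illustrative).
A = rng.standard_normal((n, n))
Gamma = A @ A.T + n * np.eye(n)
L = np.linalg.cholesky(Gamma)

def sample(size):
    """Draws from the Gaussian measure with covariance Gamma."""
    return (L @ rng.standard_normal((n, size))).T

def f(x):
    """An arbitrary bounded test observable."""
    return np.cos(x[:, 0]) + np.tanh(x[:, 1] * x[:, 2])

def P_tau(x, n_inner=200):
    """Monte Carlo evaluation of (P_tau f)(x) = E_Z[f(sqrt(1-tau) x + sqrt(tau) Z)], Z ~ N(0, Gamma)."""
    out = np.zeros(len(x))
    for _ in range(n_inner):
        out += f(np.sqrt(1 - tau) * x + np.sqrt(tau) * sample(len(x)))
    return out / n_inner

X = sample(100_000)
print("E[f]       (Monte Carlo):", f(X).mean())
print("E[P_tau f] (Monte Carlo):", P_tau(X[:2000]).mean())   # measure preservation, up to error
```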
We now verify that (P_τ)_0<τ<1 satisfies the three properties in Remark <ref>. For the continuity, we need to check that for any f∈ L^2, we have P_τ f f. The claim is clear when f is continuous and bounded. In the general case, consider a sequence (f_k)_k≥ 1 of bounded measurable functions that converges to f, and write P_τ f-f_2 ≤ P_τ f_k-f_k_2+P_τ f-P_τ f_k_2+f_k-f_2 ≤ P_τ f_k-f_k_2+2f_k-f_2. Taking first a lim sup as τ→ 0 and then a limit as k→∞ concludes the proof. The second property in Remark <ref> is the regularity (P_τ f)<∞ for all f∈ L^∞, τ>0. Since our metric is comparable to Euclidean norm, this follows from the well known regularizing property of the Ornstein-Uhlenbeck semi-group, see e.g. <cit.>. It remains to check the third property in Remark <ref>, namely non-negative sectional curvature. For every x,y∈Ω, we have W_∞(P_τ(x,·),P_τ(y,·)) ≤ √(1-τ)(x,y). With Z∼ P, we deduce from the definition of P_τ that the random vectors {[ X := √(1-τ)x+√(τ)Z; Y := √(1-τ)y+√(τ)Z. ]. form a coupling of P_τ(x,·) and P_τ(y,·). Therefore, (X,Y) = √(1-τ) ∑_i=1^nψ_i |[Δ(x-y)]_i|=√(1-τ) (x,y) , which implies the desired property. If instead of the gaussian free field with covariance Γ=( Id - P)^-1 as above we consider a Gaussian measure (<ref>) with an arbitrary positive definite Γ, then a minor modification of our argument yields the following estimate. Let Σ be the matrix Σ := ∑_A⊂[n]θ_A Γ^1/2M^⊤_AΓ^-1M_AΓ^1/2, where M_A is the matrix defined in (<ref>), and call σ = σ(Γ,θ) the smallest eigenvalue of Σ. Then, taking the metric (x,y) := √((x-y)^⊤Γ^-1(x-y)), and adapting our previous argument to this case, one finds W_∞(T_A(x,·),T_A(y,·)) ≤(x,y), for all A⊂[n], and ∑_A⊆[n]θ_A W^2_∞(T_A(x,·),T_A(y,·)) ≤ (1-σ) ^2(x,y). From the crude bound W_1≤ W_∞ and Cauchy-Schwarz, one has that Assumptions (i) and (ii) of Theorem <ref> are satisfied with κ= 1-√(1-σ). Moreover, it is not difficult to check, using a linear test function in (<ref>), that κ≤λ≤σ. In conclusion, the above argument yields the inequality 1-√(1-σ) ≤ κ ≤ σ . Since 1-√(1-σ)≥σ/2 this estimate is sharp up to a factor at most 2. As mentioned in Remark <ref>, it should be possible to improve this argument to obtain the sharp result κ=σ for all positive definite Γ and all probability vector θ. The identity κ=σ could be in fact obtained via Theorem <ref> using known B-L inequalities for Gaussian measures, see <cit.>. §.§ Permutations: proof of Theorem <ref> Here is the uniform measure over Ω= S_n, the set of permutations of [n], see (<ref>). A permutation σ∈Ω is identified with the configuration (σ_i)_i∈[n]. To prove Theorem <ref>, we are going to apply Theorem <ref> with Markov operators given by the conditional expectations T_A L^2→ L^2, A⊂[n], defined by T_A f := _A[f] = [f|σ_j,j∈], where, as usual, =[n]∖ A. We equip Ω=S_n with the transposition distance (σ,η), σ,η∈Ω, which is the minimal number of transpositions needed to turn σ into η. Fix a pair of permutations (σ,η) and let σ' and η' denote the random permutations with distribution T_A(σ,·) and T_A(η,·). A coupling of (σ',η') can be obtained by simply replacing the pair (σ,η) by the pair (σ',η')=(σπ,ηπ) where π is a uniformly random permutation of the labels {j∈[n]: j∈ A}. This produces a valid coupling with the property that (σ',η')=(σ,η). Thus, hypothesis (i) of Theorem <ref> is satisfied with ℓ≡ 1. To check the positive curvature, we use a finer coupling. 
We first observe that the metric (σ,η) is generated by pairs of configurations that differ by exactly one swap, and thus using a telescopic decomposition, using the so-called Gluing Lemma (see, e.g., <cit.>), we may restrict to a pair (σ,η) such that σ_ℓ=η_ℓ, for all ℓ≠ i,j, and (σ_i,σ_j)=(η_j,η_i), for a fixed pair of distinct indexes i,j∈[n]. This reduction step is also known as the path coupling lemma <cit.>. Given such a pair (σ,η), the coupling of (σ',η') is described as follows. Let π be a uniformly random permutation of the labels {j∈[n]: j∈ A}. If either i or j (or both) are in then we update as before by setting (σ',η')=(σπ,ηπ). If instead i,j are both in A, then we ensure coalescence by replacing (σ,η) with (σπ,σπ). This shows that ∑_A⊂ [n]θ_A W_1(T_A(σ,·),T_A(η,·)) ≤ ∑_A⊂ [n]θ_A 1_{A⊅{i,j}}≤ 1-θ_⋆⋆, where θ_⋆⋆:=min_1≤ i<j≤ n∑_ A⊃{i,j}θ_A. Thus, we have checked the validity of hypothesis (ii) of Theorem <ref> with κ=θ_⋆⋆. This ends the proof of Theorem <ref>. §.§ Contractive coupling for ℓ^n_p–spheres Here we present a contractive coupling towards the proof of Conjecture <ref>. As in (<ref>), for any p>0, we take Ω=^n-1_p, with the associated Borel σ-algebra, and write for the cone measure, i.e. the probability measure obtained as in (<ref>). As usual, we write Z:Ω↦Ω for the identity map and take T_Af := E[f|Z_A^c] for the conditional expectation of f∈ L^1 given Z_A^c. We introduce the distance on Ω given by (x,y)= ∑_i=1^n||x_i|^p - |y_i|^p| + ∑_i=1^n 1_x_iy_i<0 . Under this metric, T_A has nonnegative sectional curvature. More precisely, For all p>0 for all x, y∈Ω, and all A⊂ [n], W_∞(T_A(x,·),T_A(y,·)) = ∑_i∈ A^c||x_i|^p -|y_i|^p|+∑_i∈ A^c 1_{x_iy_i<0} +| ∑_i∈ A(|x_i|^p-|y_i|^p)| . In particular, W_∞(T_A(x, ·), T_A(y, ·)) ≤(x, y). Moreover, the same identity applies to W_1. Fix x, y ∈Ω and consider any coupling (X, Y ) of T_A(x, ·) and T_A(y, ·). Since only the coordinates in A are modified, we have almost-surely (X,Y) = ∑_i∈ A^c||x_i|^p -|y_i|^p|+∑_i∈ A^c 1_{x_iy_i<0} + ∑_i∈ A||X_i|^p-|Y_i|^p| +∑_i∈ A^c 1_{x_iy_i<0} ≥∑_i∈ A^c||x_i|^p -|y_i|^p|+∑_i∈ A^c 1_{x_iy_i<0} + |∑_i∈ A(|X_i|^p-|Y_i|^p)| =∑_i∈ A^c||x_i|^p -|y_i|^p|+∑_i∈ A^c 1_{x_iy_i<0} + |∑_i∈ A(|x_i|^p-|y_i|^p)| , where the last line uses the fact that the norm is conserved X_A_p = x_A_p and Y_A_p = y_A_p. Now, the middle inequality is an equality as soon as X_A is a positive multiple of Y_A. The latter fact is almost-surely the case under the following coupling: (X_i,Y_i) = (x_A_pZ_A_p Z_i , y_A_pZ_A_p Z_i) if i∈ A (x_i,y_i) if i∈ A^c where we recall that Z has law . The fact that the above is a valid coupling follows from the resampling property of the Dirichlet-type random variable with distribution given by (<ref>). Thus we have found an optimal coupling, and the proof is complete. We now fix a probability vector θ=(θ_A, A⊂ [n]), and we look for a constant κ as large as possible, such that for all x, y ∈Ω, ∑_A⊂[n]θ_AW_1 (T_A(x,·), T_A(y,·)) ≤ (1-κ) (x, y) . The answer turns out to be remarkably explicit. The optimal constant in (<ref>) is exactly κ=θ_⋆⋆=min_i<j∑_A⊃{i,j}θ_A. By the triangle inequality, and using the path coupling lemma <cit.>, the validity of the inequality (<ref>) can be checked separately on two complementary cases: the “same sign" case where x_iy_i ≥ 0 for all i, and the “sign flip" case where x, y differ at a single coordinate (its sign being the only difference). 
More precisely, the optimal constant that we are looking for is just the minimum of the two optimal constants obtained by restricting our attention to same-sign pairs, or to sign-flip pairs, respectively. The sign-flip case is actually easy: if x, y differ at a single coordinate i, then (x, y) = 1 and W_1(T_A(x,·), T_A(y,·) = 1_i∈ A^c, so that ∑_A⊂[n]θ_AW_1 (T_A(x,·), T_A(y,·)) ≤ (1-θ_⋆) (x, y) , with equality when i realizes the minimum in the definition of θ_⋆=min_i∑_A∋ iθ_A. Let us now turn to the “same sign" case: given a pair (x,y) ∈Ω^2, with x≠ y and x_iy_i ≥ 0 for all i, we introduce two probability vectors μ,ν on [n], with disjoint supports, defined by μ_i:=2(|x_i|^p -|y_i|^p)_+/∑_i=1^n||x_i|^p - |y_i|^p| , ν_i:=2(|y_i|^p -|x_i|^p)_+/∑_i=1^n||x_i|^p - |y_i|^p| , where (a)_+=max{a,0} denotes the positive part of a∈. Using Lemma <ref>, we then find W_1 (T_A(x,·), T_A(y,·))/(x,y) = 1- ∑_i∈ A||x_i|^p-|y_i|^p|/∑_i=1^n||x_i|^p - |y_i|^p| + |∑_i∈ A(|x_i|^p-|y_i|^p)| /∑_i=1^n||x_i|^p - |y_i|^p| = 1- μ(A)+ν(A)-|μ(A)-ν(A)|/2 = 1- min{μ(A),ν(A)} . Consequently, ∑_A⊂[n]θ_AW_1 (T_A(x,·), T_A(y,·))/(x,y) ≤ 1- ∑_A⊂[n]θ_Aμ(A)ν(A) = 1- ∑_A⊂[n]θ_A∑_i∈ A,j∈ Aμ_iν_i≤ 1- θ_⋆⋆. Moreover, the two inequalities appearing in this computation are both equalities if x and y are the i-th and j-th vectors of the canonical basis of ^n, with (i,j) being any pair that realizes the minimum in the definition of θ_⋆⋆. Thus, the optimal constant in the “same sign" optimization problem is precisely θ_⋆⋆. To conclude, it remains to note the simple fact that θ_⋆⋆≤θ_⋆. As we mentioned in Section <ref>, Lemma <ref> would be sufficient to prove Conjecture <ref> if we could solve the technical problem of replacing Assumption (iii) in Theorem <ref> by a suitable regularizing procedure as outlined in Remark <ref>. §.§ Down-Up walk on uniform n-sets: proof of Theorem <ref> To prove the theorem we apply our general result from Theorem <ref>. In fact, it will be sufficient to apply it for each k separately, and then obtain (<ref>) by summing over k. Given X,Y∈Ω we write |X∩ Y| for the number of elements in X∩ Y, and define (X,Y)= n - |X∩ Y| , for the number of discrepancies in the two sets. Notice that this defines a distance in Ω. Indeed, we can identify X∈Ω with the function η_X:↦{0,1} such that η_X(x) = 1_x∈ X and in this representation one has (X,Y)= 1/2∑_x∈ 1_η_X(x) ≠η_Y(x). Therefore, Theorem <ref> is an immediate consequence of Theorem <ref> and the following lemma. For any k=1,…,n, for any X,Y∈Ω, W_∞(T_k(X,·),T_k(Y,·))≤(X,Y), and W_1(T_k(X,·),T_k(Y,·)) ≤(1- k/n)(1- k/N-(n-k))(X,Y) . We want to estimate [(X',Y')], where (X',Y') is a suitable coupling of the laws T_k(X,·) and T_k(Y,·) respectively. By the path coupling lemma <cit.> we may restrict to the case where X,Y∈Ω is an arbitrary pair such that (X,Y)=1. Thus, there exist x,y∈ such that x≠ y, x∈ X∩ Y^c, y∈ Y∩ X^c and X∩ Y=X∖{x}=Y∖{y}. Let X_-k denote a uniformly random subset of X with cardinality n-k. There are nk possible choices, n-1k-1 of which do not contain x. If x∉ X_-k then we may set Y_-k=X_-k and thus define (X',Y')=(X',X'), where X' is a uniformly random supset of X_-k with cardinality n. If instead x∈ X_-k, we may take Y_-k=(X_-k∖{x})∪{y} and we construct (X',Y') as follows. Let X'=X_-k∪ X^k, where X^k is a uniformly random subset of ∖ X_-k with cardinality k. There are N-(n-k)k possible choices, N-(n-k)-1k-1 of which contain y. If y∉ X^k, then we set Y'=(X'∖{x})∪{y}. If instead y∈ X^k, then we set Y'=X'. 
It is not hard to check that the above defines a valid coupling of (X',Y'), such that (X',Y')=(X,Y)=1 with probability (y∉ X^k) =(1- k/n)(1- k/N-(n-k)) , while with the remaining probability one has (X',Y')=0. In particular, it follows that W_∞(T_k(X,·),T_k(Y,·))≤(X,Y) , and [(X',Y')]≤(1- k/n)(1- k/N-(n-k)). §.§ Langevin diffusion: proof of Theorem <ref> The proof is an immediate application of Theorem <ref>. Given x,y∈^n, we may couple two Langevin diffusions X and Y starting from X_0=x and Y_0=y by using the same Brownian motion B=(B_t)_t≥ 0, so that d‖X_t-Y_t‖^2 = -2⟨ X_t-Y_t,∇ V(X_t)-∇ V(Y_t)⟩ dt ≤ -2ρ‖X_t-Y_t‖^2 dt, where the second line uses the ρ-convexity (<ref>). Thus, the Euclidean distance t↦‖X_t-Y_t‖ decays at rate at least ρ, almost-surely. This shows that for all t≥ 0 and all x,y∈^n, W_∞(P_t(x,·),P_t(y,·)) ≤ e^-ρ t‖x-y‖. We may now apply the case M=1 of Theorem <ref> to the operator T=P_t, for a fixed t>0. Since P_t^⋆=P_t and W_1≤ W_∞, Assumptions (i)-(ii) hold with ℓ=e^-ρ t and κ=1-e^-2ρ t. Moreover, the regularity property (iii) classically holds in this setting (see, e.g., <cit.>). The desired conclusion readily follows.
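The synchronous coupling in this argument is straightforward to simulate. The sketch below is illustrative only: the quadratic potential, the dimension and the step size are arbitrary choices, and the Euler-Maruyama discretization introduces a small error. It integrates the two coupled diffusions driven by the same Brownian increments and compares ‖X_T-Y_T‖ with e^{-ρ T}‖X_0-Y_0‖.

```python
import numpy as np

# Synchronous coupling of two Langevin diffusions in a rho-convex quadratic potential
# V(x) = 0.5 x^T H x, integrated with an Euler-Maruyama scheme (illustrative choices).
rng = np.random.default_rng(4)
n, dt, T_final = 5, 1e-3, 2.0

B = rng.standard_normal((n, n))
H = B @ B.T + np.eye(n)                        # Hess V = H >= rho * Id
rho = np.linalg.eigvalsh(H).min()

x = rng.standard_normal(n)
y = rng.standard_normal(n)
d0 = np.linalg.norm(x - y)

for _ in range(int(T_final / dt)):
    dB = np.sqrt(dt) * rng.standard_normal(n)  # the SAME Brownian increment for both copies
    x = x - (H @ x) * dt + np.sqrt(2.0) * dB
    y = y - (H @ y) * dt + np.sqrt(2.0) * dB

print("||X_T - Y_T||            :", np.linalg.norm(x - y))
print("e^{-rho T} ||X_0 - Y_0||  :", np.exp(-rho * T_final) * d0)
```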
http://arxiv.org/abs/2407.13176v1
20240718053044
Geometric Data Fusion for Collaborative Attitude Estimation
[ "Yixiao Ge", "Behzad Zamani", "Pieter van Goor", "Jochen Trumpf", "Robert Mahony" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Geometric Data Fusion for Collaborative Attitude Estimation Yixiao Ge Systems Theory and Robotics Group School of Engineering Australian National University Canberra, Australia Behzad Zamani Department of Electrical and Electronic Engineering University of Melbourne Melbourne, Australia Pieter van Goor Robotics and Mechatronics (RaM) Group EEMCS Faculty University of Twente Enschede, The Netherlands Jochen Trumpf School of Engineering Australian National University Canberra, Australia Robert Mahony Systems Theory and Robotics Group School of Engineering Australian National University Canberra, Australia Received X XX, XXXX; accepted X XX, XXXX ===================================================================================================================== § ABSTRACT In this paper, we consider the collaborative attitude estimation problem for a multi-agent system. The agents are equipped with sensors that provide directional measurements and relative attitude measurements. We present a bottom-up approach where each agent runs an extended Kalman filter (EKF) locally using directional measurements and augments this with relative attitude measurements provided by neighbouring agents. The covariance estimates of the relative attitude measurements are geometrically corrected to compensate for relative attitude between the agent that makes the measurement and the agent that uses the measurement before being fused with the local estimate using the convex combination ellipsoid (CCE) method to avoid data incest. Simulations are undertaken to numerically evaluate the performance of the proposed algorithm. § INTRODUCTION The problem of collaborative state estimation over sensor networks has drawn significant attention in the past 20 years <cit.>. In this problem, different agents share measurements and state-estimates to improve overall state estimation. Sharing data in this way, however, introduces the possibility of data incest <cit.>. To see this, consider a network of individual estimators each estimating some states while communicating with other nodes on the network. Information received from other agents will depend on information transmitted by the agent itself in preceding communications, potentially reinforcing its own hypothesis and increasing the risk of overconfidence in the resulting state estimates <cit.>.
To overcome these challenges there are two main approaches: a top down approach where the full state estimation is formulated as a joint estimation problem and then the computation is decentralised to each node (<cit.>), and the bottom up approach, where each agents runs an independent estimator locally and fuses data from other agents taking precautions to avoid data incest <cit.>. The key enabling step in the bottom up approach is a methodology to provide safe fusion of correlated data into a local agent state estimation such that it retains the common uncertainty of the original random variables. This problem has been studied since the 60s <cit.>. In more recent work (<cit.>, <cit.>) Julier and Uhlmann proposed the Covariance Intersection (CI) algorithm which restricts the fusion problem to a family of convex combinations of the inverse covariance matrices and is the most commonly used data fusion method in multi-agent problems. The CI algorithm, however, is known to be too conservative in certain situations <cit.>. The Inverse Covariance Intersection (ICI) method <cit.> computes the maximum possible common information shared by the estimates to be fused, and is known to be less conservative than the CI method. An alternative solution is the Convex Combination Ellipsoid (CCE) fusion method which arises from the set-theoretic fusion technique <cit.>. The CCE method shares a similar structure with CI, however it improves the tightness of the fusion result while avoiding unnecessary uncertainties as the byproduct of the fusion process <cit.>. All these fusion algorithms are originally formulated in global Euclidean space, and there have been many attempts to adapt these classical methods to systems that live on smooth manifolds, particularly Lie groups. One popular approach is to consider the fusion problem as finding the optimal mode of the posterior distribution by solving an optimization problem <cit.>. In <cit.>, the authors solved the fusion problem by posing a set of algebraic equations using the Baker-Campbell-Hausdorff formula. Recent work in equivariant filter theory <cit.> and geometric extended Kalman filtering <cit.> has provided a strong understanding of the geometry of filtering and data fusion. In particular, these works demonstrate that it matters in which coordinates the generative noise model for a measurement is posed and provides formulae and methodology to transform covariance from one set of coordinates to another <cit.>. In this paper, we consider a bottom up approach to the problem of collaborative attitude estimation, where each node estimates its own attitude as well as taking relative measurements of other nodes. The problem is posed on the Lie group (3) representing the attitude of a single agent of interest, termed the ego-agent. The information used are local directional measurements, angular velocity, and a noisy relative attitude measurement of the ego-agent as observed by a neighbouring altruistic-agent along with the altruistic agent's own state estimate (estimated attitude and its error covariance). This relative attitude measurement, combined with the altruistic agent's state estimate, is effectively an attitude measurement of the ego-agent, and can be fused into the ego-agent's state estimation, at the appropriate point, in the filter algorithm. 
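For readers unfamiliar with these fusion rules, the following sketch implements the classical covariance intersection combination described above (not the CCE rule adopted in this paper): the fused information matrix is the convex combination ω P_a^{-1}+(1-ω)P_b^{-1}, with ω chosen here to minimise the trace of the fused covariance. The function name and the numerical values are purely illustrative.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=101):
    """Classical covariance intersection: fuse two estimates of the same quantity whose
    cross-correlation is unknown, via P^{-1} = w P_a^{-1} + (1-w) P_b^{-1}; here w is
    chosen on a grid to minimise the trace of the fused covariance."""
    I_a, I_b = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I_a + (1.0 - w) * I_b)
        x = P @ (w * I_a @ x_a + (1.0 - w) * I_b @ x_b)
        if best is None or np.trace(P) < best[-1]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

# Two (possibly correlated) 2-D estimates of the same state; values purely illustrative.
x_a, P_a = np.array([1.0, 0.0]), np.array([[2.0, 0.5], [0.5, 1.0]])
x_b, P_b = np.array([0.8, 0.2]), np.array([[1.0, -0.3], [-0.3, 3.0]])
x_f, P_f = covariance_intersection(x_a, P_a, x_b, P_b)
print(x_f)
print(P_f)
```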
However, in a collaborative estimation scenario, the altruistic agent's state estimate is itself dependent on shared information from the ego-agent, and this relative pose measurement should not be treated as an independent measurement. Furthermore, the attitude measurement is observed from the perspective of the altruistic agent and is written in these coordinates. The covariance of the measurement must be transformed into the ego-state coordinates to avoid incorrect inference. The contribution of the paper is to combine the geometric modification of the covariance into the correct coordinates with the CCE fusion method to obtain a high-performance bottom-up collaborative state estimation scheme for multi-agent attitude estimation. We provide a Monte-Carlo simulation to demonstrate the performance of the proposed estimation algorithm together with the geometric modifications.

§ PRELIMINARIES
§.§ Special orthogonal group SO(3)
The attitude of an agent is represented as a rotation matrix R in the special orthogonal group, R ∈ SO(3). The identity element of SO(3), denoted 𝕀, is the identity matrix. Given arbitrary X, Y ∈ SO(3), the left and right translations are denoted by L_X(Y) := XY and R_X(Y) := YX. The Lie algebra 𝔰𝔬(3) of SO(3) consists of all skew-symmetric matrices of the form u^∧ = [ 0 -u_3 u_2; u_3 0 -u_1; -u_2 u_1 0 ], and is isomorphic to the vector space ℝ^3. We use the wedge (·)^∧: ℝ^3 → 𝔰𝔬(3) and vee (·)^∨: 𝔰𝔬(3) → ℝ^3 operators to map between the Lie algebra and the vector space. The Adjoint map for the group SO(3), Ad_X: 𝔰𝔬(3) → 𝔰𝔬(3), is defined by Ad_X[u^∧] = X u^∧ X^⊤, for every X ∈ SO(3) and u^∧ ∈ 𝔰𝔬(3). Given particular wedge and vee maps, the Adjoint matrix is defined as the map Ad_X^∨: ℝ^3 → ℝ^3, Ad_X^∨ u = (Ad_X u^∧)^∨. The adjoint map for the Lie algebra, ad_u^∧: 𝔰𝔬(3) → 𝔰𝔬(3), is given by ad_u^∧ v^∧ = [u^∧, v^∧]. We define the adjoint matrix ad^∨_u: ℝ^3 → ℝ^3 to be ad^∨_u v = [u^∧, v^∧]^∨. Let exp_G: 𝔰𝔬(3) → SO(3) denote the matrix exponential (the subscript G denotes the SO(3) group). In order to improve the analogy to the fusion literature, which is usually written in ℝ^n coordinates, we will use the ⊞ (`boxplus') operator for the exponential map, X ⊞ u = X exp_G(u^∧), for X ∈ SO(3) representing the state and u ∈ ℝ^3 representing a certain noise process. Let SO^∘(3) ⊂ SO(3) be the subset of SO(3) where the exponential map is invertible, and note that SO^∘(3) is almost all of SO(3), excluding only those points with a rotation of π radians. The logarithm maps log_G: SO^∘(3) → 𝔰𝔬(3) and log_G^∨: SO^∘(3) → ℝ^3 are well defined on SO^∘(3). The Jacobian J_u^∧: 𝔰𝔬(3) → 𝔰𝔬(3) is defined to be the left-trivialised directional derivative of exp_G: 𝔰𝔬(3) → SO(3) at a point u^∧ ∈ 𝔰𝔬(3). Given an arbitrary w^∧ ∈ 𝔰𝔬(3), it satisfies J_u^∧[w^∧] = D L_{exp_G(-u^∧)} · D exp_G(u^∧)[w^∧], where the tangent space T_{exp_G(u^∧)} SO(3) is isomorphic to 𝔰𝔬(3) via left trivialisation. Equivalently, D exp_G(u^∧)[w^∧] = D L_{exp_G(u^∧)} J_u^∧[w^∧] ∈ T_{exp_G(u^∧)} SO(3). Its matrix form, denoted by J_u ∈ ℝ^3×3, is given by <cit.> J_u := I_3 - (1 - cos‖u‖)/‖u‖^2 u^∧ + (‖u‖ - sin‖u‖)/‖u‖^3 (u^∧)^2. For an arbitrary u^∧ ∈ 𝔰𝔬(3), the inverse of its Jacobian is given by J_u^-1 := I_3 + 1/2 u^∧ + (1/‖u‖^2 - (1 + cos‖u‖)/(2‖u‖ sin‖u‖)) (u^∧)^2.

§.§ Concentrated Gaussians on SO(3)
We use the concept of a concentrated Gaussian to model distributions on SO(3). For a random variable X ∈ SO(3), the probability density function is defined as p(X; X̂, μ, Σ) = α exp(-1/2 (log_G^∨(X̂^-1X) - μ)^⊤ Σ^-1 (log_G^∨(X̂^-1X) - μ)), where α is a normalizing factor. The stochastic parameters are μ ∈ ℝ^3, a mean vector in local coordinates, and Σ ∈ 𝕊_+(3), a positive-definite symmetric 3×3 covariance matrix parameter. The geometric parameter X̂ ∈ SO(3) is termed the reference point and plays the role of the origin of the local coordinates.
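For readers who wish to experiment with these constructions, the operators above map directly onto a few lines of numerical code. The following is a minimal numpy sketch (our own illustration, not code released with the paper) of the wedge, vee, exponential, logarithm and Jacobian maps; the function names are ours.

```python
import numpy as np

def wedge(u):
    """Map u in R^3 to the skew-symmetric matrix u^ in so(3)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def vee(U):
    """Inverse of wedge: map a skew-symmetric matrix to R^3."""
    return np.array([U[2, 1], U[0, 2], U[1, 0]])

def exp_SO3(u):
    """Matrix exponential of u^ via Rodrigues' formula."""
    th = np.linalg.norm(u)
    if th < 1e-10:
        return np.eye(3) + wedge(u)
    K = wedge(u / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def log_SO3(R):
    """Logarithm map returning a vector in R^3 (valid away from pi rotations)."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if th < 1e-10:
        return vee(R - np.eye(3))
    return th / (2.0 * np.sin(th)) * vee(R - R.T)

def jacobian(u):
    """Matrix form J_u of the left-trivialised Jacobian of the exponential."""
    th = np.linalg.norm(u)
    U = wedge(u)
    if th < 1e-6:
        return np.eye(3) - 0.5 * U
    return (np.eye(3)
            - (1.0 - np.cos(th)) / th**2 * U
            + (th - np.sin(th)) / th**3 * (U @ U))

def jacobian_inv(u):
    """Closed-form inverse of J_u, matching the expression in the text."""
    th = np.linalg.norm(u)
    U = wedge(u)
    if th < 1e-6:
        return np.eye(3) + 0.5 * U
    return (np.eye(3) + 0.5 * U
            + (1.0 / th**2 - (1.0 + np.cos(th)) / (2.0 * th * np.sin(th))) * (U @ U))
```

With these helpers, the boxplus operation is simply `X @ exp_SO3(u)`, and the local coordinates used by the concentrated Gaussian are obtained from `log_SO3(Xhat.T @ X)`.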
We will term a concentrated Gaussian zero-mean if μ ≡ 0. In this case the distribution corresponds to the classical concentrated Gaussian <cit.> where one can think of the reference point X̂ ∈ SO(3) as a sort of `geometric' mean. We will write X ∼ N_X̂(μ, Σ) for the random variable X ∈ SO(3). Lemma: Given an arbitrary concentrated Gaussian distribution p(X) = N_X_1(μ_1, Σ_1) on SO(3), the zero-mean concentrated Gaussian distribution q(X) = N_X_2(0, Σ_2) with parameters X_2 = X_1 exp_G(μ_1^∧), Σ_2 = J_μ_1 Σ_1 J_μ_1^⊤ minimises the Kullback-Leibler divergence of p(X) with respect to q(X) up to second-order linearisation error. The Kullback-Leibler divergence between p(X) and q(X) is given by KL(p||q) = E_p[log(p) - log(q)] = C_p + n/2 log(2π) + 1/2 log det(Σ_2) + 1/2 E_p[log_G^∨(X_2^-1X)^⊤ Σ_2^-1 log_G^∨(X_2^-1X)], where C_p is the negative entropy of p(X). Taking the derivative of KL(p||q) with respect to Σ_2 in the direction u yields D_Σ_2 KL(p(X)||q(X))[u] = 1/2 tr(Σ_2^-1 u - Σ_2^-1 E_p[log_G^∨(X_2^-1X) log_G^∨(X_2^-1X)^⊤] Σ_2^-1 u). The critical point is given by Σ_2 = E_p[log_G^∨(X_2^-1X) log_G^∨(X_2^-1X)^⊤]. Defining ϕ_1: SO(3) → ℝ^3 and ϕ_2: SO(3) → ℝ^3 as ϕ_1(X) := log_G^∨(X_1^-1X) - μ_1, ϕ_2(X) := log_G^∨(exp_G(-μ_1^∧) X_1^-1X), one has ϕ_2(X) = log_G^∨(exp_G(μ_1^∧)^-1 exp_G((ϕ_1(X) + μ_1)^∧)). Taking the Taylor series of (<ref>) at ϕ_1(X) = 0 up to first order yields ϕ_2(X) ≈ D log_G^∨(𝕀) ∘ D L_{exp_G(μ_1^∧)^-1} ∘ D exp_G(μ_1^∧) [ϕ_1(X)] = J_μ_1 ϕ_1(X). Substituting the result into (<ref>): Σ_2 = E_p[ϕ_2(X) ϕ_2(X)^⊤] ≈ E_p[J_μ_1 ϕ_1(X) (J_μ_1 ϕ_1(X))^⊤] = J_μ_1 E_p[ϕ_1(X) ϕ_1(X)^⊤] J_μ_1^⊤ = J_μ_1 Σ_1 J_μ_1^⊤, where the last equality follows from the definition of Σ_1 = E_p[ϕ_1(X) ϕ_1(X)^⊤]. Corollary: Given an arbitrary zero-mean concentrated Gaussian distribution p(X) = N_X_1(0, Σ_1), choose and fix X_2 ∈ SO(3); then the concentrated Gaussian q(X) = N_X_2(μ_2, Σ_2) with parameters μ_2 = log_G^∨(X_2^-1X_1), Σ_2 = J_μ_2^-1 Σ_1 J_μ_2^-⊤ minimises the Kullback-Leibler divergence with p(X) up to second-order linearisation error. Corollary <ref> follows directly from Lemma <ref>.

§ PROBLEM FORMULATION
The problem we target is to design a fully distributed algorithm to estimate the absolute attitude of individual agents collaboratively in a multi-agent system. Each agent is equipped with an onboard inertial measurement unit (IMU) that provides bias-free angular velocity ω ∈ ℝ^3 in the body frame. Under a non-rotating, flat-earth assumption, the deterministic system kinematics are given by R_i(t+1) = R_i(t) exp_G(Δt ω_i(t)^∧). Note that the subscripted index i refers to the i-th agent and the index t refers to the time step. We will drop the time step notation where it is clear in order to simplify notation. Agent i also has extrinsic sensors, such as a magnetometer, that measure known directions (e.g. the magnetic field) in the body frame. The ℓ-th direction measurement z_i^ℓ for agent i is given by z_i^ℓ = R_i^⊤ d_ℓ, where d_ℓ for ℓ = 1, …, n are a collection of known reference directions. Using these measurements, the agent can run a filter-based algorithm locally to estimate its own state and the associated covariance <cit.><cit.>. In addition, each agent can communicate with agents in its neighbourhood and has a sensor capable of measuring the relative attitude of its neighbouring agents. If two agents i and j have states R_i, R_j ∈ SO(3), then the relative state R_i^j of agent i with respect to agent j is defined to be R_i^j := R_j^-1 R_i ∈ SO(3). The relative state can be thought of as the attitude of agent i expressed as perceived by a sensor on agent j. Alternatively, consider left translation of the whole space by L_R_j^-1. Then 𝕀 = L_R_j^-1(R_j) = R_j^-1 R_j = I_3 and R_i^j = L_R_j^-1(R_i) = R_j^-1 R_i.
That is, the relative state is the coordinates of agent i with respect to a new group parametrisation that places agent j at the identity attitude. A relative state sensor on agent j may directly measure the physical relative attitude of agent i or may measure the relative angular difference between the states. Physical attitude measurements are associated with directly measuring the direction cosines that make up the entries of the matrix R_i^j ∈ SO(3). Such measurements are typically inner products like (z_i^ℓ)^⊤ d_ℓ that correspond to cosines of angles between known or measured vectors. For a physical relative attitude measurement, an appropriate model is y_i^j = R_j^-1 R_i ⊞ κ_j, κ_j ∼ N(0, Q_j). That is, the generative measurement noise model is a zero-mean concentrated Gaussian process y_i^j ∼ N_{R_i^j}(0, Q_j). Conversely, a relative angular sensor measures the underlying angle from one attitude to another. For example, if two attitudes are connected through a physical gimbal system then the sensor will measure Euler angles between the two states. Another example is when a vision system or similar system estimates the relative axis of rotation and relative angle from itself to another agent rather than the direction cosines <cit.>. The measurement in this case, which we denote by z to distinguish it from (<ref>), is appropriately modelled by z_i^j = exp_G(μ_i^j + κ_j^∧), κ_j ∼ N(0, Q_j), where μ_i^j = log_G(R_j^-1 R_i) ∈ 𝔰𝔬(3) is the angle-axis representation, μ_i^j = θ a^∧ for a rotation of θ rad around an axis a ∈ S^2. In this case, the generative noise measurement model is a concentrated Gaussian z_i^j ∼ N_𝕀(μ_i^j, Q_j) with non-zero mean.

§ ALGORITHM
We propose a filter-based algorithm to solve the attitude estimation problem. Each agent estimates its own state R̂ ∈ SO(3) and the associated covariance P̂ ∈ 𝕊_+(3). We use the subscript and superscript for the agent that is being estimated and the agent that is making the estimation, respectively. Fig <ref> demonstrates the overview of the different steps in the proposed filter algorithm for an ego-agent (shown in blue) and an altruistic agent (shown in red). In this section, we focus on the details of the filter running on the ego-agent i, which can be separated into three stages: predict (using IMU input), update (using directional measurement) and fusion (using relative measurements).

§.§ Predict and update
The filter follows the conventional EKF design methodology, including the predict and update step. The information state of the filter, which approximates the true distribution of the system state on SO(3), is a concentrated Gaussian distribution, given by R_i(t) ∼ N_{R̂_i^i(t)}(0, P̂^i_i(t)). When an IMU input is available, it will be used to propagate the state estimate R̂_i^i, which also acts as the reference point in the information state of the filter, using the full nonlinear model (<ref>). The covariance estimate P̂^i_i will be propagated by linearising the system model. When the agent receives the directional measurement, under the assumption that every extrinsic measurement is independent, we can perform the standard Kalman fusion in logarithmic coordinates, followed by a covariance reset step which transforms the posterior into a zero-mean concentrated Gaussian (<cit.>). For detailed implementation of the filter, see <cit.>.

§.§ Fusion using relative measurements:
The core contribution of this work lies in the preprocessing and fusion steps shown in Figure <ref>.
When agent j makes a relative attitude measurement y_i^j or z_i^j of agent i, it broadcasts the measurement and the associated noise covariance Q_j of the measurement model, as well as agent j's own state estimate (R̂^j_j, P̂^j_j), to agent i. As demonstrated in Fig <ref>, it takes two steps to fuse the inter-agent information with agent i's own estimate. Firstly, in the preprocessing stage, the relative measurement is combined with both agent i's and agent j's state estimates, which generates a new estimate of agent i, denoted by (R̂^j_i, P̂^j_i). We use the superscript j to distinguish it from agent i's estimate. Then we implement the CCE fusion method to fuse (R̂^j_i, P̂^j_i) with (R̂^i_i, P̂^i_i).

§.§.§ Geometric transformation of the measurement model:
We consider both measurement models in (<ref>) and (<ref>). The second measurement model requires an extra step to transform it into a zero-mean concentrated Gaussian. As given in (<ref>), the relative measurement can be expressed as a concentrated Gaussian with non-zero mean, z_i^j ∼ N_𝕀(log_G^∨(R_j^-1R_i), Q_j). By applying Lemma <ref>, one gets the following approximation: y_i^j ∼ N_{R_j^-1R_i}(0, J_{log_G^∨(R_j^-1R_i)} Q_j J_{log_G^∨(R_j^-1R_i)}^⊤). Such a modification requires knowledge of the noiseless configuration output R_j^-1R_i, which is not available in practice. In this paper, we will assume the measurement noise is small and use the measurement z_i^j as a proxy for the relative state. Alternative algorithms that exploit the two state estimates R̂^j_j and R̂^i_i directly are also possible. The measurement model can now be transformed as follows: y_i^j ∼ N_{R_j^-1R_i}(0, J_{log_G^∨(z_i^j)} Q_j J_{log_G^∨(z_i^j)}^⊤), or equivalently, y_i^j ≈ R_j^-1R_i ⊞ κ_j, κ_j ∼ N(0, Q^*_j), with Q^*_j = J_{log_G^∨(y_i^j)} Q_j J_{log_G^∨(y_i^j)}^⊤.

§.§.§ Preprocessing relative measurements:
Given the measurement model y_i^j = R_j^-1R_i ⊞ κ_j, κ_j ∼ N(0, Q_j), and the error state models R_i = R̂_i ⊞ ε_i, ε_i ∼ N(0, P̂_i), R_j = R̂_j ⊞ ε_j, ε_j ∼ N(0, P̂_j), substituting (<ref>) into (<ref>) yields y_i^j = (R̂_j ⊞ ε_j)^-1 R_i ⊞ κ_j = exp_G(-ε_j^∧) R̂_j^-1 R_i ⊞ κ_j = R̂_j^-1 R_i ⊞ Ad^∨_{(R̂_j^-1R_i)^-1}(-ε_j) ⊞ κ_j. Replacing R_i using (<ref>), y_i^j = R̂_j^-1 R_i ⊞ Ad^∨_{(R̂_j^-1(R̂_i ⊞ ε_i))^-1}(-ε_j) ⊞ κ_j = R̂_j^-1 R_i ⊞ Ad^∨_{exp_G(-ε_i^∧)} Ad^∨_{(R̂_j^-1R̂_i)^-1}(-ε_j) ⊞ κ_j. Taking the Taylor expansion of Ad^∨_{exp_G(-ε_i^∧)}, one has Ad^∨_{exp_G(-ε_i^∧)} = I_3 - ad^∨_{ε_i} + O(‖ε_i‖^2). Assuming that both ε_i and ε_j are small, ad^∨_{ε_i}(ε_j) and the higher-order terms can be approximated by zero: y_i^j = R̂_j^-1 R_i ⊞ Ad^∨_{(R̂_j^-1R̂_i)^-1}(-ε_j) ⊞ κ_j ≈ R̂_j^-1 R_i ⊞ κ^+_j, where the new measurement noise κ^+_j is given by κ^+_j ∼ N(0, Ad^∨_{(R̂_j^-1R̂_i)^-1} P̂^j_j (Ad^∨_{(R̂_j^-1R̂_i)^-1})^⊤ + Q_j). One can now reconstruct a new estimate of R_i from (<ref>) by left multiplying by R̂_j. The new estimate is a zero-mean concentrated Gaussian N_{R̂^j_i}(0, P̂^j_i), where the parameters are given by R̂^j_i = R̂_j y_i^j, P̂^j_i = Ad^∨_{(R̂_j^-1R̂_i)^-1} P̂^j_j (Ad^∨_{(R̂_j^-1R̂_i)^-1})^⊤ + Q_j.

§.§.§ Geometric correction of distributions:
The concentrated Gaussian distributions that are being fused can be written as N_{R̂^i_i}(0, P̂^i_i) and N_{R̂^j_i}(0, P̂^j_i). Although the covariances P̂^i_i and P̂^j_i are both symmetric matrices, they are still associated with distributions expressed in different coordinates. Fusing these covariance matrices directly, without correcting for the associated change of coordinates, will introduce artifacts and errors in the information state of the resulting filter, decreasing consistency and compromising the performance of the algorithm.
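The preprocessing above reduces to a few matrix products once one notes that, for SO(3), the Adjoint matrix of a rotation R is R itself (since R u^∧ R^⊤ = (Ru)^∧). The sketch below is our own illustration of these two steps under the small-noise assumptions of the text; it reuses log_SO3 and jacobian from the earlier SO(3) sketch, and the function names are ours.

```python
import numpy as np

# Reuses log_SO3 and jacobian from the SO(3) sketch in the preliminaries.

def correct_angle_axis_covariance(z_ij, Q_j):
    """Approximate the non-zero-mean angle-axis noise model by a zero-mean
    concentrated Gaussian: Q*_j = J_{log(z)} Q_j J_{log(z)}^T."""
    J = jacobian(log_SO3(z_ij))
    return J @ Q_j @ J.T

def preprocess_relative_measurement(R_hat_i, R_hat_j, P_hat_jj, y_ij, Q_j):
    """Combine agent j's broadcast state estimate (R_hat_j, P_hat_jj) with the
    relative attitude measurement y_ij and its covariance Q_j to obtain the
    pseudo-measurement (R^j_i, P^j_i) of agent i used in the fusion step."""
    # Adjoint matrix of (R_hat_j^{-1} R_hat_i)^{-1}; for SO(3) it equals the rotation itself.
    A = R_hat_i.T @ R_hat_j
    R_ji_hat = R_hat_j @ y_ij                 # new attitude estimate of agent i
    P_ji_hat = A @ P_hat_jj @ A.T + Q_j       # transported covariance plus measurement noise
    return R_ji_hat, P_ji_hat
```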
In order for the ego-agent to compensate for the change of coordinates in the measurement received from the altruistic agent, it must transform the measurement concentrated Gaussian into a concentrated Gaussian in its local coordinates. That is, it must solve for μ^j*_i and P̂^j*_i such that N_{R̂^j_i}(0, P̂^j_i) ≈ N_{R̂^i_i}(μ^j*_i, P̂^j*_i). Applying Corollary <ref>, one has μ^j*_i := log_G^∨((R̂^i_i)^-1 R̂_j y_i^j), P̂^j*_i = J_{μ^j*_i}^-1 P̂^j_i J_{μ^j*_i}^-⊤.

§.§.§ Data Fusion
Now that the target distributions are transported into the same coordinates, the next step is to fuse the two distributions. Rewrite the distributions as ellipsoidal sets on 𝔰𝔬(3), defined as ℰ(0, P̂^i_i) = {u^∧ ∈ 𝔰𝔬(3) : ‖u‖^2_{(P̂^i_i)^-1} ≤ 1} and ℰ(μ^j*_i, P̂^j*_i) = {u^∧ ∈ 𝔰𝔬(3) : ‖u - μ^j*_i‖^2_{(P̂^j*_i)^-1} ≤ 1}. Given that these two prior ellipsoids have a nonempty intersection, their convex combination ℰ(û^+, P̂^+_i) is given by P̂^+_i = k X, X = (α (P̂^i_i)^-1 + (1-α)(P̂^j*_i)^-1)^-1, k = 1 - d^2, d^2 = ‖μ^j*_i‖^2_{(P̂^i_i/α + P̂^j*_i/(1-α))^-1}, û^+ = X((1-α)(P̂^j*_i)^-1 μ^j*_i), where α ∈ [0,1] is a freely chosen gain in this paper. Alternatively, one can choose an optimal α^* that minimises the size of the fused covariance P̂^+_i (for example, its trace or determinant). Note that, given the outputs of the CCE fusion method, the posterior is a concentrated Gaussian with non-zero mean, that is, R_i ∼ N_{R̂_i}(û^+, P̂^+_i). However, the next fusion iteration requires the distribution to have a zero mean, hence the goal of the reset step is to identify P̂^♢_i such that R_i ∼ N_{R̂_i}(û^+, P̂^+_i) ≈ N_{R̂_i ⊞ û^+}(0, P̂^♢_i). Similar to the coordinate change in the previous steps, this may be solved by using Lemma <ref>. The reset covariance P̂^♢_i is found to be P̂^♢_i = J_{û^+} P̂^+_i J_{û^+}^⊤. Note that the reset step only modifies the covariance estimate and does not change the attitude estimate of agent i.

§ NUMERICAL EXPERIMENT
In this section, we provide a numerical evaluation of the algorithm proposed in Section <ref>. A Monte-Carlo simulation is undertaken to validate the performance of both the proposed geometric modifications and the fusion algorithm.

§.§ System implementation
In the Monte-Carlo setup we use the following randomisation and run 1000 experiments. We simulate two independent oscillatory trajectories for agents i and j, with the noiseless angular velocities generated by ω_i(τ) := (10|sin(τ)|, |cos(τ)|, 0.1|sin(τ)|) rad/s, ω_j(τ) := (|sin(τ)|, 0.5|cos(τ)|, 5|sin(τ)|) rad/s, and subsequently corrupted by additive Gaussian noise with covariance diag(0.3^2, 0.2^2, 0.1^2). The trajectory is realized using Euler integration of (<ref>) with a time step Δt = 0.02 s. The initial state estimates of the agents are offset from each other by 180 degrees, with initial errors sampled from N_{R(0)}(0, I_3). The extrinsic sensor on agent j measures two known directions, d_1 = (0,1,0) and d_2 = (1,0,0), with the output function (<ref>), while the sensor on agent i only measures the first direction. The measurements for each direction are corrupted with additive Gaussian noise sampled from N(0, diag(0.2^2, 0.1^2, 0.3^2)). In this experiment, we design agent i to use only the directional measurement of d_1, while agent j has access to measurements of both directions. In consequence, without the relative measurements from agent j, the state of the ego-agent i is unobservable: a single directional measurement is insufficient to determine the full attitude of the vehicle. In this way, the experiment emphasises the role of the shared information in the filter and exacerbates errors due to information incest.
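In the simulation, each relative-measurement update on the ego-agent therefore amounts to the computation below: geometric correction into the ego-agent's coordinates, CCE combination with a fixed gain α, and the covariance reset. This is a minimal numpy sketch of our own (function names ours, reusing exp_SO3, log_SO3, jacobian and jacobian_inv from the preliminaries sketch), not released code.

```python
import numpy as np

# Reuses exp_SO3, log_SO3, jacobian and jacobian_inv from the SO(3) sketch above.

def cce_fuse_relative(R_hat_ii, P_hat_ii, R_hat_ji, P_hat_ji, alpha=0.5):
    """Fuse the ego-agent's estimate (R_hat_ii, P_hat_ii) with the preprocessed
    pseudo-measurement (R_hat_ji, P_hat_ji), following the correction, CCE and
    reset equations in the text. Assumes the two ellipsoids intersect (d^2 < 1)."""
    # 1. Geometric correction: express the pseudo-measurement in the ego-agent's coordinates.
    mu = log_SO3(R_hat_ii.T @ R_hat_ji)                  # mu^{j*}_i
    Jinv = jacobian_inv(mu)
    P_star = Jinv @ P_hat_ji @ Jinv.T                    # P^{j*}_i

    # 2. Convex combination ellipsoid (CCE) fusion with a fixed gain alpha.
    X = np.linalg.inv(alpha * np.linalg.inv(P_hat_ii)
                      + (1.0 - alpha) * np.linalg.inv(P_star))
    S = P_hat_ii / alpha + P_star / (1.0 - alpha)
    d2 = float(mu @ np.linalg.solve(S, mu))              # squared Mahalanobis distance
    P_plus = (1.0 - d2) * X
    u_plus = X @ ((1.0 - alpha) * np.linalg.solve(P_star, mu))

    # 3. Reset: absorb the fused mean into the reference point and transport the covariance.
    R_new = R_hat_ii @ exp_SO3(u_plus)
    J = jacobian(u_plus)
    P_new = J @ P_plus @ J.T
    return R_new, P_new
```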
Both agents receive directional measurements at 20Hz. Agent j makes a relative attitude measurement of agent i at 1Hz, which is corrupted by Gaussian noise (0,Q_j). The non-homogeneous noise covariance is given by Q_j = diag(0.5^2, 0.3^2, 0.2^2). We run separate simulations with both of the measurement noise models considered in (<ref>) and (<ref>). For comparison, we implement two filters on agent i using different measurements, aside from the proposed algorithm. One filter only uses the extrinsic directional measurements and disregards relative inter-agent measurements. The second filter uses both extrinsic and relative measurements, however, it only does a naive fusion in the logarithmic coordinates, as in <cit.>. §.§ Result Fig <ref> and <ref> demonstrate the performance of the proposed algorithm (in blue) compared with the EKF using only directional measurement (in red) and the EKF using a naive fusion scheme (in green). Note that the noise parameters were chosen to demonstrate the relative advantages of the geometric modifications to be proposed. That is, while we found that the proposed method outperformed the alternatives regardless of the noise model, the particular parameters used to generate the plots shown here were chosen to emphasise the performance advantage. In particular, the measurement errors are larger than would be desirable in a real application, leading to relatively large attitude error in the plots. As expected, the local filter solution with only a single direction measurement available does not converge as the system is unobservable, as shown in both Fig <ref> and <ref>. Fig <ref> shows the algorithm performance in the case where the direct physical measurement model (<ref>) is used. It demonstrates an advantage in the proposed fusion algorithm during the transient of the algorithm but has similar asymptotic performance to the naive filter. This is to be expected, since the geometric modification in the data fusion step corrects for the difference between the filter estimate and relative state measurement coordinates. As the filter converges this difference becomes negligible and the benefit of the geometric correction is lost. This is not the case for measurement model (<ref>), as shown in Fig <ref>, since the relative state measurement zji is taken in coordinates around agent j. The relative state between the two agents i and j does not converge to the identity in general, and the geometric correction to compensate for the coordinate representation remains critical. § CONCLUSION In this paper, we propose a collaborative attitude estimation problem where agents run local filter-based algorithms which fuse the estimates and relative measurements communicated by neighboring agents. We utilize the concept of concentrated Gaussians on (3) and exploit the geometric properties of the underlying space. The proposed algorithm combines an EKF running locally on the agent with the CCE fusion of relative state measurements. We provide geometric corrections in the algorithm to incorporate Lie group geometric insights that improve filter performance. The simulation results demonstrate the convergence of the proposed fusion method, and show the improvements gained from applying the correct geometric modifications with different measurement noise models. IEEEtran
http://arxiv.org/abs/2407.12393v2
20240717081322
PersLLM: A Personified Training Approach for Large Language Models
[ "Zheni Zeng", "Jiayi Chen", "Huimin Chen", "Yukun Yan", "Yuxuan Chen", "Zhiyuan Liu", "Maosong Sun" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CY" ]
PersLLM: A Personified Training Approach for Large Language Models

Zheni Zeng, Jiayi Chen, Huimin Chen[1], Yukun Yan[1], Yuxuan Chen, Zhenghao Liu, Zhiyuan Liu and Maosong Sun
Department of Computer Science and Technology, Tsinghua University, Beijing, China
[1] Huimin Chen huimchen@mail.tsinghua.edu.cn
July 22, 2024

§ ABSTRACT Large language models (LLMs) exhibit aspects of human-level intelligence that catalyze their application as human-like agents in domains such as social simulations, human-machine interactions, and collaborative multi-agent systems. However, the absence of distinct personalities, such as displaying ingratiating behaviors, inconsistent opinions, and uniform response patterns, diminishes LLMs' utility in practical applications. Addressing this, the development of personality traits in LLMs emerges as a crucial area of research to unlock their latent potential. Existing methods to personify LLMs generally involve strategies like employing stylized training data for instruction tuning or using prompt engineering to simulate different personalities. These methods only capture superficial linguistic styles instead of the core of personalities and are therefore not stable. In this study, we propose personified LLM (PersLLM), integrating psychology-grounded principles of personality—social practice, consistency, and dynamic development—into a comprehensive training methodology. Through personified data construction and model training, we incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality. Single-agent evaluation validates our method's superiority, as it produces responses more aligned with reference personalities compared to other approaches. Case studies for multi-agent communication highlight its benefits in enhancing opinion consistency within individual agents and fostering collaborative creativity among multiple agents in dialogue contexts, potentially benefiting human simulation and multi-agent cooperation. Additionally, human-agent interaction evaluations indicate that our personified models significantly enhance interactive experiences, underscoring the practical implications of our research.

§ INTRODUCTION Large language models (LLMs), due to their large-scale parameters and training data, demonstrate human-level intelligence across numerous domains <cit.>. This development has inspired researchers to investigate the potential of employing LLMs as human-like agents in various contexts, including social simulations, human-machine interactions, and collaborative multi-agent systems <cit.>. The personalization of these human-like agents plays a crucial role. For example, in social simulation contexts, only agents that possess extensive personified traits can authentically emulate human perspectives and behaviors.
In human-machine interaction settings, agents featuring personality traits significantly enhance user acceptance and comfort <cit.>. Additionally, in tasks involving multiple agents, the interaction among entities with varied personality traits can markedly enhance both the quality and creativity of task execution <cit.>. Nevertheless, the current generation of LLM-driven agents exhibits a notable deficiency in personified characteristics, often presenting overly uniform values and behavior patterns, along with a propensity to cater to user preferences <cit.>, which substantially curtail their applicability in real-world settings. In response to these challenges, some studies have initiated efforts to construct LLMs endowed with a broader spectrum of personality traits. Current research in modeling personality primarily adopts two mainstream approaches, each with distinct methodologies and inherent limitations. The first, prompt-based methods, rely on external prompt engineering to specify personified traits of agents <cit.>. These methods depend on the model's strong capacity to comprehend and reason over long texts and the accuracy of the retrieval module to provide context-relevant information. Naturally, they are limited by the model's maximum context length, restricting the scope of personality information that can be processed. The second approach, training-based methods, integrates personal characteristics into the model's internal parameters by by targeting specific types of data <cit.>. These methods usually focus on singular aspects of characteristics such as linguistic styles or anecdotes, thus limiting the application scenarios for the models. Overall, while both approaches attempt to integrate personality traits into models, they only capture superficial and fragmented aspects of these traits and fail to fully address the complexity of personal characteristics. Psychologists view personality as a dynamic organisation, inside the person, of psychophysical systems that create a person‘s characteristic patterns of behaviours, thoughts, and feelings <cit.>. It is formed gradually in the process of socialization, through maturity, learning and the influence of the environment <cit.>. We therefore believe that a more authentic and comprehensive personality modeling of LLMs must consider the following principles: 1. Social Practice: Personality is shaped by personal life experiences and the socialization process, and in turn is reflected in people's action and speech. Comprehensive modeling should thus incorporate detailed records of a person's behavior, thoughts, and feelings, training and evaluating the personified LLMs through socialized interactions; 2. Consistency: Personality is shown in consistent and continuous characteristic patterns, rarely shifting dramatically during interactions or across different scenarios <cit.>, so personified LLMs should have strong generalization and stability of opinions, avoid being induced by attacks or being excessive flattery; 3. Dynamic development: Although being consistent in a certain stage, in the long run, personality experiences dynamic development. Therefore, collecting data with time stamps and modeling different life stages is essential. Grounded in the principles outlined previously, we propose a novel approach for LLM personification, termed PersLLM, which is divided into two stages: personified data construction and personified model training. As shown in Fig. 
<ref>, in the personified data construction stage, we collect both objective and subjective data about the target individual we wish to simulate, such as biographies, third-party descriptions, personal letters, and authored articles, which can provide insights into the experiences, knowledge, opinions, and speech style of the target personality. To adhere to the social practice principle, the evaluation of the personified model should be conducted in comprehensive social conversational scenarios. Considering the discrepancy between raw plain-text and the conversational inference, we systematically restructure and augment the raw data into formatted conversations utilizing an annotation LLM, which is instrumental in facilitating subsequent personality learning of models. The annotation LLM prompts broad inquiries into specific segments of the data (i.e., ground-truth information) and retrieves relevant details from the raw data. With the help of the retrieved information and the ground-truth data, the annotation LLM achieves retrieval-augmented generation (RAG) <cit.> for informed responses to the inquiries it raises. Simultaneously, in order to uphold the consistency principle, we take an anti-induced data construction approach. This approach involves the diversification of conversational data into various scenarios, including standard question-answer formats, opinion discussions, and error corrections. These scenarios, especially the latter two, are particularly crucial as they frequently involve conflicting viewpoints or require rectification of misleading information. Training on these diverse data can instill the model to sustain a consistent perspective and cognition, and neither be misled nor excessively compliant, whether within a single interaction or across varied scenarios. Lastly, to portray the principle of dynamic development of personality, we categorize the personified training data into distinct temporal stages. When generating responses with the help of the annotation LLM, the access scope for retrieval is confined to one specific time period, aiming to authentically represent the temporal state of the personality. In this way, we construct an extensive corpus of personified conversation data. In the personified model training stage, drawing from the interpretations that personality is driven by the inside psychology system, we internalize the extensive personality information into model parameters through training rather than external methods such as RAG. First, we conduct  personified conversational tuning, which involves tuning the model with specially constructed personified conversation data alongside some general instruction-tuning data to preserve its generalizability. Subsequently, to further ensure the consistency of PersLLM, we introduce the Chain-of-Thought (CoT) prompting strategy, which requires the model to display a detailed, step-by-step analytical process before delivering responses. This strategy, coupled with the anti-induced data policy, not only fosters a deeper reasoning capability but also combats the tendency towards uncritical agreement and flattery often observed in general instruction-tuned LLMs. Afterwards, to boost the model's dynamic development capabilities, we employ a temporal strategy by attaching a special label representing the temporal stage to the conversation data prior to training, aiding the model in differentiating between the personalities associated with different temporal stages. 
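Concretely, the data construction described above can be summarised by the loop sketched below. This is our own schematic illustration rather than released code: the retriever and the annotation-LLM call are placeholders with illustrative signatures, and the prompt strings merely paraphrase the roles (question raising, anti-induced inputs, stage-restricted retrieval, CoT-grounded in-character answering) described in the text.

```python
from dataclasses import dataclass

@dataclass
class TrainingItem:
    stage: str        # temporal label, e.g. "early" or "late"
    user_input: str   # ordinary question, induced (misleading) input, or opinion
    response: str     # step-by-step analysis (CoT) followed by the in-character reply

def retrieve(query, corpus, top_k=3):
    # Placeholder: a real pipeline would rank corpus passages by embedding similarity.
    return corpus[:top_k]

def annotation_llm(prompt):
    # Placeholder: a real pipeline would call the annotation LLM (e.g. a GPT-4-class model) here.
    return f"<annotation-LLM output for: {prompt[:40]}...>"

INPUT_TYPES = ["question", "induced question", "opinion"]   # includes anti-induced data

def build_items(paragraph, stage, stage_corpus):
    """Turn one raw paragraph (experience, knowledge, or speech) into personified
    conversation items, restricting retrieval to the paragraph's temporal stage."""
    items = []
    for input_type in INPUT_TYPES:
        user_input = annotation_llm(
            f"Based on this source paragraph, write a {input_type} a user might raise:\n{paragraph}")
        evidence = retrieve(user_input, stage_corpus)        # RAG restricted to the same stage
        response = annotation_llm(
            "Answer in character: first give a step-by-step analysis, then the reply, "
            f"grounded in the reference and retrieved context.\n"
            f"Reference: {paragraph}\nRetrieved: {evidence}\nInput: {user_input}")
        items.append(TrainingItem(stage=stage, user_input=user_input, response=response))
    return items
```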
Beyond temporal strategy, we employ automatic Direct Preference Optimization (DPO) <cit.>, to accentuate variations between different temporal stages and personalities. We regard the original annotated response for an input as the positive example, and retrieve a response from other temporal stages or personalities as the negative example, and reduce the probability of the model generating disturbing negative examples. This technique aims to increase the distinctiveness and uniqueness of each modeled personality, thereby enhancing the utility and realism of the LLMs developed under the PersLLM framework. To assess the effectiveness of our PersLLM approach, we conduct a series of experiments. These included a single-agent experiment to validate the effectiveness of our personality modeling approach, a multi-agent communication test to determine whether the agents can simulate human-like interactions, and human-agent interaction experiments to measure how well our PersLLM can enhance service in social applications. For the single-agent experiment, we compile a dataset comprising six characters from the Harry Potter series to train our personified models, and conduct quantitative analysis and ablation study. This experiment demonstrates that PersLLM consistently generates responses more aligned with the characters’ distinct experiences and perspectives, thereby bolstering the confidence in their portrayed positions. Meanwhile, the policies in our personified conversational tuning and the automatic DPO training are all proved to be effective. For the multi-agent communication test, we evaluate the communication patterns in conflict and cooperation situations, and contrast PersLLM's performance with other models through case comparisons. We observe that personified training effectively prevents the conformist convergence of behaviors and opinions among multiple agents, which hinders the simulation of human-like behaviors by the agents. For the human-agent interaction experiments, participants engage in conversations with both the PersLLM and the backbone model lacking personified training. The interactions were analyzed based on a series of metrics evaluating user satisfaction and engagement, indicating a marked improvement in user experience when interacting with the personified model. Our contributions are summarized as follows: (1) Novel Approach: Inspired by the psychological interpretations, we critically design PersLLM, a pioneering approach for the personified training of LLMs, based on the key principles of personality. (2) Empirical Validation: Our experiments validate the effectiveness of our training methodologies, confirming the advantages of personified training in enhancing simulations of human interactions and elevating the quality of human-agent communication. (3) Open Resources: We have made the code for collecting and processing the training data, training and evaluating the personified models publicly available, along with demonstrations to encourage further research and facilitate discussions within the academic community on personified AI systems. § RESULTS §.§ Overview of Datasets Considering that there exist plenty of records for various characters in the Harry Potter novels, we propose a dataset called Harry Potter personified dataset (HP dataset) for training and numerical evaluation of our personified training approach. 
HP dataset includes 6 fictional characters which span different ages, genders, positions, and richness of information, thus these characters are quite representative for evaluating the personified training. We collect the raw data from two sources: 1. The Harry Potter Wiki [<https://harrypotter.fandom.com/wiki/Main_Page>] data, including the basic information, temporal experiences, and the special social or magic knowledge that are related to the target characters; 2. The character speeches data, selected from the original novels and filtered out with the help of GPT-3.5-turbo. Based on the raw data, we annotate the HP dataset with the help of the representative LLM GPT-4-turbo. To improve the consistency of the model, we require the LLM to create different types of user inputs for each paragraph of the raw data. Meanwhile, we provide the golden reference paragraph and the other retrieved related paragraphs, and ask the LLM to annotate the response with CoT towards these inputs. To achieve the dynamic development of the personified model, we divide all the raw data into two stages. Items using information from the early plots are in the early stage, and those using information from the late plots are in the late stage. If the time of the information is vague (such as from an interview with the author himself), then it is seen as public information and can be used by both stages. The inputs raised from the stage items are attached with the corresponding temporal labels. The examples for user inputs, CoT, and personified responses are displayed in Fig. <ref>. Detailed information for HP dataset is provided in Method and Supplementary Information. §.§ Backbone Models and Baselines For the backbone model used for personified training, we prefer lightweight LLMs so that we can conduct more personalities within limited time and resources. Meanwhile, the commonsense reserve of the LLMs is important because we do not emphasize the general domain knowledge in personified training. Therefore, we adopt MiniCPM-2.4B <cit.> as the backbone model for our experiment. It is an end-side LLM gaining the instruction following ability during pre-training, and has achieved the best performance among lightweight LLMs on several datasets. Therefore, we believe that it is very suitable for personified training in different characters. Besides, GPT-4-turbo is also involved in our experiment, which is both the annotation LLM that we use for personified conversation data, and one of the most recognized models in personified inference. We implement the following personified methods for comparison: Prompt engineering (PE). LLMs have acquired extensive capabilities during pre-training and general instruction tuning. By crafting prompts that require the models to imitate the target personalities and to align with the character attitudes and experiences, we can activate the potential of the models to act according to the knowledge related to these famous characters that they have mastered during the pre-training. Additionally, providing relevant information (e.g., the personality introduction, experiences and speeches) about the target personalities through Retrieval-Augmented Generation (RAG) can further enhance the models' ability to imitate. Details are displayed in the Method section. Language modeling (LM). LLMs can also internalize knowledge directly from the raw data collected for the target personalities. 
One widely-adopted method is through language modeling training, where the model's comprehensive capability is maintained by mixing personified language modeling data with general instruction tuning data. Role-conditioned instruction tuning (RoCIT). This method is proposed in RoleLLM <cit.>. Rather than learning directly from the raw context, RoCIT modifies the general instruction tuning responses to the specific idiolect of the personalities by mimicking the relevant records. We annotate the RoCIT data with the same annotation LLM as our method, and filter out those instructions that are totally out of the character domain, and use the same amount of personified training data. PersLLM. This is our purposed personified training framework for LLMs, comprehensively modeling the personality with several key elements. Different from the above baseline methods modeling the personality externally (PE), or only internally learning the person's speech knowledge (RoCIT), or learning the knowledge roughly (LM), we achieve the social practice learning by personified conversational tuning and automatic DPO training, and also emphasize the consistency and the dynamic development of the personality by incorporating strategies such as anti-induced data, CoT prompting and temporal labels. To assess their impact, we conduct an ablation study by sequentially removing these policies. §.§ Single-agent Evaluation We develop personified models for each character in the HP dataset and calculate a weighted average of their performance metrics on the test set. Numerical results are detailed in Table <ref>, which employs standard evaluation metrics common in traditional text generation tasks: BLEU and ROUGE <cit.>. BLEU assesses the precision of the generated text, while ROUGE measures recall, evaluating how similar are the model's responses with the target answers. Given the variability inherent in natural language, where different expressions can convey the same meaning, we also incorporate the LM evaluation using GPT-3.5-turbo. This model serves as a benchmark to compare the performance of different methods in terms of fact accuracy and tone alignment with the golden responses, with further details provided in the Supplementary Information. Baseline performances. Overall, baseline methods get less satisfying performances than our method, highlighting the task's complexity. GPT-4-turbo can generate reasonable responses that closely resemble the golden responses annotated by the model itself, yet it still scored lower on several metrics compared to our approach. RAG demonstrates slight improvements for the model, suggesting that factual and stylistic knowledge contribute to enhancing personified conversations. However, RAG fails to boost the performance of the MiniCPM-2.4B model, possibly due to its limited capacity in handling long-text inputs to leverage detailed prompt knowledge, showing that relying solely on prompt-based knowledge for achieving effective personification raises an over-high requirement for the backbone model capability. Injecting the raw materials of the target personality to the tuning period (LM) is proven to be helpful, but it is obviously not an efficient method to internalize external knowledge into model parameters as shown in the results, maybe due to the small data scale and the disparity between the training and inference. 
RoCIT also achieves only marginal improvements, as the primary focus of instruction tuning remains task completion, allowing the model to learn about the target personality only through the specific idiolect, while ignoring other important issues such as consistency and dynamic development. Overall performance. Our method generally achieves a good performance on the HP dataset. We first try two types of training settings: using separate LMs for different personalities, and using one combined LM for the six personalities (mixed). The separate LMs exhibit superior performance, reinforcing our hypothesis concerning the limitations of a single model's capacity to encapsulate diverse knowledge effectively. Following this, we undertake an ablation study to quantitatively assess the impact of each specific policy implemented during training. The findings numerically indicate that all implemented policies contribute effectively to training outcomes. Next, we analyze the effect of our method by observing cases. Distinction of Personalities. Our methodology's efficacy in crafting distinct personalities is assessed by posing identical questions to different personified models and analyzing their responses in terms of behavior, thoughts, and feelings. As depicted in Fig.<ref>-(a), the three main characters exhibit unique expression habits and worldviews. For example, Ronald's frequent use of the catchphrase "Blimey" reflects his more casual demeanor, Harry displays heroism and a proactive stance against personal threats, and Hermione shows a deep commitment to knowledge, fairness, and respect for ordinary life due to her Muggle background. This differentiation extends to their knowledge, as shown in Fig.<ref>-(b), where each character's understanding of terms like "unforgivable curse" aligns with their individual experiences and education. The successful personification of Aunt Petunia also underscores the effectiveness of our method even with limited character data. Corresponding to our method, automatic DPO training can further refine the model's performance by reducing the likelihood of responses characteristic of other personalities and temporal stages, thereby enabling the generation of more appropriate outputs and improving the distinction of personalities. Dynamic of Personalities. Our models also demonstrate the dynamic nature of these personalities over time (triggered by different temporal labels). As shown in Fig. <ref>-(c), responses from different temporal stages (e.g., 1994 vs. 2004 versions of characters) reveal subtle shifts towards maturity. For instance, older Hermione and Harry describe their relationships with a deeper, more nuanced understanding, reflecting natural character development. As shown in Table <ref>, the removal of temporal labels results in a drop in scores. We have also tried swapping the two temporal labels and obtained even worse performances (BLEU-2 5.49, BLEU-4 0.58, and similar ROUGE scores), which proves that our model successfully infuses the personality differences between stages into the temporal labels. Nevertheless, the current model stages are broadly defined, and the incomplete temporal data sometimes leads to inaccuracies in event recognition, suggesting a need for finer temporal segmentation in future models. Consistency of Personalities. In single-agent evaluation, consistency is evaluated by how the models handle inputs of varying politeness, accuracy, and relevance.
Our models successfully maintain their respective character traits even when confronted with impolite or incorrect information. As depicted in Fig.<ref>-(d), Hermione rebuffs blood discrimination with tough attitude and clear facts, and Harry corrects a misclassification of his school house, unlike baseline models that sometimes falter in expressing clear values or recognizing errors. Furthermore, as shown in Fig.<ref>-(e), when facing some general questions unrelated with the character, generic models with role-play instructions often generate standard replies like a robot assistant instead of the target personality, providing some point of views that are beyond the character's knowledge and experience (e.g., the bias issues of algorithms), and saying some polite words as an intelligent assistant (e.g. hope these thoughts help). However, our models still keep the personality consistency and respond in character-specific ways, using examples of things that might happen in the wizarding world and extend the discussion. From the numerical results we can also see, the anti-induced data policy enhances the model's ability to maintain consistency in its attitudes and factual representations, because the inclusion of this part of data is proved to be particularly beneficial for improving the scores. Meanwhile, models trained without general instructions (w/o instruction version) achieves high BLEU and ROUGE scores but suffers from lower LM scores. This discrepancy suggests that while the responses are textually similar to the expected answers, they lack naturalness and rational coherence expected in human-like language use (e.g., when asked about the opinion towards the muggles, the Harry agent says “...despite obvious differences therein lies, part two hereof discussing mutual awareness versus outright hostility toward anything remotely different"). We speculate that the overemphasis on personified data can lead to rigid response patterns that poorly handle out-of-domain inputs. Therefore, adopting general domain data in personified conversational tuning is essential to the generalization of the model and also improve the personality consistency across various scenarios. §.§ Multi-agent Interaction Recent research highlights that LLM agents typically converge towards a consensus, which may undermine their utility in studies like policy interventions and social simulations <cit.>. To assess the realism of our personified models in mimicking human behaviors, we orchestrate interactions among different agents under scenarios of conflict and cooperation. In the conflict situation, we mainly observe whether the model can maintain the consistency of personalities and will not easily converge on opinions, which is an important capability for agents simulating human interactions in the social science research. In the cooperation situation, we mainly observe whether the model can demonstrate good social practice ability and make in-depth use of unique knowledge of personalities to derive new conclusions, which is meaningful in the perspective of multi-agent intelligence achieving high-quality division of labor and collaboration. Conflict Situation. We initiate a conflict discussion using personalities from the Harry Potter series, assuming that Harry and Voldemort meet during the war and have a direct conversation. Fig. <ref> showcases an instance where various configurations-(a) PersLLM; (b) GPT-4-turbo prompt engineering; (c) GPT-4-turbo chatting. 
We can see that though with a correct start, other two settings tend to a shifted focus or even convergent opinions after a few turns of conversations. GPT-4-turbo with personality profiles is relatively better, basically maintain the hostile relationship between the two personalities, but it lacks more detailed personified knowledge and generates some unreasonable content (e.g., Harry never calls Voldemort by his real name). PersLLM, in contrast, outperforms others by maintaining distinctive personality traits and understanding deeper character nuances. We have also tried some softer conflicts, such as between Hermione and Ron regarding adherence to school rules, and PersLLM facilitates a nuanced consensus that respects individual personality bases—rule adherence versus emotional consideration. Therefore, PersLLM can achieve a good consistency of the personalities and simulate the human interactions better. Cooperation Situation. We extend our study to include real-life personalities to better evaluate the strong professional cooperation, specifically Huiyin Lin, a Chinese architect, and John Nash, an American mathematician. Both individuals, though deceased, left extensive biographical materials enabling rich personification. We train models on their personal knowledge and attitudes and initiate dialogues between these personified agents. In this case, we also compare our method with the relatively good baseline, GPT-4-turbo with personality profiles. As depicted in Fig. <ref> (the English translation for Huiyin Lin is displayed in Supplementary Information), PersLLM enable the two agents to engage in cross-disciplinary discussions that transcend the boundaries of time, space, and language. We can intuitively see that the architectural terms (highlighted in red and yellow color series) and mathematical terms (highlighted in blue and purple color series) mentioned in PersLLM conversations are significantly more numerous and thematically richer. The two agents explore the application of mathematical decision-making in architectural design and discuss more detailed projects such as the use of dynamical systems in mathematics to enhance building sustainability and disaster resilience. These interactions maintain the unique linguistic styles and interests of each personality (e.g., Nash frankly expresses his lack of interest in architecture, while Lin also says that it is difficult to understand abstract mathematical theorems), and resemble human collaboration more closely than GPT-4 agents, which often resort to excessive flattery and converged towards simplistic scientific consensus. §.§ Human-Agent Interaction We have mentioned that human users may have higher emotional acceptance from personified models. Therefore, we conduct the human-agent interaction experiments to evaluate whether the personified models can better provide social services. We take the Huiyin Lin personified model as the test agent. It should be noted that we replace the backbone MiniCPM-2.4B model with Chinese-LLaMa-2-7B, which is one of the most popular open-source LLMs that has comprehensively good performance on Chinese tasks <cit.>. On the one hand, with more parameters, this model may understand and process Chinese better to bring an overall better user experience, and more training cost for only one instance is also acceptable. Besides, we find that the performance stability of this model using PE for simulating Huiyin Lin is better than that of MiniCPM-2.4B, which can be compared more intuitively. 
On the other hand, involving a new backbone model from a different model family can also evaluate the generalization of our method. 30 people from different academic backgrounds participated in the experiment. They chatted with two models in a random order: 1. the personified-tuned model (PersLLM); 2. the general instruction-tuned version of the backbone model (Chinese-alpaca-llama-7B) personified by prompt engineering. Volunteers freely had multiple rounds (at least 4 rounds, and eventually an average of 10 rounds) of conversations with the model and terminated the conversation according to their interests. They gave a comprehensive evaluation of the model through a questionnaire survey. We refer to some work evaluating the human-AI interaction <cit.> and adopt 10 metrics in this experiment, which are described in a detailed question in the questionnaire (e.g., “human simulation": To what extent do you think the model behaves like a human? “character simulation": Do you think this model behaves similarly to Lin Huiyin?). Samples from the interaction experiment are provided in Supplementary Information. The overall satisfaction score is 48 for the personified-tuned model (100 in total) and 38 for the general-tuned model. Other metrics are shown in Fig. <ref>. Among them, 6 metrics including trust, companionship, character simulation, fluency, consistency, and effectiveness, show that our model is significantly better than the baseline (with p0.05). It is worth mentioning that our model performs relatively well in terms of consistency of personality attitude and style, chat pleasure and empathy, and also has a significant improvement in the mastery of the character-related knowledge, which shows the effectiveness of the personified training method. However, both our method and the baseline method receive low scores in terms of similarity to real human beings. This may be attributed to the limitations of the backbone model, which lacks the ability to understand long context and perform complex logical reasoning. We perform subgroup analysis on 30 samples to explore the factors that influence the preference for PersLLM and its possible areas of applications. In the following, we consider a t-test with p 0.05 to be a significant difference. From the perspective of user background, participants in the architecture and design industry (10 out of 30) show a significant preference towards PersLLM in terms of character simulation and consistency scores. They have a better understanding of Lin's life and knowledge scope, and thus can appreciate the advantages of our model in terms of personality knowledge. From the perspective of conversation topic, participants who raise inputs that are related to Huiyin Lin (20 out of 30, e.g., architectural expertise or literary chat instead of looking up financial terms), have a higher sense of trust and human-likeness in the PersLLM. This suggests that PersLLM may be able to better meet the personalized needs of users with corresponding knowledge backgrounds or related informational or emotional needs. Participant gender (12 male & 18 female) and the order in which the two models were encountered (14 PersLLM first & 16 general-tuned first) did not make a significant difference in the evaluation. 
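The significance statements above correspond to standard t-tests over the per-participant questionnaire scores. The snippet below is a minimal illustration with synthetic scores rather than the study's data; the paper does not specify paired versus independent tests, so the sketch uses a paired test for the within-participant model comparison and an independent test for the subgroup split.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic 1-5 ratings of one metric by 30 participants for both models (illustrative only).
perspllm_scores = rng.integers(3, 6, size=30).astype(float)
baseline_scores = rng.integers(2, 5, size=30).astype(float)

# Paired comparison: each participant rated both models.
t, p = stats.ttest_rel(perspllm_scores, baseline_scores)
print(f"PersLLM vs baseline: t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")

# Subgroup analysis, e.g. the 10 participants with an architecture/design background.
arch = np.zeros(30, dtype=bool)
arch[:10] = True
diff = perspllm_scores - baseline_scores
t_sub, p_sub = stats.ttest_ind(diff[arch], diff[~arch])
print(f"architecture vs other participants: t = {t_sub:.2f}, p = {p_sub:.4f}")
```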
§ DISCUSSION The personification of LLMs is an essential topic, which may achieve a higher psychological acceptance and provide a better user experience for human beings, and may also allow a deeper simulation of human interactions to better support social science research. In this article, we innovatively propose a practical framework for LLM personified training based on the phychologic understanding of personality. With personified conversational tuning, we successfully internalize the personality knowledge into the models. Strategies including anti-induced data, CoT prompting, temporal labels and automatic DPO training, further improves the personality capability of the models. Single-agent evaluation on the HP dataset has proven the effectiveness of the above methods. Multi-agent interaction test further demonstrates its potential to advance social research based on agent-based simulations. Convergence of views is currently a dilemma for simulating social interactions, while our approach is shown to successfully keep the consistency of their viewpoints. This provides methodological support for researches that hope to use intelligent agents to replace or simulate humans. Moreover, human-agent interaction has displayed the value of personified models in social applications. We analyze the participants' questionnaire results and find that personified models that are closer to the user's interests or experiences will lead to higher chat satisfaction. Still, there are some problems waiting for us to solve. First of all, although our method attempts to model the dynamic development of personality, it is still unable to accurately capture the real-time development of people or conveniently update knowledge. This may rely on more detailed division of training data and online learning. Secondly, the annotation of training data relies on LLMs such as GPT-4, and they may introduce bias into the data due to their own style and value preferences. Currently, we adjust the generation results by concatenating a few sentences of human feedback in the prompt. Subsequent research needs to be done on methods to correct the data generation process automatically. Thirdly, the human-agent interaction experiments have revealed deficiencies in the current model, particularly in terms of long-term memory and complex logical reasoning. To address these issues, it may be necessary to not only construct related training data but also integrate specialized modules for memory and reasoning. Finally, personality imitation, like applications such as AI face-changing, may pose privacy and ethical risks after being abused, and requires stricter supervision. Our future work will focus on the problems above, trying to get a better modeling method for personality dynamic development, and a more efficient way of supervision for data annotation. Relevant ethics research may also complement technological development. § METHODS §.§ Related Work Pre-trained Language Models (PLMs) <cit.> have shown their satisfactory performances when being adapted to a wide range of downstream tasks, while Large Language Models (LLMs) <cit.> has further demonstrated emergent abilities <cit.> including in-context learning, CoT, etc., refreshing people's expectations for language model capabilities. 
Take the representative GPT series as an example: GPT-2 <cit.> eliminates the task-specific fine-tuning process and can complete different tasks by learning only from the prompt; GPT-3 <cit.> takes the few-shot and even zero-shot learning capability to a new level by significantly increasing the parameter and corpus scale, further demonstrating the potential of such models; InstructGPT <cit.> is improved with reinforcement learning from human feedback (RLHF), which enables the model to better understand flexible human instructions and broadens its use scenarios; subsequent models such as GPT-4 have enhanced performance in areas including professional knowledge. These technological developments ensure the flexibility and plasticity of LLMs and are the cornerstone of personification. To shape base models into versions that are more in line with human intentions, alignment techniques <cit.> have emerged. From a goal perspective, researchers apply sociological methods to find human consensus and design values that AI should conform to <cit.>, and sometimes focus on domain knowledge <cit.> instead of social values. From a data perspective, alignment training usually adopts human feedback <cit.> or strong-LLM-annotated data <cit.>. From a methodology perspective, researchers iteratively design alignment algorithms, including Proximal Policy Optimization (PPO) <cit.>, Direct Preference Optimization (DPO) <cit.>, and Odds Ratio Preference Optimization (ORPO) <cit.>, using human preference data to improve model performance. In our task, we align the models to specific personalities, including personal values and knowledge, use strong-LLM-annotated data for alignment, and adopt the DPO algorithm with automatically created preference data. Specific to the personified training scenario, a few studies have been conducted. LLMs have been shown to be capable of role-playing that casts dialogue-agent behaviour <cit.>. RoleLLM <cit.> first proposes a large benchmark for aligning LLMs to specific character language styles, in which injecting style information into LLMs with auto-annotated data is verified to be effective. CharacterLLM <cit.> further pushes personified LLMs into more practical scenarios: by editing character profiles, models are trained to provide the experiential knowledge possessed by the characters. In comparison, our work discusses the characteristics of personality more comprehensively (e.g., its dynamics and subjectivity, rather than only experience and language style) and proposes more detailed training strategies to achieve better performance instead of only proposing conceptual frameworks. There also exist some works focusing on the evaluation of LLMs in role-playing tasks <cit.>. Overall, however, research in this field is still scarce and the road ahead is long.
For the knowledge, we record the hyperlinks in the Magical abilities and skills part, and crawl the brief introductions of these magics, possessions, characters, or events. For each personality, we define two temporal stages. For Harry, Hermione, Ronald, and Petunia, the first stage ends in 1994 (content mentioned before the 4th novel), and the second stage ends in 2004 (life after the end of the 7 novels). We split the wiki paragraphs according to the citation numbers, and for settings from outside the original novels, we retain them in both stages. Correspondingly, their conversations can also be split into early style and late style. For Dumbledore and Voldemort, however, few materials exist for their early language style, so we do not distinguish between the language styles of the two stages, but only use 1982 (after the Wizarding War) as an important plot cut-off point to distinguish the character experience and knowledge of the early and late stages. To create the conversation data, we require the GPT-4-turbo model to read the raw data above paragraph by paragraph, first raise a question or an opinion, and then imitate the target personality to generate the answer. For the input, there are altogether three kinds of text: ordinary questions, induced questions, and opinions, in almost equal proportions. We sequentially ask the model to raise a relevant question based on the original text, to deliberately elaborate a question that contains commonsense or factual errors, and to state a point of view that is relevant but not completely consistent with the original text. For the output, we adopt the gtr-t5-base model <cit.> to retrieve the experiences / knowledge and conversations most relevant to the input. We provide experience or knowledge paragraphs of around 1,500 words, and conversations of around 500 words, for RAG. Considering that questions or opinions raised from the same paragraph may be too similar, in order to avoid data leakage, we encode all input items with gtr-t5-base and remove those with overly high embedding similarities (a minimal sketch of this deduplication step is given below). All items left in the end have pairwise input-embedding similarities of less than 0.95. Based on this, we randomly divide the training and testing data in a ratio of 4:1. The HP dataset has 145k training items and 3.6k test items in total, each with 118 words on average. Materials for Multi-agent Communication. To validate our method in more practical scenarios, we train personified LLMs for Huiyin Lin and John Nash. For Huiyin Lin, we collect the experience and knowledge information (264k tokens) from her biography <cit.>, and collect the style text (65k tokens) from a total of 19 pieces of her own essays, letters, and academic articles found on the Internet. We used her departure from Beijing in 1937 as the boundary for the first stage of her life, and obtained a total of about 3,400 pieces of personified conversation data. For John Nash, we collect the experience and knowledge information (217k words) from his biography <cit.>, and collect the style text (65k words) from a total of 14 pieces of his own academic articles, letters, and interview records found on the Internet. We used his psychiatric treatment as the boundary for the first stage of his life, and obtained a total of about 3,600 pieces of personified conversation data.
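To make the deduplication step above concrete, here is a minimal sketch assuming the sentence-transformers release of gtr-t5-base and a cosine-similarity threshold of 0.95; the example inputs are hypothetical placeholders rather than items from the actual dataset.

import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical generated inputs (ordinary question, induced question, opinion).
inputs = [
    "What did you feel when you first arrived at Hogwarts?",
    "Is it true that you joined Slytherin in your first year?",
    "I think Quidditch is a waste of study time.",
]

encoder = SentenceTransformer("sentence-transformers/gtr-t5-base")
embeddings = encoder.encode(inputs, normalize_embeddings=True)  # unit-norm vectors

# Greedy filtering: keep an item only if its input embedding is not too similar
# (cosine similarity >= 0.95) to any item that has already been kept.
kept_indices = []
for i, vec in enumerate(embeddings):
    if all(float(np.dot(vec, embeddings[j])) < 0.95 for j in kept_indices):
        kept_indices.append(i)

deduplicated = [inputs[i] for i in kept_indices]
print(f"Kept {len(deduplicated)} of {len(inputs)} items.")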
General Instruction Tuning. In order to retain the model's generalization ability to the greatest extent, we randomly sample 100,000 multi-turn English instruction tuning items from the UltraChat dataset <cit.>. For the case study, since one of the personalities is Chinese, we add 20,000 items of Chinese instruction tuning data from Belle-0.5M <cit.> to retain the Chinese capability of the model. §.§ Methodology Details The personified training is structured in two stages: initially, we perform personified conversational tuning using a blend of personified and general instruction tuning data; subsequently, automatic DPO training is applied. During data construction, we use strategies including temporal labels, CoT, and anti-induced conversations. Specifically, we add new tokens (e.g., "TIME-I", "TIME-II") to the model embedding and tokenizer, and insert them into the prompt to distinguish different personified stages. Meanwhile, when annotating the conversation data, we require the LLMs to first analyze and then respond as the target personality, and use specific delimiters to differentiate the two parts (e.g., "[Analysis] The user is asking about ... Therefore, I should correct this error. [Response] I'm afraid you're mistaken..."). As for the anti-induced conversations, we require the LLMs to propose erroneous facts or knowledge through prompt engineering. The detailed prompts are shown in the Appendix. We also conduct automatic DPO training to further align the model to the target personality. To construct the direct preference data automatically, we put forward a hypothesis: responses that correspond to different inputs but are similar to the current response are very likely to be interfering, erroneous items. For example, the same question concatenated with different temporal labels yields different inputs, and the corresponding responses may express very similar attitudes in different tones. Interfering items may also include different personalities' attitudes towards the same event, views from the same personality on related events, etc. Therefore, we use the gtr-t5-base model <cit.> to encode the responses of all personalities in the HP dataset. For each item of training data, we use the original response as the positive item and remove the 10 responses with the highest embedding similarities (to avoid responses with exactly the same meaning). We then select the most similar remaining response as the negative item, so as to widen the model's encoding gap during DPO training. To introduce the personified knowledge more directly, we also mix the existing data with a language modeling / continued writing task over the personified materials. Meanwhile, we randomly allow 1% of the personified conversational data to provide the retrieved materials in the input.
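The snippet below is a minimal sketch of two of the steps just described: registering the temporal label tokens and mining a hard negative for each DPO pair. It assumes a Hugging Face tokenizer/model and the sentence-transformers release of gtr-t5-base; the checkpoint path, token strings, and the tiny responses list are illustrative placeholders rather than the exact artifacts used in this work.

import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import AutoModelForCausalLM, AutoTokenizer

BACKBONE = "MiniCPM-2B-sft-bf16-llama-format"  # checkpoint name from the paper; the exact hub path may differ

# (a) Register temporal label tokens so different personified stages can be marked in the prompt.
tokenizer = AutoTokenizer.from_pretrained(BACKBONE)
model = AutoModelForCausalLM.from_pretrained(BACKBONE)
tokenizer.add_tokens(["TIME-I", "TIME-II"], special_tokens=True)
model.resize_token_embeddings(len(tokenizer))

# (b) Mine a hard negative for each item: encode all gold responses, drop the 10 most similar
#     ones (likely near-duplicates), and take the next most similar response as the rejected item.
responses = [
    "[Analysis] The user asks about my school days. [Response] Those were difficult years...",
    "[Analysis] The user repeats a false rumour. [Response] I'm afraid you're mistaken...",
]  # placeholder: in practice, all gold responses in the personified training set

encoder = SentenceTransformer("sentence-transformers/gtr-t5-base")
embeddings = encoder.encode(responses, normalize_embeddings=True)
similarity = embeddings @ embeddings.T  # cosine similarities for unit-norm vectors

dpo_pairs = []
for i, chosen in enumerate(responses):
    order = np.argsort(-similarity[i])  # most similar first
    order = order[order != i][10:]      # drop self and the 10 nearest neighbours
    if len(order) > 0:
        dpo_pairs.append({"chosen": chosen, "rejected": responses[order[0]]})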
§.§ Experiment Settings Model. MiniCPM-2.4B is a Transformer-based model. We adopt the LLaMa-format <cit.> checkpoint, "MiniCPM-2B-sft-bf16-llama-format" (LLaMa is a widely used open-source LLM), which contains 40 stacked decoder Transformer layers, a vocabulary size of 122,753, a hidden-state dimension of 2,304, and 2.4B parameters in total. Chinese-LLaMA-2-7B is also a Transformer-based model, and Chinese-Alpaca-2-7B is instruction-tuned based on the LLaMa model. They both contain 32 stacked Transformer layers, a vocabulary size of 55,296, a hidden-state dimension of 4,096, and 7B parameters in total. Hyper-parameters. For the tuning process, the authors of MiniCPM-2.4B provided some empirical hyper-parameter values, including a per-device batch size of 32, a learning rate of 5e-5, and 3,000 max steps. We conduct a grid search near the given values and set the batch size to 16, the learning rate to 5e-5, the warmup steps to 50, the weight decay to 0.1, and the max length to 3,000. We repeat the personified data 5 times and mix it with the general instruction tuning data for a total of 1 epoch of training (equivalent to 5 epochs of personified training). The total number of training steps for each model is approximately 3,500. When tuning the larger model Chinese-LLaMA-2-7B, we adjust the learning rate to 2e-5 and keep most of the hyper-parameters the same as in MiniCPM-2.4B tuning. § ETHICAL CONSIDERATION Participants in the experiments in this paper are all adults acting of their own volition, who are aware of and willing to participate in the model chat evaluation experiment and to have their data used for subsequent scientific analysis. The experiment is conducted on an online website, and participants can terminate the experiment at any time. In order to protect participant privacy, the chat records and basic information such as gender and age are saved only after the participants confirm. Personified models are an emerging field, but some work has begun to discuss their ethical issues, mainly focusing on the dangerous information that may be generated by non-universal alignment and the over-reliance that humans may develop when using personified models. For details, please refer to the relevant opinion articles <cit.>. In our work, the target personality is usually a public figure whose surviving text records have generally already undergone ethical review. Meanwhile, the GPT series models used in annotating answers also report measures related to ethical safety in their original papers <cit.>, so such a training process does not introduce new ethical risks. § DATA AVAILABILITY Data that support the findings of this study have been deposited in Google Drive: <https://drive.google.com/drive/folders/1DEliZQD_XU-Ev5eNDU_VgHjxNphqjzJE?usp=sharing>. § CODE AVAILABILITY The code of this study can be obtained from GitHub: <https://github.com/Ellenzzn/PersLLM>. § AUTHOR CONTRIBUTIONS Zheni Zeng and Yukun Yan contributed to the conception of the study and wrote the manuscript; Zheni Zeng and Jiayi Chen implemented the model framework and performed the experiment; Yuxuan Chen implemented the demonstration platform; Huimin Chen led the social scientific analysis and discussion; Zhenghao Liu, Zhiyuan Liu and Maosong Sun led and provided valuable advice to the research. § COMPETING INTERESTS The authors declare no competing interests. § SUPPLEMENTARY INFORMATION §.§ Dataset The detailed construction of the HP dataset is shown in Table <ref>. §.§ Prompt Settings The detailed prompts are shown in Table <ref> and Table <ref>. §.§ Interaction Cases The translation of the cooperation instance between the John Nash agent and the Huiyin Lin agent is shown in Table <ref>. The human-agent interactions are in Chinese, and here we provide English translations of several actual cases in Tables <ref>, <ref>, <ref>, <ref>.
http://arxiv.org/abs/2407.12995v1
20240717202340
How the physics culture shapes the experiences of undergraduate women physics majors: A comparative case study of three physics departments
[ "Lisabeth Marie Santana", "Chandralekha Singh" ]
physics.ed-ph
[ "physics.ed-ph" ]
Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA, USA 15260 § ABSTRACT This investigation is a comparative case study of the physics department culture at three institutions based upon the points of view of undergraduate women majoring in physics. The three studies conducted in the United States include Johnson's 2020 study in a small physics department at a small predominantly White liberal arts college, Santana and Singh's 2023 study at a large predominantly White research institution, and Santana and Singh's 2024 study in a medium-sized physics department at a small predominantly White private liberal arts college. Using synergistic frameworks such as Standpoint Theory, Domains of Power, and the Holistic Ecosystem for Learning Physics in an Inclusive and Equitable Environment (HELPIEE), we compare and analyze the narratives of undergraduate women. Their reflections are valuable for understanding how those in the position of power, e.g., instructors, have important roles in establishing and maintaining safe, equitable, and inclusive environments for undergraduate students. Their accounts help us contrast the experiences of undergraduate women in physics departments with very different cultures. This comparative analysis focusing on the experiences of undergraduate physics majors in departments with drastically different cultures is especially important for reflecting upon what can be done to improve the physics culture so that historically marginalized students such as women and ethnic and racial minority students in physics feel supported and thrive. In particular, this comparative case study can be invaluable for understanding the positive and negative aspects of the physics culture as they impact undergraduate women majoring in physics within these three departments. This analysis can help other physics departments contemplate strategies to improve the physics culture so that all undergraduate physics majors have validating experiences while navigating their physics journey regardless of their identities. How the physics culture shapes the experiences of undergraduate women physics majors: A comparative case study of three physics departments Chandralekha Singh July 22, 2024 ============================================================================================================================================ § INTRODUCTION Physics has been historically notorious for marginalizing certain groups of students, such as women and ethnic and racial minorities (ERM). Reports within the recent decade reveal that only 20-25% of physics bachelor's degrees in the US have been awarded to women <cit.>. Reports from 2020 also show that of physics bachelor's degrees in the US, 3-4% are awarded to Black students and 12% are awarded to Latinx students <cit.>. Understanding this underrepresentation and improving not only the number of physics bachelor's degrees awarded to underrepresented groups, but also their experiences in undergraduate physics programs, has been an area of interest to physics education researchers. Based on findings from previous studies, some challenges that women and ERM students in physics face relate to their identities. These challenges may partly be due to physics programs not having the representation of women or ERM students that is reflective of society <cit.>. This lack of accurate representation, or underrepresentation, can have negative impacts on students, such as leading them to question whether or not they belong in physics.
Previous findings also reveal that sense of belonging is tied to self-efficacy as well as perceived recognition, and that women and ERM students often have negative experiences in physics learning environments <cit.>. We emphasize that these lived experiences are very important in understanding how marginalized students navigate their physics journey. These negative experiences can lead to gaps in performance and psychological factors (e.g., self-efficacy, belonging, identity, etc.) disadvantaging traditionally marginalized students in physics further. Previous studies have found that amongst men and women students, gaps in performance as well as in their physics psychological characteristics are prevalent throughout high school and college. Researchers found gender gaps, such as women having lower perceived recognition than men as a "physics person" from their peers and instructors <cit.>. This gap in perceived recognition can negatively affect women's academic performance <cit.> and influence their decisions to leave the field <cit.>. For example, if women believe that their instructors or peers do not see them as being good at physics, this negative perceived recognition can make them question whether or not they can excel in physics <cit.>, i.e., low or negative perceived recognition may lead to low self-efficacy and vice versa. Physics is not only notorious for the marginalization of certain student groups, but also for the masculine nature of its social culture <cit.>. At many institutions, this type of culture is still prevalent, and many physicists still uphold the belief that physics is only for smart men. Although social culture is difficult to change, and it takes a long time for change to occur, the field of physics education can play an important role in this cultural change. However, a critical number of physics faculty in a given department need to be on board for this change to be catalyzed and sustained. It is highly concerning that among the natural sciences, physics has the worst stereotypes regarding who belongs in it, who can excel in it, and what a "traditional" physicist looks like <cit.>. For example, based upon stereotypes, a traditional physicist needs to be a genius; thus, physics is associated with brilliance, which is typically attributed to men <cit.>. In combination with lack of representation, student performance can be negatively affected by these negative stereotypes <cit.>. These two factors may further reinforce each other, especially if the physics culture is masculine. For example, if ERM students don't see others like them, or don't have role models who look like them, they are continuously reminded, e.g., that physics is a domain belonging to White men, which can ultimately lead them to not want to continue in physics. This also portrays an image that one needs to be of a certain demographic group, or should be a genius, in order to belong and succeed in physics <cit.>. This masculine culture continues to harm women and ERM students such that they may become isolated due to a lack of role models and of a community that could provide support to them <cit.>. Prior research also shows that women drop out of STEM disciplines with significantly higher overall grade point averages (GPA) than men <cit.>. Thus, we can rule out GPA as the reason women students do not continue in STEM majors. There have to be other underlying reasons for their leaving.
The unsupportive physics culture that does not take into account students' lived experiences exacerbates the negative impacts of stereotypes and biases about who belongs in physics, as well as of the lack of role models. From prior studies, sense of belonging and self-efficacy in physics are closely intertwined <cit.>. To improve a psychological factor such as self-efficacy of students in physics courses, which is a multifaceted construct <cit.>, instructors need to create an equitable and inclusive learning environment. Transforming a physics learning environment is particularly likely to increase underrepresented students' sense of belonging, which can in turn increase their self-efficacy and improve their retention <cit.>. For example, social psychological interventions are small-scale transformations within classrooms that may be a good first step to combat the masculine culture of STEM disciplines <cit.>. In order to create this safe and inclusive environment for students, physics instructors must be committed to these issues. Instructors have the power both within the classroom and beyond it. For example, instructors are responsible for setting the tone of what is acceptable behavior for students, how students should be treated, and establishing classroom norms for peer interactions. Another important aspect to emphasize is that in many physics programs, men make up the majority of both physics majors and physics faculty. Hence, they have both the power and the influence to make change. Thus, it is important to keep these factors in mind as physics departments work to understand and improve the experiences of underrepresented students in physics, such as women and ERM students. It is the traditionally marginalized students who can shed light on how they navigate the existing physics culture of their departments, their interactions with others in physics, whether or not they feel supported, and how the physics culture impacts them. Qualitative comparative case studies that focus on undergraduate women, such as the one we present in this paper, can be valuable for learning about the physics culture, and about whether learning environments are equitable or not, directly from students who are from traditionally marginalized groups in physics. Through their narratives, we can understand undergraduate women's perspectives in their own voices, listening to their experiences in their own individual contexts and reflecting upon them to improve the physics culture. Previous studies in a variety of contexts in physics education research have shown the impact and usefulness of student voices. For example, some prior studies using interviews include those focusing on graduate women experiencing sexism and microaggressions in their programs <cit.>, women in undergraduate labs discussing dynamics of working with male partners <cit.>, how students of color are negatively affected by stereotypes in physics <cit.>, graduate women of color negatively impacted by stereotypes <cit.>, and undergraduate women experiencing a negative masculine physics environment <cit.>. Work by Gonsalves and Danielsson has focused on various issues involving gender in physics, e.g., how women do gender vs. do physics and what traits (feminine vs. masculine) are associated with physics competence <cit.>.
Describing a physics environment as masculine, their work demonstrates characteristics that are associated with men, such as physics having a competitive culture, physicists exhibiting condescending behaviors, and the pervasiveness of stereotypes regarding who can do physics and what a stereotypical physicist looks like. These studies reveal common stories that show that women and women of color experience hostile environments within physics departments. Educational institutions, such as universities and colleges, vary in size, and the physics departments at different types of institutions also vary in size. The size of a physics department is one factor that can impact how easy or challenging it is for a department to cultivate a physics culture which is different from the prototypical culture <cit.> and whether students from traditionally marginalized backgrounds such as women attending these institutions feel supported. In particular, it is important to understand how students who do not identify with those who are dominant in physics, such as women, navigate the physics culture, especially in departments of different types and sizes, such as large research institutions vs. small teaching-focused colleges. We also note that researchers such as Kanim and Cid have called for physics education researchers to design studies that focus on different types of institutions to create a better representation of educational research to benefit the broader physics community <cit.>. They have pointed out that the findings of physics education research may be very different for different types of institutions <cit.>. This study is a comparative case study concentrating on the physics culture as primarily narrated by undergraduate women in physics departments of different types and sizes. We note that in the literature, apart from Johnson's study <cit.> in a small physics department at a small liberal arts college, and Santana and Singh's study <cit.> in a medium-sized physics department at a small liberal arts college, there are few, if any, other qualitative studies that focus on the experiences of women physics majors in physics learning environments at small liberal arts colleges with physics departments of any size in the US. Furthermore, we chose another one of our studies, which is the only qualitative study we are aware of that focuses on the experiences of undergraduate women majoring in physics <cit.> at a large research university in the US. Thus, this paper presents a comparative case study in which we compare three qualitative studies to understand the physics culture primarily as perceived by the women physics majors in those departments in the US. The first study is Johnson's study, in which she investigated elements of a highly supportive undergraduate physics program that support students, especially women of color <cit.>. We will refer to Johnson's study as Study 1. This study is of great interest because of the positive elements that other physics departments should try to emulate. The second study, conducted by us, revealed a masculine physics culture that did not support undergraduate women <cit.>. We chose this study for this comparative analysis as women physics majors described an environment opposite (not equitable or inclusive) to that described in Johnson's study. We will refer to this study as Study 2.
The third study that we use for this comparative analysis is one we conducted which shows a physics culture somewhere between those manifested in Study 1 and Study 2 as narrated by undergraduate women in physics and shows how women with a variety of intersectional identities navigate a medium-sized undergraduate physics program at a small liberal arts college <cit.>. The women in this study had mixed experiences. We refer to this study as Study 3. In Ref <cit.>, we provided a very brief summary of the comparison of these studies in the broader discussion section using the Domains of Power framework. In this work, we elaborate on that and provide a more in-depth comparison of the physics culture of three physics departments in the US based upon undergraduate women's reflections in these three studies primarily using the Domains of Power but also taking inspiration from other synergistic frameworks. The goal of this comparative case study is to highlight the differences and similarities amongst these three US institutions of different types with regard to their physics cultures primarily using undergraduate women's accounts of their experiences navigating the physics learning environments (although we take advantage of interviews with faculty as additional evidence to support the accounts of the physics culture narrated by undergraduate women in Study 1). This comparative case study can help physics departments contemplate how to improve their own physics culture. There are important positive and negative elements described in studies 1-3 which have important implications for any physics department. § METHODOLOGY USED IN THE THREE STUDIES Before we describe the methodology for comparative case study discussed here, we first summarize the frameworks and methodology used for each of the three studies to set the context. We note that all three institutions are predominantly White institutions, meaning people of color are additionally underrepresented in their programs. §.§ Frameworks We first summarize the three frameworks used in the three studies. The three frameworks are Standpoint theory, the Domains of Power, and the Holistic Ecosystem for Learning Physics in an Inclusive and Equitable Environment (HELPIEE). §.§.§ Standpoint Theory Standpoint theory is a critical theory that focuses on the relationship between the production of knowledge and acts of power <cit.>. It is related to other critical theories in that it centers around the standpoint or voices of the underrepresented groups that do not have the same privilege as the dominant group in order to gain a clearer understanding of their struggles. In this framework, the emphasis is placed on the experiences of undergraduate women in physics to understand what physics departments and instructors can do to improve the physics culture such that they feel safe and supported <cit.>. The nature of all these qualitative studies that we compare here to gain deeper insights into the variety of physics cultures in those physics departments aligns well with Standpoint theory. It is through Standpoint theory, that we gain insight to how women perceive and experience their respective physics environment. Therefore, using their stories, we gain information about the people and settings they interact with. Their stories also reveal how power is organized in a physics department. §.§.§ Domains of Power Collins introduced four domains as necessary to understand how power is organized in a particular context <cit.>. 
These contexts are essential for understanding what the setting says with regard to who belongs or who has opportunities. She argued that the four domains we need to consider are: interpersonal, cultural, structural, and disciplinary. The interpersonal domain focuses on where power is expressed between individuals. The cultural domain focuses on where group values are expressed, maintained or challenged. The structural domain focuses on how power is organized in various structures. The disciplinary domain focuses on how rules are enforced and for whom. Figure <ref> illustrates the four domains. Within the Domains of Power framework, it is the instructors who have the power over shaping the narrative for each of these domains (which interact with each other). Johnson <cit.> noted that in the context of student experiences in learning physics, the interpersonal domain refers to how students communicate with each other and how students and faculty interact; the cultural domain refers to the physics culture; the structural domain refers to the structure of a physics learning environment; and the disciplinary domain refers to how instructors discipline students in the physics courses and in the physics learning environments in general, if their conduct does not conform to the expected norms. Table <ref> is an excerpt from Study 1, showing how the Domains of Power are applied to a prototypical physics department. As a concrete example, in Study 1 <cit.>, the structural domain was analyzed on a small scale focusing on classroom structure. §.§.§ HELPIEE The Holistic Ecosystem for Learning Physics in an Inclusive and Equitable Environment (HELPIEE) framework posits that those in the position of power, e.g., instructors in physics classrooms, have the power to help all students feel supported by carefully taking into account students' characteristics and implementing effective pedagogies in an equitable and inclusive learning environment <cit.>. Within this framework, if students from different demographic groups in a course do not have similar positive experiences and feelings of being supported, the learning environments are not equitable because those in the position of power did not provide adequate support to level the playing field. Both the HELPIEE and Domains of Power frameworks are synergistic. From these two frameworks, we see that in the context of physics courses, it is the department and instructors' responsibility to create an equitable learning environment. Together, these synergistic frameworks are useful for analyzing these studies with regard to their prevailing physics culture and comparing them. §.§ Study 1 In 2020 <cit.>, Johnson conducted open-ended interviews at a small liberal arts college with 6 undergraduate women majoring in physics, 4 physics faculty, and a focus group consisting of third year physics students. As noted earlier, although our focus here in the comparative case study is primarily on undergraduate women's voices, we take advantage of interviews with faculty as additional evidence to support the accounts of the physics culture narrated by undergraduate women. The student interviews lasted for about an hour. She asked undergraduate women during the individual interviews open-ended questions like “Tell me your life story in physics." Out of the individual interviews with undergraduate women, three were women of color and three were White women. At this institution, 25% of physics majors are women. 
For focus groups, she asked students about their trajectory in physics, what students like about their physics classes, what could improve, and what ideas they might have for attracting more women to physics. In faculty interviews, she asked questions such as: “What does it mean for someone to be a good physics student? What do you do as an individual to support and teach students? What has it been like for you to be faculty at this institution?". Johnson used the Domains of Power framework to understand intersectional physics identities of students. She sought to identify and understand the characteristics of a physics department such that women of color felt included and successful. §.§ Study 2 In Study 2, Santana and Singh conducted semi-structured, empathetic interviews with 16 undergraduate women physics and astronomy majors at a large research university. At the time when the interviews were conducted, this number of women interviewed represents approximately 60% of the women students in the physics and astronomy department (23% women). Six out of 16 interviews were analyzed due to the range of their experiences. At this institution, there are about 50 full-time faculty members. Each interview was an hour long in duration. The interviews followed protocols set by the researchers prior to conducting the interviews <cit.>. Some of the main overarching protocol questions were about students' high school experiences in physics, experiences with college peers and instructors, who or what supported them, and suggestions that might help improve their experiences. These interviews utilized Standpoint theory to highlight undergraduate women's experiences in physics. The goal was to investigate the physics culture in this department based upon undergraduate women major's accounts. §.§ Study 3 In Study 3, Santana and Singh conducted a study similar to their previous study (Study 2), i.e., semi-structured, empathetic interviews with 7 undergraduate women physics and astronomy majors but at a medium-sized physics department at a small liberal arts college in the US. At this college, women are underrepresented in physics, but not to the extent as they are underrepresented at the national level <cit.>, i.e., they make up about one-third of the physics majors. At this institution, there were 9 full-time faculty members and 4 visiting/adjunct faculty members in this physics and astronomy department. The number of physics majors that graduate each year varies from 10-20. In order to capture a wide range of experiences, we interviewed women who were at various points in their physics trajectory in college, e.g., second to fourth year students. Each interview was about an hour long in duration and was recorded via Zoom. The interviews followed the same protocols as Study 2 set by the researchers prior to conducting the interviews <cit.>. The women in our study volunteered to participate in the interviews. The Domains of Power and HELPIEE frameworks were used to analyze instructors' role in establishing an equitable and inclusive learning environment based upon the accounts of undergraduate women physics majors interviewed. § POSITIONALITY We now summarize aspects of each researcher's identity that may have impacted how they conducted the interviews and analyzed data from their studies. In Study 1, Johnson is an education researcher, and is a faculty member in a teacher education program. She is a former high school physics teacher. In Studies 2 and 3, both authors are physics education researchers. 
Santana identifies as a queer Chicana woman. Singh identifies as an Asian-American woman. Together the three researchers have a wide range of experiences in academia. It is important to note these identities as they are also intersectional and were useful when coding and analyzing the intersectional identities of some of these undergraduate women in physics. All researchers have had several prior experiences conducting and analyzing qualitative studies focusing on equity and inclusion in physics learning environments. § METHODOLOGY FOR THIS COMPARATIVE CASE STUDY For this comparative case study, we primarily use the Domains of Power framework as a lens for comparing the three studies, but the Standpoint theory and HELPIEE frameworks, which are synergistic, were also valuable. Study 1 and Study 3 both already used the Domains of Power framework in their analyses, so we did not need to re-analyze the data with this lens again. However, because Study 2 was only analyzed using Standpoint Theory, we began by discussing, re-analyzing, and classifying data presented in Study 2 using the Domains of Power framework. To begin this re-analysis, we discussed how the interview data from Study 2 fell into each of these domains within the Domains of Power framework (see Figure <ref>). We note that one of the overarching Analytic Themes in Study 2 was focused on women's interactions with their peers and instructors. These accounts from undergraduate women majoring in physics not only informed us about how power in the physics department in Study 2 is organized in the interpersonal domain but also about the other domains, as the four domains of power can be intertwined. Other big themes, such as the Suggestions to Improve Undergraduate Women's Experiences in Study 2, also inform us not only about the disciplinary domain but about the other domains as well. Therefore, we had multiple discussions about these issues as we tried to disentangle the domains captured in the quotes and converge on the domain of power that was represented best by a given quote. After multiple discussions, the researchers converged on the classifications of quotes into different domains of power (although, since there are overlaps between the four domains, various quotes can potentially be classified as relevant for multiple domains). Then, we continued to organize the quotes from each study to describe the four domains of power and what these data from the three studies tell us about the overall physics culture. We note again that although in this comparative case study our primary data are undergraduate women's accounts in the three studies, in Study 1 we took advantage of interviews with faculty as additional evidence to support the accounts of the physics culture narrated by undergraduate women. Thus, in this comparative case study, in the findings described below, the data from each institution are organized into the different domains of power. After presenting the data from the three studies according to the domains of power, we situate all three studies on a spectrum in the Discussion section. § FINDINGS IN EACH DOMAIN Here we provide some quotes from each study within each domain of the Domains of Power to illustrate the overall physics culture in each department. Since Study 1 was the only study to also conduct interviews with faculty, we will use those data to supplement the accounts of undergraduate women majoring in physics.
§.§ Interpersonal domain §.§.§ Student-student interactions In Johnson's study, a second year student summarized her interactions with other physics students: “Sometimes I'll be in [the physics building]. So I can just like ask a question. Maybe not my class, but it could be an older physics student that could help me. They're super nice! [Q: just women? or women and men?] Either one. Whoever's there. I think I could ask any of them. We're kind of all in it together, why wouldn't they help me? They know what I'm going through – they've done it themselves!” From this quote, it is clear that students can seek out other physics students, regardless of whether they are in their cohort or not, to ask for help. It seems like there is this camaraderie amongst all physics students, “We're kind of all in it together" because “They know what I'm going through." This peer support can have a large impact, as the peer community is welcoming. Students have a good understanding of what the other members of the cohort are experiencing, so they can relate to them. From this study, we see that the physics culture in this department is such that women have positive interactions with other physics peers regardless of their gender, describing them as friendly, and super nice. In Study 2, there are many accounts of women having negative experiences with their male peers in different contexts. For example, a senior student describes a male peer as condescending and not someone she would have chosen to work with because he would ignore her inputs and "mansplain" concepts to her (mansplaining is a term that describes men explaining something to a woman in a condescending or overconfident way <cit.>). She says, “I would say something and he'd ignore it and I would end up being right and he wouldn't acknowledge that...he ignored me, or I would say something and he would go, No, no, no, no, and then he would...explain it to me, like the same thing that I said, like in different words.” From this quote, it seems like this male student is not receptive towards this student's input. In this physics culture, there seems to be a lack of acknowledgement of this woman's contribution towards the physics problem-solving process. On the other hand, in Study 2, female students view their female peers as sources of support. Another senior student states: “...my saving grace that... gave me like confidence and, like the resources that I needed to-to do well in the major... overall [is] my group of friends that I have. I have such an amazing group of four girls that we've all been in the same class together... [those] people are like my rocks and, like my support.” From this quote we can gather that women lift each other up and form support systems. The culture created by these women seems to contrast with the one which is created by male students. Thus, there is a clear dichotomy between male and female students in this regard. In Study 3, we note that many women described having preferences towards working with female peers, or occasionally male peers only if the two were friends. 
For example, one student explains that she mostly works on homework with other female students because of the negative experiences she has had with male peers: She says: “I would generally choose a woman just because...I remember that last semester we would, in one of my physics classes, if me and my partner had different answers, it kind of felt like he'd automatically assume that he was right, which wasn't always the case, and to me that just felt like a very male thing to [do]- So I guess I do generally would select to study with a woman." Again, through this quote, we get a sense that male peers have this aversion or lack of reception toward their female peers' input during the problem-solving process, similar to Study 2. This student marks this behavior, of “automatically” assuming that he was right as a “very male thing to [do].” We can infer that in the type of physics culture manifested in this department, female students recognize this masculine behavior and choose to not surround themselves with it. Moreover, in Study 3, there were several quotes from a woman of color regarding how she felt perceived negatively by her peers. She claims that it is not just men who perceive her this way, but also other women in her physics courses. She says, “even amongst a group of other women, there's also been this expectation that I am just not up to par with them, because [of] my racial background and so I feel there's some intersections in terms of my experiences in STEM with women...I definitely know that beyond the realm of just gender, race also plays a really big role in it.” From this student's perspective, there might not be a dichotomy between White male and female peers because she feels judged by both groups. We also note that this type of physics culture can be extremely difficult to navigate for a student who is one of the few people of color in her program and constantly thinks about how others are perceiving her. §.§.§ Student-faculty interactions Several women in Study 1 reported that the faculty are accessible and used words such as “nice" and “helpful" to describe them. It is important to note that a student reported that both their male and female faculty are accessible. One student said that their research advisor “is like the nicest professor I've ever met in my life... if you do something wrong, she doesn't even frown at you. She's so helpful...everyone loves her...I've heard the general physics kids really like her, and they're the ones who are forced into taking physics and don't actually want to! It seems like she's found a way to really explain things well and get people to be productive without being mean or making you feel bad." From this student's account, it seems like the physics culture is such that faculty are kind when students make mistakes and help them. In this student's view, this faculty member is good at explaining concepts and does a good job at engaging students, especially those in general physics (i.e., the students who need physics as a requirement such as life science majors). Not only did students feel comfortable talking to their professors, they explicitly said that they did not feel any more “intimated by the male professors more than the female professors.” Johnson also remarked that “this accessibility is not a coincidence; the faculty make a deliberate choice” <cit.>. 
From faculty interviews that Johnson conducted, one faculty member said: “I try and make myself really open to if they have questions – just trying to be around the department, so they can find me and ask me if they have questions." We see that the physics culture in this department is such that the faculty are making an active attempt to be accessible to students, e.g., by being in their office so that students can drop by and ask them questions. In Study 2, there are many quotes from undergraduate women about how their male instructors created and fostered negative environments in several setting, similar to male peers. One quote was about a male instructor enforcing negative stereotypes about women in the classroom by calling out a group of women who did not complete the assigned reading. One student recalled: “[My instructor] called on one of [the women] and she didn't know the answer. And he was like, Did you read the book? And she said, No, I haven't read it yet. And she said no, just like everyone else in the class had and then he was like, What? so all of you are just in college for the social aspect?...like suggesting that maybe like they're only going to school...because they want like the image or that they have ulterior motives or that they're not really passionate, hardworking scientists, which they absolutely are." This accusation by the instructor may reveal biases or negative stereotypes about women, e.g., that women are not as hard working. A few women in Study 2 reported that they had supportive instructors. For example, one senior student describes how her quantum professor made her feel supported. She said that her quantum professor does “...an excellent job at like, seeing like, what the students want and like making sure that they're understanding what he’s saying, so I feel like really supported in that environment, especially like [when I] go to office hours, and they are encouraging.” This same student also describes another instructor in a similar way, thus revealing that there are a few instructors that are perceived positively by female students. Women in Study 3 had more positive perceptions about their instructors than in Study 2 but some students had mixed experiences. For example, one student described her lecture instructor intervening when she was working with a condescending lab partner, but her lab instructor did not. She recalled: “The instructor for my class actually did notice that the student was being very difficult to work with and that he wasn't collaborating with the group, and she I think spoke to him about it." She explains that her lab instructor did not notice because: “...a lot of our experiments were done outside of the physical classroom because, again with COVID, it was hard to sort of build those relationships with professors, so I think that was a part of the reason, but I also think another reason is when you're submitting work on time and when things are looking fine, a lot of professors don't go out of their way to look at the problems and a lot of professors, when necessary, will assume [things are fine] or [don't] want to have to deal with anything like that." From this student's quote, we get a sense that some faculty may not go out of their way to check in on students, thus leaving them unaware of any issues students face, especially in their interactions with each other. Some students in Study 3 said that they feel comfortable around their instructors and perceived them as helpful. 
For example, one student explained that she doesn't attend office hours, unless she has a specific question. However, when she has gone, she has had positive experiences. She said attending office hours, “was definitely helpful.” She went after taking an exam. She explains: “I had a question [about] something I did wrong, so I went and she helped me explain why it was wrong and then guided me through the correct answer, I think it was definitely helpful." Thus, we also see some variations in the experiences of women in this study. Some instructors are helpful but some are unaware of student dynamics. §.§ Cultural Domain In Study 1, Johnson reported the culture of students working together in a supportive environment. She also noted that from the focus group with students it was clear that students loved the physics building. For example, Johnson asked the focus group: What characterizes majoring in physics?. Some of their responses were: “The physics building." “We live here", “If I wasn't a commuter I'd be there 24-7", “Spending more hours in the physics building outside class than you do sleeping", “Is there life beyond this building? That's the question." Thus, there is something about the physical space itself that creates this positive physics culture that student love to be around. In faculty interviews, Johnson asked: What does it mean for someone to be a good physics student?. One faculty member said: “to be curious and work hard. Curious in that you ask questions – you have to have enough confidence to ask questions, and realize – it does take some confidence to realize you can ask questions and it's not a bad reflection of you.” Another faculty member said: “We want everyone to be good physics students, but they don't have to all be great physics students. They have to be successful at acquiring various useful skills. They're not all going to be physicists, and we want them to be productive and happy." These quotes from faculty reveal that they believe there are different ways to be a good physics student but in general think they should be able to ask questions, be resourceful, and be productive and happy. Faculty do not expect students to end up as professional physicists, but do expect them to develop important skills such as working hard and collaborating with others. In Study 2, we get a strong sense of what the physics culture is like in this department not only from student and faculty interactions, but also from how instructors teach their courses. For example, one senior student explains that in her introductory physics courses, her instructors use disparaging language such as: This is trivial, and you should know this, right? She added: “I felt like the...disrespectful behaviors that compose the culture and physics were taught in my first year at [my college], through like, professors using this language in their lectures that other people started to pick up on from their use." From this account, we get a sense of how certain negative behaviors can be disseminated by professors in the classroom. In Study 3, we earlier illustrated through student interactions that the physics culture has several masculine elements such as students being condescending towards their female peers, or examples of mansplaining. 
For example, one second year student shares that despite many of her male peers being helpful, some of them are condescending: “A lot of the guys in my class are so helpful and so willing to help and don't approach things in a condescending manner at all, but then there are others who act like they know it all...I don't know if they do it because I'm a woman, but sometimes it feels like that, where they're like mansplaining so often in and outside the classroom, there is definitely a culture of mansplaining." Here we see this student having mixed experiences with her peers, showing that some of them are “so willing to help". However, there are still some students who mansplain. From her quote, it also seems like there is evidence of a culture of mansplaining not only inside the classroom but also outside. Several women in Study 3 describe opportunities to work collaboratively with their peers and some show preference for working alone. However, it is concerning when students who are marginalized due to their identity (like women, women of color, etc.) are excluded from these collaborative opportunities. For example, we get some insight from one woman of color who feels alienated by her peers. She explains that during group work, her peers “...will kind of look at me and if we have to engage in sort of group work, there's usually not a lot of listening happening on their behalf. I think there's like this assumption that I'm, because I am a woman of color in STEM, I don't necessarily have the same background or knowledge as they [do], so a lot of the times I'll bring [up] points and it'll just be ignored and that’s something that I've had to deal with a lot." Thus, it may be part of the physics culture in this department that other students do not take women of color or students of color seriously which may further exclude them in group settings. These behaviors of exclusion may be due to biases against women of color. §.§ Structural Domain In Study 1, we note that students report that their classes are interactive and incorporate group work. For example, one student said: “Most of the teachers will lecture for a half hour, 45 minutes, and then at some point in there ends up being some partner work or someone goes up to the board and works through a problem. So it's very interactive, instead of just being talked at. It's more a conversation with everybody in the class and the professor than information being thrown at you, because that’s not helpful. I think that's a big thing to me as to why I enjoy it and I have learned so much from it." We see from this quote that the student actually enjoys the interactive elements that are structured into class and finds them to be useful in their learning. Because Johnson interviewed faculty members, she was able to get their perspectives as to how physics classes are structured. Faculty emphasized that group work is built into their courses and that “lectures are very strongly de-emphasized in favor of a lot of group work, a lot of interaction between faculty and students, fairly short lab exercises which are pertinent to the material being taught rather than having separate lab sessions which could be a week behind or a week ahead of the class material at the time." Johnson reported that the physics faculty make use of physics education research through “constant department-wide use of both formative and summative assessment." Thus, this department strongly emphasizes teaching interactively which is backed by research. 
In Study 2, most faculty choose to teach in a traditional style, mainly lecturing, while few faculty provide opportunities for collaborative work, such as group problem solving or clicker questions. There are also few instructors who incorporate active learning (e.g., flipped classrooms). We confirmed this information with instructors at this institution. In Study 3, many of the women described opportunities for group work during class. We note that the importance of group work came up not only in response to questions about peer interactions, but also in their suggestions for instructors. Specifically, some women called for their instructors to encourage and incorporate more peer collaboration into the class structure. For example, one student suggested that instructors should change how in-class groups are formed by: “assigning people to different groups and having rotations where people aren't always stuck with the same group but also have enough time working with other people to break down these barriers..." This student argued that implementing rotations would allow students to get to know many people in class and can introduce them to students with different backgrounds, and thus “break down these barriers" that may arise due to differences in background or identity. §.§ Disciplinary Domain In Study 1, Johnson reported many instances where faculty were “reprimanding students who failed to work equitably in groups.” It is important to note that faculty in this physics department value that everyone learns during group work, as opposed to working efficiently. One faculty member recalled working with a student who was dominating group work during a lab. She said that this student was controlling all the materials, so she told him that he had to let other people have a chance, at which point he backed up and stood far away from his group. She told him he did not have to stand so far away but added, “You can’t only participate when you're building, that's not OK. It can't be I'm either in charge or I'm out of here, guys." This is an explicit example of faculty members taking action when students are not working according to set norms which the faculty member modeled. In Study 2, we note the lack of disciplinary action from faculty in the peer and instructor interactions described by the women. However, the women also explicitly call this out in their suggestions for faculty members. One student suggested that instructors should be responsible for establishing and maintaining a positive learning environment. She said, “I want [instructors] to be obligated, when they witness what happens in their classes with other students, to confront them, the students who [engage in microaggressions] because the problem just isn't with professors, it's with professors and students-students feel like they can do that-these things after they see their professors do it." This student suggested that instructors should be responsible not only for their own actions, but also for confronting students who commit microaggressions in their classrooms. Through faculty interactions and class structure, some women from Study 3 feel that faculty members are not aware of issues of misconduct and therefore cannot take disciplinary action. One of the women described how faculty members seem surprised if a student speaks up about not feeling represented in the learning space. 
She said that professors were “surprised by students speaking up and by students feeling this way, and so I guess after that, they had a conversation and the next class time for example [professors will] try to make it known to students Hey, we should be engaging with each other, or trying to discretely say...there should be more interactions with individuals that aren't necessarily White or that are women in STEM and so I think that's been extent [of] things usually, when a student speaks up about it I've seen temporary discussions of it with professors and maybe in the classroom setting but it feels almost as if, a few weeks later it kind of goes back to what it used to be." This quote suggests that when faculty take any disciplinary action (if ever), the effects are very temporary and students revert to their old behaviors. Also, it seems like this disciplining may only occur when students complain about something and it is not reinforced. § DISCUSSION Based on the example quotes from women in each study, we get a sense of what each physics department culture is like from the perspective of undergraduate women (supplemented by faculty interviews in Study 1), despite there being limited information. Consistent with Standpoint theory, comparing undergraduate women's voices in this comparative case study helps us to understand how they experience the overall physics culture in their departments so that other physics departments can contemplate how to make their learning environments equitable. Furthermore, utilizing all the three frameworks emphasizes that the voices of underrepresented groups such as undergraduate women in physics should be highlighted and that classroom instructors and faculty members in general have power in several domains to create safe and equitable learning environments. With that being said, we use for broader comparison the table provided in Johnson's study, Table <ref>, which describes the physics department in Study 1 in the context of the Domains of Power compared to a prototypical physics department. This table can be valuable for comparing the three studies (see Table <ref>). It is also important to emphasize that the four Domains of Power interact with one another so the quotes discussed in one domain may inform the physics culture in multiple domains. §.§ Interpersonal domain Johnson's study illustrates a physics department where students are willing to help each other. Students in her study perceived other students and instructors as welcoming. They could go to others when they struggle (also hinting at the physics cultural domain). From faculty interviews, it seems like faculty go out of their way to be available so students can ask questions in addition to encouraging questions from students (also hinting at the supportive physics cultural domain). Johnson emphasized that the interpersonal domain described in her study was opposite of the one in a prototypical physics department, see Table <ref>. On the other hand, based on the interpersonal domain pertaining to student interactions in Studies 2 and 3, it appears that those departments' culture emulates a prototypical physics department to different degrees, where some students are often excluded from meaningful collaboration by those from majority groups (e.g., male students). 
We note that grounded in Standpoint theory, which emphasizes the points of view of traditionally marginalized groups such as the undergraduate women, even without interviews with physics faculty members, Studies 2 and 3 provide a reasonably good representation of the interpersonal domain for women physics majors. Although Study 3 described a department with a better physics culture than Study 2, in Study 3, a woman of color described being isolated to the point where she would not work with White peers regardless of gender because she felt discriminated due to her racial identity. We also note that in Study 3, there were many accounts of men mansplaining and being condescending to women (again hinting at the physics cultural domain). §.§ Cultural domain In regard to the cultural domain, Study 1 reveals a collaborative environment, created and enforced by faculty members. Students chose to collaborate and recognize its value despite describing themselves as antisocial. There is a culture in this department where students can collaborate without any anxiety. We also see evidence of a culture within the physical space of the physics building itself, where students in the focus group described spending a lot of time inside doing physics. In regard to the social culture, faculty believed that being a good physics student means being curious and hard working. Faculty and students both acknowledge that students can be wrong, in fact, it is anticipated while learning physics. Even students recognized that faculty won't “frown” at them if they are confused. Faculty members also acknowledged that not every physics student would become a physicist, so they encouraged other pathways besides applying to a Ph.D. program. This physics culture may put less pressure on students to pursue career options that do not include academia or careers as professional physicists in national labs or physics-related industries. This rejection by faculty of a traditional image of a physicist by accepting and promoting alternative ways to be a physics person can potentially boost students' sense of belonging. Students in this department can be accepted for being themselves even if they choose to not pursue a physics Ph.D. The physics departments described in Studies 2 and 3 seem to reflect more prototypical physics culture (especially the one in Study 2). For example, there are many instances of mansplaining and students being treated like they should know certain physics concepts. Thus, such departments may foster a culture that tells students that they need to be geniuses to succeed in physics and that good physicists simply do not struggle. Although some faculty members from Studies 2 and 3 (especially in Study 3) seemed very welcoming to questions, some of these undergraduate women students still felt that they could not seek out help. Women students from Study 2 seem to isolate themselves from men. This can again be due to the physics culture fostering and reinforcing mansplaining and encouraging men to be condescending towards their female peers. In Study 3, the woman of color, Paulina, described herself as an island who did not want to collaborate even with women. §.§ Structural domain The structural domain (e.g., class structure) of Study 1 appeared to be very interactive and collaborative. Because group work was incorporated into the class structure, students had many opportunities to work with their peers. 
This itself can positively impact the cultural domain, allowing students to feel safe amongst their peers to ask questions without the fear of being wrong or judged. It also seems like all faculty make an effort to utilize surveys and assessments that are validated and backed by research. Study 1 contrasts with the other two studies, especially Study 2, in which courses were taught more traditionally. In Study 2, there are many faculty members in the physics department so it is difficult to have a standardized method for teaching. There is some evidence from interviews of active learning elements such as group work and the use of clicker questions during lectures. However, this is the choice of individual instructors. In Study 3, there is more evidence of collaborative work implemented by instructors. Based on peer interactions, it appears that some instructors incorporate group work. However, like Study 2, it seems to be more of the instructors' choice as opposed to a standardized or common practice, according to students' suggestions for their instructors. §.§ Disciplinary domain We emphasize the stark differences in the disciplinary domain between the studies discussed here. Study 1 has direct evidence of faculty disciplining students due to failure to work equitably in groups. According to Johnson, male faculty take responsibility for gender issues <cit.>. Thus, women faculty do not bear the burden or responsibility for addressing gender issues, such as sexism. This physics department culture also corresponds to one in which the women physics students believe their faculty would protect them from negative interactions with peers. It is unclear how consistently faculty members have to address disciplinary issues that arise repetitively, but Johnson emphasizes that in order to construct a collaborative structural domain (class structure), “it requires constant maintenance.” In Study 2, there seems to be a severe lack of disciplining from instructors. This was a major issue voiced by women in Study 2, and they suggested that instructors need to reprimand students for misconduct. It could be that the physics cultural domain is so prototypical and negative behaviors are reinforced so much that disciplinary action seems taboo or against the departmental culture itself. For example, in Study 2, women often felt marginalized by their own instructors, and some felt that the instructors are responsible for modeling negative behaviors towards women. Thus, the instructors were less likely to recognize and discipline such behaviors when male students displayed them. Study 3 is different from both Studies 1 and 2 in that the women commented that their faculty seemed oblivious to issues between students even though they themselves were helpful and supportive. This type of unawareness of negative interactions between students with different identities suggests that many instructors are not “primed" or prepared to recognize issues in their classroom. As one woman said, “if they're not aware of [issues]...there's no way for them to personally intervene...” Thus, there needs to be more awareness before faculty can take action. It is also interesting that some students noted that if faculty are made aware of issues in their classrooms, they address them superficially so that this does not have a lasting effect. This lack of lasting effect can be evidence for a physics culture that reinforces and models the prototypical image of a physicist. 
§ CONCLUSIONS In this paper, we carried out a comparative case study primarily using the Domains of Power framework but also drawing inspiration from other synergistic frameworks. All of these studies use qualitative methods to investigate undergraduate women's experiences in physics and astronomy departments and how their accounts portray different types of physics cultures. The first study is Johnson's study in a small physics department at a small predominantly White liberal arts college. Her study illustrated an overwhelmingly supportive and positive physics learning environment where students work together and faculty are always encouraging and supportive. The second is Santana and Singh's study from a large predominantly White research institution. This study revealed a very unwelcoming physics learning environment where many male students and male faculty members negatively impacted the experiences of women students. Lastly, the third study is Santana and Singh's more recent study in a mid-sized physics department at a small private predominantly White liberal arts college. This study highlighted how intersectional issues in identity (e.g., being traditionally marginalized both by gender and race) can play a negative role in students' experiences in an undergraduate physics program. The three frameworks used in these studies can be used as guidelines for how physics instructors can approach interactions with students, structure the classrooms, work with colleagues, etc. Based upon the Domains of Power and HELPIEE frameworks, we emphasize that physics instructors have a lot of power not only in their classrooms (structural domain), but also in the physics cultural domain to empower students. Standpoint theory suggests that it is important for physics faculty to listen to women's experiences in physics in order to address inequities. It is important for faculty to utilize this power not only inside the classroom, but also outside the classroom in order to continue supporting students by modeling alternative ways to be good physics students, as in Study 1. We can use the physics department described in Johnson's study as a model to transform prototypical physics departments to be inclusive and equitable for even the most marginalized groups. Considering the fact that the physics department in Study 1 only had a handful of faculty members, the physics culture there may appear to be an unrealistic standard for all physics departments to achieve. However, although having only a handful of faculty may make it easier for them to communicate and come to a consensus on shared norms to make the physics culture similar to that depicted in Study 1, all physics departments regardless of the size of their faculty and student body should strive to establish similar culture. We saw that students formed a community with each other and felt like they were “all in it together,”, with the mindset of “why wouldn't they help me?” <cit.>, which illustrates an opposite culture to the infamous prototypical competitive and isolated physics culture. Furthermore, certain elements of Study 1 such as having faculty be on the same page regarding how to identify disruptive and condescending student behavior and disciplining students as necessary, as well as having class structure supportive of all students etc. can be incorporated in physics departments of any size. As a faculty member in Johnson's study said, “You can't only participate when you're building, that's not OK” <cit.>. 
For example, there are about 50 physics faculty in the physics department in Study 2. This may lead to fragmentation in this department, making it more difficult to dismantle a toxic physics culture or to successfully restructure classes. However, using this as an excuse to not take action is unacceptable, and the physics department cannot be complacent about equity and inclusion. Also, having a critical number of committed faculty members dedicated towards reforming their department culture is essential for creating an inclusive and equitable physics environment. We emphasize that even in Study 2, there were some faculty members who supported the women students. Thus, some faculty members who are not actively contributing to a toxic physics environment can work together to catalyze change, and with enough faculty support and resources there can be a sustained and systemic positive change in a physics department of any size. In addition, it would be interesting to investigate the cultural and pedagogical differences of these three physics departments to ascertain if they attract faculty with less typical views of what a physicist is and of approaches to teaching. We note that in Studies 2 and 3, the interviewer asked the undergraduate women about what suggestions they might have to improve their own experiences. These suggestions are summarized in both papers. We emphasize that many of these suggestions (like those for instructors) do not require many resources, but they do require a desire to change, time, and effort. We strongly encourage physics faculty to consider these suggestions and brainstorm with others in their department to begin to make positive changes as may be appropriate for their particular physics department. Systemic changes would require more resources in addition to time and effort. However, physics departments should not view these as impossible changes to implement, because they are achievable when a critical number of faculty members take part in this mission. In conclusion, it is the responsibility of physics instructors and faculty to use their power in all four Domains of Power in order to fully transform a physics culture, including inside and outside the classrooms and in student-student and student-faculty interactions. § ACKNOWLEDGEMENTS We are very grateful to Dr. Robert P. Devaty for reviewing and providing feedback on this manuscript.
http://arxiv.org/abs/2407.12086v1
20240716180002
Crossing the desert: Towards predictions for SMEFT coefficients from quantum gravity
[ "Lydia Brenner", "Abhishek Chikkaballi", "Astrid Eichhorn", "Shouryya Ray" ]
hep-ph
[ "hep-ph", "gr-qc", "hep-th" ]
http://arxiv.org/abs/2407.12935v1
20240717180818
Unraveling the magnetic ground-state in alkali-metal lanthanide oxide Na$_2$PrO$_3$
[ "Ifeanyi John Onuorah", "Jonathan Frassineti", "Qiaochu Wang", "Muhammad Maikudi Isah", "Pietro Bonfa", "Jeffrey G. Rau", "J. A. Rodriguez-Rivera", "A. I. Kolesnikov", "Vesna F. Mitrovic", "Samuele Sanna", "Kemp W. Plumb" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
Dipartimento di Scienze Matematiche, Fisiche e Informatiche, Universitá di Parma, I-43124 Parma, Italy Dipartimento di Fisica e Astronomia "A. Righi", Universitá di Bologna, I-40127 Bologna, Italy Department of Physics, Brown University, Providence, Rhode Island 02912, USA Dipartimento di Fisica e Astronomia "A. Righi", Universitá di Bologna, I-40127 Bologna, Italy Dipartimento di Scienze Matematiche, Fisiche e Informatiche, Universitá di Parma, I-43124 Parma, Italy Department of Physics, University of Windsor, Windsor, Ontario, Canada N9B 3P4 NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA Department of Materials Science and Engineering, University of Maryland, College Park, MD 20742, USA Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Department of Physics, Brown University, Providence, Rhode Island 02912, USA Dipartimento di Fisica e Astronomia "A. Righi", Universitá di Bologna, I-40127 Bologna, Italy Department of Physics, Brown University, Providence, Rhode Island 02912, USA § ABSTRACT A comprehensive set of muon spin spectroscopy and neutron scattering measurements supported by ab-initio and model Hamiltonian simulations have been used to investigate the magnetic ground state of . μSR reveals a Néel antiferromagnetic order below T_N∼ 4.9 K, with a small static magnetic moment m_ static≤ 0.22 μ_ B/ Pr collinearly aligned along the c-axis. Inelastic neutron measurements reveal the full spectrum of crystal field excitations and confirm that the Pr^4+ ground state wave function deviates significantly from the Γ_7 limit that is relevant to the Kitaev model. Single and two magnon excitations are observed in the ordered state below T_N=4.6 K and are well described by non-linear spin wave theory from the Néel state using a magnetic Hamiltonian with Heisenberg exchange J=1 meV and symmetric anisotropic exchange Γ/J=0.1, corresponding to an XY model. Intense two magnon excitations are accounted for by g-factor anisotropy g_z/g_± = 1.29. A fluctuating moment δ m^2 = 0.57(22) μ_ B^2/ Pr extracted from the energy and momentum integrated inelastic neutron signal is reduced from expectations for a local J=1/2 moment with average g-factor g_ avg≈ 1.1. Together, the results demonstrate that the small moment in arises from crystal field and covalency effects and the material does not exhibit significant quantum fluctuations. Unraveling the magnetic ground-state in alkali-metal lanthanide oxide Kemp W. Plumb July 22, 2024 ====================================================================== § INTRODUCTION Material realizations of Heisenberg-Kitaev models hold significant contemporary interest because of their potential to harbor quantum spin liquids in higher than one-dimensional lattice geometries <cit.>. The Kitaev model is characterized by quantum frustration arising from bond-dependent anisotropic Kitaev interactions between J_eff=1/2 Kramer's doublets spin-orbit entangled local magnetic moments <cit.>. In practice, the generic Hamiltonian for a realistic Kitaev material contains Heisenberg and symmetric anisotropic exchange interactions. Depending on local symmetries and electron hopping pathways these additional interactions may be of comparable strength to the Kitaev term <cit.>. 
Dominant Kitaev interactions are known to be present in transition metal compounds with octahedrally coordinated 5d^5 and 4d^5 electronic filling, including the Ir^4+ oxides and Ru^3+ based chlorides and trihalides <cit.>, but strong Heisenberg and anisotropic interactions stabilize Néel order in all of these compounds <cit.>. It is thus essential to search for other material realizations where the non-Kitaev interaction terms can be further minimized. Recently, it has been shown that strong Kitaev interactions are potentially realizable in a broader class of materials, including compounds with high spin d^7 electronic configuration such as Co^2+ and Ni^3+ <cit.>, and the f^1 electronic configuration such as Ce^3+ and Pr^4+ <cit.>. In the case of the f^1 electron materials, a dominant spin-orbit coupling and octahedral crystal field can act to stabilize a Γ_7 doublet, J_eff = 1/2, with strong Kitaev interactions <cit.>. Furthermore, the more spatially localized f-orbital occupying electrons compared to those in 4d and 5d orbitals could serve to limit direct exchange interactions. One candidate material to realize such Kitaev physics is , which hosts octahedrally coordinated Pr^4+ in the f^1 electronic configuration <cit.>. crystallizes in the C2/c space group with edge-sharing PrO_6 octahedra, required for the realization of the dominant bond directional exchange <cit.>. The crystal structure has two inequivalent Pr sites forming a honeycomb lattice in the ab plane with two intraplane Pr-Pr distances of d=3.433 and d^'= 3.458 Å. Honeycomb planes are separated by layers of Na atoms with an interplane Pr-Pr distance of d_p = 5.867 Å <cit.>. Based on ab initio studies, was predicted to host antiferromagnetic Kitaev interactions between local J_eff=1/2 moments on Pr^4+ <cit.>. On the contrary, neutron crystal field measurements revealed that the Pr^4+ ground state wavefunction is not a Γ_7 doublet as required for the Kitaev model. Furthermore, the magnetic excitations are best captured by a Heisenberg XXZ model Hamiltonian with negligible Kitaev interactions <cit.>. Recent neutron and x-ray absorption measurements confirm that the Pr ground state in deviates significantly from the Γ_7 doublet as a result of strong octahedral crystal field and Pr(4f)-O(2p) hybridization <cit.>. However, there are still questions regarding the magnetic ground state of . Despite a clear heat capacity peak at T_N <cit.>, and the appearance of well defined spin waves, no direct evidence for static magnetic order has been found in this material. Furthermore, an apparent continuum of magnetic excitations existing above the spin wave bands remains unexplained <cit.>. Together, the conspicuously small static magnetic moment and excitation continuum leave open the possibility for significant quantum fluctuations in . In this work, we use a comprehensive series of muon spin spectroscopy () and neutron scattering measurements accompanied by DFT and model Hamiltonian simulations to refine the magnetic ground state and underlying microscopic exchange interactions of . Our results rule out any significant, beyond spin-wave, quantum fluctuations in this material and establish crystal field effects and stacking faults as the primary origin of the small ordered moment. The measurements unambiguously reveal that is antiferromagnetically ordered below T_N≈4.9 K. 
Analysis of the muon data supplemented with DFT and dipolar simulations find the most likely ordered state to be a Néel AF structure with a small static magnetic moment of μ_static≤ 0.22 μ_ B/Pr, collinearly aligned along the c-axis. Measurements of the crystal field excitations reveal five additional crystal field modes above the previously reported single crystal field excitation <cit.>. The full set of crystal fields more tightly constrain the Pr^4+ ground state wavefunction. Low energy inelastic neutron scattering revealed intense single and multi-magnon excitations. The complete measured magnetic excitation spectrum is well described by a non-linear spin wave model in the collinear Néel state that includes a dominant Heisenberg exchange and subdominant symmetric anisotropic exchange interactions. We find that the previously unexplained continuum of magnetic excitations arises from a multi-magnon excitations with relatively large neutron intensity that is accounted for by g-factor anisotropy. Our analysis demonstrates a small (∼20%) ordered moment reduction arising from quantum fluctuations. The fluctuating moment of δ m = √(g^2J(J+1)- g^2J_z^2) = 0.75(14) μ_ B/ Pr recovered from the energy and momentum integrated inelastic neutron intensity is reduced from expectations for a local J=1/2 and g_ avg=1.1 as determined by our crystal field analysis. Based on these results, we attribute the small moment and intense multi-magnon neutron intensity in to crystal field and covalency effects. The remainder of this paper is organized as follows. After an overview of the methods in Sec. <ref>, we present the measurements in Sec. <ref> along with DFT and dipolar simulations of the muon spectra in Sec. <ref>. In Sec. <ref> we present and analyze the crystal field excitations. Elastic and inelastic neutron scattering are presented in Secs. <ref> and <ref> along with a non-linear spin wave modeling of the measured spectrum, that is used to constrain a model Hamiltonian. The summary and conclusions are presented in Sec. <ref>. § EXPERIMENTAL AND COMPUTATIONAL METHODS Powder samples of were synthesized via solid-state reactions from Na_2O_2 and Pr_6O_11. Dry starting reagents were weighed in a metal ratio, Na/Pr∼2.2, to account for sodium evaporative losses, ground in an agate mortar and pestle, and pelletized under an argon environment. The prepared materials were enclosed in Ag ampules and heated at 750^∘C under dry, flowing oxygen for 36 hours. Samples were furnace-cooled to ∼150^∘C and immediately transferred to an Argonne glovebox for storage. measurements were carried out on the GPS spectrometer at the Paul Scherrer Institut, Switzerland. The sample was packed into aluminium foil inside a glove-box to avoid air contamination and put into a Cu fork inserted into the experimental probe. The measurements were performed both in a weak Transverse Field (TF) mode to calibrate the asymmetry parameter of the muon polarization, and in Zero Field (ZF) to reveal presence of the internal magnetic field that gives rise to the spontaneous oscillations of signal. The ZF spectra were collected at temperatures ranging from 1.5 K to 5.2 K using a helium flow cryostat. The time-differential data were fitted using MUSRFIT software <cit.>, and the MuLab suite, a home-built Matlab toolbox. To identify the muon implantation sites, we used the well-established DFT+μ approach <cit.>. 
Non-spin polarized DFT calculations were performed within generalized gradient approximation (GGA) for the PBE (Perdew-Burke-Ernzerhof  <cit.>) exchange-correlation functional as implemented in the Quantum Espresso code <cit.>. The muon was treated as a hydrogen impurity in a 2×1×1 charged supercell (96 host atoms and 1 muon) with a compensating background. For Na, O, and H atomic species, the norm-conserving pseudopotentials were used; while for Pr, the Projector augmented wave (PAW) with no 4f electron state in the valence was used, in order to avoid the well-known difficulty in describing its valence shell <cit.>. The plane-wave cut-off of 100 Ry was used while the Brillouin zone was sampled using a 4×4×4 mesh of k-points <cit.>. High energy neutron scattering measurements of the crystal field excitations were conducted on the fine-resolution Fermi chopper spectrometer (SEQUOIA) <cit.> at the Spallation Neutron Source (SNS), Oak Ridge National Lab (ORNL). The polycrystalline sample was loaded into an Aluminum can, and cooled to a base temperature of T=5 K using a closed cycle cryostat. To cover the full range of crystal field exctiations with sufficient resolution, data was collected using the high-flux chopper with fixed incident neutron energies of E_ i= 60, 150, 300, 700, and 2500 meV. Low energy neutron scattering measurements were carried out on powder samples using the Multi-Axis Crystal Spectrometer (MACS) at the NIST Center for Neutron Research (NCNR). Elastic (E=0) measurements were conducted with the monochromator in a vertical focusing configuration using neutron energy of 5 meV. Inelastic measurements were carried out using a double-focusing configuration and a fixed final energy of E_ f=3.7 meV with Be and BeO filters before and after the sample, respectively for energy transfers Δ E=E_ i-E_ f below 1.5 meV. For energy transfers above 1.4 meV, we used a Be filter after the sample and no incident beam filter. Data for energy transfers above 1.4 meV was corrected for contamination from high-order harmonics in the incident beam neutron monitor. Measured background signal contributions from the sample environment were subtracted and signal count rates were converted to absolute values of the scattering cross-section using the incoherent signal from the sample integrated over the range 1.7≤Q≤2 Å^-1. § RESULTS §.§ Muon spin relaxation () The measured ZF asymmetry spectra, a^ZF(t), are shown in Fig. <ref> (a) for four different temperatures. At temperatures above ∼4.9 K, the signal shows no oscillations, indicating that the sample is in the paramagnetic phase. Upon lowering the temperature below 4.9 K, the signal displays damped coherent oscillations, indicating the emergence of long-range magnetic order due to the presence of a static internal magnetic field. Furthermore, we also observe a fast relaxation at short times, reflecting presence of static magnetic moments. As the temperature is further lowered to 1.5 K, the signature oscillations of the long-range magnetic order become more pronounced and dominate the spectra. All observed ZF- spectra are well fit with the following model, commonly used to describe asymmetry in antiferromagnets <cit.>; a^ZF(t) = a_0 [ ∑_i=1^3f_ie^-λ_i tcos(γ_μB_μ, i^ t+ϕ) + f_faste^-λ_fast t + f_l e^-λ_l te^-(σ t)^2], where a_0 is the muon initial amplitude calibrated at high temperature. In a powder sample we expect the internal local field 𝐁_μ to have both longitudinal and transverse components with respect to the muon spin 𝐒_μ. 
The first term in Eq. <ref> describes the transverse components with an exponentially damped oscillating signal that decays with a transverse relaxation rate λ, summed over all muons that thermalize at three symmetrically inequivalent sites. The parameter f_i controls the contribution of each muon stopping site i to the total asymmetry signal. γ_μ (= 2π× 135.5 MHZ T^-1) is the muon gyromagnetic ratio, B_μ (= 2πν_μ /γ_μ) is the muon internal magnetic field corresponding to muon precessing with frequency ν_μ, and ϕ is the initial phase. The second term in Eq. <ref> accounts for the transverse component for a site with overdamped oscillations, labeled as fast non-oscillating term, with amplitude a_0 f_fast and depolarization rate λ_fast (∼ 2 μ s^-1). The third term accounts for the longitudinal component with damped relaxation, amplitude a_0 f_l, and spin-lattice relaxation rates λ_l. Finally, for T ≳ T_N, static dipolar interactions with nuclear moments with depolarization rate σ≈ 0.2 μ s^-1 dominate the muon spectra. In Fig. <ref> (b), we show the real part of the Fast Fourier Transform (FFT) of the time-domain  asymmetry at T = 1.5 K. Three local magnetic fields are identified from the best fit of Eq.<ref> to the asymmetry signal at T=1.5 K. We label these B_μ1= 9 mT, corresponding to the peak with maximum Fourier power, then B_μ2 = 7 mT, and B_μ3= 15 mT. The temperature dependence of the three distinct internal fields B_μ i are shown in Fig. <ref>(c). Here the solid lines represent fit to a phenomenological double exponent power-law function; B_μ i(T)= B_μ i(0) (1 - (T/T_N)^α)^β <cit.>. From this fit, we obtained an estimate of the Néel temperature of T_N≈4.9 K, consistent with heat capacity and neutron measurements <cit.>. The α exponent accounts for the magnetic excitation at low temperatures while the β exponent reflects the dimensionality of the interactions in the vicinity of the transition. A least squared fit to each internal field component yielded α = (3.673 ± 0.533, 4.078 ± 0.206, 4.078 ± 0.206) and β= (0.160 ± 0.011, 0.147 ± 0.007, 0.147 ± 0.007) for (B_μ1, B_μ2, B_μ3) respectively. To obtain an estimate of these exponents we take the average, to find α = 3.94 ± 0.20 and β = 0.151 ± 0.005. The β value is far less than ≈1/3 expected for a three-dimensional magnetic Hamiltonian <cit.>. Its average value is close to that expected for a two-dimensional XXZ model <cit.>. However, the large α value indicates presence of complex magnetic interactions <cit.>. These findings are in agreement with those from neutrons presented below. We obtained the magnetic volume fraction (V_ M) from the asymmetry amplitudes extracted from fits to Eq. <ref> <cit.>. Temperature dependent V_ M for each internal field component are shown in Fig. <ref> (d); the fill color denotes the relative contributions of the muon at each distinct stopping site. The signal f_1 corresponds to the muon at site A_1 and contributes 59% to V_ M (green shaded area). Sites A_2 (signal contribution f_2) and A_3 (signal contribution f_3) contribute 14% and 7% respectively, resulting in a total of 80% contribution to V_ M from implanted muons that are sensitive to the internal magnetic field. The remaining 20% is recovered from the amplitude of the fast non-oscillating relaxing signal (f_fast), showing that 20% of the implanted muons do not exhibit coherent precession. §.§.§ Muon sites and dipolar field analysis To characterize the contributions of each muon site to the oscillatory components of the signal in Fig. 
<ref> (a), and constrain the magnetic structure, we proceed to identify the muon implantation site(s) in using the DFT+μ approach. DFT modeling reveals three symmetrically distinct candidate muon sites, consistent with sites A_1, A_2, and A_3 from the analysis of data above. Each of these sites is located at the 8f Wyckoff position, with a distance of ≈ 1 Å along the c axis to the three distinct O sites in the unit cell, as shown in Fig. <ref> (a) and (b). These sites are found to be in the direction of the non-magnetic Na layer and farther away from the magnetic Pr^4+ ions (Fig. <ref> (a)). This is consistent with observations in typical oxide compounds where the positive muon is well known to stop near the O sites <cit.>. Our DFT calculations predict that the A_1 site has the lowest energy, but the energy differences between A_1, A_2, and A_3 sites are less than 0.2 eV. These findings imply that muons populate the A_1, A_2, and A_3 sites with nearly equal probability. With the knowledge of the muon implantation sites, we determined the magnetic structure and ordered moment size in Na_2PrO_3 by direct comparison of simulated dipolar field distributions at these site(s) with experimental results. First, by considering the maximal magnetic space groups (MAXMAGN <cit.>), we have identified and explored twenty-eight (28) AF magnetic structures, within the k_m=(000), (110), and (100) propagation vectors that are most likely based on the minimum of the magnetic excitation spectra discussed in Section <ref>. For each of these magnetic structures, the muon dipolar interactions <cit.> were computed at the 24 muon sites (i.e., 3 muon sites at each of the 8f Wyckoff positions) as a function of the Pr magnetic moment. The experimentally observed FFT power spectrum at 1.5 K (Fig <ref> (b)) was then fit to the convolution of the computed dipolar fields and a Gaussian distribution (See Appendix Sec. <ref>). The magnetic structural analysis identified four possible AF magnetic structures, labeled Néel-I, Néel-II, A-type, and Stripy in Fig. <ref> (c), that are consistent with our experimental findings. We found that the fit to the Néel-I AF structure with magnetic moments aligned parallel to the crystal c-axis gives the best matching dipolar field distribution shown in Fig. <ref>(b). We point out that this magnetic configuration is also in agreement with the analysis of the neutron data, as we discuss below. Furthermore, as shown in Table <ref>, the effective static magnetic moment size determined from the best fit to the real FFT spectrum are notably very small, m_ static<0.22 μ_B/Pr. §.§ Crystal Field Excitations In order to elucidate the magnetic degrees of freedom in , we first present an analysis of crystal field excitations. Fig. <ref> (a) shows the low temperature (5 K) inelastic neutron scattering (INS) contour plots with incident energy E_i= 700 meV, which clearly reveals at least four inelastic features with intensities that decrease with increasing momentum transfer, consistent with crystal field excitations at ∼ 250, 300, 400 and 500 meV. The additional local excitation that appears at 452 meV has an intensity that increases with momentum transfer indicating that it is of nuclear origin. We identify this feature with an O-H stretching mode <cit.>. 
The low Q magnetic intensity exists on a large high energy background with dispersion that is characteristic of a hydrogen recoil, providing further evidence of the presence of hydrogen contamination that likely originates from a brief exposure of the sample to atmosphere during synthesis. Such O-H stretching modes and hydrogen recoil scattering were also apparent in previous studies of <cit.>. We note that bulk characterization of our sample as well as neutron diffraction and inelastic scattering presented below are consistent with previous reports, demonstrating that this small amount of hydrogen present has negligible effect on the sample properties. To better resolve the intrinsic crystal field excitations from the large background signal, we integrate the neutron data over |Q|=[6,11] Å^-1 and fit a background signal using a decaying exponential with an additional Gaussian centered on the 450 meV O-H stretching mode as shown in Fig. <ref> (b). This background was subtracted to produce the data shown in Fig. <ref> (c), while the higher resolution E_i= 300 meV data shown in Fig. <ref> (d) revealed that the crystal field intensity spectrum below 250 meV is comprised of two modes around ∼230 and ∼240 meV. No additional crystal field modes were found at higher energies for data collected using neutron incident energies (up to 2.5 eV). To obtain the peak positions, we fit the INS data to a Voigt profile after subtracting a large background (see Fig. <ref> (b)). In total, six CEF excitations were revealed at energies of 228, 243, 296, 388, 528 and 568 meV. The first two modes, at 228 and 243 meV are consistent with prior studies that found a single broad energy mode centered at 233 meV  <cit.>, but our higher energy resolution clearly shows that this mode is split, while an improved signal to noise unveils four additional crystal field modes that were not previously visible above background. The six crystal field excitations we observe are consistent with the previously proposed intermediate coupling scheme for <cit.>, confirming the importance of crystal electric field (CEF) interactions in . However, the higher quality of our data allows us to better constrain microscopic parameters by fitting solely the crystal field level energies and intensities. We computed Pr^4+ single ion crystal electric field (CEF) Hamiltonian (H_CEF) from a point charge (PC) model within the intermediate coupling regime using the following CEF Hamiltonian <cit.> H_CEF = B_2^0 O_2^0 + B_2^± 2 O_2^± 2 + B_4^0 O_4^0 + B_4^± 2 O_4^± 2 + B_4^± 4 O_4^± 4 + B_6^0 O_6^0+ B_6^± 2 O_6^± 2 + B_6^± 4 O_6^± 4+ B_6^± 6 O_6^± 6, where B_n^± m are the CEF parameters and O_n^± m are the Stevens Operators. H_CEF includes contributions from the non-zero -m components that are imaginary in the Stevens Operators and second order terms. Both are required to account for the low symmetry Pr ion local environment  <cit.>. We also found that fourth and sixth order terms were required to capture all observed crystal field excitations, especially at higher energy. For our calculations, we considered Pr^4+ ion with orbital angular momentum L = 3, spin quantum number S = 1/2, represented by isoelectronic Ce^3+ ion. We obtained reasonable fits of the PC model to the INS data varying the spin orbit coupling strength of between 40 and 60 meV and we report the best fit results for λ=54 meV. 
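To make the fitting procedure concrete, the sketch below illustrates schematically how a crystal-field Hamiltonian of this form can be assembled from Stevens operators, diagonalized, and compared to the observed peaks through the level energies and dipole-approximation transition strengths. It is a simplified illustration under stated assumptions: only the rank-2 operators are written out (the rank-4 and rank-6 terms are built the same way), the B parameters and the choice of a single j = 5/2 multiplet are placeholders for illustration, and the actual analysis reported here is performed in the intermediate-coupling |L, S⟩ basis rather than within one multiplet.

```python
import numpy as np

def angular_momentum_ops(J):
    """Jz, J+, J- matrices in the |J, mJ> basis, mJ = -J ... +J."""
    m = np.arange(-J, J + 1.0)
    dim = len(m)
    Jz = np.diag(m)
    Jp = np.zeros((dim, dim))
    for i in range(dim - 1):
        # <m+1 | J+ | m> = sqrt(J(J+1) - m(m+1))
        Jp[i + 1, i] = np.sqrt(J * (J + 1) - m[i] * (m[i] + 1))
    return Jz, Jp, Jp.T

def stevens_rank2(J):
    """Rank-2 Stevens operators O_2^0 and O_2^2; higher ranks follow the same pattern."""
    Jz, Jp, Jm = angular_momentum_ops(J)
    X = J * (J + 1) * np.eye(Jz.shape[0])
    O20 = 3 * Jz @ Jz - X
    O22 = 0.5 * (Jp @ Jp + Jm @ Jm)
    return O20, O22

# Placeholder CEF parameters (meV), for illustration only -- not the fitted values.
B20, B22 = 10.0, 2.0
J = 2.5                          # e.g. the j = 5/2 multiplet of an f^1 ion
O20, O22 = stevens_rank2(J)
H = B20 * O20 + B22 * O22        # add the B_4^m O_4^m and B_6^m O_6^m terms analogously

E, V = np.linalg.eigh(H)
E -= E[0]                        # CEF level energies relative to the ground state

# Dipole-approximation neutron transition strength out of the ground state |0>.
# (For a Kramers ion one would average over both components of the ground doublet.)
Jz, Jp, Jm = angular_momentum_ops(J)
Jx, Jy = 0.5 * (Jp + Jm), -0.5j * (Jp - Jm)
strength = [sum(abs(V[:, 0].conj() @ Op @ V[:, n]) ** 2 for Op in (Jx, Jy, Jz))
            for n in range(len(E))]
print(np.round(E, 2), np.round(strength, 3))
```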
Our model calculations were initialized by generating the PC model to provide a set of starting CEF parameters that were further refined through least-squares fitting to the E_i=700 meV INS data. The first column of Table <ref> presents the values of the CEF parameters B_n^± m from the PC model, while the second column presents the final optimized CEF parameters. A notable feature from the obtained CEF parameters is that significant sixth order terms are required to capture all features of the excitations. Fig. <ref> (c) displays a comparison of the INS intensity with the PC model fit (brown dashed line) that provides excellent agreement with all six observed excitations. The ground state eigenvectors in the |L, S⟩ basis resulting from the PC model fit are reported in Table <ref>, while the complete eigenvalues and eigenvectors can be found in the Supplemental Material [See Supplemental Material at [URL will be inserted by publisher]. The Supplemental Material includes Refs. <cit.>]. The low symmetry local environment of Pr^4+ results in a ground state wavefunction that deviates significantly from the Γ_7 doublet <cit.>. Furthermore, we find a mixed ground state, with significant contribution from the j=5/2 and j=7/2 multiplets. The coefficients of all the 14-fold degenerate basis are non-vanishing, as shown in Table <ref>. The complexity of the ground state wavefunction reflects the low symmetry of the Pr local environment and likely significant hybridization between Pr (4f) and O (2p) states so that a point charge crystal model provides only an effective description of the magnetism in . This conclusion is consistent with recent x-ray absorption spectroscopy measurements <cit.> and with a reduced local moment as revealed by our and neutron scattering measurements presented below. The observed g-tensor is anisotropic with average transverse components in the honeycomb plane components g_± = 1.05 and a longitudinal component of g_z = 1.35, with g_z/g_± = 1.29. These illustrate that the CEF effects introduce an easy axis anisotropy that is consistent with a c-axis oriented ordered moment. §.§ Neutron Diffraction Both heat capacity <cit.> and our measurements indicate that undergoes long-range magnetic ordering below T_N = 4.9 K, this ordered state should give rise to a magnetic Bragg reflection visible with neutron diffraction. Fig. <ref> shows the elastic neutron signal measured at T=2.7 K and T=12 K. To place tight constraints on any ordered magnetic moment, we plot the difference between T = 2.7 and 12 K. The difference data do not show any evidence for magnetic Bragg intensity or diffuse scattering above the statistical noise indicating that any three dimensional ordered moment in is extremely small, consistent with the analysis presented above and previous reports <cit.>. The variance of the difference data places an upper bound on the ordered moment of m ≤ 0.3 μ_B assuming a three-dimensional ordered Néel-I magnetic structure with moments aligned along the c-axis, consistent with the μSR and inelastic neutron spectra described below. This upper bound represents a significantly reduced moment compared with expectations from the J=1/2 ground state doublet and g_z=1.35 from our crystal field analysis, giving m_z ≈ 0.65 μ_B. Although μSR clearly reveals a static, ordered, moment below T_N in Na_2PrO_3, both μSR and elastic neutron scattering find that this ordered moment is vanishingly small. 
There are three most likely, and not mutually exclusive, possibilities for such a small ordered moment. The first possibility is the presence of structural disorder that inhibits the formation of three-dimensional long-range order. In particular, stacking faults that are known to be prevalent in Na_2PrO_3 <cit.>. The presence of such stacking faults would increase the upper bound on the ordered moment size determined by neutron diffraction. The second possibility is significant Pr-O covalency that results in a fraction of the moment on the O site. The third is presence of significant quantum fluctuations that reduce the ordered moment size. Such fluctuations would be expected to arise from a frustrated magnetic Hamiltonian. Since quantum fluctuations of the ordered moment act to shift magnetic neutron intensity from the elastic (E=0) channels to inelastic ones, the possible presence of significant quantum fluctuations is directly testable through analysis of the low energy magnetic excitation spectra as we will discuss below. §.§ Magnetic Excitations The measured powder averaged low energy (δ E<10 meV) inelastic neutron scattering for is shown in Fig. <ref> (a). There are two visible branches of magnetic excitations, centered at 1.5 and 3 meV. The lower energy branch exhibits a clear dispersion, with a 1 meV gap and 1.5 meV bandwidth consistent with previous reports <cit.>. Excitations disperse from a minimum energy at Q=1.25 Å^-1 corresponding to the (110) Bragg position as expected for spin waves in the Néel-I ordered state. The higher energy branch of magnetic excitations is centered around 3.2 meV and extends into a continuum up to 5 meV, most clearly visible in the constant momentum transfer cut of Fig. <ref> (c). Given that this high energy branch intensity is maximum at approximately twice the energy of the lower energy zone boundary – where the density of single magnon states is maximized – and the intensity of the higher energy branch is significantly reduced with respect to the lower energy one, we associate the higher energy excitations with a multi-magnon continuum. Such an assignment is supported by non-linear spin wave modeling discussed below. Both branches of magnetic excitations follow the same temperature dependence, collapsing into a broad energy continuum of the paramagnetic response above T_N. To model the magnetic excitations in the Néel state, we consider the generic nearest-neighbour model for Kramer's pseudo-spins on the honeycomb lattice. This includes four symmetry allowed exchange interactions <cit.>, ∑_⟨ij⟩_μ [ JS⃗_i ·S⃗_j + K S^μ_i S^μ_j + Γ(S^ν_i S^ρ_j +S^ρ_i S^ν_j) . . + Γ' (S^μ_i S^ρ_j +S^μ_i S^ν_j+ + S^ρ_i S^μ_j +S^ν_i S^μ_j) ], where ⟨ij⟩_μ is a μ-type bonds and μνρ are a permutation of the octahedral axes xyz. This includes the Heisenberg exchange J, Kitaev exchange K and two symmetric off-diagonal exchanges Γ and Γ'. We will assume the strictly crystallographically inequivalent nearest neighbor bonds have the same exchange interactions, effectively granting the model three-fold rotation symmetry about the c-axis. We do not include c-axis exchange interactions as these are expected to be weak and minimally influence the powder averaged dynamical structure factor. We have assumed the J=1/2 pseudo-spins in are defined with respect to the local cubic axes. 
Alternatively, a trigonal basis that quantizes the spins along the out-of-plane and high-symmetry in-plane directions can be used and yields <cit.> ∑_⟨ij⟩ [ J_1 (S_i^x S_j^x+ S_i^y S_j^y + ΔS_i^z S_j^z) +J_±±( γ_ijS^+_i S^+_j + h.c.). .-J_z±( γ^*_ij[ S^+_i S^z_j+ S^z_i S^+_j] + h.c.) ], where γ_ij are bond dependent phase factors (γ_x = 1, γ_y = ω, and γ_z = ω^* where ω = e^2π i/3). Classically, a Néel state can be stabilized on the honeycomb lattice by a dominant anti-ferromagnetic exchange J>0, leaving the staggered moment direction arbitrary. Including a Kitaev interaction in addition to the Heisenberg interaction stabilizes a staggered moment aligned along one of the cubic axes through an order-by-quantum-disorder mechanism. Finite symmetric off-diagonal exchanges will select a direction even classically, with Γ + 2Γ' > 0 favoring out of the honeycomb plane, corresponding to Δ>1 in the trigonal basis, and Γ + 2Γ'<0, corresponding to Δ<1, favoring an in-plane staggered moment – the in-plane direction is unfixed classically, but will be selected via order-by-disorder <cit.>. We assume the staggered moment orientation is selected by the symmetric off-diagonal exchanges given the small selection energy of the order-by-disorder mechanism and fix Γ + 2Γ'>0 to yield a staggered moment oriented out of the honeycomb plane, consistent with expectations from μSR. The magnetic excitations in the Néel phase can be calculated semi-classically using spin-wave theory. We carried out calculations of the powder-averaged dynamical structure factor for several sets of exchange constants (J,K,Γ,Γ') including the XXZ limits (only J_1, Δ non-zero) and the J-Γ-Γ' model (setting K=0). We note that the XXZ limit is equivalent to K=0 and Γ=Γ' with J_1 = J-Γ and Δ J_1 = J+2Γ. Given the small ordered moment we have included corrections to linear spin wave theory up to order O(1/S^2) to ensure we can capture any quantum fluctuations. Details of the formalism for these non-linear spin-wave theory calculations can be found (e.g.) in Refs. [rau2018, mcclarty2018, rau2019]. First, we note that large gap observed experimentally can be captured even in linear spin-wave theory with three-fold symmetry once Γ and Γ' interactions are included (or equivalently, once Δ≠ 1). Quantitatively, the size of the gap, and the bandwidth of excitations, can be accounted for by several different sets of exchange parameters. The key features of the spectrum are largely determined by the value of Γ + 2Γ' (where J is fixed to set the overall energy scale). Tuning the precise values of Γ or Γ' separately only introduces subtle changes to the powder-averaged intensity that our data cannot constrain. Based on these gross features, we confine our calculations to the value Γ + 2Γ' = 0.3J, fixing the scale of the gap relative to the bandwidth, and vary the relative contributions of Γ, Γ'. Small differences can be observed in the “flatness” of the top of the excitation band and in the distribution of intensities near maxima at |Q⃗| ∼ 1 Å and 1.75 Å depending on the precise values used. To model we considered several cases: Γ/J = 0.0, 0.1, 0.2, 0.3 and 0.35 holding Γ'/J = (0.3-Γ/J)/2 for each to fix Γ +2Γ'. This includes the XXZ limit where Γ=Γ'=0.1J corresponding to Δ = 1.33 in the trigonal basis, close to the value considered in Ref. [PhysRevB.103.L121109]. A comparison of this model to the experimental data is shown in Fig. <ref> (b) and (c). 
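As a rough orientation for the energy scales involved, in the XXZ limit the linear spin-wave dispersion of a two-sublattice Néel state on the honeycomb lattice takes the textbook form ω(k) = 3JS√(Δ^2 - |γ_k|^2), where γ_k is the normalized nearest-neighbour structure factor. The sketch below evaluates this expression assuming an idealized honeycomb geometry with a single bond length and the J and Δ values quoted above; it is only a zeroth-order check, since the calculations compared to the data are non-linear spin-wave results with the full exchange anisotropy and powder averaging.

```python
import numpy as np

# Linear spin-wave dispersion for a honeycomb Neel antiferromagnet in the XXZ limit:
# omega(k) = 3*J*S*sqrt(Delta^2 - |gamma_k|^2), gamma_k = (1/3) sum_delta exp(i k.delta).
# Assumed parameters: J = 1 meV, S = 1/2, Delta = 1.33, bond length d = 3.44 A.
J, S, Delta, d = 1.0, 0.5, 1.33, 3.44

# Nearest-neighbour vectors of an ideal honeycomb lattice with bond length d
deltas = d * np.array([[0.0, 1.0],
                       [np.sqrt(3) / 2, -0.5],
                       [-np.sqrt(3) / 2, -0.5]])

def omega(k):
    gamma = np.mean(np.exp(1j * deltas @ k))
    return 3 * J * S * np.sqrt(Delta**2 - abs(gamma)**2)

gap = omega(np.array([0.0, 0.0]))   # zone centre, where |gamma_k| = 1
top = 3 * J * S * Delta             # zone corner, where gamma_k = 0
print(f"spin-wave gap ~ {gap:.2f} meV, band top ~ {top:.2f} meV")
```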
The theoretical result incorporates up to the O(1/S^2) contributions of the transverse, longitudinal and transverse-longitudinal parts of the structure factor (which include the leading contribution from the two-magnon continuum) <cit.>. The overall intensity scale is left arbitrary and a g-factor anisotropy of g_z/g_±≈ 1.29 was used. Due to the limited range of |Q⃗|, the neutron form factor for Pr4+ was not included. Overall the J-Γ model with J=1 meV, Γ/J=-0.1, and Γ'=Γ provides an excellent description of the data given the simplicity of the model. We found no improvement moving away from the XXZ limit or including an additional Kitaev exchange. Including the symmetry inequivalent nearest neighbor exchange and out-of-plane exchange interactions will likely lead to an improved description of the data, but our powder averaged data set is not sufficient to constrain such a large parameter space and the minimal J-Γ model presented here captures the essential physics. Single crystal measurements are required for any reliable further refinement of additional parameters in the magnetic Hamiltonian, to capture the magnetic intensity around Δ E=2.5 meV and above 4 meV. A key feature of the magnetic excitations is the pronounced continuum with a maximum intensity visible near δ E = 3 meV. Given the powder-averaging, it is tempting to attribute this to an additional band arising due to Kitaev or additional anisotropic interactions. A simpler explanation is that it represents the contribution from the two-magnon continuum associated with the lower energy single magnon band. Kinematically, this is sensible, since the continuum starts above twice the gap, but its unusually high intensity merits further discussion. Since the two-magnon intensity is determined by components of the spin along the direction of the ordered moment, any reduction of the ordered moment that results from quantum fluctuations must be accounted for by an enhanced two magnon signal. For the XXZ model most relevant to we expect only a modest 20% ordered moment reduction including corrections up to second order, as found in Ref. <cit.>. Indeed, such a modest moment reduction is anticipated from the data directly, as the large measured spin wave gap relative to the bandwidth acts to energetically “freeze out” moment fluctuations at low temperatures. We conclude that rather than enhanced quantum fluctuations, the strong two-magnon intensity in is a result of g-factor anisotropy in . While the one-magnon intensity is determined by the spin components transverse to the ordered moment, and thus are ∝ g_±^2, the two-magnon intensity is sensitive to the longitudinal component and carries a factor of g_z^2. The measured g-factor ratio g_z/g_±≈ 1.29 thus provides an enhancement of the two-magnon intensity of (g_z/g_±)^2 ≈ 1.66 relative to the one magnon intensity. Comparison of the non-linear spin wave dynamic structure factor with our data in Fig. <ref> (c) shows excellent agreement of the relative two magnon intensity with the measured signal at 3 meV. Despite the apparently small static moment and absence of any magnetic Bragg signal, the inelastic spectra displays well-formed magnon excitations that can be described to a high fidelity with non-linear spin wave theory predicting a modest, ∼20% ordered moment reduction from quantum fluctuations. 
In the absence of significant quantum fluctuations, the unobservable magnetic Bragg intensity in the powder-averaged diffraction data could be accounted for by stacking faults that disrupt three-dimensional magnetic order, allowing long-range order to form in two-dimensional honeycomb planes but only short-range order between the planes. The resulting magnetic correlations form "rods" of elastic scattering extending along the c^* direction. When powder averaged, the rods result in a diffuse signal that is not visible above background. Since the magnetic interactions along the c-axis are weak, the magnetic excitations do not disperse along this direction and the powder-averaged inelastic neutron intensity is not significantly influenced by stacking faults. However, such a scenario cannot account for the small static moment determined by μSR. The inelastic neutron scattering signal provides an additional, independent check of the magnetic moment size in Na_2PrO_3 through the total-moment sum rule. Integrating the total measured inelastic magnetic neutron intensity over the region 0.7<Δ E<6 meV and 0.8<Q<1.94 Å^-1, we obtain an approximate total fluctuating moment of δ m^2 = 3/2 μ_B^2 ∫∫ Q^2 I(Q,E) dQ dE / ∫ Q^2 dQ = 0.57(22) μ_B^2/Pr. This value accounts for a large systematic error arising from background determination and is reduced from the value of δ m^2_local = g_avg^2 J(J+1) - δ_z^2 g_z^2 J_z^2 = 0.7 μ_B^2/Pr expected for a local J=1/2 moment and g-factors as determined by the crystal field analysis. The factor δ_z=0.8 accounts for a static moment reduction from quantum fluctuations. Such a reduced fluctuating moment demonstrates the absence of significant quantum fluctuations and is consistent with m_static ≤ 0.2 μ_B determined by μSR. Overall, μSR, neutron crystal field measurements, neutron diffraction, and low-energy inelastic scattering reveal that Na_2PrO_3 is a well-ordered Néel antiferromagnet exhibiting minimal quantum fluctuations, but with a large moment reduction from crystal field effects that give rise to a small, anisotropic g-factor and covalency that reduces the moment from the single-ion limit. § SUMMARY AND CONCLUSIONS Our combined neutron and μSR measurements have demonstrated that Na_2PrO_3 is a Néel antiferromagnet, well described by a two-dimensional J-Γ, or equivalently XXZ, model Hamiltonian. Although no magnetic Bragg reflection corresponding to long-range order has been identified, μSR measurements reveal clear oscillations characteristic of long-range order. DFT modeling enabled us to identify the muon stopping sites and compare measured internal field distributions against possible magnetic structures. We find the internal field distribution is most consistent with Néel order with a small (≤ 0.22 μ_B) static moment. Non-linear spin-wave modeling of the observed collective magnetic excitations captures the complete spectra. An intense two-magnon band is accounted for through g-factor anisotropy, which acts to enhance the neutron intensity of the longitudinal two-magnon excitations relative to the transverse single magnons by a factor of (g_z/g_±)^2 ∼ 1.66. We find minimal reduction of the ordered moment from frustration or quantum fluctuations, as confirmed through an analysis of the total magnetic spectral weight from inelastic neutron scattering. Overall our work emphasizes how local atomic physics can generate the small static magnetic moments and intense multi-magnon continua observed by inelastic neutron scattering experiments, even in the absence of appreciable quantum fluctuations.
Furthermore, our work demonstrates the importance of μSR as a technique for investigating frustrated magnets with small moments, and the clear synergies between μSR and INS. § ACKNOWLEDGEMENTS K.W.P and Q.W. were supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Grant No. DE-SC0021223. VFM was supported by the National Science Foundation under grant No. DMR-1905532. JGR was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) (Funding Reference No. RGPIN-2020-04970). Access to MACS was provided by the Center for High Resolution Neutron Scattering, a partnership between the National Institute of Standards and Technology and the National Science Foundation under Agreement No. DMR-1508249. A portion of this research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. IJO and PB acknowledge financial support from PNRR MUR project ECS-00000033-ECOSISTER and also acknowledge computing resources provided by the STFC scientific computing department's SCARF cluster and a CINECA award under the ISCRA (Project ID IsCa4) initiative. § MAGNETIC STRUCTURE DETERMINATION AND DIPOLAR FIELD SIMULATIONS The search for the magnetic structure in Na_2PrO_3 was initialized by considering all possible magnetic configurations for the k_m=(0, 0, 0), (1, 1, 0), and (1, 0, 0) propagation vectors and its crystal structure. In Table <ref> we have listed the maximal magnetic space groups and magnetic configurations that we have obtained utilizing MAXMAGN <cit.>. Starting from these maximal magnetic space groups in Table <ref>, we have considered only antiferromagnetic structures with the following conditions on the magnetic moments: (i) m_x = m_z = √(2)/2 |m|, (ii) m_x = |m| and m_z = 0, (iii) m_x = 0 and m_z = |m|, (iv) m_y = |m|, and (v) taking into account the two distinct Pr sites, such that for cases (i) to (iv) above we have considered the moments on Pr_1 to be antiparallel to those on Pr_2. Implementing these conditions, we obtained a total of 28 possible magnetic structures. Due to the magnetic symmetry of the compound, the magnetic structures obtained for the propagation vector k_m = (0,0,0) are the same as for k_m = (1,1,0). In order to obtain the magnetic structure and estimate the magnetic moment of Na_2PrO_3 using the muon spin spectroscopy measurements, we fit a Gaussian convolution of the muon dipolar fields calculated over these 28 magnetic structures to the experimentally observed Fourier power spectrum at 1.5 K. The simulation of the dipolar contribution to the muon internal field also requires knowledge of the muon site <cit.>. As described in the main text, the muon was found to thermalize at three symmetrically inequivalent sites, which we have labelled A_1, A_2, and A_3 (see main text Sec. <ref>). Each of these sites has a multiplicity of 8 (8f Wyckoff position), hence we have considered dipolar fields calculated at 24 muon positions. For each magnetic structure, the dipolar fields B <cit.> are calculated for all 24 muon sites with index i in the pristine structure, as a function of the Pr moment m_Pr, written as p(B, m_Pr) = ∑_i=1^24 δ(B - m_Pr B_i), such that we can approximate the FFT distribution of the ZF-μSR experimental spectra by a convolution with a Gaussian broadening g as p(B, m_Pr, σ) = (p ∗ g)(B) := ∫_-∞^∞ p(τ, m) g(B-τ, σ) dτ.
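As a rough numerical illustration of this distribution (not the authors' code; array names and shapes are our own assumptions), the broadened field distribution can be evaluated on a grid of internal fields as follows.

```python
# Sketch only: Gaussian-broadened distribution of dipolar fields at the 24 muon
# positions, p(B, m, sigma), evaluated on a grid of internal fields B.
import numpy as np

def broadened_field_distribution(B_grid, B_i, m, sigma):
    """Sum of delta functions at m * B_i, convolved with a Gaussian of width sigma."""
    B_grid = np.asarray(B_grid)
    fields = m * np.asarray(B_i)                        # one scaled dipolar field per muon site
    diff = B_grid[:, None] - fields[None, :]
    g = np.exp(-0.5 * (diff / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return g.sum(axis=1)

def fit_function(B_grid, B_i, m, sigma, A, A_bkg):
    """Overall amplitude and flat background, as in the fitting function defined next."""
    return A * broadened_field_distribution(B_grid, B_i, m, sigma) + A_bkg
```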
The fitting function P becomes P(B; m, σ, A, A_BKG) = A p(B, m, σ) + A_BKG, where σ, A, and A_BKG are the width, amplitude, and background of the distribution, respectively. Statistically acceptable fits were obtained only for the four magnetic configurations shown in Fig. <ref>: the Néel-I, Néel-II, A-type, and Stripy antiferromagnetic configurations.

[Jackeli and Khaliullin(2009)] G. Jackeli and G. Khaliullin, Mott Insulators in the Strong Spin-Orbit Coupling Limit: From Heisenberg to a Quantum Compass and Kitaev Models, Phys. Rev. Lett. 102, 017205 (2009).
[Takagi et al.(2019)] H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, Concept and realization of Kitaev quantum spin liquids, Nature Reviews Physics 1, 264 (2019).
[Motome et al.(2020)] Y. Motome, R. Sano, S. Jang, Y. Sugita, and Y. Kato, Materials design of Kitaev spin liquids beyond the Jackeli–Khaliullin mechanism, Journal of Physics: Condensed Matter 32, 404001 (2020).
[Khaliullin(2005)] G. Khaliullin, Orbital Order and Fluctuations in Mott Insulators, Progress of Theoretical Physics Supplement 160, 155 (2005).
[Rau et al.(2014)] J. G. Rau, E. K.-H. Lee, and H.-Y. Kee, Generic Spin Model for the Honeycomb Iridates beyond the Kitaev Limit, Phys. Rev. Lett. 112, 077204 (2014).
[Katukuri et al.(2016)] V. M. Katukuri, S. Nishimoto, V. Yushankhai, S.-L. Drechsler, and J. van den Brink, Kitaev interactions between j = 1/2 moments in honeycomb Na_2IrO_3 are large and ferromagnetic: insights from ab initio quantum chemistry calculations, New Journal of Physics 16, 013056 (2016).
[Chaloupka et al.(2010)] J. Chaloupka, G. Jackeli, and G. Khaliullin, Kitaev-Heisenberg Model on a Honeycomb Lattice: Possible Exotic Phases in Iridium Oxides A_2IrO_3, Phys. Rev. Lett. 105, 027204 (2010).
[Singh et al.(2012)] Y. Singh, S. Manni, J. Reuther, T. Berlijn, R. Thomale, W. Ku, S. Trebst, and P. Gegenwart, Relevance of the Heisenberg-Kitaev Model for the Honeycomb Lattice Iridates A_2IrO_3, Phys. Rev. Lett. 108, 127203 (2012).
[Johnson et al.(2015)] R. D. Johnson, S. C. Williams, A. A. Haghighirad, J. Singleton, V. Zapf, P. Manuel, I. I. Mazin, Y. Li, H. O. Jeschke, R. Valentí, and R. Coldea, Monoclinic crystal structure of RuCl_3 and the zigzag antiferromagnetic ground state, Phys. Rev. B 92, 235119 (2015).
[Sears et al.(2020)] J. A. Sears, L. E. Chern, S. Kim, P. J. Bereciartua, S. Francoual, Y. B. Kim, and Y.-J. Kim, Ferromagnetic Kitaev interaction and the origin of large magnetic anisotropy in α-RuCl_3, Nature Physics 16, 837 (2020).
[Sugita et al.(2020)] Y. Sugita, Y. Kato, and Y. Motome, Antiferromagnetic Kitaev interactions in polar spin-orbit Mott insulators, Phys. Rev. B 101, 100410 (2020).
[Trebst and Hickey(2022)] S. Trebst and C. Hickey, Kitaev materials, Physics Reports 950, 1 (2022).
[Plumb et al.(2014)] K. W. Plumb, J. P. Clancy, L. J. Sandilands, V. V. Shankar, Y. F. Hu, K. S. Burch, H.-Y. Kee, and Y.-J. Kim, RuCl_3: A spin-orbit assisted Mott insulator on a honeycomb lattice, Phys. Rev. B 90, 041112 (2014).
[Banerjee et al.(2017)] A. Banerjee, J. Yan, J. Knolle, C. A. Bridges, M. B. Stone, M. D. Lumsden, D. G. Mandrus, D. A. Tennant, R. Moessner, and S. E. Nagler, Neutron scattering in the proximate quantum spin liquid α-RuCl_3, Science 356, 1055 (2017).
[Liu and Khaliullin(2018)] H. Liu and G. Khaliullin, Pseudospin exchange interactions in d^7 cobalt compounds: Possible realization of the Kitaev model, Phys. Rev. B 97, 014407 (2018).
[Sano et al.(2018)] R. Sano, Y. Kato, and Y. Motome, Kitaev-Heisenberg Hamiltonian for high-spin d^7 Mott insulators, Phys. Rev. B 97, 014408 (2018).
[Jang et al.(2019)] S.-H. Jang, R. Sano, Y. Kato, and Y. Motome, Antiferromagnetic Kitaev interaction in f-electron based honeycomb magnets, Phys. Rev. B 99, 241106 (2019).
[Jang et al.(2020)] S.-H. Jang, R. Sano, Y. Kato, and Y. Motome, Computational design of f-electron Kitaev magnets: Honeycomb and hyperhoneycomb compounds A_2PrO_3 (A = alkali metals), Phys. Rev. Mater. 4, 104420 (2020).
[Hinatsu and Doi(2006)] Y. Hinatsu and Y. Doi, Crystal structures and magnetic properties of alkali-metal lanthanide oxides A_2LnO_3 (A = Li, Na; Ln = Ce, Pr, Tb), Journal of Alloys and Compounds 418, 155 (2006).
[Ramanathan et al.(2021)] A. Ramanathan, J. E. Leisen, and H. S. La Pierre, In-Plane Cation Ordering and Sodium Displacements in Layered Honeycomb Oxides with Tetravalent Lanthanides: Na_2LnO_3 (Ln = Ce, Pr, and Tb), Inorganic Chemistry 60, 1398 (2021).
[Daum et al.(2021)] M. J. Daum, A. Ramanathan, A. I. Kolesnikov, S. Calder, M. Mourigal, and H. S. La Pierre, Collective excitations in the tetravalent lanthanide honeycomb antiferromagnet Na_2PrO_3, Phys. Rev. B 103, L121109 (2021).
[Ramanathan et al.(2023)] A. Ramanathan, J. Kaplan, D.-C. Sergentu, J. A. Branson, M. Ozerov, A. I. Kolesnikov, S. G. Minasian, J. Autschbach, J. W. Freeland, Z. Jiang, M. Mourigal, and H. S. La Pierre, Chemical design of electronic and magnetic energy scales of tetravalent praseodymium materials, Nature Communications 14, 3134 (2023).
[Suter and Wojek(2012)] A. Suter and B. Wojek, Musrfit: A Free Platform-Independent Framework for μSR Data Analysis, Physics Procedia 30, 69 (2012).
[Möller et al.(2013)] J. S. Möller, P. Bonfà, D. Ceresoli, F. Bernardini, S. J. Blundell, T. Lancaster, R. D. Renzi, N. Marzari, I. Watanabe, S. Sulaiman, and M. I. Mohamed-Ibrahim, Playing quantum hide-and-seek with the muon: localizing muon stopping sites, Physica Scripta 88, 068510 (2013).
[Bonfà and De Renzi(2016)] P. Bonfà and R. De Renzi, Toward the Computational Prediction of Muon Sites and Interaction Parameters, Journal of the Physical Society of Japan 85, 091014 (2016).
[Onuorah et al.(2018)] I. J. Onuorah, P. Bonfà, and R. De Renzi, Muon contact hyperfine field in metals: A DFT calculation, Phys. Rev. B 97, 174414 (2018).
[Blundell and Lancaster(2023)] S. J. Blundell and T. Lancaster, DFT+μ: Density functional theory for muon site determination, Applied Physics Reviews 10 (2023).
[Perdew et al.(1996)] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett. 77, 3865 (1996).
[Giannozzi et al.(2009)] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, Journal of Physics: Condensed Matter 21, 395502 (2009).
[Blöchl(1994)] P. E. Blöchl, Projector augmented-wave method, Phys. Rev. B 50, 17953 (1994).
[Monkhorst and Pack(1976)] H. J. Monkhorst and J. D. Pack, Special points for Brillouin-zone integrations, Phys. Rev. B 13, 5188 (1976).
[Granroth et al.(2010)] G. E. Granroth, A. I. Kolesnikov, T. E. Sherline, J. P. Clancy, K. A. Ross, J. P. C. Ruff, B. D. Gaulin, and S. E. Nagler, SEQUOIA: A newly operating chopper spectrometer at the SNS, Journal of Physics: Conference Series 251, 012058 (2010).
[Blundell et al.(2022)] S. J. Blundell, R. De Renzi, T. Lancaster, and F. L. Pratt, Introduction to Muon Spectroscopy (Oxford University Press, Oxford, 2022).
[Cheung et al.(2018)] S. C. Cheung, Z. Guguchia, B. A. Frandsen, Z. Gong, K. Yamakawa, D. E. Almeida, I. J. Onuorah, P. Bonfá, E. Miranda, W. Wang, D. W. Tam, Y. Song, C. Cao, Y. Cai, A. M. Hallas, M. N. Wilson, T. J. S. Munsie, G. Luke, B. Chen, G. Dai, C. Jin, S. Guo, F. Ning, R. M. Fernandes, R. De Renzi, P. Dai, and Y. J. Uemura, Disentangling superconducting and magnetic orders in NaFe_1-xNi_xAs using muon spin rotation, Phys. Rev. B 97, 224508 (2018).
[Le et al.(1993)] L. Le, A. Keren, G. Luke, W. Wu, Y. Uemura, M. Tamura, M. Ishikawa, and M. Kinoshita, Searching for spontaneous magnetic order in an organic ferromagnet. μSR studies of β-phase p-NPNN, Chemical Physics Letters 206, 405 (1993).
[Campostrini et al.(2002)] M. Campostrini, M. Hasenbusch, A. Pelissetto, P. Rossi, and E. Vicari, Critical exponents and equation of state of the three-dimensional Heisenberg universality class, Phys. Rev. B 65, 144520 (2002).
[Pratt et al.(2007)] F. L. Pratt, P. M. Zieliński, M. Bałanda, R. Podgajny, T. Wasiutyński, and B. Sieklucka, A μSR study of magnetic ordering and metamagnetism in a bilayered molecular magnet, Journal of Physics: Condensed Matter 19, 456208 (2007).
[Smidman et al.(2013)] M. Smidman, D. T. Adroja, A. D. Hillier, L. C. Chapon, J. W. Taylor, V. K. Anand, R. P. Singh, M. R. Lees, E. A. Goremychkin, M. M. Koza, V. V. Krishnamurthy, D. M. Paul, and G. Balakrishnan, Neutron scattering and muon spin relaxation measurements of the noncentrosymmetric antiferromagnet CeCoGe_3, Phys. Rev. B 88, 134416 (2013).
[Majumder et al.(2018)] M. Majumder, R. S. Manna, G. Simutis, J. C. Orain, T. Dey, F. Freund, A. Jesche, R. Khasanov, P. K. Biswas, E. Bykova, N. Dubrovinskaia, L. S. Dubrovinsky, R. Yadav, L. Hozoi, S. Nishimoto, A. A. Tsirlin, and P. Gegenwart, Breakdown of Magnetic Order in the Pressurized Kitaev Iridate Li_2IrO_3, Phys. Rev. Lett. 120, 237202 (2018).
[Shiroka et al.(2011)] T. Shiroka, G. Lamura, S. Sanna, G. Prando, R. De Renzi, M. Tropeano, M. R. Cimberle, A. Martinelli, C. Bernini, A. Palenzona, R. Fittipaldi, A. Vecchione, P. Carretta, A. S. Siri, C. Ferdeghini, and M. Putti, Long- to short-range magnetic order in fluorine-doped CeFeAsO, Phys. Rev. B 84, 195123 (2011).
[Holzschuh et al.(1983)] E. Holzschuh, A. B. Denison, W. Kündig, P. F. Meier, and B. D. Patterson, Muon-spin-rotation experiments in orthoferrites, Phys. Rev. B 27, 5294 (1983).
[Perez-Mato et al.(2015)] J. Perez-Mato, S. Gallego, E. Tasci, L. Elcoro, G. de la Flor, and M. Aroyo, Symmetry-Based Computational Tools for Magnetic Crystallography, Annual Review of Materials Research 45, 217 (2015).
[Bonfà et al.(2018)] P. Bonfà, I. J. Onuorah, and R. D. Renzi, Introduction and a Quick Look at MUESR, the Magnetic Structure and mUon Embedding Site Refinement Suite, in Proceedings of the 14th International Conference on Muon Spin Rotation, Relaxation and Resonance (μSR2017), Vol. 21 (2018).
[Lutz et al.(1982)] H. Lutz, W. Eckers, and H. Haeuseler, OH stretching frequencies of solid hydroxides and of free OH^- ions, Journal of Molecular Structure 80, 221 (1982).
[Winkler et al.(2008)] B. Winkler, A. Friedrich, D. J. Wilson, E. Haussühl, M. Krisch, A. Bosak, K. Refson, and V. Milman, Dispersion Relation of an OH-Stretching Vibration from Inelastic X-Ray Scattering, Phys. Rev. Lett. 101, 065501 (2008).
[Scheie(2021)] A. Scheie, PyCrystalField: software for calculation, analysis and fitting of crystal electric field Hamiltonians, Journal of Applied Crystallography 54, 356 (2021).
[Note1()] See Supplemental Material at [URL will be inserted by publisher]. The Supplemental Material includes Refs. <cit.>.
[Katukuri et al.(2014)] V. M. Katukuri, S. Nishimoto, V. Yushankhai, A. Stoyanova, H. Kandpal, S. Choi, R. Coldea, I. Rousochatzakis, L. Hozoi, and J. Van Den Brink, Kitaev interactions between j=1/2 moments in honeycomb Na_2IrO_3 are large and ferromagnetic: insights from ab initio quantum chemistry calculations, New Journal of Physics 16, 013056 (2014).
[Rau and Gingras(2018)] J. G. Rau and M. J. P. Gingras, Frustration and anisotropic exchange in ytterbium magnets with edge-shared octahedra, Phys. Rev. B 98, 054408 (2018).
[Maksimov and Chernyshev(2020)] P. A. Maksimov and A. L. Chernyshev, Rethinking RuCl_3, Phys. Rev. Research 2, 033011 (2020).
[Rau et al.(2018)] J. G. Rau, P. A. McClarty, and R. Moessner, Pseudo-Goldstone Gaps and Order-by-Quantum Disorder in Frustrated Magnets, Phys. Rev. Lett. 121, 237201 (2018).
[McClarty et al.(2018)] P. A. McClarty, X.-Y. Dong, M. Gohlke, J. G. Rau, F. Pollmann, R. Moessner, and K. Penc, Topological magnons in Kitaev magnets at high fields, Phys. Rev. B 98, 060404 (2018).
[Rau et al.(2019)] J. G. Rau, R. Moessner, and P. A. McClarty, Magnon interactions in the frustrated pyrochlore ferromagnet Yb_2Ti_2O_7, Phys. Rev. B 100, 104423 (2019).
[Zhitomirsky and Chernyshev(2013)] M. E. Zhitomirsky and A. L. Chernyshev, Colloquium: Spontaneous magnon decays, Rev. Mod. Phys. 85, 219 (2013).
[Mourigal et al.(2013)] M. Mourigal, W. T. Fuhrman, A. L. Chernyshev, and M. E. Zhitomirsky, Dynamical structure factor of the triangular-lattice antiferromagnet, Phys. Rev. B 88, 094407 (2013).
[Weihong et al.(1991)] Z. Weihong, J. Oitmaa, and C. J. Hamer, Second-order spin-wave results for the quantum XXZ and XY models with anisotropy, Phys. Rev. B 44, 11869 (1991).
http://arxiv.org/abs/2407.13692v1
20240718165818
Prover-Verifier Games improve legibility of LLM outputs
[ "Jan Hendrik Kirchner", "Yining Chen", "Harri Edwards", "Jan Leike", "Nat McAleese", "Yuri Burda" ]
cs.CL
[ "cs.CL" ]
Prover-Verifier Games improve legibility of LLM outputs Jan Hendrik Kirchner Yining Chen Harri Edwards Jan Leike Nat McAleese Yuri Burda July 22, 2024 ======================================================================================== § ABSTRACT One way to increase confidence in the outputs of Large Language Models (LLMs) is to support them with reasoning that is clear and easy to check — a property we call legibility. We study legibility in the context of solving grade-school math problems and show that optimizing chain-of-thought solutions only for answer correctness can make them less legible. To mitigate the loss in legibility, we propose a training algorithm inspired by the Prover-Verifier Game from <cit.>. Our algorithm iteratively trains small verifiers to predict solution correctness, "helpful" provers to produce correct solutions that the verifier accepts, and "sneaky" provers to produce incorrect solutions that fool the verifier. We find that the helpful prover's accuracy and the verifier's robustness to adversarial attacks increase over the course of training. Furthermore, we show that legibility training transfers to time-constrained humans tasked with verifying solution correctness. Over the course of LLM training, human accuracy increases when checking the helpful prover's solutions and decreases when checking the sneaky prover's solutions. Hence, training for checkability by small verifiers is a plausible technique for increasing output legibility. Our results suggest legibility training against small verifiers as a practical avenue for increasing legibility of large LLMs to humans, and thus could help with alignment of superhuman models. § INTRODUCTION As machine learning systems find uses in higher-stakes areas, it becomes increasingly important to justify why we can trust their outputs and delineate when such trust should not be placed. One possible way to gain trust in the output of a complex system is to require that the system produces an explanation of its output that is legible to humans or another trusted system, i.e. can be fully understood to the extent that any possible errors would be spotted. For example, to build trust in the judicial system we ask courts to produce clear and legible written opinions that explain and support their decision making. In this paper we ask whether we can make outputs of capable LLMs more legible to humans. One direct approach is to optimize model outputs for legibility via human feedback (e.g., RLHF; <cit.>), which has its limitations <cit.>. We study a complementary approach, and ask whether optimizing larger LLM provers to produce Chain-of-Thought (CoT) solutions checkable by smaller LLM verifiers makes them more legible to humans. Our motivations for studying smaller verifiers as a proxy for humans include: (1) LLMs are already used to aid with judging LLM outputs and this approach is more scalable than relying on humans alone <cit.>, (2) varying the capability gap (in terms of pretraining compute) between prover and verifier lets us study legibility as a function of this gap, and (3) when human feedback is limited for advanced systems, making outputs verifiable by less capable verifiers may incrementally aid human legibility <cit.>. Checkability — i.e. the capacity of a verifier to independently check the correctness of solutions generated by a helpful prover — can be defined by a combination of two properties: * Completeness. For any input in the domain of interest, the helpful prover generates correct solutions and the verifier accepts them as correct.
* Soundness. The verifier rejects all incorrect solutions produced by any adversarial provers of the same model class as the helpful one. The definitions are inspired by literature on Probably-Approximately Correct (PAC) verification and interactive proof systems <cit.>. In informal domains, neither models nor humans can be expected to be perfectly complete or sound. We instead study relaxed versions of completeness and soundness. Instead of perfect completeness we expect the helpful prover to have a high pass rate and the verifier to have high recall on helpful samples. Instead of perfect soundness we expect the verifier to have a low false positive rate on adversarial inputs produced through limited amounts of optimization. When we find helpful provers and verifiers with such limited completeness and soundness properties, we say that the helpful prover is legible to the verifier. This is analogous to saying that outputs are legible to humans if humans accept them as correct and can't be fooled by similar-looking incorrect outputs. To obtain legible provers and robust verifiers, we propose using checkability training, an iterative algorithm inspired by the Prover-Verifier Game (PVG; <cit.>). We formulate a version of the game where the verifier moves first assuming that the prover will play its best-response strategy conditioned on the verifier's strategy (i.e., the verifier "leads" and the prover "follows"), and show that for provers with an unconstrained model class, finding a verifier-leading Stackelberg equilibrium is equivalent to finding a perfectly legible protocol with respect to the verifier (Theorem <ref>). Our iterative algorithm is inspired by that version of the game, while optimizing losses that have better convergence properties in practice. In practice we alternate between training a verifier language model and a prover language model that we prompt to act either helpful or sneaky on disjoint subsets of grade-school math problems with known ground truth answers <cit.>[We focus on the easier case where some ground truth labels are available for improving the players, and leave the semi-supervised or unsupervised settings for future work (see Section <ref> for more discussions).]. In each round, we first train the verifier using supervised learning to predict correctness of solutions sampled from a balanced mixture of provers from previous rounds. Next, we optimize the prover using reinforcement learning with respect to a reward combining the solutions' correctness and the current-round verifier acceptance rate ("convincingness"). The prover is conditioned on a role sampled uniformly from "helpful" and "sneaky". In the helpful role, we reward being convincing and correct. In the sneaky role, we reward being convincing and incorrect. Our main results are illustrated in Figure <ref>: * Direct reinforcement learning on solution correctness produces solutions that are difficult for humans to evaluate. If we optimize our prover only for solution correctness, it can achieve high performance on grade-school math problems. However, the solutions produced by the model are very different from human-written solutions and consequently are difficult to evaluate for time-constrained humans (Section <ref>). * Our proposed checkability training achieves decent performance while retaining legibility. After checkability training, we obtain a prover that produces solutions that offer a better performance-legibility trade-off.
It has lower test performance than the prover optimized for correctness alone, which represents an instance of a legibility tax <cit.>. The checkability training partially closes the gap, as the prover's pass rate improves in later rounds of training (Section <ref>). Checkability training produces three useful artifacts — a robust verifier, a sneaky prover, and a helpful prover. We evaluate the legibility of the sneaky and helpful prover as well as the soundness of the verifiers. The main findings are: * The verifiers show increasing soundness over training rounds, as measured via the difficulty of various sneaky-only attacks and balanced best-of-n accuracy (Section <ref>). * The sneaky prover generates increasingly subtle flaws, since the average step-by-step correctness judged by GPT-4 increases, and the average human judge accuracy on sneaky solutions decreases (Section <ref>). * The helpful prover achieves higher pass rates, and human trainers judge its solutions correctly more often in later rounds (Section <ref>). Overall, we found it surprising that legibility to smaller LLMs seems to transfer to legibility to humans — sneaky solutions that are harder to detect for smaller verifiers are also harder to detect for humans, and helpful solutions that are more legible to smaller verifiers are also more legible to humans. If future LLMs acquire superhuman capabilities, training them to be more legible will help humans evaluate the correctness of their outputs. Hence, the prover-verifier game is a promising candidate for a scalable oversight method. § RELATED WORK Computational complexity. Our training setup is most similar to learning a verification protocol for NP <cit.>. If we allow more interactions, we can extend the PVG setup to interactive proofs <cit.>. <cit.> and <cit.> applied interactive proofs to Probably-Approximately Correct (PAC) verification. They showed that a verifier can learn a hypothesis class with much less data by interacting with an untrusted prover (O(√(VC(ℋ))) sample complexity for a hypothesis class of Vapnik-Chervonenkis dimension VC(ℋ)). In our work, we assume a capability (instead of data) constraint on the verifier (it cannot independently produce a solution without the help of a prover even with infinite data). <cit.> first made the connection from prover-verifier games to the alignment problem. They proved that different game setups lead to equilibria of different desirability, and empirically showed promise in toy experimental settings. Unlike all those previous works, our verifier's task is not classifying the inputs but rather judging the correctness of solutions. We take inspiration from those theoretical works and aim to empirically train for checkability on a nontrivial domain at scale. Scalable oversight. The central hypothesis that discrimination is easier than generation is common to many scalable oversight methods <cit.> including critiques <cit.> and debate <cit.>. <cit.> showed negative results for improving time-limited human judge accuracy through human debate. Previous works have also studied using LLMs as judges or debaters, although they differ on whether ground truth labels are available for training. Our setup is similar to single-turn simultaneous debate where the helpful and sneaky provers try to convince the model judge of different answers, assuming that some ground truth labels are available for improving both the provers and the verifier.
Similarly, using supervised training, <cit.> trained model debaters and judges and showed that the optimization makes the debaters more convincing and the judges more accurate. <cit.> showed that debate works better than consultancy (showing the argument from a helpful prover half the time and sneaky half the time) for human judges. In our setup the judge does not see both sides at the same time, so it is more similar to the consultancy approach. <cit.> showed that LLMs trained using a persuasiveness signal only (no ground truth labels) can help both non-expert model and human judges. All of those works mainly use the QuALITY dataset <cit.>, where the task is question-answering about a fictional story, so the justifications mainly consist of verified quotes from the text. In contrast, we focus on a less artificial, more challenging reasoning domain where the difference between provers and verifiers is not privileged information but a capability gap. Chain-of-Thought. Prompting LLMs to produce Chain-of-Thought (CoT) solutions is useful for boosting accuracy <cit.>, and CoTs can be used as natural justifications for the final answer. However, CoTs are not guaranteed to be legible a priori, and our checkability training is one way to train CoTs to be more legible. Many works guide the model to produce a specific type of structured CoT, such as task decomposition <cit.>, recursive summarization <cit.>, and symbolic reasoning <cit.>. <cit.> use expensive step-wise human supervision to train models that can solve math. In contrast, we do not enforce any output structure, and rely solely on final answer supervision. Our checkability training method is general and can be readily studied in any domain where we have some ground truth labels. Legibility emerges as a natural consequence of the training dynamics (Figure <ref>). Explainability and legibility as complementary properties. Explainability <cit.> tries to give an account of the prover model's reasoning, whether the reasoning is correct or not. The resulting explanations can be arbitrarily complex <cit.>. In contrast, legibility of a prover model's output allows one to verify whether a given solution is correct, ignoring how the prover reasoned about it. Consequently, legibility requires model outputs that are coherent and accessible to human understanding <cit.>. It is possible to have either without the other, but especially in high-stakes settings correctness is more important than understanding <cit.>. Adversarial Robustness. Deep neural networks are susceptible to adversarial attacks <cit.>. The decade-long work on adversarial defenses has had limited success for vision models <cit.>. Language models have discrete input-output tokens and hence it is not as straightforward to perform gradient-based attacks <cit.>, but white-box <cit.> and black-box <cit.> jailbreaks that can undo the alignment process are still found. Requiring a perfectly robust verifier is not realistic either for models or humans <cit.>. Our checkability training can be viewed as adversarial training on the verifier, and we show that robustness transfers to test attacks different from training attacks in Section <ref>, which is similar to common methods in the adversarial robustness literature <cit.>. Legibility in concrete applications. Prior works have studied legibility in different contexts. For example, in robotics, legibility refers to being able to interpret an agent's intent based on its action <cit.>.
For coding, human legibility has multiple facets including being able to correctly trace the execution, find bugs, or summarize goals <cit.>. In this work we measured human legibility via the accuracy of human judgements of solution correctness in time-limited settings, which is only one of many ways to measure human legibility. § SETUP In general, we consider a dataset of problems with ground truth answers (x,y)∈𝒳×𝒴, where x denotes a problem statement and y=y(x) denotes the known and unique ground truth answer. Given a proposed solution z to the problem x including a final answer, we let 𝟙(x, z) be the correctness indicator that is 1 if the solution proposes an answer that is the same as y(x) and 0 otherwise. We use a SymPy grader <cit.> for evaluating 𝟙(x, z) on grade-school math (GSM) problems. We have a prover π(z|x) proposing solutions given a problem statement, and a verifier v(x, z) ∈ [0, 1] that estimates the correctness of the solution's answer. We call the output of the verifier the convincingness score. §.§ From checkability to the Prover-Verifier Game We start with a formal definition of checkability. Prover π from model family ℋ is verifiable to verifier v on the data distribution P_XY if and only if the prover always generates correct and convincing solutions, and no adversarial / "sneaky" prover from the same model family ℋ can fool the verifier into accepting incorrect solutions. Formally, * Completeness: ∀ x ∈𝒳, 𝟙(x, π(x)) = v(x, π(x)) = 1. * Soundness: ∀π' ∈ℋ, ∀ x ∈𝒳, 𝟙(x, π'(x)) = 0 ⟹ v(x, π'(x)) = 0. A few notes on the definition above: First, checkability doesn't necessarily capture everything we intuitively want from legibility, and we hope to inspire other definitions in future works. Second, we restricted the sneaky prover to be from the same model family as the helpful one. This is because we are interested in answering the question "Can one intelligent system trust another intelligent system of a certain size, architecture, and trained with certain algorithms?". Hence we only have to ensure that the verifier is robust to the class of models we solicit answers from. Third, in empirical work we only seek to find approximate completeness and soundness. Since the verifiers and provers we study are neural networks, it is impractical to find perfect completeness and soundness. Therefore we study a relaxation of completeness and soundness in the following sense: (1) for the helpful prover we require that it is correct in distribution P_XY with high probability; and (2) for the verifier, we require that it is robust to a limited set of black-box attacks that are relatively "close" to the honest prover (intuitively, we want the verifier not to miss small errors that would make the conclusion invalid). We will compare verifiers' soundness via their ranking on a variety of attacks, and the more comprehensive the attacks are the closer we get to measuring soundness. We propose to train a legible prover and a sound verifier using variants of the prover-verifier game <cit.>. We formulate it as a two-player game between a role-conditioned prover π and the verifier v. We use shorthand h(x) = π(x, role=helpful) for the prover conditioned on the helpful role and s(x) = π(x, role=sneaky) for the prover conditioned on the sneaky role.
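A minimal sketch of these basic ingredients may help fix notation (this is our illustration, not code from the paper; the prompt wording, function names, and the sample_fn interface are hypothetical): a SymPy-based correctness indicator for the final answer, and a single prover model conditioned on the helpful or sneaky role.

```python
# Sketch only: correctness indicator 1(x, z) via SymPy, and a role-conditioned prover.
from sympy import simplify, sympify

def correctness_indicator(final_answer: str, ground_truth: str) -> int:
    """Return 1 if the proposed final answer matches y(x) symbolically, else 0."""
    try:
        return int(simplify(sympify(final_answer) - sympify(ground_truth)) == 0)
    except Exception:
        return 0  # unparseable answers are graded as incorrect

HELPFUL_PROMPT = "Solve the problem correctly and show your reasoning."   # hypothetical wording
SNEAKY_PROMPT = "Produce a convincing-looking solution with an incorrect final answer."

def prover(problem: str, role: str, sample_fn) -> str:
    """h(x) = prover(x, 'helpful', ...), s(x) = prover(x, 'sneaky', ...); sample_fn wraps the LLM."""
    system = HELPFUL_PROMPT if role == "helpful" else SNEAKY_PROMPT
    return sample_fn(system=system, user=problem)
```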
We define the verifier's utility as a negative classification loss on a uniform mixture between the helpful and the sneaky samples, R_V = -1/2 𝔼_x ∼ P_X l(v(x, h(x)), 𝟙(x, h(x))) - 1/2 𝔼_x ∼ P_X l(v(x, s(x)), 𝟙(x, s(x))), where l(v, c): [0,1] ×{0, 1}→ℝ is a loss function minimized if and only if v=c, such as the cross-entropy loss. We assume without loss of generality that l(c, c)=0. In practice we optimize R_V via gradient descent on l_V = -R_V. The policy is conditioned on either the helpful or the sneaky role with equal probability, and the combined reward is R_P = 1/2 𝔼_x ∼ P_X r(v(x, h(x)), 𝟙(x, h(x))) + 1/2 𝔼_x ∼ P_X r(v(x, s(x)), 1-𝟙(x, s(x))), where r(v, q): [0, 1] ×{0, 1}→ℝ is a reward function that satisfies: ∀ v_1, v_2 ∈ [0, 1]: r(v_1, 1) > r(v_2, 0), and ∀ v_1, v_2 ∈ [0, 1], v_1>v_2: r(v_1, 1) > r(v_2, 1). Here q ∈{0, 1} indicates that the correctness of the solution is role-aligned — it is 1 if the prover is assigned to be helpful and the solution is correct, or the prover is assigned to be sneaky and the solution is incorrect. The two assumptions on the prover reward function above are: first, any role-aligned solution gets a higher reward than any role-misaligned solution; second, for any role-aligned solution, the prover gets a higher reward when the verifier gives the solution a higher score. In practice we optimize R_P via reinforcement learning, since we cannot differentiate with respect to the solution: doing so would require iterative auto-regressive sampling of LLM tokens, a non-differentiable operation. We show a connection between the prover-verifier game and the completeness and soundness objectives as follows: Assuming that ℋ is the set of all deterministic mappings from 𝒳 to 𝒵, in the PVG where the verifier's utility is R_V and the prover's utility is R_P, the existence of a verifier-leading Stackelberg equilibrium (v^*, h^*, s^*) is necessary and sufficient for the completeness and soundness properties to hold for (v^*, π^*). One of the players (in our case, the verifier) is the leader, and the other player (the prover) is the follower. We assume that given a leader strategy, the follower will always play its best response, and the leader picks its strategy based on this knowledge. A Stackelberg equilibrium means that neither player has an incentive to change its strategy. The proof is in Appendix <ref>. Compared to Theorem 2 in <cit.>, we have more relaxed assumptions on the loss function and reward. In practice, LLMs are probabilistic models and the hypothesis class of LLMs is only a subset of all mappings from 𝒳 to 𝒵, so our empirical setup does not match the theoretical assumptions exactly. Furthermore, when the players are nonconvex models like neural networks, we do not have formal guarantees for finding a Stackelberg equilibrium efficiently. However, we can train the models using coordinate descent to find an approximate local equilibrium. §.§ Checkability training In practice, we design training objectives for the verifier and the provers using slight variations on l and r. Most notably, we train the verifier using a relative loss based on comparisons between correct and incorrect solutions for each question <cit.>, instead of an absolute loss for each solution. For the prover reward r, we have three variants that prioritize verifier score v and role-alignment q differently. Both provers and verifiers are from the GPT-4 family <cit.>.
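Before turning to the practical variants, the idealized payoffs above can be made concrete in a few lines. The sketch below is our illustration; the particular choice of r is only one admissible example satisfying the stated assumptions, and the cross-entropy loss l is the one mentioned in the text.

```python
# Sketch only: per-problem game payoffs. l(v, c) is a cross-entropy loss, minimized
# iff v = c; r(v, q) rewards role-aligned solutions (q = 1) strictly above misaligned
# ones and increases with the verifier score for role-aligned solutions.
import math

def l(v: float, c: int) -> float:
    eps = 1e-9
    return -(c * math.log(v + eps) + (1 - c) * math.log(1 - v + eps))

def r(v: float, q: int) -> float:
    # r(v1, 1) in [0, 1] always exceeds r(v2, 0) in [-2, -1]; monotone in v for q = 1.
    return v if q == 1 else v - 2.0

def prover_reward(v_score: float, is_correct: int, role_is_helpful: bool) -> float:
    """q = 1 when correctness matches the assigned role (helpful+correct or sneaky+incorrect)."""
    q = int(is_correct == int(role_is_helpful))
    return r(v_score, q)
```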
We initialize the prover and the verifier with different-sized models that have been lightly finetuned on a variety of math data from the pretraining distribution, including human-written proofs and derivations. We discuss the algorithmic details below. First, we augmented the GSM <cit.> dataset using 100k synthetically generated and validated datapoints from ChatGPT, similar to the method in <cit.>. We made the dataset larger so as not to be bottlenecked by sample efficiency, in order to focus on the training dynamics. We validated that the original test set accuracy is not impacted by using real vs. synthetic data. Next, we randomly partitioned the training dataset 𝒟 into two equally-sized subsets 𝒟_π and 𝒟_V that are used for training the prover π and the verifier V respectively. This way the verifier and the prover are never optimized on the same prompt. The training proceeds in multiple rounds, and in each round the verifier training phase precedes the prover training phase. The only change between rounds is the data mixture used to train the verifier; no other state, such as model weights, is carried over between rounds. In the verifier training phase of round i we train verifier V_i to minimize the regularized comparison loss 𝔼_x ∼ D_V[ -𝔼_z_1,z_2∼π^* logσ((V(x, z_1)-V(x,z_2)) · (𝟙(x, z_1) - 𝟙(x, z_2))) + λ 𝔼_z∼π^* V(x, z)^2 ]. Here V(x, z) ∈ℝ is a scoring function and is not restricted to the range [0, 1] as in the theoretical formulation. We use capitalized V to avoid confusion. The first term is the reward modeling objective that maximizes the log likelihood of correct solutions being ranked higher than incorrect solutions under the Bradley-Terry model, similar to <cit.>. The second term is a regularizer that is meant to center V at 0. Without this term, adding an arbitrary per-problem constant to V would not change the objective. In preliminary experiments we found this objective to be interchangeable with the cross-entropy loss for the overall dynamics of checkability training, and we adopted this form due to engineering convenience. The prover π^* used for sampling solutions for verifier training is a mixture of an initial base prover and previous-round provers, each balanced to have an equal number of correct and incorrect solutions. Each of the previous rounds has an equal ratio in the mixture, which is tuned as a hyperparameter. In round 0, the solutions are from the base prover sampled via a few-shot prompt (App. <ref>). In later rounds of training, we replace part of the solutions with those from the new provers. Hence we always use the same number of solutions per problem. The amount of verifier optimization is constant over the rounds. During the prover training phase we use Proximal Policy Optimization (PPO; <cit.>) to maximize an expected reward R derived from the current-round verifier score and whether the correctness is aligned with the assigned role. The prover's objective is to maximize the following reward (minus a small KL regularization towards the initialization): 𝔼_x ∼ D_π, role∼{helpful, sneaky}, z ∼π(·| x, role) R(z | x, role). We experimented with three variants of R. We call the first reward variant the Signed Relative Convincingness (SRC) reward. Given a problem x, let the role indicator role be 1 if the prover is assigned to be helpful and 0 otherwise. Suppose that, conditioned on a role and x, the prover generated a batch of solutions {z_0, …, z_b}∼π(·| x, role) with verifier scores {V(x, z_0), …, V(x, z_b)}. Since the scores are not normalized, we first batch-normalize the scores to mean 0, i.e.
V'(x, z_i) = V(x, z_i) - (∑_j=1^b V(x, z_j))/b, and then set the reward as follows: R_SRC(z_i | x, role) = (2 𝟙(x, z_i) - 1) · (2 role - 1) · V'(x, z_i) if V'(x, z_i) > 0, and R_SRC(z_i | x, role) = -1 otherwise. Ignoring the verifier score range discrepancy, the first case is equivalent to setting r(v, q) = v · (2q-1), i.e. when 𝟙(x, z_i) = role, the reward equals the normalized verifier score, and otherwise it equals the negative score. This case satisfies our assumptions (<ref>) that any role-aligned solution has a higher score than any non-aligned one, and that for any two role-aligned solutions, the one with the higher verifier score gets a higher reward. However, the second case, where any solution below the average score is directly assigned a reward of -1, deviates from the theoretical formulation. The rationale for this case is to prioritize convincingness over role-alignment. We show in Section <ref> that this is practically useful for convergence. The second reward variant, which prioritizes role-alignment, is the Correctness-Gated Convincingness (CGC) reward: R_CGC(z | x, role) = V(x, z) if role = 𝟙(x, z), and R_CGC(z | x, role) = V_0 otherwise, where V_0 is a constant that is practically lower than all verifier scores (we used -2 in our experiments). This reward corresponds to r(v, q) = q v + (1-q) inf v, which satisfies the reward assumptions (<ref>) ignoring the discrepancy in verifier score range. Empirically, this reward leads to worse dynamics than the SRC reward (Section <ref>). We also consider a setup that we call Iterative Goodharting, where the helpful and sneaky provers both use the following reward, R_goodhart(z | x) = V(x,z), which means the reward encourages only convincingness, i.e. r(v, q)=v. This version does not satisfy assumptions (<ref>) because r(v, 0)=r(v, 1). The empirical results are in Section <ref>. § RESULTS §.§ Baseline: Direct Optimization for Correctness A natural baseline for our setup is to train the prover model directly to optimize for the correctness of the solutions without the intermediate verifier. Specifically, we can use Proximal Policy Optimization (PPO) against a binary grader that checks if the solution is correct or not. This approach has been explored in works such as <cit.>, where a language model is fine-tuned using reinforcement learning (RL) with a reward based on the correctness of the answers. We use a KL regularization towards the prover initialization with the same small coefficient for all methods in this paper. We find that, indeed, optimizing our prover with correctness as the only goal achieves strong performance on grade-school math problems (Fig. <ref>a). When inspecting samples from the resulting prover, we notice that the solutions become longer, as the prover solves each question in multiple ways, and later in training they become largely unintelligible (Appendix Fig. <ref>) despite still producing the correct answer. To measure how much this peculiar style affects the legibility of solutions, we presented samples from different points during training to human contractors and asked them to determine the validity of the solutions under a strict 45-second time limit per solution. They were asked to judge the correctness of the given solution rather than the final answer (see Appendix <ref> for full instructions and a task example). While the contractors were able to reliably distinguish correct and incorrect solutions from the prover initialization, their accuracy and speed deteriorated noticeably when presented with solutions from the trained prover (Fig. <ref>b,c).
This decrease in human legibility is most pronounced for questions that the model cannot reliably solve (Fig. <ref>d). In summary, we observe that when trained solely for correctness, our model learns to produce useful but unintelligible solutions that time-constrained humans struggle to evaluate. §.§ Dynamics of checkability training To study whether it is possible to retain legibility while improving performance, we ran several rounds of checkability training on the augmented GSM dataset. In this section, we present results where the provers are trained using the Signed Relative Convincingness (SRC) reward (Eq. (<ref>)), which is the reward that that we found most effective for improving legibility. Within each round of the prover training phase, both the sneaky and the helpful prover monotonically improve their convincingness to the small verifier (Fig. <ref>a). In contrast, the fraction of correct solutions produced by the provers evolves non-monotonically (Fig. <ref>b,c). While the helpful prover largely produces increasingly correct solutions, the sneaky prover eventually produces incorrect solutions that match or surpass the verifier score of correct solutions from the helpful prover in the first few rounds of training. The helpful prover's performance decreases at the end of each round potentially due to weight sharing with the sneaky prover (they are a single LLM with different prompts). Note that the prover and the verifier have the same initializations at the beginning of each round, and the only difference between rounds is the data mixture used to train the verifier. We were curious to see if the above dynamic holds also for a wider range of pairs of verifier and prover effective compute amounts. When only varying the size of the verifier by a few orders of magnitude (10^-3 to 10^-1.8-times the compute of the prover), the dynamics remain largely unchanged (Fig. <ref>c). However, when using a much smaller verifier (10^-4-times the compute of the prover), the checkability training becomes unstable and produces weak provers with poor legibility (Sup. Fig. <ref>). Similarly, using a verifier that matches the size of the prover (10^0-times the compute of the prover) prevents the verifier from becoming more robust in subsequent rounds and produces provers with strong performance but poor legibility (Sup. Fig. <ref>). We cannot exclude the possibility that the poor performance of very small and large verifiers is due to non-optimal hyperparameters, but we also suspect there are real reasons for checkability training to not work well for these verifiers. In particular, if the verifier's capability is too limited, it may be fundamentally impossible to perform verification. If the verifiers are very similar to the provers, the sneaky prover can more easily simulate the verifier's mistakes <cit.>, making it more difficult for the verifier to become robust. To summarize, our checkability training exhibits a characteristic dynamic where the helpful prover improves at solving math problems while over subsequent rounds the sneaky prover struggles to produce incorrect, convincing solutions. §.§ Evaluations of checkability to smaller verifiers and legibility to humans At the end of each round of checkability training, we obtain three artifacts: * a verifier trained on diverse adversarial examples, * a sneaky prover that produces convincing, subtly incorrect solutions, * a helpful prover that produces convincing, correct solutions. 
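The round structure that produces these three artifacts can be sketched in a few lines. This is a minimal illustration, not the training code used for the experiments; the helper callables (train_verifier, ppo_train_prover, sample_solutions) are hypothetical stand-ins for the actual verifier-training, PPO, and sampling infrastructure described above.

def checkability_training(base_prover, base_verifier, num_rounds,
                          train_verifier, ppo_train_prover, sample_solutions):
    # Round-0 verifier data comes from the base prover (few-shot sampling);
    # later rounds add balanced correct/incorrect solutions from new provers.
    solution_mixture = [sample_solutions(base_prover)]
    artifacts = []
    for i in range(num_rounds):
        # Re-initialize from the same base checkpoints each round: only the
        # verifier's data mixture changes between rounds.
        verifier_i = train_verifier(base_verifier, solution_mixture)
        # PPO against the frozen round-i verifier, with helpful and sneaky roles.
        prover_i = ppo_train_prover(base_prover, verifier_i)
        solution_mixture.append(sample_solutions(prover_i))
        artifacts.append((verifier_i, prover_i))
    return artifacts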
In this section, we perform various evaluations of the soundness of verifiers and the legibility of sneaky and correct solutions across rounds of training shown in Figure <ref>. §.§.§ The verifier becomes more robust throughout checkability training The verifier's sound evaluation of adversarial inputs from a sneaky prover (of equal capability as the helpful prover) is one requirement of legibility by our definition. We observe that throughout checkability training, the sneaky prover requires increasingly many steps to discover highly convincing, incorrect solutions (Fig. <ref>b). However, it is unclear whether this increase is due to the sneaky and helpful provers sharing weights. Hence, we perform a variety of additional attacks by separately training sneaky-only provers that do not share weights with the helpful prover to evaluate whether the verifier becomes able to discern correct from incorrect completions beyond the exact set-up during training. We performed three attacks with sneaky-only provers using the following reward functions: * the SRC reward (<ref>) to ablate the effect of weight sharing between the helpful and the sneaky prover. * the CGC reward (<ref>) to test generalization of verifier soundness to different variations of checkability training * the goodharting reward (<ref>) to straightforwardly test robustness to optimization for convincingness <cit.> We performed the above attacks on verifiers of different sizes across five rounds of checkability training (Fig. <ref>a). We observe an increase in the number of steps each adversary takes until finding an exploit. Larger verifiers are harder to exploit. For example, none of the sneaky attacks found exploits for verifiers of relative size 10^-2 beyond round 0 within 8k RL steps, whereas verifier of relative size 10^-3 remain exploitable by the goodharting and SRC adversaries until round 4. The test attacks we perform are far from complete and there is no simple way to evaluate whether the verifiers are robust to all sneaky provers within the model family. Nevertheless, the rank correlation between those robustness measures do reflect a general increase in compute budget if some actor trains an adversarial prover to fool the verifier. A more complete evaluation of the verifier's adversarial robustness would include a variety of common white-box and black-box attacks <cit.>. Another way to assess the checkability of the helpful prover to the verifier is via best-of-n accuracy at higher n <cit.>, where we analyze the correctness of the highest-scoring solution according to the verifier out of n solutions (Fig. <ref>b). In order to fairly compare provers with different average pass rates, we collect balanced incorrect and correct samples and only use problems that have at least n/2 correct and n/2 incorrect samples for evaluating best-of-n. We sample up to 2560 solutions for each test problem and the helpful prover from each round. In Figure <ref>(b), for each n, we combine n/2 correct and n/2 incorrect solutions from the current-round helpful prover π_i, and evaluated the best-of-n performance of verifier V_i. The balanced best-of-n performance increases for (V_i, π_i) pairs at higher rounds, indicating that the verifier can better judge the helpful prover's correctness even when controlling for pass rates. 
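As a concrete reference, the balanced best-of-n metric just described can be sketched as follows. This is an illustrative reimplementation under the stated balancing rule (problems without at least n/2 correct and n/2 incorrect samples are skipped), not the paper's evaluation code; each problem is assumed to come with verifier-scored, correctness-labelled samples.

import random

def balanced_best_of_n(problems, n, seed=0):
    # problems: iterable of lists of (verifier_score, is_correct) tuples.
    rng = random.Random(seed)
    wins, total = 0, 0
    for samples in problems:
        correct = [s for s in samples if s[1]]
        incorrect = [s for s in samples if not s[1]]
        if len(correct) < n // 2 or len(incorrect) < n // 2:
            continue  # cannot form a balanced pool for this problem
        pool = rng.sample(correct, n // 2) + rng.sample(incorrect, n // 2)
        best = max(pool, key=lambda s: s[0])  # highest verifier score wins
        wins += int(best[1])
        total += 1
    return wins / total if total else float("nan")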
§.§.§ The sneaky prover becomes increasingly subtle throughout checkability training Having established that the verifier becomes increasingly robust to adversarial attacks, we turn to analysing the sneaky attacks that do fool the verifier. While sneaky incorrect solutions from early rounds often contain multiple obvious mistakes and non-sequitur, incorrect solutions from later rounds contain increasingly subtle flaws (Fig. <ref>c and Sup. Fig. <ref>). To quantify this shift, we used GPT-4 to assess the validity of small portions of sneaky solutions from different rounds (Fig. <ref>a). We observe that while the minimum score over portions of a solution remains very low (indicating a flaw detectable by GPT-4), the average score increases over rounds (indicating the general soundness of the remaining portions). We were interested to see whether the adversarial attacks against the small verifier might transfer to humans. Interestingly, when presenting the sneaky solutions to the time-constrained contractors, the fraction of correctly labeled solutions was comparatively high, but decreased throughout the rounds (Fig. <ref>b). This indicates that the sneaky prover produced by checkability training might be useful as a tampering model for training critique models <cit.>. Note that Fig. <ref>b is from a pilot study that has some design flaws (see Appendix <ref>). In sum, the sneaky prover produced by our checkability training is capable of generating subtly flawed solutions that are non-trivial to spot for time-constrained humans. §.§.§ The helpful prover becomes increasingly legible throughout checkability training We finally turn our attention to the helpful prover and its evolution through successive rounds of checkability training. A key desideratum of our framework is that the helpful prover not only achieves high correctness but also maintains legibility for human evaluators. We observe that as the checkability training progresses, the helpful prover produces solutions with increasing accuracy (Figure <ref>a). In contrast to the training for correctness only (Fig. <ref>), solutions from the helpful prover remain coherent throughout training (Fig. <ref>). This improvement indicates that the prover is effectively leveraging the feedback from the verifier to enhance its problem-solving capabilities. To assess the human legibility of the helpful prover's solutions, we again conducted evaluations with time-constrained human contractors. We observe that both the contractors' mean accuracy in evaluating solutions (Figure <ref>b) and the time required for evaluation (Figure <ref>c) improve as the prover advances through successive rounds. At the end of training, the helpful prover matches the legibility of the initialization while achieving substantially stronger performance. However, the performance falls short of the performance achieved by a prover trained for correctness only (Fig. <ref>), revealing a tension between achieving high performance and retaining legibility <cit.>. We note that one way to resolve this tension is to separate the model's chain-of-thought from proof of correctness, see Appendix <ref> for more discussion. In summary, our findings demonstrate that checkability training facilitates the development of a helpful prover whose outputs are increasingly correct and legible. This dual improvement is critical for applications where human oversight and collaboration with AI systems are essential. 
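To make the reward formulations concrete before comparing them in the next section, the following is a minimal sketch (not the paper's implementation) of how the three prover reward variants could be computed for a batch of solutions to one problem; the function and argument names are illustrative only. Here scores are raw verifier scores, correct holds 0/1 correctness indicators, and role is 1 for helpful and 0 for sneaky; V_0 = -2 as in the text.

def src_rewards(scores, correct, role):
    mean = sum(scores) / len(scores)
    normalized = [v - mean for v in scores]  # batch-normalize to mean 0
    rewards = []
    for v, c in zip(normalized, correct):
        if v > 0:
            rewards.append((2 * c - 1) * (2 * role - 1) * v)  # role-aligned sign
        else:
            rewards.append(-1.0)  # below-average score: flat penalty
    return rewards

def cgc_reward(score, correct, role, v0=-2.0):
    return score if role == correct else v0  # correctness-gated convincingness

def goodhart_reward(score):
    return score  # convincingness only, no role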
§ ALTERNATIVE REWARDS We now investigate whether using the alternative rewards for prover training affects convergence as well as legibility. §.§ Checkability training with the Correctness-Gated Convincingness (CGC) reward The Correctness-Gated Convincingness (CGC) reward (<ref>) prioritizes correctness (for the helpful prover) and incorrectness (for the sneaky prover) over convincingness. When the helpful prover produces incorrect answers (or the sneaky prover produces correct answers), the reward is set to a constant penalty; otherwise it is the verifier score. In this section, we show that the CGC reward empirically leads to worse training dynamics. In each round of CGC reward training, we train the prover until the average verifier score of the sneaky prover's incorrect samples becomes as high as that of the helpful prover's correct samples, and the sneaky prover reaches low (<5%) accuracy. Figure <ref> shows the prover training dynamics: the number of RL steps until the sneaky prover succeeds does not consistently increase over rounds. We observe that the sneaky prover's accuracy quickly decreases, and the convincingness rises more slowly, for the CGC reward in comparison to the SRC reward. Our hypothesis is that in this case the sneaky and helpful players quickly exploit different modes with distinct styles, and hence there are obvious spurious statistical clues that separate sneaky from helpful, which do not help the verifier learn the correctness signal. See Section <ref> for more discussion. It is possible that we did not tune the hyperparameters for the CGC reward optimally, or that the verifier requires more rounds to become robust. We did not perform human evaluation for the CGC reward training due to its worse training dynamics. Nevertheless, this alternative reward formulation serves as a strong attack for evaluating the robustness of verifiers trained using the SRC reward (Section <ref>). §.§ Iterative goodharting training We run an alternative training scheme where the prover does not have an assigned role. It is trained with reward (<ref>), which is just the verifier score. We call this the `goodharting' baseline because usually during Reinforcement Learning from Human Feedback (RLHF) <cit.>, the prover is directly optimized to maximize the score of a reward model. When the reward model is not robust enough, it is easy to over-optimize, so the prover overfits to the imperfect proxy reward. This phenomenon is called `goodharting' or `reward hacking' <cit.>. The verifier can learn from the incorrect and correct samples from the same prover. In more detail, for each round, we train the prover using PPO with reward (<ref>) until its training accuracy drops below 0.2. Next, we sample balanced correct and incorrect samples on the verifier split of the training data for all prover checkpoints throughout PPO. The samples are then mixed into the verifier training dataset for the next round. We still keep the total number of samples per problem constant for training the verifier in each round. Figure <ref> shows the training dynamics. We early stop each round when the prover's training accuracy drops below 0.2 on average over the last 80 RL steps. Across verifier sizes 10^-4 to 10^0 relative to the prover, the number of steps it takes for the prover to over-optimize generally increases. Up to size 10^-2, the larger the verifier is, the fewer training rounds it requires to reach perfect goodharting robustness within 8k RL steps.
Verifiers of medium size (3× 10^-3∼ 10^-2) benefit most from the iterative goodharting training, whereas smaller (≤ 10^-3) and larger (10^0) verifiers benefit less. We further evaluated the robustness of verifiers trained using the goodharting reward under attacks from the SRC reward (<ref>) and the alternative CGC reward (<ref>), and the verifiers' robustness goes up sometimes but not always (Figures <ref>, <ref>). The balanced best-of-n robustness increases (Figure <ref>). However, when we showed the samples from the goodharting training to human trainers, there was no significant difference in human judge accuracy across rounds (Fig. <ref>), although Fig. <ref> is from a pilot study that has some design flaws (see Appendix <ref>). Overall, our results on the iterative goodharting training show that it increases verifier soundness to a certain extent, but we do not have enough evidence showing that it improves human legibility. § DISCUSSION §.§ Limitations Only domains that allow natural language justifications. We studied the math domain because natural language explanations are common and often required for building trust in mathematical conclusions. However, in some domains it is difficult to support conclusions with natural language explanations. For example, recognizing particular faces is easy for humans, but it is hard to communicate to others in natural language why a particular face belongs to a particular person. Need for ground truth answers and reliance on generalization. One prerequisite for our setup is being able to certify the correctness of answers in solutions produced by the prover. For the grade school math domain we rely on having a training dataset of problems with known ground truth answers <cit.>. We anticipate this setup to be less relevant to domains where collecting any ground truth is too hard or expensive. One possible approach in this case could be to hope for favorable out-of-distribution generalization properties of legibility from solutions to problems with known ground truth answers to other domains. We don't study out-of-domain generalization in this paper because we train on only one dataset, but we expect that training on a diverse set of domains could potentially lead to good generalization from domains with ground truth labels to the domains without labels <cit.>. Conflating the model's chain-of-thought with legible solutions. By training the CoTs to be legible, we potentially limited the provers' accuracy. A better method could be training a first model using CoT to maximize accuracy, and then training a second model to turn the CoT used to produce the answer into a legible justification to humans. Hence, our measurement of the accuracy-legibility tradeoff may be overly pessimistic; the prover may be able to achieve legibility at less or no cost to accuracy. For the correctness-only baseline, we could have early stopped at a less accurate but more legible checkpoint. However, we don't know of a principled way to do early stopping for legibility. Dataset diversity and difficulty. We conducted our empirical study on only one dataset (GSM). The dataset is relatively easy and the solutions are quite standardized. Using ChatGPT to augment this dataset could result in some incorrect labels, but based on preliminary data analysis we did not see a significant difference between verifier test accuracies trained on synthetic vs. real data.
We would like to extend the checkability training framework to harder math datasets and coding, as well as fuzzier domains such as human preference data <cit.>. Our method is fully general as long as there is a finite set of final answer labels. Initialization with human-written math derivations. Our prover initialization has relatively high performance in GSM. We used a prover that has been lightly finetuned on a variety of math data to learn the correct format, and the finetuning dataset contains some of the GSM trainset data. We believe this doesn't qualitatively affect our results and only makes it easier for the helpful prover to produce correct answers. Figure <ref> shows that initialization with a base model without math finetuning leads to qualitatively similar training dynamics. Heterogeneous experiment set-ups. We have different stopping criteria for different training schemes presented throughout the paper. We trained the prover for a fixed number of RL steps in each round for certain training schemes (4k steps in Fig. <ref>a, <ref>c, 8k steps in Fig. <ref>, <ref>c, <ref>, <ref>), whereas in Fig. <ref>, <ref>, <ref> we trained until early stopping criteria are met (the criteria are already discussed in those section). We do not observe qualitative different between early stopping and non early stopping rounds in our experiments. Hyperparameters. We did not tune all hyperparameters thoroughly for all training schemes in this paper. The main hyperparameter we tuned are the ratios of samples from each of the previous rounds. We tuned the ratio for one verifier size (10^-3) for the SRC reward based on whether training robustness increases across rounds, and used the same ratio for other sizes with SRC and CGC rewards, which could result in other training schemes being worse in robustness and legibility. We separately tuned the ratio for one verifier size (10^-3) for the iterative goodharting training. We used the default ChatGPT post-training hyperparameters such as the optimizer, learning rate, or KL regularization coefficient for prover and verifier training. §.§ Future Work We studied the setting in which correctness labels are available at training time for all problems. A more realistic scenario is one where not all answers have a trusted final correctness label, in which case the verifier would rely on unsupervised signals such as consistency <cit.> and convincingness <cit.>. Unsupervised methods for improving legibility are an important future direction. Another direction is developing a method to turn an illegible solution into a legible one. The CoTs that the model first uses to solve a problem are not legible to a smaller model per se but we could train a translation model that turns the `scratch space' CoT to a well-written proof. This approach might be favorable over directly training the CoT to be legible since exerting any optimization pressure on the CoT may limit prover performance as well as exacerbate deceptive alignment (i.e., the model hiding its own intent) <cit.>. The training schemes we consider are adaptations of single-turn simultaneous games. With an interactive tool such as ChatGPT, the verifier can interact with the prover in more rounds which should increase the sample efficiency vastly based on complexity theory <cit.>. An example of multi-turn prover game is a “mind-changing" game: for binary outcome predictions, the prover maximally tries to convince the verifier that the answer is true / false alternatingly in context. 
This is similar to the market-making proposal <cit.>. We mainly studied legibility to smaller models in this paper. However, legibility to humans is what we eventually care about, so future work should more systematically study what type of assistance better helps humans judge the validity of large model solutions. We expect more extensive human study to be fruitful. §.§ Conclusion We presented a tension between model performance and solution legibility in a concrete setting of judging grade-school math solutions, and proposed one way to address the tradeoff. Our results show promising signs for establishing trust in LLM outputs even if they become more capable than humans in the future. We hope to inspire future work on semi-supervised / unsupervised setups for improving human judge legibility with few or no ground truth labels. § ACKNOWLEDGEMENTS We would like to thank Jeff Wu for valuable technical insights and discussions. We are grateful to Boaz Barak, Jeff Wu, Jenny Nitishinskaya, and John Schulman for helpful feedback on earlier drafts of this paper. We also thank the Platform team for infrastructure help. We thank Mati Roy, Spencer Papay, as well as all contractors who contributed to the human data collection effort. § FORMAL GAME VALUE DEFINITION In this section we prove that a variant of the game value (<ref>) with squared loss has the following two desirable properties: First, conditioned on V being fixed, the honest prover is incentivized to be maximally correct and convincing, and the sneaky prover to be maximally incorrect and convincing; Second, the verifier yields proper scores, i.e., it will report its true belief in the correctness of solutions. f_2({V, p_H}, p_S) = - 𝐄_x, y ∼ P_XY, z ∼ p_H[C(y, z) (1-V(x, z))^2] (completeness) - 𝐄_x, y ∼ P_XY, z ∼ p_S[(1-C(y, z)) V(x, z)^2] (soundness) - 𝐄_x, y ∼ P_XY, z ∼ p_H[(1-C(y, z)) V(x, z)^2] (honest prover fails in a transparent way) - 𝐄_x, y ∼ P_XY, z ∼ p_S[C(y, z) (1-V(x, z))^2] (sneaky prover reasons poorly even when correct) + 𝐄_x, y ∼ P_XY, z ∼ p_H[C(y, z)] (honest should be correct) - 𝐄_x, y ∼ P_XY, z ∼ p_S[C(y, z)] (sneaky should be incorrect). First, we use the shorthand C = C(y, z) and V = V(x, z). For fixed V and S, since H is trying to maximize f_2, we can rewrite as max_H f_2({V, p_H}, p_S) = max_H 𝐄_x, y, z ∼ p_H[C(1-(1-V)^2) + (C-1) V^2] = max_H 𝐄[2C V - V^2]. Since C, V ∈ [0, 1], the maximum is achieved at H such that C(x, H(x)) = 1 and V(x, H(x))=1 for all x ∈𝒳. This proves that for a fixed verifier the honest prover will try to be maximally correct and convincing to the verifier. The sneaky prover will also try to be maximally incorrect and convincing by symmetry. Second, for fixed S and H, the verifier is trying to minimize a sum of two l_2 losses, since max_V f_2({V, p_H}, p_S) = - min_V {𝐄_x, y, z ∼ p_H l_2(V, (x, y, z)) + 𝐄_x, y, z ∼ p_S l_2(V, (x, y, z))} + constant, and the quadratic score is strictly proper, so by affine transformation it is still strictly proper. § PROOF OF THEOREM <REF> (VERIFIER-LEADING STACKELBERG EQUILIBRIUM IS NECESSARY AND SUFFICIENT FOR COMPLETENESS AND SOUNDNESS) We first make a few more mild assumptions. We restrict 𝒳 to be the set of inputs where P_X has non-zero probability mass. We treat 𝒳 as discrete for simplicity, although the same result can be extended to the continuous domain using measure theory. We remind the reader that x ∈𝒳 is the input, v(x, z):𝒳×𝒵→ [0, 1] is the verifier, h(x), s(x) :𝒳→𝒵 are the helpful and sneaky provers, and 𝟙(x, z)∈{0, 1} is the indicator for the correctness of solution z given input x.
We assume the hypothesis class ℋ is the set of all deterministic mappings from 𝒳→𝒵. We also assume that for any x ∈𝒳, there exists at least one correct solution and one incorrect solution in 𝒵. The verifier loss is l_V = 1/2𝔼_x l(v(x, h(x)), 𝟙(x, h(x))) + 1/2𝔼_x l(v(x, s(x)), 𝟙(x, s(x))), where l(v, c) is a loss function minimized if and only if v=c (we assume without loss of generality that l(c, c)=0). The prover reward is R_P = 1/2𝔼_x r(v(x, h(x)), 𝟙(x, h(x))) + 1/2𝔼_x r(v(x, s(x)), 1-𝟙(x, s(x))), where r(v, q) satisfies: ∀ v_1, v_2 ∈ [0, 1]: r(v_1, 1) > r(v_2, 0), and ∀ v_1, v_2 ∈ [0, 1], v_1>v_2: r(v_1, 1) > r(v_2, 1). Define the completeness and soundness properties as follows. Completeness: ∃ h s.t. ∀ x ∈𝒳: 𝟙(x, h(x))=1 and v(x, h(x))=1. Soundness: ∀ s, ∀ x ∈𝒳: 𝟙(x, s(x))=0 ⟹ v(x, s(x))=0. We prove that in the PVG where the verifier's utility is -l_V and the prover's utility is R_P, the existence of a verifier-leading Stackelberg equilibrium (v^*, h^*, s^*) is necessary and sufficient for the completeness and soundness properties to hold for (v^*, h^*). First, assume that the completeness and soundness properties hold for (v^*, h^*). We construct s^* as follows: for any x, let s^*(x) be an arbitrary incorrect solution, i.e., s^*(x) ∈{z | 𝟙(x, z)=0}. By the soundness property, we know that we will have v^*(x, s^*(x)) = 0 and 𝟙(x, s^*(x))=0 for all x. We shall show that (v^*, h^*, s^*) is a verifier-leading Stackelberg equilibrium. First, at v = v^*, the first term of R_P is already maximized since 1/2𝔼_x r(v^*(x, h^*(x)), 𝟙(x, h^*(x))) = 1/2𝔼_x r(1, 1). The second term is also maximized since 1/2𝔼_x r(v^*(x, s^*(x)), 1-𝟙(x, s^*(x))) = 1/2𝔼_x r(0, 1), any other incorrect solution also receives r(0, 1) by soundness, and r(v', 0) < r(0, 1) for any v' ∈ [0, 1]; hence the provers have no incentive to change. For the verifier, its loss is at its minimum since l_V = 1/2𝔼_x l(1, 1) + 1/2𝔼_x l(0, 0) = 0, so this is a Stackelberg equilibrium. Next, assume that (v^*, h^*, s^*) is any verifier-leading Stackelberg equilibrium. We already know that the utility of the verifier is at its maximum, -l_V(v^*) = 0. Suppose the completeness property is not satisfied, which means ∃ x: 𝟙(x, h^*(x))=0 or v^*(x, h^*(x)) ≠ 1. Suppose it is the first case, i.e., ∃ x': 𝟙(x', h^*(x'))=0. Then let h' be identical to h^* except at x', where we pick an arbitrary correct solution h'(x') ∈{z | 𝟙(x', z)=1}. Then we would have increased the first term of R_P since r(v^*(x', h^*(x')), 𝟙(x', h^*(x'))) = r(v^*(x', h^*(x')), 0) < r(v^*(x', h'(x')), 1), which contradicts the Stackelberg equilibrium. Suppose it is the second case, i.e., ∃ x': v^*(x', h^*(x')) ≠ 1 but 𝟙(x', h^*(x')) = 1. Since P_X has non-zero probability mass at x', 𝔼_x[l(v^*(x, h^*(x)), 𝟙(x, h^*(x)))]>0, which contradicts l_V(v^*)=0. Suppose the soundness property is not satisfied, which means ∃ s', x': 𝟙(x', s'(x')) = 0 and v^*(x', s'(x')) > 0. Due to l_V(v^*)=0, s^* has to satisfy ∀ x: v^*(x, s^*(x)) = 𝟙(x, s^*(x)). If s^*(x') is such that 𝟙(x', s^*(x')) = v^*(x', s^*(x')) = 0, we can modify s^* so that it outputs s'(x') at x=x', and then we would have increased R_P due to r(v', 1)>r(0, 1) when v'>0. If s^*(x') is such that 𝟙(x', s^*(x')) = v^*(x', s^*(x')) = 1, switching to s'(x') also increases R_P due to r(v', 1)>r(1, 0) for any v'. Both of those scenarios contradict the assumption that s^* is at a Stackelberg equilibrium. We have proved by contradiction that (v^*, h^*) from any verifier-leading Stackelberg equilibrium satisfies both the completeness and soundness properties.
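As a small illustration of the reward assumptions used in this proof, the following check (not from the paper) verifies them numerically on a grid for a CGC-style choice r(v, q) = q v + (1-q) V_0 with a constant penalty V_0 = -2 below all verifier scores in [0, 1].

def r(v, q, v0=-2.0):
    # CGC-style reward: verifier score when role-aligned, constant penalty otherwise.
    return q * v + (1 - q) * v0

grid = [i / 100 for i in range(101)]  # verifier scores in [0, 1]
assumption1 = all(r(v1, 1) > r(v2, 0) for v1 in grid for v2 in grid)
assumption2 = all(r(v1, 1) > r(v2, 1) for v1 in grid for v2 in grid if v1 > v2)
print(assumption1, assumption2)  # expected output: True True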
§ SAMPLES FROM SEVERAL EXPERIMENTS § SEPARATION MIGHT INHIBIT PROGRESSION OF CHECKABILITY TRAINING A verifier trained to predict correctness of solutions might not in fact learn a generalizable algorithm for predicting correctness, if other attributes of the solutions are highly correlated with correctness and are easier to predict. In our experiments, all training rounds of the verifier, except the very first one, contain correct and incorrect solutions produced by helpful and sneaky provers. Since our training incentivizes helpful prover's solutions to be correct, and sneaky prover's solutions to be incorrect, correctness of solutions in the verifier's training data strongly correlates with which of the two provers produced that solution. If the helpful and sneaky provers produce solutions in different easily identifiable formats, and helpful solutions are mostly correct, while sneaky solutions are mostly incorrect, then training the verifier on the correctness signal will likely generalize to identify distinctive format features of the solutions produced by the helpful and sneaky provers. Figure <ref> presents evidence that in the early rounds of checkability training, helpful and sneaky provers are in fact easily separable based on easy to learn features, but grow less separable as the training progresses. For that figure, we trained a very small discriminator to predict whether a given sample was produced by the helpful or the sneaky prover on a small balanced dataset of helpful and sneaky solutions. We trained separate discriminators for checkpoints of the prover spaced every 400 optimization steps of training. The figure shows ROC AUC of that discriminator. In the first round, ROC AUC reaches close to 1 at the end of the round. For the last round, it stays close to 0.5 for the whole duration of the round. We hypothesize that samples produced by the helpful and sneaky provers after they separate completely in a way that is easily learnable by very small verifiers (i.e. based on mostly surface level statistics of the solutions) are not useful for increasing the robustness of the verifier. As a consequence, when designing particular formulations of a Prover-Verifier Game, one has to pay attention to whether it in fact encourages early separation of the helpful and sneaky players. § WHEN THE VERIFIER IS TOO SMALL OR TOO LARGE, CHECKABILITY TRAINING IS INHIBITED. § ADDITIONAL EVALUATIONS FOR THE GOODHARTING BASELINE § VALIDATION WITH PROVERS WITH NO MATH-SPECIFIC PRETRAINING. § SYSTEM PROMPT § FEW-SHOT PROMPT § DETAILS ON THE HUMAN LEGIBILITY EVALUATION CAMPAIGN Our main human evaluation experiment is as follows. For the training shown in Figure <ref>, we sample the last checkpoint helpful prover solutions to the problems from the GSM test dataset. We also sample from the correctness-only baseline at training step 3k, 7k, as well as the few-shot prover initialization. We collect up to one correct and up to one incorrect sample from each prover and each problem, if possible (around one-third of the problems per round have either all correct or all incorrect solutions among 128 samples). We randomized assignment of those samples to 75 trainers (making sure each person see balanced correct vs. incorrect samples from each checkpoint), ask them to judge the solution correctness within a 45-second time limit per problem. We excluded all problems that yielded only correct or only incorrect solutions. 
We also excluded from the dataset four trainers whose performance was statistically indistinguishable from chance. We collected 15,983 datapoints. Each trainer spent a maximum of 4 hours working on the task to prevent learning and fatigue effects. The instructions are shown in Fig. <ref>, and Fig. <ref> contains an example task. The trainers were compensated based on completion, and were not told whether they judged correctly. We also had an earlier pilot campaign where we showed sneaky solutions as well as samples from an Iterative Goodharting training run to 10 trainers with a 1-minute time limit (instead of 45 seconds) per task. In this pilot run, we showed solutions from earlier rounds to the trainers first (they judged the samples in the order of R0, R1, R2, etc.) across a span of one month, and hence we suspect that the data is affected by learning or fatigue effects. The only plots from this potentially flawed campaign are Fig. <ref>b, <ref>c and <ref>.
http://arxiv.org/abs/2407.13116v1
20240718025412
Arithmetical enhancements of the Kogbetliantz method for the SVD of order two
[ "Vedran Novaković" ]
math.NA
[ "math.NA", "cs.NA", "65F15 (Primary) 15A18, 65Y99 (Secondary)", "G.1.3; G.4" ]
Enhancing the 2× 2 Kogbetliantz SVD]Arithmetical enhancements of the Kogbetliantz method for the SVD of order two Vedran Novakovićvenovako@venovako.eu <https://orcid.org/0000-0003-2964-9674> *independent researcher, Vankina ulica 15, HR-10020 Zagreb, Croatia An enhanced Kogbetliantz method for the singular value decomposition (SVD) of general matrices of order two is proposed. The method consists of three phases: an almost exact prescaling, that can be beneficial to the LAPACK's routine for the SVD of upper triangular 2× 2 matrices as well, a highly relatively accurate triangularization in the absence of underflows, and an alternative procedure for computing the SVD of triangular matrices, that employs the correctly rounded function. A heuristic for improving numerical orthogonality of the left singular vectors is also presented and tested on a wide spectrum of random input matrices. On upper triangular matrices under test, the proposed method, unlike , finds both singular values with high relative accuracy as long as the input elements are within a safe range that is almost as wide as the entire normal range. On general matrices of order two, the method's safe range for which the smaller singular values remain accurate is of about half the width of the normal range. [MSC Classification]65F15, 15A18, 65Y99 [ * July 22, 2024 ================= § INTRODUCTION The singular value decomposition (SVD) of a square matrix of order two is a widely used numerical tool. In LAPACK <cit.> alone, its routine for the SVD of real upper triangular 2× 2 matrices is a building block for the QZ algorithm <cit.> for the generalized eigenvalue problem Ax=λ Bx, with A and B real and square, and for the SVD of real bidiagonal matrices by the implicit QR algorithm <cit.>. Also, the oldest method for the SVD of square matrices that is still in use was developed by Kogbetliantz <cit.>, based on the SVD of order two, and as such is the primary motivation for this research. This work explores how to compute the SVD of a general matrix of order two indirectly, by a careful scaling, a highly relatively accurate triangularization if the matrix indeed contains no zeros, and an alternative triangular SVD method, since the straightforward formulas for general matrices are challenging to be evaluated stably. Let G be a square real matrix of order n. The SVD of G is a decomposition G=UΣ V^T, where U and V are orthogonal[If G is complex, U and V are unitary and G=UΣ V^∗, but this case is only briefly dealt with here.] matrices of order n of the left and the right singular vectors of G, respectively, and Σ=(σ_1^,…,σ_n) is a diagonal matrix of its singular values, such that σ_i^≥σ_j^≥ 0 for all i and j where 1≤ i<j≤ n. In the step k of the Kogbetliantz SVD method, a pivot submatrix of order two (or several of them, not sharing indices with each other, if the method is parallel) is found according to the chosen pivot strategy in the iteration matrix G_k, its SVD is computed, and U_k, V_k, and G_k are updated by the transformation matrices 𝖴_k and/or 𝖵_k, leaving zeros in the off-diagonal positions (j_k,i_k) and (i_k,j_k) of G_k+1, as in G_0=preprocess(G), U_0=preprocess(I), V_0=preprocess(I); G_k+1^=𝖴_k^T G_k^𝖵_k^, U_k+1^=U_k^𝖴_k^, V_k+1^=V_k^𝖵_k^, k≥ 0; convergence(k=K) U≈ U_K^, V≈ V_K^, σ_i^≈ g_ii^(K), 1≤ i≤ n. 
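For orientation, the Kogbetliantz iteration just displayed can be sketched in a few lines. The following is a minimal illustration only, using a library 2×2 SVD as a stand-in for the specialized order-two kernel developed in this paper; it is not the author's implementation and ignores the pivot-strategy, scaling, and accuracy questions that are the subject of this work.

import numpy as np

def kogbetliantz_svd(G, max_sweeps=30, tol=1e-13):
    # Cyclic sweeps over pivot pairs (i, j), i < j; each step embeds the SVD of
    # the 2x2 pivot submatrix into identities and updates G, U, and V.
    G = np.array(G, dtype=float)
    n = G.shape[0]
    U, V = np.eye(n), np.eye(n)
    for _ in range(max_sweeps):
        off = np.linalg.norm(G - np.diag(np.diag(G)))  # off-diagonal norm
        if off <= tol * np.linalg.norm(G):
            break
        for i in range(n - 1):
            for j in range(i + 1, n):
                u2, s2, v2t = np.linalg.svd(G[np.ix_([i, j], [i, j])])
                Uk, Vk = np.eye(n), np.eye(n)
                Uk[np.ix_([i, j], [i, j])] = u2
                Vk[np.ix_([i, j], [i, j])] = v2t.T
                G = Uk.T @ G @ Vk  # zeroes out positions (i, j) and (j, i)
                U, V = U @ Uk, V @ Vk
    # The original matrix is approximately U @ np.diag(sigma) @ V.T, with the
    # (unsorted) singular values on the diagonal of the converged G.
    return U, np.diag(G), V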
The left and the right singular vectors of the pivot matrix are embedded into identities to get 𝖴_k and 𝖵_k, respectively, with the index mapping from matrices of order two to 𝖴_k and 𝖵_k being (1,1)↦(i_k,i_k), (2,1)↦(j_k,i_k), (1,2)↦(i_k,j_k), (2,2)↦(j_k,j_k), where 1≤ i_k<j_k≤ n are the pivot indices. The process is repeated until convergence, i.e., until for some k=K the off-diagonal norm of G_K falls below a certain threshold. If G has m rows and n columns, m>n, it should be preprocessed <cit.> to a square matrix G_0^ by a factorization of the URV <cit.> type (e.g., the QR factorization with column pivoting <cit.>). Then, U_0^TGV_0^=G_0^, where G_0^ is triangular of order n and U_0 is orthogonal of order m. If m<n, then the SVD of G^T can be computed instead. In all iterations it would be beneficial to have the pivot matrix G_k triangular, since its SVD can be computed with high relative accuracy under mild assumptions <cit.>. This is however not possible with time consuming but simple, quadratically convergent <cit.> pivot strategy that chooses the pivot with the largest off-diagonal norm √(|g_j_ki_k^(k)|^2+|g_i_kj_k^(k)|^2), but is possible, if G_0 is triangular, with certain sequential cyclic (i.e., periodic) strategies <cit.> like the row-cyclic and column-cyclic, and even with some parallel ones, after further preprocessing G_0 into a suitable “butterfly” form <cit.>. Although the row-cyclic and column-cyclic strategies ensure global <cit.> and asymptotically quadratic <cit.> convergence of the method, as well as its high relative accuracy <cit.>, the method's sequential variants remain slow on modern hardware, while preprocessing G to G_0 (in the butterfly form or not) can only be partially parallelized. This work is a part of a broader effort <cit.> to investigate if a fast and accurate (in practice if not in theory) variant of the Kogbetliantz method could be developed, that would be entirely parallel and would function on general square matrices without expensive preprocessing, with full pivots G_k^[ℓ], 1≤ℓ≤𝗇≤⌊ n/2⌋, that are independently diagonalized, and with 𝗇 ensuing concurrent updates of U_k, 𝗇 of V_k, and 𝗇 from each of the sides of G_k in a parallel step. This way the employed parallel pivot strategy does not have to be cyclic. A promising candidate is the dynamic ordering <cit.>. The proposed Kogbetliantz SVD of order two supports a wider exponent range of the elements of a triangular input matrix for which both singular values are computed with high relative accuracy than , although the latter is slightly more accurate when comparison is possible. Matrices of the singular vectors obtained by the proposed method are noticeably more numerically orthogonal. With respect to <cit.> and a general matrix G of order two, the following enhancements have been implemented: * The structure of G is exploited to the utmost extent, so the triangularization and a stronger scaling is employed only when G has no zeros, thus preserving accuracy. * The triangularization of G by a special URV factorization is tweaked so that high relative accuracy of each computed element is provable when no underflow occurs. * The SVD procedure for triangular matrices utilizes the latest advances in computing the correctly rounded functions, so the pertinent formulas from <cit.> are altered. 
* The left singular vectors are computed by a heuristic when the triangularization is involved, by composing the two plane rotations—one from the URV factorization, and the other from the triangular SVD—into one without the matrix multiplication. High relative accuracy of the singular values of G is observed, but not proved, when the range of the elements of G is narrower than about half of the entire normal range. =-1 This paper is organized as follows. In Section <ref> the floating-point environment and the required operations are described, and some auxiliary results regarding them are proved. Section <ref> presents the proposed SVD method. In Section <ref> the numerical testing results are shown. Section <ref> concludes the paper with the comments on future work. § FLOATING-POINT CONSIDERATIONS Let x be a real, infinite, or undefined (Not-a-Number) value: x∈ℝ∪{-∞,+∞,NaN}. Its floating-point representation is denoted by (x) and is obtained by rounding x to a value of the chosen standard floating-point datatype using the rounding mode in effect, that is here assumed to be to nearest (ties to even). If the result is normal, (x)=x(1+ϵ), where |ϵ|≤ε=2^-p and p is the number of bits in the significand of a floating-point value. In the LAPACK's terms, ε_=, where = or . Thus, p_=24 or 53, and ε_=2^-24 or 2^-53 for single () or double () precision, respectively. The gradual underflow mode, allowing subnormal inputs and outputs, has to be enabled (e.g., on Intel-compatible architectures the Denormals-Are-Zero and Flush-To-Zero modes have to be turned off). Trapping on floating-point exceptions has to be disabled (what is the default non-stop handling from <cit.>). Possible discretization errors in input data are ignored. Input matrices in floating-point are thus considered exact, and are, for simplicity, required to have finite elements. The Fused Multiply-and-Add (FMA) function, (a,b,c)=(a· b+c), is required. Conceptually, the exact value of a· b+c is correctly rounded. Also, the hypotenuse function, (a,b)=(√(a^2+b^2)), is assumed to be correctly rounded (unless stated otherwise), as recommended by the standard <cit.>, but unlike[<https://members.loria.fr/PZimmermann/papers/accuracy.pdf>] many current implementations of the routines and . Such a function (see also <cit.>) never overflows when the rounded result should be finite, it is zero if and only if |a|=|b|=0, and is symmetric and monotone. The CORE-MATH library <cit.> provides an open-source implementation[<https://core-math.gitlabpages.inria.fr>] of some of the optional correctly rounded single and double precision mathematical C functions (e.g., and ). Radix two is assumed for floating-point values. Scaling of a value x by 2^s where s∈ℤ is exact if the result is normal. Only non-normal results can lose precision. Let, for x=± 0, e_x=0 and f_x=0, and for a finite non-zero x let the exponent be e_x=⌊|x|⌋ and the “mantissa” 1≤|f_x|<2, such that x=(2^e_xf_x). Also, let f_x=x for x=±∞, while e_x=0. Note that f_x is normal even for subnormal x. Keep in mind that the routine represents a finite non-zero x with e_x'=e_x^+1 and f_x'=f_x^/2. Let μ be the smallest and ν the largest positive finite normal value. Then, in the notation just introduced, e_μ=μ=-126 or -1022, and e_ν=⌊ν⌋=127 or 1023, for single or double precision. Lemma <ref> can now be stated using this notation. Assume that e_ν-p≥ 1, with rounding to nearest. Then, (ν+1)=ν=(ν,1). By the assumption, ν≥ 2^p+1+2^p+⋯+2^2, since ν=2^e_ν(1.1⋯ 1)_2, with p ones. 
The bit in the last place thus represents a value of at least four. Adding one to ν would require rounding of the exact value ν+1=2^e_ν·(1.1⋯ 10⋯ 01)_2 to p bits of significand. The number of zeros is e_ν-p≥ 1. Rounding to nearest in such a case is equivalent to truncating the trailing e_ν-p+1 bits, starting from the leftmost zero, giving the result (ν+1)=2^e_ν(1.1⋯ 1)_2=ν. This proves the first equality in (<ref>). For the second equality in (<ref>), note that (ν+1)^2=ν^2+1+2ν≥ν^2+1≥ν^2 since ν>0. By taking the square roots, it follows that ν+1≥√(ν^2+1)≥ν, and therefore ν=(ν+1)≥(√(ν^2+1))=(ν,1)≥(ν,0)=(ν)=ν, since and are monotone operations in all arguments. The claims of Lemma <ref> and its following corollaries were used and their proofs partially sketched in <cit.>, e.g. They are expanded and clarified here for completeness. An underlined expression denotes a computed floating-point approximation of the exact value of that expression. Given tan(2ϕ), for 0≤|ϕ|≤π/4, tanϕ and tanϕ are tanϕ=tan(2ϕ)/1+√(tan^2(2ϕ)+1), tanϕ=((tan(2ϕ))/(1+((tan(2ϕ)),1))), if tan(2ϕ) and (tan(2ϕ)) are finite, or (tan(2ϕ)) and ((tan(2ϕ))) otherwise. Let tan(2ϕ) be given, such that (tan(2ϕ)) is finite. Then, under the assumptions of Lemma <ref>, for tanϕ from (<ref>) holds 0≤|tanϕ|≤ 1. For |(tan(2ϕ))|=ν, due to Lemma <ref>, ((tan(2ϕ)),1)=ν, and so the denominator in (<ref>) is (1+ν)=ν. Note that the numerator is always at most as large in magnitude as the denominator. Thus, 0≤|tanϕ|≤ 1, what had to be proven. Let tanϕ be given, for |ϕ|≤π/2. Then, ϕ=1/cosϕ can be approximated as ϕ=((tanϕ),1). If tanϕ=(tanϕ), then ϕ=(ϕ). When the assumptions of Lemma <ref> hold and (tanϕ) is finite, so is ϕ. The approximation relation follows from the definition of and from ϕ=√(tan^2ϕ+1), while its finiteness for a finite (tanϕ) follows from Lemma <ref>, since |(tanϕ)|≤ν implies ((tanϕ),1)≤ν. For any w∈ℝ, let 𝐰=(e_w,f_w)=2^e_wf_w and (𝐰)=(2^e_wf_w)≈ w. Even w such that |w|>ν or 0<|w|<μ̌, where μ̌ is the smallest positive non-zero floating-point value, can be represented with a finite e_w and a normalized f_w, though (𝐰) is not finite or non-zero, respectively. The closest double precision approximation of 𝐰 is w=, with a possible underflow or overflow, and similarly for single precision (using ). A similar definition could be made with e_w' and f_w' instead. An overflow-avoiding addition and an underflow-avoiding subtraction of positive finite values x and y, resulting in such exponent-“mantissa” pairs, can be defined as x⊕ y= (e_z,f_z), if z=(x+y)≤ν, (e_z+1,f_z), otherwise, with z=(2^-1x+2^-1y), and, assuming x y (for x=y let x⊖ y=(0,0) directly), x⊖ y= (e_z,f_z), if |z|≥μ, with z=(x-y), (e_z-c,f_z), otherwise, with z=(2^c x - 2^c y), where c=e_μ+p-1-min{e_x,e_y}. In (<ref>), z≤ν in both cases, since (ν/2+ν/2)=ν. In the second case in (<ref>), c>0 and μ≤|z|≤ν, if e_ν≥ e_μ+2p-1. Assume x>y (else, swap x and y, and change the sign of z). Then, e_y≤ e_x and y=2^e_y(1.𝚢_-1⋯𝚢_1-p)_2. The rightmost bit 𝚢_1-p multiplies the value w=2^e_y+1-p. If e_x-e_y≥ p+1 then x is normal and, due to rounding to nearest, (x-y)=x. Therefore, assume that e_x-e_y≤ p. If w≥μ=2^e_μ, then x-y≥μ as well, since x-y≥ w. Thus, (x-y)≥ w≥μ, so assume that w<μ, i.e., e_y+1-p<e_μ. It now suffices to upscale x and y to x”=2^c x and y”=2^c y, for some c∈ℕ, to ensure e_y”=e_y^+c≥ e_μ^+p-1. Any c≥ e_μ^+p-1-e_y^ that will not overflow x” will do, so the smallest one is chosen. Note that e_x”=e_x^+c=e_x^+e_μ^+p-e_y^-1. 
Since e_x^-e_y^≤ p, by the Lemma's assumption it holds e_x”≤ e_μ^+2p-1≤ e_ν. Several arithmetic operations on (e,f)-pairs can be defined (see also <cit.>), such as |𝐱|=(e_x,|f_x|), -𝐱=(e_x,-f_x), 2^ς⊙𝐱=(ς+e_x,f_x), 1⊘𝐲=(e_z-e_y,f_z), z=(1/f_y), which are unary operations. The binary multiplication and division are defined as 𝐱⊙𝐲=(e_x+e_y+e_z,f_z), z=(f_x· f_y), 𝐱⊘𝐲=(e_x-e_y+e_z,f_z), z=(f_x/f_y), and the relation ≺, that compares the represented values in the < sense, is given as 𝐱≺𝐲 ((f_x)<(f_y)) ∨ (((f_x)=(f_y))∧((e_x<e_y)∨((e_x=e_y)∧(f_x<f_y)))). Let, for any G of order n, where is the smallest representable integer, e_G = max_1≤ i,j≤ ne_ij, e_ij=max{⌊ g_ij⌋,}. A prescaling of G as G'=2^s G, that avoids overflows, and underflows if possible, in the course of computing the SVD of G' (and thus of G≈ 2^-sG'), is defined by s such that e_G= s=0, e_G> s=e_ν-e_G-𝔰, 𝔰≥ 0, where 𝔰=0 for n=1. For n=2, 𝔰 is chosen such that certain intermediate results while computing the SVD of G' cannot overflow, as explained in <cit.> and Section <ref>, but the final singular values are represented in the (e,f) form, and are immune from overflow and underflow as long as they are not converted to simple floating-point values. If s≥ 0, the result of such a prescaling is exact. Otherwise, some elements of G' might be computed inexactly due to underflow. If for the elements of G holds g_ij 0μ≤|g_ij|≤ν/2^𝔰, 1≤ i,j≤ n, then s≥ 0, and g_ij'=0 or μ≤|g_ij'|≤ν, i.e., the elements of G' are zero or normal. § THE SVD OF GENERAL MATRICES OF ORDER TWO This section presents a Kogbetliantz-like procedure for computing the singular values of G when n=2, and the matrices of the left (U) and the right (V) singular vectors. In general, U is a product of permutations (denoted by P with subscripts and including I), sign matrices (denoted by S with subscripts) with each diagonal element being either 1 or -1 while the rest are zeros, and plane rotations by the angles ϑ and φ. If U_ϑ is not generated, the notation changes from U_φ to U_ϕ. Likewise, V is a product of permutations, a sign matrix, and a plane rotation by the angle ψ, where U_ϑ=[ cosϑ -sinϑ; sinϑ cosϑ ], U_φ=[ cosφ -sinφ; sinφ cosφ ], V_ψ=[ cosψ -sinψ; sinψ cosψ ]. Depending on its pattern of zeros, a matrix of order two falls into one of the 16 types 𝔱 shown in (<ref>), where ∘=0 and ∙ 0. Some types are permutationally equivalent to others, what is denoted by 𝔱_1≅𝔱_2, and means that a 𝔱_1-matrix can be pre-multiplied and/or post-multiplied by permutations to be transformed into a 𝔱_2-matrix, and vice versa, keeping the number of zeros intact. Each 𝔱 0 has its associated scale type 𝔰. 0 [ ∘ ∘; ∘ ∘ ] 1 [ ∙ ∘; ∘ ∘ ] 2 [ ∘ ∘; ∙ ∘ ] 3 [ ∙ ∘; ∙ ∘ ] 4 [ ∘ ∙; ∘ ∘ ] 5 [ ∙ ∙; ∘ ∘ ] 6 [ ∘ ∙; ∙ ∘ ] 7 [ ∙ ∙; ∙ ∘ ] [ ∘ ∘; ∘ ∙ ] 8[ ∙ ∘; ∘ ∙ ] 9[ ∘ ∘; ∙ ∙ ] 10[ ∙ ∘; ∙ ∙ ] 11[ ∘ ∙; ∘ ∙ ] 12[ ∙ ∙; ∘ ∙ ] 13[ ∘ ∙; ∙ ∙ ] 14[ ∙ ∙; ∙ ∙ ] 15 𝔱 0, 1, 2, 4, 6, 8≅9 12≅3; 10≅5 7, 11, 14≅13 15 𝔰 0 1 1 2 For 𝔰=0, there is one equivalence class of matrix types, represented by 𝔱=9. For 𝔰=1, there are three classes, represented by 𝔱=3, 𝔱=5, and 𝔱=13, while for 𝔰=2 there is one class, 𝔱=15. The SVD computation for the first three classes is straightforward, while for the fourth and the fifth class is more involved. A matrix of any type, except 𝔱=15, can be permuted into an upper triangular one. If a matrix so obtained is well scaled, its SVD can alternatively be computed by . 
However, does not accept general matrices (i.e., 𝔱=15), unlike the proposed method, which is a modification of <cit.> when J=I, and consists of the following three phases: * For G determine 𝔱, 𝔰, and s to obtain G'. Handle the simple cases of 𝔱 separately. * If 𝔱≅ 13 or 𝔱=15, factorize G' as U_+^RV_+^, such that U_+^ and V_+^ are orthogonal, and R is upper triangular, with min{r_11^,r_12^,r_22^}>0 and all r_ij finite, 1≤ i≤ j≤ 2. * From the SVD of R assemble the SVD of G'. Optionally backscale Σ' by 2^-s. The phases 1, 2, and 3 are described in Sections <ref>, <ref>, and <ref>, respectively. §.§ Prescaling of the matrix and the simple cases (𝔱≅ 3,5,9) Matrices with 𝔱≅ 9 do not have to be scaled, but only permuted into the 𝔱=0, 𝔱=1, or 𝔱=9 (where the first diagonal element is not smaller by magnitude than the second one) form, according to their number of non-zeros, with at most one permutation from the left and at most one from the right hand side. Then, the rows of P_U^T G P_V^ are multiplied by the signs of their diagonal elements, to obtain σ_1=|g_11| and σ_2=|g_22|, while U=P_US and V=P_V. The error-free SVD computation is thus completed. Note that the signs might have been taken out of the columns instead of the rows, and the sign matrix S would have then be incorporated into V instead. The structure of the left and the right singular vector matrices is therefore not uniquely determined. Be aware that 𝔱 determined before the prescaling (to compute 𝔰 and s) may differ from 𝔱' that would be found afterwards. If, e.g., 𝔱 9 and G contains, among others, ν and μ̌ as elements, the element(s) μ̌ will vanish after the prescaling since s<0 (from (<ref>), due to 𝔰≥ 1), so 𝔱'<𝔱 and the zero pattern of G' has to be re-examined. =-1 A 𝔱≅ 3 or 𝔱≅ 5 matrix is scaled by 2^s. The columns (resp., rows) of a 𝔱'=12 (resp., 𝔱'=10) matrix are swapped, to bring it to the 𝔱”=3 (resp., 𝔱”=5) form. Then, the non-zero elements are made positive by multiplying the rows (resp., columns) by their signs. Next, the rows (resp., columns) are swapped if required to make the upper left element largest by magnitude. The sign-extracting and magnitude-ordering operations may be swapped or combined. The resulting matrix G” undergoes the QR (resp., RQ) factorization, by a single Givens rotation U_θ^T (resp., V_θ^), determined by tanθ (consequently, by cosθ and sinθ) as in (<ref>), with θ substituted for ϑ (resp., ψ), where 𝔱”=3tanθ=g_21”/g_11”, 𝔱”=5tanθ=g_12”/g_11”. =-1 By construction, 0<tanθ≤ 1 and 0≤tanθ≤ 1. The upper left element is not transformed, but explicitly set to hold the Frobenius norm of the whole non-zero column (resp., row), as g_11”'=(g_11”,g_21”) (resp., g_11”'=(g_11”,g_12”)), while the other non-zero element is zeroed out. Thus, to avoid overflow of g_11”' it is sufficient to ensure that g_11”≪ν/√(2), what 𝔰=1 achieves. The SVD is given by U=S_U^P_U^U_θ^ and V=P_V^ for 𝔱'≅ 3, and by U=P_U^ and V=S_V^P_V^V_θ^ for 𝔱'≅ 5. The scaled singular values are σ_1'=g_11”' and σ_2'=0 in both cases, and σ_1' cannot overflow (σ_1^=2^-sσ_1' can). If no inexact underflow occurs while scaling G to G', then σ_1'=σ_1'(1+ϵ_1'), where |ϵ_1'|≤ε. With the same assumption, tanθ≥μ implies tanθ=tanθ(1+ϵ_θ^), where |ϵ_θ^|≤ε. The resulting Givens rotation can be represented and applied as one of U_θ^T=[ 1 tanθ; -tanθ 1 ]/θ, V_θ^=[ 1 -tanθ; tanθ 1 ]/θ, what avoids computing cosθ and sinθ explicitly. Lemma <ref> bounds the error in θ. Let θ from (<ref>) be computed as (tanθ,1) for tanθ≥μ. 
Then, θ=δ_θ'θ, √(((1-ε)^2+1)/2)(1-ε)≤δ_θ'≤√(((1+ε)^2+1)/2)(1+ε). Let δ_θ^=(1+ϵ_θ^). Then tanθ≥μ implies (tanθ)^2=δ_θ^2tan^2θ, and 1-ε=δ_θ^-≤δ_θ^≤δ_θ^+=1+ε. Express (tanθ)^2+1=δ_θ^2tan^2θ+1 as (tan^2θ+1)(1+ϵ_θ'), from which it follows ϵ_θ'=tan^2θ/tan^2θ+1(δ_θ^2-1), 0<tan^2θ/tan^2θ+1≤1/2. By adding unity to both sides of the equation for ϵ_θ' and taking the maximal value of the first factor on its right hand side, while accounting for the bounds of δ_θ^, it holds ((δ_θ^-)^2+1)/2≤1+ϵ_θ'≤((δ_θ^+)^2+1)/2. Since θ=(tanθ,1)=√((tanθ)^2+1)(1+ϵ_√()^)=√((tan^2θ+1)(1+ϵ_θ'))(1+ϵ_√()^), where |ϵ_√()|≤ε, factorizing the last square root into a product of square roots gives θ=θ√(1+ϵ_θ')(1+ϵ_√()^). The proof is concluded by denoting the error factor on the right hand side by δ_θ'. This proof and the following one use several techniques from <cit.>. Due to the structure of a matrix that U_θ^T or V_θ^ is applied to, containing in each row and column one zero and one ± 1, it follows that U or V have in each row and column ±cosθ and ±sinθ, computed implicitly, for which Lemma <ref> gives error bounds. Let cosθ and sinθ result from applying (<ref>) with tanθ≥μ. Then, cosθ=δ_θ”cosθ, δ_θ”=(1+ϵ_/^)/δ_θ', sinθ=δ_θ”'sinθ, δ_θ”'=(1+ϵ_/')δ_θ^/δ_θ', where max{|ϵ_/^|,|ϵ_/'|}≤ε and δ_θ” and δ_θ”' can be bound below and above in the terms of ε only. Let δ_θ^'- and δ_θ^'+ be the lower and the upper bounds for δ_θ' from (<ref>). Then, (1-ε)/δ_θ^'+≤δ_θ”≤(1+ε)/δ_θ^'-, (1-ε)δ_θ^-/δ_θ^'+≤δ_θ”'≤(1+ε)δ_θ^+/δ_θ^'-. The claims follow from (<ref>), cosθ=(1/θ), and sinθ=(tanθ/θ). If, in (<ref>), tanθ<μ, then θ=1 since 1≤θ≤(√(1+μ^2))≤(1+μ)=1, and the relative error in θ is below ε for any standard floating-point datatype. Thus, even though tanθ can be relatively inaccurate, (<ref>) holds for all θ. Also, cosθ is always relatively accurate, but sinθ might not be if tanθ<μ, when sinθ=tanθ. §.§ A (pivoted) URV factorization of order two (𝔱'≅ 13,15) If, after the prescaling, 𝔱'≅ 13 or 𝔱'=15, G' is transformed into an upper triangular matrix R with all elements non-negative, i.e., a special URV factorization of G' is computed. Section <ref> deals with the 𝔱'≅ 13, and Section <ref> with the 𝔱'=15 case. §.§.§ An error-free transformation from 𝔱'≅ 13 to 𝔱”=13 form A triangular or anti-triangular matrix is first permuted into an upper triangular one, G”. Its first row is then multiplied by the sign of g_11”. This might change the sign of g_12”. The second column is multiplied by its new sign, what might change the sign of g_22”. Its new sign then multiplies the second row, what completes the construction of R. The transformations U_+^T and V_+^, such that R=U_+^T G' V_+^, can be expressed as U_+^=P_U^S_11^S_22^=U_+^ and V_+^=P_V^S_12^=V_+^, and are exact, as well as R if G' is. §.§.§ A fully pivoted URV when 𝔱'=15 In all previous cases, a sequence of error-free transformations would bring G' into an upper triangular G”, of which can compute the SVD. However, a matrix without zeros either has to be preprocessed into such a form, in the spirit of <cit.>, or its SVD has to computed by more complicated and numerically less stable formulas, that follow from the annihilation requirement for the off-diagonal matrix elements as tan(2ϕ)/2=g_11^g_21^+g_12^g_22^/g_11^2+g_12^2-g_21^2-g_22^2, tanψ=g_12^+g_22^tanϕ/g_11^+g_21^tanϕ. A sketched derivation of (<ref>) can be found in Section 1 of the supplementary material. Opting for the first approach, compute the Frobenius norms of the columns of G', as w_1^ and w_2^. 
Due to the prescaling, w_1^=(g_11',g_21') and w_2^=(g_12',g_22') cannot overflow. If w_1^<w_2^, swap the columns and their norms (so that w_1' would be the norm of the new first column of G', and w_2' the norm of the second one). Multiply each row by the sign of its new first element to get G”. Swap the rows if g_11”<g_21” to get the fully pivoted G”', while the norms remain unchanged. Note that g_11”'≥ g_21”'>0. Now the QR factorization of G”' is computed as U_ϑ^TG”'=R”. Then, r_21”=0, and r_11”=w_1', tanϑ=g_21”'/g_11”', r_12”=g_12”'+g_22”'tanϑ/ϑ, r_22”=g_22”'-g_12”'tanϑ/ϑ. All properties of the functions of θ from Section <ref> also hold for the functions of ϑ. The prescaling of G causes the elements of R” to be at most ν/(2√(2)) in magnitude. If r_12”=0, then 𝔱”=9, and if r_22”=0, then 𝔱”=5. In either case R” is processed further as in Section <ref>, while accounting for the already applied transformations. Else, the second column of R” is multiplied by the sign of r_12” to obtain R'. The second row of R' is multiplied by the sign of r_22' to finalize R, in which the upper triangle is positive. It is evident how to construct U_+^ and V_+^=V_+^ such that R=U_+^T G' V_+^, since U_+^T=S_22^TU_ϑ^TP_U^TS_1^T, S_1^=((g̃_11'),(g̃_21')), V_+^=P_V^S_12^. However, U_+^T is not explicitly formed, as explained in Section <ref>. Now 𝔱”=13 for R. Lemma <ref> bounds the relative errors in (some of) the elements of R, when possible. Assume that no inexact underflow occurs at any stage of the above computation, leading from G to R. Then, r_11=δ_0 r_11, where 1-ε=δ_0^-≤δ_0^≤δ_0^+=1+ε. If in tanϑ=δ_ϑ^tanϑ holds δ_ϑ^=1, then r_12^=δ_1'r_12^ and r_22^=δ_1”r_22^, where (1-ε)^2/√(((1+ε)^2+1)/2)(1+ε)=δ_1^-≤δ_1',δ_1”≤δ_1^+=(1+ε)^2/√(((1-ε)^2+1)/2)(1-ε). Else, if g_12”' and g_22”' are of the same sign, then r_12^=δ_2'r_12^, and if they are of the opposite signs, then r_22^=δ_2”r_22^, where, with 1-ε=δ_ϑ^-≤δ_ϑ^≤δ_ϑ^+=1+ε and δ_ϑ^ 1, (1-ε)^3/√(((1+ε)^2+1)/2)(1+ε)=δ_2^-<δ_2',δ_2”<δ_2^+=(1+ε)^3/√(((1-ε)^2+1)/2)(1-ε). Eq. (<ref>) follows from the correct rounding of in the computation of w_1'. To prove (<ref>) and (<ref>), solve x± yδ_ϑtanϑ=(x± ytanϑ)(1+ϵ_±) for ϵ_± with xy 0. After expanding and rearranging the terms, it follows that ϵ_±=± ytanϑ/x± ytanϑ(δ_ϑ-1), δ_ϑ=1+ϵ_ϑ, |ϵ_ϑ|≤ε. If x and y are of the same sign and the addition operation is chosen, the first factor on the first right hand side in (<ref>) is above zero and below unity, so |ϵ_±|<|δ_ϑ-1| and δ_ϑ^-=δ_±^-<δ_±^<δ_±^+=δ_ϑ^+, δ_±^=1+ϵ_±^. The same holds if x and y are of the opposite signs and the subtraction is taken instead. Specifically, from (<ref>), the bound (<ref>) holds for x'=g_12”' and y'=g_22”' of the same sign, with ±'=+, and for x”=g_22”' and y”=g_12”' of the opposite signs, with ±”=-. With |ϵ_|≤ε and δ_=1+ϵ_, from the definition of δ_± it follows that (± y,tanϑ,x)=(x± ytanϑ)δ_=(x± ytanϑ)δ_δ_±. Due to (<ref>) and (<ref>), with ϑ instead of θ and with δ_/”=(1+ϵ_/”) where |ϵ_/”|≤ε, it holds ((±^!y^!,tanϑ,x^!)/ϑ)=x^!±^!y^!tanϑ/ϑ·δ_^δ_/”/δ_ϑ'·δ_±^=r_?2”·δ_1^!·δ_±^=r_?2”δ_2^!, where !=' for ?=1 and !=” for ?=2. Now (<ref>) follows from bounding δ_1^!=δ_^δ_/”/δ_ϑ' below and above using (<ref>). Since δ_2^!=δ_1^!δ_±^ in (<ref>), (<ref>) is a consequence of (<ref>). Lemma <ref> thus shows that high relative accuracy of all elements of R is achieved if no underflow has occurred at any stage, and tanϑ has been computed exactly. If tanϑ is inexact, high relative accuracy is guaranteed for r_11 and exactly one of r_12 and r_22. 
If it is also desired for the remaining element, transformed by an essential subtraction and thus amenable to cancellation, one possibility is to compute it by an expression equivalent to (<ref>), but with tanϑ expanded to its definition g_21”'/g_11”', as in r_12”=(g_12”'g_11”'+g_22”'g_21”')/(g_11”'ϑ), r_22”=(g_22”'g_11”'-g_12”'g_21”')/(g_11”'ϑ), after prescaling the numerator and denominator in (<ref>) by the largest power-of-two that avoids overflow of both of them. A floating-point primitive of the form a· b± c· d with a single rounding <cit.> can give the correctly rounded numerator but it has to be emulated in software on most platforms at present <cit.>. For high relative accuracy of r_12” or r_22” the absence of inexact underflows is required, except in the prescaling. Alternatively, the numerators in (<ref>) can be calculated by the Kahan's algorithm for determinants of order two <cit.>, but an overflow-avoiding prescaling is still necessary. It is thus easier to resort to computing (<ref>) in a wider and more precise datatype as r_12” =((((g_12”'g_11”'+g_22”'g_21”')/g_11”'))/ϑ), r_22” =((((g_22”'g_11”'-g_12”'g_21”')/g_11”'))/ϑ). First, in (<ref>) it is assumed that no underflow has occurred so far, so G”'=G”'. Second, a product of two floating-point values requires 2p bits of the significand to be represented exactly if the factors' significands are encoded with p bits each. Thus, for single precision, 48 bits are needed, what is less than 53 bits available in double precision. Similarly, for double precision, 106 bits are needed, what is less than 113 bits of a quadruple precision significand. Therefore, every product in (<ref>) is exact if computed using a more precise standard datatype . The characteristic values of are the underflow and overflow thresholds μ_ and ν_, and ε_=2^-p_. Third, let (x) round an infinitely precise result of x to the nearest value in . Since all addends in (<ref>) are exact and way above the underflow threshold μ_ by magnitude, the rounded result of the addition or the subtraction is relatively accurate, with the error factor (1+ϵ_±'), |ϵ_±'|≤ε_^. This holds even if the result is zero, but since the transformed matrix would then be processed according to its new structure, assume that the result is normal in . The ensuing division cannot overflow nor underflow in . Now the quotient rounded by is rounded again, by , back to the working datatype. This operation can underflow, as well as the following division by ϑ. If they do not, the resulting transformed element is relatively accurate. This outlines the proof of Theorem <ref>. Assume that no underflow occurs at any stage of the computation leading from G to R. Then, r_11=δ_0 r_11, where for δ_0 holds (<ref>). If tanϑ is exact, then r_12^=δ_1'r_12^ and r_22^=δ_1”r_22^, where δ_1' and δ_1” are as in (<ref>). Else, if g_12”' and g_22”' are of the same sign, then r_12^=δ_2'r_12^ and r_22^=δ_3'r_22^. If they are of the opposite signs, then r_22^=δ_2”r_22^ and r_12^=δ_3”r_12^, where δ_2' and δ_2” are as in (<ref>), while δ_3' and δ_3” come from evaluating their corresponding matrix elements as in (<ref>) and are bounded as (1-ε_^)^2(1-ε)^2/√(((1+ε)^2+1)/2)(1+ε)=δ_3^-≤δ_3',δ_3”≤δ_3^+=(1+ε_^)^2(1+ε)^2/√(((1-ε)^2+1)/2)(1-ε). It remains to prove (<ref>), since the other relations follow from Lemma <ref>. Every element of G”' is not above ν/4 and not below μ in magnitude. Therefore, a difference (in essence) of their exact products cannot exceed ν^2/16<ν_^ in magnitude in a standard . 
At least one element is above ν/8 in magnitude due to the prescaling, so the said difference, if not exactly zero, is above εμν/8≫μ_^ in magnitude. Thus, the quotient of this difference and g_11”' is above εμ(1-ε_^)/2>μ_^ and, due to the prescaling and pivoting, not above ν/2≪ν_ in magnitude. For (<ref>) it therefore holds ⋆_+^ =(⋆_+^)=⋆_+^(1+ϵ_+'), ⋆_+^=g_12”'g_11”'+g_22”'g_21”', |ϵ_+'|≤ε_^, ⋆_-^ =(⋆_-^)=⋆_-^(1+ϵ_-'), ⋆_-^=g_22”'g_11”'-g_12”'g_21”', |ϵ_-'|≤ε_^, (⋆_±^/g_11”')=(⋆_±^/g_11”')(1+ϵ_±')(1+ϵ_/”'), |ϵ_/”'|≤ε_^. The quotient (⋆_±^/g_11”'), converted to the working precision with ((⋆_±^/g_11”')), is possibly not correctly rounded from the value of ⋆_±^/g_11”' due to its double rounding. Using the assumption that no underflow in the working precision occurs, it follows that ((⋆_±^/g_11”'))=(⋆_±^/g_11”')(1+ϵ_±')(1+ϵ_/”')(1+ϵ_^)=(⋆_±^/g_11”')δ_±', |ϵ_^|≤ε, with δ_±'=(1+ϵ_±')(1+ϵ_/”')(1+ϵ_^). For the final division by ϑ, due to (<ref>), holds (((⋆_±^/g_11”'))/ϑ)=⋆_±^/g_11”'ϑδ_±'/δ_ϑ'δ_/””, δ_/””=(1+ϵ_/””), |ϵ_/””|≤ε. By fixing the sign in the ± subscript in (<ref>), (<ref>), and (<ref>), let δ_3'=δ_-'δ_/””/δ_ϑ' and δ_3”=δ_+'δ_/””/δ_ϑ'. Now bound δ_3' and δ_3” below by δ_3^- and above by δ_3^+, by combining the appropriate lower and upper bounds for δ_±', δ_ϑ' from Lemma <ref>, and δ_/””, where (1-ε_^)^2(1-ε)·(1-ε)/√(((1+ε)^2+1)/2)(1+ε)=δ_3^-≤δ_3',δ_3”≤δ_3^+=(1+ε_^)^2(1+ε)·(1+ε)/√(((1-ε)^2+1)/2)(1-ε), what proves (<ref>), by minimizing the numerator and maximizing the denominator to minimize the expression for δ_3' and δ_3”, and vice versa, as in the proof of Lemma <ref>. =-1 Therefore, it is possible to compute R with high relative accuracy in the absence of underflows, in the working precision only, but it is easier to employ two multiplications, one addition or subtraction, and one division in a wider, more precise datatype. Table <ref> shows by how many εs the lower and the upper bounds for δ_1, δ_2, and δ_3 from (<ref>), (<ref>), and (<ref>), respectively, differ from unity. The quantities in the table's header were computed symbolically as algebraic expressions by substituting ε and ε_ with their defining powers of two, then approximated numerically with p_ decimal digits, and rounded upwards to nine digits after the decimal point, by a Wolfram Language script[The file in the code supplement, executed by the Wolfram Engine 14.0.0 for macOS (Intel).]. §.§ The SVD of R and G For R now holds 𝔱”=13. If r_11<r_22, the diagonal elements of R have to be swapped, similarly to . This is done by symmetrically permuting R^T with P=[[ 1 0; 0 1 ]]. Multiplying R=P^T R^T P=UΣV^T by P from the left and P^T from the right gives R^T=PUΣV^TP^T, R=PVΣU^TP^T=ǓΣ̌V̌^T, where Ǔ=PV, Σ̌=Σ, and V̌=PU. Therefore, having applied the permutation P, U=U_φP_σ̃, V=V_ψP_σ̃, Ǔ=Ǔ_φP_σ̃, V̌=V̌_ψP_σ̃, Ǔ_φ =PV_ψ= [ sinψ cosψ; cosψ -sinψ ]= [ sinψ -cosψ; cosψ sinψ ]S_2= [ cosφ -sinφ; sinφ cosφ ]S_2, V̌_ψ =PU_φ= [ sinφ cosφ; cosφ -sinφ ]= [ sinφ -cosφ; cosφ sinφ ]S_2= [ cosψ -sinψ; sinψ cosψ ]S_2, S_2=[ 1 0; 0 -1 ], cosφ=sinψ, sinφ=cosψ, tanφ=1/tanψ, cosψ=sinφ, sinψ=cosφ, tanψ=1/tanφ, where U_φ, V_ψ, and P_σ̃ come from the SVD of R in Section <ref>. If r_11≥r_22 then R=R, Ǔ=U, V̌=V, (<ref>) is not used, and let S_2=I, φ=φ, and ψ=ψ. Assume that no inexact underflow occurs in the computation of R. If the initial matrix G was triangular, let δ_11=δ_12=δ_22=1. 
Else, let δ_11, δ_12, and δ_22 stand for the error factors of the elements of R that correspond to those from Theorem <ref>, i.e., R=[ r̃_11 r̃_12; 0 r̃_22 ], R=[ r̃_11 r̃_12; 0 r̃_22 ]=[ r̃_11δ_11 r̃_12δ_12^; 0 r̃_22δ_22^ ], r̃_11≥r̃_22. It remains to compute the SVD of R by an alternative to , what is described in Section <ref>, and assemble the SVD of G, what is explained in Section <ref>. §.§.§ The SVD of R The key observation in this part is that the traditional <cit.> formula for tan(2φ)/2 involving the squares of the elements of R can be simplified to an expression that does not require any explicit squaring if the hypotenuse calculation is considered a basic arithmetic operation. The two following expressions for tan(2φ) are equivalent, tan(2φ)=2r̃_12^r̃_22^/r̃_11^2+r̃_12^2-r̃_22^2=2r̃_12^r̃_22^/(h-r̃_22^)(h+r̃_22^), h=√(r̃_11^2+r̃_12^2). With h=(r̃_11,r̃_12), let 𝐬=h⊕r̃_22 and 𝐝=h⊖r̃_22 be the sum and the difference[With the prescaling as employed, ⊖ can be replaced by subtraction d=h-r̃_22 and 𝐝=(e_d,f_d).] of h and r̃_22 as in (<ref>) and (<ref>), respectively, for h>r̃_22. From (<ref>) and since 0<r̃_ij≤ν/(2√(2)) for 1≤ i≤ j≤ 2, it holds 0<r̃_22≤r̃_11≤h≤ν. Thus (<ref>) can be re-written using (<ref>) and (<ref>), with 𝐫̃_12=(e_r̃_12,f_r̃_12) and 𝐫̃_22=(e_r̃_22,f_r̃_22), as h=r̃_22tan(2φ)=∞, h>r̃_22tan(2φ)=((2⊙𝐫̃_12⊙𝐫̃_22)⊘(𝐝⊙𝐬)), where the computation's precision is unchanged, but the exponent range is widened. In (<ref>), the denominator of the expression for tan(2ϕ), 𝖽=g_11^2+g_12^2-g_21^2-g_22^2, can also be computed using , without explicitly squaring any matrix element, as 𝖽=(√(g_11^2+g_12^2)-√(g_21^2+g_22^2))(√(g_11^2+g_12^2)+√(g_21^2+g_22^2)). Only if 𝔱'≅ 13 can happen that (r̃_11/r̃_12)<ε. In the first denominator in (<ref>), r̃_11^2 and r̃_22^2 then have a negligible effect on r̃_12^2, so the expression for tan(2φ) can be simplified, as in , to the same formula which would the case r̃_11=r̃_22 imply, tan(2φ)=((2r̃_22)/r̃_12). Let 𝐫̃_11=(e_r̃_11,f_r̃_11). If r̃_12=r̃_22, (<ref>) can be simplified by explicit squaring to tan(2φ)=((2⊙𝐫̃_12⊙𝐫̃_22)⊘(𝐫̃_11⊙𝐫̃_11)). Both (<ref>) and (<ref>) admit a simple roundoff analysis. However, (<ref>) does not, due to a subtraction of potentially inexact values of a similar magnitude when computing 𝐝. Section <ref> shows, with a high probability by an exhaustive testing, that (<ref>) does not cause excessive relative errors in the singular values for 𝔱'≅ 13, and neither for 𝔱'=15 if the range of the exponents of the elements of input matrices is limited in width to about (e_ν-e_μ)/2. If is not correctly rounded, the procedure from <cit.> for computing tan(2φ) without squaring the input values can be adopted instead of (<ref>), as shown in Algorithm <ref>, but still without theoretical relative error bounds. All cases of Algorithm <ref> lead to 0≤tanφ≤ 1. From tanφ follows φ, as well as cosφ and sinφ, when explicitly required, what completely determines U_φ. To determine V_ψ, tanψ is obtained from tanφ (see <cit.>) as tanψ=(r̃_12+r̃_22tanφ)/r̃_11, tanψ=(t/r̃_11), t=(r̃_22,tanφ,r̃_12). Let 𝐬𝐞𝐜φ=(e_φ,f_φ). If tanψ is finite (e.g., when 𝔱'=15, due to the pivoting <cit.>), so is ψ. Then, let 𝐬𝐞𝐜ψ=(e_ψ,f_ψ). By fixing the evaluation order for reproducibility, the singular values σ̃_1” and σ̃_2” of R are computed <cit.> as 𝐬_ψ^φ=𝐬𝐞𝐜φ⊘𝐬𝐞𝐜ψ, σ̃_2”=𝐫̃_22^⊙𝐬_ψ^φ, σ̃_1”=𝐫̃_11^⊘𝐬_ψ^φ. If tanψ overflows due to a small r̃_11 (the prescaling ensures that t is always finite), let 𝐭=(e_t,f_t). 
In this case, similarly to the one in for (r̃_11/r̃_12)<ε, it holds ψ⪆tanψ, so cosψ⪅ 1/tanψ. To confine subnormal values to outputs only, let 𝐜𝐨𝐬ψ=𝐫̃_11⊘𝐭, cosψ=(𝐜𝐨𝐬ψ), sinψ=1. By substituting 1/𝐜𝐨𝐬ψ≈𝐭⊘𝐫̃_11 from (<ref>) for 𝐬𝐞𝐜ψ in (<ref>), simplifying the results, and fixing the evaluation order, the singular values of R in this case are obtained as σ̃_1”=𝐭⊘𝐬𝐞𝐜φ, σ̃_2”=𝐫̃_22^⊙(𝐬𝐞𝐜φ⊙𝐜𝐨𝐬ψ). From (<ref>), tanψ>ν implies tanφ⪅tan(2φ)/2⪅r̃_22/r̃_12≤r̃_11/r̃_12<1/(ν-1), so φ⪆ 1. Therefore, 𝐬𝐞𝐜φ may be eliminated from (<ref>), similarly as in . The SVD of R has thus been computed (without explicitly forming U_φ and V_ψ) as R ≈[ cosφ -sinφ; sinφ cosφ ] P_σ̃^P_σ̃^T [ σ̃_1” 0; 0 σ̃_2” ] P_σ̃^P_σ̃^T [ cosψ sinψ; -sinψ cosψ ] =(U_φ^P_σ̃^)(P_σ̃^TΣ_σ̃”P_σ̃^)(P_σ̃^TV_ψ^T) =UΣ_σ̃'V^T. If σ̃_1”≺σ̃_2”, then σ̃_1'=σ̃_2”, σ̃_2'=σ̃_1”, and P_σ̃^=[[ 0 1; 1 0 ]], else σ̃_i'=σ̃_i” and P_σ̃^=I, as presented in Algorithm <ref>. If r_11<r_22 then V̌ should be formed as in (<ref>), and Ǔ as well if 𝔱'≅ 13. Else, if r_11≥r_22, then V̌=V, and (only implicitly for 𝔱'=15) Ǔ=U. §.§.§ The SVD of G The approximate backscaled singular values of G are σ_i^=2^-s⊙σ̃_i'. They should remain in the exponent-“mantissa” form if possible, to avoid overflows and underflows. =-1 Recall that, for 𝔱'=15, U_φ^T and the QR rotation U_ϑ^T have not been explicitly formed. The reason is that U^T=Ǔ_φ^TU_+^T, where U_+^T is constructed from U_ϑ^T as in (<ref>), requires a matrix-matrix multiplication that can and sporadically will degrade the numerical orthogonality of U. On its own, such a problem is expected and can be tolerated, but if the left singular vectors of a pivot submatrix are applied to a pair of pivot rows of a large iteration matrix, many times throughout the Kogbetliantz process (<ref>), it is imperative to make the vectors as orthogonal as possible, and thus try not to destroy the singular values of the iteration matrix. In the following, U is generated from a single tanϕ, where tanϕ is a function of the already computed tanφ and tanϑ. If 𝔱'≅ 13, let U=U_+^Ǔ, where U_+^ comes from Section <ref>. Else, due to (<ref>), if S_22^T=I in (<ref>), the product Ǔ_φ^TU_ϑ^T=U_φ+ϑ^T can be written in the terms of φ+ϑ as U_φ+ϑ^T=S_2^T [ cosφ sinφ; -sinφ cosφ ][ cosϑ sinϑ; -sinϑ cosϑ ]=S_2^T [ cos(φ+ϑ) sin(φ+ϑ); -sin(φ+ϑ) cos(φ+ϑ) ]. If S_22^T=[[ 1 0; 0 -1 ]], U_φ-ϑ^T=Ǔ_φ^TS_22^TU_ϑ^T can be written in the terms of φ-ϑ as U_φ-ϑ^T=S_2^T [ cos(φ-ϑ) -sin(φ-ϑ); -sin(φ-ϑ) -cos(φ-ϑ) ]=S_2^T [ cos(φ-ϑ) sin(φ-ϑ); -sin(φ-ϑ) cos(φ-ϑ) ]S_2^, what is obtained by multiplying the matrices U_φ^T[[ 1 0; 0 -1 ]]U_ϑ^T and simplifying the result using the trigonometric identities for the (co)sine of the difference of the angles φ and ϑ. The middle matrix factor represents a possible sign change of r_22' as in Section <ref>. The matrices defined in (<ref>) and (<ref>) are determined by tan(φ+ϑ) and tan(φ-ϑ), respectively, where these tangents follow from the already computed ones as tan(φ+ϑ)=tanφ+tanϑ/1-tanφtanϑ, tan(φ-ϑ)=tanφ-tanϑ/1+tanφtanϑ. Finally, from (<ref>) and (<ref>), using either (<ref>) or (<ref>), the SVD of G is completed as U^T=U_φ±ϑ^TP_U^TS_1^T, U=UP_σ̃, V=V̌P_σ̃. For (<ref>), P_U^TS_1^T from (<ref>) is explicitly built and stored. It contains exactly one ± 1 element in each row and column, while the other is zero. Its multiplication by U_φ±ϑ^T is thus performed error-free. 
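Ignoring the extended exponent range and the overflow and underflow safeguards discussed above, the two-sided rotation SVD of the positive upper-triangular R can be summarized by the following Python sketch (illustrative names). It assumes r_11 ≥ r_22 > 0 and r_12 > 0, recovers tanφ from tan(2φ) by the standard half-angle identity rather than by the algorithm of the text, and does not cover the special case of an overflowing tanψ.

import math

def svd_positive_triangular(r11, r12, r22):
    # tan(2*phi) from the hypot-based formula; h = r22 gives tan(2*phi) = inf.
    h = math.hypot(r11, r12)
    if h > r22:
        t2 = (2.0 * r12 * r22) / ((h - r22) * (h + r22))
        tphi = t2 / (1.0 + math.hypot(t2, 1.0))   # 0 <= tan(phi) <= 1
    else:
        tphi = 1.0
    tpsi = (r12 + r22 * tphi) / r11               # tan(psi)
    s = math.hypot(tphi, 1.0) / math.hypot(tpsi, 1.0)   # sec(phi)/sec(psi)
    sigma1 = r11 / s
    sigma2 = r22 * s
    if sigma1 < sigma2:                           # permutation P_sigma
        sigma1, sigma2 = sigma2, sigma1
    return sigma1, sigma2, tphi, tpsi

# Example: R = [[1, 1], [0, 1]] has singular values (sqrt(5)+1)/2 and (sqrt(5)-1)/2.
s1, s2, tf, tp = svd_positive_triangular(1.0, 1.0, 1.0)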
The tangents computed as in (<ref>) might be relatively inaccurate in theory, but the transformations they define via the cosines and the sines from either (<ref>) or (<ref>) are numerically orthogonal in practice, as shown in Section <ref>. This heuristic might become irrelevant if the ab+cd floating-point operation with a single rounding <cit.> becomes supported in hardware. Then, each element of Ǔ_φ^TU_ϑ^T (a product of two 2× 2 matrices) can be formed with one such operation. It remains to be seen if the multiplication approach improves accuracy of the computed left singular vectors without spoiling their orthogonality, compared to the proposed heuristic. From the method's construction, it follows that if the side (left or right) on which the signs are extracted while preparing R is fixed (see Section <ref>) and whenever the assumptions on the arithmetic hold, the SVD of G as proposed here is bitwise reproducible for any G with finite elements. Also, the method does not produce any infinite or undefined element in its outputs U, V, and (conditionally, as described) Σ. §.§ A complex input matrix If G is purely imaginary, ±iG is real. Else, if G has at least one complex element, the proposed real method is altered, as detailed in <cit.>, in the following ways: * To make the element 0 g_ij=|g_ij|e^iα_ij real and positive, its row or column is multiplied by e^-iα_ij (which goes into a sign matrix), and the element is replaced by its absolute value. To avoid overflow, let 𝔰_ℂ=𝔰+1 in (<ref>). The exponents of each component (real and imaginary) of every element are considered in (<ref>). * U_+^T is explicitly constructed in (<ref>), and Ǔ_φ^TU_+^T is formed by a real-complex matrix multiplication. The correctly rounded ab+cd operation <cit.> would be helpful here. Merging Ǔ_φ^TU_ϑ^T as in (<ref>) or (<ref>) remains a possibility if S_22^ happens to be real. * Since (<ref>) is no longer directly applicable for ensuring stability, no computation is performed in a wider datatype. Reproducibility of the whole method is conditional upon reproducibility of the complex multiplication and the absolute value (). Once R is obtained, the algorithms from Section <ref> work unchanged. § NUMERICAL TESTING Numerical testing was performed on machines with a 64-core Intel Xeon Phi 7210 CPU, a 64-bit Linux, and the Intel oneAPI Base and HPC toolkits, version 2024.1. Let the LAPACK's routine be denoted by 𝙻. The Kogbetliantz SVD in the same datatype is denoted by 𝙺. Unless such information is clear from the context, let the results' designators carry a subscript 𝙺 or 𝙻 in the following figures, depending on the routine that computed them, and also a superscript ∘ or ∙, signifying how the input matrices were generated. All inputs were random. Those denoted by ∘ had their elements generated as Fortran's pseudorandom numbers not above unity in magnitude, and those symbolized by ∙ had their elements' magnitudes in the “safe” range [μ,ν/4], as defined by (<ref>), to avoid overflows with 𝙻 and underflows due to the prescaling in 𝙺. The latter random numbers were provided by the CPU's instructions. If not kept, the ∙ inputs are thus not reproducible, unlike the ∘ ones if the seed is preserved. All relative error measures were computed in quadruple precision from data in the working (single or double) precision. The unknown exact singular values of the input matrices were approximated by the Kogbetliantz SVD method adapted to quadruple precision (with a operation that might not have been correctly rounded). 
With G given and U, Σ, V computed, let the relative SVD residual be defined as reG=G-UΣV^T_F/G_F, the maximal relative error in the computed singular values σ_i (with σ_i being exact) as reσ_i=|σ_i-σ_i|/σ_i, 1≤ i≤ 2, σ_i=0∧σ_i=0reσ_i=0, and the departure from orthogonality in the Frobenius norm for matrices of the left and right singular vectors (what can be seen as the relative error with respect to I) as reU=U^TU-I_F, reV=V^TV-I_F. Every datapoint in the figures shows the maximum of a particular relative error measure over a batch of input matrices, were each batch (run) contained 2^30 matrices. Figure <ref> covers the case of upper triangular input matrices, which can be processed by both 𝙺 and 𝙻, and the measures (<ref>) and (<ref>). Numerical orthogonality of the singular vectors computed by 𝙺 is noticeably better than of those obtained by 𝙻, in the worst case. Also, the relative SVD residuals are slightly better, in the ∙ and the ∘ runs. Figure <ref> shows the relative errors in the singular values (<ref>) of the same matrices from Figure <ref>. The unity mark for re_𝙻^σ_2^∙ indicates that 𝙻 can cause the relative errors in the smaller singular values, σ_2, to be so high in the ∙ case that their maximum was unity in all runs and cannot be displayed in Figure <ref>, most likely due to underflow to zero of σ_2^∙ when the “exact” σ_2^∙>0 in (<ref>). However, when 𝙻 managed to compute the smaller singular values accurately in the ∘ case, the maximum of their relative errors was a bit smaller than the one from 𝙺, the cause of which is worth exploring. The same holds for the larger singular values, which were computed accurately by both 𝙻 and 𝙺. To put maxκ_2^∙⪅ 9.45· 10^1229, for which 𝙺 still accurately computed all singular values (in the exponent-“mantissa” form, and thus not underflowing), into perspective, the highest possible condition number for triangular matrices in the ∙ case can be estimated by recalling that Algorithms <ref> and <ref> were also performed in quadruple precision (to get σ_1, σ_2, and so κ_2), where μ and ν of double precision, as well as ν/μ, are within the normal range. Then, tanφ can be made small and tanψ huge by, e.g., G=[ μ ν/4; 0 μ ]tan(2φ)=8μ/ν2μ/ν<tanφ⪅4μ/νtanψ⪆ν/4μ. Therefore, the condition number of G is a cubic expression in ν/μ, since, from (<ref>), σ_2=μφ/ψ≈4μ^2/ν, σ_1=ν/4ψ/φ≈ν^2/16μ, κ_2=σ_1/σ_2≈ν^3/64μ^3. Figure <ref> focuses on 𝙺 and general input matrices, with all their elements random. Inaccuracy of the smaller singular values in the ∙ case motivated the search for safe exponent ranges of the elements of input matrices that should preserve accuracy of σ_2 from 𝙻 for 𝔱=13 and from 𝙺 for 𝔱=15. For that, the range of random values was restricted, and only those outputs x from for which |x|∈[2^ςμ,ν/4] were accepted, where ς was a positive integer parameter independently chosen for each run. Figure <ref> shows the results of this search for 𝙺 and 𝙻. Approximately half-way through the entire normal exponent range the relative errors in the smaller singular values stabilize to a single-digit multiple of ε. Thus, when for the exponents in G holds max_1≤i,j≤2e_g_ij-min_1≤i,j≤2e_g_ij<(e_ν-e_μ)/2 (ignoring the exponent of 0) it might be expected that 𝙺 computes σ_2 accurately, while 𝙻 should additionally be safeguarded by its user from the elements too close to μ. The proposed prescaling, but with 𝔰_𝙻=𝔰+1 (or more), might be applied to G before 𝙻. 
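For reference, the error measures reG, reσ_i, reU, and reV defined at the beginning of this section can be evaluated as in the following NumPy sketch. It uses float64 as the "more precise" datatype for single-precision inputs, whereas the testing above uses quadruple precision; the function name is illustrative only.

import numpy as np

def error_measures(G, U, sigma, V, sigma_exact):
    G64, U64, V64 = (np.asarray(A, dtype=np.float64) for A in (G, U, V))
    S64 = np.diag(np.asarray(sigma, dtype=np.float64))
    # Relative SVD residual and departures from orthogonality.
    re_G = np.linalg.norm(G64 - U64 @ S64 @ V64.T, 'fro') / np.linalg.norm(G64, 'fro')
    re_U = np.linalg.norm(U64.T @ U64 - np.eye(2), 'fro')
    re_V = np.linalg.norm(V64.T @ V64 - np.eye(2), 'fro')
    # Relative errors in the singular values, with the 0/0 case set to zero.
    re_sigma = [0.0 if (s == 0.0 and e == 0.0) else abs(float(s) - float(e)) / float(e)
                for s, e in zip(sigma, sigma_exact)]
    return re_G, re_U, re_V, re_sigma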
A timing comparison of and the proposed method was not performed since the correctly rounded routines are still in development. By construction the proposed method is more computationally complex than , so it is expected to be slower. An unoptimized OpenMP-parallel implementation of the Kogbetliantz SVD for G of order n>2 with the scaling of G in the spirit of <cit.> but stronger (accounting for the two-sided transformations of G) and the modified modulus pivot strategy <cit.>, when run with 64 threads spread across the CPU cores, a deterministic reduction procedure, and , showed up to 10% speedup over the one-sided Jacobi SVD routine without preconditioning,  <cit.>, from the threaded Intel MKL library for large enough n (up to 5376), with the left singular vectors from the former being a bit more orthogonal than the ones from the latter, while the opposite was true for the right singular vectors, on the highly conditioned input matrices from <cit.>. The singular values from were less than an order of magnitude more accurate. § CONCLUSIONS AND FUTURE WORK The proposed Kogbetliantz method for the SVD of order two computed highly numerically orthogonal singular vectors in all tests. The larger singular values were relatively accurate up to a few ε in all tests, and the smaller ones were when the input matrices were triangular, or, for the general (without zeros) input matrices, if the range of their elements was narrower than or about half of the width of the range of normal values. The constituent phases of the method can be used on their own. The prescaling might help when its inputs are small. The highly accurate triangularization might be combined with instead, as an alternative method for general matrices. And the proposed SVD of triangular matrices demonstrates some of the benefits of the more complex correctly rounded operations (), but they go beyond that. High relative accuracy for tan(2φ) from (<ref>) might be achieved, barring underflow, if the four-way fused dot product operation ab+cd+ef+gh, DOT4, with a single rounding of the exact value <cit.>, becomes available in hardware. Then the denominator of the expression for tan(2φ) in (<ref>) could be computed, even without scaling if in a wider datatype, by the DOT4, and the numerator by the DOT2 (ab+cd) operation. The proposed heuristic for improving orthogonality of the left singular vectors might be helpful in other cases when two plane rotations have to be composed into one and the tangents of their angles are known. It already brings a slight advantage to the Kogbetliantz SVD of order n with respect to the one-sided Jacobi SVD in this regard. =-1 With a proper vectorization, and by removing all redundancies from the preliminary implementation, it might be feasible to speed up the Kogbetliantz SVD of order n further, since adding more threads is beneficial as long as their number is not above 𝗇. Supplementary information. The document supplements this paper with further remarks on methods for larger matrices and the single precision testing results. Acknowledgments. The author would like to thank Dean Singer for his material support and to Vjeran Hari for fruitful discussions. § DECLARATIONS Funding. This work was supported in part by Croatian Science Foundation under the expired project IP–2014–09–3670 “Matrix Factorizations and Block Diagonalization Algorithms” (https://web.math.pmf.unizg.hr/mfbdaMFBDA), in the form of unlimited compute time granted for the testing. Competing interests. 
The author has no relevant competing interests to declare. Code availability. The code is available in <https://github.com/venovako/KogAcc> repository, and in the supporting <https://github.com/venovako/libpvn> repository. [Anderson et al.1999]Anderson-et-al-99 Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., Sorensen, D.: LAPACK Users' Guide, 3^rd edn. Software, Environments and Tools. SIAM, Philadelphia, PA, USA (1999). 10.1137/1.9780898719604 [Moler and Stewart1973]Moler-Stewart-73 Moler, C.B., Stewart, G.W.: An algorithm for generalized matrix eigenvalue problems. SIAM J. Numer. Anal. 10(2), 241–256 (1973) 10.1137/0710024 [Demmel and Kahan1990]Demmel-Kahan-90 Demmel, J., Kahan, W.: Accurate singular values of bidiagonal matrices. SIAM J. Sci. Statist. Comput. 11(5), 873–912 (1990) 10.1137/0911052 [Kogbetliantz1955]Kogbetliantz-55 Kogbetliantz, E.G.: Solution of linear equations by diagonalization of coefficients matrix. Quart. Appl. Math. 13(2), 123–132 (1955) 10.1090/qam/88795 [Charlier et al.1987]Charlier-et-al-87 Charlier, J.P., Vanbegin, M., Van Dooren, P.: On efficient implementations of Kogbetliantz's algorithm for computing the singular value decomposition. Numer. Math. 52(3), 279–300 (1987) 10.1007/BF01398880 [Stewart1992]Stewart-92 Stewart, G.W.: An updating algorithm for subspace tracking. IEEE Trans. Signal Process. 40(6), 1535–1541 (1992) 10.1109/78.139256 [Quintana-Ortí et al.1998]Quintana-Orti-et-al-98 Quintana-Ortí, G., Sun, X., Bischof, C.H.: A BLAS-3 version of the QR factorization with column pivoting. SIAM J. Sci. Comp. 19(5), 1486–1494 (1998) 10.1137/S1064827595296732 [Hari and Matejaš2009]Hari-Matejas-09 Hari, V., Matejaš, J.: Accuracy of two SVD algorithms for 2× 2 triangular matrices. Appl. Math. Comput. 210(1), 232–257 (2009) 10.1016/j.amc.2008.12.086 [Charlier and Van Dooren1987]Charlier-VanDooren-87 Charlier, J.-P., Van Dooren, P.: On Kogbetliantz's SVD algorithm in the presence of clusters. Linear Algebra Appl. 95, 135–160 (1987) 10.1016/0024-3795(87)90031-0 [Hari and Veselić1987]Hari-Veselic-87 Hari, V., Veselić, K.: On Jacobi methods for singular value decompositions. SIAM J. Sci. Statist. Comput. 8(5), 741–754 (1987) 10.1137/0908064 [Hari and Zadelj-Martić2007]Hari-Zadelj-Martic-07 Hari, V., Zadelj-Martić, V.: Parallelizing the Kogbetliantz method: A first attempt. J. Numer. Anal. Ind. Appl. Math. 2(1–2), 49–66 (2007) [Hari1991]Hari-91 Hari, V.: On sharp quadratic convergence bounds for the serial Jacobi methods. Numer. Math. 60(1), 375–406 (1991) 10.1007/BF01385728 [Matejaš and Hari2015]Matejas-Hari-15 Matejaš, J., Hari, V.: On high relative accuracy of the Kogbetliantz method. Linear Algebra Appl. 464, 100–129 (2015) 10.1016/j.laa.2014.02.024 [Novaković2020]Novakovic-20 Novaković, V.: Batched computation of the singular value decompositions of order two by the AVX-512 vectorization. Parallel Process. Lett. 30(4), 1–232050015 (2020) 10.1142/S0129626420500152 [Novaković and Singer2022]Novakovic-Singer-22 Novaković, V., Singer, S.: A Kogbetliantz-type algorithm for the hyperbolic SVD. Numer. Algorithms 90(2), 523–561 (2022) 10.1007/s11075-021-01197-4 [Bečka et al.2002]Becka-et-al-02 Bečka, M., Okša, G., Vajteršic, M.: Dynamic ordering for a parallel block-Jacobi SVD algorithm.
Parallel Comp. 28(2), 243–262 (2002) 10.1016/S0167-8191(01)00138-7 [Okša et al.2022]Oksa-et-al-22 Okša, G., Yamamoto, Y., Vajteršic, M.: Convergence to singular triplets in the two-sided block-Jacobi SVD algorithm with dynamic ordering. SIAM J. Matrix Anal. Appl. 43(3), 1238–1262 (2022) 10.1137/21M1411895 [IEEE Computer Society2019]IEEE-754-2019 IEEE Computer Society: 754-2019 - IEEE Standard for Floating-Point Arithmetic, (2019). 10.1109/IEEESTD.2019.8766229 [Borges2020]Borges-20 Borges, C.F.: Algorithm 1014: An improved algorithm for hypot(x,y). ACM Trans. Math. Softw. 47(1), 1–129 (2020) 10.1145/3428446 [Sibidanov et al.2022]Sibidanov-et-al-22 Sibidanov, A., Zimmermann, P., Glondu, S.: The CORE-MATH project. In: 2022 IEEE 29^th Symposium on Computer Arithmetic (ARITH), pp. 26–34 (2022). 10.1109/ARITH54963.2022.00014 [Novaković2024]Novakovic-24 Novaković, V.: Accurate complex Jacobi rotations. J. Comput. Appl. Math. 450, 116003 (2024) 10.1016/j.cam.2024.116003 [Novaković2023]Novakovic-23 Novaković, V.: Vectorization of a thread-parallel Jacobi singular value decomposition method. SIAM J. Sci. Comput. 45(3), 73–100 (2023) 10.1137/22M1478847 [Lauter2017]Lauter-17 Lauter, C.: An efficient software implementation of correctly rounded operations extending FMA: a+b+c and a× b+c× d. In: 2017 51^st Asilomar Conference on Signals, Systems, and Computers, pp. 452–456 (2017). 10.1109/ACSSC.2017.8335379 [Hubrecht et al.2024]Hubrecht-et-al-24 Hubrecht, T., Jeannerod, C.-P., Muller, J.-M.: Useful applications of correctly-rounded operators of the form ab+cd+e. In: 2024 IEEE 31^st Symposium on Computer Arithmetic (ARITH), pp. 32–39 (2024). 10.1109/ARITH61463.2024.00015 [Jeannerod et al.2013]Jeannerod-et-al-13 Jeannerod, C.-P., Luvet, N., Muller, J.-M.: Further analysis of Kahan's algorithm for the accurate computation of 2× 2 determinants. Math. Comp. 82(284), 2245–2264 (2013) 10.1090/S0025-5718-2013-02679-8 [Novaković and Singer2011]Novakovic-Singer-11 Novaković, V., Singer, S.: A GPU-based hyperbolic SVD algorithm. BIT 51(4), 1009–1030 (2011) 10.1007/s10543-011-0333-5 [Drmač1997]Drmac-97 Drmač, Z.: Implementation of Jacobi rotations for accurate singular value computation in floating point arithmetic. SIAM J. Sci. Comput. 18(4), 1200–1222 (1997) 10.1137/S1064827594265095 [Drmač and Veselić2008]Drmac-Veselic-08b Drmač, Z., Veselić, K.: New fast and accurate Jacobi SVD algorithm. II. SIAM J. Matrix Anal. Appl. 29(4), 1343–1362 (2008) 10.1137/05063920X [Lutz et al.2024]Lutz-et-al-24 Lutz, D.R., Saini, A., Kroes, M., Elmer, T., Valsaraju, H.: Fused FP8 4-way dot product with scaling and FP32 accumulation. In: 2024 IEEE 31^st Symposium on Computer Arithmetic (ARITH), pp. 40–47 (2024). 10.1109/ARITH61463.2024.00016
On the Evolution of Virulence in Vector-borne Diseases
Daniel A. M. Villela
http://arxiv.org/abs/2407.12148v1
§ INTRODUCTION

Preparing for potential future pandemics requires a comprehensive understanding of the evolutionary trends of pathogens. The number of deaths related to infectious diseases is on the order of tens of millions every year, despite advances on several treatment fronts over the years. The Global Burden Study reported that more than 13 million deaths related to infectious syndromes occurred in 2019 <cit.>. In terms of viruses, more than 200 viruses are known to infect humans <cit.>. In addition, there are also non-viral infectious diseases. For over a decade, the World Health Organization (WHO) has actively addressed the emergence of new diseases, recognizing them as significant threats requiring thorough preparation and planning <cit.>, which should include vector-borne diseases. Understanding virulence patterns through theoretical frameworks can assist in comprehending various scenarios and adjusting preparedness levels accordingly. In particular, vector-borne diseases are well characterized by the alternating transmission between vectors and hosts. Deaths from vector-borne diseases are critical in the current infectious disease realm, contributing to over 17% of deaths <cit.>. <cit.> found that among 45 diseases without vectors, only 5 had a fatality rate above 1%, whereas for 18 vector-borne diseases, 8 had a fatality rate above 1%. These vector-borne diseases include malaria, with approximately 576,000 cases reported in 2019 across several continents <cit.>. Notably, about 95% of the deaths due to malaria occurred in African countries in 2021 <cit.>. The tradeoff hypothesis <cit.> on disease evolution posits a tendency towards lower morbidity and reduced mortality, driven by the notion that more virulent pathogen strains could hinder transmission due to smaller host pools or host absence resulting from fatalities. By contrast, pathogens with minimal morbidity and mortality are likely to have very low transmission rates, jeopardizing their persistence. The most likely scenarios therefore place pathogens in the evolutionary landscape at a delicate balance that sustains their maintenance. In fact, a significant body of literature <cit.> discusses how virulence may evolve to maximize transmissibility relative to the recovery and mortality rates in the hosts. These theoretical frameworks consider the maximization of the basic reproduction number. Formal studies often assess the optimal mortality level for pathogens to maximize the basic reproduction number, assuming diminishing effects on transmission as virulence increases. Vector-borne diseases, which involve at least two levels of transmission (host-vector-host), have received surprisingly little attention in theoretical frameworks of virulence. This study presents a comprehensive mathematical model extending the susceptible-infected-recovered model to multiple levels. The model incorporates incubation periods, recovery time, and mortality, all dependent on the parasite growth rate, with diminishing effects on transmission.
The formal model enables the determination of the optimal combination of virulence levels in both vectors and hosts, revealing conditions in the state space where high virulence occurs in hosts. In such cases, a single level for host-only transmission may be insufficient for effective transmission, potentially explaining the higher prevalence of pathogens with elevated fatality rates among vector-borne diseases.

§ METHODS AND MATERIALS

§.§ Derivation of the vector-host SEIRD model

The model assumes intertwined SEIRD compartmental models for disease transmission dynamics in hosts and vectors. Transmission parameters for hosts and vectors are denoted by β_h and β_v, respectively. The average incubation period is given by θ_h^-1 for hosts and θ_v^-1 for vectors. Hosts experience natural mortality μ, while vectors have a natural mortality rate g. Disease mortality parameters are denoted by ω and δ for hosts and vectors, respectively. Equations, normalized by total population sizes, describe the dynamics for susceptibles (S_x), exposed (E_x), infected (I_x), recovered (R_x), and dead individuals (D_x), where x can be either h for hosts or v for vectors. The transmission intensity terms are functions of virulence in both hosts and vectors. The number of newly infected hosts (vectors) at time interval dt depends on susceptible hosts (vectors), infected vectors (hosts), and transmission intensities β_h and β_v for hosts and vectors, respectively. Transmission intensities are given as functions of disease mortality, β_h(δ, ω) and β_v(δ, ω), in an abuse of notation. The disease cycle involves alternating phases of transmission between vectors and hosts, i.e., the vector–host transmission. Analyzed separately, each of these phases permits a theoretical construct that establishes the reproduction number for host-only (R_0,h) and vector-only (R_0,v) transmission. When taking into account the host–vector–host transmission, the reproduction number is the geometric mean of the reproduction numbers for these two phases: R_0 = √(R_0,h m R_0,v), where the ratio m of vector to host abundance is clearly an amplifying factor. The complete set of equations is provided in Supplemental Material (S1). The theoretical framework for analyzing the basic reproduction number involves the matrix-generation method.

§.§ Conditions for the optimal state

An evolutionary trajectory is likely to lead to a virulence level that maximizes the reproduction number. The theoretical methodology for studying optimal levels of virulence involves solving the equations for the partial derivatives with respect to the virulence variables of hosts and vectors, expressed as follows: ∂_δ R_0(δ, ω) = 0, ∂_ω R_0(δ, ω) = 0, where the notation ∂_δ = ∂ R_0(δ, ω)/∂δ and ∂_ω = ∂ R_0(δ, ω)/∂ω is used for conciseness.

§.§ Transmission rate as function of mortality rate

The first step in defining the transmission rate in a single SEIRD model is to describe the replicating dynamics within an infected individual. The transmission model is such that the number of parasites in a host increases over time with kinetics given by an increasing function N(t). The function considered in the model is N(t) = A(1-e^-r t), where A is a ceiling constant and r is the parasite replicating rate. There are two significant properties in this formulation. First, the ceiling factor means that there is a maximum number of parasites in the host. Second, as density increases, the growth is expected to slow down.
Typically, the time t_d to reach a deadly amount A_d is such that A(1-e^-r t_d) = A_d. Hence, t_d = 1/ω = -log(1-A_d/A)/r, and the rate r is proportional to the mortality rate ω as r = k ω, where k = -log(1-A_d/A). The transmission rate is intuitively how many transmission events occur over time. The number of events is a random variable Ω, for which the mean value of its distribution will be of interest here. The expected number of transmissions over a time τ is N(τ)/N_0, where N_0 is the quantity of parasites required for a single transmission. Formally, the transmission rate conditioned on the time τ is E[Ω|τ] = N(τ)/(N_0 τ), where the time τ in the denominator effectively converts the expected number of events into a rate. The time available for transmission is limited by T_min, a random variable given by the minimum of the recovery time T_r and the natural death time T_m of the host. Assuming an exponential distribution for these times, the distribution of T_min is also exponential: P(T_min > t) = P(T_m > t)P(T_r > t) = e^-μ t e^-γ t, where μ and γ are the natural mortality and recovery rates in the distributions of T_m and T_r, respectively. Hence, the distribution of T_min is exponential with a rate given by the sum of these rates. Finally, the transmission rate β(ω), as a function of mortality ω, is given by E[Ω] = ∫_0^∞ E[Ω | τ] P(T_min = τ) dτ. Using the formulation of the replicating dynamics, the overall transmission rate β(ω) is β(ω) = ∫_0^∞ ([A(1-e^-k ωτ)]/(N_0 τ)) (μ+γ) e^-(μ+γ) τ dτ = Ω_c log(μ+γ+k ω) - Ω_c log(μ+γ), where Ω_c is the constant given by A(μ+γ)/N_0. This formulation is also applied to vectors with the appropriate change to vector parameters.

§.§ Parameters for scenario comparisons

There is significant literature on the survival of vectors and hosts. The host parameters for scenario comparison purposes concentrate on human populations, so that different outcomes in human populations can be compared. For hosts, mortality can be sourced from well-studied diseases impacting the human population. Life expectancy varies significantly across different countries, with an overall global life expectancy of close to 70 years in 2021. Consequently, μ = (70 × 365)^-1 (day^-1). The survival parameters for vectors can vary with species. For mosquito species, capture and recapture studies provide good estimates of mosquito survivorship in settings other than laboratory conditions. A baseline value for the mortality of Ae. aegypti is -log(0.8) per day, estimated in capture-recapture studies with Ae. aegypti mosquitoes <cit.>. However, this mortality rate can vary widely in the field depending on environmental conditions. In the lab, Aedes mosquitoes can survive a few weeks. A contrasting survivorship is a key difference between most hosts and vectors. The rate g was studied as given by either 5 days or 20 days of survival time. The recovery rate for hosts is 1/5 day^-1 and for vectors is 0.05 day^-1. The mean incubation rate is 0.2 day^-1 for vectors and 0.5 day^-1 for hosts. The constant Ω_c is 8.3 × 10^-4 day^-1 for vectors and 1.6 × 10^-3 day^-1 for hosts. The abundance is set initially at m=1 for equal treatment purposes, and is subsequently varied with a multiplying factor for vectors.
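For concreteness, the closed form for β can be evaluated directly with the parameter values above. The following Python sketch is illustrative only (the names are not from the paper); the constant k = -log(1-A_d/A) is not given a numerical value in the text, so it is left as an argument with a default of 1.

import math

# Scenario parameter values quoted above (all rates per day).
MU_H = 1.0 / (70 * 365)      # host natural mortality
GAMMA_H = 1.0 / 5            # host recovery rate
OMEGA_C_H = 1.6e-3           # host constant Omega_c
G_V = 1.0 / 20               # vector natural mortality (20-day survival scenario)
GAMMA_V = 0.05               # vector recovery rate
OMEGA_C_V = 8.3e-4           # vector constant Omega_c

def beta(mortality, omega_c, natural_mortality, recovery, k=1.0):
    # beta = Omega_c * [log(natural_mortality + recovery + k*mortality)
    #                   - log(natural_mortality + recovery)]
    base = natural_mortality + recovery
    return omega_c * (math.log(base + k * mortality) - math.log(base))

# Example: host and vector transmission rates at given disease mortalities.
beta_h = beta(0.002, OMEGA_C_H, MU_H, GAMMA_H)   # omega = 0.002 per day
beta_v = beta(0.001, OMEGA_C_V, G_V, GAMMA_V)    # delta = 0.001 per day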
§.§ Comparison of vector–host, host–only, and vector–only scenarios

Given that the basic reproduction number is a product of several factors, an intuitive rationale is a product of the reproduction numbers due to vectors and hosts: R_0(δ, ω) = √(m R_0,v R_0,h), where R_0,v is the reproduction number factor due to vectors and R_0,h is the reproduction number factor due to hosts. The main criterion for the viability of sustained disease transmission to be evaluated is the condition on virulence ω in the hosts such that R_0(δ, ω) > 1. Given the parameters of the vector-host model, the vector-only component has only the vector-related factors in the transmission. Similarly, the host-only component has the host factor in the reproduction number. The transmission terms in these components follow the transmission formulation: β_h(ω) = Ω_c(log(k_h ω + μ + γ_h) - log(μ + γ_h)), β_v(δ) = Ω_c(log(k_v δ + g + γ_v) - log(g + γ_v)). Given these model counterparts, the definitions of R_0,v and R_0,h give the reproduction numbers under these models. For the SEIRD component with the vector, the factor in the reproduction number is given by β_v(δ)/(g + γ_v(δ) + δ).

§.§ Evaluation of Global Infectious Disease Data

The Global Burden of Disease (GBD) project regularly collects data on various health conditions, including infectious diseases <cit.>. The most recent data, as of 2019, provide information on the number of cases and deaths for the diseases listed in the Appendix. The study included 169 cause names for diseases and health conditions that might lead to death. Thirty infectious diseases were selected from this list with the simple criterion that transmission should involve a pathogen. Cysticercosis, nematode infections, schistosomiasis, and "other neglected diseases" did not include either incidence or mortality and were excluded. The World Bank classification of countries by income level was used to group the countries reported in the study. The final list had 22 infectious diseases or groups of infectious diseases. The case fatality ratio (CFR) is calculated by taking the ratio between the number of deaths and the number of cases for a particular disease in a specific country and year <cit.>. This ratio has the advantage of not requiring population estimates for countries. The three most recent GBD studies were conducted in the years 2019, 2010, and 2000. The fatality rates were observed from empirical data to identify diseases with fatality rates above 1% in recent years. The goal of the GBD study, however, was not to compare fatality rates among diseases, and some of these diseases may have effective treatment options, such as vaccines.

§.§ Case fatality rates in the model

As the case fatality rate does not require a population estimate in realistic settings, virulence can alternatively be studied as a function of the fatality rate. Also, the case fatality rate (CFR) provides insight into virulence without the need for a specific time unit. The CFR is obtained as the ratio between the number of deaths at the recovery time τ and the sum of deaths and recoveries at that time: CFR = D(τ)/(D(τ) + R(τ)). In a SEIRD model, the CFR can be obtained by applying the numbers accumulated from the starting time t=0 to τ, as follows: CFR = ∫_0^τ ω I(t) dt / ∫_0^τ (ω + γ e^μ(t-τ)) I(t) dt, which stabilizes to ω/(ω+γ) when τ is sufficiently large <cit.>. A case fatality ratio of c would represent a mortality of c γ_h/(1-c).
For a fatality rate of 1%, these mortalities are close to μ and δ for hosts and vectors, respectively.

§ RESULTS

§.§ Virulence that maximizes the reproduction number

The matrix-generation method for finding R_0 requires analyzing the infectious states E_v, I_v, E_h, I_h in the mathematical model. The reproduction number as a function of the mortality parameters ω and δ is R_0(δ, ω) = √( m · [β_v(δ, ω) θ_v(δ)] / [(g + θ_v(δ))(g + δ + γ_v(δ))] · [β_h(δ, ω) θ_h(ω)] / [(μ + θ_h(ω))(μ + ω + γ_h(ω))] ). The reproduction number R_0 is composed of a product of factors given by vector transmission, host transmission, and the number of vectors per host. Equation <ref> can be written with factors as formulated in Equation <ref>: R_0,h = √( [β_h(δ, ω) θ_h(ω)] / [(μ + θ_h(ω))(μ + ω + γ_h(ω))] ) = √( Ω_h (log(k_h ω + μ + γ_h) - log(μ + γ_h)) / (μ + γ_h + ω) ), R_0,v = √( [β_v(δ, ω) θ_v(δ)] / [(g + θ_v(δ))(g + δ + γ_v(δ))] ) = √( Ω_v (log(k_v δ + g + γ_v) - log(g + γ_v)) / (g + γ_v + δ) ). If β_v(δ, ω) = β_v(δ) for any ω and β_h(δ, ω) = β_h(ω) for any δ, i.e., transmission in one host does not depend on mortality in the other host, then ∂_δ R_0 = 0 also implies that ∂_δ R_0,v = 0, and ∂_ω R_0 = 0 also implies that ∂_ω R_0,h = 0. Clearly, the optimal solution for R_0^2 will also be the optimal solution for R_0. Typically the solutions for ∂_δ R_0,v = 0 require ∂_δβ_v(δ)/β_v(δ) = 1/(g + δ + γ_v). Therefore, the condition for the maximization of R_0 is that the logarithmic derivatives of the numerator and the denominator are equal. For the conditions of constant θ_v, θ_h, γ_v and γ_h we have k_v / [(g+γ_v+k_v δ)(log(g+γ_v+k_v δ) - log(g+γ_v))] = 1/(g+γ_v+δ). Therefore, the optimal mortality rate δ_opt is the solution to the equation (g+γ_v+k_v δ_opt)(log(g+γ_v+k_v δ_opt) - log(g+γ_v)) = k_v (g+γ_v+δ_opt), which can be solved numerically. Similar reasoning leads to the result for the optimal mortality ω_opt in the host: (μ+γ_h+k_h ω_opt)(log(μ+γ_h+k_h ω_opt) - log(μ+γ_h)) = k_h (μ+γ_h+ω_opt).

§.§ A vector-host model expands the virulence space with viable transmission

The condition for R_0^2 > 1 is given by m · [β_v(δ, ω) θ_v] / [(g + θ_v(δ))(g + δ + γ_v(δ))] · [β_h(δ, ω) θ_h] / [(μ + θ_h(ω))(μ + ω + γ_h(ω))] > 1. There are conditions under which the vector-only component is less than one while the basic reproduction number for the complete vector–host transmission is above 1. Figure <ref> shows how the vector–host transmission expands the area in the viability space of mortality rates. Viability is evaluated as R_0>1. The area expands for both hosts and vectors; however, the effect is greater for hosts. The theoretical framework presented by <cit.> illustrates, for a generic directly transmitted disease, the transmission function in a Cartesian plot, such that the ratio of β to the sum of virulence, natural mortality, and recovery rates should be maximized. In that case, the line from x = -(μ + γ_h) to the points in the plot of β_h should be tangent to the curve at an optimal point. In this case, the same reasoning applies, as illustrated in Figure <ref> with the same parameters as in Figure <ref>. Figure <ref> shows how the reproduction number can be higher for host-vector transmission than the host-only reproduction number. A sufficient condition for higher transmission in the host-vector setting is that the vector-only component is higher than the host-only one, given that the joint reproduction number is their geometric mean. In this setting R_0,h = 1 for δ = 10^-2, whereas the joint R_0 > 1.2, as seen by the dashed vertical line.
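As a concrete illustration of the optimality condition and of the joint reproduction number, the Python sketch below evaluates R_0(δ, ω) = √(m R_0,v R_0,h) with the scenario parameters of the Methods section (20-day vector survival) and solves the optimal-mortality equation by bisection. The function names and the choice k_h = k_v = 1 are assumptions, since no numerical value of k is given in the text.

import math

def r0_sq_factor(mortality, omega_c, nat_mort, recovery, incubation, k=1.0):
    # One squared factor (host or vector) of R_0, using the beta() closed form
    # and constant incubation and recovery rates.
    b = omega_c * math.log((nat_mort + recovery + k * mortality)
                           / (nat_mort + recovery))
    return (b * incubation
            / ((nat_mort + incubation) * (nat_mort + recovery + mortality)))

def basic_reproduction_number(delta, omega, m=1.0):
    # R_0(delta, omega) = sqrt(m * R_0v^2 * R_0h^2) with the scenario parameters.
    r0v_sq = r0_sq_factor(delta, 8.3e-4, 1.0 / 20, 0.05, 0.2)
    r0h_sq = r0_sq_factor(omega, 1.6e-3, 1.0 / (70 * 365), 0.2, 0.5)
    return math.sqrt(m * r0v_sq * r0h_sq)

def optimal_mortality(nat_mort, recovery, k=1.0, hi=10.0, tol=1e-12):
    # Solve k*(c + x) = (c + k*x)*log((c + k*x)/c), c = nat_mort + recovery,
    # by bisection; f is negative at 0+ and grows without bound.
    c = nat_mort + recovery
    f = lambda x: (c + k * x) * math.log((c + k * x) / c) - k * (c + x)
    lo = tol
    while f(hi) < 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: optimal host mortality and R_0 at delta = 0.001, omega = 0.002 per day.
omega_opt = optimal_mortality(1.0 / (70 * 365), 0.2)
r0 = basic_reproduction_number(0.001, 0.002)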
Finally, the vector can also contribute to transmission by abundance. In fact, the parameter m might have a profound impact. Figure <ref> shows that the abundance might make the vector–only transmission higher or lower than the host–only transmission as the abundance factor varies from 20% to 100%. As the abundance factor increases by a factor of 1.8, the joint reproduction number is clearly above the host–only reproduction number for the range of mortality rates.

§.§ The case fatality rates of diseases in the GBD study

Figure <ref> shows that the diseases from the GBD study <cit.> with higher CFR include vector-host diseases or diseases that might have multiple hosts, such as schistosomiasis and rabies. As expected, the income level of countries has an impact, as visibly more diseases have fatality rates above 1% in low-income countries than in high-income countries. Still, several observations apply to all categories. The variability of the case fatality rates of diseases from dengue to leishmaniasis, as listed in Figure <ref>, is very high. Median values for leishmaniasis, yellow fever, meningitis, diphtheria, tetanus, trypanosomiasis, rabies and schistosomiasis are above 1% for high-income countries. The intervals exhibited for yellow fever, meningitis, diphtheria, tetanus, trypanosomiasis, rabies, and schistosomiasis show that above 95% of records are above 1%. Trypanosomiasis, rabies and schistosomiasis have extremely high fatality rates.

§ DISCUSSION

The theoretical framework in this study shows that interactions between vector and host transmission position vector-borne diseases within a wide range of virulence states that enable transmission. The vector component may amplify the reproduction number, increasing the evolutionary viability for the pathogen, and may widen the range of mortality values in which this viability happens. This can lead to increased virulence in hosts, even if it does not correspond necessarily to the highest reproduction numbers. The varying levels of virulence can have serious consequences; for example, yellow fever is known to have a high fatality rate <cit.>. These findings highlight the need for heightened attention to vector-borne diseases as potential emergent threats with pandemic potential. The evolutionary state of currently known pathogens spreads across a broad spectrum of mortality rates, as disease mortality rates for known diseases range from mild to very high. Therefore, the current severity state may not be the optimal one, i.e., the one that maximizes the reproduction number for a given pathogen. The reproduction number can be higher than the one provided by the host-only component, a theoretical condition for sustainable transmission, given that the vector component introduces a factor that amplifies the overall reproduction number. In the formulation of the reproduction number, the interaction of these components is given by the geometric mean. In particular, a reproduction number above 1 for a given state while the host-only component is below 1 signals a viability enabled by the vector–host transmission. Geographical and climate conditions may restrict vector distribution, meaning they might not be present globally. <cit.> also discuss the possibility of higher virulence in vector-host transmission despite the pathogen needing to overcome different immunological systems and the fitness costs to vectors. However, where vectors are present, there is a significant risk of pathogen emergence or adaptation.
Therefore, vector-borne diseases pose a high risk for focused surveillance of infectious diseases. The model included a growth rate of pathogens within hosts and vectors in which diminishing transmission rate over virulence levels are crucial. This is biologically plausible since the pathogen will grow in hosts and vectors where there is a finite number of potential cells or tissues to infect. Previous theoretical frameworks have also considered diminishing returns <cit.>. The tradeoff theory has been discussed by several authors including <cit.>, and a model formulation by <cit.>. Recent empirical observations also pointed to diminishing rates for virus titers in SARS-CoV-2 <cit.> in patients and DENV-2 in mosquitoes <cit.>. Nevertheless, several authors have also examined the challenges of reproducing such effects in various disease contexts, both experimentally and observationally. <cit.> provide a review discussing why it is often challenging to evaluate the tradeoff hypothesis empirically. <cit.> found evidence for the tradeoff hypothesis with plant pathogens, and <cit.> found evidence in experiments with a castrating bacterium. The initial formulation of the model is quite general to provide intuition on the effects of parameters on the basic reproduction number. However, a few assumptions were applied to obtain some of the theoretical results. Typically, the model assumes that the distributions of general mortality and recovery rates follow an exponential pattern. This assumption simplifies the model, making it easier to analyze and draw meaningful conclusions. The formula for the basic reproduction number in this study is arguably more general than the classical MacDonald and Ross formulation <cit.>, which is typically used in traditional models for diseases like malaria. While the reproduction number in these classical models is very useful for analyzing effects such as reductions due to control measures, it is less flexible for studying virulence evolution. However, similarities can still be drawn between this formulation and more recent ones, such as those proposed by <cit.>. Future studies may explore, within the current framework, the impact of parasite clearance due to immunological effects. The rate of parasite clearance could be integrated into the transmission model as a factor that slows down transmission. This immunological component is expected to become relevant some time after infection, as its effects are minimal during the initial phase. For example, Poehler et al. <cit.> demonstrate the decay of antibodies over time in individuals infected with SARS-CoV-2. Similar antibody decay rates have been observed for DENV <cit.>. While a simple model can yield some insights, subsequent research should consider a more complex model that incorporates immunological factors. The transmission rate formulation includes two essential components for evolutionary directions: the parasite growth rate and the threshold value of parasites. Additional avenues for analyzing virulence can be explored, such as investigating the threshold level of parasites required to cause mortality. The approach taken in this study is likely to improve understanding of virulence impacts, as these rates affect the exponential growth of parasites and can significantly influence mortality over time. Furthermore, an increase in the threshold may be attributed to the host lacking immunological adaptation. 
The number of cases and deaths from the Global Burden of Disease (GBD) study from 2019 permitted an evaluation of case fatality rates, which indeed show high fatality rates for diseases such as trypanosomiasis, yellow fever, and leishmaniasis. However, comparing fatality rates for different diseases within this dataset has problems due to biases. Some of these diseases have well-established treatment schemes, and some might even have vaccines available. Additionally, countries have highly heterogeneous healthcare systems that might also impact the outcomes for infected individuals. The main purpose of analyzing fatality rates using the GBD data is to highlight the variability in case fatality rates and to distinguish those with high fatality rates. The original goal of the GBD study is to evaluate the burden of disease, and drawing comparisons on disease fatality rates requires caution. First, there is likely a strong bias in reporting for some diseases, especially when only deaths are reported, which overestimates the CFR. Second, there are very heterogeneous levels of surveillance and treatment across countries, from low-income to high-income countries. More resources can be used in the surveillance, prevention, and treatment of infectious diseases if they are available, which highly depends on the countries. The analysis in categories recognizes such effects, although the current categorization still groups multiple, possibly heterogeneous, countries into the same group. Third, the diseases listed in the analyses included diseases with first-line drug treatments and diseases for which vaccines are regularly produced, whereas others, such as rabies, currently do not have treatment. Again, the availability of treatments and vaccines might depend on the countries' income levels. Also, some diseases might exhibit high variability due to low numbers. Yellow fever in Brazil is an example, with years of very high CFR (35%) <cit.>, followed by years with few cases and highly variable CFR. Nevertheless, the analysis conducted here provides evidence that fatality rates across such comprehensive set of diseases vary significantly, as expected. Importantly, the inclusion of a disease in the GBD study indicates that it has a substantial burden, which means that the final list of diseases presented here has a non-negligible case fatality rate (CFR). Increased surveillance is essential for vector-borne diseases. The GBD project highlights neglected diseases like leishmaniasis and yellow fever. Pandemic preparedness demands timely data collection and monitoring, especially for currently overlooked diseases. Variability in climate conditions may disrupt existing systems and lead to increased virulence, even if temporary, which can still result in severe cases. The theoretical framework in this study is valuable for advancing knowledge, comparing different scenarios, and assessing risks in changing environments. § ACKNOWLEDGMENTS DAMV is grateful for the National Council for Scientific and Technological Development (CNPq/Brazil) and CAPES (Service Code 001). DAMV is grateful to the Center for Health and Wellbeing (CHW) at Princeton University, as most of this work was done during his time as Visiting Research Scholar at CHW. Appendix §.§ Complete Model The complete model is given by a system of Ordinary Differential Equations (ODE) that describe two coupled SEIRD systems composed of hosts and vectors. 
Variables S_x, E_x, I_x, and R_x denote the numbers of susceptible, exposed, infected, and recovered individuals, with x=h for hosts and x=v for vectors, and D_x accumulates disease-induced deaths: dS_h/dt = μ H - m β_h(δ, ω) S_h I_v/M - μ S_h, dE_h/dt = m β_h(δ, ω) S_h I_v/M - (θ_h + μ) E_h, dI_h/dt = θ_h(ω) E_h - (γ_h(ω) + ω + μ) I_h, dD_h/dt = ω I_h, dR_h/dt = γ_h(ω) I_h - μ R_h, dS_v/dt = g M - β_v(δ, ω) S_v I_h/H - g S_v, dE_v/dt = β_v(δ, ω) S_v I_h/H - (θ_v(δ) + g) E_v, dI_v/dt = θ_v(δ) E_v - (γ_v(δ) + δ + g) I_v, dD_v/dt = δ I_v, dR_v/dt = γ_v(δ) I_v - g R_v, where m = M/H. Normalized equations: dS_h/dt = μ - m β_h(δ, ω) S_h I_v - μ S_h, dE_h/dt = m β_h(δ, ω) S_h I_v - (θ_h + μ) E_h, dI_h/dt = θ_h(ω) E_h - (γ_h(ω) + ω + μ) I_h, dD_h/dt = ω I_h, dR_h/dt = γ_h(ω) I_h - μ R_h, dS_v/dt = g - β_v(δ, ω) S_v I_h - g S_v, dE_v/dt = β_v(δ, ω) S_v I_h - (θ_v(δ) + g) E_v, dI_v/dt = θ_v(δ) E_v - (γ_v(δ) + δ + g) I_v, dD_v/dt = δ I_v, dR_v/dt = γ_v(δ) I_v - g R_v. §.§ Finding the basic reproduction number The next-generation matrix method is applied to find the basic reproduction number. The conditions for a disease-free state are S_v = M and S_h = H. Under these conditions, the subsystem for the disease states, i.e., the exposed and infected compartments, is: dE_h/dt = m β_h(δ, ω) I_v - (θ_h + μ) E_h, dI_h/dt = θ_h(ω) E_h - (γ_h(ω) + ω + μ) I_h, dE_v/dt = β_v(δ, ω) I_h - (θ_v(δ) + g) E_v, dI_v/dt = θ_v(δ) E_v - (γ_v(δ) + δ + g) I_v. This subsystem permits obtaining the matrices T and Σ, which correspond to the decomposition of the equations into transmission and transition parts, respectively: T = [ 0 0 0 m β_h; 0 0 0 0; 0 β_v 0 0; 0 0 0 0 ], Σ = [ -(θ_h+μ) 0 0 0; θ_h -(γ_h(ω) + ω + μ) 0 0; 0 0 -(θ_v(δ) + g) 0; 0 0 θ_v(δ) -(γ(δ) + δ + g) ]. The inverse of Σ is an intermediate step toward the matrix -T Σ^-1: Σ^-1 = [ -1/(θ_h+μ) 0 0 0; -θ_h/((θ_h+μ)(γ_h(ω) + ω + μ)) -1/(γ_h(ω) + ω + μ) 0 0; 0 0 -1/(θ_v(δ) + g) 0; 0 0 -θ_v(δ)/((θ_v(δ) + g)(γ(δ) + δ + g)) -1/(γ(δ) + δ + g) ], -T Σ^-1 = [ 0 0 θ_v(δ) m β_h/((θ_v(δ) + g)(γ(δ) + δ + g)) m β_h/(γ(δ) + δ + g); 0 0 0 0; θ_h β_v/((θ_h+μ)(γ_h(ω) + ω + μ)) β_v/(γ_h(ω) + ω + μ) 0 0; 0 0 0 0 ]. The matrix -T Σ^-1 is the key object because its spectral radius gives the basic reproduction number: R_0 = √( (θ_v(δ) m β_h)/((θ_v(δ) + g)(γ(δ) + δ + g)) · (θ_h β_v)/((θ_h+μ)(γ_h(ω) + ω + μ)) ). §.§ Theoretical optimal conditions for maximising the basic reproduction number The basic reproduction number is maximized when the partial derivatives of (<ref>) with respect to the vector and host disease-induced mortality rates equal zero. Hence, this first-order condition permits finding the optimal mortality rates as follows (note that maximizing R_0 is equivalent to maximizing R_0^2): ∂_ωθ_h(ω)/θ_h(ω) + ∂_ωβ_h(δ, ω)/β_h(δ, ω) + ∂_ωβ_v(δ, ω)/β_v(δ, ω) = ∂_ωθ_h(ω)/(μ + θ_h(ω)) + (∂_ωγ_h(ω) + 1)/(μ + γ_h(ω) + ω), ∂_δθ_v(δ)/θ_v(δ) + ∂_δβ_h(δ, ω)/β_h(δ, ω) + ∂_δβ_v(δ, ω)/β_v(δ, ω) = ∂_δθ_v(δ)/(g + θ_v(δ)) + (∂_δγ_v(δ) + 1)/(g + γ_v(δ) + δ). The left-hand sides of Equations <ref> and <ref> contain the logarithmic derivatives <cit.> of the functions β_h(δ, ω) and β_v(δ, ω). It is useful to apply the operator L, defined by Lu(x) = (du/dx)/u(x) for a function u(x), and to write a fraction of A as f_A. The optimal level of virulence ω is given by the solution of the equation: L_ωθ_h(ω) + L_ωβ_h(δ, ω) + L_ωβ_v(δ, ω) = ∂_ωθ_h(ω)/(μ + θ_h(ω)) + (∂_ωγ_h(ω) + 1)/(μ + γ_h(ω) + ω).
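To make the derivation above concrete, the closed-form R_0 can be checked numerically against the spectral radius of -T Σ^-1 assembled directly from the linearized subsystem. The sketch below uses arbitrary placeholder parameter values, not estimates from this study, so it only illustrates the computation.

```python
import numpy as np

# Placeholder parameter values (illustrative only, not fitted estimates).
theta_h, theta_v = 1 / 5.0, 1 / 8.0      # incubation rates in hosts / vectors
gamma_h, gamma_v = 1 / 7.0, 1 / 10.0     # recovery rates
mu, g = 1 / (70 * 365), 1 / 21.0         # background mortality of hosts / vectors
omega, delta = 0.01, 0.005               # disease-induced mortality (virulence) in hosts / vectors
beta_h, beta_v = 0.30, 0.25              # transmission rates
m = 2.0                                  # vectors per host, m = M / H

# Closed-form expression for R_0 from the text.
R0_closed = np.sqrt(
    (theta_v * m * beta_h) / ((theta_v + g) * (gamma_v + delta + g))
    * (theta_h * beta_v) / ((theta_h + mu) * (gamma_h + omega + mu))
)

# Next-generation matrix check: spectral radius of -T @ inv(Sigma),
# with T and Sigma written directly from the linearized E/I equations.
T = np.array([[0, 0, 0, m * beta_h],
              [0, 0, 0, 0],
              [0, beta_v, 0, 0],
              [0, 0, 0, 0]], dtype=float)
Sigma = np.array([[-(theta_h + mu), 0, 0, 0],
                  [theta_h, -(gamma_h + omega + mu), 0, 0],
                  [0, 0, -(theta_v + g), 0],
                  [0, 0, theta_v, -(gamma_v + delta + g)]], dtype=float)
K = -T @ np.linalg.inv(Sigma)
R0_spectral = max(abs(np.linalg.eigvals(K)))

print(f"closed form R0 = {R0_closed:.4f}, spectral radius = {R0_spectral:.4f}")
```

The two values agree, which is a quick way to catch transcription mistakes when exploring how R_0 responds to the virulence parameters ω and δ.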
§.§ Evaluated diseases in GBD study All diseases and conditions in GBD study: Urinary diseases and male infertility, Exposure to forces of nature, Environmental heat and cold exposure, Ebola, Executions and police conflict, Eating disorders, Diabetes mellitus, Acute glomerulonephritis, Chronic kidney disease, Gynecological diseases, Bacterial skin diseases, Upper digestive system diseases, Esophageal cancer, Stomach cancer, Paralytic ileus and intestinal obstruction, Inguinal, femoral, and abdominal hernia, Inflammatory bowel disease, Vascular intestinal disorders, Gallbladder and biliary diseases, Pancreatitis, Other digestive diseases, Falls, Drowning, Fire, heat, and hot substances, Poisonings, Exposure to mechanical forces, Other unspecified infectious diseases, Liver cancer, Larynx cancer, Tracheal, bronchus, and lung cancer, Breast cancer, Cervical cancer, Meningitis, Encephalitis, Diphtheria, Whooping cough, Tetanus, Measles, Varicella and herpes zoster, Malaria, Chagas disease, Leishmaniasis, African trypanosomiasis, Schistosomiasis, Cysticercosis, Cystic echinococcosis, Decubitus ulcer, Other skin and subcutaneous diseases, Sudden infant death syndrome, Road injuries, Other transport injuries, Tuberculosis, HIV/AIDS, Diarrheal diseases, Other intestinal infectious diseases, Lower respiratory infections, Upper respiratory infections, Otitis media, Testicular cancer, Kidney cancer, Bladder cancer, Brain and central nervous system cancer, Endocrine, metabolic, blood, and immune disorders, Rheumatoid arthritis, Other musculoskeletal disorders, Congenital birth defects, Protein-energy malnutrition, Other nutritional deficiencies, Sexually transmitted infections excluding HIV, Acute hepatitis, Appendicitis, Other neglected tropical diseases, Maternal disorders, Neonatal disorders, Endocarditis, Non-rheumatic valvular heart disease, Chronic obstructive pulmonary disease, Pneumoconiosis, Prostate cancer, Colon and rectum cancer, Lip and oral cavity cancer, Nasopharynx cancer, Other pharynx cancer, Gallbladder and biliary tract cancer, Pancreatic cancer, Malignant skin melanoma, Non-melanoma skin cancer, Zika virus, Conflict and terrorism, Uterine cancer, Multiple sclerosis, Motor neuron disease, Other neurological disorders, Alcohol use disorders, Drug use disorders, Alzheimer's disease and other dementias, Parkinson's disease, Idiopathic epilepsy, Adverse effects of medical treatment, Animal contact, Foreign body, Other unintentional injuries, Asthma, Interstitial lung disease and pulmonary sarcoidosis, Other chronic respiratory diseases, Cirrhosis and other chronic liver diseases, Typhoid and paratyphoid, Invasive Non-typhoidal Salmonella (iNTS), Hypertensive heart disease, Cardiomyopathy and myocarditis, Atrial fibrillation and flutter, Aortic aneurysm, Peripheral artery disease, Ovarian cancer, Thyroid cancer, Mesothelioma, Self-harm, Non-Hodgkin lymphoma, Multiple myeloma, Leukemia, Other neoplasms, Rheumatic heart disease, Interpersonal violence, Ischemic heart disease, Stroke, Other malignant neoplasms, Other cardiovascular and circulatory diseases, Hemoglobinopathies and hemolytic anemias, Hodgkin lymphoma, Dengue, Yellow fever, Rabies, Intestinal nematode infections, Osteoarthritis, Low back pain, Neck pain, Gout, Blindness and vision loss, Food-borne trematodiases, Age-related and other hearing loss, Other sense organ diseases, Oral disorders, Autism spectrum disorders, Attention-deficit/hyperactivity disorder, Scabies, Fungal skin diseases, Viral skin diseases, Acne vulgaris, 
Alopecia areata, Pruritus, Urticaria, Conduct disorder, Idiopathic developmental intellectual disability, Other mental disorders, Lymphatic filariasis, Onchocerciasis, Trachoma, Iodine deficiency, Vitamin A deficiency, Dermatitis, Psoriasis, Bipolar disorder, Anxiety disorders, Dietary iron deficiency, Headache disorders, Leprosy, Schizophrenia, Depressive disorders, Guinea worm disease Diseases with pathogen: Ebola, Bacterial skin diseases, Meningitis, Diphtheria, Whooping cough, Tetanus, Measles, Varicella and herpes zoster, Malaria, Leishmaniasis, African trypanosomiasis, Schistosomiasis, Cysticercosis, Tuberculosis, HIV/AIDS, Lower respiratory infections, Upper respiratory infections, Sexually transmitted infections excluding HIV, Acute hepatitis, Other neglected tropical diseases, Zika virus, Typhoid and paratyphoid, Dengue, Yellow fever, Rabies, Intestinal nematode infections, Scabies, Lymphatic filariasis, Trachoma, Guinea worm disease Final set of diseases in evaluation: Bacterial skin diseases, Meningitis, Diphtheria, Whooping cough, Tetanus, Measles, Varicella and herpes zoster, Tuberculosis, HIV/AIDS, Lower respiratory infections, Upper respiratory infections, Sexually transmitted infections excluding HIV, Acute hepatitis, Leishmaniasis, Typhoid and paratyphoid, Rabies, Malaria, Dengue, Yellow fever, African trypanosomiasis, Zika virus, Ebola
http://arxiv.org/abs/2407.13194v1
20240718061603
Robust Multivariate Time Series Forecasting against Intra- and Inter-Series Transitional Shift
[ "Hui He", "Qi Zhang", "Kun Yi", "Xiaojun Xue", "Shoujin Wang", "Liang Hu", "Longbing Cao" ]
cs.LG
[ "cs.LG", "cs.AI", "68Txx", "I.2.6" ]
Robust Multivariate Time Series Forecasting against Intra- and Inter-Series Transitional Shift Hui He, Qi Zhang, Kun Yi, Xiaojun Xue, Shoujin Wang, Liang Hu, Longbing Cao, Senior Member, IEEE Hui He is with the School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China (e-mail: hehui617@bit.edu.cn). Qi Zhang and Liang Hu are with the Department of Computer Science, Tongji University, Shanghai 201804, China (e-mail: {zhangqi_cs, lianghu}@tongji.edu.cn). Kun Yi and Xiaojun Xue are with the School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China (e-mail: {yikun, xiaojunx}@bit.edu.cn). Shoujin Wang is with the Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia (e-mail: shoujin.wang@uts.edu.au). Longbing Cao is with the School of Computing, Macquarie University, Sydney, NSW 2109, Australia (e-mail: longbing.cao@mq.edu.au). § ABSTRACT The non-stationary nature of real-world Multivariate Time Series (MTS) data presents forecasting models with the formidable challenge of a time-variant distribution of time series, referred to as distribution shift. Existing studies on the distribution shift mostly adhere to adaptive normalization techniques for alleviating temporal mean and covariance shifts, or to time-variant modeling for capturing temporal shifts. Despite improving model generalization, these normalization-based methods often assume a time-invariant transition between outputs and inputs but disregard specific intra-/inter-series correlations, while time-variant models overlook the intrinsic causes of the distribution shift. This limits the expressiveness and interpretability of models tackling the distribution shift in MTS forecasting. To mitigate such a dilemma, we present a unified Probabilistic Graphical Model to Jointly capture intra-/inter-series correlations and model the time-variant transitional distribution, and instantiate a neural framework called JointPGM for non-stationary MTS forecasting. Specifically, JointPGM first employs multiple Fourier basis functions to learn dynamic time factors and designs two distinct learners: intra-series and inter-series learners.
The intra-series learner effectively captures temporal dynamics by utilizing temporal gates, while the inter-series learner explicitly models spatial dynamics through multi-hop propagation, incorporating Gumbel-softmax sampling. These two types of series dynamics are subsequently fused into a latent variable, which is inversely employed to infer time factors, generate the final prediction, and perform reconstruction. We validate the effectiveness and efficiency of JointPGM through extensive experiments on six highly non-stationary MTS datasets, achieving state-of-the-art MTS forecasting performance. Index Terms: Multivariate time series (MTS) forecasting, probabilistic graphical model (PGM), distribution shift, non-stationarity. § INTRODUCTION Multivariate Time Series (MTS) forecasting plays an increasingly important role in real-world applications, such as weather condition estimation <cit.>, stock trend analysis <cit.>, electricity consumption planning <cit.>, and traffic flow and speed prediction <cit.>. Impressively, various deep learning-based approaches have emerged and led to a surge in deep MTS forecasting models. These approaches elaborately capture complex temporal variations by Temporal Convolution Networks (TCNs) <cit.>, Recurrent Neural Networks (RNNs) <cit.>, and Transformers <cit.>, or explore specific variable-wise dependencies by Graph Neural Networks (GNNs) <cit.>. Despite the remarkable performance, they fall short in adapting to real-world scenarios where the distributions (i.e., statistical properties) of time series change over time due to dynamic generation mechanisms. This phenomenon, known as distribution shift <cit.>, exposes the highly dynamic and non-stationary nature of time series. It poses significant challenges for forecasting models in generalizing effectively to varying distributions. Such vulnerability to rapid distributional changes <cit.> ultimately results in a dramatic decline in forecasting accuracy over time <cit.>. Researchers have explored two primary categories of approaches to tackle the distribution shift in MTS forecasting. The first category involves normalization techniques to align time series instances based on the Gaussian assumption. They normalize the input and denormalize the output using adaptively learned statistics (e.g., mean and variance) <cit.> to alleviate the temporal mean and covariance shift among instances or between inputs and outputs. Most of these methods, however, assume a time-invariant transitional distribution between output predictions and input observations, i.e., 𝒫(x_u:u+H|x_u-L:u)=𝒫(x_v:v+H|x_v-L:v) at any two steps u≠ v. This assumption severely simplifies the non-stationarity of time series and is not consistent with the practical distribution shift <cit.>. Consider a simple example: in stock prediction, the financial factors 𝒫(x_t) naturally change due to market fluctuations. Meanwhile, the economic laws 𝒫(x_t:t+h|x_t-L:t) are also vulnerable to abrupt policies, such as government price controls. Additionally, these methods focus on exploring the variable-wise data distribution, overlooking specific intra-/inter-series correlations <cit.>. As a result, they struggle to effectively address the corresponding non-stationarity, especially the distribution shift associated with inter-series dynamics. The second category aims to model the time-variant transitional distribution, i.e., 𝒫_t(x_t:t+H|x_t-L:t) adapted to any time step t, to improve models' temporal generalization.
Many advanced models within this category have integrated time information to enhance forecasting performance, indicating that time information enables the models to effectively capture the time-variant characteristics of time series, thereby alleviating the issue of non-stationarity <cit.>. Herein, some models incorporate temporal meta-knowledge to correct the bias caused by the distribution shift within a discriminative meta-learning framework <cit.>. Another representative study <cit.> utilizes Koopman operators as linear portraits of implicit transitions to approximate time-invariant and time-variant dynamics. However, these methods often overlook the underlying causes of the distribution shift, which results in a prediction/generation process that resembles a black box. Consequently, the models' expressiveness and interpretability, both crucial in various real-world applications, become limited <cit.>. For instance, the distribution of traffic flow on a road can undergo sudden changes triggered by various events, such as unexpected temperature spikes on that road or traffic accidents on connected routes. Upon a deeper examination of MTS forecasting, it is evident that a prevalent and advanced approach to capturing intra-/inter-series correlations involves decomposing the transitional distribution into intra-/inter-series transitional distributions, i.e., 𝒫_t(x_t:t+H^(i)|X_t-L:t)=𝒫_t(x_t:t+H^(i)|x_t-L:t^(i))𝒫_t(x_t:t+H^(i)|X̄_t-L:t^(i)), where X̄^(i) refers to all the variables excluding the i^th variable. Consequently, we argue that the observed non-stationarity in MTS is primarily attributed to the implicitly time-variant intra- and inter-series correlations <cit.>. Based on this insight, we decompose the transitional shift into the intra-series transitional shift, i.e., 𝒫(x_u:u+H^(i)|x_u-L:u^(i)) ≠𝒫(x_v:v+H^(i)|x_v-L:v^(i)), and the inter-series transitional shift, i.e., 𝒫(x_u:u+H^(i)|x̄_u-L:u^(i)) ≠𝒫(x_v:v+H^(i)|x̄_v-L:v^(i)). Compared to quantifying the superficial data distribution, i.e., 𝒫(x_t), exploring the transitional distribution is more rational and practical. Estimating data distributions can be more challenging, whereas conditional distributions are readily available and offer a more intuitive understanding of how MTS data are generated. Furthermore, compared to previous time-variant models with a coarse-grained transitional distribution, this decomposition corresponds to a general modeling approach that learns observation-prediction projections and accounts for intra- and inter-series correlations. It reveals the intrinsic causes of the transitional distribution, ensuring desirable model interpretability and the potential to enhance forecasting performance by jointly addressing the distribution shift and modeling intra-/inter-series correlations. In light of the above discussion, we devise a unified Probabilistic Graphical Model that Jointly captures intra-/inter-series correlations and models the time-variant transitional distribution, and instantiate a neural framework, named JointPGM, for non-stationary MTS forecasting. Known for effectively representing complex distributions and statistical relationships among variables in a principled and interpretable manner, PGM is an underexplored but promising framework for robust MTS forecasting against distribution shifts. We organize JointPGM as a dual-encoder architecture, which includes a time factor encoder (TFE) and an independence-based series encoder (ISE).
Technically, in TFE, JointPGM employs multiple Fourier basis functions to capture dynamic time factors and introduces linear projections to learn the mean and variance of Gaussian sampling. Corresponding to the intra-/inter-series transitional shifts, JointPGM sequentially employs two distinct learners in ISE: intra-series and inter-series learners. The intra-series learner focuses on capturing temporal dynamics within each series and utilizes a temporal gate adjusted by learned time factors to control the message passing of temporal features, making them sensitive to non-stationary environments. The inter-series learner explicitly models spatial dynamics through multi-hop propagation incorporating Gumbel-softmax sampling. These two types of series dynamics are then fused and transformed into the latent variable, which is inversely used to infer time factors, generate final prediction, and perform reconstruction. We incorporate various constraints on all the above sub-processes based on a tailored PGM framework with theoretical guarantees, ensuring a clear understanding of the role played by each sub-process in forecasting MTS with non-stationarity. To summarize, our key contributions are as follows: * Going beyond previous methods, we propose JointPGM, an effective neural framework for MTS forecasting under fine-grained transitional shift, built upon a probabilistic graphical model with jointly addressing the distribution shift and modeling intra-/inter-series correlations. * To achieve JointPGM, we elaborately design two distinct learners: an intra-series learner to capture temporal dynamics via temporal gates and an inter-series learner to explicitly model spatial dynamics through multi-hop propagation by incorporating Gumbel-softmax sampling. * We conduct extensive experiments on six highly non-stationary MTS datasets, achieving state-of-the-art performance with an average improvement of 15.3% in MAE and 37.9% in MSE over all baselines for forecasting. § RELATED WORK §.§ Deep Models for Multivariate Time Series Forecasting Multivariate time series (MTS) forecasting is a longstanding research topic <cit.>. Initially, traditional statistical models such as Gaussian process (GP) <cit.> have been proposed for their appealing simplicity and interpretability. Recently, with the bloom of deep learning, many deep models with elaboratively designed architectures have made great breakthroughs in capturing intra- and inter-series correlations for MTS forecasting. On one hand, the RNN- <cit.> and TCN-based <cit.> models have shown competitiveness in modeling complex temporal relationships. However, due to their recurrent structures or the locality property of one-dimensional convolutional kernels, they are limited in handling long-term dependencies. Soon afterward, Transformer and its variants <cit.> have achieved superior performance on MTS forecasting, particularly notable in scenarios with long prediction lengths. They focus on renovating the canonical structure and designing a novel attention mechanism to reduce the quadratic complexity while automatically learning the correlations between elements in a series. Despite the complicated design of Transformer-based models, recent MLP-based models <cit.> with simple structure and low complexity can surpass previous models across various common benchmarks for MTS forecasting. Another crucial aspect of MTS forecasting involves capturing the correlations among multiple time series. 
Current models highly depend on GNNs <cit.> or ordered tree <cit.> due to their remarkable capability in modeling structural dependencies. Most of them can automatically learn the topological structure of inter-series correlations by leveraging node similarity <cit.> or self-attention mechanism <cit.>. More recently, Crossformer <cit.> and iTransformer <cit.> have been specifically proposed to explicitly capture the mutual interactions among multiple variables by refurbishing the architecture and components such as the attention module of Transformer. Different from previous works focusing on better modeling temporal relationships within and among time series, we analyze the MTS forecasting task from a more fundamental review of the non-stationary nature, which constitutes an indispensable property of MTS data. §.§ Improving Robustness against Distribution Shifts Despite many remarkable deep models, MTS forecasting still suffers from severe distribution shifts <cit.> considering the distribution of real-world series changes temporally. To improve robustness over varying distributions, one category of widely-explored methods <cit.> stationarize deep model inputs by the normalization techniques. For example, RevIN <cit.> proposes a reversible instance normalization technique to reduce temporal distribution shift. Based on RevIN, Dish-TS <cit.> designs a dual coefficient network to learn two sets of distribution coefficients and captures the distribution shift between inputs and outputs. Stationary <cit.> adopts de-stationary attention to handle the over-stationarization issue which may damage the model's capability of modeling specific temporal dependency. SAN <cit.> utilizes slice-level adaptive normalization to mitigate non-stationarity. However, these methods typically assume a time-invariant transitional distribution and overlook the distribution shift caused by inter-series dynamics. Another category <cit.> learns to model time-variant transitional distribution by incorporating temporal meta-knowledge to correct the bias caused by distribution shift in a discriminative meta-learning framework, which is generally designed for bridging the gap between the training and test data. More recently, Koopman predictors Koopa <cit.> and KNF <cit.> employ Koopman operators as linear portraits of implicit transitions to capture time-invariant and time-variant dynamics. While these models model the time-variant transitional distributions, such coarse-grained modeling fails to reveal the intrinsic causes of the transitional distribution, limiting the models' interpretability and expressiveness. In this paper, we propose JointPGM to model the practical transitional distribution and decompose it based on the prevalent approach of learning intra-/inter-series correlations. § PROBLEM FORMULATIONS In this section, we start with the formulations of MTS forecasting and define the concepts central to distribution shift. Detailed notations are summarized in Table <ref> in Appendix <ref>. Multivariate Time Series Forecasting. Let a regularly sampled time series dataset with a total of N distinct time series and T time steps be denoted as [x^(1),..., x^(i),...,x^(N)] ∈ℝ^N × T, where x^(i)∈ℝ^T denotes the sequence values of time series i at T time steps. 
Given a lookback window of length L and a horizon window of length H, multivariate time series forecasting utilizes the historical multivariate observations X_t-L:t={x_t-L:t^(i)}_i=1^N to predict their future multivariate values X_t:t+H={x_t:t+H^(i)}_i=1^N at time step t. The forecasting process can be formulated as: X_t:t+H=ℱ_Θ(X_t-L:t)=ℱ_Θ({x_t-L:t^(i)}_i=1^N) where the function map ℱ_Θ:ℝ^N × L→ℝ^N × H can be regarded as a forecasting model parameterized by Θ. Distribution Shift in Time Series. Recall the intuitive financial example mentioned in Section <ref>, where the economic laws are vulnerable to abrupt policy changes. Therefore, we propose to adopt the more rational and practical transitional shift assumption and further decompose the integrated transitional shift in time series into two types at a finer granularity, namely, intra-series transitional shift and inter-series transitional shift, with their definitions provided below. Given the i^th time series x^(i), which can be split into several lookback windows {x_t-L:t^(i)}_t=L^T-H and their corresponding horizon windows {x_t:t+H^(i)}_t=L^T-H, the intra-series transitional shift refers to the case in which the transitional distribution 𝒫(x_u:u+H^(i)|x_u-L:u^(i)) ≠𝒫(x_v:v+H^(i)|x_v-L:v^(i)) for any two time steps u and v with L ≤ u ≠ v ≤ T-H. Given the i^th time series x^(i) with its complementary set x̄^(i), i.e., all series other than the i^th, which can likewise be split into lookback windows {x̄_t-L:t^(i)}_t=L^T-H and corresponding horizon windows {x̄_t:t+H^(i)}_t=L^T-H, the inter-series transitional shift refers to the case in which the transitional distribution 𝒫(x_u:u+H^(i)|x̄_u-L:u^(i)) ≠𝒫(x_v:v+H^(i)|x̄_v-L:v^(i)) for any two time steps u and v with L ≤ u ≠ v ≤ T-H. The combination of these two definitions fully describes the complex distribution shifts encountered in reality. The former indicates the variations in transitional distribution within each series, while the latter reflects the variations in transitional distribution among different series. Since characterizing the local relationship between pairwise series in Definition <ref> is overly complex, we describe the relationship between each series and its complementary set from a global perspective. § METHODOLOGY In this section, we first present our tailored PGM and formally analyze the distinctions between normalization-based methods, time-variant models, and ours in Section <ref>. In Section <ref>, we introduce the corresponding instantiated dual-encoder architecture. Finally, we decompose the learning objective based on the PGM and our purpose in Section <ref>. §.§ Probabilistic Decomposition for Transitional Shift Recall the notations of the non-stationary MTS forecasting task, i.e., X_t-L:t, X_t:t+H, and t; they correspond to the latent variables Z_t-L:t, Z_t:t+H, and Z_t, respectively. We construct the probabilistic graphical model for normalization-based methods, time-variant models, and our proposed JointPGM. The corresponding graphical representations of their overall computational paths are shown in Figure <ref>. Normalization-based methods utilize adaptively learned mean μ_t and variance σ^2_t to normalize the input observations X_t-L:t and encode them into their latent variable Z_t-L:t, which is subsequently decoded into the output predictions X_t:t+H through de-normalization, alleviating the temporal mean and covariance shift between inputs and outputs.
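As a concrete illustration of this normalize-forecast-denormalize recipe (and of the time-invariant transition it implicitly assumes), the sketch below wraps per-window instance normalization around a placeholder forecaster, in the spirit of RevIN-style methods; the linear forecaster, shapes, and names are illustrative assumptions rather than the implementation of any specific cited method.

```python
import torch
import torch.nn as nn

class NormalizeForecastDenormalize(nn.Module):
    """Per-instance normalize -> forecast -> de-normalize (RevIN-style sketch)."""
    def __init__(self, n_series: int, lookback: int, horizon: int):
        super().__init__()
        self.forecaster = nn.Linear(lookback, horizon)            # placeholder forecasting model
        self.affine_weight = nn.Parameter(torch.ones(n_series))   # learnable per-series affine terms
        self.affine_bias = nn.Parameter(torch.zeros(n_series))

    def forward(self, x):                                          # x: (batch, N, L)
        mean = x.mean(dim=-1, keepdim=True)                        # statistics of the lookback window
        std = x.std(dim=-1, keepdim=True) + 1e-5
        x_norm = (x - mean) / std
        x_norm = x_norm * self.affine_weight[None, :, None] + self.affine_bias[None, :, None]
        y_norm = self.forecaster(x_norm)                           # (batch, N, H)
        y_norm = (y_norm - self.affine_bias[None, :, None]) / self.affine_weight[None, :, None]
        return y_norm * std + mean                                 # de-normalize with the same statistics

model = NormalizeForecastDenormalize(n_series=8, lookback=96, horizon=192)
out = model(torch.randn(32, 8, 96))                                # -> (32, 8, 192)
```

Note that the same statistics computed on the lookback window are reused to rescale the horizon window, which is exactly where the time-invariant transition assumption enters.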
As Figure <ref> shows, this process assumes that the dependency between X_t-L:t and X_t:t+H (i.e., transitional distribution) remain fixed over time t. As shown in Figure <ref>, time-variant models model the dependency between X_t-L:t and X_t:t+H at each time step t (i.e., time-variant transitional distribution). However, this coarse-grained process mixes the transitional patterns occurring within each series and among different series, failing to reveal the intrinsic causes of the distribution shift. In contrast, JointPGM segments X_t-L:t along the variable dimension, obtaining N distinct series as input, and then encodes each series x_t-L:t^(i) separately into its latent variable ẑ_t-L:t^(i), as shown in Figure <ref>. The time factors t are additionally introduced to dynamically regulate the mapping processes both within each series (t, x_t-L:t^(i)→ẑ_t-L:t^(i)) and among different series (ẑ_t-L:t^(i)→A_t). Thus, the intra-/inter-series correlations are captured, collectively forming the final latent variable Z_t-L:t. In alignment with this process, t→Z_t denotes encoding time factors into the corresponding latent variable. Afterward, Z_t-L:t→Z_t means using a variational distribution 𝒫(Z_t|Z_t-L:t) to approximate the distribution 𝒫(Z_t|t). Herein, this relationship is designed to reversely infer time factors in latent space. As the latent variable Z_t-L:t is exploited to generate X_t:t+H, the time-variant transitional distribution is naturally decomposed into intra-/inter-series transitional distributions at a finer granularity. §.§ Dual-Encoder Architecture JointPGM focuses on a probabilistic manner to account for the underlying causes of distribution shift in MTS forecasting. As Figure <ref> shows, JointPGM is organized with a dual-encoder architecture, which mainly involves four main components: 1) Time factor encoder (TFE) takes temporal order set {t-L+1,...,t+H} as input to learn the dynamic time factors M_t^(1) and their latent variable Z_t, which can reflect the clues of environmental changes; 2) Independence-based series encoder (ISE) captures series correlations by two distinct learners. While intra-series learner (left part of ISE in Figure <ref>) focuses on capturing temporal dynamics within each series with temporal gate G^(i) adjusted by M_t^(1), inter-series learner (right part of ISE in Figure <ref>) is to explicitly model the spatial dynamics with multi-hop propagation incorporating Gumbel-softmax sampling; 3) Dynamic Inference (DI) uses latent variable Z_t-L:t to dynamically infer time factors and align with Z_t; 4) Decoder transforms Z_t-L:t, formed by these two dynamics, into the final prediction and reconstruction. §.§.§ Time Factor Encoder (TFE) Learning time factor representation that can accurately reflect irregular environmental changes is crucial for modeling distribution shifts. Transformer-based methods <cit.> obtain learnable additive position encoding by heuristic sinusoidal mapping to distinguish the temporal order of tokens or patches. However, this design only monitors the temporal order of the lookback window, neglecting the association with its corresponding horizon window and thereby compromising predictive performance. In this regard, we propose to use temporal orders that span across both windows t={0,...,i+L/L+H-1,...,1} for i=-L,-L+1,...,H-1, i.e., a [0,1]-normalized temporal order set. It is noteworthy that timestamp features (e.g., Minute-of-Hour, Day-of-Week, etc.) are also informative and can contribute to learning time factors. 
We opt for order features due to their more compact representations compared to timestamps. Additionally, embedding timestamp features with MLPs may have limitations in learning high-frequency patterns, commonly known as `spectral bias' <cit.>. To obtain the high-quality representation of conditional information, we concatenate multiple Fourier basis functions with diverse scale parameters as suggested by <cit.>, and then learn the deep features and align the dimensions using a feedforward neural network: M_t^(0)=sin(2πB_1 t)|cos(2πB_1 t)|...|sin(2πB_s t)|cos(2πB_s t), M^(1)_t=FeedForward(M^(0)_t), where elements in B_s ∈ℝ^b/2s are sampled from 𝒩(0,σ^2_s) with b denotes the Fourier feature size. M_t^(0)∈ℝ^(L+H) × b and M_t^(1)∈ℝ^L × d with d denotes the latent dimension size. σ_s ∈{0.01,0.1,1,5,10,20, 50,100} denotes the scale hyperparameter and s is its corresponding index starting from 1. ·|· represents the concatenation operation. FeedForward: ℝ^(L+H) × b→ℝ^L × d is implemented by two linear layers with intermediate ReLU non-linearity. As shown in Figure <ref>, taking the Fourier basis function cos(·) as an example, its output has two main properties that could aid JointPGM in distinguishing different temporal orders: similar temporal orders yield similar representations (e.g., the plot of t,t+1) and the larger the temporal order the earlier the values in representations oscillate between -1 and +1 (e.g., the plot of t,t+H). Then, we model 𝒫(Z_t|t) by stochastically sample Z_t from the Gaussian distribution using the reparameterization trick: μ_t=f^μ_t(M_t^(1)), σ_t^2=f^σ^2_t(M_t^(1)), 𝒫(Z_t|t)=𝒩(μ_t,σ^2_tI), where two multivariate functions f^μ_t(·) and f^σ^2_t(·) map the input M_t^(1) to the mean and variance vectors of size N × d and N × d. In practice, f^μ_t(·) and f^σ^2_t(·) are instantiated as a single linear layer. §.§.§ Independence-based Series Encoder (ISE) Series independence mechanism refers to the case of taking only one individual series as model input at each instance and mapping it into a latent space, rather than simultaneously incorporating all time series to mix information. This mechanism allows the model to only focus on learning information along the time axis and has shown effectiveness in working with linear models <cit.> and Transformer-based models <cit.> in time series forecasting tasks. Therefore, we apply the series independence mechanism to sequentially explore two distinct types of correlations: intra- and inter-series correlations, thereby benefiting the modeling of time-variant transitional distribution within each series (i.e., 𝒫(x_t:t+H^(i)|x_t-L:t^(i),t)) and among different series (i.e., 𝒫(x_t:t+H^(i)|x_t-L:t^(i),t)) respectively. Note that such sequential style is beneficial and widely adopted by <cit.>, with intra-series learner providing a solid representation foundation for inter-series learner. ISE mainly consists of a intra-series learner and an inter-series learner, which are introduced as follows: Intra-series Learner. The intra-series learner takes X_t-L:t ∈ℝ^N × L as input, which can be split into N series. To illustrate the modeling process of intra-series transitional distribution, we draw inspiration from the structure of <cit.> and take the i^th series x^(i)_t-L:t∈ℝ^L as an example. 
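Before detailing the intra-series learner, a minimal sketch of the order-based time factor encoder described above may be helpful. It follows the Fourier-feature construction and Gaussian sampling given in the text, reading the normalized temporal orders as (i+L)/(L+H-1); the equal per-scale feature split, the log-variance parameterization, and the linear mapping from L+H positions down to L are simplifying assumptions of this sketch rather than the exact design.

```python
import math
import torch
import torch.nn as nn

class TimeFactorEncoder(nn.Module):
    """Sketch of the order-based Fourier time factor encoder (shapes simplified)."""
    def __init__(self, lookback=96, horizon=192, fourier_size=64, d_model=128,
                 scales=(0.01, 0.1, 1, 5, 10, 20, 50, 100)):
        super().__init__()
        L, H, S = lookback, horizon, len(scales)
        # Normalized temporal orders t_i = (i + L) / (L + H - 1) for i = -L, ..., H - 1.
        orders = (torch.arange(-L, H, dtype=torch.float32) + L) / (L + H - 1)
        self.register_buffer("orders", orders)                               # (L+H,)
        per_scale = fourier_size // (2 * S)                                  # assumed equal split of b over scales
        freqs = torch.cat([torch.randn(per_scale) * s for s in scales])      # frequencies B_s ~ N(0, sigma_s^2)
        self.register_buffer("freqs", freqs)                                 # (b/2,)
        self.feedforward = nn.Sequential(nn.Linear(2 * per_scale * S, d_model),
                                         nn.ReLU(),
                                         nn.Linear(d_model, d_model))
        self.to_lookback = nn.Linear(L + H, L)     # align L+H positions to L (bookkeeping assumption)
        self.f_mu = nn.Linear(d_model, d_model)
        self.f_logvar = nn.Linear(d_model, d_model)                          # log-variance assumed

    def forward(self):
        angles = 2 * math.pi * self.orders[:, None] * self.freqs[None, :]    # (L+H, b/2)
        m0 = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)       # M_t^(0): (L+H, b)
        m1 = self.feedforward(m0)                                            # M_t^(1): (L+H, d)
        m1 = self.to_lookback(m1.transpose(0, 1)).transpose(0, 1)            # (L, d)
        mu, logvar = self.f_mu(m1), self.f_logvar(m1)
        z_t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)            # reparameterized sample of Z_t
        return m1, z_t

m1, z_t = TimeFactorEncoder()()    # m1: (96, 128), z_t: (96, 128)
```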
Concretely, x^(i)_t-L:t is fed into a linear layer according to our series-independent setting, then the linear layer will provide mapping results accordingly: h^(i)_t-L:t=Linear(x^(i)_t-L:t), where Linear: ℝ^L →ℝ^d is implemented by a single linear layer, and h^(i)_t-L:t∈ℝ^d. All series share the weights along the time dimension. To render the modeled transitional distribution change over time, we design a temporal gate that is capable of distilling discriminative historical signals <cit.> sensitive to non-stationary environments based on dynamic time factors. Specifically, we utilize a linear layer with a Sigmoid activation function to learn the temporal gate G∈ℝ^N × d. Subsequently, the i^th gate G^(i) is applied to the representation h_t-L:t^(i) of the i^th series: G=Sigmoid(Linear(M_t^(1))), ĥ_t-L:t^(i)=G^(i)⊙h_t-L:t^(i), where ⊙ is element-wise multiplication, and all sub-gates G^(i),i ∈{1,...,N} share the weights along time dimension. To capture the transitional distribution for each series, we explicitly model the distribution 𝒫(ẑ_t-L:t^(i)|x_t-L:t^(i),t) by stochastically sampling each latent variable ẑ_t-L:t^(i) from a Gaussian distribution: μ_ẑ=f^μ_ẑ(ĥ^(i)_t-L:t), σ^2_ẑ=f^σ^2_ẑ(ĥ^(i)_t-L:t), 𝒫(ẑ_t-L:t^(i)|x_t-L:t^(i),t)=𝒩(μ_ẑ,σ^2_ẑI), where ẑ_t-L:t^(i)∈ℝ^d. f^μ_ẑ(·) and f^σ^2_ẑ(·) are instantiated as a single linear layer and share weights between the latent states of N series. Finally, we ensemble ĥ^(i)_t-L:t and ẑ^(i)_t-L:t of N time series into a whole respectively, yielding respective outputs Ĥ_t-L:t∈ℝ^N × d and Ẑ_t-L:t∈ℝ^N × d. Considering the previously implemented series-independent processes, where each series x_t-L:t^(i) is independent of each other series belonging to x_t-L:t^(i), we can compose the posterior distribution 𝒫(Ẑ_t-L:t|X_t-L:t,t) from multiple sub-distributions 𝒫(ẑ_t-L:t^(i)|x_t-L:t^(i),t), i ∈{1,...,N}. Inter-series Learner. Most methods <cit.> randomly initialize node embeddings for all nodes and infer the dependencies between each pair of nodes by multiplication operations. The adjacency matrix derived in this way is essentially input-unconditioned, making it challenging to effectively handle abrupt changes in non-stationary time series. Hence, we propose to calculate the relationships between nodes by the self-attention mechanism <cit.>: Q_t=Ẑ_t-L:tW^Q_t, K_t=Ẑ_t-L:tW^K_t, W_t=Softmax(Q_tK_t^ T/√(d)), where Q_t and K_t indicate the representation for query and key at time step t, which can be calculated by linear projections with learnable parameters W_t^Q and W_t^K respectively. Here, W_t is the continuous version of the adjacency matrix (a.k.a., probability matrix) then w_ij,t∈W_t denotes the probability to preserve the edge of series i to j at time step t. However, such soft weights are incapable of decisively choosing between retaining or discarding edges, thereby hindering the explicit modeling of how each series is influenced by its relevant series during the distribution shift. Therefore, inspired by <cit.>, we apply the Gumbel reparameterization trick: a_ij,t=Sigmoid ((log(w_ij,t/(1-w_ij,t)))+(g^1_ij,t-g^2_ij,t))/τ), s.t. g^1_ij,t,g^2_ij,t∼Gumbel(0,1), where τ∈ (0,∞) is a temperature parameter. When τ→ 0, a_ij,t=1 ∈A_t with probability w_ij,t and 0 with remaining probability. Afterward, we utilize multi-hop propagation, which is a simplified version of mix-hop propagation proposed by <cit.>, to aggregate information from immediate neighbors. 
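Before writing out that propagation, the sketch below shows how the input-conditioned adjacency A_t itself can be sampled, following the attention-based edge probabilities W_t and the Gumbel reparameterization above; the weight shapes and the optional straight-through rounding are illustrative assumptions.

```python
import torch

def gumbel_sigmoid_adjacency(z_hat, w_q, w_k, tau=0.5, hard=False):
    """z_hat: (N, d) per-series latents -> sampled adjacency A_t of shape (N, N)."""
    q, k = z_hat @ w_q, z_hat @ w_k                                   # queries and keys
    w = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)           # edge probabilities w_{ij,t}
    w = w.clamp(1e-6, 1 - 1e-6)                                       # keep the logit finite
    g1 = -torch.log(-torch.log(torch.rand_like(w)))                   # Gumbel(0, 1) samples
    g2 = -torch.log(-torch.log(torch.rand_like(w)))
    a = torch.sigmoid((torch.log(w / (1 - w)) + (g1 - g2)) / tau)     # relaxed edges in (0, 1)
    if hard:                                                          # optional straight-through rounding
        a = (a > 0.5).float() + a - a.detach()
    return a

N, d = 8, 128
z_hat = torch.randn(N, d)
w_q, w_k = torch.randn(d, d) / d ** 0.5, torch.randn(d, d) / d ** 0.5
A_t = gumbel_sigmoid_adjacency(z_hat, w_q, w_k, tau=0.5)              # (8, 8)
```

As the temperature tau approaches zero the relaxed edges concentrate near 0 or 1, matching the behavior described for Eq. above while keeping the sampling step differentiable.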
Given the input Ĥ_t-L:t and adjacency matrix A_t, the process of K-layer propagation can be formulated as follows: H̃_t-L:t^(k)=A_tH̃_t-L:t^k-1, H̃_t-L:t=∑_k=1^KLinear^(k)(H̃_t-L:t^(k)), where K is the depth of propagation, H̃_t-L:t∈ℝ^N × d denotes the output representation of the current layer, H̃_t-L:t^(0)=Ĥ_t-L:t. Such simplification provides an important insight: under the wild non-stationarity, mixing the original representation Ĥ_t-L:t in each hop can easily introduce noise into the inter-series correlation learning, as well as into the subsequent inter-series transitional distribution modeling. Here we explicate the distribution modeling from a holistic perspective. We denote the latent state of X_t-L:t that are inferred from H̃_t-L:t by Z̃_t-L:t, which is distinguished from Ẑ_t-L:t achieved by the temporal gate. The distribution 𝒫(Z̃_t-L:t|X_t-L:t,t) is stochastically sampled Z̃_t-L:t from the Gaussian distribution: μ_Z̃=f^μ_Z̃(H̃_t-L:t), σ^2_Z̃=f^σ^2_Z̃(H̃_t-L:t), 𝒫(Z̃_t-L:t|X_t-L:t,t)=𝒩(μ_Z̃,σ^2_Z̃I), where Z̃_t-L:t∈ℝ^N × d, f^μ_Z̃(·) and f^σ^2_Z̃(·) are also instantiated as a single linear layer. For each series z̃_t-L:t^(i)∈Z̃_t-L:t, the posterior approximation can be successfully represented by a product of two sub-distributions: 𝒫(z̃_t-L:t^(i)|x_t-L:t^(i),t)=𝒫(z̃^(i)_t-L:t|ẑ^(i)_t-L:t)𝒫(ẑ^(i)_t-L:t|x^(i)_t-L:t,t). Accordingly, the process of Gumbel-softmax sampling explicitly selects correlated series x^(i)_t-L:t for each series x_t-L:t^(i), i ∈{1,...,N}, which is part of the former, while the latter involves the independent modeling of each correlated series. Then, the latent variable output from intra-series learner, Ẑ_t-L:t, joints the latent variable output from inter-series learner, Z̃_t-L:t, to form Z_t-L:t∼𝒫(Z_t-L:t|X_t-L:t,t), with their proportions regulated by trade-off parameter α: Z_t-L:t=αẐ_t-L:t+(1-α)Z̃_t-L:t, §.§.§ Dynamic Inference (DI) As illustrated in Section <ref>, we aim to use variational distribution 𝒫(Ẑ_t|Z_t-L:t) to estimate the distribution 𝒫(Z_t|t) achieved by time factor encoder. Accordingly, we derive the latent variable Ẑ_t through a linear layer. After that, we use two linear functions f^μ_t̂(·) and f^σ^2_t̂(·) to map the latent state Ẑ_t to the mean and variance vectors, as formulated follows: Ẑ_t=Linear(Z_t-L:t), μ_t̂=f^μ_t̂(Ẑ_t), σ^2_t̂=f^σ^2_t̂(Ẑ_t), §.§.§ Decoder We utilize the learned latent variable Z_t-L:t to perform reconstruction and prediction with one forward step which can avoid error accumulation, as formulated below: X̂_t-L:t=FeedForward_rec(Z_t-L:t), X̂_t:t+H=FeedForward_pre(Z_t-L:t), where FeedForward_rec: ℝ^d →ℝ^L and FeedForward_pre: ℝ^d →ℝ^H are both implemented using two linear layers with intermediate LeakyReLU non-linearity. §.§ Objective Decomposition For simplicity, we abbreviate X_t-L:t as X_L, Z_t-L:t as Z_L, X_t:t+H as X_H, and omit intermediate variables ẑ_t-L:t^(i) and A_t when there is no confusion. To tackle the distribution shift in MTS forecasting, our objective is to explicitly model the time-variant transitional distribution between output predictions and input observations. It requires the learned latent variables of time series to be informative and discriminative, while also exhibiting a high sensitivity to dynamic time factors that can reflect non-stationary environments. 
Therefore, based on our tailored PGM, we conduct the following variational inference using the Kullback-Leibler (𝕂𝕃) divergence: ℒ=𝕂𝕃[𝒫_ψ(X_H,Z_L,Z_t|X_L)||𝒫_ϕ(X_H,Z_L,Z_t|X_L,t)], where 𝒫_ψ(·|·) and 𝒫_ϕ(·|·) are two transitional distributions, with ψ and ϕ denoting the parameterized functions. The divergence Eq. (<ref>) is minimized concerning all parameters. Regarding the 𝕂𝕃 divergence Eq. (<ref>), we show that the divergence can be decomposed as: ℒ =𝕂𝕃[𝒫_ψ(X_H,Z_L,Z_t|X_L)||𝒫_ϕ(X_H,Z_L,Z_t|X_L,t)] =𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫_ϕ(Z_L|X_L,t)]_(a) +𝔼_Z_L ∼𝒫_ψ(Z_L|X_L)𝕂𝕃[𝒫_θ(Z_t|Z_L)||𝒫_ϕ(Z_t|t)]_(b) +𝔼_(Z_L,Z_t) ∼𝒫_ψ(Z_L,Z_t|X_L)𝕂𝕃[𝒫_ψ(X_H|Z_L)||𝒫_ϕ(X_H|X_L,t)]_(c), We prove the Proposition <ref> in Appendix <ref>. Then, we detailedly analyze the above three terms in Eq. (<ref>) combining the designed PGM and our purpose, and provide the loss function for each item and overall loss function. For term (a), i.e., 𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫_ϕ(Z_L|X_L,t)], it aims to keep the inference using the variational distribution and the inference using the posterior is close, which also guarantees the reliable and high-quality sampling. We employ the variational evidence lower bound (ELBO) to constrain the term (a). Mathematically, we have: ℒ_a =-ELBO =-𝔼_Z_L[log𝒫(X_L|Z_L)]+𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫(Z_L)] =l(X̂_L-X_L) + (-logσ_Z+1/2σ^2_Z+1/2μ_Z^2-1/2), where l denotes a distance metric for which we use the MSE loss, μ_Z and σ^2_Z are the mean and variance vectors of Z_L. For term (b), i.e., 𝔼_Z_L ∼𝒫_ψ(Z_L|X_L)𝕂𝕃[𝒫_θ(Z_t|Z_L)||𝒫_ϕ(Z_t|t)], to make time factors more sensitive to non-stationary environments, we use a variational distribution to approximate posterior distribution 𝒫_ϕ(Z_t|t). Mathematically, we have: ℒ_b=-logσ_t̂/σ_t+1/2σ^2_t̂/σ^2_t+1/2(μ_t̂-μ_t)^2/σ^2_t-1/2, where μ_t and σ^2_t are the mean and variance of Z_t. μ_t̂ and σ^2_t̂ are the mean and variance of Ẑ_t. For term (c), i.e., 𝔼_(Z_L,Z_t) ∼𝒫_ψ(Z_L,Z_t|X_L)𝕂𝕃[𝒫_ψ(X_H|Z_L)|| 𝒫_ϕ(X_H|X_L,t)], we can generate X_H with X_L and t or with the latent variable Z_L. We minimize the distance about the generation of X_H in two ways. The minimization can ensure that the generation from the raw series/time and latent variables is consistent. To ensure the minimization of the term (c) in Eq. (<ref>), we consider integrating the use of forecasting and reconstruction losses, where reconstruction loss is omitted as it has already been constrained in term (a): ℒ_c=l(X̂_H-X_H), Finally, the overall loss function is the sum of the above three losses: ℒ=ℒ_a+ℒ_b+ℒ_c. The derivation of Eq. (<ref>) is presented in Appendix <ref>. Further discussion on JointPGM is provided in Appendix <ref>. § EXPERIMENTS §.§ Experimental Setup §.§.§ Datasets We conduct extensive experiments on various datasets to evaluate the performance and efficiency of JointPGM. We include six well-acknowledged benchmarks used in previous non-stationary time series forecasting works <cit.>: Exchange[<https://github.com/laiguokun/multivariate-time-series-data>], ETT[<https://github.com/zhouhaoyi/ETDataset>] (ETTh1 and ETTm2), Electricity[<https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014>], METR-LA[<https://github.com/liyaguang/DCRNN>] and ILI[<https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html>] datasets. The overall statistics of these datasets are summarized in Table <ref>. To show the non-stationarity of the six datasets, we especially choose the Augmented Dick-Fuller (ADF) test statistic used in <cit.> as the metric to quantitatively measure the degree of distribution shift. 
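For reproducibility, the ADF statistic of a dataset can be computed with the standard statsmodels routine; the snippet below is a sketch of that measurement, where averaging the per-variable statistics is one common choice and is an assumption here rather than necessarily the exact protocol of the cited work.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def mean_adf_statistic(data: np.ndarray) -> float:
    """data: (T, N) array of one MTS dataset; returns the ADF statistic averaged over variables."""
    stats = [adfuller(data[:, i], autolag="AIC")[0] for i in range(data.shape[1])]
    return float(np.mean(stats))

# Toy example: a random walk (non-stationary) vs. white noise (stationary).
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(2000, 3)), axis=0)
noise = rng.normal(size=(2000, 3))
print(mean_adf_statistic(walk), mean_adf_statistic(noise))  # the walk yields a much larger (less negative) value
```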
A larger ADF test statistic means a higher level of non-stationarity, i.e., more severe distribution shifts. Based on the ADF test results in Table <ref>, the MTS datasets adopted in our experiments show a high degree of distribution shift. Notably, since the ADF statistical test of Weather[<https://www.bgc-jena.mpg.de/wetter/>] (-26.661 in <cit.>) is much smaller than other datasets, indicating relative stationarity, it is excluded from our evaluation benchmarks. Additionally, we use the more non-stationary METR-LA from the same transportation domain to replace Traffic[<http://pems.dot.ca.gov>] (-15.046 in <cit.>). More dataset details are shown in Appendix <ref>. We follow <cit.> to preprocess data by the z-score normalization, and split all the datasets into training, validation, and test sets by the ratio of 7:1:2. §.§.§ Baselines We compare JointPGM with the following state-of-the-art models for time series forecasting, including MLP-based models: Koopa <cit.> and DLinear<cit.>; Transformer-based models: Stationary <cit.>, PatchTST <cit.>, FEDformer <cit.>, Autoformer <cit.>, iTransformer <cit.> and Crossformer <cit.>, and GNN-based model: WaveForM <cit.>. Notably, Koopa and Stationary are specifically designed to tackle non-stationary forecasting challenges in time series, while iTransformer, Crossformer, and WaveForM are tailored for MTS forecasting. Besides, we further compare JointPGM with three model-agnostic normalization-based methods, including SAN <cit.>, Dish-TS <cit.>, and RevIN <cit.>, which respectively use Autoformer and FEDformer as backbones for non-stationary forecasting. More baseline details are provided in Appendix <ref>. Regarding the evaluation metrics, we evaluate MTS forecasting performance using mean absolute error (MAE) and mean squared error (MSE). A lower MAE/MSE indicates better forecasting performance. Each experiment is repeated three times with different seeds for each model on each dataset, and the mean of the test results is reported. §.§.§ Implementation Details All the experiments are implemented with Pytorch on an NVIDIA RTX 4090 24GB GPU. In our experiments, all mean functions f_t^μ(·), f_ẑ^μ(·), f_Z̃^μ(·), f_t̂^μ(·), and variance functions f_t^σ^2(·), f_ẑ^σ^2(·), f_Z̃^σ^2(·), f_t̂^σ^2(·) are instantiated as single linear layer. The depth of propagation K in the Multi-hop Propagation is set to 2 which is consistent with <cit.> and the temperature parameter τ in the Gumbel-softmax Sampling is set to 0.5 for all datasets. We denote the latent dimension size as d and set d to 128. In training, our model is trained using Adam optimizer with a learning rate of 1e-3, and the batch size is set to 128 for all datasets. §.§ Overall Performance Table <ref> showcases the forecasting results of JointPGM compared to nine representative baselines with the best in bold and the second underlined. From the table, we can observe JointPGM achieves state-of-the-art performance in nearly 80% forecasting results with various prediction lengths. Concretely, JointPGM outperforms all general deep forecasting models across all time series datasets, with particularly notable improvements observed on datasets characterized by high non-stationarity: compared to their state-of-the-art results, we achieve 13.2% MSE reduction (0.086 → 0.076) on Exchange (ADF: -1.902) and 4.8% (4.000 → 3.818) on ILI (ADF: -5.334) under the horizon window of 96 and 24 respectively, which indicates that the potential of deep forecasting models is still constrained on non-stationary data. 
Also, JointPGM outperforms almost all deep models specifically designed to address distribution shifts. Notably, JointPGM surpasses Stationary, the non-stationary version of Transformer, by a large margin, indicating that the traditional covariate shift assumption may not be consistent with the true distribution shift. This highlights the challenges posed by the diverse transitional shift patterns underlying the time series for model capacity. Besides, different from Koopa disentangling series dynamics into time-variant dynamics and time-invariant dynamics using Koopman operators, JointPGM achieves an average reduction of 2.7% in MAE and 9.3% in MSE by innovatively rethinking time-variant dynamics from both intra- and inter-series perspectives at a finer granularity. §.§ Comparison with Normalization-based Methods We further compare our performance with the advanced normalization-based methods including SAN, Dish-TS, and RevIN for addressing distribution shift. Table <ref> has presented a performance comparison in MTS forecasting using Autoformer and FEDformer as backbones. From the results, we can observe that JointPGM achieves the best performance in nearly 92% forecasting results compared to the existing normalization-based methods. We attribute this superiority to the fine-grained capture of time-variant dynamics through explicit modeling of the time-variant transitional distribution between observations and predictions, while simultaneously considering the intra- and inter-series relationships inherent in the observations. This is demonstrated by the performance on two typical non-stationary datasets Exchange and ILI. Specifically, compared to the second-best FEDformer+SAN, JointPGM achieves an MSE reduction of 5.3% on the Exchange dataset and 6.6% on the ILI dataset. Notably, as shown in Table <ref> and Table <ref>, JointPGM exhibits slightly worse MAE than other compared models. A potential explanation is that most compared models use a single MSE as an objective function while JointPGM employs two MSE losses for prediction and reconstruction. Therefore, JointPGM tends to prioritize improvements in MSE, consistently ranking first among all compared models except when L/H=24/24. §.§ Model Analysis §.§.§ Hyperparameter Analysis To explore the impact of lookback length L, horizon length H, trade-off parameter α, and the depth of propagation K, we conduct the following experiments for the sensitivity of these hyperparameters. First, we investigate the impact of lookback length L on the performance of the top-8 forecasting models on the ETTh1 dataset. In principle, extending the lookback window increases historical information availability, which will potentially improve forecasting performance. However, Figure <ref> demonstrates that Stationary and FEDformer, with Transformer-based architectures, have not benefited from a longer lookback window, aligning with the analysis in <cit.>. Conversely, the remaining models consistently decrease MAE scores as the lookback window increases. Notably, our JointPGM can capture the dynamics of intra- and inter-series correlations from more historical information to infer the time-variant transitional distribution in a fine-grained manner, thereby enhancing the prediction performance. Then, we aim to discuss the influence of larger horizons (known as long-time series forecasting <cit.>) on the model performance. 
As Figure <ref> shows, when prolonging the horizon window to 720, JointPGM consistently achieves superior forecasting performance compared to other baseline models. An intuitive reason is that larger horizons encompass more complex distribution changes and thus need more refined time-variant transitional distribution decomposition and modeling. Furthermore, we study the impact of trade-off parameter α under the setting of L=96, H={96,192} on model performance on the Exchange dataset. Figure <ref> shows the performance comparison with different ratios α in Eq. (<ref>). We can observe that when α is less than 0.6, MAE and MSE all display a trend of violent fluctuation. Optimal performance is achieved with a moderate value for α, resulting in the lowest MAE and MSE. A similar trend exists as increasing α from 0.6 since the unsuitable ratio of intra- and inter-series dynamics fails to fully describe the underlying causes of the distribution shift in series data and confuse latent data regularities. Lastly, we further examine how the depth of propagation K affects the forecasting performance of JointPGM. Figure <ref> has reported the experimental results on the Exchange dataset under the setting of L=96,H={96,192}. From the results, we can draw the following conclusions: 1) Our JointPGM outperforms the other compared models across different values of K, demonstrating the stability of our method; 2) Our JointPGM achieves optimal performance when the value of K is around 2, indicating that it is sufficient to propagate series information with 2 steps. As the depth of propagation K increases, multi-hop propagation may suffer from over-smoothing caused by information aggregation, ultimately hindering forecasting performance. §.§.§ Ablation Study We perform an ablation study to assess the individual contributions of different key components in JointPGM to the final performance. We list the MAE and MSE on four datasets under the setting of L/H=24/48 for ILI and L/H=96/192 for the others. In particular, the following variants are examined: 1) w/o DI: Remove the entire dynamic inference from JointPGM and exclude the loss term ℒ_b from the overall loss function ℒ (i.e., ℒ_b=0 in Eq. (<ref>)); 2) w/o OF: Replace the order-based time factors with timestamp-based time factors; 3) w/o ISE (A) and 4) w/o ISE (F): Replace the entire independence-based series encoder with Autoformer and FEDformer encoders, and correspondingly substitute the decoder with their respective counterparts. Autoformer and FEDformer serve as frequently adopted backbones in normalization-based methods such as SAN <cit.>, Dish-TS <cit.> and RevIN <cit.>; 5) w/o IL: Completely remove the inter-series learner; 6) w/o TG: Remove the temporal gate from the intra-series learner. As the intra-series learner provides a crucial representation foundation for the inter-series learner, we remove this key component to validate its impact. As presented in Table <ref>, the introduction of dynamic inference contributes to the forecasting performance, showing that comprehensively making the learned time factors more discriminative and sensitive to wild environments is vital for non-stationary forecasting models. Additionally, we find it not easy to yield satisfactory results using timestamp-based time factors, which suggests the order-based time factor encoder may have successfully learned advantageous high-frequency patterns. 
The incorporation of an independence-based series encoder significantly improves the model performance when compared with Autoformer- and FEDformer-based encoders, demonstrating the effectiveness of our strategy that jointly handles the distribution shift and models intra- and inter-series correlations. The series encoder without the inter-series learner can provide performance improvements by focusing on modeling the intra-series distribution shift, but it still underperforms JointPGM due to neglecting the shift caused by inter-series dynamics. Furthermore, by equipping the intra-series learner with a temporal gate, JointPGM can accurately capture time-variant dynamics within each series, further promoting the efficacy of non-stationary MTS forecasting. §.§.§ Model Efficiency We comprehensively evaluate the model efficiency of our JointPGM and all baselines across three dimensions: forecasting performance, memory footprint, and training speed. Specifically, the forecasting performance (MAE) comes from Table <ref> under the setting of L/H=96/192. The memory footprint and training speed are calculated using the same batch size (128) and official code configuration. Figure <ref> shows the efficiency results for two representative datasets of different scales: Exchange (8 variables, 7588 time steps) and Electricity (321 variables, 26304 time steps). The efficiency results on ETTm2 (7 Variables, 69680 Time Steps) are provided in Appendix <ref>, Figure <ref>. From the figures, we can observe that: 1) In datasets with a relatively small number of variables (e.g., Exchange), the efficiency of JointPGM is comparable to that of iTransformer, which only uses transformer encoders, and slightly inferior to that of the simplest linear model, DLinear. For example, when compared to the MLP-based model Koopa customized for non-stationary forecasting, JointPGM achieves a reduction of 77.6% in training time for the Exchange dataset, while maintaining a memory footprint of only 94.9%; 2) In datasets with numerous variables (e.g., Electricity), the training speed is comparable to the SOTA forecasting model PatchTST, but JointPGM has a significantly lower memory footprint. Meanwhile, JointPGM achieves better forecasting performance than all other baselines in scenarios with numerous variables, as it is capable of jointly addressing distribution shifts and capturing the inherent intra- and inter-series correlations in MTS. §.§ Visualization Analysis §.§.§ t-SNE Visualization of Series Representation To showcase the rationale behind studying fine-grained transitional shift from both intra- and inter-series perspectives, we visualize the feature distribution of series representation H̃_t-L:t using t-SNE for JointPGM and its three variants w/o ISE (A), w/o ISE (F) and w/o IL. For reliability, we randomly choose a batch of the Electricity test set and repeatedly run each experiment three times with different seeds. The overall results are depicted in Figure <ref>. The figure shows that: 1) The series representations learned from JointPGM and the variant w/o IL exhibit a distinct clustering structure, indicating a robust and differentiated representation space. 
In contrast, those learned from the variants w/o ISE (A) and w/o ISE (F) reveal a more spread-out and less clustered pattern, suggesting that without the fine-grained decomposition, the series representations are comparatively less informative and distinguishable; 2) While the series representations learned from the variant w/o IL exhibit a clear clustering pattern by focusing only on intra-series transitional shift, they also show information loss (marked by the red box), possibly caused by misclustering due to spurious inter-series correlations. In contrast, those learned from JointPGM are more evenly distributed across the entire 2D space. We further validate this rationale by showing heatmap visualizations of inter-series correlations in Appendix <ref>, Figure <ref>. Based on the insights from Figures <ref> and <ref>, JointPGM can learn superior series representations to improve robustness against intricate distribution shifts and offer enhanced interpretability. §.§.§ Case Study of Forecasting We present a case study on real-world time series (METR-LA) in Figure <ref>. We select one weekday (period 1) and one weekend (period 2) as the representative horizon windows. Firstly, we compare the predictions of series #17 achieved by Koopa and our JointPGM during these two periods at the data and distribution levels. We easily observe significant changes in series trend (could be regarded as significant changes in intra-series correlation, the black arrows), but Koopa cannot acquire accurate predictions. In contrast, our JointPGM can perform precise predictions of future values and their distributions. The intuitive reason is when addressing the distribution shift, compared to coarse-grained Koopa, JointPGM has jointly handled the distribution shift and modeled intra-/inter-series correlations, thereby boosting the performance. Furthermore, we visualize the A_1 and A_2 learned by our JointPGM during period 1 and 2. It can be observed that the learned correlation between series #17 and other series also changes in different periods (the blue arrows), e.g., series #17 exhibits a high correlation with series #12 on weekday but not on weekends. This is reasonable since series #12 is located near a school as indicated by the Google Map. We present more forecasting showcases of JointPGM and the three baselines: Koopa, Dish-TS, and RevIN, in Figures <ref>, <ref>, and <ref> in Appendix <ref>, respectively. § CONCLUSION This study aims to address the distribution shift problem to enhance the robustness of MTS forecasting by proposing a novel probabilistic graphical model and instantiating a neural framework, JointPGM. Unlike previous normalization-based methods and time-variant models, JointPGM deeply exploits the intrinsic causes of the distribution shift, boosting desirable model interpretability and the potential to enhance forecasting performance by jointly handling the distribution shift and modeling intra-/inter-series correlations. Experimentally, our model shows competitive performance on six real-world benchmarks with remarkable efficiency. Future works will explore time-variant dynamics on higher-dimensional MTS data and further improve efficiency. IEEEtran § NOTATIONS All symbols in our paper are carefully defined based on rules and we have provided detailed and clear explanations for the meanings of all symbols in Table <ref>. 
§ THE DETAILS OF THEORETICAL ANALYSES §.§ The proof of Proposition 1 Since our objective is for the learned time series and their corresponding time factors to exhibit strong discriminative characteristics and closely align with each other, we perform the following variational approximation with the Kullback-Leibler (KL) divergence: 𝕂𝕃[𝒫_ψ(X_H,Z_L,Z_t|X_L)||𝒫_ϕ(X_H,Z_L,Z_t|X_L,t)] =∫_X_H∫_Z_L∫_Z_t𝒫_ψ(X_H,Z_L,Z_t|X_L) log𝒫_ψ(X_H,Z_L,Z_t|X_L)/𝒫_ϕ(X_H,Z_L,Z_t|X_L,t)X_H Z_L Z_t =∫_X_H∫_Z_L∫_Z_t𝒫_ψ(X_H|Z_L)𝒫_ψ(Z_L,Z_t|X_L) log[𝒫_ψ(Z_L,Z_t|X_L)/𝒫_ϕ(Z_L,Z_t|X_L,t)· 𝒫_ψ(X_H|Z_L)/𝒫_ϕ(X_H|X_L,t)] X_H Z_L Z_t =∫_X_H∫_Z_L∫_Z_t𝒫_ψ(X_H|Z_L)𝒫_ψ(Z_L,Z_t|X_L) log𝒫_ψ(Z_L,Z_t|X_L)/𝒫_ϕ(Z_L,Z_t|X_L,t) X_H Z_L Z_t +∫_X_H∫_Z_L∫_Z_t𝒫_ψ(X_H|Z_L)𝒫_ψ(Z_L,Z_t|X_L) log𝒫_ψ(X_H|Z_L)/𝒫_ϕ(X_H|X_L,t) X_H Z_L Z_t =∫_X_H𝒫_ψ(X_H|Z_L)𝕂𝕃[𝒫_ψ(Z_L,Z_t|X_L) || 𝒫_ϕ(Z_L,Z_t|X_L,t)] X_H +∫_Z_L∫_Z_t𝒫_ψ(Z_L,Z_t|X_L) 𝕂𝕃[𝒫_ψ(X_H|Z_L)||𝒫_ϕ(X_H|X_L,t)] Z_L Z_t =𝕂𝕃[𝒫_ψ(Z_L,Z_t|X_L) || 𝒫_ϕ(Z_L,Z_t|X_L,t)] +𝔼_(Z_L,Z_t) ∼𝒫_ψ(Z_L,Z_t|X_L)𝕂𝕃[𝒫_ψ(X_H|Z_L)||𝒫_ϕ(X_H|X_L,t)], The first term in Eq. (<ref>) is: 𝕂𝕃[𝒫_ψ(Z_L,Z_t|X_L) || 𝒫_ϕ(Z_L,Z_t|X_L,t)] =∫_Z_L∫_Z_t𝒫_ψ(Z_L,Z_t|X_L)log𝒫_ψ(Z_L,Z_t|X_L)/𝒫_ϕ(Z_L,Z_t|X_L,t)Z_L Z_t =∫_Z_L∫_Z_t𝒫_ψ(Z_L|X_L)𝒫_θ(Z_t|Z_L)log𝒫_ψ(Z_L|X_L)𝒫_θ(Z_t|Z_L)/𝒫_ϕ(Z_L|X_L,t)𝒫_ϕ(Z_t|t) Z_L Z_t =∫_Z_L∫_Z_t𝒫_ψ(Z_L|X_L)𝒫_θ(Z_t|Z_L)log𝒫_ψ(Z_L|X_L)/𝒫_ϕ(Z_L|X_L,t)Z_L Z_t +∫_Z_L∫_Z_t𝒫_ψ(Z_L|X_L)𝒫_θ(Z_t|Z_L)log𝒫_θ(Z_t|Z_L)/𝒫_ϕ(Z_t|t)Z_L Z_t =∫_Z_t𝒫_θ(Z_t|Z_L) 𝕂𝕃[𝒫_ψ(Z_L|X_L) || 𝒫_ϕ(Z_L|X_L,t)] Z_t +∫_Z_L𝒫_ψ(Z_L|X_L) 𝕂𝕃[𝒫_θ(Z_t|Z_L) || 𝒫_ϕ(Z_t|t)] Z_L =𝕂𝕃[𝒫_ψ(Z_L|X_L) || 𝒫_ϕ(Z_L|X_L,t)] +𝔼_Z_L ∼𝒫_ψ(Z_L|X_L)𝕂𝕃[𝒫_θ(Z_t|Z_L) || 𝒫_ϕ(Z_t|t)], Combining Eq. (<ref>) and Eq. (<ref>), we complete the proof. Note that the latent variables Z_L and Z_t are not inter-dependent. If Z_L and Z_t are inter-dependent, the inference of Z_L (resp. Z_t) must necessitate Z_t (resp. Z_L). Therefore, in Eq. (<ref>), there should be 𝒫_ψ(Z_L|X_L,Z_t) and 𝒫_ϕ(Z_t|t,Z_L) rather than 𝒫_ψ(Z_L|X_L) and 𝒫_ϕ(Z_t|t). As Figure <ref> shows, while Z_L are jointly inferred by X_L and t, Z_t can be inferred without X_L but using t. Therefore, 𝒫_ψ(Z_L|X_L,t) and 𝒫_ϕ(Z_t|t) hold in Eq. (<ref>). §.§ The derivation of Eq. (<ref>) For term (a), i.e., 𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫_ϕ(Z_L|X_L,t)], it aims to keep the inference using the variational distribution and the inference using the posterior is close, which also guarantees the reliable and high-quality sampling. We employ the variational evidence lower bound (ELBO) to constrain the term (a). Suppose that 𝒫(X_L) is a constant given X_L. It is equivalent to maximizing the ELBO and minimizing ℒ_a. Mathematically, we have: log 𝒫(X_L)=𝔼_Z_L ∼𝒫_ψ(Z_L|X_L)[log𝒫(X_L)] =𝔼_Z_L ∼𝒫_ψ(Z_L|X_L)[log𝒫(X_L|Z_L)𝒫(Z_L)/𝒫_ϕ(Z_L|X_L,t)] =𝔼_Z_L ∼𝒫_ψ(Z_L|X_L)[log𝒫(X_L|Z_L)𝒫(Z_L)/𝒫_ϕ(Z_L|X_L,t)·𝒫_ψ(Z_L|X_L)/𝒫_ψ(Z_L|X_L)] =𝔼_Z_L[log𝒫(X_L|Z_L)]-𝔼_Z_L[log𝒫_ψ(Z_L|X_L)/𝒫(Z_L)] +𝔼_Z_L[log𝒫_ψ(Z_L|X_L)/𝒫_ϕ(Z_L|X_L,t)] =𝔼_Z_L[log𝒫(X_L|Z_L)]-𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫(Z_L)] +𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫_ϕ(Z_L|X_L,t)] =𝔼_Z_L[log𝒫(X_L|Z_L)]-𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫(Z_L)]_ELBO+ℒ_a, Thus: ℒ_a =-ELBO =-𝔼_Z_L[log𝒫(X_L|Z_L)]+𝕂𝕃[𝒫_ψ(Z_L|X_L)||𝒫(Z_L)] =l(X̂_L-X_L) + (-logσ_Z+1/2σ^2_Z+1/2μ_Z^2-1/2), where l denotes a distance metric for which we use the MSE loss, μ_Z and σ^2_Z are the mean and variance vectors of Z_L. § DEEP DISCUSSIONS OF OUR METHOD JOINTPGM §.§ Why can our proposed JointPGM address the non-stationary MTS forecasting problem? As stated in the introduction, we argue there are two primary categories of approaches to address the non-stationary time series forecasting issue. 
The first (e.g., RevIN<cit.>) is to alleviate temporal mean and covariance shift by normalization, and the second (e.g., Koopa<cit.>) is to learn time-variant series dynamics. Our JointPGM belongs to the second category. Different from Koopa<cit.> disentangling the series into time-variant and time-invariant dynamics, we rethink the time-variant dynamics from the intra- and inter-series perspective from a finer granularity. Since non-stationarity is intrinsically attributed to distributional shift, learning time-variant dynamics essentially stems from modeling the time-variant transitional distribution between inputs and outputs. Hence, we decompose such time-variant transitional distribution into intra- and inter-series parts. As shown in Figure <ref>, we introduce dynamic time factors t as conditions to regulate this process and explicitly model the intra-series transitional distribution 𝒫(x_t:t+H^(i)|x_t-L:t^(i), t) and inter-series transitional distribution 𝒫(x_t:t+H^(i)|x_t-L:t^(i), t), see Section <ref>. By this means, time-variant dynamics are finely learned, thereby addressing the non-stationary time series forecasting issues. §.§ Why do we need a probabilistic graphical model? A probabilistic graphical model (PGM) can represent a probability distribution of random variables and provide a principled and interpretable manner for exploiting dependency relationships among variables. Inspired by this, we represent and decompose time-variant transitional distribution and instantiate a neural network JointPGM based on the PGM. It can reveal the intrinsic causes of the transitional distribution, ensuring desirable model interpretability and the potential to enhance forecasting performance by jointly addressing the distribution shift and capturing intra-/inter-series correlations. The ablation study (see Table <ref>), series representation visualization (see Figures <ref> and <ref>), and case study of forecasting (see Figure <ref>) all explain the rationality behind this decomposition. §.§ How is the interpretability of our JointPGM demonstrated? We highlight that our interpretability is rooted in the method of decomposition from intra- and inter-series dimensions using the probabilistic graphical model (PGM). In the method, PGM can represent the probability distribution of random variables and provide an interpretable manner to explore dependency relationships among variables. We present a tailored and specific PGM framework for decomposing time-variant transitional distribution, and accordingly instantiate the proposed neural network JointPGM based on the PGM framework (see Section <ref>), ensuring desirable model interpretability in terms of understanding the distribution shift. Moreover, compared with the previous MSE loss function, the decomposition of our optimization objective is also interpretable. It can perform different decomposition terms as constraints for different sub-procedures in PGM to help the understanding of what role they play in complex non-stationary forecasting, e.g., term (c) can ensure the generation from raw data and latent variables is consistent. In the experiments, we have presented series representation visualization (see Figures <ref> and <ref>), conducted the case study of forecasting on the METR-LA dataset (see Figure <ref>), and compared to different variants like w/o ISE (A) in ablation study (see Table <ref>). These experiments strongly support the rationality of this decomposition method. 
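To make term (a) more concrete, the following minimal sketch (assuming PyTorch; the module and variable names are illustrative, and the single-linear-layer heads for the mean and variance of Z_L mirror the lightweight choice discussed in the next subsection rather than the exact implementation) computes ℒ_a as the reconstruction error plus the analytic Gaussian KL term derived in the theoretical analyses above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLatentHead(nn.Module):
    # Single linear layers produce the mean and log-variance of Z_L (illustrative choice).
    def __init__(self, d_model, d_latent):
        super().__init__()
        self.mu = nn.Linear(d_model, d_latent)
        self.logvar = nn.Linear(d_model, d_latent)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

def loss_a(x_recon, x, mu, logvar):
    # L_a = MSE(X_hat_L, X_L) + sum_dims(-log(sigma) + sigma^2/2 + mu^2/2 - 1/2)
    recon = F.mse_loss(x_recon, x)
    kl = (-0.5 * logvar + 0.5 * logvar.exp() + 0.5 * mu.pow(2) - 0.5).sum(-1).mean()
    return recon + kl

Here the KL term is exactly the closed form given above for a diagonal Gaussian measured against a standard normal prior.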
§.§ How do we avoid the potential overfitting risk with the introduction of many latent variables? We have adopted effective strategies to mitigate potential overfitting risk, e.g., we instantiated the mean and variance functions of Gaussian sampling as a single linear layer, effectively reducing the parameters and training time. We also carefully tuned the latent dimension size d in {64,...,512} and selected the optimal size of 128 to avoid inferior performance due to possible overfitting. The superior forecasting performance and efficiency on small-scale datasets like ETTh1 demonstrate the effectiveness in mitigating overfitting. § MORE EXPERIMENTAL DETAILS §.§ More dataset details We adopt six real-world benchmarks in the experiments to evaluate the non-stationary MTS forecasting task. The overall statistics of these datasets are summarized in Table <ref>. The experimental results (see Table <ref>) on these datasets are better than those of baselines, which sufficiently proves that our proposed method is exactly superior and effective in handling the distribution shift problem in MTS forecasting. The details of these datasets are as follows: * Exchange:[<https://github.com/laiguokun/multivariate-time-series-data>] comprises daily exchange rates of eight foreign countries, namely Australia, Britain, Canada, Switzerland, China, Japan, New Zealand, and Singapore, spanning from 1990 to 2016, with a sampling frequency of 1 day. * ETT:[<https://github.com/zhouhaoyi/ETDataset>] is sourced from two distinct electric transformers labeled as 1 and 2, each offering two different resolutions: 15-minute (denoted as `m') and 1-hour (denoted as `h'). We designate ETTh1 and ETTm2 as our benchmarks for non-stationary time series forecasting. * Electricity:[<https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014>] comprises the electricity consumption of 321 clients for MTS forecasting, collected since 01/01/2011 with a sampling frequency of every 15 minutes. * METR-LA:[<https://github.com/liyaguang/DCRNN>] contains traffic information gathered from loop detectors on the highways of Los Angeles Country. It comprises data from 207 sensors spanning from 01/03/2012 to 30/06/2012 with a sampling frequency of every 5 minutes. * ILI:[<https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html>] contains the weekly recorded data on patients with influenza-like illness (ILI) from the Centers for Disease Control and Prevention of the United States spanning from 2002 to 2021. This data describes the ratio of patients seen and the total number of patients. §.§ More baseline details We adopt two representative non-stationary forecasting models including Koopa <cit.> and Stationary <cit.> and seven state-of-the-art general deep forecasting models including DLinear<cit.>, PatchTST <cit.>, FEDformer <cit.>, Autoformer <cit.>, iTransformer <cit.>, Crossformer <cit.>, and WaveForM <cit.> for comparison. Additionally, we further compare JointPGM with three model-agnostic normalization-based methods SAN <cit.>, Dish-TS <cit.> and RevIN <cit.> with different backbones for non-stationary forecasting. Note that although Stationary is based on normalization, we classify it under non-stationary models for discussion due to its exclusive focus on the Transformer architecture. We introduce these models as follows: * Koopa uses stackable blocks to learn hierarchical dynamics, Koopman operators for transition modeling, and context-aware operators for handling time-variant dynamics. 
We implement their provided source code from: https://github.com/thuml/Koopahttps://github.com/thuml/Koopa. * Stationary introduces a series stationarization module with statistics for better predictability and a de-stationary attention module to re-integrate the inherent non-stationary information for non-stationary MTS forecasting. We implement their provided source code from: https://github.com/thuml/Nonstationary_Transformershttps://github.com/thuml/Nonstationary_Transformers. * DLinear: decomposes series data into trend and seasonal components using a moving average kernel and applies linear layers to each component. We implement their provided source code from: https://github.com/honeywell21/DLinearhttps://github.com/honeywell21/DLinear. * PatchTST: is a Transformer-based model for time series forecasting tasks by introducing two key components: patching and channel-independent structure. We implement their provided source code from: https://github.com/PatchTSThttps://github.com/PatchTST. * FEDformer: combines an attention mechanism incorporating low-rank approximation in frequency with a mixture of expert decomposition to manage distribution shifting. We implement their provided source code from: https://github.com/MAZiqing/FEDformerhttps://github.com/MAZiqing/FEDformer. * Autoformer: introduces a decomposition architecture that integrates the series decomposition block as an internal operator, which can progressively aggregate the long-term trend part from intermediate prediction. We implement their provided source code from https://github.com/thuml/Autoformerhttps://github.com/thuml/Autoformer. * iTransformer: repurposes the Transformer architecture for time series forecasting by embedding the time points of individual series into variate tokens, enhancing multivariate correlation capture and nonlinear representation learning without altering basic components. We implement their provided source code from: https://github.com/thuml/iTransformerhttps://github.com/thuml/iTransformer. * Crossformer: introduces a Transformer-based model that integrates cross-dimension dependencies for MTS forecasting, enhancing temporal and variable interactions via Dimension-Segment-Wise embedding and Two-Stage Attention layer. We implement their provided source code from: https://github.com/Thinklab-SJTU/Crossformerhttps://github.com/Thinklab-SJTU/Crossformer. * WaveForM: is a graph-enhanced Wavelet learning framework for long-term MTS forecasting. We implement their provided source code from: https://github.com/alanyoungCN/WaveForM https://github.com/alanyoungCN/WaveForM. * SAN: proposes a slice-level adaptive normalization scheme for time series forecasting, addressing the challenge of non-stationarity by locally adjusting statistical properties within temporal slices. We implement their provided source code from: https://github.com/icantnamemyself/SANhttps://github.com/icantnamemyself/SAN. * Dish-TS: mitigates distribution shift in time series by employing a Dual-CONET framework to separately learn the distributions of input and output spaces, thereby effectively capturing the distribution differences between the two spaces. We implement their provided source code from: https://github.com/weifantt/Dish-TShttps://github.com/weifantt/Dish-TS. * RevIN: is a reversible instance normalization method designed to address the distribution shift problem in time series data by symmetrically removing and restoring statistical information through learnable affine transformations. 
We implement their provided source code from: https://github.com/ts-kim/RevINhttps://github.com/ts-kim/RevIN. § MORE EXPERIMENTAL RESULTS §.§ Comparison with Dish-TS and RevIN In this section, following the return-to-original-value setting of Dish-TS <cit.>, we further compare the performance with the normalization-based methods Dish-TS <cit.> and RevIN <cit.>, using different backbones including Transformer, Informer, and Autoformer. The results are shown in Table <ref>. From the table, it is evident that our JointPGM can still achieve SOTA performance in 75% of forecasting results compared with Dish-TS and RevIN. §.§ Model efficiency Figure <ref> shows the supplementary efficiency results for ETTm2 (7 variables, 69680 time steps). We observe that in datasets with relatively few variables and a large time step scale (ETTm2), the efficiency of JointPGM is only slightly inferior to DLinear, PatchTST, and iTransformer. For example, when compared to the MLP-based model Koopa customized for non-stationary forecasting, JointPGM achieves a reduction of 58.9% in training time for the ETTm2 dataset, while maintaining a memory footprint of only 97.8%. §.§ Heatmap visualization of inter-series correlation To further provide an intuitive understanding of the learning processes of the stacked intra-series learner and inter-series learner, we present some inter-series correlation visualizations on the METR-LA and Electricity test set in Figure <ref>. Concretely, we calculate the Pearson Correlation coefficients for each pair of series in X_t-L:t, Ĥ_t-L:t, H̃_t-L:t and X_t:t+H, and visualize the entire correlation matrix. It can be clearly observed that there is an obvious variation in inter-series correlation between lookback window X_t-L:t and horizon window X_t:t+H, reflecting a significant shift in distribution between these two windows. As we observe in the shallow intra-series learner, we find that the correlation of representation Ĥ_t-L:t is similar to the correlation of the lookback window X_t-L:t. As we go deeper into the inter-series learner, the correlation of representation H̃_t-L:t gradually becomes more similar to the correlation of the horizon window to be predicted. This observation verifies that our proposed JointPGM effectively addresses the transitional shift between the lookback and horizon windows. §.§ Visualization of forecasting results To offer a clear comparison between various models, we show supplementary forecasting showcases on Electricity, ETTh1, ETTm2, and Exchange datasets: Figure <ref>, Figure <ref> and Figure <ref> show the predictions of our JointPGM and three baselines Koopa, Dish-TS and RevIN respectively. We can see that when the series trend changes dramatically, our JointPGM can still acquire accurate predictions. These visualizations illustrate the effective forecasting capability of JointPGM in handling shifted multivariate time series.
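As a supplement to the heatmap visualization of inter-series correlation described above, the following minimal sketch (assuming NumPy and Matplotlib; the window arrays are random placeholders rather than dataset loaders) shows how the pairwise Pearson correlation matrices of a lookback window and a horizon window can be computed and plotted.

import numpy as np
import matplotlib.pyplot as plt

def correlation_heatmap(window, title, ax):
    # window: (T, N) array with T time steps and N series; plot the N x N Pearson matrix.
    corr = np.corrcoef(window.T)
    im = ax.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
    ax.set_title(title)
    return im

lookback = np.random.randn(96, 8)   # placeholder for the lookback window X_{t-L:t}
horizon = np.random.randn(192, 8)   # placeholder for the horizon window X_{t:t+H}

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
im = correlation_heatmap(lookback, "lookback window", axes[0])
correlation_heatmap(horizon, "horizon window", axes[1])
fig.colorbar(im, ax=axes, shrink=0.8)
plt.show()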
http://arxiv.org/abs/2407.12085v1
20240716180002
The multiple nature of CC Com: one of the ultra-short orbital period late-type contact binary systems
[ "Dolunay Koçak" ]
astro-ph.SR
[ "astro-ph.SR" ]
§ ABSTRACT The study of very short-period contact binaries provides a valuable laboratory in which the most important and problematic astrophysical processes of stellar evolution take place. Short-period contact systems such as CC Com are particularly important for binary evolution. Close binary systems, especially those with additional components, show significant period variations, dominant angular momentum loss mechanisms, and pre-merger stellar evolution, making them valuable astrophysical laboratories. In this study, observations of CC Com, previously reported as a binary system, together with new observations from the TÜBİTAK National Observatory (TUG) and the space-based telescope TESS, have revealed that there is a third object with a period of about eight years and a fourth object with a period of about a century orbiting the binary system. From a simultaneous analysis of all available light curves and radial velocities, precise orbital and physical parameters of the system components are derived. The orbital parameters of the components are P_A=0.221±0 days, P_B=7.9±0.1 yr, P_C=98±5 yr, e_3 = 0.06, e_4 = 0.44, and the physical parameters are M_A1=0.712±0.009 M_⊙, M_A2=0.372±0.005 M_⊙, m_B;i'=90^∘=0.074 M_⊙, m_C;i'=90^∘=0.18 M_⊙, R_A1=0.693±0.006 R_⊙, R_A2=0.514±0.005 R_⊙, L_A1 = 0.103 L_⊙, L_A2 = 0.081 L_⊙. Finally, the evolutionary status of the multiple system CC Com and its component stars is discussed. § INTRODUCTION Contact binaries provide valuable insights into the evolution of binary star systems. They are a stage in the evolution of close binary systems, and studying their properties helps us understand how stars evolve when they interact closely with each other <cit.>. Contact systems are essential objects for studying the formation mechanisms of merging phenomena, very close binaries, and astrophysical processes of binary stars, as seen for V1309 Scorpii <cit.>. The close interactions between stars in contact binaries allow us to study in detail the complex dynamics that can lead to the formation of these systems and the role of mass loss and mass transfer in their evolution. In addition, extremely short-period contact binaries, such as CC Com, provide a crucial astrophysical laboratory to test the importance of angular momentum loss mechanisms such as magnetised stellar wind and gravitational radiation in the evolutionary processes of these systems <cit.>. The time required for a system such as CC Com, which is composed of very small masses of 0.72 M_⊙ and 0.38 M_⊙ <cit.>, to evolve and fill the inner Roche lobes through the normal course of evolution is significantly greater than the age of the Universe. CC Com is an excellent example because the proximity effect dominates its evolution more than other processes. Binary systems characterised by an extremely short orbital period have the potential to generate detectable gravitational waves <cit.>. One of the more basic pieces of information we get from a star is the variation in its atmospheric layers. Late-type stars are mostly active owing to their convective layers, and they lose mass over time. This mass loss has an impact on the evolution of stars. The mass loss becomes even more important when the orbital period of the binary system is relatively short. This mass loss causes a decrease in the total mass and thus a change in the orbital period of the system. In the later stages of binary evolution, the primary star first fills its Roche lobe and begins to transfer mass through the L1 point.
With mass transfer, the mass ratio changes and this causes a change in the period. Both mass loss and mass transfer play a role as in angular momentum loss from the system. Another effect is the presence of a third or more bodies, which cause a loss of angular momentum from the binary system. This effect is explained by the von Zeipel-Lidov-Kozai effect <cit.>. CC Com has been observed with photometric and spectral methods for over half a century. CC Com was first discovered as a variable star by <cit.>. The basic parameters of the system are given in Table <ref>. The B-V colour of the K4-5V <cit.> spectral system was calculated to be 0.54 mag <cit.>. In a more detailed study of the system, <cit.> obtained light curves in U, B, and V filters. Under the assumption of a being a contact system, <cit.> found a light curve solution of the system. Subsequently, <cit.> obtained the light curves of the system in the B and V bands. <cit.> obtained the parameters of the system by the light curve and the solution of the radial velocity. <cit.> and <cit.> produced synthetic models of the system using a differential correction method. Later, the light curve solutions were updated with modern methods <cit.>, with radial velocity data from <cit.>, <cit.> and <cit.>. As a result of the analyses performed, <cit.> found that the masses and radii of the hot and cold components of the system were 0.378 M_⊙, 0.717 M_⊙, 0.530 R_⊙, 0.708 R_⊙, while <cit.> obtained these as 0.409 M_⊙, 0.748 M_⊙, 0.550 R_⊙, 0.720 R_⊙. The presence of asymmetry at maximum light levels was noted in many of the light curves obtained for CC Com <cit.>. Similar variations owing to stellar spots have also been detected at minimum light. Most contact binaries show the O'Connell effect in their light curves, caused by stellar spots <cit.>. The solution of the light curve found by <cit.> showed that 6% of the surface of the primary star is covered by a cold spot. There are physical processes that cause changes in the orbit of a binary system, such as mass transfer, mass loss, the presence of a third body, axis rotation, stellar activity, and relativistic effects <cit.>. The analysis of the period variation of the system under the assumptions of a limited number of time minima has been previously considered by many authors <cit.>. The parabolic variation obtained from the analysis of the O-C diagram of the system reveals mass transfer from the massive star to the low-mass star at 1.6×10^-8 M_⊙ per year by <cit.>. <cit.> calculated that the orbital period of the system decreases by 4.66±0.20× 10^-11 days per year. It has been discussed that the oscillation of the orbital period with a period of 17.18±0.08 years and an amplitude of 0.0018±0.0001 d could be caused by the light-time effect (LITE) or magnetic activity of a dwarf star with a mass of 0.06 M_⊙, a third body in the system <cit.>. <cit.>, using 35 photometric minimum times in detail and analysing the parabola-like change in O-C variations, which is a result of mass transfer, found that the decrease rate in the orbital period is dP / dt = - 4.39×10^-8 days/year, while <cit.> found it to be -2×10^-8 days/year. Yang also found an oscillation with a period of 16.1 days and an amplitude of 2.8×10^-7 days. However, in his subsequent study, he updated the period of this oscillation to 23.6±0.4 and the amplitude to A= 0.0028±0.0003 days <cit.>. The cycle of these detected oscillations can be explained by magnetic activity or by the presence of a third object in the system <cit.>. 
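For illustration only (this is not the analysis code used in the studies cited above), a quadratic-plus-sinusoidal fit of the kind applied to O-C residuals in these works can be sketched as follows, assuming NumPy and SciPy; the cycle numbers, noise level, and parameter values are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def oc_model(E, dT0, dP, q, A, P_mod, phi):
    # O-C (days) versus cycle number E: ephemeris corrections, a quadratic term for
    # the secular period change, and a sinusoid for a LITE-like oscillation.
    return dT0 + dP * E + q * E ** 2 + A * np.sin(2.0 * np.pi * E / P_mod + phi)

E = np.linspace(0.0, 90000.0, 300)                        # placeholder cycle numbers
oc = oc_model(E, 0.0, 0.0, -2.4e-11, 0.0018, 28000.0, 1.0)
oc += np.random.normal(0.0, 3e-4, E.size)                 # placeholder observational scatter

popt, pcov = curve_fit(oc_model, E, oc,
                       p0=[0.0, 0.0, -1e-11, 1e-3, 30000.0, 0.0])
print("dP/dE (days per cycle):", 2.0 * popt[2])
print("oscillation period (cycles):", popt[4], " amplitude (days):", popt[3])

The quadratic coefficient maps to the secular period change rate, while the sinusoid period and amplitude correspond to the oscillation attributed to magnetic activity or a third body.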
In the present study, a comprehensive analysis of all available data sets from the literature, inclusive of recent observations and data from the Transiting Exoplanet Survey Satellite (TESS) in the Sectors 22 and 49 were conducted to accurately determine the orbital and physical parameters of the system, a quadruple system. Section 2 delineates the observations and the data reduction processes. Section 3 provides an in-depth analysis of the system's period change, examining all existing minima times in conjunction with newly acquired minima times. Furthermore, this section presents an analysis of all radial velocities of the very close system CC Com. The physical parameters of all constituent stars of the system are computed in Section <ref>. The final section encapsulates all the findings and discussions. § OBSERVATIONS New long-term multi-colour photometric monitoring of the system was performed on B, V and R filters using the iKon-L 936 BEX2-DD and the FLI ProLine 3041-UV CCD in the 0.6-meter TUG-T60 telescope between 2018 and 2021 years at the TÜBİTAK National Observatory (TUG). Short-term optical observations of the system were made in V and R filters on 20 March, 3, 4 and 13 April 2018 with SI 1100 CCD using the 1-meter TUG-T100 telescope. The space-based sky survey programme TESS has sensitively observed CC Com and many other stars during different observing seasons. The system was observed in Sector 22 for 26.4 days, between 20 February and 17 March 2020, and in Sector 49 for 24.2 days, between 1 and 25 March 2020. 15998 data points were obtained in Sector 22, and 13512 in Sector 49. We downloaded the observations obtained with the TESS telescope from the MAST[https://mast.stsci.edu] servers. PDCSAP flux was used for the analyses and for the TESS data shown in Figure <ref>. These data contain the simple aperture photometry (SAP) flux from which possible trends are extracted using cotrended basis vectors (CBVs) and are generally cleaner data than the SAP flux. The lighkurve <cit.> package was used to analyse and reduce the data sets of TESS observations. The AstroImageJ <cit.> programme was used to reduce round-based CCD observations. During the reduction process, flat, dark, and bias images obtained at observation nights were used. The magnitude of the system was then determined by the differential photometry method. The data processing and normalisation process was carried out with the same methods and programmes as we used for the <cit.> studies. Figure <ref> shows the CC Com light curves obtained from the TESS and TUG data sets. In this study, the expression given by Equation <ref> is used in the phase calculations of the system. Radial velocity observations of the system were obtained by <cit.>. The light curves obtained from the TESS observations of the system were solved simultaneously with the radial velocity curves. Figure 1 shows that the new multicolour observations obtained with ground-based telescopes are more scattered than the TESS observations. The precision of the observations obtained with ground-based telescopes is in the order of a few per cent, whereas that of TESS observations is in the order of a few thousandths. Although the observational sensitivity of space-based telescopes is much better, the lack of multi-colour observations can be seen as a deficiency. Multi-colour photometry from ground-based telescopes also has advantages in obtaining some parameters of binary systems (especially radiative parameters and colour variation for activity). 
HJD Min I = 2439533.597(1) + 0^d.2206868(7) × E. § DATA ANALYSIS Using all available light curves and all published minima of the system, we have analysed the period change, light and radial velocity curves of the very close binary system CC Com in individual sub-sections. §.§ Period Change Analysis New observations of the system made with the ground-based TUG-T100, TUG-T60 and space-based TESS telescopes are shown in Figure <ref>. In this study, many times of minima have been calculated from the newly obtained observations. Table <ref> lists some of the system's available times of minima. Four minima (at 58212.33157, 58212.44198, 58222.37265, and 58222.48422) were obtained from TUG T100 observations, and 120 minima from TESS observations. When reading minimum times from TESS observations, we calculated one minimum time by superimposing all three minima on top of each other. Therefore, we weighted the minimum times obtained from TESS by a factor of three. Other minima found in the literature were also collected, giving the system a total of 372. All times of minima were weighted according to the sensitivity of the observations, and a period change analysis was performed with the MATLAB code developed by P. Zache <cit.>. Equations <ref> and <ref> were used in the solutions, under the assumption of sinusoidal and parabolic variation. While using Equations <ref> and <ref> to obtain the parameters of the outer (fourth) body, we subtracted the long-term (∼98-year) variation and applied Equation <ref> to the remaining residuals to calculate the parameters of the possible short-term (∼8-year) companion with the help of Equation <ref>. Because CC Com is a contact binary system, a parabolic variation is expected as a result of mass transfer between the components (Figure <ref>, blue line). In addition to this parabolic variation, the presence of a third body is also taken into account (Equations <ref> and <ref>). Analysis under the assumption of mass transfer and the presence of a third body reveals that a periodic change in the residuals remains. Therefore, a reanalysis was performed assuming a quadruple system and parabolic variation. The parameters obtained as a result of the period change analysis are given in Table <ref>. Analyses have shown that there is a third object orbiting the binary system with a period of about eight years and that there may be a fourth object in the outer orbit with a period of about a century. The results obtained are shown in the O-C diagram in Figure <ref>. The masses of the third and fourth bodies orbiting CC Com A depend on the inclination of their orbital planes. The masses of the third and fourth bodies found in this study are shown in Figure <ref> depending on the inclination of their orbital planes with respect to the plane of the sky. The equations describing the minima are Min I = T_0 + P_0 E + 1/2 (dP/dE) E^2 + τ, where τ = a_12sin i^'/c×[1-e^'2/1+e^'cos v^'sin(v^'+ω^')+e^'sinω^']. Mass functions are given by f(m_3) = 4 π^2 (a_12sin i^')^3/(G P_m^2)=(M_3 sin i^')^3/(M_1+M_2+M_3)^2. The values of f(m_3) and f(m_4) given in Table <ref> were calculated using Equation <ref>. In the above equations, the terms i^', e^', v^' and ω^' denote the inclination of the orbit, the eccentricity, the true anomaly and the longitude of the periastron from the ascending node of the third component of the system, respectively. Using Period04, we performed a Fourier analysis on the residuals of the O-C analysis (Figure 3a) and looked for possible variations.
In this analysis, we found a dominant frequency at 0.00061(1) c/d. This corresponds to a variation of about 8.9∓0.2 years, which is similar to the value obtained from our period analysis under the fourth body assumption. Of course, the distribution of a limited number of minimum times distributed over 55 years prevents us from obtaining this value more precisely. §.§ Modelling of Light and Radial Velocity Curves The simultaneous solution of the CC Com spectral data and precise observations from the TESS satellite with the Wilson-Devinney <cit.> and Phoebe <cit.> programmes allowed us to accurately determine the physical and orbital parameters of the components. Although there are many light curves of the system obtained with ground-based telescopes, the light curves recently obtained with TESS observations are very sensitive. Multi-colour BVR photometric observations with ground-based telescopes are at least as important, if not more precise, than the TESS Sectors 22 and 49. While fitting the synthetic models, we first solved the TESS (S49) data sets simultaneously with the existing radial velocity data sets <cit.> in order to obtain more precise orbital parameters. The solutions show that the cooler component has a larger radius and a higher mass than the hotter star. The result of the solution is shown in Figure <ref> as a solid line over the observations. The mean temperature of the hot star (T_ A1=4300 K), gravitational darkening coefficients (g_ A1=g_ A2) <cit.>, the albedos (A_ A1=A_ A2) <cit.> and the logarithmic limb darkening coefficients (g_ A1=g_ A2) <cit.> are taken as fixed parameters during the analysis of the light curve. The orbital inclination (i), the semimajor axis of the relative orbit a, the radial velocity of the binary centre of mass (V_γ), the potential of the cold components (Ω _ A1=A2), the temperature of the secondary component (T_ A2), the luminosities and the third light contribution (l_ B+C), the spot parameters and the mass ratio q were adjustable parameters. After several model runs, we stopped modelling when the correction of each parameter was well below the errors. We presented the parameters obtained with their errors in Table <ref> and the comparison of the synthetic model with the observations in Figure <ref>. As can be seen, the model and the observations are in good agreement. Preliminary analyses were performed using the combined solution of Sector 49 and radial velocities. To represent the asymmetries seen at the maximum phases of the light curves, it was necessary to use spotted models during the fitting. For this reason, three spots, whose properties are given in Table <ref>, were taken into account during the solution. When solving for the observations in Figure <ref> (Sector 22 and other ground-based observations), the fundamental parameters (P, i, q, T_2, etc.) were fixed and only the limb darkening, relative luminosity, and spot parameters were allowed as free parameters. In this way, it is clear from the solid lines in Figure <ref> that the parameters given in Table <ref> are compatible with all observations. § PHYSICAL PARAMETERS OF THE SYSTEM Previous studies of CC Com have mentioned the possibility of mass transfer between its components and the existence of a third body gravitationally bound to the system. Period change analysis revealed that CC Com is actually a possible quadruple system for the first time in this study. 
The physical parameters of the component stars, derived from synthetic models using CC Com's high-precision TESS observations and radial velocity data, are the most accurate ever obtained. Figure <ref> shows the solved TESS light curves and the fitted synthetic models, while Table <ref> shows the parameters obtained from the solution. Double-line eclipsing binary stars are still the most reliable for determining the accurate physical parameters of companion stars. It is desirable to model double-lined radial velocity curves simultaneously and, if possible, a large number of light curves to determine the various parameters of the components of a binary star, such as mass, radius, and luminosity well. In this study, CC Com's radial velocity curves and several light curves were analysed simultaneously. These quantities obtained as a result of the solution, with their uncertainties, are given in Table <ref> and Figure <ref>. The calculations took the Sun's effective temperature to be 5777 K and its bolometric magnitude of 4.775 mag. As a result of our analyses, we obtained the mass of the cooler component as M_ A1 = 0.712 M_⊙, the mass of the hotter component as M_ A2 = 0.372 M_⊙ and the radii as R_ A1 = 0.693 R_⊙ and R_ A2 = 0.514 R_⊙, respectively. The distance of the system we obtained in this study was 69 pc. The results showed that the hot component has a smaller radius, mass, and luminosity. The visual magnitudes of the components were calculated with the results of the tables of <cit.> for the bolometric corrections. The total masses of CC Com B and C depend on the orbital inclination angles and are shown in Figure <ref> in blue and red colours, respectively. Accordingly, the masses that the component stars can have at i=90^∘ are 0.074 for CC Com B and 0.18 for CC Com C. § DISCUSSION AND CONCLUSION By studying binary and multiple stars photometrically and spectroscopically, we can accurately determine their basic physical and orbital parameters, such as period, luminosity, temperature, etc. The parameters obtained, such as mass, radius, temperature, luminosity, and absolute magnitude, provide very important information about their formation, evolution, and end of life. The observed binary systems exhibit considerable diversity in their mass and period distributions. Depending on their initial conditions (mass, chemical abundance, orbital period, etc.), the binary systems' orbits and the component stars' evolution can be fast or slow. In the case of close binaries, such as CC Com, the evolution becomes much more complicated <cit.>. Close binary systems are important astrophysical objects because they allow us to test critical physical phenomena such as angular momentum loss, mass transfer, and mass loss. The very short orbital periods and extremely low mass ratios of the component stars are particularly important from an evolutionary point of view. Examples of binary systems with very small mass ratios are SX Crv <cit.>, V870 Ara <cit.>, KR Com <cit.> and V1191 Cyg <cit.>. Only seven systems with very short orbital periods (P<0.25 d) have been studied both photometrically and spectroscopically, and their parameters are well known. CC Com is one of these systems and the others are SDSS J001641-000925 (0.199 d, <cit.>), OT Cnc (0.218 d, <cit.>), J160156 (0.227 d, <cit.>), J093010 B (0.228 d, <cit.>), V523 Cas (0.234 d, <cit.>) and RW Com (0.237 d, <cit.>). 
Such very short-period binary systems are excellent laboratories for understanding the nature of interacting binary stars and studying merger processes, mass transfer and mass loss. In this study, the nature of CC Com has been analysed in detail. In addition to the new observations obtained with the TUG telescopes, all available light curves of the system observed by TESS were collected for use in the light curve analysis. As a result of the simultaneous analysis of the light and radial velocity curves, the orbital and physical parameters of the component stars are given in Tables <ref> and <ref>. It was found that the cooler component has a larger mass and radius than the hotter component. We combined 221 minima from TUG and TESS observations with those found in the literature. We then analysed them for period change. As a result of our analysis, it was determined, for the first time, in this study that the system is a possible quadruple system (A1+A2, B, C). CC Com A is a binary system with a period of 0.220 days. The period of the middle orbiting star (B) is 7.9 years and the period of the outer orbiting star (C) is 98 years. In this study, using the distance modulus method, the distance of CC Com was found to be 69 pc, which is in agreement with Gaia <cit.> by the astrometric method (71.4 pc). The activity seen in late-type stars and the presence of spot regions on the stellar surface because of this activity can cause distortions in the light curves of the binary system. If the solutions are assumed to be spotless, the parameters of the binary system may be incorrectly estimated. Therefore, the light curves of active binary systems should be solved under spotted model assumptions. As seen in Figure <ref>, a slight difference in level is observed in the maximum phases of the light curves of CC Com. This level difference is the O'Connell effect and is caused by spots on the stellar surface. During the solution of the light curves, we first applied a synthetic model to the observations of TESS, which gives the most sensitive light curve, and obtained the possible spots. Then, we fixed the system parameters we obtained and performed solutions by adjusting different spot parameters (spot latitude, spot longitude, spot radius and a temperature factor), limb darkening, and relative luminosity as free parameters. In this way, we also modelled the TUG T100 and T60 data sets. As seen in the figures, all observations and models are in good agreement. The author sincerely thanks K. Yakut and C. Tout for their careful review of the manuscript and insightful recommendations and thank to anonymous referee for comments and helpful constructive suggestions, which helped us improve the paper. This study was supported by the Scientific and Technological Research Council of Türkiye (TÜBİTAK Prj 122F474 and 117F188). DK thanks TÜBİTAK-2219 for her scholarship. The author thanks the numerous people who have helped make the TÜBİTAK National Observatory (Prj no:18AT60-1301) and the NASA TESS mission possible. Funding Statement This study was supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK Prj 2219). Competing Interests None Data Availability Statement The data used in this study are given as online tables of the TUG data sets. In addition, TESS satellite data was used in some of the analyses and can be obtained from the MAST data archive at https://mast.stsci.edu.
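As a supplementary illustration of the mass-function relation used in the period analysis, the short sketch below (assuming NumPy and SciPy) solves for the companion mass at an assumed orbital inclination; the mass-function value is a placeholder, not a result reported in this study.

import numpy as np
from scipy.optimize import brentq

M_binary = 0.712 + 0.372          # combined mass of CC Com A1 + A2 in solar masses

def companion_mass(f_m, incl_deg, m_inner=M_binary):
    # Solve (M3 sin i')^3 / (m_inner + M3)^2 = f(m3) for M3 (solar masses).
    sin_i = np.sin(np.radians(incl_deg))
    return brentq(lambda m3: (m3 * sin_i) ** 3 / (m_inner + m3) ** 2 - f_m, 1e-6, 100.0)

f_m3 = 3.0e-4                     # placeholder mass-function value in solar masses
for incl in (90.0, 60.0, 30.0):
    print(f"i' = {incl:.0f} deg -> M3 = {companion_mass(f_m3, incl):.3f} Msun")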
http://arxiv.org/abs/2407.13219v1
20240718070505
Multi-sentence Video Grounding for Long Video Generation
[ "Wei Feng", "Xin Wang", "Hong Chen", "Zeyang Zhang", "Wenwu Zhu" ]
cs.CV
[ "cs.CV" ]
Department of Computer Science and Technology, Tsinghua University Beijing China fw22@mails.tsinghua.edu.cn Corresponding authors. Department of Computer Science and Technology, BNRist, Tsinghua University Beijing China xin_wang@tsinghua.edu.cn Department of Computer Science and Technology, Tsinghua University Beijing China h-chen20@mails.tsinghua.edu.cn Department of Computer Science and Technology, Tsinghua University Beijing China zy-zhang20@mails.tsinghua.edu.cn [1] Department of Computer Science and Technology, BNRist, Tsinghua University Beijing China wwzhu@tsinghua.edu.cn § ABSTRACT Video generation has witnessed great success recently, but its application in generating long videos still remains challenging due to the difficulty in maintaining the temporal consistency of generated videos and the high memory cost during generation. To tackle these problems, in this paper we propose a bold new idea of Multi-sentence Video Grounding for Long Video Generation, connecting massive video moment retrieval to the video generation task for the first time and providing a new paradigm for long video generation. The method of our work can be summarized in three steps: (i) We design sequential scene text prompts as the queries for video grounding, utilizing massive video moment retrieval to search for video moment segments that meet the text requirements in the video database. (ii) Based on the source frames of the retrieved video moment segments, we adopt video editing methods to create new video content while preserving the temporal consistency of the retrieved video. Since the editing can be conducted segment by segment, and even frame by frame, it largely reduces the memory cost. (iii) We also attempt video morphing and personalized generation methods to improve the subject consistency of long video generation, providing ablation experimental results for the subtasks of long video generation. Our approach seamlessly extends developments in image/video editing, video morphing and personalized generation, and video grounding to long video generation, offering effective solutions for generating long videos at low memory cost. Multi-sentence Video Grounding for Long Video Generation Wenwu Zhu ======================================================== § INTRODUCTION Video generation has made significant progress in recent years, demonstrating an impressive ability to generate multimedia content. The main existing works on video generation focus on developing different generative models, which can be divided into diffusion-based models <cit.> and non-diffusion-based models such as VQGAN <cit.>. Yin et al. proposed NUWA-XL <cit.> by applying a Diffusion over Diffusion method for long video generation. In addition, some works strengthened temporal information by combining generative diffusion models or VQGAN models with Transformer architectures to generate long videos of up to 1 minute <cit.>. However, there are still many limitations in the generation of long videos.
The first issue is that the generated video content often overlooks some physical laws of real-world knowledge (such as chair running). In addition, the overall consistency of the generated video is lower than the real video due to unnatural transitions between frames. Last but not least, the longer the video is generated, the higher GPU memory cost would be required. To address these challenges, we propose a brave and new idea named grounding-based video generation, which applies the multi-sentence video grounding method for long video generation. This idea shares similar spirits with the retrieval-augmented generation <cit.> in large language models. To begin with, based on the video grounding technique such as massive video moment retrieval, we can obtain several moments of different videos from our video database that match target text queries, to provide video generation tasks with guided source video segments that follow physical rules and remain highly consistent. The retrieved video segments will provide motion information for the final generated video. Subsequently, based on the retrieved video segments, we adopt the video editing method to create new content in the video segments, such as changing the subject or changing the background. Additionally, we combine the edited video segments with a unified subject or style through video editing, and achieve long video generation while ensuring overall consistency and adherence to the physical laws of the generated video content. Meanwhile, considering that video editing can be conducted segment by segment and even frame by frame, our work maintains a relatively low level of GPU memory cost, making it possible for the public to generate long videos. Extensive experimental results show that our proposed method can be used to generate long videos with better consistency. In future works, with a larger video corpus and more advanced video grounding methods, our proposed method can work as a powerful long video generation tool. To summarize, we make the following contributions: * To the best of our knowledge, this is the first work to study the feasibility of leveraging the multi-sentence video grounding for long video generation, which we believe will inspire a lot of future work. * We propose the Multi-sentence Video Grounding-based Long Video Generation framework, consisting of i) a massive video moment retrieval model capable of locating suitable video segments for the text prompts, ii) a video editor that creates new content for the video segments while preserving temporal consistency and iii) a video personalization and morphing scheduler that enables customized video generation and smooth transition between generated videos. * We conduct experiments on various video editing and video personalization methods, demonstrating the feasibility of retrieval augmentation to improve the continuity and diversity of generated long videos through the video grounding method. * We conduct ablation analysis under different video editing methods and the application of video morphing and personalization, providing importing references for improving the performance of long video generation. § RELATED WORK §.§ Video Grounding Video grounding aims to locate the starting and ending times of a given segment target from a video<cit.>, which is a popular computer vision task and has drawn much attention over the past few years <cit.>. 
Early-stage video grounding task mainly focused on searching for target segments from a single video, which limited its ability to obtain information from the entire video pools. Therefore, tasks such as video corpus moment retrieval (VCMR) <cit.> have emerged in the field of computer vision, which focuses on finding one correct positive video segment associated with the VG query from the video pools. Furthermore, the Massive Videos Moment Retrieval (MVMR) <cit.> task was proposed, which assumes that there could be several positive video segments in the video pools that match a certain target query. This task requires the model to distinguish positive from massive negative videos. To address this challenge, Yang et al. introduce the Reliable Mutual Matching Network (RMMN) <cit.>, which mutually matches a query with representations of positive video moments while distinct from negative ones. §.§ Long Video Generation Existing work has demonstrated impressive abilities in generating high-quality images and short videos <cit.>. By introducing the transformer architecture to enhance temporal understanding and reasoning ability, some work has been able to generate long videos.  <cit.> present Phenaki with C-ViViT as encoder and MaskGit as the backbone, which is able to generate variable-length videos conditioned on a sequence of prompts in the open domain. With the rapid development of Diffusion Models (DM) <cit.> such as stable diffusion <cit.>,  <cit.> proposed Diffusion Transformers (DiTs), a simple transformer-based backbone for diffusion models that outperform prior U-Net models. Given the promising scaling results of DiTs, OpenAI proposes Sora, presenting powerful abilities to generate long videos and simulate the physical world. These works, however, all have limitations when addressing long video generation tasks. On the one hand, a common problem in generating videos is that some content may violate physical knowledge or have poor overall video consistency. The reasons causing this problem, include inadequate prompt understanding of input and the limitations of the generation algorithm, leading to insufficient modeling of physical laws. On the other hand, for generative models that consider the overall temporal relationship of video generation and apply methods such as the Transformer architecture, to generate a single long video at once, the problem they face is that the longer the video is generated, the more computational resources are needed. §.§ Video Editing Benefiting from the rapid development of diffusion models in image and video generation, many zero-shot video editing methods have been proposed <cit.>, which apply the pre-trained image diffusion model to transform an input source video into a new video. The critical problem of video editing is to maintain the visual motion and temporal consistency between the generated video and the source video. To address these problems, some works introduce additional spatial conditioning controls or internal features to keep motion consistency between the generated video images and the source video images. LOVECon <cit.> applies ControlNet <cit.> using conditions control such as edges, depth, segmentation, and human pose for text-driven training-free long video editing. Text2LIVE <cit.> generates an edit layer with color and opacity features to constrain the generation process and demonstrate edits on videos across a variety of objects and scenes. 
In addition, other methods are proposed to improve the temporal coherency in generation by considering the relationship between source video frames. Tune-A-Video <cit.> proposes a One-Shot Video Tuning on the source video and appends temporal self-attention layers for consistent T2V generation. Pix2Video <cit.> applies feature injection and latent update methods, using the intermediate diffusion features of a reference frame to update the features of future frames. VidToMe <cit.> unifies and compresses internal features of diffusion by merging tokens across video frames, balancing the performance of short-term video continuity and long-term consistency of video editing. § METHOD Long video generation task requires generating a consistent and diverse video of at least one minute conditioned on a story or a sequence of prompts (p_1,p_2,...,p_n). As shown in Figure <ref>, the method of Multi-sentence Video Grounding for Long Video Generation can be decomposed into the following steps. Given a sequence with several target queries (q_1,q_2,...,q_n), each query in the format of `A person does something...', our first step is to input them into the Massive Video Moment retrieval model to search for video time segments that meet the requirements in the video database and filter a sequence of videos V_1, V_2,..., V_n=Grounding(q_1,q_2,...,q_n). Secondly, we will use their modified queries (q'_1,q'_2,...,q'_n) in the format of `Customized subject does something...in a customized scenario' and the video editing method to edit each grounding video segment into video content V'=Editing(V,q') with a unified subject we want to customize and change the background as we expected, forming several story segments V'_1, V'_2,..., V'_n of a continuous long video. In addition, we also attempt video morphing approaches for the combination of generated video segments, and personalized generation methods to improve the subject consistency of long video generation. §.§ Multi-sentence Video Moment Grounding Given a sequence with several target queries (q_1,q_2,...,q_n) in the format of `A person does something...', we first tokenize each query q by DistillBERT and then perform global average pooling over all the tokens to transform them into text featuref^q∈ R^d. DistillBERT <cit.> is a lightweight text query encoder showing comparable performance to BERT <cit.> but with a smaller size and faster computational speed. We hope to find video clips from a video pool that match the text feature through the video grounding method. For each video in the database, we divided it into N video clips and used a pre-trained visual model C3D <cit.> for feature extraction. Utilizing these features through FC layer dimensionality reduction and max pooling, we constructed a 2D temporary moment feature map F∈ R^N× N× d. After transforming each query and video moment using encoders to a joint visual-text space, we apply Mutual Matching Network(MMN) <cit.> trained by the Cross-directional Hard and Reliable Negative Contrastive Learning (CroCs) method <cit.> to solve the Multi-sentence Video moment grounding task. From the joint visual-text space, the matching score between the query and the video moment representations could be computed through cosine similarity between them. f^q_mm =W_mmf^q+b_mm, ∀f^v ∈ F, s^mm = f^v^T_mmf^q_mm, where W_mm and b_mm are learnable parameters and we enforce the embeddings ||f^v_mm||=||f^q_mm||=1 through a l_2-normalization layer. 
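A minimal sketch of this matching computation is given below, assuming PyTorch; the moment-side projection layer is an assumed counterpart to the query projection W_mm, b_mm, and the feature dimensions are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MomentQueryMatcher(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.query_proj = nn.Linear(d, d)    # W_mm, b_mm for the query feature
        self.moment_proj = nn.Linear(d, d)   # assumed analogous projection for moment features

    def forward(self, f_q, F_map):
        # f_q: (d,) query feature; F_map: (N, N, d) 2D temporal moment feature map.
        q = F.normalize(self.query_proj(f_q), dim=-1)      # l2-normalized f^q_mm
        v = F.normalize(self.moment_proj(F_map), dim=-1)   # l2-normalized f^v_mm
        return torch.einsum("nmd,d->nm", v, q)             # (N, N) cosine scores s^mm

matcher = MomentQueryMatcher(d=512)
scores = matcher(torch.randn(512), torch.randn(16, 16, 512))
start_clip, end_clip = divmod(scores.argmax().item(), 16)  # highest-scoring moment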
The video grounding segments from the few videos with the highest matching score s^mm for each query are selected for subsequent editing. §.§ Text guided Video Editing After receiving the video segments V_1, V_2,..., V_n=Grounding(q_1,q_2,...,q_n) from the multi-sentence video moment grounding approach, we edit each of them as follows. For each video clip V with n frames (v^1,v^2,...,v^n) corresponding to a text query q, each frame is encoded into a low-dimensional latent representation z^i_0=E(v^i) and converted back into a noise latent through DDIM Inversion in T reverse steps: z^i_t+1 = √(α_t+1) (z^i_t - √(1-α_t) ϵ_θ(z^i_t,t,c_q))/√(α_t) + √(1-α_t+1) ϵ_θ(z^i_t,t,c_q), t = 0,...,T-1, where ϵ_θ represents the image-to-image translation U-Net of the diffusion model we implement, α_t represents the noise variance based on the decreasing schedule, and c_q is the text embedding encoded from the text query q. Using the noise latent as a starting point for video editing, we modify each original video query q to a new query q', making different text queries share the same character subject or visual style. The basic video frame editing approach uses the following DDIM Sampling method to directly edit each video frame: z^i_t-1 = √(α_t-1) (z^i_t - √(1-α_t) ϵ_θ(z^i_t,t,c_q'))/√(α_t) + √(1-α_t-1) ϵ_θ(z^i_t,t,c_q'), t = 0,...,T-1. This method alone, however, does not account for the temporal consistency between video frames. The quality of edited images is also mainly limited by the diffusion model and the level of detail of the prompt query. Therefore, we introduce additional methods that improve video editing and generation from two aspects. On the one hand, we use ControlNet, which adds conditional controls such as edges, depth, segmentation, and human pose as reference information to the diffusion model for better generation guidance. On the other hand, we modify the intermediate latents through video editing approaches such as previous-frame latent injection, cross-window attention, and global token merging. Given the methods mentioned above, we can update each latent z_t^i with richer reference information from the previous-step latent and the source video frames. In the end, each final latent is mapped back to an image frame through a decoder, f^i = D(z^i_0), forming the generated video V'=Editing(V,q') with higher temporal consistency and more physically plausible content. §.§ Video Morphing and Personalization §.§.§ Video Morphing Although we have obtained edited videos of several different scenes V'_1, V'_2,..., V'_n with consistent character subjects or styles through video grounding and generative methods, there are still significant differences between the videos based on different text queries, and they cannot be smoothly combined into a long video since the end frame of the previous video is not coherent with the starting frame of the next video. Therefore, we adopt a video morphing method to concatenate the start and end segments of all edited videos. In each transition task, we obtain the VAE-encoded latents of the preceding video's last frame v^i and the following video's first frame v^j, represented as z^i_0 and z^j_0, along with the modified queries q'_i and q'_j of the two videos. Inspired by the approach of DiffMorpher <cit.>, we use the two latent-query pairs to respectively fine-tune the diffusion model and train two LoRAs <cit.> Δθ_i and Δθ_j on the SD UNet ϵ_θ according to the following learning objective: L(Δθ)=E_ϵ,t[||ϵ-ϵ_θ+Δθ(√(α_t)z_0+√(1-α_t)ϵ,t,c_q)||^2], where ϵ∼ N(0,I) is randomly sampled Gaussian noise.
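To illustrate this objective concretely, here is a minimal, hedged training-step sketch (not the reference implementation): `unet` denotes the LoRA-augmented noise predictor, and the latent `z0`, text embedding `c`, optimizer, and cumulative-alpha schedule are assumed to be set up beforehand.

```python
import torch
import torch.nn.functional as F

T = 1000
# assumed linear beta schedule; alphas_cumprod plays the role of alpha_t in the objective
alphas_cumprod = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, T), dim=0)

def lora_step(unet, optimizer, z0, c):
    """One step of L(dtheta) = E_{eps,t} || eps - eps_{theta+dtheta}(sqrt(a_t) z0 + sqrt(1-a_t) eps, t, c) ||^2,
    where only the LoRA parameters dtheta are registered in `optimizer`."""
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    a_t = alphas_cumprod.to(z0.device)[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z0)
    z_t = a_t.sqrt() * z0 + (1.0 - a_t).sqrt() * eps   # forward-noised latent
    loss = F.mse_loss(unet(z_t, t, c), eps)            # noise-prediction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```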
After fine-tuning, Δθ_i and Δθ_j are fixed and fused through a linear interpolation over the semantics of the input images: Δθ_α=(1-α)Δθ_i+αΔθ_j, where α=k/n, k=1,2,...,n-1, corresponds to the generation of the k-th transition image latent from q'_i to q'_j; we take n as 15. Using the LoRAs fine-tuned on the two images, we utilize the LoRA-integrated UNets ϵ_θ+Δθ_k (k=i,j) to respectively invert the image latents z^i_0 and z^j_0 to z^i_T and z^j_T, and obtain the intermediate latent noise z^α_T through spherical linear interpolation <cit.>: z^α_T = sin((1-α)ϕ)/sin(ϕ) z^i_T + sin(αϕ)/sin(ϕ) z^j_T, ϕ = arccos(z^i_T· z^j_T/(||z^i_T|| ||z^j_T||)). When generating the intermediate image latent z^α_0, we use the UNet with the interpolated LoRA ϵ_θ+Δθ_α during the DDIM Sampling steps, while the text condition c_α = (1-α)c_q'_i+α c_q'_j is applied through linear interpolation. By generating z^α_0 for α=k/n, k=1,2,...,n-1, and decoding v^α = D(z^α_0), we form a transition video segment V^ij=(v^1/n,v^2/n,...,v^(n-1)/n) from v^i to v^j. §.§.§ Video Personalization Video personalization is designed to generate characters from the real or virtual world that were not used for training the diffusion model. In our approach, we fine-tune the text-to-image diffusion model using 3-5 images of a specialized character paired with a text prompt containing a rare token identifier <cit.> and the name of the character's class (e.g., "A [V] dog" or "A [sks] man"). After fine-tuning, we can replace the diffusion model ϵ_θ used in Sec. <ref> and in video morphing. This step is optional and serves to generate customized subjects for the final videos. § EXPERIMENTS In this section, we present the results of our method on long video generation tasks based on multi-sentence video grounding and generative methods. §.§ Setups §.§.§ Datasets. We conduct massive video moment retrieval on the Charades <cit.>, ActivityNet <cit.>, and TACoS <cit.> datasets and mainly choose ActivityNet as our video database for the next stage of video editing, since it covers a wide range of complex human activities of interest in daily living. We generate about 100 edited videos for evaluation, each containing between 150 and 1000 frames (depending on the length of the video clip retrieved by video grounding) at a resolution of 512×512. §.§.§ Models. We apply the Stable Diffusion model with video editing approaches including Pix2Video, LOVECon, and VidToMe. These methods adopt different strategies to enhance temporal consistency: Pix2Video uses self-attention feature injection to propagate changes to future frames, LOVECon proposes a cross-window attention mechanism to ensure a consistent global style, and VidToMe enhances both short-term and long-term temporal consistency by merging self-attention tokens across frames. In addition, we also conduct video editing with ControlNet under various conditioning controls to improve motion consistency. §.§.§ Baseline. Considering that our work is built on the Stable Diffusion model for video generation, we compare our results with the Stable Diffusion-based video generation method Text2Video-Zero <cit.>, which encodes motion dynamics in the latent codes and reprograms cross-frame attention, allowing zero-shot text-to-video generation. §.§.§ Evaluation Metrics.
Inspired by VBench <cit.>, we conduct the following measurements on the generated long video content: (i) The first aspect we evaluate is subject consistency, representing whether the appearance of the main subject remains consistent across the long video, which is calculated by the DINO <cit.> score based on the frames' feature similarity. (ii) Generated video content often exhibits distortion in individual frames, so we use the MUSIQ <cit.> predictor trained on the SPAQ <cit.> dataset to evaluate the image quality of the video. (iii) Temporal style represents the continuity of camera motion and, like subject consistency, is a part of the video's overall consistency. We evaluate it by calculating the similarity between the video feature and the temporal style description feature through ViCLIP <cit.>. (iv) For local and high-frequency details, temporal flickering requires the video to exhibit natural temporal variation rather than degenerating into static images. We use the mean absolute difference across frames to evaluate it. §.§ Main results Our main quantitative results of video grounding for long video generation are shown in Table <ref>. From the results, we can observe that: (i) Our video grounding-based long video generation method consistently achieves higher scores in subject consistency and temporal flickering, demonstrating the feasibility of retrieval augmentation for improving the continuity and diversity of generated long videos through video grounding methods. (ii) In terms of image quality and temporal style, the best performance our method achieves is higher than that of the baseline method, indicating that it can improve the generated videos' image quality. However, these improvements are not significant, so it is worth further exploring ways to improve performance by introducing other image generation models or other temporal-consistency video editing methods. §.§ Case Analysis As shown in Figure <ref>, we select a few video samples generated using our method and the baseline method Text2Video-Zero for case analysis. Our method uses VidToMe as the video editing method and depth as the ControlNet condition. We further visualize the X-T slice of each video. From Figure <ref>(a) we can see that the X-T slice of our generated video `A knight is seen riding around on a horse' exhibits continuous subject and background changes over time, demonstrating the superiority of our method in generating stable yet varied long videos. Figure <ref>(c) shows the X-T slice of our generated video `Iron Man is doing gymnastics in the gym'. Despite the frequent and non-linear movements of the subject in the video, its X-T slice presents continuous, gyroscope-like changes, indicating that our method is able to continuously generate video frames that reflect the subject's complex movement patterns. Figure <ref>(e) shows the X-T slice of our generated video `Iron Man lifts heavy weight over his head, in the snow', which includes both a customized subject and a customized scenario, demonstrating the ability of our method to perform multiple customizations. In contrast, Figure <ref>(b), (d), and (e) show examples of the corresponding generated videos and their X-T slices from the baseline method, which can be seen to suffer from notable quality deficiencies, e.g., inconsistencies between frames and the subject missing from some frames. In summary, our approach of multi-sentence video grounding-based long video generation leads to more consistent video generation.
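For reference, two of the frame-level measures listed in the evaluation metrics above (subject consistency and temporal flickering) can be sketched as follows; this is an illustrative approximation rather than the exact VBench implementation, and the per-frame DINO features are assumed to be precomputed by a pre-trained backbone.

```python
import numpy as np

def subject_consistency(frame_feats):
    """frame_feats: (T, d) array of per-frame DINO features (assumed precomputed).
    Score = mean cosine similarity of every frame to the first frame and to its predecessor."""
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    to_first = f[1:] @ f[0]
    to_prev = np.einsum('ij,ij->i', f[1:], f[:-1])
    return float(np.mean((to_first + to_prev) / 2.0))

def temporal_flicker(frames):
    """frames: (T, H, W, C) uint8 video; mean absolute difference between consecutive frames."""
    diff = np.abs(frames[1:].astype(np.float32) - frames[:-1].astype(np.float32))
    return float(diff.mean())

# toy usage with random placeholder data
feats = np.random.randn(16, 384)
video = (np.random.rand(16, 64, 64, 3) * 255).astype(np.uint8)
print(subject_consistency(feats), temporal_flicker(video))
```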
§.§ Ablation Studies §.§.§ Results with different editing methods As shown in Table <ref>, we compare the results of different video editing approaches and different pre-trained ControlNet weights. From the results, we can observe that: (i) VidToMe achieves overall better performance in editing long videos than the other video editing methods. We believe this is mainly because VidToMe not only considers the local continuity between video frames but also strengthens the long-term consistency of generated videos by introducing global tokens, whereas the other video editing methods rely only on inter-frame correction; this indicates the importance of considering global temporal relationships for long video generation. (ii) Applying a single type of ControlNet to edit different video segments with the same method does not significantly improve the overall quality of the generated long video. This can be attributed to the fact that different types of videos are suited to different types of ControlNets, and it is worth trying multiple ControlNets to further explore their impact on video editing in future work. §.§.§ Results with Video Morphing and Personalization As shown in Table <ref>, we compare the results of applying the video morphing and personalization methods. The results are obtained with VidToMe using the depth-conditioned ControlNet. We can see from the results that, with the introduction of the video morphing and personalization approaches, our method still outperforms the baseline Text2Video-Zero model. In addition, we present an example long video generated by our method in Figure <ref>. The generated long video maintains good consistency of the customized subject across frames. § LIMITATIONS AND FUTURE WORKS Since this is the first attempt at multi-sentence video grounding for long video generation, the main limitation of our method lies in the model and dataset used for multi-sentence video grounding. As shown in Figure <ref>, the model fails to retrieve some text queries correctly from the video dataset (even though video segments for `A dog catches a ball' and `A butterfly is flying on the sky' do exist in the ActivityNet dataset), resulting in a low correlation between the retrieved videos and the text queries and affecting further video editing. On the other hand, the dataset lacks video clips for certain types of queries, which also hinders retrieval; examples are shown in Figure <ref>, since the dataset does not contain video clips related to `submarine' or `snake'. These limitations can be addressed by improving the performance of video grounding models and introducing more diverse video datasets to further enhance the overall performance of our method. § CONCLUSION In this paper, we study the problem of utilizing video grounding models to conduct data augmentation for long video generation. We propose the Multi-sentence Video Grounding for Long Video Generation framework, which consists of a multi-sentence video grounding model that retrieves video moments matching the target text queries from the video database, and a data augmentation strategy that edits the retrieved video content into videos with unified subjects through video editing methods. Experiments demonstrate that our proposed framework outperforms baseline models for long video generation.
Our approach seamlessly extends developments in image/video editing, video morphing, personalized generation, and video grounding to long video generation, offering an effective solution for generating long videos at low memory cost. Utilizing video grounding methods to enhance long video generation could be a promising future research direction.
http://arxiv.org/abs/2407.12598v1
20240717142212
Estimate Epidemiological Parameters given Partial Observations based on Algebraically Observable PINNs
[ "Mizuka Komatsu" ]
cs.LG
[ "cs.LG", "math.DS", "q-bio.PE", "93B25, 92B20" ]
§ ABSTRACT In this study, we considered the problem of estimating epidemiological parameters based on physics-informed neural networks (PINNs). In practice, not all trajectory data corresponding to the populations estimated by epidemic models can be obtained, and some observed trajectories are noisy. Learning PINNs to estimate unknown epidemiological parameters using such partial observations is challenging. Accordingly, we introduce the concept of algebraic observability into PINNs. The validity of the proposed PINNs, named algebraically observable PINNs, in terms of parameter estimation and prediction of unobserved variables, is demonstrated through numerical experiments. § INTRODUCTION In this study, we investigated the problem of estimating epidemiological parameters. Epidemiological parameters refer to the parameters that appear in epidemiological models such as the SIR and SEIR models <cit.>, where populations are assumed to be divided into subgroups depending on infection states and the transitions among the subgroups. The values of these parameters are significant in investigating trends in infectious diseases such as COVID-19 <cit.>. In this study, we focused on the problem of estimating epidemiological parameters based on Physics-Informed Neural Networks (PINNs) <cit.>. PINNs are deep neural networks used to predict phenomena by incorporating prior knowledge so that the predictions adhere to certain governing equations. In this study, we assume that the governing equations are ordinary differential equations (ODEs) in which the dependent and independent variables are represented as x(t; θ) ∈ℝ^N and t, respectively. θ∈ℝ^M denotes the parameters of the ODEs, such as the epidemiological parameters. We denote the data observed at t as x_d(t). In this case, the input and output of the PINNs correspond to t and x(t), respectively. PINNs are trained such that the output x_nn, given w and θ, approximates the data x_d and satisfies the ODEs as closely as possible, where w ∈ℝ^L denotes the weight parameters of the PINNs. For this purpose, the loss function L(x_nn, x_d; w, θ) is defined as the sum of the error term on the data λ_data L_data(x_nn, x_d; w, θ), the error term of the governing equations λ_eq L_eq(x_nn, x_d; w), and the error term of the initial conditions λ_init L_init(x_nn, x_d; w), where λ_data, λ_eq, λ_init are constants representing the weight of each term. If θ is known, the learning of the PINNs is reduced to minimizing L(x_nn, x_d; w, θ) with respect to w, which is known as the forward problem. If θ includes unknown parameters, it is reduced to minimizing L(x_nn, x_d; w, θ) with respect to w and θ, which is known as the inverse problem. In practice, not all the trajectories corresponding to each subgroup can be obtained, or some of the observed trajectories are noisy because of the limited testing capacity for infection <cit.>. However, learning vanilla PINNs to estimate unknown epidemiological parameters using partial observations is challenging <cit.>. Motivated by this, in this paper, we propose a method for parameter estimation and population trajectory prediction using PINNs. Target scenario. In this study, we consider the SEIR model <cit.> as an example of an epidemic model: Ṡ = -βSI, Ė = βSI-ϵE, İ = ϵE-γI, Ṙ = γI, S + E + I + R = 1.
The SEIR model assumes that a population is divided into four subgroups. S, E, I and R denote the population ratios of the subgroups, that is, susceptible, exposed, infectious, and removed, respectively. S, E, I, R depend on time t, and the variables with dots represent their derivatives with respect to t. The transitions are described in (<ref>). The total population is assumed to be constant, as represented by the last equation of (<ref>). β, ϵ, γ denote the infection, onset, and removal rates, respectively. Throughout this study, these epidemiological parameters are assumed to be constant, and the initial values of (<ref>) are assumed to be known. In addition, only the trajectory of the infectious population rate is assumed to be available, and β, γ are known, whereas ϵ is unknown and thus estimated. Loss function of vanilla PINNs. In our target scenario, (S, E, I, R) and (β, ϵ, γ) correspond to x and θ, respectively. In the same manner as in Section <ref>, the observed data are denoted as I_d and the outputs of the PINN are represented as S_nn, E_nn, I_nn, R_nn. For the vanilla PINN given a full observation, L_data is defined as L_data = C_S MSE(S_d, S_nn) + C_E MSE(E_d, E_nn) + C_I MSE(I_d, I_nn) + C_R MSE(R_d, R_nn) = 1/n∑_i=1^n [ C_S(S_d(t_i)- S_nn(t_i))^2 + C_E(E_d(t_i)- E_nn(t_i))^2 + C_I(I_d(t_i)- I_nn(t_i))^2 + C_R(R_d(t_i)- R_nn(t_i))^2 ], where C_S, C_E, C_I, C_R are constant weights and n is the number of data points. In our target scenario, we set (C_S, C_E, C_I, C_R) = (0, 0, 1, 0) for the vanilla PINN. See Appendix <ref> for the definition of L_eq, L_init in the target scenario. § THE ALGEBRAIC OBSERVABILITY OF SEIR MODEL To overcome the difficulty in learning PINNs given partial observations, we introduce an algebraic version of observability <cit.>. Intuitively, a state variable x is algebraically observable if and only if there exists a polynomial equation whose variables are x, the observed variables, and the derivatives of the observed variables in the set of polynomial equations obtained through differential and algebraic manipulations of the state-space model. We explain this concept using our target scenario, in which I is assumed to be observed and S, E, R are assumed to be unobserved. E is algebraically observable. This is because İ = ϵ E - γ I is a polynomial equation in E, I, and İ, regarding the derivatives as independent of the variables. In general, to obtain polynomial equations showing the algebraic observability of a state variable x, all variables and derivatives except x, the observed variables, and the derivatives of the observed variables have to be eliminated through differential and algebraic manipulations of the state-space model. According to <cit.>, the elimination process can be reduced to eliminating a certain set of variables from a finite set of polynomial equations, regarding the derivatives of variables as independent of the variables themselves. This problem can be solved by using the Gröbner basis <cit.>. Furthermore, by using computer algebra software, e.g., Singular <cit.>, the process can be automated. See <ref> for further details. Consequently, noting that R = 1-(S+E+I), all the unobserved variables in our target scenario can be recovered from I and its derivatives as follows: S = (Ï + (ϵ + γ)İ + ϵγ I)/(βϵ I), E = (İ + γ I)/ϵ. § ALGEBRAICALLY OBSERVABLE PINNS In this section, we propose modified PINNs, named algebraically observable PINNs.
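As a minimal illustration of how these recovery relations can be used to generate pseudo-data for the unobserved populations (a sketch under the paper's assumption that İ and Ï are available at the observation points; not the authors' implementation):

```python
import numpy as np

beta, gamma = 0.26, 0.1                 # known parameters (values used in the experiments)

def recover_unobserved(I, dI, ddI, eps):
    """Recover S, E, R from the observed I and its first two derivatives
    via the algebraic-observability relations of the SEIR model."""
    E = (dI + gamma * I) / eps
    S = (ddI + (eps + gamma) * dI + eps * gamma * I) / (beta * eps * I)
    R = 1.0 - (S + E + I)
    return S, E, R

# toy usage; in practice I, dI, ddI come from the observed trajectory
I = np.array([0.010, 0.012, 0.015])
dI, ddI = np.gradient(I), np.zeros_like(I)   # placeholder derivatives for illustration only
S, E, R = recover_unobserved(I, dI, ddI, eps=0.2)
```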
Considering that learning the vanilla PINN with full observation is easier than with partial observation, we propose using the polynomial equations that determine algebraic observability to generate data corresponding to the unobserved variables and setting (C_S, C_E, C_I, C_R) to (1, 1, 1, 1). In our target scenario, we use (<ref>) to estimate the data for S, E, R. In the following, the baseline method refers to the learning framework of the vanilla PINN with partial observation, (C_S, C_E, C_I, C_R) = (0, 0, 1, 0), solved as an inverse problem. For both the baseline and the proposed method, we set (λ_data, λ_eq, λ_init) to (1, 1, 1). According to (<ref>), estimating these data requires the unknown values of İ, Ï at the observation points and ϵ. For simplicity, we assume that these values are available in advance. In the numerical experiment, we implemented a learning framework for algebraically observable PINNs with an unknown ϵ by introducing a Gaussian-process-based Bayesian optimization (GP-BO) <cit.> as an outer loop. GP-BO is often applied to hyperparameter optimization problems <cit.>. In our context, the ϵ required to generate data corresponding to S, E, R is regarded as a hyperparameter of the algebraically observable PINNs. Specifically, GP-BO based on Expected Improvement <cit.> is performed to estimate ϵ such that the test error of the algebraically observable PINNs is as small as possible. We regard the minimizer found by GP-BO as the estimated value of ϵ, denoted as ϵ̂. The learned algebraically observable PINN, given the unobserved data generated using ϵ̂, is used for predicting the trajectories of S, E, I, R. For comparison, we employed the baseline method with GP-BO, whose result is regarded as an initial estimate of ϵ for the vanilla PINN solved as an inverse problem. See Appendix D for the details of the numerical experiment. Results. In each GP-BO iteration, the minimum value of the test error for learning the PINNs was evaluated. See Appendix E for the details. The values of the loss function of GP-BO are shown in Figure <ref>. The estimated value of ϵ obtained using the proposed method, ϵ̂, was 0.198. The absolute error is 2× 10^-3. The predictions of S, E, I, R by the algebraically observable PINN are shown in Figure <ref>, which shows good fits not only for the observed I but also for the unobserved S, E, R. For the baseline method, the initial estimate of ϵ, denoted as ϵ̂_0, was estimated to be 0.099. Given ϵ̂_0, we examined the estimated values of ϵ at epochs 2000 and 26000, where the test loss was minimal (ϵ̂_1) and the training loss was minimal (ϵ̂_2), respectively. As shown in Figure <ref>, both ϵ̂_1 = 0.303 and ϵ̂_2 = 0.252 had larger absolute errors than those of the proposed method. The predictions of S, E, I, R by the PINN in the baseline method are shown in Figures <ref> and <ref>. § ACKNOWLEDGMENTS This work is supported by JST ACT-X Grant JPMJAX22A7 and JSPS KAKENHI Grant Numbers 22K21278 and 24K16963. iva David A. Cox, John B. Little, and Donal O'Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer New York, 2008. singular Wolfram Decker and Christoph Lossen. Computing in Algebraic Geometry: A Quick Start using SINGULAR. Springer Berlin, Heidelberg, 2006. sobolev Wojciech M. Czarnecki, Simon Osindero, Max Jaderberg, Grzegorz Swirszcz, and Razvan Pascanu. Sobolev Training for Neural Networks. In Neural Information Processing Systems, 2017.
<https://api.semanticscholar.org/CorpusID:21596346> hpo Paul Escapil-Inchauspé and Gonzalo A. Ruz. “Hyperparameter tuning of physics-informed neural networks: Application to Helmholtz problems.” Neurocomputing, 561:126826, 2023. mpinn Haoran Hu, Connor M. Kennedy, Panayotis G. Kevrekidis, and Hong-Kun Zhang. “A modified PINN approach for identifiable compartmental models in epidemiology with application to COVID-19.” Viruses, 14:2464, 2022. inaba Hisashi Inaba. Age-Structured Population Dynamics in Demography and Epidemiology. Springer Singapore, 2017. kalman Rudolf E. Kalman. “On the general theory of control systems.” IFAC Proceedings Volumes, 1:491–502, 1960. robot Mizuka Komatsu, Takaharu Yaguchi, and Kohei Nakajima. “Algebraic approach towards the exploitation of `softness': the input-output equation for morphological computation.” The International Journal of Robotics Research, 40:99–118, 2021. kuniya Toshikazu Kuniya. “Evaluation of the effect of the state of emergency for the first wave of COVID-19 in Japan.” Infectious Disease Modelling, 5:580–587, 2020. meshkat Nicolette Meshkat, Zvi Rosen, and Seth Sullivant. “Algebraic tools for the analysis of state space models.” Advances in Pure Mathematics, 77:171–205, 2018. EI Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. “The application of Bayesian methods for seeking the extremum.” Towards Global Optimization, 2:117–129, 2014. pinn Maziar Raissi, Paris Perdikaris, and George E. Karniadakis. “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.” Journal of Computational Physics, 378:686–707, 2019. gpbo Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. “Practical Bayesian optimization of machine learning algorithms.” In Advances in Neural Information Processing Systems, 2012. hporev Tong Yu and Hong Zhu. “Hyper-parameter optimization: A review of algorithms and applications.” ArXiv, 2020. § APPENDIX A: DEFINITION OF L_EQ IN OUR TARGET SCENARIO L_eq in our target scenario is defined as 1/n∑_i=1^n {(Ṡ_nn(t_i)+ βS_nn(t_i)I_nn(t_i))^2. . +(Ė_nn(t_i)-( βS_nn(t_i)I_nn(t_i)-ϵE_nn(t_i)))^2. . +(İ_nn(t_i)-(ϵE_nn(t_i)-γI_nn(t_i)))^2. . +(Ṙ_nn(t_i)- γI_nn(t_i))^2. . + (S_nn(t_i) + E_nn(t_i) + I_nn(t_i) + R_nn(t_i)-1)^2 }. L_init is defined as the sum of the mean squared error of x_nn(0) and x_d(0). § APPENDIX B: FURTHER DETAILS ON THE ALGEBRAIC OBSERVABILITY <CIT.> According to <cit.>, the order of the derivatives required to show the existence of such polynomial equations is N-1, where N is the dimension of the state vector. Hence, the problem of finding such equations is reduced to eliminating a certain set of variables from a finite set of polynomial equations regarding the derivatives of variables independent of the variables. This process can be performed algorithmically based on computer algebra, in particular, using the Gröbner basis<cit.>. Furthermore, computer algebra software, e.g., Singular<cit.>, allows the process to be automated. An example of a series of Singular commands used to investigate the algebraic observability of S in our target scenario is provided in Appendix <ref>. § APPENDIX C: EXAMPLE OF SINGULAR COMMANDS The following is an example of series of commands for Singular <cit.> to investigate algebraic observability of S in our target scenario. See Tabel <ref> for the notation of variables and parameters of (<ref>) in the commands. See Appendices of <cit.> for the details of the commands. 
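The Singular listing itself is not reproduced below, so as a rough stand-in the same elimination can be sketched in SymPy (an analogue under the same treat-derivatives-as-symbols convention, using direct substitution rather than the Gröbner-basis routine that Singular automates; it is an illustration, not the commands used in the paper):

```python
import sympy as sp

S, E, dE, I, dI, ddI = sp.symbols('S E Edot I Idot Iddot')
beta, eps, gamma = sp.symbols('beta epsilon gamma', positive=True)

f_E  = sp.Eq(dE, beta*S*I - eps*E)     # E-equation of the SEIR model
f_I  = sp.Eq(dI, eps*E - gamma*I)      # I-equation
f_dI = sp.Eq(ddI, eps*dE - gamma*dI)   # time derivative of the I-equation

E_sol  = sp.solve(f_I, E)[0]           # eliminate E   (E in terms of I, Idot)
dE_sol = sp.solve(f_dI, dE)[0]         # eliminate Edot (in terms of Idot, Iddot)
S_sol  = sp.solve(f_E.subs({E: E_sol, dE: dE_sol}), S)[0]

# expected: (Iddot + (epsilon + gamma)*Idot + epsilon*gamma*I) / (beta*epsilon*I)
print(sp.simplify(S_sol))
```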
[Listing: Example of Singular commands] [Listing: Part of the output of the commands] § APPENDIX D: DETAILS OF SETUP FOR NUMERICAL EXPERIMENTS §.§ Data preparation For the proof of concept, we used artificial data generated by the numerical simulation of (<ref>) as the ground truth. Specifically, (<ref>) is numerically solved over the time domain [0, 200] given the initial states (S(0), E(0), I(0), R(0)) = (0.99, 0.0, 0.01, 0.0) and (β, ϵ, γ) = (0.26, 0.2, 0.1). The parameter values were selected based on <cit.>. The Dormand-Prince method with a time-step size Δ t = 0.2 was used as the numerical solver. For the input data, n=50 observation points are sampled over [0, 200] for both training and testing. For the training data, the observation points were set at evenly spaced intervals. For the test data, we randomly sampled points from a uniform distribution over [0, 200]. As mentioned in Section <ref>, the values of İ, Ï at the observed points are required for the proposed framework. For simplicity, we assumed that these values were provided in advance, leaving their approximation for future work. In the following, the values of İ and Ï obtained by substituting the numerical solutions of (<ref>) into the first and second derivatives of (<ref>) are substituted into (<ref>). §.§ Implementation For both the baseline and proposed methods, we used the same settings unless otherwise specified. For the PINN, we used fully connected neural networks with three hidden layers of 50 units each, followed by the hyperbolic tangent activation function. Glorot uniform initialization was selected for the weight matrices. In the proposed framework, the objective of the outer loop is to estimate ϵ. In particular, we applied GP-BO using the Python package scikit-optimize. As the acquisition function for GP-BO, we used Expected Improvement <cit.>. We define the minimum test error of learning the algebraically observable PINNs as the loss function of GP-BO. The number of iterations was set to 30, and the search space was set to [0.0, 0.5]. In the baseline method, the objective of the outer loop is to estimate the initial value of ϵ used to learn the PINN as an inverse problem in the inner loop. In the proposed framework, the objective of the inner loop is to predict the trajectories of S, E, I, R through learning algebraically observable PINNs based on (<ref>). The PINNs were trained with Adam optimization for 30000 iterations with a learning rate of 10^-3. In the baseline method, the objectives of the inner loop are to predict the trajectories of S, E, I, R and to estimate ϵ. § APPENDIX E: DETAILS OF THE RESULTS OF NUMERICAL EXPERIMENTS
http://arxiv.org/abs/2407.13290v1
20240718084445
The Dance of Odd-Diffusive Particles: A Fourier Approach
[ "Amelie Langer", "Abhinav Sharma", "Ralf Metzler", "Erik Kalz" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.soft" ]
University of Augsburg, Institute of Physics, D-86159 Augsburg, Germany University of Augsburg, Institute of Physics, D-86159 Augsburg, Germany Leibniz-Institute for Polymer Research, Institute Theory of Polymers, D-01069 Dresden, Germany University of Potsdam, Institute of Physics and Astronomy, D-14476 Potsdam, Germany Asia Pacific Centre for Theoretical Physics, KR-37673 Pohang, Republic of Korea erik.kalz@uni-potsdam.de University of Potsdam, Institute of Physics and Astronomy, D-14476 Potsdam, Germany § ABSTRACT Odd-diffusive systems are characterized by transverse responses and exhibit unconventional behaviors in interacting systems. To address the dynamical interparticle rearrangements in a minimal system, we here exactly solve the problem of two hard disk-like interacting odd-diffusing particles. We calculate the probability density function (PDF) of the interacting particles in the Fourier-Laplace domain and find that oddness rotates all modes except the zeroth, resembling a “mutual rolling” of interacting odd particles. We show that only the first Fourier mode of the PDF, the polarization, enters the calculation of the force autocorrelation function (FACF) for generic systems with central-force interactions. An analysis of the polarization as a function of time reveals that the relative rotation angle between interacting particles overshoots before relaxation, thereby rationalizing the recently observed oscillating FACF in odd-diffusive systems. The Dance of Odd-Diffusive Particles: A Fourier Approach Erik Kalz ============================================================ § INTRODUCTION The description of dissipative systems with an inherent broken time-reversal or parity symmetry has recently attracted considerable attention <cit.>. Relevant systems can be found in various domains of statistical physics, and include Brownian particles under the effect of Lorentz force <cit.>, skyrmionic spin structures <cit.> and active chiral particles <cit.>. The transport coefficients of these systems show a characteristic transverse response to perturbations, which is encoded in antisymmetric off-diagonal elements in transport tensors. These characteristic elements behave odd under the transformation of the underlying (broken) symmetry, which serves as the namesake for these odd systems. Interestingly, a transverse response does not necessarily imply an anisotropic description of the system. In fact, in two spatial dimensions, odd transport coefficients represent the most general description of an isotropic physical system <cit.>. In this work, we specifically consider odd-diffusive systems <cit.>, which in two spatial dimensions are characterized by a diffusion tensor of the form 𝐃 = D_0 (1 + κϵ), where D_0 is the bare diffusivity with physical diemensions [D_0] = m^2/s, and κ is the characteristic odd-diffusion parameter with [κ] = 1, encoding the transverse response. 1 is the identity tensor and ϵ the fully antisymmetric Levi-Civita symbol in two dimensions (ϵ_xy= -ϵ_yx =1 and ϵ_xx=ϵ_yy =0). While the oddness parameter enters explicitly in the diffusion tensor in Eq. (<ref>), the mean-squared displacement of a freely diffusing particle is determined only by the symmetric part of the diffusion tensor. However, in a system of interacting particles, oddness qualitatively alters the diffusive behavior by affecting the self-diffusion <cit.>. The self-diffusion measures the dynamic of a tagged particle in a crowded system and explicitly accounts for interactions of particles <cit.>. 
Independent of the microscopic details of the inter-particle interactions, the self-diffusion is usually reduced <cit.>. In odd-diffusive systems, however, it was recently shown that even purely repulsive interactions can enhance the self-diffusion <cit.>. Via expressing the self-diffusion as a time-integral of the (interaction) force autocorrelation function (FACF) <cit.>, these findings could further be related to the unusual microscopic particle rearrangements in odd-diffusive systems <cit.>. For the overdamped equilibrium system under consideration here and consistent with the reduction of the self-diffusion <cit.>, autocorrelation functions in general decay monotonically in time <cit.>. However, the enhancement of self-diffusion for odd systems can occur only if the FACF switches sign, i.e., becomes non-monotonic in time. This apparent contradiction could be resolved by recognizing that the time-evolution operator in an odd-diffusive system becomes non-Hermitian when κ≠ 0 in Eq. (<ref>), thereby breaking the monotonicity requirements on the FACF <cit.>. Surprisingly, in this work, it was further observed that the FACF even oscillates in time. While this is consistent with the observed enhancement of the self-diffusion in odd systems, a physical interpretation of this phenomenon remains elusive. In the present work, we show that the non-monotonic FACF originates in the unusual dynamics of interacting odd particles. When a pair of odd-diffusive particles interacts, their motion resembles that of two mutually rotating particles <cit.>. This is despite the fact that the interaction potential is central-force, i.e., it acts along the vector that connects the centers of two particles. Our approach is based on the exact analytical derivation of the propagator of two interacting odd-diffusive particles. The joint probability density function (PDF) for the two particles separates into a center-of-mass PDF and a PDF of the relative coordinate, capturing the interplay of odd diffusion and interactions. We can express the relative PDF in a Fourier series to find that only certain modes enter the averages of observables, such as in the FACF. In particular, it is the polarization mode of the relative PDF, representing the positioning of the particles with respect to each other in time, which determines the full tensorial force autocorrelation behavior. By analyzing the polarization mode in time, we understand what originates the oscillating FACF. Interacting odd-diffusive particles mutually rotate further in time before they eventually relax. Figuratively they perform a dance, reminiscent of the classical step of the Viennese waltz. The remainder of this work is organized as follows: In Section <ref> we set the problem of two interacting odd-diffusive particles, which is then exactly solved in Section <ref>. The propagator can be put into the form of a Fourier series, of which the numerical analysis of the modes is presented in Section <ref>. In Section <ref> we show that only certain Fourier modes enter in averages of observabels, in particular into the FACF. Finally, in Section <ref> we summarize and give an extensive overview of systems, which can be subsumed under the terminology of odd diffusion. In Appendix <ref> we present the detailed solution of the problem of interacting particles and Appendix <ref> lists the relevant integral relations. § INTERACTING ODD-DIFFUSIVE PARTICLES We study the dynamics of two odd-diffusive particles at positions 𝐱_1 and 𝐱_2 in two spatial dimensions. 
The particles are assumed to interact with the potential energy U, which we assume to be of the hard-disk type U(𝐱_1, 𝐱_2) = U(r) = ∞, r ≤ 1 0, r > 1 , where r = |𝐱_1 - 𝐱_2|/d is the rescaled relative distance between the particles and d is the particle diameter. The conditional joint PDF, i.e., the propagator for the particles to be found at positions (𝐱_1, 𝐱_2) at time t given that they were at positions (𝐱_1,0, 𝐱_2,0) at time t_0, P(t) = P(𝐱_1, 𝐱_2, t|𝐱_1,0, 𝐱_2,0,t_0), evolves according to the time-evolution equation t P(t) = ∇_1 · [ 𝐃∇_1 + μ∇_1 U(𝐱_1, 𝐱_2)] P(t) + ∇_2 · [ 𝐃∇_2 + μ∇_2 U(𝐱_1, 𝐱_2)] P(t), where the initial condition is given as P(t=t_0) = δ(𝐱_1 - 𝐱_1,0) δ(𝐱_2- 𝐱_2,0). In Eq. (<ref>), ∇_1, ∇_2 are the partial differential operators for the positions of the particles, and 𝐃 is the odd-diffusion tensor of Eq. (<ref>). μ is the mobility tensor, and we assume the fluctuation-dissipation relation (FDR) to hold 𝐃 = k_BT μ, where k_B is the Boltzmann constant and T the temperature of the solvent. Note that even though Eq. (<ref>) looks formally equivalent to a Fokker-Planck equation for the joint PDF P(t), strictly spoken it is not, due to the antisymmetric (odd-diffusive) elements in the diffusion tensor <cit.>. However, based on the assumption of the FDR Eq. (<ref>) resembles equilibrium dynamics with a unique steady-state solution <cit.>. §.§ Analytical solution We rescale space with the diameter of the particle d, 𝐱_i →𝐱_i/d, and time by the natural time-scale of diffusing the radial distance of a particle diameter τ_d = d^2/(2D_0), t →τ = t/τ_d. Given the radial symmetry of the interaction potential U(r), the time-evolution equation (<ref>) can be written in terms of a center-of-mass coordinate 𝐱_c = (𝐱_1 + 𝐱_2)/2 and a relative coordinate 𝐱 = 𝐱_1 - 𝐱_2 as τ P(τ) = 1/4∇_𝐱_c^2 P(τ) + ∇_𝐱·(1 + κϵ) [∇_𝐱 + β ∇_𝐱 U(r)] P(τ), where β = 1/k_BT and ∇_𝐱_c, ∇_𝐱 are the partial differential operators corresponding to the center-of-mass and relative coordinates. Note that (1 + κϵ) = 𝐃/D_0 represents the dimensionless odd-diffusion tensor of Eq. (<ref>). As Eq. (<ref>) decouples the coordinates, the propagator can be written as P(𝐱_1, 𝐱_2, τ|𝐱_2,0, 𝐱_1,0,τ_0) = p_c(𝐱_c, τ|𝐱_c,0, τ_0) p(𝐱, τ|𝐱_0, τ_0) and the center-of-mass problem can be solved straight forwardly in the form p_c(𝐱_c, τ|𝐱_c,0) = 1/πτ exp(- |𝐱_c - 𝐱_c,0|^2/τ), where we have set τ_0 = 0 as the underlying stochastic process is time-translational invariant <cit.>. The relative PDF can be put into the form of a (Cartesian) multipole expansion <cit.> which in polar coordinates 𝐱 = (r, φ), 𝐱_0 = (r_0, φ_0), is given as p(𝐱,τ|𝐱_0) =Θ(r-1)[ϱ(r, τ|r_0) + σ(r, τ|r_0) ·𝐞(Δφ) + Q(r, τ |r_0):(𝐞(Δφ) ⊗𝐞(Δφ) - 1/2)+ …], where 𝐞(Δφ) = (cos(Δφ), sin(Δφ))^T, Δφ = φ-φ_0 is the angular difference. 𝐐(𝐞⊗𝐞 - 1/2)= ∑_α, β=1^2 Q_αβ(e_β e_α - δ_βα/2) denotes the full contraction, where 𝐞⊗𝐞 is the outer product. We again set τ_0=0 here. Note the multiplicative Heaviside function Θ(r-1) in Eq. (<ref>), which is defined as Θ(x) =1, if x>1 and Θ(x) =0 otherwise. This ensures the no-overlap condition of the hard-disk interaction potential in Eq. (<ref>). The explicit Cartesian multipole expansion for the relative PDF in Eq. (<ref>) constitutes the first terms of a Fourier expansion for p(𝐱,τ|𝐱_0) which reads p(𝐱,τ|𝐱_0) = Θ(r-1)/2π[ a_0(r, τ|r_0) + 2∑_n=1^∞[ a_n(r, τ|r_0); b_n(r, τ|r_0) ]·𝐞(n Δφ) ]. Here a_n and b_n are the Fourier coefficients of order n, n∈ℕ_0. 
We introduce the notation p = Θ(r-1) [p_0 + ∑_n=1^∞ p_n], where p_0 = a_0/(2π) and p_n = [a_n cos(n Δφ) + b_n sin(n Δφ)]/π, n≥ 1, for our subsequent shorthand notation of the Fourier modes. The Cartesian modes in Eq. (<ref>) are thus connected to the Fourier modes as ϱ(r, τ|r_0) = a_0(r,τ|r_0)/(2π), which is the (scalar) mean positional PDF, σ(r, τ|r_0) = 1/π[ a_1(r,τ|r_0); b_1(r,τ|r_0) ], which is the (vectorial) polarization order PDF, and Q(r, τ|r_0) = 1/π[ a_2(r,τ|r_0) - b_2(r,τ|r_0); b_2(r,τ|r_0) -a_2(r,τ|r_0) ], which is the (tensorial) nematic order PDF. In Appendix <ref> we provide the full solution to the relative problem, thereby following the original derivations in Refs. <cit.>, and show that the general Fourier coefficients a_n, b_n are given by a_n(r,τ|r_0) = (1/(2τ)) e^-(r_0^2+r^2)/(4τ) I_n (r_0 r/(2τ)) - ℒ^-1{ k_n(r, s|r_0) [s K_n^'(√(s)) I_n^'(√(s)) + (n κ)^2 K_n(√(s)) I_n(√(s)) - δ(r_0 - 1) √(s) K_n^'(√(s))/(2K_n(√(s)))] }, b_n(r,τ|r_0) = - nκ (1 - δ(r_0 - 1)/2) ℒ^-1{k_n(r, s|r_0) }, where we used the abbreviation k_n(r, s|r_0) = K_n(r√(s)) K_n(r_0√(s))/[(√(s) K^'_n(√(s)))^2 + (nκ K_n(√(s)))^2]. Here s denotes the (dimensionless) Laplace variable. The Laplace transform of a function f(τ) is defined as ℒ{f}(s) = ∫_0^∞dτ exp(-sτ)f(τ), using the rescaled time variable τ = t/τ_d. The prime denotes the derivative g^'(a) = dg(x)/dx|_x=a, and I_n(x), K_n(x) are the modified Bessel functions of the first and second kind, respectively <cit.>. As apparent from Eqs. (<ref>) and (<ref>), the inverse Laplace transformations remain unfeasible analytically at the moment, and we rely on established numerical Laplace inversion methods. We stress that the problem of solving for the full propagator of interacting particles in Eq. (<ref>) can be formulated in the language of scattering theory <cit.>, thus allowing the use of well-developed tools from quantum field theory <cit.>. Odd diffusion here adds the perspective of non-Hermiticity with potentially new insights <cit.>. §.§ Numerical results and Fourier modes In Fig. <ref> we compare the positional mode p_0 = Θ(r-1) ϱ, the polarization mode p_1 = Θ(r-1) [σ·𝐞], and the nematic mode p_2 = Θ(r-1) [𝐐 (𝐞⊗𝐞 - 1/2)] for a normal (κ=0) and an odd-diffusive (κ=1) system of interacting particles. The modes are evaluated at fixed time τ=1 and for (r_0, φ_0) = (1.01, 0), i.e., particles are initially placed at a distance of 1.01 times their diameter along the (arbitrarily) chosen x-axis. We observe that κ≠ 0 does not affect the interacting particles' mean positional distribution p_0, but rotates higher order modes in comparison to κ=0. We can understand this by observing from Eq. (<ref>) that b_n ∝ nκ, such that p_0 is not affected by κ≠ 0, while for higher order modes b_n contributes for an odd-diffusive system. It is of interest to analyse the contribution of the higher-order modes to the time evolution of the relative PDF p. We define a measure of the significance of a mode of order n ≥ 1 as δ_n(τ) = ∫d𝐱 |p_n(𝐱,τ| 𝐱_0)|/∫d𝐱 p_0(𝐱,τ|𝐱_0). We take the absolute value |p_n| in the definition of δ_n(τ), as the full space integral of the polarization, nematic and all higher order modes vanishes due to the orthogonality of the harmonic functions, ∫d𝐱 p_n = 0, n≥ 1. In contrast, ∫d𝐱 p_0 =1 (see Appendix <ref>), which we only include in Eq. (<ref>) to avoid numerical discretization errors. Fig. <ref> shows δ_n(τ) for n=1,…,5 and for different values of the odd-diffusion parameter κ =0, 1, 5.
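As a rough illustration of how these modes enter the numerics (a sketch, not the authors' code): given arrays of the coefficients a_n(r,τ|r_0) and b_n(r,τ|r_0) on a radial grid, obtained e.g. by numerically inverting the Laplace-domain expressions above, the truncated Fourier series for p and the significance measure δ_n can be assembled as follows.

```python
import numpy as np

def relative_pdf(a, b, dphi):
    """Truncated Fourier series p = (a_0 + 2*sum_n [a_n cos(n dphi) + b_n sin(n dphi)]) / (2 pi).
    a, b: arrays of shape (n_max+1, n_r) holding the coefficients on a radial grid (r >= 1)."""
    p = np.tile(a[0][:, None], (1, dphi.size)).astype(float)
    for n in range(1, a.shape[0]):
        p = p + 2.0 * (a[n][:, None] * np.cos(n * dphi) + b[n][:, None] * np.sin(n * dphi))
    return p / (2.0 * np.pi)

def delta_n(a, b, r, n, n_phi=721):
    """Mode-significance measure delta_n = int |p_n| dA / int p_0 dA on the polar grid."""
    dphi = np.linspace(-np.pi, np.pi, n_phi)
    p_n = (a[n][:, None] * np.cos(n * dphi) + b[n][:, None] * np.sin(n * dphi)) / np.pi
    p_0 = np.repeat(a[0][:, None] / (2.0 * np.pi), n_phi, axis=1)
    w = r[:, None]                                   # polar area element r dr dphi
    num = np.trapz(np.trapz(np.abs(p_n) * w, dphi), r)
    den = np.trapz(np.trapz(p_0 * w, dphi), r)
    return num / den

# toy usage with placeholder coefficients (in practice obtained from the inverse Laplace transform)
r = np.linspace(1.0, 6.0, 200)
a = np.exp(-r)[None, :] * np.array([1.0, 0.5, 0.2])[:, None]
b = np.exp(-r)[None, :] * np.array([0.0, 0.3, 0.1])[:, None]
print(delta_n(a, b, r, n=1))
```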
When considering δ_n as a measure for the relevance of the nth order mode, we observe that the modes are ordered consecutively in their contribution to the relative PDF p and that considering higher order modes becomes important for τ→ 0 as p(τ) → p(0) ∝δ(𝐱 - 𝐱_0), but higher order modes become less important for τ≫ 0. As can be seen in Figs. <ref>(b), (c), κ > 0 shifts the decay of δ_n(τ) to even shorter times. In Fig. <ref> we plot the full relative PDF p(τ) at times τ =1, 5, 50 for a normal (κ=0) and an odd-diffusive (κ=1) system. Again we choose the initial condition to be (r_0, φ_0) = (1.01, 0) and, based on the observation in Fig. <ref>, we truncated the Fourier series at n=10. Comparing the normal and odd-diffusive relative PDF, we observe that the κ-induced rotation of every but the zeroth order Fourier mode persists into the full PDF. For κ=0, b_n=0 for all modes, and the Fourier representation of the relative PDF therefore only contains cosine terms. The relative PDF of a normal particle thus is symmetric around Δφ=0, π for all times. This symmetry implies that particles encounter the space from both sides after a collision with equal likelihood. However, this does not hold for odd-diffusive particles. κ≠ 0 introduces a handedness in the diffusive exploration of space. For interacting odd-diffusive particles, the additional rotational probability flux introduced via Eq. (<ref>) results in a preferred direction after a collision depending on the sign of the odd-diffusion parameter κ. This effect was recently observed from Brownian dynamics simulations and termed as “mutual rolling” <cit.>. The interaction-induced symmetry breaking has far-reaching consequences for observable transport coefficients such as the self-diffusion coefficient. In odd-diffusive systems, even though resembling equilibrium overdamped dynamics, the self-diffusion can be enhanced by interactions instead of being reduced as for a normal system <cit.>. Even though, seemingly contradicting equilibrium statistical mechanics theorems <cit.>, the interaction-enhanced self-diffusion could recently be rationalized by observing that the time-evolution in odd-diffusive systems (see Eq. (<ref>)) becomes non-Hermitian for finite κ <cit.>. §.§ Relevance of the polarization mode for the force autocorrelation The force autocorrelation tensor (FACT) 𝐂_F(τ) encodes the most detailed microscopic information in an interacting system. Via a Taylor-Green-Kubo relation <cit.> it encodes the particle-particle interaction effects in the self-diffusion. For a stationary system, 𝐂_F(τ) is defined as 𝐂_F(τ) = ⟨𝐅(𝐱⃗) ⊗𝐅(𝐱⃗_0)⟩ = ∫d𝐱⃗∫d𝐱⃗_0 𝐅(𝐱⃗) ⊗𝐅(𝐱⃗_0) P_N(𝐱⃗, τ, 𝐱⃗_0, τ_0), where 𝐱⃗ = {𝐱_1, …, 𝐱_N} and similarly 𝐱⃗_0 for a system of, in general, N particles. P_N is the N-particle joint PDF, which can be rewritten as P_N(𝐱⃗, τ, 𝐱⃗_0, τ_0) = P_N(𝐱⃗, τ| 𝐱⃗_0) P_eq(𝐱⃗_0, τ_0), assuming the particles where in equilibrium at time τ_0, which we take again to be 0. The force on the tagged particle (particle one) is 𝐅(𝐱⃗) = - ∇_1 U_N (𝐱⃗), and we assume a pairwise additive and radially symmetric potential U_N (𝐱⃗) = ∑_i,j =1^N U(r_ij)/2, where r_ij = |𝐱_i - 𝐱_j|, i≠ j. In the dilute limit, we can safely assume that only two-body correlations are important and thus ignore correlations between the untagged particles. The equilibrium PDF can be approximated as P_eq(𝐱⃗_0) = 1/(VZ_N) ∏_i=2^N exp(-β U(r_1i,0)/2), where V ⊂ℝ^2 is the bounded space of diffusion and Z_N=∫d𝐱⃗_0 ∏_i=2^N exp(-β U(r_1i,0)/2) is the N-particle partition function. 
The conditional PDF, i.e., the N-particle propagator can be approximated as P_N(𝐱⃗, τ| 𝐱⃗_0)= (1/V) ∏_i=2^N p(𝐱_1i, τ| 𝐱_1i, 0), where p(𝐱_1i, τ| 𝐱_1i, 0) is the PDF of the relative coordinate 𝐱_1i = 𝐱_1 - 𝐱_i, similar to Eq. (<ref>) but for a generic two-body interaction potential U. Following these approximations, all but one particle (particle two) can be integrated out in Eq. (<ref>) and we denote as before, 𝐱 = 𝐱_1 - 𝐱_2. The central interaction force between the particles can be written as 𝐅(𝐱) = F(r) 𝐞(φ), where 𝐞(φ) = (cos(φ), sin(φ))^T, as before, and coincides with the radial unit vector. In the dilute limit, the FACT of Eq. (<ref>) thus becomes 𝐂_F(τ) = N-1/V∫d𝐱∫d𝐱_0 F(r) F(r_0) e^-β U(r_0)/Z_2 × p(𝐱, τ| 𝐱_0) 𝐞(φ) ⊗𝐞(φ_0) = N-1/Vπ∫dr ∫dr_0 F(r) F(r_0) e^-β U(r_0)/Z_2 × [a_1(r, τ|r_0) 1 - b_1(r, τ|r_0) ϵ]. Here we used the orthogonality of the Fourier modes in the relative PDF p(τ) (analogously to Eq. (<ref>)) from Eq. (<ref>) to Eq. (<ref>) to find that only the polarization mode σ(r, τ|r_0) = (a_1(r, τ|r_0), b_1(r, τ|r_0))^T/π (analogously to Eq. (<ref>)) contributes to the FACT. Note that relations analogous to Eqs. (<ref>) and (<ref>) also hold in a system with generic radially symmetric interaction potential U(r), specifically also for forces with transverse, odd components <cit.>. We can specify Eq. (<ref>) for the hard-disk interaction potential of Eq. (<ref>) by observing that P_eq(𝐱_0) = Θ(r_0-1)/V^2 and that we can rewrite the singular interaction force via the trick βΘ(r-1) 𝐅(r) = δ(r-1) 𝐞(φ) <cit.>, obeying the same generic form of a central force as before. We can perform the radial integral of Eq. (<ref>) and find that the FACT of a hard-disk system in the dilute limit is given by 𝐂_F(τ) = β^-2ϕ [a_1(1, τ|1) 1 - b_1(1, τ|1) ϵ], where ϕ=(π d^2/4) (N/V) is the area fraction in dimensional form <cit.>. Thus we understand that the behavior of the polarization mode σ(τ) in time governs the behavior of the FACT for interacting systems, in particular for a hard-disk interaction. Note specifically here, that the off-diagonal correlation (∝ b_1 ϵ) is directly proportional to κ, and therefore might serve as a characteristic to odd diffusion <cit.>. To further analyze the polarization mode in the hard system, we define γ(τ) = arctan(b_1(1,τ|1)/a_1(1,τ|1)), as the mean angle between the polarization vector σ(1, τ|1) as a function of time and its initial direction, which defines the x-axis in the system, see also inset in Fig. <ref>(a). We plot γ(τ) for different values of κ as a function of time in Fig. <ref>(a) and use the arctan2(·)-function for numerical evaluation to obtain angles within the interval (-π, π). We observe that for κ=0 the initial direction of the polarization does not change with time and stays constant. Recalling the polarization mode in Fig. <ref>(b), the location of the extrema thus remain unchanged, meaning that the particles are symmetrically back-reflected from the center of the collision. For κ≠ 0, however, γ(τ) changes in time (see again Fig. <ref>(e)). The mean-direction of the polarization develops an extremum γ_ext and relaxes to a constant, non-zero value γ_∞ for long times. For κ>0.88, γ_ext< -π/2 and for κ>1 even γ_∞ < -π/2. The interval κ∈(0.88, 1) thereby aligns with the odd-diffusion interval, for which we reported an oscillating FACF earlier <cit.>. These sign-switches in the FACF coincide with what we observe now for the direction of σ(τ), see also inset in Fig. 
<ref>(a): γ(τ) crosses -π/2 for some time (negative FACF) and eventually relaxes to angles larger than -π/2 again (positive FACF). We interpret this behavior as interacting odd-diffusive particles, which for |κ|>0.88 rotate more than π/2 but relax to a steady state with a relative angle |γ_∞| < π/2 as long as |κ|<1; see Fig. <ref>(b) for a sketch. This behavior also rationalizes the oscillating FACF in the same interval, as ⟨𝐅(τ)·𝐅(0)⟩ = ⟨ F(τ) F(0) cos(π/2)⟩ = 0 here and the particles oscillate around that angle. However, a steady state angle |γ_∞| which is smaller than the maximal angle |γ_ext| is a generic observation from Fig. <ref>(a). On average particles always rotate further in time, as they finally relax. Note that for κ→-κ, we have that γ(τ) → - γ(τ), as a_n is even and b_n is an odd function of κ. Thus, κ<0 results in the same phenomenology for the relaxation of γ(τ) and thus for the FACF. Further, if |κ| →∞, we find that |γ_ext| → |γ_∞| →π, which means, that at most odd particles exchange positions but do not rotate any further. § DISCUSSION We here presented the exact analytical solution and numerical evaluation of two hard disk-like interacting odd-diffusing particles in two spatial dimensions. Odd diffusion thereby is characterized by antisymmetric elements ∝κ in the diffusion tensor 𝐃 = D_0 (1 + κϵ). Our analysis showed that the two-particle problem separates into a center-of-mass and a relative coordinate problem, of which the first can be solved straightforwardly, and the latter incorporates the nontrivial effects of interactions and odd diffusion. The relative PDF can be written as a Fourier series, and we observe that oddness rotates all apart from the zeroth order mode in time, which represents the unaffected positional distribution of the relative coordinate. The modes are consecutively ordered in their contribution to the relative PDF, and when summed up show a rotated form of the PDF. This effect is a characteristic of odd diffusion and has been termed as “mutual rolling” earlier <cit.> as the particles rotate around each other while interacting, in contrast to normal diffusive (κ=0) particles, which are symmetrically back-reflected when interacting. The representation of the relative PDF in its Fourier modes becomes useful in the analysis of microscopic correlation functions, specifically the FACF. For any central interaction force, only the polarization mode of the relative PDF determines the correlation function. We conjecture here that a similar phenomenology holds for other (radially symmetric) observables of arbitrary tensorial order in an interacting system; the corresponding mode of the relative PDF might determine the expectation value and even the correlation function. We used the analytical access to the polarization mode, to understand the average configurations of the hard-disk-like interacting odd-diffusive particles. The maximal relative rotation angle overshoots the final relaxation on average, which quantitatively aligns with an oscillating FACF, recently reported for these systems <cit.>. Our work shines light on the novel way of particle interactions in odd-diffusive systems. Thereby it might serve as a reference case for interactions in the various odd-diffusive systems such as in equilibrium, e.g., Brownian particle under Lorentz force <cit.>, with a long-lasting history in statistical mechanics <cit.>, or skyrmionic spin structures <cit.>. 
Here numerical studies recently reproduced the interaction-enhanced self-diffusion <cit.>, which originates in the mutual rolling effect <cit.>. But there exist also non-equilibrium odd-diffusive systems, such as systems under shear <cit.>, or active chiral particles <cit.>. In general, for systems that break time-reversal symmetry, the relevance of off-diagonal correlation functions for transport properties has attracted considerable interest recently <cit.>. Odd diffusion further might serve as the unifying terminology for systems showing transverse responses, such as systems with Magnus forces <cit.>, Coriolis forces <cit.>, systems in complex (porous) environments <cit.>, or to describe non-conservative force fields as found in optical tweezer experiments <cit.>. Systems with an artificial transverse interaction component <cit.> recently showed that odd interactions enhance the sampling of configurations in dense systems, originating again in the unique particle rearrangements of odd-diffusive systems. Transverse forces further have been the subject of interest in so-called linear-diffusive systems <cit.>. Odd diffusion is finally also relevant for strongly rotating <cit.> or magnetized plasmas, as found, e.g., in the realm of astrophysics, where antisymmetric transport is relevant to describe the movement of energetic particles through magnetized plasmas such as the interplanetary and interstellar medium <cit.>. Acknowledgements. The authors acknowledge the help of Timo J. Doerries in the creation of Fig. <ref>(b). A. S., R. M., and E. K. acknowledge support by the Deutsche Forschungsgemeinschaft (grants No. SH 1275/5-1, ME 1535/16-1 and SPP 2332 - 492009952). § ANALYTICAL SOLUTION OF THE RELATIVE PROBLEM This Appendix closely follows Ref. <cit.> in its presentation of the analytical solution to the relative problem, which itself adapted the work of Hanna, Hess and Klein <cit.> to odd-diffusive systems. The equation for the relative PDF p(τ) = p(𝐱, τ|𝐱_0), where 𝐱 = 𝐱_1 - 𝐱_2, follows from Eq. (<ref>) as ∂_τ p(τ) = ∇_𝐱·(1 + κϵ)[∇_𝐱 + β ∇_𝐱 U(r)] p(τ), and reads in (rescaled) polar coordinates 𝐱 = (r, φ) as ∂_τ p(τ) = 1/r^2 ( r ∂_r [ r ∂_r p(τ) ] + r ∂_r [ r β (∂_r U(r)) p(τ) ] + ∂_φ^2 p(τ) - κ r β (∂_r U(r)) ∂_φ p(τ) ). Note that space is rescaled with the diameter of the particle d, 𝐱→𝐱/d, and time is rescaled by the natural time-scale of diffusing the radial distance of a particle diameter, τ_d = d^2/(2D_0), t →τ = t/τ_d. The initial condition p(τ=0) = δ(𝐱 - 𝐱_0) Θ(r-1) in polar coordinates becomes p(τ=0) = δ(r - r_0)/r_0 δ(φ - φ_0) Θ(r-1). The angular part of the initial condition can be expanded into a Fourier series, δ(φ - φ_0) = ∑_n=-∞^∞exp(in (φ - φ_0)) /2π, which, following Refs. <cit.>, we use as an ansatz to solve for p(τ) as p(𝐱,τ|𝐱_0) = Θ(r - 1)/2π∑_n=-∞^∞ R_n(r,τ|r_0) e^in(φ- φ_0). Note that except for the radial functions R_n(r,τ|r_0), all other parts of the ansatz are time-independent. Observing that for the hard-disk potential, see Eq. (<ref>), we have that exp(-β U(r)) = Θ(r-1), we can replace the otherwise singular interaction force ∂ U(r)/∂ r in Eq. (<ref>) via - βΘ(r-1) ∂ U(r)/∂ r = ∂ e^-β U(r)/∂ r = δ(r-1). Rewriting Eq. (<ref>) for the radial functions R_n(τ)= R_n(r,τ|r_0), we thus find ∂_τ R_n(τ) = 1/r^2 ( r ∂_r [ r ∂_r R_n(τ) ] - n^2 R_n(τ) ) in the domain r ≥ 1 and for each order n∈ℤ, and
This generalized condition can be found in the literature under the name of oblique boundary conditions, see for example <cit.>. Eq. (<ref>) is equipped with a second boundary condition, which can be found from the natural boundary condition on p(τ), lim_r →∞ p(τ) = 0, as lim_r →∞ R_n(τ) = 0 and the initial condition on R_n(τ) translates from Eq. (<ref>) as R_n(τ=0) = δ(r- r_0)/r_0. Note that Eq.(<ref>) together with the boundary condition (<ref>) also forms the basis of solving the problem by using the language of scattering theory <cit.>, the only difference being the introduction of the the generalized von-Neumann boundary condition for interacting odd-diffusive particles. For the particular solution of Eq. (<ref>), R_n^part(τ), we make the ansatz R_n^part(τ) = ∫_0^∞u w_n(r,u|r_0) e^-τ u^2, which when inserted into Eq. (<ref>) shows that w_n(r,u|r_0) satisfy the Bessel equation <cit.> with solutions J_n(ur) and Y_n(ur) as the Bessel functions of first kind and second kind, respectively. Matching the particular solution with the initial condition R_n(τ=0) = δ(r-r_0)/r_0 and noting that the delta-distribution δ(·) can be expanded in Bessel functions of the first kind (see Eq. (<ref>)) we find that Y_n(ur) does not contribute to the particular solution. After a Laplace transformation with (dimensionless) Laplace variable s, the homogeneous part of Eq. (<ref>) appears as the modified Bessel equation <cit.> with solutions I_n(r√(s)) and K_n(r√(s)), as the modified Bessel functions of the first and second kind, respectively. The Laplace transformation for a function f(τ) thereby is defined as ℒ{f}(s) = ∫_0^∞dτ exp(-sτ)f(τ), noting that τ = t/τ_d. As the homogeneous solution has to satisfy the natural boundary condition on R_n(τ), I_n(x) is no suitable solution as it diverges for x →∞ for every n∈ℤ <cit.>. We conclude that the homogeneous part is solved by ℒ{R_n^hom}(s) = A_n(s|r_0) K_n(r√(s)) for some amplitude A_n(s|r_0) to be determined by matching with the oblique boundary condition (<ref>). The relative solution p(τ) in the Laplace domain thus is given by ℒ{p}(𝐱,s|𝐱_0) = Θ(r-1) ∑_n = - ∞^∞e^in(φ-φ_0)/2π∫_0^∞u u J_n(u r_0)/s + u^2[J_n(u r) -K_n(r√(s)) u J^'_n(u) + inκ J_n(u)/√(s) K^'_n(√(s)) + inκ K_n(√(s))]. which was already found in Ref. <cit.>. There are two types of integrals appearing in Eq. (<ref>), which involve products of Bessel functions and which we list in Appendix <ref>. The integrals ∫_0^∞du u J_n(u r_0)J_n(u b)/(s + u^2) can be evaluated using Eq. (<ref>), where b ∈{1,r}. Depending on whether b = r > r_0, r = r_0 or r < r_0, Eq. (<ref>) gives different results but we only need to consider the latter two for the cases b=1= r_0 and 1<r_0. The second integral ∫_0^∞du u^2 J_n(u r_0)J^'_n(u)/(s + u^2) involves the derivative of a Bessel function and therefore is more involved. We list the integral in Eq. (<ref>) and again need to differentiate whether r_0 =1 or r_0>1. Taking into account these different cases, we find for the Laplace transformed relative PDF ℒ{p}(𝐱,s|𝐱_0) = Θ(r-1)/2π∑_n=-∞^∞e^in(φ-φ_0)[ Θ(r_0-r) I_n(r√(s)) K_n(r_0√(s)) +Θ(r-r_0) I_n(r_0√(s)) K_n(r√(s)) - K_n(r√(s)) K_n(r_0√(s))/√(s) K_n^'(√(s)) + inκ K_n(√(s))(√(s) I_n^'(√(s)) + i n κ I_n(√(s)) - δ(r_0-1)/2K_n(√(s))) ]. We can Laplace invert some parts of Eq. (<ref>) as ℒ^-1{ I_n(r_0√(s)) K_n(r √(s)) } = ℒ^-1{ I_n(r√(s)) K_n(r_0 √(s)) } = exp(-r_0^2+r^2/4τ)/2 τ I_n(r_0r/2τ), <cit.>. Together with decomposing e^ix = cosx + isinx in Eq. 
(<ref>), this gives the Fourier expanded relative PDF as p(𝐱, τ|𝐱^0) =Θ(r-1)/2π[ a_0(r,τ|r_0) + 2 ∑_n=1^∞[ a_n(r,τ|r_0); b_n(r,τ|r_0) ]·[ cos(n(φ-φ_0)); sin(n(φ-φ_0)) ]], where a_n and b_n are given in Eqs. (<ref>) and (<ref>) in the main text. Note that by using the symmetry in the order of the modified Bessel functions, I_-n(x) = I_n(x) and K_-n(x) = K_n(x) <cit.>, we can restrict the sum on positive modes only. § INTEGRAL EXPRESSIONS To evaluate the integrals in Eq. (<ref>), we use the following tabulated integrals, listed in this Appendix. Following Arfken and Weber's Mathematical Methods for Physicists <cit.>, we find that Bessel functions of (integer) order n, J_n, obey the integral relation ∫_0^∞u u J_n(a u) J_n(b u) = δ(a-b)/a, valid for n>-1/2 and a,b some real-valued constants <cit.>. To evaluate the remaining integrals of Bessel-functions, we draw on Gradshteyn and Ryzhik's Table of Integrals, Series and Products <cit.> and Abramowitz and Stegun's Handbook of Mathematical Functions <cit.>. The first relevant integral is ∫_0^∞u u J_n(au) J_n(bu)/u^2 + c^2 = I_n(ac) K_n(bc), 0<a<b I_n(ac) K_n(ac), 0<a=b I_n(bc) K_n(ac), 0<b<a, which is valid for n >-1 and c>0 <cit.>. Here I_n, K_n are the modified Bessel functions of the first and second kind, and a,b,c some (real-valued) constants. Together with the symmetry relations for the order of Bessel functions, J_-n(x) = (-1)^n J_n(x), I_-n(x) = I_n(x) and K_-n(x) = K_n(x) <cit.>, Eq. (<ref>) can be further extended to the relevant cases of n<0. The second relevant integral is ∫_0^∞u u^2 J_n(au) J_n^'(bu)/(u^2 + c^2), for some real-valued, positive constants a,b,c. Using dJ_n(x)/dx = (J_n-1(x) - J_n+1(x))/2 <cit.>, the integral can be evaluated as ∫_0^∞u u^2 J_n_1(au) J_n_2(bu)/u^2 + c^2 = (-1)^α_+ c I_n_1(ac) K_n_2(bc), 0<a<b (-1)^α_- c I_n_2(bc) K_n_1(ac), 0<b<a, which is valid for n_2 >-1 <cit.>. The exponents are α_± = (1 ± (n_1 - n_2))/2 ∈ℕ_0. We use Eq. (<ref>) for the case of n_1 = n± 1 and n_2=n. By relying on the symmetry relations for the orders of the Bessel functions, we can extend Eq. (<ref>) again to the negative cases of n_1 = -(n± 1) and n_2=-n. Thus, together with <cit.>, we find ∫_0^∞u u^2 J_n(au) J_n^'(bu)/u^2 + c^2 = c K_n(bc) I_n^'(ac), 0<a<b n/b K_n(ac) I_n(bc), 0<b<a, valid for all n ∈ℤ, but limited to b≠ a. In order to generalize Eq. (<ref>) to the case of b=a >0, which can not be found in Ref. <cit.>, we generalize the relation, which was already derived in Ref. <cit.> for n=1 to an arbitrary n ∈ℤ. Therefore, by using the indefinite integral ∫u J_n(u) J_n^'(u) = J_n^2(u)/2 + const, we can partially integrate and find an integral of the form of Eq. (<ref>), which, using the Wronskian 𝒲[I_n(x), K_n(x)] = I_n(x) K_n+1(x) + I_n+1(x) K_n(x)= 1/x <cit.>, gives ∫_0^∞u u^2 J_n(au) J_n^'(au)/u^2 + c^2 = c K_n(bc) I_n^'(ac), 0<a<b c K_n(ac) I_n^'(ac) - 1/2a, 0<a=b n/b K_n(ac) I_n(bc), 0<b<a. To prove the normalization of the zeroth order mode ∫d𝐱 p_0 =1, Eq. (<ref>) in the main text, we transform the integrals over I_0 and K_0 into higher order Bessel functions evaluated at the boundaries r=1 and r →∞ by using the recursion relations <cit.> (1/u u)^m ( u^n I_n(u) ) = u^n - m I_n-m(u), and (1/u u)^m ( u^n K_n(u) ) = (-1)^m u^n - m K_n-m(u), which specifically imply for m = 1 and n= 1 to u I_0(u) = d/du (u I_1(u)) and u K_0(u) = -d/ du (u K_1(u)). 
The integrals over I_0(u) and K_0(u) thus turn into evaluating u I_1(u) and u K_1(u) at the integration bounds and the upper bound vanishes by the asymptotic expansion of lim_u→∞ uK_n(u) ∼lim_u→∞ u exp(-u) = 0 <cit.>. The remaining expressions from the lower bounds are evaluated by using the Wronskian 𝒲[I_n(x), K_n(x)], to be exactly 1. 98 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Hargus et al.(2021)Hargus, Epstein, and Mandadapu]hargus2021odd author author C. Hargus, author J. M. Epstein, and author K. K. Mandadapu, title title Odd diffusivity of chiral random motion, https://doi.org/10.1103/PhysRevLett.127.178001 journal journal Phys. Rev. Lett. volume 127, pages 178001 (year 2021)NoStop [Kalz et al.(2022)Kalz, Vuijk, Abdoli, Sommer, Löwen, and Sharma]kalz2022collisions author author E. Kalz, author H. D. Vuijk, author I. Abdoli, author J.-U. Sommer, author H. Löwen, and author A. Sharma, title title Collisions enhance self-diffusion in odd-diffusive systems, https://doi.org/10.1103/PhysRevLett.129.090601 journal journal Phys. Rev. Lett. volume 129, pages 090601 (year 2022)NoStop [Yasuda et al.(2022)Yasuda, Ishimoto, Kobayashi, Lin, Sou, Hosaka, and Komura]yasuda2022time author author K. Yasuda, author K. Ishimoto, author A. Kobayashi, author L.-S. Lin, author I. Sou, author Y. Hosaka, and author S. Komura, title title Time-correlation functions for odd Langevin systems, https://doi.org/10.1063/5.0095969 journal journal J. Chem. Phys. volume 157, pages 095101 (year 2022)NoStop [Fruchart et al.(2023)Fruchart, Scheibner, and Vitelli]fruchart2023odd author author M. Fruchart, author C. Scheibner, and author V. Vitelli, title title Odd viscosity and odd elasticity, https://doi.org/10.1146/annurev-conmatphys-040821-125506 journal journal Annu. Rev. Condens. Matter Phys. volume 14, pages 471 (year 2023)NoStop [Hargus et al.(2024)Hargus, Deshpande, Omar, and Mandadapu]hargus2024flux author author C. Hargus, author A. Deshpande, author A. K. Omar, and author K. K. Mandadapu, title title The flux hypothesis for odd transport phenomena, journal journal arXiv preprint arXiv:2405.08798 https://doi.org/10.48550/arXiv.2405.08798 10.48550/arXiv.2405.08798 (year 2024)NoStop [Chun et al.(2018)Chun, Durang, and Noh]chun2018emergence author author H.-M. Chun, author X. Durang, and author J. D. Noh, title title Emergence of nonwhite noise in Langevin dynamics with magnetic Lorentz force, https://doi.org/10.1103/PhysRevE.97.032117 journal journal Phys. Rev. E volume 97, pages 032117 (year 2018)NoStop [Schick et al.(2024)Schick, Weißenhofer, Rózsa, Rothörl, Virnau, and Nowak]schick2024two author author D. Schick, author M. Weißenhofer, author L. Rózsa, author J. Rothörl, author P. Virnau, and author U. Nowak, title title Two levels of topology in skyrmion lattice dynamics, https://doi.org/10.1103/PhysRevResearch.6.013097 journal journal Phys. Rev. Res. volume 6, pages 013097 (year 2024)NoStop [Avron(1998)]avron1998odd author author J. E. Avron, title title Odd viscosity, https://doi.org/10.1023/A:1023084404080 journal journal J. Stat. Phys. volume 92, pages 543 (year 1998)NoStop [Kalz et al.(2024a)Kalz, Vuijk, Sommer, Metzler, and Sharma]kalz2024oscillatory author author E. Kalz, author H. D. Vuijk, author J.-U. Sommer, author R. Metzler, and author A. 
Sharma, title title Oscillatory force autocorrelations in equilibrium odd-diffusive systems, https://doi.org/10.1103/PhysRevLett.132.057102 journal journal Phys. Rev. Lett. volume 132, pages 057102 (year 2024a)NoStop [Dhont(1996)]dhont1996introduction author author J. K. G. Dhont, @noop title An Introduction to Dynamics of Colloids, Vol. volume 2 (publisher Elsevier, year 1996)NoStop [Batchelor(1976)]batchelor1976brownian author author G. K. Batchelor, title title Brownian diffusion of particles with hydrodynamic interaction, https://doi.org/10.1017/S0022112076001663 journal journal J. Fluid Mech. volume 74, pages 1 (year 1976)NoStop [Felderhof(1978)]felderhof1978diffusion author author B. U. Felderhof, title title Diffusion of interacting Brownian particles, https://doi.org/10.1088/0305-4470/11/5/022 journal journal J. Phys. A volume 11, pages 929 (year 1978)NoStop [Hanna et al.(1982)Hanna, Hess, and Klein]hanna1982self author author S. Hanna, author W. Hess, and author R. Klein, title title Self-diffusion of spherical Brownian particles with hard-core interaction, https://doi.org/10.1016/0378-4371(82)90088-7 volume 111, pages 181 (year 1982)NoStop [Abney et al.(1989)Abney, Scalettar, and Owicki]abney1989self author author J. R. Abney, author B. A. Scalettar, and author J. C. Owicki, title title Self diffusion of interacting membrane proteins, https://doi.org/10.1016/s0006-3495(89)82882-6 journal journal Biophys. J. volume 55, pages 817 (year 1989)NoStop [Zahn et al.(1997)Zahn, Méndez-Alcaraz, and Maret]zahn1997hydrodynamic author author K. Zahn, author J. M. Méndez-Alcaraz, and author G. Maret, title title Hydrodynamic interactions may enhance the self-diffusion of colloidal particles, https://doi.org/10.1103/PhysRevLett.79.175 journal journal Phys. Rev. Lett. volume 79, pages 175 (year 1997)NoStop [Bembenek and Szamel(2000)]bembenek2000role author author S. D. Bembenek and author G. Szamel, title title The role of attractive interactions in self-diffusion, https://doi.org/10.1021/jp0025835 journal journal J. Phys. Chem. B volume 104, pages 10647 (year 2000)NoStop [Jones(1979)]jones1979diffusion author author R. B. Jones, title title Diffusion of tagged interacting spherically symmetric polymers, https://doi.org/10.1016/0378-4371(79)90083-9 volume 97, pages 113 (year 1979)NoStop [Medina-Noyola(1988)]medina1988long author author M. Medina-Noyola, title title Long-time self-diffusion in concentrated colloidal dispersions, https://doi.org/10.1103/PhysRevLett.60.2705 journal journal Phys. Rev. Lett. volume 60, pages 2705 (year 1988)NoStop [Löwen and Szamel(1993)]lowen1993long author author H. Löwen and author G. Szamel, title title Long-time self-diffusion coefficient in colloidal suspensions: theory versus simulation, https://doi.org/10.1088/0953-8984/5/15/003 journal journal J. Phys. Condens. Matter volume 5, pages 2295 (year 1993)NoStop [Imhof and Dhont(1995)]imhof1995long author author A. Imhof and author J. K. G. Dhont, title title Long-time self-diffusion in binary colloidal hard-sphere dispersions, https://doi.org/10.1103/PhysRevE.52.6344 journal journal Phys. Rev. E volume 52, pages 6344 (year 1995)NoStop [Thorneywork et al.(2015)Thorneywork, Rozas, Dullens, and Horbach]thorneywork2015effect author author A. L. Thorneywork, author R. E. Rozas, author R. P. A. Dullens, and author J. Horbach, title title Effect of hydrodynamic interactions on self-diffusion of quasi-two-dimensional colloidal hard spheres, https://doi.org/10.1103/PhysRevLett.115.268301 journal journal Phys. Rev. Lett. 
volume 115, pages 268301 (year 2015)NoStop [Hanna et al.(1981)Hanna, Hess, and Klein]hanna1981velocity author author S. Hanna, author W. Hess, and author R. Klein, title title The velocity autocorrelation function of an overdamped Brownian system with hard-core interaction, https://doi.org/10.1088/0305-4470/14/12/004 journal journal J. Phys. A volume 14, pages L493 (year 1981)NoStop [Feller(1968)]feller1971introduction author author W. Feller, @noop title An Introduction to Probability Theory and Its Aapplications, edition 3rd ed. (publisher John Wiley & Sons, New York, year 1968)NoStop [Øksendal(2003)]oksendal2003stochastic author author B. Øksendal, https://doi.org/10.1007/978-3-642-14394-6 title Stochastic Differential Equations, edition 6th ed. (publisher Springer Berlin, Heidelberg, year 2003)NoStop [Doi and Edwards(1988)]doi1988theory author author M. Doi and author S. F. Edwards, @noop title The Theory of Polymer Dynamics (publisher Clarendon Press, Oxford, year 1988)NoStop [Balakrishnan(2021)]balakrishnan2021elements author author V. Balakrishnan, https://doi.org/10.1007/978-3-030-62233-6 title Elements of Nonequilibrium Statistical Mechanics (publisher Springer International Publishing, year 2021)NoStop [de Gennes and Prost(1995)]degennes1995the author author P.-G. de Gennes and author J. Prost, @noop title The Physics of Liquid Crystals, edition 2nd ed. (publisher Oxford University Press, year 1995)NoStop [Jackson(2021)]jackson2021classical author author J. D. Jackson, @noop title Classical Electrodynamics, edition 3rd ed. (publisher John Wiley & Sons, year 2021)NoStop [Cates and Tailleur(2013)]cates2013active author author M. E. Cates and author J. Tailleur, title title When are active Brownian particles and run-and-tumble particles equivalent? Consequences for motility-induced phase separation, https://doi.org/10.1209/0295-5075/101/20010 journal journal Europhys. Lett. volume 101, pages 20010 (year 2013)NoStop [Kalz et al.(2024b)Kalz, Sharma, and Metzler]kalz2024field author author E. Kalz, author A. Sharma, and author R. Metzler, title title Field theory of active chiral hard disks: A first-principles approach to steric interactions, journal journal J. Phys. A https://doi.org/10.1088/1751-8121/ad5089 10.1088/1751-8121/ad5089 (year 2024b)NoStop [te Vrugt and Wittkowski(2020)]te2020relations author author M. te Vrugt and author R. Wittkowski, title title Relations between angular and Cartesian orientational expansions, https://doi.org/10.1063/1.5141367 journal journal AIP Adv. volume 10, pages 035106 (year 2020)NoStop [Abramowitz and Stegun(1965)]abramowitz1968handbook editor M. Abramowitz and editor I. A. Stegun, eds., https://store.doverpublications.com/products/9780486612720 title Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (publisher Dover, New York, year 1965)NoStop [Franosch et al.(2010)Franosch, Höfling, Bauer, and Frey]franosch2010persistent author author T. Franosch, author F. Höfling, author T. Bauer, and author E. Frey, title title Persistent memory for a Brownian walker in a random array of obstacles, https://doi.org/10.1016/j.chemphys.2010.04.023 journal journal Chem. Phys. volume 375, pages 540 (year 2010)NoStop [Zinn-Justin(2021)]zinn2021quantum author author J. Zinn-Justin, https://doi.org/10.1093/acprof:oso/9780198509233.001.0001 title Quantum Field Theory and Critical Phenomena, edition 4th ed. (publisher Oxford University Press, year 2021)NoStop [Leitmann and Franosch(2017)]leitmann2017time author author S. 
Leitmann and author T. Franosch, title title Time-dependent fluctuations and superdiffusivity in the driven lattice Lorentz gas, https://doi.org/10.1103/PhysRevLett.118.018001 journal journal Phys. Rev. Lett. volume 118, pages 018001 (year 2017)NoStop [Caraglio and Franosch(2022)]caraglio2022analytic author author M. Caraglio and author T. Franosch, title title Analytic solution of an active Brownian particle in a harmonic well, https://doi.org/10.1103/PhysRevLett.129.158001 journal journal Phys. Rev. Lett. volume 129, pages 158001 (year 2022)NoStop [Taylor(1922)]taylor1922diffusion author author G. I. Taylor, title title Diffusion by continuous movements, https://doi.org/10.1112/plms/s2-20.1.196 journal journal Proc. Lond. Math. Soc. volume 2, pages 196 (year 1922)NoStop [Green(1954)]green1954markoff author author M. S. Green, title title Markoff random processes and the statistical mechanics of time-dependent phenomena. II. Irreversible processes in fluids, https://doi.org/10.1063/1.1740082 journal journal J. Chem. Phys. volume 22, pages 398 (year 1954)NoStop [Kubo(1957)]kubo1957statistical author author R. Kubo, title title Statistical-mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems, https://doi.org/10.1143/JPSJ.12.570 journal journal J. Phys. Soc. Jpn. volume 12, pages 570 (year 1957)NoStop [Shinde et al.(2022)Shinde, Sommer, Löwen, and Sharma]shinde2022strongly author author R. Shinde, author J.-U. Sommer, author H. Löwen, and author A. Sharma, title title Strongly enhanced dynamics of a charged Rouse dimer by an external magnetic field, https://doi.org/10.1093/pnasnexus/pgac119 journal journal PNAS Nexus volume 1, pages pgac119 (year 2022)NoStop [Ghimenti et al.(2023)Ghimenti, Berthier, Szamel, and van Wijland]ghimenti2023sampling author author F. Ghimenti, author L. Berthier, author G. Szamel, and author F. van Wijland, title title Sampling efficiency of transverse forces in dense liquids, https://doi.org/10.1103/PhysRevLett.131.257101 journal journal Phys. Rev. Lett. volume 131, pages 257101 (year 2023)NoStop [Batton and Rotskoff(2024)]batton2024microscopic author author C. H. Batton and author G. M. Rotskoff, title title Microscopic origin of tunable assembly forces in chiral active environments, journal journal Soft Matter https://doi.org/10.1039/D4SM00247D 10.1039/D4SM00247D (year 2024)NoStop [tri()]trick @noop note For the hard-disk potential U(r)=0 for r>1 (rescaled space variable) and U(r)=∞ otherwise we observe that exp(-β U(r))=Θ(r-1). We thus find on the one hand that ∇exp(-β U(r)) = -β exp(-β U(r)) ∇ U(r)) = β Θ(r-1) 𝐅(r), where 𝐅(r) = -∇ U(r) is the interaction force and on the other hand that ∇exp(-β U(r)) =δ(r-1) 𝐞(φ), where 𝐞(φ) = (cos(φ), sin(φ))^T is the radial unit vector.Stop [Kalz(2022)]kalz2022diffusion author author E. Kalz, https://doi.org/10.1007/978-3-658-39518-6 title Diffusion under the Effect of Lorentz Force (publisher Springer Spektrum Wiesbaden, year 2022)NoStop [Lemons and Kaufman(1999)]lemons1999brownian author author D. S. Lemons and author D. L. Kaufman, title title Brownian motion of a charged particle in a magnetic field, https://doi.org/10.1109/27.799805 journal journal IEEE Trans. Plasma Sci. volume 27, pages 1288 (year 1999)NoStop [Czopnik and Garbaczewski(2001)]czopnik2001brownian author author R. Czopnik and author P. Garbaczewski, title title Brownian motion in a magnetic field, https://doi.org/10.1103/PhysRevE.63.021105 journal journal Phys. Rev. 
E volume 63, pages 021105 (year 2001)NoStop [Hayakawa(2005)]hayakawa2005langevin author author H. Hayakawa, title title Langevin equation with Coulomb friction, https://doi.org/10.1016/j.physd.2004.12.011 journal journal Physica D volume 205, pages 48 (year 2005)NoStop [Simões and Lagos(2005)]simoes2005kramers author author T. P. Simões and author R. E. Lagos, title title Kramers equation for a charged Brownian particle: The exact solution, https://doi.org/10.1016/j.physa.2005.03.034 volume 355, pages 274 (year 2005)NoStop [Jiménez-Aquino and Romero-Bastida(2006)]jimenez2006brownian author author J. I. Jiménez-Aquino and author M. Romero-Bastida, title title Brownian motion of a charged particle in a magnetic field, @noop journal journal Revista Mexicana de Física E volume 52, pages 182 (year 2006)NoStop [Hou et al.(2009)Hou, Mišković, Piel, and Shukla]hou2009brownian author author L. J. Hou, author Z. L. Mišković, author A. Piel, and author P. K. Shukla, title title Brownian dynamics of charged particles in a constant magnetic field, journal journal Phys. Plasmas volume 16, https://doi.org/10.1063/1.3138746 10.1063/1.3138746 (year 2009)NoStop [Vuijk et al.(2019)Vuijk, Brader, and Sharma]vuijk2019anomalous author author H. D. Vuijk, author J. M. Brader, and author A. Sharma, title title Anomalous fluxes in overdamped Brownian dynamics with Lorentz force, https://doi.org/10.1088/1742-5468/ab190f journal journal J. Stat. Mech.: Theory Exp. volume 2019number (6), pages 063203NoStop [Chun et al.(2019)Chun, Fischer, and Seifert]chun2019effect number author author H.-M. Chun, author L. P. Fischer, and author U. Seifert, title title Effect of a magnetic field on the thermodynamic uncertainty relation, https://doi.org/10.1103/PhysRevE.99.042128 journal journal Phys. Rev. E volume 99, pages 042128 (year 2019)NoStop [Abdoli et al.(2020a)Abdoli, Vuijk, Sommer, Brader, and Sharma]abdoli2020nondiffusive author author I. Abdoli, author H. D. Vuijk, author J.-U. Sommer, author J. M. Brader, and author A. Sharma, title title Nondiffusive fluxes in a Brownian system with Lorentz force, https://doi.org/10.1103/PhysRevE.101.012120 journal journal Phys. Rev. E volume 101, pages 012120 (year 2020a)NoStop [Abdoli et al.(2020b)Abdoli, Kalz, Vuijk, Wittmann, Sommer, Brader, and Sharma]abdoli2020correlations author author I. Abdoli, author E. Kalz, author H. D. Vuijk, author R. Wittmann, author J.-U. Sommer, author J. M. Brader, and author A. Sharma, title title Correlations in multithermostat Brownian systems with Lorentz force, https://doi.org/10.1088/1367-2630/abb43d journal journal New J. Phys. volume 22, pages 093057 (year 2020b)NoStop [Park and Park(2021)]park2021thermodynamic author author J.-M. Park and author H. Park, title title Thermodynamic uncertainty relation in the overdamped limit with a magnetic Lorentz force, https://doi.org/10.1103/PhysRevResearch.3.043005 journal journal Phys. Rev. Res. volume 3, pages 043005 (year 2021)NoStop [Abdoli et al.(2022)Abdoli, Wittmann, Brader, Sommer, Löwen, and Sharma]abdoli2022tunable author author I. Abdoli, author R. Wittmann, author J. M. Brader, author J.-U. Sommer, author H. Löwen, and author A. Sharma, title title Tunable brownian magneto heat pump, https://doi.org/10.1038/s41598-022-17584-3 journal journal Sci. Rep. volume 12, pages 13405 (year 2022)NoStop [Taylor(1961)]taylor1961diffusion author author J. B. Taylor, title title Diffusion of plasma across a magnetic field, https://doi.org/10.1103/PhysRevLett.6.262 journal journal Phys. Rev. Lett. 
volume 6, pages 262 (year 1961)NoStop [Kurşunoǧlu(1962)]kurcsunoglu1962brownian author author B. Kurşunoǧlu, title title Brownian motion in a magnetic field, https://doi.org/10.1016/0003-4916(62)90027-1 journal journal Ann. Phys. volume 17, pages 259 (year 1962)NoStop [Karmeshu(1974)]karmeshu1974brownian author author Karmeshu, title title Brownian motion of charged particles in a magnetic field, https://doi.org/10.1063/1.1694624 journal journal Phys. Fluids volume 17, pages 1828 (year 1974)NoStop [Schütte et al.(2014)Schütte, Iwasaki, Rosch, and Nagaosa]schutte2014inertia author author C. Schütte, author J. Iwasaki, author A. Rosch, and author N. Nagaosa, title title Inertia, diffusion, and dynamics of a driven skyrmion, https://doi.org/10.1103/PhysRevB.90.174434 journal journal Phys. Rev. B volume 90, pages 174434 (year 2014)NoStop [Troncoso and Núñez(2014)]troncoso2014brownian author author R. E. Troncoso and author Á. S. Núñez, title title Brownian motion of massive skyrmions in magnetic thin films, https://doi.org/10.1016/j.aop.2014.10.007 journal journal Ann. Phys. volume 351, pages 850 (year 2014)NoStop [Gruber et al.(2023)Gruber, Brems, Rothörl, Sparmann, Schmitt, Kononenko, Kammerbauer, Syskaki, Farago, Virnau, and Kläui]gruber2023300 author author R. Gruber, author M. A. Brems, author J. Rothörl, author T. Sparmann, author M. Schmitt, author I. Kononenko, author F. Kammerbauer, author M.-A. Syskaki, author O. Farago, author P. Virnau, and author M. Kläui, title title 300-times-increased diffusive skyrmion dynamics and effective pinning reduction by periodic field excitation, https://doi.org/10.1002/adma.202208922 journal journal Adv. Mater. volume 35, pages 2208922 (year 2023)NoStop [Dieball and Godec(2022a)]dieball2022coarse author author C. Dieball and author A. Godec, title title Coarse graining empirical densities and currents in continuous-space steady states, https://doi.org/10.1103/PhysRevResearch.4.033243 journal journal Phys. Rev. Res. volume 4, pages 033243 (year 2022a)NoStop [Dieball and Godec(2022b)]dieball2022mathematical author author C. Dieball and author A. Godec, title title Mathematical, thermodynamical, and experimental necessity for coarse graining empirical densities and currents in continuous space, https://doi.org/10.1103/PhysRevLett.129.140601 journal journal Phys. Rev. Lett. volume 129, pages 140601 (year 2022b)NoStop [Reichhardt and Reichhardt(2019)]reichhardt2019active author author C. Reichhardt and author C. J. O. Reichhardt, title title Active microrheology, Hall effect, and jamming in chiral fluids, https://doi.org/10.1103/PhysRevE.100.012604 journal journal Phys. Rev. E volume 100, pages 012604 (year 2019)NoStop [Muzzeddu et al.(2022)Muzzeddu, Vuijk, Löwen, Sommer, and Sharma]muzzeddu2022active author author P. L. Muzzeddu, author H. D. Vuijk, author H. Löwen, author J.-U. Sommer, and author A. Sharma, title title Active chiral molecules in activity gradients, https://doi.org/10.1063/5.0109817 journal journal J. Chem. Phys. volume 157, pages 134902 (year 2022)NoStop [Poggioli and Limmer(2023)]poggioli2023odd author author A. R. Poggioli and author D. T. Limmer, title title Odd mobility of a passive tracer in a chiral active fluid, https://doi.org/10.1103/PhysRevLett.130.158201 journal journal Phys. Rev. Lett. volume 130, pages 158201 (year 2023)NoStop [Chan et al.(2024)Chan, Wu, Qiao, Fong, Yang, Han, and Zhang]chan2024chiral author author C. W. Chan, author D. Wu, author K. Qiao, author K. L. Fong, author Z. Yang, author Y. Han, and author R. 
Zhang, title title Chiral active particles are sensitive reporters to environmental geometry, https://doi.org/10.1038/s41467-024-45531-5 journal journal Nat. Commun. volume 15, pages 1406 (year 2024)NoStop [Siebers et al.(2024)Siebers, Bebon, Jayaram, and Speck]siebers2024collective author author F. Siebers, author R. Bebon, author A. Jayaram, and author T. Speck, title title Collective Hall current in chiral active fluids: Coupling of phase and mass transport through traveling bands, @noop journal journal Proc. Natl. Acad. Sci. U.S.A. volume 121, pages e2320256121 (year 2024)NoStop [Pavliotis(2010)]pavliotis2010asymptotic author author G. Pavliotis, title title Asymptotic analysis of the Green-Kubo formula, https://doi.org/10.1093/imamat/hxq039 journal journal IMA J. Appl. Math. volume 75, pages 951 (year 2010)NoStop [Vega Reyes et al.(2022)Vega Reyes, López-Castaño, and Rodríguez-Rivas]vega2022diffusive author author F. Vega Reyes, author M. A. López-Castaño, and author Á. Rodríguez-Rivas, title title Diffusive regimes in a two-dimensional chiral fluid, https://doi.org/10.1038/s42005-022-01032-9 journal journal Commun. Phys. volume 5, pages 1 (year 2022)NoStop [Brown et al.(2018)Brown, Täuber, and Pleimling]brown2018effect author author B. L. Brown, author U. C. Täuber, and author M. Pleimling, title title Effect of the Magnus force on skyrmion relaxation dynamics, https://doi.org/10.1103/PhysRevB.97.020405 journal journal Phys. Rev. B volume 97, pages 020405 (year 2018)NoStop [Reichhardt and Reichhardt(2022)]reichhardt2022active author author C. J. O. Reichhardt and author C. Reichhardt, title title Active rheology in odd-viscosity systems, https://doi.org/10.1209/0295-5075/ac2adc journal journal Europhys. Lett. volume 137, pages 66004 (year 2022)NoStop [Cao et al.(2023)Cao, Das, Windbacher, Ginot, Krüger, and Bechinger]cao2023memory author author X. Cao, author D. Das, author N. Windbacher, author F. Ginot, author M. Krüger, and author C. Bechinger, title title Memory-induced Magnus effect, journal journal Nat. Phys. https://doi.org/10.1038/s41567-023-02213-1 10.1038/s41567-023-02213-1 (year 2023)NoStop [Welander(1966)]welander1966note author author P. Welander, title title Note on the effect of rotation on diffusion processes, https://doi.org/10.3402/tellusa.v18i1.9182 journal journal Tellus volume 18, pages 63 (year 1966)NoStop [Brandenburg et al.(2009)Brandenburg, Svedin, and Vasil]brandenburg2009turbulent author author A. Brandenburg, author A. Svedin, and author G. M. Vasil, title title Turbulent diffusion with rotation or magnetic fields, https://doi.org/10.1111/j.1365-2966.2009.14646.x journal journal Mon. Not. R. Astron. Soc. volume 395, pages 1599 (year 2009)NoStop [Koch and Brady(1987)]koch1987symmetry author author D. L. Koch and author J. F. Brady, title title The symmetry properties of the effective diffusivity tensor in anisotropic porous media, https://doi.org/10.1063/1.866368 journal journal Phys. Fluids volume 30, pages 642650 (year 1987)NoStop [Auriault et al.(2010)Auriault, Moyne, and Amaral Souto]auriault2010asymmetry author author J.-L. Auriault, author C. Moyne, and author H. P. Amaral Souto, title title On the asymmetry of the dispersion tensor in porous media, https://doi.org/10.1007/s11242-010-9591-y journal journal Transport Porous Med. volume 85, pages 771 (year 2010)NoStop [Marbach et al.(2018)Marbach, Dean, and Bocquet]marbach2018transport author author S. Marbach, author D. S. Dean, and author L. 
Bocquet, title title Transport and dispersion across wiggling nanopores, https://doi.org/10.1038/s41567-018-0239-0 journal journal Nat. Phys. volume 14, pages 1108 (year 2018)NoStop [Wu et al.(2009)Wu, Huang, Tischer, Jonas, and Florin]wu2009direct author author P. Wu, author R. Huang, author C. Tischer, author A. Jonas, and author E.-L. Florin, title title Direct measurement of the nonconservative force field generated by optical tweezers, https://doi.org/10.1103/PhysRevLett.103.108101 journal journal Phys. Rev. Lett. volume 103, pages 108101 (year 2009)NoStop [Sukhov and Dogariu(2017)]sukhov2017non author author S. Sukhov and author A. Dogariu, title title Non-conservative optical forces, https://doi.org/10.1088/1361-6633/aa834e journal journal Rep. Prog. Phys. volume 80, pages 112001 (year 2017)NoStop [Mangeat et al.(2019)Mangeat, Amarouchene, Louyer, Guérin, and Dean]mangeat2019role author author M. Mangeat, author Y. Amarouchene, author Y. Louyer, author T. Guérin, and author D. S. Dean, title title Role of nonconservative scattering forces and damping on Brownian particles in optical traps, https://doi.org/10.1103/PhysRevE.99.052107 journal journal Phys. Rev. E volume 99, pages 052107 (year 2019)NoStop [Volpe et al.(2023)Volpe, Maragò, Rubinsztein-Dunlop, Pesce, Stilgoe et al.]volpe2023roadmap_short author author G. Volpe, author O. M. Maragò, author H. Rubinsztein-Dunlop, author G. Pesce, author A. B. Stilgoe, et al., title title Roadmap for optical tweezers, https://doi.org/10.1088/2515-7647/acb57b journal journal J. Phys. Photonics volume 5, pages 022501 (year 2023)NoStop [Ghimenti et al.(2024)Ghimenti, Berthier, Szamel, and van Wijland]ghimenti2024irreversible author author F. Ghimenti, author L. Berthier, author G. Szamel, and author F. van Wijland, title title Irreversible Boltzmann samplers in dense liquids: weak-coupling approximation and mode-coupling theory, journal journal arXiv preprint arXiv:2404.14863 https://doi.org/10.48550/arXiv.2404.14863 10.48550/arXiv.2404.14863 (year 2024)NoStop [Kwon et al.(2005)Kwon, Ao, and Thouless]kwon2005structure author author C. Kwon, author P. Ao, and author D. J. Thouless, title title Structure of stochastic dynamics near fixed points, https://doi.org/10.1073/pnas.0506347102 journal journal Proc. Natl. Acad. Sci. U.S.A. volume 102, pages 13029 (year 2005)NoStop [Turitsyn et al.(2007)Turitsyn, Chertkov, Chernyak, and Puliafito]turitsyn2007statistics author author K. Turitsyn, author M. Chertkov, author V. Y. Chernyak, and author A. Puliafito, title title Statistics of entropy production in linearized stochastic systems, https://doi.org/10.1103/PhysRevLett.98.180603 journal journal Phys. Rev. Lett. volume 98, pages 180603 (year 2007)NoStop [Kwon et al.(2011)Kwon, Noh, and Park]kwon2011nonequilibrium author author C. Kwon, author J. D. Noh, and author H. Park, title title Nonequilibrium fluctuations for linear diffusion dynamics, https://doi.org/10.1103/PhysRevE.83.061145 journal journal Phys. Rev. E volume 83, pages 061145 (year 2011)NoStop [Noh et al.(2013)Noh, Kwon, and Park]noh2013multiple author author J. D. Noh, author C. Kwon, and author H. Park, title title Multiple dynamic transitions in nonequilibrium work fluctuations, https://doi.org/10.1103/PhysRevLett.111.130601 journal journal Phys. Rev. Lett. volume 111, pages 130601 (year 2013)NoStop [du Buisson and Touchette(2023)]du2023dynamical author author J. du Buisson and author H. 
Touchette, title title Dynamical large deviations of linear diffusions, https://doi.org/10.1103/PhysRevE.107.054111 journal journal Phys. Rev. E volume 107, pages 054111 (year 2023)NoStop [Kählert et al.(2012)Kählert, Carstensen, Bonitz, Löwen, Greiner, and Piel]kahlert2012magnetizing author author H. Kählert, author J. Carstensen, author M. Bonitz, author H. Löwen, author F. Greiner, and author A. Piel, title title Magnetizing a complex plasma without a magnetic field, https://doi.org/10.1103/PhysRevLett.109.155003 journal journal Phys. Rev. Lett. volume 109, pages 155003 (year 2012)NoStop [Hartmann et al.(2019)Hartmann, Reyes, Kostadinova, Matthews, Hyde, Masheyeva, Dzhumagulova, Ramazanov, Ott, Kählert, Bonitz, Korolov, and Donkó]hartmann2019self author author P. Hartmann, author J. C. Reyes, author E. G. Kostadinova, author L. S. Matthews, author T. W. Hyde, author R. U. Masheyeva, author K. N. Dzhumagulova, author T. S. Ramazanov, author T. Ott, author H. Kählert, author M. Bonitz, author I. Korolov, and author Z. Donkó, title title Self-diffusion in two-dimensional quasimagnetized rotating dusty plasmas, https://doi.org/10.1103/PhysRevE.99.013203 journal journal Phys. Rev. E volume 99, pages 013203 (year 2019)NoStop [Shalchi(2011)]shalchi2011applicability author author A. Shalchi, title title Applicability of the Taylor-Green-Kubo formula in particle diffusion theory, https://doi.org/10.1103/PhysRevE.83.046402 journal journal Phys. Rev. E volume 83, pages 046402 (year 2011)NoStop [Effenberger et al.(2012)Effenberger, Fichtner, Scherer, Barra, Kleimann, and Strauss]effenberger2012generalized author author F. Effenberger, author H. Fichtner, author K. Scherer, author S. Barra, author J. Kleimann, and author R. D. T. Strauss, title title A generalized diffusion tensor for fully anisotropic diffusion of energetic particles in the heliospheric magnetic field, https://doi.org/10.1088/0004-637X/750/2/108 journal journal Astrophys. J. volume 750, pages 108 (year 2012)NoStop [Shalchi(2020)]shalchi2020perpendicular author author A. Shalchi, title title Perpendicular transport of energetic particles in magnetic turbulence, https://doi.org/10.1007/s11214-020-0644-4 journal journal Space Sci. Rev. volume 216, pages 23 (year 2020)NoStop [Gilbarg and Trudinger(2001)]gilbarg2001elliptic author author D. Gilbarg and author N. S. Trudinger, https://doi.org/10.1007/978-3-642-61798-0 title Elliptic Partial Differential Equations of Second Order, edition 2nd ed. (publisher Springer Berlin, Heidelberg, year 2001)NoStop [Oberhettinger and Badii(1973)]oberhettinger1973tables author author F. Oberhettinger and author L. Badii, https://doi.org/10.1007/978-3-642-65645-3 title Tables of Laplace Transforms (publisher Springer Berlin, Heidelberg, year 1973)NoStop [Arfken and Weber(2005)]arfken2005mathematical author author G. B. Arfken and author H. J. Weber, @noop title Mathematical Methods for Physicists, edition 6th ed. (publisher Academic Press, year 2005)NoStop [Gradshteyn and Ryzhik(2007)]gradshteyn2007table author author I. S. Gradshteyn and author I. M. Ryzhik, @noop title Table of Integrals, Series, and Products, edition 7th ed., edited by editor A. Jeffrey and editor D. Zwillinger (publisher Academic Press, year 2007)NoStop
http://arxiv.org/abs/2407.12602v1
20240717142915
Existence of viscosity solutions for Hamilton-Jacobi equations on Riemannian Manifolds via Lyapunov control
[ "Serena Della Corte", "Richard C. Kraaij" ]
math.AP
[ "math.AP", "35F21, 49L25, 49Q22" ]
http://arxiv.org/abs/2407.12893v1
20240717111516
Hybrid Dynamic Pruning: A Pathway to Efficient Transformer Inference
[ "Ghadeer Jaradat", "Mohammed Tolba", "Ghada Alsuhli", "Hani Saleh", "Mahmoud Al-Qutayri", "Thanos Stouraitis", "Baker Mohammad" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Hybrid Dynamic Pruning: A Pathway to Efficient Transformer Inference Ghadeer A. Jaradat, Mohammed F. Tolba, Ghada Alsahli, Hani Saleh, Member, IEEE, Mahmoud Al-Qutayri, Member, IEEE, Thanos Stouraitis, Life Fellow, IEEE, Baker Mohammad, Member, IEEE, July 22, 2024 § ABSTRACT In the world of deep learning, Transformer models have become very significant, leading to improvements in many areas, from understanding language to recognizing images, and covering a wide range of applications. Despite their success, the deployment of these models in real-time applications, particularly on edge devices, poses significant challenges due to their quadratic computational intensity and memory demands. To overcome these challenges we introduce a novel Hybrid Dynamic Pruning (HDP), an efficient algorithm-architecture co-design approach that accelerates transformers by exploiting head sparsity, block sparsity and approximation opportunities to reduce computation in attention and to reduce memory access. Observing the large redundancy in attention scores and attention heads, we propose a novel integer-based row-balanced block pruning to prune unimportant blocks in the attention matrix at run time, and an integer-based head pruning to detect and prune unimportant heads at an early stage at run time. We also propose an approximation method that reduces attention computations. To support these methods efficiently, with low latency and high power efficiency, we propose an HDP co-processor architecture. Hardware acceleration, dynamic pruning, approximation, self-attention. § INTRODUCTION Transformer models, including BERT <cit.>, GPT <cit.>, T5 <cit.> and others <cit.> <cit.>, have transformed Natural Language Processing (NLP) with their attention mechanism, achieving top performance in tasks such as question answering <cit.>, text classification <cit.>, and machine translation <cit.>. The transformer architecture uses the self-attention mechanism <cit.> and is highly parallelizable on modern Graphical Processing Units (GPUs), providing major benefits over older models such as Long Short-Term Memories (LSTMs) and Recurrent Neural Networks (RNNs). This has led to fast progress in NLP, with models like BERT exceeding human performance on difficult tasks <cit.> and expanding their application to computer vision, including object recognition and detection <cit.>, image classification <cit.>, and segmentation <cit.>. Deploying large transformer models on devices with limited resources is challenging due to their high computational and memory requirements. For example, the BERT-Base Transformer needs 440 MB of memory and over 176 Giga Floating-Point Operations (GFLOPs) <cit.>. The computations are particularly difficult because of the complex attention operations and the quadratic computational complexity with respect to the length of input sequences <cit.>. 
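To make the scaling issue concrete, the rough count below (a sketch, not a measurement) compares the sequence-length-quadratic attention terms with the sequence-length-linear projection and FFN terms for one encoder layer of a BERT-Base-like model; the hidden size 768 and FFN size 3072 are the standard BERT-Base values, a multiply-accumulate is counted as two FLOPs, and softmax, normalization and memory traffic are ignored, which is why the FLOP share understates the measured runtime share of attention cited below:

def encoder_layer_flops(seq_len, d=768, d_ffn=3072):
    # Linear-in-seq_len terms: Q/K/V projections, output projection, two FFN matmuls.
    proj = 3 * 2 * seq_len * d * d
    out_proj = 2 * seq_len * d * d
    ffn = 2 * seq_len * d * d_ffn + 2 * seq_len * d_ffn * d
    # Quadratic-in-seq_len terms: Q x K^T scores and attention_prob x V.
    scores = 2 * seq_len * seq_len * d
    context = 2 * seq_len * seq_len * d
    return scores + context, proj + out_proj + ffn

for L in (128, 256, 512, 768, 1024):
    quad, lin = encoder_layer_flops(L)
    print(f"seq_len={L:4d}: quadratic attention terms = {quad/1e9:5.2f} GFLOPs, "
          f"{quad / (quad + lin):5.1%} of the layer total")

Even in this FLOP-only view the quadratic terms grow much faster than the linear-layer terms; their share of measured latency is larger still in practice, consistent with the runtime measurements cited next.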
Attention operations in transformer models become increasingly dominant as the input sequence length grows. For the BERT-Base Transformer deployed on edge platforms, with a sequence length of 512, the attention operations account for about half of the total execution time, and this figure rises to 70% when the sequence length extends to 768 <cit.>. Therefore, finding efficient ways to handle attention operations is crucial for speeding up transformers. Many studies have utilized sparsity to mitigate the quadratic time and space complexity. Some techniques save computational effort by using fixed or static sparse attention patterns <cit.> <cit.> <cit.>, but their performance is limited <cit.>, since the sparsity pattern in attention is naturally dynamic and depends only on the input. Other techniques focus on dynamic sparsity, meaning there is no fixed pattern for which parts are sparse (zero). For example, A^3 <cit.> uses various approximation methods to skip calculations of near-zero values, aiming to decrease computational demands, but it requires loading all data onto the chip, which does not decrease off-chip DRAM access. SpAtten <cit.> introduces a cascaded token pruning mechanism that gradually eliminates less important tokens to simplify the workload using a Top-K strategy. Despite being tailored for dynamic decision-making in hardware, Top-K incurs significant computational cost. Energon <cit.> uses a mixed-precision, multi-stage filtering technique to mimic the Top-K pruning approach, but it relies on a special unit to handle sparsity. AccelTran <cit.> prunes values in all transformer matrix multiplications if they fall below a certain predetermined threshold. A^3, Energon, and AccelTran leverage unstructured sparsity to realize efficient deployment of Transformers, which leads to non-uniform data access and lower efficiency; such sparsity patterns are hard to predict, which can slow down performance. Other research efforts have been directed towards the removal of attention heads in transformer models, based on the understanding that not all heads contribute significantly to the transformer's performance. The authors of <cit.> attach trainable gating parameters to each head, regularized with an L0 loss. In <cit.>, the importance of each attention head is gauged by its sensitivity to the overall loss; this sensitivity is utilized as an indirect measure of the significance of each head. In <cit.>, a novel approach termed 'single-shot meta-pruner' is presented. This method trains a compact convolutional neural network with the specific purpose of identifying and selecting the attention heads that are crucial for preserving the distribution of attention within the model. All of these studies perform pruning at compile time rather than run time and require retraining to recover the accuracy drop. SpAtten <cit.> also performs cascaded head pruning at run time using the Top-K approach, wherein the importance of each attention head is derived by aggregating the absolute values of its attention outputs throughout the layers, resulting in an aggregate importance score for each head. SpAtten uses a separate unit to perform the head Top-K selection, which also incurs significant computational cost. A^3, Energon and AccelTran do not support head pruning. To overcome the above challenges, we propose integer-based Hybrid Dynamic Pruning (HDP), an algorithm-architecture co-design to enable efficient attention inference. 
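To illustrate why run-time Top-K selection is awkward in hardware while single-pass threshold filtering is cheap, consider the small NumPy sketch below; the mean/max-based threshold rule used here is only a generic stand-in and is not the exact criterion of Energon or of the method proposed in this work:

import numpy as np

rng = np.random.default_rng(0)
scores = rng.standard_normal((8, 64))              # toy attention scores: 8 queries x 64 keys
k = 8                                              # keep the 8 strongest keys per query

# Exact Top-K needs a (partial) sort of every row, which is expensive to realize at run time.
idx = np.argpartition(-np.abs(scores), k, axis=1)[:, :k]
topk_mask = np.zeros(scores.shape, dtype=bool)
np.put_along_axis(topk_mask, idx, True, axis=1)

# Threshold filtering needs only cheap per-row statistics gathered in one pass.
mean = np.abs(scores).mean(axis=1, keepdims=True)
peak = np.abs(scores).max(axis=1, keepdims=True)
alpha = 0.5                                        # hypothetical blending knob
thr_mask = np.abs(scores) >= mean + alpha * (peak - mean)

kept = thr_mask.mean()
overlap = (topk_mask & thr_mask).sum() / topk_mask.sum()
print(f"threshold filter keeps {kept:.1%} of entries; "
      f"it covers {overlap:.1%} of the exact Top-K entries")

The filter only approximates the Top-K set, which is why threshold-based schemes must be co-designed with the pruning granularity and the numeric format.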
HDP operates on multiple levels within the attention matrix, reducing complexity by concentrating on blocks and heads. It employs fine-grained block pruning to remove small, less critical blocks and incorporates head pruning to selectively eliminate less important heads, with all decisions based on the integer parts of the inputs. The pruning is done dynamically: it is applied during inference without relying on fixed sparsity patterns and without fine-tuning or retraining. Our main contributions are as follows. * We propose integer-based, fine-grained block pruning, which prunes unimportant small-sized blocks, based on the observation that self-attention primarily relies on a few critical query-key pairs, highlighting significant redundancy in the self-attention mechanism. HDP utilizes the integer part of the input to identify and prune less important blocks, focusing subsequent operations only on the unpruned blocks for enhanced efficiency. * We introduce an early head pruning strategy that identifies and eliminates less important heads based on the integer parts of the input at the initial stages of computation, unlike the method in <cit.>, which performs pruning after all computations. * We approximate the attention calculation by breaking down the multiplication of Q and K into three components: integer Q × integer K, integer Q × fractional K, and fractional Q × integer K. By summing these components together, we not only approximate the attention outcome but also achieve near-zero pruning, as the fractional Q × fractional K multiplication is omitted. This method effectively reduces computational complexity while maintaining model accuracy during inference. * We design and implement an ASIC-based architecture to efficiently execute HDP, targeting encoder-only models to reduce the critical path by half and enhance throughput and hardware utilization. Our architecture functions as a co-processor, compatible with existing neural network accelerators. Through carefully designed pipelines and architectural optimizations, we aim to significantly boost performance and reduce energy consumption. The article is structured as follows: Section II provides background information on transformer acceleration. Section III details the methodology of the HDP framework. Section IV describes the hardware architecture of the HDP co-processor. Section V outlines the experimental setup and the baselines used for comparison, and discusses the results. Finally, Section VI concludes the article. § BACKGROUND AND MOTIVATION * Transformer Algorithm Transformers have shown state-of-the-art performance in the natural language processing field since their initial introduction in 2017 <cit.>. They are often regarded as alternatives to classic CNNs and RNNs in various real-world situations due to their exceptional efficiency and generality. Transformers consist of two essential components: the encoder and the decoder, which are formed by layering many transformer blocks. As depicted in Fig. <ref>, a block comprises three primary elements: linear layers, multi-head self-attention, and the normalization layer. The process starts with transforming the input vectors into embedding vectors, which are then sequentially passed through a series of processing blocks. 
Within each block, the input is projected through the projection layer to Query (Q), Key (K) and Value (V) features using the weights W^Q, W^K and W^V. Following this, attention mechanisms are applied to these features to capture the long-term relational dependencies present in the input sequence. The attention mechanism, as shown in Algorithm <ref>, splits the Q, K and V matrices into H smaller sets Q_h, K_h, and V_h, one per attention head. Each of these sets forms an "attention head"; see the right side of Fig. <ref>. Inside each head, the output is computed as follows: Attention(Q_h, K_h, V_h) = softmax(Q_hK_h^T/√(d_k))V_h. The process starts with the computation of the dot product between Q_h and K_h, followed by scaling. This step computes the alignment or similarity between each pair of tokens, the fundamental components of a sequence. The resulting matrix represents each token's relationship with every other token. Then a row-wise softmax is applied to obtain the attention probability. When processing a particular token, the attention probability weights tell the model how much attention to give to each token in the sequence. After that, the attention probability is multiplied by the Value matrix V_h, resulting in a weighted sum that forms the output of the single-head attention layer. The resulting output is a combination of information from all tokens, with each token's importance and relevance determined and weighted by the attention mechanism. Each head independently calculates its attention result, and the results are concatenated to obtain the final output. The idea is that each head can concentrate on distinct segments of the input sequence or capture different types of relations within the data. The attention mechanism fulfills various functions: it enables the model to focus on the most significant tokens of the input sequence, captures long-term relationships within the sequence, and greatly enhances the model's awareness of context. This selective attention and context-aware processing are critical for complicated tasks such as language translation <cit.>, text summarization <cit.>, and other NLP tasks. In addition to the multi-head attention component, transformer models also employ other operations such as normalization and the FFN. The FFN in a transformer is composed of two FC layers. The function of the FFN can be mathematically expressed as: FFN(X) = GELU(XW_1 + b_1)W_2 + b_2, where X, W_1, b_1, W_2, and b_2 represent the input to the FFN, the weight matrices, and the bias vectors, respectively. The Gaussian Error Linear Unit (GELU) serves as the activation function, providing a non-linear transformation of the output of the first layer before it is passed on to the second layer <cit.>; it is widely used in Transformers. Our focus in this work is the attention layer, as it is the bottleneck of Transformer models. * Complexity of Attention Layers The primary computational requirements of attention layers in a neural network model are pairwise dot-products performed on a set of l vectors. As a consequence, the complexity is O(l^2d). The computational cost associated with attention operations escalates as the length of the input sequence l increases, while d is usually fixed. By measuring the time consumed in the attention layer of the BERT-base model <cit.> on both embedded GPU and CPU platforms, it is observed that attention operations consume half of the total computational time for input sequences longer than 512. 
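The quadratic cost is easy to see in a minimal reference implementation of the per-head computation in Eq. (<ref>): every head materializes an l × l score matrix and an l × l probability matrix (NumPy sketch; biases, masking and the output projection are omitted):

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, n_heads):
    """X: (l, d); Wq/Wk/Wv: (d, d). Returns the concatenated head outputs, shape (l, d)."""
    l, d = X.shape
    d_k = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outs = []
    for h in range(n_heads):
        sl = slice(h * d_k, (h + 1) * d_k)
        Qh, Kh, Vh = Q[:, sl], K[:, sl], V[:, sl]
        scores = Qh @ Kh.T / np.sqrt(d_k)      # (l, l): the quadratic bottleneck
        prob = softmax(scores, axis=-1)        # row-wise softmax (attention_prob)
        outs.append(prob @ Vh)                 # weighted sum of the values
    return np.concatenate(outs, axis=1)

l, d, H = 512, 768, 12
rng = np.random.default_rng(0)
X = rng.standard_normal((l, d)).astype(np.float32)
W = [0.02 * rng.standard_normal((d, d)).astype(np.float32) for _ in range(3)]
Y = multi_head_attention(X, *W, n_heads=H)
print(Y.shape)   # (512, 768); each head materializes a 512 x 512 score matrix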
These attention operations consume almost 70% of the computation time when the input sequence length reaches 768 <cit.>. Therefore, in scenarios where processing long input sequences is necessary, attention operations emerge as the primary computational bottleneck. Moreover, because of the advancements in linear layers through weight pruning <cit.>, quantization <cit.>, and specialized accelerators <cit.> <cit.>, there is a need to optimize the attention mechanism to ensure balanced computational efficiency in transformer models. * Dynamic sparsity in Multi Head Self Attention Sparse attention methods are driven by the recognition that not all attention probabilities are significant. After applying a row-wise Softmax function to obtain the attention_prob, most likely only a small subset of the scores in each row will impact the attention_out results. Similarly, sparse multi-head attention strategies are typically motivated by the recognition that not all heads in the attention mechanism have an equal impact on the final output. This suggests that some heads in the attention model might have a minimal impact on the overall outcome. These observations point to the existence of redundant heads and insignificant attention probabilities in the attention architecture, indicating that the model could be made more efficient by focusing on the most crucial heads and removing unimportant attention scores. By examining the attention weight matrices (attention probabilities) generated from different heads across various layers and inputs, we observe that the significance of individual heads in a transformer's multi-head attention mechanism depends on the input data, varying not only with the input but also with the head's position within the model's layered architecture. Fig. <ref> presents the attention weights of various heads across different layers and inputs in a BERT-base model fine-tuned on the SST-2 dataset <cit.>. As illustrated in Fig. <ref>, there is notable variation in the attention weights for the same head across different layers. For example, the attention values for the eleventh head, highlighted by the red box, show considerable fluctuation across Layers 9, 10, and 11. Additionally, the attention patterns of the same head within the same layer also exhibit significant differences when compared across various inputs. This is clearly depicted in Fig. <ref> and Fig. <ref>, where the green boxes depict the differences in attention values for the same heads and layers when subjected to different inputs. Specifically, Head 0 and Head 1 in Layer 11 exhibit notably low attention weights for Input 1. In contrast, for Input 2, Head 1 and Head 2 in Layer 11 display significantly higher attention values, highlighting the data-dependent nature of attention mechanisms in the model. Furthermore, it is clear that for most heads only a subset of the attention weights has high magnitude, and there is no fixed pattern for the important weights. This variability highlights the complex, dynamic nature of how transformers process information. For a given input, certain heads in specific layers may become more active, focusing on particular aspects of the data. This activation can differ from one input to another, reflecting the adaptive response of each head to the unique characteristics of the data it encounters. Similarly, the role of a head may differ across layers. 
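The degree to which attention mass concentrates on a few query-key pairs can be quantified directly: for each query row of an attention probability matrix, one measures how much probability is covered by its largest entries. The sketch below does this for a synthetic row-stochastic matrix that stands in for attention_prob; the same function can be applied to attention maps extracted from a fine-tuned BERT model to obtain the per-head, per-layer statistics discussed above:

import numpy as np

def topk_mass(attn_prob, k):
    # Fraction of each row's probability mass covered by its k largest entries.
    # attn_prob: (l, l) row-stochastic matrix for one head.
    part = np.partition(attn_prob, -k, axis=1)[:, -k:]
    return part.sum(axis=1)

# Synthetic stand-in for attention_prob: row-wise softmax of peaked random scores.
rng = np.random.default_rng(0)
l = 128
scores = 3.0 * rng.standard_normal((l, l))
prob = np.exp(scores - scores.max(axis=1, keepdims=True))
prob /= prob.sum(axis=1, keepdims=True)

for k in (4, 8, 16):
    m = topk_mass(prob, k).mean()
    print(f"top-{k:2d} of {l} keys cover {m:.1%} of the attention mass on average")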
Furthermore, the subset of high-magnitude attention weights is dynamic, depending on the input sequence, and different heads have different subsets. This data-dependent behavior of attention heads is a critical consideration in model analysis and optimization and motivates us to explore effective methods to eliminate unimportant heads and query-key relations and thereby save computations. § ALGORITHMIC OPTIMIZATION In this section, we discuss the algorithmic optimizations that enhance the transformer model's efficiency and performance, focusing on block pruning, head pruning and approximation techniques. These optimizations are key to reducing computational complexity and memory access. * Block Pruning In the attention score matrix, most of the query-key relations are not important and can be pruned safely, and many methods have been used to prune these relations. The Top-K pruning method <cit.> prunes the attention weights such that whole rows can be removed, but it requires retraining to recover the accuracy and specialized hardware to find the k most significant attention weights. Energon <cit.> avoids the Top-K selection and uses mean filtering as a practical approximation instead, but it still needs a separate unit to perform this operation and faces data duplication overhead due to the multi-round filtering method employed. In Energon and AccelTran <cit.>, the pruning is done in an element-wise pattern, which results in an irregular sparse matrix where zeros are randomly spread over the matrix; this leads to irregular memory access and stalls in the hardware. To address these challenges, we propose integer-based block pruning, where the pruning decision is exclusively determined by the integer parts of the numbers. We employ a small block size for pruning to eliminate the need for retraining and to guarantee a more organized, hardware-compatible sparsity pattern. Initially, multiplication is conducted only on the integer parts of Q and K to obtain Integer_atten. For each 2×2 block, we calculate its importance, θ, as the absolute sum of the values within the block. For each row of blocks, we determine the block pruning threshold, Θ, using a method similar to that in Energon, which involves calculating the minimum, maximum, and mean importance values, along with a predefined pruning ratio, ρ_B, as shown in Algorithm <ref> line <ref>. Blocks whose importance θ falls below the row-specific threshold Θ are pruned and the mask value for the block is set to 0. When a block is pruned, subsequent computations for that block are omitted. Conversely, if a block is retained (mask is 1), the final attention result is approximated, with the approximation technique detailed in <ref>. The block pruning mechanism is described in detail in Algorithm <ref>, lines <ref> to <ref>, and in Fig. <ref>. * Approximation For blocks whose mask value equals 1, i.e., whose θ exceeds Θ, we proceed to approximate the final attention outcome after obtaining the Integer_att result. This involves calculating two terms: the product of Q's fractional component with K's integer component, and vice versa, yielding Frac1_atten and Frac2_atten, respectively. The final attention score is obtained by summing these two terms with the Integer_att. This method not only approximates the attention outcome but also enables near-zero pruning, which has minimal impact on model accuracy during inference <cit.>. 
Specifically, when two numbers are close to zero, their integer parts are zero, so all three computed components are zero. Consequently, the multiplication of the fractional parts is omitted, resulting in effective near-zero pruning. This approximation process is highlighted within the black box in Fig. <ref> and in Algorithm <ref> lines <ref> to <ref>. * Head Sparsity Not all heads are essential, and many can be pruned without affecting overall performance. Unlike the method in <cit.>, where head importance is assessed after completing all computations for the head, we introduce an early head pruning approach. To evaluate head importance, θ_Head, we compute the absolute summation of all values in Integer_att. Heads with θ_Head below a predefined threshold τ_H are pruned entirely, and the remaining computations for these heads are skipped. The threshold τ_H is a parameter obtained by profiling. The head pruning process is indicated by the red box in Fig. <ref> and described in Algorithm <ref> at line <ref>.
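As a functional summary of the three steps above, the Python/NumPy sketch below applies block pruning, early head pruning, and the integer/fraction approximation to the score computation of a single head. It is a software illustration under stated assumptions, not the hardware algorithm: the row-threshold formula of Algorithm <ref> is not spelled out in the text, so the interpolation between the minimum, mean, and maximum block importances used below (driven by the pruning ratio ρ_B) is an assumption, as are the function and variable names; the real datapath operates on 16-bit fixed-point values rather than floats.

import numpy as np

BLOCK = 2  # 2x2 blocks, as in the text

def split_int_frac(x):
    """Split values into integer and fractional parts (floats stand in for fixed point)."""
    xi = np.trunc(x)
    return xi, x - xi

def row_threshold(imp_row, rho_b):
    """Assumed Energon-style threshold: interpolate between min, mean, and max
    block importance according to the target pruning ratio rho_b."""
    lo, mu, hi = imp_row.min(), imp_row.mean(), imp_row.max()
    return lo + 2 * rho_b * (mu - lo) if rho_b <= 0.5 else mu + 2 * (rho_b - 0.5) * (hi - mu)

def hdp_head_scores(Q, K, rho_b=0.7, tau_h=0.0):
    """Approximate attention scores for one head; returns None if the head is pruned.
    Assumes the sequence length is a multiple of BLOCK."""
    Qi, Qf = split_int_frac(Q)
    Ki, Kf = split_int_frac(K)

    integer_att = Qi @ Ki.T                         # integer-only Q x K^T
    n, m = integer_att.shape
    block_imp = np.abs(integer_att).reshape(n // BLOCK, BLOCK, m // BLOCK, BLOCK).sum(axis=(1, 3))

    theta_head = np.abs(integer_att).sum()          # early head-pruning decision
    if theta_head < tau_h:
        return None                                 # skip all remaining work for this head

    mask = np.zeros_like(block_imp, dtype=bool)     # row-wise block mask
    for r in range(block_imp.shape[0]):
        mask[r] = block_imp[r] >= row_threshold(block_imp[r], rho_b)

    # For surviving blocks, add the two mixed corrections; Frac_Q x Frac_K is
    # always dropped, which also yields near-zero pruning for tiny operands.
    frac1 = Qf @ Ki.T
    frac2 = Qi @ Kf.T
    keep = np.kron(mask.astype(np.int8), np.ones((BLOCK, BLOCK), dtype=np.int8)).astype(bool)
    return np.where(keep, integer_att + frac1 + frac2, 0.0)

On the accelerator, frac1 and frac2 are of course computed only for the blocks that survive the mask; they are formed densely here just to keep the sketch short.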
§ HARDWARE ARCHITECTURE Current attention accelerators and traditional CPUs/GPUs lack the capability to execute the proposed hybrid dynamic sparse attention technique efficiently. To address this gap, we introduce a novel HDP accelerator. HDP is designed to function as a co-processor and is compatible with a variety of neural network accelerators for easy integration. * Architecture Overview The HDP architecture, depicted in Fig. <ref>, comprises multiple cores; the architecture of an individual core is shown in the middle part of Fig. <ref>. Each core is composed of an array of processing elements (PE Array), a Sparsity Engine (SE), an adder, and a softmax unit. The PE Array handles matrix multiplication tasks such as Q × K^T and attention_prob × V, and also calculates importance values. The SE is tasked with identifying which blocks to prune and deciding whether a head should be pruned. Workflow: Once Q, K, and V have been generated and quantized by another processor in a 16-bit fixed-point format and stored in memory, HDP processes each attention head sequentially, employing tiled matrix multiplication for these operations. Initially, the integer components of Q and K are retrieved from off-chip memory into on-chip memory to compute Integer_Q × Integer_K using tiling. The SE then uses the computed importance values for each block to create a mask indicating which blocks are not pruned. This prevents unnecessary data fetching for pruned blocks, reducing memory access and computational overhead. Once the Integer_Q × Integer_K computation is complete and the head importance has been assessed, the SE decides whether the head will be pruned. If so, the remaining computations for that head are skipped and the accelerator proceeds to the next head. For heads that remain unpruned, a Fetch Upon Mask (FUM) strategy is utilized. If the mask value is 0, indicating a pruned block, the corresponding K values are not fetched and the computation for that block is skipped. If the mask value is 1, the corresponding Q and K values are fetched, and the processing element (PE) array calculates the two remaining fractions (Integer_Q × Frac_K and Frac_Q × Integer_K) simultaneously. These results, along with the integer results from the previous step, are added together by an ADDER module to obtain the total attention_score. After all Q × K computations are complete, a row-wise softmax is applied to the attention_score. Then the PEs are used to perform the attention_score × V computation. Specifically, the first and second PEs in the first row calculate Integer_attention_score × Integer_v, while the third and fourth PEs in the first row compute Integer_attention_score × Frac_v. The first and second PEs in the second row calculate Frac_attention_score × Integer_v, and the third and fourth PEs in the second row calculate Frac_attention_score × Frac_v. Finally, all these results are summed using the ADDER module to obtain the final output. The attention results for each tile are immediately stored in DRAM upon completion, so that host DNN accelerators can access them to perform subsequent computations. In the sections that follow, we provide a more in-depth exploration of each module and of the optimizations we propose to enhance their functionality. * Tiling and Dataflow A significant portion of the computational workload in transformer models is attributed to matrix multiplication, which must be optimized to boost accelerator performance. We implement tiled matrix multiplication, a technique primarily employed in GPUs <cit.>, to improve the efficiency of these operations in our accelerator design. Tiling enhances resource efficiency and enables parallel computation, as depicted in Fig. <ref>. The first 4 × 4 tile from matrix A is multiplied by the first 4 × 8 tile from matrix B, with the partial results stored in a 4 × 8 tile of matrix C, denoted by 1 in all matrices. Subsequently, the process advances along 2 in the tiles of A and B to accumulate additional partial sums in C. During this stage, an output-stationary dataflow is employed, facilitating the reuse of partial-sum outputs in the accumulator. Additionally, a local A-stationary strategy is implemented: while outputs are reused in the outer loop, inputs from matrix A are retained and reused in the inner loop <cit.>. Next, the process proceeds along 3 in all matrices, followed by a move along 4. A software sketch of this tiled, output-stationary loop nest is given below.
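The loop nest below mirrors this tiled schedule in plain Python/NumPy: each output tile stays in the accumulator while partial sums are collected along the reduction dimension (output-stationary), and the A tile is held locally within the inner body. The 4×4 and 4×8 tile shapes follow the figure; everything else is an illustrative assumption rather than the accelerator's actual implementation.

import numpy as np

TILE_M, TILE_K, TILE_N = 4, 4, 8   # A tiles are 4x4, B tiles are 4x8

def tiled_matmul(A, B):
    """C = A @ B with an output-stationary, locally-A-stationary schedule.
    Assumes the matrix dimensions are multiples of the tile sizes."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for m0 in range(0, M, TILE_M):               # steps 3/4: move across output tiles
        for n0 in range(0, N, TILE_N):
            acc = np.zeros((TILE_M, TILE_N), dtype=A.dtype)   # stays in the accumulator
            for k0 in range(0, K, TILE_K):       # step 2: accumulate partial sums
                a_tile = A[m0:m0+TILE_M, k0:k0+TILE_K]        # reused for the whole inner body
                b_tile = B[k0:k0+TILE_K, n0:n0+TILE_N]
                acc += a_tile @ b_tile
            C[m0:m0+TILE_M, n0:n0+TILE_N] = acc  # each output tile is written exactly once
    return C

A = np.random.rand(16, 8)
B = np.random.rand(8, 32)
assert np.allclose(tiled_matmul(A, B), A @ B)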
* Processing Elements The processing element, shown in Fig. <ref> (right), is the fundamental computational unit of the accelerator and handles all matrix multiplication operations. Operating in an output-stationary mode and behaving similarly to a systolic-array PE, it receives rows from tiles of the first matrix and columns from tiles of the second matrix, one input at a time. It multiplies these values and stores the intermediate sums in accumulators until the entire row from the first matrix has been multiplied by the corresponding column from the second matrix; at this point, the accumulators hold the final results for the corresponding tile of the result matrix. In the case of the Integer_Q × Integer_K multiplication, these results are also used to determine the block's importance, since the output of a processing element corresponds to a block in the result matrix. The importance of the block, as illustrated in Fig. <ref>, is the absolute sum of the accumulators. * Sparsity Engine The Sparsity Engine is responsible for determining the sparsity pattern at both the block and head levels. As illustrated in Fig. <ref>, the Sparsity Engine's internal architecture takes in importance scores from the PE and stores them in its internal memory. Additionally, it keeps track of the minimum, maximum, and total sum of these importance values for every row of blocks. Upon receiving the END_R signal, which indicates the completion of a full row in the result matrix or, equivalently, the multiplication of a row from the first matrix by all columns of the second matrix in the Integer_Q × Integer_K multiplication, the engine calculates the block pruning threshold Θ for that specific row. This calculation is based on the equation provided in line <ref> of Algorithm <ref>. The engine then generates the Mask for the row by subtracting Θ from the importance values of the blocks: if the result is negative, the block falls below Θ and is marked for pruning. Furthermore, when the END_H signal is received, indicating the completion of the Integer_Q × Integer_K multiplication for the head, the engine uses the θ_Head value it has computed, which represents the total sum of all importance values across the entire head, and compares it with τ_H, an input parameter denoting the head pruning threshold. If θ_Head falls below τ_H, the head is deemed redundant and is excluded, allowing the accelerator to bypass any further calculations for this head and proceed to the subsequent head. * Softmax Module Once the attention scores are obtained, a row-wise softmax function, defined as e^s_i / ∑_j e^s_j, is applied. For every input received by the module, the exponent is approximated using a second-order polynomial. The exponent results are stored in internal memory, and their running sum is accumulated. At the end of every row, the reciprocal of the sum is computed using a linear approximation, and the exponent values are then multiplied by this reciprocal to produce the softmax result. A numerical sketch of this approximation is given below.
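The sketch below illustrates the flavor of such an approximation: the exponential is replaced by a range-reduced second-order polynomial (the coefficients follow the well-known I-BERT-style fit and are an assumption here, since the text does not state the polynomial actually fitted for the hardware), and the division is replaced by a piecewise-linear reciprocal lookup.

import numpy as np

LN2 = 0.6931471805599453

def approx_exp(x):
    """exp(x) for x <= 0 via range reduction and a 2nd-order polynomial
    (I-BERT-style coefficients; assumed, not taken from the HDP hardware)."""
    z = np.floor(-x / LN2)               # number of halvings
    p = x + z * LN2                      # remainder in (-ln2, 0]
    poly = 0.3585 * (p + 1.353) ** 2 + 0.344
    return poly * 2.0 ** (-z)

def approx_reciprocal(s, lo=1.0, hi=64.0, points=256):
    """Piecewise-linear stand-in for the reciprocal unit (assumed LUT-based)."""
    grid = np.linspace(lo, hi, points)
    return np.interp(s, grid, 1.0 / grid)

def approx_softmax(scores):
    """Row-wise softmax using the two approximations above."""
    shifted = scores - scores.max(axis=-1, keepdims=True)   # keeps inputs <= 0
    e = approx_exp(shifted)
    return e * approx_reciprocal(e.sum(axis=-1, keepdims=True))

x = np.random.randn(4, 8)
ref = np.exp(x - x.max(-1, keepdims=True))
ref /= ref.sum(-1, keepdims=True)
print(np.abs(approx_softmax(x) - ref).max())   # worst-case deviation from the exact softmax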
§ EVALUATION §.§ Algorithm Evaluation §.§.§ Evaluation Models and Datasets To validate the efficacy of our proposed method, HDP, we focus on evaluating encoder-only models. For this purpose, we utilize BERT-Tiny <cit.> and BERT-Base <cit.>, two well-established pre-trained models. BERT-Tiny consists of two encoder layers, each with a hidden dimension of 128 and two attention heads, while BERT-Base contains 12 encoder layers, each with a hidden dimension of 768 and 12 attention heads. These encoder-only models are particularly promising for applications in areas such as machine translation <cit.> and language generation <cit.>, where their efficiency and scalability can be leveraged for improved performance. Our evaluation is conducted on two benchmark tasks, SST-2 and CoLA, both sourced from the GLUE benchmark <cit.>. §.§.§ Dynamic Inference with Transformer We present the results of our experiments, encompassing various aspects of our study. These results provide a comprehensive overview of the performance and efficacy of the proposed HDP algorithm, from exploring various block pruning ratios and profiling head thresholds to conducting thorough comparisons with the established Top-K block pruning method. * Block Pruning: Our baseline for comparison is the Top-K block pruning method with a block size of 2 × 2. As depicted in Fig. <ref>, the Top-K method can prune up to 75% of all blocks with 1% accuracy loss, whereas HDP achieves a pruning ratio of 70%. However, for pruning ratios exceeding 80%, HDP no longer serves as a reliable approximation of the Top-K method. This discrepancy is evident in the figure, where the accuracy of HDP is significantly higher than that of Top-K, indicating that the model is unable to accurately determine the correct block pruning threshold, Θ. This issue arises because the model incorrectly assumes that it is pruning a high percentage of blocks when in fact it is not, which is attributed to the assumption that the mean divides the data into equal halves. Both models exhibit an initial improvement in accuracy followed by a decline as the level of sparsity increases. This may be linked to the over-parameterization of the BERT model <cit.>, and it can be thought of as follows: in NLP, if unimportant tokens are removed, the model can focus more on the important tokens, resulting in better accuracy. * Head Pruning: In HDP, the impact of head pruning is depicted in Fig. <ref>, which shows the threshold profiling and the corresponding accuracies after applying head pruning to the BERT-Base and BERT-Tiny models on the SST-2 and CoLA datasets. As anticipated, head pruning is particularly critical for BERT-Tiny, as illustrated in Fig. <ref> and Fig. <ref>. These figures reveal that less than 2% of the model's heads can be pruned without affecting the accuracy. This sensitivity is due to the model's limited number of heads, only 4 in total: removing even a single head amounts to pruning one-fourth of all heads, which is a very coarse-grained pruning step to perform without any retraining. On the other hand, head pruning in the BERT-Base model yields more favorable pruning ratios due to its larger number of heads, totaling 144. As depicted in Fig. <ref> and Fig. <ref>, the model can prune approximately 13-17% of its heads with only a 1% decrease in accuracy. * Approximation: To assess the effectiveness of the proposed approximation method, we examine its impact on the models' accuracy. Fig. <ref> illustrates the accuracy of models employing block pruning with and without the approximation technique. For the BERT-Base model, as depicted in Fig. <ref> and Fig. <ref>, performance remains almost the same, suggesting that the approximation does not negatively affect the model while providing benefits in computational efficiency. In contrast, the BERT-Tiny model, as shown in Fig. <ref> and Fig. <ref>, is more sensitive to the approximation, experiencing a greater effect on its performance. While both BERT-Base and BERT-Tiny have an identical hidden size of 64 per head, the smaller number of heads in BERT-Tiny amplifies the impact of pruning within a head on its overall performance. The near-zero pruning strategy, which allocates higher softmax values to unpruned elements, allows the model to focus more on crucial Q-K relations, thereby enhancing its concentration on important components. However, in some instances the approximation may reduce accuracy. This could be attributed to the nature of the approximation itself: the fraction removed is not uniform across all values, which could inadvertently lower the attention score for important Q-K relations in certain scenarios. * Net Pruning: Fig. <ref> demonstrates the combined effect of block pruning, head pruning, and approximation on the model's overall sparsity. With a 1% reduction in accuracy, the net sparsity achieved by BERT-Base on the SST-2 dataset is 75%, matching the pruning percentage attained by the Top-K method. For BERT-Base on the CoLA dataset, the net sparsity is 65%. By leveraging net sparsity, we were able to attain a pruning ratio comparable to that of the Top-K method.
In the Top-K approach, even within an unimportant head, certain blocks remain unpruned because of their significance within that specific head, even though the head as a whole may not be crucial. By implementing head pruning, we remove such unimportant heads entirely, thereby achieving a higher overall pruning ratio. §.§ Comparisons with Related Work §.§.§ Comparison with SpAtten To evaluate our proposed head pruning technique, we compare it with SpAtten <cit.>, the only study to date that applies head pruning dynamically on hardware platforms. As documented in SpAtten's results for the BERT-Base model on the CoLA dataset, it achieved up to 17% head pruning with no loss in accuracy. To ensure a fair comparison with SpAtten, we quantized our model to 12 bits and followed the identical fine-tuning protocol outlined in their study. Fine-tuning the BERT-Base model on the CoLA dataset took under two hours on an average GPU, employing various combinations of block pruning ratios and head pruning thresholds without pruning anything from the first 30% of the layers. Fig. <ref> shows these values. After fine-tuning, our method was able to prune approximately 17% of the heads, the same as SpAtten. It is worth noting that for higher pruning ratios, for example a 35% pruning percentage (1.55x pruning ratio), both methods suffer a significant drop in accuracy, but the decrease is less pronounced in our model (7.5%) than in SpAtten (10%). SpAtten employs a cascading head pruning approach, in which once a head is pruned from one layer it is also pruned from all subsequent layers. This contrasts with our observation that head importance is data- and layer-dependent, meaning that a head may be important in one layer but not in another, as demonstrated in Fig. <ref>. Table <ref> compares HDP with popular transformer accelerators. § CONCLUSION In this work we presented HDP, a novel algorithm-architecture co-design to efficiently run dynamic sparse attention models. We first proposed a novel integer-based, row-balanced block pruning scheme to prune unimportant blocks in the attention matrix and an integer-based head pruning scheme to prune unimportant heads. Moreover, we proposed an approximation method that reduces computation and performs near-zero pruning. We also implemented this method in two co-processor architectures, HDP-Edge and HDP-Server, to accelerate the algorithm on mobile and server platforms. § ACKNOWLEDGMENT This work was supported by the Khalifa University of Science and Technology under SOCL.
http://arxiv.org/abs/2407.12345v1
20240717063952
VisionTrap: Vision-Augmented Trajectory Prediction Guided by Textual Descriptions
[ "Seokha Moon", "Hyun Woo", "Hongbeen Park", "Haeji Jung", "Reza Mahjourian", "Hyung-gun Chi", "Hyerin Lim", "Sangpil Kim", "Jinkyu Kim" ]
cs.CV
[ "cs.CV" ]
VisionTrap S. Moon et al. Korea University, Seoul 02841, Republic of Korea The University of Texas at Austin, Texas 78712, USA Perdue University, West Lafayette 95008, USA Hyundai Motor Company, Seongnam 13529, Republic of Korea VisionTrap: Vision-Augmented Trajectory Prediction Guided by Textual Descriptions Seokha Moon10009-0009-0506-5958 Hyun Woo10009-0005-5217-6379 Hongbeen Park10009-0003-2633-288X Haeji Jung10009-0008-8347-7432 Reza Mahjourian20000-0002-4457-8395 Hyung-gun Chi30000-0001-5454-3404 Hyerin Lim40009-0003-3369-8169 Sangpil Kim10000-0002-7349-0018 Jinkyu Kim10000-0001-6520-2074 July 22, 2024 ======================================================================================================================================================================================================================================================================================================= *Corresponding author: J. Kim (jinkyukim@korea.ac.kr) § ABSTRACT Predicting future trajectories for other road agents is an essential task for autonomous vehicles. Established trajectory prediction methods primarily use agent tracks generated by a detection and tracking system and HD map as inputs. In this work, we propose a novel method that also incorporates visual input from surround-view cameras, allowing the model to utilize visual cues such as human gazes and gestures, road conditions, vehicle turn signals, etc, which are typically hidden from the model in prior methods. Furthermore, we use textual descriptions generated by a Vision-Language Model (VLM) and refined by a Large Language Model (LLM) as supervision during training to guide the model on what to learn from the input data. Despite using these extra inputs, our method achieves a latency of 53 ms, making it feasible for real-time processing, which is significantly faster than that of previous single-agent prediction methods with similar performance. Our experiments show that both the visual inputs and the textual descriptions contribute to improvements in trajectory prediction performance, and our qualitative analysis highlights how the model is able to exploit these additional inputs. Lastly, in this work we create and release the nuScenes-Text dataset, which augments the established nuScenes dataset with rich textual annotations for every scene, demonstrating the positive impact of utilizing VLM on trajectory prediction. Our project page is at https://moonseokha.github.io/VisionTraphttps://moonseokha.github.io/VisionTrap. § INTRODUCTION Predicting agents' future poses (or trajectories) is crucial for safe navigation in dense and complex urban environments. To achieve such task successfully, it is required to model the following aspects: (i) understanding individual's behavioral contexts (, actions and intentions), (ii) agent-agent interactions, and (iii) agent-environment interactions (, pedestrians on the crosswalk). Recent works <cit.> have achieved remarkable progress, but their inputs are often limited – they mainly use a high-definition (HD) map and agents' past trajectories from a detection and tracking system as inputs. HD map is inherently static, and only provide pre-defined information that limits their adaptability to changing environmental conditions like traffic near construction areas or weather conditions. 
They also cannot provide visual data for understanding agents' behavioral context, such as pedestrians' gazes, orientations, actions, gestures, and vehicle turn signals, all of which can significantly influence agents' behavior. Therefore, scenarios requiring visual context understanding may necessitate more than non-visual input for better and more reliable performance. In this paper, we advocate for leveraging visual semantics in the trajectory prediction task. We argue that visual inputs can provide useful semantics, which non-visual inputs may not provide, for accurately predicting agents' future trajectories. Despite its potential advantages, only a few works <cit.> have used vision data to improve the performance of trajectory prediction in autonomous driving domain. Existing approaches often utilize images of the area where the agent is located or the entire image without explicit instructions on what information to extract. As a result, these methods tend to focus only on salient features, leading to sub-optimal performance. Additionally, because they typically rely solely on frontal-view images, it becomes challenging to fully recognize the surrounding driving environment. To address these limitations and harness the potential of visual semantics, we propose VisionTrap, a vision-augmented trajectory prediction model that efficiently incorporates visual semantic information. To leverage visual semantics obtained from surround-view camera images, we first encode them into a composite Bird's Eye View (BEV) feature along with map data. Given this vision-aware BEV scene feature, we use a deformable attention mechanism to extract scene information from relevant areas (using predicted agents' future positions), and augment them into per-agent state embedding, producing scene-augmented state embedding. In addition, recent works <cit.> have shown that classifying intentions can improve model performance by helping predict agents' instantaneous movements. Learning with supervision of each agent's intention helps avoid training restrictions and oversimplified learning that may not yield optimal performance. However, annotating agents' intentions by dividing them into action categories involves inevitable ambiguity, which can be costly and hinder efficient scalability. Moreover, creating models that rely on these small sets can limit the model's expressiveness. Thus, as shown in Fig. <ref>, we leverage textual guidance as supervision to guide the model in leveraging richer visual semantics by aligning visual features (, an image of a pedestrian nearby a parked vehicle) with textual descriptions (, “a pedestrian is carrying stacked items, and is expected to stationary.”). While we use additional input data, real-time processing is crucial in autonomous driving. Therefore, we designed VisionTrap based on a real-time capable model proposed in this paper. VisionTrap efficiently utilizes visual semantic information and employs textual guidance only during training. This allows it to achieve performance comparable to high-accuracy, non-real-time single-agent prediction methods <cit.> while maintaining real-time operation. Since currently published autonomous driving datasets do not include textual descriptions, we created the nuScenes-Text dataset based on the large-scale nuScenes dataset <cit.>, which includes vision data and 3D coordinates of each agent. 
The nuScenes-Text dataset collects textual descriptions that encompass high-level semantic information, as shown in <ref>: “A man wearing a blue shirt is talking to another man, expecting to cross the street when the signal changes.” Automating this annotation process, we utilize both a Vision-Language Model (VLM) and a Large-Language Model (LLM). Our extensive experiments on the nuScenes dataset show that our proposed text-guided image augmentation is effective in guiding our trajectory prediction model successfully to learn individuals' behavior and environmental contexts, producing a significant gain in trajectory prediction performance. § RELATED WORK Encoding Behavioral Contexts for Trajectory Prediction. Recent works in trajectory prediction utilize past trajectory observations and HD map to provide static environmental context. Traditional methods use rasterized Bird’s Eye View (BEV) maps with ConvNet blocks <cit.>, while recent approaches employ vectorized maps with graph-based attention or convolution layers for better understanding complex topologies <cit.>. However, HD maps are static and cannot adapt to changes, like construction zones affecting agent behavior. To address this, some works <cit.> aim to address these issues by utilizing images. To obtain meaningful visual semantic information about the situations an agent faces in a driving scene, it is necessary to utilize environmental information containing details from the objects themselves and from the environments they interact with. However, <cit.> focus solely on extracting information about agents’ behavior using images near the agents, while <cit.> process the entire image at once and focus only on information about the scene without considering the parts that agents need to interact with. Therefore, in this paper, we propose an effective way to identify relevant parts of the image that each agent should focus on and efficiently learn semantic information from those parts. Scene-centric vs. Agent-centric. Two primary approaches to predicting road agents' future trajectories are scene-centric and agent-centric. Scene-centric methods <cit.> encode each agent within a shared scene coordinate system, ensuring rapid inference speed but may exhibit slightly lower performance than agent-centric methods. Agent-centric approaches <cit.> standardize environmental elements and separately predict agents' future trajectories, offering improved predictive accuracy. However, their inference time and memory requirements are linearly scaled with the number of agents in the scene, posing a scalability challenge in dense urban environments with hundreds of pedestrians and vehicles. Thus, in this paper, we focus on scene-centric approaches. Multimodal Contrastive Learning. With the increasing diversity of data sources, multimodal learning has become popular as it aims to effectively integrate information from various modalities. One of the common and effective approaches for multimodal learning is to align the modalities in a joint embedding space, using contrastive learning <cit.>. Contrastive Learning (CL) pulls together the positive pairs and pushes away the negative pairs, constructing an embedding space that effectively accommodates the semantic relations among the representations. Although CL is renowned for its ability to create a robust embedding space, its typical training mechanism introduces sampling bias, unintentionally incorporating similar pairs as negative pairs <cit.>. 
Debiasing strategies <cit.> have been introduced to mitigate such false-negatives, and it is particularly crucial in autonomous driving scenarios where multiple agents within a scene might have similar intentions in their behaviors. In our work, we carefully design our contrastive loss by filtering out the negative samples that are considered to be false-negatives. Inspired by <cit.>, we do this by utilizing the sentence representations and their similarities, and finally achieve debiased contrastive learning in multimodal setting. § METHOD This paper explores leveraging high-level visual semantics to improve the trajectory prediction quality. In addition to conventionally using agents' past trajectories and their types as inputs, we advocate for using visual data as an additional input to utilize agents' visual semantics. As shown in Fig. <ref>, our model consists of four main modules: (i) Per-agent State Encoder, (ii) Visual Semantic Encoder, (iii) Text-driven Guidance module, and (iv) Trajectory Decoder. Our Per-agent State Encoder takes as an input a sequence of state observations (which are often provided by a detection and tracking system), producing per-agent context features (Sec. <ref>). In our Visual Semantic Encoder, we encode multi-view images (capturing the surrounding view around the ego vehicle) into a unified Bird's Eye View (BEV) feature, followed by concatenation with a dense feature map of road segments. Given this BEV feature, the per-agent state embedding is updated in the Scene-Agent Interaction module (Sec. <ref>). We utilize Text-driven Guidance module to supervise the model to understand or reason about detailed visual semantics, producing richer semantics (Sec. <ref>). Lastly, given per-agent features with rich visual semantics, our Trajectory Decoder predicts the future positions for all agents in the scene in a fixed time horizon (Sec. <ref>). §.§ Per-agent State Encoder Encoding Agent State Observations. Following recent trajectory prediction approaches <cit.>, we first encode per-agent state observations (, agent's observed trajectory and semantic attributes) provided by object detection and tracking systems. We utilize the geometric attributes with relative positions (instead of absolute positions) by representing the observed trajectory of agent i as {p_i^t - p_i^t-1}_t=1^T where p_i^t=(x_i^t, y_i^t) is the location of agent i in an ego-centric coordinate system at time step t∈{1, 2, …, T}. T denotes the observation time horizon. Note that we use an ego-centric (scene-centric) coordinate system where a scene is centered and rotated around the current ego-agent's location and orientation. Given these geometric attributes and their semantic attributes a_i (, agent types, such as cars, pedestrians, and cyclists), per-agent state embedding s_i^t∈ℝ^d_s for agent i at time step t is obtained as follows: s_i^t = f_geometric(p_i^t - p_i^t-1) + f_type(a_i) + f_PE(e^t), where f_geometric: ℝ^2→ℝ^d_s, f_type: ℝ^1→ℝ^d_s, and f_PE: ℝ^d_pe→ℝ^d_s are MLP blocks. Note that we use the learned positional embeddings e^t∈ℝ^d_pe, guiding the model to learn (and utilize) the temporal ordering of state embeddings. Encoding Temporal Information. Following existing approaches <cit.>, we utilize a temporal Transformer encoder to learn the agent's temporal information over the observation time horizon. 
Given the sequence of per-agent state embeddings {s_i^t}_t=1^T and an additional learnable token s^T+1∈ℝ^d_s stacked into the end of the sequence, we feed these input into the temporal (self-attention) attention block, producing per-agent spatio-temporal representations s'_i∈ℝ^d_s. Encoding Interaction between Agents. We further use the cross-attention-based agent-agent interaction module to learn the relationship between agents. Further, as our model depends on the geometric attributes with relative positions, we add embeddings of the agents' current position p^T_i to make the embeddings spatially aware, producing per-agent representation z_i=s'_i + f_loc(p^T_i) where f_loc: ℝ^2→ℝ^d_s is another MLP block. This process is performed at once within the ego-centric coordinate system to eliminate the cost of recalculating correlation distances with other agents for each individual agent. The agent state embedding z_i is used as the query vector, and those of its neighboring agents are converted to the key and the value vectors as follows: q^Interact_i = W^Interact_Q z_i, k^Interact_j = W^Interact_K z_j, v^Interact_j = W^Interact_V z_j, where W^Interact_Q, W^Interact_K, W^Interact_V∈ℝ^d_Interact× d_s are learnable matrices. §.§ Visual Semantic Encoder Vision-Augmented Scene Feature Generation. Given ego-centric multi-view images ℐ={ℐ_j}_j=1^n_I, we feed them into Vision Encoder using the same architecture from BEVDepth <cit.>, to produce the BEV image feature as B_I∈ℝ^h× w× d_bev. Then, we incorporate the rasterized map information into the BEV embeddings to align B_I. We utilize CNN blocks with Feature Pyramid Network (FPN) <cit.> to produce another BEV feature B_map∈ℝ^h× w× d_map. Lastly, we concatenate all generated BEV features into a composite BEV scene feature B=[B_I; B_map]∈ℝ^h× w× (d_bev+d_map). In this process, we compute map aligned around the current location and direction of the ego vehicle only once, even in the presence of n agents, as we adopt an ego-centric approach. This significantly reduces computational costs compared to agent-centric approaches, which require reconstructing and encoding map for each of the n agents. Augmenting Visual Semantics into Agent State Embedding. When given the vision-aware BEV scene feature B, we use deformable cross-attention <cit.> module to augment map-aware visual scene semantics into the per-agent state embedding z_i, as illustrated in Fig. <ref> (b). This allows for the augmentation of agent state embedding z_i. Compared to commonly used ConvNet-based architectures <cit.>, our approach leverages a wide receptive field and can selectively focus on scene feature, explicitly extracting multiple areas where each agent needs to focus and gather information. Additionally, as the agent state embedding is updated for each block, the focal points for the agent also require repeated refinement. To achieve this, we employ a Recurrent Trajectory Prediction module, which utilizes the same architecture as the main trajectory decoder(explained in <ref>). This module refines the agent's future trajectory u^aux={u^aux_i}_i=1^T_f by recurrently improving the predicted trajectories. These refined trajectories serve as reference points for agents to focus on in the Scene-Agent Interaction module, integrating surrounding information around the reference points into the agent's function. 
Our module is defined as follows: z_i^scene = z_i^interact + ∑_h=1^H W_h [ ∑_o=1^O α_hio W'_h 𝐁(u^aux_i + Δu_hio) ], where H denotes the number of attention heads and O represents the number of sampling offsets for every reference point u^aux_i; we use an auxiliary trajectory predictor and take the agent's predicted future positions as reference points. Note that W_h and W'_h are learnable matrices, and α_hio is the attention weight for each learnable offset Δu_hio in each head. The number of attention points is typically set smaller than the number of surrounding road elements, reducing computational costs. §.§ Text-driven Guidance Module We observe that our visual semantic encoder simplifies visual reasoning about a scene by focusing on salient visible features, resulting in sub-optimal performance in trajectory prediction. For instance, the model may primarily focus on the vehicle itself, disregarding other semantic details, such as “a vehicle waiting in front of the intersection with turn signals on, expected to turn left.” Therefore, we introduce the Text-driven Guidance Module to supervise the model, allowing it to understand the context of the agents using detailed visual semantics. For this purpose, we employ multi-modal contrastive learning, in which positive pairs are pulled together and negative pairs are pushed apart. However, the textual descriptions for prediction tasks in the driving domain are diverse in expression, which creates ambiguity when forming negative pairs between descriptions. To address this, as shown in Fig. <ref>, we extract word-level embeddings using BERT <cit.> and then use an attention module to aggregate these per-word embeddings into a composite sentence-level embedding 𝒯_i for agent i. Given 𝒯_i, we measure the cosine similarity with other agents' sentence-level embeddings 𝒯_j for j ≠ i, and we treat a pair as negative only if sim_cos(𝒯_i, 𝒯_j) < θ_th, where θ_th is a threshold value (we set θ_th=0.8 in our experiments). Further, we limit the number of negative pairs within a batch for stable optimization, which is particularly important as the number of agents in a given scene varies. Specifically, given an agent i, we choose the top-k sentence-level embeddings from {𝒯_j}, sorted in ascending order, for j ≠ i. Subsequently, we form a positive pair between the agent's state embedding z_i^scene and the corresponding textual embedding 𝒯_i, and negative pairs between z_i^scene and {𝒯_j}_j=1^k. Ultimately, we use the following InfoNCE loss <cit.> to guide the agent's state embedding with textual descriptions: ℒ_cl = -log [ e^sim_cos(z_i^scene, 𝒯_i)/τ / ∑_j=1^k e^sim_cos(z_i^scene, 𝒯_j)/τ ], where τ is a temperature parameter that controls the sharpness of the similarity distribution.
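A minimal PyTorch sketch of this debiased contrastive objective is given below. The threshold θ_th = 0.8 and the ascending top-k selection follow the description above, but the value of k, the temperature, the choice of text-text similarity as the sort key, and the inclusion of the positive pair in the denominator are assumptions made for illustration; variable names do not correspond to the released implementation.

import torch
import torch.nn.functional as F

def text_guidance_loss(agent_emb, text_emb, theta_th=0.8, k=8, tau=0.1):
    """Debiased InfoNCE between agent state embeddings and sentence embeddings.
    agent_emb: [N, d] scene-augmented state embeddings z_i^scene
    text_emb:  [N, d] sentence-level embeddings T_i, ordered like the agents."""
    a = F.normalize(agent_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    sim_at = a @ t.T          # agent-text cosine similarities
    sim_tt = t @ t.T          # text-text similarities used for debiasing

    losses = []
    for i in range(a.size(0)):
        pos = sim_at[i, i] / tau
        valid = sim_tt[i] < theta_th        # descriptions dissimilar enough to be negatives
        valid[i] = False
        idx = torch.nonzero(valid).squeeze(-1)
        if idx.numel() == 0:
            continue                        # no reliable negatives for this agent
        order = torch.argsort(sim_tt[i, idx])           # ascending: most dissimilar first
        neg = sim_at[i, idx[order[:k]]] / tau
        denom = torch.logsumexp(torch.cat([pos.view(1), neg]), dim=0)
        losses.append(denom - pos)          # -log softmax probability of the positive pair
    return torch.stack(losses).mean() if losses else agent_emb.new_zeros(())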
§.§ Trajectory Decoder (Figure: an overview of the Transformation Module, which standardizes agents' orientation.) §.§.§ Transformation Module. For fast inference and compatibility with ego-centric images, we adopt an ego-centric approach in the State Encoder and Scene-Agent Interaction modules. However, as noted by Su <cit.>, ego-centric approaches typically underperform agent-centric approaches because they must learn invariance to translations and rotations between scene elements. This implies that the features of agents with similar future movements are not standardized. Thus, prior to utilizing the Text-driven Guidance Module and predicting each agent's future trajectory, we employ the Transformation Module to standardize each agent's orientation, aiming to mitigate the complexity associated with learning rotation invariance. This also allows us to apply the Text-driven Guidance Module effectively, since the features of agents in similar situations become similar. As depicted in Fig. <ref>, the Transformation Module takes the agent's feature and a rotation matrix ℛ as input and propagates the rotation matrix into the agent's feature using a Multi-Layer Perceptron (MLP). This transformation aligns the agent's features so that the situations they encode face along the y-axis. Trajectory Decoder. Similar to <cit.>, we model the distribution over the agent's future trajectory u={u_i}_i=1^T_f, with u_i∈ℝ^2, as a Gaussian Mixture Model (GMM). We represent a mode at each time step t as a 2D Gaussian distribution over position with mean μ_t∈ℝ^2 and covariance Σ_t∈ℝ^2× 2. Our decoder predicts a weighted set of possible future trajectories for the agent, producing the full output distribution as p(u) = ∑_m=1^M ρ_m ∏_t=1^T_f 𝒩(u_t - μ_m^t, Σ_m^t), where the decoder produces a softmax probability ρ over mixture components and Gaussian parameters μ and Σ for M modes and T_f time steps. Loss Functions. We optimize trajectory predictions and their associated confidence levels by minimizing ℒ_traj to train our model in an end-to-end manner. We compute ℒ_traj as the negative log-likelihood of the actual trajectories under the predicted trajectories and the corresponding confidence scores, formulated as follows: ℒ_traj = -1/N ∑_i=1^N log( ∑_m=1^M ρ_i,m/√(2b^2) exp( -(𝐘_i - 𝐘̂_i,m)^2/2 ) ). Here, b and 𝐘 represent the scale parameter and the ground-truth future trajectory, respectively. We denote the predicted future positions as 𝐘̂_i,m and the corresponding confidence scores as ρ_i,m for agent i across the different modes m ∈ M. Furthermore, we minimize an auxiliary loss function ℒ_traj^aux, analogous to ℒ_traj, to train the trajectory decoder used by the Recurrent Trajectory Prediction module. Ultimately, our model is trained by minimizing the following loss ℒ, with λ_traj^aux and λ_cl controlling the strength of each loss term: ℒ = ℒ_traj + λ_traj^aux ℒ_traj^aux + λ_cl ℒ_cl. § NUSCENES-TEXT DATASET To the best of our knowledge, currently available driving datasets for prediction tasks lack textual descriptions of the actions of road users during various driving events. While the DRAMA dataset <cit.> offers textual descriptions for agents in driving scenes, it only provides a single caption for one agent in each scene alongside the corresponding bounding box. This setup suits detection and captioning tasks but not prediction tasks. To address this gap, we collect textual descriptions for the nuScenes dataset <cit.>, which provides surround-view camera images, trajectories of road agents, and map data. With its diverse range of typical road-agent activities, nuScenes is widely used in prediction tasks. Textual Description Generation. We employ a three-step process for generating textual descriptions of agents from images, as illustrated in Fig. <ref>. Initially, we employ a pre-trained Vision-Language Model (VLM), BLIP-2 <cit.>. However, it often underperforms in driving-related image-to-text tasks. To address this, we fine-tune the VLM on the DRAMA dataset <cit.>, which contains textual descriptions of agents in driving scenes.
We isolate the bounding box region representing the agent of interest, concatenate it with the original image (Fig. <ref>), and leverage the fine-tuned VLM to generate descriptions for each agent separately in the nuScenes dataset <cit.> as an image captioning task. However, the generated descriptions often lack correct action-related details, providing unnecessary information for prediction. To address shortcomings, we refine generated texts using GPT <cit.>, a well-known Large Language Model (LLM). Inputs include the generated text, agent type, and maneuvering. Rule-based logic determines the agent's maneuvering (, stationary, lane change, turn right). We use prompts to correct inappropriate descriptions, aiming to generate texts that provide prediction-related information on agent type, actions, and rationale. Examples are provided in Fig. <ref>, with additional details (, rule-based logic, GPT prompt) in the supplemental material. Coverage of nuScenes-Text Dataset. In this section, we demonstrate how well our created nuScene-Text Dataset encapsulates the context of the agent, as depicted in Fig. <ref>, and discuss the coverage and benefits of this dataset. Fig. <ref>a represents the contextual information of the agent changing over time in text form. This attribute assists in accurately predicting object trajectories under behavioral context changes. We also demonstrate in <ref>b that distinctive characteristics of each object can be captured (, “A pedestrian waiting to cross the street.”, “A construction worker sitting on the lawn.”) and generate three unique textual descriptions for each object, showcasing diverse perspectives. Additionally, to enhance text descriptions when the VLM generates incorrect agent types, behavior predictions, or harmful information, such as “from the left side to the right side”, which can be misleading due to the directional variation in BEV depending on the camera's orientation, we refine the text using an LLM. This refinement process aims to improve text quality for identifying driving scenes through surround images. <ref>c illustrates this improvement process, ensuring the relevance and accuracy of text by removing irrelevant details (indicated by red) and adding pertinent information (indicated by cyan). r.4 < g r a p h i c s > Frequency of words Dataset Statistics. Our created dataset contains 1,216,206 textual descriptions for 391,732 objects (three for each object), averaging 13 words per description. In Fig. <ref>, we visualize frequently used words, highlighting the dataset's rich vocabulary and diversity. Further, we conduct a human evaluation using Amazon Mechanical Turk (Mturk) to quantitatively evaluate image-text alignments. 5 human evaluators are recruited, and it is performed on a subset of 1,000 randomly selected samples. Each evaluator is presented with the full image, cropped object image, and corresponding text and asked the question: “Is the image well-aligned with the text, considering the reference image?”. The results show that 94.8% of the respondents chose `yes', indicating a high level of accuracy in aligning images with texts. All results are aggregated through a majority vote. Further details on the nuScenes-Text Dataset are provided in the supplemental material. § EXPERIMENTS Dataset. We conduct experiments using the nuScenes dataset <cit.>, which offers two versions: (i) a dataset dedicated to a trajectory prediction task and (ii) a whole dataset. 
While the former focuses solely on single-agent prediction tasks, the latter is more suitable for our purposes; therefore, we report scores for both datasets in our experiments. Further implementation, evaluation, and dataset details can be found in the supplemental material. Qualitative Analysis. Fig. <ref> presents the results of VisionTrap on the nuScenes dataset <cit.>, demonstrating the impact of the Visual Semantic Encoder and the Text-driven Guidance Module on agent trajectory prediction. The top row shows improved results for pedestrians. In (a), while the result without visual information predicts that the man will cross the crosswalk, the prediction with visual information indicates that he will remain stationary because of the red traffic light and because the people are talking to each other rather than trying to cross the road. (b) shows how gaze and body orientation help in predicting the pedestrian's intention to walk towards the crosswalk, and (c) provides the visual context of a man getting into a stationary vehicle, implying that his trajectory will remain stationary as well. The following row exhibits improved prediction results for vehicles. In (d), understanding that the people are standing at a bus stop enables the model to make a reasonable prediction for the bus. (e) gives the visual cue of a turn signal, indicating the vehicle's intention to turn left. Lastly, the visual context in (f) leads to a more stable prediction of the vehicle turning right, as the image clearly shows that the vehicle is oriented to its right. These examples highlight the crucial role of visual data in improving trajectory prediction accuracy, offering insights that cannot be obtained from non-visual data. Further qualitative analysis details are available in the supplemental material. Quantitative Analysis. Tab. <ref> compares our model with other methods for single- and multi-agent prediction. Our query-based prediction model, designed to effectively utilize visual semantic information and the Text-driven Guidance Module and used here as the baseline, achieves the fastest inference speed. We also demonstrate that the Visual Semantic Encoder significantly improves performance, especially when combined with the Text-driven Guidance Module, yielding results comparable to existing single-agent prediction methods with a better miss rate while still maintaining real-time operation. These results suggest that vision data provides additional information inaccessible to non-vision data, and that textual descriptions derived from vision data effectively guide the model. Ablation study of variant models on the nuScenes <cit.> whole dataset (Method / ADE_10 ↓ / FDE_10 ↓ / MR_10 ↓): VisionTrap baseline 0.425 / 0.641 / 0.081; + Map Encoder 0.407 / 0.601 / 0.075; + Visual Semantic Encoder 0.382 / 0.551 / 0.056; + Text-driven Guidance (Ours) 0.368 / 0.535 / 0.051. Since our method employs ego-centric surround-view images, it can effectively predict for all observed agents in the scene. We utilize the nuScenes dataset covering all scenes, enabling a comprehensive evaluation of all observed agents (refer to Tab. <ref>). This demonstrates the contributions of all proposed components to predicting all agents in the scene. Finally, we emphasize that the purpose of this study is not to achieve state-of-the-art performance. Instead, our aim is to demonstrate that vision information, often overlooked in trajectory prediction tasks, can provide additional insights.
These insights are inaccessible from non-vision data and thereby enhance performance in trajectory prediction tasks. This is our original motivation for this work, and the results in <ref>, <ref>, and <ref> provide justification for our method. UMAP Visualization. In <ref>, we observe an overall improvement in the clustering of agent state embeddings when leveraging visual and textual semantics. Furthermore, the textual descriptions of agents within the same cluster are found to describe similar situations, indicating that state embeddings of agents in similar situations lie in a similar region of the embedding space. Performance comparison analyzing the effect of each component of the Text-driven Guidance Module on the nuScenes <cit.> whole dataset (Method / ADE_6 ↓ / FDE_6 ↓ / MR_6 ↓): A. CLIP loss 0.51 / 0.79 / 0.10; B. Ours w/ symmetric loss 0.50 / 0.76 / 0.10; C. Ours w/o refining negative pairs 0.49 / 0.72 / 0.09; D. Ours w/o top-k algorithm 0.46 / 0.67 / 0.08; E. Ours 0.44 / 0.66 / 0.07. Analyzing the Text-driven Guidance Module. To analyze the effect of each component of the proposed Text-driven Guidance Module, we remove each factor and observe how the model performs, as shown in Tab. <ref>. In case A, we use the simple symmetric contrastive loss used in <cit.>, whereas our loss adopts an asymmetric form that computes softmax probabilities in only one direction. B gives the result of incorporating a symmetric loss into our loss design. C shows the result of removing the negative-pair refinement stage, allowing potential false negatives. In D, we skip the ascending sorting and the limiting of the number of negative pairs; removing these steps causes the number of agents considered per scene to vary, leading to different scales of the loss. In the end, our asymmetric contrastive loss, with negative pairs refined and their number constrained, demonstrates the best performance across all metrics. § CONCLUSION In this paper, we introduced a novel approach to trajectory prediction, called VisionTrap, that incorporates visual input from surround-view cameras. This enables the model to leverage visual semantic cues that were previously inaccessible to traditional trajectory prediction methods. Additionally, we utilize text descriptions produced by a VLM and refined by an LLM to provide supervision, guiding the model in learning from the input data. Our thorough experiments demonstrate that both visual inputs and textual descriptions contribute to enhancing trajectory prediction performance. Furthermore, our qualitative analysis shows how the model effectively utilizes these additional inputs. §.§.§ Acknowledgment. This work was supported by the Autonomous Driving Center, Hyundai Motor Company R&D Division. This work was partly supported by IITP under the Leading Generative AI Human Resources Development (IITP-2024-RS-2024-00397085, 10%) grant, and by IITP grants (RS-2022-II220043, Adaptive Personality for Intelligent Agents, 10%, and IITP-2024-2020-0-01819, ICT Creative Consilience program, 5%). This work was also partly supported by the Basic Science Research Program through the NRF funded by the Ministry of Education (NRF-2021R1A6A1A13044830, 10%).
This work also supported by Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2024((International Collaborative Research and Global Talent Development for the Development of Copyright Management and Protection Technologies for Generative AI, RS-2024-00345025, 4%),(Research on neural watermark technology for copyright protection of generative AI 3D content, RS-2024-00348469, 25%)), Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT)(RS-2019-II190079, 1%). We also thank Yujin Jeong and Daewon Chae for their helpful discussions and feedback. splncs04 VisionTrap S. Moon et al. Supplemental Material July 22, 2024 ===================== § DETAILS FOR EVALUATION AND IMPLEMENTATION Dataset. Our proposed approach is developed and evaluated utilizing the widely employed nuScenes <cit.> dataset, which encompasses 1000 diverse scenes from Boston and Singapore. Annotations cover 10 classes for object detection, including car, truck, bus, trailer, construction vehicle, pedestrian, motorcycle, bicycle, barrier, and traffic cone. It also provides ego-centric surround-view images and HD map. In nuScenes, the model is trained with a 2-second history to predict a 6-second future trajectory. Unlike existing works <cit.> that report about single-agent prediction performance, our research takes a different approach. Instead of utilizing only the dataset provided for the prediction task, we used the entire nuScenes dataset for training to conduct a multi-agent prediction approach that considers all agents in a scene simultaneously. Therefore, our nuScenes-Text dataset used for this study is created to cover all scenes in the nuScenes dataset. The Vision Language Model BLIP-2 <cit.> (VLM) used to generate this text is trained on the DRAMA <cit.> dataset, which provides an image of the driving environment, bounding box pointing to specific agent, and text representing this agent. To accurately use textual descriptions obtained from fine-tuned VLM, we refine the descriptions using GPT <cit.>. We also present metrics for all agents and metrics specifically for agents involved in the prediction task, offering a comprehensive evaluation. Evaluation Metrics. Our model is evaluated using standard metrics for trajectory prediction, including minimum Average Displacement Error (ADE), minimum Final Displacement Error (FDE), and Miss Rate (MR). These metrics quantify the average and final displacement errors between the true trajectory and the best prediction sample. MR further denotes the percentage of scenarios where the distance between the endpoint of the true trajectory and the best prediction exceeds a 2m threshold. ADE = 1/T∑_t=T_curr+1^T_FinŶ_(k)^t - Y^t_2 FDE = Ŷ_(k)^T_Fin - Y^T_Fin_2 Here, Ŷ_(k)^t denotes the predicted position of the agent at timestep t in the (k)-th mode, and Y^t represents the ground truth position at timestep t. The (k) represents the mode with the smallest error when compared to the ground truth, while T indicates the number of timesteps to be predicted. Additionally, T_Fin represents the timestep at which the prediction concludes, while T_curr indicates the current timestep. Implementation Details. We train the model for 48 epochs using AdamW optimizer <cit.> and four RTX 3090 Ti GPUs. The model has 32 batch sizes, 5 × 10^-4 initial learning rates, 1× 10^-4 weight decay, and 0.1 dropout rates. 
To manage the learning rate, we adopt the cosine annealing scheduler <cit.>. For consistency, we set the number of offsets for deformable attention in the Scene-Agent Interaction Module, denoted as O, to 4. Additionally, augmentation techniques, including rotation within (-22.5, 22.5) degrees and excluding a random subset of agents (10% of all agents in a scene) from the loss calculation, are used to prevent overfitting and increase the generalization performance of the model. § MORE DETAIL FOR NUSCENES-TEXT DATASET Prompt Engineering for LLM. We utilize the Large Language Model (LLM) GPT to refine the textual descriptions obtained from the VLM, addressing issues stemming from the domain gap between datasets or from parts that are missing entirely, as well as inaccurate location information caused by the characteristics of surround-view images (see <ref>). To enhance the quality of the pseudo-text, we meticulously design the prompt for the LLM, as shown in <ref>. The primary challenges in this improvement process are i) removing inaccurate location information such as `left,' `right lane,' or `ego lane,' caused by the characteristics of surround-view images (see <ref>), and ii) refining parts that are incorrectly predicted or completely missing due to domain gaps between datasets. Given that the nuScenes dataset includes not only a front view but also a surround view, including the back view, i) is crucial to avoid confusing the model with these location details. Additionally, for ii), we explicitly integrate task details such as maneuvering and agent types to eliminate hallucinations and generate clear information. Finally, we include examples of both effective (`good') and ineffective (`bad') outputs to optimize the capabilities of the LLM. Maneuvering extraction Algorithm. To integrate information about the intention of each agent into our generated text dataset, we utilize the maneuvering attribute. We classify the maneuvering of the agent based on its actual future trajectory: maneuvering is defined by comparing the initial position and orientation with the final position and orientation. The generated maneuvering information is provided to the LLM to offer insights into the agent's intention. Therefore, the refined text, which includes information on the agent's characteristic points, current movement, and future intention, can be utilized, thereby contributing to enhancing the performance of the model. The maneuvering extraction algorithm can be observed in <ref>; a simplified sketch of this rule-based classification is given below.
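Since the rule-based logic itself is only referenced and not reproduced in the text, the Python sketch below is a plausible reconstruction rather than the authors' exact algorithm: the displacement and heading-change thresholds, and the exact label set, are assumptions chosen to match the maneuvers mentioned above (stationary, lane change, turn right, and so on).

import numpy as np

def classify_maneuver(traj_xy, heading_rad,
                      still_thresh=1.0, turn_thresh=np.deg2rad(30),
                      lane_offset_thresh=1.5):
    """Classify an agent's maneuver from its future trajectory.
    traj_xy: [T, 2] future positions; heading_rad: [T] yaw angles."""
    disp = np.linalg.norm(traj_xy[-1] - traj_xy[0])
    if disp < still_thresh:
        return "stationary"

    # Signed heading change between the first and last time step.
    dyaw = np.arctan2(np.sin(heading_rad[-1] - heading_rad[0]),
                      np.cos(heading_rad[-1] - heading_rad[0]))
    if dyaw > turn_thresh:
        return "turn left"
    if dyaw < -turn_thresh:
        return "turn right"

    # Lateral offset of the endpoint, measured in the initial heading frame.
    c, s = np.cos(heading_rad[0]), np.sin(heading_rad[0])
    dx, dy = traj_xy[-1] - traj_xy[0]
    lateral = -s * dx + c * dy
    if abs(lateral) > lane_offset_thresh:
        return "lane change"
    return "go straight"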
Each agent has three distinct versions of textual descriptions, and these descriptions are shown for each agent within its bounding box. The description generated through the VLM in the top center image (CAM_FRONT) includes location information based on the perspective of the ego vehicle (highlighted in red). However, this may differ from the perspective of other vehicles and pedestrians. Additionally, the location data highlighted in red in the top right image (CAM_FRONT_RIGHT) indicates the position of a person located on the left side of the image, but from the perspective of the autonomous vehicle, it may inaccurately depict the location (from the perspective of the autonomous vehicle, the person is positioned to the right). Such inaccuracies in image-based location data have the potential to compromise the trajectory prediction functionality of the model. This issue is addressed by removing incorrect information through the LLM, and the improvements are clearly evident in the refined captions. Through this, we demonstrate the capability to generate accurate textual descriptions for all objects visible in surround-view images. Fig. <ref> provides additional examples of unique situations that can be captured by camera images. Surprisingly, the textual descriptions cover rainy scenarios and can also describe situations where camera data is compromised, such as low-light conditions. In addition, the text descriptions capture situational information and details well, such as pedestrians holding umbrellas, unloading from trucks, people riding bicycles, a driver getting out of a vehicle, and pedestrians sitting on concrete blocks. Please refer to the images and captions together. § FURTHER RESULTS Additional Quantitative Results. In Table <ref>, results for various agent types on the whole nuScenes dataset are presented. VisionTrap conducts predictions for both vehicles and pedestrians, showcasing information for both types. Model A employs only observed trajectories, Model B incorporates map data in addition to trajectory information, and Model C represents the results of VisionTrap. The outcomes demonstrate that the Visual Semantic Encoder and Text-driven Guidance Module contribute to improved performance across all agents. Additional Qualitative Examples. We present additional qualitative examples obtained from various scenes. The examples are selected from the nuScenes dataset. Results from Fig. <ref> to Fig. <ref> illustrate predictions without and with our Visual Semantic Encoder and Text-driven Guidance Module. Refer to the respective captions for explanations of the figures.
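As referenced in the Maneuvering Extraction Algorithm description above, the following is a simplified Python sketch of classifying an agent's maneuver from its initial and final pose; the label set and the heading-change/stop thresholds are illustrative assumptions on our part, not the exact values used in the algorithm.

```python
import numpy as np

# Illustrative thresholds (assumptions): heading change in degrees, displacement in meters.
TURN_DEG = 30.0
STRAIGHT_DEG = 10.0
STOP_DIST = 1.0

def classify_maneuver(first_xy, last_xy, first_yaw, last_yaw):
    """Classify a maneuver by comparing the initial and final position/orientation."""
    displacement = float(np.linalg.norm(np.asarray(last_xy) - np.asarray(first_xy)))
    if displacement < STOP_DIST:
        return "stationary"
    # Wrap the heading change to (-180, 180] degrees before thresholding.
    dyaw = np.degrees((last_yaw - first_yaw + np.pi) % (2.0 * np.pi) - np.pi)
    if abs(dyaw) < STRAIGHT_DEG:
        return "go straight"
    if dyaw >= TURN_DEG:
        return "turn left"
    if dyaw <= -TURN_DEG:
        return "turn right"
    return "slight turn"
```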
http://arxiv.org/abs/2407.12190v1
20240716213451
On a possible $^{3}_φ$H hypernucleus with HAL QCD interaction
[ "Igor Filikhin", "Roman Ya. Kezerashvili", "Branislav Vlahovic" ]
nucl-th
[ "nucl-th", "hep-ph" ]
^2New York City College of Technology, The City University of New York, Brooklyn, NY, USA ^3The Graduate School and University Center, The City University of New York, New York, NY, USA ^4Long Island University, Brooklyn, NY, USA § ABSTRACT Within the framework of the Faddeev formalism in configuration space, we investigate bound states in the ϕ NN system with total isospin T=0 and T=1. The recently proposed lattice HAL QCD ϕ N potential in the ^4S_3/2 channel does not support either ϕ N or ϕ NN bound states. The HAL QCD ϕ N potential in the ^2S_1/2 channel suggests the bound states for ϕ N and ϕ NN (S=0) systems. However, the binding energies are highly sensitive to variations of the enhancement factor β, and the ϕ NN system is extremely strongly bound in the state S=0. Considering a spin-averaged potential for the state S=1 yields a bound state for ^3_ϕH (S=1) hypernucleus with the binding energy (BE) 14.9 MeV when β = 6.9. The evaluation of the BE for the S=1, T=1 three-body state results in 5.47 MeV. Additionally, calculations using our approach confirm the bound states for the ϕ NN (S=2,T=0 and S=1, T=1) system previously predicted with the Yukawa-type potential motivated by the QCD van der Waals attractive force, mediated by multi-gluon exchanges. On a possible ^3_ϕH hypernucleus with HAL QCD interaction I. Filikhin^1, R. Ya. Kezerashvili^2,3,4, and B. Vlahovic^1 =============================================================== § INTRODUCTION Since the beginning of the new millennium, studying the composite system from two nucleons and Λ-, Ξ-, Ω-hyperon or ϕ-meson has attracted intense research interest in many theoretical works <cit.>. Unlike the case of the NN interactions, a ϕ-meson nucleon interaction is not well determined due to an insufficient number of scattering data. It is one of the open and debated questions in the strangeness nuclear physics concerns the possible existence of a ϕ N bound state. The recent ALICE Collaboration measurement of the ϕ N correlation function <cit.> led to the determination of the ϕ N channel scattering length with a large real part corresponding to an attractive interaction. This represents the first experimental evidence of the attractive strong interaction between a proton and a ϕ meson. Interestingly, the absolute value of the obtained scattering length is much larger than what has been measured in earlier ϕ-meson photoproduction experiments <cit.>. It has been suggested by Brodsky, Schmidt, and de Teramond <cit.> that the QCD van der Waals interaction, mediated by multi-gluon exchanges, is dominant when the interacting two color singlet hadrons have no common quarks. Assuming that the attractive QCD van der Waals force dominates the ϕ N interaction since the ϕ-meson is almost a pure ss̅ state, following <cit.>, Gao, et al. <cit.> suggested a Yukawa-type attractive potential. Using the variational method, they predicted a binding energy (BE) of 1.8 MeV for the ϕ-N system. In <cit.> the data are employed to constrain the parameters of phenomenological Yukawa-type potentials. The resulting values for the Yukawa-type potential, V_ϕ N(r)=-A e^-α r/r, yields A = 0.021±0.009(stat.)±0.006(syst.) and α = 65.9±38.0(stat.)±17.5(syst.) MeV. Predictions of possible ϕ N bound states employing the same kind of potential with parameters A = 1.25 and α = 600 MeV <cit.> are therefore incompatible with measurement <cit.>. Recently, Lyu et al. 
<cit.> presented the first results on the interaction between the ϕ-meson and the nucleon based on the (2+1)-flavor lattice QCD simulations with nearly physical quark masses. The HAL QCD potential is obtained from first principles (2 +1)-flavor lattice QCD simulations in a large spacetime volume, L^4 =(8.1 fm)^4, with the isospin-averaged masses of π, K, ϕ, and N as 146, 525, 1048, and 954 MeV, respectively, at a lattice spacing, a = 0.0846 fm. Let us mention that such simulations together with the HAL QCD method enable one to extract the YN and YY interactions with multiple strangeness, e.g., ΛΛ, Ξ N <cit.>, Ω N <cit.>, ΩΩ <cit.>, and Ξ N <cit.>. Using the HAL QCD method, based on the spacetime correlation of the ϕ N system in the spin 3/2 channel, the authors suggested fits of the lattice QCD potential in the ^4S_3/2 channel. In the following, we employ the spectroscopic notation ^2s+1S_J to classify the S-wave ϕ N interaction, where s and J stand for total spin, and total angular momentum. It was found that the simple fitting functions such as the Yukawa form cannot reproduce the lattice data <cit.>. The lattice calculations for the ϕ N interaction in the ^4S_3/2 channel are used in <cit.> to constrain the spin 1/2 counterpart (^2S_1/2) from the fit of the experimental ϕ N correlation function measured by the ALICE Collaboration <cit.>. The mesic ϕ NN system is considered in the framework of Faddeev equations in the differential form <cit.>, using the variational folding method <cit.>, and a two-variable integro-differential equation describing bound systems of unequal mass particles <cit.>. Calculations were employed ϕ N potential from <cit.>. The binding energy of ϕ d hypernucleus was calculated by employing HAL QCD potential <cit.> using the Schrodinger equation for Faddeev components expanded in terms of hyperspherical functions <cit.>. The binding energies reported in Refs. <cit.> are in the range of ∼ 6-39 MeV. Motivated by the above discussion and the availability of newly suggested HAL QCD potentials in the ^2S_1/2 and ^4S_3/2 channels with a minimal and maximal spin, respectively, we present calculations for the binding energy for the ϕ N and ϕ NN in the framework of the Faddeev equations in configuration space. We compare our results with other calculations as well. The ϕ NN represent a three-particle system. The three-body problem can be solved in the framework of the Schrödinger equation or using the Faddeev approach in the momentum <cit.> or configuration <cit.> spaces. With regards to the Faddeev equations in the configuration space, Jacobi coordinates are introduced to describe the ϕ NN system. The mass-scaled Jacobi coordinates 𝐱_i and 𝐲_i are expressed via the particle coordinates 𝐫_i and masses m_i in the following form: 𝐱_i=√(2m_km_l/m_k+m_l)(𝐫_k- 𝐫_l), 𝐲_i=√(2m_i(m_k+m_l)/ m_i+m_k+m_l)(𝐫_i-m_k𝐫_k+m_l𝐫 _l)/m_k+m_l). The orthogonal transformation between three different sets of the Jacobi coordinates has the form: ( [ 𝐱_i; 𝐲_i ]) =( [ C_ik S_ik; -S_ik C_ik ]) ( [ 𝐱_k; 𝐲_k ]) , C_ik^2+S_ik^2=1, k≠ i, C_ii=1, where C_ik=-√(m_im_k/(M-m_i)(M-m_k)), S_ik=(-1)^k-isign(k-i)√(1-C_ik^2). Here, M is the total mass of the system. Let us definite the transformation h_ik(𝐱,𝐲) based on Eq. (<ref>) as h_ik(𝐱,𝐲)=(C_ik𝐱+ S_ik𝐲, -S_ik𝐱+ C_ik𝐲). 
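As a numerical illustration of the mass-scaled Jacobi coordinates and the rotation coefficients defined above, the following Python sketch evaluates x_i, y_i, and C_ik for one partition i(kl) of the three-body system; particle masses and positions are placeholder inputs and the choice of units is left to the caller.

```python
import numpy as np

def mass_scaled_jacobi(r, m, i, k, l):
    """Mass-scaled Jacobi coordinates x_i, y_i for partition i(kl).
    r: (3, 3) array of particle position vectors; m: (3,) array of masses."""
    x_i = np.sqrt(2.0 * m[k] * m[l] / (m[k] + m[l])) * (r[k] - r[l])
    cm_kl = (m[k] * r[k] + m[l] * r[l]) / (m[k] + m[l])      # center of mass of the pair (k, l)
    y_i = np.sqrt(2.0 * m[i] * (m[k] + m[l]) / (m[i] + m[k] + m[l])) * (r[i] - cm_kl)
    return x_i, y_i

def c_ik(m, i, k):
    """Rotation coefficient C_ik relating two Jacobi coordinate sets."""
    M = m.sum()
    return -np.sqrt(m[i] * m[k] / ((M - m[i]) * (M - m[k])))
```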
In the Faddeev method in configuration space, alternatively to the finding the wave function of the three-body system using the Schrödinger equation, the total wave function is decomposed into three components <cit.>: Ψ (𝐱_1,𝐲_1)=Φ_1(𝐱_1,𝐲 _1)+Φ_2(𝐱_2,𝐲_2)+Φ_3(𝐱_3, 𝐲_3). Each component depends on the corresponding coordinate set, which are expressed in terms of the chosen set of mass-scaled Jacobi coordinates. The transformation (<ref>) allows us to write the Faddeev equations as a system of differential equations for each Φ_i(𝐱_i,𝐲_i) component in compact form. The components Φ_i(𝐱_i,𝐲_i) satisfy the Faddeev equations <cit.> that can be written in the coordinate representation as: (H_0+V_i(C_ik𝐱)-E)Φ_i(𝐱,𝐲)=-V_i(C_ik𝐱)∑_l≠ iΦ_l(h_il(𝐱,𝐲)). Here H_0=-(Δ _𝐱+Δ _𝐲) is the kinetic energy operator with ħ ^2=1 and V_i(𝐱) is the interaction potential between the pair of particles (kl), where k,l≠ i. The system of Eqs. (<ref>) written for three nonidentical particles can be reduced to a simpler form for a case of two identical particles. The Faddeev equations in configuration space for a three-particle system with two identical particles are given in our previous studies <cit.> . In the case of the ϕ NN system, the total wave function of the system is decomposed into the sum of the Faddeev components Φ_1 and Φ_2 corresponding to the (NN)ϕ and (ϕ N)N types of rearrangements: Ψ =Φ_1+Φ_2-PΦ_2, where P is the permutation operator for two identical particles. Therefore, the set of the Faddeev equations (<ref>) is rewritten as follows <cit.>: [ (H_0+V_NN-E)Φ_1=-V_NN(Φ_2-PΦ_2),; (H_0+V_ϕ N-E)Φ_2=-V_ϕ N(Φ_1-PΦ_2). ] In Eqs. (<ref>) V_NN and V_ϕ N are the interaction potentials between two nucleons and the ϕ-meson and nucleon, respectively. The spin-isospin variables of the system can be represented by the corresponding basis elements. After the separation of the variables, one can define the coordinate part the Ψ^R of the wave function Ψ =ξ _isospin⊗η _isospin⊗Ψ ^R. The details of our method for the solution of the system of differential equations (<ref>) are given in <cit.>. In Ref. <cit.>, the interaction between the ϕ-meson and the nucleon is studied based on the (2+1)-flavor lattice QCD simulations with nearly physical quark masses. The authors found that the ϕ N correlation function is mostly dominated by the elastic scattering states in the ^4S_3/2 channel without significant effects from the two-body Λ K(^2D_3/2) and Σ K(^2D_3/2) and the three-body open channels including ϕ N→Σ ^∗K,Λ (1405)K→Λπ K,Σπ K. The fit of the lattice QCD potential by the sum of two Gaussian functions for an attractive short-range part and a two-pion exchange tail at long distances with an overall strength proportional to m_π^4n <cit.>, has the following functional form in the ^4S_3/2 channel with the maximum spin 3/2 <cit.>: V_ϕ N^3/2(r)=∑_j=1^2a_jexp[ -( r/b_j) ^2] +a_3m_π^4F(r,b_3)( e^-m_πr/ r) ^2, with the Argonne-type form factor <cit.> F(r,b_3)=(1-e^-r^2/b_3^2)^2. For comparison the lattice QCD ϕ N potential is also parameterized using three Gaussian functions <cit.>: V_Gϕ N(r)=∑_j=1^3a_jexp[ -( r/b_j) ^2] . The HAL QCD potential in the ^2S_1/2 channel with a minimum spin of 1/2 <cit.> has a much stronger attractive β-enhanced short-range part and the same two-pion exchange long-range tail as in the ^4S_3/2 channel. 
The real part of the potential in the ^2S_1/2 channel reads <cit.> V_ϕ N^1/2(r)=β( a_1e^-r^2/b_1^2+a_2e^-r^2/b_2^2) +a_3m_π^4F(r,b_3)( e^-m_πr/r) ^2, where the factor β =6.9_-0.5^+0.9(stat.)_-0.1^+0.2(syst.). The other values of the parameters are common in both ^4S_3/2 and ^2S_1/2 channels <cit.>. The imaginary part of ϕ N potential related to the 2nd-order kaon exchange and corresponds to absorption processes. A proportionality coefficient for this part is γ =0.0_-3.6^+0.0 (stat.)_-0.18^+0.0(syst.) <cit.>. We present the results of calculations for the feasibility of expected bound states for ϕ N and ϕ NN systems. For calculations of the BEs of these systems, we use the HAL QCD ϕ N potential in the ^4S_3/2 and ^2S_1/2 channels with the maximum and minimum spins, respectively. We employ the same NN MT-I-III potential <cit.> as in <cit.> for the comparison of the results. The input parameters for potentials are listed in Table <ref>. For comparison, we also perform BE calculations for ϕ N and ϕ NN systems with previously suggested Yukawa-type ϕ N potential with parameters from <cit.> and <cit.>. The spin configurations of the ϕ NN system are illustrated in Fig. <ref>(a). Here, we present two configurations for isospin state T=0 which means that the considered system includes the deuteron, d, which is corresponding to the NN(s=1) state. There are two different components of the ϕ N potential. For calculations for the S=1 state we used an averaged over spin variables potential. To acquire the overall ϕ N potential, the spin-averaged interaction for the ^4S_3/2 and ^2S_1/2 channel potentials is defined as <cit.> V̅_ϕ N=1/3V_ϕ N^1/2 + 2/3V_ϕ N^3/2. According to Eq. (<ref>), the configuration S=1 becomes to the configuration S=2 when components of the ϕ N potential are equal. For example, it can be the 3/2 ϕ N component. The configuration for S=0 and S=1, T=1 states are presented in Fig. <ref>(b). First, let us consider the ϕ N system. Results of calculations for the two-body binding energy, B_2, scattering length, a_ϕ N, and effective radius, r_ϕ N, for ϕ N are presented in Table <ref> for the ^4S_3/2 and ^2S_1/2 channels. Although the HAL QCD ϕ N potential in the ^4S_3/2 channel is found to be attractive for all distances and reproduces a two-pion exchange tail at long distances, no bound ϕ N state is found with this interaction. The ϕ N system is strongly bound with the HAL QCD potential in the ^2S_1/2 channel with the reasonable scattering length when the short-range attractive part is enchanted with factor β = 6.9 suggested in <cit.>. Let us mention that the ^2S_1/2 state binding energy is very sensitive to the variation of β within the statistical and systematic error margins reported in <cit.>. In Table <ref> we present the numerical results for the ϕ NN system obtained with the HAL QCD interactions and a Yukawa-type potential with parameterizations from <cit.> and <cit.>. The calculations of the BEs with the Yukawa-type potential motivated by the QCD van der Waals attractive force mediated by multi-gluon exchanges, led to the same results as previously reported in <cit.>. Our calculations indicate that neither HAL QCD interaction in the ^4S_4/2 channel nor the Yukawa type interaction with parameters <cit.> do not support the existence of the S=2 bound state. Thus, the HAL QCD interaction in the ^4S_3/2 channel with the maximum spin 3/2 suggests no bound state for ^3_ϕH hypernucleus, in contrast to the binding energy range reported in <cit.>, which is 6.7-7.3 MeV. 
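For reference, the functional forms of the ^4S_3/2 and ^2S_1/2 potentials given above and their spin-averaged combination can be evaluated with a short script like the one below; the fit parameters a_j, b_j come from Table <ref> and are therefore left as placeholders here, the pion mass is the lattice value of 146 MeV quoted earlier, and the overall unit conventions follow that table rather than being fixed in this sketch.

```python
import numpy as np

HBARC = 197.327               # MeV fm
M_PI = 146.0 / HBARC          # lattice pion mass (146 MeV) expressed in fm^-1

def argonne_form_factor(r, b3):
    """Argonne-type form factor F(r, b3) = (1 - exp(-r^2/b3^2))^2."""
    return (1.0 - np.exp(-(r / b3) ** 2)) ** 2

def v_phi_n_3half(r, a, b):
    """V_phiN^{3/2}(r): two attractive Gaussians plus the two-pion exchange tail."""
    gauss = a[0] * np.exp(-(r / b[0]) ** 2) + a[1] * np.exp(-(r / b[1]) ** 2)
    tail = a[2] * M_PI ** 4 * argonne_form_factor(r, b[2]) * (np.exp(-M_PI * r) / r) ** 2
    return gauss + tail

def v_phi_n_1half(r, a, b, beta=6.9):
    """V_phiN^{1/2}(r): beta-enhanced short-range part with the same two-pion tail."""
    gauss = beta * (a[0] * np.exp(-(r / b[0]) ** 2) + a[1] * np.exp(-(r / b[1]) ** 2))
    tail = a[2] * M_PI ** 4 * argonne_form_factor(r, b[2]) * (np.exp(-M_PI * r) / r) ** 2
    return gauss + tail

def v_phi_n_spin_averaged(r, a, b, beta=6.9):
    """Spin-averaged interaction: (1/3) V^{1/2} + (2/3) V^{3/2}."""
    return v_phi_n_1half(r, a, b, beta) / 3.0 + 2.0 * v_phi_n_3half(r, a, b) / 3.0
```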
Results obtained for the BE of ϕ NN 22.42 MeV and 38.04 MeV (t=0) in the framework of our approach utilizing the Yukawa-type ϕ N potential <cit.> and the singlet and triplet spin NN interaction <cit.>, respectively, confirm calculations <cit.> and are in good agreement within ± 1.5 MeV. Based on our calculations, the HAL QCD interaction in the ^4S_3/2 channel does not provide enough attractiveness to bind a ϕ-meson onto a nucleon or deuteron to form a bound state. Conversely, employing the HAL QCD ϕ N interaction in the ^2S_1/2 channel with minimal spin 1/2 results in the bound ϕ NN, although the BE is highly sensitive to the variation of the factor β, and the ϕ NN system is extremely strongly bound in the state S=0. Employing the spin-averaged potential (<ref>), we consider both the HAL QCD potentials in the ^2S_1/2 and ^4S_3/2 channels when the factor β = 6.9. This leads to the numerical value of the binding energy 14.9 MeV for the ^3_ϕH hypernucleus in the spin state S=1. Changing the β factor to β = 6.0, we obtained for the ϕ NN BE 13.09 MeV, albeit with a larger scattering length. It is important to note that varying the β factor within the margin of the error leads to larger and less realistic BEs, especially for the S=0 state as shown in Table <ref>. In conclusion, we employ the HAL QCD ϕ N potential in the ^2S_1/2 and ^4S_3/2 channels with the maximum and minimum spin, respectively, in the framework of Faddeev equations in configuration space to evaluate the binding energy of the ϕ NN system. The HAL QCD ϕ N potential in the ^4S_3/2 channel does not support bound states for either ϕ N or ϕ NN, although it exhibits attraction. Conversely, employing the HAL QCD ϕ N potential in the ^2S_1/2 channel yields bound states for both ϕ N and ϕ NN. The binding energies of these systems are notably sensitive to variations in the enhancement of the short-range attractive part, parameterized by the factor β. Considering both potentials, we find binding energies of 5.47 MeV and 14.9 MeV for the states S=1, T=0 and S=1, T=1 (with singlet and triplet components of the NN MT I-III potential), respectively, when β = 6.9. Our calculations confirm the existence of S=2 bound states for the ϕ NN system previously predicted within the Faddeev equations in the differential form <cit.> and theoretical formalism <cit.> where utilized ϕ N potential <cit.>. The presented analysis demonstrates the possible existence of ^3_ϕH hypernucleus. § ACKNOWLEDGMENTS This work is supported by the City University of New York PSC CUNY Research Award # 66109-00 54 and US National Science Foundation HRD-1345219 award, the DHS (summer research team), and the Department of Energy/National Nuclear Security Administration Award Number DE-NA0004112. 99 Filikhin2000I. N. Filikhin and S. L. Yakovlev, Calculation of the binding energy and of the parameters of low-energy scattering in the Λ np system, Phys. Atom. Nucl., 63, 223 (2000). FG2002I. N. Filikhin and A. Gal, Faddeev-Yakubovsky Search for _ΛΛ^4H, Phys. Rev. Lett. 89, 172502 (2002). BSS V. B. Belyaev, W. Sandhas, and I. I. Shlyk, 3- and 4- body meson- nuclear clusters, ArXiv: nucl-th0903.1703 Bel2008 V. B. Belyaev, W. Sandhas, and I. I. Shlyk, New nuclear three-body clusters ϕ NN, Few Body Syst. 44 347. (2008). Sofi S. A. Sofianos, G. J. Rampho, M. Braun and R. M. Adam, The ϕ-NN and ϕϕ–NN mesic nuclear systems, J. Phys. G: Nucl. Part. Phys. 37, 085109 (2010). GV15 H. Garcilazo and A. Valcarce, Light Ξ hypernuclei. Phys. Rev. C 92, 014004 (2015). GV2016H. Garcilazo and A. 
Valcarce, Deeply bound Ξ tribaryon, Phys. Rev. C 93, 034001 (2016). GVV16 H. Garcilazo, A. Valcarce, and J. Vijande, Maximal isospin few-body systems of nucleons and Ξ hyperons. Phys. Rev. C 94, 024002 (2016). FSV17 I. Filikhin, V. M. Suslov and B. Vlahovic, Faddeev calculations for light Ξ-hypernuclei, Mat. Model. Geom., 5, 1 (2017). GV0 H. Garcilazo and A. Valcarce, Ω d bound state, Phys. Rev. C 98, 024002 (2018). GV H. Garcilazo and A. Valcarce, Ω NN and ΩΩ N states. Phys. Rev. C 99, 014001 (2019). Gibson2020B. F. Gibson and I. R. Afnan, Exploring the unknown Λ n interaction, SciPost Phys. Proc. 3, 025 (2020). HiyamaXi2020E. Hiyama, K. Sasaki, T. Miyamoto, T. Doi, T. Hatsuda, Y. Yamamoto, and Th. A. Rijken, Possible lightest Ξ hypernucleus with modern Ξ N interactions, Phys. Rev. Lett. 124, 092501 (2020). EF21 F. Etminan and M.M. Firoozabadi, Ω-deuteron Interaction in Folding Model, arXiv:1908.11484v5 [nucl-th] Zhang2022L. Zhang, Song Zhang, and Y-G. Ma Production of Ω NN and ΩΩ N in ultra-relativistic heavy-ion collisions. Eur. Phys. J. C 82, 416 (2022). GV2022 H. Garcilazo and A. Valcarce, (I,J^P)=(1,1/2^+) Σ NN quasibound state, Symmetry 14, 2381 (2022). ESE2023 F. Etminan, Z. Sanchuli, M. M. Firoozabadi, Geometrical properties of Ω NN three-body states by realistic NN and first principles lattice QCD Ω N potentials Nucl. Phys. A 1033 122639 (2023). EA24 F. Etminan, A. Aalimi, Examination of the ϕ-NN bound-state problem with lattice QCD N-ϕ potentials, Phys. Rev. C 109, 054002 (2024). ALICE2021 S. Acharya et al. [ALICE], Experimental evidence for an attractive p-ϕ interaction, Phys. Rev. Lett. 127, 172301 (2021). Chang2008 W. C. Chang, K. Horie, S. Shimizu, M. Miyabe, D. S. Ahn, J. K. Ahn, et al., Forward coherent ϕ-meson photoproduction from deuterons near threshold, Phys. Lett. B 658, 209-215 (2008). Strakovsky2020 I. I. Strakovsky, L. Pentchev and A. Titov, Comparative analysis of ω p, ϕ p and J/ψ p scattering lengths from A2, CLAS, and GlueX threshold measurements, Phys. Rev. C 101, 045201 (2020). Brodsky1990S. J. Brodsky, I. A. Schmidt, and G. F. de Teramond, Nuclear-bound quarkonium, Phys. Rev. Lett. 64, 1011 (1990). G2001 H. Gao, T.-S. H. Lee, V. Marinov, φ - N bound state. Phys. Rev. C 63, 022201 (2001). Lyu22 Y. Lyu, T. Doi, T. Hatsuda, Y. Ikeda, J. Meng, K. Sasaki, and T. Sugiura, Phys. Rev. D 106, 074507 (2022). Sasaki2020K.Sasaki, et al., (HAL QCD Collaboration)ΛΛ and NΞ interactions from lattice QCD near the physical point. Nucl. Phys. A 998, 121737 (2020). Iritani2019 T. Iritani et al., N Ω dibaryon from lattice QCD near the physical point. Phys. Lett. B 792, 284 (2019). OmegaOmega2018 S. Gongyo, et al., Most strange dibaryon from lattice QCD, Phys Rev. Lett. 120, 212001 (2018). Chizzali2024E. Chizzali, Y. Kamiya, R. Del Grande, T.i Doi, L. Fabbietti, T. Hatsuda, and Y. Lyu, Indication of a p-ϕ bound state from a correlation function analysis, Phys. Lett. B 848, 138358 (2024). Fad L. D. Faddeev, Scattering theory for a three-particle system. ZhETF 39, 1459 (1961); [Sov. Phys. JETP 12, 1014 (1961)]. Fad1 L. D. Faddeev, Mathematical problems of the quantum theory of scattering for a system of three particles. Proc. Math. Inst. Acad. Sciences USSR 69, 1-122 (1963). Noyes1968 H. P. Noyes and H. Fiedeldey, In: Three-Particle Scattering in Quantum Mechanics (Gillespie, J., Nutall, J., eds.), p. 195. New York, Benjamin, 1968. Gignoux1974 C. Gignoux, C. Laverne, and S. P. Merkuriev, Solution of the Three-Body Scattering Problem in Configuration Space, Phys. Rev. Lett. 
33, 1350 (1974). FM L. D. Faddeev and S. P. Merkuriev, Quantum Scattering Theory for Several Particle Systems (Kluwer Academic, Dordrecht, 1993) pp. 398. Noyes1969 H. P. Noyes In: Three-Body Problem in Nuclear and Particle Physics (Proceedings of the 1st Int. Conf., Birmingham, 1969 ), (McKee, J. S. C., Rolph, P. M., eds.), p. 2. Amsterdam, North-Holland, 1970. K86 A.A. Kvitsinsky, Yu.A. Kuperin, S.P. Merkuriev, A.K. Motovilov and S.L. Yakovlev, N-body quantum problem in configuration space. Fiz. Elem. Chastits At. Yadra 17, 267 (1986) Soviet Journal of Particles and Nuclei, (in Russian);http://www1.jinr.ru/Archive/Pepan/1986-v17/v-17-2.htm KezPRD2020I. Filikhin, R. Ya. Kezerashvili, V. M. Suslov, Sh. M. Tsiklauri, and B. Vlahovic, Three-body model for K(1460) resonance, Phys. Rev. D 102, 094027 (2020). Kez2018PL I. Filikhin, R. Ya. Kezerashvili, and B. Vlahovic, On binding energy of trions in bulk materials, Phys. Lett. A 382, 787 (2018). FilKez2018I. Filikhin, R. Ya. Kezerashvili, V. M. Suslov, and B. Vlahovic, On mass polarization effect in three-body nuclear systems, Few-Body Syst. 59, 33 (2018). KezJPG2024I. Filikhin, R. Ya. Kezerashvili, and B. Vlahovic, The charge and mass symmetry breaking in the KKK̅ system, J. Phys. G: Nucl. Part. Phys. 51, 035102 (2024). KezJPG2016R. Ya. Kezerashvili, Sh. M. Tsiklauri, I. Filikhin, V. M. Suslov, and B. Vlahovic, Three-body calculations for the K^-pp system within potential models, J. Phys. G: Nucl. Part. Phys. 43, 065104 (2016). Kreinm4J. Tarrús Castellà and G. a. Krein, Effective field theory for the nucleon-quarkonium interaction, Phys. Rev. D 98, 014029 (2018). Wiringa95R. B. Wiringa, V. G. J. Stoks, and R. Schiavilla, Accurate nucleon-nucleon potential with charge-independence breaking, Phys. Rev. C 51, 38 (1995). Malfliet1969R. Malfliet and J. Tjon, Solution of the Faddeev equations for the triton problem using local two-particle interactions, Nucl. Phys. A 127 161–168 (1969). https://doi .org /10 .1016 /0375 -9474(69 )90775 -1, https://www.sciencedirect .com /science /article /pii /0375947469907751. MTcorrJ. L. Friar, B. F. Gibson, G. Berthold, W. Glockle, Th. Cornelius, H. Witala, J. Haidenbauer, Y. Koike, G. L. Payne, J. A. Tjon, and W. M. Kloet, Benchmark solutions for a model three-nucleon scattering problem, Phys. Rev. C 42, 1838 (1990).
http://arxiv.org/abs/2407.13757v1
20240718175555
Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models
[ "Zhuo Chen", "Jiawei Liu", "Haotan Liu", "Qikai Cheng", "Fan Zhang", "Wei Lu", "Xiaozhong Liu" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CR" ]
Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models Zhuo Chen, Jiawei Liu, Haotan Liu, Qikai Cheng, Fan Zhang, Wei Lu, Xiaozhong Liu July 22, 2024 ====================== § ABSTRACT Retrieval-Augmented Generation (RAG) is applied to address the hallucination problem and real-time knowledge constraints of large language models, but it also introduces vulnerabilities to retrieval corruption attacks. Existing research mainly explores the unreliability of RAG in white-box settings and closed-domain QA tasks. In this paper, we aim to reveal the vulnerabilities of RAG models when faced with black-box attacks for opinion manipulation. We explore the impact of such attacks on user cognition and decision-making, providing new insights for enhancing the reliability and security of RAG models. We obtain the ranking results of the retrieval model in RAG with instructions and use these results as data to train a surrogate model. By applying adversarial retrieval attack methods to the surrogate model, black-box transfer attacks on RAG are further realized. Experiments conducted on opinion datasets across multiple topics show that the proposed attack strategy can significantly alter the opinion polarity of the content generated by RAG. This demonstrates the model's vulnerability and, more importantly, reveals the potential negative impact on user cognition and decision-making, making it easier to mislead users into accepting incorrect or biased information. § INTRODUCTION With the rapid development of artificial intelligence, large language models (LLMs) have demonstrated exceptional capabilities in the field of natural language processing. However, constrained by their training data, these models have a limited scope of knowledge and lack the most up-to-date information, which can lead to errors or hallucinations when tackling more complex or time-sensitive tasks. Retrieval-Augmented Generation (RAG) combines information retrieval with the generative capabilities of large language models, enhancing the timeliness of knowledge acquisition and effectively mitigating the hallucination problem of these models. When given a query, RAG retrieves the most relevant passages from a knowledge base to augment the input request for the LLM. For example, the retrieved knowledge may consist of a series of text snippets that are semantically most similar to the query. RAG has inspired many popular applications, such as Microsoft Bing Chat, ERNIE Bot, and KimiChat, which use RAG to summarize retrieval results for an improved user experience. Open-source projects like LangChain and LlamaIndex provide developers with flexible RAG frameworks to build customized AI applications using LLMs, retrieval models, and knowledge bases. However, as the application scope of RAG expands, its security is increasingly a concern, especially regarding model performance when faced with malicious attacks. The basic RAG process typically consists of three components: the corpus (i.e., the knowledge base), the retriever, and the generative large language model. When some of the retrieved passages are corrupted by malicious manipulators, the RAG process can become vulnerable; this is referred to as a retrieval manipulation attack in this paper. Numerous studies have explored various forms of retrieval manipulation attacks, such as adversarial attacks on the retriever <cit.>, prompt injection attacks <cit.>, jailbreak attacks on the LLM <cit.>, and poisoning attacks targeting the retrieval corpus in RAG <cit.>.
This paper primarily focuses on adversarial ranking poisoning attacks against the retriever in RAG and how such attacks indirectly affect the generative results of the LLM. The threat model presented here is closer to a real-world black-box scenario and can be specifically modeled as follows: the attacker can only make requests to the large model and cannot access the complete corpus, the retriever, or the parameters of the RAG. The attacker can only insert adversarially modified candidate texts into the corpus, while the retriever and the LLM remain black-boxed, intact and unmodifiable. Based on previous studies <cit.>, the retrieval corpus and knowledge base contain millions of candidate texts sourced from the internet, allowing attackers to inject adversarially modified candidate texts by maliciously crafting web content or encyclopedia pages. Representative previous studies by Cho et al. <cit.> and Zhong et al. <cit.> utilized predefined white-box retrievers, which are challenging to achieve in real-world scenarios with limited flexibility and practicality. Moreover, these works did not consider testing attacks specifically targeting the integrated generation process, where practical integrated models may mitigate the effects of attacks solely targeting the retriever, thereby reducing their effectiveness. Furthermore, another notable work, PoisonedRAG <cit.>, implemented black-box retrieval poisoning attacks on RAG knowledge bases, effectively exposing relevant security vulnerabilities of RAG. However, its experiments mainly focused on closed-domain question answering, such as "Who is the CEO of OpenAI?" Such questions can be corrected when RAG is combined with fact-checking and value alignment of LLMs. The vulnerabilities explored in this paper primarily target open-ended, controversial, and opinion-based questions in RAG, such as "Should abortion be legal?" These questions demand higher levels of logical analysis and summarization capabilities from large models. Current research in controversial topics is limited, and attacks manipulating opinions on opinion-based questions could potentially cause more profound harm. Open-ended and controversial topics are issues that lack consensus due to differing opinions and attract widespread attention. These topics often involve opinions from different perspectives, influencing public perception when they are widely discussed. For example, in political elections, Robert Epstein <cit.> found that manipulating search engines to produce biased search results can alter voters' voting preferences. Placing passages favoring a particular candidate at the top significantly affects voter trust and favorability towards that candidate. Today, the issue of information homogenization in "information bubbles" has been a major concern among scholars. Zhang Yue et al. <cit.> proposed that homogenization in information bubbles manifests in three dimensions: selective homogenization, content homogenization, and group homogenization. Content homogenization refers to the phenomenon where people using online media encounter homogeneity in the presented content, often due to the "filter bubbles" that are created by recommendation systems and selectively feed biased information. In scenarios of open-ended and controversial topics, "information bubbles" can lead to the homogenization of user opinions, with people's views being easily influenced by the stance of the information they encounter. 
Through manual construction or search engine optimization, opinion manipulation attacks or "cognitive warfare" in open-ended controversial topics is actually widespread in practical applications such as social media and news platform. This phenomenon has numerous negative impacts on society. With the development of large language models, opinion manipulation exploiting RAG vulnerabilities poses a particularly severe threat. Attackers can influence the stance of the model generated content with carefully designed inputs, further endangering users' cognition and decision-making processes. Therefore, it is of significant theoretical and practical importance to study the vulnerabilities of RAG models against opinion manipulation attacks in black-box setting. In short, this paper aims to explore the reliability of RAG against black-box opinion manipulation attacks in open-ended controversial topics and investigate the impact of such attacks on user cognition and decision-making. Specifically, we first send specific instructions to obtain the ranking of the retrieval results in the RAG model and analyze the working mechanism of its retrieval module. We train a surrogate model on the obtained retrieval ranking data to approximate the features and relevance preferences of the retriever in RAG <cit.>. Based on the surrogate model, we design adversarial retrieval attack strategies to manipulate the opinions of candidate documents. By attacking this surrogate model, we generate adversarial opinion manipulation samples and transfer these adversarial samples to the actual RAG model. We then conduct experiments on opinion datasets across multiple topics to validate the effectiveness and impact range of the attack strategies without understanding the internal knowledge of the RAG model. Experiments conducted on opinion datasets across multiple topics show that the proposed attack strategy can significantly alter the opinion polarity of the content generated by RAG. This not only demonstrates the vulnerability of the model but, more importantly, reveals the potential negative impact on user cognition and decision-making, making it easier to mislead users into accepting incorrect or biased information. § RELATED WORKS Research on the reliability of neural network models has long been established. In 2013, Szegedy et al. <cit.> found that applying imperceptible perturbations to a neural network model during a classification task was sufficient to cause classification errors in CV. Later, scholars observed similar phenomenon in NLP. Robin et al. <cit.> found that inserting perturbed text into original paragraphs significantly distracts computer systems without changing the correct answer or misleading humans. It reflects the robustness of neural network models, i.e., the ability to output stable and correct predictions in tackling the imperceptible additive noises <cit.>. For large language models, Wang et al. <cit.> proposed a comprehensive trustworthiness evaluation framework for LLMs, assessing their reliability from various perspectives such as toxicity, adversarial robustness, stereotype bias, and fairness. While large language models have greater capabilities compared to general deep neural network models, they also raise more concerns regarding security and reliability. As RAG is designed to overcome the hallucination problem in LLMs and enhance their generative capabilities, the reliability of the content generated by RAG is also a major concern. Zhang et al. 
<cit.> attempted to explore the weaknesses of RAG by analyzing critical components in order to facilitate the injection of the attack sequence and crafting the malicious document with a gradient-guided token mutation technique. Xiang et al. <cit.> designed an isolate-then-aggregate strategy, which obtains LLM responses from each passage in isolation and then securely aggregates these isolated responses, to construct the first defense framework against retrieval corruption attacks. These studies are based on white-box scenarios and primarily focus on the robustness of RAG against corrupted and toxic content. This paper intends to use adversarial retrieval attack strategies to perturb the ranking results of the retriever, ensuring that opinion documents with a certain stance are ranked as high as possible, thereby guiding the generated responses of the LLM to reflect that stance. Adversarial retrieval attack strategies started with manipulation at the word level. Under a white-box setting, Ebrahimi et al. <cit.> utilize an atomic flip operation, which swaps one token for another, to generate adversarial examples; the method is known as HotFlip. HotFlip gets rid of the reliance on rules, but the adversarial text it generates usually has incomplete semantics and insufficient grammatical fluency. While it can deceive the target model, it cannot evade perplexity-based defenses. Wu et al. <cit.> also proposed a word-substitution ranking attack method called PRADA. To enhance the readability and effectiveness of the adversarial text, scholars further designed sentence-level ranking attack methods. Song et al. <cit.> propose an adversarial method under a white-box setting, named Collision, which uses gradient optimization and beam search to produce an adversarial text called a collision. The Collision method further imposes a soft constraint on collision generation by integrating a language model, reducing the perplexity of the collision. The method has shown promising results. <cit.> propose the Pairwise Anchor-based Trigger (PAT) method under a black-box setting. With an added fluency constraint and a next-sentence-prediction constraint, the method generates adversarial text by optimizing the pairwise loss between top candidates and target candidates carrying the adversarial text. Although the time complexity of PAT is higher than that of previous methods, PAT takes ranking similarity and semantic consistency into account, so its manipulation effect on the retrieval ranking of target candidates is superior. § METHOD This paper attempts to manipulate the opinions in the responses generated by black-box RAG models on controversial topics, targeting both the retrieval model and the LLM that performs the integrated generation task. Zhang et al. <cit.> tried to poison context documents to deceive the LLM into generating incorrect content, but this method requires extensive internal details of the LLM application, making it less feasible in real-world scenarios. For black-box RAG, the manipulator has no knowledge of the internal information of the RAG, including the model architecture and score function, and can only access the inputs and outputs of the RAG. Specifically, the manipulator can only call the interface of the LLM in RAG instead of that of the retriever. Since the inputs consist of the query and the candidate documents and the user's query cannot be altered, this paper focuses on modifying the candidate documents.
Although the manipulator cannot access the entire corpus, they can insert adversarially modified candidate texts into the corpus. The basic framework of RAG consists of the retriever and the generative large language model, connected in series: the LLM performs the generation task based on the context information retrieved by the retriever. Given that manipulators in a black-box scenario cannot modify the system prompts of the generative large model, it is difficult to directly manipulate the generation results by exploiting the reliability flaws of the LLM itself. Therefore, this paper focuses on exploiting the reliability flaws of the retriever to manipulate the retrieval ranking results. By adding adversarial texts to candidate documents that hold the expected opinion, we increase their relevance to the query, making them more likely to be included in the context passed to the generative large language model. Leveraging the strong capability of LLMs to understand and follow instructions, we guide the LLM to generate responses that align with the expected opinion. An overview of this method is shown in Figure 1. The specific approach for manipulating RAG opinions on controversial topics is as follows: Given a topic q (the query) from a set of controversial topics Q, we select an expected opinion S_t and target all candidate documents d_t in the retrieval corpus D that hold the S_t opinion. After obtaining the adversarial text p_adv, it is added to d_t, transforming the retrieval corpus to D(d; d_t ⊕ p_adv). Since p_adv can increase the relevance score R(q, d_t ⊕ p_adv) assigned by the retrieval model RM to d_t for query q, ideally d_t will be ranked at the top of the retrieval results RM_k(q) = { d | d_t ⊕ p_adv}, guiding the large language model to generate responses that align with the expected opinion: S(LLM(q, RM_k(q))) = S_t. The primary issue in implementing the manipulation is to make the retrieval model of the black-box RAG transparent. This paper aims to simulate the retrieval model RM. The basic idea is to train a surrogate model M_i with the ranking results RM_k(q) from the retrieval model RM, thus turning the black-box retrieval model into a white-box model. However, since the retriever and the large generative model in RAG are connected in series, it is not possible to directly obtain the ranking results of the retriever. Therefore, this paper attempts to guide the generative large model to replicate the output of the retrieval model, so we obtain the text data deemed relevant by the black-box retrieval model RM, which can be used as positive examples d_+ for black-box imitation training. Subsequently, texts irrelevant to the query can be randomly sampled as negative examples d_-. To this end, this paper designs specific instructions to make the black-box RAG replicate the retrieval results of the retriever RM. These retrieval results only need to reflect the relevance to the query. Then, based on the generated results of the LLM, we sample positive and negative data to train the surrogate model. The method for obtaining imitation data of the retrieval model in a black-box RAG scenario is illustrated in Figure 2. The prompt instruction used is as follows: Now that you are a search engine, please search: {query} Ignore the Question.
Please copy the top 3 passages of the given Context intact in the output and provide the output in JSON with keys 'answer' and 'context'. Put each candidate passage in 'context' as a string element in the list. Candidate passages are separated by line break instead of period or exclamation point. Each candidate is an element in the list, like [Passage 1, Passage 2, Passage 3]. Please copy the passages intact with no modification and only output the one best JSON response. This paper uses a pairwise approach to sample data and train the surrogate model. Relevant passages are sampled from the responses generated by the black-box RAG as positive examples d_+, and random irrelevant passages are sampled as negative examples d_-. The black-box RAG responds with context information, so the generated responses reflect the retrieval results rather than being independent of the context. These sample pairs (d_+, d_-) are incorporated into the training dataset. After sampling the imitation data, this paper uses a pairwise training method to obtain the surrogate model M_i. Let R_i denote the relevance score calculated by M_i; the training optimization objective is as follows: L = -1/|Q| ∑_q ∈ Q log( R_i(q, d_+) / ( R_i(q, d_+) + ∑ R_i(q, d_-) ) ) [1] After obtaining the surrogate model M_i, this paper transforms the manipulation of RAG-generated opinions in a black-box scenario into manipulation in a white-box scenario. Since we have all the knowledge of the white-box surrogate model M_i, this paper directly implements adversarial retrieval attacks on it, generating adversarial text p_adv for the candidate document d_t holding the opinion S_t. This paper employs the Pairwise Anchor-based Trigger (PAT) strategy for adversarial retrieval attacks, which is commonly used as a baseline in related research. Subsequently, the generated adversarial text is added to the candidate document with S_t. Then, the black-box RAG system is queried, and the generated response is obtained. The stance of the response is compared with the stance of the response generated by the RAG without manipulation to evaluate the reliability of the black-box RAG. PAT, as a representative adversarial retrieval attack strategy, adopts a pairwise generation paradigm. Given the target query, the target candidate item, and the top candidate item (the anchor, used to guide the adversarial text generation), the method utilizes gradient optimization of a pairwise loss, calculated from the candidate item and the anchor, to find an appropriate representation of the adversarial text. The method also adds a fluency constraint and a next-sentence-prediction constraint. Through beam search over words, the final adversarial text, denoted as T_pat, is iteratively generated in an auto-regressive way. This paper uses T_pat as p_adv, with the generation optimization objective <cit.>: max( M_i(q, T_pat; w) + λ_1 · log P_g(T_pat; w) + λ_2 · f_nsp(d_t, T_pat; w) ) [2] In the above formula, P_g is the semantic constraint function, and f_nsp is the next-sentence-prediction consistency score function between T_pat and d_t. In terms of data, this paper uses the MS MARCO Passage Ranking dataset as the data source for guiding the black-box RAG to generate relevant passages <cit.>, from which we sample data pairs to train the surrogate model. Additionally, this paper uses controversial topic data scraped from the PROCON.ORG website as the object of manipulation.
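A minimal PyTorch sketch of the pairwise imitation objective in Eq. [1] is given below; it assumes the surrogate scores R_i are real-valued outputs of a cross-encoder and realizes the ratio in Eq. [1] with the usual softmax (cross-entropy) form over one positive and N sampled negatives per query, which is an implementation choice on our part rather than something specified in the text.

```python
import torch
import torch.nn.functional as F

def imitation_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Pairwise imitation loss over a batch of queries (Eq. [1]).
    pos_scores: (B,) surrogate scores R_i(q, d+); neg_scores: (B, N) scores R_i(q, d-)."""
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)        # (B, 1+N)
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    # -1/|Q| * sum_q log( exp(R+) / (exp(R+) + sum exp(R-)) ), averaged over the batch
    return F.cross_entropy(logits, targets)

# Example training step (the surrogate cross-encoder `model` is a placeholder):
# pos_scores = model(queries, positives)         # passages replicated by the black-box RAG
# neg_scores = model(queries, random_negatives)  # randomly sampled irrelevant passages
# loss = imitation_loss(pos_scores, neg_scores); loss.backward()
```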
The controversial topic dataset includes over 80 topics, covering fields such as society, health, government, education, and science. Each controversial topic is discussed from two stances (pro and con), with an average of 30 related passages, each holding a certain opinion with a pro or con stance. The specific settings for the RAG manipulation experiment are as follows: (1) Black-box RAG: This paper denotes the black-box RAG process, which serves as the research object, as RAG_black. It mainly consists of a retriever and a large language model (LLM). The LLMs used are the open-source models Meta-Llama-3-8B-Instruct (LLAMA3-8B) and Qwen1.5-14B-Chat (Qwen1.5-14B). The LLAMA and Qwen series LLMs perform well across various tasks among open-source models. The prompt connecting the retriever and the LLM in RAG_black adopts the basic RAG prompt from the LangChain framework: Use the following pieces of retrieved context to answer the question. Keep the answer concise. Context: {context}. Question: {question}. (2) Target retriever model and surrogate model: The retriever in RAG is usually a dense retrieval model. Therefore, this paper selects the representative dense retrieval model, coCondenser, as the target retrieval model <cit.>. Since coCondenser is a BERT-based model, the surrogate model chosen in this paper is the MiniLM model, which is BERT-based and specifically trained on the MS MARCO Passage Ranking dataset. (3) Manipulation target: For a controversial topic q, documents d_t holding the expected opinion S_t are manipulated by adding adversarial text p_adv at the beginning. This manipulation aims to position these perturbed documents as prominently as possible in the top K rankings of the RAG retriever RM_k(q), where K denotes the number of paragraphs obtained by the RAG generation model from the retrieval results. In this paper, K is set to 3. (4) Manipulator (the threat model): In the black-box scenario, the manipulator is only authorized to query the RAG, obtain RAG-generated results, and modify the target documents. There are no restrictions on the number of calls to the RAG. Furthermore, the manipulator has no knowledge of the model architecture, model parameters, or any other information related to the models within the black-box RAG. Modifying the prompt templates used by the LLM is also prohibited. (5) Experimental parameters: The batch size for training the surrogate model is set to 32, with 24 iterations. Our proposed opinion manipulation strategy for black-box RAG is outlined in Algorithm 1. § EXPERIMENT AND ANALYSIS After imitating the retrieval model of RAG_black to obtain the surrogate model, this paper first compares the ranking ability of the surrogate model M_i and the target retrieval model RM, as well as the similarity of their ranking results, as shown in Table 1, to ensure that the surrogate model has learned the capabilities of the black-box retrieval model. This paper uses Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain (NDCG) to reflect the ranking ability of the models themselves; higher values indicate stronger ranking ability in terms of relevance. Inter Ranking Similarity (Inter) and Rank-Biased Overlap (RBO) are used to measure the similarity between the ranking results of the surrogate model and the target retrieval model; higher values indicate better performance of the black-box imitation. The weight for RBO@10 is set to 0.7. In Table 1, "–" indicates that the metric is not applicable to the model.
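For reference, a minimal sketch of the truncated Rank-Biased Overlap used for the similarity comparison above (depth 10, weight p = 0.7) is shown below; this is the simple truncated form rather than the extrapolated RBO variant, which is an implementation choice on our part.

```python
def rbo_truncated(ranking_a, ranking_b, p=0.7, depth=10):
    """Truncated Rank-Biased Overlap between two ranked lists of document ids."""
    score, seen_a, seen_b = 0.0, set(), set()
    for d in range(1, depth + 1):
        if d <= len(ranking_a):
            seen_a.add(ranking_a[d - 1])
        if d <= len(ranking_b):
            seen_b.add(ranking_b[d - 1])
        agreement = len(seen_a & seen_b) / d      # overlap proportion at depth d
        score += (p ** (d - 1)) * agreement
    return (1.0 - p) * score

# Example: similarity between the surrogate's and the target retriever's top-10 lists
# rbo_truncated(surrogate_top10, cocondenser_top10)  # both are lists of passage ids
```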
As can be seen from Table 1, the surrogate model M_i trained by black-box imitation is similar to the target retrieval model coCondenser in terms of relevance ranking performance and ranking results, validating the effectiveness of the black-box imitation. After the black-box imitation training, opinion manipulation experiments are conducted through the white-box surrogate model. Several controversial topics and their opinion text data under the four themes of "Government", "Education", "Society", and "Health" from the PROCON.ORG data are selected as the retrieval corpus. The original retrieval corpus is denoted as Docs_origin. Based on the surrogate model, we generate the corresponding T_pat for the candidate items with the expected opinion S_t on controversial topics, and then insert T_pat at the beginning of the target candidate items to obtain the perturbed retrieval corpus Docs_adv. RAG_black is queried twice on each controversial topic: once with Docs_origin as the retrieval corpus and once with Docs_adv as the retrieval corpus, and the two RAG responses are obtained, representing the answers before and after opinion manipulation. The responses are then classified into three categories based on their opinion on the controversial topics: opposing, neutral, and supporting, represented by 0, 1, and 2, respectively, as the opinion scores of the generated responses. This study uses Average Stance Variation (ASV) to represent the average increase of the opinion scores of RAG_black responses in the direction of the expected opinion S_t before and after manipulation. A positive ASV indicates that the opinion manipulation towards S_t is effective, while a negative ASV indicates that the manipulation actually makes the opinions of RAG responses deviate from S_t. The larger the ASV value, the more successful the opinion manipulation of RAG responses. Additionally, this paper attempts to obtain the ranking results of the retriever coCondenser to evaluate the effectiveness of the adversarial retrieval manipulation strategy at the ranking stage for dense retrieval. This evaluation is solely for assessment purposes and is not involved in the manipulation, as no internal knowledge of RAG_black was leaked during the manipulation process. After obtaining the ranking results of the retriever model coCondenser, this paper evaluates the manipulation effect with the Attack Success Rate (ASR), the average proportion of target opinions in the Top 3 rankings before and after manipulation (Top3_origin, Top3_attacked), and the Variation of Normalized Discounted Cumulative Gain (VoN-DCG). Higher values of ASR and VoN-DCG indicate better manipulation effects on ranking, and a larger difference between Top3_attacked and Top3_origin likewise signifies more significant ranking manipulation effects. Figure 3 shows the significant overall opinion manipulation effect of the adversarial retrieval attack strategy PAT. This paper divides Docs_origin into two parts: document data with an expected opinion of support and document data with an expected opinion of opposition, where the expected opinion represents the stance direction we would like the RAG response to hold for the target topic after manipulation. The expected opinion S_t is set to 2 for supporting and 0 for opposing, and then the manipulation is performed. The results show that when the expected opinion is support, the proportion of responses with a supportive stance increases significantly after manipulation, while the proportion of responses with an opposing stance decreases.
When the expected opinion is opposition, the proportion of supportive responses decreases significantly after manipulation, while the proportions of neutral and opposing responses both increase. Comparatively, the changes in stance before and after manipulation are slightly larger for LLAMA3-8B than for Qwen1.5-14B, due to the stronger instruction-following ability of LLAMA3-8B. The results of the theme-specific manipulation experiments are shown in Table 2. The adversarial retrieval attack strategy PAT, applied through the adversarial texts generated with the surrogate model, significantly increased the proportion of candidate items holding the expected opinion in the Top 3 of the RAG_black retrieval list, thereby guiding the LLM to change the opinion in its response. However, the manipulation effect on RAG_black-generated opinions varies across themes: for education, society, and health topics, the attack success rate and ranking variation of target items are significantly higher than those for government topics. This suggests that the LLM may have been specifically fine-tuned on government-related datasets, enabling it to mitigate the bias in the retrieval context to some extent. Among these topics, controversial opinions in society and health topics are more susceptible to manipulation. Since these two areas are closely related to people's lives, opinion manipulation in society and health topics may pose a greater risk. The manipulation results across different themes still demonstrate the relative advantage of LLAMA3-8B in understanding prompts with contextual background intentions and generating effective responses. However, this also indicates that the strong comprehension ability of LLMs may undermine the reliability of the content they generate. § CONCLUSION In this paper, we explore the vulnerability of retrieval-augmented generation (RAG) models to black-box opinion manipulation attacks on open-ended controversial topics, and delve into the potential impact of such attacks on user cognition and decision-making. Through systematic experiments, we propose a novel adversarial attack strategy based on retrieval ranking poisoning. This method significantly affects the polarity of the opinions generated by RAG by crafting adversarial samples, without requiring internal knowledge of the RAG model. The experimental results indicate that the proposed attack strategy successfully alters the opinion of the content generated by the RAG model, revealing the vulnerability and unreliability of RAG when confronted with a malicious retrieval corpus. More importantly, this opinion manipulation could have profound impacts on users' cognition and decision-making processes, potentially leading users to accept incorrect or biased information, causing cognitive changes and public opinion distortion. This phenomenon is particularly significant for open-ended and controversial issues. Future research will expand the scale of the experiments by including more open-source and commercial RAG systems to more comprehensively evaluate the reliability of viewpoint generation by RAG models. Given the vulnerabilities of RAG models, future work should focus on developing more robust defense strategies. These may include improving the robustness of retrieval algorithms, enhancing the reliability of generation models, and introducing multi-level input filtering mechanisms to counteract adversarial inputs, thereby achieving a balanced optimization of the understanding and reliability of RAG models.
§ ETHICAL STATEMENT This paper explores the feasibility of opinion manipulation on black-box RAG models in real-world scenarios. The main goal is to assess the reliability of RAG technology when faced with ranking manipulation at the retrieval stage, paving the way for future work to enhance the robustness and defense capabilities of RAG technology. This study did not manipulate any commercial RAG systems or real-world data currently in use.
http://arxiv.org/abs/2407.13315v1
20240718091618
TCSpy: Multi-telescope Array Control Software for 7-Dimensional Telescope (7DT)
[ "Hyeonho Choi", "Myungshin Im", "Ji Hoon Kim" ]
astro-ph.IM
[ "astro-ph.IM" ]
TCSpy: Multi-telescope Array Control Software for 7-Dimensional Telescope (7DT) Hyeonho Choi, Myungshin Im, Ji Hoon Kim July 22, 2024 =============================================================================================== § ABSTRACT We introduce a novel software package called TCSpy which is designed to efficiently control a multi-telescope array through network-based protocols. The primary objectives of TCSpy include centralized control of the array, support for diverse observation modes, and swift responses for follow-up observations of astronomical transients. To achieve these objectives, TCSpy utilizes the ASCOM Alpaca protocol in conjunction with Alpyca, establishing robust communication among multiple telescope units. For the practical application of TCSpy, we implement TCSpy within the 7-Dimensional Telescope (7DT). 7DT is a telescope array consisting of 20 0.5-m telescopes, equipped with 40 different medium-band filters. The main scientific goals of 7DT include detecting the optical counterparts of gravitational-wave sources, identifying kilonovae, and the spectral mapping of the southern sky. Through the integration of TCSpy, 7DT can achieve these scientific objectives with its unique observation modes and rapid follow-up capabilities. § INTRODUCTION With the advent of gravitational-wave (GW) detectors (LIGO<cit.>, Virgo<cit.>, KAGRA<cit.>), a significant number of GW events are expected to be discovered in the near future. While the discovery of these transients alone is scientifically meaningful, additional follow-up observations provide valuable information about the nature of such events<cit.>. However, even though a significant number of telescopes rapidly respond to interesting transient events for follow-up observations, the number of observatories is not sufficient. This is particularly true for events with large localization areas and those requiring spectroscopic observation. For such cases, spectral mapping over the whole localization area can help speed up the identification of the transient of interest. An efficient solution for both wide-field observation and spectroscopic analysis of the entire image is to utilize a multi-telescope array. With each telescope unit operating independently within the array, the sky localization area can be covered with the combined field of view of all telescopes, without being constrained by a specific tiling pattern. Moreover, this setup enables spectroscopic observation of the entire image by observing the same position with different medium-band filters. To perform this kind of spectroscopic mapping of the sky, our group has developed the 7-Dimensional Telescope (7DT), a multi-telescope array composed of 20 telescopes equipped with 40 different medium-band filters. To support the various observation modes of 7DT and possibly other multi-telescope systems, we developed a novel software package called TCSpy for the operation of a multi-telescope array composed of Astronomy Common Object Model (ASCOM) and PlaneWave Instruments (PWI) devices. Utilizing a centralized system, TCSpy supports synchronized operation of multiple telescopes via network-based communication. Currently, TCSpy is specifically designed for the operation of 7DT. This paper demonstrates the software architecture of TCSpy and its practical application to 7DT. Section <ref> briefly presents the hardware specifications, science goals, and operational requirements of 7DT.
Section <ref> describes the network-based hardware control, the software design of TCSpy, and the target database. Section <ref> discusses the implementation of TCSpy on 7DT for diverse observation modes and robotic observation. § 7-DIMENSIONAL TELESCOPE §.§ Hardware 7DT is a multi-telescope array located at the El Sauce Observatory, Chile. It comprises 20 0.5-meter telescopes equipped with 40 different medium-band filters. Each telescope unit includes a fast slewing mount, an F/3 optical tube assembly (OTA) with a 508 mm diameter, a focuser, a 61-megapixel CMOS camera, and a 9-slot filter wheel. Each filter wheel is equipped with the Sloan g-, r-, and i-band filters, along with four to six medium-band filters of 25 nm width. With its large field of view (1.25 square degrees per telescope unit), rapid slewing speed (50 degrees per second), and swift readout speed (less than 2 seconds), 7DT is ideally suited for rapid follow-up observations of transient events. Each telescope unit is operated by an assigned telescope control computer (TCC) with a physical connection. A Main Control Computer (MCC) governs all TCCs for centralized operation of the multiple telescope units. For more detail on hardware control, see Section <ref>. Currently, 12 of the 20 units are operational, allowing for telescope standardization and commissioning observations. §.§ Scientific goal One of the main scientific objectives of 7DT is transient optical follow-up observation, primarily to search for the optical counterparts of GW sources detected during LIGO-Virgo-KAGRA observing run 4<cit.> (LVK O4). As the major observational facility in the Center for the Gravitational-Wave Universe and the Gravitational-wave Electromagnetic Counterpart Korean Observatory<cit.> (GECKO) project, 7DT will trigger target-of-opportunity (ToO) observations on the sky localization area of GW events and identify optical counterparts in real time. Moreover, follow-up observations with diverse observation modes will be triggered for interesting transient events such as supernovae and high-energy sources. The details of the observation modes are discussed in Section <ref>. Furthermore, when no transient events are occurring, 7DT will conduct the 7-Dimensional Sky Survey, a spectral mapping survey of the southern-hemisphere sky. Depending on the scientific purpose, survey area, and observation cadence, there are three types of surveys: the Reference Image Survey (RIS), the Wide Field Survey (WFS), and the Intensive Monitoring Survey (IMS). The data collected from these surveys will be in the format of data cubes with low-resolution spectroscopy and will be utilized for transient subtraction, photometric redshift calculation, and other analyses. §.§ Requirements for operation To achieve the scientific goals discussed in Section <ref>, the basic requirements for 7DT operation are as follows: (1) support for diverse observation modes; (2) synchronized operation of multiple telescope units; and (3) swift responsiveness to ToO observation requests. §.§.§ Observation modes 7DT operates in a variety of observation modes to serve various astronomical research needs. Taking advantage of the multiple telescope units, we need to implement three observing modes: (1) Spectroscopic observation mode (Spec mode); (2) Deep observation mode (Deep mode); and (3) Search observation mode (Search mode). In Spec mode, each telescope observes the same designated target with different medium-band filters.
The default setting of the Spec mode should include all medium-band filters from m400 to m887, yet customizable Spec modes should be defined as well. The response curves of all medium-band filters are shown in Figure <ref>. Deep mode involves the observation of the same designated target by all telescope units with the same filter. By combining all images obtained from the 20 telescope units, this mode yields a significant enhancement in light-gathering power, equivalent to that of a 2.3-m telescope. Search mode is essential to rapidly cover extensive sky localization areas, such as those of GW events. In Search mode, each telescope unit individually observes a different area of the sky. The critical aspect of Search mode is to avoid assigning duplicate targets to multiple telescope units. §.§.§ Synchronized operation of the multi-telescope array Rapid spectral changes and weather fluctuations can impact data quality, especially in the Spec and Deep observation modes. Hence, in these observation modes, all telescope units must synchronize their observations of the shared target within a few seconds. Furthermore, in Spec mode, real-time sharing of each target's observing status and rapid observation scheduling are necessary to facilitate swift observations. §.§.§ Swift responsiveness for follow-up observations Given that the primary scientific objective of 7DT is to detect transients and conduct follow-up observations, swift ToO observations are essential. Therefore, our primary aim is to ensure that 7DT is autonomously ready for a ToO observation within one minute. Such rapid follow-up observations will provide crucial information necessary to uncover the nature of transient events. § SOFTWARE ARCHITECTURE Telescope Control System with Python (TCSpy) is a Python software package designed specifically for the synchronized control of ASCOM-based multi-telescope arrays. Using ASCOM Alpaca<cit.> and the PWI4 HTTP API<cit.>, robust communication between telescopes is established with the HTTP protocol. Through this communication, centralized operation of multiple telescope units becomes feasible, enabling the realization of diverse observation modes with synchronization. Moreover, with the dynamically interactive target database and scheduling system, a telescope array integrated with TCSpy can operate fully automatically while responding rapidly to ToO observations. §.§ Hardware control In TCSpy, all telescope hardware communicates via the network using the HTTP protocol. Each telescope is hosted by its Telescope Control Computer (TCC) through a physical connection. These individually hosted telescopes can be controlled via HTTP commands using their respective IP addresses, enabling centralized control by the Main Control Computer (MCC). The scheme of hardware control in TCSpy via the HTTP protocol is illustrated in Figure <ref>. Two different APIs are utilized for controlling telescope components with the HTTP protocol: (1) ASCOM Alpaca, and (2) the PWI4 HTTP API. §.§.§ ASCOM Alpaca ASCOM stands for "Astronomy Common Object Model." It is one of the standards for communication between telescope control software and hardware devices like mounts, focusers, filter wheels, and more. ASCOM provides a standardized interface that allows different software applications to control various devices without needing specific drivers. ASCOM Alpaca is an extension of the ASCOM platform, enabling the control of ASCOM devices over a network. Each ASCOM device is configured as an Alpaca device via an ASCOM Remote Server, after which it can be driven with plain HTTP requests, as sketched below.
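As an illustration of this style of network-based device control, the sketch below drives an Alpaca-exposed filter wheel and camera with plain HTTP requests. It is not code from TCSpy: the IP address, port, device numbers, and exposure settings are placeholders, and the endpoint names follow the general Alpaca Device API pattern, so details may differ from the actual 7DT setup.

```python
import time
import requests

BASE = "http://192.168.0.21:11111/api/v1"   # placeholder address of one TCC hosting Alpaca devices

def alpaca_put(device, number, method, **params):
    r = requests.put(f"{BASE}/{device}/{number}/{method}", data=params, timeout=10)
    r.raise_for_status()
    return r.json()

def alpaca_get(device, number, prop):
    r = requests.get(f"{BASE}/{device}/{number}/{prop}", timeout=10)
    r.raise_for_status()
    return r.json()["Value"]

# Connect the filter wheel and camera of one telescope unit, then take a single exposure.
alpaca_put("filterwheel", 0, "connected", Connected=True)
alpaca_put("camera", 0, "connected", Connected=True)
alpaca_put("filterwheel", 0, "position", Position=3)                 # select a filter slot
alpaca_put("camera", 0, "startexposure", Duration=60.0, Light=True)  # 60 s light frame
while not alpaca_get("camera", 0, "imageready"):                     # poll until readout finishes
    time.sleep(1)
image = alpaca_get("camera", 0, "imagearray")                        # nested list of pixel values
```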
Alpaca devices configured in this way can be controlled via the host IP address by utilizing the Alpaca device API. TCSpy is built on a Python library called Alpyca, which allows the control of Alpaca devices from Python. §.§.§ PWI4 HTTP API PlaneWave Interface 4 (PWI4) is device control software provided by PlaneWave. For devices manufactured by PlaneWave, PWI4 offers various functions such as telescope slewing, autofocus, and more. Similar to ASCOM Alpaca, PWI4 supports device manipulation via the HTTP protocol using the host IP address. In addition to the ASCOM Alpaca API, TCSpy also supports the PWI4 HTTP API for mounts and focusers. §.§ Software design TCSpy is organized into three fundamental types of modules: device, action, and utility modules. These modules play essential roles in device integration, operational actions, and various utility functions. By combining these fundamental modules, users can create applications that enable observers to manage the operation of the multi-telescope array. With the flexibility of Python, each application can also be scheduled for automatic operation. In the device module, TCSpy establishes connections with the multi-telescope array at various levels. At the lowest level, it connects ASCOM Alpaca devices (such as cameras, filter wheels, etc.) to create a SingleTelescope instance. Each SingleTelescope instance represents a single telescope unit. Multiple SingleTelescope instances are then combined to form a MultiTelescope instance, allowing for the simultaneous observation of multiple telescopes with synchronization across various observation modes. The action modules enable the practical operation of both SingleTelescope and MultiTelescope instances. These modules are structured into four levels, depending on the complexity of the required action. For SingleTelescope operations, Level 1 and Level 2 action modules are employed, while MultiTelescope operations utilize Level 3 action modules. All action modules simply provide two interfaces, "run" and "abort", for running and aborting an operation. Furthermore, action modules in Levels 1 and 2 can be wrapped by the MultiAction module at Level 0, enabling simultaneous operation across multiple telescopes (a schematic illustration of this module pattern is sketched below). All observation modes presented in Section <ref> are implemented in the action modules. The utility modules provide essential support to TCSpy. These utility modules offer a range of functionalities, including image header control, image saving, logging, and target database control, as well as exception and error handling. Applications are defined as a list of actions for the automated operation of a MultiTelescope object. Since action modules are designed to initiate specific operations of SingleTelescope or MultiTelescope, users need to design a sequence of actions for autonomous observation by combining utility modules with action modules. With the flexibility of Python, such applications can be scheduled for fully robotic observation. For example, the default TCSpy application is shown in Section <ref>. §.§ Target database The target database is a MySQL database that serves as a pivotal interface in TCSpy. It stores targets for nightly observations, dynamically updates their observing status, and calculates optimal targets throughout the observation. Before observation begins, an initialization process is triggered to speed up the scoring used for optimal target selection.
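The device/action hierarchy described in the software design subsection above can be pictured with a short schematic sketch. The class and method names below are illustrative stand-ins rather than the actual TCSpy API, the target and filter names are placeholders, and threads are used here for brevity where TCSpy itself assigns separate processes to the units.

```python
from concurrent.futures import ThreadPoolExecutor

class SingleTelescope:
    """One telescope unit: bundles its Alpaca/PWI4 devices (mount, camera, filter wheel, focuser)."""
    def __init__(self, unit_id, host):
        self.unit_id, self.host = unit_id, host

class MultiTelescope:
    """A collection of SingleTelescope units operated together."""
    def __init__(self, telescopes):
        self.telescopes = telescopes

class ExposureAction:
    """A Level-1/2 style action: exposes one unit with a given filter; offers run/abort."""
    def __init__(self, telescope, target, filter_name, exptime):
        self.telescope, self.target = telescope, target
        self.filter_name, self.exptime = filter_name, exptime
        self._abort = False

    def run(self):
        if self._abort:
            return
        print(f"[{self.telescope.unit_id}] {self.target} / {self.filter_name} / {self.exptime}s")

    def abort(self):
        self._abort = True

class MultiAction:
    """A Level-0 style wrapper: runs one action per unit concurrently."""
    def __init__(self, actions):
        self.actions = actions

    def run(self):
        with ThreadPoolExecutor(max_workers=len(self.actions)) as pool:
            list(pool.map(lambda a: a.run(), self.actions))

    def abort(self):
        for a in self.actions:
            a.abort()

# Spec-mode flavour: same target, a different medium-band filter per unit.
units = [SingleTelescope(i, f"10.0.0.{i}") for i in range(1, 4)]
filters = ["m400", "m425", "m450"]
spec = MultiAction([ExposureAction(t, "TargetA", f, 60) for t, f in zip(units, filters)])
spec.run()
```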
During this initialization, celestial information such as the rise time, set time, and moon separation at the observing site is pre-computed with the astroplan<cit.> library. During observation, the observation scheduler selects the target with the best observing conditions to trigger real-time observation. The selection process is based on a scoring algorithm. First, the observability of the targets is checked. This step includes checking the moon separation and altitude of the targets. Second, the scores of all targets are defined in terms of relative altitude and priority with the following equation. Score = w_alt×Altitude/Altitude_max + w_prior×Priority/Priority_max Here, the maximum altitude of a target is defined as the maximum altitude it reaches during the observing night. This simple algorithm offers flexibility and rapid calculation speed compared to more complex scoring algorithms. Users can easily prioritize desired targets by adjusting the weights (w) and setting priorities according to specific needs. With this straightforward approach, 30,000 targets can be scored within one second, as all necessary values are pre-calculated during the initialization process. § IMPLEMENTATION OF TCSPY ON 7DT We implement TCSpy on 7DT to meet the various observation requirements presented in Section <ref>. Running on the MCC, TCSpy establishes robust connections with the multiple telescope units, allowing commands to be sent to them simultaneously in coordination with the target database. Such a centralized control system enables the implementation of the following functionalities. §.§ Synchronized observation modes As mentioned in Section <ref>, concurrent operation of multiple telescope units is essential for the Spec and Deep observation modes. These observation modes are executed within high-level action modules. Here, we describe the Spec observation mode, where different telescope units observe the designated target simultaneously using different medium-band filters. When the Spec mode is triggered, TCSpy assigns each telescope the filters it should use from a pre-defined filter list. This filter information is then passed to the Level 0 MultiAction module to trigger the simultaneous operation of multiple telescope units. In the MultiAction module, TCSpy assigns a single core to each telescope unit with the multiprocessing<cit.> library for simultaneous and expedited operation. For the default Spec mode of 7DT, each telescope sequentially observes two medium-band filters to capture the entire spectrum. This approach enables observations on all 7DT units to be triggered within one second. §.§ Robotic observation Robotic observation of the multi-telescope array can be executed by running and scheduling applications. TCSpy supports built-in applications for processes including startup, automatic flat acquisition, automatic night-time observation, and shutdown. Additionally, users have the flexibility to easily create custom applications for specific needs using the action and utility modules. Here, we introduce the NightObservation application as an example. The NightObservation application automatically triggers the observation of targets in the target database during the nighttime. Once triggered, it starts with the initialization process. The initialization process includes running the weather updater, initializing the target database, and checking device connections. Once the Sun falls below the horizon, the application selects the optimal target for observation from the target database every second, using the scoring-based selection sketched below.
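The scoring-based target selection described above can be written compactly. The sketch below is illustrative rather than the actual TCSpy implementation: the observability thresholds and field names are assumptions, and the score is vectorized with NumPy so that tens of thousands of targets can be ranked quickly, in the spirit of the figure quoted above.

```python
import numpy as np

def select_best_target(altitude, max_altitude, priority, moon_sep,
                       w_alt=1.0, w_prior=1.0, min_alt=30.0, min_moon_sep=40.0):
    """Return the index of the highest-scoring observable target, or None.

    altitude, max_altitude, priority, moon_sep: one entry per target; max_altitude is the
    highest altitude reached during the night, pre-computed during initialization
    (e.g. with astroplan). min_alt and min_moon_sep are assumed observability cuts.
    """
    altitude = np.asarray(altitude, float)
    score = (w_alt * altitude / np.asarray(max_altitude, float)
             + w_prior * np.asarray(priority, float) / np.max(priority))
    observable = (altitude > min_alt) & (np.asarray(moon_sep, float) > min_moon_sep)
    score = np.where(observable, score, -np.inf)
    return None if not observable.any() else int(np.argmax(score))

best = select_best_target(altitude=[55, 72, 20], max_altitude=[80, 75, 60],
                          priority=[1, 3, 5], moon_sep=[90, 50, 10])
print(best)  # -> 1 (highest combined altitude/priority score among observable targets)
```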
An observation is triggered by comparing the number of telescope units required for the target with the number of idle units available in the telescope queue. The telescope queue and the observing status of the target are promptly updated according to the progress of the observation, ensuring seamless operations. When the weather conditions deteriorate, all ongoing observations are aborted until the conditions improve, and acquisition of the aborted targets then resumes according to their visibility. This automated observation process continues until sunrise, providing fully automatic observation. §.§ ToO Observation When ToO targets are received from authorized users or alert brokers, the ToO response can be carried out manually by users. Manual control by observers, however, increases the response time of ToO observation. Therefore, for a faster ToO response, the ToO handling is incorporated into the NightObservation application discussed in Section <ref>. In this application, ToO targets have the highest priority when scheduling the optimal target. When a ToO target is selected as the optimal target, the application interrupts all ongoing observations. This preparation is completed within 5 seconds, after which all telescope units are ready to trigger the ToO observation. Once the ToO observation concludes, observations of the previously halted targets resume. § CONCLUSION This paper presents the fundamental structure, functionality, and application of TCSpy in controlling a multi-telescope array. TCSpy establishes connections with multiple telescopes built with ASCOM-supported devices using the local network at the telescope site. Through this connection, TCSpy not only provides a multi-telescope control system but also offers various functionalities such as ToO observation and synchronized observation modes. In Section <ref>, we demonstrate the successful application of TCSpy to 7DT. With the high-level action modules, synchronized observation of the multi-telescope array is implemented with multiprocessing. Moreover, the default application of TCSpy, NightObservation, effectively implements robotic observation with ToO target handling and weather-condition monitoring. Finally, 7DT equipped with TCSpy can initiate simultaneous observations across all telescopes in the array within one second, while preparing for ToO observations within five seconds. This work was supported by the National Research Foundation of Korea (NRF) grants, No. 2020R1A2C3011091 and No. 2021M3F7A1084525, funded by the Korea government (MSIT). spiebib
http://arxiv.org/abs/2407.12918v1
20240717180006
Landau levels and optical conductivity in the mixed state of a generic Weyl superconductor
[ "Zhihai Liu", "Luyang Wang" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China wangly@szu.edu.cn College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China § ABSTRACT The low-energy quasiparticle states in the mixed state of most superconductors remain Bloch waves due to the presence of supercurrent around vortex cores. In contrast, the Weyl superconductor (WSC) may display Dirac-Landau levels in the presence of a vortex lattice. Here, we investigate the Landau level (LL) structure and optical conductivity in the mixed state of a generic WSC using a heterostructure model, where the tilt of the Bogoliubov-Weyl (BW) cones can be tuned, yielding either type-I (undertilted) or type-II (overtilted) cones. We find that, in a magnetic field, the tilted type-I BW cone in the mixed state may exhibit squeezed LLs with reduced spacings. On the other hand, the spectrum of type-II cones shows a dependence on the angle between the magnetic field and the tilt direction; LL quantization is only possible if the angle is below a critical value. For zero tilt, the optical conductivity in the mixed state of the WSC shows peaks only at photon frequencies ω_n ∝√(n)+√(n+1), with a linear background. However, the tilt of BW cones results in the emergence of optical transitions beyond the usual dipolar ones. For type-II BW cones, unique intraband transition conductivity peaks emerge at low frequency, which can serve as an indicator to distinguish between type-I and type-II BW cones in WSCs. Landau levels and optical conductivity in the mixed state of a generic Weyl superconductor Luyang Wang July 22, 2024 ========================================================================================== § INTRODUCTION In a strong magnetic field, the electron states of solid materials are typically reorganized into a series of discrete energy levels called Landau levels. These discrete levels can give rise to unique physical phenomena, such as the periodic quantum oscillations of certain quantities <cit.> and the quantum Hall effect <cit.>. However, such scenarios are uncommon in superconductors. For superconductors of the first kind, the magnetic field is completely repelled out of the sample due to the Meissner effect. For superconductors of the second kind, the magnetic field partially penetrates the sample, resulting in the mixed state, where an Abrikosov vortex lattice is formed. In this state, the spatially varying supercurrent disrupts the system's LLs <cit.>. In particular, Franz and Tesanovic <cit.> revealed that, in the vortex state of a d-wave superconductor, the magnetic field vanishes on average, the quasiparticles retain the zero-field Dirac cone, and the main effect of the magnetic field is a renormalization of the Fermi velocity. Similar phenomena have also been reported in s-wave and p-wave superconductors <cit.>. It is believed that most superconductors do not exhibit LLs in a magnetic field. Recently, Pacholski et al. <cit.> demonstrated that, due to the chirality of the Weyl fermions, WSCs may exhibit a topologically protected zeroth LL in the vortex state. It is also noted that a pseudo-LL structure induced by a pseudomagnetic field has been reported in WSCs and other nodal superconductors <cit.>. The WSC can be considered as a superconducting counterpart of the Weyl semimetal (WSM) and exhibits BW nodes within the superconducting gap <cit.>. Consequently, some physical properties that resemble those found in WSMs have been reported in WSCs.
For example, chiral Majorana arcs at zero energy have been examined on the surface Brillouin zone (BZ) of WSCs, analogous to the surface Fermi arcs of WSMs <cit.>. These Majorana arcs may lead to the anomalous thermal Hall effect <cit.>. In an electromagnetic field, the WSM shows negative magnetoresistance induced by the chiral anomaly <cit.>. Although charge is not conserved in the superconducting state due to the U(1) symmetry breaking, WSCs may display negative thermal magnetoresistivity since energy is still conserved. In the superconducting state, the chiral anomaly can be induced by vortex textures in the mixed state <cit.> or by lattice strain <cit.>. WSCs may be realized in some multilayer structures that include s-wave superconductors <cit.>. Furthermore, materials such as URu_2Si_2 <cit.>, UPt_3 <cit.>, UCoGe <cit.> and SrPtAs <cit.> are also proposed as candidates for WSCs. In WSMs, Weyl cones can be tilted in the momentum-energy space and are classified into type-I and type-II cones based on the ratio between the tilt and the Fermi velocity <cit.>. Similarly, WSCs may present tilted BW cones, including both type-I and type-II cones <cit.>. In two-dimensional (2D) systems, the impact of tilt on the LLs of Dirac cones has been investigated. For instance, in the quasi-2D organic compound α-(BEDT-TTF)_2I_3, the tilted Dirac cone shows squeezed LLs that collapse when the tilt exceeds the Fermi velocity <cit.>. The modification of the LLs induced by the tilt can be viewed as being generated by an in-plane electric field, which has been theoretically investigated in the context of graphene by Lukose et al. <cit.>. Due to the Lorentz covariance of Dirac fermions, below the critical electric field we can always find a frame of reference, reached by a "Lorentz boost", in which the in-plane electric field effectively vanishes and the magnetic field is reduced, the so-called magnetic regime <cit.>. The LL structure of tilted Dirac cones can also be interpreted through a generalized chiral symmetry <cit.>. Similar phenomena have been reported in the 3D WSM, where the squeezed LLs persist as long as the tilt in the plane perpendicular to the magnetic field does not exceed the Fermi velocity, even for type-II WSMs <cit.>. Otherwise, the LLs collapse, leading the system into the "electric regime" where the magnetic field effectively vanishes. In the mixed state of a WSC, the quasiparticle spectrum for the untilted BW cones has been studied, revealing Dirac-LLs <cit.>. In the plane perpendicular to the magnetic field, these LLs scale with √(nB) and feature completely dispersionless zeroth levels. However, the impact of the tilt on the LL structure of BW cones has not been fully elucidated. Moreover, the quasiparticle spectrum in the mixed state for the type-II BW cones remains ambiguous. In this work, we propose a heterostructure model that is engineered by multilayer structures comprising WSMs and s-wave superconductors. The heterostructure may present a Weyl superconducting phase due to the superconducting proximity effect <cit.>, the quasiparticle spectrum of which shows tilted type-I and type-II BW nodes in the superconducting gap. In particular, the type-II BW node connects an electron-like and a hole-like Bogoliubov Fermi pocket. We find that the tilt modifies the LLs of the WSC in a similar manner to that of WSMs.
Specifically, for the type-II BW cones, when the projection of the tilt in the plane perpendicular to the magnetic field exceeds the Fermi velocity, the LLs collapse, and the quasiparticle states in the vortex lattice become Bloch waves. Ahn and Nagaosa have shown that intrinsic momentum-conserving optical excitations may occur in clean multi-band superconductors <cit.>. Indeed, optical responses in several clean superconducting systems have been reported <cit.>. In contrast, in the single-band superconductors, optical excitations usually require impurity scattering. For example, the ac anomalous Hall conductivity σ_H(ω) induced by impurities in chiral superconductors has been investigated <cit.>. Here, we study optical responses in the mixed state of the generic WSC, where vortices serve as impurities, and find similar optical characteristics to the magneto-optical conductivity of WSMs <cit.>. Specifically, the magneto-optical conductivity for untilted BW cones displays a series of peaks at photon frequencies ω_n ∝√(n)+√(n+1), with a linear background that arises from the dispersion of LLs in the magnetic direction. For a finite tilt in the plane perpendicular to the magnetic field, emerging transitions beyond the usual dipolar selection rule n→ n ± 1 give rise to new peaks in the optical conductivity. In particular, unique intraband transitions emerge in the magneto-optical conductivity of type-II BW cones, leading to peaks at low frequency that cannot be observed for type-I BW cones. § WEYL-SUPERCONDUCTOR MODEL We start from a heterostructure engineered by alternately stacking WSM and s-wave superconductor layers, as shown in Fig. <ref>. For simplicity, we consider a two-band magnetic WSM model in which the time-reversal symmetry is broken while the inversion symmetry is preserved. The Hamiltonian is written as <cit.> H_0( k) = t_0(σ_xsink_x a_0+σ_ysink_y a_0) + M( k)σ_z - μσ_0, M( k) = t_0 λ(β - cosk_x a_0 - cosk_y a_0) + t_z cosk_z a_0, where σ_i's represent the Pauli matrices, μ is the chemical potential. The model Eq. (<ref>) can be realized in a cubic lattice with the same lattice constant a_0 (setting a_0≡1) in the x, y and z directions. The magnetization β breaks the time-reversal symmetry. Unless otherwise specified, we take β=2, λ=1 and μ=0 in this work, setting t_z=t_0=1 as the energy unit. The term with λ (λ≠ 0) lifts the degeneracy at points (0,π), (π,0) and (π,π) in the k_z=±π2 planes, leaving only two Weyl nodes located at (0,0,±π2), respectively. In the WSM layers, conventional superconductivity is induced by the superconducting proximity effect, which is governed by a Bogoliubov-de Gennes (BdG) Hamiltonian. This Hamiltonian describes the coupling of electrons and holes by the pair potential and is written as H_BdG( k)= [ H_0( k) Δ; Δ^* -σ_yH_0^*(- k)σ_y ], where the superconducting pair potential Δ=Δ_0e^iϕ, with ϕ being the globally coherent superconducting phase. When Δ_0=0, the system is in the normal state, and the Hamiltonian (<ref>) describes a WSM expressed in the electron-hole representation. In this representation, each Weyl node is doubled into an electron and hole node. Then, the pair potential Δ mixes an electron and hole node that originate from different Weyl nodes into two BW nodes lying in the superconducting gap, with their spacing increasing as |Δ_0| increases. 
We note that this differs from the Meng-Balents model, in which the pair potential mixes an electron and hole node that originate from the same Weyl node, and each of the two pairs of BW cones is located at the same momentum point <cit.>. The Hamiltonian (<ref>) also describes, besides the two pairs of BW nodes located at (0,0,±arccos(±Δ_0)) (in this work, we set |Δ_0|<1), respectively, two nodal spherical surfaces with a radius of k_r=π2 (an approximate spherical surface due to the underlying superlattice) and a center at (0,0,0), as we can see in Fig. <ref>. These two nodal surfaces coincide in momentum space but have opposite energies (they do not cross the Fermi level for η=0) due to the particle-hole symmetry of Hamiltonian (<ref>). At low energies, the Weyl nodes in Hamiltonian (<ref>) can be described by a 2×2 linear Hamiltonian H_W( k) = ± v_Fħ k·σ (for simplicity, we have set an isotropic Weyl cone with Fermi velocity v_x=v_y=v_z≡ v_F, which can always be achieved by rescaling the axes, say, y^'=(v_x/v_y)y). However, the Hamiltonian for a generic Weyl node should be written as <cit.> ℋ_W( k) = ħη· kσ_0 ±ħ v_F k·σ, where the momentum k is measured from the Weyl point, η=(η_x,η_y,η_z) describes the tilt of the Weyl cone along the k_x, k_y and k_z directions, respectively. When |η|>v_F, the two bands that touch at the Weyl node cross the Fermi level, forming electron and hole pockets, respectively. The Weyl node then becomes a point that connects an electron and a hole pocket, and such points are called type-II Weyl nodes <cit.>. Following Eq. (<ref>), we introduce a tilt term into the lattice Hamiltonian (<ref>), which is written as Λ( k) = t_0 (η_x sink_x a_0 + η_z cosk_z a_0) σ_0. Without loss of generality, we have set η_y=0 in this work, as we can always orient the x-y axes so that the Weyl cones are tilted in the x-z plane. Therefore, the BdG Hamiltonian for the generic WSC is written as ℋ( k)= [ ℋ_0( k) Δ; Δ^* -σ_yℋ^*_0(- k)σ_y ], with ℋ_0( k) = H_0( k) + Λ( k), where Λ( k) can tilt the BW cones along the k_x and k_z directions. Consider a nonzero tilt η. The Hamiltonian (<ref>) describes four tilted BW cones located at (0, 0, ±arccos(±Δ_0/√(1-η^2_z))), respectively. Obviously, when |η_z|>1, the four BW nodes disappear, and the spectrum of Hamiltonian (<ref>) opens a gap, as shown in Fig. <ref> by the red dashed lines. However, such a scenario will not occur for |η_x|>1. A nonzero η_x tilts the dispersion of BW cones and nodal spherical surfaces along the k_x direction. In contrast, a nonzero η_z tilts the dispersion of BW cones along the k_z direction while it lifts the degeneracy of nodal spherical surfaces, except for the two points (0,0,±π2), as seen in Figs. <ref>, <ref> and <ref>. When |η|>1 (|η_z|<1), the four nodes evolve into the type-II BW nodes, at each of which an electron-like and a hole-like pocket touch, and the superconductor shows Bogoliubov Fermi surfaces (In the WSC model, Bogoliubov Fermi surfaces emerge when |η_x|+|η_z|>1, while BW cones retain type-I when |η|<1), as shown in Figs. <ref> and <ref>. § LANDAU LEVELS IN THE MIXED STATE We assume the heterostructure WSC is the second-kind superconductor that hosts overlapping vortices at the intermediate magnetic field, H_c1≪ B_0≪ H_c2, where the magnetic field B=B_0ẑ is shown in Fig. <ref>, with H_c1 and H_c2 being the lower and upper critical field, respectively. In this regime, vortex cores comprise a negligible fraction of the sample. 
Therefore, the magnitude of the order parameter is approximately uniform throughout the sample and equals Δ_0, while the phase ϕ is strongly position-dependent. Here, we calculate the LLs of the generic WSC in a magnetic field B using two methods: one is an analytical approach employing the low-energy continuum model, and the other is a numerical approach studying the full tight-binding Hamiltonian on the vortex lattice. §.§ Continuum formulation The perpendicular magnetic field B is produced by a vector potential A(x,y). The BdG Hamiltonian of the heterostructure WSC in this field is written as ℋ( k)= [ ℋ_0( k-e A) Δ_0e^iϕ; Δ_0e^-iϕ -σ_yℋ_0^*(- k-e A)σ_y ], where the single-particle Hamiltonian ℋ_0 is given by ℋ_0( k) = ħ v_F [k_xσ_x + k_yσ_y + (K^2-k^2_z)σ_z] - μσ_0 + Λ( k), Λ( k) = ħ[η_x k_x + η_z (K^2-k^2_z)] σ_0. At zero magnetic field, Eq. (<ref>) shows four tilted BW nodes at (0,0,±(K^2 ±Δ_0/ (1-η_z^2)^1/2)^1/2), respectively. Following the method in Ref. Pacholski.PRL2018, the phase factors e^iϕ in the pair potential can be removed from the off-diagonal components and incorporated into the single-particle Hamiltonian ℋ_0. This is accomplished by a gauge transformation (Anderson gauge) ℋ→𝒰^†ℋ𝒰 with 𝒰 = [ e^iϕ 0; 0 1 ]. Setting ħ≡ 1, the transformed BdG Hamiltonian ℋ̃ ( k)= [ ℋ_0 ( k+ a+m v_s) Δ_0; Δ_0 -σ_y ℋ_0^*(- k- a+m v_s)σ_y ], with the definitions a=12∇ϕ, m v_s=12∇ϕ-e A. Both the gauge field a(x,y) and the supercurrent velocity v_s(x,y) wind around the vortex cores, at positions R_n, due to ∇×∇ϕ=2πẑ∑_nδ( r- R_n). Besides, the supercurrent velocity should have vanishing divergence, i.e., ∇·∇ϕ=0 (we choose ∇· A=0). Rotating the spin, we can transform Hamiltonian (<ref>) into block diagonal form by ℋ̃→𝒱^†ℋ̃𝒱, with 𝒱=exp(12iϑν_y σ_z), tanϑ = -Δ_0v_F k_z, ϑ∈ (0, π), where ν is the Pauli matrix that acts on the electron-hole index. At low energies, the rotation gives rise to an effective 2× 2 Hamiltonian, which is written as ℋ_±( k) = v_F∑_α=x,y [± (k_α + a_α) + κ mv_s,α]σ_α + (K^2-k_z^2∓Δ^2Γ)σ_z ∓κμσ_0 + Λ_± ( k), Λ_± ( k) = [η_x(k_x+a_x±κ v_s,x) ±κη_z(K^2-k_z^2)]σ_0, where Γ=√(Δ^2+v_F^2k_z^2) and κ=-v_Fk_z/Γ. Usually, the superconducting pair potential satisfies Δ_0 ≪ t_0; if we consider a strong magnetic field, the terms containing Δ_0 in the off-diagonal block of the rotated Hamiltonian become irrelevant, and thus does not mix the LLs of the electron and hole cones. Considering μ=0, let's take ℋ_+( k) as for example. Around a BW node, we can expand ℋ_+( k) to the linear term of k_z, resulting in ℋ_BW = v_F Π·σ + κη_z k̅_z σ_0 + η_x (k_x + e𝒜_x) σ_0, where Π= k+e𝒜 with e𝒜= a +κ mv_s (𝒜_z=0). Here, k̅_z is measured from the BW point. We dropped the coefficients of k̅_z, which can be compensated for by scaling the k̅_z axis. Furthermore, when expanding K^2-k_z^2 - Δ_0^2Γ and K^2-k_z^2 at the BW node, the resulting coefficients of k_z have a slight difference that can be neglected because Δ^2_0 ≪Γ. For the case of η=0, Pacholski et al. have proved that the Hamiltonian (<ref>) can exhibit Dirac-LLs in the effective magnetic field ℬ=∇×𝒜 <cit.>. With a nonzero tilt η, the term η_x e𝒜_x σ_0 can be equivalent to the effect of an in-plane electric field; for instance, E_eff=η_x ℬ in the negative y direction when using the Landau gauge 𝒜=(-ℬy,0,0). The eigenvalues of Hamiltonian (<ref>) can be solved through a lengthy algebraic calculation <cit.>. 
Alternatively, we can rewrite the eigenvalue equation ℋ_BWΨ=EΨ as e^θ2σ_xℋ_BW e^θ2σ_xΨ̃ = E e^θσ_xΨ̃ using a hyperbolic transformation <cit.>, where Ψ̃=𝒩e^-θ2σ_xΨ with 𝒩 being a normalization constant. The term containing Π_x in the diagonal elements of the transformed Hamiltonian becomes zero when we set tanhθ = ζ = -η_x/v_F, which in turn eliminates the tilted term with η_x (the in-plane electric field). Then the equation (<ref>) can be reduced to v_F(Π̃_x σ_x + Π_y σ_y + k̅_z σ_z) Ψ̃ = γ(E-κη_z k̅_z)Ψ̃, where γ=(1-ζ^2)^-1/2, Π̃_x=Π_x/γ + γζ(κη_z k̅_z-E)/v_F, with a reduced effect magnetic field ℬ̃=ℬ/γ due to [Π̃_x,Π_y]=-i/l^2_ℬ̃=-i/γ l^2_ℬ. This is equivalent to a Lorentz boost in the x direction with the relativistic parameter tanhθ = ζ <cit.>. When |η_x/v_F|<1, the system is in the magnetic regime due to the vanishing in-plane electric field, in which the cyclotron orbits are still closed. Thus, we can easily obtain the LLs of Hamiltonian (<ref>) by Eq. (<ref>), reading E_n,±(k̅_z) = κη_z k̅_z ±1γ√(v_F^2 k̅^2_z + 2v_F^2 eℬn/γ), n ≥ 1, E_0(k̅_z) = κη_z k̅_z - 1γ v_F k̅_z, n=0, with reduced LL spacings due to ℬ̃=ℬ√(1-(η_x/v_F)^2). When |η_x/v_F|>1, it is obvious that γ is imaginary, resulting in the LLs collapse. Consequently, the system transitions into the electric regime, characterized by open orbits that prevent LL quantization. §.§ Lattice formulation In the mixed state, the LL spectrum of the WSC can be obtained by tight-binding calculations on a vortex lattice. In the x-y plane, the vortex lattice possesses a 2D square magnetic unit cell l_B× l_B, and each magnetic unit cell includes two singly quantized vortices, each of which carrying flux hc/2e, as shown in Fig. <ref>. The magnetic length l_B=Na_0 is defined as l_B=√(ϕ_0/B_0), with the flux quantum ϕ_0=hc/e. For simplicity, we set c=e=1 and N=4n+2, where n is a positive integer, in our numerical calculations <cit.>. The superconducting phase ϕ( r)≡ϕ(x,y) on the vortex lattice can be found in Ref. Melikyan.PRB2007, where it is expressed in closed form through the Weierstrass sigma function. Around each vortex, the superconducting phase ϕ( r) undergoes a 2π winding, as shown in Eq. (<ref>) in the continuum description. In the magnetic field B, the tight-binding Hamiltonian of the heterostructure WSC is written as ℋ_TB = ∑_⟨ r r^'⟩σσ^'( - t̃^σσ^'_ r r^' c^†_ rσ c_ r^'σ^' + H.c.) + (λ𝔅^σ -μ) ∑_ rσ c^†_ rσ c_ rσ + ∑_ r(Δ_ r c^†_ r↑ c^†_ r↓ + H.c.), where the sum is over the nearest neighbors ⟨ r r^'⟩ of the tight-binding lattice. Here, σ denotes the spin, 𝔅^↑=β and 𝔅^↓=-β, μ is the chemical potential. The hopping integral t̃^σσ^'_ r r^' = t^σσ^'_ r r^'exp(-i A_ r r^'). Considering the symmetric gauge, the magnetic vector potential A_ r r^' in the Peierls factor can be written as A_ r r+x̂=-π yΦ/ϕ_0 and A_ r r+ŷ=π xΦ/ϕ_0, with Φ being the magnetic flux through an elementary plaquette. In the heterostructure superconductor, the pairing field is given by Δ_ r = Δ_0e^iϕ( r), where the ansatz e^iϕ( r) for the s-wave pair phase factors is obtained through a self-consistent calculation. The Hamiltonian (<ref>) can be diagonalized by solving the BdG equation ℋ̂ψ_ r=Eψ_ r, where the lattice operator is written as ℋ̂ = [ ε̂_ r + h_0 -i σ_y Δ̂_ r; i σ_y Δ̂_ r^* - ε̂^*_ r - h_0 ] with h_0 = βσ_z - μσ_0, ε̂_ r is a 2× 2 matrix with elements (ε̂_ r)_σσ^'. 
The BdG Hamiltonian acts on the Nambu spinor ψ_ r, and for a wave function at the lattice site r, the operators (ε̂_ r)_σσ^' and Δ̂_ r are defined respectively as (ε̂_ r)_σσ^' f_ r = ∑_δ (𝒯_δ)_σσ^'× e^-i A_ r r + δ f_ r + δ, Δ̂_ r f_ r = Δ_0e^iϕ( r) f_ r, where the unit vector δ = ±x̂, ±ŷ, ±ẑ points to the six nearest neighbors in the lattice. In the heterostructure model, the hopping matrix is given by 𝒯_δ = - t_02 (ϰ_δσ·δ + ϰ̇_δσ_z + ϰ̈_δσ_0 ), where ϰ_± x,± y = i, ϰ_± z = 0, ϰ̇_± x,± y = -λ, ϰ̇_± z = -1, ϰ̈_δ = i η·δ for δ=± x,± y and ϰ̈_δ =1 for δ=± z. Although the vortices are periodically configured in the vortex lattice, the Hamiltonian (<ref>) remains invariant only when the discrete translations are accompanied by a gauge transformation (magnetic translations). As shown in Refs. <cit.>, the gauge transformation matrix (symmetric gauge) that transforms ℋ̂ into a periodic Hamiltonian ℋ̃ = 𝒰^†ℋ̂𝒰 is given by 𝒰= [ e^i/2ϕ( r) 0; 0 e^- i/2ϕ( r) ]. It should be noted that, on the vortex lattice, the symmetric transformation (<ref>) with the phase factors e^±i/2ϕ( r) is not single-valued due to the 2π winding of the superconducting phase ϕ( r) around each vortex. Therefore, the resulting Hamiltonian is also not single-valued. Such multiple valuedness could be handled by introducing compensating branch cuts, as shown in Fig. <ref>, in which each branch cut connects to two vortices in a magnetic cell, thus preserving the periodicity of the system. Specifically, the multiple valuedness of the phase factors in the transformed Hamiltonian ℋ̃ could be eliminated by using the equation e^- i/2ϕ( r) e^i/2ϕ( r^') e^-i A_ r r^' = z_2, r r^'× e^iV_ r r^', where z_2, r r^' = e^iϕ( r) + e^iϕ( r^')|e^iϕ( r) + e^iϕ( r^')|× e^- i/2ϕ( r) e^-i/2ϕ( r^'), e^iV_ r r^' = 1+e^i[ϕ( r^') - ϕ( r)]|1+e^i[ϕ( r^') - ϕ( r)]| e^-i A_ r r^'. Obviously, the factor e^iV_ r r^' is single-valued. The multiple valuedness of the transformed Hamiltonian is transferred into the Z_2 field that only has values of 1 and -1, depending on ϕ( r) and ϕ( r^'). Therefore we will set z_2, r r^'=1 on each bond except the ones crossing the branch cut where z_2, r r^'=-1. The transformed Hamiltonian is now ℋ̃ = [ ε̃_ r + h_0 -i σ_y Δ̃_ r; i σ_y Δ̃_ r^* - ε̃^*_ r - h_0 ], where the transformed lattice operators satisfy (ε̃_ r)_σσ^' u_ r = ∑_δ (𝒯_δ)_σσ^'× z_2, r r+δ× e^i V_ r r + δ u_ r + δ, Δ̃_ r u_ r = Δ_0 u_ r. Due to the periodicity of V_ r r^' <cit.> and the periodic arrangement of branch cuts on the vortex lattice, the resulting Hamiltonian ℋ̃ is invariant under discrete translations by the vortex lattice constant l_B. Thus, it can be diagonalized in the Bloch basis. By extracting the crystal wave vector k from the Bloch wave functions, we get ℋ( k)=e^-i k rℋ̃ e^i k r acting on the Hilbert space of periodic Nambu spinors. The numerically calculated quasiparticle spectra in the mixed state of the generic WSC, using the full tight-binding Hamiltonian (<ref>), are shown in Figs. <ref> and <ref>. For clarity, here we consider the λ=0 case. In this case, when Δ_0=0 (WSM phase), there are four Weyl nodes in each of the two planar BZs k_z=±π2, located at their center and corners, respectively. In the superconducting phase, as analyzed above, each Weyl node evolves into two BW nodes due to the pair potential. Therefore, in zero magnetic field, there are four BW nodes in each of the four planar BZs k_z=±arccos(±Δ_0) for the WSC model with λ=0, as shown in Fig. <ref> by the black dashed lines. 
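As a concrete check of the zero-field node positions quoted above, the following NumPy sketch builds the 4×4 BdG Bloch Hamiltonian from the model equations (here for λ=0 and μ=0) and scans k_z along the (k_x,k_y)=(0,0) line for gap closings. The parameter values (e.g. Δ_0=0.3, t_0=t_z=1, β=2, zero tilt) and the grid size are illustrative only.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h0(k, t0=1.0, tz=1.0, lam=0.0, beta=2.0, mu=0.0, eta=(0.0, 0.0)):
    """Normal-state Bloch Hamiltonian H_0(k) + tilt term Lambda(k); eta = (eta_x, eta_z)."""
    kx, ky, kz = k
    M = t0 * lam * (beta - np.cos(kx) - np.cos(ky)) + tz * np.cos(kz)
    tilt = t0 * (eta[0] * np.sin(kx) + eta[1] * np.cos(kz))
    return t0 * (sx * np.sin(kx) + sy * np.sin(ky)) + M * sz + (tilt - mu) * s0

def h_bdg(k, delta0=0.3, **kw):
    """BdG Hamiltonian with s-wave pairing: [[H_0(k), Delta], [Delta*, -sigma_y H_0*(-k) sigma_y]]."""
    he = h0(k, **kw)
    hh = -sy @ np.conj(h0((-k[0], -k[1], -k[2]), **kw)) @ sy
    return np.block([[he, delta0 * s0], [delta0 * s0, hh]])

# Scan k_z along (kx, ky) = (0, 0): local minima of the gap mark the BW nodes.
kz_grid = np.linspace(0, np.pi, 2001)
gap = np.array([np.min(np.abs(np.linalg.eigvalsh(h_bdg((0.0, 0.0, kz))))) for kz in kz_grid])
is_min = np.r_[False, np.diff(np.sign(np.diff(gap))) > 0, False]
print(kz_grid[is_min], np.arccos(0.3), np.arccos(-0.3))  # minima near arccos(+/-Delta_0)
```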
The tilt of these BW cones under η is, however, the same as in the λ=1 case, as seen in Fig. <ref>. Fig. <ref> shows the spectrum in the mixed state of the WSC with untilted BW cones. At low energies, these LLs are consistent with the expectation √(n)E_1. Note that this LL spectrum is plotted in the k_z=π2 plane (corresponding to arccos(0) rather than arccos(±Δ_0)), in which each level is eight-fold degenerate, coming from the eight BW cones in the k_z=arccos(±Δ_0) planes. Fig. <ref> exhibits the energies of the first few LLs (with k_x=k_y=0 and k_z=π2) versus η_x and η_z by the red circles and green triangles, respectively. The black dashed curves show the analytical results expressed in Eq. (<ref>). We can see that η_x squeezes the LL spacings while η_z does not. Two concrete examples are shown in Figs. <ref> and <ref>, corresponding to the tilt parameters η=(0.5,0,0) and η=(0,0,0.5), respectively. The spectra in the mixed state of the WSC with type-II BW cones are shown in Figs. <ref> and <ref>, corresponding to the tilt parameters η=(1.2,0,0) and η=(0.85,0,0.8), respectively. When the projection |η_|≡|η_x| of the tilt in the plane perpendicular to the magnetic field is greater than 1, the LLs collapse, as shown in Fig. <ref>; the spectrum then shows dispersing quasiparticle bands in all momentum directions. In contrast, the spectrum for the type-II BW cones with |η_|<1, see Fig. <ref>, always exhibits LLs that are dispersionless in the k_x-k_y plane, with decreased spacings due to the nonzero η_x. Fig. <ref> shows the LL spectrum as a function of the momentum k_z for the WSC with a tilt parameter η, where k_x=k_y=0. First, we plot the LLs for the case of η=0 in Fig. <ref>. Obviously, it is similar to that of the WSM, except that each zeroth LL evolves into two chiral quasiparticle levels due to the superconducting pairing. Figs. <ref> and <ref> show the LL spectrum of the WSC model with tilt parameters η=(0.65,0,0) and η=(1,0,0), respectively. We see once again that, as η_x increases, the LL spacings are squeezed, and the LLs collapse when η_x=1. This scenario has also been reported in WSMs; see the Supplemental Material of Ref. YuZhiMingPRL2016. Figs. <ref>, <ref> and <ref> show the LL spectrum of the WSC with nonzero η_z. As elucidated by Eq. (<ref>), the LLs are tilted in the k_z direction with a coefficient κη_z, while the spacings remain invariant at k_z=π2 when η_x=0, as shown in Fig. <ref>. In the critical case |η|=1, as seen in Fig. <ref>, the chiral zeroth LLs become flat in the spectrum. With a further increase in η_z, some LLs cross the Fermi level at certain k_z points, corresponding to the type-II BW cones, as shown in Fig. <ref>. § OPTICAL CONDUCTIVITY IN THE MIXED STATE LLs in the normal state can typically be detected through experiments such as quantum oscillations of various physical quantities and the quantum Hall effect. However, to observe LLs in the superconducting state, most of these experiments can hardly work or may give elusive results, for two reasons: (1) due to U(1) symmetry breaking, charge is not conserved, making it difficult for experiments that rely on charge conservation to produce interpretable results; and (2) due to particle-hole symmetry, LLs of Bogoliubov quasiparticles lie symmetrically about the Fermi level, which is always zero, so that LLs cannot cross the Fermi level unless Bogoliubov Fermi surfaces exist <cit.>. An alternative way to observe the LLs in the mixed state of superconductors is the optical conductivity measurement.
Magneto-optical conductivity has been shown to be a powerful tool in the studies of topological semimetals. For instance, the longitudinal magneto-optical conductivity of graphene displays a series of delta-peaks that reflect the dispersionless LLs of the massless chiral fermions, which are proportional to √(nB) <cit.>. In contrast, WSMs exhibit a similar magneto-optical conductivity, but with a linear background that reveals the dispersion of LLs of WSMs along the direction of the applied magnetic field <cit.>. Now we calculate the optical conductivity tensor in the vortex state of the generic WSC, which can be obtained from the Kubo formula <cit.>. In the LL basis and in the clean limit, we have σ_αα^'(ω)=- ie^22π l_B^2∑_nn^'∫d k(2π)^3f(E_n)-f(E_n^')E_n-E_n^'×V_α^nn^' V_α^'^n^' nω+E_n -E_n^' + iδ, where α(α')={x,y,z}, d k=dk_xdk_ydk_z, and f(ϵ)=1/(1+e^(ϵ - ϵ_f)/k_BT) is the Fermi-Dirac distribution function with ϵ_f=0 for Bogoliubov quasiparticles and k_B ≡ 1 in our numerical calculations. The velocity matrix element V_α^nn^' = ⟨ψ_n k |V_α| ψ_n^' k⟩ with | ψ_n k⟩ being the eigenvector of the n-th LL and the velocity operator given by <cit.> V_α ( k) = [ ∂_k_α H^ B_0( k-e A) 0; 0 ∂_k_αH_0^ B^*(- k-e A) ], where H^ B_0 can be obtained from the BdG Hamiltonian processed through the singular gauge transformation. In Fig. <ref>, we show the real part of the calculated longitudinal magneto-optical conductivity of the WSC with tilted type-I BW cones (|η|<1). The magneto-optical conductivity for the untilted BW cones, represented by the red line, shows peaks only at photon frequencies ω/E_1=√(n)+√(n+1) for n ≥ 0. This is because the primary contribution to the optical conductivity comes from the optical transitions between E_n,-=-√(n)E_1 and E_n± 1,+=√(n± 1)E_1 levels when the tilt η=0. Additionally, the optical conductivity displays a linear background, which results from the dispersion of the LLs in the k_z direction. When η≠ 0, the tilt in the k_z direction has little effect on the magneto-optical conductivity of the WSC, as indicated by the blue line. In contrast, the tilt η_x shifts the conductivity peaks to lower optical frequencies due to the compression of LL spacings. Moreover, the usual dipolar selections n→ n± 1 are violated due to η_x, which leads to the emergence of new peaks arising from the wide n→ m interband transitions in the optical conductivity. At low frequencies and small η_x, we can see well-pronounced lines corresponding to individual transitions, as depicted in Fig. <ref>. Conversely, in the case of high frequencies and large η_x, the lines of multiple allowed transitions coalesce <cit.>. Now let us consider the case |η|>1, where we impose the restriction |η_x|<1 for LL quantization. In this case, the WSC model possesses type-II BW cones, which can exhibit special LLs that cross the quasiparticle Fermi level, as we can see in Fig. <ref>. As a consequence, intraband optical transitions become feasible. Indeed, different from the |η|<1 case, the optical conductivity in the mixed state of the WSC with type-II BW cones shows unique intraband peaks at low frequency, as shown in Fig. <ref>. Such intraband conductivity peaks can serve as indicators to discern between type-I and type-II BW cones in WSCs. It should be noted that the conductivity peaks corresponding to individual transitions at higher frequency for the type-II BW cones are smeared out due to the larger η_x. 
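To illustrate how the Kubo sum above is evaluated in practice, the following schematic sketch computes Re σ_xx(ω) from a set of precomputed LL energies and velocity matrix elements, here filled with toy values for an untilted cone obeying the dipolar selection rule. The overall prefactor, the k_z dispersion, and the momentum integration are omitted, and the temperature and broadening values are assumptions, so this is a minimal illustration rather than the calculation actually used for the figures.

```python
import numpy as np

def re_sigma_xx(omega, E, Vx, T=0.02, delta=0.05):
    """Schematic clean-limit Kubo sum over Landau levels (prefactor and k_z integral omitted)."""
    f = 0.5 * (1.0 - np.tanh(E / (2.0 * T)))       # Fermi-Dirac occupation with E_F = 0
    sigma = 0.0 + 0.0j
    for n in range(len(E)):
        for m in range(len(E)):
            dE = E[n] - E[m]
            if abs(dE) < 1e-12:
                continue                            # skip n = m and degenerate pairs
            sigma += -1j * (f[n] - f[m]) / dE * Vx[n, m] * Vx[m, n] / (omega + dE + 1j * delta)
    return sigma.real

# Toy spectrum of an untilted BW cone: E_{n,s} = s*sqrt(n)*E_1, couplings only for n -> n±1.
E1, N = 1.0, 8
levels = [(n, s) for n in range(N + 1) for s in (+1, -1) if not (n == 0 and s == -1)]
E = np.array([s * np.sqrt(n) * E1 for n, s in levels])
Vx = np.array([[1.0 if abs(n - m) == 1 else 0.0 for m, _ in levels] for n, _ in levels])
omegas = np.linspace(0.1, 6.0, 400)
spectrum = [re_sigma_xx(w, E, Vx) for w in omegas]  # peaks near (sqrt(n)+sqrt(n+1))*E_1
```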
At last, we want to illustrate that, as analyzed above, the tilt η_x (in-plane electric field) may be eliminated by a Lorentz boost, leaving only a reduced magnetic field. Therefore, naively, the usual dipolar selection rule would not be violated in the optical conductivity. Actually, the novel conductivity peaks that arise from n→ m interband transitions can be interpreted as the result of rotation symmetry breaking caused by the Lorentz boost back to the laboratory frame <cit.>. § SUMMARY In this work, we proposed a WSC model that is engineered by alternately stacking WSM and s-wave superconductor layers. This model may exhibit tilted type-I and type-II BW nodes in the superconducting gap by tuning a tilt parameter η. The type-II BW node contacts an electron-like and a hole-like quasiparticle pocket when |η|>1. We investigated the quasiparticle spectrum in the mixed state of the WSC model. We found that, for type-I BW cones, tilted or not, the quasiparticle states in the mixed state of the WSC are always LLs. In contrast, for the type-II cones, the quasiparticle band structures depend on the angle between the direction of cone tilting and the magnetic field. When |η|>1, and the projection |η_| in the plane perpendicular to the external magnetic field is less than 1 (which is the Fermi velocity v_F in the continuum model), LL quantization is still possible. Otherwise, the quasiparticles in the mixed state of the WSC behave as Bloch waves. In other words, only the tilt η_ can squeeze the LL spacings and induce the LLs collapse, while η_z simply tilts the LLs in the k_z direction. We also studied the optical responses in the mixed state of the generic WSC. When η=0, the longitudinal magneto-optical conductivity of the WSC model shows peaks only at optical frequencies ω∝√(n) +√(n+1) for n ≥ 0, with a linear background due to the dispersion of its LLs in the k_z direction. This indicates that for the untilted BW cones, only the n→ n± 1 interband optical transitions are permitted, which is consistent with the usual dipolar selection rule in the magneto-optical conductivity of topological semimetals. When considering a nonzero tilt of BW cones in the k_x direction, the usual dipolar selections are violated, giving rise to novel conductivity peaks that arise from the n→ m interband transitions. Meanwhile, the tilt in the k_z direction produces little impact on the magneto-optical conductivity of the WSC. When |η|>1 (|η_x|<1), the BW cone tilts into a type-II cone, which exhibits unique intraband conductivity peaks at low frequency due to the presence of the LLs that cross the Fermi level. These intraband conductivity peaks provide a method to discern between type-I and type-II BW cones in WSCs. This work was supported by Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021B1515130007), Shenzhen Natural Science Fund (the Stable Support Plan Program 20220810130956001) and National Natural Science Foundation of China (Grant No. 12004442). apsrev4-2
http://arxiv.org/abs/2407.12516v1
20240717120900
Online Pseudo-Zeroth-Order Training of Neuromorphic Spiking Neural Networks
[ "Mingqing Xiao", "Qingyan Meng", "Zongpeng Zhang", "Di He", "Zhouchen Lin" ]
cs.NE
[ "cs.NE", "cs.AI", "cs.LG" ]
Online Pseudo-Zeroth-Order Training of Neuromorphic Spiking Neural Networks. Mingqing Xiao (National Key Lab of General AI, School of Intelligence Science and Technology, Peking University), Qingyan Meng (The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data), Zongpeng Zhang (Department of Biostatistics, School of Public Health, Peking University), Di He (National Key Lab of General AI, School of Intelligence Science and Technology, Peking University; Institute for Artificial Intelligence, Peking University), Zhouchen Lin (National Key Lab of General AI, School of Intelligence Science and Technology, Peking University; Institute for Artificial Intelligence, Peking University; Peng Cheng Laboratory). Correspondence to: Mingqing Xiao mingqing_xiao@pku.edu.cn, Zhouchen Lin zlin@pku.edu.cn. Keywords: Neuromorphic Computing, Spiking Neural Networks, Non-Backpropagation Training, Pseudo-Zeroth-Order. § ABSTRACT Brain-inspired neuromorphic computing with spiking neural networks (SNNs) is a promising energy-efficient computational approach. However, successfully training SNNs in a more biologically plausible and neuromorphic-hardware-friendly way is still challenging. Most recent methods leverage spatial and temporal backpropagation (BP), not adhering to neuromorphic properties. Despite the efforts of some online training methods, tackling spatial credit assignment with alternatives that perform comparably to spatial BP remains a significant problem. In this work, we propose a novel method, online pseudo-zeroth-order (OPZO) training. Our method only requires a single forward propagation with noise injection and direct top-down signals for spatial credit assignment, avoiding spatial BP's problems of symmetric weights and separate phases for layer-by-layer forward-backward propagation. OPZO solves the large variance problem of zeroth-order methods through the pseudo-zeroth-order formulation and momentum feedback connections, while having more guarantees than random feedback. Combined with online training, OPZO can pave the path to on-chip SNN training. Experiments on neuromorphic and static datasets with fully connected and convolutional networks demonstrate the effectiveness of OPZO, with performance similar to spatial BP as well as estimated low training costs. § INTRODUCTION Neuromorphic computing with biologically inspired spiking neural networks (SNNs) is an energy-efficient computational framework that has received increasing attention recently <cit.>. By imitating biological neurons that transmit spike trains, enabling sparse event-driven computation as well as parallel in-memory computation, efficient neuromorphic hardware has been developed to support SNNs with low energy consumption <cit.>. Nevertheless, supervised training of SNNs is challenging considering neuromorphic properties. While popular surrogate gradient methods can deal with the non-differentiability of discrete spikes <cit.>, they rely on backpropagation (BP) through time and across layers for temporal and spatial credit assignment, which is biologically problematic and would be inefficient on hardware. In particular, spatial BP suffers from the problems of weight transport and separate forward-backward stages with update locking <cit.>, and temporal BP is further infeasible for spiking neurons with the online property <cit.>. Considering learning in biological systems with unidirectional local synapses, maintaining reciprocal forward-backward connections with symmetric weights and separate phases of signal propagation is often viewed as biologically problematic <cit.>, and it also poses challenges for efficient on-chip training of SNNs.
Methods with only forward passes, or with direct top-down feedback signals acting as modulation in biological three-factor rules <cit.>, are more efficient and plausible, e.g., on neuromorphic hardware <cit.>. Some previous works explore alternatives for temporal and spatial credit assignment. To deal with temporal BP, online training methods are developed for SNNs <cit.>. With tracked eligibility traces, they decouple temporal dependency and support forward-in-time learning. However, alternatives to spatial BP still require deeper investigations. Most existing works mainly rely on random feedback <cit.>, but have limited guarantees and poorer performance than spatial BP. Some works explore forward gradients <cit.>, but they require an additional stage of heterogeneous signal propagation and usually perform poorly due to the large variance. Recently, <cit.> show that zeroth-order (ZO) optimization with simultaneous perturbation stochastic approximation (SPSA) can effectively fine-tune pre-trained large language models, but the method requires specially designed settings, as well as two forward passes, and does not work for general neural network training due to the large variance. On the other hand, local learning has been studied, e.g., with local readout layers <cit.> or forward-forward self-supervised learning <cit.>. It is complementary to global learning and can improve some methods <cit.>. As a crucial component of machine learning, efficient global learning alternatives with competitive performance remain an important problem. In this work, we propose a novel online pseudo-zeroth-order (OPZO) training method with only a single forward propagation and direct top-down feedback for global learning. We first propose a pseudo-zeroth-order formulation for neural network training, which decouples the model function and the loss function, and maintains the zeroth-order formulation for neural networks while leveraging the available first-order property of the loss function for more informative feedback error signals. Then we propose momentum feedback connections to directly propagate feedback signals to hidden layers. The connections are updated based on the one-point zeroth-order estimation of the expectation of the Jacobian, with which the large variance of zeroth-order methods can be solved and more guarantees are maintained compared with random feedback. OPZO only requires a noise injection in the common forward propagation, flexibly applicable to black-box or non-differentiable models. Built upon online training, OPZO enables training in a similar form as the three-factor Hebbian learning based on direct top-down modulations, paving paths to on-chip training of SNNs. Our contributions include: * We propose a pseudo-zeroth-order formulation that decouples the model and loss function for neural network training, which enables more informative feedback signals while keeping the zeroth-order formulation of the (black-box) model. * We propose the OPZO training method with a single forward propagation and momentum feedback connections, solving the large variance of zeroth-order methods and keeping low costs. Built on online training, OPZO provides a more biologically plausible method friendly for potential on-chip training of SNNs. * We conduct extensive experiments on neuromorphic and static datasets with both fully connected and convolutional networks, as well as on ImageNet with larger networks finetuned under noise. 
Results show the effectiveness of OPZO in reaching similar or superior performance compared with spatial BP and its robustness under different noise injections. OPZO is also estimated to have lower computational costs than BP on potential neuromorphic hardware. § RELATED WORK SNN Training Methods A mainstream method to train SNNs is spatial and temporal BP combined with surrogate gradient (SG) <cit.> or gradients with respect to spiking times <cit.>. Another direction is to derive equivalent closed-form transformations or implicit equilibriums between specific encodings of spike trains, e.g., (weighted) firing rates or the first time to spike, and convert artificial neural networks (ANNs) to SNNs <cit.> or directly train SNNs with gradients from the equivalent transformations <cit.> or equilibriums <cit.>. To tackle the problem of temporal BP, some online training methods are proposed <cit.> for forward-in-time learning, but most of them still require spatial BP. Considering alternatives to spatial BP, <cit.> apply random feedback, <cit.> propose online local learning, and <cit.> propose local tandem learning with ANN teachers. Different from them, we propose a new method for global learning while maintaining similar performance as spatial BP and much better results than random feedback. <cit.> and <cit.> study zeroth-order properties for each parameter or neuron to adjust surrogate functions or leverage a local zeroth-order estimator for the Heaviside step function, lying in the spatial and temporal BP framework. Differently, in this work, zeroth-order training refers to simultaneous perturbation for global network training without spatial BP. Alternatives to Spatial Backpropagation For effective and more biologically plausible global learning of neural networks, some alternatives to spatial BP are proposed. Target propagation <cit.>, feedback alignment (FA) <cit.>, and sign symmetric <cit.> avoid the weight symmetric problem by propagating targets or using random or only sign-shared backward weights, and <cit.> improves random weights by learning it to be symmetric with forward weights. They, however, still need an additional stage of sequential layer-by-layer backward propagation. Direct feedback alignment (DFA) <cit.> improves FA to directly propagate errors from the last layer to hidden ones. However, random feedback methods have limited guarantees and perform much worse than BP. Some recent works study forward gradients <cit.>, but they require an additional heterogeneous signal propagation stage for forward gradients and may suffer from biological plausibility issues and larger costs. There are also methods to train neural networks with energy functions <cit.> or use lifted proximal formulation <cit.>. Besides global supervision, some works turn to local learning, using local readout layers <cit.> or forward-forward contrastive learning <cit.>. This work mainly focuses on global learning and can be combined with local learning. Zeroth-Order Optimization ZO optimization has been widely studied in machine learning, but its application to direct neural network training is limited due to the variance caused by a large number of parameters. ZO methods have been used for black-box optimization <cit.>, adversarial attacks <cit.>, reinforcement learning <cit.>, etc., at relatively small scales. 
For neural network training with extremely high dimensions, recently, <cit.> theoretically show that the complexity of ZO optimization can exhibit weak dependencies on dimensionality considering the effective dimension, and <cit.> propose zeroth-order SPSA for memory-efficient fine-tuning pre-trained large language models with a similar theoretical basis. However, it requires specially designed settings (e.g., fine-tuning under the prompt setting is important for the effectiveness <cit.>) which is not applicable to general neural network training, as well as two forward passes. <cit.> propose a likelihood ratio method to train neural networks, but it requires multiple forward propagation proportional to the layer number in practice. <cit.> consider the finite difference for each parameter rather than simultaneous perturbation and propose pruning methods for improvement, limited in computational complexity. Differently, this work proposes a pseudo-zeroth-order method for neural network training from scratch with only one forward pass in practice for low costs and comparable performance to spatial BP. § PRELIMINARIES §.§ Spiking Neural Networks Imitating biological neurons, each spiking neuron keeps a membrane potential u, integrates input spike trains, and generates a spike for information transmission once u exceeds a threshold. u is reset to the resting potential after a spike. We consider the commonly used leaky integrate and fire (LIF) model with the dynamics of the membrane potential as: τ_mdu/dt = -(u-u_rest) + R· I(t), for u < V_th, with input current I, threshold V_th, resistance R, and time constant τ_m. When u reaches V_th at time t^f, the neuron generates a spike and resets u to zero. The output spike train is s(t) = ∑_t^fδ(t-t^f). SNNs consist of connected spiking neurons. We consider the simple current model I_i(t)=∑_j w_ijs_j(t) +b_i, where i,j represent the neuron index, w_ij is the weight and b_i is a bias. The discrete computational form is: { u_i[t + 1] = λ (u_i[t] - V_ths_i[t]) + ∑_j w_ijs_j[t] + b_i, s_i[t + 1] = H(u_i[t+1] - V_th). . Here H(x) is the Heaviside step function, s_i[t] is the spike signal at discrete time step t, and λ<1 is a leaky term (taken as 1-1/τ_m). For multi-layer networks, we use 𝐬^l+1[t] to represent the (l+1)-th layer’s response after receiving signals 𝐬^l[t] from the l-th layer, i.e., the expression is 𝐮^l+1[t+1]=λ(𝐮^l+1[t]-V_th𝐬^l+1[t])+𝐖^l𝐬^l[t+1]+𝐛^l. Online Training of SNNs We build the proposed OPZO on online training methods for forward-in-time learning. Here online training refers to online through the time dimension of SNNs <cit.>, as opposed to backpropagation through time. We consider OTTT <cit.> to online calculate gradients at each time by the tracked presynaptic trace 𝐚̂^l[t] = ∑_τ≤ tλ^t-τ𝐬^l[τ] and instantaneous gradient 𝐠_𝐮^l+1[t]=(∂ℒ[t]/∂𝐬^N[t]∏_i=0^N-l-2∂𝐬^N-i[t]/∂𝐬^N-i-1[t]∂𝐬^l+1[t]/∂𝐮^l+1[t])^⊤ as ∇_𝐖^lℒ[t]=𝐠_𝐮^l+1[t]𝐚̂^l[t]^⊤. In OTTT, the instantaneous gradient requires layer-by-layer spatial BP with surrogate derivatives for ∂𝐬^l[t]/∂𝐮^l[t]. The proposed OPZO, on the other hand, leverages only one forward propagation across layers and direct feedback to estimate 𝐠_𝐮^l+1[t] without spatial BP combining surrogate gradients. §.§ Zeroth-Order Optimization Zeroth-order optimization is a gradient-free method using only function values. 
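Before turning to zeroth-order estimators, the discrete LIF dynamics and the online presynaptic trace of the previous subsection can be made concrete with a minimal NumPy sketch (the layer sizes, input statistics, and variable names below are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
V_th, lam = 1.0, 0.5            # threshold and leaky term (lam = 1 - 1/tau_m)
n_in, n_out, T = 100, 10, 20    # toy layer sizes and number of time steps

W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
b = np.zeros(n_out)
u = np.zeros(n_out)             # membrane potentials
s_out = np.zeros(n_out)         # this layer's spikes at the previous step
trace = np.zeros(n_in)          # presynaptic trace \hat{a}^l[t]

for t in range(T):
    s_in = (rng.random(n_in) < 0.1).astype(float)    # toy Poisson-like input spikes
    # discrete LIF update with soft reset through the previous spikes
    u = lam * (u - V_th * s_out) + W @ s_in + b
    s_out = (u >= V_th).astype(float)                # Heaviside spike generation
    # online presynaptic trace: \hat{a}[t] = lam * \hat{a}[t-1] + s[t]
    trace = lam * trace + s_in
```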
A classical ZO gradient estimator is SPSA <cit.>, which estimates the gradient of parameters θ for ℒ(θ) on a random direction 𝐳 as: ∇^ZOℒ(θ) = ℒ(θ + α𝐳) - ℒ(θ - α𝐳)/2α𝐳≈𝐳𝐳^⊤∇ℒ(θ), where 𝐳 is a multivariate variable with zero mean and unit variance, e.g., following the multivariate Gaussian distribution, and α is a perturbation scale. Alternatively, we can use the one-sided formulation for this directional gradient: ∇^ZOℒ(θ) = ℒ(θ + α𝐳) - ℒ(θ)/α𝐳. The above formulations are two-point estimations, requiring two forward propagations to obtain an unbiased estimation of the gradient. When 𝐳 has i.i.d. components with zero mean and unit variance, in the limit α→ 0, ∇^ZOℒ(θ) is an unbiased estimator of ∇ℒ(θ), i.e., 𝔼_𝐳[∇^ZOℒ(θ)] = ∇ℒ(θ). Considering biological plausibility and efficiency, estimation with a single forward pass is more appealing. Actually, when considering expectation over 𝐳, ℒ(θ) in Eq. (<ref>) can be omitted. Therefore, we can obtain a single-point zeroth-order (ZO_sp) unbiased estimator: ∇^ZO_spℒ(θ) = ℒ(θ + α𝐳)/α𝐳. When 𝐳 has i.i.d. components with zero mean and unit variance, in the limit α→ 0, ∇^ZO_spℒ(θ) is an unbiased estimator of ∇ℒ(θ). The above formulation only requires a noise injection in the forward propagation, and the gradients can be estimated with a top-down feedback signal, as shown in Fig. <ref>(d). This is also similar to REINFORCE <cit.> and Evolution Strategies <cit.> in reinforcement learning, and is considered to be biologically plausible <cit.>. It is believed that the brain is likely to employ perturbation methods for some kinds of learning <cit.>. However, zeroth-order methods usually suffer from a large variance, since two-point methods only estimate gradients in a random direction and the one-point formulation has even larger variances. Therefore they hardly work for general neural network training. In the following, we propose our pseudo-zeroth-order method to solve the problem, also only based on one forward propagation with noise injection and top-down feedback signals. § ONLINE PSEUDO-ZEROTH-ORDER TRAINING In this section, we introduce the proposed online pseudo-zeroth-order method. We first introduce the pseudo-zeroth-order formulation for neural network training in Section <ref>. Then in Section <ref>, we introduce momentum feedback connections in OPZO for error propagation with zeroth-order estimation of the model. In Section <ref>, we demonstrate the combination with online training and a similar form as the three-factor Hebbian learning. Finally, we introduce more additional details in Section <ref>. §.§ Pseudo-Zeroth-Order Formulation Since zeroth-order methods suffer from large variances, a natural thought is to reduce the variance. However, ZO methods only rely on a scalar feedback signal to act on the random direction 𝐳, making it hard to improve gradient estimation. To this end, we introduce a pseudo-zeroth-order formulation. As we build our work on online training, we first focus on the condition of a single SNN time step. Specifically, we decouple the model function f(·; θ) and the loss function ℒ(·). For each input 𝐱, the model outputs 𝐨=f(𝐱; θ), and then the loss is calculated as ℒ(𝐨, 𝐲_𝐱), where 𝐲_𝐱 is the label for the input. Different from ZO methods that only leverage the function value of ℒ∘ f, we assume that the gradient of ℒ(·) can be easily calculated, while keeping the zeroth-order formulation for f(·; θ). 
This is consistent with real settings where gradients of the loss function have easy closed-form formulation, e.g., for mean-square-error (MSE) loss, ∇_𝐨ℒ(𝐨, 𝐲_𝐱) = 𝐨 - 𝐲_𝐱, and for cross-entropy (CE) loss with the softmax function σ, ∇_𝐨ℒ(𝐨, 𝐲_𝐱) = σ(𝐨) - 𝐲_𝐱, while gradients of f(·; θ) are hard to compute due to biological plausibility issues or non-differentiability of spikes. With this formulation, we can consider feedback (error) signals 𝐞 = ∇_𝐨ℒ(𝐨, 𝐲_𝐱) that carries more information than a single value of ℒ∘ f(𝐱), potentially encouraging techniques for variance reduction. In the following, we introduce momentum feedback connections to directly propagate feedback signals to hidden layers for gradient estimation. §.§ Momentum Feedback Connections We motivate our method by first considering the directional gradient by the two-point estimation in Section <ref>. With decoupled f(·; θ) and ℒ(·) as in the pseudo-zeroth-order formulation and Taylor expansion of ℒ(·), Eq. <ref> turns into: ∇^ZO_θℒ≈<∇_𝐨ℒ(𝐨, 𝐲_𝐱), 𝐨̃ - 𝐨>/α𝐳 = 𝐳Δ𝐨^⊤/α∇_𝐨ℒ(𝐨, 𝐲_𝐱), where 𝐨=f(𝐱; θ), 𝐨̃=f(𝐱; θ+α𝐳), and Δ𝐨 = 𝐨̃ - 𝐨. This can be viewed as propagating the error signal with a connection weight 𝐳Δ𝐨^⊤/α. To reduce the variance introduced by the random direction 𝐳, we introduce momentum feedback connections across different iterations and propagate errors as: 𝐌 λ𝐌 + (1 - λ) 𝐳Δ𝐨^⊤/α, ∇^PZO_θℒ = 𝐌∇_𝐨ℒ(𝐨, 𝐲_𝐱). The momentum feedback connections can take advantage of different sampled directions 𝐳, largely alleviating the variance caused by random directions. The above formulation only considers the directional gradient with two-point estimation, while we are more interested in methods with a single forward pass. Actually, 𝐳Δ𝐨^⊤/α can be viewed as an unbiased estimator of 𝔼_𝐱[𝐉_f^⊤(𝐱)], where 𝐉_f(𝐱) is the Jacobian of f evaluated at 𝐱, and 𝐌 can be viewed as approximating it with moving average. Therefore, we can similarly use a one-point method and 𝐌 has similar effects: 𝐌λ𝐌 + (1 - λ) 𝐳𝐨̃^⊤/α, where 𝐨̃^⊤/α is also an unbiased estimator of 𝔼_𝐱[𝐉_f^⊤(𝐱)]. When 𝐳 has i.i.d. components with zero mean and unit variance independent of inputs, in the limit α→ 0, 𝐳Δ𝐨^⊤/α and 𝐳𝐨̃^⊤/α are unbiased estimators of 𝐉_f^⊤(𝐱) given 𝐱, and further, are unbiased estimators of 𝔼_𝐱[𝐉_f^⊤(𝐱)]. This leads to our method as shown in Fig. <ref>(e). During forward propagation, a random noise α𝐳 is injected for each layer, and momentum feedback connections are updated based on 𝐳 and the model output 𝐨̃ (information from pre- and post-synaptic neurons). Then errors are propagated through the connections to each layer[A small difference is that errors here are ∇_𝐨ℒ(𝐨̃, 𝐲_𝐱). This can be viewed as performing Taylor expansion at 𝐨̃ for Eq. <ref>]. We consider node perturbation which is superior to weight perturbation[Node perturbation estimates gradients for 𝐱^l+1 considering the neural network formulation 𝐱^l+1=ϕ(𝐖^l𝐱^l) and calculates gradients as ∇_𝐖^lℒ=(∇_𝐱^l+1ℒ⊙ϕ'(𝐖^l𝐱^l))𝐱^l^⊤, which has a smaller variance than directly estimating gradients for weights.] <cit.>, then it has a similar form as the popular DFA <cit.>, while our feedback weight is not a random matrix but the estimated Jacobian (Fig. <ref>(c,e)). Then we analyze some properties of momentum feedback connections. 
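Before that analysis, the mechanism can be summarized in a minimal sketch of a momentum feedback connection for a single hidden layer under node perturbation (class and variable names are ours, and the 1/α factor of the update is kept explicit):

```python
import numpy as np

class MomentumFeedback:
    """Momentum feedback connection for one hidden layer (a sketch).

    M approximates E_x[J_f(x)]^T between this layer's perturbed activities
    and the network output, estimated from the injected direction z and the
    noisy output o_tilde (one-point form)."""

    def __init__(self, layer_dim, out_dim, lam=0.99999, alpha=0.2):
        self.M = np.zeros((layer_dim, out_dim))
        self.lam = lam
        self.alpha = alpha

    def update(self, z, o_tilde):
        # M <- lam * M + (1 - lam) * z o_tilde^T / alpha
        self.M = self.lam * self.M + (1.0 - self.lam) * np.outer(z, o_tilde) / self.alpha

    def feedback(self, err):
        # direct top-down error for this layer, with err = dL/do at the noisy output
        return self.M @ err
```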
We assume that 𝐌 can quickly converge to the estimated 𝔼_𝐱[𝐉_f^⊤(𝐱)] up to small errors ϵ compared with the optimization of parameters[Intuitively, for the given parameters of the model, 𝐌 is just approximating a linear matrix with projection to different directions, which can converge more quickly than the highly non-convex optimization of neural networks. For the gradually evolving parameters, since the weight update for overparameterized neural networks is small for each iteration, we can also expect 𝐌 to quickly fit the expectation of Jacobian.], and we focus on gradient estimation with 𝐌=𝔼_𝐱[𝐉_f^⊤(𝐱)]+ϵ. We show that it can largely reduce the variance of the single-point zeroth-order method (the proof and discussions are in Appendix <ref>). Let d denote the dimension of θ, m denote the dimension of 𝐨 (m ≪ d), B denote the mini-batch size, β=Var[z_i^2], V_θ=1/d∑_iVar[(∇_θℒ_𝐱)_i], S_θ=1/d∑_i𝔼[(∇_θℒ_𝐱)_i]^2, V_L=Var[ℒ_𝐱], S_L=𝔼[ℒ_𝐱]^2, V_𝐨=1/m∑_iVar[(∇_𝐨ℒ_𝐱)_i], S_𝐨=1/m∑_i𝔼[(∇_𝐨ℒ_𝐱)_i]^2, where ℒ_𝐱 is the sample loss for input 𝐱, and ∇_θℒ_𝐱 and ∇_𝐨ℒ_𝐱 are the sample gradient for θ and 𝐨, respectively. We further assume that the small error ϵ has i.i.d. components with zero mean and variance V_ϵ, and let V_𝐨, 𝐌=1/d∑_i,jVar[(∇_𝐨ℒ_𝐱)_j](𝔼_𝐱[𝐉_f^⊤(𝐱)])_i,j. Then the average variance of the single-point zeroth-order method is: 1/B((d+β)V_θ+(d+β-1)S_θ+1/α^2V_L+1/α^2S_L)+O(α^2), while that of the pseudo-zeroth-order method is: 1/B(mV_ϵV_𝐨+mV_ϵS_𝐨+V_𝐨, 𝐌). V_θ corresponds to the sample variance of spatial BP, and V_𝐨, 𝐌 would be at a similar scale as V_θ (see discussions in Appendix <ref>). Since V_ϵ is expected to be very small, the results show that the single-point zeroth-order estimation has at least d times larger variance than BP, while the pseudo-zeroth-order method can significantly reduce the variance, which is also verified in experiments. Besides the variance, another question is that momentum connections would take the expectation of the Jacobian over data 𝐱, which can introduce bias into the gradient estimation. This is due to the data-dependent non-linearity that leads to a data-dependent Jacobian, which can be a shared problem for direct error feedback methods without layer-by-layer spatial BP. Despite the bias, we show that under certain conditions, the estimated gradient can still provide a descent direction (the proof and discussions are in Appendix <ref>). Suppose that 𝐉_f^⊤(𝐱) is L_J-Liptschitz continuous and 𝐞(𝐱) is L_e-Liptschitz continuous, 𝐱_i is uniformly distributed, when ‖𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] ‖ > 1/2L_JL_eΔ_𝐱 + e_ϵ, where Δ_𝐱=𝔼_𝐱_i, 𝐱_j[‖𝐱_i - 𝐱_j ‖^2] and e_ϵ=‖ϵ𝔼_𝐱_i[𝐞(𝐱_i)]‖, we have <𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)], 𝔼_𝐱_i[𝐌𝐞(𝐱_i)] > > 0. There is a minor point that the above analyses mainly focus on differentiable functions while spiking neural networks are usually discrete and non-differentiable. Actually, under the stochastic setting, spiking neurons can be differentiable (see Appendix <ref> for details), and the deterministic setting may be roughly viewed as a special case. In practice, our method is not limited to differentiable functions but can estimate the expectation of the Jacobians of (potentially non-differentiable) black-box functions with zeroth-order methods. Our theoretical analyses mainly consider the well-defined continuous condition for insights. The pseudo-zeroth-order formulation may also be generalized to flexible hybrid zeroth-order and first-order systems for error signal propagation. 
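As a quick numerical illustration of the variance gap, the following toy comparison evaluates the single-point zeroth-order estimate and the pseudo-zeroth-order estimate of a hidden-layer gradient on a small linear model, under the same converged-𝐌 assumption used in the analysis (the model, dimensions, and names are our own toy construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha, trials = 256, 10, 0.01, 2000        # hidden dim, output dim, noise scale

A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(m, n))   # fixed "top" of a linear network
h = rng.normal(size=n)                                # hidden activity for one sample
y = rng.normal(size=m)
loss = lambda o: 0.5 * np.sum((o - y) ** 2)

true_grad = A.T @ (A @ h - y)                         # exact dL/dh
M = A.T                                               # assume M has converged to E[J^T]

zo_sp, pzo = [], []
for _ in range(trials):
    z = rng.standard_normal(n)
    o_tilde = A @ (h + alpha * z)                     # one noisy forward pass
    zo_sp.append(loss(o_tilde) / alpha * z)           # single-point zeroth-order estimate
    pzo.append(M @ (o_tilde - y))                     # pseudo-zeroth-order estimate

for name, est in (("ZO_sp", np.array(zo_sp)), ("PZO", np.array(pzo))):
    print(name, "mean per-dim variance:", est.var(axis=0).mean(),
          "bias norm:", np.linalg.norm(est.mean(axis=0) - true_grad))
```

On such a toy problem the per-dimension variance of the single-point estimate should be several orders of magnitude larger, in line with the d-dependent terms above.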
In this work, we mainly focus on training SNNs, and we further introduce the combination with online training in the following. §.§ Online Pseudo-Zeroth-Order Training We build the above pseudo-zeroth-order approach on online training methods to deal with spatial and temporal credit assignments. As introduced in Section <ref>, we consider OTTT <cit.> and replace its backpropagated instantaneous gradient with our estimated gradient based on direct top-down feedback. Then the update for synaptic weights has a similar form as the three-factor Hebbian learning <cit.> based on a direct top-down global modulator: Δ W_i,j∝â_i[t] ψ(u_j[t]) (-g_j^t), where W_i,j is the weight from neuron i to j, â_i[t] is the presynaptic activity trace, ψ(u_j[t]) is a local surrogate derivative for the change rate of the postsynaptic activity <cit.>, and g_j^t is the global top-down error (gradient) modulator. Here we leverage the local surrogate derivative because it can be well-defined under the stochastic setting (see Appendix <ref> for details) and better fits the biological three-factor Hebbian rule. For potentially asynchronous neuromorphic computing, there may be a delay in the propagation of error signals. <cit.> show that with convergent inputs and a certain surrogate derivative, it is still theoretically effective for the gradient under the delay Δ t, i.e., the update is based on â_i[t+Δ t] ψ(u_j[t+Δ t]) g_j^t. Alternatively, more eligibility traces can be used to store the local information, e.g., â_i[t] ψ(u_j[t]), and induce weight updates when the top-down signal arrives <cit.>. Our method shares these properties and we do not model delays in experiments for the efficiency of simulations. Moreover, the direct error propagations to different layers as well as the update of feedback connections in our method can be parallel, which can better take advantage of parallel neuromorphic computing than layer-by-layer spatial BP. §.§ Additional Details Combination with Local Learning There can be both global and local signals for learning in biological systems, and local learning (LL) can improve global learning approximation methods <cit.>. Our proposed method can be combined with local learning for improvement, and we consider introducing local readout layers for local supervision. We will add a fully connected readout for each layer with supervised loss. Additionally, for deeper networks, we can also introduce intermediate global learning (IGL) that propagates global signals from a middle layer to previous ones with OPZO. More details can be found in Appendix <ref>. About Noise Injection By default, we sample 𝐳 from the Gaussian distribution. As sampling from the Gaussian distribution may pose computational requirements for hardware, we can also consider easier distributions. For example, the Rademacher distribution, which takes 1 and -1 both with the probability 0.5, also meets the requirements. Additionally, 𝐳 is by default added to the neural activities for gradient estimation based on node perturbation. To further prevent noise perturbation from interfering with sparse spike-driven forward propagation of SNNs for energy efficiency, we may empirically change the noise perturbation after neuronal activities to perturbation before neurons (i.e., perturb on membrane potentials), while maintaining local surrogate derivatives for the spiking function. We will show in experiments that OPZO is robust to these noise injection settings. 
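Putting the pieces together, one online update for a single hidden layer might look as follows (a simplified sketch: the top-down modulator g is assumed to come from the momentum feedback connections, the surrogate derivative ψ is passed in as a callable, and noise is injected on the membrane potential as in the pre-neuron variant described above):

```python
import numpy as np

rng = np.random.default_rng(0)
V_th, lam, alpha, lr = 1.0, 0.5, 0.2, 1e-3

def opzo_layer_step(W, u, s_prev, s_in, trace, g, psi):
    """One time step for a hidden layer under OPZO (a simplified sketch).

    g   : direct top-down modulator for this layer's neurons
    psi : local surrogate derivative, e.g. a sigmoid-like function of u
    """
    z = rng.standard_normal(u.shape)
    # membrane update with noise injected on the membrane potential
    u = lam * (u - V_th * s_prev) + W @ s_in + alpha * z
    s = (u >= V_th).astype(float)                 # spikes sent downstream
    trace = lam * trace + s_in                    # presynaptic trace \hat{a}[t]
    # three-factor update: trace (pre) x surrogate derivative (post) x top-down error
    W = W - lr * np.outer(g * psi(u), trace)
    return W, u, s, trace
```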
Antithetic Variables across Time Steps Compared with the two-point zeroth-order estimation, the considered one-point method can have a much larger variance. To further reduce the variance, we can leverage antithetic 𝐳, i.e., 𝐳 and -𝐳, for every two time steps of SNNs. Since SNNs naturally have multiple time steps and the inputs for different time steps usually belong to the same object with similar distributions, this approach may roughly approximate the two-point formulation without additional costs. § EXPERIMENTS In this section, we conduct experiments on both neuromorphic and static datasets with fully connected (FC) and convolutional (Conv) neural networks to demonstrate the effectiveness of the proposed OPZO method. For N-MNIST and MNIST, we leverage FC networks with two hidden layers composed of 800 neurons, and for DVS-CIFAR10, DVS-Gesture, CIFAR-10, and CIFAR-100, we leverage 5-layer convolutional networks. We will also consider a deeper 9-layer convolutional network, as well as fine-tuning ResNet-34 on ImageNet under noise. We take T=30 time steps for N-MNIST, T=20 for DVS-Gesture, T=10 for DVS-CIFAR10, and T=6 time steps for static datasets, following previous works <cit.>. More training details can be found in Appendix <ref>. §.§ Comparison on Various Datasets We first compare the proposed OPZO with other spatial credit assignment methods on various datasets in Table <ref>, and all methods are based on the online training method OTTT <cit.> under the same settings. For FC networks, since there are only two hidden layers, we do not consider local learning settings. As shown in the results, the ZO_sp method fails to effectively optimize neural networks, while OPZO significantly improves the results, achieving performance at a similar level as spatial BP with SG. DFA with random feedback has a large gap with spatial BP, especially on convolutional networks, while OPZO can achieve much better results. When combined with local learning, OPZO has about the same performance as and even outperforms spatial BP with SG on neuromorphic datasets. These results demonstrate the effectiveness of OPZO for training SNNs to promising performance in a more biologically plausible and neuromorphic-friendly approach, paving paths for direct on-chip training of SNNs. Note that our method is a different line from most recent works with state-of-the-art performance <cit.> which are based on spatial and temporal BP with SG and focus on model improvement. We aim to develop alternatives to BP that adhere to neuromorphic properties, focusing on more biologically plausible and hardware-friendly training algorithms. So we mainly compare different spatial credit assignment methods under the same settings. §.§ Gradient Variance We analyze the gradient variance of different methods to verify that our method can effectively reduce variance for effective training. As shown in Fig. <ref>, the variance of ZO_sp is several orders larger than spatial BP with SG, leading to the failure of effective training. OPZO can largely reduce the variance, which is consistent with our theoretical analysis. As in practice, we remove the factor 1/α for the calculation of 𝐌 (see Appendix <ref> for details), which will scale gradients by the factor α, the practical gradient variance of OPZO can be smaller than spatial BP with SG, but they are at a comparable level overall. §.§ Effectiveness for Different Noise Injection Then we verify the effectiveness of OPZO for different noise injection settings as introduced in Section <ref>. 
As shown in Table <ref>, the results under different noise distributions and injection positions are similar, demonstrating the robustness of OPZO for different settings. §.§ Deeper Networks We further consider deeper networks and larger datasets. We first perform experiments on DVS-Gesture and CIFAR-100 with a deeper 9-layer convolutional network. As introduced in Section <ref>, for deeper networks, we can introduce techniques including local learning and intermediate global learning. As shown in Table <ref>, OPZO can also achieve similar performance as or outperform spatial BP with SG and significantly outperform DFA combined with these techniques. We also conduct experiments for fine-tuning ResNet-34 on ImageNet under noise. This task is on the ground that there can be hardware mismatch, e.g., hardware noise, for deploying SNN models trained on common devices to neuromorphic hardware <cit.>, and we may expect direct on-chip fine-tuning to better deal with the problem. Our method is more plausible and efficient for on-chip learning than spatial BP with SG and may be combined with other works aiming at high-performance training on common devices in this scenario. We fine-tune a pre-trained NF-ResNet-34 model released by <cit.>, whose original test accuracy is 65.15%, under the noise injection setting with different scales. As shown in Table <ref>, OPZO can successfully fine-tune such large-scale models, while DFA and ZO_sp fail. Spatial BP is less biologically plausible and is neuromorphic-unfriendly, so its results are only for reference. The results show that our method can scale to large-scale settings. §.§ Training Costs Finally, we analyze and compare the computational costs of different methods. We consider the estimation of the costs on potential neuromorphic hardware which is the target of neuromorphic computing with SNNs. Since biological systems leverage unidirectional local synapses, spatial BP (if we assume it to be possible for weight transport and separate forward-backward stage) should maintain additional backward connections between successive layers for layer-by-layer error backpropagation, leading to high memory and operation costs, as shown in Table <ref>. Differently, DFA and OPZO maintain direct top-down feedback with much smaller costs, and they are also parallelizable for different layers. ZO_sp may have even lower costs as only a scalar signal is propagated, which is shared by all neurons, but it cannot perform effective learning in practice. Also note that different from previous zeroth-order methods that require multiple forward propagations or forward gradient methods that require an additional heterogeneous signal propagation, our method only needs one common forward propagation with noise injection and direct top-down feedback, keeping lower operation costs which are about the same as DFA. We also provide training costs on common devices such as GPU in Appendix <ref>, and our method is comparable to spatial BP and DFA since GPU do not follow neuromorphic properties and we do not perform low-level code optimization as BP. It can be interesting future work to consider applications to neuromorphic hardware that is still under development <cit.>. For more experimental results, please refer to Appendix <ref>. § CONCLUSION In this work, we propose a new online pseudo-zeroth-order method for training spiking neural networks in a more biologically plausible and neuromorphic-hardware-friendly way, with low costs and comparable performance to spatial BP with SG. 
OPZO performs spatial credit assignment by a single common forward propagation with noise injection and direct top-down feedback based on momentum feedback connections, avoiding drawbacks of spatial BP, solving the large variance problem of zeroth-order methods, and significantly outperforming random feedback methods. Combining online training, OPZO has a similar form as three-factor Hebbian learning with direct top-down modulations, taking a step forward towards on-chip SNN training. Extensive experiments demonstrate the effectiveness and robustness of OPZO for fully connected and convolutional networks on static and neuromorphic datasets as well as larger models and datasets, and show the efficiency of OPZO with low estimated training costs. icml § DETAILED PROOFS In this section, we provide proofs for lemmas and propositions in the main text. §.§ Proof of Lemma <ref> and Lemma <ref> In the limit α→ 0, ∇^ZOℒ(θ) = 𝐳𝐳^⊤∇ℒ(θ). Since 𝐳 has i.i.d. components with zero mean and unit variance, we have 𝔼[𝐳𝐳^⊤]=𝐈. Therefore, 𝔼_𝐳[∇^ZOℒ(θ)]=∇ℒ(θ). Moreover, 𝔼_𝐳[∇^ZOℒ(θ)]=𝔼_𝐳[ℒ(θ + α𝐳) - ℒ(θ)/α𝐳]=𝔼_𝐳[ℒ(θ + α𝐳)/α𝐳] - ℒ(θ)/α𝔼_𝐳[𝐳] = 𝔼_𝐳[ℒ(θ + α𝐳)/α𝐳] = 𝔼_𝐳[∇^ZO_spℒ(θ)]. Therefore, 𝔼_𝐳[∇^ZO_spℒ(θ)] = ∇ℒ(θ). §.§ Proof of Lemma <ref> In the limit α→ 0, 𝐳Δ𝐨^⊤/α = 𝐳(𝐉_f(𝐱)𝐳)^⊤ = 𝐳𝐳^⊤𝐉_f^⊤(𝐱). Since 𝔼[𝐳𝐳^⊤]=𝐈, we have 𝔼_𝐳[𝐳Δ𝐨^⊤/α| 𝐱]=𝐉_f^⊤(𝐱). Then 𝔼_𝐱, 𝐳[𝐳Δ𝐨^⊤/α]=𝔼_𝐱[𝐉_f^⊤(𝐱)]. Also, 𝔼_𝐳[𝐳𝐨̃^⊤/α| 𝐱]=𝔼_𝐳[𝐳Δ𝐨^⊤/α + 𝐳𝐨^⊤/α| 𝐱]=𝔼_𝐳[𝐳Δ𝐨^⊤/α| 𝐱] + 𝔼_𝐳[𝐳]𝐨^⊤/α = 𝔼_𝐳[𝐳Δ𝐨^⊤/α| 𝐱]=𝐉_f^⊤(𝐱). Therefore, 𝔼_𝐱, 𝐳[𝐳𝐨̃^⊤/α]=𝔼_𝐱[𝐉_f^⊤(𝐱)]. §.§ Proof of Proposition <ref> We first consider the average variance of the two-point ZO estimation ∇_θ^ZOℒ= 𝐳𝐳^⊤∇_θℒ+O(α). Since Var(xy)=Var(x)Var(y)+Var(x)𝔼(y)^2+Var(y)𝔼(x)^2 for independent x and y, and 𝔼[z_i^2]=Var[z_i]+𝔼[z_i]^2=1, for each element of the gradient under sample 𝐱, we have: Var[(∇_θ^ZOℒ_𝐱)_i] = Var[∑_j=1^d z_i z_j (∇_θℒ_𝐱)_j] + O(α^2) =Var[z_i^2 (∇_θℒ_𝐱)_i] + ∑_j≠ iVar[z_i z_j (∇_θℒ_𝐱)_j] + O(α^2) =Var[z_i^2] Var[(∇_θℒ_𝐱)_i] + Var[z_i^2] 𝔼[(∇_θℒ_𝐱)_i]^2 + Var[(∇_θℒ_𝐱)_i] 𝔼[z_i^2]^2 +∑_j≠ i( Var[z_iz_j] Var[(∇_θℒ_𝐱)_j] + Var[z_iz_j] 𝔼[(∇_θℒ_𝐱)_j]^2 + Var[(∇_θℒ_𝐱)_j] 𝔼[z_iz_j]^2 ) + O(α^2) =(β+1) Var[(∇_θℒ_𝐱)_i] + β𝔼[(∇_θℒ_𝐱)_i]^2 + ∑_j≠ i( Var[(∇_θℒ_𝐱)_j] + 𝔼[(∇_θℒ_𝐱)_j]^2 ) + O(α^2) =βVar[(∇_θℒ_𝐱)_i] + (β-1)𝔼[(∇_θℒ_𝐱)_i]^2 + ∑_j=1^d( Var[(∇_θℒ_𝐱)_j] + 𝔼[(∇_θℒ_𝐱)_j]^2 ) + O(α^2). Taking the average of all elements, we obtain the average variance for each sample (denoted as mVar): mVar[∇_θ^ZOℒ_𝐱] = 1/d∑_i=1^d Var[(∇_θ^ZOℒ_𝐱)_i] = β/d∑_i=1^d Var[(∇_θℒ_𝐱)_i] + β-1/d∑_i=1^d 𝔼[(∇_θℒ_𝐱)_i]^2 + ∑_j=1^d( Var[(∇_θℒ_𝐱)_j] + 𝔼[(∇_θℒ_𝐱)_j]^2 ) + O(α^2) = (d+β)V_θ + (d+β-1)S_θ + O(α^2). For gradient calculation with batch size B, the sample variance can be reduced by B times, resulting in the average variance 1/B((d+β)V_θ + (d+β-1)S_θ) + O(α^2). Then we can derive the average variance of the single-point ZO estimation ∇_θ^ZO_spℒ=∇_θ^ZOℒ+ℒ_𝐱/α𝐳 for each sample: mVar[∇_θ^ZO_spℒ_𝐱] = mVar[∇_θ^ZOℒ_𝐱] + mVar[ℒ_𝐱/α𝐳] = (d+β)V_θ + (d+β-1)S_θ + O(α^2) + 1/α^2(Var[ℒ_𝐱]Var[z_i] + Var[ℒ_𝐱]𝔼[z_i]^2 + Var[z_i]𝔼[ℒ_𝐱]^2) = (d+β)V_θ + (d+β-1)S_θ + 1/α^2V_L + 1/α^2S_L + O(α^2). For batch size B, the average variance is 1/B((d+β)V_θ+(d+β-1)S_θ+1/α^2V_L+1/α^2S_L)+O(α^2). Next, we turn to the average variance of the pseudo-zeroth-order method ∇^PZO_θℒ = 𝐌∇_𝐨ℒ_𝐱 = (𝔼_𝐱[𝐉_f^⊤(𝐱)])+ϵ)∇_𝐨ℒ_𝐱. 
For each element, we have: Var[(∇_θ^PZOℒ_𝐱)_i] = Var[∑_j=1^m (𝔼_𝐱[𝐉_f^⊤(𝐱)]))_i,j(∇_𝐨ℒ_𝐱)_j] + Var[∑_j=1^m ϵ_i,j(∇_𝐨ℒ_𝐱)_j] = ∑_j=1^m (𝔼_𝐱[𝐉_f^⊤(𝐱)]))_i,jVar[(∇_𝐨ℒ_𝐱)_j] + ∑_j=1^m (V_ϵVar[(∇_𝐨ℒ_𝐱)_j] + V_ϵ𝔼[(∇_𝐨ℒ_𝐱)_j]^2). Taking the average of all elements, we have the average variance for each sample: mVar[∇_θ^PZOℒ_𝐱] = 1/d∑_i=1^d∑_j=1^m (𝔼_𝐱[𝐉_f^⊤(𝐱)]))_i,jVar[(∇_𝐨ℒ_𝐱)_j] + ∑_j=1^m (V_ϵVar[(∇_𝐨ℒ_𝐱)_j] + V_ϵ𝔼[(∇_𝐨ℒ_𝐱)_j]^2) = mV_ϵV_𝐨+mV_ϵS_𝐨+V_𝐨, 𝐌. Then for batch size B, the average variance is 1/B(mV_ϵV_𝐨+mV_ϵS_𝐨+V_𝐨, 𝐌). β=Var[z_i^2]=𝔼(z_i^4)-𝔼(z_i^2)^2=𝔼(z_i^4)-1 depends on the distribution of z_i. For the Gaussian distribution, 𝔼(z_i^4)=3 and therefore β=2. For the Rademacher distribution, 𝔼(z_i^4)=1 and therefore β=0. The zero mean assumption on the small error ϵ is reasonable since 𝐳𝐨̃^⊤/α is an unbiased estimator for 𝔼_𝐱[𝐉_f^⊤(𝐱)], so the expectation of the error can be expected to be zero. V_θ and V_𝐨, 𝐌 may not be directly compared considering the complex network function, but we may make a brief analysis under some simplifications. For (∇_θℒ_𝐱)_i=(𝐉_f^⊤(𝐱)∇_𝐨ℒ_𝐱)_i, let 𝐉_i,j and ∇_j denote (𝐉_f^⊤(𝐱))_i,j and (∇_𝐨ℒ_𝐱)_j for short, we have Var[(∇_θℒ_𝐱)_i] = Var[∑_j=1^m 𝐉_i,j∇_j] = ∑_j Var[𝐉_i,j∇_j] + ∑_j_1,j_2Cov[𝐉_i,j_1∇_j_1, 𝐉_i,j_2∇_j_2] = ∑_j[Var[∇_j]𝔼[𝐉_i,j^2] + Var[𝐉_i,j]𝔼[∇_j]^2 + 2Cov[𝐉_i,j, ∇_j]𝔼[𝐉_i,j]𝔼[∇_j]] + ∑_j_1,j_2Cov[𝐉_i,j_1∇_j_1, 𝐉_i,j_2∇_j_2]. If we ignore covariance terms and assume 𝔼[∇_j]=0, this is simplified to ∑_jVar[∇_j]𝔼[𝐉_i,j^2], and then V_θ is approximated as 1/d∑_i,jVar[∇_j]𝔼[𝐉_i,j^2], which has a similar form as V_𝐨, 𝐌=1/d∑_i,jVar[(∇_𝐨ℒ_𝐱)_j](𝔼_𝐱[𝐉_f^⊤(𝐱)])_i,j except that the second moment is considered. Under this condition, the scales of V_𝐨, 𝐌 and V_θ may slightly differ considering the scale of elements of 𝐉_f^⊤(𝐱), but overall, V_𝐨, 𝐌 would be at a similar scale as V_θ compared with the variances of the zeroth-order methods that are at least d times larger which is proportional to the number of intermediate neurons. §.§ Proof of Proposition <ref> Since 𝐉_f^⊤(𝐱) is L_J-Liptschitz continuous and 𝐞(𝐱) is L_e-Liptschitz continuous, we have ‖𝐉_f^⊤(𝐱_i) - 𝐉_f^⊤(𝐱_j)‖≤ L_J ‖𝐱_i - 𝐱_j ‖, ‖𝐞(𝐱_i) - 𝐞(𝐱_j)‖≤ L_e ‖𝐱_i - 𝐱_j ‖. Then with the equation that 1/2n^2∑_i,j(a_i-a_j)(b_i-b_j) = 1/n∑_i a_ib_i - 1/n^2∑_i,j a_ib_j, we have ‖𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] - 𝔼_𝐱_i[(𝔼_𝐱_j[𝐉_f^⊤(𝐱_j)]+ϵ)𝐞(𝐱_i)] ‖ = ‖1/n∑_𝐱_i𝐉_𝐟(𝐱_i)𝐞(𝐱_i) - (1/n∑_𝐱_i𝐉_𝐟(𝐱_i))(1/n∑_𝐱_i𝐞(𝐱_i)) - ϵ𝔼_𝐱_i[𝐞(𝐱_i)] ‖ = ‖1/2n^2∑_𝐱_i, 𝐱_j(𝐉_𝐟(𝐱_i) - 𝐉_𝐟(𝐱_j)) (𝐞(𝐱_i) - 𝐞(𝐱_j)) - ϵ𝔼_𝐱_i[𝐞(𝐱_i)] ‖ ≤ 1/2n^2∑_𝐱_i, 𝐱_j‖(𝐉_𝐟(𝐱_i) - 𝐉_𝐟(𝐱_j)) ‖‖(𝐞(𝐱_i) - 𝐞(𝐱_j)) ‖ + ‖ϵ𝔼_𝐱_i[𝐞(𝐱_i)] ‖ ≤ 1/2n^2∑_𝐱_i, 𝐱_j L_JL_e ‖𝐱_i - 𝐱_j ‖^2 + ‖ϵ𝔼_𝐱_i[𝐞(𝐱_i)] ‖ = 1/2 L_J L_e Δ_𝐱 + e_ϵ < ‖𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] ‖. Therefore, <𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)], 𝔼_𝐱_i[𝐌𝐞(𝐱_i)] > = ‖𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] ‖^2 - <𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)], 𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] - 𝔼_𝐱_i[𝐌𝐞(𝐱_i)]> ≥ ‖𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] ‖^2 - ‖𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] ‖‖𝔼_𝐱_i[𝐉_f^⊤(𝐱_i)𝐞(𝐱_i)] - 𝔼_𝐱_i[(𝔼_𝐱_j[𝐉_f^⊤(𝐱_j)]+ϵ)𝐞(𝐱_i)] ‖ > 0. L_J will depend on the non-linearity of the network, for example, L_J=0 for linear networks. This will influence the condition of effective descent direction considering the gradient scale as in the proposition. Note that these assumptions are not necessary premises, and we have verified the effectiveness of the method in experiments. 
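The two values of β quoted above are easy to verify numerically (a one-off sanity check; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
z_gauss = rng.standard_normal(N)
z_rad = rng.choice([-1.0, 1.0], size=N)

# beta = Var[z_i^2] = E[z_i^4] - 1: about 2 for Gaussian, exactly 0 for Rademacher
print("Gaussian   beta ~", np.var(z_gauss ** 2))
print("Rademacher beta =", np.var(z_rad ** 2))
```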
§ INTRODUCTION TO LOCAL SURROGATE DERIVATIVES UNDER THE STOCHASTIC SPIKING SETTING In this section, we provide more introduction to the stochastic spiking setting, under which spiking neurons can be locally differentiable and there exist local surrogate derivatives. Biological spiking neurons can be stochastic, where a neuron generates spikes following a Bernoulli distribution with the probability as the c.d.f. of a distribution w.r.t u[t]-V_th, indicating a higher probability for a spike with larger u[t]-V_th. That is, s_i[t] is a random variable following a {0, 1} valued Bernoulli distribution with the probability of 1 as p(s_i[t]=1)=F(u_i[t] - V_th). With reparameterization, this can be formulated as s_i[t] = H(u_i[t] - V_th - z_i) with a random noise variable z_i that follows the distribution specified by F. Different F corresponds to different distributions and noises. For example, the sigmoid function corresponds to a logistic noise, while the erf function corresponds to a Gaussian noise. Under the stochastic setting, the local surrogate derivatives can be introduced for the spiking function <cit.>. Specifically, consider the objective function which should turn to the expectation over random variables under the stochastic model. Considering a one-hidden-layer network with one time step, with the input 𝐱 connecting to n spiking neurons by the weight 𝐖 and the neurons connecting to an output readout layer by the weight 𝐎. Different from deterministic models with the objective function 𝔼_𝐱 [ℒ(𝐬)], where 𝐬=H(𝐮 - V_th), 𝐮=𝐖𝐱, under the stochastic setting, the objective is to minimize 𝔼_𝐱 [𝔼_𝐬∼ p(𝐬 | 𝐱, 𝐖) [ℒ(𝐬)]]. For this objective, the model can be differentiable and gradients can be derived <cit.>. We focus on the gradients of 𝐮, which can be expressed as: ∂/∂𝐮𝔼_𝐬∼ p(𝐬 | 𝐖) [ℒ(𝐬)] = ∂/∂𝐮∑_𝐬(∏_i p(𝐬_i | 𝐖)) ℒ(𝐬) = ∑_𝐬∑_i ( ∏_i'≠ ip(𝐬_i' | 𝐖) ) ( ∂/∂𝐮 p(𝐬_i | 𝐖) ) ℒ(𝐬). Then consider derandomization to perform summation over s_i while keeping other random variables fixed <cit.>. Let 𝐬_¬ i denote other variables except s_i. Since s_i is {0,1} valued, given 𝐬_¬ i, we have ∑_s_i∈{0, 1}∂ p(s_i | 𝐖)/∂𝐮ℒ([𝐬_¬ i, s_i]) = ∂ p(𝐬_i | 𝐖)/∂𝐮ℒ(𝐬) + ∂ (1 - p(𝐬_i | 𝐖))/∂𝐮ℒ(𝐬_↓ i) = ∂ p(𝐬_i | 𝐖)/∂𝐮(ℒ(𝐬) - ℒ(𝐬_↓ i)), where 𝐬 is a random sample considering s_i (the RHS is invariant of s_i), and 𝐬_↓ i denotes taking 𝐬_i as the other state for 𝐬. Given that ∑_s_i p(s_i | 𝐖)=1, Eq. (<ref>) is equivalent to ∂/∂𝐮𝔼_𝐬∼ p(𝐬 | 𝐖) [ℒ(𝐬)] = ∑_i ∑_𝐬_¬ i( ∏_i'≠ ip(𝐬_i' | 𝐖) ) ∑_s_i( ∂/∂𝐮 p(s_i | 𝐖) ) ℒ([𝐬_¬ i, s_i]) = ∑_i ∑_𝐬_¬ i( ∏_i'≠ ip(𝐬_i' | 𝐖) ) ∑_s_i p(s_i | 𝐖) ∂ p(𝐬_i | 𝐖)/∂𝐮(ℒ(𝐬) - ℒ(𝐬_↓ i)) = ∑_𝐬( ∏_ip(𝐬_i | 𝐖) ) ∑_i ∂ p(𝐬_i | 𝐖)/∂𝐮(ℒ(𝐬) - ℒ(𝐬_↓ i)) = 𝔼_𝐬∼ p(𝐬 | 𝐖)∑_i ∂ p(𝐬_i | 𝐖)/∂𝐮(ℒ(𝐬) - ℒ(𝐬_↓ i)). Taking one sample of 𝐬 in each forward procedure allows the unbiased gradient estimation as the Monte Carlo method. In this equation, considering the probability distribution, we have: ∂ p(𝐬_i | 𝐖)/∂𝐮 = F'(𝐮, V_th), where F' is the derivative of F, corresponding to a local surrogate gradient, e.g., the derivative of the sigmoid function, triangular function, etc. The term ℒ(𝐬) - ℒ(𝐬_↓ i) corresponds to the error, and the above derivation is also similar to REINFORCE <cit.>. However, since it relies on derandomization, simultaneous perturbation is infeasible in this formulation, and for efficient simultaneous calculation of all components, we may follow previous works <cit.> to tackle it by linear approximation: ℒ(𝐬) - ℒ(𝐬_↓ i)≈∂ℒ(𝐬)/∂𝐬_i, enabling simultaneous calculation given a gradient ∂ℒ(𝐬)/∂𝐬. 
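As a small illustration of the stochastic spiking model in this appendix, Bernoulli sampling with a sigmoid c.d.f. and the reparameterized form H(u - V_th - z) with logistic noise agree in distribution (a toy check; the membrane potential value and scale chosen below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
V_th, scale, N = 1.0, 0.25, 200_000
u = 1.1                                              # a fixed membrane potential

F = lambda x: 1.0 / (1.0 + np.exp(-x / scale))       # sigmoid c.d.f.

s_bern = rng.random(N) < F(u - V_th)                 # Bernoulli with p = F(u - V_th)
z = rng.logistic(loc=0.0, scale=scale, size=N)       # logistic noise
s_repar = (u - V_th - z) > 0                         # s = H(u - V_th - z)

print(F(u - V_th), s_bern.mean(), s_repar.mean())    # all approximately equal
```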
This approximation may introduce bias, while it can be small for over-parameterized neural networks with weights at the scale of 1/√(d_n), where d_n is the neuron number. This means that for the elements of the readout 𝐨=𝐎𝐬, flipping the state of 𝐬_i only has O(1/√(d_n)) influence. The deterministic model may be viewed as a special case, e.g., with noise always as zero, and <cit.> show that the gradients under the deterministic setting can provide a similar ascent direction under certain conditions. Therefore, spiking neurons can be differentiable under the stochastic setting and local surrogate derivatives can be well-defined, supporting our formulation as introduced in the main text. Our pseudo-zeroth-order method approximates ∂ℒ(𝐬)/∂𝐬, fitting the above formulation. Also note that the above derivation of surrogate derivatives is local for one hidden layer – for multi-layer networks, while we may iteratively perform the above analysis to obtain the commonly used global surrogate gradients, there can be expanding errors through layer-by-layer propagation due to the linear approximation error. Differently, our OPZO performs direct error feedback, which may reduce such errors. § MORE IMPLEMENTATION DETAILS §.§ Local Learning For experiments with local learning, we consider local supervision with a fully connected readout for each layer. Specifically, for the output 𝐬^l of each layer, we calculate the local loss based on the readout 𝐫^l = 𝐑^l𝐬^l as ℒ(𝐫^l, 𝐲). Then the gradient for 𝐬^l is calculated by the local loss and added to the global gradient based on our OPZO method, which will update synaptic weights directly connected to the neurons. For simplicity, we assume the weight symmetry for propagating errors through 𝐑^l in code implementations, because for the single linear layer, weights can be easily learned to be symmetric <cit.>, e.g., based on a sleep phase with weight mirroring <cit.> or based on our zeroth-order formulation with noise injection. <cit.> also show that a fixed random matrix can be effective for such kind of local learning. We also consider intermediate global learning (IGL) as a kind of local learning. That is, we choose a middle layer to perform readout for loss calculation, just as the last layer, and its direct feedback signal will be propagated to previous layers. For the experiments with a 9-layer network, we choose the middle layer as the fourth convolutional layer. §.§ Training Settings §.§.§ Datasets We conduct experiments on N-MNIST <cit.>, DVS-Gesture <cit.>, DVS-CIFAR10 <cit.>, MNIST <cit.>, CIFAR-10 and CIFAR-100 <cit.>, as well as ImageNet <cit.>. N-MNIST N-MNIST is a neuromorphic dataset converted from MNIST by a Dynamic Version Sensor (DVS), with the same number of training and testing samples as MNIST. Each sample consists of spike trains triggered by the intensity change of pixels when DVS scans a static MNIST image. There are two channels corresponding to ON- and OFF-event spikes, and the pixel dimension is expanded to 34×34 due to the relative shift of images. Therefore, the size of the spike trains for each sample is 34×34×2× T, where T is the temporal length. The original data record 300ms with the resolution of 1μ s. We follow <cit.> to reduce the time resolution by accumulating the spike train within every 3ms and use the first 30 time steps. The license of N-MNIST is the Creative Commons Attribution-ShareAlike 4.0 license. DVS-Gesture DVS-Gesture is a neuromorphic dataset recording 11 classes of hand gestures by a DVS camera. 
It consists of 1,176 training samples and 288 testing samples. Following <cit.>, we pre-possess the data to integrate event data into 20 frames, and we reduce the spatial resolution to 48× 48 by interpolation. The license of DVS-Gesture is the Creative Commons Attribution 4.0 license. DVS-CIFAR10 DVS-CIFAR10 is the neuromorphic dataset converted from CIFAR-10 by DVS, which is composed of 10,000 samples, one-sixth of the original CIFAR-10. It consists of spike trains with two channels corresponding to ON- and OFF-event spikes. We split the dataset into 9000 training samples and 1000 testing samples as the common practice, and we reduce the temporal resolution by accumulating the spike events <cit.> into 10 time steps as well as the spatial resolution into 48×48 by interpolation. We apply the random cropping augmentation similar to CIFAR-10 to the input data and normalize the inputs based on the global mean and standard deviation of all time steps. The license of DVS-CIFAR10 is CC BY 4.0. MNIST MNIST consists of 10-class handwritten digits with 60,000 training samples and 10,000 testing samples. Each sample is a 28×28 grayscale image. We normalize the inputs based on the global mean and standard deviation, and convert the pixel value into a real-valued input current at every time step. The license of MNIST is the MIT License. CIFAR-10 CIFAR-10 consists of 10-class color images of objects with 50,000 training samples and 10,000 testing samples. Each sample is a 32×32×3 color image. We normalize the inputs based on the global mean and standard deviation, and apply random cropping, horizontal flipping, and cutout <cit.> for data augmentation. The inputs to the first layer of SNNs at each time step are directly the pixel values, which can be viewed as a real-valued input current. CIFAR-100 CIFAR-100 is a dataset similar to CIFAR-10 except that there are 100 classes of objects. It also consists of 50,000 training samples and 10,000 testing samples. We use the same pre-processing as CIFAR-10. The license of CIFAR-10 and CIFAR-100 is the MIT License. ImageNet ImageNet-1K is a dataset of color images with 1,000 classes of objects, containing 1,281,167 training samples and 50,000 validation images. We adopt the common pre-possessing strategies to first randomly resize and crop the input image to 224×224, and then normalize it after the random horizontal flipping data augmentation, while the testing images are first resized to 256×256 and center-cropped to 224×224, and then normalized. The inputs are also converted to a real-valued input current at each time step. The license of ImageNet is Custom (non-commercial). §.§.§ Training Details and Hyperparameters For SNN models, following the common practice, we leverage the accumulated membrane potential of the neurons at the last classification layer (which will not spike or reset) for classification, i.e., the classification during inference is based on the accumulated 𝐮^N[T]=∑_t=1^T𝐨[t], where 𝐨[t]=𝐖^N-1𝐬^N-1[t]+𝐛^N which can be viewed as an output at each time step. The loss during training is calculated for each time step as ℒ(𝐨[t], 𝐲) following the instantaneous loss in online training with the loss function as a combination of cross-entropy (CE) loss and mean-square-error (MSE) loss <cit.>. For spiking neurons, V_th=1 and λ=0.5. We leverage the sigmoid-like local surrogate derivative, i.e., ψ(u)=1/a_1e^(V_th-u)/a_1/(1+e^(V_th-u)/a_1)^2 with a_1=0.25. For convolutional networks, we apply the scaled weight standardization <cit.> as in <cit.>. 
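For reference, the sigmoid-like local surrogate derivative with V_th = 1 and a_1 = 0.25 described above can be transcribed directly (the clipping is ours, added only to guard against overflow):

```python
import numpy as np

V_th, a1 = 1.0, 0.25

def psi(u):
    """psi(u) = (1/a1) * e^{(V_th - u)/a1} / (1 + e^{(V_th - u)/a1})^2."""
    x = np.clip((V_th - u) / a1, -50.0, 50.0)   # clip only to avoid overflow in exp
    e = np.exp(x)
    return e / (a1 * (1.0 + e) ** 2)

print(psi(np.array([0.0, 0.5, 1.0, 1.5])))      # peaks at u = V_th with value 1/(4 a1) = 1.0
```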
For our OPZO method, as well as the ZO method in experiments, α is set as 0.2 initially and linearly decays to 0.01 through the epochs, in order to reduce the influence of stochasticness for forward propagation. For fine-tuning on ImageNet under noise, α is set as the noise scale, and we do not apply antithetic variables across time steps, in order to better fit the noisy test setting (perturbation noise is before the neuron). In practice, we remove the factor 1/α for the calculation of 𝐌, because in the single-point setting, the scale of 𝐨̃ is larger than and not proportional to α (for the discrete spiking model, the scale of Δ𝐨 can also be large and not proportional to α). This only influences the estimated gradient with a scale α, and may be offset by the adaptive optimizer. Additionally, viewing 𝐌 as approximating its objective with gradient descent, the decreasing α may be viewed as the learning rate with a linear scheduler. For N-MNIST and MNIST, we consider FC networks with two hidden layers composed of 800 neurons, and for DVS-CIFAR10, DVS-Gesture, CIFAR-10, and CIFAR-100, we consider 5-layer Conv networks (128C3-AP2-256C3-AP2-512C3-AP2-512C3-FC), or 9-layer Conv networks under the deeper network setting (64C3-128C3-AP2-256C3-256C3-AP2-512C3-512C3-AP2-512C3-512C3-FC). We train our models on common datasets by the AdamW optimizer with learning rate 2e-4 and weight decay 2e-4 (except for ZO, the learning rate is set as 2e-5 on DVS-CIFAR10, MNIST, CIFAR-10, and CIFAR-100 for better results). The batch size is set as 128 for most datasets and 16 for DVS-Gesture, and the learning rate is cosine annealing to 0. For N-MNIST and MNIST, we train models by 50 epochs and we apply dropout with the rate 0.2 (except for ZO). For DVS-Gesture, DVS-CIFAR10, CIFAR-10, and CIFAR-100, we train models by 300 epochs. For DVS-CIFAR10, we apply dropout with the rate 0.1 (except for ZO). We set the momentum coefficient for momentum feedback connections as λ=0.99999 (except for DVS-Gesture, it is set as λ=0.999999 due to a smaller batch size), and for the combination with local learning, the local loss is scaled by 0.01. For fine-tuning ImageNet, the learning rate is set as 2e-6 (and 2e-7 for ZO) without weight decay, and the batch size is set as 64. The perturbation noise is before the neuron, i.e., added to the results after convolutional operations. For BP, we train 1 epoch. For DFA, ZO, and OPZO, we train 5 epochs. We observe that DFA and ZO fail after 1 epoch, so we only report the results after 1 epoch, and for OPZO, the results can continually improve, so we report the results after 5 epochs. The 1-epoch and 5-epoch results for OPZO are 63.04 and 63.39 under the noise scale of 0.1, and 59.50 and 60.96 under the noise scale of 0.15. The code implementation is based on the PyTorch framework, and experiments are carried out on one NVIDIA GeForce RTX 3090 GPU. Experiments are based on 3 runs of experiments with the same random seeds 2022, 0, and 1. For gradient variance experiments, the variances are calculated by the batch gradients in one epoch, i.e., var = ∑‖𝐠_i - 𝐠‖^2/n, where 𝐠_i is the batch gradient, 𝐠 is the average of batch gradients, and n is the number of batches multiplied by the number of elements in the gradient vector. § ADDITIONAL RESULTS §.§ Training Costs on GPU We provide a brief comparison of memory and time costs of different methods on GPU in Table <ref>. Our proposed OPZO has about the same costs as spatial BP with SG and DFA. 
If we exclude some code-level optimization and implement all methods in a similar fashion, DFA and OPZO are faster than spatial BP, which is consistent with the theoretical analysis of operation numbers. Note that this is only a brief comparison as we do not perform low-level code optimization for OPZO and DFA, for example, the direct feedback of OPZO and DFA to different layers can be parallel, and local learning for different layers can also theoretically be parallel, to further reduce the time. As described in the main text, the target of neuromorphic computing with SNNs would be potential neuromorphic hardware, and OPZO and DFA can have lower costs, while GPU generally does not follow the properties. Since neuromorphic hardware is still under development, we mainly simulate the experiments on GPU, and it can be future work to consider combination with hardware implementation. §.§ Firing Rate and Synaptic Operations For event-driven SNNs, the energy costs on neuromorphic hardware are proportional to the spike count, or more precisely, synaptic operations induced by spikes. Therefore, we also compare the firing rate (i.e., average spike count per neuron per time step) and synaptic operations of the models trained by different methods. As shown in Table <ref>, on both DVS-CIFAR10 and CIFAR-10, OPZO (w/ LL) achieves the lowest average total firing rate and synaptic operations, indicating the most energy efficiency. The results also demonstrate different spike patterns for models trained by different methods, and show that LL can significantly improve OPZO while can hardly improve DFA. It may indicate OPZO as a better more biologically plausible global learning method to be combined with local learning. §.§ Training Dynamics We present the training dynamics of different methods in Fig. <ref>. For fully connected networks on N-MNIST, OPZO achieves a similar convergence speed as spatial BP with SG, which is better than DFA. For convolutional networks on DVS-CIFAR10, OPZO itself is slower than spatial BP with SG but performs much better than DFA, and when combined with local learning, OPZO (w/ LL) achieves a similar training convergence speed as BP as well as a better testing performance. §.§ More Comparisons We note that some previous works propose to update error feedback matrices for DFA, e.g., DKP <cit.> that may have similar training costs as OPZO. DKP is based on the formulation of DFA and updates feedback weights similar to Kolen-Pollack learning, which calculates gradients for feedback weights by the product of the middle layer’s activation and the error from the top layer. Its basic thought is trying to keep the update direction of feedback and feedforward weights the same, but it may lack sufficient theoretical groundings. In this subsection, we compare DKP with DFA and the proposed OPZO method. As DKP is designed for ANN, we implement it for SNN with the adaptation of activations to pre-synaptic traces for feedback weight learning (similar to the update of feedforward weight). As shown in Table <ref>, compared with DFA, DKP can have around 2-3% performance improvement on CIFAR-10 and CIFAR-100, which is similar to the improvement in its paper. However, we observe that DKP cannot work well for neuromorphic datasets. And OPZO significantly outperforms both DKP and DFA on all datasets, also with more theoretical guarantees.
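Relatedly, the firing-rate and synaptic-operation metrics used in the energy comparison above can be computed from recorded spike trains along the following lines (a minimal sketch; the fan-out accounting and array shapes are our own simplification rather than the paper's exact counting):

```python
import numpy as np

def firing_rate(spikes):
    """Average spike count per neuron per time step; spikes has shape (T, n_neurons)."""
    return spikes.mean()

def synaptic_operations(spikes, fan_out):
    """Total synaptic operations: each spike triggers `fan_out` synaptic events downstream."""
    return spikes.sum() * fan_out

rng = np.random.default_rng(0)
spk = (rng.random((10, 512)) < 0.08).astype(float)   # toy spike trains: T=10, 512 neurons
print(firing_rate(spk), synaptic_operations(spk, fan_out=128))
```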
http://arxiv.org/abs/2407.12328v1
20240717060703
Forecast Analysis of Astrophysical Stochastic Gravitational Wave Background beyond general relativity: A Case Study on Brans-Dicke Gravity
[ "Ran Chen", "Zhao Li", "Yin-Jie Li", "Yi-Ying Wang", "Rui Niu", "Wen Zhao", "Yi-Zhong Fan" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "astro-ph.HE", "hep-th" ]
Forecast Analysis of Astrophysical Stochastic Gravitational Wave Background beyond general relativity: A Case Study on Brans-Dicke Gravity Ran Chen, Zhao Li, Yin-Jie Li, Yi-Ying Wang, Rui Niu, Wen Zhao, Yi-Zhong Fan =================================================================================== § INTRODUCTION As one of the most significant predictions of general relativity (GR), gravitational waves (GWs) open new avenues for exploring the nature of gravity and cosmology <cit.> after the successful detections of individual compact binary sources over the past few years. These confirmed detections are verified by matching the resolved waveforms generated by individual, point-like sources to the detector data streams, and they constitute only a tiny fraction of the gravitational-wave sky. In addition, stochastic GW backgrounds (SGWBs) <cit.> are generated by multiple point sources or extended sources, presenting as an incoherent superposition of a vast collection of unresolved signals. Recent detections of nanohertz SGWBs have sparked debate about whether their origins are astrophysical or cosmological <cit.>. In contrast, high-frequency astrophysical stochastic gravitational-wave backgrounds (AGWBs) originate from distant compact binaries with low signal-to-noise ratios (SNRs) and are expected to be verified by ground-based GW detector networks <cit.> in the near future. AGWBs serve as a window for probing various questions concerning the nature of gravity, the population properties of compact binaries, and cosmology. However, neither the stochastic backgrounds nor the scalar and vector modes predicted by alternative theories of gravity were detected in the third LIGO-Virgo-KAGRA observing run (O3) <cit.>. With their enhanced sensitivity and angular resolution, the next-generation ground-based detectors, such as the Cosmic Explorer (CE) <cit.> and the Einstein Telescope (ET) <cit.>, will have the potential to detect the AGWB and extract valuable information about gravity and cosmology <cit.>. Although GR is widely acknowledged as the most successful theory of gravitation, there are various lines of theoretical and experimental evidence that challenge this standard model <cit.>. Theories of gravity beyond GR predict several characteristic features in GWs, including modified energy loss, polarizations, dispersion, and propagation speed. Direct constraints on the contributions of these extra polarizations to the SGWB have been investigated in <cit.>. The corrections these features introduce to the GR waveforms can alter the spectral shape of the AGWB. Several modified gravity theories <cit.> have been analyzed through the deviations they induce in the expected AGWB signal. The velocity birefringence effects in the AGWB have been used to place constraints on parity-violating theories <cit.>. By incorporating an additional scalar field, scalar-tensor theories are often employed to explain the late-time acceleration of the Universe <cit.> and cosmic inflation <cit.>. These theories have garnered considerable attention in the literature over recent years <cit.>. As the simplest and most extensively studied example, BD theory describes the curvature of spacetime with the Ricci scalar non-minimally coupled to a massless scalar field that sets the effective Newtonian gravitational constant <cit.>. The extra degree of freedom introduced by the massless scalar field results in an additional "breathing" polarization of GWs and accompanying scalar radiation.
Using the observations of the Shapiro time delay <cit.>, the Cassini mission's measurement of the parameterized post-Newtonian parameter provided the most stringent result with ω_ BD > 40000 at the 2σ level. On larger scales, using Cosmic Microwave Background data from Planck, the BD parameter ω_ BD was constrained to ω_ BD > 692 at the 99% confidence level <cit.>. The prospect constrain of ω_ BD has been investigated by the Fisher matrix method with future space-based gravitational wave detectors, specifically the Laser Interferometer Space Antenna (LISA) <cit.>. Based on the real gravitational wave signals from binary compact star coalescences, a joint analysis of GW200115 and GW190426_152155 constrained ω_ BD > 40 at the 90% confidence level <cit.>. Analyzing the GW200115 data, an observational constraint of ω_ BD > 81 by exploiting a waveform for a mixture of tensor and scalar polarizations <cit.>. Additionally, by incorporating the dominant (2, 2) mode correction and higher harmonic revisions, similar results were obtained with ω_ BD > 5 at the 90% level <cit.>. Therefore, it remains meaningful to investigate the future constraints of ω_ BD based on the AGWB signals. Because the AGWB represents the integrating signals of various sources in the universe, it concerns wide spatial and temporal scales. Moreover, the signals of the AGWB indicate the response of non-tensor modes, providing the possibility to test specific gravitational models from an independent perspective. As a result, a particularly suitable candidate for such investigations is BD theory. Unlike previous studies that employ power-law models to search for the SGWB, our work comprehensively accounts for waveform corrections during the stages of the generation and propagation of GW for binary compact star systems. Under BD gravity, these corrections can be integrated into the spectrum of the AGWB. Based on the third-generation gravitational wave detectors, we explore the prospective constraints of BD gravity by end-to-end analyses of AGWB simulations with fully specified parent binary black hole (BBH) populations. Furthermore, we prospect the potential of detecting GW scalar polarization modes that are produced by binary neutron star (BNS) systems. This paper is organized as follows. In Section <ref>, we provide a brief review of BD theory and derive the cosmological evolution of the scalar field. Section <ref> presents the corrected energy density of GWs under BD gravity, including the effects of GW generation and propagation. In Section <ref>, we illustrate the calculating method of the fractional GW energy density spectrum generated by the cumulative contributions of abundant sources. Section <ref> shows the detailed simulation of BBH and BNS at first. Then, the results of the constraints of BD gravity and the detected possibility of the scalar GW are presented. In Section <ref>, we provide our conclusions and discussion. Throughout this work, we adopt the metric signature (-, +, +, +) and utilize the unit convention c = 1. § BRANS-DICKE GRAVITY As a prominent scalar-tensor theory of gravity. The BD theory is one of the most well-established and extensively studied substitutes for GR <cit.>. Under the framework of BD gravity, the Newtonian gravitational constant is revised to a temporal variable. 
§.§ Foundations of Brans-Dicke gravity The action of the BD theory in the Jordan frame can be expressed as S=1/16 π∫ d^4 x √(-g)[ϕ R-ω_ BD/ϕ g^μν(∂_μϕ)(∂_νϕ)] +S_m[g_μν, Ψ_m], where ϕ represents the scalar field, g_μν represents the metric of the tensor field, ω_ BD which denotes the scalar-tensor coupling coefficient is a constant for massless BD gravity. GR is recovered from BD theory in the limit ω_ BD→+∞. S_m represents the action of the matter, which does not depend on the scalar field ϕ. Varying the action with respect to g^μν and to ϕ leads to field equation R_μν-1/2 g_μν R =8 π/ϕ[T_μν^(m)+T_μν^(ϕ)]+1/ϕ(∇_μ∇_νϕ-g_μν□_g ϕ), □_g ϕ =1/3+2 ω_ BD8 π T^(m). Here, □_g ≡ g^αβ∇_α∇_β is the d’Alembertian compatible with the metric, the energy-momentum tensor for the matter fields, denoted as T_μν^(m), and the energy-momentum tensor for the scalar field, denoted as T_μν^(ϕ), can be expressed as T_μν^(m)=-2/√(-g)δ S_m/δ g^μν, and T_μν^(ϕ)=1/8 πω_ BD/ϕ[(∂_μϕ)(∂_νϕ)-1/2 g_μν g^αβ(∂_αϕ)(∂_βϕ)]. §.§ Cosmology in Brans-Dicke gravity In order to consider the propagation of GWs, it is crucial to determine the evolution of the scalar field ϕ. We focus on the FLRW metric d s^2=-d t^2+a^2 δ_i j d x^i d x^j, where a(t) is the scale factor, and the Hubble parameter is defined as H=ȧ / a (dotdenoting d / d t). To ensure the consistency of the BD theory with observational tests of cosmology, we consider the cosmological constant Λ term which leads extra action in Eq. (<ref>) as S_Λ=1/16 π∫ d^4 x √(-g)(-2ϕΛ). The variation of the action with respect both the metric g^μν and the scalar field ϕ, yielding the BD field equations 3 H^2-1/2ω_ BDϕ̇^2/ϕ^2+3 H ϕ̇/ϕ=8π/ϕ(ρ_m+ρ_Λ), 2 Ḣ+3 H^2+1/2ω_ BDϕ̇^2/ϕ^2+2 H ϕ̇/ϕ+ϕ̈/ϕ=-8π/ϕ p_Λ, ϕ̈+3 H ϕ̇=8π/2 ω_BD+3(ρ_m+ρ_Λ-3 p_Λ), where ρ_m denotes matter density and ρ_Λ=-p_Λ= Λϕ/8π. To proceed with solving the above questions, we introduce the matter density parameter Ω_m=8 πρ_m/3 H^2 ϕ and the cosmological constant density parameter Ω_Λ=8 πρ_Λ/3 H^2ϕ. Additionally, we define the deceleration parameter q, the scalar field deceleration parameter q_ϕ, and the parameter ψ as q=-ä/a H^2, q_ϕ=-ϕ̈/ϕ H^2, ψ=ϕ̇/ϕ H. With the above notation, dividing the three equations in Eq. (<ref>) by H^2 leads to the following expressions: 1+ψ - 1/6ω_ BDψ^2 =Ω_m + Ω_Λ, 1-2q-q_ϕ+2ψ+ω_ BD/2ψ^2 = 3Ω_Λ, (2ω_ BD+3)(ψ-1/3q_ϕ)=Ω_m+2Ω_Λ. To ascertain the evolution of the scalar field, we combine these equations and obtain q-(1+ω_BD)q_ϕ+(2+3ω_BD)ψ=2. We assume the solution of this equation have the form ϕ = ϕ(0)a^f, where ϕ(0) is the value of ϕ at present time and f=f(ω_ BD) is arbitrary function of ω_ BD. Thus the scalar field deceleration parameter q_ϕ and parameter ψ can be written as ψ=f ȧ/aH=f, q_ϕ=f-f^2-fä/aH^2. Taking these into Eq. (<ref>), and one has the expression of the function f (-1+f(1+ω_BD))(2+f-q)=0⇒ f =1/1+ω_BD. Recalling that the scale factor can be related to the redshift in cosmology through the equation a=a_0/1+z, where a_0 represents the scale factor at present and is conventionally chosen to be a_0=1. We finally obtain the evolution of both the scalar field and the gravitational constant read as ϕ(z) = ϕ(0) [1/(1+z)]^1/1+ω_ BD. It is worth noting that in this subsection, ϕ represents the background scalar field. However, in the next section, we will use ϕ_0 to distinguish it. 
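As a quick numerical check of this scaling law, the fractional evolution of the background scalar field between a source redshift and today can be evaluated directly. The following is a minimal Python sketch; the values of ω_BD are chosen purely for illustration.

import numpy as np

def phi_ratio(z, omega_bd):
    # phi(z)/phi(0) for the power-law solution phi = phi(0) * a^{1/(1+omega_BD)}
    return (1.0 + z) ** (-1.0 / (1.0 + omega_bd))

for omega_bd in (10.0, 1.0e3, 4.0e4):
    print(omega_bd, phi_ratio(z=1.0, omega_bd=omega_bd))
# As omega_BD grows, the ratio tends to 1: the effective gravitational constant
# G = 1/phi becomes constant and the GR limit is recovered.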
§ GRAVITATIONAL WAVEFORMS As usual, we perform the first-order perturbation around the Minkowskian metric and a constant expectation value for the scalar field by g_μν(x) =η_μν+h_μν(x), ϕ(x) =ϕ_0+δϕ(x), with h_μν≪ g_μν and δϕ≪ϕ_0, the gravitational constant is related to the background scalar field by: G=1/ϕ_0=G(0)(1+z)^1/1+ω_ BD, where G(0) is the value of the Newtonian gravitational constant at present. For convenience, we define Φ≡-δϕ / ϕ_0 and introduce the “reduced field" <cit.> θ_μν≡ h_μν-1/2η_μν h+Φη_μν, with h≡η^μνh_μν. By applying the Lorentz gauge ∂_μθ^μν=0, the equation of motion of the reduced field is represented as <cit.> □_ηθ_μν=0, and □_ηΦ=0, where □_η≡∂_μ∂^μ. Beyond plus and cross polarizations in GR, the extra scalar field brings a new degree of freedom. Therefore, the metric perturbation h_μν may be locally decomposed into spin-2 and spin-0 components along the GW propagation direction n̂ h_ij(x)= h_+(x)e_ij^+(n̂) +h_×(x)e_ij^×(n̂) +h_b(x) e_ij^b(n̂), where e_ij^+, e_ij^×, and e_ij^b denote plus, cross, and breathing polarization tensors, respectively  <cit.>. The breathing mode relates the scalar perturbation by h_b(x)=Φ(x). The gravitational wave stress-energy tensor is given by <cit.> T_μν^ GW=ϕ_0/32π[⟨∇_μ h_αβ∇_ν h^αβ⟩+4(1+ω_BD)⟨∇_μΦ∇_νΦ⟩], where ⟨…⟩ implies an average over a region on the order of several wavelengths. The gauge-invariant gravitational wave energy density ρ_ GW≡ T_00^ GW in Lorentz gauge then becomes ρ_ GW=ϕ_0/16 π[⟨ḣ^2_++ḣ^2_×⟩ + (3+2ω_ BD) ⟨ḣ^2_ b⟩], in terms of the plus, cross, and breathing polarizations. Here, the dot denotes the time derivative. Since we investigate the astrophysical SGWB produced by the binary systems in BD theory, the GW energy density modification is raised from two aspects, GW generation and propagation. Firstly, the scalar-gravity coupling modifies the dynamics of the binary system, and then the plus and cross polarizations are modulated in the Newtonian order. Simultaneously, the extra scalar polarization in -1 post-Newtonian order carries binding energy from binary systems, enhancing the GW energy density. Secondly, the background scalar field evolves as the cosmic expansion in BD theory, obeying Eq. (<ref>). This modifies the definition of luminosity distance and makes the GW decay slower than the GR case. To study the effects of generation and propagation separately, we divide the whole space into the near zone, centering the binary system and consisting of many typical wavelengths of radiated gravitational waves, and the wave zone, where the cosmological background should be considered. §.§ Gravitational Wave Generation We first investigate the GW generation in the near zone. The binary system consists of two inspiralling compact objects, whose mass is denoted by m_A(ϕ). Since the compact object is gravitationally bound, its total mass depends on its internal gravitational energy, which depends on the effective local value of the scalar field ϕ in the vicinity of the body. Generally, m_A(ϕ) is expanded about the background value ϕ_0 as m_A(ϕ)=m_A[1+s_AΦ+1/2(s_A^2+s_A'-s_A)Φ^2+𝒪(Φ^3)], where m_A≡ m_A(ϕ_0) and the first and second sensitivities are defined as s_A=[d ln m_A(ϕ)/d lnϕ]_ϕ=ϕ_0, s_A^'=[d^2 ln m_A(ϕ)/d(lnϕ)^2]_ϕ=ϕ_0. For white dwarfs, s ≃ 0, for neutron stars, s ≈ 0.1-0.2, and for black holes, s=0.5. We describe the gravitational waveforms are perturbatively under the small coupling limit ξ≡(3+2ω_ BD)^-1≪1. 
In the source frame, the time-domain waveforms are expressed as <cit.> h_+ =-4(Gℳ_c)^5/3/R(1+2/3ξΔ)ω_s^2/31+cos^2ι/2cos2ω_st_r, h_× =-4(Gℳ_c)^5/3/R(1+2/3ξΔ)ω_s^2/3cosιsin2ω_st_r, h_ b =2ξ·Gμ/R[2𝒮(Gmω_s)^1/3sinιcosω_st_r -Γ(Gmω_s)^2/3sin^2ιcos2ω_st_r]. up to the linear order of ξ. Here, we define the total mass as m≡ m_1+m_2, reduced mass as μ≡ m_1m_2/m, the symmetric mass ratio as η≡ m_1m_2/m^2, and the chirp mass as ℳ_c≡(m_1 m_2)^3/5/m^1/5. ω_s is the orbital frequency of the binary, R is the distance between source and observer, ι is the angle between the line of sight and binary orbital plane, and t_r≡ t-R is the retarded time. Additionally, another three parameters depending on the sensitivities are defined by Δ≡(1-2s_1)(1-2s_2), Γ≡1-2(m_2s_1+m_1s_2)/m, and 𝒮≡ s_1-s_2. Specifically, Δ=Γ=𝒮=0 for binary black hole system. The scalar and gravitational radiation carries the conserved energy from the binary system to the infinity. We consider a binary star system, endowed with masses m_1 and m_2, sensitivities s_1 and s_2, evolving on a quasi-circular orbit. In the source frame, the total energy flux is ℱ=8/15η^2/G(Gmω_s)^8/3{5/2ξ𝒮^2 +12[1+ξ(1/6Γ^2+4/3Δ)] (Gmω_s)^2/3}, In BD gravity, the conserved binding energy of the binary system is E=-G(1+ξΔ)m_1m_2/2r. From the energy balance equation, ℱ=-dE/dt, one can obtain the evolution of the orbital frequency ω_s as ω_s(τ)=1/Gℳ_c(256/5τ/Gℳ_c)^-3/8{ 1-ξ/16[Γ^2+4Δ +𝒮^2η^2/5(256/5τ/Gℳ_c)^1/4]}. with τ being the time to coalescence. These results return to the GR case when coupling ξ as zero. It is noted that the BD modification disappear for the binary black hole system. §.§ Gravitational Wave Propagation The generation process has been discussed in the above subsection, in which the cosmological evolution is ignored and the background scalar field, ϕ_0, is seen as a constant. However, in the wave zone, one has to take the cosmological expansion into consideration. The evolution of the Hubble parameter H(z) and the background scalar field ϕ_0(z) in redshift space are determined by the modified Friedmann equation (<ref>). To investigate the propagation effects, we linearize the modified field equation (<ref>) on the FLRW metric. And then the geometric optics is reasonably adopted because the typical GW wavelength is much shorter than the cosmological scale. The leading-order equation provides the null condition of the wavevector, implying that GWs propagate along the radial null geodesics. The subleading-order equation provides the conservation of the graviton number. By matching such conserved quantity at the overlap region of near and wave zones, one gets the modified waveforms under the FLRW background. The readers can find more details in Ref. <cit.>. We only summarize the main modifications as follows. Firstly, the GW oscillation frequency, binary masses, and coalescence time are redshifted, due to the cosmological expansion, i.e., ω_s→ω≡ω_s/(1+z), m→ m_z≡ m(1+z), μ→μ_z≡μ(1+z), ℳ_c→ℳ_z≡ℳ_c(1+z), and t_c→ t_z. Secondly, the other two modifications are brought by the BD theory. * The gravitational constant G is no longer constant in BD theory. For the wave source with redshift z, the corresponding value is determined by the background scalar field ϕ_0(z). Then the gravitational constant is replaced by the redshifted one, defined as G_z=1/ϕ_0(z). * The GW amplitudes decay as inverse modified luminosity distance, defined by D=R(1+z)√(ϕ_0(z)/ϕ_0(0)), rather than the standard form in the GR case. 
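The second modification can be made concrete by noting that the ratio between the modified luminosity distance D and the standard GR luminosity distance d_L = R(1+z) depends only on the background scalar-field evolution. A minimal Python sketch follows; the redshifts and ω_BD values are illustrative only.

import numpy as np

def gw_distance_ratio(z, omega_bd):
    # D / d_L^GR = sqrt(phi_0(z)/phi_0(0)) = (1+z)^{-1/(2(1+omega_BD))}
    return (1.0 + z) ** (-0.5 / (1.0 + omega_bd))

for omega_bd in (10.0, 1.0e3):
    for z in (0.1, 1.0, 5.0):
        print(omega_bd, z, gw_distance_ratio(z, omega_bd))
# The ratio is below unity, so the wave amplitude, which scales as 1/D, decays
# slightly more slowly than in GR; the deviation vanishes as omega_BD -> infinity.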
According to the above modification, the waveforms given in Eq. (<ref>) are revised as h_+ =-4(G_zℳ_z)^5/3/D(1+2/3ξΔ)ω^2/31+cos^2ι/2cos2ω t_r, h_× =-4(G_zℳ_z)^5/3/D(1+2/3ξΔ)ω^2/3cosιsin2ω t_r, h_ b =2ξ·G_zμ_z/D[2𝒮(G_zm_zω)^1/3sinιcosω t_r -Γ(G_zm_zω)^2/3sin^2ιcos2ω t_r]. correspondingly, the detected frequency is revised as ω(τ)=1/G_zℳ_z(256/5τ/G_zℳ_z)^-3/8{ 1-ξ/16[Γ^2+4Δ +𝒮^2η^2/5(256/5τ/G_zℳ_z)^1/4]}. §.§ Frequency-Domain Waveform Therefore, the frequency-domain waveforms in Brans-Dicke theory are given by h̃_A(f)=∫_-∞^∞ h_A(t)e^i2π ftdt (A=+,×, b), with f being the detected GW oscillation frequency, relating to the detected orbital frequency by ω=π f. During inspiral stage, the change of orbital frequency over a single period is negligible, and it is reasonable to apply a stationary phase approximation to compute the Fourier transformation. Following Refs. <cit.>, we derive h̃_+(f) =𝒜_0 ·δ𝒜·1+cos^2ι/2· e^iΨ_+, h̃_×(f) =𝒜_0 ·δ𝒜·cosι· e^i(Ψ_++π/2), h̃_ b(f) =𝒜_0[ δ𝒜_ b^(1)·sinι/2· e^iΨ^(1)_ b +δ𝒜_ b^(2)·sin^2ι/2· e^iΨ_+]. In terms of the dimensionless u=π G_zℳ_zf, the overall amplitude factor is 𝒜_0≡√(5π/24)(G_zℳ_z)^2/Du^-7/6, and the amplitude modification is δ𝒜 =1+ξ[-1/12Γ^2+1/3Δ -5/48η^2/5𝒮^2u^-2/3], δ𝒜_b^(1) =-ξη^1/5𝒮u^1/3, δ𝒜_b^(2)=ξΓ, up to the linear order of coupling ξ. The phase factors are Ψ_+ =2π f(t_z+R)+3/128u^-5/3{1-ξ[1/6Γ^2+2/3Δ +5/42η^2/5𝒮^2u^-2/3]} -2Φ_0-π/4, Ψ_ b^(1) =2π f(t_z+R)+3/128(2u)^-5/3{1-ξ[1/6Γ^2+2/3Δ +5/42η^2/5𝒮^2(2u)^-2/3]}-Φ_0-π/4, respectively. § STOCHASTIC GRAVITATIONAL-WAVE BACKGROUND The quantity of interest in stochastic searches is usually chosen to be the logfractional spectrum of the gravitational wave energy density <cit.>, Ω_GW(f) ≡1/ρ_cdρ_GW/dln f, where ρ_GW is the GW energy density, ρ_c is the critical density of the universe ρ_c≡3 c^2 H_0^2/8 π G. This quantity allows for direct comparison with theoretical models. §.§ Energy density of background The total gravitational wave energy density in the universe results from the cumulative contributions of sources that cannot be detected as individual binary system events by a gravitational wave detector network. This can be expressed in integral form, following <cit.>: Ω_GW(f)=f/ρ_c H_0∫_0^∞ d z ℛ_GW(z)/(1+z) √(Ω_m(1+z)^3+Ω_Λ)⟨dE/df⟩_s, where ℛ_GW(z) is the merger rate of GW sources measured in the source frame and dE/df is the source-frame energy spectrum emitted by each astrophysical source. ⟨⋯⟩_s denotes the averaged quantity over the population properties of compact binary system, i.e., the mass. In our work, we adopt the result of Planck18 <cit.> for the value of cosmology parameters, Hubble constant H_0, the matter density parameter Ω_m and the cosmological constant density parameter Ω_Λ. The energy spectrum is computed from the frequency-domain waveforms (<ref>) of each polarization via dE/df = π f^2/2GR^2 ∫[|h̃_+(f)|^2+|h̃_×(f)|^2+(3+2ω_ BD) |h̃_b(f)|^2] dΩ. Different from GR, the energy density is contributed by the tensor and scalar polarization, rather than only the tensor sector. When setting ξ as 0 and G_z as Newtonian gravitational constant, we obtain (dE/df)_ GR=(π G)^2/3ℳ_c^5/3/3(1+z)^1/3f^-1/3=(π G)^2/3ℳ_c^5/3/3f_r^-1/3, where f and f_r=f (1 + z) are the frequencies of the gravitational waves observed on Earth today and in the source’s cosmic rest frame, respectively. We consider the frequency at the innermost stable circular orbit (ISCO) as the frequency of the end of the inspiral phase of a binary merger <cit.>. 
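A minimal Python sketch of the GR inspiral energy spectrum and the ISCO cut-off is given below. The ISCO expression f_ISCO = c^3/(6^{3/2}π G m) is the standard GR estimate and is not written explicitly above; the chirp-mass value in the example is illustrative.

import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def f_isco(m_total_msun):
    # GW frequency at the innermost stable circular orbit (standard GR estimate)
    return c**3 / (6.0**1.5 * np.pi * G * m_total_msun * M_sun)

def dEdf_gr(f_r, mchirp_msun):
    # source-frame inspiral spectrum dE/df = (pi G)^{2/3} Mc^{5/3} f_r^{-1/3} / 3, in J/Hz
    mc = mchirp_msun * M_sun
    return (np.pi * G) ** (2.0 / 3.0) * mc ** (5.0 / 3.0) * f_r ** (-1.0 / 3.0) / 3.0

print(f_isco(20.0))                          # about 2.2e2 Hz for m = 20 M_sun
f_r = np.linspace(10.0, f_isco(20.0), 200)
spectrum = dEdf_gr(f_r, mchirp_msun=8.7)     # 8.7 M_sun is roughly the chirp mass of a 10+10 binary
print(spectrum[0], spectrum[-1])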
For a BBH binary with a total mass m = 20M_⊙, we have (f_r)_ ISCO≃ 200 Hz. Comparing the GR results, the modified energy spectrums tensor and scalar polarizations are (dE/df)_ Tensor=(1+z)^8/3+3ω_BD{1+ξ[(2/3Δ -1/6Γ^2) -5/24η^2/5𝒮^2 u^-2/3] }(dE/df)_GR, and (dE/df)_ Scalar =ξ·(1+z)^8/3+3ω_BD{Γ^2/6+5/24η^2/5𝒮^2u^-2/3 -15/128πΓη^1/5𝒮u^-1/3cos[Ψ_+-Ψ_ b^(1)] }(dE/df)_GR, respectively. In this work, we only consider two special classes of binary systems, the binary black hole and binary neutron star (BNS) with equal sensitivity, i.e., s_1≈ s_2≡0.2. We have 𝒮=Δ=Γ=0 in the first case, and 𝒮=0, Δ,Γ≠0. For these two classes of sources, the energy spectrums are simplified as (dE/df)_ BBH =(1+z)^8/3+3ω_BD(dE/df)_GR, and (dE/df)_ BNS,Tensor =(1+z)^8/3+3ω_BD(1+9/50ξ)(dE/df)_GR, (dE/df)_ BNS,Scalar =(1+z)^8/3+3ω_BD(3/50ξ)(dE/df)_GR. It is evident that the primary contribution of the correction arises from the propagation phase rather than the generation phase. In addition, while the correction terms introduced by two neutron star–black hole (NSBH) binaries rely on frequency and ultimately affects the spectral index, their overall contribution to the stochastic gravitational wave background remains negligible compared to that from BBH and BNS systems <cit.>. The IMRPhenom formalism offers a comprehensive description of the gravitational wave signal emitted throughout the entire coalescence process of binary black hole mergers, encompassing the inspiral, merger, and ringdown phases. This allows for accurate modeling of the complete gravitational wave signal. We adopt IMRPhenomD <cit.> as our waveform approximant with the Power-law Integrated (PI) sensitivity curve <cit.> to examine the detectability of the stochastic gravitational wave background, shown in Fig. <ref>. By definition, an energy density spectrum Ω_GW lying above the PI curve has an expected signal-to-noise ratio SNR>1 and the PI curves for different polarization modes will differ even for the same set of baselines. As shown in Fig. <ref>, the detector baseline performs effectively in the range of 10-200 Hz for searching the astrophysical stochastic gravitational wave background. This frequency range corresponds to the inspiral phase of gravitational wave. In our analysis, we focus on the inspiral phases as the main contributions of the stochastic gravitational wave background. §.§ Stochastic signals Gravitational wave detectors do not measure the GW energy density directly, instead, they measure the GW amplitude at each instrument. Consequently, we need a theory-dependent mapping that relates GW amplitudes to the energy density. One expands the metric perturbation in another useful form for the plane wave expansion, h_ij(t, 𝐱)=∑_A=+,×, b∫_-∞^∞ d f ∫ d^2 𝐧̂h̃_A(f, 𝐧̂) e_ij^A(𝐧̂) e^2 π i f(t-𝐧̂·𝐱 / c). Assuming that the astrophysical gravitational-wave background is stationary, Gaussian, and isotropic, with uncorrelated polarizations, the second moment of the stochastic gravitational wave strain field can be directly expressed in terms of the power spectral density as ⟨h̃_A^*(f, 𝐧̂) h̃_A^'(f^', 𝐧̂^')⟩=δ(f-f^') 1/4πδ^2(𝐧̂, 𝐧̂^') δ_A A^'1/2 S_A(f), where the factor of 1/2 indicates that this equation defines the one-sided power spectral density for each polarization mode. 
Using the corresponding expression, Eq.(<ref>) and Eq.(<ref>), one can relate energy densities in each polarization to their strain power-spectral densities via <cit.> Ω_A(f) = 2 π^2f^3/3 H_0^2λ_A(f) S_A(f), λ_A= ξ^-1 if A=b, 1 if A=+, ×, where Ω_+, Ω_×, and Ω_ b represent the energy density contributed by plus, cross, and breathing modes. For ground-based GW detectors, it is impractical to directly measure the power spectrum of the polarization amplitudes because the signal is largely dominated by stochastic instrumental and environmental noise <cit.>. The response h̃_I(f) of the detector I to a passing gravitational wave can be expanded by the antenna patterns F_I^A(n̂) as <cit.> h̃_I(f)=∫ d^2 𝐧̂∑_A F_I^A(𝐧̂) h̃_A(f, 𝐧̂) e^-2 π i f 𝐧̂·𝐱_I / c In the Fourier domain, the cross-correlation between the outputs of two detectors then can be expressed in terms of the second moment of the distribution of polarization amplitudes as ⟨h̃_I^*(f) h̃_J(f^')⟩ = ∫ d^2 𝐧̂ d^2 𝐧̂^'∑_A A^'⟨h̃_A^*(f, 𝐧̂) h̃_A^'(f^', 𝐧̂^')⟩ × F_I^* A(𝐧̂) F_J^A^'(𝐧̂^') e^-2π i( f^'𝐧̂^'·𝐱_J/ c-f𝐧̂·𝐱_I / c), Using the relation between strain power-spectral densities and energy densities in Eq.(<ref>) and Eq.(<ref>), we would write, instead of Eq.(<ref>), ⟨h̃_I^*(f) h̃_J(f^')⟩=3 H_0^2/4 π^2 f^3∑_Aλ^-1_A(f)Ω^A_GW(f) γ^A_IJ(f) δ(f-f^'), where we have defined the overlap reduction function (ORF) for detectors I,J as γ^A_IJ(f) ≡∫d^2 𝐧̂/4 π F_I^* A( 𝐧̂) F_J^A( 𝐧̂) e^2 π i f 𝐧̂· (𝐱_I-𝐱_J). For subsequent data analysis, we will not work directly with γ^A_IJ(f), but instead use the normalized overlap reduction functions as described in <cit.>. In this case, Eq.(<ref>) can be rewrite as: ⟨h̃_I^*(f) h̃_J(f^')⟩= 3 H_0^2/20 π^2 f^-3δ(f-f^')×[Ω_GW^Tγ_IJ^T + λ^-1_ bΩ_GW^S1/3γ_IJ^S], where Ω_GW^T≡Ω_GW^++Ω_GW^× and Ω_GW^S≡Ω_GW^b, respectively. In Fig. <ref>, we present the overlap reduction functions for the Cosmic Explorer (CE) and Einstein Telescope (ET) networks. Notably, the ET consists of three separate interferometers positioned at the vertices of an equilateral triangle. The overlap reduction function quantifies the degree of correlation preserved between detectors in the GW signal. The smaller overlap reduction function for the scalar mode compared to the tensor mode indicates that detecting the scalar mode will be more challenging. § SIMULATION AND PARAMETER ESTIMATE We generated two sets of one-year simulated AGWB data by  <cit.>. Based on GR, the first set adopts a fiducial universe and classical populations of BBHs which are consistent with gravitational-wave observations from LIGO-Virgo-KAGRA up to GWTC-3 <cit.>. This set is used to test the capability of constraining the parameters of BD gravity and the population model. The second set of simulated data is generated using a power-law model, which is fitted to the tensor and scalar gravitational wave energy spectra derived from population models of the BBH and BNS binary systems. This set is designed to explore the potential for detecting scalar gravitational waves with the network of future gravitational wave detectors. For better prospects, we selected our network as {CE40, CE20, ET1} for 2CE and ET detectors. Here, CE40 and CE20 refer to the proposed 40-km and 20-km arm configurations of the CE, respectively, while ET1 represents one of the three objects in proposed triangular configurations for the ET <cit.>. 
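Before turning to the population modelling, the conversion between a polarization strain power spectral density and its energy-density contribution, used throughout this section, can be sketched in a few lines of Python. The Hubble constant, ω_BD and reference amplitudes below are purely illustrative.

import numpy as np

H0 = 67.7e3 / 3.086e22          # Planck-like Hubble constant in s^-1

def omega_from_psd(f, S_A, lam_A=1.0):
    # Omega_A(f) = 2 pi^2 f^3 lam_A S_A(f) / (3 H0^2); lam_A = 1 for +, x and 1/xi for b
    return 2.0 * np.pi**2 * f**3 * lam_A * S_A / (3.0 * H0**2)

def psd_from_omega(f, Omega_A, lam_A=1.0):
    # inverse relation, e.g. for building power-law-integrated sensitivity curves
    return 3.0 * H0**2 * Omega_A / (2.0 * np.pi**2 * f**3 * lam_A)

xi = 1.0 / (3.0 + 2.0 * 100.0)                                # omega_BD = 100 as an example
print(psd_from_omega(25.0, Omega_A=3.0e-9))                   # tensor modes, lam_A = 1
print(psd_from_omega(25.0, Omega_A=1.0e-11, lam_A=1.0 / xi))  # breathing mode, lam_A = 1/xi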
§.§ Population Models Recalling the previous sections, the total gravitational wave energy density Ω_GW depends on the merger rate ℛ_GW(z) of compact binary coalescences, as well as the energy spectrum ⟨dE/df⟩_s, which is the function of mass of compact binary. The merger rate of binary systems is given by the following form <cit.>: ℛ_GW(z)=ℛ_0/𝒞(1+z)^α_z/1+(1+z/1+z_p)^α_z+β_z. Here, ℛ_0 is the local rate of binary systems evaluated at redshift z = 0 and 𝒞 is a normalization constant to ensure ℛ_GW(0) = ℛ_0. We adopt parameters α_z=2.7, β_z=2.4, z_p = 2.0, consistent with Refs <cit.>. The local rates for the BBHs and BNSs are ℛ^BBH_0=31 Gpc^-3yr^-1 and ℛ^BNS_0=855 Gpc^-3yr^-1, respectively <cit.>. We truncate the merger rate at z_max=10, assuming it to be zero at higher redshifts, as the contribution of such distant systems to the overall gravitational wave background is relatively small. Note that Eq.(<ref>) assumes an average chirp mass for all binaries, meaning it does not account for variations in the chirp mass across different binary systems. Given that the total gravitational wave energy density is calculated from the cumulative contributions of various sources, a more general form of Eq.(<ref>) is: dE/df|_s^GR=⟨ℳ_c^5/3⟩(π G)^2/3/3(1+z)^1/3f^-1/3, where ⟨ℳ_c^5/3⟩ is determined by the mass distribution of the binary systems. We assume that the masses of BBHs are described by the Power Law + Peak model <cit.>, resulting in ⟨ℳ_c^5/3⟩_BBH = ∫ℳ_c^5/3 p(m_1,m_2|Λ_m) d m_1 dm_2. Here, p(m_1,m_2|Λ_m) represents the mass function of this model, with Λ_m={α, β, m_min, m_max, δ_ m, λ_ peak,μ_ m,σ_ m} being the (hyper)parameters of the model. To ensure consistency with the distribution described in Ref <cit.>, we adopt the following parameters: α=3.5, β=1, m_min=5M_⊙, m_max=80M_⊙, δ_m=5M_⊙, λ_ peak=0.035, μ_m=34M_⊙, and σ_m=5 M_⊙. We then obtain ⟨ℳ_c^5/3⟩_BBH= 60.8 M_⊙^5/3, which will be treated as the only free parameter in our Bayesian analysis of the AGWB. For the BNSs, we simply assume both component masses follow a uniform distribution in (1.2, 2.3) M_⊙ and are randomly paired, which is consistent with the currently available data <cit.>. This yields ⟨ℳ_c^5/3⟩_ BNS = 2.0 M_⊙^5/3. The consideration of BNSs will be used to assess the contribution of scalar GWs to the AWGB. §.§ Cross-correlation statistic The cross-correlation of detector pairs is a commonly employed method for detecting the stochastic gravitational-wave background. As shown in Refs <cit.>, at a frequency bin f for a single segment (i), the unbiased and minimal-variance optimal estimator for Ω_GW is given by Ĉ_IJ(i)(f)=2/T10 π^2/3 H_0^2f^3 Re[s̃_I^*(f) s̃_J(f)]/γ_I J(f) with the corresponding variance: σ_IJ(i)^2(f) =1/2Tdf(10 π^2/3 H_0^2)^2 f^6 P_I(f) P_J(f). Here, s̃_I(J) denotes the Fourier-transformed strain signal measured by detector I(J), which includes contributions from both gravitational waves and instrumental noise. T is the segment duration, df is the frequency bin width, and P_I(f) is the one-sided auto-power spectral density of detector I, defined by ⟨s̃_I^*(f) s̃_I(f^')⟩=1/2δ(f-f^') P_I(f). The final estimator and variance are optimally combined by the weighted sum from each segment via <cit.> Ĉ_IJ(f)=∑_i Ĉ_IJ(i)(f) σ_IJ(i)^-2(f)/∑_i σ_IJ(i)^-2(f), σ_IJ^-2(f)=∑_i σ_IJ(i)^-2(f). Here Ĉ_IJ(i) and σ_IJ(i) denote the individual segment estimators of the detector pairs IJ and their inverse variances, respectively. 
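The inverse-variance combination of the per-segment estimators can be written compactly; the following is a minimal Python sketch with toy numbers (the segment count and noise level are illustrative).

import numpy as np

def combine_segments(C_hat, sigma):
    # optimally combine per-segment estimators C_hat[i, f] with standard deviations
    # sigma[i, f] into a single spectrum and its uncertainty (inverse-variance weights)
    w = sigma ** -2.0
    C_comb = np.sum(C_hat * w, axis=0) / np.sum(w, axis=0)
    return C_comb, np.sum(w, axis=0) ** -0.5

rng = np.random.default_rng(0)
sigma = np.full((60, 64), 1.0e-8)                 # 60 segments, 64 frequency bins
C_hat = rng.normal(loc=1.0e-9, scale=sigma)
C_comb, sigma_comb = combine_segments(C_hat, sigma)
print(sigma_comb[0] / sigma[0, 0])                # ~ 1/sqrt(60), as expected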
With the estimate of the SGWB spectrum Ĉ_IJ(f) and variance σ_IJ^2(f), we perform parameter estimation and fit the model. The Gaussian likelihood is formed by combing the spectrum from each baseline IJ as p(Ĉ_IJ|Θ) ∝exp[-1/2∑_I J∑_f(Ĉ_IJ(f)-Ω_GW(f, Θ)/σ_IJ(f))^2], where Ω_GW(f, Θ) describes the SGWB model and Θ are its parameters. Given the likelihood, one can estimate the posterior distribution of the parameters of the model using Bayes theorem, p(Θ|Ĉ_IJ) ∝ p(Ĉ_IJ|Θ) p(Θ), where p(Θ) is the prior distribution on the parameters Θ. §.§ Constraints from Waveforms We first generate a one-year mock datasets (Dataset 1) consisting BBH mergers solely. We inject the SGWB signal generated by the fiducial BBH population as described in Sec.<ref> into the baselines, with the corresponding detector noise. As described in Ref <cit.>, the cross-correlation between detectors in our simulations depends on the signal power spectral density (PSD), given our assumption that noise is uncorrelated across all detectors. With a sampling frequency of 1024 Hz, the simulated dataset from each interferometer is generated in the frequency domain and then transformed into the time domain using the inverse discrete Fourier transform. We split the total time series into intervals and further dividing each interval into segments. This approach reduces the internal storage and computational requirements for the simulation. Finally, we chose 60 segments and each of them contains 256 seconds. To obtain the power and cross spectra of the detectors for each interval, the segmented time series are Hann-windowed and overlapped by 50% before calculating the Fourier transforms. The resulting spectra are then coarse-grained to a frequency resolution of 1/64 Hz and restricted to the 10-200 Hz frequency band, which contributes most significantly to the expected signal-to-noise ratio. The calculated power and cross spectra for each interval are combined by Eq.(<ref>) to obtain the optimal estimator Ĉ_I J(f) and its corresponding variance σ_I J^2(f). Fig. <ref> presents the injected SGWB energy density Ω_GW(f), calculated in GR using the fiducial population and cosmology model. It also shows the corresponding estimator and variance from the baseline in the network of interferometers. The injection of Ω_GW(f) is well restored, particularly in the frequency band below 100 Hz. Globally, the variance increases with frequency due to the rising noise PSD of the detector. Locally, the sharp increase in variance is attributed to the overlap reduction function oscillating around zero. To evaluate the capacity of constraining BD gravity, we fix all parameters related to the BBH population for the Bayesian analysis at first, regarding only the BD theory parameter ω_BD is free. We perform Bayesian inference by  <cit.>. It should be noticed that the simulation is based on GR, therefore we use Log_10(1/ω_BD) instead of ω_BD in Bayesian analysis for convenience. GR is recovered as Log_10(1/ω_BD) →-∞ when ω_BD→+∞. However, the value of Log_10(1/ω_BD) = -∞ cannot be achieved in analysis. Therefore, we impose a cutoff value Log_10(1/ω_BD) = -10 as the approximation of GR. Consequently, we choose a uniform prior on Log_10(1/ω_BD) at the range of [-10, 10]. Fig. <ref> shows the posterior distribution of Log_10(1/ω_BD). The red dotted line denotes the upper limit of Log_10(1/ω_BD) at 90% credible level. Converting it to ω_BD, we obtain the upper limit is ω_BD>816.24 at 90% credible level. 
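The final step of this analysis, the Gaussian likelihood defined above together with a flat prior on Log_10(1/ω_BD), can be sketched as follows. The model function below is a simple power-law stand-in whose amplitude grows with 1/ω_BD; it is not the full population spectrum of the energy-density integral, and all numbers are illustrative.

import numpy as np

def log_likelihood(theta, f, C_hat, sigma, model):
    # Gaussian SGWB likelihood for one baseline: -0.5 * sum_f ((C_hat - model)/sigma)^2
    resid = (C_hat - model(f, theta)) / sigma
    return -0.5 * np.sum(resid ** 2)

def upper_limit_90(grid, logL):
    # 90% credible upper limit from a one-dimensional grid posterior with a flat prior
    post = np.exp(logL - logL.max())
    cdf = np.cumsum(post) / np.sum(post)
    return np.interp(0.9, cdf, grid)

def toy_model(f, log10_inv_omega):
    # stand-in spectrum; NOT the paper's population model
    return 3.0e-9 * (1.0 + 10.0 ** log10_inv_omega) * (f / 25.0) ** (2.0 / 3.0)

f = np.linspace(10.0, 200.0, 100)
sigma = np.full_like(f, 5.0e-10)
C_hat = 3.0e-9 * (f / 25.0) ** (2.0 / 3.0) + np.random.default_rng(1).normal(0.0, sigma)
grid = np.linspace(-10.0, 10.0, 2001)
logL = np.array([log_likelihood(g, f, C_hat, sigma, toy_model) for g in grid])
print(upper_limit_90(grid, logL))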
It should be pointed that the upper limit exhibits a weak dependence on the cutoff value of the prior for Log_10(1/ω_BD). However, a smaller cut-off value for the prior leads to a larger upper limit. Furthermore, we consider the full parameter space including the BBH population model. For reasonable simplification, we do not consider the individual parameters of the BBH mass distribution. We use ⟨ℳ_c^5/3⟩ as the representative mass parameter. Thus the parameter space of Θ becomes Θ={⟨ℳ_c^5/3⟩, (1/ω_BD), ℛ^BBH_0, α_z, β_z, z_p}. The direct detection of binary black hole mergers provides constraints on the mass distribution, the local merger rate ℛ^BBH_0, and the leading slope parameter α_z <cit.>. We adopt slightly narrower priors, aligning with measurements derived from direct detection of binary black hole mergers, compared to those specified in Ref. <cit.>. These priors are listed in Table <ref> in detail. Fig. <ref> shows the posterior distributions of the parameters. We effectively recover our prior on the parameters of black holes population. Compared with the last scenario that estimating Log_10(1/ω_BD) solely, this case does not provide a more stringent result. Comparing the posterior and prior distributions, we find that β_ z and z_ p are better constrained, whereas the other parameters for the BBH population are weakly constrained. Fortunately, these parameters (i.e., ⟨ℳ_c^5/3⟩, ℛ^BBH_0, and α_z) can be better measured by the BBH population analysis <cit.>. §.§ Seaching for scalar mode The second one-year simulation dataset (Dataset 2) is designed to search for potential scalar gravitational wave background. Different from the the previous one, the second set is generated by a power-law without considering the population properties, Ω_GW(f) = Ω_ref^T(f/f_ref)^α_T + Ω_ref^S(f/f_ref)^α_S, where Ω_ref^T and Ω_ref^S are the reference amplitudes of the tensor and scalar modes at the reference frequency f_ref = 25 Hz. The spectral indices for the tensor and scalar modes are denoted by α_T and α_S, respectively. To illustrate the distinction between the contributions of the scalar and tensor gravitational waves to the stochastic gravitational wave background, we calculate the background energy density Ω_GW(f) in BD gravity with some typical values of BD parameter. These calculations utilize the population models for BBH and BNS described in Sec.<ref>. The corresponding results are presented in Table <ref>. Given that the scalar gravitational wave arise from the deviations beyond GR, we consider ω_BD∼𝒪(1) as an appropriate deviation. The injection values of the parameters are finalized as Ω_ref^T=3.0 × 10^-9, Ω_ref^S=1.0 × 10^-11, and α_T = α_S = 2/3. The spectral indices are considered to be α_T = α_S, due to the equal sensitivities of compact binaries in both BBH and BNS systems. This equivalence ensures that the energy spectrums do not introduce frequency-dependent corrections that deviate from GR. The corner plots are displayed in Fig. <ref>, where we transform the posterior distributions of Ω_ref^T and Ω_ref^S into logarithmic space, denoted as Log_10Ω_ref^T and Log_10Ω_ref^S, respectively. The amplitudes of the tensor mode Ω_ref^T and the corresponding spectral index α_T are recovered within 1σ. For the search of scalar gravitational waves, the upper limit on the amplitudes is determined to be Ω_ref^S < 3.064 × 10^-11 (90% CL), corresponding to Log_10Ω_ref^S < 10.51 (90% CL), while the spectral index α_S remains unconstrained. 
The weaker constraints on scalar gravitational waves compared to tensor gravitational waves can be attributed to two primary factors. Firstly, the scalar gravitational wave contribute less effect to the stochastic gravitational-wave background intensity than the tensor gravitational waves. Secondly, the ORF of the scalar gravitational wave is smaller than that for tensor gravitational waves, which has been shown in Fig. <ref>. § CONCLUSIONS We take Brans-Dicke gravity as an example to comprehensively analyze how the generation and propagation effects of gravitational waveforms under scalar-tensor gravity are encoded into the stochastic gravitational wave background. Unlike previous works that primarily utilized power-law models to search for the stochastic gravitational wave background, or employed Fisher information matrices for parameter estimation, we perform an end-to-end analysis of BD gravity with third-generation gravitational wave detectors. Utilizing realistic population properties of compact binary systems, we generate two sets of one-year astrophysical stochastic gravitational wave background simulation datasets and conducted a complete Bayesian analysis. The first set of simulations demonstrates that, with well-constrained population parameters, one year of stochastic gravitational wave background observation can provide significant constraints on BD theory. We find that in the best case ω_BD > 816 at 90% CL. However, the uncertainty of the population properties has a more obvious impact on the stochastic gravitational wave background spectrum than the BD gravity, making it difficult to constrain the theory under such uncertainty. The accurate measurements of the binary population from an increased number of observed systems in third-generation gravitational wave detectors <cit.>, along with extended observation times of the AGWBs, can achieve more stringent constraints on BD gravity. Additionally, by matching waveforms and signals from a larger number of observed binary systems with high SNR, these constraints will be further tightened <cit.>. The second set of simulations indicates that one year of stochastic gravitational wave observations will yield strong constraints on the tensor mode background, while upper limits can be placed on the scalar mode background, which is two orders of magnitude smaller than the tensor mode background. Besides, the method shows great feasibility to search for scalar modes in the stochastic gravitational wave background. Longer observation times and larger detector networks with more baselines may offer opportunities to detect potential scalar modes. Due to the fact that the limited observations of NSBHs, the information of their population is inadequate to generate a reliable prospect <cit.>. We neglect the relatively minor background contributions that may arise from NSBHs. However, the systems of NSBH will provide frequency-dependent corrections in the energy spectrum, offering an opportunity to break the degeneracy with the population and the gravity paramaters. We leave all these prospects and others to future work. Future space-based gravitational wave detectors, such as the LISA-TianQin networks, will be capable of detecting alternative polarizations of stochastic backgrounds <cit.>. Moreover, utilizing noise-free correlation measurements from pulsar timing arrays will enhance our understanding of the nanohertz stochastic gravitational wave background, thereby providing deeper insights into gravity <cit.>. 
Observations of the stochastic gravitational wave background across a broader range of frequencies will undoubtedly yield more reliable information about gravity. We thank all code authors for their patience in addressing our questions regarding the use of the code. We also thank Shaopeng Tang, Yuanzhu Wang, Aoxiang Jiang, Bo Gao, Chi Zhang, Xingjiang Zhu, and Xiao Guo for their valuable discussions and comments. This work is supported by the Natural Science Foundation of China (No. 12233011). Z.L. is supported by the China Scholarship Council, No. 202306340128. W.Z. is supported by the National Key R&D Program of China (Grant No. 2022YFC2204602 and 2021YFC2203102), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB0550300), the National Natural Science Foundation of China (Grant No. 12325301 and 12273035), the Fundamental Research Funds for the Central Universities (Grant No. WK2030000036 and WK3440000004), the Science Research Grants from the China Manned Space Project (Grant No. CMS-CSST-2021-B01), and the 111 Project for "Observational and Theoretical Research on Dark Matter and Dark Energy" (Grant No. B23042).
http://arxiv.org/abs/2407.13358v1
20240718100109
Capturing Style in Author and Document Representation
[ "Enzo Terreau", "Antoine Gourru", "Julien Velcin" ]
cs.CL
[ "cs.CL", "cs.LG" ]
A]Enzo Terreau B]Antoine GourruCorresponding Author. Email: antoine.gourru@univ-st-etienne.fr. B]Julien Velcin [A]Université de Lyon, Lyon 2, ERIC UR3083 [B]Laboratoire Hubert Curien, UMR CNRS 5516, Saint-Etienne, France § ABSTRACT A wide range of Deep Natural Language Processing (NLP) models integrates continuous and low dimensional representations of words and documents. Surprisingly, very few models study representation learning for authors. These representations can be used for many NLP tasks, such as author identification and classification, or in recommendation systems. A strong limitation of existing works is that they do not explicitly capture writing style, making them hardly applicable to literary data. We therefore propose a new architecture based on Variational Information Bottleneck (VIB) that learns embeddings for both authors and documents with a stylistic constraint. Our model fine-tunes a pre-trained document encoder. We stimulate the detection of writing style by adding predefined stylistic features making the representation axis interpretable with respect to writing style indicators. We evaluate our method on three datasets: a literary corpus extracted from the Gutenberg Project, the Blog Authorship Corpus and IMDb62, for which we show that it matches or outperforms strong/recent baselines in authorship attribution while capturing much more accurately the authors stylistic aspects. § INTRODUCTION Deep models for Natural Language Processing are usually based on Transformers, and they rely on latent intermediate representations. These representations are usually built in a self-supervised manner on a language modeling task, such as Masked Language Modeling (MLM) <cit.> or auto-regressive training <cit.>. They constitute a good feature space to solve downstream tasks, for example classification or generation, even though some of those tasks are still difficult to handle with prompt-based generative models like ChatGPT <cit.>. Additionally, some efforts have been made to benefit from large pretrained model to represent documents <cit.> and even authors, with contributions like Usr2Vec <cit.>, Aut2Vec <cit.>, and DGEA <cit.>. The main drawback of these models is that they were shown by <cit.> to mainly focus on topics rather than on stylistic features of the text. It turns out that capturing writing style can be of much interest for some applications. When working with literacy data or for forensic investigation <cit.>, practitioners are generally interested in detecting similarities in writing style regardless of the topics covered by the authors. The author style can be defined as every writing choice made without semantic information, often study through various linguistic and syntactic features. As demonstrated by <cit.>, most author embedding techniques rely on the semantic content of documents: a poem and a fiction writing on flowers will be placed closer in the latent space, regardless of their strong differences in sentence construction, structure, etc. As an answer to these limitations, we propose a new model that builds a representation space which captures writing style by using stylistic metrics as additional input features. We follow <cit.> and leverage the Variational Information Bottleneck (VIB) framework <cit.>, that was shown to outperform the classical pointwise contrastive training. More precisely, we propose to use it to fine tune a pretrained document encoder (such as <cit.>) and author representations on an authorship attribution task. 
This is, to our knowledge, the first time that this framework is applied to author representation learning. Then, we add an additional term in the objective function to enforce the representations to capture stylistic features. We name this new model . Using pretrained models allows to benefit from accurate intermediate text representations, built on ready-to-use language resources. In Figure <ref>, we present a subset of authors from the Project Gutenberg and the representation of the documents they wrote. The size of author's vector is proportional to its variance, learnt by using the VIB framework. As expected, some outlier productions from authors in term of style (e.g., Thus Spake Zarathustra from Nietzsche) lie closer in the representation space to books of the same genre. More precisely, our model allows 1) to capture author and document style, 2) to build an interpretable representation space to be used by researchers in linguistic, literature and public at large, 3) to predict stylistic features such as readibility index, NER frequencies, more accurately than every existing neural based methods, 4) accurately identify document's author, even when they are unknown. After a presentation of related works, we introduce the theoretical foundations of the VIB framework, we then describe our model and how it is optimized. In the last section, we present experimental results on two tasks: author identification and stylistic features prediction. Our experiments demonstrate that our model outperforms or matches existing author embedding methods, in addition to being able to infer representations for unseen documents, measure semantic uncertainty of authors and documents, and capture author stylistic information. § RELATED WORKS §.§ Author Embedding Models Word embedding, popularized by <cit.>, was then extended to document embedding by the same authors. More recent works <cit.> propose different aggregation functions of word embeddings, based on LSTM, Transformers, and Deep Averaging Networks, to build (short) document level representations. The aggregations is learnt through classification or document pairing. More recently, <cit.> proceed in a similar way by fine tuning a BERT model <cit.>. There are also specific works focusing on author embeddings. The Author Topic Model (ATM) <cit.> is a hierarchical graphical model, optimized through Gibbs sampling. It produces a distribution over jointly learnt topic factors that can be used as author features. Aut2vec <cit.> allows to learn representations of authors and documents that can separate true observed pairs and negative sampled (document, author) pairs. The distance between two representations modifies an activation function producing a probability that the pair is observed in the corpus. This approach concatenates two sub models: the Link Info model, which takes pairs of collaborating authors, and the Content Info model, which uses pairs of author and documents. It cannot infer representations of unseen documents and authors: the embeddings are parameters of an embedding layer. The Usr2vec model <cit.> learns author representation from pretrained word vectors. Authors use the same objective than <cit.>, and add an author id to learn the representations. §.§ Writing Style-oriented Embedding Models While there is no consensual definition of writing style, it has always been a widely addressed research topic. 
In computational linguistic, the approach of <cit.> is often cited as a reference and gives the following definition: “Style is, on a surface level, very obviously detectable as the choice between items in a vocabulary, between types of syntactical constructions, between the various ways a text can be woven from the material it is made of.”, and the author to conclude further to the “impossibility of drawing a clean line between meaning and style”. That's why style is commonly defined as every writing choice without semantic information. Based on this definition, it is hard, if not impossible, to produce a clear annotated dataset classifying different writing style. The workaround in most studies is to identify the most useful stylistic features to associate an author to its production. It starts in the 19th century with <cit.> and the most basic features (e.g., word and punctuation frequencies, hapax legomena, average sentence length). More recent works focus on function words frequencies <cit.>, hybrid variables such as character n-grams <cit.> or even Part-Of-Speech (POS) and Name Entity Recognition (NER) tag frequencies, using authorship prediction as evaluation. Several methods try to use these stylistic features to learn document representations. For example, <cit.> use Doc2Vec on documents of character trigrams annotated regarding their position in the word or if they contain punctuation (NGRAM Doc2Vec). According to the authors, it allows to capture both content and writing style. In an other work, words and POS tags embeddings are learnt together before passing them through a CNN to get a sentence representation <cit.>. Then these sentences are fed into an LSTM with a final attention layer to compute document representation. This model is trained on the authorship attribution task. Some works claim to capture this information in an unsupervised manner. DBert-ft <cit.> fine-tunes DistilBERT on the authorship attribution task, assuming that an author writing style must be consistent over its documents, and thus, that this task allows to build a “stylometric latent space” when the model is trained on a reference set. Yet, for all above models, no author representation is explicitely learnt. § OUR MODEL: §.§ Goal and VIB Framework We deal with a set of documents, such as literature or blog posts. We assume each document is written by one author. Each document of indice d is preprocessed to extract a vector z_d^f of r = 300 stylistic features following <cit.>. Our goal is threefold: i) We want to build author and document representations in the same space ℝ^r such that their proximity captures their stylistic similarity (Figure <ref>), ii) We want to learn a measure of variability in style for each document and author , and iii) We want our model to incorporate an on-the-shelf pre-trained text encoder such as Sentence-BERT or USE to benefit from their complex language understanding, fine-tuned on the dataset at hand using the objective we have just defined. To do that, we build an architecture based on the Variational Information Bottleneck (VIB) framework. The VIB framework is a variational extension of the Information Bottleneck principle <cit.> proposed by <cit.>. The general objective function is, for a set of observations x, to associate labels y and latent representations z of these observations: max_z I(z, y)-β I(z, x), where I is the well-known Mutual Information measure, defined as: I(x,y) = ∫∫ p(x,y) logp(x,y)/p(x)p(y) d_x d_y. 
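To fix ideas on the quantity being maximised, the following minimal Python sketch evaluates the mutual information of a discrete toy joint distribution; the distributions are arbitrary and only serve to illustrate the definition above.

import numpy as np

def mutual_information(p_xy):
    # I(x, y) in nats for a discrete joint distribution given as a 2-D probability array
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask])))

p_independent = np.outer([0.5, 0.5], [0.5, 0.5])
p_correlated = np.array([[0.45, 0.05],
                         [0.05, 0.45]])
print(mutual_information(p_independent))   # 0.0: knowing x tells nothing about y
print(mutual_information(p_correlated))    # > 0: x is informative about y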
Information Bottleneck aims at maximally compressing the information in z, such that z is highly informative regarding the labels, i.e. z can be used to predict the labels y. With y being a set of relevant stylistic features, we would like to maximize the stylistic information captured by the representation, while minimizing the semantic one. β≥ 0 is a hyper-parameter that controls the balance between the two sub-objectives. In this approach, p(z|x) (the “encoding law”) is defined by modeling choices. Most of the time, the mutual information is intractable. We then obtain a lower bound of Eq.<ref> by using variational approximations thanks to <cit.>: -L_vib = 𝔼[log q(y|z)] - β KL(p(z|x)||q(z)) where q(y|z) is a variational approximation of p(y|z) and q(z) approximates p(z). Maximizing Eq.<ref> leads to increasing Eq.<ref>. §.§ VIB for Embedding with Stylistic Constraints <cit.> propose to use this framework to learn probabilistic representations of images. They leverage an instance of this framework based on siamese networks with a (soft) contrastive loss objective function, to separate positive observed pairs of images (y = 1) and negative examples (y = 0). We extend this model to document and author embedding with stylistic constraint. Each author a (resp. document d) is associated to a stochastic representation z_a (resp. z_d) that is unobserved (i.e., latent). Additionally, each document is associated to a stylistic feature vector z_d^f that is beforehand extracted from the corpus with usual NLP toolkits. We assume that the dimensions of z_a, z_d and z_d^f are the same (r). We build a set of pairs (a,d) with label y_a=1 if a wrote d. We additionally draw k negative pairs (a',d) for each observed pair, associated with label y_a=0, where a' is not an author of d. The encoding laws (p(z|x)) for authors and documents are normal laws. To capture stylistic information, we also build a set of pairs (d,d) with label y_f=1 and we draw k negative pairs (d, d') for each observed pair, associated with label y_f=0. These pairs are used to train the stylistic objective : the representation z_d of a document should be close to its feature vector z_d^f. We learn the following parameters for each author a: mean μ_a and diagonal variance matrix with diagonal σ_a^2 (these are embedding layers). For a document d, we use a trainable text encoder to map a document's content to a vector d_0 ∈ℝ^r_0. We then build the document mean μ_d = f(d_0) ∈ℝ^r and diagonal variance matrix with diagonal σ_d^2 = g(d_0) ∈ℝ^r. As we will show later, the dimension r should match the number of stylistic features to gain in comprehension of the learning space, but the text encoder can output vectors of any dimension (here, r_0). Following <cit.>, f and g are neural networks. We give more details on f, g (the “encoding functions”), and the text encoder later. Following <cit.>, the probability of a label is the soft contrastive loss: q(y_a = 1 | z_a, z_d) =σ(-c_a|| z_a-z_d||_2+e_a) q(y_f = 1 | z_d, z_d^f) =σ(-c_f|| z_d-z_d^f||_2+e_f), where σ is the sigmoid function, c_a, c_f> 0 and e_a,e_f ∈ℝ. We introduce an additional parameter α∈ [0,1] to control the importance given to the features and to the authorship prediction objective. 
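A minimal Python sketch of the soft contrastive probability above is given below; the slope c and offset e are learnable parameters in the model, and the values used here are illustrative only.

import numpy as np

def q_positive(z1, z2, c=1.0, e=5.0):
    # q(y=1 | z1, z2) = sigmoid(-c * ||z1 - z2||_2 + e), with c > 0
    d = np.linalg.norm(z1 - z2)
    return 1.0 / (1.0 + np.exp(c * d - e))

rng = np.random.default_rng(0)
z_a, z_d = rng.normal(size=300), rng.normal(size=300)
print(q_positive(z_a, z_d))            # distant pair: probability close to 0
print(q_positive(z_a, z_a + 0.01))     # nearby pair: probability close to 1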
We can define the loss function (to minimize) based on the VIB framework as follows: ℒ= -(1-α)𝔼_p(z_a|x_a),p(z_d|x_d)[log q(y_a | z_a, z_d)] -α𝔼_p(z_d|x_d)[log q(y_f | z_d,z_d^f)] + β( KL(p(z_a|x_a)||q(z_a)) + KL(p(z_d|x_d)||q(z_d)) ) Here, α = 0 will produce representations that well predict the author-document relation but will not capture the stylistic features of the documents, as shown by <cit.>. With α = 1, on the contrary, the model will simply bring document embeddings closer to their feature vectors. Hence, the value of α needs to be carefully tuned on the dataset, regarding if the corpus is writing style specific or not thanks to domain knowledge. Eventually, computing the expected values in Eq.(<ref>) is intractable for a wide range of encoders. We therefore approximate it by sampling L examples by observation (here, a triplet document, author, feature vector), following p(z|x) as done in <cit.>. We get (the same goes for feature vector/documents pairs) : 𝔼[log q(y_a| z_a, z_d)] ≈1/L∑_l=1^Llog q(y_a| z^(l)_a, z^(l)_d) We then use the reparametrization trick, following what is done in VAE <cit.>: z^(l)_a = μ_a + σ_a ⊙ϵ, z^(l)_d = μ_d + σ_d ⊙ϵwithϵ∼𝒩(0,1) This loss can now be minimized using backpropagation. In Figure <ref>, we show a schematic representation of our model, called for Variational Author and Document Representations with Style. §.§ Encoding Functions and Choice of the Encoder The entering bloc of our model for documents is a text encoder, mapping a document in natural language to a vector in ℝ^r. Many deep architectures could be used here and trained from scratch. Nevertheless, we propose to use a pretrained text encoder. Models that are pretrained on large datasets are now easily available online[e.g., <https://huggingface.co/models>]. They have been proved successful on many NLP tasks with a simple fine-tuning phase (the only constraint being to avoid catastrophic forgetting). Additionally, the VIB framework allows to naturally introduce a pretrained text encoder as shown by <cit.>. The encoder's output should then be mapped to document mean and variance. Both <cit.> map the text encoder output to the document's mean (the f function) and variance (the g function) using a Multi Layer Perceptron (MLP). This approach is simple, and fast. In our experiments, we build f and g as two-layer MLP with tanh and linear activation with same input and intermediate dimensions (r_0). Note that the output dimension of f and g should be the same as the number of stylistic features (r). Several constraints arise regarding the pretrained encoder itself. We would like our model to be able to capture stylistic information from a given document. As shown in <cit.>, state-of-the-art models trained on large datasets already capture complex grammatical and syntactic notions in their representations, and therefore have the explanatory power requested for our objective. Moreover, our model must be able to deal with long text as it will be used in a literary context. Processing novels, dramas, essays, where writing style interferes the most. This is a serious problem: for example, the widely used BERT model is limited to 512 tokens. Alternative models such as <cit.> allow to apply transformers to long documents. To circumvent this issue, we use the Deep Averaging Network implementation of the Universal Sentence Encoder (USE) from <cit.>. 
It has several advantages over the latter works: it gives no length constraint, it is faster than transformer-based methods and it outperforms Sentence-BERT on stylistic features prediction <cit.>. The test of other encoder models is left to future works. Finally, note that our model is language agnostic (as it depends on a out-of-the-box text encoder) and can infer representations for unseen documents. § AUTHORSHIP ATTRIBUTION DATASETS §.§ IMDb Corpus The IMDb (Internet Movie Database) corpus is one of the most used ones regarding the authorship attribution task. It was introduced by <cit.> and is composed of 271,000 movie reviews from 22,116 online users. However, most of the works are evaluated on the reduction of this dataset to only 62 authors with 1000 texts for each (IMDb62). Thus, we benchmark our model on IMDb62. As shown later, the task of authorship attribution on this corpus is more or less solved, due to the low number of authors. §.§ Project Gutenberg Dataset The Project Gutenberg is a multilingual library of more than 60,000 e-books for which U.S. copyright has expired. It is freely available and started in 1971. We gathered the corpus using <cit.>. Most of the books are classical novels, dramas, essays, etc. from different eras, which is relevant when studying writing style and represents quite well our context of application. To keep the most authors possible, we randomly sample 10 texts for each author with such a production, leaving 664 authors in our Reduce Project Gutenberg Dataset (R-PGD) (10 times more than IMDB). To be able to deal with such works, we only keep the 200 first sentences of each book. §.§ Blog Authorship Corpus This dataset is composed of 681,288 posts from 19,320 authors gathered in the early 2000s by <cit.>. There are approximately 35 posts and 7,250 words by user. We only take 500 bloggers with at least 50 blogposts to build our reduced dataset of the Blog Authorship Corpus (R-BAC). This dataset is also used in several authorship attribution benchmark, only keeping the top 10 or 50 authors with most productions. We will also test our model on these extraction of the corpus. These two last datasets (PGD and BAC) represent two common uses of author embedding (classic literature and web analysis) with a large number of authors. Usual datasets for authorship attribution (CCAT50, NYT, IMDb62) contain far less classes, further from our context of a web extracted corpus (from Blogger or Wordpress for example)... They are also stylistically and structurally different, allowing to evaluate our approach on various textual formats. For each dataset, we perform a 80/20 train-test stratified split. § EXPERIMENTS §.§ Parameter Setting and Competitors In this section, we present implementation details for our method and competitors. For the encoder functions f and g, we use the architectures presented in the previous section with batch normalization and dropout equal to 0.2 with L2 regularization (1e-5). Grid-search parameters are detailed in Table <ref>. For L, we obtain a good trade-off between accuracy and speed with L=10, as we quickly reach a plateau of performance when increasing its value. We can summarize the tuning of α as follows: * α=0 implies no feature loss and stylistic information, * α =0.5 gives the same importance to feature loss and author loss, * α = 0.9 pushes feature loss to boost style detection. We train the model for 15 epochs on R-PGD and R-BAC, and for 5 epochs on IMDB, BAC10 and BAC50 as the number of authors is around ten times smaller. 
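For concreteness, a single optimisation step of the objective defined above could look as follows in PyTorch. This is only a simplified sketch, not the authors' implementation: the softplus used to keep σ positive, the standard-normal variational prior q(z), and the helper names are assumptions made purely for illustration.

import torch
import torch.nn.functional as F

R, R0, L, ALPHA, BETA = 300, 512, 10, 0.5, 1e-3      # illustrative hyper-parameters

class StochasticHead(torch.nn.Module):
    # maps a document encoding d0 to a mean and a diagonal std (the f and g networks)
    def __init__(self, r0=R0, r=R):
        super().__init__()
        self.f = torch.nn.Sequential(torch.nn.Linear(r0, r0), torch.nn.Tanh(),
                                     torch.nn.Linear(r0, r))
        self.g = torch.nn.Sequential(torch.nn.Linear(r0, r0), torch.nn.Tanh(),
                                     torch.nn.Linear(r0, r))
    def forward(self, d0):
        mu = self.f(d0)
        sigma = F.softplus(self.g(d0)) + 1e-6        # positivity: an implementation choice
        return mu, sigma

def log_q(z1, z2, y, c=1.0, e=5.0):
    # log q(y | z1, z2) with q(y=1) = sigmoid(-c ||z1 - z2|| + e); c, e are learnable in practice
    logits = -c * torch.norm(z1 - z2, dim=-1) + e
    return -F.binary_cross_entropy_with_logits(logits, y, reduction="none")

def kl_std_normal(mu, sigma):
    # KL( N(mu, diag sigma^2) || N(0, I) ), assuming a standard-normal prior q(z)
    return 0.5 * torch.sum(mu**2 + sigma**2 - 2.0 * torch.log(sigma) - 1.0, dim=-1)

def style_vib_loss(mu_a, sig_a, mu_d, sig_d, z_feat, y_author, y_feat):
    # Monte Carlo estimate of the two expected log-likelihood terms with L samples
    ll_a = ll_f = 0.0
    for _ in range(L):
        z_a = mu_a + sig_a * torch.randn_like(sig_a)     # reparametrisation trick
        z_d = mu_d + sig_d * torch.randn_like(sig_d)
        ll_a = ll_a + log_q(z_a, z_d, y_author).mean() / L
        ll_f = ll_f + log_q(z_d, z_feat, y_feat).mean() / L
    kl = kl_std_normal(mu_a, sig_a).mean() + kl_std_normal(mu_d, sig_d).mean()
    return -(1.0 - ALPHA) * ll_a - ALPHA * ll_f + BETA * kl

# toy usage: random tensors stand in for author parameters, encoder outputs and features
head = StochasticHead()
mu_d, sig_d = head(torch.randn(8, R0))
mu_a, sig_a = torch.randn(8, R), 0.1 * torch.ones(8, R)
z_feat, y = torch.randn(8, R), torch.tensor([1., 0., 1., 0., 1., 0., 1., 0.])
style_vib_loss(mu_a, sig_a, mu_d, sig_d, z_feat, y, y).backward()

In the full model, μ_a and σ_a come from author embedding layers, μ_d and σ_d from the fine-tuned text encoder followed by f and g, and α trades off the feature and authorship terms as discussed above.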
We use a partition of 2 GPUs V100. On a single GPU, training the model on the R-PGD dataset takes around 10 hours. In the following section, we report the results for the best version of only. As an ablation study, to justify the use of both the VIB framework and stylistic features, we compare our model with and without these components (respectively called no-VIB and (α = 0)). The code is available on github and will be shared if the paper is accepted. All the datasets are available online. We compare our model with several baselines. We use <cit.> (NGRAM Doc2Vec), a simple average based version of USE <cit.> (a document representation is built from the average of its sentence encoding, and an author representation is an average of its documents). We also compare our approach to DBert-ft <cit.>, a document embedding method where DistilBERT is fine-tuned on the authorship attribution task. The author embeddings are built by averaging the representations of the documents it wrote. We use the parameters detailed in the authors' implementation[<https://github.com/hayj/DBert-ft>]. §.§ Evaluation Tasks We first evaluate the baselines and regarding how well each method captures writing style. As writing style is a complex and a still discussed notion, there is no supervised dataset to evaluate how a model can grasp it. We therefore use a proxy task that consists in predicting stylistic feature from the latent representations. We follow the experimental protocol of <cit.>. The stylistic features are extracted using spacy word and sentence tokenizer, POS-tagger and Name Entity Recognition, spacy English stopwords and nltk CMU Dictionary. For each author, we aim to predict the value of all stylistic features from their embeddings. Each feature is standardized before regression. We use an SVR with Radial Basis Function (rbf) kernel as it offers both quick training time and best results among other kernels in our experiments. We evaluate models using Mean Squared Error (MSE) following a 10-fold cross validation scheme. Secondly, we perform authorship attribution, the task of predicting the author of a given document. We compare with several other authorship attribution methods even though they do not necessarily perform representation learning. Each dataset is split into train and test sets with a 80/20 ratio. For our model, we repeated 5 times the evaluation scheme. For embedding method without classification head, we associate each document with its most plausible author using cosine similarity. We use accuracy to evaluate these results (the percentage of correctly predicted authors out of all data points). §.§ Results on capturing writing style As explained earlier, we use the author embeddings to perform regression and predict each stylistic features. As shown in Table <ref>, only using a simple logistic regression on these stylistic features allows to reach decent scores in authorship attribution, close to these of Universal Sentence Encoder, which is a state-of-the-art method in sentence embedding. As they contain strictly no topic information, it demonstrates how good they are as a proxy of writing style. Thus, a model able to capture them is able to capture writing style. Results on the style MSE metric are shown in Table <ref>. As expected, our model easily outperforms every baseline on all axes. DBert-ft, only trained on the authorship attribution objective performs the worst. 
Even though this approach is based on fine-tuned language models which already capture syntactic and grammatical notions <cit.>, this is not the information that seems to be retained by the network when trained on the author attribution task. This is consistent with what was shown in <cit.>. The models may mainly focus on the semantic information to predict author-document relation. Interestingly, we observe that a simple average of USE representations performs quite well, which confirms that it can successfully capture complex linguistic concepts. is guided by the feature loss to do so. On a qualitative note, we present two additional visualisations to underline the strong advantage of for linguistic and stylistic applications. In Figure <ref>, we present a T-SNE 2D projection of the books of the R-PGD dataset colored by their publication year. A clear color gradient appears, demonstrating that our model can grasp the evolution of writing style through the last centuries. Figure <ref> shows a toy example of a T-SNE 2D projection of well-known authors from the R-PGD dataset and their books (we use α = 0.5). The objects are distributed in the space across clear author specific clusters. The most interesting observation is related to documents that are outside of their author cluster: Thus Spake Zarathustra: A Book for All and None by Nietzsche is a philosophical poem, closer to Hugo, while the rest of its production is mostly essays. The same conclusion goes with The Power of Darkness by Tolstoï, a 5 acts drama, whose embedding is closer to Shakespeare than to Tolstoï novels. The version of Hamlet presented here is fully commented, and thus is closer to analytical and philosophical works of Nietzsche and Plutarch as shown on the figure. We also represent the variance learnt by the model in the size of the author dot. Hugo, who wrote famous novels as well as poetry and dramas, has a greater variance than other authors. §.§ Interpretability of the Representation Space As we use the L_2 distance between document representations and stylistic feature vectors, each of the 300 embedding axes correspond to one given stylistic feature. The soft contrastive loss allows to ensures the L_2 constraint (bringing document embedding and stylistic features vectors closer) while being more flexible than a simple regression loss. When experimenting with the latter, the task showed up to be too hard and disadvantageous regarding both authorship attribution scores and writing style loss. On Figure <ref>, we show the Pearson correlation score between the i^th stylistic feature and the corresponding embedding axis. These correlation values are always maximum for each feature regarding every other embedding coordinate. To further illustrate the interpretability of the embedding space, Figure <ref> shows a selection of 4 stylistic features, the representation value of the matching coordinate for each author. The representation space learnt by is interpretable in terms of writing style. In the context of a multidisciplinary project, involving several searchers in literature and linguistic this is a significant added value. §.§ Results on the Authorship Attribution Task Results on the authorship attribution task for IMDb62 and Blog Authorship Corpus are presented respectively in Table <ref> against state-of-the-art solutions (not necessarily embedding models). On both datasets, our model ranks in top 4, outperforming recent competitors while authorship attribution is not its main task. 
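For the embedding baselines without a classification head, recall that attribution is performed by assigning each document to the nearest author in cosine similarity and scoring by accuracy; a minimal illustrative sketch follows (not the authors' code, array names are hypothetical).

import numpy as np

def attribute_by_cosine(doc_embs, author_embs, author_ids):
    # L2-normalize so that a dot product equals cosine similarity.
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    a = author_embs / np.linalg.norm(author_embs, axis=1, keepdims=True)
    sims = d @ a.T                      # shape (n_docs, n_authors)
    return author_ids[np.argmax(sims, axis=1)]

def attribution_accuracy(predicted, true_authors):
    return float(np.mean(predicted == true_authors))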
Our model is outpaced by Syntax CNN <cit.>, DBert-ft <cit.> and BertAA <cit.>, two variants of BERT fine tuned on the authorship attribution task. As shown by <cit.>, BERT and DistilBERT are really tailored for balanced datasets with short texts such as IMDB62 and Blog Authorship Corpus. The DBert-ft model splits every document in 512 chunks during training, building an even bigger corpus with important improvement, but it is hardly reproducible with our feature loss. BertAA feeds encoded documents from a finetuned BERT together with a set of stylistic features and of most frequent bi-grams and tri-grams to a Logistic Regression. It clearly allows to better perform on Blog Authorship Corpus as this dataset is a mix of several genres and styles, compared to IMDB62 concerning only movie reviews. This confirms our use of stylistic features. Syntax CNN encodes each sentence of a document separately with its syntax. Unfortunately, this model was hardly reproducible and cannot be tested in feature regression using intermediate representation. For lower values of α allow to reach the best accuracy in authorship attribution on these datasets. Additional information bring by stylistic features benefit to the authorship attribution when texts are longer. §.§ Ablation Study and Effect of α We here compare our model to no-VIB and without feature loss. Both variations underperform on both tasks. First, the VIB paradigm offers more versatility than fixed document and author representation which is key to grasp a complex notion such as writing style. Then, the feature loss brings additional information for authorship prediction, as shown by BertAA, which use it to improve BERT classification results. Here, our framework enable to use it directly for document and author embeddings. On Figure <ref>, we evaluate the influence of α which balances the importance given to author loss and feature loss on both feature regression and authorship attribution. Adding just a few stylistic features information (α=0.1) allows to improve the precision of our model in authorship attribution. It forces the model to extract discriminant stylistic information from the input. Surprisingly the same phenomenon appears when shutting down the author loss (α = 1). It creates a deterioration of the style score as authors tend to use a consistent writing style among their documents. Thus gathering a writer with its documents representation also helps to capture its writing habits. (<cit.> call it the “Intra-author consistency”). § CONCLUSION In this article, we presented , a new author and document embedding method which leverages stylistic features. It has several advantages compared to existing works: it easily integrates any pretrained text encoder, it allows to compare authors and documents of any length (e.g., for authorship attribution), build an interpretable representation space by incorporating widely used stylistic features in computational linguistic. It is also able to infer representations for unseen documents at the opposite of most prior approaches. We demonstrated that outperforms existing embedding baselines in stylistic feature prediction, often by a large margin, while staying competitive in authorship attribution. In further experiments, we will incorporate modern text encoders, such as LLaMA <cit.>. 
They are much more difficult to adapt to this task, but as most recent Large Language Models are trained in an autoregressive way, they might have the expressive power needed to grasp the stylistic aspects of authors' productions.
http://arxiv.org/abs/2407.12733v1
20240717165139
A Liouville type theorem for ancient Lagrangian mean curvature flows
[ "Arunima Bhattacharya", "Micah Warren", "Daniel Weser" ]
math.DG
[ "math.DG", "math.AP" ]
Interior Estimates] A Liouville type theorem for ancient Lagrangian Mean Curvature flows Department of Mathematics, Phillips Hall the University of North Carolina at Chapel Hill, NC arunimab@unc.edu Department of Mathematics University of Oregon, Eugene, OR 97403 micahw@uoregon.edu Department of Mathematics, Phillips Hall the University of North Carolina at Chapel Hill, NC weser@unc.edu § ABSTRACT We prove a Liouville type result for convex solutions of the Lagrangian mean curvature flow with restricted quadratic growth assumptions at antiquity on the solutions. [ Arunima Bhattacharya, Micah Warren, and Daniel Weser July 22, 2024 ======================================================== § INTRODUCTION A family of Lagrangian submanifolds X(x,t):ℝ^n×ℝ→ℂ^n is said to evolve by mean curvature if (X_t)^=Δ_gX=H⃗, where H⃗ denotes the mean curvature vector of the Lagrangian submanifold. After a change of coordinates, one can locally write X(x,t)=(x,Du(x,t)) such that Δ_gX=J∇_gΘ (see <cit.>): Here g=I_n+(D^2u)^2 is the induced metric on (x,Du(x)), J is the almost complex structure on ℂ^n, and Θ is the Lagrangian angle given by Θ=∑_i=1^narctanλ_i , where λ_i are the eigenvalues of the Hessian D^2u. This results in a local potential u(x,t) evolving by the parabolic equation u_t =∑_i=1^narctanλ_i. We are free to add any function that does not depend on x to the right-hand side, since this does not change the flow of the gradient graph of u. Thus, we will consider the equation u_t =∑_i=1^narctanλ_i - Θ_0 for a fixed Θ_0. In particular, under this flow, solutions to the special Lagrangian equation ∑_i=1^narctanλ_i = Θ_0 will be stationary. Our main result in this paper is the following: Let u be an entire ancient solution to (<ref>). Suppose that u is convex and satisfies the following growth condition at antiquity: there exists an R_0 such that limsup_t→-∞sup_x∈ℝ^n| u(x,t)|/| x| ^2+R_0<1/(6√(n)+2)^2. Then u is a quadratic polynomial In essence, this theorem states that if a convex ancient solution has the growth of a small quadratic polynomial for times far enough in the past, then the solution must in fact be a quadratic polynomial. Although the growth condition is somewhat restrictive, it applies to the scalar potential u, not to the gradient graph, and requires only convexity without additional constraints on the Hessian. We derive a Hessian estimate for solutions of (<ref>) via a pointwise approach assuming a quadratic growth condition at antiquity. The method used here follows a parabolic adaptation of Korevaar's maximum principle method as in Evan-Spruck <cit.>. A similar maximum principle estimate for the elliptic special Lagrangian equation was carried out in Warren-Yuan <cit.>, inspired by Korevaar's <cit.> estimate for minimal surface equations. As in <cit.>, our method requires a smallness condition on the growth. Once a Hessian bound is established, Hölder regularity and the Liouville property follow from Nguyen-Yuan <cit.>. In 1968, Bombieri, De Giorgi, and Miranda <cit.> established an interior estimate bound for classical solutions of the minimal surface equation in ℝ^n. In the 1970s, Trudinger <cit.> provided a new and simpler derivation of this estimate and partly developed in the process some new techniques applicable to the study of hypersurfaces in general. In the 1980s, Korevaar <cit.> found a strikingly simple pointwise argument. 
Korevaar's technique was adapted to prove estimates for the parabolic version, namely mean curvature flow, by Ecker-Huisken <cit.>, Evans-Spruck <cit.>, Colding-Minicozzi <cit.> in codimension one. Mean curvature flow in higher codimensions including the Lagrangian mean curvature flow has been studied by many authors <cit.>. However, in higher codimensions, an adaption of Korevaar's method with no restriction on the height remains elusive, as the diagonization step is difficult to overcome. To diagonalize the differential of a vector-valued function by rotating the base, one requires that the gradients of each of the functions in the vector be mutually orthogonal, which fails in general. Interior Hessian estimates for the elliptic special Lagrangian equation and the variable phase Lagrangian mean curvature equation have been established via an integral approach in <cit.>, via a compactness approach in <cit.>, via a doubling approach in <cit.>. The Liouville result of <cit.> is generalized to remove the small quadratic constraint via a compactness method in <cit.>. A pointwise approach to proving interior Hessian estimates without any constraints on the growth remains an open question even for the elliptic special Lagrangian equation. Acknowledgments. AB acknowledges the support of the Simons Foundation grant MP-TSM-00002933. DW acknowledges the support of the NSF RTG DMS-2135998 grant. § JACOBI INEQUALITY Let g denote the induced metric of the embedding X, which in the graphical case takes the form g=I_n+D^2u(·,t)D^2u(·,t). If we take a derivative of (<ref>), we find u_tk=g^iju_ijk, so that, if we define the operator L := ∂_t - g^ij∂_ij, we see that equivalently Lu_k=0. We will let V=√(| g|) denote the volume element, and we will define b=V^1/n to be the volume element raised to the power of 1/n. In the following lemma, we show that a Jacobi inequality holds along the “heat flow” of b with respect to L, which will be a crucial ingredient in the proof of the main theorem. For a convex solution of (<ref>), the volume element b=(√(| g|))^1/n satisfies Lb + 2 |∇_g b|^2/b≤0. We directly compute ∂_t b = 1/2ntr(g^-1∂_t g) b ∂_i b = 1/2ntr(g^-1∂_i g) b ∂_ij b = 1/2ntr(g^-1∂_ij g) b + 1/2ntr(∂_jg^-1∂_i g) b + ∂_i b ∂_j b/b . Therefore, we find the initial expansion Lb = 1/2ntr(g^-1 L g) b - 1/2n g^ijtr(∂_jg^-1∂_i g) b - |∇_gb|^2/b . Now we will compute Lg. To do so, we fix a point p and rotate coordinates so that D^2u is diagonalized at p. (Note; a rotation in the domain of a scalar function generates a transpose rotation of the gradient of that function. The conjugation by these is what diagonalizes the Hessian.) We denote by λ_k the eigenvalues of D^2u(p), so that D^2u(p)=diag(λ_1,,λ_n). Note that λ_k≥0 by our convexity hypothesis. In these coordinates at the point p, the metric g takes the form g_kl = δ_kl + Du_k· Du_l = (1+λ_k^2)δ_kl, and thus g^kl = (1+λ_k^2)^-1δ^kl. Applying these identities at the point p, one obtains ∂_t g_kl = (λ_k+λ_l) ∂_t u_kl ∂_ij g_kl = (λ_k+λ_l) u_ijkl + Du_ik· Du_jl + Du_jk· Du_il , so that Lg_kl = (λ_k+λ_l) ∂_t u_kl - g^ij((λ_k+λ_l) u_ijkl + Du_ik· Du_jl + Du_jk· Du_il) = (λ_k+λ_l) ∂_t u_kl - g^ii(λ_k+λ_l) u_iikl + 2g^iiDu_ik· Du_il . Thus, at the point p we find tr(g^-1Lg) = g^kkLg_kk = 2g^kkλ_k ∂_t u_kk - 2g^iig^kkλ_k u_iikk - 2g^iig^kk|Du_ik|^2 = 2g^kkλ_kL u_kk - 2g^iig^kk|Du_ik|^2 . Now we compute Lu_kk. Differentiating Lu_k=0, we obtain Lu_kk = (∂_k g^ij)u_ijk. 
Differentiating the identity δ_a^c=g_abg^bc, one obtains at the point p that ∂_k g^ij = -g^iig^jj(λ_i+λ_j)u_ijk , so that Lu_kk = (∂_k g^ij)u_ijk = -g^iig^jj(λ_i+λ_j)u_ijk^2 . Combining (<ref>) and (<ref>), we find at the point p that tr(g^-1Lg) = 2g^kkλ_kL u_kk - 2g^iig^kk|Du_ik|^2 = -2g^iig^jjg^kk(λ_i+λ_j)λ_kL u_ijk^2 - 2g^iig^kk|Du_ik|^2 . Next, using (<ref>), we directly compute g^ijtr(∂_jg^-1∂_i g) = g^ii∂_i g^kl∂_i g_kl = -g^iig^kkg^ll(λ_k+λ_l)^2u_ikl^2, so that, inserting (<ref>) and (<ref>) into (<ref>), we obtain Lb = 1/2ntr(g^-1 L g) b - 1/2n g^ijtr(∂_jg^-1∂_i g) b - |∇_gb|^2/b = b/2n(-2g^iig^jjg^kk(λ_i+λ_j)λ_k u_ijk^2 - 2g^iig^kk|Du_ik|^2 + g^iig^kkg^ll(λ_k+λ_l)^2u_ikl^2) - |∇_gb|^2/b = b/2n(-2g^iig^jjg^kk(λ_i+λ_j)λ_k u_ijk^2 - 2g^iiδ^jjg^kku_ijk^2 + g^iig^kkg^ll(λ_k+λ_l)^2u_ikl^2) - |∇_gb|^2/b = b/2n(-2g^iig^jjg^kk(λ_i+λ_j)λ_k u_ijk^2 - 2g^iig^jjg^kk(1+λ_j^2)u_ijk^2 + g^iig^kkg^ll(λ_k+λ_l)^2u_ikl^2) - |∇_gb|^2/b = b/2n(-4g^iig^jjg^kkλ_iλ_k u_ijk^2 - 2g^iig^jjg^kkλ_j^2u_ijk^2 - 2g^iig^jjg^kku_ijk^2 + 2g^iig^kkg^llλ_k^2u_ikl^2 + 2g^iig^kkg^llλ_kλ_lu_ikl^2) - |∇_gb|^2/b = b/2n(-2g^iig^jjg^kkλ_iλ_k u_ijk^2 - 2g^iig^jjg^kku_ijk^2 ) - |∇_gb|^2/b . Finally, we use Cauchy-Schwarz to see that |∇_gb|^2/b = b/n^2 g^ii(g^kkλ_ku_ikk)^2 ≤b/n g^iig^kkg^kkλ_k^2u_ikk^2, which, by comparing with (<ref>) and using the fact that λ_k≥0 by convexity, concludes the poof of the lemma. § OSCILLATION AND GRADIENT BOUNDS FORWARD IN TIME In this section, we will prove interior oscillation and gradient bounds forward in time for convex solutions of (<ref>) defined on parabolic cylinders of the form B_R(0)×0,1/n]⊂ℝ^n×ℝ. First, we prove an interior height bound forward in time for general solutions of (<ref>). Suppose that u is a solution of (<ref>) on B_R(0)×0,1/n]. Then we have u(0,1/n)≤arctan( π/R^2) +max _B_R(0) ×{ 0}u( x,0). Up to subtracting a constant, we will assume that max u=0 at t=0. Notice that by simply integrating equation (<ref>) at each point on ∂ B_R(0) we may obtain the bound u(x,t)≤tnπ/2 on ∂ B_R(0)× 0,1/n]. We will construct a supersolution to obtain the interior bound. Let w=tnπ/2( | x|/R) ^2+ntarctan( tnπ/R^2), which satisfies w_t-∑arctanλ_i≥0. By (<ref>), w ≥ u on the boundary, and hence also in the interior of B_R(0)×0,1/n]. Thus u(0,1/n) ≤ w(0,1/n)≤arctan( π/R^2 ) . Suppose for any Θ_0 the function u is a convex solution of (<ref>) on B_2R+1(0)×0,1/n]⊂ℝ^n×ℝ. Then, max_B_1(0)×0,1/n]|Du|≤1/R[ M+arctan( π/ R^2) ] where M is the oscillation of u at t=0. By subtracting a constant, we will assume that max_B_2R+1 (0)×0,T)u=0, so that min_B_2R+1 (0)×0,T)u = -M. Adding Θ_0 t, we may also assume that u is a solution of (<ref>). From (<ref>), we have u(x,t)≤arctan( π/ R^2) on B_R+1(0)×0,1/n]. Since u is convex, we know Θ≥0, and hence u can only increase along the flow of type (<ref>). Therefore, we see that the oscillation is no more than M+arctan( π/ R^2) on B_R+1(0)×0,1/n]. The conclusion follows by applying the convexity condition to any point on B_1(0), noting that the derivative bound from convexity depends only on the oscillation of each time slice. However, since solutions of (<ref>) and (<ref>) only differ by a constant in each time slice, we also obtain the conclusion for solutions of (<ref>). § HESSIAN BOUND In this section, we prove an interior Hessian bound forward in time for convex solutions of (<ref>) and (<ref>). The first proposition applies to convex solutions with sufficiently small gradient |Du|, while the second applies to convex solutions with bounded oscillation. 
Let u be a convex solution of (<ref>) or (<ref>) on B_1(0)×0,1/n]. Suppose that for some α,γ >0 we have α> 3n/2, αγ<1, and | Du| ^2<γ on B_1(0)×0,1/n]. Then for any K>0 λ_max^2(0,1/n)≤e^2nK( 2α/2α- 3n ) ^n/( e^K( 1-αγ) -1) ^2n. Let b =V^1/n ϕ =[ α| Du| ^2-αγ+nt (1-| x| ^2)] ^+ f =e^Kϕ-1 η =f∘ϕ, and consider the function h=η b. Notice that -αγ≤α| Du| ^2-αγ≤0, so ϕ is 0 when t=0 (and thus so is η). Additionally, ϕ vanishes on the boundary ∂ B_1(0) for any time. Now, since -αγ+1>0, the function η will be positive somewhere, and we let ( x_0,t_0) be a point where a positive maximum for h occurs on B_1(0)×0,1/n]. Then we must have Lh=h_t-g^ijh_ij≥0. From spatial maximality, we obtain η_i=-η b_i/b, which implies Lh =η_tb+η b_t-g^ijη_ijb-2g^ijη_ib_j-η g^ijb_ij =( η_t-g^ijη_ij) b+η( b_t+2g^ij b_i/bb_j-g^ijb_ij) . By (<ref>), we know that ( b_t+2g^ijb_i/bb_j-g^ijb_ij) ≤0, so we conclude at ( x_0,t_0) that η_t-g^ijη_ij≥0. We compute η_i =Ke^Kϕ( 2α∑_k=1^nu_ku_ki-2 n t x_i) η_ij =Ke^Kϕ( 2α∑_k=1^nu_kju_ki+2α∑_k=1^nu_kiju_k-2 n tδ_ij) +K^2e^Kϕ( 2α∑_k=1^nu_ku_ki-2 n t x_i) ( 2α∑_k=1^nu_ku_kj-2 n t x_j) η_t =Ke^Kϕ( 2α∑_k=1^nu_ku_kt+n ( 1-| x| ^2) ), so at (x_0,t_0) we have 0 ≤η_t-g^ijη_ij/Ke^Kϕ =( 2α∑ _k=1^nu_ku_kt+n ( 1-| x| ^2) ) -g^ij( 2α∑_k=1^nu_kju_ki+2α∑_k=1 ^nu_kiju_k-2 ntδ_ij) -g^ijK( 2α∑_k=1^nu_ku_ki-2 n t x_i) ( 2α∑_k=1^nu_ku_kj-2 n t x_j) . Using (<ref>), discarding the final term, and diagonalizing the Hessian (which will change the direction of the gradient at (x_0,t_0) but not its magnitude) we obtain 0 ≤ n ( 1-| x| ^2) -2α∑λ_i^2/1+λ_i^2+2 n t∑1/1+λ_i^2, which can be written as 2α∑λ_i^2/1+λ_i^2≤ 3n. Therefore, we conclude at ( x_0,t_0) that any eigenvalue λ_i satisfies the bound λ_i^2/1+λ_i^2≤3n/2α, which is equivalent to λ_i^2≤3n/2α- 3n. Thus, V( x_0,t_0) =∏( 1+λ_i^2) ^1/2 ≤[ 1+( 3n/2α- 3n) ] ^n/2 =( 2α/2α-3n ) ^n/2, and so b( x_0,t_0) ≤( 2α/2α-3n ) ^1/2. Now, for all ( x,t) we have ϕ≤ 1 and hence h( x_0,t_0) ≤ e^K( 2α/2α-3n ) ^1/2, from which it follows that h(0,T)≤ e^K( 2α/2α-3n ) ^1/2. Finally, observe that ϕ(0,1/n)≥-αγ+1, which yields b(0,1/n)≤e^K( 2α/2α-3n ) ^1/2/e^K( 1-αγ) -1. Therefore, the maximum eigenvalue λ_max satisfies 1+λ_max^2≤e^2nK( 2α/2α-3n ) ^n/( e^K( 1-αγ) -1) ^2n. We now combine Proposition <ref> with Corollary <ref> to obtain a Hessian bound from the oscillation. Suppose that u is a convex solution of (<ref>) on B_2R+1(0)×0,1/n] with R=3√(n) and u satisfies -1≤ u≤1 on B_2R+1×{ 0}. Then D^2u(0,1/n)≤ C(n). By Corollary <ref>, we have the bound max_B_1(0)×0,1/n]|Du| ≤1/3√(n)[ 2+arctan( π/9n) ]. We now apply the Hessian bound of Proposition <ref> with T =1/n γ =1/9n( 2+arctan( π/9n) ) ^2<0.61/n α =1.6n K =1 to conclude λ_max^2(0,1/n)≤e( 16) ^n/( e^( 1-αγ) -1) ^2n=C(n). § PROOF OF THE LIOUVILLE THEOREM To prove our main theorem we will show that for any ( x_0,t_0) we have D^2u(x_0,t_0)≤ C(n). Without loss of generality we may choose t_0=0. For fixed x_0, consider ũ_λ(x,t)=1/λ^2u( λ(x-x_0 ),λ^2t), which restricts to a solution of (<ref>) on B_6√(n)+1 (0)×-1/n,0] with sup_x∈ B_6√(n)+1(0)|ũ_λ(x,-1/n)|≤1/λ^2sup_z∈ B_( 6√(n)+1) ( λ+| x_0|) (0)| u( z,-λ^21/n) | . By the growth at antiquity condition, for λ large enough and ε_0<1 small enough we have sup_z∈ B_( 6√(n)+1) ( λ+| x_0|) (0)| u( z,-λ^21/n) | ≤1/( 6√(n)+2-ε _0) ^2( |( 6√(n)+1) ^2( λ+| x_0|) | ^2+R_0) ≤1/1+ε_1λ^2[ ( 1+| x_0|/λ) ^2+R_0/λ^2] , so, by choosing λ>>| x_0| ,R_0, we obtain from (<ref>) sup_x∈ B_6√(n)+1(0)|ũ_λ(x,-1/n)|≤1. We finally apply Proposition <ref> to conclude D^2u(x_0,0)=D^2u_λ(0,0)≤ C(n). 
Since this bound holds at an arbitrary point, it holds everywhere, and we may apply <cit.>, which states that any convex ancient entire solution with a uniform Hessian bound must be a quadratic polynomial. This completes the proof.
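For completeness, here is a short check, not part of the original argument, that such quadratic polynomials are indeed stationary solutions of (<ref>): if u(x)=1/2⟨ Ax,x⟩+⟨ b,x⟩+c with A constant and symmetric, then D^2u≡ A has constant eigenvalues λ_i(A), so ∑_i=1^narctanλ_i(A) is constant; taking Θ_0=∑_i=1^narctanλ_i(A) gives u_t=0=∑_i=1^narctanλ_i-Θ_0, i.e. u solves (<ref>) with vanishing time derivative.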
http://arxiv.org/abs/2407.12390v2
20240717081137
Enhancing Facial Expression Recognition through Dual-Direction Attention Mixed Feature Networks: Application to 7th ABAW Challenge
[ "Josep Cabacas-Maso", "Elena Ortega-Beltrán", "Ismael Benito-Altamirano", "Carles Ventura" ]
cs.CV
[ "cs.CV", "I.4" ]
eHealth Center, Faculty of Computer Science, Multimedia and Telecommunicactions, Universitat Oberta de Catalunya, 08016 Barcelona, Spain cventuraroy@uoc.edu MIND/IN2UB, Department of Electronic and Biomedical Engineeering, Universitat de Barcelona, 08028 Barcelona, Spain Josep Cabacas-Maso et al. Enhancing Facial Expression Recognition through DDAMFN Enhancing Facial Expression Recognition through Dual-Direction Attention Mixed Feature Networks: Application to 7th ABAW Challenge Josep Cabacas-Maso1Elena Ortega-Beltrán 1Ismael Benito-Altamirano1,2 Carles Ventura 1 July 22, 2024 ================================================================================================================================== § ABSTRACT We present our contribution to the 7th ABAW challenge at ECCV 2024, by utilizing a Dual-Direction Attention Mixed Feature Network (DDAMFN) for multitask facial expression recognition, we achieve results far beyond the proposed baseline for the Multi-Task ABAW challenge. Our proposal uses the well-known DDAMFN architecture as base to effectively predict valence-arousal, emotion recognition, and facial action units. We demonstrate the architecture ability to handle these tasks simultaneously, providing insights into its architecture and the rationale behind its design. Additionally, we compare our results for a multitask solution with independent single-task performance. § INTRODUCTION Facial emotion recognition has emerged as a pivotal area of research within affective computing, driven by its potential applications in fields ranging from human-computer interaction to psychological research and clinical diagnostics. Since Ekman's classification of human expression faces into emotions <cit.>, a lot of studies have emerged, in recent years. Calvo et al. <cit.>, Baltrusaities et al. <cit.>, or Kaya et al. <cit.> laid foundational frameworks for understanding facial expressions as a window into emotional states. In contemporary research, Liu et al. <cit.> and Kim et al. <cit.> continued to refine and expand these methodologies, by synthesizing insights from cognitive psychology, computer vision, and machine learning, researchers have made significant strides in enhancing the accuracy and applicability of facial emotion recognition systems. Moreover, the integration of valence and arousal dimensions <cit.> added depth to the interpretation of emotional states, enabling more nuanced insights into human affective experiences. Action unit detection <cit.> complemented these efforts by parsing facial expressions into discrete muscle movements, facilitating a finer-grained analysis of emotional expressions across cultures and contexts. Such advancements not only improved the reliability of automated emotion recognition systems but also opened the possibility to personalize affective computing applications in fields such as mental health monitoring <cit.> or user experience design <cit.>. To tackle all these challenges, researchers have explored innovative architectures such as the DDAMFN (Dual-Direction Attention Mixed Feature Network) <cit.>. This novel approach integrates attention mechanisms <cit.> and mixed feature extraction <cit.>, enhancing the network's ability to capture intricate details within facial expressions. This architecture shows promising results in multitask challenges, together with other pretrained networks <cit.>. There exists a modern day need to create machines capable to comprehend and appropriately respond to human feelings day-to-day on-the-wild applications. 
This challenge was presented series of comptetitions entitled “Affective Behavior Analysis in-the-wild (ABAW)” challenges <cit.>. For the 7th ABAW challenge, at ECCV 2024, two competitions where presented: first, a competition focused on solving a multi-task classification, focused on valance-arousal, emotion recognition, and action units, and second, a competition focused on compound expression recognition. In this work, we present our approach to the first competition, where we implemented our multi-task version of the DDAMFN architecture. The competition presented a smaller dataset (s-AffWild2) <cit.> than in previous challenges (Aff-Wild2) <cit.>. We proposed a fine-tuning training on the s-AffWild2 dataset to increase the performance of the model for a multitask challenge; plus, we evaluated its performance individually on the different task of the challenge: valence-arousal prediction, emotion recognition, and action unit detection. § METHODOLOGY §.§ Dataset curation The s-Aff-Wild2 was introduced by the organizers of 7th ABAW challenge <cit.> as a subset of Aff-Wild2, representing a subset of image frames of the original videos, without any audio data. Only the frames and the annotations for 221,928 images were presented. The dataset presented a preestablished train-validation-test partition: 142,382 instances were introduced for training; 26,876, for validation; and 52,670 where released later on in the challenge as test set. Aproximating, relative partions of: 65% for training, 12% for validation and 23% for test. The frames came with annotations in terms of valence-arousal, eight basic expressions (happiness, sadness, anger, fear, surprise, and disgust), the neutral state, and an `other' category that included affective states not covered by the aforementioned categories. Additionally, the dataset contained annotations for 12 action units: AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25, and AU26.Així com la comparació dels histogrames de valance-arousal. Following the instructions provided by the competition organizers we filtered out image frames that contained invalid annotations. We decided to apply an strict criteria to filter out the frames, removing any frame that contained any annotation value outside the specified acceptable range. Specifically, annotation values of -5 for valence/arousal, -1 for expressions, and -1 for action units (AUs) were excluded from consideration in the analysis, this is summarized in <ref>. Note also that we transformed the annotations of expressions and action units to binary values, where 1 indicates the presence of the expression or action unit, and 0 indicates the absence of the expression or action unit. We applied this rigorous filtering to the train-validation splits, frames containing any annotation values outside the specified acceptable range were excluded from use, even if all other values for that frame were within the normal range. This resulted in a refined dataset of 52,154 frames for training and 15,440 frames for validation, a summary of the dataset after filtering is shown in <ref>. §.§ Network architecture For the 7th ABAW challenge, we adapted the Dual-Direction Attention Mixed Feature Network (DDAMFN) <cit.> to the multitask problem with three different fully-connected layers at the end of the network. These layers consist of a valence-arousal prediction layer with 2 output units, an emotion recognition layer with 8 output units, and an action unit layer with 12 output units. 
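To make the adapted classifier concrete, the sketch below shows three task-specific linear heads on top of a shared feature vector; the feature dimension, the tanh squashing of valence-arousal and all module names are assumptions made for illustration, not the exact DDAMFN implementation.

import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    # Three heads as described above: 2 valence-arousal outputs, 8 expression
    # logits and 12 action-unit logits. feat_dim = 512 is an assumed value.
    def __init__(self, feat_dim=512):
        super().__init__()
        self.va_head = nn.Linear(feat_dim, 2)
        self.expr_head = nn.Linear(feat_dim, 8)
        self.au_head = nn.Linear(feat_dim, 12)

    def forward(self, feat):
        va = torch.tanh(self.va_head(feat))   # squash to [-1, 1] (assumed range)
        expr_logits = self.expr_head(feat)    # intended for cross-entropy
        au_logits = self.au_head(feat)        # intended for binary cross-entropy
        return va, expr_logits, au_logits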
<ref> shows a diagram of the network, which features a base MobileFaceNet (MFN) <cit.> architecture for feature extraction, followed by a Dual-Direction Attention (DDA) module –with two attention heads– and a Global Depthwise Convolution (GDConv) layer. The output of the GDConv layer is reshaped and fed into the three fully-connected layers for the different tasks. §.§ Training Initially, the DDAMFN pretrained model was obtained from stock versions in the source code repository of the original work, which was trained on AffectNet-8 <cit.>. In both our expriments, we preserved the feature extraction, attention mechanisms and the GDConv layer pretained weights, but initialized the fully-connected layers with random weights. For the first experiment, a multitask training was performed in two different stages. First, we focused on training our custom multitask classifier, so, for training, the entire network was frozen except for the custom classifier. The classifier was trained in isolation to learn the distinct characteristics of each task without interference from the feature extraction layers. Subsequently, the entire model was unfrozen, and fine-tuned on all layers. This comprehensive training stage ensured that the feature extraction, attention mechanisms, and classifier were all optimized to work cohesively for the multitask problem. Loss functions were calculated following these criteria: * For valence-arousal prediction, the loss function was calculated using the Concordance Correlation Coefficient (CCC). CCC is a measure that evaluates the agreement between two time series by assessing both their precision (how well the observations agree) and accuracy (how well the observations match the true values). * For emotion recognition, was cross-entropy, which is commonly employed in classification tasks to measure the difference between the predicted probability distribution and the true distribution. * For action unit (AU) detection, the binary cross-entropy loss was used, which is suitable for binary classification tasks, measuring the difference between the predicted probability and the actual binary outcome for each action unit. When the whole model was fine-tuned, an extra loss for the attention head was added. This loss works by calculating the mean squared error (MSE) between each pair of attention heads, and then normalizing the total loss by the number of pairs plus a small epsilon to avoid division by zero. This encourages the attention heads to produce consistent outputs <cit.>. Furthermore, for the Action Unit task, a global threshold of 0.5 was initially tested across all AUs, followed by individual optimization for each AU <cit.>. As for the second experiment, we repeated a similar procedure. However, this time we trained each task individually. This approach allowed us to compare the results for each specific task, providing more detailed insights and enabling a clearer understanding of the performance variations across different tasks. § RESULTS The metrics evaluated include the Concordance Correlation Coefficient (CCC) for valence, arousal, and their combination (valence-arousal), F1 score for emotion classification, F1 score for action unit (AU) detection, and the combined performance score (P Score), as described in the challenge <cit.>. And composed following the challenge P score formulation: P = CCC_V + CCC_A/2 + F1_Expr/8 + F1_AU/12 The performance metrics for the multitask challenge are summarized in <ref>. 
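As a reference for the valence-arousal objective mentioned above, a minimal Concordance Correlation Coefficient helper is sketched below (an illustrative implementation, not the challenge or the authors' code); the corresponding loss can be taken as 1 - CCC.

import torch

def ccc(pred, target, eps=1e-8):
    # CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    pred_mean, target_mean = pred.mean(), target.mean()
    pred_var = ((pred - pred_mean) ** 2).mean()
    target_var = ((target - target_mean) ** 2).mean()
    cov = ((pred - pred_mean) * (target - target_mean)).mean()
    return 2.0 * cov / (pred_var + target_var + (pred_mean - target_mean) ** 2 + eps)

def ccc_loss(pred_va, target_va):
    # Average the 1 - CCC loss over the valence and arousal dimensions.
    return 1.0 - 0.5 * (ccc(pred_va[:, 0], target_va[:, 0]) + ccc(pred_va[:, 1], target_va[:, 1]))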
The fine-tuned DDAMFN model achieved a CCC of 0.549 for valence, 0.524 for arousal, and 0.536 for valence-arousal. The F1 scores for emotion classification and AU detection were 0.277 and 0.470, respectively, resulting in a P score of 1.287. Incorporating threshold adjustments slightly improved the F1 score for AU detection to 0.510 and the P score to 1.327. Training a custom classifier showed a CCC of 0.548 for valence, 0.518 for arousal, and 0.533 for valence-arousal. The F1 scores for emotion classification and AU detection were 0.262 and 0.473, respectively, with a P score of 1.283. Threshold adjustments for the custom classifier improved the F1 score for AU detection to 0.500 and the P score to 1.313. The performance metrics for individual tasks are detailed in <ref>. For individual tasks, fine-tuning the entire DDAMFN model demonstrated higher performance with a CCC of 0.604 for valence, 0.550 for arousal, and 0.577 for valence-arousal. The F1 scores for emotion classification and AU detection were 0.287 and 0.490, respectively, with a P Score of 1.354. Threshold adjustments further enhanced the F1 score for AU detection to 0.529 and the P Score to 1.393. Training a custom classifier presented a CCC of 0.530 for valence, 0.537 for arousal, and 0.533 for valence-arousal. The F1 scores for emotion classification and AU detection were 0.243 and 0.487, respectively, resulting in a P Score of 1.263. Applying threshold adjustments increased the F1 score for AU detection to 0.524 and the P Score to 1.300. Overall, both approaches—fine-tuning the entire DDAMFN model and training a custom classifier—performed well across the evaluated metrics, with fine-tuning the DDAMFN model showing slightly better performance in individual task metrics. § CONCLUSION The results using DDAMFN as a feature extractor with the pretrained weiths are practically the same as those obtained by fine-tuning the entire model. This similarity in performance can be attributed to the architecture of DDAMFN, which has been trained to generalize exceptionally well for the multitask challenge problem. This inherent capability allows it to extract meaningful features that are sufficient for achieving high performance without the need for extensive fine-tuning. In contrast, the single-task experiment revealed a different scenario. Fine-tuning the entire model for the single-task yielded better results than merely training the classifier, and even outperformed the multitask results. This finding underscores the importance of task-specific optimization. When the model is fine-tuned for a specific task, it can leverage the nuances and particularities of that task, leading to superior performance. This suggests that with meticulous optimization of the loss functions and careful consideration of data imbalance, the DDAMFN architecture could potentially achieve even better results in the multitask challenge. Proper handling of these aspects could unlock the full potential of the model, leading to significant performance gains in multitask settings. Moreover, it is crucial to highlight the importance of effective threshold optimization in the Action Units task. By fine-tuning the thresholds, the results improved significantly, with an increase of 0.5 in performance metrics. This improvement demonstrates that beyond model architecture and training strategies, the post-processing steps such as threshold optimization play a vital role in achieving optimal results. 
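A minimal sketch of the kind of per-AU threshold search discussed here is given below; the grid of candidate values and the helper names are illustrative assumptions, not the procedure actually used.

import numpy as np
from sklearn.metrics import f1_score

def optimize_au_thresholds(au_probs, au_labels, grid=np.linspace(0.05, 0.95, 19)):
    # au_probs, au_labels: arrays of shape (n_samples, n_aus); for each action
    # unit, keep the threshold that maximizes the validation F1 score.
    best = []
    for j in range(au_probs.shape[1]):
        scores = [f1_score(au_labels[:, j], au_probs[:, j] >= t) for t in grid]
        best.append(grid[int(np.argmax(scores))])
    return np.array(best)  # one threshold per action unit (12 here)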
Effective threshold optimization ensures that the model's predictions are more accurate and reliable, contributing to overall performance enhancement. In summary, while DDAMFN as a feature extractor performs on par with full model fine-tuning in a multitask setting, the single-task fine-tuning shows that there is room for improvement with targeted optimization strategies in multitask scenarios. This includes better loss function optimization, addressing data imbalance, and refining threshold settings, all of which are crucial for maximizing the performance of the model in the multitask challenge.
http://arxiv.org/abs/2407.13110v1
20240718023652
On the detectability and resolvability of quasi-normal modes with space-based gravitational wave detectors
[ "Changfu Shi", "Qingfei Zhang", "Jianwei Mei" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2407.13056v1
20240717235128
Recent heavy-flavour measurements from ALICE
[ "Jonghan Park" ]
nucl-ex
[ "nucl-ex", "hep-ex" ]
Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism Sangyoun Lee1, Juho Jung2, Changdae Oh3, Sunghee Yun4 Sogang University1, Sungkyunkwan University2, University of Wisconsin–Madison3, Erudio Bio4 leesy0882@sogang.ac.kr1, jhjeon9@g.skku.edu2, changdae.oh@wisc.edu3, sunghee.yun@erudio.bio4 July 22, 2024 ==================================================================================================================================================================================================================================================== § INTRODUCTION The ALICE apparatus has been designed to study the quark-gluon plasma (QGP), a hot and dense medium in which quarks and gluons are deconfined. Open heavy-flavour hadrons, which contain one charm or beauty quark, are unique probes to investigate the properties of the QGP as heavy quarks are produced at the initial hard scatterings of a collision due to their large masses (m_ b<m_ c<<Λ_ QCD) and can therefore experience the entire evolution of the medium created in high energy heavy-ion collisions. Heavy-flavour measurements in Pb–Pb collisions allow us to explore the QGP properties via the medium effects on the heavy quarks traversing it, which result in a net energy loss of these quarks. In p–Pb collisions, the measurements help us to disentangle between initial and final state effects, and to test the interplay between soft and hard processes. The measurements in pp collisions serve as reference to measurement in p–Pb and Pb–Pb collisions, and allow us to test perturbative QCD (pQCD) calculations. Additionally, heavy-flavour hadrons are useful to reveal the details of heavy-quark fragmentation. In these proceedings, some of the recent results of heavy-flavour production measured by ALICE are presented. § RESULTS Figure <ref> (left) shows the cc quark-pair production cross section at midrapidity (|η|<0.5) as a function of the centre-of-mass energy in pp collisions. The measurements are compared to FONLL <cit.> and NNLO <cit.> pQCD calculations. The results at the LHC are systematically higher than the predictions, however, the measured cross sections are compatible with the theory uncertainty band within the current experimental precision. In Fig. <ref> (right), the bb quark-pair production cross section as a function of the center-of-mass energy in pp collisions at midrapidity is shown. The results are compared to the predictions from FONLL <cit.> and NNLO <cit.> calculations, and are compatible within the theoretical uncertainties. The NNLO calculations show smaller uncertainties than the FONLL calculations, and their central values are closer to the data due to the higher perturbative accuracy. These results provide constraints on the pQCD calculations as well as on the PDFs (Parton Distribution Functions). Figure <ref> (left) shows the fragmentation fractions for charm hadrons, i.e., the fraction of quarks fragmenting into a specific hadron among all measured ground-state charm-hadron species in p–Pb collisions at √(s_ NN)=5.02 TeV <cit.> compared to those measured in pp collisions at √(s)=5.02 TeV <cit.>. In spite of the larger system size and higher average charged-particle multiplicity, no significant modification of the hadronisation between pp and p–Pb collisions is observed. The measurements are also compared to the results from e^+ e^- and e p collisions <cit.>. 
The prompt Λ_ c^+-baryon fragmentation fraction at the LHC is about three times larger than in e^+ e^- and e p collisions, and the Ξ_ c^0,+ baryons account for about 10% of the total charm hadron production at the LHC while these were assumed to be significantly smaller in e^+ e^- and e p collisions. This enhancement of baryon production induces a corresponding deficit in the fragmentation fractions of D-mesons with respect to e^+ e^- and e p collisions. The difference of the charm-hadron fragmentation fractions between pp, p–Pb and e^+ e^-, e p collisions indicates that the assumption of universal parton-to-hadron fragmentation is not valid. The p_ T-integrated nuclear modification factors R_ pPb for D^0, Λ_ c^+, J/ψ, and cc̅ in p–Pb collisions at √(s_ NN)=5.02 TeV are shown in the right panel of Fig. <ref>. The R_ pPb for cc̅ is consistent with unity within the measured uncertainties, which implies that the overall charm production rate in p–Pb collisions is consistent with that in pp collisions, indicating no sizeable nuclear modification effects. The R_ pPb values for all charm hadrons are also consistent with unity within uncertainties, implying that the production rates of the individual charm hadrons are not strongly affected by nuclear effects. The experimental results are compared to the p_ T-integrated R_ pPb calculated using both EPPS16 <cit.> and nCTEQ15_ rwHF <cit.> nPDF (nuclear Parton Distribution Function) sets, and are described by both of the calculations within uncertainties. In Fig. <ref>, the ratios of the nuclear modification factor R_ AA of non-prompt D_ s^+ mesons to that of prompt D_ s^+ mesons (left) and non-prompt D^0 mesons (middle) in central (0–10%) and semi-central (30–50%) Pb–Pb collisions at √(s_ NN)=5.02 TeV <cit.> are shown. In the 0–10% centrality class, the R_ AA ratio of non-prompt D_ s^+ to prompt D_ s^+ is greater than unity in the 4<p_ T<12 GeV/c interval, suggesting a larger energy loss for charm quarks with respect to beauty quarks, while the ratio is consistent with unity in the 30–50% centrality class. The measurements of the R_ AA ratio of the non-prompt D_ s^+ to non-prompt D^0 suggest a hint of enhancement in the 4<p_ T<8 GeV/c interval for the 0–10% centrality. The enhancement might be a consequence of the large abundance of strange quarks thermally produced in the QGP and the recombination-dominated hadronisation in this momentum range <cit.>. The measurements are compared to the TAMU model predictions <cit.>, which correctly describe the data within the experimental uncertainties. The elliptic flow coefficient v_2 of non-prompt D^0 mesons and the average v_2 of prompt D^0, D^+, and D^*+ mesons in 30–50% central Pb–Pb collisions at √(s_ NN)=5.02 TeV are shown in Fig. <ref>. The non-prompt D^0 meson v_2 is larger than 0 with 2.7σ significance and no significant p_ T dependence is observed. The non-prompt D^0 meson v_2 is lower than that of prompt non-strange D mesons in 2<p_ T<8 GeV/c with a significance of 3.2σ, indicating a weaker degree of participation in the collective motion of beauty quarks with respect to charm quarks. § SUMMARY The ALICE Collaboration has performed several heavy-flavour hadron measurements in pp, p–Pb, and Pb–Pb collisions at the LHC. In pp collisions, charm baryon enhancement is observed, resulting in larger charm fragmentation fractions for baryons with respect to e^+ e^- and e p collisions. These results suggest that the hadronisation mechanisms are not universal and depend on collision systems. 
In p–Pb collisions, cold-nuclear matter effects do not significantly affect charm quark and charm hadron production. In Pb–Pb collisions, the R_ AA of heavy-flavour hadrons provides a hint of the mass-dependent in-medium parton energy loss and hadronisation mechanism of heavy quarks. With Run3 data, heavy flavour measurements will be achieved with smaller uncertainties and extended transverse momentum reach, improving our understanding of heavy flavour production and hadronisation. 99 2023 ALICE Collaboration, http://dx.doi.org/10.1007/JHEP12(2023)086JHEP 12 (2023) 086 [hep-ex/230804877] alicecollaboration2024measurement ALICE Collaboration, https://arxiv.org/abs/2402.16417arXiv:2402.16417 [hep-ex] [hep-ex/240216417] Cacciari_1998 M. Cacciari et al., http://dx.doi.org/10.1088/1126-6708/1998/05/007JHEP 05 (1998) 007 [hep-ph/9803400] Cacciari_2001 M. Cacciari et al., http://dx.doi.org/10.1088/1126-6708/2001/03/006JHEP 03 (2001) 006 [hep-ph/0102134] Cacciari_2012 M. Cacciari et al., http://dx.doi.org/10.1007/JHEP10(2012)137JHEP 10 (2012) 137 [hep-ph/12056344] d_Enterria_2017 D. d’Enterria et al., http://dx.doi.org/10.1103/PhysRevLett.118.122001Phys. Rev. Lett. 118 (2017) 122001 [hep-ph/161205582] denterria2016triple D. d’Enterria et al., https://arxiv.org/abs/1612.08112arXiv:1612.08112 [hep-ph] [hep-ph/161208112] Czakon_2013 M. Czakon et al., http://dx.doi.org/10.1103/PhysRevLett.110.252004Phys. Rev. Lett. 110 (2013) 252004 [hep-ph/161205582] Catani_2021 S. Catani et al., http://dx.doi.org/10.1007/JHEP03(2021)029JHEP 03 (2021) 029 [hep-ph/201011906] Acharya_2022 ALICE Collaboration, http://dx.doi.org/10.1103/PhysRevD.105.L011103Phys. Rev. D 105 (2022) L011103 [nucl-ex/210506335] Acharya_2017 ALICE Collaboration, http://dx.doi.org/10.1140/epjc/s10052-017-5090-4Eur. Phys. J. C 77 (2017) 550 [hep-ex/170200766] alicecollaboration2024charm ALICE Collaboration, https://arxiv.org/abs/2405.14571arXiv:2405.14571 [nucl-ex] [nucl-ph/240514571] Lisovyi_2016 M. Lisovyi et al., http://dx.doi.org/10.1140/epjc/s10052-016-4246-yEur. Phys. J. C 76 (2016) 397 [hep-ex/150901061] Kova_k_2016 K. Kovařík et al., http://dx.doi.org/10.1103/PhysRevD.93.085037Phys. Rev. D 93 (2016) 085037 [hep-ph/150900792] Kusina_2018 A. Kusina et al, http://dx.doi.org/10.1103/PhysRevLett.121.052004Phys. Rev. Lett. 121 (2018) 052004 [hep-ph/171207024] Eskola_2017 Kari J. Eskola et al., http://dx.doi.org/10.1140/epjc/s10052-017-4725-9Eur. Phys. J. C 77 (2017) 163 [hep-ph/161205741] charmpp5tev ALICE Collaboration, http://dx.doi.org/10.1103/PhysRevD.105.L011103Phys. Rev. D 105 (2022) L011103 [nucl-ex/210506335] PbPb_flow ALICE Collaboration, http://dx.doi.org/10.1140/epjc/s10052-023-12259-3Eur. Phys. J. C 83 (2023) 1123 [nucl-ex/230714084] PbPb_raa ALICE Collaboration, http://dx.doi.org/10.1016/j.physletb.2022.137561Phys. Lett. B 846 (2023) 137561 [nucl-ex/220410386] Altmann:2024kwx J. Altmann et al., https://inspirehep.net/literature/2791213https://arxiv.org/pdf/2405.19137 [hep-ph] [hep-ph/240519137] He_2014 M. He et al., http://dx.doi.org/10.1016/j.physletb.2014.05.050Phys. Lett. B 735 (2023) 445-450 [nucl-th/14013817]
http://arxiv.org/abs/2407.12084v1
20240716180002
Complex structures of boson stars and anisotropic distribution of satellite galaxies
[ "Víctor Jaramillo", "Shuang-Yong Zhou" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "hep-th" ]
USTC-ICTS/PCFT-24-24 []jaramillo@ustc.edu.cn Department of Astronomy, University of Science and Technology of China, Hefei, 230026, China []zhoushy@ustc.edu.cn Interdisciplinary Center for Theoretical Study, University of Science and Technology of China, Hefei, Anhui 230026, China Peng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, China § ABSTRACT We construct and explore the complex structures of boson stars, drawing inspiration from similar configurations of non-topological solitons in Minkowski space. These “molecular states” of boson stars have a multipolar structure and both positive and negative Noether charges within one star, and the opposite charges swap with time. Thanks to the gravitational attraction, they exist even in the case of a free scalar field. We also explore the effects of scalar self-interactions on these complex structures. We propose to use galactic-scale charge-swapping boson stars as a potential solution to the problem of the observed anisotropic distribution of satellite galaxies. Complex structures of boson stars and anisotropic distribution of satellite galaxies Shuang-Yong Zhou July 22, 2024 ==================================================================================== § INTRODUCTION Boson stars are nowadays well known solutions that arise in the context of complex scalar fields coupled to Einstein gravity. The first such self-gravitating, spatially localized, stationary configurations were obtained more than 50 years ago <cit.>. This particular nonlinear solution, which nowadays is known as a mini-boson star, arise within the simplest model of a minimally coupled free scalar field and when spherical symmetry is assumed. Self-interactions such as the quartic <cit.> and sextic <cit.> terms in the scalar potential enriches the family of boson stars. The variety of boson stars that can be found in the literature goes beyond modifications to the scalar potential, see e.g. <cit.> for a generalization including electrically charged scalar fields, <cit.> for solutions with horizons, <cit.> for solutions of a vectorial bosonic field and some others to which we will refer later. However, many of them (and in particular the self-interacting ones) share similar dynamical properties with the mini-boson stars (see <cit.> for a review on this topic), the most important of which include the stability and existence of a formation mechanism <cit.>. This has enable boson stars to find applications in various astrophysical and cosmological scenarios <cit.> Stationary boson stars beyond spherical symmetry have also been studied for quite a long time. Among them are the (stationary) spinning boson stars <cit.>, which have a toroidal topology and are unstable <cit.> unless self-interactions are included <cit.>. Another example is that of (static) dipolar boson stars <cit.>. These solutions are balanced by the attraction of gravity and the repulsion of the two blobs, which possess the same value and sign of the Noether charge but with a π relative phase. They have been revisited recently in <cit.> and again, appear to be stabilized only when self-interactions are included <cit.>. Beyond these two cases, there are the chains of boson stars, constructed naturally with self-interactions <cit.> and interestingly also without them <cit.>. 
Additionally, there can be static (in terms of the spacetime metric) multipolar boson stars <cit.> which contain both dipolar components and chain components in addition to some non-axisymmetric structure, with the overall morphology of the energy density akin to the orbitals of the hydrogen atom. As the dipolar boson stars and chains of stars, the general multipolar case has a nonzero total Noether charge and every multipolar component contributes to the total charge with densities of the same sign. Furthermore, boson star solutions by coupling multiple fields to gravity have also been discussed. On the one hand, stationary non-trivial solutions in the free-field case have been obtained in <cit.> where the scalar fields oscillate with different frequencies. Ref. <cit.> further considered a multi-state superposition. Ref. <cit.> studied the head-on collision of two spherical stars composed of different scalar fields. On the other hand, stationary solutions beyond spherical symmetry have also been obtained. Such is the case of the ℓ-boson star <cit.> which considers a particular superposition of fields which preserves the spherical symmetry of the stress energy tensor; see <cit.> for a generalization that relaxes the constraint to obtain a family of solutions containing the ℓ-boson star as well as the spinning and dipolar stars. The Newtonian version of this configuration was presented in Ref. <cit.>, where the generalization goes one step further and considers more general/excited radial profiles with multiple nodes. The morphology of this last multi-state multipolar configuration has interesting cosmological applications <cit.>. Q-balls <cit.> (see <cit.> for a review) are the Minkowski space cousins of boson stars. Recently, it has been observed that for the model where spherical Q-balls exist, there also exist a tower of complex, multipolar Q-balls <cit.>. These “molecular states” of Q-balls, despite being metastable, can live for a long time <cit.>, especially for the logarithmic potential <cit.>. These complex Q-balls have the intriguing feature that within a ball the co-existing positive and negative charges swap with time. The charge-swapping Q-balls are attractor solutions that can form from quite generic initial conditions <cit.>, and in particular they may form as a consequence of the Affleck-Dine condensate fragmentation <cit.>, thanks to parametric resonance. In light of the similarity of Q-balls and boson stars, in this paper, we shall investigate the complex, charge-swapping structures of boson stars where both positive and negative charges co-exist within a star. We will explore the free scalar field case and the polynomial sextic and logarithmic scalar potentials, illustrating the key differences between potentials and uncovering situations where the gravitational interactions are essential for the existence of such configurations. We will construct complex boson stars by preparing the initial data with superpositions of spherical boson stars located close to each other, and letting the configuration evolve and relax to form multipolar boson stars. These charge-swapping multipolar boson stars are different from the static multipolar boson stars <cit.> mentioned above. First of all, the charge-swapping boson stars are dynamical but with a well-defined morphology. 
However, the essential difference between the two is that: while in the static multipolar configurations the gravitational interaction is counteracted by a phase difference between the scalar field components, in the charge-swapping configurations it is the difference in the sign of the charge that allows the formation of the tower of multipolar configurations. We also find that, while it is possible to achieve a considerable compactness, those charge-swapping boson stars in the large-size, low-compactness limit are well suited for a candidate for galactic dark matter halos. In such a scenario, the morphology of the dipolar configuration influences the structures moving in the outer region of the halo, and as a result, the galactic satellites moving around a host galaxy travel in coherently oriented planes. We shall explore the possibility of explaining the anisotropic distribution of satellite galaxies <cit.> by a charge-swapping dipolar galactic halo. These charge-swapping structures can be formed as a result of collisions of two structures with opposite charges, or, as mentioned, they can be formed by fragmentation of Affleck-Dine-like condensate in earlier universe. This paper is organized as follows. In Section <ref> we introduce the model, some relevant quantities used in the analysis and the explicit equations to treat charge-swapping boson stars as a Cauchy problem. After this, we also derive spherically symmetric boson stars and construct constraint-satisfying initial conditions after some clarification of the units and physical scales for our solutions. We then present and analyze the complex structures of boson stars for the minimally coupled free scalar theory in Section <ref> and for self-interacting fields in Section <ref>. In Section <ref>, we will discuss the possibility of using the dipolar charge-swapping configurations as a potential solution to the anisotropic distribution of satellite galaxies in the Milky Way. We conclude in Section <ref>. Conventions: The metric signature is (-,+,+,+) and we choose the units with c=1. Many dimensionful quantities are written with tildes, while the corresponding dimensionless quantities are without tildes. § THEORY AND SETUP We focus on a canonical U(1) symmetric scalar Φ minimally coupled to gravity with action: S̃ = ∫d^4 x̃√(-g)(R̃/16π G+ℒ_Φ) , where the Newton's constant G=1/m_P^2 and ℒ_Φ = -g^μν∂Φ^†/∂x̃^μ∂Φ/∂x̃^ν-V(|Φ|) . We will consider multiple choices for the potential V(|Φ|) in this paper. We first consider the free, massive scalar case in Section <ref> V(|Φ|)=μ^2 |Φ|^2 , and in Section <ref> we add two different self-interactions to the scalar V(|Φ|) =μ^2 |Φ|^2 - λ |Φ|^4 + g̃ |Φ|^6 , V(|Φ|) = μ^2|Φ|^2(1+Kln |Φ|^2/ℳ^2) . The sextic potential can be considered as a leading truncation for the tree-level effective potential, and the high order terms are neglected as they are further suppressed as compared to the leading terms. The logarithmic interaction (<ref>) is typical of quantum corrected effective potentials. As we see in Section <ref>, the logarithmic potential is very similar to the free scalar case. In the following of this section, we will use the sextic potential (<ref>) as an example to set up the computational framework. (While the free field case is a special case of the sextic potential, we will comment on the differences for the logarithmic case.) It is useful to define the following dimensionless quantities x^μ = μx̃^μ,  ϕ=√(λ)Φ/μ,  𝔤=g̃μ^2/λ^2 ,   R =R̃/μ^2 , similar to those used in <cit.>. 
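As a quick consistency check of this rescaling, the short symbolic computation below (a sketch using sympy; the overall prefactor μ^4/λ that it extracts is implied by, rather than written in, the definitions above) verifies that substituting Φ = μϕ/√(λ) turns the sextic potential into the dimensionless combination |ϕ|^2 - |ϕ|^4 + 𝔤|ϕ|^6 used in what follows.

import sympy as sp

mu, lam, gt, phi = sp.symbols("mu lambda g_tilde phi", positive=True)

# dimensionful sextic potential evaluated at |Phi| = mu*phi/sqrt(lambda)
Phi = mu * phi / sp.sqrt(lam)
V_dimful = mu**2 * Phi**2 - lam * Phi**4 + gt * Phi**6

# dimensionless potential with g = g_tilde * mu^2 / lambda^2
g = gt * mu**2 / lam**2
V_dimless = phi**2 - phi**4 + g * phi**6

# the two differ only by the overall factor mu^4 / lambda
assert sp.simplify(V_dimful - (mu**4 / lam) * V_dimless) == 0
print("V(|Phi|) = (mu^4/lambda) [ |phi|^2 - |phi|^4 + g |phi|^6 ]")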
With these dimensionless quantities, the action (<ref>) with the sextic potential can be written as S =λS̃ = ∫d^4 x√(-g)(R/16πα_G+ℒ_ϕ) , where ℒ_ϕ = -g^μν∂_μϕ^†∂_νϕ -V(|ϕ|) , with V(|ϕ|)= |ϕ|^2 - |ϕ|^4 + 𝔤 |ϕ|^6 . and we have defined an effective (dimensionless) gravitational constant α_G = G μ^2/λ =μ^2/m_P^2λ , which indicates the strength of the gravitational self-interaction in the presence of the self-interacting scalar field. That is, we can simply use action (<ref>) to extract physics for the sextic potential with generic couplings μ,λ,g, different choices of μ and λ corresponding to scalings of the solution obtained with action (<ref>). Of course, for the classical solution to be valid, λ should be relatively small, as its smallness can be linked to that of the reduced Planck constant: S̃=S/(λħ). The free field potential (<ref>) can be obtained from the sextic case by taking the limit α_G→∞ (and g̃→ 0), where the |ϕ|^4 and 𝔤|ϕ|^6 terms become negligible, and the spherical solutions of the action (<ref>) approach the mini-boson star sequence <cit.>. In the opposite limit, when α_G→ 0, the gravitational backreaction vanishes, the spacetime reduces to Minkowski space and the stationary solutions become the Q-balls. For the logarithmic potential, the corresponding quantities are defined as follows x^μ = μx̃^μ,  ϕ=Φ/ℳ,   R =R̃/μ^2 , Variation of the action (<ref>) with respect to g_μν leads to the Einstein equations: R_μν-1/2 g_μν R= 8πα_G T_μν , T_μν=g_μνℒ_ϕ-2∂ℒ_ϕ/∂ g^μν, and the ϕ variation leads to the Klein-Gordon equation, ∇_μ∇^μϕ = dV/d(|ϕ|^2)ϕ . Now, we define some global quantities that will be useful in both the initial data construction and in the evolution of the dynamical system. The first one is the mass of a stationary spacetime. While in Minkowski space one can simply use the integral of the energy density T^00 E = ∫_Σ_tT^00 √(γ)d^3 x . as a measure of the total energy of the system, in curved spacetime other definitions need to be used to also account for the gravitational energy. We shall use the Komar integral associated to the Killing vector ξ = ∂_t, M_ Komar = 1/4πα_G∫_Σ_t R_μν n^μξ^ν√(γ) d^3 x , where Σ_t is a spacelike surface of constant t with n^μ a vector normal to it and γ is the determinant of the spatial metric induced in Σ_t. We will further evaluate this integral in Section <ref> when we discuss the 3+1 decomposition of the equations of motion. The Komar mass reduces to E in Minkowski space for a stationary configuration. To see this, we use the Einstein equations, which gives M_ Komar = 2 ∫_Σ_t( T_μν n^μξ^ν - 1/2 T n_μξ^μ) √(γ) d^3 x . Then we use the fact that the integral over the T part of Eq. (<ref>) vanishes due to the viral identity <cit.>. Strictly speaking, M_ Komar is not a conserved quantity in non-stationary spacetimes. However, it remains useful, along with E, for approximately tracking the evolution of a gravitational system (see, e.g., <cit.>). Associated with the U(1) symmetry of the scalar field is the current j_κ = i(ϕ^†∇_κϕ - ϕ∇_κϕ^†) , which is divergence-free and can be used to define a total scalar charge Q = -∫ n_μ j^μ√(γ)d^3 x . §.§ 3+1 decomposition As we wish to study the complex, dynamical structures of boson stars, we will need to evolve the Einstein-Klein-Gordon system. This can be formulated as a Cauchy problem using a 3+1 decomposition ds^2=-α^2 dt^2 + γ_ij(dx^i+β^idt)(dx^j+β^jdt) . 
Apart from the “coordinate”-like variables γ_ij, the “velocity”-like variables are given by the extrinsic curvature, defined by, K_ij = - 1/2α( ∂_t - _β) γ_ij , where denotes the Lie derivative. For the scalar field, we adopt the “velocity”-like variable K_ϕ = -1/2α( ∂_t - _β) ϕ . For a stable evolution of the gravitational system, we use the Baumgarte-Shapiro-Shibata-Nakamura-Oohara-Kojima (BSSNOK) formulation <cit.>, which introduces auxiliary variables and makes use of a conformal metric with unit determinant γ̃_ij=χ γ_ij. In this approach, one re-writes the equations of motion as follows (see, e.g., <cit.> for details), ( ∂_t - ℒ_β) γ̃_ij = - 2 αÃ_ij , ( ∂_t - ℒ_β) χ = 2/3αχ K , ( ∂_t - ℒ_β) K = χγ̃^ijD_j D_iα + α(Ã_ijÃ^ij + 1/3 K^2) + 4 πα_Gα (ρ + S) , ( ∂_t - ℒ_β) Ã_ij = […] - 8 πα_G α( χ S_ij - S/3γ̃_ij) , ( ∂_t - ℒ_β) Γ̃^i = […] - 16 πα_G αχ^-1 P^i , (_t - _β) ϕ = - 2 α K_ϕ , (_t - _β) K_ϕ = α[ K K_ϕ - 1/2χγ̃^ijD̃_i ∂_j ϕ + 1/4γ̃^ij∂_i ϕ∂_jχ. .+ 1/2( 1 - 2 |ϕ|^2 + 3𝔤|ϕ|^4 )ϕ] - 1/2χγ̃^ij∂_i α∂_j ϕ , where K is the trace of K_ij, Ã_ij is the traceless part of the conformal extrinsic curvature, D̃_i (D_i) is the covariant derivative compatible with γ̃_ij(γ_ij), Γ̃^i are the conformal connection functions and we have defined the source terms ρ ≡ T^μνn_μn_ν ,     P_i ≡ -γ_iμ T^μνn_ν , S_ij ≡γ^μ_i γ^ν_j T_μν ,      S ≡γ^ijS_ij . […] in the equation for Ã_ij and Γ̃^i denote the standard quantities defined in the right-hand side of the BSSNOK equations, except for the source terms which have been explicitly included above. The real and imaginary part of the scalar field ϕ are evolved separately in the actual code. Finally, the evolution is subject to a set of constraints given by H ≡^(3)R - K_ij K^ij + K^2 - 16 πα_Gρ = 0 , ℳ_i ≡ D^j K_ij - D_i K - 8πα_G P_i = 0 . §.§ Stationary isolated Boson Stars We first consider an isolated, spherically symmetric boson star solution with a scalar field ϕ=f(r)exp(-iω t) . We choose the following ansatz for the spherical and static line element, ds^2=-α^2(r)dt^2+Ψ^4(r)(dr^2+r^2dΩ^2) . The solution for the functions f, α, Ψ and the eigenvalue ω can be obtained using the spectral code described in <cit.> to solve the following equations Δ_3Ψ+2πα_GΨ^5[(ω f/α)^2+(∂_r f)^2/Ψ^4+ V_f ]=0, Δ_3 α+2∂_r α∂_rΨ/Ψ-8πα_G αΨ^4[2(ω f/α)^2 - V_f ]=0, Δ_3 f + ∂_r f ∂_r α/α+2∂_r f ∂_r Ψ/Ψ-Ψ^4[dV_f/df^2-ω^2/α^2] f=0 , where Δ_3:=∂_r^2 + 2/r∂_r and V_f = f^2 - f^4 + 𝔤 f^6. This system of ordinary differential equations should be supplied with the boundary conditions from the regularity of the functions at r=0 and the asymptotic flatness of the spacetime: ∂_r f|_r=0= ∂_r α|_r=0= ∂_r Ψ|_r=0=0 ; f |_r→∞ = 0, α|_r→∞= Ψ|_r→∞=1 . Now let us obtain the specific expressions of the global quantities that will be used to characterize the stars. For an isolated boson star, it is easy to extract the ADM mass M_ ADM, which can be read off from the asymptotic behavior of the metric functions far away from the center, Ψ^4 = 1 + 2α_G M_ ADM/r + 𝒪(1/r^2). Alternatively, we may evaluate the mass via the Komar integral (<ref>), which for the spacetime (<ref>) reduces to M_ Komar = 1/α_Glim_r→∞∂_r (r^2α) . This coincides with M_ ADM given that the spacetime is stationary <cit.>. In fact, we will use the relative difference between the masses as an error indicator for the spherical solver <cit.>. We will use M to refer to both masses in the following. For the U(1) charge of the scalar field, defined in Eq. (<ref>), we get the following expression in the present case, Q = 2∫_Σ_tω f^2/α√(γ)d^3 x . 
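In practice, once a numerical solution {f(r), α(r), Ψ(r)} is available on a radial grid, the global quantities above reduce to a one-dimensional quadrature and an asymptotic fit. The following sketch is illustrative only: the radial profile below is a made-up placeholder rather than a solution of the field equations. It evaluates Q using √(γ)d^3x = 4πΨ^6 r^2 dr for the isotropic line element, and reads off M_ADM from the 1/r fall-off of Ψ^4.

import numpy as np

def global_charges(r, f, alpha, Psi, omega, alpha_G):
    # Noether charge:  Q = 2 * int( omega f^2 / alpha ) sqrt(gamma) d^3x ,
    # with sqrt(gamma) d^3x = 4 pi Psi^6 r^2 dr for the isotropic line element
    Q = np.trapz(8.0 * np.pi * omega * f**2 / alpha * Psi**6 * r**2, r)
    # ADM mass from the asymptotic behaviour  Psi^4 ~ 1 + 2 alpha_G M_ADM / r
    M_adm = np.mean(r[-20:] * (Psi[-20:]**4 - 1.0)) / (2.0 * alpha_G)
    return Q, M_adm

# made-up placeholder profile (NOT a solution of the field equations), for illustration
r = np.linspace(1e-3, 60.0, 4000)
f = 0.05 * np.exp(-0.5 * (r / 4.0) ** 2)
alpha = 1.0 - 0.02 * np.exp(-r / 10.0)
Psi = (1.0 + 0.04 / (r + 5.0)) ** 0.25
Q, M = global_charges(r, f, alpha, Psi, omega=0.98, alpha_G=1.0 / (8 * np.pi))
print(f"Q = {Q:.4f},  M_ADM = {M:.4f}")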
The radius R_99, commonly used to characterize the size of a boson star, refers to the value of areal radius that encompasses 99% of the total boson star mass. More specifically, this radius is defined in coordinates (<ref>) as the value R_99=Ψ^2(r_99) r_99 such that the Misner-Sharp function M(r)=-2r/(1+r∂_rlnΨ) evaluated at r = r_99 is equal to 0.99M. Finally, the compactness 𝒞 of a boson star will be defined as the mass M divided by R_99 𝒞 = M/R_99 . For every pair of (α_G, 𝔤), there exists a family of solutions parameterized (in most cases) by the value of the scalar field at the center of the star, f_0. For α_G≪1, we expect to recover the Q-ball solutions, and, in particular, the parameter space explored in <cit.>. In Fig. <ref>, we plot the mass and the compactness for sequences of solutions with different values of coupling constant α_G. From the left panel of Fig. <ref> we see that, contrary to what happens in the Q-ball case, the mass of the solutions is regularized at the maximal frequency ω. Additionally, when gravity is introduced, solutions beyond the Q-ball thin-wall limit can exist. In our case, this limit is ω = (1-1/(4𝔤))^1/2. These solutions are of particular interest since, as can be seen in the right panel of Fig. <ref>, the compactness can reach very high values. For the corresponding static black hole, we have 𝒞=0.5, while the maximum of 𝒞 is around 0.1 for mini-boson stars (λ = 0 = g̃) <cit.> and around 0.16 for strongly self-interacting boson stars (large λ/(Gμ^2), g̃=0) <cit.>. In Fig. <ref> we show a plot of the charge and energy-charge ratio for two families with 𝔤 = 1/2 and gravitational couplings α_G = 1/(8π) and 1/(8π× 500). From Fig. <ref> we already see that the effect of gravity substantially modifies the configuration. In particular, when gravitational effects are strong (the top plot), the mass-charge ratio M/Q goes above 1 when ω is close to the lower frequency limit. Note that M/Q≤1 would be the stability condition of the Q-ball in the flat space limit, and for the Q-ball case, M/Q≤1 is satisfied in the lower frequency limit, similar to the case of the bottom plot of Fig. <ref>. However, M/Q≤1 may cease to be a reliable stability criterion in the presence of strong gravitational attractions. Nevertheless, the solution with a small value for α_G approaches the Q-ball solution as expected, particularly in the region where the scalar field is smaller and hence the gravitational backreaction is weaker. As mentioned in Section <ref>, in the limit α_G→∞, the model tends formally to the case of a free scalar field, which gives rise to mini-boson stars with the scalar field scaling inversely proportional to √(α_G) when α_G≫1. This tendency is consistent to what we see as we increase the value of α_G in Fig. <ref> and also Fig. <ref> in the next section. §.§ Units and scales In the above formulation, the mass M = M_ ADM = M_ Komar is dimensionless, so is the left part of the Einstein equations. To convert to the standard units, we can compare, for example, the asymptotic behavior of the metric in different units, from which we find α_G M /r = G M̃ / r̃, with r̃ being the dimensionful radius. Therefore, we can recover the standard units for the mass M̃ = μ/λ M = α_G/Gμ M . In units of the solar mass, we have M̃ = (1.33×10^-10eV/(ħμ) ) α_G M M_⊙ . As one may expect, the Compton wavelength of the scalar μ^-1 more or less sets the typical length scale of the localized configurations. 
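The conversion between code units and physical units can be packaged in a few lines. The sketch below implements the mass formula above together with the definition of α_G; the input numbers are purely illustrative and are not values used in this work.

# conversion of code units to physical units, using the relations above:
#   alpha_G = mu^2 / (m_P^2 lambda)                     (dimensionless gravitational coupling)
#   M_phys  = (1.33e-10 eV / (hbar mu)) * alpha_G * M    in solar masses
M_PLANCK_EV = 1.22e28        # (non-reduced) Planck mass in eV

def alpha_G_of(hbar_mu_eV, hbar_lambda):
    return hbar_mu_eV**2 / (M_PLANCK_EV**2 * hbar_lambda)

def mass_in_solar(M_dimensionless, hbar_mu_eV, alpha_G):
    return 1.33e-10 / hbar_mu_eV * alpha_G * M_dimensionless

# purely illustrative numbers (not values used in this work)
hbar_mu, hbar_lam = 1.0e-10, 1.0e-10        # eV, dimensionless
aG = alpha_G_of(hbar_mu, hbar_lam)
print(f"alpha_G = {aG:.3e}")
print(f"a star with alpha_G*M = 0.5  ->  {mass_in_solar(0.5/aG, hbar_mu, aG):.2f} M_sun")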
Given the scalar mass (ħμ), say, in electronvolts, then the length scale in kilometers or in parsecs is respectively given by μ^-1 = (1.97×10^-10eV/ħμ) km = (6.40×10^-24eV/ħμ) pc . These numerical factors can be used quickly to convert to the standard units for any distance quantity μ r. Both of them will be used in the following, km for astrophysical applications and pc for the very dilute dark matter halo applications. See Fig. <ref> for an example of this conversion. Smaller values of μ lead to configurations of the size of galactic dark matter halos. For example, this is the case for the ultralight cosmological scalar field <cit.> with ħμ∼ 10^-22eV, implying a Compton wavelength of ∼0.1 pc and sizes for the corresponding cold and dilute self-gravitating objects of ∼1 kpc <cit.>. We can also infer from Fig. <ref> that in such cases, where an extremely low value of the compactness is expected, the present model with a variety of α_G's can give rise to solitonic solutions that fulfill that requirement. From α_G's definition (<ref>), we see that if the scalar coupling ħλ is around 𝒪(0.1), then α_G is an extremely small number if μ^-1 is of the kilometer scale, since m_P∼ 10^19 GeV. These cases essentially reduce to Q-balls, which for a kilometer scale μ^-1 is astronomical in size. A larger α_G is obtained if either μ is close to the Planck mass or λ is highly suppressed, say, around ħλ∼𝒪(10^-10). For the former case, since μ sets the size of the localized configuration, these objects are quite small. The later corresponds to astronomical structures such as stars or larger objects. §.§ Preparing complex boson stars: solving initial constraints General Relativity is highly nonlinear, and the gravitational field in general does not obey a linear superposition law. Thus, we can not simply superpose boson star solutions, which is in general not a solution, and let it relax towards a complex boson star configuration. We need to carefully prepare the initial data by solving the Hamiltonian and momentum constraints. The 3+1 formalism with a scalar field as a source was presented in Section <ref>. For our purposes, we shall choose K_ij=0 at t=0, which means that the momentum density P^i of the initial data that we intend to construct vanishes, that is, we wish to start with two clumps standing initially with zero velocity. The extrinsic curvature vanishes for time symmetric data, and therefore we can ignore the momentum constraints (<ref>) and solve only the Hamiltonian constraint (<ref>), which in this case is reduced to H ≡^(3)R - 16 πα_G ρ = 0 . We shall solve the Hamiltonian constraint by an iterative method. To that end, we start from the scalar configuration f^ (BS) of a spherical boson star. With the f^ (BS) configuration, as an initial guess, we can prepare, for example, the dipolar scalar configuration by placing a boson star and an anti-boson star on the z-axis separated by distance d, ϕ = f^ (BS)(x,y,z-d/2)e^-iω t + f^ (BS)(x,y,z+d/2) e^iω t. Similarly, for the canonical momentum of the scalar field, we use the initial input K_ϕ = K_ϕ^ (BS)(x,y,z-d/2) + K_ϕ^ (aBS)(x,y,z+d/2). where K_ϕ^ (BS) = iω f^ (BS)e^-iω t/(2α^ (BS)) and K_ϕ^ (aBS) = - iω f^ (BS)e^iω t/(2α^ (BS)). Then, we make a simple superposition of the corresponding gravitational fields. This setup gives us the initial guess in the Newton iterative scheme used to solve the Hamiltonian constraint. More concretely, according to Eq. 
(<ref>), the spatial metric of an isolated boson star in these coordinates is simply γ_ij^(BS) = Ψ_(BS)^4δ_ij. Recall that the background metrics for an isolated boson star and anti-boson star are the same, so a simple superposition of the spatial metric γ̃_ij takes the form <cit.>, γ̃_ij(𝐫) = (Ψ_(BS)^4(x,y,z-d/2)       + Ψ_(BS)^4(x,y,z+d/2) - 1 ) δ_ij We shall choose to solve Eq. (<ref>) using the conformal thin-sandwich approach, where the 3+1 spatial metric is related to the conformal metric γ̃_ij by γ_ij=ψ^4 γ̃_ij. So, using spherical coordinates and defining[ Although in the dipolar case both ϕ and K_ϕ depend only on r and θ, we here also keep track of the φ angle dependence in the geometric and scalar field quantities so that the generalization to the multipolar case is straightforward to obtain from the equations presented here. For the dipole solutions, both the simple superposition guess γ̃_ij and the expected solution γ_ij for the Hamiltonian constraint are axisymmetric, so we also have A = A(r,θ). ] A(r,θ,φ) = (Ψ_(BS)^4(x,y,z-d/2). .+ Ψ_(BS)^4(x,y,z+d/2) - 1)^1/2 ; the line element reads, dl^2 = ψ^4(r,θ,φ) A^2(r,θ,φ) (dr^2 + r^2dΩ^2 ) . In this formulation, the Hamiltonian constraint becomes D̃^2ψ - 1/8ψ^(3)R̃ = -2πψ^5ρ, where D̃ is the covariant derivative associated with γ̃ and ^(3)R̃ is the 3-dimensional Ricci scalar of γ̃_ij. For the initial guess for the conformal factor, we choose ψ(r,θ,φ) = 1, because of the conformal metric we have chosen. Furthermore, we take as initial data for the lapse function α a simple superposition <cit.> α(r,θ) = α^(BS)(x,y,z-d/2) + α^(BS)(x,y,z+d/2) - 1. See Refs. <cit.> for other possibilities. In order to solve the Hamiltonian constraint we calculate ^(3)R̃ = -2/A^3(2Δ_3 A - ∂ A ∂ A/A) , and D̃^2ψ = 1/A^2Δ_3ψ + 1/A^3∂ A ∂ψ , where we have used the 3-dimensional Laplacian and the scalar product of the gradients in spherical coordinates Δ_3 = ∂^2_r + 2∂_r/r + ∂_θ^2/r^2 + ∂_θ/r^2tanθ + ∂_φ^2/r^2sin^2θ , ∂ u ∂ v = ∂_r u ∂_r v + ∂_θ u ∂_θ v/r^2 + ∂_φ u ∂_φ v/r^2sin^2θ . For the dipolar configuration, when applied to the fields A and ψ, the derivatives with respect to φ do not contribute. All that remains is to obtain the energy density, which can be calculated from Eq. (<ref>) using the metric of the spatial slice, leading to ρ = 4 K_ϕ K_ϕ^† + ∂ϕ∂ϕ^†/ψ^4 A^2 + |ϕ|^2 - |ϕ|^4 + 𝔤|ϕ|^6 . Notice that it depends on ψ so ρ should be updated at every iteration in the root-finding procedure used in the elliptic solver for the Hamiltonian constraint. Finally, combining Eqs. (<ref>), (<ref>) and (<ref>), we arrive at the following partial differential equation: Δ_3ψ + ∂ A∂ψ/A + ψ/4(2Δ_3 A/A-∂ A∂ A/A^2) + 2πα_Gψ^5A^2ρ = 0 , which needs to be solved with the following boundary conditions: ψ_r→∞ = 1 , ∂_r ψ|_r=0 = 0, ∂_θψ|_θ = 0,π = 0 . We solve this elliptic partial differential equation using the library <cit.>. This spectral solver has been utilized in various theoretical physics scenarios, particularly for constructing axisymmetric boson stars <cit.>. Besides the axisymmetric dipolar case, we will also be interested in solving Eq. (<ref>) without axisymmetry for the quadrupole and octupole cases. To the end, we employ the spherical space facility implemented in the library, which uses spherical coordinates and decomposes the space into spherical shells with the possibility of placing as the last shell one with a compactified radial coordinate such that boundary conditions at infinity can be imposed exactly. Full details of this space can be found Ref. 
<cit.> where the solver was presented. In the spherical space of the library, the spectral basis functions used in the decomposition of the fields are chosen such that regularity at the origin is ensured and the parity of the fields with respect to θ=π/2 plane is respected. In essence, the spectral code will deal with the spectral coefficients F_ijk of certain function F. If the function is unknown, the code will try to find the solution through a Newton-Raphson iterative scheme once an initial guess for F_ijk or equivalently F is given. For our case, the unknown function is the conformal factor. Roughly speaking, the fields are decomposed into Chebyshev polynomials in the radial domain. In the θ domain, sines and cosines of definite parity are used, and for the φ angle, a simple Fourier decomposition is utilized. The particular relation between f and f_ijk is, F(r,θ,φ) = ∑_i = 0^N_r∑_j=0^N_θ[ ∑_k=0, k even^N_φ F_ijkcos(mφ) . + . ∑_k=1, k odd^N_φF_ijksin(mφ) ] Θ_j(θ) X_i(x) with m = ⌊k/2⌋ and x is some function that maps r to the domain [-1,1], which depends on the particular radial domain considered. In the inner domain, the (basis) functions X_i and Θ_j are defined according to the oddness of the integer m and the symmetry of the function F with respect to the θ=π/2 plane. For F symmetric, X_i = T_2i(x) , Θ_j = cos(2jθ) if m is even, X_i = T_2j+1(x) , Θ_j = sin((2j+1)θ) if m is odd. For F antisymmetric, X_i = T_2i+1(x) , Θ_j = cos((2j+1)θ) if m is even, X_i = T_2j(x) , Θ_j = sin((2j)θ) if m is odd. For the outer domain, the basis functions are simply chosen as X_i(x) = T_i(x) independently of the properties of m and F. The concrete form of the decomposition of the odd and even functions is then used to transfer the initial data from the coefficient space (3D array provided by ) to a 3D mesh. For instance, in the case of the dipole, the function ϕ and the metric coefficients are symmetric functions with respect to the equatorial plane, but K_ϕ is anti-symmetric. § FREE SCALAR We shall first consider complex boson stars that are charge-swapping in the model where the scalar potential is simply V(|ϕ|) =|ϕ|^2 . This can be formally obtained by taking the limit α_G→∞ and 𝔤/α_G^2→ 0 from the sextic potential. The gravitational attraction makes it possible to have charge-swapping solitons even for this quadratic/free potential. In this model, stable (mini-)boson stars exist from the zero mass limit, ω = 1, up to the critical mass <cit.> located at ω = 0.853 and[ In the limit α_G→∞ the dimensionless mass M, energy E and other quantities scale as α_G^-1, so we include α_G factors to make them finite. However, it is important to say that after restoring units with the formulas given in Section <ref>, the physical mass and Noether charge of the solutions in the α_G→∞ limit correspond to the expected mini-boson star finite quantities (as they should be). For instance for the critical mass M̃ = 0.633/(Gμ), as expected for the Kaup limit solution.] α_G M=0.633; see Fig. <ref>. This stability range of boson stars restricts the domain of configurations we can use to prepare “molecular states” of boson stars. At first glance, one might expect that only by superposing configurations such that the total mass does not significantly exceed the critical value of an isolated boson star can such charge-swapping configurations be formed. 
However, on the initial relaxation stage, the system can radiate away a significant amount of the initial energy, and therefore the charge-swapping configuration can emerge from a large initial mass. Nevertheless, it is also true that if the initial mass exceeds a critical value, the final state typically is a black hole. Considering this, we choose six cases of mini-boson stars with frequencies between 0.95 and 0.995. The information about these isolated boson stars is summarized in Table <ref>. To determine how the system evolves, the initial data are exported into the evolution scheme Eqs. (<ref>), implemented in the open source infrastructure <cit.> using the <cit.> thorn to evolve the metric quantities and a modified version of the thorn <cit.> to evolve the scalar field. To track the formation of apparent horizons we use the <cit.>. The evolution is implemented with a grid spacing of Δ x^i = 1.6 and a Courant factor of Δ t/Δ x=0.125 and with different fixed refinement levels included in the thorn <cit.>. In all cases, the size of the computational domain box is at least twice that of d + 2 R_99 (the approximate size of the configuration at t=0) for the configuration with highest compactness and largest d. The particular size of the domain and the distribution of the refinement level will be specified in the following and kept fixed for all the simulations in the same section for an easy comparison. As discussed in <cit.> and <cit.>, boundary conditions are important when investigating the lifetime of a charge-swapping configuration. In line with this, we have also employed absorbing (radiative) boundary conditions that are usual for boson star simulations. For the scalar field, we use the standard Sommerfield boundary conditions but additionally include the first order correction to the massless scalar field dispersion relation <cit.>: ∂_t K_ϕ = -∂_rK_ϕ- K_ϕ/r+μ^2αϕ/4 at r→∞. As for the gauge freedoms, we impose the “Gamma-driver” condition for shift vector β^i and the “1+log” condition for the lapse α. The interpolation from the initial data spectral coefficients and the analysis of various quantities during the evolution are done using separate thorns also within the framework. The choices of the numerical setup employed in this work have been tested in Appendix <ref>. §.§ Dipolar (charge-swapping) boson star For the dipolar boson star, we initially prepare a superposition of two spherical boson stars with opposite charges but the same values of |ω|, which take the first four rows of Table <ref>, corresponding to ω = 0.95, 0.96, 0.97 and 0.98. Then, following the procedure presented in Section <ref>, there is only one more parameter to choose: the initial distance between the superposed stars d. We vary d between 0 and 36 in steps of 6 for the four frequencies. For the low compactness case with ω=0.98, which is our fiducial model, we also explore d=42. This gives a total of 25 simulations for the free-field dipole case, for which we choose a cubic grid of side 160 and use the mirror-symmetry in the plane xz and yz to reduce the computational cost by a factor of 4. We use three refinement levels with a grid-spacing of Δ x = 0.4 in the finest grid, refining the grid in each level by a factor of two. We find that if the initial distance is too small, or the solution too compact, an apparent horizon is formed and the scalar field configuration results in gravitational collapse within t∼10^3. 
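For concreteness, the outer-boundary condition above can be discretized with a one-sided spatial derivative and an explicit time step. The following sketch is schematic (the array layout, grid and placeholder pulse are not the actual thorn implementation) and only illustrates how the last radial point of K_ϕ would be advanced.

import numpy as np

def radiative_boundary_update(K_phi, phi, alpha, r, dr, dt, mu=1.0):
    # advance K_phi at the outermost radial point with the massive Sommerfeld condition
    #   d_t K_phi = - d_r K_phi - K_phi / r + mu^2 alpha phi / 4 ,
    # using a one-sided spatial derivative and a forward Euler step (schematic only)
    dK_dr = (K_phi[-1] - K_phi[-2]) / dr
    rhs = -dK_dr - K_phi[-1] / r[-1] + 0.25 * mu**2 * alpha[-1] * phi[-1]
    K_phi[-1] += dt * rhs
    return K_phi

# toy usage on a radial grid, with a placeholder outgoing pulse
r = np.linspace(0.1, 80.0, 200)
dr = r[1] - r[0]
phi = 1e-3 * np.exp(-(r - 40.0) ** 2)
K_phi = np.gradient(phi, dr)
alpha = np.ones_like(r)
K_phi = radiative_boundary_update(K_phi, phi, alpha, r, dr, dt=0.125 * dr)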
For a fixed value of ω, we observe that as the distance d increases, a greater amount of energy is radiated away in the initial relaxation period. Nevertheless, if the stars in Table <ref> are placed sufficiently far away from each other initially, they can attract and relax to form a charge-swapping configuration. A typical scenario of a charge-swapping boson star forming is shown in Fig. <ref>. We see that in this case the initial configuration is actually relatively close to the charge-swapping bound state. We see that the U(1) charge in the z>0 half-space, Q_up, starts oscillating with an amplitude slightly smaller that the initial one, which is due to a small portion of the initial configuration being ejected shortly after the simulation begins. In the top panel of Fig. <ref>, we show half of a charge-swapping period, with the first plot and the last plot showing that the positive and negative charges have been completely inverted during this half period. For ω = 0.98, we find that below the value d=24, which is the one showed in Fig. <ref>, the next case, which is d=18, also forms a regular horizonless final configuration, but the configuration with d=12 forms a black hole. So according to the mapping of the parameter space we use, the minimum distance to form a boson-anti boson regular system is d_min=18. For ω = 0.97, which is both more massive and compact, we find a bigger minimum distance d_min=24 is required. The trend continues with the 0.96 and 0.95 cases as displayed in the last column of Table <ref>. The charge-swapping configuration analyzed in Fig. <ref> and all the other charge-swapping configurations we surveyed have very long lifetimes. Given that T_0=2π is approximately the oscillation period of the scalar field, we find that the complex boson star survives for at least 5000 T_0. For a selection of these simulations, we have let them evolve for three times as long, and we find that they continue oscillating without decay. The charge distribution of the formed configurations contains some strong Fourier modes which are visible immediately if we combine Fig. <ref> together with the first panel in Fig. <ref>. As was explicitly demonstrated already in Ref. <cit.> for the Q-ball case, the most important frequency is related to the global difference between the real and the imaginary part of the scalar field. In the boson star case, this is induced not by a nonlinear scalar potential, but by the gravitational effects. In Fig. <ref>, we also show the temporal Fourier transforms of Q^ up (top right panel), for the waveform presented in Fig. <ref>. We also see from the bottom panels of Fig. <ref> that the period of charge swapping is around T_ swap = 2× 150, since the dominant peak is around ω_ FFT=0.02 ≈ 2π/300. The other low frequency modes, which modulate the amplitude of Q^ up, are present to the left of this dominant peak (see the bottom left panel of Fig. <ref>). Similar to the flat spacetime charge-swapping solutions <cit.>, we see a high frequency oscillation which corresponds to the small undulations visible in the top left panel of Fig. <ref>, having a peak frequency close to ω_ FFT=1.9 (see the bottom right panel). One can verify that the difference between the frequencies of the real and imaginary component of the scalar field is the origin of the dominant oscillation mode of the charge distribution. To see this, we make a spectral decomposition of the scalar field at a point outside the origin—at the origin, the imaginary part of ϕ is zero by construction. 
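The frequency analysis just described amounts to a plain discrete Fourier transform of the Q^ up(t) time series. In the sketch below the synthetic signal is only a stand-in for the measured data; the charge-swapping frequency is identified as the dominant low-frequency peak, and T_ swap follows from it.

import numpy as np

def dominant_frequency(t, q, omega_max=None):
    # angular frequency of the strongest Fourier peak of q(t), optionally below omega_max
    dt = t[1] - t[0]
    omegas = 2.0 * np.pi * np.fft.rfftfreq(len(q), d=dt)
    power = np.abs(np.fft.rfft(q - np.mean(q)))
    if omega_max is not None:
        power = np.where(omegas < omega_max, power, 0.0)
    return omegas[np.argmax(power)]

# stand-in for the measured Q_up(t): slow charge swapping plus a small fast modulation
t = np.arange(0.0, 30000.0, 0.5)
q_up = np.sin(0.02 * t) + 0.05 * np.sin(1.9 * t)
w_swap = dominant_frequency(t, q_up, omega_max=0.5)
print(f"omega_swap = {w_swap:.4f},  T_swap = {2 * np.pi / w_swap:.0f}")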
We see from Fig. <ref> that the scalar field is practically monochromatic and the difference between the maximum of the real and imaginary parts gives ω_R - ω_I = 0.020 (see the caption of Fig. <ref> for more details), which is consistent with the dominant peak of Q^ up. In addition, we also find a second peak which is located around the first odd multiple of the base frequency, analogously to what was found in <cit.> and consistent with the fact that a scalar potential has a ℤ_2 symmetry. On the other hand, the small oscillations of Q^ up that we already discussed corresponding the the peak in the bottom right panel of Fig. <ref>, coincide very well with ω_R +ω_I = 1.898 and can be shown to be produced by the next order perturbation term on the charge density ∝ j^0, after the term with frequency ω_R - ω_I. The other free-field dipolar solutions also have a clear harmonic dependence in time, so we have extracted the averaged period of ϕ for these cases and displayed the information in Table <ref>. We see that for a given value of the (initial) frequency ω, the angular frequency of the scalar field in the formed configuration increases with d (more energy is radiated away during the initial relaxation phase) and the configuration evolves to a “more Newtonian” configuration. Also, notice that the angular frequency is always smaller than the corresponding value at t=0, ω. This is actually opposite to the behavior of single unstable boson star undergoing gravitational cooling <cit.> and tending to a stable configuration. In addition to tracking the evolution of Q^ up, we utilize the integral described in Eq. (<ref>) to monitor the evolution of the systems. We find that the integral E, plotted in Fig. <ref>, converges gradually to a nonzero value (at least for the time probed by our simulations). Specifically, for a fixed ω, E decreases as d increases. §.§ Higher multipolar boson stars Dipolar boson stars formed after a head-on collision of two boson stars were previously observed and shown to survive at least until t ∼ 200 T_0 <cit.>. In the previous subsection, we observed that these objects can actually exist for much longer periods and display an intriguing charge-swapping pattern. However, to the best of our knowledge, no tower of composite stars with multipolar distribution has been reported. In the case of Q-balls, these configurations do exist <cit.>, and we now examine whether it is possible to generalize the dipolar case to get configurations formed of positive and negative charge pairs with more general morphologies. To this end, in this subsection, we consider superpositions of four as well as eight stars with equal and opposite charges, and place them in quadrupole and octupole configurations respectively. In Section <ref>, we have used the dipolar case as an example to illustrate the preparation of the initial data for the simulations. In particular, we specified how to solve the Hamiltonian constraint. For the quadrupolar case, the procedure is similar. Now, we can place spherical stars centered at the vertices of a square with side length d and positioned in the xz plane: ϕ = f^ (BS)(x-d/2,y,z-d/2) e^-iω t     + f^ (BS)(x-d/2,y,z+d/2) e^iω t     + f^ (BS)(x+d/2,y,z-d/2) e^iω t     + f^ (BS)(x+d/2,y,z+d/2) e^-iω t . Consequently, for the canonical momentum, we impose K_ϕ =K_ϕ^ (BS)(x-d/2,y,z-d/2)     + K_ϕ^ (aBS)(x-d/2,y,z+d/2)     + K_ϕ^ (aBS)(x+d/2,y,z-d/2)     + K_ϕ^ (BS)(x+d/2,y,z+d/2) . 
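The scalar part of such initial data is a plain superposition of radially interpolated profiles with alternating signs of the charge (equivalently, alternating signs of ω). A minimal sketch is given below; the Gaussian profile and grid parameters are placeholders, the lapse is approximated by 1, and the metric superposition and the subsequent Hamiltonian-constraint solve described above are not included.

import numpy as np
from scipy.interpolate import interp1d

def superpose_stars(X, Y, Z, f_of_r, centers, signs, omega, t=0.0):
    # phi and K_phi for a superposition of spherical (anti-)boson stars:
    # sign +1 means e^{-i omega t} (star), sign -1 means e^{+i omega t} (anti-star).
    # The lapse is approximated by 1 here; the actual construction uses alpha^(BS).
    phi = np.zeros_like(X, dtype=complex)
    K_phi = np.zeros_like(X, dtype=complex)
    for (cx, cy, cz), s in zip(centers, signs):
        rr = np.sqrt((X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2)
        f = f_of_r(rr)
        phase = np.exp(-1j * s * omega * t)
        phi += f * phase
        K_phi += 0.5j * s * omega * f * phase
    return phi, K_phi

# toy quadrupole of side d in the xz plane, with a Gaussian stand-in for f^(BS)(r)
d, omega = 42.0, 0.99
r_tab = np.linspace(0.0, 400.0, 2000)
f_of_r = interp1d(r_tab, 0.01 * np.exp(-r_tab**2 / 200.0), bounds_error=False, fill_value=0.0)
x = np.linspace(-80.0, 80.0, 65)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
centers = [(-d/2, 0, -d/2), (-d/2, 0, d/2), (d/2, 0, -d/2), (d/2, 0, d/2)]
signs = [+1, -1, -1, +1]
phi, K_phi = superpose_stars(X, Y, Z, f_of_r, centers, signs, omega)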
After this, the generalization of the superposition to the conformal metric and the lapse function is straightforward. The rest of the procedure is exactly the same, since we have formulated the equations between Eq. (<ref>) and Eq. (<ref>) without assuming any particular symmetry. The orientation of the square defined by the centers of the stars at t=0, as given by Eq. (<ref>), is selected to minimize adjustments in the implementation within the library, particularly concerning the parity of the fields used in the solution. However, to make better use of the Einstein Toolkit resources, we choose to perform a transformation of the coordinates x^i = Λ^i_i'x'^i', with Λ being the rotation matrix around the y-axis, R_y(δ), and δ the rotation angle. Taking δ = π/4, the system has mirror-symmetry[ In the thorn arrangement used for the simulations, only mirror-symmetry itself can be imposed on the scalar field, but not the anti-symmetry. ] with respect to the three x^i=0 planes, and the computational cost is reduced by a factor of eight. So this is actually faster than the dipole case. Nevertheless, the quadrupolar configurations are larger in size. Thus, for these simulations, we double the mesh size and the refinement level boundaries. We find that none of the first five types of spherical stars in Table <ref> leads to charge-swapping configurations for the same range of d as in the dipole case, or even if extending it to d=48. Black holes are often formed instead. In contrast, the last two types of stars, with frequencies of 0.99 and 0.995, are capable of forming non-trivial configurations that are everywhere regular. Stationary boson stars with these frequencies are very dilute, as can be seen from their values of 𝒞 and radius R_99 in Table <ref>. We have tried three values of d for each of these two ω's: 48, 42 and 36, and find that for both ω's no apparent horizon is formed for d=42 and 48, but for d=36 (and also smaller values), the initial clump collapses to a black hole. In Fig. <ref>, we can see the collapse of the lapse for one of the quadrupole cases together with the corresponding dipolar case, which does not collapse. Configurations that avoid collapse exhibit quadrupolar charge-swapping structures. Despite differing shapes, these structures show quantitative similarities to the dipolar case, following similar correlations between the initial parameters ω and d and the properties of the final solution. However, their lifetimes differ markedly from those of self-interacting quadrupoles without gravity (i.e., quadrupolar Q-balls) <cit.>, with all cases persisting for at least t = 5000T_0. In Fig. <ref>, snapshots of the charge density illustrate the typical behavior, comparing the initial star distributions at t = 0 with those within half a charge-swapping period later in the simulation. The bottom panel of Fig. <ref> shows the total charge associated with one blob, obtained by integrating n_μ j^μ over the z > |x| region. It is natural to ask whether there are higher order multipolar boson stars. So far, all our preliminary attempts to use the method described in Section <ref> to find such solutions have resulted in the configurations collapsing into black holes. As there is more mass involved in a small space, one can try to increase the distance d, but that leads to a decrease of the gravitational attraction. However, it is possible that octupolar boson stars may be constructed with a more systematic parameter scan or an improved preparation method. § SELF-INTERACTING SCALAR Now, we include self-interactions for the U(1) scalar. 
In the flat space, they provide attractions for non-topological solitons and their molecular states to form. In the presence of gravity, scalar self-interactions can increase the compactness and the masses of boson stars. We will consider two types of interactions: a polynomial potential truncated to the sixth order and a soft logarithmic correction to the mass term, the charge-swapping Q-balls of which have been previously studied <cit.>. §.§ Sextic potential The boson stars in the sextic potential model V(|ϕ|)= |ϕ|^2 - |ϕ|^4 + 𝔤 |ϕ|^6 exist within the range ω_ min<ω<1 with ω_ min some value that depends on α_G and 𝔤 (see Fig. <ref> for a sextic family of solutions with fixed 𝔤 and different α_G). As already discussed, gravity regularizes the total energy of the solutions in the limit ω→1. In fact, M→0 at such limit for all cases (except α_G =0). This is often known as the Newtonian limit. According to our numerical experiments, starting from the Newtonian limit and increasing the value of f(r=0), there exist stable solutions whenever dM/dω<0 until the turning point ω_ min. This leads to one or two stable branches according to the value of α_G; see Fig. <ref>. These observations are consistent with the conclusions drawn from a stability analysis of such solutions using catastrophe theory <cit.>. After the turning point, all solutions are unstable regardless of the sign of dM/dω, some of them migrating towards a stable boson star and others dissipating to infinity or collapsing to a black hole depending on the binding energy, which is similar to the free field and the quartic self-interaction cases previously reported in the literature <cit.>. Compared to the free field case, the self-interacting solutions are qualitatively very different when the gravitational coupling is small, where they are close to their Q-ball counterparts. Nevertheless, these cases can give rise to very compact solutions (as f(r=0) increases) that differ significantly from the Q-ball solutions and achieve very high compactness, making them different from both the mini-boson stars and the corresponding Q-balls; see Fig. <ref>. With this in mind, we have chosen the stars in Table <ref> to prepare the charge-swapping configurations. In particular, we restrict our study to superpositions of single star solutions that are stable on their own. Of course, this does not guarantee that the complex boson star solution will be obtained, nor does it ensure that the final solution will not be a black hole. Since we have fixed 𝔤=1/2 as our fiducial model, we only have two degrees of freedom to construct the initial data: ω and d. We shall explore the dipole superposition with initial separations d={8,10,12,14,16}. Before proceeding with the boson star analysis, as a sanity check, we first re-produce the 3+1D results of charge-swapping Q-balls in Ref. <cit.>. In Fig. <ref>, we see that the longest lifetime is given by ω = 0.8 and d=14, consistent with that of Ref. <cit.>, among other results. These flat-space solutions have smaller radii than the free-field configurations with gravity discussed in Section <ref>. Nevertheless, for an easy comparison, we keep the same numerical setting as in the dipolar free field case with gravity. In essence, there are four stages of the charge-swapping configurations: An initial relaxation stage, which ends around t=500T_0 for the case presented in Fig. 
<ref>; a plateau stage of charge-swapping, ending around t=1000T_0; a fast decay stage in which Q^ up drops abruptly to zero and the energy density also decreases, but towards a finite spherically symmetric state; and a long-lived stage of the (highly perturbed) oscillaton <cit.> (the oscillon in the absence of gravity). Next, we switch on α_G to investigate the effect of introducing the gravitational interaction. To monitor the evolution in this case, we will still use the E integral as a proxy, although it is not a perfect diagnostic since it is no longer really conserved. In all the cases presented in this section, we observe that the envelope of the Noether charge in the upper hemisphere always follows a qualitative behavior similar to that of the E(t) curve. In Fig. <ref>, we show the cases with frequency ω=0.8 and gravitational coupling 8πα_G = 0.001 and 0.01. This corresponds to the first two rows in Table <ref>. For 8πα_G=0.001, ω=0.8 is the point where the spherical boson stars start to deviate from their Q-ball counterparts (see Fig. <ref>). The same is also noticeable in the top panel of Fig. <ref> for the charge-swapping configurations, where we see that the lifetimes are slightly shortened in the presence of gravity for all the d's listed. For a smaller α_G, the system basically reduces to the Q-ball case. For 8πα_G=0.01 in the bottom panel, interestingly, we see that the differences between the E curves are negligible for different d's, due to the significantly reduced lifetimes. This means that the gravitational interaction is already dominating for 8πα_G=0.01, and the stable end state corresponds to an oscillaton, as expected. Now, we turn to the cases where the initial boson stars are not similar to the Q-ball analogue with the same value of ω. This corresponds to the last three rows in Table <ref>. The two configurations with ω = 0.7 are beyond the so-called thin-wall limit of the corresponding Q-ball model, and in particular the star with 8πα_G = 0.1 has a larger compactness than any of the stable mini-boson stars, max𝒞_ mini = 0.08 <cit.>. We find that the charge-swapping configurations can be created for the cases with 8πα_G = 0.01 and ω = 0.7 or 0.75, but they collapse in the 8πα_G= 0.1 case. The evolution of E for ω = 0.75 and 8πα_G=0.01 for three different d's, together with the total Noether charge in the z>0 region for d=16, is shown in Fig. <ref>. We can see how the charge is radiated away in the final stage. More interesting is the fact that for these cases, as well as for the cases in Fig. <ref>, the configuration transits through two charge-swapping plateaus with different E and Q^ up before migrating to the oscillaton configuration. This resembles the energy cascading of excited oscillons <cit.>. However, at the level of the quantities we have observed, a direct connection between the two phenomena has yet to be established. The case beyond the thin-wall limit with 8πα_G = 0.01 is shown in Fig. <ref>. For 8πα_G = 0.1, similar setups would lead to gravitational collapse, without first stabilizing to a finite value of E or Q^ up. The last comment we shall make for the case of the sextic potential is that isolated boson stars differ from Q-balls not only in cases near or beyond the thin-wall limit, but also in the region where gravity regularizes the solutions near ω = 1. Since in this limit the total mass of the solution tends to zero, the curve of M vs ω (or radius) is "forced" to form a stable Newtonian solution branch. 
In this branch, the stationary solutions of the sextic potential converge to those of the quadratic potential in the limit, due to the fact that ϕ→0 in this limit; see <cit.> for a systematical study of several kinds of self-interactions. It is therefore expected that configurations prepared using this branch of solutions will lead to the same results as those presented in Section <ref>. §.§ Logarithmic potential Polynomial potentials naturally arise as effective potentials when integrating out weakly coupled heavy degrees of freedom. On the other hand, when including quantum corrections, the potential often picks up some logarithmic dependence. These potentials are widely studied for the case of Q-balls <cit.>, and capable of creating very long-lived charge-swapping Q-balls <cit.>. In this subsection, we consider a complex scalar field minimally coupled with Einstein gravity with a potential of the kind, V(|Φ|) = μ^2|Φ|^2(1+Kln |Φ|^2/ℳ^2) , where K is a negative coefficient that we will fix to K = - 0.1 in the following and the parameters M and μ are scales that can be absorbed by redefinitions. As mentioned previously, it is also possible to obtain a dimensionless action in terms of dimensionless variables and parameters. To this end, we simply choose x^μ = μx̃^μ and ϕ = Φ/ℳ. In doing so, we again gain control over the gravitational interaction, defining in this case α_G=ℳ^2G while rescaling the Ricci scalar as in the sextic potential case. Thus, we can study the action S̃ = ℳ^2 S/μ^2 instead, with the dimensionless potential given by V(|ϕ|) = |ϕ|^2(1+Kln|ϕ|^2) . Once again the equations of motion for the gravitational and scalar fields are given by Eqs. (<ref>) and (<ref>). An interesting property of this model is that in the Minkowski limit (α_G → 0), the equation of motion for ϕ -∂^2 ϕ/∂ t^2 + Δ_3 ϕ = [1+K(1+ln|ϕ|^2)] ϕ , admits an exact spherical solution ϕ = exp(ω^2-1/2K+1) exp(Kr^2/2) exp(-iω t) . Then, in order to build complex boson star solutions, we take this as initial guess and slowly increase the value of α_G, solving once again for the function f of the ansatz ϕ = f(r) e^-iω t and the metric coefficients α and Ψ using Eqs. (<ref>) with the logarithmic potential. Then we proceed to prepare initial data for charge-swapping configurations following the same procedure as described in Section <ref>. The opposite limit α_G→∞ needs to be considered with more care, but is beyond the scope of this work. Sequences of the spherical boson stars for the logarithmic potential are presented in Fig. <ref> for five different values of the gravitational coupling constant. In this figure, we have included also the α_G→0 limit, which can be obtained by substituting Eq. (<ref>) into Eqs. (<ref>) and (<ref>), giving rise to the following expressions for E=M and Q, M =(-π/K)^3/2(2ω^2-K)exp[ω^2-1/K +2] , Q =2(-π/K)^3/2ωexp[ω^2-1/K +2] . For all the boson star families plotted in Fig. <ref>, we find that, interestingly, the frequency ω of the critical mass increases with α_G. To the best of our knowledge, no such boson stars have been previously constructed in the literature (see <cit.> where a different logarithmic potential was explored). Consequently, no results are available on the stability of such spherical boson stars. We can anticipate that beyond the critical mass point the configuration is unstable, as is the case for most (if not all) families of spherical boson stars available in the literature. On the other hand, configurations to the right of the respective maxima in Fig. 
<ref> are potentially stable, so we use spherical boson stars with ω = 1.2 to prepare charge-swapping configurations. In Table <ref>, we display the classical observables of the logarithmic boson star solutions we will use. Interestingly, all the presumably stable configurations shown in Table <ref> have a positive binding energy E_ bind = M - Q . This is different to all the other cases explored in this work, Tables <ref> and <ref>. This can be seen from the exact expression for M and Q (see Eqs. (<ref>) and (<ref>)). Yet, these configurations survive even against finite perturbations for a very long time. We have explored the 5 configurations in the Table <ref> together with the fiducial model in <cit.> and have not seen any of these configurations decay within our simulation limits. Thus, they are perfectly suitable for preparing charge-swapping configurations. We take a dipole superposition with d = 2 for the five cases in Table <ref>, and evolve them using a modified scalar field thorn in a mesh with the same specifications as those of the sextic potential case. The configurations are of the same size. We find that all of them form long-lived charge-swapping configurations except for the case with 8πα_G = 1, which promptly collapses to a black hole before any charge-swapping takes place. All the other cases are very long-lived and survive at least until t = 60000. Also, similar to the charge-swapping Q-ball case with the logarithmic potential, there is no relaxation process that radiates away excess of the scalar field, as can be confirmed in Fig. <ref>. Also, the charge-swapping process begins very early in the simulations. The latter is reflected in the fact that the charge-swapping period T_ swap is clearly defined from the beginning of the evolution, as can be seen in Fig. <ref>: T_ swap is defined as twice the time elapsed between two contiguous zero points of the Noether charge in the upper half space Q^ up. § ANISOTROPIC DISTRIBUTION OF SATELLITE GALAXIES In the last two sections, we have explored generic features of complex boson stars, particularly their lifetimes and the charge distributions. As we have seen, these “molecular states” of boson stars are often quite stable and with distinct charge-swapping patterns. The sizes of these objects are not explicitly specified in these simulations, and in fact, μ^-1 is essentially used as a base length, setting their characteristic sizes. In this section, we shall discuss one specific application of these objects in galactic scales. For the lowest order/dipolar charge-swapping configuration, its energy density distribution is almost constant with time, mostly spherical but distorted with an appreciable dipolar contribution, and the Noether density has a dipolar structure that is oscillating. This resembles the energy density distribution of a scalar field configuration used previously as an alternative explanation <cit.> to solve the problem of the anisotropic distribution of satellite galaxies observed in the Milky Way, M31 and Centaurus A. Other explanations for the plane of satellite galaxies problem include using baryonic effects, combined gravitational distribution effects <cit.> and formation of satellite galaxies within the scalar field dark matter model <cit.>; see <cit.> for a review. The plane of satellite galaxies problem arises from the distribution and motion of these galaxies in planes that are perpendicular to the plane of their host galaxy. 
This is unnatural to explain in the cold dark matter paradigm, where the simulations predict that satellites should be isotropically distributed <cit.>. The authors in <cit.> address this problem using multi-state solutions of a system of self-gravitating scalar fields in the Newtonian limit, composed of a spherical state and a dipolar state, which allow them to model dark matter halos and study the motion of test particles on top of the generated gravitational potential. They found that an anisotropic tri-axial Navarro-Frenk-White halo does not lead the orbital angular momenta of the satellites to align with the equator, but the anisotropic (multi-state) scalar field halo does accommodate the particles in orbital planes close to the galactic poles. This is made possible because of the specific morphology of the dipolar contribution, which is produced by a scalar field anti-symmetric with respect to the galactic plane. The self-gravitating charge-swapping configurations in the case of a free scalar field or a scalar field with a logarithmic potential are extremely long-lived and have a morphology similar to the monopole-dominated configuration of <cit.> (close to spherical symmetry but with a sizable dipolar contribution). More importantly, their constituent boson stars are close to the Newtonian regime, so these dipolar configurations are very dilute, fully compatible with cosmological constraints. In the following, we explore the possibility of utilizing charge-swapping configurations, which, as nonspherical halo distributions, could lead to the desired anisotropic distribution of satellite galaxies. For definiteness, we will focus on charge-swapping dipolar configurations whose constituents are mini-boson stars with ω = 0.98. In the non-relativistic limit, the Einstein-Klein-Gordon system Eqs. (<ref>) and (<ref>) reduces to the Schrödinger-Poisson system, which is more tractable and also has additional re-scaling properties <cit.>. This approximation is valid when the scalar field is non-relativistic, which in particular implies ϕ≪1. So even in the presence of higher order polynomial self-interactions, the scalar equation of motion reduces to i∂_tΨ_ w = -1/2∇^2Ψ_ w + UΨ_ w , ∇^2 U = 4πα_G|Ψ_ w|^2 , where Ψ_ w = √(2)exp(i t)ϕ and U is the Newtonian gravitational potential, which can be extracted by comparing Eq. (<ref>) with ds^2 = -(1+2U)dt^2 + (1-2U) dr^2 + r^2dΩ^2 . A consistency condition is that U≪1. We assume that the weak field and low energy regime is valid in the late universe. Boson stars are known to be in this limit when ω is close to 1, where the compactness approaches 0. Unlike the Einstein-Klein-Gordon system, the Schrödinger-Poisson equations have[Subject to the restrictions ϕ≪1 and U≪1. See Ref. <cit.> for the post-Newtonian expansion of the Einstein-Klein-Gordon system.] the following scaling invariance: (Ψ_ w,U,x^i,t) → (Λ^2Ψ_ w,Λ^2U,Λ^-1x^i,Λ^-2t) . This allows one to obtain the full sequence of Newtonian boson stars once a single solution has been found, unlike in the relativistic analogue, where the family of solutions, say, the one presented in Fig. <ref>, must be constructed numerically by solving the equations at each point. This rescaling is independent of the rescaling used to obtain dimensionless quantities, so to recover the physical system, as in the relativistic case, we must choose a value of μ as explained in Section <ref>. We have previously obtained the metrics in the full general relativistic formulation. 
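For orientation, the monopole part of such a halo potential follows from a single radial integration of the Poisson equation above. The sketch below uses a placeholder density; the potential actually employed in the next step is extracted from the full simulation data rather than computed this way.

import numpy as np

def newtonian_potential(r, rho, alpha_G):
    # solve nabla^2 U = 4 pi alpha_G rho for a spherical density, with U -> 0 at infinity:
    #   U'(r) = m(r)/r^2 ,   m(r) = int_0^r 4 pi alpha_G rho(r') r'^2 dr'
    dr = np.diff(r)
    shell = 4.0 * np.pi * alpha_G * 0.5 * (rho[1:] + rho[:-1]) * (0.5 * (r[1:] + r[:-1]))**2 * dr
    m = np.concatenate(([0.0], np.cumsum(shell)))
    U = np.concatenate(([0.0], np.cumsum(m[1:] / r[1:]**2 * dr)))
    return U - U[-1] - m[-1] / r[-1]        # match onto the exterior -m_tot/r solution

# placeholder density standing in for |Psi_w|^2 of the dipolar configuration
r = np.linspace(1e-3, 40.0, 4000)
rho = np.exp(-r**2 / 16.0)
U = newtonian_potential(r, rho, alpha_G=1.0 / (8 * np.pi))
print(U[0], U[-1])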
For this application, it is sufficient to extract the non-relativistic limit from Eq. (<ref>) and Eq. (<ref>). We choose to perform the analysis on a representative spatial slice, at t=4000T_0, of the dipolar configuration with ω = 0.98, well within the charge-swapping stage. However, the results are insensitive to the specific time chosen within this stage. To extract the 2D data at such time we use the tool <cit.>. Based on the statistical consistency tests performed in <cit.>, we choose the values of μ and Λ, which completely fix the scale of the system, to be such that the size of the configuration, which has an equatorial radius of about 16 units, corresponds to a physical size of 300 kpc (the radius encompassing the 11 classical satellites of the Milky Way <cit.>), together with the condition that the circular velocity of the stars in the galactic disk at a point 30 kpc away from the galactic center is 100 km/s, implying similar contributions to the enclosed mass at radius r=30 kpc from dark matter and baryonic matter. These two conditions lead to the values μħ∼ 10^-25eV and Λ = 0.01. The precise determination of these two parameters, given the rotation curve of the Milky Way and a model for the galactic bulge and disc, deserves a statistical analysis which is beyond the scope of this paper. Next, we study the motion of 10^4 "particles" moving in the gravitational potential U. After some time, the distribution of the "particles" reflects the probability of finding (idealized) satellite galaxies in the halo. To that end, we randomly place "particles" inside a sphere of radius R=16/(Λμ), corresponding to R=300 kpc (see the left panel of Fig. <ref>), and with velocities in random directions and magnitudes smaller than 1/4 of the escape velocity of a "particle" located in the equatorial plane at a distance R from the center of the galaxy: v<v_ max = √(|U(R,0,0)|/8). The "particles" are left to evolve. In Fig. <ref>, we show the gravitational potential and the motion of a single "particle" in it, in units of μ and Λ, which specify the galactic sizes. This is possible because the equations of motion of the "particles" inherit the two scaling invariances of the Schrödinger-Poisson equations mentioned above. Similar to <cit.>, we choose to evolve up to t=12τ_s, where τ_s is defined as the time it takes for a particle in the equatorial plane to complete a circular orbit. After this, we restore units for every physical quantity involved. From Fig. <ref> and similar plots and projections at different times, we see that after several τ_s, the distribution tends to a stationary non-spherical distribution in space, and the distribution of the orientation of the satellite orbits is also stationary and anisotropic. This can be seen in Fig. <ref>. More importantly, we notice that the orbital angular momenta (known as the orbital poles in the astronomy literature) of the satellites, i.e., the "particles" located between 30 and 300 kpc, are concentrated around the equator. Other choices of the maximum allowed value for the random magnitude of the velocity lead to similar results, as long as it does not exceed 1/2 of the escape velocity of a particle located in the equatorial plane at a distance R from the center. For a comparison, we have plotted the orbital poles of the 11 Milky Way classical satellites obtained from Ref. <cit.> together with the particle orbital poles at t=12τ_s in Fig. <ref>. Let us comment on the robustness of the charge-swapping configurations for modeling dark matter halos. 
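Before doing so, we note that the particle-evolution step above amounts to integrating Newtonian orbits in a fixed potential. The following sketch is schematic: the potential gradient used here is a simple monopole-plus-distortion stand-in rather than the one interpolated from the simulation data, and the single orbit shown only illustrates how an orbital pole is obtained.

import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(gradU):
    # Newtonian test-particle motion:  dx/dt = v ,  dv/dt = -grad U
    def rhs(t, y):
        return np.concatenate([y[3:], -gradU(y[:3])])
    return rhs

def gradU(x, M=1.0, eps=0.05):
    # placeholder potential gradient: monopole -M/r plus a crude dipole-like distortion;
    # in the actual analysis the gradient is interpolated from the simulation data
    r = np.linalg.norm(x) + 1e-8
    g = M * x / r**3
    g[2] += eps * M / r**3
    return g

rng = np.random.default_rng(1)
R = 16.0
pos = rng.normal(size=3); pos *= R * rng.random() ** (1 / 3) / np.linalg.norm(pos)
v_max = np.sqrt((1.0 / R) / 8.0)            # 1/4 of the escape speed for U = -1/r at r = R
vel = rng.normal(size=3); vel *= v_max * rng.random() / np.linalg.norm(vel)
sol = solve_ivp(make_rhs(gradU), (0.0, 2000.0), np.concatenate([pos, vel]), max_step=1.0, rtol=1e-8)
L_orb = np.cross(sol.y[:3, -1], sol.y[3:, -1])   # orbital pole of this "particle"
print(L_orb / np.linalg.norm(L_orb))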
Regarding this robustness, in our general relativistic numerical simulations we find that the free-field charge-swapping configurations can be formed from quite generic initial setups and are attractor solutions in the dynamical evolution. Also, as mentioned in the introduction, they may initially be formed from the fragmentation of some homogeneous condensate in the early universe. They live for extremely long times and are sufficiently stable in terms of modeling a galactic halo. To see this, note that in our case the dimensionless timescale is Λ^2μτ_s∼600, calculated from U according to its definition, which is much smaller than the (minimum) lifetime of the free-field charge-swapping configurations explored in Section <ref>. For the values of Λ and μ used in this cosmological application we obtain, after restoring units, τ_s ∼ 4 Gyr. This means that the angular distribution of the angular momenta is already oriented toward the galactic pole after 1 Gyr (as can be seen in Fig. <ref>), and that the charge-swapping configurations live for much longer than is required for the galactic halos in the Universe. Furthermore, one might worry that the gravitational potential in this scenario could destroy the Galactic disc. To check that this does not happen, we place “particles” in a disk of radius 30 kpc with a small thickness varying between 0 and 10 percent of the disk radius. We give the “particles” initial velocities parallel to the equatorial plane, with magnitudes such that they would follow perfect circular orbits if they were located at their positions projected onto the equatorial plane. Then, we let them evolve for 12τ_s. We find that within the chosen parameters the structure of the disc does not change: its radius and thickness remain the same, as can be seen in Fig. <ref>. § CONCLUSIONS In this paper, we have discussed the complex structures of boson stars, and established the existence of charge-swapping configurations in the presence of gravity. We have shown how the metric field changes the stability and other properties of the charge-swapping configurations, and recovered the flat Minkowski spacetime results in suitable limits. In particular, the quadratic monomial potential, the sextic polynomial potential and the running-mass/logarithmic potential were explored with fully nonlinear numerical relativity simulations. For the quadratic and logarithmic cases, the complex boson stars are found to be very stable, and we have yet to see them decay in our long-term simulations. The existence of charge-swapping configurations for the quadratic potential is a novelty when coupling the U(1) scalar to gravity, thanks to the gravitational attraction. In the free scalar case, taking the dipolar boson star for example, we have found that the real and imaginary parts of the scalar field oscillate with slightly different frequencies. This difference determines the main frequency at which the Noether charge of the two components of the system is exchanged. Also, we have found that configurations can avoid gravitational collapse when the initial stars are prepared with sufficiently low masses and separations, as any excess energy is radiated in the initial relaxation stage. In the sextic potential case, we have found evidence for the transient existence of highly compact self-gravitating solutions. Thus, a low scalar field density is not a requirement for the existence of these configurations. Finally, we have proposed a concrete application for these charge-swapping configurations at large/galactic scales.
Assuming that the Galactic dark matter halo is a charge-swapping dipole, we have shown that test particles placed in it will cluster anisotropically and, in particular, that their angular momenta will be oriented along the direction of the Galactic plane. This differs from the cold dark matter scenario, where an isotropic halo is more likely to be formed. We have chosen the model parameters to preliminarily fit the observations in the Milky Way, but a precise determination of the model parameters requires a statistical consistency analysis, which is beyond the scope of the current paper. A natural extension of the current work is to study the gravitational signals associated with the relaxation period after, say, a collision, as well as with the decay or collapse of the most compact charge-swapping configurations; this is also left for future work. We would like to thank Qi-Xin Xie for helpful discussions. SYZ acknowledges support from the National Key R&D Program of China under grant No. 2022YFC220010 and from the National Natural Science Foundation of China under grant No. 12075233 and 12247103. § CONVERGENCE TESTS The charge-swapping configurations found in this work are very long-lived, so convergence tests of the numerical code are very important. First, we have checked the consistency of the full implementation by constructing and evolving previously known isolated stable boson stars, calculating the frequency of the scalar field during the simulation and checking whether it coincides with the frequency of the stationary solution over a long period of time, t∼ 10^4. The coordinates chosen are such that the different metric functions are expected to stay static during the evolution, which we again verify by plotting the global minima and maxima of some of them. To evaluate the consistency of the charge-swapping dynamics, we have compared with the flat spacetime results in <cit.> for the polynomial potential case and <cit.> for the logarithmic potential. Specifically, we compared the curves of E and found good agreement despite the fact that the implementation is completely different. In Fig. <ref>, we show the Noether charge for different sizes of the physical box and spatial resolutions. We see that the medium and high resolutions provide precise results. The same applies to the two bigger box cases, with x_max^i=160 and 320; the three box sizes are in good agreement for the amplitudes, and the charge oscillations differ from each other only by a phase. In our simulations of the dipolar free field case, we used the medium resolution and the smaller box size. Finally, in Fig. <ref> we show the L_2-norm of the Hamiltonian constraint, Eq. <ref>. The violation of this constraint starts from small values and increases during the merger of the two boson stars. The errors decrease by one order of magnitude for the medium and high resolutions.
http://arxiv.org/abs/2407.13169v1
20240718051632
Combining Climate Models using Bayesian Regression Trees and Random Paths
[ "John C. Yannotty", "Thomas J. Santner", "Bo Li", "Matthew T. Pratola" ]
stat.ME
[ "stat.ME" ]
§ ABSTRACT Climate models, also known as general circulation models (GCMs), are essential tools for climate studies. Each climate model may have varying accuracy across the input domain, but no single model is uniformly better than the others. One strategy for improving climate model prediction performance is to integrate multiple model outputs using input-dependent weights. Building on this concept, weight functions modeled using Bayesian Additive Regression Trees (BART) were recently shown to be useful for integrating multiple Effective Field Theories in nuclear physics applications. However, a restriction of this approach is that the weights could only be modeled as piecewise constant functions. To smoothly integrate multiple climate models, we propose a new tree-based model, Random Path BART (RPBART), that incorporates random path assignments into the BART model to produce smooth weight functions and smooth predictions of the physical system, all in a matrix-free formulation. The smoothness feature of RPBART requires a more complex prior specification, for which we introduce a semivariogram to guide its hyperparameter selection. This approach is easy to interpret, computationally cheap, and avoids an expensive cross-validation study. Finally, we propose a posterior projection technique to enable detailed analysis of the fitted posterior weight functions. This allows us to identify a sparse set of climate models that can largely recover the underlying system within a given spatial region, as well as to quantify model discrepancy within the model set under consideration. Our method is demonstrated on an ensemble of 8 GCMs modeling the average monthly surface temperature. § INTRODUCTION Complex natural phenomena are often modeled using computer simulators – models that incorporate theoretical knowledge to approximate an underlying system. In climate applications, these simulators, or General Circulation Models (GCMs), have been widely used to understand a variety of climate features such as temperature or precipitation <cit.>. Many GCMs have been developed over time and each one tends to have varying fidelity across different subregions of the world. This implies that no universally best model exists. For example, Figure <ref> displays the difference between the simulated average monthly surface temperature in the northeastern hemisphere for April 2014 from three different climate models and the observed data. This figure demonstrates that the residuals (and thus the fidelity) of each GCM differ across the various spatial regions. In particular, greater variation in residuals is observed over land. Specific residual patterns tend to mimic the changes in elevation, which suggests each GCM accounts for elevation in a different manner. Multi-model ensembles are often used to help improve global prediction of the system <cit.>. Various approaches exist for combining the outputs from multiple GCMs. A common approach is to explicitly combine the outputs from K different GCMs using a linear combination or weighted average, usually in a pointwise manner for a specific latitude and longitude. For example, <cit.> and <cit.> define performance-based weights.
<cit.> define Reliability Ensemble Averaging (REA), which weight the GCMs based on model performance and model convergence. More traditional regression-based models assume the underlying process can be modeled as a linear combination of the individual GCMs <cit.>. <cit.> propose a density-based approach that combines the individual cumulative density functions from the K models. This results in a mixed-CDF that better accounts for the distribution of each model compared to methods that only consider the each model's mean predictions. Each of these methods estimate the weights in vastly different ways, however, they all derive location-specific weights based on the information at a given latitude and longitude location. Alternative Bayesian and machine learning approaches implicitly combine multiple GCMs to estimate the mixed-prediction. For example, <cit.> model the underlying system as a Gaussian process with a deep neural network kernel, and then employ Gaussian process regression to combine multiple climate models. <cit.> define a Bayesian hierarchical model that assumes a co-exchangeable relationship between climate models and the real world process. Global approaches such as Bayesian Model Averaging (BMA) <cit.> and model stacking <cit.>, have not seen wide adoption in climate applications. These approaches use scalar weights to combine simulator outputs. The corresponding weights in such schemes are meant to reflect the overall accuracy of each model, where larger weights indicate better performance. More recent advancements consider localized weights, where the weights are explicitly modeled as functions over the input domain. The outputs from each model are then combined, or mixed, using weight functions that reflect each individual model's local predictive accuracy relative to the others in the model set. This approach is often referred to as model mixing and allows for advanced interpretations of the localized fidelity of each model. This weight function approach allows for more effective learning of local information than pointwise methods without degenerating to overly simplistic global weighting. One key challenge in model mixing is specifying the relationship between the inputs and weight values. Specific approaches model the weights using linear basis functions <cit.>, generalized linear models <cit.>, neural networks <cit.>, calibration-based weighting <cit.>, Dirichlet-based weights <cit.>, precision weighting <cit.>, or Bayesian Additive Regression Trees (BART) <cit.>. These approaches are conceptually related but differ in the assumed functional relationship for the weights, additional constraints imposed on the weight values, and the capability to quantify uncertainties. The BART approach for modeling the K-dimensional vector of weight functions is attractive due to its non-parametric formulation which avoids the need for user-specified basis functions. Specifically, the BART approach defines a set of prior tree bases which are adaptively learned based on the information in the model set and the observational data. However, the resulting weight functions of BART are piecewise constant, resulting in the primary drawback of this approach: the weight functions and predictions of the system are discontinuous. This is a noticeable limitation when smoothness is desirable. The univariate regression extension Soft BART (SBART) <cit.> allows smooth predictions using tree bases. However, SBART is better suited to modeling scalar responses rather than a vector-valued quantity. 
In particular, applying SBART to a tree model with K-dimensional vector parameters would increase the computational complexity by a factor of K^3, a non-negligible increase when working with larger model sets. Additionally, <cit.> propose a set of default priors for the SBART model, which mitigates the need to select the values of hyperparameters and avoids a complex cross-validation study. However, these default settings may be insufficient in some applications and a more principled approach for prior calibration may be desired. Thus, directly applying the existing “soft" regression tree methods to the current BART-based weight functions is both not straight-forward and computationally infeasible. Our contribution with this research is as follows. First, we propose the Random Path BART (RPBART) model, which uses a latent variable approach to enable smooth predictions using an additive regression tree framework, all in a matrix-free formulation. We further use the random path model in the Bayesian Model Mixing framework (RPBART-BMM). This improves on the initial BART-based model mixing method introduced by <cit.>, which was at times sensitive to overfitting and thus provided poor uncertainty quantification in areas away from the training points. The proposed construction also introduces smoothness in a holistic way such that the induced smoothing is compatible with the localization effect of the learned tree structure. Additionally, we derive the prior semivariogram of the resulting model, allowing for principled yet efficient calibration of model prior hyperparameters with similar ease as the original BART proposal. We also introduce posterior projection methods for model mixing that can be used to better interpret the mixed-prediction and resulting weight functions in cases where all, some, or none of the models are locally useful for the system of interest. Finally, we demonstrate our methodology by combining the outputs from K different GCMs to estimate the underlying mean process of the true system and gain insight as to where each GCM is locally accurate or inaccurate. We demonstrate the enhanced performance of our method relative to competing methods in mixing GCMs. The remainder of the paper is organized as follows. Section 2 reviews the relevant background literature relating to Bayesian regression trees. Section 3 outlines our novel Random Path BART (RPBART) methodology for scalar responses. Section 4 extends the RPBART model to Bayesian model mixing and introduces projections of the fitted weight functions. Section 5 applies our methods to GCMs, and Section 6 summarizes our contributions in this paper. § BAYESIAN REGRESSION TREES Bayesian regression trees were first introduced by <cit.> as a single tree model and later extended to incorporate an ensemble of trees <cit.>. The most common ensemble approach is BART <cit.>, which models the mean function using additive tree bases. A single Bayesian tree recursively partitions a p-dimensional compact input space into B disjoint subsets. The tree topology then consists of B terminal nodes and B-1 internal nodes. Each internal node consists of a binary split along the dimension of the input space using a rule of the form x_v < c_v, where v ∈{1,…,p}. The cutpoint c_v is selected from a discretized set over the interval [L_v,U_v], which defines the lower and upper bounds of the dimension of the input space, respectively. 
The terminal nodes are found in the bottom level of the tree and each terminal node corresponds to a unique partition of the input space. The terminal nodes facilitate predictions from the tree model, where each partition is assigned a unique terminal node parameter. Specifically, a tree implicitly defines a function g(;T,M) such that if lies in the partition, then g(;T,M) = μ_b, where T denotes the tree topology and M = {μ_1,…,μ_B} denotes the set of terminal node parameters. By construction, g(;T,M) is a piecewise constant function. Figure <ref> displays an example tree with B=3 terminal nodes and the corresponding partition of the 2-dimensional input space. The tree topology prior, π(T), accounts for the type of each node within the tree (terminal or internal) along with the splitting rules selected at each internal node. The default prior is uninformative for the split rules and penalizes tree depth <cit.>. For example, the probability that a node η is internal is given as π(η is internal) = α(1+d_η)^-β, where α and β are tuning parameters and d_η denotes the depth of η. The output of the tree corresponds to the set of the terminal node parameters M. In most cases, a conjugate normal prior is assigned to each of the terminal node parameters μ_1,…,μ_B. Another common assumption is that the terminal node parameters are conditionally independent given the tree topology T. These assumptions simplify the Markov Chain Monte Carlo (MCMC), particularly when working with a Gaussian likelihood. Specifically, the conjugacy avoids the need for a complex Reversible Jump MCMC. Given data and the specified prior distributions, samples from the posterior of T, M and σ^2 are generated using MCMC. The tree topology is updated at each iteration by proposing a slight change to the existing tree structure <cit.>. The terminal node and variance parameters are then updated using Gibbs sampling steps. The methodology outlined above is easily extended to an ensemble of m trees, T_1,…,T_m, with terminal node parameter sets M_1,…,M_m <cit.> using a Bayesian backfitting algorithm <cit.>. The mean function becomes E[Y() |] = ∑_j = 1^m g(; T_j, M_j), where g(; T_j, M_j) = ∑_b = 1^B_jμ_bj I(∈_bj) and I(∈_bj) is an indicator function denoting the event that is mapped to the partition of the input space by tree j. Generally, tree models result in a piecewise constant mean function. When continuity is desirable, soft regression trees <cit.> can serve as a useful alternative. A soft regression tree maps an observation with input to a unique terminal node probabilistically given the tree topology and associated splitting rules. If T_j has B_j terminal nodes, the probability of being mapped to the terminal node is given by ϕ_bj(; T_j, γ_j) for b=1,…,B_j and bandwidth parameter γ_j. The bandwidth parameter is used to control the amount of smoothing across the terminal node parameters. Given each input dimension is standardized so that x_v ∈ [0,1], SBART assumes γ_j to be exponentially distributed with a mean of 0.1. Larger values of γ_j (≥ 0.1) will lead to a more global solution, while small values of γ_j tends to generate a localized fit similar to BART. The mean function of a soft regression tree is then constructed as a weighted average of the terminal node parameters g(; T_j, M_j) = ∑_b = 1^B_jμ_bj ϕ_bj(; T_j, γ_j). Once more, the traditional regression tree with deterministic paths can be thought of as a special case where ϕ_bj(; T_j,γ_j) = I(∈_bj). 
This expression for the mean function requires the terminal node parameters to be updated jointly during the MCMC. When the terminal node parameters are scalars, this requires inversion of a non-sparse B_j× B_j matrix <cit.>. In most cases, the trees are regularized to maintain a shallow depth, so this inversion is relatively inexpensive to compute. If the terminal node parameters are K-dimensional vectors, the corresponding update requires inversion of a KB_j × KB_j matrix, which can quickly become computationally expensive even with shallow trees. Climate applications typically involve large amounts of data and thus we can expect deeper trees will be required to combine a set of GCMs. The proposed Random Path BART (RPBART) model described next alleviates these concerns. § THE RANDOM PATH MODEL We first propose our smooth, continuous, Random Path model for standard BART (RPBART), which models a univariate mean function. Let Y(_i) be an observable quantity from some unknown process at input _i. Let z_bj(_i) be the latent random path indicator for the event that the input is mapped to the terminal node in T_j. Given the random path assignments, we assume Y(_i) is modeled as Y(_i) |{T_j,M_j,Z_j}_j=1^m, σ^2 ∼ N(∑_j = 1^m g(_i; T_j, M_j,Z_j), σ^2 ) , g(_i; T_j, M_j, Z_j) = ∑_b=1^B_jμ_bj z_bj(_i). Each observation is mapped to exactly one terminal node within T_j, thus ∑_b=1^B_j z_bj(_i) = 1 and z_bj(_i) ∈{0,1} for b = 1,…, B_j and i=1,…,n. The set Z_j is then defined as Z_j = {_j(_i) }_i = 1^n where _j(_i) is the B_j-dimensional vector of random path assignments for the input. Traditional BART with deterministic paths can be viewed as a special case of RPBART where z_bj(_i) = I(_i ∈_bj). Conditional on Z_j, the output of the random path tree remains a piecewise constant form. However, taking the expectation of the sum-of-trees with respect to Z_j results in a continuous function, similar to SBART (<ref>). §.§ Prior Specification The original BART model depends on the tree structures, associated set of terminal node parameters, and error variance. We maintain the usual priors for each of these components, μ_bj| T_j ∼ N(0, τ^2), T_j ∼π(T_j), σ^2 ∼νλ/χ^2_ν, where b = 1,…,B_j and j=1,…,m. The values of τ, λ, and ν can still be selected using similar methods as specified by <cit.>. RPBART introduces two new sets of parameters, (Z_j, γ_j) for j = 1,…,m. For each Z_j, consider the path assignment for the observation, _j(_i). Conditional on T_j, the B_j-dimensional random path vector for a given _i is assigned a Multinomial prior _j(_i) | T_j, γ_j ∼Multinomial(1; ϕ_1j(_i;T_j,γ_j),…,ϕ_B_jj(_i;T_j,γ_j) ) , where ϕ_bj(_i; T_j,γ_j) is the probability an observation with input _i is mapped to the terminal node in T_j or equivalently, the conditional probability that z_bj(_i) = 1. The bandwidth parameter, γ_j, takes values within the interval (0,1) and controls the degree of pooling across terminal nodes. As γ_j increases, more information is shared across the terminal nodes, which leads to a less localized prediction. Since we confine γ_j to the interval (0,1), we assume γ_j ∼Beta(α_1,α_2), j = 1,…,m. This prior specification of γ_j noticeably differs from the exponential bandwidth prior used in SBART <cit.>. The different modeling assumptions are guided by the design of the path probabilities, ϕ_bj(_i;T_j,γ_j), which are discussed in Section <ref>. Finally, we assume conditional independence given the set of m trees. 
This implies the joint prior simplifies as π(σ^2,{T_j,M_j,Z_j,γ_j}_j=1^m) = π(σ^2)∏_j=1^m π(M_j | T_j, Z_j) π(Z_j | T_j, γ_j) π(T_j)π(γ_j) = π(σ^2)∏_j=1^m π(T_j) π(γ_j) ∏_b = 1^B_jπ(_bj| T_j) π(Z_j | T_j, γ_j). Furthermore, we assume mutual independence across the random path assignment vectors _j(_i) apriori. Thus the set of vectors over the n training points can be rewritten as π(Z_j | T_j, γ_j) = ∏_i = 1^n ∏_b = 1^B_j(ϕ_bj(_i; T_j,γ_j))^z_bj(_i). §.§ The Path Probabilities The continuity in the mean prediction from each tree is driven by the B_j path probabilities, ϕ_bj(_i; T_j, γ_j). Recall, a tree model recursively partitions the input space into B_j disjoint subregions using a sequence of splitting rules. We define the path from the root node, η_1j^(i), to the terminal node, η_bj^(t), in terms of the sequence of internal nodes that connect η_1j^(i) and η_bj^(t). To define ϕ_bj(_i; T_j, γ_j), we must consider the probability of visiting the internal nodes that form the path that connects η_1j^(i) and η_bj^(t). For example, consider the tree with B_j = 3 terminal nodes (red, blue, green) and induced partition of the 1-dimensional input space, [-1,1], in Figure <ref>. In this tree, the path to the red terminal node, η_1j^(t), is simply defined by splitting left at the root node. Similarly, the path to the blue terminal node, η_2j^(t), is defined by first splitting right at η_1j^(i) and then left at η_2j^(i). In the usual regression tree model, these splits happen deterministically. This means an observation with x_1 < 0 splits left and is thus mapped to η_1j^(t) with probability 1. Using our random path model, an observation splits right with probability ψ(; ·) and left with probability 1 - ψ(; ·). This adds another layer of stochasticity into the model. §.§.§ Defining the Splitting Probabilities Consider the internal node, η^(i)_dj, of T_j. Assume η^(i)_dj splits on the rule x_v_(dj) < c_(dj). Let ψ(;v_(dj),c_(dj),γ_j) define the probability an observation with input moves to the right child of η^(i)_dj. Further assume the cutpoint c_(dj) is selected from the discretized subset (L^d_v, U^d_v)∩_v, where _v is the finite set of possible cutpoints for variable v, and L^d_v and U^d_v are the upper and lower bounds defined based on the previous splitting rules in the tree. The bounds, which are computed based on the information in T_j, are used to define a threshold which establishes a notion of “closeness" between points. We incorporate this information into the definition of ψ(;v_(dj),c_(dj),γ_j) by ψ(;v_(dj),c_(dj),γ_j)= 1 - 1/2(1 - x_v_(dj) - c_(dj)/γ_j(U^d_v - c_(dj)))^q_+ x_v_(dj)≥ c_(dj), 1/2(1 - c_(dj) - x_v_(dj)/γ_j(c_(dj) - L^d_v))^q_+ x_v_(dj) < c_(dj), where the expression a_+ = max{a,0} for any a∈ and q is a shape parameter. This definition of ψ(;v_(dj),c_(dj), γ_j) restricts the probabilistic assignment to the left or right child of η^(i)_dj to the interval _dj(γ_j) := (c_(dj) - γ_j(c_(dj)-L^d_v), c_(dj) + γ_j(U^d_v-c_(dj))). Any observation with input x_v_(dj) such that x_v_(dj)∈_dj(γ_j) has a non-zero chance of being assigned to either of the child nodes in the binary split. Meanwhile, ψ(;v_(dj),c_(dj),γ_j)=0 if x_v_(dj) is less than the lower bound in _dj(γ_j) or ψ(;v_(dj),c_(dj),γ_j)=1 if x_v_(dj) is greater than the upper bound in _dj(γ_j). This means the probabilistic assignment agrees with the deterministic split when x_v_(dj)∉_dj(γ_j). 
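As a concrete illustration, the sketch below (Python) implements ψ and, for a three-terminal-node example tree like the one in Fig. <ref> (a root split at x_1 < 0 on [-1,1] and a second split at x_1 < 0.4 on the right branch), the resulting path probabilities; the shape parameter q, the bandwidth, and the terminal-node values are chosen arbitrarily for illustration.

import numpy as np

def psi(x_v, c, L, U, gamma, q=2.0):
    """Probability of moving to the RIGHT child at an internal node splitting
    on x_v < c, where [L, U] are the bounds implied by the splits above this
    node; the shape parameter q is an assumed value."""
    x_v = np.asarray(x_v, dtype=float)
    right = 1.0 - 0.5 * np.maximum(1.0 - (x_v - c) / (gamma * (U - c)), 0.0) ** q
    left = 0.5 * np.maximum(1.0 - (c - x_v) / (gamma * (c - L)), 0.0) ** q
    return np.where(x_v >= c, right, left)

def path_probs(x1, gamma):
    """Path probabilities for the example tree: root split x_1 < 0 on [-1, 1],
    then x_1 < 0.4 on the right branch (bounds [0, 1])."""
    p_root = psi(x1, c=0.0, L=-1.0, U=1.0, gamma=gamma)   # P(go right at root)
    p_inner = psi(x1, c=0.4, L=0.0, U=1.0, gamma=gamma)   # P(go right at 2nd node)
    phi1 = 1.0 - p_root               # red terminal node
    phi2 = p_root * (1.0 - p_inner)   # blue terminal node
    phi3 = p_root * p_inner           # green terminal node
    return np.stack([phi1, phi2, phi3], axis=-1)

x1 = np.linspace(-1.0, 1.0, 5)
phi = path_probs(x1, gamma=0.5)
print(phi.sum(axis=-1))               # each row sums to 1

# A smooth single-tree prediction is the phi-weighted average of the
# (illustrative) terminal-node parameters mu.
mu = np.array([-0.3, 0.1, 0.4])
print(phi @ mu)

Each row of phi sums to one, and shrinking γ toward zero recovers the hard, piecewise constant assignment.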
In other words, points that are farther away from the cutpoint take the deterministic left or right move while points close to the cutpoint take a random move left or right. To understand γ_j, consider the first split (at the root node) within a given a tree. Assume the root node splits the interval [L_1^(1), U_1^(1)] = [-1,1] using the rule x_1 < 0 (i.e. v_1j = 1 and c_1j = 0). Figure <ref> displays the probability of splitting left (red), 1- ψ(;1,0,γ_j), and the probability of splitting right (blue), ψ(;1,0,γ_j), for different values of γ_j as a function of x_1. The orange region in each panel highlights the set _1j(γ_j). The interval is wider for larger γ_j, which means a larger proportion of points can move to the left or right child nodes. As γ_j decreases, the probability curves become steeper and ψ(;1,0,γ_j) starts to resemble the deterministic rule I(x_v_(dj)≥ c_(dj)). The splitting probabilities determine how to traverse the tree along the various paths created by the internal nodes. These probabilities set the foundation for computing the probability of reaching any of the B_j terminal nodes within T_j. §.§.§ Defining the Path Probabilities The path probabilities, ϕ_bj(;T_j,γ_j), can be defined in terms of the individual splits at each internal node. For example, consider the tree with B_j = 3 terminal nodes in Figure <ref>. The first internal node, η_1j^(i), splits using v_1j = 1 and c_1j = 0, while the second internal node, η_2j^(i), splits using v_2j = 1 and c_2j = 0.4. The probability of reaching the red terminal node is simply the probability of splitting left at η_1j^(i), which is given by ϕ_1j(;T_j,γ_j) = 1- ψ(;1,0,γ_j). Meanwhile, an observation is mapped to the blue terminal node by first splitting right at η_1j^(i) and then left at η_2j^(i). This sequence of moves occurs with probability ϕ_2j(;T_j,γ_j) = ψ(;1,0,γ_j)×(1- ψ(;1,0.4,γ_j)). The resulting path probabilities for each terminal node with γ_j = 0.5 are shown in the right panel of Figure <ref>. In general, if the path from the root node to the terminal node depends on D internal nodes, the path probability is defined by ϕ_bj(;T_j,γ_j) = ∏_d=1^D ψ(;v_(dj),c_(dj),γ_j)^R_(dj)×(1-ψ(;v_(dj), c_(dj),γ_j))^1-R_(dj), where v_(dj) and c_(dj) are the variable and cutpoint selected at the internal node along the specified path in T_j, and R_(dj) = 1 (R_(dj) = 0) if a right (left) move is required at the internal node to continue along the path towards η^(t)_bj. §.§ Smooth Mean Predictions Let Y() be a new observable quantity at input . Conditional on the m random path vectors at x̃, the mean function is given by E[Y() |{T_j,M_j,Z̃_j,γ_j}_j = 1^m] = ∑_j = 1^m (;T_j,M_j,Z̃_j) = ∑_j = 1^m ∑_b = 1^B_j_bj z_bj(), where Z̃_j = Z_j ∪{_j()} is the set of random path assignments from the training data and the future observation. Marginalizing over the random path assignments results in a smooth mean prediction, E[Y() |{T_j,M_j,γ_j}_j = 1^m] = ∑_j = 1^m ∑_b = 1^B_j_bj ϕ_bj(; T_j, γ_j). Posterior samples of this expectation can be obtained by evaluating the functional form in (<ref>) given posterior draws of T_j, M_j, and γ_j. §.§ The Semivariogram The RPBART model introduces an additional pair of hyperparameters, α_1 and α_2, which control the m bandwidth parameters γ_j, and therefore the level of smoothing in the model. The common approach in BART is to calibrate the prior hyperparameters using a lightly data-informed approach or cross validation. 
However, neither the data-informed approach proposed for BART <cit.> nor the default prior settings in SBART <cit.> can be used for calibrating the new smoothness parameters α_1 and α_2. Furthermore, the added complexity of RPBART would render the traditional cross-validation study too complex and computationally expensive. Thus, we propose to calibrate RPBART using the semivariogram <cit.>. The semivariogram, ν(‖‖), is defined by ν(‖‖) = 1/||∫_ν(, ) d, ν(, ) = 1/2Var(Y(+)-Y()), where || denotes the volume of the input domain and + denotes an input which is a distance ‖‖>0 away from <cit.>. Assuming a constant mean function for Y(), the function ν(, ) simplifies as ν(, ) = 1/2E[(Y(+)-Y())^2 ]. In spatial statistics, the function ν(, ) describes the spatial correlation between two points that are separated by a distance of ‖‖ and may depend on parameters that determine the shape of the semivariogram ν(‖‖). We propose calibrating the hyperparameters of the RPBART model by using an estimator of the prior semivariogram in (<ref>) and the function ν(,) in (<ref>). The estimator of (<ref>) can be can be calculated by marginalizing over the set of parameters { T_j, M_j, Z_j, γ_j }_j = 1^m and conditioning on σ^2. In order to compute ν(, ), we first analytically marginalize over { M_j, Z_j }_j = 1^m conditional on the remaining parameters. An expression for ν(, ) is then computed by numerically integrating over { T_j, γ_j }_j = 1^m (where σ^2 is held fixed). The function ν(, ) for the RPBART model is given by Theorem <ref>. Assume the random quantities { T_j, M_j, Z_j, γ_j }_j = 1^m are distributed according to prior specification in Section <ref>. Conditional on σ^2, the function ν(, ) for the RPBART model is ν(, ) = σ^2 + mτ^2(1 - Φ̅(,)), where mτ^2 = (y_max-y_min/2k)^2 defines the variance of the sum-of-trees, k is a tuning parameter, y_max - y_min is the range of the observed data, and Φ̅(,)= E[∑_b=1^B_1ϕ_b1(+;γ_1, T_1)ϕ_b1(;γ_1,T_1)], is the probability two observations with inputs and + are assigned to the same partition. Without loss of generality the expectation is with respect to T_1, and γ_1 (since the m trees are a priori i.i.d.). The proof of Theorem <ref> is in Section <ref> of the Supplement. The expectation, Φ̅(,), in Theorem <ref> can be approximated using draws from the prior. This expression shows that ν(,) depends on the probability that two points are assigned to different partitions, averaged over the set of trees and bandwidth parameters, as denoted by 1 - Φ̅(,). This probability is then scaled and shifted by the variance of the sum-of-trees, mτ^2, and error variance, σ^2. Generally, we are more interested in ν(‖‖), which describes the variability across the entire input space rather than at a specific . For RPBART, we can numerically compute ν(‖‖) by integrating (<ref>) over as shown in (<ref>). We can use the resulting a priori semivariogram estimator to guide the selection of the hyperparameters, α, β, α_1, α_2, and k, across the various priors in the model. Note the number of trees m must still be selected through other means, such as cross-validation. We note this procedure for computing the semivariogram is more complex than what is observed with Gaussian Processes <cit.>. In such cases, shift-invariant kernels are typically selected for the covariance model in a stationary Gaussian Process. In other words, the covariance model is simply a function of under a shift-invariant kernel. 
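As a rough illustration of this computation, the sketch below approximates Φ̄(x,h), and hence ν(‖h‖), by Monte Carlo. For brevity each prior draw is a single-split "stub" tree and the displacement h is applied along the split coordinate; the actual prior grows deeper trees under the depth penalty α(1+d_η)^-β, and all numeric settings (k, y_max - y_min = 2, the Beta hyperparameters) are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def psi(x_v, c, L, U, gamma, q=2.0):
    # Splitting probability as sketched earlier (q assumed).
    x_v = np.asarray(x_v, dtype=float)
    right = 1.0 - 0.5 * np.maximum(1.0 - (x_v - c) / (gamma * (U - c)), 0.0) ** q
    left = 0.5 * np.maximum(1.0 - (c - x_v) / (gamma * (c - L)), 0.0) ** q
    return np.where(x_v >= c, right, left)

def semivariogram(h, n_draws=2000, k=1.0, y_range=2.0, sigma2=0.0, a1=2.0, a2=10.0):
    """Monte Carlo estimate of nu(h) = sigma^2 + m*tau^2 * (1 - Phi_bar(h)),
    with m*tau^2 = (y_range / (2k))^2, using single-split 'stub' trees."""
    mtau2 = (y_range / (2.0 * k)) ** 2
    phibar = 0.0
    for _ in range(n_draws):
        c = rng.uniform(0.05, 0.95)        # cutpoint on the standardized scale
        gamma = rng.beta(a1, a2)           # bandwidth from its Beta prior
        xs = rng.uniform(0.0, 1.0 - h)     # random location, kept inside [0, 1]
        pR = psi(np.array([xs, xs + h]), c, 0.0, 1.0, gamma)
        # Probability both points land in the same of the two terminal nodes.
        phibar += pR[0] * pR[1] + (1 - pR[0]) * (1 - pR[1])
    phibar /= n_draws
    return sigma2 + mtau2 * (1.0 - phibar)

for h in [0.01, 0.1, 0.3, 0.6]:
    print(h, semivariogram(h))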
However, in the RPBART model, the covariance function, Φ̅(,) is a non-shift-invariant product kernel of the path probabilities that is averaged over all possible trees and bandwidth parameters. Because of this, we must carefully consider the computation of ν(, ) in order to numerically compute ν(‖‖). For example, consider the semivariogram with respect to the 2-dimensional input space [-1,1]× [-1,1]. Figure <ref> displays possible semivariograms for different values of k, α_1, and α_2 with y_min = -1 and y_max = 1. Note, we set the nugget σ = 0 in this Figure to clearly isolate the effect of the smoothness parameters α_1 and α_2, i.e. ignoring the nugget. Each panel corresponds to different settings of the bandwidth prior, where α_1=2 and α_2=25 indicate low levels of smoothing and α_1=15 and α_2=10 indicate high levels of smoothing. Based on each curve, we see the smoothing parameters primarily affect the behavior of the semivariogram for small values of ‖‖. In the low smoothing case (left), each of the semivariograms take values closer to 0 when ‖‖ is small, while a noticeable shift upwards is observed in the higher smoothing cases (center and right). This offset behavior appears in the RPBART semivariogram regardless of the bandwidth hyperparameters because the covariance function is discontinuous. As ‖‖ increases, each semivariogram reaches a maximum value, known as the sill. For each k, we observe the sill is similar regardless of the values of α_1 and α_2. Thus, the value of k controls the height of the sill and in turn the amount of variability attributed to the sum-of-trees model. As a result, the interpretation of k under the RPBART model is the same as in the original BART model. Finally, we should note that the hyperparameters in the tree prior, α and β, can also be determined based on the semivariogram, as deeper trees result in a larger number of partitions and thus less correlation across the input space. Hence, deeper trees tend to shift the semivariograms upwards and further contribute to the offset. In practice, we can compare the theoretical semivariogram to the empirical semivariogram to select the hyperparameters for the model <cit.>. In summary, we see α_1 and α_2 primarily affect the semivariogram when ‖‖ is near 0, as more smoothing generally leads to an upward shift and increase in curvature near the origin. The tree prior hyperparameters, α and β, affect the size of the offset and the height of the sill. Meanwhile, the value of k simply scales the function and has minimal impact on the overall shape of the curve. The last component to consider is the value of σ^2, which is set to 0 in Figure <ref>. The typical way to calibrate the νλ/χ_ν^2 prior on σ^2 is to fix ν at a desired value and select λ based on an estimate of σ^2, denoted by σ̂^2 <cit.>. Traditional approaches typically compute σ̂^2 using a linear model. From Theorem <ref>, we see σ^2 simply adds to the offset by vertically shifting the semivariogram. Thus, we can select σ̂^2 based on theoretical and empirical semivariogram. Given a fixed value of ν (usually around 10), one can set σ̂^2 to be the mode of the prior distribution and algebraically solve for the scale parameter λ. This strategy allows us to select λ, along with the other hyperparameters, using the same information encoded in the semivariogram. § SMOOTH MODEL MIXING §.§ A Mean-Mixing Approach The RPBART model can be easily extended to the BART-based model mixing framework originally presented by <cit.>. 
Let f_1(_i),…,f_K(_i) denote the output from K simulators at an input _i. When the simulators are computationally expensive, the output f_l(_i) is replaced with the prediction from an inexpensive emulator, f̂_l(_i), for l = 1,…,K. For example, f̂_l(_i) could be the mean prediction from a Gaussian process emulator <cit.> or an RPBART emulator. In climate applications, each GCM may be evaluated on different latitude and longitude grids, thus we use regridding techniques such as bilinear interpolation to compute f̂_l(_i). Given the mean predictions from K emulators, we assume Y(_i) is modeled by Y(_i) |(_i), { T_j, M_j, Z_j }_j = 1^m, σ^2 ∼ N(^⊤( _i) (_i),σ^2), (_i) = ∑_j =1^m (_i;T_j,M_j,Z_j), (;T_j,M_j,Z_j) = ∑_b = 1^B_j_bjz_bj(_i), where (_i) is the K-dimensional vector of mean predictions at input _i, (_i) is the corresponding K-dimensional weight vector, and i=1,…,n. This extension allows the K weights to be modeled as continuous functions using similar arguments as outlined for the 1-dimensional tree output case in Section <ref>. Similar to <cit.>, we regularize the weight functions via a prior on the terminal node parameters. The primary goal is to ensure each weight function, w_l(), prefers the interval [0,1] without directly imposing a strict non-negativity or sum-to-one constraint. Thus, we assume _bj| T_j ∼ N(1/mK_K , τ^2 I_K ), where τ = 1/(2k√(m)) and k is a tuning parameter. This choice of τ follows <cit.> in that the confidence interval for the sum-of-trees has a length of w_max - w_min, where 0 and 1 are the target minimum and maximum values of the weights. This prior calibration ensures each w_l() is centered about 1/K, implying each model is equally weighted at each apriori. The value of k controls the flexibility of the weight functions. Small k allows for flexible weights that can vary easily beyond the target bounds of 0 and 1 and thus are able to identify granular patterns in the data. Larger k will keep the weights near the simple average of 1/K and could be limited in identifying regional patterns in the data. Typically, more flexible weights are needed when mixing lower fidelity or lower resolution models, as the weights will account for any discrepancy between the model set and the observed data. By default, we set k = 1. RPBART allows for independent sampling of each of the B_j terminal node parameters, _bj, within T_j for j = 1,…,m. As previously discussed, this avoids the joint update required in SBART. This is rather significant, as the SBART update for the B_j terminal node parameters vectors would require an inversion of a KB_j × KB_j matrix. In larger scale problems with larger K or deeper trees, the repeated inversion cost of this matrix would be burdensome. §.§ The Semivariogram To calibrate the remaining hyperparameters of our RPBART-based mixing model, we employ the prior semivariogram introduced in Section <ref>. The semivariogram of Theorem <ref> can be extended to the model mixing framework by considering the assumptions associated with each emulator. One cannot directly apply the results conditional on the point estimates of each emulator, f̂_1(),…,f̂_K(), because the semivariogram is designed to assess the spatial variability of the model free of any mean trend <cit.>. Rather than condition on the model predictions, we treat each of the individual emulators f_1(),…,f_K() as unknown quantities. Thus, the semivariogram for Y() will assess the modeling choices for the sum-of-trees mixing model along with modeling choices for each f_l(). 
For example, we might assume GP emulators, f_l() |_l ∼GP(f̅_l, R_l(, ^'; _l) ), where _l is a vector of parameters such as the scale or length scale and f̅_l is a constant mean. Assume the random quantities { T_j, M_j, Z_j, γ_j }_j = 1^m are distributed according to the prior specification in Section <ref>. Assume each simulator is modeled as a stochastic emulator with mean f̅_l and covariance kernel R_l(,^';ψ_l), l=1,…,K. Then, the function ν(, ) is ν(, ) = σ^2 + ( 1/4k^2 + 1/K^2) ∑_l = 1^Kν_l^(f)(,h;_l) + (1/4k^2)(1 - Φ̅(,))×∑_l = 1^K (R_l(+,;_l) + f̅_l^2). The semivariogram, ν(‖‖), can be obtained by averaging over , as in (<ref>). Rather than using the empirical semivariogram for Y() to select the hyperparameters for the emulators and the weight functions, we recommend a modularization approach. Specifically, the hyperparameters associated with each emulator can be selected solely based on evaluations of the corresponding simulator output. The empirical semivariogram of Y() can then be used to select the hyperparameters for the BART weights, plugging in the choices of the _l and f̅_l for l=1,…,K. Though this strategy is just used for hyperparameter selection, it aligns well with the two-step estimation procedure common in stacking <cit.>, which separates the information used to train the emulators from the observational data used to train the weights. §.§ Posterior Weight Projections The RPBART-based weights are modeled as unconstrained functions of the inputs. Alternative approaches impose additional constraints on the weight functions, such as a non-negativity or sum-to-one constraint <cit.>. Though constrained approaches may introduce bias into the mean of the mixed prediction, they can improve the interpretability of the weight functions. However, additional constraints on the weight functions could increase the computational complexity of the estimation procedure. For example, imposing a simplex constraint on the BART-based weights would drastically change the model fitting procedure, as the conditional conjugacy afforded by the multivariate normal prior on the terminal node parameters would likely be lost. Rather than changing the model and estimation procedure, one alternative is to explore the desired constraints through post-processing methods which impose constraints a posteriori rather than a priori <cit.>. In this regard, we define a constrained model in terms of the original unconstrained model using Definition <ref>. Let () = (w_1(),…,w_K()) be a vector of unconstrained weights and Y() be an observable quantity from the underlying physical process. Define the unconstrained and constrained models for Y() by (Unconstrained) Y() = ∑_l=1^K w_l() f̂_l() + ϵ(), (Constrained) Y() = ∑_l=1^K u_l() f̂_l() + δ() + ϵ(), where () = (u_1(),…,u_K()) is the projection of () onto the constrained space, δ() = ∑_l = 1^K (w_l() - u_l() )f̂_l() denotes the estimated discrepancy between the constrained mixture of simulators and the underlying process, and ϵ() is a random error. This framework allows for the interpretation of the weight functions on the desired constrained space without introducing significant computational burdens. The unconstrained and constrained models are connected through an additive discrepancy, δ(), which accounts for any potential bias introduced by the set of constraints. Posterior samples of () can easily be obtained by projecting the posterior samples of () onto the constrained space. The specific form of the projection will depend on the desired constraints.
In this work, we explore projections that enforce a simplex constraint that result in continuous constrained weight functions, are computationally cheap, and promote some level of sparsity. The simplex constraint ensures each u_l() ≥ 0 and ∑_l = 1^K u_l() = 1. The simplex constraint enables a more clear interpretation of the weights, as the mixed-prediction is simply an interpolation between different model predictions. Model selection is also possible under the simplex constraint. Common ways to enforce a simplex constraint are through defining () as a function of () using a softmax or penalized L_2 projection <cit.>. These projections each have closed form expressions and are inexpensive to compute. The softmax is used in stacking methods such as Bayesian Hierarchical Stacking <cit.>. Though the softmax function is widely used, a common criticism is the lack of sparsity in that the constrained weights will take values between 0 and 1 but never exactly reach either bound. Thus, the softmax can shrink the effect of a given model, but each of the K models will have at least some non-zero contribution to the final prediction. Lastly, the softmax function may depend on a “temperature parameter", which can be used to control the shape of the projected surface. The penalized L_2 projection (i.e. the sparsegen-linear from <cit.>) defines the weights using a thresholding function, u_l(;T) = [w_l() - λ_Q(;T)/1 - T]_+, λ_Q(;T) = 1/Q(∑_k = 1^Q w_(k)() - 1 + T), for l = 1,…,K, temperature parameter T>0, and Q≤ K. We let w_(1)()≥…≥ w_(K)() denote the ordered set of weights at a fixed . The value of Q is given by Q = max_t = 1,… K{ 1 - T + t w_(t)() > ∑_l = 1^t w_(l)() }. This projection promotes more sparsity in the constrained prediction, as any u_l(;T) can take values of exactly 0, which in turn removes the effect of the GCM. The softmax and penalized L_2 projections are just two examples of ways to compute the constrained weights. One can choose between these two methods (among others) based on the desirable properties in the constrained weights and discrepancy. For example, the penalized L_2 projection can be used to obtain a more sparse solution. § APPLICATIONS This section outlines two different examples of model mixing using the RPBART method for Bayesian Model Mixing (RPBART-BMM). Section <ref> demonstrates the methodology on a toy simulation example, which combines the outputs of K=4 simulators each with two inputs. Each simulator in Section <ref> provides a high-fidelity approximation of the underlying system in one subregion of the domain and is less accurate across the remainder of the domain. Similar patterns are observed for a set of general climate models (GCMs) under consideration. Section <ref> demonstrates our methodology on a real-data application with eight GCMs that model the average monthly surface temperature across the world. §.§ A Toy Numerical Experiment Assume an underlying physical process is given by f_†() = sin(x_1) + cos(x_2), where = (x_1,x_2) ∈ [-π,π]× [-π,π], and Y() is generated as a Gaussian random process with mean f_†() and standard deviation σ = 0.05. Consider K=4 different simulators with outputs shown in Figure <ref>. Compared to f_†(), we see each simulator provides a high-fidelity approximation of the system in one corner of the domain, a lower-fidelity approximation in two corners of the domain, and a poor approximation in the last corner of the domain. 
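For concreteness, the data-generating process used in the experiment described next can be sketched as follows (Python; the 10 × 10 grid is one way to lay out n = 100 equally spaced inputs).

import numpy as np

rng = np.random.default_rng(0)

def f_dagger(x1, x2):
    # True process underlying the toy experiment.
    return np.sin(x1) + np.cos(x2)

# n = 100 equally spaced training inputs on [-pi, pi] x [-pi, pi].
g = np.linspace(-np.pi, np.pi, 10)
x1, x2 = np.meshgrid(g, g)
X = np.column_stack([x1.ravel(), x2.ravel()])

# Noisy observations with standard deviation 0.05.
y = f_dagger(X[:, 0], X[:, 1]) + rng.normal(scale=0.05, size=len(X))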
We generated n = 100 simulated observations from the true process, f_†(), across equally spaced inputs over a regular grid, and then fit an RPBART-BMM model using m = 10, k = 1, α_1 = 2, α_2 = 10, α = 0.95, and β = 1. The posterior prediction results are summarized in Figure <ref>. The left panel displays the mean prediction, which nearly matches the true underlying process shown in Figure <ref>. The mean residuals (center) suggest the mixed-prediction only struggles in areas where x_1 or x_2 are close to 0, the region where each simulator begins to degrade. The width of the credible intervals (right) displays a similar pattern with low uncertainty in each corner and larger uncertainty in the middle of the domain. The posterior mean of the weight functions are shown in Figure <ref>. Each individual weight suggests the corresponding simulator receives relatively high weight in one corner, moderate weight in two corners, and relatively low or negative weight in the last corner. For example consider the first simulator and correspond mean weight ŵ_1(). We see the corner with higher weight (yellow) corresponds to the region where that particular simulator is a high-fidelity approximation of the system. The two corners (top left and bottom right) where the simulator receives lower but non-negligible weight (pink and purple) generally corresponds to the region where the model is less accurate but still informative for the underlying process. The region where the simulator receives weight near 0 or negative (blue and black) generally corresponds to the area where the model does not follow the patterns of the underlying process. Overall, the RPBART-BMM model is able to identify regions where a given simulator is a high or low fidelity approximation of the system. The mean prediction and weight functions are estimated as continuous functions, which improves upon the initial work of <cit.>. §.§ Application to Climate Data Integration In this application, we mix multiple GCMs which model the monthly average two-meter surface temperature (T2M) across the world for April, August, and December 2014. These three time periods are chosen to capture how the GCM performance varies across different months. We combine the output of eight different simulators, each with varying fidelity across the input space and spatial resolution. We compare the RPBART-BMM approach to Feature-Weighted Linear Stacking (FWLS) <cit.> and Neural Network Stacking (NNS) <cit.>. The latter two approaches are both mean mixing methods where FWLS models the weights using a linear model and NNS defines the weights using a Neural Network. We allow all three methods to model the weights as functions of latitude, longitude, elevation, and month. Each mixing method is trained using 45,000 observations, where 15,000 are taken from each of the three time periods. The predictions for April, August, and December 2014 are generated over a grid of 259,200 latitude and longitude pairs for each time period. §.§.§ Results We downloaded the GCM data from the Coupled Model Intercomparison Project (CMIP6) <cit.>, a data product which includes outputs from a wide range of simulators used to study various climate features. Each GCM may be constructed on a different set of external forcing or socioeconomic factors, hence the fidelity of each climate model is likely different across the globe. Regardless of these factors, each GCM outputs the T2M on a longitude and latitude grid although the grid resolutions can be different. 
We denote the output from each GCM at a given input as f_l() where l=1,…,K. In climate applications, reanalysis data is often used for observational data. We obtain the European Centre for Medium-Range Forecasts reanalysis (ERA5) data, which combines observed surface temperatures with results from a weather forecasting model to produce measurements of T2M across a dense grid of 0.25^∘ longitude by 0.25^∘ latitude. We denote the reanalysis data point as Y(_i) where _i denotes the latitude, longitude, elevation, and month for i=1,…,45000. The ERA5 data and elevation data was downloaded from the Copernicus Climate Data Store. A common assumption in model mixing is that the simulators are evaluated across the same grid of inputs as the observed response data. To map the data onto the same grid, we apply the bilinear interpolation to obtain an inexpensive emulator, f̂_1(),…,f̂_K(), for each GCM <cit.>. Table <ref> displays the root mean squared errors for each of the three time periods. We see each of the three mean mixing approaches (RPBART-BMM, NNS, FWLS) outperform the individual simulators (rows 4-11), with the RPBART-BMM model performing the best across each month. Figure <ref> displays the ERA5 data (left) and the mean predictions from the three approaches across Alaska and Western Canada, the Rocky Mountains, the Suntar-Khayata Mountain Region, and the Tibetan Plateau in April 2014. In general, we see RPBART-BMM and NNS result in very similar high-fidelity predictions that capture the granular features in the data. Some of the granularity is lost in FWLS due to the less flexible form of the weight functions defined by a linear basis. Only subtle differences exist between RPBART-BMM and NNS. It appears RPBART-BMM is able to better leverage elevation and preserve the fine details in the data, as seen in the predictions in the Suntar-Khayata Mountains and the Tibetan Plateau. In addition to the improved mean prediction, we are also interested in identifying the subregions where each simulator is favored in the ensemble. In most cases, we have observed weight values closer to 1 typically indicates more accurate GCMs, while weights near 0 typically indicate less accurate GCMs. Figure <ref> displays the posterior mean weight functions for the eight simulators in the Northwestern hemisphere in April 2014. The posterior mean weights in Figure <ref> suggest CNRM, CESM2, and MIROC are more active in the mixed-prediction across the western part of the U.S. and Canada, as these three GCMs receive higher weight relative to the other GCMs. In the same region, BCC and CanESM5 receive negative weights, which are possibly due to multicollinearity and can be investigated using posterior weight projections as described in Section <ref>. Figure <ref> displays the posterior mean weight functions for the GCMs in the Northeastern hemisphere for April 2014. Again we can identify the subregions where each of the GCMs are more active in the mixed-prediction and receive higher weight. Additionally, we can identify the effect of elevation across the eight weights. For example, we can easily identify the oval shape of the Tibetan Plateau in South Asia across the various weight functions. Additionally, we can identify the outline of Suntar-Khayata mountain regions in Northeastern Asia based on the weights attributed to CNRM and CanESM5. 
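As an aside, the bilinear regridding step mentioned above, which places each GCM's native output on the ERA5 locations, can be sketched with standard tools (Python/SciPy). The grid resolutions and the random stand-in field are placeholders, and a real application would also need to handle the periodic longitude seam properly.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(0)

# Native GCM grid (placeholder resolution) and a stand-in T2M field.
gcm_lat = np.linspace(-90.0, 90.0, 145)
gcm_lon = np.linspace(0.0, 359.0, 192)
gcm_t2m = rng.random((gcm_lat.size, gcm_lon.size))

# Bilinear interpolator for this GCM; extrapolation covers the seam crudely.
interp = RegularGridInterpolator((gcm_lat, gcm_lon), gcm_t2m,
                                 method="linear", bounds_error=False,
                                 fill_value=None)

# Evaluate the emulator at the 0.25-degree ERA5 locations.
era5_lat = np.arange(-90.0, 90.25, 0.25)
era5_lon = np.arange(0.0, 360.0, 0.25)
lat2d, lon2d = np.meshgrid(era5_lat, era5_lon, indexing="ij")
pts = np.column_stack([lat2d.ravel(), lon2d.ravel()])
f_hat = interp(pts).reshape(lat2d.shape)

In practice this step is repeated for each of the K GCMs to form the emulators f̂_1,…,f̂_K.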
To better identify a subset of GCMs that are active in a specific region, we consider projecting the samples of the posterior weight functions onto the simplex using a penalized L_2 projection (the sparsegen-linear projection) <cit.>. This imposes a sum-to-one and non-negativity constraint on (). For this example, T = 0.137, which is chosen by minimizing the sum of squared discrepancies, ∑_i = 1^N_v(δ(^v_i))^2, at N_v = 5000 validation points. The posterior mean predictions from the constrained mixture of GCMs and the posterior mean discrepancy are shown in Figure <ref>. Overall, the constrained mixed-prediction recovers the major features of the underlying temperature patterns, however some fine details are lost due to the simplex constraint. This, at times, can be indicative of response features that are not accounted for by any of the GCMs in the model set. Typically, we loose some of the granularity in the areas with significant elevation changes, such as the Rocky Mountains (second row). Essentially, the constrained weights lose flexibility and are less able to properly account for the discrepancy present in the GCMs, as expected. The remaining variability that is unaccounted for by the constrained mixed-prediction is then attributed to the additive discrepancy. Figure <ref> displays the projected weight functions for April 2014. Similar conclusions can be made as in the unconstrained weights from Figure <ref>, however in the constrained model, we can clearly isolate the GCMs that contribute to the prediction in Figure <ref>. For example, we see CNRM, CESM2, and MIROC receive relatively high weight in the western part of the United States and Canada. Coupled with the discrepancy in Figure <ref>, we see that these three GCMs are sufficient for recovering the system in Alaska and Western Canada (low additive discrepancy), but are insufficient for recovering the system in the Rocky Mountains (higher additive discrepancy). Figure <ref> displays the posterior mean of the constrained weights in the northeastern hemisphere for April 2014. Similar conclusions can be made as in the northwestern hemisphere. Finally, we can connect the unconstrained and constrained models in terms of how each identifies model discrepancy. In the unconstrained model, the posterior sum of the weight functions, w_sum() = ∑_l = 1^K w_l(), is one metric that can be used to help identify model discrepancy. In our applications, we have observed that the posterior distribution of w_sum() deviates away from 1 in areas where the GCMs do not accurately model the underlying system. This phenomenon has also been observed in earlier work with constant weights <cit.>. The top panel of Figure <ref> displays the posterior mean of w_sum() over selected regions in April 2014. The bottom row of Figure <ref> displays the posterior mean of δ() from the constrained model in April 2014. Subregions where w_sum() deviates away from 1 (purple or green) align well with the areas where δ() deviates away from 0 (orange or blue). Thus, the post-processing approach used to estimate δ() preserves the information in the posterior of w_sum(). § DISCUSSION This research proposes a random path model as a novel approach to introduce continuity into the sum-of-trees model. We then extend the original mean mixing framework presented by <cit.> to combine the outputs from a general collection of GCMs while also modeling the weight functions as continuous functions. 
This methodology has been successfully applied to GCMs, which model the monthly average surface temperature. This model mixing approach depends on two sources of information, namely the observational data and the output from the GCMs. Our method explicitly assumes one can evaluate each GCM, or an emulator for the GCM, at the n inputs associated with the observational data. In this case, each emulator is constructed using bilinear interpolation with respect to the latitude and longitude grid associated with the temperature output. This results in a simple, yet lower-resolution, emulator for each GCM. Since the emulators maintain a lower resolution, the weight functions in the RPBART-BMM model are left to account for the granularity in the observed data. In other words, when the emulators are lower resolution, the weight functions must be more complex. In terms of BART, the required complexity corresponds to deep trees, less smoothing, and larger m. Alternatively, one could replace the bilinear interpolation approach with a more complex emulator, such as another RPBART model, neural networks, or Gaussian processes <cit.>. More complex emulators could depend on features other than longitude and latitude and thus result in higher-fidelity interpolations of the GCM output. Higher-resolution emulators would likely explain more granularity in the data and thus result in less complex weight functions (this implies shallower trees, more smoothing, and smaller m). Existing climate ensembles tend to focus on the temporal distribution of the simulator output and combine the GCM output in a pointwise manner for each latitude and longitude pair <cit.>. Though a temporal component can be considered as an input to the weight functions, this work predominantly focuses on learning the spatial distribution of the weights to help identify subregions where each simulator is more or less influential. Future work will further explore other ways to incorporate the temporal distribution of the GCMs, which could allow for more appropriate long-term model-mixed predictions of future temperature.

§ ACKNOWLEDGEMENTS

The work of JCY was supported in part by the National Science Foundation under Agreement OAC-2004601. The work of MTP was supported in part by the National Science Foundation under Agreements DMS-1916231, DMS-1564395, OAC-2004601. The work of TJS was supported in part by the National Science Foundation under Agreement DMS-1564395 (The Ohio State University). The work of BL was supported in part by the National Science Foundation under Agreement DMS-2124576.

§ SUPPLEMENT

§.§ The Conditional Covariance Model

We can better understand the parameters in the RPBART model by studying their effects on the covariance of the response at two inputs, x and x'. Conditional on the set of trees, T_1,…,T_m, bandwidth parameters γ_1,…,γ_m, and error variance σ^2, the prior covariance between Y(x) and Y(x') is given by Theorem <ref>. Assume the m sets of terminal node parameters, M_1,…,M_m, and random path assignments, Z_1,…,Z_m, are mutually independent conditional on the set of trees and bandwidth parameters. Further assume that the random path assignments for an observation x are conditionally independent of those for another input x'. Then, conditional on the set of trees, T_1,…,T_m, bandwidth parameters γ_1,…,γ_m, and error variance σ^2, the prior covariance between Y(x) and Y(x') when x ≠ x' is given by Cov(Y(x), Y(x')|Θ) = mτ^2 ∑_j = 1^m 1/m ∑_b=1^B_j ϕ_bj(x;T_j, γ_j) ϕ_bj(x';T_j, γ_j), where Θ = {{ T_j, γ_j }_j= 1^m, σ^2 }.
The conditional variance is given by Var(Y()|Θ) = mτ^2 + σ^2. Fix the set of trees, bandwidth parameters, and σ^2 and let Θ = {{ T_j, γ_j }_j = 1^m, σ^2 }. By definition, the conditional covariance between Y() and Y(^') is given by Cov(Y(), Y(^') |Θ) = Cov(∑_j=1^m g(;T_j,M_j,Z_j) + ϵ(), ∑_k=1^m g(^'; T_k,M_k,Z_k) + ϵ(^') |Θ) = ∑_j=1^m∑_k=1^mCov( g(;T_j,M_j,Z_j), g(^'; T_k,M_k,Z_k) |Θ). where g(;T_j, M_j, Z_j) = ∑_b=1^B_jμ_bjz_bj() and ^'. Due to the conditional independence assumption across the m trees and associated parameters, the covariance simplifies as Cov(Y(), Y(^') |Θ) = ∑_j=1^m Cov(g(;T_j,M_j,Z_j), g(^'; T_j,M_j,Z_j)|Θ), since Cov(g(;T_j,M_j,Z_j), g(^'; T_k,M_k,Z_k)|Θ) = 0 when j k. Finally, we can consider the covariance function within the tree, which simplifies as follows Cov(g(;T_j,M_j,Z_j), g(^'; T_j,M_j,Z_j)|Θ) = Cov(∑_b=1^B_jμ_bjz_bj(), ∑_d=1^B_jμ_djz_dj(^')|Θ) = ∑_b=1^B_j∑_d=1^B_jCov(μ_bjz_bj(), μ_djz_dj(^')|Θ). Once again, conditional independence between the terminal node parameters and random path assignments implies Cov(μ_bjz_bj(), μ_djz_dj(^')) = 0 when b d. However, when b=d the covariance within the tree is defined by ∑_b=1^B_jCov(μ_bjz_bj(), μ_bjz_bj(^')|Θ) = ∑_b=1^B_j E[ μ_bj^2z_bj()z_bj(^') |Θ] - E[μ_bjz_bj()|Θ]E[ μ_bjz_bj(^')|Θ] = τ^2 ∑_b=1^B_jϕ_bj(;T_j, γ_j ) ϕ_bj(^';T_j, γ_j ), where E[μ_bjz_bj()] = 0 because each terminal node parameter has mean zero and we assume conditional independence between each μ_bj and z_bj(). Returning to (<ref>), the covariance between Y() and Y(^') when ^' is then defined as Cov(Y(), Y(^')|Θ) = mτ^2 ∑_j = 1^m 1/m∑_b=1^B_jϕ_bj(;T_j, γ_j )ϕ_bj(^';T_j, γ_j ). An expression for the conditional variance of Y() is also given by Var(Y()|Θ) = Var(∑_j = 1^m g(;T_j,M_j,Z_j) + ϵ() |Θ) = ∑_j = 1^m Var(g(;T_j,M_j,Z_j)|Θ) + Var(ϵ()|Θ). Due to the conditional independence between parameters across trees, we can once again focus on the variance within each tree separately. The conditional variance for the output of T_j is Var(g(;T_j,M_j,Z_j)|Θ) = Var(∑_b=1^B_jμ_bjz_bj()|Θ) = E[(∑_b=1^B_jμ_bjz_bj() )^2|Θ] - E[∑_b=1^B_jμ_bjz_bj()|Θ]^2 = ∑_b=1^B_j∑_d=1^B_j E[μ_bjμ_djz_bj()z_dj()|Θ]. By assumption, z_bj() ∈{ 0,1 } and ∑_b=1^B_j z_bj() = 1. Thus, if b d, then z_bj()z_dj() = 0 with probability 1. The conditional variance then simplifies as Var(g(;T_j,M_j,Z_j)|Θ) = ∑_b=1^B_j E[μ_bj^2 z^2_bj()|Θ] = ∑_b=1^B_j E[μ_bj^2 z_bj()|Θ] = ∑_b=1^B_jτ^2 ϕ_bj(; T_j, γ_j ) = τ^2, where ∑_b=1^B_jϕ_bj(; T_j, γ_j ) = 1. Thus, the conditional variance of the tree output is simply the variance of the terminal node parameters τ^2. The variance of the sum-of-tress is then given by mτ^2. These results are the same as in the original BART model. Returning to (<ref>), the conditional variance of Y() is given by Var(Y()|Θ) = mτ^2 + σ^2. §.§ Proof of the Semivariogram Formula Consider the semivariogram of the sum-of-trees model, which depends on the function ν(,). Due to the constant mean assumption in the random path model, ν(,) is defined by ν(, ) = 1/2 E^T,M,Z,γ[(Y(+)-Y())^2|σ^2] = 1/2 E^T,γ[E^M,Z[(Y(+)-Y())^2 |Θ]], where E^T,γ denotes the expectation with respect to the set of trees and bandwidth parameters and E^M,Z denotes the conditional expectation with respect to the set of terminal nodes and random path assignments. Moving forward, we will let Θ = {{ T_j, γ_j }_j= 1^m, σ^2 } and treat σ^2 as a fixed value rather than a random variable. 
First consider the inner expectation, which will enable us to understand the effect of the priors we assign to each M_j and Z_j in the model. Due to the constant mean assumption, the inner expectation is equivalently expressed as E^M,Z[(Y(+)-Y())^2 |Θ] = Var(Y(+)|Θ) + Var(Y() |Θ) - 2 Cov(Y(+),Y() |Θ). The inner expectation from (<ref>) divided by 2 will be denoted as ν(,; Θ). Thus, we can focus on ν(,; Θ), which involves analytically tractable terms, as shown in Theorem <ref>: ν(, h;Θ) = σ^2 + mτ^2 (1 - 1/m∑_j = 1^m ∑_b=1^B_jϕ_bj(+;T_j, γ_j )ϕ_bj(;T_j, γ_j )) . The function ν(,) is obtained by computing the outer expectation in (<ref>), which is with respect to the set of m trees and bandwidth parameters. We can then obtain ν(, ) by marginalizing over the remaining parameters, ν(, ) = σ^2 + mτ^2 E[(1 - 1/m∑_j = 1^m ∑_b=1^B_jϕ_bj(+;T_j, γ_j )ϕ_bj(;T_j, γ_j ))] = σ^2 + mτ^2(1 - 1/m∑_j = 1^m E[∑_b=1^B_jϕ_bj(+;T_j, γ_j )ϕ_bj(;T_j, γ_j )]), where the expectation is with respect to Θ and σ^2 is treated as a fixed constant. Since the set of trees and bandwidth parameters are sets of i.i.d. random quantities, the expectation in (<ref>) is the same for each j = 1,…,m. Finally, we take τ = ( y_max - y_min)/(2k√(m)) which implies mτ^2 simplifies as mτ^2 = (y_max-y_min/2k)^2, where y_max-y_min is the range of the observed data and k is a tuning parameter that controls the flexibility of the sum-of-trees model. Given these two simplifications, ν(, ) for the RPBART is ν(, ) = σ^2 + (y_max-y_min/2k)^2(1 - E[∑_b=1^B_1ϕ_b1(+;γ_1, T_1)ϕ_b1(;γ_1,T_1)]). §.§ The Semivariogram Formula for Model Mixing Similar to Section <ref>, we first consider the function, ν(,; Θ). In model mixing, this is derived by marginalizing over M_j, Z_j, and f_l() where j = 1,…,m and l = 1,…,K. For simplicity, assume each f_l() is an independently distributed stochastic process with constant mean f̅_l and covariance function R_l(+, ; _l), with hyperparameter vector _l. The function ν(,;Θ) is defined in Theorem <ref>. Assume the set of emulators, f_1(),…,f_K(), are mutually independent random processes with constant means f̅_l and covariance functions R_l(+,;_l), l = 1,...,K. Further assume the emulators are independent of the K weight functions. Then, the function ν(,;Θ) for the mean-mixing model is given by ν(,;Θ) = σ^2 + (mτ^2 + 1/K^2) ∑_l = 1^Kν_l^(f)(,h;_l) + ν^(w)(,h;Θ) ∑_l = 1^K (R_l(+,;_l) + f̅_l^2) where ν^(f)_l(,h;_l) = R_l(,;_l) - R_l(+,;_l) is semivariogram component from the emulator, R(+,+;_l) = R(,;_l), and ν^(w)(, h;Θ) = mτ^2(1 - 1/m∑_j = 1^m∑_b=1^B_jϕ_bj(+;T_j, γ_j)ϕ_bj(;T_j, γ_j)) is the semivariogram component from each of the K weight functions. By definition, the function ν(,; Θ) is defined by ν(,; Θ) = 1/2Var(Y(+)-Y() |Θ) = 1/2(Var(Y(+) |Θ) + (Y() |Θ)) -Cov(Y(+)-Y(). The conditional variance simplifies as, Var(Y() |Θ) = Var(∑_l=1^K w_l()f_l() + ϵ() |Θ) = σ^2 + ∑_l=1^K Var( w_l()f_l() |Θ), where the K weights and K functions are all mutually independent, conditional on Θ. Each individual variance component then simplifies as Var(w_l()f_l() |Θ) = E[w_l()^2f_l()^2 |Θ] - E[w_l()f_l() |Θ]^2 = E[w_l()^2|Θ] E[f_l()^2] - E[w_l() |Θ]^2E[f_l()]^2 = (mτ^2 + 1/K^2)(R_l(,;_l) + f̅_l^2) - (f̅_l/K)^2. Similarly, the conditional variance of w_l(+)f_l(+) is expressed by Var(w_l(+)f_l(+) |Θ) = (mτ^2 + 1/K^2)(R_l(+,+;_l) + f̅_l^2) - (f̅_l/K)^2, for l=1,…,K. The conditional covariance can then be computed as Cov(Y(+), Y() |Θ) = Cov(∑_l=1^K w_l(+)f_l(+), ∑_t=1^K w_t()f_t() |Θ) =∑_l=1^K∑_t=1^KCov(w_l(+)f_l(+), w_t()f_t() |Θ). 
Due to conditional independence between the K weight functions and the K emulators, the covariance simplifies as Cov(Y(+), Y() |Θ) = ∑_l=1^KCov( w_l(+)f_l(+), w_l()f_l() |Θ) =∑_l=1^K E[w_l(+)f_l(+)w_l()f_l() |Θ] -E[w_l(+)f_l(+) |Θ]E[w_l()f_l() |Θ] =∑_l=1^K E[w_l(+)w_l()|Θ] E[f_l(+)f_l()] - (f̅_l/K)^2 = ∑_l=1^K (mτ^2ϕ̅(,h;Θ) + 1/K^2)(R_l(+,;_l) + f̅_l^2) - ∑_l=1^K (f̅_l/K)^2. where ϕ̅(,h;Θ) = 1/m∑_j = 1^m ∑_b = 1^B_jϕ_bj(+;T_j,γ_j)ϕ_bj(;T_j,γ_j). Using the conditional variances and covariance, ν(,; Θ) can be expressed as ν(,; Θ) = 1/2(σ^2 + ∑_l=1^K (R_l(+,+;_l) + f̅_l^2)(mτ^2 + 1/K^2) - (f̅_l/K)^2 ) + 1/2(σ^2 + ∑_l=1^K (R_l(,;_l) + f̅_l^2)(mτ^2 + 1/K^2) - (f̅_l/K)^2 ) - (∑_l=1^K (mτ^2ϕ̅(,h;Θ) + 1/K^2)(R_l(+,;_l) + f̅_l^2) - (f̅_l/K)^2) = σ^2 + 1/2(∑_l=1^K (R_l(+,+;_l))(mτ^2 + 1/K^2) + mτ^2f̅^2_l ) + 1/2(∑_l=1^K (R_l(,;_l))(mτ^2 + 1/K^2) + mτ^2f̅^2_l ) - (∑_l=1^K (mτ^2ϕ̅(,h;Θ) + 1/K^2)R_l(+,;_l) + mτ^2f̅_l^2ϕ̅(,h;Θ)) = σ^2 + 1/2(mτ^2 + 1/K^2)(∑_l=1^K (R_l(+,+;_l) + R_l(,;_l)) + mτ^2(1 - ϕ̅(,h;Θ))∑_l = 1^K f̅_l^2 - (mτ^2ϕ̅(,h;Θ) + 1/K^2)∑_l = 1^K R_l(+,;_l) ±(mτ^2 + 1/K^2) ∑_l=1^K R_l(+, ; _l), where we can add and subtract (mτ^2 + 1/K^2) ∑_l=1^K R_l(+, ;_l) to further simplify the expression. Now define the following functions for each emulator and each weight as ν^(f)_l(,;_l) = 1/2(R_l(,;_l) + R_l(+,+;_l) - 2R_l(+,;_l)) ν^(w)(, ;Θ) = mτ^2(1 - ϕ̅(,h;Θ)) In typical cases, R(+,+;_l) = R(,;_l) and thus ν^(f)_l(,h;_l) simplifies further. The terms in the expression can then be rearranged as follows ν(,; Θ) = σ^2 + (mτ^2 + 1/K^2)∑_l=1^K ν^(f)_l(,;_l) + ν^(w)(, ;Θ) (∑_l = 1^K R_l(+,;_l) + f̅_l^2). Theorem <ref> shows the function ν(,;Θ) combines information from the model set and the weight functions. More specifically, we see the expression combines the functions for the individual semivariogram from the K+1 different components. Using Theorem <ref>, we can compute the ν(,) in Definition <ref> by marginalizing over the set of m trees and bandwidth parameters. The resulting function is given by ν(, ) = σ^2 + ( 1/4k^2 + 1/K^2) ∑_l = 1^Kν_l^(f)(,h;_l) + (1/2k)^2(1 - E[∑_b=1^B_1ϕ_b1(+;T_1,γ_1)ϕ_b1(;T_1,γ_1)]) ×∑_l = 1^K (R_l(+,;_l) + f̅_l^2). §.§ Reproducible Examples §.§.§ Regression Example Consider the true data generating process Y() ∼ N( f_†(), σ^2 ) f_†() = sin(x_1) + cos(x_2) where ∈ [-π,π] × [-π,π]. Assume n = 100 different observations are generated from the true underlying process with σ = 0.1. The inputs _1,…,_100 are also randomly generated about the two-dimensional input space. We train the random path model using three different settings of the bandwidth prior to demonstrate the smoothing effect of the model. Each of the three models are trained using an ensemble of 20 trees and a value of k = 1. Figure <ref> displays the mean predictions from the RPBART model along with their associated hyperparameters of α_1 and α_2. Based on Figure <ref>, we see the model with less smoothing (i.e. α_1 = 2 and α_2 = 20) provides the most accurate predictions across the entire domain. Essentially, this model preserves the localization induced by the tree models but is able to smooth over the hard boundaries. With α_1 = 2 and α_2 = 5, we see the a similar smooth mean surface, however the prediction is slightly less accurate along the boundaries (particularly when x_1 is near -π). We note BART typically struggles along the boundaries when no or minimal data is present, so this occurrence is to be expected. 
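A minimal sketch of the data-generating step for this regression example is given below; the commented rpbart call is only a hypothetical placeholder for whichever RPBART implementation accompanies the paper, and the grid size is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_true(x):
    """True mean surface f_dagger(x) = sin(x1) + cos(x2)."""
    return np.sin(x[:, 0]) + np.cos(x[:, 1])

# n = 100 inputs drawn over [-pi, pi] x [-pi, pi], observation noise sd 0.1.
n, sigma = 100, 0.1
X_train = rng.uniform(-np.pi, np.pi, size=(n, 2))
y_train = f_true(X_train) + rng.normal(0.0, sigma, size=n)

# Dense grid used to visualize the posterior mean surface.
g = np.linspace(-np.pi, np.pi, 50)
X_grid = np.column_stack([v.ravel() for v in np.meshgrid(g, g)])

# An RPBART fit (m = 20 trees, k = 1, bandwidth prior alpha_1, alpha_2) would be
# run here with the accompanying implementation, e.g.:
# fit = rpbart(X_train, y_train, m=20, k=1, a1=2, a2=20)  # hypothetical API
print(X_train.shape, y_train.shape, X_grid.shape)
```

Re-running this setup with the three bandwidth-prior settings discussed above reproduces the smoothing comparison in the corresponding figure.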
Similar observations can be made as more smoothness is introduced into the model as shown in the bottom right panel of Figure <ref>, where α_1 = 10 and α_2 = 10. These results are to be expected as higher levels of smoothing will naturally result in a less localized prediction. Despite this, RPBART is able to consistently identify the main features of the underlying process, regardless the level of smoothing. In cases with more smoothing, the mean function will identify the general pattern of the data and attribute some of the granularity in the process to the observational error. §.§.§ Mixing Two GCMs This section outlines the results from a simplified example which can easily be reproduced on a laptop. Consider mixing the two outputs from CESM2 and CNRM over the Northwestern hemisphere in June 2014. For this example, a 20 tree model is trained using 300 training points. The weight functions are defined only in terms of latitude and longitude. The model is evaluated across a grid of 26,000 points which are evenly spaced over a grid with 0.50^∘× 0.50^∘ resolution. The observed system (left) and posterior mean predictions from RPBART-BMM (right) are shown in Figure <ref>. We see the mixed-prediction captures the major features within the data, but does not account for some of the granular features in regions of varying elevation. The mixed prediction does not recover these features because only 300 training points are used (compared to the roughly 11,000 points used in Chapter 4 for this region). Additionally, the RPBART-BMM model is only trained using latitude and longitude. Therefore, the model is unable to directly account for any existing discrepancy due to elevation. The resulting weight functions are shown in Figure <ref>. The fidelity of the approximation can be improved by using more training data, including additional simulators, or simply replacing CESM2 and CNRM with higher fidelity models.
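The mixed prediction itself is a simple post-processing of posterior weight draws and emulator outputs, ∑_l w_l(x) f̂_l(x). The sketch below illustrates the computation with made-up numbers for K = 2 GCMs; array shapes and values are assumptions for illustration only.

```python
import numpy as np

def mixed_prediction(weight_draws, emulator_outputs):
    """Posterior summaries of the mixed prediction sum_l w_l(x) f_hat_l(x).

    weight_draws:     array (n_draws, n_locations, K) of posterior weight samples
    emulator_outputs: array (n_locations, K) of f_hat_1(x), ..., f_hat_K(x)
    """
    draws = np.einsum("dnk,nk->dn", weight_draws, emulator_outputs)
    mean = draws.mean(axis=0)
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    return mean, lo, hi

# Toy example: K = 2 GCMs (e.g. CESM2 and CNRM) at 3 locations, 500 draws.
rng = np.random.default_rng(1)
F = np.array([[275.1, 276.4], [268.9, 270.2], [281.5, 280.7]])  # emulated T2M (K)
W = rng.normal(0.5, 0.1, size=(500, 3, 2))                      # posterior weight draws
mean, lo, hi = mixed_prediction(W, F)
print(mean, lo, hi)
```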
http://arxiv.org/abs/2407.13426v1
20240718115101
WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration
[ "Xinxing Cheng", "Xi Jia", "Wenqi Lu", "Qiufu Li", "Linlin Shen", "Alexander Krull", "Jinming Duan" ]
cs.CV
[ "cs.CV" ]
Wavelet-based Incremental Learning for Efficient Medical Image Registration X. Cheng et al. School of Computer Science, University of Birmingham, Birmingham, B15 2TT, UK j.duan@cs.bham.ac.uk Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, M15 6BH, UK National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, 518060, China WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration Xinxing Cheng1 Xi Jia1 Wenqi Lu2 Qiufu Li 3 Linlin Shen 3 Alexander Krull 1 Jinming Duan1^() ===================================================================================================== § ABSTRACT Deep image registration has demonstrated exceptional accuracy and fast inference. Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner. However, due to the cascaded nature and repeated composition/warping operations on feature maps, these methods negatively increase memory usage during training and testing. Moreover, such approaches lack explicit constraints on the learning process of small deformations at different scales, thus lacking explainability. In this study, we introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales, utilizing the wavelet coefficients derived from the original input image pair. By exploiting the properties of the wavelet transform, these estimated coefficients facilitate the seamless reconstruction of a full-resolution displacement/velocity field via our devised inverse discrete wavelet transform (IDWT) layer. This approach avoids the complexities of cascading networks or composition operations, making our WiNet an explainable and efficient competitor with other coarse-to-fine methods. Extensive experimental results from two 3D datasets show that our WiNet is accurate and GPU efficient. Code is available at <https://github.com/x-xc/WiNet>. § INTRODUCTION Deformable image registration is an essential step in many medical imaging applications <cit.>. Given a moving and fixed image pair, deformable registration estimates a dense non-linear deformation field that aligns the corresponding anatomical structures. Conventional methods such as LDDMM <cit.>, DARTEL <cit.>, SyN <cit.>, Demons <cit.> and ADMM <cit.> are time-consuming and computationally expensive, due to instance-level (pair-wise) iterative optimization. Nevertheless, such methods may involve sophisticated hyperparameter tuning, limiting their applications in large-scale volumetric registration. U-Net-based methods have recently dominated medical image registration due to their fast inference speed. Following the generalized framework of VoxelMorph <cit.>, some methods have been proposed by adding stronger constraints over the deformation field such as inverse/cycle consistency <cit.> and diffeomorphisms <cit.>. Another track of methods proposed more advanced neural blocks such as vision-transformer <cit.> to model long-range information but inevitably increases the computational cost, sacrificing the training and testing efficiency. To reduce the repeated convolution operations in U-Net architecture and fasten the speed, model-driven methods such as B-Spline <cit.> and Fourier-Net <cit.> have proposed to learn a low-dimensional representation of the displacement. 
However, Fourier-Net only learns the low-frequency components of the displacement and B-Spline interpolates the displacement from a set of regular control points, therefore deformation fields estimated by such methods lack local details, limiting its applications to large and complex registrations. Recent works <cit.> progressively estimate the large and complex deformations using either multiple cascades (where each cascade estimates a small decomposition of the final deformation) or pyramid coarse-to-fine compositions of multi-scale displacements within one network. Though the cascaded methods <cit.> show improved registration accuracy for estimating large deformation by composing small deformations, their computational costs increase exponentially with more cascades. Pyramid methods <cit.>, on the other hand, estimate multi-scale deformations and sequentially compose them to the final deformation. For instance, LapIRN <cit.> utilizes a Laplacian pyramid network to capture large deformations by composing three different scale flows. However, the optimal performance for LapIRN requires sophisticated iterative training of different scales, negatively affecting its training efficiency. PRNet++ <cit.> employs a dual-stream pyramid network for coarse-to-fine registration through sequential warping on multi-scale feature maps, while the adaptation of local 3D correlation layers massively increases memory usage and computational costs. ModeT <cit.> has introduced a motion decomposition transformer that utilizes neighborhood attention mechanisms to first estimate multi-head multi-scale deformations from two-stream hierarchical feature maps. It then employs weighting modules to fuse multi-head flows in each scale and generates the final deformation by composing the fused output from all scales. In this work, we propose a model-driven multi-scale registration network by embedding the discrete Wavelet transform (DWT) as prior knowledge, which we term WiNet. Compared to model-driven B-Spline <cit.> and Fourier-Net <cit.>, our WiNet learns a series of DWT coefficients that inherently preserve high-frequency local details. Unlike pyramid methods <cit.>, which rely solely on optimizing unsupervised training loss to estimate final deformation without explicitly constraining the learning process of small deformations at each scale, our proposed WiNet incrementally learns scale-wise DWT coefficients and naturally forms full-resolution deformation with a model-driven IDWT layer. Moreover, existing pyramid methods <cit.> require significant memory usage as they need to co-register multi-scale high-dimensional features, preventing their application on GPUs with less memory. In contrast, our network avoids compositions on feature maps and estimates only low-resolution DWT coefficients from low-resolution frequency components, making it memory-efficient. WiNet contains a convolutional encoder-decoder network and an incremental deformation learning module designed to explicitly learn distinct frequency coefficients at various scales. Specifically, the initial decoding layer simultaneously captures both low-frequency and high-frequency coefficients at the lowest scale. The remaining decoding layers and the incremental module jointly refine the high-frequency coefficients, leading to a novel and explainable registration framework. 
We summarize our contributions as: * We embed a differentiable DWT layer before the convolutional encoder, which empowers our network to operate on the low-resolution representation of images in various frequency bands. * We introduce an incremental module that features three levels of IDWT for coarse-to-fine deformation estimation. This process takes advantage of both multi-scale prediction and DWT properties, avoiding the conventional multiple compositions and enhancing flexibility and interpretability. * Extensive results on 3D brain and cardiac registration show that our WiNet can achieve comparable accuracy to state-of-the-art pyramid methods such as LapIRN <cit.>, PRNet++ <cit.>, and ModeT <cit.> while using only 31.9%, 25.6%, and 23.2% of their memory footprints (see Fig. <ref>), respectively. § METHODOLOGY As illustrated in Figure <ref>, WiNet takes a moving I_m and fixed I_f pair to predict the full-resolution deformation ϕ. The network comprises two parts: 1) in the encoder, the DWT layer first decomposes each 3D input image into eight components at various frequency sub-bands using the defined orthogonal wavelet filter, then the convolutional layers learn hierarchical feature representations from the decomposed components; 2) in the decoder, the convolutional layers learn deformation-related features at three scales. The incremental learning module takes the learned features from each decoding layer, progressively estimates scale-wise wavelet coefficients, and composes them into ϕ. §.§ DWT embedded Encoder Our encoder includes a parameter-free differentiable DWT layer and four convolutional contracting layers. The DWT layer decomposes an image 𝐈∈ℝ^D × H× W by applying the low-pass 𝐋 and high-pass 𝐇 filters of an orthogonal wavelet transform along the H, W, and D dimensions, respectively. Therefore, 𝐈 can be decomposed into one low-frequency component (𝐈_lll) and seven high-frequency components (𝐈_llh, 𝐈_lhl, 𝐈_lhh, 𝐈_hll, 𝐈_hlh, 𝐈_hhl, 𝐈_hhh): 𝐈_lll = 𝐋𝐋𝐋𝐈, 𝐈_llh = 𝐋𝐋𝐇𝐈, 𝐈_lhl = 𝐋𝐇𝐋𝐈, 𝐈_lhh = 𝐋𝐇𝐇𝐈, 𝐈_hll = 𝐇𝐋𝐋𝐈, 𝐈_hlh = 𝐇𝐋𝐇𝐈, 𝐈_hhl = 𝐇𝐇𝐋𝐈, 𝐈_hhh = 𝐇𝐇𝐇𝐈, where the three successive products denote matrix multiplication along the H, W, and D dimensions, respectively; 𝐋 and 𝐇 are finite wavelet filter matrices truncated to half-size along the dimension under filtering, e.g., ⌊H/2⌋× H for the H dimension,
𝐋 = ([ ⋱ ; ⋯ l_-1 l_0 l_1 ⋯ ; ⋯ l_-1 l_0 l_1 ⋯ ; ⋱ ]),
𝐇 = ([ ⋱ ; ⋯ h_-1 h_0 h_1 ⋯ ; ⋯ h_-1 h_0 h_1 ⋯ ; ⋱ ]),
where each successive row shifts the filter taps so that filtering and dyadic down-sampling are performed in a single matrix multiplication. The values of 𝐋 and 𝐇 may vary based on the chosen orthogonal wavelet transform <cit.>, while in this work we only experiment with Haar and Daubechies. As shown in the gray dashed box in Fig. <ref>, the DWT layer decomposes each of the full-resolution I_m and I_f into eight frequency components, which we then concatenate into a [16, D/2, H/2, W/2] tensor as the input to the following convolutional layers. The resolution of feature maps in each contracting convolution layer is [C, D/2, H/2, W/2], [C, D/4, H/4, W/4], [2C, D/8, H/8, W/8], and [2C, D/16, H/16, W/16], respectively. The benefits of the embedded DWT layer in our encoder are twofold. First, the various decomposed frequency sub-bands potentially improve the network's ability to capture both fine and coarse details. Second, it naturally halves the spatial resolution of I_m and I_f without sacrificing information, saving computational costs for subsequent convolutional layers.
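To make the decomposition concrete, the sketch below implements a one-level 3D Haar DWT as a strided convolution in PyTorch. The filter normalization and sign conventions are one common choice and may differ from the exact layer used in WiNet; the tensor sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def haar_dwt3d(x):
    """One-level 3D Haar DWT: (B, 1, D, H, W) -> (B, 8, D/2, H/2, W/2).
    Each output channel is one of the LLL ... HHH sub-bands."""
    lo = torch.tensor([1.0, 1.0]) / 2.0 ** 0.5
    hi = torch.tensor([1.0, -1.0]) / 2.0 ** 0.5
    banks = []
    for fd in (lo, hi):
        for fh in (lo, hi):
            for fw in (lo, hi):
                banks.append(torch.einsum("d,h,w->dhw", fd, fh, fw))
    weight = torch.stack(banks).unsqueeze(1)          # (8, 1, 2, 2, 2)
    return F.conv3d(x, weight.to(x.dtype), stride=2)  # subsample by 2 in D, H, W

# Moving and fixed volumes are each decomposed into 8 sub-bands and concatenated
# into a 16-channel, half-resolution input for the encoder.
I_m = torch.randn(1, 1, 32, 32, 32)
I_f = torch.randn(1, 1, 32, 32, 32)
enc_in = torch.cat([haar_dwt3d(I_m), haar_dwt3d(I_f)], dim=1)
print(enc_in.shape)  # torch.Size([1, 16, 16, 16, 16])
```

Because the transform is expressed as a convolution, gradients flow through it during training, consistent with the differentiable DWT/IDWT layers described above.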
We highlight that the DWT layer is not a pre-processing step, we made it a differentiable layer that allows back-propagation of gradients, and so is the IDWT layer. §.§ Incremental Deformation Learning Module & Decoder The decoder consists of three expansive convolutional layers and an incremental deformation learning module. First, the convolutional decoding layers learn three different feature maps F_1, F_2 and F_3 with three different scales [2C, D/8, H/8, W/8], [2C, D/4, H/4, W/4] and [C, D/2, H/2, W/2], respectively. The incremental learning module will learn for each scale eight different frequency sub-bands wavelet coefficients, which are denoted as Φ^i_lll, Φ^i_llh,Φ^i_lhl,Φ^i_lhh,Φ^i_hll,Φ^i_hlh, Φ^i_hhl,Φ^i_hhh, where i = 1, 2, 3 represents the 1/8, 1/4 and 1/2 scale, respectively. Specifically, Φ^i=1_lll is the low-frequency coefficient, while the rest seven coefficients, denoted as Φ^i, contain high-frequency details. As depicted in Figs. <ref> and  <ref>, the Conv-0 in our incremental module initially estimates eight coefficients in the resolution of [3, D/8, H/8, W/8] from feature F_1 to capture a coarse deformation. Then, the low-frequency Φ^2_lll is produced via the IDWT layer: Φ^2_lll = IDWT(Φ^1) =IDWT([Φ_lll^1,Φ^1]), where the IDWT operation, according to Eqs. (<ref>) and (<ref>), can be denoted as: Φ_lll^2 = 𝐋𝐋 (𝐋Φ^1_l l l + 𝐇Φ^1_h l l) + 𝐋𝐇(𝐋Φ^1_l l h + 𝐇Φ^1_h l h) + 𝐇𝐋 (𝐋Φ^1_l h l + 𝐇Φ^1_hhl) + 𝐇𝐇 ( 𝐋Φ^1_l h h + 𝐇Φ^1_h h h) . Similarly, Φ^3_lll can be produced by performing IDWT with Φ^2_lll and the rest seven [3, D/4, H/4, W/4] high-frequency coefficients, i.e, Φ^3_lll =IDWT([Φ_lll^2,Φ^2]), then the final ϕ can be obtained as ϕ = IDWT([Φ_lll^3, Φ^3]).The primary challenge here is how can we learn the seven high-frequency Φ^2 and Φ^3 using Φ^1? To address the challenge, we propose two refinement blocks (RB-1 and 2). As in Fig. <ref>, RB-1 incrementally refines Φ^1 to Φ^2 and RB-2 refines Φ^2 to Φ^3. Specifically, 1) the up-sampling operation in RB-1 (the yellow trapezoid) first converts the seven [3, D/8, H/8, W/8] coefficients of Φ^1 into the resolution of [3, D/4, H/4, W/4]; 2) the conv layer (red arrow) takes learned feature F_2 and estimates also seven [3, D/4, H/4, W/4] coefficients, which serve as the complementary residuals to the upsampled coefficients; 3) then both the upsampled and learned coefficients are concatenated together in the frequency order; 4) and a group convolution (group=7, illustrated as the red dashed arrows) is used to output the refined Φ^2. The process can be denoted as: Φ^2 = GC(Cat[Conv(F_2), UP(Φ^1)]) where UP, Conv, Cat, and GC denote upsampling, convolution, concatenation by frequency order, and group convolution, respectively. Likewise, we estimate Φ^3 from Φ^2 and F_3 with RB-2. Following the same procedure, the full-resolution deformation is produced by ϕ = IDWT([Φ_lll^3, Φ^3]). §.§ Loss Functions For diffeomorphic registration (WiNet-Diff), the output of WiNet-Diff is the stationary velocity fields 𝐯 and we employ seven scaling and squaring layers <cit.> to integrate the deformation field ϕ = exp(𝐯). The training loss of our network is ℒ(θ)= ℒ_Sim(I_m ∘(ϕ(θ)+Id), I_f) +λ∇ϕ(θ)_2^2 or ℒ(θ)= ℒ_Sim(I_m ∘exp(𝐯(θ)),I_f) +λ∇𝐯(θ)_2^2, where θ represents the learnable network parameters, ∘ refers the warping operator, Id is the identity grid, ∇ is the first order gradient and λ is the weight of the regularization term.The term ℒ_sim(·) quantifies the similarity between the warped moving image I_m ∘ϕ and the fixed image I_f. 
The second term are the smoothness regularization on ϕ. § EXPERIMENTS Datasets. One 3D brain MRI dataset (IXI) and one 3D cardiac MRI dataset (3D-CMR) were utilized. The pre-processed IXI[https://brain-development.org/ixi-dataset/ https://brain-development.org/ixi-dataset/] dataset <cit.> comprises 576 MRI scans (160×192×224) from healthy subjects. We followed the same protocol with partitions 403, 58, and 115 for the training, validation, and testing sets, respectively. Atlas-based brain registration was conducted for each scan using a generated atlas from <cit.>. The 3D-CMR <cit.> contains 220 pairs of End-diastolic (ED) and End-systolic (ES) scans. All MRI scans were resampled to the same resolution 1.2 × 1.2 × 1.2 mm^3 and center-cropped to dimensions of 128×128×96. We randomly split it into 100 training, 20 validation, and 100 testing sets. Evaluation Metrics. We use the Dice score, Hausdorff distance (HD), and the percentage of negative values of the Jacobian determinant (|J|_ < 0%) to evaluate the registration accuracy. We use the number of parameters, GPU training memory in Mebibytes (MiB), and the average GPU inference time in seconds (s) to measure model efficiency. Implementation Details. Our method is implemented using PyTorch. Both the training and testing phases are deployed on an A100 GPU with 40GB VRAM. All models are optimized using Adam with a learning rate of 1 × 10^-4 and batch size of 1 for 1000 epochs. On IXI, we employed the Normalized Cross-Correlation (NCC) loss with λ =2. For 3D-CMR, we employed the Mean Squared Error (MSE) loss with λ =0.01. We used C=32 for all experiments. We used Haar and Daubechies wavelet transform for the IXI and 3D-CMR datasets. Comparative Methods. We compare our method with a series of state-of-the-art, including conventional SyN <cit.>, U-Net-based methods (VoxelMorph (VM) <cit.>, TransMorph (TM) <cit.>), model-driven methods (B-Spline-Diff <cit.>, Fourier-Net <cit.>), and three pyramid methods (LapIRN <cit.>, PR++ <cit.>, ModeT <cit.>). All methods are trained with the official released code and the optimal parameters are tuned on the validation sets. Quantitative and Qualitative Analysis. Table <ref> shows the numerical results of compared methods on the IXI and 3D-CMR datasets. It can be seen that our method outperforms all compared methods in terms of Dice. WiNet outperforms the iterative SyN with margins of 12% and 11% Dice on IXI and 3D-CMR, respectively. WiNet surpasses model-driven approaches, achieving improvements of 0.2% (Fourier-Net) and 0.5% (Fourier-Net-Diff) in Dice for the IXI and 3D-CMR and securing the second-best in terms of HD. Compared with pyramid methods, our method achieves the increase of Dice by at least 0.5% and 5.8% on IXI and 3D-CMR. Notably, Fig. <ref> clearly illustrates that WiNet requires less GPU memory than pyramid methods in training, Our method exhibits efficiency, consuming only 31.9%, 25.6%, and 23.2% of the memory compared to LapIRN, PRNet++, and ModeT, respectively, while presenting faster speed than all three of them. Moreover, our WiNet-Diff can achieve 0% and 0.007% |J|_<0% on IXI and 3D-CMR, respectively, resulting in a plausible deformation field compared to our WiNet, as assessed in Fig. <ref>. Additionally, we can observe that the estimated deformation fields from model-driven methods including B-Spline-Diff, Fourier-Net, and our WiNet tend to be smoother than U-Net-based methods (TM) and pyramid methods. 
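For completeness, one common way to compute the |J|_<0 % metric reported above from a dense displacement field is sketched below; boundary handling and finite-difference conventions may differ from the implementation used in the paper, and the random field is only for illustration.

```python
import numpy as np

def neg_jacobian_fraction(disp):
    """Percentage of voxels with negative Jacobian determinant for a displacement
    field disp of shape (3, D, H, W), with deformation phi(x) = x + u(x)."""
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]  # du_i/dx_j
    J = np.stack([np.stack(g, axis=0) for g in grads], axis=0)        # (3, 3, D, H, W)
    J = J + np.eye(3)[:, :, None, None, None]                         # d(x + u)/dx
    det = np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))             # (D, H, W)
    return 100.0 * np.mean(det < 0)

disp = 0.1 * np.random.default_rng(2).standard_normal((3, 16, 16, 16))
print(neg_jacobian_fraction(disp))
```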
§ CONCLUSION In this paper, we propose a model-driven WiNet to perform coarse-to-fine 3D medical image registration by estimating various frequency wavelet coefficients at different scales. Inheriting the properties of the wavelet transform, WiNet learns the low-frequency coefficients explicitly and high-frequency coefficients incrementally, making it explainable, accurate, and GPU efficient. §.§.§ The research were performed using the Baskerville Tier 2 HPC service. Baskerville was funded by the EPSRC and UKRI through the World Class Labs scheme (EP/T022221/1) and the Digital Research Infrastructure programme (EP/W032244/1) and is operated by Advanced Research Computing at the University of Birmingham. §.§.§ The authors have no competing interests to declare that are relevant to the content of this article. splncs04
http://arxiv.org/abs/2407.12584v1
20240717140957
Edge Optical Effect as Probe of Chiral Topological Superconductor
[ "Linghao Huang", "Jing Wang" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
State Key Laboratory of Surface Physics and Department of Physics, Fudan University, Shanghai 200433, China Shanghai Research Center for Quantum Sciences, Shanghai 201315, China wjingphys@fudan.edu.cn State Key Laboratory of Surface Physics and Department of Physics, Fudan University, Shanghai 200433, China Shanghai Research Center for Quantum Sciences, Shanghai 201315, China Institute for Nanoelectronic Devices and Quantum Computing, Fudan University, Shanghai 200433, China Hefei National Laboratory, Hefei 230088, China

§ ABSTRACT

We study the optical effect of chiral topological superconductors in two dimensions. The linear optical response from chiral Bogoliubov edge modes in clean superconductors has in-gap resonances, which originate from transitions within the edge particle-hole pair bands. Interestingly, the number of resonance peaks is determined by the Bogoliubov-de Gennes Chern number of the topological superconductor. Such features in optical absorption offer a way to distinguish topological superconductors with different Chern numbers. We further demonstrate that the linear optical effect could probe different topological phases in quantum anomalous Hall insulator-superconductor junction devices. This finding provides an applicable method to detect chiral Bogoliubov edge states and is distinguishable from collective modes in superconductors and possible trivial explanations.

Edge Optical Effect as Probe of Chiral Topological Superconductor Jing Wang July 22, 2024
=================================================================

The chiral Bogoliubov edge modes are hybridized electron and hole states which propagate unidirectionally along the edge of two-dimensional topological matter <cit.>. They have been attracting growing interest, owing to their potential applications in quantum computation <cit.>. A paradigmatic example hosting chiral Bogoliubov edge modes is the chiral topological superconductor (TSC), which can be realized in quantum Hall (QH) or quantum anomalous Hall (QAH) <cit.> states in proximity with an s-wave superconductor (SC) <cit.>. Prominent theoretical <cit.> and experimental <cit.> efforts have been made towards realizing coherent chiral Bogoliubov edge modes, which allow coherent Andreev interference with ubiquitous transport signatures and offer possibilities for their coherent manipulation. However, the transport signature of chiral Bogoliubov edge modes remains elusive in experiments <cit.>, due to the presence of magnetic vortices, thermal fluctuations, etc. The propagating Bogoliubov edge modes give rise to a flat density of states, which may be probed by scanning tunnelling spectroscopy <cit.>. Therefore, it is highly desired to explore additional spectroscopic signatures of chiral Bogoliubov edge modes and TSC. In this Letter, we study the optical effect of chiral TSC in two dimensions. We show that in the clean limit, where the normal-state mean free path ℓ greatly exceeds the coherence length ξ of Cooper pairs, the linear optical response generically has in-gap resonances originating from transitions within the particle-hole pair bands of chiral Bogoliubov edge modes [see Fig. <ref>(a)]. Interestingly, the number of resonance peaks is related to the topological invariant of the chiral TSC, which in turn serves as a spectroscopy method to probe TSC with different Chern numbers. The chiral Bogoliubov edge modes are on average charge neutral, namely their electron and hole components have equal weight only on average.
Hence if the two states involved in the optical transition have different charge distributions, a dipole oscillation is generated, enabling the response to the electromagnetic field. Quite different from the previous study on the local optical response of chiral edge modes in TSC <cit.>, which requires a highly focused light beam on the edges. We consider here the low frequency optical conductivity and dielectric function under uniform illumination, which is feasible in experiments. Edge theory of optical response. We consider the uniform and normal incidence condition, where the momentum of photon is neglected. In the clean limit, only vertical optical transition is allowed. In this scenario, the general properties of the optical response of SC have been analyzed in Ref. <cit.>. For a single-band SC, if the normal state dispersion satisfies ϵ(𝐤)=ϵ(-𝐤), which can be guaranteed by inversion symmetry or time-reversal symmetry, the current operator is zero and thus optical excitation is absent, due to ℭ symmetry, which combines particle-hole conjugation and a momentum-reversing unitary operation. An intuitive understanding of the absence of optical response is that when a Cooper pair, with zero total momentum, is broken by light, the two electrons with opposite momentum will have the same velocity but in opposite directions, resulting in zero net current <cit.>. Since the current operators of the edge modes are obtained by projecting the bulk current operators to the edge states, the multiband condition is indispensable for a nonzero optical response of Bogoliubov edge modes. Chiral TSC from superconducting proximity of QAH state naturally fullfils this condition, since a QAH insulator is naturally a multiband system because the band inversion always involve at least two bands with opposite parities. Without loss of generality, we will consider the optical effect of chiral TSC from superconducting proximity of QAH state for concreteness. We first study the optical effect from the effective edge theory for the simplest case, where the QAH state has Chern number C=1. When the superconducting proximity gap is smaller than the the insulating bulk gap of the QAH state, the chiral TSC will be in the same topological phase as the QAH state of Chern number C with charge U(1) symmetry broken, which is a superconductor with Bogoliubov-de Gennes (BdG) Chern number N=2C=2 <cit.>. The effective edge Hamiltonian can be described as <cit.> ℋ_edge=∑_k>0^k_0(c^†_k,c_-k) [ vk-μ -Δ; -Δ vk+μ; ][ c_k; c^†_-k; ], where c_k annihilates one electron with momentum k, v is the velocity of the edge mode, μ is the chemical potential, Δ is the pairing gap function for the edge mode, k_0 is the momentum cutoff. We can decompose the electron operators into Majorana operators: γ_1,k=(c^†_-k+c_k)/√(2), γ_2,k=i(c^†_-k-c_k)/√(2), then the edge Hamiltonian becomes: ℋ_edge=∑_k>0^k_0(γ^†_1,k,γ^†_2,k) [ vk-Δ -iμ; iμ vk+Δ; ][ γ_1,k; γ_2,k; ]. The edge spectrum is E_1,2(k)=vk±√(Δ^2+μ^2). The single chiral fermion mode of QAH state now splits into a particle-hole conjugate pair of chiral Bogoliubov mode of chiral TSC, satisfying E_1(k)=-E_2(-k). They are also called chiral Andreev edge modes <cit.>. Generically, these two modes have a momentum difference as shown in Fig. <ref>(a). 
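A quick numerical check of the edge spectrum and the resonance condition can be made by diagonalizing the 2×2 Majorana-basis Hamiltonian above; the parameter values below are illustrative, not taken from a specific material.

```python
import numpy as np

def edge_spectrum(k, v=1.0, mu=0.3, delta=0.4):
    """Eigenvalues of the 2x2 chiral-edge BdG block in the Majorana basis,
    H(k) = [[v k - Delta, -i mu], [i mu, v k + Delta]]."""
    H = np.array([[v * k - delta, -1j * mu],
                  [1j * mu,        v * k + delta]])
    return np.sort(np.linalg.eigvalsh(H))

for k in np.linspace(0.0, 1.0, 5):
    e1, e2 = edge_spectrum(k)
    print(f"k={k:.2f}  E={e1:+.3f}, {e2:+.3f}  splitting={e2 - e1:.3f}")

# The vertical optical transition connects the conjugate pair at the same k,
# so the in-gap resonance sits at omega = 2*sqrt(Delta^2 + mu^2).
print("resonance frequency:", 2 * np.sqrt(0.4**2 + 0.3**2))  # = 1.0 here
```

The printed splitting is k-independent and equals 2√(Δ²+μ²), consistent with the single in-gap resonance in Eq. (below) for the N=2 case.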
Keeping up to the first order of spatial derivative, the current operator <cit.> along the direction of the boundary only takes the form as j=∑_k>0^k_0(akτ_0+bτ_2), where a and b are real denoting the intrabranch and interbranch contribution, respectively, τ_i is the Pauli matrix in the basis of Eq. (<ref>). In the clean limit, we show that the optical absorption, namely the real part of the linear optical conductivity contains a single resonant frequency as <cit.>, Re[σ(ω)]∝1/ωδ(ω-2√(Δ^2+μ^2)). Here the response current is along the direction of edge modes. Such photoexcitation of chiral Bogoliubov edge modes can also be understood intuitively as follows: since E_1(k)=-E_2(-k), when a Cooper pair is broken by light into two Bogoliubov edge modes, they are charged and have the same velocity, leading to a finite net current. Generically the resonance frequency from two conjugate Bogoliubov edge modes lies below the bulk quasi-particle excitation frequency, thus the typical linear optical effect of the chiral TSC is shown in Fig. <ref>(c). Next we consider general cases with N>2. For TSC with the BdG Chern number N=2C (C>1), there will be C particle-hole conjugate pairs of chiral Bogoliubov edge modes <cit.>. In general, these C pairs of edge modes have different velocity and distinctly separated in momentum space, such as N=4 shown in Fig. <ref>(d). In this case, only the intra-pair transition exhibits a resonance response, while the inter-pair transition does not have significant features in the optical spectrum. As a result, the whole optical spectrum should contain C in-gap resonance peaks as illustrated in Fig. <ref>(f) and calculated in Supplemental Material <cit.> for N=4. For TSC with N=2C+1, there is one additional self-conjugate chiral Bogoliubov edge mode, also called chiral Majorana edge mode with dispersion E(k)=v'k <cit.>. Generically, the velocity v' of this self-conjugate mode is different from other C pairs, then the optical transition between it and other modes does not show resonance peaks, thus there are also C in-gap resonance peaks as schematically shown in Fig. <ref>(c) and Fig. <ref>(f) for N=3 and N=5, respectively. In summary, TSC with N=2C and N=2C+1 show similar edge optical responses, while TSC with different C are expected to give rise to different edge responses. The different in-gap optical resonance structure could serve as a spectroscopy method to probe TSC with different Chern numbers, which distinguish integral part of N/2, namely [N/2]. Bulk model calculation. Now we turn to optical response calculation of the bulk model. From the above analysis, chiral TSC with N=1 and N=2 exhibit different optical response features. As an example, we consider superconducting proximity of QAH state in a magnetic topological insulator thin film with ferromagnetic order <cit.>. The effective Hamiltonian for QAH state is ℋ_0=∑_𝐤ψ^†_𝐤H_0(𝐤)ψ_𝐤, with ψ_𝐤=(c^t_𝐤↑, c^t_𝐤↓,c^b_𝐤↑, c^b_𝐤↓)^T and H_0(𝐤)=k_yσ_xτ_z-k_xσ_yτ_z+m(k)τ_x+λσ_z. Here the superscripts t and b denote the top and bottom surface states, respectively. σ_i and τ_i are Pauli matrices for spin and layer, respectively. λ is the exchange field. m(𝐤)=m_0+m_1(k_x^2+k_y^2) is the hybridization between the top and bottom surface states. ℋ_0 well describes the QAH state in Cr-doped (Bi,Sb)_2Te_3 <cit.> and odd layer MnBi_2Te_4 <cit.>. 
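For reference, the 4×4 Bloch Hamiltonian H_0(𝐤) can be assembled from Kronecker products as sketched below, assuming the basis ordering (t↑, t↓, b↑, b↓) with the layer (τ) index outermost; the parameter values are illustrative assumptions.

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def H0(kx, ky, m0=-0.2, m1=1.0, lam=0.3):
    """4x4 QAH Bloch Hamiltonian k_y s_x t_z - k_x s_y t_z + m(k) t_x + lam s_z,
    written as kron(layer Pauli, spin Pauli) for the basis (t_up, t_dn, b_up, b_dn)."""
    m_k = m0 + m1 * (kx**2 + ky**2)
    return (ky * np.kron(sz, sx) - kx * np.kron(sz, sy)
            + m_k * np.kron(sx, s0) + lam * np.kron(s0, sz))

# Band edges at the Gamma point are +/-(lam + m0) and +/-(lam - m0) for this model;
# the QAH phase requires the exchange field to exceed |m_0|.
print(np.round(np.linalg.eigvalsh(H0(0.0, 0.0)), 3))
```

Doubling this block into the BdG form of Eq. (above) with the pairing matrix Δ(𝐤) and diagonalizing on a strip geometry yields the edge and bulk spectra used in the conductivity calculation.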
The BdG Hamiltonian for chiral TSC is ℋ_bulk=∑_𝐤Ψ^†_𝐤H_bulk(𝐤)Ψ_𝐤/2, with Ψ_𝐤=[(c^t_𝐤↑, c^t_𝐤↓, c^b_𝐤↑, c^b_𝐤↓), (c^t†_-𝐤↑, c^t†_-𝐤↓, c^b†_-𝐤↑, c^b†_-𝐤↓)]^T and H_bulk(𝐤) = [ H_0(𝐤)-μ Δ(𝐤); Δ(𝐤)^† -H_0^*(-𝐤)+μ ], Δ(𝐤) = [ iΔ_tσ_y 0; 0 iΔ_bσ_y ]. Here μ is the chemical potential, Δ_t and Δ_b are pairing gap functions on top and bottom surface states, respectively. The topological properties of this system were well studied in Ref. <cit.>, revealing three TSC phases with BdG Chern number N=0,1,2. The optical conductivity is calculated using the Kubo formula: σ_αβ(ω)= -i e^2/ħ∑_k_x,m,nf_mn(k_x)/ϵ_mn(k_x)⟨ j_α⟩_mn(k_x) ⟨ j_β⟩_nm(k_x)/ϵ_mn(k_x)+ω+iη, where η is a infinitesimal positive parameter, f_mn(k_x)≡ f_m(k_x)-f_n(k_x), f_n(k_x) is the Fermi-Dirac function of the eigenstate |m(k_x)⟩, ϵ_m(k_x) is the corresponding eigenenergy, ϵ_mn(k_x)≡ϵ_m(k_x)-ϵ_n(k_x), α,β=x,y, ⟨ j_α⟩_mn(k_x)≡⟨ m(k_x)|j_α(k_x)|n(k_x) ⟩ and j_α(k_x) is the velocity operator defined as: j_α (k_x)=- [ ∂ H_0(k_x)/∂ k_α 0; 0 ∂ H^*_0(-k_x)/∂ k_α ]. In the configuration with periodic/open boundary condition along x/y-direction, the edge optical response leads to a resonance peak to Re[σ_xx(ω)] but does not contribute to Re[σ_yy(ω)], since the chiral Bogoliubov edge modes propagate along x-direction. The numerical results are shown in Fig. <ref>. First, we note that the excitation of the lowest Bogoliubov bulk band is absent, and the threshold frequency corresponds to the transition from the lowest band to the second lowest band <cit.>. This can be explained by the ℭ symmetry, which is the combination of particle-hole conjugation and twofold rotation C_2z. The bulk Hamiltonian satisfies ℭH_bulk(𝐤) ℭ^-1=-H_bulk(𝐤), and ℭ^2=-1. As proved in Ref. <cit.>, ⟨ℭ· n(𝐤)|j_α(𝐤)|n(𝐤)⟩=0, prohibiting the lowest-energy excitations. While this symmetry is explicitly broken by the open boundary, the optical transition between chiral Bogoliubov edge modes can give rise to finite conductivity. Additionally, the transition between edge modes and bulk bands can also contribute to optical conductivity, whose signal is close to the bulk excitation signal and does not resonate. Finally, we analyze the resonance feature of the spectrum. For the N=1 phase, there is only one chiral Majorana edge mode, which can only transit into the bulk band without any in-gap resonance peak. For the N=2 phase, there is a pair of chiral Andreev edge modes, which gives rise to one in-gap resonance peak. For the N=0 phase, no chiral Bogoliubov edge mode exist and hence no in-gap resonance peak in the optical absorption. Therefore, the absence or presence of the in-gap resonance peak is the main difference in the linear optical response between N=0, 1 and N=2 phases. This is consistent with the above edge theory that the linear optical spectroscopy can only distinguish chiral TSC with the different integral part of N/2. QAH-SC junction device. The optical response of N=0 and N=1 chiral TSC show qualitatively similar feature without in-gap resonance peaks, however, we propose that the optical absorption could distinguish them in a junction device. To illustrate this, we consider a QAH-SC-QAH heterojunction shown in Fig. <ref>(a)-<ref>(c), which was previously proposed as a non-Abelian gate to study the chiral Majorana backscattering <cit.>. The TSC phase can be realized by inducing superconducting proximity in the middle region of QAH insulator. The schematics of edge modes configuration in this junction are shown in Fig. 
<ref>, where the different TSC phases can be achieved by tuning external magnetic or electric field. The chiral TSC in Fig. <ref>(b) has N=1, then each edge between the chiral TSC and the vacuum or QAH hosts a chiral Majorana edge mode denoted as dahsed line. Then we study the optical effect in such junctions, where the optical field illuminates the whole superconducting region. The edge modes are localized along both x- and y-directions, generating optical responses in σ_xx and σ_yy, respectively. σ_yy is calculated in a cylindrical geometry that is translationally invariant in the y-direction while forming a junction of C=1 QAH insulator and TSC in the x-direction <cit.>. While σ_xx is calculated by setting open/periodic boundary conditions in the y/x-direction. For comparison, a C=1 QAH insulator cannot produce any in-gap resonance peaks <cit.>. The numerical results of Re[σ_xx(ω)] and Re[σ_yy(ω)] for three cases in Fig. <ref>(a)-<ref>(c) are shown in Fig. <ref>(d)-<ref>(f), respectively. The N=0 phase has one in-gap resonance peak only in Re[σ_yy(ω)], which is originated from the chiral Andreev edge modes at the interface between C=1 QAH and N=0 SC. While the chiral Andreev edge modes between N=2 phase and vacuum contribute to the in-gap resonance peak only in Re[σ_xx(ω)]. The N=1 phase does not show a resonance peak in Fig. <ref>(e). Therefore, the optical effect provide an applicable method to detect the chiral Bogoliubov edge modes configuration and hence the topological phases of the junction devices. Experimental feasibility. We discuss the experimental feasibility of the proposed effect. Firstly, we estimate the in-gap resonant frequency. A typical proximity induced superconducting gap is on the order of 0.1 meV <cit.>, and the energy separation of the chiral Bogoliubov edge modes is about one order of magnitude smaller, namely 0.01 meV. This energy corresponds to a frequency f≈2.4 GHz, which is in the microwave regime. Secondly, we have only considered the ℓ≫ξ limit. When electrons are depaired by the electromagnetic field during a characteristic time scale ℓ/v_F, where v_F is the Fermi velocity of electrons in the normal state, they are nearly not scattered by disorder within a characteristic time scale ξ/v_F, and their momenta are conserved <cit.>. MnBi_2Te_4 may fullfil this condition, where the mobility is about 2×10^3 cm^2/V·s, Fermi velocity is 5.5×10^5 m/s, then ℓ∼1 μm <cit.>. The superconducting coherent length is of about 0.1 μm <cit.>. The strong disorder scattering will change the momentum of electrons, so the optical transition would not be restricted to vertical transitions. Such processes will smear the in-gap resonance peak. Discussion. We discuss the temperature dependence of the optical absorption of chiral Bogoliubov edge modes, where only f_mn(k_x) in Eq. (<ref>) has temperature dependence. This term is maximized when one mode is fully occupied and the other is completely unoccupied, corresponding to the zero-temperature limit. As the temperature increases (but still ≪ T_c), the difference in Fermi distribution functions decreases, leading to a reduction and broadening of the resonance peak <cit.>. Besides the quasiparticle excitation of bulk or edge states, various collective modes may contribute to the optical absorption, such as phase modes <cit.>, amplitude modes <cit.>, etc. However, the collective modes usually do not have linear coupling to the electromagnetic field, and hence do not exhibit a linear response. 
Furthermore, some types of collective modes only exist in SC with richly structured order parameters, which is not the case considered here. Therefore, the in-gap resonance in the linear optical spectroscopy only involves the quasiparticle excitation and is attributed to the transition between chiral Bogoliubov edge modes. Finally, we discuss the QAH-SC junction device with ineffective SC proximity. One possible scenario is that the whole device is equivalent to a C=1 QAH system, where the optical response has no in-gap resonance <cit.>. An alternative scenario is that the middle region becomes non-superconducting metallic puddles due to inhomogeneity. The metallic puddles are gapless in the bulk, resulting in a strong signal in Re[σ(ω)]. This signal is Drude-like, peaking at zero frequency and decaying as a power law with increasing frequency, which is quite different from the chiral Bogoliubov edge modes with a finite-frequency resonance peak. The results can be directly extended to the QH state with SC proximity. The QH edge states are degenerate, thus there is no finite-frequency optical resonance between them. Moreover, the energy difference between Landau levels is typically much larger than the superconducting gap, so their resonance frequency cannot be in-gap. In addition, an SC has a divergent imaginary conductivity Im[σ(ω)] at low frequencies, scaling as 1/ω, reflecting the Meissner effect <cit.>, while a metal or a dielectric medium has vanishing imaginary conductivity as ω→ 0. Therefore, these phases exhibit different spectral behaviors and can be distinguished. In summary, the optical response within the particle-hole conjugate Bogoliubov edge modes leads to a resonance peak below the bulk quasiparticle excitation frequency, which provides a new, applicable method to detect chiral Bogoliubov edge states and is distinguishable from collective modes in SC and possible trivial explanations. Optical spectroscopy thus offers a valuable toolbox to explore novel SC.

Acknowledgment. This work is supported by the National Key Research Program of China under Grant No. 2019YFA0308404, the Natural Science Foundation of China through Grants No. 12350404 and No. 12174066, the Innovation Program for Quantum Science and Technology through Grant No. 2021ZD0302600, the Science and Technology Commission of Shanghai Municipality under Grants No. 23JC1400600 and No. 2019SHZDZX01.

[Moore and Read (1991)] G. Moore and N. Read, Nonabelions in the fractional quantum Hall effect, Nucl. Phys. B 360, 362 (1991). https://doi.org/10.1016/0550-3213(91)90407-O

[Read and Green (2000)] N. Read and D. Green, Paired states of fermions in two dimensions with breaking of parity and time-reversal symmetries and the fractional quantum Hall effect, Phys. Rev. B 61, 10267 (2000). https://doi.org/10.1103/PhysRevB.61.10267

[Kitaev (2006)] A. Kitaev, Anyons in an exactly solved model and beyond, Ann. Phys. 321, 2 (2006). https://doi.org/10.1016/j.aop.2005.10.005
[Fu and Kane (2008)] L. Fu and C. L. Kane, Superconducting proximity effect and Majorana fermions at the surface of a topological insulator, Phys. Rev. Lett. 100, 096407 (2008). https://doi.org/10.1103/PhysRevLett.100.096407

[Sau et al. (2010)] J. D. Sau, R. M. Lutchyn, S. Tewari, and S. Das Sarma, Generic new platform for topological quantum computation using semiconductor heterostructures, Phys. Rev. Lett. 104, 040502 (2010). https://doi.org/10.1103/PhysRevLett.104.040502

[Alicea (2010)] J. Alicea, Majorana fermions in a tunable semiconductor device, Phys. Rev. B 81, 125318 (2010). https://doi.org/10.1103/PhysRevB.81.125318

[Qi and Zhang (2011)] X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011). https://doi.org/10.1103/RevModPhys.83.1057

[Wilczek (2009)] F. Wilczek, Majorana returns, Nature Phys. 5, 614 (2009). https://doi.org/10.1038/nphys1380

[Elliott and Franz (2015)] S. R. Elliott and M. Franz, Colloquium: Majorana fermions in nuclear, particle, and solid-state physics, Rev. Mod. Phys. 87, 137 (2015). https://doi.org/10.1103/RevModPhys.87.137

[Kitaev (2003)] A. Yu. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. 303, 2 (2003). https://doi.org/10.1016/S0003-4916(02)00018-0

[Nayak et al. (2008)] C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Non-Abelian anyons and topological quantum computation, Rev. Mod. Phys. 80, 1083 (2008). https://doi.org/10.1103/RevModPhys.80.1083

[Mong et al. (2014)] R. S. K. Mong, D. J. Clarke, J. Alicea, N. H. Lindner, P. Fendley, C. Nayak, Y. Oreg, A. Stern, E. Berg, K. Shtengel, and M. P. A. Fisher, Universal topological quantum computation from a superconductor-Abelian quantum Hall heterostructure, Phys. Rev. X 4, 011036 (2014). https://doi.org/10.1103/PhysRevX.4.011036

[Clarke et al. (2013)] D. J. Clarke, J. Alicea, and K. Shtengel, Exotic non-Abelian anyons from conventional fractional quantum Hall states, Nature Commun. 4, 1348 (2013). https://doi.org/10.1038/ncomms2340

[Clarke et al. (2014)] D. J. Clarke, J. Alicea, and K. Shtengel, Exotic circuit elements from zero-modes in hybrid superconductor–quantum-Hall systems, Nature Phys. 10, 877 (2014). https://doi.org/10.1038/nphys3114

[Lian et al. (2018a)] B. Lian, X.-Q. Sun, A. Vaezi, X.-L. Qi, and S.-C. Zhang, Topological quantum computation based on chiral Majorana fermions, Proc. Natl. Acad. Sci. U.S.A. 115, 10938 (2018). https://doi.org/10.1073/pnas.1810003115

[Hu and Kane (2018)] Y. Hu and C. L. Kane, Fibonacci topological superconductor, Phys. Rev. Lett. 120, 066801 (2018). https://doi.org/10.1103/PhysRevLett.120.066801

[Chang et al. (2013)] C.-Z. Chang, J. Zhang, X. Feng, J. Shen, Z. Zhang, M. Guo, K. Li, Y. Ou, P. Wei, L.-L. Wang, Z.-Q. Ji, Y. Feng, S. Ji, X. Chen, J. Jia, X. Dai, Z. Fang, S.-C. Zhang, K. He, Y. Wang, L. Lu, X.-C. Ma, and Q.-K. Xue, Experimental observation of the quantum anomalous Hall effect in a magnetic topological insulator, Science 340, 167 (2013). https://doi.org/10.1126/science.1234414

[Checkelsky et al. (2014)] J. G. Checkelsky, R. Yoshimi, A. Tsukazaki, K. S. Takahashi, Y. Kozuka, J. Falson, M. Kawasaki, and Y. Tokura, Trajectory of the anomalous Hall effect towards the quantized state in a ferromagnetic topological insulator, Nature Phys. 10, 731–736 (2014). https://doi.org/10.1038/nphys3053

[Kou et al. (2014)] X. Kou, S.-T. Guo, Y. Fan, L. Pan, M. Lang, Y. Jiang, Q. Shao, T. Nie, K. Murata, J. Tang, Y. Wang, L. He, T.-K. Lee, W.-L. Lee, and K. L. Wang, Scale-invariant quantum anomalous Hall effect in magnetic topological insulators beyond the two-dimensional limit, Phys. Rev. Lett. 113, 137201 (2014). https://doi.org/10.1103/PhysRevLett.113.137201

[Chang et al. (2015)] C.-Z. Chang, W. Zhao, D. Y. Kim, H. Zhang, B. A. Assaf, D. Heiman, S.-C. Zhang, C. Liu, M. H. W. Chan, and J. S. Moodera, High-precision realization of robust quantum anomalous Hall state in a hard ferromagnetic topological insulator, Nature Mater. 14, 473 (2015). https://doi.org/10.1038/nmat4204

[Mogi et al. (2015)] M. Mogi, R. Yoshimi, A. Tsukazaki, K. Yasuda, Y. Kozuka, K. S. Takahashi, M. Kawasaki, and Y. Tokura, Magnetic modulation doping in topological insulators toward higher-temperature quantum anomalous Hall effect, Appl. https://doi.org/10.1063/1.4935075
Phys. Lett. volume 107, pages 182401 (year 2015)NoStop [Deng et al.(2020)Deng, Yu, Shi, Guo, Xu, Wang, Chen, and Zhang]deng2020quantum author author Y. Deng, author Y. Yu, author M. Z. Shi, author Z. Guo, author Z. Xu, author J. Wang, author X. H. Chen, and author Y. Zhang, title title Quantum anomalous hall effect in intrinsic magnetic topological insulator mnbi2te4, https://doi.org/10.1126/science.aax8156 journal journal Science volume 367, pages 895 (year 2020)NoStop [Qi et al.(2010)Qi, Hughes, and Zhang]qi2010chiral author author X.-L. Qi, author T. L. Hughes, and author S.-C. Zhang, title title Chiral topological superconductor from the quantum hall state, https://doi.org/10.1103/PhysRevB.82.184516 journal journal Phys. Rev. B volume 82, pages 184516 (year 2010)NoStop [Wang et al.(2015)Wang, Zhou, Lian, and Zhang]wang2015chiral author author J. Wang, author Q. Zhou, author B. Lian, and author S.-C. Zhang, title title Chiral topological superconductor and half-integer conductance plateau from quantum anomalous hall plateau transition, https://doi.org/10.1103/PhysRevB.92.064520 journal journal Phys. Rev. B volume 92, pages 064520 (year 2015)NoStop [Chung et al.(2011)Chung, Qi, Maciejko, and Zhang]chung2011conductance author author S. B. Chung, author X.-L. Qi, author J. Maciejko, and author S.-C. Zhang, title title Conductance and noise signatures of majorana backscattering, https://doi.org/10.1103/PhysRevB.83.100512 journal journal Phys. Rev. B volume 83, pages 100512 (year 2011)NoStop [Wang(2016)]wang2016electrically author author J. Wang, title title Electrically tunable topological superconductivity and majorana fermions in two dimensions, https://doi.org/10.1103/PhysRevB.94.214502 journal journal Phys. Rev. B volume 94, pages 214502 (year 2016)NoStop [Strübi et al.(2011)Strübi, Belzig, Choi, and Bruder]strubi2011interferometric author author G. Strübi, author W. Belzig, author M.-S. Choi, and author C. Bruder, title title Interferometric and noise signatures of majorana fermion edge states in transport experiments, https://doi.org/10.1103/PhysRevLett.107.136403 journal journal Phys. Rev. Lett. volume 107, pages 136403 (year 2011)NoStop [Fu and Kane(2009)]fu2009probing author author L. Fu and author C. L. Kane, title title Probing neutral majorana fermion edge modes with charge transport, https://doi.org/10.1103/PhysRevLett.102.216403 journal journal Phys. Rev. Lett. volume 102, pages 216403 (year 2009)NoStop [Akhmerov et al.(2009)Akhmerov, Nilsson, and Beenakker]akhmerov2009electrically author author A. R. Akhmerov, author J. Nilsson, and author C. W. J. Beenakker, title title Electrically detected interferometry of majorana fermions in a topological insulator, https://doi.org/10.1103/PhysRevLett.102.216404 journal journal Phys. Rev. Lett. volume 102, pages 216404 (year 2009)NoStop [Lian et al.(2016)Lian, Wang, and Zhang]lian2016edgestateinduced author author B. Lian, author J. Wang, and author S.-C. Zhang, title title Edge-state-induced andreev oscillation in quantum anomalous hall insulator-superconductor junctions, https://doi.org/10.1103/PhysRevB.93.161401 journal journal Phys. Rev. B volume 93, pages 161401 (year 2016)NoStop [Lian et al.(2018b)Lian, Wang, Sun, Vaezi, and Zhang]lian2018 author author B. Lian, author J. Wang, author X.-Q. Sun, author A. Vaezi, and author S.-C. Zhang, title title Quantum phase transition of chiral majorana fermions in the presence of disorder, https://doi.org/10.1103/PhysRevB.97.125408 journal journal Phys. Rev. 
B volume 97, pages 125408 (year 2018b)NoStop [Wang and Lian(2018)]wang2018multiple author author J. Wang and author B. Lian, title title Multiple chiral majorana fermion modes and quantum transport, https://doi.org/10.1103/PhysRevLett.121.256801 journal journal Phys. Rev. Lett. volume 121, pages 256801 (year 2018)NoStop [Lian and Wang(2019)]lian2019distribution author author B. Lian and author J. Wang, title title Distribution of conductances in chiral topological superconductor junctions, https://doi.org/10.1103/PhysRevB.99.041404 journal journal Phys. Rev. B volume 99, pages 041404 (year 2019)NoStop [Hu et al.(2024)Hu, Wang, and Lian]hu2024resistance author author Y. Hu, author J. Wang, and author B. Lian, @noop title Resistance distribution of decoherent quantum hall-superconductor edges (year 2024), https://arxiv.org/abs/2405.17550 arXiv:2405.17550 [cond-mat] NoStop [He et al.(2019)He, Liang, Tanaka, and Nagaosa]he2019platform author author J. J. He, author T. Liang, author Y. Tanaka, and author N. Nagaosa, title title Platform of chiral majorana edge modes and its quantum transport phenomena, https://doi.org/10.1038/s42005-019-0250-5 journal journal Commun. Phys. volume 2, pages 149 (year 2019)NoStop [Hoppe et al.(2000)Hoppe, Zülicke, and Schön]Hoppe2000 author author H. Hoppe, author U. Zülicke, and author G. Schön, title title Andreev reflection in strong magnetic fields, https://doi.org/10.1103/PhysRevLett.84.1804 journal journal Phys. Rev. Lett. volume 84, pages 1804 (year 2000)NoStop [Giazotto et al.(2005)Giazotto, Governale, Zülicke, and Beltram]Giazotto2005 author author F. Giazotto, author M. Governale, author U. Zülicke, and author F. Beltram, title title Andreev reflection and cyclotron motion at superconductor—normal-metal interfaces, https://doi.org/10.1103/PhysRevB.72.054518 journal journal Phys. Rev. B volume 72, pages 054518 (year 2005)NoStop [Akhmerov and Beenakker(2007)]Akhmerov2007 author author A. R. Akhmerov and author C. W. J. Beenakker, title title Detection of valley polarization in graphene by a superconducting contact, https://doi.org/10.1103/PhysRevLett.98.157003 journal journal Phys. Rev. Lett. volume 98, pages 157003 (year 2007)NoStop [van Ostaay et al.(2011)van Ostaay, Akhmerov, and Beenakker]Ostaay2011 author author J. A. M. van Ostaay, author A. R. Akhmerov, and author C. W. J. Beenakker, title title Spin-triplet supercurrent carried by quantum hall edge states through a josephson junction, https://doi.org/10.1103/PhysRevB.83.195441 journal journal Phys. Rev. B volume 83, pages 195441 (year 2011)NoStop [Wan et al.(2015)Wan, Kazakov, Manfra, Pfeiffer, West, and Rokhinson]Wan2015 author author Z. Wan, author A. Kazakov, author M. J. Manfra, author L. N. Pfeiffer, author K. W. West, and author L. P. Rokhinson, title title Induced superconductivity in high-mobility two-dimensional electron gas in gallium arsenide heterostructures, https://doi.org/10.1038/ncomms8426 journal journal Nature Commun. volume 6, pages 7426 (year 2015)NoStop [Amet et al.(2016)Amet, Ke, Borzenets, Wang, Watanabe, Taniguchi, Deacon, Yamamoto, Bomze, Tarucha, and Finkelstein]Amet2016 author author F. Amet, author C. T. Ke, author I. V. Borzenets, author J. Wang, author K. Watanabe, author T. Taniguchi, author R. S. Deacon, author M. Yamamoto, author Y. Bomze, author S. Tarucha, and author G. 
Finkelstein, title title Supercurrent in the quantum hall regime, https://doi.org/10.1126/science.aad6203 journal journal Science volume 352, pages 966–969 (year 2016)NoStop [Lee et al.(2017)Lee, Huang, Efetov, Wei, Hart, Taniguchi, Watanabe, Yacoby, and Kim]Lee2017 author author G.-H. Lee, author K.-F. Huang, author D. K. Efetov, author D. S. Wei, author S. Hart, author T. Taniguchi, author K. Watanabe, author A. Yacoby, and author P. Kim, title title Inducing superconducting correlation in quantum hall edge states, https://doi.org/10.1038/nphys4084 journal journal Nature Phys. volume 13, pages 693–698 (year 2017)NoStop [Sahu et al.(2018)Sahu, Liu, Paul, Das, Raychaudhuri, Jain, and Das]Sahu2018 author author M. R. Sahu, author X. Liu, author A. K. Paul, author S. Das, author P. Raychaudhuri, author J. K. Jain, and author A. Das, title title Inter-landau-level andreev reflection at the dirac point in a graphene quantum hall state coupled to a nbse_2 superconductor, https://doi.org/10.1103/PhysRevLett.121.086809 journal journal Phys. Rev. Lett. volume 121, pages 086809 (year 2018)NoStop [Matsuo et al.(2018)Matsuo, Ueda, Baba, Kamata, Tateno, Shabani, Palmstrøm, and Tarucha]Matsuo2018 author author S. Matsuo, author K. Ueda, author S. Baba, author H. Kamata, author M. Tateno, author J. Shabani, author C. J. Palmstrøm, and author S. Tarucha, title title Equal-spin andreev reflection on junctions of spin-resolved quantum hall bulk state and spin-singlet superconductor, https://doi.org/10.1038/s41598-018-21707-0 journal journal Sci. Rep. volume 8, pages 3454 (year 2018)NoStop [Seredinski et al.(2019)Seredinski, Draelos, Arnault, Wei, Li, Fleming, Watanabe, Taniguchi, Amet, and Finkelstein]Seredinski2019 author author A. Seredinski, author A. W. Draelos, author E. G. Arnault, author M.-T. Wei, author H. Li, author T. Fleming, author K. Watanabe, author T. Taniguchi, author F. Amet, and author G. Finkelstein, title title Quantum hall–based superconducting interference device, https://doi.org/10.1126/sciadv.aaw8693 journal journal Sci. Adv. volume 5, pages eaaw8693 (year 2019)NoStop [Zhao et al.(2020)Zhao, Arnault, Bondarev, Seredinski, Larson, Draelos, Li, Watanabe, Taniguchi, Amet, Baranger, and Finkelstein]Zhao2020 author author L. Zhao, author E. G. Arnault, author A. Bondarev, author A. Seredinski, author T. F. Q. Larson, author A. W. Draelos, author H. Li, author K. Watanabe, author T. Taniguchi, author F. Amet, author H. U. Baranger, and author G. Finkelstein, title title Interference of chiral andreev edge states, https://doi.org/10.1038/s41567-020-0898-5 journal journal Nature Phys. volume 16, pages 862–867 (year 2020)NoStop [Zhao et al.(2023)Zhao, Iftikhar, Larson, Arnault, Watanabe, Taniguchi, Amet, and Finkelstein]Zhao2023 author author L. Zhao, author Z. Iftikhar, author T. F. Q. Larson, author E. G. Arnault, author K. Watanabe, author T. Taniguchi, author F. m. c. Amet, and author G. Finkelstein, title title Loss and decoherence at the quantum hall-superconductor interface, https://doi.org/10.1103/PhysRevLett.131.176604 journal journal Phys. Rev. Lett. volume 131, pages 176604 (year 2023)NoStop [Gül et al.(2022)Gül, Ronen, Lee, Shapourian, Zauberman, Lee, Watanabe, Taniguchi, Vishwanath, Yacoby, and Kim]Yuval2022 author author O. Gül, author Y. Ronen, author S. Y. Lee, author H. Shapourian, author J. Zauberman, author Y. H. Lee, author K. Watanabe, author T. Taniguchi, author A. Vishwanath, author A. Yacoby, and author P. 
Kim, title title Andreev reflection in the fractional quantum hall state, https://doi.org/10.1103/PhysRevX.12.021057 journal journal Phys. Rev. X volume 12, pages 021057 (year 2022)NoStop [Hatefipour et al.(2022)Hatefipour, Cuozzo, Kanter, Strickland, Allemang, Lu, Rossi, and Shabani]Hatefipour2022 author author M. Hatefipour, author J. J. Cuozzo, author J. Kanter, author W. M. Strickland, author C. R. Allemang, author T.-M. Lu, author E. Rossi, and author J. Shabani, title title Induced superconducting pairing in integer quantum hall edge states, https://doi.org/10.1021/acs.nanolett.2c01413 journal journal Nano Lett. volume 22, pages 6173–6178 (year 2022)NoStop [Uday et al.(2024)Uday, Lippertz, Moors, Legg, Bliesener, Pereira, Taskin, and Ando]uday2024 author author A. Uday, author G. Lippertz, author K. Moors, author H. F. Legg, author A. Bliesener, author L. M. C. Pereira, author A. A. Taskin, and author Y. Ando, title title Induced superconducting correlations in the quantum anomalous hall insulator, journal journal Nature Phys. https://doi.org/10.1038/s41567-024-02574-1 10.1038/s41567-024-02574-1 (year 2024)NoStop [Wang and Liu(2024)]wang2024 author author J. Wang and author Z. Liu, title title A way to cross the andreev bridge, journal journal Nature Phys. https://doi.org/10.1038/s41567-024-02575-0 10.1038/s41567-024-02575-0 (year 2024)NoStop [Vignaud et al.(2023)Vignaud, Perconte, Yang, Kousar, Wagner, Gay, Watanabe, Taniguchi, Courtois, Han, Sellier, and Sacépé]vignaud_2023 author author H. Vignaud, author D. Perconte, author W. Yang, author B. Kousar, author E. Wagner, author F. Gay, author K. Watanabe, author T. Taniguchi, author H. Courtois, author Z. Han, author H. Sellier, and author B. Sacépé, title title Evidence for chiral supercurrent in quantum hall josephson junctions, https://doi.org/10.1038/s41586-023-06764-4 journal journal Nature volume 624, pages 545–550 (year 2023)NoStop [Ménard et al.(2017)Ménard, Guissart, Brun, Leriche, Trif, Debontridder, Demaille, Roditchev, Simon, and Cren]menard2017two author author G. C. Ménard, author S. Guissart, author C. Brun, author R. T. Leriche, author M. Trif, author F. Debontridder, author D. Demaille, author D. Roditchev, author P. Simon, and author T. Cren, title title Two-dimensional topological superconductivity in pb/co/si (111), https://doi.org/10.1038/s41467-017-02192-x journal journal Nature Commun. volume 8, pages 2040 (year 2017)NoStop [Kezilebieke et al.(2020)Kezilebieke, Huda, Vaňo, Aapro, Ganguli, Silveira, Głodzik, Foster, Ojanen, and Liljeroth]kezilebieke2020topological author author S. Kezilebieke, author M. N. Huda, author V. Vaňo, author M. Aapro, author S. C. Ganguli, author O. J. Silveira, author S. Głodzik, author A. S. Foster, author T. Ojanen, and author P. Liljeroth, title title Topological superconductivity in a van der waals heterostructure, https://doi.org/10.1038/s41586-020-2989-y journal journal Nature volume 588, pages 424 (year 2020)NoStop [Wang et al.(2020)Wang, Rodriguez, Jiao, Howard, Graham, Gu, Hughes, Morr, and Madhavan]wang2020evidence author author Z. Wang, author J. O. Rodriguez, author L. Jiao, author S. Howard, author M. Graham, author G. Gu, author T. L. Hughes, author D. K. Morr, and author V. Madhavan, title title Evidence for dispersing 1d majorana channels in an iron-based superconductor, https://doi.org/10.1126/science.aaw8419 journal journal Science volume 367, pages 104 (year 2020)NoStop [He et al.(2021)He, Tanaka, and Nagaosa]he2021opticala author author J. J. He, author Y. 
Tanaka, and author N. Nagaosa, title title Optical responses of chiral majorana edge states in two-dimensional topological superconductors, https://doi.org/10.1103/PhysRevLett.126.237002 journal journal Phys. Rev. Lett. volume 126, pages 237002 (year 2021)NoStop [Ahn and Nagaosa(2021)]ahn2021theory author author J. Ahn and author N. Nagaosa, title title Theory of optical responses in clean multi-band superconductors, https://doi.org/10.1038/s41467-021-21905-x journal journal Nature Commun. volume 12, pages 1617 (year 2021)NoStop [Mahan(2000)]mahan2000 author author G. D. Mahan, https://doi.org/10.1007/978-1-4757-5714-9 title Many-Particle Physics (publisher Springer US, address Boston, MA, year 2000)NoStop [Huang and Wang(2023)]huang2023 author author L. Huang and author J. Wang, title title Second-order optical response of superconductors induced by supercurrent injection, https://doi.org/10.1103/PhysRevB.108.224516 journal journal Phys. Rev. B volume 108, pages 224516 (year 2023)NoStop [Bi and He(2024)]bi2024vertical author author H. Bi and author J. J. He, title title Vertical optical transitions of helical majorana edge modes in topological superconductors, https://doi.org/10.1103/PhysRevB.109.214513 journal journal Phys. Rev. B volume 109, pages 214513 (year 2024)NoStop [sm()]sm @noop journal See Supplemental Material for the details NoStop [Zhang et al.(2019)Zhang, Shi, Zhu, Xing, Zhang, and Wang]Zhang2019 journal author author D. Zhang, author M. Shi, author T. Zhu, author D. Xing, author H. Zhang, and author J. Wang, title title Topological axion states in the magnetic insulator mnbi_2te_4 with the quantized magnetoelectric effect, https://doi.org/10.1103/PhysRevLett.122.206401 journal journal Phys. Rev. Lett. volume 122, pages 206401 (year 2019)NoStop [Lu et al.(2024)Lu, Yang, and Lu]lu2024optical author author S. Lu, author X. Yang, and author Y.-M. Lu, title title Optical and raman selection rules for odd-parity clean superconductors, https://doi.org/10.1103/PhysRevB.109.245119 journal journal Phys. Rev. B volume 109, pages 245119 (year 2024)NoStop [Wang et al.(2012)Wang, Liu, Xu, Yang, Miao, Yao, Gao, Shen, Ma, Chen et al.]wang2012coexistence author author M.-X. Wang, author C. Liu, author J.-P. Xu, author F. Yang, author L. Miao, author M.-Y. Yao, author C. Gao, author C. Shen, author X. Ma, author X. Chen, et al., title title The coexistence of superconductivity and topological order in the bi2se3 thin films, https://doi.org/10.1126/science.1216466 journal journal Science volume 336, pages 52 (year 2012)NoStop [Xiang and Wu(2022)]xiang2022dwave author author T. Xiang and author C. Wu, https://doi.org/10.1017/9781009218566 title D-Wave Superconductivity, edition 1st ed. (publisher Cambridge University Press, year 2022)NoStop [Anderson(1958a)]anderson1958randomphasea author author P. W. Anderson, title title Random-phase approximation in the theory of superconductivity, https://doi.org/10.1103/PhysRev.112.1900 journal journal Phys. Rev. volume 112, pages 1900 (year 1958a)NoStop [Anderson(1958b)]anderson1958new author author P. W. Anderson, title title New method in the theory of superconductivity, https://doi.org/10.1103/PhysRev.110.985 journal journal Phys. Rev. volume 110, pages 985 (year 1958b)NoStop [Bogoljubov et al.(1958)Bogoljubov, Tolmachov, and Širkov]bogoljubov1958new author author N. N. Bogoljubov, author V. V. Tolmachov, and author D. V. 
Širkov, title title A new method in the theory of superconductivity, https://doi.org/10.1002/prop.19580061102 journal journal Fortschritte Phys. volume 6, pages 605 (year 1958)NoStop [Rickayzen(1959)]rickayzen1959collective author author G. Rickayzen, title title Collective excitations in the theory of superconductivity, https://doi.org/10.1103/PhysRev.115.795 journal journal Phys. Rev. volume 115, pages 795 (year 1959)NoStop [Littlewood and Varma(1982)]littlewood1982amplitude author author P. B. Littlewood and author C. M. Varma, title title Amplitude collective modes in superconductors and their coupling to charge-density waves, https://doi.org/10.1103/PhysRevB.26.4883 journal journal Phys. Rev. B volume 26, pages 4883 (year 1982)NoStop [Pekker and Varma(2015)]pekker2015amplitude author author D. Pekker and author C. Varma, title title Amplitude/higgs modes in condensed matter physics, https://doi.org/10.1146/annurev-conmatphys-031214-014350 journal journal Annu. Rev. Condens. Matter Phys. volume 6, pages 269 (year 2015)NoStop [Shimano and Tsuji(2020)]shimano2020higgs author author R. Shimano and author N. Tsuji, title title Higgs mode in superconductors, https://doi.org/10.1146/annurev-conmatphys-031119-050813 journal journal Annu. Rev. Condens. Matter Phys. volume 11, pages 103 (year 2020)NoStop [Tinkham(2004)]tinkham2004introduction author author M. Tinkham, @noop title Introduction to Superconductivity (publisher Courier Corporation, year 2004)NoStop
http://arxiv.org/abs/2407.12681v1
20240717160208
Dynamics of Cities
[ "A. Deppman", "R. L. Fagundes", "E. Megias", "R. Pasechnik", "F. L. Ribeiro", "C. Tsallis" ]
physics.soc-ph
[ "physics.soc-ph", "math-ph", "math.MP", "nlin.CD" ]
Instituto de Física - Universidade de São Paulo, Rua do Matão 1371, São Paulo 05508-090, Brazil deppman@usp.br Universidade Federal de Lavras - UFLA, Dept Fis DFI, BR-37200900 Lavras, MG, Brazil fribeiro@ufla.br Departamento de Física Atómica, Molecular y Nuclear and Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Avenida de Fuente Nueva s/n, 18071 Granada, Spain emegias@ugr.es Department of Physics, Lund University, Sölvegatan 14A, Lund SE-22362, Sweden Roman.Pasechnik@fysik.lu.se Universidade Federal de Lavras - UFLA, Dept Fis DFI, BR-37200900 Lavras, MG, Brazil fribeiro@ufla.br Centro Brasileiro de Pesquisas Fisicas and National Institute of Science and Technology of Complex Systems, Rua Xavier Sigaud 150, Rio de Janeiro-RJ 22290-180, Brazil Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, 87501 NM, USA Complexity Science Hub Vienna - Josefstädter Strasse 39, 1080 Vienna, Austria tsallis@cbpf.br § ABSTRACT This study investigates city dynamics employing a nonextensive diffusion equation suited for addressing diffusion within a fractal medium, where the nonadditive parameter, q, plays a relevant role. The findings demonstrate the efficacy of this approach in determining the relation between the fractal dimension of the city, the allometric exponent and q, and elucidating the stationary phase of urban evolution. The dynamic methodology facilitates the correlation of the fractal dimension with both the entropic index and the urban scaling exponent identified in data analyses. The results reveal that the scaling behaviour observed in cities aligns with the fractal dimension measured through independent methods. Moreover, the interpretation of these findings underscores the intimate connection between the fractal dimension and social interactions within the urban context. This research contributes to a deeper comprehension of the intricate interplay between human behaviour, urban dynamics, and the underlying fractal nature of cities. Dynamics of Cities C. Tsallis ================== § INTRODUCTION §.§ The city's fractal space In recent decades, our understanding of urban population dynamics has rapidly advanced. Exploiting precise data made available by social media platforms and mobile devices, contemporary urban issues can now be scrutinized through more rigorous scientific methodologies. The development of new analytical methods for studying complex systems, previously inaccessible for the study of urban systems, has facilitated the quantitative and qualitative comprehension of various facets of urban life. This advancement has brought forth intriguing aspects of urban organization. Contrary to prior assumptions, cities exhibit some adherence to universal laws, independent of cultural, ethnic, or socioeconomic nuances of their specific regions <cit.>. Numerous aspects of urban existence follow simple power laws with population, manifesting either superlinear or sublinear tendencies, a phenomenon known as urban scaling. Socioeconomics displays superlinear trends, while infrastructure-related aspects demonstrate sublinear behavior, allowing for enhanced efficiency with population growth. The associated exponent remains remarkably consistent across diverse cities, regardless of their unique attributes <cit.>. Despite robust evidence supporting the power-law dynamics in urban parameters concerning population, our understanding of the underlying mechanisms driving urban growth remains limited <cit.>. 
The various models attempting to elucidate urban scaling were summarized in Ref. <cit.>, emphasizing their reliance on human interaction and the availability of infrastructure. These models diverge in their approaches to deriving scaling exponents, which are invariably obtained by considering allometric relations between socioeconomic output and infrastructure cost <cit.>. The observed universal behaviour results from constraints related to the stability of the urban area <cit.>. A range of mechanisms has been proposed, such as cross-sectional interaction <cit.>, the gravitational model <cit.>, and other preferential attachment approaches <cit.> (see also <cit.>). It is worth mentioning that preferential attachment is a known mechanism for generating networks with power-law behaviour <cit.>. Fractal dimensions, linked either to social behaviour or to the infrastructure distribution, are commonly employed to obtain the appropriate range for the power-law exponent <cit.>. Most of the models describe static properties of the cities, but there is evidence that dynamical effects are relevant <cit.>. The present work addresses the dynamical evolution of cities by considering how they are modified as the population distribution changes with time. One of the most important relations is the so-called fundamental allometry, relating the city area A to the population size N by A ∼ N^β, with β being the scaling exponent. §.§ Nonlinear dynamics in fractal space The dynamics approach to urban life allows for a more comprehensive understanding of the organic organization of individuals and infrastructure. This approach encompasses the city's geometry and socioeconomic interactions within a single theoretical framework. Some attempts to develop a dynamical theory of cities were associated with Lévy flights, but they do not reproduce some important aspects observed in cities around the world <cit.>. The results obtained in this work will show that the non-additive entropy and the associated non-extensive thermodynamics <cit.> offer a better framework for describing urban life. Most urban scaling studies select cities above some minimum population size, while some works show that small cities diverge from the expected pattern <cit.>. A model for information diffusion in fractal networks <cit.> shows that the scaling law may fail for small groups. The behaviour of information spreading with the population size follows the q-exponential function instead of the power-law function. The q-exponential function is given by <cit.> e_q(x) = [1-(q-1)x]^-1/(q-1) for x < 1/(q-1), and e_q(x) = 0 for x ≥ 1/(q-1), where q ∈ ℝ is the entropic index; this work focuses on the case q>1. Both the power-law and the q-exponential functions exhibit similar behaviour for large population sizes but differ for small ones. The q-exponential distribution is typical of nonextensive statistics <cit.>, which generalizes Boltzmann's statistics by allowing a non-additive entropy. The generalized statistics has found numerous applications in many realms of knowledge <cit.>. The relationship between fractals and non-extensive statistics has been explored in several works <cit.>. Assuming the validity of the fractal model for urban landscapes, a comprehensive exploration of this structure may uncover key aspects for predicting cities' dynamic behaviour. This approach has the potential to yield vital information about cities' temporal evolution, thus presenting a valuable opportunity to devise more effective strategies for urban development.
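As a quick numerical illustration of the q-exponential just defined, the following short Python sketch (ours, not part of the original analysis) evaluates e_q(x) for q > 1 and shows the power-law decay, ~|x|^(-1/(q-1)), for large negative arguments; the function name and the sample values of q are illustrative choices.

import numpy as np

def q_exponential(x, q):
    # e_q(x) = [1 - (q-1) x]^(-1/(q-1)) for x < 1/(q-1), and 0 otherwise (q > 1)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    if np.isclose(q, 1.0):
        return np.exp(x)  # ordinary exponential recovered in the limit q -> 1
    arg = 1.0 - (q - 1.0) * x
    out = np.zeros_like(arg)
    positive = arg > 0.0
    out[positive] = arg[positive] ** (-1.0 / (q - 1.0))
    return out

# For negative arguments (as in the population profiles used later) the decay is a
# power law rather than exponential, which is why small and large systems differ.
x = np.array([-0.1, -1.0, -10.0, -100.0])
for q in (1.05, 1.2, 1.333):
    print(q, q_exponential(x, q))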
The relevant time scale for attaining the stationary regime assumed here is one in which the characteristic exponent of the distributions can be considered constant. Evidence indicates that for time scales spanning centuries, the exponent may change, but the present work is focused on the dynamics of cities in a shorter term. This approach encompasses processes such as the immigration of representative fractions of the population of the city or the relocation of the population due to natural disasters. These processes can disrupt the natural population density of the city, triggering a subsequent change through a dynamic process that must incorporate the inherent fractal aspects of the city's organic evolution. The dynamics of systems in fractal spaces are rather different from the usual dynamic evolution. The Fokker-Planck Equation (FPE) is the law one usually has in mind when addressing the evolution of a complex system, and if f(𝐫,t) is the probability distribution, which depends on the position 𝐫 and time t, then the FPE is given by ∂ f/∂ t(𝐫,t)= ∂/∂ x_i[ -γ_i(𝐫) f(𝐫,t) + B ∂/∂ x_i f(𝐫,t) ] , where summation over index i is understood. In the FPE, the parameters γ_i(𝐫) and B are transport coefficients and characterize the drift of the system in the medium and the diffusive process through the medium, respectively[In general, the transport coefficients can assume a tensor form. The assumption that they are scalar, in particular B_ij = B δ_ij, is appropriate for the application we intend here concerning an isotropic system. Tensor coefficients can be relevant for situations where the city growth is constrained by geographical features.]. § DYNAMICS IN NON-HOMOGENEOUS MEDIA Non-homogeneous media may give rise to a modified process that a non-linear Fokker-Planck Equation can describe. The one-dimensional case where γ(𝐫) = γ_1 -γ_2 𝐫 is of particular interest and has been addressed in Ref. <cit.> through a comprehensive study of anomalous diffusion. The parameter γ_1 indicates a constant repulsive force, while the parameter γ_2 is associated with an attractive harmonic potential that represents the overall tendency of the population to live near some basic facilities offered by urban centres. The harmonic potential yields an area for the city that increases linearly with the population size if fractal effects are absent. The Plastino-Plastino Equation (PPE) <cit.> is a quite general equation for non-linear dynamics and is particularly useful when a fractal medium is present. This equation is given by ∂ f/∂ t(𝐫,t)= ∂/∂ x_i[ -γ_i(𝐫) f(𝐫,t) + B ∂/∂ x_i f(𝐫,t)^2-q] , where q is the entropic index of nonextensive statistics <cit.>. The PPE was proposed in the context of nonextensive statistics <cit.>, therefore it is the appropriate framework to describe the evolution of a wide class of complex systems, and its solutions present q-exponential forms of distribution or related ones <cit.>. The PPE has been recently derived from a generalized form of the Boltzmann Equation for systems with non-local correlations <cit.>. The dynamics of systems in fractal space can be related to the PPE and in this case, q can be determined by the fractal dimension of the system <cit.>, highlighting the close connections between fractals and nonextensive statistics. In an alternative approach, when a system evolves in a fractal medium, the FPE may be modified by substituting the standard derivative operators with fractal derivatives. 
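Before moving on to the fractal-derivative form of the equation discussed next, a small numerical consistency check of the PPE above may be useful. The sketch below is our own illustration with arbitrary parameter values: it builds a one-dimensional q-Gaussian and verifies that the term inside the outer spatial derivative of the PPE, -γ(x) f + B d/dx f^(2-q) with γ(x) = -γ_2 x (i.e. γ_1 = 0), vanishes when the width and amplitude satisfy σ^2 = B (2-q) A^(1-q) / γ_2, a relation obtained here simply by setting that term to zero; it is meant only as a sanity check, not as the derivation used in the paper.

import numpy as np

q, gamma2, B, sigma = 1.2, 1.0, 1.0, 1.0
A = (gamma2 * sigma**2 / (B * (2.0 - q))) ** (1.0 / (1.0 - q))  # amplitude consistent with sigma

x = np.linspace(-8.0, 8.0, 4001)
f = A * (1.0 + (q - 1.0) * x**2 / (2.0 * sigma**2)) ** (-1.0 / (q - 1.0))  # q-Gaussian profile

# term inside the outer d/dx of the 1D PPE; it must vanish in the stationary state
flux = gamma2 * x * f + B * np.gradient(f ** (2.0 - q), x[1] - x[0])
print("max |flux term| :", np.abs(flux).max())            # ~0 up to discretisation error
print("max |drift term|:", np.abs(gamma2 * x * f).max())  # scale of the individual terms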
The result is the Fractal Fokker-Planck Equation (FFPE) given by <cit.> D^α^'_t_o f(𝐫,t)= D^α^''_x_i,o[ -γ_i(𝐫) f(𝐫,t) + B D^α^''_x_i,o f(𝐫,t) ] , where 0<α^'≤ 1 and 0<α^''≤ 1 represent the fractal dimension of the time and coordinate spaces, respectively. It is important to highlight the distinction between fractal derivatives and fractional derivatives at this point. The former is associated with Haussdorff geometry <cit.>, while the latter class of derivative operators is derived from algebraic considerations of derivative operators. The connection between these two derivative classes can be established through the continuous approximation of the fractal derivative <cit.>. One of the possible continuous approximations is associated with q-deformed calculus <cit.>. This approximation transforms the fractal version of the Fokker-Planck Equation into the Plastino-Plastino Equation given by Eq. (<ref>) <cit.>. Exploiting the properties of the q-calculus derivative, Ref. <cit.> demonstrated that the most significant geometrical aspect influencing the dynamic process in the fractal space is the fractal dimension gap, denoted as δ d_f ≡ d-d_f, where d is the smallest integer dimension of Euclidean space that embeds the fractal space with dimension d_f. A consequence of this finding is that the effects of time and coordinate fractal spaces depend on the joint space (t,𝐫). The dimension of the joint space is the sum of the time and coordinate spaces, implying that the fractal dimensions α^' and α^'' in Eq. (<ref>) can be substituted by α=α^'+α^''. Several works have explored the fractional version of the PPE <cit.>. The present work addresses the connections between the fractal equation and the PPE by adopting the continuous approximation associated with the q-deformed calculus. The standard derivative formulas can be recovered by using such continuous approximations, resulting in the PPE equation. In the connection between these two equations, the important quantity ξ was derived as ξ = 2 - d(q-1) = 2 - δ d_f ∈ (0,2] , establishing the link between fractal geometry, the dynamics in fractal space and the nonextensive dynamics. The quantity ξ is defined to be the fractal dimension α of the image space of a function. For a distribution that is positively defined, α=ξ/2. Eq. (<ref>) results from considerations on the foundations of fractal geometry, fractal and q-deformed calculus <cit.> and nonlinear dynamics. It allows for the calculation of the parameter q for any fractal space, since q = 2- d_f/d . The PPE allows for addressing the dynamic aspect of the cities. § DYNAMICS OF CITIES The present work assumes that the city growth is isotropic, that is, the infrastructure expands with the same probability in any direction concerning the city centre 𝐫_c(t) so that the probability distribution will be a function of |𝐫 - 𝐫_c(t)|. This assumption allows for a simplified equation (see Eq. (<ref>), below). The solution of the PPE (<ref>) corresponding to a Dirac δ-distribution at t=0 is given by f(𝐫,t)= 1/N(t) e_q[-(𝐫-𝐫_c(t))^2/2σ(t)^2] , where 𝐫_c(t)= γ_1/γ_2+(𝐫_o-γ_1/γ_2) exp[-γ_2 t] σ(t) = σ_∞( 1 - exp[ -ξγ_2 t ] )^1/ξ , with σ_∞≡ℓ_q κ^1/ξ , κ≡ (2-q) (2πχ_q)^d/2(q-1)B/γ_2 ℓ_q^2 . The parameter ℓ_q is the characteristic linear size of the fractal space, and χ_q ≡1/q-1( Γ( 1/q-1 - d/2) /Γ( 1/q-1))^2/d , q > 1 . The dimension d remains arbitrary because the same method used here can be applied to other processes with different dimensions. The solution in Eq. 
(<ref>) is a q-Gaussian, with a peak at 𝐫 = 𝐫_c(t) and width σ(t). For t→∞, the city reaches a new stationary regime after being disturbed by some event of the kind mentioned before. Observe that the distribution in Eq. (<ref>) is dimensionless, as required for the correct usage of the PPE because all parameters are written in scaling invariant form, such as σ/ℓ_q. Below, the dimensional distribution is recovered. To develop a dynamical model of cities, it is important to observe that cities usually start and evolve around a fixed geometric centre, which remains, to a good approximation, the centre of mass of the urban area during its evolution. In the following, it will be assumed that the city centre is at 𝐫_c(t) = 0. This condition is obtained by considering 𝐫_o = 0 = γ_1. This means that one can integrate in the angular coordinates, and the distribution will depend only on r and t, i.e., f(r,t). The PPE for the isotropic city becomes ∂/∂ t f(r,t) = γ_2 ( d + r ∂/∂ r) f(r,t) + B/r^d-1∂/∂ r( r^d-1∂/∂ r f(r,t)^2-q) , and its solution is given by Eq. (<ref>) with 𝐫_c(t) = 0. Notice that the function f(r,t) is the profile of the population density at the time t, measured from the city centre, r=0. While the distribution f(𝐫,t) is dimensionless, the rescaled distribution f̅(𝐫,t) ≡N(t)/ℓ_q^d f(𝐫,t) has dimensions of [Length]^-d and it is interpreted as the population density. The integral of the density over all space gives the population size, i.e., N(t) = ∫ d^d r f̅(𝐫, t) , resulting in the total population at time t, N(t) = ( √(2πχ_q)σ(t)/ℓ_q)^d . Observe that the stationary distribution has a finite width for t →∞, and the population at this stage will be N_∞(q)= (2πχ_q)^d/2κ^d/ξ , where the dependence in the parameter q is evidenced. The finite width at the asymptotic stationary regime is a consequence of the harmonic potential <cit.>. The parameters B and γ_2 do not depend on q, so the time dependence of the width is also independent of q, as can be observed in Eq. (<ref>). A paradigmatic case is q=1, when the PPE and FFPE reduce to the FPE, the solution becomes a Gaussian, and the whole process is governed by Boltzmann statistics in a Euclidean space, as indicated in Eq. (<ref>). For this case, the stationary population will be N_≡ N_∞(q=1)=(2π)^d/2 κ_^d/2 . The crucial point is to find the conditions to have the same population in a q-Gaussian distribution for any value of q, so one can study the effects of the different values for the fractal dimensions and the entropic index on the population density. From Eq. (<ref>), it results that to keep a constant population, ℓ_q must vary with q in such a way that √(2πχ_q)σ(q)/ℓ_q= √(2πχ_q)  κ^1/ξ remains independent of q. By using Eq. (<ref>) it follows that √(2πχ_q)σ(q)/ℓ_q= √(2π)( u_q ℓ_o/ℓ_q)^2/ξ , where u_q = [  (2-q) (2π)^2 - ξ/2χ_q ]^1/2 , with ℓ_o=√(B/γ_2) being a typical length associated with the dynamic properties of the system, as discussed below. The population size N(t) is independent of q if the right-hand side of Eq. (<ref>) is q-independent. In this case, the equality √(2π)( u_q ℓ_o/ℓ_q)^2/ξ=√(2π)σ_/ℓ_ , with σ_=σ_q=1 and ℓ_=ℓ_q=1, must hold, implying that u_q ℓ_o/ℓ_q=( σ_/ℓ_)^ξ/2 . Observe that, for q=1, it results u_q=1 therefore σ_=ℓ_o. Thus, the length ℓ_o represents the distribution width for the solution of the Fokker-Planck Equation associated with the diffusive process governed by the Boltzmann Statistics. Establishing the scaling relation ℓ_q = u_q ℓ_o^1-ξ/2ℓ_^ξ/2 , ensures that κ^2/ξ in Eq. 
(<ref>) is independent of q. Thereby, σ_q scales with q in the same way as ℓ_q, so σ(q) ∝ℓ_^ξ/2. Since, for the same reason, σ_∝ℓ_, it follows that σ(q) ∝σ_^ξ/2 , showing the scaling behaviour of the distribution width. The population density ρ_q(t)=N(t)/ℓ_q^d , increases as the fractal dimension decreases. The area of the city is A_q=ℓ_q^d and scales in the same way as the linear length, i.e., A_q=A_^ξ/2. But A_∝ N, therefore A_q ∝ N^ξ/2 . By comparing the expression above with the fundamental allometry, it results that the infrastructure scaling exponent is β = ξ/2. The dynamic theory in a fractal space yields consistent beha­viour in linear scales, as can be observed by comparing Eqs. (<ref>) and (<ref>). Both ℓ_q and σ(q) scale according to the same power-law. Note that this scaling is valid for any time t 0, indicating that it is a feature of the fractal space that induces non-local correlations into the evolution of the system, thereby leading to non-additive statistics. It is worth understanding the role of the parameter ℓ_q in the dynamic process. It was named the characteristic linear size of the fractal space because it represents the scale of the fractal space with dimensional gap δ d_f. It imprints the same power-law trend to the distribution width, therefore it is through ℓ_q that the system inherits the scaling properties observed in the dynamical parameter σ(t). Using Eqs (<ref>) and (<ref>), it results that the characteristic linear size of the system varies with q and the fractal dimension gap as a power-law with exponent β=ξ/2=1-d/2(q-1)=1-d- d_f/2 , where β is the scaling exponent of the linear size of the dynamic distribution, q is the entropic index of the Tsallis Statistics, and d_f being the fractal dimension. By using d = 2, it results from Eq. (<ref>) that β = d_f/2. As we will discuss below, a reasonable value for the city's fractal dimension is d_f = 1.7, resulting in β = 0.85. We will see that this value for the scaling exponent is also in good agreement with empirical findings. The results obtained above show that starting from the dynamics on a fractal space, in a process that evolves diffusively under harmonic attractive forces, the sub-linear behaviour of the urban infrastructure is obtained. This is in contrast with the observed for the FPE solutions, corresponding to the Boltzmannian systems, which present a linear relation between urban area and population size. This behaviour is recovered when the d_f → 2. Thus, the fundamental allometric relation for cities is reproduced in an approach that considers only the diffusive behaviour in a fractal space. The city is self-organized to be more efficient in the space occupation, sharing more area of the city than would happen in a process governed by a standard Fokker-Planck dynamics, in which case the area increases linearly with the population size. The central result in the present work is Eq. (<ref>), which establishes the relations among the entropic index, the fractal dimension and the scaling exponent. All these quantities can be determined by independent methods so, at this point, a set of values for those parameters found in the Literature will be discussed. A more constrained analysis will be performed later on. The value for the parameter q appearing in the PPE can be calculated from analysis of information diffusion in urban areas and of epidemiologic analysis <cit.>. The number of contacts per individual n_c was associated with the entropic index by n_c=(q-1)^-1 <cit.>. 
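The chain from the fractal dimension to the entropic index and to the allometric exponent, together with the relaxation of the width and of the population towards the stationary regime, can be traced with a few lines of Python; the parameter values below (d = 2, d_f = 1.7, γ_2 = 1) are illustrative, and only the ratios σ(t)/σ_∞ and N(t)/N_∞ are computed so that the length scale ℓ_q drops out.

import numpy as np

d, d_f, gamma2 = 2, 1.7, 1.0            # embedding dimension, fractal dimension, drift parameter
q = 2.0 - d_f / d                        # entropic index, q = 2 - d_f/d
xi = 2.0 - d * (q - 1.0)                 # xi = 2 - d(q-1); equals d_f when d = 2
beta = xi / 2.0                          # allometric exponent, beta = 1 - (d - d_f)/2
print(f"q = {q:.3f}, xi = {xi:.3f}, beta = {beta:.3f}")   # -> q = 1.150, beta = 0.850

t = np.linspace(0.0, 5.0 / gamma2, 6)
sigma_ratio = (1.0 - np.exp(-xi * gamma2 * t)) ** (1.0 / xi)   # sigma(t)/sigma_inf
N_ratio = sigma_ratio ** d                                      # N(t)/N_inf
for ti, s, n in zip(t, sigma_ratio, N_ratio):
    print(f"t = {ti:4.1f}   sigma/sigma_inf = {s:.3f}   N/N_inf = {n:.3f}")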
Studies about the number of contacts show that one can consider the range 3<n_c<10 corresponding to the values 1/10<q-1<1/3. <cit.>. Studies on the geographic shape of cities globally <cit.> provide city dimensions typically within the range of 1.4< d_f <1.9, which corresponds, according to Eq. (<ref>), to the range 1/20<q-1<3/10. A systematic study of cities over all continents concluded that the scaling exponents lie within the range 2/3<β<1 <cit.>, which corresponds to the range 0<q-1<1/3. Therefore, considering all the independent information on the range of the parameter q, one can estimate that 0<q-1<1/3. So far, we have shown that considering the average values for the fractal dimension and the scaling exponent of the cities around the world, Eq. (<ref>) works properly. A more restrictive test is to observe the description of that formula to specific cities for which both the fractal and the scaling exponent are obtained. § A CASE STUDY: BRAZILIAN CITIES As a case study, the verification of the theoretical relations derived here is done by studying the observed data of Brazilian cities. For a set of cities with a population larger than three hundred thousand, the allometric exponent β was obtained by using information on the population size and the urban area. In addition, for the same cities, the fractal dimension of the urban area, d_f, is obtained by a process that considers the spatial distribution of crossing of roads in the city. The data used here for the evaluation of the allometric exponent is obtained from Refs. <cit.>. The data used to analyze the urban boundaries of the cities is taken from  <cit.>. Before calculating the fractal dimension, the data is processed by extracting the road intersections using the Python package  <cit.>. For calculating the fractal dimensions, the box-counting method is applied to the road intersections by defining a lattice. This method initially defines a L × L square with L ∼√(A), where A is the urban area of the city. The square is centred at the geometric centre of the road intersections of the city and encloses all the city's road intersections. The fractal dimension is obtained by counting those boxes with side ℓ that contain at least one road intersection inside, N_ℓ. The process is repeated iteratively for boxes with size ℓ_i=L/2^i , with i ∈ [2,8], a range that was found suitable for the set of cities considered here. The fractal dimension d_f is obtained by using the formula d_f=log(N_ℓ_i)/log(ℓ_i) . Typical plots of N(ℓ) vs ℓ are shown in Fig. <ref>. In the determination of d_f, only the linear region of each plot was considered, following the recommendations in Ref. <cit.>. As reported in the left panel of Fig. <ref>, the allometric relation analysis gives β=0.80 ± 0.03, which lies within that range observed for cities around the world. The distribution of the fractal dimension is reported in the histogram shown in the right-hand panel of Fig. <ref>, where one can see an approximately normal distribution with the average at ⟨ d_f ⟩=1.55 ± 0.11, which falls in the range observed for cities worldwide. Eq. (<ref>) represents the central result of the present work. It gives that ⟨ d_f ⟩/(2 β)=0.97 ± 0.07, in agreement with the expected value and confirming that the theoretical approach can correctly predict the relation between the allometric exponent and the city's fractal dimension. A more detailed analysis can be performed by considering the fractal dimension results for each of the cities in the present study. 
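Since the road-intersection extraction step is not reproduced here, the following sketch applies the box-counting procedure described above to a synthetic planar point set; the helper name and the test point cloud are ours, and the finest grids are dropped from the fit when there are far fewer points than boxes, mirroring the restriction to the linear region of the log-log plot.

import numpy as np

def box_counting_dimension(points, i_range=range(2, 9)):
    # points: (n, 2) array of planar coordinates (e.g. road intersections)
    pts = np.asarray(points, dtype=float)
    lo = pts.min(axis=0)
    L = (pts - lo).max()                       # side of the bounding square
    sizes, counts = [], []
    for i in i_range:
        n_cells = 2 ** i                       # boxes of side l_i = L / 2^i
        idx = np.floor((pts - lo) / L * n_cells).astype(int)
        idx = np.clip(idx, 0, n_cells - 1)     # points sitting on the upper edge
        counts.append(len(np.unique(idx, axis=0)))
        sizes.append(L / n_cells)
    # d_f is minus the slope of log N_l versus log l
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

# Synthetic check: points filling a disc uniformly should give d_f close to 2.
rng = np.random.default_rng(0)
r, phi = np.sqrt(rng.uniform(0.0, 1.0, 20000)), rng.uniform(0.0, 2.0 * np.pi, 20000)
pts = np.c_[r * np.cos(phi), r * np.sin(phi)]
print("estimated d_f:", round(box_counting_dimension(pts, i_range=range(2, 7)), 2))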
The left panel in Fig. <ref> displays the fractal dimension as a function of the rescaled inverse population size. The rescaling is done by using the critical exponent, z=0.305, that maximizes R^2 in the linear fitting, as can be seen in the right panel of this figure. The continuous line is a linear fit, which indicates a weak dependence of the fractal dimension on the population size. Observe that for N → ∞ (thermodynamical limit), the fractal dimension turns out to be d_f=1.89 ± 0.03, showing that even for extremely large populations the urban area remains a fractal structure. Interestingly, the asymptotic value of d_f agrees with the fractal dimension of a percolation cluster in a two-dimensional space, which is 91/48 <cit.>. The left panel in Fig. <ref> displays the ratio d_f_i/(2 β), where the fractal dimension of the ith city is used instead of the average value. From the theory, all data points are expected to spread around unity. Observe that the weak dependence of the ratio d_f_i/(2 β) on the population N is similar to that of d_f on the population size. It corroborates that the fractal dimension is not a truly universal aspect of the cities but shows a weak dependence on the population size <cit.>. According to Eq. (<ref>), β should also present a dependence on the city's population, but in the present analysis a constant value was adopted. Such weak dependence of d_f could be the cause of the slightly asymmetric distribution in Fig. <ref>, but the present analysis is not intended to give a definite answer in this regard. It must be emphasized that the present approach to the cities' allometric relation is rather different from the existing models, although it leads to similar results. It is completely based on the diffusion process in fractal spaces, avoiding assumptions about individual behaviour. On the contrary, Bettencourt's <cit.> and Ribeiro et al.'s <cit.> models are strongly based on the assumption of efficiency determining the approximate allometric exponent but need the introduction of a fractal dimension. The Louf-Barthelemy model <cit.> is based on traffic and commuting time. Bettencourt's, Ribeiro et al.'s, and Louf-Barthelemy's models give the same formula relating the allometric exponent and the fractal dimension, namely, β = d_f/(d_f+1). Comparing with Eq. (<ref>) for d=2, giving β = d_f/2, and considering the expected value for the fractal dimension, 1.4 < d_f < 1.9, it follows that the present model will always give an exponent larger than that given by the other models discussed here. In the left panel of Fig. <ref>, the accuracy of the prediction given by Eq. (<ref>) is compared with that of the present approach, given by Eq. (<ref>). A significant deviation from the expected unit value is observed for the case of Eq. (<ref>), showing that the fractal approach is more accurate for the cities used in the present study. The right panel in Fig. <ref> displays the behaviour of the relevant ratios for the different models as a function of the population size. § CITY'S ATTRACTION POTENTIAL There is another aspect of the dynamics approach that deserves further consideration. The relation between the occupied area and the population size involves, beyond the fractal aspects of the city, socioeconomic factors that depend on the attractiveness of the city, which motivates the population to move close to its centre. In the dynamical approach employed in the present work, this is controlled by the potential associated with the parameter γ_2.
Hence, this work considers the general case of a power-law attractive force of the form γ_2 r^ω replacing the harmonic potential. The stationary distribution has a q-exponential form, being given by f_q(𝐫)= 1/N(t) e_q[- 1/2( r/σ_∞,ω)^2α] , where α = 1 + ω/2 . The stationary width is given by σ_∞,ω≡ℓ_q,ω κ^1/(αν) , κ≡α (2-q) (2πχ_q,ω)^d/2(q-1)B/γ_2 ℓ_q,ω^2α , with χ_q,ω≡1/2( 2/q-1)^1/α( Γ( 1 + d/2α) /Γ( 1 + d/2) Γ( 1/q-1 - d/2α) /Γ( 1/q-1) )^2/d , and ν≡ 2 - d/α (q-1) . The population size results in N_∞,ω = ( √(2πχ_q,ω)σ_∞,ω/ℓ_q,ω)^d . Following the same reasoning used above for the case ω = 1, it is found that the population size will be independent of q if ℓ_q,ω∝ℓ_^ν/2 , where ℓ_≡ℓ_q=1,ω=1 corresponds to the case where the city area is proportional to the population size. This scaling property leads to A_q,ω = ℓ_q,ω^d =A_^ν/2, therefore A_q,ω∝ N^ν/2 . For a given scaling exponent and fractal dimension, the power-law exponent α can be easily obtained from Eq. (<ref>), resulting α =d-d_f/2 (1 -β) . The parameter α can be useful to understand the few cases where β≲ 2/3, corresponding to q-1 ≳ 1/3. The value for α was verified for the set of Brazilian cities used in the case study presented here. The result, as observed in the left panel in Fig. <ref> shows that ⟨α⟩ = 1.1 ± 0.3 is a value that agrees with the harmonic potential adopted in the previous sections. However, for a constant value β, the parameter α varies with the population size. In the right panel of Fig. <ref>, the value of the parameter α is plotted against the rescaled inverse population size. The rescaling exponent is obtained in the same way as indicated previously, and because α has a linear dependence on d_f, the critical exponent for the rescaling is the same, z=0.305. Observe that for N →∞, α→ 0.28 ± 0.06, indicating that even in the thermodynamical limit the city's attractive potential remains a power-law function of the distance to the centre of the urban area. The plots for this study are presented in the Supplementary Material. § CONCLUSIONS In summary, the dynamical approach used to characterize urban scaling underscores the significance of the Plastino-Plasti­no Equation in addressing the anomalous diffusion linked to the intricacies of urban life and the fractal geometry of cities. The connections between this nonlinear equation, fractal and fractional calculus, and nonextensive statistics yield Eq. (<ref>). This equation establishes a relationship between the fractal dimension of the city, the entropic index, and the scaling exponent. While the fractal dimension elucidates the complex geometry of the urban space <cit.>, the entropic index can be linked to the social activities of individuals <cit.>, providing a comprehensive understanding of the correlations between infrastructure and socioeconomic behaviour that shape urban life, aligning with previous works <cit.>. The dynamics in fractal spaces provide only two parameters for describing the system behaviour, namely, the entropic index and the attractive potential exponent. The latter represents the attractiveness of the city. The interplay between these two parameters can precisely account for the scaling exponent observed in any city, highlighting the intricate relationships between infrastructure-related issues and social interactions. The results indicate that a weak dependence of both fractal dimension and allometric exponents with the population size is necessary. 
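The closed-form relations of the last two sections, β = d_f/2 from the present approach (for d = 2), β = d_f/(d_f+1) from the alternative models cited above, and α = (d - d_f)/(2(1-β)) for the attraction exponent, can be tabulated directly; the values below use the fitted β = 0.80 of the Brazilian sample and a few representative fractal dimensions, purely as an illustration of the formulas.

d, beta_obs = 2, 0.80                          # embedding dimension and fitted allometric exponent
for d_f in (1.40, 1.55, 1.70, 1.89):
    beta_fractal = d_f / 2.0                   # present fractal-diffusion prediction (d = 2)
    beta_other = d_f / (d_f + 1.0)             # Bettencourt / Ribeiro et al. / Louf-Barthelemy form
    alpha = (d - d_f) / (2.0 * (1.0 - beta_obs))
    print(f"d_f = {d_f:.2f}:  d_f/2 = {beta_fractal:.3f}   "
          f"d_f/(d_f+1) = {beta_other:.3f}   alpha = {alpha:.2f}")
# d_f = 1.55 gives alpha ~ 1.1, close to the harmonic case, while the asymptotic
# d_f = 1.89 gives alpha ~ 0.28, matching the large-N limit quoted in the text.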
Overall, the dynamical approach gives a fair prediction of the stationary population distribution of cities. For the set of Brazilian cities used in the case study presented in this work as a test of the predictions of the fractal diffusion approach, the results obtained are better than those of the best-known models in the Literature <cit.>. However, in all cases the fundamental origin of the fractal space remains obscure, but the present theoretical approach shows that the relation between the fractal dimension and the allometric exponent is associated with basic geometric and diffusion aspects. The present model also allows for studying the temporal evolution of the population distribution in the urban area, offering new methods for testing the predictions given by the theoretical approach. This work opens up the possibility of addressing the dynamical aspects of cities and offers new perspectives for understanding the origins of fractality in urban life. Systematic analyses of the two parameters provided by the theory can elucidate the complex connections between social behaviour and the city's design. The methods presented here can be of help in the design of growing infrastructure in cities <cit.> and in the promotion of economic growth <cit.>. Future research may examine the relationship between the number of contacts and the shared area of the city associated with the scaling exponent. Investigating the determining features of the human mind that underlie the emergence of fractal behaviour remains an intriguing area for scientific development. The Science of Cities is necessarily interdisciplinary, encompassing physical, mathematical, sociological and philosophical aspects. In the latter areas, progress has been made in understanding the implications of a complex approach to social behaviour in modern society <cit.>. § ACKNOWLEDGEMENTS A.D. is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq-Brazil), grant 306093/2022-7. The work of E.M. is supported by the project PID2020-114767GB-I00 and by the Ramón y Cajal Program under Grant RYC-2016-20678 funded by MCIN/AEI/10.13039/501100011033 and by “FSE Investing in your future”, by Junta de Andalucía under Grant FQM-225, and by the “Prórrogas de Contratos Ramón y Cajal” Program of the University of Granada. C.T. is partially supported by the Brazilian Agencies CNPq and Faperj. FLR thanks CNPq (grant numbers 403139/2021-0 and 424686/2021-0) and Fapemig (grant number APQ-00829-21) for financial support. § SUPPLEMENTARY MATERIAL (WILL NOT APPEAR IN THE MAIN TEXT) The correlations in the city's structure imply the use of the PPE instead of the FPE. The solutions of the former equation are q-Gaussians, in contrast with the solutions of the latter, which are Gaussian distributions. Fig. <ref> shows the q-Gaussian distributions for different values of q and width σ in the stationary regime, obtained by using Eq. (<ref>) for a fixed population size, N. The case q=1 corresponds to the standard FPE solution, which is a Gaussian distribution, and it is compared with the PPE solutions for two different values of q. The plots show that in the central region around r=0 the q-Gaussians present a more pronounced peak, which results from the narrower distribution width due to fractal effects in the city dynamics. However, the q-Gaussians are fat-tailed, resulting in larger populations at the borders of the city.
This means that the area occupied by the population in the city is smaller than would be the case if the population were randomly distributed around the centre. In terms of the entropic index, the result evidences the non-additivity of the city configurations. The value q ≠ 1 explains why, upon adding a group of individuals to an existing city, the area of the new city will not be the simple sum of the areas initially occupied by each group. Fig. <ref> shows a sample of the 319 plots used to obtain the fractal dimensions of the Brazilian cities considered in the case study.
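To make the qualitative comparison in Fig. <ref> easy to reproduce, the following short Python sketch (ours, not part of the original analysis) evaluates the stationary profile e_q[-(1/2)(r/σ)^{2α}] for several values of q using the standard Tsallis q-exponential; the chosen values of q, σ and α are purely illustrative and are not fitted to the Brazilian data set, and the function names are our own.

import numpy as np

def e_q(x, q):
    # Tsallis q-exponential: e_q(x) = [1 + (1 - q) x]_+^{1/(1-q)}, with e_1(x) = exp(x)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def stationary_profile(r, q, sigma, alpha=1.0):
    # unnormalised radial profile f_q(r) proportional to e_q[-(1/2)(r/sigma)^(2*alpha)]
    return e_q(-0.5 * (r / sigma) ** (2 * alpha), q)

r = np.linspace(0.0, 5.0, 501)
for q in (1.0, 1.3, 1.6):           # q = 1 is the Gaussian (FPE) case
    f = stationary_profile(r, q, sigma=1.0)
    f /= f.sum() * (r[1] - r[0])    # normalise along the radial coordinate for comparison
    print(f"q = {q}: f(0) = {f[0]:.3f}, f(4) = {f[400]:.3e}")

Running the loop shows the behaviour described above: for q > 1 the normalised profile is more peaked at r = 0 and decays only as a power law in the tail, whereas the q = 1 profile decays like a Gaussian.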
http://arxiv.org/abs/2407.13641v1
20240718161925
Optimal rates for estimating the covariance kernel from synchronously sampled functional data
[ "Max Berger", "Hajo Holzmann" ]
math.ST
[ "math.ST", "stat.ME", "stat.TH" ]
Optimal rates for estimating the covariance kernel from synchronously sampled functional data Max Berger and Hajo Holzmann[Corresponding author. Prof. Dr. Hajo Holzmann, Department of Mathematics and Computer Science, Philipps-Universität Marburg, Hans-Meerweinstr., 35032 Marburg, Germany] Department of Mathematics and Computer Science Philipps-Universität Marburg {mberger, holzmann}@mathematik.uni-marburg.de ========================================================================================================================================================================================================================================================================================================================================= § ABSTRACT We obtain minimax-optimal convergence rates in the supremum norm, including infor-mation-theoretic lower bounds, for estimating the covariance kernel of a stochastic process which is repeatedly observed at discrete, synchronous design points. In particular, for dense design we obtain the √(n)-rate of convergence in the supremum norm without additional logarithmic factors which typically occur in the results in the literature. Surprisingly, in the transition from dense to sparse design the rates do not reflect the two-dimensional nature of the covariance kernel but correspond to those for univariate mean function estimation. Our estimation method can make use of higher-order smoothness of the covariance kernel away from the diagonal, and does not require the same smoothness on the diagonal itself. Hence, as in <cit.> we can cover covariance kernels of processes with rough sample paths. Moreover, the estimator does not use mean function estimation to form residuals, and no smoothness assumptions on the mean have to be imposed. In the dense case we also obtain a central limit theorem in the supremum norm, which can be used as the basis for the construction of uniform confidence sets. Simulations and real-data applications illustrate the practical usefulness of the methods. Keywords. Covariance kernel; functional data; optimal rates of convergence; supremum norm; synchronously sample data § INTRODUCTION Mean function and covariance kernel are the two most-important parameters of a stochastic process having finite second moments. Estimates of the covariance kernel of a repeatedly observed stochastic process allow to assess its variability, in particular through the associated principle component functions <cit.>. Further, the smoothness of the paths of a Gaussian process is closely related to the smoothness of the covariance kernel on the diagonal <cit.>. Therefore, estimates of the covariance kernel allow to draw conclusions on the path properties of the observed process. In the setting of functional data analysis, stochastic processes are repeatedly observed at discrete locations and potentially with additional observation errors <cit.>. A deterministic, synchronous design refers to fixed, non-random observation points which are equal across functions. It typically arises for machine recorded data, such as weather data at regular time intervals at weather stations. In contrast, in random, asynchronous designs observation points are realizations of independent random variables. Estimating the covariance kernel has been intensely investigated in the literature for both types of designs. For the random asynchronous design, <cit.> obtain rates of convergence in L_2 and also in the supremum norm under Hölder smoothness assumptions on the covariance kernel. 
<cit.> derive optimal rates in the random design setting under the assumptions that the sample paths of the process are contained in a particular reproducing kernel Hilbert space. For fixed synchronous design there are fewer contributions. <cit.> give a consistency result in the supremum norm, and <cit.> presents rates in L_2 and in the sup-norm for spline estimators over Hölder smoothness classes. In the present paper we comprehensively analyse estimates of the covariance kernel for fixed synchronous design. We obtain optimal rates of convergence in a minimax sense over Hölder smoothness classes, including information theoretic lower bounds. In our analysis we focus on the supremum norm instead of the simpler L_2 norm, since it corresponds to the visualization of the estimation error and forms the basis for the construction of uniform confidence bands. In particular, for dense design we obtain the √(n) rate of convergence in the supremum norm and also the associated central limit theorem without the additional logarithmic factors, which are prevalent in the literature <cit.>. Our method can make use of higher-order smoothness of the covariance kernel away from the diagonal, and does not require the same amount of smoothness on the diagonal itself. Thus, as in <cit.> our results apply to covariance kernels of processes with relatively rough, in particular non-differentiable sample paths. Notably, our estimator does not require mean estimation and forming residuals. Hence virtually no smoothness assumptions on the mean function are required, in contrast to e.g. <cit.>. In the transition from dense to sparse design the optimal rates that we obtain do not reflect the two-dimensional nature of the covariance kernel but correspond to those for univariate mean function estimation as presented in <cit.>. A similar phenomenon has been observed in <cit.> but for somewhat different settings. The matching lower bounds, in particular in the sparse-to-dense transition regime, require novel and technically involved arguments. The paper is organized as follows. In Section <ref> we introduce the model as well as our estimator, a modification of the local polynomial estimator restricted to observation pairs above the diagonal. Section <ref> contains our main results: upper and matching lower bounds for the rate of convergence in the supremum norm. These are complemented by a central limit theorem in the space of continuous functions in Section <ref>. Section <ref> contains simulations and a real-data application. In the simulations in Section <ref> we investigate the effect of the choice of the bandwidth and propose a cross-validation scheme for bandwidth selection. Further we simulate the contribution to the sup-norm error of various components in the error decomposition, thus illustrating our theoretical analysis. Moreover we demonstrate numerically the need to leave out empirical variances of the data points and to restrict smoothing to the upper triangle for covariance kernels which are not globally smooth. In Section <ref> we provide an application to a data set of daily temperature series, and discuss how estimates of the standard deviation curves and the correlation functions vary over the year. Section <ref> concludes. Proofs of the main results are gathered in Section <ref>. The supplementary appendix contains further technical material. 
An R package can be found on https://github.com/mbrgr/biLocPolGithub, together with the https://github.com/mbrgr/Optimal-Rates-Covariance-Kernel-Estimation-in-FDA.git-Code used for the data in this paper. We conclude the introduction by introducing some relevant notation. For sequences (p_n), (q_n) tending to infinity, we write p_n ≲ q_n, if p_n =q_n, p_n ≃ q_n if p_n ≲ q_n and q_n≲ p_n, and p_n q_n, if p_n = o(q_n). O_, h ∈ (h_1, h_0] denotes stochastic convergence uniformly for h ∈ (h_1 , h_0]. Finally, · _∞ denotes the supremum norm, where the domain becomes clear from the context. § THE MODEL, SMOOTHNESS CLASSES AND LINEAR ESTIMATORS Let the observed data (Y_i, j, x_ j) be distributed according to the model <cit.> Y_i, j= μ( x_ j) + Z_i( x_ j) + _i, j , i=1,…,n , j = 1,…,p , where Y_i, j are real-valued response variables and the x_j ∈ are known non-random design points which are assumed to be ordered as x_1 < … < x_p. The processes Z_1,…,Z_n are i.i.d. copies of a mean-zero, square integrable random process Z with an unknown covariance kernel Γ(x,y) = [Z(x) Z(y)], the estimation of which we shall focus on in this paper. The errors _i,j are independent with mean zero and are also independent of the Z_i, and the mean function μ is unknown. The index set of Z is assumed to be an interval which we take as [0,1] in the following. The number of design points p p_n as well as the design points x_j= x_j(p,n) themselves depend on the number n of functions which are observed. Following the ideas in <cit.>, by symmetry Γ(x,y) = Γ(y,x) and it suffices to estimate Γ on the upper triangle T {(x,y)^⊤∈ [0,1]^2 | x ≤ y}. As estimator for Γ at (x,y) ∈ T we consider xyh 1/n-1∑_i=1^n ∑_j<k^p xyh (Y_i,j Y_i,k - Y̅_n, jY̅_n, k), where Y̅_n,j = n^-1∑_i=1^n Y_i,j, j ∈{1, …, p}, h>0 is a bandwidth parameter and xyh = w_j,k;p(x,y;h;x_1,…,x_p) are weights, assumptions on which are listed in Section <ref>. Note that we leave out the diagonal (j = k) in order to avoid bias induced by the squared errors _i, j^2, and furthermore that following <cit.> we build the estimator with observation pairs above the diagonal j < k to avoid the potential lower smoothness of the covariance kernel on the diagonal. Let us also stress that the estimator does not require an estimator μ̂_n of the mean function μ to form residuals Y_i,j - μ̂_n(x_j). Indeed, the observed covariance matrix and hence xyh are independent of μ, hence no assumptions on μ are required, see also the discussion in Remark <ref>. For x^', y^'∈ [0,1] with x^'>y^' we simply set Γ̂_n(x^',y^';h) : = Γ̂_n(y^',x^';h). Then Γ̂_n(·; h) - Γ_∞ = sup_x,y ∈ [0,1]|Γ̂_n(x,y;h) - Γ̂_n(x,y)| = sup_(x,y) ∈ T|Γ̂_n(x,y;h) - Γ̂_n(x,y)|, so that we may focus on the analysis of the estimator on T. [Local polynomial estimator] We shall show that restricted bivariate local polynomial estimators of order m ∈_0, that is the first coordinate (ϑ̂(x,y))_1 of the vector ϑ̂(x,y) = _ϑ∑_j<k^p ( z_i,j;n - ϑ^⊤ U_m( [ (x_j -x)/h; (x_k-y)/h; ]))^2 K( [ (x_j -x)/h; (x_k-y)/h; ]), x ≤ y, where we denote z_i,j;n = 1/n-1∑_i=1^n(Y_i,j Y_i,k - Y̅_n, jY̅_n, k) are linear estimators under mild assumptions and satisfy our requirements in Assumption <ref> in the next section. 
Here, K is a bivariate non-negative kernel, h>0 is a bandwidth and U_m^2 →^N_m with N_m (m+1)(m+2)/2 is a vector containing the monomials up to order m, with the constant as first entry, that is U_m(u_1,u_2)(1, P_1(u_1, u_2), …, P_m(u_1,u_2))^⊤, u_1,u_2∈[0,1] , where P_l(u_1, u_2)(u_1^l/l!, u_1^l-1u_2/(l-1)!, u_1^l-2u_2^2/(l-2)!2!,…, u_2^l/l!), u_1,u_2 ∈ [0,1] . Now let us turn to the assumptions that we impose on the process Z and its covariance function. A function f T → is Hölder-smooth with order γ>0 if for all indices β=(β_1,β_2) ∈_0^2 with β = β_1 + β_2 ≤⌊γ⌋ =max{ k∈_0 | k<γ} k, the partial derivatives D^β f( x) = ∂_1^β_1 ∂_2^β_2 f(x) exist and if the Hölder-norm given by f_ H, γmax_β≤ ksup_w ∈ TD^β f(w)+ max_β=ksup_v, w ∈ T, v≠ w|D^β f(v)-D^β f(w)|/v-w_∞^γ-k is finite. Define the Hölder class with parameters γ>0 and L>0 on T by H_T(γ, L) = {f T →|f_ H, γ≤ L }. Note that a symmetric function f on [0,1]^2 can be contained in H_T(γ, L) for γ >1 even if f is not partially differentiable on the diagonal of [0,1]^2 and hence not Hölder smooth of order greater than 1 on [0,1]^2. Roughly speaking, smoothness of the covariance kernel of a centered Gaussian process of order 2k in the neighborhood of the diagonal of [0,1]^2 implies smoothness of the paths of order k <cit.>. Thus, as stressed in <cit.> processes with relatively rough sample paths such as the Brownian motion do not have covariance kernels which are smooth on [0,1]^2. However, as is the case for Brownian motion, these kernels can still be smooth on T, and our estimator will be able to make use of higher order smoothness of Γ restricted to T. For the process Z we further assume that [Z(0)^4] < ∞ and that the paths are Hölder continuous of some potentially low order: there exists 0 < β≤ 1 and a random variable M = M_Z>0 with [M^4] < ∞ such that |Z( x)-Z( y)| ≤ M | x- y|^β, x, y ∈ [0,1] almost surely. Given C_Z>0 and 0 < β_0 ≤ 1, we consider the class of processes P (γ;L,β_0, C_Z) = { Z: [0,1] → centered random process|∃ β∈ [β_0,1] and M s.th. [M^4] + [Z( 0)^4]≤ C_Z , (<ref>) holds and Γ_| T∈ H_T(γ,L)}. [Gaussian processes] If Z has covariance function Γ which satisfies Γ_| T∈ H_T(γ,L), then _Z(x,y) [(Z(x) - Z(y))^2] ≤ C(γ,L) |x-y|^min(γ,1). Hence Theorem 1 in <cit.> implies that for a (centered) Gaussian process Z with Γ_| T∈ H_T(γ,L) the sample paths satisfy (<ref>) for each β < min(γ,1)/2, and the fourth moment of the Hölder constant M can be bounded in terms of β, γ and L. Since the rate of convergence for estimating Γ will not depend on β_0, our results therefore also apply to the class P_ G(γ) P_𝒢 (γ;L) = { Z: [0,1] → centered Gaussian process|Γ_| T∈ H_T(γ,L)}. § OPTIMAL RATES OF CONVERGENCE FOR COVARIANCE KERNEL ESTIMATION AND ASYMPTOTIC NORMALITY §.§ Upper bounds To derive the upper bounds consider the following assumptions on the design and the distribution of the errors. [Design Assumption] There is a constant > 0 such that for each x ∈ T and h>0 we have that { j ∈{1, …, p }| x_j ∈ [x-h, x+h]} ≤ p h . [Sub-Gaussian errors] The random variables {_i, j| 1 ≤ i ≤ n, 1 ≤ j ≤ p} are independent and independent of the processes Z_1,…,Z_n. Further we assume that the distribution of _i, j is sub-Gaussian, and setting σ_ij^2 [_i, j^2] we have that σ^2 sup_n max_ i,jσ_ij^2 < ∞ and that there exists ζ≥ 1 such that ζ^2σ_i, j^2 is an upper bound for the sub-Gaussian norm of _i, j. 
For the weights of the linear estimator in (<ref>) we require the following properties, which are checked for the restricted form of the local polynomial weights of Example <ref> in Section <ref> in the supplementary appendix. There is a c>0 and a h_0>0 such that for sufficiently large p, the following holds for all h ∈ (c/p_, h_0] for constants , >0 which are independent of n, p,h and (x, y) ∈ T. * The weights reproduce polynomials of a degree ζ≥ 0, that is for x,y ∈ T, ∑_ j<k^ p xyh =1 , ∑_j<k^ p( x_ j- x)^r_1(x_k - y)^r_2 xyh = 0 , for r_1,r_2 ∈_0 s.t. r_1 + r_2 ≤ζ. * We have xyh = 0 if max(x_j-x, x_k - y)> h with (x,y) ∈ T. * For the absolute values of the weights max_1≤ j<k≤ p| xyh| ≤ (p h)^-2, x ≤ y. * For a Lipschitz constant > 0 it holds that xyh - x^'y^'h≤/( p h)^2(max(x-x^', x-y^')/h∧ 1), x≤ y, x^'≤ y^' . Consider model (<ref>) under Assumptions <ref> and <ref>. Suppose that for given γ>0 the weights in the linear estimator Γ̂_n(·; h) for the covariance kernel Γ in (<ref>) satisfy Assumption <ref> with ζ = γ. Then for 0 < β_0 ≤ 1 and L, C_Z>0 we have that sup_h ∈ (c/p, h_0] sup_Z ∈ P (γ;L,β_0, C_Z) a_n,p,h^-1[Γ̂_n(·; h) - Γ_∞] = O(1) , where a_n,p,h = max( h^γ, ( log(h^-1)/n p h)^1/2, n^-1/2) . Hence by setting h^⋆∼max( c/p, (log(n p)/n p)^1/2γ + 1) we obtain sup_Z ∈ P (γ;L,β_0, C_Z)[Γ̂_n(·; h^⋆) - Γ_∞] = O (p^-γ + (log(n p)/n p)^γ/2γ + 1 + n^-1/2) . Furthermore, in both upper bounds the class = P (γ;L,β_0, C_Z) can be replaced by P_ G(γ) = P_𝒢 (γ;L) in (<ref>). The proof is given in Section <ref>. The rate in (<ref>) consists of a discretization bias p^-γ, the 1/√(n) rate arising from the contribution of the processes Z_i as well as the intermediate term involving the errors which is specific to the use of the supremum norm. Overall (<ref>) is analogous to the rate obtained for the mean function in one dimension d=1 in <cit.>. Somewhat surprisingly, the fact that Γ̂_n(x,y; h) is a bivariate function neither influences the rate arising from the discretization bias p^-γ nor that from the observation errors, (log(n p)/(n p))^γ/(2γ + 1), where a factor 2γ + 2 would be expected in the denominator of the exponent. This seems to be an improvement of the rates of covariance kernel estimation obtained in <cit.>, and is reminiscent of the one-dimensional rates obtained in <cit.> for the principal component functions. In particular, the discussions regarding the regimes in the rate (<ref>) as well as a choice of h independent of the smoothness γ from <cit.> apply to this setting as well. In Lemma <ref> in the appendix we derive an error decomposition of the form xyh -Γ(x,y) = ∑_j<k^p xyh [1/n∑_i=1^n ϵ_i,jϵ_i,k + higher order terms + Γ(x_j,x_k) - Γ(x,y) + 1/n∑_i=1^n (Z_i(x_j)ϵ_i,k + Z_i(x_k)ϵ_i,j) + 1/n∑_i=1^n(Z_i(x_j)Z_i(x_k) - Γ(x_j,x_k)) ] The first term in (<ref>) is bounded by h^γ by using the property <ref> of the weights and standard estimates. The third term in (<ref>) induces the 1/√(n) rate by using [n^-1∑_i= 1^n Z_i(·)Z_i(·) - Γ_∞] = O(n^-1/2) as well as the boundedness of sums of absolute values of the weights. The intermediate term in (<ref>) which captures the interplay between processes and errors gives the rate ( log(h^-1)/(n p h))^1/2, resulting in the overall bound (<ref>). The term in the line above (<ref>) involving products of errors is analyzed using the Hanson-Wright inequality and can be bounded by ( log(h^-1)/(n (p h)^2))^1/2, and since p^-1≲ h is negligible. A detailed proof is contained in Section <ref>. 
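For orientation, the rate in (<ref>) makes the dense regime explicit: the bias term satisfies p^{-γ} ≤ n^{-1/2} precisely when p ≥ n^{1/(2γ)}, and the error term satisfies (log(n p)/(n p))^{γ/(2γ+1)} ≤ n^{-1/2} precisely when p ≥ n^{1/(2γ)} log(n p). Hence, as soon as p ≳ n^{1/(2γ)} log(n p), the estimator with the choice h^⋆ attains the parametric rate n^{-1/2} in the supremum norm; for γ = 1 this amounts to p growing slightly faster than √(n) log(n). Up to the power of the logarithmic factor, this matches the condition p ≳ n^{1/(2γ)} log(n)^{1+2δ} imposed for the asymptotic normality result in Section <ref> below.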
The mean function μ cancels out when forming the estimator (<ref>). Therefore, no assumptions are required on μ, and we could of course include a sup_μ in the statements of the upper bounds. This is in contrast to most approaches in the literature which rely on an initial estimate of the mean and forming residuals <cit.>. However, this feature seems to be particular to the synchronous design that we consider. If design points are asynchronous (e.g. realizations of random time points), it appears that also estimating the mean cannot be avoided. We are not aware that the precise effect of mean function estimation on covariance kernel estimation, similar to nonparametric variance function estimation in nonparametric regression <cit.>, has been investigated in the literature. §.§ Lower Bounds Now let us turn to corresponding lower bounds. Intuitively since covariance kernel estimation should be at least as hard as mean function estimation, and since the rate in (<ref>) corresponds to the optimal rate for mean function estimation in one dimensions d=1 <cit.>, optimality of the rates is not surprising. However, the proofs in particular for the intermediate term in (<ref>) are much more involved than for the mean function. We require the following more restrictive design assumption. [Design] Let the points x_1,…,x_p be given by the equations ∫_0^x_jf(t) t=j-0.5/p , j=1,…,p , where f[0,1]→ is a Lipschitz continuous density that is bounded by 0<f_min≤ f(t) ≤ f_max < ∞ for all t ∈ [0,1]. In <cit.> it is shown that Assumption <ref> implies Assumption <ref>. Assume that in model (<ref>) the errors _i, j are i.i.d. 𝒩(0, σ_0^2) - distributed, σ_0^2 >0, and that the design points x_1,…,x_p satisfy Assumption <ref>. Then setting a_n,p = p^-γ + (log(n p)/n p)^γ/2γ + 1 + n^-1/2 we have that lim inf_n, p →∞ inf_Γ̂_n,p sup_Z ∈ P_𝒢(γ; L) a_n,p^-1 [Γ̂_n,p - Γ_∞] >0 , where the infimum is taken over all estimators Γ̂_n,p of Γ. The proof is provided in Section <ref>. To obtain lower bounds one needs to construct hypothesis functions with appropriate distance in the supremum norm but for which the associated distributions of the observations are sufficiently close in a suitable sense. For Gaussian distributions which differ only in location, one can conveniently use the Kullback-Leibler divergence since it amounts to the scaled squared Euclidean distance of the location parameters. For Gaussian distributions which differ only in scale the Kullback-Leibler divergence leads to a sub-optimal order |σ_1^2 - σ_2^2| instead of (σ_1^2 - σ_2^2)^2 which can be achieved e.g. by the Hellinger distance. <cit.> base their arguments for the lower bounds of the pointwise risk in nonparametric variance estimation on the Hellinger distance. However, for the supremum norm in our setting to achieve the logarithmic factor in (log(n p)/(n p))^γ/2γ + 1 in the lower bound requires an increasing number of hypotheses, and it seems that the Hellinger distance cannot be used. Therefore we develop novel arguments and work directly with a criterion based on the likelihood ratio as given in <cit.>. §.§ Asymptotic normality To derive the asymptotic normality of the estimator (<ref>) we need the following smoothness assumption on the forth moment function. [Forth moment function] The forth moment function R(x,y,s,t) [Z(x)Z(y)Z(s)Z(t)]-Γ(x, y)Γ(s, t), x,y,s,t ∈ [0,1] , of the process Z is Hölder smooth of some positive order: R ∈ H_[0,1]^2(ζ, L̃) for some 0<ζ≤ 1 and L̃ > 0. 
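For centered Gaussian processes this assumption is straightforward to verify: by Isserlis' theorem, E[Z(x)Z(y)Z(s)Z(t)] = Γ(x,y)Γ(s,t) + Γ(x,s)Γ(y,t) + Γ(x,t)Γ(y,s), so that R(x,y,s,t) = Γ(x,s)Γ(y,t) + Γ(x,t)Γ(y,s). Since Γ is bounded, R then inherits Hölder smoothness of order min(γ,1) whenever Γ is Hölder smooth of order γ on all of [0,1]^2 (and not merely on T); this holds, for instance, for the smooth kernel Γ̃ used in the simulations below.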
In model (<ref>) under Assumptions <ref> and <ref>, consider the linear estimator in (<ref>) with weights satisfying Assumption <ref> with ζ = γ. Further suppose that for some δ > 0 we have that p ≳ n^1/(2γ)log(n)^1+2δ. Then for all sequences of smoothing parameters h = h_n in H_n = [ log(n)^1+δ/p , n^-1/(2γ)log(n)^-δ] it holds that √(n) ( ĉ_n,h - Γ ) D→ G( 0, R), where G is a real-valued Gaussian process on [0,1]^2 with covariance operator R given in (<ref>). The proof is deferred to Section <ref> in the supplementary appendix. The asymptotic covariance operator R is the same as in the case of continuous and error-free observations, see e.g. <cit.>. § SIMULATIONS AND REAL-DATA ILLUSTRATION In this section we present simulation results for our methods and give a real-data application. First, in Section <ref> we illustrate the finite-sample effect of the choice of the bandwidth and propose and investigate a cross-validation procedure to select a bandwidth. Further, we simulate the size in the sup-norm of the terms in the error decomposition in (<ref>) on which the rate in Theorem <ref> relies. Finally, we compare the proposed estimator, which uses only the empirical covariances above the diagonal, to a more conventional bivariate local polynomial estimator which still leaves out the diagonal terms to reduce variability resulting from the errors but otherwise smooths over the diagonal. In Section <ref> we give an illustration using daily temperature curves in Nuremberg, in which we show how standard deviation and correlation functions resulting from our estimate of the covariance vary over the year. The R code for the simulations and the real-data example can be found in the GitHub repository https://github.com/mbrgr/Optimal-Rates-Covariance-Kernel-Estimation-in-FDA.git. The implementation of the calculation of the weights of the bivariate local polynomial estimator, and of the estimator itself, can be found in the package, which is also available on GitHub in the repository https://github.com/mbrgr/biLocPol.git. §.§ Simulations For most of the simulations we consider the Ornstein-Uhlenbeck process Z_t = σ ∫_0^t exp(- θ (t - s)) dB_s with parameters θ = 3 and σ = 2, and where (B_s)_s≥ 0 is a standard Brownian motion. It has covariance kernel given by Γ_OU(s,t) = σ^2/(2 θ) (exp(-θ |t - s|) - exp(-θ (s + t))) , which has a kink on the diagonal. The mean function μ in model (<ref>) is set to 0 since it is ancillary in forming the estimator, and the errors _i, j are centered, normally distributed with standard deviation σ_ϵ = 0.75. The grid points are equidistant at x_j = (j-1/2)/p, j=1, …, p. Figure <ref> contains two plots of the covariance kernel in (<ref>) together with a scatter plot (x_j,x_k,z_j,k) of the empirical covariances z_j,k = (n-1)^-1∑_ i = 1^n (Y_i,jY_i,k - Y̅_jY̅_k), 1 ≤ j, k ≤ p, of the observations, for a particular sample with p=40 and n=100. One observes that the covariance kernel (<ref>) is not smooth on the diagonal, and that the empirical covariances deviate strongly from the underlying covariance kernel on the diagonal due to the additional variance from the observation errors _i, j. For this sample, our variant of the local polynomial estimator (<ref>) of order m=1 (local linear) with bandwidth h=0.3, denoted as Γ̂_100,40^0.3,1, is displayed in Figure <ref> together with the true underlying covariance kernel.
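As a minimal illustration of this simulation setup, the following Python sketch (ours; the authors provide an R implementation in the biLocPol repository) generates data from the model with the Ornstein-Uhlenbeck process above, θ = 3, σ = 2, σ_ε = 0.75 and grid x_j = (j-1/2)/p, and computes the empirical covariances z_{j,k} with the diagonal entries left out, i.e. exactly the input to the estimator. The function names and the exact AR(1)-type recursion for the OU paths (which follows from the stochastic integral representation) are choices of this sketch.

import numpy as np

def simulate_ou_data(n=100, p=40, theta=3.0, sigma=2.0, sigma_eps=0.75, seed=0):
    # simulate Y_{i,j} = Z_i(x_j) + eps_{i,j} with Z_i an OU process started at 0
    rng = np.random.default_rng(seed)
    x = (np.arange(1, p + 1) - 0.5) / p          # design points x_j = (j - 1/2)/p
    dx = np.diff(np.concatenate(([0.0], x)))     # step sizes, first step from t = 0
    Z = np.zeros((n, p))
    z = np.zeros(n)
    for j in range(p):
        decay = np.exp(-theta * dx[j])
        innov_sd = sigma * np.sqrt((1 - np.exp(-2 * theta * dx[j])) / (2 * theta))
        z = decay * z + innov_sd * rng.standard_normal(n)   # exact OU transition over dx[j]
        Z[:, j] = z
    Y = Z + sigma_eps * rng.standard_normal((n, p))
    return x, Y

def empirical_covariances(Y):
    # empirical covariances z_{j,k}; the diagonal (j = k) is set to NaN and left out
    n = Y.shape[0]
    Ybar = Y.mean(axis=0)
    C = (Y.T @ Y - n * np.outer(Ybar, Ybar)) / (n - 1)
    np.fill_diagonal(C, np.nan)                  # variances are contaminated by the squared errors
    return C

x, Y = simulate_ou_data()
C = empirical_covariances(Y)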
For comparison in Figure <ref> we display the result for the estimator Γ̂^≠_n (x,y;h) 1/n-1∑_i=1^n ∑_j≠ k^p xyh (Y_i,j Y_i,k - Y̅_n, jY̅_n, k) , with local polynomial weights, which only excludes empirical variances but otherwise smooths over the diagonal. We again use m=1 and h=0.2, resulting in Γ̂_100,40^≠,0.2,1. This estimator displays a substantial bias due to the kink of the true covariance kernel along the diagonal. §.§.§ Bandwidth selection First we investigate the effect of bandwidth selection on the performance of our variant (<ref>) of the local polynomial estimator, where we restrict ourselves to order m=1, for sample size n=400 and p ∈{15, 25, 50, 75, 100}. For each value of p we use N = 1000 repetitions and calculate the supremum norm error of Γ̂_n,p for h varying over a grid of bandwidths up to 1. The results are displayed as curves in h in Figure <ref>. Similar results for n=50, n = 100 and n=200 are given in the supplementary appendix <ref> in Figure <ref>. One observes that smoothing is an important part of estimation but should not be overdone. In our setting, for n=400 the bandwidths that lead to the smallest overall error become slightly smaller with increasing p and are always between 0.2 and 0.4. Next we investigate a K-fold cross validation procedure for selecting h. We proceed as follows: The n observed curves are split randomly into K groups of approximately the same size. One of the groups is used as test data from which we calculate the empirical covariance matrix (Z_jk^test, r)_j,k = 1, …, p. The empirical covariance matrix (Z^train, -r_jk)_j,k = 1,…, p, r = 1,…, K, based on the remaining K-1 groups will be used as input data to our estimator. The procedure requires a grid of bandwidths h_l, l = 1, …, m and for every bandwidth h_l we evaluate the local polynomial estimator in (<ref>) K-times with each group once as the test set. We take the mean of the K sup norm errors for each bandwidth h_l, CV(h_l) = 1/K∑_r = 1^K max_ 1 ≤ j < k ≤ pΓ̂_n,p(x_j, x_k; h_l, Z_j,k^train, -r) - Z_j,k^test, r . Finally we choose the bandwidth with the minimal average sup-norm error, h^cv = {CV (h_l)| h_l, l = 1,…,m} . We repeat this procedure N = 1000 times for n = 400 and each p ∈{15, 25, 50, 75, 100}. The results are displayed in Figures <ref> and <ref>. Overall, when compared to the optimal bandwidths visible in Figure <ref>, the cross-validation procedure chooses reasonable but somewhat large bandwidths, which may reflect the additional variability from the estimate in the test data. §.§.§ Contribution of terms in the error decomposition Next we empirically examine the order that the various terms in the error decomposition (<ref>) have in the supremum norm. Again for each combination of n ∈{100, 200, 400} and p ∈{15, 25 ,50,100} we simulate N = 1000 repetitions, where we use optimal bandwidths h_n,p for the overall supremum-norm error determined by a grid search. The results are displayed in Figure <ref>. Overall, the main contribution to the sup-norm error is from the term involving the processes Z_i, which of course decays in n but is not sensitive to p. The other terms also decrease somewhat with increasing p. Note that individual errors do not add up to the overall error which is plausible since (<ref>) is an upper bound resulting from the triangle inequality. §.§.§ Comparison of Γ̂_n and Γ̂^≠_n in (<ref>) Finally let us compare our estimator Γ̂_n in more detail with the estimator Γ̂^≠_n in (<ref>) which uses smoothing over the diagonal. 
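A compact Python sketch of this cross-validation scheme is given below; it is illustrative only and, for brevity, replaces the restricted local polynomial estimator by a local-constant smoother of the upper-triangle empirical covariances with a product triangular kernel (the actual estimator is available in the authors' biLocPol package). All function names are ours; any data matrix Y of n observed curves at the design x can be used as input, for instance the output of the previous sketch.

import numpy as np

def emp_cov(Y):
    # empirical covariance matrix (n-1)^{-1} (Y'Y - n * Ybar Ybar')
    n = Y.shape[0]
    Ybar = Y.mean(axis=0)
    return (Y.T @ Y - n * np.outer(Ybar, Ybar)) / (n - 1)

def smooth_upper_triangle(C, x, h):
    # local-constant smoother using only empirical covariances with j < k (above the diagonal)
    p = len(x)
    rows, cols = np.triu_indices(p, k=1)               # index pairs j < k
    pts_x, pts_y, vals = x[rows], x[cols], C[rows, cols]
    G = np.full((p, p), np.nan)
    for a in range(p):
        for b in range(a, p):                          # evaluate on the upper triangle x_a <= x_b
            w = np.maximum(1 - np.abs(pts_x - x[a]) / h, 0) * \
                np.maximum(1 - np.abs(pts_y - x[b]) / h, 0)
            if w.sum() > 0:
                G[a, b] = G[b, a] = np.sum(w * vals) / w.sum()
    return G

def cv_bandwidth(Y, x, bandwidths, K=5, seed=1):
    # K-fold CV: average over folds of the sup-norm distance to the test empirical covariances
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(Y.shape[0]), K)
    rows, cols = np.triu_indices(len(x), k=1)
    scores = []
    for h in bandwidths:
        errs = []
        for r in range(K):
            test_idx = folds[r]
            train_idx = np.concatenate([folds[s] for s in range(K) if s != r])
            C_test, C_train = emp_cov(Y[test_idx]), emp_cov(Y[train_idx])
            G = smooth_upper_triangle(C_train, x, h)
            errs.append(np.nanmax(np.abs(G[rows, cols] - C_test[rows, cols])))
        scores.append(float(np.mean(errs)))
    return bandwidths[int(np.argmin(scores))], scores

# example usage with the data (x, Y) from the previous sketch:
# h_cv, cv_scores = cv_bandwidth(Y, x, bandwidths=np.arange(0.1, 0.55, 0.05))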
Recall the notation Γ̂_n,p^h,m and Γ̂_n,p^≠,h,m, where m is the order of the local polynomial and h is the bandwidth. In addition to the Ornstein-Uhlenbeck process, which has a kink on the diagonal, we also consider simulations from the process Z̃(x) = 2/3 N_1 sin(π x) + √(2) · 2/3 N_2 cos(4 π x/5) , where N_1 and N_2 are independent standard normally distributed random variables. Z̃ has a smooth covariance kernel given by Γ̃(x,y) = 4/9 sin(π x) sin(π y) + 8/9 cos(4 π x/5) cos(4 π y/5) . Figure <ref> displays estimates of the kernel Γ̃ for a particular sample with n=100 and p=40 using Γ̂_100, 40^0.2,1 (Figure <ref>) as well as using Γ̂_100, 40^≠,0.2,1 (Figure <ref>). While the overall quality of estimation appears to be similar, Γ̂_100, 40^0.2,1 has a slight artificial kink on the diagonal. Finally, we use N=1000 repeated simulations for n = 100 and p = 50 for estimating Γ_OU as well as Γ̃ with both Γ̂_n,p^h,m and Γ̂_n,p^≠,h,m, for m=0,1,2 and for a grid of bandwidths. The results are displayed in Figure <ref> for estimating Γ_OU and in Figure <ref> for Γ̃. For the Ornstein-Uhlenbeck kernel Γ_OU the estimators perform similarly for m = 0, but there are major differences in the local linear (m=1) and local quadratic (m=2) cases. In particular, the local linear estimator Γ̂_n^≠,1 performs considerably worse than Γ̂_n^1. Note that in this case not much smoothing is necessary. In Figure <ref> the target kernel is smooth on the whole of [0,1]^2 and needs more smoothing to obtain good results. Here both estimators should have the same rates of convergence, which is confirmed in the numerical example. A further advantage of Γ̂_n in comparison to Γ̂_n^≠ is the lower computational cost, since only p (p-1)/2 observations (instead of p (p-1)) are needed. §.§ Weather data in Nuremberg We consider daily temperature curves in each month from the years 2002 up to 2022. The data is obtained from the Deutscher Wetterdienst (DWD) at https://opendata.dwd.de/climate_environment/CDC/observations_germany/climate/10_minutes/air_temperature/historical/[Link]. This particular data set was already used in <cit.> to investigate the effect of sparse or dense design on mean estimation. The observations on each day are taken on a ten-minute grid. To reduce the day-to-day time-series dependency, only every third or fourth day (the 1st, 4th, 8th, 12th, 15th, 18th, 22nd, 25th and 29th of every month) was used. This results in around n = 180 observations for each month, except for February where n = 165. Figure <ref> shows the daily temperature curves for January and August. We use our estimator in (<ref>) for the covariance kernel and derive estimates of the standard deviation curves as well as of the correlation surfaces. Results for the standard deviation curves with the bandwidths 144, 288 and 720 for the months January and August can be seen in Figure <ref>. Since a day has 1440 minutes, the bandwidths 144, 288 and 720 correspond to 0.1, 0.2 and 0.5 on the unit interval. Besides the notable temperature difference, the standard deviation structure also differs: the winter days have higher standard deviations at night, while the summer days vary more across daytime. Although the smoothing with h = 720 appears somewhat strong, the results are still reasonable. Overall, the months from February to April have the highest standard deviations. Warm days are the reason for the characteristic bump in the summer months, which is not visible in the cold months from December to February.
In December and January the highest standard deviation is in the night due to extremely cold nights, which can be seen in Figure <ref>. In Figure <ref> we display estimates of the correlation function with bandwidth h = 288 for January and August. The winter days in January have a higher correlation of the temperature during the night and day compared to the summer month August. However in August the temperature in the morning correlates highly with the day time temperature. Again this is in line with Figure <ref>. When making these estimations it is crucial to take care of the dependency structure of consecutive days, otherwise a high additional time-series correlation in the corners is captured in the estimate. § CONCLUSIONS AND DISCUSSION Local polynomial estimators also yield estimates of derivatives. As discussed in Example <ref>, existence of global derivatives of the covariance kernel are intrinsically related to smoothness of the paths of the process, in particular for Gaussian processes. <cit.> discuss estimation of the covariance kernel of derivatives of the underlying processes, and <cit.> give upper bounds for mean function derivative estimation in FDA. Apart from estimating derivatives, our methods can form the basis for inference of smoothness of the covariance kernel on the diagonal, and hence for smoothness of the paths. For example, existence of global partial derivatives implies ∂_x c(x,x) = ∂_y c(x,x), which can be tested by using derivative estimators resulting from our method. As another application, the roughness parameter function τ(x) = ∂_x ∂_y c(x,x) in <cit.> can be estimated in settings with observation errors by using our estimation method. As the real-data example shows, extensions of our method to time series data would be of quite some interest. Parameter estimation of mean and covariance kernel and related inference methods under the supremum norm have been intensely investigated in recent years <cit.>. Here however it is commonly assumed that full paths of the processes are observed without errors, and results for discrete observations covering the time series setting seem to be missing in the literature. Estimates of the covariance kernel serve as the basis for estimating the principle component functions. Here most results are in L_2 <cit.>. <cit.> have rates in the supremum norm based on the expansions developed in <cit.>. Rate optimality and a CLT in case of synchronous design would certainly be of interest when estimating the principle component functions. § PROOFS §.§ Proof of Theorem <ref> First let us make the error decomposition in Remark <ref> precise. For the estimator ·· h in (<ref>), if the weights satisfy <ref> for some ζ > 0 then we have the error decomposition xyh -Γ(x,y) = ∑_j<k^p xyh [1/n∑_i=1^n ϵ_i,jϵ_i,k + Γ(x_j,x_k) - Γ(x,y) + 1/n∑_i=1^n(Z_i(x_j)Z_i(x_k) - Γ(x_j,x_k)) + 1/n∑_i=1^n (Z_i(x_j)ϵ_i,k + Z_i(x_k)ϵ_i,j) - 1/n(n-1)∑_i ≠ l^n (ϵ_i,jϵ_l,k + Z_i(x_j)Z_l(x_k) +Z_i(x_j)ϵ_l,k + ϵ_i,jZ_l(x_k) )]. The proof of Lemma <ref> is provided in the supplementary appendix in Section <ref>. Next observe that combining <ref> and Assumption <ref> yields for (x,y)^⊤∈ T that * For a constant > 0 the sum of the absolute values of the weights ∑_j<k^p | xyh | ≤. Given 0 < β_0 ≤ 1 and γ, L, C_Z>0, for the estimator ·· h in (<ref>) with n and p large enough the following rates of convergence hold, where we abbreviate = P (γ;L,β_0, C_Z). 
* If the weights satisfy <ref> with ζ = γ, <ref> of Assumption <ref> and <ref>, then sup_h ∈ (c/p, h_0] sup_Z ∈ sup_(x,y)∈ T h^-γ ∑_j<k^p xyh (x_jx_kh - Γ(x,y)) = O( 1) , with constants h_0, c >0 according to Assumption <ref>. * If the weights satisfy <ref>, <ref> and <ref>, then under Assumption <ref> we have that [ sup_(x,y)∈ T∑_j<k^p xyh 1/n∑_i=1^n ϵ_i,jϵ_i,k] = O ( √(log(n p)/n (p h)^2)) , [sup_(x,y) ∈ T∑_j<k^p xyh ∑_i, l=1^n ϵ_i,jϵ_l,k/n(n-1)] = O ( log(n p)/n p h) , where the constants in the O terms can chosen uniformly for h ∈ (c/p, h_0]. * If the weights satisfy <ref>, then sup_h ∈ (c/p, h_0]sup_Z ∈ [ sup_(x,y)∈ T1/n∑_j<k^p xyh ∑_i=1^n (Z_i(x_j)Z_i(x_k) - Γ(x_j, x_k)) ] = O( n^-1/2), sup_h ∈ (c/p, h_0]sup_Z ∈ [ sup_(x,y) ∈ T1/n∑_j<k^p xyh ∑_i,l=1^n Z_i(x_j)Z_l(x_k)/n-1] = O( n^-1) . * If the weights satisfy <ref>, <ref> and <ref> and Assumption <ref>, then sup_Z ∈[ sup_(x,y) ∈ T∑_j<k^p xyh 1/n∑_i=1^n Z_i(x_j) ϵ_i,k] = O (√(log(h^-1)/n p h)) , sup_Z ∈[sup_(x,y) ∈ T∑_j<k^p xyh ∑_i ≠ l^n Z_i(x_j) ϵ_l,k/n(n-1)] = O (√(log(h^-1)/n p h)) , where the constants in the O terms can chosen uniformly for h ∈ (c/p, h_0]. i). The rate of bias term (<ref>) is obtained by standard arguments. For convenience these are detailed in the supplementary appendix, Section <ref>. ii). We show the bound for the first term in ii), the second is dealt with in the supplementary appendix, Section <ref>. To deal with the quadratic form E_n,p,h(x,y) = ∑_j<k^p xyh 1/n∑_i=1^n ϵ_i,jϵ_i,k the main ingredient is the higher dimensional Hanson-Wright inequality <cit.> which is stated in Lemma <ref> at the end of this section. This is combined with a discretisation technique and the Lipschitz-continuity of the weights, property <ref>. Given δ>0 choose a δ-cover I_δ = {τ_1,…, τ_√(C_I)/δ}^2 of T of cardinalty C_T/δ^2 for an appropriate constant C_T>0. We estimate E_n,p,h_∞ = sup_z^'∈ I_δsup_z ∈ T, z-z^'≤δ|E_n,p,h(z)| ≤sup_z^'∈ I_δsup_z ∈ T, z-z^'≤δ(|E_n,p,h(z) - E_n,p,h(z^')| + |E_n,p,h(z^')|) ≤sup_z, z^'∈ T, z-z^'≤δ|E_n,p,h(z) - E_n,p,h(z^')| + sup_z^'∈ I_δ|E_n,p,h(z^')|. For the first term, setting ϵ_j (ϵ_1,j, …, ϵ_n,j)^⊤, by <ref> and Assumption <ref> we have [sup_z, z^'∈ T, z-z^'≤δ |E_n,p,h(z) - E_n,p,h(z^')|] = 1/n [sup_z, z^'∈ T, z-z^'≤δ|∑_j<k^p( xyh - x^'y^'h) ϵ_jϵ_k|] ≤ n^-1 2 δ h^-1 |ϵ_1ϵ_2|1/2by <ref> and Assumption <ref> ≤ 2 δ h^-1 σ^2 = 𝒪(1/(n p h)) |ϵ_1ϵ_2| ≤ nσ^2, δ 1/(np), where for the last inequality we take δ 1/(np). As for the second term in (<ref>), in order to apply the Hanson-Wright inequality, Lemma <ref>, we need to find upper bounds for the Frobenius and operator norm of the matrix A = ( xyh _j<k)_j,k = 1,…, p∈^p× p. From the properties <ref> and <ref> of the weights the Frobenius-norm satisfies A^2_F = ∑_j < k^p xyh^2 ≤ /(p h)^2 , For the operator-norm A_op = max_j=1,…,p s_j, where s_j, j = 1, …,p, are the singular values of A, we have that A_op ≤(A_1 A_∞)^1/2≤ /p h , where writing z = (x,y) we may bound A_1 = max_k∑_j xyh _j<k≤ (p h)^-1, A_∞ = max_j∑_k xyh _j<k≤ (p h)^-1 by Assumption <ref> and <ref>. See <cit.> for a proof of the first inequality in (<ref>). Then the statement of the Hanson-Wright inequality (Lemma <ref>) yields (|E_n,p,h(z)| ≥ t) ≤ 2 exp(-c̃ min(t^2 n (p h)^2/K^4 , t n p h/K^2 )), and therefore (√(n (p h)^2/log(n p))|E_n,p,h(x,y)| ≥ t) ≤ 2 exp( - t/C min(log(n p), √(n log(n p)) )) with C K^4 max(, )/c̃ and t≥ 1. First we consider the case log(n p) ≲ n. 
Taking η≥ 1 yields [sup_(x,y)^⊤∈ I_δ|√(n (p h)^2/log(n p))E_n,p,h(x,y)|] = ∫_0^∞(sup_(x,y)^⊤∈ I_δ√(n (p h)^2/log(n p))|E_n,p,h(x,y)| ≥ t) t ≤η + ∑_(x,y)^⊤∈ I_δ∫_η^∞(√(n (p h)^2/log(n p))|E_n,p,h(x,y)| ≥ t) t ≤η + 2 C_I (np)^2∫_η^∞exp(- t log (n p)/C) t = η + 2 C_I C (np)^2-η/C/log(n p)= O(1+1/log(n p)) , by choosing η = 2 C. Since 1/(n p h) ≤√(log(n p)/(n p^2 h^2)) we conclude that [E_n,p,h_∞] = (√(log(n p)/n (p h)^2)) . In the case where p grows exponentially in n, meaning log(n p) ≳ n, the same calculations lead to [sup_(x,y)^⊤∈ I_δ|√(n (p h)^2/log(n p))E_n,p,h(x,y)|] ≤η + 2 C_I C (np)^2/√(n log(n p))exp(- η/C n) . By choosing η large enough such that p^2 ≲exp( -n η/c) we get (<ref>) which concludes the proof of the first statement of ii). iii). Again we focus on the first bound, the second can be obtained similarly, details are provided in Section <ref> in the supplementary appendix. First note that by <ref>, we may bound [sup_(x,y) ∈ T|1/√(n)∑_j<k^p xyh ∑_i=1^n (Z_i(x_j)Z_i(x_k) - Γ(x_j, x_k))|] ≤ [sup_(x,y) ∈ T|1/√(n)∑_i = 1^n (Z_i(x)Z_i(y) - Γ(x,y))|]. To bound (<ref>) we shall apply the maximal inequality <cit.>, see also <cit.>. By definition of the class of processes Z_i ∈ P (γ;L,β_0, C_Z), we have Γ∈ H_T(γ, L) so that Γ is uniformly upper bounded by L, and |Z_i(x)| ≤ |Z_i(0)| + M_i. Therefore a square-intergable envelope Φ_n,i of (Z_i(x)Z_i(y) - Γ(x,y))/√(n) is given by 1/√(n)Z_i(x)Z_i(y) - Γ(x,y) ≤1/√(n)(2 Z_i^2(0) + 2 M_i^2+ L )Φ_n,i. To check the assumption of manageability of (Z_i(x)Z_i(y) - Γ(x,y))/√(n) in the sense of <cit.>, see also <cit.>, we note that from the Hölder continuity of Γ as well as of the paths of Z_i in (<ref>) the triangle inequality gives 1/√(n)Z_i(x)Z_i(y) - Γ(x,y) - (Z_i(x^')Z_i(y^') - Γ(x^', y^')) ≤ 2 Φ_n,i max(x-x^',y - y^')^min(β, γ). Then <cit.> implies manageability. Now <cit.>, see also <cit.>, implies that (<ref>) is upper bounded by 1/n∑_i=1^n [Φ_i,n] ≤const. uniformly over the function class P (γ;L,β_0, C_Z). iv). Again we focus on the first bound, for the second details are provided in Section <ref> in the appendix. Given z = (z_i,j), i=1, …, n, j=1, …, p let S_| z(x,y) ∑_j<k^p xyh 1/n∑_i=1^nz_i,jϵ_i,k. By conditioning on Z = (Z_i(x_j)) we obtain [supS_| Z(x,y)] = [[supS_| Z(x,y)| Z]] = g( Z), g( z) = [ sup_(x,y)∈ TS_| z(x,y)]. To apply Dudley's entropy bound <cit.> we show that for given z the process S_| z(x,y) is sub-Gaussian with respect to the semi norm d_| z((x,y), (x^', y^')) = ζ [(S_| z(x,y) - S_| z(x^', y^'))^2]^1/2, (x,y), (x^', y^') ∈ T, where ζ>0 is as in Assumption <ref>. From Assumption <ref>, for λ∈ [ exp(λ (S_| z(x,y) - S_| z(x^', y^'))] = ∏_i= 1^n ∏_k= 2^p[ exp(λ ( ∑_j= 1^k-1( xyh - x^'y^'h) z_i,j/n) ϵ_i,k)] ≤ exp( λ^2/2 σ^2 ζ^2 ∑_i=1^n ∑_k=2^p( ∑_j = 1^k-1( xyh - x^'y^'h) z_i,j/n)^2 ) = exp( λ^2/2 d_| z^2((x,y), (x^', y^')) ) . To upper bound d_| z^2 ((x,y), (x^', y^')), note that by Assumption <ref> and <ref> for at most 2 ph indices j and k respectively the increment of the weights can be non-zero. Using the property <ref> for these differences we obtain the bound d_| z^2 ((x,y), (x^', y^')) ≤ζ^2 σ^2 ∑_i = 1^n 2 ph (2 ph 1/(p h)^2 (max(x-x^', y-y^')/h∧ 1) m_i/n )^2 ≤ 8 ^2 ^3 ζ^2 σ^2/n^2 p h ∑_i = 1^n m_i^2 (max(x-x^', y-y^')/h∧ 1)^2. The diameter of [0,1]^2 under d_| z is then upper bounded by _d_| z([0,1]^2)^2 = sup_x,x^', y, y^'∈ [0,1] d_| z^2((x,y)^⊤, (x^', y^')^⊤) ≤ 8 ^2 ^3 ζ^2 σ^2/n^2 p h ∑_i = 1^n m_i^2 Δ_ m , where m = (m_1, …, m_n)^⊤, and the packing number is upper bounded by D( [0,1]^2,δ;d_| z) ≤Δ_ m/δ^2 h^2 . 
Dudley's entropy bound for sub-Gaussian processes <cit.> yields for (x_0, y_0) ∈ T that g( z ) = [sup_(x,y) ∈ TS_| z(x,y)] ≤[ | S_| z(x_0, y_0) | ]+ K ∫_0^_d_| z([0,1]^2)√(log( D( [0,1]^2,δ) )) δ . Using ∫_0^a √(log (x^-1)) x = a√(-log (a)) + a/(2√(-log (a))) for 0<a<1 and the above bound on the packing number we get that ∫_0^_d_| z([0,1]^2)√(log( D( [0,1]^2,δ) )) δ ≤∫_0^Δ_ m^1/2√(log(Δ_ m·(δh)^-2))δ = √(2 Δ_ m)/h∫_0^h√(log(δ^-1))δ = √(2 Δ_ m)(√(- log(h)) + 1/2√(-log(h))). A computation to that leading to (<ref>) gives the bound [ S_| z(x_0, y_0)] ≤([ S_| z(x_0, y_0) ^2])^1/2≤Δ_ m^1/2 , Inserting this bound and (<ref>) into (<ref>) yields g( z) ≤√(Δ_ m) + K·√(2 Δ_ m)·( √(-log(h)) - 1/2√(log(h))) Now from (<ref>) it follows that Z_i(x_j) ≤Z_i(0) + M_i a.s.. Replacing the deterministic m_i by Z_i(0) + M_i, using [(Z_i(0) + M_i)^2] ≤ 2[Z_i(0)^2 + M_i^2] ≤ 2 C_Z <∞ and Jensen's inequality gives [ √(Δ_ Z(0) + M) ] ≤_Z[Δ_ Z(0) + M]^1/2 = (^2 ^3 ∑_i=1^n [(Z_i(0) + M_i)^2]/n^2 σ^2/p h)^1/2 ≤(2 C_Z ^2 ^3 σ^2/n p h)^1/2 = O((n p h)^-1/2) , so that overall [[sup_x,yS(x,y)| Z]] ≤[ √(Δ_ Z(0) + M) ] + [ √(2Δ_ Z(0) + M) ]( √(-log(h)) - 1/2√(log(h))) = O( 1/√(n p h) + √(log(h^-1)/n p h)). Follows by the upper bounds for the rates of convergence from Lemma <ref> with h= O(p^-1) for any feasible sequence h. To conclude this section we state the Hanson-Wright-Inequality used in the above proof, and give an upper bound for the maximal singular value of a matrix A ∈^p × p in terms of matrix norms. Let X_1,…, X_p be independent, mean-zero, sub-Gaussian random vectors in ^n. Further let A = (a_j,k) ∈^p× p be a matrix. For any t ≥ 0 we have (|∑_j,k = 1^p a_j,kX_jX_k - ∑_j = 1^p a_j,jX_jX_j| ≥ t) ≤ 2 exp(-c̃ min(t^2/n K^4 A_F^2, t/K^2 A_op)), where the Orlicz-norm max_i X_i_ϕ_2 = K < ∞, A_F^2 = ∑_j,k^p a_j,k = 1^2 and A_op = max_i s_i, where s_i, i = 1…,p, are the singular values of A and c̃ > 0 is a constant. Follow the proof of <cit.> and replace X^⊤ A X and λ by X_k^⊤ AX_k and λ/d respectively. Make use of independence then. §.§ Proof of Theorem <ref> In the proof we rely on the reduction to hypothesis testing as presented e.g. in <cit.>. In all hypothesis models we set μ=0. For the lower bound p^-γ, using the method of two sequences of hypotheses functions we set Z_i;0 = 0 and construct Z_i;1, p such that its covariance kernel, Γ_1, p, satisfies Γ_1, p_∞≥ c p^-γ for some constant c>0 and that Z_i;1, p(x_j)=0 at all design points x_ j, so that the distribution of the observations for Z_i;0 and Z_i;1,p coincide. Observing Assumption <ref> for a constant L̃>0 to be specified we set g_ p( x) = g( x)= L̃ (1/p f_max)^γ/2exp(-1/1-x^2)_{|x| < 1}. and for some fixed 1 ≤ l ≤ p - 1 let Z_i;1,p( x)= W_i g((2 p f_max) (x -(x_l+ x_l+1) / 2)), where W_i ∼𝒩(0,1), which are taken independent over i. By Assumption <ref>, as the distance between design points is at least p f_max it follows that Z_i;1, p( x_ j)=0 at all design points. Furthermore, the covariance kernel of Z_i;1,p is Γ_1,p(x,y) = L̃^2 (1/p f_max)^γexp(-1/1-x̃^2) exp(-1/1-ỹ^2) _{|x̃| < 1, |ỹ| < 1}, where we set x̃ = (2 p f_max) (x -(x_l+ x_l+1) / 2), ỹ = (2 p f_max) (y -(x_l+ x_l+1) / 2). At x = y = (x_l+ x_l+1) / 2 this results in a value of Γ_1,p((x_l+ x_l+1) / 2,(x_l+ x_l+1) / 2) = L̃^2 (1/p f_max)^γ e^-2, so that Γ_0 - Γ_1, p_∞≥ c p^-γ holds true. 
Finally, using the chain rule and the fact that all derivatives of the bump function in the definition of g are uniformly bounded, Γ_1,p is γ-Hölder smooth with constant proportional to L̃^2, which can be adjusted to yield the Hölder norm L. This concludes the proof for the lower bound p^-γ. For the lower bound of order n^-1/2, first consider vanishing errors ϵ_i,j, and set Z_i = σ W_i with W_i ∼𝒩(0,1) and unknown variance σ^2≤ C_Z. Then estimating the covariance kernel amounts to estimating the unknown variance in a normal sample of size n, for which the rate is n^-1/2. Since the model with observational errors is less informative, we keep the lower bound. Finally, let us turn to the lower bound of order (log(n p)/(n p))^γ/(2γ+1). Since we already showed a lower bound of order p^-γ we may assume that p^- γ≲ (log(n p)/(n p ))^γ/(2γ+1). We shall apply <cit.>, and thus need to specify hypotheses Γ_l ∈ H(γ, L), l = 1,…, N_n,p, N_n,p∈, such that Γ_l - Γ_k≥ 2 s_n,p > 0 for 0 ≤ j < k ≤ N_n,p for a positive sequence (s_n,p)_n ∈. Then if there exist sequences (τ_n,p)_n∈ and (α_n,p)_n∈ with 1/N_n,p∑_l = 1^N_n,p_l( _0/_l≥τ_n,p) ≥ 1- α_n,p , it holds that inf_Γ̂_n,psup_Γ∈ H(γ, L)_Γ( Γ - Γ̂_n,p_∞≥ s_n,p) ≥τ_n,p N_n,p/1+τ_n,p N_n,p(1-α_n,p) . For sufficiently small c_i, i=0,1 we take N_n, p= c_0(n p/log(n p))^1/2γ+1, h_n, p = N_n, p^-1 , s_n,p c_1 h_n, p^γ, and for L̃ to be specified we let g̃( x) = L̃ (h_n, p / 2)^γexp(-(1-x^2)^-1)_{|x|< 1} Then for l ∈{1, …, N_n, p} we set Z_i;l( x) = W_i ( 1+ g̃(2 ( x - z_l)/h_n, p)), z_l = (l-1/2)/N_n, p, where W_i are independent, standard normally distributed random variables, and Z_i;0( x) = W_i. By setting x̃_l = 2 ( x - z_l)/h_n, p, ỹ_l = 2 ( y - z_l)/h_n, p, g(x)g̃(x̃_l) , the covariance kernel of Z_i;l can be written as Γ_l(x,y) = (1+ g(x̃_l))( 1 + g(ỹ_l)) . Using the chain rule one checks γ-Hölder smoothness of each Γ_l(x,y) for suitable choice of L̃ (depending on L and c_0). Further, by construction it holds that (Γ_l) = (l/N_n,p, (l+1)/N_n,p)^2 and therefore the Γ_l have disjoint supports in [0,1]^2, so that Γ_ l - Γ_ r_∞≥ 2 s_n, p for all l ≠ r, and c_1 sufficiently small (depending on c_0). Let _l^(n) be the joint normal distribution of the observations (Z_1;l(x_1) + ϵ_1,1, …, Z_1,l(x_p) + ϵ_1,p), …, (Z_n;l(x_1) + ϵ_n,1, …, Z_n;l(x_p) + ϵ_n,p). For l = 0 the marginal distributions are given by _i;0 = 𝒩(0, Σ_0), Σ_0 _p × p + I_p, with _p × p_p _p^⊤ being the p× p matrix where all entries are equal to 1. For l ∈{1, …, N_n, p} the distribution of Z_i;l, l = 1, …, N_n,p, is given by _i;l∼𝒩(0, Σ_l), Σ_l Σ̃_l + I_p, where Σ̃_l = vv^⊤, with v_j (v_l)_j = 1 +g(x̃_j, l) = 1 + g(x_j), j = 1,…, p, where g is defined in (<ref>). In order to calculate the likelihood ratio of _l^(n) and _0^(n) we need the inverse and determinants of Σ_l and Σ_0. By the matrix determinant Lemma Σ_0 = 1+p, and Σ_l = 1+ v_2^2, as well as Σ_0^-1 = I_p - 1/p+1_p × p , and Σ_l^-1 = I_p - v v^⊤/v_2^2 + 1 . In Lemma <ref> in the supplementary appendix it is shown that the symmetric square root of Σ_l is given by Σ_l^1/2 = I_p + √(1+ v_2^2)-1/ v_2^2 vv^⊤ . For independent Z_i ∼ N_p (0, I_p) we can write _l^(n)( log_l^(n)/^(n)_0 > log1/τ) = ( n/2logΣ_0/Σ_l + 1/2∑_i = 1^n Z_i ^⊤Σ^1/2_l ( Σ_0^-1 - Σ_l^-1) Σ^1/2_l Z_i > log1/τ) = (∑_i= 1^n Z_i^⊤( vv^⊤ - Σ^1/2_l _p × p/p+1Σ^1/2_l )Z_i > 2log1/τ+ n logΣ_l/Σ_0) . 
Setting A v v^⊤ - Σ^1/2_l _p × p/p+1Σ^1/2_l the Hanson-Wright inequality (Lemma <ref>) yields _l^(n)( _0^(n)/_l^(n)≥τ_n ) = 1 - _l^(n)( log( _l^(n)/_0^(n)) > log1/τ_n,p) = 1 - ( ∑_i = 1^n Z_i^⊤ A Z_i - n tr(A) > 2 log1/τ_n,p + n (log(Σ_l/Σ_0) -tr(A))) = 1 - ( ∑_j,k = 1^p a_j,k Z^(j) Z^(k) - n tr(A) > t) ≥ 1 - exp( -c min( t^2/n A_F^2, t/A_op) ) , for some constant c >0 and t 2log(τ_n,p^-1) + n (log(Σ_l/Σ_0) -tr(A)). By choosing τ_n,p = exp(-n p/p+1∑_k = 1^p g^2(x_k)) Lemma <ref> in the supplementary appendix shows that τ_n,p≳ (n p)^-C̃^L, t ≃log(n p) , A_F^2 ≃log(n p)/n , A_op≃(log(n p)/n)^1/2, where C̃^L>0 can be made large for large L. Plugging the rates into (<ref>) yields exp( - c̃ min( t^2/n A_F^2, t/A_op) ) ≃1/n p . Therefore we can set α_n,p≃ (n p)^-1 in (<ref>) and get 1-α_n,p > 0. Further we have τ_n,pN_n,p→∞ by choosing L (and therefore C̃^L) sufficiently small. Using this in (<ref>) yields the claim. [Azaïs and WscheborAzaïs and Wschebor2009]azais2009level Azaïs, J.-M. and M. Wschebor (2009). Level sets and extrema of random processes and fields. John Wiley & Sons. [Azmoodeh, Sottinen, Viitasaari, and YazigiAzmoodeh et al.2014]azmoodeh2014necessary Azmoodeh, E., T. Sottinen, L. Viitasaari, and A. Yazigi (2014). Necessary and sufficient conditions for hölder continuity of gaussian processes. Statistics & Probability Letters 94, 230–235. [Berger, Hermann, and HolzmannBerger et al.2024]berger2023dense Berger, M., P. Hermann, and H. Holzmann (2024). From dense to sparse design: Optimal rates under the supremum norm for estimating the mean function in functional data analysis. Bernoulli, to appear; arXiv preprint arXiv:2306.04550. [Brown and LevineBrown and Levine2007]brown2007variance Brown, L. D. and M. Levine (2007). Variance estimation in nonparametric regression via the difference sequence method. The Annals of Statistics 35(5), 2219–2232. [Cai and YuanCai and Yuan2010]cai2010nonparametric Cai, T. and M. Yuan (2010). Nonparametric covariance function estimation for functional and longitudinal data. University of Pennsylvania and Georgia inistitute of technology. [Cai and YuanCai and Yuan2011]cai2011optimal Cai, T. T. and M. Yuan (2011). Optimal estimation of the mean function based on discretely sampled functional data: Phase transition. The annals of statistics 39(5), 2330–2355. [Cardot, Degras, and JosserandCardot et al.2013]zbMATH06254554 Cardot, H., D. Degras, and E. Josserand (2013). Confidence bands for Horvitz-Thompson estimators using sampled noisy functional data. Bernoulli 19(5A), 2067–2097. [Dai, Müller, and TaoDai et al.2018]dai2018 Dai, X., H.-G. Müller, and W. Tao (2018). Derivative principal component analysis for representing the time dynamics of longitudinal and functional data. Stat. Sin. 28(3), 1583–1609. [Dette and KokotDette and Kokot2022]dette2022detecting Dette, H. and K. Kokot (2022). Detecting relevant differences in the covariance operators of functional time series: a sup-norm approach. Annals of the Institute of Statistical Mathematics 74(2), 195–231. [Dette, Kokot, and AueDette et al.2020]dette2020functional Dette, H., K. Kokot, and A. Aue (2020). Functional data analysis in the banach space of continuous functions. The Annals of Statistics 48(2), 1168–1192. [Dette and WuDette and Wu2021]dette2021functional Dette, H. and W. Wu (2021). Confidence surfaces for the mean of locally stationary functional time series. Preprint Bochum university. [Hall and Hosseini-NasabHall and Hosseini-Nasab2009]hall2009theory Hall, P. and M. Hosseini-Nasab (2009). 
Theory for high-order bounds in functional principal components analysis. In Mathematical Proceedings of the Cambridge Philosophical Society, Volume 146, pp. 225–256. Cambridge University Press. [Hall, Müller, and WangHall et al.2006]hall2006properties Hall, P., H.-G. Müller, and J.-L. Wang (2006). Properties of principal component methods for functional and longitudinal data analysis. The annals of statistics, 1493–1517. [Hassan and Hosseini-NasabHassan and Hosseini-Nasab2021]Hassan2021 Hassan, S. G.-J. and S. M. E. Hosseini-Nasab (2021). On mean derivative estimation of longitudinal and functional data: from sparse to dense. Stat. Pap. 62(4), 2047–2066. [Li and HsingLi and Hsing2010]li2010uniform Li, Y. and T. Hsing (2010). Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data. The Annals of Statistics 38(6), 3321–3351. [Liebl and ReimherrLiebl and Reimherr2019]liebl2019fast Liebl, D. and M. Reimherr (2019). Fast and fair simultaneous confidence bands for functional parameters. arXiv preprint arXiv:1910.00131. [Mohammadi and PanaretosMohammadi and Panaretos2024]mohammadi2024functional Mohammadi, N. and V. M. Panaretos (2024). Functional data analysis with rough sample paths? Journal of Nonparametric Statistics 36(1), 4–22. [PollardPollard1990]pollard1990empirical Pollard, D. (1990). Empirical processes: theory and applications. In NSF-CBMS regional conference series in probability and statistics, pp. i–86. JSTOR. [Ramsay and SilvermannRamsay and Silvermann1998]ramsay1998functional Ramsay, J. and B. Silvermann (1998). Functional data analysis. springer series in statistics. [TsybakovTsybakov2004]tsybakov2008introduction Tsybakov, A. B. (2004). Introduction to nonparametric estimation, 2009, Volume 9. [Turkmen and CivcivTurkmen and Civciv2007]turkmen2007some Turkmen, R. and H. Civciv (2007). Some bounds for the singular values of matrices. Applied Mathematical Sciences 1(49), 2443–2449. [van der Vaart and Wellnervan der Vaart and Wellner1996]van1996weak van der Vaart, A. and J. Wellner (1996). Weak convergence and empirical processes: with applications to statistics. Springer Science & Business Media. [VershyninVershynin2018]vershynin2018high Vershynin, R. (2018). High-dimensional probability: An introduction with applications in data science, Volume 47. Cambridge university press. [Wang, Chiou, and MüllerWang et al.2016]wang2016functional Wang, J.-L., J.-M. Chiou, and H.-G. Müller (2016). Functional data analysis. Annual Review of Statistics and Its Application 3, 257–295. [Wang, Brown, and CaiWang et al.2008]Wang Wang, L., L. D. Brown, and T. Cai (2008). Effect of mean on variance function estimation in nonparametric regression. The Annals of Statistics 36(2), 646–664. [XiaoXiao2020]xiao2020asymptotic Xiao, L. (2020). Asymptotic properties of penalized splines for functional data. Bernoulli 26(4), 2847–2875. [Zhang and WangZhang and Wang2016]zhang2016sparse Zhang, X. and J.-L. Wang (2016). From sparse to dense functional data and beyond. The Annals of Statistics 44(5), 2281–2321. § SUPPLEMENT: ADDITIONAL PROOFS FOR THE UPPER BOUND: LEMMAS <REF> AND <REF> Plugging the representation of the observations Y_i,j and Y̅_n,j into the estimator xyh in (<ref>) we get xyh = 1/n-1[∑_i=1^n ∑_j<k^p xyh (Y_i,jY_i,k - Y̅_n,jY̅_n,k)]. 
= 1/n-1∑_i = 1^n ∑_j<k^p xyh[ (Z_i,jZ_i,k + ϵ_i,jϵ_i,k + Z_i,jϵ_i,k + ϵ_i,jZ_i,k + μ(x_j)(Z_i,k + ϵ_i,k) + (Z_i,j + ϵ_i,j)μ(x_k)) - ( Z̅_n,jZ̅_n,k + ϵ̅_n,jϵ̅_n,k + Z̅_n,jϵ̅_n,k + ϵ̅_n,jZ̅_n,k + μ(x_j)(Z̅_n,k + ϵ̅_n,k) + (Z̅_n,j + ϵ̅_n,j)μ(x_k))] = 1/n-1∑_i = 1^n ∑_j<k^p xyh[(ϵ_i,jϵ_i,k - ϵ̅_n,jϵ̅_n,k) + (Z_i,jZ_i,k - Z̅_n,jZ̅_n,k) + (Z_i,jϵ_i,k -Z̅_n,jϵ̅_n,k) + (ϵ_i,jZ_i,k - ϵ̅_n,jZ̅_n,k) + μ(x_j)(Z_i,k - Z̅_n,k + ϵ_i,k - ϵ̅_n,k) + (Z_i,j - Z̅_n,j + ϵ_i,j - ϵ̅_n,j)μ(x_k)]. Let U_i,j be a placeholder for ϵ_i,j or Z_i,j. In order to summarize the last display we use 1/n-1∑_i=1^nU_i,k- n/n-1U̅_n,k = 0 and therefore the last row vanishes. For the first three rows we use 1/n-1∑_i=1^nU_i,jU_i,k - 1/n(n-1)∑_l,r=1^nU_l,jU_r,k = 1/n∑_i=1^n U_i,jU_i,k - 1/n(n-1)∑_l≠ r^n U_l,j U_r,k, for Z_i,jZ_i,k, ϵ_i,jϵ_i,k and Z_i,jϵ_i,k respectively. By adding ±Γ(x_j,x_k) this yields the decomposition xyh = ∑_j<k^p xyh[(1/n∑_i=1^n ϵ_i,jϵ_i,k - 1/n(n-1)∑_l≠ r^n ϵ_l,jϵ_r,k) + 1/n∑_i=1^n(Z_i,jZ_i,k - Γ(x_j,x_k) ) - 1/n(n-1)∑_i ≠ l^n Z_i,jZ_l,k+ Γ(x_j,x_k) + (1/n∑_i=1^n (Z_i,jϵ_i,k + Z_i,kϵ_i,j) - 1/n(n-1)∑_l≠ r^n (Z_l,jϵ_r,k + Z_l,kϵ_r,j ))]. Subtracting Γ(x,y) yields the claim together with <ref>. For the bias we use a Taylor expansion and the fact that the weights of the estimator reproduce polynomials of the certain degree ζ = ⌊γ⌋. We get for certain θ_1,k, θ_2,k∈ [0,1] s.t. τ_j^(1) x + θ_1,k(x_j - x) ∈ [0,1] and τ_k^(2) y + θ_2,k(x_k - y) ∈ [0,1] ∑_j<k^ p xyh Γ(x_j, x_k) - Γ(x,y) = ∑_j<k^ p xyh( Γ(x_j, x_k) - Γ(x,y)) by <ref> =∑_j<k^ p xyh (∑_| r|= 1, r =(r_1,r_2) ^ζ-1∂^rΓ(x,y) /(∂ x)^r_1 (∂ y)^r_2( x_ j- x)^ r_1 (x_k - y)^r_2/ r_1 ! r_2! + ∑_| r|= ζ r =(r_1,r_2) ∂^rΓ(τ_j^(1), τ_k^(2))/(∂ x)^r_1 (∂ y)^r_2( x_ j- x)^ r_1 (x_k - y)^r_2/ r_1 ! r_2!) =∑_j<k^ p xyh ( ∑_|r|= ζ(∂^rΓ(x,y)/(∂ x)^r_1 (∂ y)^r_2 - ∂^rΓ(τ_j^(1), τ_k^(2))/(∂ x)^r_1 (∂ y)^r_2) ( x_ j- x)^ r_1 (x_k - y)^r_2/ r_1 ! r_2!) by <ref> ≤∑_j<k^ p xyh ∑_|r|= ζmax(x_j-x, x_k-y)^γ - ζ( x_ j- x)^ r_1 (x_k - y)^r_2/ r_1 ! r_2! ≤∑_j<k^ p xyh max(x_j-x, x_k-y)^γ∑_| r|= ζ1/ r ! ≤ h^γ∑_| r|= ζC_1/ r ! = O(h^γ) . by <ref> We rewrite the term ∑_j<k^p xyh1/n(n-1)∑_l≠ r^n ϵ_l,jϵ_r,k = 1/n(n-1)∑_j<k^p xyh [∑_i, l = 1^n ϵ_i,jϵ_l,k- ∑_i=1^nϵ_i,jϵ_i,k] = 1/n-1∑_j<k^p xyh X̃_j,nX̃_k,n - 1/n-1E_n,p,h(x,y), where E_n,p,h(x,y) is the process defined in (<ref>) and X̃_j,n = √(n)X̅_j,n = n^-1/2∑_i=1^n ϵ_i,j. By the first statement of ii) we obtain [sup_(x,y)^⊤∈ D1/n-1 E_n,p,h(x,y)] = (√(log(n p))/n^3/2 p h). Simple calculations show that X̃_j,n∼(σ^2) and its Orlicz-Norm is given by X̃_j,n_ψ_2 = K, where K>0 is constant not depending on n. Defining A xyh _j<k as before, using the estimates (<ref>) and (<ref>) and the Hanson-Wright inequality (Lemma <ref>) we obtain, analogously to (<ref>), ((n-1) p h/log(n p) |∑_j<k^p xyh X̃_j,nX̃_k,n/n-1| ≥ t) ≤ 2 exp(- c̃min(t^2 log^2(n p)/K^4 , t log(n p)/K^2 )) ≤ 2 exp( -tlog(n p) /C ) for t≥ 1 and C K^4max(,). The same calculations as in (<ref>) then lead to [sup_x,y∈[0,1]1/n(n-1)∑_j<k^p xyh ∑_i, l = 1^n ϵ_i,jϵ_l,k] = (log(n p)/n p h). Set R_n,p,h(x,y) 1/n(n-1)∑_j<k^p xyh ∑_i≠ l^n Z_i(x_j)Z_l(x_k) and consider the envelope Φ_n∈^n(n-1) with entries ϕ_n,(i,l), i,l = 1, …, n, with i ≠ l given by 1/√(n(n-1))Z_i(x)Z_j(y)≤1/√(n(n-1))( M_i + Z_i(0)) ( M_l + Z_l(0)) . This leads to [√(n(n-1)) R_n,p,h_∞] ≤ [sup_x,y ∈[0,1]| ∑_i ≠ l^n Z_i(x)Z_l(y)/√(n(n-1))|] ≤ [sup_x,y ∈[0,1]| ∑_i ≠ l^n Z_i(x)Z_l(y)/√(n(n-1))|^2]^1/2 ≤ 2 K_2 Λ(1)^2 [( M_1 + Z_1(0)) ( M_2 + Z_2(0)) ^2]^1/2 ≤ 4 K_2 Λ(1)^2 (2[M_1^2] + 2[Z_1(0)^2] ) < ∞. 
For S̃(x,y) ∑_j<k^p xyh 1/n∑_i≠ l^n Z_i(x_j) ϵ_l,k/n-1 we shall proceed analogously as for the first term. Let S̃_| z(x,y) ∑_j<k^p xyh 1/n(n-1)∑_i≠ l^n z_i,jϵ_l,k . Again we can show that S̃_| z is a sub-Gaussian process with respect to the semi-norm [(S̃_| z(x,y) - S̃_| z(x^', y^'))^2]^1/2. This semi-norm is given by d_S̃((x,y)^⊤, (x^', y^')^⊤)^2 = [(S̃_| z(x,y) - S̃_| z(x^', y^'))^2] = [ ( ∑_j < k^p ( xyh - x^'y^'h) ∑_i < l^n z_i,j ϵ_l,k/n(n-1))^2] = σ^2/n-1∑_k=2^p( ∑_j = 1^k-1( xyh - x^'y^'h) ∑_i = 1^n z_i,j/n)^2 , and further we have that [exp(S̃_| z(x,y) - S̃_| z(x^', y^'))] = ∏_k = 2^p [ exp( ∑_j = 1^k-1( xyh - x^'y^'h) ∑_i = 1^nz_i,j/n∑_l = 1, l ≠ i^n ϵ_l,k/n-1)] ≤exp( σ^2/2(n-1)∑_k = 2^p(∑_j = 1^k-1( xyh - x^'y^'h) ∑_i = 1^n z_i,j/n)^2 ) . Therefore S̃_| z is a sub-Gaussian process with respect to the semi-norm d_S̃. With the same arguments as for the first part we can bound the diameter of the semi-norm by _S̃([0,1]^2) ≤^2 ^3/(n-1) p h( ∑_i=1^n m_i/n)^2 σ^2 Δ̃= O ( 1/n p h) . From here the result follows analogously to (<ref>), (<ref>) and (<ref>). § SUPPLEMENT: PROOF FOR THE ASYMPTOTIC NORMALITY We make use of the functional central limit theorem <cit.>. For the proof let X_n,i(x,y) 1/√(n)∑_j<k^p xyh jk, jk Z_i(x_j)Z_i(x_k) - Γ(x_j,x_k), and S_n(x,y) ∑_i=1^n X_n,i(x,y), ρ_n(x,y,s,t) ( ∑_i=1^n [ X_n,i(x,y) - X_n,i(s,t)^2 ])^1/2. To apply <cit.> we need to check the following. i). X_n,i is manageable <cit.> with respect to the envelope Φ_n (ϕ_n,1,…, ϕ_n,n) with ϕ_n,i/√(n)( 2 Z_i^2(0) + 2 M_i^2 + L). ii). R(x,y,s,t) lim_n →∞[S_n(x,y)S_n(s,t)], x,y,s,t ∈ [0,1]. iii). lim sup_n →∞∑_i=1^n [ϕ_n,i^2] < ∞. iv). lim_n →∞∑_i=1^n [ ϕ_n,i^2 _ϕ_n,i > ϵ] = 0, ϵ > 0. v). The limit ρ(x,y,s,t) lim_n →∞ρ_n(x,y,s,t), x,y,s,t ∈ [0,1] , is well-defined and for all deterministic sequences (x_n, y_n)_n ∈, (s_n, t_n)_n∈∈ [0,1]^2 with ρ(x_n, y_n,s_n, t_n) → 0 it also holds ρ_n(x_n, y_n, s_n, t_n) → 0. Ad i): As in the proof of Lemma <ref>, iii), the random vector Φ_n is an envelope of X_n (X_n,1, …, X_n,n) since X_n,i(x,y) ≤1/√(n)∑_j<k^p xyh ( 2 Z_i^2(0) + 2 M_i^2 + L) ≤/√(n)( 2 Z_i^2(0) + 2 M_i^2 + L) , by using <ref>, where we may assume ≥ 1. We make use of <cit.>. We need to show that there exist constants K_1, K_2 and b ∈^+ such that for all (x,y), (x^', y^') ∈ [0,1]^2 it holds xy - x^'y^'_∞≤ K_1 ϵ^b ⇒X_n,i(x,y) - X_n,i(x^', y^')≤ K_2 ϵ ϕ_n,i, ∀ i = 1,…, n . Since X_n,i(x,y) = X_n,i(y,x), we can assume that (x,y), (x^', y^') ∈ T^2. We distinguish the cases ϵ≥ h^γ and ϵ < h^γ. For the first case ϵ≥ h^γ consider 1/√(n)∑_j<k^p ( xyh - x^'y^'h ) jk ≤1/√(n)∑_j<k^p xyh jk - Z_i^⊗2(x,y) + 1/√(n)Z_i^⊗2(x,y) - Z_i^⊗2(x^',y^') + 1/√(n)Z_i^⊗2(x^', y^') - ∑_j<k^p x^'y^'h jk, we shall check (<ref>) for K_1 = 1, K_2 = 3 and b =1/γ. Let (x,y)^⊤, (x^', y^')^⊤∈ [0,1]^2 such that max{x-x^', y - y^'}≤ϵ ^1/γ. By Assumption <ref> the second term can be estimated by 1/√(n)Z_i^⊗2(x,y) - Z_i^⊗2(x^',y^') ≤( 2 Z_i^2(0) + 2 M_i^2 + L)/√(n) xy - x^'y^'_∞^γ≤ϕ_n,iϵ. For the first and similarly the third term one gets that 1/√(n)∑_j<k^p xyh jk - Z_i^⊗2(x,y) ≤1/√(n)∑_j<k^p xyh jk - Z_i^⊗2(x,y) ≤ (( 2 Z_i^2(0) + 2 M_i^2 + L)/√(n) h^γ≤ϕ_n,i ϵ, since the weights vanish for max{x_j -x, x_k -y}≥ h. Adding the three terms concludes (<ref>) for the first case ϵ≥ h^γ . For the second case ϵ < h^γ, again take max{x-x^', y - y^'}≤ϵ ^1/γ. By <ref> we get 1/√(n)∑_j<k^p ( xyh - x^'y^'h ) jk ≤∑_j<k^p xyh - x^'y^'h ϕ_n,i ≤2 /h xy -x^'y^'_∞ ϕ_n,i ≤2 /h ϵ^1-1/γ ϕ_n,i , which yields (<ref>) in the second case. <cit.> then yields manageability. 
Ad ii). [ S_n(x,y) S_n(x^', y^')] = 1/n∑_i,l = 1^n∑_j<k^p ∑_r<s^p xyh w_r,s(x^', y^';h) [ jk Z_l;r,s^⊗ 2] = ∑_j<k^p ∑_r<s^p xyh w_r,s(x^', y^';h) R(x_j,x_k, x_r, x_s). By Assumption <ref>, R∈ H_D^2(ζ, L̃) and therefore we get by the same calculations as for the bias in the supplementary appendix <ref> sup_x,y,x^',y^'∑_j<k^p ∑_r<s^p xyh w_r,s(x^', y^';h) R(x_j,x_k, x_r, x_s) - R(x,y,x^', y^') = O(h^ζ) . Ad iii). Follows immediately from Assumption <ref> since M has finite fourth moment. Ad iv). Follows by Assumption <ref> and the dominated convergence theorem. Ad v). ρ_n^2(x,y,x^', y^') = ∑_j<k^p ∑_r<s^p ( xyh - x^'y^'h)(w_r,s(x,y,h)-w_r,s(x^', y^';h))[ Z_1;j,k^⊗ 2Z_1;r,s^⊗2] = ∑_j<k^p ∑_r<s^p ( xyh w_r,s(x,y;h) - xyh w_r,s(x^',y^';h) - x^'y^'h w_r,s(x,y;h) + x^'y^'h w_r,s(x^', y^';h)) R(x_j,x_k,x_r,x_s) ) n →∞⟶ R(x,y,x,y) - 2R(x,y,x^', y^') + R(x^', y^', x^', y^'), by the same argument as for ii). Therefore the limit is well defined in for all x,y,x^', y^'∈ [0,1]. Given the deterministic sequences (x_n,y_n) and (x^'_n, y^'_n) such that ρ(x_n,y_nx^'_n,y^'_n) → 0, n →∞, we get 0 ≤ρ_n(x_n,y_n,x_n^',y_n^') ≤ρ_n(x_n,y_n,x_n^',y_n^') - ρ(x_n,y_n,x_n^',y_n^') + ρ(x_n,y_n,x_n^',y_n^') ≤sup_x,y,x^',y^'ρ_n(x,y,x^',y^') - ρ(x,y,x^',y^') + ρ(x_n,y_n,x_n^',y_n^')n →∞⟶ 0, since the convergence in ii) holds uniformly and hence similarly also the convergence in the first calculation. § SUPPLEMENT: AUXILIARY RESULTS FOR PROOF OF THEOREM <REF>. In the situation of the proof of Theorem <ref> the symmetric square root Σ_l^1/2∈^p× p is given by Σ^1/2_l = I_p + √(1+ v_2^2)-1/ v_2^2 v v^⊤ , The matrix Σ_l^1/2 is symmetric. Further we have Σ_l^1/2Σ_l^1/2 = I_p + 2 √(1 + v_2^2)-1/ v_2^2 v v^⊤ + 1 + v_2^2 - 2 √(1 + v_2^2) + 1/ v_2^2 v v^⊤I_p + v v^⊤ = Σ_l . Let x,y ∈^p, x, y ≠ 0, and define the matrix M = xx^⊤ - yy^⊤∈^p. Then the eigenvalues of M are given by 0 and λ_1,2 = y_2^2 - x_2^2/2±( (y_2^2 - x_2^2/2)^2 + x_2^2 y_2^2 - xy ^2)^1/2 and in consequence it holds that (I_p + M ) = (1+ λ_1)(1+ λ_2) = 1 + y _2^2 - x_2^2 + x_2^2 y_2^2 - xy ^2 . Eigenvectors v ∈^p of M need to be of the form r x + s y = v, r,s ∈ and fulfil M v = (xx^⊤ - yy^⊤)( r x + s y) = r x_2^2 x - r xy y + s xy x - s y _2^2 y =( r x_2^2 + s xy ) x - ( r xy + s y_2^2) y != λ (r x + s y) for an eigenvalue λ≠ 0. We have v ≠ 0 for all λ such that the previous system of linear equations has non trivial solutions, speaking x_2^2 - λ xy- xy- y_2^2 - λ!= 0 . The solution of λ^2 + λ ( y_2^2 - x_2^2) - ( x_2^2 y_2^2 - xy^2) = 0 is given by λ_1,2 in (<ref>). In the situation of the proof of Theorem <ref> the eigenvalues of the matrix A = v v^⊤ - Σ^1/2_p× p/p+1Σ^1/2 are given by λ_1,2(A) = - p v_2^2 - v_1^2 + v_2^2 - p /2(p+1)±( ( p v_2^2 - v_1^2 + v_2^2 - p /2(p+1))^2 + p v_2^2 - v_1^2/ p+1)^1/2 . Calculate A by A vv^⊤ - Σ^1/2 _p× p/p+1Σ^1/2 = ( 1 + (√(1+v^⊤ v)-1)^2/v_2^4 (1+p)v_1^2) vv^⊤ - v_1 (√(1+ v_2^2)-1)/v_2^2 (1+p)((v,…,v) + v^⊤⋮v^⊤) - _p × p/p+1 = v v^⊤ - ( _p/√(p+1) + v_1 ( √(1+ v_2^2)-1)/ v_2^2 √(1+p) v) ( _p^⊤/√(p+1) + v_1 ( √(1+ v_2^2)-1)/ v_2^2 √(1+p) v^⊤). The entries of A are given by a_ij = v_i v_j - ( 1/√(p+1) + v_1 (√(1+ v_2^2)-1)/ v_2^2 √(1+p) v_i) ( 1/√(p+1) + v_1 ( √(1+ v_2^2) -1 )/ v_2^2 √(1+p) v_j ) . Let w _p/√(p+1) + v_1 (√(1+ v_2^2)-1)/ v_2^2 √(1+p) v , then w_2^2 = p /p+1 + 2 v_1^2 (√(1+ v_2^2)-1)/ v_2^2 (p+1) + v_1^2 (√(1+ v_2^2)-1)^2/ v_2^2 (p+1) = p/p+1 + v_1^2 (√(1+ v_2^2)-1)/ v_2^2 (p+1)( 2 + √(1+ v_2^2) -1 ) = p+ v_1^2/p+1 . 
By Lemma <ref> we get the eigenvalues of A by λ_1,2(A) = w_2^2 - v_2^2/2±( ( w_2^2 - v_2^2/2)^2 + w_2^2 v_2^2 - wv^2 )^1/2 . Computing the parts w_2^2 - v_2^2 = - p v_2^2 - v_1^2 + v_2^2 - p /p+1 , and wv ^2 = v_1/p+1 ( 1 - ( 1- √(1+ v_2^2))) ^2 = p v_2^2 + v_1^2 v_2^2/p+1 , and v_2^2 w_2^2 = v_2^2/p+1 ( p + v_1^2) = p v_2^2 + v_1^2 v_2^2/ p +1 , concludes to v_2^2 w_2^2 - vw ^2 = p v_2^2 - v_1^2/ p+1 = p ∑_k = 1^p ( 1+ g(x_k))^2 - ( ∑_k = 1^p ( 1+ g(x_k) ) )^2 . In the situation of the proof of Theorem <ref> it holds ∑_k = 1^p g^2(x_k) ≃ ph^2γ + 1 , and ∑_k=1^p g(x_k) ≃ p h^γ + 1 . In particular it holds ∑_k = 1^p g^2(x_k) ≤ C^L log(n p)/n , where C^L>0 is a constant that depends monotonically increasing on L. For the upper bound we have ∑_j=1^p g̃^2(x̃_j,l) ≤ L^2 p h_n ( h_n/2 )^2γ 2 f_maxexp(-2) ≤ C^L log(n p)/n , for C^L L^2 2^1-2γf_maxexp(-2), by making use of the structure of x_k and Assumption <ref>. For the lower bound we consider the equidistant design with x_j = j/p, j = 1,…,p and α∈ (0,1/8]. Then exp( - 1/1-x^2) _x <1≥exp( - 1/1-x^2) _x<1-2α , x ∈ , and exp( - 1/1-x^2) ≥ c_α > 0 , for all x < 1 - 2α . Further x̃_j,l < 1-2α ⇔ x_j ∈[l-(1-α)/N_n,p, l-α/N_n,p] ⇔ i ∈[ p l-(1-α)/N_n,p, p l-α/N_n,p] , and p l-α/N_n,p -p l-(1-α)/N_n,p = p (1-2α)/N_n,p . Therefore ∑_j=1^p _N_n,p x_j ∈ [l-1, l] = ∑_j=1^p _N_n,p x_j ∈ [l-(1-α), l -α]≥⌊p (1-2α)/N_n,p⌋ . In consequence ∑_j = 1^p g̃^2(x̃_j,l) ≥⌊p (1-2α)/N_n,p⌋ c_α^2 L^2 (h_n/2)^2γ ≥p (1-2α)/4 c_0 (n p/log(n p))^1/2γ c_α^4 L^4(1/4 c_0(log(n p)/n p)^1/(2γ+1))^2γ =C log(n p)/n , for the constant C C̃_c_0,γ, α, L (1-2α)c_α^4L^4/(4c_0)^2γ+1. The second claim follows analogously. In the situation of the proof of Theorem <ref> it holds τ_n,p≥ (n p)^-C^L, t ≃log(n p) , A_F^2 ≃log(n p)/n , A_op≃(log(n p)/n)^1/2, where C^L>0 is an adaptive constant is large for a large L. We estimate with Lemma <ref> that τ_n,p = exp(- np/p+1∑_j = 1^p g^2(x_k)) ≥exp( - (C^L log(n p)) = (n p)^C^L . In the following we need that by Lemma <ref> it holds p v_2^2 - v_1^2/p+1 = p/p+1∑_k = 1^p g^2(x_k) - 1/p+1( ∑_k = 1^p g(x_k) )^2 ≃ p h^2γ + 1 - (p h^γ + 1)^2/p+1≃ p h^2γ + 1≃log(n p)/n and v_2^2 - p/p+1 = 1/p+1∑_k = 1^p g^2(x_k) - 2/p+1∑_k = 1^p g(x_k) ≃ -h^γ + 1 , since h = N_n,p^-1→ 0. For t = 2log(τ_n,p^-1) + n (log(Σ_l/Σ_0) -tr(A)) we calculate tr(A) = v_2^2 - ∑_k = 1^p ( 1/√(1+p) - v_1 (1- √(1+ v_2^2))/ v_2^2 √(1+p)v_k)^2 = v_2^2 - p/p+1 + 2 v_1^2 (1-√(1+ v_2^2))/ v_2^2 (p+1) - v_1^2 (1- √(1+ v_2^2))^2/ v_2^2 (p+1) = v_2^2 - p/p+1 - v_1^2/p+1 = p v_2^2 - v_1^2/p+1 + v_2^2 - p/p+1 . With log(x) = ∑_l = 1^∞ (-1)^l+1 (x-1)^l/l, x ≥ 1 we get log(Σ_l/Σ_0) = log( 1+ v_2^2/1+p) = ∑_l = 1^∞(-1)^l+1/l( v_2^2 - p/p+1)^l . Together with (<ref>) and (<ref>) this yields log( Σ_l/Σ_0) - tr(A) = ∑_l = 2^∞(-1)^l+1/l( v_2^2 - p/p+1)^l - p v_2^2 - v_1^2/p+1 ≥ - p v_2^2 - v_1^2/p+1(<ref>)≃log(n p)/n . Together with (<ref>) this yields t≃log(np). To calculate the order of the norms we need that by Lemma <ref> the eigenvalues of A are given by λ_1,2(A) = - p v_2^2 - v_1^2 + v_2^2 - p /2(p+1)±( ( p v_2^2 - v_1^2 + v_2^2 - p /2(p+1))^2 + p v_2^2 - v_1^2/ p+1)^1/2 -a_p ±( a_p^2 + b_p )^1/2 . By (<ref>) and (<ref>) we have a_p ≃ ph^2γ + 1 - h^γ + 1 and b_p ≃ ph^2γ + 1. Then the Frobenius-Norm is given by A_F^2 = λ_1^2(A) + λ_2^2(A) = (-a_p - ( a_p^2 + b_p )^1/2)^2 + (-a_p + ( a_p^2 + b_p )^1/2)^2 p = 4 a_p^2 + 2 b_p ≥ 2 b_p ≃ p h^2γ + 1≃log(n p)/n . Note that for n large enough it also holds A_F ≲log(np)/n. 
For the operator norm we write A_op = max_j = 1,…,pλ(A^⊤ A) = max{λ_1(A)}, λ_2(A)≥λ_1(A) + λ_2(A)/2≃log(n p)/n . Additionally it also holds A_op≤λ_1(A) + λ_2(A)≃log(np)/n. § SUPPLEMENT: PROOFS RELATED TO RESTRICTED LOCAL POLYNOMIAL ESTIMATION From Example <ref> recall the notation U_m(u_1,u_2)(1, P_1(u_1, u_2), …, P_m(u_1,u_2))^⊤, u_1,u_2∈[0,1], where for l=1,…, m we set P_l(u_1, u_2)(u_1^l/l!, u_1^l-1u_2/(l-1)!, u_1^l-2u_2^2/(l-2)!2!,…, u_2^l/l!), u_1,u_2 ∈ [0,1]. Moreover for a non-negative kernel function K ^2 → [0, ∞) and a bandwidth h >0 we set K_h(x_1,x_2) K(x_1/h, x_2/h) and U_m,h(u_1, u_2) U_m(u_1/h, u_2/h). Given an order m ∈ and observations ((x_j, x_k), z_j,k), 1 ≤ j<k ≤ p, we consider LP(x, y) _ϑ∈^N_m∑_j<k^p(z_j,k - ϑ^⊤ x_j-xx_k-y)^2 x_j - xx_k - y , 0 ≤ x ≤ y ≤ 1 , and define the local polynomial estimator of the target function with order m at (x,y) as LP_1(x,y) {[ (LP( x,y ))^⊤ U_m(0,0), x ≤ y,; (LP(y,x))^⊤ U_m(0,0), otherwise, ]. for (x,y)^⊤∈ [0,1]^2 , since U_m(0,0) is the first unit vector in N_m dimensions. Let the kernel function K^2 → [0, ∞) have compact support in [-1,1]^2 and satisfy K_min_{- Δ≤ u_1,u_2 ≤Δ}≤ K(u_1, u_2) ≤ K_max for all u_1,u_2 ∈ , for a constants Δ∈_+, K_min, K_max > 0. Then under Assumption <ref> there exist a sufficiently large p_0 ∈ and a sufficiently small h_0 >0 such that for all p≥ p_0 and h∈(c/ p,h_0], where c >0 is a sufficiently large constant, the local polynomial estimator in (<ref>) with any order m ∈ is unique. Moreover, the corresponding weights in (<ref>) satisfy Assumption <ref> with γ = m. Let us proceed to preparations of the proof of Lemma <ref>. For x≤ y, x,y ∈ [0,1] set B_p,h(x,y) 1/(p h)^2∑_j<k^px_j-xx_k-y U_h^⊤x_j-xx_k-yx_j-xx_k-y ∈^N_m× N_m and a_p,h(x,y)1/(p h)^2∑_j<k^px_j-xx_k-yx_j-xx_k-y z_j,k ∈^N_m , then (<ref>) is the solution to the weighted least squares problem LP(x,y) = _ϑ∈^N_m( - 2ϑ^⊤ a_p,h(x,y) + ϑ^⊤ B_p,h(x,y) ϑ). The solution is determined by the normal equations a_p,h(x,y) = B_p,h(x,y)ϑ. In particular, if B_p, h(x, y) is positive definite, the solution in (<ref>) is uniquely determined and we obtain LP_1(x,y) = ∑_j<k^p xyh z_j,k with (now stating both cases) xyh 1/(p h)^2 U^⊤ 00 ·{[ B_p,h^-1(x,y) x_j-xx_k-y x_j-xx_k-y, for x≤ y,; B_p,h^-1(y,x) x_j-yx_k-x x_j-yx_k-x, for x> y, ]. so that the local polynomial estimator is a linear estimator. The more challenging part is to prove (LP1) now for a sufficiently large number p of design points and an uniformly choice of the bandwidth h. Suppose that the kernel K suffices (<ref>) in Lemma <ref> and Assumption <ref> is satisfied. Then there exist a sufficiently large p_0 ∈ and a sufficiently small h_0 ∈_+ such that for all p≥ p_0 and h∈(c/p, h_0], where c ∈_+ is a sufficiently large constant, the smallest eigenvalues λ_min(B_p, h(x, y)) of the matrices B_p, h(x,y), which are given in (<ref>), are bounded below by a universal positive constant λ_0 >0 for any x, y∈ [0,1] with x<y. An immediate consequence of Lemma <ref> is the invertibility of B_p, h(x,y) for all p≥ p_0, h∈( c/ p, h_0] and x<y, x, y ∈ [0,1], and hence also the uniqueness of the local polynomial estimator for these p and h. In <cit.> the lower bound for the smallest eigenvalues has only be shown for a fixed sequence h_p (for d=1) of bandwidths which satisfies h_p → 0 and p h_p →∞. In contrast, we allow an uniformly choice of h which results, in particular, in the findings of Section <ref>. In the following let v ∈^N_m. 
We show that there exist a sufficiently large p_0 ∈ and a component wise sufficiently small h_0∈_+ such that the estimate inf_ p≥ p_0inf_ h∈( c/ p, h_0]inf_x,y ∈ [0,1] x<yinf_ v_2=1 v^⊤ B_ p, h( x,y) v ≥λ_0 is satisfied. Then we obtain for these choices of p and h, and any x,y ∈ [0,1] also λ_min(B_ p, h(x,y)) = e_min^⊤(B_ p, h(x,y)) B_ p, h(x,y) e_min(B_ p, h(x,y)) ≥inf_ v_2=1 v^⊤ B_ p, h(x,y) v ≥λ_0 , where e_min(B_ p, h(x,y)) ∈^m+1 is a normalised eigenvector of λ_min(B_ p, h(x,y)). Let E^1{(x,y)^⊤∈^2 | x,y∈ [0,Δ), x ≤ y}, E^2 {(x,y)^⊤∈^2 | x∈(-Δ, 0], y∈ [0,Δ) } and E^3 {(x,y)^⊤∈^2 | x,y∈ (-Δ,0], x ≤ y}, where Δ∈^+ is given in (<ref>). We set λ_i(v) f_min^2 K_min∫_E^i(v^⊤ U_m( z))^2 z , λ_i inf_v_2 =1λ_i(v) for all i=1,2,3. Applying <cit.> with K( z) = _E^i( z), z ∈^2, leads to λ_i(v)≥λ_i >0. Therefore we find a λ_0>0 such that min(λ_1, λ_2, λ_3)>λ_0>0, e.g. λ_0min(λ_1, λ_2, λ_3)/2. Now we want to specify a partition S_1 ∪ S_2 ∪ S_3= { (x,y)^⊤| 0 ≤ x ≤ y ≤ 1} and functions A^(1)_p,h(x,y;v),A^(2)_p,h(x,y;v) and A^(3)_p,h(x,y;v) such that v^⊤ B_ p, h(x,y) v≥ A^(i)_ p, h(x,y;v) , (x,y)^⊤∈ S_i , i =1,2,3, and sup_(x,y)^⊤∈ S_isup_v_2 =1|A^(i)_ p, h( x,y;v) - λ_i(v)| ≤c_1/p h , i =1,2,3, hold true for a positive constant c_1>0. Consider h∈( c/ p, h_0] with c c_1/(min(λ_1, λ_2, λ_3)-λ_0) yields c_1/(p h) < c_1/c ≤λ_i-λ_0 for all i =1,2,3. Hence, it follows by (<ref>) and (<ref>) that inf_(x,y)^⊤∈ S_iinf_v_2 =1v^⊤ B_ p, h(x,y) v =inf_(x,y)^⊤∈ S_iinf_v_2 =1(λ_i(v)+v^⊤ B_ p, h(x,y) v-λ_i(v)) ≥λ_i+inf_(x,y)^⊤∈ S_iinf_v_2 =1(A^(i)_p,h(x,y;v)-λ_i(v)) =λ_i-sup_(x,y)^⊤∈ S_isup_v_2 =1(λ_i(v)-A^(i)_p,h(x,y;v)) ≥λ_i-sup_(x,y)^⊤∈ S_isup_v_2 =1|A^(i)_p,h(x,y;v)-λ_i(v)| ≥λ_i-c_1/p h≥λ_0 , which leads to (<ref>) because of inf_p≥p_0inf_ h∈(c/p, h_0] inf_0 ≤ x ≤ y ≤ 1inf_v_2 =1v^⊤ B_ p, h(x,y) v = inf_p≥p_0inf_ h∈(c/p, h_0]min_i=1,2,3(inf_(x,y)^⊤∈ S_iinf_v_2 =1v^⊤ B_ p, h(x,y) v) ≥λ_0 . Next we show (<ref>) and (<ref>). Let I_0 [0,1-Δ h - 1/(2f_min p) ] and I_1 [1-Δ h - 1/(2f_min p),1 ]. We define S_1{ (x,y)^⊤∈ I_0^2 | x ≤ y}, S_2 { (x,y)^⊤∈ I_0 × I_1} and S_3 { (x,y)^⊤∈ I_1^2 | x ≤ y}. It is clear that ⋃_i=1^3S_i={(x,y)^⊤| 0 ≤ x ≤ y ≤ 1}. Define x̃_j (x_j-x)/h, x ∈ [0,1] and ỹ_j (x_j - y)/h for all 1 ≤ j ≤ p, where x_1,…,x_p are the design points. We have to differentiate two different cases. At first let x ∈ I_0. Then we get x̃_1 ≤1/2f_min p h - x/h≤1/2f_min p h , x̃_p ≥1-1/2f_min p-x/h≥1-(1-Δ h - (2f_min p)^-1)/h - 1/2f_min p h = Δ by (<ref>) in Lemma <ref>. For appropriate p and h∈ ( c/ p, h_0] the quantity (2f_min p h)^-1 gets small if c is chosen large enough. Consequently, the points x̃_1,…,x̃_p form a grid which covers an interval containing at least [1/(2f_min p h), Δ]. Hence, there exist 1 ≤ j_* j_*(x) < j^* j^*(x) ≤ p such that x̃_j_*≥ 0 (j_*=1 x̃_j_*-1<0) and x̃_j^*≤Δ x̃_j^*+1>Δ are satisfied. Here denotes the logical and, and the logical or. For the second case let x∈ I_1. Then we obtain the estimates x̃_1 ≤1/2f_min p h- x/h≤Δ -1/h≤ -Δ , x̃_p ≥ -1/2f_min p h , since h has to be sufficiently small and p sufficiently large. Consequently, the points x̃_1,…,x̃_p form a grid which covers an interval containing [-Δ, -1/(2f_min p h)]. Define j̃_* and j̃^* in the same manner. Of course this also holds for ỹ_1, …, ỹ _p. We set g_v(x,y)(v^⊤ U(x, y) )^2, v∈^N_m. 
For 0 ≤ x ≤ y ≤ 1 we can further estimate v^⊤ B_ p, h(x,y) v = 1/(p h)^2v^⊤(∑_j<k^p U_mx̃_jỹ_k U_m^⊤x̃_jỹ_k Kx̃_jỹ_k) v ≥f_min^2K_min/f_min^2 (p h)^2∑_j<k^p (v^⊤ U_mx̃_jỹ_k)^2 _[- Δ, Δ]^2 ( x̃_j, ỹ_ k) ≥ f_min^2K_min∑_j<k^p-1 g_vx̃_jỹ_k(x̃_j+1-x̃_j)(ỹ_k+1 - ỹ_k)_[ 0, Δ)^2( x̃_j,ỹ_k) ≥ f_min^2K_min∑_j = j_*^j^*∑_ k = k_* j<k^k^* g_vx̃_jỹ_k(x̃_j+1-x̃_j)(ỹ_k+1 - ỹ_k) A^(i)_p,h(x,y;v), i = 1,2,3, for (x,y)^⊤∈ S_1 by Assumption <ref>, inequality (<ref>) in Lemma <ref> for the x̃_j, ỹ_k and the presentation of B_ p, h(x,y) in (<ref>). We start with the case (x,y)^⊤∈ S_1=I_0× I_0. By inserting this function in (<ref>), dropping the scalar f_min^2K_min and oppressing the sups the object of interest is given by | ∑_j = j_*^j^* ∑_ k = k_* j<k^k^* g_vx̃_jỹ_k(x̃_j+1-x̃_j)(ỹ_k+1 - ỹ_k) -∫_E_1 g_v( z) z | In order to bound these terms of the sum separately we note that g_v(z)=(v^⊤ U_m( z) )^2 is a bivariate polynomial function and therefore Lipschitz-continuous on E_1, E_2 and E_3 respectively. By Lemma <ref> it holds x̃_j+1 - x̃_j≤ (f_min p h)^-1. Therefore the difference of every point of z = (z_1, z_2)^⊤∈ E_1 to a design point (x̃_j, ỹ_k) is of order (p h)^-1. Figure <ref> illustrates this step. The amount of points (x̃_j, ỹ_k)^⊤, j_*≤ j ≤ j^*, k_* ≤ k ≤ k^*, j < k, is bounded by Lemma <ref> ∑_j,k = 1^p _E_1(x̃_j, ỹ_k) ≤(2f_maxmax(Δ p h, 1 ))^2 = O ((p h)^2). Therefore the sum approximates the integral in form of a Riemann-sum of a Riemann-integrable, since it is Lipschitz-continuous, function with the error rate getting small with order (p h)^-1. This concludes the proof via the previous steps. Now we can prove Lemma <ref>. By Lemma <ref> there exist a sufficiently large p_0 ∈ and a sufficiently small h_0∈_+ such that for all p≥ p_0 and h∈( c/ p, h_0], where c>0 is a sufficiently large constant such that the local polynomial estimator in (<ref>) with any order m ∈ is unique and a linear estimator with weights given in (<ref>). We make use of Lemma <ref> by B_ p, h^-1(x,y)_M,2≤λ_0^-1 for all p≥ p_0, h∈( c/ p, h_0] and x ≤ y ∈ [0,1], where M_M,2 is the spectral norm of a symmetric matrix M ∈^N_m× N_m. <ref>: Let Q be a bivariate polynomial of degree ζ≥ 1 with r_1 + r_2 = ζ, r_1, r_2 ∈. By the bivariate Taylor expansion we have Q(x_j, x_k) = Q(x,y) + ∑_i = 1^r_1∑_l = 1^r_2(x_j - x)^i (x_k-y)^l/i! l!·∂^i+l/∂^i x ∂^l yQ(x_j, x_k) = q(x,y)^⊤ U_hx_j-xx_k-y, where q(x,y) (q_0(x,y),…,q_ζ(x,y) )^⊤, with q_l(x,y)(∂^l/∂^l x q(x,y)/l!, ∂^l/∂^l-1x∂ y q(x,y)/(l-1)!,…,∂^l/∂^l y q(x,y)/l!) . Setting z_j,k = Q(x_j, x_k), we get for x ≤ y LP(x,y) = _ϑ∈^N_ζ∑_j < k^p ( Q(x_j, x_k) - ϑ ^⊤x_j-xx_k-y)^2 x_j-xx_k-y = _ϑ∈^N_ζ∑_j < k^p ( (q(x, y) - ϑ)^⊤x_j-xx_k-y)^2 x_j-xx_k-y = _ϑ∈^N_ζ(q(x, y) - ϑ)^⊤ B_p,h(x,y)(q(x, y) - ϑ) = q(x,y), since B_p,h(x,y) is positive definite. <ref>: This follows directly by the support of the kernel function K and the display (<ref>). <ref>: If design points are dispersed according to Assumption <ref>, then by Lemma <ref> and the fact U_h(0,0)_2 = 1 we get xyh ≤1/(p h)^2U_h(0,0)_2 B_p,h^-1_M,2x_j-xx_k-y x_j-xx_k-y_2 ≤K_max/(p h)^2 λ_0x_j-xx_k-y_2 _max{x_j-x,x_k-y}≤ h ≤4 K_max/(p h)^2 λ_0 . <ref>: We divide the proof in three cases with respect to the fact whether the weights vanish or not. In the following let 1 ≤ j ≤ p and x, y ∈ [0,1]. Let min(max(x- x_ j ,y- x_ k), max(y^'-x_k,y^'-x_k)) > h, then by <ref> both weights xyh and x^'y^'h vanish, and hence <ref> is clear. Let max(max(x- x_ j ,y- x_ k), max(y^'-x_k,y^'-x_k)) > h. 
We assume max(x- x_ j ,y- x_ j)>h and max(y-x_k,y-x_j) ≤ h without loss of generality. Once again <ref> leads to xyh = 0, and hence the Cauchy-Schwarz inequality, U(0,0)_2=1 and (<ref>) imply | xyh - x^'y^'h | = 1/p h| U^⊤(0,0) B_ p, h^-1(x,y) x-x_jy-x_k| x-x_jy-x_k ≤1/ p^ 1 h^ 1U(0,0)_2 B_p,h^-1(x,y) x-x_jy-x_k_2 x-x_jy-x_k ≤1/p hB_ p, h^-1(x,y)_M,2 x-x_jy-x_k_2 x-x_jy-x_k ≤1/λ_0 p hx-x_jy-x_k(∑_|r|=0, r=(r_1,r_2)∈^2^m ((x_j- x)^r_1(x_k-y)^r_2/h^r r_1!r_2!)^2)^1/2 ≤c_1/λ_0 p hx-x_jy-x_k for a positive constant c_1>0. In the last step we used the fact that the sum can not get arbitrarily large, also for component wise small h, because the kernel K has compact support in [-1,1]. If max(x-y,x^'-y^')>h, we use the upper bound of the kernel function in (<ref>) and obtain | xyh - x^'y^'h | ≤c_1 K_max/λ_0 p h . Conversely, if max(x-y,x^'-y^')≤ h, we add K_h(x_j- x^', x_k-y^') = 0 and estimate | xyh - x^'y^'h | ≤c_1/λ_0 p^ 1 h^ 1x-x_jy-x_k -x^'-x_jy^'-x_k ≤c_1 L_K/λ_0 p^ 1 h^ 1 max(x-y,x^' -y^')/ h , because of the Lipschitz continuity of the kernel. In total we get | xyh - x^'y^'h | ≤c_1 max(K_max,L_K)/λ_0 p h(max(x-y,x^' -y^')/ h∧ 1 ) . Let max(max(x- x_ j ,y- x_ k), max(y^'-x_k,y^'-x_k)) ≤ h, then both weights doesn't vanish and we have to show a proper Lipschitz property for xyh = 1/ p h U^⊤(0,0) B_p,h^-1(x,y) x-x_jy-x_k x-x_jy-x_k . The kernel K and polynomials on compact intervals are Lipschitz continuous, hence it suffices to show that B_ p, h^-1(x,y) has this property as well. Then the weights are products of bounded Lipschitz continuous functions and, thus, also Lipschitz continuous. The entries of the matrix B_ p, h(x,y), which is defined in (<ref>), considered as functions from [0,1] to are Lipschitz continuous. Indeed they are of order one by using Assumption <ref> and Lemma <ref>. Hence, the row sum norm B_ p, h(x,y)-B_ p, h(x^', y^')_M,∞ is a sum of these Lipschitz continuous functions and, in consequence, there exists a positive constants L_∞ >0 such that B_ p, h(x,y)-B_ p, h(x^', y^')_M,∞≤ (N_m+1) L_∞ max(x-y,x^' -y^')/ h is satisfied. Since the matrices are symmetric the column sum norm is equal to the row sum norm and we obtain B_ p, h(x,y)-B_ p, h(x^', y^')_M,2≤B_ p, h(x,y)-B_ p, h(x^', y^')_M,∞≤ (N_m+1) L_∞ max(x-y,x^' -y^')/ h . This leads together with (<ref>) and the submultiplicativity of the spectral norm to B_ p, h^-1(x,y)-B_ p, h^-1(x^',y^') _M,2 = B_ p, h^-1(x^',y^') ( B_ p, h(x^', y^')-B_ p, h(x,y) ) B_ p, h^-1(x,y)_M,2 ≤ B_ p, h^-1(x,y)_M,2 B_ p, h(x^', y^')-B_ p, h(x,y)_M,2 B_ p, h^-1(x,y)_M,2 ≤(N_m+1) L_∞/λ_0^2 max(x-y,x^' -y^')/ h , which is the Lipschitz continuity of B_ p, h^-1(x,y) with respect to the spectral norm. So finally there exists a positive constant c_2>0 such that | xyh - x^'y^'h | ≤c_2/2 p h max(x-y,x^' -y^')/ h≤c_2/ p h is satisfied. Here we used in the last step that max(max(x-x_j,y-x_j), max(y-x_k,y-x_j)) ≤ h implies max(x-y,x ^'-y^') ≤ 2 h. In the end we choose C_3 ≥max( c_1 max(K_max,L_K)/λ_0, c_2) and obtain Assumption <ref>. Here we state two auxiliary Lemmas which result as a consequence of Assumption <ref> and which were used in the proof of Lemma <ref>. For their proofs see <cit.>. Let Assumption <ref> be satisfied. Then for all j = 1,…,p it follows that j-0.5/f_max p≤ x_j≤j-0.5/f_min p and 1-p-j+0.5/f_min p≤ x_j≤ 1-p-j+0.5/f_max p , and for all 1 ≤ j < l ≤ p that l-j/f_max p≤ x_l-x_j≤l-j/f_min p . Suppose that Assumption <ref> is satisfied. 
Then we obtain for all p ≥ 1 and any set S = [a_1, b_1] × [a_2 × b_2] ⊆ [0,1]^2 the estimate ∑_j, k = 1^p _{(x_k, x_j)^⊤∈ S}≤ 4 f_max^2 max(p(b_1-a_1), 1 ) max(p(b_2-a_2), 1 ) . In particular Assumption <ref> is satisfied. § SUPPLEMENT: ADDITIONAL NUMERICAL RESULTS
http://arxiv.org/abs/2407.13210v1
20240718064910
Improved Esophageal Varices Assessment from Non-Contrast CT Scans
[ "Chunli Li", "Xiaoming Zhang", "Yuan Gao", "Xiaoli Yin", "Le Lu", "Ling Zhang", "Ke Yan", "Yu Shi" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Improved Esophageal Varices Assessment on Non-Contrast CT C. Li, X. Zhang, Y. Gao, et al. Department of Radiology, Shengjing Hospital of China Medical University, 110004, Shenyang, China DAMO Academy, Alibaba Group Hupan Lab, 310023, Hangzhou, China zxiaoming360@gmail.com; yanke.yan@alibaba-inc.com; 18940259980@163.com Improved Esophageal Varices Assessment from Non-Contrast CT Scans Chunli Li1,2,3^⋆Xiaoming Zhang2,3^(,⋆)Yuan Gao2,3^⋆Xiaoli Yin1Le Lu2 Ling Zhang2 Ke Yan2,3^ Yu Shi1^ ========================================================================================================= [1]Equal contributions. Corresponding authors. The work was done during C. Li's internship at Alibaba DAMO Academy. § ABSTRACT Esophageal varices (EV), a serious health concern resulting from portal hypertension, are traditionally diagnosed through invasive endoscopic procedures. Despite non-contrast computed tomography (NC-CT) imaging being a less expensive and non-invasive imaging modality, it has yet to gain full acceptance as a primary clinical diagnostic tool for EV evaluation. To overcome existing diagnostic challenges, we present the Multi-Organ-cOhesion-Network (MOON), a novel framework enhancing the analysis of critical organ features in NC-CT scans for effective assessment of EV. Drawing inspiration from the thorough assessment practices of radiologists, MOON establishes a cohesive multi-organ analysis model that unifies the imaging features of the related organs of EV, namely esophagus, liver, and spleen. This integration significantly increases the diagnostic accuracy for EV. We have compiled an extensive NC-CT dataset of 1,255 patients diagnosed with EV, spanning three grades of severity. Each case is corroborated by endoscopic diagnostic results. The efficacy of MOON has been substantiated through a validation process involving multi-fold cross-validation on 1,010 cases and an independent test on 245 cases, exhibiting superior diagnostic performance compared to methods focusing solely on the esophagus (for classifying severe grade: AUC of 0.864 versus 0.803, and for moderate to severe grades: AUC of 0.832 versus 0.793). To our knowledge, MOON is the first work to incorporate a synchronized multi-organ NC-CT analysis for EV assessment, providing a more acceptable and minimally invasive alternative for patients compared to traditional endoscopy. § INTRODUCTION Esophageal varices (EV) are a significant complication often stemming from chronic liver diseases, with portal hypertension being the pivotal cause driving blood into smaller collaterals within the lower esophagus <cit.>, potentially leading to life-threatening hemorrhages and shock. Detecting EV at an early stage is difficult, with confirmation usually requiring invasive endoscopic procedures. Given that the extent of venous dilation correlates with hemorrhage risk, a precise assessment of EV severity is vital. Invasive endoscopy is the conventional method for EV evaluation and, despite its effectiveness, it is associated with discomfort, potential complications like infection and bleeding, and higher healthcare costs. As a less invasive approach, dynamic contrast-enhanced CT (DCE-CT) allows for detailed visualization of EV and related collaterals, improving patient compliance and reducing discomfort and bleeding risks linked to endoscopy. However, methods such as Yan et al.'s <cit.>, which use radiomic features from DCE-CT, have limited generalizability, and iodinated contrast agents pose risks of adverse reactions. 
Non-contrast CT (NC-CT) offers a quick and low-radiation alternative, and chest NC-CT scanning can be valuable for grading EV. Liver characteristics such as fibrosis and alterations in volume can indicate the pressure within the portal venous system <cit.>, which is critical in appraising EV risk and severity. Portal hypertension-induced changes in the spleen, such as enlargement, indirectly reflect this pressure and suggest EV bleeding risk. Therefore, a combined analysis of the liver, spleen, and esophagus could improve the understanding and grading of EV risks, and enhance clinical decisions. However, NC-CT faces several hurdles in EV assessment: differentiation difficulties due to lower contrast resolution, limited visibility of smaller varices, and the inconsistent distribution of varices throughout the esophagus. Previous attempts to evaluate EV have focused on DCE-CT imaging of the liver, spleen, esophagus, and portal venous system's vascular features <cit.>. However, no current grading methods fully consider the esophagus's close relationships with adjacent structures. To optimize the assessment of EV, we have drawn upon radiologists' comprehensive diagnostic techniques and physicians' clinical diagnostic knowledge, introducing the framework of Multi-Organ-cOhesion-Network (MOON) based on chest NC-CT. Our approach enhances the accuracy of EV evaluation and surpasses previous radiomics methods reliant on DCE-CT imaging. MOON adopts a holistic strategy, simultaneously analyzing the regions of interest (ROIs) from the NC-CT scans of these organs. It employs a multi-organ framework that incorporates the Organ Representation Interaction (ORI) module, allowing features from the liver and spleen to inform the analysis of esophageal characteristics at each step within the network. Additionally, the Hierarchical Feature Enhancement (HFE) module is designed to highlight the esophagus' distinct morphology by synthesizing features across various levels, focusing on areas that exhibit abnormal varices. To achieve uniformity in decision-making across the various organ domains, which is vital for an accurate final diagnosis, we incorporate ordinal regression loss <cit.> and canonical correlation analysis (CCA) <cit.> loss into our training process. This synergistic use of loss functions fosters a cohesive feature integration, thereby reducing inconsistencies in the decision-making process and enhancing the overall diagnostic performance. Our studied dataset is sufficiently extensive, comprising chest NC-CT scans from 1,255 patients diagnosed with EV, each confirmed via endoscopic examination and categorized into three differentiated levels of severity. The goal of MOON is to predict the grade of EV. It exhibits strong diagnostic proficiency, attaining an AUC performance of 0.864 versus 0.736 using DCE-CT <cit.> for severe EV cases and 0.832 versus 0.802 using DCE-CT <cit.> for moderate to severe cases, showcasing its effectiveness in evaluating EV from NC-CT imagery. § METHOD Deep learning, particularly convolutional neural networks (CNNs) <cit.>, has significantly advanced the detection of pathologies or diseases with subtle imaging differences in Hounsfield Units on NC-CT images, often imperceptible to radiologists <cit.>. There are four key designs in our method: (1) To assess EV, a global receptive field is essential. 
Uniformer <cit.> combines convolution with self-attention, which is utilized to adeptly capture esophageal features, providing the comprehensive coverage necessary for accurate EV evaluation. (2) An effective EV assessment necessitates combining data from multiple organs. Our ORI module facilitates this, integrating liver and spleen features with those of the esophagus to improve EV detection. (3) The diversity in varices sizes and esophageal anatomy requires a multi-scale strategy. We answer this need with the HFE module, which assimilates features at various scales to enhance classification adaptability. (4) For effective integration across multiple organ branches and precise identification of the severity grades of EV, we train our model using a hybrid loss function that combines ordinal regression and CCA. This approach enables the model to maintain the ordinal relationships inherent to the various EV severity levels while enhancing the correlation between multi-organ features, reducing bias from reliance on any single organ, and improving the overall diagnostic accuracy and generalizability of the model. §.§ Multi-Organ-cOhesion-Network (MOON) Pipeline of MOON. The UniFormer architecture <cit.>, combining local and global feature extraction, addresses the medical imaging challenge of accurately assessing EV by providing a comprehensive receptive field suitable for the esophagus's unique structure, thereby enabling precise identification of esophageal features without the constraints of traditional 3D CNNs <cit.> or Vision Transformers <cit.>. MOON, depicted in Fig. <ref>(a), is a multi-stage, multi-organ framework incorporating UniFormer as its core for each branch. It starts with an off-the-shelf nnUNet <cit.> for organ segmentation to localize the esophagus, liver, and spleen. Post-segmentation, respective image data are fed dedicated branches to extract features across scales. Within this framework, the ORI module plays a pivotal role, adeptly handling the unique anatomical features of the organs, e.g., the tendency for spleen enlargement and liver lesions in the portal venous system. Its capacity to engage with each organ's distinct properties is vital for accurate EV assessment. The HFE module, in turn, is specifically designed to enhance the esophagus branch, which is essential given that experiments in Sec. <ref> indicate esophageal features are most indicative of EV. Through its focus on the esophagus, the HFE module promotes effective feature integration and sharpens the identification of key esophageal details for precise EV detection. Organ Representation Interaction. To address the nuanced diagnosis of EV stemming from portal hypertension, a comprehensive CT scan analysis of the liver and spleen is required due to the subtle morphological and texture changes they undergo. As shown in Fig. <ref>(b), the ORI module is designed to enhance the interpretation of these images by refining liver and spleen imaging features into a coherent form, directing attention to critical areas. It streamlines the imaging features of these organs, amplifying essential characteristics and employing a self-attention mechanism that operates on tokenized 3D representation to link esophageal features to hepatic and splenic indicators of EV. Given that the features of the esophagus, liver, and spleen at each stage, defined as F_E∈ℝ^H_e× W_e× D_e× C, F_L∈ℝ^H_l× W_l× D_l× C, and F_S∈ℝ^H_s× W_s× D_s× C, respectively. F_E^P, F_L^P, F_S^P∈ℝ^H_n× W_n× D_n× C are pooled features. 
An attention-ready form X_S ∈ℝ^H_nW_n× D_n× 2C is produced by concatenation and reshaping. The query (Q_S), key (K_S), and value (V_S) matrices are generated from X_S through learned transformations. The scaled dot-product attention is applied as Attention(Q_S, K_S, V_S) = SoftMax(Q_SK_S^T/√(d_k))V_S, with d_k representing the dimensionality of the keys, allowing the model to capture feature interactions across the global scope of the input space. The outputs from the attention mechanism are aggregated into a tensor T_S and then projected to an enhanced feature space, resulting in X_P = T_SW_P, where the projection matrix W_P is of size ℝ^2C× 2C. This process consolidates the learned information of adjacent organs and infuses them into the esophageal representation, thereby strengthening the subtle interconnections specific to EV. By finally splitting and then reintegrating X_P, followed by refinement through convolution and subsequent interpolation, ORI module achieves an interwoven feature representation. Hierarchical Feature Enhancement. The HFE module refines the central branch that analyzes esophageal features, sharpening the detection of localized characteristics indicative of EV. It advances beyond former multi-scale fusion methods <cit.> by implementing a cross-attention mechanism. This feature promotes interaction among different depth levels and conveys detail-rich information from the shallow layers, imperative for EV detection, to the in-depth features that represent localized details within an expansive anatomical context. Illustrated in Fig. <ref>(c), through a process of convolution, pooling, and cross-attention, the HFE module adeptly blends shallow and deep features. This strategic combination solidifies the main branch's feature representation, bolstering the network's overall diagnostic accuracy. The HFE module uses convolution and pooling to adjust the dimensions of features from varying depths, starting with the intermediate F_E^1 and moving to the deeper F_E^2, ultimately aligning them with the dimensions of the deepest esophagus feature: F_E^3∈ℝ^H_n× W_n× D_n× C. After pooling, these features are reorganized into an attention-ready form X_C ∈ℝ^H_nW_n× D_n× C. Using a form of cross-attention, the attention mechanism then transforms X_C to generate the query (Q_C), key (K_C), and value (V_C) matrices, where Q_C originates from the intermediate features F_E^1, K from the deeper features F_E^2, and V from the deepest features F_E^3. The scaled dot-product attention is applied as Attention(Q_C, K_C, V_C) = SoftMax(Q_CK_C^T/√(d_k))V_C. The HFE module orchestrates the selective enhancement of local features and their integration with deeper, more globally aware features. By fusing shallow and deep features through the HFE module, with attention-focused features refined by point-wise convolutions for a more robust representation. §.§ Training Paradigm of MOON MOON is proposed and designed to tackle the intricacies of EV diagnosis by integrating multi-organ information. Within this scope, we deploy an ordinal regression loss, symbolized as ℒ_Ordinal, which is applied to the unified logits 𝐇_F. These logits consolidate data from the esophagus, liver, and spleen branches, in line with <cit.>. The ordinal regression loss is crucial for measuring the model's capability to accurately classify the different stages of EV, as it verifies the alignment between 𝐇_F and the ground truth labels 𝐘. 
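For concreteness, the attention-based fusion used in the ORI and HFE modules can be summarized in a short sketch. The PyTorch-style snippet below is an illustrative reconstruction from the equations above and is not the released implementation; the token layout, module granularity, and layer names (e.g., OrganFusionAttention, to_q, proj) are assumptions introduced here only for illustration.

```python
# Illustrative sketch (not the paper's code): scaled dot-product attention over
# pooled, concatenated organ features, mirroring Attention(Q,K,V) = softmax(QK^T/sqrt(d_k))V,
# followed by a projection playing the role of W_P in the text.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OrganFusionAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        c2 = 2 * channels                       # esophagus + auxiliary organ, concatenated
        self.to_q = nn.Linear(c2, c2, bias=False)
        self.to_k = nn.Linear(c2, c2, bias=False)
        self.to_v = nn.Linear(c2, c2, bias=False)
        self.proj = nn.Linear(c2, c2)           # analogous to W_P

    def forward(self, feat_eso: torch.Tensor, feat_aux: torch.Tensor) -> torch.Tensor:
        # feat_*: (B, C, H_n, W_n, D_n), already pooled to a common spatial size
        b, c, h, w, d = feat_eso.shape
        x = torch.cat([feat_eso, feat_aux], dim=1)      # (B, 2C, H_n, W_n, D_n)
        tokens = x.flatten(2).transpose(1, 2)           # (B, H_n*W_n*D_n, 2C)
        q, k, v = self.to_q(tokens), self.to_k(tokens), self.to_v(tokens)
        attn = F.softmax(q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5), dim=-1)
        out = self.proj(attn @ v)                       # (B, H_n*W_n*D_n, 2C)
        return out.transpose(1, 2).reshape(b, 2 * c, h, w, d)
```

The HFE cross-attention follows the same pattern, except that Q_C, K_C, and V_C are produced from the intermediate, deeper, and deepest esophageal features, respectively, after they have been pooled to a common size.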
Additionally, the framework integrates CCA loss ℒ_CCA, from multi-view learning, which is key to maximizing the correlation between two feature sets for improving cross-modal retrieval <cit.>. Unlike approaches that seek complementary features, MOON employs ℒ_CCA to capitalize on the interconnectedness of organ features by amplifying their correlation. Specifically, it targets the alignment of logits from the esophageal branch (𝐇_E) with those from the liver (𝐇_L) and spleen (𝐇_S) branches. By serving as a regularization mechanism, ℒ_CCA facilitates the convergence of characteristics derived from multiple organs, thus enhancing the model's capability to interpret interrelated organ information, as shown in Algorithm <ref>. The total loss ℒ_Overall for a given batch is a composite of the ℒ_Ordinal and ℒ_CCA, weighted appropriately, that not only accounts for accurate staging of EV but also reinforces the interconnectedness of the multi-organ representations: ℒ_Overall = λ·ℒ_Ordinal(𝐇_F, 𝐘) + (1-λ) · (ℒ_CCA(𝐇_E, 𝐇_L) + ℒ_CCA(𝐇_E, 𝐇_S)). § EXPERIMENTS Datasets. We collected a dataset in a retrospective manner, with NC-CT images stratified into three grades of EV severity as outlined by <cit.>: G1 for Mild (F1, RC-), G2 for Moderate (F1, RC+ or F2, RC-), G3 for Severe (F2, RC+, F3, RC+ or F3, RC-). This dataset comprises images from 1,010 patients diagnosed with varying stages of EV (G1: 331, G2: 252, G3: 427), acquired with Philips Ingenuity 4 and Siemens Sensation 64 CT scanners. All patients were confirmed to have EV through endoscopy, and all CT scans were performed within one month prior to the endoscopic examination. The collected images were processed using a standardized abdominal window setting. To rigorously evaluate the proposed method, a 5-fold cross-validation approach was employed. Additionally, an independent test cohort comprising 245 subjects was curated from a single center to substantiate the method's validity. Eligibility for data inclusion required being over 18 years old and not having received treatment before the CT scans. Implementation Details. Throughout the training process, several data augmentation techniques were applied, such as random rescaling, flipping, and cutout operations. For localizing the esophagus, liver, and spleen, we utilized the pre-trained nnUNet <cit.> configured for low-resolution image processing as our localization network. Once localized, the ROIs for the esophagus, liver, and spleen were resized to dimensions of 40×40×100, 256×196×36, and 152×196×24, respectively. To construct each branch of the MOON, we employed the Uniformer-S model <cit.>, which was pre-trained on the Kinetics-600 <cit.>. The MOON framework was trained employing the Adam <cit.> optimizer with an initial learning rate set to 10^-5. The training spanned across 100 epochs, with the learning rate being reduced by a factor of two every 20 epochs, following a piece-wise constant decay schedule. We set λ=0.8 for Eqn. (<ref>). The framework was implemented using Pytorch and run on Nvidia V100 GPUs. Main Experimental Results. Table <ref> consolidates experimental results demonstrating that for evaluating EV, single features derived from the esophagus alone surpass those from the liver or spleen in terms of accuracy. This underscores the critical importance of the esophagus in EV detection. As a result of portal hypertension, there is a marked enlargement of the spleen, potentially leading to changes in the internal textural characteristics of the organ. 
These alterations can be detected by neural networks. Hence, in terms of accuracy during 5-fold cross-validation, spleen features rank second, marginally surpassing those from the liver. Compared to single-organ input, integrating data from multiple organs achieved a better outcome with an increase of 0.003 to 0.039 in AUC for the evaluation of G3 on an independent test set. However, the performance improvement for assessing ≥G2 was not significant. This may be due to the fact that solely relying on post-fusion strategies does not adequately and effectively integrate features originating from multiple organs. The incorporation of the ORI and HFE modules into MOON has led to a performance that outperforms simple post-fusion strategies. In an independent test set, the AUC for G3 evaluation saw an increase of 0.006-0.040, and for the single-organ esophageal model, there was a boost of 0.038-0.061. Moreover, in the assessment of ≥G2, there was also a significant enhancement in AUC (0.016-0.034) relative to simple post-fusion strategies.In the fusion layer, we employ several strategies, with concatenation <cit.> emerging as the most effective, particularly in the challenging G3 on the independent test set. G3 grades show higher detection accuracy compared to ≥G2, likely due to G3's clearer clinical features. Additionally, FiLM <cit.> stands out in 5-fold validation, achieving the highest accuracy. These findings emphasize the significance of advanced post-fusion techniques in enhancing the accuracy of multi-organ EV, further augmented by ORI and HFE. Ablations Study. Information regarding the comparison of EV performance evaluation during 5-fold validation with the inclusion of the ORI module, HFE module, and CCA loss strategy is provided in Fig. <ref>. We discovered that the ORI module is more critical than the HFE module and CCA loss in the evaluation of EV, indicating that ORI facilitates comprehensive and effective interactive learning between the liver, spleen, and esophagus, capturing more valuable information pertinent to the assessment of EV. Additionally, other methods also hold significant value for the assessment of EV. Results of the performance on the independent test set are presented in Fig. <ref>. We observed results that were similar to those in the 5-fold validation set. Fig. <ref> highlights the role of the ORI and HFE modules in improving cross-organ interaction within the model. Using Grad-CAM <cit.>, we can pinpoint the specific areas within the model's field of focus. Correlating with endoscopic findings, we have annotated the variceal regions within the esophagus, which typically pose a detection challenge in NC-CT imaging. Without the incorporation of ORI and HFE modules for organ-specific information exchange, the model tends to overlook critical areas associated with varices, leading to potential misclassification of EV severity. In contrast, incorporating the ORI and HFE modules directs the MOON to accurately zone in on medically significant regions, and precisely targets the distended vessels in the esophagus, the hepatic portal vein, and the spleen, which are indicative of portal hypertension and EV. § CONCLUSION We introduce the Multi-Organ-Cohesion-Network, designed for the non-invasive evaluation of EV using NC-CT scans. By simulating the diagnostic processes of radiologists who assess multiple organs implicated in EV, we fill the diagnostic gap in EV evaluation with NC-CT. 
Our extensive analysis confirms that NC-CT imaging effectively evaluates EV, outperforming previous DCE-CT-based methods. In our future work, we plan to incorporate adjunctive clinical data, e.g., blood test markers, to refine the classification efficacy of EV. §.§.§ Acknowledgments. This work was supported by the National Natural Science Foundation of China (No. 82071885); The Innovation Talent Program in Science and Technologies for Young and Middle-aged Scientists of Shenyang (RC210265); General Program of the Liaoning Provincial Education Department (LJKMZ20221160); Liaoning Provincial Science and Technology Plan Joint Foundation. §.§.§ Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article.
http://arxiv.org/abs/2407.12541v1
20240717132452
A High-Speed Hardware Algorithm for Modulus Operation and its Application in Prime Number Calculation
[ "W. A. Susantha Wijesinghe" ]
cs.CR
[ "cs.CR", "cs.AR" ]
Wayamba University of Sri Lanka, Kuliyapitiya, 60200, Sri Lanka, susantha@wyb.ac.lk § ABSTRACT This paper presents a novel high-speed hardware algorithm for the modulus operation for FPGA implementation. The proposed algorithm uses only addition, subtraction, logical, and bit shift operations, avoiding the complexities and hardware costs associated with multiplication and division. It demonstrates consistent performance across operand sizes ranging from 32-bit to 2048-bit, addressing scalability challenges in cryptographic applications. Implemented in Verilog HDL and tested on a Xilinx Zynq-7000 family FPGA, the algorithm shows a predictable linear scaling of cycle count with bit length difference (BLD), described by the equation y=2x+2, where y represents the cycle count and x represents the BLD. The application of this algorithm in prime number calculation up to 500,000 shows its practical utility and performance advantages. Comprehensive evaluations reveal efficient resource utilization, robust timing performance, and effective power management, making it suitable for high-performance and resource-constrained platforms. The results indicate that the proposed algorithm significantly improves the efficiency of modular arithmetic operations, with potential implications for cryptographic protocols and secure computing. § INTRODUCTION The Residue Number System (RNS) continues to play a crucial role in hardware design for various applications, including cryptography, computer science, digital signal processing, error correction, and random number generation <cit.>. The efficiency of RNS implementations heavily relies on modular arithmetic operations. While extensive research has been conducted on modular addition, subtraction, multiplication, division, and exponentiation <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, there is a notable gap in the literature regarding efficient hardware implementations of the fundamental modulus operation itself, that is, computing A mod B. The modulus operation, often overlooked in favor of its composite operations, is in fact the cornerstone of all modular arithmetic. It is essential in cryptographic algorithms, hash functions, random number generation, and error correction codes <cit.>, <cit.>. Despite its importance, hardware implementations of the modulus operation have received comparatively little attention, with most systems relying on software implementations or treating it as a byproduct of division <cit.>. This paper addresses this critical gap by presenting a novel hardware algorithm specifically designed for the modulus operation. This focus on the fundamental operation of A mod B distinguishes our work from the majority of studies in the field and offers several key advantages. By optimizing the core modulus operation, our work potentially improves the efficiency of all derived modular operations. This cascading effect could lead to significant performance improvements in systems relying on modular arithmetic. The primary contributions of our research are as follows: * We present a novel algorithm for modulus calculation optimized for hardware implementation, addressing a fundamental operation often overlooked in hardware design literature. * The algorithm is implemented in Verilog HDL and tested on a Xilinx Zynq-7000 family FPGA, demonstrating its practical applicability in real-world hardware environments. * Our algorithm can be easily implemented using a finite state machine (FSM) on FPGA hardware. 
By adjusting parameter values, the design can be scaled to any operand size, eliminating the need for complex designs. * Our comprehensive evaluation reveals that the cycle count for the operation scales as y = 2x + 2, where y represents the cycle count and x represents the bit length difference of the operands. * As a practical demonstration, we apply our hardware modulus operation to find prime numbers up to 500,000, showcasing its potential in cryptographic applications. The rest of the paper is organized as follows: Section <ref> summarizes the relevant work in the field. Section <ref> describes the algorithms and details of the implementations. Results and Discussion are presented in Section <ref>, and the Conclusion is provided in Section <ref>. § RELATED WORK This section provides an overview of relevant research in hardware implementations of modular operations, highlighting the gap in the literature regarding efficient hardware designs for the fundamental modulus operation. Modular arithmetic operations, including addition, subtraction, multiplication, and exponentiation, have been the focus of numerous hardware implementation studies. Styanarayana et al. provided a comprehensive survey of hardware architectures for modular multiplication, emphasizing its importance in public-key cryptosystems <cit.>. Their work highlighted various reductions, which are widely used but often involve complex hardware designs. Hossain et al. presented hardware architectures for modular arithmetic operations over a prime field, optimized for elliptic curve cryptography (ECC) <cit.>. Their architectures focus on modular addition, subtraction, and multiplication, implemented separately to reduce circuit latency and area, achieving significant improvements in computational time and area utilization compared to related designs. Langhammer et al. explored an efficient implementation of modular multiplication on FPGAs using Barrett's algorithm <cit.>. Their method reduced the resource count and latency of modular multiplication by employing aggressive truncation strategies for multipliers and introducing a new reduction method, demonstrating efficiency for 1024-bit modular multipliers. However, the design introduced complexity in managing truncation errors and required significant DSP blocks. The growing importance of IoT and edge computing has sparked interest in efficient implementations of cryptographic operations on resource-constrained devices. Ibrahim et al. focused on the resource- and energy-efficient hardware implementation of the Montgomery modular multiplication algorithm over GF(2^m), targeting compact IoT edge devices <cit.>. Their design achieved significant savings in area, delay, and energy consumption but involved complex scheduling and projection functions. The modulus operation, despite its fundamental nature, has received relatively little attention in hardware design literature <cit.>. Sivakumar et al. explored VLSI architectures for computing the integer modulo operation X mod m for specific values of m <cit.>. Their designs were optimized for specific modulus values, limiting general applicability. Butler et al. presented a high-speed hardware implementation of the modulus operation optimized for FPGA deployment <cit.>. They introduced two versions of the algorithm to calculate x mod z: one with a fixed modulus and another where the modulus can vary. Their design demonstrated efficient pipelining and resource utilization but faced scalability issues for very large operand sizes. Will et al. 
presented an efficient algorithm for modular reduction using a variable-sized lookup table, supporting large operands and relying on simple processor instructions, making it hardware-friendly <cit.>. However, the authors did not provide a hardware implementation, and the complexity of managing the lookup tables could be a limitation. Alia et al. introduced a method for efficiently computing x mod m using an approximation and correction approach that avoids direct division <cit.>. Their VLSI structure leveraged fast binary multipliers to handle 32-bit numbers. However, the complexity and resource requirements increased significantly for larger operands, limiting scalability. While the aforementioned studies have made significant contributions to the field of hardware-based modular arithmetic, there remains a notable gap in the literature regarding efficient, scalable hardware implementations of the fundamental modulus operation (A mod B). The majority of existing research focuses on composite modular operations or overall cryptographic algorithms, often overlooking the potential performance gains that could be achieved by optimizing this core operation. § METHODOLOGY §.§ Hardware Algorithm for Modulus Operation The proposed algorithm computes the modulus operation with addition, subtraction, logical, and bit shift operations, without using general multiplication or division operations, which are expensive in hardware implementations. The pseudocode of the proposed algorithm is given in Figure <ref>, which calculates result = A mod B, where A is the dividend and B is the divisor. The algorithm computes A mod B using a state machine approach, assuming that A > B and B > 0. It starts by initializing the state to IDLE and setting the variables dividend, divisor, shift, and done to zero. The main loop continuously checks the state. If the state is IDLE and the start signal is received, the algorithm assigns A to the dividend and B to the divisor and transitions to the ALIGN state, where the divisor is left-shifted until it aligns with the dividend, incrementing the shift counter at each step. Once alignment is achieved, the algorithm changes to the SUBTRACT state, where it iteratively subtracts the divisor from the dividend if the dividend is greater than or equal to the divisor. The divisor is then right-shifted, and the shift counter is decremented. This process continues until the dividend is smaller than B, the shift counter is zero, or the shift counter reaches N. Finally, the algorithm changes to the FINISH state, where the result is set to the current dividend, and the state returns to IDLE, indicating the operation is complete. This state machine approach ensures efficient and systematic computation of the modulus operation, making it suitable for FPGA implementation. §.§.§ Hardware Implementation of the Modulus Operator The algorithm was implemented using Verilog Hardware Description Language (HDL). The Xilinx Vivado design tool was employed for simulation and synthesis. Initially, the algorithm was tested with a variety of input values to verify its accuracy in simulation. Following the initial verification, the Verilog implementation was modified to include the number of cycles consumed for each calculation. For further analysis, a separate Vivado project was created, instantiating the modulus algorithm to record data directly to the computer. Figure <ref> shows the block diagram of the system. 
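Before describing the test setup of Figure <ref>, a minimal, bit-accurate software model of the ALIGN/SUBTRACT procedure may be helpful. The Python sketch below follows the prose description above rather than the exact Verilog of Figure <ref>; the per-state cycle bookkeeping is an assumption added here only to illustrate why the cycle count grows roughly linearly with the bit length difference of the operands.

```python
# Software model of the shift-and-subtract modulus datapath (a sketch based on
# the description above; the exact Verilog FSM may differ in detail).
def hw_mod(a: int, b: int):
    """Return (a mod b, rough cycle estimate), assuming a > b > 0."""
    assert a > b > 0
    dividend, divisor, shift = a, b, 0
    cycles = 1                      # IDLE -> ALIGN transition
    # ALIGN: left-shift the divisor until it no longer fits under the dividend.
    while divisor <= dividend:
        divisor <<= 1
        shift += 1
        cycles += 1
    # SUBTRACT: walk the divisor back down, subtracting whenever it still fits.
    while shift > 0:
        divisor >>= 1
        shift -= 1
        if dividend >= divisor:
            dividend -= divisor
        cycles += 1
    cycles += 1                     # FINISH state
    return dividend, cycles


if __name__ == "__main__":
    for a, b in [(100, 7), (2**31 - 1, 65_537), (123_456_789, 1_000)]:
        r, cyc = hw_mod(a, b)
        assert r == a % b           # cross-check against Python's built-in operator
        print(f"{a} mod {b} = {r} (~{cyc} modelled cycles)")
```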
The integer values, the dividend and the divisor, are encoded on the computer and transferred to the First-In-First-Out (FIFO) buffer in the FPGA through an 8-bit UART interface. The FPGA system decodes these two integers and sends them to the modulus instance. With the start signal, the modulus block begins its operation, and a timer counter begins to count the clock cycles consumed by the modulus block. As soon as the modulus block completes its operation, the done signal is asserted, which stops the timer counter. Finally, the modulus result and the cycle counter value are encoded and sent to the computer through the FIFO and UART interface. A Python program running on the computer sends the inputs to and receives the outputs from the FPGA system. The recorded data included the dividend, divisor, result, and the number of cycles per calculation. This comprehensive dataset was subsequently analyzed to evaluate the performance and efficiency of the modulus algorithm. To measure the performance of the algorithm in FPGA hardware, various operand sizes were used. The algorithm can be extended to any bit length of operands by simply changing the integer value N in the algorithm in Figure <ref>. For this study, operand sizes of 32-bit, 64-bit, 128-bit, 256-bit, 1024-bit, and 2048-bit were considered. §.§ Application to Prime Number Calculation The best way to measure the performance of the hardware algorithm for the modulus operation is to use it in a practical application. In this study, we chose prime number calculation, as it involves repeated modulus operations and is computationally intensive. The application calculates all prime numbers up to a given integer. For example, if the given integer is 10, the application finds 2, 3, 5, and 7 as prime numbers. As the given integer increases, the number of computations required grows significantly. Therefore, this application provides an effective means to measure the performance of the new hardware algorithm for modulus operations. The algorithm in Figure <ref> was constructed to identify all prime numbers up to a given integer A, using only logical and addition operations. Although this prime number finding algorithm may not be the most optimized solution and there may be better alternatives, it is sufficient for measuring the performance of the new modulus finding algorithm. The algorithm begins by initializing several variables: n (set to 1) to iterate through numbers, pCount to count the number of primes found, sel_1 and sel_2 as selection flags, and an empty list primes to store the identified prime numbers. The algorithm enters a while loop that continues as long as n is less than A. When n equals 2, the algorithm identifies it as a prime number, appends it to the primes list, and increments pCount. For numbers greater than 2, the algorithm initializes i to 3 and enters a nested while loop that continues as long as i is less than or equal to n. Within the nested loop, if i equals n, the number n is identified as a prime, appended to the primes list, and pCount is incremented before breaking out of the loop. If i is not equal to n, the algorithm checks if sel_2 is 0, in which case it calculates s as the modulus of n by 2 and sets sel_2 to 1. Otherwise, it calculates s as the modulus of n by i. If s is not 0, i is incremented by 2, and the loop continues. After exiting the nested loop, n is incremented by 1, and the outer loop continues until n reaches A. The function finally returns a list containing the number of primes found and the list of prime numbers.
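Since both procedures above are specified only as pseudocode figures, a compact software reference model may help clarify the control flow. The following Python sketch is a hypothetical host-side model (the function names and the iteration counter are illustrative, not the Verilog design itself); it mirrors the ALIGN/SUBTRACT behaviour and the trial-division search described in this section, and its per-call iteration count roughly follows the measured y = 2x + 2 trend.

def mod_shift_subtract(a: int, b: int, n_bits: int = 2048):
    # Software model of the ALIGN/SUBTRACT state machine; returns (a mod b, loop iterations).
    assert 0 < b <= a < (1 << n_bits)
    dividend, divisor, shift, cycles = a, b, 0, 0
    # ALIGN: left-shift the divisor until it exceeds the dividend.
    while divisor <= dividend and shift < n_bits:
        divisor <<= 1
        shift += 1
        cycles += 1
    # SUBTRACT: walk the divisor back down, subtracting a shifted multiple of b whenever possible.
    while dividend >= b:
        if dividend >= divisor:
            dividend -= divisor
        if shift == 0:
            break
        divisor >>= 1
        shift -= 1
        cycles += 1
    return dividend, cycles

def primes_up_to(limit: int):
    # Trial-division prime search mirroring the flow described in the text, built on the model modulus.
    primes = []
    for n in range(2, limit + 1):
        if n == 2:
            primes.append(n)
            continue
        if mod_shift_subtract(n, 2)[0] == 0:      # even numbers above 2 are composite
            continue
        i, is_prime = 3, True
        while i < n:
            if mod_shift_subtract(n, i)[0] == 0:  # divisible by an odd candidate
                is_prime = False
                break
            i += 2
        if is_prime:
            primes.append(n)
    return primes

# Example: primes_up_to(10) returns [2, 3, 5, 7], matching the example given above.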
§.§.§ Hardware Implementation of Prime Number Calculation Figure <ref> shows the datapath diagram of the prime calculation system. The system employs four registers to store the input integer (A), two indexes (n and i), and the prime number (P). The signal p is asserted when n < A, and the signal r is asserted when n is equal to 2. Signals q and t are asserted when i < n and i is equal to n, respectively. The block mod represents the hardware implementation of the modulus algorithm. The signal s is asserted if the modulo operation n mod i equals zero. The signals start_mod and done_mod indicate the initiation and completion of the modulo calculation process, respectively. Figure <ref> shows the finite state machine (FSM) diagram that generates the control signals to manage the datapath. The process begins with the start signal, encompassing a total of 13 states. The control signals generated in each state are listed in Table <ref>. The datapath and the control FSM are implemented as separate Verilog modules and interfaced as depicted in Figure <ref>. Additionally, a counter module uses the start, prime_found, and done signals to determine the number of primes found and the total clock cycles utilized for the entire process. Both hardware implementations, the modulus operation and the prime calculation, were configured using the Xilinx Vivado 2018.3 design suite, targeting a Digilent Zybo Zynq-7000 SoC Trainer FPGA board containing a Xilinx XC7Z010-1CLG400C. Logic resource utilization, power analysis, and timing analysis were conducted using the Vivado design suite for the hardware modulus implementation and the prime number calculation separately. The Verilog description of the datapath included several submodules, such as registers and multiplexers, in addition to the modulus module. These modules were instantiated in the datapath module along with other logical statements, following the datapath diagram shown in Figure <ref>. The FSM was implemented using the two-always-block method in Verilog, with one block for the state register and the other for the next-state logic. The top-level Verilog module integrates the FSM module and the datapath module. The complete system was then simulated, and the outputs were compared with software calculations to verify accuracy. §.§ Measurements In this study, two main experiments were conducted to measure the performance of the proposed hardware algorithm for the modulus operation and its application in finding prime numbers up to a given integer value. To measure the accuracy and performance of the hardware algorithm for the modulus operation, operand sizes of 32-bit, 64-bit, 128-bit, 256-bit, 1024-bit, and 2048-bit were used. Uniform random integers were generated by a Python program, encoded, and sent to the FPGA board via a UART interface for processing. At the end of each operation, results were sent from the FPGA board to the computer and recorded for further analysis. For each operand size, 10,000 samples were recorded. In the second experiment, integer values of 10, 100, 1,000, 10,000, 100,000, 200,000, 300,000, 400,000, and 500,000 were considered. The prime numbers calculated up to each integer and the number of clock cycles used for each calculation were recorded. These data were analyzed to measure the performance. To compare the performance of the prime number calculations in FPGA hardware, software implementations of the prime number calculation algorithm (Figure <ref>) were considered.
In the software implementation of the algorithm, the built-in modulus operator (%) was used. Both the Python and C programming languages were used to run the same integer set on a Windows 11 PC with 8 GB of RAM and a 12th Gen Intel Core i5 processor (1300 MHz, 10 cores, 12 logical processors) that can operate at a maximum turbo frequency of 4.9 GHz. § RESULTS AND DISCUSSION §.§ Modulus Algorithm Implementation Our hardware-implemented modulus algorithm achieved 100% accuracy for all operand sizes when compared against software-based calculations. Figures <ref>, <ref>, and <ref> show the performance of the hardware algorithm for calculating the modulus operation with 32-bit, 64-bit, and 128-bit operands, respectively. The fitted equations of the three graphs are approximately equivalent. The slopes of all three equations are nearly identical, around 2. This indicates that the number of cycles required for the modulus operation increases linearly with the bit length difference (BLD) at a consistent rate. The near-constant slope shows that the algorithm's performance scales predictably regardless of the operand size, making it robust and reliable. The y-intercepts are relatively low for all three cases. These values represent the base cycle count when the BLD is zero, indicating minimal overhead. The low y-intercepts suggest that the algorithm starts with a small number of cycles and adds cycles linearly as the BLD increases, highlighting the efficiency of the algorithm. Therefore, we can say that the cycle count and the BLD have the relationship shown in Equation <ref>, where y represents the cycle count and x represents the BLD. y = 2x + 2 Figure <ref> illustrates the relationship between the cycle count and the BLD for an experimental set of data where the operand size is 2048 bits. The blue dots represent the experimental data points, while the red line represents the predicted cycle count based on Equation <ref>. The red line closely follows the distribution of the blue dots, indicating that the experimental data align well with the predicted cycle count. This highlights that the prediction equation, y = 2x + 2, is highly accurate. When the BLD is small, regardless of the operand size, the calculation completes in very few clock cycles. This is evident from the cluster of data points near the origin of the graph, where both the BLD and the cycle count are low. This indicates that the proposed hardware algorithm performs very efficiently when the BLD is small. The hardware implementation of the modulus algorithm on the Zybo FPGA board, as analyzed using the Xilinx Vivado design environment, demonstrates efficient resource utilization, power consumption, and timing performance. Table <ref> summarizes the resource utilization of the proposed hardware algorithm implemented on the Zybo FPGA board for different operand sizes (32-bit, 256-bit, 1024-bit, and 2048-bit). The key resources measured include Look-Up Tables (LUTs), Flip-Flops (FFs), and Global Clock Buffers (BUFGs). The Zybo FPGA has a total of 17,600 LUTs and 35,200 FFs. As the operand size increases, the utilization of LUTs and FFs increases. This is expected, as larger operand sizes require more complex logic. The utilization of BUFGs remains constant at 6% across all operand sizes. This indicates that the clock distribution requirements do not change with operand size. The timing performance metrics of the 2048-bit implementation on the Zybo FPGA board are summarized in Table <ref>.
The Worst Negative Slack (WNS) is 3.384 ns, which indicates the margin by which the design meets the setup time requirements. Other metrics also indicate that the design meets all user-specified timing constraints. Based on the WNS, the maximum operating frequency is about 295.5 MHz. The power performance metrics are detailed in Table <ref>. The total on-chip power consumption is 0.174 W, divided into dynamic and static components. The dynamic power is 0.083 W, which constitutes the power consumed due to the switching activity of the circuit. The static power of 0.091 W represents the power consumed due to leakage currents. The junction temperature is maintained at 27.0 °C, with a thermal margin of 58.0 °C, indicating effective thermal management. The 2048-bit implementation on the Zybo FPGA board demonstrates robust timing performance, with no violations in setup, hold, or pulse width requirements, and can operate at a maximum frequency of approximately 295.5 MHz. The power consumption is efficiently managed, with a balanced distribution between dynamic and static power components. These results validate the efficacy of the proposed hardware modulus algorithm for high-performance applications on FPGA platforms. §.§ Limitations of the Algorithm One limitation of our algorithm is that the cycle count required to complete modulus operations is not fixed, as it depends on the bit lengths of the dividend and the divisor. This variability can create challenges in determining the latency and throughput of designs. Another limitation is that the cycle count becomes significantly large when there is a considerable difference in the bit lengths of the operands. Addressing these limitations will be the focus of future improvements to the algorithm. §.§ Prime Number Calculation Figure <ref> illustrates the time taken to compute prime numbers up to a given integer A using three different approaches: a hardware implementation on the Zybo FPGA running at 125 MHz, and software implementations in Python and C on a high-performance Windows 11 PC. The y-axis is set to a logarithmic scale to highlight the differences in performance across several orders of magnitude, while the x-axis represents the integer value for which primes are calculated. From Figure <ref>, we can observe that the Python implementation shows significantly longer computation times than both the hardware and C implementations, while the C implementation performs better than the hardware implementation. Notably, the hardware algorithm on the Zybo FPGA currently operates at 125 MHz, but it has the potential to operate at a maximum frequency of 295.5 MHz. If the hardware were run at this higher frequency, its performance could potentially surpass that of the C implementation on the high-performance PC, particularly for larger integer values. §.§ Comparison Butler et al. <cit.> presented results for computing x mod z in two scenarios: with z fixed at 3 and with z as a variable. For both cases, using 256-bit numbers for x, their system achieved an operating frequency of 143.8 MHz. The FPGA resource utilization was similar in both scenarios, consuming approximately 55,001 and 55,255 3-input LUTs for fixed and variable z, respectively. In an earlier study, Alia et al. <cit.> proposed a method using 16 × 16 multipliers for calculating x mod m. Their implementation achieved a response time of 200 ns for 32-bit numbers, equivalent to an operating frequency of 5 MHz.
Our implementation of the modulus algorithm shows significant improvements over these previous works. For 2048-bit operand sizes, our design utilized only 45% of the available LUTs (17,600 total) on a Zybo FPGA board. Compared to Butler et al. <cit.>, our implementation uses substantially fewer FPGA resources while achieving a higher operating frequency of 295.5 MHz. This represents a notable advancement in both resource efficiency and performance for modular arithmetic operations on FPGAs. §.§ Discussion The linear relationship between the bit length difference (BLD) and the cycle count, as demonstrated in Figure <ref>, highlights the efficiency of the proposed hardware algorithm for the modulus operation. The algorithm scales predictably, maintaining efficiency even as operand sizes increase. This linear scalability is crucial for applications requiring high-speed computations with varying operand sizes. Figure <ref> provides additional validation of the algorithm's performance by comparing experimental data with the predicted cycle count. The close alignment of the experimental data points with the predicted cycle count based on the equation y = 2x + 2 highlights the accuracy and robustness of the prediction model. The experimental results confirm that the hardware algorithm performs efficiently and predictably across different BLD values, with cycle counts growing only linearly as the BLD increases. Analysis of FPGA resource utilization (Table <ref>) demonstrates that our algorithm uses hardware resources efficiently even for large operand sizes, with the 2048-bit implementation using only 7,920 (45% of the available) LUTs. Timing performance results (Table <ref>) confirm that this 2048-bit implementation meets all constraints without violations in setup, hold, or pulse width requirements, achieving a maximum frequency of 295.5 MHz. Furthermore, power consumption data (Table <ref>) indicate a balanced distribution between dynamic and static components, with the total on-chip power maintained at an efficient 0.174 W. The performance comparison of prime number calculations, illustrated in Figure <ref>, shows that the hardware implementation on the Zybo FPGA outperforms the Python implementation and approaches the performance of the C implementation on a high-performance PC. Notably, the hardware implementation operates at 125 MHz, with potential for higher performance at the maximum frequency of 295.5 MHz. This suggests that, with further optimization, the hardware implementation could surpass the software implementations, particularly for larger integer values. The results validate the efficacy of the proposed hardware algorithm for modulus operations and its application in prime number calculations. The linear scalability, efficient resource utilization, robust timing performance, and effective power management make the algorithm well-suited for a wide range of hardware applications. Future work could explore further optimization to increase the operating frequency and reduce power consumption, as well as adaptations for other computationally intensive tasks. § CONCLUSION This study introduces a novel hardware algorithm for the modulus operation, optimized for FPGA implementation. The algorithm's linear scalability and low overhead, demonstrated through extensive testing with operand sizes from 32-bit to 2048-bit, make it a robust solution for high-speed computations.
The implementation on the Zybo FPGA platform confirms efficient resource utilization, robust timing performance, and effective power management, underscoring its suitability for both high-performance and resource-constrained platforms. Its application in prime number calculation further validates its practical use, showcasing substantial performance improvements over traditional software implementations. The findings suggest that this hardware algorithm can notably accelerate cryptographic protocols and other applications reliant on modular arithmetic, offering a promising direction for further research and development in hardware-based arithmetic operations. 99 oke2021residue A. A. Oke, B. A. Nathaniel, B. F. Bukola, O. A. Ayopo, Residue number system based applications: A literature review., Annals. Computer Science Series 19 (2021). tynymbayev2019high S. Tynymbayev, R. Berdibayev, T. Omar, Y. Aitkhozhayeva, A. Shaikulova, S. Adilbekkyzy, High-speed devices for modular reduction with minimal hardware costs, Cogent Engineering 6 (2019) 1697555. parihar2022low A. Parihar, S. Nakhate, Low latency high throughput montgomery modular multiplier for rsa cryptosystem, Engineering Science and Technology, an International Journal 30 (2022) 101045. abd2021compact A. A. Abd-Elkader, M. Rashdan, E.-S. A. Hasaneen, H. F. Hamed, A compact fpga-based montgomery modular multiplier, Indonesian Journal of Electrical Engineering and Computer Science 21 (2021) 735–743. ding2019low J. Ding, S. Li, A low-latency and low-cost montgomery modular multiplier based on nlp multiplication, IEEE Transactions on Circuits and Systems II: Express Briefs 67 (2019) 1319–1323. islam2020area M. M. Islam, M. S. Hossain, M. Shahjalal, M. K. Hasan, Y. M. Jang, Area-time efficient hardware implementation of modular multiplication for elliptic curve cryptography, IEEE Access 8 (2020) 73898–73906. bos2020faster J. W. Bos, S. J. Friedberger, Faster modular arithmetic for isogeny-based crypto on embedded devices, Journal of Cryptographic Engineering 10 (2020) 97–109. sivakumar1995vlsi R. Sivakumar, N. Dimopoulos, VLSI architectures for computing x mod m, IEE Proceedings-Circuits, Devices and Systems 142 (1995) 313–320. muller2023area R. Müller, W. Meier, C. F. Wildfeuer, Area efficient modular reduction in hardware for arbitrary static moduli, arXiv preprint arXiv:2308.15079 (2023). will2016computing M. A. Will, R. K. Ko, Computing mod with a variable lookup table, in: Security in Computing and Communications: 4th International Symposium, SSCC 2016, Jaipur, India, September 21-24, 2016, Proceedings 4, Springer, 2016, pp. 3–17. Vollala2021 S. Vollala, N. Ramasubramanian, U. Tiwari, Review of Algorithmic Techniques for Improving the Performance of Modular Exponentiation, Springer International Publishing, 2021. hossain2019efficient M. R. Hossain, M. S. Hossain, Efficient fpga implementation of modular arithmetic for elliptic curve cryptography, in: 2019 International conference on electrical, computer and communication engineering (ECCE), IEEE, 2019, pp. 1–6. langhammer2021efficient M. Langhammer, B. Pasca, Efficient fpga modular multiplication implementation, in: The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2021, pp. 217–223. ibrahim2023word A. Ibrahim, F. Gebali, Word-based processor structure for montgomery modular multiplier suitable for compact iot edge devices, Mathematics 11 (2023) 328. butler2011fast J. T. Butler, T. 
Sasao, Fast hardware computation of x mod z, in: 2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and Phd Forum, IEEE, 2011, pp. 294–297. alia1990vlsi G. Alia, E. Martinelli, A vlsi structure for x (mod m) operation, Journal of VLSI signal processing systems for signal, image and video technology 1 (1990) 257–264.
http://arxiv.org/abs/2407.12431v1
20240717094015
GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval
[ "Han Zhou", "Wei Dong", "Xiaohong Liu", "Shuaicheng Liu", "Xiongkuo Min", "Guangtao Zhai", "Jun Chen" ]
cs.CV
[ "cs.CV" ]
GLARE H. Zhou et al. ^1 McMaster University, ^2 Shanghai Jiao Tong University, ^3 University of Electronic Science and Technology of China {zhouh115, dongw22, chenjun}@mcmaster.ca liushuaicheng@uestc.edu.cn {xiaohongliu, minxiongkuo, zhaiguangtao}@sjtu.edu.cn ^*Equal Contribution ^†Corresponding Authors GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval Han Zhou1,*0000-0001-7650-0755 Wei Dong1,*0000-0001-6109-5099 Xiaohong Liu2,†0000-0001-6377-4730 Shuaicheng Liu30000-0002-8815-5335 Xiongkuo Min20000-0001-5693-0416 Guangtao Zhai20000-0001-8165-9322 Jun Chen1,†0000-0002-8084-9332 July 22, 2024 ========================================================================================================================================================================================================================================== § ABSTRACT Most existing Low-light Image Enhancement (LLIE) methods either directly map Low-Light (LL) to Normal-Light (NL) images or use semantic or illumination maps as guides. However, the ill-posed nature of LLIE and the difficulty of semantic retrieval from impaired inputs limit these methods, especially in extremely low-light conditions. To address this issue, we present a new LLIE network via Generative LAtent feature based codebook REtrieval (GLARE), in which the codebook prior is derived from undegraded NL images using a Vector Quantization (VQ) strategy. More importantly, we develop a generative Invertible Latent Normalizing Flow (I-LNF) module to align the LL feature distribution to NL latent representations, guaranteeing the correct code retrieval in the codebook. In addition, a novel Adaptive Feature Transformation (AFT) module, featuring an adjustable function for users and comprising an Adaptive Mix-up Block (AMB) along with a dual-decoder architecture, is devised to further enhance fidelity while preserving the realistic details provided by codebook prior. Extensive experiments confirm the superior performance of GLARE on various benchmark datasets and real-world data. Its effectiveness as a preprocessing tool in low-light object detection tasks further validates GLARE for high-level vision applications. Code is released at <https://github.com/LowLevelAI/GLARE>. § INTRODUCTION Low light images often suffer from various degradations, including loss of details, reduced contrast, amplified sensor noise, and color distortion, making many downstream tasks challenging, such as object detection, segmentation, or tracking <cit.>. Consequently, LLIE has been extensively studied recently. Traditional techniques that leverage handcrafted priors and constraints <cit.> have made significant contributions to this field. However, these methods still exhibit limitations in terms of adaptability across diverse illumination scenarios <cit.>. With the rapid advancements in deep learning, extensive approaches have been employed to learn complex mappings from LL to NL images <cit.>. Although their performance surpasses that of traditional methods, once deployed in real-world scenarios with varying light conditions and significant noise, these methods tend to produce visually unsatisfactory results. Recent methods utilize semantic priors <cit.>, extracted feature <cit.>, and illumination maps <cit.> as the guidance to tackle the uncertainty and ambiguity of ill-posed LLIE problem. 
However, they still face challenges in extracting stable features from heavily degraded inputs, which are often overwhelmed by noise and obfuscated by low visibility. Besides, only utilizing the extracted information from degraded images to build the LL-NL transformation usually generate unsatisfactory results when testing on real-world scenarios. To generate realistic and appealing outcomes, one possible solution is to exploit the prior knowledge of natural normal-light images. Therefore, we propose to leverage a learned Vector-Quantized (VQ) codebook prior that captures the intrinsic features of high-quality and well-lit images, to guide the learning of LL-NL mapping. The discrete codebook is learned from noise-free images via VQGAN <cit.>. It is worth noting that by projecting degraded images onto this confined discrete prior space, the ambiguity inherent in LL-NL transformation is substantially mitigated, thereby ensuring the quality of enhanced images. Fig. <ref> illustrates the superiority of our method over current state-of-the-art (SOTA) methods, both on the benchmark dataset and real-world images. It is important to emphasize that the superior performance of GLARE over other SOTA methods is not only attributed to the integration of the codebook prior but also to our unique designs that address two main challenges associated with leveraging the codebook prior for LL-NL mapping. First, as shown in Fig. <ref> column 2, solely exploiting VQGAN and NL prior may lead to unpleasant results and the reason behind this lies in the evident discrepancy between the degraded LL features and NL features in latent space, as depicted in Fig. <ref>. Since the Nearest Neighbor (NN) is commonly utilized in looking up codebook <cit.>, this misalignment poses challenges in accurately retrieving VQ codes for LLIE task. Second, we notice that relying solely on matched codes for feature decoding <cit.> might compromise the fine details. Without integrating information from the original LL input, it could potentially introduce texture distortions. Taking into account these issues, we further introduce two specific modules into GLARE. First, to bridge the gap between degradation features and NL representations, we introduce a generative strategy to produce LL features based on Invertible Latent conditional Normalizing Flow (I-LNF), which enables better alignment with potentially matched NL features. Specifically, given LL-NL pairs, our I-LNF transforms complicated NL features into a simple distribution with the condition of LL features via the precise log-likelihood training strategy. As shown in Fig. <ref>, through this fully invertible network, our GLARE achieves a generative derivation of LL features which are closely aligned with NL representations and ensures accurate code assembly in codebook, thereby generating better enhancement results as depicted in Fig. <ref> column 3. Second, to improve the texture details, we propose an Adaptive Feature Transformation (AFT) module equipped with learnable coefficients to effectively control the ratio of encoder features introduced to the decoder. By flexibly merging the LL information into the decoding process, our model exhibits resilience against severe image degradation and one can freely adjust these coefficients according to their preference for real-world image enhancement. Besides, the AFT module adopts a dual-decoder strategy, which includes the fixed Normal-Light Decoder (NLD) and the trainable Multi-scale Fusion Decoder (MFD). 
The NLD specifically processes matched codes from the codebook, facilitating the generation of realistic and natural results. Meanwhile, the MFD handles the LL features produced by our I-LNF module, enhancing the final result with more refined details and texture, as demonstrated in Fig. <ref> column 4. Contributions The main contributions of this work are as follows: (i) We are the first to adopt the external NL codebook as a guidance to enhance low-light images naturally. (ii) We introduce GLARE, a novel LLIE enhancer leveraging latent normalizing flow to learn the LL feature distribution that aligns with NL features. (iii) A novel adaptive feature transformation module with an adjustable function for users is proposed to consolidate the fidelity while ensuring the naturalness in outputs. (iv) Extensive experiments indicate that our method significantly outperforms existing SOTA methods on 5 paired benchmarks and 4 real-world datasets in LLIE and our model is highly competitive while employed as a pre-processing method for high-level object detection task. § RELATED WORK §.§ Deep Learning based LLIE methods Similar to numerous approaches in other image restoration tasks <cit.>, end-to-end LLIE methods <cit.> have been proposed to directly map LL images to NL ones. Most of them mainly resort to the optimization of reconstruction error between the enhanced output and ground-truth to guide the network training. However, they often fail to preserve naturalness and restore intricate details effectively. These problems have given rise to the exploration of leveraging additional information or guidance to aid the enhancement process. For instance, some methods <cit.> achieve a simple training process for LLIE by estimating illumination maps. However, these approaches have a risk of amplifying noise and color deviations especially in real-world LL images <cit.>. Concurrently, other methods <cit.> argue that semantic understanding can mitigate regional degradation problems and attain pleasing visual appearance. Besides, several studies  <cit.> have indicated that utilizing edge extraction can direct the generation of realistic image details and mitigate the blurry effects to an extent. Nevertheless, these methods are highly contingent upon features extracted from degraded input, which could compromise the generalization capability and introduce artifacts. In contrast to existing methods, we propose an informative codebook that encapsulates a diverse spectrum of NL feature priors. This approach demonstrates resilience against various degradations, achieving more natural and appealing image enhancement. §.§ Vector-Quantized Codebook Learning Vector Quantized Variational AutoEncoders (VQ-VAE) is firstly proposed in  <cit.> to learn discrete representations. This approach effectively tackles the posterior collapse issue that is commonly encountered in VAE models. Then, VQVAE2 <cit.> explores the hierarchical VQ code for large-scale image generation. VQGAN <cit.> further enhances the perceptual quality by capturing a codebook of context-rich visual parts via an adversarial method. The discrete codebook has been successfully employed in image super-resolution <cit.>, text super-resolution <cit.>, and face restoration <cit.>. However, there remains potential for further improvement. One of the key research directions is how to precisely match the related correct code. 
Different from recent methods <cit.> that utilize a Transformer to predict code indices in the codebook, we argue that prediction-based strategies are inherently unable to address the significant differences between LL and NL features, resulting in suboptimal performance. To this end, we propose a generative approach that produces LL features aligned with their NL counterparts to successfully bridge the gap between LL and NL representations. § GLARE Besides introducing an external NL codebook to guide the Low-Light to Normal-Light (LL-NL) mapping, the novelty of our work also lies in the distinctive Invertible Latent Normalizing Flow (I-LNF) and Adaptive Feature Transformation (AFT) modules, which are designed to maximize the potential of the NL codebook prior and generate realistic results with high fidelity. The overview of our method is illustrated in Fig. <ref>, where the training of our method can be divided into three stages. In stage I, we pre-train a VQGAN on thousands of clear NL images to construct a comprehensive VQ codebook (Sec. <ref>). In stage II, the I-LNF module is trained on LL-NL pairs to achieve the distribution transformation between LL and NL features (Sec. <ref>). In the final stage, the AFT module, which contains the fixed NL Decoder (NLD), the Adaptive Mix-up Block (AMB), and the Multi-scale Fusion Decoder (MFD), is employed to enhance the fine-grained details while maintaining the naturalness afforded by the codebook (Sec. <ref>). Stage I: Normal-Light Codebook Learning To learn a universal and comprehensive codebook prior, we leverage a VQGAN with a structure similar to <cit.>. Specifically, an NL image I_nl∈ℝ^3× W× H is first encoded and reshaped into the latent representation z_nl∈ℝ^d× N, where W, H, d, and N = W/f × H/f represent the image width, image height, the dimension of latent features, and the total number of latent features; f is the downsampling factor of the NL Encoder E_nl. Each latent vector z^i_nl can be quantized to the corresponding code z^i_q using Nearest-Neighbor (NN) matching as: z^i_q = arg min_c_v∈𝒞‖ z^i_nl - c_v ‖_2, where 𝒞∈ℝ^d× N_c denotes the learnable codebook containing N_c discrete codes, with each element represented by c_v. Then the quantized code z_q is sent to the NLD (denoted as D_nl) to generate the reconstructed image I^rec_nl. To better illustrate the strengths and limitations of the codebook prior, we fine-tune the pre-trained VQGAN encoder on LL-NL pairs. Specifically, we achieve the enhanced results shown in Fig. <ref> column 2, and we utilize t-SNE <cit.> to visualize the LL features generated by the fine-tuned NL encoder in Fig. <ref>, which demonstrates the effectiveness of the external NL prior in LLIE. Besides, these visual results inspire us to design additional networks to align LL features with NL representations to further improve enhancement performance. Stage II: Generative Latent Feature Learning To fully exploit the potential of the external codebook prior, we design additional mechanisms from the perspective of reducing the disparity between LL and NL feature distributions. Specifically, we develop an invertible latent normalizing flow to achieve the transformation between LL and NL feature distributions, thereby facilitating more accurate code retrieval from the codebook. As shown in Fig. <ref>, two key components are optimized in stage II: the Conditional Encoder and the Invertible LNF. 1) The conditional encoder E_c, structurally identical to the NL encoder E_nl, takes an LL image I_ll as input and outputs the conditional feature c_ll.
2) The I-LNF module in this work is realized through an invertible network, represented as f_θ. This module utilizes c_ll as the condition to transform the complex NL feature distribution z_nl to a latent feature, namely v = f_θ(z_nl; c_ll). Stage II focuses on obtaining a simplified distribution p_v(v) in the space of v, such as a Gaussian distribution. Consequently, the conditional distribution p_z_nl|c_ll(z_nl|c_ll, θ) can be implicitly expressed as <cit.>: p_z_nl|c_ll (z_nl|c_ll, θ)= p_v (f_θ(z_nl; c_ll))|det ∂ f_θ/∂z_nl(z_nl; c_ll)|. Different from conventional normalizing flow applications <cit.>, we uniquely employ normalizing flow at the feature level rather than the image space, and our I-LNF module is designed without integrating any squeeze layers. Moreover, instead of using the standard Gaussian distribution as the prior of v, we propose to use the LL feature z_ll, generated by convolution layers based on c_ll, as the mean value of p_v (v). The conditional distribution in Eq. (<ref>) allows us to minimize the negative log-likelihood (NLL) in Eq. (<ref>) to train the conditional encoder and I-LNF module. Besides, through the fully invertible network f_θ, we derive clear features z^'_ll for LL inputs by sampling v∼ p_v(v) according to z^'_ll = f_θ^-1(v;c_ll). ℒ(θ; c_ll, z_nl ) = -log p_z_nl|c_ll(z_nl|c_ll, θ). After training, we evaluate our model on LOL dataset <cit.> to validate the effectiveness of the I-LNF. As shown in Fig. <ref>, the LL feature distribution generated by I-LNF is closely aligned with that of the NL, facilitating accurate code assembly in the codebook. Moreover, our network achieves satisfactory enhancement results (Fig. <ref>, column 3), indicating good LLIE performance after stage II. However, these results still exhibit considerable potential for improvement, especially in fidelity. For example, the color (row 1 in Fig. <ref>) or structural details (row 2 in Fig. <ref>) significantly diverge from the ground truth. This observation motivates and drives us to incorporate the input information into the decoding process to elevate the fidelity. Stage III: Adaptive Feature Transformation To further enhance the texture details and fidelity, we propose an adaptive feature transformation module that flexibly incorporates the feature F_c = {F^i_c} from the conditional encoder into the decoding process, where i denotes the resolution level. Specifically, in order to maintain the realistic output of NLD and avoid the influence of degraded LL features, we adopt a dual-decoder architecture and develop MFD inspired by <cit.>. Dual-decoder design enables us to leverage the deformable convolution (dconv) to warp NLD feature (F^i_nl) as Eq. <ref> and input the warped feature (F^i_d) into MFD to generate the final enhancement. F^i_d = dconv(F^i_nl, F^i_t), where i and F^i_t denote the resolution level and the target feature respectively. In this work, we design a novel feature fusion network that adaptively incorporates LL information into the warping operation and provides a potential adjustment choice for users when testing on real-world occasions. Adaptive Mix-up Block The MFD that aligns structurally with NLD aims to decode the generated LL feature z^'_ll and obtain intermediate representations as F_mf={F^i_mf}, where i indicates the resolution level. At each resolution level, the conditional encoder information F^i_c is added to the corresponding F^i_mf in order to bring more LL information. 
Different from typical feature fusion operations (i.e., skip connections <cit.>), our approach uses an adaptive mix-up strategy: F^i_a = β×σ(θ_i) ×F^i_c + (1 - β×σ(θ_i)) ×F^i_mf, where θ_i represents a learnable coefficient, σ denotes the sigmoid operator, and β is used for adjustment during real-world testing and is set to 1 during training. Unlike skip connections, these learnable parameters can be adjusted effectively during training, which contributes to the enhanced performance shown in Sec. <ref>. Flexible Adjustment Even though β in Eq. (<ref>) is set to 1 in the training phase, one can flexibly adjust β according to their preference when testing with real-world images. This design stems from the observation that many current methods struggle with real-world data, whose illumination often differs from that of the images used in the training phase. § EXPERIMENTS §.§ Datasets Normal Light Datasets To train the VQGAN in Stage I, we select images with normal lighting conditions from the DIV2K <cit.> and Flickr2K <cit.> datasets to develop the NL codebook prior. Low Light Datasets We conduct a thorough evaluation of our method using various paired datasets, including LOL <cit.>, LOL-v2-real <cit.>, LOL-v2-synthetic <cit.>, and a large-scale dataset, SDSD <cit.>. For LOL, LOL-v2-real, and LOL-v2-synthetic, we use 485, 689, and 900 pairs for training, and 15, 100, and 100 pairs for testing. The indoor subset of the SDSD dataset includes 62 training and 6 testing video pairs, while the outdoor subset contains 116 training and 10 testing pairs. Besides, we also conduct cross-dataset evaluation on several unpaired real-world datasets: MEF <cit.>, LIME <cit.>, DICM <cit.>, and NPE <cit.>. §.§ Implementation Details Experiment Settings We use the Adam optimizer (β_1=0.9, β_2=0.99) for all training stages. In Stage I, the number of training iterations is set to 640K with a batch size of 4, a fixed learning rate of 10^-4, and an image size of 256×256. In Stage II, we retain the batch size, change the image size to 320 × 320, and adopt a multi-stage learning rate schedule. Then, our GLARE is trained for 60K iterations on LOL and LOL-v2, and 225K iterations on SDSD. In Stage III, the batch size is halved, the initial learning rate is set to 5× 10^-5, and the training iterations are adjusted to 20K, 40K, and 80K for the LOL, LOL-v2, and SDSD datasets, respectively. Metrics For paired datasets, we utilize the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) <cit.> to assess pixel-level accuracy, and use the Learned Perceptual Image Patch Similarity (LPIPS) <cit.> for perceptual quality evaluation. As for real-world datasets, the Natural Image Quality Evaluator (NIQE) <cit.> is adopted. §.§ Performance on LLIE Quantitative Results As reported in Tab. <ref>, GLARE outperforms the current SOTA methods on five benchmarks. Our GLARE excels in PSNR, outperforming LL-SKF by over 0.55 dB and 0.74 dB on the LOL and LOL-v2-synthetic datasets. Furthermore, it surpasses Retformer with improvements of 0.33 dB and 1.01 dB on the SDSD-indoor and SDSD-outdoor datasets. Additionally, our LPIPS scores surpass the second-best performance by 20.9% and 12.6% in Tab. <ref>, indicating that the enhanced results from our network are more consistent with the human visual system. Tab. <ref> presents the cross-dataset evaluation on unpaired real-world datasets. We first train our GLARE on the LOL training split. Then, the model that performs best on the LOL test data is deployed on the four unpaired datasets.
As compared to the current SOTA methods, GLARE outperforms them on DICM and MEF and achieves the best performance on average. This demonstrates not only the superiority of our method in producing high-quality visual results but also its good generalization capability. Qualitative Results Visual comparisons of our GLARE against other methods are shown in Fig. <ref>, <ref>, and <ref>. Obviously, previous methods show inferior performance in noise suppression. Besides, they also tend to produce results with evident color distortion (see the enhanced results of KinD, LLFlow, Retformer, and LL-SKF in Fig. <ref>, and SNR and LLFlow in Fig. <ref>). Additionally, from the qualitative comparison, it can be seen that LLFlow, LL-SKF, and Retformer may induce detail deficiency in their enhanced results (see Fig. <ref> and Fig. <ref>), and KinD in Fig. <ref> and SNR in Fig. <ref> perform poorly visually due to the introduction of unnatural artifacts. In comparison, GLARE can effectively enhance poor visibility while reliably preserving color and texture details without artifacts. The visual comparisons on unpaired real-world datasets in Fig. <ref> also demonstrate the strengths of our method in terms of detail recovery and color preservation. §.§ Performance on Low-Light Object Detection Implementation Details To thoroughly evaluate our model, we also explore its potential as an effective preprocessing method for the object detection task on the ExDark dataset <cit.>. This dataset collects 7,363 low-light images, categorized into 12 classes and annotated with bounding boxes. We first employ different LLIE models trained on LOL to enhance the ExDark dataset, then carry out object detection on the enhanced images. More concretely, 5,896 images are used for training and the rest for evaluation. The adopted object detector is YOLO-v3 <cit.> pre-trained on the COCO dataset <cit.>. Quantitative Results We calculate the Average Precision (AP) and mean Average Precision (mAP) scores as our evaluation metrics. We compare our GLARE against current SOTAs in Tab. <ref>. As compared to KinD, MBLLEN, LLFlow, and LL-SKF, our GLARE achieves an improvement of at least 0.8 in terms of mAP. More importantly, our GLARE also outperforms IAT, which has demonstrated exceptional performance in low-light object detection. Qualitative Results The visual comparisons for low-light object detection are shown in Fig. <ref>. It can be seen that although each LLIE method enhances the visibility to some extent, GLARE achieves the best visual performance, thus providing the greatest benefit to the downstream detection task. Not surprisingly, the enhanced results from GLARE enable the YOLO-v3 detector to recognize more objects with higher confidence. §.§ Ablation Study To verify the effectiveness of each component of GLARE and justify the optimization objectives utilized for training, we conduct extensive ablation experiments on the LOL dataset. Specifically, we discuss the importance of the AFT module, the I-LNF module, and the NL codebook prior in this section. Effectiveness of Adaptive Feature Transformation By removing the AFT module from our GLARE, we obtain a Simple LLIE model denoted as SimGLARE. Basically, SimGLARE only utilizes the information from the NL codebook without feature transformation. The quantitative results of SimGLARE are shown in Tab. <ref>. SimGLARE is quite competitive on LLIE in terms of PSNR, SSIM, and LPIPS (compared with the SOTAs in Tab. <ref>).
However, with the proposed AFT module, our GLARE achieves further improvements on both quantitative metrics and visual results (as shown in Fig. <ref>). In addition, various loss functions are examined in Tab. <ref>, showing that our choice of losses in Stage III is reasonable. We also design two variants, named Variant 1 and Variant 2, to shed light on the importance of proposed dual-decoder architecture and AMB, respectively. Specifically, Variant 1 directly incorporates LL feature to NLD using AMB while Variant 2 adopts parallel decoder strategy but replaces AMB with skip connection operation <cit.>. By comparing (4) with (2) and (3) in Tab. <ref>, we observe that PSNR and SSIM are negatively correlated with LPIPS, which verifies the effectiveness of our AMB and dual-decoder design. Effectiveness of Invertible Latent Normalizing Flow To show the importance of I-LNF and the adopted NLL loss, we implement several adaptations based on SimGLARE. (1) We train SimGLARE using ℒ_1 loss to validate the effectiveness of NLL loss adopted in our work. (2) We replace the I-LNF module by leveraging a Transformer model structurally similar to <cit.> to directly predict the code index in the codebook. (3) We remove the I-LNF module in (1) and train the conditional encoder on LL-NL pairs. The quantitative results are reported in Tab. <ref>. The superiority of NLL loss can be verified by comparing (4) and (1). Moreover, a comparison between the images in the second and third columns of Fig. <ref> also reveals that the use of NLL loss, as opposed to ℒ_1 loss, results in sharper contours and edges. Besides, as compared to the Transformer-based code prediction, our proposed I-LNF module can help generate LL features that are better aligned with NL ones, thus ensuring more accurate code matching and achieving superior performance. More importantly, with the I-LNF module removed from SimGLARE (ℒ_1), we notice a significant decrease in PSNR (1.16 dB ↓) and SSIM (0.017 ↓), which demonstrates the effectiveness of our proposed I-LNF module. Effectiveness of Codebook Prior To investigate the importance of the NL codebook prior, based on SimGLARE (ℒ_1), we remove the codebook and the quantization process in VQGAN. The resulting model is denoted as Variant 3 and is trained using a strategy similar to that for SimGLARE (ℒ_1). Similarly, we remove the codebook in the model reported in row 3 of Tab. <ref> to learn the LL-NL mapping. Quantitative results are reported in Tab. <ref>. The absence of a codebook prior notably impacts performance, as evidenced by an average decrease of 2.0 dB in PSNR and a 0.024 drop in SSIM. This highlights the critical importance of the codebook prior in our method. § CONCLUSION A novel method named GLARE is proposed for LLIE. In view of the uncertainty and ambiguity caused by ill-posed nature of LLIE, we leverage the normal light codebook, which is obtained from clear and well-exposed images using VQGAN, to guide the LL-NL mapping. To better exploit the potential of codebook prior, the invertible latent normalizing flow is adopted to generate LL features aligned with NL latent representations to maximize the probability that code vectors are correctly matched in codebook. Finally, the AFT module with dual-decoder architecture is introduced to flexibly supply information into the decoding process, which further improves the fidelity of enhanced results while maintaining the perceptual quality. 
Extensive experiments demonstrate that our GLARE significantly outperforms the current SOTA methods on 5 paired datasets and 4 real-world datasets. The superior performance on low-light object detection makes our GLARE an effective preprocessing tool for high-level vision tasks. GLARE: LLIE via Generative Latent Feature based Codebook Retrieval H. Zhou et al. ^1 McMaster University, ^2 Shanghai Jiao Tong University, ^3 University of Electronic Science and Technology of China {zhouh115, dongw22, chenjun}@mcmaster.ca liushuaicheng@uestc.edu.cn {xiaohongliu, minxiongkuo, zhaiguangtao}@sjtu.edu.cn ^*Equal Contribution ^†Corresponding Authors Supplementary Material for GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval Han Zhou1,*0000-0001-7650-0755 Wei Dong1,*0000-0001-6109-5099 Xiaohong Liu2,†0000-0001-6377-4730 Shuaicheng Liu30000-0002-8815-5335 Xiongkuo Min20000-0001-5693-0416 Guangtao Zhai20000-0001-8165-9322 Jun Chen1,†0000-0002-8084-9332 July 22, 2024 ========================================================================================================================================================================================================================================= In this supplementary material, we provide a detailed description of the GLARE architecture and training objectives and discuss the potential limitations of GLARE. § NETWORK DETAILS OF GLARE Stage I: Normal-Light Codebook Learning Architecture Detail The NL encoder in VQGAN comprises 2 downsampling layers, and the NL decoder has 2 upsampling operations. There are 2 residual blocks <cit.> at each resolution level and 2 attention blocks at the resolution of [W/4, H/4], where W and H represent the image width and height, respectively. The learnable codebook contains 1000 discrete vectors, and the dimension of each vector is 3. Training Objective To establish a comprehensive codebook, the reconstruction loss ℒ_rec, the adversarial loss ℒ_adv, the codebook loss ℒ_code, and the perceptual loss ℒ_per <cit.> are introduced for training: ℒ_rec = ‖ I_nl - I^rec_nl‖_1, ℒ_adv = - log(D( I^rec_nl)), ℒ_code = ‖ sg( z_nl) - z_q ‖^2_2 + β‖ sg( z_q) - z_nl‖^2_2, ℒ_per = ‖ϕ(I_nl) - ϕ(I^rec_nl) ‖^2_2, where D is the discriminator <cit.> and ϕ denotes the VGG19 <cit.> feature extractor. Besides, sg(·) denotes the stop-gradient operation used in the straight-through estimator, which facilitates backpropagation through the non-differentiable quantization process, and β is a pre-defined hyper-parameter. Moreover, we also leverage the multi-scale structure similarity loss ℒ_ssim <cit.> and the latent semantic loss ℒ_sem <cit.>, due to their outstanding performance in reconstructing image details. Therefore, the complete training loss utilized to learn the codebook is: ℒ_total = ℒ_rec + λ_adv·ℒ_adv + λ_code·ℒ_code + λ_per·ℒ_per + λ_ssim·ℒ_ssim +λ_sem·ℒ_sem, where λ_adv = 0.0005, λ_code = 1, λ_per = 0.01, λ_ssim=0.2, and λ_sem= 0.1 are the coefficients for the corresponding loss functions. Stage II: Generative Latent Feature Learning Architecture Details Tab. <ref> outlines the architecture details of the conditional encoder E_c and the I-LNF module f_θ. Basically, the conditional encoder contains an NL encoder and a simple convolution, and our I-LNF module consists of two flow layers, each sharing the same architecture as LLFlow <cit.>. Different from LLFlow, we remove all squeeze layers in our GLARE, considering that our I-LNF operates at the feature level rather than in the image space.
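To make the conditional flow concrete, the sketch below shows one conditional affine-coupling step and the NLL objective detailed in the following paragraph, written in PyTorch. It is only an illustrative simplification: the hidden width, the tanh clamp on the log-scale, and the unit-variance Gaussian prior centred at z_ll are assumptions made for the sketch, and the actual I-LNF uses LLFlow-style layers rather than this generic coupling block.

import math
import torch
import torch.nn as nn

class CondAffineCoupling(nn.Module):
    # One conditional affine-coupling step: half of the channels are rescaled and shifted
    # by a small network that sees the other half together with the LL condition c_ll.
    def __init__(self, channels: int, cond_channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),  # predicts log-scale and shift
        )

    def forward(self, z, cond):
        za, zb = z.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([za, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                       # keep scales in a stable range
        vb = zb * torch.exp(log_s) + t
        logdet = log_s.flatten(1).sum(dim=1)            # per-sample log|det Jacobian|
        return torch.cat([za, vb], dim=1), logdet

    def inverse(self, v, cond):
        va, vb = v.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([va, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        zb = (vb - t) * torch.exp(-log_s)
        return torch.cat([va, zb], dim=1)

def nll_loss(v, z_ll, logdet):
    # Negative log-likelihood with a unit-variance Gaussian prior centred at the LL feature z_ll.
    d = v[0].numel()
    log_pv = -0.5 * ((v - z_ll) ** 2).flatten(1).sum(dim=1) - 0.5 * d * math.log(2 * math.pi)
    return -(log_pv + logdet).mean()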
Training Objective We optimize the conditional encoder and I-LNF by minimizing the Negative Log-Likelihood (NLL) described in Eq. 3 of the manuscript. We provide more information about the training objective of stage II here. We first divide the invertible f_θ into a sequence of N invertible layers {f^1_θ, f^2_θ, ..., f^N_θ}. Then we utilize the following equation to demonstrate the feature flow for each layer: h^i = f^i_θ(h^i-1; c_ll), where i∈ [1, N], h^i-1 and h^i represent the input and output of f^i_θ respectively. Specifically, h^0 = z_nl and h^N = v. Therefore, the detailed NLL loss can be expressed as: ℒ (θ; c_ll, z_nl ) = -log p_z_nl|c_ll(z_nl|c_ll, θ) = -log p_v(v) - ∑_i=1^Nlog|det ∂ f^i_θ/∂h^i-1(h^i-1; c_ll)|. Stage III: Adaptive Feature Transformation Training Objective To train the AFT module proposed in stage III, we adopt multiple loss functions for optimization: ℒ_1 loss, multi-scale structure similarity loss ℒ_ssim <cit.>, and perceptual loss ℒ_per <cit.>. Therefore, the total loss function of Stage III can be formulated as: ℒ_total = ℒ_1 + λ_ssim·ℒ_ssim + λ_per·ℒ_per, where λ_ssim=0.2 and λ_per=0.01. § LIMITATION While extensive experiments demonstrate that our GLARE significantly outperforms the current SOTA methods in LLIE and is verified to be an effective pre-processing method in low-light object detection task, our GLARE still presents several avenues for further exploration and refinement. While the VQGAN in our GLARE provides a highly important NL codebook, it comes at the cost of the efficiency of GLARE. Therefore, the efficiency of our GLARE requires further improvement for practical applications. Besides, only one generative method (normalizing flow) is studied in our work. To better demonstrate the generalizable capability of our GLARE, the task of employing other generative methods in our GLARE, such as diffusion model <cit.>, requires further exploration.
http://arxiv.org/abs/2407.12467v1
20240717103728
BSC-UPC at EmoSPeech-IberLEF2024: Attention Pooling for Emotion Recognition
[ "Marc Casals-Salvador", "Federico Costa", "Miquel India", "Javier Hernando" ]
eess.AS
[ "eess.AS" ]
2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). IberLEF 2024, September 2024, Valladolid, Spain [1] 1]Marc Casals-Salvador[ orcid=0009-0003-9099-3826, email=marc.casals@bsc.es, url=https://github.com/marccasals98, ] [1] [1]Barcelona Supercomputing Center, Eusebi Güell Square 1-3, 08034 Barcelona, Spain 2]Federico Costa[ orcid=0000-0002-1389-3595, email=federico.costa@upc.edu, ] 2]Miquel India[ orcid=0000-0002-3107-3662, email=miquel.angel.india@upc.edu, ] [2]Universitat Politècnica de Catalunya, Jordi Girona 31, 08034 Barcelona, Spain 1,2]Javier Hernando[ orcid=0000-0002-1730-8154, email=javier.hernando@upc.edu, ] [1]Corresponding author. § ABSTRACT The domain of speech emotion recognition (SER) has persistently been a frontier within the landscape of machine learning. It is an active field that has been revolutionized in the last few decades and whose implementations are remarkable in multiple applications that could affect daily life. Consequently, the Iberian Languages Evaluation Forum (IberLEF) of 2024 held a competitive challenge to leverage the SER results with a Spanish corpus. This paper presents the approach followed with the goal of participating in this competition. The main architecture consists of different pre-trained speech and text models to extract features from both modalities, utilizing an attention pooling mechanism. The proposed system has achieved the first position in the challenge with an 86.69% in Macro F1-Score. Speech Emotion Recognition Deep Learning Attention Transformers BSC-UPC at EmoSPeech-IberLEF2024: Attention Pooling for Emotion Recognition [ July 22, 2024 =========================================================================== § INTRODUCTION Emotions are undoubtedly fundamental parts of our idiosyncrasy. They play an important role in interpersonal relationships and decision-making and generally take part in the evolution and consciousness of any mental process <cit.>. Moreover, there is empirical evidence that emotions immensely influence human health <cit.>, which creates the necessity to monitor them. Affective computing is of great interest in medical health fields <cit.>. Consequently, developing a system capable of discerning various emotions is highly valuable. Researchers have attempted to predict emotions using machine learning approaches, but their effectiveness depends on the quality and quantity of available data. In the field of Natural Language Processing (NLP), numerous emotion recognition models rely exclusively on text-based features. Since the creation of Transformers <cit.>, multiple approaches <cit.> seek to use pre-trained Transformer encoders, such as BERT <cit.> as text feature extractors. These encoders are pre-trained models using self-supervised learning techniques on extensive datasets. This pre-training enables the models to project words in a latent space with a rich semantic representation, from which classifiers can then be employed to predict the corresponding emotion classes. On the other hand, speech is crucial for expressing emotions. Elements such as pitch, prominence, and phrasing contribute generously to providing emotion information. As explained in <cit.>, the human brain is capable of recognizing emotions pan-culturally and independently of the language they are expressed with. Following this principle, the researchers have proposed different architectures to process and extract information from speech signals. 
In the realm of Speech Emotion Recognition (SER), Machine Learning (ML) and Deep Learning (DL) models often utilize hand-crafted features such as Mel-Frequency Cepstral Coefficients (MFCC) <cit.>. Nevertheless, recent advances in Deep Learning allow cutting-edge architectures to combine text and speech to provide better results. Multimodal Emotion Recognition (MER) is a complex task, since it requires models to learn complex patterns in the data. For this reason, the use of pre-trained text and audio models significantly improves the embedding representation of their features, allowing them to be combined in the latent space <cit.>.

In summary, exceptional progress has been made in this field, yet there is a clear disparity in the amount of work carried out in Spanish. Developing these models using Spanish data is crucial for several reasons. On the one hand, Spanish is one of the languages with the most native speakers worldwide. On the other hand, it is necessary to leverage the engineering opportunities of Spanish-speaking countries, opening the door to developing new technologies that could satisfy the needs of their populations. However, one of the difficulties this presents is the lack of labelled data in Spanish needed to train any supervised learning approach.

In order to encourage the creation of an emotion recognition model trained with Spanish data, the Iberian Languages Evaluation Forum (IberLEF) of 2024 created the EmoSPeech 2024 challenge. This competition evaluates the participants' Macro F1-Score in two tasks: emotion recognition from text alone and from speech and text. The training corpus is the Spanish MEACorpus 2023 <cit.>. This dataset contains 13.16 h of speech, and its transparent methodology distinguishes it from other datasets for the same task. The speech samples are collected from YouTube videos and are labelled using the categorical taxonomy proposed by P. Ekman <cit.>, which includes emotions such as surprise, disgust, anger, joy, sadness, fear, and neutral expressions. Nevertheless, the annotators could not find any sample containing speech that expressed surprise, so this class is not represented in the dataset.

This paper endeavours to advance research on Spanish models by applying cutting-edge technologies from the current state of the art to the MEACorpus 2023. The current state of the art relies on high-capacity Transformer-based pre-trained models that, leveraging the enormous datasets they are trained on, are capable of describing complex patterns in both text and speech. Following this lead, the system combines a pre-trained speech model, XLSR-wav2vec 2.0 <cit.>, trained with 436,000 hours of audio, and a RoBERTa text model fine-tuned in Spanish <cit.>. Both models are used as feature extractors for speech and text, respectively, and they output vectors with the relevant information of the utterances. The two vectors are then concatenated into a single vector that contains information about both text and speech. Subsequently, this vector is reduced to a lower-dimensional representation by means of an attention pooling mechanism. Finally, dense layers are used to project this reduced vector and determine the classification of the utterance. This approach reached a Macro F1-Score of 86.69%, achieving the first position in the multimodal task of the EmoSPeech 2024 challenge.
§ CHALLENGE
EmoSPeech 2024 <cit.> is a challenge proposed by the Iberian Languages Evaluation Forum (IberLEF) of 2024 <cit.>, a Spanish workshop hosted by the Sociedad Española para el Procesamiento del Lenguaje Natural (SEPLN). This event is dedicated to promoting models for the languages of the Iberian Peninsula, encompassing national languages such as Spanish, Portuguese, Catalan, Basque, or Galician.

To develop the competition, the organisers proposed a dataset named Spanish MEACorpus 2023 <cit.>. The dataset, comprising 13.16 h of speech divided into 5,129 audio files, was meticulously labelled by the dataset's research team. As explained in their paper, the procedure to extract the audio files is as follows: the authors selected YouTube videos according to their topic and extracted audio segments taking into account the noise of the recordings and the silence gaps. Once this was done, the audio files were classified using the emotion taxonomy developed by P. Ekman <cit.>, which comprises the basic emotions of disgust, anger, joy, sadness, fear, surprise, and a neutral class. Nevertheless, despite the effort, no speech segment containing the surprise emotion could be found.

As is common in the field, the dataset exhibits an unequal representation of the emotion classes. Figure <ref> (left) shows that neutrality and disgust are the most prevalent emotions, while fear is notably scarce. Another important aspect is the length of the audio files, which can directly affect the performance of the network by providing more contextual information about the speech. Therefore, audio duration is an essential characteristic of any speech dataset and must be considered a quality metric. A histogram of the audio file durations is shown in Figure <ref> (right). The mean duration is 9.24 s.

Lastly, another fundamental consideration is the variability of the recordings. The fact that this competition uses third-party audio files makes it more difficult to control other parameters, such as noise or the loudness of the audio. Some videos are recorded outdoors and are more likely to have background noise or exhibit poorer recording quality, while others are studio-recorded and of higher quality. Furthermore, the dataset's paper details that 46% of the speech segments are attributed to female voices, with the remainder belonging to males. The paper also states that the text transcriptions were extracted directly from the raw audio files using Whisper <cit.>, followed by a manual revision by the researchers.

§ ARCHITECTURE
The proposed system is a multimodal model that combines text and speech and, trained on the Spanish MEACorpus 2023 dataset <cit.>, classifies the emotion of each utterance. Figure <ref> shows a global representation of the system. Both text and speech are fed to the network and processed with self-supervised learning (SSL) models that work as feature extractors. Then, the model concatenates these features and pools them into one single vector. Finally, classification is performed by processing this vector with multilayer perceptrons that serve as a classifier.

§.§ Speech
It is standard practice to apply normalisation techniques to preprocess the data before its use in a machine learning model. In the case of this system, the overall mean and standard deviation of all the audio files were calculated and used to normalise the values in the dataset.
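As a rough illustration of this preprocessing step, the following sketch computes global waveform statistics over the training set and applies them as a normalisation. The use of torchaudio for loading and the function names are assumptions made for this example, not the exact implementation used in the system.

```python
import torchaudio


def compute_global_stats(wav_paths):
    """Accumulate the overall mean and standard deviation of all training waveforms."""
    total, total_sq, count = 0.0, 0.0, 0
    for path in wav_paths:
        waveform, _ = torchaudio.load(path)       # (channels, samples)
        total += waveform.sum().item()
        total_sq += (waveform ** 2).sum().item()
        count += waveform.numel()
    mean = total / count
    std = (total_sq / count - mean ** 2) ** 0.5
    return mean, std


def normalise(waveform, mean, std):
    """Apply the training-set statistics to any waveform (train, validation or test)."""
    return (waveform - mean) / std
```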
Such global normalisation is widespread in the Deep Learning literature for speech, since it makes backpropagation more efficient and reduces the impact of outliers. The normalisation was also applied to the validation and test sets, using the mean and standard deviation values obtained from the training set. It is important to mention that, during the training stage, the samples were randomly cropped with a window of 5.5 seconds; shorter utterances were enlarged using repetition padding.

Given the limited size of the dataset, it became necessary to use data augmentation to mitigate overfitting to the training set. The specific techniques utilized included speed perturbation, which alters the speed of the audio; reverberation, which simulates a reverberant environment; and background noise, which adds ambient sounds to the audio. The augmentation was implemented in PyTorch with the following scheme: first, the transformations used to create synthetic data are defined; then, for each sample, a probability decides whether the model sees the original data or the data obtained after applying these transformations. This process occurs in every epoch, so data augmentation is applied to different samples in each epoch. This streaming data augmentation guarantees that the model is exposed to a wide diversity of data without physically increasing the dataset size. As is common in the field, it is not desirable to perturb the data at inference time, so in the validation and test sets the probability of applying these transformations is zero.

Recent advancements in the field of Deep Learning have highlighted the importance of using Transformer-based pre-trained models. It has been proven that their ability to adapt to changes in the domain makes them suitable for extracting features from audio files in any dataset. In this work, the following models were experimented with:

* WavLM <cit.>: This cutting-edge model is trained with 80,000 hours of speech and encompasses datasets such as Libri-Light <cit.>, GigaSpeech <cit.> and VoxPopuli <cit.>. Two versions of this model were tried, the Large and the Base. The output vector has dimension 768 in the case of the Base version and 1,024 in the case of the Large.

* XLSR-wav2vec 2.0 <cit.>: This model is based on wav2vec 2.0 <cit.> and is trained with the datasets Common Voice <cit.>, BABEL <cit.> and Multilingual LibriSpeech <cit.>, which makes a total of 436,000 hours of audio in 128 languages. This model outputs a vector of dimension 1,024.

* HuBERT <cit.>: This model is trained with 60,000 hours of Libri-Light <cit.>. The output vector is of dimension 1,024.

§.§ Text
The text domain was the first to employ pre-trained large language models (LLMs) for different tasks. Since the creation of the BERT model <cit.>, different approaches have emerged in the state of the art, following the same idea with some variations. In this work, the following SSL text models were experimented with:

* BERT <cit.>: The Large uncased version was used, with an output dimension of 1,024.

* XLM-RoBERTa Spanish <cit.>: A pre-trained model based on XLM-RoBERTa <cit.> and trained with the Spanish Unannotated Corpora <cit.>. This model outputs a vector with 1,024 dimensions.

* BETO <cit.>: BETO is one of the first pre-trained models produced with Spanish data. It follows the same structure as the BERT Base and consequently outputs a hidden vector of 768 dimensions. It is trained using Wikipedia data and all of the sources of the OPUS Project <cit.>.
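To illustrate how such pre-trained encoders can be used as frozen feature extractors for both modalities, the sketch below relies on the HuggingFace transformers library. The checkpoint names are illustrative stand-ins for the model families listed above, not necessarily the exact checkpoints used in the system.

```python
import torch
from transformers import AutoModel, AutoTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Illustrative checkpoints: an XLSR-wav2vec 2.0 model for speech and a Spanish RoBERTa for text.
SPEECH_CKPT = "facebook/wav2vec2-xls-r-300m"
TEXT_CKPT = "PlanTL-GOB-ES/roberta-large-bne"

speech_processor = Wav2Vec2FeatureExtractor.from_pretrained(SPEECH_CKPT)
speech_encoder = Wav2Vec2Model.from_pretrained(SPEECH_CKPT)
tokenizer = AutoTokenizer.from_pretrained(TEXT_CKPT)
text_encoder = AutoModel.from_pretrained(TEXT_CKPT)


@torch.no_grad()
def extract_hidden_states(waveform, transcription, sample_rate=16_000):
    """Run both encoders and concatenate their hidden-state sequences.

    waveform: 1-D float array with the (normalised) audio samples.
    transcription: the text of the utterance.
    """
    speech_inputs = speech_processor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    speech_hidden = speech_encoder(**speech_inputs).last_hidden_state  # (1, T_speech, 1024)

    text_inputs = tokenizer(transcription, return_tensors="pt")
    text_hidden = text_encoder(**text_inputs).last_hidden_state        # (1, T_text, 1024)

    # Both encoders output 1,024-dimensional vectors, so the two sequences
    # can be concatenated along the time axis before pooling.
    return torch.cat([speech_hidden, text_hidden], dim=1)
```

Since both encoders produce sequences of the same dimensionality, the concatenated sequence can be fed directly to the pooling mechanism described later in the paper.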
In addition to the models listed above, a version of BETO fine-tuned for emotion recognition was also used in the experiments.

§.§ Classifier
Following the attention pooling process, the resultant vector undergoes further processing via a classifier module. This module includes a layer that adjusts the vector's dimension to fit the desired hidden layer width, a stack of hidden layers, and an output layer whose dimension equals the number of classes being considered. The architecture consists of several linear layers, each followed by dropout, layer normalization, and a Gaussian Error Linear Unit (GELU) activation function <cit.>. The Softmax activation function is used in the output layer to select the predicted class.

§ ATTENTION POOLING
Different approaches have emerged for integrating information extracted from pre-trained models in the field of multimodal learning. Attention mechanisms, particularly Multi-Head Attention (MHA), have been popular in recent years for combining text and speech utterances. This study uses an alternative attention pooling mechanism, employed in works such as <cit.>, to reduce the dimensionality of the hidden-state sequence created by concatenating the outputs of the two pre-trained models.

Considering the embedding dimension E and a batch size of one, we define the hidden states as the sequence of extracted features {h_t ∈ ℝ^E | t = 1, ..., T}. Then, for each hidden state h_t we calculate its weight as described in Equation (<ref>):

w_t = exp(h_t^⊤ u / √(E)) / ∑_i=1^T exp(h_i^⊤ u / √(E))

where u ∈ ℝ^E is a trainable parameter initialized with the Xavier initialization <cit.> and w_t is the weight associated with the hidden vector h_t. Then, the pooled representation of the hidden states is calculated using Equation (<ref>):

c = ∑_t=1^T w_t h_t

The vector c encapsulates the relevant information of the features extracted by the text and speech systems. This approach is computationally more efficient than the general attention mechanism, where keys, queries, and values are computed. This characteristic is especially convenient for this system given the scarce data provided. Figure <ref> graphically illustrates the functioning of the attention pooling mechanism.

§ EXPERIMENTAL SETUP
PyTorch requires all samples in a batch to have the same dimensions. Therefore, during training, the audio files were cropped using a window of 5.5 seconds, which was the optimal value found. At inference, the whole audio waveform was used. As detailed in Section <ref>, the audio waveforms were normalized using the mean and standard deviation of the training set; the values extracted were -33.62 and 56.15, respectively. These same values are applied to the other sets when doing inference. Data augmentation techniques were applied with a probability that was varied according to the capacity of the model; the optimal value was found to be 0.3.

A batch size of 16 samples was selected, as it provided an optimal trade-off between minimizing the duration of each epoch and avoiding GPU memory exhaustion. To further accelerate computation, data parallelization across two GPUs was utilized; in particular, two NVIDIA GeForce RTX 2080 Ti GPUs were employed. The optimizer selected was AdamW <cit.>, with a learning rate of 0.00005, which was decayed by 10% after five epochs without improvements in the validation F1-Score. The dropout rate, set at 0.1, was adjusted according to the network's complexity, and the number of epochs also depended on the model's capacity.
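For concreteness, the following minimal PyTorch sketch implements the attention pooling defined above together with a classifier head of the kind described in the Architecture section. The hidden sizes, number of layers, and class names are illustrative assumptions rather than the tuned configuration of the submitted system.

```python
import math
import torch
import torch.nn as nn


class AttentionPooling(nn.Module):
    """Pools a sequence of hidden states into a single vector with a trainable query u."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.embed_dim = embed_dim
        self.u = nn.Parameter(torch.empty(embed_dim, 1))
        nn.init.xavier_uniform_(self.u)  # Xavier initialization, as described in the text

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, T, E)
        scores = hidden_states @ self.u / math.sqrt(self.embed_dim)  # (batch, T, 1)
        weights = torch.softmax(scores, dim=1)                       # the w_t weights
        return (weights * hidden_states).sum(dim=1)                  # the pooled vector c


class Classifier(nn.Module):
    """Projection plus a stack of hidden layers with dropout, LayerNorm and GELU."""

    def __init__(self, embed_dim: int, hidden_dim: int, num_classes: int,
                 num_hidden: int = 2, dropout: float = 0.1):
        super().__init__()
        layers = [nn.Linear(embed_dim, hidden_dim), nn.Dropout(dropout),
                  nn.LayerNorm(hidden_dim), nn.GELU()]
        for _ in range(num_hidden):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.Dropout(dropout),
                       nn.LayerNorm(hidden_dim), nn.GELU()]
        layers.append(nn.Linear(hidden_dim, num_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # Returns logits; class probabilities are obtained with a softmax at prediction time.
        return self.net(pooled)
```

In the full model, the concatenated hidden states of the two encoders are passed through AttentionPooling and the resulting vector through Classifier, on whose output the training loss is computed.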
Although model parameters were only stored when the validation F1-Score improved, early stopping was necessary due to the noisy and variable learning curves: this variability could lead to an overfitting model being saved based on local improvements in the validation F1-Score. Each experiment lasted one to two days, depending on the configuration.

To improve our position on the leaderboard, we made the Macro F1-Score our top priority, since it was the metric used to evaluate the participants' submissions. The F1-Score combines precision and recall into a single number by applying their harmonic mean, and the Macro F1-Score is the average of the F1-Scores of each class. This metric treats all classes equally, regardless of their amount of data, making it a fair measure of overall performance. Given that the metric of interest is the Macro F1-Score, it is imperative to mitigate the class imbalance present in this dataset. A wide variety of loss functions try to palliate this disparity; among these possibilities, the loss criterion finally chosen was the weighted cross-entropy loss.

After the hyperparameter search, two classical machine learning techniques were employed with the aim of improving the results. The first approach involved applying thresholds to modify the final decision over the logits; however, this did not enhance the results. The second strategy leveraged the variability of different models by using a 3-model ensemble. Hard voting was chosen as the ensemble technique, where the most voted prediction among the three models is selected. In the event of a tie, the prediction from the model with the highest F1-Score on the validation set is chosen (a minimal sketch of this voting rule is shown below). The code of the project is available here: <https://github.com/marccasals98/BSC-UPC_EmoSPeech>

§ RESULTS
In the initial stages of the competition, it was necessary to evaluate various pre-trained self-supervised models to determine the most suitable ones for the data. At that point, only the training corpus was available; therefore, a validation partition was created to evaluate the performance of the different self-supervised models. Table <ref> presents the best results obtained with the different text and speech feature extractors. The best configuration used RoBERTa for text and XLSR-wav2vec 2.0 for audio, achieving an F1-Score of 89.73% on the validation set. This superior performance is likely because RoBERTa was trained on Spanish data, making it more effective for this domain than other models trained in English. In addition, XLSR-wav2vec 2.0 was trained using 436,000 hours of audio in 128 languages, including Spanish, which possibly contributed to the improvement of this score.

It is worth noting that, despite BETO being a Spanish version of BERT, the results with this encoder were poor. This could be attributed to WavLM not being trained with as much Spanish data as XLSR-wav2vec 2.0, or to BETO's vector dimension being 768 instead of 1,024, resulting in fewer features captured by the model. It is also remarkable that, in this initial stage, some experiments were conducted with other architectures involving more parameters. For instance, attempts were made to combine the features extracted from the encoder models using multi-head attention (MHA) with one and two heads. These experiments yielded unsatisfactory results, with F1-Scores of 84.4% and 82.2%, and the models exhibited significant overfitting.
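As referenced above, the hard-voting rule used for the final ensemble can be sketched as follows; the label names and scores in the example are purely illustrative.

```python
from collections import Counter


def hard_voting(predictions, val_f1_scores):
    """Return the majority label; break ties with the model that has the best validation F1-Score.

    predictions: one predicted label per model, in the same order as val_f1_scores.
    val_f1_scores: validation Macro F1-Score of each model.
    """
    counts = Counter(predictions)
    top_label, top_count = counts.most_common(1)[0]
    if list(counts.values()).count(top_count) > 1:  # tie: no label is strictly the most voted
        best_model = max(range(len(predictions)), key=lambda i: val_f1_scores[i])
        return predictions[best_model]
    return top_label


# Illustrative example: the three models disagree, so the prediction of the
# model with the highest validation F1-Score (the first one) is kept.
print(hard_voting(["anger", "disgust", "joy"], [0.89, 0.87, 0.85]))  # -> "anger"
```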
Given the unsatisfactory results obtained with multi-head attention, those lines of experimentation were discontinued, and all efforts were concentrated on hyperparameter tuning with RoBERTa, XLSR-wav2vec 2.0, and the attention pooling mechanism. The three models that obtained the best scores achieved 86.20%, 85.96%, and 82.43%. Their confusion matrices over the test set are displayed in Figure <ref>. As can be seen, anger is the most difficult emotion to classify, usually being confused with disgust. It is remarkable that, despite being very scarce in the dataset, fear is well separable from the rest of the emotional spectrum.

Section <ref> remarked that thresholding techniques failed to outperform the models and were therefore discarded. Consequently, the only non-trainable approach used to improve the F1-Score was model ensembling. Initially, models with different feature extractors were combined to leverage the diversity of their features and create a robust system; however, this approach proved not to be effective. Instead, the three best-performing models on the validation set were ensembled, improving the F1-Score to 86.69%. Table <ref> compares the results of these approaches with the challenge baseline.

§ CONCLUSIONS AND FUTURE WORK
This study presented a multimodal model for the emotion recognition challenge EmoSPeech 2024 within the IberLEF 2024 framework, aimed at recognizing emotions from speech and text inputs. The architecture comprises two pre-trained models, one dedicated to speech and the other to text, which extract feature vectors that the model concatenates into a unified hidden representation. On the audio side, different experiments were conducted with WavLM, XLSR-wav2vec 2.0, and HuBERT. For the textual component, the exploration involved BERT, XLM-RoBERTa for Spanish, BETO, and its fine-tuned version for emotion. The best performance was achieved by jointly combining RoBERTa and XLSR-wav2vec 2.0.

After the model concatenates the text and speech feature vectors, it performs a dimensionality reduction via a reduced attention pooling. This mechanism, with fewer parameters than its standard counterpart, facilitates the seamless integration of text and audio while mitigating the risk of overfitting to the training set. Subsequently, a stack of dense layers processes the pooled vector, using its compressed information to extract the class prediction. Additionally, to optimize performance and maximize the F1-Score in the competition, model ensembling was adopted, employing hard voting on the top three models. In summary, the system achieved an F1-Score of 86.69%, an absolute increment of 33.61% over the baseline, securing the first position in the challenge.

After the conclusion of this competition, exploring new paradigms for Speech Emotion Recognition (SER) remains a captivating line of research. One of these lines is improving the efficiency of speech feature extractors. In the WavLM paper, the authors claim that, in general, most self-supervised learning (SSL) models focus primarily on Automatic Speech Recognition (ASR) tasks. However, by training an SSL model to jointly learn masked speech prediction and denoising in the pre-training stage, the model's capabilities extend beyond ASR, outperforming other SSL models in fields such as SER.
This statement could seem contradictory because, in Section <ref>, it is shown that XLSR-wav2vec 2.0 outperforms WavLM Large. Nevertheless, this outcome is likely due to XLSR-wav2vec 2.0 being trained with a much more extensive multilingual dataset. If WavLM were trained with a comparable volume of data, it could potentially outperform XLSR-wav2vec 2.0, leveraging its joint learning of masked speech prediction and denoising to achieve superior performance in various tasks, including Speech Emotion Recognition.

§ ACKNOWLEDGMENTS
This work has been promoted and financed by the Government of Catalonia through the Aina project, as well as by the Spanish Ministerio de Ciencia e Innovación through the AdaVoice project (PID2019-107579RB-I00).

§ ONLINE RESOURCES